"According to the police, the only way to reduce the number of online abuse cases is to stop them before they happen. Punishing the abusers doesn't help; by then, the abuse has already occurred," Tokerud explains.
Preventing abuse is further complicated by the fact that both children and abusers use linguistic tricks to bypass automatic filters. For example, words can be replaced with emojis, text can be written vertically, or letters can be swapped for other characters. Amanda's language models are trained to recognize these tricks, and eventually Aiba will also train the algorithms on audio and visual content. The more tools Amanda gets, the more conversations can be intercepted.
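To give a sense of what detecting such tricks involves: a filter can map disguised characters back to plain letters before analyzing the text. The sketch below is purely illustrative — Aiba's actual models and substitution rules are not public, and the table here is a hypothetical example.

```python
# Illustrative sketch only: a hypothetical table of character swaps
# of the kind abusers use to disguise words from keyword filters.
SUBSTITUTIONS = {
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s",
}

def normalize(text: str) -> str:
    """Map common character substitutions back to plain letters."""
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in text.lower())

print(normalize("s3nd m3 a ph0t0"))  # -> "send me a photo"
```

Real systems go far beyond a fixed lookup table — machine-learned models can pick up obfuscation patterns, like the vertical text and emoji substitutions described above, that a simple rule set would miss.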
Tokerud believes this is essentially an arms race:
"Abusers share methods and procedures with each other, causing the problem to spiral out of control. We estimate that between one and one and a half million abusers are working day and night to groom and exploit children and young people."
Eight years of research
The team behind Aiba grew out of the information security research community at NTNU in Gjøvik, with significant contributions from Patrick Bours, a professor of behavioural biometrics. Bours is from the Netherlands, and it was through him that the story of Amanda Todd became central to the project.
Tokerud met Bours in 2018, and together they began to examine the issue. They quickly discovered the enormous scale of the abuse and, most importantly, the lack of tools to address it.