News

Doctoral thesis: Technology that's better at detecting hate speech may also increase security risks

Advances in language technology facilitate not only moderation but also censorship, says Tommi Gröndahl, who is defending his dissertation on language technology methods.
Neural networks process language effectively, but not necessarily the same way as humans. Picture: Matti Ahlgren, Aalto University.

Doctoral candidate Tommi Gröndahl will defend his doctoral dissertation on language technologies at Aalto University on 23 August. Language technology is one of the most common forms of artificial intelligence, and Gröndahl tested its methods for detecting lies and hate speech.

"Artificial intelligence is needed when massive amounts of data must be processed and targeted material screened out. However, language technologies have been black boxes, meaning they have been used without a real understanding of how they process language," Gröndahl says.

Gröndahl was particularly interested in the consequences of the methods from the perspective of information security and privacy protection.

"When language technology methods improve, the possibilities for both content moderation and censorship improve with them. The techniques are the same regardless of what consequences the classification of a text has for its author," says Gröndahl.

In the study, Gröndahl compared deep learning neural networks with more traditional rule-based methods. In the rule-based methods, a person creates a rule in the system, which is then utilised in automation. Major differences were found between the methods.

"Complex neural network models contain vast numbers of parameters, which makes it difficult to know which feature of the text each parameter represents, or what happens to the output when one of them is changed. When a text requires detailed structural analysis, the most commonly used neural network models are not always reliable. For example, a neural network model may fail to distinguish between a sentence and its negation," explains Gröndahl.
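The structural blindness Gröndahl describes can be illustrated with a toy example (this sketch is not from the dissertation; it only shows why an unordered "bag of words" representation cannot capture negation or word order):

```python
from collections import Counter

def bag_of_words(sentence: str) -> Counter:
    # Represent a sentence as an unordered multiset of lowercase words,
    # discarding all structural information such as word order.
    return Counter(sentence.lower().split())

# Same words, opposite meanings: a bag-of-words model sees no difference.
a = "the film was good not bad"
b = "the film was bad not good"

print(bag_of_words(a) == bag_of_words(b))  # → True
```

Any classifier built only on such a representation must assign both sentences the same label, whatever they actually mean.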

Rule-based methods, in turn, are poorly suited to screening vast amounts of data. It is therefore essential to combine the best of both approaches.

Methods easily deceived

The classification of text is typically based on fairly simple features, such as the presence of specific words. In lie detection, for example, the methods do not actually detect lies but simple surface features of the data, and complex machine learning models latch onto such features much as simpler models do. This makes the models vulnerable to attacks: hate speech detectors can be fooled simply by removing the spaces between words or by adding the word 'love' to the text.
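Both attacks mentioned above can be reproduced against a deliberately simple word-lookup detector (a hypothetical toy model, not the systems studied in the dissertation; the lexicon and scores are invented for illustration):

```python
def toxicity_score(text: str, lexicon: dict) -> float:
    # Average per-word score from a hand-made lexicon (toy model).
    words = text.lower().split()
    return sum(lexicon.get(w, 0.0) for w in words) / max(len(words), 1)

# Hypothetical lexicon: positive scores flag words as toxic,
# negative scores mark words treated as benign.
LEXICON = {"idiots": 1.0, "stupid": 1.0, "love": -1.0}

original  = "you are all stupid idiots"
no_spaces = "you are all stupididiots"                  # merged words no longer match
with_love = "you are all stupid idiots love love love"  # benign words dilute the score

print(toxicity_score(original, LEXICON))   # → 0.4  (flagged)
print(toxicity_score(no_spaces, LEXICON))  # → 0.0  (merged token is unknown)
print(toxicity_score(with_love, LEXICON))  # → -0.125 (score pushed below zero)
```

The point of the study is that much more complex detectors fail in the same way, because they too ultimately key on simple lexical features.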

Gröndahl found that the assigned task and training data affect how well an AI-based classifier succeeds in the task.

"Because machine learning models are massive, it is important that the classifier receives enough training data. It can also easily be caught out by undesirable features if the training data is distorted, i.e. unrepresentative in some way. Complex machine learning models do not know when a word or a character should be taken into account and when not; they simply act on the basis of the training data they are given," says Gröndahl.

In addition to hate speech and lie detection, Gröndahl also studied the possibilities of language technology for automatic alteration of writing style and automatic text editing. The objective may be, for instance, to modify the text so that the author cannot be identified.

For example, when writing style was altered automatically, the neural network produced a conversion resembling machine translation, which could repeat the same text or change its meaning. With rule-based methods, it is possible to control in more detail how a given word, such as a synonym or a negation, affects the conversion of a sentence.
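A rule-based style change of the kind described can be sketched in a few lines (a minimal illustration, not the method used in the dissertation; the synonym table is invented, and a real system would use a curated thesaurus):

```python
# Hypothetical synonym table mapping informal words to more formal ones.
SYNONYMS = {
    "big": "large",
    "begin": "commence",
    "buy": "purchase",
}

def obfuscate_style(sentence: str) -> str:
    # Replace each flagged word with its synonym; unlisted words pass
    # through unchanged, so meaning is preserved and every substitution
    # can be traced back to an explicit rule.
    return " ".join(SYNONYMS.get(w.lower(), w) for w in sentence.split())

print(obfuscate_style("we begin to buy a big house"))
# → "we commence to purchase a large house"
```

Unlike a neural conversion, this kind of rule can never repeat text or invert its meaning, because each rule's effect is visible and bounded.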

Tommi Gröndahl

Unusual path

Becoming a doctoral student in security and privacy usually requires prior studies in the topic, typically as part of an undergraduate degree in computer science. Gröndahl's journey was atypical: as a cognitive scientist at the University of Helsinki, his previous studies focused on language research. He ended up in Professor N. Asokan's research group after a summer internship helping to run user studies.

“Tommi Gröndahl had no background in security and privacy when he started, and yet he has had a very impressive record, publishing in excellent security and privacy venues and getting very good media coverage for one of his papers,” Asokan says.

Gröndahl has continued with his previous discipline as well, doing another PhD in the cognitive science of translation at the University of Helsinki.

Further information:

Picture: How Google Perspective rates a comment otherwise deemed toxic after some inserted typos and a little love.

Hate speech-detecting AIs are fools for ‘love’

State-of-the-art detectors that screen out online hate speech can be easily duped by humans, shows new study
