Emojis make it harder to prosecute online harassment


A study by the Oxford Internet Institute concludes that the use of emojis complicates the detection and prosecution of online harassment. Some offensive posts may go undetected while, conversely, innocuous comments may be mistakenly flagged as aggressive.

Emojis or emoticons add emotional context to a message by representing emotions graphically.

The problem arises when algorithms are tasked with interpreting the content of messages and deciding whether a post is offensive, aggressive or abusive, yet cannot properly interpret the context added by an emoji: a graphic representation of emotion with its own rules of use, created precisely to help contextualize text-based communication.

The study cites the offensive messages directed at several England footballers following the team's defeat in the Euro 2020 final, in which the text was accompanied by monkey emojis.

The problem is that most systems currently used to monitor content automatically are trained to recognize text strings, but not emojis. An analysis by the British broadcaster Sky News found that racist posts on Instagram containing emojis were three times less likely to be blocked by the platform than those containing only text.
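To make that concrete, here is a minimal, hypothetical Python sketch (not taken from the study) showing how a conventional word-level tokenizer, of the kind many text classifiers rely on, silently drops emojis, so a message loaded with monkey emojis looks identical to the model as the same message without them.

```python
import re

# Many moderation classifiers are trained on word-level features.
# A typical word-token pattern (e.g. scikit-learn's default,
# r"(?u)\b\w\w+\b") only matches letters and digits, so emojis
# vanish before the model ever sees them.
WORD_PATTERN = re.compile(r"(?u)\b\w\w+\b")

plain = "great game today"
with_emoji = "great game today \U0001F412\U0001F412\U0001F412"  # monkey emojis

print(WORD_PATTERN.findall(plain))       # ['great', 'game', 'today']
print(WORD_PATTERN.findall(with_emoji))  # ['great', 'game', 'today'] <- emojis dropped
```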

To try to counteract this, researchers at the Oxford Internet Institute have built a database of about 4,000 phrases, most of which include emojis used offensively, with the purpose of training an artificial intelligence model able to distinguish offensive messages from innocuous ones. Human annotators were initially involved to guide the model on the subtle details that make a message offensive.
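As a rough illustration of the general approach, and not the Oxford team's actual pipeline, the hypothetical sketch below trains a tiny scikit-learn classifier on a handful of made-up labeled phrases, using a tokenizer that keeps emojis as features instead of discarding them.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def emoji_aware_tokenizer(text):
    # Keep ordinary words AND standalone symbols such as emojis,
    # instead of silently discarding everything that is not \w+.
    return re.findall(r"\w+|[^\w\s]", text)

# Hypothetical toy examples standing in for the ~4,000 human-annotated
# phrases described in the article; 1 = offensive, 0 = innocuous.
train_texts = [
    "great goal tonight \U0001F389",
    "you should never wear that shirt again \U0001F412",
    "what a save",
    "players like you do not belong here \U0001F412\U0001F412",
]
train_labels = [0, 1, 0, 1]

model = make_pipeline(
    TfidfVectorizer(tokenizer=emoji_aware_tokenizer, token_pattern=None),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

# The emoji itself now carries weight as a feature, so the same words
# with and without it can be scored differently.
print(model.predict(["nice match \U0001F412", "nice match"]))
```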

The model takes into account issues related to race, gender, sexual identity, sexuality, religion and disability, analyzing the different ways in which emojis can be offensive. It achieved a 30% improvement in correctly identifying whether content is offensive, and an 80% improvement in detecting offensiveness tied specifically to the use of emojis, although the researchers themselves acknowledge that the technology is still far from fully effective.

Among other issues, the evolution of language itself complicates the task, and false positives can end up silencing the voice of some groups. This ties in with the results of research carried out at the University of São Paulo, where an algorithm rated Twitter accounts belonging to drag queens as apparently more toxic than accounts belonging to white supremacists. The reason is simple: some communities use in-group language that may seem offensive to readers outside the group, even though within that community it is not taken as offensive at all.

Hannah Rose Kirk, who led the research project at the Oxford Internet Institute, explains that the team has decided to share the data from its study so that other researchers can improve their own models, noting that social platforms hold similar datasets that they jealously guard and that, if shared, could help improve the accurate detection of hateful and harassing messages.
