What is “Algospeak” in social networks? [Video]


The major social networks use algorithms powered by Artificial Intelligence to detect, in users’ posts and comments, sensitive content or content that may violate their terms of use.

There are controversial topics that cannot be discussed on certain social networks. For example, Facebook does not allow weapons advertising, and sexual content involving minors is banned on virtually every platform.

Given the enormous volume of content posted on social networks every day, it would be impossible for human moderators to review all of it. It would also mean that those employees had access to everything users post.

That is why content moderation algorithms are used: they perform an initial screening and raise an alert when illegal or prohibited content is found. Some social networks then rely on human moderators to confirm what their algorithms have removed, while others leave the entire task to these Artificial Intelligences and only review their decisions when, for example, a user files a complaint about content they believe was removed unfairly.

Be that as it may, that first automatic filter determines what can and cannot be talked about on social networks. Users, however, try to get around this first sieve and “bypass the algorithms” with something that has come to be called “algospeak” (from “algorithm” and “speak”), a term that is trending on social networks and being used more and more. We tell you what “algospeak” is all about in this video.

In short, “algospeak” is a secret code created by social network users so they can talk about a forbidden or inappropriate topic without the algorithms noticing. It usually consists of code words, emojis with hidden meanings, or deliberate misspellings, all intended to keep the moderation Artificial Intelligence from detecting content that would not be allowed on the platform.
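As a purely hypothetical illustration (real platform classifiers are machine-learning models, far more sophisticated than this), a naive keyword filter shows why a simple misspelling or code word can slip through:

```python
# Hypothetical sketch of a naive keyword-based moderation filter.
# The blocklist is illustrative, not any platform's real list.

BANNED_TERMS = {"porn", "weapons for sale"}

def naive_filter(post: str) -> bool:
    """Return True if the post contains a banned term and should be flagged."""
    text = post.lower()
    return any(term in text for term in BANNED_TERMS)

print(naive_filter("Porn links here"))      # exact match: flagged
print(naive_filter("Check out this pron"))  # algospeak misspelling: not flagged
```

Because the filter only matches the exact strings it knows, the deliberate typo “pron” sails past it, which is precisely the gap algospeak exploits.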

Only those in on the code know the true meaning of this content. For example, to talk about issues related to abortion (in the United States, the Supreme Court recently overturned the constitutional right to it), users talk about “camping.” Content about how to pitch a tent can hide, if you read between the lines, a very different message, and the algorithms are unable to identify it.

Many emojis also carry connotations beyond the icon they represent. This happens, for example, in dating apps with the emoticons of certain fruits and vegetables. Users also write “420” to refer to cannabis consumption, and pizza emojis, especially “cheese pizza,” are even used to refer to sexual content involving minors.

Strategies like these aim to outwit algorithms that are unable to detect such mentions. This algorithmic language is increasingly employed by social network users, not only to address sensitive topics but also to slip past the measures meant to prevent harassment and bullying on social networks.

Using emojis and alternative phrases is becoming more and more common on social networks, as is deliberately misspelling words, such as “seggs” instead of “sex.” Artificial Intelligences try to keep up: “pron,” a clear stand-in for “porn,” is already a term that many of these algorithms recognize.
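One way a filter can come to “recognize” algospeak, sketched hypothetically, is to normalize known substitutions back to their plain form before checking the text. The substitution table below is illustrative, built only from the examples in this article:

```python
# Hypothetical sketch: map known algospeak spellings back to plain words
# before running a moderation check. Illustrative table, not a real one.

KNOWN_ALGOSPEAK = {
    "seggs": "sex",
    "pron": "porn",
}

def normalize(post: str) -> str:
    """Replace recognized algospeak words with their plain equivalents."""
    words = post.lower().split()
    return " ".join(KNOWN_ALGOSPEAK.get(w, w) for w in words)

print(normalize("talking about seggs and pron"))  # -> "talking about sex and porn"
```

Once a term lands in such a table, the disguise stops working, which is why, as the article notes, algospeak terms are abandoned as soon as the algorithms learn them.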

However, users take advantage of the time it takes these AIs to learn new trends in order to keep talking about those topics. Once the algorithms figure out what “cheese pizza” means, people will stop using it. Terms in algospeak therefore have a short lifespan: once too many people recognize them, they are no longer a code, and no longer safe.
