Technology
The mental health of content moderators
Diana Franco Eguren · 21 July 2022

Behind the social networks we use today lies a little-known job: content moderation. Apparently, in the digital sphere, not everything is as automated as we think, at least in some areas. There is content that machines cannot distinguish well, above all content related to violence, because there are people who deliberately seek to evade the controls of social network owners, spread violence, and learn how to trick the algorithms. To counter these behaviors, the big digital corporations have created the job of content moderation. Content moderators must watch and censor, full time, day after day, content that depicts violence graphically. For this work, the corporations often hire people from developing countries. Once again, the "developed world" dumps its waste on the societies of developing countries.

In recent years this work has come to light, because content moderators have denounced the serious damage it has done to their mental health.

Could artificial intelligence keep what we do not want to see out of the digital world? How far is it from understanding human reality? And to what extent does shielding the human mind from that reality make us more vulnerable?

In my workplace there is a very interesting group of women who fight violence against women. They address violence in all its dimensions: legal, emotional, social...

Perhaps, taking these women's model into account, instead of hiding the problem behind moderators and algorithms, we should socialize it and create ways to face it together. Maybe.