The first question in the questionnaire the Ertzaintza uses to measure the risk of male violence is whether the victim or the offender is a foreigner. Journalist Naiara Bellio has published the findings of AlgorithmWatch's investigation into this Ertzaintza algorithm. The tool, EPV-R (an acronym from Spanish), is based on a psychological questionnaire, the "Scale for Predicting the Risk of Severe Violence Against the Partner".
Although the question about the origin of victims and offenders comes first on the list, the questionnaire is not always put to victims in that order, according to the Ertzaintza inspector general, Oskar Fernández. He adds that this question is worth only one point, while others are worth up to three. However, under the questionnaire only non-Westerners count as "foreigners": someone who is Belgian or German, for example, is not counted. As Fernández explained, the question applies only in "cases where the culture is not European". Yet the questionnaire does not require specific countries to be named.
In addition to that first question, the questionnaire contains another nineteen. Among other things, officers ask the victim whether the aggressor is jealous, whether he is aggressive toward other people, whether he has made death threats, and whether weapons have been involved. Each question carries a score, and the total can reach a maximum of 48 points. Officers do not have to ask all the questions, but they must ask at least twelve. The risk level is determined by the total score: above 24 points the risk is "severe", above 18 "high", above 10 "medium", and below that "low".
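The thresholds described above can be sketched as a simple mapping from total score to risk band. This is an illustrative sketch based only on the figures reported in the article; the individual questions, their exact weights, and how the tool handles boundary scores (e.g. exactly 24 points) are not public here, so the boundary behaviour is an assumption.

```python
def risk_level(score: int) -> str:
    """Map an EPV-R total score (0-48) to the risk bands reported
    in the article: above 24 "severe", above 18 "high", above 10
    "medium", and "low" below that. Whether a boundary score such
    as 24 falls in the higher or lower band is an assumption."""
    if score > 24:
        return "severe"
    if score > 18:
        return "high"
    if score > 10:
        return "medium"
    return "low"
```

For example, a total of 20 points would fall in the "high" band, while 9 points would be "low".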
This algorithm runs in parallel with the judicial process. That is, before a judge decides, the Ertzaintza launches its protocol to protect victims, in which it takes the EPV-R algorithm into account. The police report sent to the judge then includes the results of the questionnaire as evidence. This is not the case in other similar systems.
The performance of the algorithm in question
In 2022, the algorithm's performance was studied at Cornell University in the United States. According to the report collecting the results, in most of the cases (53%) that had been classified as high risk by other means, the algorithm concluded that the risk was "low". According to the study, false negatives outnumber true positives. This means the algorithm is highly likely to classify serious cases as non-serious.
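The "false negatives outnumber true positives" finding can be expressed as a false negative rate: among cases that are truly high risk, the fraction the tool scored as low. The counts below are hypothetical, chosen only to illustrate the 53% figure reported in the study; they are not the study's actual case numbers.

```python
def false_negative_rate(false_negatives: int, true_positives: int) -> float:
    """Fraction of genuinely high-risk cases that the tool missed:
    FNR = FN / (FN + TP)."""
    return false_negatives / (false_negatives + true_positives)

# Hypothetical counts: out of 100 truly high-risk cases, the tool
# rates 53 as low risk (false negatives) and 47 as high (true positives).
fnr = false_negative_rate(53, 47)
print(f"false negative rate: {fnr:.0%}")  # 53%
```

A false negative rate above 50% means a serious case is more likely to be scored low than to be flagged, which is the study's central criticism.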
It's not the only xenophobic algorithm.
Xenophobic algorithms are in use elsewhere as well. In Catalonia, for example, the RisCanvi tool is used to predict whether a prisoner will commit a crime after leaving prison. This algorithm also takes the prisoner's origin into account.