Like many other areas, examinations in the COVID era lend themselves to technological solutionism: letting Artificial Intelligence monitor the exams. But are automatic surveillance technologies (TVA, from the Spanish acronym) the solution to this problem? In recent years, these technologies seem to be in vogue. However, there is growing evidence that they can cause serious harm and that they lead to technical, legal and ethical problems.
So much so that the European Union is studying a moratorium on all uses of facial recognition in sensitive domains, including surveillance, policing, education and employment. Several cities in the United States, including San Francisco, Berkeley and Oakland, have already legislated bans to prevent government institutions from using it. Finally, companies such as IBM, Amazon and Microsoft are abandoning their plans for these technologies.
In Spain, the UNED, a pioneer in distance learning, asked itself: can we delegate to TVA alone the responsibility of guaranteeing the integrity of online exams? The answer was to try to replicate face-to-face exams by developing the Virtual Exam Classroom, a tool with which more than half a million tests have been carried out, and which does not implement automatic surveillance. To arrive at this solution, certain potentially problematic aspects of TVA were identified:
A) There is no clear regulatory framework: most countries lack specific legislation, and that is our case. For example, neither the LOPD nor the RGPD contains a specific section on facial recognition technologies and, what is worse, there is no case law. There are, however, some strict recommendations from the AEPD, which, among other things, advise avoiding the use of biometric technologies.
Large question banks from which questions are randomly selected are an effective tool against plagiarism.
B) The use of TVA may constitute a violation of the principles of necessity and proportionality. Thus, it must be determined that there are no other less burdensome measures that can guarantee the same or an equivalent result. And there are: proper test design, for example, with large question banks from which questions are randomly selected, is an effective tool against plagiarism.
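The random-selection approach described above can be sketched in a few lines. This is a minimal illustration, not the UNED's actual implementation: the bank contents, identifiers and parameters are all hypothetical, and a real system would draw questions from an assessment database.

```python
import random

# Hypothetical question bank; in practice this would come from the
# institution's assessment database. IDs are purely illustrative.
QUESTION_BANK = [f"Q{i:03d}" for i in range(1, 201)]  # 200 questions

def build_exam(bank, n_questions, student_id, exam_id):
    """Draw a per-student random subset of questions.

    Seeding the generator with (exam_id, student_id) makes each
    student's paper reproducible for later review, while papers
    differ between students, which limits answer sharing.
    """
    rng = random.Random(f"{exam_id}:{student_id}")
    return rng.sample(bank, n_questions)

exam_a = build_exam(QUESTION_BANK, 10, student_id="s001", exam_id="AI-2021")
exam_b = build_exam(QUESTION_BANK, 10, student_id="s002", exam_id="AI-2021")
```

With a bank of 200 questions and papers of 10, two students are very unlikely to receive the same set, and any given paper can be regenerated exactly from the seed for audit purposes.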
C) TVA may violate the right to privacy. Evidently, surveillance during an exam is lawful, but the fact that an educational institution equips itself with the ability to automatically recognize students raises privacy concerns, such as the impossibility of ensuring that they are not monitored at other times (in their life on campus, for example).
D) TVA are inherently imprecise and their software fallible. For example, many studies have shown that algorithms trained on racially biased data have difficulty identifying racialized people, especially women. Do we want to place a new burden on already disadvantaged groups when alternatives exist?
E) TVA can lead to automation bias: it has been shown that those who use these technologies tend to blindly assume that they are infallible, and this can lead to very wrong decisions. What happens when TVA are wrong? Can we accept the bureaucratic ordeal that an erroneous automatic decision entails, taken for granted because "the machine said so"?
F) TVA can produce discrimination and violations of the principle of equality. People with little computer literacy, the elderly, those with connection problems, with obsolete devices, with difficulties derived from the pandemic: do they all have equal access?
G) TVA can generate discrimination based on functional diversity. Even if they comply with accessibility legislation, there are questions about whether the rights of people with and without disabilities are being equally served. For example, people with visual impairments may have trouble taking selfies, something essential for the use of these technologies.
Nothing prevents someone from having more than one keyboard and monitor connected to their computer
Finally, TVA, despite their promises, cannot even guarantee authorship or the absence of fraudulent means when tests are conducted online. As hackers know well, it is very difficult to conceive of a totally secure computer system. In this case, nothing prevents someone from having more than one keyboard and monitor connected to their computer, for example, or from accessing the exam through a virtualized operating system (running in one window, leaving total freedom to use a browser in another).
These are just two examples among the many, very imaginative ones that the community has lately been raising against TVA. And they show that these technologies do not guarantee integrity in the conduct of exams: they guarantee neither the impossibility of third-party intervention nor the non-use of prohibited materials. In conclusion, when proposing non-face-to-face solutions for exams, any educational institution should abide by the precautionary principle. Automating exam supervision with automatic surveillance technologies brings legal, ethical and technical problems that are, today, unavoidable.
José L. Aznarte Mellado is a professor in the Artificial Intelligence department of the UNED and assistant vice-rector for intelligent data management. Juan Manuel Lacruz López is a professor in the Department of Criminal Law and Criminology at the UNED and vice-rector for undergraduate and graduate degrees.