This is the fourth in a series of five op-eds by Nani Jansen Reventlow published by the Dutch newspaper de Volkskrant in October 2023. Read the original Dutch version here.
Imagine that companies built technology that was fundamentally racist: it was acknowledged to fail Black people at a rate nearly 30% higher than it did white people. Then imagine that this technology was deployed in a crucial area of your life: your work, education, healthcare. Now, imagine you are a Black woman, and the technology works as expected: it harms you. You file a complaint, only to be told by the country’s human rights body that on this occasion, it probably was not racism.
Welcome to the reality of Robin Pocornie, who was told earlier this week by the College voor de Rechten van de Mens that it believes the word of tech companies over that of Black women experiencing algorithmic discrimination.
In 2020, as COVID-19 saw remote working and schooling become a reality for many, the Vrije Universiteit acquired software called Proctorio, planning to use it to detect “exam fraud.” It used facial recognition technology to detect whether the student was behind their computer for the duration of the exam and calculated a “suspicion score” for each student, indicating the extent to which the software believed them to be cheating.
When Robin Pocornie, a Black woman studying at Vrije Universiteit, was taking remote exams, Proctorio was unable to properly recognise her face. To make sure the system could more easily identify her as human, she had to take exams with a bright light shining in her face. She was admitted to exams late because the system failed to see her, and once was even removed from an exam. Pocornie fought back and filed a complaint with the College, setting out how the VU discriminated against her on the basis of race by using Proctorio to administer exams. She sought an apology, and a commitment from the university not to use discriminatory technology like this again.
Earlier this week, the College dismissed Pocornie’s claim, in a remarkable feat of reverse engineering to justify automated structural racism. While the College tries to dress it up nicely, its decision essentially comes down to refusing to find discrimination: it reached for narrow, more “neutral” explanations for the software’s failure rather than acknowledging the actual experience Pocornie put forward. This perpetuates a dynamic of minimising lived experiences of harm while refusing to validate how systemic racism has come to bear on the case – a form of gaslighting Black women know all too well.
We know facial recognition does not work for people who do not “meet the standard” of being white, male, and cis: it has never worked. Computer scientist and digital activist Joy Buolamwini was one of the first researchers to flag the serious problem facial recognition technology has with Black, and especially Black female, faces: the computer software that prompted her research recognised her better when Buolamwini, who is Black, wore a static white mask than when she appeared without one. Just the other week, research by RTL Nieuws showed that Proctorio’s software was equally incapable of properly recognising Black faces.
Nevertheless, the College found that the Vrije Universiteit properly executed its duty of care, even though it acquired notoriously flawed software without making any effort whatsoever to investigate a piece of technology known to negatively impact a significant portion of its student body. This is in spite of the fact that the VU “mainly relied on the statements of Proctorio” on whether or not it was discriminatory.
Even putting aside the question of why any university would ever want to be so negligent in its duty of care towards its students, the case raises important questions about how Dutch institutions handle complaints about racism and whose perspective gets precedence when those complaints are investigated. Both the Vrije Universiteit and the College decided to take the shady statements of a commercial tech company peddling a product known to be racially biased more seriously than the experiences of a Black woman this very technology is known to disadvantage. Instead of taking Pocornie’s case as an opportunity to affirm that big, powerful institutions should opt out of systems that further entrench and amplify the racist power structures in our society, our main human rights institution took this as its cue to align itself with big tech, and to provide an easy out to big institutions everywhere who have the opportunity, resources, and moral obligation to do better.
The conditions we create and systems we use to ensure “fair education and assessments” should never be at the expense of Black women. Or anyone. Why are our human rights bodies incapable of understanding this?