*They've got a lot to be EDRi about in this one.
======================================================================
EDRi-gram
fortnightly newsletter about digital civil rights in Europe
EDRi-gram 18.3, 12 February 2020
Read online: https://edri.org/edri-gram/18-3/
Contents
1. The human rights impacts of migration control technologies
2. Cloud extraction: A deep dive on secret mass data collection tech
3. Digitalcourage fights back against data retention in Germany
4. Double legality check in e-evidence: Bye bye “direct data requests”
5. Data protection safeguards needed in EU-Vietnam trade agreements
6. PI and Liberty submit a new legal challenge against MI5
7. Dangerous by design: A cautionary tale about facial recognition
8. Recommended Action
9. Recommended Reading
10. Agenda
11. About
1. The human rights impacts of migration control technologies
This is the first blogpost of a series on our new project which brings
to the forefront the lived experiences of people on the move as they are
impacted by technologies of migration control. The project highlights
the need to regulate the opaque technological experimentation documented
in and around border zones of the EU and beyond. We will be releasing a
full report later in 2020, but this series of blogposts will feature
some of the most interesting case studies.
At the start of this new decade, over 70 million people have been forced
to move due to conflict, instability, environmental factors, and
economic reasons. As a response to the increased migration into the
European Union, many states are looking into various technological
experiments to strengthen border enforcement and manage migration. These
experiments range from Big Data predictions about population movements
in the Mediterranean to automated decision-making in immigration
applications and Artificial Intelligence (AI) lie detectors at European
borders. However, these technological experiments often fail to consider
the profound human rights ramifications and real impacts on human lives.
A human laboratory of high-risk experiments
Technologies of migration management operate in a global context. They
reinforce institutions, cultures, policies and laws, and exacerbate the
gap between the public and the private sector, where the power to design
and deploy innovation comes at the expense of oversight and
accountability. Technologies have the power to shape democracy and
influence elections, through which they can reinforce the politics of
exclusion. The development of technology also reinforces power
asymmetries between countries and influences our thinking around which
countries can push for innovation, while other spaces like conflict
zones and refugee camps become sites of experimentation. The development
of technology is not inherently democratic and issues of informed
consent and right of refusal are particularly important to think about
in humanitarian and forced migration contexts. For example, under the
justification of efficiency, refugees in Jordan have their irises
scanned in order to receive their weekly rations. Some refugees in the
Azraq camp have reported feeling like they did not have the option to
refuse to have their irises scanned, because if they did not
participate, they would not get food. This is not free and informed consent.
These discussions are not just theoretical: various technologies are
already used to control migration, to automate decisions, and to make
predictions about people’s behaviour.
Palantir machine says: no
However, are these appropriate tools to use, particularly without any
governance or accountability mechanisms in place for when things
go wrong? Immigration decisions are often opaque, discretionary, and
hard to understand, even when human officers, not artificial
intelligence, are the ones making decisions. Many of us have had
difficult experiences trying to get a work permit, reunite with our
spouse, or adopt a baby across borders, not to mention seek refugee
protection as a result of conflict and war. These technological
experiments to augment or replace human immigration officers can have
drastic results: in the UK, 7000 students were wrongfully deported
because a faulty algorithm accused them of cheating on a language
acquisition test. In the US, the Immigration and Customs Enforcement
Agency (ICE) has partnered with Palantir Technologies to track and
separate families and enforce deportations and detentions of people
escaping violence in Central and Latin America.
What if you wanted to challenge one of these automated decisions? Where
do responsibility and liability lie – with the designer of the
technology, its coder, the immigration officer, or the algorithm itself?
Should algorithms have legal personality? It’s paramount to answer these
questions, as much of the decision-making on immigration and refugee
matters already sits at an uncomfortable legal nexus: the
impact on the rights of individuals is very significant, even where
procedural safeguards are weak.
Sauron Inc. watches you – the role of the private sector
The lack of technical capacity within government and the public sector
can lead to potentially inappropriate over-reliance on the private
sector. Adopting emerging and experimental tools without in-house talent
capable of understanding, evaluating, and managing these technologies is
irresponsible and downright dangerous. Private sector actors have an
independent responsibility to make sure technologies that they develop
do not violate international human rights and domestic legislation. Yet
much technological development occurs in so-called “black boxes,”
where intellectual property laws and proprietary considerations shield
the public from fully understanding how the technology operates.
Powerful actors can easily hide behind intellectual property legislation
or various other corporate shields to “launder” their responsibility and
create a vacuum of accountability.
While the use of these technologies may lead to faster decisions and
shorten delays, they may also exacerbate and create new barriers to
access to justice. At the end of the day, we have to ask ourselves, what
kind of world do we want to create, and who actually benefits from the
development and deployment of technologies used to manage migration,
profile passengers, or other surveillance mechanisms?
Technology replicates power structures in society. Affected communities
must also be involved in technological development and governance. While
conversations around the ethics of AI are taking place, ethics do not go
far enough. We need a sharper focus on oversight mechanisms grounded in
fundamental human rights.
This project builds on critical examinations of the human rights impacts
of automated decision-making in Canada’s refugee and immigration system.
In the coming months, we will be collecting testimonies in locations
including the Mediterranean corridor and various border sites in Europe.
Our next blogpost will explore how new technologies are being used
before, at, and beyond the border, and we will highlight the very real
impacts that these technological experiments have on people’s lives and
rights as they are surveilled and as their movement is controlled.
If you are interested in finding out more about this project or have
feedback and ideas, please contact petra.molnar [at] utoronto [dot] ca.
The project is funded by the Mozilla and Ford Foundations.
Mozilla Fellow Petra Molnar joins us to work on AI & discrimination
(26.09.2019)
https://edri.org/mozilla-fellow-petra-molnar-joins-us-to-work-on-ai-and-discrimination/
Technology on the margins: AI and global migration management from a
human rights perspective, Cambridge International Law Journal, December 2019
https://www.researchgate.net/publication/337780154_Technology_on_the_margins_AI_and_global_migration_management_from_a_human_rights_perspective
Bots at the Gate: A Human Rights Analysis of Automated Decision-Making
in Canada’s Immigration and Refugee Systems, University of Toronto,
September 2018
https://ihrp.law.utoronto.ca/sites/default/files/media/IHRP-Automated-Systems-Report-Web.pdf
New technologies in migration: human rights impacts, Forced Migration
Review, June 2019
https://www.fmreview.org/ethics/molnar
Once migrants on Mediterranean were saved by naval patrols. Now they
have to watch as drones fly over (04.08.2019)
https://www.theguardian.com/world/2019/aug/04/drones-replace-patrol-ships-mediterranean-fears-more-migrant-deaths-eu
Mijente: Who is Behind ICE?
https://mijente.net/notechforice/
The Threat of Artificial Intelligence to POC, Immigrants, and War Zone
Civilians
https://towardsdatascience.com/the-threat-of-artificial-intelligence-to-poc-immigrants-and-war-zone-civilians-e163cd644fe0
(Contribution by Petra Molnar, Mozilla Fellow, EDRi)
2. Cloud extraction: A deep dive on secret mass data collection tech
Mobile phones remain the most frequently used and most important digital
source for law enforcement investigations. Yet it is not just what is
physically stored on the phone that law enforcement are after, but what
can be accessed from it, primarily data stored in the “cloud”. This is
why law enforcement is turning to “cloud extraction”: the forensic
analysis of user data which is stored on third-party servers, typically
used by device and application manufacturers to back up data. As we
spend more time using social media and messaging apps and store files with
the likes of Dropbox and Google Drive, and as our phones become more secure,
locked devices harder to crack, and file-based encryption more
widespread, cloud extraction is, as a prominent industry player says,
“arguably the future of mobile forensics.”
The report “Cloud extraction technology: the secret tech that lets
government agencies collect masses of data from your apps” brings
together the results of Privacy International’s open source research,
technical analyses and freedom of information requests to expose and
address this emerging and urgent threat to people’s rights.
Phone and cloud extraction go hand in hand
EDRi member Privacy International has repeatedly raised concerns over
risks of mobile phone extraction from a forensics perspective and
highlighted the absence of effective privacy and security safeguards.
Cloud extraction goes a step further, promising access to not just what
is contained within the phone, but also to what is accessible from it.
Cloud extraction technologies are deployed with little transparency and
in the context of very limited public understanding. The seeming “wild
west” approach to highly sensitive data carries the risk of abuse,
misuse and miscarriage of justice. It is a further disincentive to
victims of serious offences to hand over their phones, particularly if
we lack even basic information from law enforcement about what they are
doing.
The analysis of data extracted from mobile phones and other devices
using cloud extraction technologies increasingly includes the use of
facial recognition capabilities. If we consider the volume of personal
data that can be obtained from cloud-based sources such as Instagram,
Google Photos, and iCloud, all of which contain facial images, the ability to use
facial recognition on masses of data is a big deal. Because of this,
greater urgency is needed to address the risks that arise from such
extraction, especially as we consider the addition of facial and emotion
recognition to software which analyses the extracted data. The fact
that it is potentially being used on vast troves of cloud-stored data
without any transparency or accountability is a serious concern.
What you can do
There is an absence of information regarding the use of cloud extraction
technologies, making it unclear how their use is lawful and how
individuals are safeguarded from abuse and misuse of their data. This
is part of a dangerous trend by law enforcement agencies, and we want to
ensure transparency and accountability globally with respect to the new
forms of technology they use.
If you live in the UK, you can submit a Freedom of Information Act
Request to your local police to ask them about their use of cloud
extraction technologies using this template:
https://privacyinternational.org/action/3324/ask-your-local-uk-police-force-about-cloud-extraction.
You can also use it to send a request if you are based in another
country which has Freedom of Information legislation.
Privacy International
https://privacyinternational.org/
Cloud extraction technology: the secret tech that lets government
agencies collect masses of data from your apps (07.01.2020)
https://privacyinternational.org/long-read/3300/cloud-extraction-technology-secret-tech-lets-government-agencies-collect-masses-data
Phone Data Extraction
https://privacyinternational.org/campaigns/phone-data-extraction
Push This Button For Evidence: Digital Forensics
https://privacyinternational.org/explainer/3022/push-button-evidence-digital-forensics
Can the police limit what they extract from your phone? (14.11.2019)
https://privacyinternational.org/news-analysis/3281/can-police-limit-what-they-extract-your-phone
Facial recognition and fundamental rights 101 (04.12.2019)
https://edri.org/facial-recognition-and-fundamental-rights-101/
Ask your local UK police force about cloud extraction
https://privacyinternational.org/action/3324/ask-your-local-uk-police-force-about-cloud-extraction
(Contribution by Antonella Napolitano, EDRi member Privacy International)
3. Digitalcourage fights back against data retention in Germany
On 10 February 2020, EDRi member Digitalcourage published the German
government’s plea in the data retention case at the European Court of
Justice (ECJ). Dated 9 September 2019, the document from the government
explains the use of retained telecommunications data by secret services,
addresses the question of whether the 2002 ePrivacy Directive applies to
various forms of data retention and which exceptions from human rights
protections apply to secret service operations, and justifies the
government’s plans to use data retention to solve a broad range of crimes
with the example of the abduction of a Vietnamese man in Berlin by
Vietnamese agents. However, this case is very specific and, even if the
retained data was “useful” in that instance, that is not a valid legal
basis for mass data retention and therefore cannot justify drastic
intrusions into the basic rights of all individuals in Germany. Finally,
the German government also argues that the scope and time period of the
storage make a difference regarding the compatibility of data retention laws
with fundamental rights.
Digitalcourage calls for all existing illegal data retention laws to be
declared invalid in the EU. There are no grounds for blanket and
suspicion-less surveillance in a democracy and under the rule of law.
Whether it is content data or metadata that is being stored, data
retention (blanket and mass collection of telecommunications data) is
inappropriate, unnecessary and ineffective, and therefore illegal. Where
the German government argues that secret services need to use
telecommunications data to protect state interests, Digitalcourage
agrees with many human rights organisations that activities of secret
services can be a direct threat to the core trust between the general
public and the state. The ECJ has itself called for the storage to be
reduced to the absolutely required minimum – and that, according to
Digitalcourage, can only be fulfilled if no data is stored without
individual suspicion.
Digitalcourage
https://digitalcourage.de/
Press release: EU data retention: Digitalcourage publishes and
criticises the position of the German government (only in German,
10.02.2020)
https://digitalcourage.de/pressemitteilungen/2020/bundesregierung-eugh-eu-weite-vorratsdatenspeicherung
(Contribution by Sebastian Lisken, EDRi member Digitalcourage, Germany)
4. Double legality check in e-evidence: Bye bye “direct data requests”
After having tabled some 600 additional amendments, members of the
European Parliament Committee on Civil Liberties (LIBE) are still
discussing the conditions under which law enforcement authorities in the
EU should access data for their criminal investigations in cross-border
cases. One of the key areas of debate is the involvement of a second
authority in the access process – usually the judicial authority in the
State in which the online service provider is based (often called the
“executing State”).
To prevent the misuse of this new cross-border data access instrument,
LIBE Committee Rapporteur Birgit Sippel’s draft Report had angered the
Commission by proposing that the executing State should receive, by
default, the European Preservation or Production Order at the same time
as the service provider. It should then have ten days to evaluate and
possibly object to an Order by invoking one of the grounds for
non-recognition or non-execution – including based on a breach of the EU
Charter of Fundamental Rights.
What is more, the Sippel Report proposes that if it is clear from the
early stages of the investigation that a suspected person resides neither
in the Member State that is seeking data access (the issuing
State) nor in the executing State where the service provider is
established, the judicial authorities of the State in which the person
resides (the affected State) should also get the chance to intervene.
Notification as a fundamental element of EU judicial cooperation
The reasoning behind such a notification system is compelling:
Entrusting one single authority to carry out the full legality and
proportionality assessment for two or even three different jurisdictions
(the issuing, the executing and the affected State) is careless at best.
A national prosecutor or judge alone cannot possibly take into account
all national security and defence interests, immunities and privileges
and the legal framework of the other Member States, nor the special
protections a suspected person may have in their capacity as a lawyer,
doctor or journalist. This is especially relevant if the other Member
States’ rules are different or even incompatible with the rules of the
prosecutor’s own domestic investigation. Scrutiny by a second
judicial authority, with a genuine possibility to review the Order, is
therefore of paramount importance to ensure its legality.
The LIBE Committee is currently discussing the details of this
notification process. Some amendments that were tabled are unfortunately
trying to undermine the protections that the notification requirement
would bring. For example, some try to restrict the notification to
Production Orders only (when data is transmitted directly), excluding
all Preservation Orders (when the data is just frozen and needs to be
acquired with a separate Order). Others try to limit notification to
transactional data (aka metadata) or content data, alleging that
subscriber data is somehow less sensitive and therefore needs less
protection. Lastly, some propose that the notification does not have
suspensive effects on the obligations of the service provider to respond
to an order, meaning that if the notified State objects to an order and
the service provider has already handed over the data, it is too late.
The Parliament should uphold the basic principles of human rights law
If accepted, some of those amendments would bring the Parliament
position dangerously close to the Council’s highly problematic weak
notification model which does not provide any of the necessary
safeguards it is supposed to have. To ensure the human rights compliance
of the procedure, notifying the executing and the affected State should
be mandatory for all types of data and Orders. Notifications should be
simultaneously sent to the relevant judicial authority and the online
service provider, and the latter should wait for a positive reaction
from the former before executing the Order. The affected State should
have the same grounds for refusal as the executing State, because it is
best placed to protect its residents and their rights.
There seems to be a general consensus in the European Parliament about
the involvement of a second judicial authority in the issuance of
Orders. Meanwhile, the Commission grits its teeth and continues to
pretend that mutual trust among EU Member States is all that is needed
to protect people from law enforcement overreach. So far, the Commission
seems to refuse to see the tremendous risks that its “e-evidence”
proposal entails – especially in a context where some Member States are
subject to Article 7 proceedings, which could lead to the suspension of
some of their rights as Member States, because of the endangered
independence of their judicial systems and potential breaches
of the rule of law. Mutual trust should not serve as an excuse to
undermine individuals’ fundamental right to data protection and the
basic principles of human rights law.
Cross-border access to data for law enforcement: Document pool
https://edri.org/cross-border-access-to-data-for-law-enforcement-document-pool/
“E-evidence”: Repairing the unrepairable (14.11.2019)
https://edri.org/e-evidence-repairing-the-unrepairable/
EU rushes into e-evidence negotiations without common position (19.06.2019)
https://edri.org/eu-rushes-into-e-evidence-negotiations-without-common-position/
Recommendations on cross-border access to data (25.04.2019)
https://edri.org/files/e-evidence/20190425-EDRi_PositionPaper_e-evidence_final.pdf
(Contribution by Chloé Berthélémy, EDRi)
5. Data protection safeguards needed in EU-Vietnam trade agreements
On 12 February 2020, the European Parliament gave consent for the
ratification of the EU-Vietnam trade and investment agreements.
The trade agreement contains two cross-border data flow commitments. The
related data protection safeguards in this agreement are similar to the
ones in the EU-Japan agreement, which entered into force in February
2019. Civil society organisations and academics had pointed out flaws in
these safeguards.
The EU-Vietnam investment agreement contains a variant of the
controversial investor-to-state dispute settlement (ISDS) mechanism. In
Opinion 1/17 (ISDS in EU-Canada CETA) the Court of Justice of the
European Union found this mechanism compatible with the EU Treaties.
The Court suggests that ISDS does not interfere with the principle of
autonomy of EU law, as the EU and its member states can refuse to pay
ISDS damages awards. Refusing to pay ISDS damages, however, comes with
serious drawbacks.
The continued use of weak data protection safeguards is all the more
disappointing as two years ago, in January 2018, the European Commission
adopted a proposal for stronger safeguards to be used in trade
agreements. Consumer and digital rights organisations supported these
safeguards in principle. The Commission, however, never applied them. In
order to properly protect the fundamental right to data protection in
the context of trade agreements, the new von der Leyen Commission should
adopt the proposed better safeguards and actually use them.
Vrijschrift
https://www.vrijschrift.org/
EU/Vietnam Free Trade Agreement 2018/0356(NLE)
https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2018/0356(NLE)&l=en
Weak data protection in EU-Vietnam trade agreement (06.02.2020)
https://www.vrijschrift.org/serendipity/index.php?/archives/242-Weak-data-protection-in-EU-Vietnam-trade-agreement.html
EU-Japan trade agreement not compatible with EU data protection (10.01.2018)
https://edri.org/eu-japan-trade-agreement-eu-data-protection/
The European Commission rightly decides to defend citizens’ privacy in
trade discussions (28.02.2018)
https://edri.org/the-european-commission-rightly-decides-to-defend-citizens-privacy-in-trade-discussions/
Study launch: The EU can achieve data protection-proof trade agreements
(13.07.2016)
https://edri.org/study-launch-eu-can-achieve-data-protection-proof-trade-agreements/
EU Court CETA ruling shows failure of ISDS reform (06.05.2019)
https://www.vrijschrift.org/serendipity/index.php?/archives/237-EU-Court-CETA-ruling-shows-failure-of-ISDS-reform.html
(Contribution by Ante Wessels, EDRi member Vrijschrift, the Netherlands)
6. PI and Liberty submit a new legal challenge against MI5
On 1 February 2020, EDRi member Privacy International (PI) and civil
rights group Liberty filed a complaint with the Investigatory Powers
Tribunal, the judicial body that oversees the intelligence agencies in
the United Kingdom, against the security service MI5 in relation to how
they handle vast troves of personal data.
In mid-2019, MI5 admitted, during a case brought by Liberty, that
personal data was being held in “ungoverned spaces”. Much about these
ungoverned spaces, and how they would effectively be “governed” in the
future, remained unclear. At the moment, they are understood to be a
“technical environment” where personal data of an unknown number of
individuals was being “handled”. The use of “technical environment”
suggests something more than simply a compilation of a few datasets or
databases.
The longstanding and serious failings of MI5 and other intelligence
agencies in relation to these “ungoverned spaces” first emerged in PI’s
pre-existing case that started in November 2015. The case challenges the
processing of bulk personal datasets and bulk communications data by the
UK Security and Intelligence Agencies.
In the course of these proceedings, it was revealed that PI’s data were
illegally held by MI5, among other intelligence and security agencies.
MI5 deleted PI’s data while the investigation was ongoing. With the new
complaint PI also requested the reopening of this case in relation to
MI5’s actions.
In parallel proceedings brought by Liberty against the bulk surveillance
powers contained in the Investigatory Powers Act 2016 (IPA), MI5
admitted that personal data was being held in “ungoverned spaces”,
demonstrating a known and continued failure to comply with both
statutory and non-statutory safeguards in relation to the handling of
bulk data since at least 2014. Importantly, documents disclosed in that
litigation and detailed in the new joint complaint showed that MI5 had
sought and obtained bulk interception warrants on the basis of
misleading statements made to the relevant authorities.
The documents reveal that MI5 not only broke the law, but for years
misled the Investigatory Powers Commissioner’s Office (IPCO), the body
responsible for overseeing UK surveillance practices.
In this new complaint, PI and Liberty argue that MI5’s data handling
arrangements result in the systematic violation of the rights to privacy
and freedom of expression (as protected under Articles 8 and 10 of the
European Convention on Human Rights) and under EU law. Furthermore, they
maintain that the decisions to issue warrants requested by MI5, in
circumstances where the necessary safeguards were lacking, are unlawful
and void.
Privacy International
https://privacyinternational.org/
MI5 ungoverned spaces challenge
https://privacyinternational.org/legal-action/mi5-ungoverned-spaces-challenge
Bulk Personal Datasets & Bulk Communications Data challenge
https://privacyinternational.org/legal-action/bulk-personal-datasets-bulk-communications-data-challenge
The Investigatory Powers Tribunal case no. IPT/15/110/CH
https://privacyinternational.org/sites/default/files/2019-08/IPT-Determination%20-%2026September2018.pdf
Reject Mass Surveillance
https://www.libertyhumanrights.org.uk/our-campaigns/reject-mass-surveillance
MI5 law breaking triggers Liberty and Privacy International legal action
(03.02.2020)
https://www.libertyhumanrights.org.uk/news/press-releases-and-statements/mi5-law-breaking-triggers-liberty-and-privacy-international-legal
(Contribution by EDRi member Privacy International)
7. Dangerous by design: A cautionary tale about facial recognition
This series has explored facial recognition as a fundamental right; the
EU's response; evidence about the risks; and the threat of public and
commercial data exploitation. In this fifth instalment, we consider an
experience of harm caused by fundamentally rights-violating biometric
surveillance technology.
Leo Colombo Viña is the founder of a software development company and a
professor of Computer Science. A self-professed tech lover, he says it
was “ironic” that a case of mistaken identity with police facial
recognition happened to him. What unfolded next paints a powerful
picture of the intrinsic risks of biometric surveillance. Whilst Leo’s
experience occurred in Buenos Aires, Argentina, his story raises serious
issues for the deployment of facial and biometric recognition in the EU,
too.
“I’m not the guy they’re looking for”
One day in 2019, Leo was leaving the bank mid-afternoon to take the
metro back to his office. While waiting for the train, he was approached
by a police officer who had received an alert on his phone that Leo was
wanted for an armed robbery committed 17 years earlier. The alert had been
triggered by the metro station’s facial recognition surveillance system,
which had recently been the subject of a large media campaign.
His first assumption was “okay, there’s something up, I’m not the guy
they’re looking for”. But once the police showed him the alert, it
clearly showed his picture and personal details. “Okay,” he thought,
“what the f***?” When they told him that the problem could not be
resolved there and then, and he would have to accompany them to the
police station, Leo’s initial surprise turned into concern.
Wrongful criminalisation
It turned out that whilst the picture and ID number in the alert matched
Leo’s, bizarrely, the name and date of birth did not. Having never
committed a crime, nor even been investigated, Leo still does not know
how his face and ID number came to be wrongfully included in a criminal
suspect database. Despite subsequent legal requests from across civil
society, the government has not made available any information about the
processing of, storage of, or access to people’s data. This is not a
unique issue: across Europe, policing technology and processing of
personal data is frighteningly opaque.
At the police station, Leo spent four hours in the bizarre position of
having to “prove that I am who I am”. He says the police treated him
kindly and respectfully – although he thinks that being a Caucasian
professional meant that they dismissed him as a threat. The evidence for
this came later, when a similar false alert happened to another man who
also did not have a criminal record, but who had darker skin than Leo
and came from a typically poorer area. He was wrongfully jailed for six
days because the system’s alert was used to justify imprisoning him –
despite the fact that his name was not a match.
Undermining police authority
If the purpose of policing is to catch criminals and keep people safe,
then Leo’s experience is a great example of why facial recognition does
not work. Four officers spent a combined total of around 20 hours trying
to resolve his issue (at the taxpayers’ expense, he points out). That
doesn’t include the time spent afterwards by the public prosecutor to
try and work out what went wrong. Leo recalls that the police were
frustrated to be tied up with bureaucracy and attempts to understand the
decision that the system had made, whilst their posts were left vacant
and real criminals went free.
The police told Leo that the Commissioner receives a bonus tied to the
use of the facial recognition system. They confided that it seemed to be
a political move, not a policing or security improvement. Far from
helping them solve violent crime – one of the reasons often given for
allowing such intrusive systems – it mostly flagged non-violent issues
such as witnesses who had not turned up for trials because they hadn’t
received a summons, or parents who had overdue child support payments.
The implications for police autonomy are stark. Leo points out that
despite swift confirmation that he was not the suspect, the police had
neither the ability nor the authority to override the alert. They were
held hostage to a system that they did not properly understand or
control, but they were compelled to follow its instructions and
decisions without knowing how or why it had made them.
Technology is a tool made by humans, not a source of objective truth or
legal authority. In Leo’s case, the police assumed early on that the
match was not legitimate because he did not fit their perception of a
criminal. But for others also wrongfully identified, the assumption was
that they did look like a criminal, so the system was assumed to be
working correctly. Global anti-racism activists will be familiar with
these damaging, prejudicial beliefs. Facial recognition does not solve
human bias, but rather supports it by giving discriminatory human
assumptions a false sense of “scientific” legitimacy.
Technology cannot fix a broken system
The issues faced by Leo, and the officers who had to resolve his
situation, reflect deeper systemic problems which cannot be solved by
technology. Biased or inefficient police processes, mistakes with data
entry, and a lack of transparency do not disappear when you automate
policing – they get worse.
Leo has had other experiences with the fallibility of biometric
technology. A few years ago, he and his colleagues experimented with
fingerprinting. “We realised that biometric systems are not good
enough,” he says. “It feels good enough, it[’s] good marketing, but it’s
not safe.” He points to the fact that he was recently able to unlock his
phone using a picture of himself. “See? You are not secure.”
Leo shared his story – which quickly went viral on Twitter – because he
wanted to show that “there is no magic in technology.” As a software
engineer, people see him like a “medieval wizard”. As he sees it,
though, he is someone with the responsibility and ability to show people
the truth behind government propaganda about facial recognition,
starting with his own experience.
Aftermath
I asked Leo whether the government had considered the experiences of
those who had been affected. He laughed sardonically. “No, no, absolutely
not, no.” He continued: “I shouldn’t be in that database, because I
didn’t commit any crime.” Yet it took the public prosecutor four months
to confirm the removal of his data, and the metro facial recognition
system is still in use today. Leo thinks it has been a successful
marketing tool for a powerful city government wanting to assuage
citizens’ safety concerns. He thinks that the people have been lied to,
and that fundamentally unsafe technology cannot make the city safer.
A perfect storm of human errors, systemic policing issues and privacy
violations led to Leo being included in the database, but this is by no
means a uniquely Argentinian problem. The Netherlands, for example, have
included millions of people in a criminal database despite them never
being charged with a crime. Leo reflects that “the system is the whole
thing, from the beginning to end, from the input to the output. The
people working in technology just look at the algorithms, the data, the
bits. They lose the big picture. That’s why I shared my story … Just
because.” We hope the EU is taking notes.
As told to Ella Jakubowska by Leo Colombo
Dismantling AI Myths and Hype (04.12.2019)
https://daniel-leufer.com/2019/12/05/dismantling-ai-myths-and-hype/
Data-driven policing: The hardwiring of discriminatory policing
practices across Europe (19.11.2019)
https://www.citizensforeurope.eu/learn/data-driven-policing-the-hardwiring-of-discriminatory-policing-practices-across-europe
Facial recognition and fundamental rights 101 (04.12.2019)
https://edri.org/facial-recognition-and-fundamental-rights-101/
The many faces of facial recognition in the EU (18.12.2019)
https://edri.org/the-many-faces-of-facial-recognition-in-the-eu/
Your face rings a bell: Three common uses of facial recognition (15.01.2020)
https://edri.org/your-face-rings-a-bell-three-common-uses-of-facial-recognition/
Stalked by your digital doppelganger? (29.01.2020)
https://edri.org/stalked-by-your-digital-doppelganger/
Facial recognition technology: fundamental rights considerations in the
context of law enforcement (27.11.2019)
https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper.pdf
(Contribution by Ella Jakubowska, EDRi intern)
8. Recommended Action
Oppose police surveillance! #NeighbourhoodWatched
Download Privacy International and Liberty's new campaign pack to learn
more about the police surveillance technology that might already be
being used in your local area, and find out what you can do to get your
police force to be more accountable to you and your community.
https://privacyinternational.org/long-read/3016/neighbourhood-watched-how-oppose-police-surveillance-your-local-community
Celebrate Free Software!
The "I love Free Software Day" on 14 February (also known as Valentine's
Day) is the perfect opportunity for you to express your special
gratitude to Free Software contributors who do important work for our
society. Happy #ilovefs Day to everyone! ❤
https://fsfe.org/campaigns/ilovefs/
9. Recommended Reading
The intelligence coup of the century - For decades, the CIA read the
encrypted communications of allies and adversaries (11.02.2020)
https://www.washingtonpost.com/graphics/2020/world/national-security/cia-crypto-encryption-machines-espionage/
magma
https://magma.lavafeld.org/
Who Goes There - Security questions and CAPTCHA don’t just verify our
identities but subtly reshape them (20.02.2020)
https://reallifemag.com/who-goes-there/
10. Agenda
20.04.2020, Valencia, Spain
Internet Freedom Festival 2020
https://internetfreedomfestival.org
30.04.2020, Bielefeld, Germany
German Big Brother Awards 2020
https://bigbrotherawards.de/
06.05.2020, Berlin, Germany
re:publica20
https://20.re-publica.com/en
09.06.2020, Costa Rica
RightsCon 2020
https://www.rightscon.org/
06.11.2020, Brussels, Belgium
Freedom not Fear 2020
https://www.freedomnotfear.org/fnf-2020/freedom-not-fear-2020-6-9-november-2020
26.01.2021, Brussels, Belgium
Privacy Camp 2021
https://privacycamp.eu/
11. About
EDRi-gram is a fortnightly newsletter about digital civil rights by
European Digital Rights (EDRi), an association of civil and human rights
organisations from across Europe. EDRi takes an active interest in
developments in the EU accession countries and wants to share knowledge
and awareness through the EDRi-gram.
All contributions, suggestions for content, corrections or agenda-tips
are most welcome. Errors are corrected as soon as possible and are
visible on the EDRi website.
Except where otherwise noted, this newsletter is licensed under the
Creative Commons Attribution 3.0 License. See the full text at
http://creativecommons.org/licenses/by/3.0/
Newsletter editor: Heini Jarvinen - edrigram@edri.org
Information about EDRi and its members: http://www.edri.org/
European Digital Rights needs your help in upholding digital rights in
the EU. If you wish to help us promote digital rights, please consider
making a private donation.
https://edri.org/donate/
- EDRi-gram subscription information
Subscribe by e-mail
To: edri-news-request@mailman.edri.org
Subject: subscribe
You will receive an automated e-mail asking to confirm your request.
Unsubscribe by e-mail
To: edri-news-request@mailman.edri.org
Subject: unsubscribe
- Newsletter archive
Back issues are available at:
http://www.edri.org/newsletters/
- Help
Please ask edrigram@edri.org if you have any problems with subscribing
or unsubscribing.