EU Lawmakers Must Reject This Proposal To Scan Private Chats

Having a private conversation is a basic human right. As with the rest of our rights, we shouldn’t lose it when we go online. But a new proposal by the European Union could throw our privacy rights out the window.

Tell the European Parliament: Stop Scanning Me

The European Union’s executive body is pushing ahead with a proposal that could lead to mandatory scanning of every private message, photo, and video. The EU Commission wants to open up the intimate data of our digital lives to review by government-approved scanning software, which would then check it against databases of known child abuse images.
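
For context on how such scanning typically works: the service computes a “fingerprint” (hash) of each image and compares it against a database of fingerprints of known abuse imagery. The sketch below is a minimal illustration of that matching step, using an ordinary cryptographic hash for simplicity; deployed systems use proprietary perceptual hashes (such as Microsoft’s PhotoDNA) that also match resized or re-encoded copies, and the database and function names here are hypothetical.

    import hashlib

    # Hypothetical database of fingerprints of known images.
    # (This entry is simply the hash of the placeholder bytes below.)
    KNOWN_IMAGE_HASHES = {
        hashlib.sha256(b"example image bytes").hexdigest(),
    }

    def flag_if_known(image_bytes: bytes) -> bool:
        """Return True if this exact file matches the fingerprint database.

        A cryptographic hash only catches byte-identical copies; real
        scanners use perceptual hashes so that near-duplicates match too,
        which is also where false positives come from.
        """
        return hashlib.sha256(image_bytes).hexdigest() in KNOWN_IMAGE_HASHES

    # Under the proposal, every photo a user sends could be run through a
    # check like this, either on a server or on the user's own device.
    print(flag_if_known(b"example image bytes"))  # True
    print(flag_if_known(b"anything else"))        # False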

The tech doesn’t work right. And launching a system of “bugs in our pockets” is just wrong, even when it’s done in the name of protecting children.

We don’t need government watchers reviewing our private conversations, whether they’re AI, bots, or live police. Adults don’t need it, and children don’t need it either.

If you’re in one of the EU’s 27 member countries, now is a good time to contact your Member of the European Parliament and let them know you oppose this dangerous proposal. Today, our partners at European Digital Rights (EDRi) launched a website called “Stop Scanning Me,” with more information about the proposal and its problems. It features a detailed legal analysis of the regulation and a letter co-signed by 118 NGOs that oppose the proposal, including EFF. German speakers may also want to view and share the “Chatkontrolle Stoppen!” website run by German civil liberties groups.

Even if you’re not an EU resident, this regulation should still concern you. Large messaging platforms won’t withdraw from this massive market, even if that means abandoning privacy and security commitments to their users. That will affect users around the globe, even those who don’t regularly communicate with people in the EU.

“Detection Orders” To Listen To Private Conversations

The EU’s proposed Child Sexual Abuse Regulation (CSAR) is a disappointing step backwards. In the past, the EU has taken the lead on privacy legislation that, while not perfect, has moved in the direction of increasing, rather than decreasing, people’s privacy, such as the General Data Protection Regulation (GDPR) and the e-Privacy Directive. But the CSA Regulation goes in the opposite direction. It fails to respect the EU Charter of Fundamental Rights and undermines the recently adopted Digital Services Act, which already gives authorities the power to remove illegal content.

The proposal requires online platforms and messaging service providers to mitigate abusive content and incentivizes general monitoring of user communication. But if “significant” risks of online child sexual abuse remain after these mitigations—and it’s entirely unclear what this means in practice—law enforcement agencies can send “detection orders” to tech platforms. Once a detection order is issued, the company running the platform could be required to scan messages, photos, videos, and other data using software approved by law enforcement.

With detection orders in place, the platforms won’t be able to host truly private conversations. Whether they’re scanning people’s messages on a central server or on their own devices, the CSA Regulation simply won’t be compatible with end-to-end encryption.

Not content with reviewing our data and checking it against government databases of existing child abuse images, the proposal’s authors go much further. The CSAR suggests using algorithms to guess which other images might represent abuse. It even plans to seek out “grooming” by using AI to review people’s text messages and guess which conversations might indicate future child abuse.

Large social media companies often can’t even meet the stated promises of their own content moderation policies. It’s incredible that EU lawmakers might now force these companies to use their broken surveillance algorithms to accuse their own users of the worst types of crimes.

The EU Commission Is Promoting Crime-Detection AI That Doesn’t Work

It’s difficult to audit the accuracy of the software most commonly used to detect child sexual abuse material (CSAM). But the data that has come out should be raising red flags, not encouraging lawmakers to move forward.

  • A Facebook study found that 75% of the messages flagged by its scanning system to detect child abuse material were not “malicious,” and included messages like bad jokes and memes.
  • LinkedIn reported 75 cases of suspected CSAM to EU authorities in 2021. After manual review, only 31 of those cases—about 41%—involved confirmed CSAM.
  • Newly released data from Ireland, published in a report by our partners at EDRi (see page 34), shows more inaccuracies. In 2020, Irish police received 4,192 reports from the U.S. National Center for Missing and Exploited Children (NCMEC). Only 852 of those referrals (20.3% of the total) were confirmed as actual CSAM; just 409 (9.7%) were deemed “actionable,” and 265 (6.3%) were “completed” by Irish police.

Despite the insistence of boosters and law enforcement officials that scanning software has magically high levels of accuracy, independent sources make it clear: widespread scanning produces significant numbers of false accusations. Once the EU votes to start running the software on billions more messages, it will lead to millions more false accusations. These false accusations are forwarded to law enforcement agencies. At best, they’re wasteful; they also have the potential to produce real-world suffering.
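
To make the scale concrete, here is a rough back-of-the-envelope calculation. The message volume and error rate below are assumptions for illustration only, and the assumed error rate is far more generous than the Facebook and LinkedIn figures above:

    # Back-of-the-envelope false-positive arithmetic (all inputs assumed).
    MESSAGES_PER_DAY = 10_000_000_000  # assume 10 billion in-scope messages/day
    FALSE_POSITIVE_RATE = 0.001        # assume 99.9% specificity (optimistic)

    false_flags_per_day = MESSAGES_PER_DAY * FALSE_POSITIVE_RATE
    false_flags_per_year = false_flags_per_day * 365

    print(f"{false_flags_per_day:,.0f} false flags per day")    # 10,000,000
    print(f"{false_flags_per_year:,.0f} false flags per year")  # 3,650,000,000

Even under these optimistic assumptions, innocent messages would be flagged by the millions every day, and each flag is a potential accusation forwarded to police.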

The false positives cause real harm. A recent New York Times story highlighted a faulty Google CSAM scanner that wrongly identified two U.S. fathers of toddlers as child abusers. In fact, both men had sent medical photos of infections on their children at the request of their pediatricians. Their data was reviewed by local police, and the men were cleared of any wrongdoing. Despite their innocence, Google permanently deleted their accounts, stood by its failed AI system, and defended its opaque human review process.

As for the recently published Irish data, the Irish national police have confirmed that they currently retain all personal data forwarded to them by NCMEC—including the user names, email addresses, and other data of users verified to be innocent.

Growing the Haystack

Child abuse is horrendous. When digital technology is used to exchange images of child sexual abuse, it’s a serious crime that warrants investigation and prosecution.

That’s why we shouldn’t waste effort on actions that are ineffectual and even harmful. The overwhelming majority of internet interactions aren’t criminal acts. Police investigating online crimes are already searching for a proverbial “needle in a haystack.” Introducing mandatory scanning of our photos and messages won’t help them home in on the target—it will massively expand the “haystack.”

The proposed regulation also suggests mandatory age verification as a route to reducing the spread of CSAM. There’s no form of online age verification that doesn’t harm the human rights of adult speakers. Age verification companies tend to collect (and share) biometric data. The process also interferes with adults’ right to speak anonymously—a right that’s especially vital for dissidents and minorities who may be oppressed or unsafe.

EU states or other Western nations may well be the first to ban encryption in order to scan every message. They won’t be the last. Governments around the world have made it clear: they want to read people’s encrypted messages. They’ll be happy to highlight terrorism, crimes against children, or other atrocities if it gets their citizens to accept more surveillance. If this regulation passes, authoritarian countries, which often have surveillance regimes already in place, will demand to apply EU-style message scanning to find their own “crimes.” The list will likely include governments that attack dissidents and openly criminalize LGBT+ communities.

Tell the European Parliament: Stop Scanning Me

