Abstract
The European Commission is proposing a CSAM regulation that would force digital communication apps, such as WhatsApp and iMessage, to scan the digital communications of all EU citizens for criminal offences, even when they are not under suspicion. The proposal has faced criticism from various quarters, including academics, privacy regulators, and the Council of the European Union's internal legal experts, for violating the essence of the fundamental right to privacy. The technology intended for this regulation, primarily based on artificial intelligence, is believed to be flawed and likely to flag innocent individuals. Beyond being ineffective, the measure threatens the privacy and security of digital conversations.
On 14 September, a lack of support for the proposal in the Council of the European Union was revealed. The next day, the European Commission launched a paid advertising campaign on X, formerly Twitter, targeting specific countries and using misleading statistics and emotional tactics. The Commission also employed 'microtargeting' to hide the ads from certain categories of users and relied on X/Twitter's algorithm to identify receptive audiences on the basis of sensitive personal data, in violation of X/Twitter's advertising rules, Article 26(3) of the Digital Services Act, and the General Data Protection Regulation. Such tactics are reminiscent of disinformation campaigns seen in other contexts.
This article argues that the European Commission's approach undermines European values and the Union's foundation, calling for the ad campaigns to be discontinued.
| Translated title of the contribution | European Commission misleads citizens with disinformation campaign and illegal ads |
| --- | --- |
| Original language | Dutch |
| Pages (from-to) | 23 |
| Number of pages | 1 |
| Journal | De Volkskrant |
| Volume | 2023 |
| Publication status | Published - 13 Oct 2023 |
Keywords
- GDPR
- General Data Protection Regulation
- Digital Services Act
- DSA
- Microtargeting
- European Commission
- X
- Client-Side Scanning