France is pushing the EU to harden upcoming rules for social media platforms after the beheading of a French schoolteacher, arguing that “harmful content” such as hate speech needs to be reined in.
The push, which comes as Brussels is drafting a new rulebook for platforms like Facebook and Twitter, includes urging the EU to go beyond its current focus on illegal content, according to an EU official with firsthand knowledge of France’s position.
Ministers in French President Emmanuel Macron’s government are also lining up to criticize platforms’ content moderation efforts after Samuel Paty, the slain schoolteacher, faced a wave of online harassment.
“Today, we don’t have the information or the capacity to force the large social media platforms to implement [content] moderation worthy of what they represent … A text pushed by France will be presented in early December at European level,” Junior Digital Minister Cédric O said over the weekend.
Paris has long been a driving force behind the EU’s effort to rein in tech companies. But the harassment of Paty, who was killed for showing caricatures of the Prophet Muhammad in a classroom, has prompted the government to ramp up pressure on the European Commission.
The push comes as the Commission is due to present on December 2 a legislative proposal known as the Digital Services Act, which will lay out content moderation rules for platforms such as Google, Facebook, Twitter and TikTok. New rules could include harmonization of notice-and-takedown procedures across the bloc.
The stakes are high for France. A national law on hate speech was struck down before the summer by the Constitutional Council for threatening freedom of expression. Since then, the law’s author Laetitia Avia has taken the fight to Brussels.
“Platform regulation has been a priority for the French government, and not only in the past few days. There has been an acceleration in the sense that the prime minister himself has taken up the issue,” said Avia, an MP from Macron’s La République En Marche party who is in regular contact with Internal Market Commissioner Thierry Breton and his team about platform regulation.
France’s Plan B
Prime Minister Jean Castex conveyed a message of urgency during his inaugural visit to Brussels last week.
“It’s important for France to make progress — and to do so very quickly — on strong regulation of those networks,” Castex said after meeting with Commission President Ursula von der Leyen and Breton. A French official added that Paris is hoping for swift negotiations in Brussels in 2021.
However, if the EU legislation doesn’t go far enough on hate speech, the French government is ready to add measures in an upcoming national bill that aims to fight radical Islam, according to the Journal du Dimanche.
The French draft legislation is expected on December 9 — one week after the Digital Services Act — though it is currently unclear what new obligations would be included if Paris is not satisfied with the EU proposal.
The French government could also decide to integrate the DSA’s obligations into its own bill to ensure they apply in France sooner, Avia said.
O’s office did not immediately reply to a request for comment.
Go beyond illegal content
Behind the scenes, France has been pushing for a far-reaching piece of legislation that would cover a wide range of content.
Paris recently circulated a document arguing the Digital Services Act should go beyond illegal content to tackle material that is harmful but legal, such as disinformation, an EU official with firsthand knowledge of the document told POLITICO. The document was drafted before the terror attack, French officials said.
The French authorities also wrote that the future legislation should not only set rules for content removals after notification, but also impose transparency and supervision obligations on the platforms’ content moderation algorithms and filtering tools, the EU official said.
When it comes to disinformation, measures to reduce the content’s virality — such as downgrading its visibility or limiting sharing — would be more appropriate than removals and blocking, according to the French.
France also wants regulators and civil society to be able to audit moderation algorithms — which implies having access to platform data — and to have more information about human moderation. Regulators should also have the power to issue injunctions and, in cases of systemic failure to remove content, to impose sanctions.
Economy Minister Bruno Le Maire recently pitched fines that could reach up to 4 percent of a company’s turnover.
“Whether the Commission decides on 3 or 5 percent [fines] is irrelevant, what matters is that it’s efficient,” O said over the weekend.