Musk’s Twitter tweaks foreshadow EU showdown over new rules
LONDON (AP) — Self-proclaimed free speech warrior Elon Musk’s more unfettered version of Twitter could collide with new rules in Europe, where officials warn that the social media company will have to comply with some of the world’s toughest laws targeting toxic content.
While the new digital rulebook means the European Union is likely to be a global leader in cracking down on Musk’s reimagined platform, the 27-nation bloc will face its own challenges forcing Twitter and other online companies to comply. The law doesn’t fully take effect until 2024, and EU officials are scrambling to recruit enough workers to hold Big Tech to account.
Known as the Digital Services Act, the EU’s sweeping set of rules aims to make platforms and search engines more accountable for illegal and harmful content including hate speech, scams and disinformation. They’ll kick in next summer for the biggest digital companies like Google, Facebook and TikTok and then expand to all online services the following year.
Those standards are poised to run up against Musk’s whipsawing policies at Twitter: He abruptly axed a group of advisers this week who address problems like hate speech, child exploitation and self-harm, halved Twitter’s workforce and issued conflicting decisions about content moderation.
“A lot can change in six months, but it sure seems like Twitter is lining up to be Europe’s first major test case when it comes to enforcing the DSA,” said John Albert of Berlin-based AlgorithmWatch, a nonprofit research and advocacy group.
Musk has called for “freedom of speech, not freedom of reach,” saying he wants to downgrade negative and hateful posts. The billionaire Tesla CEO considers the bloc’s rules “a sensible approach to implement on a worldwide basis,” EU digital policy chief Thierry Breton recounted after a video call with Musk this month.
Other jurisdictions are far behind Europe. In the U.S., Silicon Valley lobbyists have largely succeeded in keeping federal lawmakers at bay, and Congress has been politically divided on efforts to address competition, online privacy, disinformation and more. Britain is working on its own Online Safety Bill, but it was recently watered down and it's not clear when it will be approved.
Musk’s style of making ad hoc changes won’t fly under the new European rulebook, experts said.
Twitter’s disastrous rollout of paid “verified” blue checks likely would have triggered an EU investigation and possibly big fines because such major design changes wouldn’t be allowed without a risk assessment, Albert said.
The premium service was abandoned last month after a flood of imposter accounts spread disinformation. It relaunched this week.
The abrupt disbanding of Twitter’s Trust and Safety Council also would “raise some eyebrows in Brussels,” Albert said. Expert advisers aren’t required under the EU rules, but “good-faith voluntary efforts” show “European regulators that you care about transparency and are invested in trust and safety,” he said.
Musk’s tinkering — including dropping enforcement of COVID-19 misinformation rules and granting amnesty to suspended accounts — has already alarmed European officials.
Musk’s approach is “a big issue” that calls for “more regulation,” French President Emmanuel Macron told “Good Morning America.”
In Europe, “you can demonstrate you can have free speech, you can write what you want. But there is responsibilities and limits,” he said. Macron, who met with Musk in the U.S. this month, tweeted that “efforts have to be made by Twitter to comply with European regulations.”
The bloc will require online companies to follow clear rules on dealing with illegal content and explain to users why the material was taken down or given a warning label. They will have to be transparent about the workings of their content moderation systems and recommendation algorithms, which suggest the next song, news story or product to users. They must let EU regulators review their efforts.
Breton, the EU’s digital policy chief, said he reminded Musk about the penalties for violations, including fines worth 6% of global annual revenue that could reach billions. Repeat violations could result in an EU-wide ban. Musk and Twitter didn’t respond to messages seeking comment.
Musk is already “backtracking on the absolutism” of free speech by suspending the rapper formerly known as Kanye West for a swastika post, said Marietje Schaake, a former European Parliament lawmaker who’s now international cyber policy director at Stanford University.
“The problem is that there is a lot of hateful content below this threshold, which will make Twitter under Musk less safe and pleasant for women, minorities and people whose opinions are met with aggression,” Schaake said.
To tackle such “lawful but awful” content that frequently bedevils content moderators, the EU will require extra scrutiny for the biggest online platforms — those with at least 45 million monthly users in the EU.
There’s speculation Twitter might not qualify. It reported 238 million users worldwide before it was bought by Musk, who complained that the number of fake accounts was vastly understated. Companies have to report their user numbers to the EU by mid-February.
Big platforms will have to assess how they’re dealing with “systemic risks,” such as harassment, election-related disinformation, hoaxes and manipulation during pandemics.
By the summer, the first changes stemming from the rules should start appearing via digital “buttons” on websites and apps so users can easily flag illegal content.
The wide-ranging rulebook also poses a challenge for regulators who need to hire enough enforcers. EU officials have estimated they will add more than 100 full-time staff by 2024 to enforce the DSA and other new rules on digital competition.
Each of the 27 EU countries also will have to hire more people to police smaller platforms and coordinate with Brussels. On top of that, tech companies need to recruit more compliance staff.
All three groups will be hiring for very specific and similar skill sets: experts who know how platforms and their algorithms work, have insight into sites’ content moderation practices and have experience enforcing regulations.
The problem is they “might end up competing for the same talents,” said Rita Jonusaite, advocacy coordinator at EU DisinfoLab, a nonprofit group that researches disinformation.
There are concerns some European countries won’t have the means and expertise to enforce the rules, especially if they’re building skills in areas like disinformation from scratch.
“Regulators need to train themselves and acquire capacity very quickly,” Jonusaite said.
The EU’s executive Commission has launched a recruitment spree for dozens of expert jobs, including legal officers, data scientists, technology specialists and digital policy officers. They will help supervise the systems online platforms use to combat illegal content, such as terrorist or child sexual abuse material, and to fight harmful posts like disinformation.
Meanwhile, Musk axed thousands of employees and many others resigned, including those in content moderation roles. It’s unclear whether he plans to add staff to comply with Europe’s rules.
“As Musk is scrambling to both save money, comply with the law, and keep advertisers on board, the main question is whether he will dedicate the needed resources to monitor content at all,” said Schaake of Stanford.
“It is one thing to make a conscious decision to leave racist content up, it is another to have it be up unnoticed” because teams monitoring content “are simply not there,” she said.