EU law targets Big Tech for hate speech and misinformation



BRUSSELS (AP) — The European Union reached a historic agreement early Saturday to crack down on hate speech, disinformation and other harmful content online.

The law will force big tech companies to police their platforms more rigorously, make it easier for users to flag problems, and empower regulators to punish noncompliance with billions in fines.

EU officials finally sealed the deal in principle in the early hours of Saturday. The Digital Services Act will overhaul the 27-nation bloc’s digital rulebook and cement Europe’s reputation as a world leader in reining in the power of social media companies and other digital platforms, such as Facebook, Google and Amazon.

“With the DSA, the time of big online platforms behaving as if they are ‘too big to care’ is coming to an end,” said EU Internal Market Commissioner Thierry Breton.

EU Commission Vice President Margrethe Vestager added that “with today’s agreement we ensure that platforms are held accountable for the risks that their services may pose to society and citizens.”

The act is the EU’s third major law targeting the tech industry, a notable contrast to the United States, where lobbyists representing Silicon Valley interests have largely succeeded in keeping federal lawmakers at bay.

While the Justice Department and the Federal Trade Commission have filed major antitrust actions against Google and Facebook, Congress remains politically divided on efforts to address competition, online privacy, misinformation and more.

The new EU rules, which are designed to protect internet users and their “fundamental rights online,” should make tech companies more accountable for user-created content amplified by their platforms’ algorithms.

Breton said regulators will have ample means to back up the law.

It “entrusts the Commission with the supervision of very large platforms, including the possibility of imposing effective and dissuasive sanctions of up to 6% of global turnover or even a ban on operating in the EU single market in case of repeated serious infringements,” he said.

The tentative agreement was reached between the European Parliament and EU member states. It still needs to be formally approved by both institutions, but that is not expected to pose any political problems.

“The DSA is nothing short of a paradigm shift in technology regulation. It’s the first big attempt to set rules and standards for algorithmic systems in digital media markets,” said Ben Scott, a former technology policy adviser to Hillary Clinton who is now executive director of the advocacy group Reset.

Negotiators had hoped to reach a deal before Sunday’s French election. A new French government could take different positions on digital content.

The need to regulate big tech more effectively became more apparent after the 2016 US presidential election, when it was discovered that Russia had used social media platforms to try to influence the country’s vote. Tech companies like Facebook and Twitter have promised to crack down on disinformation, but the problems have only gotten worse. During the pandemic, health misinformation flourished and again companies were slow to act, cracking down after years of allowing anti-vaccine falsehoods to thrive on their platforms.

Under the new law, governments could require companies to remove a wide range of content deemed illegal, including material that promotes terrorism, child sexual abuse, hate speech and commercial scams. Social media platforms like Facebook and Twitter would need to give users tools to flag such content in an “easy and effective way” so that it can be swiftly removed. Online marketplaces like Amazon would have to do the same for dubious products, such as counterfeit sneakers or unsafe toys.

These systems will be standardized so that they work the same way on any online platform.

Tech giants have been lobbying furiously in Brussels to ease EU rules.

Twitter said on Saturday it would review the rules “in detail” and that it supports “intelligent, forward-thinking regulation that balances the need to address online harm with protecting the open Internet.”

Google said in a statement Friday that it looks forward to “working with lawmakers to get the remaining technical details right to ensure the law works for everyone.” Amazon referred to a blog post from last year that said it welcomed measures that improve trust in online services. Facebook did not respond to requests for comment.

The Digital Services Act would prohibit ads targeting minors, as well as ads targeting users based on their gender, ethnicity or sexual orientation. It would also ban deceptive techniques companies use to nudge people into doing things they didn’t intend to, such as signing up for services that are easy to join but hard to cancel.

To show that they are making progress in limiting these practices, tech companies would need to conduct annual risk assessments of their platforms.

Until now, regulators have not had access to the inner workings of Google, Facebook and other popular services. Under the new law, companies will have to be more transparent and share information with regulators and independent researchers about their content moderation efforts. That could mean, for example, making YouTube hand over data on whether its recommendation algorithm has been steering users toward more Russian propaganda than usual.

To enforce the new rules, the European Commission is expected to hire more than 200 new employees. To pay for them, tech companies will be charged a “monitoring fee” that could amount to as much as 0.1% of their annual global net income, according to the negotiated text.

Experts said the new rules are likely to lead to copycat regulatory efforts by other countries’ governments, while tech companies will also face pressure to implement the rules beyond EU borders.

“If Joe Biden stands at the podium and says, ‘Hell, why don’t American consumers deserve the same protections that Google and Facebook are giving European consumers?’ it will be hard for those companies to refuse to apply the same rules elsewhere,” Scott said.

But companies aren’t likely to do so voluntarily, said Zach Meyers, a senior fellow at the Centre for European Reform think tank. There is too much money at stake if a company like Meta, which owns Facebook and Instagram, is restricted in how it can target advertising at specific groups of users.

“Big tech companies will strongly resist other countries adopting similar rules, and I can’t imagine companies voluntarily applying these rules outside of the EU,” Meyers said.

The EU reached a separate agreement last month on its Digital Markets Act, which aims to rein in the market power of tech giants and make them treat smaller rivals fairly.

And in 2018, the EU’s General Data Protection Regulation set the global standard for data privacy protection, though it has been criticized for failing to change the behavior of tech companies. Much of the problem stems from the fact that a company’s lead privacy regulator is in the country where its European headquarters is located, which for most big tech companies is Ireland.

Irish regulators have opened dozens of data privacy investigations but only handed down rulings on a handful. Critics say a lack of staff is the problem, but the Irish regulator says the cases are complex and time-consuming.

EU officials say they have learned from that experience and will make the bloc’s executive arm, the European Commission, the enforcer of the Digital Services Act and the Digital Markets Act.

AP Business reporter Kelvin Chan reported from London. AP technology writer Barbara Ortutay contributed to this story.


