It is suspiciously convenient that Facebook already fulfills most of the regulatory requirements it is asking governments to impose on the rest of the technology industry. Facebook CEO Mark Zuckerberg is in Brussels lobbying European Union regulators as they draft new laws governing artificial intelligence, content moderation, and more. But if regulators follow Facebook's suggestions, they could end up strengthening the social network's power rather than keeping it in check, by hamstringing companies with fewer resources.
We have already seen this happen with GDPR. The idea was to strengthen privacy and curb the exploitative data collection that technology giants like Facebook and Google rely on for their business models. The result, according to WhoTracksMe, was that Facebook and Google actually gained EU market share or lost only marginally, while other adtech providers suffered heavy losses under the regulation.
Technology giants like Facebook have the profits, lawyers, lobbyists, engineers, designers, scale, and steady cash flow to cope with regulatory change. If new laws don't directly target these large companies' abuses or dominance, the collateral damage can be huge. Rather than invest time and money they don't have in order to meet the requirements, some smaller competitors will fold, shrink, or sell out.
But at least in the case of GDPR, everyone had to add new transparency and opt-out features. If Facebook's requests go through, it will sail along largely undisturbed while rivals and upstarts scramble to become compliant. I made this argument in March 2018 in my post "Regulation could protect Facebook, not punish it." Then GDPR did just that.
This does not mean these safeguards wouldn't be useful for everyone. But regulators need to consider what Facebook is not suggesting if they want to rein in its scale and recklessness, and which timelines or penalties would be feasible for smaller players.
A quick look at what Facebook is proposing makes clear that it is self-servingly recommending what it has already accomplished:
- User-friendly channels for reporting content – Every post and entity on Facebook can already be flagged by users, along with an explanation of why
- External oversight of policies or enforcement – Facebook is currently finalizing its independent Oversight Board
- Regular public reporting of enforcement data – Facebook publishes a Community Standards enforcement report twice a year
- Publication of content standards – Facebook publishes its standards and notes updates to them
- Consultation with stakeholders before significant changes – Facebook consults a Safety Advisory Board and will have its new Oversight Board
- A channel for users to appeal decisions to remove their content – Facebook's Oversight Board will review appeals against content removals
- Incentives to meet specific targets, such as keeping the prevalence of violating content below an agreed threshold – Facebook already touts that it proactively detects 99% of child nudity content and 80% of removed hate speech, and that it deletes 99% of ISIS and Al-Qaida content
Finally, Facebook asks that rules for prohibiting content on the internet "recognize user preferences and the variation among internet services, be enforceable at scale, and permit flexibility across language, trends and context." That is a lot of wiggle room. Facebook already varies which content is allowed by region to comply with local laws, Groups are more self-policed than the News Feed, and Zuckerberg has advocated for customizable filters for objectionable content, with defaults set by local majorities.
The "enforceable at scale" clause is a sly push for laws that wouldn't require tons of human moderators to enforce, which could further drag down Facebook's profits. The subtext: "There are 100 billion pieces of content added every day, so don't make us look at it all." Investing in election security, content moderation, and cybersecurity has already cut Facebook's profit growth from 61% year-over-year to just 7% in 2019.
To be clear, it's great that Facebook is already doing all this. Little of it is formally required. If the company were as evil as some imagine, it would do none of it.
On the other hand, Facebook earned $18 billion in profit off our data in 2019 while repeatedly proving it hasn't adequately protected it. The $5 billion fine and FTC settlement committing Facebook to stronger privacy and transparency practices show that the company is still catching up to its role as a ubiquitous communications utility.
The EU, and hopefully U.S. regulators too, have much more to investigate. Should Facebook pay a tax on its use of AI? How does it treat and compensate its human content moderators? Would requiring that users be able to export their interoperable friend lists promote much-needed competition among social networks that could force Facebook to treat people better?
As EU internal market commissioner Thierry Breton told reporters after Zuckerberg's meetings with regulators: "It's not for us to adapt to those companies, but for them to adapt to us."
"Unemployment is AI's biggest risk. I think a tax on its usage by big corporations could help fund the job retraining the world desperately needs."
– Josh Constine (@JoshConstine), February 17, 2020