A livestreamed debate between Facebook CEO Mark Zuckerberg and Thierry Breton, the EU commissioner who shapes digital policy for the single market, sounded warm enough on the surface. Breton referred to "Mark" and spoke of "dialogue" to establish the right governance for digital platforms, while Zuckerberg sounded respectful by indirectly addressing "the Commissioner."
But the underlying message from Europe to Facebook remained steely: Follow our rules, or expect them to be imposed through regulation.
When Facebook invests in "smart" workarounds – be it to "creatively" reduce its regional tax burden or to bypass democratic values and processes – the company should expect lawmakers to respond in kind, Breton told Zuckerberg.
"In Europe we have [clear and strong] values. They are clear. And if you understand very well the values on which we have built our continent year after year, you will understand how you have to behave," said the Commissioner. "And I think if you run a systemic platform, understanding these values is extremely important so that you can anticipate – and, even better, work with us – to build new governance year after year.
"We won't do that overnight. We have to build it year after year. However, I think it is extremely important to anticipate what could trigger a 'bad response' that forces us to regulate."
"Let's think about taxes," Breton added. "I was a CEO myself and I always told my team: Don't try to be too smart. Pay taxes where you have to pay taxes. Don't head for a haven. Pay taxes. Don't be too smart with taxes. This is an important issue for the countries in which you operate. So don't be too smart.
"Don't be too smart – it could be something we have to learn in the coming days."
Work with us, not against us
The key message – that platforms have to fit within European rules, not vice versa – is one Breton has expressed since taking office at the Commission at the end of last year.
Even as he made sure to toss out his usual olive branches yesterday – saying he doesn't want to have to regulate, and that his preference remains collaboration and "partnership" between platforms and regulators in the service of citizens – the caveat was clear: unless, of course, he has no other choice. So the message from Brussels to Big Tech continues: "Do what we ask, or we will pass laws you cannot ignore."
This Commission, of which Breton is a member, began its five-year mandate at the end of last year – and this year has presented several pieces of a sweeping digital policy reform plan, including measures to encourage the sharing of industrial data for business and research, and proposed rules for certain high-risk AI applications.
However, a fundamental rethink of platform liability is still in the works. Breton declined to offer new details on the upcoming legislation yesterday, saying only that it would arrive by the end of the year.
The Digital Services Act could have serious implications for Facebook's business, which explains why Zuckerberg took the time to video chat with the Brussels lawmaker – something the Facebook CEO consistently refused to do for the British Parliament, and he also rebuffed several international parliaments when parliamentarians joined forces to question him about political disinformation.
The one-hour online discussion between the tech giant's CEO and a Brussels lawmaker closely involved in shaping the future of regional platform regulation was organized by Cerre, a Brussels-based think tank focused on regulation of the network and digital industries.
It was moderated by Cerre director general Bruno Liebhaberg, who posed and selected the questions, with a couple chosen from audience submissions.
Zuckerberg had brought his usual laundry list of talking points for discussing regulations that could limit the scope and scale of his global empire. For example, he sought to frame the only available options for digital rules as a choice between the United States and China.
However, that framing does not play well in Europe.
The Commission has long talked up the idea of a third, uniquely European way of regulating technology: putting guardrails on digital platforms to ensure they serve European values, and that citizens' rights and freedoms are not merely protected from erosion by technology but actively supported by it. Hence the talk of "trustworthy AI."
(At least, that's the Commission's rhetoric; its first draft plan to regulate AI was much lighter-touch than rights advocates had hoped, with a narrow focus on so-called "high-risk" applications of AI that glosses over the full spectrum of rights-related risks automation can generate.)
Zuckerberg's simplified dichotomy of "my way or the China highway" is unlikely to win him friends or influence among European legislators. It means he either simply failed to notice, or actively ignored, the region's ambitions to set its own digital regulatory standard. Neither will impress in Brussels.
The Facebook CEO also tried to leverage the Cambridge Analytica data misuse scandal, claiming the episode illustrates the risks of requiring dominant platforms to share data with competitors – e.g. if regulations impose portability requirements to level the competitive playing field.
Too much openness in the past, the argument goes, is what led to Facebook users' data being shamefully harvested by the app developer who worked for Cambridge Analytica.
This claim is also unlikely to be well received in Europe, where Zuckerberg faced hostile questioning from EU parliamentarians in 2018 after the scandal broke – including demands that European citizens be compensated for the misuse of their Facebook data.
Facebook's business remains subject to several ongoing investigations into how it handles EU citizens' personal data. Yet Zuckerberg's only mention of Europe's GDPR during the conversation was to claim "compliance" with the EU-wide data protection framework, which he also suggested had raised the standards Facebook offers users elsewhere.
Another area where the Facebook CEO tried to muddy the waters – and thereby work to limit the scope of future EU-wide platform rules – was the question of when data should be considered as belonging to a specific user, and whether that user should therefore have the right to port it elsewhere.
"In general, I have been very much in favor of data portability, and I think it would be very helpful to have the right rules in place to enforce it. In general, I don't think anyone is against the idea that you should be able to transfer your data from one service to another. I think all the difficult questions concern the definition of what is your data and – particularly in the context of social services – what is someone else's data," he said.
He gave the example of friends' birthdays that Facebook can display to users, asking whether a user should therefore be able to port that data to a calendar app.
"If your friends now have to log in, and every single person has to agree that you're allowed to export this data to your calendar, then it just gets too difficult in practice and no developer will bother with the integration," he suggested. "And it could be annoying to ask all of your friends to do that. So where to draw the line between your data and your friends' data is, in my opinion, a very critical question.
"This is not just an abstract thing. Our platform started out more open and on the side of data portability – and, to be precise, that is exactly one of the reasons we ran into the Cambridge Analytica issues we had to deal with, because our platform used to work in such a way that a person could sign in to an app and bring along data their friends had shared with them, so that if a friend shared something with you, you could see it and use it in another app.
"But obviously we saw the downsides of that – namely, if you move data a friend shared with you into another app and that app turns malicious, a lot of data can now be used in ways people did not expect. So in my opinion it is extremely important to get the nuance of data portability right. And we have to recognize that there are direct trade-offs between openness and privacy. And if our policy is to lock everything down as much as possible from a data protection perspective, it is not possible to have as open an ecosystem as we want. And that means compromises on innovation, competition and academic research."
Regulation that helps the industry "balance these two important values of openness and data protection," as Zuckerberg put it, would therefore be welcome at 1 Hacker Way.
Breton followed this monologue by addressing what he called the "stickiness" of data, pointing out that "access to data is the most important asset of the platform economy."
"It is important in this platform economy, but – but! – competition will come. And you will have some platforms that will probably enable this portability faster than you think," he said. "So I think it is already important to anticipate what your customers will be ready for at the end of the day."
"Portability will happen," Breton added. "It is not easy – it is not simply a matter of finding a simple passport – but … we are talking about the design of this fourth dimension, the data space … We are still at the beginning. It will probably take a generation. And it will take some time, but let me tell you something: with regard to personal data, customers will understand and demand more and more that their personal data belongs to them. They will ask for portability in one way or another."
On "misinformation" – the first topic Zuckerberg highlighted, framing it as misinformation (rather than "disinformation," or indeed "fakes") – he had come prepared with some statistics to back Facebook's claim of "stepped-up efforts" to combat fakes related to the coronavirus crisis.
"In general, we have really stepped up our efforts to combat misinformation. We have removed hundreds of thousands of pieces of harmful misinformation. And our independent fact-checking program has resulted in over 50 million warnings being displayed on content that is false about COVID," he said, claiming that 95% of the time people see such flagged content they "don't click through" – suggesting, he implied, that the collaboration is working really well.
(Per some back-of-the-envelope math, 5% of 50 million is still 2.5 million clicks in just this one narrow example…)
Breton later offered another deflating response when asked whether the EU's current code of conduct on disinformation – a self-regulatory initiative that multiple tech platforms have signed up to – amounts to "sufficient" governance.
"We will never do enough," he replied. "Let's understand each other: We will never do enough about disinformation. This is a disease of the century. So everything we do must be followed up."
"It's a big problem," Breton continued, saying that, as a former CEO, he prefers KPIs that "show that we're making progress." "Of course we have to keep track of progress, and if I am unable to report [to other EU institutions and commissioners] with strong KPIs, we will have to regulate – more strongly."
He added that the platforms' cooperation on self-regulation in this area gave cause for optimism about further progress, but stressed: "This issue is extremely important for our democracy. Extremely … So we will be very careful."
The commissioner also pointed out to Zuckerberg that the buck stops with him as CEO, rejecting any notion that Facebook's newly formed "oversight board" – which Zuckerberg had talked up earlier in the conversation – could camouflage where decisions are really made.
"At the end of the day, if you are the CEO, you are the only one responsible – nobody else … You are required to perform your due diligence when making decisions," said Breton, after spreading a little polite praise for the oversight board as a "very good idea."
"Understand what I'm trying to tell you – if you're the CEO of an important platform, you have to deal with a lot of stakeholders. So of course it is important that you have boards, advisory boards, a board of directors, all sorts of things, to understand what these stakeholders have to say to you. Because at the end of the day a CEO's mission is to listen to everyone and then make the decision. But at the end of the day, Mark will be responsible."
In another pointed remark, Breton warned the Facebook CEO against playing "a gatekeeper role."
"Be careful to help our internal market. It matters if you're a systemic player and a gatekeeper controlling the others you play with. Be careful with democracy. Anticipate what's going to happen. Be careful with disinformation. This could have a negative impact on what is extremely important to us – including our values," he said, appealing to Zuckerberg to "work together to develop the right governance tools and behavior" – and ending with a Silicon Valley-style appeal to "build the future together."
The inevitable point Breton was working toward was that "just because something is not forbidden does not mean it is authorized." In other words, platforms have to learn to ask regulators for permission – and shouldn't expect forgiveness if they don't. This principle, Breton concluded, is particularly important for the digital market and the information society as a whole.
A particular low point for Zuckerberg came earlier in the conversation, when Liebhaberg asked for his assessment of the effectiveness of the content moderation measures Facebook has taken so far – particularly regarding how quickly illegal and/or harmful content is removed. (See also: Last week France became the latest EU country to pass a law requiring platforms to quickly remove illegal content such as hate speech.)
Zuckerberg nodded to his usual "free speech vs. content moderation" theme before making a sweeping claim of progress on "increasingly proactive" content moderation through the use of artificial intelligence ("AI") and what he called "human systems."
"In recent years … we have updated all of our content review systems so that now … our primary metric at this point is: What percentage of the harmful content can our systems proactively identify and remove before anyone sees it? Part of it is AI and part of it is human systems," he said – "human systems" being a reference to the up to 30,000 people Facebook pays to use their brains and eyes to moderate content.
"If someone has to see it and report it to us, we haven't caught everything ourselves – but in general, if someone has to report it to us, it means we should be doing a little better in the future. So there is still a lot to do here," Zuckerberg continued, adding: "We are getting much better at this. I think our systems are constantly improving."
His use of the plural "systems" at this point suggests he was including people in his calculation.
However, he did not mention the mental health toll that moderation takes on the thousands of people Facebook's business depends on to absorb the roughly 20% of hate speech its AI systems still cannot identify. (Nor did he offer performance metrics on how (in)effectively AI systems can proactively identify other types of content human moderators are routinely exposed to so that Facebook users don't have to be – such as images of rape, murder and violence.)
Just last week, Facebook paid $52 million to settle a lawsuit brought by 11,000 current and former content moderators who developed mental health problems, including PTSD, on the job.
The Verge reported that under the terms of the settlement each moderator will receive $1,000 that can be spent at will, though Facebook intends the money to help fund medical treatment – for example, to help obtain a diagnosis for mental health conditions a moderator may be suffering from.