Social media platforms have repeatedly found themselves in the crosshairs of the US government in recent years as it gradually became known how much power they really wield and for what purposes they have chosen to use it. In contrast to, for example, a weapons or pharmaceutical manufacturer, there is no named authority that specifies what these platforms can and cannot do. So who regulates them? You could say everyone and nobody.
Let's be clear at the outset that these companies are by no means "unregulated," as no legal business in this country is unregulated. Take Facebook, certainly a social media company, which last year received a record $5 billion fine for failing to comply with rules set by the FTC. But that wasn't for violating some social media regulation – there aren't any.
Facebook and other companies are subject to the same rules most businesses must follow, such as generally agreed-upon definitions of fair business practices, truth in advertising, and so on. But industries like medicine, energy, alcohol and automotive have additional rules – indeed, whole agencies – specific to them; not so social media companies.
I say "social media" rather than "tech" because the latter is far too broad a category to have a single regulator. While Google and Amazon (and Airbnb, and Uber, and so on) may also need new regulation, they would likely need a different specialist, such as an algorithmic accountability office or an online retail antitrust commission. (To the extent that tech companies operate in regulated industries – Google in broadband, for instance – they are already regulated as such.)
Social media can be broadly defined as platforms people log on to in order to communicate and exchange messages and media. That's already broad enough without piling on ad marketplaces, anti-competitive concerns, and other serious issues.
So who does regulate these social media companies? For U.S. purposes, there are four main directions from which meaningful limits or policing might come, but each one has severe limitations, and none was actually created for the task.
1. Federal regulators
The Federal Communications Commission and the Federal Trade Commission are what people think of when "social media" and "regulation" appear in the same sentence. But one is a specialist – unfortunately not the right kind – and the other a generalist.
The FCC, unsurprisingly, is primarily concerned with communication, but due to the laws that created it and grant it authority, it has almost no say over what is being communicated. The sabotage of net neutrality has complicated this somewhat, but even the wing of the commission responsible for the backward stance taken during this administration has not argued that the messages and media you post fall under its authority. Commissioners may have called for social media and big tech to be regulated, but for the most part they are unwilling and unable to do it themselves.
The Commission's mandate is explicitly the cultivation of a robust and equitable communications infrastructure, which today primarily means fixed and mobile broadband (though increasingly satellite services as well). The applications and businesses that use that broadband are generally not the agency's concern, though they may be affected by FCC decisions, and it has said as much repeatedly.
The one potentially relevant exception is the much-discussed Section 230 of the Communications Decency Act (an amendment to the sweeping Communications Act), which shields companies from liability for illegal content posted to their platforms, as long as those companies make a "good faith" effort to remove it in accordance with the law.
However, this part of the law doesn't grant the FCC authority over those companies or define good faith, and there is enormous risk of veering into unconstitutional territory the moment a government agency tells a company what content it must keep up or take down, running headlong into the First Amendment. That's why, although many believe Section 230 ought to be revised, few take Trump's feeble executive action in that direction seriously.
The agency has said it will review the prevailing interpretation of Section 230, but without an established legal authority or a Congressional mandate for the FCC to oversee social media businesses, it simply can't.
The FTC is a different story. As a watchdog over business practices in general, it bears the same responsibility toward Twitter as it does toward Nabisco. There are no rules specific to what a social media company may or may not do, just as there are no rules about how many varieties of Cheez-It there should be. (There are industry-specific "guidelines," but these mainly indicate how the general rules have been interpreted.)
On the flip side, the FTC is the driving force when Facebook misrepresents how user data is shared, or when Nabisco overstates the amount of real cheese in its crackers. The agency's most important responsibility toward the social media world is enforcing the truthfulness of material claims.
You can thank the FTC for the now-familiar, carefully worded statements that avoid any real claims or responsibilities: "We take safety very seriously," "We believe we have the best approach," and that sort of thing – pretty much everything Mark Zuckerberg says. Companies and executives are trained to avoid entanglement with the FTC: "We take security seriously" is unenforceable, but "User data is never shared" certainly is.
In some cases this can still have teeth, as with the $5 billion fine that recently landed in Facebook's lap (though for many reasons it wasn't actually very consequential). It's important to understand that the fine was for breaking binding promises the company had made – not for breaking some kind of social-media-specific regulation, because again, there really aren't any.
The last point worth mentioning is that the FTC is a reactive agency. While there are certainly guidelines on the limits of legal conduct, there are no fixed rules whose violation automatically triggers a fine or charge. Instead, complaints filter in through its various reporting systems and a case is built against a company, often with help from the Department of Justice. This makes it slow to respond compared to the lightning-fast tech industry, and the companies or victims involved may be long past the crisis point by the time a complaint is formalized. The historic Equifax breach and its minimal consequences make a telling case in point.
So, while the FCC and FTC are important guard rails for the social media industry, it wouldn't be correct to say that they are its regulators.
2. State legislators
States are increasingly becoming battlegrounds over the frontiers of technology, including social media. This is likely born of frustration with partisan deadlock in Congress, which has left serious problems unaddressed for years or decades. Two good examples of states that have lost patience are California, with its new data protection rules, and Illinois, with its Biometric Information Privacy Act (BIPA).
The California Consumer Privacy Act (CCPA) arguably rose from the ashes of earlier national-level attempts to make companies more transparent about their data collection practices, such as the ill-fated Broadband Privacy Act.
California officials decided that if the federal government wasn't going to step up, there was no reason the state shouldn't at least look after its own. By convention, state laws that provide stronger consumer protections generally take precedence over weaker federal ones, so nothing prevents a state from acting to protect its citizens while the slower machinery of Congress grinds along.
The resulting law, to describe it very briefly, creates formal requirements for disclosing data collection, provides methods for opting out of it, and grants the state authority to enforce those rules. The rules may seem like common sense when you read them, but they're pretty far out there compared with the relative freedom tech and social media companies enjoyed before. Unsurprisingly, those companies spoke out loudly against the CCPA.
BIPA has a somewhat similar origin: a particularly far-sighted legislature created, back in 2008, a set of rules restricting companies' collection and use of biometric data such as fingerprints and facial recognition. It has proven a thorn in the side of Facebook, Microsoft, Amazon, Google, and others who had taken it for granted that they could analyze users' biometrics and use them for pretty much anything they wanted.
Many lawsuits have been filed over BIPA violations, and while few have yet produced noteworthy penalties, they have been invaluable in forcing companies to state on the record exactly what they are doing, and how. Sometimes it's pretty surprising! The optics are terrible, and tech companies have lobbied (fortunately with little success) to have the law replaced or weakened.
What's crucial about these two laws is that they essentially force companies to choose between adopting a new, higher standard for something like data protection across the board, or establishing a tiered system in which some users get more data protection than others. The trouble with the latter choice is that once people learn that users in Illinois and California are getting "special treatment," they start to wonder why Mainers or Puerto Ricans don't get it too.
In this way, state laws exert an outsized influence, forcing companies to make changes nationally or globally even though the decisions technically apply only to a small subset of their users. You can think of these states as activists (especially when their attorneys general are proactive) or simply as ahead of the curve, but either way they're making their mark.
This is not ideal, however, because in the extreme it creates a patchwork of state laws, written by local authorities, that may conflict with one another or embody different priorities. At least, that's the doomsday scenario being predicted almost universally by the companies that stand to lose.
State laws serve as a test bed for new policy, but they usually only emerge when movement at the federal level is too slow. Though, as with BIPA, they occasionally hit the bull's-eye, it would be unwise to rely on any one state, or a combination of them, to miraculously produce, like so many legislative monkeys at typewriters, a comprehensive regulatory structure for social media. That, unfortunately, brings us to Congress.
3. Congress
What can be said about the ineffectiveness of Congress that hasn't been said again and again? Even in the best of times, few would trust these people to make sensible, clear rules that reflect reality. Congress is simply not the right tool for the job: it stubbornly and willfully ignores nearly all issues around technology and social media, has countless conflicts of interest, and is painfully indolent – sorry, deliberate – when it actually gets around to writing and passing bills.
Corporations oppose state laws like the CCPA and call for national rules because they know national rules will take forever, leaving more opportunities to get their fingers in the pie before it's baked. Not only are national rules far later in coming, but they are also much more likely to be watered down and riddled with loopholes by industry lobbyists. (Which is itself an indication of the influence these companies wield over their own regulation, though hardly an official one.)
But Congress is not a total loss. In moments of clarity, it has established expert agencies like those covered in the first section, which are overseen by Congress but otherwise independent, empowered to make rules, and technically – if somewhat limply – impartial.
Unfortunately, the matter of social media regulation is too new for Congress to have empowered a specialist agency to handle it. Social media companies don't fit neatly into any of the categories covered by existing specialists, as is plainly demonstrated by the current attempts to stretch Section 230 past its breaking point just to put someone on the beat.
Direct federal legislation is a poor fit for regulating this fast-moving industry, as the current state of affairs more than amply demonstrates. And until a dedicated expert agency or something like it is formed, it's unlikely that anything emerging from Capitol Hill will do much to hold back the Facebooks of the world.
4. European regulators
Of course, central as it may be, the US is only one part of a global ecosystem with diverse and shifting priorities, leaders, and legal systems. But in a sort of inside-out version of state laws punching above their weight, laws that affect much of the world outside the US can still have a huge impact on how companies operate here.
The most obvious example is the General Data Protection Regulation (GDPR), a set of rules – or rather an extension of existing rules dating back to 1995 – that has changed the way some social media companies do business.
But this is only the latest step in a fantastically complex, decades-long process of harmonizing the national laws of E.U. member states and giving them the clout they need to enforce compliance internationally. Bureaucracy rarely troubles tech companies, which rely on bottomless pockets to plow through it or innate agility to dance away.
Though the tortoise may in some ways overtake the hare here, at present the GDPR's main obstacle is not the complexity of its rules but a lack of decisive enforcement. Each country's data protection authority acts as a node in a network that must reach consensus in order to bring the hammer down, a process that is slow and extremely delicate.
When the blow finally lands, though, it can be a heavy one, banning entire practices industry-wide rather than merely imposing financial penalties that these immensely wealthy entities can shake off. There is room for optimism as cases escalate and draw in heavy hitters like antitrust law in efforts to take on the entire "big tech" ecosystem.
The broad sweep of European regulation is really too complex a topic to cover here in the detail it deserves, and it strays beyond the question of who, exactly, regulates social media. Europe's role here – speaking slowly and carrying a big stick, if you like – promises results at grand scale, but for the purposes of this article it can't really be considered an effective policing body.
(Natasha Lomas, theinformationsuperhighway's E.U. Regulatory Maven, contributed to this section.)
5. Nobody? Really?
As you can see, the regulatory ecosystem social media swims in is more or less free of predators. The most dangerous are the small and agile ones – state legislators – who can take a bite before the platforms have a chance to prepare. The rest of the regulators are either too slow, too compromised, or too entangled (or some combination of the three) to pose a real threat. That's why it may be necessary to introduce a new one of a familiar type: the expert agency.
As noted above, the best-known example of one of these is the FCC, though its role has become so piecemeal that one could forget it was originally created to ensure the integrity of the telephone and telegraph system. How it also ended up as the expert agency for orbital debris is a story for another time.
What is clearly needed is the establishment of an independent federal agency or commission of experts in the United States with the legal authority to create and enforce rules for how social media platforms handle consumer data.
Like the FCC (and somewhat like the E.U.'s data protection authorities), it should be officially nonpartisan – though, also like the FCC, it would almost certainly waver in its allegiances – and have specific mandates for what it can and cannot do. For instance, it would be improper and unconstitutional for such an agency to say this or that topic of speech must be banned from Facebook or Twitter. But it could require that companies provide a reasonable and accessible definition of the speech they prohibit, along with a process for reviewing and challenging takedowns. (The details of establishing and designing such an agency are well beyond the scope of this article.)
Even agencies like the FAA lag behind changes in their industries, such as the surge in drones that forced a hasty revision of existing rules, or the huge increase in commercial space launches. But that's a feature, not a bug. These agencies are designed not to act unilaterally on the wisdom and experience of their leadership, but to conduct or solicit research, consult the public and industry alike, and produce evidence-based rules that incorporate, or at least acknowledge, a baseline of sufficiently objective data.
Sure, that didn't exactly work out with net neutrality, but I think you'll find that the industry has only been able to take advantage of that temporary abdication of authority by the FCC because the commission's current makeup was willing to wage a losing battle against voluminous evidence, public opinion and common sense. They see the writing on the wall and understand that under this system it cannot be ignored forever.
With an analogous authority for social media, the evidence could be made public, intentions to regulate could be clarified, and stakeholders – that is, users – could voice their opinions in a public forum that isn't owned and operated by the very companies they wish to rein in.
Without such an authority, these companies and their activities – of whose extent we have only the slightest inkling – will remain in a blissful limbo, choosing which rules to adhere to and which to fulminate and lobby against. We need to make that choice for them, and to weigh our own priorities against theirs. They have already abused the naive trust of users around the world – perhaps it's time they had to trust us for once.