A comparison of an original and a deepfake video of Facebook CEO Mark Zuckerberg.
We are lucky that fake videos are not a big problem yet. The best deepfake detector to emerge from a major Facebook initiative to combat the altered videos would catch only about two-thirds of them.
In September, as speculation about the risk of deepfakes increased, Facebook challenged artificial intelligence researchers to develop techniques for detecting deepfake videos. In January, the company also banned deepfakes that spread misinformation.
Facebook's Deepfake Detection Challenge was run in collaboration with Microsoft, Amazon Web Services, and the Partnership on AI through Kaggle, a coding competition platform owned by Google. It offered an extensive collection of face-swap videos: 100,000 deepfake clips that Facebook produced with paid actors, on which participants tested their detection algorithms. The project attracted more than 2,000 participants from industry and academia and generated more than 35,000 deepfake detection models.
The best model to emerge from the competition detected deepfakes from Facebook's collection just over 82 percent of the time. But when that algorithm was tested against a set of previously unseen deepfakes, its performance dropped to just over 65 percent.
"It's all fine and good for helping human moderators, but it's obviously nowhere near the accuracy that you need," said Hany Farid, a professor at UC Berkeley and an authority on digital forensics, who is familiar with the Facebook-led project. "You need to be making mistakes on the order of one in a billion, something like that."
Deepfakes use artificial intelligence to digitally graft one person's face onto another, making it look like that person did and said things they never did. At the moment, most deepfakes are bizarre and amusing. A few have appeared in clever ads.
The concern is that deepfakes could one day become a particularly powerful and effective weapon for political misinformation, hate speech, or harassment, spreading virally on platforms like Facebook. The bar for making deepfakes is worryingly low, because simple point-and-click programs built on freely available AI algorithms already exist.
"I was personally pretty frustrated with how much time and energy smart researchers were putting into making better deepfakes," said Mike Schroepfer, Facebook's chief technology officer. He says the challenge is meant to "spur a broad industry focus on tools and technologies that help us detect these things, so that if they are used in malicious ways we have scaled approaches to combat them."
Schroepfer considers the results of the challenge impressive, given that participants had only a few months. Deepfakes are not a big problem yet, but Schroepfer says it is important to be ready in case they are weaponized. "I would much rather be prepared for lots of bad things that never happen than the other way around," says Schroepfer.
The algorithm with the highest score in the deepfake challenge was written by Selim Seferbekov, a machine learning engineer at Mapbox in Minsk, Belarus. He won $500,000. Seferbekov says he is not particularly worried about deepfakes for now.
"At the moment, their malicious use, if any, is very low," says Seferbekov. However, he suspects that improved approaches to machine learning could change this. "They could have some impact in the future, just like today's fake news." Seferbekov's algorithm will be open source so others can use it.
Cat and mouse
Catching deepfakes with AI is something of a cat-and-mouse game. A detection algorithm can be trained to spot deepfakes, but an algorithm that generates fakes can then potentially be trained to evade that detector. Schroepfer says this raised some concerns about releasing the code from the project, but Facebook concluded the risk was worth it to get more people involved in the effort.
According to Schroepfer, Facebook already uses technology to automatically detect some deepfakes, though the company declined to say how many deepfake videos have been flagged this way. Part of the difficulty in automating deepfake detection, Schroepfer says, is that some clips are merely entertaining while others could do real harm. In other words, as with other forms of misinformation, context matters. And context is difficult for a machine to grasp.
UC Berkeley's Farid said building a truly useful deepfake detector may be even harder than the competition suggests, because new techniques are emerging rapidly, and a malicious deepfake maker may work hard to outsmart any particular detector.
Farid questions the value of such a project if Facebook remains reluctant to police the content its users upload. "If Mark Zuckerberg says we're not the arbiters of truth, then why are we doing this?" he asks.
Even if Facebook's policies change, Farid says the social media company faces more pressing challenges when it comes to misinformation. "While deepfakes are an emerging threat, I would encourage you not to be too distracted by them," says Farid. "We don't need it yet. The simple stuff works."
This story originally appeared on wired.com.