Philip N. Howard is the director of the Oxford Internet Institute and author of the forthcoming "Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives".
Lisa-Maria Neudert is a researcher at the Computational Propaganda Project at the Oxford Internet Institute, University of Oxford.
South Korea has successfully slowed the spread of the coronavirus. Alongside widespread quarantine measures and testing, the country's innovative use of technology is seen as a critical factor in containing the disease. While Europe and the United States struggle to cope, many governments are now turning to AI tools for the long term, both to advance medical research and to manage public health: technical solutions for contact tracing, symptom tracking, immunity certificates, and other applications are on the way. These technologies are certainly promising, but they must be implemented in a way that does not undermine human rights.
Seoul has collected citizens' personal information extensively and intrusively, analyzing millions of data points from credit card transactions, CCTV footage, and cell phone geolocation data. South Korea's Ministry of the Interior and Safety even developed a smartphone app that shares quarantined individuals' GPS data with officials. When people in quarantine cross the "electronic fence" of their assigned area, the app notifies the authorities. The implications of such widespread surveillance for privacy and security are deeply worrying.
South Korea is not alone in using personal data for containment efforts. China, Iran, Israel, Italy, Poland, Singapore, Taiwan, and others have used location data from mobile phones in various applications to combat the coronavirus. Combined with artificial intelligence and machine learning, this data can be used not only for social control and surveillance but also to predict travel patterns, identify future outbreaks, model infection chains, and project immunity.
The implications for human rights and data protection go far beyond containing COVID-19. Introduced as short-term responses to the immediate coronavirus threat, widespread data sharing, monitoring, and surveillance could become fixtures of modern public life. Temporary applications can become normalized under the guise of protecting citizens from future public health emergencies. At a minimum, governments' rush to deploy immature technologies, and in some cases to legally oblige citizens to use them, has set a dangerous precedent.
Yet such data- and AI-driven applications could be genuinely useful in the fight against the coronavirus, and personal data, anonymized and de-identified, can provide valuable insights as governments navigate this unprecedented public health emergency. The White House is reportedly in active talks with a range of technology companies about using aggregated, anonymized location data from mobile phones. The UK government is discussing the use of location and usage data with mobile operators. And even Germany, normally a champion of data rights, has introduced a controversial app that uses geolocation data from fitness trackers and smartwatches to map the geographical spread of the virus.
Big Tech is also rushing to the rescue. Google provides Community Mobility Reports for more than 140 countries, offering insight into mobility trends in areas such as retail and leisure, workplaces, and residential neighborhoods. Apple and Google are collaborating on contact tracing technology and have just released a developer toolkit with an API. Facebook is rolling out "local alerts" that let local governments, emergency services, and law enforcement communicate with citizens based on their location.
Data revealing citizens' health and geolocation is obviously as personal as data gets. The potential benefits of these applications are substantial, but so are concerns about their misuse and abuse. Data protection measures exist (the European GDPR is perhaps the most advanced), but in times of national emergency, governments have the right to make exceptions. And frameworks for the lawful and ethical use of AI in democracies are far less developed, if they exist at all.
Many applications are on offer for governments to enforce social controls, predict outbreaks, and track infections, some more promising than others. Contact tracing apps are currently the focus of government interest in Europe and the United States. Decentralized privacy-preserving proximity tracing, or "DP-3T", approaches that use Bluetooth may provide a secure, decentralized protocol under which users consent to sharing data with public health authorities. The European Commission has already published guidelines for contact tracing applications that favor such decentralized approaches. Centralized or not, EU member states must comply with the GDPR when implementing such tools.
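The decentralized idea behind such proximity tracing can be sketched in a few lines of Python: each phone derives rotating ephemeral IDs from a secret daily key and broadcasts them over Bluetooth, records the IDs it overhears, and, if a user tests positive, only that user's daily keys are published, so exposure matching happens entirely on-device. This is a simplified illustration under assumed names, not the actual DP-3T specification (which uses AES-based key derivation rather than SHA-256).

```python
import hashlib
import secrets

def daily_ephemeral_ids(day_key: bytes, n: int = 96) -> list:
    """Derive a day's worth of rotating ephemeral IDs from a secret day key.
    (DP-3T itself uses AES in counter mode; SHA-256 keeps this sketch short.)"""
    return [hashlib.sha256(day_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(n)]

# Each phone holds its own secret day key and broadcasts the derived IDs.
alice_key = secrets.token_bytes(32)
bob_key = secrets.token_bytes(32)

# Bob's phone records the ephemeral IDs it overhears nearby.
heard_by_bob = set(daily_ephemeral_ids(alice_key)[10:20])

# If Alice tests positive, she uploads only her day key to the health authority.
published_keys = [alice_key]

# Bob's phone re-derives IDs from the published keys locally and checks for a
# match; no location data or identity ever leaves the device.
exposed = any(eid in heard_by_bob
              for key in published_keys
              for eid in daily_ephemeral_ids(key))
print(exposed)  # prints True: Bob overheard an infected user's broadcasts
```

The privacy property worth noting is that the server only ever sees the keys of users who test positive and choose to upload them; everyone else's contact history stays on their own phone.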
Austria, Italy, and Switzerland have announced that they will use the decentralized frameworks developed by Apple and Google. Germany recently abandoned plans for a centralized app and opted instead for a decentralized solution, following sustained public debate and stern warnings from data protection experts. France and Norway, however, are using centralized systems in which sensitive personal data is stored on a central server.
The UK government has also experimented with a centralized approach: an app built by the National Health Service's digital unit, NHSX, currently being trialed on the Isle of Wight, lets health officials contact potentially infected people directly and personally. So far, it remains unclear how the collected data will be used and whether it will be combined with other data sources. Under the applicable rules, the UK is still obliged to comply with the GDPR until the end of the Brexit transition period in December 2020.
Beyond government efforts, there has been a worrying proliferation of contact tracing apps, websites, and other forms of outbreak control that urge citizens to volunteer their personal information but offer few, if any, privacy and security safeguards. Though certainly well-intentioned, these tools often come from hobbyist developers and amateur hackathons.
Sorting the wheat from the chaff is no easy task, and our governments are most likely not equipped to do it. Artificial intelligence, and especially its use by public bodies in governance, is still new. Local regulators struggle to assess the legitimacy and far-reaching impact of different AI systems on democratic values. In the absence of adequate procurement guidelines and legal frameworks, governments are ill-prepared to make these decisions now, when they are needed most.
Worse, once AI-driven applications are rolled out, they will be difficult to roll back, much like the heightened airport security measures introduced after September 11. Governments could argue that they need continued data access to avert a second wave of the coronavirus, or the next looming pandemic.
Regulators are unlikely to produce AI-specific rules during the coronavirus crisis. So we should at least make one pact: all AI applications designed to address this public health crisis must ultimately become public applications, with their data and algorithms, inputs and outputs, held by public health researchers and public science agencies for the common good. Invoking the coronavirus pandemic as cover for violating data protection standards, or as a pretext for stripping the public of its valuable data, must not be permitted.
We all want sophisticated AI to help deliver a medical cure and an effective public health response. The short-term privacy and human rights risks of AI are likely to be discounted while human lives are being lost. But once the coronavirus is under control, we will want our privacy and our rights restored. If governments and businesses in democracies are to tackle this problem and keep institutions strong, we must all be able to see how the apps work, public health data must end up with medical researchers, and we must be able to audit tracking systems and to deactivate them. In the long term, AI must support good governance.
The coronavirus pandemic is a grave public health emergency that will profoundly shape governance in the decades to come. It also shines a harsh spotlight on gaping shortcomings in our current systems. AI now offers some powerful applications, but our governments are ill-prepared to ensure they are used democratically. Given the extraordinary impact of a global pandemic, quick-and-dirty policymaking is hardly enough to ensure good governance, yet it may be the best option we have.