Lisa Wehden is an investor at Bloomberg Beta, a VC fund focused on the future of work. Before that, she launched Entrepreneur First in Berlin.
Next month, Apple and Google will introduce features that enable contact tracing on iOS and Android, identifying people who have been in contact with someone who has tested positive for the novel coronavirus.
Security experts have quickly identified potential threats, including privacy risks such as revealing the identities of COVID-19-positive users, enabling tracking by advertisers, or trolls triggering false alarms.
These are new instances of familiar debates in technology ethics. How should technologists weigh the trade-off between the immediate need for public health surveillance and individual privacy? Or between misinformation and freedom of speech? Facebook and other platforms are playing a far more active role than ever before in assessing information quality: they promote official information sources prominently and remove some posts from users who oppose social distancing.
As the pandemic spreads and the race to develop new technologies accelerates, it is more important than ever that technologists find ways to engage seriously with these questions. Today, they are poorly equipped for the challenge: they need to strike a sound balance between competing concerns – such as privacy and security – while also explaining their positions to the public.
In recent years, academics have worked to give students opportunities to grapple with the ethical dilemmas technology poses. Last year, Stanford announced a new (and now popular) undergraduate course on ethics, public policy, and technological change, taught by faculty from philosophy, political science, and computer science. Harvard, MIT, UT Austin and others teach similar courses.
But if the only students are future technologists, the solutions will lag behind the problems. If we want a more ethical tech industry today, we need ethics education for tech practitioners, not just students.
To extend this teaching to practitioners, our venture fund, Bloomberg Beta, agreed to host the same Stanford faculty for an experiment. Building on their undergraduate course, could we create an educational experience for senior professionals working across the tech industry? We adapted the content (drawing on real workplace dilemmas), the structure, and the location, and created a six-week evening course in San Francisco. Within a week of announcing the course, we received twice as many applications as we could accept.
We selected a group of students diverse in every way we could, all holding positions of responsibility in tech. They told us that when they faced an ethical dilemma at work, they lacked a community to turn to – some confided in friends or family, others said they searched for answers online. Many were afraid to speak freely inside their companies. Despite multiple company-led ethics initiatives, including high-profile ethics appointments and Microsoft's and IBM's principles for ethical AI, the students in our class told us they had no space for open, honest discussions about the conduct of technologists.
Like undergraduates, our participants wanted to learn from both academics and industry leaders. Each week we were joined by experts such as Marietje Schaake, a former Member of the European Parliament from the Netherlands, who discussed real issues ranging from privacy to political advertising. The professors facilitated the discussions and encouraged our students to debate multiple, often conflicting, views with our expert guests.
Over half of the class came from a STEM background and had missed out on explicit training in ethical frameworks. Our class discussed principles from other fields such as medical ethics, including the physician's guiding principle ("first, do no harm") as it applies to developing new algorithms. Texts from science fiction, such as "The Ones Who Walk Away from Omelas" by Ursula K. Le Guin, also offered ways into these problems and prompted students to evaluate how data can be collected and used responsibly.
The answers to the value-laden questions we examined (such as the trade-off between misinformation and freedom of speech) did not converge on clear "right" or "wrong" answers. Instead, participants told us the discussions were critical for developing skills to examine their own biases more effectively and make informed decisions. One student said:
After working through a series of questions, thought experiments, or discussion topics with the professors, and thinking deeply about each one, I often arrived at positions opposite to what I had originally believed.
When shelter-in-place meant the class could no longer meet in person, participants moved to virtual meetings within a week – and wanted a forum to discuss real-time events with their peers in a structured setting. After our first virtual session, which examined how governments, technology companies, and individuals responded to COVID-19, one participant remarked, "It feels so much better to talk about what we can do, what we should do, and what we must do."
Technologists appear eager for ethics education – the task now is to offer more opportunities. We plan to hold another course this year and are exploring ways to offer an online version and publish the materials.
COVID-19 will not be the last crisis in which we turn to technology for solutions and need them immediately. If we want deeper discussions about the conduct of technologists, and if we want the people making decisions to enter these crises prepared to reason ethically, we need to train the people who work in technology to think ethically.
To allow students to explore opposing, uncomfortable viewpoints and share personal experiences, class discussions were kept confidential. I received explicit permission to share the student insights quoted here.