This interview is part of a cross-disciplinary series examining the real and possible effects of the COVID-19 crisis.
Fran Berman is the 2019–2020 Katherine Hampson Bessell Fellow at the Radcliffe Institute and the Edward P. Hamilton Distinguished Professor in Computer Science at Rensselaer Polytechnic Institute. As a fellow, she focused on social, environmental, and privacy concerns related to the Internet of Things.
Right now, the world is relying on digital technologies more than ever. As a data scientist, what are you seeing and hearing? What should others be paying attention to?
The COVID-19 pandemic has thrust many of our activities into cyberspace. Digital technologies are enabling the “normalcy” of our quarantined lives: allowing us to hold online meetings, courses, family celebrations, ballet classes, and community gatherings (including Radcliffe fellow Zoom talks).
Our dependence on these technologies exacerbates the need for comprehensive US consumer privacy protections, better security standards, and a national discussion about when and how surveillance is acceptable. Pay attention to the wide-ranging posts and op-eds on this. They represent more than a foray into Zoom security or how private digital contact tracing apps should be. They are evidence of the need for a fundamental resolution of how digital technologies should be regulated and designed to promote the public interest.
Public scrutiny is particularly important when digital technologies serve as vehicles for contact tracing and determining health status. How much privacy should we give up, and how much surveillance should we allow to promote the public interest during a pandemic? Can we put the genie back in the bottle when it is over? Social controls and endgame scenarios for digital strategies must be part of current discussions.
In countries that are using digital contact tracing, the spread of COVID-19 seems to have slowed. While this can prevent people from getting sick, what are the consequences of going down this road, both short- and long-term?
Digital technologies have been an important part of public-health strategies for COVID-19 containment around the world. In evaluating their success, it’s important to recognize two things: 1) digital strategies are only effective in the context of sufficient COVID-19 testing, adequate resources, and other strategies—they are not stand-alone; and 2) digital strategies are most effective (and acceptable) when they align with the prevailing social and political environment.
China is a good example of this. The Chinese approach to COVID-19 containment is comprehensive and includes widespread testing, expansion of hospitals and triage facilities, a countrywide shutdown, and digital strategies. China repurposed its existing and extensive system of digital surveillance to restrict confirmed COVID-19 patients (and those who have been exposed to them) from public spaces and public transportation. This allowed the Chinese to keep confirmed and potential COVID-19 cases relatively isolated. The Chinese system utilized location and other personal information and relied on a closely interlinked public-private network of health, transportation, financial, and social media organizations, as well as government agencies.
The Chinese approach worked well for containment in a collectivist environment with extraordinary surveillance, government control, and social supervision. This approach is unlikely to work in the United States, where government surveillance is not as extensive and our cultural appetite for privacy is much greater. To get the benefits of contact tracing, the United States is exploring more privacy-promoting approaches that don’t rely on geolocation and personal information to identify those at risk.
Google and Apple recently announced that they would collaborate to create a new contact tracing system, using Bluetooth on our phones. How can such apps ensure that users’ privacy is protected? Do you think they can/will be successful in the United States?
The Google/Apple partnership is noteworthy in its focus on interoperability between competitors and on user privacy. Here’s how their approach works: Participating iPhones and Androids exchange anonymous Bluetooth keys between users who are in close proximity. Each user’s smartphone retains their own keys and the keys of those with whom they have been in close contact. If someone tests positive for COVID-19, they can choose to upload their keys to a cloud-based database. Participating users’ smartphones periodically check the database and determine locally whether the database’s keys match their smartphone’s keys of recent contacts. If there is a match—meaning the user may have been exposed to COVID-19—information on next steps is provided. The Bluetooth keys are anonymous, matches are done within the privacy of your smartphone, and the system is voluntary and opt-in. In this way, Apple and Google hope to promote privacy protections by design.
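The flow described above can be illustrated with a toy sketch. All of the names here are illustrative, and the model is heavily simplified: the actual Google/Apple protocol rotates keys daily, derives short-lived rolling identifiers from them, and filters matches by duration and signal strength. What the sketch shows is the privacy-by-design shape of the system: keys carry no identity or location, uploads are opt-in, and matching happens on the device rather than in the cloud.

```python
import secrets

class Phone:
    """Toy model of a handset in a decentralized exposure-notification scheme."""
    def __init__(self):
        self.own_keys = []        # anonymous keys this phone has broadcast
        self.heard_keys = set()   # keys received from nearby phones

    def broadcast_key(self):
        # A random token: it reveals nothing about the user's identity or location.
        key = secrets.token_hex(16)
        self.own_keys.append(key)
        return key

    def record_contact(self, key):
        self.heard_keys.add(key)

def encounter(a, b):
    """Two phones in close proximity exchange their current keys over Bluetooth."""
    b.record_contact(a.broadcast_key())
    a.record_contact(b.broadcast_key())

# Cloud database holding only the keys voluntarily uploaded by users who test positive.
positive_keys = set()

def report_positive(phone):
    positive_keys.update(phone.own_keys)   # opt-in upload of one's own keys

def check_exposure(phone):
    # Matching happens locally: the phone compares the published keys
    # against the contact keys stored on the device itself.
    return bool(phone.heard_keys & positive_keys)

alice, bob, carol = Phone(), Phone(), Phone()
encounter(alice, bob)                  # Alice and Bob were near each other
report_positive(bob)                   # Bob tests positive and opts in
print(check_exposure(alice))           # True: Alice may have been exposed
print(check_exposure(carol))           # False: Carol never met Bob
```

Note that the cloud database never learns who met whom; it sees only opaque tokens, and only the phones can connect those tokens to actual encounters.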
The Google/Apple design is promising, but like any technology, details matter for the system to work as planned: Distance between users, duration of proximity, and other model details need to be right. There needs to be sufficient uptake. There needs to be sufficient COVID-19 testing, so users know if they have the virus in the first place. Security and privacy controls must work as expected. It’s also important to know who will decide when the system is no longer needed, and what will happen then. If the system is repurposed, we need to make sure that privacy and security protections for the new uses continue to support the public interest. All of these will influence the success, usefulness, and acceptability of the new system.
This interview was edited for clarity and length.