
Intimate Data: Ensuring Equity as Psychiatry Embraces Boundless Data and AI

May 2022

Benjamin Silverman, Harvard Medical School
Francis Shen, University of Minnesota Law School

Providing effective mental health interventions, especially for marginalized and underserved populations, remains a significant public health challenge. Built on decades of technological development, and spurred by the necessity of virtual visits during the COVID-19 pandemic, digital health promises a potentially groundbreaking solution to this long-standing problem by bringing quantitative, objective, and real-time data into psychiatric research and assessment.

The collection and analysis of potentially boundless data—24/7 data on location, movement, email and text communications, and social media, among others—combined with brain scans, genetics/genomics, neuropsychological batteries, and clinical interviews, and then analyzed with ever-evolving artificial intelligence (AI) algorithms, promises to produce precise and time-sensitive understanding of psychiatric illness.

Collection of this "intimate data" could pave the way for delivering individualized mental health services digitally, in real time, and in naturalistic settings.

Yet precisely because the data will reveal more than ever before, the use of computational phenotyping raises fundamental, and too often overlooked, questions of racial, gender, and socioeconomic equity, justice, and bias. We propose a Radcliffe Exploratory Seminar to examine these ethical issues from an interdisciplinary perspective. The seminar will convene experts in psychiatry, law, AI and racial bias, ethics, criminal justice, neuroscience, gender studies, political science, and related fields to address critical gaps in the current ethics and regulatory literature. It will focus on three routes by which social inequality could be exacerbated: unequal access to these tools, algorithmic bias, and coercive government and corporate use of digital health data to circumvent existing privacy protections.
