Vital services such as communications, financial trading, health care, and transportation depend on sophisticated algorithms. Some rely on unpredictable artificial intelligence techniques, such as deep learning, that are increasingly embedded in complex software systems. As high-speed trading, medical devices, and autonomous aircraft become more widely used, stronger checks are needed to prevent failures. Design strategies that promote comprehensible, predictable, and controllable human-centered systems can increase safety and make failure investigations more effective. Social strategies that support human-centered independent oversight during planning, continuous monitoring during operation, and retrospective analysis after failures can play a powerful role in making systems more reliable and trustworthy. Clarifying responsibility for failures stimulates improved design thinking.
Ben Shneiderman is a Distinguished University Professor in the Department of Computer Science and the founding director (1983–2000) of the Human-Computer Interaction Laboratory at the University of Maryland, where he is also a member of the University of Maryland Institute for Advanced Computer Studies.
This event is cosponsored by the Harvard Data Science Initiative.