Quantifying the Uncertainty in Decisions Driven by AI and Deep Learning

June 2021

As a society, we are increasingly interacting with systems powered by AI and deep learning. These systems make appealing promises, such as helping to make communities safer (through predictive policing) and healthier (through the use of “smart” health systems). This has led to a “trust and do not verify” attitude towards these systems. Yet there is ample evidence that neural networks, the algorithms that power deep learning, are brittle: changing a single pixel in the image of a sick patient's organ can make the networks conclude that the patient is healthy. Neural networks also lack a framework for assessing the level of certainty in the predictions they make: they can say that a patient is likely to develop a disease with high probability, but they cannot say what the uncertainty around this probability is. Is it 0.95 +/- 0.05 or 0.95 +/- 0.5? The former is a “strong” prediction, while the latter is a rather weak one. By bringing together a group of researchers from statistics, computer science, neuroscience, engineering, and ethics, this exploratory seminar will develop a framework for holding AI/deep-learning systems accountable, and for beginning to understand how these systems make their decisions and when we can trust them.
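The contrast between a prediction of 0.95 +/- 0.05 and 0.95 +/- 0.5 can be made concrete with a simple ensemble-style estimate: ask several independently trained models (or several stochastic forward passes of one model) for the same prediction and report the spread of their answers. The sketch below is illustrative only and is not from the seminar itself; the numbers are made up, and it uses plain NumPy to stand in for real model outputs.

```python
# Illustrative sketch: attaching an uncertainty estimate to a single
# "probability of disease" prediction by looking at the spread across
# an ensemble of models. All numbers here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def summarize(member_probs: np.ndarray) -> tuple[float, float]:
    """Return the ensemble's mean prediction and its standard deviation."""
    return float(member_probs.mean()), float(member_probs.std())

# Scenario A: ten ensemble members agree closely -> small uncertainty,
# analogous to a "strong" prediction such as 0.95 +/- 0.05.
confident = np.clip(rng.normal(loc=0.95, scale=0.02, size=10), 0.0, 1.0)

# Scenario B: ten ensemble members disagree widely -> large uncertainty,
# analogous to a weak prediction whose mean alone looks reassuring.
uncertain = np.clip(rng.normal(loc=0.75, scale=0.25, size=10), 0.0, 1.0)

for name, probs in [("confident ensemble", confident),
                    ("uncertain ensemble", uncertain)]:
    mean, std = summarize(probs)
    print(f"{name}: p(disease) = {mean:.2f} +/- {std:.2f}")
```

In a real system the ensemble members would be separately trained networks or Monte Carlo dropout samples rather than random draws, but the reporting step is the same: a point prediction only becomes trustworthy once it comes with a credible measure of its spread.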

Discipline: