Stefano Soatto

University of California, Los Angeles

Stefano Soatto is a professor at the University of California, Los Angeles, founding director of the UCLA Vision Lab, and a Vice President at AWS. He received his Ph.D. from the California Institute of Technology. His general research interests are in computer vision and nonlinear estimation and control theory. In particular, he is interested in ways for computers to use sensory information (e.g., vision, sound, touch) to interact with humans and the environment.

Keynote abstract: Can a Neural Network Understand What’s Real?

The recent AI discourse features concerns about the safety and trustworthiness of large-scale trained models: What do they understand? Can we trust their answers? Can they tell real from fake? Can we? Can we prevent hallucination, or at least detect it? Many of these questions are debated without a quantifiable characterization of the terms involved, which makes the discourse muddled and partly contradictory. I will define and describe how modern large-scale generative models represent abstract concepts and meanings. I will then show that “reality”, as we ordinarily conceive of it, is itself an abstract concept, and that fundamental limitations therefore apply to what large-scale trained models (as well as humans) can “understand”. Specifically, certain classes of large-scale models (not including feed-forward maps such as CNNs, MLPs, and NeRFs) can represent abstract concepts. They can also “understand” them, in the sense that they can arrive at a valid abstraction by processing a finite amount of data in finite time with finite resources; however, an external agent cannot determine when that has happened. I will also show that the ability to “hallucinate”, once suitably defined, is tightly related to a model’s ability to represent abstract concepts, and that there are fundamental limits on an external agent’s ability to assess the uncertainty, confidence, and truthfulness of large-scale trained models, and ultimately to trust them.