When humans and autonomous systems share control of a vehicle, there will be some explaining to do. When the autonomous system takes over suddenly, the driver will ask why. When an accident happens in a car that is co-driven by a person and a machine, police officials, insurance companies, and the people who are harmed will want to know who or what is accountable for the accident. Control systems in the vehicle should be able to give an accurate, unambiguous accounting of the events. Explanations will have to be simple enough for users to understand even when they are subject to cognitive distractions. At the same time, given the need for legal accountability and technical integrity, these systems will have to support their basic explanations with rigorous and reliable detail. In the case of hybrid human-machine systems, we will want to know how the human and mechanical parts contributed to final results such as accidents or other unwanted behaviors.
The ability to provide coherent explanations of complex behavior is also important in the design and debugging of such systems, and it is essential if we are all to have confidence in the competence and integrity of our automatic helpers. But the mechanisms for merging measurements with qualitative models can also enable more sophisticated control strategies than are currently feasible.
Our research explores the development of methodology and supporting technology for combining qualitative and semi-quantitative models with measured data to produce concise, understandable symbolic explanations of the actions taken by a system built out of many, possibly autonomous, parts (including the human operator).
We have developed a two-step process that explains what happened in a particular vehicle CAN bus log and why those events happened. In the first step, we take a CAN bus log as input and analyze what happened during a particular car trip. This analysis includes smoothing noisy data, performing edge detection to find when particular events occurred (e.g., when the operator applied the brakes), and interval analysis to see how particular intervals relate to each other (e.g., did the car slow after the brakes were applied?). Using this analysis, we can construct a story of what happened in a particular car trip and detect events of interest (e.g., abrupt changes in speed and braking, and dangerous maneuvers like skids).

In the second step, we take a particular event of interest identified in step 1 and explain why it happened. We have developed three different models that explain vehicle physics in a human-readable form. The first is a model of the car's internals, which explains the process by which individual components of the car affect other components. The second is a purely qualitative physical model of the car, which explains vehicle actions using qualitative terms like increasing, decreasing, no change, and unknown change. While this model is easy for humans to understand, it lacks the level of detail needed to explain more sophisticated actions like skids. The third is a semi-quantitative model of car physics based on geometry. This model infers the overall effect on the normal and frictional forces at the wheels from the lateral and longitudinal acceleration reported during a particular interval; these effects and their consequences are then explained qualitatively to the user.
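The sketch below illustrates the flavor of both steps, assuming CAN signals have already been decoded into arrays of timestamps and values. The signal names, thresholds, window sizes, and the qualitative combination table are illustrative assumptions, not the project's actual code.

```python
# A minimal sketch of the two-step analysis described above (assumed inputs:
# decoded CAN signals as numpy arrays of timestamps and values).
import numpy as np

# ---- Step 1: smoothing, edge detection, interval analysis ----
def smooth(values, window=5):
    """Moving-average smoothing of a noisy CAN signal."""
    return np.convolve(values, np.ones(window) / window, mode="same")

def rising_edges(values, threshold):
    """Indices where a signal first crosses `threshold` upward (e.g., brakes applied)."""
    above = values >= threshold
    return np.where(~above[:-1] & above[1:])[0] + 1

def slowed_after(times, speed, t_event, horizon=2.0):
    """Interval relation: did the speed decrease within `horizon` seconds of the event?"""
    window = speed[(times >= t_event) & (times <= t_event + horizon)]
    return len(window) > 1 and window[-1] < window[0]

# ---- Step 2: a fragment of a qualitative vocabulary ----
# Combining two qualitative influences acting on the same quantity.
QUALITATIVE_SUM = {
    ("increasing", "increasing"): "increasing",
    ("decreasing", "decreasing"): "decreasing",
    ("increasing", "decreasing"): "unknown change",
    ("decreasing", "increasing"): "unknown change",
}

def combine(a, b):
    if a == "no change":
        return b
    if b == "no change":
        return a
    return QUALITATIVE_SUM[(a, b)]

# Toy trace: the car cruises at 20 m/s, then brakes at t = 5 s and slows down.
times = np.linspace(0, 10, 501)
speed = smooth(np.where(times < 5, 20.0, 20.0 - 4.0 * (times - 5)))
brake = np.where(times >= 5, 80.0, 0.0)          # brake pedal position, percent

for i in rising_edges(brake, threshold=50.0):
    t = times[i]
    print(f"brakes applied at t={t:.2f}s; "
          f"car slowed afterwards: {slowed_after(times, speed, t)}")
print("braking force + engine drag on speed:", combine("decreasing", "decreasing"))
```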
We are currently working on reasonableness monitors: a methodological tool for interpreting and detecting inconsistent machine behavior by imposing constraints of reasonableness. Reasonableness monitors are implemented as two types of interfaces around the subsystems of a complex machine. Local monitors check the behavior of a specific subsystem, while non-local monitors watch the behavior of multiple subsystems working together: neighborhoods of interconnected subsystems that share a common task. The non-local monitor checks that the subsystems in a neighborhood are cooperating as expected. We demonstrate the effectiveness of reasonableness monitors in an example application: explaining a perceptual mechanism. Each monitor requires a knowledge base of reasonable concepts. We defend this methodology as a step toward tackling global inconsistencies with locally consistent subsets of information, and we present it as a design criterion for safe, intelligent, consistent machines.
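The sketch below shows one way the local/non-local structure might be organized. The subsystem names, claims, and contradiction rules are made-up placeholders for illustration; this is a structural sketch under those assumptions, not the project's implementation.

```python
# A minimal structural sketch of local and neighborhood reasonableness monitors.
from dataclasses import dataclass

@dataclass
class Report:
    subsystem: str
    claim: str   # symbolic description of what the subsystem says it perceived or decided

class LocalMonitor:
    """Checks one subsystem's output against its knowledge base of reasonable concepts."""
    def __init__(self, subsystem, reasonable_claims):
        self.subsystem = subsystem
        self.reasonable_claims = reasonable_claims

    def check(self, report):
        ok = report.claim in self.reasonable_claims
        return ok, f"{self.subsystem}: '{report.claim}' is {'reasonable' if ok else 'NOT reasonable'}"

class NeighborhoodMonitor:
    """Checks that subsystems sharing a common task produce mutually consistent claims."""
    def __init__(self, contradictions):
        # Pairs of claims that cannot both hold in a cooperating neighborhood.
        self.contradictions = contradictions

    def check(self, reports):
        claims = {r.claim for r in reports}
        for a, b in self.contradictions:
            if a in claims and b in claims:
                return False, f"inconsistent neighborhood: '{a}' conflicts with '{b}'"
        return True, "neighborhood is cooperating as expected"

# Example: a vision subsystem and a planner subsystem sharing the driving task.
vision_monitor = LocalMonitor("vision", {"pedestrian ahead", "clear road"})
planner_monitor = LocalMonitor("planner", {"brake", "maintain speed"})
neighborhood = NeighborhoodMonitor(contradictions=[("pedestrian ahead", "maintain speed")])

reports = [Report("vision", "pedestrian ahead"), Report("planner", "maintain speed")]
for monitor, report in zip((vision_monitor, planner_monitor), reports):
    print(monitor.check(report)[1])
print(neighborhood.check(reports)[1])
```

Here each report passes its local check, but the neighborhood monitor flags the combination as inconsistent, which is the kind of global inconsistency the methodology is meant to surface from locally consistent pieces.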
A continuation of the project by Gerald Sussman, Hal Abelson, Lalana Kagal, and Daniel Weitzner.
Publications:
- L. H. Gilpin, “Anomaly Detection through Explanations,” PhD thesis, MIT, 2020.
- L. H. Gilpin, “Monitoring Opaque Learning Systems,” presented at the ICLR 2019 Debugging Machine Learning Models Workshop, 2019 [Online]. Available: https://debug-ml-iclr2019.github.io/cameraready/DebugML-19_paper_25.pdf
- L. H. Gilpin, T. Chen, and L. S. Kagal, “Learning from Explanations for Robust Autonomous Driving,” 2019 [Online]. Available: https://sites.google.com/view/icml2019aiad/accepted-abstracts-and-papers
- L. H. Gilpin and L. Kagal, “A Self-Monitoring Framework for Opaque Machines,” in AAMAS 2019, 2019 [Online]. Available: http://www.ifaamas.org/Proceedings/aamas2019/pdfs/p1982.pdf
- L. H. Gilpin, C. Testart, N. Fruchter, and J. Adebayo, “Explaining Explanations to Society,” in NIPS 2018, 2018 [Online]. Available: https://arxiv.org/abs/1901.06560
- L. H. Gilpin, D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal, “Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning,” in 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 2018 [Online]. Available: https://doi.org/10.1109/DSAA.2018.00018
- L. H. Gilpin, J. C. Macbeth, and E. Florentine, “Monitoring Scene Understanders with Conceptual Primitive Decomposition and Commonsense Knowledge,” Advances in Cognitive Systems, vol. 6, pp. 1–20, Aug. 2018 [Online]. Available: http://www.cogsys.org/papers/ACSvol6/article01.pdf
- L. H. Gilpin, C. Zaman, D. Olson, and B. Z. Yuan, “Reasonable Perception: Connecting Vision and Language Systems for Validating Scene Descriptions,” in The 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’18), 2018, pp. 115–116 [Online]. Available: https://doi.org/10.1145/3173386.3176994
- L. H. Gilpin, “Reasonableness Monitors,” in The 23rd AAAI/SIGAI Doctoral Consortium (DC) at AAAI-18, 2018 [Online]. Available: https://aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17361
- L. H. Gilpin and B. Z. Yuan, “Getting Up to Speed on Vehicle Intelligence,” in 2017 AAAI Spring Symposium Series, Palo Alto, CA USA, 2017 [Online]. Available: https://aaai.org/ocs/index.php/SSS/SSS17/paper/view/15322/14599