Explanation-Oriented Programming
The goal of this project is to investigate the concept of explanation and to derive from it principles for explanation-oriented programming, which can be applied in three major ways.

First, we can design domain-specific languages to build explanations for specific domains that are traditionally hard to understand.

Second, we can identify "explainability" as a language design criterion in the sense of the cognitive dimensions framework. The language designer can then apply this principle when designing basic language structures and constructs, in a way that leads to programs with a higher degree of explainability.

Third, we can rethink the objective in the design of general-purpose languages. Currently, the purpose of a program is to compute a value or an effect. Whenever a program fails to meet the expectations of a user, the questions are "Why did this happen?" and "What went wrong?". In such a situation we typically have to resort to debuggers to understand how the value or effect was produced, which is often a very tedious process. The idea of explanation-oriented programming is to design languages so that the language constructs produce not only values, but also explanations of how and why those values are obtained.
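As a minimal sketch of this third idea (a hypothetical illustration, not one of the project's actual languages), consider an expression evaluator whose semantics pairs every result with an explanation of the steps that produced it, so that "why is the answer 6?" can be answered without a debugger:

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

# Hypothetical expression language: literals and addition only.
@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

Expr = Union[Lit, Add]

def evaluate(e: Expr) -> Tuple[int, List[str]]:
    """Evaluate an expression, returning its value together with an
    explanation: a list of human-readable steps showing how the value arose."""
    if isinstance(e, Lit):
        return e.value, [f"literal {e.value}"]
    if isinstance(e, Add):
        lv, lsteps = evaluate(e.left)
        rv, rsteps = evaluate(e.right)
        v = lv + rv
        # The explanation of a compound value combines the explanations
        # of its parts, plus the step that produced this value.
        return v, lsteps + rsteps + [f"{lv} + {rv} = {v}"]
    raise TypeError(f"unknown expression: {e!r}")

value, steps = evaluate(Add(Lit(1), Add(Lit(2), Lit(3))))
# value is 6; steps records every literal and every addition performed.
```

Here the explanation is threaded through evaluation alongside the value (in the style of a writer monad), so it comes "for free" with every result rather than being reconstructed after the fact.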
- Causal Reasoning with Neuron Diagrams
- Visual Explanations of Probabilistic Reasoning
- A DSL for Explaining Probabilistic Reasoning
- A Visual Language for Representing and Explaining Strategies in Game Theory