Research on transparent and explainable artificial intelligence (XAI) is important for fair and ethical AI and has been defined as one of the “grand challenges” for HCI (Stephanidis et al., 2019). Our lab contributes to this research by exploring how human-centered AI explanations affect people. We focus on the psychological construct of trust and how it is influenced by AI explanations. Depending on whether trust is operationalized and measured as an attitude or as a behavior, transparency appears to have different ramifications. Because AI is increasingly used to make critical decisions with far-reaching implications, meaningful evaluations of such systems require agreed-upon constructs and appropriate measurements. Furthermore, we are interested in how AI can counteract biased and suboptimal human decision-making and help us reach better decisions.
- Scharowski, N., Perrig, S. A. C., von Felten, N., & Brühlmann, F. (2022). Trust and Reliance in XAI – Distinguishing Between Attitudinal and Behavioral Measures. CHI 2022 TRAIT Workshop on Trust and Reliance in AI-Human Teams. https://arxiv.org/abs/2203.12318
- Master's project: possible
- Contact: e-mail