Construal Level Theory (CLT) for designing explanation interfaces in operational contexts

Roberto Venditti, Nareg Minaskan Karabid, Evmorfia Biliri, Miguel Villegas, Barry Kirwan, Carl Westin, Jekaterina Basjuka, Simone Pozzi
In: Human Interaction and Emerging Technologies: Artificial Intelligence and Future Applications. International Conference on Applied Human Factors and Ergonomics (AHFE-2025), 13th International Conference on Human Interaction & Emerging Technologies: Artificial Intelligence & Future Applications, located at IHIET-AI 2025, AHFE Open Access, 3/2025.

Abstract:
Explainability is essential for fostering trust, transparency, and effective Human-AI Teaming (HAT) in high-stakes operational contexts where humans interact with complex AI systems. This paper presents the application of Construal Level Theory (CLT), a psychological framework, to the design of explainability interfaces in safety-critical contexts where the quantity of information and the time required to process it are critical factors. CLT was originally developed to explain how individuals mentally construe objects and events at different levels of abstraction depending on psychological distance (temporal, spatial, or social). It has since been applied to user interface design, where it serves as a theoretical framework for structuring information retrieval systems so that users can progressively query data at different levels of abstraction. Building on this foundational work, our contribution extends CLT to the design of explanation interfaces tailored to operators of AI systems in six aviation use cases, spanning cockpit, air traffic control tower, and airport operations. The CLT framework addresses key explainability questions in such systems: What information should be presented? When should it be shown? For how long? And at what level of detail? This paper outlines the design methodology and demonstrates its application in one use case, where an Intelligent Sequence Assistant (ISA) is being developed to support and enhance decision-making for air traffic controllers. ISA optimises runway utilisation at single-runway airports, providing real-time sequence suggestions for arriving and departing aircraft. These operational suggestions are accompanied by text-based explanations for all sequence changes, structured according to CLT at various levels of detail. Controllers can progressively query these explanations (e.g. by interacting with dedicated sections of the interface) to access the desired level of detail, build situational awareness, and understand the assistant's reasoning. While CLT provides a framework for structuring the information and the interaction with the system, it does not prescribe how the information should be visually presented on the Human-Machine Interface (HMI), leaving this decision to the designer.