Operationalizing Transparency in AI Systems through Progressive Disclosure in User Interface Design
Deepa Muralidhar
Abstract
Recent advances in artificial intelligence (AI) systems have resulted in their ability to provide precise recommendations in response to users’ questions (prompts). However, the black-box nature of AI models leaves users unaware of how conclusions are reached or decisions are made. AI models are pattern replicators: large language models produce outputs by computing over millions of pieces of data and are good at predicting the next word in a sequence. Yet AI experts, designers, and builders cannot explain why an AI system behaves in certain ways, makes biased decisions, or violates people’s privacy. As a result, users are unable to trust these systems. The concern is that since AI systems currently exhibit discriminatory behavior, they could, in the future, deceive even the experts. Transparency is therefore an urgent need: users need to be convinced that AI systems align with societal standards and moral values. According to the EU’s AI Act, transparency includes providing information about a system’s capabilities, data usage, and performance metrics, as well as allowing for human oversight and intervention when necessary. Explainability is important to building trust in AI, as it enables transparency in decision-making. In our research we focus on explainability of the outcome by asking the AI agent questions such as why, how, and what is at stake. Explainability is often described as the key to making AI systems understandable to humans, and explanations provide insight into how a system made a decision.

In the following chapters, this thesis introduces the idea that the view that explanations are essential to transparency tells only part of the story. To achieve transparency, AI systems should have user interfaces that are interactive and that follow established usability guidelines for software development. The goal of this thesis was to operationalize transparency. For our purposes, we narrowed the focus to transparency for the user through the user interface, specifically regarding the data fed into the AI system and its outcomes.

We considered two classes of systems for our study. The first is AI text generators, which produce textual content in response to a prompt (a sentence or two of text) given as input to the AI system. The second is AI clinical diagnosis support systems, which aim to provide personalized medical care. We designed prototypes of transparent user interfaces for these systems and, using semi-structured interviews and questionnaires, conducted user studies to gather user perspectives on a selectively transparent user interface. We used Notion.ai for the first study, with 30 participants, and Docus.ai for the second, with 50 participants; for the Docus.ai study we also interviewed 8 doctors and 8 medical students to gather their feedback on the prototypes. Both studies investigated the effect of progressive disclosure, and of adjusting explanations to fit users’ mental models, on the transparency of these AI systems. In both use cases we involved domain experts in the design of the prototypes. User feedback confirms that progressive disclosure is a workable way of building a transparent user interface for an AI system, and that both the choice and the order of the explanation techniques presented to the user are important.
Explanations should be presented to the user gradually and in small chunks, so that they bring the user’s mental model closer to the conceptual model of the system. To operationalize selective transparency, the designers of an AI system must understand the requirements of the domain in which the system functions. While designing a transparent user interface for Notion.ai, we learned that when providing explanations, the AI agent must be careful not to expose too much information, because an end user could game the system. While designing a similar interface for Docus.ai, the study results showed us that there were risks in giving the end user, a possible patient, focused and specific information about their ailments. The thesis elaborates on the studies and their findings, which provide valuable lessons.
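To make the progressive disclosure pattern described above concrete, the following is a minimal sketch in Python (requires 3.10+). It is not taken from the thesis prototypes: the class names (ExplanationLayer, ProgressiveDisclosure) and the example layer labels and explanation texts are hypothetical, chosen only to mirror the why/how/what-is-at-stake questions mentioned in the abstract.

    from dataclasses import dataclass


    @dataclass
    class ExplanationLayer:
        label: str   # short question the layer answers, e.g. "Why?"
        detail: str  # explanation text shown once the layer is revealed


    @dataclass
    class ProgressiveDisclosure:
        """Reveals explanation layers one at a time, only on user request."""
        layers: list[ExplanationLayer]
        revealed: int = 0  # number of layers currently visible

        def visible(self) -> list[ExplanationLayer]:
            # Only the layers the user has already asked for are shown.
            return self.layers[: self.revealed]

        def reveal_next(self) -> ExplanationLayer | None:
            # Disclose one more chunk; return None when nothing is left.
            if self.revealed < len(self.layers):
                self.revealed += 1
                return self.layers[self.revealed - 1]
            return None


    # Hypothetical layers for an AI text generator's suggestion:
    ui = ProgressiveDisclosure([
        ExplanationLayer("Why?", "The suggestion matches the topic of your prompt."),
        ExplanationLayer("How?", "The model ranked candidate continuations by likelihood."),
        ExplanationLayer("What is at stake?", "Generated text may contain factual errors."),
    ])

    print(ui.reveal_next().detail)  # first, smallest chunk of explanation
    print(ui.reveal_next().detail)  # more detail only when the user asks

Ordering the layers deliberately, as in this sketch, reflects the abstract’s finding that both the choice and the order of explanation techniques matter; withholding later layers until requested also illustrates the caution, noted above, against exposing too much information at once.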
