Inclusive and Explainable AI Systems: A Systematic Literature Review
- Publisher: AIS
- Publication Type: Conference Proceeding
- Citation: Proceedings of the Annual Hawaii International Conference on System Sciences, 2024, pp. 1297-1306
- Issue Date: 2024-01-01
This item is open access.
Explainable AI (XAI) plays a crucial role in enhancing transparency and providing rational explanations to support users of AI systems. Inclusive AI actively seeks to engage and represent individuals with diverse attributes who are affected by and contribute to the AI ecosystem. Both inclusion and XAI advocate for the active involvement of users and stakeholders throughout the entire AI system lifecycle. However, the relationship between XAI and Inclusive AI has not yet been explored. In this paper, we present the results of a systematic literature review with the objective of exploring this relationship in the recent AI research literature. We identified 18 research articles on the topic. Our analysis focused on approaches to (1) human attributes and perspectives, (2) preferred explanation methods, and (3) human-AI interaction. Based on our findings, we identify potential future XAI research directions and propose strategies for practitioners involved in the design and development of inclusive AI systems.