Generating Purpose-Driven Explanations: The Case of Process Predictive Model Inspection

Publisher:
Springer
Publication Type:
Conference Proceeding
Citation:
Lecture Notes in Business Information Processing, 2022, vol. 452, pp. 120–129
Issue Date:
2022-01-01
File:
978-3-031-07481-3_14.pdf (Published version, Adobe PDF, 3.06 MB)
Abstract:
Explainable AI is an emerging branch of data science that focuses on demystifying the complex computational logic of machine learning models, with the aim of improving the transparency, validity and trustworthiness of automated decisions. While existing research focuses on building methods and techniques to explain ‘black-box’ models, little attention has been paid to how model explanations are generated. Effective model explanations are often driven by the purpose of the explanation in a given problem context. In this paper, we propose a framework for generating model explanations for the purpose of model inspection in the context of predictive process analytics. We build a visual explanation platform as an implementation of the proposed framework for inspecting and analysing a process predictive model, and we demonstrate the applicability of the framework through a real-life case study of a loan application process.