Explainable AI (XAI) in Rules as Code (RaC): The DataLex approach

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Computer Law & Security Review, 2023, 48, 105771
Issue Date:
2023-04-01
Abstract:
The need for explainability in implementations of ‘Rules as Code’ (RaC) has similarities to the concept of ‘Explainable AI’ (XAI). Explainability is also necessary to avoid RaC being controlled or monopolised by governments and big business. We identify the following desirable features of ‘explainability’ relevant to RaC: transparency (in various forms); traceability; availability; sustainability; links to legal sources; and accountability. Where RaC applications are used to develop automated decision-making systems, some forms of explainability are increasingly likely to be required by law. We then assess how AustLII's DataLex environment implements ‘explainability’ when used to develop RaC: in open software and codebases; in development and maintenance methodologies; and in explanatory features when codebases are executed. All of these XAI aspects of DataLex's RaC are consistent with keeping legislation in the public domain no matter how it is encoded.