What's Missing in Requirements Engineering for Responsible AI?
- Publisher:
- IEEE COMPUTER SOC
- Publication Type:
- Journal Article
- Citation:
- IEEE Software, 2023, 40, (6), pp. 11-15
- Issue Date:
- 2023-11-01
Closed Access
Filename | Description | Size
---|---|---
Whats_Missing_in_Requirements_Engineering_for_Responsible_AI.pdf | Published version | 2.63 MB
This item is closed access and not available.
The rapid evolution of artificial intelligence (AI) has catalyzed a multifaceted discourse in the software engineering (SE) community. The crux of this dialogue is to pinpoint the distinct attributes of AI systems that necessitate tailored SE methodologies. While classical SE techniques have proved effective across a spectrum of systems, there is an emerging consensus: AI introduces distinct challenges, compelling us to rethink some foundational principles of traditional SE [1]. Central to AI systems is the imperative to design models, curate training datasets, govern system autonomy, and embed ethical guidelines. Two salient features of AI operations are continuous learning from evolving datasets and human feedback, and the increased uncertainty and risk introduced by system autonomy.