Addressing the potential social risk of self-reported data within a computational-intensive world

Publisher:
Springer Nature
Publication Type:
Journal Article
Citation:
SN Social Sciences, 2025, 5, (7), pp. 99
Issue Date:
2025-07-01
This paper addresses the potential social risk posed by the capability to infer sensitive opinions from large self-reported datasets in a computation-intensive world, in which AI is pervasively and inherently adopted as part of the resulting socio-technical system. Such a social risk should be framed across a variety of socio-political contexts, including non-democratic systems or, more generally, systems with significant shortcomings in terms of human rights. A simplified view of social risk is considered, proportional to the sensitivity of the information and to the prediction performance. The related computational experiments apply Machine Learning techniques (Neural Networks) to a pre-existing case study based on a subset of the popular World Values Survey. Although this use case is not explicitly designed to maximise prediction performance and is characterised by low dimensionality, the empirical results point to an overall notable capability to infer potentially sensitive information. Additionally, prediction accuracy proved to be proportional to the likelihood of the data changing over time. These results are discussed in context in the paper, looking holistically at the associated social risk as well as at possible practical implications. In a continuously evolving context, characterised by fast advances in AI technology in contrast with a lack of systematic frameworks for reasoning about risk, uncertainty, and their potentially catastrophic consequences, this study focuses on computational experimentation and case studies to further stimulate the convergence of analysis frameworks and to nurture awareness from both a social and a user perspective.
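The simplified risk view described in the abstract, in which social risk grows with both the sensitivity of the inferred information and the prediction performance of the model, can be sketched as a small scoring function. The attribute names, sensitivity values, and accuracy figures below are hypothetical illustrations, not results from the paper; the product form is one minimal reading of "proportional to the sensitivity of the information and the prediction performance".

```python
def social_risk(sensitivity: float, prediction_performance: float) -> float:
    """Simplified risk score: proportional to both factors, each in [0, 1].

    This is a sketch of the paper's simplified view, not its actual model.
    """
    if not (0.0 <= sensitivity <= 1.0 and 0.0 <= prediction_performance <= 1.0):
        raise ValueError("both factors are expected in [0, 1]")
    return sensitivity * prediction_performance


# Hypothetical inferred attributes: (assumed sensitivity, assumed model accuracy)
attributes = {
    "political_opinion": (0.9, 0.75),
    "consumer_preference": (0.2, 0.85),
}

# Rank attributes by the resulting risk score, highest first
ranked = sorted(
    ((name, social_risk(s, p)) for name, (s, p) in attributes.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, risk in ranked:
    print(f"{name}: risk={risk:.3f}")
```

Under this toy scoring, a highly sensitive attribute predicted with moderate accuracy can outrank a weakly sensitive one predicted very well, which mirrors the abstract's point that risk depends on both factors jointly.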