
Research Review: “Thinking Responsibly about Responsible AI and ‘the Dark Side’ of AI”

  • Writer: Nikita Silaech
  • Oct 28, 2025
  • 2 min read
Image generated by Ideogram.ai

By Patrick Mikalef, Kieran Conboy, Jenny Eriksson Lundström, and Aleš Popovič | European Journal of Information Systems, Guest Editorial (2022)

Published: Feb 2022 | DOI


Overview

This editorial introduces the concept of the “dark side of AI” to highlight the unintended and adverse consequences emerging from artificial intelligence systems. It situates responsible-AI discourse within broader debates about digital ethics, societal risk, and governance. The paper emphasises that responsible AI requires not only aspirational values but also a systematic understanding of harm, accountability, and impact.


Core Arguments

  1. Conceptual Clarification: The authors distinguish between normative frameworks—fairness, accountability, transparency—and the empirical contexts in which these principles are difficult to operationalise.

  2. Mapping the Dark Side: The paper outlines domains of risk such as bias, surveillance, privacy erosion, environmental impact, labour displacement, misinformation, and loss of control. These are illustrated through examples, including facial-recognition misuse and automation-related inequalities.

  3. Research Agenda: The editorial proposes four thematic directions for future work:

    • Measurement of hidden or systemic harms beyond algorithmic bias.

    • Development of governance mechanisms that embed preventive and corrective controls.

    • Interdisciplinary methodologies connecting technical, legal, organisational, and societal analysis.

    • Evaluation of mitigation strategies through empirical studies of AI deployments.


Strengths

  1. Interdisciplinary Framing: Combines insights from information systems, ethics, and management studies.

  2. Timeliness: Aligns with emerging regulatory frameworks such as the EU AI Act and sustainability-AI initiatives.

  3. Agenda-Setting Value: Identifies specific research priorities and provides a structure for future academic inquiry.


Limitations

  • Lack of Empirical Evidence: As a guest editorial, it does not present original data or case studies.

  • Broad Scope: The discussion spans many risk areas, leading to limited depth in individual topics.

  • Restricted Accessibility: The journal’s paywall limits access for interdisciplinary and public audiences.


Directions for Future Research

  • Conduct in-depth case studies to examine how AI failures manifest in practice.

  • Translate conceptual risk categories into audit and assessment tools (see the sketch after this list).

  • Promote open-access dissemination to improve interdisciplinary collaboration.
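
To make the second bullet concrete, the sketch below shows one way the editorial's risk categories could be encoded as a machine-readable audit checklist. This is a minimal Python sketch under stated assumptions: the RiskItem structure, the audit questions, and the audit_report helper are illustrative inventions for this review, not anything proposed in the paper itself.

    # Hypothetical sketch: encoding the editorial's risk categories
    # as a simple audit checklist. All names and questions here are
    # illustrative assumptions, not drawn from the paper.
    from dataclasses import dataclass

    @dataclass
    class RiskItem:
        category: str          # risk domain from the editorial's mapping
        question: str          # concrete audit prompt for a deployment
        evidence: str = ""     # where an auditor records findings
        resolved: bool = False

    CHECKLIST = [
        RiskItem("bias", "Have error rates been compared across demographic groups?"),
        RiskItem("privacy erosion", "Is personal data minimised and retention bounded?"),
        RiskItem("environmental impact", "Is training and inference energy use tracked?"),
        RiskItem("misinformation", "Can generated content be traced to its source?"),
        RiskItem("loss of control", "Is there a documented human override path?"),
    ]

    def audit_report(items):
        """List the open (category, question) pairs still awaiting evidence."""
        return [(i.category, i.question) for i in items if not i.resolved]

    if __name__ == "__main__":
        for category, question in audit_report(CHECKLIST):
            print(f"[OPEN] {category}: {question}")

Even a structure this simple makes the editorial's point operational: each abstract risk category becomes a checkable item with an evidence trail, which is the kind of preventive and corrective control the governance agenda calls for.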


Relevance to Responsible AI Practice

The editorial contributes to the foundation of responsible-AI governance by identifying areas where ethical principles need translation into measurable and enforceable practices. It supports the integration of risk assessment, transparency protocols, and cross-disciplinary governance mechanisms as key elements of responsible AI development.

