
Review of Sustainable AI and the Third Wave of AI Ethics: A Structural Turn

  • Writer: Nikita Silaech
  • 5 days ago
  • 5 min read
Image generated with Ideogram.ai

By Larissa Bolte, Aimee van Wynsberghe | AI and Ethics (2025)

Published: July 2024 | DOI


Overview

This paper tackles a pretty straightforward question: Is Sustainable AI just old wine in a new bottle, or does it actually represent a real shift in how we think about AI and our responsibilities around it?

Bolte and van Wynsberghe argue it's the real deal. They say Sustainable AI is part of what they call the "third wave" of AI ethics, which looks beyond individual technologies to examine the broader systems, infrastructures, and power dynamics that shape them. Instead of treating AI as a bunch of tools that need some ethical fine-tuning, they see it as part of a massive web of interconnected social and technical relationships that need fundamental change.


The Three Waves of AI Ethics

Building on earlier work by van Wynsberghe, the authors describe three major phases in AI ethics. These aren't strictly time-based but represent different ways of thinking about the problems.

  1. First Wave: The speculative era. This phase was dominated by thinkers like Nick Bostrom and Vernor Vinge, who worried mostly about superintelligence, the singularity, and hypothetical threats to humanity's existence. It was visionary but often disconnected from real-world concerns. The authors call this "dystopian tech-determinism."

  2. Second Wave: A pragmatic reaction. Here, people started focusing on actual systems and measurable problems like algorithmic bias, explainability, privacy, and accountability. This is the version of AI ethics most people recognise today. But it has limits. It treats AI problems as isolated bugs that can be fixed through better design, rather than as symptoms of deeper issues.

  3. Third Wave: The "structural turn." This wave refuses to see AI in isolation. It places technology within larger social, political, and environmental contexts. It also questions why we keep reaching for narrow technical solutions and who benefits when we do. Sustainable AI, from this perspective, isn't just about more energy-efficient models or better servers. It's about fundamentally rethinking the systems that create and support those models.


What Makes the "Structural Turn" Structural?

The authors identify two key aspects of this shift:

  1. Systems-Level Thinking

Instead of asking "Is this AI model energy-efficient?" third wave ethics asks "What kind of world makes this AI model possible, and what does it cost?"

The paper uses electric vehicles as a great example: EVs seem sustainable at first glance, but their true impact depends on where the electricity comes from, where the lithium is mined, and how the entire transportation system is set up.

AI systems work the same way. Their sustainability depends on:

  • The energy and labour that goes into hardware and data centres

  • The global supply chains that support computation

  • The rebound effects where efficiency improvements can actually lead to more total energy use

  • The structural dependencies that make it hard to reverse course on AI adoption

  2. Power and Ideology


The second aspect is about power: who gets to decide what counts as an ethical problem, and who benefits from keeping the focus on technical fixes?

The authors connect this to "techno-solutionism," the belief that every complex social problem has a technical answer if we just optimise enough. They argue this mindset protects existing power structures, often benefiting corporations or political interests that profit from the appearance of progress without actual systemic change.

Put simply: second wave ethics wants better design to improve systems. Third wave ethics wants systemic change to enable better design. It's a subtle but important flip.


Sustainable AI as the Third Wave in Action

Van Wynsberghe originally distinguished between "AI for sustainability" (using AI to achieve environmental goals) and "sustainability of AI" (reducing AI's own environmental footprint). Most of the world, she argues, has focused on the former while mostly ignoring the latter.

This paper positions Sustainable AI as part of a broader ethical awakening that sees environmental harm, labour exploitation, and technological dependency as interconnected problems. The goal isn't just to train less energy-hungry models but to question the extractive, power-concentrated systems behind them.


Beyond Sustainability: The Bias and Fairness Connection

One of the paper's smartest moves is showing that this structural shift isn't limited to sustainability discussions. It's also changing how researchers approach bias and fairness.

Earlier fairness research treated bias as a technical glitch, a problem of "bad data" or "bad models." But scholars like Ruha Benjamin and Safiya Noble argue this misses the bigger picture. Algorithms don't just reflect bias; they reproduce it, because they're built within social hierarchies that determine whose experiences matter.

This structural perspective doesn't ask "How do we fix the dataset?" but rather "Why do these data and systems exist this way, and who do they serve?" The authors use this to show that the third wave isn't a niche movement but a fundamental rethinking of what ethical inquiry means.


What Works Well

The paper's biggest strength is its clarity. The three-wave framework helps readers understand complex debates without dumbing them down.

The electric vehicle example is particularly effective, making abstract systems theory concrete and relatable. And their core distinction between "better design" versus "systemic change" is both clear and memorable.

The approach is refreshingly interdisciplinary as well. The authors draw from science and technology studies, critical theory, and sustainability science, weaving them into a coherent story. And importantly, they don't oversell their case. They acknowledge that older approaches still exist and often dominate policy and practice.


Where It Falls Short

Despite its conceptual clarity, the paper lacks hard evidence. The authors claim the structural turn is gaining momentum, but the support is mostly qualitative. Some data analysis or systematic review of the literature could have made the case more convincing.

It also doesn't offer much practical guidance. How do you actually do a systems-level ethics analysis? What tools or frameworks exist? Without this, "structural turn" risks staying an appealing idea rather than something people can actually use.

The wave model, while tidy, can also oversimplify things. Not every second wave study ignores broader contexts, and third wave work spans different priorities (from ecological systems to social justice) that don't always fit together neatly.

Finally, while the critique of techno-solutionism makes sense, it could go deeper. Not every technology-level fix is part of corporate power games, and not every systems-level critique escapes those dynamics. Reality is messier than the binary suggests.


Why It Matters

The paper's conclusion feels both urgent and measured: Sustainable AI might represent a genuine shift in thinking, but only if people can move beyond talk into action.

More broadly, the "structural turn" connects with a growing movement demanding that we understand technology in planetary context. Some call this the "Terrestrial Turn." In an era of environmental crisis and deep inequality, this isn't just academic debate. It's about redefining what responsible innovation actually means.


Future Directions

The paper points to several promising areas for future work:

  • Empirical research: Studying whether structural ethics is actually being adopted in policy and corporate settings

  • Method development: Creating practical tools for systemic and power-aware analysis

  • Comparative studies: Looking at whether similar structural shifts are happening in other tech areas like biotech or geo-engineering

  • Institutional critique: Examining how academic and industrial structures shape what kinds of ethics get funded or valued

  • Interdisciplinary synthesis: Bringing together insights from critical theory, systems thinking, and feminist perspectives into practical frameworks



This is a smart, forward-thinking paper that helps clarify where AI ethics has been and where it might be going. Its main argument (that Sustainable AI represents a shift toward systemic, power-aware thinking) is both convincing and long overdue.

It doesn't answer every question, but it changes what we're asking. And that's often how new ways of thinking get started. For researchers, policymakers, and practitioners, Bolte and van Wynsberghe's work reminds us that ethics isn't just about fixing AI. It's about reimagining the world that creates it.

Read Full Article: Here
