Governing AI: Beyond Risk-Based Regulation

This article, titled “Limitations of risk-based artificial intelligence regulation: a structuration theory approach” and authored by Lily Ballot Jones, Julia Thornton, and Daswin De Silva (2025), critically examines the prevalent risk-based approach to AI regulation. The paper highlights that while Artificial Intelligence (AI) is profoundly changing life and work, the current global transition towards mandatory AI regulation, exemplified by the European Union AI Act, predominantly employs a risk-based classification.

However, the authors argue that this prescriptive and application-focused approach overlooks several complex, circular impacts of AI, including:

  • Inherent limitations in the measurement of risk. The concept of ‘risk’ in AI regulation is often vague, obscuring its normative dimensions. Technical standards used for risk assessment, particularly under the EU AI Act, struggle to incorporate value judgments and may bypass democratic normative choices, leading to a superficial understanding of potential harms. The article suggests shifting the focus from ‘risk’ to ‘harms,’ acknowledging that many adverse effects of AI have already materialized.
  • Overemphasis on high-risk classification. The EU AI Act focuses primarily on ‘high-risk’ AI systems, General Purpose AI (GPAI), and a limited number of systems subject to transparency requirements. This focus fails to account for the potential cumulative effects of seemingly ‘low’ or ‘medium’ risk systems, which can lead to dangerous underestimation of their impact and to widespread harm at group and societal levels.
  • Perceived trustworthiness of AI. The article contends that ‘trustworthy AI’ is often used as a regulatory visibility tactic by businesses and regulators to encourage technology adoption and gain competitive economic advantage, rather than being a genuine reflection of safe and responsible development. This strategic emphasis on trustworthiness aims to build citizen trust, thereby unlocking economic potential for the EU.
  • Geopolitical power imbalance of AI. AI development and operation are disproportionately concentrated in a few countries, notably the US and China, creating a “moligopolic” global market. This concentration leaves other jurisdictions highly dependent and susceptible to lobbying by major technology companies, potentially producing diluted or flawed regulation that does not fully address the origins and control of AI systems.

In response to these limitations, the article proposes a structuration theory approach. This theoretical framework, rooted in Giddens’ structuration theory, views AI systems not merely as tools but as active agents that participate in the duality of structure (rules and resources) and agency (the knowledgeable activities of actors), thereby recursively shaping society. This perspective acknowledges AI’s increasing ability to interpret information, make inferences, and influence human thinking and decision-making, positioning it as an active participant in the ongoing structuration of social systems.

The structuration theory approach facilitates a re-orientation of the regulatory narrative by:

  • Clearly identifying and defining the roles of all involved actors within the complex web of AI’s circular impact, extending beyond the current insufficient differentiation in existing regulations like the EU AI Act.
  • Ensuring that normative decisions regarding fundamental rights risks are made through legitimate democratic processes informed by systemic research and public discourse, rather than being obscured or delegated to non-democratic bodies.
  • Providing a deeper insight into how changes within one part of the AI ecosystem can trigger cascading effects across the entire social system, due to AI’s active role in shaping structures and agency.
  • Bringing attention to how AI may already be shaping perceptions of regulation, potentially co-producing specific understandings of fundamental rights through its reliance on quantifiable metrics.

Ultimately, the article concludes that AI regulation needs to evolve beyond the current risk-based model to encompass the full extent and complexity of AI’s far-reaching impact, recognizing AI systems as active participants in the duality of structure and agency and the subsequent shaping of society.

Reference: Jones, L. B., Thornton, J., & De Silva, D. (2025). Limitations of risk-based artificial intelligence regulation: a structuration theory approach. Discover Artificial Intelligence, 5(14). https://doi.org/10.1007/s44163-025-00233-9

Podcast Link

https://notebooklm.google.com/notebook/87c4c2a9-22c8-4489-8935-a261e16b0a14/audio
