At a time when artificial intelligence is reshaping the global balance of power, Europe faces a strategic dilemma: how can law become a driver of innovation rather than a constraint? From the Letta Report to the Draghi Report, a common ambition is emerging: to turn Europe’s complexity into strength.

Echoing the warning issued in 1999 by Lawrence Lessig with his famous “Code is Law” (later hijacked by libertarians to celebrate the supremacy of code over democratic rules), Digital New Deal’s report highlights the urgent need to prevent a drift toward AI writing its own rules. From “Code is Law” to “AI is Law” runs a line Europe must not cross; avoiding it requires repoliticizing the digital space and reasserting the primacy of law.

Faced with an accumulation of often-contested regulations, Europe must repoliticize the digital realm. The thesis defended in this report is clear: we do not have to choose between regulation and innovation; it is their strategic interplay that will shape a truly European AI.

“CODE IS LAW by Lawrence Lessig was a warning.
AI IS LAW must be a wake-up call.”

DOWNLOAD THE FULL REPORT: AI IS LAW
DOWNLOAD THE SUMMARY: AI IS LAW

The author, Simon Bernard, identifies three essential conditions to achieve this objective:

  • The rule must support scalability;
  • The rule must accelerate the acceptability of innovation;
  • The rule must be actionable, so that it becomes a strategic asset.

His report is structured in three parts:

  1. It clarifies the rules specific to the digital realm — often misunderstood — and aligns them around a shared goal: building ethical and safe AI.
  2. It broadens the reflection to general branches of law — civil, commercial, environmental — which can also contribute to a distinctive form of AI, balancing tradition and disruption.
  3. It proposes a political and strategic architecture aligned with the ambitions expressed by Letta and Draghi, in order to transform this legal framework into an operational instrument of European sovereignty.

AI is Law thus argues that clear, codified law is not a brake on innovation, but a powerful tool for sovereignty, competitiveness, and scaling in the age of artificial intelligence.

ADAPT THE LEGAL ARCHITECTURE TO THE NEEDS OF AI

Why?
Law must become a lever for the scalability of innovation: it protects, legitimizes, and secures innovation.

What?
The legal architecture must be adapted to support technologies that evolve faster than the norms governing them, while preserving law’s protective role for society.
An architecture inspired by the layered model of computing systems can be proposed (when Kelsen meets Turing), illustrated in the sketch after the list below:

  • Law as an Infrastructure (LaaI): fundamental rights and freedoms, inherently guaranteed and rendered unalterable.
  • Law as a Platform (LaaP): legal texts that can be coded (an “API-based law”), starting with data regulations.
  • Law as a Service (LaaS): an experimental perimeter within a flexible but secure framework, inspired by regulatory sandboxes.
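
To make this layering more concrete, here is a minimal sketch in Python, not taken from the report, of how it might map onto software: immutable fundamentals (LaaI), coded rules queried like an API (LaaP), and a sandboxed perimeter where gaps are reported rather than blocking (LaaS). Every identifier and rule below is invented for illustration.

    # Hypothetical illustration of the LaaI / LaaP / LaaS layering as a software stack.
    # All names, identifiers, and rules are invented; they do not come from the report.
    from dataclasses import dataclass

    # Law as an Infrastructure (LaaI): fundamentals the rest of the stack may not alter.
    FUNDAMENTAL_RIGHTS = frozenset({"human_dignity", "non_discrimination", "privacy"})

    # Law as a Platform (LaaP): machine-readable rules, queryable like an API.
    @dataclass(frozen=True)
    class CodedRule:
        reference: str        # hypothetical provision reference
        requirement: str      # machine-checkable obligation
        high_risk_only: bool  # whether the rule targets high-risk systems only

    PLATFORM_RULES = [
        CodedRule("rule-A", "purpose_limitation", high_risk_only=False),
        CodedRule("rule-B", "human_oversight", high_risk_only=True),
    ]

    def applicable_rules(high_risk: bool) -> list[CodedRule]:
        """Query the 'API-based law': rules applying to a given system profile."""
        return [r for r in PLATFORM_RULES if not r.high_risk_only or high_risk]

    # Law as a Service (LaaS): an experimental perimeter where gaps are surfaced, not blocking.
    def evaluate(obligations_met: set[str], high_risk: bool, sandbox: bool = False) -> bool:
        missing = [r.reference for r in applicable_rules(high_risk)
                   if r.requirement not in obligations_met]
        if missing and sandbox:
            print(f"sandbox: experiment allowed, obligations still to address: {missing}")
            return True
        return not missing

    if __name__ == "__main__":
        # A hypothetical high-risk system that implements purpose limitation only.
        print(evaluate({"purpose_limitation"}, high_risk=True, sandbox=True))

In this reading, flexibility lives only in the service layer, while the infrastructure layer remains out of reach of experimentation.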

How?

  • Recommendation No. 1 – Make code the 25th official language of the European Union, so that any new legal text can be translated into a machine-readable form (a sketch of what such a translation could look like follows this list).
  • Recommendation No. 2 – Include a legal coding clause in the upcoming omnibus regulations, enabling the short-term codification of key digital regulations at the European level, in line with ongoing work.
  • Recommendation No. 3 – Support the Legal Data Space initiative, which already brings together a large number of stakeholders in the legal sector. It would be a shared infrastructure for processing and sharing public (open data) and private (data spaces) legal data, with the aim of developing sovereign legal AIs, built by and for that sector.
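
As a purely illustrative aside on Recommendation No. 1, a legal provision published in both natural language and a machine-readable form might look like the following Python sketch. The provision, its reference, and the field names are invented; the report does not prescribe any particular format.

    # Hypothetical example of one provision distributed both as prose and as structured data.
    # The reference, text, and schema are invented for illustration only.
    import json

    provision = {
        "reference": "hypothetical-regulation/article-5(1)",
        "natural_language": "Providers shall document the training data of the system.",
        "machine_readable": {
            "subject": "provider",
            "obligation": "document_training_data",
            "modality": "shall",
        },
    }

    # The prose serves human readers; the structured version can be parsed by compliance tooling.
    print(json.dumps(provision, indent=2))

    def is_obliged(role: str) -> bool:
        """Toy query against the machine-readable version of the provision."""
        rule = provision["machine_readable"]
        return role == rule["subject"] and rule["modality"] == "shall"

    print(is_obliged("provider"))   # True
    print(is_obliged("deployer"))   # False

The value lies less in the toy check itself than in the idea that both versions would be issued by the same authority at the same time, which is what the "25th official language" framing points to.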

PROMOTE A READABLE, HARMONIZED, AND SOVEREIGN LEGAL FRAMEWORK

Why?
To be a lever for innovation, legal rules must be predictable, intelligible, and actionable.

What?
Adapt the existing framework to provide greater coherence and strategic ambition.

How?

  • Recommendation No. 4 – Simplify, harmonize, coordinate, and “sovereignize”, limiting future changes to the legal framework to these core objectives.
  • Recommendation No. 5 – Assess the effectiveness of AI-specific rules using a three-part framework:
    • Standardization (the rule should support scalability),
    • Acceptability (the rule should help innovation gain public and market acceptance),
    • Actionability (the rule should empower actors to use it as a competitive advantage).