
Ireland can regulate AI now, and that’s the easy part



The AI Office will create real legal machinery to oversee artificial intelligence. The question is whether anyone can read what’s inside the black box – and if that’s even the right question, says Jason Walsh



The Government has published its blueprint for enforcing artificial intelligence (AI) rules across Europe. This, of course, matters because Google, Meta, Apple, Microsoft, X (formerly Twitter) and TikTok all run their EU operations from the country, which, in practice, means Ireland isn’t simply implementing the EU’s AI Act so much as deciding whether it really gets enforced.

The architecture is impressive. A new statutory body, the AI Office of Ireland, must be operational by August, and the fines it can levy reach up to €35 million or 7% of a company’s global turnover. Regulators can demand documentation, conduct inspections, and, notably, access source code. This is serious stuff.

And yet, as the old proverb goes: paper will bear anything that is written on it. 

 

While Ireland spent the 1990s and 2000s transforming itself into Europe’s tech hub, which in practice meant becoming a front door for US computing companies and skimming off the top, its regulatory apparatus never quite developed the same ambition.

Ireland’s Data Protection Commission (DPC), established in 1989, spent years as the bottleneck in enforcement, complaints gathering dust while tech companies’ legal commandos ran down the clock. The DPC has since improved, but any AI Office will inherit that baggage before it opens its doors.

Of course, that won’t matter if the Office really does get a grip on what AI companies are doing.

The chosen model is interesting, and, frankly, probably right, but it also sharpens the concern: rather than one powerful regulator, existing bodies, from the Central Bank to the health authorities to Coimisiún na Meán, will supervise AI in their respective domains.

This makes sense. After all, sector regulators understand their industries. On the other hand, it could make it easier for businesses to run rings around regulations as it results in multiple agencies doing one job, none with clear ownership, all with long-standing relationships with the industries they now police. On a bad day, that is regulatory capture waiting to happen.

Then there’s the enforcement power that sounds fierce but almost certainly isn’t. Source code access makes for a good headline (and, again, to be clear, is the correct decision). In practice, though, no regulator has the technical bench to audit a frontier model, and any serious attempt would vanish into trade secrets litigation until everyone involved has retired. The power exists on paper. It will likely stay there.

The analogy is not perfect, but when Unix operating system co-creator Dennis Ritchie was asked by AT&T to audit the code of a suspiciously compatible system called Coherent, the man who wrote half of the original code and jointly oversaw all of it said he could not “find anything that was copied [but] it might have been that some parts were written with [AT&T] source nearby”.

If the man who wrote the thing could not reach a firm determination about software he effectively co-authored, then expecting an Irish regulator to meaningfully audit an AI model built by thousands of engineers is, at the very least, optimistic.

We call it all ‘code’ now. We are wrong. Programming languages are limited but expressive, just as natural language is, and it is not always obvious how, or why, any given function, or even a single line, does what it does.

We already know this about language, which is why, in The Hour of the Star, Clarice Lispector, or rather her fictive writer narrating the life of a fictive subject, can say: “Remember that, no matter what I write, my basic material is the word. So this story will consist of words that form phrases from which there emanates a secret meaning that exceeds both words and phrases.”

We can choose, though we should not, to take the term ‘secret meaning’ literally, but either way, the statement is plainly true of ‘code’.

And for anyone who cares, here is a Python script that may be interesting to run:

text = "Remember that, no matter what I write, my basic material is the word. So this story will consist of words that form phrases from which there emanates a secret meaning that exceeds both words and phrases."

print(text)

# The hidden function:
print(text.replace("word", "law").replace("phrases", "regulations"))

I have at least admitted to you (and it is plain to anyone who can read the syntax) that the above listing does not, in fact, do what it claims to. But we can do much worse than that. Ritchie’s collaborator, Turing Award-winner Ken Thompson, opened up a vortex of instability in 1984 when he published the short paper Reflections on Trusting Trust, noting that you could modify a C compiler binary to insert a backdoor when compiling the login program, leaving no trace in the source code. “The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.)” he wrote.
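To make the shape of Thompson’s trick concrete, here is a minimal sketch of my own, emphatically not his actual hack: the function names, the toy login program and the ‘master password’ are all invented for illustration, and a real trusting-trust attack also rewrites the compiler itself, so the backdoor survives recompilation from clean source.

# A toy 'compiler' that rewrites source text on its way to being built.
BACKDOOR = "    if password == 'letmein':\n        return True  # injected\n"

def compile_source(source: str) -> str:
    out = []
    for line in source.splitlines(keepends=True):
        out.append(line)
        # When compiling something that looks like a login program,
        # silently splice in a master password.
        if line.startswith("def check_password"):
            out.append(BACKDOOR)
    return "".join(out)

clean_login = (
    "def check_password(user, password):\n"
    "    return password == lookup(user)\n"
)

print(clean_login)                  # the source a human reviews...
print(compile_source(clean_login))  # ...is not the program that runs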

Worse still, AI systems have ‘behaviours’ that don’t appear in any source code at all. For AI models, the real issue is that the ‘source’ that matters, the training data and the learned weights, isn’t even legible in the way code is: you can inspect it and still not understand what it does.
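A trivial sketch of my own makes the point; the network and its weights below are numbers I have simply made up, but the problem only scales up: every parameter is on screen, and the function they implement is still not apparent. A frontier model has hundreds of billions of such parameters.

import numpy as np

# A tiny two-layer network: every parameter it has is printed below.
W1 = np.array([[2.1, -1.3], [-0.7, 1.9]])
b1 = np.array([0.2, -0.4])
W2 = np.array([[1.5], [-2.2]])
b2 = np.array([0.1])

def model(x):
    h = np.maximum(0, x @ W1 + b1)  # hidden layer with ReLU
    return h @ W2 + b2

print(W1, b1, W2, b2)               # the entire 'source': nine numbers
print(model(np.array([1.0, 0.0])))  # yet what it computes is opaque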

Yesterday’s fears

There is also a simpler, and deeper, problem, though. The EU AI Act was drafted before ChatGPT and its ilk began to expand to consume all known matter, so to speak. While it has proved excellent at ensuring CV-screening software has proper documentation, it has almost nothing to say about labour displacement, the concentration of power in a handful of AI labs, or, crucially, the slow degradation of shared epistemic ground.

We are, in short, regulating yesterday’s technology against yesterday’s fears while the present slides past unexamined. This matters because we are building enforcement architectures for a technology whose implications we do not yet understand, using institutions that have already demonstrated limitations.

Ireland now holds significant cards in how AI gets governed across Europe; the legal machinery is real, and the fines are eye-watering. The question is how that machinery will be used, and whether it will be aimed at the right targets.

On that, the record offers more caution than confidence.



