At the Political Tech Summit in Berlin, our CEO Clemens Maria Schuster delivered a message that cut straight through the current AI hype cycle in public affairs.
Clemens’ argument was simple but uncomfortable: artificial intelligence will not fix a public-affairs ecosystem that is already drowning in its own output.
The promise that AI will “democratize lobbying” or automatically improve political decision-making sounds appealing. But in reality, the opposite may be happening. Lower barriers to producing content don’t necessarily create better participation. They often create more noise.
Slides from the talk
The Transparency Trap
Over the past decades, public affairs has become increasingly transparent. Legislative processes are more visible, documents are accessible online, and monitoring tools allow organizations to track developments in real time. (This is, of course, part of what SAVOIRR does as well, though we deliver much more than monitoring: workflows, data, and people.)
In theory, this transparency should lead to better, more informed decision-making. In practice, something else is happening. AI now makes it possible to generate:
- position papers
- policy briefings
- consultation responses
- amendment proposals
…in seconds.
What used to require time, expertise, and internal debate can now be produced almost instantly. The result is what Clemens calls the “transparency trap.” More participation does not necessarily produce better input. Instead, scale and speed often produce mediocrity.
And when scale meets automation, the result can be a wall of noise.
When Scale + Speed = Noise
Public affairs once operated under a natural constraint: producing high-quality policy input took time. Organizations had to invest resources into research, drafting, and internal coordination. That friction acted as a filter. AI removes much of that friction. Suddenly:
- every organization can produce more content
- every consultant can generate policy text instantly
- every campaign can multiply its output
The barrier to participation falls. But so does the barrier to mediocre participation.
Inboxes of policymakers risk filling with what Clemens calls “spam-mendments” — AI-generated proposals that look legitimate but add little value.
Over time, the system reacts. Decision-makers develop filters. Gatekeepers become stricter. Real stakeholders risk being filtered out along with the noise. In other words: automation meant to democratize influence may ultimately make it harder to be heard.
The Probability Fallacy of AI
Another issue lies in how AI performance is often communicated. Models are described as “95% accurate.” That sounds reassuring. But in public affairs and regulatory work, the remaining 5% matters enormously. Examples are already emerging across fields:
- legal briefs citing hallucinated court cases
- policy reports containing dead or fabricated references
- AI-generated research with invented sources
One example mentioned in the talk involved AI-assisted drafting where references appeared credible but led to non-existent or irrelevant documents. In fields where trust is the primary currency, a single mistake can have disproportionate consequences. One careless prompt can bankrupt a reputation.
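The compounding effect behind this is easy to underestimate. A back-of-the-envelope sketch (not from the talk, and assuming each claim is independently correct, which is a simplification) shows why "95% accurate" is cold comfort for a document full of factual claims:

```python
def p_error_free(accuracy: float, n_claims: int) -> float:
    """Probability that all n_claims are correct, assuming each claim
    is independently correct with the given per-claim accuracy."""
    return accuracy ** n_claims

# A single claim at 95% accuracy looks fine; a 50-claim briefing does not.
for n in (1, 10, 50):
    print(f"{n} claims: {p_error_free(0.95, n):.1%} chance of zero errors")
```

Under these assumptions, a ten-claim document is error-free only about 60% of the time, and a fifty-claim briefing less than 8% of the time. Real claims are not independent, so the exact numbers will differ, but the direction is the point: per-claim accuracy decays fast at document scale.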
Where AI Actually Works
Rejecting the hype does not mean rejecting AI. The real question is where AI creates value. Clemens argues that AI works best not in persuasion, but in data-heavy operational tasks. In other words: the boring stuff. Examples from GovTech illustrate this well:
- flood prediction models
- disaster resource allocation
- infrastructure monitoring
- cross-border coordination systems
These systems succeed because they focus on large-scale data analysis and logistical optimization. They do not attempt to:
- write legislation
- persuade policymakers
- simulate political judgement
They assist with information processing, not political strategy.
The Real Equation of Influence
If AI is not the strategic brain of public affairs, what is? Clemens proposes a simpler formula (read more here: https://www.savoirr.com/article/public-affairs-the-equation-for-influence):
Influence = Data + People + Workflows
AI belongs in the data layer. It helps monitor legislation, summarize developments, and process large volumes of information. But influence ultimately depends on:
- human judgement
- relationships
- institutional knowledge
- credibility
Those elements cannot be automated. At least not without destroying the very legitimacy public affairs relies on.
Be the Architect, Not the Bot
The idea that AI will magically democratize lobbying is comforting. It suggests technology alone can fix structural inequalities in political influence. Reality is more complex.
If AI is applied without care, it may simply industrialize lobbying output rather than democratize it. To avoid that outcome, public affairs professionals must design how AI is used. That means:
- investing in clean, structured data
- building transparent workflows
- using AI as a filter, not a megaphone
Automation should handle repetitive tasks. Humans should retain responsibility for:
- judgement
- ethics
- relationships
- strategic decisions
Or, as Clemens summarized during the talk:
“Be the architect, not the bot.”