
When Fact-Checkers Become Gatekeepers: Romania, Breton, and the EU’s ‘Truth Wars’

World - December 28, 2025

Thierry Breton’s tenure as the EU’s digital czar has come to exemplify a model of platform governance that leans heavily on external “fact‑checking” and opaque risk assessments, offering citizens little real transparency or due process. The US sanctions imposed against him over charges of transatlantic censorship reveal more than a clash of legal cultures: they expose the rising power of quasi‑private truth arbiters who operate behind closed doors, accountable neither to voters nor to clear, predictable criteria. In this environment, the annulment of Romania’s presidential election and the battle of narratives over “disinformation” show how quickly fact‑checking can turn into a political weapon rather than a neutral safeguard.

EU‑aligned fact‑checking networks present themselves as structural “correctives” to misinformation, but their behaviour is, more often than not, structurally subjective. Editorial decisions about what to cover, which stories to omit, and how to frame verdicts inevitably encode ideological and institutional biases, regardless of any individual journalist’s good faith. Labels like “false,” “misleading,” or “lacks context” are not neutral descriptors; they are speech‑shaping tools that downgrade reach, stigmatize speakers, and subtly influence electoral debate, often with less accountability than a formal state restriction would carry. What disturbs is not only the opacity of the outcomes but of the process itself. The criteria for flagging content, the training data that shapes evaluators’ judgment, and the appeals mechanisms are rarely described in terms that ordinary users can scrutinize or contest. The result is an irony in which platforms and regulators preach “transparency” to everyone else while the architecture of fact‑checking itself is insulated from meaningful public critique. When these systems are layered on top of algorithmic amplification, a small group of private or semi‑private actors wields outsized influence over what millions of people can see, share, or trust.

Romania’s voided presidential election illustrates how slippery the line between defending democracy and controlling narratives can be. Once intelligence services and courts conclude that a vote has been compromised, the information space is immediately reconfigured: some claims become state‑endorsed truths, others fall into the category of “disinformation,” and actors who doubt the official line risk being viewed with suspicion. In such a climate, fact‑checking networks are not merely correcting errors; they are patrolling the limits of legitimate discourse around the election itself. The issue is not that foreign interference or coordinated manipulation never occurs (it certainly does), but that citizens are expected to accept momentous decisions about trust with little access to the primary evidence or methodological specifics. The public seldom sees the raw data, the forensic metrics used to distinguish organic mobilization from inauthentic coordination, or the full list of alternative explanations considered and dismissed. When the democratic remedy for interference is as radical as annulment, the demand for transparency should be equally radical; instead, the gatekeepers of information tighten their control and rationalize it in the language of “resilience.”

Pavel Durov’s claims of a “digital gulag” in Europe may be exaggerated, but they draw on a genuine concern: the fusion of state regulation, platform moderation, and fact‑checking partnerships into one cohesive, vertically controlled system of command. When regulators such as Breton pressure platforms in the name of the DSA, and those platforms delegate epistemic authority to fact‑checking consortia whose internal workings are opaque, dissenters find themselves contending not with a single censor but with an embedded web. Durov’s critique resonates because many people sense that the rules governing their speech are negotiated in private among government bodies, non‑governmental organizations, and businesses, with citizens encouraged to “trust the process” rather than interrogate it. The problem, then, is not whether any given fact‑check is right or wrong, but that the whole model normalizes the idea that politically salient truth must be subject to centralized curation. Once that norm takes root, it can be replicated endlessly across elections, public health, protest, and foreign policy, especially when authorities invoke emergencies or “hybrid threats.” The very word “disinformation” then becomes a flexible tool for shutting out inconvenient stories, even those that contain truth or raise uncomfortable questions about power.

In Romania, fact‑checking bodies occupy a somewhat paradoxical position between journalism, advocacy, and regulatory auxiliary. They often depend on funding from European programs, partnerships with platforms, or ties to think tanks aligned with broader EU policy objectives. This does not necessarily invalidate their work, but it generates structural incentives: subjects that support funders’ objectives are more likely to be covered, while systemic criticism of EU or NATO policy may be implicitly downplayed or punished more severely. These tensions are compounded by the absence of substantial, enforceable transparency obligations. Romania operates within a complex ecosystem of stakeholders, including the Bulgarian‑Romanian Observatory of Disinformation (BROD), which emerged in early 2023 as an alliance of fact‑checkers, researchers, and technologists. The European Digital Media Observatory (EDMO) coordinates fact‑checking efforts across Europe, with organizations like Funky Citizens serving as one of two Meta‑approved fact‑checking entities in Romania.

Citizens generally have no historical view of the posts that have been labeled, down‑ranked, or demonetized, and they rarely receive an explanation of why one narrative was flagged while another, no less suspect, was not. Where appeals are accessible at all, they tend to be slow, arcane, and stacked against laypeople who lack the time or skill to navigate them. In a nation already marked by disaffection with institutions and vulnerability to conspiratorial thinking, the combination of actual disinformation and opaque fact‑checking risks amplifying cynicism rather than restoring trust.

A genuinely democratic approach would separate verification from gatekeeping. Fact‑checkers could publish detailed methodologies, disclose budgets and partnerships as precisely as possible, and maintain searchable databases of their verdicts, including corrections of their own mistakes. Platforms and regulators could be required to publish clear metrics on how fact‑check labels affect reach, and users could be given direct, accessible ways to challenge decisions. Most importantly, citizens should be able to read alternative analyses for themselves, including those that reject official narratives, without fear of being algorithmically hidden.

A critical attitude toward fact‑checking does not mean ceding the information space to trolls and foreign intelligence agencies; it means insisting that any system with such power over elections, reputations, and public debate must itself be radically open to scrutiny and pluralistic in design.