Hallucinated References: Five Excuses for Academic Misconduct

Symmetrical view of a historic library with tall bookshelves, a glass roof, and golden columns

Academic journals increasingly face a problem: papers citing hallucinated references. We are talking about citations that simply don’t exist – fake titles, fake journals, and fake authors. When I suggested on LinkedIn that such papers should be desk-rejected and authors banned for one year, the support was overwhelming. But the pushback in the comments was even more revealing.

Not because it challenged my position, but because the arguments reveal a dynamic that extends far beyond hallucinated references, namely how we talk about AI in general and what we reveal about ourselves in the process.

I’m a recovering academic; I quit academia about ten years ago. Before that, I experienced the system from the inside: I published, I reviewed, and I felt all the pressure. There are good reasons why I left. But this discussion reminded me how liberating it is to no longer be trapped in that system – and still be able to say: The system isn’t to blame for everything.

What follows are five arguments from this debate. I’m deliberately sharpening them – they reveal patterns, not individuals: defensive deflection, TINA rhetoric, resignation, victim mentality, nihilism.

Argument 1: The problem isn’t AI, but humans

The most striking thing in the discussion: people vehemently defended AI – even though I hadn’t attacked AI at all. I criticized people who submit hallucinated references. But several commenters rushed to emphasize: “The problem isn’t AI, it’s humans.”

This reflex isn’t new. I see it again and again: As soon as someone criticizes anything in the context of AI – even when the criticism explicitly targets human behavior – some people seem to take it personally.

A classic sign of hype. Many have invested massively in AI – money, time, emotions. For them, criticism becomes an existential threat: “If AI doesn’t deliver, I lose my money and my face.”

And these defenses almost always go hand in hand with a devaluation of humans: “The real problem isn’t AI – it’s human intelligence that doesn’t verify anything.” To elevate AI, humans must be diminished. That’s misanthropy in tech-friendly clothing. Smart, but transparent.

Argument 2: There is no alternative

This AI defense is based on a worldview that comes in two variants.

Variant 1: Tech-solutionism. It claims that every problem has a technical solution; we just need to wait. In the discussion, this sounds like: “Hallucinated references are an interaction design failure; there will be a fix soon.” The fact that this is about integrity, responsibility, and trust – decidedly non-technical values – gets ignored. Moral questions are reframed as technical problems.

Variant 2: Tech-determinism. It sees the AI revolution as inevitable and says we must simply accept it. It prophesies: “We are entering the age of AI-centric knowledge creation.” This is resignation with a philosophical veneer.

Both use the same rhetoric: “There Is No Alternative” (TINA). We know this from markets: “The market forces us.” Now the market is replaced by AI: “AI forces us.” In both cases, human decisions are sold as natural laws.

The result? The solutionist says “we don’t need to do anything,” the determinist says “we can’t do anything.” In both cases, responsibility disappears.

Argument 3: Misconduct has always existed

Another popular argument claims: “This is nothing new. There have always been fake references in scientific papers.”

True. There has always been misconduct in science. Hallucinated references are just a new variant of an old problem.

So am I getting worked up over nothing?

No. Because “it’s always been this way” never works as a moral justification. By this logic, we would never have abolished slavery, never banned child labor, never improved anything.

The fact that scientific misconduct has always existed doesn’t mean it’s acceptable or should remain so. This is the classic is-ought fallacy: deriving an ‘ought’ from an ‘is.’

Behind this lies defeatism: ‘We were never able to prevent it, so we can’t prevent it now either.’ Standards are being sacrificed for convenience.

Argument 4: The system is to blame

Some people say: The academic publication system is broken – publish-or-perish pressure, peer review at its limit, profit-driven journals. Researchers are victims of this system.

As a former academic, I understand this criticism. I know the pressure, the overload, the perverse incentives.

But honestly? This attitude annoys me. It’s based on deterministic thinking again: ‘We are victims’ – as if researchers were passive objects without agency.

Let’s look at this: Does the system create pressure? Yes. Does the system set the wrong incentives? Yes. But nobody – truly nobody – is forced by this system to submit non-existent references. That claim is absurd.

Whoever submits hallucinated references pretends to have a scientific foundation that doesn’t exist. This misleads reviewers and readers and undermines trust in science. This is an individual decision for which individuals are responsible. Structure may explain some things, but it doesn’t absolve. Individual responsibility remains.

From this victim mentality, it’s only a small step to complete moral collapse: If the system is to blame, why should we uphold standards at all?

Argument 5: Knowledge doesn’t need to be true

And here comes what I consider the worst of all: nihilism. Its proponents say things like: “Perhaps knowledge doesn’t need to be true, just plausible, to trigger new and innovative thoughts.”

Or: “Perhaps we have been in a ‘post-Enlightenment’ era for some time now, but simply can’t accept it yet.”

This sounds intellectually sophisticated – ‘post-Enlightenment’ as a philosophical position. But in the shallows of social media this is nothing more than intellectual posturing.

Because if knowledge doesn’t need to be true – why should we care about hallucinated references? If there is no truth – why should there be standards? This perspective dissolves any foundation for shared meaning.

You don’t need to believe in absolute truth to know: A source that doesn’t exist cannot be cited. This isn’t a philosophical debate; it’s a no-brainer: Either the paper was published or it wasn’t.

And science requires that claims be verifiable. Hallucinated references make verification impossible.

Perhaps this also explains the curious affinity between philosophical nihilism and AI enthusiasm: If truth is irrelevant anyway, then LLMs – which essentially produce plausible statements without reference to truth (critics call it bullshit) – are not a threat but confirmation.

This nihilism is the logical endpoint of evading responsibility. “The system is to blame” (Argument 4) at least still claims standards are important – just unfortunately unattainable. Nihilism abandons even that: Standards are irrelevant anyway.

My assessment? Anyone who sees no difference between real and fabricated references undermines the foundation of scientific work.

What remains

This discussion was revealing – not because it changed my position, but because it exposed fundamental patterns in the AI debate.

Defensive deflection, TINA rhetoric, resignation, victim mentality, nihilism. These aren’t fringe phenomena, but precisely the arguments we must contend with, again and again, whenever we talk about AI.

After all this, I stand by my proposal: Anyone who submits hallucinated references should be sanctioned. Desk reject and one-year ban.

To the tech-solutionists: If your predictions are correct, a technical solution will emerge soon. Until then: take responsibility instead of attacking those who point out problems.

To those with the victim mentality: Yes, the system is broken. But structure doesn’t absolve individuals of responsibility.

And those who never believed in truth? They have no place in science.

But perhaps we should stop merely reacting: We can defend standards and we can hold ourselves accountable. We can stop treating technological development as inevitable fate and start seeing it as something we can shape.

The question ultimately isn’t whether AI changes science, but whether we’re willing to defend what makes science possible in the first place: trust, integrity, and the conviction that truth matters.