Don’t believe your eyes — or your ears: The AI lies are piling up

We’re living in a world where seeing is no longer believing, and hearing might mean being misled. Advances in generative AI have turned photos, videos, voices, and even entire backstories into tools of manipulation. These aren’t just digital curiosities or harmless pranks – they’re undermining truth, trust, and the very foundations of our shared reality.

In just the past two weeks, three alarming new stories came to light, each of which illustrates how rapidly the landscape is shifting: a national security threat involving Secretary of State Marco Rubio, cultural confusion surrounding an all-AI "band" called the Velvet Sundown, and monetized misinformation (AKA AI slop) that racked up 70 million views by leveraging fascination with the Diddy trial. Each of these stories shows the growing power of AI to blur the line between truth and lies. But a fourth story offers a sliver of hope, and a look at how one company is fighting back.

A voice you cannot trust: The Marco Rubio deepfake

According to a report in The Washington Post yesterday, someone used AI to clone Secretary of State Marco Rubio’s voice and impersonated him via Signal, email, and voicemail. The deepfake "Rubio" reached out to multiple high-ranking targets: three foreign ministers, a U.S. governor, and a sitting member of Congress. The messages appeared to come from a spoofed State Department email address (Marco.Rubio@state.gov), and the audio was realistic enough to prompt concern among intelligence officials.

The incident was revealed in a classified diplomatic cable distributed to allied governments and select congressional offices.

Hany Farid, a University of California at Berkeley professor who specializes in digital forensics, said operations of this nature do not require sophisticated actors, but they are often successful because government officials can be careless about data security. According to the Post, Farid said, “This is precisely why you shouldn’t use Signal or other insecure channels for official government business.”

Even apart from Signal, these voice-cloning attacks are becoming increasingly common. Only 15 to 20 seconds of clean audio is needed to generate a convincing replica using commercial tools such as ElevenLabs or Resemble AI. The implications for national security are profound: Trust in official communications can be undermined without a single network being breached.

As the FBI warned in May, “If you receive a message claiming to be from a senior U.S. official, do not assume it is authentic.” 

The Velvet Sundown: An all-AI "band" that fooled a million listeners

At the end of June, The Washington Post and Rolling Stone reported that the Velvet Sundown, a band with more than 1.1 million monthly listeners on Spotify, was entirely generated by AI. The songs and vocals – not to mention the cover art and promotional materials – were all synthetic. The group had no human performers, no live appearances, and no disclosures to listeners.

Initially, the project was described as an experimental concept – a reflection on the nature of authenticity in music. But this had not been communicated to Spotify users, many of whom believed they had stumbled onto a retro-inspired indie band. When the truth came out, there was backlash over what critics described as a form of cultural deception. Spotify later added a label identifying the “band” as a "synthetic music project guided by human creative direction."

The incident sparked a wider debate over whether AI-generated art should always carry a disclaimer. Who owns AI music? Who gets paid? And is it a bad thing when algorithms can produce chart-topping content without a single human voice? 

AI slop shops: The Diddy trial unleashes a wave of AI misinformation

A couple of weeks ago, The Guardian reported that YouTube was inundated with fake, AI-generated content claiming that celebrities such as Leonardo DiCaprio, Oprah Winfrey, and Brad Pitt had testified in the legal case against Sean "Diddy" Combs. None of that was true. The videos used AI-generated voiceovers, fabricated thumbnails, and bogus courtroom sketches.

This wave of content has been labeled "AI slop" by experts – a term describing mass-produced, low-quality misinformation optimized for monetization through YouTube’s algorithm. One creator reportedly earned tens of thousands of dollars from ad revenue alone. YouTube has taken some action, but enforcement remains inconsistent.

What makes AI slop dangerous is that it exploits curiosity and breaks down the boundary between entertainment and disinformation. The more convincing it becomes, the more likely it is to be believed – especially when consumed in volume. And the incentives to generate it remain high.

The deepfake detective: A partial fix – for some of these issues

On Monday, The Los Angeles Times profiled Dan Neely, CEO of Vermillio, a tech startup working to help celebrities detect and remove deepfakes. Vermillio's platform uses a combination of media fingerprinting, voice synthesis detection, reverse image lookup, and proactive monitoring tools to identify and flag unauthorized or synthetic media. Its services are designed to work in real time, identifying problematic content as it begins to circulate rather than after it has gone viral.

Chicago-based Vermillio partners with talent agencies, production companies, and rights-holders to create verified digital signatures – akin to audio-visual watermarks – that allow it to match authentic content against synthetic or manipulated material online. Once a match is identified, the company issues takedown requests and coordinates with platforms to suppress further spread. According to Neely, the company can often respond to a deepfake within minutes.
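
To make the fingerprint-matching idea a bit more concrete, here is a deliberately simplified sketch in Python. Vermillio has not published the internals of its system, so this is not its actual method – it's a generic "average hash" comparison built on the Pillow imaging library, with hypothetical file names, meant only to illustrate how a verified reference image can be compared against a suspect copy found online.

```python
# A toy illustration of perceptual fingerprint matching (an "average hash").
# NOT Vermillio's TraceID technology -- just a sketch of the general idea:
# reduce an image to a compact fingerprint, then compare fingerprints.

from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink an image to a small grayscale grid and encode each pixel
    as one bit: 1 if brighter than the grid's mean, else 0."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")


if __name__ == "__main__":
    # Hypothetical file names, used here only for illustration.
    reference = average_hash("verified_promo_photo.jpg")
    candidate = average_hash("suspect_upload_frame.jpg")
    # A small distance suggests the candidate is derived from (or closely
    # mimics) the verified original; a large distance suggests it is not.
    print(f"Hamming distance: {hamming_distance(reference, candidate)}")
```

Real detection systems work at a vastly larger scale and with far more robust signals (audio, video, and metadata, not just still images), but the basic logic is the same: keep a trusted fingerprint of the authentic material, and flag anything online that matches it too closely without authorization.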

One key to Vermillio’s strategy is scale: the company is now scanning over 100 million videos per day for manipulated content. Neely emphasized that Vermillio isn’t trying to ban synthetic media, but that it wants to give creators and public figures the tools to assert ownership and control over their digital identities.

Neely also pushed back on the idea that AI fraud should be someone else’s responsibility to fix. “We can’t wait for governments to solve this problem,” he said. “We can’t wait for legislators to solve this problem. We can’t wait for other people to solve this problem. We just said it’s the right thing to do, so we should just be doing it.”

It’s fair to say that people are paying attention. Just a few weeks ago, Vermillio was named to the annual TIME100 Most Influential Companies list. In a release, the firm noted that its TraceID™ technology not only provides comprehensive detection and removal capabilities for unauthorized AI-generated material, but also enables IP holders to protect and monetize their content.

While Vermillio’s services are currently geared toward high-profile clients, the company is developing a "freemium" model that offers some of its services at no charge. The company's work suggests that rapid, AI-assisted detection and response to malicious synthetic content is possible – but only if there’s enough motivation to act quickly, not to mention sufficient collaboration between tech platforms and detection services.

The Big Picture: Truth is in crisis

These three stories of fraud, intentional ambiguity, and deception aren’t isolated incidents. They represent an emerging pattern in which generative AI is used to manipulate what we see, hear, and believe. The fourth story, one hopes, is a sign that solutions are starting to emerge.

The stakes are growing. In a world where authenticity is easily faked, our instincts no longer serve us well. The line between real and artificial is fading, and unless we build new defenses – legal frameworks, detection tools, cultural norms – we risk entering a post-truth society.

And yet, the challenge isn’t just technical. It’s emotional. I’ve found myself second-guessing stories I once would have shared, scrutinizing videos that feel a bit too polished (or not quite human), and wondering whether what I’m hearing is what was truly said. If I’m doing this, I’m fairly certain that most readers of Colorado AI News are having to do it, too.

But maybe that’s the shift we need.

Maybe we’re entering an age where a very healthy skepticism becomes not just a virtue but a necessity. Where we teach our kids how to question digital narratives the way we once taught them to read between the lines of a newspaper editorial. Where platforms stop dragging their feet and start labeling synthetic content, and where audiences reward honesty over virality.

There’s no silver bullet. But there is a path forward – a slow, imperfect path of rebuilding trust in the age of illusion. And it starts with acknowledging just how terribly good the lies have gotten.