Three years ago today, on November 30, 2022, OpenAI quietly put a new “research preview” called ChatGPT on its website. It was announced in a now-quaint-but-fascinating blog post and framed as an experiment to gather feedback, not as a splashy product launch or a “next iPhone” moment.
Five days later, more than one million people had signed up, according to a post from CEO Sam Altman, and within weeks it was clear this low-key experiment had become something much bigger. Three years on, it is hard to remember what the tech and media worlds felt like before people started saying, “Let me just ask ChatGPT real quick.”
The scale still amazes me. A recent study by OpenAI’s economic research team and academic partners, published as How People Use ChatGPT, estimates that by July of this year ChatGPT was being used weekly by around 10% of the world’s adult population. Independent summaries of that work put the figure at roughly 700 million weekly users sending more than 18 billion messages per week worldwide. That is an astonishing adoption curve for a tool that didn't even exist in the public imagination three Thanksgivings ago.
The technology has moved just as fast. ChatGPT began as a text-only interface on top of early GPT models. Since then it has cycled through more capable systems, including GPT-4 and GPT-4o, which can handle text, images and audio and power features like code writing, document analysis and real-time voice interaction. Under the hood, it is no longer “just a chatbot,” but a front door to a family of increasingly capable, multimodal models.
Public opinion has been racing to catch up. A September 2025 survey from the Pew Research Center found that 50% of U.S. adults say the increased use of AI in daily life makes them more concerned than excited, 10% are more excited than concerned, and 38% feel equally both. A majority, 57%, rate the risks of AI as high, compared with just 25% who say the benefits are high.
Globally, the picture is similar – although respondents elsewhere appear to be less concerned than Americans. (Which will have to be the topic of a different column.) Across 25 countries, Pew reports that a median of 34% of adults are more concerned than excited about AI, 42% are equally concerned and excited, and just 16% are more excited than concerned. In many of those countries, the “equally concerned and excited” middle is the largest group, and in none does the “mostly excited” camp come out on top. (Those numbers come from Pew’s October 2025 report on global views of AI.)
Overall – even here in the U.S., where concern runs high – people are using the tools anyway. A national survey from Elon University’s Imagining the Digital Future Center found that about half of American adults now use large language models like ChatGPT, Gemini, Claude and Copilot — one of the fastest adoption curves for a major technology in U.S. history.
That mix of heavy use, real enthusiasm, and real worry is exactly what today’s Model Behavior cartoon is referencing with the question, “Has there ever been a more controversial 3-year-old?”
From where I sit, that tension shows up in very practical ways. I have watched first responders, teachers, founders, artists, and students wrestle with the same basic questions: When does this help, and when does it hurt? Who does this help, and who does it hurt? What does “responsible use” actually look like in a small organization with very limited time and resources?
Colorado AI News launched in the summer of 2024, when ChatGPT was already shifting – at least for many of us – from curiosity to everyday tool. From the start, its mission has been twofold: to tell Colorado’s own AI stories and to help readers make sense of the broader AI landscape — the tools, ethics, governance, and cultural debates that shape how this technology shows up in everyone's lives.
Three years in, ChatGPT is no longer new, but it – and its generative AI siblings – continues to change so quickly that the work of understanding it never gets old. We will keep covering how tools like this are changing work, education, government, and creativity, both here in Colorado and beyond.
By the time this controversial three-year-old blows out four candles, my bet is that AI will feel even more ordinary – and equally unavoidable – in our daily tools and our daily lives, ever more tightly woven into schoolwork, office software, public services, entertainment, and the media, and even more fought over in courtrooms, legislatures, and union negotiations.
We may not understand the black-box machinery under the hood much better in another year, but we'll be living with its consequences every day. For now, it’s enough just to notice the moment: Three years in, we’re still debating what kind of grown-up this technology will become, even as we already lean on the world's most infamous toddler for our amusement, homework, meetings, communications, strategic planning, and more.
