Wikipedia earlier this month released its list of the 25 most-viewed articles on English Wikipedia in 2023. As always, it's a good sign of the times.

There’s a lot of what you’d expect on the list. The Barbie movie is on there (No. 13, with 18 million views). Taylor Swift, unsurprisingly, made the cut at No. 12 (19.4 million views). Movies, soccer players and celebrities who passed away too young all made it. (Matthew Perry is at No. 17 with 16.4 million views, and Lisa Marie Presley is at No. 22 with 13.7 million.)

But the No. 1 most-viewed article, with a whopping 49.4 million views? It’s Wikipedia’s entry on ChatGPT, a chatbot developed by OpenAI and launched just over a year ago on Nov. 30, 2022. With its clever responses and ability to engage in a humanlike conversation, it captured the public’s attention and set off a wide-ranging conversation that’s still going strong 12 months later.

ChatGPT isn’t the only genAI chatbot out there. Microsoft’s Bing, for instance, began integrating AI into search in February and opened up to more users in May. Google Bard also rolled out to bigger audiences. (In a breakdown of the three services in the spring, CNET’s Imad Khan found ChatGPT to be the best, but noted that all three were learning and changing.) And there are more still, including Character.ai and Claude.ai, the latter created by people who jumped from OpenAI.

Still, it’s ChatGPT that has become the touchstone for many people when they think of AI, which we’re all probably doing a lot more than we did a year ago. Hence its enshrinement atop Wikipedia, the free online encyclopedia and hugely popular resource. Students turn to it for homework help, sports and movie fans use it to settle nagging trivia questions, job seekers research potential employers there, and for most anyone, it’s much too easy to fall down a random-article rabbit hole and not emerge for hours. Wikipedia has received more than 84 billion page views so far in 2023, according to data shared with CNN.

That top ranking for ChatGPT is a clear signal of how much the generative AI tool upended the zeitgeist in 2023 — and how little people still know about what it is or does.  

From zero to No. 1 in one year

ChatGPT came on fast. Last year’s top Wikipedia entries list didn’t include ChatGPT, of course, since it was just days old then. But last year’s list also didn’t include any artificial intelligence-related entries. Deceased cannibal and murderer Jeffrey Dahmer topped the 2022 list with 54.9 million views, thanks to Monster, the Netflix series about his life and crimes.

The entry on ChatGPT isn’t the longest in Wikipedia. But it’s complex, with 210 footnotes cited. It defines what the bot is and gives a little history — ChatGPT gained over 100 million users by January and is now up to 150 million — and discusses its features, training and reception.

The GPT part of the name stands for generative pre-trained transformer, and that training of the AI on enormous troves of data is the bedrock of the technology. With the GPT-4 upgrade in March to the underlying large language model (the tech that powers the chatbot and lets it produce answers with original content), ChatGPT can churn out longer strings of text, respond when given images and avoid some of the pitfalls of earlier versions. With those improvements, ChatGPT became smart enough not only to pass the bar exam, something it could already do, but to score in the top 10%.

That fundamental training of ChatGPT isn’t without controversy. Writers including Game of Thrones author George R. R. Martin are part of a lawsuit against OpenAI, ChatGPT’s parent company, in which the plaintiffs allege that the use of their works to train the AI constitutes copyright infringement.

ChatGPT and Wikipedia: More alike than you think

It’s funny that ChatGPT, of all things, ended up topping the Wikipedia most-read list, because the two have similarities. You don’t have to know anything about a topic to use ChatGPT or Wikipedia for information; that’s why people turn to both of them. They’re a starting place. If you have more in-depth knowledge of a topic, you can zero in on a more specific Wikipedia entry or ChatGPT question.

Wikipedia, however, is assembled by human editors, and it cites its sources. Sometimes Wikipedia entries contain errors, whether accidental or deliberate. Just check the page for a controversial public figure after they die: you’ll often see mean jokes and false descriptions thrown in, though the site cleans these up pretty fast.

But Wikipedia does a very good job of pulling together basics and telling you how to find out more. 

On a good day, that’s what ChatGPT does too. Need to send a sensitive email to your boss or mother-in-law? ChatGPT doesn’t know that person, so it doesn’t know how to personalize the email with the specific details that will help it read better to the recipient. But it can sketch out the basic sentences that a person might use to ask for a meeting or to suggest a switch in family vacation plans.

When I was in school, we had old-fashioned, multivolume, paper-and-ink encyclopedias. And you better believe teachers put hard limits on their use as reference sources. They didn’t want you citing them in a paper — too easy, too lazy, not specific enough information. 

Many teachers have similar rules for Wikipedia. You can start there and get a good sense of your topic, but don’t you dare let that be your resource. It would be nice to think of people using ChatGPT in the same way: as a starting push, an idea generator. Not for, uh, having AI write essays for school assignments that they pass off as their own work.

It’s still early, but nearly 50 million people used Wikipedia to learn more about ChatGPT this year. That number’s likely to grow.

Getting existential

It’s under the “Use and Implications” header that things really start to get interesting.

The Wikipedia entry discusses the controversies surrounding the chatbot, from kids using it to cheat in school to the bot throwing back untrustworthy information or even hallucinations, the term for when a generative AI tool makes up things that sound true. In fact, on Tuesday, Dictionary.com named the AI sense of “hallucinate” as its 2023 word of the year, defining it as “to produce false information contrary to the intent of the user, and present it as if true and factual.”

There’s also the big fear: the idea that AI’s power will lead to a Terminator-esque future, where the machines are no longer under human control. A Scientific American article in October discussed “AI anxiety,” a term describing fears about the rapid rise of generative AI.

Many people are afraid AI will eventually take their jobs, and others have larger fears involving human obsolescence. I admit I have a great deal of that AI anxiety myself; I saw Terminator, after all. And as a Gen Xer, I was raised with the idea that human-caused destruction is just around the corner. It was in our songs, our books, our movies. Shall we play a game? How about global thermonuclear war?

Sam Altman, the OpenAI CEO who in November was fired and rehired in the span of a week, spoke earlier this month on What Now? with Trevor Noah. Noah asked Altman about worries that genAI will provoke the apocalypse. Altman’s answer wasn’t exactly reassuring to worrywarts like me.

“Society has … actually a fairly good, messy but good, process for collectively determining what safety thresholds should be,” Altman told Noah on the podcast. “I think we do as a world need to stare that in the face … this idea that there is catastrophic or potentially even existential risk in a way that just because we can’t precisely define it doesn’t mean we get to ignore it either. And so we’re doing a lot of work here to try to forecast and measure what those issues might be, when they might come, how we would detect them early.”

Thanks, I think? As Binkley famously told Milo in Bloom County, “well, you can just rock me to sleep tonight.”

Wikipedia, of course, may not offer the most nuanced explanation of ChatGPT or AI. But I’m glad the entry hit No. 1 on the Wikipedia list. The more people get a clear picture of this advancing technology, the good, the bad and the unknown of it, the better off we’ll be.

Where will AI be next year?

I tell my teenage daughter and her friends not to worry too much about what careers they’ll have when they grow up, because those jobs probably haven’t been invented yet.

In much the same way, we just have no idea where ChatGPT will be in a year. It seems fair to say it’ll be settling into some terrific uses and, naturally, creating more controversy about things we wish it couldn’t do.

If life were a Twilight Zone episode, we might be able to peer ahead somehow to next year’s list of Wikipedia’s most-viewed articles and learn from it what the world went through in 2024. But we’re still in our world, so instead I asked ChatGPT whether it could anticipate the most-viewed Wikipedia entries for next year. Its answer was both telling and vague, as you might expect.

“I wish I had a crystal ball for that!” ChatGPT responded. “Predicting the future is a bit tricky, especially when it comes to internet trends. But if I had to guess, it might be something related to a major global event, scientific discovery, or a breakthrough in technology. What’s your prediction?”

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.
