Dustin Chambers/Bloomberg via Getty Images

With all the advances and cultural impact of artificial intelligence (AI) this year, it would seem fair to declare 2023 as “The Year of AI” — except it’s all been done before.

As this academic journal reports, the “year of AI” was declared 43 years ago, back in 1980. AI has been with us for a very long time. Decades ago, I did an academic thesis on AI ethics. In 1986, I wrote an article for the long-defunct Computer Design Magazine entitled “Artificial Intelligence as a Systems Component”. And then, in 1988, I introduced two AI-based products for the Mac.

Also: AI in 2023: A year of breakthroughs that left no human thing unchanged

And even then, AI was more than 30 years old. We can trace some of the earliest AI activities to Professor John McCarthy of Stanford, MIT, and Dartmouth. In 1955, he co-wrote the proposal for the Dartmouth workshop that coined the term “artificial intelligence”; he went on to found SAIL, the Stanford AI Lab, and in 1958, he invented the lovely LISP (one of my all-time favorite programming languages).

So, by 2023, AI has been around for at least 68 years. And that doesn’t count speculative fiction: Isaac Asimov started to contemplate AI ethics 15 years before that, in 1940.

And yet, I’d be hard-pressed to argue against calling 2023 the Year of AI. It’s been quite a year.

What changed?

AI has been in use for a very long time. Whether it’s in expert systems, diagnostic tools, video games, navigation systems, or many other applications, AI has been put to productive use for decades.

But it’s never been put to use quite like it has this year. This is the year that true generative AI has come into its own. While many years (1980, I’m looking at you) could lay claim to the “Year of AI” moniker, there is no doubt that 2023 is the “Year of Generative AI”.

Also: How does ChatGPT actually work?

The big difference, the one that has led to the enormous explosion of truly useful AI this year, is the way we’re able to train AIs. Until now, most AI training has been supervised: each AI was fed specific, hand-selected information by its designers, and that information made up the AI’s entire knowledge corpus. That narrow, supervised pre-training limited both what the AI knew about and what it could do.

By contrast, we’re now in the era of large language models (LLMs), where the pre-training is unsupervised. Rather than feeding in a limited set of domain-specific information and calling it good, AI vendors like OpenAI have been feeding the AIs pretty much everything: the entire internet and just about any other digital content they can get their hands on.

This process allows the AI to produce astonishingly varied material with a breadth that was impossible before.
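To make that distinction concrete, here’s a toy sketch of the self-supervised objective at the heart of LLM pre-training, written in PyTorch. Everything here is invented for illustration and wildly simplified (a real LLM is a transformer trained over trillions of tokens, not a tiny recurrent model over one repeated sentence), but the core idea is the same: the only “label” is the next token of the raw text itself.

```python
# Toy self-supervised pre-training: predict the next character of raw text.
# Illustrative only; real LLMs are transformers trained on trillions of tokens.
import torch
import torch.nn as nn

text = "the quick brown fox jumps over the lazy dog " * 50
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)  # stand-in for a transformer
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        hidden, _ = self.rnn(self.embed(x))
        return self.head(hidden)

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    starts = torch.randint(0, len(data) - 65, (16,))
    batch = torch.stack([data[s:s + 65] for s in starts])
    inputs, targets = batch[:, :-1], batch[:, 1:]  # shift by one: next-token labels
    loss = nn.functional.cross_entropy(
        model(inputs).reshape(-1, len(vocab)), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

No human labeled anything here; the training signal falls out of the text itself, which is why the approach scales to “pretty much everything.”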

Aiding this process have been vast improvements in processor performance and storage. Back in 1986, when I wrote my article about AI as a systems component, you could get a hard drive that was the size of two microwaves and the weight of a full refrigerator for $10,000 (roughly $27K today). It held 470 megabytes. Not gigabytes, not terabytes — megabytes.

Also: Storage improvements have outperformed Moore’s Law by a factor of 800%

Today, by contrast, you can pick up a 20TB internal enterprise NAS hard drive from Amazon for $279. The combination of the cloud, broadband, vastly faster processors in the form of both CPUs and GPUs, and much larger RAM pools all make the processing power of LLMs possible.
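Putting those two data points side by side shows just how far the floor has dropped. Here’s the back-of-the-envelope arithmetic, using the figures above:

```python
# Cost per gigabyte, 1986 vs. 2023, using the figures in this article
cost_1986, capacity_1986_gb = 10_000, 0.47   # $10K drive, 470 MB
cost_2023, capacity_2023_gb = 279, 20_000    # $279 drive, 20 TB

per_gb_1986 = cost_1986 / capacity_1986_gb   # about $21,000 per gigabyte
per_gb_2023 = cost_2023 / capacity_2023_gb   # about $0.014 per gigabyte

# Roughly a 1.5-million-fold drop in nominal terms (closer to 4-million-fold
# once you adjust that 1986 price to roughly $27K in today's dollars).
print(f"{per_gb_1986 / per_gb_2023:,.0f}x cheaper per gigabyte")
```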

An example

To give you an example of this difference, let’s use one of the products I introduced all those years ago. House Plant Clinic was an expert system whose domain knowledge came from a horticulturalist. My other product at the time was Intelligent Developer, the expert system development environment we used to build House Plant Clinic.

The process was painstaking. Through a very long series of interviews, another engineer and I elicited rules, facts, and best practices from the plant expert, and then encoded them into the knowledge base. At the plant expert’s direction, we also had illustrations produced for situations in which users might need to see a visual.

[Screenshot: House Plant Clinic]

Screenshot by David Gewirtz/ZDNET

House Plant Clinic’s scope of knowledge consisted of what we had encoded in the expert system, nothing more and nothing less. But it worked. If you had a question and your question fell into the confines of the knowledge we had encoded, you could get an answer and be confident it was correct. After all, the knowledge provided had been vetted by a plant expert.
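If you’ve never seen one, a hand-encoded rule base looks something like the hypothetical sketch below. The rules are invented for illustration (they are not House Plant Clinic’s actual knowledge base), but the hard boundary in the final branch is exactly the trade-off I’m describing.

```python
# Hypothetical sketch of an expert system's hand-encoded rule base.
# These rules are invented for illustration, not the actual product's knowledge.
RULES = [
    ({"leaves": "yellow", "soil": "wet"}, "Likely overwatering; let the soil dry out."),
    ({"leaves": "yellow", "soil": "dry"}, "Likely underwatering; water thoroughly."),
    ({"leaves": "sticky"}, "Possible scale insects or aphids; inspect the stems."),
]

def diagnose(observations: dict) -> str:
    # Fire the first rule whose conditions all match the user's answers.
    for conditions, advice in RULES:
        if all(observations.get(key) == value for key, value in conditions.items()):
            return advice
    # The defining limitation: outside the encoded rules, the system knows nothing.
    return "No diagnosis; this case is outside the encoded knowledge."

print(diagnose({"leaves": "yellow", "soil": "wet"}))  # vetted answer
print(diagnose({"leaves": "purple"}))                 # honest "I don't know"
```

Everything such a system says has been vetted by an expert, and everything else gets an honest “I don’t know”, which is the mirror image of a chatbot’s behavior.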

Now, let’s look at ChatGPT. I asked ChatGPT this question:

I have a house plant that’s sick. Ask me step by step questions, requiring only one answer per question.
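(If you want to reproduce this programmatically, the same prompt maps directly onto OpenAI’s chat API. Here’s a minimal sketch using the openai Python package; the model name is illustrative, and you’ll need an API key in your environment.)

```python
# Minimal sketch: sending the house plant prompt through OpenAI's chat API.
# Assumes the openai package and an OPENAI_API_KEY environment variable;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "I have a house plant that's sick. Ask me step by step "
                   "questions, requiring only one answer per question.",
    }],
)
print(response.choices[0].message.content)  # the model's first question
```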

It did a fair job of asking questions about the moistness of the soil, the condition of the leaves, and so on. Although it didn’t volunteer an image, when I asked it to show me an image of pests that might be found on a house plant, along with their names, I got a much more advanced image:

[DALL-E-generated image of house plant pests, including mealybugs and scale insects]

Screenshot by David Gewirtz/ZDNET

That said, nobody — not even Google — has any idea what a “KRIDEFLIT” is. As we have seen over and over, generative AI does have a bit of a truthiness problem.

Also: I fact-checked ChatGPT with Bard, Claude, and Copilot – and this AI was the most confidently incorrect

So, while ChatGPT can speak confidently on almost any topic, our much older expert system-based project had a much better chance of being accurate. One was created and vetted by an actual subject matter expert, while today’s chatbot generates information from a giant pool of unqualified data.

The generative AI that we have been using this year can do so much more, but all magic comes with a price.

Pandora’s box

Generative AI is amazing. This year, as part of my process of learning and testing the technology to report back to you, I used generative AI to help me set up an Etsy store, to help me create album art for my EP, to help my wife’s e-commerce business by creating custom social marketing images, to create a WordPress plugin, to debug code, to do detailed sentiment analysis, and so much more.

Also: Generative AI can save marketers 5 hours weekly, as research finds productivity gains for the future

But generative AI is not without its problems. As we’ve shown, it has a severe accuracy problem. You can’t trust what the AI produces. Because it’s been trained on such a wide corpus of knowledge, it’s incredible. But because it’s been trained on such a wide corpus of knowledge, it has been polluted by what we humans write and publish.

That issue brings us to bias and discrimination. This article is already running long, so rather than try to rephrase what my colleagues have written, I’m going to point you to some of their excellent thought pieces on this subject.

And then there are the jobs. As far back as six years ago, I sat down with my technology press colleague Bob Reselman to discuss these concerns, and that was way before ChatGPT was actively convincing white-collar workers to worry about their futures. Earlier this year, I discussed a real concern about how ChatGPT and its ilk are likely to replace knowledge workers en masse.

Today, ChatGPT acts like a particularly talented intern with an attitude problem. It’s helpful, but only when it wants to be. But as this technology evolves, it will be able to handle larger problems with more nuance, and then we’ll have larger problems.

Also: Is AI in software engineering reaching an ‘Oppenheimer moment’?

It’s one thing for me, a guy with a two-person company, to rely on AI as a force multiplier for my time. But when bigger companies decide they’d rather save money and use AI services, a lot of folks will lose their jobs.

This trend will start with the entry-level positions, because ChatGPT is basically an entry-level worker. But then, three other trends will follow:

  1. There will be fewer and fewer experienced workers because not enough beginners will be able to enter the workforce.
  2. AIs will become more sophisticated, and companies will feel comfortable replacing $100,000-a-year workers with $100-a-month AI subscriptions — even if the work output by the AI isn’t quite as clean, sophisticated, nuanced, or accurate as the work produced by paid professionals.
  3. Work quality and output will decline, along with accuracy, having a ripple effect throughout the rest of the economy and society.

In a recent article, I said the following:

We are standing on the cusp of a new era, as transformative and different and empowering and problematic as were the industrial revolution, the PC revolution, and the dawn of the Internet. The tools and methodologies we once relied upon are evolving, and with them, our responsibilities and ethical considerations expand.

The good, bad, and ugly

We started 2023 with “holy cow, I can make it write a Star Trek story,” and “holy cow, I can make it talk like a pirate.” By the end of the year, we had a much better picture of the good, the bad, and the ugly.

On the good side, we now have a helpful, if unreliable, personal assistant that can save us time, help us resolve problems, and get more work done.

Also: These 5 major tech advances of 2023 were the biggest game-changers

On the bad side, we have an existential job threat to all knowledge workers and an automated bias reflector that taps into our collective zeitgeist and sometimes chooses the shoulder with the devil instead of the one with our better angels.

As for the ugly, there is work to be done:

  • Finding a way to boost accuracy without nerfing effectiveness with too many guardrails.
  • Presenting useful information and illustrations without plagiarizing the folks whose jobs it puts at risk.
  • Preventing the misuse of AI for election manipulation and other nefarious activities.
  • Taking input and generating output that’s long enough to have real meaning.
  • Moving into other media, like video generation, with results as astonishing as the image generation tools.
  • Helping students learn without giving them an unbeatable way to cheat at their homework.
  • And on and on and on.

AI has blossomed in 2023 unlike any other year in the half-century or more it’s been with us. The technology has opened the door to powerful tools, but also terrifying consequences.

What do you think of 2023, and what do you expect, hope for, and fear for 2024? Let us know in the comments below. I’m only writing about the generative AI transformation of 2023. If you’d like to look at some broader trends, this ZDNET article is a great place to start.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

