How Generative AI Has Evolved

The safety risks of generative AI and its potential uses are mainstream now in ways they weren’t in November 2022.

Since OpenAI released ChatGPT on Nov. 30, 2022, the generative AI chatbot has helped make artificial intelligence a $207 billion industry. Two months after its launch, ChatGPT was recognized as the fastest-growing consumer application in history.

Due to the quick and massive success of ChatGPT, generative AI has become a mainstream term and changed how many enterprise decision-makers approach software purchases. Additionally, generative AI has become an enticing tool for workplace strategy and operations; it’s currently unclear whether it will permanently shift how work is done or prove a temporary workplace trend.

Since ChatGPT’s launch, enterprise technology companies and governments have discussed the potential safety problems and security risks of generative AI. ChatGPT has been temporarily or permanently banned at some companies for exposing internal data; other organizations have embraced adding ChatGPT-powered applications to their software products. In this article, we’ll look back on the past year to explore ChatGPT’s influence on enterprise software.

Generative AI became a major player in enterprise software

ChatGPT has opened up a massive conversation — and massive spending — in enterprise software, as companies work to deploy chatbots and other tools enabled by large language models. Hyperscalers and e-learning platforms have released a wide array of classes and scholarships to train skilled workers in the generative AI space and promote generative AI-driven roles as viable career paths. Companies have also begun creating policies for AI use that are distinct from the rules surrounding other enterprise software utilization.

However, AI adoption has not yet caught on as much across the general population. A Pew Research Center survey published in August 2023 found that, of 5,057 Americans surveyed, only 24% of those who had heard of ChatGPT had used it. More people who had heard of ChatGPT used the generative AI chatbot for entertainment (20%) than for work (16%).

ChatGPT sparked large-scale experimentation with generative AI

“It won’t be an exaggeration to say ChatGPT has become an unofficial mascot for AI over the past year for millions of consumers and business users,” Arun Chandrasekaran, distinguished vice president and analyst at Gartner, told TechRepublic via email. “It has made it possible for business users to experiment with use cases rapidly with LLMs and advance forward with automation across a variety of business functions.”

In the year since ChatGPT was released, OpenAI has leveraged the popularity of its product with monetizable versions such as ChatGPT Plus and ChatGPT Enterprise. These products have made it easy to overlook that OpenAI is technically a nonprofit intended to prevent dangerous outcomes from theoretical artificial general intelligence. This may create tension between rapid product growth and that ethical mission. In fact, OpenAI’s future was in question shortly before ChatGPT’s one-year anniversary when CEO Sam Altman briefly joined Microsoft (which has committed to providing OpenAI with $10 billion in funding). He returned to OpenAI under “an agreement in principle” with the board on Nov. 22.

Other more commercially minded companies such as Google, Microsoft and Amazon now have more robust generative AI chatbot products of their own — such as Google Bard, Microsoft’s Copilot and Amazon Q, respectively — than they did before ChatGPT’s release.

“Competition is certainly heating up with conversational chatbots available from others such as Google and Anthropic,” said Chandrasekaran. “In addition, there has been an explosion in both closed-source and open-source LLMs in the past year, many of which were a response to ChatGPT’s viral adoption. While the demand for the consumer version of the model has seen an uneven demand, often varying across months, the demand for LLMs in the enterprise continues to be high.”

Chandrasekaran predicts that the high cost of running inference for LLMs — the process of generating outputs from a trained model — may mean more companies find success with midsize models.

Generative AI solutions like ChatGPT are predicted to add up to $4.4 trillion to the global economy annually, according to a McKinsey research report published in June 2023. This rapid growth means generative AI is proliferating and specializing for different use cases. McKinsey predicts specialized generative AI applications will create more value than broader, general-purpose ones.

Generative AI raised ethical, copyright and security questions

Over the past year, OpenAI’s mission of creating “safe and beneficial” artificial intelligence has sparked a lot of conversation. The technology that OpenAI’s charter talks about — artificial general intelligence that can take on tasks with the flexibility of a human worker — still doesn’t exist. But OpenAI has grappled with how AI perpetuates bias and what can be done to mitigate biased training data and outcomes.

Some U.S. companies have signed a voluntary list of assurances regarding generative AI safety and how they will prevent people from using this technology to create misinformation. The EU is also working on guidelines that will likely influence how generative AI products, including ChatGPT, evolve in the next year.

Use of generative AI in cybersecurity

While some generative AI tools have become part of different organizations’ cybersecurity defense mechanisms, the same technology has also been used by cybersecurity threat actors. For example, ChatGPT can be used to write phishing emails more quickly than threat actors might otherwise be able to complete such a task, although AI-generated phishing emails are not necessarily as effective as those written by humans. Overall, generative AI has contributed both positively and negatively to the cybersecurity landscape.

The black box problem

Despite its many advantages, one particular problem users are facing with AIs like ChatGPT is the “black box” issue: people can’t see how AI-powered chatbots arrive at their decisions.

“LLMs — large language models like GPT-4, on which ChatGPT is built — make decisions that affect users in a way that is opaque, unreliable and potentially unfair, and the system is found to be, for example, perpetuating harmful biases only after the fact,” Chandrasekaran said.

“Questions around data ownership and how that data should be treated by LLMs is still a very subjective decision, often varying across jurisdictions,” he added.

Copyright and ChatGPT

Some authors have sued OpenAI, claiming that ChatGPT infringes on the copyright of their works because the model was trained on the authors’ works available on the internet. In another example, The New York Times considered suing OpenAI in August after licensing negotiations stalled; the newspaper was concerned that OpenAI could become a competitor because of ChatGPT’s ability to summarize news articles.

SEE: Why some artists hate AI-generated art (TechRepublic)

What’s next for ChatGPT?

OpenAI has been expanding ChatGPT’s capabilities, including releasing speech functionality in November 2023. Additionally, OpenAI released GPT-4 Turbo, a version of the foundation model with more recent knowledge, greater context and other enhancements, on Nov. 6.

“We should expect ChatGPT to handle more modalities in the future beyond text and code,” explained Chandrasekaran. “Already, image integration has been a huge plus, and we can expect speech and video capabilities in 2024 and beyond. In addition, we should expect more autonomous actions from ChatGPT in the midterm, as OpenAI has signaled its intentions to deliver more autonomous agent features.”

Regulations around generative AI have developed over the last year and can be expected to mature. On Nov. 26, the U.S., U.K. and other countries among the Group of Seven (G7) nations released security-by-design guidelines for AI cybersecurity. These guidelines constitute the first international agreement regarding the security of AI and may shape the development of ChatGPT and other generative AI chatbots in the future.

Note: TechRepublic reached out to OpenAI and Microsoft for commentary.