
Generative artificial intelligence (AI) is magic to the untrained eye. 

From summarizing text to creating pictures and writing code, tools like OpenAI’s ChatGPT and Microsoft’s Copilot produce what seem like brilliant solutions to challenging questions in seconds. However, the magical abilities of generative AI can come with a side order of unhelpful tricks.

Also: Does your business need a chief AI officer?

Whether it’s ethical concerns, security issues, or hallucinations, users must be aware of the problems that can weaken the benefits of emerging technology. Here, four business leaders explain how you can overcome some of the big concerns with generative AI.

1. Exploit new opportunities in an ethical manner

Birgitte Aga, head of innovation and research at Munch Museum in Oslo, Norway, says a lot of the concerns with AI are associated with people not understanding its potential impact — and with good reason.

Even a high-profile generative AI tool such as ChatGPT has been publicly available for just over 12 months. While many people will have dabbled with the technology, few enterprises have used the tool in a production environment.

Aga says organizations should give their employees the opportunity to see what emerging technologies can do in a safe and secure manner. “I think lowering the threshold for everybody to take part and engage is key,” she says. “But that doesn’t mean doing it uncritically.”

Aga says that as your employees discuss how AI can be used, they should also consider some of the big ethical issues, such as bias, stereotyping, and technological limitations.

Also: AI safety and bias: Untangling the complex chain of AI training

She explains in a video chat with ZDNET how the museum is working with technology specialist TCS to find ways that AI can be used to help make art more accessible to a broader audience.

“With TCS, we genuinely have alignment in every meeting when it comes to our ethics and morals,” she says. “Find collaborators that you really align with on that level and then build from there, rather than just finding people that do cool stuff.”

2. Build a task force to mitigate risks

Avivah Litan, distinguished VP analyst at Gartner, says one of the key issues to be aware of is the pressure for change from people outside the IT department.

“The business is wanting to charge full steam ahead,” she says, referring to the adoption of generative AI tools by professionals across the organization, with or without the say-so of those in charge. “The security and risk people are having a hard time getting their arms around this deployment, keeping track of what people are doing, and managing the risk.”

Also: 64% of workers have passed off generative AI work as their own

As a result, there’s a lot of tension between two groups: the people who want to use AI, and the people who need to manage its use.

“No one wants to stifle innovation, but the security and risk people have never had to deal with something like this before,” she says in a video chat with ZDNET. “Even though AI has been around for years, they didn’t have to really worry about any of this technology until the rise of generative AI.”

Litan says the best way to allay concerns is to create a task force for AI that draws on experts from across the business and which considers privacy, security, and risk.

“Then everyone’s on the same page, so they know what the risks are, they know what the model’s supposed to do, and they end up with better performance,” she says.

Also: AI in 2023: A year of breakthroughs that left no human thing unchanged

Litan says Gartner research suggests that two-thirds of organizations have yet to set up a task force for AI. She encourages all companies to create this kind of cross-business team.

“These task forces maintain a common understanding,” she says. “People know what to expect and the business can create more value.”

3. Restrain your models to reduce hallucinations

Thierry Martin, senior manager for data and analytics strategy at Toyota Motors Europe, says his biggest concern with generative AI is hallucinations.

He’s seen these kinds of issues first-hand when he’s tested generative AI for coding purposes.

Going beyond personal explorations, Martin says enterprises must pay attention to the large language models (LLMs) they use, the inputs they require, and the outputs they push out.

“We need very stable large language models,” he says. “Many of the most popular models today are trained on so many things, like poetry, philosophy, and technical content. When you ask a question, there’s an open door to hallucinations.”

Also: 8 ways to reduce ChatGPT hallucinations

In a one-to-one video interview with ZDNET, Martin stresses that businesses must find ways to create more restrained language models.

“I want to stay within the knowledge base that I’m providing,” he says. “Then, if I ask my model something specific, it will give me the right reply. So, I would like to see models that are more tied to the data I provide.”
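
What Martin describes is essentially retrieval grounding: retrieve passages from a knowledge base you control, then instruct the model to answer only from those passages. Below is a minimal sketch of that pattern in Python; the sample documents, the naive keyword-overlap retrieval, and the prompt wording are all illustrative assumptions, not anything Toyota or Snowflake has published. A production system would typically swap the keyword scoring for embeddings and a vector store.

```python
# Minimal sketch of the "stay within the knowledge base" pattern:
# retrieve only from documents you provide, then confine the model
# to those passages. The knowledge base and scoring are illustrative.

KNOWLEDGE_BASE = [
    "Orders placed before 14:00 CET ship the same business day.",
    "Warranty claims must include the original purchase invoice.",
    "Replacement parts are stocked at the Brussels distribution centre.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question: str) -> str:
    """Build a prompt that restricts the model to the supplied passages."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question, KNOWLEDGE_BASE))
    return (
        "Answer using ONLY the passages below. If the answer is not "
        "in the passages, reply 'I don't know.'\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to whichever LLM you use.
    print(build_grounded_prompt("When do orders ship?"))
```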

Martin is interested in hearing more about pioneering developments, such as Snowflake’s collaboration with Nvidia, where both firms are creating an AI factory that helps enterprises turn their data into custom generative AI models.

“For example, an LLM that is perfect at making SQL queries or Python code is something that is interesting,” he says. “ChatGPT and all these other public tools are good for the casual user. But if you connect that kind of tool to enterprise data, you must be cautious.”

4. Progress slowly to temper expectations

Bev White, CEO of recruitment specialist Nash Squared, says her big concern is that the practical reality of using generative AI might be very different from the vision.

“There’s been a lot of hype,” she says in a video conversation with ZDNET. “There’s also been a lot of scaremongers saying jobs are going to be lost and AI is going to create mass unemployment. And there’s also all the fears about data security and privacy.”

White says it’s important to recognize that the first 12 months of generative AI have been characterized by big tech companies racing to refine and update their models.

“These tools have already gone through a lot of iterations — and that’s not by accident,” she says. “People who use the technology are discovering upsides, but they also need to watch out for changes as each iteration comes out.”

Also: The 3 biggest risks from generative AI – and how to deal with them

White advises CIOs and other senior managers to proceed with caution. Don’t be scared about taking a step back, even if it feels like everyone else is rushing forward.

“I think we need something tangible that we can use as guardrails. The CISOs in organizations must start thinking about generative AI — and our evidence suggests they are. Also, regulation needs to keep up with the pace of change,” she says.

“Maybe we need to go a bit slower while we figure out what to do with the technology. It’s like inventing an amazing rocket, but not having the stabilizers and security systems around it before you launch.”

