DevOps teams are relying more on AI-coding assistants to boost productivity by automating coding tasks, while only the most conscientious scan the final code for security flaws, Forrester warns in its 2024 cybersecurity, risk, and privacy predictions.
The research and advisory firm predicts that inconsistent compliance and governance practices, combined with many DevOps teams experimenting with multiple AI-coding assistants at once to increase productivity, will lead to flawed AI-generated code responsible for at least three publicly admitted breaches in 2024. Forrester also warns that AI code flaws will pose API security risks.
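To make the risk concrete, the minimal sketch below (a hypothetical illustration, not an example from Forrester's report) shows the kind of injection flaw an assistant can suggest when it builds a SQL query from raw user input, next to the parameterized version that scanning the final code is meant to enforce.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The kind of shortcut an assistant may suggest under deadline pressure:
    # user input is concatenated into the SQL string, so a value such as
    # "' OR '1'='1" rewrites the query (classic SQL injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The reviewed version: a parameterized query keeps the input as data,
    # which is what a security scan or code review should insist on.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```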
AI-coding assistants are redefining shadow IT
49% of business and technology professionals with knowledge of AI-coding assistants say their organizations are piloting, implementing, or have already implemented them. Gartner predicts that by 2028, 75% of enterprise software engineers will use AI-coding assistants, up from less than 10% in early 2023.
DevOps leaders tell VentureBeat it’s common to find multiple AI-coding assistants in use across teams as the pressure to produce a high volume of code every day grows. Tighter timelines for more complex coding, combined with the proliferation of more than 40 available AI-coding assistants, is creating a new form of shadow IT in which DevOps teams switch from one AI assistant to another to see which delivers the best performance for a given task. Enterprises are struggling to keep up with demand from their DevOps teams for new AI-coding tools approved for corporate-wide use.
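One way a platform or security team might surface this kind of shadow IT is to inventory the assistant extensions actually installed on developer machines. The sketch below is a simplified, hypothetical example that checks a local VS Code extensions folder against an assumed watchlist; the folder path, extension identifiers, and approved list are illustrative assumptions, not a vetted inventory method.

```python
from pathlib import Path

# Hypothetical watchlist of AI-coding-assistant extension prefixes; a real
# inventory would come from the organization's approved-tooling list.
KNOWN_ASSISTANTS = (
    "github.copilot",
    "tabnine.tabnine-vscode",
    "amazonwebservices.aws-toolkit-vscode",
)
APPROVED = {"github.copilot"}  # assumed corporate-approved set

def audit_vscode_extensions(extensions_dir: Path = Path.home() / ".vscode" / "extensions"):
    """Return AI-assistant extensions found locally, split into approved and unapproved."""
    findings = {"approved": [], "unapproved": []}
    if not extensions_dir.exists():
        return findings
    for entry in extensions_dir.iterdir():
        name = entry.name.lower()
        for assistant in KNOWN_ASSISTANTS:
            if name.startswith(assistant):
                bucket = "approved" if assistant in APPROVED else "unapproved"
                findings[bucket].append(entry.name)
    return findings

if __name__ == "__main__":
    print(audit_vscode_extensions())
```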
AI-coding assistants are available from leading AI and LLM providers, including Anthropic, Amazon, GitHub, GitLab, Google, Hugging Face, IBM, Meta, Parasoft, Red Hat, Salesforce, ServiceNow, Stability AI, Tabnine, and others.
CISOs face a challenging balancing act in 2024
Forrester’s cybersecurity, risk, and privacy predictions reflect a challenging year ahead for CISOs, who will need to balance the productivity gains generative AI provides against the need for greater compliance, governance, and security for AI and machine learning models under development.
Getting compliance right will be essential for protecting intellectual property, the one asset no one wants to put at risk despite the stepwise gains generative AI is delivering today.
How well a CISO and their teams can triangulate innovation, compliance, and governance to give their companies a competitive advantage will be more measurable in 2024 than in any previous year. Balancing generative AI’s productivity gains against its risks, and building reliable guardrails, will likely be a defining issue for every CISO next year as well.
The goal: Achieve AI’s innovation gains while reducing risk
Forrester’s cybersecurity, risk, and privacy predictions for 2024 guide organizations on achieving greater AI innovation gains while reducing the risk of human- and code-based breaches. Taken together, they reflect how urgent it is to get compliance, governance, and guardrails for new AI and ML models right first, so the productivity gains from generative AI-based coding and DevOps tools deliver the greatest benefit at the lowest risk.
“In 2024, as organizations embrace the generative AI (genAI) imperative, governance and accountability will be a critical component to ensure that AI usage is ethical and does not violate regulatory requirements,” writes Forrester in its cybersecurity predictions. “This will enable organizations to safely transition from experimentation to implementation of new AI-based technologies,” the report continues.
Forrester’s 2023 data shows that 53% of AI decision-makers whose organizations have made policy changes regarding genAI are evolving their AI governance programs to support AI use cases.
The following are Forrester’s predictions for 2024:
Social engineering attacks soar as attackers find new ways to use generative AI
FraudGPT was just the start of how attackers will weaponize generative AI and go on the offensive. Forrester predicts social engineering attacks will soar from 74% of all breach attempts today to 90% in 2024, with the human element more heavily targeted than ever.
That’s sobering news for an industry where some of the most devastating ransomware attacks of 2023 started with a phone call. Existing approaches to security awareness training aren’t working. Forrester argues that what’s needed is a more data-driven approach to behavior change: one that quantifies human risk, provides real-time training feedback to employees, and addresses the perceptual gaps they may have in identifying threats.
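Forrester does not publish a formula for quantifying human risk, but the idea can be sketched as a simple weighted score over observable signals. The example below is purely illustrative; the signals and weights are assumptions, not Forrester’s model.

```python
from dataclasses import dataclass

@dataclass
class EmployeeSignals:
    phishing_sim_failures: int       # failed simulations in the last quarter
    reported_suspicious_emails: int  # real or simulated emails reported
    privileged_access: bool          # holds admin or other elevated rights
    overdue_training_modules: int    # security training not yet completed

def human_risk_score(s: EmployeeSignals) -> float:
    """Illustrative 0-100 risk score; weights are assumptions, not Forrester's."""
    score = 10.0 * s.phishing_sim_failures
    score -= 5.0 * s.reported_suspicious_emails    # reporting behavior lowers risk
    score += 15.0 if s.privileged_access else 0.0  # elevated rights raise impact
    score += 5.0 * s.overdue_training_modules
    return max(0.0, min(100.0, score))

# Example: two failed simulations, one report, admin rights, three overdue modules.
print(human_risk_score(EmployeeSignals(2, 1, True, 3)))  # -> 45.0
```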
Cyber insurance carriers will tighten their standards, red-flagging two tech vendors as high risk
Combining richer real-time telemetry with more powerful analytics and genAI tools to analyze it will give insurance carriers the visibility they’ve needed for years to reduce their risk. Forrester observes that carriers will also gain insights from security services and tech partnerships, along with more data-driven evidence, including forensics from insurance claims.
Given the growing number and severity of massive one-to-many breaches like MOVEit, Forrester predicts security vendors will be assessed with risk scores that also feed into the insurance premiums of customers seeking coverage.
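As a rough illustration of how such scoring could flow into premiums (not a carrier’s actual model), the sketch below derives a vendor risk score from a few assumed inputs and uses the riskiest vendor in a customer’s stack to load the base premium; every input, weight, and threshold is an assumption.

```python
def vendor_risk_score(breach_count_3y: int, mean_days_to_patch: float,
                      exposed_services: int) -> float:
    """Illustrative 0-100 vendor risk score; inputs and weights are assumptions."""
    score = 20.0 * breach_count_3y + 0.5 * mean_days_to_patch + 2.0 * exposed_services
    return min(100.0, score)

def adjusted_premium(base_premium: float, vendor_scores: list[float]) -> float:
    """Scale a base premium by the riskiest vendor in the customer's stack."""
    if not vendor_scores:
        return base_premium
    loading = 1.0 + max(vendor_scores) / 100.0  # up to a 2x loading at score 100
    return base_premium * loading

# Example: one vendor with a prior breach, slow patching, and broad exposure.
scores = [vendor_risk_score(1, 30.0, 5), vendor_risk_score(0, 14.0, 2)]
print(adjusted_premium(50_000.0, scores))  # max score 45 -> 50,000 * 1.45 = 72500.0
```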
Expect to see a ChatGPT-based app fined for mishandling personally identifiable information (PII)
Implicit in this prediction is how vulnerable identity and access management (IAM) systems are to attack. Active Directory (AD) is one of the most popular targets of any identity-motivated attack. Approximately 95 million Active Directory accounts are attacked daily, as 90% of organizations use the identity platform as their primary authentication and user authorization method. John Tolbert, director of cybersecurity research and lead analyst at KuppingerCole, writes in the report Identity & Security: Addressing the Modern Threat Landscape: “Active Directory components are high-priority targets in campaigns, and once found, attackers can create additional Active Directory (AD) forests and domains and establish trusts between them to facilitate easier access on their part. They can also create federation trusts between entirely different domains.”
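For defenders, one practical counterpart to that warning is periodically enumerating the trusts a domain actually has and flagging anything unexpected. The sketch below is an assumed example that uses the open-source ldap3 library to list trustedDomain objects from a hypothetical domain controller; the host, credentials, base DN, and expected-partner list are all placeholders.

```python
import os
from ldap3 import Server, Connection, ALL, SUBTREE

# Hypothetical connection details; a real audit would use the organization's
# domain controller and a read-only service account.
server = Server("ldap://dc.example.com", get_info=ALL)
conn = Connection(
    server,
    user="ad-audit@example.com",
    password=os.environ["AD_AUDIT_PASSWORD"],  # assumed environment variable
    auto_bind=True,
)

# Each inter-domain or inter-forest trust is represented by a trustedDomain
# object under the System container of the domain's naming context.
conn.search(
    search_base="CN=System,DC=example,DC=com",
    search_filter="(objectClass=trustedDomain)",
    search_scope=SUBTREE,
    attributes=["trustPartner", "trustDirection", "trustType", "whenCreated"],
)

EXPECTED_PARTNERS = {"partner.example.com"}  # assumed known-good trust list

for entry in conn.entries:
    partner = str(entry.trustPartner)
    # A trust that is not on the expected list is exactly the kind of
    # attacker-created forest or domain trust the report warns about.
    status = "OK" if partner in EXPECTED_PARTNERS else "UNEXPECTED"
    print(status, partner, entry.trustDirection, entry.whenCreated)
```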
Forrester notes that OpenAI continues to draw regulatory scrutiny, with an ongoing investigation in Italy and a new lawsuit in Poland over several potential GDPR violations. As a result, the European Data Protection Board has launched a task force to coordinate enforcement actions against OpenAI’s ChatGPT. In the U.S., the FTC is also investigating OpenAI. While OpenAI has the technical and financial resources to defend itself against regulators, other third-party apps running ChatGPT do not.
Senior-level zero-trust roles and titles will double across the global public and private sectors
Currently, 92 zero-trust positions are advertised on LinkedIn in the U.S. and 151 worldwide. Forrester’s optimistic forecast that zero-trust positions will double in the next twelve months is supported by broader adoption of the NIST Zero Trust Architecture framework across its client base. Forrester predicts zero-trust adoption will also increase demand for cybersecurity professionals with engineering, governance, strategy, and leadership expertise. These positions will sit within federal agency security organizations and become a staple of the staffing and services firms that augment those agency functions, as well as the private sector enterprises responsible for supporting 85% of the U.S.’s critical infrastructure. Forrester advises its clients to prepare by reviewing the requirements for a zero-trust role at their organizations and identifying a cohort of individuals to pursue zero-trust certifications.