

Security leaders’ intentions aren’t matching up with their actions to secure AI and MLOps, according to a recent report.

An overwhelming majority of IT leaders, 97%, say that securing AI and safeguarding systems is essential, yet only 61% are confident they’ll get the funding they need. Although 77% of the IT leaders interviewed said they had experienced some form of AI-related breach (not specifically to models), only 30% have deployed a manual defense for adversarial attacks in their existing AI development, including MLOps pipelines.

Just 14% are planning and testing for such attacks. Amazon Web Services defines MLOps as “a set of practices that automate and simplify machine learning (ML) workflows and deployments.”

IT leaders are growing more reliant on AI models, making them an attractive attack surface for a wide variety of adversarial AI attacks. 


On average, IT leaders’ companies have 1,689 models in production, and 98% of IT leaders consider some of their AI models crucial to their success. Eighty-three percent report prevalent use of AI across all teams within their organizations. “The industry is working hard to accelerate AI adoption without having the proper security measures in place,” write the report’s analysts.

HiddenLayer’s AI Threat Landscape Report provides a critical analysis of the risks faced by AI-based systems and the advancements being made in securing AI and MLOps pipelines.

Defining Adversarial AI

Adversarial AI’s goal is to deliberately mislead AI and machine learning (ML) systems so they are worthless for the use cases they’re being designed for. Adversarial AI refers to “the use of artificial intelligence techniques to manipulate or deceive AI systems. It’s like a cunning chess player who exploits the vulnerabilities of its opponent. These intelligent adversaries can bypass traditional cyber defense systems, using sophisticated algorithms and techniques to evade detection and launch targeted attacks.”

HiddenLayer’s report defines three broad classes of adversarial AI:

Adversarial machine learning attacks. These attacks exploit vulnerabilities in algorithms, with goals that range from modifying a broader AI application or system’s behavior to evading AI-based detection and response systems to stealing the underlying technology. Nation-states practice espionage for financial and political gain, looking to reverse-engineer models to extract model data and to weaponize the models for their own use.
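The report doesn’t prescribe code, but a minimal sketch helps make the evasion class concrete. The snippet below, assuming a trained PyTorch image classifier, uses the well-known fast gradient sign method (FGSM) to nudge an input just enough to change the model’s prediction; `model`, `image` and `true_label` are hypothetical placeholders, not anything taken from HiddenLayer’s report.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged in the direction that most increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()  # tiny per-pixel step along the gradient sign
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: a perturbation imperceptible to a human can flip the prediction.
# adversarial_image = fgsm_perturb(model, image, true_label)
# print(model(image).argmax(dim=1), "->", model(adversarial_image).argmax(dim=1))
```

The point of the sketch is how little effort an evasion attack can require once an attacker has access to a model or a close surrogate of it.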

Generative AI system attacks. The goal of these attacks often centers on targeting the filters, guardrails, and restrictions designed to safeguard generative AI models, including every data source and the large language models (LLMs) they rely on. VentureBeat has learned that nation-state attacks continue to weaponize LLMs.

Attackers consider it table stakes to bypass content restrictions so they can freely create prohibited content the model would otherwise block, including deepfakes, misinformation or other types of harmful digital media. Gen AI system attacks are a favorite of nation-states attempting to influence U.S. and other democratic elections globally as well. The 2024 Annual Threat Assessment of the U.S. Intelligence Community finds that “China is demonstrating a higher degree of sophistication in its influence activity, including experimenting with generative AI” and “the People’s Republic of China (PRC) may attempt to influence the U.S. elections in 2024 at some level because of its desire to sideline critics of China and magnify U.S. societal divisions.”
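To see why attackers treat bypassing content restrictions as table stakes, consider a deliberately naive guardrail. This is a toy illustration, not any vendor’s real filter: simple keyword matching blocks the obvious phrasing and misses a trivial rewording.

```python
# Toy guardrail: a keyword blocklist (phrases invented for illustration).
BLOCKLIST = ("make a deepfake", "write misinformation")

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt should be blocked. Real guardrails are far more layered."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

print(naive_guardrail("Make a deepfake of the candidate"))                    # True: blocked
print(naive_guardrail("Create a synthetic video of the candidate saying X"))  # False: slips through
```

This is why defenses that rely on surface-level filtering alone tend to lose to adversaries who simply rephrase, obfuscate or chain their requests.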

MLOps and software supply chain attacks. These are most often nation-state and large e-crime syndicate operations aimed at bringing down frameworks, networks and platforms relied on to build and deploy AI systems. Attack strategies include targeting the components used in MLOps pipelines to introduce malicious code into the AI system. Poisoned datasets are delivered through software packages, arbitrary code execution and malware delivery techniques.    
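One concrete control against this class of attack is verifying artifact integrity before anything is loaded into a pipeline stage. The sketch below is a minimal example, assuming digests are pinned in a signed manifest or model registry; the file paths and hashes are placeholders.

```python
import hashlib
from pathlib import Path

# Placeholder digests; in practice these would come from a signed manifest or model registry.
EXPECTED_SHA256 = {
    "data/train.parquet": "0000000000000000000000000000000000000000000000000000000000000000",
    "models/classifier.onnx": "0000000000000000000000000000000000000000000000000000000000000000",
}

def verify_artifact(path: str) -> None:
    """Raise if the artifact on disk does not match its pinned digest."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != EXPECTED_SHA256.get(path):
        raise RuntimeError(f"Integrity check failed for {path}; refusing to load it")

# Hypothetical usage: call before every dataset or model load step in the pipeline.
# verify_artifact("data/train.parquet")
```

A check like this doesn’t stop every supply chain attack, but it forces a poisoned dataset or tampered model file to fail loudly instead of flowing silently into production.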

Four ways to defend against an adversarial AI attack 

The greater the gaps across DevOps and CI/CD pipelines, the more vulnerable AI and ML model development becomes. Protecting models continues to be an elusive, moving target, made more challenging by the weaponization of gen AI.

There are, however, steps organizations can take to defend against an adversarial AI attack. They include the following:

Make red teaming and risk assessment part of the organization’s muscle memory or DNA. Don’t settle for doing red teaming on a sporadic schedule, or worse, only when an attack triggers a renewed sense of urgency and vigilance. Red teaming needs to be part of the DNA of any DevSecOps team supporting MLOps from now on. The goal is to preemptively identify system and pipeline weaknesses and to prioritize and harden any attack vectors that surface as part of MLOps’ System Development Lifecycle (SDLC) workflows.
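One way to make red teaming routine rather than sporadic is to run adversarial checks in the same CI pipeline that builds and deploys models. The sketch below is a minimal pytest-style example under that assumption; `load_model`, `load_eval_batch`, and the accuracy floor are hypothetical stand-ins for an organization’s own registry, evaluation data and risk tolerance.

```python
import numpy as np

def accuracy_under_noise(model, inputs, labels, sigma=0.05, trials=10):
    """Average accuracy when inputs are perturbed with Gaussian noise."""
    scores = []
    for _ in range(trials):
        noisy = inputs + np.random.normal(0.0, sigma, size=inputs.shape)
        preds = model.predict(noisy).argmax(axis=1)
        scores.append(float((preds == labels).mean()))
    return float(np.mean(scores))

def test_model_survives_basic_perturbation():
    # `load_model`, `load_eval_batch`, and the 0.70 floor are placeholders for whatever
    # registry, evaluation set, and risk tolerance an organization actually uses.
    model = load_model("registry://fraud-classifier/latest")
    inputs, labels = load_eval_batch()
    assert accuracy_under_noise(model, inputs, labels) >= 0.70
```

A failing check on every build is a far stronger forcing function than a quarterly red-team engagement that results in a slide deck.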

Stay current and adopt the defensive framework for AI that works best for your organization. Have a member of the DevSecOps team stay current on the many defensive frameworks available today. Knowing which one best fits an organization’s goals can help secure MLOps, saving time and securing the broader SDLC and CI/CD pipeline in the process. Examples include the NIST AI Risk Management Framework and OWASP AI Security and Privacy Guide​​.

Reduce the threat of synthetic data-based attacks by integrating biometric modalities and passwordless authentication techniques into every identity access management system. VentureBeat has learned that synthetic data is increasingly being used to impersonate identities and gain access to source code and model repositories. Consider using a combination of biometric modalities, including facial recognition, fingerprint scanning, and voice recognition, combined with passwordless access technologies to secure systems used across MLOps. Gen AI has proven capable of helping produce synthetic data. MLOps teams will increasingly battle deepfake threats, so taking a layered approach to securing access is quickly becoming a must-have.

Audit verification systems randomly and often, keeping access privileges current. With synthetic identity attacks starting to become one of the most challenging threats to contain, keeping verification systems current on patches and auditing them is critical. VentureBeat believes that the next generation of identity attacks will be primarily based on synthetic data aggregated to appear legitimate.
