GPT-4, OpenAI’s latest multimodal large language model (LLM), can exploit unpatched security vulnerabilities on its own, according to a study reported by TechSpot.
The study, by researchers at the University of Illinois Urbana-Champaign, showed that LLMs, including GPT-4, can attack systems by exploiting so-called one-day vulnerabilities: flaws that have already been publicly disclosed, for example in a CVE advisory, but have not yet been patched on the target system. Available through the ChatGPT Plus service, GPT-4 demonstrated a significant advance over its predecessors in penetrating systems without human intervention.
The researchers tested LLMs against a set of 15 “high to critically severe” vulnerabilities from various domains, including web services and Python packages, none of which had been patched in the systems under test.
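The paper doesn’t ship its agent harness, but the mechanism it describes is a standard tool-calling loop: the model is handed a goal, proposes an action, a harness executes it and feeds the output back, and the cycle repeats until the model decides it is done. Below is a minimal, deliberately defanged sketch of such a loop using OpenAI’s chat-completions tool-calling API; the tool name `run_shell`, the step limit, and the stubbed sandbox are illustrative assumptions, not the researchers’ actual code.

```python
# Minimal sketch of an autonomous tool-calling agent loop (not the study's code).
# The single tool here is a harmless stub rather than a real command runner.
import json
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_shell",
        "description": "Run a command in the test sandbox and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

def run_shell(command: str) -> str:
    # Stub: a real harness would execute this inside an isolated sandbox.
    return f"(sandbox output for: {command})"

def agent_loop(goal: str, max_steps: int = 10) -> None:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        resp = client.chat.completions.create(
            model="gpt-4", messages=messages, tools=TOOLS
        )
        msg = resp.choices[0].message
        messages.append(msg)
        if not msg.tool_calls:        # the model has finished; print its answer
            print(msg.content)
            return
        for call in msg.tool_calls:   # execute each requested tool call
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": run_shell(args["command"]),
            })
```

The point of the loop is the autonomy: once started, the model chooses every action and reacts to every result with no human in between, which is exactly the property the study measured.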
GPT-4 proved startlingly effective, successfully exploiting 87 percent of these vulnerabilities when given their CVE descriptions, compared with a zero percent success rate for earlier models such as GPT-3.5. The findings suggest that GPT-4 can autonomously identify and exploit flaws that traditional open-source vulnerability scanners often miss.
Why this is concerning
The implications are significant: such capabilities could democratize the tools of cybercrime, putting them within reach of less skilled attackers known as “script kiddies.” UIUC assistant professor Daniel Kang emphasized the risks posed by such powerful LLMs, warning that they could drive an increase in cyberattacks as long as detailed vulnerability reports remain freely accessible.
While limiting detailed vulnerability disclosures is one possible mitigation, Kang’s study found that withholding information is of limited effectiveness as a defense. He instead advocates more proactive measures, such as regularly updating software packages, and stresses the need for robust security approaches to meet the challenges introduced by advanced AI like GPT-4.
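That “regular updates” advice can be automated. Below is a minimal sketch using the public OSV.dev query API, which returns the published advisories for a given package version; the package name and version here are examples only, chosen because old Jinja2 releases have known, long-disclosed flaws, i.e., exactly the kind of one-day exposure the study exploited.

```python
# Check a pinned dependency against the OSV.dev database of disclosed flaws.
import requests  # third-party; pip install requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the OSV advisories recorded for one package version."""
    payload = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    resp = requests.post(OSV_QUERY_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json().get("vulns", [])

if __name__ == "__main__":
    # Example only: an old Jinja2 release with published advisories.
    for vuln in known_vulnerabilities("jinja2", "2.4.1"):
        print(vuln["id"], "-", vuln.get("summary", "(no summary)"))
```

Run against a real dependency list, a check like this flags exactly the packages an attacker reading public disclosures would target first.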