Microsoft and OpenAI say that several state-backed hacking groups are using the latter's generative AI (GAI) tools to bolster cyberattacks. The pair suggests the new research details for the first time how hackers linked to foreign governments are making use of GAI. The groups in question have ties to China, Russia, North Korea and Iran.

According to the companies, the state actors have been using GAI to debug code, research targets using open-source information, develop social engineering techniques, draft phishing emails and translate text. OpenAI (which powers Microsoft GAI products such as Copilot) says it shut down the groups' access to its GAI systems after finding out they were using its tools.

Notorious Russian group Forest Blizzard (better known as Fancy Bear or APT 28) was one of the state actors said to have used OpenAI's platform. The hackers used OpenAI tools "primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks," according to Microsoft.

As part of its cybersecurity efforts, Microsoft tracks more than 300 hacking groups, including 160 nation-state actors. The company shared its knowledge of them with OpenAI to help detect the hackers and shut down their accounts.

OpenAI says it invests in resources to pinpoint and disrupt threat actors' activities on its platforms. Its staff uses a number of methods to investigate hackers' use of its systems, such as employing its own models to follow leads, analyzing how the attackers interact with OpenAI tools and determining their broader objectives. Once it detects such users, OpenAI says it disrupts their activity by shutting down their accounts, terminating services or limiting their access to resources.
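Neither company has published the internals of that detection-and-disruption pipeline, so the following is only a rough, hypothetical sketch of the escalation logic described above: per-account usage signals are rescored by a classifier (standing in for the "own models to follow leads" step), aggregated, and mapped to one of the disruption actions the article mentions. The names (UsageSignal, AbuseReview, triage) and the risk thresholds are invented for illustration and do not reflect OpenAI's actual tooling.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, Dict, List, Optional


class Action(Enum):
    """Disruption options mirroring those described in the article."""
    SHUT_DOWN_ACCOUNT = auto()
    TERMINATE_SERVICES = auto()
    LIMIT_RESOURCES = auto()


@dataclass
class UsageSignal:
    """One observation about how an account interacts with the platform."""
    account_id: str
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (clearly malicious) -- illustrative scale


@dataclass
class AbuseReview:
    """Aggregates signals for a single account and picks a response."""
    account_id: str
    signals: List[UsageSignal] = field(default_factory=list)

    def decide(self) -> Optional[Action]:
        """Escalate the response as the worst observed risk grows.

        Thresholds are arbitrary placeholders for illustration only.
        """
        if not self.signals:
            return None
        risk = max(s.risk_score for s in self.signals)
        if risk >= 0.9:
            return Action.SHUT_DOWN_ACCOUNT
        if risk >= 0.7:
            return Action.TERMINATE_SERVICES
        if risk >= 0.5:
            return Action.LIMIT_RESOURCES
        return None


def triage(signals: List[UsageSignal],
           classify: Callable[[UsageSignal], float]) -> Dict[str, Action]:
    """Rescore signals with a classifier, group them per account, and
    return a disruption decision for every account that crossed a threshold."""
    reviews: Dict[str, AbuseReview] = {}
    for signal in signals:
        signal.risk_score = max(signal.risk_score, classify(signal))
        reviews.setdefault(signal.account_id,
                           AbuseReview(signal.account_id)).signals.append(signal)
    return {acct: action
            for acct, review in reviews.items()
            if (action := review.decide()) is not None}
```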
