Microsoft Catches APTs Using ChatGPT For Vuln Research, Malware Creation
Urfa Sarmad
Microsoft threat hunters say foreign APTs have been using OpenAI’s ChatGPT to automate malicious vulnerability research, target reconnaissance, and malware creation tasks.
In a report published on Wednesday, Microsoft said it joined forces with OpenAI to study how malicious actors use LLMs and found several known APTs experimenting with ChatGPT to research potential victims, improve malware scripting, and comb through public security advisories. Microsoft also said it caught hacking teams from Russia, North Korea, China, and Iran using LLMs in active APT operations.
In one such case, Microsoft said it caught the North Korean APT Emerald Sleet (aka Kimsuky) using LLMs to generate content most likely intended for spear-phishing campaigns. The Pyongyang-linked hackers were also caught using LLMs to research publicly known vulnerabilities, troubleshoot technical issues, and get help with various web technologies. Microsoft stated:
“Interactions have involved requests for support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine.”
Redmond also found evidence that APT groups were using generative AI to better understand publicly reported vulnerabilities, such as CVE-2022-30190, the Microsoft Support Diagnostic Tool (MSDT) flaw known as “Follina.”
In these cases, Microsoft worked with OpenAI to disable all accounts and assets linked to the advanced threat actors.