Microsoft, OpenAI name five hacker groups using AI services like ChatGPT to improve cyberattacks


Several cybersecurity companies have already reported that hackers are using AI tools to refine their techniques, and now two tech majors that have become synonymous with AI over the past few months are naming the groups that are using chatbots to improve their cyberattacks. Both Microsoft and OpenAI have listed five state-affiliated malicious actors that are using services offered by the ChatGPT maker.

“We build AI tools that improve lives and help solve complex challenges, but we know that malicious actors will sometimes try to abuse our tools to harm others, including in furtherance of cyber operations. Among those malicious actors, state-affiliated groups—which may have access to advanced technology, large financial resources, and skilled personnel—can pose unique risks to the digital ecosystem and human welfare,” OpenAI said in a post.
The company partnered with Microsoft to disrupt these “five state-affiliated actors that sought to use AI services in support of malicious cyber activities.”
How hackers used OpenAI services for cyberattacks

Just as AI models are used to strengthen defences against cyberattacks, hackers are using the same tools, for translation, finding coding errors and running basic coding tasks, to bolster their attack mechanisms.
These uses include querying large language models to gather actionable intelligence on technologies and potential vulnerabilities, refine scripting techniques, support social engineering, craft payloads, evade anomaly detection and aid resource development.

The five state-backed hacker groups are:
China-affiliated Charcoal Typhoon used OpenAI services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.
Salmon Typhoon, another China-affiliated group, used OpenAI services to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system.
Crimson Sandstorm, an Iran-affiliated threat actor, used AI for scripting support related to app and web development and for generating content likely for spear-phishing campaigns.
North Korea-affiliated Emerald Sleet used AI technology to identify experts and organisations focused on defence issues in the Asia-Pacific region, help with basic scripting tasks, and draft content that could be used in phishing campaigns.
Russia-affiliated actor Forest Blizzard used OpenAI technology for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.

Article From: timesofindia.indiatimes.com