Researchers warn, why using AI chatbots for writing your emails may be 'dangerous'

8 months ago 22

A group of researchers have developed a prototype

AI worm

called

Morris II

. According to the research papers (spotted by Wired), this first-generation AI worm can steal data, spread

malware

and

spam

users through AI-powered

email assistants

. However, it's important to note that this research was conducted in a controlled environment and the worm has not been deployed in the real world.

Yet, this development highlights the potential vulnerabilities in

generative AI models

and emphasises the need for strict security measures.

What the researchers have to say about the AI worm

The research team, comprising Ben Nassi of Cornell Tech, Stav Cohen of the Israel Institute of Technology, and Ron Bitton of Intuit, named the worm after the original Morris worm. This notorious computer worm unleashed in 1988. Unlike its predecessor, Morris II targets AI apps, specifically those using large language models (LLMs) like

Gemini Pro

, ChatGPT 4.0, and LLaVA, to generate text and images.
The worm uses a technique called "

adversarial self-replicating prompts

." These prompts, when fed into the LLM, trick the model into replicating them and initiating malicious actions. This includes:

The researchers described: The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication) and engage in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem. We demonstrate the application of Morris II against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images).”

The researchers successfully demonstrated the worm's capabilities in two scenarios:

  • Spamming: Morris II generated and sent spam emails through the compromised email assistant.
  • Data Exfiltration: The worm extracted sensitive personal data from the infected system.

The researchers said that AI worms like this can help cyber criminals to extract confidential information, including credit card details, social security numbers and more. They also uploaded a video on YouTube to explain how the worm works:

ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications

What AI companies said about the worm

In a statement, an

OpenAI

spokesperson said: “They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn’t been checked or filtered.”
The spokesperson said that the company is making its systems more resilient and added that developers should use methods that ensure they are not working with harmful input.
Meanwhile,

Google

refused to comment about the research.

Article From: timesofindia.indiatimes.com
Read Entire Article



Note:

We invite you to explore our website, engage with our content, and become part of our community. Thank you for trusting us as your go-to destination for news that matters.

Certain articles, images, or other media on this website may be sourced from external contributors, agencies, or organizations. In such cases, we make every effort to provide proper attribution, acknowledging the original source of the content.

If you believe that your copyrighted work has been used on our site in a way that constitutes copyright infringement, please contact us promptly. We are committed to addressing and rectifying any such instances

To remove this article:
Removal Request