The digital world is quickly expanding, and generative AI-ChatGPT, Gemini, Copilot, and others are now riding high on this trend. We are surrounded by a network of AI-powered platforms that provide solutions to the majority of our issues. Need a diet plan? You will receive one that is tailored to your specifications. Struggling to write code? Using AI, you’ll be able to see the full draft immediately away. However, as the AI ecosystem grows in importance and spreads, new risks emerge that have the ability to hurt you significantly. One such issue is the creation of AI worms, which may steal your sensitive data and penetrate the security barriers erected by generative AI systems.
According to Wired, researchers from Cornell University, Technion-Israel Institute of Technology, and Intuit have developed a new sort of malware known as “Morris II” – or the first generative AI worm – capable of stealing data and spreading itself across several devices. Morris II, named after the first internet worm that appeared on the internet in 1988, has the ability to attack security flaws in popular AI models such as ChatGPT and Gemini. “It basically means that now you have the ability to conduct or perform a new kind of cyberattack that hasn’t been seen before,” explains Cornell Tech researcher Ben Nassi.
While the development and study of this new AI malware took place in a controlled environment, and no such malware has yet been seen in the real world, researchers are concerned that this new type of malware could be used to steal data or send spam emails to millions of people via AI assistants. The experts warn that it poses a potential security concern, which developers and IT businesses should address as quickly as feasible.
Table of Contents
ToggleHow does the AI worm do its job?
You may imagine this Morris II as a stealthy computer worm. Its job is to meddle with artificial intelligence (AI)-powered email assistants.
Initially, Morris II employed a cunning technique known as “adversarial self-replication.” It bombards the email system with messages, causing it to travel in circles by repeatedly forwarding messages. This causes the AI algorithms underpinning the email assistant to get confused. They end up accessing and modifying data, which can result in data theft or the transmission of malicious software (such as malware).
According to studies, Morris II has two ways of sneaking in: Text-Based: It conceals malicious instructions within emails, deceiving the assistant’s protection. Image-Based: It employs visuals with secret cues to help the worm spread even farther. In layman’s terms, Morris II is a clever computer worm that utilizes deceptive strategies to disrupt email systems and confound the AI that powers them.
What happens when AI gets tricked?
Morris’s infiltration of AI assistants not only violates security rules but also jeopardizes user privacy. The malware can extract sensitive information from emails using generative AI, such as names, phone numbers, credit card data, and social security numbers.
Here’s what you can do to be safe
Notably, this AI worm is a new idea that has yet to be detected in the field. However, experts feel it poses a potential security risk that developers and businesses should be aware of, particularly as AI systems grow more linked and capable of acting on our behalf.
Here are some strategies to guard against AI worms:
Secure design
Developers should design AI systems with security in mind, employing standard security approaches and refraining from naively trusting AI model outputs.
Human oversight
Keeping people involved in decision-making and prohibiting AI systems from acting without authority will help reduce dangers.
Monitoring
Keeping an eye out for anomalous activity in AI systems, such as excessive repetition of instructions, can aid in the detection of possible assaults.