
DeepLocker: Here’s How AI Could ‘Help’ Malware To Attack Stealthily

By now, we have realized that artificial intelligence is both a boon and a bane. Computers have become capable of things human beings cannot do, and thanks to recent sci-fi television series, it is not hard to imagine a world where AI could program human beings.

What happens when malware meets AI? IBM has tried to answer this question by developing an AI-powered malware named DeepLocker, a new breed of highly targeted and evasive attack tool powered by AI.

DeepLocker was developed to study how bad actors could use artificial intelligence to amplify existing malware techniques and create a new breed of malware. The malware could be hidden inside a video conferencing application to target a particular victim; until it reaches that victim, it remains dormant.

The malware identifies its victim through a combination of factors, including facial recognition, geolocation, voice recognition, and data obtained from social media and online trackers. Once the target is identified, the malware is launched.

IBM describes this stealth mode: "You can think of this capability as similar to a sniper attack, in contrast to the 'spray and pray' approach of traditional malware."

What makes DeepLocker more threatening than conventional malware is that it can attack systems without being detected. If the conditions for identifying the target are not met, the malware remains hidden and virtually undetectable.

To demonstrate the malware's capabilities, IBM researchers designed a proof of concept in which the WannaCry ransomware was hidden in a video conferencing application; antivirus engines and sandboxing failed to detect it. An individual was selected as the target, and the AI model was trained to launch the malware only when certain conditions, including facial recognition of the target, were met.

The app in which the malware was stealthily embedded would feed camera snapshots into the AI model, and upon recognizing the target, the malicious payload would be executed. The target's face effectively served as the preprogrammed key that unlocked the payload.
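The "face as a key" idea can be sketched in a few lines of Python. This is purely an illustrative toy, not IBM's actual code: assume the recognition model emits a quantized attribute vector for whoever is on camera, and the attacker ships only an encrypted blob plus a hash of the correct key. The key itself is never stored; it can only be derived by observing the right person, which is why static analysis of the binary reveals nothing about the payload. All names here (`derive_key`, `target_attrs`, the XOR "cipher") are hypothetical stand-ins.

```python
import hashlib
from typing import Optional, Sequence

def derive_key(attributes: Sequence[int]) -> bytes:
    # Hash a quantized recognition output (hypothetical) into a fixed-size key.
    return hashlib.sha256(bytes(attributes)).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy symmetric "cipher" for illustration only; XOR is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# --- attacker-side preparation (never shipped in cleartext) ---
target_attrs = [12, 200, 7, 33]            # hypothetical face-embedding bins
locked_blob = xor_bytes(b"payload", derive_key(target_attrs))
key_check = hashlib.sha256(derive_key(target_attrs)).hexdigest()

# --- shipped in the carrier app: only locked_blob and key_check ---
def try_unlock(observed_attrs: Sequence[int]) -> Optional[bytes]:
    key = derive_key(observed_attrs)
    if hashlib.sha256(key).hexdigest() != key_check:
        return None                        # wrong person: blob stays opaque
    return xor_bytes(locked_blob, key)
```

The point of the hash check is that an analyst inspecting the app sees only an opaque blob and a digest; without the target's attributes there is no key to brute-force short of inverting the hash.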

To everybody's relief, DeepLocker is just an experiment by IBM to show how malware could evolve in the future with the help of AI, and the scenario it paints is a deadly one.
