‘Optical Adversarial Attack’ uses low-cost projector to trick AI

Last year, we covered a research report showing how projectors could be used to display virtual objects and fool self-driving cars. Now, a new piece of research explores a strikingly similar trick, but one aimed at deceiving Artificial Intelligence (AI) systems more broadly.


Discovered by researchers Abhiram Gnanasambandam, Alex M. Sherman, and Stanley H. Chan from Purdue University, the new attack has been dubbed the OPtical ADversarial attack (OPAD). It requires just three pieces of equipment to execute: a low-cost projector, a camera, and a computer.

SEE: These people don’t exist – They were created by tech using AI

Rather than placing new virtual objects in the environment, the attack modifies how objects already present are seen by AI. The image below shows how an object such as a basketball can be made to appear as something else to a classifier once certain calculated patterns are projected onto it.
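Those "calculated patterns" are adversarial perturbations. As a rough illustration of how such a pattern can be computed digitally before it is ever projected, here is a minimal sketch using the fast gradient sign method (FGSM). This is not the authors' code; the pretrained classifier, epsilon value, and ImageNet class index are assumptions made for the example:

```python
# A minimal FGSM sketch: compute a subtle additive pattern that pushes
# a classifier away from an image's true label.
import torch
import torchvision.models as models

# Stand-in classifier (assumption: a torchvision pretrained ResNet-18).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_pattern(image, true_label, eps=0.03):
    """Return a small perturbation that increases the classifier's loss
    on the true label, keeping the change visually subtle."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(
        model(image), torch.tensor([true_label]))
    loss.backward()
    # Step in the direction that most increases the loss.
    return eps * image.grad.sign()

# Example: a random stand-in for a normalized 224x224 RGB photo.
photo = torch.rand(1, 3, 224, 224)
pattern = fgsm_pattern(photo, true_label=430)  # 430 = "basketball" in ImageNet
adversarial = (photo + pattern).clamp(0.0, 1.0)
```

In OPAD's setting, a pattern like this would then have to survive the physics of projection, which is where the compensation described below comes in.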

The advantage of such an attack method is that no physical access to the objects themselves is needed. According to the researchers, this sets it apart from previously discussed AI trickery methods and makes it much easier for attackers to stay concealed.


The researchers also demonstrate another attack example, shown below:

Detailing further, the researchers state in their research paper [PDF] that,

The difficulty of launching an optical attack is making sure that the perturbations are imperceptible while compensating for the environmental attenuations and the instrument’s nonlinearity.

OPAD overcomes these difficulties by taking into consideration the environment and the algorithm. OPAD is a meta-attack framework that can be applied to any existing digital attack.
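To give a sense of what that compensation involves, here is a toy sketch that inverts a simple projector-camera model (a gamma nonlinearity, attenuation, and ambient light) to work out what to project so the camera observes a desired adversarial scene. The model and the constants are illustrative assumptions, not values from the paper:

```python
# Toy projector-camera compensation: invert a simple forward model
# so the camera observes the digitally computed adversarial scene.
import numpy as np

GAMMA = 2.2      # assumed projector nonlinearity
ATTEN = 0.8      # assumed environmental attenuation
AMBIENT = 0.1    # assumed ambient light floor

def camera_model(projected):
    """Forward model: what the camera sees for a given projector input."""
    return np.clip(ATTEN * projected ** GAMMA + AMBIENT, 0.0, 1.0)

def required_projection(target_scene):
    """Invert the forward model: the projector input that would make the
    camera observe `target_scene` (where physically feasible)."""
    radiance = np.clip((target_scene - AMBIENT) / ATTEN, 0.0, 1.0)
    return radiance ** (1.0 / GAMMA)

# Example: the camera should see mid-gray (0.5) at some pixel.
p = required_projection(np.array(0.5))
assert np.isclose(camera_model(p), 0.5)
```

OPAD's actual optimization is more involved, but the intuition is the same: the pattern fed to the projector is pre-distorted so that, after the scene's attenuation and the instrument's nonlinearity, the camera sees the intended perturbation.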

The real-world implication is that the technique could be used to trick self-driving cars, for example, and thereby cause accidents or enable pranks. Likewise, AI-powered security cameras could also be fooled, which could have significant repercussions.

To conclude, the attack obviously will not work in every situation, but regardless, companies developing AI technologies need to be on the lookout for such potential security problems.

Furthermore, it is worth mentioning that the research was funded by the U.S. Army, which suggests the military may already be exploring such methods to aid its missions in the field.

