AML - Adversarial machine learning (AML), a type of malicious interference with AI-based security systems. Adversaries manipulate the system's data and algorithms to the point where the AI is defeated. Malware can then pass through undetected, putting vital corporate data, systems, and users at risk.
EXAMPLE-1 - Evasion attacks. In this case, adversaries deluge the system with inputs crafted to produce false negatives (malware disguised as benign code), so threats slip past detection and security analysts end up ignoring or de-prioritizing the alerts that do surface.
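To make the evasion idea concrete, here is a minimal, purely illustrative Python sketch: it trains a toy linear "malware classifier" on synthetic data, then nudges the features of a correctly flagged sample until the model labels it benign. The dataset, model, step size, and iteration budget are hypothetical stand-ins, not anything from the article or a real security product.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic "file feature" data: class 1 = malicious, class 0 = benign.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a sample the model correctly flags as malicious.
idx = np.where((y == 1) & (clf.predict(X) == 1))[0][0]
x = X[idx].copy()
print("P(malicious) before:", clf.predict_proba([x])[0][1])

# Evasion: nudge features in the direction that lowers the malicious score.
# For a linear model, that direction is simply the negative of the weights.
w = clf.coef_[0]
for _ in range(200):
    if clf.predict([x])[0] == 0:    # model now sees the sample as benign
        break
    x -= 0.05 * np.sign(w)          # small, bounded perturbation per step

print("P(malicious) after: ", clf.predict_proba([x])[0][1])
print("final label:", "benign (false negative)" if clf.predict([x])[0] == 0 else "malicious")

The same principle applies to far more complex detectors: the attacker perturbs a malicious input just enough that the model's score drops below the detection threshold.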
EXAMPLE-2 - Poisoning attacks, which inject false data with the intent of poisoning the training data set and creating biases toward certain classifications. This can actually change the AI model itself and significantly impact its decisions and outcomes.
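For poisoning, a minimal, purely illustrative sketch of a label-flipping attack: relabel a fraction of the malicious training samples as benign before training, then compare detection rates. The dataset, flip rate, and model are hypothetical; the point is only that tainted training data measurably biases the resulting classifier.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Synthetic data: class 1 = malicious, class 0 = benign.
X, y = make_classification(n_samples=5000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poisoning: relabel 30% of the malicious training samples as benign.
rng = np.random.default_rng(1)
mal = np.where(y_tr == 1)[0]
flipped = rng.choice(mal, size=int(0.3 * len(mal)), replace=False)
y_bad = y_tr.copy()
y_bad[flipped] = 0
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

# Detection rate (recall on the malicious class) before vs. after poisoning.
print("clean model recall:   ", recall_score(y_te, clean.predict(X_te)))
print("poisoned model recall:", recall_score(y_te, poisoned.predict(X_te)))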
Vendors and enterprise security teams need to be extra vigilant about continually monitoring AI-based security systems to ensure that they keep doing what they are meant to do as they evolve and adapt to the changing threat landscape.
https://www.scmagazine.com/home/opinion/artifical-intelligence-in-cybersecurity-is-vulnerable/