

Two types of dataset poisoning attacks that can corrupt AI system results

Because such systems learn from whatever data they are given, they have no way of knowing when an example is wrong, and so they incorporate the error into the rules they learn. Consider, for example, an AI system trained to recognize cancerous tumors in mammograms. Such a system would be trained on many examples of real tumors collected during mammograms.

But what happens if someone inserts images of cancerous tumors into the dataset but labels them as non-cancerous? Very soon the system would begin missing those tumors, because it has been taught to see them as non-cancerous. In this new effort, the research team has shown that something similar can happen with AI systems that are trained on publicly available data from the Internet.
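The effect of mislabeled training examples can be seen even in a toy classifier. The sketch below (entirely hypothetical data, standing in for "cancerous" vs. "non-cancerous" features) trains a simple 1-nearest-neighbor classifier twice: once on clean labels and once after flipping a fraction of one class's labels, mimicking tumors relabeled as benign. The poisoned model starts missing exactly the examples the attacker relabeled:

```python
import random

def knn_predict(points, labels, x):
    # 1-nearest-neighbor: the label of the closest training point wins.
    i = min(range(len(points)), key=lambda j: abs(x - points[j]))
    return labels[i]

random.seed(0)
# Two well-separated 1-D clusters: class 0 ("benign") near 0.0,
# class 1 ("tumor") near 5.0. Purely synthetic, for illustration only.
points = [random.gauss(0, 1) for _ in range(200)] + \
         [random.gauss(5, 1) for _ in range(200)]
labels = [0] * 200 + [1] * 200

# Poisoning: flip 30% of the class-1 labels to class 0,
# i.e. real "tumors" inserted into the data labeled as "benign".
poisoned = labels[:]
for i in range(200, 260):
    poisoned[i] = 0

# Evaluate both versions on fresh class-1 ("tumor") examples.
test_x = [random.gauss(5, 1) for _ in range(100)]
clean_acc = sum(knn_predict(points, labels, x) == 1 for x in test_x) / 100
bad_acc = sum(knn_predict(points, poisoned, x) == 1 for x in test_x) / 100
print(clean_acc, bad_acc)  # poisoned accuracy drops well below clean accuracy
```

The clean model detects essentially all of the held-out "tumor" points, while the poisoned model misses roughly the fraction whose nearest training neighbor was a flipped example.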

The researchers began by noting that the domain registrations behind URLs on the Internet often expire, including domains that have been used as data sources by AI systems. That leaves them available for purchase by nefarious actors looking to disrupt AI systems. If such domains are purchased and then used to host websites with false information, the AI system will add that information to its knowledge bank just as readily as it will true information, and that will lead to the AI system producing less than desirable results.

The research team calls this type of attack split-view poisoning. Their testing showed that, for as little as $10,000, an attacker could buy enough expired domains to poison a large portion of the data feeding mainstream AI systems.