Fujitsu Laboratories has announced the development of a technology that makes AI models more robust against deception attacks. The technology protects AI models applied to sequential data consisting of multiple elements from attempts to use forged attack data to trick them into making deliberate misjudgments.
As AI technologies have been adopted in a growing range of fields in recent years, attacks that intentionally interfere with an AI model's ability to make correct judgments have become a growing concern. For media data such as images and sound, many conventional technologies for strengthening resistance to such attacks are already available.
However, their application to sequential data, such as communication logs and service usage histories, remains insufficient, owing to the difficulty of preparing simulated attack data and the resulting loss of accuracy. To overcome these challenges, Fujitsu has developed a technology for hardening AI models that is applicable to sequential data.
This technology automatically generates a large amount of data simulating attacks and combines it with the original training data to improve resistance to potential deception attacks while maintaining judgment accuracy. When the technology was applied to an AI model developed by Fujitsu to assess the need for cyber-attack countermeasures, it prevented about 88% of misjudgments on Fujitsu's own test attack data. Details of the technology will be announced at the Computer Security Symposium 2020, held from 26 to 29 October.
Fujitsu has thus developed a technology that automatically generates simulated attack data for training, can be applied to AI models that analyse sequential data, and enables training with little deterioration in attack-detection accuracy.
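Fujitsu has not published implementation details, but the general idea the article describes, automatically generating perturbed variants of sequential training examples and mixing them back into the original training set, can be illustrated with a minimal sketch. Everything below (the function names, the perturbation strategy, and the toy log tokens) is a hypothetical illustration under assumed conditions, not Fujitsu's actual method.

```python
import random

def perturb_sequence(seq, noise_tokens, max_edits=2, rng=random):
    """Simulate a deception attempt: insert or substitute a few
    benign-looking elements while keeping the sequence's true label.
    (Hypothetical stand-in for the unpublished attack-data generator.)"""
    seq = list(seq)
    for _ in range(rng.randint(1, max_edits)):
        pos = rng.randrange(len(seq) + 1)
        if rng.random() < 0.5 and pos < len(seq):
            seq[pos] = rng.choice(noise_tokens)        # substitute an element
        else:
            seq.insert(pos, rng.choice(noise_tokens))  # insert an element
    return seq

def augment_training_set(data, noise_tokens, copies=3):
    """Combine the original examples with simulated-attack variants so the
    model sees both genuine and forged-looking sequences during training."""
    augmented = list(data)
    for seq, label in data:
        for _ in range(copies):
            augmented.append((perturb_sequence(seq, noise_tokens), label))
    return augmented

# Toy usage: sequences of log-event tokens labelled malicious (1) or benign (0).
train = [(["login", "scan", "exfil"], 1),
         (["login", "browse", "logout"], 0)]
noise = ["browse", "idle", "logout"]
print(len(augment_training_set(train, noise)))  # 2 originals + 6 simulated = 8
```

In a realistic system, the perturbations would be constrained so that the simulated attacks remain plausible and label-preserving; that constraint is what allows resistance to improve without the loss of judgment accuracy the article highlights.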