As more aspects of our lives become interconnected in the Information Age, the risks grow with them. As an additional countermeasure, tech giants are working together to head off future AI threats.

The MITRE Corporation, a tech nonprofit based in Bedford, Massachusetts, has teamed up with twelve technology giants - including Microsoft, Bosch, IBM, and Nvidia - to release the Adversarial Machine Learning Threat Matrix. According to a press release, the "industry-focused open framework" will help security experts respond to and mitigate future threats.

Charles Clancy, MITRE's chief futurist, senior vice president, and general manager, explains that whether you fear or welcome it, artificial intelligence (AI) is "hurtling down the tracks toward our future."

ML Cyberattacks Are More Common Than We Think

Microsoft notes that machine learning (ML) is transforming a number of fields - defense, healthcare, finance - and affecting our lives in immeasurable ways. However, the tech giant explains that some businesses, in their eagerness to capitalize on the latest technology, skip the necessary scrutiny of these systems and leave themselves vulnerable.


In its security blog, Microsoft reported a notable increase over the last four years in cyberattacks aimed at commercial ML systems. Last year, business research and analytics company Gartner predicted that through 2022, up to a third of all AI cyberattacks will use "training-data poisoning, AI model theft, or adversarial samples" to attack AI-powered systems.

A Microsoft survey of 28 businesses revealed a disconnect among security analysts, many of whom still treat attacks on ML systems as a futuristic concern: 25 of the 28 businesses lacked the right tools to secure their ML systems. This gap was found even among Fortune 500 companies and government organizations.

Empowering Security Analysts

With the collaborative framework that is the Adversarial ML Threat Matrix, the security community gains a systematic catalog of the techniques cyberattackers commonly use to circumvent the security measures protecting commercial ML systems. The tech groups behind the project hope that members of the security community can use it to improve their monitoring and response strategies and better protect their systems.

Microsoft notes that security analysts are the primary audience for the ML threat matrix, since the safety of machine learning systems is ultimately an infosec concern. The new threat matrix follows the structure of MITRE's ATT&CK - Adversarial Tactics, Techniques & Common Knowledge - an earlier effort to tabulate attacker tactics and techniques that red teamers, threat hunters, and defenders already rely on to strengthen the security industry.


Additionally, the Adversarial ML Threat Matrix is grounded in real, recorded attacks on machine learning systems. With this approach, security analysts no longer need to wade through theoretical scenarios and can focus on realistic threats instead. Microsoft contributed lessons from its own experience to the framework's development; one such lesson is that model stealing is not actually the endgame for cyberattackers. They use the stolen model to stage a "more insidious model evasion." Attackers also now combine traditional attack methods, such as phishing, with newer adversarial ML techniques.
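To make "model evasion" concrete, below is a minimal, illustrative sketch - not drawn from the threat matrix itself - of one well-known evasion technique, the Fast Gradient Sign Method (FGSM). The toy linear model and random input are placeholders standing in for a stolen classifier and real data:

    # Minimal FGSM sketch: perturb an input so a trained classifier
    # mislabels it. The model and data here are toy placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_evasion(model, x, true_label, epsilon=0.05):
        """Return an adversarial copy of x, nudged along the loss
        gradient to push the model away from the correct label."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), true_label)
        loss.backward()
        # Step each input feature slightly in the direction that
        # increases the classifier's loss, then detach the result.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Toy linear classifier standing in for a stolen model.
    model = torch.nn.Linear(4, 2)
    x = torch.randn(1, 4)
    label = torch.tensor([1])
    x_adv = fgsm_evasion(model, x, label)
    print(model(x).argmax(), "->", model(x_adv).argmax())

With a large enough epsilon, the perturbed input crosses the model's decision boundary while remaining nearly indistinguishable from the original - which is why evasion, rather than theft alone, is the payoff attackers pursue.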

Check out more news and information on Machine Learning in Science Times.