HiddenLayer, a Gartner-recognized AI Application Security company, provides security solutions for machine learning algorithms, models, and the data that power them. With a first-of-its-kind, non-invasive software approach to observing and securing ML, HiddenLayer helps protect the world’s most valuable technologies.
Protect Your Advantage
HiddenLayer’s Platform is the Solution
HiddenLayer offers an easy-to-deploy platform that stops adversarial attacks and provides visibility into the health and security of your ML assets.
Detect & Respond to adversarial ML attacks.
- Real-time defense
- Flexible response options — including alerting, isolation, profiling, and misleading
- Configurable settings fine-tuned to your company’s needs
Scan and verify model integrity.
- Identify vulnerabilities
- Ensure models have not been compromised
- Detect malicious code injections
Validate ML model security across the enterprise.
- Comprehensive view of the security status of your AI/ML assets
- On-demand dashboard and distributable reporting
- Vulnerability prioritization
For information about protecting your ML with HiddenLayer, contact us for a demo.
Guard your ML models. Protect your advantage.
Defend your ML assets without compromising speed, efficacy, or reliability, using a cloud-based architecture that doesn’t require access to your data or intellectual property.
Inference / Extraction
Extraction attacks involve an attacker manipulating model inputs, analyzing outputs, and inferring decision boundaries to reconstruct the training data, extract model parameters, or steal the model by training a substitute that approximates the target.
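To make the idea concrete, here is a minimal, hypothetical sketch (using scikit-learn, with a locally simulated victim standing in for a remote prediction API) of how an attacker might train a substitute model from query responses alone:

```python
# Minimal sketch of a model-extraction attack, assuming query-only access
# to a victim classifier. The victim is simulated locally here; in a real
# attack the attacker only ever sees the API's predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X[:1000], y[:1000])

def query_target(samples):
    """Hypothetical stand-in for a remote prediction API the attacker can call."""
    return victim.predict(samples)

# Attacker synthesizes probe inputs and harvests the victim's labels...
probes = np.random.uniform(X.min(), X.max(), size=(5000, X.shape[1]))
stolen_labels = query_target(probes)

# ...then trains a substitute that approximates the victim's decision boundary.
substitute = DecisionTreeClassifier(random_state=0).fit(probes, stolen_labels)
agreement = (substitute.predict(X[1000:]) == victim.predict(X[1000:])).mean()
print(f"Substitute agrees with victim on {agreement:.0%} of held-out inputs")
```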
Stealing a Machine Learning Model
“New research from Canada offers a possible method by which attackers could steal the fruits of expensive machine learning frameworks, even when the only access to a proprietary system is via a highly sanitized and apparently well-defended API.”
– Unite.AI
Extraction of Training Data
“In many cases, the attackers can stage membership inference attacks without having access to the machine learning model’s parameters and just by observing its output. Membership inference can cause security and privacy concerns in cases where the target model has been trained on sensitive information.”
– VentureBeat
Data Poisoning
Poisoning occurs when an attacker injects new, specially crafted data into training sets, designed to fool or subvert a machine learning model into producing inaccurate, biased, or malicious results.
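As a simplified illustration (not a description of any specific incident), the sketch below uses scikit-learn and targeted label flipping to show how tampered training data can bias a model:

```python
# Minimal sketch of label-flipping data poisoning, assuming the attacker can
# tamper with part of the training set before the model is fit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips 40% of the class-1 training labels to class 0,
# biasing the trained model against recognizing class 1.
rng = np.random.default_rng(1)
class1_idx = np.where(y_train == 1)[0]
flipped = rng.choice(class1_idx, size=int(0.4 * len(class1_idx)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print(f"Clean model accuracy:    {clean.score(X_test, y_test):.2f}")
print(f"Poisoned model accuracy: {poisoned.score(X_test, y_test):.2f}")
```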
Investment Reliance on ML
“By exploiting inherent weaknesses in an AI model, threat actors could lull a company into a false sense of security regarding the way a trade will play out.”
– FireEye
Medical Misdiagnosis
“By changing a few pixels on a lung scan, for instance, someone could fool an A.I. system into seeing an illness that is not really there, or not seeing one that is.”
– New York Times
Evasion
Model evasion occurs when an attacker manipulates an input in a specific way to bypass correct classification or induce a particular desired classification.
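The sketch below is a toy illustration against a linear scikit-learn classifier, assuming the attacker knows (or has approximated) the model’s weights; it nudges a correctly classified input along the weight direction until the predicted class flips:

```python
# Minimal sketch of an evasion attack on a linear classifier: perturb an
# input along the model's weight vector until the prediction changes,
# keeping the perturbation small at each step.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=2)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()
original_class = model.predict([x])[0]

# Step in the direction that lowers the decision score for the current class
# (sign of the gradient of the decision function with respect to the input).
direction = -np.sign(model.coef_[0]) if original_class == 1 else np.sign(model.coef_[0])

epsilon = 0.05
adversarial = x.copy()
for _ in range(1000):
    if model.predict([adversarial])[0] != original_class:
        break
    adversarial += epsilon * direction

print(f"Original class: {original_class}, evaded class: {model.predict([adversarial])[0]}")
print(f"Perturbation size (L-inf): {np.abs(adversarial - x).max():.2f}")
```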
Bypassing Anti-Virus
“…Skylight Cyber, based in Sydney, analyzed the engine and model for Cylance PROTECT, the company’s AI antimalware product, to find a way ‘to fool it consistently, creating a universal bypass.’”
– TechTarget
Autonomous Car Hijacking
“[Tencent researchers] placed bright-colored stickers on the road to create a ‘fake lane’ that tricked the self-driving software of a Tesla Model S into veering from the appropriate driving lane.”
– CNBC
Model Injection
Model injection is a technique that alters a machine learning model by inserting a malicious module that introduces hidden harmful or unwanted behavior.
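As a simplified, harmless illustration of the underlying mechanism, the sketch below shows how Python’s pickle format (which several common model-serialization formats build on) can execute attacker-chosen code at load time; the “payload” here is just a print statement:

```python
# Minimal sketch of why pickle-based model formats are an injection vector:
# any object whose __reduce__ returns a callable will have that callable
# executed during deserialization. The payload here is a harmless print;
# a real attack would hide it inside an otherwise functional model file.
import pickle

class InjectedPayload:
    def __reduce__(self):
        # Called automatically when the serialized object is loaded.
        return (print, ("!!! arbitrary code ran during model deserialization !!!",))

model_artifact = {"weights": [0.1, 0.2, 0.3], "backdoor": InjectedPayload()}
serialized = pickle.dumps(model_artifact)

# The victim "loads the model" -- the payload runs before they see the weights.
restored = pickle.loads(serialized)
print("Loaded keys:", list(restored.keys()))
```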
Model Backdooring
“A successful backdoor attack [against a machine learning model] can cause severe consequences, such as allowing an adversary to bypass critical authentication systems.”
– Analytics Insight
Model Hijacking
“Sultanik and associates also developed a proof-of-concept exploit based on the official PyTorch tutorial that can inject malicious code into an existing PyTorch model. The PoC, when loaded as a model in PyTorch, will exfiltrate all the files in the current directory to a remote server.”
– The Register
Built on the standard for AI security
HiddenLayer uses the MITRE ATLAS framework to align with the industry’s leading authority on adversarial threats targeting AI systems.
The Latest From HiddenLayer
Read more in our full research section or sign up for our occasional email newsletter and we’ll make sure you’re first in the know.
Show yourself.
Interested in cutting-edge information about HiddenLayer or securing ML? Sign up for our occasional email newsletter and we’ll make sure you’re first in the know.
How can we protect your ML?
Start by requesting your demo and let’s discuss protecting your unique ML advantage.