Introduction to Model Verification
Microsoft uses HiddenLayer Model Scanner to help developers assess the security of open-source models in the Azure AI model catalog.
HiddenLayer’s Model Scanner verifies models curated by Azure AI by scanning third-party and open-source models for emerging threats, such as cybersecurity vulnerabilities, malware, and other signs of tampering. The resulting attestations from Model Scanner, provided within each model card, can help security teams streamline AI deployment processes and empower development teams to fine-tune or deploy open-source models safely and confidently.
Risks Associated With Deploying AI/ML Models
Open-source model-sharing repositories grew out of the inherent complexity of data science, a shortage of practitioners, and the enormous value pre-trained models offer organizations: they dramatically reduce the time and effort required for AI adoption. However, such repositories often lack comprehensive security controls, which ultimately passes the risk on to the end user, and attackers are counting on it. Because security controls around AI models are scarce, and because those models are exposed to increasingly sensitive data, model hijacking attacks tend to evade traditional security solutions and carry a high potential for damage.
Risks can include:
- Network Requests — The model can make network requests, potentially allowing data theft and remote access, which could compromise the security of restricted environments.
- Embedded Payloads — Files can hide within the model, possibly through appending, steganography, or bundling, enabling malicious actions such as spreading malware or executing unauthorized commands.
- Arbitrary Code Execution — Attackers can run arbitrary code by exploiting model functions or vulnerabilities to compromise systems, posing severe risks such as data breaches or system manipulation (see the sketch after this list).
- Decompression Vulnerabilities — Some model files are small when compressed but expand enormously when loaded, exhausting resources and crashing systems, which can disrupt critical operations and cause data loss.
- Unsafe Python Modules — Certain Python modules referenced by a model can execute arbitrary code, threatening host machines and potentially enabling attackers to gain unauthorized access or cause system instability.
- File System Access — Models can access local files, risking data theft or unauthorized writes, which could leak sensitive information or compromise system integrity.
- Exploitation — Models can suffer from typical vulnerabilities, like buffer overflows, enabling various attacks on host machines, potentially resulting in widespread system compromise or unauthorized access.
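To make the arbitrary code execution and unsafe module risks concrete, here is a minimal sketch, assuming a hypothetical MaliciousModel class and file name, of how a pickle-based model file can run attacker-supplied commands the moment it is loaded:

```python
# Illustrative only: a pickled "model" that runs attacker code at load time.
# The MaliciousModel class and model.pkl file name are hypothetical.
import os
import pickle


class MaliciousModel:
    def __reduce__(self):
        # pickle will invoke os.system(...) during deserialization
        return (os.system, ("echo 'arbitrary code ran on model load'",))


# The attacker serializes the payload into what looks like a model file...
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# ...and the command executes as soon as a consumer loads it, before any
# inference is ever run.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

Nothing about such a file looks unusual to the consumer; the payload fires during deserialization, which is exactly why serialized models need to be inspected before they are loaded.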
Overview of Model Scanner
HiddenLayer Model Scanner analyzes AI models to identify hidden cybersecurity risks and threats, such as malware, vulnerabilities, and integrity issues. Its scanning engine, built on HiddenLayer’s patented detection techniques, meticulously inspects each layer and component of a model to detect possible signs of malicious activity, including malware, tampering, and backdoors.
With HiddenLayer Model Scanner, teams can verify the integrity and safety of their AI models and protect them from potential cyber threats.
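HiddenLayer’s detection techniques are proprietary, so the following is only a rough sketch of the general idea behind static analysis of a serialized model: examining the file’s structure without ever executing it. It uses Python’s standard pickletools module and a small, hypothetical deny-list to flag imports that rarely belong inside a legitimate checkpoint; a real scanner covers far more formats and indicators.

```python
# A minimal sketch of static (non-executing) inspection of a pickle-based
# model file. This is NOT HiddenLayer's engine; it only flags imports that
# commonly appear in malicious payloads.
import pickletools

# Modules that should rarely, if ever, appear inside a model checkpoint.
# os.system pickles under its real module name: posix (Linux) or nt (Windows).
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "socket"}


def scan_pickle(path: str) -> list[str]:
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    recent_strings = []  # track string pushes so STACK_GLOBAL can be resolved
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(arg)
        elif opcode.name == "GLOBAL":
            # arg looks like "posix system"
            module = str(arg).split()[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"GLOBAL imports {arg!r}")
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            module, name = recent_strings[-2], recent_strings[-1]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"STACK_GLOBAL imports {module}.{name}")
    return findings


if __name__ == "__main__":
    for finding in scan_pickle("model.pkl"):  # hypothetical file name
        print("suspicious:", finding)
```

Run against the model.pkl produced in the earlier sketch, this flags the posix (or nt) system import without ever deserializing the file.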
Key Model Scanning Product Capabilities
- Malware Analysis — Scans AI Models for embedded malicious code that could serve as an infection vector & launchpad for malware
- Vulnerability Assessment — Scans for known CVEs & zero-day vulnerabilities targeting AI Models
- Model Integrity — Analysis of AI Model’s layers, components & tensors to detect tampering or corruption
- Enterprise Efficacy — Uses a combination of static detection and analysis to identify malware, vulnerabilities, model integrity & corruption issues
Key Model Scanning Product Benefits
- Ensure third-party and open source AI models hosted by online communities & repositories are safe and secure to use
- Prevent inheritance of cybersecurity vulnerabilities, malware and corruption when fine-tuning open-source AI models via transfer learning (see the loading sketch after this list)
- Ensure AI Models are free of vulnerabilities and malware before deploying to production
- Improve the security & integrity of proprietary models and protect a company’s intellectual property
- Prevent AI Models from being a launchpad for malware
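Scanning and attestation work best alongside defensive loading habits on the consumer side. As a minimal sketch (the file names are hypothetical, and this is not part of HiddenLayer’s product), a team fine-tuning an open-source checkpoint can prefer a non-executable weight format such as safetensors, or restrict pickle-based loading to plain tensors:

```python
# Defensive loading habits when fine-tuning an open-source checkpoint.
# File names are hypothetical; this complements, but does not replace,
# scanning and attestation.
import torch
from safetensors.torch import load_file

# Preferred: safetensors stores raw tensors only, so loading cannot execute code.
state_dict = load_file("downloaded_model.safetensors")

# If only a pickle-based checkpoint is available, restrict deserialization to
# tensors and plain containers (supported in recent PyTorch releases).
state_dict = torch.load("downloaded_model.bin", weights_only=True)
```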
Why HiddenLayer?
HiddenLayer is the leading provider of Security for AI. Its security platform helps enterprises safeguard the machine learning models behind their most important products. HiddenLayer is the only company to offer turnkey security for AI that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded by a team with deep roots in security and ML, HiddenLayer aims to protect enterprises’ AI from inference, bypass, and extraction attacks, as well as model theft. The company is backed by a group of strategic investors, including M12 (Microsoft’s Venture Fund), Moore Strategic Ventures, Booz Allen Ventures, IBM Ventures, and Capital One Ventures.