Summary

OpenAI's launch of ChatGPT marked a pivotal moment in technology history. The AI arms race that followed, in which companies rush to integrate AI amid the dual pressures of rapid innovation and cybersecurity challenges, highlights the risks inherent in AI models. HiddenLayer's Model Scanner is crucial for identifying and mitigating these vulnerabilities. From the surge of third-party models on platforms like Hugging Face to the Wild West-like rush for AI dominance, this article offers insights into securing AI's future while enabling businesses to harness its transformative power safely.

Introduction

November 30, 2022 will go down as a major milestone in the history of technology. That was the day OpenAI made ChatGPT publicly available to the masses. Although we have been exploring artificial intelligence academically since the 1950s, and many industries (finance, defense, healthcare, insurance, manufacturing, cybersecurity, and more) have been putting AI into practical use since the early 2010s, OpenAI's launch of ChatGPT helped the general public truly understand the vast benefits AI can bring to technology, the economy, and society. We now find ourselves in the middle of an AI Arms Race, with practically every enterprise and start-up racing to adopt AI to solve their business and technical problems.

AI Acceleration vs AI Hesitancy

The sudden acceleration of AI adoption in this arms race puts immense pressure on companies' cybersecurity organizations to facilitate AI initiatives without impeding progress. Many CISOs and their teams suffer from "AI Hesitancy": they have not been afforded the time to understand the full scope of the cybersecurity risk and threat landscape, or to put the people, processes, procedures, and products in place to embrace AI safely and securely. In A Beginner's Guide to Securing AI for SecOps, we offer a primer to help Security Operations teams start securing AI.

AI acceleration amplifies the cybersecurity risks inherent in AI models. HiddenLayer's AI Model Scanner empowers cybersecurity teams to help their companies adopt AI while minimizing the risk of attacks.

The Wild Wild West of the New AI Frontier

Today’s AI technological frontier is reminiscent of the Wild West of America in the 1800s. Like early pioneers, those venturing into this new era are motivated by its promise, and first-movers gain significant advantages by staking their claim early before the area becomes saturated. As success stories emerge, they attract an influx of others, including unwanted threat actors. The frontier remains largely lawless despite new regulations due to a lack of enforcement and security resources. Consequently, organizations must take proactive steps to protect themselves and their AI assets.

AI Rush: Supply vs Demand

The hyper-demand for AI and machine learning models, exacerbated by a short supply of AI expertise (data scientists, ML engineers, etc.), has created a market explosion of third-party and open-source AI models. One symptom of this hyper-demand is the growth of Hugging Face. Billed as the "GitHub of AI Models," Hugging Face has established itself as the leading AI model marketplace, where anyone can download models to bootstrap their adoption of AI. In 2023, Hugging Face hosted about 50,000 models; today, a little over a year later, it has exceeded 650,000 models created by AI companies and creators. We are clearly in the middle of a gold rush in the era of the Dot AI Boom.

Downloading third-party models without validation, attestation, or insight into their trustworthiness exposes companies to significant cybersecurity risks. Recognizing this as an issue that could impede AI adoption, Microsoft uses HiddenLayer to scan the models in its curated Azure AI catalog on behalf of its customers.

Exploitation of Malicious AI Models

AI Robbery

AI models are uniquely attractive to threat actors and ripe for attack because they contain both sensitive data and code execution capabilities. Threat actors commonly utilize malicious code execution to access sensitive data and intel. In this scenario, the keys to the safe are attached to the safe itself. 

What are the most common threats to AI Model Files?

  • Arbitrary Code Execution – Arbitrary code can be executed as part of a model format’s intended functionality or by exploiting a vulnerability. An attacker may run code to compromise a target system, exfiltrate data, poison training data sets, mine cryptocurrency, encrypt the machine, or worse (a minimal example of this mechanism follows this list).
  • Network Requests – The machine learning model may execute network requests, allowing for data exfiltration and remote access to a restricted environment.
  • Embedded Payloads – Malicious executables and other files can be embedded within a machine learning model in several ways: appended to a model, injected into the weights and biases via steganography, or bundled as part of a model archive.
  • Decompression Vulnerabilities – Some machine learning models can be compressed to a small size when saved but can be designed to expand to an enormous size on load, crashing the system it is loaded on.
  • Unsafe Python Modules – Unsafe modules within the Python ecosystem can execute arbitrary code and be used to compromise a machine.
  • File System Access – The machine learning model can access the local file system, allowing for data exfiltration or arbitrary file writes to the file system.
  • Exploitation – Machine learning models are not impervious to typical vulnerabilities such as buffer overflows and path traversals when parsing the model file. These can then be used to exploit the host machine to achieve arbitrary code execution, arbitrary file writes, and more. 
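Several of these threats share a common root cause: serialization formats that execute code on load. The sketch below is a minimal, harmless illustration using Python's standard pickle module (the file name and echoed string are arbitrary); it shows how an object's __reduce__ hook turns deserialization itself into arbitrary code execution.

```python
import os
import pickle

class MaliciousPayload:
    # pickle calls __reduce__ when serializing; whatever callable it returns
    # is invoked again at load time, so unpickling executes attacker code.
    def __reduce__(self):
        return (os.system, ("echo arbitrary code ran during unpickling",))

# The resulting file looks like any other .pkl model artifact.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# Merely loading the file runs the payload; no attribute access is needed.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

The same pattern underlies many real-world attacks on pickle-based model files: the payload rides along with what otherwise looks like legitimate weights.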

HiddenLayer Model Scanner

HiddenLayer’s Model Scanner performs a deep introspective analysis of AI models with the industry’s most comprehensive breadth and depth of coverage. It recognizes and parses all the major model file formats to identify cybersecurity risks and threats embedded in the model’s layers, tensors, and functionality using HiddenLayer’s patented detection techniques. 

AI Model Format War

AI models come in many flavors, and each format has nuances and capabilities that can expose vulnerabilities to exploitation. Some of the most commonly used model formats seen in the wild are:

  • GGUF (.gguf) – A file format for storing models for inference with GGML and executors based on GGML. GGUF is a binary format designed for fast loading and saving of models and for ease of reading. Models are traditionally developed using PyTorch or another framework and then converted to GGUF for use in GGML.
  • H5 (.h5) – A file format used to organize large datasets; an H5 file can contain multiple files that potentially reference each other. It is very common to bundle datasets, weights, or supporting scripts in an h5 file.
  • Keras (.keras, .tf) – A high-level neural network API written in Python that runs on top of multiple open-source ML frameworks such as TensorFlow. The Keras model format can be a directory or a single file.
  • Nemo (.nemo) – Models used to train and reproduce Conversational AI models; compatible with the PyTorch ecosystem.
  • NumPy (.npy) – A file type for storing N-dimensional arrays, a Python datatype that is very common in machine learning.
  • ONNX (.onnx) – A machine learning file format that allows for easy exchange between different frameworks. An ONNX file stores model information as a graph object.
  • Pickle (.pkl, .pickle) – A file type that serializes Python objects. It can contain data, trained models, and weights.
  • PyTorch (.pt, .bin, .zip) – The model format primarily used by the PyTorch ML framework. The format is a compressed ZIP archive containing a data.pkl (pickle file) and the associated model weights.
  • Safetensors (.safetensors) – A safe and fast file format for storing and loading tensors. Safetensors is meant to replace PyTorch models distributed as pickles with a safer version, where only the tensors are serialized without any surrounding code and logic.
  • TensorFlow (.savedmodel, .tf, .pb) – TensorFlow is a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks. TensorFlow's native save format (.tf) is a directory containing variables and three protobuf files; the SavedModel format persists the graph of a TensorFlow model to disk.
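The last two entries illustrate why format choice matters for security. As a rough sketch (assuming the safetensors and torch packages are installed; file names are arbitrary), the example below contrasts loading tensors from a safetensors file, which contains no executable objects, with loading a pickle-based PyTorch checkpoint, which can execute code unless deserialization is restricted:

```python
import torch
from safetensors.torch import save_file, load_file

tensors = {"weight": torch.randn(4, 4), "bias": torch.zeros(4)}

# safetensors stores raw tensor bytes plus a JSON header: no Python objects
# are serialized, so loading the file cannot trigger code execution.
save_file(tensors, "model.safetensors")
restored = load_file("model.safetensors")

# A PyTorch .pt/.bin checkpoint is a ZIP archive wrapping a pickle; calling
# torch.load on an untrusted file can run attacker-controlled code unless
# deserialization is limited to plain tensors with weights_only=True.
torch.save(tensors, "model.pt")
restored_pt = torch.load("model.pt", weights_only=True)
```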

Security Checkpoints Throughout the AI Model Lifecycle

The AI model training and development process can be highly dynamic, with constant changes to data, functionality, weights, and biases from a team of contributors. This dynamic nature makes implementing traditional change control, code audits, and chain of custody difficult.

HiddenLayer Model Scanner should be used to implement security checkpoints at multiple stages of the AI Operations lifecycle to ensure the security and trustworthiness of the model:

  1. Scan third-party models upon initial download to ensure the foundational model is free of vulnerabilities or malicious code. This should be done before feeding it sensitive training data. 
  2. Perform scans on all models within an MLOps Tools registry/catalog to identify any existing latent security risks.
  3. Scan models whenever a new version is created to identify supply chain attacks or inadvertent inclusions of new vulnerabilities.
  4. Enforce model scanning before models transition to production to confirm their safety and take a snapshot of the last known safe state (a minimal gate of this kind is sketched below).
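As a rough sketch of what such a checkpoint can look like in practice, a promotion gate simply refuses to move a model forward if the scan reports detections. The scan_model helper and directory name below are hypothetical placeholders, not the Model Scanner's actual interface; wire them to whatever scanning integration your deployment exposes.

```python
from pathlib import Path

def scan_model(path: Path) -> list[str]:
    # Hypothetical placeholder: invoke your model-scanning tool here and
    # return its detections (e.g. "arbitrary code execution in layer X").
    raise NotImplementedError("wire this to your model scanner")

def promote(path: Path) -> None:
    detections = scan_model(path)
    if detections:
        # Block promotion and hand the findings to security for triage.
        raise RuntimeError(f"{path.name} failed scanning: {detections}")
    print(f"{path.name} passed scanning; promoting to the next stage")

# Gate every candidate model before it reaches the registry or production.
for artifact in Path("candidate_models").glob("*"):
    promote(artifact)
```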
[Figure: diagram of the Model Scanner security checkpoints]

The HiddenLayer AISec Platform integrates with MLOps tools (such as Microsoft AzureML, Databricks, and others) to synchronize and aggregate their model registries into HiddenLayer's Model Inventory, giving security teams a single view of all the company's models in development.

Detection Analysis & Incident Response

When the Model Scanner detects an issue with an AI model, it provides insightful details to allow security teams to collaborate with data science teams to investigate further. In this example, the scan of a Keras file found that the model has a lambda function that allows for arbitrary code execution. 
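To make the finding concrete, here is a minimal illustration of the underlying mechanism rather than the detected model itself: a Keras Lambda layer whose serialized Python runs when the model is loaded and used. It assumes Keras 3 on TensorFlow 2.x, and the echoed string and file name stand in for a real payload.

```python
import tensorflow as tf

# Illustrative payload: a lambda that shells out via os.system and then
# passes its input through unchanged so the model still "works".
payload = lambda x: (__import__("os").system("echo lambda payload executed"), x)[1]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Lambda(payload),  # the lambda's bytecode is serialized with the model
    tf.keras.layers.Dense(1),
])
model.save("innocuous_model.keras")

# On the consumer side, recent Keras versions refuse to deserialize the lambda
# unless safe_mode is disabled; once loaded, the payload runs whenever the
# layer is traced or called.
loaded = tf.keras.models.load_model("innocuous_model.keras", safe_mode=False)
loaded.predict(tf.constant([[1.0]]))
```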

The presence of this function could allow a threat actor to exploit its capabilities to execute malware, a backdoor, or any other capability to accomplish their goal. Many questions arise from this single detection:

  1. Was this vulnerability already embedded in the 3rd party model used as the foundation for this version?
  2. Was the Data Science team aware of this capability in the model?
  3. If this comes as a surprise, could this be evidence of a supply chain attack by an external threat actor, an internal threat, or a result of a compromised credential?
  4. If the Data Science team was aware of the functionality, they may have felt it was important for the model to deliver on its purpose but been unaware of the cybersecurity risks it poses to the company.

By detecting this early in the MLOps lifecycle and gaining valuable insight from the detection details and subsequent investigation, security teams can save data science teams and the company the time and money spent training and developing insecure AI models, or, worse, prevent a breach resulting from the exploitation of the vulnerability.

Conclusion

Companies can move from "AI Hesitancy" to "AI Acceleration" if they build security into their AI adoption early in the journey. The HiddenLayer AISec Platform and Model Scanner can serve as security checkpoints at key milestones in the MLOps lifecycle, identifying embedded vulnerabilities and malicious code within AI models, reducing the company's risk of attacks and breaches, and strengthening its AI security posture.