Summary
The future of artificial intelligence depends not just on powerful models but also on our ability to trust them. As AI models become the backbone of countless applications, from healthcare diagnostics to financial systems, their integrity and security have never been more important. Yet the current AI ecosystem faces a fundamental challenge: how do you prove that the model being deployed is exactly what its creator intended? Without layered verification mechanisms, organizations risk deploying compromised, tampered-with, or maliciously modified models, with potentially catastrophic consequences.
At the beginning of April, the OpenSSF Model Signing Project released V1.0 of the Model Signing library and CLI to enable the community to sign and verify model artifacts. Today, the project has formalized its work as the OpenSSF Model Signing (OMS) Specification, a critical milestone in establishing security and trust across AI supply chains. HiddenLayer is proud to have helped drive this initiative in partnership with NVIDIA and Google, as well as the many contributors whose input has helped shape this project.
For more technical information on the release, head over to the OpenSSF blog to read more about this specification launch.
What is Model Signing?
The software development industry has gained valuable insights from decades of supply chain security challenges. We’ve established robust integrity protection mechanisms such as code signing certificates that safeguard Windows and mobile applications, GPG signatures to verify Linux package integrity, and SSL/TLS certificates that authenticate every HTTPS connection. These same security fundamentals that secure our applications, infrastructure, and information must now be applied to AI models.
Modern AI development involves complex distribution chains where model creators differ from production implementers. Models are embedded in applications, distributed through repositories and deployment platforms, and accessed by countless users worldwide. Each handoff, from developers to model hubs to applications to end users, creates potential attack vectors where malicious actors could compromise model integrity.
Cryptographic model signing via the OMS Specification helps ensure trust in this ecosystem by delivering verifiable proof that a model is authentic, unaltered, and supplied by a trusted party. Just as deploying unsigned software in production environments violates established security practices, deploying unverified AI models, whether standalone or bundled within applications, introduces comparable risks that organizations should be unwilling to accept.
Who should use Model Signing?
Whether you’re producing models or sourcing them elsewhere, being able to sign and verify your models is essential for establishing trust, ensuring integrity, and maintaining compliance in your AI development pipeline.
- End users receive assurance that their AI models remain authentic and unaltered from their original form.
- Compliance and governance teams gain access to comprehensive audit trails that facilitate regulatory reporting and oversight processes.
- Developers and MLOps teams can detect tampering, verify model integrity, and ensure consistent reproducibility across testing and production environments.
Why use the OMS Specification?
The OMS specification was designed to address the unique constraints of ML model development, the breadth of application, and the wide variety of signing methods. Key design features include:
- Support for any model format and size across collections of files
- Flexible public key infrastructure (PKI) options, including Sigstore, self-signed certificates, and key pairs from PKI providers
- Building towards traceable origins and provenance throughout the AI supply chain
- Risk mitigation against tampering, unauthorized modifications, and supply chain attacks
Interested in learning more? You can find comprehensive information about this specification release on the OpenSSF blog. Alternatively, if you wish to get hands-on with model signing today, you can use the library and CLI in the model-signing PyPI package to sign and verify your model artifacts.
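As a minimal sketch of what that looks like, the snippet below signs a local model directory using the library's Sigstore-backed signer. It assumes the v1.0 Python API shape described in the model-signing package documentation; the model path and signature filename are placeholders, so check the package docs for the exact interface and the other PKI options.

```python
import model_signing

# Sign every file under ./my-model and write a detached signature to my-model.sig.
# This sketch uses the Sigstore (keyless) signer; key pairs and certificates are
# configured through the same Config object per the package documentation.
model_signing.signing.Config().use_sigstore_signer().sign(
    "my-model",      # placeholder: directory containing the model artifacts
    "my-model.sig",  # placeholder: output path for the signature
)
```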
In addition, the OMS standard is already being adopted by major model hubs such as NVIDIA’s NGC catalog.
Do I still need to scan models if they’re signed?
Model scanning and model signing work together as complementary security measures, both essential for comprehensive AI model protection. Model scanning tells you whether a model contains malicious code or vulnerabilities, while model signing tells you whether the model has been tampered with in transit. It’s worth remembering that during the infamous SolarWinds attack, backdoored libraries went undetected for months, partly because they were signed with a valid digital certificate. The malware was trusted because of its signature (signing), but the malicious content itself was never verified to be safe (scanning). This example demonstrates the need for multiple verification layers in supply chain security.
Model scanning provides essential visibility by detecting anomalous patterns and security risks within AI models. However, scanning only reveals what a model contains at the time of analysis, not whether it remains unchanged from its initial state during distribution. Model signing fills this critical gap by providing cryptographic proof that the scanned model is identical to the one being deployed, and that it came from a verifiable provider, creating a chain of trust from initial analysis through production deployment.
Together, these complementary layers ensure both the integrity of the model’s contents and the authenticity of its delivery, providing comprehensive protection against supply chain attacks targeting AI systems. If you’re interested in learning more about model scanning, check out our datasheet on the HiddenLayer Model Scanner.
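To make the layering concrete, here is a rough sketch of how a deployment pipeline might gate model loading on both checks. The verification call follows the model-signing v1.0 Python API as an assumption, the Sigstore identity values are illustrative only, and scan_model is a hypothetical stand-in for whatever scanner your pipeline uses.

```python
import model_signing

MODEL_DIR = "my-model"       # placeholder: path to the downloaded model
SIGNATURE = "my-model.sig"   # placeholder: detached signature from the publisher

# Signing layer: check that the artifacts are unchanged and were signed by the
# expected publisher identity (identity and issuer values below are examples).
model_signing.verifying.Config().use_sigstore_verifier(
    identity="release-bot@example.com",
    oidc_issuer="https://accounts.example.com",
).verify(MODEL_DIR, SIGNATURE)

# Scanning layer: inspect the verified contents for malicious code or
# vulnerabilities. scan_model() is a hypothetical placeholder for your scanner.
# scan_model(MODEL_DIR)

# Only load the model into the serving environment once both checks pass.
```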
Community and Next Steps
The OpenSSF Model Signing Project is part of OpenSSF’s effort to improve security across open-source software and AI systems. The project is actively developing the OMS specification to provide a foundation for AI supply chain security and is looking to incorporate additional metadata for provenance verification, dataset integrity, and more in the near future.
This open-source project operates within the OpenSSF AI/ML working group and welcomes contributions from developers, security practitioners, and anyone interested in AI security. Whether you want to help with specification development, implementation, or documentation, we welcome your input in building practical trust mechanisms for AI systems.