Innovation Hub

AI System Reconnaissance

Summary: Honeypots are decoy systems designed to attract attackers and provide valuable insights into their tactics in a controlled environment. By observing adversarial behavior, organizations can enhance their understanding of emerging threats. In this blog, we share findings from a honeypot mimicking an exposed ClearML server. Our observations indicate that an external actor intentionally targeted […]

Indirect Prompt Injection of Claude Computer Use

Introduction: Recently, Anthropic released an exciting new application of generative AI called Claude Computer Use as a public beta, along with a reference implementation for Linux. Computer Use is a framework that allows users to interact with their computer via a chat interface, enabling the chatbot to view their workspace via screenshots, manipulate the interface […]

SAI Security Advisory

CVE-2024-0129

NVIDIA NeMo Vulnerability Report

Unsafe extraction of NeMo archive leading to arbitrary file write

CVE Number: CVE-2024-0129

Summary: The _unpack_nemo_file function used by the SaveRestoreConnector class for model loading uses tarfile.extractall() in an unsafe way, which can lead to an arbitrary file write when a model is loaded.

Products Impacted: This vulnerability is present in NVIDIA NeMo versions prior to r2.0.0rc0.

CVSS Score: 6.3 (AV:L/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:L) […]
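
To illustrate the class of bug described above (this is not NeMo's actual code; the function names below are hypothetical), here is a minimal sketch of an archive loader that calls tarfile.extractall() on untrusted input, alongside one possible mitigation that rejects members resolving outside the destination directory:

import os
import tarfile

def extract_model_archive(archive_path: str, dest_dir: str) -> None:
    # VULNERABLE sketch: extractall() trusts member paths, so an entry
    # named e.g. "../../home/user/.bashrc" is written outside dest_dir.
    with tarfile.open(archive_path) as tar:
        tar.extractall(dest_dir)

def extract_model_archive_safely(archive_path: str, dest_dir: str) -> None:
    # Mitigation sketch: resolve each member path and refuse anything
    # that would land outside the destination directory.
    root = os.path.realpath(dest_dir)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(root, member.name))
            if os.path.commonpath([root, target]) != root:
                raise ValueError(f"Blocked path traversal attempt: {member.name}")
        tar.extractall(root)

On recent Python versions, tarfile's built-in extraction filters (for example, extractall(path, filter="data")) offer a comparable safeguard against traversal entries.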

CVE-2024-24590

Pickle Load on Artifact Get Leading to Code Execution

Following responsible disclosure practices, the vulnerabilities referenced in this blog were disclosed to ClearML before publishing. We would like to thank their team for their efforts in working with us to resolve the issues well within the 90-day window. This demonstrates that responsible disclosure allows for a good working relationship between security teams and product […]
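
As a generic illustration of why this class of issue is severe (this is not ClearML's code, and the artifact flow here is purely hypothetical), the sketch below shows how unpickling untrusted bytes executes attacker-chosen code via __reduce__:

import pickle

class MaliciousArtifact:
    # Any object whose __reduce__ returns a callable has that callable
    # invoked during unpickling; a harmless print stands in for an
    # attacker-controlled command here.
    def __reduce__(self):
        return (print, ("code executed during pickle.loads()",))

# An attacker stores a poisoned artifact...
untrusted_bytes = pickle.dumps(MaliciousArtifact())

# ...and it runs the moment the consumer deserializes it.
pickle.loads(untrusted_bytes)

Because deserialization alone triggers execution, loading pickled artifacts from untrusted sources is dangerous regardless of what the consuming code does with the result.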

CVE-2024-24591

Path Traversal on File Download Leading to Arbitrary Write

Following responsible disclosure practices, the vulnerabilities referenced in this blog were disclosed to ClearML before publishing. We would like to thank their team for their efforts in working with us to resolve the issues well within the 90-day window. This demonstrates that responsible disclosure allows for a good working relationship between security teams and product […]
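
For context on the vulnerability class named in the title (again, a hypothetical sketch rather than ClearML's implementation), the example below shows how an unvalidated, client-supplied filename can escape the intended directory during a file write, and one possible containment check:

import os

STORAGE_ROOT = "/srv/artifact-store"  # hypothetical base directory

def save_file_unsafe(filename: str, data: bytes) -> None:
    # VULNERABLE sketch: a client-supplied name like "../../etc/cron.d/job"
    # escapes STORAGE_ROOT because the joined path is never validated.
    with open(os.path.join(STORAGE_ROOT, filename), "wb") as f:
        f.write(data)

def save_file_safe(filename: str, data: bytes) -> None:
    # Mitigation sketch: resolve the final path and require it to remain
    # inside the storage root before writing.
    root = os.path.realpath(STORAGE_ROOT)
    target = os.path.realpath(os.path.join(root, filename))
    if os.path.commonpath([root, target]) != root:
        raise ValueError(f"Blocked path traversal attempt: {filename}")
    with open(target, "wb") as f:
        f.write(data)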

CVE-2024-24592

Improper Auth Leading to Arbitrary Read-Write Access

Following responsible disclosure practices, the vulnerabilities referenced in this blog were disclosed to ClearML before publishing. We would like to thank their team for their efforts in working with us to resolve the issues well within the 90-day window. This demonstrates that responsible disclosure allows for a good working relationship between security teams and product […]

CVE-2024-24593

Cross-site Request Forgery in ClearML Server

Following responsible disclosure practices, the vulnerabilities referenced in this blog were disclosed to ClearML before publishing. We would like to thank their team for their efforts in working with us to resolve the issues well within the 90-day window. This demonstrates that responsible disclosure allows for a good working relationship between security teams and product […]

Webinar

Automated Red Teaming for AI Explained

Ready to dive into the essentials of automated red teaming? In this session, we will cover the fundamentals of automated red teaming for AI, a crucial aspect of security for AI that empowers organizations to strengthen their defenses against advanced threats.

Wednesday, December 4th at 1pm CT

HiddenLayer in the News

Security for AI Platform Expansion: Introducing Automated Red Teaming for AI

Austin, TX — November 20, 2024 — HiddenLayer, a leader in security for AI solutions, today announced the launch of its Automated Red Teaming solution for artificial intelligence, a transformative tool that enables security teams to rapidly and thoroughly assess generative AI system vulnerabilities. The addition of this new product extends HiddenLayer’s AISec platform capabilities […]

HiddenLayer Recognized as a Gartner Cool Vendor for AI Security in 2024

Austin, TX – October 30, 2024 – HiddenLayer, a leader in security for AI solutions, is honored to be recognized as a Cool Vendor for AI Security in Gartner’s 2024 report. This prestigious distinction highlights HiddenLayer’s innovative approaches to safeguarding artificial intelligence models, data, and workflows against a rapidly evolving threat landscape. HiddenLayer’s proactive solutions […]

HiddenLayer Announces New Features to Safeguard Enterprise AI Models with Improved Risk Detection

Austin, TX – October 8, 2024 – HiddenLayer today announced the launch of several new features to its AISec Platform and Model Scanner, designed to enhance risk detection, scalability, and operational control for enterprises deploying AI at scale. As the pace of AI adoption accelerates, so do the threats targeting these systems, necessitating security measures […]