HiddenLayer is the leading provider of Security for AI. Its security platform helps enterprises safeguard the machine learning models behind their most important products. HiddenLayer is the only company to offer turnkey security for AI that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded by a team with deep roots in security and ML, HiddenLayer aims to protect enterprises' AI from inference, bypass, and extraction attacks, as well as model theft. The company is backed by a group of strategic investors, including M12 (Microsoft's Venture Fund), Moore Strategic Ventures, Booz Allen Ventures, IBM Ventures, and Capital One Ventures.
Research | September 25, 2024
Executive Summary: This blog explores the vulnerabilities of Google's Gemini for Workspace, a versatile AI assistant integrated...
Tags: AI Security, Vulnerability research

Research | August 9, 2024
Summary: HiddenLayer researchers have recently conducted security research on edge AI devices, largely from an exploratory...
Tags: Adversarial Machine Learning, Vulnerability research

Research | June 25, 2024
Executive Summary: Many LLMs and LLM-powered apps deployed today use some form of prompt filter or alignment to protect their...
Tags: Vulnerability research

Research | June 6, 2024
Summary: OpenAI revolutionized the world by launching ChatGPT, marking a pivotal moment in technology history. The AI arms...
Tags: Adversarial Machine Learning, AI Security, Cybersecurity, Data Scientists, ML Ops, Supply Chain, Vulnerability research

Research | April 29, 2024
Summary: HiddenLayer researchers have discovered a vulnerability, CVE-2024-27322, in the R programming language that allows...
Tags: Adversarial Machine Learning, Cyber Threat Intelligence, Cybersecurity, Data Science, Malware, Supply Chain, Vulnerability research

Research | March 12, 2024
Google Gemini Content and Usage Security Risks Discovered: LLM Prompt Leakage, Jailbreaks, & Indirect Injections. POC...
Tags: Google Gemini, Large Language Model, Vulnerability research

Research | February 21, 2024
Summary: In this blog, we show how an attacker could compromise the Hugging Face Safetensors conversion space and its associated...
Tags: Hugging Face, Malicious models, Safetensors