Same Model, Different Hat

Summary: OpenAI recently released its Guardrails framework, a new set of safety tools designed to detect and block potentially harmful model behavior. Among these are “jailbreak” and “prompt injection” detectors that rely on large language models (LLMs) themselves to judge whether an input or output poses a risk. Our research shows that this approach is […]

Research
Read it now

Securing AI Through Patented Innovation

As AI systems power critical decisions and customer experiences, the risks they introduce must be addressed. From prompt injection attacks to adversarial manipulation and supply chain threats, AI applications face vulnerabilities that traditional cybersecurity can’t defend against. HiddenLayer was built to solve this problem, and today, we hold one of the world’s strongest intellectual property […]

Insights
Read it now

AI Discovery in Development Environments

What Is AI Discovery in AI Development? AI is reshaping how organizations build and deliver software. From customer-facing applications to internal agents that automate workflows, AI is being woven into the code we develop and deploy in the cloud. But as the pace of adoption accelerates, most organizations lack visibility into what exactly is inside […]

Insights
Read it now

HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity

On September 30, Databricks officially launched its Data Intelligence Platform for Cybersecurity, marking a major step in unifying data, AI, and security under one roof. At HiddenLayer, we’re proud to be part of this new platform, as it represents a significant milestone in the industry’s direction. Why Databricks’ Data Intelligence Platform for Cybersecurity […]

In the News
Read it now

The Expanding AI Cyber Risk Landscape

Anthropic’s recent disclosure proves what many feared: AI has already been weaponized by cybercriminals. But those incidents are just the beginning. The real concern lies in the broader risk landscape where AI itself becomes both the target and the tool of attack. Unlimited Access for Threat Actors: Unlike enterprises, attackers face no restrictions. They are […]

Research
Read it now

The First AI-Powered Cyber Attack

In August, Anthropic released a threat intelligence report that may mark the start of a new era in cybersecurity. The report details how cybercriminals have begun actively employing Anthropic AI solutions to conduct fraud and espionage, and even to develop sophisticated malware. These criminals used AI to plan, execute, and manage entire operations. Anthropic’s disclosure confirms […]

Research
Read it now

Prompts Gone Viral: Practical Code Assistant AI Viruses

Where were we? Cursor is an AI-powered code editor designed to help developers write code faster and more intuitively by providing intelligent autocomplete, automated code suggestions, and real-time error detection. It leverages advanced machine learning models to analyze coding context and streamline software development tasks. As adoption of AI-assisted coding grows, tools like Cursor play […]

Research
Read it now

Integrating AI Security into the SDLC

Executive Summary: AI and ML systems are expanding the software attack surface in new and evolving ways, through model theft, adversarial evasion, prompt injection, data poisoning, and unsafe model artifacts. These risks can’t be fully addressed by traditional application security alone. They require AI-specific defenses integrated directly into the Software Development Lifecycle (SDLC). This guide […]

Insights
Read it now

Persistent Backdoors

Summary: Unlike other model backdooring techniques, the ShadowLogic technique discovered by the HiddenLayer SAI team ensures that a backdoor will remain persistent and effective even after model conversion and/or fine-tuning. Whether a model is converted from PyTorch to ONNX, ONNX to TensorRT, or even if it is fine-tuned, the backdoor persists. Therefore, the ShadowLogic technique […]

Research
Read it now