Introducing Workflow-Aligned Modules in the HiddenLayer AI Security Platform

By Arman Abdulhayoglu

December 22, 2025

Modern AI environments don’t fail because of a single vulnerability. They fail when security can’t keep pace with how AI is actually built, deployed, and operated. That’s why our latest platform update represents more than a UI refresh. It’s a structural evolution of how AI security is delivered.

With the release of HiddenLayer AI Security Platform Console v25.12, we’ve introduced workflow-aligned modules, a unified Security Dashboard, and an expanded Learning Center, all designed to give security and AI teams clearer visibility, faster action, and better alignment with real-world AI risk.

From Products to Platform Modules

As AI adoption accelerates, security teams need clarity, not fragmented tools. In this release, we’ve transitioned from standalone product names to platform modules that map directly to how AI systems move from discovery to production.

Here’s how the modules align:

Previous Name                    New Module Name
Model Scanner                    AI Supply Chain Security
Automated Red Teaming for AI     AI Attack Simulation
AI Detection & Response (AIDR)   AI Runtime Security

This change reflects a broader platform philosophy: one system, multiple tightly integrated modules, each addressing a critical stage of the AI lifecycle.

What’s New in the Console

Workflow-Driven Navigation & Updated UI

The Console now features a redesigned sidebar and improved navigation, making it easier to move between modules, policies, detections, and insights. The updated UX reduces friction and keeps teams focused on what matters most: understanding and mitigating AI risk.

Unified Security Dashboard

Formerly delivered through reports, the new Security Dashboard offers a high-level view of AI security posture, presented in charts and visual summaries. It’s designed for quick situational awareness, whether you’re a practitioner monitoring activity or a leader tracking risk trends.

Exportable Data Across Modules

Every module now includes exportable data tables, enabling teams to analyze findings, integrate with internal workflows, and support governance or compliance initiatives.

Learning Center

AI security is evolving fast, and enablement should too. The new Learning Center centralizes tutorials and documentation, helping teams onboard more quickly and get more value from the platform.

Incremental Enhancements That Improve Daily Operations

Alongside the foundational platform changes, recent updates also include quality-of-life improvements that make day-to-day use smoother:

  • Default date ranges for detections and interactions
  • Severity-based filtering for AI Supply Chain Security (formerly Model Scanner) and AI Runtime Security (formerly AIDR)
  • Improved pagination and table behavior
  • Updated detection badges for clearer signal
  • Optional support for custom logout redirect URLs (via SSO)

These enhancements reflect ongoing investment in usability, performance, and enterprise readiness.

Why This Matters

The new Console experience aligns directly with the broader HiddenLayer AI Security Platform vision: securing AI systems end-to-end, from discovery and testing to runtime defense and continuous validation.

By organizing capabilities into workflow-aligned modules, teams gain:

  • Clear ownership across AI security responsibilities
  • Faster time to insight and response
  • A unified view of AI risk across models, pipelines, and environments

This update reinforces HiddenLayer’s focus on real-world AI security: purpose-built for modern AI systems, model-agnostic by design, and deployable without exposing sensitive data or IP.

Looking Ahead

These Console updates are a foundational step. As AI systems become more autonomous and interconnected, platform-level security, not point solutions, will define how organizations safely innovate.

We’re excited to continue building alongside our customers and partners as the AI threat landscape evolves.

Related Insights

  • Why Autonomous AI Is the Next Great Attack Surface
  • Bringing Transparency to Third-Party AI Models
  • Inside HiddenLayer’s Research Team: The Experts Securing the Future of AI
