HiddenLayer releases enhanced capabilities for its AI Risk Assessment, ML Training, and Red Teaming Services

Amid escalating global AI regulation, including the EU AI Act and the Biden administration’s Executive Order on AI, and the release of recent AI frameworks by prominent industry leaders like Google and IBM, HiddenLayer has been working diligently to enhance its Professional Services to meet growing customer demand. Today, we are excited to bring upgraded capabilities to market, offering customized offensive security evaluations for companies across every industry: an AI Risk Assessment, ML Training, and, perhaps most excitingly, our Red Teaming services.

What does Red Teaming mean for AI?

In cybersecurity, red teaming typically means evaluating an organization’s security posture through offensive techniques. That may sound like a fancy way of describing a hacker, but in this case the red teamer is employed or contracted to perform the testing.

Even after the collective acceptance that AI will define this decade, building internal red teams is often feasible only for industry giants such as Google, NVIDIA, and Microsoft, each of which can afford the dedicated resources to protect both the models they develop and those they put to work. Our offerings aim to bring this level of support and best practice to companies of all sizes through our ML Threat Operations team, which partners closely with your data science and cybersecurity teams, providing the knowledge, insight, and tools needed to protect and maximize AI investments.

Advancements in AI Red Teaming

It would be unfair to mention these companies by name and not highlight some of their incredible work to shine a light on the security of AI systems. In December 2021, Microsoft published its Best practices for AI security risk management. In June 2023, NVIDIA introduced its red team to the world alongside the framework it uses as the foundation for its assessments. Most recently, in July 2023, Google announced its own AI red team following the release of its Secure AI Framework (SAIF).

So what has turned the heads of these giants, and what’s stopping others from following suit?

Risk vs reward

The rapid innovation and integration of AI into our lives have far outpaced the security controls we’ve placed around it. With LLMs taking the world by storm over the last year, awareness has grown of the sheer scale of critical decisions being deferred to AI systems. The honeymoon period has started to wane, leaving an uncomfortable feeling that the systems we’ve placed so much trust in are inherently vulnerable.

Further, there continues to be a cybersecurity skills shortage, and it is even more acute for individuals at the intersection of AI and cybersecurity. Those with the necessary experience in both fields are often already employed by industry leaders and firms specializing in securing AI, which makes this talent hard to find and even harder to acquire.

To help address this problem, HiddenLayer provides our own ML Threat Operations team, specifically designed to help companies secure their AI and ML systems without having to find, hire, and grow their own bespoke AI security teams. We engage directly with data scientists and cybersecurity stakeholders to help them understand and mitigate the potential risks posed throughout the machine learning development lifecycle and beyond. 

To achieve this, we have three distinct offerings specifically designed to give you peace of mind.

Securing AI Systems with HiddenLayer Professional Services

HiddenLayer Red Teaming

AI models are incredibly prone to adversarial attacks, including inference, exfiltration, and prompt injection, to name just a few modern attack types. An example of an attack on a digital supply chain is an adversary exploiting highly weighted features in a model to game the system in a way that benefits them, for example, pushing through fraudulent payments, forcing loan approvals, or evading spam or malware detection. We suggest reviewing the MITRE ATLAS framework for a more exhaustive listing of attacks.
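To make this concrete, below is a minimal, hypothetical sketch (illustrative only, not HiddenLayer tooling): an attacker who can observe a simple linear fraud model’s weights nudges its most heavily weighted feature until a transaction that was flagged as fraud is scored as legitimate. The data, feature layout, and thresholds are all invented for illustration.

```python
# Illustrative sketch (not HiddenLayer tooling): an attacker who can observe a
# linear fraud model's weights nudges the most heavily weighted feature of a
# fraudulent transaction until it is scored as legitimate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: 5 transaction features, label 1 = fraud.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)   # fraud driven mostly by feature 0

model = LogisticRegression().fit(X, y)

# A transaction the model currently flags as fraud.
tx = np.array([[2.0, 1.0, 0.0, 0.0, 0.0]])
print("before:", model.predict(tx))             # -> [1] (fraud)

# Attack: push the single most heavily weighted feature in the direction that
# lowers the fraud score, in small steps, until the prediction flips.
w = model.coef_[0]
target = np.argmax(np.abs(w))                   # the highly weighted feature
adv = tx.copy()
while model.predict(adv)[0] == 1:
    adv[0, target] -= 0.1 * np.sign(w[target])

print("after: ", model.predict(adv), "perturbation:", adv - tx)
```

A real engagement would use far more capable techniques, but the core idea, steering inputs along the directions the model weighs most heavily, is the same.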

AI red teaming is much like the well-known penetration test, but in this instance for models. It can take many forms, from attacking the models themselves to evaluating the systems underneath them and how they’re implemented. When it comes to understanding how your models fare under adversarial attack, the HiddenLayer ML Threat Operations team is uniquely positioned to provide invaluable insight. Using known tactics and techniques published in frameworks such as MITRE ATLAS, along with custom attack tooling, our team assesses the robustness of models against adversarial machine learning attacks.
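As one example of the kind of signal such an assessment might collect, the sketch below (an assumed workflow, not HiddenLayer’s actual tooling) probes a model purely through query access, estimating how often small random perturbations of an input flip its prediction. High flip rates point to brittle decision regions a red team could flag for deeper investigation.

```python
# Illustrative sketch (assumed workflow, not HiddenLayer's actual tooling):
# a crude black-box robustness probe that only needs query access to a model.
# It measures how often small random perturbations of an input flip the
# model's prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def flip_rate(predict, x, eps=0.25, trials=200, seed=0):
    """Fraction of random perturbations within +/- eps that change the label."""
    rng = np.random.default_rng(seed)
    base = predict(x.reshape(1, -1))[0]
    noise = rng.uniform(-eps, eps, size=(trials, x.size))
    flipped = predict(x + noise) != base
    return flipped.mean()

# Probe a handful of inputs; high flip rates suggest brittle decision regions.
for i in range(3):
    print(f"sample {i}: flip rate = {flip_rate(model.predict, X[i]):.2f}")
```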

HiddenLayer’s red teaming assessment uncovers weaknesses in model classification that adversaries could exploit for fraudulent activities without triggering detection. Armed with a prioritized list of identified exploits, our clients can channel their resources, spanning data science and cyber teams, toward the mitigation and remediation efforts with maximum impact. The result is an enhanced security posture for the entire platform without introducing additional friction for internal or external customers.

HiddenLayer AI Risk Assessment

When we think of attacking an AI system, the focal point is often the model that underpins it. This isn’t incorrect, of course, but the technology stack that supports the model deployment, and the model’s placement within the company’s broader business context, can be just as important, if not more so.

To help a company understand the security posture of its AI systems, we created the HiddenLayer AI Risk Assessment, a well-considered, comprehensive evaluation of risk factors from training to deployment and everything in between. We work closely with data science teams and cybersecurity stakeholders to discover and address vulnerabilities and security concerns, and we provide actionable insight into the current security posture. Regardless of organizational maturity, this evaluation has proved incredibly helpful for practitioners and decision-makers alike in understanding what they’ve been getting wrong and what they’ve been getting right.

For a more in-depth understanding of the potential security risks of AI models, check out our past blogs on the topic, such as “The Tactics and Techniques of Adversarial ML.”

Adversarial ML Training

Numerous attack vectors in the overall ML operations lifecycle are still largely unknown to those tasked with building and securing these systems. A traditional penetration testing or red team may run across machine learning systems or models during an engagement and not know what to do with them. Alternatively, data science teams building models for their organizations may not follow security best practices for AI hygiene. For both teams, learning about these attack vectors and how they can be exploited can uplift their skills and improve their output.

The Adversarial ML Training is a multi-day course alternating between instruction and interactive, hands-on lab assignments. Attendees build and train their own models to understand ML fundamentals before getting their hands dirty by attacking the models via inference-style attacks and injecting malware directly into the models themselves. Attendees leave with an understanding of how to safely apply these principles within their own organizations.
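To give a flavor of the malware-injection topic, here is a short, deliberately harmless sketch (a hypothetical lab-style example, not the actual course material) showing why Python pickle, still a common model serialization format, is dangerous: any object with a __reduce__ method can execute attacker-controlled code the moment the “model” file is loaded.

```python
# Illustrative, harmless sketch of malware injection into a serialized model:
# pickle executes attacker-controlled code on load, so a tampered "model" file
# can carry a payload alongside (or instead of) its weights.
import pickle

class PoisonedModel:
    """Stand-in for a trained model object that has been tampered with."""
    weights = [0.1, 0.2, 0.3]

    def __reduce__(self):
        # On unpickling, run arbitrary code (a harmless print here; real
        # malware could open a reverse shell or exfiltrate data).
        return (print, ("payload executed while loading the model!",))

# The "data scientist" saves the model file...
with open("model.pkl", "wb") as f:
    pickle.dump(PoisonedModel(), f)

# ...and anyone who loads it runs the payload before touching a single weight.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)   # prints the payload message
    # (the loaded object is whatever the payload returns, here None)
```

Loading only trusted artifacts, preferring safer serialization formats, and scanning model files before deserialization are common mitigations for this class of attack.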

Learn More

HiddenLayer believes that, as an industry, we can get ahead of securing AI. With decades of experience safeguarding our most critical technologies, the cybersecurity industry plays a pivotal role in shaping the solution. Securing your organization’s AI infrastructure does not have to be a daunting task. Our Professional Services team is here to show you how to test and secure your AI systems from end to end.

Learn more about how we can help you protect your advantage.