Introduction

In a rapidly evolving financial landscape, the integrity of AI-driven fraud detection systems is paramount. This case study explores how a leading global financial services company partnered with HiddenLayer to fortify its machine-learning models against potential adversarial threats. With over 50 million users and billions of transactions annually, the company faced the dual challenge of maintaining an optimal customer experience while combating sophisticated fraud. HiddenLayer’s AI Red Teaming initiative was crucial in identifying vulnerabilities and enhancing the security of their AI models, ensuring robust fraud detection without compromising user satisfaction.

Company Overview

A financial services company engaged the HiddenLayer Professional Services team to conduct a red team evaluation of machine learning models used to detect and intercept fraud during financial transactions. The primary purpose of the assessment was to identify weaknesses in their AI model classifications that adversaries could exploit to conduct fraudulent activities on the platform without triggering detection, resulting in millions of dollars of potential losses annually.  

Challenges

Hitting the Target: Ensuring an Optimal Customer Experience and Lowering Losses Amidst Rising Fraud Risks

With over 50 million users and more than 5 billion transactions facilitated annually, our customer grappled with the ongoing challenge of minimizing customer-experience friction while simultaneously combating fraud. Striking this delicate balance required cutting-edge AI and ML models to detect and intercept fraudulent transactions effectively. With these models at the core of their transaction operations, their commitment to stellar customer experiences and the need for advanced AI security led them to engage HiddenLayer for a comprehensive red teaming initiative.

Discovery and Selection of HiddenLayer

The company was referred to HiddenLayer by an existing customer and recognized HiddenLayer's deep domain expertise in cyber and data science, as well as its experience with automated adversarial attack tools. Additionally, HiddenLayer's flexible pricing model aligned with the company's needs, making HiddenLayer the clear choice for the red teaming engagement.

Key Selection Criteria

  • Deep Expertise: Proficiency across cyber and data science modalities.
  • Adversarial Attack Experience: Experience detecting and mitigating attacks from automated adversarial tooling.
  • Flexible Pricing: A pricing model flexible enough to fit the client's unique requirements.

Objectives of AI Red Teaming 

  • Identify Vulnerable Features: Pinpoint features within the models that are susceptible to an attacker's influence and could substantially shift classification outcomes toward legitimacy.
  • Create Adversarial Examples: Develop adversarial examples by modifying the fewest features in inputs classified as fraudulent, transitioning the classification from fraudulent to legitimate.
  • Improve Model Classification: Identify areas for improvement within the target models to enhance the accuracy of classifying fraudulent activities.
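The second objective above, finding the fewest feature changes that flip a "fraudulent" classification to "legitimate", can be illustrated with a greedy search. The toy scoring rule, feature names, and candidate values below are illustrative assumptions for the sketch, not the client's actual model or HiddenLayer's assessment tooling.

```python
# Hypothetical sketch: greedy minimal-feature adversarial search against
# a toy fraud scorer. All features and weights here are made up.

def fraud_score(tx):
    """Toy stand-in for a fraud model: higher score = more fraudulent."""
    return 0.6 * tx["amount_zscore"] + 0.3 * tx["new_device"] + 0.4 * tx["geo_mismatch"]

def is_fraud(tx, threshold=0.5):
    return fraud_score(tx) >= threshold

def minimal_feature_attack(tx, candidates, max_changes=3):
    """Greedily change the fewest features needed to flip a 'fraud'
    classification to 'legitimate'. Returns the modified transaction
    and the list of features that were changed."""
    tx = dict(tx)
    changed = []
    while is_fraud(tx) and len(changed) < max_changes:
        # Pick the single feature change that lowers the fraud score most.
        _, feature, value = min(
            (fraud_score({**tx, f: v}), f, v)
            for f, vals in candidates.items() if f not in changed
            for v in vals
        )
        tx[feature] = value
        changed.append(feature)
    return tx, changed

# A transaction the toy model flags as fraudulent, and the values an
# attacker could plausibly control for each feature.
tx = {"amount_zscore": 2.0, "new_device": 1, "geo_mismatch": 1}
candidates = {"amount_zscore": [0.0, 0.5], "new_device": [0], "geo_mismatch": [0]}
adv, changed = minimal_feature_attack(tx, candidates)
print(changed, is_fraud(adv))
```

A red team report built on this kind of search can rank features by how often they appear in successful minimal attacks, which is what makes the resulting exploit list actionable for data science teams.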

HiddenLayer’s Mitigation

Alongside existing security controls, HiddenLayer introduced inference-time monitoring of model inputs and outputs to detect targeted attacks against the models, enabling the customer to flag and block suspected adversarial abuse.

HiddenLayer’s AI Security (AISec) Platform, which includes AI Detection and Response for Gen AI (AIDR), provides real-time, scalable, and unobtrusive inference-time monitoring for all model types. AIDR can audit all existing models for adversarial abuse and prevent abuse on an ongoing basis. AIDR does not require access to the customer’s data or models, as all detections are performed using vectorized inputs and outputs.

AIDR provides protection against common adversarial techniques, including model extraction/theft, tampering, data poisoning/model injection, and inference attacks.
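To make the inference-time monitoring pattern concrete, the sketch below wraps a model and flags bursts of near-duplicate input vectors, a common signature of extraction or probing attacks. This is a minimal illustration under assumed heuristics and a made-up API; it is not AIDR's actual detection logic.

```python
import math

class InferenceMonitor:
    """Illustrative inference-time monitor: wraps a model and blocks
    requests whose input vectors look like systematic probing
    (many near-duplicate queries in a row)."""

    def __init__(self, model_fn, dup_threshold=0.01, max_dups=5):
        self.model_fn = model_fn
        self.dup_threshold = dup_threshold  # distance below which inputs count as duplicates
        self.max_dups = max_dups            # duplicates tolerated before blocking
        self.history = []
        self.flagged = []

    def _distance(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def predict(self, vector):
        # Count prior inputs that are nearly identical to this one.
        dups = sum(1 for past in self.history
                   if self._distance(vector, past) < self.dup_threshold)
        self.history.append(vector)
        if dups >= self.max_dups:
            self.flagged.append(vector)
            return None  # block suspected adversarial probing
        return self.model_fn(vector)

# Toy model plus a probing client that sends tiny perturbations of one input.
monitor = InferenceMonitor(lambda v: sum(v) > 1.0)
results = [monitor.predict([0.5 + i * 0.001, 0.5001 + i * 0.001])
           for i in range(10)]
```

In this run the first few probes are answered normally, and once the near-duplicate count crosses the threshold the monitor returns `None` instead of a prediction. A key property mirrored from the source text is that the monitor only sees input vectors, never the model internals or training data.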

Impact of HiddenLayer

The core purpose of HiddenLayer’s AI Red Teaming Assessment was to uncover weaknesses in model classification that adversaries could exploit for fraudulent activities without triggering detection. Now armed with a prioritized list of identified exploits, our client can channel their invaluable resources, involving data science and cyber teams, towards mitigation and remediation efforts with maximum impact. The result is an enhanced security posture for the entire platform without introducing additional friction for internal or external customers.

“HiddenLayer’s product has proven instrumental in fortifying our defenses, allowing us to address vulnerabilities effectively and elevate our overall security stance while maintaining a seamless experience for our users.”

Learn More

To better understand AI Red Teaming, read A Guide to AI Red Teaming and join our webinar on July 17th.