An Interview with Dan Klinedinst

Introduction

At HiddenLayer, we keep a close eye on everything in AI/ML security and are always on the lookout for the latest research, detailed analyses, and prescient thoughts from within the field. When Dan Klinedinst’s recently published book, ‘Shall We Play A Game? Analyzing Threats to Artificial Intelligence’, appeared on our radar, we knew we had to investigate.

Shall We Play A Game opens with an eerily human-like paragraph generated by a text generation model – we didn’t expect to see a reference to a ‘gigantic death spiral’, but here we are! What follows is a wide-ranging and well-considered exploration of the threats facing AI, written in an engaging and accessible manner. From GPU attacks and Generative Adversarial Networks to the abuse of financial AI models, cognitive bias, and beyond, Dan’s book offers a comprehensive introduction to the topic and should be considered essential reading for anyone looking to understand the world of adversarial machine learning.

We were fortunate enough to speak with Dan and ask his views on the state of the industry, the role that taxonomies, frameworks, and lawmakers can play in securing AI, and where we’re headed in the future – oh, and some sci-fi, too.

Q&A

Beyond reading your book, what other resources are available to someone starting to think about ML security?

The first source I’d like to call out is the AI Village at the annual DefCon conference (aivillage.org). They have talks, contests, and a year-round discussion on Discord. Second, a lot of the information on AI security is still found in academic papers. While researching the book, I found it useful to go beyond media reports and review the original sources. I couldn’t always follow the math, but I found the authors’ hypotheses and conclusions more actionable than secondhand summaries. MITRE is also starting to publish applied research on adversarial ML, such as the ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems) framework mentioned in the next question. Finally, Microsoft has published some excellent advice on threat modeling AI.

You mention NISTIR 8269, “A Taxonomy and Terminology of Adversarial Machine Learning.” There are other frameworks, such as MITRE ATLAS™. Are such frameworks helpful for existing security teams to start thinking about ML-specific security concerns?

These types of frameworks and models are useful for providing a structured approach to examine the security of an AI or ML system. However, it’s important to remember that these types of tools are very broad and can’t provide a risk assessment of specific systems. For example, a Denial of Service attack against a business analytics system is likely to have a much different impact than a Denial of Service on a self-driving bus. It’s also worth remembering that attackers don’t follow the rules of these frameworks and may well invent innovative classes of attacks that aren’t currently represented.

Traditional computer security incidents have evolved over many years – from no security to simple exploration, benign proof of concept, entertainment/chaos, damage/harm, and the organized criminal enterprises we see today. Do you think ML attacks will evolve in the same way?

I think they’ll evolve in different ways. For one thing, we’ll jump straight to the stage of attacking ML systems for financial damage, whether that’s through ransomware, fraud, or subversion of digital currency. Beyond that, attacks will have different goals than past attacks. Theft of data was the primary goal of attackers until recently, when they realized ransomware is more profitable and arguably easier. In other words, they’ve moved from attacking confidentiality to attacking availability. I can see attacks on ML systems changing targets again to focus on subverting integrity. It’s not clear yet what the impact will be if we cannot trust the answers we get from ML systems.

Where do you foresee the future target of ML attacks? Will they focus more on the algorithm, model implementation, or underlying hardware/software?

I see attacks on model implementations as being similar to the reverse engineering of proprietary systems today: widespread, but often a means to enable further attacks. Attacks on the algorithm will be more challenging but will potentially give attackers more value. (For an interesting but relatively understandable example of attacks on the algorithm, see this recent post.) The primary advantage of AI and ML systems is that they can learn, so an attacker’s main goal is to affect what and how they learn. All of that said, we still need to secure the underlying hardware and software! We have in no way mastered that component as an industry.

What defensive countermeasures can organizations adopt to help secure themselves from the most critical forms of AI attack?

Create threat models! This can be as simple as brainstorming possible vulnerabilities on a whiteboard or as complex as very detailed MBSE models or digital twins. Become familiar with techniques to make ML systems resistant to adversarial actions. For example, feature squeezing and feature denoising are methods for detecting violations of model input integrity (https://docs.microsoft.com/en-us/security/engineering/threat-modeling-aiml). Finally, focus on securing interfaces, just like you would in traditional-but-complex systems. If a classifier is created to differentiate between “dog” and “cat”, you should never accept the answer “giraffe”!
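To make those suggestions a little more concrete, here is a minimal Python sketch of the two ideas Dan mentions: feature squeezing (here, bit-depth reduction) as an input-integrity check, and strict validation of the classifier’s output space. The predict_proba callable, class list, and detection threshold are illustrative assumptions rather than part of any particular framework.

```python
import numpy as np

# Hypothetical two-class setup from Dan's dog/cat example; the predict_proba
# callable, class list, and threshold are illustrative assumptions, not a real API.
KNOWN_CLASSES = ["dog", "cat"]


def squeeze_bit_depth(image: np.ndarray, bits: int = 4) -> np.ndarray:
    """Feature squeezing via bit-depth reduction: coarsen pixel values (assumed
    to be floats in [0, 1]) so that fine-grained adversarial noise is rounded away."""
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels


def looks_adversarial(predict_proba, image: np.ndarray, threshold: float = 0.5) -> bool:
    """Input-integrity check: if the model's prediction changes sharply once the
    input is squeezed, the original input likely relied on subtle perturbations."""
    p_raw = predict_proba(image)
    p_squeezed = predict_proba(squeeze_bit_depth(image))
    return float(np.abs(p_raw - p_squeezed).sum()) > threshold


def validate_output(probabilities: np.ndarray) -> str:
    """Interface hardening: only ever return an answer the classifier was built
    to give; a dog/cat model should never be allowed to answer "giraffe"."""
    if probabilities.shape != (len(KNOWN_CLASSES),):
        raise ValueError("model returned an unexpected class space")
    return KNOWN_CLASSES[int(np.argmax(probabilities))]
```

In practice the squeezer and threshold would be tuned per model; the point is simply that disagreement between the model’s answers on the raw and squeezed input is a cheap signal of tampering, and that downstream code should reject any label outside the model’s intended class space.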

Currently, organizations are not required to disclose an attack on their ML systems/assets. How do you foresee tighter regulatory guidelines affecting the industry?

We’ve seen relatively little appetite for regulating cybersecurity at the national and international level. Outside of critical infrastructure, compliance tends to be more market-based, such as PCI and cyber insurance. I think regulation of AI is likely to come out of the regulatory bodies for specific industries rather than an overarching security policy framework. For example, financial lenders will have to prove that their models aren’t biased and are transparent enough that you can show exactly what transactions are being made. Attacks on ML systems might have to be reported in financial disclosures, if they’re material to a public company’s stock price. Medical systems will be subject to malpractice guidelines and autonomous vehicles will be liable for accidents. However, I don’t anticipate an “AI Security Act of 2028” or anything in most countries.

EU regulators recently proposed legislation that would require AI systems to meet certain transparency obligations. With the growing complexity of advanced neural networks, is explainable AI a viable way forward?

Explainable AI (XAI) is a necessary but insufficient control that will enable some of the regulatory requirements. However, I don’t think XAI alone is enough to convince users or regulators that AI is trustworthy. There will be some AI advances that cannot easily be explained, so creators of such systems need to establish trust based on other methods of transparency and attestation. I think of it as similar to how we trust humans – we can’t always understand their thought processes, but if their externally-observable actions are consistently trustworthy, we grant them more trust than if they are consistently wrong or dishonest. We already have ways to measure wrongness and dishonesty, from technical testing to courts of law.

And finally, are you a science fiction fan? As a total moonshot, how do you think the industry will look in 50 years compared to past and present science fiction writing? *cough* Battlestar Galactica *cough*

I’m a huge science fiction fan; my editor made me take a lot of sci-fi references out of my book because they were too obscure. Fifty years is a long time in this field. We could even have human-equivalent AI by then (although I personally doubt it will be that soon). I think in 50 years – or possibly much sooner – AI will be performing most of the functions that cybersecurity professionals do now: vulnerability analysis, validation & verification, intrusion detection and threat hunting, et cetera. The massive state space of interconnected global systems, combined with vast amounts of data from cheap sensors, will be far greater than what humans can mentally process in a usable timeframe. AIs will be competing with each other at high speed to attack and defend. These might be considered adversarial attacks, or they might just be considered how global competition works at that stage (think of the AIs and zaibatsus in early William Gibson novels). Humans in the industry will have to focus on higher-order concerns – algorithms, model robustness, the security of the information as opposed to the security of the computers, simulation/modeling, and accurate risk assessment. Oh, and don’t forget all the new technology that AI will probably enable – nanotech, biotech, mixed reality, quantum foo. I don’t lose sleep over our world becoming like those in the Matrix or Terminator movies; my concerns are more Ex Machina or Black Mirror.

Closing Notes

We hope you found this conversation as insightful as we did. By having these conversations and bringing them into the public sphere, we aim to raise awareness of the potential threats to AI/ML systems, their possible outcomes, and what we can do to defend against them. We’d like to thank Dan for his time and thoughtful answers, and we look forward to seeing his future work. For more information on Dan Klinedinst, or to grab yourself a copy of his book ‘Shall We Play A Game? Analyzing Threats to Artificial Intelligence’, be sure to check him out on Twitter or visit his website.

About Dan Klinedinst

Dan Klinedinst is an information security engineer focused on emerging technologies such as artificial intelligence, autonomous robots, and augmented / virtual reality. He is a former security engineer and researcher at Lawrence Berkeley National Laboratory, Carnegie Mellon University’s Software Engineering Institute, and the CERT Coordination Center. He currently works as a Distinguished Member of Technical Staff at General Dynamics Mission Systems, designing security architectures for large systems in the aerospace and defense industries. He has also designed and implemented numerous offensive security simulation environments and is the creator of the Gibson3D security visualization tool. His hobbies include travel, cooking, and the outdoors. He currently resides in Pittsburgh, PA.

About HiddenLayer

HiddenLayer helps enterprises safeguard the machine learning models behind their most important products with a comprehensive security platform. Only HiddenLayer offers turnkey AI/ML security that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded in March of 2022 by experienced security and ML professionals, HiddenLayer is based in Austin, Texas, and is backed by cybersecurity investment specialist firm Ten Eleven Ventures. For more information, visit www.hiddenlayer.com and follow us on LinkedIn or Twitter.