On July 26, 2023, the Securities and Exchange Commission (SEC) released its final rule on Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure. Organizations now have five months to craft and confirm a compliance plan before the new regulations go into effect in mid-December. The final rules revise the earlier proposals to streamline the disclosure requirements in several ways. But what exactly do these SEC regulations require you to disclose, and how much? And do they apply to your organization’s AI?

The Rules & The “So What?”

The new regulations require registrants to disclose any cybersecurity incident they determine to be material and to describe the material aspects of its nature, scope, and timing, as well as its material impact or reasonably likely material impact on the registrant, including its financial condition and results of operations. They also stipulate that “registrants must determine the materiality of an incident without unreasonable delay following discovery and, if the incident is determined material, file an Item 1.05 Form 8-K generally within four business days of such determination.” Also worth noting: “New Regulation S-K Item 106 will require registrants to describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats, as well as whether any risks from cybersecurity threats, including as a result of any previous cybersecurity incidents, have materially affected or are reasonably likely to materially affect the registrant.”

The word disclosure can be daunting. So what does “disclosing” really mean? In short, companies must disclose any incident that is material to the company: anything that affects what’s important to the company, its shareholders, or its clients. The allotted window leaves little time for dilly-dallying, as companies have roughly four business days to release this information. A company that fails to disclose within that time frame is subject to heavy fines, penalties, and potentially an investigation. Also noteworthy is that companies must now describe their process for mitigating risk. They must have a plan stating not only their cybersecurity measures but also the action they will take if a breach occurs. In reality, many companies are not ready to lift the hood and expose the cyber capabilities underneath, especially with regard to the new threat landscape of the quickly growing AI sector.

In Regards to AI

These new rules mean that companies are now liable to report any adversarial attacks on their AI models. Companies need a process for mitigating risk not only before models are deployed in their systems, but also for monitoring and mitigating risk while those models are live. Despite AI’s newfound stardom, it remains wildly undersecured today. Companies are waiting for cybersecurity to catch up to AI instead of creating and executing a real, tangible security plan to protect their AI. The truth is, most companies are underprepared to showcase a security plan for their models. Many companies today are using AI to create material benefit. But wherever a company creates material benefit, it also creates the risk of material damage, especially if the model in use is not secure. And if the model is not secure (see Figure 1.0 below), it is not trustworthy. These SEC rules are saying we can no longer wait for cybersecurity to play catch-up: the time to secure your AI models was yesterday.

Figure 1.0

Looking at Figure 1.1 below, we can see that 76% of ML attacks have had a physical-world impact, meaning that 76% of ML attacks affected the materiality of a company, its clients, and/or our society. This number is staggering. It is no surprise, looking at the data, that Senate Majority Leader Chuck Schumer is taking a step in the right direction by holding a series of AI “Insight Forums” to “lay down the foundation for AI policy.” These forums are to be held in September and October, “in place of congressional hearings that focus on senators’ questions, which Schumer said would not work for AI’s complex issues around finding a path towards AI legislation and regulation.” Due to the complexity of the issues being discussed and the vast amount of public noise around them, the “Senate meeting with top AI leaders will be ‘closed-door,’ no press or public allowed.” These forums underscore the government’s efforts to accelerate AI adoption in a secure manner, which is laudable, as the US should aim to secure its leadership position as we enter a new digital era.

Figure 1.1

Is It Enough?

While our government is moving in the right direction, there’s still more to be done. Looking at this data, we see that no one is as secure as they think they are. These attacks aren’t easy to brush under the rug as though they had no impact: a majority of attacks, 76%, directly impacted society in some way. And with the SEC rules going into effect, all of these attacks would now have to be disclosed. Is your company ready? What are you doing now to secure your AI processes and deployed models?

The truth is, there is still a lot of gray area surrounding security for AI. But it is no longer an issue that can be placed on the back burner to be answered later. As we see in this data, and as we understand from these SEC rules, the time for securing our models was yesterday.

Where We Go From Here

HiddenLayer believes that, as an industry, we can get ahead of securing AI, and with decades of experience safeguarding our most critical technologies, the cybersecurity industry plays a pivotal role in shaping the solution. HiddenLayer’s MLSec Platform is a suite of products that provides comprehensive artificial intelligence security, protecting enterprise machine learning models against adversarial machine learning attacks, vulnerabilities, and malicious code injections. HiddenLayer’s patent-pending solution, MLDR, provides a noninvasive, software-based platform that monitors the inputs and outputs of your machine learning algorithms for anomalous activity consistent with adversarial ML attack techniques. Its detection and response capabilities support efforts to disclose in a timely manner.
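To make the monitoring idea concrete, here is a minimal, hypothetical sketch of one detection heuristic such a system might use: flagging bursts of near-duplicate queries, a pattern typical of iterative adversarial probing (e.g., an attacker perturbing an input step by step to find a decision boundary). This is an illustrative example only, not HiddenLayer’s actual MLDR implementation; the class name, thresholds, and logic are all assumptions made for the sketch.

```python
import math
from collections import deque

class InputAnomalyMonitor:
    """Illustrative (hypothetical) monitor that flags bursts of
    near-duplicate inference requests, a pattern consistent with
    iterative adversarial-ML probing. Not a real MLDR API."""

    def __init__(self, window=50, distance_threshold=0.05, burst_threshold=10):
        self.recent = deque(maxlen=window)        # rolling window of recent inputs
        self.distance_threshold = distance_threshold  # "near-duplicate" radius
        self.burst_threshold = burst_threshold        # neighbors needed to alert

    @staticmethod
    def _l2(a, b):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def observe(self, features):
        """Record one inference request; return True if it looks anomalous."""
        neighbors = sum(
            1 for past in self.recent
            if self._l2(past, features) < self.distance_threshold
        )
        self.recent.append(tuple(features))
        # Many near-identical queries inside a short window suggests an
        # attacker iteratively perturbing an input, not organic traffic.
        return neighbors >= self.burst_threshold

# Example: varied benign traffic passes; a tight cluster of probes trips the alert.
monitor = InputAnomalyMonitor()
monitor.observe([0.1, 0.2])                      # benign, not flagged
flagged = False
for i in range(20):                              # 20 nearly identical probes
    flagged = monitor.observe([0.5 + i * 1e-4, 0.5]) or flagged
print(flagged)                                   # the probe burst was flagged
```

A production system would of course detect far more than repeated queries (output-distribution drift, known attack signatures, extraction patterns), but the shape is the same: observe every request against recent history and raise a disclosure-relevant event when behavior deviates from the norm.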

“Disclose” does not have to be a daunting word. It does not have to make companies nervous or uneasy. Companies can feel secure in their cybersecurity efforts and can trust their ML models and AI processes. By implementing the right risk mitigation plan and covering all of their bases, companies can step into this new digital age confident in the security and protection of their technological assets.