How many companies intentionally refuse to use AI to get their work done faster and more efficiently? Probably none: the advantages of AI are too great to deny.
The benefits AI models offer organizations, especially for optimizing critical operations and outputs, are substantial. However, generative AI also comes with risk. According to the IBM Institute for Business Value, 96% of executives say adopting generative AI makes a security breach likely in their organization within the next three years.
CISA Director Jen Easterly said, “We don’t have a cyber problem, we have a technology and culture problem. Because at the end of the day, we have allowed speed to market and features to really put safety and security in the backseat.” And no place in technology reveals the obsession with speed to market more than generative AI.
AI training sets ingest massive amounts of valuable and sensitive data, which makes AI models a juicy attack target. Organizations cannot afford to bring unsecured AI into their environments, but they can’t do without the technology either.
To bridge the gap between the need for AI and its inherent risks, it’s imperative to establish a solid framework to direct AI security and model use. To help meet this need, IBM recently announced its Framework for Securing Generative AI. Let’s see how a well-developed framework can help you establish solid AI cybersecurity.
Securing the AI pipeline
A generative AI framework should be designed to help customers, partners and organizations understand the likeliest attacks on AI. From there, defensive approaches can be prioritized to quickly secure generative AI initiatives.
Securing the AI pipeline involves five areas of action:
- Securing the data: How data is collected and handled
- Securing the model: AI model development and training
- Securing the usage: AI model inference and live use
- Securing AI model infrastructure
- Establishing sound AI governance
Now, let’s see how each area is oriented to address AI security threats.
1. Secure the AI data
Hungry AI models consume massive amounts of data, which data scientists, engineers and developers will access for development purposes. However, developers might not have security high on their list of priorities. If mishandled, your sensitive data and critical intellectual property (IP) could end up exposed.
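One baseline control is screening training data for obvious sensitive values before it reaches the pipeline. The sketch below is a minimal illustration using a few regex patterns; real programs rely on dedicated data discovery and classification tooling, and the pattern set here is an assumption, not a complete catalog.

```python
import re

# Illustrative patterns only -- production deployments use dedicated
# classification tools, not a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> tuple[str, dict]:
    """Replace detected PII with typed placeholders and count hits."""
    counts = {}
    for label, pattern in PII_PATTERNS.items():
        text, n = pattern.subn(f"[{label.upper()}]", text)
        if n:
            counts[label] = n
    return text, counts
```

Running records through a filter like this before ingestion at least surfaces how much sensitive material a data set contains, even if final classification is left to stronger tooling.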
In attacks on AI models, exfiltration of the underlying data sets is likely to be one of the most common scenarios. Security fundamentals, such as data discovery and classification, encryption and access management, are therefore the first line of defense for these data sets.
2. Secure the AI model
When developing AI applications, data scientists frequently use pre-existing, freely available machine learning (ML) models sourced from online repositories. However, as with any open-source component, security is frequently not built in.
Every organization must consider the AI security risks versus the benefits of accelerated model development. However, without proper AI model security, the downside risk can be significant. Remember, hackers have access to online repositories as well, and backdoors or malware can be injected into open-source models. Any organization that downloads an infected model is wide open to attack.
Furthermore, API-enabled large language models (LLMs) present a similar risk. Hackers can target API interfaces to access and exploit data being transported across the APIs. And LLM agents or plug-ins with excessive permissions further increase the risk for compromise.
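One concrete version of the supply-chain risk above: many model files are serialized with Python's pickle format, which can trigger arbitrary imports and code execution on load. A minimal, hedged sketch of a static scan using the standard `pickletools` module (the allowlist is illustrative, and real model scanners cover far more than the `GLOBAL` opcode):

```python
import pickletools

# Illustrative allowlist; a real scanner also handles STACK_GLOBAL,
# REDUCE payloads and other opcodes, with much broader policies.
SAFE_MODULES = {"collections", "numpy", "torch"}

def suspicious_imports(payload: bytes) -> list[str]:
    """Return module.name references a pickle would import on load,
    excluding allowlisted modules -- without ever unpickling it."""
    found = []
    for opcode, arg, _pos in pickletools.genops(payload):
        if opcode.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            if module.split(".")[0] not in SAFE_MODULES:
                found.append(f"{module}.{name}")
    return found
```

Because `pickletools.genops` only parses opcodes, the file is inspected without being deserialized, so a booby-trapped model never gets the chance to run.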
To secure AI models, organizations should scan third-party and open-source models for backdoors and malware before use, harden the API interfaces that expose LLMs and enforce least-privilege permissions for LLM agents and plug-ins.
3. Secure the AI usage
When AI models first became widely available, waves of users rushed to test the platforms. It wasn’t long before hackers were able to trick the models into ignoring guardrails and generating biased, false or even dangerous responses. All this can lead to reputational damage and increase the risk of costly legal headaches.
Attackers can also attempt to analyze input/output pairs and train a surrogate model to mimic the behavior of your organization’s AI model. This means the enterprise can lose its competitive edge. Finally, AI models are also vulnerable to denial of service attacks, where attackers overwhelm the LLM with inputs that degrade the quality of service and ramp up resource use.
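The denial-of-service risk above is commonly mitigated with per-client rate limiting at the inference gateway. A minimal token-bucket sketch, assuming illustrative rates; production gateways typically meter by prompt tokens rather than request count:

```python
import time

class TokenBucket:
    """Per-client token bucket throttle (hedged sketch, not a product API)."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # burst ceiling
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Charge `cost` tokens if available; heavier prompts can cost more."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Charging a cost proportional to prompt length means a flood of oversized inputs exhausts the attacker's budget quickly while normal users are unaffected.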
Best practices for AI model usage security include:
- Monitoring for prompt injections
- Monitoring for outputs containing sensitive data or inappropriate content
- Detecting and responding to data poisoning, model evasion and model extraction
- Deploying machine learning detection and response (MLDR), which can be integrated into security operations solutions such as IBM Security® QRadar®, enabling teams to deny access and quarantine or disconnect compromised models
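As a rough illustration of the first two monitoring practices, even a keyword-based screen can flag blatant prompt injections and sensitive output. This is a hedged sketch only; the patterns are illustrative assumptions, and MLDR products rely on trained classifiers rather than keyword lists:

```python
import re

# Illustrative heuristics only -- real detection uses trained classifiers.
INJECTION_MARKERS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the|your) system prompt",
    r"you are now\b",
]
SENSITIVE_OUTPUT = [
    r"\b\d{3}-\d{2}-\d{4}\b",                  # SSN-like numbers
    r"-----BEGIN [A-Z ]*PRIVATE KEY-----",     # leaked key material
]

def flag_prompt(prompt: str) -> bool:
    """Flag prompts containing common injection phrasing."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_MARKERS)

def flag_output(text: str) -> bool:
    """Flag model outputs containing sensitive-looking data."""
    return any(re.search(p, text) for p in SENSITIVE_OUTPUT)
```

Flagged interactions would feed the response side of MLDR: logging, blocking or quarantining the offending session.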
4. Secure the infrastructure
A secure infrastructure must underpin any solid AI cybersecurity strategy. Strengthening network security, refining access control, implementing robust data encryption and deploying vigilant intrusion detection and prevention systems around AI environments are all critical for securing infrastructure that supports AI. Additionally, allocating resources towards innovative security solutions tailored for safeguarding AI assets should be a priority.
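One small example of such an infrastructure control is verifying the integrity of AI artifacts before they are loaded. A minimal sketch, assuming a digest manifest distributed and signed out of band (the manifest format here is hypothetical):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify(artifacts: dict[str, bytes], manifest: dict[str, str]) -> list[str]:
    """Return names of artifacts whose digest does not match the manifest.
    An empty list means every artifact checks out."""
    return [name for name, blob in artifacts.items()
            if manifest.get(name) != sha256_of(blob)]
```

Refusing to load any artifact that fails the check turns silent tampering in storage or transit into a loud, actionable alert.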
5. Establish AI governance
Artificial intelligence governance entails the guardrails that ensure AI tools and systems are and remain safe and ethical. It establishes the frameworks, rules and standards that direct AI research, development and application to ensure safety, fairness and respect for human rights.
IBM is an industry leader in AI governance, as shown by its presentation of the IBM Framework for Securing Generative AI. As entities continue to give AI more business process and decision-making responsibility, AI model behavior must be kept in check, monitoring for fairness, bias and drift over time. Whether induced or not, a model that diverges from what it was originally designed to do can introduce significant risk.
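Monitoring for drift, as described above, is often done by comparing the distribution of live model outputs against a baseline. A minimal sketch using the Population Stability Index, one common drift statistic; the thresholds in the docstring are a widely used rule of thumb, not a standard:

```python
import math
from collections import Counter

def psi(baseline: list[str], current: list[str], eps: float = 1e-4) -> float:
    """Population Stability Index over categorical model outputs.

    Illustrative rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift worth investigating.
    """
    cats = set(baseline) | set(current)
    b, c = Counter(baseline), Counter(current)
    total_b, total_c = len(baseline), len(current)
    score = 0.0
    for cat in cats:
        pb = max(b[cat] / total_b, eps)  # eps avoids log(0) for unseen cats
        pc = max(c[cat] / total_c, eps)
        score += (pc - pb) * math.log(pc / pb)
    return score
```

Scheduled against a window of recent decisions, a score creeping past the drift threshold is exactly the kind of governance signal that should trigger a fairness and bias review before the model diverges further from its original design.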