
Navigating the ethics of AI in cybersecurity


Even if we’re not always consciously aware of it, artificial intelligence is now all around us. We’re already used to personalized recommendation systems in e-commerce, customer service chatbots powered by conversational AI and a whole lot more. In the realm of information security, we’ve already been relying on AI-powered spam filters for years to protect us from malicious emails.

Those are all well-established use cases. However, since the meteoric rise of generative AI in the last few years, machines have become capable of so much more. From threat detection to incident response automation to testing employee awareness through simulated phishing emails, the AI opportunity in cybersecurity is indisputable.

But with any new opportunity come new risks. Threat actors are now using AI to launch ever more convincing phishing attacks at a scale that wasn’t possible before. To keep ahead of these threats, those on the defensive lines also need AI, but its use must be transparent and grounded in ethics to avoid stepping into the realm of gray-hat tactics.

Now is the time for information security leaders to adopt responsible AI strategies.

Balancing privacy and safety in AI-powered security tools

Crime is a human problem, and cyber crime is no different. Technology, including generative AI, is simply another tool in an attacker’s arsenal. Legitimate companies train their AI models on vast swaths of data scraped from the internet. Not only are these models often trained on the creative efforts of millions of real people — there’s also a chance of them hoovering up personal information that’s ended up in the public domain, intentionally or unintentionally. As a result, some of the biggest AI model developers are now facing lawsuits, while the industry at large faces growing attention from regulators.

While threat actors care little for AI ethics, legitimate companies can unwittingly end up mishandling personal data in much the same way. Web-scraping tools, for instance, may be used to collect training data for a model that detects phishing content. However, these tools might not distinguish between personal and anonymized information, especially in the case of image content. Open-source data sets like LAION for images or The Pile for text have a similar problem. For example, in 2022, a Californian artist found that private medical photos taken by her doctor had ended up in the LAION-5B dataset used to train the popular open-source image generator Stable Diffusion.

There’s no denying that the careless development of cybersecurity-focused AI models can create greater risk than not using AI at all. To prevent that from happening, security solution developers must maintain the highest standards of data quality and privacy, especially when it comes to anonymizing or safeguarding confidential information. Laws like Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), though written before the rise of generative AI, serve as valuable guidelines for shaping ethical AI strategies.
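To make the anonymization point concrete, here is a minimal Python sketch, with invented patterns and function names, of screening scraped text for obvious personal identifiers before it enters a training corpus. A real pipeline would lean on dedicated PII-detection tooling and human review rather than regex alone.

```python
import re

# Naive patterns for common identifiers; illustrative only, not production-grade.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace likely personal identifiers with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

def build_training_corpus(scraped_documents):
    """Screen scraped documents before they are added to a training set."""
    return [redact_pii(doc) for doc in scraped_documents]

if __name__ == "__main__":
    sample = "Contact jane.doe@example.com or call +1 (555) 010-7788."
    print(redact_pii(sample))
```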


An emphasis on privacy

Companies were using machine learning to detect security threats and vulnerabilities long before the rise of generative AI. Systems powered by natural language processing (NLP), behavioral and sentiment analytics and deep learning are all well established in these use cases. But they, too, present ethical conundrums where privacy and security can become competing priorities.

For example, consider a company that uses AI to monitor employee browsing histories to detect insider threats. While this enhances security, it might also involve capturing personal browsing information — such as medical searches or financial transactions — that employees expect to stay private.
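One way to soften that tension is to strip clearly personal categories from the telemetry before any insider-threat analytics run. The sketch below assumes browsing events already carry a category label from an upstream classifier; the category names and data structure are purely illustrative.

```python
from dataclasses import dataclass

# Categories treated as private by policy; illustrative, not exhaustive.
EXCLUDED_CATEGORIES = {"health", "personal_finance", "religion", "dating"}

@dataclass
class BrowsingEvent:
    user_id: str
    url: str
    category: str  # assumed to be assigned by an upstream classifier

def filter_for_threat_analysis(events):
    """Keep only events the monitoring policy allows analysts to see."""
    return [e for e in events if e.category not in EXCLUDED_CATEGORIES]

events = [
    BrowsingEvent("u1", "https://intranet.example/finance-report", "work"),
    BrowsingEvent("u1", "https://clinic.example/appointments", "health"),
]
print([e.url for e in filter_for_threat_analysis(events)])
```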

Privacy is also a concern in physical security. For instance, AI-driven fingerprint recognition might prevent unauthorized access to sensitive sites or devices, but it also involves collecting highly sensitive biometric data, which, if compromised, could cause long-lasting problems for the individuals concerned. After all, if your fingerprint data is hacked, you can’t exactly get a new finger. That’s why it’s imperative that biometric systems are kept under maximum security and backed up with responsible data retention policies.
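Part of a responsible retention policy can be automated. The short sketch below assumes biometric template records carry a timezone-aware enrollment timestamp and uses an illustrative 90-day window; the actual window should follow legal and organizational requirements.

```python
from datetime import datetime, timedelta, timezone

RETENTION_WINDOW = timedelta(days=90)  # illustrative policy, not a recommendation

def purge_expired_templates(records, now=None):
    """Drop biometric template records older than the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["enrolled_at"] <= RETENTION_WINDOW]
```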

Keeping humans in the loop for accountability in decision-making

Perhaps the most important thing to remember about AI is that, just like people, it can misstep in many different ways. One of the central tasks of adopting an ethical AI strategy is TEVV, or testing, evaluation, validation and verification. That’s especially the case in such a mission-critical area as cybersecurity.

Many of the risks that come with AI manifest during the development process. For instance, the training data must undergo thorough TEVV for quality assurance, as well as to ensure that it hasn’t been manipulated. This is vital because data poisoning has become one of the leading attack vectors deployed by more sophisticated cyber criminals.
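Part of that TEVV work can be automated. The sketch below, built around a hypothetical hash manifest, verifies that training files still match their recorded checksums and flags any label whose share of the dataset has shifted unexpectedly between versions, two simple signals that data may have been tampered with.

```python
import hashlib
import json
from collections import Counter
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Hash a training file so later tampering can be detected."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the names of files whose hashes no longer match the manifest."""
    manifest = json.loads(manifest_path.read_text())
    return [
        name for name, expected in manifest["hashes"].items()
        if file_sha256(Path(name)) != expected
    ]

def label_shift(old_labels, new_labels, tolerance=0.05):
    """Flag labels whose share of the dataset moved more than `tolerance`."""
    old_counts, new_counts = Counter(old_labels), Counter(new_labels)
    old_total = sum(old_counts.values()) or 1
    new_total = sum(new_counts.values()) or 1
    return {
        label for label in set(old_counts) | set(new_counts)
        if abs(old_counts[label] / old_total - new_counts[label] / new_total) > tolerance
    }
```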

Another issue inherent to AI, just as it is to people, is bias and fairness. For example, an AI tool used to flag malicious emails might target legitimate emails because they use vernacular commonly associated with a specific cultural group. The result is unfair profiling of that group and the risk of unjust actions being taken against its members.
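A basic fairness audit can surface this kind of skew before deployment. The sketch below assumes each evaluated email carries a (hypothetical) group label alongside the model’s prediction and the ground truth, and compares false-positive rates across groups.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute per-group false-positive rates for a mail-flagging model.

    Each record is (group, predicted_malicious, actually_malicious).
    """
    fp = defaultdict(int)   # legitimate emails wrongly flagged, per group
    neg = defaultdict(int)  # all legitimate emails seen, per group
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
]
print(false_positive_rates(records))  # e.g. {'group_a': 0.5, 'group_b': 0.0}
```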

The purpose of AI is to augment human intelligence, not to replace it. Machines can’t be held accountable if something goes wrong. It’s important to remember that AI does what humans train it to do, which means it inherits human biases and flawed decision-making processes. The “black-box” nature of many AI models also makes it notoriously difficult to identify the root causes of such issues, simply because end users are given no insight into how the AI reaches its decisions. These models lack the explainability that is critical for achieving transparency and accountability in AI-driven decision-making.
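Post-hoc explainability tools can restore some of that insight. The sketch below trains a toy phishing classifier on synthetic data and uses scikit-learn’s permutation importance to show which (invented) input features most influence its decisions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Invented features for a toy phishing classifier.
feature_names = ["num_links", "has_urgent_words", "sender_domain_age_days", "num_attachments"]

rng = np.random.default_rng(0)
X = rng.random((500, len(feature_names)))
# Synthetic labels loosely tied to the first two features, for demonstration only.
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling them hurts model accuracy.
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```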

Keep human interests central to AI development

Whether developing or engaging with AI — in cybersecurity or any other context — it’s essential to keep humans in the loop throughout the process. Training data must be regularly audited by diverse and inclusive teams and refined to reduce bias and misinformation. While people themselves are prone to the same problems, continuous supervision and the ability to explain how AI draws the conclusions it does can greatly mitigate these risks.

On the other hand, simply treating AI as a shortcut and a human replacement inevitably results in AI evolving on its own, trained on its own outputs to the point where it only amplifies its own shortcomings, a concept known as AI drift.

The human role in safeguarding AI and being accountable for its adoption and usage can’t be overstated. That’s why, instead of focusing on AI as a way to reduce headcount and save money, companies should invest any savings in retraining and transitioning their teams into new AI-adjacent roles. That means all information security professionals must put ethical AI usage (and thus people) first.
