Ethical AI – a challenge for security applications?


Rohit Mehta | Updated: Wednesday, May 25, 2022, 11:11 PM IST
Photo: Pexels

Recently, the movement for Ethical AI has been gaining steady momentum among academic researchers, social activists, and lawyers. Ethical AI is the development and use of artificial intelligence systems in a manner that is safe, responsible, and non-discriminatory. An organization that uses AI ethically treats user data with respect, collects, stores, and uses that data in accordance with user consent, and ensures that its models do not make biased decisions. Seemingly intelligent systems have been known to discriminate in the past; the AI system Amazon used for hiring decisions, for example, was found to systematically discriminate against women.

Ethical AI is built around the core principles of fairness, transparency, accountability, and consent. These principles make sense for the most part: they put users in control of their data, guard against discrimination, and, through audits and accountability, ensure that companies follow best practices and legal regulations. However, ethical AI might not always be the best course of action, especially in cases where the AI is being used for cyber security.

AI for security includes the use of machine learning and data science for problems such as fraud detection, spam filtering, malware analysis, and hate speech detection. It differs from other applications of AI (like making hiring decisions) for a multitude of reasons, but principally because of the potential cost of an incorrect decision, which can be significant in terms of loss of money, data, or sometimes even life.
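To make this concrete, here is a minimal sketch of what one such system, a spam filter, might look like. The tiny training set and the model choice (a naive Bayes classifier over bag-of-words features, via scikit-learn) are illustrative assumptions, not a description of any production system.

```python
# Minimal spam-filter sketch: bag-of-words features + naive Bayes.
# The training data below is a toy, invented sample.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Win a free prize now, click here",    # spam
    "Limited offer, claim your reward",    # spam
    "Meeting moved to 3pm tomorrow",       # ham
    "Can you review the attached report",  # ham
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

model = MultinomialNB()
model.fit(X, labels)

test = vectorizer.transform(["Claim your free reward now"])
print(model.predict(test))  # expected: [1], i.e. spam
```

A real deployment would use far richer features and models, but the shape is the same: learn from labelled examples, then score incoming traffic.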

AI security researchers from the University of California, Berkeley, Jessica Newman and Rajvardhan Oak, have published a framework that outlines key obstacles for ethical AI in four categories, along with the security-related challenges in each. The issues they outline give us a foundation for analyzing ethical adaptations of AI security applications in four key areas: design, process, use, and impact.

The first category they describe is design. Most security applications depend on security through obscurity, meaning that the exact way in which data will be used cannot be revealed. As a result, a fully transparent design is not possible.

If companies were to reveal their algorithms and data, it would create problems in the process, the second category of challenges. As Newman and Oak note, AI systems are susceptible to adversarial attacks, which can have catastrophic consequences. If the features used by a model are open to audit (something that ethical AI may recommend), it becomes far easier for an attacker to construct malicious adversarial data points; a sketch of such an attack follows below.

The third category is use. While ethical AI may advocate for releasing models publicly, the technology in the wrong hands is disastrous. According to Newman and Oak, open-sourcing AI can lead to it being used to develop misleading news articles, impersonate others online, automate the production of abusive content, and automate phishing. They also raised concerns about deep fakes, and we can now see that their apprehensions were justified: with the technology to create deep fakes so easily accessible, Russian actors created fake videos of the Ukrainian president accepting defeat and ordering his soldiers to retreat.

The final category concerns impact. AI-based security applications have the potential to cause visible and rapid impact; an intelligent anomaly detection model can save millions of dollars by detecting financial fraud, or identify child sexual abuse material being circulated on the Internet.
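Here is the promised sketch of why auditability can backfire. It assumes a deliberately simple setting: a linear classifier whose weights have been disclosed through an audit. The weights, the sample, and the perturbation budget are all invented for illustration; real evasion attacks follow the same gradient-guided logic against far more complex models.

```python
# Sketch of an evasion attack against a *known* linear model.
# All numbers here are invented for illustration.
import numpy as np

# Suppose an audit revealed the classifier's weights and bias:
# score = w . x + b, and a sample is flagged when score > 0.
w = np.array([1.5, -0.5, 2.0, 0.8])
b = -1.0

x = np.array([1.0, 0.2, 0.9, 0.5])   # a malicious sample that gets flagged
print("original score:", w @ x + b)  # 2.6 > 0, flagged

# Knowing w, the attacker nudges each feature against the sign of its
# weight (the gradient of the score), within a small budget eps.
eps = 0.6
x_adv = x - eps * np.sign(w)
print("adversarial score:", w @ x_adv + b)  # -0.28 < 0, evades detection
```

Without access to the weights, the attacker would have to probe the system blindly; with them, evasion reduces to a few lines of arithmetic. This is the tension between auditability and security that Newman and Oak describe.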

This makes one wonder: do the principles of ethical AI really apply when it comes to cyber security? As Newman and Oak's research shows, putting those ethical principles into practice would cause several fallouts. Is the improved transparency and accountability really worth the catastrophic consequences? To put it in perspective, which would you prefer: that your bank account remains safe, or that you (and attackers!) get to know the details of how the fraud detection algorithm works?

To resolve this tradeoff, we need a different paradigm for AI ethics when it comes to security applications. Until then, it is better to be safe than sorry, and better to be non-transparent and non-accountable than ethical.
