In previous blog post, there was an introduction to backdoor attack and its various forms. In this post, I will provide the basic difference between the two forms of attacks using a single example so as to understand the difference in a more precise manner and I will finally provide…
Category: AI attacks
Reviewing Prompt Injection and GPT-3
Recently, AI researcher Simon Willison discovered a new-yet-familiar kind of attack on OpenAI’s GPT-3. The attack dubbed as prompt injection attack has taken the internet by storm over the last couple of weeks highlighting how vulnerable GPT-3 is to this attack. This review article gives a brief overview on GPT-3,…
Machine “Un”learning
With increasing concern for data privacy, there have been several measures taken up to make AI applications privacy friendly. Of many such measures, the most commonly found and practiced method is Federated Learning. While an entire blog post will be dedicated to know how it works and its current application,…
Backdoor: The Undercover Agent
As I was reading about backdoors sometime back, I could relate them to undercover agents. But much before getting to that, let’s see what backdoors are. A Backdoor in the world of internet and computerized systems, is like a stealthy / secret door that allows a hacker to get into…
Explainability vs. Confidentiality: A Conundrum
Ever since AI models have rendered biased results and have caused a major deal of dissatisfaction, panic, chaos, and insecurities, “Explainability” has become the buzz word. Indeed it’s genuine and a “Must-have” for an AI based product. The user has the right to question, “Why?” and “How?”. But how much…
Generative Adversarial Networks (GAN): The Devil’s Advocate
AI is fueled with abundant and qualitative data. But deriving such vast amount from real resources can be quite challenging. Not only because resources are limited, but also the privacy factor which at present is a major security requirement to be complied with, by AI powered systems. In this trade-off…
Model Stealing: Show me “Everything” you got!
Model Stealing Attack (Ref: Machine Learning Based Cyber Attacks Targeting on Controlled Information: A Survey, Miao et al.) By now you must have realised how Model Stealing attack is different from Inference attack. While Inference attack focuses on extracting training data information and intends to rebuild a training dataset, model…
Inference Attack: Show Me What You Got!
Inference Attack (Ref: MEMBERSHIP INFERENCE ATTACKS AGAINST MACHINE LEARNING MODELSReza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov (2017)Presented by Christabella Irwanto) In previous blog entries, we had a basic understanding of what data poisoning attack is, what does Evasion attack do, and how are data poisoning and Evasion attacks…
Evasion Attack: Fooling AI Model
In an earlier blog, we had a fair knowledge about data poisoning, wherein the adversary is able to make changes to the training data, filling it with corrupt information so as to malign the AI algorithm such that it is trained according to malicious information to render a corrupt, biased…
Data Poisoning: A Catch-22 Situation
What is Data Poisoning? If you all remember a famous case of data bias issue, wherein Google Photos labeled a picture of African-American couple as “Gorillas”, then you know what I am talking about. ML models which are the subset of AI, are specifically susceptible to such data poisoning attacks….