Threat Landscape of AI Systems
Navigating Security Threats and Defenses in AI Systems
Course Description
Artificial intelligence (AI) systems are increasingly integrated into critical industries, from healthcare to finance, yet they face growing security challenges from adversarial attacks and vulnerabilities. Threat Landscape of AI Systems is an in-depth exploration of the security threats facing modern AI systems, including evasion, poisoning, and model inversion attacks. This course series gives learners the knowledge and tools to understand and defend AI systems against a broad range of adversarial exploits.
Participants will delve into:
Evasion Attacks: How subtle input manipulations deceive AI systems and cause misclassifications (see the first sketch after this list).
Poisoning Attacks: How attackers corrupt training data to manipulate model behavior and degrade accuracy (see the second sketch after this list).
Model Inversion Attacks: How sensitive input data can be reconstructed from a model’s outputs, leading to privacy breaches (see the third sketch after this list).
Other Attack Vectors: Including data extraction, membership inference, and backdoor attacks.
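To make evasion concrete, here is a minimal sketch of the fast gradient sign method (FGSM) against a toy logistic regression model. The weights, data, and epsilon are hypothetical; real attacks apply the same idea to the input gradients of trained deep networks.

```python
# Minimal FGSM-style evasion sketch against a toy logistic regression
# model. All weights, inputs, and epsilon below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained model: p(y=1|x) = sigmoid(w @ x + b).
w = rng.normal(size=8)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# A benign input with true label 1.
x = rng.normal(size=8)
y_true = 1.0

# Gradient of the cross-entropy loss w.r.t. the input x.
# For logistic regression this is (p - y) * w.
grad_x = (predict(x) - y_true) * w

# FGSM: step in the direction of the sign of the loss gradient,
# bounded by a small epsilon so the perturbation stays subtle.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean score:       {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")  # pushed toward 0
```

Even this toy example shows the core idea: a bounded, gradient-aligned perturbation can push a model's output across a decision boundary while the input barely changes.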
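For poisoning, the next sketch flips a fraction of one class's training labels on a synthetic scikit-learn task and compares test accuracy before and after; the dataset, model, and 40% poisoning rate are illustrative assumptions.

```python
# Minimal label-flipping poisoning sketch on a synthetic binary task.
# The dataset, model, and 40% poisoning rate are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Clean baseline.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"clean accuracy:    {clean.score(X_test, y_test):.3f}")

# Attacker flips 40% of class-0 training labels to class 1,
# biasing the learned boundary toward over-predicting class 1.
rng = np.random.default_rng(0)
idx0 = np.flatnonzero(y_train == 0)
flip = rng.choice(idx0, size=int(0.4 * len(idx0)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.3f}")
```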
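For model inversion, the third sketch runs gradient ascent on an input until a toy logistic model scores it as strongly class-1, recovering a class-representative input from model access alone. This is a heavily simplified illustration; the model parameters and step size are hypothetical.

```python
# Minimal gradient-ascent model inversion sketch: starting from noise,
# optimize an input until the (hypothetical) model is confident it
# belongs to class 1, yielding a class-representative input.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=8)   # hypothetical parameters of the target model
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = 0.01 * rng.normal(size=8)   # start from near-zero noise
for _ in range(200):
    p = sigmoid(w @ x + b)
    # Gradient ascent on log p(y=1|x); d(log p)/dx = (1 - p) * w.
    x += 0.1 * (1.0 - p) * w

print(f"inverted input's class-1 score: {sigmoid(w @ x + b):.3f}")  # near 1.0
```

Against richer models such as face classifiers, the same optimization can reconstruct recognizable approximations of training inputs, which is where the privacy risk lies.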
Additionally, this course covers:
Impact of Adversarial Attacks: The effects of these threats on domains such as facial recognition, autonomous vehicles, financial models, and healthcare AI.
Mitigation Techniques: Strategies for defending AI systems, including adversarial training, differential privacy, model encryption, and access controls (see the sketch after this list).
Real-World Case Studies: Analyzing prominent examples of adversarial attacks and how they were mitigated.
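As one concrete defense, here is a minimal sketch of adversarial training against the FGSM attack from the earlier example: each training step mixes clean examples with adversarially perturbed copies of themselves, so the model learns to resist the perturbation. The data, epsilon, and learning rate are illustrative assumptions.

```python
# Minimal adversarial training sketch for a toy logistic regression
# model: every update trains on clean inputs plus FGSM-perturbed copies
# crafted against the current parameters. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 8
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(d), 0.0
lr, epsilon = 0.1, 0.25

for _ in range(300):
    p = sigmoid(X @ w + b)
    # Craft FGSM adversarial copies against the current model.
    grad_x = (p - y)[:, None] * w            # loss gradient w.r.t. inputs
    X_adv = X + epsilon * np.sign(grad_x)
    # Train on clean and adversarial examples together.
    X_mix = np.vstack([X, X_adv])
    y_mix = np.concatenate([y, y])
    p_mix = sigmoid(X_mix @ w + b)
    w -= lr * X_mix.T @ (p_mix - y_mix) / len(y_mix)
    b -= lr * (p_mix - y_mix).mean()

clean_acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"clean accuracy after adversarial training: {clean_acc:.2f}")
```

The same loop structure carries over to deep networks, where the adversarial copies are generated with framework autograd rather than a closed-form gradient.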
Through a combination of lectures, case studies, practical exercises, and assessments, students will gain a solid understanding of both the current and emerging threat landscape of AI systems. They will also learn how to apply cutting-edge security practices to safeguard AI models against attack.