PROJECTS

AIAS

The growing use of AI-based management and orchestration systems (cloud/edge virtualized infrastructures, private and public 5G, etc.) creates new attack surfaces for hosting frameworks and AI/ML algorithms. AI-based components are often black boxes, making them prime targets for threat actors, who have developed techniques to undermine the robustness of AI systems through adversarial AI attacks, to violate the integrity of AI models (e.g., by poisoning training data), and to bypass or disable models by querying them with malicious input. Moreover, threat actors weaponize AI technology itself to carry out attacks, ranging from automated individual threat actions, such as injection generators [1,2] and information gathering and exploitation [3,4], to complete and sophisticated campaigns, such as AI-based emulation of Advanced Persistent Threat (APT) attacks [5].

To address these challenges, the AIAS project aims to perform in-depth research on adversarial AI and to design and develop an innovative AI-based security platform that protects the technical robustness of AI systems and the AI-based operations of organisations. The platform relies on adversarial AI defence methods (e.g., adversarial training, adversarial attack detection), deception mechanisms (e.g., high-interaction honeypots, digital twins, virtual personas), and explainable AI (XAI) solutions that empower security teams to realise both “AI for Cybersecurity” (i.e., AI/ML-based tools that enhance the detection of, defence against, and response to cyberattacks) and “Cybersecurity for AI” (i.e., the protection of AI systems against adversarial AI attacks).
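To make the attack/defence pairing above concrete, the following is a minimal, self-contained sketch (NumPy only) of one well-known evasion technique, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression classifier, together with adversarial training as a countermeasure. All data, parameter values, and function names here are illustrative assumptions for exposition; they are not part of the AIAS platform.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1.0, 0.7, (200, 2)), rng.normal(1.0, 0.7, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, b, x, label, eps):
    """FGSM evasion: perturb x in the direction that increases the
    logistic loss, within an L-infinity budget eps."""
    grad_x = (sigmoid(x @ w + b) - label) * w  # d(loss)/dx for logistic loss
    return x + eps * np.sign(grad_x)

def train(X, y, adversarial=False, eps=0.5, lr=0.1, epochs=200):
    """Gradient-descent training; optionally on FGSM-perturbed inputs
    (a simple form of adversarial training)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        Xt = X
        if adversarial:
            Xt = np.array([fgsm(w, b, x, t, eps) for x, t in zip(X, y)])
        p = sigmoid(Xt @ w + b)
        w -= lr * Xt.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

w_std, b_std = train(X, y)
w_adv, b_adv = train(X, y, adversarial=True)

# Attack each model with FGSM (eps = 0.5) and compare accuracies.
X_atk_std = np.array([fgsm(w_std, b_std, x, t, 0.5) for x, t in zip(X, y)])
X_atk_adv = np.array([fgsm(w_adv, b_adv, x, t, 0.5) for x, t in zip(X, y)])
print("clean accuracy (standard):", accuracy(w_std, b_std, X, y))
print("attacked accuracy (standard):", accuracy(w_std, b_std, X_atk_std, y))
print("attacked accuracy (adv-trained):", accuracy(w_adv, b_adv, X_atk_adv, y))
```

The attacked accuracy of the standard model drops below its clean accuracy, illustrating the evasion threat; adversarial training fits the model on worst-case perturbed inputs instead, which is one of the defence families named above. Production-grade variants of both sides of this loop operate on deep models rather than linear ones.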

AIAS envisions a sustainable, secure environment for AI systems, employing a long-term, international, cross-discipline research scheme to develop a novel platform for securing industries against adversarial AI attacks. AIAS aspires to design, develop, and validate a holistic platform for monitoring, protecting, and improving the robustness of AI systems against threat actors, while providing comprehensible and transparent explanations (i.e., via XAI) that help administrators configure their AI systems and mitigate adversarial AI attacks. The proposed solution aims to solidify strategies for improving AI systems’ resilience to cyberattacks, safeguarding the confidentiality, integrity, and availability of their operations. AIAS’s training programme aims to offer ERs/ESRs opportunities for transfer of knowledge (ToK) and practical skills that stimulate their professional growth, through carefully planned cross-partner training activities and interactions with the partner organisations and their respective networks in the cybersecurity and AI markets. The ERs/ESRs will dive into state-of-the-art (SotA) and innovative research challenges, collaborate in a multicultural environment, and develop new hard and soft skills to become leaders in the employment market.

SATRD’s role in the AIAS project

Role description