Explainable and neuro-symbolic artificial intelligence
2nd year - First semester
Attendance not mandatory
- 6 CFU
- 48 hours
- English
- Trieste
- Elective
- Standard teaching
- Oral Exam
- SSD INF/01
This course provides a grasp of the principles and techniques used in explainable and neuro-symbolic AI systems. The first part of the course focuses on explainable AI (XAI). Students will learn the principal concepts and methods of XAI, including interpretable-by-design models and post-hoc explanations. Most of the course concentrates on post-hoc explanations, particularly for neural network architectures, including state-of-the-art approaches. Students will learn which techniques are best suited to providing explanations, tailoring them to various stakeholders such as domain experts and end users. Students will also examine the challenges and future research directions in XAI, preparing to contribute to the development of ethical and trustworthy AI systems. The second part of the course focuses on neuro-symbolic AI. We take the logic perspective, modelling machine learning problems in a logical framework and moving towards efficient and interpretable AI models.

Knowledge and understanding: Students will become familiar with the most important concepts of explainable and neuro-symbolic AI. Moreover, they will see these concepts in practice through a selection of illustrative case studies from various application domains.

Applying knowledge and understanding: At the end of the course, students will be able to reason about the interpretability of AI systems. They will be able to integrate their knowledge in the design and implementation of relevant solutions.

Making judgements: Students will gain skills in assessing the interpretability of AI models, critically analysing their strengths and limitations in real-world applications. Through practical exercises and case studies, students will develop the capacity to make informed decisions about responsible AI solutions.
Communication skills: Students will practise communication skills through teamwork on project assignments, through exchanging feedback with the instructors and tutors, and through the final project presentation. They will be able to explain the basic concepts of explainable and neuro-symbolic AI and assess their importance in a given technological and/or research context.

Learning skills: The successful student will be able to explore the literature on explainable and neuro-symbolic AI so as to identify and apply state-of-the-art methods when solving complex tasks. Where needed, the student will be able to design novel methods or combine existing ones.
Basic knowledge of Python, probabilistic machine learning, and deep learning.
Artificial intelligence faces significant challenges in offering clear explanations for the recommendations generated by intelligent systems and in effectively representing knowledge. Explanations play a crucial role in helping stakeholders understand the rationale behind AI recommendations, while knowledge representation enables advanced neuro-symbolic reasoning, bridging the gap between data-driven learning and logical inference.

The first part of the course centres on explainable AI. Students will explore techniques such as model transparency, post-hoc explanations, and visualisation-based methods. This module will demonstrate how to create AI systems with enhanced user trust and accountability.

The second part focuses on neuro-symbolic AI (NeSy AI), which combines the scalability of neural networks with the interpretability of symbolic methods. This hybrid approach leverages data-driven learning and logical reasoning to build more transparent AI systems.

Throughout the course, students will engage in a variety of activities designed to enhance their learning experience, including exercise sessions, class presentations, and the development of research projects on a chosen topic of interest. Through these activities, students will learn to implement and evaluate explainable and neuro-symbolic AI systems, preparing them to address real-world challenges and contribute to the advancement of interpretable AI.
Reference papers and books for the explainable AI part:
- Arrieta, Alejandro Barredo, et al. "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI." Information Fusion 58 (2020): 82-115.
- Yang, Wenli, et al. "Survey on explainable AI: From approaches, limitations and applications aspects." Human-Centric Intelligent Systems 3.3 (2023): 161-188.
- Núñez, Haydemar, Cecilio Angulo, and Andreu Català. "Rule extraction from support vector machines." ESANN, 2002.
- Mothilal, Ramaravind K., Amit Sharma, and Chenhao Tan. "Explaining machine learning classifiers through diverse counterfactual explanations." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 2020.

The neuro-symbolic computing part builds on:
- Shakarian, Paulo, et al. "Neuro Symbolic Reasoning and Learning." Springer, 2023.

Other relevant references, as well as slides, will be provided throughout the course.
Part I: Explainable AI (XAI)
- Introduction, motivations, metrics for explainability
- Visualisation-based approaches (Partial Dependence Plots, Individual Conditional Expectation)
- Feature attribution with LIME, Anchors, and SHAP
- Rule-based approaches to post-hoc explainability
- Counterfactual-based explanations
- Gradient-based methods for neural networks (DeepLIFT, Integrated Gradients, Layer-wise Relevance Propagation), Grad-CAM and extensions for image data
- Concept-based approaches to XAI
- XAI for Transformers
- Robustness of XAI methods and adversarial attacks
- Neuro-symbolic XAI

Part II: Neuro-Symbolic AI
- Fundamental topics in logic
- Introduction to Kautz's taxonomy
- Neuro-symbolic knowledge graphs
- Logic Tensor Networks
- Logical Neural Networks
- Neural SAT
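As a taste of the visualisation-based approaches in Part I, the partial dependence idea can be sketched in a few lines of plain NumPy. The model, dataset, and function names below are illustrative inventions, not course material: the "black box" is a hand-coded function so the sketch stays self-contained.

```python
import numpy as np

# Toy sketch of a Partial Dependence Plot (PDP) computation.
# For each grid value v of the feature of interest, we fix that
# feature to v in every data point and average the model's predictions.

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))          # toy dataset with two features

def model(X):
    """Stand-in for a trained black-box model."""
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

def partial_dependence(model, X, feature, grid):
    """Average prediction over the data, with X[:, feature] clamped to each grid value."""
    pd_values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        pd_values.append(model(Xv).mean())
    return np.array(pd_values)

grid = np.linspace(-2, 2, 5)
pdp = partial_dependence(model, X, feature=0, grid=grid)
# Since feature 1 averages out, the curve should look like v^2 plus a constant.
print(np.round(pdp, 2))
```

Plotting `grid` against `pdp` gives the usual PDP curve; libraries such as scikit-learn provide this computation (and the ICE variant) out of the box.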
Each topic of the course will be covered with lectures and hands-on sessions (in roughly a 1:1 proportion). In the hands-on sessions, students will work on a set of tasks that allow them to practise and deepen their understanding of the concepts seen in the lectures. Some tasks will include implementation in Python.
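To illustrate the flavour of such a Python task on the neuro-symbolic side, here is a toy sketch of the idea behind Logic Tensor Networks: logical formulas are evaluated on continuous truth degrees in [0, 1] using fuzzy-logic operators, so a formula's satisfaction becomes a differentiable quantity. The predicates, individuals, and truth values below are invented for illustration and are not the course's actual examples.

```python
import numpy as np

# Fuzzy-logic connectives (product t-norm and its dual co-norm).
def t_and(a, b):     return a * b                 # conjunction
def t_or(a, b):      return a + b - a * b         # disjunction
def t_not(a):        return 1.0 - a               # negation
def t_implies(a, b): return t_or(t_not(a), b)     # a -> b  as  (not a) or b

# "Grounding": predicates mapped to truth degrees per individual.
# In a real Logic Tensor Network these would be outputs of neural networks.
smokes = {"alice": 0.9, "bob": 0.2}
cancer = {"alice": 0.7, "bob": 0.1}

# Satisfaction of the rule  forall x: Smokes(x) -> Cancer(x),
# with 'forall' aggregated as the mean (a common smooth choice).
sat = np.mean([t_implies(smokes[x], cancer[x]) for x in smokes])
print(round(float(sat), 3))   # prints 0.775
```

Because every operator here is differentiable, gradient descent can adjust the grounded truth degrees (or the network weights producing them) to maximise the satisfaction of a whole knowledge base.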
Bring a laptop to each lecture.
The evaluation consists of a project for each part of the course and an oral exposition. Project topics are proposed by the students, taking inspiration from a list of topics given by the instructors; the concrete project goals and tasks are proposed by the group. Projects are done in teams, ideally of 2 or 3 students. Each group works either on one project encompassing topics from both parts of the course, or on two smaller projects on two separate topics, one for each part. The project results are presented in 10 minutes, explaining the research question and the results obtained. After the presentation, there will be a discussion with questions about the project and the course topics. The main points of evaluation are the clarity and comprehensiveness of the presentation; understanding of the project topic and of the course topics; and the scope, depth, and originality of the performed analyses. Honours (lode) can be awarded in exceptional cases.
This course explores topics closely related to one or more goals of the United Nations 2030 Agenda for Sustainable Development (SDGs).