Internship Title
Enhancing User Experience and User Interaction in Explainable AI
Areas of Specialization
Intelligent Systems
Software Engineering
Internship Location
IPN - Instituto Pedro Nunes (Laboratório de Informática e Sistemas)
Background
As Artificial Intelligence (AI) systems become more prevalent in various domains, it is crucial to ensure that these systems are transparent, interpretable, and accountable. Explainable AI (XAI) aims to address these concerns by providing insights into how AI algorithms make decisions and by enhancing human understanding and trust in AI systems.
However, the effectiveness of XAI methods relies heavily on the human factors and user-interaction aspects of the visualization techniques used. The visualization of AI models and their underlying processes plays a critical role in helping users comprehend, interpret, and interact with AI systems.
To advance the field of XAI and enhance user experience, it is essential to explore innovative approaches and tools for visualizing human factors and user interaction in Explainable AI systems. This research aims to bridge the gap between complex AI algorithms and human cognition by designing intuitive, informative, and interactive visualizations that empower users to understand and control AI decision-making processes.
Although this work has a broad applicability scope, its outcome does not depend on specific datasets or models. As such, it is possible either to (i) study applicability scenarios, to be defined at the beginning of the thesis, using specific publicly available data or models (e.g., Molecular Dynamics (1), GNN explanation methods (2), and several others (3,4,5,6)), or to (ii) build on previous work developed at IPN. In the latter case, the available scenario involves intrusion detection AI models based on Multilayer Perceptrons (MLPs), data pertaining to Linux system calls (i.e., ADFA-LD (7)), and device authentication events. These were analyzed with explainability methods such as LIME and SHAP in previous work that is available for use within this internship.
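As a concrete illustration of this previous-work scenario, the minimal sketch below applies LIME to an MLP classifier. It is a sketch only, assuming preprocessed tabular features: the synthetic data, feature names, and toy labels are hypothetical stand-ins for the ADFA-LD-derived data, not the actual IPN codebase.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Hypothetical stand-in for ADFA-LD features (e.g., system-call n-gram counts).
    rng = np.random.default_rng(0)
    X_train = rng.random((500, 20))
    y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)  # toy normal/attack label

    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    model.fit(X_train, y_train)

    explainer = LimeTabularExplainer(
        X_train,
        feature_names=[f"syscall_feat_{i}" for i in range(20)],  # hypothetical names
        class_names=["normal", "attack"],
        mode="classification",
    )
    # Local explanation for a single event: (feature, weight) pairs a UI can visualize.
    exp = explainer.explain_instance(X_train[0], model.predict_proba, num_features=5)
    print(exp.as_list())

An analogous explanation could be produced with SHAP (e.g., shap.KernelExplainer over the same predict_proba function), the other method used in the prior IPN work.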
This topic is part of the NEXUS project, in which Pedro Nunes Institute (IPN) is a consortium member.
________________
(1) Sanchez-Lengeling et al. 2020. Evaluating Attribution for Graph Neural Networks. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020). https://proceedings.neurips.cc/paper/2020/hash/417fbbf2e9d5a28a855a11894b2e795a-Abstract.html
(2) Lukas Faber, Amin K. Moghaddam, and Roger Wattenhofer. 2021. When Comparing to Ground Truth is Wrong: On Evaluating GNN Explanation Methods. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD '21). Association for Computing Machinery, New York, NY, USA, 332–341. https://doi.org/10.1145/3447548.3467283
(3) https://cloud.google.com/bigquery/public-data/
(4) https://paperswithcode.com/task/explainable-artificial-intelligence#datasets
(5) https://github.com/marcotcr/lime
(6) https://slundberg.github.io/shap/notebooks/Iris%20classification%20with%20scikit-learn.html
(7) https://research.unsw.edu.au/projects/adfa-ids-datasets
Objective
The primary objective of this thesis is to develop an effective visualization tool that enhances the user experience and enables user interaction with Explainable AI systems.
The solution should consider the following objectives:
• Review and analyze existing XAI techniques and visualization approaches for user interaction and human factors such as user experience, domain expertise, trust, and transparency.
• Identify key human factors and user requirements that influence the understanding of and interaction with AI systems (e.g., cognitive load, decision-making processes, ease of use, and user-specific contexts).
• Design and develop intuitive visual representations that effectively convey the inner workings of AI algorithms and decision-making processes.
• Incorporate interactive elements into the visualizations to allow users to explore and manipulate AI models (see the sketch after this list).
• Evaluate the effectiveness and usability of the developed visualization tools through user studies and feedback.
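As an illustration of the interactive-visualization objectives above, the sketch below renders the local feature attributions of a single prediction as an interactive bar chart. Plotly is an assumption here (any interactive charting library would serve), and the feature names and weights are illustrative placeholders, e.g., values taken from a LIME or SHAP explanation rather than from a real model.

    import plotly.graph_objects as go

    # Hypothetical local attributions for one prediction (placeholder values only).
    features = ["syscall_feat_3", "syscall_feat_0", "syscall_feat_12",
                "syscall_feat_7", "syscall_feat_1"]
    weights = [0.42, 0.31, -0.18, 0.12, -0.07]

    fig = go.Figure(
        go.Bar(
            x=weights,
            y=features,
            orientation="h",
            hovertemplate="%{y}: contribution %{x:+.2f}<extra></extra>",
        )
    )
    fig.update_layout(
        title="Local feature attributions for one prediction",
        xaxis_title="Contribution to the predicted class",
    )
    fig.show()  # opens an interactive view with hover, zoom, and pan

Hover, zoom, and selection come built in with such libraries; richer interaction, such as what-if manipulation of model inputs, would require connecting the chart to the model through a small web application.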
By the end of this research, the developed visualization tool is expected to enable users to gain insight into AI decision-making processes, better interpret AI outcomes, and actively interact with AI systems. These visualizations will enhance transparency, trust, and user acceptance of AI technologies while accounting for human factors and user experience.
Work Plan - Semester 1
[Weeks 1-4] - Literature review on Explainable AI, XAI techniques, visualization methods, and human factors in AI interaction.
[Weeks 5-8] - Identify and analyze user requirements and factors influencing the visualization of AI models.
[Weeks 9-12] - Design and prototype initial visualization concepts based on the identified human factors and user requirements.
[Weeks 13-15] - Implement and evaluate the initial visualization prototypes using synthetic data and expert feedback.
[Weeks 16-20] - Prepare the first intermediate report.
Work Plan - Semester 2
[Weeks 1-6] - Refine and enhance the visualization prototypes based on the feedback from the evaluation phase.
[Weeks 7-10] - Conduct user studies to evaluate the effectiveness, usability, and user satisfaction of the developed visualizations.
[Weeks 11-15] - Analyze the user study results, iterate on the visualization designs, and finalize the visualization tool.
[Weeks 16-20] - Finalize the master's thesis report, submit the document, and prepare for the defense.
Conditions
The workplace will be at the Instituto Pedro Nunes (IPN) Computer and Systems Laboratory.
Remunerated internship according to IPN's scholarship regulations, as approved by FCT.
Notes
During the application phase, questions about this proposal, namely regarding its objectives and conditions, can be clarified with the supervisors by email or in a meeting scheduled after initial email contact.
Supervisor
Paulo Miguel Guimarães da Silva
pmgsilva@ipn.pt