Submitted Proposals

DEI - FCTUC
Generated on 2024-11-22 03:39:40 (Europe/Lisbon).

Internship Title

AIVis - A Human-Centred approach for Explainable Artificial Intelligence through visualization

Internship Location

Porto or Remote Work

Background and Motivation
Ethical concerns and a lack of trust from users have led to a resurgence of the concept of Explainable Artificial Intelligence (XAI). This argues that through "more transparent, interpretable, or explainable systems, users will be better equipped to understand and therefore trust the intelligent agents" (Miller 2018). One possible approach to XAI is through what Miller calls "explanation", in which intelligent systems provide explanations to people concerning their decisions (the outputs of the AI system). However, the decision of how to create these explanations is mostly based on researchers' intuitions about what constitutes a good explanation, rather than on a human-centred approach that is considerate of the user's expectations, concerns, and experience (Madumal et al. 2019). Likewise, current work on explainable AI does not concern itself with the interaction, visualization, and readability of the information generated by the system.

Objectives
The purpose of this project is to develop a human-centred approach to explainable AI, focused on the design of visualization and interaction models that improve the readability of AI decisions for users.

Innovative aspects
Human-centred approaches to explainable AI are scarce, and those that focus on the visualization of information are virtually non-existent. This project would make an original contribution to the topic of explainable and accountable AI.

Workplan - Semester 1
1. State of the Art of Explainable Artificial Intelligence.
2. Qualitative user research regarding existing XAI work.

Workplan - Semester 2
1. Co-design and experimentation of visualization solutions for XAI. Testing and evaluation of results.
2. Iteration and refinement of visualization solutions.
3. Documentation and development of best-practices.

Conditions

Candidate Profile
- Background in communication design or a similar area.
- Inquisitive and curious mind.
- Interest – and, ideally, experience – in information visualization.
- Interest in AI.
- Proficiency in English and a good command of Portuguese for fieldwork.

Remarks

MSc Thesis to be supervised by Prof. Paula Alexandra Silva (Department of Computer Science).

References
- Doran, D., Schulz, S., & Besold, T. R. (2017). What does explainable AI really mean? A new conceptualization of perspectives. arXiv preprint arXiv:1710.00794.
- Madumal, P., Miller, T., Sonenberg, L., & Vetere, F. (2019, May). A Grounded Interaction Protocol for Explainable Artificial Intelligence. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1033-1041). International Foundation for Autonomous Agents and Multiagent Systems.
- Miller, T. (2018). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence.
- Mittelstadt, B., Russell, C., & Wachter, S. (2019, January). Explaining explanations in AI. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 279-288). ACM.
- Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Transparent, explainable, and accountable AI for robotics.
- Wortham, R. H., Theodorou, A., & Bryson, J. J. (2017). Improving robot transparency: real-time visualisation of robot AI substantially improves understanding in naive observers. In 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 1424-1431). IEEE.

Supervisor

Ricardo Manuel Coelho de Melo
ricardo.melo@fraunhofer.pt