Proposals without a student

DEI - FCTUC
Generated on 2024-05-17 12:17:03 (Europe/Lisbon).

Internship Title

Explainable AI models for Transparent and Trustworthy Systems

Internship Location

Coimbra

Background

The technological progress we have been witnessing in Artificial Intelligence (AI) has a significant impact on the most diverse sectors of our society. Specifically, the adoption of a wide variety of devices, technologies and platforms enables new services and products that transform the way we live, work and learn.

In this context, researchers strive to develop intelligent solutions that take advantage of advanced Machine Learning (ML) mechanisms. Such solutions have a wide variety of applications: advanced ML models can be used for security (e.g., Intrusion Detection Systems), privacy (e.g., Federated Learning), finance (e.g., stock market predictions) and many other domains. They often provide enhanced prediction and classification capabilities but lack the ability to explain why their conclusions are reached. To mitigate this issue, Explainable AI (XAI) plays a crucial role in improving researchers’ understanding of why AI models produce specific predictions.

XAI not only gives humans additional insight into models’ outputs but also promotes trustworthiness, auditability and transparency. At the same time, developing explainable AI models helps reduce businesses’ legal and reputational risks when models are deployed in production, supports continuous model evaluation and enables fine-tuning to produce better models. In this context, in order to increase the trustworthiness and transparency of AI models, it is crucial to develop models capable of showcasing the factors that contributed to their decisions.

This topic is part of the Autonomous Trust, Security and Privacy Management Framework for IoT (ARCADIAN-IoT) project, coordinated by the Pedro Nunes Institute (IPN), and funded by the European Commission's H2020 program (agreement nº 101020259). As such, the work described in this internship can also use the models developed by IPN in the context of this project (e.g., intrusion detection models for IoT devices) or develop novel models for different use cases.

Objective

Explainable AI can be addressed in different stages of AI development (i.e., pre-modelling, model development, and post-modelling). The scope of this work mainly addresses the post-modelling stage.
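
For illustration only, the sketch below shows one widely used post-modelling technique, permutation importance: the trained model is treated as a black box, and each feature's relevance is measured by how much randomly shuffling it degrades performance. The dataset, model and parameters here are assumptions made for this example, not requirements of the proposal.

    # Minimal post-modelling explainability sketch (illustrative assumptions only).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a real dataset.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Post-modelling step: no access to the model's internals is required.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1]:
        print(f"feature {i}: {result.importances_mean[i]:.3f} "
              f"+/- {result.importances_std[i]:.3f}")

Model-agnostic methods such as this one match the post-modelling scope because they can be applied to any already trained model.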

The objectives of this work are the following:

(i) analyze the state of the art on Explainable AI methodologies and tools;

(ii) elicit the requirements for and design a mechanism able to provide post-modelling explainability (i.e., to extract explanations and the rationale that describe models’ outputs; see the sketch after this list);

(iii) implement and test a functional prototype of the proposed mechanism;

(iv) evaluate the solution with realistic ML models and document the results in the master’s dissertation and a scientific publication.
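
As a hedged illustration of objective (ii), the following sketch extracts a local rationale for a single prediction by fitting an interpretable linear proxy on perturbed copies of the instance (a LIME-style perturbation mechanism). The function name, noise scale and weighting scheme are hypothetical choices, not part of the proposal.

    # Local explanation via a perturbation mechanism (illustrative sketch).
    import numpy as np
    from sklearn.linear_model import Ridge

    def local_explanation(predict_proba, x, n_samples=500, scale=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Perturb the instance with Gaussian noise around its feature values.
        Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
        # Query the black-box model on the perturbed neighbourhood.
        target = predict_proba(Z)[:, 1]
        # Weight perturbed samples by their proximity to the original instance.
        weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2)
        # Fit an interpretable linear surrogate on that neighbourhood.
        surrogate = Ridge(alpha=1.0).fit(Z, target, sample_weight=weights)
        # Its coefficients are the rationale: each feature's local contribution.
        return surrogate.coef_

    # Usage with the model from the previous sketch:
    # contributions = local_explanation(model.predict_proba, X_test[0])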

Work Plan - Semester 1

[Week 1-8] - Literature review of relevant machine learning algorithms, data modelling, and explainable AI approaches (e.g., inherently explainable models, hybrid approaches, regularization, and others);

[Week 9-11] - Identification of and familiarization with the processes (e.g., perturbation mechanisms, backward propagation, proxy models, and others) and methodologies (e.g., feature visualization, influence functions, DeepLIFT, SOCRAT, and others) required for this work (a proxy-model sketch follows this plan);

[Week 12-13] - Definition of and familiarization with existing ML models and data (e.g., in-house or from open-access datasets [1]) used to validate the solution;

[Week 14-15] - Prototype requirements analysis (methodology definition, metrics, parameters);

[Week 16-18] - Specification of the solution;

[Week 14-20] - Preparation of the master's dissertation interim report.


[1] https://paperswithcode.com/datasets
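
To make the "proxy models" item above concrete, here is a minimal sketch of a global surrogate: an interpretable decision tree is trained to mimic a black-box classifier, and its fidelity (agreement with the black box on held-out data) is reported. The data, models and depth limit are assumptions for illustration.

    # Global proxy-model sketch (illustrative assumptions only).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Train the proxy on the black box's predictions, not on ground truth:
    # the goal is to mimic the model so that its behaviour becomes readable.
    proxy = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
        X_train, black_box.predict(X_train))

    fidelity = accuracy_score(black_box.predict(X_test), proxy.predict(X_test))
    print(f"fidelity to the black box: {fidelity:.2%}")
    print(export_text(proxy))  # readable if/else rules approximating the model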

Work Plan - Semester 2

[Week 1-6] - Experimental work (definition of the experimental environment, training and evaluation of AI models, simulation of attacks and intrusions; see the sketch after this plan);

[Week 7-12] - Solution implementation (prototype development to be validated with real ML models);

[Week 13-17] - Evaluation of proposed approach and solution fine-tuning (analysis of implementation results, parameter adjustment, tests);

[Week 14-20] - Preparation of the final report.
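
As a sketch of what the Week 1-6 experimental environment could look like, the snippet below trains and evaluates a detector on simulated, imbalanced "attack vs. benign" data. The synthetic dataset is a placeholder assumption; actual experiments would use the project's intrusion-detection models or the open-access datasets referenced in Semester 1.

    # Illustrative experimental-environment sketch (synthetic data only).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # weights=[0.9] simulates the usual class imbalance: few attack samples.
    X, y = make_classification(n_samples=5000, n_features=20,
                               weights=[0.9], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0)

    detector = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, detector.predict(X_test),
                                target_names=["benign", "attack"]))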

Conditions

The place of work will be at Laboratório de Informática e Sistemas (LIS), Instituto Pedro Nunes (IPN).

This work will be integrated into an international research project. The student may apply for a research grant for a period of 6 months, possibly renewable, in the amount of 875.98€/month.

Supervisor

Paulo Silva
pmgsilva@ipn.pt