Submitted Proposals

DEI - FCTUC
Generated on 2025-07-17 13:28:26 (Europe/Lisbon).

Internship Title

Enhancing Transparency in Federated Learning through Explainable Artificial Intelligence (XAI)

Areas of Specialization

Sistemas Inteligentes

Engenharia de Software

Internship Location

Rua Dom João Castro n.12, 3030-384 Coimbra, Portugal

Background

The growing adoption of machine learning in sensitive and high-stakes domains (such as healthcare, finance, and cybersecurity) has made model transparency and interpretability a critical concern. Explainable Artificial Intelligence (XAI) aims to make the decision-making processes of complex models understandable to humans, enabling trust, accountability, and informed decision-making.

In parallel, Federated Learning (FL) has emerged as a promising paradigm for privacy-preserving model training, where data remains decentralized across multiple clients. However, FL introduces new challenges for explainability due to its distributed nature, model heterogeneity, and limited access to raw data. Traditional XAI techniques, which often rely on centralized data or model access, may not directly apply in this context.

Integrating XAI into Federated Learning is essential for making decentralized models not only privacy-preserving and secure but also interpretable and transparent. This is particularly important in security-focused applications like DeepGuardian, where understanding model decisions is crucial for detecting and responding to anomalies or adversarial threats.

Objective

The primary goal of this thesis is to investigate and develop Explainable Artificial Intelligence (XAI) techniques suitable for Federated Learning (FL) environments, with the overarching aim of enhancing model transparency and interpretability in decentralized machine learning systems. This work will begin by analyzing the applicability of existing XAI methods (such as SHAP, LIME, and counterfactual explanations) in the context of FL, identifying the limitations and challenges posed by data decentralization, client heterogeneity, and communication constraints. Building on this analysis, the thesis will propose adaptations or novel approaches to explainability that are compatible with the privacy-preserving and distributed nature of FL.
These methods will then be integrated into a real-world framework, such as DeepGuardian, to demonstrate their effectiveness in practical scenarios involving anomaly detection and security monitoring. Finally, the proposed solutions will be evaluated based on their interpretability, computational efficiency, scalability, and their impact on model performance and data privacy.
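As a toy illustration of the intended direction, the sketch below shows one way local explanations could respect FL constraints: each client explains the aggregated global model on its own data and shares only a summary attribution vector, never raw samples. All names (`fedavg`, `local_attribution`) and the attribution heuristic are illustrative assumptions, not methods prescribed by this proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 clients, each holding private data for a shared
# linear model with 4 features.
n_features = 4
client_weights = [rng.normal(size=n_features) for _ in range(3)]
client_data = [rng.normal(size=(20, n_features)) for _ in range(3)]

def fedavg(weights):
    """Federated averaging: the server combines client weights without seeing data."""
    return np.mean(weights, axis=0)

def local_attribution(w, X):
    """Crude per-feature attribution for a linear model: mean |w_j * x_j| over local data."""
    return np.mean(np.abs(X * w), axis=0)

global_w = fedavg(client_weights)

# Each client explains the *global* model on its own data and shares only
# the aggregated attribution vector, preserving data privacy.
client_attrs = [local_attribution(global_w, X) for X in client_data]
global_attr = np.mean(client_attrs, axis=0)

print(global_attr)  # one importance score per feature
```

A real implementation would replace the linear-model heuristic with SHAP- or LIME-style attributions and address secure aggregation of the shared vectors.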

Work Plan - Semester 1

1. Conduct a comprehensive literature review on Explainable Artificial Intelligence (XAI) and Federated Learning (FL).
2. Analyze the applicability and limitations of existing XAI methods (e.g., SHAP, LIME) in federated environments.
3. Explore the challenges of integrating interpretability into decentralized, privacy-preserving systems.
4. Perform initial experiments using public or synthetic datasets to evaluate baseline compatibility of XAI with FL.
5. Write an intermediate report summarizing the research problem, objectives, related work, and preliminary findings.
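A baseline experiment like the one in step 4 could be sketched as follows: synthetic data is split non-IID across simulated clients, and per-client feature-importance rankings are compared to expose the heterogeneity problem a federated XAI method must handle. The least-squares surrogate and all names here are illustrative stand-ins, not the actual experimental design.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic binary classification task with one dominant feature.
n, d = 300, 5
true_w = np.array([2.0, -1.5, 0.0, 0.5, 0.0])
X = rng.normal(size=(n, d))
y = (X @ true_w + rng.normal(scale=0.1, size=n) > 0).astype(float)

# Non-IID split: sorting by the first feature gives each client a skewed slice.
order = np.argsort(X[:, 0])
splits = np.array_split(order, 3)

def fit_linear(X, y):
    """Least-squares surrogate for local model training (stand-in for real FL training)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

rankings = []
for idx in splits:
    w = fit_linear(X[idx], y[idx])
    attr = np.abs(w)                    # crude global attribution per feature
    rankings.append(np.argsort(-attr))  # features ordered by importance

# Disagreement between client rankings quantifies how heterogeneity
# distorts explanations computed locally.
print([r.tolist() for r in rankings])
```

Repeating this with SHAP or LIME on the public datasets chosen in step 4 would give the baseline compatibility evidence the intermediate report needs.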

Work Plan - Semester 2

1. Design and implement XAI techniques tailored to FL constraints (e.g., decentralized computation, data privacy).
2. Integrate the developed methods into a prototype system, preferably within the DeepGuardian framework.
3. Simulate real-world scenarios (e.g., anomaly detection, adversarial behavior) to test the practical value of the explanations.
4. Evaluate the performance of the proposed techniques based on interpretability, scalability, efficiency, and impact on model performance.
5. Collect and analyze relevant metrics to quantify the effectiveness and usability of the XAI methods.
6. Write and finalize the report, incorporating all results and conclusions.

Conditions

The trainee will have all the conditions necessary to carry out the planned tasks, and will be integrated into the research and development teams of the European research projects in which OneSource participates.

Supervisor

Jorge Diogo Gomes Proença
jorge.proenca@onesource.pt