Internship Title
Visualizing Information Flows for Enhanced Explicability in Spiking Neural Networks
Areas of Specialization
Intelligent Systems
Internship Location
Laboratory of Artificial Neural Networks (LARN-CISUC)
Background
In recent years, advances in artificial intelligence (AI) and machine learning (ML) have led to increasingly powerful and complex solutions. However, despite their clear advantages, most of these solutions end up as closed systems that offer little explanation of how their results are generated. This may reduce trust in such systems, limiting their adoption and full potential, especially when they are integrated into sensitive decision-making processes. The project NextGenAI – Centre for Responsible AI aims to find solutions for responsible artificial intelligence. By developing strategies based on three fundamental pillars (explainability, transparency, and sustainability), the project seeks to evaluate and address the potential risks of current AI and ML technologies, such as bias, model opacity and lack of interpretability, high data and energy consumption, and black-box design. CISUC's participation includes the development of algorithms that enable the creation of transparent, fair, reliable, and energy-efficient models, which are necessary for establishing responsible and sustainable AI products in the market.
Objective
Artificial Neural Networks (ANNs) are widely known for their success and ubiquity across a vast number of domains. However, their inner workings and predictions, especially those of Deep Learning (DL) architectures, are often of a black-box nature and thus difficult to explain. Spiking Neural Networks (SNNs), the third generation of ANNs, promise, among other advantages, to increase explainability through causal inference. In the context of NextGenAI, the objective of this internship is twofold: first, to conduct a literature review on SNNs, their available implementations (including visualization tools), and causal inference with SNNs; second, through an empirical setting and case study, to develop a tool that visualizes the flow of information from inputs to outputs in SNNs, for greater explainability. The internship should give the candidate an increased awareness of model explainability and interpretability, which is of utmost importance for trustworthy and robust modern AI solutions.
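As a rough illustration of the kind of signal such a tool could expose, the sketch below simulates a small feed-forward network of leaky integrate-and-fire (LIF) neurons in plain NumPy and records the spike activity of every layer at every timestep. The architecture, weights, time constants, and thresholds are arbitrary assumptions made only for illustration; the actual internship may instead rely on a dedicated SNN framework identified during the literature review.

# Minimal sketch: record per-layer spike activity in a feed-forward LIF network.
# All sizes, weights, and neuron constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical architecture: 10 inputs -> 20 hidden -> 5 output LIF neurons.
sizes = [10, 20, 5]
weights = [rng.normal(0.0, 0.5, size=(sizes[i], sizes[i + 1]))
           for i in range(len(sizes) - 1)]

T = 100            # number of simulation timesteps
beta = 0.9         # membrane potential decay per step
threshold = 1.0    # firing threshold

# Rate-coded input: each input neuron fires with a fixed probability per step.
input_rates = rng.uniform(0.05, 0.5, size=sizes[0])
input_spikes = (rng.random((T, sizes[0])) < input_rates).astype(float)

# Membrane potentials and spike recordings for every non-input layer.
membranes = [np.zeros(n) for n in sizes[1:]]
spike_record = [np.zeros((T, n)) for n in sizes[1:]]

for t in range(T):
    layer_input = input_spikes[t]
    for l, w in enumerate(weights):
        # Leaky integration of weighted presynaptic spikes.
        membranes[l] = beta * membranes[l] + layer_input @ w
        spikes = (membranes[l] >= threshold).astype(float)
        membranes[l] -= spikes * threshold      # soft reset after firing
        spike_record[l][t] = spikes
        layer_input = spikes                    # spikes propagate to the next layer

# Per-layer firing rates: raw material for an information-flow visualization.
for l, rec in enumerate(spike_record):
    print(f"layer {l + 1}: mean firing rate = {rec.mean():.3f} spikes/step/neuron")

The per-layer spike records produced this way (one time-by-neuron matrix per layer) are the kind of raw material that a raster plot or an input-to-output flow diagram could consume.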
Work Plan - Semester 1
1. State-of-the-art review of SNNs
2. Review of libraries/frameworks for SNN implementations and of visualization tools
3. Outline the technical requirements (libraries, technologies, local vs. web server, static vs. interactive, etc.) and the overall design for the visualization tool, and build a working prototype (see the illustrative sketch after this list).
4. Writing of the intermediate internship report.
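As a hedged sketch of what an early, static prototype output might look like, the function below draws one spike-raster panel per layer with matplotlib, reusing the hypothetical spike_record list from the earlier LIF sketch. The function name, arguments, and layout are illustrative assumptions, not a prescribed design for the tool.

# Minimal sketch of a static spike-raster view, assuming the spike_record
# arrays produced in the earlier sketch (time x neurons, 0/1 entries).
import numpy as np
import matplotlib.pyplot as plt

def plot_raster(spike_record, layer_names=None):
    """Draw one raster panel per layer: time on the x-axis, neuron index on the y-axis."""
    n_layers = len(spike_record)
    names = layer_names or [f"layer {i + 1}" for i in range(n_layers)]
    fig, axes = plt.subplots(n_layers, 1, sharex=True,
                             figsize=(8, 2 * n_layers))
    axes = np.atleast_1d(axes)
    for ax, rec, name in zip(axes, spike_record, names):
        t_idx, n_idx = np.nonzero(rec)   # (timestep, neuron) pairs where spikes occurred
        ax.scatter(t_idx, n_idx, s=4)
        ax.set_ylabel(name)
    axes[-1].set_xlabel("timestep")
    fig.tight_layout()
    return fig

# Example usage with the spike_record list from the previous sketch:
# plot_raster(spike_record, ["hidden", "output"]).savefig("raster.png")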
Work Plan - Semester 2
5. Review and expand the prototype into the final visualization tool.
6. Explore, analyse, and discuss the developed tool, also providing an instruction manual.
7. Preparation of a paper for a conference or journal (to be determined).
8. Writing and submission of the thesis.
Conditions
The internship will take place at the Laboratory of Artificial Neural Networks (LARN-CISUC), with regular meetings with the supervision team. The candidate should be strongly motivated to conduct research in the areas of artificial intelligence and machine learning. Experience with Python, ANNs, and DL is essential, as is experience with related machine learning algorithms and software/programming tools. Experience with visualization tools and methods is highly recommended. In addition to joining the laboratory, the selected candidate will be integrated into the research team currently working on the NextGenAI project, supported by the National Recovery and Resilience Plan (PRR) and the Next Generation EU funds.
Remarks
A research grant opportunity, supported by the abovementioned project, will become available during the internship.
Supervisors
Bernardete Ribeiro (Prof.) / Francisco Antunes (Invited Prof.) / Dylan Perdigão (MSc)
fnibau@dei.uc.pt