Assigned Proposals 2023/2024

DEI - FCTUC
Generated on 2024-11-24 07:13:06 (Europe/Lisbon).

Internship Title

Reliability of machine learning models based on data density principles with application to risk assessment

Internship Location

Laboratory of the Adaptive Computing Group

Background

This work is part of the "Center for Responsible AI – Next Generation AI" project, led by Unbabel, which aims to develop the next generation of responsible artificial intelligence products. The primary objective of this thesis is to research and implement pointwise reliability metrics that can effectively evaluate the predictions made by machine learning models. The intention is to provide users with valuable guidance regarding the reliability of the model's predictions, which is particularly useful in critical applications such as healthcare.
The main hypothesis to be considered in this thesis is: "The reliability of a particular instance can be assessed by evaluating the structural similarity between that instance and the training instances used to build the model."
This work can be assigned to more than one student, and scholarships are available for those students.

Machine learning (ML) and artificial intelligence (AI) methods have demonstrated remarkable performance in various fields, and their research impact has been undeniable. However, the practical deployment of intelligent models still faces certain limitations, particularly in critical tasks like the clinical domain.
One of the fundamental concerns regarding ML tools is the ability to trust the predictions they generate. Therefore, in addition to the prediction outcome, the availability of reliability or confidence measures for these predictions would enhance security and transparency, fostering trust in human-AI interaction. Moreover, such measures could also contribute to model improvement. As a result, there is a strong motivation for developing innovative ML models that provide reliability measures for individual predictions.

During the training phase, it is feasible to estimate the overall performance of a model using specific metrics, such as sensitivity and specificity. Additionally, the uncertainty of the model can also be assessed by computing confidence intervals. However, these estimations provide only an average understanding of the performance and do not offer a means to measure the performance on individual instances. Therefore, estimating pointwise confidence would be particularly valuable in assessing the model's performance on a case-by-case basis.
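The contrast between aggregate estimates and the missing per-instance view can be made concrete. The minimal sketch below is illustrative only: the function names and the percentile-bootstrap choice are assumptions, not part of the proposal. It computes sensitivity, specificity, and a bootstrap confidence interval for accuracy, all of which summarize the model on average and say nothing about the trustworthiness of any single prediction.

```python
import random

def sensitivity_specificity(y_true, y_pred):
    """Aggregate metrics for a binary classifier: true-positive
    and true-negative rates over the whole evaluation set."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for accuracy.
    Still an average-level estimate: it quantifies uncertainty about
    the overall accuracy, not about one particular prediction."""
    rng = random.Random(seed)
    n = len(y_true)
    accs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        accs.append(sum(y_true[i] == y_pred[i] for i in idx) / n)
    accs.sort()
    lo = accs[int(alpha / 2 * n_boot)]
    hi = accs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Both functions return set-level quantities; nothing here answers "how reliable is the prediction for this one patient?", which is precisely the gap pointwise metrics address.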
This thesis aims to investigate pointwise reliability/confidence metrics with the objective of enhancing professionals' trust in the application of ML methods. In practice, these metrics would enable decision-makers to address a critical question they encounter in their daily work: can I rely on the ML model's prediction for the current instance?

The reliability of ML models has been explored using two main approaches: the density and local-fit principles. The first uses distance measures and data patterns to compare the instance of interest to the examples present in the training set; the second assesses how the trained model performs on training subsets similar to the instance under evaluation to derive a reliability measure. Intuitively, the model is expected to perform well on instances that are similar to training examples on which it already performs correctly.
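One simple instantiation of the density principle can be sketched as follows. This is a hypothetical illustration, not the method to be developed in the thesis: the function name, the k-nearest-neighbour distance choice, and the exponential mapping to (0, 1] are all assumptions made here for clarity.

```python
import math

def knn_density_reliability(x, train_X, k=3, scale=1.0):
    """Density-principle sketch: score a test instance by its mean
    Euclidean distance to its k nearest training instances, mapped to
    (0, 1] so that instances close to the training data score near 1.
    `scale` (assumed) controls how fast reliability decays with distance."""
    dists = sorted(math.dist(x, xi) for xi in train_X)
    mean_knn = sum(dists[:k]) / k
    return math.exp(-mean_knn / scale)
```

A test instance lying inside a dense region of the training data receives a score close to 1, while an outlier far from every training example receives a score close to 0, flagging its prediction as less trustworthy.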

This thesis will primarily focus on the former. The main hypothesis to be researched is as follows: "The reliability of a particular instance can be assessed by evaluating the structural similarity between that instance and the training instances used to build the model."
The methods to be developed will be validated using practical use cases provided by partners involved in the CRAI-AI project. Practical applications are foreseen in cardiovascular risk assessment and in population stratification for rehabilitation programs.

Objective

The main scientific hypothesis to be validated is the potential to enhance current machine learning (ML) models by incorporating reliability measures, leading to improved and trustworthy prediction models.
From the practical perspective, the main goal is to apply these results in the clinical domain, specifically in the context of risk assessment.
By addressing these challenges, we will contribute to enhancing the acceptance and trust of clinical professionals in ML models. This will not only assist professionals in making more informed decisions but also facilitate the integration of these models into clinical workflows.

Work Plan - Semester 1

1| Reliability measures: state of the art related to reliable methods applied to ML techniques based on density principles, which involve data distribution and structural similarity between instances.
2| Use cases: analysis of use cases, particularly in the risk assessment area.
3| Report: first-semester report.

Preliminary results
- Implementation of reliability measurements for ML models (initial approaches)

Work Plan - Semester 2


1| Reliability measures: research and implement reliability metrics for ML models, improving the results achieved in the first semester.
2| Use cases: implement the required adjustments and interfaces with the use cases, specifically focusing on risk assessment models.
3| Validation: validate the developed reliability measures using available benchmarks and/or the use-case data and models.
4| Integration: integrate the reliability measures with the use cases.
5| Report: final report.

Conditions

The work will take place at the laboratories of the Adaptive Computing Group (CISUC) at DEI.
This work can be assigned to more than one student.
There is funding available (scholarships) for the students.

Supervisor

Jorge Henriques
jh@dei.uc.pt