Assigned Proposals 2023/2024

DEI - FCTUC
Generated on 2024-05-17 12:17:02 (Europe/Lisbon).

Internship Title

Trustworthiness in Collaborative Artificial Intelligence

Areas of Specialisation

Intelligent Systems

Internship Location

Centre for Informatics and Systems of the University of Coimbra (CISUC), at the Department of Informatics Engineering of the University of Coimbra

Background

Multiagent systems (MAS) are systems composed of multiple autonomous agents that interact with each other and with their environment. These agents can be computational entities, robots, humans, or a combination thereof. As a field of study, MAS focuses on understanding and designing systems in which multiple agents work together, compete, negotiate, and cooperate to achieve individual and collective goals.

Collaboration refers to agents working together towards a shared goal or solving a problem collectively. They can exchange information, coordinate their actions, and share resources to achieve better outcomes than individual agents acting independently. Collaboration involves communication, coordination, and synergy among agents.

Collaborative AI enables collaboration and interaction among agents. It enhances a multiagent system's capabilities and effectiveness by allowing agents to work together, handle complex tasks, and leverage collective intelligence for better outcomes.

One form of collaborative AI refers to the integration of artificial intelligence (AI) systems and human intelligence to work together towards a common goal. It combines the strengths and capabilities of both AI and human agents, leveraging their complementary abilities to achieve superior performance. This particular form of collaborative AI recognizes that while AI systems excel at processing large amounts of data, making fast computations, and identifying patterns, they may lack certain human qualities such as common sense reasoning, creativity, and ethical judgment. Humans, on the other hand, possess rich contextual knowledge, intuition, and social intelligence, but can be limited by cognitive biases and by their capacity to handle massive data sets. By integrating AI and human agents in a collaborative framework, the aim is to achieve outcomes that are more accurate, robust, and aligned with human values. Thus, depending on where control and decision-making power lie, collaborative AI can manifest in various ways, such as the following.

Human-in-the-loop (HitL) is a collaborative AI approach where human input and oversight are integrated into the AI system's pipeline, especially in the decision-making process. The AI system performs automated tasks, but humans are involved at critical points to provide guidance, review and validate results, and make decisions based on their expertise and judgment. This helps ensure that the AI system's outputs are reliable, ethical, and aligned with human expectations.
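As an illustration, the following is a minimal sketch of one common HitL pattern, confidence-based routing, in which predictions below a confidence threshold are deferred to a human reviewer. The class name, the threshold value, and the `ask_human` callback are hypothetical placeholders, not elements prescribed by this proposal.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HitlRouter:
    """Route low-confidence predictions to a human reviewer (illustrative sketch)."""
    predict_proba: Callable[[object], dict]   # item -> {label: probability}
    ask_human: Callable[[object], str]        # item -> label from a human expert
    threshold: float = 0.8                    # below this confidence, defer to the human

    def decide(self, item) -> tuple[str, str]:
        probs = self.predict_proba(item)
        label, confidence = max(probs.items(), key=lambda kv: kv[1])
        if confidence >= self.threshold:
            return label, "machine"           # the AI system decides autonomously
        return self.ask_human(item), "human"  # the human makes the final call
```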

At the opposite end of the spectrum from HitL is machine-in-the-loop, in which the machine enters the pipeline of humans. This is a kind of augmented intelligence, also known as intelligence amplification, which focuses on enhancing human intelligence with AI technologies. AI systems assist humans in complex tasks, providing data analysis, recommendations, or automation to support decision-making. The human remains in control and leverages the AI system as a tool for greater efficiency and effectiveness.

In the middle of the spectrum there is room for negotiation and democratic involvement of machines and humans. Mixed decisions are taken through processes with a flat architecture, in which neither party's control prevails; there is a kind of equilibrium in decision-making power.

Collaborative AI recognizes the importance of human expertise and judgment, and seeks to create synergistic partnerships between AI and human agents to achieve more effective, trustworthy, and beneficial outcomes.

Active learning, a form of HitL, is a machine learning technique that aims to improve model performance and reduce labeling efforts by selectively and strategically choosing the most informative instances for which to request human annotation. In traditional supervised learning, a large labeled dataset is used to train a model. However, acquiring labeled data can be costly and time-consuming, especially when experts are needed for annotation. Active learning addresses this challenge by actively selecting data samples that are likely to be the most valuable or uncertain for model training.
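For concreteness, the following is a minimal sketch of pool-based active learning with uncertainty sampling, assuming scikit-learn and a synthetic dataset; the seed-set size, query budget, and entropy criterion are illustrative choices, and the human annotation step is simulated with the known labels.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

labeled = list(rng.choice(len(X), size=20, replace=False))  # small labeled seed set
pool = [i for i in range(len(X)) if i not in labeled]       # unlabeled pool

model = LogisticRegression(max_iter=1000)
for _ in range(10):                       # 10 query rounds (illustrative budget)
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # Entropy of the predictive distribution: higher means more uncertain.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    query = pool[int(entropy.argmax())]   # most informative instance in the pool
    labeled.append(query)                 # a human annotator would label it here
    pool.remove(query)
```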

The problem of delegation to humans in active learning refers to the challenges and considerations associated with relying on human annotators to provide accurate and trustworthy labels for the selected instances in the active learning process. While active learning can reduce labeling efforts, it introduces a dependency on human input, making the reliability and trustworthiness of human annotators crucial.

Several factors come into play when delegating labeling tasks to humans in active learning. Among them are human expertise and past performance evaluation: the agent can learn from previous interactions and observe the accuracy and reliability of different annotators. By considering annotator performance history, the agent can identify the most reliable and knowledgeable annotators for specific types of instances.
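One simple way to operationalise performance history, sketched below, is a Beta-Bernoulli trust model in which each annotator's reliability is the posterior mean of their verified labeling accuracy. The class, its uniform Beta(1, 1) prior, and the selection rule are illustrative assumptions, not a method prescribed by the proposal.

```python
from collections import defaultdict

class AnnotatorTrust:
    """Track each annotator's labeling accuracy under a Beta(1, 1) prior (sketch)."""

    def __init__(self):
        self.correct = defaultdict(int)
        self.wrong = defaultdict(int)

    def update(self, annotator: str, was_correct: bool) -> None:
        # Record the outcome of one annotation that was later verified.
        if was_correct:
            self.correct[annotator] += 1
        else:
            self.wrong[annotator] += 1

    def reliability(self, annotator: str) -> float:
        # Posterior mean of a Beta-Bernoulli model of annotation accuracy.
        a = 1 + self.correct[annotator]
        b = 1 + self.wrong[annotator]
        return a / (a + b)

    def best(self, candidates: list[str]) -> str:
        # Delegate the next labeling task to the most reliable candidate.
        return max(candidates, key=self.reliability)
```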

Objective

The aim of this master's thesis is to investigate and address the challenges associated with delegating labeling tasks to human annotators in collaborative AI, with special focus on the HitL active learning process. This includes considering factors such as human expertise and past performance evaluation to ensure accurate and trustworthy annotations. Based on this research aim, the following research objectives can be identified:

1. Addressing challenges in delegation to human annotators:

- Investigating factors such as human expertise: The objective is to explore the role of human expertise in the active learning process. This includes understanding the domain knowledge and specific expertise of different human annotators. By considering the annotators' expertise, the research aims to develop approaches that assign labeling tasks to annotators who possess the relevant knowledge and skills for accurate annotations.
- Examining past performance evaluation: The objective is to analyze and learn from the past interactions and performance of different annotators. By observing the accuracy and reliability of annotators in previous labeling tasks, the research aims to develop methods to assess and evaluate annotator performance. This information can then be used to identify the most reliable and knowledgeable annotators for specific types of instances in future active learning scenarios.

2. Enhancing reliability and trustworthiness of human annotators:

- Developing evaluation mechanisms: The objective is to design mechanisms for evaluating the reliability and trustworthiness of human annotators. This involves developing metrics and criteria to assess the quality of the annotations provided by annotators (a minimal sketch follows this list). By establishing evaluation mechanisms, the research aims to ensure that the annotations obtained from human annotators are accurate and reliable.
- Incorporating feedback and iterative learning: The objective is to enable a feedback loop between the AI system and human annotators. This includes mechanisms to incorporate feedback from annotators into the active learning process, allowing the system to learn from previous interactions and improve over time. By iteratively refining the annotation process based on feedback, the research aims to enhance the reliability and trustworthiness of human annotators in active learning.
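As a concrete starting point for such evaluation mechanisms, the sketch below scores an annotator against a small set of gold-standard instances using accuracy and Cohen's kappa, which corrects for chance agreement. The use of scikit-learn, of seeded gold instances, and of these particular metrics is an illustrative assumption.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

def score_annotator(gold_labels, annotator_labels):
    """Score one annotator against gold-standard labels (illustrative metrics)."""
    return {
        "accuracy": accuracy_score(gold_labels, annotator_labels),
        # Kappa discounts the agreement expected by chance alone.
        "kappa": cohen_kappa_score(gold_labels, annotator_labels),
    }

# Hypothetical usage: three gold instances seeded into an annotator's task queue.
gold = ["cat", "dog", "cat"]
print(score_annotator(gold, ["cat", "dog", "dog"]))
```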


These research objectives focus on addressing the challenges and considerations associated with delegating labeling tasks to human annotators in active learning. By exploring factors such as human expertise and past performance evaluation, and by developing evaluation mechanisms, the aim is to improve the overall reliability and trustworthiness of human annotators and to ensure accurate annotations in the active learning pipeline.

Work Plan - Semester 1

1- State of the art [Sept – Oct]
2- Problem statement, research aims and objectives [Nov]
3- Design and first implementation of the AI system [Nov – Jan]
4- Thesis proposal writing [Dec – Jan]

Work Plan - Semester 2

5- Improvement of the AI system [Feb – Apr]
6- Experimental Tests [Apr – May]
7- Paper writing [May – Jun]
8- Thesis writing [Jan – Jul]

Conditions

The work should take place at the Centre for Informatics and Systems of the University of Coimbra (CISUC), at the Department of Informatics Engineering of the University of Coimbra, within the scope of the NextGenAI - Center for Responsible AI research project (https://www.cisuc.uc.pt/en/projects/NextGenAI%20-%20Centre%20For%20Responsible%20AI).

A scholarship of 930.98 euros per month is foreseen for 6 months. The attribution of the scholarship is subject to a public application process.

The candidate must have a very good background in Artificial Intelligence, especially in the areas of Artificial Intelligence within which the internship falls.

Remarks

Advisors:
Luís Macedo, Amílcar Cardoso

Supervisor

Luís Macedo
macedo@dei.uc.pt