Submitted Proposals

DEI - FCTUC
Generated on 2024-07-17 09:36:39 (Europe/Lisbon).

Internship Title

Advancing Lyrics Music Emotion Recognition: Emotion-Centric Feature Extraction and Model Interpretability

Areas of Specialisation

Intelligent Systems

Internship Location

DEI-FCTUC

Background

Digital music plays an increasingly important role in today's music market. In 2016, digital revenues rose to $7.8 billion, accounting for 50% of global recorded music revenues, while the global recorded music market grew by 5.9%.
The amount of digital music is expected to keep growing rapidly. Digital music repositories therefore need more advanced, flexible, and user-friendly search engines adapted to the needs of individual users. This has drawn increasing attention to the field of music information retrieval (MIR). Several companies, including Google, Pandora, Spotify, Apple, Sony, and Philips, have established MIR research agendas, with commercial applications already in place.
Within MIR, Music Emotion Recognition (MER) has emerged as a significant subfield, as searching by emotion is becoming one of the most common ways users query these platforms (e.g., requesting a playlist of songs that convey a certain emotion). This requires platforms to use effective predictive models to correctly classify the enormous amount of new music they receive daily.
The process of building predictive models starts by extracting features from the audio and lyrics of songs and then proceeds to build models on top of these features. It is known that building more effective models requires features that are dedicated from an emotional point of view; otherwise, the effectiveness of the resulting models tends to plateau. In the case of song lyrics, for example, generic features extracted from text in general (e.g., news, prose) are not enough: it is essential to identify and extract features that are specific to the lyrics domain and carry emotional context.
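
To make this concrete, here is a minimal sketch of what such emotion-dedicated lyric features could look like: per-emotion token densities computed from a lexical dictionary of emotional words. The lexicon file name and its word,emotion CSV format are illustrative assumptions, not the project's actual resources.

```python
import csv
from collections import Counter

def load_emotion_lexicon(path):
    """Load a word -> emotion-categories lexicon from a CSV file.
    Assumes two columns per row (word, emotion); this format is an
    illustrative assumption, not the project's actual lexicon."""
    lexicon = {}
    with open(path, newline="", encoding="utf-8") as f:
        for word, emotion in csv.reader(f):
            lexicon.setdefault(word.lower(), set()).add(emotion)
    return lexicon

def emotion_density_features(lyric, lexicon):
    """Fraction of lyric tokens associated with each emotion category."""
    tokens = [t.strip(".,!?;:'\"").lower() for t in lyric.split()]
    tokens = [t for t in tokens if t]
    counts = Counter()
    for token in tokens:
        for emotion in lexicon.get(token, ()):
            counts[emotion] += 1
    total = max(len(tokens), 1)
    return {f"density_{e}": c / total for e, c in counts.items()}
```

A lyric in which, say, 12% of the tokens appear in the lexicon under "sadness" would yield density_sadness = 0.12; such densities can then feed any downstream classifier.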
This project aims to identify and extract new features in a Lyrics-based Music Emotion Recognition (LMER) context in order to build more effective models. Among others, features based on rhymes in verses and lyrics are expected, as well as features based on lexical dictionaries of emotional words. Another objective of the work concerns the interpretability of the models created, i.e., obtaining if-then rules that relate features to specific emotions. The process involves natural language processing techniques, text mining, feature engineering, and applied machine learning.
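
As a toy illustration of the rhyme-based features mentioned above, the sketch below computes the fraction of consecutive line pairs whose final words share a common suffix. The orthographic suffix heuristic is an assumption made for brevity; a real implementation would compare phonetic transcriptions (e.g., from a pronunciation dictionary).

```python
def rhyme_pair_ratio(lyric, suffix_len=3):
    """Share of consecutive line pairs whose last words end with the
    same suffix (a crude orthographic stand-in for phonetic rhyme)."""
    endings = []
    for line in lyric.splitlines():
        words = line.split()
        if not words:
            continue  # skip blank lines between verses
        last = words[-1].strip(".,!?;:'\"").lower()
        if len(last) >= suffix_len:
            endings.append(last[-suffix_len:])
    pairs = list(zip(endings, endings[1:]))
    if not pairs:
        return 0.0
    return sum(a == b for a, b in pairs) / len(pairs)

print(rhyme_pair_ratio("tears fall in the night\nalone without your light"))
# -> 1.0: the only line pair rhymes ("night" / "light")
```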

Objectives

1. Research and development of new emotion-oriented features for static lyrics-based MER (LMER)
2. Research and development of interpretability approaches that allow features to be related to emotions
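
As a minimal illustration of the second objective, the sketch below fits a shallow decision tree on hypothetical lyric features and prints it as nested if-then rules using scikit-learn's export_text. The feature names, the tiny toy dataset, and the choice of scikit-learn are all assumptions for illustration, not the project's prescribed method.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["density_sadness", "density_joy", "rhyme_pair_ratio"]
X = [  # toy feature vectors, one row per song lyric
    [0.12, 0.01, 0.50],
    [0.00, 0.15, 0.25],
    [0.10, 0.02, 0.75],
    [0.01, 0.20, 0.40],
]
y = ["sad", "happy", "sad", "happy"]  # toy emotion labels

# A shallow tree keeps the extracted rules short and human-readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
# Prints nested "if feature <= threshold" branches ending in emotion labels.
```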

Work Plan - Semester 1

1. State-of-the-art review: MER, NLP, feature engineering, and machine learning for text in general and for LMER in particular
2. Evaluation of current algorithms (based on classical natural language processing and machine learning techniques) on the novel datasets developed within the research group (a minimal baseline sketch follows this plan)
3. Research and development of NLP and feature engineering approaches for static LMER (namely structural features)
4. Writing of the Intermediate Report
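
As referenced in item 2, here is a minimal sketch of the kind of classical baseline to be evaluated: TF-IDF n-grams feeding a linear SVM, scored with cross-validation. The four toy lyrics merely stand in for the research group's datasets, which are assumed not to be public.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

lyrics = [  # placeholders for the group's LMER datasets
    "tears fall in the lonely night",
    "dance with me under summer light",
    "cold grey rain on an empty street",
    "joy and laughter every time we meet",
]
labels = ["sad", "happy", "sad", "happy"]

# Classical baseline: TF-IDF unigrams/bigrams + linear SVM.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
scores = cross_val_score(baseline, lyrics, labels, cv=2, scoring="f1_macro")
print(f"macro-F1: {scores.mean():.3f}")
```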

Work Plan - Semester 2

5. Research and development of deep learning (DL) approaches for LMER (e.g., CNNs, transfer learning; see the sketch after this plan)
6. Critical evaluation of the developed DL models
7. Writing of a scientific paper and thesis
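
As referenced in item 5, here is a minimal transfer-learning sketch: a pretrained transformer encoder is loaded with a fresh classification head for a small set of emotion classes, ready to be fine-tuned on lyrics. The checkpoint name and the four-label setup (e.g., Russell's quadrants) are illustrative assumptions, not the project's prescribed architecture.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased"  # assumed checkpoint, not mandated
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=4  # new head; to be fine-tuned on LMER data
)

lyric = "tears fall in the lonely night"
inputs = tokenizer(lyric, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 4): one score per emotion
print(logits.softmax(dim=-1))
```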

Conditions

- Access to a toolbox for LMER (with feature extraction and deep learning modules), developed by our team
- Access to an LMER database, developed by our team
- Access to a server hosted at DEI with 10 high-performance GPUs

Supervisors

Rui Pedro Paiva, Ricardo Malheiro, and Renato Panda
ruipedro@dei.uc.pt