Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multi-Task Learning with LLMs for Implicit Sentiment Analysis: Data-level and Task-level Automatic Weight Learning

Wenna Lai, Haoran Xie, Guandong Xu, Qing Li

2024-12-12 · Sentiment Analysis · Hallucination · Aspect-Based Sentiment Analysis (ABSA) · Multi-Task Learning

Abstract

Implicit sentiment analysis (ISA) presents significant challenges due to the absence of salient cue words. Previous methods have struggled with insufficient data and limited reasoning capabilities to infer underlying opinions. Integrating multi-task learning (MTL) with large language models (LLMs) offers the potential to enable models of varying sizes to reliably perceive and recognize genuine opinions in ISA. However, existing MTL approaches are constrained by two sources of uncertainty: data-level uncertainty, arising from hallucination problems in LLM-generated contextual information, and task-level uncertainty, stemming from the varying capacities of models to process contextual information. To handle these uncertainties, we introduce MT-ISA, a novel MTL framework that enhances ISA by leveraging the generation and reasoning capabilities of LLMs through automatic MTL. Specifically, MT-ISA constructs auxiliary tasks using generative LLMs to supplement sentiment elements and incorporates automatic MTL to fully exploit auxiliary data. We introduce data-level and task-level automatic weight learning (AWL), which dynamically identifies relationships and prioritizes more reliable data and critical tasks, enabling models of varying sizes to adaptively learn fine-grained weights based on their reasoning capabilities. We investigate three strategies for data-level AWL, while also introducing homoscedastic uncertainty for task-level AWL. Extensive experiments reveal that models of varying sizes achieve an optimal balance between primary prediction and auxiliary tasks in MT-ISA. This underscores the effectiveness and adaptability of our approach.
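The task-level AWL described above builds on homoscedastic uncertainty weighting for multi-task losses. As a minimal sketch (not the authors' implementation), each task t carries a learnable log-variance s_t, and the combined objective is sum_t exp(-s_t) * L_t + s_t, so tasks the model finds noisier are automatically down-weighted while the additive s_t term penalizes unbounded growth of the variances:

```python
import math

def homoscedastic_total_loss(task_losses, log_vars):
    """Combine per-task losses via homoscedastic-uncertainty weighting:
    total = sum_t exp(-s_t) * L_t + s_t, where s_t is a learnable
    log-variance for task t. A larger s_t (noisier task) shrinks the
    weight exp(-s_t) on L_t; the +s_t term acts as a regularizer so
    the model cannot trivially inflate all variances.
    `task_losses` and `log_vars` are parallel sequences of floats;
    in training, log_vars would be trainable parameters."""
    return sum(math.exp(-s) * loss + s for loss, s in zip(task_losses, log_vars))

# With all log-variances at 0, every task gets unit weight:
# homoscedastic_total_loss([1.0, 2.0], [0.0, 0.0]) == 3.0
```

In a real training loop the `log_vars` would be optimizer-updated parameters (e.g. a `torch.nn.Parameter` vector), letting models of different capacities settle on different task weightings, which matches the adaptivity claim in the abstract.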

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Sentiment Analysis | SemEval | F1-score | 0.9268 | MT-ISA (Ours) |
| Sentiment Analysis | SemEval-2014 Task 4 | Laptop (Acc) | 85.74 | MT-ISA |
| Sentiment Analysis | SemEval-2014 Task 4 | Mean Acc (Restaurant + Laptop) | 89.21 | MT-ISA |
| Sentiment Analysis | SemEval-2014 Task 4 | Restaurant (Acc) | 92.68 | MT-ISA |
| Aspect-Based Sentiment Analysis (ABSA) | SemEval-2014 Task 4 | Laptop (Acc) | 85.74 | MT-ISA |
| Aspect-Based Sentiment Analysis (ABSA) | SemEval-2014 Task 4 | Mean Acc (Restaurant + Laptop) | 89.21 | MT-ISA |
| Aspect-Based Sentiment Analysis (ABSA) | SemEval-2014 Task 4 | Restaurant (Acc) | 92.68 | MT-ISA |

Related Papers

- AdaptiSent: Context-Aware Adaptive Attention for Multimodal Aspect-Based Sentiment Analysis (2025-07-17)
- SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
- Mitigating Object Hallucinations via Sentence-Level Early Intervention (2025-07-16)
- AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles (2025-07-15)
- DCR: Quantifying Data Contamination in LLMs Evaluation (2025-07-15)
- Robust-Multi-Task Gradient Boosting (2025-07-15)
- SentiDrop: A Multi Modal Machine Learning model for Predicting Dropout in Distance Learning (2025-07-14)
- ByDeWay: Boost Your multimodal LLM with DEpth prompting in a Training-Free Way (2025-07-11)