Japanese SimCSE Technical Report
Hayato Tsukagoshi, Ryohei Sasano, Koichi Takeda
Abstract
We report the development of Japanese SimCSE, a set of Japanese sentence embedding models fine-tuned with SimCSE. Because Japanese has lacked sentence embedding models that can serve as baselines in sentence embedding research, we conducted extensive experiments on Japanese sentence embeddings involving 24 pre-trained Japanese or multilingual language models, five supervised datasets, and four unsupervised datasets. In this report, we provide the detailed training setup of the Japanese SimCSE models and their evaluation results.
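For readers unfamiliar with SimCSE, the following is a minimal, framework-free sketch of its contrastive objective (not the authors' training code): each sentence is encoded twice with different dropout masks, the resulting pair is treated as a positive, and all other sentences in the batch serve as in-batch negatives under a temperature-scaled cosine-similarity cross-entropy loss. The function and variable names here are illustrative, not taken from the report.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def simcse_loss(emb_a, emb_b, temperature=0.05):
    """Unsupervised SimCSE objective (sketch).

    emb_a[i] and emb_b[i] are two embeddings of the same sentence,
    obtained with different dropout masks; emb_b[j] (j != i) act as
    in-batch negatives. Returns the mean negative log-likelihood of
    each positive pair under a softmax over similarities.
    """
    n = len(emb_a)
    losses = []
    for i in range(n):
        sims = [cosine(emb_a[i], emb_b[j]) / temperature for j in range(n)]
        log_z = math.log(sum(math.exp(s) for s in sims))
        losses.append(log_z - sims[i])  # -log softmax of the positive pair
    return sum(losses) / n
```

In practice this is computed in batch with a neural encoder (e.g. a BERT-style model) producing the embeddings; the supervised SimCSE variant instead takes positives (and hard negatives) from labeled NLI pairs rather than dropout noise.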