Using Images to Find Context-Independent Word Representations in Vector Space
Harsh Kumar
Abstract
Many methods have been proposed to find vector representations of words, but most rely on capturing context from text to establish semantic relationships between the resulting vectors. We propose a novel method that uses dictionary meanings and image depictions to find word vectors independent of any context. We train an autoencoder on the word images to learn meaningful representations, which we then use to compute the word vectors. We evaluate our method on word similarity, concept categorization, and outlier detection tasks. Our method performs comparably to context-based methods while requiring far less training time.
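The abstract's core pipeline (encode each word's image with an autoencoder, take the bottleneck activation as that word's vector) can be sketched as below. This is a minimal illustrative sketch with assumed details: a linear autoencoder, a toy 16-pixel "image" per word, and an 8-dimensional bottleneck; the paper's actual architecture and image preprocessing are not specified in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(images, dim=8, lr=0.01, epochs=200):
    """Train a linear autoencoder on flattened word images.

    images: array of shape (n_words, n_pixels), one row per word.
    Returns the encoder weights and the bottleneck codes, which serve
    as the context-independent word vectors.
    """
    n, d = images.shape
    W = rng.normal(scale=0.1, size=(d, dim))   # encoder weights
    V = rng.normal(scale=0.1, size=(dim, d))   # decoder weights
    for _ in range(epochs):
        Z = images @ W              # encode: bottleneck codes
        X_hat = Z @ V               # decode: reconstruction
        err = X_hat - images
        # gradients of mean squared reconstruction error
        gV = Z.T @ err / n
        gW = images.T @ (err @ V.T) / n
        V -= lr * gV
        W -= lr * gW
    return W, images @ W

# Toy "word images": 5 hypothetical words, each a 16-pixel depiction.
images = rng.random((5, 16))
_, vectors = train_autoencoder(images)

def cosine(u, v):
    """Cosine similarity, as used in word-similarity evaluation."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Once the vectors are extracted, the evaluation tasks mentioned in the abstract (word similarity, outlier detection) reduce to comparing them with a similarity measure such as `cosine` above.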