Converting Image Labels to Meaningful and Information-rich Embeddings
A central challenge for the computer vision community is understanding the semantics of an image, which would allow for higher-quality image generation based on existing high-level features and for better analysis of (semi-)labeled datasets. Categorical labels aggregate a huge amount of information into a single binary value, concealing valuable high-level concepts from machine learning models. To address this challenge, this paper introduces Occlusion-based Latent Representations (OLR), a method for converting image labels into meaningful representations that capture a significant amount of data semantics. Besides being information-rich, these representations form a disentangled, low-dimensional latent space in which each image label is encoded as a separate vector. We evaluate the quality of these representations in a series of experiments whose results suggest that the proposed model can capture data concepts and discover data interrelations.

This work has been partly supported by the project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 739578 (RISE – Call: H2020-WIDESPREAD-01-2016-2017-TeamingPhase2) and the Republic of Cyprus through the Deputy Ministry of Research, Innovation and Digital Policy.
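To make the contrast between categorical labels and label embeddings concrete, the following minimal sketch (not the paper's actual OLR training procedure; the vectors here are random stand-ins for learned ones) shows why a one-hot label conceals inter-class relations while a low-dimensional embedding vector per label can express them:

```python
import numpy as np

# Hypothetical illustration: each label is assigned a low-dimensional
# vector; similar classes can then receive similar vectors, exposing
# relations that a one-hot code structurally cannot represent.

rng = np.random.default_rng(0)
num_labels, dim = 5, 3  # 5 classes, 3-D latent space (illustrative sizes)
embeddings = rng.normal(size=(num_labels, dim))  # stand-in for learned vectors

def one_hot(label, n=num_labels):
    """Classic categorical encoding: a single 1 in an n-dimensional vector."""
    v = np.zeros(n)
    v[label] = 1.0
    return v

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# One-hot codes are mutually orthogonal: every pair of distinct labels
# looks equally unrelated.
print(cosine(one_hot(0), one_hot(1)))  # always 0.0

# Embedding vectors carry graded similarity between labels.
print(cosine(embeddings[0], embeddings[1]))
```

In an actual system along the lines described above, the embedding table would be learned jointly with the model rather than sampled at random, so that geometric proximity in the latent space reflects semantic proximity between labels.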