Image Captioning with Sentiment Terms via Weakly-Supervised Sentiment Dataset

Abstract

The image captioning task has become a highly competitive research area following the successful application of convolutional and recurrent neural networks, especially the long short-term memory (LSTM) architecture. However, its primary focus has been the factual description of images, covering objects, movements, and their relations. While this focus has demonstrated competence, describing images with non-factual elements, namely sentiments expressed via adjectives, has mostly been neglected. We attempt to address this issue by fine-tuning an additional convolutional neural network devoted solely to sentiment, where the sentiment dataset is built with a data-driven, multi-label approach. Our experimental results show that our method can generate image captions with sentiment terms that are more compatible with the images than captions relying solely on features devoted to object classification, while preserving the semantics.

Andrew Shin, Yoshitaka Ushiku, Tatsuya Harada, "Image Captioning with Sentiment Terms via Weakly-Supervised Sentiment Dataset," BMVC 2016.
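
The sentiment dataset is built in a weakly-supervised, multi-label manner; the "Image URLs and comments" download below suggests that labels are derived from text associated with each image. The sketch below is only an illustration of how such weak multi-label assignment could look in principle: the sentiment classes, keyword sets, and matching rule are assumptions for the example, not the paper's exact procedure.

    # Hypothetical sketch of weak multi-label assignment: an image receives every
    # sentiment class whose (assumed) keyword set appears in its user comments.
    # The vocabulary below is illustrative only, not the dataset's actual classes.
    SENTIMENT_KEYWORDS = {
        "beautiful": {"beautiful", "gorgeous", "stunning"},
        "scary":     {"scary", "creepy", "terrifying"},
        "funny":     {"funny", "hilarious", "amusing"},
    }

    def weak_labels(comments):
        """Return the set of sentiment labels triggered by a list of comments."""
        tokens = {w.strip(".,!?").lower() for c in comments for w in c.split()}
        return {label for label, keywords in SENTIMENT_KEYWORDS.items()
                if tokens & keywords}

    # Example: an image with these two comments receives two labels.
    print(weak_labels(["What a gorgeous sunset!", "Kind of creepy clouds though"]))
    # -> {'beautiful', 'scary'} (set order may vary)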

Downloads

Since our dataset is very large (~1M images), we recommend sampling a subset.

  • VGG-19 fine-tuned .caffemodel (a minimal loading sketch follows this list)
  • VGG-19 .prototxt for fine-tuned model
  • Sentiment Dataset (original size) (coming soon)
  • Sentiment Dataset (size reduced to 256x256) (13 GB)
  • Classes and label numbers
  • Filenames with assigned labels
  • Image URLs and comments
  • Contact: andrew (at) mi.t.u-tokyo.ac.jp
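
The following is a minimal sketch (not the authors' code) of running the released fine-tuned VGG-19 sentiment model with pycaffe. The file names, the input image, and the output blob name 'prob' are assumptions; adjust them to match the downloaded .prototxt and the "Classes and label numbers" file.

    import numpy as np
    import caffe

    caffe.set_mode_cpu()
    net = caffe.Net('vgg19_sentiment_deploy.prototxt',   # assumed file name
                    'vgg19_sentiment.caffemodel',        # assumed file name
                    caffe.TEST)

    # Standard VGG preprocessing: BGR channel order, per-channel mean
    # subtraction, 224x224 input.
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))
    transformer.set_mean('data', np.array([103.939, 116.779, 123.68]))
    transformer.set_raw_scale('data', 255)
    transformer.set_channel_swap('data', (2, 1, 0))

    img = caffe.io.load_image('example.jpg')             # assumed input image
    net.blobs['data'].reshape(1, 3, 224, 224)
    net.blobs['data'].data[...] = transformer.preprocess('data', img)
    out = net.forward()

    # Print the indices of the top-5 sentiment classes; map them to names
    # using the "Classes and label numbers" download above.
    scores = out['prob'][0]                              # 'prob' is an assumed blob name
    print(np.argsort(scores)[::-1][:5])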