
Attention-based Image Captioning with Keras


In image captioning, an algorithm is given an image and tasked with producing a sensible caption. It's a challenging task for a number of reasons, not the least being that it involves a notion of saliency or relevance. That's why recent deep learning approaches mostly include some "attention" mechanism (sometimes even more than one) to help focus on relevant image features.

In this post, we demonstrate a formulation of image captioning as an encoder-decoder problem, enhanced by spatial attention over image grid cells. The idea comes from a recent paper on Neural Image Caption Generation with Visual Attention (Xu et al. 2015), and employs the same kind of attention algorithm as detailed in our post on machine translation.

We're porting Python code from a recent Google Colaboratory notebook, using Keras with TensorFlow eager execution to simplify our lives.

Prerequisites

The code shown here will work with the current CRAN versions of tensorflow, keras, and tfdatasets.
Check that you're using at least version 1.9 of TensorFlow. If that isn't the case, as of this writing, upgrading TensorFlow will get you version 1.10.
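
A call along these lines, using the tensorflow package's installer, should do:

# install the current TensorFlow release (1.10 as of this writing)
install_tensorflow()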

When loading libraries, please make sure you're executing the first four lines in this exact order.
We need to make sure we're using the TensorFlow implementation of Keras (tf.keras in Python land), and we have to enable eager execution before using TensorFlow in any way.
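
Presumably, the setup looks about like this (the packages beyond the first four lines are the ones we use further below):

library(keras)
use_implementation("tensorflow")
library(tensorflow)
tfe_enable_eager_execution()

# additional packages used below
library(tfdatasets)
library(purrr)
library(stringr)
library(dplyr)
library(rjson)         # fromJSON()
library(rlang)         # enexpr(), used in the debugging helper
library(reticulate)
np <- import("numpy")  # image features get cached to disk as .npy files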

No need to copy-paste any code snippets – you'll find the complete code (in the order required for execution) here: eager-image-captioning.R.

The dataset

MS-COCO ("Common Objects in Context") is one of, maybe the, reference dataset in image captioning (object detection and segmentation, too).
We'll be using the training images and annotations from 2014 – be warned, depending on your location, the download can take a long time.

After unpacking, let's define where the images and captions are.

annotation_file <- "train2014/annotations/captions_train2014.json"
image_path <- "train2014/train2014"

The annotations are in JSON format, and there are 414,113 of them! Luckily for us, we didn't have to download that many images – every image comes with 5 different captions, for better generalizability.

annotations <- fromJSON(file = annotation_file)
annot_captions <- annotations[[4]]

num_captions <- length(annot_captions)

We store both annotations and image paths in lists, for later loading.

all_captions <- vector(mode = "list", length = num_captions)
all_img_names <- vector(mode = "list", length = num_captions)

for (i in seq_len(num_captions)) {
  caption <- paste0("<start> ",
                    annot_captions[[i]][["caption"]],
                    " <end>"
                    )
  image_id <- annot_captions[[i]][["image_id"]]
  full_coco_image_path <- sprintf(
    "%s/COCO_train2014_%012d.jpg",
    image_path,
    image_id
  )
  all_img_names[[i]] <- full_coco_image_path
  all_captions[[i]] <- caption
}

Depending on your computing environment, you will definitely want to restrict the number of examples used.
This post will use 30000 captioned images, chosen randomly, with 20% set aside for validation.

Below, we take a random sample and split it into training and validation parts. The companion code will also store the indices on disk, so you can pick up verification and analysis later.

num_examples <- 30000

random_sample <- sample(1:num_captions, size = num_examples)
train_indices <- sample(random_sample, size = length(random_sample) * 0.8)
validation_indices <- setdiff(random_sample, train_indices)

sample_captions <- all_captions[random_sample]
sample_images <- all_img_names[random_sample]
train_captions <- all_captions[train_indices]
train_images <- all_img_names[train_indices]
validation_captions <- all_captions[validation_indices]
validation_images <- all_img_names[validation_indices]

Interlude

Before really diving into the technical stuff, let's take a moment to reflect on this task.
In typical image-related deep learning walkthroughs, we're used to seeing well-defined problems – even if, in some cases, the solution may be hard. Take, for example, the stereotypical dogs vs. cats problem. Some dogs may look like cats and some cats may look like dogs, but that's about it: all in all, in the usual world we live in, it should be a more or less binary question.

If, on the other hand, we ask people to describe what they see in a scene, we should expect from the outset to get different answers. Still, how much consensus there is will very much depend on the concrete dataset we're using.

Let's take a look at some picks from the very first 20 training items sampled randomly above.

Figure from MS-COCO 2014

Now this image doesn't leave much room for deciding what to focus on, and got a very factual caption indeed: "There is a plate with one slice of bacon a half of orange and bread." If the whole dataset were like this, we'd expect a machine learning algorithm to do pretty well here.

Picking another one from the first 20:

Figure from MS-COCO 2014

What would be salient information to you here? The caption provided goes "A smiling little boy has a checkered shirt."
Is the look of the shirt as important as that? You might just as well focus on the surroundings – or even something on a completely different level: the age of the photo, or it being an analog one.

Let's take a final example.

From MS-COCO 2014

What would you say about this scene? The official label we sampled here is "A group of people posing in a funny way for the camera." Well …

Please don't forget that each image in the dataset comes with 5 different captions (even though our n = 30000 sample probably won't include all of them).
So this is not to say the dataset is biased – not at all. Instead, we want to point out the ambiguities and difficulties inherent in the task. Actually, given these difficulties, it's all the more amazing that the task we're tackling here – having a network automatically generate image captions – should be feasible at all!

Now let's see how we can do this.

For the encoding part of our encoder-decoder network, we will make use of InceptionV3 to extract image features. In principle, which features to extract is up to experimentation – here we just use the last layer before the fully connected top:

image_model <- application_inception_v3(
  include_top = FALSE,
  weights = "imagenet"
)

For an image size of 299×299, the output will be of size (batch_size, 8, 8, 2048); that is, we are making use of 2048 feature maps.
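
To convince yourself of that shape, a quick sanity check in eager mode (not part of the actual pipeline) could look like this:

# feed a single fake image through the feature extractor and inspect the output shape
fake_batch <- k_random_uniform(c(1L, 299L, 299L, 3L))
dim(image_model(fake_batch))  # 1 8 8 2048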

InceptionV3 being a "big model," where every pass through the model takes time, we want to precompute the features in advance and store them on disk.
We'll use tfdatasets to stream images to the model. This means all our preprocessing has to employ TensorFlow functions: that's why we're not using the more familiar image_load from keras below.

Our custom load_image will read in, resize and preprocess the images as required for use with InceptionV3:

load_image <- function(image_path) {
  img <-
    tf$read_file(image_path) %>%
    tf$image$decode_jpeg(channels = 3) %>%
    tf$image$resize_images(c(299L, 299L)) %>%
    tf$keras$applications$inception_v3$preprocess_input()
  list(img, image_path)
}

Now we're ready to save the extracted features to disk. The (batch_size, 8, 8, 2048)-sized features will be flattened to (batch_size, 64, 2048). The latter shape is what our encoder, soon to be discussed, will receive as input.

preencode <- unique(sample_images) %>% unlist() %>% sort()
num_unique <- length(preencode)

# adapt this according to your system's capacities
batch_size_4save <- 1
image_dataset <-
  tensor_slices_dataset(preencode) %>%
  dataset_map(load_image) %>%
  dataset_batch(batch_size_4save)
  
save_iter <- make_iterator_one_shot(image_dataset)

# simple progress counter
save_count <- 0
  
until_out_of_range({
  
  save_count <- save_count + batch_size_4save
  batch_4save <- save_iter$get_next()
  img <- batch_4save[[1]]
  path <- batch_4save[[2]]
  batch_features <- image_model(img)
  batch_features <- tf$reshape(
    batch_features,
    list(dim(batch_features)[1], -1L, dim(batch_features)[4])
  )
  for (i in 1:dim(batch_features)[1]) {
    # np$save appends ".npy" to the image path, so each feature file sits next to its image
    np$save(path[i]$numpy()$decode("utf-8"),
            batch_features[i, , ]$numpy())
  }
    
})

Before we get to the encoder and decoder models though, we need to take care of the captions.

Processing the captions

We're using the keras text_tokenizer and the text processing functions texts_to_sequences and pad_sequences to transform ASCII text into a matrix.

# we will use the 5000 most frequent words only
top_k <- 5000
tokenizer <- text_tokenizer(
  num_words = top_k,
  oov_token = "<unk>",
  filters = '!"#$%&()*+.,-/:;=?@[]^_`~ ')
tokenizer$fit_on_texts(sample_captions)

train_captions_tokenized <-
  tokenizer %>% texts_to_sequences(train_captions)
validation_captions_tokenized <-
  tokenizer %>% texts_to_sequences(validation_captions)

# pad_sequences will use 0 to pad all captions to the same length
tokenizer$word_index["<pad>"] <- 0

# create a lookup dataframe that allows us to go in both directions
word_index_df <- data.frame(
  word = tokenizer$word_index %>% names(),
  index = tokenizer$word_index %>% unlist(use.names = FALSE),
  stringsAsFactors = FALSE
)
word_index_df <- word_index_df %>% arrange(index)

decode_caption <- function(text) {
  paste(map(text, function(number)
    word_index_df %>%
      filter(index == number) %>%
      select(word) %>%
      pull()),
    collapse = " ")
}
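
As a quick round-trip check, something like this should approximately reproduce a training caption, modulo filtered characters and out-of-vocabulary words:

# decode the first tokenized training caption back into words
decode_caption(train_captions_tokenized[[1]])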

# pad all sequences to the same length (the maximum length, in our case)
# could experiment with shorter padding (truncating the very longest captions)
caption_lengths <- map(
  all_captions[1:num_examples],
  function(c) str_split(c, " ")[[1]] %>% length()
  ) %>% unlist()
max_length <- fivenum(caption_lengths)[5]

train_captions_padded <-  pad_sequences(
  train_captions_tokenized,
  maxlen = max_length,
  padding = "post",
  truncating = "post"
)

validation_captions_padded <- pad_sequences(
  validation_captions_tokenized,
  maxlen = max_length,
  padding = "post",
  truncating = "post"
)

Loading the data for training

Now that we've taken care of pre-extracting the features and preprocessing the captions, we need a way to stream them to our captioning model. For that, we're using tensor_slices_dataset from tfdatasets, passing in the list of paths to the images and the preprocessed captions. Loading the images is then performed as a TensorFlow graph operation (using tf$py_func).

The original Colab code also shuffles the data on every iteration. Depending on your hardware, this may take a long time, and given the size of the dataset it is not strictly necessary for getting reasonable results. (The results reported below were obtained without shuffling.)

batch_size <- 10
buffer_size <- num_examples

map_func <- function(img_name, cap) {
  p <- paste0(img_name$decode("utf-8"), ".npy")
  img_tensor <- np$load(p)
  img_tensor <- tf$cast(img_tensor, tf$float32)
  list(img_tensor, cap)
}

train_dataset <-
  tensor_slices_dataset(list(train_images, train_captions_padded)) %>%
  dataset_map(
    function(item1, item2) tf$py_func(map_func, list(item1, item2), list(tf$float32, tf$int32))
  ) %>%
  # optionally shuffle the dataset
  # dataset_shuffle(buffer_size) %>%
  dataset_batch(batch_size)

Captioning model

The model is basically the same as the one discussed in the machine translation post. Please refer to that article for an explanation of the concepts, as well as a detailed walk-through of the tensor shapes involved at every step. Here, we provide the tensor shapes as comments in the code snippets, for quick overview and comparison.

However, if you develop your own models, with eager execution you can simply insert debugging/logging statements at arbitrary places in the code – even in model definitions. So you can have a function

maybecat <- function(context, x) {
  if (debugshapes) {
    # enexpr() (from rlang) captures the name of the object passed in
    name <- enexpr(x)
    dims <- paste0(dim(x), collapse = " ")
    cat(context, ": shape of ", name, ": ", dims, "\n", sep = "")
  }
}

And if you now set
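
debugshapes <- TRUE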

you can trace – not only tensor shapes, but actual tensor values – through your models, as shown below for the encoder. (We don't display any debugging statements after that, but the sample code has many more.)

Encoder

Now it's time to define some sizing-related hyperparameters and housekeeping variables:

# for encoder output
embedding_dim <- 256
# decoder (GRU) capacity
gru_units <- 512
# for decoder output
vocab_size <- top_k
# number of feature maps obtained from Inception V3
features_shape <- 2048
# shape of attention features (flattened from 8x8)
attention_features_shape <- 64

The encoder in this case is just a fully connected layer that takes the features extracted from Inception V3 (in flattened form, as they were written to disk) and embeds them in 256-dimensional space.

cnn_encoder <- function(embedding_dim, name = NULL) {
    
  keras_model_custom(name = name, function(self) {
      
    self$fc <- layer_dense(units = embedding_dim, activation = "relu")
      
    function(x, mask = NULL) {
      # input shape: (batch_size, 64, features_shape)
      maybecat("encoder input", x)
      # shape after fc: (batch_size, 64, embedding_dim)
      x <- self$fc(x)
      maybecat("encoder output", x)
      x
    }
  })
}

Attention module

Unlike in the machine translation post, here the attention module is separated out into its own custom model.
The logic is the same though:

attention_module <- function(gru_units, name = NULL) {
  
  keras_model_custom(name = name, function(self) {
    
    self$W1 = layer_dense(units = gru_units)
    self$W2 = layer_dense(units = gru_units)
    self$V = layer_dense(units = 1)
      
    function(inputs, mask = NULL) {
      features <- inputs[[1]]
      hidden <- inputs[[2]]
      # features (CNN encoder output) shape == (batch_size, 64, embedding_dim)
      # hidden shape == (batch_size, gru_units)
      # hidden_with_time_axis shape == (batch_size, 1, gru_units)
      hidden_with_time_axis <- k_expand_dims(hidden, axis = 2)
        
      # score shape == (batch_size, 64, 1)
      score <- self$V(k_tanh(self$W1(features) + self$W2(hidden_with_time_axis)))
      # attention_weights shape == (batch_size, 64, 1)
      attention_weights <- k_softmax(score, axis = 2)
      # context_vector shape after sum == (batch_size, embedding_dim)
      context_vector <- k_sum(attention_weights * features, axis = 2)
        
      list(context_vector, attention_weights)
    }
  })
}

Decoder

At each time step, the decoder calls the attention module with the features it got from the encoder and its last hidden state, and receives back an attention vector. The attention vector gets concatenated with the current input and further processed by a GRU and two fully connected layers, the last of which gives us the (unnormalized) probabilities for the next word in the caption.

The current input at each time step here is the previous word: the correct one during training (teacher forcing), the last generated one during inference.

rnn_decoder <- function(embedding_dim, gru_units, vocab_size, name = NULL) {
    
  keras_model_custom(name = name, function(self) {
      
    self$gru_units <- gru_units
    self$embedding <- layer_embedding(input_dim = vocab_size, 
                                      output_dim = embedding_dim)
    self$gru <- if (tf$test$is_gpu_available()) {
      layer_cudnn_gru(
        units = gru_units,
        return_sequences = TRUE,
        return_state = TRUE,
        recurrent_initializer = 'glorot_uniform'
      )
    } else {
      layer_gru(
        units = gru_units,
        return_sequences = TRUE,
        return_state = TRUE,
        recurrent_initializer = 'glorot_uniform'
      )
    }
      
    self$fc1 <- layer_dense(units = self$gru_units)
    self$fc2 <- layer_dense(units = vocab_size)
      
    self$attention <- attention_module(self$gru_units)
      
    function(inputs, mask = NULL) {
      x <- inputs[[1]]
      features <- inputs[[2]]
      hidden <- inputs[[3]]
        
      c(context_vector, attention_weights) %<-% 
        self$attention(list(features, hidden))
        
      # x shape after passing through embedding == (batch_size, 1, embedding_dim)
      x <- self$embedding(x)
        
      # x shape after concatenation == (batch_size, 1, 2 * embedding_dim)
      x <- k_concatenate(list(k_expand_dims(context_vector, 2), x))
        
      # passing the concatenated vector to the GRU
      c(output, state) %<-% self$gru(x)
        
      # shape == (batch_size, 1, gru_units)
      x <- self$fc1(output)
        
      # x shape == (batch_size, gru_units)
      x <- k_reshape(x, c(-1, dim(x)[[3]]))
        
      # output shape == (batch_size, vocab_size)
      x <- self$fc2(x)
        
      list(x, state, attention_weights)
        
    }
  })
}

Loss function, and instantiating it all

Now that we've defined our model (built up of three custom models), we still need to actually instantiate it (to be precise: the two classes we will access from outside, that is, the encoder and the decoder).

We also need to instantiate an optimizer (Adam will do), and define our loss function (categorical crossentropy).
Note that tf$nn$sparse_softmax_cross_entropy_with_logits expects raw logits instead of softmax activations, and that we're using the sparse variant because our labels are not one-hot-encoded.

encoder <- cnn_encoder(embedding_dim)
decoder <- rnn_decoder(embedding_dim, gru_units, vocab_size)

optimizer <- tf$train$AdamOptimizer()

cx_loss <- function(y_true, y_pred) {
  mask <- 1 - k_cast(y_true == 0L, dtype = "float32")
  loss <- tf$nn$sparse_softmax_cross_entropy_with_logits(
    labels = y_true,
    logits = y_pred
  ) * mask
  tf$reduce_mean(loss)
}

Training

Training the captioning model is a time-consuming process, and you will definitely want to save the model's weights!
How does this work with eager execution?

We create a tf$train$Checkpoint object, passing it the objects to be saved: in our case, the encoder, the decoder, and the optimizer. Later, at the end of each epoch, we will ask it to write the respective weights to disk.

restore_checkpoint <- FALSE

checkpoint_dir <- "./checkpoints_captions"
checkpoint_prefix <- file.path(checkpoint_dir, "ckpt")
checkpoint <- tf$train$Checkpoint(
  optimizer = optimizer,
  encoder = encoder,
  decoder = decoder
)

As we're only just starting to train the model, restore_checkpoint is set to FALSE. Later, restoring the weights will be as easy as

if (restore_checkpoint) {
  checkpoint$restore(tf$train$latest_checkpoint(checkpoint_dir))
}

The training loop is structured just like in the machine translation case: we loop over epochs, batches, and the training targets, feeding in the correct previous word at every timestep.
Again, tf$GradientTape takes care of recording the forward pass and calculating the gradients, and the optimizer applies the gradients to the model's weights.
As each epoch ends, we also save the weights.

num_epochs <- 20

if (!restore_checkpoint) {
  for (epoch in seq_len(num_epochs)) {
    
    total_loss <- 0
    progress <- 0
    train_iter <- make_iterator_one_shot(train_dataset)
    
    until_out_of_range({
      
      batch <- iterator_get_next(train_iter)
      loss <- 0
      img_tensor <- batch[[1]]
      target_caption <- batch[[2]]
      
      dec_hidden <- k_zeros(c(batch_size, gru_units))
      
      dec_input <- k_expand_dims(
        rep(list(word_index_df[word_index_df$word == "<start>", "index"]), 
            batch_size)
      )
      
      with(tf$GradientTape() %as% tape, {
        
        features <- encoder(img_tensor)
        
        for (t in seq_len(dim(target_caption)[2] - 1)) {
          c(preds, dec_hidden, weights) %<-%
            decoder(list(dec_input, features, dec_hidden))
          loss <- loss + cx_loss(target_caption[, t], preds)
          dec_input <- k_expand_dims(target_caption[, t])
        }
        
      })
      
      total_loss <-
        total_loss + loss / k_cast_to_floatx(dim(target_caption)[2])
      
      variables <- c(encoder$variables, decoder$variables)
      gradients <- tape$gradient(loss, variables)
      
      optimizer$apply_gradients(purrr::transpose(list(gradients, variables)),
                                global_step = tf$train$get_or_create_global_step()
      )
    })
    cat(paste0(
      "\n\nTotal loss (epoch ",
      epoch,
      "): ",
      (total_loss / k_cast_to_floatx(buffer_size)) %>% as.double() %>% round(4),
      "\n"
    ))
    
    checkpoint$save(file_prefix = checkpoint_prefix)
  }
}

Peeking at results

Just like in the translation case, it's interesting to look at model performance during training. The companion code has that functionality built in, so you can watch the model's progress for yourself.

The basic function here is get_caption: it gets passed the path to an image, loads it, obtains its features from Inception V3, and then asks the encoder-decoder model to generate a caption. If at any point the model produces the end symbol, we stop early. Otherwise, we continue until we hit the predefined maximum length.
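
The companion code contains the full version (which also collects the attention weights for plotting). A condensed sketch, assuming the objects defined above and using greedy decoding – the companion code may well sample from the predicted distribution instead – could look like this:

get_caption <- function(image_path) {
  # extract InceptionV3 features for this single image: shape (1, 64, 2048)
  c(img, path) %<-% load_image(image_path)
  img_features <- image_model(k_expand_dims(img, axis = 1))
  img_features <- tf$reshape(img_features, list(1L, -1L, 2048L))

  features <- encoder(img_features)
  dec_hidden <- k_zeros(c(1, gru_units))
  start_index <- word_index_df[word_index_df$word == "<start>", "index"]
  dec_input <- k_expand_dims(list(start_index))

  result <- c()
  for (t in seq_len(max_length - 1)) {
    c(preds, dec_hidden, attention_weights) %<-%
      decoder(list(dec_input, features, dec_hidden))
    # greedy decoding: the position of the largest logit (0-based) is the token index
    pred_idx <- which.max(as.numeric(preds$numpy())) - 1
    pred_word <- word_index_df[word_index_df$index == pred_idx, "word"]
    if (length(pred_word) == 0 || pred_word == "<end>") break
    result <- c(result, pred_word)
    dec_input <- k_expand_dims(list(pred_idx))
  }
  paste(result, collapse = " ")
}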

Attention over spatial grid cells, as used here, is just one way of bringing attention to bear on the captioning task. For example, Anderson et al. (2017) use object detection techniques to isolate interesting objects bottom-up, and an LSTM stack wherein the first LSTM computes top-down attention guided by the output word generated by the second.

Another interesting approach involving attention is the multimodal attentive translator (Liu et al. 2017), where the image features are encoded and presented in a sequence, such that we end up with sequence models both on the encoding and the decoding sides.

Yet another alternative is to add a learned topic to the information input (Zhu, Xue, and Yuan 2018), which again is a top-down feature found in human cognition.

If you find one of these, or yet another, approach more convincing, an eager execution implementation in the style of the above will likely be a sound way of implementing it.

Anderson, Peter, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2017. "Bottom-up and Top-down Attention for Image Captioning and VQA." CoRR abs/1707.07998. http://arxiv.org/abs/1707.07998.
Liu, Chang, Fuchun Sun, Changhu Wang, Feng Wang, and Alan L. Yuille. 2017. "A Multimodal Attentive Translator for Image Captioning." CoRR abs/1702.05658. http://arxiv.org/abs/1702.05658.
Xu, Kelvin, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention." CoRR abs/1502.03044. http://arxiv.org/abs/1502.03044.
Zhu, Zhihao, Zhan Xue, and Zejian Yuan. 2018. "A Topic-Guided Attention for Image Captioning." CoRR abs/1807.03514v1. https://arxiv.org/abs/1807.03514v1.

