In image captioning, an algorithm is given an image and tasked with producing a sensible caption. It is a challenging task for several reasons, not least of which is that it involves a notion of saliency or relevance. That is why recent deep learning approaches mostly include some "attention" mechanism (sometimes even more than one) to help focusing on relevant image features.
In this post, we demonstrate a formulation of image captioning as an encoder-decoder problem, enhanced by spatial attention over image grid cells. The idea comes from a recent paper on Neural Image Caption Generation with Visual Attention (Xu et al. 2015), and employs the same kind of attention algorithm as detailed in our post on machine translation.
We're porting Python code from a recent Google Colaboratory notebook, using Keras with TensorFlow eager execution to simplify our lives.
Prerequisites
The code shown here will work with the current CRAN versions of tensorflow, keras, and tfdatasets.
Check that you're using at least version 1.9 of TensorFlow. If that isn't the case, as of this writing, this
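# from the tensorflow package; installs the current release
install_tensorflow()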
will get you version 1.10.
When loading libraries, please make sure you're executing the first four lines in this exact order. We need to make sure we're using the TensorFlow implementation of Keras (tf.keras in Python land), and we have to enable eager execution before using TensorFlow in any way.
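A setup along these lines will do (the first four lines are the critical ones; the remaining packages are an assumption, inferred from the functions used later in this post):
library(keras)
use_implementation("tensorflow")
library(tensorflow)
tfe_enable_eager_execution(device_policy = "silent")

# helper packages assumed from their use below
library(tfdatasets)
library(purrr)
library(stringr)
library(dplyr)
library(rjson)

# numpy, used for saving/loading the pre-extracted image features
np <- reticulate::import("numpy")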
No need to copy-paste any code snippets – you'll find the complete code (in the order required for execution) here: eager-image-captioning.R.
The dataset
MS-COCO ("Common Objects in Context") is one of, perhaps the, reference dataset in image captioning (object detection and segmentation, too).
We'll be using the training images and annotations from 2014 – be warned: depending on your location, the download can take a long time.
After unpacking, let's define where the images and captions are.
annotation_file <- "train2014/annotations/captions_train2014.json"
image_path <- "train2014/train2014"
The annotations are in JSON format, and there are 414113 of them! Luckily for us, we didn't have to download that many images – every image comes with 5 different captions, for better generalizability.
annotations <- fromJSON(file = annotation_file)
annot_captions <- annotations[[4]]
num_captions <- length(annot_captions)
We store both annotations and image paths in lists, for later loading.
all_captions <- vector(mode = "list", length = num_captions)
all_img_names <- vector(mode = "list", length = num_captions)

for (i in seq_len(num_captions)) {
  caption <- paste0("<start> ",
                    annot_captions[[i]][["caption"]],
                    " <end>")
  image_id <- annot_captions[[i]][["image_id"]]
  full_coco_image_path <- sprintf(
    "%s/COCO_train2014_%012d.jpg",
    image_path,
    image_id
  )
  all_img_names[[i]] <- full_coco_image_path
  all_captions[[i]] <- caption
}
Depending on your computing environment, you will for sure want to restrict the number of examples used.
This post will use 30000 captioned images, chosen randomly, and set aside 20% for validation.
Below, we take random samples and split them into training and validation parts. The companion code will also store the indices on disk, so you can pick up on validation and analysis later.
num_examples <- 30000

random_sample <- sample(1:num_captions, size = num_examples)
train_indices <- sample(random_sample, size = length(random_sample) * 0.8)
validation_indices <- setdiff(random_sample, train_indices)

sample_captions <- all_captions[random_sample]
sample_images <- all_img_names[random_sample]
train_captions <- all_captions[train_indices]
train_images <- all_img_names[train_indices]
validation_captions <- all_captions[validation_indices]
validation_images <- all_img_names[validation_indices]
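For example, a minimal way to persist the split (the file names here are ours, not the companion code's):
# store the sampled indices so a later session can reuse the exact split
saveRDS(train_indices, "train_indices.rds")
saveRDS(validation_indices, "validation_indices.rds")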
Interlude
Before really diving into the technical stuff, let's take a moment to reflect on this task.
In typical image-related deep learning walk-throughs, we're used to seeing well-defined problems – even if, in some cases, the solution may be hard. Take, for example, the stereotypical dog vs. cat problem. Some dogs may look like cats and some cats may look like dogs, but that's about it: all in all, in the usual world we live in, it should be pretty much a binary question.
If, on the other hand, we ask people to describe what they see in a scene, it's to be expected from the outset that we'll get different answers. Still, how much consensus there is will very much depend on the concrete dataset we're using.
Let's take a look at some picks from the very first 20 training items sampled randomly above.

Now this image doesn't leave much room for deciding what to focus on, and got a very factual caption indeed: "There is a plate with one slice of bacon a half of orange and bread." If the dataset were all like this, we'd expect a machine learning algorithm to do pretty well here.
Picking another one from the first 20:

What would be salient information to you here? The caption provided goes "A smiling little boy has a checkered shirt."
Is the look of the shirt as important as that? You might just as well focus on the scenery – or even something on a completely different level: the age of the photo, or it being an analog one.
Let's take a final example.

What would you say about this scene? The official label we sampled here is "A group of people posing in a funny way for the camera." Well …
Please don't forget that for each image, the dataset includes 5 different captions (although our n = 30000 samples probably won't).
So this isn't saying the dataset is biased – not at all. Instead, we want to point out the ambiguities and difficulties inherent in the task. Actually, given those difficulties, it's all the more amazing that the task we're tackling here – having a network automatically generate image captions – should be possible at all!
Now let's see how we can do this.
For the encoding part of our encoder-decoder network, we will make use of InceptionV3 to extract image features. In principle, which features to extract is up to experimentation – here we just use the last layer before the fully connected top:
image_model <- application_inception_v3(
  include_top = FALSE,
  weights = "imagenet"
)
For an image size of 299×299, the output will be of size (batch_size, 8, 8, 2048); that is, we are making use of 2048 feature maps.
InceptionV3 being a "big model," where every pass through the model takes time, we want to precompute features in advance and store them on disk.
We'll use tfdatasets to stream images to the model. This means all our preprocessing has to employ TensorFlow functions: that's why we're not using the more familiar image_load from keras below.
Our custom load_image will read in, resize, and preprocess the images as required for use with InceptionV3. A minimal sketch (assuming the TensorFlow 1.x image ops; note it returns the image path as well, which the feature-saving loop below relies on):
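load_image <- function(image_path) {
  img <- tf$read_file(image_path) %>%
    tf$image$decode_jpeg(channels = 3L) %>%
    tf$image$resize_images(c(299L, 299L)) %>%
    tf$keras$applications$inception_v3$preprocess_input()
  # return the path as well: the feature-saving loop below needs it
  list(img, image_path)
}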
Now we're ready to save the extracted features to disk. The (batch_size, 8, 8, 2048)-sized features will be flattened to (batch_size, 64, 2048). The latter shape is what our encoder, soon to be discussed, will receive as input.
preencode <- unique(sample_images) %>% unlist() %>% sort()
num_unique <- length(preencode)

# adapt this according to your system's capacities
batch_size_4save <- 1
image_dataset <-
  tensor_slices_dataset(preencode) %>%
  dataset_map(load_image) %>%
  dataset_batch(batch_size_4save)

save_iter <- make_iterator_one_shot(image_dataset)
# keep track of how many images have been processed
save_count <- 0

until_out_of_range({
  save_count <- save_count + batch_size_4save
  batch_4save <- save_iter$get_next()
  img <- batch_4save[[1]]
  path <- batch_4save[[2]]
  batch_features <- image_model(img)
  batch_features <- tf$reshape(
    batch_features,
    list(dim(batch_features)[1], -1L, dim(batch_features)[4])
  )
  for (i in 1:dim(batch_features)[1]) {
    np$save(path[i]$numpy()$decode("utf-8"),
            batch_features[i, , ]$numpy())
  }
})
Before we get to the encoder and decoder models though, we need to take care of the captions.
Processing the captions
We're using the keras text_tokenizer and the text processing functions texts_to_sequences and pad_sequences to transform ASCII text into a matrix.
# we will use the 5000 most frequent words only
top_k <- 5000
tokenizer <- text_tokenizer(
  num_words = top_k,
  oov_token = "<unk>",
  filters = '!"#$%&()*+.,-/:;=?@[\\]^_`{|}~ ')
tokenizer$fit_on_texts(sample_captions)

train_captions_tokenized <-
  tokenizer %>% texts_to_sequences(train_captions)
validation_captions_tokenized <-
  tokenizer %>% texts_to_sequences(validation_captions)

# pad_sequences will use 0 to pad all captions to the same length
tokenizer$word_index["<pad>"] <- 0

# create a lookup dataframe that allows us to go in both directions
word_index_df <- data.frame(
  word = tokenizer$word_index %>% names(),
  index = tokenizer$word_index %>% unlist(use.names = FALSE),
  stringsAsFactors = FALSE
)
word_index_df <- word_index_df %>% arrange(index)

decode_caption <- function(text) {
  paste(map(text, function(number)
    word_index_df %>%
      filter(index == number) %>%
      select(word) %>%
      pull()),
    collapse = " ")
}

# pad all sequences to the same length (the maximum length, in our case)
# could experiment with shorter padding (truncating the very longest captions)
caption_lengths <- map(
  all_captions[1:num_examples],
  function(c) str_split(c, " ")[[1]] %>% length()
) %>% unlist()
max_length <- fivenum(caption_lengths)[5]

train_captions_padded <- pad_sequences(
  train_captions_tokenized,
  maxlen = max_length,
  padding = "post",
  truncating = "post"
)
validation_captions_padded <- pad_sequences(
  validation_captions_tokenized,
  maxlen = max_length,
  padding = "post",
  truncating = "post"
)
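As a quick sanity check, we can round-trip a tokenized caption through decode_caption (the exact output depends on your random sample):
# decode the first tokenized training caption back into words
decode_caption(train_captions_tokenized[[1]])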
Loading the data for training
Now that we've taken care of pre-extracting the features and preprocessing the captions, we need a way to stream them to our captioning model. For that, we're using tensor_slices_dataset from tfdatasets, passing in the list of paths to the images and the preprocessed captions. Loading the images is then performed as a TensorFlow graph operation (using tf$py_func).
The original Colab code also shuffles the data on every iteration. Depending on your hardware, this may take a long time, and given the size of the dataset it is not strictly necessary to get reasonable results. (The results reported below were obtained without shuffling.)
batch_size <- 10
buffer_size <- num_examples

map_func <- function(img_name, cap) {
  p <- paste0(img_name$decode("utf-8"), ".npy")
  img_tensor <- np$load(p)
  img_tensor <- tf$cast(img_tensor, tf$float32)
  list(img_tensor, cap)
}

train_dataset <-
  tensor_slices_dataset(list(train_images, train_captions_padded)) %>%
  dataset_map(
    function(item1, item2) tf$py_func(map_func, list(item1, item2), list(tf$float32, tf$int32))
  ) %>%
  # optionally shuffle the dataset
  # dataset_shuffle(buffer_size) %>%
  dataset_batch(batch_size)
Captioning model
The model is basically the same as the one discussed in the machine translation post. Please refer to that article for an explanation of the concepts, as well as a detailed walk-through of the tensor shapes involved at every step. Here, we provide the tensor shapes as comments in the code snippets, for quick overview/comparison.
However, if you develop your own models, with eager execution you can simply insert debugging/logging statements at arbitrary places in the code – even in model definitions. So you can have a function like the following (a minimal sketch; the companion code's version may differ in details):
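# a sketch: print a tensor's shape, labeled with some context, whenever
# the global flag debugshapes is set
maybecat <- function(context, x) {
  if (debugshapes) {
    name <- deparse(substitute(x))
    dims <- paste0(dim(x), collapse = " ")
    cat(context, ": shape of ", name, ": ", dims, "\n", sep = "")
  }
}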
And if you now set
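debugshapes <- TRUE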
you can trace – not just tensor shapes, but actual tensor values – through your models, as shown below for the encoder. (We don't display any debugging statements after that, but the sample code has many more.)
Encoder
Now it's time to define some sizing-related hyperparameters and housekeeping variables:
# for encoder output
embedding_dim <- 256
# decoder (GRU) capacity
gru_units <- 512
# for decoder output
vocab_size <- top_k
# number of feature maps obtained from Inception V3
features_shape <- 2048
# shape of attention features (flattened from 8x8)
attention_features_shape <- 64
The encoder in this case is just a fully connected layer, taking in the features extracted from Inception V3 (in flattened form, as they were written to disk), and embedding them in 256-dimensional space.
cnn_encoder <- function(embedding_dim, name = NULL) {
  keras_model_custom(name = name, function(self) {
    self$fc <- layer_dense(units = embedding_dim, activation = "relu")
    function(x, mask = NULL) {
      # input shape: (batch_size, 64, features_shape)
      maybecat("encoder input", x)
      # shape after fc: (batch_size, 64, embedding_dim)
      x <- self$fc(x)
      maybecat("encoder output", x)
      x
    }
  })
}
Attention module
Unlike in the machine translation post, here the attention module is separated out into its own custom model.
The logic is the same though:
attention_module <- function(gru_units, name = NULL) {
  keras_model_custom(name = name, function(self) {
    self$W1 = layer_dense(units = gru_units)
    self$W2 = layer_dense(units = gru_units)
    self$V = layer_dense(units = 1)
    function(inputs, mask = NULL) {
      features <- inputs[[1]]
      hidden <- inputs[[2]]
      # features (CNN_encoder output) shape == (batch_size, 64, embedding_dim)
      # hidden shape == (batch_size, gru_units)
      # hidden_with_time_axis shape == (batch_size, 1, gru_units)
      hidden_with_time_axis <- k_expand_dims(hidden, axis = 2)
      # score shape == (batch_size, 64, 1)
      score <- self$V(k_tanh(self$W1(features) + self$W2(hidden_with_time_axis)))
      # attention_weights shape == (batch_size, 64, 1)
      attention_weights <- k_softmax(score, axis = 2)
      # context_vector shape after sum == (batch_size, embedding_dim)
      context_vector <- k_sum(attention_weights * features, axis = 2)
      list(context_vector, attention_weights)
    }
  })
}
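Eager execution lets us try the module right away. A quick shape check (a sketch; the two inputs are random stand-ins for encoder output and decoder hidden state):
att <- attention_module(gru_units)
c(context_vector, attention_weights) %<-% att(list(
  k_random_uniform(c(1L, 64L, embedding_dim)), # stand-in encoder output
  k_zeros(c(1L, gru_units))                    # stand-in decoder hidden state
))
dim(context_vector)     # (1, 256)
dim(attention_weights)  # (1, 64, 1)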
Decoder
At each time step, the decoder calls the attention module with the features it got from the encoder and its last hidden state, and receives back an attention vector. The attention vector gets concatenated with the current input and further processed by a GRU and two fully connected layers, the last of which gives us the (unnormalized) probabilities for the next word in the caption.
The current input at each time step here is the previous word: the correct one during training (teacher forcing), the last generated one during inference.
rnn_decoder <- function(embedding_dim, gru_units, vocab_size, name = NULL) {
  keras_model_custom(name = name, function(self) {
    self$gru_units <- gru_units
    self$embedding <- layer_embedding(input_dim = vocab_size,
                                      output_dim = embedding_dim)
    self$gru <- if (tf$test$is_gpu_available()) {
      layer_cudnn_gru(
        units = gru_units,
        return_sequences = TRUE,
        return_state = TRUE,
        recurrent_initializer = 'glorot_uniform'
      )
    } else {
      layer_gru(
        units = gru_units,
        return_sequences = TRUE,
        return_state = TRUE,
        recurrent_initializer = 'glorot_uniform'
      )
    }
    self$fc1 <- layer_dense(units = self$gru_units)
    self$fc2 <- layer_dense(units = vocab_size)
    self$attention <- attention_module(self$gru_units)

    function(inputs, mask = NULL) {
      x <- inputs[[1]]
      features <- inputs[[2]]
      hidden <- inputs[[3]]
      c(context_vector, attention_weights) %<-%
        self$attention(list(features, hidden))
      # x shape after passing through embedding == (batch_size, 1, embedding_dim)
      x <- self$embedding(x)
      # x shape after concatenation == (batch_size, 1, 2 * embedding_dim)
      x <- k_concatenate(list(k_expand_dims(context_vector, 2), x))
      # passing the concatenated vector to the GRU
      c(output, state) %<-% self$gru(x)
      # shape == (batch_size, 1, gru_units)
      x <- self$fc1(output)
      # x shape == (batch_size, gru_units)
      x <- k_reshape(x, c(-1, dim(x)[[3]]))
      # output shape == (batch_size, vocab_size)
      x <- self$fc2(x)
      list(x, state, attention_weights)
    }
  })
}
Loss function, and instantiating it all
Now that we've defined our model (built of three custom models), we still need to actually instantiate it (to be precise: the two classes we will access from outside, that is, the encoder and the decoder).
We also need to instantiate an optimizer (Adam will do), and define our loss function (categorical crossentropy).
Note that tf$nn$sparse_softmax_cross_entropy_with_logits expects raw logits instead of softmax activations, and that we're using the sparse variant because our labels are not one-hot-encoded.
encoder <- cnn_encoder(embedding_dim)
decoder <- rnn_decoder(embedding_dim, gru_units, vocab_size)

optimizer <- tf$train$AdamOptimizer()

cx_loss <- function(y_true, y_pred) {
  # mask out the padding positions (label 0)
  mask <- 1 - k_cast(y_true == 0L, dtype = "float32")
  loss <- tf$nn$sparse_softmax_cross_entropy_with_logits(
    labels = y_true,
    logits = y_pred
  ) * mask
  tf$reduce_mean(loss)
}
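A toy check of the masking logic (the numbers are made up; label 0 stands for padding):
# two target positions, vocabulary of 4; the second position is padding
y_true <- k_constant(c(3L, 0L), dtype = "int32")
y_pred <- k_constant(matrix(rnorm(8), nrow = 2))
# the padded position is zeroed out before averaging
cx_loss(y_true, y_pred)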
Training
Training the captioning model is a time-consuming process, and you will for sure want to save the model's weights!
How does this work with eager execution?
We create a tf$train$Checkpoint object, passing it the objects to be saved: in our case, the encoder, the decoder, and the optimizer. Later, at the end of each epoch, we will ask it to write the respective weights to disk.
restore_checkpoint <- FALSE

checkpoint_dir <- "./checkpoints_captions"
checkpoint_prefix <- file.path(checkpoint_dir, "ckpt")
checkpoint <- tf$train$Checkpoint(
  optimizer = optimizer,
  encoder = encoder,
  decoder = decoder
)
As we're just starting to train the model, restore_checkpoint is set to false. Later, restoring the weights will be as easy as
if (restore_checkpoint) {
  checkpoint$restore(tf$train$latest_checkpoint(checkpoint_dir))
}
The training loop is structured just like in the machine translation case: we loop over epochs, batches, and the training targets, feeding in the correct previous word at every timestep.
Again, tf$GradientTape takes care of recording the forward pass and calculating the gradients, and the optimizer applies the gradients to the model's weights.
As each epoch ends, we also save the weights.
num_epochs <- 20

if (!restore_checkpoint) {
  for (epoch in seq_len(num_epochs)) {
    total_loss <- 0
    progress <- 0
    train_iter <- make_iterator_one_shot(train_dataset)
    until_out_of_range({
      batch <- iterator_get_next(train_iter)
      loss <- 0
      img_tensor <- batch[[1]]
      target_caption <- batch[[2]]
      dec_hidden <- k_zeros(c(batch_size, gru_units))
      dec_input <- k_expand_dims(
        rep(list(word_index_df[word_index_df$word == "<start>", "index"]),
            batch_size)
      )
      with(tf$GradientTape() %as% tape, {
        features <- encoder(img_tensor)
        for (t in seq_len(dim(target_caption)[2] - 1)) {
          c(preds, dec_hidden, weights) %<-%
            decoder(list(dec_input, features, dec_hidden))
          loss <- loss + cx_loss(target_caption[, t], preds)
          # teacher forcing: feed in the correct previous word
          dec_input <- k_expand_dims(target_caption[, t])
        }
      })
      total_loss <-
        total_loss + loss / k_cast_to_floatx(dim(target_caption)[2])
      variables <- c(encoder$variables, decoder$variables)
      gradients <- tape$gradient(loss, variables)
      optimizer$apply_gradients(purrr::transpose(list(gradients, variables)),
        global_step = tf$train$get_or_create_global_step()
      )
    })
    cat(paste0(
      "\n\nTotal loss (epoch): ",
      epoch,
      ": ",
      (total_loss / k_cast_to_floatx(buffer_size)) %>% as.double() %>% round(4),
      "\n"
    ))
    checkpoint$save(file_prefix = checkpoint_prefix)
  }
}
Peeking at results
Just like in the translation case, it's interesting to look at model performance during training. The companion code has that functionality built in, so you can watch model progress for yourself.
The basic function here is get_caption: it gets passed the path to an image, loads it, obtains its features from Inception V3, and then asks the encoder-decoder model to generate a caption. If at any point the model produces the <end> symbol, we stop early. Otherwise, we continue until we hit the predefined maximum length.
get_caption <- function(image) {
  attention_matrix <- matrix(0, nrow = max_length, ncol = attention_features_shape)

  temp_input <- k_expand_dims(load_image(image)[[1]], 1)
  img_tensor_val <- image_model(temp_input)
  img_tensor_val <- k_reshape(
    img_tensor_val,
    list(dim(img_tensor_val)[1], -1, dim(img_tensor_val)[4])
  )
  features <- encoder(img_tensor_val)

  dec_hidden <- k_zeros(c(1, gru_units))
  dec_input <- k_expand_dims(
    list(word_index_df[word_index_df$word == "<start>", "index"])
  )
  result <- ""

  for (t in seq_len(max_length - 1)) {
    c(preds, dec_hidden, attention_weights) %<-%
      decoder(list(dec_input, features, dec_hidden))
    attention_weights <- k_reshape(attention_weights, c(-1))
    attention_matrix[t, ] <- attention_weights %>% as.double()

    pred_idx <- tf$multinomial(exp(preds), num_samples = 1)[1, 1] %>% as.double()
    pred_word <- word_index_df[word_index_df$index == pred_idx, "word"]

    if (pred_word == "<end>") {
      result <- paste(result, pred_word)
      attention_matrix <- attention_matrix[1:length(str_split(result, " ")[[1]]), ,
                                           drop = FALSE]
      return(list(result, attention_matrix))
    } else {
      result <- paste(result, pred_word)
      dec_input <- k_expand_dims(list(pred_idx))
    }
  }
  list(str_trim(result), attention_matrix)
}
With that functionality in place, let's actually do it: peek at results while the network is learning!
We've picked 3 examples each from the training and validation sets. Here they are.
First, our picks from the training set:

Let's see the target captions:
- a herd of giraffe standing on top of a grass covered field
- a view of cards driving down a street
- the skateboarding flips his board off of the sidewalk
Interestingly, here we even have a demonstration of how labeled datasets (like anything human) may contain errors. (The samples weren't picked for that; instead, they were chosen – without too much screening – for being reasonably unequivocal in their visual content.)
Now for the validation candidates.

and their official captions:
- a left handed pitcher throwing the base ball
- a woman taking a bite of a slice of pizza in a restaraunt
- a woman hitting swinging a tennis racket at a tennis ball on a tennis court
(Again, any spelling peculiarities have not been introduced by us.)
Epoch 1
Now, what does our network produce after the first epoch? Remember that this means having seen each of the 24000 training images once.
First then, here are the captions for the train images:
a group of sheep standing in the grass
a group of cars driving down a street
a man is standing on a street
Not only is the syntax correct in every case, the content isn't that bad either!
How about the validation set?
a baseball player is playing baseball uniform is holding a baseball bat
a man is holding a table with a table with a table with a table with a table with a table with a table with a table with a table with a table with a table with a table with a table with a table
a tennis player is holding a tennis court
This really tells us that the network has been able to generalize over – let's not call them concepts, but mappings between visual and textual entities, say. It is true that it will have seen some of these images before, because every image comes with several captions. You could be more strict in setting up your training and validation sets – but here, we don't really care about objective performance scores, and so it doesn't really matter.
Let's skip ahead to epoch 20, our last training epoch, and check for further improvements.
Epoch 20
This is what we get for the training images:
a group of many tall giraffe standing next to a sheep
a view of cards and white gloves on a street
a skateboarding flips his board
And this, for the validation images:
a baseball catcher and umpire hit a baseball game
a man is eating a sandwich
a female tennis player is in the court
I think we'd agree that this still leaves room for improvement – but then, we only trained for 20 epochs, and on a very small portion of the dataset.
In the above code snippets, you may have noticed the decoder returning an attention_matrix – but we weren't commenting on it.
Now, finally, just as in the translation example, let's take a look at what we can make of that.
Where does the network look?
We can visualize where the network is "looking" as it generates each word by overlaying the original image and the attention matrix. This example is taken from the 4th epoch.
Here, whitish squares indicate areas receiving stronger focus. Compared to text-to-text translation though, the mapping is inherently less straightforward – where does one "look" when generating words like "and," "the," or "in"?
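Here is a sketch of how such an overlay (like the one shown below) can be produced from get_caption's return values; the helper name and graphics choices are ours, and the grid orientation may need a transpose depending on how the weights were flattened:
plot_attention <- function(image_path, result, attention_matrix) {
  words <- str_split(str_trim(result), " ")[[1]]
  img <- jpeg::readJPEG(image_path)
  par(mfrow = c(ceiling(length(words) / 4), 4), mar = c(0, 0, 2, 0))
  for (t in seq_along(words)) {
    plot(as.raster(img))
    # fold the 64 attention weights back into the 8x8 grid and overlay
    # them as translucent white squares (stronger focus = brighter)
    alpha <- attention_matrix[t, ] / max(attention_matrix[t, ])
    overlay <- as.raster(matrix(rgb(1, 1, 1, alpha), nrow = 8, ncol = 8))
    rasterImage(overlay, 0, 0, ncol(img), nrow(img), interpolate = FALSE)
    title(words[t])
  }
}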

Conclusion
It probably goes without saying that much better results are to be expected when training on (much!) more data, and for much more time.
Apart from that, there are other options. The concept implemented here uses spatial attention over a uniform grid; that is, the attention mechanism guides the decoder where on the grid to look next while generating a caption.
However, this is not the only way, and it is not how humans do it. A much more plausible approach is a combination of top-down and bottom-up attention. E.g., Anderson et al. (2017) use object detection techniques to isolate interesting objects bottom-up, plus an LSTM stack wherein the first LSTM computes top-down attention guided by the output word generated by the second.
Another interesting approach involving attention is the multimodal attentive translator (Liu et al. 2017), where the image features are encoded and presented in a sequence, such that we end up with sequence models on both the encoding and the decoding sides.
Another alternative is to add a learned topic to the information input (Zhu, Xue, and Yuan 2018), which again is a top-down feature found in human cognition.
If you find one of these, or yet another, approach more convincing, an eager execution implementation, in the style of the above, will likely be a sound way of implementing it.