Pre-trained model - Pre-calculation of "bottleneck" features

When using a pre-trained model like VGG16 with the parameters trainable = 0 and include_top = false, I was wondering whether the output of the last VGG16 layer before the fully-connected layers gets pre-calculated for all the images in the training set. This is possible because the VGG16 output is fixed for each image (the base is non-trainable). These cached outputs could then be used instead of propagating each image through the convolutional part on every epoch, which would speed up training in this use case of the pre-trained model.
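For context, here is a minimal sketch of the idea, assuming Keras/TensorFlow and that `x_train`, `y_train`, and `num_classes` already exist (those names are placeholders, not anything provided by DLS or VGG16):

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Frozen convolutional base: include_top=False drops the fully-connected head,
# trainable=False fixes the weights, so its output per image never changes.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Pre-compute the "bottleneck" features once for the whole training set.
# x_train: float32 array of shape (num_images, 224, 224, 3), assumed to be
# already preprocessed with vgg16.preprocess_input.
bottleneck_features = base.predict(x_train, batch_size=32)

# Train only a small classifier head on the cached features, skipping the
# expensive forward pass through VGG16 on every epoch.
head = models.Sequential([
    layers.Flatten(input_shape=bottleneck_features.shape[1:]),
    layers.Dense(256, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),  # num_classes: assumed
])
head.compile(optimizer="adam",
             loss="categorical_crossentropy",
             metrics=["accuracy"])
head.fit(bottleneck_features, y_train, epochs=10, batch_size=32)
```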

When using a pre-trained model, the majority of use cases keep a few layers trainable in order to fine-tune on custom data. So trainable=0 is a narrow use case, and we don't plan to implement this optimization in DLS.

Ok, I understand. I was asking because there are quite a few articles/posts on the web that describe doing this, and it makes sense when the user doesn't have much computing power.