TensorFlow visualization framework



To do good research or develop good models, we need rich, frequent feedback about what’s going on inside our models during our experiments. That’s the point of running experiments: to get information about how well a model performs—as much information as possible. Making progress is an iterative process or loop: we start with an idea and express it as an experiment, attempting to validate or invalidate our idea. We run this experiment and process the information it generates. This inspires our next idea. The more iterations of this loop we’re able to run, the more refined and powerful our ideas become. Keras helps us go from idea to experiment in the least possible time, and fast GPUs can help us get from experiment to result as quickly as possible. But what about processing the experiment results? That’s where TensorBoard comes in.


TensorBoard is a browser-based visualization tool that comes packaged with TensorFlow. Note that it’s only available for Keras models when we’re using Keras with the TensorFlow backend. The key purpose of TensorBoard is to help us visually monitor everything that goes on inside our model during training. If we’re monitoring more information than just the model’s final loss, we can develop a clearer vision of what the model does and doesn’t do, and we can make progress more quickly. TensorBoard gives us access to several neat features, all in our browser:

  • Visually monitoring metrics during training
  • Visualizing our model architecture
  • Visualizing histograms of activations and gradients
  • Exploring embeddings in 3D

Let’s demonstrate these features in a simple example. We’ll train a 1D convnet on the IMDB sentiment-analysis task. We’ll consider only the top 2,000 words in the IMDB vocabulary, to make visualizing word embeddings more tractable.

Text-classification model to use with TensorBoard

import keras
from keras import layers
from keras.datasets import imdb
from keras.preprocessing import sequence

max_features = 2000  # Number of words to consider as features
max_len = 500        # Cuts off texts after this number of words
                     # (among the max_features most common words)

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)

model = keras.models.Sequential()
model.add(layers.Embedding(max_features, 128,
                           input_length=max_len,
                           name='embed'))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])
Before we start using TensorBoard, we need to create a directory where we’ll store the log files it generates.

Creating a directory for TensorBoard log files

$ mkdir my_log_dir
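In practice, it’s convenient to give each training run its own subdirectory under the log directory, so TensorBoard can overlay several runs for comparison. A minimal sketch using only the standard library (the `make_run_dir` helper and its timestamp-based naming are our own convention here, not part of Keras or TensorBoard):

```python
import time
from pathlib import Path

def make_run_dir(base="my_log_dir"):
    """Create and return a fresh per-run log directory, named by timestamp."""
    run_dir = Path(base) / time.strftime("run_%Y%m%d_%H%M%S")
    run_dir.mkdir(parents=True, exist_ok=True)  # also creates the base directory
    return str(run_dir)
```

The returned path can then be passed as the log location to the TensorBoard callback, and each new run will show up as a separate curve in the TensorBoard UI.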
Let’s launch the training with a TensorBoard callback instance. This callback will write log events to disk at the specified location.

Training the model with a TensorBoard callback

callbacks = [
    keras.callbacks.TensorBoard(
        log_dir='my_log_dir',  # Log files will be written at this location.
        histogram_freq=1,      # Records activation histograms every 1 epoch
        embeddings_freq=1,     # Records embedding data every 1 epoch
    )
]
history = model.fit(x_train, y_train,
                    epochs=20,
                    batch_size=128,
                    validation_split=0.2,
                    callbacks=callbacks)
At this point, we can launch the TensorBoard server from the command line, instructing it to read the logs the callback is currently writing. The tensorboard utility should have been automatically installed on our machine the moment we installed TensorFlow (for example, via pip):
$ tensorboard --logdir=my_log_dir
We can then browse to http://localhost:6006 and look at our model training. In addition to live graphs of the training and validation metrics, we get access to the Histograms tab, where we can find pretty visualizations of histograms of the activation values taken by our layers.
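Conceptually, what the Histograms tab shows is simple: at each epoch, the values of a tensor (a layer’s activations, say) are binned into a histogram. A toy illustration with NumPy on synthetic data (not connected to the model above):

```python
import numpy as np

rng = np.random.default_rng(0)
activations = rng.standard_normal(10_000)  # stand-in for one layer's activation values
counts, bin_edges = np.histogram(activations, bins=30)

# Every activation value falls into exactly one of the 30 bins.
assert counts.sum() == activations.size
```

TensorBoard stacks one such histogram per epoch, which lets us see at a glance whether a layer’s activations are drifting, saturating, or collapsing as training progresses.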