CNN Network Visualizer

Rakesh TS
3 min read · Oct 5, 2023

Visualizing a network architecture as an image is an important debugging tool, and a pictorial representation is useful when we are implementing a custom architecture or publishing a developed network in a presentation deck or publication.

In order to construct a graph of our network and save it to disk using Keras, we need to install Graphviz from https://graphviz.org/download/ and then install the packages below.

$ pip install graphviz==0.5.2
$ pip install pydot-ng==1.0.0
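
If plot_model later complains that it cannot locate Graphviz, a quick sanity check is to confirm the dot binary is on your PATH; it should print the installed Graphviz version:

$ dot -V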

Visualizing Keras Networks

Open up a new file in a folder named “visualize”, name it lenet.py, and insert the following code. (So that the import in arch.py below works, also add an __init__.py to the same folder containing the single line from visualize.lenet import LeNet.)

# import the necessary packages
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Activation
from keras.layers import Flatten
from keras.layers import Dense
from keras import backend as K

class LeNet:
    @staticmethod
    def build(width, height, depth, classes):
        # initialize the model
        model = Sequential()
        inputShape = (height, width, depth)

        # if we are using "channels first", update the input shape
        if K.image_data_format() == "channels_first":
            inputShape = (depth, height, width)

        # first set of CONV => RELU => POOL layers
        model.add(Conv2D(20, (5, 5), padding="same",
            input_shape=inputShape))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))

        # second set of CONV => RELU => POOL layers
        model.add(Conv2D(50, (5, 5), padding="same"))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))

        # first (and only) set of FC => RELU layers
        model.add(Flatten())
        model.add(Dense(500))
        model.add(Activation("relu"))

        # softmax classifier
        model.add(Dense(classes))
        model.add(Activation("softmax"))

        # return the constructed network architecture
        return model
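
As a quick sanity check before rendering any image, you can also print a text summary of the architecture using Keras's built-in model.summary(), which lists each layer along with its output shape and parameter count:

from visualize import LeNet

# build LeNet for 28x28 grayscale inputs and 10 classes, then summarize it
model = LeNet.build(28, 28, 1, 10)
model.summary()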

Open another file in the directory that contains the visualize folder, name it arch.py, and insert the following code.

# python arch.py

# import the necessary packages
from visualize import LeNet
from keras.utils import plot_model

# initialize LeNet and then write the network architecture
# visualization graph to disk
model = LeNet.build(28, 28, 1, 10)
plot_model(model, to_file="arch.png", show_shapes=True)
To execute the script, run:

$ python arch.py

Once the command exits successfully, check your current working directory for the network architecture image, arch.png.

Each node in the graph represents a specific layer function (e.g., convolution, pooling, activation, flattening, or a fully connected layer). Arrows represent the flow of data through the network. Each node also includes the input and output volume size for the given operation.

Walking through the LeNet architecture, we see the first layer is our InputLayer, which accepts a 28×28×1 input image. The spatial dimensions for the input and output of this layer are the same, as it is simply a “placeholder” for the input data. You might be wondering what the None represents in the data shape (None, 28, 28, 1). The None is actually our batch size. When visualizing the network architecture, Keras does not know our intended batch size, so it leaves the value as None. When training, this value would change to 32, 64, 128, or whatever batch size we deem appropriate.
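
You can confirm this placeholder batch dimension directly on the model object, without rendering the image at all (a minimal sketch, assuming the LeNet class above):

from visualize import LeNet

# the leading None is the unspecified batch dimension
model = LeNet.build(28, 28, 1, 10)
print(model.input_shape)  # (None, 28, 28, 1)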

Next, our data flows to the first CONV layer, where we learn 20 kernels on the 28×28×1 input. The output of this first CONV layer is 28×28×20. We have retained our original spatial dimensions due to zero padding, but by learning 20 filters we have changed the volume size.

An activation layer follows the CONV layer, which by definition cannot change the input volume size. However, a POOL operation can reduce the volume size — here our input volume is reduced from 28×28×20 down to 14×14×20. The second CONV accepts the 14×14×20 volume as input, but then learns 50 filters, changing the output volume size to 14×14×50 (again, zero padding is leveraged to ensure the convolution itself does not reduce the width and height of the input). An activation is applied prior to another POOL operation which again halves the width and height from 14×14×50 down to 7×7×50.
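
These spatial dimensions all follow from the standard output-size formula, floor((size - kernel + 2 * padding) / stride) + 1. A minimal sketch of the arithmetic (the conv_out helper is purely illustrative):

# output size of a convolution or pooling operation along one spatial axis
def conv_out(size, kernel, pad, stride=1):
    return (size - kernel + 2 * pad) // stride + 1

print(conv_out(28, 5, 2))            # CONV 5x5, "same" padding (pad=2) -> 28
print(conv_out(28, 2, 0, stride=2))  # POOL 2x2, stride 2 -> 14
print(conv_out(14, 2, 0, stride=2))  # second POOL -> 7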

At this point, we are ready to apply our FC layers. To accomplish this, our 7×7×50 input is flattened into a list of 2,450 values (since 7×7×50 = 2,450). Now that we have flattened the output of the convolutional part of our network, we can apply a FC layer that accepts the 2,450 input values and learns 500 nodes. An activation follows, followed by another FC layer, this time reducing 500 down to 10 (the total number of class labels for the MNIST dataset).
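
If you prefer to verify these shapes programmatically rather than reading them off the graph image, you can walk the model's layers and print each output shape (again assuming the LeNet class above; layer.output_shape is the standard Keras 2 attribute):

from visualize import LeNet

model = LeNet.build(28, 28, 1, 10)
for layer in model.layers:
    # prints e.g. the Flatten layer as (None, 2450), the first Dense as (None, 500)
    print(layer.name, layer.output_shape)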

Finally, the softmax activation is applied across these 10 nodes, giving us our final class probabilities.
