Using tensorflowjs_converter

A friend and I were looking to convert a TensorFlow model to a TensorFlow.js model. My friend had done some prior research and started me off with the following page: Importing a Keras model into TensorFlow.js.

The first step, and the one I messed up, is installing the `tensorflowjs` library into a Python environment: `pip install tensorflowjs`. I'm not sure how I managed it, but the fact of the matter is that I installed `tensorflow` instead (`pip install tensorflow`) and spent a good amount of time wondering why the bash command `tensorflowjs_converter` wasn't working.
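If you hit the same snag, a quick sanity check distinguishes the two packages (a minimal sketch; package names as published on PyPI):

```shell
# Install the converter package (note: tensorflowjs, NOT tensorflow)
pip install tensorflowjs

# Confirm the package is installed and the CLI is on the PATH
pip show tensorflowjs
which tensorflowjs_converter

# If "which" prints nothing, the converter was never installed --
# installing tensorflow alone does not provide tensorflowjs_converter
```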

In any case, in the tutorial linked above, the command takes the following structure:
# bash
tensorflowjs_converter --input_format keras \
                       path/to/my_model.h5 \
                       path/to/tfjs_target_dir

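Once converted, a Keras model can be loaded in the browser with `tf.loadLayersModel` (a hedged sketch; the model URL and input shape are placeholders, not from the tutorial):

```javascript
import * as tf from '@tensorflow/tfjs';

async function run() {
  // Point at the model.json emitted by tensorflowjs_converter
  // (placeholder URL)
  const model = await tf.loadLayersModel('https://example.com/tfjs_model/model.json');

  // Run a prediction on a dummy input (shape is model-specific)
  const input = tf.zeros([1, 224, 224, 3]);
  const output = model.predict(input);
  output.print();
}

run();
```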
Meanwhile, the readme for the tfjs COCO-SSD model supplies a converter command that strips the post-processing subgraph as a way of improving performance in the browser:
# bash
tensorflowjs_converter --input_format=tf_frozen_model \
                       --output_format=tfjs_graph_model \
                       --output_node_names='Postprocessor/ExpandDims_1,Postprocessor/Slice' \
                       ./frozen_inference_graph.pb \
                       ./web_model

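A model produced with `--output_format=tfjs_graph_model` is loaded with `tf.loadGraphModel` rather than `tf.loadLayersModel`, and graph models containing dynamic control-flow ops are run with `executeAsync` (a sketch; the URL and tensor shape are placeholders):

```javascript
import * as tf from '@tensorflow/tfjs';

async function run() {
  // Graph models (converted frozen graphs / SavedModels) load via
  // loadGraphModel (placeholder URL)
  const model = await tf.loadGraphModel('https://example.com/web_model/model.json');

  // executeAsync is required when the graph contains control-flow ops,
  // as detection models typically do
  const input = tf.zeros([1, 300, 300, 3]);
  const result = await model.executeAsync(input);
  console.log(result);
}

run();
```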
Eventually, we referred to the documentation for the command itself to better understand the various flags that can be set. In particular, the page gives an example for each supported input format: (1) TensorFlow SavedModel, (2) TensorFlow Hub module, (3) TensorFlow.js JSON format, (4) Keras HDF5 model, and (5) tf.keras SavedModel.

In our test case, we downloaded the iNaturalist Species-trained model and ran a conversion on the TensorFlow SavedModel.

In detail, we downloaded "faster_rcnn_resnet101_fgvc_2018_07_19.tar.gz" which, as of 20200813, can be found in the TensorFlow Object Detection API model zoo. After extracting the contents, I ran the following command:
# bash
tensorflowjs_converter --input_format=tf_saved_model \
                       --output_format=tfjs_graph_model \
                       ./faster_rcnn_resnet101_fgvc_2018_07_19/saved_model \
                       ./web_model

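Before converting a SavedModel, it can help to inspect its tag-sets and signatures with `saved_model_cli`, which ships with the `tensorflow` pip package (a sketch; the directory path matches the extracted archive above):

```shell
# List the tag-sets, signatures, and input/output tensors
# of the SavedModel before attempting a conversion
saved_model_cli show \
    --dir ./faster_rcnn_resnet101_fgvc_2018_07_19/saved_model \
    --all
```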
Note that the conversion takes a considerable amount of RAM (don't quote me on this, but I think around 8 GB). Also, I hadn't installed TensorFlow with CUDA support, so the process might have run much slower than it should have.

