Monday, July 22, 2019

TensorFlow Tools






Keras to TensorFlow Model
keras2tf.py -input_model_file LWTNN_v10.h5 -output_model_file LWTNN_v10.pb
(https://github.com/amir-abdi/keras_to_tensorflow) 
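
For reference, the same conversion can be done directly in Python with TF 1.x APIs. This is only a minimal sketch (the .h5/.pb file names are the ones used in the command above), not the script's actual implementation:

# Freeze a Keras .h5 model into a TensorFlow .pb GraphDef (TF 1.x sketch)
import tensorflow as tf

tf.keras.backend.set_learning_phase(0)               # inference mode
model = tf.keras.models.load_model("LWTNN_v10.h5")

sess = tf.keras.backend.get_session()
output_names = [out.op.name for out in model.outputs]
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
tf.train.write_graph(frozen, ".", "LWTNN_v10.pb", as_text=False)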

 

Summarize Graph

 bazel run tensorflow/tools/graph_transforms:summarize_graph \
--in_graph=/tmp/innocent/models/LWTNN_v10.pb --print_structure=true 

(Use this to find the input and output node names of the graph.)
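
If building the graph_transforms tool is inconvenient, a quick Python sketch can pull the same information out of the frozen graph (the path is the one used in the command above; the "no consumers" rule only gives candidate outputs):

# List graph inputs (Placeholders) and candidate outputs (nodes nothing consumes)
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("/tmp/innocent/models/LWTNN_v10.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

consumed = {i.split(":")[0].lstrip("^") for n in graph_def.node for i in n.input}
print("Inputs :", [n.name for n in graph_def.node if n.op == "Placeholder"])
print("Outputs:", [n.name for n in graph_def.node if n.name not in consumed])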
 
 

Converting to TFLite

Method 1
 
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model) 
  
 
Method 2
bazel run --define=with_select_tf_ops=true tflite_convert -- \
  --output_file=inference.tflite \
  --graph_def_file=inference_model.pb \
  --input_arrays="inputs","input_lengths" \
  --output_arrays=model/inference/add \
  --target_ops=TFLITE_BUILTINS,SELECT_TF_OPS 
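
Whichever method is used, the converted model can be sanity-checked with the TFLite interpreter. A minimal sketch, assuming the converted_model.tflite produced above and a single float32 input:

# Load the .tflite file and run one inference with random input data
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy = np.random.random_sample(tuple(input_details[0]["shape"])).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))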



Optimization For Android Platform
=================================
The best way is to compile in only the ops that your model actually uses.
1. Generate a header that lists the required ops:
   python tensorflow/python/tools/print_selective_registration_header.py --graphs="xxx.pb" > ops_to_register.h
2. Copy the generated header into the source tree so it is picked up on the next build:
   cp ops_to_register.h tensorflow/core/framework/
3. When building with bazel, add --copts=-DSELECTIVE_REGISTRATION.


Threading
==========
The desktop version of TensorFlow uses a threading model, which means several operations can run in parallel.
Two types of parallelism are supported:
>inter-op: independent operations are executed concurrently
>intra-op: a single operation (for example a large matmul) uses multiple threads internally
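
A minimal configuration sketch (TF 1.x), assuming you want 2 ops in flight and 4 threads per op:

# inter_op = how many independent ops may run at once
# intra_op = how many threads a single op may use internally
import tensorflow as tf

config = tf.ConfigProto(inter_op_parallelism_threads=2,
                        intra_op_parallelism_threads=4)
with tf.Session(config=config) as sess:
    pass  # build and run the graph here as usual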


Quantization
=============
quantize_weights (converts large float weight constants to 8-bit values, shrinking the model; applied with the Graph Transform Tool, see the sketch in the next section)


Useful graph conversion tools
==============================
strip_unused_nodes (removes any nodes that are not needed to compute the listed output nodes)
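
Both strip_unused_nodes and quantize_weights are transforms of the Graph Transform Tool, which also has a Python wrapper. A minimal sketch, assuming the frozen LWTNN_v10.pb and the input/output node names from the Method 2 example above:

# Apply graph transforms in Python instead of via bazel run transform_graph
import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

graph_def = tf.GraphDef()
with tf.gfile.GFile("LWTNN_v10.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

transformed = TransformGraph(graph_def,
                             ["inputs", "input_lengths"],        # input node names
                             ["model/inference/add"],            # output node names
                             ["strip_unused_nodes", "quantize_weights"])
tf.train.write_graph(transformed, ".", "LWTNN_v10_opt.pb", as_text=False)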


What Ops are available in the mobile environment?
=================================================
1. The first thing to do is to run strip_unused_nodes.
If the ops that were causing errors get stripped out, the problem is solved; if they survive the strip, their kernels have to be compiled into the mobile build (see Implementation Location below).
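
To see which op types a frozen graph actually needs (and compare them against the ops available in the mobile/TFLite build), a quick sketch, again assuming the LWTNN_v10.pb frozen graph:

# Print the distinct op types used by the graph
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("LWTNN_v10.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

print(sorted({node.op for node in graph_def.node}))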


Implementation Location
An operation is divided into two parts in the implementation:
op definition: essentially the signature of the operator. Because it is small, it is always included in the library.
op implementation (kernel): the actual implementation code, mostly found in subdirectories of tensorflow/core/kernels.
When compiling the C++ library you can control which kernel implementations are actually built in.
For example, the Mul operation is implemented in tensorflow/core/kernels/cwise_op_mul_1.cc.
To find where an op's kernel is registered, try the following:
$ grep 'REGISTER.*"Mul"' tensorflow/core/kernels/*.cc


tflite_convert: Starting from TensorFlow 1.9, the command-line tool tflite_convert is installed as part of the Python package. All of the examples below use tflite_convert for simplicity.
Example: tflite_convert --output_file=...

Earlier this was done with toco (now deprecated).


tflite_convert \
  --output_file=/tmp/foo.tflite \
  --graph_def_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
  --input_arrays=input \
  --output_arrays=MobilenetV1/Predictions/Reshape_1









