How to Load TensorFlow Checkpoint Variables (.ckpt) and Make Predictions in C++

As far as I know, the TensorFlow C++ API has no function for loading TF checkpoints the way you do in Python. You can use this tool to combine your graph.pb and *.ckpt together, or get a deeper understanding of how it works here.

Let's find out how to get variables initialized in a C++ TensorFlow graph.

Suppose we have a TF object in Python:

class TFObject:

    def __init__(self):

        # Some train parameters
        self.learning_rate = 0.001
        ... and so on ...

        # Some network parameters
        self.n_input = 500  # input features
        self.n_classes = 2  # output classes
        ... and so on ...

Now let's define some placeholders for the input and output data and create variables for weights and biases:

        self.x = tf.placeholder("float", [None, self.n_input], name="nX")
        self.y = tf.placeholder("float", [None, self.n_classes], name="nY")

        self.weights = {
            'h1': tf.Variable(tf.random_normal([self.n_input, self.n_hidden_1])),
            'out': tf.Variable(tf.random_normal([self.n_hidden_1, self.n_classes]))
        }
        self.biases = {
            'b1': tf.Variable(tf.random_normal([self.n_hidden_1])),
            'out': tf.Variable(tf.random_normal([self.n_classes]))
        }
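For intuition, tf.random_normal simply fills each matrix with samples from a standard normal distribution. Here is a minimal plain-C++ sketch of that initialization (hand-rolled, not the TensorFlow API; the 256 used for n_hidden_1 below is an assumed value, since it isn't defined above):

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Row-major rows*cols matrix filled with N(0, 1) samples,
// mimicking what tf.random_normal does for the weight/bias variables.
std::vector<float> random_normal(std::size_t rows, std::size_t cols,
                                 unsigned seed = 42) {
    std::mt19937 gen(seed);
    std::normal_distribution<float> dist(0.0f, 1.0f);
    std::vector<float> m(rows * cols);
    for (float &v : m) v = dist(gen);
    return m;
}
```

With the sizes above, weights['h1'] would correspond to random_normal(500, 256) and weights['out'] to random_normal(256, 2).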

Then we can define the model, like so:


        # Multilayer perceptron: 
        # one hidden layer with RELU activation
        self.pTron = tf.add(tf.matmul(tf.nn.relu(tf.add(tf.matmul(self.x, self.weights['h1']), self.biases['b1'])),
                                     self.weights['out']), self.biases['out'], name="nPTron")

Don't be scared by this line; it's a little tricky and messy, but now we have the name of this operation.
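To unpack what that one-liner computes, here is a plain-C++ sketch of the same forward pass with hand-rolled matrices (no TensorFlow; toy sizes, not the real n_input=500):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<float>>;

// y = x * W + b  (x: [n, in], W: [in, out], b: [out])
Matrix affine(const Matrix &x, const Matrix &W, const std::vector<float> &b) {
    Matrix y(x.size(), std::vector<float>(b.size()));
    for (std::size_t r = 0; r < x.size(); ++r)
        for (std::size_t c = 0; c < b.size(); ++c) {
            float acc = b[c];
            for (std::size_t k = 0; k < W.size(); ++k)
                acc += x[r][k] * W[k][c];
            y[r][c] = acc;
        }
    return y;
}

// The nested pTron line: add(matmul(relu(add(matmul(x, h1), b1)), out), b_out)
Matrix perceptron(const Matrix &x, const Matrix &h1, const std::vector<float> &b1,
                  const Matrix &out, const std::vector<float> &b_out) {
    Matrix hidden = affine(x, h1, b1);
    for (auto &row : hidden)                 // RELU activation
        for (auto &v : row) v = std::max(v, 0.0f);
    return affine(hidden, out, b_out);
}
```

Reading the nested calls inside-out gives exactly this: an affine layer, a ReLU, then a second affine layer for the output.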

You have to name each important node to make it easy to reference later. Do it like so:

        self.init = tf.initialize_variables(tf.all_variables(), name="nInit")

        self.saver = tf.train.Saver(name="nSaver")

After some model training you can save a checkpoint (in binary format, essentially a map<name string, Tensor>) and a protobuf graph (which contains all the nodes, but no trained variables). In Python you can simply restore the session from that checkpoint with:

with tf.Session() as sess:
  # Restore variables from disk.
  saver.restore(sess, "/tmp/model.ckpt")

But C++ has no such method, as I said before.
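Conceptually the checkpoint really is just a name-to-tensor map, and "restoring" means copying the saved values back into the live variables. A minimal stand-in in plain C++ (using std::vector<float> in place of a real Tensor; this illustrates the storage model, not the actual checkpoint file format):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// A checkpoint, logically: variable name -> its saved values.
using FakeTensor = std::vector<float>;
using Checkpoint = std::map<std::string, FakeTensor>;

// "Restoring" copies each saved tensor back into the live variable map.
void restore(const Checkpoint &ckpt,
             std::map<std::string, FakeTensor> &variables) {
    for (const auto &entry : ckpt)
        variables[entry.first] = entry.second;
}
```

This is the operation saver.restore performs for us in Python, and the one we have to reproduce by hand in C++ below.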

Let's make a function that saves our trained variables as constants in the graph:

    def save(self, filename):
        for variable in tf.trainable_variables():
            tensor = tf.constant(variable.eval())
            tf.assign(variable, tensor, name="nWeights")

        # This does not work in TensorFlow with Python 3 right now,
        # but we definitely need to save the graph as binary!
        tf.train.write_graph(self.sess.graph_def, 'graph/', filename, as_text=False)

Note: to make it work in Python 3 you should fix the file tensorflow/python/training/training_util.py around line 71:

    # f = gfile.FastGFile(path, "w")
    if as_text:
        f = gfile.FastGFile(path, "w")
        f.write(str(graph_def))
    else:
        f = gfile.FastGFile(path, "wb")
        f.write(graph_def.SerializeToString())
    f.close()

Now let's move to C++. Load the graph, create a session, and initialize our saved weights:

void load(std::string model) {
            // graph_def and session are assumed to be member variables.
            auto load_graph_status =
                    ReadBinaryProto(tensorflow::Env::Default(), model, &graph_def);

            auto session_status = session->Create(graph_def);

            std::vector<tensorflow::Tensor> output;
            std::vector<string> vNames;

            // Collect every node whose name contains "nWeights" --
            // the assign ops we created in save().
            int node_count = graph_def.node_size();
            for (int i = 0; i < node_count; i++) {
                auto n = graph_def.node(i);

                if (n.name().find("nWeights") != std::string::npos) {
                    vNames.push_back(n.name());
                }
            }

            // Running the assign ops copies the saved constants into the variables.
            session->Run({}, vNames, {}, &output);
}

To run other evaluations, call session->Run with the proper operation name (remember, we named some earlier).

Let's define tensors with the same shape as in Python, and a variable for the neural net's answer:

        tensorflow::TensorShape inputShape;
        inputShape.InsertDim(0, 1);
        inputShape.InsertDim(1, 500);

        tensorflow::Tensor inputTensor(DT_FLOAT, inputShape);

        std::vector<std::pair<string, Tensor>> input;
        std::vector<tensorflow::Tensor> answer;

We need to push some data into the neural network and get the answer back. To do this:

  • put in the correct node names
  • fill the tensor with the correct data
  • and run the TensorFlow session

            input.emplace_back(std::string("nX"), inputTensor);

            auto statusPred = session->Run(input, {"nPTron"}, {}, &answer);

After a successful run you'll have your data in the answer variable.
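The answer tensor has shape [1, n_classes], so turning it into a predicted class is just an argmax over the scores. A plain-C++ sketch on a bare float buffer (in real code you would read the values via answer[0].flat<float>() from the TensorFlow tensor):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Index of the largest score == the predicted class.
std::size_t argmax(const std::vector<float> &scores) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < scores.size(); ++i)
        if (scores[i] > scores[best]) best = i;
    return best;
}
```

With n_classes = 2 as above, this yields 0 or 1 as the network's prediction.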
