Let’s assume you already have an image in NumPy’s `ndarray` format, e.g. because you loaded it with OpenCV’s `imread()` function, and you want to convert it to TensorFlow’s `Tensor` format and later back to `ndarray`.
That’s essentially three calls to TensorFlow:
```python
import cv2
import tensorflow as tf
import numpy as np

# normalize the pixel values to the 0..1 range and convert them
# to a single-precision tensor
image_in = cv2.imread('image.png') / 255.
t = tf.convert_to_tensor(image_in, dtype=tf.float32)
assert isinstance(t, tf.Tensor)

# in order to convert the tensor back to an array, we need
# to evaluate it; for this, we need a session
with tf.Session() as sess:
    image_out = sess.run(fetches=t)
    assert isinstance(image_out, np.ndarray)

# for imshow to work, the image needs to be in the 0..1 range
# whenever it is a float; that's why we normalized it
cv2.imshow('Image', image_out)
cv2.waitKey(0)
```
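If you want to convince yourself that nothing is lost in the round trip, a quick sanity check along these lines works; the `np.allclose` tolerance is my own choice to absorb the float64-to-float32 cast, not something the conversion mandates:

```python
# sanity check (optional): the ndarray -> Tensor -> ndarray round trip
# should preserve the pixel values up to the float32 cast
with tf.Session() as sess:
    image_out = sess.run(fetches=t)

assert image_out.dtype == np.float32
assert image_out.shape == image_in.shape
# atol chosen loosely to absorb the float64 -> float32 precision loss
assert np.allclose(image_in, image_out, atol=1e-6)
```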
Note that instead of using `sess.run(t)` we could also have used

```python
with tf.Session() as sess:
    # eval()'s first positional argument is feed_dict, so the session
    # must be passed by keyword; inside the with block, a plain t.eval()
    # also works, because sess is installed as the default session
    image_out = t.eval(session=sess)
```
which essentially performs the same action. A benefit of using `sess.run()` directly is that we can fetch more than one tensor in the same pass through the (sub-)graph (say, `outputs = sess.run(fetches=[t1, t2, t3])`), whereas calling `tensor.eval()` always results in one separate pass per call.
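To make the difference concrete, here is a minimal sketch; `t1`, `t2` and `t3` are toy tensors standing in for whatever your graph actually produces:

```python
import tensorflow as tf

# three toy tensors standing in for real graph outputs
t1 = tf.constant(1.0)
t2 = t1 * 2.0
t3 = t1 + t2

with tf.Session() as sess:
    # one pass through the (sub-)graph, all three values fetched together
    v1, v2, v3 = sess.run(fetches=[t1, t2, t3])

    # three separate passes, one per eval() call
    v1 = t1.eval()
    v2 = t2.eval()
    v3 = t3.eval()
```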