Let’s assume you already have an image in numpy’s ndarray format, e.g. because you loaded it with OpenCV’s imread() function, and you want to convert it to TensorFlow’s Tensor format and later back to ndarray.
That’s essentially three calls to TensorFlow:
import cv2
import tensorflow as tf
import numpy as np
# normalize the pixel values to 0..1 range and convert them
# to a single-precision tensor
image_in = cv2.imread('image.png') / 255.
t = tf.convert_to_tensor(image_in, dtype=tf.float32)
assert isinstance(t, tf.Tensor)
# in order to convert the tensor back to an array, we need
# to evaluate it; for this, we need a session
with tf.Session() as sess:
    image_out = sess.run(fetches=t)
assert isinstance(image_out, np.ndarray)
# for imshow to work, the image needs to be in 0..1 range
# whenever it is a float; that's why we normalized it.
cv2.imshow('Image', image_out)
cv2.waitKey(0)
Note that instead of using sess.run(fetches=t) we could also have used
with tf.Session() as sess:
    image_out = t.eval(session=sess)
which essentially performs the same action. A benefit of using sess.run() directly is that we can fetch more than one tensor in the same pass through the (sub-)graph (say, tuple = sess.run(fetches=[t1, t2, t3])), whereas calling tensor.eval() always results in one separate pass per call.
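As a quick illustration of the multi-fetch variant (the derived tensors below are made up purely for the example):
import numpy as np
import tensorflow as tf

image_in = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in image
t = tf.convert_to_tensor(image_in, dtype=tf.float32)
# three derived tensors we want to evaluate
t_mean = tf.reduce_mean(t)
t_max = tf.reduce_max(t)
t_inverted = 1.0 - t
# a single pass through the graph evaluates all three fetches
with tf.Session() as sess:
    mean_val, max_val, image_inv = sess.run(fetches=[t_mean, t_max, t_inverted])
assert isinstance(image_inv, np.ndarray)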
December 12th, 2016, by Markus
While reading up on line search algorithms in nonlinear optimization for neural network training, I came across this problem: given a function \(f(x)\), find a quadratic interpolant \(q(x) = ax^2 + bx + c\) that fulfills the conditions \(f(x_0) = q(x_0)\), \(f(x_1) = q(x_1)\) and \(f'(x_0) = q'(x_0)\).
So I took out my scribbling pad, wrote down some equations and then, after two pages of nonsense, decided it really wasn't worth the hassle. It turns out that the simple system of these three conditions can be solved directly for the coefficients \(a\), \(b\) and \(c\), and with them the interpolant's minimizer.
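For reference, solving the three conditions for the coefficients gives
\begin{align}
a &= \frac{f(x_1) - f(x_0) - f'(x_0)\,(x_1 - x_0)}{(x_1 - x_0)^2}, \\
b &= f'(x_0) - 2\,a\,x_0, \\
c &= f(x_0) - a\,x_0^2 - b\,x_0,
\end{align}
so that the minimizer of \(q(x)\) is
\begin{align}
x_{min} &= -\frac{b}{2a} = x_0 - \frac{f'(x_0)\,(x_1 - x_0)^2}{2\left[f(x_1) - f(x_0) - f'(x_0)\,(x_1 - x_0)\right]}.
\end{align}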
We also would need to check the interpolant's second derivative \(q''(x_{min})\) to ensure the approximated minimizer is indeed a minimum of \(q(x)\) by requiring \(q''(x_{min}) > 0\), with the second derivative given simply by \(q''(x) = 2a\).
The premise of the line search in minimization problems usually is that the search direction is already a direction of descent. With \(f'(x_0) < 0\) and \(f'(x_1) > 0\) (as is typically the case when bracketing a local minimizer of \(f(x)\)), the interpolant is always (strictly) convex. If these conditions do not hold, there might be no solution at all: one obviously won't be able to find a quadratic interpolant given the initial conditions for a function that is linear to machine precision. In that case, watch out for divisions by zero.
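To make the division-by-zero caveat concrete, here is a minimal sketch of such an interpolation step with a guard against the degenerate (near-linear) case; the function and parameter names are my own and purely illustrative:
def quadratic_interpolation_step(f0, g0, f1, x0, x1, eps=1e-12):
    # minimizer of the quadratic through f(x0), f'(x0) and f(x1);
    # returns None if the interpolant is not (strictly) convex,
    # e.g. because f is linear to machine precision on [x0, x1]
    dx = x1 - x0
    a = (f1 - f0 - g0 * dx) / dx**2   # leading coefficient of q(x)
    if a <= eps:
        return None
    return x0 - g0 / (2.0 * a)

f = lambda x: (x - 1.5)**2 + 0.25     # toy objective (itself quadratic)
df = lambda x: 2.0 * (x - 1.5)
x_min = quadratic_interpolation_step(f(0.0), df(0.0), f(4.0), 0.0, 4.0)
# x_min == 1.5, the exact minimizer, since the toy objective is quadratic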
Last but not least, if the objective is to minimize \(\varphi(\alpha) = f(\vec{x}_k + \alpha \vec{d}_k)\) using \(q(\alpha)\), where \(\vec{d}_k\) is the search direction and \(\vec{x}_k\) the current starting point, the same construction carries over with \(\alpha\) taking the role of \(x\).
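Spelled out with \(x_0 = 0\) and \(x_1 = \alpha_1\) (a sketch of that substitution; \(\alpha_1\) here denotes the current trial step length), the interpolation data become
\begin{align}
\varphi(0) &= f(\vec{x}_k), \qquad
\varphi'(0) = \nabla f(\vec{x}_k)^\top \vec{d}_k, \qquad
\varphi(\alpha_1) = f(\vec{x}_k + \alpha_1 \vec{d}_k),
\end{align}
and the interpolated trial step is
\begin{align}
\alpha_{min} &= -\frac{\varphi'(0)\,\alpha_1^2}{2\left[\varphi(\alpha_1) - \varphi(0) - \varphi'(0)\,\alpha_1\right]}.
\end{align}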
If \(q(\alpha)\) is required to be strongly convex, then we need
\begin{align}
q''(\alpha) &= 2a \overset{!}{\geq} m
\end{align}
for some \(m > 0\), which means \(a\) must be bounded away from zero (by \(m/2\), or some small \(\epsilon\) for that matter), a trivial check to perform. The following picture visualizes that this is indeed the case:
(Figure: Convexity of a parabola for different leading coefficients \(a\), with positive \(b\) (top), zero \(b\) (middle) and negative \(b\) (bottom); the constant coefficient \(c\) is left out for brevity.)
July 2nd, 2015, by Markus