2x1=10

because numbers are people, too
    • Summarized: The E-Dimension — Why Machine Learning Doesn’t Work Well for Some Problems?

      The article Why Machine Learning Doesn’t Work Well for Some Problems? (Shahab, 2017) describes the effect of Emergence as a barrier for predictive inference.

      Emergence is the phenomenon of completely new behavior arising (emerging) from interactions of elementary entities, such as life emerging from biochemistry and collective intelligence emerging from social animals.

      In general, effects of emergence cannot be inferred through a priori analysis of a system (or its elementary entities). While weak emergence can still be understood by observing or simulating the system, emergent qualities arising from strong emergence cannot be simulated with current systems.

      Sheikh-Bahei suggests interpreting emergence (in a predictive context) as an additional dimension, called the E-Dimension, where moving along that dimension results in new qualities emerging. Crossing E-Dimensions during inference leads to reduced predictive power, as emergent qualities cannot necessarily be described as a function of the observed features alone. The more E-Dimensions are crossed during inference, the lower the prediction success will be, regardless of the amount of feature noise. Current-generation algorithms do not handle this kind of problem well, and further research is required in this area.

      Hypothetical example of the E-Dimension concept: emergence phenomena can be considered a barrier for making predictive inferences. The further away the target is from the features along this dimension, the less information the features provide about the target. The figure shows an example of predicting organism-level properties (target) using molecular and physicochemical properties (feature space). (Shahab, 2017)

      Effects of emergence on example machine learning problems (Shahab, 2017):

      Example ML problem               | Feature space                              | Feature noise | Emergence barrier | Prediction success
      Character recognition            | handwritten character images               | high          | none              | high
      Speech recognition               | sound waves                                | high          | none              | high
      Weather predictions              | climate sensor data                        | high          | weak              | high
      Recommendation system            | historic preferences, likes, etc.          | low           | weak              | moderate
      Ad-click prediction              | historic click behavior                    | low           | weak              | moderate
      Device failure prediction        | sensor data                                | high          | weak              | moderate
      Healthcare outcome predictions   | patient data, vital signs, behavior, etc.  | high          | strong            | low
      Melting/boiling point prediction | molecular/atomic structure                 | low           | strong            | low
      Stock prediction                 | historic stock value, news articles, etc.  | low           | strong            | low
      Shahab, S.-B. (2017, July 6). The E-Dimension: Why Machine Learning Doesn’t Work Well for Some Problems? Retrieved March 4, 2018, from https://www.datasciencecentral.com/profiles/blogs/the-e-dimension-why-machine-learning-doesn-t-work-well-for-some
      March 4th, 2018, by Markus · 0 comments
      Tags: emergence, inference
      Categories: Data Science, Machine Learning, ML Summarized
      • Use your conda environment in Jupyter Notebooks

        Sadly, running jupyter notebook from within a conda environment does not imply your notebook also runs in the same environment. Thankfully, there’s an easy fix for that, namely nb_conda, and you’ll get it using

        conda install nb_conda
        

        in the environment of your choice. After that, start up your notebook and select the Kernel you want either when creating a new notebook or from the notebook’s Kernel menu:

        There we go.
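
        As a quick sanity check (my addition, not part of the original post), you can run the following in a notebook cell to confirm that the selected kernel really lives in your conda environment:

        # Print the interpreter the notebook kernel is using; with the right
        # kernel selected, this should point into your environment,
        # e.g. .../anaconda3/envs/<your-environment>/bin/python
        import sys
        print(sys.executable)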

        August 23rd, 2017, by Markus · 1 comment
        Tags: Python, Anaconda, Jupyter
        Categories: Data Science
        • Building OpenCV for Anaconda Python 3

          Don’t judge me, I know how this title sounds. The harsh reality is, as of writing this post, that I always have a hard time getting CMake to recognize the right Python path when I’m using an Anaconda environment. In theory it’s just

          git clone https://github.com/opencv/opencv.git opencv
          mkdir opencv/build && cd opencv/build
          cmake -DCMAKE_INSTALL_PREFIX=/usr/local  ..
          make -j8
          (sudo) make install
          

          and there you go. It will happily pick up your environment’s Python installation and build that for you. Of course, it looks more along the lines of

          cmake \
              -DCMAKE_INSTALL_PREFIX="/usr/local" \
              -DOPENCV_EXTRA_MODULES_PATH="../opencv_contrib/modules" \
              -DBUILD_DOCS=OFF \
              -DBUILD_TESTS=OFF \
              -DBUILD_EXAMPLES=OFF \
              -DBUILD_PERF_TESTS=OFF \
              -DBUILD_opencv_dnn=OFF \
              -DENABLE_FAST_MATH=ON \
              -DWITH_OPENMP=ON \
              -DWITH_TBB=ON \
              -DMKL_WITH_TBB=ON \
              -DMKL_WITH_OPENMP=ON \
              -DCMAKE_CXX_COMPILER="/usr/bin/g++-5" \
              -DCMAKE_C_COMPILER="/usr/bin/gcc-5" \
              -DCUDA_HOST_COMPILER="/usr/bin/gcc-5" \
              -DCUDA_FAST_MATH=ON \
              -DCUDA_ARCH_BIN="5.2" \
              -DWITH_CUBLAS=ON \
              ..
          

          Whenever you then add

          conda activate my-environment
          

          to the mix though, all hell breaks loose and you end up with partially configured Python 2 and no Python 3 support at all. The trick seems to be not to rely on OpenCV’s standard Python configuration values

          PYTHON3_LIBRARY
          PYTHON3_EXECUTABLE
          PYTHON3_INCLUDE_DIR
          PYTHON3_INCLUDE_DIR2
          PYTHON3_NUMPY_INCLUDE_DIRS
          

          but rather to use the seemingly undocumented values

          PYTHON3_LIBRARIES
          PYTHON3_INCLUDE_PATH
          

          as well, giving it the nice appearance of

          cmake \
              -DCMAKE_BUILD_TYPE=RELEASE \
              -DCMAKE_INSTALL_PREFIX="/your/anaconda3" \
              -DOPENCV_EXTRA_MODULES_PATH="../opencv_contrib/modules" \
              -DBUILD_DOCS=OFF \
              -DBUILD_TESTS=OFF \
              -DBUILD_EXAMPLES=OFF \
              -DBUILD_PERF_TESTS=OFF \
              -DBUILD_opencv_dnn=ON \
              -DTINYDNN_USE_NNPACK=OFF \
              -DTINYDNN_USE_TBB=ON \
              -DTINYDNN_USE_OMP=ON \
              -DENABLE_FAST_MATH=ON \
              -DWITH_OPENMP=ON \
              -DWITH_TBB=ON \
              -DMKL_WITH_TBB=ON \
              -DMKL_WITH_OPENMP=ON \
              -DCMAKE_CXX_COMPILER="/usr/bin/g++-5" \
              -DCMAKE_C_COMPILER="/usr/bin/gcc-5" \
              -DCUDA_HOST_COMPILER="/usr/bin/gcc-5" \
              -DCUDA_FAST_MATH=ON \
              -DCUDA_ARCH_BIN="5.2" \
              -DWITH_CUBLAS=ON \
              -DBUILD_opencv_python2=OFF \
              -DPYTHON_EXECUTABLE="/your/anaconda3/bin/python3" \
              -DPYTHON_LIBRARY="/your/anaconda3/lib/libpython3.6m.so" \
              -DPYTHON3_LIBRARY="/your/anaconda3/lib/libpython3.6m.so" \
              -DPYTHON3_EXECUTABLE="/your/anaconda3/bin/python3" \
              -DPYTHON3_INCLUDE_DIR="/your/anaconda3/include/python3.6m" \
              -DPYTHON3_INCLUDE_DIR2="/your/anaconda3/include" \
              -DPYTHON3_NUMPY_INCLUDE_DIRS="/your/anaconda3/lib/python3.6/site-packages/numpy/core/include" \
              -DPYTHON3_INCLUDE_PATH="/your/anaconda3/include/python3.6m" \
              -DPYTHON3_LIBRARIES="/your/anaconda3/lib/libpython3.6m.so" \
              -DHDF5_C_LIBRARY_z="/your/anaconda3/lib/libz.so" \
              ..
          

          And then it’ll do what you want it to do. Because I’m tired of retrying every time, my repo at github.com/sunsided/opencv-cmake has support for the OpenCV extra modules and some documentation for this.

          So, what paths go in there? Well, this blog post has a very flashy way of finding out.

          For CMAKE_INSTALL_PREFIX you use

          python3 -c "import sys; print(sys.prefix)"
          

          so it will install OpenCV directly into your Anaconda installation.
          For PYTHON3_EXECUTABLE you call

          which python3
          

          and PYTHON3_INCLUDE_DIR is given by

          python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())"
          

          Finally, a PYTHON3_PACKAGES_PATH can be found using

          python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())"
          

          which you can use to determine the PYTHON3_NUMPY_INCLUDE_DIRS, although CMake strangely did that part right for me.
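
          If you want all of these values in one go, here is a small sketch (my addition, not from the original post) that prints them; deriving PYTHON3_NUMPY_INCLUDE_DIRS via numpy.get_include() is an assumption on my part:

          # Run this with the python3 of the activated conda environment to
          # collect the paths for the CMake invocation above.
          import sys
          from distutils.sysconfig import get_python_inc, get_python_lib
          import numpy as np

          print('CMAKE_INSTALL_PREFIX:       ', sys.prefix)
          print('PYTHON3_EXECUTABLE:         ', sys.executable)
          print('PYTHON3_INCLUDE_DIR:        ', get_python_inc())
          print('PYTHON3_PACKAGES_PATH:      ', get_python_lib())
          print('PYTHON3_NUMPY_INCLUDE_DIRS: ', np.get_include())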

          August 23rd, 2017, by Markus · 0 comments
          Tags: OpenCV, Anaconda, CMake
          Categories: Image Processing
          • Using TensorFlow’s Supervisor with TensorBoard summary groups

            One of TensorFlow’s more awesome parts is definitely TensorBoard, i.e. the capability of collecting and visualizing data from the TensorFlow graph as the network is running while also being able to display and browse the graph itself. Coming from Caffe, where I eventually wrote my own tooling just to visualize the training loss from logs of the raw console output and had to copy-paste the graph’s prototxt to some online service in order to visualize it, this is a massive step in the best possible direction. To get some of Caffe’s checkpointing features back, you can use TensorFlow’s Supervisor. This blog post is about using both TensorBoard and the Supervisor for fun and profit.
            TL;DR: Scroll to the end for an example of using grouped summaries with the Supervisor.

            Apart from just storing scalar data for TensorBoard, the histogram feature turned out to be especially valuable to me for observing the performance of a probability inference step.

            Here, the left half shows the distribution of ground truth probability values in the training and validation sets over time, whereas the right half shows the actual inferred probabilities over time. It’s not hard to see that the network is getting better, but there is more to it:

            • The histogram of the ground truth values (here on the left) allows you to verify that your training data is indeed correct. If the data is not balanced, you might learn a network that is biased towards one outcome.
            • If the network does indeed obtain some biased view of the data, you’ll clearly see patterns emerging in the inferred histogram that do not match the expected ground truth distribution. In this example, the right histograms approach the left histograms, so it appears to be working fine.
            • However, if you only measure network performance in accuracy, as the ratio of correct guesses over all examples, you might be getting the wrong impression: if the input distribution is skewed towards 95% positive and 5% negative examples, a network guessing “positive” 100% of the time is producing only 5% error (see the short sketch after this list). If your total accuracy is an aggregate over multiple different values, you will definitely miss this, especially since randomized mini-batches only further obscure the issue.
            • Worse, if the learned coefficients run into saturation, learning will stop for them. Again, this might not be obvious if the total loss and accuracy are actually aggregates of different values.
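
            To make the accuracy pitfall concrete, here is a tiny sketch with made-up numbers (my addition, not from the original post): an always-positive “classifier” reaches 95% accuracy on a 95/5 split while never catching a single negative example.

            import numpy as np

            # made-up, skewed ground truth: 95% positive, 5% negative examples
            labels = np.array([1] * 95 + [0] * 5)

            # a degenerate "network" that always predicts the positive class
            predictions = np.ones_like(labels)

            accuracy = np.mean(predictions == labels)
            print(accuracy)  # 0.95, which looks great although the negative class is never detected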

            Influence of the learning rate

            Let’s take the example of a variable learning rate. If at some point the training slows down, it’s not immediately clear if this is due to the fact that

            • a parameter space optimum has been found and training is done,
            • the algorithm found a plateau in parameter space and the loss would continue to fall after a few more hundreds or thousands of iterations, or
            • the training is actually diverging because the learning rate is not small enough to enter a local optimum in the first place.

            Now, optimizers like Adam are tailored to overcome the problems of fixed learning rates, but they too can only go so far: if the learning rate is too big to begin with, it’s still too big after fine-tuning. Or worse, after a couple of iterations the adjusted weights could end up in saturation and no further change would be able to do anything about it.

            To rule out at least one part, you can make the learning rate a changeable parameter of the network, e.g. a function of the training iteration. I had some success in using Caffe’s “multi-step” approach of changing the learning rate at fixed iteration numbers, say, reducing it by one decade at iterations 1000, 5000 and 16000, where I determined these values over different training runs of the network.
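
            As an aside (my addition, not from the original post), such a multi-step schedule is easy to express as a plain Python helper; the boundaries and the decay factor below are example values only:

            def multi_step_learning_rate(iteration, base_lr=0.1, gamma=0.1,
                                         steps=(1000, 5000, 16000)):
                """Return the learning rate for the given training iteration.

                The rate starts at base_lr and is multiplied by gamma each time
                the iteration passes one of the step boundaries.
                """
                lr = base_lr
                for step in steps:
                    if iteration >= step:
                        lr *= gamma
                return lr

            # e.g. 0.1 before iteration 1000, one decade lower after each boundary
            print(multi_step_learning_rate(999), multi_step_learning_rate(1200))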

            So instead of baking the learning rate into the graph during construction, you would define a placeholder for it and feed the learning rate of the current epoch/iteration into the optimization operation each time you call it, like so:

            with tf.Graph().as_default() as graph:
                p_lr = tf.placeholder(tf.float32, (), name='learning_rate')
                t_loss = tf.reduce_mean(...)
                op_minimize = tf.train.AdamOptimizer(learning_rate=p_lr)\
                                      .minimize(t_loss)
            
            with tf.Session(graph=graph) as sess:
                init = tf.group(tf.global_variables_initializer(),
                                tf.local_variables_initializer())
                sess.run(init)
            
                for _ in range(0, epochs):
                    learning_rate = 0.1
                    loss, _ = sess.run([t_loss, op_minimize],
                                       feed_dict={p_lr: learning_rate})
            

            Alternatively, you could make it a non-learnable Variable and explicitly assign it whenever it needs to be changed; let’s assume we don’t do that.
            The first thing I usually do is then to also add a summary node to track the current learning rate (as well as the training loss):

            with tf.Graph().as_default() as graph:
                p_lr = tf.placeholder(tf.float32, (), name='learning_rate')
                t_loss = tf.reduce_mean(...)
                op_minimize = tf.train.AdamOptimizer(learning_rate=p_lr)\
                                      .minimize(t_loss)
            
                tf.summary.scalar('learning_rate', p_lr)
                tf.summary.scalar('loss', t_loss)
            
                # histograms work the same way
                tf.summary.histogram('probability', t_some_batch)
            
                s_merged = tf.summary.merge_all()
            
            writer = tf.summary.FileWriter('log', graph=graph)
            with tf.Session(graph=graph) as sess:
                init = tf.group(tf.global_variables_initializer(),
                                tf.local_variables_initializer())
                sess.run(init)
            
                for _ in range(0, epochs):
                    learning_rate = 0.1
                    loss, summary, _ = sess.run([t_loss, s_merged, op_minimize],
                                                feed_dict={p_lr: learning_rate})
                    writer.add_summary(summary)
            

            Now, for each epoch, the values of the t_loss and p_lr tensors are stored in a protocol buffer file in the log subdirectory. You can then start TensorBoard with the --logdir parameter pointing to it and get a nice visualization of the training progress.

            And one example where doing this massively helped me track down errors is exactly the network I took the introduction histogram picture from; here, I set the learning rate to 0.1 for about two hundred iterations before dropping it to 0.01. It turned out that having the learning rate this high for my particular network did result in saturation, and learning effectively stopped. The histogram helped in noticing the issue and the scalar graph helped in determining the “correct” learning rates.

            Training and validation set summaries

            Suppose now you want to have different summaries that may or may not appear on different instances of the graph. The learning rate, for example, has no influence on the outcome of the validation batch, so including it in validation runs only eats up time, memory and storage. However, the tf.summary.merge_all() operation doesn’t care where the summaries live per se, and since some summaries depend on nodes from the training graph (e.g. the learning rate placeholder), you suddenly create a dependency on nodes you didn’t want to trigger, with effects of varying levels of fun.

            It turns out that summaries can be bundled into collections, e.g. “train” and “test”, by specifying their membership upon construction, so that you can later obtain only those summaries that belong to the specified collections:

            with tf.Graph().as_default() as graph:
                p_lr = tf.placeholder(tf.float32, (), name='learning_rate')
                t_loss = tf.reduce_mean(...)
                op_minimize = tf.train.AdamOptimizer(learning_rate=p_lr)\
                                      .minimize(t_loss)
            
                tf.summary.scalar('learning_rate', p_lr, collections=['train'])
                tf.summary.scalar('loss', t_loss, collections=['train', 'test'])
            
                # merge summaries per collection
                s_training = tf.summary.merge_all('train')
                s_test = tf.summary.merge_all('test')
            
            writer = tf.summary.FileWriter('log', graph=graph)
            with tf.Session(graph=graph) as sess:
                init = tf.group(tf.global_variables_initializer(),
                                tf.local_variables_initializer())
                sess.run(init)
            
                for _ in range(0, epochs):
                    # during training
                    learning_rate = 0.1     
                    loss, summary, _ = sess.run([t_loss, s_training, op_minimize],
                                                feed_dict={p_lr: learning_rate})
                    writer.add_summary(summary)
            
                    # during validation
                    loss, summary = sess.run([t_loss, s_test])
                    writer.add_summary(summary)
            

            In combination with liberal use of tf.name_scope(), it could then look like the following image. The graph shows three different training runs where we now have the ability to reason about the choice(s) of the learning rate.

            This works, but we can do bet­ter.

            Using the Supervisor

            One currently (documentation-wise) very underrepresented yet powerful feature of TensorFlow’s Python API is the Supervisor, a manager that basically takes care of writing summaries, taking snapshots, running queues (should you use them, which you probably do), initializing variables and also gracefully stopping training.

            In order to use the Supervisor, you basically swap out your own session for a managed one, skip variable initialization and tell it when you want which of your custom summaries to be stored. While not required, it apparently is good practice to add a global_step variable to the graph; should the Supervisor find such a variable, it will automatically use it for internal coordination. If you bind the variable to the optimizer, it will also be automatically incremented for each optimization step, freeing you from having to keep track of the iteration yourself. Here’s an example of how to use it:

            with tf.Graph().as_default() as graph:
                p_lr = tf.placeholder(tf.float32, (), name='learning_rate')
                t_loss = tf.reduce_mean(...)
            
                # adding the global_step and telling the optimizer about it
                global_step = tf.Variable(0, name='global_step', trainable=False)
                op_minimize = tf.train.AdamOptimizer(learning_rate=p_lr)\
                                      .minimize(t_loss, global_step=global_step)
            
                tf.summary.scalar('learning_rate', p_lr, collections=['train'])
                tf.summary.scalar('loss', t_loss, collections=['train', 'test'])
            
                s_training = tf.summary.merge_all('train')
                s_test = tf.summary.merge_all('test')
            
            # create the supervisor and obtain a managed session;
            # variable initialization will now be done automatically.
            sv = tf.train.Supervisor(logdir='log', graph=graph)
            with sv.managed_session() as sess:
            
                # run until training should stop
                while not sv.should_stop():
                    learning_rate = 0.1     
                    loss, s, i, _ = sess.run([t_loss, s_training,
                                              global_step, op_minimize],
                                              feed_dict={p_lr: learning_rate})
            
                    # hand over your own summaries to the Supervisor
                    sv.summary_computed(sess, s, global_step=i)
            
                    loss, s = sess.run([t_loss, s_test])
                    sv.summary_computed(sess, s, global_step=i)
            
                    # ... at some point, request a stop
                    sv.request_stop()
            

            The Supervisor will also add additional summaries to your graph for free, e.g. an insight into the number of training steps per second. This could allow you to fine-tune mini-batch sizes, for example, because they currently tend to have a big impact on the host-to-device transmission of the data.
            Different from Caffe’s behavior, the Supervisor will by default keep only the last five snapshots of the learned weights; unless you fear missing the validation loss optimum, leaving the training running for days is not an issue anymore, disk-wise at least.
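
            If five snapshots are not enough, you can hand the Supervisor your own Saver; a minimal sketch, assuming the graph from above (the numbers are example values, not recommendations):

            # Keep more checkpoints and snapshot more often than the defaults.
            saver = tf.train.Saver(max_to_keep=10)
            sv = tf.train.Supervisor(logdir='log', graph=graph,
                                     saver=saver,          # use our own Saver
                                     save_model_secs=300)  # snapshot every 5 minutes
            with sv.managed_session() as sess:
                pass  # ... training loop as before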

            January 21st, 2017, by Markus · 2 comments
            Tags: Supervisor, Learning Rate, TensorBoard
            Categories: Machine Learning, TensorFlow, Caffe
            • Getting an image into and out of TensorFlow

              Let’s assume you already have an image in numpy’s ndarray format, e.g. because you loaded it with OpenCV’s imread() function, and you want to convert it to TensorFlow’s Tensor format and later back to an ndarray.

              That’s essentially three calls to TensorFlow:

              import cv2
              import tensorflow as tf
              import numpy as np
              
              # normalize the pixel values to 0..1 range and convert them 
              # to a single-precision tensor
              image_in = cv2.imread('image.png') / 255.
              t = tf.convert_to_tensor(image_in, dtype=tf.float32)
              assert isinstance(t, tf.Tensor)
              
              # in order to convert the tensor back to an array, we need
              # to evaluate it; for this, we need a session
              with tf.Session() as sess:
                  image_out = sess.run(fetches=t)
                  assert isinstance(image_out, np.ndarray)
              
              # for imshow to work, the image needs to be in 0..1 range
              # whenever it is a float; that's why we normalized it.
              cv2.imshow('Image', image_out)
              cv2.waitKey(0)
              

              Note that instead of using sess.run(t) we could also have used

              with tf.Session() as sess:
                  image_out = t.eval(session=sess)
              

              which essentially performs the same action. A benefit of using sess.run() directly is that we can fetch more than one tensor in the same pass through the (sub-)graph (say, tuple = sess.run(fetches=[t1, t2, t3])), whereas calling tensor.eval() always results in one separate pass per call.
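
              To make the difference concrete, here is a small sketch (my addition, not from the original post) that also computes the mean and maximum pixel value of the image tensor t from above in a single pass:

              # two more nodes on the graph ...
              t_mean = tf.reduce_mean(t)
              t_max = tf.reduce_max(t)

              with tf.Session() as sess:
                  # ... evaluated together in one pass through the graph
                  mean_value, max_value = sess.run(fetches=[t_mean, t_max])

                  # the same results via .eval() would take two separate passes:
                  # mean_value, max_value = t_mean.eval(), t_max.eval()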

              December 12th, 2016, by Markus · 0 comments
              Tags: OpenCV, Python, tensorflow
              Categories: Image Processing, TensorFlow, Neural Networks