Summarized: The E-Dimension — Why Machine Learning Doesn’t Work Well for Some Problems?

The article Why Machine Learning Doesn’t Work Well for Some Problems? (Shahab, 2017) describes the effect of emergence as a barrier for predictive inference.

Emergence is the phenomenon of completely new behavior arising (emerging) from interactions of elementary entities, such as life emerging from biochemistry and collective intelligence emerging from social animals.

In general, effects of emergence cannot be inferred through a priori analysis of a system (or of its elementary entities). While weak emergence can still be understood by observing or simulating the system, qualities arising from strong emergence cannot be simulated with current systems.

Sheikh-Bahaei suggests interpreting emergence (in a predictive context) as an additional dimension, called the E-Dimension, where moving along that dimension results in new qualities emerging. Crossing E-Dimensions during inference reduces predictive power, because emergent qualities cannot necessarily be described as a function of the observed features alone. The more E-Dimensions are crossed during inference, the lower the prediction success will be, regardless of the amount of feature noise. Current-generation algorithms do not handle this kind of problem well, and further research is required in this area.
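
As a rough illustration of this point (my own toy sketch, not taken from the article), the following Python snippet contrasts a target that is a simple per-entity aggregate with a target driven purely by interactions between entities. A linear model on the raw entity features recovers the first target almost perfectly but fails on the interaction-driven one, even though both targets are noise-free; the setup and names are purely hypothetical.

```python
# Toy sketch (assumption, not from the article): a target driven by
# interactions between entities is hard to predict from per-entity
# features, even when the features carry no noise at all.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_systems, n_entities = 2000, 10

# Observed feature space: individual entity states, one row per system.
X = rng.normal(size=(n_systems, n_entities))

# Non-emergent target: a simple per-entity aggregate (mean state).
y_aggregate = X.mean(axis=1)

# Interaction-driven target: variance of all pairwise products of entity
# states; it depends on how entities relate, not on any single entity.
pairwise = X[:, :, None] * X[:, None, :]
y_interaction = pairwise.reshape(n_systems, -1).var(axis=1)

for name, y in [("aggregate target", y_aggregate),
                ("interaction target", y_interaction)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    r2 = LinearRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test R^2 = {r2:.2f}")
```

This only illustrates the weaker point that interaction-driven targets defeat models treating features in isolation; it does not capture the article's stronger claim that some emergent qualities cannot be simulated at all.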


Figure: Hypothetical example of the E-Dimension concept. Emergence phenomena can be considered a barrier for making predictive inferences. The further away the target is from the features along this dimension, the less information the features provide about the target. The figure shows an example of predicting organism-level properties (target) using molecular and physicochemical properties (feature space). (Shahab, 2017)

Effects of emergence on example machine learning problems (Shahab, 2017):

| Example ML problem | Feature space | Feature noise | Emergence barrier | Prediction success |
|---|---|---|---|---|
| Character recognition | handwritten character images | high | none | high |
| Speech recognition | sound waves | high | none | high |
| Weather prediction | climate sensor data | high | weak | high |
| Recommendation system | historic preferences, likes, etc. | low | weak | moderate |
| Ad-click prediction | historic click behavior | low | weak | moderate |
| Device failure prediction | sensor data | high | weak | moderate |
| Healthcare outcome prediction | patient data, vital signs, behavior, etc. | high | strong | low |
| Melting/boiling point prediction | molecular/atomic structure | low | strong | low |
| Stock prediction | historic stock value, news articles, etc. | low | strong | low |
Shahab, S.-B. (2017, July 6). The E-Dimension: Why Machine Learning Doesn’t Work Well for Some Problems? Retrieved March 4, 2018, from https://www.datasciencecentral.com/profiles/blogs/the-e-dimension-why-machine-learning-doesn-t-work-well-for-some
March 4th, 2018, by Markus
Tags: emergence, inference
Categories: Data Science, Machine Learning, ML Summarized
