Soft Indicator Function

We very often come across indicator functions denoting class membership. In their native form, these functions are neither continuous nor differentiable. I will describe a trick to convert such an indicator function into an approximate continuous and differentiable one. This post is organized as follows: a computation case involving an indicator function, the trick for the conversion, and some further remarks …
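A common relaxation (not necessarily the exact trick in the post) is to replace the hard threshold with a steep sigmoid; the sharpness constant k below is an assumption for illustration.

```python
import numpy as np

def hard_indicator(x, threshold=0.0):
    """1 if x > threshold, else 0 -- neither continuous nor differentiable at the threshold."""
    return (x > threshold).astype(np.float64)

def soft_indicator(x, threshold=0.0, k=10.0):
    """Sigmoid relaxation: approaches the hard indicator as the sharpness k grows."""
    return 1.0 / (1.0 + np.exp(-k * (x - threshold)))

x = np.linspace(-1, 1, 5)
print(hard_indicator(x))          # [0. 0. 0. 1. 1.]
print(soft_indicator(x, k=10.0))  # smooth values in (0, 1), usable in gradient-based training
```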

Reinforcement Learning : Memo

I came across this tutorial series on Reinforcement Learning by Arthur Juliani: [WWW]. Fundamentals textbook: Reinforcement Learning: An Introduction, by Richard S. Sutton and Andrew G. Barto, freely available online at https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html; video tutorial by R. Sutton. OpenAI Gym: OpenAI is a research organization for RL. They have an environment suite called OpenAI-Gym (Python), useful …
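A minimal interaction loop with the classic Gym API, as a sketch only (the CartPole-v0 environment and the random policy are my assumptions, not the memo's; newer Gym/Gymnasium versions changed the reset/step signatures).

```python
import gym

env = gym.make("CartPole-v0")
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()            # random policy, just to exercise the API
    obs, reward, done, info = env.step(action)    # classic 4-tuple step signature
    total_reward += reward
print("episode return:", total_reward)
```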

Deep Residual Nets with Tensorflow

Git Gist: https://gist.github.com/mpkuse/6f9dcd419effa707422eb2c5097f51b4 Deep Residual Nets (ResNets) from Microsoft Research have become one of the most popular deep learning network architectures: already 800+ citations, given that the paper only appeared in 2015. Recently, I ported all my code from Caffe to Tensorflow. While it is a lot easier to deal with Caffe, I must say, the control you …
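A minimal identity residual block in tf.keras, as a sketch of the idea rather than the code in the gist (the filter count, kernel size, and input shape are arbitrary choices of mine).

```python
import tensorflow as tf

def residual_block(x, filters=64):
    """y = ReLU(F(x) + x): two conv-BN layers on the residual branch, then an additive skip."""
    shortcut = x
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU()(y)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.Add()([y, shortcut])
    return tf.keras.layers.ReLU()(y)

inputs = tf.keras.Input(shape=(32, 32, 64))   # channel count must match `filters` for the skip
outputs = residual_block(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()
```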

Robust Keypoint Point Matching

I came across this interesting paper, which does feature matching (SIFT-like features) between images under a probabilistic formulation. The method starts with all matches as inliers and, as the iterations progress, gets rid of matches. About 120 citations as of May 2017. Jiayi Ma, Ji Zhao, Jinwen Tian, Alan L. Yuille, and Zhuowen Tu. Robust Point Matching via …
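The paper's probabilistic formulation is more involved; purely as an illustration of the "start with everything as an inlier, then prune" idea, here is a toy loop that fits an affine model by least squares and drops the worst-fitting correspondences each iteration. The affine model, thresholds, and synthetic data are my assumptions, not the paper's method.

```python
import numpy as np

def prune_matches(src, dst, n_iters=5, drop_frac=0.1):
    """src, dst: (N, 2) putative correspondences. Returns a boolean inlier mask."""
    inliers = np.ones(len(src), dtype=bool)
    for _ in range(n_iters):
        # Fit an affine map dst ~ [x, y, 1] @ A on the current inliers (least squares).
        X = np.hstack([src[inliers], np.ones((inliers.sum(), 1))])
        A, *_ = np.linalg.lstsq(X, dst[inliers], rcond=None)
        # Residuals for *all* matches under the current model.
        pred = np.hstack([src, np.ones((len(src), 1))]) @ A
        resid = np.linalg.norm(pred - dst, axis=1)
        # Keep only matches below the (1 - drop_frac) quantile of the inlier residuals.
        thresh = np.quantile(resid[inliers], 1.0 - drop_frac)
        inliers &= resid <= thresh
    return inliers

# Toy usage with synthetic matches: 90 good correspondences plus 10 gross outliers.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (100, 2))
dst = src + np.array([5.0, -3.0])              # a pure translation
dst[:10] += rng.uniform(50, 80, (10, 2))       # corrupt the first 10 matches
print(prune_matches(src, dst)[:10].sum())      # 0: the corrupted matches get pruned early
```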

Deep Learning Overview

View my Deep Learning Overview: [Google Slides]. Deep Learning Research Projects: [Google Slides]. Beware, these things get out of date very quickly; this presentation is from Oct 2016. The outline of the talk: toy neural network, loss function, stochastic gradient descent, forward-pass (neural function evaluation), backward-pass (gradient of the neural function w.r.t. the params), recent …
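A tiny numpy version of the first items in that outline, as a sketch (the layer sizes, tanh activation, and squared-error loss are my choices, not the slides'): a one-hidden-layer toy network, its forward-pass, the backward-pass gradients, and one SGD step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-hidden-layer network: y_hat = W2 @ tanh(W1 @ x), squared-error loss.
W1 = 0.1 * rng.normal(size=(4, 3))   # hidden x input
W2 = 0.1 * rng.normal(size=(1, 4))   # output x hidden

def forward(x):
    """Forward-pass: evaluate the neural function, returning the output and the hidden activation."""
    h = np.tanh(W1 @ x)
    return W2 @ h, h

def backward(x, y, y_hat, h):
    """Backward-pass: gradient of the loss 0.5*(y_hat - y)^2 w.r.t. the params."""
    d_out = y_hat - y                        # dLoss/dy_hat, shape (1,)
    dW2 = np.outer(d_out, h)                 # (1, 4)
    dh = W2.T @ d_out                        # (4,)
    dW1 = np.outer(dh * (1 - h**2), x)       # (4, 3), using tanh' = 1 - tanh^2
    return dW1, dW2

# One stochastic gradient descent step on a single (x, y) sample.
x, y = rng.normal(size=3), np.array([1.0])
lr = 0.1
y_hat, h = forward(x)
dW1, dW2 = backward(x, y, y_hat, h)
W1 -= lr * dW1
W2 -= lr * dW2
```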

Neural Networks as Universal Approximators : Intuitive Explanation

I came across this wonderful explanation of why neural networks with a hidden layer are universal approximators. Although not very helpful for practical purposes, it gives an intuitive feel for why neural networks give reasonable results. The basic idea is to analyze a sigmoid function as you change w and b, in particular the effect on $\sigma(w \times x + b)$ …
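A quick numeric sketch of that intuition (the specific values of w and b below are illustrative): as w grows with the transition point -b/w held fixed, the sigmoid sharpens into a step, and sums of such steps can tile an arbitrary function.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-1, 1, 9)
for w in (1, 10, 100):
    b = -0.5 * w                      # keeps the transition point at x = -b/w = 0.5
    print(w, np.round(sigmoid(w * x + b), 3))
# As w grows, sigma(w*x + b) approaches a step function located at x = 0.5.
```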