Learning Parameters, Part 5: AdaGrad, RMSProp, and Adam
Let’s look at gradient descent with an adaptive learning rate.
Before moving on to these more advanced optimization algorithms, let us revisit the problem of choosing a learning rate in gradient descent.
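Since this post covers all three update rules, here is a minimal NumPy sketch of AdaGrad, RMSProp, and Adam to keep in mind as we go. The function names, signatures, and default hyperparameters are my own illustrative choices, not anything prescribed by this series:

```python
import numpy as np

def adagrad_update(w, grad, cache, lr=0.01, eps=1e-8):
    """AdaGrad: accumulate all past squared gradients, so frequently
    updated parameters get an ever-smaller effective learning rate."""
    cache = cache + grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

def rmsprop_update(w, grad, cache, lr=0.001, beta=0.9, eps=1e-8):
    """RMSProp: replace AdaGrad's running sum with an exponentially
    decaying average, so the effective step size does not vanish."""
    cache = beta * cache + (1 - beta) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

def adam_update(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam: keep decaying averages of both the gradient (m) and its
    square (v), with bias correction for the zero-initialized moments.
    t is the 1-indexed step count."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)  # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)  # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Tiny demo on f(w) = w^2, whose gradient is 2w
w, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 101):
    w, m, v = adam_update(w, 2 * w, m, v, t, lr=0.1)
print(w)  # ends close to the minimum at 0
```

In all three rules the small constant eps guards against division by zero. The key design difference is how the squared-gradient history is kept: AdaGrad sums it forever, while RMSProp and Adam let it decay, which is what keeps their effective learning rate from shrinking to nothing.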