ABSTRACT

Recognition of handwriting may seem a very easy task for humans, but it is a very complex one when done by a machine. Humans cannot afford to spend so much of their time interpreting characters in addition to analyzing the collected data; the main focus should be on analyzing the data rather than on interpreting it in the first place. Moreover, manual interpretation may not yield the right results, since it varies from person to person, and hence it is not accurate to a great extent and may take a lot of time and energy. Algorithms using neural networks have made this task much easier and more accurate.

Therefore, neural networks have been utilized to determine characters by recognizing them using algorithms that apply various rules for the prediction of handwritten characters. In this paper we discuss the recognition of handwritten digits taken from the MNIST data set and calculate the efficiency of our algorithm.

Keywords

Introduction

Handwriting is a form of writing peculiar to a person; it varies in the size and shape of letters and in the spacing between letters. There are different styles of handwriting, including cursive, block letters, calligraphy, signatures, etc. This makes the task of recognizing handwritten characters complex for neural networks, since they have to predict the characters based on their ability to learn rather than being explicitly programmed.

Arthur Samuel and Tom Mitchell offer definitions of machine learning. An informal definition given by Arthur Samuel is "the field of study that gives computers the ability to learn without being explicitly programmed." A more modern definition given by Tom Mitchell states: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."

Proposed Methodology

Digit recognition is done by training a multi-layer feedforward neural network using mini-batch stochastic gradient descent and the backpropagation algorithm. The MNIST data set obtained from link1 contains a modified version of the original training set of 60,000 images. The original training set is split into a training set with 50,000 examples and a validation set with 10,000 examples.
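The train/validation split described above can be sketched as follows. This is an illustrative sketch only; the array names and the assumption that the examples arrive as parallel numpy arrays of images and labels are ours, not from the paper.

```python
import numpy as np

# Placeholder arrays standing in for the 60,000 original MNIST
# training examples: one 784-value image row and one label each.
images = np.zeros((60000, 784), dtype=np.float32)
labels = np.zeros(60000, dtype=np.int64)

# First 50,000 examples form the training set, the remaining
# 10,000 the validation set.
train_images, train_labels = images[:50000], labels[:50000]
valid_images, valid_labels = images[50000:], labels[50000:]
```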

This set is then used to train the neural network. Each image is represented as a one-dimensional numpy array of 784 float values between 0 and 1. The labels are numbers between 0 and 9 indicating which digit the image represents. The neural network is an artificial neural network with sigmoid neurons; the output of each neuron is therefore calculated using the sigmoid function. The output is given as

a = σ(w · x + b) = 1 / (1 + e^(−(w · x + b)))
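The output of a single sigmoid neuron can be computed directly from this formula. A minimal sketch (function names are ours):

```python
import numpy as np

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^(-z)), squashing z into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(w, x, b):
    # Output of a single sigmoid neuron: sigma(w . x + b)
    return sigmoid(np.dot(w, x) + b)

# With all-zero weights and bias, the pre-activation is 0 and the
# output is sigma(0) = 0.5, regardless of the 784-value input.
x = np.full(784, 0.5)
w = np.zeros(784)
print(neuron_output(w, x, 0.0))  # 0.5
```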

where w is the weight vector, b is the bias and x is the input vector.

Initially, the weights and biases of the neural network are initialized randomly using a Gaussian distribution. They are later adjusted by applying mini-batch stochastic gradient descent. The training data is split into a number of mini-batches: in each epoch, the training data is shuffled and split into mini-batches of a fixed size, and gradient descent is applied.
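The Gaussian initialization and the shuffle-and-split step can be sketched as below. The 784 → 30 layer size is an assumption for illustration; the paper does not specify its hidden-layer width.

```python
import random
import numpy as np

rng = np.random.default_rng(0)

# Gaussian (standard normal) initialization of one layer's weight
# matrix and bias vector, for an assumed 784 -> 30 layer.
weights = rng.normal(size=(30, 784))
biases = rng.normal(size=(30, 1))

def make_mini_batches(training_data, mini_batch_size):
    # Shuffle the examples, then cut the shuffled list into
    # consecutive mini-batches of the given fixed size.
    data = list(training_data)
    random.shuffle(data)
    return [data[k:k + mini_batch_size]
            for k in range(0, len(data), mini_batch_size)]
```

Gradient descent is then applied to each mini-batch in turn, once per epoch.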

The neural network is trained over a number of epochs. The gradient of the cost function is calculated using the backpropagation algorithm. This calculated gradient is then used to update the weights and biases of the neural network.

Results

The settings of the hyperparameters are as follows. On taking epochs = 30, mini-batch size = 10 and learning rate = 3.0, we get the following accuracy for the 10,000 test images in the MNIST data set. On taking epochs = 20, mini-batch size = 15 and learning rate = 2.8, we get the following accuracy for the 10,000 test images in the MNIST data set.
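The per-mini-batch update step described above, with the learning rate and mini-batch size from the first hyperparameter setting, can be sketched as follows (function and variable names are ours; the gradients would come from backpropagation):

```python
import numpy as np

def update_mini_batch(weights, biases, nabla_w, nabla_b, eta, m):
    # Gradient-descent step: each parameter moves against its gradient
    # summed over the mini-batch, scaled by eta / m (m = batch size).
    weights = [w - (eta / m) * nw for w, nw in zip(weights, nabla_w)]
    biases = [b - (eta / m) * nb for b, nb in zip(biases, nabla_b)]
    return weights, biases

# One update with eta = 3.0 and mini-batch size 10 (the first setting
# above), on a toy single-weight "network" with made-up gradients.
w, b = [np.array([1.0])], [np.array([0.0])]
gw, gb = [np.array([2.0])], [np.array([1.0])]
w, b = update_mini_batch(w, b, gw, gb, eta=3.0, m=10)
print(w[0][0], b[0][0])  # approximately 0.4 and -0.3
```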