C++ Back Propagation Neural Network Code v2


There was a lot of feedback on my neural network implementation, mostly regarding architectural suggestions, so I sat down and rewrote it; it's pretty elegant now. I separated the training of the network from the network itself, so there is now a basic feed-forward network, a data-set reader and a neural network trainer. I also renamed several data structures to make things more understandable, and this time I wasn't lazy and used proper header files and includes :P

Below is an updated class diagram of the new version:

Here’s the updated implementation (with a VS2k8 solution):

The original tutorials can be found here:

About Bobby
I'm a programmer at Ubisoft. My work interests include Animation and Artificial Intelligence. All opinions are my own!

27 Responses to C++ Back Propagation Neural Network Code v2

  1. liyen says:

    i cannot download the source code :(

  2. J says:

    Hey Bobby,

    I just wanted to say good work on your neural net code and articles/tutorials. I study Theoretical Neuroscience and Artificial Intelligence as a PhD in England and I found I have very many similar interests to you (I used to run a CS 1.5/1.6 league in the UK called CSGN).

    I come to the field of neural networks from a biological and psychological background, so the coding is my main obstacle. I mainly work with competitive networks (and Spatial Orientated Maps) for my modelling, so familiarising myself with the code for a back-prop net here was very useful!

    Keep up the good work :-)

  3. Bobby says:

    I’m really glad you found it useful! As long as at least one person found it useful, then I’ve succeeded at what I wanted to do.

Haha, a bit off topic, but man I miss playing competitive CS!

  4. J says:

Same, although it is only as good as the people you are playing with. I used to have shed loads of fun with the other 4/5 members of my team when I played. Without them it was enjoyable, but nowhere near as much!

I have managed to get to grips with much of your code, mainly thanks to all your helpful comments. I have a couple of questions regarding the training set-up – is it possible to read in separate .csv files for training/testing/validation rather than using the percentage split currently in place? Feel free to shoot me an email :-)

  5. Scott Higgins says:

    Hey, your project looks fantastic. I need a neural net for my final year project, to interpret shapes read in through a wiimote to enable gesture control.

    Is there any chance of getting a VS 2005 version?

    Thank you,

    Scott Higgins

  6. Dean says:

    Much clearer. One question: do you perform gradient descent on the per-pattern error

    Ep = 1/2 * Σ (Tp – Op)²

    or on the total error

    E = Σ Ep

    (where Σ denotes a sum)?

    Thank you

    Jermaine

  7. Bobby says:

    Gradient descent is the name of the weight-update technique; the errors calculated are used in the update of the weights. As such, I don't understand your question. Also, what do Tp and Op signify?
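
    For readers following along: the notation in the question above is presumably the standard distinction between per-pattern and total error in back-propagation. In the usual formulation (this is background, not part of the original post):

    ```latex
    % Per-pattern error: T_{pk} is the target and O_{pk} the actual output
    % of output unit k for training pattern p:
    E_p = \tfrac{1}{2} \sum_k \left( T_{pk} - O_{pk} \right)^2
    % Total error over the whole training set:
    E = \sum_p E_p
    % Online (stochastic) backprop descends on E_p after each pattern;
    % batch backprop accumulates the gradients and descends on E once per epoch.
    ```

    Both are valid; they correspond to online versus batch training.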

  8. Marty says:

    hi,

    I can't figure out how to use the outputted weights with the actual NN. How can I instantiate the NN with my trained weights to get my NN's output?

  9. vania says:

    Hi Bobby, I am a computer science student in the Brazilian city of BH. I loved your blog; it has helped me to understand neural networks. Often, people who have a lot of knowledge don't have much goodwill to teach, let alone to put it on the internet. Fortunately, wonderful people like you exist. Kisses
    PS: You're really cute !!!!!!!!
    Vania

  10. Ricardo says:

    Please delete my previous post; it is corrupted.

    I re-uploaded it to pastebin: http://pastebin.com/yZLSmpXh

    It is a patch to compile on Linux.

    Great Post!!

    Ricardo.

  11. Daytona675x says:

    Just found a bug in neuralNetwork::loadWeights: cstr is allocated with new[], so it needs delete[] cstr rather than delete cstr.
    Cheers,
    Daniel
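
    For anyone hitting the same issue: memory allocated with `new[]` must be released with `delete[]`; using plain `delete` on an array allocation is undefined behavior. A minimal sketch of the pattern involved (this mimics the tokenizing done in loadWeights, but the helper name and surrounding logic are illustrative, not the blog's actual code):

    ```cpp
    #include <cstring>
    #include <string>

    // A std::string is copied into a heap-allocated char buffer so it can be
    // tokenized with the C-style std::strtok (which modifies its argument).
    std::string firstToken(const std::string& line)
    {
        char* cstr = new char[line.size() + 1];  // array allocation: new[]
        std::strcpy(cstr, line.c_str());

        char* tok = std::strtok(cstr, ",");      // first comma-separated field
        std::string result = tok ? tok : "";

        delete[] cstr;  // must match new[]; plain `delete` here is UB
        return result;
    }
    ```

    The same rule applies everywhere: `new` pairs with `delete`, `new[]` pairs with `delete[]`.
    
    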

  12. venkat says:

    Which artificial neural network algorithm should we use to classify water pollution? The inputs are pH, chloride and fluoride. This is my project. Please also give me the code and tell me how to execute it.

    • Bobby says:

      I am not going to give you code past what is on my blog. It is your project so do it yourself. I think the tutorials on this site and the web are more than sufficient for you to figure it out. I am not going to do your work for you.

    • James says:

      Jeeze you sound so rude. Plus your solution is really easy. Go read something.

      • Bobby says:

        Not sure if you are replying to me or the initial comment but I do tend to get annoyed when people ask me for code or questions especially considering how much information is available within the post and on the internet. I have said this numerous times on this blog that I will gladly answer questions and help when I can BUT I will not do people’s work for them or spoon-feed them…

    • James says:

      Definitely replying to his original comment. It indents when it’s a reply to a specific comment above ;)

  13. aliyaho says:

    Hi, I've followed your guide to make my first basic neural network (an XOR network), but it seems to train incorrectly :(
    I've looked at many sites/tutorials to find where the problem in my code is. I'll write out the formulas I've used, and I'd be glad if you could take a look and maybe explain where my implementation is wrong :)
    I'm using 3 layers: input, hidden, output (2, 2, 1).
    b is the bias, bw the bias weight, and each neuron has an input, an output and an error.

    For the input layer:
    neuron[i].input = (b*bw + inputs[i]);
    where inputs is the array of inputs I use to train the net.
    neuron.output = sigmoid(neuron.input);

    For all other layers (hidden & output):
    sum = for every neuron in the previous layer: neuron.output * weightWithMe;
    where weightWithMe is the connection between the current neuron and the neuron in the previous layer.

    Then, as for the input layer: neuron.input = (sum + b*bw);
    neuron.output = sigmoid(neuron.input);

    Calculating the errors:
    for the output layer:
    neuron.error = myOutput*(1-myOutput)*(targetOutput-myOutput);
    for all other layers (hidden & input):
    sum = for every neuron in the next layer: neuron.error * weightWithMe;
    neuron.error = neuron.output*(1-neuron.output)*sum;

    Updating the weights:
    for the weight between neurons i -> j: Wij += learnRate * i.output * j.error;

    Thank you very very much for your help :)
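
    Since the comment above walks through the full set of backprop formulas, here is a minimal, self-contained C++ sketch of them (not the blog's actual classes; all names are illustrative). One deliberate difference from the comment: the input layer simply forwards its values instead of applying the sigmoid, which is the standard setup and a plausible source of the reported bug.

    ```cpp
    #include <cmath>

    // Minimal 2-2-1 feed-forward network implementing the update rules
    // from the comment above. Value-initialize (TinyNet n{};) for zero weights.
    struct TinyNet {
        double w1[2][2], b1[2];   // input -> hidden weights and hidden biases
        double w2[2],    b2;      // hidden -> output weights and output bias
        double hidden[2], out;    // activations from the last forward pass

        static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

        // Forward pass: input values are forwarded as-is (no sigmoid).
        double forward(const double in[2]) {
            for (int j = 0; j < 2; ++j)
                hidden[j] = sigmoid(in[0]*w1[0][j] + in[1]*w1[1][j] + b1[j]);
            out = sigmoid(hidden[0]*w2[0] + hidden[1]*w2[1] + b2);
            return out;
        }

        // One online backprop step:
        //   output delta: o*(1-o)*(t-o)
        //   hidden delta: h*(1-h)*sum(next-layer delta * connecting weight)
        //   weight update: w += lr * (source output) * (target delta)
        void train(const double in[2], double target, double lr) {
            forward(in);
            double dOut = out * (1.0 - out) * (target - out);
            double dHid[2];
            for (int j = 0; j < 2; ++j)   // use pre-update w2 for hidden deltas
                dHid[j] = hidden[j] * (1.0 - hidden[j]) * dOut * w2[j];
            for (int j = 0; j < 2; ++j) w2[j] += lr * hidden[j] * dOut;
            b2 += lr * dOut;
            for (int j = 0; j < 2; ++j) {
                for (int i = 0; i < 2; ++i) w1[i][j] += lr * in[i] * dHid[j];
                b1[j] += lr * dHid[j];
            }
        }
    };
    ```

    A quick sanity check for any implementation like this: with a small, fixed learning rate, repeated train() calls on one pattern should steadily reduce the error on that pattern; if they don't, the delta or weight-update signs are usually the culprit.
    
    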

    • Bobby says:

      All your questions are covered by my NN tutorial posts, please read them. There are two neural network code examples on this blog and I’m sure there are dozens more on the web.

  14. pun_tgif@hotmail.com says:

    Hi, I'm a student in Thailand and a beginner with ANNs. I understand this implements training on a dataset, but I need to classify new data and return its actual class. Can you suggest how to do the testing phase in an ANN? Thank you.

  15. arihant says:

    Hi sir, I am from India. Can you please provide me with code implementing an artificial neural network for rainfall forecasting using the back-propagation algorithm? I would be really indebted to you. I have to submit it on the 21st of September. Thanks in advance.

  16. Just want to say your article is surprising. The clarity of your post is simply excellent and I can assume you are knowledgeable in this subject. With your permission, allow me to grab your RSS feed to keep up to date with upcoming posts. Thank you a million and please continue the rewarding work.

  17. This piece of writing is genuinely a good one; it helps new web visitors who are interested in blogging.

  18. Denise says:

    I blog often and I truly appreciate your content. This article has really piqued my interest.
    I am going to take a note of your blog and keep checking for new details about once per week.
    I opted in for your RSS feed too.

  19. charles says:

    Thanks for the code. I am taking the Coursera Machine learning course which uses Octave to teach NNs. Since I already know C++, that’s one less thing I have to learn in order to experiment.
