Question

[Figure 4.6: The sigmoid threshold unit. Inputs $x_0, \dots, x_n$ with weights $w_0, \dots, w_n$ form $net = \sum_{i=0}^{n} w_i x_i$; the output is $o = \sigma(net) = \frac{1}{1 + e^{-net}}$.]

Gradient descent weight update rule for a tanh unit. (2 pts) Assume throughout this exercise that we are using gradient descent to minimize the error as defined in formula (4.2) on p. 89 in the textbook:

$$E(\vec{w}) = \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2$$

Recall that the corresponding weight update rule for a sigmoid unit like the one in Figure 4.6 on p. 96 in the textbook is:

$$\Delta w_i = \eta \sum_{d \in D} (t_d - o_d)\, o_d (1 - o_d)\, x_{i,d}$$

Let us replace the sigmoid function $\sigma$ in Figure 4.6 by the function tanh. In other words, the output of the unit is now:

$$o = \tanh(net) = \tanh(w_0 x_0 + w_1 x_1 + w_2 x_2 + \dots + w_n x_n)$$

Derive the new weight update rule. Show your work, and indicate clearly in your answer what the weight update rule for the tanh unit looks like. Hint: $\tanh'(z) = 1 - (\tanh(z))^2$.
Answer #1

We want to minimize the error from formula (4.2),

$$E(\vec{w}) = \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2,$$

where the unit's output is now $o_d = \tanh(net_d)$ with $net_d = \sum_{i=0}^{n} w_i x_{i,d}$.

Differentiate $E$ with respect to a weight $w_i$ using the chain rule:

$$\frac{\partial E}{\partial w_i} = \sum_{d \in D} (t_d - o_d) \cdot \frac{\partial}{\partial w_i}(t_d - o_d) = -\sum_{d \in D} (t_d - o_d)\, \frac{\partial o_d}{\partial net_d}\, \frac{\partial net_d}{\partial w_i}.$$

By the hint, $\frac{\partial o_d}{\partial net_d} = \tanh'(net_d) = 1 - (\tanh(net_d))^2 = 1 - o_d^2$, and $\frac{\partial net_d}{\partial w_i} = x_{i,d}$, so

$$\frac{\partial E}{\partial w_i} = -\sum_{d \in D} (t_d - o_d)(1 - o_d^2)\, x_{i,d}.$$

Gradient descent steps in the direction of the negative gradient, $\Delta w_i = -\eta \frac{\partial E}{\partial w_i}$, which gives the weight update rule for the tanh unit:

$$\Delta w_i = \eta \sum_{d \in D} (t_d - o_d)(1 - o_d^2)\, x_{i,d}.$$

Compared with the sigmoid rule, the derivative factor $o_d(1 - o_d)$ is simply replaced by $1 - o_d^2$.
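As a sanity check of the derivation, here is a minimal sketch (not part of the original answer; the training data and learning rate are made up for illustration) that implements the tanh unit's batch update rule and compares it against a numerical gradient of the error:

```python
# Sketch: a tanh unit with the derived batch update rule,
# checked against a central-difference gradient of E = 1/2 * sum_d (t_d - o_d)^2.
# The toy data below is invented for illustration only.
import math

def output(w, x):
    """o = tanh(net), where net = sum_i w_i * x_i (x[0] = 1 is the bias input x_0)."""
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)))

def error(w, X, T):
    """E(w) = 1/2 * sum_d (t_d - o_d)^2, as in textbook formula (4.2)."""
    return 0.5 * sum((t - output(w, x)) ** 2 for x, t in zip(X, T))

def delta_w(w, X, T, eta):
    """Derived rule: dw_i = eta * sum_d (t_d - o_d) * (1 - o_d^2) * x_{i,d}."""
    dw = [0.0] * len(w)
    for x, t in zip(X, T):
        o = output(w, x)
        for i, xi in enumerate(x):
            dw[i] += eta * (t - o) * (1 - o * o) * xi
    return dw

# Toy training set: each example is (x_0 = 1 bias, x_1, x_2) with a target in (-1, 1).
X = [(1.0, 0.5, -1.2), (1.0, -0.3, 0.8), (1.0, 1.1, 0.4)]
T = [0.7, -0.2, 0.9]
w = [0.1, 0.1, 0.1]
eta, h = 1.0, 1e-6

# The analytic update should equal -eta * dE/dw_i; compare with central differences.
analytic = delta_w(w, X, T, eta)
for i in range(len(w)):
    wp, wm = list(w), list(w)
    wp[i] += h
    wm[i] -= h
    numeric = -(error(wp, X, T) - error(wm, X, T)) / (2 * h)
    assert abs(analytic[i] - numeric) < 1e-6, (analytic[i], numeric)
print("update rule matches numerical gradient:", [round(d, 6) for d in analytic])
```

If the assertions pass, the analytic update $\eta \sum_d (t_d - o_d)(1 - o_d^2) x_{i,d}$ agrees with $-\eta\, \partial E / \partial w_i$ computed numerically, which is exactly what the derivation claims.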

