1). The weight w12 was damaged. Before this power failure the output of the network was 0.92129 when input x was applied. Compute the value of the weight w12, assuming the activation function is the logsig.
(Ans: w12 = 7.5)
2). A power failure damaged weights w11 and w12. Before the damage the output of the network was 0.539915 when the first column of x was applied and 0.327393 when the second column of x was applied. Compute the values of w11 and w12.
(Ans: w11 = -1 and w12 = 9.1)
3). The network has 2 hidden layers. Compute the sizes of the matrices W1, W2 and W3, AND compute the number of weights of this artificial neural network.
4). What are other activation functions apart from logsig and tanh?
Please answer ALL FOUR QUESTIONS
Q1:
From the given network, the weighted sum before the activation is:
y1 = (w11 * x1) + (w12 * x2) + (w13 * x3) + w1B
y1 = (-12 * 2) + (w12 * -3) + (8.2 * 5.3) + 5.5 = 24.96 - 3*w12
Here z1 = logsig(y1) = 0.92129, so inverting the logsig (z = 1/(1 + exp(-y))) gives
y1 = ln(0.92129 / (1 - 0.92129)) = 2.46
=> 24.96 - 3*w12 = 2.46
=> w12 = 7.5
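As a quick check, here is a minimal Python sketch (not part of the original solution) that inverts the logsig numerically and solves for w12, assuming the values given in the problem (x = [2, -3, 5.3], w11 = -12, w13 = 8.2, bias w1B = 5.5):

import math

z1 = 0.92129
y1 = math.log(z1 / (1 - z1))                 # inverse logsig (logit), ~2.46
constant = (-12 * 2) + (8.2 * 5.3) + 5.5     # terms of y1 not involving w12, = 24.96
w12 = (constant - y1) / 3                    # solve 24.96 - 3*w12 = y1
print(round(w12, 1))                         # -> 7.5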
################################################################################
Q2:
Similarly, we form two equations and then solve them to get the values of the weights.
For the first column:
y1 = (w11 * x1) + (w12 * x2) + (w13 * x3) + w1B
y1 = (w11 * 12) + (w12 * -13) + (15.2 * 7.3) + 19.5 = 130.46 + 12*w11 - 13*w12
Here z1 = 0.539915, so
y1 = ln(0.539915 / (1 - 0.539915)) = 0.16
=> 130.46 + 12*w11 - 13*w12 = 0.16
=> 12*w11 - 13*w12 + 130.3 = 0 ---------------------- equation1
For the second column:
y1 = (w11 * x1) + (w12 * x2) + (w13 * x3) + w1B
y1 = (w11 * 2.2) + (w12 * -11) + (15.2 * 5.4) + 19.5 = 101.58 + 2.2*w11 - 11*w12
Here z1 = 0.327393, so
y1 = ln(0.327393 / (1 - 0.327393)) = -0.72
=> 101.58 + 2.2*w11 - 11*w12 = -0.72
=> 2.2*w11 - 11*w12 + 102.3 = 0 ---------------------- equation2
Now solve equation1 and equation2 by eliminating w12, using the combination (11 * equation1 - 13 * equation2).
We get:
(132*w11 - 143*w12 + 1433.3) - (28.6*w11 - 143*w12 + 1329.9) = 0
=> 103.4*w11 + 103.4 = 0
=> w11 = -1
Now, substituting w11 = -1 into equation2:
2.2 * -1 - 11*w12 + 102.3 = 0
=> w12 = 9.1
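The same system can be verified with a small NumPy sketch (an addition for checking, not part of the original solution), solving the two logit equations directly:

import numpy as np

# eq1: 12*w11 - 13*w12 = 0.16 - 130.46    (first column)
# eq2: 2.2*w11 - 11*w12 = -0.72 - 101.58  (second column)
A = np.array([[12.0, -13.0],
              [2.2, -11.0]])
b = np.array([0.16 - 130.46, -0.72 - 101.58])
w11, w12 = np.linalg.solve(A, b)
print(round(w11, 1), round(w12, 1))          # -> -1.0 9.1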
################################################################################
Q3:
Biases can also be counted as weights because they are learnable parameters.
1. For the first weight matrix W1: the input layer has 3 elements and Hidden Layer 1 has 3 neurons, so W1 has dimensions 3x3 = 9 weights. Counting the biases as weights as well gives 9 + 3 = 12 weights.
2. For the second weight matrix W2: Hidden Layer 1 has 3 neurons and Hidden Layer 2 has 2 neurons, so W2 has dimensions 2x3 = 6 weights. Counting the biases gives 6 + 2 = 8 weights.
3. For the third weight matrix W3: Hidden Layer 2 has 2 neurons and the Output Layer has 2 neurons (outputs), so W3 has dimensions 2x2 = 4 weights. Counting the biases gives 4 + 2 = 6 weights.
Therefore the total number of weights in the ANN is:
9 + 6 + 4 = 19 (excluding biases)
12 + 8 + 6 = 26 (including biases)
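A short Python sketch of the same count, assuming the architecture described above (3 inputs, hidden layers of 3 and 2 neurons, 2 outputs):

layer_sizes = [3, 3, 2, 2]   # input, hidden 1, hidden 2, output

# one weight per connection between consecutive layers, one bias per neuron
weights = sum(n_out * n_in for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
biases = sum(layer_sizes[1:])
print(weights)               # -> 19 (excluding biases)
print(weights + biases)      # -> 26 (including biases)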
##############################################################################
Q4:
Activation functions typically have 3 desirable properties:
1. Nonlinearity
2. Continuous differentiability
3. Monotonicity
There are many other activation functions available. Some of them are listed below (a minimal sketch of their definitions follows the list):
1. ReLU (Rectified Linear Unit): ReLU(y) = max(y, 0)
2. Leaky ReLU: LReLU(y) = max(y, alpha * y), where alpha is a small number, e.g. 0.0001
3. ELU (Exponential Linear Unit): ELU(y) = y when y >= 0, and alpha * (exp(y) - 1) when y < 0
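For illustration only, minimal Python definitions of the three functions above (the alpha defaults are example values, not fixed by the question):

import math

def relu(y):
    return max(y, 0.0)

def leaky_relu(y, alpha=0.0001):
    return max(y, alpha * y)

def elu(y, alpha=1.0):
    return y if y >= 0 else alpha * (math.exp(y) - 1.0)

print(relu(-2.0), leaky_relu(-2.0), round(elu(-2.0), 4))   # -> 0.0 -0.0002 -0.8647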