Neural Network Basics (BP Algorithm Derivation and a Basic Neural Network Implementation)

Back Propagation (BP)

The code below is Michael Nielsen's network.py; see his online book Neural Networks and Deep Learning for the full derivation.

1. The four fundamental equations behind backpropagation

Define the error of neuron j in layer l as

$$δ^l_j≡\frac{∂C}{∂z^l_j}$$

==>

An equation for the error in the output layer:

$$δ^L_j=\frac{∂C}{∂a^L_j}σ′(z^L_j)$$

(BP1)

  • L denotes the last (output) layer
  • it follows directly from the chain rule: $δ^L_j=\frac{∂C}{∂z^L_j}=\frac{∂C}{∂a^L_j}\frac{∂a^L_j}{∂z^L_j}=\frac{∂C}{∂a^L_j}σ′(z^L_j)$

==>

$$δ^L=∇_aC⊙σ′(z^L)$$

(BP1a)

  • the matrix-based form; ⊙ denotes the elementwise (Hadamard) product and $∇_aC$ is the vector of partial derivatives $∂C/∂a^L_j$

An equation for the error $δ^l$ in terms of the error $δ^{l+1}$ in the next layer:

$$δ^l=((w^{l+1})^Tδ^{l+1})⊙σ′(z^l)$$

(BP2)

  • this is the backpropagation step: multiplying by $(w^{l+1})^T$ moves the error backwards from layer l+1 to layer l

$$\frac{∂C}{∂b^l_j}=δ^l_j$$

(BP3)

$$\frac{∂C}{∂w^l_{jk}}=a^{l-1}_kδ^l_j$$

(BP4)

BP3 and BP4 give the gradients we actually need; BP1 provides the error at the output layer, and BP2 propagates it backwards layer by layer.

That's all for BP.
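To make BP1–BP4 concrete, here is a minimal numpy sketch (not from the original post; the names w1, b1, delta2, etc. are purely illustrative) that applies the four equations to one training example in a tiny 2-3-1 network with the quadratic cost:

import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def sigmoid_prime(z): return sigmoid(z) * (1 - sigmoid(z))

# tiny 2-3-1 network; each weight matrix has shape (neurons in layer l, neurons in layer l-1)
w1, b1 = np.random.randn(3, 2), np.random.randn(3, 1)
w2, b2 = np.random.randn(1, 3), np.random.randn(1, 1)
x, y = np.random.randn(2, 1), np.array([[1.0]])

# forward pass, keeping the weighted inputs z and the activations a
z1 = np.dot(w1, x) + b1
a1 = sigmoid(z1)
z2 = np.dot(w2, a1) + b2
a2 = sigmoid(z2)

# BP1: error at the output layer (for the quadratic cost, dC/da = a - y)
delta2 = (a2 - y) * sigmoid_prime(z2)
# BP2: propagate the error back through the transposed weights
delta1 = np.dot(w2.T, delta2) * sigmoid_prime(z1)
# BP3: dC/db = delta        BP4: dC/dw = delta times a^T of the previous layer
dC_db2, dC_dw2 = delta2, np.dot(delta2, a1.T)   # shapes (1, 1), (1, 3)
dC_db1, dC_dw1 = delta1, np.dot(delta1, x.T)    # shapes (3, 1), (3, 2)

These are exactly the quantities that the backprop method in the implementation below accumulates for every example in a mini-batch.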

2. Basic Implementation

cost function:

$$C(w,b)≡\frac{1}{2n}\sum_x \|y(x)−a\|^2$$

  • w: the collection of all weights in the network
  • b: all the biases
  • this is the quadratic cost function; n is the number of training inputs and a is the network's output when x is the input

$$w_k→w′_k=w_k−η\frac{∂C}{∂w_k}$$
$$b_k→b′_k=b_k−η\frac{∂C}{∂b_k}$$

These are the update rules of the gradient descent algorithm, where η is the learning rate.
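As a small illustration (a sketch, not part of the original code; quadratic_cost and gradient_descent_step are hypothetical helpers), the cost and one update step could be written as:

import numpy as np

def quadratic_cost(outputs, targets):
    """C = 1/(2n) * sum over x of ||y(x) - a||^2."""
    n = len(outputs)
    return sum(np.sum((y - a) ** 2) for a, y in zip(outputs, targets)) / (2.0 * n)

def gradient_descent_step(w, b, dC_dw, dC_db, eta):
    """One step of w -> w - eta*dC/dw and b -> b - eta*dC/db."""
    return w - eta * dC_dw, b - eta * dC_db

In practice the gradients are estimated on small mini-batches rather than the whole training set, which is the "stochastic" part of the SGD used below.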

def sigmoid(z):
    """The sigmoid function."""
    return 1.0/(1.0+np.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z)*(1-sigmoid(z))

We use the sigmoid activation function, whose derivative has the convenient closed form σ′(z)=σ(z)(1−σ(z)).
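As a quick check (not in the original post), the analytic derivative can be compared against a central finite difference:

import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def sigmoid_prime(z): return sigmoid(z) * (1 - sigmoid(z))

z = np.linspace(-5, 5, 11)
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)  # central difference
assert np.allclose(numeric, sigmoid_prime(z), atol=1e-8)

The full network implementation is listed next.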

"""
A module to implement the stochastic gradient descent learning
algorithm for a feedforward neural network. Gradients are calculated
using backpropagation. Note that I have focused on making the code
simple, easily readable, and easily modifiable. It is not optimized,
and omits many desirable features.
"""

#### Libraries
# Standard library
import random

# Third-party libraries
import numpy as np
class Network(object):

def __init__(self, sizes):
"""if we want to create a Network object with 2 neurons in
the first layer, 3 neurons in the second layer,
and 1 neuron in the final layer:
net = Network([2, 3, 1])"""
self.num_layers = len(sizes)
self.sizes = sizes
# init all w and b for every layer
self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
self.weights = [np.random.randn(y, x)
for x, y in zip(sizes[:-1], sizes[1:])]

def feedforward(self, a):
"""Return the output of the network if "a" is input."""
for b, w in zip(self.biases, self.weights):
a = sigmoid(np.dot(w, a)+b)
return a

def SGD(self, training_data, epochs, mini_batch_size, eta,
test_data=None):
"""Train the neural network using mini-batch stochastic
gradient descent. The "training_data" is a list of tuples
"(x, y)" representing the training inputs and the desired
outputs. The other non-optional parameters are
self-explanatory. If "test_data" is provided then the
network will be evaluated against the test data after each
epoch, and partial progress printed out. This is useful for
tracking progress, but slows things down substantially."""
if test_data: n_test = len(test_data)
n = len(training_data)
for j in xrange(epochs):
random.shuffle(training_data)
mini_batches = [
training_data[k:k+mini_batch_size]
for k in xrange(0, n, mini_batch_size)]
for mini_batch in mini_batches:
self.update_mini_batch(mini_batch, eta)
if test_data:
print "Epoch {0}: {1} / {2}".format(
j, self.evaluate(test_data), n_test)
else:
print "Epoch {0} complete".format(j)

def update_mini_batch(self, mini_batch, eta):
"""Update the network's weights and biases by applying
gradient descent using backpropagation to a single mini batch.
The "mini_batch" is a list of tuples "(x, y)", and "eta"
is the learning rate."""
# nabla denotes ∇
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
for x, y in mini_batch:
#the gradient for the cost function C_x
delta_nabla_b, delta_nabla_w = self.backprop(x, y)
nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
self.weights = [w-(eta/len(mini_batch))*nw
for w, nw in zip(self.weights, nabla_w)]
self.biases = [b-(eta/len(mini_batch))*nb
for b, nb in zip(self.biases, nabla_b)]

def backprop(self, x, y):
"""Return a tuple ``(nabla_b, nabla_w)`` representing the
gradient for the cost function C_x. ``nabla_b`` and
``nabla_w`` are layer-by-layer lists of numpy arrays, similar
to ``self.biases`` and ``self.weights``."""
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
# feedforward
activation = x
activations = [x] # list to store all the activations, layer by layer
zs = [] # list to store all the z vectors, layer by layer
for b, w in zip(self.biases, self.weights):
z = np.dot(w, activation)+b
zs.append(z)
activation = sigmoid(z)
activations.append(activation)
# backward pass
delta = self.cost_derivative(activations[-1], y) * \
sigmoid_prime(zs[-1])
nabla_b[-1] = delta
nabla_w[-1] = np.dot(delta, activations[-2].transpose())
# Note that the variable l in the loop below is used a little
# differently to the notation in Chapter 2 of the book. Here,
# l = 1 means the last layer of neurons, l = 2 is the
# second-last layer, and so on. It's a renumbering of the
# scheme in the book, used here to take advantage of the fact
# that Python can use negative indices in lists.
for l in xrange(2, self.num_layers):
z = zs[-l]
sp = sigmoid_prime(z)
delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
nabla_b[-l] = delta
nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
return (nabla_b, nabla_w)

def evaluate(self, test_data):
"""Return the number of test inputs for which the neural
network outputs the correct result. Note that the neural
network's output is assumed to be the index of whichever
neuron in the final layer has the highest activation."""
test_results = [(np.argmax(self.feedforward(x)), y)
for (x, y) in test_data]
return sum(int(x == y) for (x, y) in test_results)

def cost_derivative(self, output_activations, y):
"""Return the vector of partial derivatives \partial C_x /
\partial a for the output activations."""
return (output_activations-y)
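A quick way to gain confidence in backprop is a numerical gradient check. The sketch below is not part of the original code; it assumes the Network class and the sigmoid functions above are in scope (run with Python 2, like the listing). It perturbs a single weight and compares the finite-difference slope of the per-example quadratic cost C_x = 0.5*||a - y||^2 with the gradient returned by backprop:

import numpy as np

def cost(net, x, y):
    # per-example quadratic cost, the one whose gradient backprop returns
    return 0.5 * np.sum((net.feedforward(x) - y) ** 2)

net = Network([2, 3, 1])
x, y = np.random.randn(2, 1), np.array([[1.0]])
nabla_b, nabla_w = net.backprop(x, y)

eps = 1e-5
j, k = 1, 0                      # check one arbitrary weight in the first layer
net.weights[0][j, k] += eps
c_plus = cost(net, x, y)
net.weights[0][j, k] -= 2 * eps
c_minus = cost(net, x, y)
net.weights[0][j, k] += eps      # restore the original weight
numeric = (c_plus - c_minus) / (2 * eps)
assert abs(numeric - nabla_w[0][j, k]) < 1e-7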

Test with the MNIST dataset:

import mnist_loader
training_data, validation_data, test_data = \
    mnist_loader.load_data_wrapper()
net = Network([784, 30, 10])
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)
Epoch 0: 8945 / 10000
Epoch 1: 9171 / 10000
Epoch 2: 9251 / 10000
Epoch 3: 9307 / 10000
Epoch 4: 9332 / 10000
Epoch 5: 9397 / 10000
Epoch 6: 9401 / 10000
Epoch 7: 9415 / 10000
Epoch 8: 9425 / 10000
Epoch 9: 9419 / 10000
Epoch 10: 9469 / 10000
Epoch 11: 9442 / 10000
Epoch 12: 9462 / 10000
Epoch 13: 9458 / 10000
Epoch 14: 9461 / 10000
Epoch 15: 9498 / 10000
Epoch 16: 9481 / 10000
Epoch 17: 9488 / 10000
Epoch 18: 9507 / 10000
Epoch 19: 9483 / 10000
Epoch 20: 9473 / 10000
Epoch 21: 9484 / 10000
Epoch 22: 9507 / 10000
Epoch 23: 9488 / 10000
Epoch 24: 9484 / 10000
Epoch 25: 9473 / 10000
Epoch 26: 9503 / 10000
Epoch 27: 9507 / 10000
Epoch 28: 9488 / 10000
Epoch 29: 9503 / 10000

3. Other details I want to mention …

Matrix element $w^l_{jk}$ is the weight of the connection from the k-th neuron in layer l−1 to the j-th neuron in layer l.
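For example (a small sketch, assuming the Network class above), in net = Network([2, 3, 1]) the first weight matrix has shape (3, 2): row j holds the weights feeding hidden neuron j, and column k corresponds to input neuron k. This ordering is what lets the forward pass compute np.dot(w, a) without any transposes.

net = Network([2, 3, 1])
print(net.weights[0].shape)   # (3, 2): 3 hidden neurons, each fed by 2 inputs
print(net.weights[1].shape)   # (1, 3): 1 output neuron fed by 3 hidden neurons
# net.weights[0][j, k] is the weight from input neuron k to hidden neuron j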