Neural Network Learning without Backpropagation
Author: Bogdan M. Wilamowski, Fellow, IEEE
Source: IEEE Industrial Electronics Magazine
Date: 2012/9/24
Presenter: 林哲緯
Outline
• Introduction
• Forward & Backpropagation
• Derivation
• Conclusion

• Each layer is connected only to the layer immediately before or after it
FCC & ANN
Fully connected cascade (FCC) and arbitrarily connected neuron network (ANN)
Weight Updating Rule

First-order algorithm (used with MLP):

w_{n+1} = w_n - α g_n

α: learning constant
g: gradient vector

Second-order algorithm (used with FCC and ANN):

w_{n+1} = w_n - (J_n^T J_n + μI)^{-1} g_n,  where g_n = J_n^T e_n

J: Jacobian matrix
μ: learning parameter
I: identity matrix
e: error vector
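The two update rules above can be sketched in NumPy as follows; the function names and the damping default are illustrative, not from the paper:

```python
import numpy as np

def first_order_update(w, g, alpha=0.1):
    # First-order (gradient-descent / EBP) step: w_{n+1} = w_n - alpha * g_n
    return w - alpha * g

def second_order_update(w, J, e, mu=0.01):
    # Levenberg-Marquardt step:
    # w_{n+1} = w_n - (J^T J + mu*I)^{-1} g_n, with g_n = J^T e_n
    g = J.T @ e
    A = J.T @ J + mu * np.eye(w.size)
    return w - np.linalg.solve(A, g)  # solve() instead of an explicit inverse
```

Solving the linear system rather than forming the inverse is the usual numerically safer choice.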
Outline
• Introduction
• Forward & Backpropagation
• Derivation
• Conclusion
Forward & Backpropagation

net_j = Σ_{i=1}^{n_i} w_{j,i} y_{j,i} + w_{j,0}

y_j = f_j(net_j)
Forward & Backpropagation

f(t) = 1 / (1 + e^{-t})
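The forward computation on the two slides above, for a single neuron with the unipolar sigmoid, can be sketched as a minimal illustration (the function name is mine):

```python
import math

def neuron_forward(weights, bias, inputs):
    # net_j = sum_i w_{j,i} * y_{j,i} + w_{j,0}
    net = bias + sum(w * y for w, y in zip(weights, inputs))
    # y_j = f(net_j) with the unipolar sigmoid f(t) = 1 / (1 + e^{-t})
    return 1.0 / (1.0 + math.exp(-net))
```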
Forward & Backpropagation
• F( ) is a nonlinear relationship function between the output y_j of neuron j and the network output O_m

O_m = F_{m,j}(y_j)
Outline
• Introduction
• Forward & Backpropagation
• Derivation
• Conclusion
Derivation

∂net_j / ∂w_{j,i} = y_{j,i}

y_j = f_j(net_j)

s_j = ∂y_j / ∂net_j = ∂f_j(net_j) / ∂net_j
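For the unipolar sigmoid from the earlier slide, the slope s_j has the well-known closed form f'(t) = f(t)(1 - f(t)), so it can be computed from the neuron output alone (helper name is mine):

```python
def sigmoid_slope(y):
    # s_j = dy_j/dnet_j for f(t) = 1/(1+e^{-t}), written in terms of
    # the neuron output y_j = f(net_j):  f'(t) = f(t) * (1 - f(t))
    return y * (1.0 - y)
```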
Derivation

δ_{2,1} = ∂F_{2,1}(y_1) / ∂net_1 = (∂F_{2,1}(y_1) / ∂y_1) · (∂y_1 / ∂net_1) = F'_{2,1} s_1

Forward:
S_2 = S_1 w_{1,2}
S_3 = S_2 w_{2,3} + S_1 w_{1,3}
...

Backpropagation:
δ_{2,1} = S_1 F'_{2,1}
δ_{3,1} = S_1 F'_{3,1}
...
Derivation

EBP algorithm:

w_{n+1} = w_n - α g_n

LM algorithm:

w_{n+1} = w_n - (J_n^T J_n + μI)^{-1} g_n

g_{j,i} = ∂E / ∂w_{j,i} = y_{j,i} δ_j
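The gradient element g_{j,i} = y_{j,i} δ_j says that each weight's gradient is simply the signal entering that weight times the neuron's δ; a sketch (the function name is mine, and the bias is treated as a weight with constant input 1):

```python
def neuron_gradient(inputs, delta):
    # g_{j,i} = y_{j,i} * delta_j for every weight of neuron j;
    # the bias w_{j,0} sees a constant input of 1
    return [delta] + [y * delta for y in inputs]
```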
Minimization Problem
Newton's method
Minimization Problem
Steepest descent method
Least squares problem
Gauss–Newton algorithm
http://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm
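The Gauss–Newton iteration referenced above can be sketched as follows; the function names are mine, and adding a damping term μI to J^T J is what turns it into the LM algorithm used by the paper:

```python
import numpy as np

def gauss_newton(residual, jacobian, w0, steps=10):
    # Gauss-Newton iteration for least squares:
    #   w <- w - (J^T J)^{-1} J^T e
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        e = residual(w)   # error vector at the current weights
        J = jacobian(w)   # Jacobian of the residuals
        w = w - np.linalg.solve(J.T @ J, J.T @ e)
    return w
```

For a linear residual the method converges in a single step, which makes a handy sanity check.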
Calculation of δ

δ_{1,1} = S_1
δ_{2,2} = S_2
δ_{2,1} = S_2 w_{1,2} S_1
δ_{3,3} = S_3
δ_{3,2} = S_3 w_{2,3} S_2
δ_{3,1} = S_3 w_{1,3} S_1 + S_3 w_{2,3} S_2 w_{1,2} S_1

Equivalently, computed recursively from previously obtained δ values:

δ_{3,2} = δ_{3,3} w_{2,3} δ_{2,2}
δ_{3,1} = δ_{3,3} w_{1,3} δ_{1,1} + δ_{3,3} w_{2,3} δ_{2,1}
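The recursion on this slide extends naturally to an n-neuron fully connected cascade; a sketch under that assumption (function and variable names are mine):

```python
def fcc_deltas(S, W):
    # delta[(k, j)] for a fully connected cascade with neurons 1..n.
    # S[k-1]: slope s_k of neuron k; W[(j, k)]: weight from the output of
    # neuron j to the input of neuron k (j < k).  Recursion from the slide:
    #   delta_{k,k} = S_k
    #   delta_{k,j} = delta_{k,k} * sum_{m=j}^{k-1} w_{m,k} * delta_{m,j}
    n = len(S)
    d = {}
    for k in range(1, n + 1):
        d[(k, k)] = S[k - 1]
        for j in range(k - 1, 0, -1):
            d[(k, j)] = d[(k, k)] * sum(W[(m, k)] * d[(m, j)]
                                        for m in range(j, k))
    return d
```

Because every δ is built from already-computed values, the whole table is filled in one sweep, with no separate backward pass.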
Outline
• Introduction
• Forward & Backpropagation
• Derivation
• Conclusion
Conclusion
• Handling arbitrarily connected neuron architectures is no longer complicated
• Runs faster than traditional EBP, and the larger the architecture, the greater the speedup
• Inherits the advantages and disadvantages of the LM algorithm
– Advantage: each epoch requires only one iteration
– Disadvantage: each iteration requires a matrix inversion