T&C LAB-AI

Robotics

Neural Network

Lecture 8

Jeong-Yean Yang

2020/12/10



H.W. Neural Network



HW 1. Sigmoidal NN for sin(x)

• 0 < x < 10 → X = linspace(0, 10, 20), N = 20
• Y = sin(X)

• Find the best NN result with Sigmoidal NN

– W1 and W2 = zeros or randn
– How many iterations are required?

[Figure: plot of Y = sin(x)]
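A minimal numpy sketch of one way to set HW 1 up (the 1-20-1 architecture, initialization scale, alpha, and iteration count are illustrative assumptions, not values from the slides):

import numpy as np

# Data: N = 20 samples of y = sin(x) on 0 < x < 10
N = 20
X = np.linspace(0, 10, N).reshape(1, N)
Y = np.sin(X)

H = 20                                      # hidden units (assumed)
rng = np.random.default_rng(0)
W1 = rng.standard_normal((H, 2)) * 0.5      # hidden weights; 2nd column is the bias
W2 = rng.standard_normal((1, H + 1)) * 0.5  # output weights + bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

alpha = 0.01
Xb = np.vstack([X, np.ones((1, N))])        # input with bias row, (2, N)
for it in range(50000):                     # "how many iterations?": experiment
    Z = sigmoid(W1 @ Xb)                    # hidden activations, (H, N)
    Zb = np.vstack([Z, np.ones((1, N))])    # (H+1, N)
    e = Y - W2 @ Zb                         # error, (1, N)
    dY = -e                                 # dJ/dYhat for J = 0.5*sum(e**2)
    dZ = (W2[:, :H].T @ dY) * Z * (1 - Z)   # backprop through the sigmoid
    W2 -= alpha * (dY @ Zb.T)
    W1 -= alpha * (dZ @ Xb.T)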



HW 2. Why Does J Show This Phenomenon?

• During the learning process, J shows a sudden jump. Why?




HW 3. Using RBF for Y = sin(x)

• 0 < x < 10 → X = linspace(0, 10, 20), N = 20
• Y = sin(X)

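A minimal RBF sketch under assumed design choices (Gaussian bases with fixed, evenly spaced centers and a common width; only the output weights are trained):

import numpy as np

N = 20
X = np.linspace(0, 10, N)
Y = np.sin(X)

H = 10                                    # number of Gaussian bases (assumed)
C = np.linspace(0, 10, H)                 # fixed, evenly spaced centers (assumed)
sigma = 1.0                               # common width (assumed)

Phi = np.exp(-(X[:, None] - C[None, :])**2 / (2 * sigma**2))  # (N, H)
Phi = np.hstack([Phi, np.ones((N, 1))])   # bias column: (N, H+1)

W2 = np.zeros(H + 1)
alpha = 0.01
for it in range(5000):
    e = Y - Phi @ W2                      # error vector, (N,)
    W2 += alpha * (Phi.T @ e)             # gradient step on J = 0.5*sum(e**2)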



HW 4. What Are the Differences Between Sigmoidal and RBF NNs?

• Iteration count, convergence, alpha, initial values, ...
• Anything is O.K.


HW 5. Why Is sin(x) Not Smooth?

Find the answer and the result.



HW 6. Noisy Data with RBF NN

• n = 100
• X = linspace(0, 10, n)
• y = -0.1*pow(x-2, 2) + randn(n)

• Try RBF with the above x and y.
• What does the RBF become?
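The slide's pseudocode translates directly to numpy:

import numpy as np

n = 100
x = np.linspace(0, 10, n)
y = -0.1 * (x - 2)**2 + np.random.randn(n)   # quadratic trend + unit Gaussian noise
# Feeding (x, y) to the RBF fit above: with the squared error, the fit
# approaches the underlying mean trend -0.1*(x-2)^2, averaging the noise away.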



Unbalanced Cost Function



We Used the Squared Error

• Recall the differentiation used in the gradient descent method:

$$J = \frac{1}{2}\sum_{i \in D} e_i^2 = \frac{1}{2}\sum_{i \in D} (y_i - \hat{y}_i)^2 = \frac{1}{2}\,\|e\|^2$$

$$\Delta W = -\alpha\,\frac{\partial J}{\partial W}, \qquad W \leftarrow W + \Delta W$$

• Question: if we use the absolute error |e|, then what occurs?



Absolute Error

• The absolute error is not widely used because of its differentiation.
– Its derivative is NOT continuous.

$$J = \frac{1}{2}\sum_{i \in D} e_i^2 \;\Rightarrow\; J' = \sum_{i \in D} e_i\,e_i'$$

$$J = \sum_{i \in D} |e_i| \;\Rightarrow\; J' = \sum_{i \in D} \operatorname{sgn}(e_i)\,e_i', \qquad \operatorname{sgn}(e_i) = \begin{cases} +1, & e_i > 0 \\ 0, & e_i = 0 \\ -1, & e_i < 0 \end{cases}$$

background image

T&C LAB-AI

Robotics

In Spite of All, Why Do We Care About |e|?

• Convergence
– Near the solution, the convergence rate of the squared error becomes too slow.

$$J = \frac{1}{2}\sum_{i \in D} e_i^2:\qquad \left.\frac{\partial J}{\partial w}\right|_{w=A} \neq \left.\frac{\partial J}{\partial w}\right|_{w=B}$$

The convergence rate is NOT constant.

$$J = \sum_{i \in D} |e_i|:\qquad \left.\frac{\partial J}{\partial w}\right|_{w=A} = \left.\frac{\partial J}{\partial w}\right|_{w=B}$$

The convergence rate is constant: sliding into the goal.

Remind sliding mode control (Benito Fernandez): $\tau = K\,\operatorname{sgn}(s)$.

[Figure: J vs. e for the squared and absolute costs, with sample points A and B]
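A scalar toy run (an assumed setup, not from the slides) makes the difference concrete:

import numpy as np

alpha = 0.1
w_sq, w_abs = 1.0, 1.0
for _ in range(50):
    w_sq  -= alpha * w_sq             # J = w^2/2: the step shrinks with w
    w_abs -= alpha * np.sign(w_abs)   # J = |w|:   constant-size step
print(w_sq, w_abs)   # the squared cost leaves a slow geometric tail;
                     # |w| slides at a constant rate, then chatters at the goal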



What Changes in OUR RBF Network?

With the squared error, the output-weight update of our RBF network was

$$J = \frac{1}{2}\sum_k e_k^2 = \frac{1}{2}\,e^T e, \qquad \frac{\partial J}{\partial W_2} = \frac{\partial e^T}{\partial W_2}\,e = -\,Y_{(h+1)}\,e,$$

where $Y_{(h+1)}$ is the hidden output with the bias row appended and $e = Y_D - Y_{(h+1)}^T W_2$ (the transpose is chosen so the row-column shapes match).

With the absolute error,

$$J = \sum_k |e_k|, \qquad \frac{\partial J}{\partial W_2} = \;?$$

How do we solve the matrix row-column problem in this |e| network?

A new equation is required!



See Derivative of Error in RBF NN

• The error vector e is replaced by a [+1, -1, or 0] vector.

Squared error, as before:

$$J = \frac{1}{2}\sum_k e_k^2 = \frac{1}{2}\,e^T e \;\Rightarrow\; \frac{\partial J}{\partial W_2} = -\,Y_{(h+1)}\,e$$

Absolute error: the same shapes, with $e$ replaced by its sign vector:

$$J = \sum_k |e_k| \;\Rightarrow\; \frac{\partial J}{\partial W_2} = -\,Y_{(h+1)}\,\operatorname{sgn}(e), \qquad \operatorname{sgn}(e) = \begin{bmatrix} +1 & -1 & \cdots & 0 \end{bmatrix}^T$$



Derivatives of W1 in RBF NN

Squared error:

$$J = \frac{1}{2}\sum_k e_k^2 \;\Rightarrow\; \frac{\partial J}{\partial W_{1,k}} = -\,e^T\,W_{2,k}\,\frac{\partial Z_k}{\partial W_{1,k}}$$

Absolute error: again, only the error vector changes:

$$J = \sum_k |e_k| \;\Rightarrow\; \frac{\partial J}{\partial W_{1,k}} = -\,\operatorname{sgn}(e)^T\,W_{2,k}\,\frac{\partial Z_k}{\partial W_{1,k}}, \qquad \operatorname{sgn}(e) = \begin{bmatrix} +1 & \cdots & 0 \end{bmatrix}^T$$


Example) l8abs1.py

$$J = \sum_k |e_k|, \qquad \operatorname{sgn}(e) = \begin{bmatrix} +1 & -1 & \cdots & 0 \end{bmatrix}^T \in \mathbb{R}^{n \times 1},$$

compared with the squared-error RBF-NN cost $J = \frac{1}{2}\sum_k e_k^2$.


Example) l8abs1.py

• Alpha = 0.1 → too many oscillations (chattering)




Example) l8abs1.py with Small Alpha

• Alpha = 0.01 → still too many oscillations




Alpha Is the Key to Chattering

Recall the gradient descent method.

[Figure: gradient steps on J vs. e for alpha = 0.1 and alpha = 0.01]

A big alpha moves faster and farther: alpha = 0.1 gives large chattering, alpha = 0.01 gives small chattering.



Alternative Strategy for Small Chattering

[Figure: gradient steps of the |e| cost near the goal, alpha = 0.1]

Near J = 0, the derivative of |e| is NOT continuous. We can use a hybrid method: in the small-error region, we use $e^2$ instead.

Insight from sliding mode control (with a low-pass filter):

$$|e| < \varepsilon:\quad J = e^2, \qquad J' = 2\,e\,e'$$

$$|e| \geq \varepsilon:\quad J = |e|, \qquad J' = \operatorname{sgn}(e)\,e', \quad \operatorname{sgn}(e) \in \{+1, -1, 0\}$$
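A sketch of the hybrid gradient in the spirit of l8abs2.py (the file is not shown; the helper name, the threshold eps, and its value are assumptions):

import numpy as np

def hybrid_grad(e, eps=0.1):
    """dJ/de of the hybrid cost: 2e inside |e| < eps (from e^2), sgn(e) outside."""
    return np.where(np.abs(e) < eps, 2.0 * e, np.sign(e))

# In the |e| training loop, replace np.sign(e) with hybrid_grad(e):
#     W2 += alpha * (Phi.T @ hybrid_grad(e))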



Example) l8abs2.py

[Figure: training result for l8abs2.py]

No chattering: the low-pass filter activates!



Another Idea of |e|

• Unbalanced Error 

[Figure: two unbalanced |e| costs. Case 1: slope 1/a for e > 0, slope -a for e < 0. Case 2: slope a for e > 0, slope -1/a for e < 0.]

What is it? Case 1: if e > 0, very generous; if e < 0, very tough.
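A sketch of the Case 1 gradient in the spirit of l8abs3.py (the file is not shown; the helper name and default are assumptions):

import numpy as np

def unbalanced_grad(e, a=3.0):
    # Case 1: slope 1/a for e > 0 (generous), slope -a for e < 0 (tough)
    return np.where(e > 0, 1.0 / a, -a)

# In the training loop: W2 += alpha * (Phi.T @ unbalanced_grad(e))
# With a = 3, overestimates (e < 0) are penalized ~9x harder than
# underestimates, so the fit settles near a low quantile of the data;
# a = 1/3 gives the mirror image.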



Example) l8abs3.py

• a = 3 or 1/3

[Figure: Case 1 (slopes 1/a and -a) and Case 2 (slopes a and -1/a)]



Can You Imagine the Result?

• Example) test a = 3 and a = 1/3 for Case 1

[Figure: Case 1 (slopes 1/a and -a) and Case 2 (slopes a and -1/a)]



RBF in Noisy Signal (l8abs4.py)

• RBF learning on the noisy signal y = f(x) + sin(20x).
• The result becomes the mean value.

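A data sketch in the spirit of l8abs4.py (the file is not shown, and f(x) is unspecified on the slide, so a placeholder trend is assumed):

import numpy as np

n = 100
x = np.linspace(0, 10, n)
f = np.sin(x)               # placeholder smooth trend; the slide's f(x) is unspecified
y = f + np.sin(20 * x)      # fast sin(20x) ripple acts as "noise"

# With far fewer basis functions than ripple oscillations, the RBF network
# cannot track sin(20x): the squared-error fit averages it out and recovers
# the smooth trend f(x), i.e., the mean value.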



Unbalanced Error with Noisy Signal


a = 6.554

How do we find the magic number 6.554?