CNN (Convolutional Neural Networks)

The Four Main Steps of a CNN


There are four main steps in a CNN: convolution, subsampling, activation, and full connectedness.
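As a rough sketch of how these four steps are usually stacked together, here is a minimal PyTorch example; the layer sizes (a 28x28 grayscale input, 8 filters, 10 output classes) are arbitrary choices for illustration, not something prescribed by the steps themselves.

```python
import torch
import torch.nn as nn

# A minimal sketch of the four steps stacked into one network.
# Sizes are illustrative: 28x28 grayscale inputs, 10 output classes.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # Step 1: convolution
    nn.ReLU(),                                  # Step 3: activation
    nn.MaxPool2d(2),                            # Step 2: subsampling
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                 # Step 4: fully connected
)

scores = model(torch.randn(1, 1, 28, 28))  # one fake image -> 10 class scores
print(scores.shape)  # torch.Size([1, 10])
```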

 
 
 

Step 1: Convolution

The first layers that receive an input signal are called convolution filters. Convolution is a process where the network tries to label the input signal by referring to what it has learned in the past.
If the input signal looks like the cat images the network has seen before, the "cat" reference signal will be mixed into, or convolved with, the input signal. The resulting output signal is then passed on to the next layer.
 
 
Convolution has the nice property of being translation invariant. Intuitively, this means that each convolution filter represents a feature of interest, and the CNN algorithm learns which features comprise the resulting reference. The output signal strength is not dependent on where the features are located, but simply on whether the features are present. Hence, a cat could be sitting in different positions, and the CNN algorithm would still be able to recognize it.
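To make the idea concrete, here is a minimal NumPy sketch of sliding one hand-made filter over a toy image. Real CNN layers learn many such filters; the vertical-edge filter below is only a hypothetical example. (Strictly speaking, the loop computes a cross-correlation, i.e. the kernel is not flipped, which is what CNN libraries compute in practice.)

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter over a 2-D image and record how strongly
    each patch matches the filter (valid mode, no padding)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.random.rand(8, 8)            # toy "image"
edge_filter = np.array([[1., 0., -1.],  # a hand-made vertical-edge filter
                        [1., 0., -1.],
                        [1., 0., -1.]])
feature_map = convolve2d(image, edge_filter)
print(feature_map.shape)  # (6, 6): the filter responds wherever the feature appears
```

Because the same filter is slid over every position, a strong response appears wherever the feature occurs, which is exactly the translation invariance described above.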
 
 

Step 2: Subsampling

Inputs from the convolution layer can be "smoothed" to reduce the sensitivity of the filters to noise and variations. This smoothing process is called subsampling, and can be achieved by taking averages or taking the maximum over a sample of the signal. Examples of subsampling methods (for image signals) include reducing the size of the image, or reducing the color contrast across the red, green, and blue (RGB) channels.
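A minimal NumPy sketch of subsampling, assuming simple non-overlapping pooling windows: taking the maximum keeps only the strongest response in each block, while taking the average smooths the block out.

```python
import numpy as np

def pool2d(feature_map, size=2, mode="max"):
    """Subsample a feature map by taking the max (or mean) of each
    non-overlapping size x size block."""
    h, w = feature_map.shape
    h, w = h // size * size, w // size * size          # drop any ragged edge
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    reduce = np.max if mode == "max" else np.mean
    return reduce(blocks, axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(fm, 2, "max"))   # 2x2 map: the strongest response in each block
print(pool2d(fm, 2, "mean"))  # 2x2 map: the average response in each block
```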
 
 

Step 3: Activation

The activation layer controls how the signal flows from one layer to the next, emulating how neurons are fired in our brain. Output signals which are strongly associated with past references would activate more neurons, enabling signals to be propagated more efficiently for identification.
 
CNNs are compatible with a wide variety of complex activation functions to model signal propagation; the most common is the Rectified Linear Unit (ReLU), which is favored for its faster training speed.
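ReLU itself is just a thresholding function, as this one-line NumPy sketch shows: positive signals pass through unchanged, while negative ones are silenced.

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: keep positive signals, zero out the rest."""
    return np.maximum(0, x)

signal = np.array([-2.0, -0.5, 0.0, 1.3, 4.0])
print(relu(signal))  # [0.  0.  0.  1.3 4. ]
```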
 
 

Step 4: Fully Connected

The last layers in the network are fully connected, meaning that neurons from preceding layers are connected to every neuron in subsequent layers. This mimics high-level reasoning, where all possible pathways from the input to the output are considered.
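A minimal NumPy sketch of a fully connected layer, with the feature count (64) and class count (10) chosen only for the example: every input feature contributes, through its own weight, to every output neuron.

```python
import numpy as np

rng = np.random.default_rng(0)

# Every one of the 64 input features is linked, via its own weight,
# to every one of the 10 output neurons.
features = rng.random(64)                    # flattened features from earlier layers
weights = rng.standard_normal((10, 64)) * 0.01
bias = np.zeros(10)

scores = weights @ features + bias           # one score per class
print(scores.shape)                          # (10,)
```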
 
 

Step 5: Loss (during training)

When training the neural network, there is an additional layer called the loss layer. This layer provides feedback to the neural network on whether it identified inputs correctly, and if not, how far off its guesses were. This helps to guide the neural network to reinforce the right concepts as it trains. This is always the last layer during training.
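A common choice for the loss layer in classification is softmax cross-entropy; the NumPy sketch below assumes that setup and is only meant to illustrate how "how far off the guess was" can be measured.

```python
import numpy as np

def softmax_cross_entropy(scores, true_class):
    """Near 0 when the network is confident and correct; grows the more
    probability the network puts on the wrong classes."""
    shifted = scores - np.max(scores)                 # for numerical stability
    probs = np.exp(shifted) / np.sum(np.exp(shifted))
    return -np.log(probs[true_class])

scores = np.array([2.0, 0.5, -1.0])  # raw class scores from the last layer
print(softmax_cross_entropy(scores, true_class=0))  # small: guess was mostly right
print(softmax_cross_entropy(scores, true_class=2))  # large: guess was mostly wrong
```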

 
 
 

Reference


https://annalyzin.wordpress.com/2016/01/26/introduction-to-convolutional-neural-network/
