Convolutional Neural Networks (CNN)

The continuous case

Convolution is defined as an operation between two functions f and g such that:

(f * g)(t) \equiv \int_{-\infty}^{+\infty} f(\tau) g(t-\tau) d \tau

Example:

The area under the convolution, \int_{-\infty}^{+\infty}(f\ast g)(t)\,dt, is the product of the areas under f and g.
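
To see this property numerically, here is a minimal NumPy sketch (not from the original post) that approximates the continuous convolution with a Riemann sum on a uniform grid; the two Gaussian-like functions are arbitrary choices.

```python
import numpy as np

# Uniform grid approximating the real line (assumed wide enough for the
# chosen functions to have decayed to ~0 at the edges).
dt = 0.01
t = np.arange(-10, 10, dt)

f = np.exp(-t**2)             # a Gaussian bump
g = np.exp(-(t - 1)**2 / 2)   # another bump, shifted and wider

# Riemann-sum approximation of (f * g)(t): discrete convolution scaled by dt
conv = np.convolve(f, g) * dt

area_f = f.sum() * dt
area_g = g.sum() * dt
area_conv = conv.sum() * dt

# The two printed values match: area under f*g equals (area of f) x (area of g)
print(area_conv, area_f * area_g)
```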

The 1D discrete case

(f * g)[n] \equiv \sum_{m=-\infty}^{+\infty} f[m]\, g[n-m]

Let f[n]=[1,2,3] and g[n]=[2,1], and suppose n starts from 0. We are computing h[n]=(f\ast g)[n], which works out to h[n]=[2,5,8,3], as the sketch below confirms.
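
A minimal sketch of this computation (the helper name conv1d is mine, not from the text), with NumPy's np.convolve used as a cross-check:

```python
import numpy as np

def conv1d(f, g):
    # Full discrete convolution: h[n] = sum over m of f[m] * g[n - m]
    n_out = len(f) + len(g) - 1
    h = [0] * n_out
    for n in range(n_out):
        for m in range(len(f)):
            if 0 <= n - m < len(g):
                h[n] += f[m] * g[n - m]
    return h

f = [1, 2, 3]
g = [2, 1]

print(conv1d(f, g))        # [2, 5, 8, 3]
print(np.convolve(f, g))   # [2 5 8 3] -- same result
```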

Consider a tiny toy CNN made up of just one convolutional layer, consisting of a single filter F of shape 2×2, followed by a max-pooling layer of shape 2×2. The input image I is of shape 3×3.

The output of the CNN is calculated as Pool(ReLU(Conv(I))), where:

\operatorname{ReLU}(x)=\max(0, x)

and the input image and filter are:

I=\left[\begin{array}{lll}1 & 0 & 2 \\ 3 & 1 & 0 \\ 0 & 0 & 4\end{array}\right]
F=\left[\begin{array}{ll}1 & 0 \\ 0 & 1\end{array}\right]

Conv(I) yields the 2×2 feature map [[2, 0], [3, 5]]; ReLU leaves these non-negative values unchanged, and 2×2 max pooling keeps the largest entry. The output of the CNN is therefore 5.
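
The full forward pass can be checked with a short NumPy sketch (an illustrative hand-rolled version, assuming the usual CNN convention of stride-1, "valid" cross-correlation rather than a library layer):

```python
import numpy as np

I = np.array([[1, 0, 2],
              [3, 1, 0],
              [0, 0, 4]])
F = np.array([[1, 0],
              [0, 1]])

# Conv(I): slide the 2x2 filter over the 3x3 image (stride 1, no padding),
# producing a 2x2 feature map.
conv = np.array([[np.sum(I[i:i+2, j:j+2] * F) for j in range(2)]
                 for i in range(2)])
print(conv)                 # [[2 0]
                            #  [3 5]]

relu = np.maximum(conv, 0)  # ReLU leaves the non-negative values unchanged

pooled = relu.max()         # 2x2 max pooling over the 2x2 map -> one value
print(pooled)               # 5
```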
