Activation functions

Every layer of a neural network applies an activation function g to an affine transformation of its inputs:

\[ g(Wx + b) \]
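
As a minimal sketch of this formula, the function below applies an activation g to Wx + b for a single dense layer. The shapes, weight values, and the choice of tanh are assumptions for illustration only, not taken from the text.

```python
import numpy as np

def dense_layer(x, W, b, g):
    """Apply activation g to the affine transformation W @ x + b."""
    return g(W @ x + b)

# Hypothetical example: 3 inputs, 2 output units.
W = np.array([[0.5, -1.0, 0.3],
              [0.2,  0.8, -0.6]])
b = np.array([0.1, -0.2])
x = np.array([1.0, 2.0, 3.0])

out = dense_layer(x, W, b, g=np.tanh)  # tanh chosen only as an example activation
print(out)
```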

ReLU (Rectified Linear Unit)

ReLU is piecewise linear, g(z) = max(0, z), so it behaves much like a linear function; because of this it is easy to optimize with gradient-based methods.
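
A small sketch of ReLU with NumPy, assuming vector inputs; the sample values are made up for illustration.

```python
import numpy as np

def relu(z):
    """Rectified linear unit: keeps positive values, zeroes out the rest."""
    return np.maximum(0.0, z)

z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(z))  # [0.  0.  0.  1.5 3. ]
```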