Every neural network applies an activation function after each layer to introduce non-linearity. Activations that are close to linear, such as the piecewise-linear ReLU, preserve many of the properties that make linear models easy to optimize with gradient-based methods.
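As a minimal sketch of this contrast, the ReLU below is linear on each half of its domain, while the sigmoid (included here only for comparison; it is not mentioned above) saturates and squashes its output:

```python
import math

def relu(x: float) -> float:
    # Piecewise linear: identity for positive inputs, zero otherwise.
    return max(0.0, x)

def sigmoid(x: float) -> float:
    # Smoothly squashes inputs into (0, 1); saturates for large |x|.
    return 1.0 / (1.0 + math.exp(-x))

# ReLU passes positive values through unchanged, so its gradient there is 1.
print([relu(v) for v in (-2.0, 0.0, 3.0)])  # [0.0, 0.0, 3.0]
print(sigmoid(0.0))                         # 0.5
```

Because ReLU's gradient is exactly 1 wherever the unit is active, optimization behaves much like it does for a linear model on that region, which is one reason it became a common default choice.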