Dropout regularization
Dropout regularization behaves quite differently from other regularization techniques. Instead of penalizing large weights in the loss function, it adds a layer that randomly ignores (drops) a subset of neurons on each forward pass during training.
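The mechanism fits in a few lines. Below is a minimal from-scratch sketch in NumPy (the function name, the fixed seed, and the inverted-dropout rescaling are illustrative assumptions, not details given above): each forward pass draws a fresh random mask, zeroes the dropped activations, and rescales the survivors so the expected total activation stays the same.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded only so the example is reproducible

def dropout(activations, rate=0.2, training=True):
    """Inverted dropout: zero each activation with probability `rate`,
    then rescale the kept ones by 1 / (1 - rate)."""
    if not training:
        return activations  # dropout is a no-op at inference time
    mask = rng.random(activations.shape) >= rate  # keep with prob 1 - rate
    return activations * mask / (1.0 - rate)

h = np.ones(10)              # stand-in hidden-layer activations
print(dropout(h, rate=0.2))  # roughly 2 of 10 values zeroed, rest scaled to 1.25
```

Because a new mask is drawn every call, the network effectively trains a different thinned sub-network on each pass.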
Dropout regularization is controlled by the dropout rate hyperparameter. For example, a dropout rate of 0.2 means that each input neuron has a 20% chance of being ignored on any given forward pass.
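In practice one usually uses a framework's built-in dropout layer rather than a hand-rolled mask. A minimal sketch assuming PyTorch (the framework choice is an assumption; `nn.Dropout(p=0.2)` corresponds to the 0.2 dropout rate described above):

```python
import torch
from torch import nn

# A small network with a dropout layer between two linear layers.
# p=0.2 is the dropout rate: each activation entering the final layer
# is zeroed with probability 0.2 on every training forward pass.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(32, 1),
)

x = torch.randn(4, 10)

model.train()        # dropout active: ~20% of activations are zeroed
out_train = model(x)

model.eval()         # dropout disabled for inference
out_eval = model(x)
```

Note that the layer is only active in `model.train()` mode; in `model.eval()` mode it passes activations through unchanged. No extra rescaling is needed at inference because PyTorch rescales the kept activations during training (inverted dropout).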