Multilayer perceptrons (MLPs) are feedforward neural networks
trained with the standard backpropagation algorithm.
They are supervised networks, so they require a desired response to be trained.
They learn how to transform input data into a desired response,
so they are widely used for pattern classification.
With one or two hidden layers, they can approximate virtually any input-output map.
They have been shown to approximate the performance of optimal statistical classifiers in difficult problems.
TMultiLayerPerceptron class in ROOT
mlpHiggs.C example
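A minimal sketch in the spirit of the mlpHiggs.C tutorial. It assumes a file mlpHiggs.root containing a TTree "simu" with branches msumf, ptsumf, acolin and a signal/background flag "type" (names taken from the tutorial); the "@" prefix asks ROOT to normalize that input.

#include "TFile.h"
#include "TTree.h"
#include "TMultiLayerPerceptron.h"

void trainHiggsMLP()
{
   TFile *f = TFile::Open("mlpHiggs.root");
   TTree *simu = (TTree*)f->Get("simu");

   // Layout string "inputs:hidden:output";
   // even entries train, odd entries test.
   TMultiLayerPerceptron mlp("@msumf,@ptsumf,@acolin:5:type", simu,
                             "Entry$%2", "(Entry$+1)%2");
   mlp.Train(100, "text,graph,update=10"); // 100 epochs with progress output
   mlp.Export("mlpHiggsTrained", "C++");   // standalone C++ class
}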
Network structure:
r3x3, (pt_gamma-pt_jet)/pt_gamma, nCharge, bBtow, eTow2x1: 10 hidden neurons: one output layer
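A hedged sketch of this network: five inputs, ten hidden neurons, one output. The tree and branch names (events, r3x3, ptGamma, ptJet, nCharge, bBtow, eTow2x1, flag "type") are illustrative assumptions; TMultiLayerPerceptron accepts TTreeFormula expressions, so the pt ratio can be written directly in the layout string.

#include "TTree.h"
#include "TMultiLayerPerceptron.h"
#include "TMLPAnalyzer.h"

void trainGammaJetMLP(TTree *events)  // assumed tree with the branches above
{
   TMultiLayerPerceptron mlp(
      "r3x3,(ptGamma-ptJet)/ptGamma,nCharge,bBtow,eTow2x1:10:type",
      events, "Entry$%2", "(Entry$+1)%2");
   mlp.Train(200, "text,update=20");

   // Input-vs-output plots of the kind described in Figure 2 below.
   TMLPAnalyzer ana(&mlp);
   ana.GatherInformations();  // scan the tree once (ROOT's spelling)
   ana.CheckNetwork();        // print a summary of input relevance
   ana.DrawDInputs();         // sensitivity of the output to each input
   ana.DrawNetwork(0, "type==1", "type==0"); // output: signal vs background
}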
Figure 2: Input parameters vs. network output.
Rows: (1) MC QCD, (2) MC gamma-jet, (3) pp2006 data.
Vertical axes: r3x3, (pt_gamma-pt_jet)/pt_gamma, nCharge, bBtow, eTow2x1.
Horizontal axis: network output.