Multilayer perceptrons (MLPs) are feedforward neural networks
trained with the standard backpropagation algorithm.
They are supervised networks, so they require a desired response for training.
They learn to transform input data into the desired response,
and are therefore widely used for pattern classification.
With one or two hidden layers, they can approximate virtually any input-output map.
They have been shown to approximate the performance of optimal statistical classifiers in difficult problems.
TMultiLayerPerceptron class in ROOT
mlpHiggs.C example
Network structure:
inputs r3x3, (pt_gamma-pt_jet)/pt_gamma, nCharge, bBtow, eTow2x1; one hidden layer of 10 neurons; one output layer
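A minimal training sketch in the style of the ROOT mlpHiggs.C tutorial, assuming the five inputs are stored in a tree with a type branch flagging gamma-jet vs. QCD events; all file, tree, and branch names are hypothetical, and ptBalance stands in for (pt_gamma-pt_jet)/pt_gamma:

#include "TFile.h"
#include "TTree.h"
#include "TMultiLayerPerceptron.h"

// Sketch after ROOT's tutorials/mlp/mlpHiggs.C; file, tree, and branch
// names are assumptions, not the actual analysis code.
void trainGammaJetMLP() {
   TFile *f = TFile::Open("photonJet.root");   // hypothetical input file
   TTree *t = (TTree*)f->Get("events");        // hypothetical tree
   // Layout string: five inputs, one hidden layer of 10 neurons, and one
   // output neuron trained against 'type' (1 = gamma-jet, 0 = QCD).
   TMultiLayerPerceptron mlp("r3x3,ptBalance,nCharge,bBtow,eTow2x1:10:type",
                             t, "Entry$%2==0", "Entry$%2==1"); // even entries train, odd test
   mlp.Train(100, "text,update=10");           // 100 epochs, progress every 10
   mlp.DumpWeights("mlpWeights.txt");          // save the trained weights
}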
Figure 2: Input parameters vs. network output
Rows: 1) QCD MC, 2) gamma-jet MC, 3) pp2006 data
Vertical axis: r3x3, (pt_gamma-pt_jet)/pt_gamma, nCharge, bBtow, eTow2x1
Horizontal axis: network output
ROOT implementation for LDA and MLP:
LDA configuration: default
MLP configuration:
Input parameters (same for both LDA and MLP):
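The slides leave the exact configurations unspecified. As one possible realization, here is a hedged sketch using ROOT's TMVA package (whether TMVA was the actual implementation is an assumption, as are the file, tree, and option strings): a linear discriminant with default options booked next to an MLP with one 10-neuron hidden layer, both on the five inputs listed above.

#include "TFile.h"
#include "TTree.h"
#include "TMVA/Factory.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Types.h"

// Hedged TMVA sketch; tree names and the use of TMVA itself are assumptions.
void trainClassifiers() {
   TFile *in  = TFile::Open("photonJet.root");    // hypothetical input file
   TTree *sig = (TTree*)in->Get("gammaJetMC");    // hypothetical signal tree
   TTree *bg  = (TTree*)in->Get("qcdMC");         // hypothetical background tree
   TFile *out = TFile::Open("tmva.root", "RECREATE");

   TMVA::Factory factory("PhotonJet", out, "AnalysisType=Classification");
   TMVA::DataLoader loader("dataset");
   // Same input parameters for both LDA and MLP.
   loader.AddVariable("r3x3", 'F');
   loader.AddVariable("ptBalance := (ptGamma-ptJet)/ptGamma", 'F');
   loader.AddVariable("nCharge", 'I');
   loader.AddVariable("bBtow", 'F');
   loader.AddVariable("eTow2x1", 'F');
   loader.AddSignalTree(sig);
   loader.AddBackgroundTree(bg);
   loader.PrepareTrainingAndTestTree("", "SplitMode=Random");

   factory.BookMethod(&loader, TMVA::Types::kLD,  "LD",  "");  // linear discriminant, default options
   factory.BookMethod(&loader, TMVA::Types::kMLP, "MLP",
                      "HiddenLayers=10");                      // mirrors the structure above
   factory.TrainAllMethods();
   factory.TestAllMethods();
   factory.EvaluateAllMethods();
   out->Close();
}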
Figure 1: Signal efficiency, purity, and background rejection (left),
and significance Sig/sqrt(Sig+Bg) (right) vs. the LDA (upper plots) and MLP (lower plots) classifier discriminant
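As a worked illustration of how the curves in Figure 1 can be derived, a sketch that scans a lower cut on the classifier output; the histogram names (hSig, hBg) are hypothetical, standing for discriminant distributions from the signal and background MC samples:

#include <cstdio>
#include "TH1D.h"
#include "TMath.h"

// Scan a lower cut on the classifier output and print signal efficiency,
// purity, and significance Sig/sqrt(Sig+Bg) at each threshold.
void scanDiscriminantCut(const TH1D *hSig, const TH1D *hBg) {
   const int    nBins   = hSig->GetNbinsX();
   const double nSigTot = hSig->Integral(1, nBins);
   for (int i = 1; i <= nBins; ++i) {
      const double cut = hSig->GetXaxis()->GetBinLowEdge(i);
      const double s   = hSig->Integral(i, nBins);  // signal passing the cut
      const double b   = hBg->Integral(i, nBins);   // background passing the cut
      const double eff    = nSigTot > 0 ? s / nSigTot : 0;            // efficiency
      const double purity = (s + b) > 0 ? s / (s + b) : 0;            // purity
      const double signif = (s + b) > 0 ? s / TMath::Sqrt(s + b) : 0; // significance
      std::printf("cut > %6.3f: eff = %.3f, purity = %.3f, Sig/sqrt(Sig+Bg) = %.2f\n",
                  cut, eff, purity, signif);
   }
}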
Figure 3: Data to Monte-Carlo comparison for LDA (upper plots) and MLP (lower plots)
Good (within ~10%) match between data and Monte-Carlo
a) up to 0.8 for the LDA discriminant, and b) up to ~0.7 for the MLP.
Figure 4: Data to Monte-Carlo comparison for input parameters
from left to right:
1) pt_gamma, 2) pt_jet, 3) r3x3, 4) gamma-jet pt balance, 5) N_ch[gamma], 6) N_eTow[gamma], 7) N_bTow[gamma]
Colour coding: black = pp2006 data, red = gamma-jet MC, green = QCD MC, blue = gamma-jet+QCD MC
Figure 5: Data to Monte-Carlo comparison:
correlations between input variables (in the same order as in Fig. 4)
and LDA classifier discriminant (horizontal axis).
1st row: QCD MC; 2nd: gamma-jet MC; 3rd: pp2006 data; 4th: QCD+gamma-jet MC
Endcap photon-jet update at the STAR Collaboration meeting