// @(#)root/hist:$Name:  $:$Id: TMultiDimFit.cxx,v 1.22 2005/09/05 10:02:38 brun Exp $
// Author: Christian Holm Christensen 07/11/2000

//____________________________________________________________________
//
//
// 
/*

Multidimensional Fits in ROOT


Overview

A common problem encountered in different fields of applied science is to find an expression for one physical quantity in terms of several others, which are directly measurable.

An example in high energy physics is the evaluation of the momentum of a charged particle from the observation of its trajectory in a magnetic field. The problem is to relate the momentum of the particle to the observations, which may consist of positional measurements at intervals along the particle trajectory.

The exact functional relationship between the measured quantities (e.g., the space-points) and the dependent quantity (e.g., the momentum) is in general not known, but one possible way of solving the problem is to find an expression which reliably approximates the dependence of the momentum on the observations.

This explicit function of the observations can be obtained by a least squares fitting procedure applied to a representative sample of the data, for which the dependent quantity (e.g., momentum) and the independent observations are known. The function can then be used to compute the quantity of interest for new observations of the independent variables.

This class TMultiDimFit implements such a procedure in ROOT. It is largely based on the CERNLIB MUDIFI package [2]. Though the basic concepts are still sound, and therefore kept, a few implementation details have changed, and this class can take advantage of MINUIT [4], through the class TMinuit, to improve the errors of the fit.

In [5] and [6] H. Wind demonstrates the utility of this procedure in the context of tracking, magnetic field parameterisation, and so on. The outline of the method used in this class is based on Wind's discussion, and I refer to these two excellent texts for more information.

An example of usage is given in $ROOTSYS/tutorials/multidimfit.C.
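
For orientation, here is a minimal sketch of the training workflow, assuming two independent variables; getPoint() is a hypothetical stand-in for the user's data source, and the tutorial should be consulted for a complete, working example.

    // Sketch only: set up a 2-variable fit with monomials.
    TMultiDimFit* fit = new TMultiDimFit(2, TMultiDimFit::kMonomials, "v");

    Int_t mPowers[] = { 4, 4 };        // max power per variable
    fit->SetMaxPowers(mPowers);
    fit->SetMaxFunctions(100);         // candidate functions to generate
    fit->SetMaxStudy(100);             // candidates to actually study
    fit->SetMaxTerms(20);              // terms allowed in the final model
    fit->SetPowerLimit(1);             // the power control limit Q (see below)
    fit->SetMinRelativeError(0.01);    // stop when R < 1%

    Double_t x[2], d, e;
    for (Int_t i = 0; i < 500; i++) {
      getPoint(x, d, e);               // hypothetical helper filling x, d, e
      fit->AddRow(x, d, e);
    }
    fit->FindParameterization();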


The Method

Let $D$ be the dependent quantity of interest, which depends smoothly on the observable quantities $x_1, \ldots, x_N$, which we'll denote by $\mathbf{x}$. Given a training sample of $M$ tuples of the form (TMultiDimFit::AddRow)

$\displaystyle \left(\mathbf{x}_j, D_j, E_j\right),$

where $\mathbf{x}_j = (x_{1,j},\ldots,x_{N,j})$ are the $N$ independent variables, $D_j$ is the known, dependent quantity at $\mathbf{x}_j$, and $E_j$ is the square error in $D_j$, the class TMultiDimFit will try to find the parameterization

$\displaystyle D_p(\mathbf{x}) = \sum_{l=1}^{L} c_l \prod_{i=1}^{N} p_{li}\left(x_i\right) = \sum_{l=1}^{L} c_l F_l(\mathbf{x})$ (1)

such that

$\displaystyle S \equiv \sum_{j=1}^{M} \left(D_j - D_p\left(\mathbf{x}_j\right)\right)^2$ (2)

is minimal. Here $ p_{li}(x_i)$ are monomials, or Chebyshev or Legendre polynomials, labelled $ l = 1, \ldots, L$, in each variable $ x_i$, $ i=1, \ldots, N$.

So what TMultiDimFit does is to determine the number of terms $L$, the $L$ functions $F_l$, and the $L$ coefficients $c_l$, so that $S$ is minimal (TMultiDimFit::FindParameterization).

Of course it's more than a little unlikely that $S$ will ever become exactly zero as a result of the procedure outlined below. Therefore, the user is asked to provide a minimum relative error $\epsilon$ (TMultiDimFit::SetMinRelativeError), and $S$ will be considered minimized when

$\displaystyle R = \frac{S}{\sum_{j=1}^M D_j^2} < \epsilon$

Optionally, the user may impose a functional expression by specifying the powers of each variable in $L$ specified functions $F_1, \ldots, F_L$ (TMultiDimFit::SetPowers). In that case, only the coefficients $c_l$ are calculated by the class.
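
As a hedged illustration (see EvalFactor in the code below for the convention used by this class: a power of 1 denotes the constant, 2 the first order, and so on), a user-defined three-term model in two variables might be set up as:

    // Sketch: impose the three terms 1, x_1 and x_1*x_2^2 (N = 2).
    Int_t powers[] = { 1, 1,     // F_1 = 1
                       2, 1,     // F_2 = x_1
                       2, 3 };   // F_3 = x_1 * x_2^2
    fit->SetPowers(powers, 3);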


Limiting the Number of Terms

As always when dealing with fits, there's a real chance of overfitting. As is well known, it's always possible to fit a polynomial of degree $N-1$ in $x$ to $N$ points $(x,y)$ with $\chi^2 = 0$, but the polynomial is not likely to fit new data at all [1]. Therefore, the user is asked to provide an upper limit, $L_{max}$, to the number of terms in $D_p$ (TMultiDimFit::SetMaxTerms).

However, since there's an infinite number of $F_l$ to choose from, the user is asked to give the maximum power, $P_{max,i}$, of each variable $x_i$ to be considered in the minimization of $S$ (TMultiDimFit::SetMaxPowers).

One way of obtaining values for the maximum power in variable $i$ is to perform a regular fit to the dependent quantity $D$, using a polynomial only in $x_i$. The maximum power $P_{max,i}$ is then the power that does not significantly improve the one-dimensional least-squares fit over $x_i$ to $D$ [5].
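
One possible way of carrying this out in ROOT, sketched here under the assumption that a TGraph g of $D$ versus $x_i$ has already been filled, is to fit polynomials of increasing degree and stop once $\chi^2$ no longer improves appreciably:

    // Sketch: raise the 1-D polynomial degree until the chi-square
    // improvement becomes marginal; the 5% threshold is an arbitrary
    // choice.
    Double_t prev = 1e30;
    for (Int_t deg = 1; deg <= 9; deg++) {
      g->Fit(Form("pol%d", deg), "Q");
      Double_t chi2 = g->GetFunction(Form("pol%d", deg))->GetChisquare();
      if (prev - chi2 < 0.05 * prev)
        break;                         // take deg-1 as P_max,i
      prev = chi2;
    }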

There is still a huge number of possible choices for $F_l$; in fact there are $\prod_{i=1}^{N} (P_{max,i} + 1)$ possible choices. Obviously we need to limit this. To this end, the user is asked to set a power control limit, $Q$ (TMultiDimFit::SetPowerLimit), and a function $F_l$ is only accepted if

$\displaystyle Q_l = \sum_{i=1}^{N} \frac{P_{li}}{P_{max,i}} < Q$

where $P_{li}$ is the leading power of variable $x_i$ in function $F_l$ (TMultiDimFit::MakeCandidates). The number of candidate functions thus increases with $Q$ ($Q$ of 1 or 2 is fine, 5 is way out). For example, with $N=2$ and $P_{max} = (4,4)$, a function with leading powers $(2,3)$ has $Q_l = 2/4 + 3/4 = 1.25$, and is only considered if $Q > 1.25$.

Gram-Schmidt Orthogonalisation

To further reduce the number of functions in the final expression, only those functions that significantly reduce $S$ are chosen. What `significant' means is chosen by the user, and will be discussed below (see `Function Selection Based on Residual').

The functions $F_l$ are generally not orthogonal, which means one would have to evaluate all possible $F_l$'s over all data points before finding the most significant [1]. We can, however, do better than that. By applying the modified Gram-Schmidt orthogonalisation algorithm [5] [3] to the functions $F_l$, we can evaluate the contribution to the reduction of $S$ from each function in turn, and we may delay the actual inversion of the curvature matrix (TMultiDimFit::MakeGramSchmidt).

So we are led to consider an $M\times L$ matrix $\mathsf{F}$, an element of which is given by

$\displaystyle f_{jl} = F_l\left(x_{1j}, x_{2j}, \ldots, x_{Nj}\right) = F_l(\mathbf{x}_j) \quad \mbox{with}\quad j=1,2,\ldots,M,$ (3)

where $j$ labels the $M$ rows in the training sample and $l$ labels the $L$ functions of $N$ variables, with $L \leq M$. That is, $f_{jl}$ is the function numbered $l$ evaluated at the data point $j$. We have to normalise $\mathbf{x}_j$ to $[-1,1]$ for this to succeed [5] (TMultiDimFit::MakeNormalized). We then define a matrix $\mathsf{W}$, of which the columns $\mathbf{w}_l$ are given by

$\displaystyle \mathbf{w}_1 = \mathbf{f}_1 = F_1\left(\mathbf{x}_1\right)$ (4)

$\displaystyle \mathbf{w}_l = \mathbf{f}_l - \sum^{l-1}_{k=1} \frac{\mathbf{f}_l \bullet \mathbf{w}_k}{\mathbf{w}_k^2}\mathbf{w}_k$ (5)

and $ \mathbf{w}_{l}$ is the component of $ \mathbf{f}_{l}$ orthogonal to $ \mathbf{w}_{1}, \ldots, \mathbf{w}_{l-1}$. Hence we obtain [3],

$\displaystyle \mathbf{w}_k\bullet\mathbf{w}_l = 0 \quad \mbox{if}\quad k \neq l.$ (6)

We now take as a new model $ \mathsf{W}\mathbf{a}$. We thus want to minimize

$\displaystyle S\equiv \left(\mathbf{D} - \mathsf{W}\mathbf{a}\right)^2\quad,$ (7)

where $\mathbf{D} = \left(D_1,\ldots,D_M\right)$ is the vector of the dependent quantity over the sample. Differentiation with respect to $a_l$ gives, using (6),

$\displaystyle \mathbf{D}\bullet\mathbf{w}_l - a_l\mathbf{w}_l^2 = 0$ (8)

or

$\displaystyle a_l = \frac{\mathbf{D}\bullet\mathbf{w}_l}{\mathbf{w}_l^2}$ (9)

Let $ S_j$ be the sum of squares of residuals when taking $ j$ functions into account. Then

$\displaystyle S_l = \left[\mathbf{D} - \sum^l_{k=1} a_k\mathbf{w}_k\right]^2 = \mathbf{D}^2 - 2\,\mathbf{D}\bullet\sum^l_{k=1} a_k\mathbf{w}_k + \sum^l_{k=1} a_k^2\mathbf{w}_k^2$ (10)

Using (9), we see that

$\displaystyle S_l = \mathbf{D}^2 - 2 \sum^l_{k=1} a_k^2\mathbf{w}_k^2 + \sum^l_{k=1} a_k^2\mathbf{w}_k^2 = \mathbf{D}^2 - \sum^l_{k=1} a_k^2\mathbf{w}_k^2 = \mathbf{D}^2 - \sum^l_{k=1} \frac{\left(\mathbf{D}\bullet\mathbf{w}_k\right)^2}{\mathbf{w}_k^2}$ (11)

So for each new function $F_l$ included in the model, we get a reduction of the sum of squares of residuals of $a_l^2\mathbf{w}_l^2$, where $\mathbf{w}_l$ is given by (5) and $a_l$ by (9). Thus, using the Gram-Schmidt orthogonalisation, we can decide if we want to include this function in the final model before the matrix inversion.
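
To make the bookkeeping concrete, the following stand-alone sketch (plain C++, hypothetical names, not part of the class) performs one modified Gram-Schmidt step and returns the gain $a_l^2\mathbf{w}_l^2$; the method MakeGramSchmidt below does the equivalent on the class' internal matrices.

    #include <vector>

    // One modified Gram-Schmidt step. f is the candidate column f_l
    // evaluated at the M sample points; w holds the accepted orthogonal
    // columns w_1..w_{l-1}, w2 their squared norms, and D the dependent
    // quantities. f is orthogonalised in place (it becomes w_l).
    double GramSchmidtGain(std::vector<double>& f,
                           const std::vector<std::vector<double> >& w,
                           const std::vector<double>& w2,
                           const std::vector<double>& D)
    {
      const size_t M = f.size();
      for (size_t k = 0; k < w.size(); k++) {
        double fdw = 0;                           // (f_l . w_k) / w_k^2
        for (size_t j = 0; j < M; j++) fdw += f[j] * w[k][j];
        fdw /= w2[k];
        for (size_t j = 0; j < M; j++) f[j] -= fdw * w[k][j];
      }
      double wl2 = 0, dw = 0;
      for (size_t j = 0; j < M; j++) {
        wl2 += f[j] * f[j];                       // w_l^2
        dw  += D[j] * f[j];                       // D . w_l
      }
      const double a = dw / wl2;                  // a_l of eq. (9)
      return a * a * wl2;                         // the gain of eq. (13)
    }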


Function Selection Based on Residual

Supposing that $ L-1$ steps of the procedure have been performed, the problem now is to consider the $ L^{\mbox{th}}$ function.

The sum of squares of residuals can be written as

$\displaystyle S_L = \textbf{D}^T\bullet\textbf{D} - \sum^L_{l=1}a^2_l\left(\textbf{w}_l^T\bullet\textbf{w}_l\right)$ (12)

where the relation (9) has been taken into account. The contribution of the $L^{\mbox{th}}$ function to the reduction of $S$ is given by

$\displaystyle \Delta S_L = a^2_L\left(\textbf{w}_L^T\bullet\textbf{w}_L\right)$ (13)

Two tests are now applied to decide whether this $L^{\mbox{th}}$ function is to be included in the final expression or not.


Test 1

Denoting by $H_{L-1}$ the subspace spanned by $\textbf{w}_1,\ldots,\textbf{w}_{L-1}$, the function $\textbf{w}_L$ is by construction (see (5)) the projection of the function $F_L$ onto the direction perpendicular to $H_{L-1}$. Now, if the squared length of $\textbf{w}_L$ (given by $\textbf{w}_L\bullet\textbf{w}_L$) is very small compared to that of $\textbf{f}_L$, this new function cannot contribute much to the reduction of the sum of squares of residuals. The test then consists in calculating the angle $\theta$ between the two vectors $\textbf{w}_L$ and $\textbf{f}_L$ (see also figure 1) and requiring that it is greater than a threshold value which the user must set (TMultiDimFit::SetMinAngle).
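
In the implementation (MakeGramSchmidt below) this comparison appears, schematically, as follows, with wL2 and fL2 standing for the squared lengths of $\textbf{w}_L$ and $\textbf{f}_L$:

    // Sketch of test 1: sin(theta) = |w_L| / |f_L|; reject the
    // function if theta falls below the user threshold.
    if (TMath::Sqrt(wL2 / fL2) < TMath::Sin(minAngle * TMath::Pi() / 180))
      return 0;   // F_L lies almost entirely in H_{L-1}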

Figure 1: (a) Angle $\theta$ between $\textbf{w}_L$ and $\textbf{f}_L$; (b) angle $\phi$ between $\textbf{w}_L$ and $\textbf{D}$.


Test 2

Let $\textbf{D}$ be the data vector to be fitted. As illustrated in figure 1, the $L^{\mbox{th}}$ function $\textbf{w}_L$ will contribute significantly to the reduction of $S$ if the angle $\phi^\prime$ between $\textbf{w}_L$ and $\textbf{D}$ is smaller than an upper limit $\phi$, defined by the user (TMultiDimFit::SetMaxAngle).

However, the method automatically readjusts the value of this angle while the fit is in progress, in order to make the selection criterion less and less stringent. The result is that the functions contributing most to the reduction of $S$ are chosen first (TMultiDimFit::TestFunction).

In case $ \phi $ isn't defined, an alternative method of performing this second test is used: The $ L^{\mbox{th}}$ function $ \textbf {f}_L$ is accepted if (refer also to equation (13))

$\displaystyle \Delta S_L > \frac{S_{L-1}}{L_{max}-L}$ (14)

where $S_{L-1}$ is the sum of squares of residuals after the $L-1$ functions previously accepted, and $L_{max}$ is the total number of functions allowed in the final expression of the fit (defined by the user).
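
In code this criterion is a single comparison of the gain (13) against the residual budget per remaining term; schematically, with hypothetical names:

    // Sketch of test 2 when no maximum angle has been set.
    Bool_t accept = (dSL > SLprev / (Lmax - L));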

From this we see that by restricting $L_{max}$, the number of terms in the final model, the fit is more difficult to perform, since the above selection criterion becomes more limiting.

The more coefficients we evaluate, the more the sum of squares of residuals $ S$ will be reduced. We can evaluate $ S$ before inverting $ \mathsf{B}$ as shown below.

Coefficients and Coefficient Errors

Having found a parameterization, that is, the $F_l$'s and $L$, that minimizes $S$, we still need to determine the coefficients $c_l$. However, it's a feature of how we choose the significant functions that the evaluation of the $c_l$'s becomes trivial [5]. To derive $\mathbf{c}$, we first note that equations (4)-(5) can be written as

$\displaystyle \mathsf{F} = \mathsf{W}\mathsf{B}$ (15)

where

$\displaystyle b_{ij} = \left\{\begin{array}{rcl} \frac{\mathbf{f}_j \bullet \mathbf{w}_i}{\mathbf{w}_i^2} & \mbox{if} & i < j \\ 1 & \mbox{if} & i = j \\ 0 & \mbox{if} & i > j \end{array}\right.$ (16)

Consequently, $\mathsf{B}$ is an upper triangular matrix, which can readily be inverted. So we now evaluate

$\displaystyle \mathsf{F}\mathsf{B}^{-1} = \mathsf{W}$ (17)

The model $ \mathsf{W}\mathbf{a}$ can therefore be written as

$\displaystyle (\mathsf{F}\mathsf{B}^{-1})\mathbf{a} = \mathsf{F}(\mathsf{B}^{-1}\mathbf{a}).$

The original model $ \mathsf{F}\mathbf{c}$ is therefore identical with this if

$\displaystyle \mathbf{c} = \left(\mathsf{B}^{-1}\mathbf{a}\right) = \left[\mathbf{a}^T\left(\mathsf{B}^{-1}\right)^T\right]^T .$ (18)

The reason we use $\left(\mathsf{B}^{-1}\right)^T$ rather than $\mathsf{B}^{-1}$ is to save storage, since $\left(\mathsf{B}^{-1}\right)^T$ can be stored in the same matrix as $\mathsf{B}$ (TMultiDimFit::MakeCoefficients). The errors in the coefficients are calculated by inverting the curvature matrix of the non-orthogonal functions $f_{lj}$ [1] (TMultiDimFit::MakeCoefficientErrors).
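
Equivalently, since $\mathsf{B}$ is upper triangular with unit diagonal, $\mathbf{c}$ solves $\mathsf{B}\mathbf{c} = \mathbf{a}$, which a simple back-substitution sketch (hypothetical names) makes explicit:

    // Sketch: solve B c = a by back-substitution; B is upper
    // triangular with b_ii = 1, see eq. (16).
    for (Int_t i = L - 1; i >= 0; i--) {
      c[i] = a[i];
      for (Int_t j = i + 1; j < L; j++)
        c[i] -= B[i][j] * c[j];
    }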


Considerations

It's important to realize that the training sample should be representative of the problem at hand, in particular along the borders of the region of interest. This is because the algorithm presented here is an interpolation, rather than an extrapolation [5].

Also, the independent variables $x_{i}$ need to be linearly independent, since the procedure will perform poorly if they are not [5]. One can find a linear transformation from one's original variables $\xi_{i}$ to a set of linearly independent variables $x_{i}$, using a Principal Components Analysis (see TPrincipal), and then use the transformed variables as input to this class [5] [6].

H. Wind also outlines a method for parameterising a multidimensional dependence over a multidimensional set of variables. An example of the method from [5] is as follows (please refer to [5] for a full discussion):

  1. Define $\mathbf{P} = (P_1, \ldots, P_5)$ as the five dependent quantities that define a track.
  2. Compute, for $ M$ different values of $ \mathbf{P}$, the tracks through the magnetic field, and determine the corresponding $ \mathbf{x} = (x_1, \ldots, x_N)$.
  3. Use the simulated observations to determine, with a simple approximation, the values of $\mathbf{P}_j$. We call these values $\mathbf{P}^\prime_j$, $j = 1, \ldots, M$.
  4. Determine from $\mathbf{x}$ a set of at least five relevant coordinates $\mathbf{x}^\prime$, using constraints, or alternatively:
  5. Perform a Principal Component Analysis (using TPrincipal), and use it to obtain a linear transformation $\mathbf{x} \rightarrow \mathbf{x}^\prime$, so that $\mathbf{x}^\prime$ are constrained and linearly independent.
  6. Perform a Principal Component Analysis on $Q_i = P_i / P^\prime_i$, $i = 1, \ldots, 5$, to get quantities $\mathbf{Q}^\prime$ that are linearly independent among themselves (but not independent of $\mathbf{x}$).
  7. For each component $Q^\prime_i$, make a multidimensional fit, using $\mathbf{x}^\prime$ as the variables, thus determining a set of coefficients $\mathbf{c}_i$.

To process data using this parameterisation, do

  1. Test whether the observation $\mathbf{x}$ is within the domain of the parameterization, using the result from the Principal Component Analysis.
  2. Determine $ \mathbf{P}^\prime$ as before.
  3. Determine $\mathbf{x}^\prime$ as before.
  4. Use the result of the fit to determine $\mathbf{Q}^\prime$.
  5. Transform back to $ \mathbf{P}$ from $ \mathbf{Q}^\prime$, using the result from the Principal Component Analysis.


Testing the parameterization

The class also provides functionality for testing the parameterization found over the training sample (TMultiDimFit::Fit). This is done by passing the class a test sample of $M_t$ tuples of the form $(\mathbf{x}_{t,j}, D_{t,j}, E_{t,j})$, where $\mathbf{x}_{t,j}$ are the independent variables, $D_{t,j}$ the known, dependent quantity, and $E_{t,j}$ is the square error in $D_{t,j}$ (TMultiDimFit::AddTestRow).

The parameterization is then evaluated at every $ \mathbf{x}_t$ in the test sample, and

$\displaystyle S_t \equiv \sum_{j=1}^{M_t} \left(D_{t,j} - D_p\left(\mathbf{x}_{t,j}\right)\right)^2$

is evaluated. The relative error over the test sample

$\displaystyle R_t = \frac{S_t}{\sum_{j=1}^{M_t} D_{t,j}^2}$

should not be too low or too high compared to $R$ from the training sample. Also, the multiple correlation coefficients from both samples should be fairly close; otherwise one of the samples is not representative of the problem. A large difference in the reduced $\chi^2$ over the two samples indicates an overfit, and the maximum number of terms in the parameterisation should be reduced.

It's possible to use Minuit [4] to further improve the fit, using the test sample.
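
Continuing the sketch from the overview, a hedged outline of this test step (getTestPoint() again a hypothetical stand-in for the user's data source):

    // Sketch: add a test sample, then let MINUIT refine the
    // coefficients (option "M" of TMultiDimFit::Fit).
    for (Int_t i = 0; i < 100; i++) {
      getTestPoint(x, d, e);
      fit->AddTestRow(x, d, e);
    }
    fit->Fit("M");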

Christian Holm
November 2000, NBI

Bibliography

[1]
Philip R. Bevington and D. Keith Robinson.
Data Reduction and Error Analysis for the Physical Sciences.
McGraw-Hill, 2nd edition, 1992.

[2]
René Brun et al.
MUDIFI.
Long writeup DD/75-23, CERN, 1980.

[3]
Gene H. Golub and Charles F. van Loan.
Matrix Computations.
Johns Hopkins University Press, Baltimore, 3rd edition, 1996.

[4]
F. James.
MINUIT.
Long writeup D506, CERN, 1998.

[5]
H. Wind.
Function parameterization.
In Proceedings of the 1972 CERN Computing and Data Processing School, volume 72-21 of Yellow reports. CERN, 1972.

[6]
H. Wind.
1. Principal component analysis, 2. Pattern recognition for track finding, 3. Interpolation and functional representation.
Yellow report EP/81-12, CERN, 1981.
*/ //

//

#include "Riostream.h"
#include "TMultiDimFit.h"
#include "TMath.h"
#include "TH1.h"
#include "TH2.h"
#include "TROOT.h"
#include "TBrowser.h"
#include "TDecompChol.h"

#define RADDEG (180. / TMath::Pi())
#define DEGRAD (TMath::Pi() / 180.)
#define HIST_XORIG     0
#define HIST_DORIG     1
#define HIST_XNORM     2
#define HIST_DSHIF     3
#define HIST_RX        4
#define HIST_RD        5
#define HIST_RTRAI     6
#define HIST_RTEST     7
#define PARAM_MAXSTUDY 1
#define PARAM_SEVERAL  2
#define PARAM_RELERR   3
#define PARAM_MAXTERMS 4


//____________________________________________________________________
static void mdfHelper(int&, double*, double&, double*, int);

//____________________________________________________________________
ClassImp(TMultiDimFit);

//____________________________________________________________________
// Static instance. Used with mdfHelper and TMinuit
TMultiDimFit* TMultiDimFit::fgInstance = 0;


//____________________________________________________________________
 TMultiDimFit::TMultiDimFit()
{
  // Empty CTOR. Do not use
  fMeanQuantity            = 0;
  fMaxQuantity             = 0;
  fMinQuantity             = 0;
  fSumSqQuantity           = 0;
  fSumSqAvgQuantity        = 0;
  fPowerLimit              = 1;

  fMaxAngle                = 0;
  fMinAngle                = 1;

  fNVariables              = 0;
  fMaxVariables            = 0;
  fMinVariables            = 0;
  fSampleSize              = 0;

  fPolyType                = kMonomials;
  fShowCorrelation         = kFALSE;

  fIsUserFunction          = kFALSE;

  fPowers                  = 0;
  fMaxPowers               = 0;
  fMaxPowersFinal          = 0;

  fHistograms              = 0;
  fHistogramMask           = 0;

  fQuantity(0);
  fVariables(0);
  fMaxVariables(0);
  fMinVariables(0);
  fMeanVariables(0);

  fFitter                  = 0;
}


//____________________________________________________________________
 TMultiDimFit::TMultiDimFit(Int_t dimension,
                           EMDFPolyType type,
                           Option_t *option)
  : TNamed("multidimfit","Multi-dimensional fit object"),
    fQuantity(dimension),
    fSqError(dimension),
    fVariables(dimension*100),
    fMeanVariables(dimension),
    fMaxVariables(dimension),
    fMinVariables(dimension)
{
  // Constructor
  // Second argument is the type of polynomials to use in
  // parameterisation, one of:
  //      TMultiDimFit::kMonomials
  //      TMultiDimFit::kChebyshev
  //      TMultiDimFit::kLegendre
  //
  // Options:
  //   K      Compute (k)correlation matrix
  //   V      Be verbose
  //
  // Default is no options.
  //

  fgInstance = this;

  fMeanQuantity           = 0;
  fMaxQuantity            = 0;
  fMinQuantity            = 0;
  fSumSqQuantity          = 0;
  fSumSqAvgQuantity       = 0;
  fPowerLimit             = 1;

  fMaxAngle               = 0;
  fMinAngle               = 1;

  fNVariables             = dimension;
  fMaxVariables           = 0;
  fMinVariables           = 0;
  fSampleSize             = 0;
  fTestSampleSize         = 0;
  fMinRelativeError       = 0.01;
  fError                  = 0;
  fTestError              = 0;
  fPrecision              = 0;
  fTestPrecision          = 0;
  fParameterisationCode   = 0;

  fPolyType               = type;
  fShowCorrelation        = kFALSE;
  fIsVerbose              = kFALSE;

  TString opt             = option;
  opt.ToLower();

  if (opt.Contains("k")) fShowCorrelation = kTRUE;
  if (opt.Contains("v")) fIsVerbose       = kTRUE;

  fIsUserFunction         = kFALSE;

  fHistograms             = 0;
  fHistogramMask          = 0;

  fPowers                 = 0;
  fMaxPowers              = new Int_t[dimension];
  fMaxPowersFinal         = new Int_t[dimension];
  fFitter                 = 0;
}


//____________________________________________________________________
 TMultiDimFit::~TMultiDimFit()
{
  // DTOR
  delete [] fPowers;
  delete [] fMaxPowers;
  delete [] fMaxPowersFinal;
  delete [] fPowerIndex;
  delete [] fFunctionCodes;
  if (fHistograms) fHistograms->Clear("nodelete");
  delete fHistograms;
}


//____________________________________________________________________
 void TMultiDimFit::AddRow(const Double_t *x, Double_t D, Double_t E)
{
  // Add a row consisting of fNVariables independent variables, the
  // known, dependent quantity, and optionally, the square error in
  // the dependent quantity, to the training sample to be used for the
  // parameterization.
  // The mean of the variables and quantity is calculated on the fly,
  // as outlined in TPrincipal::AddRow.
  // This sample should be representative of the problem at hand.
  // Please note that if no error is given, Poisson statistics is
  // assumed and the square error is set to the value of the dependent
  // quantity. See also the class description above.

  if (!x)
    return;

  if (++fSampleSize == 1) {
    fMeanQuantity  = D;
    fMaxQuantity   = D;
    fMinQuantity   = D;
    fSumSqQuantity = D * D;
  }
  else {
    fMeanQuantity  *= 1 - 1./Double_t(fSampleSize);
    fMeanQuantity  += D / Double_t(fSampleSize);
    fSumSqQuantity += D * D;

    if (D >= fMaxQuantity) fMaxQuantity = D;
    if (D <= fMinQuantity) fMinQuantity = D;
  }


  // If the vector isn't big enough to hold the new data, then
  // expand the vector by half its size.
  Int_t size = fQuantity.GetNrows();
  if (fSampleSize > size) {
    fQuantity.ResizeTo(size + size/2);
    fSqError.ResizeTo(size + size/2);
  }

  // Store the value
  fQuantity(fSampleSize-1) = D;
  fSqError(fSampleSize-1) = (E == 0 ? D : E);

  // Store data point in internal vector
  // If the vector isn't big enough to hold the new data, then
  // expand the vector by half its size
  size = fVariables.GetNrows();
  if (fSampleSize * fNVariables > size)
    fVariables.ResizeTo(size + size/2);


   // Store the data point and update the per-variable statistics
  Int_t i,j;
  for (i = 0; i < fNVariables; i++) {
    if (fSampleSize == 1) {
      fMeanVariables(i) = x[i];
      fMaxVariables(i)  = x[i];
      fMinVariables(i)  = x[i];
    }
    else {
      fMeanVariables(i) *= 1 - 1./Double_t(fSampleSize);
      fMeanVariables(i) += x[i] / Double_t(fSampleSize);

      // Update the maximum value for this component
      if (x[i] >= fMaxVariables(i)) fMaxVariables(i)  = x[i];

      // Update the minimum value for this component
      if (x[i] <= fMinVariables(i)) fMinVariables(i)  = x[i];

    }

    // Store the data.
    j = (fSampleSize-1) * fNVariables + i;
    fVariables(j) = x[i];
  }
}


//____________________________________________________________________
 void TMultiDimFit::AddTestRow(const Double_t *x, Double_t D, Double_t E)
{
  // Add a row consisting of fNVariables independent variables, the
  // known, dependent quantity, and optionally, the square error in
  // the dependent quantity, to the test sample to be used for the
  // test of the parameterization.
  // This sample needn't be representative of the problem at hand.
  // Please note that if no error is given, Poisson statistics is
  // assumed and the square error is set to the value of the dependent
  // quantity. See also the class description above.

  if (fTestSampleSize++ == 0) {
    fTestQuantity.ResizeTo(fNVariables);
    fTestSqError.ResizeTo(fNVariables);
    fTestVariables.ResizeTo(fNVariables * 100);
  }

  // If the vector isn't big enough to hold the new data, then
  // expand the vector by half its size.
  Int_t size = fTestQuantity.GetNrows();
  if (fTestSampleSize > size) {
    fTestQuantity.ResizeTo(size + size/2);
    fTestSqError.ResizeTo(size + size/2);
  }

  // Store the value
  fTestQuantity(fTestSampleSize-1) = D;
  fTestSqError(fTestSampleSize-1) = (E == 0 ? D : E);

  // Store data point in internal vector
  // If the vector isn't big enough to hold the new data, then
  // expand the vector by half its size
  size = fTestVariables.GetNrows();
  if (fTestSampleSize * fNVariables > size)
    fTestVariables.ResizeTo(size + size/2);


   // Store the data point and check that it lies within the training
   // range
  Int_t i,j;
  for (i = 0; i < fNVariables; i++) {
    j = fNVariables * (fTestSampleSize - 1) + i;
    fTestVariables(j) = x[i];

    if (x[i] > fMaxVariables(i))
      Warning("AddTestRow", "variable %d (row: %d) too large: %f > %f",
               i, fTestSampleSize, x[i], fMaxVariables(i));
    if (x[i] < fMinVariables(i))
      Warning("AddTestRow", "variable %d (row: %d) too small: %f < %f",
               i, fTestSampleSize, x[i], fMinVariables(i));
  }
}


//____________________________________________________________________
 void TMultiDimFit::Browse(TBrowser* b)
{
  // Browse the TMultiDimFit object in the TBrowser.
  if (fHistograms) {
    TIter next(fHistograms);
    TH1* h = 0;
    while ((h = (TH1*)next()))
      b->Add(h,h->GetName());
  }
  if (fVariables.IsValid())
    b->Add(&fVariables, "Variables (Training)");
  if (fQuantity.IsValid())
    b->Add(&fQuantity, "Quantity (Training)");
  if (fSqError.IsValid())
    b->Add(&fSqError, "Error (Training)");
  if (fMeanVariables.IsValid())
    b->Add(&fMeanVariables, "Mean of Variables (Training)");
  if (fMaxVariables.IsValid())
    b->Add(&fMaxVariables, "Max of Variables (Training)");
  if (fMinVariables.IsValid())
    b->Add(&fMinVariables, "Min of Variables (Training)");
  if (fTestVariables.IsValid())
    b->Add(&fTestVariables, "Variables (Test)");
  if (fTestQuantity.IsValid())
    b->Add(&fTestQuantity, "Quantity (Test)");
  if (fTestSqError.IsValid())
    b->Add(&fTestSqError, "Error (Test)");
  if (fFunctions.IsValid())
    b->Add(&fFunctions, "Functions");
  if(fCoefficients.IsValid())
    b->Add(&fCoefficients,"Coefficients");
  if(fCoefficientsRMS.IsValid())
    b->Add(&fCoefficientsRMS,"Coefficients Errors");
  if (fOrthFunctions.IsValid())
    b->Add(&fOrthFunctions, "Orthogonal Functions");
  if (fOrthFunctionNorms.IsValid())
    b->Add(&fOrthFunctionNorms, "Orthogonal Functions Norms");
  if (fResiduals.IsValid())
    b->Add(&fResiduals, "Residuals");
  if(fOrthCoefficients.IsValid())
    b->Add(&fOrthCoefficients,"Orthogonal Coefficients");
  if (fOrthCurvatureMatrix.IsValid())
    b->Add(&fOrthCurvatureMatrix,"Orthogonal curvature matrix");
  if(fCorrelationMatrix.IsValid())
    b->Add(&fCorrelationMatrix,"Correlation Matrix");
  if (fFitter)
    b->Add(fFitter, fFitter->GetName());
}


//____________________________________________________________________
 void TMultiDimFit::Clear(Option_t *option)
{
  // Clear internal structures and variables
  Int_t i, j, n = fNVariables, m = fMaxFunctions;

  // Training sample, dependent quantity
  fQuantity.Zero();
  fSqError.Zero();
  fMeanQuantity                 = 0;
  fMaxQuantity                  = 0;
  fMinQuantity                  = 0;
  fSumSqQuantity                = 0;
  fSumSqAvgQuantity             = 0;

  // Training sample, independent variables
  fVariables.Zero();
  fNVariables                   = 0;
  fSampleSize                   = 0;
  fMeanVariables.Zero();
  fMaxVariables.Zero();
  fMinVariables.Zero();

  // Test sample
  fTestQuantity.Zero();
  fTestSqError.Zero();
  fTestVariables.Zero();
  fTestSampleSize               = 0;

  // Functions
  fFunctions.Zero();
  //for (i = 0; i < fMaxTerms; i++)  fPowerIndex[i]    = 0;
  //for (i = 0; i < fMaxTerms; i++)  fFunctionCodes[i] = 0;
  fMaxFunctions                 = 0;
  fMaxStudy                     = 0;
  fOrthFunctions.Zero();
  fOrthFunctionNorms.Zero();

  // Control parameters
  fMinRelativeError             = 0;
  fMinAngle                     = 0;
  fMaxAngle                     = 0;
  fMaxTerms                     = 0;

  // Powers
  for (i = 0; i < n; i++) {
    fMaxPowers[i]               = 0;
    fMaxPowersFinal[i]          = 0;
    for (j = 0; j < m; j++)
      fPowers[j * n + i]        = 0;
  }
  fPowerLimit                   = 0;

  // Residuals
  fMaxResidual                  = 0;
  fMinResidual                  = 0;
  fMaxResidualRow               = 0;
  fMinResidualRow               = 0;
  fSumSqResidual                = 0;

  // Fit
  fNCoefficients                = 0;
  fOrthCoefficients             = 0;
  fOrthCurvatureMatrix          = 0;
  fRMS                          = 0;
  fCorrelationMatrix.Zero();
  fError                        = 0;
  fTestError                    = 0;
  fPrecision                    = 0;
  fTestPrecision                = 0;

  // Coefficients
  fCoefficients.Zero();
  fCoefficientsRMS.Zero();
  fResiduals.Zero();
  if (fHistograms) fHistograms->Clear(option);

  // Options
  fPolyType                     = kMonomials;
  fShowCorrelation              = kFALSE;
  fIsUserFunction               = kFALSE;
}


//____________________________________________________________________
 Double_t TMultiDimFit::Eval(const Double_t *x, const Double_t* coeff)
{
  // Evaluate parameterization at point x. Optional argument coeff is
  // a vector of coefficients for the parameterisation, fNCoefficients
  // elements long.
  Double_t returnValue = fMeanQuantity;
  Double_t term        = 0;
  Int_t    i, j;

  for (i = 0; i < fNCoefficients; i++) {
    // Evaluate the ith term in the expansion
    term = (coeff ? coeff[i] : fCoefficients(i));
    for (j = 0; j < fNVariables; j++) {
      // Evaluate the factor (polynomial) in the j-th variable.
      Int_t    p  =  fPowers[fPowerIndex[i] * fNVariables + j];
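      // Map x[j] onto [-1,1] with the same transformation that
      // MakeNormalized applied to the training sample.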
      Double_t y  =  1 + 2. / (fMaxVariables(j) - fMinVariables(j))
        * (x[j] - fMaxVariables(j));
      term        *= EvalFactor(p,y);
    }
    // Add this term to the final result
    returnValue += term;
  }
  return returnValue;
}


//____________________________________________________________________
 Double_t TMultiDimFit::EvalControl(const Int_t *iv)
{
  // PRIVATE METHOD:
  // Calculate the control parameter from the passed powers
  Double_t s = 0;
  Double_t epsilon = 1e-6; // a small number
  for (Int_t i = 0; i < fNVariables; i++) {
    if (fMaxPowers[i] != 1)
      s += (epsilon + iv[i] - 1) / (epsilon + fMaxPowers[i] - 1);
  }
  return s;
}

//____________________________________________________________________
 Double_t TMultiDimFit::EvalFactor(Int_t p, Double_t x)
{
  // PRIVATE METHOD:
  // Evaluate function with power p at variable value x
  Int_t    i   = 0;
  Double_t p1  = 1;
  Double_t p2  = 0;
  Double_t p3  = 0;
  Double_t r   = 0;

  switch(p) {
  case 1:
    r = 1;
    break;
  case 2:
    r =  x;
    break;
  default:
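    // For p >= 3, build the order-(p-1) polynomial by recurrence,
    // with n = i-1:
    //   monomials:  r = x^(p-1)
    //   Legendre:   P_n(x) = ((2n-1) x P_{n-1} - (n-1) P_{n-2}) / n
    //   Chebyshev:  T_n(x) = 2 x T_{n-1} - T_{n-2}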
    p2 = x;
    for (i = 3; i <= p; i++) {
      p3 = p2 * x;
      if (fPolyType == kLegendre)
        p3 = ((2 * i - 3) * p2 * x - (i - 2) * p1) / (i - 1);
      else if (fPolyType == kChebyshev)
        p3 = 2 * x * p2 - p1;
      p1 = p2;
      p2 = p3;
    }
    r = p3;
  }

  return r;
}


//____________________________________________________________________
 void TMultiDimFit::FindParameterization(Option_t *)
{
  // Find the parameterization
  //
  // Options:
  //     None so far
  //
  // For a detailed description of what this entails, please refer
  // to the class description above.

  MakeNormalized();
  MakeCandidates();
  MakeParameterization();
  MakeCoefficients();
  MakeCoefficientErrors();
  MakeCorrelation();
}

//____________________________________________________________________
 void TMultiDimFit::Fit(Option_t *option)
{
  // Try to fit the found parameterisation to the test sample.
  //
  // Options
  //     M     use Minuit to improve coefficients
  //
  // Also, refer to the class description above.

  Int_t i, j;
  Double_t*      x    = new Double_t[fNVariables];
  Double_t  sumSqD    = 0;
  Double_t    sumD    = 0;
  Double_t  sumSqR    = 0;
  Double_t    sumR    = 0;

  // Calculate the residuals over the test sample
  for (i = 0; i < fTestSampleSize; i++) {
    for (j = 0; j < fNVariables; j++)
      x[j] = fTestVariables(i * fNVariables + j);
    Double_t res =  fTestQuantity(i) - Eval(x);
    sumD         += fTestQuantity(i);
    sumSqD       += fTestQuantity(i) * fTestQuantity(i);
    sumR         += res;
    sumSqR       += res * res;
    if (TESTBIT(fHistogramMask,HIST_RTEST))
      ((TH1D*)fHistograms->FindObject("res_test"))->Fill(res);
  }
  Double_t dAvg         = sumSqD - (sumD * sumD) / fTestSampleSize;
  Double_t rAvg         = sumSqR - (sumR * sumR) / fTestSampleSize;
  fTestCorrelationCoeff = (dAvg - rAvg) / dAvg;
  fTestError            = sumSqR;
  fTestPrecision        = sumSqR / sumSqD;

  TString opt(option);
  opt.ToLower();

  if (!opt.Contains("m"))
    MakeChi2();

  if (fNCoefficients * 50 > fTestSampleSize)
    Warning("Fit", "test sample is very small");

  if (!opt.Contains("m"))
    return;

  fFitter = TVirtualFitter::Fitter(0,fNCoefficients);
  fFitter->SetFCN(mdfHelper);

  const Int_t  maxArgs = 16;
  Int_t           args = 1;
  Double_t*   arglist  = new Double_t[maxArgs];
  arglist[0]           = -1;
  fFitter->ExecuteCommand("SET PRINT",arglist,args);

  for (i = 0; i < fNCoefficients; i++) {
    Double_t startVal = fCoefficients(i);
    Double_t startErr = fCoefficientsRMS(i);
    fFitter->SetParameter(i, Form("coeff%02d",i),
                          startVal, startErr, 0, 0);
  }

  // arglist[0]           = 0;
  args                 = 1;
  // fFitter->ExecuteCommand("SET PRINT",arglist,args);
  fFitter->ExecuteCommand("MIGRAD",arglist,args);

  for (i = 0; i < fNCoefficients; i++) {
    Double_t val = 0, err = 0, low = 0, high = 0;
    fFitter->GetParameter(i, Form("coeff%02d",i),
                         val, err, low, high);
    fCoefficients(i)    = val;
    fCoefficientsRMS(i) = err;
  }
}


//____________________________________________________________________
 void TMultiDimFit::MakeCandidates()
{
  // PRIVATE METHOD:
  // Create list of candidate functions for the parameterisation. See
  // also the class description above.

  Int_t i = 0;
  Int_t j = 0;
  Int_t k = 0;

  // The temporary array to store the powers in. We don't need to
  // initialize this array however.
  Int_t *powers = new Int_t[fNVariables * fMaxFunctions];

  // store of `control variables'
  Double_t* control  = new Double_t[fMaxFunctions];

  // We'd better initialize the variables
  Int_t *iv = new Int_t[fNVariables];
  for (i = 0; i < fNVariables; i++)
    iv[i] = 1;

  if (!fIsUserFunction) {

    // Number of funcs selected
    Int_t     numberFunctions = 0;

    // Absolute max number of functions
    Int_t maxNumberFunctions = 1;
    for (i = 0; i < fNVariables; i++)
      maxNumberFunctions *= fMaxPowers[i];

    while (kTRUE) {
      // Get the control value for this function
      Double_t s = EvalControl(iv);

      if (s <= fPowerLimit) {

        // Call over-loadable method Select, as to allow the user to
        // interfere with the selection of functions.
        if (Select(iv)) {
          numberFunctions++;

          // If we've reached the user defined limit of how many
          // functions we can consider, break out of the loop
          if (numberFunctions > fMaxFunctions)
            break;

          // Store the control value, so we can sort array of powers
          // later on
          control[numberFunctions-1] = Int_t(1.0e+6*s);

          // Store the powers in powers array.
          for (i = 0; i < fNVariables; i++) {
            j = (numberFunctions - 1) * fNVariables + i;
            powers[j] = iv[i];
          }
        } // if (Select())
      } // if (s <= fPowerLimit)

      for (i = 0; i < fNVariables; i++)
        if (iv[i] < fMaxPowers[i])
          break;

      // If all variables have reached their maximum power, then we
      // break out of the loop
      if (i == fNVariables) {
        fMaxFunctions = numberFunctions;
        break;
      }

      // Next power in variable i
      iv[i]++;

      for (j = 0; j < i; j++)
        iv[j] = 1;
    } // while (kTRUE)
  }
  else {
    // In case the user gave an explicit function
    for (i = 0; i < fMaxFunctions; i++) {
      // Copy the powers to working arrays
      for (j = 0; j < fNVariables; j++) {
        powers[i * fNVariables + j] = fPowers[i * fNVariables + j];
        iv[j]                 = fPowers[i * fNVariables + j];
      }

      control[i] = Int_t(1.0e+6*EvalControl(iv));
    }
  }

  // Now we need to sort the powers according to least `control
  // variable'
  Int_t *order = new Int_t[fMaxFunctions];
  for (i = 0; i < fMaxFunctions; i++)
    order[i] = i;
  fPowers = new Int_t[fMaxFunctions * fNVariables];

  for (i = 0; i < fMaxFunctions; i++) {
    Double_t x = control[i];
    Int_t    l = order[i];
    k = i;

    for (j = i; j < fMaxFunctions; j++) {
      if (control[j] <= x) {
        x = control[j];
        l = order[j];
        k = j;
      }
    }

    if (k != i) {
      control[k] = control[i];
      control[i] = x;
      order[k]   = order[i];
      order[i]   = l;
    }
  }

  for (i = 0; i < fMaxFunctions; i++)
    for (j = 0; j < fNVariables; j++)
      fPowers[i * fNVariables + j] = powers[order[i] * fNVariables + j];

  delete [] control;
  delete [] powers;
  delete [] order;
  delete [] iv;
}


//____________________________________________________________________
 Double_t TMultiDimFit::MakeChi2(const Double_t* coeff)
{
  // Calculate Chi square over the test sample. The optional
  // argument coeff is a vector of coefficients to use in the
  // evaluation of the parameterisation. If coeff == 0, then the found
  // coefficients are used.
  // Used by MINUIT for the fit (see TMultiDimFit::Fit)
  fChi2 = 0;
  Int_t i, j;
  Double_t* x = new Double_t[fNVariables];
  for (i = 0; i < fTestSampleSize; i++) {
    // Get the stored point
    for (j = 0; j < fNVariables; j++)
      x[j] = fTestVariables(i * fNVariables + j);

    // Evaluate function. Scale to shifted values
    Double_t f = Eval(x,coeff);

    // Calculate contribution to Chi square
    fChi2 += 1. / TMath::Max(fTestSqError(i),1e-20)
      * (fTestQuantity(i) - f) * (fTestQuantity(i) - f);
  }

  // Clean up
  delete [] x;

  return fChi2;
}


//____________________________________________________________________
 void TMultiDimFit::MakeCode(const char* filename, Option_t *option)
{
  // Generate the file <filename> with .C appended if argument doesn't
  // end in .cxx or .C. The file contains the implementation of the
  // function:
  //
  //   Double_t <funcname>(Double_t *x)
  //
  // which does the same as TMultiDimFit::Eval. Please refer to this
  // method.
  //
  // Further, the static variables:
  //
  //     Int_t    gNVariables
  //     Int_t    gNCoefficients
  //     Double_t gDMean
  //     Double_t gXMean[]
  //     Double_t gXMin[]
  //     Double_t gXMax[]
  //     Double_t gCoefficient[]
  //     Int_t    gPower[]
  //
  // are initialized. The only ROOT header file needed is Rtypes.h
  //
  // See TMultiDimFit::MakeRealCode for a list of options


  TString outName(filename);
  if (!outName.EndsWith(".C") && !outName.EndsWith(".cxx"))
    outName += ".C";

  MakeRealCode(outName.Data(),"",option);
}



//____________________________________________________________________
 void TMultiDimFit::MakeCoefficientErrors()
{
  // PRIVATE METHOD:
  // Compute the errors on the coefficients. For this to be done, the
  // curvature matrix of the non-orthogonal functions, is computed.
  Int_t    i = 0;
  Int_t    j = 0;
  Int_t    k = 0;
  TVectorD iF(fSampleSize);
  TVectorD jF(fSampleSize);
  fCoefficientsRMS.ResizeTo(fNCoefficients);

  TMatrixDSym curvatureMatrix(fNCoefficients);

  // Build the curvature matrix
  for (i = 0; i < fNCoefficients; i++) {
    iF = TMatrixDRow(fFunctions,i);
    for (j = 0; j <= i; j++) {
      jF = TMatrixDRow(fFunctions,j);
      for (k = 0; k < fSampleSize; k++)
        curvatureMatrix(i,j) +=
          1 / TMath::Max(fSqError(k), 1e-20) * iF(k) * jF(k);
      curvatureMatrix(j,i) = curvatureMatrix(i,j);
    }
  }

  // Calculate Chi Square
  fChi2 = 0;
  for (i = 0; i < fSampleSize; i++) {
    Double_t f = 0;
    for (j = 0; j < fNCoefficients; j++)
      f += fCoefficients(j) * fFunctions(j,i);
    fChi2 += 1. / TMath::Max(fSqError(i),1e-20) * (fQuantity(i) - f)
      * (fQuantity(i) - f);
  }

  // Invert the curvature matrix
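  // (normalising by the diagonal first improves the conditioning of
  // the Cholesky decomposition; the scaling is undone after the
  // inversion)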
  const TVectorD diag = TMatrixDDiag_const(curvatureMatrix);
  curvatureMatrix.NormByDiag(diag);

  TDecompChol chol(curvatureMatrix);
  if (!chol.Decompose())
    Error("MakeCoefficientErrors", "curvature matrix is singular");
  chol.Invert(curvatureMatrix);

  curvatureMatrix.NormByDiag(diag);

  for (i = 0; i < fNCoefficients; i++)
    fCoefficientsRMS(i) = TMath::Sqrt(curvatureMatrix(i,i));
}


//____________________________________________________________________
 void TMultiDimFit::MakeCoefficients()
{
  // PRIVATE METHOD:
  // Invert the model matrix B, and compute final coefficients. For a
  // more thorough discussion of what this means, please refer to the
  // class description above.

  //
  // First we invert the lower triangle matrix fOrthCurvatureMatrix
  // and store the inverted matrix in the upper triangle.

  Int_t i = 0, j = 0;
  Int_t col = 0, row = 0;

  // Invert the B matrix
  for (col = 1; col < fNCoefficients; col++) {
    for (row = col - 1; row > -1; row--) {
      fOrthCurvatureMatrix(row,col) = 0;
      for (i = row; i <= col ; i++)
         fOrthCurvatureMatrix(row,col) -=
          fOrthCurvatureMatrix(i,row)
          * fOrthCurvatureMatrix(i,col);
    }
  }

  // Compute the final coefficients
  fCoefficients.ResizeTo(fNCoefficients);

  for (i = 0; i < fNCoefficients; i++) {
    Double_t sum = 0;
    for (j = i; j < fNCoefficients; j++)
      sum += fOrthCurvatureMatrix(i,j) * fOrthCoefficients(j);
    fCoefficients(i) = sum;
  }

  // Compute the final residuals
  fResiduals.ResizeTo(fSampleSize);
  for (i = 0; i < fSampleSize; i++)
    fResiduals(i) = fQuantity(i);

  for (i = 0; i < fNCoefficients; i++)
    for (j = 0; j < fSampleSize; j++)
      fResiduals(j) -= fCoefficients(i) * fFunctions(i,j);

  // Compute the max and minimum, and squared sum of the evaluated
  // residuals
  fMinResidual = 10e10;
  fMaxResidual = -10e10;
  Double_t sqRes  = 0;
  for (i = 0; i < fSampleSize; i++){
    sqRes += fResiduals(i) * fResiduals(i);
    if (fResiduals(i) <= fMinResidual) {
      fMinResidual     = fResiduals(i);
      fMinResidualRow  = i;
    }
    if (fResiduals(i) >= fMaxResidual) {
      fMaxResidual     = fResiduals(i);
      fMaxResidualRow  = i;
    }
  }

  fCorrelationCoeff = fSumSqResidual / fSumSqAvgQuantity;
  fPrecision        = TMath::Sqrt(sqRes / fSumSqQuantity);

  // If we use histograms, fill some more
  if (TESTBIT(fHistogramMask,HIST_RD) ||
      TESTBIT(fHistogramMask,HIST_RTRAI) ||
      TESTBIT(fHistogramMask,HIST_RX)) {
    for (i = 0; i < fSampleSize; i++) {
      if (TESTBIT(fHistogramMask,HIST_RD))
        ((TH2D*)fHistograms->FindObject("res_d"))->Fill(fQuantity(i),
                                                        fResiduals(i));
      if (TESTBIT(fHistogramMask,HIST_RTRAI))
        ((TH1D*)fHistograms->FindObject("res_train"))->Fill(fResiduals(i));

      if (TESTBIT(fHistogramMask,HIST_RX))
        for (j = 0; j < fNVariables; j++)
          ((TH2D*)fHistograms->FindObject(Form("res_x_%d",j)))
            ->Fill(fVariables(i * fNVariables + j),fResiduals(i));
    }
  } // If histograms

}


//____________________________________________________________________
 void TMultiDimFit::MakeCorrelation()
{
  // PRIVATE METHOD:
  // Compute the correlation matrix
  if (!fShowCorrelation)
    return;

  fCorrelationMatrix.ResizeTo(fNVariables,fNVariables+1);

  Double_t d2      = 0;
  Double_t ddotXi  = 0; // G.Q. needs to be reinitialized in the loop over i fNVariables
  Double_t xiNorm  = 0; // G.Q. needs to be reinitialized in the loop over i fNVariables
  Double_t xidotXj = 0; // G.Q. needs to be reinitialized in the loop over j fNVariables
  Double_t xjNorm  = 0; // G.Q. needs to be reinitialized in the loop over j fNVariables

  Int_t i, j, k, l, m;  // G.Q. added m variable
  for (i = 0; i < fSampleSize; i++)
    d2 += fQuantity(i) * fQuantity(i);

  for (i = 0; i < fNVariables; i++) {
    ddotXi = 0.; // G.Q. reinitialisation
    xiNorm = 0.; // G.Q. reinitialisation
    for (j = 0; j< fSampleSize; j++) {
      // Index of sample j of variable i
      k =  j * fNVariables + i;
      ddotXi += fQuantity(j) * (fVariables(k) - fMeanVariables(i));
      xiNorm += (fVariables(k) - fMeanVariables(i))
        * (fVariables(k) - fMeanVariables(i));
    }
    fCorrelationMatrix(i,0) = ddotXi / TMath::Sqrt(d2 * xiNorm);

    for (j = 0; j < i; j++) {
      xidotXj = 0.; // G.Q. reinitialisation
      xjNorm = 0.; // G.Q. reinitialisation
      for (k = 0; k < fSampleSize; k++) {
        // Index of sample j of variable i
        // l =  j * fNVariables + k;  // G.Q.
        l =  k * fNVariables + j; // G.Q.
        m =  k * fNVariables + i; // G.Q.
        // G.Q.        xidotXj += (fVariables(i) - fMeanVariables(i))
        // G.Q.          * (fVariables(l) - fMeanVariables(j));
        xidotXj += (fVariables(m) - fMeanVariables(i))
          * (fVariables(l) - fMeanVariables(j));  // G.Q. modified index for Xi
        xjNorm  += (fVariables(l) - fMeanVariables(j))
          * (fVariables(l) - fMeanVariables(j));
      }
      //fCorrelationMatrix(i+1,j) = xidotXj / TMath::Sqrt(xiNorm * xjNorm);
      fCorrelationMatrix(i,j+1) = xidotXj / TMath::Sqrt(xiNorm * xjNorm);
    }
  }
}



//____________________________________________________________________
 Double_t TMultiDimFit::MakeGramSchmidt(Int_t function)
{
  // PRIVATE METHOD:
  // Make Gram-Schmidt orthogonalisation. The class description gives
  // a thorough account of this algorithm, as well as
  // references. Please refer to the class description above.



  // calculate w_i, that is, evaluate the current function at data
  // point i
  Double_t f2                        = 0;
  fOrthCoefficients(fNCoefficients)      = 0;
  fOrthFunctionNorms(fNCoefficients)  = 0;
  Int_t j        = 0;
  Int_t k        = 0;

  for (j = 0; j < fSampleSize; j++) {
    fFunctions(fNCoefficients, j) = 1;
    fOrthFunctions(fNCoefficients, j) = 0;
    // First, however, we need to calculate f_fNCoefficients
    for (k = 0; k < fNVariables; k++) {
      Int_t    p   =  fPowers[function * fNVariables + k];
      Double_t x   =  fVariables(j * fNVariables + k);
      fFunctions(fNCoefficients, j) *= EvalFactor(p,x);
    }

    // Calculate f dot f in f2
    f2 += fFunctions(fNCoefficients,j) *  fFunctions(fNCoefficients,j);
    // Assign to w_fNCoefficients f_fNCoefficients
    fOrthFunctions(fNCoefficients, j) = fFunctions(fNCoefficients, j);
  }

  // the first column of w is equal to f
  for (j = 0; j < fNCoefficients; j++) {
    Double_t fdw = 0;
    // Calculate (f_fNCoefficients dot w_j) / w_j^2
    for (k = 0; k < fSampleSize; k++) {
      fdw += fFunctions(fNCoefficients, k) * fOrthFunctions(j,k)
        / fOrthFunctionNorms(j);
    }

    fOrthCurvatureMatrix(fNCoefficients,j) = fdw;
    // and subtract it from the current value of w_ij
    for (k = 0; k < fSampleSize; k++)
      fOrthFunctions(fNCoefficients,k) -= fdw * fOrthFunctions(j,k);
  }

  for (j = 0; j < fSampleSize; j++) {
    // calculate squared length of w_fNCoefficients
    fOrthFunctionNorms(fNCoefficients) +=
      fOrthFunctions(fNCoefficients,j)
      * fOrthFunctions(fNCoefficients,j);

    // calculate D dot w_fNCoefficients in A
    fOrthCoefficients(fNCoefficients) += fQuantity(j)
      * fOrthFunctions(fNCoefficients, j);
  }

  // First test, but only if the user didn't specify the functions
  if (!fIsUserFunction)
    if (TMath::Sqrt(fOrthFunctionNorms(fNCoefficients) / (f2 + 1e-10))
        < TMath::Sin(fMinAngle*DEGRAD))
      return 0;

  // The result found by this code for the first residual is always
  // much less than the one found by MUDIFI. That's because it's
  // supposed to be. The cause is the improved precision of Double_t
  // over DOUBLE PRECISION!
  fOrthCurvatureMatrix(fNCoefficients,fNCoefficients) = 1;
  Double_t b = fOrthCoefficients(fNCoefficients);
  fOrthCoefficients(fNCoefficients) /= fOrthFunctionNorms(fNCoefficients);

  // Calculate the reduction of the sum of squares of residuals from
  // including this function, i.e., dS_L of eq. (13) in the class
  // description.
  Double_t dResidur = fOrthCoefficients(fNCoefficients) * b;

  return dResidur;
}


//____________________________________________________________________
 void TMultiDimFit::MakeHistograms(Option_t *option)
{
  // Make histograms of the result of the analysis. This message
  // should be sent after having read all data points, but before
  // finding the parameterization
  //
  // Options:
  //     A         All the below
  //     X         Original independent variables
  //     D         Original dependent variables
  //     N         Normalised independent variables
  //     S         Shifted dependent variables
  //     R1        Residuals versus normalised independent variables
  //     R2        Residuals versus dependent variable
  //     R3        Residuals computed on training sample
  //     R4        Residuals computed on test sample
  //
  // For a description of these quantities, refer to the class
  // description above.

  TString opt(option);
  opt.ToLower();

  if (opt.Length() < 1)
    return;

  if (!fHistograms)
    fHistograms = new TList;

  // Counter variable
  Int_t i = 0;

  // Histogram of original variables
  if (opt.Contains("x") || opt.Contains("a")) {
    SETBIT(fHistogramMask,HIST_XORIG);
    for (i = 0; i < fNVariables; i++)
      if (!fHistograms->FindObject(Form("x_%d_orig",i)))
        fHistograms->Add(new TH1D(Form("x_%d_orig",i),
                                  Form("Original variable # %d",i),
                                  100, fMinVariables(i),
                                  fMaxVariables(i)));
  }

  // Histogram of original dependent variable
  if (opt.Contains("d") || opt.Contains("a")) {
    SETBIT(fHistogramMask,HIST_DORIG);
    if (!fHistograms->FindObject("d_orig"))
      fHistograms->Add(new TH1D("d_orig", "Original Quantity",
                                100, fMinQuantity, fMaxQuantity));
  }

  // Histograms of normalized variables
  if (opt.Contains("n") || opt.Contains("a")) {
    SETBIT(fHistogramMask,HIST_XNORM);
    for (i = 0; i < fNVariables; i++)
      if (!fHistograms->FindObject(Form("x_%d_norm",i)))
        fHistograms->Add(new TH1D(Form("x_%d_norm",i),
                                  Form("Normalized variable # %d",i),
                                  100, -1,1));
  }

  // Histogram of shifted dependent variable
  if (opt.Contains("s") || opt.Contains("a")) {
    SETBIT(fHistogramMask,HIST_DSHIF);
    if (!fHistograms->FindObject("d_shifted"))
      fHistograms->Add(new TH1D("d_shifted", "Shifted Quantity",
                                100, fMinQuantity - fMeanQuantity,
                                fMaxQuantity - fMeanQuantity));
  }

  // Residual from training sample versus independent variables
  if (opt.Contains("r1") || opt.Contains("a")) {
    SETBIT(fHistogramMask,HIST_RX);
    for (i = 0; i < fNVariables; i++)
      if (!fHistograms->FindObject(Form("res_x_%d",i)))
        fHistograms->Add(new TH2D(Form("res_x_%d",i),
                                  Form("Computed residual versus x_%d", i),
                                  100, -1,    1,
                                  35,
                                  fMinQuantity - fMeanQuantity,
                                  fMaxQuantity - fMeanQuantity));
  }

  // Residual from training sample versus. dependent variable
  if (opt.Contains("r2") || opt.Contains("a")) {
    SETBIT(fHistogramMask,HIST_RD);
    if (!fHistograms->FindObject("res_d"))
      fHistograms->Add(new TH2D("res_d",
                                "Computed residuals vs Quantity",
                                100,
                                fMinQuantity - fMeanQuantity,
                                fMaxQuantity - fMeanQuantity,
                                35,
                                fMinQuantity - fMeanQuantity,
                                fMaxQuantity - fMeanQuantity));
  }

  // Residual from training sample
  if (opt.Contains("r3") || opt.Contains("a")) {
    SETBIT(fHistogramMask,HIST_RTRAI);
    if (!fHistograms->FindObject("res_train"))
      fHistograms->Add(new TH1D("res_train",
                                "Computed residuals over training sample",
                                100, fMinQuantity - fMeanQuantity,
                                fMaxQuantity - fMeanQuantity));

  }
  if (opt.Contains("r4") || opt.Contains("a")) {
    SETBIT(fHistogramMask,HIST_RTEST);
    if (!fHistograms->FindObject("res_test"))
      fHistograms->Add(new TH1D("res_test",
                                "Distribution of residuals from test",
                                100,fMinQuantity - fMeanQuantity,
                                fMaxQuantity - fMeanQuantity));
  }
}


//____________________________________________________________________
 void TMultiDimFit::MakeMethod(const Char_t* classname, Option_t* option)
{
  // Generate the file <classname>MDF.cxx which contains the
  // implementation of the method:
  //
  //   Double_t <classname>::MDF(Double_t *x)
  //
  // which does the same as  TMultiDimFit::Eval. Please refer to this
  // method.
  //
  // Further, the public static members:
  //
  //   Int_t    <classname>::fgNVariables
  //   Int_t    <classname>::fgNCoefficients
  //   Double_t <classname>::fgDMean
  //   Double_t <classname>::fgXMean[]       //[fgNVariables]
  //   Double_t <classname>::fgXMin[]        //[fgNVariables]
  //   Double_t <classname>::fgXMax[]        //[fgNVariables]
  //   Double_t <classname>::fgCoefficient[] //[fgNCoefficients]
  //   Int_t    <classname>::fgPower[]       //[fgNCoefficients*fgNVariables]
  //
  // are initialized, and assumed to exist. The class declaration is
  // assumed to be in <classname>.h and assumed to be provided by the
  // user.
  //
  // See TMultiDimFit::MakeRealCode for a list of options
  //
  // The minimal class definition is:
  //
  //   class <classname> {
  //   public:
  //     static Int_t    fgNVariables;     // Number of variables
  //     static Int_t    fgNCoefficients;  // Number of terms
  //     static Double_t fgDMean;          // Mean from training sample
  //     static Double_t fgXMean[];        // Mean from training sample
  //     static Double_t fgXMin[];         // Min from training sample
  //     static Double_t fgXMax[];         // Max from training sample
  //     static Double_t fgCoefficient[];  // Coefficients
  //     static Int_t    fgPower[];        // Function powers
  //
  //     Double_t Eval(Double_t *x);
  //   };
  //
  // Whether the method <classname>::Eval should be static or not, is
  // up to the user.

  MakeRealCode(Form("%sMDF.cxx", classname), classname, option);
}



//____________________________________________________________________
 void TMultiDimFit::MakeNormalized()
{
  // PRIVATE METHOD:
  // Normalize the data to the interval [-1;1]. This is needed for
  // the methods of this class to work.
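  //
  // Each independent variable x in [min;max] is mapped to
  //
  //   y = 1 + 2 * (x - max) / (max - min)
  //
  // so that, for example, with min = 0 and max = 10: x = 0 gives
  // y = -1, x = 5 gives y = 0, and x = 10 gives y = 1.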

  Int_t i = 0;
  Int_t j = 0;
  Int_t k = 0;

  for (i = 0; i < fSampleSize; i++) {
    if (TESTBIT(fHistogramMask,HIST_DORIG))
      ((TH1D*)fHistograms->FindObject("d_orig"))->Fill(fQuantity(i));

    fQuantity(i) -= fMeanQuantity;
    fSumSqAvgQuantity  += fQuantity(i) * fQuantity(i);

    if (TESTBIT(fHistogramMask,HIST_DSHIF))
      ((TH1D*)fHistograms->FindObject("d_shifted"))->Fill(fQuantity(i));

    for (j = 0; j < fNVariables; j++) {
      Double_t range = 1. / (fMaxVariables(j) - fMinVariables(j));
      k              = i * fNVariables + j;

      // Fill histograms of original independent variables
      if (TESTBIT(fHistogramMask,HIST_XORIG))
        ((TH1D*)fHistograms->FindObject(Form("x_%d_orig",j)))
          ->Fill(fVariables(k));

      // Normalise independent variables
      fVariables(k) = 1 + 2 * range * (fVariables(k) - fMaxVariables(j));

      // Fill histograms of normalised independent variables
      if (TESTBIT(fHistogramMask,HIST_XNORM))
        ((TH1D*)fHistograms->FindObject(Form("x_%d_norm",j)))
          ->Fill(fVariables(k));

    }
  }
  // Shift min and max of dependent variable
  fMaxQuantity -= fMeanQuantity;
  fMinQuantity -= fMeanQuantity;

  // Shift mean of independent variables
  for (i = 0; i < fNVariables; i++) {
    Double_t range = 1. / (fMaxVariables(i) - fMinVariables(i));
    fMeanVariables(i) = 1 + 2 * range * (fMeanVariables(i)
                                         - fMaxVariables(i));
  }
}


//____________________________________________________________________
 void TMultiDimFit::MakeParameterization()
{
  // PRIVATE METHOD:
  // Find the parameterization over the training sample. A full account
  // of the algorithm is given in the class description.

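  // In outline, each candidate function is treated as follows: it is
  // orthogonalized against the already accepted functions
  // (MakeGramSchmidt), its contribution to the reduction of the sum
  // of squared residuals is tested (TestFunction), and, if accepted,
  // its index is recorded in fPowerIndex. The loop stops when the
  // wanted relative error is reached, the maximum number of terms or
  // studies is exhausted, or all candidates have been considered
  // several times.
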

  Int_t     i              = -1;
  Int_t     j              = 0;
  Int_t     k              = 0;
  Int_t     maxPass        = 3;
  Int_t     studied        = 0;
  Double_t  squareResidual = fSumSqAvgQuantity;
  fNCoefficients           = 0;
  fSumSqResidual           = fSumSqAvgQuantity;
  fFunctions.ResizeTo(fMaxTerms,fSampleSize);
  fOrthFunctions.ResizeTo(fMaxTerms,fSampleSize);
  fOrthFunctionNorms.ResizeTo(fMaxTerms);
  fOrthCoefficients.ResizeTo(fMaxTerms);
  fOrthCurvatureMatrix.ResizeTo(fMaxTerms,fMaxTerms);
  fFunctions = 1;

  fFunctionCodes = new Int_t[fMaxFunctions];
  fPowerIndex    = new Int_t[fMaxTerms];
  Int_t l;
  for (l=0;l<fMaxFunctions;l++) fFunctionCodes[l] = 0;
  for (l=0;l<fMaxTerms;l++)     fPowerIndex[l]    = 0;
  
  if (fMaxAngle != 0)  maxPass = 100;
  if (fIsUserFunction) maxPass = 1;

  // Loop over the number of functions we want to study.
  // increment inspection counter
  while(kTRUE) {

    // Reached the user-defined limit of studies
    if (studied++ >= fMaxStudy) {
      fParameterisationCode = PARAM_MAXSTUDY;
      break;
    }

    // Considered all functions several times
    if (k >= maxPass) {
      fParameterisationCode = PARAM_SEVERAL;
      break;
    }

    // increment function counter
    i++;

    // If we've reached the end of the functions, restart pass
    if (i == fMaxFunctions) {
      if (fMaxAngle != 0)
        fMaxAngle += (90 - fMaxAngle) / 2;
      i = 0;
      studied--;
      k++;
      continue;
    }
    if (studied == 1)
      fFunctionCodes[i] = 0;
    else if (fFunctionCodes[i] >= 2)
      continue;

    // Print a happy message
    if (fIsVerbose && studied == 1)
      cout << "Coeff   SumSqRes    Contrib   Angle      QM   Func"
           << "     Value        W^2  Powers" << endl;

    // Make the Gram-Schmidt
    Double_t dResidur = MakeGramSchmidt(i);

    if (dResidur == 0) {
      // This function is no good!
      // First test is in MakeGramSchmidt
      fFunctionCodes[i] = 1;
      continue;
    }

    // If the user specified the function, assume they know what
    // they're doing
    if (!fIsUserFunction) {
      // Flag this function as considered
      fFunctionCodes[i] = 2;

      // Test if this function contributes to the fit
      if (!TestFunction(squareResidual, dResidur)) {
        fFunctionCodes[i] = 1;
        continue;
      }
    }

    // If we get here, the function currently considered is accepted
    // as term number fNCoefficients. Flag it as OK, store its index
    // in fPowerIndex, and increment the term counter.
    fFunctionCodes[i]          = 3;
    fPowerIndex[fNCoefficients] = i;
    fNCoefficients++;

    // Subtract this function's contribution from the remaining sum
    // of squared residuals.
    squareResidual -= dResidur;


    // Calculate control parameter from this function
    for (j = 0; j < fNVariables; j++) {
      if (fNCoefficients == 1
          || fMaxPowersFinal[j] <= fPowers[i * fNVariables + j] - 1)
        fMaxPowersFinal[j] = fPowers[i * fNVariables + j] - 1;
    }
    Double_t s = EvalControl(&fPowers[i * fNVariables]);

    // Print the statistics about this function
    if (fIsVerbose) {
      cout << setw(5)  << fNCoefficients << " "
           << setw(10) << setprecision(4) << squareResidual << " "
           << setw(10) << setprecision(4) << dResidur << " "
           << setw(7)  << setprecision(3) << fMaxAngle << " "
           << setw(7)  << setprecision(3) << s << " "
           << setw(5)  << i << " "
           << setw(10) << setprecision(4)
           << fOrthCoefficients(fNCoefficients-1) << " "
           << setw(10) << setprecision(4)
           << fOrthFunctionNorms(fNCoefficients-1) << " "
           << flush;
      for (j = 0; j < fNVariables; j++)
        cout << " " << fPowers[i * fNVariables + j] - 1 << flush;
      cout << endl;
    }

    if (fNCoefficients >= fMaxTerms /* && fIsVerbose */) {
      fParameterisationCode = PARAM_MAXTERMS;
      break;
    }

    Double_t err  = TMath::Sqrt(TMath::Max(1e-20,squareResidual) /
                                fSumSqAvgQuantity);
    if (err < fMinRelativeError) {
      fParameterisationCode = PARAM_RELERR;
      break;
    }

  }

  fError          = TMath::Max(1e-20,squareResidual);
  fSumSqResidual -= fError;
  fRMS = TMath::Sqrt(fError / fSampleSize);
}


//____________________________________________________________________
 void TMultiDimFit::MakeRealCode(const char *filename,
                                const char *classname,
                                Option_t *)
{
  // PRIVATE METHOD:
  // This is the method that actually generates the code for the
  // evaluation the parameterization on some point.
  // It's called by TMultiDimFit::MakeCode and TMultiDimFit::MakeMethod.
  //
  // The options are: NONE so far
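  //
  // A sketch of how the generated, non-method code might be used
  // (illustrative; assumes MakeCode wrote a file declaring a free
  // function MDF):
  //
  //   double x[2] = { 0.5, 1.5 };   // one point in the N variables
  //   double d    = MDF(x);         // evaluate the parameterization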
  Int_t i, j;

  Bool_t  isMethod     = (classname[0] == '\0' ? kFALSE : kTRUE);
  const char *prefix   = (isMethod ? Form("%s::", classname) : "");
  const char *cv_qual  = (isMethod ? "" : "static ");

  ofstream outFile(filename, ios::out);
  if (!outFile) {
    Error("MakeRealCode","couldn't open output file '%s'",filename);
    return;
  }

  if (fIsVerbose)
    cout << "Writing on file \"" << filename << "\" ... " << flush;
  //
  // Write header of file
  //
  // Emacs mode line ;-)
  outFile << "// -*- mode: c++ -*-" << endl;
  // Info about creator
  outFile << "// " << endl
          << "// File " << filename
          << " generated by TMultiDimFit::MakeRealCode" << endl;
  // Time stamp
  TDatime date;
  outFile << "// on " << date.AsString() << endl;
  // ROOT version info
  outFile << "// ROOT version " << gROOT->GetVersion()
          << endl << "//" << endl;
  // General information on the code
  outFile << "// This file contains the function " << endl
          << "//" << endl
          << "//    double  " << prefix << "MDF(double *x); " << endl
          << "//" << endl
          << "// For evaluating the parameterization obtained" << endl
          << "// from TMultiDimFit and the point x" << endl
          << "// " << endl
          << "// See TMultiDimFit class documentation for more "
          << "information " << endl << "// " << endl;
  // Header files
  if (isMethod)
    // If these are methods, we need the class header
    outFile << "#include \"" << classname << ".h\"" << endl;

  //
  // Now for the data
  //
  outFile << "//" << endl
          << "// Static data variables"  << endl
          << "//" << endl;
  outFile << cv_qual << "int    " << prefix << "gNVariables    = "
          << fNVariables << ";" << endl;
  outFile << cv_qual << "int    " << prefix << "gNCoefficients = "
          << fNCoefficients << ";" << endl;
  outFile << cv_qual << "double " << prefix << "gDMean         = "
          << fMeanQuantity << ";" << endl;

  // Assignment to mean vector.
  outFile << "// Assignment to mean vector." << endl;
  outFile << cv_qual << "double " << prefix
          << "gXMean[] = {" << endl;
  for (i = 0; i < fNVariables; i++)
    outFile << (i != 0 ? ", " : "  ") << fMeanVariables(i) << flush;
  outFile << " };" << endl << endl;

  // Assignment to minimum vector.
  outFile << "// Assignment to minimum vector." << endl;
  outFile << cv_qual << "double " << prefix
          << "gXMin[] = {" << endl;
  for (i = 0; i < fNVariables; i++)
    outFile << (i != 0 ? ", " : "  ") << fMinVariables(i) << flush;
  outFile << " };" << endl << endl;

  // Assignment to maximum vector.
  outFile << "// Assignment to maximum vector." << endl;
  outFile << cv_qual << "double " << prefix
          << "gXMax[] = {" << endl;
  for (i = 0; i < fNVariables; i++)
    outFile << (i != 0 ? ", " : "  ") << fMaxVariables(i) << flush;
  outFile << " };" << endl << endl;

  // Assignment to coefficients vector.
  outFile << "// Assignment to coefficients vector." << endl;
  outFile << cv_qual << "double " << prefix
          << "gCoefficient[] = {" << flush;
  for (i = 0; i < fNCoefficients; i++)
    outFile << (i != 0 ? "," : "") << endl
            << "  " << fCoefficients(i) << flush;
  outFile << endl << " };" << endl << endl;

  // Assignment to powers vector.
  outFile << "// Assignment to powers vector." << endl
          << "// The powers are stored row-wise, that is" << endl
          << "//  p_ij = " << prefix
          << "gPower[i * NVariables + j];" << endl;
  outFile << cv_qual << "int    " << prefix
          << "gPower[] = {" << flush;
  for (i = 0; i < fNCoefficients; i++) {
    for (j = 0; j < fNVariables; j++) {
      if (j != 0) outFile << flush << "  ";
      else        outFile << endl << "  ";
      outFile << fPowers[fPowerIndex[i] * fNVariables + j]
              << (i == fNCoefficients - 1 && j == fNVariables - 1 ? "" : ",")
              << flush;
    }
  }
  outFile << endl << "};" << endl << endl;


  //
  // Finally we reach the function itself
  //
  outFile << "// " << endl
          << "// The "
          << (isMethod ? "method " : "function ")
          << "  double " << prefix
          << "MDF(double *x)"
          << endl << "// " << endl;
  outFile << "double " << prefix
          << "MDF(double *x) {" << endl
          << "  double returnValue = " << prefix << "gDMean;" << endl
          << "  int    i = 0, j = 0, k = 0;" << endl
          << "  for (i = 0; i < " << prefix << "gNCoefficients ; i++) {"
          << endl
          << "    // Evaluate the ith term in the expansion" << endl
          << "    double term = " << prefix << "gCoefficient[i];"
          << endl
          << "    for (j = 0; j < " << prefix << "gNVariables; j++) {"
          << endl
          << "      // Evaluate the polynomial in the jth variable." << endl
          << "      int power = "<< prefix << "gPower["
          << prefix << "gNVariables * i + j]; " << endl
          << "      double p1 = 1, p2 = 0, p3 = 0, r = 0;" << endl
          << "      double v =  1 + 2. / ("
          << prefix << "gXMax[j] - " << prefix
          << "gXMin[j]) * (x[j] - " << prefix << "gXMax[j]);" << endl
          << "      // what is the power to use!" << endl
          << "      switch(power) {" << endl
          << "      case 1: r = 1; break; " << endl
          << "      case 2: r = v; break; " << endl
          << "      default: " << endl
          << "        p2 = v; " << endl
          << "        for (k = 3; k <= power; k++) { " << endl
          << "          p3 = p2 * v;" << endl;
  if (fPolyType == kLegendre)
    outFile << "          p3 = ((2 * i - 3) * p2 * v - (i - 2) * p1)"
            << " / (i - 1);" << endl;
  if (fPolyType == kChebyshev)
    outFile << "          p3 = 2 * v * p2 - p1; " << endl;
  outFile << "          p1 = p2; p2 = p3; " << endl << "        }" << endl
          << "        r = p3;" << endl << "      }" << endl
          << "      // multiply this term by the poly in the jth var" << endl
          << "      term *= r; " << endl << "    }" << endl
          << "    // Add this term to the final result" << endl
          << "    returnValue += term;" << endl << "  }" << endl
          << "  return returnValue;" << endl << "}" << endl << endl;

  // EOF
  outFile << "// EOF for " << filename << endl;

  // Close the file
  outFile.close();

  if (fIsVerbose)
    cout << "done" << endl;
}


//____________________________________________________________________
 void TMultiDimFit::Print(Option_t *option) const
{
  // Print statistics etc.
  // Options are
  //   P        Parameters
  //   S        Statistics
  //   C        Coefficients
  //   R        Result of parameterisation
  //   F        Result of fit
  //   K        Correlation Matrix
  //   M        Pretty print formula
  //
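  // A short usage sketch (the object name is illustrative):
  //
  //   fit->Print("ps");   // print parameters and sample statistics
  //
  // The option string is case-insensitive; it is lowered below before
  // being tested.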
  Int_t i = 0;
  Int_t j = 0;

  TString opt(option);
  opt.ToLower();

  if (opt.Contains("p")) {
    // Print basic parameters for this object
    cout << "User parameters:" << endl
         << "----------------" << endl
         << " Variables:                    " << fNVariables << endl
         << " Data points:                  " << fSampleSize << endl
         << " Max Terms:                    " << fMaxTerms << endl
         << " Power Limit Parameter:        " << fPowerLimit << endl
         << " Max functions:                " << fMaxFunctions << endl
         << " Max functions to study:       " << fMaxStudy << endl
         << " Max angle (optional):         " << fMaxAngle << endl
         << " Min angle:                    " << fMinAngle << endl
         << " Relative Error accepted:      " << fMinRelativeError << endl
         << " Maximum Powers:               " << flush;
    for (i = 0; i < fNVariables; i++)
      cout << " " << fMaxPowers[i] - 1 << flush;
    cout << endl << endl
         << " Parameterisation will be done using " << flush;
    if (fPolyType == kChebyshev)
      cout << "Chebyshev polynomials" << endl;
    else if (fPolyType == kLegendre)
      cout << "Legendre polynomials" << endl;
    else
      cout << "Monomials" << endl;
    cout << endl;
  }

  if (opt.Contains("s")) {
    // Print statistics for read data
    cout << "Sample statistics:" << endl
         << "------------------" << endl
         << "                 D"  << flush;
    for (i = 0; i < fNVariables; i++)
      cout << " " << setw(10) << i+1 << flush;
    cout << endl << " Max:   " << setw(10) << setprecision(7)
         << fMaxQuantity << flush;
    for (i = 0; i < fNVariables; i++)
      cout << " " << setw(10) << setprecision(4)
           << fMaxVariables(i) << flush;
    cout << endl << " Min:   " << setw(10) << setprecision(7)
         << fMinQuantity << flush;
    for (i = 0; i < fNVariables; i++)
      cout << " " << setw(10) << setprecision(4)
           << fMinVariables(i) << flush;
    cout << endl << " Mean:  " << setw(10) << setprecision(7)
         << fMeanQuantity << flush;
    for (i = 0; i < fNVariables; i++)
      cout << " " << setw(10) << setprecision(4)
           << fMeanVariables(i) << flush;
    cout << endl << " Function Sum Squares:         " << fSumSqQuantity
         << endl << endl;
  }

  if (opt.Contains("r")) {
    cout << "Results of Parameterisation:" << endl
         << "----------------------------" << endl
         << " Total reduction of square residuals    "
         << fSumSqResidual << endl
         << " Relative precision obtained:           "
         << fPrecision   << endl
         << " Error obtained:                        "
         << fError << endl
         << " Multiple correlation coefficient:      "
         << fCorrelationCoeff   << endl
         << " Reduced Chi square over sample:        "
         << fChi2 / (fSampleSize - fNCoefficients) << endl
         << " Maximum residual value:                "
         << fMaxResidual << endl
         << " Minimum residual value:                "
         << fMinResidual << endl
         << " Estimated root mean square:            "
         << fRMS << endl
         << " Maximum powers used:                   " << flush;
    for (j = 0; j < fNVariables; j++)
      cout << fMaxPowersFinal[j] << " " << flush;
    cout << endl
         << " Function codes of candidate functions." << endl
         << "  1: considered,"
         << "  2: too little contribution,"
         << "  3: accepted." << flush;
    for (i = 0; i < fMaxFunctions; i++) {
      if (i % 60 == 0)
        cout << endl << " " << flush;
      else if (i % 10 == 0)
        cout << " " << flush;
      cout << fFunctionCodes[i];
    }
    cout << endl << " Loop over candidates stopped because " << flush;
    switch(fParameterisationCode){
    case PARAM_MAXSTUDY:
      cout << "max allowed studies reached" << endl; break;
    case PARAM_SEVERAL:
      cout << "all candidates considered several times" << endl; break;
    case PARAM_RELERR:
      cout << "wanted relative error obtained" << endl; break;
    case PARAM_MAXTERMS:
      cout << "max number of terms reached" << endl; break;
    default:
      cout << "some unknown reason" << endl;
      break;
    }
    cout << endl;
  }

  if (opt.Contains("f")) {
    cout << "Results of Fit:" << endl
         << "---------------" << endl
         << " Test sample size:                      "
         << fTestSampleSize << endl
         << " Multiple correlation coefficient:      "
         << fTestCorrelationCoeff << endl
         << " Relative precision obtained:           "
         << fTestPrecision   << endl
         << " Error obtained:                        "
         << fTestError << endl
         << " Reduced Chi square over sample:        "
         << fChi2 / (fSampleSize - fNCoefficients) << endl
         << endl;
    if (fFitter) {
      fFitter->PrintResults(1,1);
      cout << endl;
    }
  }

  if (opt.Contains("c")){
    cout << "Coefficients:" << endl
         << "-------------" << endl
         << "   #         Value        Error   Powers" << endl
         << " ---------------------------------------" << endl;
    for (i = 0; i < fNCoefficients; i++) {
      cout << " " << setw(3) << i << "  "
           << setw(12) << fCoefficients(i) << "  "
           << setw(12) << fCoefficientsRMS(i) << "  " << flush;
      for (j = 0; j < fNVariables; j++)
        cout << " " << setw(3)
             << fPowers[fPowerIndex[i] * fNVariables + j] - 1 << flush;
      cout << endl;
    }
    cout << endl;
  }
  if (opt.Contains("k") && fCorrelationMatrix.IsValid()) {
    cout << "Correlation Matrix:" << endl
         << "-------------------";
    fCorrelationMatrix.Print();
  }
  
  if (opt.Contains("m")) {
    cout << "Parameterization:" << endl
         << "-----------------" << endl
         << "  Normalised variables: " << endl;
    for (i = 0; i < fNVariables; i++) 
      cout << "\ty_" << i << "\t= 1 + 2 * (x_" << i << " - " 
           << fMaxVariables(i) << ") / (" 
           << fMaxVariables(i) << " - " << fMinVariables(i) << ")" 
           << endl;
    cout << endl
         << "  f(";
    for (i = 0; i < fNVariables; i++) {
      cout << "y_" << i;
      if (i != fNVariables-1) cout << ", ";
    }
    cout << ") = ";
    for (Int_t i = 0; i < fNCoefficients; i++) {
      if (i != 0)
        cout << endl << "\t" << (fCoefficients(i) < 0 ? "- " : "+ ") 
             << TMath::Abs(fCoefficients(i));
      else 
        cout << fCoefficients(i);
      for (Int_t j = 0; j < fNVariables; j++) {
        Int_t p = fPowers[fPowerIndex[i] * fNVariables + j];
        switch (p) { 
        case 1: break;
        case 2: cout << " * y_" << j; break;
        default:
          switch(fPolyType) {
          case kLegendre:  cout << " * L_" << p-1 << "(y_" << j << ")"; break;
          case kChebyshev: cout << " * C_" << p-1 << "(y_" << j << ")"; break;
          default:         cout << " * y_" << j << "^" << p-1; break;
          }
        }
                          
      }
    }
    cout << endl;
  }
}


//____________________________________________________________________
 Bool_t TMultiDimFit::Select(const Int_t *)
{
  // Selection method. The user can override this method for
  // specialized selection of acceptable functions in the fit. The
  // default is to select all. This message is sent during the
  // build-up of the function candidates table, once for each set of
  // powers in the variables. Notice that the argument array contains
  // the powers PLUS ONE. For example, to deselect the function
  //     f = x1^2 * x2^4 * x3^5,
  // this method should return kFALSE if given the argument
  //     { 3, 5, 6 }
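  //
  // A sketch of an override in a hypothetical subclass MyFit:
  //
  //   Bool_t MyFit::Select(const Int_t* powers)
  //   {
  //     // Reject terms where the first variable enters with a
  //     // degree above 2 (the entries are the powers PLUS ONE).
  //     if (powers[0] > 3)
  //       return kFALSE;
  //     return kTRUE;
  //   }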
  return kTRUE;
}

//____________________________________________________________________
 void TMultiDimFit::SetMaxAngle(Double_t ang)
{
  // Set the max angle (in degrees) between the initial data vector to
  // be fitted, and the new candidate function to be included in the
  // fit. By default it is 0, which automatically chooses another
  // selection criterion. See also the class description.

  if (ang >= 90 || ang < 0) {
    Warning("SetMaxAngle", "angle must be in [0,90)");
    return;
  }

  fMaxAngle = ang;
}

//____________________________________________________________________
 void TMultiDimFit::SetMinAngle(Double_t ang)
{
  // Set the min angle (in degrees) between a new candidate function
  // and the subspace spanned by the previously accepted
  // functions. See also the class description.

  if (ang > 90 || ang <= 0) {
    Warning("SetMinAngle", "angle must be in (0,90]");
    return;
  }

  fMinAngle = ang;

}


//____________________________________________________________________
 void TMultiDimFit::SetPowers(const Int_t* powers, Int_t terms)
{
  // Define a user function. The input array must be of the form
  //   (p11, ..., p1N, ..., pL1, ..., pLN)
  // where N is the dimension of the data sample, L is the number of
  // terms (given in terms), the first index labels the term, and the
  // second labels the variable. More information is given in the
  // class description.
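  //
  // An illustrative sketch (names are hypothetical): with N = 2
  // variables, the parameterization
  //
  //   D_p(x) = c_1 + c_2 * x_1 * x_2^2
  //
  // corresponds to terms = 2 and
  //
  //   Int_t powers[] = { 0, 0,     // term 1: x_1^0 * x_2^0
  //                      1, 2 };   // term 2: x_1^1 * x_2^2
  //   fit->SetPowers(powers, 2);
  //
  // (one is added to each entry before it is stored below).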

  fIsUserFunction = kTRUE;
  fMaxFunctions   = terms;
  fMaxTerms       = terms;
  fMaxStudy       = terms;
  fPowers         = new Int_t[fMaxFunctions * fNVariables];
  Int_t i, j;
  for (i = 0; i < fMaxFunctions; i++)
    for(j = 0; j < fNVariables; j++)
      fPowers[i * fNVariables + j] = powers[i * fNVariables + j]  + 1;
}

//____________________________________________________________________
 void TMultiDimFit::SetPowerLimit(Double_t limit)
{
  // Set the user parameter for the function selection. The bigger the
  // limit, the more functions are used. The meaning of this variable
  // is defined in the class description.

  fPowerLimit = limit;
}

//____________________________________________________________________
 void TMultiDimFit::SetMaxPowers(const Int_t* powers)
{
  // Set the maximum power to be considered in the fit for each
  // variable. See also the class description.
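  //
  // A short sketch (the object name is illustrative): to allow the
  // first variable up to degree 2 and the second up to degree 3 in a
  // two-variable fit:
  //
  //   Int_t mPowers[] = { 2, 3 };
  //   fit->SetMaxPowers(mPowers);
  //
  // (one is added to each entry internally).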

  if (!powers)
    return;

  for (Int_t i = 0; i < fNVariables; i++)
    fMaxPowers[i] = powers[i]+1;
}

//____________________________________________________________________
 void TMultiDimFit::SetMinRelativeError(Double_t error)
{
  // Set the acceptable relative error for when the sum of squared
  // residuals is considered minimized. For a full account, refer to
  // the class description.

  fMinRelativeError = error;
}


//____________________________________________________________________
 Bool_t TMultiDimFit::TestFunction(Double_t squareResidual,
                                  Double_t dResidur)
{
  // PRIVATE METHOD:
  // Test whether the currently considered function contributes to the
  // fit. See also the class description.


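  // In the default case (fMaxAngle == 0), a candidate is accepted
  // only if its residual reduction dResidur is at least
  // squareResidual / (fMaxTerms - fNCoefficients + 1), i.e. the
  // average reduction each remaining term would have to contribute.
  // Otherwise the candidate is accepted only if its angle to the
  // data vector is at most fMaxAngle, that is, if
  // sqrt(dResidur / fSumSqAvgQuantity) >= cos(fMaxAngle).
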
  if (fNCoefficients != 0) {
    // Now for the second test:
    if (fMaxAngle == 0) {
      // If the user hasn't supplied a max angle, do the test as follows:
      if (dResidur <
          squareResidual / (fMaxTerms - fNCoefficients + 1 + 1E-10)) {
        return kFALSE;
      }
    }
    else {
      // If the user has provided a max angle, test whether the
      // calculated angle is less than the max angle.
      if (TMath::Sqrt(dResidur/fSumSqAvgQuantity) <
          TMath::Cos(fMaxAngle*DEGRAD)) {
        return kFALSE;
      }
    }
  }
  // If we get here, the function is OK
  return kTRUE;
}


//____________________________________________________________________
void mdfHelper(int& /*npar*/, double* /*divs*/, double& chi2,
               double* coeffs, int /*flag*/)
{
  // Helper function for doing the minimisation of Chi2 using Minuit

  // Get pointer  to current TMultiDimFit object.
  TMultiDimFit* mdf = TMultiDimFit::Instance();
  chi2     = mdf->MakeChi2(coeffs);
}
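
// Note: TMultiDimFit::Fit is expected to pass this function to the
// fitter (for example via TVirtualFitter::SetFCN), so that each
// Minuit iteration evaluates MakeChi2 on the current coefficients.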

