
Weighting with fit chi-square or taking the midpoint? October 6, 2006

Posted by dorigo in mathematics, physics, science.

Imagine you have n background models B_i, with i = 1…n (all equally reasonable) for a mass distribution, and the data contains mostly background but includes a small amount of an additional signal of unknown mass M.

You take each background model B_i in turn and fit the data to the sum of background plus a signal template which depends on the unknown value of M. Each fit will return a preferred value M_i, its statistical error S_i, and a chi-square X_i, which measures how well the data are explained by the hypothesis of a signal of mass M on top of the i-th background.
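(For concreteness, a minimal sketch of what one such fit could look like – not the actual analysis code, just an illustration assuming binned data, a Gaussian signal template, and some callable background shape:)

```python
# Illustrative sketch only, not the real analysis: fit a binned mass
# spectrum to the i-th background model plus a Gaussian signal template,
# returning the preferred mass M_i, its error S_i, and the chi-square X_i.
import numpy as np
from scipy.optimize import curve_fit

def fit_one_model(centers, counts, background_i):
    """centers/counts: histogram bins; background_i: callable shape (hypothetical)."""
    def model(x, n_sig, n_bkg, M, width):
        sig = np.exp(-0.5 * ((x - M) / width) ** 2)
        bkg = background_i(x)
        return n_sig * sig / sig.sum() + n_bkg * bkg / bkg.sum()

    sigma = np.sqrt(np.maximum(counts, 1.0))       # Poisson bin errors
    p0 = [0.05 * counts.sum(), counts.sum(), centers.mean(), 5.0]
    popt, pcov = curve_fit(model, centers, counts, p0=p0, sigma=sigma)

    resid = (counts - model(centers, *popt)) / sigma
    return popt[2], np.sqrt(pcov[2, 2]), np.sum(resid ** 2)   # M_i, S_i, X_i
```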

An example is given below, where the various values of X_i are plotted on the y axis, at a value proportional to the mass M_i on the x axis (it is the infamous b-jet energy scale, the ratio between the measured and true mass of a resonance decaying to b-jets):

Now, how do you choose the right value M_i of M among the n possibilities, if all the fits are reasonably good? And how do you assign a systematic error to your measurement due to the arbitrariness of the background model?

One solution would be to take the weighted average of the n values M_i, where the weights are the factors exp(-X_i). I have seen that done somewhere. But for some reason I do not like it too much. It looks as if it sort of underestimates the systematic error due to the arbitrariness of the background model.
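(In code, that recipe is just a weighted average – a sketch; note that a strictly likelihood-motivated weighting would use exp(-X_i/2) rather than exp(-X_i):)

```python
import numpy as np

def weighted_mass(M, X):
    """Combine the fit results M_i with weights exp(-X_i), as described above."""
    M, X = np.asarray(M, float), np.asarray(X, float)
    w = np.exp(-(X - X.min()))                     # subtract min(X) for numerical stability
    w /= w.sum()
    m_hat = np.sum(w * M)
    syst = np.sqrt(np.sum(w * (M - m_hat) ** 2))   # weighted RMS as a systematic
    return m_hat, syst
```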

My gut feeling says you need to give each background model an equal chance of representing nature, and thus you cannot rule any of them out by looking at the combined behavior of the other models. Following this line of reasoning brings me to think that defining the parameter estimate as the midpoint of the range spanned by the M_i, and the systematic error on the parameter as some sort of “maximum spread”, is the way to go.
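(The midpoint prescription, in the same sketchy style:)

```python
def midpoint_mass(M):
    """Midpoint of the range of the M_i, with half the maximum spread
    as the background-model systematic."""
    lo, hi = min(M), max(M)
    return 0.5 * (lo + hi), 0.5 * (hi - lo)
```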

Any thoughts on the subject? Examples of how the matter is dealt with in specific measurements? Your contribution is welcome.

Comments

1. Julien - October 6, 2006

I think it would have been nice to at least cite the name of the author of this plot …

I would also like to stress that this is a very *preliminary* plot, and it should probably not even be posted here… eh Tommaso?😉

2. dorigo - October 6, 2006

Sure… The author of the very colorful plot is Julien Donini!

But I do not think I need to write anything about the preliminary nature of the data contained in the plot. Since I am not saying what the plot describes, I run no risk whatsoever of breaking the non-disclosure agreements of my collaboration. Note I did not even mention, in the post, which experiment the data refer to…

Cheers,
T.

3. Ambitwistor - October 6, 2006

Put prior probabilities on each of the B_i (you imply a uniform distribution), do Bayesian model averaging, compute a marginal posterior distribution for M, and if you prefer, apply some point estimator (such as the mean or maximum a posteriori).
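A minimal sketch of that recipe, under simplifying assumptions: a flat prior over the B_i and a Gaussian approximation to each fit’s likelihood for M, so that each model contributes a Gaussian of mean M_i and width S_i, weighted by its approximate evidence exp(-X_i/2):

```python
import numpy as np

def model_averaged_posterior(m_grid, M, S, X):
    """Marginal posterior for M on a grid: a mixture of Gaussians,
    one per background model, weighted by exp(-X_i/2)."""
    M, S, X = (np.asarray(a, float) for a in (M, S, X))
    w = np.exp(-0.5 * (X - X.min()))               # relative model evidences
    w /= w.sum()
    post = np.zeros_like(m_grid, dtype=float)
    for wi, Mi, Si in zip(w, M, S):
        post += wi * np.exp(-0.5 * ((m_grid - Mi) / Si) ** 2) / (Si * np.sqrt(2 * np.pi))
    return post   # point estimate: e.g. m_grid[post.argmax()] (MAP) or the mixture mean
```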

4. dorigo - October 6, 2006

Hi ambitwistor,

this is funny – I understand what you mean, but written the way you have, it reminds me of that funny t-shirt:
Subtract infinity, add heavy fermions, set all fermion masses to zero, invent another symmetry, throw it on the lattice, blame it on the Planck scale, recall the success of the SM, invoke the Anthropic Principle, wave hands a lot, speak with a strong accent, manipulate the data….

Cheers – and thank you for your useful, if a bit laconic, suggestion!
T.

5. Markk - October 9, 2006

It sounds like you don’t trust the chi-squared values. Your suggestion essentially throws them away. Correct? Why?

6. dorigo - October 9, 2006

Hi Markk,

indeed, taking the half-spread of the values does throw away the chi-square information – although I am pre-selecting only those fits whose probability is larger than 5%.
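(The pre-selection is just a cut on the chi-square probability of each fit; a sketch, where ndof is the number of degrees of freedom:)

```python
from scipy.stats import chi2

def select_good_fits(results, ndof, p_min=0.05):
    """Keep only fits whose chi-square probability exceeds p_min.
    results: list of (M_i, S_i, X_i) tuples (illustrative layout)."""
    return [r for r in results if chi2.sf(r[2], ndof) > p_min]
```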

The idea is to assume that one of the background models is correct. If that is the case, the others will be wrong by a variable amount. Now, I have no means to determine which is the correct model, since all of those I selected (which are a subset of those in the plot above, BTW) are reasonably good fits. And so, it feels like cheating to use the chi-square of a wrong model to determine an error on the right one. Sure, the error I am determining is of a systematic nature, one I will then label as “background model uncertainty”.

And then there is an additional piece of information I hid in the post (to avoid making it too boring). All my background models are slightly correlated with each other. So if I am to use their chi-square information, I will have to do some acrobatic math…

Hmmm, the more I think about it the more I am confused. Suggestions are welcome.

7. Gordon Watts - October 10, 2006

Don’t trust the chi2 -> all background models are equally valid and get equal weight. And you’re stuck; you can’t extract any more info. You note that one of the background models is the correct one (you hope), but by removing the chi2 you’ve erased the ability of the data to tell you which one it is likely to be.

Average the M_i’s and then use the spread as a systematic. I’m not sure you need to use the max spread (unless you have a very small number of background models) — fit the M_i to a Gaussian and use its width, perhaps?
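Something like this, sketched assuming the M_i can be treated as a plain sample (their RMS is what a Gaussian fit would approximate):

```python
import numpy as np

def mean_and_spread(M):
    """Plain mean of the M_i, with the sample RMS as the
    background-model systematic (instead of the max spread)."""
    M = np.asarray(M, float)
    return M.mean(), M.std(ddof=1)
```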

To improve the result you’ll have to figure out how to understand the chi2.

Nice b-jet energy scale. I’ll show it around DZERO and see what they think.😉

8. dorigo - October 10, 2006

Hi Gordon,

yes, it’s kind of awkward. But I may have an alternative way to compute the background systematics. If it works, I can indeed use the chi-square values above, since they will only be telling me about a different systematic uncertainty, derived from the arbitrariness of my choices for the background modeling.

…And sure, go ahead and show the plot around – oh, I forgot to tell you, it’s only pseudoexperiments😉

Cheers,
T.

