Fitting

A common and powerful way to compare data to a theory is to search for a theoretical curve that matches the data as closely as possible. You may suspect, for example, that friction causes a uniform deceleration of a spinning disk, so you have gathered data for the angular velocity of the disk as a function of time. If your hypothesis is correct, then these data should lie approximately on a straight line when angular velocity is plotted as a function of time. They won't be exactly on the line because your experimental observations are inevitably uncertain to some degree. They might look like the data shown in the figure below.
Our task is to find the best line that goes through these data. Once we have found it, we would like to know the best values of the line's parameters, the uncertainties in those values, and how well the line actually describes the data.
Associated with each data point is an error bar, which is the graphical representation of the uncertainty of the measured value. We assume that the errors are normally distributed, which means that they are described by the bell-shaped (Gaussian) curve shown in the discussion of standard deviation. The vertical distance from the data point to the top (or bottom) of the error bar is σ, so about 2/3 of the time, the line or curve should pass within one error bar of the data point.
Sometimes the uncertainty of each data point is the same, but it is just as likely (if not more likely!) that the uncertainty varies from datum to datum. In that case the line should pay more attention to the points that have smaller uncertainty. That is, it should try to get close to those "more certain" points. When it can't, we should grow worried that the data and the line (or curve) fundamentally don't agree.
A pretty good way to fit straight lines to plotted data is to fiddle with a ruler, doing your best to get the line to pass close to as many data points as possible, taking care to count more heavily the points with smaller uncertainty. This method is quick and intuitive, and is worth practicing. Here's my attempt to fit a line by eye.
For more careful work, we need a way to evaluate how successfully a given line (or curve) agrees with the data. Each data point sets its own standard of agreement: its uncertainty. We can quantify the disagreement between a point and the line by measuring the (vertical) distance between the point and the line, in units of the error bar for each point. The data point at t = 10 s, for example, is about 1 error bar unit away from the line. It turns out that a very useful way of adding up all the discrepancies [y_{i} − f(x_{i})]/σ_{i} between the line and the data is to square them first. That way, all the terms in the sum are positive, so deviations above and below the line count equally against the fit.
We define the function χ^{2} to be this sum of squares of discrepancies, each measured in units of error bars. Symbolically,

χ^{2} = Σ_{i=1}^{n} [ (y_{i} − f(x_{i})) / σ_{i} ]^{2}
where the sum is over the n data points and f(x) is the equation of the line (or curve) we think models the data. Since it is the sum of squares, χ^{2} cannot be negative. We would like χ^{2} to be as small as possible. As we try different lines, we can calculate χ^{2} for each one. The “best line” is the one with the smallest value of χ^{2}. That is, the best line is the one which has the “least squares.”
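This sum is straightforward to compute. Here is a minimal sketch in Python; the numbers are hypothetical stand-ins, not the actual data from the figure:

```python
import numpy as np

def chi_squared(y, f_of_x, sigma):
    """Sum of squared discrepancies, each measured in units of its error bar."""
    return np.sum(((y - f_of_x) / sigma) ** 2)

# Hypothetical angular-velocity data (illustrative values only)
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0])        # time, s
omega = np.array([99.0, 81.5, 66.0, 54.5, 44.0])  # angular speed
sigma = np.array([1.0, 1.0, 1.5, 1.0, 1.2])       # one error bar per point

# Evaluate a candidate straight line f(t) = a + b*t
a, b = 100.0, -2.8
chi2 = chi_squared(omega, a + b * t, sigma)
```

Trying different values of a and b and keeping the pair with the smallest chi2 is exactly the "least squares" search described above; fitting programs simply automate that minimization.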
Kaleidagraph, Igor, and Origin can perform the operation of finding the line or curve that minimizes χ^{2}. The result of performing this least-squares fit is shown as the red curve in the following figure.
Evidently, my fit-by-eye method was pretty good for the slope, but was off a bit in the offset. According to this fit, the acceleration is 3.10 ± 0.08 bar/s/s, which you can read off the fit results table made by Kaleidagraph. This is pretty neat! The plotting and analysis program found the best-fit line for me, and even estimated the uncertainty of the slope. What could be better?
Well, what about some assessment of the likelihood that these data are really trying to follow a straight line? We may have found the best line, in the sense of the one that minimizes the squared deviations of the data points, but it may well be that the data follow a different curve and so no line properly describes the data.
The value of χ^{2} tells us a great deal about whether we should trust this whole fitting operation. If our assumptions about normal errors and the straight line are correct, then the typical deviation between a data point and the line should be a little less than 1 σ. This means that the value of χ^{2} should be about equal to the number of data points.
Actually, we have to reduce the number of data points N by the number of fit parameters m, because each fit parameter allows us to match one more data point exactly. In the pictured data set, there are 16 data points and 2 fit parameters. We can compute the reduced value of χ^{2}, denoted χ̃^{2}, by dividing χ^{2} by N − m. Hence, we find here that χ̃^{2} = 2.1. This strongly suggests that the data and the line do not agree!
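The reduction is a one-line change to the χ^{2} computation. A minimal sketch:

```python
import numpy as np

def reduced_chi_squared(y, f_of_x, sigma, n_params):
    """chi^2 divided by the number of degrees of freedom, N - m."""
    chi2 = np.sum(((y - f_of_x) / sigma) ** 2)
    dof = len(y) - n_params
    return chi2 / dof

# Small made-up example: four points, two fit parameters.
# Each point is off by exactly one error bar, so chi^2 = 4 and dof = 2.
example = reduced_chi_squared(
    np.array([1.0, 2.0, 3.0, 4.0]),
    np.array([1.5, 2.5, 2.5, 3.5]),
    np.array([0.5, 0.5, 0.5, 0.5]),
    n_params=2,
)
```

A good fit should yield a reduced value near 1; values well above 1 (like the 2.1 found for the straight line) signal that the model and the data disagree.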
How can this be? They look so good together! A good way to look more closely is to prepare a plot of residuals. Residuals are the differences between each data point and the line or curve at the corresponding value of x. Such a plot is shown at the right.
For a reasonable fit, about two-thirds of the points should be within one error bar of the black line at zero. In this fit we can see that several points are considerably more than one standard deviation from the line at zero. The first point is decidedly above the line, and the last point is clearly above the line, too. Almost all the other points are below the line, and a few of them are considerably below, again measured in units of their error bars. Maybe we need a curve that opens up a bit, instead of a line.
On more solid theoretical grounds, if the braking torque (twisting force) is proportional to the rotational speed, then we would expect a speed that decreases exponentially with time. Let's try an exponential curve of the form

ω(t) = ω_{0} e^{−t/τ}

where ω is the angular speed, ω_{0} is its initial value, and τ is the characteristic time of the deceleration. The result of performing such a fit is shown below.
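The same minimization works for curves as for lines. Here is a sketch using scipy's `curve_fit`, with synthetic data generated from assumed parameter values (ω_{0} = 100.2, τ = 24.3, matching the fit results quoted later); the actual data set is not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, omega0, tau):
    """Exponential decay: omega(t) = omega0 * exp(-t / tau)."""
    return omega0 * np.exp(-t / tau)

# Synthetic data: 16 points with normally distributed errors of size sigma
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 16)
sigma = np.full_like(t, 1.0)
omega = model(t, 100.2, 24.3) + rng.normal(0.0, sigma)

# Passing sigma with absolute_sigma=True makes curve_fit minimize chi^2
# and report parameter uncertainties on the same scale as the error bars
popt, pcov = curve_fit(model, t, omega, p0=(100.0, 20.0),
                       sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))  # one-sigma uncertainties on (omega0, tau)
```

The square roots of the diagonal elements of the covariance matrix are the parameter uncertainties that programs like Kaleidagraph display in their fit results tables.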
Does it look a bit better to the eye? Maybe. But it certainly looks better statistically. The value of χ^{2} = 16.3, which means χ̃^{2} = 1.16. It is a little higher than expected, but not alarmingly so. According to the table in Appendix D of An Introduction to Error Analysis, Second Edition, by John R. Taylor, the probability of getting a value of χ̃^{2} that is larger than 1.16 on repeating this experiment is about 31%. That is, slightly more than 2/3 of the time we should expect a value of χ̃^{2} that is smaller than this value. Not perfect, but quite reasonable.
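If you don't have Taylor's table at hand, the same probability can be computed from the χ^{2} distribution's survival function; a sketch using scipy:

```python
from scipy.stats import chi2

# 16 data points and 2 fit parameters leave 14 degrees of freedom
dof = 16 - 2
chi2_value = 16.3  # chi^2 from the exponential fit

# Probability of getting a chi^2 at least this large by chance,
# assuming the model is correct and errors are normal (roughly 0.3)
p = chi2.sf(chi2_value, dof)
```

A very small probability (like the ~1% found for the straight-line fit) means the observed χ^{2} would almost never arise by chance if the model were correct.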
By contrast, the same table gives a probability of only about 1% that the straight-line fit shown above is consistent with the data. It's hard to see by eye that the exponential fit is so much better than the linear fit, but the statistics are unambiguous.
A residual plot also shows a more even distribution of errors. Now about half the points are above the zero line, half below. The end points are still above the line, but not markedly so. The residual plot helps build confidence in our exponential analysis.
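The checks we apply to a residual plot by eye can also be quantified. A minimal sketch, using simulated normal errors to illustrate the expected behavior:

```python
import numpy as np

def normalized_residuals(y, f_of_x, sigma):
    """Residuals measured in units of each point's error bar."""
    return (y - f_of_x) / sigma

def fraction_within_one_sigma(y, f_of_x, sigma):
    """Fraction of points lying within one error bar of the curve."""
    r = normalized_residuals(y, f_of_x, sigma)
    return float(np.mean(np.abs(r) <= 1.0))

# Sanity check with simulated data: for normally distributed errors,
# roughly 68% of points should fall within one error bar of the true curve,
# scattered evenly above and below zero.
rng = np.random.default_rng(42)
f = np.zeros(1000)
sig = np.ones(1000)
y = f + rng.normal(0.0, sig)
frac = fraction_within_one_sigma(y, f, sig)
```

A fraction far below 2/3, or residuals that are systematically positive at the ends and negative in the middle, are the numerical counterparts of the patterns we spotted in the straight-line residual plot.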
Now that we have a fit with a reasonable value for χ^{2}, we can be more confident of the values determined by the fit. These values, and their uncertainties, are shown in the red table of the figure. (I hasten to add that such a means of presenting this information is informal; it is great for lab notebooks and notes, but in a formal presentation of data, such as in a technical report or journal article, such information is removed from the figure and the most important parts are placed in a caption below the figure.) In particular, the deceleration time constant is τ = (24.3 ± 0.7) s and the initial angular speed is (100.2 ± 0.6) bar/s.
Conclusions

Based on the better behavior of the exponential fit, we can conclude that the data are well described by an exponential decay, consistent with a braking torque proportional to the rotational speed.
Shouldn't a truly perfect fit have χ^{2} = 0? Well, each data point is supposed to have some uncertainty, estimated as σ_{i}. It is fantastically improbable that the discrepancy between each point and the curve should vanish. When χ^{2} = 0, it means that you dry-labbed the experiment. Don't even think of trying it!
Thus far we have assumed that the errors in the dependent variable (along the y axis) are normally distributed and random, but that the value of the independent variable is perfect. Quite commonly, the uncertainty in the x value is significant and contributes to the overall uncertainty of the data point. Is there a way to account for this additional uncertainty?
Conceptually it is not too much more difficult to account for uncertainties in both the x and y values. If the x uncertainties dominate, the simplest approach is simply to reverse the roles of the dependent and independent variables. This requires you to invert the functional relationship between x and y, however.
If inverting the function is impossible, or if both x and y uncertainties are significant, you will need to map the x error into an equivalent y error. As shown in the figure, the significance of an x uncertainty depends on the slope of the curve. At point A, where the curve is steep, the x uncertainty is sufficient to make the point agree with the curve. At point B, where it is shallow, the same size x error does not produce agreement.
As shown in the inset with the blue triangle, to map the error in x into an equivalent error in y, you can use the straight-line approximation of the derivative of the fit function at the x value of the data point to compute an effective y error according to

δy_{eff} = |df/dx| δx
However, there is a problem. You don't know the right curve to use to compute the derivative! Sometimes this is a real problem, but frequently you have a pretty good idea, based on the data in the neighborhood of the point, of what the slope of the right curve must be. If that is the case, multiply δx by the slope to produce an effective y uncertainty, δy_{eff}.
If the y uncertainty in the measurement is also appreciable, you can combine δy and δy_{eff} in quadrature, σ = √(δy^{2} + δy_{eff}^{2}), to produce an honest estimate of the actual uncertainty of the data point.
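The mapping and quadrature combination fit in a few lines. A sketch, with made-up numbers for the slope and uncertainties:

```python
import numpy as np

def effective_sigma(dy, dx, dfdx):
    """Combine the y error with the x error mapped through the local slope.

    dfdx is the (estimated) slope of the fit curve at the data point; the
    x error dx contributes an equivalent y error |dfdx| * dx, which is
    combined with dy in quadrature.
    """
    dy_eff = np.abs(dfdx) * dx
    return np.sqrt(dy**2 + dy_eff**2)

# Hypothetical point: estimated slope -4.1 (y units per x unit),
# x uncertainty 0.5, y uncertainty 1.2
sigma_total = effective_sigma(1.2, 0.5, -4.1)
```

With no x uncertainty the result reduces to the plain y error, and on a flat portion of the curve (slope near zero) the x error contributes almost nothing, exactly as the figure's points A and B illustrate.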
Copyright © Harvey Mudd College Physics Department
241 Platt Blvd., Claremont, CA 91711 • 909-621-8024 • http://www.physics.hmc.edu/ • WebMaster (at) physics.hmc.edu
Last modified: 01 October 2014