
less than 0.1% off the predicted value.


Standard deviations are preferred to percentages, because they give you a better sense of how wrong your model really is.

If we just looked at percentages, nobody would’ve paid any attention to the anomalies that led to general relativity.


Sort of, but I think you are confusing concepts. Here we have a number spat out of an experiment that is compared to a number from theory. In that case it is fair to use a percentage as a comparison: "My best guess from my experiment is x and theory says y ... x compared to y is z%." "I've used a percentage because it is ubiquitous and easily understood."

However, the experiment should have some uncertainty in it that might be quantified, unless it is being performed by $DEITY. We'll also note here that percentages, whilst great for simple number magnitude comparisons, are easily the most abused "statistic tool".

What we really want is an estimate of error for x (your measure by experiment). You might be able to quantify your errors in such a way that you can write them down in terms of standard deviations about a mean, or perhaps not.
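To make that concrete, here is a minimal sketch (with made-up sample values) of how repeated measurements of the same quantity give you a mean, a scatter, and a standard error of the mean that shrinks with more samples:

```python
import statistics

# Hypothetical repeated measurements of the same length, in mm (assumed data).
samples = [99.8, 100.1, 100.0, 99.9, 100.2, 100.0, 99.9, 100.1]

mean = statistics.mean(samples)
# The sample standard deviation describes the scatter of a single measurement;
# the standard error of the mean shrinks as sqrt(n) as you take more samples.
sd = statistics.stdev(samples)
sem = sd / len(samples) ** 0.5
print(f"{mean:.3f} ± {sem:.3f} mm (single-measurement scatter {sd:.3f} mm)")
```

That "± sem" is the error bar you would then compare against the theoretical value.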

Let's look at an example: Using a steel rule, mark a point. The rule is about 1mm thick. Most people will pin the rule with their dominant hand and sight the mark with their dominant eye. What could possibly go wrong?

Your hand naturally works about 30cm/1' laterally from your head. You probably don't know which is your dominant eye, which adds or takes around 15cm/6". So when you naively mark a point with a steel rule your sight line is probably at least 45 degrees off the vertical, which on a 1mm thick rule is something like 0.5mm of error. So you soon learn to position your eye over the work, but which eye? Most people do have a dominant eye for sighting. Point at something. Close one eye and sight at that thing, then close the other eye and sight at the thing with the other eye. The eye that does not make your finger appear to move is your dominant eye - it is the one that you naturally use to aim/sight with.

That 0.5mm doesn't sound much but that is enough to make a woodwork joint look crap. Once you combine it with the thickness of a pencil strike and other factors, you start to appreciate that carpentry can be tricky.

Now let's do some physics where the measures are really minute and you can't simply move your head 6" to limit your errors. Now we are really going to have to do some science.

The principles are the same though: understand your limitations as best you can and then compensate.


I don't really understand how you can use "standard deviations" for a single value. Aren't deviations used across a set of values?


The mass estimate in this story isn't from a single event, it's "based on an analysis of about 4 million W bosons produced at the Tevatron between 2002 and 2011".


So the "measured" mass is not a single value but a set of samples with varying masses? And those samples have a standard deviation from the theoretical value?


As I understand it, the mean of those values is seven standard deviations away from the theoretical value, and you should only expect a result that extreme to occur by chance roughly once per 3.9e11 reruns of the entire experiment.
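For a normal distribution, that "once per 3.9e11 reruns" figure follows from the two-sided tail probability at seven standard deviations, which can be checked with the complementary error function:

```python
import math

def two_sided_p(sigma):
    # Probability that a normally distributed result lands at least
    # `sigma` standard deviations from the mean, in either direction.
    return math.erfc(sigma / math.sqrt(2))

p = two_sided_p(7)
print(f"p = {p:.3g}, i.e. roughly 1 in {1 / p:.2g} repeats")
# Seven sigma gives p ~ 2.6e-12, i.e. about 1 in 3.9e11.
```

Whether the two-sided or one-sided convention applies depends on the analysis, but either way the chance of a seven-sigma fluke is vanishingly small.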

But I would caution that precision is so important to both physics and statistics that what I think I understand may be quite different from what’s actually going on.


No, it's the average value taken from (naturally) slightly varying measurements of (hopefully) the exact same mass. In the standard model, all particles of a given type are assumed to share the same properties, including the same mass, and so far we have no reason to believe otherwise.


Is 100 = 100.1 a hundred percent wrong or 0.1% wrong?


Landing a plane 0.1% short on its journey is not great, say.


Landing a plane is always great. Ask any pilot.


Any landing that you can walk away from is a good landing. Any landing after which you can use the plane again is a great one.

A landing that meets neither of the above two criteria, much less so.

Remember: taking off is optional, but landing is mandatory.


> Remember: taking off is optional, but landing is mandatory.

This depends on how fast your plane can go.


Are you referring to one-way trips to space?


It depends on what your goal was


In a context where your measurement precision is a seventh of that error, that's a huge error (for the purposes of THIS experiment). There might be some experiments where the difference doesn't matter at all, or other experiments where the dependence is linear, so the answer also shifts by 0.1%. But the dependence could just as easily be some complicated nonlinear function which leads to a large discrepancy compared to the measurement precision.
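A tiny sketch (with invented numbers) of why the same offset looks tiny as a percentage but huge in units of the measurement uncertainty:

```python
# Hypothetical values: the uncertainty is assumed to be a seventh of the offset.
predicted = 100.0
measured = 100.1
uncertainty = 0.1 / 7  # assumed measurement precision

pct = abs(measured - predicted) / predicted * 100
sigmas = abs(measured - predicted) / uncertainty
print(f"{pct:.2f}% off, yet {sigmas:.1f} standard deviations away")
```

The percentage is fixed by the two numbers alone; the sigma count depends entirely on how precisely you can measure.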


While it would depend on the tolerance level of the particular situation, generally, 100 = 100,00 is way more wrong than 100 = 100.1

