you are allowed to assign probabilities to non-repeatable events, e.g. what's the probability there is life on Mars?
I get the impression that believers in the Bayesian Revolution often also subscribe to the many worlds quantum interpretation, which would make P(life on Mars) a repeatable event.
If 80 in 10000 people who read Hacker News click on that link and 80% of them read all of the page, and 9 in 200 of them would previously have voted up the link and 32% of them leave New York at 14:10 travelling at 55mph, what is the probability that one of them is Bruce Willis?
"Bayesianity states that, when you die, Pierre-Simon Laplace examines every single event in your life [...]
Then Laplace takes every event in your life, and every probability you assigned to each event, and multiplies all the probabilities together. This is your Final Judgment - the probability you assigned to your life.
Those who follow Bayesianity strive all their lives to maximize their Final Judgment. This is the sole virtue of Bayesianity. The rest is just math.
One who follows Bayesianity will never assign a probability of 1.0 to anything. Assigning a probability of 1.0 to some outcome uses up all your probability mass. If you assign a probability of 1.0 to some outcome, and reality delivers a different answer, you must have assigned the actual outcome a probability of 0. This is Bayesianity's sole mortal sin. Zero times anything is zero. When Laplace multiplies together all the probabilities of your life, the combined probability will be zero. Your Final Judgment will be doodly-squat, zilch, nada, nil. No matter how rational your guesses during the rest of your life, you'll spend eternity next to some guy who believed in flying saucers and got all his information from the Weekly World News"
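Tongue in cheek, but the arithmetic is real. A toy sketch (the assigned probabilities are invented for illustration):

```python
import math

# Probabilities a hypothetical forecaster assigned to the outcomes
# that actually happened (numbers made up for illustration).
assigned = [0.9, 0.7, 0.99, 0.6]

# The "Final Judgment": the product of every probability you assigned.
final_judgment = math.prod(assigned)
print(final_judgment)   # ≈ 0.374

# The "mortal sin": assigning 1.0 to the wrong outcome means the true
# outcome got 0, and a single zero annihilates the whole product.
sinful = assigned + [0.0]
print(math.prod(sinful))   # 0.0

# In practice you'd sum log-probabilities instead; log(0) = -inf makes
# the same point without numeric underflow over a long life.
log_score = sum(math.log(p) for p in assigned)
```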
Oh, sorry - the prior probability that any given person is Bruce Willis is roughly 1 in 6x10^9.
I can follow the nice, sensible text about how you can take an initial probability of something happening and a binary test which branches into four outcomes: true positives, false positives, true negatives and false negatives. The ratios coming from the four branches describe how accurately the test discerns the two possibilities, and the more accurate the test, the more tightly coupled it is to the thing you are testing for, and the more it drives the initial probability one way or the other.
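That part can be written down in a few lines. A sketch, with made-up test accuracies:

```python
def posterior(prior, p_pos_given_true, p_pos_given_false, test_positive=True):
    """Update a prior using a binary test's two error rates.

    p_pos_given_true  = sensitivity (true positive rate)
    p_pos_given_false = false positive rate
    """
    if test_positive:
        num = prior * p_pos_given_true
        den = num + (1 - prior) * p_pos_given_false
    else:
        num = prior * (1 - p_pos_given_true)
        den = num + (1 - prior) * (1 - p_pos_given_false)
    return num / den

# A sharp test (high TPR, low FPR) drags the prior strongly toward the
# result; a coin-flip test (TPR == FPR) leaves the prior untouched.
print(posterior(0.5, 0.9, 0.1))   # sharp test: 0.5 -> 0.9
print(posterior(0.5, 0.5, 0.5))   # uninformative test: stays at 0.5
```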
"A Plan for Spam" - an initial probability that an email is spam, a test such as "does the email contain 'vi4gra'?", and some probabilities calculated from your email archive: how often spam contains 'vi4gra' and how often it doesn't, and how often ham contains 'vi4gra' and how often it doesn't. With those, you can precisely adjust your probability of an email being spam based on whether it contains that text.
Training it on your email archive means you can judge incoming email never seen before without being unfairly prejudiced against it just because it's new, only judging it on whether it has spammy characteristics - and Bayes' Theorem applied to your previous email _tells you what it means for an email to have spammy characteristics_, is that right?
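Here's roughly what I mean, as a sketch - all the counts and the prior below are invented for illustration, not Graham's actual numbers:

```python
# Counts from a hypothetical email archive (invented numbers).
spam_with_token = 80    # spam messages containing 'vi4gra'
spam_total = 100
ham_with_token = 1      # ham messages containing 'vi4gra'
ham_total = 400

# The archive tells you what 'vi4gra' means:
p_token_given_spam = spam_with_token / spam_total   # 0.8
p_token_given_ham = ham_with_token / ham_total      # 0.0025

prior_spam = 0.4   # fraction of the archive that is spam (invented)

# Bayes' theorem: P(spam | message contains 'vi4gra')
posterior = (p_token_given_spam * prior_spam) / (
    p_token_given_spam * prior_spam
    + p_token_given_ham * (1 - prior_spam)
)
```

A never-before-seen email gets judged purely on its tokens' track records, which is the "no prejudice against new mail" property.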
It seems kind of straightforward, and yet I can't (yet) follow it through numerically so I clearly don't understand it; I may be glossing over big important parts of it. So if 300 people have a 30% probability of having a positive result from a mammography, and 60% of people who don't have a positive result aren't 40% less likely to not have a 1 in 10 chance of already being a winner ... wait, where did the train come into it again?
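For the record, the mammography problem as it's usually stated (numbers as commonly quoted - 1% prevalence, 80% true positive rate, 9.6% false positive rate - which may not match the page's exact figures) works out to about 7.8%:

```python
prior = 0.01    # prevalence: P(cancer)
sens = 0.80     # P(positive | cancer)
fpr = 0.096     # P(positive | no cancer)

# Total probability of a positive result, over both branches.
p_pos = prior * sens + (1 - prior) * fpr

# P(cancer | positive): the true positives as a share of all positives.
posterior = prior * sens / p_pos
print(round(posterior, 3))   # 0.078
```

No trains required: the counterintuitive part is just that the false positives from the huge healthy group swamp the true positives from the tiny sick group.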