This week, tragedy unfolded off the coast of Sicily. A pre-dawn squall arose, spinning a waterspout toward a luxury sailing yacht anchored outside the port of Palermo. Within moments the yacht, named Bayesian, capsized in a freak accident, and seven lives were lost. My heart and condolences go out to all of their families and loved ones.
The host of the gathering was UK entrepreneur Michael Lynch, who perished in the wreck. Michael was described by friends and colleagues as a genius: caring, brilliant, passionate, remarkable, inspiring, legendary, and a gift to the world of business. High praise.
By all accounts, Michael was obsessed with the science of probability and on the leading edge of applying conditional probabilities. Hence the name of his sailboat — Bayesian.
What is Bayes Theorem?
Bayes Theorem describes the relationship between conditional probabilities: it tells you how to update the probability of an outcome as new evidence arrives.
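In symbols, for a hypothesis A and a new piece of evidence B, the theorem reads:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$

That is, the probability of A given that B was observed equals the probability of observing B if A were true, times the prior probability of A, divided by the overall probability of observing B.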
Sometimes we intuit conditional probabilities accurately. For example, if it’s sunny outside, the probability of slipping on the sidewalk is low, whereas if the sidewalk is coated in icy sleet, the probability of slipping is high. Common sense, right? But our minds are not always so reliable at tracking conditional probabilities and arriving at an accurate probability for an outcome.
This article on Better Explained does a nice job of illuminating Bayes Theorem. Here’s an excerpt.
First, let’s establish foundational Bayesian concepts:
1 – Tests are not the event. A cancer test, for example, is separate from the event of actually having cancer.
2 – Tests are flawed. Tests detect things that don’t exist (false positive), and miss things that do exist (false negative).
3 – Even science is a test. At a philosophical level, scientific experiments are potentially flawed tests. There is a test for a phenomenon, and there is the event of the phenomenon itself. Our tests and measuring equipment have a rate of error to be accounted for.
Bayes Theorem converts the results from your test into the real probability of the event. For example, you can:
1 – Correct for measurement errors. If you know the real probabilities and the chance of a false positive and false negative, you can correct for measurement errors.
2 – Relate the actual probability to the measured test probability. For example, given mammogram test results and known error rates, you can predict the actual chance of having cancer given a positive test.
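In the mammogram example, the event is having cancer and the evidence is a positive test. Writing Bayes Theorem for that case, and expanding the denominator over the two ways a test can come back positive (a true positive or a false positive), gives:

$$P(\text{cancer} \mid +) = \frac{P(+ \mid \text{cancer})\,P(\text{cancer})}{P(+ \mid \text{cancer})\,P(\text{cancer}) + P(+ \mid \text{no cancer})\,P(\text{no cancer})}$$

This is the formula the numbers below get plugged into.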
Applying Bayes Theorem often produces surprising results. For example, let’s assume the following statistics about the accuracy of breast cancer testing:
1% of women have breast cancer (and therefore 99% do not).
80% of mammograms detect breast cancer when it is there (and therefore 20% miss it).
9.6% of mammograms detect breast cancer when it’s not there (and therefore 90.4% correctly return a negative result). [Note, see below.]
So, if you get a positive test result, what’s the chance you have breast cancer?
Is it 80% since the test is 80% accurate?
Or even 90% since 90% of the time the test accurately returns a negative result?
Nope. If you get a positive test result, your chance of having cancer is only 7.8%.
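To double-check that arithmetic, here is a minimal Python sketch that plugs the three assumed stats into the formula above (the variable names are mine, purely for illustration):

```python
# Assumed stats from the example above
p_cancer = 0.01               # 1% of women have breast cancer
p_pos_given_cancer = 0.80     # 80% of mammograms detect cancer when it is there
p_pos_given_healthy = 0.096   # 9.6% of mammograms "detect" cancer when it's not there

# Overall chance of a positive test (true positives + false positives)
p_positive = (p_pos_given_cancer * p_cancer
              + p_pos_given_healthy * (1 - p_cancer))

# Bayes Theorem: chance of actually having cancer, given a positive test
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_positive

print(f"{p_cancer_given_pos:.1%}")  # 7.8%
```

The intuition behind the surprise: only 1% of women have cancer, so the 9.6% false-positive rate applied to the healthy 99% produces far more positive results than the true positives do, and a positive test is still much more likely to be a false alarm than a real detection.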
That’s the power of conditional probabilities, deciphered via Bayes Theorem.
I wish I had met Michael Lynch. He leaves behind a legacy of breakthrough insights and a character honored by those closest to him. Rest in peace to Michael and to the guests and crew of the Bayesian.
[Note: When I originally published this article, I left out the word “not” in the following assumption: “9.6% of mammograms detect breast cancer when it’s not there (and therefore 90.4% correctly return a negative result).” Of course, that little word is essential! I excerpted the example from Better Explained, where the assumption was stated correctly. Clearly my copy-paste skills need a little improvement!]