Causing a stir

Today’s homepage on the BBC’s website questions whether breast cancer screening does more harm than good (see here for the article). However, the article draws on data that can easily mislead if not handled carefully, and when people’s health is at stake, that is dangerous.

Consider the example below. The numbers are rounded, to make the maths easier, but are close – and more than adequately demonstrate the effect.

For any woman, the probability of having breast cancer is 1 per cent.

Of those who have cancer, screening will give a positive test result 90% of the time (10% of tests are incorrectly identified as negative).

Of those who do not have cancer, screening will still give a positive (a ‘false positive’) test result 9% of the time (91% of these tests correctly identify that cancer is not present).

So, with my maths teacher hat on, I ask the question “if a woman tests positive to a breast cancer screening, what is the (approximate) probability she has cancer?”

  1. 90%
  2. 80%
  3. 10%
  4. 1%

Take a moment to think through your answer…

[if this were a maths lesson, I would leave the question at this point for a while, to give the students an opportunity to figure out the answer; meanwhile, I might suggest that most gynaecologists would intuitively suggest either A or B, ref here]

The question can be answered by working in natural frequencies.

The article on the BBC refers to 2000 women, so I have referred to the same number of people here.

We ‘know’ that 1% of them have cancer, i.e. 20 people. Of those, we would expect 90% (18 people) to test positive when screened.

Of the 1980 people who don’t have cancer, we would still expect 178 (9% of 1980, rounded) to test positive when screened (these are the ‘false positives’).

In total, this means that 196 (18 + 178 = 196) women test positive, 18 of whom have cancer.

And 2 cancer sufferers would go undetected.

So the answer to the question is about 10%. 18 out of 196 is 9.18% (to 3 significant figures).
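The natural-frequency arithmetic above can be sketched in a few lines of Python, using the figures from the post (2000 women, 1% prevalence, 90% detection rate, 9% false-positive rate):

```python
# Natural-frequency walk-through of the screening example.
women = 2000
with_cancer = round(women * 0.01)                # 20 women have cancer
true_positives = round(with_cancer * 0.90)      # 18 of them test positive
missed = with_cancer - true_positives           # 2 go undetected
without_cancer = women - with_cancer            # 1980 women without cancer
false_positives = round(without_cancer * 0.09)  # 178 false positives (9% of 1980, rounded)

total_positives = true_positives + false_positives  # 196 positive tests in total
p_cancer_given_positive = true_positives / total_positives

print(f"{p_cancer_given_positive:.2%}")  # about 9.18%
```

Running this reproduces the ~9% figure: only 18 of the 196 positive results belong to women who actually have cancer.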

The point I am making here is that it is very easy to get lost amongst all of the numbers. The media are very good at whipping this up and scaring the reader. But, when talking about people’s health, it is of paramount importance to be as clear and honest as possible.

It’s not like this is even new news. The BBC did the maths 30 months ago (see here), albeit with simpler numbers than used in this post. But after all, what use is an article whose headline is “Breast cancer screening under review” other than to generate a bit of scaremongering?

For information, the conditional-probability maths involved here is Bayes’ Theorem.
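For completeness, the same answer falls straight out of Bayes’ Theorem applied to the probabilities themselves, without rounding to whole numbers of women: P(cancer | positive) = P(positive | cancer) × P(cancer) / P(positive), where P(positive) comes from the law of total probability.

```python
# Bayes' Theorem applied to the figures in the post.
p_cancer = 0.01              # prevalence
p_pos_given_cancer = 0.90    # detection rate (sensitivity)
p_pos_given_healthy = 0.09   # false-positive rate

# Law of total probability: P(positive) over both groups.
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)

# Posterior: P(cancer | positive).
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"{p_cancer_given_pos:.2%}")  # 9.17%
```

The exact value (about 9.17%) differs fractionally from the 9.18% above only because the natural-frequency version rounds to whole women along the way.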

Update 30th October 2012: The New York Times published this article, echoing the confusion, on 29th October 2012.


The recent referendum on AV, which caused much debate, provides many mathematical opportunities, and this video shows what happens in the event of a tie.

Personally, I would like to see a tie decided by rock-paper-scissors. This version claims to learn your behaviour and subsequently beat you – note that it is more likely to beat you if you play for a long time (whatever the definition of ‘long’ is). The image below shows how to defeat a human opponent, and maybe this should be shown to both candidates before playing.


So, how about this for a potential framework for a lesson on experimental probability:

(i) estimate the experimental probability of winning a game against a human
(ii) estimate the experimental probability of winning a game against a human when one player has seen a ‘strategy’ (i.e. has seen the image)
(iii) estimate the experimental probability of winning a game against a human when both players have seen the same ‘strategy’ (or indeed a different strategy)
(iv) estimate the experimental probability of winning a game against a computer
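For step (iv), students could even build a toy ‘learning’ opponent themselves: count the human’s past throws and play whatever beats their most frequent move. This is a hypothetical sketch of one simple approach — the post does not describe how the linked site’s computer actually works:

```python
import random
from collections import Counter

# Which move beats which: paper beats rock, etc.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class FrequencyBot:
    """Plays the move that beats the opponent's most common throw so far."""

    def __init__(self):
        self.history = Counter()

    def observe(self, opponent_move):
        """Record one of the opponent's throws."""
        self.history[opponent_move] += 1

    def throw(self):
        """Counter the opponent's most frequent move; play randomly with no data."""
        if not self.history:
            return random.choice(list(BEATS))
        most_common = self.history.most_common(1)[0][0]
        return BEATS[most_common]

# A player who favours rock will increasingly be met with paper.
bot = FrequencyBot()
for move in ["rock", "rock", "scissors", "rock"]:
    bot.observe(move)
print(bot.throw())  # "paper", since rock is the opponent's most frequent throw
```

This also illustrates the post’s point about playing for a long time: the longer the game goes on, the more history the bot has, and the better its counter-strategy becomes against any predictable human.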