One phenomenon that I find very strange in the NHS (and elsewhere, probably; I’ve never worked anywhere else) is people’s obsession with tables of numbers instead of graphs. I have encountered this absolutely everywhere. People really want to know whether something is 21.4% of the whole or 19.2% of the whole, and they can’t tell by looking at the beautiful graph that you’ve drawn.
I saw an analysis today which had nine separate tables of proportions. I’m going to go out on a limb and say no human being can understand a thing of such complexity. Nine tables, each with three categories, 27 proportions given. You could fit the whole thing on one graph and it would be readily apparent how they compare with each other.
But no, people want to know whether it’s 13% or 15%, even though in almost all cases the stated precision far exceeds what the sampling error of the estimate can justify.
Your report needs to say “category A is found twice as often as C, whereas A and B are similar”. Not “category A is found 17.6% of the time, whereas C is found 9.2% of the time; on the other hand, category B is found 19.5% of the time”. Just writing it is exhausting me, never mind trying to understand it from cold in a meeting.
There are of course rare exceptions to this rule; sometimes you really do need to know that something is 13.5% of the whole. But you should be asking yourself more questions: how reliable is the measure? What is the sampling error associated with this estimate? Otherwise your 13.5% could just as easily be 14.6% or 12.3%. And who is usually the one saying this, if anyone? Me!
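To make that concrete, here is a minimal sketch of the sampling-error point, assuming a hypothetical sample of 400 (the source gives no sample size) and the standard Wald interval for a proportion:

```python
import math

def wald_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wald confidence interval for a proportion
    estimated as p from a simple random sample of size n."""
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# Hypothetical: 13.5% observed in a sample of 400
lo, hi = wald_ci(0.135, 400)
print(f"Observed 13.5%, n=400: 95% CI ({lo:.1%}, {hi:.1%})")
# Both 12.3% and 14.6% sit inside this interval, so at this sample
# size the three figures are statistically indistinguishable.
```

The exact interval depends on the real sample size and design, but the point stands: quoting proportions to one decimal place implies a precision the data often cannot support.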