Make statistics sexy

I’ve been enjoined to make statistics sexy. It’s really taking root in my brain. There are no easy answers in statistics; it’s a long, hard road. It’s rigorous, and honest, and that’s why I love it. Frank Harrell’s talk at NHS-R typified this. The price of truth can be very high, like binning all the data from your £1m study instead of doing what many do, which is slicing it into so many pieces that something looks shiny and publishing that.

At the same time there’s a lot of hype in ML and AI. There are serious-minded people doing ML properly and making a real difference, but there are also a lot of private companies selling the NHS snake oil in a new, fancy bottle, giving them overfit, janky, ungeneralisable models, or even just selling them a pile of promises and a dashboard and leaving them with nothing.

How can the “I think you’ll find it’s more complicated than that” brigade compete with the “Our data science team showed that this model increased patient flow by 17%” crowd? (But please don’t ask about the pre-existing trend, or the Hawthorne effect, or the other 8 models they deployed that didn’t do much at all.)

I truly have no idea, but I’m going to have a good go at finding out. Rest assured if I discover the answer I shall write it here first 😃

Data saves lives- and so does open code

I’m reading Data Saves Lives. And look, it’s that thing that I say all the time, the thing I have pinned on my Twitter profile, the thing I’ve heard time and time again for a decade that everybody just stands around nodding at while nothing actually changes. I quote:

“Working in the open

Public services are built with public money, and so the code they are based on should be made available for digital pioneers across the health and care system, and those working with it, to reuse and build on.

Analysts should be encouraged to think from the outset of a project about how work can be shared, or consider ‘coding in the open’ for example, through use of open notebook science. This will include sharing of technical skills and domain knowledge through sites like Cross Validated and StackOverFlow, and sharing code and methodology through platforms like GitHub, will build high quality analytics throughout the system

Our commitments:

  • we will begin to make all new source code that we produce or commission open and reusable and publish it under appropriate licences to encourage further innovation (such as MIT and OGLv3, alongside suitable open datasets or dummy data) (end of 2021)”

My team does this, proudly. Does yours?

Pay and reward

I’m reading No Rules Rules, the Netflix management book. It’s really good, with lots of interesting stuff in there about doing things differently, and although clearly there’s a lot I can’t do in the NHS, there are lessons in there for sure.

I’ve just read the bit about paying people top of market rate and I’m so happy because it describes loads of people who are on $250,000, crazy money, moping around and leaving because they can get a 40% pay rise somewhere else.

How does 40% on top of $0.25m make you any happier? It’s a nonsense. But the people I work with and recruit don’t chop and change for money. They do what they do because it’s meaningful, and because they believe in the founding principles of the NHS.

So although I can’t pay top of market I don’t need to, either. I just need to ask people to do useful stuff, in the open, for everyone, and to do it their way with whatever support they need. And pay them properly, of course.

That I can do ❤️

That is, in fact, my jam 🎸🎸🎸

Graphs > tables

One phenomenon that I find very strange in the NHS (and elsewhere, probably; I’ve never worked anywhere else) is the obsession people have with tables of numbers instead of graphs. I have encountered it absolutely everywhere. People really want to know whether something is 21.4% of the whole or 19.2% of the whole, and they can’t tell by looking at the beautiful graph that you’ve drawn.

I saw an analysis today which had nine separate tables of proportions. I’m going to go out on a limb and say no human being can understand a thing of such complexity. Nine tables, each with three categories, 27 proportions given. You could fit the whole thing on one graph and it would be readily apparent how they compare with each other.
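
As an illustration of how those 27 proportions could go on a single graph, here is a quick sketch with {ggplot2}. The data and question names are entirely made up; it just shows the shape of the thing.

```r
library(ggplot2)

# Made-up data: nine "tables", each with three categories (27 proportions)
set.seed(42)
df <- expand.grid(
  question = paste("Question", 1:9),
  category = c("A", "B", "C")
)
df$proportion <- runif(nrow(df), 0.05, 0.3)

# One faceted bar chart replaces nine separate tables of proportions
p <- ggplot(df, aes(x = category, y = proportion)) +
  geom_col() +
  facet_wrap(~ question) +
  scale_y_continuous(labels = scales::percent) +
  labs(x = NULL, y = NULL)
p
```

With everything on one set of axes, “A is about twice C everywhere except question 7” jumps out in a way that it never does from nine tables.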

But no, people want to know whether it’s 13% or 15%, even though in almost all cases the stated precision far exceeds what the sampling error of the data can support.

Your report needs to say “category A is found twice as often as C, whereas A and B are similar”. Not “category A is found 17.6% of the time, whereas C is found 9.2% of the time; category B, on the other hand, is found 19.5% of the time”. Just writing it is exhausting me, never mind trying to understand it from cold in a meeting.

There are of course rare exceptions to this rule; sometimes you really need to know that something is 13.5% of the whole. But you should be asking yourself more questions. How reliable is the measure? What is the sampling error associated with this estimate? Otherwise your 13.5% is 14.6% is 12.3%. And who is usually the one saying this, if anyone? Me!
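
To put a number on the sampling error point, here is a hypothetical example in R: a survey where 27 of 200 respondents (13.5%) fell into category A.

```r
# Hypothetical survey: 27 of 200 respondents fell in category A, i.e. 13.5%
ci <- prop.test(27, 200)$conf.int
round(ci * 100, 1)
# The 95% interval runs from roughly 9% to 19%, so arguing about
# whether it's "really" 13% or 15% is arguing inside the noise
```

Even with 200 responses, a headline figure of 13.5% is compatible with anything from about 9% to about 19%, which is exactly why quoting it to one decimal place misleads.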

Stop punishing people with data

NHS data people know all about Goodhart’s law. It was first stated by Goodhart as the not-very-catchy

Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes

It was then popularised by Strathern as

When a measure becomes a target, it ceases to be a good measure

So far, so ordinary. What else do we all know? League tables are a bad idea too. Somewhat related to Goodhart’s law, the problem with league tables is that they often ignore nuance. "Bad" schools aren’t bad, they just have an intake that starts way behind. "Bad" surgeons aren’t bad, they just take on the more complex cases or work in a geographical location where there are more complex cases. There are ways of trying to balance out this bias, "value added" in the case of schools (being married to a teacher, I hear a lot about this one), and case mix adjustment in the case of surgeons. Although these methods go some way to ironing out problems there is always the chance that an unmeasured variable is affecting the results and distorting the picture.

In my own area of work, patient experience data, league tables are a terrible idea because the services are all so different and they serve so many different people. Older people are, on average, more satisfied with healthcare. People who are detained under the Mental Health Act for what can be 10 years are, unsurprisingly, often less positive about their healthcare.

I would like to add another law of data to the canon.

The more data is used as a punishment, the less engaged those it punishes will be with all data

I’ve read so many beautifully written and persuasive arguments about the power of data, all, naturally enough, written by True Believers. People who can see how data can be used to inform service delivery and planning. If I’m honest I think I live in an echo chamber, filled with analysts and technical types, all passionate about the insights data can generate. But sadly the reality outside my little bubble is that mostly people have data done "to" them. Every quarter a manager or public body drops out of the sky and asks them for all these numbers.

They’re punished if the numbers are late. They’re punished if the numbers are wrong.

And they’re punished if the numbers don’t show them in a good light even if everybody knows there’s a perfectly good explanation as to why the data looks like that (thinking back to value added and case mix adjustment above).

They get it all done, feeling harried, feeling that the way the data is collected and scrutinised is unfair and makes their team or department look bad; they cross their fingers and pray that it doesn’t come back with red pen on it three days later, and then they forget about it for another quarter.

And the really sad thing is that these people have questions that could be answered with data. Everybody makes suppositions and hypotheses all the time, it’s impossible not to, and with help and understanding they could refine their ideas and be more effective at their job.

But data has crashed through their window like a rabid dog and made a mess of the place, and it never occurs to them to take that same dog to the park to see if it can help them find the jacket they left there last week.

They’re just glad to close the door and lock it and forget about the whole ordeal until the next quarter.

I think we do need to keep promoting the value of data and analytics, and I obviously spend a decent chunk of my time either selling people that dream or (more often) fighting Python dependencies on a server to make that dream a reality.

But I think it’s just as important to stop beating people with data, to try to work with them to understand why their numbers look the way they do, and to try to make all processes that involve data more to do with learning and less to do with punishing people with an unfair analysis of what really goes on in their department.

Shoddy data

You know, naming no names because I’ll get in trouble but someone somewhere has paid for some data from a proprietary vendor and they’re shipping absolutely unusable garbage data.

They won’t fix it because “no-one else has complained”.

HAVE SOME PRIDE IN WHAT YOU’RE DOING. How about that? How about fixing it because it’s an embarrassment? I’m a random idiot in some random NHS Trust in the countryside and I couldn’t sleep if my database looked like what you’re sending.

I swear on my life the NHS could do 99% of this stuff better itself if we just dug in and had a go. Everyone’s in thrall to these people with expensive watches and glossy brochures but it’s all a confidence trick.

Managing data science teams in the trenches

We’re systematically devaluing management and leadership in the analyst space, and I encounter a lot of people who think that a day with no code written is a day wasted. I do write code, but I do a lot of other valuable stuff too. I’ve written this post as someone who is very new to managing people, and it is only my personal opinion, so just like my posts on proxied SSL with Apache, there is your caveat emptor 🙂

The people I manage like to get in the zone and write code. I think of it as like they’re digging a trench. They’re not getting out of the trench, they’re not sticking their head out, they’re just digging. And it’s efficient, they’re digging fast, and it’s rewarding too. They’re working on interesting problems and solving them and it feels good. My job is just to stick my head in the trench occasionally.

“What are you digging there? Oh yes, I like it, nice. You know, you could use that other tool over there on that bit. And actually, have you thought about trying this other technique? Let me show you, pass me the chisel. Okay, great. Let me know if you get stuck”

next trench… “Hey, that looks good, what is it? I like this trench, I like this bit. You know, you’re actually going the wrong way, though, you’ve gone off at an angle. Come up here and I’ll show you. See that trench over there? We’re trying to get there. Just jump back in and I’ll help you get it pointing in the right direction”

next trench… “Hey, this trench is taking ages, isn’t it?! We need to finish it soon, really. I think we maybe need to not finish some bits of it for now. Which are the really important bits, do you think? Okay, let’s keep this bit. This other bit, I know you love it, and I love it too. It’s really clever what you’ve done. But we can’t ship on time with it. I think we need to come back to it later, or maybe just chalk it up to experience and maybe we can do it next time”

This is pretty basic stuff, really. It feels a bit silly to have to spell it out. The point I’m trying to make is that the people getting in the trench occasionally are helping the people in the trench stay in the trench and that’s good for them and good for the project. And that’s my test for all the stuff that we do. Code review. Standup. Appraisals. Hack days. Overall, is it helping them stay in the trench, digging quickly and solving problems, or is it just a silly distraction?

I’ve never in my life had a manager who has the slightest idea what autoregression is or how to test the assumptions of OLS regression. I’m really happy now that I do manage people and that I do know what those things are, and I hope the people I manage can benefit from my understanding of what they’re doing (even if they usually know way more about it than I do- I can at least keep up if they explain it to me).

(if you’ve read my previous post and you’re paying attention you’ll realise that I’ve just contradicted myself by saying firstly that analysts should do more than just write code all day, and then said in this post that a manager’s job is to help analysts write code all day. Forgive me. In reality the people in the trench have got other people in the trench, or possibly in a neighbouring trench, and it’s their job to stick their head in that trench and help that person write code instead of needing to get out of their trench, before coming back to their own trench. Indeed, really, the trench visitor is in fact in their own trench, and someone else sticks their head in there occasionally, but the analogy gets a bit convoluted and silly at this point. I think you get the idea, even if it doesn’t translate to actual trenches within trenches within trenches)

What do we want from senior analysts in the NHS?

I’ve been meaning to write this for ages and I’ve just had a rather silly conversation on Twitter about managers and leaders in the NHS which has inspired me to get on with it. I think most people are agreed that we have A Problem with managers and leaders in the analytic space. Much ink has been spilled on the subject, and I think that the two main conclusions of all this ink are broadly accepted by everyone. They are, briefly:

1: Managers often lack the analytical training and experience to critically engage with the data and analyses with which they are presented (or indeed, to do analyses well themselves) and this leads to decision by anecdote and hunch.

2: Analysts lack a clear career track that rewards them for developing their analytic skills rather than for becoming managers, unlike, say, surgeons, who can be very successful by just being very good surgeons and aren’t expected to become managers (although there’s a bit of nuance even in there that I’m getting to, so stick with me if you’re thinking I’m talking rubbish already)

I’m not really going to talk about point number one except to say that in my view part of the solution to this problem is to have analysts in more senior positions, including at board, and by that I mean actual analysts, not managers of analysts. Many people with much more knowledge and experience (and credibility) than me have said this already, so you didn’t really come here to read that. All this does relate to point number two, however, which I will talk about now.

Having advanced some way in my career as an analyst, I can say that my experience absolutely fits with the general point that people make about analyst careers. Quite honestly, there wasn’t really a career structure that I fitted into when I finished my PhD in 2009, and I’ve sort of bobbed around doing different stuff. I’ve never looked at a job and thought “That’s the job for me, I’m applying for that”. I’ve pretty much just made it up as I went along.

The thing I’m not sure about, however, is this idea that we need to reward analysts just for their analytical skills. Sometimes when I talk to people about this I get the sense that they’re promoting the idea that we’ll have these Python geniuses just sitting in a box doing linear algebra and deep learning and advancing up the career track like that. I think that misunderstands what we want from analysts in the NHS. I believe the likes of Google and Facebook do pay some very, very clever people to just work on really high-end statistical and computational tasks, and they use the methods that these people produce to increase revenue. I think we in the NHS look at that and imagine that we can make it work for us.

But the NHS is not Google, and most analytical teams are far too small to support an individual working that way. There may be scope to employ people in that capacity in the national bodies, which are much larger; I don’t claim to know much about that. But speaking as someone who works in a provider trust and who has worked with people all over the country in trusts and CCGs, I’m pretty confident in my belief that we actually want to reward people for doing other stuff than just clever Python and machine learning.

We do want to employ people with high end technical skills. Source control. Statistics. Machine learning. But once they’re good at that stuff we want more from them. I don’t want them sitting in a box writing code all day. That is a much too narrow definition of working effectively. They need to be training. Mentoring. Doing high level design. Understanding the local, regional, and national landscape. Writing and winning funding applications. Even if they’re not line managing anybody they need to be aware of what more junior colleagues are doing, helping them prioritise and understand their work, and managing projects so they’re delivered on time and to spec.

And therein lies the heart of what I consider to be the mythology around this issue. This stuff is hard. Universities are churning out people who can write Python for data science at a rate of knots now. We’ve got it the wrong way round. Yes, those people at Google are terribly clever, and I couldn’t do what they do in a thousand years. But this stuff is hard too. Teamwork. Mentoring. Communication. Understanding what to do where, with whom, and how. These skills are incredibly valuable. Recruit to and reward technical skills, by all means. But ask yourself if you want more from your team members.

SQL versus analytic tools

From a tweet from my NHS-R buddy John MacKintosh:

“Two schools of thought :
1. Do as much as possible using SQL (joins/ munging/ calculations and analysis) so that your analysis is portable between different BI / Viz tools
2. Do the basics (e.g. joins and simple aggregates) in SQL, calcs and analysis in front end tools”

This is a great question, and I don’t mean to detract from its greatness when I say it is over-simplistic. It’s an important question, and I have a lot to say on the subject, so much so in fact that I’m going to answer a tweet with a blog, which is very on brand for me 😄. When we think of data scientists and data engineers, we tend to think of data engineers providing beautifully normal and rapid data structures and data scientists wrangling them with a mix of SQL and R/Python/whatever. But in my experience it doesn’t really work cleanly that way, and nor should it.

An important factor is the way that the data is organised and documented. People in the NHS often get upset about all the different datasets that we use. A common complaint is that they don’t link up easily. People who don’t work in data think that the Trust has this big database of everything, and you just go and look up the thing you’re interested in, and it’s right there indexed against every other piece of data, and off you go. This notion is hilarious to anybody who actually works with data.

It’s not because the data is bad or because it’s not being looked after properly. It seems to me that databases, far from being perfect Platonic ideals of the Trust, are in fact opinionated. You tend to find that they do the thing that they were originally designed to do very well. The payroll database is really good at paying people. The EHR is really good at, well, being an EHR. Data scientists, by our very nature, always want to do something that the database was not designed to do. That almost seems to me to be an actual definition of data science: taking a database and making it do something else. If your database has already done what you’re doing, chances are you don’t need a data scientist. Just stick one of those auto ML things on it or point Power BI at the cube. Boom. Instant insight.

So what does this all mean for the question? It means that the data engineers and data scientists are on the same team. They’re not just throwing stuff over the wall at each other. The data scientists are customers of the data engineers, and we couldn’t do a thing without them, but we can be smart customers. We might want the data in a different form, sure. We might prefer it if some of the joins were done in the backend to save us a bit of work and to help us all work together. But we can give stuff back. We might use an algorithm to predict the risk of rehospitalisation at 28 days, say. Once we’ve done that we can probably use that calculated value more quickly if it’s productionised in the DB. And if it is productionised in the DB, that means everyone can access it: not just the data scientists but also the data engineers and all of their customers.
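
As a sketch of what “giving stuff back” might look like, the data science side could write its model scores somewhere the engineers can productionise. Everything here is illustrative: an in-memory SQLite database stands in for the Trust DB, and the table and column names are made up.

```r
library(DBI)

# Illustrative only: an in-memory SQLite database stands in for the Trust DB
con <- dbConnect(RSQLite::SQLite(), ":memory:")

# Pretend these are 28-day readmission risk scores from our model
scores <- data.frame(
  patient_id  = c(101, 102, 103),
  risk_28_day = c(0.12, 0.47, 0.08)
)

# Hand the scores back to the database so that everyone can use them,
# not just the data scientists
dbWriteTable(con, "readmission_risk", scores, overwrite = TRUE)
res <- dbGetQuery(con, "SELECT patient_id, risk_28_day FROM readmission_risk")
res
dbDisconnect(con)
```

Once the scores live in a table, the data engineers can schedule the scoring, index it, and serve it to every downstream report, which is the whole point.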

Data scientists and data engineers working together can take a database that does one thing and turn it into a database that does two things. And they can turn that database into something that does three things. And the whole time the data engineers are keeping it all fast, scalable, legible, and accurate. It’s early days for us, working this way. I’ve read about it many times, but we’re working with our friends in data engineering and I hope one day to tell our story and how we’ve all worked together to produce lots of insights and delivered them all around the Trust.

To answer the question, then, I would say that if you want to go fast, use analytics-based tools, and if you want to go together, take the time to port all of the insight to SQL-based tools. And given the state of analytics in the NHS as it stands today I would recommend we all go as fast and as together as possible. I know we at Nottinghamshire Healthcare are.

Building generic Shiny applications with a data module

We’re rebuilding our patient experience dashboard at the moment, partly to incorporate some of the work that we’re doing on text mining and partly to connect it up to other types of information we have, like staff experience and clinical outcomes. It has to be reusable because we’re using it as the basis of the text mining dashboard that we’re building for any provider trust to use with their friends and family test data. We’re trying to make everything reusable anyway, partly because the different bits of the NHS should all cooperate and produce stuff together that everyone can use, and partly because we’re realising that when you make code reusable the first person who can reuse it is you, 6 months later, on a similar project.

So we have a patient experience dashboard that we want to hook up to our other data in our own context, and we have other trusts who want to take what we produce and hook their stuff into it. An obvious approach is to use modules in Shiny and have a data module at the heart of the operation, which can take account of the different datasets available in each context. The next level would be a “summary type” module which would bring together all of the inputs and outputs for a particular type of data: one for patient experience, one for staff experience, etc. And the bottom level would be submodules which would carry out the individual tasks: one module to produce graphical summaries, one to produce downloadable reports, etc. We would use the {golem} package for all the reasons that we normally use {golem} (help with dependencies, packaging, testing, and deployment).

The basic structure would look like this:

  • Data module. This module would be responsible for loading ALL data: patient experience data and any other datasets that the person deploying the application might want (clinical outcomes, staff experience, whatever it might be). This module would contain upload and download features as appropriate (users may wish to upload their own data or download processed data), but in the main it would merely draw from databases, process the data into a useful form, and prepare it to be imported into the other modules
  • Function module. Each of these modules is responsible for a particular type of data (staff, patient, clinical outcome). In some trusts the dashboard wouldn’t have access to data of all of the necessary types, and these modules would not show in those contexts.
  • Sub modules. Each function module would have access to submodules which do different tasks: draw graphs, make reports, etc. Because some types of data are quite similar (e.g. staff and patient experience data) there is potential to reuse these modules in some areas.
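
To make the nesting concrete, here is a minimal sketch in {shiny} module terms. All of the names are placeholders, the “data” is just a built-in dataset, and the real thing would be generated with {golem}.

```r
library(shiny)

# Sub-module: one task, e.g. drawing a graph from whatever data it is given
mod_graph_ui <- function(id) plotOutput(NS(id, "plot"))
mod_graph_server <- function(id, data) {
  moduleServer(id, function(input, output, session) {
    output$plot <- renderPlot(plot(data()))
  })
}

# Function module: one type of data (patient experience, staff, ...),
# delegating individual tasks to its sub-modules
mod_patient_ui <- function(id) mod_graph_ui(NS(id, "graph"))
mod_patient_server <- function(id, data) {
  moduleServer(id, function(input, output, session) {
    mod_graph_server("graph", data)
  })
}

# Data module at the heart of it all: in real life this would load and
# process from databases; here a built-in dataset stands in
mod_data_server <- function(id) {
  moduleServer(id, function(input, output, session) {
    reactive(faithful)
  })
}

app_ui <- fluidPage(mod_patient_ui("patient"))
app_server <- function(input, output, session) {
  patient_data <- mod_data_server("data")
  mod_patient_server("patient", patient_data)
}
# shinyApp(app_ui, app_server)
```

The data module hands reactive data down to the function modules, which hand it on to their sub-modules; adding a staff experience function module is then just another `mod_staff_*` pair wired up the same way.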

Although this is a nice approach there are lots of different ways of achieving this and I’m hoping for a bit of guidance as to which might work best.

  1. “Automatic dictator” data module. This would be a data module that works with no instruction from the person deploying the application and which instructs all the other modules. It would simply attempt to load all of the data, catch any errors that result from doing so, report them to the user (in an unobtrusive format: “No staff data found”) and then determine which of the function modules should appear. Further, it would export not only dataframes but also a set of instructions to the function modules as to how to carry out their processing. For example, there may be just one or several free text questions in a patient experience function module. The data module would determine that and would export, alongside the dataframe, a description of the dataset for the use of each function module.
  2. “Automatic” data module. As above, except it would not export instructions, but just data. Function modules would be responsible for understanding the shape of the data and would instruct their submodules accordingly.
  3. “Programmable” data module. This data module would accept a list of options within the run_app() function and would therefore be set up individually within each trust. For example, the run_app() function might be passed list(patient_experience = c(TRUE, 2, 5), staff_experience = c(FALSE)), indicating to draw the patient experience function module with 2 free text questions and 5 Likert-type questions, and to omit staff experience. In practice the options list would need to be longer, but this is just to illustrate the principle. The data module would load and process the data according to the arguments in run_app(). It could then either instruct the function modules as in the automatic dictator module, or leave it up to them as in the automatic module.
  4. “Dumb” data module. This data module would be dumb. It would just load the data, process it, and export it to the other modules. The function modules would be expected to react intelligently to the presence or absence of data and to make sense of its structure. I don’t think this option would work well, because there would be nothing to turn off the function modules, which means there would be blank tabs in the application, but I’m including it for the sake of completeness in case it gives me or anyone else an idea at a later stage.
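
Option 3 might look something like the following in {golem} terms, where run_app() forwards options that the data module reads back. The option names and structure here are purely illustrative.

```r
# In a {golem} app, run_app() can forward arbitrary options:
run_app <- function(...) {
  golem::with_golem_options(
    app = shiny::shinyApp(ui = app_ui, server = app_server),
    golem_opts = list(...)
  )
}

# A deploying trust would then call something like:
# run_app(
#   patient_experience = list(enabled = TRUE, free_text = 2, likert = 5),
#   staff_experience   = list(enabled = FALSE)
# )

# ...and the data module reads the options back inside the server:
app_server <- function(input, output, session) {
  opts <- golem::get_golem_options("patient_experience")
  if (isTRUE(opts$enabled)) {
    # load and process patient experience data according to opts$free_text
    # and opts$likert, then pass it down to the function modules
  }
}
```

The nice property of this pattern is that all of the per-trust configuration lives in one place, the run_app() call, rather than being scattered through the modules.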

Most of the options, except 4, are pretty similar, but they have a lot of scope to change the overall way that the application is programmed. Leaving stuff up to the function modules is probably more flexible, but it may be that all the function modules then need tweaks in different trusts, which is not really what we want. Having an automatic dictator module is very attractive in that you should just be able to seal the whole thing and deploy it anywhere, but the reduced flexibility inherent in that approach may make it more difficult to actually get it working in different contexts. In practice some of it is probably a bit of a grey area anyway: I don’t think you could have a completely dumb data module, and nor could you have a completely dumb function module. Even the submodules will have various degrees of “dumb”. It’s more about getting an idea of where you build the flexibility and how much control you give each module over its behaviour.

I’ve written this blog post to show it to one person, whom I will now ask politely on Twitter for their opinion, but if anybody else has any thoughts I would be very glad to hear them. It’s better to tweet @chrisbeeley because I don’t seem to get email notifications from blog comments for some reason.