Simulating data

I try to publish as much of my work as I can on the internet, wherever the people who own the data agree (my little bit of the website is here), but very often I can’t publish the data itself because of ongoing projects, other publications, various levels of confidentiality, and so on.

So I’ve decided to simulate the data whenever there is an issue with publication. It’s very easy, and it lets me show people my work without any of the problems that come from showing real data. Each variable is simply replaced with a resample (with replacement) of its own non-missing values, so the simulated dataset keeps roughly the right distribution for each variable without any row belonging to a real person. The code is below; it comes in two parts for the first set of variables because we fiddled with the questionnaire a bit after Time 8 in the dataset.


# scramble the block of variables from Service to Therapist for Time 9 onwards:
# each column is replaced with a resample of its own non-missing values
mydatafirst[mydatafirst$Time>8,
  which(names(mydatafirst)=="Service"):which(names(mydatafirst)=="Therapist")] <-
    apply(mydatafirst[mydatafirst$Time>8, which(names(mydatafirst)=="Service"):which(names(mydatafirst)=="Therapist")],
      2, function(x) sample(x[!is.na(x)], length(mydatafirst$Service[mydatafirst$Time>8]), replace=TRUE))

# the same again for Time 8 and earlier, where the questionnaire was slightly different
mydatafirst[mydatafirst$Time<9,
  which(names(mydatafirst)=="Service"):which(names(mydatafirst)=="Therapist")] <-
    apply(mydatafirst[mydatafirst$Time<9, which(names(mydatafirst)=="Service"):which(names(mydatafirst)=="Therapist")],
      2, function(x) sample(x[!is.na(x)], length(mydatafirst$Service[mydatafirst$Time<9]), replace=TRUE))

# scramble the Imp1 and Best1 comment variables across all time points
mydatafirst[c(which(names(mydatafirst)=="Imp1"), which(names(mydatafirst)=="Best1"))] <-
  apply(mydatafirst[c(which(names(mydatafirst)=="Imp1"), which(names(mydatafirst)=="Best1"))], 2, function(x)
    sample(x[!is.na(x)], length(mydatafirst$Imp1), replace=TRUE))
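Since the same resampling trick is used three times, it could be rolled into a little helper function. Here’s a sketch of the same technique (resampleCols is a made-up name; this isn’t what the report actually uses):

resampleCols <- function(df, rows, cols) {
  # replace each chosen column, within the chosen rows, with a resample
  # (with replacement) of its own non-missing values
  block <- df[rows, cols, drop=FALSE]
  df[rows, cols] <- apply(block, 2, function(x)
    sample(x[!is.na(x)], nrow(block), replace=TRUE))
  df
}

cols <- which(names(mydatafirst)=="Service"):which(names(mydatafirst)=="Therapist")
mydatafirst <- resampleCols(mydatafirst, mydatafirst$Time > 8, cols)
mydatafirst <- resampleCols(mydatafirst, mydatafirst$Time < 9, cols)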

Thanks to the magic of R, and in particular the Sweave and Brew packages, all I need to do is insert these lines into the code and re-run the report, and I have a nicely simulated dataset. I must confess I didn’t use R to convert the comments to gibberish; it was easier to download them from here, but if that website didn’t exist then I certainly could have done it in R very easily.
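For anyone who hasn’t met Sweave: the report source is just LaTeX with R code chunks embedded in it, so swapping the real data for simulated data means changing nothing but the chunk that loads the data. A minimal sketch of the idea (the file names and variables here are invented, not taken from the real report):

% report.Rnw -- running Sweave("report.Rnw") in R produces report.tex
% with the results and figures filled in
\documentclass{article}
\begin{document}

<<echo=FALSE>>=
mydatafirst <- read.csv("survey.csv") # swap this line for the simulation code above
@

There were \Sexpr{nrow(mydatafirst)} responses in the dataset.

<<fig=TRUE, echo=FALSE>>=
barplot(table(mydatafirst$Service)) # figures are generated and inserted automatically
@

\end{document}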

Something else R and Sweave are really helping me with at the moment is making it possible to start analysing data and compiling reports before the data comes in. Because Sweave automatically puts together the statistics and graphs as I go along, I’m freed up to just work on the data and share progress with people as it arrives, and then put the final analysis together when all the data is collected, without having to manually re-write all the statistics and copy and paste all the graphs all the way through. I’ll post about the usefulness of Sweave and how it helps with workflow another time.

Fun with wordclouds

As always, I’m late to this party, and wordclouds have come under fire in recent times, e.g. here: drewconway.com. From my point of view they’re eye-catching, and I hope that putting them on a website or in a report might make people linger and look in more detail at the other pieces of data and visualisation. That’s all I’m going to say for now; I’m sure I’ll talk again about what’s attractive to data scientists and statisticians versus what’s attractive to the general public, but let’s leave it for now.

I’m looking at interesting ways of presenting the patient survey (see previous post) at the moment, and I thought I would have a go at a wordcloud. Thanks to the wonderful people producing packages for R (the tm and wordcloud packages; many thanks to both!), it’s easy. I nicked a bit of code from another blog (thanks, One R tip a day!) and pretty soon I had my own. It shows the things people like about our services, drawn from two areas of the Trust.

Here’s the code:


mydata <- subset(mydatafirst, Var1==0) # slice the data by area

mycorpus <- Corpus(DataframeSource(data.frame(mydata[!is.na(mydata$Best), 34]))) # make a text corpus for the tm package out of a data frame, removing missing values at the same time

mycorpus <- tm_map(mycorpus, removePunctuation) # remove all the punctuation
mycorpus <- tm_map(mycorpus, tolower) # make everything lower case
mycorpus <- tm_map(mycorpus, function(x) removeWords(x, c(stopwords("english"), "none", "ive", "dont", "etc"))) # remove words without meaning, many of which are provided by the stopwords("english") function

# these next steps take the corpus and turn it into a dataframe ready for the wordcloud function

tdm <- TermDocumentMatrix(mycorpus)
m <- as.matrix(tdm)
v <- sort(rowSums(m), decreasing=TRUE)
d <- data.frame(word = names(v), freq=v)

par(mfrow=c(1,2))

wordcloud(d$word, d$freq, c(3.5,.5), 2, 100, TRUE, .15, terrain.colors(5), vfont=c("sans serif","plain"))

# ...same steps again for the other area

wordcloud(d2$word, d2$freq, scale=c(4,.5), 2, 100, TRUE, .15, terrain.colors(5), vfont=c("sans serif","plain"))

I adjusted the scale, which you can see in the third parameter of the wordcloud function, by eye (making the second area’s cloud a bit larger). There’s probably a better way; I’ll investigate further as I look more at what to do with all this data.
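One possibly better way, which I haven’t tried on this data, would be to derive the scale from the frequencies themselves, so that the two clouds are sized on a common footing rather than by eye:

# untested sketch: size each cloud relative to the biggest word count in either area
ref <- max(c(d$freq, d2$freq))
wordcloud(d$word, d$freq, scale=c(5*max(d$freq)/ref, .5), min.freq=2,
  max.words=100, random.order=TRUE, rot.per=.15, colors=terrain.colors(5))
wordcloud(d2$word, d2$freq, scale=c(5*max(d2$freq)/ref, .5), min.freq=2,
  max.words=100, random.order=TRUE, rot.per=.15, colors=terrain.colors(5))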

Here’s the word cloud:

[word cloud image: the things people like about our services, one cloud for each of the two areas]

Not bad for a few hours’ work! I’m hoping it will draw people in to look at more of the reporting that we do, at any rate. Let me know what you think of it, and of word clouds generally, in the comments.

The patient survey: past, present and future

Within my organisation we have something known as the Service User and Carer Experience Survey, often abbreviated to the SUCE or, in more natural spoken English, the patient survey. It’s a chance for the users of our services to tell us about them: there are Likert-type “ticky box” questions about specific aspects of care, as well as an open section in which they can suggest an area for improvement and tell us something they like about the service. Results are published quarterly, and the reporting is a major challenge: we report on about 100 teams, all organised into larger directorates and divisions, with each level of the hierarchy having its own type of report. I’ve been involved right from the start, and the story of the process reads like a How-Not-To-Do-It guide gradually turning into a How-To guide.

At first I was responsible for all the data entry and reporting, and in my naïveté I did everything manually. I made a page in Microsoft Excel which automatically generated frequency tables from the raw data and then fed those values into pie charts (yes, I know: juiceanalytics.com/writing/the-problem-with-pie-charts, but I’m afraid many people find them accessible, possibly through familiarity, or just their overall “friendly” appearance). I made the bar graphs in SPSS. The whole thing took ages, and it was done quarterly. The first step on the road to getting the computer to shoulder this massive burden was writing some R (The R project) code to produce the graphs. It was very easy to do, even though I was a real novice back then, and here it is:


# select the current team, denoted x, and put its most recent data into Subset.P for the pie charts. Put the team's data from all time points into Subset.B, which will be used to draw the bar charts

Subset.P=subset(mydata, Code==x & Time==8)
Subset.B=subset(mydata, Code==x)

# add labels to each variable, and calculate the percentages in each category

Service.P=table(factor(Subset.P$Service, levels=1:6, labels=c("Very poor", "Poor", "Fair", "Good", "Very good", "Excellent")))
Service.P=Service.P/sum(Service.P)
names(Service.P) <- paste(names(Service.P), "-", round(Service.P*100), "%", sep="")

# this bit fixes the colours and steps through each category removing the colours from the empty categories. Without this step the colours aren't consistent if there are any empty categories

attributes(Service.P)$cols <- rainbow(6)
if(0 %in% Service.P){
    ind <- which(Service.P == 0)
    Service.P <- Service.P[-ind]
    attributes(Service.P)$cols <- rainbow(6)[-ind]
}

# and so on for each question

Comm.P=table(factor(Subset.P$Communication, levels=1:6, labels=c("Very poor", "Poor", "Fair", "Good", "Very good", "Excellent")))
Comm.P=Comm.P/sum(Comm.P)
names(Comm.P) <- paste(names(Comm.P), "-", round(Comm.P*100), "%", sep="")
attributes(Comm.P)$cols <- rainbow(6)
if(0 %in% Comm.P){
    ind <- which(Comm.P == 0)
    Comm.P <- Comm.P[-ind]
    attributes(Comm.P)$cols <- rainbow(6)[-ind]
}

#... I've edited the repetitions of the above out to save space
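# aside: all this repetition could be rolled into a function along these
# lines -- just a sketch, and pieTable is a made-up name, not what the
# original report used

pieTable <- function(x, labels=c("Very poor", "Poor", "Fair", "Good", "Very good", "Excellent")) {
  tab <- table(factor(x, levels=1:length(labels), labels=labels))
  tab <- tab/sum(tab)
  names(tab) <- paste(names(tab), "-", round(tab*100), "%", sep="")
  keep <- tab != 0 # drop the empty categories, keeping the colours consistent
  cols <- rainbow(length(labels))[keep]
  tab <- tab[keep]
  attributes(tab)$cols <- cols
  tab
}

# e.g. Service.P <- pieTable(Subset.P$Service)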

# draw all the charts in a 3 by 2 grid ready for copying straight into the report (the last graph is drawn later)

par(mfrow=c(3,2))

pie(Service.P, clockwise=TRUE, main = c("Service quality"), radius=1, cex=1.1, init.angle=45, col=attr(Service.P, "cols"))
pie(Comm.P, clockwise=TRUE, main = c("Communication"), radius=1, cex=1.3, init.angle=45, col=attr(Comm.P, "cols"))
pie(Ldign.P, clockwise=TRUE, main = c("Dignity and respect"), radius=1, cex=1.3, init.angle=45, col=attr(Ldign.P, "cols"))
pie(Involved.P, clockwise=TRUE, main = c("Involved in care"), radius=1, cex=1.3, init.angle=45, col=attr(Involved.P, "cols"))
pie(Improved.P, clockwise=TRUE, main = c("Improved life"), radius=1, cex=1.3, init.angle=45, col=attr(Improved.P, "cols"))

# take the Subset.B data, which contains all data for each team across each time point, as opposed to the Subset.P which is just this quarter's data for the pie charts

# also rescale the scores so that every question has a maximum of 5
# (some questions are rated out of 6 and others out of 3)

Subset.B$Service=Subset.B$Service*5/6
Subset.B$Communication=Subset.B$Communication*5/6
Subset.B$Ldign=Subset.B$Ldign*5/3
Subset.B$Involved=Subset.B$Involved*5/3

# produce a matrix, ybar, which will hold the mean responses for each
# question at 5 time points: the previous year as a whole and the four quarters since

ybar=as.data.frame(matrix(data=NA, nrow=5, ncol=5))
names(ybar)=c("Apr - Mar 10", "Apr - Jun 10", "Jul - Sept 10", "Oct - Dec 10", "Jan - Mar 11")

# construct each row from the means at Time < 5, Time == 5, Time == 6, and so on

ybar[1,] = c(mean(subset(Subset.B, Time<5)$Service, na.rm=TRUE),
             mean(subset(Subset.B, Time==5)$Service, na.rm=TRUE),
             mean(subset(Subset.B, Time==6)$Service, na.rm=TRUE),
             mean(subset(Subset.B, Time==7)$Service, na.rm=TRUE),
             mean(subset(Subset.B, Time==8)$Service, na.rm=TRUE))

ybar[2,] = c(mean(subset(Subset.B, Time<5)$Communication, na.rm=TRUE),
             mean(subset(Subset.B, Time==5)$Communication, na.rm=TRUE),
             mean(subset(Subset.B, Time==6)$Communication, na.rm=TRUE),
             mean(subset(Subset.B, Time==7)$Communication, na.rm=TRUE),
             mean(subset(Subset.B, Time==8)$Communication, na.rm=TRUE))

ybar[3,] = c(mean(subset(Subset.B, Time<5)$Ldign, na.rm=TRUE),
             mean(subset(Subset.B, Time==5)$Ldign, na.rm=TRUE),
             mean(subset(Subset.B, Time==6)$Ldign, na.rm=TRUE),
             mean(subset(Subset.B, Time==7)$Ldign, na.rm=TRUE),
             mean(subset(Subset.B, Time==8)$Ldign, na.rm=TRUE))

ybar[4,] = c(mean(subset(Subset.B, Time<5)$Involved, na.rm=TRUE),
             mean(subset(Subset.B, Time==5)$Involved, na.rm=TRUE),
             mean(subset(Subset.B, Time==6)$Involved, na.rm=TRUE),
             mean(subset(Subset.B, Time==7)$Involved, na.rm=TRUE),
             mean(subset(Subset.B, Time==8)$Involved, na.rm=TRUE))

ybar[5,] = c(mean(subset(Subset.B, Time<5)$Improved, na.rm=TRUE),
             mean(subset(Subset.B, Time==5)$Improved, na.rm=TRUE),
             mean(subset(Subset.B, Time==6)$Improved, na.rm=TRUE),
             mean(subset(Subset.B, Time==7)$Improved, na.rm=TRUE),
             mean(subset(Subset.B, Time==8)$Improved, na.rm=TRUE))

# remove all the empty bits, to prevent the graph containing ugly holes

empty <- sapply(ybar, function(col) all(is.nan(col))) # time periods with no data at all
if(any(empty)){
    ybar <- ybar[, !empty, drop=FALSE]
}

# draw the graph, the enormous ylim value is to give room at the top for the legend 

barplot(as.matrix(ybar), beside=T, col=rainbow(5), ylim=c(0,9), yaxt = "n")
axis(2, at = 0:5)
legend("top", legend=c("Service quality", "Communication", "Dignity and respect",
                       "Involved with care", "Improved life"), fill=rainbow(5), bty="n", cex=1, ncol = 2 )

# now manually change the value of x to go to the next team and re-run the code

This greatly sped up the process of producing the graphs, but it still left a lot of the process in human hands: figuring out which teams were reporting that quarter, running the code, pasting the graphs, counting the responses, and so on. As I’ve grown more confident I’ve handed more and more of these tasks to the computer and saved more and more time. I’ll show some more of the steps in subsequent posts, leading up to where we are now and the plans for the future.
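The most obvious first step is to stop changing x by hand and let R loop over the teams instead. Here’s a sketch of the idea, assuming the per-team code above has been wrapped in a function (teamReport is a made-up name, as is the output filename):

teams <- unique(mydata$Code[mydata$Time == 8]) # every team with data this quarter

for (x in teams) {
  png(paste("team_", x, ".png", sep=""), width=800, height=1200) # hypothetical output file
  teamReport(x) # the pie chart and bar chart code above, as a function of x
  dev.off()
}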