Munging Patient Opinion data with R

I’ve written a post about using Patient Opinion’s new API, which can be found here. It focuses on how to download your data using R, but it may also be useful as a gentle introduction for those using other languages.

This post focuses on the specific data operations I perform in R to get the data nicely formatted and stored on our server. It may be of interest to those using R who want to do something very similar; much of this code will probably work in a similar context.

The code shown checks for the most recent Patient Opinion data every night and, if there is new data that is not yet on the database, downloads it, processes it, and uploads it to our own database. In the past I downloaded the whole set every night and simply wiped the previous version, but this is no longer possible now that we are starting to add our own codes to the Patient Opinion data (enriching the metadata, for example by improving the match to our service tree all the way down to team level where possible).

First things first, we’ll be using the lappend function, which I stole from StackExchange a very long time ago and use all the time. I’ve often thought of starting a campaign to get it included in base R, but then I realised lots of people must have that idea about other functions and it probably really annoys the base R people, so I just define it whenever I need it. One day I will set up a proper R startup environment and include it there; I use R on a lot of machines and so have always been frightened to open that particular can of worms.


library(httr)
library(RMySQL)

### lappend function

lappend <- function(lst, obj) {

  lst[[length(lst)+1]] <- obj

  return(lst)
}
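
To show what it does, here is a quick made-up usage example (myList and the values added to it are invented purely for illustration): lappend takes a list and an object and returns the list with the object added on the end.

# example usage: each call returns the list with one more element on the end

myList = list()
myList = lappend(myList, "first item")  # list of length 1
myList = lappend(myList, 1:10)          # list of length 2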

Now we store our API key somewhere we can get at it later and query our database to find the date of the most recent Patient Opinion story we already hold. We will issue a dbDisconnect(mydb) right at the end, after we have finished the upload. Note that we add one to the date because we only want stories more recent than those already in the database, and the API searches on dates after and including the given date.

We need to convert this date into a string like “01%2F08%2F2015”, because this is the format the API accepts dates in (%2F is just the URL-encoded forward slash). This example date is the 1st August 2015.


apiKey = "SUBSCRIPTION_KEY qwertyuiop"

# connect to database

theDriver <- dbDriver("MySQL")

mydb = dbConnect(theDriver, user = "myusername", password = "mypassword",
  dbname = "dbName", host = "localhost")

dateQuery = as.Date(dbGetQuery(mydb, "SELECT MAX(Date) FROM POFinal")[, 1]) + 1

dateFinal = paste0(substr(dateQuery, 6, 7), "%2F", substr(dateQuery, 9, 10), "%2F", substr(dateQuery, 1, 4))

Now it’s time to fetch the data from Patient Opinion and process it. We produce an empty list and then add up to 100 stories to it (the maximum number the API will return from a single request), looping round to take the next hundred, and the next hundred, until no more stories are returned (it’s highly unlikely that there would be over 100 stories in a single day, obviously, but there’s no harm in making the code cope with such an eventuality).

The API accepts a skip value, which is an offset into the results in order of appearance, so this is set at 0, then 100, then 200, and so on within a loop to fetch all of the data.


# produce empty list to lappend

opinionList = list()

# set skip at 0 and make continue TRUE until no stories are returned

skip = 0

continue = TRUE

while(continue){

  opinions = GET(paste0("https://www.patientopinion.org.uk/api/v2/opinions?take=100&skip=",
                        skip, "&submittedonafter=", dateFinal),
                 add_headers(Authorization = apiKey))

  if(length(content(opinions)) == 0){ # if there are no stories then stop

    continue = FALSE
  }

  opinionList = c(opinionList, content(opinions)) # add the stories to the list

  # increase skip, and repeat

  skip = skip + 100

}

Having done this, we test whether there are any stories since yesterday. If there are, the length of opinionList will be greater than 0 (that is, length(opinionList) > 0), and we extract the relevant bits we want from each list element, ready to build them into a dataframe.

# if there are no new stories just miss this entire bit out

if(length(opinionList) > 0){

  keyID = lapply(opinionList, "[[", "id")
  title = lapply(opinionList, "[[", "title")
  story = lapply(opinionList, "[[", "body")
  date = lapply(opinionList, "[[", "dateOfPublication")
  criticality = lapply(opinionList, "[[", "criticality")

The service from which each story originates is slightly buried inside opinionList: you need to step through it with lapply, extracting the $links element, then step through the object that results, taking the 5th list element and extracting the element named “id”. This actually makes more sense to explain in code:

# location is inside $links

linkList = lapply(opinionList, "[[", "links")
location = lapply(linkList, function(x) x[[5]][['id']])

# Some general tidying up and we're ready to put in a dataframe

# there are null values in some of these, need converting to NA
# before they're put in the dataframe

date[sapply(date, is.null)] = NA
criticality[sapply(criticality, is.null)] = NA

finalData = data.frame("keyID" = unlist(keyID),
  "Title" = unlist(title),
  "PO" = unlist(story),
  "Date" = as.Date(substr(unlist(date), 1, 10)),
  "criticality" = unlist(criticality),
  "location" = unlist(location),
  stringsAsFactors = FALSE
)

Now we have the data in a dataframe, it’s time to match up the location codes that Patient Opinion use with the codes that we use internally. There’s nothing earth-shattering in the code so I won’t laboriously explain it all, but there may be a few lines in it that could help you with your own data.

### match up NACS codes

POLookup = read.csv("poLookup.csv", stringsAsFactors = FALSE)

### some of the cases are inconsistent, make them all lower case

finalData$location = tolower(finalData$location)

POLookup$NACS = tolower(POLookup$NACS)

PO1 = merge(finalData, POLookup, by.x = "location", by.y = "NACS", all.x = TRUE)

### clean the data, removing HTML tags

PO1$PO = gsub("<(.|\n)*?>", "", PO1$PO)

PO1$Date = as.Date(PO1$Date)

poQuarters = as.numeric(substr(quarters(PO1$Date), 2, 2))

poYears = as.numeric(format(PO1$Date, "%Y"))

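# Time is a sequential quarter index: Q1 2009 = 0, Q2 2009 = 1, and so on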
PO1$Time = poQuarters + (poYears - 2009) * 4 - 1

### write PO to database

# all team codes must be present

PO1 = PO1[!is.na(PO1$TeamC), ]

# strip out line returns

PO1$PO = gsub(pattern = "\r", replacement = "", x = PO1$PO)

The following line is perhaps the most interesting. There are some smileys in our data, encoded in UTF-8, and I don’t know exactly why, but RMySQL is not playing nicely with them. I’m sure there is a clever way to sort this problem out, but I have a lot of other work to do, so I confess I just destroyed them as follows.

# strip out the UTF-8 smileys

PO1$PO = iconv(PO1$PO, "UTF-8", "ASCII", sub = "") # iconv(x, from, to, sub): anything not representable in ASCII is dropped

And with all that done, we can use dbWriteTable to upload the data to the database.

# write to database

dbWriteTable(mydb, "POFinal", PO1[, c("keyID", "Title", "PO", "Date", "Time",
             "TeamC")], append = TRUE, row.names = FALSE)
}
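
Finally, as mentioned near the top, we disconnect from the database now that the upload has finished:

# close the database connection we opened at the start

dbDisconnect(mydb)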

That’s it! All done.

Querying the Patient Opinion API from R

Patient Opinion have a new API out. They are retiring the old API, so you’ll need to transfer over to the new one if you are currently using the old version, but in any case there are a number of advantages to doing so. The biggest one for me is the option to receive results in JSON (or JSONP) rather than XML (although you can still have XML if you want). The old XML output was fiddly and annoying to parse, so personally I’m glad to see the back of it (you can see the horrible hacky code I wrote to parse the old version here). The JSON comes out beautifully using the built-in functions from the R package httr, which I shall be using to download and process the data.

There is a lot more information in the new API as well, including tags, the URL of the story, responses, status of the story (has response, change planned, etc.). I haven’t even remotely begun to think of all the new exciting things we can do with all this information, but I confess to being pretty excited about it.

With all that said, let’s have a look at how we can get this data into R with the minimum of effort. If you’re not using R, hopefully this guide may be of some use to you to show you the general direction of travel, whichever language you’re using.

This guide should be read in conjunction with the excellent API documentation at Patient Opinion which can be found here.

First things first, let’s get access to the API. You’ll need a key which can be obtained here. Get the HTTP one, not the Uri version which only lasts for 24 hours.

We’ll be accessing the API over the HTTP protocol, using the aforementioned httr package to make the requests; there’s a nice introduction to HTTP linked from the httr package here.

Let’s get started


# put the subscription key header somewhere (this is not a real API key, of course, insert your own here)

apiKey = "SUBSCRIPTION_KEY qwertyuiop"

# run the GET command using add_headers to add the authentication header

stories = GET("https://www.patientopinion.org.uk/api/v2/opinions?take=100&skip=0",
              add_headers(Authorization = apiKey))

This will return the stories in the default JSON format. If you prefer XML, which I strongly advise against, fetch the stories like this:

stories = GET("https://www.patientopinion.org.uk/api/v2/opinions?take=100&skip=0",
              add_headers(Authorization = apiKey, Accept = "text/xml"))

For those of you who are using something other than R to access the API, note that the above add_headers() command is equivalent to adding the following to your HTTP header:

Authorization: SUBSCRIPTION_KEY qwertyuiop

You can extract the data from the JSON or XML object which is returned very simply by using:

storiesList = content(stories)

This will return the data in a list. There is more on extracting the particular data that you want in another blog post, parallel to this one, here; for brevity, we can say that you can return all the titles by running:

title = lapply(storiesList, "[[", "title")
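
If you just want to peek at an individual story, the same sort of indexing works one element at a time (the “title” and “body” field names are the ones used in the parallel post above):

# title and text of the first story in the list

storiesList[[1]]$title
storiesList[[1]]$body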

That’s it. You’re done. There’s loads more about what’s available and how to search it in the help pages; hopefully this was a useful intro for R users.

Concatenating text nicely with commas and “and”s

Wow, this blog is quiet at the moment. I can’t even remember if I’ve written this anywhere but I had a liver transplant in May and I’ve been repeatedly hospitalised with complications ever since. So work and life are a little slow. I’m hoping to be back to full health in a few weeks.

Anyway, I was just writing some Shiny code and I have a dynamic text output consisting of one, two, or many pieces of text which needs to have commas and “and”s added to make natural English, like so:

Orange. Orange and apple. Orange, apple, and banana. Orange, apple, banana, and grapefruit.

I really toyed with the idea of just using commas and not worrying about “and”s but I decided that it was only a quick job and it would make the world more attractive. So if you’ve just Googled this, here’s my solution:


theWords = list(LETTERS[1], LETTERS[2:3], LETTERS[4:6])

vapply(theWords, function(x) {

  if(length(x) == 1){

    x = x # Do nothing!

  } else if(length(x) == 2){

    x = paste0(x[1], " and ", x[2])

  } else if(length(x) > 2){

    x = paste0(paste0(x[-length(x)], ", ", collapse = ""),
               "and ", x[length(x)])
  }

}, "A")

Notice the use of vapply, not lapply, because Thou Shalt Use Vapply.

Working as a coder at a technology company versus working as a coder in other contexts

I was just skimming a book about Python and I found a rather interesting quote:

“Programming as a profession is only moderately interesting. It can be a good job, but you could make about the same money and be happier running a fast food joint. You’re much better off using code as your secret weapon in another profession.

People who can code in the world of technology companies are a dime a dozen and get no respect. People who can code in biology, medicine, government, sociology, physics, history, and mathematics are respected and can do amazing things to advance those disciplines.”

Looking at salaries for developers, as well as reading the experiences of coders in technology companies, this really seems to ring true. I’m the other kind: I’ve picked up a bit of code in order to do my day job, which is really focused on collecting and analysing user experience data. And actually I do get respect, and I have been able to do a lot of new things that nobody was doing before I got started.

Quite inspiring words, really. I feel a renewed urge to develop my programming skills within the current role that I have to try to push things even further and will stop daydreaming about what it would be like to be a “real” programmer.

 

 

The NHS, hammers, and spanners, part 27

The scene- a group of NHS managers stand around a bolt fixed to the wall, repeatedly hitting it with a hammer in a futile attempt to remove it, and a valuable painting, from the wall. Every manager is holding a new, shiny, and identical, hammer.
A colleague and I walk in and, wielding a spanner, easily remove it from the wall, and hand the painting to the most senior person in the room. We turn to leave.
“Wait!”
“Yes…?”
*gesturing to a huge bank of paintings on the wall, each affixed with a single bolt* “Can you port what you just did to hammers?”
Without a word, we trudge wearily out.

Open document formats and why we need them

The whole world is moving to open document formats for the obvious reason that in order to read and write documents one shouldn’t need to waste money on Microsoft’s increasingly ungainly Office application when perfectly good free and open source alternatives exist (thinking particularly of the mighty LibreOffice). The UK Government has formally adopted an open standard for all of its documents.

Despite this, many public bodies in the UK, including the NHS, councils, and many other organisations, continue to use Microsoft’s proprietary formats for documents. It may seem to the casual user that the arguments for proprietary versus open document formats are rather arcane. I would like to tell a story from my own life that illustrates why in actual fact every citizen, and certainly every public body, should be interested in open document formats.

My wife has just applied for a job. The application process involved filling out a Microsoft Word document. Naturally, she wanted to appear efficient and professional and so was concerned that if she used LibreOffice the tables within the document would be mangled. We don’t own a copy of Microsoft Word. I consider it to be a waste of money and besides object to funding Microsoft since they have done so much to oppose open standards.

As a consequence, I have spent three hours, which I could have spent working, or looking after my family, or just relaxing from doing either, using a huge pile of technology (a Citrix enabled Word program that I can access through my job, Google’s own document preview function in their email, a .pdf to .docx converter, and LibreOffice) to ensure that the document renders nicely.

This was further complicated by the fact that the document itself, in its original state, was badly formatted, so I had to check whether the bad formatting left in the document really was in the original or whether we had introduced it using LibreOffice.

I’m a computer programmer fluent in R, moderately proficient in PHP/MySQL, and I manage two Linux servers. Not only that, but I have extensive experience with Microsoft Office from my PhD. I actually made the document better.

Not everybody will want to or be able to pay for Microsoft Office. Not everybody will know somebody who can fiddle around with the huge pile of technology necessary to make the form look reasonable. But I think it’s pretty obvious that we don’t want to make job decisions based on an application form’s looking unprofessional because the applicant didn’t have Office and wasn’t married to a geek.

And that, ladies and gentlemen, is why we need open document standards.

Backing up a remote MySQL database on a remote server using cron

This is more a collection of links, really, than a proper blog post, but I just spent quite a while figuring out how all of these different technologies fit together to do the job I wanted, so if you want to do what it says on the tin of this blog post then you should find it useful.

I have a remote server running a MySQL database. I wish to back up this database daily. I would rather do this on my other remote server than on my local machine because, obviously, my server is always switched on whereas my local machine isn’t. So the task is to dump a MySQL database to a file and upload it to a remote server.

To begin, you’re going to make sure that the server with the MySQL database on it can access the other server. This is easiest using ssh key login. I use this for all my servers and all of my local machines. There is a summary here. The beauty of using a key-based login is that it eliminates the need for passwords, which is not only more secure in general but also means you do not need to store passwords in cron scripts or anywhere else on the computer, which is highly insecure and something you should never do unless you really know what you are doing.

Next you’ll need to create a user on your MySQL server and give it SELECT (and LOCK TABLES) privileges so it can read the data to make the backup. This is very simple:

GRANT SELECT, LOCK TABLES ON mydatabase.* TO 'backup'@'localhost' IDENTIFIED BY 'password';

Now you need to set up a cron job to back up the database every day. There’s some stuff about it here, but you won’t need the bit about passwords because we’re using key authentication. As I mentioned above, it’s easier and safer. We’re also going to add some terminal magic to automatically change the file name every day to the current date so you can have daily backups on the server. Here’s the finished cron entry.

0 1 * * * mysqldump -u backup SUCE | ssh chrisbeeley@178.79.158.221 "cat > ~/backupDB/$(date +\%F)_SUCE_DB.sql"

This runs the backup operation every day at 1am.

You’ll need to clean up the old files occasionally, otherwise your server will get full of very old copies of the database. The following command will remove all files from the directory that are older than 31 days old:

find ~/backupDB -type f -mtime +31 -exec rm {} +

BE VERY CAREFUL with this command. If you run it with the wrong directory, or if anything strange goes wrong with it, you may find yourself deleting a lot of files that you did not want to delete. ALWAYS be safe and test by running find ~/backupDB -type f -mtime +31 first and reviewing the files which come back. Only when you’re happy that it’s finding the right files should you add -exec rm {} +

I haven’t added this to a cron job, partly because it doesn’t seem worth the risk (the command is safe in my environment today, but always be wary of PEBKAC, even for yourself), and also because it means I can choose when to flush the old database files, just in case I find some current problem in the database which I can fix with an older version.