Munging Patient Opinion data with R

I’ve written a post about using Patient Opinion’s new API, which can be found here. It focuses on how to download your data using R, but it may also be useful as a gentle introduction for those using other languages.

This post focuses on the specific data operations I perform in R to get the data nicely formatted and stored on our server. It may be of interest to those using R who want to do something similar, since much of this code will probably work in a comparable context.

The code shown checks for the most recent Patient Opinion data every night and, if there is new data that is not yet on the database, downloads, processes, and uploads it to our own database. In the past I downloaded the whole set every night and simply wiped the previous version, but this is no longer possible now that we are starting to add our own data codes to the Patient Opinion data (enriching the metadata, for example by improving the match to our service tree all the way down to team level where possible).

First things first, we’ll be using the lappend function, which I stole from StackExchange a very long time ago and use all the time. I’ve often thought of starting a campaign to get it included in base R, but lots of people must have the same idea about other functions, and it probably really annoys the base R maintainers, so I just define it whenever I need it. One day I will set up a proper R startup environment and include it there; I use R on a lot of machines, so I’ve always been frightened to open that particular can of worms.


library(httr)
library(RMySQL)

### lappend function

lappend <- function(lst, obj) {
  lst[[length(lst) + 1]] <- obj
  return(lst)
}
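
To show what it does, here’s a quick throwaway example (not part of the pipeline):

# grow a list one element at a time
results = list()
results = lappend(results, "first story")
results = lappend(results, "second story")
length(results) # 2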

Now we store our API key somewhere we can get at it again and query our database for the date of the most recent Patient Opinion story. We will issue a dbDisconnect(mydb) right at the end, after we have finished the upload. Note that we add one to the date because we only want stories more recent than those already in the database, and the API searches on dates after and including the given date.

We need to convert this date into a string like “01%2F08%2F2015”, because this is the format the API accepts dates in (%2F is the URL encoding of “/”). Note that the code below builds the string in month, day, year order, so this example represents the 8th of January 2015.


apiKey = "SUBSCRIPTION_KEY qwertyuiop"

# connect to database

theDriver <- dbDriver("MySQL")

mydb = dbConnect(theDriver, user = "myusername", password = "mypassword",
  dbname = "dbName", host = "localhost")

dateQuery = as.Date(dbGetQuery(mydb, "SELECT MAX(Date) FROM POFinal")[, 1]) + 1

# convert YYYY-MM-DD to MM%2FDD%2FYYYY for the API

dateFinal = paste0(substr(dateQuery, 6, 7), "%2F", substr(dateQuery, 9, 10),
                   "%2F", substr(dateQuery, 1, 4))
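
As an aside, the same string can be built a little more readably with format() and URLencode() from base R’s utils package; this is an equivalent alternative, not what the rest of this post uses:

# equivalent alternative: format the date, then URL-encode the slashes
dateFinal = URLencode(format(dateQuery, "%m/%d/%Y"), reserved = TRUE)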

Now it’s time to fetch the data from Patient Opinion and process it. We produce an empty list and then add up to 100 stories to it (the maximum number the API will return from a single request), using a loop to take the next hundred, and the next, until there are no more stories. It’s highly unlikely that there would be over 100 stories in a single day, obviously, but there’s no harm in making the code cope with such an eventuality.

The API accepts a skip value which sets how far into the result set to start, in order of appearance, so this is set to 0, then 100, then 200, and so on within the loop to fetch all of the data.


# produce empty list to lappend

opinionList = list()

# set skip at 0 and make continue TRUE until no stories are returned

skip = 0

continue = TRUE

while(continue){

  opinions = GET(paste0("https://www.patientopinion.org.uk/api/v2/opinions?take=100&skip=",
                        skip, "&submittedonafter=", dateFinal),
                 add_headers(Authorization = apiKey))

  if(length(content(opinions)) == 0){ # if there are no stories then stop

    continue = FALSE
  }

  # add the stories to the list, increase skip, and repeat
  opinionList = c(opinionList, content(opinions))

  skip = skip + 100
}

Having done this, we test whether there are any stories since yesterday: if there are, the length of opinionList will be greater than 0. If so, we extract the relevant bits we want from each list, ready to build into a dataframe.

# if there are no new stories just miss this entire bit out

if(length(opinionList) > 0){

  keyID = lapply(opinionList, "[[", "id")
  title = lapply(opinionList, "[[", "title")
  story = lapply(opinionList, "[[", "body")
  date = lapply(opinionList, "[[", "dateOfPublication")
  criticality = lapply(opinionList, "[[", "criticality")

The service from which each story originates is slightly buried inside opinionList: you’ll need to step through it with lapply, extracting the $links element, then step through the resulting object, taking the 5th list element and extracting the element named “id”. This actually makes more sense to explain in code:

# location is inside $links

linkList = lapply(opinionList, "[[", "links")
location = lapply(linkList, function(x) x[[5]][['id']])
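
# relying on the 5th link by position is a little fragile; a slightly more
# defensive variant of the same logic (guarding against stories with fewer
# links or a missing id) might look like this:

location = lapply(linkList, function(x) {
  if(length(x) >= 5 && !is.null(x[[5]][["id"]])) x[[5]][["id"]] else NA
})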

# Some general tidying up and we're ready to put in a dataframe

# there are null values in some of these, need converting to NA
# before they're put in the dataframe

date[sapply(date, is.null)] = NA
criticality[sapply(criticality, is.null)] = NA

finalData = data.frame("keyID" = unlist(keyID),
  "Title" = unlist(title),
  "PO" = unlist(story),
  "Date" = as.Date(substr(unlist(date), 1, 10)),
  "criticality" = unlist(criticality),
  "location" = unlist(location),
  stringsAsFactors = FALSE
)

Now we have the data in a dataframe, it’s time to match up the location codes that Patient Opinion uses with the codes that we use internally. There’s nothing earth-shattering in the code, so I won’t laboriously explain it all, but there may be a few lines in it that could help you with your own data.

### match up NACS codes

POLookup = read.csv("poLookup.csv", stringsAsFactors = FALSE)

### some of the cases are inconsistent, make them all lower case

finalData$location = tolower(finalData$location)

POLookup$NACS = tolower(POLookup$NACS)

PO1 = merge(finalData, POLookup, by.x = "location", by.y = "NACS", all.x = TRUE)

### clean the data, removing HTML tags

PO1$PO = gsub("<(.|\n)*?>", "", PO1$PO)

PO1$Date = as.Date(PO1$Date)

# convert to a quarterly time index, anchored so that Q2 2009 = 1

poQuarters = as.numeric(substr(quarters(PO1$Date), 2, 2))

poYears = as.numeric(format(PO1$Date, "%Y"))

PO1$Time = poQuarters + (poYears - 2009) * 4 - 1
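
# worked example: a story dated 2015-08-01 falls in Q3 2015, giving
# 3 + (2015 - 2009) * 4 - 1 = 26, i.e. the 26th quarter counting Q2 2009 as 1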

### write PO to database

# all team codes must be present

PO1 = PO1[!is.na(PO1$TeamC), ]

# strip out line returns

PO1$PO = gsub(pattern = "\r", replacement = "", x = PO1$PO)

The following line is perhaps the most interesting. There are some smileys in our data, encoded in UTF-8, and I don’t know exactly why, but RMySQL is not playing nicely with them. I’m sure there is a clever way to sort this problem out, but I have a lot of other work to do, so I confess I just destroyed them as follows:

# strip out the UTF-8 smileys

PO1$PO = iconv(PO1$PO, "ASCII", "UTF-8", sub = "")
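
To illustrate what this does, here’s a made-up example (on a UTF-8 locale); the non-ASCII characters are simply dropped:

iconv("Great care \U0001F600 thanks", "ASCII", "UTF-8", sub = "")
# [1] "Great care  thanks"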

And with all that done, we can use dbWriteTable to upload the data to the database:

# write to database

dbWriteTable(mydb, "POFinal", PO1[, c("keyID", "Title", "PO", "Date", "Time",
             "TeamC")], append = TRUE, row.names = FALSE)
}

That’s it! All done.

Querying the Patient Opinion API from R

Patient Opinion have a new API out. They are retiring the old API, so you’ll need to transfer over to the new one if you are currently using the old version, but in any case there are a number of advantages to doing so. The biggest one for me is the option to receive results in JSON (or JSONP) rather than XML (although you can still have XML if you want). The old XML output was fiddly and annoying to parse, so personally I’m glad to see the back of it (you can see the horrible hacky code I wrote to parse the old version here). The JSON comes out beautifully using the built-in functions from the R package httr, which I shall be using to download and process the data.

There is a lot more information in the new API as well, including tags, the URL of the story, responses, status of the story (has response, change planned, etc.). I haven’t even remotely begun to think of all the new exciting things we can do with all this information, but I confess to being pretty excited about it.

With all that said, let’s have a look at how we can get this data into R with the minimum of effort. If you’re not using R, hopefully this guide will still show you the general direction of travel, whichever language you’re using.

This guide should be read in conjunction with the excellent API documentation at Patient Opinion, which can be found here.

First things first, let’s get access to the API. You’ll need a key, which can be obtained here. Get the HTTP one, not the Uri version, which only lasts for 24 hours.

We’ll be using the HTTP protocol to access the API through the aforementioned httr package; there’s a nice introduction to HTTP linked from the httr documentation here.

Let’s get started:


# store the subscription key header somewhere (this is not a real API key, of course; insert your own here)

apiKey = "SUBSCRIPTION_KEY qwertyuiop"

# run the GET command using add_headers to add the authentication header

stories = GET("https://www.patientopinion.org.uk/api/v2/opinions?take=100&skip=0",
              add_headers(Authorization = apiKey))
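
Before parsing the response, it’s worth checking that the request actually succeeded; httr’s stop_for_status() raises an R error for any 4xx or 5xx response:

# check the request succeeded before trying to parse it
stop_for_status(stories)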

The GET call above returns the stories in the default JSON format. If you prefer XML, which I strongly advise against, fetch the stories like this:

stories = GET("https://www.patientopinion.org.uk/api/v2/opinions?take=100&skip=0",
              add_headers(Authorization = apiKey, Accept = "text/xml"))

For those of you who are using something other than R to access the API, note that the above add_headers() command is equivalent to adding the following header to your HTTP request:

Authorization: SUBSCRIPTION_KEY qwertyuiop

You can extract the data from the returned JSON or XML object into a list very simply by using:

storiesList = content(stories)
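
To get a feel for what came back, you can poke around in the resulting list; the exact field names will depend on the API’s response:

length(storiesList)     # number of stories returned (up to 100)
names(storiesList[[1]]) # fields available on the first story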

There is more on extracting the particular data that you want in another blog post, parallel to this one, here. For brevity, we can say that you can return all the titles by running:

title = lapply(storiesList, "[[", "title")

That’s it. You’re done. You’ve done it. There’s loads more about what’s available and how to search it in the help pages; hopefully this was a useful intro for R users.