Quarters and modulo arithmetic

This is another post that’s mainly for my benefit when I inevitably forget. I’m working with dates in PHP which, unlike MySQL, does not have a built-in function for extracting the quarter from a date.

Even if it did, one would have to be very careful with it because quarters are actually defined differently in different countries. In the UK, where I am, April to June is the first quarter (and January to March the last). In other countries, including (I think) the US, the first quarter is January to March, and October to December the last.

With no quarter function, and the hypothetical threat of an unpredictable one, my next recourse is to divide the month by 3 in order to give me a quarter number. This is done very simply as:


echo ceil(date("m") / 3); // month (1-12) divided by 3 and rounded up gives the calendar quarter (1-4)

This will return 1 when the month is 1 to 3 (January to March), 2 when it’s 4 to 6 (April to June), and so on. But as I mentioned before, UK quarters don’t work like this: they don’t run in the sequence 1, 2, 3, 4, but rather 4, 1, 2, 3. Now I could write code that deals with each of those cases individually, converting 1 to 4, 2 to 1, 3 to 2, etc., but I thought it was better to do it properly (and store up generalisability for the future) by converting the 1, 2, 3, 4 sequence with modulo arithmetic. I confess I still can’t work out how to do this other than by trial and error, but it definitely involves modulo 4 because the sequence has period 4.

To convert 1, 2, 3, 4 to 4, 1, 2, 3 one need only do the following, where x is the input value 1, 2, 3, 4:

4 - (5 - x) %% 4

As far as I can figure it, the first constant (4) controls how big the numbers in the output sequence are (4, 1, 2, 3 versus 10, 7, 8, 9) and the second constant (5) controls where the sequence flips back to the start (4, 1, 2, 3 versus 3, 4, 1, 2).
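The %% above is R’s modulo syntax (in PHP it would be a single %), so the formula can be sanity-checked directly in R; a throwaway sketch, not part of the original problem:


x = 1:4 # calendar quarter numbers for Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec

4 - (5 - x) %% 4 # returns 4 1 2 3, the UK-style sequence

4 - (6 - x) %% 4 # moving the second constant to 6 returns 3 4 1 2

10 - (5 - x) %% 4 # changing the first constant returns 10 7 8 9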

That’s all I know; I’m really busy and don’t have time to think about it any more. Hopefully this will help someone (or me) in two years’ time.

Consuming REST APIs with PHP and cURL

I wasted such a lot of time on this that I must commit it to the internet on the off chance that it helps someone else in the same situation.

If you are using PHP to consume a RESTful API via cURL and you want to manipulate the data you get back, it’s very important that you set CURLOPT_RETURNTRANSFER to true. This makes curl_exec() return the server’s response so you can collect it in a variable. If you don’t set this option it will just echo the response to the screen, which is obviously of no use whatsoever.

While I’m here, I may as well mention that if you want json_decode() to return an array you need to use json_decode($result, true); otherwise you get an object back. The final code I wrote looks like this:


$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the response rather than echoing it
curl_setopt($ch, CURLOPT_URL, "http://YOUR_URL_HERE");
$result = curl_exec($ch); // $result now holds the raw response body

curl_close($ch);

$result = json_decode($result, true); // giving true to json_decode returns array

Converting a grouped plyr::ddply() to dplyr

I’m going full tidyverse at the moment, so I’m converting my old plyr code to dplyr. It’s been pretty steady going so far, although I had a bit of difficulty converting an instruction using ddply which carried out a function based on a subgrouping within the data. I wrote a toy example to get it right, and I may as well share it with the internet in case it helps someone else. If you can’t figure out from my description what the function does, just run the code; it’s easier to see than to explain in words. The following two calls carry out the same task, the latter being a translation of the former.


library(plyr)
library(tidyverse)

# plyr: split by cyl, add the group mean of mpg to every row
plyr::ddply(mtcars, "cyl", mutate, mean.mpg = mean(mpg))

# dplyr: the same thing, with group_by() providing the grouping
mtcars %>% 
  dplyr::group_by(cyl) %>% 
  dplyr::mutate(mean.mpg = mean(mpg))
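One small difference worth noting: the dplyr version returns a grouped tibble rather than a plain data frame, so if downstream code expects ddply-style output you may want to drop the grouping at the end, something like:


mtcars %>% 
  dplyr::group_by(cyl) %>% 
  dplyr::mutate(mean.mpg = mean(mpg)) %>% 
  dplyr::ungroup()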

Munging Patient Opinion data with R

I’ve written a post about using Patient Opinion’s new API, focusing on how to download your data using R but perhaps also useful as a gentle introduction for those using other languages, which can be found here.

This post focuses on the specific data operations which I perform in R in order to get the data nicely formatted and stored on our server. This may be of interest to those using R who want to do something very similar; much of this code will probably work in a similar context.

The code shown checks for the most recent Patient Opinion data every night and, if there is new data that is not yet on the database, downloads, processes, and uploads it to our own database. In the past I had downloaded the whole set every night and simply wiped the previous version, but this is not possible now that we are starting to add our own codes to the Patient Opinion data (enriching the metadata, for example by improving the match to our service tree all the way to team level where possible).

First things first, we’ll be using the lappend function, which I stole from StackExchange a very long time ago and use all the time. I’ve often thought of starting a campaign to get it included in base R, but then lots of people must have that idea about other functions and it probably really annoys the base R people, so I just define it whenever I need it. One day I will set up a proper R starting environment and include it there; I use R on a lot of machines and so have always been frightened to open that particular can of worms.


library(httr)
library(RMySQL)

### lappend function

lappend <- function(lst, obj) {
  lst[[length(lst) + 1]] <- obj
  return(lst)
}
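
Just to illustrate what lappend() does, here’s a throwaway example (not part of the actual script):


exampleList = list()

exampleList = lappend(exampleList, "first story")
exampleList = lappend(exampleList, "second story") # now a list of length 2, with no index bookkeeping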

Now we store our API key somewhere where we can get it again and query our database to see the date of the most recent Patient Opinion story. We will issue a dbDisconnect(mydb) right at the end, after we have finished the upload. Note that we add one to the date because we only want stories that are more recent than the database, and the API searches on dates after and including the given date.

We need to convert this date into a string like “01%2F08%2F2015” because this is the format the API accepts dates in (%2F being a URL-encoded forward slash). This date is the 1st August 2015.


apiKey = "SUBSCRIPTION_KEY qwertyuiop"

# connect to database

theDriver <- dbDriver("MySQL")

mydb = dbConnect(theDriver, user = "myusername", password = "mypassword",
  dbname = "dbName", host = "localhost")

dateQuery = as.Date(dbGetQuery(mydb, "SELECT MAX(Date) FROM POFinal")[, 1]) + 1

dateFinal = paste0(substr(dateQuery, 6, 7), "%2F", substr(dateQuery, 9, 10), "%2F", substr(dateQuery, 1, 4))
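
As an aside, the same string can be built a little more directly with format() and base R’s URLencode(); this is just an equivalent sketch of the substr() approach above, not what the script actually runs:


dateFinal = URLencode(format(dateQuery, "%m/%d/%Y"), reserved = TRUE) # reserved = TRUE encodes the slashes as %2F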

Now it’s time to fetch the data from Patient Opinion and process it. We will produce an empty list and then add 100 stories to it (the maximum number the API will return from a single request), using a loop to take the next hundred, and the next hundred, until there are no more stories (it’s highly unlikely that there would be over 100 stories just in one day, obviously, but no harm in making the code cope with such an eventuality).

The API accepts a skip value which tells it how many stories to skip past, in order of appearance, so this is set at 0, then 100, then 200, and so on within the loop to fetch all of the data.


# produce empty list to lappend

opinionList = list()

# set skip at 0 and make continue TRUE until no stories are returned

skip = 0

continue = TRUE

while(continue){

  opinions = GET(paste0("https://www.patientopinion.org.uk/api/v2/opinions?take=100&skip=",
                        skip, "&submittedonafter=", dateFinal),
                 add_headers(Authorization = apiKey))

  if(length(content(opinions)) == 0){ # if there are no stories then stop

    continue = FALSE
  }

  opinionList = c(opinionList, content(opinions)) # add the stories to the list

  # increase skip, and repeat

  skip = skip + 100

}

Having done this, we test to see if there are any stories since yesterday. If there are, the length of opinionList will be greater than 0 (that is, length(opinionList) > 0). If so, we extract the relevant bits we want from each list element, ready to build into a dataframe.

# if there are no new stories just miss this entire bit out

if(length(opinionList) > 0){

  keyID = lapply(opinionList, "[[", "id")
  title = lapply(opinionList, "[[", "title")
  story = lapply(opinionList, "[[", "body")
  date = lapply(opinionList, "[[", "dateOfPublication")
  criticality = lapply(opinionList, "[[", "criticality")

The service from which each story originates is slightly buried inside opinionList: you’ll need to step through with lapply, extracting the $links element, and then step through the resulting object, taking the 5th list element and extracting the element named “id”. This actually makes more sense to explain in code:

# location is inside $links

linkList = lapply(opinionList, "[[", "links")
location = lapply(linkList, function(x) x[[5]][['id']])

# Some general tidying up and we're ready to put in a dataframe

# there are null values in some of these, need converting to NA
# before they're put in the dataframe

date[sapply(date, is.null)] = NA
criticality[sapply(criticality, is.null)] = NA

finalData = data.frame("keyID" = unlist(keyID),
  "Title" = unlist(title),
  "PO" = unlist(story),
  "Date" = as.Date(substr(unlist(date), 1, 10)),
  "criticality" = unlist(criticality),
  "location" = unlist(location),
  stringsAsFactors = FALSE
)

Now we have the data in a dataframe, it’s time to match up the location codes that Patient Opinion use with the codes that we use internally. There’s nothing earth-shattering in the code so I won’t laboriously explain it all, but there may be a few lines in it that could help you with your own data.

### match up NACS codes

POLookup = read.csv("poLookup.csv", stringsAsFactors = FALSE)

### some of the cases are inconsistent, make them all lower case

finalData$location = tolower(finalData$location)

POLookup$NACS = tolower(POLookup$NACS)

PO1 = merge(finalData, POLookup, by.x = "location", by.y = "NACS", all.x = TRUE)

### clean the data, removing HTML tags

PO1$PO = gsub("<(.|\n)*?>", "", PO1$PO)

PO1$Date = as.Date(PO1$Date)

# quarters() gives "Q1", "Q2" etc., so take the digit
poQuarters = as.numeric(substr(quarters(PO1$Date), 2, 2))

poYears = as.numeric(format(PO1$Date, "%Y"))

# sequential quarter index: Q1 2009 = 0, Q2 2009 = 1, and so on
PO1$Time = poQuarters + (poYears - 2009) * 4 - 1

### write PO to database

# all team codes must be present

PO1 = PO1[!is.na(PO1$TeamC), ]

# strip out line returns

PO1$PO = gsub(pattern = "\r", replacement = "", x = PO1$PO)

The following line is perhaps the most interesting. There are some smileys in our data, encoded in UTF-8, and I don’t know exactly why, but RMySQL is not playing nicely with them. I’m sure there is a clever way to sort this problem out, but I have a lot of other work to do, so I confess I just destroyed them as follows.

# strip out the UTF-8 smileys

PO1$PO = iconv(PO1$PO, "ASCII", "UTF-8", sub = "")

And with all that done, we can use dbWriteTable to upload the data to the database.

# write to database

dbWriteTable(mydb, "POFinal", PO1[, c("keyID", "Title", "PO", "Date", "Time",
             "TeamC")], append = TRUE, row.names = FALSE)
}

That’s it! All done.

Querying the Patient Opinion API from R

Patient Opinion have a new API out. They are retiring the old API, so you’ll need to transfer over to the new one if you are currently using the old version, but in any case there are a number of advantages to doing so. The biggest one for me is the option to receive results in JSON (or JSONP) rather than XML (although you can still have XML if you want). The old XML output was fiddly and annoying to parse, so personally I’m glad to see the back of it (you can see the horrible hacky code I wrote to parse the old version here). The JSON comes out beautifully using the built-in functions from the R package httr, which I shall be using to download and process the data.

There is a lot more information in the new API as well, including tags, the URL of the story, responses, status of the story (has response, change planned, etc.). I haven’t even remotely begun to think of all the new exciting things we can do with all this information, but I confess to being pretty excited about it.

With all that said, let’s have a look at how we can get this data into R with the minimum of effort. If you’re not using R, hopefully this guide may be of some use to you to show you the general direction of travel, whichever language you’re using.

This guide should be read in conjunction with the excellent API documentation at Patient Opinion which can be found here.

First things first, let’s get access to the API. You’ll need a key, which can be obtained here. Get the HTTP one, not the Uri version, which only lasts for 24 hours.

We’ll be using the HTTP protocol to access the API; there’s a nice introduction to HTTP linked from the httr package here. We’ll be using the httr package itself to make the requests.

Let’s get started:


# put the subscription key header somewhere (this is not a real API key, of course, insert your own here)

apiKey = "SUBSCRIPTION_KEY qwertyuiop"

# run the GET command using add_headers to add the authentication header

stories = GET("https://www.patientopinion.org.uk/api/v2/opinions?take=100&skip=0",
              add_headers(Authorization = apiKey))

This will return the stories in the default JSON format. If you prefer XML, which I strongly advise against, fetch the stories like this:

stories = GET("https://www.patientopinion.org.uk/api/v2/opinions?take=100&skip=0",
              add_headers(Authorization = apiKey, Accept = "text/xml"))

For those of you who are using something other than R to access the API, note that the above add_headers() command is equivalent to adding the following to your HTTP header:

Authorization: SUBSCRIPTION_KEY qwertyuiop

You can extract the data from the JSON or XML object which is returned very simply by using:

storiesList = content(stories)

This will return the data in a list. There is more on extracting the particular data that you want in another blog post which runs parallel to this one, here; for brevity, suffice it to say that you can return all the titles by running:

title = lapply(storiesList, "[[", "title")

That’s it. You’re done. You’ve done it. There’s loads more about what’s available and how to search it in the help pages; hopefully this was a useful intro for R users.

Concatenating text nicely with commas and “and”s

Wow, this blog is quiet at the moment. I can’t even remember if I’ve written this anywhere but I had a liver transplant in May and I’ve been repeatedly hospitalised with complications ever since. So work and life are a little slow. I’m hoping to be back to full health in a few weeks.

Anyway, I was just writing some Shiny code and I have a dynamic text output consisting of one, two, or many pieces of text which needs to have commas and “and”s added to make natural English, like so:

Orange. Orange and apple. Orange, apple, and banana. Orange, apple, banana, and grapefruit.

I really toyed with the idea of just using commas and not worrying about “and”s but I decided that it was only a quick job and it would make the world more attractive. So if you’ve just Googled this, here’s my solution:


theWords = list(LETTERS[1], LETTERS[2:3], LETTERS[4:6])

vapply(theWords, function(x) {

  if(length(x) == 1){

    x = x # Do nothing!

  } else if(length(x) == 2){

    x = paste0(x[1], " and ", x[2])

  } else if(length(x) > 2){

    x = paste0(paste0(x[-length(x)], ", ", collapse = ""),
               "and ", x[length(x)])
  }

}, "A")
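For the record, that call returns the character vector c("A", "B and C", "D, E, and F"), serial comma included.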

Notice the use of vapply, not lapply, because Thou Shalt Use Vapply.

Working as a coder at a technology company versus working as a coder in other contexts

I was just skimming a book about Python and I found a rather interesting quote:

“Programming as a profession is only moderately interesting. It can be a good job, but you could make about the same money and be happier running a fast food joint. You’re much better off using code as your secret weapon in another profession.

People who can code in the world of technology companies are a dime a dozen and get no respect. People who can code in biology, medicine, government, sociology, physics, history, and mathematics are respected and can do amazing things to advance those disciplines.”

Looking at salaries for developers, as well as reading the experiences of coders in technology companies, this really seems to ring true. I’m the other kind: I’ve picked up a bit of code in order to do my day job, which is really focused on collecting and analysing user experience data. And actually I do get respect, and I have been able to do a lot of new things that nobody was doing before I got started.

Quite inspiring words, really. I feel a renewed urge to develop my programming skills within my current role, to try to push things even further, and I will stop daydreaming about what it would be like to be a “real” programmer.