One weird trick to getting column types right with read_csv

Using read_csv from the tidyverse is so easy that I didn’t bother to look at the readr documentation for a long time. However, I’m glad I did, because there is, as they say in the clickbait world, one weird trick to get your column types right with read_csv. read_csv (or the other delimited file reading functions like read_tsv) does a brilliant job of guessing what type each column is, but by default it only looks at the first 1000 rows. That’s fine for most datasets, but I have more than one dataset where the first 1000 rows of a column are missing, which doesn’t help the parser at all. So do it manually and get it right. But what a pain, all that typing, right? Wrong. Just do this:


testSpec = read_csv("masterTest.csv")

And you’ll get this output automatically:


Parsed with column specification:
cols(
  TeamN = col_character(),
  Time = col_integer(),
  TeamC = col_double(),
  Division = col_integer(),
  Directorate = col_integer(),
  Contacts = col_integer(),
  HIS = col_character(),
  Inpatient = col_character(),
  District = col_character(),
  SubDistrict = col_character(),
  fftCategory = col_character()
)

You’re supposed to copy and paste that into a new call, putting right any mistakes. And in fact there is one in this very spreadsheet: the parser incorrectly guesses that Inpatient is character when it is in fact integer, because the first 1000 rows are missing.

So just copy all that into a new call and fix the mistake, like this:


testSpec = read_csv("masterTest.csv", 
                    col_types = 
                      cols(TeamN = col_character(),
                           Time = col_integer(),
                           TeamC = col_double(),
                           Division = col_integer(),
                           Directorate = col_integer(),
                           Contacts = col_integer(),
                           HIS = col_character(),
                           Inpatient = col_integer(),
                           District = col_character(),
                           SubDistrict = col_character(),
                           fftCategory = col_character()
                      ))

If you’re still having problems, you can have a look using problems(testSpec).
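
For example, carrying on with the object created above:

problems(testSpec)

problems() returns a little tibble with one row per parsing failure, giving the row, the column, and what readr expected to find versus what it actually got.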

Absolute pure genius. The more I use the tidyverse, the more I know about it, and the more I know about it, the more I love it.

Analysing runs from the Polar Flow web service

Well, we’re still in New Year’s resolutions territory, so what better time to have a look at using R to analyse data collected from a run? For this analysis I have used the Polar Flow web service to download two attempts at the same Parkrun, recorded on a Polar M600 (which I love, by the way, if you’re looking for a running/smartwatch recommendation).

The background to the analysis is that in the second of the two runs I thought I was doing really well and was going to crush my PB, but it ended up being exactly the same as the previous run in terms of total time taken, just with my heart rate a lot lower.

But it didn’t feel to me like I was holding anything back, so I can’t really explain why my heart rate dropped so much without a corresponding increase in performance. One possible explanation is that I have moved from being bottlenecked by the performance of my cardiovascular system to being bottlenecked by the performance of my legs, and that these two bottlenecks happen to cap my pace at about the same level.

It was pretty fun having a look in R. Here’s a link to the analysis as it stands.

I thought I would look at my race strategy in terms of how fast I went at each point, reasoning that maybe I let myself down on the hills or the straights or something in the second attempt. However, as you can see from the analysis, the pace is absolutely identical the whole way in both runs. The heart rate is consistently lower in the second run, and it only creeps up at the end for the sprint finish (which makes me wonder whether I really was pushing myself hard enough).

I need to do more analysis. My next idea is to look at the relationship between incline, heart rate, and pace (the route is pretty hilly so this is quite important).
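
To give a flavour of what I mean, something like this would be the starting point. It’s a sketch only: I’m assuming the two exports have been read into one data frame called runs, and the column names altitude_m, hr and run are made up; the real Polar Flow export will be different.

library(tidyverse)

# crude per-sample incline from the altitude trace, computed within each run
runs %>%
  group_by(run) %>%
  mutate(incline = altitude_m - lag(altitude_m)) %>%
  ungroup() %>%
  filter(!is.na(incline)) %>%
  ggplot(aes(x = incline, y = hr, colour = run)) +
  geom_point(alpha = 0.3) +
  geom_smooth()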

New Year blog post

So my favourite productivity guru, David Allen, he of Getting Things Done fame, has a suggestion for a New Year review that you should carry out each year. His suggested structure can be found here.

I think this is an excellent idea so I have set myself a recurring reminder to do it each year, starting now. And I may as well blog it so I can find it again and anyone else who’s interested can have a read and maybe be inspired to do their own.

I had a bit of a rough year in 2017 and required surgery to remove my colon and spleen after a long battle with ulcerative colitis, so the 2017 bit is going to be a bit shorter than it will be next year. Still, I did achieve some stuff so let’s have a look.

The most sort of visible thing I did this year was make some videos about Shiny. There are two courses, and they’re available to buy (or you can watch them with a subscription) at the links Getting Started with Shiny and UI Development with Shiny. There are some angry people on Amazon reviewing my book who clearly wanted it to have more advanced applications in, so I’ll warn you here: these courses don’t feature any highly complex applications. They’re more for people who are starting out. I have another one coming out soon called Advanced Shiny; I can’t link to it yet because it doesn’t exist. Again, that’s sort of moderately advanced: interacting with databases, JavaScript, adding a password, stuff like that. So don’t buy them or subscribe to the video service hoping for some really high-level stuff, because it’s not in there. I’d hate for you to feel like you wasted your money.

Anyway, I also started doing a lot of work with Shiny at work, where it’s taking off in a big way, and when I talk about 2018 I’ll be saying that I’ve got big plans there: my role is developing quite a lot based on what we’ve achieved already in 2017. I’ve also learned quite a lot about text analytics, with the hope of relaunching a Shiny application I wrote a few years back and currently maintain, with a lot of new tools in it to allow the user to explore text-based data. I’ll say more about that next year when I’ve actually done it.

I also learned how to implement an API in PHP, which is pretty easy really when you know how, but it was cool to learn anyway.

I was in the right place at the right time, really, at work, in terms of having the experience with Shiny that I do and being able to help with this project that’s developing, so I’ve been quite lucky to be on board, but I also give myself some credit for reading the runes and training myself up in Shiny to be ready to help with something like this. The skills I’ve acquired, both in Shiny programming and in running Linux servers to host Shiny Server and the relevant databases, are suddenly very valuable to my organisation, so I feel I called this one correctly. My next prediction is text analysis: I feel like if I can learn that and do it really well over the next five years then there could be opportunities there.

In terms of my personal life, really it feels like all I did was be very, very ill with severe inflammatory bowel disease and then have surgery to correct it. That consumed a lot of my energy, really. I’ve now recovered and I’m back doing 10 mile runs at the weekend, which is great. I’ve actually very tentatively started writing a book about my experiences being ill, which I’ll talk about more if I actually get anywhere with it next year.

So in 2018 I’ll be developing my role at work and having much more input across the organisation, both in terms of Shiny, live analytics, dashboards and all that stuff, and in terms of statistics and analysing real datasets from healthcare services, which is where I started out, really, finishing my psychology PhD in 2008 looking at routinely collected data in psychiatric hospitals with mixed effects models. And as I mentioned, I’ll be improving one of my Shiny applications, adding a lot of tools to help users explore the text.

In my personal life I want to be a really good dad, since my kids got a bit of a raw deal with my being sick in 2017, and I’m going to run a marathon in less than four hours. I didn’t run for 3 years or so because of having liver disease and then bowel disease, so I really owe it to myself to get a nice marathon PB under my belt (current PB is 4’14”). And I’m going to try to clear the decks a bit and get writing a book about my experiences being ill; a lot of people have told me they think it would be good, so I’m going to have a go.

David Allen has some questions. I missed out the ones for the year just gone because it was such a strange year, but let’s look at some for 2018.

What would you like to be your biggest triumph in 2018?
If I can make this new job role work then I’ll be really pleased because it’s a big important step up for me and I think it will be really valuable to the Trust. And I really want to run a sub 4 marathon. If I can do those two things, I’m happy.

What is the major effort you are planning to improve your financial results in 2018?
I’ve actually got a savings account now, after being hopeless with money in the past. I’m really trying to save up for things instead of impulsively buying them, and with a bit of luck I can treat myself to a beautiful Entroware laptop in 2018.

What major indulgence are you willing to experience in 2018?
Now the kids are a bit older I’d love to go on a skiing holiday with them. I love snowboarding and haven’t been for ages.

What would you most like to change about yourself in 2018?
Definitely better organised and less forgetful. I’m really trying hard at the moment to put reminders in my phone for everything: work stuff, buying presents, stuff for my kids, everything.

What is one as yet undeveloped talent you are willing to explore in 2018?
I’m really going to have to get into big picture mode at work. I’ve had my head down learning the stack for processing patient experience data, but in 2018 I need to work much more widely, with, say, finance data, or HR, or clinical outcomes. I’m really looking forward to getting my teeth into that.

What brings you the most joy and how are you going to do or have more of that in 2018?
Running. Lots and lots and lots of lovely running. I’m making time to do that, already am.

Who or what, other than yourself, are you most committed to loving and serving in 2018?
My kids deserve a really good dad who’s not ill all the time, and that’s exactly what they’re going to get.

What one word would you like to have as your theme in 2018?
Health. Beautiful, glorious, wonderful health.

Gathering data (wide to long) in the tidyverse

I seem to have a pretty severe mental block about the gather() function from tidyr, so this is yet another post that, to be honest, is basically for me to refer to in 6 months when I forget all this stuff. So I’m going to address the mental block I have very specifically and show some code; hopefully it will help someone else out there.

So whenever I use gather I put the whole dataframe in. Say I’ve got ten variables. I whack the whole dataframe in and try to pull out just the ones I want using the select() notation at the end of the list of arguments. This DOES NOT MAKE ANY SENSE. You can’t do this:


library(tidyverse)

testData = tibble(ID = 1:10,
                  Q1 = runif(10),
                  Q2 = runif(10),
                  Q3 = runif(10),
                  Q4 = runif(10),
                  Q5 = runif(10))

gather(testData, key = Question, value = Score, Q1, Q2, Q3)

This does not work! I don’t know why I think it does! What do I think is going to happen to the ID column? It’s just going to magically go away?

I DON’T KNOW WHY I’M SO BAD AT THIS.

What it actually does is keep every column you didn’t name (ID, Q4 and Q5 here) and repeat their values against each gathered row, so you just end up with a huge mess. The other thing to say, and I have started to get the hang of this, but just in case: THE KEY AND VALUE ARGUMENTS YOU JUST MAKE UP. THEY ARE *NOT* RELATED TO THE NAMES OF THE DATAFRAME AT ALL.
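
If you want to see the mess for yourself, a quick check of the shape tells the story (this is just the same call as above, captured in a throwaway object so we can inspect it):

theWrongWay = gather(testData, key = Question, value = Score, Q1, Q2, Q3)

dim(theWrongWay)   # 30 rows, 5 columns
names(theWrongWay) # "ID", "Q4", "Q5", "Question", "Score"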

What you actually do is select JUST THE VARIABLES YOU WANT, and then decide whether there are any other variables you want to keep without gathering them. So as a concrete example, let’s say you want to gather Q1 to Q3 and keep the ID column. You want to put the ID column in, but you don’t want to GATHER it. So you put it in the select statement, but use -ID in the gather statement:


testData %>%
  select(ID : Q3) %>%
  gather(key = Question, value = Score, -ID)

# A tibble: 30 x 3
  ID Question Score
  <int> <chr> <dbl>
 1 1 Q1 0.26001265
 2 2 Q1 0.34674771
 3 3 Q1 0.43080742
 4 4 Q1 0.28397929
 5 5 Q1 0.14545496
 6 6 Q1 0.63496928
 7 7 Q1 0.78777785
 8 8 Q1 0.44622476
 9 9 Q1 0.86785324
10 10 Q1 0.02611436
# ... with 20 more rows

Or if you don’t want the ID column (not doing anything useful in this particular, made up, case):


testData %>%
  select(Q1 : Q3) %>%
  gather(key = Question, value = Score, Q1 : Q3)

# A tibble: 30 x 2
  Question Score
  <chr> <dbl>
 1 Q1 0.26001265
 2 Q1 0.34674771
 3 Q1 0.43080742
 4 Q1 0.28397929
 5 Q1 0.14545496
 6 Q1 0.63496928
 7 Q1 0.78777785
 8 Q1 0.44622476
 9 Q1 0.86785324
10 Q1 0.02611436
# ... with 20 more rows

Note that by default it will include ALL variables anyway, so this is totally equivalent to:


testData %>%
  select(Q1 : Q3) %>%
  gather(key = Question, value = Score)

That’s it! As I said at the beginning of the post, I have no idea why I have such a ridiculous mental block about it. It’s all in the documentation; I just get all the column references and the - notation and all that stuff mixed up (I think partly because using -ID KEEPS the ID variable, it just doesn’t GATHER it). It’s my fault for being an idiot, but the next time I get stuck I’ll read this and understand clearly 🙂

Oh yes, last thing: Q1 : Q3 is just “from Q1 to Q3”, meaning Q1, Q2, and Q3, and Q3 : Q5 would be Q3, Q4, Q5, etc. There are lots of ways to select the variables. See more at ?gather and ?select (gather uses the same variable selection rules as select).
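
Another handy helper is starts_with(), which grabs every column whose name begins with a given prefix. Note that here it would pull in all five Q columns, not just Q1 to Q3:

testData %>%
  select(ID, starts_with("Q")) %>%
  gather(key = Question, value = Score, -ID)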

One neat trick is num_range(), which is a shortcut for selecting ranges of things like Q1, Q2, Q3 or X1, X2, X3 and so on. You just give the prefix and the numbers you want:


testData %>%
  select(num_range("Q", 1:3)) %>%
  gather(key = Question, value = Score)

Right, I’ll stop now, this post is getting too long.

Analysis tools for Manager Joe

I’m using someone else’s data today. It’s absolutely hideously laid out. I could munge it into R but it would take absolutely ages and I’m just not doing enough with it for that to be worth doing.

So I need to have a look at about 30 survey questions using the tools available to the Average Manager Joe: a spreadsheet and the “graph” button.

It’s a real eye-opener. Everything takes ages, for one thing, and everything is so janky that I’m not even really sure if I’m drawing the right conclusion. I think the most worrying thing is that the effort involved is so high that I’m losing my curiosity: I’m just trying to get it done. I’m just churning out all this rubbish, giving it a quick eyeball and crashing on.

Why does that seem so familiar? Oh yes, that’s what I’ve always assumed people have done when I read their reports. It’s a big problem, we all know it is: data is too difficult to make sense of, so people do it quickly, and wrongly. We all know this, but I’m living it right now. And I have renewed purpose to make all MY data applications beautifully easy to use. Stay tuned…

[… time passes]

I’ve come back to this post. It’s no good. I can’t do it. I’m munging the data into R, even if it will take a little while. It just goes to show, it’s really hard to get away with not doing it properly.

Failure to produce a PDF with RMarkdown and the tidyverse

I’m using the tidyverse for everything now, as I’ve mentioned in previous posts. When I want a cup of tea I just run:


house %>%
  filter(kitchen == 1) %>%
  select(tea, kettle) %>%
  infuse()

I just ran the following code in a vanilla RStudio setup with pdflatex installed:


---
title: "Test document"
author: "Chris Beeley"
date: "20 December 2017"
output: pdf_document
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```

```{r}

library(tidyverse)

# let's do some data stuff here...

```

This is the code that you get if you set up an RMarkdown document in RStudio, choose PDF output, and add a chunk to do some data stuff with the tidyverse package.

And it produced the following error message:

! Package inputenc Error: Unicode char √ (U+221A)
(inputenc) not set up for use with LaTeX.

See the inputenc package documentation for explanation.
Type H for immediate help.

l.145 \end{verbatim}

Try running pandoc with --latex-engine=xelatex.
pandoc: Error producing PDF
Error: pandoc document conversion failed with error 43
Execution halted

I was a bit confused by this for quite a while. The answer, of course, turns out to be the lovely messages which the tidyverse produces on loading.

With the default message = TRUE behaviour in the code chunk, those little tick marks from the startup message end up in the LaTeX, and evidently the default pdflatex setup doesn’t support that Unicode character.

So the document fails, and it’s hard to understand why until you knit to HTML and see the little ticks.

Changing the knitr::opts_chunk$set(echo = TRUE) line to knitr::opts_chunk$set(echo = TRUE, message = FALSE) fixes the problem.
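
Alternatively, if you want to keep messages on elsewhere in the document, you can silence just the chunk that loads the tidyverse, either with the chunk option or by wrapping the call in suppressPackageStartupMessages():

```{r, message = FALSE}
library(tidyverse)

# or, equivalently, in a chunk with messages left on:
# suppressPackageStartupMessages(library(tidyverse))
```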

I can’t help but think that this is a rare example of R getting harder to use. When I started with R 10 years ago it was much more difficult to do even simple things like load a csv file or work with dates. These days there are lots of lovely packages to help, and of course RStudio itself makes using R much more intuitive. But this is going to confuse newbies, I think, which is a bit of a shame.

There are several obvious fixes; I won’t bother to list them all. Making message = FALSE the default in RMarkdown documents in RStudio seems like the best one, but maybe there’s some reason they don’t want to do that.

Font size of code in .Rpres presentations

I don’t know if I even knew about the .Rpres presentation feature in RStudio v. 0.98 and above. As I think I mentioned, I’ve been rather ill for the last couple of years and I’m afraid I kind of fell out of touch with things a bit. Anyway, I’m all better now and I’m going to be giving a talk at the R User Group in Nottingham (which I love profoundly), so I thought I’d do it this new sexy way.

It seems pretty handy. I haven’t made the whole presentation yet, so I’m sure there’s more to come, but the first thing is: dang! The code in an echo = TRUE chunk is really large! I can’t fit any output on the page!

So I found this guide to making it smaller, and lots of other nice tweaks, too.
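
The gist of it, written here from memory rather than copied from the guide, so treat the exact selectors as a starting point rather than gospel, is that you drop a bit of CSS into the .Rpres file and then apply that class to any slide whose code needs shrinking:

<style>
.small-code pre code {
  font-size: 1em;
}
</style>

A slide with smaller code
========================================================
class: small-code

```{r}
summary(cars)
```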

Better Git commit messages

Something else I’m trying to be better at is using Git. I did use it, briefly, a few years back, but I never quite got the hang of it and I’ve reverted to the bad habit of having MainCode/ and TestingCode/ and TryNewFunction/ folders filled with near-identical code.

So I’m back on the Git wagon again. Atom (see my previous blog post) has beautiful Git integration, as you’d expect since it was built by the GitHub people. It also enforces a couple of conventions for writing Git commit messages, which inspired a Google search that led me to this, a guide to writing better commit messages.

I never even thought about the art of it, but, of course, like code comments, good commit messages are essential for collaborating with anyone, even your future self.

Ellen Townsend: Small talk saves lives — IMH Blog (Nottingham)

It sounds much too simple doesn’t it? Making small talk could save a life. But the truth is, it really could. Today SHRG is supporting the campaign launched by the Samaritans. They are asking us all to be courageous and strike up a conversation with someone if we are worried about them at a railway […]

via Ellen Townsend: Small talk saves lives — IMH Blog (Nottingham)

Filtering data straight into a plot with tidyverse

I’m still trying to go full tidyverse, as I believe I mentioned a while back. It’s clearly a highly useful approach, but on top of that I see a load of code in blogs and tutorials that’s written in a tidy style, so unless I learn it I’m not going to have a lot of luck reading it. I saw somebody do the following a little while back and I really liked it, so I thought I’d share it.

In days gone by I would draw lots of graphs in an RMarkdown document like this:


firstFilteredDataset = subset(wholeData, 
  Date > as.Date("2017-04-01"))

ggplot(firstFilteredDataset, 
  aes(x = X1, y = y)) + geom_... etc.

secondFilteredDataset = subset(wholeData, 
  Date > as.Date("2015-01-01"))

ggplot(secondFilteredDataset, 
  aes(x = X1, y = y)) + geom_... etc.

thirdFilteredDataset = ... etc.

It’s fine, there’s nothing wrong with doing that, really. The two drawbacks are firstly that the code looks a bit ungainly, creating lots of objects that are used once and then forgotten about, and secondly that it fills your RAM with data. Not really a problem on my main box, which has 16GB of RAM, but it’s a bad habit and you may come unstuck somewhere else where RAM is more limited, like when you’re running code on a server.

So I saw some code on the internet the other day and they just piped data straight from a dplyr filter statement to a ggplot instruction. No muss, no fuss, the data is defined in the same function in which it’s used, and you’re not making loads of objects and leaving them lying around. So here’s an example:


library(tidyverse)

mpg %>% 
  filter(cyl == 4) %>%
  group_by(drv) %>%
  summarise(highwayMilesPG = mean(hwy)) %>%
  ggplot(aes(x = drv, y = highwayMilesPG)) +
  geom_bar(stat = "identity")

There’s only one word for it: it’s tidy! I like it!