New Year blog post

So my favourite productivity guru, David Allen, he of Getting Things Done fame, has a suggestion for a New Year review that you should carry out each year. His suggested structure can be found here.

I think this is an excellent idea so I have set myself a recurring reminder to do it each year, starting now. And I may as well blog it so I can find it again and anyone else who’s interested can have a read and maybe be inspired to do their own.

I had a bit of a rough year in 2017 and required surgery to remove my colon and spleen after a long battle with ulcerative colitis, so the 2017 bit is going to be a bit shorter than it will be next year. Still, I did achieve some stuff so let’s have a look.

The most visible thing I did this year was make some videos about Shiny. There are two courses, and they’re available to buy (or to watch with a subscription) at the links Getting Started with Shiny and UI Development with Shiny. Some angry people reviewing my book on Amazon clearly wanted it to include more advanced applications, so I’ll warn you here: the courses don’t feature any highly complex applications either. They’re more for people who are starting out. I have another one coming out soon called Advanced Shiny; I can’t link to it yet because it doesn’t exist. Again, that’s only moderately advanced- interacting with databases, JavaScript, adding a password, stuff like that. So don’t buy them or subscribe to the video service hoping for some really high-level stuff, because it’s not in there. I’d hate for you to feel like you wasted your money.

Anyway, I also started doing a lot of work with Shiny at work, where it’s taking off in a big way; when I talk about 2018 I’ll be saying that I’ve got big plans there, since my role is developing quite a lot based on what we’ve achieved already in 2017. I’ve also learned quite a lot about text analytics, with the hope of relaunching a Shiny application I wrote a few years back and currently maintain, with a lot of tools in it to allow the user to explore text-based data. I’ll say more about that next year when I’ve actually done it.

I also learned how to implement an API in PHP, which is pretty easy really once you know how, but it was cool to learn anyway.

I was in the right place at the right time at work, really, in terms of having the experience with Shiny that I do and being able to help with this developing project, so I’ve been quite lucky to be on board. But I also give myself some credit for reading the runes and training myself up in Shiny so I’d be ready to help with something like this. The skills I’ve acquired, both in Shiny programming and in running Linux servers hosting Shiny Server and the relevant databases, are suddenly very valuable to my organisation, so I feel I called this one correctly. My next prediction is text analysis: if I can learn that and do it really well over the next five years, there could be opportunities there.

In terms of my personal life, really it feels like all I did was be very, very ill with severe inflammatory bowel disease and then have surgery to correct it. That consumed a lot of my energy, really. I’ve now recovered and I’m back doing 10 mile runs at the weekend, which is great. I’ve actually very tentatively started writing a book about my experiences being ill, which I’ll talk about more if I actually get anywhere with it next year.

So in 2018 I’ll be developing my role at work, having much more input across the organisation, both in terms of Shiny, live analytics, dashboards and all that stuff, and in terms of statistics and analysing real datasets from healthcare services, which is where I started out, really, having finished my psychology PhD in 2008 looking at routinely collected data in psychiatric hospitals with mixed effects models. And as I mentioned, I’ll be improving one of my Shiny applications, adding a lot of tools to help users explore the text.

In my personal life I want to be a really good dad, since my kids got a bit of a raw deal with my being sick in 2017, and I’m going to run a marathon in less than four hours. I didn’t run for 3 years or so because of having liver disease and then bowel disease, so I really owe it to myself to get a nice marathon PB under my belt (current PB is 4:14). And I’m going to try to clear the decks a bit and get writing a book about my experiences being ill; a lot of people have told me that they think it would be good, so I’m going to have a go.

David Allen suggests some questions too. I missed out the ones for the year just gone because it was such a strange year, but let’s look at some for 2018.

What would you like to be your biggest triumph in 2018?
If I can make this new job role work then I’ll be really pleased because it’s a big important step up for me and I think it will be really valuable to the Trust. And I really want to run a sub 4 marathon. If I can do those two things, I’m happy.

What is the major effort you are planning to improve your financial results in 2018?
I’ve actually got a savings account now, after being hopeless with money in the past. I’m really trying to save up for things instead of impulsively buying them, and with a bit of luck I can treat myself to a beautiful Entroware laptop in 2018.

What major indulgence are you willing to experience in 2018?
Now the kids are a bit older I’d love to go on a skiing holiday with them. I love snowboarding and haven’t been for ages.

What would you most like to change about yourself in 2018?
Definitely to be better organised and less forgetful. I’m really trying hard at the moment to put reminders in my phone for everything- work stuff, buying presents, stuff for my kids, everything.

What is one as yet undeveloped talent you are willing to explore in 2018?
I’m really going to have to get into big-picture mode at work. I’ve had my head down learning the stack for processing patient experience data, but in 2018 I need to work much more widely- with, say, finance data, or HR, or clinical outcomes. I’m really looking forward to getting my teeth into that.

What brings you the most joy and how are you going to do or have more of that in 2018?
Running. Lots and lots and lots of lovely running. I’m making time to do that, already am.

Who or what, other than yourself, are you most committed to loving and serving in 2018?
My kids deserve a really good dad who’s not ill all the time, and that’s exactly what they’re going to get.

What one word would you like to have as your theme in 2018?
Health. Beautiful, glorious, wonderful health.

Gathering data (wide to long) in the tidyverse

I seem to have a pretty severe mental block about the gather() function from tidyr, so this is yet another post that, to be honest, is basically for me to refer to in 6 months when I forget all this stuff. So I’m going to address the mental block I have very specifically and show some code; hopefully it will help someone else out there.

So whenever I use gather I put the whole dataframe in. Say I’ve got ten variables. I whack the whole dataframe in and try to pull out just the ones I want using the select() notation at the end of the list of arguments, expecting everything else to just disappear. This DOES NOT MAKE ANY SENSE. Look:


library(tidyverse)

testData = tibble(ID = 1:10, Q1 = runif(10),
  Q2 = runif(10),
  Q3 = runif(10),
  Q4 = runif(10),
  Q5 = runif(10))

gather(testData, key = Question, value = Score, Q1, Q2, Q3)

This does not do what I want! I don’t know why I think it does! What do I think is going to happen to the Q4 and Q5 columns? They’re just going to magically go away?

I DON’T KNOW WHY I’M SO BAD AT THIS.

It only gathers the columns you name; everything else, Q4 and Q5 included, gets kept as identifier columns and repeated down the rows, and you just end up with a huge mess. The other thing to say is, and I have started to get the hang of this, but just in case: THE KEY AND VALUE ARGUMENTS YOU JUST MAKE UP. THEY ARE *NOT* RELATED TO THE NAMES OF THE DATAFRAME AT ALL.

What you actually do is select JUST THE VARIABLES YOU WANT first, and then decide whether there are any variables you want to keep but not gather. So, as a concrete example, let’s say you want to gather Q1 – Q3 and keep the ID column. You want the ID column in there, but you don’t want to GATHER it. So you put it in the select statement, but use -ID in the gather statement:


testData %>%
  select(ID : Q3) %>%
  gather(key = Question, value = Score, -ID)

# A tibble: 30 x 3
      ID Question      Score
   <int>    <chr>      <dbl>
 1     1       Q1 0.26001265
 2     2       Q1 0.34674771
 3     3       Q1 0.43080742
 4     4       Q1 0.28397929
 5     5       Q1 0.14545496
 6     6       Q1 0.63496928
 7     7       Q1 0.78777785
 8     8       Q1 0.44622476
 9     9       Q1 0.86785324
10    10       Q1 0.02611436
# ... with 20 more rows
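
Incidentally, just to prove the point above about the key and value arguments being made up: they really are pure invention. This (deliberately silly) version does exactly the same thing, just with dafter column names:


testData %>%
  select(ID : Q3) %>%
  gather(key = Banana, value = Pyjamas, -ID) # the new columns are Banana and Pyjamas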

Or if you don’t want the ID column (not doing anything useful in this particular, made up, case):


testData %>%
  select(Q1 : Q3) %>%
  gather(key = Question, value = Score, Q1 : Q3)

# A tibble: 30 x 2
   Question      Score
      <chr>      <dbl>
 1       Q1 0.26001265
 2       Q1 0.34674771
 3       Q1 0.43080742
 4       Q1 0.28397929
 5       Q1 0.14545496
 6       Q1 0.63496928
 7       Q1 0.78777785
 8       Q1 0.44622476
 9       Q1 0.86785324
10       Q1 0.02611436
# ... with 20 more rows

Note that if you don’t name any columns it will gather ALL of them anyway, so this is totally equivalent to:


testData %>%
  select(Q1 : Q3) %>%
  gather(key = Question, value = Score)

That’s it! As I said at the beginning of the post, I have no idea why I have such a ridiculous mental block about it; it’s all in the documentation, I just get the column references and the - notation and all that stuff mixed up (I think partly because using -ID KEEPS the ID variable, it just doesn’t GATHER it). It’s my fault for being an idiot, but the next time I get stuck I’ll read this and understand clearly 🙂

Oh yes, last thing: Q1 : Q3 just means “from Q1 to Q3”, meaning Q1, Q2, and Q3, and Q3 : Q5 would be Q3, Q4, Q5, etc. There are lots of ways to select the variables. See more at ?gather and ?select (which uses the same variable selection rules).

One neat trick is num_range(), which is a shortcut for selecting ranges of variables like Q1, Q2, Q3 or X1, X2, X3 and so on. You just give the prefix and the numbers you want:


testData %>%
  select(num_range("Q", 1:3)) %>%
  gather(key = Question, value = Score)
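
Another handy helper along the same lines is starts_with(), which selects every variable whose name begins with a given prefix. Note that in this made-up dataset that’s all five Q columns, so it’s only the right tool if you genuinely want the lot:


testData %>%
  select(starts_with("Q")) %>% # selects Q1 to Q5
  gather(key = Question, value = Score)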

Right, I’ll stop now, this post is getting too long.

Analysis tools for Manager Joe

I’m using someone else’s data today. It’s absolutely hideously laid out. I could munge it into R but it would take absolutely ages and I’m just not doing enough with it for that to be worth doing.

So I need to have a look at about 30 survey questions using the tools available to the Average Manager Joe- a spreadsheet and the “graph” button.

It’s a real eye opener. Everything takes ages, for one thing, and everything is so janky that I’m not even really sure if I’m drawing the right conclusion. I think the most worrying thing is that the effort involved is so high that I’m losing my curiosity- I’m just trying to get it done. I’m just churning out all this rubbish, giving it a quick eyeball and crashing on.

Why does that seem so familiar? Oh yes, that’s what I’ve always assumed people were doing when I read their reports. It’s a big problem, and we all know it: data is too difficult to make sense of, so people do it quickly, and wrongly. But I’m living it right now. And I have renewed purpose to make all MY data applications beautifully easy to use. Stay tuned…

[… time passes]

I’ve come back to this post. It’s no good. I can’t do it. I’m munging the data into R, even if it will take a little while. It just goes to show, it’s really hard to get away with not doing it properly.

Failure to produce pdf with RMarkdown tidyverse

I’m using the tidyverse for everything now, as I’ve mentioned in previous posts. When I want a cup of tea I just run:


house %>%
  filter(kitchen == 1) %>%
  select(tea, kettle) %>%
  infuse()

I just ran the following code in a vanilla RStudio setup with pdflatex installed:


---
title: "Test document"
author: "Chris Beeley"
date: "20 December 2017"
output: pdf_document
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```

```{r}

library(tidyverse)

# let's do some data stuff here...

```

This is the code that you get if you set up an RMarkdown document in RStudio with PDF as the output format, and then want to do some data stuff with the tidyverse package.

And it produced the following error message:

! Package inputenc Error: Unicode char √ (U+221A)
(inputenc) not set up for use with LaTeX.

See the inputenc package documentation for explanation.
Type H for immediate help.

l.145 \end{verbatim}

Try running pandoc with --latex-engine=xelatex.
pandoc: Error producing PDF
Error: pandoc document conversion failed with error 43
Execution halted

I was a bit confused by this for quite a while; the answer, of course, turns out to be the lovely startup messages that the tidyverse produces on loading, with a little tick next to each package name.

With the default message = TRUE behaviour in the code chunk, those messages end up in the document, and pandoc tries to render the little ticks in LaTeX. Evidently pdflatex isn’t set up for that Unicode character.

So the document fails, and it’s hard to understand why until you knit to HTML and see the little ticks.

Changing the knitr::opts_chunk$set(echo = TRUE) line to knitr::opts_chunk$set(echo = TRUE, message = FALSE) fixes the problem.
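
For the record, the fixed setup chunk just looks like the following. You could equally leave the global options alone and set message = FALSE on just the chunk that loads the tidyverse, or wrap the call in suppressPackageStartupMessages(); any of these works:


```{r setup, include=FALSE}
# message = FALSE stops package startup messages (and their
# unicode ticks) from reaching pandoc
knitr::opts_chunk$set(echo = TRUE, message = FALSE)
```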

I can’t help thinking that this is a rare example of R getting harder to use. When I started with R 10 years ago it was much more difficult to do even simple things like load a CSV file or work with dates. These days there are lots of lovely packages to help, and of course RStudio itself makes using R much more intuitive. But this is going to confuse newbies, I think, which is a bit of a shame.

There are several obvious fixes, and I won’t bother to list them all, but making message = FALSE the default in RMarkdown documents in RStudio seems like the best one; then again, maybe there’s some reason they don’t want to do that.

Font size of code in .Rpres presentations

I don’t know if I even knew about the .Rpres presentation feature in RStudio (v. 0.98 and above). As I think I mentioned, I’ve been rather ill for the last couple of years and I’m afraid I kind of fell out of touch with things a bit. Anyway, I’m all better now, and I’m going to be giving a talk at the R User Group in Nottingham (which I love profoundly), so I thought I’d do it this sexy new way.

It seems pretty handy. I haven’t made the whole presentation yet, so I’m sure there’s more to come, but the first thing is: dang! The code in an echo = TRUE chunk is really large! I can’t fit any output on the page!

So I found this guide to making it smaller, along with lots of other nice tweaks.
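
The gist, if I’ve understood it correctly, is that .Rpres files will take embedded CSS, so you define a class that shrinks the code font and apply it to individual slides (small-code is just a name I’ve picked, call it whatever you like):


<style>
.small-code pre code {
  font-size: 1em;
}
</style>

Slide with smaller code
========================================================
class: small-code

```{r, echo = TRUE}
summary(cars)
```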

Better Git commit messages

Something else I’m trying to get better at is using Git. I did use it, briefly, a few years back, but I never quite got the hang of it and I reverted to the bad habit of having MainCode/ and TestingCode/ and TryNewFunction/ folders filled with near-identical code.

So I’m back on the Git wagon again. Atom (see my previous blog post) has beautiful Git integration, as you’d expect, since it was built by the GitHub people. It also enforces a couple of conventions with writing Git commit messages, which inspired a Google search that led me to this, a guide to writing better commit messages.

I never even thought about the art of it, but, of course, like code comments, good commit messages are essential for collaborating with anyone, even your future self.
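
From what I’ve read, the core of the advice is a short summary line written in the imperative mood, then a blank line, then a body that explains why the change was made rather than just what changed. So something like this (an invented example):


Fix unicode error when knitting to PDF

Set message = FALSE in the setup chunk so the tidyverse startup
messages, which contain tick characters that pdflatex chokes on,
never reach pandoc.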

Ellen Townsend: Small talk saves lives — IMH Blog (Nottingham)

It sounds much too simple doesn’t it? Making small talk could save a life. But the truth is, it really could. Today SHRG is supporting the campaign launched by the Samaritans. They are asking us all to be courageous and strike up a conversation with someone if we are worried about them at a railway […]

via Ellen Townsend: Small talk saves lives — IMH Blog (Nottingham)

Filtering data straight into a plot with tidyverse

I’m still trying to go full tidyverse, as I believe I mentioned a while back. It’s clearly a highly useful approach, and on top of that I see a load of code in blogs and tutorials that uses a tidy approach, so unless I learn it I’m not going to have a lot of luck reading it. I saw somebody do the following a little while back and I really liked it, so I thought I’d share it.

In days gone by I would draw lots of graphs in an RMarkdown document like this:


firstFilteredDataset = subset(wholeData, 
  Date > as.Date("2017-04-01"))

ggplot(firstFilteredDataset, 
  aes(x = X1, y = y)) + geom_... etc.

secondFilteredDataset = subset(wholeData, 
  Date > as.Date("2015-01-01"))

ggplot(secondFilteredDataset, 
  aes(x = X1, y = y)) + geom_... etc.

thirdFilteredDataset = ... etc.

It’s fine, there’s nothing wrong with doing that, really. The two drawbacks are firstly that the code looks a bit ungainly, creating lots of objects that are used once and then forgotten about, and secondly that it fills your RAM with data. That’s not really a problem on my main box, which has 16GB of RAM, but it’s a bad habit and you may come unstuck somewhere where RAM is more limited- for example, when you’re running code on a server.

So I saw some code on the internet the other day where they just piped data straight from a dplyr filter statement into a ggplot instruction. No muss, no fuss: the data is defined in the same pipeline in which it’s used, and you’re not making loads of objects and leaving them lying around. So here’s an example:


library(tidyverse)

mpg %>% 
  filter(cyl == 4) %>%
  group_by(drv) %>%
  summarise(highwayMilesPG = mean(hwy)) %>%
  ggplot(aes(x = drv, y = highwayMilesPG)) +
  geom_bar(stat = "identity")
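
A quick note on that last line: geom_bar(stat = "identity") plots the values as they are rather than counting rows, and in recent versions of ggplot2 geom_col() does the same job more directly.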

There’s only one word for it- it’s tidy! I like it!

One editor to rule them all- Atom

I’m very happy using RStudio for all my R code. It goes without saying that the support for R coding built into RStudio is phenomenal. If you don’t know loads of cool stuff RStudio does, you’re missing out, but that’s a blog post on its own.

I’ve never quite been happy with my choice for other general editing, though. Sometimes I write PHP, HTML, markdown, Python, or something else, and I’ve never really found an editor that I love. Geany is pretty good and that’s what I have been using when I write PHP or HTML. I tended to write markdown in RStudio, which is kind of stupid, since RStudio is an awfully big hammer to crack that nut, but it does support markdown and I’m familiar with RStudio, so I was happy enough doing that. I never really found a Python IDE that I loved. As far as I can tell there isn’t really an RStudio equivalent in the Python world, something so well featured and brilliant that it’s really the only choice unless you have a very particular reason to use something else.

So about a year ago I gave Atom a try. It had been out of beta for about a year by that point. I don’t really remember it clearly now, but it seemed a bit clunky and I rapidly gave up (to be fair, this may just have been me being thick; I’ve no idea how much it has really improved since). It just didn’t grab me. But I kept seeing it mentioned everywhere, so I thought I’d give it another go.

This time I was hooked straight away. It’s described as a “hackable editor for the 21st Century”, and that’s its real strength. The actual interface is very clean and simple, no bells and whistles, but it comes bundled with some plugins, and there’s a thriving ecosystem of user-contributed packages that can make Atom, it seems so far, anything you want it to be.

I think I love Atom for the same reason I love R. It has a big ecosystem of packages around it, and whatever problem you want to solve, as Apple almost said of the iPhone, “there’s a package for that”.

Your needs will be different from mine, of course, but I recommend you give Atom a try if you haven’t already. It supports Markdown preview out of the box. So far I have installed two packages- platformio-ide-terminal, and script. Platformio-ide-terminal allows you to spawn a terminal underneath your code window, which I have been mainly using to run pandoc on my markdown files. Script will run your code for you (sections, the whole thing, etc.) all with a shortcut key. So far I’ve been using that for testing Python scripts. Oh yes, and the markdown editor supports word completion out of the box too, not code, just normal words, which is more useful than it sounds.

While I’ve been Googling Atom to find the links to put in this post I have found two really cool things that I didn’t know about. Firstly, there is the Hydrogen package, which allows Jupyter like functionality in Atom. If you don’t know what Jupyter is, you should find out, but essentially it allows you to weave together your code and output, just like you can with RMarkdown.

And secondly Atom themselves have just released teletype which is a tool that allows collaboration on code files right inside the Atom editor. I don’t really need to do that, not that I can think of anyway, but you have to admit it’s pretty awesome. They’ve solved a lot of the problems with code collaboration elsewhere, as well, have a look at the blog post for more details.

So go give Atom a try. I’ll try to post any more Atom-related awesomeness that I see on my travels.

Lazy tables with R and pander

One of the many things I love about R is how gloriously lazy it can help you to be. I’m writing a report at the moment and I need to make lots of tables in R Markdown. I need them to be proportions, expressed as percentages, rounded to 0 decimal places, and I need to add “(%)” to each label in the table. That’s a lot of code when you’ve got 8 or 10 tables to draw, so I just made a function that does it. It takes two arguments: the variable you want tabulated, and the order in which you want the categories. I need to specify the order manually because the default alphabetical ordering doesn’t work with all of the data I want, as in the example here. Without manual ordering, the “Less than one year” category appears at the end.

Here’s a minimal example:


library(pander)

niceTable = function(x, y) {
  
  # tabulate x with its categories in the order given by y,
  # then convert to percentages rounded to 0 decimal places
  tempTable = round(
    prop.table(
      table(
        factor(x, 
               levels = y)
      )
    ) * 100, 0)
  
  # add "(%)" to each category label
  names(tempTable) = paste0(names(tempTable), " (%)")
  
  # print as a markdown table
  pandoc.table(tempTable)
  
}

a = c(rep("Less than one year", 3), rep("1 - 5 years", 4), 
    rep("5 - 10 years", 2))

niceTable(a, c("Less than one year", 
    "1 - 5 years", "5 - 10 years"))

Boom! Instant laziness. Unless my boss is reading, in which case it’s efficiency 🙂