Jake McCrary

Put the Last Command’s Run Time in Your Bash Prompt


I’m fairly certain the following scenario has happened to every terminal user. You run a command and, while it is running, realize you should have prefixed it with time. You momentarily struggle with the thought of killing the command and rerunning it with time. You decide not to, and the command finishes without you knowing how long it took. You debate running it again.

For the last year I’ve lived in a world without this problem. Upon completion, a command’s approximate run time is displayed in my prompt. It is awesome.

Overview

Most of the code below is from a post on Stack Overflow. It has been slightly modified to support having multiple commands in your $PROMPT_COMMAND variable. Below is a minimal snippet that could be included in your .bashrc.

function timer_start {
  timer=${timer:-$SECONDS}
}

function timer_stop {
  timer_show=$(($SECONDS - $timer))
  unset timer
}

trap 'timer_start' DEBUG

if [ "$PROMPT_COMMAND" == "" ]; then
  PROMPT_COMMAND="timer_stop"
else
  PROMPT_COMMAND="$PROMPT_COMMAND; timer_stop"
fi

PS1='[last: ${timer_show}s][\w]$ '

Modify your .bashrc to include the above and you’ll have a prompt that looks like the image below. It is a minimal prompt but it includes the time spent on the last command. This is great. No more wondering how long a command took.

Example of prompt

The details

timer_start is a function that leaves timer at its current value or, if timer is unset, sets it to the value of $SECONDS. $SECONDS is a special variable that contains the number of seconds since the shell was started. timer_start is invoked before every simple command as a result of trap 'timer_start' DEBUG.

timer_stop calculates the difference between $SECONDS and timer and stores it in timer_show. It also unsets timer. Next time timer_start is invoked timer will be set to the current value of $SECONDS. Because timer_stop is part of the $PROMPT_COMMAND it is executed prior to the prompt being printed.

It is the interaction between timer_start and timer_stop that captures the run time of commands. It is important that timer_stop is the last command in the $PROMPT_COMMAND. If there are other commands after it then those will be executed and their execution might cause timer_start to be called. This results in you timing the length of time between the prior and current prompts being printed.
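A quick sketch of that ordering pitfall (other_command is a placeholder for anything else you run from $PROMPT_COMMAND):

# good: timer_stop runs last, right before the prompt is drawn
PROMPT_COMMAND="other_command; timer_stop"

# bad: other_command fires the DEBUG trap after timer_stop has unset timer,
# so timer is re-armed at prompt time and you end up timing the gap
# between prompts instead of your last command
PROMPT_COMMAND="timer_stop; other_command"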

My prompt

My prompt is a bit more complicated. It shows the last exit code, last run time, time of day, directory, and git information. The run time of the last command is one of the more useful parts of my prompt. I highly recommend you add it to yours.

My prompt

Errata

2015/05/04

Gary Fredericks noticed that the original code sample broke if you didn’t already have something set as your $PROMPT_COMMAND. I’ve updated the original snippet to reflect his changes.

Quieter clojure.test Output


If you use clojure.test then there is a good chance you’ve been annoyed by all the output when you run your tests in the terminal. When there is a test failure you have to scroll through pages of output to find the error.

With release 0.9.0 of lein-test-refresh you can minimize the output of clojure.test and only see failure and summary messages. To enable this feature add :quiet true to the :test-refresh configuration map in either your project.clj or profiles.clj file. If you configure lein-test-refresh in ~/.lein/profiles.clj then turning on this feature looks like the following. 1

{:user {:plugins [[com.jakemccrary/lein-test-refresh "0.9.0"]]
        :test-refresh {:quiet true}}}

Setting up your profiles.clj like above allows you to move to a Clojure project in your terminal, run lein test-refresh, and have your clojure.test tests run whenever a file changes. In addition, your terminal won’t show the usual Testing a.namespace output.
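If you would rather enable quiet mode for a single project, a minimal sketch of a project.clj (the project name and version are illustrative):

(defproject my-app "0.1.0-SNAPSHOT"
  :plugins [[com.jakemccrary/lein-test-refresh "0.9.0"]]
  :test-refresh {:quiet true})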

Below is what you typically see when running clojure.test tests in a terminal. I had to cut most of the Testing a.namespace messages from the picture.

Normal view of test output

The following picture is with quiet mode turned on in lein-test-refresh. No more Testing a.namespace messages! No more scrolling through all your namespaces to find the failure!

Minimal output in console

I just released this feature, so I haven’t had a chance to use it much yet. I imagine it may evolve to change the output more.


  1. More configuration options can be found here

Making Tmate and Tmux Play Nice With OS X Terminal-notifier


For nearly the last two years, I’ve been doing most of my development in OS X. Most of that development has been done in Clojure and, whenever possible, using lein-test-refresh with terminal-notifier to have my tests automatically run and a notification shown with the status of the test run. It’s a great workflow that gives me a quick feedback cycle and doesn’t pull my attention in different directions.

Recently I’ve started using tmate for remote pairing. Unfortunately, when I first started using it my quick feedback cycle was broken. lein test-refresh would run my tests but would hang when sending a notification using terminal-notifier. This was terrible and, if I hadn’t been able to fix it, would have stopped me from using tmate. After some searching I stumbled across this GitHub issue, which helped solve the problem.

To make tmate work nicely with terminal-notifier you’ll need to install reattach-to-user-namespace and change your tmate configuration to use it. If you use brew you can install by running brew install --with-wrap-pbcopy-and-pbpaste reattach-to-user-namespace. Then open your .tmux.conf or .tmate.conf file and add the line below.

set-option -g default-command "which reattach-to-user-namespace > /dev/null && reattach-to-user-namespace -l $SHELL || $SHELL"

The above tells tmate to use reattach-to-user-namespace if it is available. Now terminal-notifier no longer hangs when invoked inside tmate. Unsurprisingly, this change also makes tmux play nice with terminal-notifier.
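For reference, here is a sketch of how terminal-notifier gets invoked in my setup: lein-test-refresh runs the configured :notify-command with the test summary appended as the final argument (the same configuration appears in my profiles.clj post):

{:user {:plugins [[com.jakemccrary/lein-test-refresh "0.9.0"]]
        :test-refresh {:notify-command ["terminal-notifier" "-title" "Tests" "-message"]}}}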

My Home Work Space


I’ve been working remotely for about a year and a half. In that time, I’ve worked from many locations, but most of my time has been spent working from my apartment in Chicago. During this time I’ve tweaked my environment by building a standing desk, building a keyboard, and changing my monitor stands. Below is my desk (click for a larger image).

My Desk

The Desk

I built my own desk using the Gerton table top from Ikea and the S2S Height Adjustable Desk Base from Ergoprise. I originally received a defective part from Ergoprise and after a couple emails I was sent a replacement part. Once I had working parts, attaching the legs to the table top was straightforward. The desk legs let me adjust the height of my desk so I can be sitting or standing comfortably.

The Monitors

I have two 27 inch Apple Cinema displays that are usually connected to a 15 inch MacBook Pro. The picture doesn’t show it, but I actively use all the monitors.

My laptop is raised by a mStand Laptop Stand. While I’m sitting this stand puts the laptop at a comfortable height. I highly recommend getting one.

The middle monitor, the one I use the most, has had the standard stand (you can see it in the right monitor) replaced with an ErgoTech Freedom Arm. This lets me raise the monitor to a comfortable height when I’m standing (as seen in this picture). It also allows me to rotate the monitor vertically, though I have only done that once since installing it. Installation of the arm wasn’t trivial, but it wasn’t that difficult.

I’ve been using the arm for four months now and I’m enjoying it. If you bump the desk the monitor does wobble a bit but I don’t notice it while I’m typing. I haven’t noticed any slippage; the monitor arm seems to hold the monitor in place.

I’ve decided against getting a second arm for my other monitor. Installing the monitor arm renders your monitor non-portable. It doesn’t happen often, but sometimes I travel and stay at a place for long enough that I want to bring a large monitor.

The Chair

My desk chair is a Herman Miller Setu. It is a very comfortable chair that boasts only a single adjustment: you can raise or lower it.

I moved to this chair from a Herman Miller Aeron. The Aeron had been my primary chair for eight years prior to me buying the Setu.

They are both great chairs. I haven’t missed the extreme amount of customization the Aeron provides; it’s actually nice having fewer knobs to tweak. I also find the Setu more visually appealing. The Aeron is sort of a giant black monster of a chair; I prefer seeing the chartreuse Setu in my apartment.

The Keyboard and Mouse

I built my own keyboard. It is an ErgoDox with Cherry MX Blue key switches and DSA key caps. More details about my build can be found in an earlier post.

I’ve been using this keyboard for about eight months. It has been rock solid. This is my first keyboard that has mechanical switches. They are nice. It feels great typing on this keyboard.

The ErgoDox has six keys for each thumb. I originally thought I’d be using the thumb clusters a lot but, in practice, I only actively use two or three keys per thumb.

The ErgoDox also supports having multiple layers. This means that with the press of a key I can have an entirely different keyboard beneath my fingertips. It turns out this is another feature I don’t frequently use. I really only use layers for controlling my music playback through media keys and for hitting function keys.

If I were going to build a keyboard again I would not use Cherry MX Blues as the key switch. They are very satisfying to use but they are loud. You can hear me type in every room of my one-bedroom apartment. When I’m remote pairing with other developers, they can hear me type through my microphone.

For my mouse I use Apple’s Magic Trackpad. I have problems doing precise mouse work with it (though I rarely find myself needing that), but I really enjoy the gestures it enables. I’ve been using one of these trackpads for years now. I really don’t want to go back to using a mouse.

Other Items

I’m a fan of using pens and paper to keep track of notes. My tools of choice are a Leuchtturm Whitelines notebook with dotted paper and a TWSBI 580 fountain pen with a fine nib. I’ve been using fountain pens1 for a couple years now and find them much more enjoyable to use than other pen styles. The way you glide across the page is amazing. I usually have my pen inked with Noodler’s 54th Massachusetts. The ink is a beautiful blue-black color and very permanent.

No desk is complete without a few fun desk toys. My set of toys includes a bobble head of myself (this was a gift from a good friend), a 3d printed Success Kid, a keyboard switch sampler, a few more 3d printed objects, and some climbing related hand toys.

End

That pretty much covers my physical work space. I’ve tweaked it to the point where I don’t feel the need to experiment anymore. The monitor arm is my most recent addition and it really helped bring my environment to the next level. I think I’ll have a hard time improving my physical setup.


  1. If you want to try out fountain pens I highly recommend the Pilot Metropolitan. It is widely regarded as the best introduction to fountain pens; its medium nib is about the same width as my TWSBI’s fine nib. Another great starter pen (one that includes a smiling face on the nib) is the Pilot Kakuno.

Advanced Leiningen Checkouts: Configuring What Ends Up on Your Classpath


Leiningen checkout dependencies are a useful feature. Checkout dependencies allow you to work on a library and a consuming project at the same time. By setting up checkout dependencies you can skip running lein install in the library project; the library appears on the classpath of the consuming project. An example of what this looks like can be found in the Leiningen documentation or in a previous post of mine.
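For reference, a checkout dependency is nothing more than a symlink inside a checkouts directory at the root of the consuming project (the paths here are illustrative):

$ cd ~/src/consuming-project
$ mkdir checkouts
$ ln -s ~/src/library checkouts/library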

By default, Leiningen adds the :source-paths, :test-paths, :resource-paths, and :compile-path directories of the checkout projects to your consuming project’s classpath. It also recurses and adds the checkouts of your checkouts (and keeps recursing).

You can override what gets added to your classpath by adding :checkout-deps-shares to your project.clj. This key’s value should be a vector of functions that, when applied to your checkouts’ project maps, return the paths that should be included on the classpath. The default values can be found here and an example of overriding the default behavior can be found in the sample.project.clj.

I ran into a situation this week where having my checkouts’ :test-paths on the classpath caused issues in my consuming project. My first pass at fixing this problem was to add :checkout-deps-shares [:source-paths :resource-paths :compile-path] to my project.clj. This didn’t work. My project.clj looked like the following.

(defproject example "1.2.3-SNAPSHOT"
  :dependencies [[library "1.2.2"]
                 [org.clojure/clojure "1.6.0"]]
  :checkout-deps-shares [:source-paths :resource-paths :compile-path])

Why didn’t it work? It didn’t work because of how Leiningen merges duplicate keys in the project map. When Leiningen merges the various configuration maps (from merging profiles, merging defaults, etc.) and encounters values that are collections, it combines them (more details can be found in the documentation). Using lein pprint :checkout-deps-shares shows what we end up with.

$ lein pprint :checkout-deps-shares
(:source-paths
 :resource-paths
 :compile-path
 :source-paths
 :test-paths
 :resource-paths
 :compile-path
 #<Var@43e3a075:
   #<classpath$checkout_deps_paths leiningen.core.classpath$checkout_deps_paths@6761b44b>>)

We’ve ended up with the default values and the values we specified in the project.clj. This isn’t hard to fix. To tell Leiningen to replace the value instead of merging you add the ^:replace metadata to the value. Below is the same project.clj but with ^:replace added.

(defproject example "1.2.3-SNAPSHOT"
  :dependencies [[library "1.2.2"]
                 [org.clojure/clojure "1.6.0"]]
  :checkout-deps-shares ^:replace [:source-paths :resource-paths :compile-path])

This solves the problem of :test-paths showing up on the classpath but it introduces another problem. Checkouts’ checkout dependencies no longer show up on the classpath. This is because leiningen.core.classpath/checkout-deps-paths is no longer applied to the checkouts.

Without leiningen.core.classpath/checkout-deps-paths Leiningen stops recursing and, as a result, no longer picks up checkouts’ checkout dependencies. My first attempt at fixing this was to modify my project.clj so the :checkout-deps-shares section looked like below.

:checkout-deps-shares ^:replace [:source-paths :resource-paths :compile-path
                                 leiningen.core.classpath/checkout-deps-paths]

The above fails. It runs but doesn’t actually add the correct directories to the classpath. The next attempt is below.

:checkout-deps-shares ^:replace [:source-paths :resource-paths :compile-path
                                 #'leiningen.core.classpath/checkout-deps-paths]

This attempt failed faster. Now an exception is thrown when trying to run Leiningen tasks.

The next one works. It takes advantage of dynamic evaluation through read-eval syntax. With the snippet below, the checkouts’ checkouts are added to the classpath.

:checkout-deps-shares ^:replace [:source-paths :resource-paths :compile-path
                                 #=(eval leiningen.core.classpath/checkout-deps-paths)]

Hopefully this is useful to someone else. It took a bit of digging to figure out and many incorrect attempts to get right. The full example project.clj is below.

(defproject example "1.2.3-SNAPSHOT"
  :dependencies [[library "1.2.2"]
                 [org.clojure/clojure "1.6.0"]]
  :checkout-deps-shares ^:replace [:source-paths :resource-paths :compile-path
                                   #=(eval leiningen.core.classpath/checkout-deps-paths)])

Remote Pairing


Over a year ago I joined Outpace. All of Outpace’s developers are remote but we still practice pair programming. As a result I’ve done a lot of remote pairing. I was skeptical before joining that it would work well and I’m happy to report that I was wrong. Remote pairing works.

Why remote pairing?

The usual pair programming benefits apply to remote pairing: more people know the code, quality is higher, and it provides an opportunity for mentorship. Another benefit, one that matters more in a remote setting, is that it increases social interaction.

The most common response I receive when I tell someone I work from my apartment is “I’d miss the interaction with co-workers.” When you work remotely you do miss out on the usual in-office interaction. Pair programming helps replace some of this. It helps you build and maintain relationships with your remote colleagues.

Communication

Communication is an important part of pair programming. When you’re pairing in person you use both physical and vocal communication. When remote pairing you primarily use vocal communication. You can pick up on some physical cues with video chat but it is hard. You will never notice your pair reaching for their keyboard.

I’ve used Google Hangouts, Zoom, and Skype for communication. Currently I’m primarily using Zoom. It offers high quality video and audio and usually doesn’t consume too many resources.

I recommend not using your computer’s built-in microphone. You should use headphones with a mic or a directional microphone. You’ll sound better, and you’ll stop your pair from hearing themselves through your computer.

I use these headphones. They are cheap, light, and open-eared, but wired. I’ve been told I sound the best when I’m using them. I also own these wireless headphones. They are closed-ear, heavier, and wireless. The wireless is great, but the closed-ear design causes me to talk differently, and by the end of the day my voice is hoarse. Both of these headphones are widely used by my colleagues and I don’t think you can go wrong with either one.

Some people don’t like wearing headphones all day. If you are one of those I’d recommend picking up a directional microphone. Many of my colleagues use a Snowball.

Connecting the machines

So now you can communicate with your pair. It is time to deal with the main problem in remote pairing. How do you actually work on the same code with someone across the world?

At Outpace we’ve somewhat cheated and have standardized our development hardware. Everyone has a computer running OS X and, if they want it, at least one 27 inch monitor (mostly Apple 27 inch displays or a Dell) with a resolution of 2560x1440. Since everyone has nearly identical hardware and software, we are able to pair using OS X’s built-in screen sharing. This allows full sharing of the host’s desktop and is the best way to emulate working physically next to your pair. It enables the use of any editor and lets you both look at the same browser windows (useful for testing UIs or reading reference material). With decent internet connections both programmers can write code with minimal lag. This is my preferred way of pairing.

Another option that works well is tmate. tmate is a fork of tmux that makes remote pairing easy. It makes it dead simple to have a remote developer connect to your machine and share your terminal. This means you are stuck using tools that work in a terminal and, if you are working on a user interface, you need to share it some other way. There is generally less lag when the remote developer is typing.
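Starting a shared session takes only a couple of commands (this assumes tmate is already installed):

$ tmate                 # start a shared terminal session and print connection info
$ tmate show-messages   # re-print the ssh and web addresses to send to your pair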

A third option is to have the host programmer share their screen using the screen sharing built into Google Hangouts or Zoom. This is a quick way to share a screen and is my preferred way of sharing GUIs with more than one other person. With both Zoom and Google Hangouts the remote developer can control the host’s machine, but it isn’t a great experience. If you are pairing this way the remote developer rarely touches the keyboard.

Soloing

It might seem weird to have a section on soloing in an article about remote pairing. Soloing happens and even in an environment that almost entirely pairs it is important. Not everyone can or wants to pair 100% of the time. Soloing can be recharging. It is important to be self-aware and recognize if you need solo time. Below are a few tips for getting that solo time.

One way to introduce solo time is to take your lunch at a different time than your pair. This provides both you and your pair with an opportunity to do a bit of soloing.

Other short soloing opportunities happen because of meetings and interviews. It isn’t uncommon for half of a pair to leave for a bit to join a meeting, give an interview, or jump over to help out another developer for a bit.

Soloing also happens as a result of uneven team numbers. If your team is odd-numbered then there are plenty of opportunities for being a solo developer. Try to volunteer to be the solo developer, but be aware of becoming too isolated.

Conclusion

Remote pairing works. Working at Outpace has shown me how well it can work. With the right people, modern technology almost makes it feel as if your pair is in the same room as you.

Overview of My Leiningen profiles.clj


Leiningen, a Clojure build tool, has the concept of profiles. One thing profiles are useful for is allowing you to have development tools available to a project without having them as dependencies when you release your project. An example of when you might want to do this is when you are using a testing library like expectations.
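As a sketch of that idea, a project-level :dev profile keeps a testing library off the classpath of the released artifact (the project name and expectations version here are illustrative):

(defproject example "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.6.0"]]
  :profiles {:dev {:dependencies [[expectations "2.0.9"]]}})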

Some development tools, such as lein-test-refresh, are useful to have across most of your Clojure projects. Rather nicely, Leiningen supports adding global profiles to ~/.lein/profiles.clj. These profiles are available in all your projects.

Below is most of my profiles.clj. I’ve removed some sensitive settings and what is left are the development tools that I find useful.

Entire :user profile
{:user {:plugin-repositories [["private-plugins" {:url "private repo url"}]]
        :dependencies [[pjstadig/humane-test-output "0.6.0"]]
        :injections [(require 'pjstadig.humane-test-output)
                     (pjstadig.humane-test-output/activate!)]
        :plugins [[cider/cider-nrepl "0.8.2"]
                  [refactor-nrepl "0.2.2"]
                  [com.jakemccrary/lein-test-refresh "0.5.5"]
                  [lein-autoexpect "1.4.2"]
                  [lein-ancient "0.5.5"]
                  [jonase/eastwood "0.2.1"]
                  [lein-kibit "0.0.8"]
                  [lein-pprint "1.1.2"]]
        :test-refresh {:notify-command ["terminal-notifier" "-title" "Tests" "-message"]}}}

:plugin-repositories [["private-plugins" {:url "private repo url"}]] sets a private plugin repository. This allows me to use Outpace’s private Leiningen templates for setting up new projects for work.

The next few lines are all related. They set up humane-test-output. humane-test-output makes clojure.test output more readable, which makes using clojure.test much more enjoyable. I highly recommend it. Sample output can be found in my Comparing Clojure Testing Libraries post.

humane-test-output setup in the :user profile
:dependencies [[pjstadig/humane-test-output "0.6.0"]]
:injections [(require 'pjstadig.humane-test-output)
             (pjstadig.humane-test-output/activate!)]

Next we get to my :plugins section. This is the bulk of my profiles.clj.

:plugins section of my :user profile
:plugins [[cider/cider-nrepl "0.8.2"]
          [refactor-nrepl "0.2.2"]
          [com.jakemccrary/lein-test-refresh "0.5.5"]
          [lein-autoexpect "1.4.2"]
          [lein-ancient "0.5.5"]
          [jonase/eastwood "0.2.1"]
          [lein-kibit "0.0.8"]
          [lein-pprint "1.1.2"]]

The first entry is for cider/cider-nrepl. I write Clojure using Emacs and CIDER and much of CIDER’s functionality exists in nrepl middleware found in cider/cider-nrepl. This dependency is required for me to be effective while writing Clojure.

refactor-nrepl is next. clj-refactor.el requires it for some refactorings. I don’t actually use any of the refactorings that need it (I only use the move to let, extract to let, and introduce let refactorings), but I still keep it around.

com.jakemccrary/lein-test-refresh is next. This lets me use lein-test-refresh globally. lein-test-refresh runs your clojure.test tests whenever a file changes in your project. This is another key development tool in my process.

Up next is lein-autoexpect. It was the first Leiningen plugin I wrote and it enables continuous testing with expectations.

Both lein-autoexpect and lein-test-refresh are projects I created and maintain. Writing lein-autoexpect was my first exposure to continuous testing and it changed how I develop code. I find it frustrating to develop without such a tool.

Next up is lein-ancient. It checks your project.clj for outdated dependencies and plugins. It isn’t something that gets used every day but it is super useful when you need it.
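Checking a project is a single command run from the project’s directory:

$ lein ancient   # reports any dependencies with newer versions available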

The next two entries are for jonase/eastwood and lein-kibit. They are both tools that look at your Clojure code and report common mistakes. I don’t use either consistently but I do find them useful. I’ve found bugs with eastwood.

The final plugin is lein-pprint. lein-pprint prints out your project map. It is useful for trying to grasp what is going on when messing around with various Leiningen options.

The final part of my profiles.clj, seen below, is configuration for lein-test-refresh. It configures lein-test-refresh to use terminal-notifier to notify me when my tests pass or fail. Using a continuous tester that allows flexible notification is useful. Not having to glance at a terminal to see if your tests are passing or failing is great.

:test-refresh {:notify-command ["terminal-notifier" "-title" "Tests" "-message"]}

That is my ~/.lein/profiles.clj. I don’t think it contains anything mind-blowing, but it definitely contains a useful collection of Clojure development tools. I encourage you to check them out and to think about what tools you should be putting into your global :user profile.

Reading in 2014


At the beginning of last year I took some time and reviewed my 2013 reading using Clojure and Incanter to generate some stats. It was a useful exercise to reflect back on my reading and play around with Incanter again.

Over the last couple of weeks I’ve taken a similar look at my 2014 reading. The rest of this post highlights some of the top books from the previous year and then posts some numbers at the end.

I review every book I read using Goodreads; if you want to see more of what I’ve been reading you can find me here. I’ve found the practice of tracking and reviewing every book extremely rewarding.

2014 Goals

I entered 2014 without a volume goal; unlike 2013, I didn’t have a page or book count target. I entered the year with the desire to reread two specific books and the nebulous goal of reading more non-fiction.

2014 Results

I ended up setting a new volume record. I read 69 books for a total of almost 23,000 pages. I also read every week of Day One, a weekly literary journal containing one short story and one poem from new authors. This doesn’t count towards my page or book count but is reading I enjoy. It exposes me to many different styles.

More than a third of my reading was non-fiction. I don’t have numbers for 2013 but that feels like an increase. I consider my goal of reading more non-fiction achieved.

I also reread the two books I had planned on rereading. I wanted to reread Infinite Jest and Hard-Boiled Wonderland and the End of the World and succeeded in rereading both of them.

Recommendations

I awarded seven books a five out of five star rating. I’ve listed them below (in no particular order). I’d recommend each book without hesitation. Instead of reworking or copying my previous reviews I’ve provided links to Goodreads. The titles link to Amazon.

I’m recommending a specific translation of Meditations. I attempted to read a different one first and it was so painful to read that I ended up giving up. The linked translation is modern and contains a useful foreword giving you background information on the time.

I only read one series this year but it was a good one. The Magicians, by Lev Grossman, was recommended by a friend who described it as “Harry Potter but with characters battling depression.” I’m not sure that fully captures the feel of the series but it is a start. The series introduces you to a world like our own but with magic. You follow cynical, self-absorbed students as they attend school, graduate, and grow up living in both the magical and non-magical world. The first book in the series is the weakest so if you read that and find it enjoyable you should definitely pick up the next two books.

2015 Goals

2015 isn’t going to have an easily measured goal. I don’t feel the need to set book or page count goals anymore. Instead, I’m hoping to increase the quality of my reading. This is a pretty unclear goal. To me it doesn’t mean increasing the average rating of the books I read; rather, I want to get more out of what I read and think a bit deeper about the subjects I’m reading.

2014 Measurements

Below are some random measurements that are probably only interesting to me.

This year I recorded the format of the books I read. This was the year of the ebook; over 90% of the books I read were electronic. I’d guess that this is a higher percentage of ebooks than previous years. I wish I had recorded the formats read in previous years.

| Binding   | Number of books |
|-----------+-----------------|
| Hardcover |               1 |
| Paperback |               4 |
| Kindle    |              64 |

My average rating has been going down over the last four years.

| Year | Average Rating |
|------+----------------|
| 2011 | 3.84           |
| 2012 | 3.66           |
| 2013 | 3.67           |
| 2014 | 3.48           |

In 2014, three authors made up nearly 25% of my reading (by page count). The top six authors by page count are below.

| Author               | My Average Rating | Number of Books | Number of Pages | Percent of Total Page Count |
|----------------------+-------------------+-----------------+-----------------+-----------------------------|
| David Mitchell       |                 4 |               5 |            2334 |                      10.19% |
| David Foster Wallace |       4.333333333 |               3 |            1753 |                       7.65% |
| Lev Grossman         |       3.666666667 |               3 |            1244 |                       5.43% |
| Marisha Pessl        |               3.5 |               2 |            1153 |                       5.03% |
| Haruki Murakami      |               3.5 |               2 |             768 |                       3.35% |
| Cormac McCarthy      |               3.5 |               2 |             650 |                       2.84% |

My top six authors by average rating (with ties broken by number of books) are below.

| Author               | My Average Rating | Number of Books | Number of Pages | Percent of Total Page Count |
|----------------------+-------------------+-----------------+-----------------+-----------------------------|
| Gerald M. Weinberg   |                 5 |               1 |             228 |                       1.00% |
| Kent Beck            |                 5 |               1 |             224 |                       0.98% |
| Jay Fields           |                 5 |               1 |             204 |                       0.89% |
| Kurt Vonnegut        |               4.5 |               2 |             377 |                       1.65% |
| David Foster Wallace |       4.333333333 |               3 |            1753 |                       7.65% |
| David Mitchell       |                 4 |               5 |            2334 |                      10.19% |

I did top six for both of these because otherwise David Mitchell would not have been in the second one. I’ve devoured his writing in the last year and a half for a reason. I’m consistently rating his books highly.

Restricting Access to Certain Routes


Recently I’ve been working on adding authentication and authorization to a Clojure web service. The project uses compojure for routing and friend for authentication and authorization. My pair and I wanted to restrict access to specific routes while leaving some routes completely public. It took a few tries until we figured out how to do this in a way that made us happy.

The rest of this post shows the approximate path we took to our current solution. It focuses on using friend to restrict access to specific routes. It does not go into details about adding authentication to your web service.

Below is an example of the routes before adding authorization checks.

(ns example.server
  (:require [compojure.core :refer [GET defroutes] :as compojure]
            [compojure.route :as route]))

(defroutes app
  (GET "/status" _ (status))
  (GET "/cars" _ (fetch-cars))
  (GET "/attributes" _ (fetch-attributes))
  (GET "/drivers" _ (fetch-drivers))
  (route/not-found "NOT FOUND"))

We wanted to make /cars, /attributes, and /drivers require that the request satisfies the :example.server/user role. Requesting /status should not require authorization. The first attempt left us with the following code.

(ns example.server
  (:require [compojure.core :refer [GET defroutes] :as compojure]
            [compojure.route :as route]
            [cemerick.friend :as friend]))

(defroutes app
  (GET "/status" _ (status))
  (GET "/cars" _
       (friend/authorize #{::user}
                         (fetch-cars)))
  (GET "/attributes" _
       (friend/authorize #{::user}
                         (fetch-attributes)))
  (GET "/drivers" _
       (friend/authorize #{::user}
                         (fetch-drivers)))
  (route/not-found "NOT FOUND"))

The above works but it suffers from repetition. You could write a macro to minimize the repetition but we thought there must be a better way.

After reading more of friend’s documentation we discovered friend/wrap-authorize. This is middleware that only allows requests through if the request satisfies the required roles. Our first pass at using friend/wrap-authorize looked like the following example.

(ns example.server
  (:require [compojure.core :refer [GET defroutes] :as compojure]
            [compojure.route :as route]
            [cemerick.friend :as friend]))

(defroutes protected-routes
  (GET "/cars" _ (fetch-cars))
  (GET "/attributes" _ (fetch-attributes))
  (GET "/drivers" _ (fetch-drivers)))

(defroutes app
  (GET "/status" _ (status))
  (friend/wrap-authorize protected-routes #{::user})
  (route/not-found "NOT FOUND"))

This is much nicer. The repetition is removed by extracting routes that require authorization into a separate defroutes and wrapping it with friend/wrap-authorize.

This introduces a subtle bug. A response with status code 404 is no longer returned if a non-existent resource is requested and the request is unauthorized. This is because the authorization check happens before matching a route. friend’s documentation warns against this and suggests using compojure/context to scope usage of friend/wrap-authorize. This doesn’t solve the problem, but it at least narrows its scope.
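A sketch of that suggestion (note it moves the protected routes under an /api prefix, which differs from the examples above):

(defroutes app
  (GET "/status" _ (status))
  (compojure/context "/api" []
    (friend/wrap-authorize protected-routes #{::user}))
  (route/not-found "NOT FOUND"))

An unauthorized request for a non-existent path under /api still fails authorization before route matching, but every path outside /api now behaves correctly. We can do better.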

Compojure 1.2.0 introduced the function wrap-routes. wrap-routes applies middleware after a route is matched. By using this we can have all of the benefits of using friend/wrap-authorize without breaking returning 404 responses.

(ns example.server
  (:require [compojure.core :refer [GET defroutes] :as compojure]
            [compojure.route :as route]
            [cemerick.friend :as friend]))

(defroutes protected-routes
  (GET "/cars" _ (fetch-cars))
  (GET "/attributes" _ (fetch-attributes))
  (GET "/drivers" _ (fetch-drivers)))

(defroutes app
  (GET "/status" _ (status))
  (compojure/wrap-routes protected-routes
                         friend/wrap-authorize
                         #{::user})
  (route/not-found "NOT FOUND"))

There we have it. A solution without duplication that still responds properly to requests for non-existent resources. compojure/wrap-routes is a useful function to know about.

An Effective Code Review Process


The above was tweeted1 recently and it resulted in some decent discussion about code reviews. In the past six months at Outpace, I’ve been part of a handful of code review sessions that have been extremely productive. After the reviews many developers have expressed shock at the effectiveness of the process. A tweet-sized overview of the process we’ve followed can be found in Carin Meier’s responses to the above tweet. Since you can’t fit details into tweets, the rest of this post expands on our code review process.

Some background before we dive into the details. Outpace is a software company that practices, despite every programmer working remotely, nearly 100% pair programming. In addition, the team Carin and I are on does most of its work through GitHub pull requests. Before merging with master, the pull requests are reviewed by other teammates. Between pairing and pull requests, many eyes see every line of code as changes are made.

Even with all this, we’ve found value in having more traditional code reviews. We’ve found that different feedback and action items emerge from reviewing code that we already have than from reviews of code changes (e.g., pull requests).

In addition to working for the team described above, the process below has been successfully used to review an internal library whose reviewers were mostly interested users, along with a couple of contributors. It has also been successful on teams that did not do their work through reviewed pull requests.

The Code Review Process

Step 1: Select the code to review

Typically we do this between one and two weeks before the code review. Here we identify the code we want to review and create a two-hour meeting on a Friday at the end of the day.

Having the meeting late on Friday helps create a relaxed environment. The review becomes a time to unwind, enjoy a beverage of choice, and talk about code. I haven’t met a developer that doesn’t enjoy discussing how to make code better and this lets everyone finish the week doing just that. The code review becomes an uplifting way to finish a week.

Step 2: Open the code review

A few days (typically late Tuesday or early Wednesday) before the Friday code review meeting we start the review. We do this by opening a GitHub pull request. The following steps will create a pull request where you can comment on every line of code being reviewed.

  1. Create a local branch.
  2. Delete the code being reviewed and commit locally.
  3. Push the branch to GitHub.
  4. Open a pull request.

These steps are necessary because GitHub pull requests only let you view code that has changed. This process marks every line as deleted, which causes every line to appear in the Files changed tab.
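In git terms, the four steps look something like this (the branch and path names are illustrative):

$ git checkout -b review-parser
$ git rm -r src/example/parser
$ git commit -m "Delete parser code for review"
$ git push origin review-parser
# then open a pull request for review-parser against master on GitHub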

Opening the pull request a few days before the review meeting provides a location for pre-meeting comments to be added. This lets reviewers spend a couple days thinking about and commenting on the code. Comments on the pull request indicate a conversation should happen during the code review meeting.

Step 3: The code review meeting

It’s finally Friday and time to review the code as a group. Everyone joins a video conference and someone volunteers to lead the code review. At least one other person volunteers to be a note taker.

The leader directs the code review and keeps it moving forward. To do this the leader shares their screen with the video conference and scrolls through the Files changed view of the pull request. When a comment appears on screen the leader stops scrolling and discussion starts.

The comments are read (often silently) and discussion happens. The leader tries to recognize when a conclusion has been reached or when further discussion, outside of the code review, needs to happen. When a conclusion is reached someone (often the leader) states a quick summary and a note taker records the next steps. The next steps are added as additional comments in the comment thread being discussed. As the next steps are recorded the leader moves on to the next comment.

This continues until either time runs out or the group runs out of things to discuss.

After the code review a volunteer turns the next steps comments into Trello cards and we take care of the code review items as part of our usual work.

Results

We’ve seen impressive improvements to code quality in the projects that have undergone this style of code review. Both small and large changes have happened as a result. Code has become simpler, clearer, and better understood. Additionally, the feeling of collective code ownership has increased.

Teammates have been surprised at how well this process has worked. More than a couple have said that historically they have not found code reviews useful but that these were.

This style of code review has worked in a few different settings and I encourage you to give it a shot.


  1. Reading through the discussion on Twitter after this tweet can give some hints as to what it takes to have an effective code review.