Jake McCrary

Building the ErgoDox Keyboard


Earlier this year I built an ErgoDox. The ErgoDox is a split-hand mechanical keyboard whose design has been released under the GNU GPLv3. There are a few standard1 ways of getting the parts: you either source all the parts yourself or buy a bundle from Massdrop. I opted to wait until Massdrop was selling them and bought a kit from them.

My ErgoDox

Why?

  1. I’ve used an ergonomic keyboard for years and was intrigued by the split hand design.
  2. I wanted to try out Cherry MX key switches.
  3. Using your thumb for more than just the space bar made a lot of sense to me.
  4. The firmware lets you have multiple layers. I thought this could be really useful.
  5. The project sounded fun. I used to make physical devices and this seemed like a good way to do that again.

Buying

As mentioned earlier, I bought my parts from Massdrop. In the buy I participated in I had the option of a full-hand case or the traditional case, and I opted for the full hand. As part of the buy I also bought additional aluminum top layers, a blank set of DSA2 keycaps, and Cherry MX Blue key switches.

If I were doing it again I would not buy the extra aluminum top layer. I built one hand using the aluminum top and the other with the basic acrylic top, and I prefer both the look and feel of the acrylic hand.

I would also not buy the set of DSA keycaps from Massdrop. It was convenient and a bit cheaper to buy from them but had I known I could get different colors from Signature Plastics I would have done that.

I also bought eight “deep-dish” DSA keys directly from Signature Plastics. These keys feel different, which lets me know when my fingers are above the home row. I’d recommend doing this; you can order from this page.

For key switches I bought Cherry MX Blues through Massdrop. Blues are extremely clicky. You can easily hear me typing in every room of my apartment. It is very satisfying.

After using the keyboard for about a week I also ended up buying some pads for my wrists. I occasionally rest my wrists on the keyboard and the keyboard’s edge would dig into me.

Building

I followed Massdrop’s step-by-step guide and this YouTube video. Another great resource is the community at GeekHack. I’d recommend reading and watching as much as possible before starting your build.

I built this using a cheap soldering iron I’ve had for years, very thin solder, solder wick, and a multimeter. I don’t know if this would have been easier with better tools or not but those got the job done.

While soldering the surface-mount diodes I was in the zone and soldered a few locations that didn’t actually need solder. When you solder the diodes, only solder the locations that have the key silk screen.

My system for minimizing errors while soldering the diodes is the following five steps.

  1. Lay down some solder on one of the pads.
  2. Put the correct end of the diode on top of that solder, reheat and push down.
  3. Test the connection with a multimeter.
  4. Solder the other half of the diode.
  5. Test the connection.

I batched up the steps: I’d do the first step for a whole row, then the second for the entire row, then the third, and so on. Being rigorous about testing every connection is important. Catching mistakes early makes them easier to fix.

If you solder a diode on the wrong way around, there is a huge difference (at least for me, using solder wick) between fixing the error when only one pad has been soldered versus both. I soldered more than one diode backwards; a mistake noticed after soldering one pad was easy to fix, while one noticed after soldering both pads took serious effort.

Eventually you’ll need to cut open a USB cable. I ended up removing the plastic housing using a Dremel. When soldering the wires to the USB holes I was too concerned with it looking pretty and did not leave enough wire. This made it harder to solder, and as a result I did a poor job that resulted in a short. After desoldering and destroying another cable, but this time leaving more wire, I managed a better job. I originally noticed the short because I kept getting warnings from my computer about my USB keyboard drawing too much power.

I’ve annotated a copy of Massdrop’s instructions using Evernote. It contains the above tips inline.

Firmware

After you physically build your keyboard you need to build the firmware. There are a few different firmwares that can work and you can discover those on GeekHack. I’m using a fork of what Massdrop’s graphical configuration tool uses. It is based off benblazak/ergodox-firmware.

One of the exciting things about the ErgoDox is tweaking the firmware. I took the base firmware and modified it to have media key support and light up the LEDs when I’m on any layer besides the base. Some people have added the ability to record keyboard macros and other neat features. I encourage you to take a look at the source even if you use the graphical configuration tool. I haven’t explored beyond benblazak/ergodox-firmware so I can’t compare it to other firmwares.

Conclusion

I really enjoy it. Building it was both fun and frustrating3.

After using the keyboard for a few months I’ve found that I really only use three (on each hand) of the thumb cluster keys. I also don’t use the keyboard layers too often. I have three layers programmed and I always stay on the main one unless I want to hit a media key.

Would I recommend building your own ErgoDox? If you can solder, or are willing to learn, and this sounds at all interesting to you, then yes. The project can be frustrating but the result is great.

The Future

There is still a lot left to explore in the custom keyboard space. Even so, I have no plans to leave the ErgoDox anytime soon. In terms of improving my ErgoDox, I plan on poking around the different firmwares at some point. I’d also like to explore tenting options.

Resources


  1. I feel a bit odd using the word standard to describe acquiring parts to build a keyboard.

  2. This page has diagrams that show the different keycap families.

  3. Those surface mount diodes are so tiny.

Using Emacs to Explore an HTTP API


Recently I rediscovered an Emacs package that lets you interact with HTTP endpoints from the comfort of an Emacs buffer. restclient.el provides restclient-mode, a mode for writing and executing HTTP requests. The package can be found in MELPA.

Below is an example buffer that touches the GitHub API.

 1  :github = https://api.github.com
 2
 3  # get users orgs
 4
 5  GET :github/users/jakemcc/orgs
 6
 7  # render markdown
 8
 9  POST :github/markdown
10
11  {
12    "text" : "## Title"
13  }
14
15  # render markdown raw
16
17  POST :github/markdown/raw
18  Content-Type: text/plain
19
20  Title
21  -----

The example above has a few interesting snippets. :github is an example of a variable. Lines 8-14 show an example of posting JSON to an endpoint; you put the data you want to send below the query. The last POST shows how to set headers for a request.

The location of your cursor decides what query to execute. Comments start with # and break your document into sections. The query in the same section as your cursor is the one that is executed. If the cursor is anywhere on lines 3-6 and I hit C-c C-c then Emacs queries GitHub for my organizations. Below is what pops up in a buffer.

[
    {
        "avatar_url": "https:\/\/avatars.githubusercontent.com\/u\/1826953?",
        "public_members_url": "https:\/\/api.github.com\/orgs\/speakerconf\/public_members{\/member}",
        "members_url": "https:\/\/api.github.com\/orgs\/speakerconf\/members{\/member}",
        "events_url": "https:\/\/api.github.com\/orgs\/speakerconf\/events",
        "repos_url": "https:\/\/api.github.com\/orgs\/speakerconf\/repos",
        "url": "https:\/\/api.github.com\/orgs\/speakerconf",
        "id": 1826953,
        "login": "speakerconf"
    },
    {
        "avatar_url": "https:\/\/avatars.githubusercontent.com\/u\/4711436?",
        "public_members_url": "https:\/\/api.github.com\/orgs\/outpace\/public_members{\/member}",
        "members_url": "https:\/\/api.github.com\/orgs\/outpace\/members{\/member}",
        "events_url": "https:\/\/api.github.com\/orgs\/outpace\/events",
        "repos_url": "https:\/\/api.github.com\/orgs\/outpace\/repos",
        "url": "https:\/\/api.github.com\/orgs\/outpace",
        "id": 4711436,
        "login": "outpace"
    }
]
// HTTP/1.1 200 OK
// Server: GitHub.com
// Date: Fri, 04 Jul 2014 17:34:26 GMT
// Content-Type: application/json; charset=utf-8
// other headers removed for space consideration on blog

C-c C-c triggers restclient-http-send-current, which runs a query and pretty-prints the result. I could have instead used C-c C-r to trigger restclient-http-send-current-raw, which executes a query and shows the raw result.

It isn’t a perfect mode. One issue I’ve come across is that queries targeting localhost fail. The solution is to query 127.0.0.1.
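For example, a buffer like the sketch below (the port and path are made up for illustration) would fail with the first form but work with the second:

# may fail in restclient-mode
GET http://localhost:8080/status

# works
GET http://127.0.0.1:8080/status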

restclient-mode makes Emacs a useful tool for exploring and testing HTTP APIs. Since it operates on a simple text format it allows you to easily share executable documentation with others. I highly recommend restclient.el.

Comparing Clojure Testing Libraries: Output


I recently became interested in how Clojure testing libraries help you when there is a test failure. This interest resulted in me exploring different Clojure testing libraries. I created the same tests using clojure.test (with and without humane-test-output), expectations, Midje, and Speclj and looked at the output.

I ran all of these examples using Leiningen. Midje, Speclj, and expectations color their output, but I’m not going to try to reproduce that here. The color added by Midje and expectations is useful; Speclj’s coloring hurt readability. I use a dark-colored terminal, and Speclj colors the line that tells you where the failure occurred black, which made it hard to read.

I’m not going to show what the tests look like for each testing library past the first comparison. How a test is expressed is important, but it is not what I want to focus on in this post.

Comparing Strings

I’ll start off with a basic string comparison. The failing test compares two strings that differ by only one character.

clojure.test

Most (hopefully all) Clojure programmers should be familiar with clojure.test. It is the testing library that is included with Clojure.

(ns example.string-test
  (:require [clojure.test :refer :all]))

(deftest string-comparisons
  (is (= "strings equal" "strings equal"))
  (is (= "space" "spice")))

The output below is what you get when the above test runs. Even in this simple example it isn’t the easiest to read: it doesn’t make it easy to find the expected or actual values.

clojure.test output
FAIL in (string-comparisons) (string_test.clj:6)
expected: (= "space" "spice")
  actual: (not (= "space" "spice"))

Below is the same test but with humane-test-output enabled. The output is easy to read, and you can quickly see the expected and actual values. It even provides a diff between them, although in this situation the diff isn’t that useful.
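Reproducing this output requires activating humane-test-output, not just depending on it. A minimal project.clj sketch based on my reading of the library’s README (the version number is a guess and may be stale):

```clojure
;; project.clj (sketch)
:profiles {:dev {:dependencies [[pjstadig/humane-test-output "0.6.0"]]
                 ;; :injections runs this code on JVM startup, patching
                 ;; clojure.test's reporting before any tests run
                 :injections [(require 'pjstadig.humane-test-output)
                              (pjstadig.humane-test-output/activate!)]}}
```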

clojure.test with humane-test-output
FAIL in (string-comparisons) (string_test.clj:6)
expected: "space"
  actual: "spice"
    diff: - "space"
          + "spice"
expectations

Another testing library is Jay Fields’s expectations. You can see from the example that it has a fairly minimal syntax.

(ns example.string-expectations
  (:require [expectations :refer :all]))

(expect "strings equal" "strings equal")
(expect "space" "spice")
expectations output
failure in (string_expectations.clj:5) : example.string-expectations
(expect "space" "spice")

           expected: "space"
                was: "spice"

           matches: "sp"
           diverges: "ace"
                  &: "ice"

The output from expectations is very readable. You can easily pick out the expected and actual values. It also shows you where the string starts to diverge.

Speclj

Before writing this post I had zero experience with Micah Martin’s Speclj. Below is my translation of the failing string test and its output.

(ns example.string-spec
  (:require [speclj.core :refer :all]))

(describe "String comparisons"
  (it "have nice error message"
      (should= "space" "spice")))
Speclj
  9) String comparisons have nice error message
     Expected: "space"
          got: "spice" (using =)
     /Users/jake/src/jakemcc/example/spec/example/string_spec.clj:7

Speclj’s test output above is an improvement over clojure.test. You can easily find the expected and actual values. It doesn’t provide any help with diagnosing how those values are different.

Midje

I have a little bit of experience with Brian Marick’s Midje. Unlike the other libraries it switches up the assertion syntax. In Midje the expected value is on the right side of =>.

(ns example.string-test
  (:require [midje.sweet :refer :all]))

(fact "strings are equal"
  "string is equal" => "string is equal")

(fact "strings not equal"
   "spice" => "space")
Midje
FAIL "strings not equal" at (string_test.clj:8)
    Expected: "space"
      Actual: "spice"

Midje’s output is similar to Speclj’s. You can quickly find the expected and actual values but it doesn’t help you spot the difference.

String Comparison Winner

expectations wins for best output. You can easily spot the expected and actual values and it also helps you find the difference between the strings.

The worst output comes from clojure.test. It doesn’t make it easy to spot the difference or even find the expected and actual values.

Comparing Maps

For maps I’ve set up three assertions. The first has an extra key-value pair in the actual value, the second has an extra pair in the expected value, and the final assertion has a different value for the :cheese key. The clojure.test example is below.

(deftest map-comparisons
  (is (= {:sheep 1} {:cheese 1 :sheep 1}))
  (is (= {:sheep 1 :cheese 1} {:sheep 1}))
  (is (= {:sheep 1 :cheese 1} {:sheep 1 :cheese 5})))
clojure.test
FAIL in (map-comparisons) (map_test.clj:5)
expected: (= {:sheep 1} {:cheese 1, :sheep 1})
  actual: (not (= {:sheep 1} {:cheese 1, :sheep 1}))

FAIL in (map-comparisons) (map_test.clj:6)
expected: (= {:sheep 1, :cheese 1} {:sheep 1})
  actual: (not (= {:cheese 1, :sheep 1} {:sheep 1}))

FAIL in (map-comparisons) (map_test.clj:7)
expected: (= {:sheep 1, :cheese 1} {:sheep 1, :cheese 5})
  actual: (not (= {:cheese 1, :sheep 1} {:cheese 5, :sheep 1}))

Unsurprisingly the default clojure.test output for maps suffers from the same problems found in the string comparisons. To find the actual and expected values you need to manually parse the output.

clojure.test with humane-test-output
FAIL in (map-comparisons) (map_test.clj:5)
expected: {:sheep 1}
  actual: {:cheese 1, :sheep 1}
    diff: + {:cheese 1}

FAIL in (map-comparisons) (map_test.clj:6)
expected: {:cheese 1, :sheep 1}
  actual: {:sheep 1}
    diff: - {:cheese 1}

FAIL in (map-comparisons) (map_test.clj:7)
expected: {:cheese 1, :sheep 1}
  actual: {:cheese 5, :sheep 1}
    diff: - {:cheese 1}
          + {:cheese 5}

Above is the output of using clojure.test with humane-test-output. It is a big improvement over the default clojure.test. You can quickly see the expected and actual values. Unlike with the string assertions the diff view is actually helpful. The diffs do a good job of helping you identify the error.

expectations
failure in (map_expectations.clj:6) : example.map-expectations
(expect {:sheep 1} {:sheep 1, :cheese 1})

           expected: {:sheep 1}
                was: {:cheese 1, :sheep 1}

           in expected, not actual: null
           in actual, not expected: {:cheese 1}

failure in (map_expectations.clj:7) : example.map-expectations
(expect {:sheep 1, :cheese 1} {:sheep 1})

           expected: {:cheese 1, :sheep 1}
                was: {:sheep 1}

           in expected, not actual: {:cheese 1}
           in actual, not expected: null

failure in (map_expectations.clj:8) : example.map-expectations
(expect {:sheep 1, :cheese 5} {:sheep 1, :cheese 1})

           expected: {:cheese 5, :sheep 1}
                was: {:cheese 1, :sheep 1}

           in expected, not actual: {:cheese 5}
           in actual, not expected: {:cheese 1}

expectations does a pretty good job helping you as well. As before, you can clearly read the expected and actual values, and expectations also provides some hint as to what is different between the maps. I find the English descriptions a bit easier to read than humane-test-output’s diff view. Still, output like in expected, not actual: null is a bit confusing, and the result would be improved if it were suppressed.

I’m just going to lump Speclj and Midje together. The output for each is below. Both improve on clojure.test by making it easy to see the expected and actual values, but neither does anything beyond that.

Speclj
  4) map comparisons have nice error messages when extra entries keys present
     Expected: {:sheep 1}
          got: {:cheese 1, :sheep 1} (using =)
     /Users/jake/src/jakemcc/example/spec/example/map_spec.clj:7

  5) map comparisons have nice error messages when missing an entry
     Expected: {:cheese 1, :sheep 1}
          got: {:sheep 1} (using =)
     /Users/jake/src/jakemcc/example/spec/example/map_spec.clj:9

  6) map comparisons have nice error messages when mismatched values
     Expected: {:cheese 5, :sheep 1}
          got: {:cheese 1, :sheep 1} (using =)
     /Users/jake/src/jakemcc/example/spec/example/map_spec.clj:11
Midje
FAIL "map is missing an entry" at (map_test.clj:5)
    Expected: {:cheese 1, :sheep 1}
      Actual: {:sheep 1}

FAIL "map has an extra entry" at (map_test.clj:8)
    Expected: {:sheep 1}
      Actual: {:cheese 1, :sheep 1}

FAIL "map has a different value" at (map_test.clj:11)
    Expected: {:cheese 5, :sheep 1}
      Actual: {:cheese 1, :sheep 1}

Map Comparison Winner

Tie between humane-test-output and expectations. Both do a good job of helping the reader spot the difference.

Comparing Sets

Next up are sets. There are only two assertions in this section: one where the actual value has an extra member, and one where it is missing a member.

(ns example.set-test
  (:require [clojure.test :refer :all]))

(deftest set-comparisons
  (is (= #{:a :b} #{:a :b :c}))
  (is (= #{:a :b :c} #{:a :b})))

First up is the basic clojure.test output. It suffers from the same problem it has had this entire time: it doesn’t make it easy to read the expected and actual values.

clojure.test
FAIL in (set-comparisons) (set_test.clj:5)
expected: (= #{:b :a} #{:c :b :a})
  actual: (not (= #{:b :a} #{:c :b :a}))

FAIL in (set-comparisons) (set_test.clj:6)
expected: (= #{:c :b :a} #{:b :a})
  actual: (not (= #{:c :b :a} #{:b :a}))

No surprises with humane-test-output. It improves the clojure.test output by making it easy to read the expected and actual values. The diff view also helps figure out what is causing the assertion to fail.

clojure.test with humane-test-output
FAIL in (set-comparisons) (set_test.clj:5)
expected: #{:b :a}
  actual: #{:c :b :a}
    diff: + #{:c}

FAIL in (set-comparisons) (set_test.clj:6)
expected: #{:c :b :a}
  actual: #{:b :a}
    diff: - #{:c}

expectations once again delivers nice output. It remains easy to find the expected and actual values, and the in expected/in actual lines help you spot the differences.

expectations
failure in (set_expectations.clj:4) : example.set-expectations
(expect #{:b :a} #{:c :b :a})

           expected: #{:b :a}
                was: #{:c :b :a}

           in expected, not actual: null
           in actual, not expected: #{:c}

failure in (set_expectations.clj:5) : example.set-expectations
(expect #{:c :b :a} #{:b :a})

           expected: #{:c :b :a}
                was: #{:b :a}

           in expected, not actual: #{:c}
           in actual, not expected: null

Speclj and Midje both have better output than the basic clojure.test.

Speclj
  7) set comparisons have nice error messages when missing item
     Expected: #{:b :a}
          got: #{:c :b :a} (using =)
     /Users/jake/src/jakemcc/example/spec/example/set_spec.clj:9

  8) set comparisons have nice error messages when more items
     Expected: #{:c :b :a}
          got: #{:b :a} (using =)
     /Users/jake/src/jakemcc/example/spec/example/set_spec.clj:11
Midje
FAIL "set is superset of expected" at (set_test.clj:5)
    Expected: #{:a :b}
      Actual: #{:a :b :c}

FAIL "set is subset of expected" at (set_test.clj:8)
    Expected: #{:a :b :c}
      Actual: #{:a :b}

Set Comparison Winner

Similar to the winner of the map comparisons I’m going to split the victory between expectations and humane-test-output.

Comparing Lists

Next up we compare lists (and lists to vectors). There are three comparisons: one with an extra element, one with the same length but a mismatched element, and one comparing a vector and a list with drastically different contents.

(ns example.seq-test
  (:require [clojure.test :refer :all]))

(deftest list-comparisons
  (is (= '(1 2 3) '(1 2 3 4)))
  (is (= '(1 2 4) '(1 2 3)))
  (is (= '(9 8 7) [1 2 3])))

First up is clojure.test, with the same issues as in all the previous comparisons.

clojure.test
FAIL in (list-comparisons) (seq_test.clj:5)
expected: (= (quote (1 2 3)) (quote (1 2 3 4)))
  actual: (not (= (1 2 3) (1 2 3 4)))

FAIL in (list-comparisons) (seq_test.clj:6)
expected: (= (quote (1 2 4)) (quote (1 2 3)))
  actual: (not (= (1 2 4) (1 2 3)))

FAIL in (list-comparisons) (seq_test.clj:7)
expected: (= (quote (9 8 7)) [1 2 3])
  actual: (not (= (9 8 7) [1 2 3]))

Once again humane-test-output improves upon clojure.test. The only interesting difference from the previous comparisons is that the diff view ends up having nil values where the elements are the same.

clojure.test with humane-test-output
FAIL in (list-comparisons) (seq_test.clj:5)
expected: (1 2 3)
  actual: (1 2 3 4)
    diff: + [nil nil nil 4]

FAIL in (list-comparisons) (seq_test.clj:6)
expected: (1 2 4)
  actual: (1 2 3)
    diff: - [nil nil 4]
          + [nil nil 3]

FAIL in (list-comparisons) (seq_test.clj:7)
expected: (9 8 7)
  actual: [1 2 3]
    diff: - [9 8 7]
          + [1 2 3]

expectations continues to have good output and tries to help you out as well. You’ll notice that it also inserts nil values where the lists are the same.

expectations
failure in (list_expectations.clj:4) : example.list-expectations
(expect '(1 2 3) '(1 2 3 4))

           expected: (1 2 3)
                was: (1 2 3 4)

           in expected, not actual: null
           in actual, not expected: [nil nil nil 4]
           actual is larger than expected

failure in (list_expectations.clj:5) : example.list-expectations
(expect '(1 2 4) '(1 2 3))

           expected: (1 2 4)
                was: (1 2 3)

           in expected, not actual: [nil nil 4]
           in actual, not expected: [nil nil 3]

failure in (list_expectations.clj:6) : example.list-expectations
(expect '(9 8 7) [1 2 3])

           expected: (9 8 7)
                was: [1 2 3]

           in expected, not actual: [9 8 7]
           in actual, not expected: [1 2 3]

Unsurprisingly, Speclj and Midje are better than clojure.test but again don’t go beyond making it easy to find the expected and actual values.

Speclj
  1) List/vector comparisons when there is an extra element
     Expected: (1 2 3)
          got: (1 2 3 4) (using =)
     /Users/jake/src/jakemcc/example/spec/example/string_spec.clj:7

  2) List/vector comparisons when there is a mismatched element
     Expected: (1 2 4)
          got: (1 2 3) (using =)
     /Users/jake/src/jakemcc/example/spec/example/string_spec.clj:9

  3) List/vector comparisons when comparing different types
     Expected: (9 8 7)
          got: [1 2 3] (using =)
     /Users/jake/src/jakemcc/example/spec/example/string_spec.clj:11
Midje
FAIL "lists are different sizes" at (seq_test.clj:5)
    Expected: (1 2 3)
      Actual: (1 2 3 4)

FAIL "lists have different entries" at (seq_test.clj:8)
    Expected: (1 2 4)
      Actual: (1 2 3)

FAIL "compare very different list like values" at (seq_test.clj:14)
    Expected: (9 8 7)
      Actual: [1 2 3]

List Comparison Winner

I find clojure.test with humane-test-output a bit easier to read than expectations. Both have better output than basic clojure.test, Speclj, and Midje.

Overall Winner

If I were picking a testing library based entirely on what a failing test looks like I would use expectations. My second pick would be clojure.test with humane-test-output.

It is great that Clojure ships with clojure.test. It is unfortunate that it does so little to help you read a failing test. Every library I tried has better output than clojure.test.

Addendum

Added 2014/06/23

Colin Jones points out that Speclj provides should==, which checks that the expected and actual values have the same contents. He provided a gist that shows the difference.
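A sketch of the difference, based on my understanding of that gist (untested against current Speclj versions):

```clojure
(ns example.contents-spec
  (:require [speclj.core :refer :all]))

(describe "collection comparisons"
  ;; should= requires the values to be =, so element order matters
  (it "fails with should= when order differs"
      (should= [1 2 3] [3 2 1]))
  ;; should== passes when the collections have the same contents
  (it "passes with should== when order differs"
      (should== [1 2 3] [3 2 1])))
```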

Quicker Feedback From Failing Tests


Over the last couple of years I’ve wanted quicker feedback from my Clojure tests. This desire resulted in the development of lein-autoexpect and, more recently, lein-test-refresh. Each tool monitors your project for changes and, on a change, uses tools.namespace to reload your code and then rerun your expectations or clojure.test tests. Using tools like these has changed my development process.

Version 0.5.0 of lein-test-refresh was released last week. This release enables even quicker feedback by tracking which tests fail and, after reloading your code, running those tests first. Only when the previously failing tests pass does it rerun all of your tests.

lein-test-refresh has had quite a few features added since I last wrote about it. The readme will always have the latest list, but as of this writing they include:

  • Reloads code and reruns tests on changes to your project’s code.
  • Runs previously failing tests first.
  • Supports custom notification commands.
  • Built in Growl support.
  • Can notify after test success and failure or just after failure.
  • Supports a subset of Leiningen test selectors.
  • Reports on your tests running time.
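To try it out, the plugin goes in your project.clj (or profiles.clj). A minimal sketch; the notification keys are optional, and the terminal-notifier command here is just an example based on my reading of the readme:

```clojure
;; project.clj (sketch)
:plugins [[com.jakemccrary/lein-test-refresh "0.5.0"]]

;; optional: hook up a custom notification command
:test-refresh {:notify-command ["terminal-notifier" "-title" "Tests" "-message"]
               :notify-on-success true}
```

Running lein test-refresh in the project then starts the watch-reload-rerun loop.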

I don’t have enough experience with the new lein-test-refresh to say how running failing tests first will affect my development practices. I don’t expect it to change how I work, but it does enable quicker feedback, and quick feedback cycles are what it is all about.

Acknowledgments

Most of the ‘rerun failed tests first’ feature was hashed out and spiked during a mob programming session organized by Zee Spencer. This happened at a company conference put on by Outpace in Las Vegas. Many developers were involved but two that most influenced the final result were Joel Holdbrooks and Timothy Pratley.

Book Review: Clojure for Machine Learning


I was recently given a review copy of Clojure for Machine Learning. I have an academic familiarity with machine learning techniques and presented on a few at speakerconf 2012. I haven’t explored machine learning in Clojure since preparing that talk and was excited to read a book on the topic.

The book gives a shallow introduction to many different topics. It does so through a bit of mathematics and much more code. Depending on the section, the code examples implement the algorithm being discussed, show you how to use a specific library, or do both.

An aspect I particularly enjoy about the code examples is that they always start by showing what dependencies should be added to your project.clj file. This is done even if the library has been used in a previous chapter. Because of this every example can stand on its own.

Something that can almost always be improved about Clojure examples is referencing namespaces using the require form with a namespace alias. Even a terrible alias, such as (require '[example :as e]), makes the example code easier to understand. Being able to read e/a-func instead of a-func makes it explicit where that function is located and aids understanding.
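For instance, an example requiring clojure.string could be written as:

```clojure
(ns example.core
  (:require [clojure.string :as string]))

;; string/join is obviously from clojure.string...
(string/join ", " ["a" "b" "c"])
;; ...whereas a bare (join ", " ["a" "b" "c"]) after :refer :all
;; forces the reader to guess where join comes from.
```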

I rate all my books by the goodreads five star scale1. This book earns three stars. Even with my limited machine learning background I didn’t learn anything new but I was introduced to some Clojure libraries and enjoyed seeing Clojure implementations of machine learning techniques.

If you enjoy Clojure and the table of contents excites you then you’ll most likely find this book interesting. If you want to maximize your learning I’d recommend taking an online course in machine learning2. It will be a larger time investment but you’ll leave with a deeper understanding.


  1. 1 star = did not like, 2 stars = it was ok, 3 stars = liked it, 4 stars = really liked it, 5 stars = loved it.

  2. I took the original offering from Stanford when it was first offered. Post about it here.

Emacs: Generating Project Shortcuts


I’m now writing Clojure nearly 100% of my time and as a result am spending more time in Emacs. I’m working on a few different projects and wanted a quicker way to jump between them. My first attempt ended with me defining many functions that looked like the following.

(defun b/test-refresh ()
  (interactive)
  (find-file "~/src/jakemcc/lein-test-refresh/test-refresh/project.clj"))

After writing a couple of these I decided the computer could do this better than I could and decided to write some code to automate it. A sample of my directory structure is shown below.

jakemcc/
├── bookrobot
│   └── project.clj
└── lein-autoexpect
    └── project.clj

Taking advantage of this structure I wrote some Emacs lisp to walk a directory and define functions that open up any found project.clj files.

;; -*- lexical-binding: t -*-

(defun open-file-fn (file)
  (lambda ()
    (interactive)
    (find-file file)))

(defun create-project-shortcuts (prefix base)
  (dolist (elt (directory-files base))
    (let ((project (concat base "/" elt "/project.clj")))
      (when (file-exists-p project)
        (fset (intern (concat prefix elt)) (open-file-fn project))))))

open-file-fn returns an anonymous interactive function (meaning the function can be called interactively) that opens file. It takes advantage of the Emacs 24 feature that enables lexical scoping when you add ;; -*- lexical-binding: t -*- to the top of your Emacs Lisp file. This lets the anonymous function capture file.

create-project-shortcuts takes in a prefix and a base directory. It searches base for directories that contain a project.clj file. For each found project.clj file a function is created (using fset) with the name of the containing directory prefixed by prefix.

With those two functions defined all that is left is to call create-project-shortcuts.

(create-project-shortcuts "b/" "~/src/jakemcc")

Now b/bookrobot and b/lein-autoexpect are available after hitting M-x.

I’ve used this code to create quick shortcuts to all of my work and non-work projects. It has been immensely useful for jumping around projects.

Managing Windows in OS X Using Phoenix

| Comments

Last year I wrote about how I managed my windows under OS X, Windows, and Linux. I’m a big fan of having an orderly layout and try to use grid managers. Since then I’ve changed jobs and now my main machine is a MacBook Pro running OS X Mavericks with two 27-inch Cinema Displays. As a result I’ve started experimenting with more OS X window managers. After trying a few out I’m going to stick with Phoenix.

Before Phoenix

Last year I was satisfied using Spectacle. It is (or at least was, I haven’t used it in a while) easy to install and had good defaults. I’d still recommend it for most people.

At the recommendation of a reader, I switched to Slate. Slate has a ton of features and I barely scratched the surface in how I used it; I used it as a replacement for Spectacle and didn’t touch any of the advanced features. Before I had the urge to explore them I became dissatisfied with Slate. I ran into an issue where after running for a while (at least a week) it would start to respond slowly. I’d try to move a window to another monitor and it wouldn’t move. Eventually the command would register while I was in another application, sending whatever window I was then focused on to another monitor.

Introducing Phoenix

I was looking for solutions to Slate’s unresponsiveness when I stumbled on Phoenix. I was drawn in by its stated goal; it “aims for efficiency and a very small footprint.” The fact that it is still being actively developed was also a huge selling point. Knowing that any bugs I find have a potential to be fixed is great.

Phoenix provides a JavaScript API that allows you to interact with your running applications or launch applications. It doesn’t provide anything out of the box; it is up to you to make it useful by writing your own (or taking another person’s) configuration.

This is a double-edged sword: you get exactly the features you want, but you might also spend a significant amount of time figuring out how to get them.

Luckily there are examples that you can use as a starting point. Browsing through the examples is a great way of becoming familiar with what is possible and can be inspiring.

My configuration is relatively minimal. I’ve written code to move windows between monitors (rotating between three added some complexity to this), start or focus certain applications, and resize windows. This is enough for me to feel efficient.
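To give a flavor of what a configuration looks like, here is a minimal sketch of a half-screen keybinding for ~/.phoenix.js. The method names (Key.on, Window.focused, setFrame, flippedVisibleFrame) come from the current Phoenix API; the API at the time I wrote my configuration differed, so treat the specifics as assumptions:

```javascript
// ~/.phoenix.js
// Alt+Cmd+Left: resize the focused window to fill the left half of its screen.
Key.on('left', ['alt', 'cmd'], function () {
  var window = Window.focused();
  if (!window) return;

  // The usable area of the window's screen, excluding the menu bar and dock.
  var screen = window.screen().flippedVisibleFrame();
  window.setFrame({
    x: screen.x,
    y: screen.y,
    width: screen.width / 2,
    height: screen.height
  });
});
```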

I encourage you to use a tool to help manage your windows. Personally, I think Phoenix is pretty great, don’t mind tinkering with my configuration, and strongly recommend it. As a bonus, it is a young project whose maintainer is open to suggestions. If you have an idea for a useful feature it has a chance of being added pretty quickly.

Flexible Notification of Clojure Tests Failing

| Comments

lein-test-refresh has always supported notifying you of your tests’ status through growl. With the release of version 0.3.4 it now will notify you using whatever program you want.

To make my Mac whisper the results of running my tests, I can use the following project.clj.

(defproject sample "1.2.3"
  :dependencies [[org.clojure/clojure "1.5.1"]]
  :profiles {:dev {:plugins [[com.jakemccrary/lein-test-refresh "0.3.4"]]}}
  :test-refresh {:notify-command ["say" "-v" "Whisper"]})

The specification of the command is found in the :test-refresh {:notify-command ["say" "-v" "Whisper"]} entry in the above project.clj. After running your tests lein-test-refresh will pass a (usually) short summary message as the final parameter to the specified command.
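Any executable works as the notify command. As a sketch, a hypothetical wrapper script (the name and output format are my own invention) could log each summary instead of speaking it; since lein-test-refresh appends the summary as the final argument, it arrives here as "$1":

```shell
#!/bin/sh
# notify.sh - a hypothetical :notify-command target.
notify() {
  # Prefix and print the summary; swap this printf for growlnotify,
  # terminal-notifier, or anything else you prefer.
  printf 'test-refresh: %s\n' "$1"
}

# When invoked as a script, the summary message is "$1".
notify "$1"
```

Pointing :test-refresh {:notify-command ["./notify.sh"]} at a script like this would run it after every test pass.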

Now you can finally have the results of running your tests whispered to you.

Releasing Lein-test-refresh 0.3.0

| Comments

At the suggestion of my coworker Jeff Bay, lein-test-refresh now reruns your tests on a single keystroke: hit enter in the terminal running lein test-refresh and your tests run again.

Add the Clojars-generated dependency vector below to your project.clj to start using lein-test-refresh today.

As a reminder, you can pass the argument :growl to lein-test-refresh. If you do, you’ll be notified of test successes and failures through growl. On top of the quick feedback cycle that lein-test-refresh (and lein-autoexpect) provides, the growl notification is my favorite feature. I highly recommend giving it a shot.

Using Incanter to Review My 2013 Reading

| Comments

I use goodreads to keep track of my reading and have since early 2010. I find it very useful for capturing what I want to read and reminding me how I felt about books I’ve read. I thought it would be fun to take a closer look at what I read in 2013. I’m doing this using Clojure with Incanter. I haven’t used Incanter since I wrote this post and thought this would be a good opportunity to visit it again.

First I need to get my data out of goodreads. I’ve worked with the Goodreads API before 1 but am not going to use it for this exercise. Instead I’m using the goodreads export functionality (at goodreads follow the links: My Books > import/export) to export a csv file. Having the csv file also lets me clean up some of the data, since some of the books’ page counts were missing 2.

Now that I have data it is time to start playing with it. Run lein new goodreads-summary and edit the project.clj file to have a dependency on Incanter.

(defproject goodreads-summary "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [org.clojure/data.csv "0.1.2"]
                 [incanter "1.5.4"]
                 [clj-time "0.6.0"]])

Next I’m going to transform the csv file into an Incanter dataset. This is easily done with incanter.io/read-dataset. It isn’t well documented, but passing :keyword-headers false to read-dataset keeps the headers from the csv from being converted to keywords. I’m doing this because some of the goodreads headers contain spaces, and dealing with spaces in keywords is a pain. The snippet below has all of the necessary requires for the remainder of the examples.

(ns goodreads-summary.core
  (:require [clojure.data.csv :as csv]
            [clojure.string :as str]
            [incanter.core :as incanter]
            [incanter.io :as io]
            [incanter.charts :as charts]
            [clj-time.core :as tc]
            [clj-time.format :as tf]))

(defn read-csv [filepath]
  (io/read-dataset filepath :header true :keyword-headers false))

Calling read-csv with the path to the exported goodreads data results in a dataset. If you want to view the data use incanter.core/view. Running (incanter/view (read-csv "goodreads_export.csv")) pops up a grid with all the data. I don’t care about most of the columns, so let’s define a function that selects the few I care about.

(defn select-columns [dataset]
  (incanter/sel dataset :cols ["Number of Pages" "Date Read" "Bookshelves" "Exclusive Shelf"]))

Selecting columns is done with incanter.core/sel. Like most Incanter functions it has many overloads. One way to use it is to pass a dataset with a vector of columns you want to select.
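A couple of the other overloads, as a sketch (the row range here is an arbitrary example):

```clojure
;; Select only the first five rows of the chosen columns.
(incanter/sel dataset :rows (range 5) :cols ["Number of Pages" "Date Read"])

;; Selecting a single column returns a sequence of values instead of a dataset.
(incanter/sel dataset :cols "Number of Pages")
```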

Filtering a dataset is done using incanter.core/$where. Goodreads has three default shelves: to-read, currently-reading, and read. To select all your finished books, you filter the Exclusive Shelf column for the value read.

(defn finished [dataset]
  (incanter/$where {"Exclusive Shelf" "read"} dataset))

Filtering for books read in 2013 is a bit more complicated. First I convert the Date Read column from a string to an org.joda.time.DateTime. This is done with the combination of transform-date-read-column and parse-date. Some of my data is missing a Date Read value; I’m choosing to handle this by treating missing dates as the result of (clj-time.core/date-time 0).

The $where in books-read-in-2013 is a bit more complicated than the filtering in finished. Here I’m providing a predicate to use instead of just doing an equality comparison.

(defn parse-date [date-str]
  (if date-str
    (tf/parse (tf/formatter "yyyy/MM/dd") date-str)
    (tc/date-time 0)))

(defn transform-date-read-column [dataset]
  (incanter/transform-col dataset "Date Read" parse-date))

(defn date-greater-than-pred [date]
  (fn [challenger]
    (> (.compareTo challenger date) 0)))

(defn books-read-in-2013 [dataset]
  (let [finished (finished (select-columns dataset))
        with-dates (incanter/$where {"Date Read" {:fn identity}} finished)
        with-date-objects (transform-date-read-column with-dates)]
    (incanter/$where {"Date Read" {:fn (date-greater-than-pred (parse-date "2012/12/31"))}}
                     with-date-objects)))

Now we have a dataset that contains only books read in 2013 (well, until I read a book in 2014, since the filter above also grabs books read after 2013). Now to generate some analytics for each month. First let’s add a Month column to our data. Originally I wrote the function below. It uses incanter.core/$map to generate the data, makes a dataset with the new data, and then adds that to the original dataset.

(defn add-month-read-column [dataset]
  (let [month-read (incanter/$map tc/month "Date Read" dataset)
        month-dataset (incanter/dataset ["Month"] month-read)
        with-month-read (incanter/conj-cols dataset month-dataset)]
    with-month-read))

When I wrote the above code it seemed like there should be a better way. While writing this post I stumbled across incanter.core/add-derived-column. Switching to add-derived-column makes add-month-read-column almost trivial.

(defn add-month-read-column [dataset]
  (incanter/add-derived-column "Month" ["Date Read"] tc/month dataset))

Now that we have add-month-read-column we can start aggregating some stats. Let’s write code for calculating the pages read per month.

(defn pages-by-month [dataset]
  (let [with-month-read (add-month-read-column dataset)]
    (->> (incanter/$rollup :sum "Number of Pages" "Month" with-month-read)
         (incanter/$order "Month" :asc))))

That was pretty easy. Let’s write a function to count the number of books read per month.

(defn book-count-by-month [dataset]
  (let [with-month-read (add-month-read-column dataset)]
    (->> (incanter/$rollup :count "Number of books" "Month" with-month-read)
         (incanter/$order "Month" :asc))))

pages-by-month and book-count-by-month are very similar. Each uses incanter.core/$rollup to calculate per month stats. The first argument to $rollup can be a function that takes a sequence of values or one of the supported magical “function identifier keywords”.
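For example, the :sum rollup in pages-by-month could also be written with an explicit function over the grouped page counts. A sketch, not what the code above actually uses (with-month-read refers to the dataset built in pages-by-month):

```clojure
;; Equivalent to (incanter/$rollup :sum "Number of Pages" "Month" with-month-read):
;; the function receives the sequence of page counts for each month.
(incanter/$rollup (fn [pages] (apply + pages))
                  "Number of Pages" "Month" with-month-read)
```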

Next let’s join the data together so we can print out a nice table. While we are at it, let’s add another column.

(defn stats-by-month [dataset]
  (->> (incanter/$join ["Month" "Month"]
                     (pages-by-month dataset)
                     (book-count-by-month dataset))
       (incanter/rename-cols {"Number of Pages" "Page Count"
                              "Number of books" "Book Count"})
       (incanter/add-derived-column "Pages/Books"
                                  ["Page Count" "Book Count"]
                                  (fn [p b] (Math/round (double (/ p b)))))))

stats-by-month joins the data, renames columns, and adds a derived column. It returns a dataset which, when printed, looks like the following table.

| Month | Book Count | Page Count | Pages/Books |
|-------+------------+------------+-------------|
|     1 |          6 |       1279 |         213 |
|     2 |          2 |       1251 |         626 |
|     3 |          8 |       2449 |         306 |
|     4 |          5 |       1667 |         333 |
|     5 |          6 |       2447 |         408 |
|     6 |          5 |       1609 |         322 |
|     7 |          5 |       1445 |         289 |
|     8 |          5 |       2229 |         446 |
|     9 |          2 |        963 |         482 |
|    10 |          5 |       1202 |         240 |
|    11 |          5 |       2248 |         450 |
|    12 |          7 |       1716 |         245 |

Great. Now we have a little ascii table. Let’s get graphical and make some bar charts.

(defn chart-column-by-month [column dataset]
  (let [select (fn [column] (incanter/sel dataset :cols column))
        months (select "Month")]
    (charts/bar-chart months (select column)
                      :y-label column :x-label "Month")))

(defn chart-page-count-by-month [dataset]
  (chart-column-by-month "Page Count" dataset))

(defn chart-book-count-by-month [dataset]
  (chart-column-by-month "Book Count" dataset))

(defn view-page-count-chart []
  (-> (read-csv "goodreads_export.csv")
      books-read-in-2013
      stats-by-month
      chart-page-count-by-month
      incanter/view))

Running view-page-count-chart produces a pop-up with the below bar chart. The chart actually surprises me, as I fully expected to have higher page counts during the winter months than the summer months. This chart and analysis is pretty useless, though, without knowing the difficulty of the pages read. For example, last February I read Infinite Jest. Knowing that, I don’t feel like the low page count that month was slacking at all.

Bar chart of total page count by month

2013 Summary

2013 was a pretty big year of reading. I read more books this past year than in any other year for which I have data. I also read some of the best books I’ve ever read. Not only that, but I created multiple 3 custom Kindle dictionaries to help improve my (and others’) reading experience.

Summary table 4:

|   :shelf | :books | :pages |
|----------+--------+--------|
| non-tech |     51 |  17798 |
|     tech |     10 |   2707 |
|     read |     61 |  20505 |

Plans for 2014

I’m planning on reading a similar amount this upcoming year, but probably with a bit more non-fiction. The first step towards doing that is to start classifying my books as non-fiction or fiction. I’m also planning on rereading at least two books that I’ve read in the last few years. This is unusual for me because I don’t often reread books that quickly.

If you have any book recommendations feel free to leave them in the comments or contact me through twitter or email.


  1. A project on Heroku that takes your to-read list from goodreads and queries the Chicago Public Library to see if books are available. Someday I’ll give it some love and make it usable by others.

  2. I’ve also applied to be a goodreads librarian so I can actually fix their data as well.

  3. One for Functional JavaScript and another for Dune. If you want a custom Kindle dictionary made feel free to reach out.

  4. tech shelf only includes programming books.