I recently became interested in how Clojure testing libraries help you
when there is a test failure. This interest led me to explore several of
them: I created the same tests using clojure.test (with and without
humane-test-output), expectations, Midje, and Speclj and looked at the output.
I ran all of these examples using Leiningen. Midje, Speclj, and
expectations color their output, but I'm not going to try to reproduce
that here. The color added by Midje and expectations is useful. Speclj's
color hurt its readability: I use a dark-colored terminal, and Speclj
colors the line that tells you where the failure occurred black, which
made it hard to read.
I'm not going to show what the tests look like for each testing
library past the first comparison. How a test is expressed is
important, but it is not what I want to focus on in this post.
Comparing Strings
I'm going to start off with a basic string comparison. The failing test
compares two strings that differ by only one character.
clojure.test
Most (hopefully all) Clojure programmers should be familiar with
clojure.test. It is the testing library that is included with Clojure.
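A minimal clojure.test version of the failing comparison looks something
like this (a sketch that reuses the "spice" and "space" strings from the
Midje example later in the post; the namespace and test names are made up):

(ns example.strings-test ; hypothetical namespace
  (:require [clojure.test :refer :all]))

(deftest strings-are-equal
  (is (= "string is equal" "string is equal")))

(deftest strings-not-equal
  ;; Fails: the strings differ by a single character.
  (is (= "space" "spice")))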
The output below is what you get when the above test runs. Even in
this simple example it isn’t the easiest to read. It doesn’t make it
easy to find the expected or actual values.
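Enabling humane-test-output is just a matter of adding the dependency and
an :injections entry to the Leiningen :dev profile, along these lines (a
sketch based on its README; the version number here is illustrative):

;; project.clj, :dev profile (version number illustrative)
:profiles {:dev {:dependencies [[pjstadig/humane-test-output "0.6.0"]]
                 :injections [(require 'pjstadig.humane-test-output)
                              (pjstadig.humane-test-output/activate!)]}}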
Below is the same test but with humane-test-output enabled. It is easy
to read the output and see the expected and actual values. It even
provides a diff between them, although in this situation it isn't that useful.
The output from expectations is very readable. You can easily pick out
the expected and actual values. It also shows you where the
string starts to diverge.
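In expectations an assertion is a bare expect form with the expected
value first; the failing string test looks roughly like this (a sketch;
the namespace name is a guess):

(ns example.string-expectations ; hypothetical namespace
  (:require [expectations :refer :all]))

;; expect takes the expected value first, then the actual value.
(expect "string is equal" "string is equal")
(expect "space" "spice")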
Speclj
Before writing this post I had zero experience with Micah Martin’s
Speclj. Below is my translation of the failing
string test and its output.
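A Speclj version of the failing assertion looks roughly like this (a
sketch; should= takes the expected value first, and the describe/it
descriptions are made up):

(ns example.string-spec ; hypothetical namespace
  (:require [speclj.core :refer :all]))

(describe "string comparisons"
  (it "has a nice error message"
    (should= "space" "spice")))

(run-specs)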
Speclj’s test output above is an improvement over clojure.test. You
can easily find the expected and actual values. It doesn’t provide any
help with diagnosing how those values are different.
Midje
I have a little bit of experience with Brian Marick’s
Midje. Unlike the other libraries
it switches up the assertion syntax. In Midje the expected value is on
the right side of =>.
(ns example.string-test
  (:require [midje.sweet :refer :all]))

(fact "strings are equal"
  "string is equal" => "string is equal")

(fact "strings not equal"
  "spice" => "space")
Midje
FAIL "strings not equal" at (string_test.clj:8)
    Expected: "space"
      Actual: "spice"
Midje’s output is similar to Speclj’s. You can quickly find the
expected and actual values but it doesn’t help you spot the
difference.
String Comparison Winner
expectations wins for best output. You can easily spot the expected
and actual values and it also helps you find the difference between
the strings.
The worst output comes from clojure.test. It doesn’t make it easy to
spot the difference or even find the expected and actual values.
Comparing Maps
For maps I've set up three assertions. The first has an extra key-value
pair in the actual value. The second has an extra key-value pair in the
expected value. The final assertion has a different value for the :cheese
key. The clojure.test example is below.
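That example looks roughly like the following (a sketch reconstructed
from the maps that appear in the failure output later in this section;
the namespace and test names are made up):

(ns example.maps-test ; hypothetical namespace
  (:require [clojure.test :refer :all]))

(deftest map-comparisons
  ;; actual value has an extra key-value pair
  (is (= {:sheep 1} {:sheep 1 :cheese 1}))
  ;; expected value has an extra key-value pair
  (is (= {:sheep 1 :cheese 1} {:sheep 1}))
  ;; different value for the :cheese key
  (is (= {:sheep 1 :cheese 5} {:sheep 1 :cheese 1})))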
Unsurprisingly the default clojure.test output for maps suffers from
the same problems found in the string comparisons. To find the
actual and expected values you need to manually parse the output.
Above is the output of using clojure.test with humane-test-output. It
is a big improvement over the default clojure.test. You can quickly
see the expected and actual values. Unlike with the string assertions
the diff view is actually helpful. The diffs do a good job of helping
you identify the error.
expectations
failure in (map_expectations.clj:6) : example.map-expectations
(expect {:sheep 1} {:sheep 1, :cheese 1})
           expected: {:sheep 1}
                was: {:cheese 1, :sheep 1}
           in expected, not actual: null
           in actual, not expected: {:cheese 1}

failure in (map_expectations.clj:7) : example.map-expectations
(expect {:sheep 1, :cheese 1} {:sheep 1})
           expected: {:cheese 1, :sheep 1}
                was: {:sheep 1}
           in expected, not actual: {:cheese 1}
           in actual, not expected: null

failure in (map_expectations.clj:8) : example.map-expectations
(expect {:sheep 1, :cheese 5} {:sheep 1, :cheese 1})
           expected: {:cheese 5, :sheep 1}
                was: {:cheese 1, :sheep 1}
           in expected, not actual: {:cheese 5}
           in actual, not expected: {:cheese 1}
expectations does a pretty good job helping you as well. As before,
you can clearly read the expected and actual values. expectations also
provides some hint as to what is different between the maps. I find
the English descriptions a bit easier to read than
humane-test-output's diff view. Still, seeing lines like
"in expected, not actual: null" is a bit confusing, and the output
would be improved if they were suppressed.
I’m just going to lump Speclj and Midje together. The output for each
is below. They both improve over clojure.test by making it easy to see
the expected and actual values. Neither does anything beyond that.
Speclj
4) map comparisons have nice error messages when extra entries keys present
     Expected: {:sheep 1}
          got: {:cheese 1, :sheep 1} (using =)
     /Users/jake/src/jakemcc/example/spec/example/map_spec.clj:7

5) map comparisons have nice error messages when missing an entry
     Expected: {:cheese 1, :sheep 1}
          got: {:sheep 1} (using =)
     /Users/jake/src/jakemcc/example/spec/example/map_spec.clj:9

6) map comparisons have nice error messages when mismatched values
     Expected: {:cheese 5, :sheep 1}
          got: {:cheese 1, :sheep 1} (using =)
     /Users/jake/src/jakemcc/example/spec/example/map_spec.clj:11
Midje
FAIL "map is missing an entry" at (map_test.clj:5)
    Expected: {:cheese 1, :sheep 1}
      Actual: {:sheep 1}

FAIL "map has an extra entry" at (map_test.clj:8)
    Expected: {:sheep 1}
      Actual: {:cheese 1, :sheep 1}

FAIL "map has a different value" at (map_test.clj:11)
    Expected: {:cheese 5, :sheep 1}
      Actual: {:cheese 1, :sheep 1}
Map Comparison Winner
Tie between humane-test-output and expectations. Both do a
good job of helping the reader spot the difference.
Comparing Sets
Next up are sets. There are only two assertions in this section: one
where the actual value has an extra member and one where it is missing
a member.
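In clojure.test those assertions look roughly like this (a sketch using
the sets from the failure output later in this section; the namespace
and test names are made up):

(ns example.sets-test ; hypothetical namespace
  (:require [clojure.test :refer :all]))

(deftest set-comparisons
  ;; actual value has an extra member
  (is (= #{:a :b} #{:a :b :c}))
  ;; actual value is missing a member
  (is (= #{:a :b :c} #{:a :b})))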
First up is the basic clojure.test output. It suffers from the same
problem it has had this entire time: it doesn't make it easy to read
the expected and actual values.
No surprises with humane-test-output. It improves the clojure.test
output by making it easy to read the expected and actual values. The
diff view also helps figure out what is causing the assertion to fail.
expectations once again delivers nice output. It continues to be easy
to find the expected and actual values and helps you spot the
differences with a diff view.
expectations
failure in (set_expectations.clj:4) : example.set-expectations
(expect #{:b :a} #{:c :b :a})
           expected: #{:b :a}
                was: #{:c :b :a}
           in expected, not actual: null
           in actual, not expected: #{:c}

failure in (set_expectations.clj:5) : example.set-expectations
(expect #{:c :b :a} #{:b :a})
           expected: #{:c :b :a}
                was: #{:b :a}
           in expected, not actual: #{:c}
           in actual, not expected: null
Speclj and Midje both have better output than the basic clojure.test.
Speclj
7) set comparisons have nice error messages when missing item
     Expected: #{:b :a}
          got: #{:c :b :a} (using =)
     /Users/jake/src/jakemcc/example/spec/example/set_spec.clj:9

8) set comparisons have nice error messages when more items
     Expected: #{:c :b :a}
          got: #{:b :a} (using =)
     /Users/jake/src/jakemcc/example/spec/example/set_spec.clj:11
Midje
FAIL "set is superset of expected" at (set_test.clj:5)
    Expected: #{:a :b}
      Actual: #{:a :b :c}

FAIL "set is subset of expected" at (set_test.clj:8)
    Expected: #{:a :b :c}
      Actual: #{:a :b}
Set Comparison Winner
Similar to the winner of the map comparisons I’m going to split the
victory between expectations and humane-test-output.
Comparing Lists
Next up we compare lists (and lists to vectors). There are three
comparisons: one with an extra element, one with the same length but a
mismatched element, and one comparing a vector and a list with drastically
different contents.
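In clojure.test those comparisons look roughly like this (a sketch using
the values from the failure output later in this section; the namespace
and test names are made up):

(ns example.seqs-test ; hypothetical namespace
  (:require [clojure.test :refer :all]))

(deftest list-comparisons
  ;; extra element in the actual value
  (is (= '(1 2 3) '(1 2 3 4)))
  ;; same length, one mismatched element
  (is (= '(1 2 4) '(1 2 3)))
  ;; a list compared against a vector with different contents
  (is (= '(9 8 7) [1 2 3])))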
Once again humane-test-output improves upon clojure.test. The only
interesting difference from the previous comparisons is that the diff
view ends up containing nil values where the elements are the same.
expectations continues to have good output. It tries to help you out
as well. You’ll notice that it also has nil values inserted
where the lists are the same.
expectations
failure in (list_expectations.clj:4) : example.list-expectations
(expect '(1 2 3) '(1 2 3 4))
           expected: (1 2 3)
                was: (1 2 3 4)
           in expected, not actual: null
           in actual, not expected: [nil nil nil 4]
           actual is larger than expected

failure in (list_expectations.clj:5) : example.list-expectations
(expect '(1 2 4) '(1 2 3))
           expected: (1 2 4)
                was: (1 2 3)
           in expected, not actual: [nil nil 4]
           in actual, not expected: [nil nil 3]

failure in (list_expectations.clj:6) : example.list-expectations
(expect '(9 8 7) [1 2 3])
           expected: (9 8 7)
                was: [1 2 3]
           in expected, not actual: [9 8 7]
           in actual, not expected: [1 2 3]
Unsurprisingly, Speclj and Midje are better than clojure.test but
again don't go beyond making it easy to find the expected and actual
values.
Speclj
1) List/vector comparisons when there is an extra element
     Expected: (1 2 3)
          got: (1 2 3 4) (using =)
     /Users/jake/src/jakemcc/example/spec/example/string_spec.clj:7

2) List/vector comparisons when there is a mismatched element
     Expected: (1 2 4)
          got: (1 2 3) (using =)
     /Users/jake/src/jakemcc/example/spec/example/string_spec.clj:9

3) List/vector comparisons when comparing different types
     Expected: (9 8 7)
          got: [1 2 3] (using =)
     /Users/jake/src/jakemcc/example/spec/example/string_spec.clj:11
Midje
FAIL "lists are different sizes" at (seq_test.clj:5)
    Expected: (1 2 3)
      Actual: (1 2 3 4)

FAIL "lists have different entries" at (seq_test.clj:8)
    Expected: (1 2 4)
      Actual: (1 2 3)

FAIL "compare very different list like values" at (seq_test.clj:14)
    Expected: (9 8 7)
      Actual: [1 2 3]
List Comparison Winner
I find the clojure.test with humane-test-output to be a bit easier to
read than expectations. Both have better output than the basic
clojure.test, Speclj, and Midje.
Overall Winner
If I were picking a testing library based entirely on what a failing
test looks like I would use expectations. My second pick would be
clojure.test with humane-test-output.
It is great that Clojure ships with clojure.test. It is unfortunate
that it does so little to help you read a failing test. Every library
I tried has better output than clojure.test.
Addendum
Added 2014/06/23
Colin Jones
points out
that Speclj provides should==. should== checks that the expected
and actual values have the same contents. He provided a
gist that
shows the difference.
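Using should== looks something like this (a sketch, not Colin's gist;
the namespace and spec descriptions are made up):

(ns example.contents-spec ; hypothetical namespace
  (:require [speclj.core :refer :all]))

(describe "set comparisons"
  (it "checks contents with should== instead of = with should="
    ;; Per the addendum, should== asserts that expected and actual have
    ;; the same contents; this fails because :c is missing from actual.
    (should== #{:a :b :c} #{:a :b})))

(run-specs)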