I was recently approached by Packt Publishing asking if I’d review
Shantanu Kumar’s book
Clojure High Performance Programming
. It sounded interesting so I took them up on
their offer for a free copy and read it over two flights.
The table of contents
does a good job describing the book. This book doesn’t
dive too deep into any one topic but instead gives you a taste of each.
Overall the book was pretty good. It provides interesting examples of
real world Clojure code that solve specific performance problems. It
covers host performance concerns, both JVM and hardware, areas that
shouldn’t be overlooked. I thought the book was at its best
when showing examples of well-performing code from libraries.
I’d recommend this book for developers who aren’t past the beginning
stages of writing performant code. It does a good job introducing the
topics you’ll want to think about when trying to craft well-performing code.
It isn’t for the developer who has spent years optimizing code for
performance. Those developers are already going to be familiar with
the language and concerns of writing high performance code.
If I could add anything to the book it would be a chapter about
measuring performance in production. If you are writing high
performance programs it has been my experience that you must
measure in production. This is easiest to do if you build measuring in
from the very beginning.
I recently switched companies and find myself working
on a project that uses clojure.test. I haven’t worked with
clojure.test since I started using expectations with lein-autoexpect. That
combination spoiled me when it comes to testing Clojure code. I
can no longer stand running tests by hand; I’m too used to having a
tool run them for me. As a result I tried out some
clojure.test continuous testing tools.
I wasn’t satisfied with what I found. Since I wrote lein-autoexpect,
a continuous tester for expectations, it was easy for me to fork it
and create lein-test-refresh.
lein-test-refresh solves the issues I ran into with the other tools I tried.
To use lein-test-refresh follow these steps (latest version found in
image at end):

1. Add [com.jakemccrary/lein-test-refresh "0.1.2"] to the :plugins
   section in your project.clj or ~/.lein/profiles.cljfile.
2. Run lein test-refresh or lein test-refresh :growl.
3. Enjoy minimal feedback delays between editing your Clojure
   code and seeing if your tests pass.
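For reference, a minimal project.clj with the plugin added might look like the following sketch (the project name, version, and Clojure version here are placeholders, not from the post):

```clojure
;; Hypothetical project.clj; only the :plugins entry matters here.
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.5.1"]]
  :plugins [[com.jakemccrary/lein-test-refresh "0.1.2"]])
```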
lein-test-refresh watches the source and test directories specified in your
project.clj and reloads code when files change. After reloading
your code, your clojure.test tests are run and the output is printed
to your console. When you pass :growl as a command line argument the
plugin uses Growl to notify you of successes and failures. This is
one of my favorite features about lein-test-refresh as it allows me
to continuously run my tests without taking up space on my monitors.
I hope you enjoy lein-test-refresh. It
has made using clojure.test much more enjoyable.
My text editor of choice is Emacs. Its extensibility is a major
contributor to this preference. The ease of adding additional
functionality means you can customize it to your liking. You should
not go overboard and change too much of the default behavior but you
should feel free to add additional features.
I recently found myself often editing a file in Emacs and then
switching to a terminal and running a bash script to see how the
output changed. This is part of my work flow for shutting down or
starting new server processes. Since this is something I’ll be doing
quite frequently in the future, I wrote some Emacs Lisp to run the
shell script and display the output in a temporary buffer. With this
function in place I no longer have to toggle to a terminal and run a script by hand.
I’m picky and I wanted this output buffer to have the same behavior as
the help buffer. That is, I wanted to be able to close the buffer by
just hitting the letter q. It took me a while to figure out how to
do this so I thought I would share it here in hopes it might benefit others.
First I’ll show the code and then I’ll explain what it is doing.
(defun blog-example ()
  (interactive)
  (with-output-to-temp-buffer "*blog-example*"
    (shell-command "echo This is an example" "*blog-example*" "*Messages*")
    (pop-to-buffer "*blog-example*")))
The above snippet defines a function named blog-example. It takes no
arguments and is interactive (as indicated by the second line calling
interactive). This call to interactive makes blog-example
available to be called interactively, meaning you can call it after
triggering M-x. This is probably a simplification of what it
actually does, so if you care, the documentation is available.
After the call to interactive we hit the core of this function, the
call to with-output-to-temp-buffer. This function takes a buffer name as its first argument
and additional forms. The output of those forms is put into the named buffer.
The form I’m passing to with-output-to-temp-buffer is a call to
shell-command. shell-command will run echo This is an example
synchronously and redirect stdout to *blog-example* and stderr to *Messages*.
The final line opens the buffer and switches focus to it. Now you can
look at the output and when you are ready to return just hit q.
This is a simplified example but it shows how easy it is to extend
Emacs functionality. Doing something similar to this made a task I do
frequently more pleasant.
My use case is a bit more complicated and involves saving the buffer
I’m currently editing and then running a command against the saved
file. Below is some sample code that does something similar.
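A minimal sketch of that save-then-run workflow might look like the following (the script path and buffer name are hypothetical placeholders, not from my actual setup):

```
;; Sketch: save the current buffer, run a (hypothetical) shell script
;; against the saved file, and show the output in a temporary buffer
;; that can be dismissed with `q'.
(defun run-script-on-buffer-file ()
  (interactive)
  (let ((file (buffer-file-name)))
    (save-buffer)
    (with-output-to-temp-buffer "*script-output*"
      (shell-command (concat "~/bin/example.sh "
                             (shell-quote-argument file))
                     "*script-output*"
                     "*Messages*")
      (pop-to-buffer "*script-output*"))))
```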
I put together a new release of lein-autoexpect.
lein-autoexpect is a plugin for Leiningen
that monitors your source directories for changes and then reloads
your code and runs your
expectations. It reports
test output to the console and optionally sends notifications to
Growl (and Growl like notification tools).
To use lein-autoexpect, add :plugins [[lein-autoexpect "1.0"]] to
either your project’s project.clj or your global
~/.lein/profiles.clj. To use the plugin run lein autoexpect.
This will display the test results to the console. To also have
results reported using Growl run lein autoexpect :growl.
Release 1.0 of lein-autoexpect upgrades its dependency on expectations
to version 0.2.4. It also no longer crashes if Growl isn’t available.
If you haven’t tried using expectations and lein-autoexpect I
encourage you to give it a try. Having my tests run automatically made
a huge positive difference on my development experience.
You may have seen me tweeting about building custom Kindle dictionaries. A few months ago I made a custom Dune dictionary.
I was taking my time reading Fogus’s book and, as a result, found myself forgetting the implementation of functions defined earlier in the book. I wanted to be able to look up implementations easily and realized that a dictionary of function names to implementations would solve my problem.
I found the book’s repo and confirmed the license would allow this. Then I extracted the data (I wrote a simple parser in Clojure that extracts functions following the book’s format) and made a dictionary.
Steps to using my custom dictionary:
Put it on your e-ink Kindle (transfer over USB or email it).
This dictionary isn’t perfect but it did improve my reading experience. One example of where it fails is if you look up the function partial1 it will look up partial. This is a result of how the Kindle looks up words. Another minor issue is that the functions are often too large to fit in the pop-up window. The fix to both of these is to click the “Show Full Definition” button of the pop-up to be taken to the dictionary. Another issue is that the numerous functions defined by composition (example: isOdd) are not parsed by my parser and therefore not part of the dictionary.
This was definitely a larger challenge than creating my custom Dune dictionary. It forced me to dive into the Amazon documentation a bit and figure out more of the markup language. I have notes on my experience creating Kindle dictionaries and sometime in the future will be writing a post with details about what I’ve learned.
I spent the last week reading the Clojure Data Analysis Cookbook [1] by Eric Rochester. As you may expect from the name, this book follows a traditional cookbook format. Each section presents a goal and then some code which achieves the goal.
The text covers a variety of data analysis topics. Some include reading data from files, machine learning, graphing, and interfacing with other analysis tools. I particularly enjoyed the section on lazily processing large data sets. I find this is an area of frustration for many and this should serve as a reference to point them towards.
The examples are fairly easy to follow. Many of the examples use require to alias dependent namespaces. I think this is key when presenting Clojure examples. Having to prefix calls to library functions causes them to stand out from uses of core Clojure functions. It also lets readers know which library each function comes from. I would have liked to see all of the examples use require instead of use for pulling in dependencies because of the clarity it brings.
I do have a nit-picky complaint about this book (in particular, the PDF I received from the Packt Publishing website). While the vast majority of the code examples were well formatted, every once in a while one would be poorly formatted. Poorly formatted code in a book all about showing code is disappointing and interrupts the flow of reading a recipe. One example of this is found in the first step of chapter 3’s “Combining agents and STM” recipe.
Would I recommend getting this book? If any section in the table of contents sounds useful to you then yes, you should buy the book. It will be a useful reference.
Would I recommend reading this book front to back? Probably not. I would recommend reading sections that interest you and skimming others.
Just like a food cookbook’s purpose (usually) isn’t to teach you how to cook, this book will not teach you how to write Clojure. It will help you become better at specific tasks.
[1] I was given this book to review by Packt Publishing. If you think you have something interesting to read and want another set of eyes on it, feel free to reach out. Depending on the topic I’m willing to give feedback before publication or potentially write a review after.
I’m the type of computer user that wants an organized
workspace. To me this means having my active applications organized
into a grid. Efficiently doing this is important to me. Before I jump
into what tools I use let me give a quick explanation of what
organized into a grid means to me.
Imagine that your screen is divided both vertically and
horizontally. To me a good tool for managing windows lets you take
your active application and move it so it fits in any rectangle formed
by the edges of your screen and those two lines splitting your
monitor. This means that with a keystroke you can make the active
window take up the full screen, half screen, or quarter screen. Below
I’ve listed the tools that let me do that.
I’ve switched to using i3, a
tiling window manager,
instead of the default window manager on every distribution I use. When using
i3 the tiling is done automatically. There are hotkeys for changing
window layout and for moving focus between windows. The automatic tiling and
shortcuts take a bit to get used to, but now that I am used to them I
can’t believe I worked any other way. I switched to using a tiling
window manager sometime in the last eight months.
When developing under Windows I use
Winsplit Revolution. Unlike i3,
Winsplit Revolution only provides hotkeys for snapping windows to
different locations. This is admittedly more approachable than i3, as
the grid isn’t forced on you. Winsplit Revolution is pretty flexible;
you can change shortcuts and even define your own grid.
I can’t remember when I started using Winsplit Revolution but it has
become a vital tool for when I’m stuck doing development on a Windows machine.
My only OS X machine is my 13 inch MacBook Air. I always thought that
with such a small screen being able to tile my windows wouldn’t be as
useful. I was completely wrong. If anything it may be more useful
because of the tiny screen real estate. The 13 inch screen is just wide
enough to have an editor up on one half and documentation on the other.
The tool I use to snap my windows to a grid is
Spectacle. Spectacle provides some
sensible keystrokes for moving windows around. The hotkeys are
similar to Winsplit Revolution’s which makes switching between
operating systems easy.
If you haven’t tried using a tool to help you organize your windows I
highly recommend that you do. I’ve introduced both technical and
non-technical people to these tools and everyone has enjoyed them.
On my Ubuntu desktop the volume at 100% is often too quiet. With Ubuntu’s default window manager I could open up the graphical “Sound Preferences” and bump the volume to above 100%. After using i3 window manager for a while I found myself missing this and took the time to figure out how to do it from the command line.
Ubuntu uses PulseAudio to handle sound related tasks. The tool pacmd allows you to change PulseAudio settings, such as volume, on the fly. The command is pacmd set-sink-volume <sink-index> <sink-volume> where <sink-index> is an identifier for your output device and <sink-volume> is an integer greater than or equal to zero. Zero represents muted and 65536 represents 100% volume. <sink-index> is the index found in the output from the pacmd list-sinks for your output card. In my case it is 0.
The below script makes changing volume with pacmd straightforward. I’m using Perl to convert a percentage into the proper units for the argument. Using this script, if you want to pull a Spinal Tap and go above 100% you simply pass in a number greater than 100.
#!/bin/bash
if [ "$1" == "" ]; then
    echo "Need to supply a percentage"
    exit 1
fi
vol=$(perl -e "print int(65536 * ($1 / 100))")
echo "Setting volume to $1 ($vol)"
pacmd set-sink-volume 0 $vol
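As a quick sanity check of the conversion, plain shell arithmetic gives the same result as the Perl one-liner (65536 units is 100%):

```shell
# Same conversion as the Perl one-liner, using bash arithmetic.
# 65536 units == 100%, so 150% (Spinal Tap territory) is 98304.
pct=150
vol=$(( 65536 * pct / 100 ))
echo "$vol"   # prints: 98304
```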
Ran into a situation where I needed to replace certain files in a directory tree with files from a similarly shaped directory tree. The other files in each tree needed to remain untouched. Below is an example of the directory structure.
The goal is to copy instruments.csv from the sub-directories of other-dir to the matching sub-directories of target-dir. In the past I’ve solved this by being in the other-dir directory and banging out a for loop at the command line (other-dir$ is the bash prompt).

other-dir$ for d in $(ls); do cp $d/instruments.csv ../target-dir/$d/; done
One feature (or issue) with this approach is that if a sub-directory exists in other-dir but not in target-dir that sub-directory will not be created in target-dir.
I took a bit of time to explore other ways of accomplishing this task and stopped after coming up with two additional ways.
The second approach is basically the same as the first solution. It uses find to generate the list of files and then constructs cp commands. It also doesn’t create sub-directories in target-dir.
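A sketch of that find-plus-cp approach, using a throwaway directory layout for illustration:

```shell
# Throwaway layout for illustration.
mkdir -p other-dir/a target-dir/a
echo "data" > other-dir/a/instruments.csv

# Generate the file list with find, then copy each file into the
# matching sub-directory of target-dir. Like the for loop, this fails
# when the sub-directory doesn't already exist under target-dir.
(cd other-dir && find . -name instruments.csv | while read -r f; do
  cp "$f" "../target-dir/$f"
done)

cat target-dir/a/instruments.csv   # prints: data
```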
The next example has different behavior from the above cp solutions. Like the second solution, it generates a list of files to copy using find but then uses rsync with the --files-from flag to mirror those files under target-dir. Unlike the cp based solutions, sub-directories of other-dir that do not exist in target-dir will be created.
I’m sure there are many more ways of accomplishing this task. Figuring out the three above was enough for me. They are fairly straightforward and don’t depend on esoteric command line flags. The solution I use in the future will depend on whether or not I need sub-directories created in the target directory.
Very few coders would debate the wisdom of breaking a project into smaller libraries. One complaint about breaking a project into tinier libraries is the added hassle of making changes simultaneously to multiple projects at once. Constantly releasing a library so another project can pick up changes is annoying and slows you down. Luckily for us in a Clojure project using Leiningen it is simple to make changes to a library and then use those changes without needing to perform a release.
This is accomplished by using the checkouts directory feature of Leiningen. This is a feature that, despite being listed in the Leiningen FAQ, I only recently discovered. To make your Clojure project (from now on calling this the main project) depend on the source of another project simply make a checkouts directory in your main project’s root directory and then in checkouts link to the root of the library’s project. This causes the library to be added to the main project’s classpath. Now you can make changes to the main project and its dependencies without going through the hassle of releasing new versions of the library for every change.
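A minimal sketch of the setup (the directory names and paths here are placeholders, not from the post):

```shell
# Stand-in directories for a library project and the main project.
mkdir -p subproject
mkdir -p main-project/checkouts

# Symlink the library's project root into the main project's checkouts
# directory; Leiningen then puts the library's source on the classpath.
ln -s "$(pwd)/subproject" main-project/checkouts/subproject

ls main-project/checkouts   # prints: subproject
```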
In case the above paragraph isn’t clear, here is an example of the main project’s directory structure.
main-project
├── checkouts
│   └── subproject -> /Users/jmccrary/src/temp/subproject/
├── project.clj
├── src
│   └── main_project
│       └── core.clj
└── test
    └── main_project

$ ls checkouts/subproject/
README        project.clj   src           test
Running lein classpath in the main project directory, we can see the classpath has the subproject in it. I’ve edited the lein classpath output to remove most entries not related to subproject and to make it easier to read. As the example shows, the subproject has been added to the classpath.
The Leiningen checkouts directory option is pretty useful. This feature isn’t there to discourage you from releasing versions of a library, but instead is there to facilitate quicker development cycles. I’d encourage you to experiment with it and figure out if it makes you more effective.