Above I jumped from my home directory to the root of this website’s code. Being able to jump between directories by just remembering a name (or part of a name) is great. This frees me from having to remember full paths or set up aliases.
autojump works by keeping a database of “time” spent in directories and jumps to the most frequently visited one that matches SUBSTRING_OF_DIR. If you are curious what that database looks like, the tool jumpstat will give you a view.
I used to set up aliases for jumping between projects, but now that I’ve trained myself to use autojump I don’t think I’ll ever go back. Not having to do any extra work besides simply entering the root directory of new projects to set up efficient directory movement is great. I think that if you give it a shot for a while you’ll find the same benefits.
If you are on a Mac and use Homebrew, you can install it with brew install autojump. For other platforms, check out the GitHub page.
I’m guessing that software you write connects to some other server. I’m also guessing that how it handles disconnects is tested (if it is tested at all) by either killing the process it connects to or by pulling out your network cable. I recently stumbled across a nifty Linux command line tool that makes causing disconnects significantly easier.
This tool is tcpkill. To use tcpkill you specify an interface and a tcpdump style filter and it kills traffic on that interface that matches the filter.
For example, if your application has a connection to 192.168.1.130, then to force a disconnect you would execute tcpkill -i eth0 host 192.168.1.130.
tcpkill can be used for more than forcing disconnects. It can also be used as a simple website filter. If Stack Overflow wastes too much of your time then you could simply leave tcpkill -i eth0 host stackoverflow.com running and enjoy your increased productivity.
tcpkill is a pretty useful tool. If you want to install it in Ubuntu it is found in the dsniff package (apt-get install dsniff).
This post is now out of date. The library recommended by this post is now a contrib library. Check out tools.cli for great documentation about handling command line arguments in Clojure.
Write enough Clojure and eventually you will need to handle command line arguments. There are numerous ways of doing this. Keep reading for a brief introduction to three.
Using built-in features
There exists a sequence named *command-line-args* which contains the arguments to your application. Using it is simple (it is just a sequence, after all), and it is always available to you. There is no need to pull in external dependencies that others may not be familiar with.
This simplicity is also a downside. Because only a sequence is provided for you it is up to you to actually figure out the arguments. If you want to do any sort of verification that certain arguments are supplied you write the code that does the verifying. If you want to move away from positional arguments to using command line flags once again it is up to you to write it.
Because of the amount of code required to do any sort of advanced argument handling I tend to use *command-line-args* only for applications that take a single type of argument, for example a file path, and require one or more of this type of argument.
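That single-argument-type usage can be sketched as follows. This is a hypothetical example, not from the original post; the -main function name and the file-path use case are assumptions.

```clojure
;; Hypothetical script: every command line argument is treated as a file path.
;; *command-line-args* is bound by clojure.main when the script is run.
(defn -main []
  (doseq [path *command-line-args*]
    (println "Processing" path)))
```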
Setup for next two sections
For the next two sections I’m using version 1.5.0 of Leiningen and the specified versions of libraries as stated in the below project.clj file.
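The original project.clj is not reproduced here; below is a sketch of what it plausibly looked like. The project name and the exact dependency versions are assumptions.

```clojure
;; Hypothetical project.clj; dependency versions are assumed, not from the post.
(defproject command-line-example "1.0.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.2.0"]
                 [org.clojure/clojure-contrib "1.2.0"]
                 [clargon "1.0.0"]])
```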
I’m using lein run to run the examples. lein run :cc runs the clojure.contrib example. Likewise, running lein run :clargon will run the clargon examples. Both of these commands can be followed by additional arguments that get passed to the application.
The next step after using *command-line-args* is to use the library clojure.contrib.command-line. This library provides the function with-command-line that allows you to specify requirements and then handles the parsing of the command line arguments for you.
Positives of using clojure.contrib.command-line:
* Part of clojure.contrib. Probably extremely low friction to start using it.
* No longer need to write your own command line parsing code.
* Responds to -h and --help.
A negative of using clojure.contrib.command-line is that the documentation is pretty sparse. This can lead to some fumbling around as you learn how to use it. Another downside is that there isn’t a way of specifying whether an argument is required or optional. This means you must manually check for required arguments and give appropriate error messages to the user.
Below is an example of using clojure.contrib.command-line. It specifies a few different arguments. The --cow argument has a default value of “cow”. --chicken has no default value, if it is left unspecified it will be nil. The line with milk? specifies a boolean value. If --milk (or -m because of the m? specification) is specified at the command line then milk? will be true. extras will collect any additional arguments.
```clojure
(ns command-line-example
  (:require [clojure.contrib.command-line :as ccl]))

(defn -main [& args]
  (ccl/with-command-line args
    "Command line demo"
    [[cow "This is the cows name" "cow"]
     [chicken "This specifies the chickens name"]
     [milk? m? "Should you milk the cow?"]
     extras]
    (println "cow's name: " cow)
    (println "chicken's name: " chicken)
    (println "milk?: " milk?)
    (println "extra args: " extras)))
```
And here is an example of calling that -main function from the repl.
```
$ lein run :cc --cow Herb --milk other args
cow's name:  Herb
chicken's name:  nil
milk?:  true
extra args:  [other args]
```
Using some other library
Another option is to use some library that isn’t found in clojure.contrib. One example of this is clargon. Clargon is a library that Gaz Jones (his blog post here) wrote. The documentation (both in his blog post and through the github page and tests) is the primary reason I started using it.
Pros of clargon:
* Great documentation. Makes it quick to get started.
* Can specify functions to transform arguments prior to gaining access to them.
* You specify if an argument is required or optional.
* Responds to -h and --help.
One potential negative of using clargon is that it isn’t a clojure.contrib library. This means there is slightly more friction to start using it on your project as, unlike clojure.contrib, you are probably not already depending on it.
Below is an example similar to the above clojure.contrib.command-line example. One important difference is that some arguments are now specified as either required or optional. If a required argument is not specified then an error is printed and execution stops.
```clojure
(ns clargon-example
  (:require [clargon.core :as c]))

(defn -main [& args]
  (let [opts (c/clargon args
                        (c/optional ["--cow" "Specify the cow's name" :default "cow"])
                        (c/required ["--chicken" "Chicken's name"])
                        (c/optional ["-m?" "--milk?" "should you milk the cow?"]))]
    (println args)
    (println opts)))
```
optional and required both take a vector that defines the specification of a flag. Starting with the first element in that vector, each element that is a string and starts with a ‘-’ is considered a potential flag for that argument. The last flag is stripped of leading ‘-’ characters and is considered the name of that flag (unless a :name option is specified later). The name is used to look up the value of the argument in the option map that is returned by the clargon function. If the next element after the last flag is a string then it is considered the documentation for that flag. When clargon runs into a non-string element then it and everything after it are considered options and should be specified as key value pairs. Options that do something are :default, :name, and :required.
optional and required both can take a function as a second argument. This function will be passed the argument for that flag and should return a transformed version of it. Below is an example using this functionality to specify a required flag that takes a comma separated list of files. These comma separated files are split apart and stuck into a vector.
```clojure
(ns clargon-example
  (:require [clargon.core :as c]))

(defn -main [& args]
  (let [opts (c/clargon args
                        (c/required ["--files" "Files to process"]
                                    #(vec (.split % ","))))]
    (println "Parsed opts: " opts)))
```
Clargon supports some more advanced nested argument handling that I’m not going to go into here. If you want to know more about clargon I’d recommend reading Gaz’s blog post and the clargon readme and tests.
There are many more ways to handle command line parsing in Clojure. You are not limited to any of the three above. I’ve personally found clargon to hit all of my needs and plan on continuing to use it.
I was interacting with a SQL database using Clojure and needed to create a table, so I turned to create-table from clojure.contrib.sql. Looking at the docs for create-table it seemed pretty straightforward. To create a table with columns date, id, symbol, price, and quantity you would write the following.
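The original call isn’t reproduced here; a sketch of what it plausibly looked like is below. The table name and column types are assumptions.

```clojure
;; Hypothetical column types; the post names only the columns.
(create-table :stock-data
              [:date "date"]
              [:id :int]
              [:symbol "varchar(10)"]
              [:price :int]
              [:quantity :int])
```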
The above works. I also wanted columns date and id to form a composite primary key. I wasn’t sure how to specify a composite primary key with create-table and ended up diving into its code.
```clojure
(defn create-table
  "Creates a table on the open database connection given a table name and
  specs. Each spec is either a column spec: a vector containing a column
  name and optionally a type and other constraints, or a table-level
  constraint: a vector containing words that express the constraint. All
  words used to describe the table may be supplied as strings or keywords."
  [name & specs]
  (do-commands
   (format "CREATE TABLE %s (%s)"
           (as-str name)
           (apply str
                  (map as-str
                       (apply concat
                              (interpose [", "]
                                         (map (partial interpose " ") specs))))))))
```
Looking at create-table we can see it creates a SQL statement which is then executed by do-commands. In order to have a composite key we need do-commands to execute a SQL statement that looks similar to below.
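The target statement is along these lines (table name and column types are assumptions carried over from the earlier sketch):

```sql
CREATE TABLE stock_data
  (date date, id int, symbol varchar(10), price int, quantity int,
   PRIMARY KEY (date, id))
```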
Let’s break down create-table to figure out what we need to pass it to make do-commands run the above statement. The code for create-table is repeated below with comments pointing out how each piece lines up with the statement we need.
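Working backwards from that format string: each spec vector has its words interposed with spaces, and the specs are then joined with commas. That means a table-level constraint can be passed as one more spec vector of plain strings. A sketch of the resulting call (column types assumed, as before):

```clojure
;; The final spec vector becomes the words "PRIMARY KEY (date, id)"
;; once create-table interposes spaces and joins specs with commas.
(create-table :stock-data
              [:date "date"]
              [:id :int]
              [:symbol "varchar(10)"]
              [:price :int]
              [:quantity :int]
              ["PRIMARY KEY" "(date, id)"])
```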
Recently I was writing some data mining Clojure code which needed to parse a log file and do some transforms of the data. Some of the transforms were dependent on data found across multiple lines. There was no ordering or proximity guarantees to these lines.
This required the code to handle a variety of situations. After writing a couple simple tests and getting those passing I wanted to more extensively test my solution. I was lazy though and did not want to hand code all of the potential orderings. Enter permutations.
permutations is a function out of clojure.contrib.combinatorics. As the name suggests, you give it a collection and it returns a lazy sequence containing all the different permutations of the elements in that collection. An example is below.
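A minimal REPL example (assuming clojure.contrib.combinatorics is on the classpath; in modern Clojure the same function lives in clojure.math.combinatorics):

```clojure
(use 'clojure.contrib.combinatorics)

(permutations [1 2 3])
;=> ((1 2 3) (1 3 2) (2 1 3) (2 3 1) (3 1 2) (3 2 1))
```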
You can already see where this is going. I was able to use permutations to generate all the potential different orderings of the input. This saved me the trouble of having to do that by hand.
One difficulty of generating test inputs programmatically is telling what sort of input caused a failure. To get around this I used the rarely used (at least in code I’m working on) second argument of clojure.test’s is. This second argument is a message that prints on a failure.
Below is a contrived example of using permutations to test an obviously wrong silly-add function. silly-add is defined below.
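The original definition of silly-add isn’t reproduced here; any definition whose result depends on argument order works for the demonstration, e.g. this assumed one:

```clojure
;; Deliberately broken add: one branch sneaks in an extra 1, so the
;; result wrongly depends on the order of the arguments.
(defn silly-add [x y z]
  (if (> x y)
    (+ x y z 1)
    (+ x y z)))
```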
Below is a test that uses permutations to exercise silly-add with all the potential orderings three input numbers. Note that it takes advantage of the second argument to is. Without this we would not know what input caused the failure.
```clojure
generate> (use 'clojure.test)
nil
generate> (deftest generate-some-tests
            (doseq [input (permutations [2 3 5])]
              (is (= 10 (apply silly-add input))
                  (str "Failed on input: " (seq input)))))
#'generate/generate-some-tests
```
Running the test we see that there is clearly an error.
I often find myself browsing the Internet and then suddenly I want to have a Clojure REPL at my fingertips. As I’ve become better with emacs and paredit I’ve become dependent on the powerful editing this combo affords. The rest of this post details how I changed my five step process into a two step process. It does not explain basic emacs/slime setup but rather explains how I cut a few steps out of a suboptimal workflow for getting a powerful Clojure REPL up and running in emacs.
My previous workflow was the following:
1. Open a terminal.
2. Change to the root of a Clojure project where I use Leiningen and have swank-clojure as a dependency.
3. Run the command lein swank.
4. Wait for the swank server to start.
5. Run M-x slime-connect.
This five step process was terrible. From me seeing something interesting to try to having a REPL open took too much time.
Today I changed my process so it only takes two steps. They are:
Run M-x clojure-swank
This is much better. I’ll admit the old process had a lot of room for improvement, so it wasn’t too hard to make it better. Below are the steps I took to cut three steps.
First, using Leiningen 1.4.0, I ran lein install swank-clojure 1.3.0-SNAPSHOT. This installed a script called swank-clojure into $HOME/.lein/bin. When run, this script starts a swank server waiting for connections on port 4005.
Next I wrote a function in elisp that gives emacs the ability to call the newly installed swank-clojure script, wait for the swank server to start, and then connect to it. This function, clojure-swank, can be seen below. It creates a buffer named *clojure-swank*, runs the newly installed script, and captures the output in the freshly created buffer. When the “Connection opened” line appears slime-connect is called, connecting emacs to the freshly started swank server. After this we are at the REPL with all the advantages that emacs and paredit give us.
```elisp
(defun clojure-swank ()
  "Launch swank-clojure from users homedir/.lein/bin"
  (interactive)
  (let ((buffer (get-buffer-create "*clojure-swank*")))
    (flet ((display-buffer (buffer-or-name &optional not-this-window frame) nil))
      (bury-buffer buffer)
      (shell-command "~/.lein/bin/swank-clojure &" buffer))
    (set-process-filter
     (get-buffer-process buffer)
     (lambda (process output)
       (with-current-buffer "*clojure-swank*"
         (insert output))
       (when (string-match "Connection opened on local port +\\([0-9]+\\)" output)
         (slime-connect "localhost" (match-string 1 output))
         (set-process-filter process nil))))
    (message "Starting swank.. ")))
```
I’ve also written a clojure-kill-swank function for stopping the swank server.
```elisp
(defun clojure-kill-swank ()
  "Kill swank process started by lein swank."
  (interactive)
  (let ((process (get-buffer-process "*clojure-swank*")))
    (when process
      (ignore-errors (slime-quit-lisp))
      (let ((timeout 10))
        (while (and (> timeout 0)
                    (eql 'run (process-status process)))
          (sit-for 1)
          (decf timeout)))
      (ignore-errors (kill-buffer "*clojure-swank*")))))
```
Both of those functions need to be added to a location where they will be defined on emacs start-up. Once this is done, the powerful REPL you are used to emacs providing can be at your fingertips in practically no time at all.
The other day I stumbled across some Clojure code that used mutual recursion. Mutual recursion can be a valuable tool when solving a problem. Unfortunately because of the lack of tail call optimization on the JVM this can be a dangerous technique when writing Clojure code. It can be easy to forget about this limitation and end up writing code that blows the stack.
Take the classic even/odd checking code from the Wikipedia page. If we translate it directly to Clojure it will cause a stack overflow error when we pass in a large number. The massive number of function calls required before returning causes too much memory to be consumed.
```clojure
(declare my-odd?)

(defn my-even? [n]
  (if (zero? n)
    true
    (my-odd? (dec (Math/abs n)))))

(defn my-odd? [n]
  (if (zero? n)
    false
    (my-even? (dec (Math/abs n)))))

user> (my-even? 1000000)
; Evaluation aborted.  <- this is a result of java.util.StackOverflowError
```
Luckily since Clojure 1.0 there has been a useful function for dealing with this. trampoline, with minor modifications to your code, can be used to get around the lack of tail call optimizations (docs here).
trampoline takes a function (and, if needed, arguments to pass into the function) and calls it. If the function returns a function then trampoline calls that. As long as functions are returned trampoline will continue calling them. When a non-function value is returned trampoline returns, passing through the value.
To make our sample code work with trampoline we simply change our functions to return a closure which wraps the call that was previously being executed. This just entails putting a # before the final s-exp. This takes advantage of Clojure’s anonymous function syntax to change the function call into a closure which is returned.
By doing this we’ve changed how the caller interacts with my-even? and my-odd?. It now needs to be called by trampoline.
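The transformed functions look like this; only the # on the final s-exp of each branch differs from the original code:

```clojure
(declare my-odd?)

(defn my-even? [n]
  (if (zero? n)
    true
    ;; Return a closure instead of making the recursive call directly.
    #(my-odd? (dec (Math/abs n)))))

(defn my-odd? [n]
  (if (zero? n)
    false
    #(my-even? (dec (Math/abs n)))))
```

Calling `(trampoline my-even? 1000000)` now returns true instead of blowing the stack, since each step returns a closure rather than growing the call stack.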
Now we no longer suffer from the stack overflow error.
I think we can still do better though, because now the callers of my-even? and my-odd? suffer since they are forced to remember to use trampoline. By forcing this on the caller, we’ve pushed what should be hidden implementation details into the caller’s code. We can fix this by pushing the use of trampoline into our functions.
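One way to hide the trampoline is to keep the closure-returning versions as private helpers and expose small wrappers. The starred helper names below are my own convention, not from the original post:

```clojure
(declare my-odd?*)

;; Private closure-returning implementations, driven by trampoline.
(defn- my-even?* [n]
  (if (zero? n) true #(my-odd?* (dec (Math/abs n)))))

(defn- my-odd?* [n]
  (if (zero? n) false #(my-even?* (dec (Math/abs n)))))

;; Public API: callers never see trampoline.
(defn my-even? [n] (trampoline my-even?* n))
(defn my-odd? [n] (trampoline my-odd?* n))
```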
This worked but seemed overly verbose for doing what (in our minds) should have been a simple operation. After some digging around in the docs we found the function assoc-in. This useful function allowed us to greatly simplify the code.
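The original nested map isn’t shown in this excerpt; a contrived example of the simplification assoc-in gives (data and keys assumed) is:

```clojure
;; Hypothetical nested map.
(def accounts {:alice {:balance 100}})

;; assoc-in walks the key path and sets the value at the leaf,
;; replacing a hand-rolled assoc-of-assoc expression.
(assoc-in accounts [:alice :balance] 150)
;=> {:alice {:balance 150}}
```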
Recently I found myself wanting to plot some time series data and wanted to do this in Clojure. Unfortunately Incanter, a good statistical and graphics library for Clojure, did not provide a way to plot data where the x-axis is a time value. A quick fork on github and a pull request later and now Incanter does. Since I added this functionality I thought I would write up a short example of using it.
The example time series data I’m using I took from Yahoo’s finance section. Here is a link to the csv file I used.
I’m using the read-dataset function provided by Incanter. This procedure reads a delimited file (or URL) and returns an Incanter dataset.
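A sketch of that call; the local filename is an assumption:

```clojure
(use '(incanter core io chrono))

;; read-dataset parses the delimited file; :header true treats the
;; first row as column names (:Date, :Close, etc.).
(def data (read-dataset "yahoo.csv" :header true))
```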
Yahoo stores the date in a yyyy-mm-dd format. I need to convert that to milliseconds from the epoch so it can be used in time-series-plot as the x-axis data. To do this I wrote a function which takes the string representation of the date, splits it on “-”, then uses the joda-date and to-ms functions from incanter.chrono to get the number of milliseconds from the epoch.
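A sketch of that function; the date->millis name is mine, and I’m assuming joda-date accepts year, month, and day arguments:

```clojure
;; Hypothetical reconstruction: split "yyyy-mm-dd", build a date with
;; incanter.chrono's joda-date, and convert it with to-ms.
(defn date->millis [date-str]
  (let [[year month day] (map #(Integer/parseInt %) (.split date-str "-"))]
    (to-ms (joda-date year month day))))
```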
Now that we have a function which takes the string representation and gets the milliseconds, it is time to get the data I want from the dataset. The below code selects the :Close and :Date columns while mapping the :Date column to a milliseconds-from-epoch representation of the date.
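One way that selection might look, assuming the dataset's rows are maps keyed by column name and using the data and date->millis names from above (both assumptions of this sketch):

```clojure
;; Build a two-column dataset with :Date converted to milliseconds.
(def mod-data
  (dataset [:Date :Close]
           (map (fn [row] [(date->millis (:Date row)) (:Close row)])
                (:rows data))))
```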
The next step is to use the time-series-plot function to actually create the plot. Because the data we have is in a dataset, we can pass in the column names as the x and y parameters and provide the data set as the value to the :data key in the optional parameters.
```clojure
(def chart (time-series-plot :Date :Close
                             :x-label "Date"
                             :y-label "Closing Price"
                             :title "Closing price over time for Yahoo"
                             :data mod-data))
```
Then we use the Incanter function view to actually see the chart.
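That final step is just:

```clojure
;; Opens a Swing window displaying the chart.
(view chart)
```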