Jake McCrary

Manage your workspace with grids under Linux, OS X, and Windows

I’m the type of computer user who wants an organized workspace. To me this means having my active applications organized into a grid, and being able to do that efficiently is important to me. Before I jump into the tools I use, let me give a quick explanation of what organized into a grid means to me.

Imagine that your screen is divided both vertically and horizontally. To me a good tool for managing windows lets you take your active application and move it so it fits in any rectangle formed by the edges of your screen and those two lines splitting your monitor. This means that with a keystroke you can make the active window take up the full screen, half screen, or quarter screen. Below I’ve listed the tools that let me do that.

Linux

I’ve switched to using i3, a tiling window manager, instead of the default window manager on every distribution I use. When using i3 the tiling is done automatically, and there are hotkeys for changing the window layout and moving focus between windows. The automatic tiling and shortcuts take a bit of getting used to, but now that I am used to them I can’t believe I only switched to a tiling window manager sometime in the last eight months.

Windows

When developing under Windows I use WinSplit Revolution. Unlike i3, WinSplit Revolution only provides hotkeys for snapping windows to different locations. This is admittedly more approachable than i3, as the grid isn’t forced on you. WinSplit Revolution is pretty flexible; you can change the shortcuts and even define your own grid.

I can’t remember when I started using WinSplit Revolution, but it has become a vital tool for when I’m stuck doing development on a Windows machine.

OS X

My only OS X machine is my 13-inch MacBook Air. I always thought that with such a small screen being able to tile my windows wouldn’t be as useful. I was completely wrong. If anything, it may be more useful because of the tiny amount of screen real estate. The 13-inch screen is just wide enough to have an editor up on one half and documentation on the other.

The tool I use to snap my windows to a grid is Spectacle. Spectacle provides some sensible keystrokes for moving windows around. The hotkeys are similar to WinSplit Revolution’s, which makes switching between operating systems easy.

If you haven’t tried using a tool to help you organize your windows I highly recommend that you do. I’ve introduced both technical and non-technical people to these tools and everyone has enjoyed them.

Change volume from the command line

On my Ubuntu desktop the volume at 100% is often too quiet. With Ubuntu’s default window manager I could open up the graphical “Sound Preferences” and bump the volume above 100%. After using the i3 window manager for a while I found myself missing this and took the time to figure out how to do it from the command line.

Ubuntu uses PulseAudio to handle sound related tasks. The tool pacmd lets you change PulseAudio settings, such as volume, on the fly. The command is pacmd set-sink-volume <sink-index> <sink-volume>, where <sink-index> is an identifier for your output device and <sink-volume> is an integer greater than or equal to zero. Zero represents muted and 65536 represents 100% volume. <sink-index> is the index listed for your output card in the output of pacmd list-sinks. In my case it is 0.
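
For example, assuming your output device is sink 0 (as it is on my machine; check yours first), you could find the index and set the volume back to 100% like this:

$ pacmd list-sinks | grep index
$ pacmd set-sink-volume 0 65536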

The script below makes changing the volume with pacmd straightforward. I’m using Perl to convert a percentage into the proper units for the argument. With this script, if you want to pull a Spinal Tap and go above 100%, you simply pass in a number greater than 100.

#!/bin/bash

if [ "$1" == "" ]; then
  echo "Need to supply a percentage"
  exit 1
fi

# convert the percentage into PulseAudio volume units (65536 == 100%)
vol=$(perl -e "print int(65536 * ($1 / 100))")
echo "Setting volume to $1 ($vol)"
pacmd set-sink-volume 0 $vol
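
Assuming the script is saved as set-volume.sh (the name is arbitrary), going above 100% looks like this:

$ ./set-volume.sh 150
Setting volume to 150 (98304)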

Maintaining Directory Layout When Selectively Copying Files

I ran into a situation where I needed to replace certain files in a directory tree with files from a similarly shaped directory tree. The other files in each tree needed to remain untouched. Below is an example of the directory structure.

root-dir
├── target-dir
│   ├── 20121230
│   │   ├── data.csv
│   │   └── instruments.csv
│   └── 20121231
│       ├── data.csv
│       └── instruments.csv
└── other-dir
    ├── 20121230
    │   ├── data.csv
    │   └── instruments.csv
    └── 20121231
        ├── data.csv
        └── instruments.csv

The goal is to copy instruments.csv from the sub-directories of other-dir to the matching sub-directories of target-dir. In the past I’ve solved this by being in the other-dir directory and banging out a for loop at the command line (other-dir$ is the bash prompt).

other-dir$ for d in $(ls); do cp $d/instruments.csv ../target-dir/$d/; done

One feature (or issue) with this approach is that if a sub-directory exists in other-dir but not in target-dir that sub-directory will not be created in target-dir.

I took a bit of time to explore other ways of accomplishing this task and stopped after coming up with two additional ways.

other-dir$ find . -name "instruments.csv" | xargs -I {} cp {} ../target-dir/{}

The above is basically the same as the first solution. It uses find to generate the list of files and then constructs cp commands. It also doesn’t create sub-directories in target-dir.

The next example has different behavior from the above cp solutions. Like the second solution, it generates a list of files to copy using find but then uses rsync with the --files-from flag to mirror those files under target-dir. Unlike the cp based solutions, sub-directories of other-dir that do not exist in target-dir will be created.

other-dir$ find . -name "instruments.csv" | rsync --files-from=- . ../target-dir

I’m sure there are many more ways of accomplishing this task; figuring out the three above was enough for me. They are fairly straightforward and don’t depend on esoteric command line flags. The solution I use in the future will depend on whether or not I need sub-directories created in the target directory.

Working on multiple Clojure projects at once

Very few coders would debate the wisdom of breaking a project into smaller libraries. One complaint about doing so is the added hassle of making changes to multiple projects at once. Constantly releasing a library so another project can pick up changes is annoying and slows you down. Luckily, in a Clojure project using Leiningen it is simple to make changes to a library and then use those changes without needing to perform a release.

This is accomplished by using the checkouts directory feature of Leiningen. This is a feature that, despite being listed in the Leiningen FAQ, I only recently discovered. To make your Clojure project (from now on called the main project) depend on the source of another project, simply make a checkouts directory in your main project’s root directory and then, inside checkouts, link to the root of the library’s project. This causes the library to be added to the main project’s classpath. Now you can make changes to the main project and its dependencies without going through the hassle of releasing a new version of the library for every change.
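
As a quick sketch (the library path here is just a placeholder), setting up a checkout is nothing more than creating the directory and symlinking the library’s project root into it:

main-project$ mkdir checkouts
main-project$ ln -s ~/src/subproject checkouts/subproject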

In case the above paragraph isn’t clear, here is an example of the main project’s directory structure.

$ pwd
src/main-project
$ tree
.
├── checkouts
│   └── subproject -> /Users/jmccrary/src/temp/subproject/
├── project.clj
├── src
│   └── main_project
│       └── core.clj
└── test
    └── main_project
        └── core_test.clj
$ ls checkouts/subproject/
README project.clj src test

Running lein classpath in the main project directory shows that the subproject is on the classpath. I’ve edited the lein classpath output below to remove most entries not related to subproject and to make it easier to read. As the example shows, the subproject has been added to the classpath.

$ lein classpath
...:
src/main-project/checkouts/subproject/src
src/main-project/checkouts/subproject/classes
src/main-project/checkouts/subproject/resources
src/main-project/lib/clojure-1.3.0.jar

The Leiningen checkouts directory option is pretty useful. This feature isn’t there to discourage you from releasing versions of a library, but instead is there to facilitate quicker development cycles. I’d encourage you to experiment with it and figure out if it makes you more effective.

Reflections on Stanford's online class experiment

This past fall I took part in Stanford’s online learning experiment. For those of you who are not aware, Stanford gave anyone with Internet access the opportunity to enroll in three different online courses. This was free of charge; not even books were required. Each was an introductory level class, with the three subjects being artificial intelligence, machine learning, and databases (links may die as offered classes change). As far as I know, this was the first time that lecture videos have been paired with scheduled homework, quizzes, and programming assignments at this scale. I only vaguely remember the numbers for the AI (artificial intelligence) and ML (machine learning) classes, but I believe that before classes began there were more than 100,000 students enrolled in each. The number of active students was approximately a third of the enrolled number. Even still, this is teaching at an unheard-of scale.

I enrolled in both the AI and ML courses. Enrolling in two was more work, but I’m glad I did it. The classes followed slightly different formats, and taking both provided perspective on which style worked better for me.

AI Class

The AI class had video lectures, in-video quizzes, homework, and two exams. The videos for this class were short, each lasting approximately one to four minutes. This was a nice length, as you could easily go back and re-watch specific topics without jumping around a longer video. The videos showed the professor writing notes by hand on pieces of paper. This style made it seem like you were receiving one-on-one tutoring.

At the end of some of the videos a quiz question was asked. These questions did not count towards your final grade. They existed to help you think and learn about the material as it was being presented.

The homework usually consisted of fewer than 10 questions. You had until the due date to submit or change your answers. You did not get any feedback on the homework until it was graded.

For both of the exams you had about three days to finish the 15 or so questions that were asked.

ML Class

The ML class had video lectures, in-video quizzes, homework, and programming assignments. In contrast to the AI class, the ML video lectures were usually five to thirteen minutes long. This made it harder to re-watch specific parts, as you had to jump around the video to find different topics. On the plus side, the ML videos could be easily downloaded and watched on any device (you could not do this with the AI lectures). The ML videos also had controls for speeding up playback; I watched the vast majority of the videos at 1.5 times normal speed.

The video questions for the ML class were similar to the AI class’s in that they didn’t count towards a final grade and there was usually at most one per video. Because there were fewer videos, there were fewer quizzes than in the AI class. More quizzes would have been good, as they force the student to think instead of simply zoning out while watching the lectures.

The homework in the ML class was different from the AI class in a couple of ways. The largest difference was that you received feedback immediately when you submitted your answers. You were allowed to attempt the homework as often as you liked and your highest score became your score for that assignment. The questions were somewhat different between different attempts to minimize memorization of answers. There was also a minimum waiting period of ten minutes between attempts, but in order to reduce reliance on simply remembering previous attempts I usually waited a few hours to a day between attempts.

The ML class had programming assignments. In these exercises you filled in some Octave code to implement an aspect of machine learning. These exercises were great. Most of the time they made you think about the techniques you were learning that week. It was also nice to get some hands-on experience solving sample machine learning problems. It was rewarding watching my simple spam filter flag email as spam. The programming exercises were best when they weren’t simply a task in translating math to code.

Thoughts on the differences

I enjoyed the ML class’s homework style, with its instant feedback upon submission, more than the AI class’s because it provided a quicker feedback cycle. Instead of submitting homework answers and waiting to review mistakes, you could review them instantly. I found this, along with being able to repeat homework, to be more effective for learning than the single-shot style of the AI class.

I preferred the shortness of the AI lectures. It allowed for easier repetition of lectures and provided more opportunities for quizzes. I also preferred the handwritten style of the AI lectures. The ML lectures felt like being back in a classroom watching a professor go through PowerPoint slides, adding handwritten notes as the lecture progressed. That isn’t very engaging. The one-on-one style of the AI lectures was more engaging.

I enjoyed both the programming exercises of ML class and the exams in the AI class. Both added an interesting way of learning to their respective class.

Recommendations

If I were designing a course for myself, I would use the short videos with many quiz questions, the style of lecturing I found more effective. I would offer homework in a fashion similar to the ML class: submit as many times as you want, but with a minimum time between submissions. I would up that minimum to at least one hour to discourage memorization of answers. My ideal class would also have both programming exercises and exams.

Both classes had student-run question and answer boards (similar to Stack Overflow). This was all right, but I would look into adding a type of forum that is better suited to discussions. A Q&A-style board is not a great environment for having a discussion that is ordered by time. I think a normal forum style, with replies ordered by time, would be effective and worth an experiment.

Conclusion

I greatly enjoyed taking these two Stanford classes. I think online lectures and homework are an extremely effective way of delivering information and reinforcing learning. I found taking these two classes to be a manageable amount of work, though I did pretty much stop reading books while taking them and instead focused on watching lectures and doing homework. Had I only taken one class or spent breaks at work watching videos, I think I would have been able to maintain my other activities as well.

There are quite a few classes being offered at Stanford that start in the near future. Scroll to the bottom of this page to take a look at what Stanford is offering starting in January. I would highly recommend taking a class. MIT also just announced MITx. I haven’t heard a ton of information about it, but it sounds similar and I look forward to its launch.

Continuous testing with Clojure and expectations

I’ve recently started using Jay Fields’ Clojure testing library, expectations. I’m not going to explain expectations (Jay already did a great job of that on his blog), but I will quote its GitHub page.

expectations is a minimalist’s testing framework

The above quote is absolutely true, which is one of the major reasons I like expectations. It hasn’t been all sunshine, though; when I first started using it I had a major problem: it slowed down my usual Clojure workflow.

Up until this point I had stuck to using clojure.test. Combined with emacs, slime, swank, and clojure-test-mode I found the time between making a change to code and running tests to be minimal.

When I switched to expectations, the time between making a code change and running tests increased. With expectations I couldn’t just re-evaluate my buffer to get the new tests into my REPL environment; doing so left the new tests there alongside the old ones. This meant I needed to switch to the command line to run my tests, incurring the startup cost of the JVM simply to run my expectations (tests). This was a huge cost compared to what I was used to before.

Introducing lein-autoexpect

To fix my problem I wrote lein-autoexpect. lein-autoexpect is a Leiningen plugin that monitors a project’s source and test directories and, when a Clojure file changes, reloads the affected namespaces and runs all the expectations. Using this plugin, my turnaround time from modifying code to running all of my expectations is practically nothing. Without the cost of JVM startup there is practically no time wasted between when code is saved and when tests are run.

To use lein-autoexpect simply add [lein-autoexpect "0.0.2"] to your project.clj file and fetch the dependency. Then at the command line run lein autoexpect. You’ll see your tests run and then it will just hang there, eagerly waiting for code to change.

$ lein autoexpect
*********************************************
*************** Running tests ***************
Ran 3 tests containing 3 assertions in 16 msecs
0 failures, 0 errors.

The next time you save, your tests run again and output like the following example appears.

*********************************************
*************** Running tests ***************
Ran 4 tests containing 4 assertions in 3 msecs
0 failures, 0 errors.

lein-autoexpect tries to clearly delimit each test session with the banner made of *. This helps keep different runs separate when scrolling through your terminal.

This style of testing is called continuous testing. If you haven’t tried it, I would highly recommend giving it a shot. Even just using it for the last few days changed how I think testing should be done.

Source can be found on Github.

Utilities I like: autojump

autojump is a nifty command line tool that enables quicker jumping between directories. I’ve been using it for a few months now and miss it when I work on other machines.

To jump to a directory you type j SUBSTRING_OF_DIR. Example:

$ pwd
/Users/jmccrary
$ j jake
/Users/jmccrary/src/github/jakemcc/jakemccrary.com
$ pwd
/Users/jmccrary/src/github/jakemcc/jakemccrary.com

Above I jumped from my home directory to the root of this website’s code. Being able to jump between directories by just remembering a name (or part of a name) is great. This frees me from having to remember full paths or set up aliases.

autojump works by keeping a database of “time” spent in directories and jumps to the most frequently visited one that matches SUBSTRING_OF_DIR. If you are curious as to what that database looks like, the tool jumpstat will give you a view.

I used to set up aliases for jumping between projects, but now that I’ve trained myself to use autojump I don’t think I’ll ever go back. Not having to do any extra work, besides simply entering the root directory of new projects, to set up efficient directory movement is great. I think that if you give it a shot for a while you’ll find the same benefits.

If you are on a Mac and use Homebrew you can install it with brew install autojump. For other platforms check out the GitHub page.

A simple way of testing disconnect logic

I’m guessing that software you write connects to some other server. I’m also guessing that how it handles disconnects is tested (if it is tested at all) by either killing the process it connects to or pulling out your network cable. I recently stumbled across a nifty Linux command line tool that makes causing disconnects significantly easier.

This tool is tcpkill. To use tcpkill you specify an interface and a tcpdump-style filter, and it kills traffic on that interface that matches the filter.

For example, if your application has a connection to 192.168.1.130, then to force a disconnect you would execute tcpkill -i eth0 host 192.168.1.130.

tcpkill can be used for more than forcing disconnects. It can also be used as a simple website filter. If Stack Overflow wastes too much of your time then you could simply leave tcpkill -i eth0 host stackoverflow.com running and enjoy your increased productivity.

tcpkill is a pretty useful tool. If you want to install it in Ubuntu it is found in the dsniff package (apt-get install dsniff).

Command line arguments in Clojure

This post is now out of date. The library recommended by this post is now a contrib library. Check out tools.cli for great documentation about handling command line arguments in Clojure.


Write enough Clojure and eventually you will need to handle command line arguments. There are numerous ways of doing this. Keep reading for a brief introduction to three.

Using built-in features

There exists a sequence named *command-line-args* which contains the arguments to your application. Using it is simple (it is just a sequence, after all), and it is always available to you. There is no need to pull in external dependencies that others may not be familiar with.

This simplicity is also a downside. Because you are only given a sequence, it is up to you to actually figure out the arguments. If you want to verify that certain arguments are supplied, you write the code that does the verifying. If you want to move away from positional arguments to command line flags, once again it is up to you to write it.

Because of the amount of code required to do any sort of advanced argument handling, I tend to use *command-line-args* only for applications that take a single type of argument, for example a file path, and accept one or more of them.
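
As a minimal sketch (the script name and behavior are invented for illustration), a script that treats every argument as a file path might look like this:

;; process-files.clj -- run with: java -cp clojure.jar clojure.main process-files.clj a.csv b.csv
;; clojure.main binds *command-line-args* to everything after the script name.
(doseq [path *command-line-args*]
  (println "Processing" path))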

Setup for next two sections

For the next two sections I’m using version 1.5.0 of Leiningen and the specified versions of libraries as stated in the below project.clj file.

(defproject blogpost "1.0.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.2.0"]
                 [org.clojure/clojure-contrib "1.2.0"]
                 [clargon "1.0.0"]]
  :dev-dependencies [[swank-clojure "1.2.1"]]
  :run-aliases {:clargon clargon-example
                :cc command-line-example})

I’m using lein run to run the examples. lein run :cc runs the clojure.contrib example. Likewise, running lein run :clargon will run the clargon examples. Both of these commands can be followed by additional arguments that get passed to the application.

Using clojure.contrib.command-line

The next step after using *command-line-args* is to use the library clojure.contrib.command-line. This library provides the function with-command-line, which allows you to specify requirements and then handles the parsing of the command line arguments for you.

Positives of using clojure.contrib.command-line:

* Part of clojure.contrib. Probably extremely low friction to start using it.
* No longer need to write your own command line parsing code.
* Responds to -h and --help.

A negative of using clojure.contrib.command-line is that the documentation is pretty sparse. This can lead to some fumbling around as you learn how to use it. Another downside is that there isn’t a way of specifying whether an argument is required or optional. This means you must manually check for required arguments and give appropriate error messages to the user.
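
For example (a rough sketch using the ccl alias from the example below), checking for a required --chicken argument by hand might look like this:

(ccl/with-command-line args
  "Command line demo"
  [[chicken "This specifies the chickens name"]]
  ;; with-command-line will not enforce required arguments, so check manually
  (when (nil? chicken)
    (println "Error: must supply --chicken")
    (System/exit 1))
  (println "chicken's name: " chicken))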

Below is an example of using clojure.contrib.command-line. It specifies a few different arguments. The --cow argument has a default value of “cow”. --chicken has no default value; if it is left unspecified it will be nil. The line with milk? specifies a boolean value. If --milk (or -m, because of the m? specification) is specified at the command line then milk? will be true. extras will collect any additional arguments.

(ns command-line-example
  (:require [clojure.contrib.command-line :as ccl]))

(defn -main [& args]
  (ccl/with-command-line args
    "Command line demo"
    [[cow "This is the cows name" "cow"]
     [chicken "This specifies the chickens name"]
     [milk? m? "Should you milk the cow?"]
     extras]
    (println "cow's name: " cow)
    (println "chicken's name: " chicken)
    (println "milk?: " milk?)
    (println "extra args: " extras)))

And here is an example of calling that -main function from the repl.

$ lein run :cc --cow Herb --milk other args
cow's name:  Herb
chicken's name:  nil
milk?:  true
extra args:  [other args]

Using some other library

Another option is to use some library that isn’t found in clojure.contrib. One example of this is clargon, a library written by Gaz Jones (his blog post here). The documentation (both in his blog post and through the GitHub page and tests) is the primary reason I started using it.

Pros of clargon:

* Great documentation. Makes it quick to get started.
* Can specify functions to transform arguments prior to gaining access to them.
* You specify if an argument is required or optional.
* Responds to -h and --help.

One potential negative of using clargon is that it isn’t a clojure.contrib library. This means there is slightly more friction to start using it on your project as, unlike clojure.contrib, you are probably not already depending on it.

Below is an example similar to the above clojure.contrib.command-line example. One important difference is that some arguments are now specified as either required or optional. If a required argument is not specified then an error is printed and execution stops.

(ns clargon-example
  (:require [clargon.core :as c]))

(defn -main
  [& args]
  (let [opts
        (c/clargon
         args
         (c/optional ["--cow" "Specify the cow's name" :default "cow"])
         (c/required ["--chicken" "Chicken's name"])
         (c/optional ["-m?" "--milk?" "should you milk the cow?"]))]
    (println args)
    (println opts)))

optional and required both take a vector that defines the specification of a flag. Starting with the first element in that vector, each element that is a string and starts with a ‘-’ is considered a potential flag for that argument. The last flag is stripped of leading ‘-’ characters and is considered the name of that flag (unless a :name option is specified later). The name is used to look up the value of the argument in the option map that is returned by the clargon function. If the next element after the last flag is a string then it is considered the documentation for that flag. When clargon runs into a non-string element then it and everything after it are considered options and should be specified as key value pairs. Options that do something are :default, :name, and :required.
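
For example (the flag names here are invented), a spec like the following would be read under those rules:

;; "-c" and "--cow-name" both start with '-', so both are flags.
;; The last flag, stripped of leading dashes, gives the name, so the parsed
;; value ends up under :cow-name in the returned option map.
;; "The cow's name" is the documentation and :default is an option pair.
(c/optional ["-c" "--cow-name" "The cow's name" :default "Bessie"])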

optional and required both can take a function as a second argument. This function will be passed the argument for that flag and should return a transformed version of it. Below is an example using this functionality to specify a required flag that takes a comma separated list of files. These comma separated files are split apart and stuck into a vector.

(ns clargon-example
  (:require [clargon.core :as c]))

(defn -main
  [& args]
  (let [opts (c/clargon
              args
              (c/required ["--files" "Files to process"]
                          #(vec (.split % ","))))]
    (println "Parsed opts: " opts)))

Below is the above example being run.

$ lein run :clargon --files one.txt,two.txt,three.txt
Parsed opts:  {:files [one.txt two.txt three.txt]}

Clargon supports some more advanced nested argument handling that I’m not going to go into here. If you want to know more about clargon I’d recommend reading Gaz’s blog post and the clargon readme and tests.

End

There are many more ways to handle command line parsing in Clojure. You are not limited to any of the three above. I’ve personally found clargon to hit all of my needs and plan on continuing to use it.

Creating a SQL table with a composite primary key in Clojure

I was interacting with a SQL database using Clojure and needed to create a table, so I turned to create-table from clojure.contrib.sql. Looking at the docs for create-table, it seemed pretty straightforward. To create a table with columns date, id, symbol, price, and quantity you would write the following.

(create-table "orders"
              [:date     "date"]
              [:id       "integer"]
              [:symbol   "char(10)"]
              [:price    "integer"]
              [:quantity "integer"])

The above works. I also wanted the columns date and id to form a composite primary key. I wasn’t sure how to specify a composite primary key with create-table and ended up diving into its code.

(defn create-table
  "Creates a table on the open database connection given a table name and
  specs. Each spec is either a column spec: a vector containing a column
  name and optionally a type and other constraints, or a table-level
  constraint: a vector containing words that express the constraint. All
  words used to describe the table may be supplied as strings or keywords."
  [name & specs]
  (do-commands
   (format "CREATE TABLE %s (%s)"
           (as-str name)
           (apply str 
             (map as-str
              (apply concat 
               (interpose [", "]
                (map (partial interpose " ") specs))))))))

Looking at create-table we can see it creates a SQL statement which is then executed by do-commands. In order to have a composite key we need do-commands to execute a SQL statement that looks similar to below.

CREATE TABLE orders(
  date date,
  id integer,
  symbol char(10),
  price integer,
  quantity integer,
  PRIMARY KEY (date, id)
)

Let’s break down create-table to figure out what we need to pass it to make do-commands run the above statement. The code for create-table is repeated below with comments marking which step each line corresponds to.

(defn create-table
  [name & specs]
  (do-commands                                              ; step 7
   (format "CREATE TABLE %s (%s)"                           ; step 6
           (as-str name)
           (apply str                                       ; step 5
             (map as-str                                    ; step 4
              (apply concat                                 ; step 3
               (interpose [", "]                            ; step 2
                (map (partial interpose " ") specs))))))))  ; step 1

1. First create-table takes the sequences in specs and puts a space between each element in each sequence.
2. The result of step 1 then has a vector containing a comma and a space interposed between each of its elements.
3. concat combined with apply is used to combine each element of the result of step 2 into a single sequence.
4. as-str (from c.c.string) is mapped over the result of step 3 to make sure every element is a string.
5. str is used to make one string out of the sequence of strings from step 4.
6. format is used to substitute in name and the result of step 5 to create the SQL statement.
7. do-commands executes the statement created in step 6.

Knowing how create-table works now allows us to specify the arguments that will create the orders table with the composite primary key of date and id.

(create-table "orders"
              [:date     "date"]
              [:id       "integer"]
              [:symbol   "char(10)"]
              [:price    "integer"]
              [:quantity "integer"]
              ["PRIMARY KEY" "(date, id)")