Destructuring, records, protocols and named arguments

optimization,pondering — cgrand, 8 May 2010 @ 12 h 19 min

Warning: this post is full of microbenchmarks so take it with a pinch of salt.

Destructuring a record

Field access through keyword-lookups on records is fast:

user=> (defrecord Foo [a b])
user.Foo
user=> (let [x (Foo. 1 2)] (dotimes [_ 5] (time (dotimes [_ 1e7] (let [a (:a x) b (:b x)] [a b])))))
"Elapsed time: 114.787424 msecs"
"Elapsed time: 102.568273 msecs"
"Elapsed time: 71.150593 msecs"
"Elapsed time: 72.217418 msecs"
"Elapsed time: 70.127489 msecs"
nil

But as far as I know, destructure still emits (get x :k); let's check:

user=> (let [x (Foo. 1 2)] (dotimes [_ 5] (time (dotimes [_ 1e7] (let [{:keys [a b]} x] [a b])))))
"Elapsed time: 968.616612 msecs"
"Elapsed time: 945.704133 msecs"
"Elapsed time: 911.290751 msecs"
"Elapsed time: 927.658125 msecs"
"Elapsed time: 916.796408 msecs"
nil

Actually, destructuring a record is even slightly slower than destructuring a small map:

user=> (let [x {:a 1 :b 2}] (dotimes [_ 5] (time (dotimes [_ 1e7] (let [{:keys [a b]} x] [a b])))))
"Elapsed time: 866.942377 msecs"
"Elapsed time: 746.273968 msecs"
"Elapsed time: 734.366239 msecs"
"Elapsed time: 729.346188 msecs"
"Elapsed time: 746.96393 msecs"
nil

One patch later

I patched destructure to emit keyword-lookups:

user=> (let [x (Foo. 1 2)] (dotimes [_ 5] (time (dotimes [_ 1e7] (let [{:keys [a b]} x] [a b])))))
"Elapsed time: 479.911064 msecs"
"Elapsed time: 488.895167 msecs"
"Elapsed time: 464.617431 msecs"
"Elapsed time: 480.944575 msecs"
"Elapsed time: 464.969779 msecs"
nil

It’s better, but still slower than without destructuring. Let’s see what slows us down:

user=> (macroexpand-1 '(let [{:keys [a b]} x] [a b]))
(let* [map__3838 x map__3838 (if (clojure.core/seq? map__3838) (clojure.core/apply clojure.core/hash-map map__3838) map__3838) b (:b map__3838) a (:a map__3838)] [a b])

My bet is on the if+seq?, so I remove it:

user=> (let [x (Foo. 1 2)] (dotimes [_ 5] (time (dotimes [_ 1e7] (let* [map__3838 x map__3838 map__3838 b (:b map__3838) a (:a map__3838)] [a b])))))
"Elapsed time: 125.188397 msecs"
"Elapsed time: 103.041099 msecs"
"Elapsed time: 70.061558 msecs"
"Elapsed time: 70.793984 msecs"
"Elapsed time: 69.759146 msecs"
nil

This if+seq? is what allows for named arguments. I wonder if this behaviour should be an option of map destructuring (e.g. {:keys [a b] :named-args true}). Anyway, I had in mind that such a dispatch could be optimized with protocols.
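For illustration (the function and its arguments are hypothetical, not from the post), here is what that seq dispatch buys you: a function can receive keyword arguments as a flat rest-seq and still destructure them as a map. A minimal sketch, assuming a Clojure version that supports map destructuring of rest args:

```clojure
;; The if+seq? turns the flat rest-seq of key/value pairs
;; into a hash-map before the keyword lookups happen.
(defn connect [& {:keys [host port] :or {port 8080}}]
  [host port])

(connect :host "example.com")             ; => ["example.com" 8080]
(connect :host "example.com" :port 9090)  ; => ["example.com" 9090]
```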

Optimizing dispatch with protocols

user=> (defprotocol Seq (my-seq [this]) (my-seq? [this]))
Seq
user=> (extend-protocol Seq
clojure.lang.ISeq
(my-seq [this] (.seq this))
(my-seq? [this] true)
Object
(my-seq [this] (clojure.lang.RT/seq this))
(my-seq? [this] false))
nil

Let’s see how my-seq? compares to seq?:

user=> (let [x (Foo. 1 2)] (dotimes [_ 5] (time (dotimes [_ 1e7] (let* [map__3838 x map__3838 (if (my-seq? map__3838) (clojure.core/apply clojure.core/hash-map map__3838) map__3838) b (:b map__3838) a (:a map__3838)] [a b])))))
"Elapsed time: 179.282982 msecs"
"Elapsed time: 161.781526 msecs"
"Elapsed time: 154.307042 msecs"
"Elapsed time: 155.567677 msecs"
"Elapsed time: 153.716604 msecs"
nil

So my-seq? is about 3x faster than seq?, which means that protocols are indeed speedy: yet another incredible piece of work by Rich Hickey!
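As a quick sanity check of the dispatch above (not in the original benchmark), my-seq? mirrors the semantics of seq?:

```clojure
;; Seqs hit the ISeq implementation; everything else (vectors,
;; maps, records) falls through to the Object implementation.
(my-seq? '(1 2 3))   ; => true
(my-seq? [1 2 3])    ; => false, a vector is Seqable but not an ISeq
(my-seq? (Foo. 1 2)) ; => false
```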

I’m still exploring how low-level protocol fns behave in a concurrent setting; stay tuned!

Meanwhile don’t forget to register to the next European Clojure training session taught by Lau and me.

An optimization job is never done

optimization — cgrand, 18 November 2009 @ 12 h 26 min

… when you don’t set measurable goals.

Yesterday, in Life of Brian, I showed several tricks to make an implementation of Brian’s Brain faster. I thought it was enough, but since some disagree, here comes another perf boost.

Repetitive lookups

In neighbours-count the first arg (which happens to be a seq of 3 items) is destructured:

(defn neighbours-count [[above current below] i w]

which means there are 3 lookups per :off cell, even though the results of these lookups are constant for a given row!
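You can see where the repeated work comes from by macroexpanding the destructuring form (the gensym names below are illustrative; they will differ on your REPL):

```clojure
(macroexpand-1 '(let [[above current below] rows] above))
;; => (let* [vec__42 rows
;;           above   (clojure.core/nth vec__42 0 nil)
;;           current (clojure.core/nth vec__42 1 nil)
;;           below   (clojure.core/nth vec__42 2 nil)]
;;      above)
```

Three nth calls, re-executed on every invocation of neighbours-count.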

So let’s move these lookups into step1:

(defn neighbours-count [above current below i w]
  (let [j (mod (inc i) w)
        k (mod (dec i) w)
        s #(if (= (aget #^objects %1 (int %2)) :on) 1 0)]
    (+ (+ (+ (s above j) (s above i))
         (+ (s above k) (s current j)))
      (+ (+ (s current k) (s below j))
        (+ (s below i) (s below k))))))
 
(defn step1 [[above #^objects current below]]
  (let [w (count current)]
    (loop [i (int (dec w)), #^objects row (aclone current)]
      (if (neg? i)
        row
        (let [c (aget current i)]
          (cond
            (= c :on)
              (recur (dec i) (doto row (aset i :dying)))
            (= c :dying)
              (recur (dec i) (doto row (aset i :off)))
            (= 2 (neighbours-count above current below i w))
              (recur (dec i) (doto row (aset i :on)))
            :else
              (recur (dec i) row)))))))

Benchmark:

user=> (do  (let [b (rand-board 80 80)] (time ((apply comp (repeat 1000 step)) b))) nil)
"Elapsed time: 2058.372014 msecs"
nil
user=> (do  (let [b (rand-board 200 200)] (time ((apply comp (repeat 100 step)) b))) nil)
"Elapsed time: 1105.499303 msecs"
nil

Wow, twice as fast, again! 2.1ms/step on an 80*80 board, 11.1ms/step on a 200*200 board.

This version on a 200*200 board is then nearly 10x faster than my first one!

Life of Brian

optimization — cgrand, 17 November 2009 @ 13 h 24 min

Some weeks ago, Lau Jensen implemented Brian’s Brain but gave up on making it go faster. He asked for my advice but I was swamped with work.

First cut

Here is my take:

(defn neighbours-count [rows i w]
  (count (for [j [(mod (inc i) w) i (mod (dec i) w)]
               r rows :when (= :on (r j))] r)))

(defn step1 [rows]
  (let [current (second rows)
        w (count current)]
    (loop [i (dec w), row (transient current)]
      (if (neg? i)
        (persistent! row)
        (let [c (current i)]
          (cond
            (= c :on) 
              (recur (dec i) (assoc! row i :dying))
            (= c :dying) 
              (recur (dec i) (assoc! row i :off))
            (= 2 (neighbours-count rows i w))
              (recur (dec i) (assoc! row i :on))
            :else
              (recur (dec i) row)))))))

(defn step [board]
  (vec (map step1 
         (partition 3 1 (concat [(peek board)] board [(first board)])))))

A board is a vector of vectors (rows) containing either :on, :dying or :off.

If you look closely at step you should recognize torus-window from Lau’s Brians functional brain. I must confess I am the one who gave Lau the idea of the seq-heavy implementation — just to prove it was doable without indices.
This is why I chose to keep a clear, functional, indexless processing at the row level. It is only the cell level that is worth optimizing.

step1 steps a single row. It takes as its only argument a seq of 3 rows (above, current and below) and uses a transient vector to compute the next state of the current row. I tried to keep neighbours-count simple while reducing the number of intermediate seqs (less allocation, less GC, faster code) by using a seq comprehension (for) rather than several higher-order functions.
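On a toy three-row board, the torus wrapping in step looks like this:

```clojure
;; peek yields the last row and first the first row, so partition
;; produces one (above current below) window per row, torus-style.
(let [board [:r0 :r1 :r2]]
  (partition 3 1 (concat [(peek board)] board [(first board)])))
;; => ((:r2 :r0 :r1) (:r0 :r1 :r2) (:r1 :r2 :r0))
```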

How does it run?

First a little utility to create random boards (with one third of :on cells):

(defn rand-board [w h]
  (vec (map vec (partition w (take (* w h) (repeatedly #(if (zero? (rand-int 3)) :on :off)))))))

Then the actual benchmark:

user=> (do (let [b (rand-board 80 80)] (time ((apply comp (repeat 1000 step)) b))) nil)
"Elapsed time: 16734.609776 msecs"
nil
user=> (do (let [b (rand-board 200 200)] (time ((apply comp (repeat 100 step)) b))) nil)
"Elapsed time: 10200.273708 msecs"
nil

So I get 16.7ms/step on an 80*80 board (where Lau was stuck at 200ms on his box) and 102ms/step on a 200*200 board.

Optimizations

Unrolling neighbours-count

neighbours-count is called once for each :off cell and is rather complex, so making it go faster should yield sensible gains; this is why I unrolled it:

(defn neighbours-count [[above current below] i w]
  (let [j (mod (inc i) w)
        k (mod (dec i) w)
        s #(if (= (%1 %2) :on) 1 0)]
    (+ (+ (+ (s above j) (s above i))
         (+ (s above k) (s current j)))
      (+ (+ (s current k) (s below j))
        (+ (s below i) (s below k))))))

I paired the sum terms because currently only math ops with two args are inlined.
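A rough way to check this yourself (this micro-benchmark is mine, not from the original post, and the timings will vary by machine):

```clojure
;; Two-arg + compiles to a direct Numbers.add call, while the
;; variadic form goes through the var and its rest-arg machinery.
(let [a 1 b 2 c 3 d 4]
  (time (dotimes [_ 1e7] (+ a b c d)))
  (time (dotimes [_ 1e7] (+ (+ a b) (+ c d)))))
```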

user=> (do (let [b (rand-board 80 80)] (time ((apply comp (repeat 1000 step)) b))) nil)
"Elapsed time: 7880.492493 msecs"
nil
user=> (do (let [b (rand-board 200 200)] (time ((apply comp (repeat 100 step)) b))) nil)
"Elapsed time: 4679.679401 msecs"
nil

More than twice as fast! 7.9ms/step for an 80*80 board and 46.8ms/step for a 200*200 one.

Using java arrays

Since there is little sharing between two iterations, vectors are not that useful, so why not try using arrays to represent rows? The board is then a vector of arrays:

(defn neighbours-count [[above current below] i w]
  (let [j (mod (inc i) w)
        k (mod (dec i) w)
        s #(if (= (aget #^objects %1 (int %2)) :on) 1 0)]
    (+ (+ (+ (s above j) (s above i))
         (+ (s above k) (s current j)))
      (+ (+ (s current k) (s below j))
        (+ (s below i) (s below k))))))

(defn step1 [rows]
  (let [#^objects current (second rows)
        w (count current)]
    (loop [i (int (dec w)), #^objects row (aclone current)]
      (if (neg? i)
        row
        (let [c (aget current i)]
          (cond
            (= c :on) 
              (recur (dec i) (doto row (aset i :dying)))
            (= c :dying) 
              (recur (dec i) (doto row (aset i :off)))
            (= 2 (neighbours-count rows i w))
              (recur (dec i) (doto row (aset i :on)))
            :else
              (recur (dec i) row)))))))

(defn step [board]
  (vec (map step1 
         (partition 3 1 (concat [(peek board)] board [(first board)])))))

(defn rand-board [w h]
  (vec (map #(into-array Object %) (partition w (take (* w h) (repeatedly #(if (zero? (rand-int 3)) :on :off)))))))

Benchmark numbers:

user=> (do (let [b (rand-board 80 80)] (time ((apply comp (repeat 1000 step)) b))) nil)
"Elapsed time: 5155.422268 msecs"
nil
user=> (do (let [b (rand-board 200 200)] (time ((apply comp (repeat 100 step)) b))) nil)
"Elapsed time: 2964.524262 msecs"
nil

Arrays cut one third of the runtime: we are down to 5.2ms/step for an 80*80 board and 29.6ms/step for a 200*200 one. This means the overall speedup is greater than 3x!

Adding a swing frontend

I decoupled the rendering from the computation: 25 fps no matter how many steps are really computed. If the computation is fast, not all steps are rendered (e.g. for an 80*80 board at 200 steps per second, only one step out of eight is rendered to screen).

(import '(javax.swing JFrame JPanel) 
        '(java.awt Color Graphics2D))

(defn dims [[row :as board]] [(count row) (count board)])

(defn render-board [board #^Graphics2D g cell-size]
  (let [[w h] (dims board)]
    (doto g
      (.setColor Color/BLACK)
      (.scale cell-size cell-size)
      (.fillRect 0 0 w h))
    (doseq [i (range h) j (range w)]
      (when-let [color ({:on Color/WHITE :dying Color/GRAY} ((board i) j))]
        (doto g
          (.setColor color)
          (.fillRect j i 1 1))))))

(defn start-brian [board cell-size]
  (let [[w h] (dims board)
        switch (atom true)
        board (atom board)
        panel (doto (proxy [JPanel] []
                      (paint [g] (render-board @board g cell-size)))
                (.setDoubleBuffered true))]
    (doto (JFrame.) 
      (.addWindowListener (proxy [java.awt.event.WindowAdapter] []
                            (windowClosing [_] (reset! switch false))))
      (.setSize (* w cell-size) (* h cell-size))
      (.add panel)
      .show)
    (future (while @switch (swap! board step)))
    (future (while @switch (.repaint panel) (Thread/sleep 40)))))

(start-brian (rand-board 200 200) 5)

The rendering code is slightly different when using arrays:

(defn render-board [board #^Graphics2D g cell-size]
  (let [[w h] (dims board)]
    (doto g
      (.setColor Color/BLACK)
      (.scale cell-size cell-size)
      (.fillRect 0 0 w h))
    (doseq [i (range h) j (range w)]
      (when-let [color ({:on Color/WHITE :dying Color/GRAY} (aget #^objects (board i) (int j)))]
        (doto g
          (.setColor color)
          (.fillRect j i 1 1))))))

Putting all cores to work

The first attempt at parallelizing is to replace map with pmap in step.

(defn step [board]
  (vec (pmap step1 
         (partition 3 1 (concat [(peek board)] board [(first board)])))))

This does not yield much of a gain:

user=> (do  (let [b (rand-board 80 80)] (time ((apply comp (repeat 1000 step)) b))) nil)
"Elapsed time: 5624.407083 msecs"
nil
user=> (do  (let [b (rand-board 200 200)] (time ((apply comp (repeat 100 step)) b))) nil)
"Elapsed time: 2778.74241 msecs"
nil

Performance is slightly worse on the 80*80 board and slightly better on the 200*200 one. The reason is that processing a single row is not computationally intense enough to offset the parallelization overhead, so I have to send each core more work at once.

Rich is working on some cool features in the par branch that will make such parallelization a breeze; meanwhile, I am going to show how to manually parallelize step.

The idea is simply to split the vector into N chunks, where N is the number of cores.

(defn step [board]
  (let [n (.availableProcessors (Runtime/getRuntime))
        l (count board)
        bounds (map int (range 0 (inc l) (/ l n)))]
    (apply into
      (pmap #(vec (map step1 (partition 3 1 
               (concat [(board (mod (dec %1) l))] 
                 (subvec board %1 %2) 
                 [(board (mod %2 l))]))))
        bounds (rest bounds)))))
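For example, the chunk boundaries computed for a 10-row board on a 4-core machine:

```clojure
(let [l 10 n 4]
  (map int (range 0 (inc l) (/ l n))))
;; => (0 2 5 7 10)
;; pmap then pairs consecutive bounds, yielding the
;; row ranges [0 2), [2 5), [5 7) and [7 10).
```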

How does it fare?

user=> (do  (let [b (rand-board 80 80)] (time ((apply comp (repeat 1000 step)) b))) nil)
"Elapsed time: 3672.935876 msecs"
nil
user=> (do  (let [b (rand-board 200 200)] (time ((apply comp (repeat 100 step)) b))) nil)
"Elapsed time: 2154.728325 msecs"
nil

3.7ms/step on an 80*80 board, 21.5ms/step on the 200*200 board. It is indeed better than plain pmap and cuts off 30% of the run time. Now this is nearly 5 times faster than my initial attempt!

You can view the whole source code (including variants) here and follow me on Twitter there.

See this subsequent post for more optimizations (another 2x improvement).

What *warn-on-reflection* doesn’t tell you about arrays

optimization — cgrand, 6 August 2009 @ 9 h 12 min
user=> (time (let [a (make-array Object 100)] (dotimes [i 10000000] (aset a (rem i 100) nil))))
Reflection warning, NO_SOURCE_PATH:840 - call to aset can't be resolved.
"Elapsed time: 136063.015272 msecs"

With aget/aset, the index must be hinted to be an int:

user=> (time (let [a (make-array Object 100)] (dotimes [i 10000000] (aset a (int (rem i 100)) nil))))
"Elapsed time: 1064.546402 msecs"

Wow, more than 100x faster (reflection is bad), but even though the compiler no longer complains, one can still help it choose a faster path:

user=> (time (let [a #^"[Ljava.lang.Object;" (make-array Object 100)] (dotimes [i 10000000] (aset a (int (rem i 100)) nil))))
"Elapsed time: 247.446882 msecs"

On the whole we get a 500x speed-up with only two type hints.
