Content-Defined Dependency Shading

unsorted — cgrand, 9 March 2018 @ 14 h 16 min

Shading is the practice of renaming a dependency and embedding it in a project, to make sure it won’t conflict with another version of itself (it’s a good time to go watch or rewatch Rich Hickey’s Spec-ulation talk).

For Unrepl we rely on shading extensively as we don’t want the code injected by the client to interfere with running code or even tools running a different strain of Unrepl.

That’s how we ended up with the idea of content-defined shading: choose a granularity of shading (e.g. a namespace, all deps, or the whole project), compute a hash of its content (in our case SHA-1, thus we are SHA-ding!) and use that hash in the renaming process.

Doing so, we end up with stable names that don’t depend on a date, version or commit.
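
To make the idea concrete, here is a minimal sketch (my own illustration, not Unrepl’s actual shading code): hash the content at the chosen granularity and splice the digest into the generated name. The shaded-name helper and the unrepl.printer namespace are hypothetical.

(import 'java.security.MessageDigest)

(defn sha1-hex [^String s]
  ;; hex digest of the UTF-8 bytes of s
  (let [digest (.digest (MessageDigest/getInstance "SHA-1") (.getBytes s "UTF-8"))]
    (apply str (map #(format "%02x" %) digest))))

(defn shaded-name
  "Derives a stable, content-defined name for a compilation unit (hypothetical helper)."
  [ns-sym source]
  (symbol (str ns-sym "$" (subs (sha1-hex source) 0 8))))

;; (shaded-name 'unrepl.printer (slurp "src/unrepl/printer.clj"))
;; always yields the same symbol for the same source, whatever the date, version or commit.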

Datastructures: stateless transient flags

unsorted — cgrand, 5 March 2018 @ 16 h 07 min

Traditionally, transient data structures use a mutable box to determine whether a node can be modified in place or not. Somehow it acts like a transaction: when the mutable box contains a non-null value, the transient is still “open” (editable nodes have not been shared yet, so they can be modified). Thus each node holds a reference to this reference object (when the node was created by a persistent operation, that reference is null).

When I was working on confluent map, I found another way to track transient node ownership.

The main idea is to put the flag not in the node but in its parent. Since there is one flag per child, it’s better to store them as a bitmap. Space-wise this solution is no greedier than using a reference type: a 32-bit bitmap vs a 32-bit reference (with compressed pointers) plus an additional object.

The role of these flags is to tell whether a child is exclusively owned (not shared) by its parent. Now when one traverses a transient data structure from its root, one just has to check that the whole ownership chain is exclusive. When it is, the node is editable (can be modified in place). No mutable box required.
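
Here is a hedged sketch of the idea (mine, not the actual confluent map code): each node stores a bitmap of which children it owns exclusively, and a node is editable only if every link on the path from the root is flagged as exclusive.

(defrecord Node [children owned])   ; owned: bit i set <=> child i is not shared

(defn editable?
  "True when the node reached by following path (a seq of child indices)
   from root is exclusively owned all the way down."
  [root path]
  (loop [node root path (seq path)]
    (if-let [[i & is] path]
      (if (bit-test (:owned node) i)
        (recur (nth (:children node) i) is)
        false)    ; a shared link breaks the ownership chain: copy before editing
      true)))     ; every link was exclusive: safe to mutate in place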

Tangentially related notes:

  1. In confluent map, I didn’t even need a separate bitmap, since I had a spare case left in the main bitmap.
  2. CHAMP hash maps are no faster than Clojure’s: the benchmarks in the paper measure differences in hash algorithms (plain Java vs Clojure’s Murmur3 strain), not in data structure implementations.
  3. Confluent map sports a 3-way merge in time linear in the number of edits (not in the actual size).

Mic check

unsorted — cgrand, @ 13 h 04 min

Long time no post.
In the years since the last post, I’ve worked on several projects; let me introduce some of them!

First there’s xforms, a collection of transducer-related stuff. Xforms is really great if you need to do any kind of aggregation; it also provides transducer versions of some core functions and several new transducing contexts (strings, IO). Plus it has optimizations for dealing with key-value pairs without ever allocating a pair object (in the best case).

Xforms was initially a Clojure/JVM lib, but Mike Fikes started porting it to cljs; however I was not happy with having to either split the codebase or break the API to solve the “macros-and-code-in-one-file” problem. With his and António’s expert knowledge to guide me, I figured out a couple of macros which allow writing cljc code that mixes macros and code and works on clj/jvm, cljs/clj and self-hosted cljs (yes, there are macros to figure out the cljs flavor). This is really useful when porting clj/jvm code to cljs/*. These macros are packaged in a library named Macrovich.

With colleagues at HCA Datalab we worked on Powderkeg (“Keg” for friends), which basically turns Apache Spark into a giant transducing context. Plus it works without any AOT: start the REPL, connect to the cluster, run your transducers on RDDs. Benefits: you can experiment against real data with a tight feedback loop, and you can test your computations with no dependencies on Spark.

The part of Keg that makes the REPL-no-AOT experience possible has been repurposed into Portkey, which allows deploying freshly REPLed functions as AWS Lambdas. A sub-project is aws-clj-sdk, an AWS API generated from the machine-readable service descriptions provided by Amazon (as the official SDKs or Python’s Boto are). Kimmo Koskinen and Baptiste Dupuch are hard at work on both projects.

Last, there’s Unrepl, which aims to provide better REPLs and tooling in general without adding a single dependency to your project. All that is needed is a plain socket REPL and an Unrepl-powered tool (like Spiral for Emacs, Vimpire for Vim, or Unravel for the terminal).

These aren’t the reducing functions you are looking for

unsorted — cgrand, 8 October 2014 @ 1 h 12 min

Transducers are powerful and easy to grasp when presented as transformations of reducing functions. However, once you scratch their surface, you quickly realize that’s not their true nature: they transform stateful processes.

In a previous post, I explained why seeded transduce forces transducers to return stateful reducing functions. However this can be fixed. The current implementation of transduce reads:

(defn transduce
  "reduce with a transformation of f (xf). If init is not
  supplied, (f) will be called to produce it. f should be a reducing
  step function that accepts both 1 and 2 arguments, if it accepts
  only 2 you can add the arity-1 with 'completing'. Returns the result
  of applying (the transformed) xf to init and the first item in coll,
  then applying xf to that result and the 2nd item, etc. If coll
  contains no items, returns init and f is not called. Note that
  certain transforms may inject or skip items."  {:added "1.7"}
  ([xform f coll] (transduce xform f (f) coll))
  ([xform f init coll]
     (let [f (xform f)
           ret (clojure.core.protocols/coll-reduce coll f init)]
       (f ret))))

To fix it you have to make the seeded case the special case and not the normal case:

(defn fseed [f init]
  (fn
    ([] init)
    ([x] (f x))
    ([x y] (f x y))))

(defn transduce
  "reduce with a transformation of f (xf). If init is not
  supplied, (f) will be called to produce it. f should be a reducing
  step function that accepts both 1 and 2 arguments, if it accepts
  only 2 you can add the arity-1 with 'completing'. Returns the result
  of applying (the transformed) xf to init and the first item in coll,
  then applying xf to that result and the 2nd item, etc. If coll
  contains no items, returns init and f is not called. Note that
  certain transforms may inject or skip items."  {:added "1.7"}
  ([xform f init coll] (transduce xform (fseed f init) coll))
  ([xform f coll]
     (let [f (xform f)
           ret (clojure.core.protocols/coll-reduce coll f (f))]
       (f ret))))

By making the seeded reduce the special case, the init value can be wrapped in a composite init value by transducer-returned reducing functions. Now writing, for example, a pure take is possible.
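
For illustration, here is a hedged sketch (mine, not from clojure.core) of such a pure take: the remaining count travels inside a composite [remaining acc] accumulator instead of a volatile. It assumes the modified transduce above, where the 0-arity of the (transformed) reducing function supplies the init.

(defn pure-take [n]
  (fn [rf]
    (fn
      ([] [n (rf)])                             ; composite init: [remaining downstream-acc]
      ([[_ acc]] (rf acc))                      ; completion: unwrap, then complete downstream
      ([[n acc] x]
        (if (pos? n)
          (let [acc (rf acc x)
                n   (dec n)]
            (cond
              (reduced? acc) (reduced [n @acc]) ; downstream asked to stop
              (zero? n)      (reduced [n acc])  ; we took enough items
              :else          [n acc]))
          (reduced [n acc]))))))

;; With the modified transduce above, (transduce (pure-take 3) + 0 (range 10))
;; should yield 3 (= 0 + 1 + 2), without any mutable state.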

This modification fixes the seeded reduce case, but it requires allocating intermediate objects at each step, which goes against the promise that transducers alleviate allocation pressure (a promise inherited from reducers). So it’s fixable, but for performance reasons transducer-returned reducing functions remain stateful.

Once you go stateful…

Once you assume transducer-returned reducing functions are stateful, you realize they carry some cruft from their functional origins (this was discussed at the last Paris Clojure Meetup (fr)):

  • the 0-arity always delegates to the 0-arity of the downstream reducing function,
  • in any arity, the accumulator must be used in a linear manner, the only allowed operation on it being the 2-arity of the downstream function,
  • and don’t forget to check for reduced return values!

So the accumulator values are passed around but never used, except by the most downstream reducing function: the one that was passed to transduce by the user! Why not, then, encapsulate the accumulator state in a process?

A process in this model has only two operations: process one input and complete, which can be mapped respectively to the 1-arity and 0-arity of a function:

(fn
  ([] ...) ; completes the process, return value is unspecified
  ([x] ...)) ; processes x, returns true when no more input should be fed in.

So now transduce could be written as:

(defn transduce [xform f init coll]
  (let [vacc (volatile! init)
        p (fn 
            ([])
            ([x] (reduced? (vswap! vacc f x))))
        p (xform p)]
    (feed! p coll) ; transducers are now process -> process
    (p)
    (f (let [acc @vacc] (if (reduced? acc) @acc acc)))))

Where feed! is the iteration primitive. For exposition it can be implemented using reduce, but that’s backwards: reduce should be implemented on top of feed!.

(defn feed!
 "Returns true if the process can't process more input."
 [p coll]
  (reduce #(and (p %2) (reduced true)) false coll))

Now we can try to reimplement some transducers as process-transforming functions:

(defn map [f]
  (fn [p1]
    (fn
      ([] (p1))
      ([x] (p1 (f x))))))

(defn filter [pred]
  (fn [p1]
    (fn
      ([] (p1))
      ([x] (and (pred x) (p1 x))))))

(defn take [n]
  (fn [p1]
    (let [vn (volatile! (dec n))]   
      (fn
        ([] (p1))
        ([x] (or (neg? @vn) (p1 x) (neg? (vswap! vn dec))))))))

(defn partition-by [f]
  (fn [p]
    (let [a (java.util.ArrayList.)
          pv (volatile! ::none)]
      (fn
        ([]
          (when-not (.isEmpty a)
            (let [v (vec (.toArray a))]
              ;;clear first!
              (.clear a)
              (p v)))
          (p))
        ([input]
          (let [pval @pv
                val (f input)]
            (vreset! pv val)
            (if (or (identical? pval ::none)
                  (= val pval))
              (do (.add a input) false) ; .add returns true
              (let [v (vec (.toArray a))]
                (.clear a)
                (or (p v)
                  (do (.add a input) false))))))))))
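
As a quick sanity check, here is a hypothetical REPL interaction using the process-based transduce and feed! together with the map, filter and take just defined (which shadow clojure.core’s): the odd numbers 1, 3 and 5 become 2, 4 and 6, summing to 12, and take stops the feed after the third item.

=> (transduce (comp (filter odd?) (map inc) (take 3)) + 0 (range 10))
12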

Conclusion

To me the main advantage of transducers as process transformers is that their interface is a bit less complected; in particular, not having to deal with reduced wrappers may make it easier to have them support primitive types.

The process model also seems to be less of a mismatch when reasoning about transducers in other contexts (e.g. sequences, channels).

A better name

There has been much discussion on how to best type them but not that much about how to name them in a more patterned way. What about BuilderDecoratorFactories?

The Rules of Transducer Club

unsorted — cgrand, 11 September 2014 @ 15 h 46 min

First rule: you don’t call a transducer.
Second rule: only composition is allowed.
Third rule: better stateful than pure.

Here are the rationale behind each rule:

First rule: you don’t call a transducer.

Transducers may be so-called “stateful”, that is, they create stateful reducing functions. As a consequence you should not hold onto them for too long (otherwise state goes sour…). And there’s no better way to not hold onto them for too long than to never get a hold of them at all!

That’s why transduce, sequence, iteration and chan all take a transducer: to spare you the perils of having to call it yourself.

Second rule: only composition is allowed.

It’s a direct consequence of the first rule: you should only compose transducers.

Third rule: better stateful than pure.

This one is a bit more subtle and caused exclusively by transduce.

A reducing fn now has three arities: [] -> acc, [acc x] -> acc and [acc] -> result.
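
For instance (a plain illustration, not from the original post), + already is such a reducing fn:

(+)       ; [] -> acc: the identity init, 0
(+ 10 5)  ; [acc x] -> acc: one reduction step, 15
(+ 15)    ; [acc] -> result: completion, here the identity, 15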

When one wants to pass state from one step to the next there are two options: be pure and pass it in the accumulator (and you’ll use the 1-arity to clean up) or be stateful.

It turns out that the pure option is a bad one, for two reasons. First, it’s going to increase object churn. Second, it’s going to change the type of the accumulator, and this is a problem because:

  • in transduce an init value may be specified and this init value must be of the type of the accumulator,
  • we don’t have a way to map from the result domain to the accumulator domain.

So it means the user has to be aware of an implementation detail of your transducer (the way you smuggle state in the accumulator) to craft a proper init value. It’s an abstraction leak, it’s bad. Be stateful.

Optimal justification of text with Dynamic Programming

unsorted — cgrand, 29 August 2014 @ 11 h 46 min

This is an exercise in dynamic programming I found on /ftp.

The goal is to tightly arrange a given sequence of n words within page margins, maximizing overall neatness. To be more precise, we wish to minimize the sum, over all lines except the last, of the cubes of the number of blank characters at the end of each line. See the comments in the code for more details.

The algorithm has O(n^2) time and space complexities.

Below is my take on it, and I believe its complexity to be O(n*width). I achieve linearity in the word count by laying out the text back to front: the key insight is that the optimal layout of a suffix of the words only depends on the amount of space left on the current line; it does not depend on how the words before it were laid out.

(defn layout [words width]
  (let [cat (fn [word {u :ugliness [l & ls] :lines}] ; adds the word to the start of the first line
              {:ugliness u :lines (cons (cons word l) ls)})
        br (fn [rem {u :ugliness ls :lines}] ; adds a break (creates a new line)
              {:ugliness (+ u (* rem rem rem)) :lines (cons () ls)})
        layout ; layout words, knowing that the first line shouldn't have more than rem characters
          (fn [layout words rem]
            (if-let [[word & ws] (seq words)]
              (cond
                (= rem width) ; blank line ahead
                  (cat word (layout ws (- rem (count word))))
                (< (count word) rem) ; enough room for a space and the current word
                  (cat word
                    (min-key :ugliness
                      (layout ws (- rem (count word) 1))
                      (br rem (layout ws width))))
                :else (br rem (layout words width)))
              {:ugliness 0 :lines (list ())}))
        mlayout (memoize layout)
        layout (fn self [words rem] (mlayout self words rem))]
    (:lines (layout words width))))
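
For illustration, a hypothetical way to call it (my own example, not part of the original exercise), printing each computed line:

(require '[clojure.string :as str])

(doseq [line (layout (str/split "the quick brown fox jumps over the lazy dog" #" ") 20)]
  (println (str/join " " line)))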

This is an exercise I prepared for a Lambda Next workshop but that we never used.

Macros, closures and unexpected object retention

unsorted — cgrand, 11 September 2013 @ 12 h 37 min

The common advice about macros is that they should emit as little code as possible and delegate to ancillary functions as soon as possible. Here is an example from clojure.java.jdbc:

(defmacro transaction
  [& body]
  `(transaction* (fn [] ~@body)))

I still think this is good advice but it has unintended consequences. The problem with this piece of code is that all closed-over objects in body are going to be retained longer than expected, longer than they would have been retained if the macro had emitted all the logic implemented in transaction* instead of delegating to it. (See this discussion as an example of issues created by such code.)

The closure object holds references to all closed-over objects and, since a closure can be called many times, it can’t get rid of them. So the only time they become garbage-collectible is once the closure itself becomes collectible… and a closure can’t be collected while it’s executing.

Fortunately, there’s a low-level feature for that:

(defmacro transaction
  [& body]
  `(transaction* (^:once fn* [] ~@body)))

It instructs the compiler that the closure is one-shot and that closed-over references should be cleared, thus allowing referenced objects to be garbage collected before the closure returns.
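
clojure.core itself relies on this feature: both delay and lazy-seq wrap their body in such a one-shot closure, so that the body’s closed-overs can be dropped as soon as the value is realized. A quick check (hypothetical REPL session; note that the ^:once metadata does not show up in the printed expansion):

=> (macroexpand '(delay (expensive-step)))
(new clojure.lang.Delay (fn* [] (expensive-step)))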

This problem is not specific to macros, but in that case it can easily be solved: the closure is an implementation detail and the macro writer knows enough about its life-cycle to fix it. However any regular closure (fn or reify) is going to prevent its closed-overs from being garbage collected while one of its (Java) methods is running, because the closure is referenced from the stack.

During the last LambdaNext workshop a delegate stumbled on such a case while experimenting with reducers (and incidentally it made me understand a memory issue I worked around last year):

=> (time (reduce + 0 (map identity (range 1e8))))
"Elapsed time: 5729.579 msecs"
4999999950000000
=> (time (reduce + 0 (r/map identity (range 1e8))))
;; Interrupting...
Expression was interrupted: null

(Depending on your memory settings, you may have to tweak the length of the range to exhibit the problem; more details here)

If one modifies reducers to not use (java) methods but external extensions:

(in-ns 'clojure.core.reducers)

(defrecord Folder [coll xf])

(defn folder
  "Given a foldable collection, and a transformation function xf,
  returns a foldable collection, where any supplied reducing
  fn will be transformed by xf. xf is a function of reducing fn to
  reducing fn."
  {:added "1.5"}
  ([coll xf]
     (Folder. coll xf)))

(extend-type Folder
      clojure.core.protocols/CollReduce
      (coll-reduce [fldr f1]
                   (clojure.core.protocols/coll-reduce (:coll fldr) ((:xf fldr) f1) (f1)))
      (coll-reduce [fldr f1 init]
                   (clojure.core.protocols/coll-reduce (:coll fldr) ((:xf fldr) f1) init))

      CollFold
      (coll-fold [fldr n combinef reducef]
                 (coll-fold (:coll fldr) n combinef ((:xf fldr) reducef))))

Then the problem disappears:

(in-ns 'user)
=> (time (reduce + 0 (r/map identity (range 1e8))))
"Elapsed time: 4437.012 msecs"
4999999950000000

This is because the protocol method is no longer a Java method of the reducer object, and thus the reducer can be reclaimed while the (protocol) method is executing.

So next time you have a memory issue, look for closures tying the life-cycle of their closed-overs to theirs!

Tarjan’s strongly connected components algorithm

unsorted — cgrand, 18 March 2013 @ 19 h 48 min

I dislike algorithms that are full of indices and mutations. Not because they are bad but because I always have the feeling that the core idea is buried. As such, Tarjan’s SCC algorithm irked me.

So I took the traditional algorithm, implemented it in Clojure with explicit environment passing, then replaced the indices with explicit stacks (thanks to persistence!) and, after some tweaks, I realized I had gone full circle and could switch to stack lengths instead of the stacks themselves and get rid of the loop. However the whole process made the code cleaner to my eye. You can look at the whole history.

Here is the resulting code:

(defn tarjan 
  "Returns the strongly connected components of a graph specified by its nodes
   and a successor function succs from node to nodes.
   The used algorithm is Tarjan's one."
  [nodes succs]
  (letfn [(sc [env node]
            ; env is a map from nodes to stack length or nil,
            ; nil means the node is known to belong to another SCC
            ; there are two special keys: ::stack for the current stack 
            ; and ::sccs for the current set of SCCs
            (if (contains? env node)
              env
              (let [stack (::stack env)
                    n (count stack)
                    env (assoc env node n ::stack (conj stack node))
                    env (reduce (fn [env succ]
                                  (let [env (sc env succ)]
                                    (assoc env node (min (or (env succ) n) (env node)))))
                          env (succs node))]
                (if (= n (env node)) ; no link below us in the stack, call it a SCC
                  (let [nodes (::stack env)
                        scc (set (take (- (count nodes) n) nodes))
                        ; clear all stack lengths for these nodes since this SCC is done
                        env (reduce #(assoc %1 %2 nil) env scc)]
                    (assoc env ::stack stack ::sccs (conj (::sccs env) scc)))
                  env))))]
    (::sccs (reduce sc {::stack () ::sccs #{}} nodes))))
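
Hypothetical usage on a small directed graph given as an adjacency map (the map itself serves as the successor function); the expected SCCs are shown as a comment:

(def g {:a #{:b} :b #{:c} :c #{:a}
        :d #{:b :c :e} :e #{:d :f}
        :f #{:c :g} :g #{:f}
        :h #{:e :g :h}})

(tarjan (keys g) g)
;; expected: #{#{:a :b :c} #{:d :e} #{:f :g} #{:h}}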

As always, if you need some short-term help with Clojure (code review, consulting, training etc.), contact me!

Decaying lists: log scale for lists

pondering — cgrand, 12 February 2013 @ 13 h 44 min

The concept of exponentially decaying lists by David Barbour piqued my interest and I implemented it. It can be summed up as: log scale for lists!

As with a log scale, decaying lists make it possible to manage a large range of values and thus to get a better understanding of the data.

So a decaying list grows logarithmically with the number of conjed items. It follows that some items are dropped when others are inserted. Let’s see:

=> (seq (into (decaying-list) (range 100)))
(99 98 97 96 95 94 92 91 90 89 88 87 86 84 83 80 70 65 61 59 57 49 44 30 29 27 25 23 22 18 12 4 0)

So most of the recent items are there, but the older an item is, the more likely it is to have been dropped.

A decaying list is parametrized by its half-life. The default half-life is 20, which means that an item in the list has a one in two chance to “survive” the insertion of 20 other items.
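
A quick back-of-the-envelope check of what that half-life means (my own illustration, assuming each insertion independently lets an existing item survive with probability p, so that p^20 = 1/2):

(Math/pow 2 (/ -1.0 20))                 ; ≈ 0.966, per-insertion survival probability
(Math/pow (Math/pow 2 (/ -1.0 20)) 20)   ; ≈ 0.5, about half the items survive 20 conjs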

=> (seq (into (decaying-list 5) (range 100))) ; a half-life of 5 conjs
(99 98 97 96 95 94 93 91 83 81 79 78 68 61 57 54 52 31 15)

You can know where the next decay (if any; nil is returned when there will be no decay) is going to happen:

=> (def dl (into (decaying-list 5) (range 100)))
#'net.cgrand.decay/dl
=> (decay-loc dl)
[(93 95 97 98 99) (92 91 89 87 85 81 77 72 71 63 54 52 46 31 20 16)]

So here the decay is going to happen between 93 and 92 (the first items of the “left” and “right” sequences). By default the most recent one is kept.

=> (seq (conj dl 100))
(100 99 98 97 95 93 91 89 87 85 81 77 72 71 63 54 52 46 31 20 16)

Indeed 92 has been dropped. (Repeated calls to decay-loc always yield the same result.)

However we may elect to synthesize a new value rather than just keep one of the two candidates to decay. For example we can compute their average:

=> (seq (into (decaying-list 5 
                :collapse (fn [dl]
                            (let [[[l] [r]] (decay-loc dl)] 
                              (/ (+ l r) 2.0))))
          (range 100)))
(99 98 97 95.5 93.5 91.5 90 89 87.125 82.75 79.5 72.875 64.8125 60.875 58 50.732421875 27.0 17.75 11.25 2.8125)

Or something a bit more elaborate (and precise):

=> (defn stats [x]
     (if (map? x)
       x
       {:min x :max x :avg x :n 1}))
#'net.cgrand.decay/stats
=> (defn merge-stats [{mina :min maxa :max avga :avg na :n}
                      {minb :min maxb :max avgb :avg nb :n}]
     {:min (min mina minb)
      :max (max maxa maxb)
      :avg (/ (+ (* na avga) (* nb avgb)) (double (+ na nb)))
      :n (+ na nb)})
#'net.cgrand.decay/merge-stats
=> (seq (into (decaying-list 5 
                :collapse (fn [dl]
                            (let [[[l] [r]] (decay-loc dl)] 
                              (merge-stats (stats l) (stats r)))))
          (range 100)))
(99 98 97 {:min 92, :max 96, :avg 94.0, :n 5} 91 {:min 87, :max 90, :avg 88.5, :n 4} 86 85 84 83 {:min 80, :max 82, :avg 81.0, :n 3} {:min 78, :max 79, :avg 78.5, :n 2} 77 {:min 70, :max 76, :avg 73.0, :n 7} 69 68 {:min 49, :max 67, :avg 58.0, :n 19} {:min 41, :max 48, :avg 44.5, :n 8} {:min 36, :max 40, :avg 38.0, :n 5} 35 {:min 32, :max 34, :avg 33.0, :n 3} {:min 10, :max 31, :avg 20.5, :n 22} {:min 3, :max 9, :avg 6.0, :n 7} {:min 0, :max 2, :avg 1.0, :n 3})

Decaying lists can also maintain a state that is updated after each conj or decay. Below the implementation, there is an example of state management.

You can also cap the length of the decaying list by using the :capacity option. A quick note on capacity: increasing the capacity by the half-life value (e.g. going from 1000 to 1050 when the half-life is 50) doubles the range the decaying list remembers.

From lazy seqs to reducers and back

unsorted — cgrand, 11 February 2013 @ 17 h 27 min

Sometimes in a seq pipeline, you know that some intermediate results are, well, intermediate and as such don’t need to be persistent but, on the whole, you still need the laziness.

You can’t always opt for reducers because, while they are non-persistent (and thus faster), they imply giving up laziness.

So you can’t mix and match transformations on sequences and on reducers when you care for laziness. Or, wait!, maybe you can.

At their core, reducers are just a clever way to compose big functions while giving the user the illusion of manipulating collections. So we should be able to apply a reducer pipeline if we get hold of the composed function. This is exactly what the seq-seq function below does:

(defn reverse-conses 
  ([s tail] 
    (if (identical? (rest s) tail)
      s
      (reverse-conses s tail tail)))
  ([s from-tail to-tail]
    (loop [f s b to-tail]
      (if (identical? f from-tail)
        b
        (recur (rest f) (cons (first f) b))))))

(defn seq-seq [f s]
  (let [f1 (reduce #(cons %2 %1) nil 
              (f (reify clojure.core.protocols.CollReduce
                   (coll-reduce [this f1 init]
                     f1))))]
    ((fn this [s]
       (lazy-seq 
         (when-let [s (seq s)]
           (let [more (this (rest s)) 
                 x (f1 more (first s))]
             (if (reduced? x)
               (reverse-conses @x more nil)
               (reverse-conses x more)))))) s)))

(defmacro seq->> [s & forms]
  `(seq-seq (fn [n#] (->> n# ~@forms)) ~s))

Note that the captured function (f1) may be impure, so don’t share it!

Now one can specify non-persistent lazy seq pipelines:

=> (seq->> (range) (r/map str) (r/take 25) (r/drop 5))
("5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" 
 "17" "18" "19" "20" "21" "22" "23" "24")

To prove laziness is at play here:

=> (take 2 (seq->> (range) (r/map #(str (doto % prn))) (r/take 25) (r/drop 5)))
(0
1
2
3
4
5
6
"5" "6")

Of course seq-seq could be made to support chunked sequences and, if you try to play with it, beware of r/mapcat.

Update:

I added reverse-conses to account for the fact that, when several items are added during a single call to f1 (e.g. during a r/mapcat “step”), they were added in the wrong order – if only everything was more set-like.
