So... Clojure has this pretty cool thing: transducers. Rich Hickey introduced them into Clojure because he was tired of rewriting reducing functions for every different part of the language. The cool thing about transducers is that they are a generalization of map/filter/take/etc., which means you can make them play well with just about any part of your language. That is really awesome. Read on to see why.
So: about this
After reading about transducers, this is what I could make of them. They are more or less the same as Clojure's transducers, but without the coolest part about transducers: a language where they are actually useful :D
The main difference compared to the Clojure transducers is of course that these aren't extra arities of the regular comprehension procedures, but stand-alone procedures.
The transducers that I have implemented are listed in the documentation section below.
History, explanations and examples
Transducers were introduced to the wider public by Rich Hickey (of Clojure fame) here: https://www.youtube.com/watch?v=6mTbuzafcII and explained more thoroughly here: https://www.youtube.com/watch?v=4KqUvG8HPYo
If you don't want to watch the whole thing: transducers are a generalization of map, filter etc. (map 1+ (list 1 2 3)) is written using transducers as (transduce (tmap 1+) conj (list 1 2 3)). So, more verbose and harder to understand! Hooray!
Seriously though: it has some benefits. Transducers compose into new transducers. This means you can do (filter odd? (map 1+ (list 1 2 3))) but without building intermediate lists: (transduce (compose (tmap 1+) (tfilter odd?)) conj (list 1 2 3)). Note that even though compose is ordinary function composition, the composed transformations apply left to right: tmap first, then tfilter.
For chez, this means that this stupid example:
(transduce (compose (tmap 1+) (tmap 1+) (tfilter odd?) (tmap 1-)) tcount (iota 1000000))
runs in a third of the time of:
(length (map 1- (filter odd? (map 1+ (map 1+ (iota 1000000))))))
almost exclusively because of GC overhead. Increasing the list length to ten million makes the difference even bigger (0.3 vs 1.3s on my computer). In guile the overhead of transducers is much larger, but for ten million elements there is still a non-trivial difference (2.1 vs 2.7s). This is bound to change as guile becomes faster.
It is most certainly slower than writing your own damn named let, or, for guile users, using my racket-like for loops. For chez scheme it might actually sometimes be sensible to use.
Not documentation, but the best you have got
I provide two helper functions: conj and tcount. conj is a transducer-friendly cons. You should use it if you want the transducing process to produce a list. tcount counts the elements at the end of a transduce.
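A quick sketch of both helpers in use, assuming the transduce argument order shown above (transformation, reducer, collection):

```scheme
;; Produce a list: increment every element.
(transduce (tmap 1+) conj (list 1 2 3))

;; Produce a count: how many elements survive the filter.
(transduce (tfilter odd?) tcount (iota 10))
```
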
tmap, tfilter, ttake, tdrop, ttake-while, tdrop-while, tmapcat are pretty much the same as they are in srfi-1 or CL, but they don't take a list as their final argument.
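These all compose like anything else. A sketch using names from this library (the exact results are untested here):

```scheme
;; Increment, keep the even results, drop the first of those,
;; then stop after three more have been taken.
(transduce
 (compose (tmap 1+)
          (tfilter even?)
          (tdrop 1)
          (ttake 3))
 conj
 (iota 100))
```

As in the earlier example, the composed transformations apply left to right: tmap runs first, ttake last.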
tlog optionally takes a log procedure (which might do anything) that is passed each value passing through, or defaults to (lambda (x) (display x) (newline)).
(trandom-sample probability) randomly samples the elements passing through. probability is a number between 0 and 1. A probability of 1 means all elements are passed through.
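The two transducers above compose with everything else. A sketch, assuming (tlog) with no argument uses the default logger:

```scheme
;; Print every value flowing through, then keep (on average)
;; half of them. The result is of course nondeterministic.
(transduce
 (compose (tlog) (trandom-sample 0.5))
 conj
 (iota 10))
```
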
I provide 3 versions: transducers.scm (guile scheme), transducers.chezscheme.sls (chez scheme) and transducers-r6rs.scm (r6rs). They all include transducers-impl.scm. Each wrapper creates a record named <reduced> with the immutable field val (accessor deref-reduced) and constructor (reduced val). The predicate is (reduced? r).
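The <reduced> record is the early-termination convention: a transducer wraps its result in (reduced val) to tell the transduce loop to stop folding, and the loop unwraps it with deref-reduced after checking reduced?. A hypothetical ttake-style step function might use it like this (a sketch of the convention only; it ignores the init and completion arities the real implementation handles with case-lambda):

```scheme
;; Hypothetical: stop the reduction after n elements.
(define (tstop-after n)
  (lambda (reducer)
    (let ((seen 0))
      (lambda (acc x)
        (set! seen (+ seen 1))
        (if (< seen n)
            (reducer acc x)
            ;; Signal "we are done" to the transduce loop,
            ;; which checks reduced? and calls deref-reduced.
            (reduced (reducer acc x)))))))
```
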
transducers-impl.scm is written in more-or-less portable scheme. The scheme you choose must support case-lambda and have a gensym procedure. If your scheme does not have a compose function, please have a look at transducers-r6rs.scm, which provides you with one.
trandom-sample cannot be implemented in portable scheme, since neither r5rs nor r6rs defines anything about random numbers. Look at transducers.scm (guile) or transducers.chezscheme.sls for that.
Some of the transducers are stateful and are not suitable for use with multi-shot continuations or threading. In fact, only tmap, tfilter, trandom-sample, tcat and tmapcat are not stateful.
Licensed under the MPL v.2