`R.equals(new Error('XXX'), new Error('XXX'))` evaluates to `false`, but the tests in `equals.js` expect it to be `true`. Running `npm run test`, it passes.
`npm run pretest && mocha test/equals`
The `dist/ramda.js` that is committed is the build from the most recent tagged release; run `npm run build` to rebuild it locally.
Prior to rebuilding, it was using Ramda v0.18, which didn't yet have the `equals` implementation for `Error`.
`npm run test` implicitly runs `npm run pretest`, by the npm convention that scripts prefixed with `pre` run before the named script (and likewise `post` scripts run after).
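By way of illustration, a hypothetical `scripts` block showing that convention (the `lint` and `report` commands are made up, not this repo's actual `package.json`):

```json
{
  "scripts": {
    "pretest": "npm run lint",
    "test": "mocha",
    "posttest": "npm run report"
  }
}
```

With this, `npm run test` executes `pretest`, then `test`, then `posttest`.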
@gilligan You can view it as a way to compose a computation, where every step takes the value(s) from the previous step and emits zero or more new values for the next step. This allows the final computation to be used in any kind of reduction (map, filter, etc.) of any kind of list (array, stream, generator, etc.). The internal mechanism also has a built-in way to deal with early termination (when doing a `take()`, for example).
Since you're extending the "step" function, you end up with a means to "stream" your input list through the entire computation value by value, rather than doing `.map().filter().take()` and building up intermediate lists between steps. So once you've coded a transducer, it is more efficient and more reusable than a chain of operations.
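The difference can be sketched in plain JavaScript (a simplified model of transducers, not Ramda's actual internals; `double`, `isBig`, and `append` are names made up for the example):

```javascript
const double = x => x * 2;
const isBig = x => x > 4;

// Chained: .map() allocates the intermediate array [2, 4, 6, 8]
// before .filter() even starts.
const chained = [1, 2, 3, 4].map(double).filter(isBig);

// Transduced: each transducer decorates the step function, so every
// value flows through both transformations in a single pass.
const map = f => step => (acc, x) => step(acc, f(x));
const filter = p => step => (acc, x) => (p(x) ? step(acc, x) : acc);
const append = (acc, x) => (acc.push(x), acc);

const transduced = [1, 2, 3, 4].reduce(map(double)(filter(isBig)(append)), []);

console.log(chained);    // [6, 8]
console.log(transduced); // [6, 8], with no intermediate array
```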
I found watching this video over and over again quite helpful in gaining an understanding: https://www.youtube.com/watch?v=6mTbuzafcII
`pipe([map(f), filter(g)], arr)` becomes `into([], compose(map(f), filter(g)), arr)`. I really like using them for processing object streams in Node with transduce-stream, because it lets me keep my normal Ramda workflow on streams:

```js
import throughx from 'transduce-stream';

stream.pipe(throughx(compose(
  map(f),
  filter(g),
  take(5)
), {objectMode}))
```
`pipe` and `compose` from :point_up: December 12, 2015 10:11 AM have the same order of operations. I take it transducers do something to reverse the order of operations?
In `0.17`, `invoker(1, 'getResponseHeader', 'content-type')` worked fine, but was documented to be used as `invoker(1, 'getResponseHeader')('content-type')`. I suspect the currying of `pipe` and `compose` had something to do with that. Upgrading to `0.18`, I ran into errors with functions further down the pipe because `'content-type'` was effectively dropped. D'oh.
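For reference, the documented call shape can be sketched with a toy `invoker` (an illustrative stand-in for the idea, not Ramda's implementation; `fakeXhr` is a made-up object):

```javascript
// Toy invoker: first take the method name, then the arguments plus the
// target object, and call target[method](...args).
const invoker = (arity, method) => (...args) => {
  const target = args[arity];
  return target[method](...args.slice(0, arity));
};

const getContentType = invoker(1, 'getResponseHeader');

const fakeXhr = {
  getResponseHeader: name => (name === 'content-type' ? 'application/json' : null),
};

console.log(getContentType('content-type', fakeXhr)); // 'application/json'
```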
`compose(a, b, c)` reads (from right to left) as "`c` is wrapped by `b`, which is wrapped by `a`", so you end up with a transformation that first applies `a`; `a` then calls `b`, and `b` then calls `c`.
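That ordering can be observed directly with a small sketch (vanilla JS; `log` is a hypothetical step decorator that records when its wrapper runs):

```javascript
// compose nests right-to-left: compose(a, b, c)(x) === a(b(c(x))).
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);

const order = [];

// A "step transformer" that records its label before delegating inward.
const log = label => step => (acc, x) => {
  order.push(label);
  return step(acc, x);
};

const xf = compose(log('a'), log('b'), log('c'))((acc, x) => acc.concat(x));
[1].reduce(xf, []);

console.log(order); // ['a', 'b', 'c']: the outermost wrapper runs first
```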
With a transducer you're really just extending a `step` function. Let's take `into` as an example. Without a transformation over the `step` function, all `into([], identity, list)` does is iterate `list` and `append` every item to `[]`. So the `step` function here is `append`. However, I could choose to extend the step function by passing not `identity` but a function of type `(a,b -> b) -> (a,b -> b)` (note how these functions have the same signature as `append`). So the function I'm passing to `into([], f, list)` takes the `step` function as its first argument and returns a new `step` which wraps the old one in order to apply some custom logic (like transforming its argument, in the case of `map`). In the case of a composition of these "step transformers", that means we're just taking the `step` function and threading it through the pipeline, decorating it along the way. In the end we're left with `append`, but wrapped many times in order to do all sorts of things with its argument before it's finally called (or not called, if you're using `filter` as one of its "decorators").
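A minimal sketch of that decoration in plain JavaScript (simplified `into`, `append`, `map`, and `filter`, not Ramda's internals):

```javascript
// The base step function: append one value to the accumulator.
const append = (acc, x) => (acc.push(x), acc);

// Transducers: each takes a step function and returns a wrapped step
// with the same (acc, x) -> acc signature.
const identity = step => step;
const map = f => step => (acc, x) => step(acc, f(x));
const filter = p => step => (acc, x) => (p(x) ? step(acc, x) : acc);

// A simplified into: decorate append with the transducer, then reduce.
const into = (acc, xform, list) => list.reduce(xform(append), acc);

console.log(into([], identity, [1, 2, 3]));           // [1, 2, 3]
console.log(into([], map(x => x * 10), [1, 2, 3]));   // [10, 20, 30]
console.log(into([], filter(x => x % 2), [1, 2, 3])); // [1, 3]
```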
Now, that's a bit simplified; in reality our `step` function is actually three functions (contained in an object): one `init` (like "setup"), one `step`, and one `result` (like "teardown"). But the idea remains the same.
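The three-part transformer can be sketched like this (using plain object keys for readability; the actual transducer protocol uses `@@transducer/init`, `@@transducer/step`, and `@@transducer/result`):

```javascript
// A transformer that builds up an array.
const arrayTransformer = {
  init: () => [],                        // "setup": create the accumulator
  step: (acc, x) => (acc.push(x), acc),  // add one value
  result: acc => acc,                    // "teardown": finalize the result
};

// A map transducer over transformer objects: only step is decorated.
const map = f => xf => ({
  init: xf.init,
  step: (acc, x) => xf.step(acc, f(x)),
  result: xf.result,
});

// A simplified transduce: wrap the transformer, run init/step/result.
const transduce = (xform, transformer, coll) => {
  const xf = xform(transformer);
  return xf.result(coll.reduce((acc, x) => xf.step(acc, x), xf.init()));
};

console.log(transduce(map(x => x + 1), arrayTransformer, [1, 2, 3])); // [2, 3, 4]
```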