@hemanth Depends on the size of the example
const max = R.reduce(R.max, 0, [3, 4, 1, 9, 3]);
Imperative code would be something like this:
```js
let max = 0;
for (let i of [3, 4, 1, 9, 3]) { max = R.max(max, i); }
```
Note that in the first version `max` can be `const`. In the other I have to make it a `let`. IMO this highlights the key difference between imperative and functional code. The first is a definition of what `max` is. The second is a sequence of steps that ends up with `max` having the correct value.
`Math.max.apply(null, [3, 4, 1, 9, 3])`?
```js
let map = function(f, args) {
  let [x, ...xs] = args;
  if (x === void 0) {
    return [];
  } else {
    return [f(x)].concat([].slice.call(map(f, xs)));
  }
};
```
If the `let [x, ...xs] = args` is avoided it would be nice...
```js
const map = (f, list) =>
  list.length > 0 ? [f(list[0])].concat(map(f, list.slice(1))) : [];
```
`slice`?
`slice` doesn't mutate the array, nice.
Is there a way to do `R.pipe(R.prop('rows'), R.pluck('doc'))` with a single function?
@raine another option is to add another function, one that performs the algorithm, like @00Davo said:
```js
const baseAlgorithm = (read, payload) =>
  read(payload.type, payload.site)
    .then(buildEmailOptions(payload))
    .then(transporter.sendMailAsync.bind(transporter));

const ioAlgorithm = (payload) =>
  baseAlgorithm(readTemplates, payload);

module.exports = {baseAlgorithm, ioAlgorithm};
```
Then you can test the algorithm with a unit test, and the io version with an integration test.
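A minimal sketch of what that unit test could look like; the stand-ins for `buildEmailOptions`, `transporter`, and the payload values are invented here just so the sketch runs on its own:

```js
const assert = require('assert');

// Invented stand-ins so the sketch is self-contained; the real ones live in the module above.
const buildEmailOptions = (payload) => (template) => ({to: payload.site, body: template});
const transporter = {sendMailAsync: (opts) => Promise.resolve(opts)};

const baseAlgorithm = (read, payload) =>
  read(payload.type, payload.site)
    .then(buildEmailOptions(payload))
    .then(transporter.sendMailAsync.bind(transporter));

// The unit test injects a fake `read`, so no template file or SMTP is touched.
const fakeRead = (type, site) => Promise.resolve(`template for ${type}@${site}`);

baseAlgorithm(fakeRead, {type: 'welcome', site: 'example.org'})
  .then((sent) => assert.strictEqual(sent.body, 'template for welcome@example.org'))
  .then(() => console.log('baseAlgorithm unit test passed'));
```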
input is `['apple', 'the', 'one', 'two', 'three', 'elephant', 'whatever']`
```js
var filterByLength = R.filter(function(str) {
  return str.length > 3 && str.length < 6;
});
```
desired output is `['apple', 'three']`
```js
var filterByLength = R.filter(
  R.converge(
    R.and,
    R.compose(R.lt(3), R.prop('length')),
    R.compose(R.gt(6), R.prop('length'))
  )
);
```
`R.filter(R.where({length: R.allPass([R.gt(6), R.lt(3)])}))` is shorter.
filter $ ((> 3) <&&> (< 6)) . length
R.filter(R.compose(R.contains(R.__, R.range(4, 6)), R.prop('length')))
You can write `R.lt(R.__, 3)` or `R.flip(R.lt)(3)` for readability.
What's the reasoning behind `gt` and `lt`? It had me confused for a moment. I expect `var lt6 = R.lt(6)` to make a predicate function that returns true when the argument is Less Than 6.
`R.lt(x, y)` means `x < y`, and curried arguments apply left to right. https://github.com/algesten/fnuc exists mainly because that's Kind of Weird.
What would you expect `R.lt(7, 12)` to return?
Just flip `lt` and similar functions? Or would that be too inconsistent?
`false`, because 12 is not less than 7.
The invariant is that `fn(a, b) === fn(a)(b)` for Ramda functions. To say it's the same except for certain non-commutative operators was problematic. And we probably couldn't just say binary functions, either, or would you expect `R.map(fn, list)` but `R.map(list)(fn)`? Madness.
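For concreteness, that invariant with the current `lt` argument order (assuming present-day Ramda):

```js
const R = require('ramda');

R.lt(7, 12); //=> true, i.e. 7 < 12
R.lt(7)(12); //=> true, the same comparison, just curried
```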
Only for `gt` and `lt`, like I explained. It's not the end of the world, but Ramda's argument order had started to make a lot of sense to me, until I met `gt` and `lt` just now. :P
R.pipe(R.add, R.map)(10)([1, 2, 3])
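Expanding that pipe, assuming Ramda: it only lines up because both `add` and `map` take their data last.

```js
const R = require('ramda');

// R.add(10) is a partially applied adder; R.map lifts it over a list.
const addTenToAll = R.pipe(R.add, R.map)(10);
addTenToAll([1, 2, 3]); //=> [11, 12, 13]
```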
We looked for names for flipped versions of `lt`, `gte`, etc. We had `subtractN`, `divideBy`, and such, but no good names for these.
`R.lt(7, 12)` just seems to me it should return `true`, and `divide(20, 5)` should return `4`, not `0.25`. `fnuc` was built around trying to solve this issue, but I think it ends up somewhat incoherent.
Wouldn't you want `R.divide` to be able to do `let divideBy20 = R.divide(20)`?
I'd rather have `lt` take only one parameter. `lt(x, y)` is just a long way to spell `x < y`, which you don't need, whereas `lt(x)` is useful.
I also expected `R.lt(6)` to test if the following value is less than 6. I think I've even used it like that to explain currying to colleagues. :P
What if `R` itself were made callable, and then you wrote something like `R(6).lt` to express the section `(6 <)`? It's a hack, but maybe it's a good one?
And add a `.chain()` method so we can keep on chaining: `R(6).chain().lt(3).equals(true).value()`!
`R()` would just return an object full of partially-applied functions.
So when you call `R(x)`, all of Ramda's functions are partially applied with `x`, and the results are returned in an object. That's an interesting way of doing things. :P
With `R.compose`/`R.pipe`, there's nothing special about Ramda functions; you can pass anything you want. So, while you could do something like `R(3).someRfunc`, the only way I know to make it more generic, so that you could apply any other function you like, is with a stateful registration process.
Or use `R` itself as the function.
`compose` changes.
`R(x).whatever` would be a lot less interoperable. Considering `R.flip` is available, though, maybe that's okay? `R(x).f` would really just be sugar specifically for the binary operator funcs.
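A rough sketch of that sugar for just the comparison operators; `callableR` is a made-up name, not anything Ramda ships:

```js
const R = require('ramda');

// R(6).lt in the proposal would be R.lt partially applied with 6,
// i.e. the section (6 <).
const callableR = (x) => ({
  lt:  R.lt(x),
  lte: R.lte(x),
  gt:  R.gt(x),
  gte: R.gte(x),
});

callableR(6).lt(3); //=> false, since 6 < 3 is false
callableR(6).lt(9); //=> true,  since 6 < 9
```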
`R.filter(R.where({length: R.both(R.lt(3), R.gt(6))}))`! :P
Right, which is why I use:
R.filter(R.where({length: R.both(R.gt(R.__, 3), R.lt(R.__, 6))}))
or
R.filter(R.where({length: R.both(R.flip(R.gt)(3), R.flip(R.lt)(6))}))
If you have to `R.flip` them yourself all the time for legibility, that's inconvenient in my opinion.
You can do `const _ = R.__`, which is a tiny bit more convenient to use when you have to. Still less convenient than having a function that's the right way around to begin with, though.
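For what it's worth, the alias version looks like this (assuming Ramda; `lessThan6` is just an example name):

```js
const R = require('ramda');
const _ = R.__;

const lessThan6 = R.lt(_, 6); // reads as x < 6
lessThan6(4); //=> true
lessThan6(9); //=> false
```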
R.map(arr, fn);
`R.map` takes its data last, because it's a much more common case to want to create a partially applied `map` with a mapper than it is to create a partially applied `map` with its data, waiting for different mappers to be passed in.
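A small illustration of that common case, assuming Ramda:

```js
const R = require('ramda');

// Partially apply map with the mapper; the data arrives later.
const doubleAll = R.map(R.multiply(2));
doubleAll([1, 2, 3]); //=> [2, 4, 6]
doubleAll([10, 20]);  //=> [20, 40]
```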
Except for `gt` and `lt`, where that idea seems to have failed to make it. :P
And `subtract`, `divide`, `modulo`. But it's still a very small minority of Ramda functions.
I've been intrigued by `fnuc` since I first saw it. But I've never been quite convinced. `map(square)` seems an obvious abstraction, and currying is familiar to most functional programmers in that format.
I don't expect `fnuc` to go anywhere. But I'm happy you find it interesting. I'm all pro Ramda and its role in FP for JavaScript.
`lt(x, y) !== lt(x)(y)`! But I just feel the settled-upon order of arguments is wrong. I disagree with your statement that `lt(10, 12)` is the only comfortable way to write "is 10 less than 12", because under that reasoning `map(arr, func)` would also be the preferable version.
`map(func, arr)` reads "map `func` over `arr`", which is totally reasonable.
For a coffeescripter this makes so much sense: `map arr, (a) -> blaha`. Granted, that could also be solved: `map(arr) (a) -> blaha`.
With `lt`, `gt`, `subtract`, when they're partially applied you have to `flip` them.
With `subtract`, I had just assumed the arguments were in the reverse of the order they're currently in.
I think it comes from `Array.prototype.map`, and there it just felt more natural to convert `arr.map(fn)` to `map(arr, fn)`. If they had been considering currying, or spending a lot of time looking at other languages that do this, they might well have chosen a different order.
`R.subtract(a, b); //=> b - a`
I've always wanted `setTimeout` to take its function last, because it reads and writes nicer that way.
`subtract 3 5` returns 2, so I don't think it's that strange.
That's really `(subtract 3)(5)`, in a language where all functions are unary. But here we deal with polyadic functions. And our curry is more complex, allowing `fn(a, b, c) === fn(a, b)(c) === fn(a)(b, c) === fn(a)(b)(c)`.
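Those equivalences are easy to check with `R.curry` (assuming Ramda):

```js
const R = require('ramda');

// All four call shapes reach the same ternary function.
const add3 = R.curry((a, b, c) => a + b + c);
add3(1, 2, 3); //=> 6
add3(1, 2)(3); //=> 6
add3(1)(2, 3); //=> 6
add3(1)(2)(3); //=> 6
```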
Then you get `R.divide(20, 5); //=> 0.25`, and "why should we favour the legibility of partially applied functions when the main definition is so clearly wrong?" That's why we have the placeholder. But, as I said, I would not mind a PR brought up to get this issue into everyone's view.
You can easily write `||` instead of `&&` depending on how tired you are or how quick you're writing it, or something else.
```js
var filterByLength = R.filter(function(str) {
  return str.length > 3 && str.length < 6;
});
```
You could factor out the `length` property part and use `propSatisfies` for that:
```js
var isBetween = R.curry((low, high, x) => x > low && x < high);
var filterByLength = R.filter(R.propSatisfies(isBetween(3, 6), 'length'));
```
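Applied to the example input from earlier, that version produces the desired output:

```js
filterByLength(['apple', 'the', 'one', 'two', 'three', 'elephant', 'whatever']);
//=> ['apple', 'three']
```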
I wanted something that handles all of `a < x < b`, `a < x <= b`, `a <= x < b` and `a <= x <= b`. Obviously you can do it, but I didn't see the pretty, obvious abstraction, anything cleaner than `inRange :: Bool -> Bool -> Num -> Num -> Num -> Bool`, where the first two are `includesLeft` and `includesRight`, and your `isBetween` is `inRange(false, false)`. Those boolean flags are code smells to me, but I'm not sure what a better abstraction would be.
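A rough JS rendering of that Bool-flag signature, just to make the shape concrete; `inRange` here is a hypothetical helper, not part of Ramda:

```js
const R = require('ramda');

const inRange = R.curry((includesLeft, includesRight, low, high, x) =>
  (includesLeft  ? low <= x : low < x) &&
  (includesRight ? x <= high : x < high));

const isBetween = inRange(false, false); // exclusive at both ends
isBetween(3, 6, 5); //=> true
isBetween(3, 6, 6); //=> false
```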
```haskell
data Clusivity = In | Ex

inRange :: Clusivity -> Clusivity -> Num -> Num -> Num -> Bool
```
`Clusivity` is just a relabelled `Bool`, so you gain readability without losing power.
You can always use `!` or `!!` to coerce anything as you choose.
inRange :: (Num -> Num -> Bool) -> (Num -> Num -> Bool) -> Num -> Num -> Num -> Bool
inRange :: (Num -> Bool) -> (Num -> Bool) -> Num -> Bool
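A sketch of the predicate-taking variant in JS (hypothetical, not a Ramda API), assuming Ramda's current `lt`/`gt` argument order; the caller supplies each boundary check directly:

```js
const R = require('ramda');

const inRange = R.curry((lower, upper, x) => lower(x) && upper(x));

const between3and6 = inRange(R.lt(3), R.gt(6)); // 3 < x && 6 > x
between3and6(5); //=> true
between3and6(6); //=> false
```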
When I write a public API, I like to ensure that if the user does the wrong thing, she's likely to notice something wrong quickly: an `undefined` or `null` or empty list when she expects data, or as a last resort an error thrown.
My day job is almost all client-side JS; I suppose I could look for another one, but outside that I do have to deal with bad data from users, and have to consider how to handle it. So I would probably need to check for `Clusivity.In`, `Clusivity.Ex`, and others inside that function. And that would be for both sides of the range. While this is a nice advance in type-safety, it's also significantly more complex code than checking for a boolean.
That's my dilemma. The boolean is not nearly as readable, and it would probably lose to a nicer function, but adding a data type such as this, when there's no built-in support, does not end up coming across as actually simpler.
Well, that's why I didn't bring it up when the question first arose. I started down `isRange(a, b)` but realized that it really was not as useful as it should be, as it didn't let a user choose in/exclusive at either end, but, as I said, I couldn't come up with a clean way to do that.
And yes, I do mean a user of a library. But I work with an ever-shifting, large team that includes some fairly advanced JS people and some... let's say... much less advanced ones. These are not end-users. Some of these library users would probably pass `true` in place of `Clusivity.In` on their first attempt, regardless of the documentation and test cases. The worst of them would ask for help at that point, before even checking the obvious places.
How about `R.inRange({gt: 3, lt: 6})`?
And `R.inRange({gt: 6, lte: 3})`?
That wouldn't bother me much. I have no problem responding to incoherent data with the only answer that logically matches:
R.inRange({gt: 6, lte: 3})(n); //=> false for all n
No problem.
And `{gt: 6, gte: 12}` too.
And `inRange({gt: 6, gte: 6})` or `inRange({})` or `inRange({gthan: 6, lthan: 3})`.
`{'>': 6, '<': 12}`, I guess.
What happens when `gt` and `gte` have the same number?
Or `lt` and `gt`. And just in general.
The function would only look for `lt`, `lte`, `gt`, `gte`, or maybe symbolic equivalents. Anything else can be ignored.
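A hypothetical sketch of that options-object idea (not a real Ramda function): only the four recognised keys are consulted, anything else is ignored, and an incoherent spec simply comes out false for every input:

```js
const inRange = (spec) => (x) =>
  (spec.gt  === undefined || x >  spec.gt)  &&
  (spec.gte === undefined || x >= spec.gte) &&
  (spec.lt  === undefined || x <  spec.lt)  &&
  (spec.lte === undefined || x <= spec.lte);

inRange({gt: 3, lt: 6})(5);  //=> true
inRange({gt: 6, lte: 3})(5); //=> false, as noted above
```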
They could just as easily pass `true` when they were supposed to pass `Clusivity.Ex` or whatever in this case, though, right?