These are chat archives for ramda/ramda

26th Sep 2015
Jethro Larson
@jethrolarson
Sep 26 2015 02:12
See also perl?
Scott Sauyet
@CrossEye
Sep 26 2015 02:13
In perl, even with a lot of discipline... :smile:
Scott Christopher
@scott-christopher
Sep 26 2015 05:03
Looks like Proxies might be a viable option for currying once all browsers get around to implementing them.
function _curryHandler(boundArgs) {
  return {
    // Called when the proxied function is invoked.
    apply: function (target, thisArg, argList) {
      var totalArgs = boundArgs.concat(argList);
      // Enough args collected? Call through; otherwise re-wrap with the accumulated args.
      return totalArgs.length >= target.length
        ? target.apply(thisArg, totalArgs)
        : new Proxy(target, _curryHandler(totalArgs));
    },
    // Report the remaining arity and the original name; forward everything else.
    get: function (target, prop, recv) {
      switch (prop) {
        case 'length':
          return target.length - boundArgs.length;
        case 'name':
          return target.name;
        default:
          return target[prop];
      }
    }
  };
}

function curry(fn) {
  return new Proxy(fn, _curryHandler([]));
}
Performance looks quite good too: https://jsperf.com/curried-proxies
With the advantage of name and length properties that behave somewhat sensibly too.
var adder = curry(function add(a, b) { return a + b; });
adder.name   // add
adder.length // 2

var incr = adder(1)
incr.name   // add
incr.length // 1
incr(41)    // 42

adder(1, 2) // 3
The name property will be stuck with the name of the function that was first curried, but that's still quite a bit nicer for debugging than some other internal function name.
Scott Christopher
@scott-christopher
Sep 26 2015 05:08
The downside: AFAIK, only Firefox currently supports them.
Scott Christopher
@scott-christopher
Sep 26 2015 05:14
and Edge apparently, but I don't have Windows to test.
Raine Virta
@raine
Sep 26 2015 11:09
@davidchambers is methodNames different from R.functions?
Geert Pasteels
@Enome
Sep 26 2015 14:32
https://gist.github.com/Enome/5a0d89c1b8a011a94cfd Is this expected behavior? I want to partially apply the second argument of a function so that it will always use that argument, even when it's called with two arguments later on.
Martin Algesten
@algesten
Sep 26 2015 15:59
i believe it is expected – to support variadic functions. however you can ensure it works the way you expect by wrapping the second argument in R.identity
to force it to be unary.
the result of what you're doing is effectively ('hello', 'universe', 'world')
Raine Virta
@raine
Sep 26 2015 16:04
algesten: wrap the second argument in R.identity?
Martin Algesten
@algesten
Sep 26 2015 16:05
R.identity('hello', 'universe') // => 'hello' right?
or am i thinking wrong
Raine Virta
@raine
Sep 26 2015 16:09
I don't see how it helps here
Martin Algesten
@algesten
Sep 26 2015 16:10
> R.partialRight(concat, 'world')(R.identity('hello', 'universe'))
'helloworld'
or we could compose. R.compose(R.partialRight(concat,'word'),R.identity)
> R.compose(R.partialRight(concat,'word'),R.identity)('hello', 'universe')
'helloword'
or use unary.
Raine Virta
@raine
Sep 26 2015 16:14
yep that works, I'd like to see the real use case here though. I've never had to do that
Martin Algesten
@algesten
Sep 26 2015 16:14
> R.unary(R.partialRight(concat,'word'))('hello', 'universe')
'helloword'
i see the problem. basically ramda wants to support variadic functions at the same time as partially applying them.
the question is to chop or not to chop to the arity of the original function.
i've been in that situation with Math.max
> R.partialRight(Math.max,2)(3,4)
4
the question is whether this should evaluate to 3 or 4.
considering that Math.max(2,3,4) is 4
and Math.max.length is 2
Raine Virta
@raine
Sep 26 2015 16:18
Math.max is variadic, why shouldn't it evaluate to 4?
Martin Algesten
@algesten
Sep 26 2015 16:19
since the arity is 2
but since javascript has this (a variadic function with arity 2), R.partialRight must decide what to do, and it chose to keep all args, even though it gets a bit confusing like in @Enome's example
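To make that trade-off concrete, here's a minimal plain-JS sketch. Note these are hypothetical stand-ins for R.partialRight and R.unary written for illustration, not Ramda's actual implementations:

```javascript
// Stand-in for R.partialRight: appends the bound args after whatever
// the caller supplies later, keeping every argument.
function partialRight(fn) {
  var bound = Array.prototype.slice.call(arguments, 1);
  return function () {
    var args = Array.prototype.slice.call(arguments);
    return fn.apply(this, args.concat(bound));
  };
}

// Stand-in for R.unary: chops the call down to a single argument.
function unary(fn) {
  return function (arg) { return fn(arg); };
}

function concat(a, b) { return '' + a + b; }

// All caller args are kept, so 'world' lands in the (ignored) third slot:
partialRight(concat, 'world')('hello', 'universe'); // 'hellouniverse'

// unary chops the call first, so only 'hello' plus the bound arg survive:
unary(partialRight(concat, 'world'))('hello', 'universe'); // 'helloworld'

// The same keep-all-args policy explains the Math.max case:
partialRight(Math.max, 2)(3, 4); // Math.max(3, 4, 2) === 4
```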
Raine Virta
@raine
Sep 26 2015 18:59
R.uniq is very slow compared to lodash.uniq in large lists
Scott Sauyet
@CrossEye
Sep 26 2015 19:00
Not surprising. We do deep equality testing.
Martin Algesten
@algesten
Sep 26 2015 19:01
yes. looks nice. but obviously a bad way performance-wise.
Scott Sauyet
@CrossEye
Sep 26 2015 19:01
There was some recent talk about using a hash algorithm.
I'm curious to see if we can manage that.
Martin Algesten
@algesten
Sep 26 2015 19:01
but are we really doing uniq with deep equals?
Scott Sauyet
@CrossEye
Sep 26 2015 19:01
Not sure what it would look like.
Martin Algesten
@algesten
Sep 26 2015 19:01
is deep really a thing here?
Raine Virta
@raine
Sep 26 2015 19:01
uniqWith(identical) wouldn't be faster?
Scott Sauyet
@CrossEye
Sep 26 2015 19:02
it should be
Raine Virta
@raine
Sep 26 2015 19:02
it's not
Scott Sauyet
@CrossEye
Sep 26 2015 19:02
haven't tested
oh
Then we need to look at the fundamentals of uniq.
Martin Algesten
@algesten
Sep 26 2015 19:03
for fnuc i found this to be quite fast
uniq = function (as) {
  if (!as) {
    return as;
  }
  // keep only the first occurrence of each value
  return _filter(as, function (v, i) {
    return as.indexOf(v) === i;
  });
};
where _filter is an internal looping filter that supplies the index for each position.
as.indexOf(v) is a fast operation, javascript-wise.
but then. no deep equality.
obviously.
Martin Algesten
@algesten
Sep 26 2015 19:08
you use LS for that?
Raine Virta
@raine
Sep 26 2015 19:09
yes
33,000 items in the list before uniq
Scott Sauyet
@CrossEye
Sep 26 2015 19:10
how bad is it without the identical, or is it about the same?
Raine Virta
@raine
Sep 26 2015 19:10
let me try just uniq again
Scott Sauyet
@CrossEye
Sep 26 2015 19:10
I guess equals shouldn't be too bad at comparing strings. Let's see if I'm right.
Raine Virta
@raine
Sep 26 2015 19:11
./longest-words.sh shakespeare-hamlet-25.txt 37.44s user 0.21s system 97% cpu 38.814 total
Scott Sauyet
@CrossEye
Sep 26 2015 19:11
ok, then.
Raine Virta
@raine
Sep 26 2015 19:12
I have a faint recollection of doing R.uniq on large lists before in similar context and it wasn't this slow
Scott Sauyet
@CrossEye
Sep 26 2015 19:12
Well, I think we need to find a way to speed up uniqWith. I'm pretty sure it's been slow for some time.
There've been discussions about it recently, either here or on the issues.
But I haven't looked into the code. I was hoping that whoever was having the problem might investigate/do a PR. But I can't even remember who that was now.
One suggestion was to use R.toString to generate a hash. That might be faster for larger lists.
Scott Sauyet
@CrossEye
Sep 26 2015 19:18
Obviously R.toString is not likely to be particularly fast, but it should reduce from O(n^2) to O(n), albeit with fairly large coefficients.
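For illustration, a rough sketch of that hash idea, using JSON.stringify as a stand-in for R.toString (an assumption: R.toString handles more value types, and JSON serialisation has its own caveats):

```javascript
// Sketch of a hash-based uniq: serialise each element once and use the
// string as a key, so each element costs one serialisation plus an O(1)
// lookup instead of a deep-equals against every element already kept.
function uniqByHash(list) {
  var seen = Object.create(null);
  var result = [];
  for (var i = 0; i < list.length; i++) {
    // JSON.stringify stands in for R.toString here; it misbehaves on
    // e.g. undefined or functions, but illustrates the single-pass idea.
    var key = JSON.stringify(list[i]);
    if (!(key in seen)) {
      seen[key] = true;
      result.push(list[i]);
    }
  }
  return result;
}
```

Note that this preserves first-occurrence order, the same as comparing against the collected results does.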
Martin Algesten
@algesten
Sep 26 2015 19:18
lodash is not doing a deep equals. they do have an optional transform function to apply to each element. maybe they have a hash transform that makes it behave like deep equals. but shrug.
Raine Virta
@raine
Sep 26 2015 19:18
uniqWith is implemented in terms of containsWith, I'm not an expert on algorithms but that doesn't sound very efficient
because it would get very slow as the list gets bigger
Scott Sauyet
@CrossEye
Sep 26 2015 19:19
Unless we do uniqBy and generate a hash, we're stuck with O(n^2), I believe.
We have to compare each test candidate with each of the uniq ones already collected to see if it matches.
that's n^2.
Raine Virta
@raine
Sep 26 2015 19:21
if (pred === R.identical) return require('lodash.uniq')(list); modularity FTW
just kidding. I'm going to take a long bike ride home now
Scott Sauyet
@CrossEye
Sep 26 2015 19:22
enjoy. Gonna go stain my deck. Yours sounds like more fun.
Martin Algesten
@algesten
Sep 26 2015 20:56
O(n^2) is the most pessimistic situation, i.e. when the entire list consists of already-unique things. on the other hand, a list containing only the same thing would be O(n). any real scenario would be in between, depending on data.
Scott Sauyet
@CrossEye
Sep 26 2015 21:01
@algesten, although it's not the only measure, worst-case is a common descriptor for algorithms, especially for big-O notation.
Martin Algesten
@algesten
Sep 26 2015 21:19
@CrossEye sure. just saying that it's not like a bubble sort that would definitely be O(n^2) no matter the input. lodash appears to be using a non-hash approach, and most likely any real-world scenario would not be hit badly by this time complexity.
Scott Sauyet
@CrossEye
Sep 26 2015 21:24
That's interesting, my thought is just the opposite. I haven't used uniq all that often, but when I have it's been to remove the few duplicates from a large collection rather than to find the small number of unique elements from a duplicate-laden one. Obviously either is possible, as is anything in between.
Martin Algesten
@algesten
Sep 26 2015 21:30
Haven't used it much myself. only really with <100 elements I'd say.