I'm looking for the opposite of

`differenceWith(eqProps('id'), l1, l2);`

where if there is a match on `id`

a new list is returned with all values from l1 overwritten if there is a match in l2.
what's a functional way of caching an XHR response for a certain period of time?
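One possible sketch of that (my own, hedged: `cacheFor` is a made-up name, and it ignores arguments for the cache key as a simplification): close over the last promise and its expiry time, and only re-fire the request once the TTL has passed. The clock is injectable so the expiry is testable.

```javascript
// Hypothetical sketch (not from the thread): memoize an async request for
// `ttlMs` milliseconds. Arguments are ignored for the cache key.
const cacheFor = (ttlMs, fn, now = () => Date.now()) => {
  let cached = null; // { value: Promise, expires: number }
  return (...args) => {
    const t = now();
    if (cached === null || t >= cached.expires) {
      cached = { value: fn(...args), expires: t + ttlMs };
    }
    return cached.value;
  };
};

// e.g. const getUser = cacheFor(5000, () => fetch('/api/user').then(r => r.json()));
```

Every call within the TTL gets the same promise back; after expiry the underlying request fires again.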

`R.intersectionWith`

perhaps, @bgvianyc?
Here is how I did it using plain JS, wondering what a pointfree way would look like? https://dpaste.de/WNwu

First I’d get rid of the global in line 3, `updateAssociations`.

@davidchambers I don’t think he means intersection. He just wants to update the ones in itemizations with the ones that are in the list updateAssocations.

Intersection would in his case just result in a copy of updateAssocations

which I think is so

what he can do is: first use `differenceWith` to drop all items that are also in updateAssociations, and then append the `intersectionWith` of both lists to that

@MarkusPfundstein how would that snippet look?

naive:

```
const replaceUpdateAssocations = (updatedAssocations, itemizations) => {
  // intersectionWith/differenceWith expect a binary predicate
  const p = (a, b) => a._id === b._id;
  // the updated versions of items present in both lists
  const intersect = R.intersectionWith(p, itemizations, updatedAssocations);
  // the items that have no update
  const diff = R.differenceWith(p, itemizations, updatedAssocations);
  return R.concat(intersect, diff);
};
```

I am looking for a fork method. Then it can be done much more cleanly

```
const fork = R.curry((f, a, b, v) => f(a(v), b(v)));
const rua = (updatedAssocations) => {
  const p = (a, b) => a._id === b._id;
  return fork( // <- combines the outputs of the second and third functions by applying the first to them
    R.concat,
    R.intersectionWith(p, R.__, updatedAssocations),
    R.differenceWith(p, R.__, updatedAssocations));
};
```

point-free if you want updatedAssocations to be global.

```
const i = [
  { _id: 1, name: "markus" },
  { _id: 2, name: "thomas" }
];
const u = [
  { _id: 2, name: "michiel" }
];
console.log(rua(u)(i));
// -> [ { _id: 2, name: 'michiel' }, { _id: 1, name: 'markus' } ]
```

doesn't seem like a big win over the JS way

well, it's functional

:P

the way I did it is not point-free, but I could say it's somewhat functional

```
const fork = R.curry((f, a, b, v) => f(a(v), b(v)));
const p = (a, b) => a._id === b._id;
const updateById = us => fork(
  R.concat,
  R.intersectionWith(p, R.__, us),
  R.differenceWith(p, R.__, us));
// u = updateAssocations
const updateAssocs = updateById(u);
// i = itemizations
console.log(updateAssocs(i));
```

something like this then? :-)

yup

but I don’t like it like this. I think I’d do it like this:

```
const fork = R.curry((f, a, b, v) => f(a(v), b(v)));
const p = (a, b) => a._id === b._id;
const updateById = R.curry((us, is) => fork(
  R.concat,
  R.intersectionWith(p, R.__, us),
  R.differenceWith(p, R.__, us),
  is));
// i = itemizations
console.log(updateById(u, i));
```

And you gain a reusable function :-) With a bit of effort you can even generalise it over every key

So like this:

```
const fork = R.curry((f, a, b, v) => f(a(v), b(v)));
const updateByKey = R.curry((prop, us, is) => fork(
  R.concat,
  R.intersectionWith(R.eqBy(R.prop(prop)), R.__, us), // <- R.eqBy
  R.differenceWith(R.eqBy(R.prop(prop)), R.__, us),
  is));
const updateById = updateByKey('_id');
const i = [
  { _id: 1, name: "markus" },
  { _id: 2, name: "thomas" },
  { _id: 3, name: "b" }
];
const u = [
  { _id: 2, name: "x" },
  { _id: 1, name: "michiel" }
];
console.log(updateById(u, i));
// [ { _id: 2, name: 'x' },
//   { _id: 1, name: 'michiel' },
//   { _id: 3, name: 'b' } ]
```

that fork is nifty
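A side note, not from the thread: that `fork` is essentially Ramda's `R.converge` restricted to two branch functions. A dependency-free sketch of the idea:

```javascript
// fork(f, a, b)(v) computes f(a(v), b(v)) -- the two-branch case of converge.
const fork = (f, a, b) => v => f(a(v), b(v));

// converge generalised to any number of branch functions:
// converge(f, [g1, g2, ...])(v) === f(g1(v), g2(v), ...)
const converge = (f, fns) => v => f(...fns.map(g => g(v)));

const sum = (x, y) => x + y;
const double = x => x * 2;
const square = x => x * x;

// both compute sum(double(3), square(3)) = 6 + 9 = 15
```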

check the update

now it's generalized for every key

i see that

that was a fun exercise, thx :P

@MarkusPfundstein I wonder if your implementation would be more performant than the vanilla JS version?

phew. Let's say n = i.length and m = u.length. In the worst case u.length = i.length, so n = m. Both are arrays, so I'd assume intersection and difference are both O(nm) = O(n^2). That gives O(n) (concat) + 2·O(n^2) = O(n^2) …

Yours is O(n^2) too. So for small n yours should be faster, but in the limit it doesn't matter

This is guesswork, of course; if we look at the implementation we can do a better complexity analysis

The ramda version has in any case more function calls. So that could be a limiting factor as well..

How big is your n at most?

just checked differenceWith. It loops through the first array (i in our case), so that's n, and calls Array.prototype.indexOf on each iteration, which is O(n). So yeah, my analysis above holds
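For comparison, a dependency-free sketch (my own, not something Ramda provides) that indexes the updates by key in a `Map`, making the whole update O(n + m) instead of O(n·m):

```javascript
// Build a key -> update lookup once (O(m)), then walk the originals once (O(n)).
const updateByKeyFast = (key, updates, items) => {
  const byKey = new Map(updates.map(u => [u[key], u]));
  return items.map(item => byKey.get(item[key]) || item);
};
```

Unlike the intersection/difference version, this also preserves the original order of `items`.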

@xgrommx streamjs :O looks awesome

so some streamjs methods are hoisted out of sequence? `.filter().map().findFirst()` becomes `.filter().findFirst().map()`?

based on this: https://github.com/winterbe/streamjs#why-streamjs

no

it means that first he passes the first element of the stream down through the whole chain

then the second, and so on, until it's done. You essentially reduce m passes of O(n) to a single O(n) pass, where m is the number of map/filter/reduce functions in your pipe
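That per-element flow can be sketched with plain generators (my own illustration of the idea, not streamjs internals):

```javascript
// Each element flows through the whole chain before the next one starts,
// so a short-circuiting terminal like findFirst stops upstream work early.
function* map(f, xs)    { for (const x of xs) yield f(x); }
function* filter(p, xs) { for (const x of xs) if (p(x)) yield x; }
const findFirst = xs => { for (const x of xs) return x; };

let worked = 0; // counts how many source elements were actually produced
const src = function* () {
  for (let n = 1; n <= 1e6; n += 1) { worked += 1; yield n; }
};

const first = findFirst(map(n => n * 10, filter(n => n % 2 === 0, src())));
// first === 20, and only 2 of the million source elements were ever produced
```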