Ramda's `lens` is a bit involved: https://github.com/ramda/ramda/blob/master/src/lens.js#L29-L38
What's the extra complexity for, compared to something like:
const lens = (get, set) => ({get, set})
const view = (lens, data) => lens.get(data)
const set = (lens, v, data) => lens.set(v, data)
const over = (lens, fn, data) => lens.set(fn(lens.get(data)), data)
If you have a lens that points to a property `a` and you have a lens that points to a property `b`, you can compose them: `compose(lensA, lensB)`, which is what `lensPath(['a','b'])` gives you.
Ramda defines `set` in terms of `over`, but `view` could be a lot less indirect: it wraps the focused value as `{ value: theValue }` in that process.
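For the curious, the extra machinery is the functor-based encoding, and it's what buys `compose(lensA, lensB)` for free. A rough sketch of the idea (not Ramda's exact source):
// a lens maps a "focus -> functor" fn to a "whole -> functor" fn
const lens = (getter, setter) => toFunctorFn => target =>
  toFunctorFn(getter(target)).map(focus => setter(focus, target))
// Const ignores map, so running the lens with it just reads:
const Const = value => ({ value, map() { return this } })
const view = (l, data) => l(Const)(data).value
// Identity applies map, so running the lens with it writes:
const Identity = value => ({ value, map: f => Identity(f(value)) })
const over = (l, fn, data) => l(x => Identity(fn(x)))(data).value
const set = (l, v, data) => over(l, () => v, data)
// and since lenses are plain functions, compose(lensA, lensB) focuses on a.b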
type Iso s a = forall p. (Profunctor p) => p a a -> p s s
type Iso' s a = (a -> a) -> (s -> s)
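A minimal JS reading of that second line (hypothetical example, no library assumed): an iso turns a function on the focus into a function on the whole.
const iso = (to, from) => f => s => from(f(to(s)))
const celsius = iso(f => (f - 32) / 1.8, c => c * 1.8 + 32)
celsius(c => c + 10)(32) // => 50 (32°F is 0°C, add 10, back to 50°F)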
`hens` was a tickling starting place.
The `fromPromise` method was the way to go...
Hey @m59peacemaker I'll give it a closer look soon, but ftr you can take advantage of block scoping
It has `map` and `chain`, and it's also made by the same person who wrote most.js, so it's probably pretty fast too.
What's the reason for `isLens`, out of interest?
That's what I want `fromPromise` to do. But then I can't seem to just get out the resolved value (which I want to shove into an Either for handling whether or not the fields I need exist on the json...)
You can `bimap` the `Future`, so something like `Future.bimap( Left, Right )` will give you a `Future (Either a)`.
`.bimap(e => e, s => s.users_url)`, but accessing the users_url may fail as it may not exist (hence wanting to shove it into an Either).
Is there a way to use `R.minBy` on an array of objects but considering two properties on the object? That is, "choose the object from the array that has the smaller X, and the smaller Y" (in that order).
var sortByKeys =
pipe(
map( pipe(prop, ascend) )
,sortWith
)
sortByKeys(['x','y'])(xs)
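e.g. with made-up data (assuming Ramda's `pipe`/`map`/`prop`/`ascend`/`sortWith` in scope):
const xs = [ { x: 2, y: 1 }, { x: 1, y: 3 }, { x: 1, y: 2 } ]
sortByKeys(['x','y'])(xs)
// => [ { x: 1, y: 2 }, { x: 1, y: 3 }, { x: 2, y: 1 } ]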
Or folded into a single two-arg comparator:
pipe(
map( pipe(prop, ascend) )
// combine the comparators: first non-zero result wins
,(fs) => (a, b) => fs.reduce((acc, cmp) => acc || cmp(a, b), 0)
)
Yeah, I know about `(() => { ... })()`, and I didn't feel like it.
`const` should have just done one thing: made a variable immutable. But they threw in block scoping, which `var` doesn't do, so there's now this inconsistency, where previously it was just "no block scoping, use functions". It also didn't really get immutability right, so...
And "no block scoping: use functions" isn't a bad thing at all, it's just unusual. But JS is unusual, and if we just accepted it for what it is we could avoid these additions that have complex exceptional behaviour.
Same with `this`: again, exceptional behaviour that is of zero use in FP. Everything they add, it's like they get the thing it's meant to do wrong, and then they add some special addendum. Reading the spec is like "it's like ... kind of sort of, except when ... oh and watch out for".
They design the language for suboptimal use cases, which only validates those suboptimal paths as optimal paths.
So I wish they'd just stop adding stuff. And if they are going to add stuff, be consistent: respect the existing language behaviour and extend it instead of pretending it's a different language.
I mean, WebAssembly is here, there's really no reason to pretend JS isn't JS anymore.
const x = {}; x.newProp = 'huehuehue' // no error: const freezes the binding, not the value
also pet peeve with block scoping:
try {
const a = ...
const b = ...
const c = ...
throw ...
} catch (e) {
// I want to access a,b,c (e.g. for logging) here but I can't
}
The above seems easy to fix, but the solution is to just localize try catches as much as possible, adding more and more layers of nesting.
This problem doesn't really affect me because I don't use try catch, but it still annoys me in principle.
But I imagine it would come up a lot when people write imperative code with async await.
const mapNth = (n, fn, data) => over(lensIndex(n), fn, data)
That's just `over`. `mapLens` doesn't make sense :)
`@ramda/reduce`, `@ramda/map`, etc.
On the `over` topic, I think having `map` in the name makes it more understandable.
Ramda went with `over`, so that settles it imo :)
You both might have different experience to me, but so far I've found trying to write utils that hide the lens from the caller ends up being less powerful/useful.
I mean, there's exceptions, but instead of `mapNth` I'd just make writing `lensIndex` less verbose, like `L.at`, and keep using `over`.
How would you make `filterByState` point free, optionally by using lenses?
const list = {
tasks: [
{ id: 2, state: 'done' },
{ id: 3, state: 'todo' },
{ id: 4, state: 'done' },
],
};
const filterByState = state => R.pipe(
R.prop('tasks'),
R.defaultTo([]),
R.filter(R.propEq('state', state)),
);
const todos = filterByState('todo');
const result = todos(list);
const mapNth = (n, fn, data) => over(lensIndex(n), fn, data)
vs `over( L.at(0), fn, data )`. Not so bad inline.
const mapNth = n => over(L.at(n))
pipe( toPairs, map(over(lensIndex(1), inc)), fromPairs )({ foo: 1 }) // { foo: 2 }
map(overNth(1, inc))
`over` is elegant there.
pipe(map(inc), filter(isEven))
// actually using transducers but looks the same as usual
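For reference, Ramda runs that shape as a transducer via `into`; its docs use `compose`, where the transformations apply first to last (`isEven` is a made-up helper here):
const isEven = n => n % 2 === 0
into([], compose(map(inc), filter(isEven)), [1, 2, 3, 4]) // => [2, 4]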
Another pet peeve: the `for` loop wrt `let`. Not that I'd use `for` over `map`, but still. `for` used to be sugar for:
for (var i = 0; i < 4; i++) {...};
// same as
var i = 0;
while (i < 4) {...; i++;}
but w/ `let`, `i` is weirdly scoped to the block
I could see people mutating inside `over`, thinking that it is like normal `map`, where that's ok because you're creating a new thing.
I work with `[ k, v ]` pairs all the time, so I have `overLast` just for that purpose, and `overFirst` to map the keys.
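Minimal sketches of those helpers (hypothetical names, assuming Ramda in scope):
const overFirst = fn => over(lensIndex(0), fn)
const overLast = fn => over(lensIndex(1), fn)
overLast(inc)([ 'count', 1 ]) // => [ 'count', 2 ]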
for (let i = 0; i < 4; i++) {...};
// same as
{
let i = 0;
while (i < 4) {...; i++;}
}
// (plus let creates a fresh binding of i per iteration for closures)
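That per-iteration binding shows up with closures:
const fs = []
for (let i = 0; i < 3; i++) { fs.push(() => i) }
fs.map(f => f()) // => [0, 1, 2] (with var: [3, 3, 3])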
I was working on a UI abstraction over lenses, and it quickly became apparent that hiding lenses from the user would make things more complex than just getting them comfortable with lenses.
So instead of hiding them, I changed the api around, basically binding lenses to a particular state stream, which you'd think would defeat the point of lenses, but on reflection, reading/writing from arbitrary state is less useful than composing lenses (for me at least)
`converge` or `useWith` have been the ticket to making it point free.
`first`, `last`, `head` and `tail`.
useWith(
filter, [propEq('state'), propOr([], 'tasks')]
)
`([ k , v ]) => [ k, inc(v) ]` vs `overNth(1, inc)`
// how does first, last, head, or tail help?
@smeijer the first argument will go to the first function of the list, so `'todo'` would go into `propEq('state', 'todo')`.
And the list argument will go to the second function: `propOr([], 'tasks', list)`.
Then when they're expanded, they both get applied to `filter`. So we end up with something like `filter( propEq('state', 'todo'), propOr([], 'tasks', list) )`.
`useWith` is helpful when you have separate transform paths for separate args; `converge` is useful when you have multiple transform paths for the same arg.
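A quick side-by-side (assuming Ramda in scope; `average` is a made-up example):
// useWith: one transform per argument
useWith(filter, [ propEq('state'), propOr([], 'tasks') ])('todo', list)
// converge: several transforms of the same argument
const average = converge(divide, [ sum, length ])
average([1, 2, 3]) // => 2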
`prop` or `pluck` etc.
If they see `function`, you know they will put `this` in it.
Which `this` is this `this`?
With `this` and first-class functions you don't need a module system implementation. I know it's heresy. But you do need a specification.
I'll take an `import`-based module system over the anarchy of the past.
`babel-preset-env` for the super winniest win everest.
Some people symlink their source dir into `node_modules/app` to accomplish something like that.
Then one day you `rm -r node_modules` and... WTF DANGIT `git reset --hard`. That bit me before `yarn` came around. These days you'd recreate the link in a `postinstall` hook :(
It'd be nice if `require('~/path/like/this')` would work.
require('project-relative-require').setRoot(__dirname)
const things = require('~/things/relatively')
Maybe use `cwd()` as a default if no path is given?
`path.resolve( require.resolve('yourpackage'), '..', '..' )` depends on where `main` is, but that's in your control.
Or `require.resolve('yourpackage/package.json')`.
Or use `process.env` instead.
// walk up the module.parent chain to the entry module
var m = module
while( m.parent ){
m = m.parent
}
var root = m.id
What about `process.env.ROOT_DIR`? Run the script with `ROOT_DIR=/my/root node ./main.js`
var p = process.env.ROOT_DIR || path.resolve( require.resolve('yourpackage'), '..', '..' )
require.main.id
const fs = require('fs')
const path = require('path')
var p = __dirname
var root
while( !root ){
  if( fs.readdirSync(p).find( s => s.includes('package.json') ) ){
    root = p
  } else {
    p = path.resolve( p, '..' )
  }
}
const fs = require('fs')
const path = require('path')
var p = __dirname
var root = process.env.ROOT_DIR
while( !root ){
  if( fs.readdirSync(p).find( s => s.includes('package.json') ) ){
    root = p
  } else {
    p = path.resolve( p, '..' )
  }
}
ROOT_DIR=/usr/local/myproject node ./testfile.js
for await (let v of p);
Like `do` in loop form?
async function* OMG() { yield await response() } // response: some promise-returning fn
Still a `for` loop though.
Is there a `forEach` for async iterators?
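Not that I know of, but a minimal sketch is short (hypothetical helper):
const forEachAsync = async (fn, iterable) => {
  for await (const v of iterable) fn(v)
}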
I have two objects `a` and `b`, and I want to test whether `a[ k ] === b[ k ]`. I'm drawing a blank.
pipe( pluck(k), apply(equals) )([ a, b ])
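Ramda also ships that check as `eqProps` (note it uses `R.equals` rather than `===`):
eqProps('id', a, b) // true when a.id equals b.id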
@Ramblurr I think it's a great idea, but my experience to date (with TypeScript) is that it's essentially useless for point-free style. It makes composition painful. (I really, really wanted it to work, and I tried so many times :( )
But you could have a type system that worked really well for FP JS, I just don't think the likes of Facebook or Microsoft would jump on that project.
While trying to make it work I realised how much I value composition above static type guarantees. And that the bugs typescript catches aren't bugs you'd encounter if you wrote FP style code anyway.
I'd really like to be proven wrong here though!
/* globals localStorage */
//eslint-disable-next-line no-undef
const checkTypes = !!process.env.CHECK_TYPES
const $ = require('sanctuary-def')
const Type = require('sum-type')($, { checkTypes, env: $.env })
const Stream = require('flyd')
const remember = Stream.stream(true)
const $AuthPermission = require('../../../types/auth_permissions')
const $Auth = Type.Named('Auth', {
LoggedOut: {}
,LoggedIn:
{ auth_token: String
, user_id: String
, auth_permissions: $.Array($AuthPermission)
}
})
const initial = JSON.parse(localStorage.getItem('auth') || 'null')
const auth = Stream.stream(
initial
? $Auth.LoggedInOf(initial)
: $Auth.LoggedOutOf({})
)
// persist auth on change; $Auth.case also throws if an invalid auth object is created
auth.map(
$Auth.case({
LoggedOut: () => localStorage.setItem('auth', 'null')
,LoggedIn: () =>
remember()
&& localStorage.setItem('auth', JSON.stringify(auth()) )
})
)
export default {
stream: auth
, type: $Auth
, remember
}
@kurtmilam Anything you have to share (thoughts, intuitions, etc) would be appreciated.
I’ve done some experimenting with State and streams in isolation…not really lenses though
I found `sum-type` the other day. Really cool. Any reason you're not using `sanctuary-def` as an internal dependency?
I have `over`, `set` and `get`, but they take streams rather than objects, and operate on the objects contained in the streams.
The streams are `flyd.stream` containers and the optics come from `partial.lenses`.
const s = flyd.stream( {} )
// later on
s() // to get the value in the stream
s( {a:1} ) // to set the value in the stream
I also have `slice` or `lensed` streams that focus on a specific part of the state object in the state stream container.
const over = stream => optic => fn =>
R.compose( R.tap( stream ) // push the updated state back into the stream
, Object.freeze
, L.modify( optic, fn )
)( stream() ) // read the current state out of the stream
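Usage looks like (assuming `flyd` and `partial.lenses`' `L` as above):
const state = flyd.stream( Object.freeze({ count: 0 }) )
over(state)('count')(R.inc)
state() // => { count: 1 }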
@kurtmilam yeah…I mean the Monad A Day - State example is the most “complete” example that I’ve found.
Second I guess would be the ramda-fantasy state example.
But they are really just handling one process…and maybe that is enough to start with.
The idea is to handle the effects at the edge (in `main` or IO) before the values go to the view (which can ONLY be managed via side effects).
I tend to use `evolve`, where the transformations could become rather complex, to keep some sense of immutability...
adjust(add(10), 1, data)
// vs
over(lensIndex(1), add(10), data)
With `evolve` and `assocPath`, now I can't see a use for lenses :/
Is there a way to delay evaluation of the `default` expression in `pathOr`? For example, in my redux thunk action creator:
const setEvent: ThunkActionCreator = (eventId: EventId): Thunk =>
async (dispatch: Dispatch, getState: GetState): AsyncVoid => {
const event = R.pathOr(await getEvent(eventId), ['events', 'map', eventId], getState());
console.log(event);
};
I want to see if the event exists in `state.events.map`, and if not, I'll call my api method to fetch it. It works, but `getEvent` is called even if the event exists in the map. I want it to be lazily evaluated. Can I make that happen?
Wrap `await getEvent(eventId)` in a thunk and check to see if `event` is a function. If so, call it:
const setEvent: ThunkActionCreator = (eventId: EventId): Thunk =>
async (dispatch: Dispatch, getState: GetState): AsyncVoid => {
const event = R.pathOr(async () => await getEvent(eventId), ['events', 'map', eventId], getState());
console.log(typeof event === 'function' ? event() : event);
};
I just started using `async` functions and didn't even know they had `async` arrow functions yet.
@gabejohnson Thanks, that works. I would just have to change `event()` to `await event()`, but I think I'm just going to stick with:
const event = R.path(['events', 'map', eventId], getState()) || await getEvent(eventId);
It's a little cleaner.
@JAForbes isn't pipeK + yield/value (a way to fold) essentially a for expression?
it's not at all. nevermind
@gabejohnson thanks, yeah it used to be a direct dependency, but sanctuary-def changes versions so rapidly that I couldn't keep up. I decided I'd set the version as a peer dep that I know works, but if I'm a few versions behind someone can live dangerously and inject their own. Once the APIs for both libraries settle I'll go back to a direct dependency most likely.
There's going to be a lot of breaking changes coming up in sum-type, eventually, one thing I'd really like to fix is the initialization experience for users that have no idea what sanctuary-def is, and don't want to pass in env etc. But I think that will depend on upstream discussions, which would probably make it a sum-type 2.0 change.
The breaking changes I'm driving towards right now are really just removing stuff: moving in a more static direction, removing prototype support, making `case`'s behaviour more powerful but also predictable, uncurried constructors (the most common footgun I run into), supporting object literal style only, easier serialization for sending types over the wire. A lot of small changes that should make for a simpler library.
@Ramblurr I don't sorry :(
I've seen lots of back-end library code in that style, but large systems code, and especially front-end code, in that style is exceedingly rare in open source.
@miwillhite I've ranted a lot in the past about why I think Rx's whole "subjects are bad" stance is harmful.
Clearly subjects aren't bad per se, because they use them internally for things like `fromEvent` etc.
I think the reason they say pushing values into streams is bad, is because they are worried people will do it all the time instead of writing reactive code. And for their demographic (OO programmers) that is perfectly understandable.
But if you have a source of data (e.g. a virtual dom node) and you want a source stream for that event, then naturally there's nothing wrong with `{ onclick: clicks }`. Particularly when event listeners are already destroyed on element removal, which in the context of virtual dom isn't something you need to think about. And old streams will be gc'd automatically. In fact `{ onclick: clicks }` is a lot simpler and more declarative than obtaining the DOM node via a hook to pass to Rx so it can set up event listeners on its own, separate from the framework. That's pretty ridiculous in my opinion, particularly in frameworks like mithril, where event binding and redraw logic are handled by the framework automatically.
I do think we should as much as possible define streams in terms of mapping over a source stream, and we shouldn't push values into dependent streams (except when we know what we're doing in some rare, but powerful cases)
The reason it's not simply misguided but harmful, in my opinion, to advocate "subjects are bad": if you aren't allowed to compose subjects directly, you end up requiring a massive list of predefined operators (e.g. Rx's standard lib) to do anything useful. And even worse, you need to name them, and store those names in your head! And worse still, a lot of the names are proprietary and specific to Rx, or they reuse names from FP but incorrectly.
Flyd can do everything Rx can do. A lot of operators in Rx can be reproduced effortlessly in flyd simply because subjects are ok. Really, with flyd all you ever need is `map`. The ability to push values into a stream, and retrieve the most recent value, is invaluable for client side development. If you can't do that, you end up jumping through so many hoops on the quest of avoiding subjects that you have to invent entirely new UI programming paradigms. But flyd is also fantasy-land compatible, so it works with ramda/sanctuary out of the box.
Most is a great library, but I think streams shouldn't be monadic, they should just be applicative functors. We shouldn't be using observables for everything; it's like using divs for everything. There's no reason to have a nested stream in my experience. And there's no reason for a stream to have any notion of errors; we've got Either, Maybe, Future etc. for that already.
There's so much wrong with the way we talk about streams/observables in my opinion. E.g. the constant comparison to promises, when they are useful for completely different things. Or thinking of streams as "observable", which is a very imperative framing. Instead of thinking about "values that change over time" we should be thinking about "permanent relationships that never change". And focusing on time at all defeats the point of abstracting over it. I wish we called them Facts instead of Streams or Observables. But I think Streams is arguably closer to the truth than Observables.
Think of a line plot. Rx thinks of streams as a series of dots on the graph. We should think of streams as a line or curve on that plot where the individual dots are not of any interest, but the relationship, or the equation is.
So `y = x ^ 2`, not `[{ x: 0, y: 0 }, { x: 2, y: 4 }, { x: 3, y: 9 }, ...]`.
The fact there are discrete points on a line isn't important at all.
Anyway that's my rant on Rx/observables :D
adjust(add(10), 1, data)
// instead of
over(lensIndex(1), add(10), data)
[{name: "Brian", address: {street: {no: 32, name: "George", suffix: "St"}}}]
Can `adjust` do the nested stuff as well?
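For nesting, a path lens handles it (assuming Ramda in scope):
const data = [ { name: 'Brian', address: { street: { no: 32, name: 'George', suffix: 'St' } } } ]
over(lensPath([0, 'address', 'street', 'no']), inc, data)
// => street.no becomes 33, everything else untouched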
Does `groupBy` make any sense as a transducer?
return (acc, value) => {
const key = getKey(value)
const group = getGroup(key, acc)
const newGroup = putIntoGroup(value, group)
return nextStep(acc, [ key, newGroup ])
}
Could you derive an array-of-objects-to-object-with-key groupBy, or an array-of-pairs groupBy, out of that? i.e. like `R.groupBy`.
const getKey = v => v.name
const getGroup = (key, coll) => coll[key] || []
const putIntoGroup = (value, group) => {
group.push(value) // not really supposed to mutate, but it's ok here
return group
}
const groupBy = getKey => nextStep => {
return (acc, value) => {
const key = getKey(value)
const group = getGroup(key, acc)
const newGroup = putIntoGroup(value, group)
return nextStep(acc, [ key, newGroup ])
}
}
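To run it, plug in a step function and reduce (object accumulator, made-up data):
const groupByName = groupBy(v => v.name)( (acc, [ k, g ]) => ({ ...acc, [k]: g }) )
const people = [ { name: 'a' }, { name: 'a' }, { name: 'b' } ]
people.reduce(groupByName, {})
// => { a: [ { name: 'a' }, { name: 'a' } ], b: [ { name: 'b' } ] }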
You could swap out `getGroup` / `putIntoGroup` and derive a `groupBy` for something other than array-to-object.
pipe(groupBy(prop('name')), filter(v => v[1].length > 2))