Hello guys, long time no see. I haven't touched gun for a long time.
But now that I'm back, I notice something different. When you open the page from a brand new browser (e.g. incognito), it doesn't retrieve the data that's already saved on the server.
Is this the new design?
@amark did you check again on bun? 4 new versions available since our first try. Notice how much websocket (messages sent per second) and server-side rendering (HTTP requests per second) have accelerated. Imagine this for Gun. 😍
https://gun.eco/docs/Panic
npm link did not work back then.
@ahg:it-dengler.de
From an article: Use IP whitelisting to limit the IP addresses from which your APIs can be called. This will ensure that your APIs are not abused and that your application is not overloaded.
Me: This way only data from your app is allowed to be written and read. (I got this idea from one of my older apps with Google login, which is locked in the Google app settings to accept requests from my domain only.)
Whitelisting is the easiest way to start. It's just an extension of the Gun relay code: if the data comes from the whitelisted domain, transfer it; if not, 🔥 (see the sketch below).
Plus, domain/edge-storage wise => strict HTTPS, TLS (my Cloudflare Pages gets DDoS protection and more on top).
This way your app becomes the only gateway for data transfers between your app and the relay (which you can control with your app's logic, security measures and rate limiting). With the right balanced strategy and timing, you will be able to control and limit your users' actions, but not the users themselves. (For instance, in my project D-Couchsurfing, I limit the offering of a couch to one per day per user. Logic? Yes! Rocket science? Not at all...)
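A minimal sketch of that whitelist idea, assuming the usual Node/express relay and checking the websocket Origin header yourself (ALLOWED_ORIGINS and the placement of the check are my own illustration, not an existing Gun option):

```js
// hypothetical origin whitelist in front of a Gun relay (Node.js + express)
const express = require('express');
const Gun = require('gun');

const ALLOWED_ORIGINS = ['https://my-app.example']; // your app's domain(s)

const app = express();
app.use(Gun.serve);
const server = app.listen(8765);

// reject websocket upgrades that don't come from a whitelisted origin
// (this runs alongside Gun's own upgrade handler, so treat it as a sketch)
server.on('upgrade', (req, socket) => {
  const origin = req.headers.origin || '';
  if (!ALLOWED_ORIGINS.includes(origin)) {
    socket.destroy(); // not from your domain => 🔥
  }
});

Gun({ web: server });
```

Keep in mind the Origin header is only meaningful for browser traffic; a script can set it to anything, which is exactly the circumvention concern raised later in this thread, so a reverse proxy or signed tokens are the more robust place for this check.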
https://blog.logrocket.com/rate-limiting-node-js/ is a good first explainer of rate limiting algorithms. It's pretty much like I said in the thread: you have to measure the number of actions in a specified timeframe, and then act (event handler).
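As a bare-bones illustration of that "measure actions per timeframe, then act" idea, here is a fixed-window counter sketch (the window size, limit and key are placeholders):

```js
// minimal fixed-window rate limiter sketch
const WINDOW_MS = 60 * 1000; // timeframe to measure in
const MAX_ACTIONS = 30;      // allowed actions per window

const counters = new Map();  // key (user / IP / pub key) -> { count, windowStart }

function allowAction(key) {
  const now = Date.now();
  let c = counters.get(key);
  if (!c || now - c.windowStart >= WINDOW_MS) {
    c = { count: 0, windowStart: now };
  }
  c.count += 1;
  counters.set(key, c);
  return c.count <= MAX_ACTIONS;
}

// usage: call this in whatever event handler receives the action
// if (!allowAction(userPubKey)) { /* drop, delay or flag the action */ }
```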
I know relays should / could be open...
I feel you, but this hybrid solution is necessary to find the way to full decentralization. But I could imagine different organisations/devs sharing/joining a relay if they meet certain criteria in how they verify their users are not bots, how they moderate them, and how evolved their rate limiting is. Like a Gun relay licence with minimum criteria to join a relay (without dictating too much).
If the criteria are met, the domain of the new project would get added to the whitelist script in the public gun relays. Kinda like this...
Regarding rate limiting & spam removal...
I would like to make each node continuously look at the data it gets from .get. Should that data exceed a certain (unrealistic) amount, the node should look at the emitter node of the excessive amounts of data (the data has to be signed with a private key) and then take several actions.
That way, if all nodes follow this procedure, all network paths to malicious nodes would eventually be terminated and the malicious nodes' data would be removed. Any thoughts regarding this procedure?
Of course, to be able to do this, I would need the following: the ability to act on data while .get is still running and returning massive amounts of data - I would like to do continuous tests & removals on incoming data instead of waiting for .get to complete first. Does anybody have some ideas / thoughts / doubts about this mechanism? Can gun match my requirements?
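A purely hypothetical sketch of that per-emitter accounting, assuming the signer's public key can be read from each incoming record (trackEmitter, extractSignerPub and DATA_LIMIT are made-up names for illustration):

```js
// hypothetical per-emitter volume accounting; the "several actions" are left as a stub
const DATA_LIMIT = 1024 * 1024; // bytes allowed per emitter per window (the "unrealistic amount")
const WINDOW_MS = 60 * 1000;    // measurement window

const usage = new Map();        // emitter pub key -> { bytes, windowStart }

function blockEmitter(pubKey) {
  // placeholder for the actions: drop the connection, unlink the data, warn peers, ...
}

function trackEmitter(pubKey, record) {
  const now = Date.now();
  const size = JSON.stringify(record).length;
  let entry = usage.get(pubKey);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    entry = { bytes: 0, windowStart: now };
  }
  entry.bytes += size;
  usage.set(pubKey, entry);
  if (entry.bytes > DATA_LIMIT) blockEmitter(pubKey);
}

// e.g. inside a gun listener, while .get is still streaming data in:
// gun.get('some-set').map().on(function (data, key) {
//   trackEmitter(extractSignerPub(data), data); // extractSignerPub is hypothetical
// });
```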
I think a key question here would also be how to associate the original emitter peer's ID with the data they .put. I thought of signatures, but that would mean there has to be a piece of information provably linked to the peer's ID (similar to public / private keys).
How are the peer IDs derived? Are they similar to a public key, where each peer also possesses a private key known only to itself?
...39? go to /gun/package.json to check.
(...39 version caused a bad bug in AXE that cut off multi-property sync :( in prev versions.)
map() - do you mean new as in changed, or new as in not previously in the set? I think map().once() mostly gives you what you want, however right, it doesn't do "deletes" ... I assume you mean that as a node that is unlinked from the set, regardless of whether it is replaced with another, nulled out, etc. (?) There's special logic in the chain that catches that and sends it to map().on(), so I think it'd be hard to ignore in-between edits via an API combo versus just having an if statement at the top of your listener that only checks for new vs unlinked. Write a unit test that saves 5 new nodes to the table over 5 seconds, then 2 seconds later edits a node, then 2 seconds later removes a node, and check what the callbacks print. Especially the .on(function(data, key, message, event){ ... }) parameters give extra info.
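A rough sketch of that unit test with mocha, assuming a local gun instance and a generous timeout (the 'test-table' key and timings are placeholders):

```js
// rough sketch of the suggested add / edit / unlink timing test (not an official gun test)
const Gun = require('gun');

describe('map().on() add / edit / unlink behaviour', function () {
  this.timeout(15000); // the scripted actions take ~9 seconds of wall-clock time

  it('prints what the callbacks see', function (done) {
    const gun = Gun();                    // local peer for the test
    const table = gun.get('test-table');
    let firstKey;                         // key of the first item, captured from the callback

    table.map().on(function (data, key, message, event) {
      console.log('callback:', key, data); // adds, edits and nulls all show up here
      if (!firstKey) firstKey = key;
    });

    // save 5 new nodes to the table over 5 seconds
    for (let i = 0; i < 5; i++) {
      setTimeout(() => table.set({ name: 'node-' + i }), i * 1000);
    }
    // 2 seconds later edit a node, then 2 seconds later unlink it (as in the snippet below)
    setTimeout(() => table.get(firstKey).put({ name: 'edited' }), 7000);
    setTimeout(() => { table.get(firstKey).put(null); done(); }, 9000);
  });
});
```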
gun.get('list').get(IDofElementToRemove).put(null)
(npm install gun mocha && cd node_modules/gun && mocha test/panic/chat.js & follow the onscreen prompts to see how fast it is on your machine!)
var next = gun.get('next').put(data); gun.get('index').get('hadOldThing').put(next);
npm start
one in a machine it should work, other than certs, which you can pass as ENV params when you get them. I haven't seen that IndexedDB error before, was it easy to replicate? @tayelno. @nikhiljha I think the gunjs peer got nuked with the Salesforce Heroku free-tier wipe.
@amark:
AXE already throttles updates to the same key at 2X 60fps. I have plans for other throttling rules but have been focused on improving perf until I can hit 100M/mo/users first.
Could you make that value configurable inside Gun apps? Like a value we could decide ourselves: throttleRelayToRelay = 2x60fps, throttlePeerToRelay = (games: 2x60fps, strings: 1fps, media: 1-30fps, individual setting for something: 0.1-10fps)
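None of these option names exist in Gun today, but as a purely hypothetical sketch, a per-type throttle in app code could look like this:

```js
// hypothetical per-type throttle in app code (not a real Gun option)
const MAX_RATE_HZ = { game: 120, string: 1, media: 30 }; // updates per second per data type

const lastSent = new Map(); // type -> timestamp of last forwarded update

function shouldForward(type) {
  const minInterval = 1000 / (MAX_RATE_HZ[type] || 1);
  const now = Date.now();
  if (now - (lastSent.get(type) || 0) < minInterval) return false;
  lastSent.set(type, now);
  return true;
}

// usage: gate your own .put() calls
// if (shouldForward('string')) { gun.get('chat').get(id).put(msg); }
```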
want to try running gun on bun?
Yes, of course. I will try later. Thought you'd try in parallel with me on your machine ;) npm, link, run
incorrect. GUN only syncs what you query.
Is this different from what I said? "gun stupidly syncs everything you .put(), your app code is the filter" 🤔
@amark:
if anyone does build app-specific rate limiting, please sell it as an enterprise service and then donate money back to me 😛
Is this a coveted thing? 😅 I think I will integrate rate-limiting into my DAuth repo. It fits well with DAuth's authentication and user session management.
as.js code, but I'm not sure how it handles loops / lists of items. # is replaced with map(), but how do you fill all the key => value pairs related to one item / node? I tried to get the "parent soul" to know which list entry the key/value pair is related to, but failed to get the parent soul.
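I'm not sure how as.js expects it, but in plain gun the second callback argument of map() (and the node's `_` metadata / Gun.node.soul, if I remember the helper correctly) tells you which list entry a key => value pair belongs to; a rough sketch:

```js
// sketch: iterating key => value pairs per list entry with plain gun
const Gun = require('gun');
const gun = Gun();

gun.get('list').map().once(function (node, key) {
  // 'key' is the entry's key under 'list' (its soul when items were added via .set())
  const soul = (node && node._ && node._['#']) || key; // Gun.node.soul(node) should give the same
  Object.keys(node || {}).forEach(function (prop) {
    if (prop === '_') return;                          // skip gun's metadata
    console.log(soul, '->', prop, '=', node[prop]);
  });
});
```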
@Lexi:matrix.org the thread has problems opening...
It would be pretty easy to do on the "client" through your app code but then that can be easily circumvented by sending requests manually without the restrictions of your app's code.
Actually, I would implement the schema client-side in my app, and whitelist my webapp's domain relay-side.
Can this be circumvented by sending requests manually in the way you mentioned? If yes, how? The F12 console, or something like this https://chrome.google.com/webstore/detail/console-injector/abdfbnapkafgcheofcijaieahcbjnpkd maybe?
@benpreiss:matrix.org
Even though @Lexi:matrix.org is right about the user-space strategy, relays having IDs would come in handy for many strategies in general.
Relays could have a one-time execution on the first install of the relay on a server, desktop, etc.
// one-time execution on the relay at first install (nanoid or uuid libs have good anti-collision; the hash strengthens this even further)
let id = randomGenerator();              // e.g. nanoid() or crypto.randomUUID()
const fixedIdFileToStorage = hash(id);   // persist so the relay keeps the same ID
This way every relay gets an ID from the start.
PS: nanoid (https://github.com/ai/nanoid) uses unpredictable hardware randomness, crypto instead of math.random(), and is only 130 bytes.
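A runnable variant of that snippet using only Node built-ins instead of nanoid (the file path is a placeholder):

```js
// one-time relay ID generation sketch using Node built-ins
const { createHash, randomUUID } = require('crypto');
const fs = require('fs');

const RELAY_ID_FILE = './relay-id.txt'; // placeholder path

function getRelayId() {
  if (fs.existsSync(RELAY_ID_FILE)) {
    return fs.readFileSync(RELAY_ID_FILE, 'utf8'); // already installed: keep the fixed ID
  }
  const id = randomUUID();                         // unpredictable randomness
  const relayId = createHash('sha256').update(id).digest('hex');
  fs.writeFileSync(RELAY_ID_FILE, relayId);        // persist so the relay keeps the same ID
  return relayId;
}

console.log('relay ID:', getRelayId());
```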
@amark
I know people want those features, but I'm trying to encourage people in the opposite direction: combine together into a large DHT so we all can reuse each other's spare bandwidth/compute.
We are both still on the same train, but I feel the need for an intermediate hybrid solution.
This "plateau" will surely allow me to see the way to full decentralization (spinning up gun relays that belong to humanity only, but with auth/security/spam fully covered).
nice :) salt explainer, did you see the Cartoon Cryptography? https://gun.eco/docs/Cartoon-Cryptography ? Then Natnael added 3FA (3factor friend authentication) for lost password/account recovery. https://twitter.com/marknadal/status/1427715775838572545
I am super new to encryption and hash concepts (didn't even know the difference 4 days ago) 😅
But I watched your crypto cartoon last weekend (and know your 3FA 🔥🙏), which brought me to salt, which brought me to hash and salt and pepper, which brought me to this Fireship video https://youtu.be/NuyzuNBFWxQ (after watching it you will be like "I know kung fu").
And these two from Computerphile: https://youtu.be/8ZtInClXe1Q (How NOT to Store Passwords!) and https://youtu.be/b4b8ktEV4Bg (Hashing Algorithms and Security). Somehow 9 years old, but judging from other articles, still relevant.
PBKDF2 seems to be out of date today btw.
One weakness of PBKDF2 is that while its number of iterations can be adjusted to make it take an arbitrarily large amount of computing time, it can be implemented with a small circuit and very little RAM, which makes brute-force attacks using application-specific integrated circuits or graphics processing units relatively cheap.[12] The bcrypt password hashing function requires a larger amount of RAM (but still not tunable separately, i.e. fixed for a given amount of CPU time) and is slightly stronger against such attacks,[13] while the more modern scrypt key derivation function can use arbitrarily large amounts of memory and is therefore more resistant to ASIC and GPU attacks.[12]
In 2013, the Password Hashing Competition (PHC) was held to develop a more resistant approach. On 20 July 2015 Argon2 was selected as the final PHC winner, with special recognition given to four other password hashing schemes: Catena, Lyra2, yescrypt and Makwa.[14] Another alternative is Balloon hashing, which is recommended in NIST password guidelines.[15]
So my stack will rather be Argon2, salt and pepper (does someone know a repo that works in the browser, btw?).
I wanted to go for SHA-3 first (https://github.com/emn178/js-sha3), but I read it's not good for passwords compared to Argon2.
https://github.com/antelle/argon2-browser is based on WASM, which collides with my Vite bundler (known issue) 😭
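Until a browser-friendly Argon2 build plays nicely with Vite, here is a minimal salt + pepper sketch using Node's built-in crypto.scrypt (a memory-hard KDF from the same family the excerpt above mentions); the pepper handling and parameters are illustrative only:

```js
// salt + pepper + memory-hard KDF sketch using Node's built-in scrypt
// (PEPPER would live in an ENV var / secret store, never next to the stored hashes)
const { randomBytes, scryptSync, timingSafeEqual } = require('crypto');

const PEPPER = process.env.PEPPER || 'dev-only-pepper';

function hashPassword(password) {
  const salt = randomBytes(16);                         // unique per user
  const hash = scryptSync(password + PEPPER, salt, 64); // 64-byte derived key
  return salt.toString('hex') + ':' + hash.toString('hex');
}

function verifyPassword(password, stored) {
  const [saltHex, hashHex] = stored.split(':');
  const hash = scryptSync(password + PEPPER, Buffer.from(saltHex, 'hex'), 64);
  return timingSafeEqual(hash, Buffer.from(hashHex, 'hex'));
}

// usage sketch:
// const stored = hashPassword('hunter2');
// verifyPassword('hunter2', stored); // true
```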
@amark
https://github.com/worldpeaceenginelabs/GUNJS-Starterkit
This is a collection of tools. The code is vanilla JS btw (not even TypeScript) in a Svelte environment.
You can just copy paste the JS parts if you like to re-use them.
I am 24/7 occupied with DAuth (it really drives me), but I will publish a GunJS Quickstart Guide asap. (Many changes of concept so far; I don't want to spam the wiki, so I will publish when I am 100% done and sure it has become a "GunJS for Dummies".)
Pretty much a clone of https://gun.eco/docs/Introduction and https://gun.eco/docs/API, but a bit easier to approach (and with that deprecated stuff out of the way).
Hi everybody!
Not sure if this is off-topic (I made it for our Gun apps because I'm too lazy to learn SEA for now (I know I'll have to in the near future, to encrypt every single file in case someone hacks a user's credentials)).
But I'd like to know if this concept is feasible anyway?
So many answers! ❤️
Aaaand I have some more questions haha:
Do .get().get() commands only retrieve that very specific requested piece of data, or also the intermediate data? (.get().on() should also deliver the ID of the peer that distributed the event.)
@benpreiss:matrix.org
@Lexi:matrix.org
@amark
I started to look into the auth/spam issue from the ground up, but this time more visually.
Plus I had a nice cryptography geek discussion on Discord today.
The following slide shows the current state that every developer will face at some point when starting with GunJS (GitHub, Cloudflare, ENVs are interchangeable with your own providers/methods).
I invite everybody to wrap your head around the slide, to find the best balance between...
Authorized and unauthorized users (which share a red flag!)
...and how to measure, identify and regulate them.
Slide (duplicate to modify) https://docs.google.com/presentation/d/1xb6l41eqt6OYxNwtZJSh1wC_rTMrIEyExcsCBdhRIxo/edit?usp=sharing
I copy-pasted the top slide to the bottom, made the ME card a user, and pretended to just lock the whole system. So I replaced the red conditions with new green conditions.
Notice from the color change:
WE CAN'T FULLY SECURE THE APP, BECAUSE THE CODE IS MODIFIABLE
WE CAN'T FULLY SECURE THE RELAY EITHER, BECAUSE THE ADDRESS CAN BE KNOWN
THE FORMER POINTS US STRONGLY TOWARDS SECURING THE DATA ITSELF, THE HANDLING OF .get/.put (API)
Measurement No 1: ENCRYPTION (for data whose audience is less than absolutely everybody)
In case someone legit, or someone sneaking in, gets hold of it, they only find garbage.
hash, padding, encryption: ECDSA-SALT-RSA or SHA-3, SHA256, AES
Measurement No 2: VALIDATION/SIGNING
Sign and validate all data with keypairs (transfer/post/message)
hash, padding, encryption: ECDSA-SALT-RSA, or HMAC, PBKDF2
Measurement No 3: RESTRICT/LIMIT/BALANCE ACCESS TO .get/.put (API)
Measurement No 4: AUTH
You can start to see some patterns emerge from playing with red, yellow and green, kind of a puzzle...
I will start by locking the whole system up and myself out, and then start opening it a bit, see what happens, maybe unlock it a bit further... You get the point...
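For measurements 1 and 2, Gun already ships SEA; a minimal encrypt + sign round trip looks roughly like this (the data and key handling are placeholders):

```js
// minimal SEA encrypt + sign + verify + decrypt round trip
const Gun = require('gun');
const SEA = require('gun/sea');

(async () => {
  const pair = await SEA.pair(); // keypair used for both signing and encrypting here

  // Measurement 1: ENCRYPTION - anyone without the key only finds garbage
  const enc = await SEA.encrypt({ couch: 'offered' }, pair);

  // Measurement 2: VALIDATION/SIGNING - data is provably from this keypair
  const signed = await SEA.sign(enc, pair);
  const verified = await SEA.verify(signed, pair.pub); // undefined if tampered with
  const dec = await SEA.decrypt(verified, pair);

  console.log(dec); // { couch: 'offered' }
})();
```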
@amark
I know people want those features, but I'm trying to encourage people in the opposite direction: combine together into a large DHT so we all can reuse each other's spare bandwidth/compute.
I've been thinking in your direction, and if these use-case-dependent relay strategies (slide below this post) were integrated into Gun, that would be awesome!
Maybe an extension of AXE? You'll find a way 🔥🔥🔥
Incentivise API consumers with high bandwidth needs, and in turn grow the whole Gun Ecosystem
How does it work?
After the first 24 hours of a relay relaying data to and from multiple dapps/applications from different sources, it generates a list (just for the sake of explanation, imagine the sources are Google, Fortnite, Dtube) which gets updated every 24h or less.
This list would look basically like this:
24h USAGE:
From here the AXE extension could do the following:
Group the request sources by bandwidth usage and thereby grant dapps/apps access only to relays serving similar bandwidth needs (see the sketch below).
Group C could be incentivized to create more Gun relay infrastructure for the whole Gun ecosystem (they have the money, so why not?).
PS: more groups A, B, C, D... for a finer grain are of course possible; this is just for the sake of explanation.
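A hypothetical sketch of that grouping step (the tier boundaries, usage numbers and the groupByBandwidth name are all made up for illustration):

```js
// hypothetical: bucket request sources from the 24h usage list into bandwidth tiers
const TIERS = [
  { name: 'A', max: 1e9 },      // light dapps (bytes per 24h, purely illustrative)
  { name: 'B', max: 100e9 },    // medium
  { name: 'C', max: Infinity }, // heavy hitters - candidates to fund more relays
];

function groupByBandwidth(usage) { // usage: { 'dapp.example': bytesLast24h, ... }
  const groups = { A: [], B: [], C: [] };
  for (const [source, bytes] of Object.entries(usage)) {
    const tier = TIERS.find(t => bytes <= t.max);
    groups[tier.name].push(source);
  }
  return groups;
}

// example:
// groupByBandwidth({ 'my-dapp.example': 2e8, 'video-portal.example': 5e12 });
// -> { A: ['my-dapp.example'], B: [], C: ['video-portal.example'] }
```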