Yeastplume
@yeastplume
@ignopeverell Thanks, I had similar thoughts re: a separate struct maintaining or populating its own references to all of the other data besides the reactor::Handle in the grin::Server struct (the ones already wrapped in Arcs), but didn't want to smash into that area of the code without discussing it first. I also think you're right about using native threads instead of futures; you'd want to make it easy to interact with running servers to do things like add peers on the fly or simulate transactions, rather than being tied to a particular closure waiting for a result.
Would it work, do you think, if there were a method on the Server struct that returned a struct with reference-counted refs to the p2p, chain_head, chain_store etc. which could be stored before spawning the server threads? Or should it be a struct that just provides specific data for test results, like peer count, head info, etc.?
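A minimal sketch of the reference-counted struct being described here. All names (`ServerRefs`, `P2pServer`, `ChainHead`) are hypothetical stand-ins, not the actual grin types:

```rust
use std::sync::{Arc, RwLock};

// Hypothetical stand-ins for grin's real p2p and chain types;
// the actual grin::Server fields differ.
pub struct P2pServer { pub peer_count: usize }
pub struct ChainHead { pub height: u64 }

// Reference-counted handles to server internals, captured before the
// server threads are spawned, so a test can inspect live state while
// the server runs.
pub struct ServerRefs {
    pub p2p: Arc<RwLock<P2pServer>>,
    pub chain_head: Arc<RwLock<ChainHead>>,
}

impl ServerRefs {
    pub fn peer_count(&self) -> usize {
        self.p2p.read().unwrap().peer_count
    }
    pub fn head_height(&self) -> u64 {
        self.chain_head.read().unwrap().height
    }
}
```

Because the Arcs are cloned rather than moved, the test's view stays current as the server threads mutate the shared state.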
Yeastplume
@yeastplume
Yes, keeping a separate reference struct does work, if you check the changes in the PR here, you can see it in action. There are 2 tests in simulnet.rs that use it and are able to read the results: https://github.com/ignopeverell/grin/pull/66/files/ebf2a773d95c7547404698cf6616e06d9d4d7b86..25f7be36936654eba2719b669079769d59de3ba5
Antonis Anastasiadis
@antanst
Hi. The Grin Github readme mentions a fast block time. Is there a rationale or discussion archive about this somewhere?
I don't mean to bikeshed, just curious.
Andrew Poelstra
@apoelstra
i think the last 72 hours of this channel has discussion, you can scroll up, it's not long
Antonis Anastasiadis
@antanst
will do. Thanks!
Ignotus Peverell
@ignopeverell
@yeastplume sorry to come in late but I'd prefer a lighter-weight structure with specific data. It could be consumed by other things than tests if required too. Sharing references with the outside world is a bit of a slippery slope which could lead to resource leaks and weird bugs. I'd rather have only simple accessors to get data, or simple set methods to change things if required. Does that make sense?
said differently, Server is meant as a higher-level interface/facade to the rest of the system and so should control side-effects and resource usage
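One way to read this suggestion in code (hypothetical names, not the actual grin code): the facade copies values out instead of handing out references, so callers hold no handles into its internals.

```rust
// A plain-data snapshot instead of shared references: the Server
// copies out only the values it chooses to expose.
#[derive(Clone, Debug, PartialEq)]
pub struct ServerStats {
    pub peer_count: usize,
    pub head_height: u64,
}

// Hypothetical facade; the real grin::Server holds different fields.
pub struct Server {
    peers: Vec<String>,
    head_height: u64,
}

impl Server {
    pub fn new(peers: Vec<String>, head_height: u64) -> Server {
        Server { peers, head_height }
    }

    // Simple accessor returning data only: no references to internal
    // state escape the facade, so it keeps control of side-effects
    // and resource usage.
    pub fn get_stats(&self) -> ServerStats {
        ServerStats {
            peer_count: self.peers.len(),
            head_height: self.head_height,
        }
    }
}
```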
Yeastplume
@yeastplume
@ignopeverell yes, that makes perfect sense. In the server struct you have there are helper methods used for tests like peer_count, head, etc., so perhaps just a structure that fills those out and can be expanded by tests as needed, as opposed to the big reference struct I have in my fork now?
Ignotus Peverell
@ignopeverell
yes, exactly
Yeastplume
@yeastplume
No problem, I'll make those changes over the next day or so. Are the other bits about configuring ports, etc. in the PR generally okay, or is there anything else you think should be changed?
Ignotus Peverell
@ignopeverell
just started going through the PR, I'll comment on github if I have remarks?
Yeastplume
@yeastplume
yep, sounds good
Ignotus Peverell
@ignopeverell
@yeastplume rest of the code looks just fine actually
Yeastplume
@yeastplume
great, I'll just make the changes to that struct then and let you know when they're in
Alec Spier
@glitchdigger
sup you guys
is there a testnet?
Ignotus Peverell
@ignopeverell
@glitchdigger not quite yet but going there little by little
sifr-0
@sifr-0
@apoelstra @yeastplume Thanks for your responses. I'd just like to add one more bit concerning the blocktimes. Even if pools allow clients to produce blocks, I would argue that is not sufficient, as it only addresses one of the problems. It may solve decentralization of block production, but a pool is still a centralization of hashing power. A malicious third party or government agency can easily have a pool taken down or DDoS'd. Performing such acts on solo miners is far more expensive. I would argue that solo miners are strictly the best miners: in every objective measure they are better for the network than a pool miner. The health of the network relies on decentralized hashpower which cannot be censored. I fear that discouraging solo mining via long blocktimes will lead to a less secure network. I believe this may be a real concern for grin, because if it offers true anonymity then it will certainly play host to transactions which governments would like to scrutinize.
sifr-0
@sifr-0
I've seen no mention of Proof of Stake anywhere in grin documentation. Is it safe to say that grin has no plans of going to proof of stake, fully or in hybrid fashion such as Decred?
Andrew Poelstra
@apoelstra
sifr-0: that is very safe to say
sifr-0: regarding "centralization of hashpower", that is not what a pool is either, a pool only centralizes payouts
if a pool is ddos'd it takes basically zero effort for the actual hashpower to switch to a different pool or to solo mine
Yeastplume
@yeastplume
This is probably just going to amount to a lot of stupid, but I’m considering the paper wallet problem, i.e. the fact that cold storage of any kind is going to be so unwieldy as to be unusable, cause you’ll need to update your offline storage with new amounts every time you send/receive anything.
Andrew Poelstra
@apoelstra
any ideas are welcome on that front, don't worry about sounding stupid :)
Yeastplume
@yeastplume
Heh, well, in ignopeverell/grin#28, if I read it right, the discussion seems to indicate that the only way to do cold storage in MW is to allow the amounts to be brute forced by iterating with transaction keys over the entire UTXO set, and since there's potentially 8 or 10 significant digits per currency unit and who knows how big a UTXO set, this would quickly become prohibitive. So the only way to allow cold wallets is to somehow reduce the search space that a wallet needs to brute force through. One suggestion is denominations; I'm not entirely clear on what that means, but I'm guessing it means you reduce the values allowed in an input/output to known divisions, like a "quarter" at 0.00000025, etc., which seems limiting, but weighing that against the need for cold storage it might not be such a bad thing... however
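The rough arithmetic behind "prohibitive" can be sketched as follows. The numbers are illustrative only, not grin's actual parameters:

```rust
// Back-of-envelope cost of a cold-wallet scan: every UTXO must be
// tried against every candidate value, so the work is the product of
// the two. saturating_mul avoids overflow for large inputs.
fn scan_work(candidate_values: u64, utxo_count: u64) -> u64 {
    candidate_values.saturating_mul(utxo_count)
}
```

With roughly 1e9 sub-unit values per coin (8-10 significant digits) and a million-entry UTXO set, that is on the order of 1e15 trial computations; restricting outputs to, say, 100 fixed denominations drops the same scan to about 1e8.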
Andrew Poelstra
@apoelstra
oh, that's a simpler problem than what i was thinking about..
i didn't see this bug, this one is easy ;p
Yeastplume
@yeastplume
What's the solution? I was thinking that since there’s communication between both parties anyhow, would it not somehow be possible for the receiver to indicate he wants the transaction structured in a certain way in order to reduce the space that needs to be brute forced through?
Andrew Poelstra
@apoelstra
if there's communication then that isn't cold..
Yeastplume
@yeastplume
What's the problem you're thinking about?
But I meant communication before the transaction is finalised
Andrew Poelstra
@apoelstra
how do you send money to a cold wallet when the cold wallet has to produce the outputs for you
yes, a cold-wallet in bitcoin doesn't communicate with anybody at all, except when redeeming coins
Yeastplume
@yeastplume
No, I didn't mean sending the money to a cold wallet. I meant that as the transaction is being created, you could manipulate the blinding factor or transaction set somehow to make it easier for the recipient to identify later, assuming he loses the amount, i.e. only has the private key that can be generated from cold storage
Andrew Poelstra
@apoelstra
oh yes, i posted on the github link how to do that
it's easy to encrypt stuff to yourself in your own rangeproof
and it's easy to detect your outputs by fixing the key for a specific digit
(it would be even easier with unconditional soundness where you have a key that's independent of the value)
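As a toy illustration of the "detect your outputs" idea: a wallet that can re-derive its blinding key from cold storage can recognise its own output by recomputing commitments over a restricted candidate-value set. The "commitment" below is a trivial linear function over integers chosen for illustration, not a real Pedersen commitment on secp256k1, and has none of its hiding or binding properties:

```rust
// Toy commitment c = v*G + r*H over u64 mod a prime. Illustrative
// only: a real MW output uses a Pedersen commitment on an elliptic
// curve plus a rangeproof.
const P: u64 = 2_147_483_647; // Mersenne prime 2^31 - 1
const G: u64 = 3;
const H: u64 = 7;

fn commit(v: u64, r: u64) -> u64 {
    (v * G % P + r * H % P) % P
}

// Scan candidate values with our deterministically re-derived
// blinding key r, and see whether any candidate reproduces the
// on-chain commitment. Restricting candidates (e.g. to fixed
// denominations) is what keeps this scan tractable.
fn recover_value(commitment: u64, r: u64, candidates: &[u64]) -> Option<u64> {
    candidates.iter().copied().find(|&v| commit(v, r) == commitment)
}
```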
Yeastplume
@yeastplume
ah, very good, I see that now
so what's the hard problem?
Andrew Poelstra
@apoelstra
the hard problem is sending coins to a cold wallet
without any online keys
MW inherently requires that somebody prove ownership of transaction outputs for those outputs to be valid
Yeastplume
@yeastplume
well, that seems to be a big stumbling block towards the claim of privacy in transactions, when you need to be running a wallet server somewhere in order to receive anything
Andrew Poelstra
@apoelstra
running a server isn't a big privacy block, you have to communicate with your counterparty anyway
you can improve network privacy using valueshuffle and tor (we believe)
Yeastplume
@yeastplume
Yes well... we can.. but the er... general public not so much