Yeastplume
@yeastplume
Solo mining should actually be more attractive. The main theory behind the algorithm in Cuckoo Cycle is that it's memory bound (as in, RAM latency is the biggest bottleneck), and therefore doesn't succumb to the power-hungry ASIC arms race found in other algorithms. Of course you can run a large farm if you want, but the ability to do so won't necessarily depend on your having access to custom hardware or cheap electricity... so the whole thing is more resistant to centralisation.
That's the theory anyhow :D
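For anyone new to the idea, here's a toy sketch of why the cycle finding is memory bound: edges of a large bipartite graph come cheaply from a keyed hash, but *detecting* a cycle forces the miner to store and query the graph seen so far, so memory traffic dominates. This is emphatically not the real algorithm (real Cuckoo Cycle uses SipHash and hunts for cycles of a fixed length, 42); `edge`, `SIDE`, and the union-find detection below are simplifications invented for illustration.

```rust
// Toy illustration of Cuckoo-style cycle finding (NOT the real algorithm).
// The point: edges come cheaply from a hash, but finding a cycle forces you
// to store and query the growing graph, so memory access dominates.

const SIDE: u64 = 16; // nodes per side of the bipartite graph (toy size)

// Stand-in keyed hash; the real miner uses SipHash keyed by the block header.
fn edge(key: u64, i: u64) -> (usize, usize) {
    let h = key
        .wrapping_mul(0x9e3779b97f4a7c15)
        .wrapping_add(i)
        .wrapping_mul(0xbf58476d1ce4e5b9);
    // u lives in [0, SIDE), v in [SIDE, 2*SIDE): a bipartite graph.
    (((h >> 8) % SIDE) as usize, (SIDE + (h % SIDE)) as usize)
}

// Union-find with path compression over all 2*SIDE nodes.
fn find(parent: &mut Vec<usize>, x: usize) -> usize {
    let p = parent[x];
    if p == x {
        x
    } else {
        let r = find(parent, p);
        parent[x] = r; // path compression
        r
    }
}

/// Returns the index of the first edge that closes a cycle, if any.
/// Note the whole edge history (as union-find state) must be kept in memory.
pub fn first_cycle(key: u64, n_edges: u64) -> Option<u64> {
    let mut parent: Vec<usize> = (0..(2 * SIDE) as usize).collect();
    for i in 0..n_edges {
        let (u, v) = edge(key, i);
        let ru = find(&mut parent, u);
        let rv = find(&mut parent, v);
        if ru == rv {
            return Some(i); // an edge joining two connected nodes closes a cycle
        }
        parent[ru] = rv;
    }
    None
}

fn main() {
    // With more edges than nodes, a cycle is guaranteed by pigeonhole.
    println!("cycle closed at edge {:?}", first_cycle(42, 64));
}
```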
Andrew Poelstra
@apoelstra
sifr-0: igno's current plan is to have short blocktimes, but i think you're both wrong for a couple of reasons. first, cuckoo cycle attempts take long enough that a minute may be insufficient to get progress-freeness. second, long blocktimes in MW give more time for people to merge transactions, and more blockspace in which to do so. third, they allow syncing the chain through a high-latency mix network without needing constant traffic. fourth, pool hashpower centralization does not require centralization of block production: there is no reason individual miners can't choose their own blocks and submit to pools proofs that the coinbase destination is correct
(regardless of consensus rules this is something i think is critical for launch, that there is existing pool software that works this way)
Yeastplume
@yeastplume
I could be wrong because I haven't spent much time with it, but from my current reading of the code the difficulty of the cycle finding is quite low, and a new graph is generated after every two-second pause to collect new transactions. After each pause, a new block header is generated containing a new random nonce, and a new hash is fed into the graph initialisation. When I did some tests yesterday, the code was actually finding suitable cycles quite easily; the difficulty check on the hash currently seems to be a larger factor in how long solutions take.
Again, could be wrong or lacking understanding of how it's supposed to work, I know the current code is just a first iteration for testing
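As a sketch of the loop just described (every name here — `find_cycle`, `meets_difficulty`, `header_hash` — is a hypothetical stand-in, not grin's actual API): each attempt hashes a header with a fresh nonce, the hash seeds the graph search, and a found cycle still has to pass the separate difficulty check on the hash.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch of the mining loop as described in the chat: the header
// hash seeds graph generation, and a separate difficulty target filters the
// solutions. None of these names come from the grin codebase.

fn header_hash(prev: &str, nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (prev, nonce).hash(&mut h);
    h.finish()
}

// Stand-in for the Cuckoo cycle search: pretend ~1 in 4 graphs has a cycle.
fn find_cycle(seed: u64) -> bool {
    seed % 4 == 0
}

// Difficulty check on the hash itself: require some leading zero bits.
fn meets_difficulty(hash: u64, bits: u32) -> bool {
    hash.leading_zeros() >= bits
}

/// Try nonces until a graph both contains a cycle and its header hash
/// meets the difficulty target; returns the winning nonce.
pub fn mine(prev: &str, difficulty_bits: u32) -> u64 {
    let mut nonce = 0u64;
    loop {
        let hash = header_hash(prev, nonce);
        if find_cycle(hash) && meets_difficulty(hash, difficulty_bits) {
            return nonce;
        }
        nonce += 1; // in grin, new transactions are also collected periodically
    }
}

fn main() {
    println!("solved with nonce {}", mine("prev-block", 4));
}
```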
Andrew Poelstra
@apoelstra
i believe the current code is not very memory-hard to enable fast cycle finding
in any case, to get accurate numbers we need to use the most efficient mining sw, which i haven't done
and run it on a GPU, etc
Yeastplume
@yeastplume
Yes, just looking now, there's a default consensus value set to intentionally allow for quick cycle-finding, which I assume is just splatting a comically large number of edges into the graph? In any case, yes, this will all have to be properly tested with the Tromp miner and GPU mining variants
Ignotus Peverell
@ignopeverell
@apoelstra John Tromp's latest recommendation, after the recent optimization that trades memory for speed, is to use Cuckoo30, which requires 2-4GB of memory and should give over 1 sol/sec, so plenty progress-free for a 1min block time
even on memory constrained devices, they should be able to get a few solutions per block
Yeastplume
@yeastplume
Hi all, I've run into an issue trying to create a more general testing framework for automatically spawning multiple servers on the same machine. If you look at the first two test functions in simulnet.rs in my own forked branch, you can see what I'm getting at: a library to generate many server instances running on different CPU threads, performing the run and then making the results available in order to perform whatever checks are required. Here: https://github.com/yeastplume/grin/blob/feature/wallet-post/grin/tests/simulnet.rs
Yeastplume
@yeastplume
The problem I've come across is within https://github.com/yeastplume/grin/blob/feature/wallet-post/grin/tests/framework.rs. If you look around line 416, you can see that there's a thread spawned from a CPU pool for each 'virtual server', so to speak; if I'm running 5 threads, the run_server method in LocalServerContainer runs 5 times and returns its results into the closure there 5 times. Now, in order to get the state of these servers back to the caller so post-execution tests can actually be run against them, the servers spawned on all threads need to be collected into the original calling thread. However, no matter how I try to wrap them, Rust will not allow them across threads, because the grin::Server struct contains a handle to reactor::Core, which must in all cases be owned by its own thread and cannot be shared across threads.
Can anyone have a look at what I'm trying to do and advise if anything is possible here, or is my approach all wrong and anti-Rust?
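For reference, a minimal reproduction of the compile error in question, with `Rc` standing in for the reactor handle (both are `!Send`, i.e. tied to the thread that created them), plus the one thing that does work: shipping plain data back over a channel. The `Server` struct and its fields here are invented for illustration, not grin's real types.

```rust
use std::rc::Rc;
use std::sync::mpsc;
use std::thread;

// Minimal reproduction of the compiler's complaint. One !Send field
// (here Rc, playing the role of the reactor::Core handle) makes the
// whole struct !Send, so it can never cross a thread boundary.
#[allow(dead_code)]
struct Server {
    handle: Rc<()>,    // the reactor-like, thread-bound part
    peer_count: usize, // plain data, perfectly Send-able
}

/// Run a "server" on its own thread and report only Send-able data back.
pub fn run_and_report() -> usize {
    let (tx, rx) = mpsc::channel();
    let t = thread::spawn(move || {
        let server = Server { handle: Rc::new(()), peer_count: 3 };
        // This line would NOT compile, which is exactly the error being hit:
        // tx.send(server).unwrap(); // `Rc<()>` cannot be sent between threads
        tx.send(server.peer_count).unwrap(); // extracting plain data works
    });
    let peers = rx.recv().unwrap();
    t.join().unwrap();
    peers
}

fn main() {
    println!("peer count reported from server thread: {}", run_and_report());
}
```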
Ignotus Peverell
@ignopeverell
@yeastplume don't think your approach is wrong but I wouldn't use futures, I'd just use native threads instead
or we'll need to take the handle to the reactor out of the Server but I'm not sure that's a good idea, ideally the Server would be able to schedule new futures in the event loop by itself.
Ignotus Peverell
@ignopeverell
an alternative could be to introduce a struct that gets relevant info out of the server and that can travel between threads easily
probably the easiest and most natural solution
shared refs of the whole server are not something we should try too hard to have
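A minimal sketch of that idea, assuming invented names (`ServerStats`, `run_simulation`, the stats fields) rather than anything in the grin codebase: each native thread fills out a small plain-data struct, which is trivially `Send`, and the caller collects them over a channel for post-run checks.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical sketch of the suggested design: a small, plain-data struct
// that the server fills out on its own thread. All names are invented.

#[derive(Clone, Debug, PartialEq)]
pub struct ServerStats {
    pub server_id: usize,
    pub peer_count: usize,
    pub head_height: u64,
}

/// Spawn `n` simulated servers on native threads; each reports a
/// `ServerStats` (trivially `Send`) back over a channel.
pub fn run_simulation(n: usize) -> Vec<ServerStats> {
    let (tx, rx) = mpsc::channel();
    let handles: Vec<_> = (0..n)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                // ...run the real server here, then summarise its state:
                let stats = ServerStats {
                    server_id: id,
                    peer_count: n - 1, // toy value: fully connected net
                    head_height: 1,
                };
                tx.send(stats).unwrap();
            })
        })
        .collect();
    drop(tx); // drop the original sender so rx.iter() terminates
    let mut stats: Vec<ServerStats> = rx.iter().collect();
    for h in handles {
        h.join().unwrap();
    }
    stats.sort_by_key(|s| s.server_id);
    stats
}

fn main() {
    for s in run_simulation(5) {
        println!("{:?}", s);
    }
}
```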
Ignotus Peverell
@ignopeverell
@apoelstra going back to the block time, I could provide opposing arguments (like premature optimizations, one way or another, or randomness in picking parameters right now) and ultimately I'm always open to changes, maybe bring it up to the mailing list?
Andrew Poelstra
@apoelstra
ignopeverell: for sure, i will (or try to get somebody more knowledgeable to do so). agreed about premature optimizations and randomness, i don't enjoy discussing these aspects of the system either :)
Ignotus Peverell
@ignopeverell
@apoelstra sounds good!
Alec Spier
@glitchdigger
what the fuck is up team
you guys are doing GREAT work
Yeastplume
@yeastplume
@ignopeverell Thanks, I had similar thoughts re: a separate struct maintaining or populating its own references to all of the other data besides the reactor::Handle in the grin::Server struct (the ones already wrapped in Arcs), but didn't want to smash into that area of the code without discussing it first. I also think you're right about using native threads instead of futures; you'd want to make it easy to interact with running servers to do stuff like add peers on the fly or simulate transactions, rather than being tied to a particular closure waiting for a result.
Would it work, do you think, if there were a method on the Server struct that returned a struct with reference-counted refs to the p2p, chain_head, chain_store etc. which could be stored before spawning the server threads? Or should it be a struct that just provides specific data for test results, like peer count, head info, etc?
Yeastplume
@yeastplume
Yes, keeping a separate reference struct does work, if you check the changes in the PR here, you can see it in action. There are 2 tests in simulnet.rs that use it and are able to read the results: https://github.com/ignopeverell/grin/pull/66/files/ebf2a773d95c7547404698cf6616e06d9d4d7b86..25f7be36936654eba2719b669079769d59de3ba5
Antonis Anastasiadis
@antanst
Hi. The Grin Github readme mentions a fast block time. Is there a rationale or discussion archive about this somewhere?
I don't mean to bikeshed, just curious.
Andrew Poelstra
@apoelstra
i think the last 72 hours of this channel has discussion, you can scroll up, it's not long
Antonis Anastasiadis
@antanst
will do. Thanks!
Ignotus Peverell
@ignopeverell
@yeastplume sorry to come in late, but I'd prefer a lightweight structure with specific data. It could be consumed by things other than tests if required too. Sharing references with the outside world is a bit of a slippery slope which could lead to resource leaks and weird bugs. I'd rather have only simple accessors to get data, or simple set methods to change things if required. Does that make sense?
said differently, Server is meant as a higher-level interface/facade to the rest of the system and so should control side-effects and resource usage
Yeastplume
@yeastplume
@ignopeverell yes, that makes perfect sense. In the server struct you have, there are helper methods used for tests like peer-count, head, etc, so perhaps just a structure that fills those out and can be expanded by tests as required, as opposed to the big reference struct I have there now in my fork?
Ignotus Peverell
@ignopeverell
yes, exactly
Yeastplume
@yeastplume
No problem I'll make those changes over the next day or so. Are the other bits about configuring ports, etc in the PR generally okay or is there anything else you think should be changed?
Ignotus Peverell
@ignopeverell
just started going through the PR, I'll comment on github if I have remarks?
Yeastplume
@yeastplume
yep, sounds good
Ignotus Peverell
@ignopeverell
@yeastplume rest of the code looks just fine actually
Yeastplume
@yeastplume
great, I'll just make the changes to that struct then and let you know when they're in
Alec Spier
@glitchdigger
sup you guys
is there a testnet?
Ignotus Peverell
@ignopeverell
@glitchdigger not quite yet but going there little by little
sifr-0
@sifr-0
@apoelstra @yeastplume Thanks for your responses. I'd just like to add one more bit concerning the blocktimes. Even if pools allow clients to produce blocks, I would argue that is not sufficient, as it only addresses one of the problems. It may solve decentralization of block production, but a pool is still a centralization of hashing power. A malicious third party or government agency can easily have a pool taken down or DDoSed; performing such acts on solo miners is far more expensive. I would argue that solo miners are strictly the best miners: by every objective measure they are better for the network than pool miners. The health of the network relies on decentralized hashpower which cannot be censored, and I fear that discouraging solo mining via long blocktimes will lead to a less secure network. I believe this may be a real concern for grin, because if it offers true anonymity then it will certainly play host to transactions which governments would like to scrutinize.
sifr-0
@sifr-0
I've seen no mention of Proof of Stake anywhere in grin documentation. Is it safe to say that grin has no plans of going to proof of stake, fully or in hybrid fashion such as Decred?
Andrew Poelstra
@apoelstra
sifr-0: that is very safe to say
sifr-0: regarding "centralization of hashpower", that is not what a pool is either, a pool only centralizes payouts