Lucifer1903
@Lucifer1903
I see, thank you @ignopeverell
Ignotus Peverell
@ignopeverell
no problem, actually if someone wanted to write up an explanation of our mining algo, that'd be very nice :)
I'm hoping we gradually document all part of the system as they get settled to end up with a reasonably complete design doc
(which I guess we'd have to call a whitepaper, to do like everyone else)
Lucifer1903
@Lucifer1903
@ignopeverell I've never done anything like that before. I'm not a programmer but I would like to help in whatever way I can. I'll start by reading up more on Cuckoo Cycle and try to write an explanation of it... Hopefully it doesn't turn out too bad hahaha
Jacob Payne
@Latrasis
Hi @ignopeverell, just wanted to follow-up. I'll be starting work for #56 and try writing up the simplest banning scenarios. Let me know if you have any other points to raise. Thanks!
Yeastplume
@yeastplume
I've just spent a good while going over Cuckoo cycles and mining as currently implemented. Just wondering, at the moment the code's just a simple, non-optimised example for testing that looks like it's based on the original Cuckoo 'simple-miner'... There are other more efficient miners being added to Tromp's github over time, one that runs faster at the expense of more memory overhead, a cuda version, etc...
urza
@urza_cc_twitter
Re cuckoo.. I have a question. I wanted to try the cuckoo miner that consumes more memory but is faster (the claimed speedup bounty by xenocat). I guess it is the "mean_miner" in @tromp's repo? The code contains some hardcoded CPU instructions (AVX2) that are available only on newer processors. These parts of the code are wrapped in preprocessor directives so it can be compiled with or without them (it took me a while to understand this, there is no proper indentation). When I tried to compile it on my Kaby Lake 1. with AVX2 and 2. without AVX2, the result was that [1.] was 2x faster than [2.] Is this due to inefficient code in [2.] that can be further optimized? Or is it not possible? For a memory-hard PoW, it seems like it gives newer CPUs too big an advantage over older CPUs... I tried to understand more from the miner code, but it is very hard to read, there are no comments, and variables have names like a, b, c, u, v,...
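For readers hitting the same compile split urza describes: the C++ source guards its AVX2 path with preprocessor directives so the same file builds on older CPUs. A rough Rust analogue of that pattern (function names here are made up for illustration, not from Tromp's repo) looks like:

```rust
// Hedged sketch: the real mean_miner is C++ with #ifdef __AVX2__ guards; this
// shows the equivalent compile-time split in Rust. `sipround_batch` is a
// made-up stand-in, not a function from Tromp's repo.

// Compiled in only when the build targets AVX2
// (e.g. RUSTFLAGS="-C target-feature=+avx2"):
#[cfg(target_feature = "avx2")]
fn sipround_batch(x: u64) -> u64 {
    // a real miner would run the wide SIMD path here
    x.rotate_left(13)
}

// Scalar fallback for builds without AVX2:
#[cfg(not(target_feature = "avx2"))]
fn sipround_batch(x: u64) -> u64 {
    x.rotate_left(13)
}

// Runtime detection avoids shipping one binary per CPU generation:
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn have_avx2() -> bool {
    std::is_x86_feature_detected!("avx2")
}

#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn have_avx2() -> bool {
    false
}

fn main() {
    println!("AVX2 at runtime: {}", have_avx2());
    println!("sipround_batch(1) = {}", sipround_batch(1));
}
```

On the 2x gap: part of it may simply be that the scalar fallback hasn't received the same optimization effort as the SIMD path, rather than anything fundamental.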
Yeastplume
@yeastplume
How optimised should the mining be within grin when it gets into the wild? I'm thinking there's a case to be made for already including the fastest known algorithms, as it puts everyone on the exact same footing with respect to initial mining
Ignotus Peverell
@ignopeverell
@Latrasis sounds good, I'll keep an eye on it
@yeastplume I should add a task/issue for it but we'll need a much more robust miner that'll likely integrate John Tromp's implementation
we'll need the main optimized cuckoo routines but also the multiplexing across multiple GPUs and all that good stuff
@urza_cc_twitter I don't think John is done with these optimizations, he's still working on it
Yeastplume
@yeastplume
@ignopeverell I take it you mean putting rust bindings around what's there as opposed to porting them, probably the best way to do it because I don't think anyone will do a better job of implementation than Mr. Tromp. I remember trying to mine digibyte once and having to scour the bowels of the internet to find an appropriate miner... it'd be much better from a user perspective if the main client keeps up to date with the latest and greatest mining algos that work for GPU/CPU setups and makes it easy for non-technical users. Nice thing about Cuckoo cycles is that theoretically anyone running a wallet client anywhere should be able to mine effectively without needing a dyson sphere to meet energy requirements.
Yeastplume
@yeastplume
BTW in ignopeverell/grin#23 when you say servers mining in parallel at different rates, do you mean adding some option to the miner to slow it down artificially?
Ignotus Peverell
@ignopeverell
@yeastplume yes, I meant Rust bindings. And in #23 I meant either slowing it down or just yielding to other processes (that'd presumably be mining as well) proportionally
sifr-0
@sifr-0
Hello all. I'm interested in the grin project and want to learn more about it. I've read the readme on the github page but still have a few questions. I've read in a few articles that grin will eventually 'peg' or support bitcoin and other coins. Does it mean that bitcoin users will gain the anonymity benefits provided by grin? How will this work? Another question I have is about blocktimes. My initial reading about Cuckoo Cycle hashing left me with the understanding that the blocktimes would be long. I'm concerned that long block times will make solo mining less attractive due to variance and lead to centralization of hashing power in pools. Please direct me to any documentation which I may have missed. I apologize if my questions are redundant or answered elsewhere.
Yeastplume
@yeastplume
@sifr-0 Hi, with respect to Cuckoo cycles, the difficulty can be adjusted like any other algorithm to keep close to a target block time. The main part of creating a POW is detecting cycles of a certain length within a large graph, but then there's an additional difficulty check of the hash of the result afterwards, similar to hashcash found in bitcoin... at any rate there are plenty of ways to keep within a target solve time
Solo mining should actually be more attractive, and the main theory behind the algorithm in Cuckoo cycles is that it's memory bound (as in RAM latency is the biggest bottleneck), and therefore doesn't succumb to the ASIC power-hungry arms race found in other algs. Of course you can run a large farm if you want, but the ability to do so won't necessarily depend on your having access to custom hardware or cheap electricity... so the whole thing is more resistant to centralisation.
That's the theory anyhow :D
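The two-stage check yeastplume describes (find a cycle in the graph, then apply a hashcash-style difficulty test to a hash of the solution) can be sketched roughly as below. `DefaultHasher` and the leading-zeros rule are illustrative stand-ins for grin's actual hash function and target comparison:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hashes a candidate cycle (a list of edge indices) and checks it against a
// difficulty target, hashcash-style. DefaultHasher stands in for the real
// hash used by grin; `difficulty` here means "leading zero bits required",
// so raising it rejects a larger fraction of otherwise-valid cycles.
fn meets_difficulty(cycle: &[u64], difficulty: u32) -> bool {
    let mut h = DefaultHasher::new();
    cycle.hash(&mut h);
    let digest = h.finish();
    digest.leading_zeros() >= difficulty
}

fn main() {
    // a 42-cycle's edge indices (dummy values)
    let cycle: Vec<u64> = (0..42).collect();
    // With difficulty 0 every found cycle passes; tuning the difficulty up or
    // down is what keeps the network near its target block time.
    assert!(meets_difficulty(&cycle, 0));
    println!("passes difficulty 0: {}", meets_difficulty(&cycle, 0));
}
```

This is why the cycle search itself doesn't have to be the only knob: the secondary hash check gives a finely adjustable difficulty on top of the coarse graph-size parameter.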
Andrew Poelstra
@apoelstra
sifr-0: igno's current plan is to have short blocktimes, but i think you're both wrong for a few reasons .. first, cuckoo cycle attempts take a long enough time that a minute may be insufficient to get progress-freeness; second, long blocktimes in MW give more time for people to merge transactions and more blockspace in which to do so; third, they allow syncing the chain through a high-latency mix network without needing constant traffic
fourth, pool hashpower centralization does not require centralization of block production; there is no reason individual miners can't choose their own blocks and submit to pools proofs that the coinbase destination is correct
(regardless of consensus rules this is something i think is critical for launch, that there is existing pool software that works this way)
Yeastplume
@yeastplume
I could be wrong because I haven't spent much time with it, but from my current reading of the code the difficulty of the cycle finding is quite low, and a new graph is generated after every two-second pause to collect new transactions. After each pause, a new block header is created containing a new random nonce, and a new hash is fed into the graph initialisation. When I did some tests yesterday, the code was actually finding suitable cycles quite easily, but the difficulty hash currently seems to be a larger factor in how long solutions take.
Again, could be wrong or lacking understanding of how it's supposed to work, I know the current code is just a first iteration for testing
Andrew Poelstra
@apoelstra
i believe the current code is not very memory-hard to enable fast cycle finding
in any case, to get accurate numbers we need to use the most efficient mining sw, which i haven't done
and run it on a GPU, etc
Yeastplume
@yeastplume
Yes, just looking now, there's a default consensus value set to intentionally allow for quick cycle-finding, which I assume is just splatting a comically large number of edges into the graph? In any case, yes, this will all have to be properly tested with Tromp's miner and the GPU mining variants
Ignotus Peverell
@ignopeverell
@apoelstra John Tromp's latest recommendation, after the recent optimization that trades memory for speed, is to use Cuckoo30, which requires 2-4GB of memory and should give over 1 sol/sec, so plenty progress-free for a 1min block time
even on memory constrained devices, they should be able to get a few solutions per block
Yeastplume
@yeastplume
Hi all, I've run into an issue trying to create a more general testing framework for automatically spawning multiple servers on the same machine.. If you look into the first two test functions in simulnet.rs in my own forked branch, you can see what I'm getting at, a library to generate many server instances running on different CPU threads, performing the run and then making the results available in order to perform whatever checks are required.. here: https://github.com/yeastplume/grin/blob/feature/wallet-post/grin/tests/simulnet.rs
Yeastplume
@yeastplume
The problem I've come across is within https://github.com/yeastplume/grin/blob/feature/wallet-post/grin/tests/framework.rs If you look around line 416, you can see that there's a thread spawned from a CPU pool for each 'virtual server', so if I'm running 5 threads, the run_server method in LocalServerContainer is run 5 times and returns its results into the closure there 5 times. Now, in order to get the state of these servers back to the caller so post-execution tests can actually be run against them, the servers spawned on all threads need to be collected into the original calling thread. However, no matter how I try to wrap them, Rust will not allow them across threads, because the grin::Server struct contains a handle to reactor::Core, which must in all cases be owned by its own thread and cannot be shared across threads.
Can anyone have a look at what I'm trying to do and advise if anything is possible here, or is my approach all wrong and anti-Rust?
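One way around the `Send` restriction, assuming the post-execution checks only need data rather than the live server: keep each server on its own thread and pass a plain summary struct back over a channel. A minimal sketch (names like `ServerStats` and `run_server` are hypothetical, not grin's actual API):

```rust
use std::sync::mpsc;
use std::thread;

// The server itself (holding a reactor::Core handle) is not Send, but a
// plain-data summary struct is, so each thread extracts one and sends it back.
#[derive(Debug)]
struct ServerStats {
    peer_count: usize,
    chain_height: u64,
}

// Stand-in for spawning a virtual server, letting it run, and summarising it.
fn run_server(id: usize) -> ServerStats {
    ServerStats { peer_count: id + 1, chain_height: 10 * id as u64 }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let handles: Vec<_> = (0..5)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                // the server lives and dies on this thread; only the
                // Send-able stats cross back to the caller
                tx.send(run_server(id)).unwrap();
            })
        })
        .collect();
    drop(tx); // drop our copy so the receiver sees end-of-stream
    for h in handles {
        h.join().unwrap();
    }
    let stats: Vec<ServerStats> = rx.into_iter().collect();
    assert_eq!(stats.len(), 5);
    println!("collected {} stat records", stats.len());
}
```

The key point is that nothing owning the reactor handle ever crosses a thread boundary; only the extracted data does.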
Ignotus Peverell
@ignopeverell
@yeastplume don't think your approach is wrong but I wouldn't use futures, I'd just use native threads instead
or we'll need to take the handle to the reactor out of the Server but I'm not sure that's a good idea, ideally the Server would be able to schedule new futures in the event loop by itself.
Ignotus Peverell
@ignopeverell
an alternative could be to introduce a struct that gets relevant info out of the server and that can travel between threads easily
probably the easier and most natural solution
shared refs of the whole server are not something we should try too hard to have
Ignotus Peverell
@ignopeverell
@apoelstra going back to the block time, I could provide opposing arguments (like premature optimizations, one way or another, or randomness in picking parameters right now) and ultimately I'm always open to changes, maybe bring it up to the mailing list?
Andrew Poelstra
@apoelstra
ignopeverell: for sure, i will (or try to get somebody more knowledgeable to do so). agreed about premature optimizations and randomness, i don't enjoy discussing these aspects of the system either :)
Ignotus Peverell
@ignopeverell
@apoelstra sounds good!
Alec Spier
@glitchdigger
what the fuck is up team
you guys are doing GREAT work
Yeastplume
@yeastplume
@ignopeverell Thanks, I had similar thoughts re: a separate struct maintaining or populating its own references to all of the other data besides the reactor::Handle in the grin::Server struct (the ones already wrapped in Arcs), but didn't want to smash into that area of the code without discussing it first. I think also you're right about using native threads instead of futures, I think you'd want to make it easy to interact with running servers to do stuff like add peers on the fly or simulate transactions rather than being tied to a particular closure waiting for a result.
Would it work, do you think, if there were a method on the Server struct that returned a struct with reference-counted refs to the p2p, chain_head, chain_store etc which could be stored before spawning the server threads? Or should it be a struct that just provides specific data for test results, like peer count, head info, etc?
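For concreteness, the Arc-cloning variant of that idea might look like the following minimal sketch. `Server`, `ServerRefs`, and the single `chain_head` field are placeholders for the real grin types, not the actual API:

```rust
use std::sync::{Arc, RwLock};

// Sketch: clone the Arc-wrapped fields out of the server before it is moved
// onto its own thread; the non-Send reactor handle stays behind with it.
struct Server {
    chain_head: Arc<RwLock<u64>>,
    // ... plus the reactor handle and other non-Send state in the real thing
}

// Send-able bundle of shared refs, safe to hold on the test's calling thread.
struct ServerRefs {
    chain_head: Arc<RwLock<u64>>,
}

impl Server {
    fn refs(&self) -> ServerRefs {
        ServerRefs { chain_head: Arc::clone(&self.chain_head) }
    }
}

fn main() {
    let server = Server { chain_head: Arc::new(RwLock::new(0)) };
    let refs = server.refs(); // grab before moving the server to its thread
    *server.chain_head.write().unwrap() = 7; // server advances its chain
    // the change is visible through the cloned refs on the caller's side
    assert_eq!(*refs.chain_head.read().unwrap(), 7);
    println!("head = {}", refs.chain_head.read().unwrap());
}
```

Whether to expose live Arc refs or just copied-out result data is the trade-off in the question above: live refs let tests poke at a running server, while plain data keeps the test API smaller.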
Yeastplume
@yeastplume
Yes, keeping a separate reference struct does work, if you check the changes in the PR here, you can see it in action. There are 2 tests in simulnet.rs that use it and are able to read the results: https://github.com/ignopeverell/grin/pull/66/files/ebf2a773d95c7547404698cf6616e06d9d4d7b86..25f7be36936654eba2719b669079769d59de3ba5
Antonis Anastasiadis
@antanst
Hi. The Grin Github readme mentions a fast block time. Is there a rationale or discussion archive about this somewhere?
I don't mean to bikeshed, just curious.