James Hilliard
@jameshilliard
so miners could send to the pool shares below network target diff I guess to prove they are mining?
Ignotus Peverell
@ignopeverell
right, they could send any cuckoo cycle proof even, only the pool would verify they're below some diff
James Hilliard
@jameshilliard
so the same concept used for bitcoin pooled mining could also be used here? ie pool sets a share diff target below network diff and miners submit all proofs above that share diff target
Ignotus Peverell
@ignopeverell
yes
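The pooled-mining idea being discussed could be sketched roughly like this (a toy model, not grin's or any pool's actual code; targets are modeled as simple numeric thresholds, where a lower hash means more work):

```rust
// Hypothetical sketch of pooled-mining share classification. A share
// target is easier (numerically larger) than the network target, so
// miners can submit frequent proofs of ongoing work to the pool.

#[derive(Debug, PartialEq)]
enum ShareResult {
    Rejected,   // doesn't even meet the pool's share target
    ValidShare, // proves work to the pool, credited toward payout
    ValidBlock, // also meets the network target: a real block
}

fn classify_share(proof_hash: u64, share_target: u64, network_target: u64) -> ShareResult {
    if proof_hash >= share_target {
        ShareResult::Rejected
    } else if proof_hash < network_target {
        ShareResult::ValidBlock
    } else {
        ShareResult::ValidShare
    }
}
```

The point of the scheme is that the pool sees a steady stream of `ValidShare` submissions as evidence of hashpower, while the occasional `ValidBlock` pays everyone out.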
Yeastplume
@yeastplume
@ignopeverell re: wallet receiver, it's an 'interesting challenge' anyhow. I had thought at one point that it could be adapted to become a 'feature', allowing for additional checks that make it harder for someone to accidentally send money to someone who isn't expecting it. Bitcoin and all other cryptocurrencies are vulnerable to simple clipboard attacks, where someone's cut-and-paste of an address is interfered with client-side... or to accidentally sending money into a black hole.
I still need to get my head around the entire process more before I can try to think of any solutions. I work in the smartcard world and I'm wondering if any protocols could be adapted to make the entire process safer, but my understanding of MW is still developing so it might be a while :D
Lucifer1903
@Lucifer1903
@jameshilliard @ignopeverell I'm not sure exactly why cuckoo cycle is difficult to pool mine, but here it says miners can't periodically send "proof of work" shares to the pool. https://monero.stackexchange.com/questions/1682/which-reasons-were-discussed-for-potentially-changing-the-proof-of-work-algorith
Wayne Vaughan
@waynevaughan
Looks like I'm the 100th member to join the room! Hello everyone.
Ignotus Peverell
@ignopeverell
@waynevaughan congrats!
@Lucifer1903 it's hard to second-guess what this user could have meant; I'm guessing they were assuming a setup where the whole proof of work is built on cuckoo cycle, without additional hashing
but to maintain progress freeness, it's not really the best setup
Lucifer1903
@Lucifer1903
I see, thank you @ignopeverell
Ignotus Peverell
@ignopeverell
no problem, actually if someone wanted to write up an explanation of our mining algo, that'd be very nice :)
I'm hoping we gradually document all parts of the system as they get settled, to end up with a reasonably complete design doc
(which I guess we'd have to call a whitepaper, to do like everyone else)
Lucifer1903
@Lucifer1903
@ignopeverell I've never done anything like that before. I'm not a programmer but I would like to help in whatever way I can. I'll start by reading up more on cuckoo cycle and try to write an explanation of it... Hopefully it doesn't turn out too bad hahaha
Jacob Payne
@Latrasis
Hi @ignopeverell, just wanted to follow-up. I'll be starting work for #56 and try writing up the simplest banning scenarios. Let me know if you have any other points to raise. Thanks!
Yeastplume
@yeastplume
I've just spent a good while going over Cuckoo cycles and mining as currently implemented. Just wondering, at the moment the code's just a simple, non-optimised example for testing that looks like it's based on the original Cuckoo 'simple-miner'... There are other more efficient miners being added to Tromp's github over time: one that runs faster at the expense of more memory overhead, a CUDA version, etc...
urza
@urza_cc_twitter
Re cuckoo... I have a question. I wanted to try the cuckoo miner that consumes more memory but is faster (the claimed speedup bounty by xenocat). I guess it's the "mean_miner" in @tromp's repo? The code contains some hardcoded CPU instructions (AVX2) that are only available on newer processors. These parts of the code are wrapped in a directive so it can be compiled with or without them (it took me a while to understand this; there is no proper indentation). When I tried to compile it on my Kaby Lake 1. with AVX2 and 2. without AVX2, the result was that [1.] was 2x faster than [2.] Is this due to inefficient code in [2.] that can be further optimized, or is it not possible? For a memory-hard PoW, it seems to give newer CPUs too big an advantage over older CPUs... I tried to understand more from the miner code, but it is very hard to read: there are no comments, and variables have names like a, b, c, u, v, ...
Yeastplume
@yeastplume
How optimised should the mining be within grin when it gets into the wild? I'm thinking there's a case to be made for already including the fastest known algorithms, as it puts everyone on the exact same footing with respect to initial mining
Ignotus Peverell
@ignopeverell
@Latrasis sounds good, I'll keep an eye on it
@yeastplume I should add a task/issue for it but we'll need a much more robust miner that'll likely integrate John Tromp's implementation
we'll need the main optimized cuckoo routines but also the multiplexing across multiple GPUs and all that good stuff
@urza_cc_twitter I don't think John is done with these optimizations, he's still working on it
Yeastplume
@yeastplume
@ignopeverell I take it you mean putting Rust bindings around what's there as opposed to porting them; probably the best way to do it, because I don't think anyone will do a better job of implementation than Mr. Tromp. I remember trying to mine digibyte once and having to scour the bowels of the internet to find an appropriate miner... it'd be much better from a user perspective if the main client keeps up to date with the latest and greatest mining algos that work for GPU/CPU setups and makes it easy for non-technical users. The nice thing about Cuckoo cycle is that theoretically anyone running a wallet client anywhere should be able to mine effectively without needing a dyson sphere to meet energy requirements.
Yeastplume
@yeastplume
BTW in ignopeverell/grin#23 when you say servers mining in parallel at different rates, do you mean adding some option to the miner to slow it down artificially?
Ignotus Peverell
@ignopeverell
@yeastplume yes, I meant Rust bindings. And in #23 I meant either slowing it down or just yielding to other processes (that would presumably be mining as well) proportionally
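One way to read the "slowing down" option: pause the mining loop after each solution attempt in proportion to a configured duty cycle, so several servers on one machine mine at different effective rates. A rough sketch (the duty-cycle knob is hypothetical, not an actual grin option):

```rust
use std::time::Duration;

// Throttle a mining loop by sleeping after each attempt.
// `duty` in (0, 1]: the fraction of wall time spent mining; the rest
// is slept away (or could instead be yielded to sibling processes).
fn throttle_pause(attempt_ms: u64, duty: f64) -> Duration {
    let duty = duty.clamp(0.01, 1.0);
    // If an attempt took `attempt_ms` and we want `duty` utilisation,
    // pause for attempt_ms * (1 - duty) / duty afterwards.
    let pause_ms = attempt_ms as f64 * (1.0 - duty) / duty;
    Duration::from_millis(pause_ms as u64)
}
```

A miner loop would then call `std::thread::sleep(throttle_pause(...))` between attempts; at `duty = 1.0` the pause is zero and the miner runs flat out.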
sifr-0
@sifr-0
Hello all. I'm interested in the grin project and want to learn more about it. I've read the readme on the github page but still have a few questions. I've read in a few articles that grin will eventually 'peg' or support bitcoin and other coins. Does it mean that bitcoin users will gain the anonymity benefits provided by grin? How will this work? Another question I have is about blocktimes. My initial reading about Cuckoo Cycle hashing left me with the understanding that the blocktimes would be long. I'm concerned that long block times will make solo mining less attractive due to variance and lead to centralization of hashing power in pools. Please direct me to any documentation which I may have missed. I apologize if my questions are redundant or answered elsewhere.
Yeastplume
@yeastplume
@sifr-0 Hi, with respect to Cuckoo cycles, the difficulty can be adjusted like any other algorithm to keep close to a target block time. The main part of creating a POW is detecting cycles of a certain length within a large graph, but then there's an additional difficulty check of the hash of the result afterwards, similar to hashcash found in bitcoin... at any rate there are plenty of ways to keep within a target solve time
Solo mining should actually be more attractive. The main theory behind the Cuckoo cycle algorithm is that it's memory bound (as in, RAM latency is the biggest bottleneck) and therefore doesn't succumb to the power-hungry ASIC arms race found in other algs. Of course you can run a large farm if you want, but the ability to do so won't necessarily depend on your having access to custom hardware or cheap electricity... so the whole thing is more resistant to centralisation.
That's the theory anyhow :D
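The two-stage check described above (find a cycle, then apply a hashcash-style difficulty test on a hash of the proof) could be sketched like this. Both functions are toy stand-ins, not grin's actual pow module:

```rust
// Toy two-stage PoW check: first verify the cuckoo-cycle proof is a
// valid cycle of the required length, then apply a hashcash-style
// difficulty test on a hash of the proof nonces.

fn is_valid_cycle(proof: &[u64], cycle_len: usize) -> bool {
    // Stand-in: real verification walks the graph edges and checks
    // they actually form a cycle of length `cycle_len`.
    proof.len() == cycle_len
}

fn proof_hash(proof: &[u64]) -> u64 {
    // Stand-in for hashing the proof nonces (grin uses a real
    // cryptographic hash here).
    proof.iter()
        .fold(0u64, |acc, &n| acc.wrapping_mul(31).wrapping_add(n))
}

fn meets_difficulty(proof: &[u64], cycle_len: usize, target: u64) -> bool {
    is_valid_cycle(proof, cycle_len) && proof_hash(proof) < target
}
```

The second stage is what lets difficulty be tuned continuously: cycle finding has roughly fixed hardness per graph, while the hash target can be adjusted to hold the block time.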
Andrew Poelstra
@apoelstra
sifr-0: igno's current plan is to have short blocktimes, but i think you're both wrong for a couple of reasons... first, cuckoo cycle attempts take a long enough time that a minute may be insufficient to get progress-freeness; second, long blocktimes in MW give more time for people to merge transactions and more blockspace in which to do so; third, they allow syncing the chain through a high-latency mix network without needing constant traffic
fourth, pool hashpower centralization does not require centralization of block production: there is no reason individual miners can't choose their own blocks and submit to pools proofs that the coinbase destination is correct
(regardless of consensus rules this is something i think is critical for launch, that there is existing pool software that works this way)
Yeastplume
@yeastplume
I could be wrong because I haven't spent much time with it, but from my current reading of the code the difficulty of the cycle finding is quite low, and a new graph is generated after every two-second pause to collect new transactions. After each pause, a new block header is created containing a new random nonce, and a new hash is fed into the graph initialisation. When I did some tests yesterday, the code was actually finding suitable cycles quite easily, but the difficulty hash currently seems to be a larger factor in how long solutions take.
Again, could be wrong or lacking understanding of how it's supposed to work, I know the current code is just a first iteration for testing
Andrew Poelstra
@apoelstra
i believe the current code is not very memory-hard to enable fast cycle finding
in any case, to get accurate numbers we need to use the most efficient mining sw, which i haven't done
and run it on a GPU, etc
Yeastplume
@yeastplume
Yes, just looking now, there's a default consensus value set to intentionally allow for quick cycle-finding, which I assume is just splatting a comically large number of edges into the graph? In any case, yes, this will all have to be properly tested with the Tromp miner and GPU mining variants
Ignotus Peverell
@ignopeverell
@apoelstra John Tromp's latest recommendation, after the recent optimization that trades memory for speed, is to use Cuckoo30, which requires 2-4GB of memory and should give over 1 sol/sec, so plenty progress-free for a 1min block time
even on memory constrained devices, they should be able to get a few solutions per block
Yeastplume
@yeastplume
Hi all, I've run into an issue trying to create a more general testing framework for automatically spawning multiple servers on the same machine. If you look at the first two test functions in simulnet.rs in my forked branch, you can see what I'm getting at: a library to generate many server instances running on different CPU threads, performing the run, and then making the results available in order to perform whatever checks are required. Here: https://github.com/yeastplume/grin/blob/feature/wallet-post/grin/tests/simulnet.rs
Yeastplume
@yeastplume
The problem I've come across is within https://github.com/yeastplume/grin/blob/feature/wallet-post/grin/tests/framework.rs If you look around line 416, you can see that there's a thread spawned from a CPU pool for each 'virtual server', so to speak; if I'm running 5 threads, the run_server method in LocalServerContainer is run 5 times and returns its results into the closure there 5 times. Now, in order to get the state of these servers back to the caller so post-execution tests can actually be run against them, the servers spawned on all threads need to be collected into the original calling thread. However, no matter how I try to wrap them, rust will not allow them across threads, because the grin::Server struct contains a handle to reactor::Core, which must in all cases be owned by its own thread and cannot be shared across threads.
Can anyone have a look at what I'm trying to do and advise if anything is possible here, or is my approach all wrong and anti-Rust?
Ignotus Peverell
@ignopeverell
@yeastplume don't think your approach is wrong but I wouldn't use futures, I'd just use native threads instead
or we'll need to take the handle to the reactor out of the Server, but I'm not sure that's a good idea; ideally the Server would be able to schedule new futures in the event loop by itself.
Ignotus Peverell
@ignopeverell
an alternative could be to introduce a struct that gets relevant info out of the server and that can travel between threads easily
probably the easiest and most natural solution
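That last suggestion might look something like this: each worker thread keeps the non-`Send` server entirely inside itself and returns only a plain-data summary, which crosses threads freely. All names here are hypothetical, not grin's actual API:

```rust
use std::thread;

// Hypothetical plain-data summary of a server's end state. Because it
// holds only owned, thread-safe data (no reactor::Core handle), it is
// automatically `Send` and can be returned from a worker thread.
#[derive(Debug, Clone, PartialEq)]
struct ServerSummary {
    peer_count: usize,
    chain_height: u64,
}

fn run_simulated_servers(n: usize) -> Vec<ServerSummary> {
    let handles: Vec<_> = (0..n)
        .map(|i| {
            thread::spawn(move || {
                // The non-Send server (with its event loop) would live
                // and die entirely inside this thread, e.g.:
                //   let server = Server::start(...); server.run();
                // ...and only a summary of its state escapes.
                ServerSummary { peer_count: i, chain_height: 100 + i as u64 }
            })
        })
        .collect();
    // Joining yields only the Send-able summaries back to the caller,
    // which post-execution tests can then assert against.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```

This sidesteps the `Send` bound on `thread::spawn`'s return type entirely, since the summary struct contains no reactor handle.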