Andrew Poelstra
@apoelstra
yes, it would require everyone to know about spent outputs
Andres G. Aragoneses
@knocte
but then, wallets would need to keep all history or connect to fat "txindex" servers that don't prune, :-m
Andrew Poelstra
@apoelstra
yes, and the benefit of all this infrastructure and inefficiency is that users are able to undermine the privacy of the system
such users shouldn't use MW, hopefully we will be able to sidechain to other chains that support such uses
Yeastplume
@yeastplume
So it's not just me thinking the need for the recipient to provide a blinding factor is going to be a bit of a hurdle... also, if you're running a wallet at a certain address, you're basically telling an attacker exactly where you're storing your private keys.
Yeastplume
@yeastplume
BTW if anyone's listening, transactions are only being written to stdout at the moment, I'm going to implement the http post directly to another wallet over the next day or so and submit another PR
James Hilliard
@jameshilliard
It was mentioned above that cuckoo cycle makes mining pools difficult, what's the reason behind that?
Ignotus Peverell
@ignopeverell
@jameshilliard to be honest I'm not entirely sure what the source of that assertion is
@yeastplume it's going to make things a little harder for sure, but I think we can be a little creative
Ignotus Peverell
@ignopeverell
we could build a well-reviewed, hardened docker image with the wallet receiver. With some loss of privacy, I think it could also need only a pubkey (pre-computing some range proofs and pubkey pairs), and it could automatically forward to another wallet that's not public
James Hilliard
@jameshilliard
@ignopeverell does cuckoo cycle give you diff proofs lower than network diff?
Ignotus Peverell
@ignopeverell
not really, it's just an additional requirement
James Hilliard
@jameshilliard
so miners could send the pool shares below the network target diff, I guess, to prove they are mining?
Ignotus Peverell
@ignopeverell
right, they could send any cuckoo cycle proof even, only the pool would verify they're below some diff
James Hilliard
@jameshilliard
so the same concept used for bitcoin pooled mining could also be used here? i.e. the pool sets a share diff target below network diff and miners submit all proofs above that share diff target
Ignotus Peverell
@ignopeverell
yes
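To make the exchange above concrete, here is a minimal Rust sketch of the pool-side check being described: any valid cuckoo cycle proof counts as a share if its hash meets a pool-chosen target below the network difficulty. The names, the `DefaultHasher` shortcut, and the hash-to-difficulty mapping are all illustrative assumptions, not Grin's actual code.

```rust
// Illustrative pool-share check; all names are hypothetical.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const NETWORK_DIFF: u64 = 1_000_000; // target a full block must meet
const SHARE_DIFF: u64 = 1_000;       // lower pool target for shares

/// Stand-in for full cuckoo cycle verification (graph membership,
/// 42-cycle structure, ...), which the pool would run first.
fn is_valid_cycle(proof: &[u64]) -> bool {
    proof.len() == 42
}

/// Map the hash of a proof to a difficulty (smaller hash = harder).
fn proof_difficulty(proof: &[u64]) -> u64 {
    let mut h = DefaultHasher::new();
    proof.hash(&mut h);
    u64::MAX / h.finish().max(1)
}

/// A share is any valid cycle meeting the pool's (low) target; only
/// an actual block additionally has to meet NETWORK_DIFF.
fn accept_share(proof: &[u64]) -> bool {
    is_valid_cycle(proof) && proof_difficulty(proof) >= SHARE_DIFF
}

fn main() {
    let proof: Vec<u64> = (0..42).collect();
    println!(
        "share ok: {} (share target {}, block target {})",
        accept_share(&proof),
        SHARE_DIFF,
        NETWORK_DIFF
    );
}
```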
Yeastplume
@yeastplume
@ignopeverell re: wallet receiver, it's an 'interesting challenge' anyhow. I had thought at one point that it could be adapted to become a 'feature', allowing for additional checks that make it harder for someone to accidentally send money to someone who's not expecting it. Bitcoin and all other CCs are vulnerable to simple clipboard attacks, where someone's cut-and-paste of an address is interfered with client-side... or to accidentally sending money into a black hole.
I still need to get my head around the entire process more before I can try to think of any solutions. I work in the smartcard world and am wondering if any protocols could be adapted to make the entire process safer, but my understanding of MW is still developing, so it might be a while :D
Lucifer1903
@Lucifer1903
@jameshilliard @ignopeverell I'm not sure exactly why cuckoo cycle is difficult to pool mine, but here it says miners can't periodically send shares as "proof of work" to the pool. https://monero.stackexchange.com/questions/1682/which-reasons-were-discussed-for-potentially-changing-the-proof-of-work-algorith
Wayne Vaughan
@waynevaughan
Looks like I'm the 100th member to join the room! Hello everyone.
Ignotus Peverell
@ignopeverell
@waynevaughan congrats!
@Lucifer1903 it's hard to second-guess what this user could have meant; I'm guessing (s)he was assuming a setup where the whole proof of work is built on cuckoo cycle, without additional hashing
but to maintain progress freeness, it's not really the best setup
Lucifer1903
@Lucifer1903
I see, thank you @ignopeverell
Ignotus Peverell
@ignopeverell
no problem, actually if someone wanted to write up an explanation of our mining algo, that'd be very nice :)
I'm hoping we gradually document all parts of the system as they get settled, to end up with a reasonably complete design doc
(which I guess we'd have to call a whitepaper, to do like everyone else)
Lucifer1903
@Lucifer1903
@ignopeverell I've never done anything like that before. I'm not a programmer but I would like to help in whatever way I can. I'll start by reading up more on cuckoo cycle and try to write an explanation of it... Hopefully it doesn't turn out too bad hahaha
Jacob Payne
@Latrasis
Hi @ignopeverell, just wanted to follow-up. I'll be starting work for #56 and try writing up the simplest banning scenarios. Let me know if you have any other points to raise. Thanks!
Yeastplume
@yeastplume
I've just spent a good while going over Cuckoo cycles and mining as currently implemented. Just wondering, at the moment the code's just a simple, non-optimised example for testing that looks like it's based on the original Cuckoo 'simple-miner'... There are other more efficient miners being added to Tromp's github over time, one that runs faster at the expense of more memory overhead, a cuda version, etc...
urza
@urza_cc_twitter
Re: cuckoo... I have a question. I wanted to try the cuckoo miner that consumes more memory but is faster (the claimed speedup bounty by xenocat). I guess it is the "mean_miner" in @tromp's repo? The code contains some hardcoded CPU instructions (AVX2) that are available only on newer processors. These parts of the code are wrapped in directives so it can be compiled with or without them (it took me a while to understand this, there is no proper indentation). When I tried to compile it on my Kaby Lake (1) with AVX2 and (2) without AVX2, the result was that (1) was 2x faster than (2). Is this due to inefficient code in (2) that can be further optimized? Or is it not possible? For a memory-hard PoW, it seems like it gives newer CPUs too big an advantage over older CPUs... I tried to understand more from the miner code, but it is very hard to read; there are no comments, and variables have names like a, b, c, u, v, ...
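As an aside on the compile guards urza describes: the C miner wraps its AVX2 paths in preprocessor directives, and the analogous pattern in Rust (Grin's language) combines a compile-time `target_feature` attribute with runtime detection, roughly as below. This is only a sketch of the guard pattern on a toy function, not the mean_miner code itself.

```rust
// SIMD fast path compiled only on x86_64, with a portable fallback.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(values: &[u64]) -> u64 {
    use std::arch::x86_64::*;
    let chunks = values.chunks_exact(4);
    let rem = chunks.remainder();
    let mut acc = _mm256_setzero_si256();
    for c in chunks {
        // Load four u64 lanes at once and add them to the accumulator.
        let v = _mm256_loadu_si256(c.as_ptr() as *const __m256i);
        acc = _mm256_add_epi64(acc, v);
    }
    let mut lanes = [0u64; 4];
    _mm256_storeu_si256(lanes.as_mut_ptr() as *mut __m256i, acc);
    lanes.iter().sum::<u64>() + rem.iter().sum::<u64>()
}

/// Scalar fallback, used on CPUs without AVX2 (urza's case 2).
fn sum_scalar(values: &[u64]) -> u64 {
    values.iter().sum()
}

#[cfg(target_arch = "x86_64")]
fn sum(values: &[u64]) -> u64 {
    // Runtime detection: one binary serves both old and new CPUs.
    if is_x86_feature_detected!("avx2") {
        unsafe { sum_avx2(values) } // safe: AVX2 presence just checked
    } else {
        sum_scalar(values)
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn sum(values: &[u64]) -> u64 {
    sum_scalar(values)
}

fn main() {
    let data: Vec<u64> = (0..1000).collect();
    println!("{}", sum(&data));
}
```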
Yeastplume
@yeastplume
How optimised should the mining be within grin when it gets into the wild? I'm thinking there's a case to be made for already including the fastest known algorithms, as it puts everyone on the exact same footing with respect to initial mining
Ignotus Peverell
@ignopeverell
@Latrasis sounds good, I'll keep an eye on it
@yeastplume I should add a task/issue for it but we'll need a much more robust miner that'll likely integrate John Tromp's implementation
we'll need the main optimized cuckoo routines but also the multiplexing across multiple GPUs and all that good stuff
@urza_cc_twitter I don't think John is done with these optimizations, he's still working on it
Yeastplume
@yeastplume
@ignopeverell I take it you mean putting Rust bindings around what's there, as opposed to porting them; probably the best way to do it, because I don't think anyone will do a better job of implementation than Mr. Tromp. I remember trying to mine digibyte once and having to scour the bowels of the internet to find an appropriate miner... it'd be much better from a user perspective if the main client keeps up to date with the latest and greatest mining algos that work for GPU/CPU setups and makes it easy for non-technical users. The nice thing about Cuckoo cycles is that theoretically anyone running a wallet client anywhere should be able to mine effectively without needing a dyson sphere to meet energy requirements.
Yeastplume
@yeastplume
BTW in ignopeverell/grin#23 when you say servers mining in parallel at different rates, do you mean adding some option to the miner to slow it down artificially?
Ignotus Peverell
@ignopeverell
@yeastplume yes, I meant Rust bindings. And in #23 I meant either slowing it down or just yielding to other processes (that would presumably be mining as well) proportionally
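For the record, a minimal sketch of what such bindings could look like, assuming a hypothetical C entry point `cuckoo_solve` (the real solver exposes its own names and signatures, and the C side would need to be compiled and linked via a build script):

```rust
use std::os::raw::{c_int, c_uchar};

extern "C" {
    // Hypothetical C entry point: searches the graph seeded by
    // `header` for a 42-cycle and writes the edge indices into `sol`.
    // Returns 1 if a cycle was found, 0 otherwise.
    fn cuckoo_solve(
        header: *const c_uchar,
        header_len: u32,
        sol: *mut u32, // caller provides space for 42 edge indices
    ) -> c_int;
}

/// Safe wrapper: returns the 42-edge solution if one exists.
pub fn solve(header: &[u8]) -> Option<Vec<u32>> {
    let mut sol = vec![0u32; 42];
    let found = unsafe {
        cuckoo_solve(header.as_ptr(), header.len() as u32, sol.as_mut_ptr())
    };
    if found == 1 { Some(sol) } else { None }
}
```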
sifr-0
@sifr-0
Hello all. I'm interested in the grin project and want to learn more about it. I've read the readme on the github page but still have a few questions. I've read in a few articles that grin will eventually 'peg' or support bitcoin and other coins. Does it mean that bitcoin users will gain the anonymity benefits provided by grin? How will this work? Another question I have is about blocktimes. My initial reading about Cuckoo Cycle hashing left me with the understanding that the blocktimes would be long. I'm concerned that long block times will make solo mining less attractive due to variance and lead to centralization of hashing power in pools. Please direct me to any documentation which I may have missed. I apologize if my questions are redundant or answered elsewhere.
Yeastplume
@yeastplume
@sifr-0 Hi, with respect to Cuckoo cycles, the difficulty can be adjusted like any other algorithm to keep close to a target block time. The main part of creating a POW is detecting cycles of a certain length within a large graph, but then there's an additional difficulty check of the hash of the result afterwards, similar to hashcash found in bitcoin... at any rate there are plenty of ways to keep within a target solve time
Solo mining should actually be more attractive, and the main theory behind the algorithm in Cuckoo cycles is that it's memory bound (as in, RAM latency is the biggest bottleneck), and therefore doesn't succumb to the ASIC power-hungry arms race found in other algs. Of course you can run a large farm if you want, but the ability to do so won't necessarily depend on your having access to custom hardware or cheap electricity... so the whole thing is more resistant to centralisation.
That's the theory anyhow :D
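The "adjusted like any other algorithm" point above can be made concrete with a generic retargeting rule: scale difficulty by the ratio of the target block time to the observed block time. The constants and clamping below are arbitrary illustrations, not Grin's chosen parameters.

```rust
const TARGET_BLOCK_TIME_SECS: u64 = 60; // assumed target, for the sketch

/// Next difficulty from the average solve time over a recent window:
/// blocks arriving too fast raise difficulty, too slow lowers it.
fn retarget(current_diff: u64, avg_block_time_secs: u64) -> u64 {
    let adjusted = current_diff.saturating_mul(TARGET_BLOCK_TIME_SECS)
        / avg_block_time_secs.max(1);
    // Clamp each step to 0.25x..4x, a common anti-oscillation guard.
    adjusted
        .clamp(current_diff / 4, current_diff.saturating_mul(4))
        .max(1)
}

fn main() {
    // Blocks observed twice as fast as target => difficulty doubles.
    assert_eq!(retarget(1_000, 30), 2_000);
    println!("{}", retarget(1_000, 30));
}
```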
Andrew Poelstra
@apoelstra
sifr-0: igno's current plan is to have short blocktimes, but I think you're both wrong for a few reasons... first, cuckoo cycle attempts take a long enough time that a minute may be insufficient to get progress-freeness; second, long blocktimes in MW give more time for people to merge transactions and more blockspace in which to do so; third, they allow syncing the chain through a high-latency mix network without needing constant traffic
fourth, pool hashpower centralization does not require centralization of block production; there is no reason individual miners can't choose their own blocks and submit to pools proofs that the coinbase destination is correct
(regardless of consensus rules, this is something I think is critical for launch: that there is existing pool software that works this way)
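A rough sketch of the share format this implies, where the miner builds its own block and the pool only checks the proof of work and that the coinbase pays it. Every field name here is hypothetical, and a real pool would verify the coinbase via a proper commitment or Merkle path rather than the direct comparison below.

```rust
// Hypothetical share for pools that don't control block templates:
// the miner picks its own transactions, the pool checks the proof
// and the coinbase destination.
struct Share {
    header: Vec<u8>,           // miner-chosen block header
    cycle: Vec<u32>,           // cuckoo cycle proof for that header
    coinbase_commit: [u8; 32], // evidence the coinbase pays the pool
}

fn pool_accepts(share: &Share, pool_commit: &[u8; 32]) -> bool {
    !share.header.is_empty()
        && share.cycle.len() == 42                // stand-in for PoW checks
        && &share.coinbase_commit == pool_commit  // stand-in for real proof
}

fn main() {
    let share = Share {
        header: vec![0u8; 80],
        cycle: (0..42).collect(),
        coinbase_commit: [7u8; 32],
    };
    println!("{}", pool_accepts(&share, &[7u8; 32]));
}
```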
Yeastplume
@yeastplume
I could be wrong because I haven't spent much time with it, but from my current reading of the code the difficulty of the cycle finding is quite low, and a new graph is generated after every two-second pause to collect new transactions. After each pause, a new block header is built containing a new random nonce, and a new hash is fed into the graph initialisation. When I did some tests yesterday, the code was actually finding suitable cycles quite easily, but the difficulty hash currently seems to be a larger factor in how long solutions take.
Again, could be wrong or lacking understanding of how it's supposed to work, I know the current code is just a first iteration for testing
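A sketch of the loop as read above, under the same caveat that this is an interpretation of a test-only first iteration: each attempt builds a header with a fresh nonce, seeds a graph from its hash, searches for a cycle, and only then applies the difficulty check on the hash of the solution. All helpers and constants are stand-ins, and the pause is shortened so the sketch finishes quickly.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::thread::sleep;
use std::time::Duration;

fn hash64<T: Hash>(t: &T) -> u64 {
    let mut h = DefaultHasher::new();
    t.hash(&mut h);
    h.finish()
}

/// Stand-in for the cycle search on the graph seeded by the header
/// hash; cycles are "found" easily here, mirroring the observation
/// that the difficulty hash is the real gate.
fn find_cycle(header_hash: u64) -> Option<Vec<u32>> {
    Some((0u32..42).map(|i| (header_hash as u32).wrapping_add(i)).collect())
}

fn main() {
    let target = u64::MAX / 64; // difficulty check on the solution hash
    let mut nonce: u64 = 0;
    loop {
        // Fresh header per attempt: new random nonce plus whatever
        // transactions were collected during the pause.
        let header_hash = hash64(&("header", nonce));
        if let Some(cycle) = find_cycle(header_hash) {
            if hash64(&cycle) <= target {
                println!("block found after {} attempts", nonce + 1);
                break;
            }
        }
        // The two-second pause to collect transactions (shortened here).
        sleep(Duration::from_millis(20));
        nonce += 1;
    }
}
```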