Andrew Poelstra
@apoelstra
if you do everything via payment protocol it's not so bad, it's basically the same as bitcoin
but cold wallets are a bear
Andres G. Aragoneses
@knocte
seriously, can't replay attacks be prevented? by adding a timestamp and making the transaction valid only for some hours, for example?
Andrew Poelstra
@apoelstra
yes, if you put timeouts on transactions that would block replays. but that's a really nasty solution; there's a reason bitcoin has endeavored to never make a mechanism that lets a valid transaction become invalid (except double-spending it)
it basically makes every transaction non-reorg-safe, like coinbases are
and creates incentives for miner censorship in some cases
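To make the reorg point concrete, here is a minimal Rust sketch of the expiry idea under discussion. The `Tx` struct and its `expiry_height` field are hypothetical; Grin transactions carry no such field, which is exactly the point.

```rust
// Hypothetical expiry field; Grin transactions have no such thing.
struct Tx {
    expiry_height: u64, // tx would only be valid up to this height
}

fn is_valid_at(tx: &Tx, chain_height: u64) -> bool {
    chain_height <= tx.expiry_height
}

fn main() {
    let tx = Tx { expiry_height: 1000 };
    assert!(is_valid_at(&tx, 999)); // confirmed at height 999: fine
    // After a reorg the same tx must be re-included, say at height 1001,
    // where it is no longer valid, orphaning everything built on it.
    assert!(!is_valid_at(&tx, 1001));
}
```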
Andres G. Aragoneses
@knocte
mmm
Andrew Poelstra
@apoelstra
it's also an incomplete solution if your payments aren't always very spaced out
Andres G. Aragoneses
@knocte
how about making it opt-in? for recurring payments you use an address that has timelocks, but for the rest (change addresses), the default stays replayable
(just thinking out loud)
Andrew Poelstra
@apoelstra
that would be impossible; it would still break MW pruning
Andres G. Aragoneses
@knocte
mkay
Andrew Poelstra
@apoelstra
you can't put any conditions on outputs except what you can encode in their EC keys
(or things that stop mattering once they're spent, like them being non-negative)
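For background on why any condition has to live in the keys: a MimbleWimble output is nothing but a Pedersen commitment, so ownership and spend conditions reduce to knowledge of the blinding factor.

```latex
% A MimbleWimble output commits to a value with a blinding factor:
C = r\,G + v\,H
% r: blinding factor (ownership = knowledge of r), v: amount,
% G, H: independent curve generators. A range proof shows
% 0 \le v < 2^{64}, a condition that stops mattering once the
% output is spent and pruned away.
```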
Andres G. Aragoneses
@knocte
how about making it impossible to send money to an address that has received money before? would that also break pruning?
I guess those heuristics should be rather implemented at the wallet level?
Andrew Poelstra
@apoelstra
yes, it would require everyone to know about spent outputs
Andres G. Aragoneses
@knocte
but then wallets would need to keep all history or connect to fat "txindex" servers that don't prune :-m
Andrew Poelstra
@apoelstra
yes, and the benefit of all this infrastructure and inefficiency is that users are able to undermine the privacy of the system
such users shouldn't use MW, hopefully we will be able to sidechain to other chains that support such uses
Yeastplume
@yeastplume
So it's not just me thinking the need for the recipient to provide a blinding factor is going to be a bit of a hurdle... also, if you're running a wallet at a certain address, you're basically telling an attacker exactly where you're storing your private keys.
Yeastplume
@yeastplume
BTW if anyone's listening, transactions are only being written to stdout at the moment; I'm going to implement the HTTP POST directly to another wallet over the next day or so and submit another PR
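A rough sketch of what that handoff could look like, assuming a hypothetical `/receive` endpoint and a JSON-serialized partial transaction; the reqwest crate here is just one way to do the POST, not the wallet's actual API.

```rust
// Sketch only: the endpoint path and payload shape are made up.
use reqwest::blocking::Client;

fn post_partial_tx(url: &str, partial_tx_json: String) -> reqwest::Result<()> {
    Client::new()
        .post(url) // e.g. "http://recipient:13415/receive" (hypothetical)
        .header("Content-Type", "application/json")
        .body(partial_tx_json)
        .send()?
        .error_for_status()?; // treat non-2xx replies as failures
    Ok(())
}
```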
James Hilliard
@jameshilliard
It was mentioned above that cuckoo cycle makes mining pools difficult, what's the reason behind that?
Ignotus Peverell
@ignopeverell
@jameshilliard to be honest I'm not entirely sure what the source of that assertion is
@yeastplume it's going to make things a little harder for sure, but I think we can be a little creative
Ignotus Peverell
@ignopeverell
we could build a well-reviewed, hardened docker image with the wallet receiver. with some loss of privacy I think it could also need only a pubkey (pre-computing some range proof and pubkey pairs), and it could automatically forward to another wallet that's not public
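A sketch of that precomputation idea, with all names hypothetical: the exposed receiver would hold only public data generated offline, handing out one precomputed pair per incoming payment.

```rust
// All names hypothetical; this is the shape of the idea, not wallet code.

/// A (pubkey, range proof) pair generated offline by the non-public wallet.
struct Precomputed {
    pubkey: Vec<u8>,
    range_proof: Vec<u8>,
}

/// The hardened, exposed receiver: holds only public data, no blinding factors.
struct PublicReceiver {
    pool: Vec<Precomputed>, // shipped over from the cold wallet in batches
    forward_to: String,     // the non-public wallet payments get forwarded to
}

impl PublicReceiver {
    /// Consume one precomputed pair per incoming payment; when the pool
    /// runs dry, the cold wallet must top it up out of band.
    fn next_output(&mut self) -> Option<Precomputed> {
        self.pool.pop()
    }
}
```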
James Hilliard
@jameshilliard
@ignopeverell does cuckoo cycle give you diff proofs lower than network diff?
Ignotus Peverell
@ignopeverell
not really, it's just an additional requirement
James Hilliard
@jameshilliard
so miners could send the pool shares below network target diff, I guess, to prove they are mining?
Ignotus Peverell
@ignopeverell
right, they could send any cuckoo cycle proof even, only the pool would verify they're below some diff
James Hilliard
@jameshilliard
so the same concept used for bitcoin pooled mining could also be used here? i.e. the pool sets a share diff target below network diff and miners submit all proofs above that share diff target
Ignotus Peverell
@ignopeverell
yes
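Sketched out, the scheme looks like standard pooled mining with the cuckoo proof swapped in. Here `verify_cuckoo_cycle` and `difficulty_of` are stand-ins for real verification code, not Grin functions.

```rust
/// Outcome of checking one submitted proof on the pool side.
enum ShareResult {
    Invalid,
    Share, // counts toward the miner's payout
    Block, // also meets network difficulty: submit to the network
}

fn accept_share(
    header: &[u8],     // block header the proof commits to
    proof: &[u64],     // the 42-cycle nonces
    share_target: u64, // pool-chosen, easier than network difficulty
    network_target: u64,
) -> ShareResult {
    // The pool checks the proof exactly as the network would...
    if !verify_cuckoo_cycle(header, proof) {
        return ShareResult::Invalid;
    }
    // ...but accepts it against the easier share target.
    let diff = difficulty_of(header, proof);
    if diff >= network_target {
        ShareResult::Block
    } else if diff >= share_target {
        ShareResult::Share
    } else {
        ShareResult::Invalid
    }
}

// Stubs standing in for the real verifier and difficulty calculation.
fn verify_cuckoo_cycle(_header: &[u8], _proof: &[u64]) -> bool { true }
fn difficulty_of(_header: &[u8], _proof: &[u64]) -> u64 { 0 }
```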
Yeastplume
@yeastplume
@ignopeverell re: wallet receiver, it's an 'interesting challenge' anyhow. I had thought at one point that it could be adapted to become a 'feature', allowing for additional checks that make it harder for someone to accidentally send money to someone who's not expecting it. bitcoin and all other CCs are vulnerable to simple clipboard attacks, where someone's cut-and-paste of an address is interfered with client-side... and it's also possible to accidentally send money into a black hole
I still need to get my head around the entire process more before I can try and think of any solutions. I work in the smartcard world and wonder if any protocols could be adapted to make the entire process safer, but my understanding of MW is still developing so it might be a while :D
Lucifer1903
@Lucifer1903
@jameshilliard @ignopeverell I'm not sure exactly why cuckoo cycle is difficult to pool mine, but here it says miners can't periodically send "proof of work" to the pool. https://monero.stackexchange.com/questions/1682/which-reasons-were-discussed-for-potentially-changing-the-proof-of-work-algorith
Wayne Vaughan
@waynevaughan
Looks like I'm the 100th member to join the room! Hello everyone.
Ignotus Peverell
@ignopeverell
@waynevaughan congrats!
@Lucifer1903 it's hard to second guess what this user could have meant, I'm guessing (s)he was assuming a setup where the whole proof of work is built on cuckoo cycle, without additional hashing
but to maintain progress-freeness, it's not really the best setup
Lucifer1903
@Lucifer1903
I see, thank you @ignopeverell
Ignotus Peverell
@ignopeverell
no problem, actually if someone wanted to write up an explanation of our mining algo, that'd be very nice :)
I'm hoping we gradually document all parts of the system as they get settled, to end up with a reasonably complete design doc
(which I guess we'd have to call a whitepaper, to do like everyone else)
Lucifer1903
@Lucifer1903
@ignopeverell I've never done anything like that before. I'm not a programmer but I would like to help in whatever way I can. I'll start by reading up more on cuckoo cycle and try to write an explanation of it... hopefully it doesn't turn out too bad hahaha
Jacob Payne
@Latrasis
Hi @ignopeverell, just wanted to follow-up. I'll be starting work for #56 and try writing up the simplest banning scenarios. Let me know if you have any other points to raise. Thanks!
Yeastplume
@yeastplume
I've just spent a good while going over Cuckoo cycles and mining as currently implemented. Just wondering: at the moment the code's just a simple, non-optimised example for testing that looks like it's based on the original Cuckoo 'simple-miner'... There are other, more efficient miners being added to Tromp's github over time: one that runs faster at the expense of more memory overhead, a CUDA version, etc.
urza
@urza_cc_twitter
Re cuckoo... I have a question. I wanted to try the cuckoo miner that consumes more memory but is faster (the claimed speedup bounty by xenocat). I guess it is the "mean_miner" in @tromp's repo? The code contains some hardcoded CPU instructions (AVX2) that are available only on newer processors. These parts of the code are wrapped in directives so it can be compiled with or without them (it took me a while to understand this; there is no proper indentation). When I tried to compile it on my Kaby Lake (1) with AVX2 and (2) without AVX2, the result was that (1) was 2x faster than (2). Is this due to inefficient code in (2) that can be further optimized? Or is it not possible? For a memory-hard PoW, it seems like it gives newer CPUs too big an advantage over older CPUs... I tried to understand more from the miner code, but it is very hard to read: there are no comments, and variables have names like a, b, c, u, v, ...
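For what it's worth, the compile-time gating described above translates to Rust roughly as follows. The function names are made up, and the AVX2 arm just falls back to the portable body so the sketch compiles anywhere; in a real miner it would hold `std::arch::x86_64` intrinsics instead.

```rust
#[cfg(target_feature = "avx2")]
fn sip_round(v: &mut [u64; 4]) {
    // vectorized implementation would go here
    portable_sip_round(v);
}

#[cfg(not(target_feature = "avx2"))]
fn sip_round(v: &mut [u64; 4]) {
    portable_sip_round(v);
}

// One round of SipHash, the hash cuckoo cycle uses to generate graph edges.
fn portable_sip_round(v: &mut [u64; 4]) {
    v[0] = v[0].wrapping_add(v[1]); v[1] = v[1].rotate_left(13);
    v[1] ^= v[0];                   v[0] = v[0].rotate_left(32);
    v[2] = v[2].wrapping_add(v[3]); v[3] = v[3].rotate_left(16);
    v[3] ^= v[2];
    v[0] = v[0].wrapping_add(v[3]); v[3] = v[3].rotate_left(21);
    v[3] ^= v[0];
    v[2] = v[2].wrapping_add(v[1]); v[1] = v[1].rotate_left(17);
    v[1] ^= v[2];                   v[2] = v[2].rotate_left(32);
}
```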
Yeastplume
@yeastplume
How optimised should the mining be within grin when it gets into the wild? I'm thinking there's a case to be made for including the fastest known algorithms from the start, as it puts everyone on the exact same footing with respect to initial mining
Ignotus Peverell
@ignopeverell
@Latrasis sounds good, I'll keep an eye on it