ledgerwatch
@AlexeyAkhunov
Hi! I have reviewed the ProgPow pull request and also played around with the code a bit. I concluded that the reason it is so slow is simply that it tries to access the dataset around 100x more often than Hashimoto. Not sure that is intentional. So I would send it back to the original authors to think about
See my comments in the PR
ledgerwatch
@AlexeyAkhunov
I have just realised (perhaps it has been discussed before, though) that the current spec of the CREATE2 opcode allows the original creator of a contract to revive it after it has been self-destructed, obviously with the same code. Self-destruct moves (or burns) residual ETH, so it cannot be used for fund recovery. And the storage will also be cleared on such a revival, right? It is not necessarily bad, but a quirk worth knowing about.
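The quirk follows from the CREATE2 address formula in EIP-1014: the address is the last 20 bytes of keccak256(0xff ++ sender ++ salt ++ keccak256(init_code)), so the same creator, salt, and init code always map to the same address. A minimal Python sketch of that determinism (hashlib.sha3_256 is NIST SHA-3, not Ethereum's legacy Keccak-256, so the addresses here only illustrate the determinism and are not real Ethereum addresses; the creator, salt, and init code values are made up):

```python
import hashlib

def create2_address(sender: bytes, salt: bytes, init_code: bytes) -> bytes:
    """Last 20 bytes of hash(0xff ++ sender ++ salt ++ hash(init_code)).

    EIP-1014 specifies Keccak-256; hashlib.sha3_256 (NIST SHA-3) stands in
    here only to illustrate determinism, so these are not real addresses.
    """
    code_hash = hashlib.sha3_256(init_code).digest()
    return hashlib.sha3_256(b"\xff" + sender + salt + code_hash).digest()[-20:]

creator = bytes(20)                  # made-up creator address
salt = (42).to_bytes(32, "big")      # made-up salt
init_code = b"\x60\x00"              # made-up init code

# Same inputs give the same address, which is why a contract can be
# re-created at the same address after it has self-destructed.
a1 = create2_address(creator, salt, init_code)
a2 = create2_address(creator, salt, init_code)
assert a1 == a2
assert create2_address(creator, (43).to_bytes(32, "big"), init_code) != a1
```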
Danny Ryan
@djrtwo
revive the previously existing code or create an entirely new contract at the same address with different bytecode?
ledgerwatch
@AlexeyAkhunov
revive the previously existing code at the same address. Essentially, it would make self-destruct not really a terminal state of a contract. It could be created and destroyed many times over
I do not see a problem with that (so far)
Danny Ryan
@djrtwo
makes auditing code and dependencies a little trickier, because it is not immediately obvious how the code will be, or has been, deployed
and seems to break the semantics of self-destruct
ledgerwatch
@AlexeyAkhunov
Don't think it breaks the semantics. From my understanding, the purpose and semantics of self-destruct are to remove the contract from the state (and therefore shrink the state).
as for auditing, I would say that if I see a SELF-DESTRUCT possibility in a contract, then I know that at that point all bets are off :) so I don't think it poses a problem IMO
Danny Ryan
@djrtwo
which changes your guarantees with SELF-DESTRUCT, which IMO changes the semantics. Regardless, thanks for bringing this up. Definitely worth discussing more
Martin Holst Swende
@holiman
It's been discussed before, back when EIP 86/208 was discussed. A brand new, different contract can take its place. So it can be used to make truly upgradeable contracts, for better or worse. The trick to do so is to have the init code load the runtime code from an oracle
All storage data is lost along the way, and yes, balance too
Therefore, it's fitting that extcodehash becomes active at the same time as this
Cc @AlexeyAkhunov
ledgerwatch
@AlexeyAkhunov
@holiman great, thank you for the explanation
Martin Holst Swende
@holiman
More discussion on this exists on the PR for 86/208. I dug up the link about a week ago; it can be found in the PM discussion for the last core dev call
Paweł Bylica
@chfast

@holiman @karalabe I tried to compare progpow verification results with https://github.com/chfast/ethash.

Single Ethash verification with the best currently available compiler is 0.9 ms.

The progpow implementation was done in this fork: https://github.com/ifdefelse/proghash/commit/dc8e918aa5cf68da0b6dc049634bde596170c14c#diff-551b1d6ef147e694c203c6e9a6dacbf7R521 but it does not seem to work. At least they did not update the unit tests.

The same benchmark shows an absurd 1.3 us.

ledgerwatch
@AlexeyAkhunov
@chfast I can see that the implementation you referenced above does 96 times fewer accesses to dataItems than the Go implementation I reviewed. No wonder it takes 0.9 ms, which is almost exactly 80 ms / 96 :)
it does not have the data item access from the progPow function itself (which contributed 2/3 of all accesses and was not actually present in the "white paper"). Also, the access in progPowLoop has been taken out of the inner loop (which has 32 iterations). Hence the factor 3 * 32 = 96
now it does exactly the same number of accesses as Hashimoto (existing ethash) = 128. And because it uses a simpler keccak function, it is faster than Hashimoto. Again, no wonders here either
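The 96x factor is plain arithmetic on the numbers quoted in the chat (none of these figures are measured independently here):

```python
# Figures quoted in the discussion; not measured here.
progpow_fn_factor = 3   # removing the data item access in progPow itself (~2/3 of accesses)
inner_loop_iters = 32   # the access taken out of the 32-iteration inner loop
factor = progpow_fn_factor * inner_loop_iters
assert factor == 96

go_time_ms = 80.0       # the reviewed Go implementation
predicted_ms = go_time_ms / factor
# predicted_ms is roughly 0.83, close to the 0.9 ms C++ figure quoted above
```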
Paweł Bylica
@chfast
Thanks for checking it out. I didn't have time yet to compare this with the EIP. The progpow verification takes 0.0013 ms here, which is suspiciously fast. The 0.9 ms is for Ethash verification (the Go time is 4 ms).
ledgerwatch
@AlexeyAkhunov
Ah, 0.9 ms is Ethash. I see
ledgerwatch
@AlexeyAkhunov
actually, Progpow makes 64 accesses
Paweł Bylica
@chfast
Do you know where to find a Keccak-f[800] reference implementation with test cases?
ledgerwatch
@AlexeyAkhunov
it uses a simpler keccak too, which may be why it is faster. But I would also be sceptical. Also, since the algorithm has been changed so much (judging by the significant change in the number of accesses), does it still have the claimed ASIC-resistant properties? I did not look in any depth into why it has these properties, because I have a limited understanding of modern ASIC design and manufacturing processes, but I do know that innovation in that space is very fast
@chfast No, I don't know where to find them, sorry
Paweł Bylica
@chfast
There must be a bug in the code, as progpow verification should be slower (https://github.com/ethereum/go-ethereum/pull/17731#issuecomment-424519942), not 1000x faster.
ledgerwatch
@AlexeyAkhunov
The latest piece I am reading is the article "A Domain-Specific Architecture for Deep Neural Networks", which describes how Google went from using GPUs for deep learning to in-house developed TPUs (Tensor Processing Units). Here is one quote I like
Paweł Bylica
@chfast
@lrettig No, SHA-3 is based on Keccak-f[1600]. The algorithm is the same, but the state size is different (Keccak-f[800] uses 32-bit words instead of 64-bit words). I have to find some constants for this variant. I will find them, don't worry.
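The constants in question can also be derived rather than looked up: Keccak-f[800] has 22 rounds (12 + 2·5 for 32-bit lanes), and its round constants come from the same rc(t) LFSR as Keccak-f[1600], just restricted to bit positions 2^j − 1 for j ≤ 5, i.e. truncated to the 32-bit lane width. A sketch following the rc(t) definition in FIPS 202 (my own illustrative code, not taken from any client):

```python
def rc_bit(t: int) -> int:
    # The rc(t) LFSR from FIPS 202, Algorithm 5
    if t % 255 == 0:
        return 1
    r = 1
    for _ in range(t % 255):
        r <<= 1
        if r & 0x100:
            r ^= 0x171
    return r & 1

def round_constants(l: int) -> list:
    # Lane width w = 2**l; Keccak-f[25 * 2**l] has 12 + 2*l rounds.
    nr = 12 + 2 * l
    rcs = []
    for ir in range(nr):
        rc = 0
        for j in range(l + 1):
            # Set bit 2**j - 1 of the lane from the LFSR output.
            rc |= rc_bit(j + 7 * ir) << ((1 << j) - 1)
        rcs.append(rc)
    return rcs

rc1600 = round_constants(6)   # 24 constants, 64-bit lanes (SHA-3)
rc800 = round_constants(5)    # 22 constants, 32-bit lanes
# The f[800] constants are the low 32 bits of the f[1600] ones.
assert all((rc1600[i] & 0xFFFFFFFF) == rc800[i] for i in range(22))
```

The generated values can be cross-checked against the Keccak Code Package test vectors linked later in the discussion.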
ledgerwatch
@AlexeyAkhunov
"Since transistors are not getting much better (reflecting the end of Moore's Law), the peak power per mm^2 of chip area is increasing (due to the end of Dennard scaling), but the power budget per chip is not increasing (due to electro-migration and mechanical and thermal limits), and chip designers have already played the multi-core card (which is limited by Amdahl's Law), architects now widely believe the only path left for major improvements in performance-energy is domain-specific architectures. They do only a few tasks but do them extremely well".
I am also reading another book, called "Parallel Algorithms and Architectures" by F. Thomson Leighton (superficially for now)
and the intuition I got from these two sources is that ASICs (or domain-specific architectures) can beat GPUs not just because they do fewer things, but also because they can have different computing architectures, like two-dimensional arrays or meshes, which allow systolic computations instead of the GPU model, where all the computation flows through the registers
so it is not that simple
Greg Colvin
@gcolvin
Way back in the mid-1970s I was wanting to emulate portions of the nervous system with physical networks of microprocessors. I was told that any parallel process can be simulated efficiently on a serial computer, and went back to feeding punch cards full of FORTRAN to our mainframe...
ledgerwatch
@AlexeyAkhunov
@gcolvin Ha-ha :) at that time, I suspect, Moore's Law was still a thing, so the advice you were given totally made sense. But now ASICs and domain-specific architectures are the only way forward. We already see the rise of meta-programming (programs writing programs); I guess we will also see a kind of meta-programming where programs design hardware, print it, and so on. This should gradually abolish the supremacy of the large chip companies (I suspect a lot of their hardware design tooling is implemented in hardware itself)
Alex Beregszaszi
@axic
Is the specification of extcodehash “final” or are there any expected changes to it?
The reason I am asking is that I'd like to merge the PR adding it to the Solidity assembler, and I'd like confidence in:
  • the opcode number
  • the fact that it will be part of constantinople
  • (and would be nice to be confident in the associated gas cost)
Paweł Bylica
@chfast
@holiman @AlexeyAkhunov Here are test vectors for Keccak-f800 from Keccak Code Package: https://github.com/XKCP/XKCP/blob/master/tests/TestVectors/KeccakF-800-IntermediateValues.txt
Danny Ryan
@djrtwo
What is (if there is one) the common strategy to handle peers that send a huge number of txs with gasprice that is too low?
Mikhail Kalinin
@mkalinin
I am sure it depends on the client's implementation. EthereumJ has a gas price threshold for the tx mem pool; txs with gas prices lower than that threshold are simply skipped, and nothing is done with the peer in that case. I think it's worth reducing a peer's reputation if it sends tons of spam messages.
ledgerwatch
@AlexeyAkhunov
@djrtwo I just checked in geth. There is no penalisation. In the "case" clause at the link, you can see that the only place "p" (the peer) is still associated with the transaction is the call to p.MarkTransaction(). And inside MarkTransaction, it only adds the tx ID to a set of at most 32k transactions that the peer "knows about", so that we do not send them back to the same peer: https://github.com/ethereum/go-ethereum/blob/4b6824e07b1b7c5a2907143b4d122283eadb2474/eth/handler.go#L666
Danny Ryan
@djrtwo
Thank you!
Mikhail Kalinin
@mkalinin
Btw, the known-txs set is implemented in EthereumJ as well, and it works in the same way as @AlexeyAkhunov described; this set is also used to quickly recognise already-known txs if they come in again from the net
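A rough Python sketch of the scheme the last few messages describe: a min-gas-price filter in the pool (where peer identity is already gone, so no peer can be penalised) plus a bounded per-peer known-txs set. The class and parameter names are invented for illustration; only the 32k cap comes from the discussion:

```python
from collections import OrderedDict

class PeerTxTracker:
    """Bounded FIFO set of tx hashes a peer is known to have (32k cap, as in geth)."""
    def __init__(self, max_known: int = 32 * 1024):
        self.max_known = max_known
        self.known = OrderedDict()          # insertion order gives FIFO eviction
    def mark(self, tx_hash: str) -> None:
        if tx_hash in self.known:
            return
        while len(self.known) >= self.max_known:
            self.known.popitem(last=False)  # evict the oldest hash
        self.known[tx_hash] = True
    def knows(self, tx_hash: str) -> bool:
        return tx_hash in self.known

class TxPool:
    """Pool that silently drops underpriced txs. The sender is not penalised,
    because peer identity is no longer available at this point."""
    def __init__(self, min_gas_price: int):
        self.min_gas_price = min_gas_price
        self.pool = {}
    def add(self, tx_hash: str, gas_price: int) -> bool:
        if gas_price < self.min_gas_price or tx_hash in self.pool:
            return False
        self.pool[tx_hash] = gas_price
        return True
```

The known-set doubles as a de-duplication filter for txs that arrive again from the net, as noted above for EthereumJ.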
ledgerwatch
@AlexeyAkhunov
Yes, the determination of whether to ignore a certain transaction happens in the tx pool, by which point the information about which peer sent it is no longer there
ledgerwatch
@AlexeyAkhunov
Same for Parity, I think. In the code at the link, you can see that peer_id is not passed into "import_external_transactions" (this is where txs get into the pool). In "notify.transaction_received()" no penalisation happens either. https://github.com/paritytech/parity-ethereum/blob/cc963d42a06bcae2480cec657fa4b55a829fdaa6/ethcore/src/client/client.rs#L2099
Greg Colvin
@gcolvin
@AlexeyAkhunov Yeah, Moore's law was in the process of becoming an imperative about then. Still, there was a market for Cray's specialized vector machines.
ledgerwatch
@AlexeyAkhunov
I have read that article about TPUs a bit further, and understood that Google only uses TPUs for inference on the neural networks that have already been trained. For the training itself, it is still GPUs :)