<Guilherme Salgado> @Christoph can you update the node you mention in ethereum/trinity#2045 to python3.8?
<Guilherme Salgado> that will give us task names in the logs, which might help debugging
<Piper Merriam> @carver I think I could figure it out by browsing the code, but I know you can probably answer from your brain. Under what conditions (roughly) does backfill happen? I know it gets paused/deprioritized for the urgent node requests; does that apply even if your peer pool has multiple other good peers? And how do predictive requests fit into this?
<carver, Jason Carver> Yeah
<carver, Jason Carver> Roughly, it only does backfill if there are more peers than are needed for urgent requests, and the predictive peers have been dormant for a bit.
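In pseudocode, the gating rule carver describes might look something like this (all names are hypothetical; Trinity's actual scheduler is more involved):

    # Hypothetical sketch of the backfill gating rule described above;
    # illustrative only, not Trinity's actual code.
    def should_backfill(total_peers: int, peers_busy_urgent: int,
                        predictive_idle_secs: float,
                        dormancy_threshold: float = 10.0) -> bool:
        # Backfill only uses peers left over after urgent requests are served...
        spare_peers = total_peers - peers_busy_urgent
        # ...and only once predictive requests have been quiet for a while.
        return spare_peers > 0 and predictive_idle_secs >= dormancy_threshold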
<Piper Merriam> so in theory, piling on more geth nodes would speed up backfill
<Piper Merriam> also, 111%!!
   DEBUG  2020-09-14 16:07:39,822        BeamDownloader  beam-sync: all=1237  urgent=965  crit=100%  pred=162  all/sec=13  urgent/sec=10  urg_reqs=14  pred_reqs=10  timeouts=6  u_pend=0  u_prog=4  p_pend=0  p_prog=61  p_wait=15  p_woke=0  p_found=44  thread_Q=20+3
   DEBUG  2020-09-14 16:07:49,824        BeamDownloader  beam-sync: all=1503  urgent=1172  crit=111%  pred=173  all/sec=14  urgent/sec=11  urg_reqs=218  pred_reqs=14  timeouts=6  u_pend=0  u_prog=1  p_pend=0  p_prog=5  p_wait=7  p_woke=11  p_found=0  thread_Q=20+1
<carver, Jason Carver> Potentially, yeah
<carver, Jason Carver> Lol
<carver, Jason Carver> Yeah so imagine you're reporting time spent waiting on urgent nodes every second, and the most recent request took 1.11s
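Taking carver's reading of the stat, the arithmetic behind a figure above 100% would be (interpretation assumed from the explanation above, not taken from BeamDownloader's source):

    # If crit is "seconds spent waiting on the most recent urgent request,
    # reported against a one-second window", a 1.11 s wait shows as 111%.
    reporting_window = 1.0   # seconds per stats window (assumed)
    last_urgent_wait = 1.11  # seconds the latest urgent request took
    crit = 100 * last_urgent_wait / reporting_window
    print(f"crit={crit:.0f}%")  # -> crit=111%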
<Christoph Burgdorf (cburgdorf)> @Guilherme Salgado 👍 I updated the DappNode package to use Python 3.8 and redeployed
<Piper Merriam> @Christoph regarding evm_extensions and the performance difference between using different uint sizes: I would look into whether we can do something optimized for uint64 for things like tracking the amount of gas used, memory sizes, etc. There are a lot of places in the EVM that never reasonably go above uint64. It might be possible for us to do simple bounds checking and fall back to the pure Python implementation, or something similar. There is an EIP somewhere, authored by axic I think, which changes the EVM rules to bound certain values to the uint64 range; that might be worth looking into as well.
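A minimal sketch of that bounds-check-and-fall-back idea; every name here is hypothetical, not py-evm's actual API:

    UINT64_MAX = 2**64 - 1

    def _consume_gas_u64(gas_remaining: int, amount: int) -> int:
        # Stand-in for a native (e.g. Rust) implementation working on u64.
        return gas_remaining - amount

    def consume_gas(gas_remaining: int, amount: int) -> int:
        if amount > gas_remaining:
            raise ValueError("out of gas")
        if gas_remaining <= UINT64_MAX:
            # Fast path: both values fit in uint64 (amount <= gas_remaining),
            # so fixed-width native arithmetic is safe here.
            return _consume_gas_u64(gas_remaining, amount)
        # Fallback: pure-Python arbitrary-precision arithmetic.
        return gas_remaining - amount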
Indica
@indi-ca
Hi... I ended up on the Python EVM Gitter channel from the link on this page: https://trinity-client.readthedocs.io/en/latest/
Christoph Burgdorf
@cburgdorf
Oh, you are right! The link should go to this channel. Would you be open to sending a PR to fix it?
Indica
@indi-ca
Sure. Why not...
Christoph Burgdorf
@cburgdorf
Thank you, that would be very nice!
The link to the source code appears to be wrong as well
The projects were once combined in a single repository, so that's fallout from separating them
<Christoph Burgdorf (cburgdorf)> @Piper yeah, we can definitely go down that path but the Stack in general doesn't appear to be our strongest target because it is quite literally just pushing and popping things to and from a list. And travelling between Python and native code comes with a bit of overhead. If we then end up not doing much on the rust end, the performance gains are eaten up by the context switching costs. I think that CodeStream is a better target to focus on.
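To illustrate the point: a minimal stack is already just thin wrappers over C-level list operations, leaving little for a native extension to win back per call (a sketch; py-evm's real Stack adds validation on top):

    class Stack:
        # Minimal illustration; py-evm's actual Stack adds depth and
        # type checks around these operations.
        def __init__(self):
            self.values = []

        def push(self, value):
            self.values.append(value)  # already a single C-level list op

        def pop(self):
            # Likewise a single list op; a per-call hop into native code
            # would cost more than the operation itself.
            return self.values.pop()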
<Guilherme Salgado> Are you guys also seeing lots of "Task was destroyed but it is pending" errors in your logs, like the ones from ethereum/trinity#2045 and ethereum/trinity#1895? I'd shelved work on #1895, but I'm getting those all the time now, so I ended up going down that rabbit hole again
<Guilherme Salgado> If I'm the only one seeing them frequently, maybe it's not worth chasing?
<carver, Jason Carver> Yeah, I've been running with PYTHONWARNINGS=ignore for a bit, but I will shut it off on the next run to see what's up. The shutdown seems to have been dirty recently, generally, but I haven't noticed "Task was destroyed but it is pending" in particular
Indica
@indi-ca

I followed the cookbook:
https://trinity-client.readthedocs.io/en/latest/cookbook.html

and I want to start by just syncing the latest blocks. I tried both syncing from an Etherscan checkpoint and manually entering block details.

However, the process dies with:

eth.exceptions.HeaderNotFound: No canonical header for block number #10770396

Do I need to download headers first?

<Christoph Burgdorf (cburgdorf)> Can you share the CLI options that you used to start Trinity? I used this one a few times, but it's not super recent: --sync-from-checkpoint=eth://block/byhash/0x8cecad06e59c6a9705e24fa522196b8a39e8e7cb74bd69629527aaf024f90152?score=17,034,982,899,386,366,812,359
Indica
@indi-ca

I tried this,

trinity --sync-from-checkpoint eth://block/byetherscan/latest

and

trinity --sync-from-checkpoint eth://block/byhash/0xc3ed3755bbd4322a7949687558bf027ff6467a048d58830018b269908d937fba?score=17,431,128,075,868,796,124,802

Earlier today, both failed with the above error after a few minutes. I tried both now, and it looks like it is fine and the Beam protocol is running. Odd.

Indica
@indi-ca
The latest Etherscan checkpoint seems to be about a year old (age=-1y11m4w).
<Christoph Burgdorf (cburgdorf)> Yep, that looks good. For eth://block/byetherscan/latest to work, you need a token for their API. I think the error you ran into is this one:
ethereum/trinity#1998

This also explains why it is working now. Basically, we need to reject checkpoints that are too close to the tip, but right now we just crash with this not-very-user-friendly error message.
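The missing validation might look something like this; the threshold and names are assumptions for illustration, not what ethereum/trinity#1998 will actually do:

    MIN_CHECKPOINT_DEPTH = 64  # hypothetical minimum distance from the tip

    def validate_checkpoint(checkpoint_block: int, tip_block: int) -> None:
        # Reject checkpoints too close to the tip up front, instead of
        # crashing later with HeaderNotFound.
        if tip_block - checkpoint_block < MIN_CHECKPOINT_DEPTH:
            raise ValueError(
                f"Checkpoint #{checkpoint_block} is too close to the tip "
                f"(#{tip_block}); choose an older block."
            )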

<Christoph Burgdorf (cburgdorf)> Btw, this could be a great issue if you are interested in contributing further 😉
Indica
@indi-ca
I think I was syncing using the Beam protocol (the default), but I noticed I was only getting one transaction per second, perhaps from the upstream MuirGlacierVM. I restarted with --sync-mode full, but in one hour it found no peers.
My motivation is to crack open the latest blocks and see what's happening.
Still figuring out all the options. Trinity would be ideal if I can actually find some peers. I don't know what the incentive mechanisms are for peers to serve syncing nodes. Perhaps I need to run Geth and do a full sync all the way from the beginning, but I'd need to allocate at least 500 GB for that.
<Jochem Brouwer> Hey @carver I had a random thought today about semi-speeding up Beam Sync. Would it not be possible to run transactions in parallel and just request the node data on the fly? If transaction A and transaction B (where A is executed before B) both access the same trie slot (like contract storage), then B has to be executed again, since the storage slot it read has changed; however, in that case it is more likely that we can read the node data from the cache instead of requesting it (the data had to be fetched on the first run, but it is already cached for the re-run). Of course you can theoretically craft blocks that slow this down, but in practice this might help speed up "executing" beam-syncing blocks, so you would need to pivot less often. In the best case you can run all of a block's transactions in parallel because none of them hit a touched trie key; in the worst case they always hit a touched trie key and thus need to be re-run after execution.
<Jochem Brouwer> Hope I'm not too vague here 😅 It would be rather complex to implement, but I think in practice it would work rather well 😄
<Jochem Brouwer> (meaning: blocks would be executed faster during beam sync, so it is more likely to keep up to date with the current chain head)
<carver, Jason Carver> Yup! Internally, we refer to this approach as speculative execution. It is implemented in master. In practice, many transactions do not read data that was changed by an earlier transaction in the block, so it does provide some benefit. It is worth grouping together transactions by sender, since they otherwise fail immediately when they check the nonce.
It is not always clear whether spec exec caused the transaction to execute incorrectly. So we still import the full block with every transaction in order. But the slowest part of Beam Sync is data collection, so the re-run of the execution is usually quite fast if spec exec has correctly preloaded the data.
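A minimal sketch of that flow as described above, with hypothetical names throughout (the real implementation in Trinity's master differs):

    from concurrent.futures import ThreadPoolExecutor
    from itertools import groupby

    def speculative_prefetch(transactions, run_from_block_start):
        # Group txs by sender: a sender's later txs fail the nonce check
        # immediately if speculated out of order, so run them together.
        ordered = sorted(transactions, key=lambda tx: tx.sender)
        groups = [list(txs) for _, txs in groupby(ordered, key=lambda tx: tx.sender)]
        with ThreadPoolExecutor() as pool:
            # Results are thrown away; the point is the node data the
            # speculative runs pull into the local cache.
            list(pool.map(run_from_block_start, groups))

    def import_block(block, vm):
        speculative_prefetch(block.transactions, vm.run_and_discard)
        # The authoritative import still executes every tx in order; with
        # the cache warmed, the sequential re-run is usually fast.
        return vm.apply_block(block)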
<Jochem Brouwer> Ah very cool, so it is already implemented! Do I understand you correctly that this approach can still lead to wrong executions? So there are bugs?
<carver, Jason Carver> It's only when transactions depend on each other, like your A/B example. There's no way I know of to get around that except sequential execution
<Jochem Brouwer> Yeah exactly. It would be rather interesting here if we still had pre-Byzantium tx receipts
<Jochem Brouwer> Then you could use the last state root of the previous transaction
<Jochem Brouwer> I think you can then parallellize execution 🤔
<Jochem Brouwer> Why was this thrown out again and replaced by the tx status? The tx status is very handy, of course. But if you knew which state root the state should have before any tx executes, that would be super handy
<carver, Jason Carver> It's expensive to re-calculate the full merkle trie after every transaction
<carver, Jason Carver> Even when it was calculated, I'm not sure if any clients served the mid-block state over GetNodeData
<Jochem Brouwer> I am assuming that GetNodeData would just look it up in LevelDB? That should not lead to problems. But yeah, I guess it might be too expensive to calculate the root
<carver, Jason Carver> I expect that most pre-byzantium clients don't write the intermediate tries to disk. They calculate them on the fly, but only write the block-end tries to disk. That's what py-evm/trinity does, IIRC.
<Jochem Brouwer> Ah yeah that makes sense