    Attila Gazso
    @agazso
    Paul Le Cam
    @PaulLeCam
    Sure, thanks!
    Attila Gazso
    @agazso
    @PaulLeCam when are you planning to publish the next version? I'm eager to use erebos for raw feeds ;)
    Paul Le Cam
    @PaulLeCam
    I don’t know yet, I wanted to learn a bit more about raw feeds and how they could be used, for example if the timeline implementation could benefit from it
    If I understand correctly, that allows the client to completely drive the feed behavior, no? Assuming the writer keeps track of the epoch, it could write a feed without needing to query the metadata from Swarm?
    Attila Gazso
    @agazso
    exactly
    for example we used it to write a chat application, where the updates were written with an incremental epoch
    so if you know the last update you just have to poll the next epoch, which is just one chunk retrieval instead of somewhere between 1 and 31 with the normal feed api
    also when updating you can simply write to the next epoch and you don't need to query the metadata first
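The incremental-epoch convention described above can be sketched like this (hypothetical helper names and an in-memory store, not the actual Erebos or Swarm API): each update is written at time = previous + 1 with a fixed level, so the reader can fetch the next update with a single deterministic chunk retrieval and the writer never needs to query metadata first.

```typescript
type Epoch = { time: number; level: number };

const FIXED_LEVEL = 25; // assumed constant level for this convention

// In-memory stand-in for raw-feed chunk storage, keyed by epoch.
const store = new Map<string, string>();
const epochKey = (e: Epoch) => `${e.time}/${e.level}`;

function writeUpdate(lastEpoch: Epoch | null, data: string): Epoch {
  // No metadata query needed: just advance the time component.
  const next: Epoch = {
    time: lastEpoch ? lastEpoch.time + 1 : 0,
    level: FIXED_LEVEL,
  };
  store.set(epochKey(next), data);
  return next;
}

function pollNext(lastKnown: Epoch | null): string | undefined {
  // One chunk retrieval at the next epoch; undefined means no new update yet.
  const next: Epoch = {
    time: lastKnown ? lastKnown.time + 1 : 0,
    level: FIXED_LEVEL,
  };
  return store.get(epochKey(next));
}
```

The key property is that both sides agree on the epoch sequence up front, so neither reading nor writing involves the normal feed lookup algorithm.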
    Paul Le Cam
    @PaulLeCam
    Can the client provide any epoch? I’m not familiar with how it works internally for Swarm, with the time and level, can we make them completely deterministic (ex just use a time increment as an index)?
    Attila Gazso
    @agazso
    yes, the client can provide any time and level
    you can regard them as a 64-bit key
    btw you could already provide any value when updating, even before raw feeds
    but it messed up the lookup algorithm, that's why we needed bzz-feed-raw to avoid that when looking up
    Paul Le Cam
    @PaulLeCam
    sounds great, I think that would allow us to provide a new API similar to Timeline but as an indexed list, no? The first element would be time = 0 and then incrementing, and to know the last epoch, the client could still fetch the feed metadata, if I understand it correctly?
    Attila Gazso
    @agazso
    in that case for finding the latest, you would have to write your own lookup algorithm that runs on the client side, because you would violate assumptions that the current feed API has
    for example it uses current timestamps when reading and writing to avoid many lookups
    but that's definitely possible
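A client-side lookup for the latest index, as suggested above, could look like this (a sketch with a hypothetical `exists` probe, not part of any feed API): since updates sit at consecutive indexes, the reader can gallop forward to bracket the frontier, then binary-search it, costing O(log n) chunk probes instead of walking every update.

```typescript
// exists(i) stands in for "is there a chunk at index i" (one retrieval each).
function findLatest(exists: (i: number) => boolean): number {
  if (!exists(0)) return -1; // nothing written yet
  // Gallop: double the probe index until we fall past the last write.
  let hi = 1;
  while (exists(hi)) hi *= 2;
  // Binary search in (hi/2, hi) for the last existing index.
  let lo = Math.floor(hi / 2);
  while (lo + 1 < hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (exists(mid)) lo = mid;
    else hi = mid;
  }
  return lo;
}
```

As noted in the conversation, this deliberately ignores the current-timestamp assumptions of the built-in feed lookup, which is exactly why it has to run on the client.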
    Paul Le Cam
    @PaulLeCam
    That’s one problem I was thinking about to make a pseudo-DB backed by feeds/timelines: syncing could be very slow, especially the first time, because the client would need to fetch the entire history to get the correct state. But if the hashes can be known upfront, more data can be fetched in parallel, because the client knows exactly how many updates there are and what their hashes are
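The parallel-sync advantage can be illustrated with a small sketch (hypothetical `fetchChunk` helper, not the Erebos API): when the index scheme is deterministic, every chunk address is known up front, so the whole history can be requested concurrently instead of walking a hash-linked list one hop at a time.

```typescript
async function syncAll(
  fetchChunk: (index: number) => Promise<string>,
  count: number,
): Promise<string[]> {
  // All requests are issued at once; no chunk depends on the previous one,
  // unlike a linked Timeline where each hop reveals the next address.
  return Promise.all(
    Array.from({ length: count }, (_, i) => fetchChunk(i + 1)),
  );
}
```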
    Attila Gazso
    @agazso
    if you are thinking of writing a DB then you can make things faster by using indexes: for example you could invent a convention where you store indexes under different topics
    for example you can store the last index in a different topic with a regular feed
    and it does not have to be totally consistent, just for speeding up the lookup
    you can also store snapshots and whatever in different indexes
    the only thing that's important to keep in mind is that Swarm doesn't even try to enforce consistency, so if you have multiple writers and they write to different nodes then Swarm won't do conflict resolution
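The index-topic convention suggested above could be sketched like this (hypothetical names and an in-memory stand-in for per-topic feeds): alongside the data feed, a regular feed under a sibling topic stores the last written index as a hint. The hint need not be perfectly consistent with the data, since it only narrows where the client starts looking; a stale hint just means walking forward a few extra indexes.

```typescript
type Topic = string;

// Stand-in for per-topic feed storage on Swarm.
const feeds = new Map<string, string>();

function writeData(topic: Topic, index: number, data: string): void {
  feeds.set(`${topic}/${index}`, data);
  // Best-effort hint under a separate topic; it may lag behind the data feed.
  feeds.set(`${topic}:last-index`, String(index));
}

function readLatest(topic: Topic): string | undefined {
  const hint = feeds.get(`${topic}:last-index`);
  if (hint === undefined) return undefined;
  // Start from the hint and walk forward in case the hint is stale.
  let i = Number(hint);
  while (feeds.has(`${topic}/${i + 1}`)) i += 1;
  return feeds.get(`${topic}/${i}`);
}
```

As the caveat above says, nothing here resolves conflicts between multiple writers; the convention only speeds up lookup for a single logical writer.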
    Paul Le Cam
    @PaulLeCam
    yes, I was mostly thinking about the problems of syncing data between multiple writers (ex Alice writes to her feed and reads Bob’s, and Bob does the same) where they need to be in sync on the reading side to know what diff to write in their own update
    Attila Gazso
    @agazso
    if you have immutable data and have pointers in them, the worst case scenario can be that the data points to an older version
    but if you know how the data is arranged, you can even use that as a starting point and try to walk to the latest version
    Paul Le Cam
    @PaulLeCam
    yes exactly, so far the Timeline only allows us to go back in history, but it seems like raw feeds would allow us to go forwards too, so that offers new possibilities
    Attila Gazso
    @agazso
    yes
    also I'm trying to figure out how you can add some kind of long-polling to Swarm so that the client does not have to poll for the latest update in real-time applications
    ethersphere/swarm#1984
    Paul Le Cam
    @PaulLeCam
    That would be nice indeed, way better UX!
    Attila Gazso
    @agazso
    @PaulLeCam I left some comments on your PR
    MainframeHQ/erebos#131
    I hope it helps, let me know if you have questions
    Paul Le Cam
    @PaulLeCam
    Thanks! I replied in the PR thread, but we can discuss it further here if you want?
    Attila Gazso
    @agazso
    I see your message now, I just replied in the PR :)
    Paul Le Cam
    @PaulLeCam
    Thanks!
    Paul Le Cam
    @PaulLeCam
    @agazso I’m thinking about the feed id with time = 0: at this point there’s no way to know if the writer has written to the first chunk (time = 0) or if it’s just the default value. One way would be to set the time value to null to differentiate it from 0, but that would add a bunch of null-checks in the other pieces of logic. Or we could start the iteration from 1 instead of 0, so we can treat a feed ID with time = 0 as having never been written to. What do you think please?
    Paul Le Cam
    @PaulLeCam
    The nice side-effect of starting iteration from 1 is that reader.load(writer.length) would load the latest chunk, no need to offset it
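The 1-based indexing idea can be sketched as follows (a hypothetical class, not the Erebos implementation): the first write lands at time = 1, so time = 0 is reserved to mean "never written", and `load(length)` returns the latest chunk without any offset arithmetic.

```typescript
class ListWriter {
  private chunks: string[] = [];

  get length(): number {
    return this.chunks.length;
  }

  push(data: string): number {
    this.chunks.push(data);
    return this.chunks.length; // time of this write: 1, 2, 3, ...
  }

  load(time: number): string | undefined {
    // time = 0 is reserved: nothing is ever written there.
    return time >= 1 ? this.chunks[time - 1] : undefined;
  }
}
```

This trades an integer special case (0 = never written) for the absence of null checks, which is exactly the trade-off debated in the messages below.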
    Attila Gazso
    @agazso
    @PaulLeCam Can you please elaborate which case you are thinking?
    I see in the code that you changed the iterators to start from 1, but I don't yet see the problem with starting from 0
    Paul Le Cam
    @PaulLeCam
    Let’s say we create a new writer with only the user param provided, the time would default to 0. Then we export the writer.getID() to keep track of it in the app for future use. How can the app know if the chunk at time = 0 has been written or not?
    Might be more a problem on the reader side actually, if the reader is provided an ID with time = 0, should it expect the chunk at that time to have been written or not?
    Attila Gazso
    @agazso
    as far as I see the writer increases the time after pushing, so if it hasn't written it's 0 otherwise it's 1
    so I think it's more of a semantic problem, because what getID() returns is actually the ID of the next write, but the name does not reflect that
    and if I understand correctly your question is if we had a function that returns the last written id, what should it return when there were no writes?
    Paul Le Cam
    @PaulLeCam
    With the change I made getID() returns the ID of the latest write, but with the assumption that if time = 0 there has been no write.
    If we were to use time = 0 for the first chunk, then yes, the problem is how we signal that no chunk has been written. We could set time = null, but then that breaks the current interface and it would require additional logic in the code. Otherwise it could be time = -1, but I don’t think that’s valid with Swarm, so if the app just tried to make a request using the provided ID, I don’t think it would work as expected.
    Do you see any downside with starting iteration from 1 please? I think that simplifies the protocol/logic a lot.
    Attila Gazso
    @agazso
    So then the special case handling would be only in load when passing in 0, right?
    And that would also mean that the 0th index in the feeds would be never written?
    Paul Le Cam
    @PaulLeCam
    yes exactly, load(0) shouldn’t return anything because the writer should never write with time = 0
    Attila Gazso
    @agazso
    I'm not a huge fan of using integer special cases, but if you feel like this simplifies things, it's fine by me :)
    Paul Le Cam
    @PaulLeCam
    Yeah me neither, it was really more about simplifying the logic and avoiding null checks, but it makes things less obvious. I’ll try with defaulting to -1 and see how it affects the code and tests.
    Paul Le Cam
    @PaulLeCam
    Erebos v0.12 has been released, now supporting raw Swarm feeds! https://github.com/MainframeHQ/erebos/releases/tag/v0.12.0
    Paul Le Cam
    @PaulLeCam
    Erebos v0.13 has been released, it includes a bunch of changes for bzz-related packages, and a new package to synchronize JSON documents from various sources: https://github.com/MainframeHQ/erebos/releases/tag/v0.13.0
    Rogelio Morrell
    @molekilla
    hi, is it possible to sign messages with ethers.js + MetaMask and then send the signature with postSignedChunked?
    Attila Gazso
    @agazso
    @molekilla I think it's possible, but I have never done it. There is also the signBytes function that you can use for this: https://erebos.js.org/docs/bzz-feed#signbytesfunc