    Nadeem Bhati
    @nadeemb53
    And it exists
    Paul Le Cam
    @PaulLeCam
    You need to provide a file path, not a folder path, if you are downloading a single file (which is the case here with the tar).
    If you replace your client.bzz.downloadTo(req.params.hash, localpath, {headers: {accept: 'application/x-tar'}}) with client.bzz.downloadFileTo(req.params.hash, localpath + '/' + req.params.hash + '.tar', {headers: {accept: 'application/x-tar'}}), for example, I think you should get the behaviour you're looking for
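    A minimal sketch of the suggested fix in context, assuming an Express-style route handler and a local directory path localpath (both taken from the snippet above; the route itself is illustrative):

        // Hypothetical Express route: download a tar from Swarm to a local file.
        // downloadFileTo() needs a full file path, not a directory path.
        app.get('/download/:hash', async (req, res) => {
          const target = localpath + '/' + req.params.hash + '.tar'
          await client.bzz.downloadFileTo(req.params.hash, target, {
            headers: { accept: 'application/x-tar' },
          })
          res.send('Saved to ' + target)
        })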
    Nadeem Bhati
    @nadeemb53
    Okay, great. Let me try this out
    Yes, it works!
    Paul Le Cam
    @PaulLeCam
    Nice!
    Nadeem Bhati
    @nadeemb53
    Thank you. I guess I now understand the file handling here.
    Paul Le Cam
    @PaulLeCam
    You’re welcome! Yes, it’s a bit tricky with the mix of headers and downloading as a file; I think I’ll add a downloadTarTo() method to cover this use case in a simpler way
    Nadeem Bhati
    @nadeemb53
    Yes, that'll make things easier.
    Attila Gazso
    @agazso
    Are encrypted Timelines supported with Erebos?
    As far as I understand, it's possible to use Swarm's encryption for storing the Chapters; in that case Erebos is supposed to handle the references that contain the encryption keys.
    However, this is not recommended with gateways, because then you hand over the encryption key to the gateway, right?
    Paul Le Cam
    @PaulLeCam
    In theory it should be possible to use Swarm’s built-in encryption by setting the encrypt option (https://erebos.js.org/docs/api-bzz#uploadoptions) when adding chapters, but I’ve never used this as it’s unsafe when using a gateway
    The alternative is to handle the encryption client-side, which can be implemented relatively simply by providing the encode and/or decode functions when creating a Timeline instance: https://erebos.js.org/docs/timeline-api#timelineconfig
    This is what I’ve used in Aegle, see https://github.com/MainframeHQ/aegle/blob/master/packages/sync/src/index.ts#L167-L182 and https://github.com/MainframeHQ/aegle/blob/master/packages/sync/src/index.ts#L262-L278
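    A minimal sketch of the client-side approach, assuming a shared 32-byte symmetric key; encryptJSON/decryptJSON are illustrative helpers built on Node's crypto module, not part of Erebos, and the decode callback is assumed to receive a fetch-style response:

        const crypto = require('crypto')
        const { Timeline } = require('@erebos/timeline')

        function encryptJSON(key, data) {
          // AES-256-GCM with a random IV; the IV and auth tag travel with the payload
          const iv = crypto.randomBytes(12)
          const cipher = crypto.createCipheriv('aes-256-gcm', key, iv)
          const encrypted = Buffer.concat([cipher.update(JSON.stringify(data), 'utf8'), cipher.final()])
          return JSON.stringify({
            iv: iv.toString('hex'),
            tag: cipher.getAuthTag().toString('hex'),
            data: encrypted.toString('hex'),
          })
        }

        function decryptJSON(key, payload) {
          const { iv, tag, data } = JSON.parse(payload)
          const decipher = crypto.createDecipheriv('aes-256-gcm', key, Buffer.from(iv, 'hex'))
          decipher.setAuthTag(Buffer.from(tag, 'hex'))
          const decrypted = Buffer.concat([decipher.update(Buffer.from(data, 'hex')), decipher.final()])
          return JSON.parse(decrypted.toString('utf8'))
        }

        const timeline = new Timeline({
          bzz, // a configured Bzz instance
          feed: { user, name: 'encrypted-timeline' }, // illustrative feed params
          encode: async (chapter) => encryptJSON(key, chapter),
          decode: async (res) => decryptJSON(key, await res.text()),
        })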
    Attila Gazso
    @agazso
    thanks!
    thecryptofruit
    @thecryptofruit
    Hi all ... just curious, is anybody from the Erebos team coming to Berlin Blockchain Week next week?
    Paul Le Cam
    @PaulLeCam
    Hi, no we’re not attending this one, but some of us will be in Osaka for Devcon 5
    thecryptofruit
    @thecryptofruit
    The public API uploadFile() seems to work without providing contentType; however, when calling the node-specific uploadFileFrom() without it, I get Bad Request. Is that expected? Took me a while :)
    Paul Le Cam
    @PaulLeCam
    I think it’s a limitation from Swarm, as there’s no specific logic in Erebos related to this. If you provide the content-length header it works; I’ll ask why it’s required for bzz-raw: but not bzz:
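    A hedged sketch of the two workarounds implied here, assuming the BzzNode class from @erebos/api-bzz-node and a local node URL (the header requirement itself is on the Swarm side, not in Erebos):

        const fs = require('fs')
        const { BzzNode } = require('@erebos/api-bzz-node')

        const bzz = new BzzNode({ url: 'http://localhost:8500' })

        async function upload() {
          // Option 1: provide a contentType so the upload goes through bzz:
          const withType = await bzz.uploadFileFrom('./data.json', {
            contentType: 'application/json',
          })

          // Option 2: for raw (bzz-raw:) uploads, pass an explicit content-length header
          const { size } = await fs.promises.stat('./data.json')
          const raw = await bzz.uploadFileFrom('./data.json', {
            headers: { 'content-length': String(size) },
          })

          return { withType, raw }
        }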
    Attila Gazso
    @agazso
    when using client-side encryption, are the references (that are stored in the feeds) encrypted?
    Paul Le Cam
    @PaulLeCam
    @agazso sorry, not sure what you mean? Erebos itself doesn’t handle client-side encryption; at most it allows you to provide custom encoding and decoding functions (for the Timeline) that the consumer can use to implement the necessary logic
    Attila Gazso
    @agazso
    ok, so by providing encoding to the Timeline I can encrypt either the content in the Chapter or the Chapter itself?
    Paul Le Cam
    @PaulLeCam
    Yes, the encoding function gets the full chapter JSON, so you can encrypt only the content and leave the metadata in clear text, or you can encrypt the entire chapter, as long as you have a matching decoding function able to handle it and provide the full chapter back to the Timeline instance
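    To make the content-only option concrete, here is a sketch reusing the illustrative encryptJSON/decryptJSON helpers from the earlier sketch; only the content field is encrypted, so the chapter metadata stays readable:

        const timeline = new Timeline({
          bzz,
          feed: { user, name: 'timeline' },
          // Encrypt only the content, keep the rest of the chapter in clear text
          encode: async (chapter) =>
            JSON.stringify({ ...chapter, content: encryptJSON(key, chapter.content) }),
          // Decode must hand the full chapter back, with the content decrypted
          decode: async (res) => {
            const chapter = JSON.parse(await res.text())
            return { ...chapter, content: decryptJSON(key, chapter.content) }
          },
        })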
    Attila Gazso
    @agazso
    the Timeline does no further encoding after this; it just uploads the (possibly encrypted) chapter and stores the hash in the feed, right?
    Paul Le Cam
    @PaulLeCam
    yes exactly
    Attila Gazso
    @agazso
    thanks!
    Paul Le Cam
    @PaulLeCam
    you’re welcome!
    Attila Gazso
    @agazso
    btw I saw that last week you merged the React Native specific changes, which means we can start integrating Erebos into the Felfele app
    We are planning to do it after our next release, but in the meantime I am trying to implement things in a way that will be compatible with Timelines, for example; that's why I'm asking these questions
    Paul Le Cam
    @PaulLeCam
    That’s great! Yes don’t hesitate to ask here!
    Attila Gazso
    @agazso
    I have some questions regarding the type definitions of the Timeline. There is a hexValue type, and uploading to BZZ returns this type. However, the type of previous, id and references is string, not hexValue. What's the rationale behind this?
    Also, what is author supposed to be? I'm storing the user of the feed in it; is that correct?
    Paul Le Cam
    @PaulLeCam
    Hey sorry I've been off since last week. The Timeline APIs use string because that's how it got defined in the spec, but I agree the implementation could use hexValue instead.
    Paul Le Cam
    @PaulLeCam
    The author could be the feed user, yes, but I imagine it could also be whoever authored the content; for example, Bob could post a Chapter on behalf of Alice, I suppose. @jpeletier might have other ideas for additional use cases related to EtherGit.
    Pascal Belouin
    @pbelouin
    Hello all, and thanks to the Erebos team for this great library! I have a particular use case and I'm not sure I am using the library correctly. I want to allow users (using the browser bzz library) to upload JSON files (or JSON strings coming from a text field) to Swarm, and make these retrievable as a list. So far I've been experimenting with first creating a feed manifest, then uploading a first set of JSON to the feed once it has been created, using the feed examples available in the documentation. However, subsequent uploads of additional JSON strings (with a different key) don't seem to work, as they don't seem to be addressable either. I assume this is because of my poor understanding of the various data structures one can use with the library / Swarm, so I was wondering what would be the best way to go about it. From my understanding, one can upload several files (for a website, for example) onto a feed, and visiting the feed URL would then display the website and its dependencies; this is what I have been trying to replicate with not much success so far. This is a bit long-winded, but basically: am I doing something wrong here? Should I just use the upload file functionality? (The question then is having a persistent location where I can access the "root" of the various JSON files I want to upload...)
    Pascal Belouin
    @pbelouin
    Another issue I've been having is that visiting the feed URL in the browser triggers a file download, but I assume this is because the call I'm making has the wrong content-type in the header
    Attila Gazso
    @agazso
    In my experience, the best way to use feeds is to upload the data you want with bzz and store the returned hash in the feed
    If you also want historical data, you can store the previous hash in the uploaded data, so you can go back in time if you'd like (just like with a linked list)
    Or alternatively you can use the Erebos Timeline API, which implements what I wrote above
    https://erebos.js.org/docs/timeline-spec
    If you use this, however, you will need to run JavaScript code from your website in order to fetch the data (in other words, it cannot be static HTML)
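    A minimal sketch of the feed pattern Attila describes above, assuming a Bzz instance configured with the feed owner's signing key and feed params { user, name }; getFeedContentHash/setFeedContentHash are documented Erebos methods, the rest is illustrative:

        async function appendToFeed(bzz, feed, data) {
          // Look up the current content hash so the new entry can link back to it
          let previous = null
          try {
            previous = await bzz.getFeedContentHash(feed)
          } catch (err) {
            // No update yet: this will be the first entry
          }
          // Upload the payload (with the back-link) and point the feed at it
          const hash = await bzz.uploadFile(JSON.stringify({ previous, data }), {
            contentType: 'application/json',
          })
          await bzz.setFeedContentHash(feed, hash)
          return hash
        }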
    Pascal Belouin
    @pbelouin
    I see, thanks @agazso, I was actually thinking about using the Timeline API. Using JS is perfectly fine; I've actually started writing the UI in Vue
    Paul Le Cam
    @PaulLeCam
    Hey, I think it mostly depends on whether you want users to “share” the same list, or each to have their own?
    For example you can have Alice upload a first file, then provide the resulting manifest hash to Bob, and Bob can upload another file using the manifestHash option (https://erebos.js.org/docs/api-bzz#uploadoptions), so the new manifest hash would contain both Bob’s and Alice’s files.
    The problem then is that because the manifest hash changes with every new upload, you need a way for all users to know what the “current” manifest hash is. That’s where a feed is useful, because it gives you a deterministic location to look up this manifest hash. This is what Erebos provides with the setFeedContentHash() (https://erebos.js.org/docs/api-bzz#setfeedcontenthash) method.
    The downside is that you need access to the private key in order to update the feed, so you either need to share this private key (which can be OK if all users are trusted) or have a single user or service trusted to update the feed as needed.
    The alternative would be to use an Ethereum smart contract to store the manifest hash rather than a feed, so you can implement whatever rules you want to authorise setting the manifest hash, but the downsides are the latency for updates (block creation) and the cost (gas for each transaction to update the manifest hash).
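    A sketch of the flow Paul describes, with illustrative file contents, paths and feed name; it assumes the path option places each file at the given path in the updated manifest, and that the bzz instance updating the feed is configured with the shared signing key:

        // 1. Alice uploads a first file, getting a manifest hash back
        const manifest1 = await bzz.uploadFile(aliceJSON, {
          contentType: 'application/json',
          path: 'alice.json',
        })

        // 2. Bob adds his file to the same manifest via the manifestHash option
        const manifest2 = await bzz.uploadFile(bobJSON, {
          contentType: 'application/json',
          path: 'bob.json',
          manifestHash: manifest1,
        })

        // 3. Publish the latest manifest hash at the stable feed location
        await bzz.setFeedContentHash({ user, name: 'shared-repository' }, manifest2)

        // 4. Any reader resolves the current manifest from the feed (no private key needed)
        const current = await bzz.getFeedContentHash({ user, name: 'shared-repository' })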
    Pascal Belouin
    @pbelouin
    wow thanks @PaulLeCam, this makes a lot of sense. Ideally I would like all users to share the same list indeed, as I want this to become some kind of shared "repository". I think sharing the private key is OK at this stage, as all users will indeed be trusted (this is mostly a proof of concept for now), and if it goes anywhere we'll probably have more resources to implement a smart contract. So, to make sure I fully understood what you said, here are the steps:
    Pascal Belouin
    @pbelouin
    Alice uploads a file using the .uploadFile() function, which returns the manifest hash for this file's location. This can then be used by any user to upload more files to it using the manifestHash option of .uploadFile(). Every time an upload is made, the resulting hash gets stored on a feed, which can then be accessed by any user that has access to the feed's private key.
    Attila Gazso
    @agazso
    You don't need the private key to read a feed, only to update it
    However, be aware that concurrent updates (e.g. multiple updates that happen in the same second) are undefined behavior on Swarm, meaning it cannot be deterministically told ahead of time who will "win" and what state the feed will be in after the updates
    Pascal Belouin
    @pbelouin
    thanks for the clarification @agazso, makes sense. At the moment I've tried it another way, which was to simply use the uploadFile function and store the resulting hash in an array. I was wondering if perhaps I could store that array in a feed as well somehow
    Paul Le Cam
    @PaulLeCam
    Yeah, as Attila pointed out, sharing a feed private key is not recommended; that’s why, rather than having one feed with multiple writers, it’s better to have one feed per writer. But that puts the burden on the reader to keep track of all the relevant feeds to stay in sync, especially if you don’t have a discovery mechanism.
    If you use the Timeline, it should help a bit in that regard because you can have an ordered set of changes, at least for a given feed, and in case of conflict between multiple writers, the reader can decide what to do. The next step would be to use CRDTs so you don’t even have to handle conflicts on the reading side.
    Pascal Belouin
    @pbelouin
    thanks Paul - armed with all this info I'll go ahead and give it a go!
    Paul Le Cam
    @PaulLeCam
    Erebos v0.10 released! With support for new features introduced in Swarm v0.5: https://github.com/MainframeHQ/erebos/releases/tag/v0.10.0
    thecryptofruit
    @thecryptofruit
    Wow, you're fast, kudos!
    Paul Le Cam
    @PaulLeCam
    Thanks!
    Adam Uhlíř
    @AuHau
    Hey there,
    I have a question 😇 Why is bzz.downloadDirectoryData() Node-specific (according to the docs)? It shouldn't touch the FS or anything, right?
    Paul Le Cam
    @PaulLeCam
    Hey, it doesn’t touch the FS but uses https://github.com/mafintosh/tar-stream to extract the tar, which uses node streams
    Adam Uhlíř
    @AuHau
    Hmm, I believe that is not true: tar-stream uses readable-stream, which is a port of Node's streams that is also compatible with browsers. Would you accept a PR that would try to move downloadObservable, downloadDirectory, uploadFileStream and maybe uploadTar to browsers as well?
    Paul Le Cam
    @PaulLeCam
    Yes if you can open a PR with added tests for browser support I’m happy to review it, thanks!