wschwab
@wschwab
Logs seem to show a lot of cannot retrieve chunk, dropped peers, and batch already requested. My first thought is that it might be because I'm not completely synced - I'm about 75 blocks behind, last I checked (but I have what I suspect is all the state for those blocks). Is this the problem?
Super excited about the MVP article, by the way. It's one of the things that got me trying again.
Attila Gazso
@agazso
@wschwab on the first run it does an initial sync, which is usually more resource-intensive than normal operation
swarm team
@ethswarm_gitlab
[mattermost] <acud> @wschwab definitely should not be 16 gigs of ram. the latest stable version runs at about 300 megs in memory once all of the caches get filled up
swarm team
@ethswarm_gitlab
[mattermost] <eknir> @acud , I think I experienced the same issues as @wschwab. More people experienced this, during the last workshop which I held in Utrecht.
swarm team
@ethswarm_gitlab
[mattermost] <acud> @eknir 16 gigs of mem usage?
[mattermost] <acud> I remember you’ve mentioned the visibility of the output in the terminal as described, but such memory usage is something new to me tbh
[mattermost] <racnela> now I'm curious: it should probably be easy to look at RAM usage in swarm's testing cluster? I was hoping to run swarm on a Raspberry Pi 4 cluster...
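(For anyone wanting to sanity-check that figure on their own machine, a quick way to watch the swarm process's resident memory, assuming the process is simply named swarm:)

    # print PID, resident memory (KiB) and uptime of the running swarm process
    ps -C swarm -o pid,rss,etime,args
    # or refresh the same view every few seconds
    watch -n 5 'ps -C swarm -o pid,rss,etime'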
swarm team
@ethswarm_gitlab
[mattermost] <edinalovas> Dani at EthCC, live now: https://www.youtube.com/watch?v=vX3F4QyQRw8
Nick Savers
@nicksavers
When setting up Swarm in a Docker container, is there anything else that needs to be done before running it? The documentation says that Geth is needed. When starting the container, where does it look for an active Geth node?
Nick Savers
@nicksavers
What happens when somebody or you yourself tries to upload a file that's already present in the network?
swarm team
@ethswarm_gitlab
[mattermost] <zelig> currently, if the uploader has a chunk, nothing happens. But in the new pushsync protocol, chunks will nonetheless travel from the uploader node to their respective storer; if the storer node already has the chunk, they stop spreading it. They will reappear in the sync pool, though, to be pull-synced by the storer's neighbours. This is needed for (1) re-uploading after garbage collection and (2) attaching a new postage stamp to a chunk
swarm team
@ethswarm_gitlab
[mattermost] <edinalovas> mmohorko added to the channel by edinalovas.
wschwab
@wschwab

Hey all, I'm trying out a Swarm node (again), and I see that it's eating a bunch of resources - CPU seems to bounce between 10-50% (i7 7th gen), and memory hovering around 50% (on a 32 GB RAM machine). Is this to be expected?

I'm trying yet again now, this time with an internal SSD. The insane memory consumption seems to have gone away, so maybe it was something about using an external drive (or something with my drive specifically). CPU usage seems to hover around 20-40%, though I suspect this is the initial sync @agazso was talking about. I've seen this before, but I did want to ask about it again. My whole output right now is warnings like:

WARN [03-13|11:32:50.637] message handler: (msg code 1): netstore.Get can not retrieve chunk for ref 2051a8e4f859e04665c8f1524d855ebb38be4be306bd6b3b919e3e19bea09905: no suitable peer
Does this mean I'm not syncing to peers at all? Shouldn't I be seeing some successful syncing and chunk retrieval?
JukeBox
@JukeB38_twitter
Hello, I use a private swarm network with ~10 nodes, but I keep losing peers. I tried to re-add the lost peers manually, then I made a script to do that, but nothing works well. What would you advise?
Héctor Guilló Antón
@hguillo
Hi, I'm sorry if this question is obvious, but how can I share files between two nodes? I have tried swarm up on my computer and then swarm down bzz:/<hash> on another one. All I get is a Manifest not found error. I guess the file is not being uploaded to the network. Any advice on how to achieve this? Thanks.
Vojtech Simetka
@vojtechsimetka
Hi @hguillo just use the hash directly: swarm down <hash>
Héctor Guilló Antón
@hguillo
With swarm down <hash> I get Fatal: could not parse uri argument: unknown scheme "".
Attila Gazso
@agazso
@hguillo it seems that your nodes are not connected
are you connecting to the public network? if yes, then you can try to access your file on https://swarm-gateways.net/bzz:/<hash> to see what happens
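(For reference, a rough sketch of the flow being discussed; example.txt is a placeholder, and the optional destination filename for swarm down is an assumption:)

    # upload a file; the command prints the manifest hash of the uploaded content
    swarm up example.txt
    # fetch it on another node using the bzz:/ scheme (the bare-hash form fails
    # with "unknown scheme", as seen above)
    swarm down bzz:/<hash> example-copy.txt
    # if both nodes are on the public network, the public gateway should serve it too
    curl https://swarm-gateways.net/bzz:/<hash>/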
Héctor Guilló Antón
@hguillo
Yeah, you are right, I'm not connected to the public network because I still get the Manifest not found error on the public gateway. But Idk why I'm not connected, I followed the steps on the swarm docs. Thanks for your help guys
Attila Gazso
@agazso
@hguillo maybe you are running an older version? or it can be also firewall issues
you can get more information if you connect to swarm on the admin interface using geth like this:
geth attach bzzd.ipc
where bzzd.ipc is the one in your swarm directory
Then type admin.peers and it will print your connected peers
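(Roughly what that session looks like; the bzzd.ipc path varies by setup, so ~/.ethereum/bzzd.ipc below is only an assumption:)

    # attach a geth JavaScript console to the swarm node's IPC endpoint
    geth attach ~/.ethereum/bzzd.ipc
    # then, inside the console:
    > admin.peers           // lists the currently connected peers
    > admin.peers.length    // quick count; 0 would explain the "Manifest not found" errors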
Mattia Dalzocchio
@mattiaz9
Hello, is there any way to sign a feed digest using MetaMask? Since it adds the prefix “\x19Ethereum Signed Message”, it fails to update the feed.
Attila Gazso
@agazso
@mattiaz9 I found this related issue ethersphere/swarm#983
Mattia Dalzocchio
@mattiaz9
@agazso thank you!!
Surabhi
@Surabhidudhefiya
Hello, I am new to Swarm and have a very basic question: how is backup of data managed in Swarm? Is there any disaster recovery plan available?
FI
@step21_gitlab
@Surabhidudhefiya what do you mean by backup? of which data?
FI
@step21_gitlab
more general question - is the connection to a geth node really only needed for ENS?
and does it mean it needs a fully synced node, or even an archive node? that would be a major shortcoming
Attila Gazso
@agazso
@step21_gitlab atm the geth connection is only needed for ENS resolution
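(On the CLI that connection is passed in explicitly; a minimal sketch, assuming a geth JSON-RPC endpoint reachable at http://127.0.0.1:8545 and an already-created account, both placeholders. Whether the ethersphere/swarm Docker image forwards these flags unchanged is also an assumption:)

    # point swarm at a geth node; as noted above, it is only used for ENS resolution
    swarm --bzzaccount <your-account> --ens-api http://127.0.0.1:8545
    # the same flags presumably apply when running the Docker image mentioned earlier
    docker run -it ethersphere/swarm --bzzaccount <your-account> --ens-api http://<geth-host>:8545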
David Portabella
@dportabella

unstoppabledomains.com runs on top of ethereum/swarm, right?
they say the following in their FAQ:

How will I be able to view a blockchain website?
You will need to use a mirroring service, a browser extension or a browser that supports blockchain domains.

is there any public mirroring service?

Juliano Rizzo
@juli
unstoppable domains is like selling land on the moon
Ghost
@ghost~59105ab4d73408ce4f5dd5e1
@zelig Someone should make an application that takes a snapshot of a website and publishes it on swarm, so it can't be edited etc.
Possibly maybe
:D
Ghost
@ghost~59105ab4d73408ce4f5dd5e1
So if we reference something we know it won't disappear or be edited, if it's Wikipedia for example
Swarm Beta is around the corner! ^^^
ldeffenb
@ldeffenb
1 week and counting...
Jon Bray
@heyJonBray
Finally :D
@ghost~59105ab4d73408ce4f5dd5e1 You could technically automate this process now, by going through a site's directory structure and converting each page to an IPFS file. But I'm extremely excited about swarm.
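(A rough sketch of that automation on the swarm side; wget's mirror mode and swarm's --recursive upload flag are the assumptions here, and the Wikipedia URL is just an example:)

    # grab a static snapshot of a page and the assets it links to
    wget --mirror --convert-links --no-parent https://en.wikipedia.org/wiki/Swarm
    # upload the whole directory in one go; prints a single manifest hash for the snapshot
    swarm --recursive up en.wikipedia.org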
matrixbot
@matrixbot
ˈt͡sɛːzaɐ̯ Hasn't that been a long-standing ipfs feature request? something like ipfs add -r http://foobar/ ?
Hi all, ^^^ here's the program for the day :)
Rinke Hendriksen
@Eknir

Hi everybody,

On November 24th, the Swarm team is organizing yet another event to celebrate our recent advancements. Come join us to celebrate and hear all about the latest from the developer team, research, organization and ecosystem!

Hope to see you there

More info and sign-up: https://ethswarm.medium.com/join-the-swarm-live-release-event-95a4aaf2aea3

Fernando Llaca
@fllaca
hey there, I was trying to figure out how to get prometheus metrics from Swarm. Is it supported? I found this in the code https://github.com/ethersphere/swarm/blob/17a389d98ac8355e0e73fb6b30e547bc970b8433/metrics/flags.go#L65, but I don't know how to invoke that endpoint (curl http://localhost:8500/debug/metrics/prometheus/accounting doesn't do the job, I get a 404)
Attila Gazso
@agazso
@fllaca Swarm public chat moved over to Mattermost, please try https://beehive.ethswarm.org/swarm/channels/public
Rinke Hendriksen
@Eknir
Hi all, a last reminder for the Swarm live event, tomorrow at 14:00 CET. If you haven't registered yet, please do so now here. See you all tomorrow!
John
@john-a-m
Is there any documentation on the anonymity guarantees of swarm? (i.e. if I host a file using the Bee client, could someone figure out that I am the person hosting that file?)