Oleksandr Anyshchenko
@aleksuss
Yes. It should
Yu
@Fatman13
kk Thank you!
Oleksandr Anyshchenko
@aleksuss
You are welcome
zakum1
@zakum1
@aleksuss sorry for the late reply. I wrote my app before the helpers existed and haven't migrated it to that framework yet. I will do it. Thanks for your answer.
Yu
@Fatman13
Hello, team, if data belonging to a single entity is changed multiple times on the blockchain, how should we retrieve the latest copy of that data?
Yu
@Fatman13
best practice wise
@aleksuss
Anthony Albertorio
@tesla809
Hello!
Who would I talk to about using Exonum in a hackathon?
Sponsorships and all that
Elena Buzovska
@Buzovska
@tesla809 Hello Anthony! Feel free to email me at olena.buzovska@bitfury.com
Elena Buzovska
@Buzovska
We released version 0.8 of our Exonum Java Binding, along with updated documentation. Exonum Java now fully supports version 0.12 of Exonum core. Learn more on our website! https://exonum.com/doc/version/0.12/get-started/java-binding/
Elena Buzovska
@Buzovska
@/all Join our Oct. 16 developer webinar to learn how to build an e-auction service with Exonum! It's a great opportunity to understand how blockchain can work in the real world. Register now for free! https://bitfury.zoom.us/webinar/register/6315689822497/WN_pZf4qC9YQWKrm_6poKyzIA
zakum1
@zakum1

Hello. I have mentioned previously that I experienced nodes crashing and I was asked to get more logging information. I had the crash again today, with three out of four nodes crashing in relatively quick succession, and am posting the log files in case there is something in them that you recognise.

[2019-10-16T11:49:04.344604469Z ERROR exonum::events::error] An error occurred: peer 0.0.0.0:2000 disconnected
[2019-10-16T11:49:06.857354813Z ERROR exonum::events::error] An error occurred: peer 0.0.0.0:2000 disconnected
[2019-10-16T12:06:09.338924127Z ERROR exonum::events::error] An error occurred: Connection refused (os error 111)
[2019-10-16T13:41:14.277744269Z ERROR exonum::events::error] An error occurred: Connection refused (os error 111)
[2019-10-16T15:49:12.162348250Z ERROR exonum::events::error] An error occurred: peer 0.0.0.0:2000 disconnected
[2019-10-16T20:15:16.319639660Z ERROR exonum::events::error] An error occurred: peer 0.0.0.0:2000 disconnected
thread '<unnamed>' panicked at 'Remote peer address resolve failed: Os { code: 107, kind: NotConnected, message: "Transport endpoint is not connected" }', src/libcore/result.rs:999:5
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:47
   3: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:36
   4: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:200
   5: std::panicking::default_hook
             at src/libstd/panicking.rs:214
   6: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:481
   7: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:384
   8: rust_begin_unwind
             at src/libstd/panicking.rs:311
   9: core::panicking::panic_fmt
             at src/libcore/panicking.rs:85
  10: core::result::unwrap_failed
  11: <futures::stream::for_each::ForEach<S,F,U> as futures::future::Future>::poll
  12: <futures::future::join::Join<A,B> as futures::future::Future>::poll
  13: <futures::future::select::Select<A,B> as futures::future::Future>::poll
  14: <futures::future::map_err::MapErr<A,F> as futures::future::Future>::poll
  15: <futures::future::map::Map<A,F> as futures::future::Future>::poll
  16: futures::task_impl::std::set
  17: <futures::future::lazy::Lazy<F,R> as futures::future::Future>::poll
  18: futures::task_impl::std::set
  19: std::thread::local::LocalKey<T>::with
  20: tokio_current_thread::Entered<P>::block_on
  21: std::thread::local::LocalKey<T>::with
  22: std::thread::local::LocalKey<T>::with
  23: std::thread::local::LocalKey<T>::with
  24: scoped_tls::ScopedKey<T>::set
  25: tokio_core::reactor::Core::run
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Followed immediately by:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Any', src/libcore/result.rs:999:5
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:47
   3: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:36
   4: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:200
   5: std::panicking::default_hook
             at src/libstd/panicking.rs:214
   6: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:481
   7: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:384
   8: rust_begin_unwind
             at src/libstd/panicking.rs:311
   9: core::panicking::panic_fmt
             at src/libcore/panicking.rs:85
  10: core::result::unwrap_failed
  11: exonum::node::Node::run_handler
  12: exonum::node::Node::run
  13: exonum::helpers::fabric::builder::NodeBuilder::run
  14: iouze_node::main
  15: std::rt::lang_start::{{closure}}
  16: std::rt::lang_start_internal::{{closure}}
             at src/libstd/rt.rs:49
  17: std::panicking::try::do_call
             at src/libstd/panicking.rs:296
  18: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:82
  19: std::panicking::try
             at src/libstd/panicking.rs:275
  20: std::panic::catch_unwind
             at src/libstd/panic.rs:394
  21: std::rt::lang_start_internal
             at src/libstd/rt.rs:48
  22: main
  23: __libc_start_main
  24: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Besides the stack trace, this is the intermittent error that I see in my log file:
ERROR exonum::events::error] An error occurred: peer 0.0.0.0:2000 disconnected
(At the very top of the first log file above.) These seem quite strange: why is 0.0.0.0:2000 seen as a peer, and why would the node seem to be connecting to itself?
zakum1
@zakum1

I use the helpers to create my config files, which have this setup:

consensus_public_key = "1532a25e2481623938ab6dc001988851d32056e3b0b6bb3c3db870f19048b37d"
consensus_secret_key = "/config/consensus.key.toml"
external_address = "node2.abc.com:2000"
listen_address = "0.0.0.0:2000"
service_public_key = "31169e28fb3dc29e65ae8825dff635bb06be459fafe88cf128443bdbe4633fa0"
service_secret_key = "/config/service.key.toml"

[api]
private_api_address = "0.0.0.0:8010"
public_api_address = "0.0.0.0:8000"
state_update_timeout = 10000
[[connect_list.peers]]
address = "node4.abc.com:2000"
public_key = "0f20ad34a3ee830a323a07e3f6bb51a8030c72e72fc7caadae21df349f4a207e"

[[connect_list.peers]]
address = "node3.abc.com:2000"
public_key = "30ce1435d6175f085beda1027cc6189cef9af65eebbae8c8c6dbc42ff8a79627"

[[connect_list.peers]]
address = "node1.abc.com:2000"
public_key = "c08cd95a8d9c3ac1737b4d835d0e8c77cc57ca9665cad990944cdc18c34bb1c5"

This is the config for node2.abc.com, in a four-node network made up of node1.abc.com, node2.abc.com, node3.abc.com and node4.abc.com. All other nodes similarly have 3 peers, with listen_address set to 0.0.0.0:2000 and external_address set to their FQDN.

Does anything jump out in these log file / config file snippets?
zakum1
@zakum1
I should also mention that at the time of the crash the nodes were creating empty blocks; no transactions were being submitted.
VadimBuyanov
@VadimBuyanov
Does Exonum have any benchmarking tool like HL Caliper?
ivan-ochc
@ivan-ochc
@zakum1, does /etc/hosts contain the appropriate node IP addresses and domain names?
@VadimBuyanov, we don't have such a tool publicly available so far
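For illustration, an /etc/hosts mapping on node2.abc.com might look like the following; the hostnames come from the connect_list shown above, while the IP addresses here are hypothetical placeholders:

10.0.1.1    node1.abc.com
10.0.1.3    node3.abc.com
10.0.1.4    node4.abc.com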
VadimBuyanov
@VadimBuyanov
@ivan-ochc I got it. Is it possible to discuss the possibility of developing such a tool for Exonum users?
ivan-ochc
@ivan-ochc
@VadimBuyanov, it is possible to discuss, but this is not planned for the near future
VadimBuyanov
@VadimBuyanov
@ivan-ochc Well, maybe there are public benchmark reports on Exonum performance metrics, especially in comparison with competitors (HL/Corda/Waves Enterprise)?
ivan-ochc
@ivan-ochc
VadimBuyanov
@VadimBuyanov
@ivan-ochc thanks!
Mike Lubinets
@mersinvald

Hello everyone!
I'm trying to add an auditor node to a running network, and I cannot quite get it right. Can you help with that?

I use the timestamping example as the basis, so it can be used to reproduce the error.

After bringing up 4 validators with the launch.sh script, I configure a new node the same way as the validators:

./app generate-config config/common.toml config/a --peer-address 127.0.0.1:6335 -n

./app finalize --public-api-address 0.0.0.0:9000 --private-api-address 0.0.0.0:9001 config/a/sec.toml config/a/node.toml --public-configs config/{1,2,3,4}/pub.toml

That should result in a config for a node that's not in the list of validators, i.e. it should act as an auditor (to my understanding).

Then I start the new node up (with all validators in the peer list):

./app run -c config/a/node.toml -d config/a/db --consensus-key-pass pass --service-key-pass pass

That's when the strange things start to happen: a lot of errors and connection failures on every node in the network.

Validator nodes log: https://pastebin.com/zrsUaKj3
New node log: https://pastebin.com/i1c9U3AM

ivan-ochc
@ivan-ochc
@mersinvald, try to add the auditor's address and public key to the peer list of every other node via the API: https://exonum.com/doc/version/latest/advanced/node-management/#add-new-peer
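As a rough sketch of that suggestion, assuming the add-new-peer endpoint described on the linked node-management page (the private API address and the key below are placeholders, the peer address matches the --peer-address used above, and the exact path and payload should be checked against your Exonum version):

curl -X POST -H "Content-Type: application/json" \
    -d '{"address": "127.0.0.1:6335", "public_key": "<auditor consensus public key, hex>"}' \
    "http://<validator-private-api-address>/api/system/v1/peers"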
Mike Lubinets
@mersinvald
@ivan-ochc thanks, I already figured it out after patching Exonum's error handling a bit.
When you add a context to the error, you somehow lose the underlying error, so I only saw the context in the logs, not the error itself.
diff --git a/exonum/src/events/noise/wrappers/sodium_wrapper/handshake.rs b/exonum/src/events/noise/wrappers/sodium_wrapper/handshake.rs
index abc53b7c..84466d33 100644
--- a/exonum/src/events/noise/wrappers/sodium_wrapper/handshake.rs
+++ b/exonum/src/events/noise/wrappers/sodium_wrapper/handshake.rs
@@ -172,7 +172,7 @@ impl Handshake for NoiseHandshake {
             .and_then(|(stream, handshake)| handshake.read_handshake_msg(stream))
             .and_then(|(stream, handshake, message)| handshake.finalize(stream, message))
             .map_err(move |e| {
-                e.context(format!("peer {} disconnected", peer_address))
+                failure::format_err!("peer {} disconnected: {}", peer_address, e)
                     .into()
             });
         Box::new(framed)
@@ -195,7 +195,7 @@ impl Handshake for NoiseHandshake {
             })
             .and_then(|((stream, handshake), message)| handshake.finalize(stream, message))
             .map_err(move |e| {
-                e.context(format!("peer {} disconnected", peer_address))
+                failure::format_err!("peer {} disconnected: {}", peer_address, e)
                     .into()
             });
         Box::new(framed)
Pavel Mukhanov
@pavel-mukhanov
@mersinvald thank you for pointing this out; you are welcome to create a PR :)
andrew lyon
@orthecreedence
Hello, are there docs or guides on advanced node creation? I'm using NodeBuilder, and it seems like when I throw too many transactions at it, it starts getting backed up. I'd like to adjust the max tx_pool_size (which seems to be at 0 at all times during my testing) and I'm having trouble finding how this would be done.
andrew lyon
@orthecreedence
So what happens is I have a simulator that is sending test transactions to the node. It works great at first, tx_count grows steadily, but after a few minutes tx_count stops growing even though I am throwing transactions at it constantly. At this point I would expect (maybe?) to see tx_pool_size grow beyond 0 but it stays there and then the requests to grab transaction information start to time out (they have a 30s timeout).
So it seems like something is blocking new transactions from being entered. I am probably running about 40-50 transactions/second against a quad core machine (CPU/mem are not being saturated)
I tried setting [mempool] tx_pool_capacity = 5000 in my node config but it didn't seem to change anything
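For reference, the setting he mentions would sit in the node's TOML config roughly as follows; this is just a sketch mirroring the message above, and whether tx_pool_capacity is the right knob for this symptom is exactly what is in question here:

[mempool]
tx_pool_capacity = 5000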
andrew lyon
@orthecreedence
Please disregard. Looks like the issue is in the transactions themselves.
Dean Harry
@dharry1968
Hi guys, trying to get v0.13.0-rc2 running and it is failing when trying to compile exonum-merkledb, something about google/protobuf/empty.proto: File not found... libprotoc 3.11.0 is installed and seems to be working OK. Any ideas?
Oleksandr Anyshchenko
@aleksuss
Hi, @dharry1968. What OS do you use?
Oleksandr Anyshchenko
@aleksuss
If you use Linux, try installing the libprotobuf-dev package.
Dean Harry
@dharry1968
Hi @aleksuss, I have it installed under OSX.
Oleksandr Anyshchenko
@aleksuss
How did you install protobuf? Via brew?
Dean Harry
@dharry1968
yes, installed by brew :)
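As a quick diagnostic sketch, assuming the Homebrew protobuf formula (paths may differ between setups), one can check that the compiler is on the PATH and that the well-known .proto file the build is complaining about was actually installed:

protoc --version
ls "$(brew --prefix protobuf)/include/google/protobuf/empty.proto"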
Oleksandr Anyshchenko
@aleksuss
Strange. I've never encountered such an issue before. Could you provide the full error output?
Dean Harry
@dharry1968
Sure, I will have to do it in the morning; I'm away from my machine now.
Oleksandr Anyshchenko
@aleksuss
OK. By the way, you can create an issue on GitHub: https://github.com/exonum/exonum/issues
Dean Harry
@dharry1968
@aleksuss thanks, I submitted the issue to GitHub :)
Utkarsh Tripathi
@utkarshkvs1
Hi @all, I'm new here. I was trying to run the cryptocurrency-advanced tutorial from the Docker image given in the repo, but it is only running the 8 nodes; the application/service itself is not running.
Can someone please look into this?