Exonum is an extensible open-source framework for creating blockchain applications. GitHub: https://github.com/exonum/ Documentation: https://exonum.com/doc/ Russian-speaking community: https://gitter.im/exonum/ruExonum Telegram channel: https://t.me/exonum_blockchain
Hello. I have mentioned previously that I experienced nodes crashing, and I was asked to get more logging information. I had the crash again today, with three out of four nodes crashing in relatively quick succession, and am posting the log files in case there is something in them that you recognise.
[2019-10-16T11:49:04.344604469Z ERROR exonum::events::error] An error occurred: peer 0.0.0.0:2000 disconnected
[2019-10-16T11:49:06.857354813Z ERROR exonum::events::error] An error occurred: peer 0.0.0.0:2000 disconnected
[2019-10-16T12:06:09.338924127Z ERROR exonum::events::error] An error occurred: Connection refused (os error 111)
[2019-10-16T13:41:14.277744269Z ERROR exonum::events::error] An error occurred: Connection refused (os error 111)
[2019-10-16T15:49:12.162348250Z ERROR exonum::events::error] An error occurred: peer 0.0.0.0:2000 disconnected
[2019-10-16T20:15:16.319639660Z ERROR exonum::events::error] An error occurred: peer 0.0.0.0:2000 disconnected
thread '<unnamed>' panicked at 'Remote peer address resolve failed: Os { code: 107, kind: NotConnected, message: "Transport endpoint is not connected" }', src/libcore/result.rs:999:5
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:47
3: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:36
4: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:200
5: std::panicking::default_hook
at src/libstd/panicking.rs:214
6: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:481
7: std::panicking::continue_panic_fmt
at src/libstd/panicking.rs:384
8: rust_begin_unwind
at src/libstd/panicking.rs:311
9: core::panicking::panic_fmt
at src/libcore/panicking.rs:85
10: core::result::unwrap_failed
11: <futures::stream::for_each::ForEach<S,F,U> as futures::future::Future>::poll
12: <futures::future::join::Join<A,B> as futures::future::Future>::poll
13: <futures::future::select::Select<A,B> as futures::future::Future>::poll
14: <futures::future::map_err::MapErr<A,F> as futures::future::Future>::poll
15: <futures::future::map::Map<A,F> as futures::future::Future>::poll
16: futures::task_impl::std::set
17: <futures::future::lazy::Lazy<F,R> as futures::future::Future>::poll
18: futures::task_impl::std::set
19: std::thread::local::LocalKey<T>::with
20: tokio_current_thread::Entered<P>::block_on
21: std::thread::local::LocalKey<T>::with
22: std::thread::local::LocalKey<T>::with
23: std::thread::local::LocalKey<T>::with
24: scoped_tls::ScopedKey<T>::set
25: tokio_core::reactor::Core::run
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Followed immediately by:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Any', src/libcore/result.rs:999:5
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:47
3: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:36
4: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:200
5: std::panicking::default_hook
at src/libstd/panicking.rs:214
6: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:481
7: std::panicking::continue_panic_fmt
at src/libstd/panicking.rs:384
8: rust_begin_unwind
at src/libstd/panicking.rs:311
9: core::panicking::panic_fmt
at src/libcore/panicking.rs:85
10: core::result::unwrap_failed
11: exonum::node::Node::run_handler
12: exonum::node::Node::run
13: exonum::helpers::fabric::builder::NodeBuilder::run
14: iouze_node::main
15: std::rt::lang_start::{{closure}}
16: std::rt::lang_start_internal::{{closure}}
at src/libstd/rt.rs:49
17: std::panicking::try::do_call
at src/libstd/panicking.rs:296
18: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:82
19: std::panicking::try
at src/libstd/panicking.rs:275
20: std::panic::catch_unwind
at src/libstd/panic.rs:394
21: std::rt::lang_start_internal
at src/libstd/rt.rs:48
22: main
23: __libc_start_main
24: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
ERROR exonum::events::error] An error occurred: peer 0.0.0.0:2000 disconnected
How would 0.0.0.0:2000 be seen as a peer? And why would it seem to be connecting to itself?
I use the helpers to create my config files, which have this setup:
consensus_public_key = "1532a25e2481623938ab6dc001988851d32056e3b0b6bb3c3db870f19048b37d"
consensus_secret_key = "/config/consensus.key.toml"
external_address = "node2.abc.com:2000"
listen_address = "0.0.0.0:2000"
service_public_key = "31169e28fb3dc29e65ae8825dff635bb06be459fafe88cf128443bdbe4633fa0"
service_secret_key = "/config/service.key.toml"
[api]
private_api_address = "0.0.0.0:8010"
public_api_address = "0.0.0.0:8000"
state_update_timeout = 10000
[[connect_list.peers]]
address = "node4.abc.com:2000"
public_key = "0f20ad34a3ee830a323a07e3f6bb51a8030c72e72fc7caadae21df349f4a207e"
[[connect_list.peers]]
address = "node3.abc.com:2000"
public_key = "30ce1435d6175f085beda1027cc6189cef9af65eebbae8c8c6dbc42ff8a79627"
[[connect_list.peers]]
address = "node1.abc.com:2000"
public_key = "c08cd95a8d9c3ac1737b4d835d0e8c77cc57ca9665cad990944cdc18c34bb1c5"
This is the config for node2.abc.com, in a 4-node network that includes node1.abc.com, node2.abc.com, node3.abc.com and node4.abc.com. All other nodes similarly have 3 peers, with listen_address set to 0.0.0.0:2000 and external_address set to their FQDN.
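For reference, a sketch of how the matching entry for this node would presumably look in the connect_list of the other nodes (e.g. node1.abc.com): the address is the peer's external_address (its FQDN) rather than its 0.0.0.0 listen address, and the public_key is assumed here to be node2's consensus_public_key from the config above.
# hypothetical excerpt from node1.abc.com's config, shown only for illustration
[[connect_list.peers]]
address = "node2.abc.com:2000"
public_key = "1532a25e2481623938ab6dc001988851d32056e3b0b6bb3c3db870f19048b37d"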
Hello everyone!
I'm trying to add an auditor node to a running network, and I cannot quite get it right. Can you help with that?
I use the timestamping example as the basis, so it can be used to reproduce the error.
After bringing up 4 validators with the launch.sh script, I configure a new node the same way as the validators:
./app generate-config config/common.toml config/a --peer-address 127.0.0.1:6335 -n
./app finalize --public-api-address 0.0.0.0:9000 --private-api-address 0.0.0.0:9001 config/a/sec.toml config/a/node.toml --public-configs config/{1,2,3,4}/pub.toml
That should result in a config for a node that is not in the list of validators, i.e. it should act as an auditor (to my understanding).
Then I start the new node (with all validators in its peer list):
./app run -c config/a/node.toml -d config/a/db --consensus-key-pass pass --service-key-pass pass
That's when the strange things start to happen: lots of errors on every node in the network, and connection failures.
Validator nodes log: https://pastebin.com/zrsUaKj3
New node log: https://pastebin.com/i1c9U3AM
diff --git a/exonum/src/events/noise/wrappers/sodium_wrapper/handshake.rs b/exonum/src/events/noise/wrappers/sodium_wrapper/handshake.rs
index abc53b7c..84466d33 100644
--- a/exonum/src/events/noise/wrappers/sodium_wrapper/handshake.rs
+++ b/exonum/src/events/noise/wrappers/sodium_wrapper/handshake.rs
@@ -172,7 +172,7 @@ impl Handshake for NoiseHandshake {
.and_then(|(stream, handshake)| handshake.read_handshake_msg(stream))
.and_then(|(stream, handshake, message)| handshake.finalize(stream, message))
.map_err(move |e| {
- e.context(format!("peer {} disconnected", peer_address))
+ failure::format_err!("peer {} disconnected: {}", peer_address, e)
.into()
});
Box::new(framed)
@@ -195,7 +195,7 @@ impl Handshake for NoiseHandshake {
})
.and_then(|((stream, handshake), message)| handshake.finalize(stream, message))
.map_err(move |e| {
- e.context(format!("peer {} disconnected", peer_address))
+ failure::format_err!("peer {} disconnected: {}", peer_address, e)
.into()
});
Box::new(framed)
I am trying to monitor tx_pool_size (which seems to be at 0 at all times during my testing) and am having trouble finding how this would be done.
tx_count grows steadily, but after a few minutes tx_count stops growing even though I am throwing transactions at it constantly. At this point I would expect (maybe?) to see tx_pool_size grow beyond 0, but it stays there, and then the requests to grab transaction information start to time out (they have a 30s timeout).
I tried setting the following in my node config, but it didn't seem to change anything:
[mempool]
tx_pool_capacity = 5000
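For reference, a minimal sketch of one way to read the current pool size from a running node, assuming the 0.12-era public system API and the public_api_address shown in the config above (the endpoint path and port are assumptions, not taken from the chat):
# query the node's public system API for the number of uncommitted transactions
curl http://127.0.0.1:8000/api/system/v1/mempool
# the response is expected to look like {"size": 0}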