Hey - I have a small problem with reloading configurations. If the source is http / logplex / splunk_hec (all of which use Warp) and you change the configuration but don't change the port, I get a configuration error (address already in use) and the reload fails. The workaround is to change the port to a new value; after a successful reload you can then change the port back to the original.
It's not a huge issue, but I wanted to see if it was known.
ERROR vector::topology: Configuration error: Source "in": Address already in use (os error 48)
ERROR vector: Reload was not successful.
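For reference, a minimal sketch of the workaround against an http source (the source name "in" matches the error above; the addresses are just example values, and any currently free port works as the temporary one):
[sources.in]
type = "http"
address = "0.0.0.0:8080"
# to get a reload through, temporarily switch the port, e.g.
# address = "0.0.0.0:8081"
# reload, then switch back to the original port and reload again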
Hi, guys! Small question regarding the config for the kafka sink - I have the following piece of config:
[sinks.kafka]
type = "kafka"
inputs = ["json"]
bootstrap_servers = "kafka-server:9092"
topic = "vector"
compression = "none"
healthcheck = true
buffer.type = "disk"
buffer.max_size = 104900000
buffer.when_full = "block"
encoding.codec = "json"
And when I try to start vector I get:
unknown variant `codec`, expected `text` or `json` for key `sinks.kafka`
What's wrong with the config?
Thanks!
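In case it helps: on older Vector releases the kafka sink's encoding is a plain string rather than a table, which is what the "unknown variant `codec`" error is complaining about. A sketch of the change, assuming everything else stays as posted:
[sinks.kafka]
type = "kafka"
inputs = ["json"]
bootstrap_servers = "kafka-server:9092"
topic = "vector"
# this version expects a plain string here, not an encoding.codec table
encoding = "json"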
I'm having issues with unknown variant `codec`, expected `text` or `json` for key `sinks.some_sink` for several different types of sinks... It only works when specifying encoding = "text" or encoding = "json". Problem is, I need some of the options under encoding.
Tried looking at the source, but I'm not familiar enough with Rust to locate the error myself.
Anyone know if this is a known bug?
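For what it's worth, the nested encoding table (codec plus extra options) only appeared in later releases. On a version that understands it, the shape is roughly the following; treat it as a sketch, since options like timestamp_format and except_fields may differ by version:
[sinks.some_sink.encoding]
codec = "json"
timestamp_format = "rfc3339"
except_fields = ["host"]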
Raising rate_limit_num will allow more throughput.
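A sketch of where that option lives, using a hypothetical http sink (the name, uri, and value are made up; depending on the Vector version it sits at the top level of the sink or under a request table as request.rate_limit_num):
[sinks.out]
type = "http"
inputs = ["in"]
uri = "https://example.com/ingest"
encoding = "json"
# allow more requests per rate-limit window than the default
rate_limit_num = 100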
Would separator = "\t" work?
Hi there,
Has anyone here successfully configured vector to ship to Amazon Elasticsearch?
(I believe) I've configured the EC2 instance profiles and Elasticsearch permissions correctly, but I'm getting a 403 in the logs:
Mar 31 10:42:07.843 WARN sink{name=elasticsearch_vpcflowlogs type=elasticsearch}: vector::sinks::util::retries: request is not retryable; dropping the request. reason=response status: 403 Forbidden
Not sure where to start looking to debug this
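In case a config comparison helps, this is roughly the shape of an elasticsearch sink with AWS request signing turned on. Treat it as a sketch: the host, inputs, and region are placeholders, and the exact knob for enabling signing has changed across versions (provider = "aws" on older releases, an auth strategy on newer ones):
[sinks.elasticsearch_vpcflowlogs]
type = "elasticsearch"
inputs = ["vpcflowlogs"]
host = "https://vpc-example.eu-west-1.es.amazonaws.com"
# sign requests with the instance profile credentials (SigV4)
provider = "aws"
region = "eu-west-1"
If signing is already enabled, a 403 like that usually points at the domain access policy or role mapping rather than at Vector itself.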
Hmm, okay - stuck again.
I'm getting a '400 Bad Request' from GCP on my GCS sink, but even at TRACE level it's not showing the body of the response, so I can't see which actual problem it's hitting. All the output I get at TRACE is:
Apr 07 13:50:06.853 TRACE sink{name=gcp type=gcp_cloud_storage}: vector::sinks::util: request succeeded. response=Response { status: 400, version: HTTP/1.1, headers: {"x-guploader-uploadid": "xxx", "content-type": "application/xml; charset=UTF-8", "content-length": "170", "vary": "Origin", "date": "Tue, 07 Apr 2020 13:50:06 GMT", "server": "UploadServer", "alt-svc": "quic=\":443\"; ma=2592000; v=\"46,43\",h3-Q050=\":443\"; ma=2592000,h3-Q049=\":443\"; ma=2592000,h3-Q048=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,h3-T050=\":443\"; ma=2592000"}, body: Body(Streaming) }
The body property there doesn't get revealed further down in the log anywhere, and then the connection closes.
It seems like the http connection is being closed by the caller before the body can be received fully? :s
Apr 07 14:05:54.397 TRACE hyper::proto::h1::dispatch: body receiver dropped before eof, closing
Apr 07 14:05:54.397 TRACE hyper::proto::h1::conn: State::close_read()
Apr 07 14:05:54.397 TRACE hyper::proto::h1::conn: State::close()
Apr 07 14:05:54.397 TRACE tokio_threadpool::worker: -> wakeup; idx=3
Apr 07 14:05:54.397 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
hello
maybe I'm a bit biased, but here are some quick thoughts about the prometheus source timberio/vector#991 (it's not worth creating a separate issue, just some small feedback based on usage :)
I'm not saying that this source is not good, just that it has room to grow :)