Hi, guys! Small question regarding config for the kafka sink - I have the following piece of config:
[sinks.kafka]
type = "kafka"
inputs = ["json"]
bootstrap_servers = "kafka-server:9092"
topic = "vector"
compression = "none"
healthcheck = true
buffer.type = "disk"
buffer.max_size = 104900000
buffer.when_full = "block"
encoding.codec = "json"
And when I try to start vector I get:
unknown variant `codec`, expected `text` or `json` for key `sinks.kafka`
What's wrong with the config?
Thanks!
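For comparison, here is a minimal sketch of the plain-string encoding form that the error message asks for (same sink, everything else unchanged; whether encoding.codec is accepted depends on the Vector version):
[sinks.kafka]
type = "kafka"
inputs = ["json"]
bootstrap_servers = "kafka-server:9092"
topic = "vector"
encoding = "json"   # plain string instead of the encoding.codec table form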
I'm having issues with unknown variant `codec`, expected `text` or `json` for key `sinks.some_sink` for several different types of sinks... It only works when specifying encoding = "text" or encoding = "json".
Problem is, I need some of the options under encoding.
Tried looking at the source, but I'm not familiar enough with Rust to locate the error myself.
Anyone know if this is a known bug?
There's a rate_limit_num option that will allow more throughput.
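Roughly like this, as a sketch (the elasticsearch sink and host are only placeholders, and where rate_limit_num sits can differ between versions):
[sinks.my_sink]
type = "elasticsearch"              # placeholder sink type, use whichever sink you're tuning
inputs = ["json"]
host = "http://elasticsearch:9200"  # placeholder endpoint
rate_limit_num = 100                # allow more requests per rate-limit window than the default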
Would separator = "\t" work?
Hi there,
Has anyone here successfully configured vector to ship to Amazon Elasticsearch?
(I believe) I've configured the EC2 instance profiles and Elasticsearch permissions correctly but I'm getting a 403 in the logs:
Mar 31 10:42:07.843 WARN sink{name=elasticsearch_vpcflowlogs type=elasticsearch}: vector::sinks::util::retries: request is not retryable; dropping the request. reason=response status: 403 Forbidden
Not sure where to start looking to debug this
Hmm, okay - stuck again.
I'm getting a '400 Bad Request' from GCP on my GCS sink, but even at TRACE level it's not showing the body of the response, so I can't see what the actual problem is. All the output I get at TRACE is:
Apr 07 13:50:06.853 TRACE sink{name=gcp type=gcp_cloud_storage}: vector::sinks::util: request succeeded. response=Response { status: 400, version: HTTP/1.1, headers: {"x-guploader-uploadid": "xxx", "content-type": "application/xml; charset=UTF-8", "content-length": "170", "vary": "Origin", "date": "Tue, 07 Apr 2020 13:50:06 GMT", "server": "UploadServer", "alt-svc": "quic=\":443\"; ma=2592000; v=\"46,43\",h3-Q050=\":443\"; ma=2592000,h3-Q049=\":443\"; ma=2592000,h3-Q048=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,h3-T050=\":443\"; ma=2592000"}, body: Body(Streaming) }
The body property there doesn't get revealed further down in the log anywhere, and then the connection closes.
It seems like the http connection is being closed by the caller before the body can be received fully? :s
Apr 07 14:05:54.397 TRACE hyper::proto::h1::dispatch: body receiver dropped before eof, closing
Apr 07 14:05:54.397 TRACE hyper::proto::h1::conn: State::close_read()
Apr 07 14:05:54.397 TRACE hyper::proto::h1::conn: State::close()
Apr 07 14:05:54.397 TRACE tokio_threadpool::worker: -> wakeup; idx=3
Apr 07 14:05:54.397 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
hello
maybe I'm a bit biased, but here are some quick thoughts about the prometheus source: timberio/vector#991 (it's not worth creating a separate issue, just some small feedback based on usage :)
I'm not saying that this source is not good, just that it has room to grow :)
I have an Nginx config producing about 1 MB/sec of JSON (per node); the JSON is in array format, and parse_json does not like arrays. So far I am using add_fields + templating to wrap the array in an object literal before passing it to parse_json (rough sketch below), which is obviously nonsense, but it works.
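Roughly what I mean by that workaround, as a sketch (the source/transform names are made up, and the exact templating and option names may differ by version):
[transforms.wrap_array]
type = "add_fields"
inputs = ["nginx_logs"]                       # hypothetical source name
fields.wrapped = "{\"items\": {{message}}}"   # wrap the raw JSON array in an object literal

[transforms.parse]
type = "json_parser"
inputs = ["wrap_array"]
field = "wrapped"                             # parse the wrapped object instead of the raw array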
The next problem is, having parsed the array, how to expand it into an object with named properties. Is Lua the best option here? I thought Vector had a built-in "zip" transform, but it seems that's only supported in combination with string splitting (using split's field_names argument).
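Here's the kind of Lua I had in mind for that step, purely as a sketch (the property names are invented, and the path syntax for array elements depends on the Vector version):
[transforms.name_fields]
type = "lua"
inputs = ["parse"]
source = """
-- copy positional entries of the parsed array into named properties (hypothetical names)
event["remote_addr"] = event["items[0]"]
event["request"]     = event["items[1]"]
event["status"]      = event["items[2]"]
-- drop the positional fields once they are copied
event["items[0]"] = nil
event["items[1]"] = nil
event["items[2]"] = nil
"""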
hello
I want to implement a Docker proxy to connect Vector to MQTT (as a replacement for Kafka in an event-based architecture).
Which transport is better to choose for communication between the Vector container and the proxy container?
The "vector" source/sink looks like the native solution, but the "http" source/sink has an at-least-once guarantee.