Hello, has anyone tried the AWS S3 sink with Ceph? For me it doesn't work. For example, for the healthcheck Ceph returns a 404 response code for the HEAD request, while it returns a 200 response code when I'm using mc ls. Here is the config:
[sinks.ceph]
# REQUIRED - General
type = "aws_s3" # must be: "aws_s3"
inputs = ["syslog"] # example
bucket = "vector" # example
compression = "none" # example, enum
endpoint = "http://my-ceph.com:9000"
# OPTIONAL - Object Names
filename_append_uuid = true # default
filename_extension = "log" # default
filename_time_format = "%s" # default
key_prefix = "date=%F/" # default
# REQUIRED - requests
encoding = "text" # example, enum
# OPTIONAL - General
healthcheck = true # default
I also set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. When I try to send a log it returns:
Feb 28 16:40:05.185 ERROR sink{name=ceph type=aws_s3}: vector::sinks::util::retries: encountered non-retriable error. error=<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code><BucketName>http://my-ceph.com:9000</BucketName><RequestId>tx00000000000000c51a948-005e594265-430c8a-myhost-1</RequestId><HostId>myhostid</HostId></Error>
Feb 28 16:40:05.185 ERROR sink{name=ceph type=aws_s3}: vector::sinks::util: request failed. error=<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code><BucketName>http://my-ceph.com:9000</BucketName><RequestId>tx00000000000000c51a948-005e594265-430c8a-myhost-1</RequestId><HostId>myhostid</HostId></Error>
Could you help me with that, please? :-) Have a nice day!
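Not a definitive fix, but one way to narrow it down is to take the healthcheck's HEAD call out of the picture and see whether the write path alone still fails; a minimal sketch of the same sink with the healthcheck disabled (all other values copied from the config above):
[sinks.ceph]
type = "aws_s3"
inputs = ["syslog"]
bucket = "vector"
endpoint = "http://my-ceph.com:9000"
encoding = "text"
healthcheck = false # skip the bucket HEAD check while debugging the 404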
hello
https://github.com/prometheus/statsd_exporter
Note that timers will be accepted with the ms, h, and d statsd types. The first two are timers and histograms and the d type is for DataDog's "distribution" type. The distribution type is treated identically to timers and histograms.
Does Vector support the DataDog distribution type? Do we need to create an issue?
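For reference, the three timer-like statsd line shapes that README passage describes (metric name and value made up here) are the classic timer, the histogram form, and DataDog's distribution form:
request_latency:320|ms
request_latency:320|h
request_latency:320|d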
Hey - I have a small problem with reloading configurations. If the source is http / logplex / splunk_hec (all of which use Warp) and you change the configuration but don't change the port, I get a configuration error (address already in use) and the reload fails. The workaround is to just change the port to a new value. After a successful reload you can then change the port back to the original.
It's not a huge issue, but I wanted to see if it was known.
ERROR vector::topology: Configuration error: Source "in": Address already in use (os error 48)
ERROR vector: Reload was not successful.
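For anyone else hitting this, the workaround above in config form (the source name "in" and the port numbers are just placeholders):
[sources.in]
type = "http"
address = "0.0.0.0:8081" # temporarily bumped from the original port so the reload can bind; switch back with a second reload once this one is live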
Hi, guys! Small question regarding the config for the kafka sink - I have the following piece of config:
[sinks.kafka]
type = "kafka"
inputs = ["json"]
bootstrap_servers = "kafka-server:9092"
topic = "vector"
compression = "none"
healthcheck = true
buffer.type = "disk"
buffer.max_size = 104900000
buffer.when_full = "block"
encoding.codec = "json"
And when I try to start vector I get:
unknown variant `codec`, expected `text` or `json` for key `sinks.kafka`
What's wrong with config?
Thanks!
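One thing worth trying (a guess based on the error message, not a confirmed fix): older Vector releases expect encoding to be a plain string rather than a table, so on those versions the same sink would be written as:
[sinks.kafka]
type = "kafka"
inputs = ["json"]
bootstrap_servers = "kafka-server:9092"
topic = "vector"
compression = "none"
healthcheck = true
buffer.type = "disk"
buffer.max_size = 104900000
buffer.when_full = "block"
encoding = "json" # plain-string form; the nested encoding.codec table is only accepted by newer releases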
I'm having issues with unknown variant `codec`, expected `text` or `json` for key `sinks.some_sink` for several different types of sinks... It only works when specifying encoding = "text" or encoding = "json" - the problem is, I need some of the options under encoding.
Tried looking at the source, but I'm not familiar enough with Rust to locate the error myself.
Anyone know if this is a known bug?
rate_limit_num - that will allow more throughput.
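As a sketch of what that looks like (sink name and number are placeholders; depending on the Vector version this option sits either directly on the sink or under its request table):
[sinks.some_sink]
# ...existing sink options...
rate_limit_num = 100 # allow up to 100 requests per rate-limit window (placeholder value)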
Does separator = "\t" work?
Hi there,
Has anyone here successfully configured vector to ship to Amazon Elasticsearch?
(I believe) I've configured the EC2 instance profiles and Elasticsearch permissions correctly, but I'm getting a 403 in the logs:
Mar 31 10:42:07.843 WARN sink{name=elasticsearch_vpcflowlogs type=elasticsearch}: vector::sinks::util::retries: request is not retryable; dropping the request. reason=response status: 403 Forbidden
Not sure where to start looking to debug this
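Not sure if this is the cause, but one thing worth double-checking is whether the sink is actually signing its requests with the instance profile credentials; a hedged sketch (the endpoint is a made-up placeholder and the exact option names vary between Vector versions, so check the elasticsearch sink reference for yours):
[sinks.elasticsearch_vpcflowlogs]
type = "elasticsearch"
inputs = ["vpcflowlogs"] # placeholder input name
host = "https://vpc-example-domain.eu-west-1.es.amazonaws.com" # hypothetical Amazon Elasticsearch endpoint
auth.strategy = "aws" # sign requests with SigV4; unsigned requests can also produce a 403 from Amazon Elasticsearch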
Hmm, okay - stuck again.
I'm getting a '400 Bad Request' from GCP on my GCS sink, but even at TRACE level it's not showing the body of the response, so I can't see which actual problem it's encountering. All the output I get on trace is:
Apr 07 13:50:06.853 TRACE sink{name=gcp type=gcp_cloud_storage}: vector::sinks::util: request succeeded. response=Response { status: 400, version: HTTP/1.1, headers: {"x-guploader-uploadid": "xxx", "content-type": "application/xml; charset=UTF-8", "content-length": "170", "vary": "Origin", "date": "Tue, 07 Apr 2020 13:50:06 GMT", "server": "UploadServer", "alt-svc": "quic=\":443\"; ma=2592000; v=\"46,43\",h3-Q050=\":443\"; ma=2592000,h3-Q049=\":443\"; ma=2592000,h3-Q048=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,h3-T050=\":443\"; ma=2592000"}, body: Body(Streaming) }
The body property there doesn't get revealed anywhere further down in the log, and then the connection closes.
It seems like the http connection is being closed by the caller before the body can be received fully? :s
Apr 07 14:05:54.397 TRACE hyper::proto::h1::dispatch: body receiver dropped before eof, closing
Apr 07 14:05:54.397 TRACE hyper::proto::h1::conn: State::close_read()
Apr 07 14:05:54.397 TRACE hyper::proto::h1::conn: State::close()
Apr 07 14:05:54.397 TRACE tokio_threadpool::worker: -> wakeup; idx=3
Apr 07 14:05:54.397 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Closed, writing: Closed, keep_alive: Disabled }