--net=host
, but it also seems that only the /latest/api/token
endpoint is timing out from within a container. It seems the IMDS metadata API should be used instead; botocore was updated because of this issue, and you can see the changes here: boto/botocore#1895
drop_field
and field
but it didn't work
Hello, has anyone tried the AWS S3 sink with Ceph? For me it doesn't work. For example, for the healthcheck, Ceph returns a 404 response code for the HEAD method, while it returns 200 when I use mc ls. Here is the config:
[sinks.ceph]
# REQUIRED - General
type = "aws_s3" # must be: "aws_s3"
inputs = ["syslog"] # example
bucket = "vector" # example
compression = "none" # example, enum
endpoint = "http://my-ceph.com:9000"
# OPTIONAL - Object Names
filename_append_uuid = true # default
filename_extension = "log" # default
filename_time_format = "%s" # default
key_prefix = "date=%F/" # default
# REQUIRED - requests
encoding = "text" # example, enum
# OPTIONAL - General
healthcheck = true # default
I also set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. When I try to send a log, it returns:
Feb 28 16:40:05.185 ERROR sink{name=ceph type=aws_s3}: vector::sinks::util::retries: encountered non-retriable error. error=<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code><BucketName>http://my-ceph.com:9000</BucketName><RequestId>tx00000000000000c51a948-005e594265-430c8a-myhost-1</RequestId><HostId>myhostid</HostId></Error>
Feb 28 16:40:05.185 ERROR sink{name=ceph type=aws_s3}: vector::sinks::util: request failed. error=<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code><BucketName>http://my-ceph.com:9000</BucketName><RequestId>tx00000000000000c51a948-005e594265-430c8a-myhost-1</RequestId><HostId>myhostid</HostId></Error>
Could you help me with that, please? :-) Have a nice day
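For what it's worth, the InvalidArgument response above shows the endpoint URL ("http://my-ceph.com:9000") ending up in the BucketName field, which usually points at virtual-hosted vs. path-style addressing. A minimal sketch of what might help, assuming your Vector version exposes a region and a path-style option (check the aws_s3 docs for your release — these field names are assumptions):

```toml
[sinks.ceph]
type = "aws_s3"
inputs = ["syslog"]
bucket = "vector"
endpoint = "http://my-ceph.com:9000"
# Ceph/MinIO generally expect path-style URLs (http://host/bucket/key)
# rather than virtual-hosted ones (http://bucket.host/key).
force_path_style = true   # assumption: available in newer Vector releases
region = "us-east-1"      # S3-compatible stores usually accept any region
encoding = "text"
```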
hello
https://github.com/prometheus/statsd_exporter
Note that timers will be accepted with the ms, h, and d statsd types. The first two are timers and histograms and the d type is for DataDog's "distribution" type. The distribution type is treated identically to timers and histograms.
Does Vector support the DataDog distribution type? Do we need to create an issue?
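For reference, a DataDog "distribution" sample uses the same StatsD line protocol as timers, just with the d type suffix (the tag section is optional):

```
page.render.seconds:0.42|d|#env:prod
```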
Hey - I have a small problem with reloading configurations. If the source is http / logplex / splunk_hec (all of which use Warp) and you change the configuration but don't change the port, I get a configuration error (address already in use) and the reload fails. The workaround is to change the port to a new value; after a successful reload you can change the port back to the original.
It's not a huge issue, but I wanted to see if it was known.
ERROR vector::topology: Configuration error: Source "in": Address already in use (os error 48)
ERROR vector: Reload was not successful.
Hi, guys! Small question regarding the config for the kafka sink - I have the following piece of config:
[sinks.kafka]
type = "kafka"
inputs = ["json"]
bootstrap_servers = "kafka-server:9092"
topic = "vector"
compression = "none"
healthcheck = true
buffer.type = "disk"
buffer.max_size = 104900000
buffer.when_full = "block"
encoding.codec = "json"
And when I try to start Vector I get:
unknown variant `codec`, expected `text` or `json` for key `sinks.kafka`
What's wrong with config?
Thanks!
I'm having issues with unknown variant `codec`, expected `text` or `json` for key `sinks.some_sink`
for several different types of sinks... It only works when specifying encoding = "text"
or encoding = "json".
The problem is, I need some of the options under encoding.
Tried looking at the source, but I'm not familiar with Rust enough to locate the error myself.
Anyone know if this is a known bug?
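The unknown variant `codec` error in both reports suggests those Vector builds predate the structured encoding table and only accept a plain string. A sketch under that assumption (upgrading Vector is what actually unlocks the encoding sub-options):

```toml
[sinks.kafka]
type = "kafka"
inputs = ["json"]
bootstrap_servers = "kafka-server:9092"
topic = "vector"
# Older Vector versions: plain string only, no sub-options available.
encoding = "json"
# Newer Vector versions accept the table form instead:
# encoding.codec = "json"
```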
rate_limit_num
that will allow more throughput.
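To make the rate_limit_num fragment above concrete, a hedged sketch of raising a sink's request rate limit (the exact placement and names of these keys vary by Vector version — in some releases they sit at the sink's top level, in others under a request table):

```toml
[sinks.my_sink.request]      # "my_sink" is a hypothetical sink name
rate_limit_num = 100         # allow up to 100 in-flight requests ...
rate_limit_duration_secs = 1 # ... per one-second window
```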