Mark Klass
Hi, I'm trying to send logs to Loki, and it works, but I've only got one label (agent="vector") for every log. I've noticed there's a labels.key field in the configuration demo. What are they for, and how do I use them? Can I use them to tag my logs?
  # General
  type = "loki" # required
  inputs = ["cleaned_traefik_logs"]
  endpoint = "http://loki:3100" # required
  healthcheck = true # optional, default

  # Encoding
  encoding.codec = "json" # optional, default

  # Labels
  labels.key = "value" # I'm not sure what this does
  labels.key = "{{ event_field }}" # nor this
4 replies
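The `labels.key` entries in the docs are placeholders: each key you put under the `labels` table becomes a Loki label on the shipped stream, and the value can be a static string or a template that reads an event field. A minimal sketch (the label names and the `host` field are assumptions for illustration):

```toml
[sinks.loki]
type = "loki"
inputs = ["cleaned_traefik_logs"]
endpoint = "http://loki:3100"
encoding.codec = "json"

# Every key under `labels` becomes a Loki label.
labels.app = "traefik"        # static label: app="traefik"
labels.host = "{{ host }}"    # templated label: the value of the event's `host` field
```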
Hello!
Can someone help? I have a bug with Vector on SUSE: it doesn't clean its buffer, and I have plenty of files left on the host even after they've been sent to the server.
6 replies
ll /var/lib/vector/vector_buffer/ | wc -l
  type = "journald" # required

  # General
  type = "vector"
  inputs = ["in"]
  address = ""
  healthcheck = true

  buffer.max_size = 504900000
  buffer.type = "disk"
  buffer.when_full = "block"
Felipe Passos
Should I use Loki or Elasticsearch for log visualization? I'm using Prometheus/Grafana for metrics, but I don't really know if Loki is the best option for the logs.
2 replies
Hi folks, I want to ship my k8s pod container logs located inside /var/lib/docker/containers/<containerid>/*.log. Which Vector source should I use?
20 replies
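A hedged sketch of the two usual options, assuming logs live under the standard Docker path: the `kubernetes_logs` source, which discovers pods and attaches metadata, or a plain `file` source tailing the raw JSON log files:

```toml
# Option 1: pod-aware source with Kubernetes metadata enrichment.
[sources.k8s]
type = "kubernetes_logs"

# Option 2: tail the Docker JSON log files directly (no pod metadata).
[sources.docker_files]
type = "file"
include = ["/var/lib/docker/containers/*/*.log"]
```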
Felipe Passos
I'm getting a 401 error on my Loki sink, but the basic auth is correct. Why?
  inputs   = ["nginx_dev"]
  type     = "loki"
  endpoint = "https://a-endpoint"
  auth.strategy = "basic"
  auth.user = "username"
  auth.password = "some_password"
  labels.key = "dev_nginx"
Aug 31 11:24:14 ip-172-31-41-152 vector[1202]: Aug 31 11:24:14.693 ERROR vector::topology::builder: Healthcheck: Failed Reason: A non-successful status returned: 401 Unauthorized
Aug 31 11:24:15 ip-172-31-41-152 vector[1202]: Aug 31 11:24:15.488  WARN sink{name=loki-nginx type=loki}:request{request_id=0}: vector::sinks::util::retries2: request is not retryable;
31 replies
Ryan Miguel

Can someone help me understand why TLS is failing here? We're using Let's Encrypt to get certs for the central collector and don't really care about having individual host certs for each client; I just want to transmit the logs securely. It works if I set tls.verify_certificate = false on the client, but I'd prefer not to.

Sep 01 17:29:59.836 ERROR vector::topology::builder: Healthcheck: Failed Reason: Connect error: TLS handshake failed: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1915:

Collector config:

  type                  = "vector"
  address               = ""
  shutdown_timeout_secs = 30
  tls.enabled           = true
  tls.crt_file          = "/etc/letsencrypt/fullchain.pem"
  tls.ca_file           = "/etc/letsencrypt/chain.pem"
  tls.key_file          = "/etc/letsencrypt/privkey.pem"

Client config:

  type = "vector"
  inputs = ["apache_log"]
  address = "${CENTRAL_ENDPOINT}:9000"
  healthcheck = true

  # Buffer
  buffer.max_events = 500
  buffer.type = "memory"
  buffer.when_full = "block"

  # TLS
  tls.enabled = true
4 replies
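One common cause of `certificate verify failed` is that the client has no CA to validate the collector's certificate against. A sketch, not a confirmed fix: give the client the chain that signed the collector's Let's Encrypt cert via `tls.ca_file` (the path below is an assumption), and make sure `address` uses the hostname on the certificate rather than an IP:

```toml
[sinks.central]
type = "vector"
inputs = ["apache_log"]
address = "${CENTRAL_ENDPOINT}:9000"

tls.enabled = true
# CA chain used to verify the collector's certificate; path is illustrative.
tls.ca_file = "/etc/vector/letsencrypt-chain.pem"
```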

I've started evaluating vector for delivering logs from fluent-bit to s3.
I've followed the examples and created a config like this:
  # HTTP source (input)
  type = "http" # required
  address = "" # required
  encoding = "json" # optional, default

  # S3 sink (output)
  bucket = "fluentlogsink" # required
  inputs = ["in"] # required
  region = "us-east-1" # required, required when endpoint = ""
  type = "aws_s3" # required
  compression = "gzip"
The logs are showing up in S3 with a .gz extension, but they are still plain text files.
Has anyone experienced something like this and maybe found a solution?
Slawomir Skowron
If you download through the browser, files may be decompressed on the fly. You can compare the download size against the size reported by S3.
Ryan Miguel
If you need TLS please vote for this issue: timberio/vector#3664
Jesse Szwedko


Hey all!

A quick announcement: we are moving from gitter to discord for our community chat. You can join us here: https://discord.gg/jm97nzy (see channels in the vector category).


As the team supporting vector and building its community, we've found a number of issues using gitter for this purpose:

  • Poor notifications
  • Poor editing experience
  • Poor mobile support

We hope having people come to discord instead will result in more messages being seen and responded to.

We also hope to move more of our general development discussions to discord as well to make it easier for people to follow along and contribute.

For more detailed support issues, GitHub Issues is still the best place to ensure that they are seen, triaged, and responded to.

The link on the website and other pointers will be updated shortly.

Hope to see you there! We also welcome any feedback on how we can better support the vector community.

abbas ali chezgi
please correct this link on github issue reporting page: https://github.com/timberio/vector/issues/new/choose
1 reply
I'm evaluating Vector to replace fluentd. One thing I have noticed is that write throughput to S3 is significantly slower than fluentd's. Is there a way to improve it?
3 replies
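Throughput to S3 is usually governed by batch size and request concurrency. A sketch with illustrative values (option names and defaults vary by Vector version, so treat these as assumptions to verify against the docs):

```toml
[sinks.s3]
type = "aws_s3"
inputs = ["in"]
bucket = "fluentlogsink"
region = "us-east-1"
compression = "gzip"

batch.max_bytes = 10485760     # bigger batches mean fewer, larger PUTs
batch.timeout_secs = 60
request.in_flight_limit = 25   # allow more concurrent requests
```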
Michael Pietzsch
Hi guys, I got my Vector setup running today: a syslog source pushing into a Loki sink. But I'm struggling to set up static labels; I'm only getting an agent="vector" label in Grafana.
1 reply
Andrey Afoninsky

a generic question about periodic health check:

  • we have "--require-healthy" to check problem on startup
  • we have unit tests to assist in the development of complex topology

Recently, our Kafka instance (a sink) was down and errors started to appear in the console -> so the service stopped working but didn't exit.
It only exited after a restart, since the "--require-healthy" flag is specified and the sink is not healthy.

There used to be a command we could trigger periodically which returned a >0 exit code if the health check didn't pass -> but it was removed in the latest versions.
A generic question: is it possible to set up a periodic health check (e.g. in Kubernetes) somehow? Any workarounds? Thanks.

5 replies
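One possible workaround, assuming a Vector version that ships the internal API: enable it and point a Kubernetes liveness probe at its `/health` endpoint (the address below is illustrative):

```toml
[api]
enabled = true
address = ""
```

An HTTP liveness probe doing a GET against port 8686's `/health` path would then restart the pod when Vector stops responding.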
Grant Isdale

Hey all,

Does Vector support the Web Identity Provider in STS? This feature was merged into Rusoto in Dec '19 (rusoto/rusoto#1577), but I'm struggling to implement it.

As far as I'm aware, everything is set up correctly, and the Web Identity Provider works with our other k8s services (my set-up is confirmed by this guide: https://dev.to/pnehrer/a-story-of-rusty-containers-queues-and-the-role-of-assumed-identity-kl2), but when I try to put to a CloudWatch log group it won't assume the correct service account.

2 replies
Liran Albeldas
If I have multiple sinks and one of them is timing out, do all the others stop operating until every sink is working?
1 reply
Vyacheslav Rakhinskiy
Hi, how can I use custom grok patterns? For example: https://github.com/padusumilli/postfix-grok/blob/master/postfix-grok-patterns
1 reply
Mark Klass
Hello, is there a way to use Vector's transforms to "clean" fields? For example, I used the tokenize transform to extract values from a log, but now I have values like {"protocol":":udp", "source_port":":57714->", etc.}.
Is there a way to clean them, like removing the : in protocol and the : and -> in source_port?
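Assuming a Vector version with the `remap` transform, a sketch that strips the stray punctuation with VRL's `replace` (the upstream input name and the field types are assumptions):

```toml
[transforms.clean_fields]
type = "remap"
inputs = ["tokenized"]   # hypothetical upstream transform name
source = '''
# ":udp" -> "udp"
.protocol = replace(string!(.protocol), ":", "")
# ":57714->" -> "57714"
.source_port = replace(replace(string!(.source_port), ":", ""), "->", "")
'''
```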
Hi, I am using the http source, and when I start and stop my application multiple times (which restarts Vector as well each time), on the client side I start getting an error connecting to Vector: Error message: Connection refused (Connection refused)
I'm trying to use Vector in my Docker Compose setup, but it can't catch the traffic on port 5000 on my local machine.
Jesse Szwedko
Hi @Bindu-Mawat and @mikele_g_twitter ; just a note that community discussion and support has moved to discord: https://discord.gg/jm97nzy
Vyacheslav Rakhinskiy
Hi, how can I disable rate_limits on errors? (I set drop_invalid to true for json_parser.)
For example
Nov 25 12:11:45.976 WARN transform{name=nginx_parse_json type=json_parser}: vector::internal_events::json: 19 "Event failed to parse as JSON" events were rate limited. rate_limit_secs=5
Vyacheslav Rakhinskiy
And how can I debug this JSON? I started Vector with LOG="trace" and can see only the WARN for this event.
Jesse Szwedko
Hi @rakhinskiy ! Just a note that community discussion and support has moved to discord: https://discord.gg/jm97nzy
Vyacheslav Rakhinskiy
@jszwedko ok thanks
Liem Le Hoang Duc

Hi there, I'm stuck writing logs to a file with the correct timezone.
inputs = ["in"]
type = "file"
path = "/tmp/%Y-%m-%d/%H.log"
encoding.codec = "text"

The time is UTC-based, whereas I need it in my local timezone (+7 in my case). Is there any way to achieve this with Vector? I've searched around but had no luck.

Jesse Szwedko
Hi @liemle3893 ! Just a note that community discussion and support has moved to discord: https://discord.gg/jm97nzy
1 reply
Hey guys, is Vector FIPS compatible?
Hi @valerypetrov ! Just a note that community discussion and support has moved to discord: https://discord.gg/jm97nzy
I get the error "unknown field transform".
type = "remap"
inputs = ["import_logs_tr"]
source = '''
.del(.file, .host)
.log = "import"
'''
This transform is as in the docs.
The data before the transform is:
{"file":"/var/log/app/app.log","host":"244f68ac3445","message":"num **213","severity":"import","source_type":"file","timestamp":"2021-05-19T13:06:18Z"}
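The "unknown field" error together with the `.del(.file, .host)` call suggests a syntax or version mismatch: in VRL, `del` is a free function that takes a single path, and the snippet needs its own `[transforms.<name>]` table header. A sketch, with the transform name assumed:

```toml
[transforms.import_logs]
type = "remap"
inputs = ["import_logs_tr"]
source = '''
del(.file)
del(.host)
.log = "import"
'''
```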
Sanskar Gupta

I have a Rust TCP server running on port 4000. When I try to start Vector to get the TCP metrics, I'm getting:

ERROR source{component_kind="source" component_name=my_source_id component_type=socket}: vector::sources::util::tcp: Failed to bind to listener socket. error=TCP bind failed: Address already in use (os error 98)
Jun 23 22:47:43.438 ERROR source{component_kind="source" component_name=my_source_id component_type=socket}: vector::topology: An error occurred that vector couldn't handle.
Here is my vector.toml

type = "socket"
address = ""
mode = "tcp"

inputs = ["my_source_id"]
type = "console"
encoding = "text"
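"Address already in use" here means Vector's `socket` source tried to open its own listener on a port the Rust server already owns; a socket source listens, it does not connect out. A sketch with an assumed free port, to which the application would then send its data:

```toml
# The source must listen on a port nothing else uses; the app writes to it.
[sources.my_source_id]
type = "socket"
address = ""
mode = "tcp"

[sinks.console_out]
type = "console"
inputs = ["my_source_id"]
encoding = "text"
```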


Anyone have an example of a config for getting pod logs from a specific Kubernetes namespace only? I'm trying

    type: kubernetes_logs
    extra_field_selector: metadata.namespace==vector-testing

    type: filter
    inputs:
      - kube_log_source
    condition: .kubernetes.namespace == "vector-testing"

to no avail