Liran Albeldas
Which sink is the right one to send logs to Logstash?
1 reply
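For context, one common way to reach Logstash (a sketch, not an answer from the thread): Vector's socket sink can write to a Logstash tcp input. The sink name, source name, address, and port below are assumptions.

```toml
# Hypothetical sketch: ship events to a Logstash `tcp` input
# (e.g. one configured with `codec => json_lines`).
[sinks.to_logstash]
  type = "socket"
  inputs = ["my_source"]                    # replace with your source/transform
  mode = "tcp"
  address = "logstash.example.com:5044"     # address/port are assumptions
  encoding.codec = "json"
```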
Andrey Afoninsky

I'm seeing a lot of spam messages after installing the helm chart "vector-0.11.0-nightly-2020-08-24":

Aug 25 13:34:06.533  WARN source{name=kubernetes_logs type=kubernetes_logs}: vector::internal_events::kubernetes_logs: failed to annotate event with pod metadata event=Log(LogEvent { fields: {"file": Bytes(b"/var/log/pods/vector_cluster-logs-chf8d_290b7ab5-9752-49f1-81d7-cc9a51483c4d/vector/2.log"), "message": Bytes(b"{\"log\":\"Aug 25 13:19:17.029  INFO source{name=kubernetes type=kubernetes}:file_server: file_source::file_server: More than one file has same fingerprint. path=\\\"/var/log/pods/jaeger_jaeger-cassandra-2_3d357498-7fd7-448e-a0d7-54b8922b0050/jaeger-cassandra/6.log\\\" old_path=\\\"/var/log/pods/jaeger_jaeger-cassandra-2_3d357498-7fd7-448e-a0d7-54b8922b0050/jaeger-cassandra/5.log\\\"\\n\",\"stream\":\"stdout\",\"time\":\"2020-08-25T13:19:17.02974474Z\"}"), "source_type": Bytes(b"kubernetes_logs"), "timestamp": Timestamp(2020-08-25T13:34:06.533091773Z)} })


    enabled: true
    sourceId: kubernetes_logs
    - name: LOGGLY_TOKEN
      value: ****-****-****-****-****
    # console:
    #   type: console
    #   inputs: ["kubernetes_logs"]
    #   rawConfig: |
    #     encoding.codec = "json"
      type: http
      inputs: ["kubernetes_logs"]
      rawConfig: |
        uri = "https://logs-01.loggly.com/bulk/${LOGGLY_TOKEN}/tag/olly,dev,k8s/"
        batch.max_size = 50000
        encoding.codec = "ndjson"

Should I create an issue, or is this already known and/or fixed? Thanks!

1 reply
Binary Logic
@afoninsky please open an issue and we'll get the right person on it.
Jesse Orr
Hello, should Vector be fingerprinting inputs from the file source when they are older than the ignore_older value?
I have an application that creates many new log files, so I set an arbitrarily low ignore_older value to limit the scope of what Vector sees, but I am running into issues with it opening too many files.
  # General
  type = "file"
  ignore_older = 300
  include = ["/var/log/od/access_*.log"]
  start_at_beginning = false
  oldest_first = true
  fingerprinting.strategy = "checksum"
  fingerprinting.ignored_header_bytes = 2048
  fingerprinting.fingerprint_bytes = 4096

Aug 25 14:39:14 vm8857 vector: Aug 25 14:39:14.117 ERROR source{name=access-raw type=file}:file_server: file_source::file_server: Error reading file for fingerprinting err=Too many open files (os error 24) file="/var/log/od/access_2020-02-24_13-53-24_pid_2074.log"
I could raise max_open_files, which is limited to 1024 for the vector user, but it seems odd to have to do so when only one log file at a time is being written.
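For reference, when Vector runs as a systemd service, the per-process file-descriptor limit can be raised with a drop-in override (a sketch; the unit name `vector.service` and the value 8192 are assumptions):

```ini
# /etc/systemd/system/vector.service.d/limits.conf
[Service]
LimitNOFILE=8192
```

followed by `systemctl daemon-reload && systemctl restart vector`.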
Jesse Szwedko
I tried this out. It looks like it isn't fingerprinting it, but I do see that it maintains an open file handle even if the file is older than the cutoff. I'll open an issue to see if this is expected
Jesse Orr
Interesting, good to know that I'm not 100% crazy. Thank you Jesse =)
Jesse Szwedko
Mark Klass
Hi, I'm trying to send logs to Loki, and it works, but I've only got one label (agent="vector") for every log. I've noticed there's a labels.key field in the configuration demo. What are they for, and how do I use them? Can I use them to tag my logs?
  # General
  type = "loki" # required
  inputs = ["cleaned_traefik_logs"]
  endpoint = "http://loki:3100" # required
  healthcheck = true # optional, default

  # Encoding
  encoding.codec = "json" # optional, default

  # Labels
  labels.key = "value" # I'm not sure what this does
  labels.key = "{{ event_field }}" # nor this
4 replies
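For reference, the `labels.key` lines in the snippet above are placeholders from the docs: `key` is a label name you choose, and the value can be either a static string or a `{{ field }}` template rendered from each event, which is how you tag log streams in Loki. A hedged sketch (the label names here are invented):

```toml
[sinks.loki]
  type = "loki"
  inputs = ["cleaned_traefik_logs"]
  endpoint = "http://loki:3100"
  encoding.codec = "json"

  # Static label: every stream gets app="traefik".
  labels.app = "traefik"
  # Templated label: the value is taken from each event's `host` field.
  labels.host = "{{ host }}"
```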
Hello!
Can someone help? I have a bug with Vector on SUSE: it doesn't clean its buffer, and I end up with plenty of files stored on the host after they've been sent to the server.
6 replies
ll /var/lib/vector/vector_buffer/ | wc -l
  type = "journald" # required

  # General
  type = "vector"
  inputs = ["in"]
  address = ""
  healthcheck = true

  buffer.max_size = 504900000
  buffer.type = "disk"
  buffer.when_full = "block"
Felipe Passos
Should I use Loki or Elasticsearch for log visualization? I'm using Prometheus/Grafana for metrics, but I don't really know if Loki is the best option for logs.
2 replies
Hi folks, I want to ship my k8s pod container logs located inside /var/lib/docker/containers/<containerid>/*.log. Which Vector source should I use?
20 replies
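For the record, the source built for this case is `kubernetes_logs` (it appears in the error output earlier in this log); a minimal sketch, with the source name invented:

```toml
# Minimal sketch: the kubernetes_logs source discovers pod log files under
# /var/log/pods and annotates events with pod metadata.
[sources.k8s]
  type = "kubernetes_logs"
```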
Felipe Passos
I'm getting a 401 error on my Loki sink, but the basic auth credentials are correct. Why?
  inputs   = ["nginx_dev"]
  type     = "loki"
  endpoint = "https://a-endpoint"
  auth.strategy = "basic"
  auth.user = "username"
  auth.password = "some_password"
  labels.key = "dev_nginx"
Aug 31 11:24:14 ip-172-31-41-152 vector[1202]: Aug 31 11:24:14.693 ERROR vector::topology::builder: Healthcheck: Failed Reason: A non-successful status returned: 401 Unauthorized
Aug 31 11:24:15 ip-172-31-41-152 vector[1202]: Aug 31 11:24:15.488  WARN sink{name=loki-nginx type=loki}:request{request_id=0}: vector::sinks::util::retries2: request is not retryable;
31 replies
Ryan Miguel

Can someone help me understand why TLS is failing here? We're using letsencrypt to get certs for the central collector and don't really care about having individual host certs for each client, I just want to transmit the logs securely. It works if I set tls.verify_certificate = false on the client but I'd prefer not to.

Sep 01 17:29:59.836 ERROR vector::topology::builder: Healthcheck: Failed Reason: Connect error: TLS handshake failed: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1915:

Collector config:

  type                  = "vector"
  address               = ""
  shutdown_timeout_secs = 30
  tls.enabled           = true
  tls.crt_file          = "/etc/letsencrypt/fullchain.pem"
  tls.ca_file           = "/etc/letsencrypt/chain.pem"
  tls.key_file          = "/etc/letsencrypt/privkey.pem"

Client config:

  type = "vector"
  inputs = ["apache_log"]
  address = "${CENTRAL_ENDPOINT}:9000"
  healthcheck = true

  # Buffer
  buffer.max_events = 500
  buffer.type = "memory"
  buffer.when_full = "block"

  # TLS
  tls.enabled = true
4 replies
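A plausible fix (an assumption, not confirmed in the thread): the client verifies the collector's certificate against a trust store, so pointing `tls.ca_file` at a CA bundle containing the Let's Encrypt chain may resolve the handshake failure without disabling verification. The bundle path below is distribution-specific:

```toml
[sinks.to_collector]
  type = "vector"
  inputs = ["apache_log"]
  address = "${CENTRAL_ENDPOINT}:9000"

  tls.enabled = true
  # CA bundle containing the Let's Encrypt chain; path varies by distro.
  tls.ca_file = "/etc/ssl/certs/ca-certificates.crt"
```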

I've started evaluating vector for delivering logs from fluent-bit to s3.
I've followed the examples and created a config like this:
type = "http" # required
address = "" # required
encoding = "json" # optional, default

# Output data

bucket = "fluentlogsink" # required
inputs = ["in"] # required
region = "us-east-1" # required, required when endpoint = ""
type = "aws_s3" # required
compression = "gzip"

The logs are showing up in S3 with a .gz extension, however they are still plain text files.
Has anyone experienced something like this and maybe found a solution?
Slawomir Skowron
If you download through the browser, files may be decompressed on the fly. You can compare the download size vs. the size reported on S3.
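To rule out transparent decompression, one can compare an object's first two bytes against the gzip magic number `1f 8b` (a sketch; the S3 object name is hypothetical):

```shell
# gzip streams always begin with the magic bytes 1f 8b; demonstrate locally.
printf 'sample log line\n' | gzip > sample.log.gz
head -c 2 sample.log.gz | od -An -tx1
# Then compare against a raw download from S3, e.g.:
#   aws s3 cp s3://fluentlogsink/<object>.log.gz - | head -c 2 | od -An -tx1
```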
Ryan Miguel
If you need TLS please vote for this issue: timberio/vector#3664
Jesse Szwedko


Hey all!

A quick announcement: we are moving from gitter to discord for our community chat. You can join us here: https://discord.gg/jm97nzy (see channels in the vector category).


As the team supporting vector and building its community, we've found a number of issues using gitter for this purpose:

  • Poor notifications
  • Poor editing experience
  • Poor mobile support

We hope having people come to discord instead will result in more messages being seen and responded to.

We also hope to move more of our general development discussions to discord as well to make it easier for people to follow along and contribute.

For more detailed support issues, GitHub Issues is still the best place to ensure that they are seen, triaged, and responded to.

The link on the website and other pointers will be updated shortly.

Hope to see you there! We also welcome any feedback on how we can better support the vector community.

abbas ali chezgi
please correct this link on github issue reporting page: https://github.com/timberio/vector/issues/new/choose
1 reply
I'm evaluating Vector to replace fluentd. One thing I have noticed is that the absolute write throughput to S3 is significantly slower than fluentd's. Is there a way to improve throughput?
3 replies
Michael Pietzsch
Hi guys, I got my Vector setup running today: a syslog source pushing into a Loki sink. But I am struggling to set up static labels; I'm only getting an agent="vector" label in Grafana.
1 reply
Andrey Afoninsky

a generic question about periodic health checks:

  • we have "--require-healthy" to check for problems on startup
  • we have unit tests to assist in the development of complex topologies

Recently our Kafka instance (a sink) went down and errors started to appear in the console -> so the service stopped working, but it didn't crash.
It only failed after a restart, because the "--require-healthy" flag is specified and the sink was not healthy.

There used to be a command we could trigger periodically that returned a >0 exit code if the health check didn't pass -> but it was removed in the latest versions.
So, a generic question: is it possible to set up a health check (e.g. in Kubernetes) somehow? Any workarounds? Thanks.

5 replies
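One possible workaround in Kubernetes, absent a built-in health command: probe something the process must keep serving, e.g. a TCP liveness probe against a port that a Vector source listens on. A sketch (the port and all timings are assumptions):

```yaml
# Hypothetical liveness probe: fails if Vector stops accepting connections
# on a socket/http source port (9000 here is an assumption).
livenessProbe:
  tcpSocket:
    port: 9000
  initialDelaySeconds: 10
  periodSeconds: 30
  failureThreshold: 3
```

Note this only detects a dead listener; it would not catch the sink-backpressure case described above, where the process stays up but stops forwarding.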
Grant Isdale

Hey all,

Does Vector support the Web Identity Provider in STS? This feature was merged into Rusoto in Dec '19 (rusoto/rusoto#1577), but I'm struggling to implement it.

As far as I'm aware, everything is set up correctly and the Web Identity Provider works with our other k8s services (my set-up is confirmed by this guide: https://dev.to/pnehrer/a-story-of-rusty-containers-queues-and-the-role-of-assumed-identity-kl2), but when I try to put to a CloudWatch log group it won't assume the correct SA.

2 replies
Liran Albeldas
If I have multiple sinks and one of them times out, do all the others stop operating until all sinks are working?
1 reply
Vyacheslav Rakhinskiy
Hi, how can I use custom grok patterns? For example https://github.com/padusumilli/postfix-grok/blob/master/postfix-grok-patterns
1 reply
Mark Klass
Hello, is there a way to use Vector's transforms to "clean" fields? For example, I used the tokenize transform to get some of the values from a log, but now I have values like {"protocol":":udp", "source_port":":57714->", etc.}
Is there a way to clean them, like removing the : in protocol and the : and -> in source_port?
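A hedged sketch of one approach, using the remap transform (which appears later in this log) with present-day VRL syntax; function names differed in early remap releases, and the transform/input names here are invented:

```toml
[transforms.clean_fields]
  type = "remap"
  inputs = ["tokenized_logs"]   # hypothetical upstream transform
  source = '''
    # Strip the leading ":" from protocol, and ":" / "->" from source_port.
    .protocol = replace(.protocol, ":", "")
    .source_port = replace(.source_port, r'[:>-]', "")
  '''
```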
Hi, I am using the http source, and when I start and stop my application multiple times (which restarts Vector each time), the client side starts getting an error connecting to Vector: Error message: Connection refused (Connection refused)
I'm trying to use Vector in my docker-compose setup,
but it can't catch the traffic on port 5000
on my local machine
Jesse Szwedko
Hi @Bindu-Mawat and @mikele_g_twitter ; just a note that community discussion and support has moved to discord: https://discord.gg/jm97nzy
Vyacheslav Rakhinskiy
Hi, how can I disable rate limits on errors? (I set drop_invalid to true for json_parser.)
For example
Nov 25 12:11:45.976 WARN transform{name=nginx_parse_json type=json_parser}: vector::internal_events::json: 19 "Event failed to parse as JSON" events were rate limited. rate_limit_secs=5
Vyacheslav Rakhinskiy
And how can I debug this JSON? I start Vector with LOG="trace" and can see only the WARN for this event.
Jesse Szwedko
Hi @rakhinskiy ! Just a note that community discussion and support has moved to discord: https://discord.gg/jm97nzy
Vyacheslav Rakhinskiy
@jszwedko ok thanks
Liem Le Hoang Duc

Hi there, I'm stuck writing logs to a file with the correct timezone.
inputs = ["in"]
type = "file"
path = "/tmp/%Y-%m-%d/%H.log"
encoding.codec = "text"

The time is UTC-based, whereas I need it in the local timezone (+7 in my case). Is there any way to achieve this with Vector? I've searched around but had no luck.

Jesse Szwedko
Hi @liemle3893 ! Just a note that community discussion and support has moved to discord: https://discord.gg/jm97nzy
1 reply
Hey, guys. Is Vector FIPS compatible?
Hi @valerypetrov ! Just a note that community discussion and support has moved to discord: https://discord.gg/jm97nzy
I get the error: unknown field "transform"
type = "remap"
inputs = ["import_logs_tr"]
source = '''
.del(.file, .host)
.log = "import"