Jesse Szwedko
@jszwedko
I tried this out. It looks like it isn't fingerprinting it, but I do see that it maintains an open file handle even if the file is older than the cutoff. I'll open an issue to see if this is expected.
Jesse Orr
@jesseorr
Interesting, good to know that I'm not 100% crazy. Thank you Jesse =)
Mark Klass
@ChristianKlass
Hi, I'm trying to send logs to Loki, and it works, but I've only got one label (agent="vector") for every log. I've noticed there's a labels.key field in the configuration example. What is it for, and how do I use it? Can I use it to tag my logs?
[sinks.loki]
  # General
  type = "loki" # required
  inputs = ["cleaned_traefik_logs"]
  endpoint = "http://loki:3100" # required
  healthcheck = true # optional, default

  # Encoding
  encoding.codec = "json" # optional, default

  # Labels
  labels.key = "value" # I'm not sure what this does
  labels.key = "{{ event_field }}" # nor this
4 replies
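For context, a minimal sketch (not from the thread) of how the Loki sink's labels table is generally used: each key under labels becomes a Loki stream label, and the value can be either a fixed string or a {{ field }} template that reads a field from each event. The environment label and the host field below are assumptions for illustration.

[sinks.loki]
  type = "loki"
  inputs = ["cleaned_traefik_logs"]
  endpoint = "http://loki:3100"
  encoding.codec = "json"

  # Static label attached to every log line shipped to Loki
  labels.environment = "production"   # assumed value, for illustration
  # Templated label copied from a field on each event
  labels.forwarder = "{{ host }}"     # assumes the events carry a `host` field

With labels set this way, streams in Grafana can be selected by something like {environment="production"} instead of only agent="vector".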
alpi-ua
@alpi-ua
Hello!
Can someone help? I have a bug with Vector on SUSE: it doesn't clean its buffer, and I have plenty of files left on the host after they have already been sent to the server.
6 replies
ll /var/lib/vector/vector_buffer/ | wc -l
11772
[sources.in]
  type = "journald" # required

[sinks.vector]
  # General
  type = "vector"
  inputs = ["in"]
  address = "1.2.3.4:5000"
  healthcheck = true

  buffer.max_size = 504900000
  buffer.type = "disk"
  buffer.when_full = "block"
Felipe Passos
@SharksT
Should I use Loki or Elasticsearch for log visualization? I'm using Prometheus/Grafana for metrics, but I don't really know if Loki is the best option for logs.
2 replies
Abhijit
@abhi-paul
Hi folks, I want to ship my k8s pod container logs located inside /var/lib/docker/containers/<containerid>/*.log. Which Vector source should I use?
20 replies
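A sketch of two common approaches (assumptions, not a confirmed answer from the thread): the kubernetes_logs source collects pod logs directly when Vector runs as a DaemonSet inside the cluster, while a plain file source can tail the Docker log files by glob. The source names below are illustrative.

[sources.k8s_pods]
  # Requires Vector deployed as a DaemonSet with the container log paths mounted
  type = "kubernetes_logs"

# Alternatively, tail the Docker JSON log files directly:
[sources.docker_container_logs]
  type    = "file"
  include = ["/var/lib/docker/containers/*/*.log"]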
Felipe Passos
@SharksT
I'm getting a 401 error on my Loki sink, but the basic auth credentials are correct. Why?
[sinks.loki-nginx]
  inputs   = ["nginx_dev"]
  type     = "loki"
  endpoint = "https://a-endpoint"
  auth.strategy = "basic"
  auth.user = "username"
  auth.password = "some_password"
  labels.key = "dev_nginx"
Aug 31 11:24:14 ip-172-31-41-152 vector[1202]: Aug 31 11:24:14.693 ERROR vector::topology::builder: Healthcheck: Failed Reason: A non-successful status returned: 401 Unauthorized
Aug 31 11:24:15 ip-172-31-41-152 vector[1202]: Aug 31 11:24:15.488  WARN sink{name=loki-nginx type=loki}:request{request_id=0}: vector::sinks::util::retries2: request is not retryable;
31 replies
Ryan Miguel
@renegaderyu

Can someone help me understand why TLS is failing here? We're using Let's Encrypt to get certs for the central collector and don't really care about having individual host certs for each client; I just want to transmit the logs securely. It works if I set tls.verify_certificate = false on the client, but I'd prefer not to.

Sep 01 17:29:59.836 ERROR vector::topology::builder: Healthcheck: Failed Reason: Connect error: TLS handshake failed: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1915:

Collector config:

[sources.vector]
  type                  = "vector"
  address               = "0.0.0.0:9000"
  shutdown_timeout_secs = 30
  tls.enabled           = true
  tls.crt_file          = "/etc/letsencrypt/fullchain.pem"
  tls.ca_file           = "/etc/letsencrypt/chain.pem"
  tls.key_file          = "/etc/letsencrypt/privkey.pem"

Client config:

[sinks.central_collector]
  type = "vector"
  inputs = ["apache_log"]
  address = "${CENTRAL_ENDPOINT}:9000"
  healthcheck = true

  # Buffer
  buffer.max_events = 500
  buffer.type = "memory"
  buffer.when_full = "block"

  # TLS
  tls.enabled = true
4 replies
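A likely direction (a sketch, not a confirmed fix): the handshake fails because the client does not trust the chain that signed the collector's certificate, so rather than disabling verification, the client sink can be pointed at a CA bundle via tls.ca_file. The path below is an assumption; the system CA store or the Let's Encrypt chain file should both work if they contain the issuing chain.

[sinks.central_collector]
  type = "vector"
  inputs = ["apache_log"]
  address = "${CENTRAL_ENDPOINT}:9000"

  # TLS
  tls.enabled = true
  # Trust the collector's issuing chain instead of setting verify_certificate = false
  tls.ca_file = "/etc/ssl/certs/ca-certificates.crt"   # illustrative path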
gaborzsazsa
@gaborzsazsa

Hi,
I've started evaluating Vector for delivering logs from fluent-bit to S3.
I've followed the examples and created a config like this:
[sources.in]
type = "http" # required
address = "172.31.60.17:8080" # required
encoding = "json" # optional, default

Output data

[sinks.out]
bucket = "fluentlogsink" # required
inputs = ["in"] # required
region = "us-east-1" # required, required when endpoint = ""
type = "aws_s3" # required
compression = "gzip"

The logs are showing up in S3 with a .gz extension; however, they are still plain text files.
Has anyone experienced something like this and maybe found a solution?
Slawomir Skowron
@szibis_twitter
If you download through the browser, files may be decompressed on the fly. You can compare the download size vs. the size reported by S3.
Ryan Miguel
@renegaderyu
If you need TLS please vote for this issue: timberio/vector#3664
Jesse Szwedko
@jszwedko

@/all

Hey all!

A quick announcement: we are moving from gitter to discord for our community chat. You can join us here: https://discord.gg/jm97nzy (see channels in the vector category).

Details:

As the team supporting vector and building its community, we've found a number of issues using gitter for this purpose:

  • Poor notifications
  • Poor editing experience
  • Poor mobile support

We hope having people come to discord instead will result in more messages being seen and responded to.

We also hope to move more of our general development discussions to discord, making it easier for people to follow along and contribute.

For more detailed support issues, GitHub Issues is still the best place to ensure that they are seen, triaged, and responded to.

The link on the website and other pointers will be updated shortly.

Hope to see you there! We also welcome any feedback on how we can better support the vector community.

abbas ali chezgi
@chezgi_gitlab
Please correct this link on the GitHub issue reporting page: https://github.com/timberio/vector/issues/new/choose
1 reply
mcgfenwick
@mcgfenwick
I'm evaluating Vector to replace Fluentd. One thing I have noticed is that write throughput to S3 is significantly lower than with Fluentd. Is there a way to improve throughput?
3 replies
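For reference, S3 throughput is usually tuned through the aws_s3 sink's batch and request settings; the option names have shifted between Vector versions, so treat this as a hedged sketch rather than a drop-in config. The bucket name and values are illustrative.

[sinks.s3_out]
  type   = "aws_s3"
  inputs = ["in"]
  bucket = "my-log-bucket"
  region = "us-east-1"
  compression = "gzip"

  # Fewer, larger PUTs: raise the batch size and flush interval
  batch.max_bytes    = 10485760   # 10 MiB
  batch.timeout_secs = 300

  # Allow more concurrent requests (called request.concurrency in newer releases)
  request.in_flight_limit = 25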
Michael Pietzsch
@michaelpietzsch
Hi guys, I got my Vector setup running today. I have a syslog source pushing into a Loki sink, but I am struggling to set up static labels; I'm only getting an agent="vector" label in Grafana.
1 reply
Andrey Afoninsky
@afoninsky

Hello,
A general question about periodic health checks:

  • we have "--require-healthy" to check for problems on startup
  • we have unit tests to assist in the development of complex topologies

Recently, our Kafka instance (a sink) went down and errors started to appear in the console, so the service stopped working but didn't exit. It only exited after a restart, because the "--require-healthy" flag is specified and the sink is not healthy.

There used to be a command we could trigger periodically that returned a non-zero exit code if the health check didn't pass, but it was removed in recent versions.
So the question: is it possible to set up a health check somehow (e.g. in Kubernetes)? Any workarounds? Thanks.

5 replies
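One possible workaround (a sketch, assuming a Vector version that ships the GraphQL API): enable the API, which exposes a /health endpoint, and point a Kubernetes liveness/readiness probe at it.

[api]
  enabled = true
  address = "0.0.0.0:8686"   # probe http://<pod-ip>:8686/health from Kubernetes

Note this only reports that the Vector process itself is up; it is not a per-sink health check.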
Grant Isdale
@grantisdale

Hey all,

Does vector support the Web Identity Provider in STS? This feature was merged into Rusoto in Dec '19 (rusoto/rusoto#1577), but I'm struggling to implement it.

As far as I'm aware, everything is set up correctly and the Web Identity Provider works with our other k8s services (my set-up is confirmed by this guide: https://dev.to/pnehrer/a-story-of-rusty-containers-queues-and-the-role-of-assumed-identity-kl2), but when I try to put to a CloudWatch log group it won't assume the correct service account.

2 replies
Liran Albeldas
@albeldas
Hi,
If I have multiple sinks and one of them is timing out, do all the others stop operating until every sink is working again?
1 reply
Vyacheslav Rakhinskiy
@rakhinskiy
Hi, how can I use custom grok patterns? For example: https://github.com/padusumilli/postfix-grok/blob/master/postfix-grok-patterns
1 reply
Mark Klass
@ChristianKlass
Hello, is there a way to use Vector's transforms to "clean" fields? For example, I used the tokenize transform to get some of the values from a log, but now I have values like {"protocol":":udp", "source_port":":57714->", etc}.
Is there a way to clean them up, like removing the : in protocol and the : and -> in source_port?
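One way to strip those characters (a sketch, assuming a Vector version with the remap transform and VRL; the input name is an assumption standing in for the tokenize transform):

[transforms.clean_fields]
  type   = "remap"
  inputs = ["tokenized_logs"]   # assumed name of the tokenize transform
  source = '''
    # Drop the leading ":" and the trailing "->" left over from tokenizing
    .protocol    = replace(string!(.protocol), ":", "")
    .source_port = replace(replace(string!(.source_port), "->", ""), ":", "")
  '''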
Bindu-Mawat
@Bindu-Mawat
Hi, I am using the http source, and when I start and stop my application multiple times (which restarts Vector as well each time), on the client side I start getting an error connecting to Vector: Error message: Connection refused (Connection refused)
Ghost
@ghost~5cd19fe2d73408ce4fbfa0bf
I'm trying to use Vector in my Docker Compose setup, but it can't catch the traffic on port 5000 on my local machine.
Jesse Szwedko
@jszwedko
Hi @Bindu-Mawat and @mikele_g_twitter ; just a note that community discussion and support has moved to discord: https://discord.gg/jm97nzy
Vyacheslav Rakhinskiy
@rakhinskiy
Hi, how can I disable rate limits on errors (I set drop_invalid to true for json_parser)?
For example:
Nov 25 12:11:45.976 WARN transform{name=nginx_parse_json type=json_parser}: vector::internal_events::json: 19 "Event failed to parse as JSON" events were rate limited. rate_limit_secs=5
Vyacheslav Rakhinskiy
@rakhinskiy
And how can I debug this JSON? I started Vector with LOG="trace" and can see only the WARN for this event.
Jesse Szwedko
@jszwedko
Hi @rakhinskiy ! Just a note that community discussion and support has moved to discord: https://discord.gg/jm97nzy
Vyacheslav Rakhinskiy
@rakhinskiy
@jszwedko ok thanks
Liem Le Hoang Duc
@liemle3893

Hi there, I'm stuck writing logs to a file with the correct timezone.
[sinks.all]
inputs = ["in"]
type = "file"
path = "/tmp/%Y-%m-%d/%H.log"
encoding.codec = "text"


The time is UTC-based, but I need it in my local timezone (+7 in my case). Is there any way to achieve this with Vector? I've searched around but had no luck.

Jesse Szwedko
@jszwedko
Hi @liemle3893 ! Just a note that community discussion and support has moved to discord: https://discord.gg/jm97nzy
1 reply
valerypetrov
@valerypetrov
Hey, guys. Is Vector FIPS compatible?
avalenn
@avalenn:matrix.org
Hi @valerypetrov ! Just a note that community discussion and support has moved to discord: https://discord.gg/jm97nzy
valerypetrov
@valerypetrov
Thanks