ShadowNet
@shadownetro_twitter
Hello, I'm having trouble forcing a lowercase index name in my vector.toml config. I'm getting this error:
ElasticSearch error response err_type=invalid_index_name_exception reason=Invalid index name [application-CRON-2020-08-19.{lc_identifier}], must be lowercase
Sorry for the double post. Is there a fast way of solving this? Thx
Jesse Szwedko
@jszwedko
@shadownetro_twitter I think the only way to do that right now might be the lua transform. I'll open an issue for this to track. There is some work happening right now around field transformations that I think this could fit into
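For reference, a rough sketch of that lua approach, assuming the templated field is called lc_identifier and an upstream component named my_source (both names hypothetical; lua v2 transform syntax):

[transforms.lowercase_identifier]
  type = "lua"
  version = "2"
  inputs = ["my_source"]
  hooks.process = """
  function (event, emit)
    -- lowercase the field that feeds the index template, if present
    if event.log.lc_identifier ~= nil then
      event.log.lc_identifier = string.lower(event.log.lc_identifier)
    end
    emit(event)
  end
  """

[sinks.es]
  type = "elasticsearch"
  inputs = ["lowercase_identifier"]
  # plus your existing connection settings; keep any literal parts of the index lowercase too
  index = "application-cron-2020-08-19.{{ lc_identifier }}"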
Jesse Szwedko
@jszwedko
Issue: timberio/vector#3496
ShadowNet
@shadownetro_twitter
thank you
Michele Gatti
@mikele_g_twitter
Is the sink for BigQuery ready?
Jesse Szwedko
@jszwedko
not yet, but there is an open PR for it: timberio/vector#1951
davidconnett-splunk
@davidconnett-splunk
Jonathan Endy
@jonathan.endy.csr_gitlab

Hi All,
Hope you can help me. I'm trying to stream data from Kafka to GCS.
The requirement is to create an object for each event from Kafka, and the object name is composed from content in the event.
The first question: is it possible not to use the batch option (or to batch one event at a time)?
Second, I think I saw that it's possible to reference all fields; can I use conversion and splitting of a date from one field?
Third, if I'm reading from Kafka, can I skip the disk buffer and still achieve at-least-once delivery?

Thank you all!

11 replies
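For the Kafka -> GCS question above, a rough sketch of how the pieces might line up (the bucket, field names, and the per-event batch setting are assumptions; verify the gcp_cloud_storage options against your version):

[sources.kafka_in]
  type = "kafka"
  bootstrap_servers = "kafka:9092"        # hypothetical
  group_id = "vector-gcs"
  topics = ["events"]

[sinks.gcs]
  type = "gcp_cloud_storage"
  inputs = ["kafka_in"]
  bucket = "my-bucket"                    # hypothetical
  # object keys can be templated from event content
  key_prefix = "events/{{ event_id }}/"   # assumes an event_id field exists
  encoding.codec = "ndjson"
  batch.max_events = 1                    # aiming for one object per event; option name assumed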
夜读书
@db2jlu_twitter
Hello all, I met the error below, could you please have a look? Thanks
Aug 23 02:28:47.114 ERROR sink{name=clickhouse-apilog type=clickhouse}:request{request_id=212}: vector::sinks::util::sink: Response wasn't successful. response=Response { status: 400, version: HTTP/1.1, headers: {"date": "Sun, 23 Aug 2020 02:28:47 GMT", "connection": "Keep-Alive", "content-type": "text/tab-separated-values; charset=UTF-8", "x-clickhouse-server-display-name": "master-01", "transfer-encoding": "chunked", "x-clickhouse-query-id": "1188cca8-94ef-4b63-b3c9-19c7771ee72b", "x-clickhouse-format": "TabSeparated", "x-clickhouse-timezone": "UTC", "x-clickhouse-exception-code": "26", "keep-alive": "timeout=3", "x-clickhouse-summary": "{\"read_rows\":\"0\",\"read_bytes\":\"0\",\"written_rows\":\"0\",\"written_bytes\":\"0\",\"total_rows_to_read\":\"0\"}"}, body: b"Code: 26, e.displayText() = DB::Exception: Cannot parse JSON string: expected opening quote: (while read the value of key consumer.created_at): (at row 19)\n (version 20.6.3.28 (official build))\n" }
夜读书
@db2jlu_twitter
Seems the clickhouse sink doesn't support metrics, could I know the reason? Thanks!
Jesse Szwedko
@jszwedko

@db2jlu_twitter I'm not super familiar with Clickhouse, but there is an open issue for metrics support: timberio/vector#3435 . It may just not be implemented yet.

Looking at that though, are you sure that's the reason? It seems like it might be a mismatch in the schema or datatypes in clickhouse or, possibly, that vector is sending invalid JSON

夜读书
@db2jlu_twitter
@jszwedko sorry, those are two different questions. For the first, I checked the ClickHouse logs; it seems the error happened only on the vector side, not on the ClickHouse side, maybe special characters? Not sure. For the second, that issue was opened by me; I hope that feature can be implemented. Vector is so cool! Thank you again!
夜读书
@db2jlu_twitter
@jszwedko btw, what is the main difference between storing metrics and logs in a sink?
Jay Fenton
@jfenton

I just posted a blog about Vector: https://www.splunk.com/en_us/blog/it/meet-the-fastest-forwarder-on-the-net.html

huh...Splunk pulled the article?

3 replies
Liran Albeldas
@albeldas
Hi,
I'm trying to deploy Vector as a DaemonSet (Helm) and I'm having some trouble with filter conditions.
I tried adding the namespace prefix with _ and /, but it doesn't work.
If I remove the filter condition, all containers' logs go out to the console.
My pod label: app=liran-demo, Namespace: demo
transforms:
   "liran-demo-logs":
     type: filter
     inputs: ["kubernetes_logs"]
     rawConfig: |
      [transforms.liran-demo-logs.condition]
      "kubernetes.pod_labels.component.eq" = "app=liran-demo"
        "stream.eq" = "stdout"

sinks:
   console:
     type: "console"
     inputs: ["liran-demo-logs"]
     taget: "stdout"
     rawConfig: |
      # Encoding
      encoding.codec = "json" # required
1 reply
Liran Albeldas
@albeldas
Never mind, I had a misconfiguration in my labels; everything works.
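For anyone hitting the same thing: with a pod label of app=liran-demo, the condition matches on the label key itself, so it would look roughly like this (same field paths as the config above; the label key app is the assumption here):

      [transforms.liran-demo-logs.condition]
      "kubernetes.pod_labels.app.eq" = "liran-demo"
      "stream.eq" = "stdout"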
jsomwaru
@jsomwaru
I have an issue where the s3 sink can't verify SSL of the s3 bucket. I've looked in the docs and I can't find anything about it. WARN sink{name=meraki_dump type=aws_s3}:request{request_id=2}: vector::sinks::util::retries2: retrying after error: Error during dispatch: error trying to connect: the handshake failed: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1915:: unable to get local issuer certificate
Is anyone aware of a workaround for this?
Liran Albeldas
@albeldas
Hi,
Which sink is the right one to send logs to Logstash?
1 reply
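One possible route, sketched here rather than taken from the reply: Vector's socket sink pointed at a Logstash tcp input (host, port, and component names are placeholders):

[sinks.to_logstash]
  type = "socket"
  mode = "tcp"
  inputs = ["my_logs"]                    # hypothetical upstream component
  address = "logstash.example.com:5000"   # matching a Logstash tcp input on 5000
  encoding.codec = "json"

On the Logstash side, a tcp input with a json_lines codec would parse the newline-delimited JSON this produces.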
Andrey Afoninsky
@afoninsky

hello
I have a lot of spam messages after installing helm chart "vector-0.11.0-nightly-2020-08-24":

Aug 25 13:34:06.533  WARN source{name=kubernetes_logs type=kubernetes_logs}: vector::internal_events::kubernetes_logs: failed to annotate event with pod metadata event=Log(LogEvent { fields: {"file": Bytes(b"/var/log/pods/vector_cluster-logs-chf8d_290b7ab5-9752-49f1-81d7-cc9a51483c4d/vector/2.log"), "message": Bytes(b"{\"log\":\"Aug 25 13:19:17.029  INFO source{name=kubernetes type=kubernetes}:file_server: file_source::file_server: More than one file has same fingerprint. path=\\\"/var/log/pods/jaeger_jaeger-cassandra-2_3d357498-7fd7-448e-a0d7-54b8922b0050/jaeger-cassandra/6.log\\\" old_path=\\\"/var/log/pods/jaeger_jaeger-cassandra-2_3d357498-7fd7-448e-a0d7-54b8922b0050/jaeger-cassandra/5.log\\\"\\n\",\"stream\":\"stdout\",\"time\":\"2020-08-25T13:19:17.02974474Z\"}"), "source_type": Bytes(b"kubernetes_logs"), "timestamp": Timestamp(2020-08-25T13:34:06.533091773Z)} })

config:

  kubernetesLogsSource:
    enabled: true
    sourceId: kubernetes_logs
  env:
    - name: LOGGLY_TOKEN
      value: ****-****-****-****-****
  sinks:
    # console:
    #   type: console
    #   inputs: ["kubernetes_logs"]
    #   rawConfig: |
    #     encoding.codec = "json"
    loggly:
      type: http
      inputs: ["kubernetes_logs"]
      rawConfig: |
        uri = "https://logs-01.loggly.com/bulk/${LOGGLY_TOKEN}/tag/olly,dev,k8s/"
        batch.max_size = 50000
        encoding.codec = "ndjson"

should I create an issue, or is it already known and/or fixed? thanks

1 reply
Binary Logic
@binarylogic
@afoninsky please open an issue and we'll get the right person on it.
Jesse Orr
@jesseorr
Hello, should vector be fingerprinting inputs from the file source when they are older than the ignore_older value?
I have an application that writes to many new log files, so I have an arbitrarily low ignore_older value to limit the scope of what vector sees, but I am running into issues with it opening too many files.
[sources.access-raw]
  # General
  type = "file"
  ignore_older = 300
  include = ["/var/log/od/access_*.log"]
  start_at_beginning = false
  oldest_first = true
  fingerprinting.strategy = "checksum"
  fingerprinting.ignored_header_bytes = 2048
  fingerprinting.fingerprint_bytes = 4096

Aug 25 14:39:14 vm8857 vector: Aug 25 14:39:14.117 ERROR source{name=access-raw type=file}:file_server: file_source::file_server: Error reading file for fingerprinting err=Too many open files (os error 24) file="/var/log/od/access_2020-02-24_13-53-24_pid_2074.log"
I could change max_open_files, which is limited to 1024 for the vector user, but it seems odd to have to do such a thing when only one log file at a time is being written.
Jesse Szwedko
@jszwedko
I tried this out. It looks like it isn't fingerprinting it, but I do see that it maintains an open file handle even if the file is older than the cutoff. I'll open an issue to see if this is expected
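In the meantime, if raising the per-user limit is acceptable, and assuming Vector runs under systemd as vector.service (the journal lines above suggest it does), a drop-in like this would lift the 1024 cap:

# /etc/systemd/system/vector.service.d/limits.conf
[Service]
LimitNOFILE=65536

Followed by systemctl daemon-reload and a restart of the service to apply the new limit.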
Jesse Orr
@jesseorr
Interesting, good to know that I'm not 100% crazy. Thank you Jesse =)
Jesse Szwedko
@jszwedko
Mark Klass
@ChristianKlass
Hi, I'm trying to send logs to Loki, and it works, but I've only got one label (agent="vector") for every log. I've noticed there's a labels.key field in the configuration demo. What is it for, and how do I use it? Can I use it to tag my logs?
[sinks.loki]
  # General
  type = "loki" # required
  inputs = ["cleaned_traefik_logs"]
  endpoint = "http://loki:3100" # required
  healthcheck = true # optional, default

  # Encoding
  encoding.codec = "json" # optional, default

  # Labels
  labels.key = "value" # I'm not sure what this does
  labels.key = "{{ event_field }}" # nor this
4 replies
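For context on labels.key: in the docs it is a placeholder for an arbitrary label name. Each entry under labels becomes a Loki label, and values can be static strings or templates that pull from event fields. A sketch (the label names and the host field are assumptions):

  # Labels
  labels.agent = "vector"          # static label
  labels.app = "traefik"           # another static label, hypothetical
  labels.host = "{{ host }}"       # templated from the event's host field, if present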
alpi-ua
@alpi-ua
Hello!
Can someone help? I have a bug with Vector on SUSE: it doesn't clean the buffer and I have plenty of files stored on the host after they have been sent to the server
6 replies
ll /var/lib/vector/vector_buffer/ | wc -l
11772
[sources.in]
  type = "journald" # required

[sinks.vector]
  # General
  type = "vector"
  inputs = ["in"]
  address = "1.2.3.4:5000"
  healthcheck = true

  buffer.max_size = 504900000
  buffer.type = "disk"
  buffer.when_full = "block"
Felipe Passos
@SharksT
Should I use Loki or Elasticsearch for log visualization? I'm using Prometheus/Grafana for metrics but I don't really know if Loki is the best option for the logs
2 replies
Abhijit
@abhi-paul
Hi folks, I want to ship my k8s pod container logs located inside /var/lib/docker/containers/<containerid>/*.log. Which Vector source should I use?
20 replies
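For this kind of setup, the kubernetes_logs source (available in the 0.11 nightlies mentioned earlier in this log) reads pod logs and annotates events with pod metadata, so a file source pointed at /var/lib/docker/containers shouldn't be necessary. A minimal sketch:

[sources.k8s]
  type = "kubernetes_logs"   # discovers pod log files and adds pod metadata

[sinks.stdout]
  type = "console"
  inputs = ["k8s"]
  encoding.codec = "json"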
Felipe Passos
@SharksT
I'm getting a 401 error on my Loki sink, but the basic auth is correct. Why?
[sinks.loki-nginx]
  inputs   = ["nginx_dev"]
  type     = "loki"
  endpoint = "https://a-endpoint"
  auth.strategy = "basic"
  auth.user = "username"
  auth.password = "some_password"
  labels.key = "dev_nginx"
Aug 31 11:24:14 ip-172-31-41-152 vector[1202]: Aug 31 11:24:14.693 ERROR vector::topology::builder: Healthcheck: Failed Reason: A non-successful status returned: 401 Unauthorized
Aug 31 11:24:15 ip-172-31-41-152 vector[1202]: Aug 31 11:24:15.488  WARN sink{name=loki-nginx type=loki}:request{request_id=0}: vector::sinks::util::retries2: request is not retryable;
31 replies
Ryan Miguel
@renegaderyu

Can someone help me understand why TLS is failing here? We're using letsencrypt to get certs for the central collector and don't really care about having individual host certs for each client, I just want to transmit the logs securely. It works if I set tls.verify_certificate = false on the client but I'd prefer not to.

Sep 01 17:29:59.836 ERROR vector::topology::builder: Healthcheck: Failed Reason: Connect error: TLS handshake failed: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1915:

Collector config:

[sources.vector]
  type                  = "vector"
  address               = "0.0.0.0:9000"
  shutdown_timeout_secs = 30
  tls.enabled           = true
  tls.crt_file          = "/etc/letsencrypt/fullchain.pem"
  tls.ca_file           = "/etc/letsencrypt/chain.pem"
  tls.key_file          = "/etc/letsencrypt/privkey.pem"

Client config:

[sinks.central_collector]
  type = "vector"
  inputs = ["apache_log"]
  address = "${CENTRAL_ENDPOINT}:9000"
  healthcheck = true

  # Buffer
  buffer.max_events = 500
  buffer.type = "memory"
  buffer.when_full = "block"

  # TLS
  tls.enabled = true
4 replies
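One thing the client config above doesn't set is a CA for verifying the server certificate; a sketch of what that might look like (the bundle path varies by distro and is an assumption):

  # TLS
  tls.enabled = true
  # point verification at a bundle that includes the Let's Encrypt chain,
  # instead of setting tls.verify_certificate = false
  tls.ca_file = "/etc/ssl/certs/ca-certificates.crt"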
gaborzsazsa
@gaborzsazsa

Hi,
I've started evaluating vector for delivering logs from fluent-bit to s3.
I've followed the examples and created a config like this:
[sources.in]
type = "http" # required
address = "172.31.60.17:8080" # required
encoding = "json" # optional, default

Output data

[sinks.out]
bucket = "fluentlogsink" # required
inputs = ["in"] # required
region = "us-east-1" # required, required when endpoint = ""
type = "aws_s3" # required
compression = "gzip"

The logs are showing up in S3 with a .gz extension, however they are still plain text files.
Has anyone experienced something like this and maybe found a solution?
Slawomir Skowron
@szibis_twitter
If you download through the browser, files may be decompressed on the fly. You can compare the download size vs. the size reported on S3.
Ryan Miguel
@renegaderyu
If you need TLS please vote for this issue: timberio/vector#3664
Jesse Szwedko
@jszwedko

@/all

Hey all!

A quick announcement: we are moving from gitter to discord for our community chat. You can join us here: https://discord.gg/jm97nzy (see channels in the vector category).

Details:

As the team supporting vector and building its community, we've found a number of issues using gitter for this purpose:

  • Poor notifications
  • Poor editing experience
  • Poor mobile support

We hope having people come to discord instead will result in more messages being seen and responded to.

We also hope to move more of our general development discussions to discord to make it easier for people to follow along and contribute.

For more detailed support issues, GitHub Issues is still the best place to ensure that they are seen, triaged, and responded to.

The link on the website and other pointers will be updated shortly.

Hope to see you there! We also welcome any feedback on how we can better support the vector community.

abbas ali chezgi
@chezgi_gitlab
please correct this link on the github issue reporting page: https://github.com/timberio/vector/issues/new/choose
1 reply
mcgfenwick
@mcgfenwick
I'm evaluating Vector to replace Fluentd. One thing I have noticed is that the absolute write rate to S3 is significantly slower than Fluentd's. Is there a way to improve throughput?
3 replies
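The replies aren't shown here, but the usual knobs for S3 throughput are batch size and request concurrency; a sketch with option names as they appeared in the docs of that era (verify against your version; values are illustrative):

[sinks.s3_out]
  type = "aws_s3"
  inputs = ["my_logs"]              # hypothetical
  bucket = "my-bucket"              # hypothetical
  region = "us-east-1"
  compression = "gzip"
  batch.max_size = 10485760         # bytes; fewer, larger objects per PUT
  batch.timeout_secs = 300
  request.in_flight_limit = 25      # more concurrent uploads; option name assumed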
Michael Pietzsch
@michaelpietzsch
Hi guys, I got my Vector setup running today. I have a syslog source pushing into a Loki sink, but I am struggling to set up static labels; I'm only getting an agent="vector" label in Grafana.
1 reply
Andrey Afoninsky
@afoninsky

hello
a generic question about periodic health checks:

  • we have "--require-healthy" to check for problems on startup
  • we have unit tests to assist in the development of complex topologies

recently, our kafka instance (a sink) was down and errors started to appear in the console -> so the service stopped working but didn't crash
it would only have fallen over after a restart, since the "--require-healthy" flag is specified and the sink is not healthy

there was a command we could trigger periodically which returned a non-zero exit code if the health check didn't pass -> but it was removed in the latest versions
a generic question: is it possible to set up a health check (e.g. in kubernetes) somehow, any workarounds? thanks

5 replies
Grant Isdale
@grantisdale

Hey all,

Does vector support the Web Identity Provider in STS? This feature was merged into Rusoto in Dec '19 (rusoto/rusoto#1577), but I'm struggling to implement it.

As far as I'm aware, everything is set up correctly and the Web Identity Provider works with our other k8s services (my set-up is also confirmed by this guide: https://dev.to/pnehrer/a-story-of-rusty-containers-queues-and-the-role-of-assumed-identity-kl2), but when I try to put to a CloudWatch log group it won't assume the correct SA.

2 replies
Liran Albeldas
@albeldas
Hi,
If I have multiple sinks and one of them is timing out, do all the others stop operating until all sinks are working?
1 reply
Vyacheslav Rakhinskiy
@rakhinskiy
Hi, how can I use custom grok patterns? For example: https://github.com/padusumilli/postfix-grok/blob/master/postfix-grok-patterns
1 reply