Lucio Franco
@LucioFranco
What type of instance are you on? And what does your curl command look like?
Samuel Cormier-Iijima
@sciyoshi
thanks for the quick response @LucioFranco! it's a standard EC2 instance, m5.xlarge. here's the command I'm running:
admin@ip-172-20-98-28:~$ sudo docker run -it --entrypoint /bin/sh -e LOG=debug --rm --name vector -v $PWD/vector.toml:/etc/vector/vector.toml -v /var/lib/docker:/var/lib/docker -v /var/run/docker.sock:/var/run/docker.sock -v $PWD/vector:/var/lib/vector -v /var/log/pods:/var/log/pods timberio/vector:nightly-alpine
/ # apk add curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/3) Installing nghttp2-libs (1.39.2-r0)
(2/3) Installing libcurl (7.66.0-r0)
(3/3) Installing curl (7.66.0-r0)
Executing busybox-1.30.1-r3.trigger
OK: 10 MiB in 19 packages
/ # curl http://169.254.169.254/latest/dynamic/instance-identity/document
{
  "accountId" : "------------",
  "architecture" : "x86_64",
  "availabilityZone" : "ca-central-1a",
  "billingProducts" : null,
  "devpayProductCodes" : null,
  "marketplaceProductCodes" : null,
  "imageId" : "ami-0xxxxxx",
  "instanceId" : "i-0xxxxxx",
  "instanceType" : "m5.xlarge",
  "kernelId" : null,
  "pendingTime" : "2020-02-11T15:42:59Z",
  "privateIp" : "172.20.98.28",
  "ramdiskId" : null,
  "region" : "ca-central-1",
  "version" : "2017-09-30"
}
Lucio Franco
@LucioFranco
ah looks like you're running vector within a container, that may be the reason
Samuel Cormier-Iijima
@sciyoshi
the curl command is also running from within the container
Lucio Franco
@LucioFranco
@sciyoshi can you try running the docker command with --net=host?
Samuel Cormier-Iijima
@sciyoshi
oh yup, that worked!! thank you :) not sure why curl would have been able to connect?
Lucio Franco
@LucioFranco
I would assume black magic :) glad that worked! let us know if you have any other issues.
Samuel Cormier-Iijima
@sciyoshi
I have another quick question - the json_parser transform seems to always remove the source field when drop_field is true. This seems inconsistent with the behavior of e.g. grok_parser, which only removes it when the parse succeeds. Is that behavior intentional?
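For reference, a minimal json_parser config exercising the option in question might look like this (the transform and input names here are hypothetical):

```toml
[transforms.parse_json]
  type = "json_parser"
  inputs = ["my_source"]  # hypothetical input name
  field = "message"       # the source field being parsed
  drop_field = true       # currently removes "message" even when parsing fails
```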
Binary Logic
@binarylogic
Hey @sciyoshi , the behavior should be consistent across the two. I've opened timberio/vector#1861 to fix that.
Sebastian YEPES
@syepes
Small question: is it currently possible to ingest (receive from UDP, TCP, or file) metrics using the line protocol?
2 replies
Samuel Cormier-Iijima
@sciyoshi
@LucioFranco update on the original issue - I'm not able to use --net=host, but also it seems that it's only the /latest/api/token endpoint that is timing out from within a container. It seems that the API that should be used instead is the IMDS metadata - botocore updated due to this issue and you can see the changes here: boto/botocore#1895
7 replies
Andrey Afoninsky
@afoninsky
does vector have loggly support? haven't found any issues about it: https://github.com/timberio/vector/search?q=loggly&unscoped_q=loggly
1 reply
Aleksey Shirokih
@freeseacher
Hi! How can I transform something like "file":"/var/log/mysystem/subsystem-component_name-07.log" into component_name?
1 reply
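One way to sketch this is a regex_parser transform with a named capture group (transform and input names are hypothetical):

```toml
[transforms.extract_component]
  type = "regex_parser"
  inputs = ["my_source"]  # hypothetical input name
  field = "file"
  # Named capture groups become event fields, so this would add a
  # "component" field containing e.g. "component_name".
  regex = 'subsystem-(?P<component>\w+)-\d+\.log$'
```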
Ana Hobden
@Hoverbear
Glad you got it!
Aleksey Shirokih
@freeseacher
As I can see there is a type https://vector.dev/docs/about/data-model/metric/#aggregated_summary but how can I get it? I am interested in the Prometheus summary, of course. There are some references to timberio/vector#710 but I can't catch the point.
Ana Hobden
@Hoverbear
@freeseacher if you're taking in logs and want to output metrics please try https://vector.dev/docs/reference/transforms/log_to_metric/
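For illustration, a minimal log_to_metric sketch (the field and metric names are hypothetical); note the set of supported metric types:

```toml
[transforms.to_metric]
  type = "log_to_metric"
  inputs = ["my_source"]  # hypothetical input name

  [[transforms.to_metric.metrics]]
    # type must be one of "counter", "gauge", "histogram", "set"
    type = "histogram"
    field = "duration"           # hypothetical log field holding a number
    name = "request_duration"
```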
Aleksey Shirokih
@freeseacher
Yes, I am talking about metrics and already found log_to_metric, but it does not help: type must be one of "counter", "gauge", "histogram", "set", but not quantile.
Samuel Cormier-Iijima
@sciyoshi
I am having issues with Docker log rotation using the default json-file logging driver - Vector stops picking up logs after the file is rotated
25 replies
Cédric Da Fonseca
@Kwelity
Hi, I'm not sure I understand how the regex transform works.
I'm trying to parse only error log messages, so I have a regexp starting with "^ERROR.*". I'm expecting the transform to drop logs that don't match, but the log is parsed and its content is put in the "message" field.
I tried to play with drop_field and field but it didn't work.
What would be the best solution for my use case?
2 replies
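Assuming the regex_parser's drop_failed option is available in the version in use, a sketch of the intended behaviour might look like this (transform, input, and capture names are hypothetical):

```toml
[transforms.only_errors]
  type = "regex_parser"
  inputs = ["my_source"]  # hypothetical input name
  field = "message"
  regex = '^ERROR (?P<error_message>.*)'
  drop_failed = true      # drop events whose message does not match
```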
Heinz N. Gies
@Licenser
it worked :D
Ana Hobden
@Hoverbear
Gitter: It works sometimes! :)
mlki
@MlkiTouch_twitter

Hello, has someone tried the AWS S3 sink with Ceph? For me it doesn't work: for the healthcheck, Ceph returns a 404 response code for the HEAD method, while it returns a 200 response code when I'm using mc ls. Here is the config:

[sinks.ceph]
  # REQUIRED - General
  type = "aws_s3" # must be: "aws_s3"
  inputs = ["syslog"] # example
  bucket = "vector" # example
  compression = "none" # example, enum
  endpoint = "http://my-ceph.com:9000"

  # OPTIONAL - Object Names
  filename_append_uuid = true # default
  filename_extension = "log" # default
  filename_time_format = "%s" # default
  key_prefix = "date=%F/" # default
  # REQUIRED - requests
  encoding = "text" # example, enum

  # OPTIONAL - General
  healthcheck = true # default

I also set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. When I try to send a log it returns:
Feb 28 16:40:05.185 ERROR sink{name=ceph type=aws_s3}: vector::sinks::util::retries: encountered non-retriable error. error=<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code><BucketName>http://my-ceph.com:9000</BucketName><RequestId>tx00000000000000c51a948-005e594265-430c8a-myhost-1</RequestId><HostId>myhostid</HostId></Error>
Feb 28 16:40:05.185 ERROR sink{name=ceph type=aws_s3}: vector::sinks::util: request failed. error=<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code><BucketName>http://my-ceph.com:9000</BucketName><RequestId>tx00000000000000c51a948-005e594265-430c8a-myhost-1</RequestId><HostId>myhostid</HostId></Error>
Could you help me with that please ? :-) Have a nice day

mahsoud
@mahsoud
Hey everyone, just started playing with vector agent on Windows to collect logs from a legacy application. In my case, when the application starts it writes a very long line into the log file (\u0000 on repeat)... what transform would you suggest to use to drop that one line?
Andrey Afoninsky
@afoninsky

hello
https://github.com/prometheus/statsd_exporter

Note that timers will be accepted with the ms, h, and d statsd types. The first two are timers and histograms and the d type is for DataDog's "distribution" type. The distribution type is treated identically to timers and histograms.

Does vector support the DD type? Do we need to create an issue?

2 replies
Andrey Afoninsky
@afoninsky
https://medium.com/@valyala/improving-histogram-usability-for-prometheus-and-grafana-bc7e5df0e350
does it make sense to create an issue with an implementation request for the prometheus sink?
pros: a better histogram (less cardinality, more accuracy)
cons: VictoriaMetric specific only, maybe it's useful in specific cases only
1 reply
ChethanU
@ChethanUK
Is there an official Helm chart?
2 replies
Bill
@bill-bateman

Hey - I have a small problem with reloading configurations. If the source is http / logplex / splunk_hec (all of which use Warp) and you change the configuration but don't change the port, I get a configuration error (address already in use) and the reload fails. The workaround is to change the port to a new value; after a successful reload you can then change the port back to the original.

It's not a huge issue, but I wanted to see if it was known.

ERROR vector::topology: Configuration error: Source "in": Address already in use (os error 48)
ERROR vector: Reload was not successful.
leidruid
@leidruid_gitlab
hello! Is there a correct way to specify multiple targets in elasticsearch sink, as in logstash?
9 replies
Andrey Afoninsky
@afoninsky
Hello, please correct me if I'm wrong: the "vector" source is a gRPC server and I can send logs/metrics directly using https://github.com/timberio/vector/blob/master/proto/event.proto ?
2 replies
Andrey Afoninsky
@afoninsky
another question: what's the best approach to implement log rotation / truncation in https://vector.dev/docs/reference/sinks/file/ and the docker image? Do you want an issue about it, or should it be achieved using external tools? For now, I'm launching a logrotate docker image as a sidecar :)
2 replies
Andrey Afoninsky
@afoninsky
Please take a look if you have free time; I can't tell whether it's a bug or my misunderstanding of the documentation :) thx
timberio/vector#2036
2 replies
Andrey Afoninsky
@afoninsky
one more question :) the "file sink" does not recreate a file if the old one was deleted using "rm" - is that correct behaviour?
7 replies
gtie
@gtie
anyone else seeing behavior like timberio/vector#2080 ?
mmacedo
@mmacedoeu
hi, building vector using make or using docker as stated on https://vector.dev/docs/setup/installation/manual/from-source/ generates a debug-mode binary
1 reply
is there any instruction I am missing to generate a release build?
mmacedo
@mmacedoeu
I found that hotmic is deprecated, https://github.com/timberio/vector/blob/master/lib/tracing-metrics/Cargo.toml#L9 do you plan to replace it with crate metrics ?
6 replies
Serhii M.
@mikhno-s

Hi, guys! Small question regarding the config for the kafka sink - I have the following piece of config:

[sinks.kafka]
  type = "kafka"
  inputs = ["json"]
  bootstrap_servers = "kafka-server:9092"
  topic = "vector"
  compression = "none"
  healthcheck = true

  buffer.type = "disk"
  buffer.max_size = 104900000
  buffer.when_full = "block"

  encoding.codec = "json"

And when I try to start vector I get:

unknown variant `codec`, expected `text` or `json` for key `sinks.kafka`

What's wrong with config?

Thanks!

Serhii M.
@mikhno-s
ok, it looks like the docs contain a non-working example - it works fine with just encoding = "json"
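With the plain string form, the sink section from above becomes (trimmed to the relevant keys):

```toml
[sinks.kafka]
  type = "kafka"
  inputs = ["json"]
  bootstrap_servers = "kafka-server:9092"
  topic = "vector"
  encoding = "json"  # plain string form; the encoding.codec table form is not yet released
```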
Ana Hobden
@Hoverbear
@mmacedoeu yes! @lukesteensen has a PR #1953 to do that
@mikhno-s yes unfortunately the docs are showing a new feature we're about to release
gtie
@gtie
How do people monitor vector in production? Figuring out that the service is up is fine, but how can you tell if it is indeed capable of shipping data to its sink(s)?
3 replies
carumusan
@carumusan
Is timber.io still being supported? The site is currently broken for me after logging in.
1 reply
Mads
@MadsRC_gitlab

I'm having issues with
unknown variant `codec`, expected `text` or `json` for key `sinks.some_sink` for several different types of sinks... It only works when specifying encoding = "text" or encoding = "json" - Problem is, I need some of the options under encoding.

Tried looking at the source, but I'm not familiar with Rust enough to locate the error myself.

Anyone know if this is a known bug?

1 reply
Andrey Afoninsky
@afoninsky
is there a way to trigger health check periodically? will "vector --dry-run --require-healthy --quiet" do the job?
1 reply
Alex
@Alexx-G
Hi,
Is it possible to route a log stream to a specific Splunk index using the splunk_hec sink?
In fluent* it's done by adding an "index" field and enabling the "send_raw" option. However, I couldn't find any example for vector.
Thanks.
8 replies
Madhurranjan Mohaan
@madhurranjan_twitter
Hi, is there anyone using vector to stream logs from envoy and upload them to S3 or GCS?
2 replies
Chris Holcombe
@cholcombe973
Hi everyone. I was thinking of giving vector a try but I'm in need of some clarification. It looks like there's a required schema for every log event. Is that correct?
4 replies
Madhurranjan Mohaan
@madhurranjan_twitter
Hi, what is the recommended limit in terms of bytes per record? The website says it's not a replacement for an analytics record. How do you define an analytics record? Based on bytes / number of fields / something else?
1 reply
Pasha Radchenko
@ep4sh
hey folks, I'm new to Vector, just a quick question: can Vector output to AWS SQS?
1 reply
Am I right that Vector is a log shipper like Filebeat?
2 replies