Binary Logic
@binarylogic
First message in our new community. I'm testing the experience as well as threaded conversations.
11 replies
Vlad Gorodetsky
@bai
:wave: Hey friends! I noticed that when statsd tags are encoded, you perform a sort operation on the tags here: https://github.com/timberio/vector/blob/aed6f1bf1cb0d3d10b360e16bd118665a49c4ea5/src/sinks/statsd.rs#L118 which is not required by the protocol and adds an extra O(n log n) operation. Is there any reason for doing that?
14 replies
Dan Palmer
@danpalmer
@binarylogic I've just tried to set up Vector and found that the "datadog_metrics" sink doesn't exist in 0.5.0. Is this in a point release in 0.5.x, or is it available only in nightly? I'm not sure we want to run nightly in production, but datadog aggregation is currently our main use-case. What do you advise?
27 replies
Vlad Gorodetsky
@bai
Ah, one more question - from what I see, tags are stored as a HashMap, which kind of breaks the protocol a bit (and please correct me if I'm wrong), since statsd/dogstatsd allow duplicate tags with different values. Is that a bug or a feature?
3 replies
Dan Palmer
@danpalmer
@binarylogic not wanting to pile things on for the 0.6.0 release, but this might be a quick fix and could save some confusion. timberio/vector#1311
2 replies
Dan Palmer
@danpalmer
@loony-bean I'm having some trouble getting a statsd aggregator working. I've got a statsd source bound to localhost 8126 on a bunch of machines, with a statsd sink out to one machine, which then forwards on to Datadog.
The issue I'm seeing is on the forwarders, where I get the error: ERROR sink{name=metrics_out type=statsd}: vector::sinks::statsd: error sending datagram: Os { code: 22, kind: InvalidInput, message: "Invalid argument" }
10 replies
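For reference, a minimal sketch of the topology described above. The names, ports, and addresses are illustrative rather than Dan's actual config, and the statsd source/sink option names are assumptions; check the statsd reference for the exact options in your version:

[sources.metrics_in]
  type    = "statsd"
  address = "127.0.0.1:8126"   # local statsd listener on each machine

[sinks.metrics_out]
  type    = "statsd"
  inputs  = ["metrics_in"]
  address = "aggregator.internal:8125"   # assumed address of the central forwarder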
Dan Palmer
@danpalmer
@binarylogic do you think 0.6.0 or a nightly will go out today with the fix from timberio/vector#1316 for timberio/vector#1312? This is currently blocking us from rolling out Vector in production. It would be great to get an idea so I can plan my time around rolling it out.
1 reply
Quốc Bảo
@baonq243
vector[110358]: Dec 06 11:51:38.284 WARN source{name="my_source_id"}:file_server: file_source::file_server: Problem writing checkpoints: Os { code: 13, kind: PermissionDenied, message: "Permission denied"
6 replies
How can I fix this issue?
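The PermissionDenied error above typically points at the directory where the file source writes its checkpoints. A minimal sketch of one likely fix, assuming the global data_dir option and a path that the user running Vector can write to (the paths are just examples):

data_dir = "/var/lib/vector"   # must be writable by the user running vector

[sources.my_source_id]
  type    = "file"
  include = ["/var/log/myapp/*.log"]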
Binary Logic
@binarylogic
Perfect, glad that worked
Dan Palmer
@danpalmer
Hey team, was there a nightly build last night? I've re-deployed Vector to hopefully pick up the fixes to the statsd forwarder, but it still doesn't seem to be working (same error).
2 replies
fschaffa
@fschaffa
anyone working on kerberos support? looking to sink data into a kerberized kafka.
2 replies
Hemendra Patel
@Hemendra.patel_gitlab
I'm working with NestJS to push request/response details to a TCP server using Vector. I have successfully implemented this using C#, but I'm facing an issue with NestJS. I'm getting connection open and close messages, but I'm not able to get the logs into Vector over TCP.
4 replies
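If it helps with debugging the Vector side, here is a minimal sketch of a TCP listener with a console sink, assuming the socket source from recent versions (the port and names are made up):

[sources.nest_tcp]
  type    = "socket"
  mode    = "tcp"
  address = "0.0.0.0:9000"

[sinks.debug_out]
  type     = "console"
  inputs   = ["nest_tcp"]
  encoding = "text"

If events show up here with a simple test client but not with the NestJS app, the problem is likely on the client side (e.g. the socket closing before the write is flushed).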
Dan Palmer
@danpalmer
@loony-bean does the fact that this is merged: timberio/vector#1263 - mean that this comment https://github.com/timberio/vector/blob/master/website/docs/reference/sinks/datadog_metrics.md.erb#L42 is now out of date?
4 replies
Rafael Gumieri
@gumieri
Hi! First of all, thank you for the new version!
One thing that I want to point out is that the aws_s3 sink doc is showing that support for the ORC and Parquet formats is planned for this version.
1 reply
Felipe Cecagno
@fcecagno
Hi, I'd like to use the json_parser to parse messages that might contain an array, and split the message into multiple events, one for each element of the array. Is that possible?
3 replies
laazy
@laazy
Hi, everyone. I want to use Vector to track incremental changes to a log-like file: whenever bytes are appended to that file, Vector should produce a log event. Is that possible? BTW, I also have a separate file that records the length of that log-like file.
4 replies
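A minimal sketch of pointing the file source at such a file, assuming it is line-delimited so Vector emits one event per appended line (the path and the console sink are illustrative):

[sources.app_log]
  type    = "file"
  include = ["/var/log/app/growing.log"]

[sinks.out]
  type     = "console"
  inputs   = ["app_log"]
  encoding = "text"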
Francois Joulaud
@francoisj_gitlab
Hi, is there any way to split the Vector TOML config into several files?
1 reply
Francois Joulaud
@francoisj_gitlab
Another unrelated question. It seems that the Rosie transform was postponed. Does anyone here know the reason?
1 reply
jogster
@jogster
Hi all, I have started using Vector for monitoring metrics with statsd_source/prometheus_sink. Currently this works fine on a single machine, with Prometheus running on the same machine. Does anyone have advice on the setup when collecting from a local statsd but exposing metrics to a remote Prometheus: do I need a local tcp_source and a remote prometheus_sink? I am not clear on how to connect Vector instances together...
5 replies
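One way to read the question: run a Vector agent with the statsd source on each app machine, forward to a central Vector, and expose the prometheus sink there. A rough sketch under that assumption, using the vector source/sink to link the two instances (hosts, ports, and names are made up):

# On each application machine
[sources.local_statsd]
  type    = "statsd"
  address = "127.0.0.1:8125"

[sinks.to_central]
  type    = "vector"
  inputs  = ["local_statsd"]
  address = "central-host:9000"

# On the central machine
[sources.from_agents]
  type    = "vector"
  address = "0.0.0.0:9000"

[sinks.prom]
  type    = "prometheus"
  inputs  = ["from_agents"]
  address = "0.0.0.0:9598"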
Thomas Silvestre
@thosil
Hi, I'd like to know if Vector could be used as a "gateway" to connect separate Kafka instances, so that an event produced by instance A would be forwarded to instance B.
Thanks
1 reply
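A rough sketch of that gateway idea, assuming both a kafka source and a kafka sink are available in the version in use (brokers, topics, and group id are made up):

[sources.cluster_a]
  type              = "kafka"
  bootstrap_servers = "kafka-a:9092"
  topics            = ["events"]
  group_id          = "vector-gateway"

[sinks.cluster_b]
  type              = "kafka"
  inputs            = ["cluster_a"]
  bootstrap_servers = "kafka-b:9092"
  topic             = "events"
  encoding          = "json"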
Frédéric Haziza
@silverdaz

Hi, I must be doing something wrong, but I'd like to collect my Python logs (from multiple microservices running in Docker containers) into a centralized place, itself a container.
For that, I configured my Python loggers to send UDP datagrams to a given location, and at that location I have Vector running with the following config:

[sources.my_source_id]
  type    = "socket"
  address = "0.0.0.0:9000"
  mode    = "udp"

[sinks.out]
  inputs   = ["my_source_id"]
  type     = "console"
  encoding = "text"

However, I get the following error:

INFO vector: Log level "info" is enabled.
Jan 08 17:16:14.241  INFO vector: Loading config. path="/etc/vector/vector.toml"
Jan 08 17:16:14.246  ERROR vector: Configuration error: unknown variant `socket`, expected one of `docker`, `file`, `journald`, `kafka`, `splunk_hec`, `statsd`, `stdin`, `syslog`, `tcp`, `udp`, `vector` for key `sources.my_source_id.type`
I'm running the latest docker image (the alpine one).
What am I missing?
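Judging from the list of accepted variants in that error, the installed image predates the unified socket source. A sketch of the equivalent config for that older version, assuming the udp source type takes the same address field:

[sources.my_source_id]
  type    = "udp"
  address = "0.0.0.0:9000"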
Frédéric Haziza
@silverdaz
Issue tracked on Github: timberio/vector#1491
And I got the answer ;)
Thanks
Spencer Dixon
@SpencerCDixon
I have an integration question that someone here may be able to help with. I have a native macOS app and I want to use Vector for log collection, sending to S3 or CloudWatch. What would be the recommended source type? Right now I have my app set up to fork a new process running Vector listening on stdin, and I write logs to that. It seems to work great, but I wanted to see if the authors had different recommendations.
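For the stdin approach described above, a minimal sketch of the Vector side, assuming an aws_s3 sink with a made-up bucket and region:

[sources.app_logs]
  type = "stdin"

[sinks.s3]
  type     = "aws_s3"
  inputs   = ["app_logs"]
  bucket   = "my-app-logs"    # assumed bucket name
  region   = "us-east-1"
  encoding = "ndjson"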
Spencer Dixon
@SpencerCDixon
Also, won't I need AWS creds to send logs to S3/CloudWatch? I can't give those to clients for security reasons.