Kruno Tomola Fabro
@ktff
@zzswang kubernetes_logs is in alpha phase so it isn't included in the latest version
夜读书
@db2jlu_twitter
Hi guys, one question: I need to push nginx logs to ClickHouse, but I got an error like this.
Aug 03 22:03:29.275 ERROR sink{name=out type=clickhouse}:request{request_id=2}: vector::sinks::util::sink: Response wasn't successful. response=Response { status: 400, version: HTTP/1.1, headers: {"date": "Tue, 04 Aug 2020 02:03:29 GMT", "connection": "Keep-Alive", "content-type": "text/tab-separated-values; charset=UTF-8", "x-clickhouse-server-display-name": "ch01", "transfer-encoding": "chunked", "x-clickhouse-query-id": "dbb169c2-25be-4634-a38b-90df80bc9114", "x-clickhouse-format": "TabSeparated", "x-clickhouse-timezone": "UTC", "x-clickhouse-exception-code": "117", "keep-alive": "timeout=3", "x-clickhouse-summary": "{\"read_rows\":\"0\",\"read_bytes\":\"0\",\"written_rows\":\"0\",\"written_bytes\":\"0\",\"total_rows_to_read\":\"0\"}"}, body: b"Code: 117, e.displayText() = DB::Exception: Unknown field found while parsing JSONEachRow format: file: (at row 1)\n (version 20.5.4.40 (official build))\n" }
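A hedged guess at a fix, assuming the unknown field is the file field that the file source attaches and that the ClickHouse table simply has no matching column: drop the extra fields with remove_fields before the clickhouse sink. Component names and the field list are placeholders:

[transforms.drop_extra_fields]
  type = "remove_fields"
  inputs = ["nginx_logs"]            # hypothetical upstream source
  fields = ["file", "source_type"]   # fields with no matching ClickHouse column

[sinks.out]
  type = "clickhouse"
  inputs = ["drop_extra_fields"]
  host = "http://ch01:8123"
  table = "nginx_logs"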
Tay
@taythebot

Is it possible for Vector to treat items in a nested array as separate logs? There is no way to import logs that are batched to Vector into ClickHouse without treating them as separate logs...

In the example below, each object in the logs array needs to be treated as its own entity.

Example:

{
    "logs": [
        {
            "event": 1
        },
        {
            "event": 2
        }
    ]
}
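One way this might be handled, as a rough sketch of the lua (version 2) transform Ana suggests below: iterate over the nested array and emit one event per element. The component names, and the assumption that the nested logs field is readable as a plain Lua table, are illustrative rather than a verified config.

[transforms.split_nested_logs]
  type = "lua"
  version = "2"
  inputs = ["my_source"]   # hypothetical upstream component
  hooks.process = """
  function (event, emit)
    local logs = event.log.logs
    if type(logs) == "table" then
      -- emit each element of the nested array as its own log event
      for _, item in ipairs(logs) do
        emit({ log = item })
      end
    else
      -- pass events without a nested array through unchanged
      emit(event)
    end
  end
  """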
Ana Hobden
@Hoverbear
@defusevenue Ohhh that's a good question! I think you'll need to use our Lua v2 transform.
@db2jlu_twitter Looks like you might not be sending valid json?
@Bindu-Mawat I suggest using the memory buffer and allowing Vector to apply back pressure :)
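For reference, a minimal sketch of that memory-buffer suggestion; the sink name and sizes are placeholders:

[sinks.my_sink.buffer]
  type = "memory"
  max_events = 500      # queue size before back pressure kicks in
  when_full = "block"   # block upstream instead of dropping events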
mcgfenwick
@mcgfenwick
I'm parsing a log file which is, in effect, a collection of metrics; the content looks a bit like this:
{"_stream":"tcp_stats","_system_name":"foo.bar.com","ts":"2020-08-04T23:14:27.438506Z","conn_type":"RSTOS0","conn_count":26.0}
{"_stream":"tcp_stats","_system_name":"foo.bar.com","ts":"2020-08-04T23:14:27.438506Z","conn_type":"RSTRH","conn_count":5.0}
What's the best approach to convert these to metrics for Prometheus?
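One hedged way to shape this: parse each line with json_parser, then map it to a metric with log_to_metric and expose it through a prometheus sink. The component names and the choice of a gauge are assumptions for illustration:

[transforms.tcp_stats_json]
  type = "json_parser"
  inputs = ["my_file_source"]   # hypothetical file source

[transforms.tcp_stats_metric]
  type = "log_to_metric"
  inputs = ["tcp_stats_json"]

  [[transforms.tcp_stats_metric.metrics]]
    type = "gauge"
    field = "conn_count"
    name = "tcp_conn_count"
    tags.conn_type = "{{conn_type}}"
    tags.system_name = "{{_system_name}}"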
Anderson Ferneda
@dersonf
Hi everyone, I'm very happy using vector; I send tons of logs to kafka with this wonderful tool. But I have some questions about regex: I've read the documentation many times and can't get far. I have this log from log4j:
2020-08-04 16:56:58,474 INFO [XNIO-1 I/O-14] [USER_MESSAGES] {"key1":"value","key2":"value","key3":"value","key4":"value","key5":"value","key6":"value"}
When this line passes through the file source, the log looks like this:
Anderson Ferneda
@dersonf
{message:"2020-08-04 16:56:58,474 INFO [XNIO-1 I/O-14] [USER_MESSAGES] {\"key1\":\"value\",\"key2\":\"value\",\"key3\":\"value\",\"key4\":\"value\",\"key5\":\"value\",\"key6\":\"value\"}"
Is there any way to remove the \ or remove the beginning of the message up to the {content}?
I made something using regex_parser.
Anderson Ferneda
@dersonf
[transforms.filein_regex_message]
type = "regex_parser"
inputs = ["file"]
drop_field = true
field = "message"
patterns = ['^(?P<day>[\d-]+) (?P<hour>[\d:,\d]+) (?P<loglevel>.) {"key":"(?P<key>.)",(?P<key2>.*)}$']
When I try to get the \ removed I receive an error; I'm probably doing something wrong, but I couldn't find out what.
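A hedged sketch of an alternative that avoids fighting the escaping: capture the JSON payload into a single field with regex_parser, then parse that field with json_parser. The pattern and field names are illustrative, not a tested config:

[transforms.filein_regex_message]
  type = "regex_parser"
  inputs = ["file"]
  field = "message"
  drop_field = true
  patterns = ['^(?P<day>[\d-]+) (?P<hour>[\d:,]+) (?P<loglevel>\w+) \[(?P<thread>[^\]]+)\] \[(?P<category>[^\]]+)\] (?P<payload>\{.*\})$']

[transforms.filein_json_payload]
  type = "json_parser"
  inputs = ["filein_regex_message"]
  field = "payload"
  drop_field = true
  drop_invalid = false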
Anderson Ferneda
@dersonf
Thanks, I found the problem: the "\" is just a newline; if I remove it and put another regex in its place, it vanishes.
Ashwanth Goli
@iamashwanth

Hi everyone, I came across vector recently and am thinking of replacing my existing filebeat + logstash pipeline with it. For some reason, I am not able to get the multi-line parsing working.

I am trying to capture lines between tokens TESTS and TESTC, but vector is dumping all the lines to the sink. What am I doing wrong here?

[sources.test_run_log]
  # General
  type = "file"
  ignore_older = 3600
  include = ["/path_to_log.log"]
  start_at_beginning = false

  # Priority
  oldest_first = true

  [sources.test_run_log.multiline]
    start_pattern = ".*TESTS.*"
    mode = "halt_with"
    condition_pattern = ".*TESTE.*"
    timeout_ms = 1000
Bindu-Mawat
@Bindu-Mawat
Hi
I am seeing this error when configuring http as a source:
Aug 05 21:55:03.249 ERROR vector: Configuration error: "/etc/vector/vector.toml": unknown variant http, expected one of docker, file, journald, kafka, kubernetes, logplex, prometheus, socket, splunk_hec, statsd, stdin, syslog, vector for key sources.bindu-in.type
^C
[1]+ Exit 78
Jesse Szwedko
@jszwedko
@Bindu-Mawat that should work. What version of vector are you using?
Bindu-Mawat
@Bindu-Mawat
Hi I am using vector 0.8.2 (v0.8.2 x86_64-unknown-linux-musl 2020-03-06)
Jesse Szwedko
@jszwedko
It's possible that version did not have the http source, let me check. The current version is 0.10.0 if you are able to upgrade
@Bindu-Mawat that source was added in 0.9.0. I would upgrade to the latest though, 0.10.0
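For reference, a minimal sketch of an http source once on 0.9.0 or later; the address and encoding are placeholders:

[sources.bindu-in]
  type = "http"
  address = "0.0.0.0:8080"   # listen address for incoming requests
  encoding = "json"          # or "text" / "ndjson" depending on the payload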
Bindu-Mawat
@Bindu-Mawat
Thanks Jesse. I'll see if it is possible for me.
Grant Isdale
@grantisdale

Hey all,

AWS S3 Sink Q:

When using server_side_encryption = "aws:kms" I am trying to pass the relevant ssekms_key_id but the key exists in a different account (alongside the S3 bucket) from where the cluster itself exists.

I have used the assume_role key to assume a role in the target account (where the S3 bucket lives); this works for the aws_cloudwatch_logs sink: by assuming a role in another account it 'knows' to look for the specific log group in the target account, not the account that the cluster is running in. But I'm currently getting an error because vector is unable to find the KMS key; it is looking in the account where the cluster exists, not the account where the assumed role exists.
Is there something I should be doing differently? Is this generally possible for KMS keys the way it is for cloudwatch log groups?
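A hedged sketch of the shape involved. One thing that sometimes helps with cross-account SSE-KMS is passing the full key ARN (which carries the account ID) instead of a bare key ID; the bucket, role, region, and key values below are placeholders:

[sinks.s3_out]
  type = "aws_s3"
  inputs = ["my_transform"]   # hypothetical upstream component
  bucket = "target-account-bucket"
  region = "us-east-1"
  assume_role = "arn:aws:iam::123456789012:role/vector-s3-writer"
  encoding = "ndjson"
  compression = "gzip"
  server_side_encryption = "aws:kms"
  ssekms_key_id = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"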

夜读书
@db2jlu_twitter
Hi all, is it possible to transform and parse multiple JSON fields? Thanks!
Ashwanth Goli
@iamashwanth

@jszwedko Is it possible to embed the entire event.log as a json value while writing to a sink?

Kafka REST proxy expects the records in the following format; I am having trouble connecting the HTTP sink to my Kafka REST endpoint because of this.

{
   "records": [
        {"value": event.log}
    ]
}
Ghost
@ghost~5cd19fe2d73408ce4fbfa0bf
Hi, I have a problem with logstash JSON.
Can I save part of the log, like the path and response status, to gcp_stackdriver_logs?
mcgfenwick
@mcgfenwick
What's the underlying data type for the gauge metric? I'm getting a lot of output values of zero; the input values are quite large.
Jesse Szwedko
@jszwedko
@mcgfenwick the internal representation is a 64 bit float
mcgfenwick
@mcgfenwick
Hmm, ok, then my problem is something else. Thanks
mcgfenwick
@mcgfenwick
My problem appears to be in the way the prometheus sink handles my metrics. The code generates about 20 metrics per second, but when I scrape the prometheus sink, I appear to get the last metric generated, or something that does not represent the average or anything more useful. Is there a way to change this?
Jesse Szwedko
@jszwedko
Gauges typically represent the current value of a metric. Vector is capable of aggregating samples into histograms. I'm not seeing a way to have it take an average though
What is the metric? That would help inform the best representation for it
mcgfenwick
@mcgfenwick
It's a count of packets; I'm not exactly sure if it's a per-second value or some other interval, and it would take quite a bit of digging to find out. The value varies from billions to zero.
mcgfenwick
@mcgfenwick
But my question is really more about how the prometheus sink deals with the metrics it receives.
Jesse Szwedko
@jszwedko
If I had to guess, you probably want to represent it as a counter. The Prometheus sink just exposes a scrape endpoint for Prometheus. You can curl it yourself to see the values.
(on my phone or I'd dig up more resources)
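For reference, a hedged sketch of that setup; the component names, address, and namespace are placeholders:

[sinks.prom_out]
  type = "prometheus"
  inputs = ["tcp_stats_metric"]
  address = "0.0.0.0:9598"
  namespace = "vector"

# then inspect what is actually exposed, e.g.:
#   curl http://localhost:9598/metrics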
mcgfenwick
@mcgfenwick
ack
guy
@guy1976_gitlab
Question: how can I run a different regex_parser transform depending on the pod label?
Rick Richardson
@rrichardson
@guy1976_gitlab - create a filter that only accepts that pod label, then use that filter as the source for your regex_parser transform.
Jesse Szwedko
@jszwedko
You might also be interested in the swimlanes transform https://vector.dev/docs/reference/transforms/swimlanes/
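A hedged sketch of how swimlanes could route on a pod label before separate regex_parser transforms; the lane conditions and the pod-label field name are assumptions, not a verified config:

[transforms.route_by_app]
  type = "swimlanes"
  inputs = ["k8s_logs"]   # hypothetical kubernetes source

  [transforms.route_by_app.lanes.app_a]
    "pod_labels.app.eq" = "app-a"

  [transforms.route_by_app.lanes.app_b]
    "pod_labels.app.eq" = "app-b"

[transforms.parse_app_a]
  type = "regex_parser"
  inputs = ["route_by_app.app_a"]   # each lane is addressed as <transform>.<lane>
  field = "message"
  patterns = ['^(?P<level>\w+) (?P<msg>.*)$']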
Ayush Goyal
@perfectayush
Hi, I am running into an issue with vector: it's not closing file descriptors on rotation of files via logrotate (file source). These are nginx logs. This is happening for already-rotated *.access.log.1 files which are rotated a second time to *.access.log.2.gz. These deleted file descriptors accumulate over time and we have to restart vector to clear disk alerts. Fingerprinting is currently configured with the checksum strategy, and the file source is configured to watch only *.access.log files.
夜读书
@db2jlu_twitter
Hello all, is it possible for the clickhouse sink to store metrics? Thanks!