Bindu-Mawat
@Bindu-Mawat
I have one more question. I am using the TCP source. Does Vector send any response (an ACK?) to the TCP client on reception of the data?
Moris
@KramerMoris_twitter
Hey guys, amazing work on Vector.
I need to push logs from S3 to Elasticsearch. Can Vector help me here? If so, please point me to the relevant place in the docs.
1 reply
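Newer Vector releases include an aws_s3 source that reads objects when S3 event notifications arrive on an SQS queue. A minimal sketch assuming that source is available; the region, queue URL, endpoint, and index below are placeholders:

[sources.s3_logs]
  # Assumes a Vector version that ships the aws_s3 source (SQS-notification based).
  type = "aws_s3"
  region = "us-east-1"
  sqs.queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/vector-s3-events"

[sinks.es]
  type = "elasticsearch"
  inputs = ["s3_logs"]
  endpoint = "http://localhost:9200"   # placeholder Elasticsearch endpoint
  index = "s3-logs-%F"                 # placeholder index name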
zzswang
@zzswang

I tried following the guide in

https://github.com/timberio/vector/blob/master/distribution/kubernetes/vector-namespaced.yaml

but it doesn't work. I'm a little confused, can anyone help me? Thanks.

Aug 02 04:48:03.520  INFO vector: Log level "info" is enabled.
Aug 02 04:48:03.522  INFO vector: Loading configs. path=["/etc/vector/managed.toml"]
Aug 02 04:48:03.525 ERROR vector: Configuration error: "/etc/vector/managed.toml": unknown variant `kubernetes_logs`, expected one of `docker`, `file`, `generator`, `http`, `internal_metrics`, `journald`, `kafka`, `logplex`, `prometheus`, `socket`, `splunk_hec`, `statsd`, `stdin`, `syslog`, `vector` for key `sources.kubernetes_logs.type`

Is the doc in distribution/kubernetes available now?

It seems kubernetes_logs is not working.
Kruno Tomola Fabro
@ktff
@zzswang kubernetes_logs is in the alpha phase, so it isn't included in the latest version
夜读书
@db2jlu_twitter
Hi guys, one question: I need to push nginx logs to ClickHouse, but I got an error like this.
Aug 03 22:03:29.275 ERROR sink{name=out type=clickhouse}:request{request_id=2}: vector::sinks::util::sink: Response wasn't successful. response=Response { status: 400, version: HTTP/1.1, headers: {"date": "Tue, 04 Aug 2020 02:03:29 GMT", "connection": "Keep-Alive", "content-type": "text/tab-separated-values; charset=UTF-8", "x-clickhouse-server-display-name": "ch01", "transfer-encoding": "chunked", "x-clickhouse-query-id": "dbb169c2-25be-4634-a38b-90df80bc9114", "x-clickhouse-format": "TabSeparated", "x-clickhouse-timezone": "UTC", "x-clickhouse-exception-code": "117", "keep-alive": "timeout=3", "x-clickhouse-summary": "{\"read_rows\":\"0\",\"read_bytes\":\"0\",\"written_rows\":\"0\",\"written_bytes\":\"0\",\"total_rows_to_read\":\"0\"}"}, body: b"Code: 117, e.displayText() = DB::Exception: Unknown field found while parsing JSONEachRow format: file: (at row 1)\n (version 20.5.4.40 (official build))\n" }
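ClickHouse error code 117 ("Unknown field found while parsing JSONEachRow") usually means the event carries fields that have no matching column in the target table. One possible fix, sketched here with assumed transform, field, and table names, is to drop the extra fields before the clickhouse sink:

[transforms.strip_extra_fields]
  # Hypothetical: drop Vector metadata fields that have no ClickHouse column.
  # Adjust the list to match the actual table schema.
  type = "remove_fields"
  inputs = ["nginx_parsed"]
  fields = ["host", "source_type"]

[sinks.out]
  type = "clickhouse"
  inputs = ["strip_extra_fields"]
  host = "http://ch01:8123"   # placeholder ClickHouse HTTP endpoint
  table = "nginx_logs"        # placeholder table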
defusevenue
@defusevenue

Is it possible for Vector to treat items in a nested array as separate logs? There is no way to import logs that are batched to Vector into ClickHouse without treating them as separate logs...

In the example below, each item in the logs array needs to be treated as its own entity.

Example:

{
    "logs": [
        {
            "event": 1
        },
        {
            "event": 2
        }
    ]
}
Ana Hobden
@Hoverbear
@defusevenue Ohhh that's a good question! I think you'll need to use our Lua v2 transform.
@db2jlu_twitter Looks like you might not be sending valid JSON?
@Bindu-Mawat I suggest using the memory buffer and allowing Vector to apply back pressure :)
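For the nested-array question above, a rough sketch of what the Lua v2 transform could look like; the source name is hypothetical, and it assumes the incoming event carries the logs array shown in the example:

[transforms.explode_logs]
  # Hypothetical sketch: emit each element of the `logs` array as its own event.
  type = "lua"
  version = "2"
  inputs = ["my_source"]
  hooks.process = "process"
  source = """
  function process(event, emit)
    local entries = event.log.logs
    if entries == nil then
      emit(event)
      return
    end
    for _, entry in ipairs(entries) do
      emit({ log = entry })
    end
  end
  """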
mcgfenwick
@mcgfenwick
I'm parsing a log file which is, in effect, a collection of metrics; the content looks a bit like this:
{"_stream":"tcp_stats","_system_name":"foo.bar.com","ts":"2020-08-04T23:14:27.438506Z","conn_type":"RSTOS0","conn_count":26.0}
{"_stream":"tcp_stats","_system_name":"foo.bar.com","ts":"2020-08-04T23:14:27.438506Z","conn_type":"RSTRH","conn_count":5.0}
What's the best approach to convert these to metrics for Prometheus?
Jesse Szwedko
@jszwedko
3 replies
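For the question above about turning those JSON stat lines into Prometheus metrics, one possible shape (assuming the lines are already parsed into fields; transform and metric names here are assumptions) is a log_to_metric transform feeding the prometheus sink:

[transforms.tcp_stats_metric]
  type = "log_to_metric"
  inputs = ["parsed_stats"]   # hypothetical upstream json_parser transform

  [[transforms.tcp_stats_metric.metrics]]
    type = "gauge"
    field = "conn_count"
    name = "conn_count"
    tags.conn_type = "{{conn_type}}"
    tags.system_name = "{{_system_name}}"

[sinks.prom]
  type = "prometheus"
  inputs = ["tcp_stats_metric"]
  namespace = "tcp_stats"
  address = "0.0.0.0:9598"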
Anderson Ferneda
@dersonf
Hi everyone, I'm very happy using Vector; I send tons of logs to Kafka with this wonderful tool. But I have some questions about regex. I've read the documentation many times but can't get far. I have this log from log4j:
2020-08-04 16:56:58,474 INFO [XNIO-1 I/O-14] [USER_MESSAGES] {"key1":"value","key2":"value","key3":"value","key4":"value","key5":"value","key6":"value"}
When this line passes through the file source, the log looks like this:
Anderson Ferneda
@dersonf
{message:"2020-08-04 16:56:58,474 INFO [XNIO-1 I/O-14] [USER_MESSAGES] {\"key1\":\"value\",\"key2\":\"value\",\"key3\":\"value\",\"key4\":\"value\",\"key5\":\"value\",\"key6\":\"value\"}"
Is there any way to remove the \ or to strip the beginning of the message up to the {content}?
I made something using regex_parser:
Anderson Ferneda
@dersonf
[transforms.filein_regex_message]
type = "regex_parser"
inputs = ["file"]
drop_field = true
field = "message"
patterns = ['^(?P<day>[\d-]+) (?P<hour>[\d:,\d]+) (?P<loglevel>.) {"key":"(?P<key>.)",(?P<key2>.*)}$']
When I try to get the \ removed I receive an error. I'm probably doing something wrong, but I couldn't find out what.
Anderson Ferneda
@dersonf
Thanks, I found the problem: the "\" is just a newline. If I remove it and add another regex, it vanishes.
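One hedged way to handle the log4j case above: capture the trailing JSON into its own field with regex_parser, then run json_parser on that field so the escaped quotes disappear. The transform names and capture groups here are assumptions:

[transforms.log4j_regex]
  type = "regex_parser"
  inputs = ["file"]
  field = "message"
  drop_field = true
  patterns = ['^(?P<date>[\d-]+) (?P<time>[\d:,]+) +(?P<loglevel>\w+) +\[(?P<thread>[^\]]+)\] +\[(?P<category>[^\]]+)\] +(?P<payload>\{.*\})$']

[transforms.log4j_json]
  # Parse the captured JSON payload into top-level fields.
  type = "json_parser"
  inputs = ["log4j_regex"]
  field = "payload"
  drop_field = true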
Ashwanth Goli
@iamashwanth

Hi everyone, I came across Vector recently and am thinking of replacing my existing Filebeat + Logstash pipeline with it. For some reason, I am not able to get multi-line parsing working.

I am trying to capture lines between the tokens TESTS and TESTC, but Vector is dumping all the lines to the sink. What am I doing wrong here?

[sources.test_run_log]
  # General
  type = "file"
  ignore_older = 3600
  include = ["/path_to_log.log"]
  start_at_beginning = false

  # Priority
  oldest_first = true

  [sources.test_run_log.multiline]
    start_pattern = ".*TESTS.*"
    mode = "halt_with"
    condition_pattern = ".*TESTE.*"
    timeout_ms = 1000
3 replies
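One detail that stands out in the config above: the goal mentions a closing token TESTC, but condition_pattern matches TESTE. If that mismatch is the issue, the multiline block might look like this (only a guess based on the message):

  [sources.test_run_log.multiline]
    start_pattern = ".*TESTS.*"
    mode = "halt_with"
    condition_pattern = ".*TESTC.*"
    timeout_ms = 1000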
Bindu-Mawat
@Bindu-Mawat
Hi
I am seeing this error when configuring http as a source:
Aug 05 21:55:03.249 ERROR vector: Configuration error: "/etc/vector/vector.toml": unknown variant http, expected one of docker, file, journald, kafka, kubernetes, logplex, prometheus, socket, splunk_hec, statsd, stdin, syslog, vector for key sources.bindu-in.type
^C
[1]+ Exit 78
Jesse Szwedko
@jszwedko
@Bindu-Mawat that should work. What version of vector are you using?
Bindu-Mawat
@Bindu-Mawat
Hi I am using vector 0.8.2 (v0.8.2 x86_64-unknown-linux-musl 2020-03-06)
Jesse Szwedko
@jszwedko
It's possible that version did not have the http source, let me check. The current version is 0.10.0 if you are able to upgrade
@Bindu-Mawat that source was added in 0.9.0. I would upgrade to the latest though, 0.10.0
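For reference, a minimal http source block for a 0.9.0+ version might look like this; the source name comes from the error message above, and the address and encoding are placeholders:

[sources.bindu-in]
  # Requires Vector >= 0.9.0.
  type = "http"
  address = "0.0.0.0:8080"   # placeholder listen address
  encoding = "json"          # placeholder; depends on the payload format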
Bindu-Mawat
@Bindu-Mawat
Thanks Jesse. I'll see if it is possible for me.
Grant Isdale
@grantisdale

Hey all,

AWS S3 Sink Q:

When using server_side_encryption = "aws:kms", I am trying to pass the relevant ssekms_key_id, but the key exists in a different account (alongside the S3 bucket) from the one where the cluster itself runs.

I have used the assume_role key to assume a role in the target account (where the S3 bucket lives); this works for the aws_cloudwatch_logs sink: by assuming a role in another account it 'knows' to look for the specific log group in the target account, not the account that the cluster is running in. But I'm currently getting an error because Vector is unable to find the KMS key: it is looking in the account where the cluster exists, not the account where the assumed role exists.
Is there something I should be doing differently? Is this generally possible for KMS keys the way it is for CloudWatch log groups?

3 replies
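A sketch of the setup described above, with placeholder values; the option names come from the message itself. One thing that may be worth checking: AWS generally requires the full key ARN (account ID included) rather than a bare key ID or alias when the KMS key lives in a different account than the caller:

[sinks.s3_out]
  type = "aws_s3"
  inputs = ["my_logs"]                 # hypothetical input
  bucket = "target-account-bucket"     # placeholder bucket in the target account
  region = "us-east-1"
  encoding.codec = "ndjson"
  assume_role = "arn:aws:iam::222222222222:role/vector-writer"   # placeholder role in the target account
  server_side_encryption = "aws:kms"
  # Full key ARN instead of a bare key ID; all values are placeholders.
  ssekms_key_id = "arn:aws:kms:us-east-1:222222222222:key/11111111-2222-3333-4444-555555555555"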
夜读书
@db2jlu_twitter
Hi all, is it possible to transform and parse multiple JSON fields? Thanks!
3 replies
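A hedged sketch of one way to do this, with hypothetical source, transform, and field names: chain one json_parser per field, each pointed at a different field:

[transforms.parse_payload]
  type = "json_parser"
  inputs = ["my_source"]
  field = "payload"    # hypothetical first JSON field
  drop_field = true

[transforms.parse_details]
  type = "json_parser"
  inputs = ["parse_payload"]
  field = "details"    # hypothetical second JSON field
  drop_field = true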
Ashwanth Goli
@iamashwanth

@jszwedko Is it possible to embed the entire event.log as a JSON value while writing to a sink?

The Kafka REST proxy expects records in the following format. I am having trouble connecting the HTTP sink to my Kafka REST endpoint because of this.

{
   "records": [
        {"value": event.log}
    ]
}
8 replies
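One speculative approach, assuming the Lua v2 transform can emit the nested structure the REST proxy expects (names other than the records/value wrapper are hypothetical): re-nest each event before the http sink:

[transforms.wrap_for_rest_proxy]
  # Speculative sketch: wrap each event as {"records": [{"value": <event>}]}
  # so the http sink's JSON encoding matches the Kafka REST proxy format.
  type = "lua"
  version = "2"
  inputs = ["my_source"]
  hooks.process = "process"
  source = """
  function process(event, emit)
    emit({ log = { records = { { value = event.log } } } })
  end
  """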
Michele Gatti
@mikele_g_twitter
Hi, I have a problem with Logstash JSON.
Can I save a part of the log, like the path and response status, to gcp_stackdriver_logs?
mcgfenwick
@mcgfenwick
What's the underlying data type for the gauge metric? I'm getting a lot of output values of zero; the input values are quite large.
Jesse Szwedko
@jszwedko
@mcgfenwick the internal representation is a 64-bit float
mcgfenwick
@mcgfenwick
Hmm, ok, then my problem is something else. Thanks
mcgfenwick
@mcgfenwick
My problem appears to be in the way the prometheus sink handles my metrics. The code generates about 20 metrics per second, but when I scrape the prometheus sink, I appear to get only the last metric generated, not an average or something more useful. Is there a way to change this?
Jesse Szwedko
@jszwedko
Gauges typically represent the current value of a metric. Vector is capable of aggregating samples into histograms. I'm not seeing a way to have it take an average though
What is the metric? That would help inform the best representation for it
mcgfenwick
@mcgfenwick
It's a count of packets; I'm not exactly sure if it's a per-second value or some other interval, and it would take quite a bit of digging to find out. The value varies from billions to zero.
mcgfenwick
@mcgfenwick
But my question is really more about how the prometheus sink deals with the metrics it receives.
Jesse Szwedko
@jszwedko
If I had to guess, you probably want to represent it as a counter. The Prometheus sink just exposes a scrape endpoint for Prometheus. You can curl it yourself to see the values
(on my phone or I'd dig up more resources)
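For reference, a counter representation in the hypothetical log_to_metric sketch earlier might look like this; increment_by_value adds the packet count from each event instead of counting events:

  [[transforms.tcp_stats_metric.metrics]]
    type = "counter"
    field = "conn_count"
    increment_by_value = true
    name = "packets_total"   # hypothetical metric name
    tags.conn_type = "{{conn_type}}"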
mcgfenwick
@mcgfenwick
ack
guy
@guy1976_gitlab
Question: how can I run a different regex_parser transform depending on the pod label?
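One possible way, sketched with assumed source, label, and field names: split the stream with a swimlanes transform keyed on the pod label, then point a separate regex_parser at each lane:

[transforms.by_app]
  type = "swimlanes"
  inputs = ["k8s_logs"]   # hypothetical Kubernetes source

  [transforms.by_app.lanes.nginx]
    # The field path depends on how the source exposes pod labels.
    "kubernetes.pod_labels.app.eq" = "nginx"

  [transforms.by_app.lanes.api]
    "kubernetes.pod_labels.app.eq" = "api"

[transforms.nginx_parse]
  type = "regex_parser"
  inputs = ["by_app.nginx"]
  field = "message"
  patterns = ['^(?P<remote_addr>\S+) .*$']   # placeholder pattern

[transforms.api_parse]
  type = "regex_parser"
  inputs = ["by_app.api"]
  field = "message"
  patterns = ['^(?P<level>\w+) .*$']         # placeholder pattern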