Hmm, okay - stuck again.
I'm getting a '400 Bad Request' from GCP on my GCS sink, but even at TRACE level it's not showing the body of the response, so I can't see which actual problem it's hitting. All the output I get at TRACE level is:
Apr 07 13:50:06.853 TRACE sink{name=gcp type=gcp_cloud_storage}: vector::sinks::util: request succeeded. response=Response { status: 400, version: HTTP/1.1, headers: {"x-guploader-uploadid": "xxx", "content-type": "application/xml; charset=UTF-8", "content-length": "170", "vary": "Origin", "date": "Tue, 07 Apr 2020 13:50:06 GMT", "server": "UploadServer", "alt-svc": "quic=\":443\"; ma=2592000; v=\"46,43\",h3-Q050=\":443\"; ma=2592000,h3-Q049=\":443\"; ma=2592000,h3-Q048=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,h3-T050=\":443\"; ma=2592000"}, body: Body(Streaming) }
The body property there doesn't get revealed anywhere further down in the log, and then the connection closes.
It seems like the HTTP connection is being closed by the caller before the body can be fully received? :s
Apr 07 14:05:54.397 TRACE hyper::proto::h1::dispatch: body receiver dropped before eof, closing
Apr 07 14:05:54.397 TRACE hyper::proto::h1::conn: State::close_read()
Apr 07 14:05:54.397 TRACE hyper::proto::h1::conn: State::close()
Apr 07 14:05:54.397 TRACE tokio_threadpool::worker: -> wakeup; idx=3
Apr 07 14:05:54.397 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
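For context, a minimal sketch of the kind of sink config involved; the bucket name, credentials path, input name, and codec are placeholders I've assumed, not taken from the original message (and depending on the Vector version, encoding may be a plain string instead of a table):
# Sketch only: minimal gcp_cloud_storage sink of the sort being debugged above.
[sinks.gcp]
type = "gcp_cloud_storage"
inputs = ["my_source"]                                  # assumed upstream component
bucket = "my-bucket"                                    # placeholder bucket name
credentials_path = "/etc/vector/gcp-credentials.json"   # placeholder service account key path
encoding.codec = "ndjson"                               # assumed codec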
hello
maybe I'm a bit biased, but here are some quick thoughts about the prometheus source (timberio/vector#991) - it's not worth creating a separate issue, just a bit of feedback based on usage :)
I'm not saying this source isn't good, just that it still has room to grow :)
I have an Nginx config producing about 1 MB/sec of JSON (per node). The JSON is in array format, and parse_json does not like arrays. So far I am using add_fields + templating to wrap the array in an object literal before passing it to parse_json (sketched below), which is obviously a hack, but it works.
The next problem is, having parsed the array, how do I expand it into an object with named properties? Is Lua the best option here? I thought Vector had a built-in "zip" transform, but it seems it's only supported in combination with string splitting (via split's field_names argument).
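A minimal sketch of the wrapping workaround mentioned above, assuming the raw array arrives in the message field of a source called nginx_logs and that the parsing step is the json_parser transform (all of those names are my assumptions):
# Sketch only: wrap the raw JSON array in an object literal so the parser accepts it.
[transforms.wrap_array]
type = "add_fields"
inputs = ["nginx_logs"]                        # assumed upstream source
fields.wrapped = "{\"items\": {{message}}}"    # template the raw array into an object literal

[transforms.parse_wrapped]
type = "json_parser"
inputs = ["wrap_array"]
field = "wrapped"                              # parse the wrapped object instead of the whole message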
hello
I want to implement a Docker proxy to connect Vector to MQTT (as a replacement for Kafka in an event-based architecture).
Which transport is better to choose for communication between the Vector container and the proxy container?
The "vector source/sink" looks like the native solution, but the "http source/sink" has an at-least-once guarantee (see the sketch below).
Hi everyone :)
I have a question about the Loki sink. Is there a way to send just one field (message, for example) instead of the complete event? Inside Grafana the Vector event shows up as a JSON object, so we currently can't get statistics on the message field directly through the explorer. (And since we can generate labels from Vector, sending the full event isn't really useful.)
Hi, I'd like to send internal metrics from Vector to Datadog as metrics using datadog_metrics. My config looks like this:
[sources.internal_metrics]
type = "internal_metrics"
[transforms.tags_internal_metrics]
# General
type = "add_tags" # required
inputs = ["internal_metrics"] # required
# Tags
tags.hostname = "${VECTOR_HOSTNAME}"
tags.role = "${VECTOR_ROLE}"
tags.cluster = "${VECTOR_CLUSTER}"
tags.env = "${VECTOR_ENV}"
tags.region = "${VECTOR_REGION}"
tags.project = "${VECTOR_PROJECT}"
tags.hostgroup = "${VECTOR_HOSTGROUP}"
[sinks.internal_metrics_log]
# General
type = "console" # required
inputs = ["tags_internal_metrics"] # required
target = "stdout" # optional, default
# Encoding
encoding.codec = "json" # required
encoding.timestamp_format = "rfc3339" # optional, default
[sinks.datadog_metrics_internal_metrics]
# General
type = "datadog_metrics" # required
inputs = ["tags_internal_metrics"] # required
api_key = "<secret key>" # required
healthcheck = true # optional, default
host = "https://app.datadoghq.com"
namespace = "vector" # required
# Batch
batch.max_events = 20 # optional, default, events
batch.timeout_secs = 1 # optional, default, seconds
Tags are added and the metrics show up in the logs in a format that looks valid, but the Datadog sink keeps getting a 404 response:
Apr 23 11:39:48 ip-172-18-99-67 vector[24857]: Apr 23 11:39:48.217 WARN sink{name=datadog_metrics_internal_metrics type=datadog_metrics}:request{request_id=1}: vector::sinks::util::retries: request is not retryable; dropping the request. reason=response status: 404 Not Found
@mike.cardwell:grepular.com
> <@mike.cardwell:grepular.com> The globbing in the file source - can you use multiple asterisks in multiple locations? E.g. is this valid? include = ["/opt/nomad/data/alloc/*/alloc/logs/monitor.std*.*"]
To answer my own question: yes. The problem I was having was that Vector could not see these files because the "alloc" dir was owned by root:root with mode 0711, which meant the vector user couldn't get a directory listing, so the globbing failed. I feel like this should have been logged by Vector, but it wasn't.
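For reference, a minimal sketch of the file source with that glob; the source name is my assumption, the include path is the one from the question:
# Sketch: file source using the multi-asterisk glob discussed above.
[sources.nomad_monitor_logs]    # assumed source name
type = "file"
include = ["/opt/nomad/data/alloc/*/alloc/logs/monitor.std*.*"]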
@mike.cardwell:grepular.com
I don't know if this is a known issue, but there are no nightly RPMs at the moment, just .debs: https://packages.timber.io/vector/nightly/latest/
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.004 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/placement/availability-zone
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.004 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/local-hostname
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.005 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/local-ipv4
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.005 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/mac
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.005 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/network/interfaces/macs/06:a5:31:79:0f:ac/subnet-id
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.006 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/network/interfaces/macs/06:a5:31:79:0f:ac/vpc-id
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.025 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/placement/availability-zone
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.025 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/placement/availability-zone
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.026 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/local-hostname
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.026 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/local-hostname
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.026 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/local-ipv4
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.026 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/local-ipv4
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.027 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/mac
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.027 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/mac
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.027 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/network/interfaces/macs/06:a5:31:79:0f:ac/subnet-id
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.027 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/network/interfaces/macs/06:a5:31:79:0f:ac/subnet-id
Apr 27 14:26:26 ip-10-105-195-187 vector[18040]: Apr 27 14:26:26.028 DEBUG aws_ec2_metadata: worker: vector::transforms::aws_ec2_metadata: Sending metadata request. uri=http://169.254.169.254/latest/meta-data/network/interfaces/macs/06:a5:31:79:0f:ac/vpc-id
^(?P<level>[\w\.]+) \[(?P<threadname>.*)\]: (?P<logger>[\w\.]*):(?P<linenumber>[\d\.]*) - (?P<message>.*).*E:(?P<exception>.*)?\n?(?P<stacktrace>(?s).*)?$
INFO [Heartbeat]: SQSClientImpl:359 - Reset fffc3ffbcr Dev_SQS_Queue E:
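In case it helps, here is how a pattern like that might be wired into a regex_parser transform; the component names are my assumptions and the regex is copied verbatim from above:
# Sketch: applying the regex above with the regex_parser transform (parses the message field by default).
[transforms.parse_java_log]
type = "regex_parser"
inputs = ["my_logs"]    # assumed upstream component
regex = '^(?P<level>[\w\.]+) \[(?P<threadname>.*)\]: (?P<logger>[\w\.]*):(?P<linenumber>[\d\.]*) - (?P<message>.*).*E:(?P<exception>.*)?\n?(?P<stacktrace>(?s).*)?$'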