status.Code() == pdata.StatusCodeError?
I'd be happy to update the datadog exporter to match whatever the current spec is.
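For reference, a minimal sketch of the check I mean (the package and function names here are made up, not the actual datadog exporter code):

package sketch

import "go.opentelemetry.io/collector/consumer/pdata"

// isErrorSpan is an illustrative sketch only: it treats a span as an error exactly
// when its status code is StatusCodeError, which is the comparison discussed above.
func isErrorSpan(span pdata.Span) bool {
    return span.Status().Code() == pdata.StatusCodeError
}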
The collector emits system.swap. metrics (see here) that are similar in meaning but different in naming to the system.paging. metrics that feature in the spec (see here). Is this expected? If not, what should be changed (spec/implementation)? Should I open an issue for this?
cross-posting from opentelemetry-specification
Friendly reminder: we have the otel spec issue scrub 🧽 meeting tomorrow, Friday morning at 8:30a PST.
Not many spec issues to triage, so if we have quorum with collector maintainers, we’ll also continue triaging collector and collector-contrib issues.
@bogdandrutu (pinging you because you worked on the FluentBit loadtests)
I've been working on migrating the CircleCI workflows to GitHub Actions and there is just one job left to do: loadtest. In loadtest all the tests are passing except TestLog10kDPS/FluentBitToOTLP,
which I think might have an error in the implementation. I'll attach the logs of the run in the thread, but the main issue I'm running into is that I'm getting conflicting messages from the logs on whether the FluentBitWriter can connect to the NewOTLPDataReceiver
or not. On one hand, the logs are filled with a whole bunch of Sent: X items | Received: 0 items lines:
2020/12/15 00:07:48 Agent RAM (RES): 49 MiB, CPU: 1.3% | Sent: 59600 items | Received: 0 items (0/sec)
2020/12/15 00:07:51 Agent RAM (RES): 49 MiB, CPU: 0.3% | Sent: 89600 items | Received: 0 items (0/sec)
2020/12/15 00:07:54 Agent RAM (RES): 49 MiB, CPU: 0.0% | Sent: 119600 items | Received: 0 items (0/sec)
2020/12/15 00:07:57 Agent RAM (RES): 49 MiB, CPU: 0.0% | Sent: 149600 items | Received: 0 items (0/sec)
...
validator.go:46:
Error Trace: validator.go:46
test_case.go:273
scenarios.go:190
log_test.go:78
Error: Not equal:
expected: 0x249f0
actual : 0x0
Test: TestLog10kDPS/FluentBitToOTLP
Messages: Received and sent counters do not match.
But then I also get the following error message:
test_case.go:312: Time out waiting for [all data items received]
This is weird because test_case.go:312 is tc.t.Error("Time out waiting for", errMsg),
meaning the errMsg in the log is saying all data items were received (which isn't really an error 🤔).
Every other writer for every other test is able to connect to the NewOTLPDataReceiver,
but I don't know enough about the FluentBit exporter to know why this one fails.
Any help would be appreciated, thanks!
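For reference, my rough mental model of that timeout path, as a hypothetical sketch with guessed names rather than the actual testbed code:

package sketch

import (
    "testing"
    "time"
)

// TestCase is a stand-in for the testbed's test case type; only the pieces needed
// for this sketch are shown.
type TestCase struct {
    t *testing.T
}

// waitFor polls cond until it returns true or a timeout elapses. errMsg describes the
// condition being waited for, so on timeout the log reads
// "Time out waiting for <condition>" even though that condition was never satisfied.
func (tc *TestCase) waitFor(cond func() bool, errMsg string) {
    deadline := time.Now().Add(10 * time.Second)
    for time.Now().Before(deadline) {
        if cond() {
            return
        }
        time.Sleep(250 * time.Millisecond)
    }
    tc.t.Error("Time out waiting for", errMsg)
}

If that's roughly right, errMsg describes the condition that was being waited for rather than reporting that it happened, which would explain the wording of the message.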
We have host info as a span attribute that we'd like to map to the appropriate resource attribute key, so that various exporters can pick it up correctly as the hostname info.
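If it helps, one possible way to do this (just a sketch; host.name is an assumption about which attribute key carries the host info) is the groupbyattrs processor from contrib, which moves the listed span attributes onto the resource:

processors:
  groupbyattrs:
    # Promote this span attribute to a resource attribute so exporters can pick it up
    # as the hostname. The key is an assumption; use whichever attribute carries your host info.
    keys:
      - host.name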
Is it correct that the queued_retry processor retries on failure, and if it keeps failing, it will keep retrying in an endless loop until a retry succeeds? (I.e., there is no upper bound that tells it to stop after it has failed to retry too many times, or after some period of time?)
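For what it's worth, my understanding is that the exporterhelper-based retry settings that replace queued_retry do let you put an upper bound on this via max_elapsed_time; a sketch (the exporter and the values are just examples):

exporters:
  jaeger:
    endpoint: <jaeger_collector_url>
    retry_on_failure:
      enabled: true
      initial_interval: 5s    # wait before the first retry
      max_interval: 30s       # cap on the backoff between retries
      max_elapsed_time: 300s  # stop retrying (and drop the batch) after this long
    sending_queue:
      enabled: true
      queue_size: 5000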
I have a PR open against collector-contrib, but I seem to get lint errors on code that isn't related to my PR. Can someone give me some hints? open-telemetry/opentelemetry-collector-contrib#1892
Hi friends, I have a PR that's been approved by a number of folks; it would be great to see it merged before the next release, as it's been 9 days now: open-telemetry/opentelemetry-collector#2253
And two others on contrib have been approved and sitting for a little while that I'd also be keen to see merged before the next release.
These are all impacting end users, so it would be a great help for adoption, cheers!
Hi,
we are using the otel collector with OpenCensus and we are getting this error:
exporterhelper/queued_retry.go:239 Exporting failed. The error is not retryable. Dropping data. {"component_kind": "exporter", "component_type": "prometheusremotewrite", "component_name": "prometheusremotewrite", "error": "Permanent error: [Permanent error: nil data point. opencensus.io/http/client/roundtrip_latency is dropped; Permanent error: nil data point. opencensus.io/http/client/received_bytes is dropped; Permanent error: nil data point. grpc.io/server/sent_messages_per_rpc is dropped; Permanent error: nil data point. grpc.io/server/received_bytes_per_rpc is dropped; Permanent error: nil data point. grpc.io/server/sent_bytes_per_rpc is dropped; Permanent error: nil data point. opencensus.io/http/client/sent_bytes is dropped; Permanent error: nil data point. grpc.io/server/received_messages_per_rpc is dropped; Permanent error: nil data point. grpc.io/server/server_latency is dropped; Permanent error: nil data point. grpc.io/server/completed_rpcs is dropped]", "dropped_items": 9}
The config is:
extensions:
  health_check:
  pprof:
    endpoint: 0.0.0.0:1777

receivers:
  opencensus:
    endpoint: "0.0.0.0:55678"

processors:
  batch:
  memory_limiter:
    ballast_size_mib: 683
    limit_mib: 1500
    spike_limit_mib: 512
    check_interval: 5s
  queued_retry:

exporters:
  logging:
    logLevel: debug
  jaeger:
    endpoint: <jaeger_collector_url>
    insecure: true
  prometheusremotewrite:
    endpoint: <prom_remote_url>

service:
  pipelines:
    traces:
      receivers: [opencensus]
      processors: [memory_limiter, batch, queued_retry]
      exporters: [jaeger]
    metrics:
      receivers: [opencensus]
      processors: [memory_limiter, batch, queued_retry]
      exporters: [prometheusremotewrite]
  extensions: [health_check, pprof]
Please help