Hi - I'd sometimes like to upload data dumps / full request dumps etc. and couple them to a specific trace (event/attribute). I think this belongs in the logging camp. However, I believe the dumps/blobs are too big to transport to the collector via exporters (??). What I was thinking is to set a logging attribute, e.g. 'dump.file_path', in the application and write the file to /tmp, then pick it up in the agent (collector on the same node) in a "custom" processor:
Is this a sensible thing to do?
Hi - I’m implementing a logs receiver, and wish to call
consumer.ConsumeLogs(ctx context.Context, ld pdata.Logs) error.
I have a struct representing log records, and wish to convert it to
pdata.Logs. However, I'm not finding a way to do this that doesn't require direct interaction with internal packages.
Am I right to think this should be possible? Is there an expected approach to this that someone could point me to?
kafkaexporter to output logs. It should not be very difficult. The primary question will be what format you write to Kafka in.
kafkaexporter currently supports a couple of serialization formats for traces (otlp, jaeger). We will need to decide what to support for logs.
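For reference, the exporter's trace configuration already takes an `encoding` key; a logs pipeline would presumably reuse the same shape. The topic name below is illustrative, and whether `otlp_proto` (or anything else) ends up supported for logs is exactly the open question above:

```yaml
exporters:
  kafka:
    brokers: ["localhost:9092"]
    topic: otlp_logs      # illustrative topic name
    encoding: otlp_proto  # assumed; logs encodings are still to be decided

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [kafka]
```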