I am getting this error when OpenTelemetry tries to export traces to Kafka using the Kafka sender.
[BatchSpanProcessor_WorkerThread-1] WARN io.opentelemetry.sdk.trace.export.BatchSpanProcessor$Worker - Exporter threw an Exception org.apache.kafka.common.config.ConfigException: Invalid value org.apache.kafka.common.serialization.ByteArraySerializer for configuration key.serializer: Class org.apache.kafka.common.serialization.ByteArraySerializer could not be found.
I am using the -Dotel.exporter.jar=myexporter.jar option to load my own custom exporter. myexporter.jar does contain org.apache.kafka.common.serialization.StringSerializer, but when running the application I get this error.
Stack Overflow question: https://stackoverflow.com/questions/65946487/class-not-found-error-in-open-telemetry-exporter
Any help is highly appreciated. Thanks
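In case it helps anyone hitting this: a "Class ... could not be found" error for a Kafka serializer usually means kafka-clients is not visible to the classloader that loads the exporter jar, so one way out is to bundle the Kafka dependencies into the exporter jar itself. A sketch of a pom.xml fragment using maven-shade-plugin (this assumes a Maven build; whether your exporter builds with Maven at all is an assumption):

```xml
<!-- Sketch: shade kafka-clients (and the rest of the exporter's
     dependencies) into a single self-contained myexporter.jar,
     so serializer classes resolve on the exporter's classpath. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```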
The API returns 409 for valid cases, and we don't throw any exception when it does. I'm not sure if there's some logic down the line that just converts HTTP 409s to errors. Any ideas?
Hello team, I'm trying to convert the Tags field in Zipkin's SpanModel from a map to a string. zSpans is defined like this, so I'm trying to call zSpans.Tags, but my terminal says:
zSpans.Tags undefined (type []*model.SpanModel has no field or method Tags)
which confuses me, because there is a Tags field in SpanModel. Is there a better way to refer to Zipkin's Tags field?
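For what it's worth, that compiler error is about the slice type, not about SpanModel itself: zSpans has type []*model.SpanModel, and a slice has no Tags field, so you need to index into it or range over it and read Tags on each element. A minimal sketch (using a stand-in struct with the same Tags shape as zipkin-go's model.SpanModel, so it runs without the dependency):

```go
package main

import "fmt"

// SpanModel is a stand-in for zipkin-go's model.SpanModel; the real
// struct has many more fields, but Tags is a map[string]string on it.
type SpanModel struct {
	Tags map[string]string
}

func main() {
	// zSpans is a slice of span pointers; the slice itself has no
	// Tags field, so access Tags on each element instead.
	zSpans := []*SpanModel{
		{Tags: map[string]string{"http.method": "GET", "http.path": "/api"}},
	}
	for _, s := range zSpans {
		for k, v := range s.Tags {
			fmt.Printf("%s=%s\n", k, v)
		}
	}
}
```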
Hey everyone, I want to use the Kafka collector, but my cluster is set up with SASL_SSL and the GSSAPI SASL mechanism, and the collector is having trouble connecting to it. I also want to use Kafka as storage, so I will probably need to add similar configs for that as well. Can you please provide some help? I tried running the Zipkin jar like this (with some of the sensitive information removed):
KAFKA_BOOTSTRAP_SERVERS=SASL_SSL://ip:port,SASL_SSL://ip:port java -Dzipkin.collector.kafka.overrides.sasl.mechanism=GSSAPI -Dzipkin.collector.kafka.overrides.security.protocol=SASL_SSL -Djava.security.auth.login.config=/path/jaas.conf -Djava.security.krb5.conf=/path/krb.conf -Djavax.security.auth.useSubjectCredsOnly=true -Dzipkin.collector.kafka.overrides.ssl.truststore.location=/path/truststore.jks -Dzipkin.collector.kafka.overrides.ssl.keystore.location=/path/keystore.jks -Dzipkin.collector.kafka.overrides.ssl.truststore.password=pass -Dzipkin.collector.kafka.overrides.ssl.keystore.password=pass -Dzipkin.collector.kafka.overrides.ssl.key.password=pass -Dzipkin.collector.kafka.overrides.sasl.kerberos.service.name=name -jar zipkin.jar --logging.level.zipkin2=DEBUG
The Zipkin server starts with no errors, but it doesn't seem like the collector is connected to the cluster. Am I using the right properties?
I also tried the Zipkin Docker image. These are the environment variables I set when creating the container; they are pretty much the same as above.
KAFKA_BOOTSTRAP_SERVERS=SASL_SSL://ip:port,SASL_SSL://ip:port
JAVA_OPTS=-Dzipkin.collector.kafka.overrides.sasl.mechanism=GSSAPI -Dzipkin.collector.kafka.overrides.security.protocol=SASL_SSL -Djava.security.auth.login.config=/path/jaas.conf -Djava.security.krb5.conf=/path/krb.conf -Djavax.security.auth.useSubjectCredsOnly=true -Dzipkin.collector.kafka.overrides.ssl.truststore.location=/path/truststore.jks -Dzipkin.collector.kafka.overrides.ssl.keystore.location=/path/keystore.jks -Dzipkin.collector.kafka.overrides.ssl.truststore.password=pass -Dzipkin.collector.kafka.overrides.ssl.keystore.password=pass -Dzipkin.collector.kafka.overrides.ssl.key.password=pass -Dzipkin.collector.kafka.overrides.sasl.kerberos.service.name=name
When the Zipkin container starts up, I get this class-not-found error. Is the Java runtime inside the container missing that class? How can I add it?
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:819) ~[kafka-clients-2.7.0.jar:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:646) ~[kafka-clients-2.7.0.jar:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:626) ~[kafka-clients-2.7.0.jar:?]
at zipkin2.collector.kafka.KafkaCollectorWorker.run(KafkaCollectorWorker.java:69) ~[zipkin-collector-kafka-2.23.2.jar:?]
at zipkin2.collector.kafka.KafkaCollector$LazyKafkaWorkers.lambda$guardFailures$0(KafkaCollector.java:265) ~[zipkin-collector-kafka-2.23.2.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
at java.lang.Thread.run(Unknown Source) [?:?]
Caused by: java.lang.NoClassDefFoundError: org/ietf/jgss/GSSException
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:138) ~[kafka-clients-2.7.0.jar:?]
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:73) ~[kafka-clients-2.7.0.jar:?]
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:105) ~[kafka-clients-2.7.0.jar:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:734) ~[kafka-clients-2.7.0.jar:?]
... 7 more
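The stack trace does point at the cause: NoClassDefFoundError for org/ietf/jgss/GSSException means the runtime in the image is missing the java.security.jgss module that Kerberos/GSSAPI needs (the Zipkin image ships a trimmed-down JRE). One workaround is to run the same zipkin.jar on a full JRE in your own image; a sketch (the base image and paths here are assumptions, not what the official image uses):

```dockerfile
# Sketch: run zipkin.jar on a full JRE that still includes
# the java.security.jgss module required by GSSAPI/Kerberos.
FROM eclipse-temurin:17-jre
COPY zipkin.jar /zipkin/zipkin.jar
WORKDIR /zipkin
EXPOSE 9411
ENTRYPOINT ["java", "-jar", "zipkin.jar"]
```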
Hello guys, my Zipkin is working pretty much OK. The only problem I have is the Cassandra integration: I installed Cassandra and ran it, but I cannot get Zipkin to use its storage.
Simply running "STORAGE_TYPE=casandra java -jar /opt/zipkin/zipkin.jar" fails to start with:
APPLICATION FAILED TO START
Description:
Parameter 0 of constructor in zipkin2.server.internal.ZipkinQueryApiV2 required a bean of type 'zipkin2.storage.StorageComponent' that could not be found.
Action:
Consider defining a bean of type 'zipkin2.storage.StorageComponent' in your configuration.
I tried adding the zipkin-security jar, but I'm getting:
Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanDefinitionStoreException: Failed to parse configuration class [zipkin.module.security.ZipkinSecurityModule]; nested exception is java.lang.IllegalStateException: Failed to introspect annotated methods on class zipkin.module.security.ZipkinSecurityModule
Hi all,
I get "Unable to establish connection to RabbitMQ server: Connection refused"
when trying to run zipkin-server with RabbitMQ and Elasticsearch, using this docker-compose file:
version: '3.7'
services:
  elasticsearch:
    image: elasticsearch:7.10.1
    container_name: elasticsearch
    restart: on-failure
    ports:
      - 9200:9200
    environment:
      # not recommended in production
      - "ES_JAVA_OPTS=-Xms750m -Xmx750m"
      - discovery.type=single-node
  rabbitmq:
    image: rabbitmq:3.8-management-alpine
    container_name: rabbitmq
    restart: on-failure
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      RABBITMQ_DEFAULT_VHOST: ${rabbitmq_vhost}
      RABBITMQ_DEFAULT_USER: ${rabbitmq_user}
      RABBITMQ_DEFAULT_PASS: ${rabbitmq_pass}
      RABBITMQ_VM_MEMORY_HIGH_WATERMARK: 1024MiB
  zipkin-server:
    image: openzipkin/zipkin:2
    container_name: zipkin-server
    depends_on:
      - rabbitmq
      - elasticsearch
    restart: on-failure
    ports:
      - 9411:9411
    environment:
      RABBIT_CONCURRENCY: 1
      RABBIT_CONNECTION_TIMEOUT: 60000
      RABBIT_QUEUE: zipkin
      RABBIT_ADDRESSES: ${rabbitmq_host}
      RABBIT_PASSWORD: ${rabbitmq_pass}
      RABBIT_USER: ${rabbitmq_user}
      RABBIT_VIRTUAL_HOST: ${rabbitmq_vhost}
      RABBIT_USE_SSL: "false"
      STORAGE_TYPE: elasticsearch
      ES_HOSTS: ${elasticsearch_host}
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:9411/health || exit 1
      interval: 10s
      start_period: 15s
      retries: 3
  zipkin-dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: zipkin-dependencies
    depends_on:
      - elasticsearch
    restart: on-failure
    environment:
      STORAGE_TYPE: elasticsearch
      ES_HOSTS: ${elasticsearch_host}
volumes:
  default:
    external:
      name: zipkin-monitoring
networks:
  default:
    external:
      name: zipkin-monitoring
Note that RabbitMQ and Elasticsearch both start normally.
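One thing worth checking, since both dependencies start fine: if ${rabbitmq_host} expands to localhost or 127.0.0.1, zipkin-server will get "Connection refused", because localhost inside its container is not the rabbitmq container. On the shared Compose network the service name resolves, so a value like this is a sketch worth trying (assuming the default AMQP port):

```yaml
# Sketch: point zipkin-server at the rabbitmq service by its
# Compose service name instead of localhost.
RABBIT_ADDRESSES: rabbitmq:5672
```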
I have a question about remoteEndpoint. Zipkin's documentation says it's set on an RPC (or messaging) span, indicating the other side of the connection, but I'm not sure what this means. There is an ipv4 field under remoteEndpoint. Can it be the IP address of an external user?
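For context, remoteEndpoint describes the peer of the call from the point of view of the service that recorded the span: on a CLIENT span it is the server being called, and on a SERVER span it is the caller, which can indeed be an external user's IP when that client is not instrumented. A sketch of a Zipkin v2 span showing the shape (all values here are made up):

```json
{
  "traceId": "463ac35c9f6413ad48485a3953bb6124",
  "id": "a2fb4a1d1a96d312",
  "kind": "SERVER",
  "name": "get /api",
  "timestamp": 1614556800000000,
  "duration": 1200,
  "localEndpoint": { "serviceName": "frontend", "ipv4": "10.0.0.5" },
  "remoteEndpoint": { "ipv4": "203.0.113.7", "port": 54321 }
}
```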
Hello guys, another thing: I managed to run Zipkin and store traces successfully in Cassandra, but now it doesn't show any dependencies. Traces are all good, accurate, and real-time, but when I click on Dependencies it's empty. I read that I have to run zipkin-dependencies.jar periodically, so I did (based on this article: https://stackoverflow.com/questions/37459261/cant-view-any-dependencies-inside-zipkin-ui-dependencies-tab), but it still doesn't show anything. I even tried running it as a service, but no luck either.
my zipkin service:
[Unit]
Description=Manage Java service
Documentation=https://zipkin.io/
[Service]
WorkingDirectory=/opt/zipkin
Environment="STORAGE_TYPE=cassandra3"
ExecStart=/usr/bin/java -jar zipkin.jar
User=zipkin
Group=zipkin
Type=simple
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
my zipkin-dependencies.sh (run from crontab every 5 mins):
STORAGE_TYPE=cassandra3 /usr/bin/java -jar /opt/zipkin/zipkin-dependencies.jar
crontab -l:
*/5 * * * * cd /opt/zipkin && ./zipkin-dependencies.sh
any thoughts?
Hello guys, I have a question. I have finally managed to make Zipkin + Cassandra work together. However, to properly display dependencies I need to periodically run zipkin-dependencies.jar (as mentioned above), which I do via cron on Linux. The problem is that it takes a little over 1 hour to calculate the data.
Is there any way to limit the job to only the spans that haven't been processed yet? Or is there some other way to tune Cassandra or zipkin-dependencies.jar to make it faster? Any tips?
I am not sure whether I should focus on Cassandra tuning or whether the problem is somewhere else. I ran zipkin-dependencies twice manually and both runs print:
Running Dependencies job for 2021-03-01: 1614556800000000 =< Span.timestamp 1614643199999999
so it obviously recalculates the same data.
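On the recalculation point: as far as I know, the dependencies job always processes one whole UTC day (by default the current day), which is why both runs print the same window. It can be pointed at a specific day by passing the date as an argument, so successive cron runs don't have to redo the same day. A sketch (the date is illustrative):

```shell
# Sketch: process a single UTC day explicitly instead of
# re-running the job over the default (current) day.
STORAGE_TYPE=cassandra3 java -jar zipkin-dependencies.jar 2021-03-01
```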
Hi,
I'm running Zipkin in a container and sending a GET (wrapFetch/node-fetch) trace to it via zipkin-transport-http:
const { Tracer } = require('zipkin');
const { BatchRecorder } = require('zipkin'); // format data spans
const { HttpLogger } = require('zipkin-transport-http');
const fetch = require('node-fetch');
const wrapFetch = require('zipkin-instrumentation-fetch');
const CLSContext = require('zipkin-context-cls'); // continuation-local storage
const ctxImpl = new CLSContext();
const localServiceName = 'xxx';
const recorder = new BatchRecorder({
logger: new HttpLogger({
endpoint: 'http://localhost:9411/api/v1/spans'
})
});
const tracer = new Tracer({ ctxImpl, recorder, localServiceName });
const remoteServiceName = 'youtube';
const zipkinFetch = wrapFetch(fetch, { tracer, remoteServiceName });
zipkinFetch('https://www.youtube.com/').then(res => res)
How do I go about instrumenting and sending my own span?
Thanks,