jcchavezs on 1306_test
chore: test to prove #1306 (compare)
jcchavezs on master
Capture row counts for plain st… (compare)
llinder on master
[maven-release-plugin] prepare … (compare)
llinder on 2.23.18
llinder on release-2.23.18
/zipkin/traces/{traceId}
```
sudo docker run -d -p 9411:9411 \
  -e ES_HOSTS=http://172.30.5.99:3002 \
  -e ES_INDEX=tracing \
  -e STORAGE_TYPE=elasticsearch \
  --name=zipkin_test openzipkin/zipkin
```
The container does not start: it stays in "starting" mode in Portainer, then it stops. Are the env vars ES_HOSTS and ES_INDEX supported by default in the image?
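For what it's worth, ES_HOSTS and ES_INDEX are documented variables of Zipkin's Elasticsearch storage component, so the env vars themselves should be fine. A hedged next step (container name taken from the command above) is to read the exit logs, or re-run in the foreground:

```shell
# Inspect why the container exited; startup errors
# (e.g. an unreachable ES_HOSTS) are printed here.
sudo docker logs zipkin_test

# Or run without -d to watch startup output directly:
sudo docker run -p 9411:9411 \
  -e STORAGE_TYPE=elasticsearch \
  -e ES_HOSTS=http://172.30.5.99:3002 \
  -e ES_INDEX=tracing \
  --name=zipkin_test openzipkin/zipkin
```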
Connection error: ('Unable to connect to any servers', {'127.0.0.1:9042': OperationTimedOut('errors=Timed out creating connection (5 seconds), last_host=None',)})
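That timeout means nothing answered on 127.0.0.1:9042, which is the default Cassandra contact point. A sketch of pointing Zipkin at a remote cluster instead (hostname is a placeholder; CASSANDRA_CONTACT_POINTS is the documented variable for this):

```shell
# Point the cassandra3 storage component at an explicit host:port
STORAGE_TYPE=cassandra3 \
CASSANDRA_CONTACT_POINTS=cassandra-host:9042 \
java -jar zipkin.jar
```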
- JAVA_OPTS=-Darmeria.ssl.key-store=/zipkin/keystore.p12 -Darmeria.ssl.key-store-password=password -Darmeria.ssl.enabled=true -Darmeria.ports[0].port=9411 -Darmeria.ports[0].protocols[0]=https
The server starts up and the UI works with https, but zipkin logs Caused by: javax.net.ssl.SSLHandshakeException: error:1000009c:SSL routines:OPENSSL_internal:HTTP_REQUEST
every 5 seconds. Any ideas what I'm doing wrong?
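The `HTTP_REQUEST` in that OpenSSL error usually means something is sending plaintext HTTP to the TLS port — often a load-balancer or container health check probing every few seconds. One hedged workaround, reusing the property pattern from the config above (the second port number is an assumption), is to expose an extra plain-HTTP port for probes while keeping 9411 on https:

```shell
- JAVA_OPTS=-Darmeria.ssl.key-store=/zipkin/keystore.p12 -Darmeria.ssl.key-store-password=password -Darmeria.ssl.enabled=true -Darmeria.ports[0].port=9411 -Darmeria.ports[0].protocols[0]=https -Darmeria.ports[1].port=9410 -Darmeria.ports[1].protocols[0]=http
```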
22/01/05 10:26:59 INFO ElasticsearchDependenciesJob: Processing spans from zipkin:span-2022-01-05/span
22/01/05 10:26:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.elasticsearch.hadoop.util.Version.<clinit>(Version.java:66)
at org.elasticsearch.hadoop.rest.RestService.findPartitions(RestService.java:216)
at org.elasticsearch.spark.rdd.AbstractEsRDD.esPartitions$lzycompute(AbstractEsRDD.scala:79)
at org.elasticsearch.spark.rdd.AbstractEsRDD.esPartitions(AbstractEsRDD.scala:78)
at org.elasticsearch.spark.rdd.AbstractEsRDD.getPartitions(AbstractEsRDD.scala:48)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:296)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:78)
at org.apache.spark.rdd.RDD$$anonfun$groupBy$1.apply(RDD.scala:714)
at org.apache.spark.rdd.RDD$$anonfun$groupBy$1.apply(RDD.scala:714)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
at org.apache.spark.rdd.RDD.groupBy(RDD.scala:713)
at org.apache.spark.api.java.JavaRDDLike$class.groupBy(JavaRDDLike.scala:243)
at org.apache.spark.api.java.AbstractJavaRDDLike.groupBy(JavaRDDLike.scala:45)
at zipkin2.dependencies.elasticsearch.ElasticsearchDependenciesJob.run(ElasticsearchDependenciesJob.java:189)
at zipkin2.dependencies.elasticsearch.ElasticsearchDependenciesJob.run(ElasticsearchDependenciesJob.java:170)
at zipkin2.dependencies.ZipkinDependenciesJob.main(ZipkinDependenciesJob.java:79)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make field protected byte[] java.io.ByteArrayInputStream.buf accessible: module java.base does not "opens java.io" to unnamed module @5274766b
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178)
at java.base/java.lang.reflect.Field.setAccessible(Field.java:172)
at org.elasticsearch.hadoop.util.ReflectionUtils.makeAccessible(ReflectionUtils.java:70)
at org.elasticsearch.hadoop.util.IOUtils.<clinit>(IOUtils.java:53)
... 28 more
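The root cause names its own fix: on recent JDKs the module system no longer lets elasticsearch-hadoop reflect into `java.io`, so `java.base/java.io` must be opened explicitly. A sketch of launching the dependencies job with that flag (the jar name is illustrative, and Spark may need further `--add-opens` flags for other packages):

```shell
java --add-opens java.base/java.io=ALL-UNNAMED \
  -jar zipkin-dependencies.jar
```

Alternatively, running the job on a JDK 8 or 11 runtime avoids the restriction entirely.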
Hi All,
I'm wondering if anyone has set up distributed tracing with Zipkin in AWS, where spans from services running in different AWS accounts are collected and exported to a common AWS Elasticsearch cluster (or another AWS storage backend), so that we can view entire traces across all services. And in terms of architecture, is it recommended to have one collector for all services, or should each service run its own collector as a sidecar container/process?
Hi everyone, I am trying to deploy Zipkin (version 2.23.16) on Kubernetes (version 1.21) and want to use Elasticsearch as storage. I'm getting the error below. Could someone please help me?
com.linecorp.armeria.server.RequestTimeoutException: null
at com.linecorp.armeria.server.RequestTimeoutException.get(RequestTimeoutException.java:36) ~[armeria-1.13.4.jar:?]
at com.linecorp.armeria.internal.common.CancellationScheduler.invokeTask(CancellationScheduler.java:467) ~[armeria-1.13.4.jar:?]
at com.linecorp.armeria.internal.common.CancellationScheduler.lambda$setTimeoutNanosFromNow0$13(CancellationScheduler.java:293) ~[armeria-1.13.4.jar:?]
at com.linecorp.armeria.common.RequestContext.lambda$makeContextAware$3(RequestContext.java:547) ~[armeria-1.13.4.jar:?]
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) [netty-common-4.1.70.Final.jar:4.1.70.Final]
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170) [netty-common-4.1.70.Final.jar:4.1.70.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) [netty-common-4.1.70.Final.jar:4.1.70.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469) [netty-common-4.1.70.Final.jar:4.1.70.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384) [netty-transport-classes-epoll-4.1.70.Final.jar:4.1.70.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.70.Final.jar:4.1.70.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.70.Final.jar:4.1.70.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.70.Final.jar:4.1.70.Final]
at java.lang.Thread.run(Unknown Source) [?:?]
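A `RequestTimeoutException` from Armeria means the server gave up on a request before storage answered, which often points at a slow or unreachable Elasticsearch rather than Zipkin itself. One hedged thing to try is raising the Elasticsearch client timeout — ES_TIMEOUT is a documented variable of the elasticsearch storage type; the host and the 30-second value here are illustrative:

```shell
STORAGE_TYPE=elasticsearch \
ES_HOSTS=http://elasticsearch:9200 \
ES_TIMEOUT=30000 \
java -jar zipkin.jar
```

It is also worth confirming the ES_HOSTS address resolves from inside the pod.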
```
tracer := zipkinotp.Wrap(nativeTracer)
```
from "github.com/openzipkin-contrib/zipkin-go-opentracing", but the tutorial is not exhaustive on how to continue.
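A fuller sketch of the usual wiring around that `Wrap` call, using the zipkin-go reporter and endpoint APIs (the collector URL and service name are placeholders, and the `zipkinotp` alias matches the snippet above):

```go
package main

import (
	"log"

	zipkinotp "github.com/openzipkin-contrib/zipkin-go-opentracing"
	opentracing "github.com/opentracing/opentracing-go"
	"github.com/openzipkin/zipkin-go"
	zipkinhttp "github.com/openzipkin/zipkin-go/reporter/http"
)

func main() {
	// Reporter sends finished spans to a Zipkin collector.
	reporter := zipkinhttp.NewReporter("http://localhost:9411/api/v2/spans")
	defer reporter.Close()

	// The local endpoint identifies this service in traces.
	endpoint, err := zipkin.NewEndpoint("my-service", "localhost:0")
	if err != nil {
		log.Fatal(err)
	}

	nativeTracer, err := zipkin.NewTracer(reporter,
		zipkin.WithLocalEndpoint(endpoint))
	if err != nil {
		log.Fatal(err)
	}

	// Bridge the native Zipkin tracer into an OpenTracing tracer and
	// register it globally so OpenTracing instrumentation picks it up.
	tracer := zipkinotp.Wrap(nativeTracer)
	opentracing.SetGlobalTracer(tracer)

	// From here on, use the OpenTracing API as usual.
	span := opentracing.StartSpan("example-operation")
	span.Finish()
}
```

After `SetGlobalTracer`, any library instrumented against OpenTracing should report through Zipkin without further changes.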
When I hit the /health endpoint, all I see is:
```
"StackdriverStorage{<<GCP-PROJECT>>}" : {
  "status" : "DOWN",
  "details" : {
    "error" : "com.linecorp.armeria.common.grpc.protocol.ArmeriaStatusException: The caller does not have permission"
  }
}
```
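"The caller does not have permission" is a GCP IAM denial: whatever service account the Zipkin server runs as appears to lack Cloud Trace permissions for that project. A hedged sketch of granting a trace role (project and account names are placeholders; the exact role your setup needs may differ):

```shell
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member="serviceAccount:zipkin-sa@MY_PROJECT.iam.gserviceaccount.com" \
  --role="roles/cloudtrace.agent"
```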