JB-data
@JB-data
I have some lines consisting of 528 points max.
JB-data
@JB-data

Another small question about lines constructed with st_makeLine...
I have points with some features that are point-specific (like the timestamp of the point), and some are the same for all points (like the name of the route=line that these points make up).
Suppose I construct a line; this is one geometric object in GeoMesa, so I can add the name of the route once to the line.
Is it also possible for the points to keep some of their point-specific properties?
So that in GeoServer, if I filter with CQL, I could filter out certain parts of the line depending on the timestamp?

example:
structure=geometry/route/timestamp, e.g.:
point 1 (lon1,lat1), route_NYC-LA, 9am
point 2 (lon2,lat2), route_NYC-LA, 9.30am
point 3 (lon3,lat3), route_NYC-LA, 10am
-->
SELECT route, st_makeLine(collect_set(points)) AS theroute
GROUP BY route
gives me:
route_NYC-LA, LINE (point1, point2, point3).
--> is there a way to keep info about the timestamp for each point?
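Setting GeoMesa specifics aside, the idea of carrying a per-vertex timestamp alongside the line, and filtering sub-segments by time, can be sketched in plain Python. All names here (Route, slice_by_time) are illustrative, not GeoMesa or GeoServer API:

```python
# Sketch: keep each vertex of a route as (lon, lat, timestamp) instead of
# collapsing to a bare LINESTRING, so time-based filtering stays possible.
from dataclasses import dataclass

@dataclass
class Route:
    name: str            # shared attribute (same for every vertex)
    vertices: list       # per-vertex data: (lon, lat, ts_minutes)

    def slice_by_time(self, t_min, t_max):
        """Return the sub-line whose vertices fall inside [t_min, t_max]."""
        return [(lon, lat) for lon, lat, ts in self.vertices
                if t_min <= ts <= t_max]

route = Route("route_NYC-LA", [
    (-74.0, 40.7, 540),   # 9:00
    (-80.0, 38.0, 570),   # 9:30
    (-90.0, 36.0, 600),   # 10:00
])

# Filtering "the part of the line before 9:45":
early = route.slice_by_time(0, 585)
print(early)
```

The common workaround in a GeoMesa/GeoServer setting is analogous: keep the original point features (with their timestamps) in a second layer alongside the aggregated line layer, and apply the CQL time filter to the point layer.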

3 replies
Peter Corless
@PeterCorless
Hello! Peter Corless here from ScyllaDB. I see that GeoMesa has documentation for use with Apache Cassandra. Has anyone done any testing to make sure it works with ScyllaDB? It should be a compatible solution. How would I be able to work with the GeoMesa community to validate ScyllaDB as a storage layer and get the docs changed to reflect that?
4 replies
Bruno Costa
@bruno.costa:matrix.org
[m]

Hello guys, I need some help with this error:

Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /geomesa/ds/kafka/metadata/migration~check
[2022-09-06T13:41:50.836Z] at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
[2022-09-06T13:41:50.836Z] at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
[2022-09-06T13:41:50.836Z] at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:2021)
[2022-09-06T13:41:50.836Z] at org.apache.curator.framework.imps.ExistsBuilderImpl$3.call(ExistsBuilderImpl.java:268)
[2022-09-06T13:41:50.836Z] at org.apache.curator.framework.imps.ExistsBuilderImpl$3.call(ExistsBuilderImpl.java:257)
[2022-09-06T13:41:50.837Z] at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:67)
[2022-09-06T13:41:50.837Z] at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:81)
[2022-09-06T13:41:50.837Z] at org.apache.curator.framework.imps.ExistsBuilderImpl.pathInForegroundStandard(ExistsBuilderImpl.java:254)
[2022-09-06T13:41:50.837Z] at org.apache.curator.framework.imps.ExistsBuilderImpl.pathInForeground(ExistsBuilderImpl.java:247)
[2022-09-06T13:41:50.837Z] at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:206)
[2022-09-06T13:41:50.837Z] at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:35)
[2022-09-06T13:41:50.837Z] at org.locationtech.geomesa.utils.zk.ZookeeperMetadata.scanValue(ZookeeperMetadata.scala:52)
[2022-09-06T13:41:50.837Z] at org.locationtech.geomesa.index.metadata.KeyValueStoreMetadata$class.scanValue(KeyValueStoreMetadata.scala:40)
[2022-09-06T13:41:50.837Z] at org.locationtech.geomesa.utils.zk.ZookeeperMetadata.scanValue(ZookeeperMetadata.scala:16)
[2022-09-06T13:41:50.837Z] at org.locationtech.geomesa.index.metadata.TableBasedMetadata$$anon$1.load(TableBasedMetadata.scala:114)
[2022-09-06T13:41:50.837Z] at org.locationtech.geomesa.index.metadata.TableBasedMetadata$$anon$1.load(TableBasedMetadata.scala:110)
[2022-09-06T13:41:50.838Z] at com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:3308)
[2022-09-06T13:41:50.838Z] ... 33 more

Does anyone know what I can do to solve this problem?

ty

5 replies
JB-data
@JB-data
@elahrvivaz sorry to bug you again... but I did more investigation into the bug with the heatmap for lines (see my question from Sep. 02, 2022; still not sure whether it comes from GeoTools or GeoMesa)...
Getting this to work is quite important for me.
Here some observations:

1) Heatmap with millions of points: OK.
2) Heatmap with lines: OK for some lines, but if I use too many I get an error.
3) I can investigate for which lines I get the error.
4) Next I can investigate such a line, and leave some points out until it stops failing. That way I notice exactly which point is "causing" the error.
I don't see anything strange about the point that causes the crash. Example:
LINESTRING (allvalidpoints, 62.079166412353516 41.94388961791992 (this one is ok), 60.654998779296875 41.578887939453125 (when I add this one it crashes))
And if I leave out that point and add another point, it will still crash. So it looks like the point itself is not corrupt.
In my example it happened with the 68th point. Other lines are much longer and don't have a problem.

Also, I either get the error posted originally or an OutOfBounds that tells me it happens at index 928 (for different lines it's always the same number).
If I change radiusPixels:xx to some other number, I see the index in the error changes.
Sometimes it shows this error (no index specified):

org.geoserver.platform.ServiceException: Rendering process failed
        at org.geoserver.wms.map.RenderedImageMapOutputFormat.produceMap(RenderedImageMapOutputFormat.java:642)
        at org.geoserver.wms.map.RenderedImageMapOutputFormat.produceMap(RenderedImageMapOutputFormat.java:275)
        at org.geoserver.wms.map.RenderedImageMapOutputFormat.produceMap(RenderedImageMapOutputFormat.java:135)
        at org.geoserver.wms.GetMap.executeInternal(GetMap.java:749)
...
Caused by: java.lang.RuntimeException: Failed to evaluate the process function, error is: Error processing heatmap
        at org.geotools.process.function.ProcessFunction.evaluate(ProcessFunction.java:162)
        at org.geoserver.wms.map.RenderedImageMapOutputFormat.produceMap(RenderedImageMapOutputFormat.java:601)
        ... 133 more
Caused by: org.geotools.process.ProcessException: Error processing heatmap
        at org.locationtech.geomesa.process.analytic.DensityProcess.execute(DensityProcess.scala:75)
        at sun.reflect.GeneratedMethodAccessor452.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.geotools.process.factory.AnnotationDrivenProcessFactory$InvokeMethodProcess.execute(AnnotationDrivenProcessFactory.java:621)
        at org.geotools.process.function.ProcessFunction.evaluate(ProcessFunction.java:148)
        ... 138 more
Caused by: java.lang.ArrayIndexOutOfBoundsException

which makes me think it could be a bug in https://www.geomesa.org/documentation/stable/user/process.html#densityprocess ?
I didn't find the code yet to investigate line 75 (note I am using an old version, 3.0; I know it's outdated, but I don't think that is causing it).
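For what it's worth, a classic way an ArrayIndexOutOfBoundsException arises in a density/heatmap computation is a kernel of radiusPixels being written ("splatted") near the edge of the output grid without bounds clamping, which would also explain why the failing index changes when radiusPixels changes. This is only a hypothetical sketch of the failure mode, not GeoMesa's actual code:

```python
# Hypothetical sketch: writing a kernel of the given radius around a cell
# near the grid border overflows the array unless indices are clamped.
# (In Java both under- and overflow throw ArrayIndexOutOfBoundsException;
# a Python list only raises for overflow, which is enough for this demo.)
def splat_unclamped(grid, width, x, radius):
    # width is unused: no bounds check, which is exactly the bug
    for dx in range(-radius, radius + 1):
        grid[x + dx] += 1.0

def splat_clamped(grid, width, x, radius):
    for dx in range(-radius, radius + 1):
        i = x + dx
        if 0 <= i < width:          # clamping avoids the IndexError
            grid[i] += 1.0

width, radius = 930, 5
grid = [0.0] * width

# A point whose pixel lands close to the right edge of the tile:
try:
    splat_unclamped(grid, width, x=928, radius=radius)
except IndexError:
    print("IndexError: kernel ran past the end of the grid")

splat_clamped(grid, width, x=928, radius=radius)   # succeeds
```

Under this theory, the "bad" point is not corrupt at all; it is simply the one whose projected pixel lies within radiusPixels of the tile border, which matches the observation that a different point in the same place also crashes.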

20 replies
JB-data
@JB-data
Finally, I notice that if I set radiusPixels:xx with xx a very large number (way too high for what I want), then the one line that was causing an issue before is displayed, but of course the colour is then smoothed out over a much too large area.
sanket1411
@sanket1411
Hi guys, just wanted one clarification: when we say GeoMesa supports a spatio-temporal index, it is supported on some of the distributed databases mentioned in the documentation. With Kafka, GeoMesa is only used for streaming. Do we have support for queries on Kafka (ksql) where the spatio-temporal index can come into play?
9 replies
James Srinivasan
@jrs53
1 reply
Villalta Humberto
@humberto5213

Dear Experts;

I am new to the world of GeoMesa and DataStores. I program only in Python and Scala. I am interested in how to connect a Jupyter notebook to GeoMesa and use its ST_Functions for processing some spatio-temporal data, and also in saving these data sets in a database. If someone has some documentation or tips on how to do it, I would appreciate it very much.

Once again, thank you for your time in answering my inquiry.

2 replies
JB-data
@JB-data
Dear experts,
I'm looking to extend my GeoMesa process to allow for updates.
I'm using Spark to process, and I write to HBase.
Ideally, I would want to set something up so that when a file comes in, it triggers a recomputation of some data already stored (after the recompute, some non-geospatial/temporal fields will be different than before).
The triggering of an event can be done through geomesa-kafka, but I think it won't be capable of updating data, just inserting new data.
HBase isn't ACID-compliant and hence won't allow for updates, right?
27 replies
JB-data
@JB-data
Or is this relevant?
https://www.geomesa.org/documentation/current/user/nifi/updates.html
Maybe it is possible when you simply ingest data using NiFi (I also do quite a bit of Spark SQL processing on my incoming data).
This page also talks about an "ingest process in modify mode", suggesting modifications are possible?
rschnei87
@rschnei87

Hello, I'm trying to connect from a remote Jupyter notebook to an AWS EMR cluster running GeoMesa HBase backed by S3. My Spark kernel is working just fine, but I'm having trouble with the PySpark kernel, specifically the geomesa_pyspark library. I followed the instructions here: https://www.geomesa.org/documentation/stable/user/spark/pyspark.html, built the artifact, and installed it on the cluster via pip3 with no issues. However, I'm stumbling on this part of the tutorial:

"You may then access Spark using a Yarn master by default. Importantly, because of the way the geomesa_pyspark library interacts with the underlying Java libraries, you must set up the GeoMesa configuration before referencing the pyspark library."

Since I'm working from a remote notebook, I have to set up a Spark Session first (via sparkmagic) in order to access the cluster environment and the geomesa_pyspark library. It looks like the tutorial assumes that the notebook and Spark installations are local to each other, but is there any guidance on how to utilize geomesa_pyspark within an EMR + remote notebook context? I apologize if I'm missing something simple, thank you for your time.

5 replies
yn-q
@yn-q
Hello, I have a strange problem. I have a GeoMesa HBase table with a field of type MultiPolygon. The data in this field is large, about 2 MB, and I query for points within the multipolygon. On a newly created table, a single query takes several milliseconds. However, after tens of millions of queries have been performed, a single query takes 70 to 80 milliseconds, and all subsequent queries stay at 70 to 80 milliseconds. A single query only takes several milliseconds again after the table is deleted and rebuilt.
7 replies
chdsb
@chdsb
@elahrvivaz hi, GeoMesa is powerful, and I have some questions. 1. Has GeoMesa tried a Protobuf converter? 2. Which binary format does GeoMesa use to store data in the different backends (filesystem, HBase, etc.)? Avro?
3 replies
sidharth-subramaniam
@sidharth-subramaniam

hi i was trying to install Geomesa HBase on EMR with S3 backend. Using this link - https://www.geomesa.org/documentation/stable/tutorials/geomesa-hbase-s3-on-aws.html

I'm using the geomesa 2.11-3.4.0 .gz file from the release.

Once the .gz file is unzipped and bootstrap.sh has run, I re-login to EMR. Then, when I run the ingest commands, local ingest works, but distributed ingest doesn't.

[root@ip-xxxx gdelt]# geomesa-hbase ingest -c geomesa.gdelt -C gdelt -f gdelt -s gdelt $files
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/geomesa-hbase_2.11-3.4.0/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
INFO  Schema 'gdelt' exists
INFO  Running ingestion in distributed mode
INFO  Submitting job 'GeoMesa Tools Ingest' - please wait...
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.mapreduce.Job.getArchiveSharedCacheUploadPolicies(Lorg/apache/hadoop/conf/Configuration;)Ljava/util/Map;
        at org.apache.hadoop.mapreduce.v2.util.MRApps.setupDistributedCache(MRApps.java:491)
        at org.apache.hadoop.mapred.YARNRunner.setupContainerLaunchContextForAM(YARNRunner.java:545)
        at org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:583)
        at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:325)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:242)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1338)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1338)
        at org.locationtech.geomesa.tools.utils.JobRunner$.submit(JobRunner.scala:46)
        at org.locationtech.geomesa.tools.ingest.ConverterIngestJob.<init>(ConverterIngestJob.scala:57)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$$anon$1.<init>(IngestCommand.scala:173)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$class.startIngest(IngestCommand.scala:173)
        at org.locationtech.geomesa.hbase.tools.HBaseRunner$$anon$2.startIngest(HBaseRunner.scala:30)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$$anonfun$execute$2.apply(IngestCommand.scala:127)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$$anonfun$execute$2.apply(IngestCommand.scala:106)
        at scala.Option.foreach(Option.scala:257)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$class.execute(IngestCommand.scala:106)
        at org.locationtech.geomesa.hbase.tools.HBaseRunner$$anon$2.execute(HBaseRunner.scala:30)
        at org.locationtech.geomesa.tools.Runner$MainExecutor.execute(Runner.scala:203)
        at org.locationtech.geomesa.tools.Runner$class.execute(Runner.scala:39)
        at org.locationtech.geomesa.tools.Runner$class.main(Runner.scala:34)
        at org.locationtech.geomesa.hbase.tools.HBaseRunner$.main(HBaseRunner.scala:14)
        at org.locationtech.geomesa.hbase.tools.HBaseRunner.main(HBaseRunner.scala)
15 replies
Connor
@bmcmillan2

I’m in the process of exporting data from one cluster and ingesting it in another. On the old cluster, I’ve exported my data in an avro format and on my new cluster I initially created the catalog/feature using a SFT/Converter that was based on JSON data. I’m trying to do the re-ingest on my new cluster using the PutGeoMesaAccumulo 3.4.1 processor in NIFI (v 1.16.3) but I’m getting a strange error:
Incompatible schema change detected for schema ‘test’. Changing the default geometry attribute is not supported

The geometry is the same between the two; both are points. The only difference is how they are created. The exported Avro data uses the transformation geometry(avroPath($1,'/geom')) to define the geometry, whereas the original JSON data (that was used to create the catalog/feature) was defined using the transformation function `point($lon,$lat)`.

The weird thing is I can ingest the data just fine using the command-line tools (same Avro file and same SFT/converter spec for the Avro). It's only throwing the error in NiFi. Is there something that I'm missing in the NiFi configuration, or is there some difference between how data ingest is handled using the command-line tools vs NiFi?

13 replies
Pekka Kasa
@pekka-aleksi
Hello. Quick question: I might be dumb, but theoretically, why couldn't whatever function that starts from (0,0) (say, the upper left corner), sweeps to the upper right corner (0,1), then takes a discrete jump downwards, then sweeps back left, and so on, replace the space-filling curve GeoMesa uses now? (I was just reading about how that works.)
7 replies
It doesn't fill R^2, surely, but that's not the point with discrete data, is it?
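One concrete reason a Z-order (Morton) curve is preferred over a row-by-row sweep is range-query decomposition: the index exists to turn a 2D bounding box into as few contiguous key ranges as possible, and a sweep fragments every box into one range per row. A small plain-Python sketch (purely illustrative):

```python
# Compare how many contiguous index ranges a 4x4 query window needs under
# (a) a row-major sweep and (b) a Z-order (Morton) curve on an 8x8 grid.
def morton(x, y, bits=3):
    """Interleave the bits of x and y into a single Z-order index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def contiguous_runs(indices):
    """Count maximal runs of consecutive integers in an index set."""
    s = sorted(indices)
    return 1 + sum(1 for a, b in zip(s, s[1:]) if b != a + 1)

window = [(x, y) for x in range(4) for y in range(4)]   # 4x4 box at origin

sweep_idx = [y * 8 + x for x, y in window]   # row-major sweep indexing
z_idx = [morton(x, y) for x, y in window]    # Z-order indexing

print("sweep runs:", contiguous_runs(sweep_idx))   # one run per row
print("z-order runs:", contiguous_runs(z_idx))     # a single run
```

The sweep has perfect step-to-step locality, but a w-by-h query window always splits into h separate scan ranges, whereas aligned power-of-two squares map to a single Z-order range. (For arbitrary windows Z-order also fragments, just far less on average, which is why GeoMesa uses it together with range-merging heuristics.)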
tosen1990
@tosen1990

Hi, teams

I'm currently working on the Lambda Data Store. I strictly followed
the tutorial and tried to complete the example.
But so far I've tried different versions of GeoServer and GeoMesa, and I still fail to visualize the data with GeoServer.
Could anyone give me a hand?

The log from geoserver shows:

2022-11-07 10:51:50,871 ERROR [geoserver.ows] - 
org.geoserver.platform.ServiceException: Rendering process failed
    at org.geoserver.wms.map.RenderedImageMapOutputFormat.produceMap(RenderedImageMapOutputFormat.java:642)
    at org.geoserver.wms.map.RenderedImageMapOutputFormat.produceMap(RenderedImageMapOutputFormat.java:275)
    at org.geoserver.wms.map.RenderedImageMapOutputFormat.produceMap(RenderedImageMapOutputFormat.java:135)
    at org.geoserver.wms.GetMap.executeInternal(GetMap.java:749)
    at org.geoserver.wms.GetMap.run(GetMap.java:300)
    at org.geoserver.wms.GetMap.run(GetMap.java:123)
    at org.geoserver.wms.DefaultWebMapService.getMap(DefaultWebMapService.java:246)
    at sun.reflect.GeneratedMethodAccessor337.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
    at org.geoserver.kml.WebMapServiceKmlInterceptor.invoke(WebMapServiceKmlInterceptor.java:38)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
    at org.geoserver.gwc.wms.CacheSeedingWebMapService.invoke(CacheSeedingWebMapService.java:55)
    at org.geoserver.gwc.wms.CacheSeedingWebMapService.invoke(CacheSeedingWebMapService.java:31)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
    at org.geoserver.gwc.wms.CachingWebMapService.invoke(CachingWebMapService.java:61)
    at org.geoserver.gwc.wms.CachingWebMapService.invoke(CachingWebMapService.java:41)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
    at org.geoserver.ows.util.RequestObjectLogger.invoke(RequestObjectLogger.java:50)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
    at com.sun.proxy.$Proxy109.getMap(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor298.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.geoserver.ows.Dispatcher.execute(Dispatcher.java:877)
    at org.geoserver.ows.Dispatcher.handleRequestInternal(Dispatcher.java:265)
    at org.springframework.web.servlet.mvc.AbstractController.handleRequest(AbstractController.java:177)
    at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:52)
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1040)
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943)
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
    at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
    at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:873)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1623)
15 replies
Caused by: org.geotools.data.DataSourceException
    at org.vfny.geoserver.global.GeoServerFeatureSource.getFeatures(GeoServerFeatureSource.java:432)
    at org.vfny.geoserver.global.GeoServerFeatureSource.getFeatures(GeoServerFeatureSource.java:71)
    at org.geotools.renderer.lite.StreamingRenderer.getFeatures(StreamingRenderer.java:2336)
    at org.geotools.renderer.lite.StreamingRenderer.processStylers(StreamingRenderer.java:2213)
    at org.geotools.renderer.lite.StreamingRenderer.paint(StreamingRenderer.java:900)
    at org.geoserver.wms.map.RenderedImageMapOutputFormat.produceMap(RenderedImageMapOutputFormat.java:601)
    ... 126 more
Caused by: java.lang.NullPointerException
    at org.vfny.geoserver.global.GeoServerFeatureSource.applyProjectionPolicies(GeoServerFeatureSource.java:497)
    at org.vfny.geoserver.global.GeoServerFeatureSource.getFeatures(GeoServerFeatureSource.java:430)
    ... 131 more
tosen1990
@tosen1990
image.png
JB-data
@JB-data
random question: will GeoMesa work on Spark on Kubernetes?
Has this been tested?
I'm currently using Spark on a traditional cluster, but more and more is moving to Spark on k8s...
8 replies
James Srinivasan
@jrs53
Before I create the pr, any objections to a .devcontainer folder? (https://code.visualstudio.com/docs/devcontainers/containers)
5 replies
MARIAH👩🏾‍💻
@mariahakinbi_twitter
Not sure if this is the right spot to post this question, but does anyone know what package st_transform is in? https://stackoverflow.com/questions/74382838/geomesa-pyspark-analysisexception-undefined-function-st-transform/74390616#74390616
20 replies
Peter Corless
@PeterCorless
image.png

@elahrvivaz : We've done a first pass at testing with #ScyllaDB. The good news? It works out of the box. We'll keep doing more testing to see what we can learn. Would love to chat with you more re: ScyllaDB compatibility on your site, and also presenting GeoMesa to our ScyllaDB Community at our upcoming ScyllaDB Summit. https://www.scylladb.com/2022/10/25/scylladb-summit-2023-call-for-speakers/

You can email me at peter@scylladb.com.

Kristin Cowalcijk
@Kontinuation
Hi. We've implemented a library, GeoMesa SQL, for running SQL queries on GeoMesa DataStores. I hope it can be useful for GeoMesa users.
14 replies
James Srinivasan
@jrs53
Should geomesa's tests run on an 8 GB system? I'm getting the OOM killer doing its thing when running a full build.
9 replies
Rubin Wang
@1085904057
Hi, we create an SFT with the Z2 index; our data store is HBase. When we insert data with the same coordinates and a different fid, the data gets overwritten. How can I avoid this? Thank you!
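Generically (and hypothetically; this is not GeoMesa's actual key layout), this failure mode appears whenever the physical row key is derived only from the geometry, so two features with identical coordinates collide on the same row; including a unique feature id in the key keeps both rows. A toy sketch:

```python
# Hypothetical sketch: a key-value store keyed purely by a spatial index
# value collides for identical coordinates; suffixing the fid disambiguates.
def z_key(lon, lat):
    # Toy "spatial index": quantize the coordinates into a single integer.
    return (int((lon + 180) * 100) << 16) | int((lat + 90) * 100)

store_bad, store_good = {}, {}

features = [("fid-1", 10.5, 20.25), ("fid-2", 10.5, 20.25)]  # same coords
for fid, lon, lat in features:
    store_bad[z_key(lon, lat)] = fid              # second put overwrites
    store_good[(z_key(lon, lat), fid)] = fid      # fid in key: both kept

print(len(store_bad), len(store_good))  # 1 2
```

So the thing to check is whether the fids really differ at write time (e.g. whether the feature writer is configured to use provided fids rather than regenerating them from the attributes).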
3 replies
mwhei
@mwhei
Hello everyone,
When I run a GeoMesa Spark SQL demo, it throws an error. Would you please give me some suggestions? Thank you very much!
5 replies
image.png
mwhei
@mwhei
Hi, Is there a complete example code that uses Geomesa Spark SQL to access Geomesa HBase?
4 replies
Chintan Mistri
@cM2908
Hi @elahrvivaz @jnh5y, I'm trying to bulk load the T-Drive data into geomesa-hbase.
The bulk ingest completed successfully with no errors.
While doing the bulk load of the generated files, the error below occurred.
Does anyone have an idea of what could have caused it?
Thanks
4 replies
mwhei
@mwhei
How can I force Spark SQL to use the GeoToolsSpatialRDDProvider to access GeoMesa HBase?

val params = Map(
  "geotools" -> "true",
  "hbase.catalog" -> "buildings",
  "hbase.zookeeper.property.clientPort" -> "2181",
  "hbase.zookeepers" -> "192.168.0.11:2181"
)

These parameters don't work.
3 replies
Chintan Mistri
@cM2908
@elahrvivaz What would be the appropriate version of GeoServer for GeoMesa 3.4.1 as nothing is mentioned in the official documentation?
1 reply
Chintan Mistri
@cM2908
Screenshot from 2022-11-16 18-06-53.png
4 replies
JB-data
@JB-data
Dear experts,
Some questions about the fid. I just changed it to a custom one, since I want to be able to update records later.
1) Is the fid not queryable in a CQL query? I get an error.
2) My custom fid is a unique id (some number + date), like 1922383221123. Does this sound OK? Or does it have an impact on performance when retrieving data through CQL?
Thanks!!
12 replies
Michael McNeil
@scompmc
The geomesa-accumulo export command produces drastically different results between distributed and local exports for the same CQL. When running distributed, it seems to not find all of the data. For example, the same CQL with just a time range yielded 409,243 results for local export, but only 53,740 for distributed export into HDFS. I'd like to use distributed export for speed. Any ideas why it might not be exporting all of the data? Note: using GeoMesa 3.3 on an Accumulo/HDFS cluster.
4 replies
Bo Wang
@snowfield516_gitlab
Dear experts, is it possible to ingest GeoJSON data with EPSG:4490 into GeoMesa? Many thanks for considering my request.
1 reply
JB-data
@JB-data

Dear experts,
I'm trying to see if I can delete a record as per https://www.geomesa.org/documentation/3.0.0/user/geotools.html#writing-data.
I first need to load the datastore as per https://www.geomesa.org/documentation/3.0.0/user/geotools.html#getting-a-data-store-instance.
As the HBase datastore implementation needs to be on the classpath, I create a directory /mydir and put hbase-site.xml and geomesa-hbase-datastore_2.11-3.0.0.jar in there.
Then export CLASSPATH=/mydir.
When I try to load the datastore of interest, I get a message which hints that it probably doesn't find everything on the classpath:

scala> val params = Map(HBaseDataStoreParams.HBaseCatalogParam.key -> "ftfm_lines_testD")
<console>:28: error: Symbol 'type <none>.scalalogging.LazyLogging' is missing from the classpath.
This symbol is required by 'class org.locationtech.geomesa.utils.geotools.GeoMesaParam'.
Make sure that type LazyLogging is in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
A full rebuild may help if 'GeoMesaParam.class' was compiled against an incompatible version of <none>.scalalogging.
       val params = Map(HBaseDataStoreParams.HBaseCatalogParam.key -> "ftfm_lines_testD")

Did I put the correct file on the classpath, or is there more to it?

22 replies
chdsb
@chdsb
Hi, if I only want to serialize and save SimpleFeatures from a Scala List or RDD to disk, rather than use a Z2 index:
Parquet, Shapefile, or Avro: which is the best choice, or which combines best with GeoMesa? The files on disk will be read back into RAM and processed by GeoMesa. Thanks
1 reply
loridigia
@loridigia
Hi guys, could you gently remind me how GeoMesa caches HBase connections in Java? (If I remember correctly, it was using the name of the XML file and some other params to create a map of key -> connection.)
Thank you!
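I can't speak to GeoMesa's exact implementation, but the general pattern being described here, a lazily-populated map from connection parameters to a shared connection, can be sketched like this (all names hypothetical):

```python
# Sketch of a parameter-keyed connection cache (hypothetical names; not
# GeoMesa's actual code). The cache key is derived from the config values,
# e.g. the path of the hbase-site.xml plus the other connection parameters.
class FakeConnection:
    instances = 0
    def __init__(self, key):
        FakeConnection.instances += 1
        self.key = key

_cache = {}

def get_connection(config: dict):
    # frozenset of items -> hashable, order-insensitive cache key
    key = frozenset(config.items())
    if key not in _cache:
        _cache[key] = FakeConnection(key)   # created once per distinct config
    return _cache[key]

c1 = get_connection({"hbase.config.xml": "/etc/hbase-site.xml", "zk": "host:2181"})
c2 = get_connection({"zk": "host:2181", "hbase.config.xml": "/etc/hbase-site.xml"})
print(c1 is c2, FakeConnection.instances)  # True 1
```

The practical consequence of such a scheme is that two datastores configured with byte-identical parameters share one connection, while any difference in the parameters (even an equivalent config spelled differently) produces a second connection.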
8 replies
James Srinivasan
@jrs53
How hard might it be to support newer (>8) Java in geomesa?
5 replies
Nick Maynard
@nickmayn
Hey is there a timeline for when a version will be released that supports spark 3.2? Even if it’s experimental it would help me out a ton. Thank you so much!
4 replies
sidharth-subramaniam
@sidharth-subramaniam

https://github.com/locationtech/geomesa/blob/main/geomesa-hbase/geomesa-hbase-tools/bin/bootstrap-geomesa-hbase-aws.sh

Can someone tell me what needs to be updated in this file to run GeoMesa 3.4.1 with Scala 2.12 on HBase with an S3 backend?
Although the tutorial on the website seems up to date, this file in GitHub (and in the release .gz file which is downloaded) does not seem to have been updated since 2020.

24 replies
pradipth48
@pradipth48
While creating the schema in GeoMesa HBase, I'm getting this issue: ERROR Warning: Missing dependency for command execution: org/apache/hadoop/hbase/HBaseConfiguration
3 replies