Sharath S Bhargav
@sharathbhargav
yeah that would probably work. Will try this out.
I am mostly looking at distributed data stores, so I think I have to use GeoWave or GeoMesa, or look at a distributed extension for PostGIS. Will consider their pros and cons and decide.
Thanks for your time.
Grigory
@pomadchin
heeey, are there any plans to upgrade geotools and jts (up to 1.17)?
unfortunately 1.17 is not a binary compatible release :/ and it creates issues because of that
rfecher
@rfecher
Yep, we've talked about it. We're currently talking about doing a 2.0 release next which would include that breaking change and other bigger ideas.
zhang.yun
@Zhang-Yun
Hi guys, I am new to GeoWave. I am wondering whether it supports storing and indexing linestrings/polygons with x,y,z coordinates,
since all the examples I found are related to points.
Another question: is it possible to manage massive point clouds in geowave?
thanks
zhang.yun
@Zhang-Yun
is there any production grade project that uses geowave as datastore?
rfecher
@rfecher
@Zhang-Yun yes geowave is production grade
zhang.yun
@Zhang-Yun
THANKS
one more question: does it support storing and indexing linestrings/polygons with x,y,z coordinates?
rfecher
@rfecher
regarding linestring/polygon x/y/z: there should be many examples of linestrings/polygons with x,y. We store/retrieve "Z" if a geometry has a Z component, and it can be used for massive point cloud datasets, although indexing "Z" in either case requires some custom code outside of the general API (indices are discovered by the CLI using Java SPI, so you can inject any indexing scheme throughout the application with some custom code; out of the box we support 2D spatial and spatiotemporal indexing).
if indexing X,Y and then post-filtering Z is sufficient for the linestring/poly x,y,z use case, that should be pretty standard out of the box. But to interleave Z into the keys (i.e. the index) you'd have to add a new index to the app for that.
zhang.yun
@Zhang-Yun
thanks a lot!
rfecher
@rfecher
also regarding usage for point clouds there's locationtech/geowave#1799 someone else had raised about a month ago that could give some insight
Muhammed Kalkan
@Nymria
Hello, I can not find any example of ingesting directly from postgresql into geowave. A previous answer by @rfecher states that it is possible via the CLI, but in the documentation almost all of the commands require local files. Can you please give a one-liner example?
Also, I'm quite surprised there isn't any example about this, since it is the most used tool in the GIS community.
rfecher
@rfecher
apologies for not having an example, but we'd be happy to accept contributions... geowave can ingest any data source that has a corresponding geotools datastore (on geowave's classpath) using geowave's geotools-vector format and a corresponding properties file containing the connection parameters (unless of course the datastore is already a file-based source such as a shapefile, in which case you just use the filename of the data rather than a properties file)
so for postgis you'll have to make sure the postgis library is on the classpath (for RPM installs there's a "plugins" directory; you can just add jar files there and they'll automatically be on the CLI classpath) and then run geowave ingest localtogw <properties filename> ... where the properties file has the <key>=<value> params specified here: https://docs.geotools.org/stable/userguide/library/jdbc/postgis.html
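To make that concrete, here is a hypothetical sketch: the store name, index name, and connection values below are made up for illustration, the property keys come from the GeoTools PostGIS page linked above, and the exact CLI arguments may differ by geowave version (check geowave ingest localtogw --help):

```shell
# postgis.properties -- GeoTools PostGIS connection parameters (placeholder values):
#   dbtype=postgis
#   host=localhost
#   port=5432
#   database=mydb
#   schema=public
#   user=geowave
#   passwd=secret

# ingest from that datastore into a geowave store named "mystore" using an
# index named "spatial-idx", restricting ingest to the geotools-vector format
geowave ingest localtogw -f geotools-vector postgis.properties mystore spatial-idx
```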
Muhammed Kalkan
@Nymria
Thanks for the information. I have already started programmatically in Java. I will create an example for the CLI as you described when I am done with this. One more question: writing data to the datastore (embedded accumulo in my case) causes memory to build up and eventually overflow with big data (27 million or so polygons). I am trying to flush from time to time; do you suggest something else?
final SimpleFeature sf = sfBuilder.buildFeature(feature.getID());
i++;
indexWriter.write(sf);
if (i % 1000 == 0) {
  indexWriter.flush();
}
some primitive flushing like the above? does it help?
rfecher
@rfecher
flush() will write the statistics and clear them, so it is probably a nicety to periodically flush, but it really shouldn't be a necessity (aggregated statistics shouldn't be a memory issue). When you're flushing many times, it is best, after you finish writing, to merge the stats in the metadata table, because the stats will be stored as a row per flush() and the stat merging would otherwise need to be done at scan time. For accumulo, when the serverside library is enabled, this is a table compaction on the metadata table, although generally speaking there's a CLI command, geowave stat compact, which will do the appropriate thing for each datastore and is probably just your best/easiest way to merge them. (Well, for accumulo the merging is already tied to accumulo's inherent compaction cycles, so it may end up merged through background compaction anyways; I just find it's often nice to ensure it's compacted at the end of a large ingest.) I guess that's mostly a tangent to understanding why you're having memory issues - is it the accumulo server processes that are constantly growing in memory, or is it the client process that you're writing that's building up memory?
Muhammed Kalkan
@Nymria
I have figured out the memory issue. It was not related to geowave. After successful ingestion of 27 million polygons, I tried the subsample-pixel SLD, and it seems that it subsamples the data: when looking at the big picture and zooming in, there is far less data than the original, even when I change the pixel size to 0.
I thought it renders pixel by pixel: when a pixel is occupied by a feature, geowave no longer searches any more records and hops on to the next pixel. Of course, that's if I understand correctly from sources online. The behaviour was like I mentioned before.
Muhammed Kalkan
@Nymria
Accumulo is throwing errors when I first run my code to ingest, like:
18 Jul 08:28:59 ERROR [vector.FeatureDataAdapter] - BasicWriter not found for binding type:java.util.Date
18 Jul 08:28:59 WARN [base.BaseDataStoreUtils] - Data writer of class class org.locationtech.geowave.core.store.adapter.InternalDataAdapterWrapper does not support field for 2019-04-01
when I try to run it a second time, it ends with a null pointer:
sh-4.2# java -jar geowaveapi-1.0-SNAPSHOT-jar-with-dependencies.jar
18 Jul 08:32:41 WARN [transport.TIOStreamTransport] - Error closing output stream.
java.io.IOException: The stream is closed
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at org.apache.thrift.transport.TIOStreamTransport.close(TIOStreamTransport.java:110)
at org.apache.thrift.transport.TFramedTransport.close(TFramedTransport.java:89)
at org.apache.accumulo.core.client.impl.ThriftTransportPool$CachedTTransport.close(ThriftTransportPool.java:335)
at org.apache.accumulo.core.client.impl.ThriftTransportPool.returnTransport(ThriftTransportPool.java:595)
at org.apache.accumulo.core.rpc.ThriftUtil.returnClient(ThriftUtil.java:159)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:755)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:367)
at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at org.locationtech.geowave.core.store.adapter.InternalDataAdapterWrapper.encode(InternalDataAdapterWrapper.java:81)
at org.locationtech.geowave.core.store.base.BaseDataStoreUtils.getWriteInfo(BaseDataStoreUtils.java:348)
at org.locationtech.geowave.core.store.base.BaseIndexWriter.write(BaseIndexWriter.java:77)
at org.locationtech.geowave.core.store.base.BaseIndexWriter.write(BaseIndexWriter.java:64)
at org.locationtech.geowave.core.store.index.writer.IndexCompositeWriter.lambda$write$0(IndexCompositeWriter.java:42)
at org.locationtech.geowave.core.store.index.writer.IndexCompositeWriter$$Lambda$89/275056979.apply(Unknown Source)
at org.locationtech.geowave.core.store.index.writer.IndexCompositeWriter.internalWrite(IndexCompositeWriter.java:55)
at org.locationtech.geowave.core.store.index.writer.IndexCompositeWriter.write(IndexCompositeWriter.java:42)
at com.uasis.geowaveapi.Geowave.ingestFromPostgis(Geowave.java:162)
at com.uasis.geowaveapi.Geowave.main(Geowave.java:98)
Any ideas what might be happening? I have downloaded geowave-accumulo-1.2.0-apache-accumulo1.7.jar and configured it as in the user guide.
Muhammed Kalkan
@Nymria
using accumulo version 1.7.2
rfecher
@rfecher
re: the issues with ingest, my best guess is geowaveapi-1.0-SNAPSHOT-jar-with-dependencies.jar doesn't contain the SPI files under META-INF/services. Can you confirm that inside that jar there is a file META-INF/services/org.locationtech.geowave.core.store.data.field.FieldSerializationProviderSpi, and that inside that file there is a line for org.locationtech.geowave.core.geotime.store.field.DateSerializationProvider?
that Service Provider Interface (SPI) file is how GeoWave finds the reader and writer for java.util.Date, which it is saying it is unable to find, so it seems the line mentioned above went missing from META-INF when you created that jar
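If it helps, one quick way to check is a sketch using standard unzip/grep (adjust the jar name to yours):

```shell
unzip -p geowaveapi-1.0-SNAPSHOT-jar-with-dependencies.jar \
  META-INF/services/org.locationtech.geowave.core.store.data.field.FieldSerializationProviderSpi \
  | grep DateSerializationProvider
```

If the SPI file is missing from the jar, or grep prints nothing, the shaded jar lost the entry.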
rfecher
@rfecher
re: the subsample pixel mentioned above, that works really well for point data, but it oversamples polygons because there isn't a good one-to-one correlation between a pixel boundary and the space-filling-curve representation of a polygon (not to mention styling is important, such as fill or no fill) - there are complex alternatives I've prototyped, and in general tile caching may be a simpler alternative, but this is all reasonably detailed here for some further info
Muhammed Kalkan
@Nymria
Thanks for the tips. Unfortunately, the Date entry is not present in META-INF as you described. I have commented out the date fields entirely just to make it work, but the accumulo problem persists. Meaning, the first ingestion goes OK, but I can't see any types; it does not ingest even though no error is thrown. When I try a second time and onward, I get the null pointer described above. I have also tried the accumulo 1.9.x version, but no luck.
And by the way, I was using the https://github.com/geodocker/geodocker-accumulo accumulo setup. Maybe that helps resolve something in the future.
rfecher
@rfecher
well, I wasn't suggesting you simply do without date fields; it was just the thing in your description that pointed at the general overall problem. In your jar, are you including geowave-core-geotime? I think you likely are, as it's pretty fundamental and core to geowave, but if not, you should. I think the issue is probably with how you generate that shaded jar - you need to concatenate all the SPI files so that the result is fully inclusive. In Maven that is done with this line as an example, where you invoke the ServicesResourceTransformer, which automatically concatenates the META-INF/services files. If something like this is not done, common service files will be overwritten and some entries will just end up missing, likely causing many unknown issues.
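For reference, a minimal maven-shade-plugin snippet with that transformer (the plugin version here is an arbitrary example, not a requirement):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <transformers>
          <!-- concatenate META-INF/services files from all dependencies
               instead of letting one jar's copy overwrite another's -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```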
Muhammed Kalkan
@Nymria
I have included the core-geotime package as well; still, the META-INF entry was missing. However, I had to skip the Date issue for now, just to see that everything else was working as expected, and deal with it later on. But I suppose you think this might be the source of the accumulo ingest problems even without Date. I will take a look and get back.
Muhammed Kalkan
@Nymria
Update on the subject: I am working directly from the project and running it with Maven. I tried 2 dockerized accumulo setups and get the same errors below.
Muhammed Kalkan
@Nymria
[root@ffe7b9e3d42a geowaveapi]# mvn exec:java -Dexec.mainClass="com.uasis.geowaveapi.Geowave"
[INFO] Scanning for projects...
[INFO] 
[INFO] ------------------------< com.uasis:geowaveapi >------------------------
[INFO] Building geowaveapi 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[WARNING] The POM for commons-codec:commons-codec:jar:1.15-SNAPSHOT is missing, no dependency information available
[INFO] 
[INFO] --- exec-maven-plugin:3.0.0:java (default-cli) @ geowaveapi ---
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'GeoServerResourceLoader', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'GeoServerResourceLoader', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'ExtensionFilter', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'ExtensionProvider', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'ExtensionFilter', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'GeoServerResourceLoader', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'GeoServerResourceLoader', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'ExtensionFilter', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'ExtensionProvider', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'ExtensionFilter', but ApplicationContext is unset.
Finito
22 Jul 22:52:33 ERROR [zookeeper.ClientCnxn] - Event thread exiting due to interruption
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
22 Jul 22:52:40 WARN [zookeeper.ClientCnxn] - Session 0x10008674a240008 for server zookeeper.geodocker-accumulo-geomesa_default/172.25.0.3:2181, unexpected error, closing socket connection and attempting reconnect
java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:478)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:117)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[WARNING] thread Thread[com.uasis.geowaveapi.Geowave.main(zookeeper.geodocker-accumulo-geomesa_default:2181),5,com.uasis.geowaveapi.Geowave] was interrupted but is still alive after waiting at least 14999msecs
[WARNING] thread Thread[com.uasis.geowaveapi.Geowave.main(zookeeper.geodocker-accumulo-geomesa_default:2181),5,com.uasis.geowaveapi.Geowave] will linger despite being asked to die via interruption
[WARNING] thread Thread[Thrift Connection Pool Checker,5,com.uasis.geowaveapi.Geowave] will linger despite being asked to die via interruption
[WARNING] thread Thread[GT authority factory disposer,5,com.uasis.geowaveapi.Geowave] will linger despite being asked to die via interruption
[WARNING] thread Thread[WeakCollectionCleaner,8,com.uasis.geowaveapi.Geowave] will linger despite being asked to die via interruption
[WARNING] thread Thread[BatchWriterLatencyTimer,5,com.uasis.geowaveapi.Geowave] will linger despite being asked to die via interruption
[WARNING] NOTE: 5 thread(s) did not finish despite being asked to  via interruption. This is not a problem with exec:java, it is a problem with the running code. Although not serious, it should be remedied.
[WARNING] Couldn't destroy threadgroup org.codehaus.mojo.exec.ExecJavaMojo$IsolatedThreadGroup[name=com.uasis.geowaveapi.Geowave,maxpri=10]
java.lang.IllegalThreadStateException
    at java.lang.ThreadGroup.destroy (ThreadGroup.java:778)
    at org.codehaus.mojo.exec.ExecJavaMojo.execute (ExecJavaMojo.java:293)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:498)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  25.523 s
[INFO] Finished at: 2021-07-22T22:52:48Z
[INFO] ------------------------------------------------------------------------
[root@ffe7b9e3d42a geowaveapi]# geowave vector query "select * from acc.uasis limit 1"
Exception in thread "Thread-4" java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
    at org.locationtech.geowave.core.store.entities.GeoWaveKeyImpl.<init>(GeoWaveKeyImpl.java:47)
    at org.locationtech.geowave.core.store.entities.GeoWaveKeyImpl.<init>(GeoWaveKeyImpl.java:37)
    at org.locationtech.geowave.core.store.entities.GeoWaveKeyImpl.<init>(GeoWaveKeyImpl.java:30)
    at org.locationtech.geowave.datastore.accumulo.AccumuloRow.<init>(AccumuloRow.java:52)
    at org.locationtech.geowave.datastore.accumulo.operations.AccumuloReader.internalNext(AccumuloReader.java:198)
    at org.locationtech.geowave.datastore.accumulo.operations.AccumuloReader.access$200(AccumuloReader.java:35)
    at org.locationtech.geowave.datastore.accumulo.operations.AccumuloReader$NonMergingIterator.next(AccumuloReader.java:146)
    at org.locationtech.geowave.datastore.accumulo.operations.AccumuloReader$NonMergingIterator.next(AccumuloReader.java:125)
    at org.locationtech.geowave.core.store.operations.SimpleParallelDecoder$1.run(SimpleParallelDecoder.java:41)
    at java.lang.Thread.run(Thread.java:748)
[root@ffe7b9e3d42a geowaveapi]# java -version
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
rfecher
@rfecher
with regard to the latter NoSuchMethodError, from a quick search it looks like you compiled the classes in that jar with a JDK version >= 9, which produces bytecode that is incompatible with JDK 8 in this regard - see here for the exact same issue and a bit more description explaining it
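for what it's worth, the two usual fixes are compiling with javac --release 8 (or a JDK 8 compiler), or, in code you control, casting to the Buffer supertype before calling the covariant methods. A small pure-Java illustration of the cast idiom (the class name is just for the example):

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;

public class BufferCompat {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.putInt(42);
        // On JDK 9+, ByteBuffer overrides position(int) to return ByteBuffer,
        // so bytecode compiled there references a method descriptor that a
        // Java 8 runtime doesn't have -> NoSuchMethodError at run time.
        // Casting to the Buffer supertype pins the descriptor both have.
        ((Buffer) buf).position(0);
        System.out.println(buf.getInt()); // prints 42
    }
}
```

compiling the whole jar with --release 8 (or maven-compiler-plugin's <release>8</release>) avoids the problem without any code changes.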
rfecher
@rfecher
as for the previous errors in the first 2 consoles, I really don't have much context as to what you're trying to do in each of them. All I can read is that the second console is apparently attempting to terminate threads from the first, and the first has a zookeeper thread warning about being interrupted (seemingly related to the second console, considering there are messages like "Thread will linger despite being asked to die via interruption"). This generally seems more related to your application logic than to core/fundamental geowave processes?
Muhammed Kalkan
@Nymria
About the JDK incompatibility: the geowave CLI was installed from the website. Maybe I should compile with that very JDK version, given that output?
About zookeeper, I understand what you meant. The first error it gives happens right after everything is done, at the final return statement. I should investigate a bit more. That's why I wanted to check via the CLI whether the ingestion happened, but that also failed as above.
Muhammed Kalkan
@Nymria
more on the subject:
2021-07-23 21:42:23,352 [iterators.IteratorUtil] ERROR: java.lang.ClassNotFoundException: org.locationtech.geowave.datastore.accumulo.MergingCombiner
accumulo-tserver_1  | 2021-07-23 21:42:23,353 [scan.LookupTask] WARN : exception while doing multi-scan 
accumulo-tserver_1  | java.lang.RuntimeException: java.lang.ClassNotFoundException: org.locationtech.geowave.datastore.accumulo.MergingCombiner
accumulo-tserver_1  |     at org.apache.accumulo.core.iterators.IteratorUtil.loadIterators(IteratorUtil.java:336)
accumulo-tserver_1  |     at org.apache.accumulo.core.iterators.IteratorUtil.loadIterators(IteratorUtil.java:294)
accumulo-tserver_1  |     at org.apache.accumulo.tserver.tablet.ScanDataSource.createIterator(ScanDataSource.java:231)
accumulo-tserver_1  |     at org.apache.accumulo.tserver.tablet.ScanDataSource.iterator(ScanDataSource.java:134)
accumulo-tserver_1  |     at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.seek(SourceSwitchingIterator.java:231)
accumulo-tserver_1  |     at org.apache.accumulo.tserver.tablet.Tablet.lookup(Tablet.java:601)
accumulo-tserver_1  |     at org.apache.accumulo.tserver.tablet.Tablet.lookup(Tablet.java:755)
accumulo-tserver_1  |     at org.apache.accumulo.tserver.scan.LookupTask.run(LookupTask.java:116)
accumulo-tserver_1  |     at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
accumulo-tserver_1  |     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
accumulo-tserver_1  |     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
accumulo-tserver_1  |     at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
accumulo-tserver_1  |     at java.lang.Thread.run(Thread.java:748)
adding the geowave accumulo jar to accumulo's classpath:
config -s general.vfs.context.classpath.geowave=hdfs://hdfs-name:8020/accumulo/lib/[^.].*.jar
sh-4.2# hdfs dfs -ls hdfs://hdfs-name:8020/accumulo/lib
Found 1 items
-rwxrwxrwx   3 root supergroup  226825543 2021-07-23 21:28 hdfs://hdfs-name:8020/accumulo/lib/geowave-deploy-1.2.0-accumulo.jar