Muhammed Kalkan
@Nymria
final SimpleFeature sf = sfBuilder.buildFeature(feature.getID());
i++;
indexWriter.write(sf);
if (i % 1000 == 0) {
  indexWriter.flush();
}
Would some primitive flushing like the above help?
rfecher
@rfecher
flush() will write the statistics and clear them, so it is probably a nicety to periodically flush, but it really shouldn't be a necessity (aggregated statistics shouldn't be a memory issue).

When you're flushing many times, it is best to merge the stats in the metadata table after you finish writing, because the stats are stored as a row per flush() and the merging would otherwise need to be done at scan time. For Accumulo, when the serverside library is enabled, this is a table compaction on the metadata table; generally speaking, though, there's a CLI command, geowave stat compact, which will do the appropriate thing for each datastore and is probably your best/easiest way to merge them. (For Accumulo the merging is already tied to Accumulo's inherent compaction cycles, so it may end up merged through background compaction anyway; I just find it's often nice to ensure it's compacted at the end of a large ingest.)

I guess that's mostly a tangent to understanding why you're having memory issues: is it the Accumulo server processes that are constantly growing in memory, or is it the client process you've written that's building up memory?
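A hedged sketch of that merge step from the shell; the store name "mystore" is hypothetical (check the CLI help for the exact arguments):

    # merge the per-flush statistics rows after a large ingest
    geowave stat compact mystore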
Muhammed Kalkan
@Nymria
I have figured out the memory issue. It was not related to GeoWave. After a successful ingestion of 27 million polygons, I tried the subsample-per-pixel SLD, and it seems that subsamples the data: when looking at the big picture and zooming in, there is far less data than the original, even when I change the pixel size to 0.
I thought it rendered pixel by pixel: once a pixel is occupied by a feature, GeoWave no longer searches for more records and hops on to the next pixel. Of course, that's if I understand correctly from sources online. The behaviour was as I mentioned before.
Muhammed Kalkan
@Nymria
Accumulo throws errors when I first run my code to ingest, like:
18 Jul 08:28:59 ERROR [vector.FeatureDataAdapter] - BasicWriter not found for binding type:java.util.Date
18 Jul 08:28:59 WARN [base.BaseDataStoreUtils] - Data writer of class class org.locationtech.geowave.core.store.adapter.InternalDataAdapterWrapper does not support field for 2019-04-01
When I try to run it a second time, it ends with a null pointer:
sh-4.2# java -jar geowaveapi-1.0-SNAPSHOT-jar-with-dependencies.jar
18 Jul 08:32:41 WARN [transport.TIOStreamTransport] - Error closing output stream.
java.io.IOException: The stream is closed
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
    at org.apache.thrift.transport.TIOStreamTransport.close(TIOStreamTransport.java:110)
    at org.apache.thrift.transport.TFramedTransport.close(TFramedTransport.java:89)
    at org.apache.accumulo.core.client.impl.ThriftTransportPool$CachedTTransport.close(ThriftTransportPool.java:335)
    at org.apache.accumulo.core.client.impl.ThriftTransportPool.returnTransport(ThriftTransportPool.java:595)
    at org.apache.accumulo.core.rpc.ThriftUtil.returnClient(ThriftUtil.java:159)
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:755)
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:367)
    at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
    at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
    at org.locationtech.geowave.core.store.adapter.InternalDataAdapterWrapper.encode(InternalDataAdapterWrapper.java:81)
    at org.locationtech.geowave.core.store.base.BaseDataStoreUtils.getWriteInfo(BaseDataStoreUtils.java:348)
    at org.locationtech.geowave.core.store.base.BaseIndexWriter.write(BaseIndexWriter.java:77)
    at org.locationtech.geowave.core.store.base.BaseIndexWriter.write(BaseIndexWriter.java:64)
    at org.locationtech.geowave.core.store.index.writer.IndexCompositeWriter.lambda$write$0(IndexCompositeWriter.java:42)
    at org.locationtech.geowave.core.store.index.writer.IndexCompositeWriter$$Lambda$89/275056979.apply(Unknown Source)
    at org.locationtech.geowave.core.store.index.writer.IndexCompositeWriter.internalWrite(IndexCompositeWriter.java:55)
    at org.locationtech.geowave.core.store.index.writer.IndexCompositeWriter.write(IndexCompositeWriter.java:42)
    at com.uasis.geowaveapi.Geowave.ingestFromPostgis(Geowave.java:162)
    at com.uasis.geowaveapi.Geowave.main(Geowave.java:98)
Any ideas what might be happening? I have downloaded geowave-accumulo-1.2.0-apache-accumulo1.7.jar and configured it as in the user guide.
Muhammed Kalkan
@Nymria
Using Accumulo version 1.7.2.
rfecher
@rfecher
Re: the issues with ingest, my best guess is that geowaveapi-1.0-SNAPSHOT-jar-with-dependencies.jar doesn't contain the SPI files under META-INF/services. Can you confirm that inside that jar there is a file META-INF/services/org.locationtech.geowave.core.store.data.field.FieldSerializationProviderSpi, and that inside that file there is a line for org.locationtech.geowave.core.geotime.store.field.DateSerializationProvider?
That Service Provider Interface (SPI) is how GeoWave finds the reader and writer for java.util.Date, which it is saying it is unable to find, so it seems the line mentioned above went missing from META-INF when you created that jar.
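One quick way to confirm, as a hedged sketch using standard shell tools rather than anything GeoWave-specific:

    # print the SPI file from inside the shaded jar and look for the Date provider
    unzip -p geowaveapi-1.0-SNAPSHOT-jar-with-dependencies.jar \
      META-INF/services/org.locationtech.geowave.core.store.data.field.FieldSerializationProviderSpi \
      | grep DateSerializationProvider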
rfecher
@rfecher
Re: the subsample pixel mentioned above, that works really well for point data, but it oversamples polygons because there isn't a good one-to-one correlation between a pixel boundary and the space-filling-curve representation of a polygon (not to mention that styling is important, such as fill or no fill). There are complex alternatives I've prototyped, and in general tile caching may be a simpler alternative, but this is all covered in reasonable detail here for some further info.
Muhammed Kalkan
@Nymria
Thanks for the tips. Unfortunately, the Date provider is not present in META-INF as you described. I have commented out the date fields entirely just to make it work, but the Accumulo problem persists. Meaning: the first ingestion goes OK, but I can't see any types; it does not ingest even though there is no error thrown. When I try a second time and so on, I get the null pointer described above. I have also tried an Accumulo 1.9.x version, but no luck.
And by the way, I was using the https://github.com/geodocker/geodocker-accumulo Accumulo setup. Maybe that helps resolve something in the future.
rfecher
@rfecher
Well, I wasn't suggesting simply doing without date fields; it was just the thing in your description that pointed at the general overall problem. In your jar are you including geowave-core-geotime? I think you likely are, as it's pretty fundamental and core to GeoWave, but if not you should. I think the issue is probably with how you generate that shaded jar: you need to concatenate all the SPI files so that it's fully inclusive. In Maven that is done with this line as an example, where you invoke the ServicesResourceTransformer, which automatically concatenates the META-INF/services files. If something like this is not done, it will overwrite common service files and some will just end up missing, likely causing many unknown issues.
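For reference, a minimal maven-shade-plugin sketch with the ServicesResourceTransformer (plugin version and the rest of the POM omitted; the transformer line is the essential part):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <transformers>
              <!-- concatenates META-INF/services files across dependencies
                   instead of letting them overwrite one another -->
              <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
            </transformers>
          </configuration>
        </execution>
      </executions>
    </plugin>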
Muhammed Kalkan
@Nymria
I have included the core-geotime package as well; still, the META-INF entry was missing. However, I had to skip the Date issue for now, just to see everything else working as expected, and deal with it later. But I suppose you think this might be the source of the Accumulo ingest problems even without Date. I will take a look and get back.
Muhammed Kalkan
@Nymria
Update on the subject: I am working directly from the project and running it with Maven. I tried 2 dockerized Accumulo setups and get the same errors below.
Muhammed Kalkan
@Nymria
[root@ffe7b9e3d42a geowaveapi]# mvn exec:java -Dexec.mainClass="com.uasis.geowaveapi.Geowave"
[INFO] Scanning for projects...
[INFO] 
[INFO] ------------------------< com.uasis:geowaveapi >------------------------
[INFO] Building geowaveapi 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[WARNING] The POM for commons-codec:commons-codec:jar:1.15-SNAPSHOT is missing, no dependency information available
[INFO] 
[INFO] --- exec-maven-plugin:3.0.0:java (default-cli) @ geowaveapi ---
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'GeoServerResourceLoader', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'GeoServerResourceLoader', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'ExtensionFilter', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'ExtensionProvider', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'ExtensionFilter', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'GeoServerResourceLoader', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'GeoServerResourceLoader', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'ExtensionFilter', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'ExtensionProvider', but ApplicationContext is unset.
Jul 22, 2021 10:52:28 PM org.geoserver.platform.GeoServerExtensions checkContext
WARNING: Extension lookup 'ExtensionFilter', but ApplicationContext is unset.
Finito
22 Jul 22:52:33 ERROR [zookeeper.ClientCnxn] - Event thread exiting due to interruption
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
22 Jul 22:52:40 WARN [zookeeper.ClientCnxn] - Session 0x10008674a240008 for server zookeeper.geodocker-accumulo-geomesa_default/172.25.0.3:2181, unexpected error, closing socket connection and attempting reconnect
java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:478)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:117)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[WARNING] thread Thread[com.uasis.geowaveapi.Geowave.main(zookeeper.geodocker-accumulo-geomesa_default:2181),5,com.uasis.geowaveapi.Geowave] was interrupted but is still alive after waiting at least 14999msecs
[WARNING] thread Thread[com.uasis.geowaveapi.Geowave.main(zookeeper.geodocker-accumulo-geomesa_default:2181),5,com.uasis.geowaveapi.Geowave] will linger despite being asked to die via interruption
[WARNING] thread Thread[Thrift Connection Pool Checker,5,com.uasis.geowaveapi.Geowave] will linger despite being asked to die via interruption
[WARNING] thread Thread[GT authority factory disposer,5,com.uasis.geowaveapi.Geowave] will linger despite being asked to die via interruption
[WARNING] thread Thread[WeakCollectionCleaner,8,com.uasis.geowaveapi.Geowave] will linger despite being asked to die via interruption
[WARNING] thread Thread[BatchWriterLatencyTimer,5,com.uasis.geowaveapi.Geowave] will linger despite being asked to die via interruption
[WARNING] NOTE: 5 thread(s) did not finish despite being asked to  via interruption. This is not a problem with exec:java, it is a problem with the running code. Although not serious, it should be remedied.
[WARNING] Couldn't destroy threadgroup org.codehaus.mojo.exec.ExecJavaMojo$IsolatedThreadGroup[name=com.uasis.geowaveapi.Geowave,maxpri=10]
java.lang.IllegalThreadStateException
    at java.lang.ThreadGroup.destroy (ThreadGroup.java:778)
    at org.codehaus.mojo.exec.ExecJavaMojo.execute (ExecJavaMojo.java:293)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:498)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  25.523 s
[INFO] Finished at: 2021-07-22T22:52:48Z
[INFO] ------------------------------------------------------------------------
[root@ffe7b9e3d42a geowaveapi]# geowave vector query "select * from acc.uasis limit 1"
Exception in thread "Thread-4" java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
    at org.locationtech.geowave.core.store.entities.GeoWaveKeyImpl.<init>(GeoWaveKeyImpl.java:47)
    at org.locationtech.geowave.core.store.entities.GeoWaveKeyImpl.<init>(GeoWaveKeyImpl.java:37)
    at org.locationtech.geowave.core.store.entities.GeoWaveKeyImpl.<init>(GeoWaveKeyImpl.java:30)
    at org.locationtech.geowave.datastore.accumulo.AccumuloRow.<init>(AccumuloRow.java:52)
    at org.locationtech.geowave.datastore.accumulo.operations.AccumuloReader.internalNext(AccumuloReader.java:198)
    at org.locationtech.geowave.datastore.accumulo.operations.AccumuloReader.access$200(AccumuloReader.java:35)
    at org.locationtech.geowave.datastore.accumulo.operations.AccumuloReader$NonMergingIterator.next(AccumuloReader.java:146)
    at org.locationtech.geowave.datastore.accumulo.operations.AccumuloReader$NonMergingIterator.next(AccumuloReader.java:125)
    at org.locationtech.geowave.core.store.operations.SimpleParallelDecoder$1.run(SimpleParallelDecoder.java:41)
    at java.lang.Thread.run(Thread.java:748)
[root@ffe7b9e3d42a geowaveapi]# java -version
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
rfecher
@rfecher
With regard to the latter NoSuchMethodError, from a quick search it looks like you compiled the classes in that jar with a JDK version >= 9, which produces byte code incompatible with JDK 8 in this regard (on JDK 9+, ByteBuffer.position(int) gained a covariant override returning ByteBuffer rather than Buffer, so the compiled call site references a method that doesn't exist on JDK 8); see here for the exact same issue and a bit more description explaining it.
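If the jar is built with Maven, the usual fix is javac's --release flag, which emits JDK 8 byte code and links against the JDK 8 API. A minimal maven-compiler-plugin sketch (plugin version and surrounding POM omitted):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <!-- compile against the JDK 8 API and class-file format,
             avoiding the ByteBuffer covariant-return mismatch -->
        <release>8</release>
      </configuration>
    </plugin>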
rfecher
@rfecher
As for the previous errors in the first 2 consoles, I really don't have much context as to what you're trying to do in each of those consoles. All I can read is that the second console is apparently attempting to terminate threads in the first console, and the first console has a zookeeper thread warning about being interrupted (seemingly related to the second console, considering there are messages like "thread ... will linger despite being asked to die via interruption"). This generally seems more related to your application logic than to core/fundamental GeoWave processes?
Muhammed Kalkan
@Nymria
About the JDK incompatibility: the GeoWave CLI was installed through the website. Maybe I should compile with that very version of the JDK, given that output?
About zookeeper, I do understand what you meant. The first error it gives happens right after everything is done, at the final return statement. I should investigate a bit more. That's why I wanted to check via the CLI whether the ingestion happened, but that also failed as above.
Muhammed Kalkan
@Nymria

I have figured out almost all of it. Only one line is throwing an error:

dataStore.addType(sfAdapter, spatialIndex);

25 Jul 18:20:16 WARN [transport.TIOStreamTransport] - Error closing output stream.
java.io.IOException: The stream is closed
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
    at org.apache.thrift.transport.TIOStreamTransport.close(TIOStreamTransport.java:110)
    at org.apache.thrift.transport.TFramedTransport.close(TFramedTransport.java:89)
    at org.apache.accumulo.core.client.impl.ThriftTransportPool$CachedTTransport.close(ThriftTransportPool.java:335)
    at org.apache.accumulo.core.client.impl.ThriftTransportPool.returnTransport(ThriftTransportPool.java:595)
    at org.apache.accumulo.core.rpc.ThriftUtil.returnClient(ThriftUtil.java:159)
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:755)
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:367)
    at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
    at java.lang.Thread.run(Thread.java:748)
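For context on that line, a hedged sketch of the usual programmatic ingest pattern it belongs to, assuming the GeoWave 1.x org.locationtech.geowave.core.store.api classes and the sfAdapter/spatialIndex naming from the snippets above:

    // register the type and its index once, up front
    dataStore.addType(sfAdapter, spatialIndex);
    // then stream features through a writer for that type
    try (Writer<SimpleFeature> writer = dataStore.createWriter(sfAdapter.getTypeName())) {
      writer.write(sf); // repeat per feature, flushing periodically if desired
    }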
Muhammed Kalkan
@Nymria
Hmm, reported before: locationtech/geowave#22
rfecher
@rfecher
Yeah, so that is an Accumulo background thread where the stream seems to at times be closed twice. That warning can actually be suppressed; we've dug in pretty deep and there are no adverse effects. Does it work for you despite that warning?
Using our log4j properties, which we embed within our packages/installers, that warning would be suppressed.
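For what it's worth, a hedged one-liner that would do that suppression in log4j 1.x properties (logger name taken from the warning above):

    # silence the double-close warning from the Thrift transport
    log4j.logger.org.apache.thrift.transport.TIOStreamTransport=ERROR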
Muhammed Kalkan
@Nymria
Yup, all seems in order.
Ray Hall
@kova70

Hello all! Newbie here... I'm evaluating geographic tools for big data (Cloudera specifically) and have gone through the GeoWave quickstart; I also tried a variation where I use the Kudu DB in Cloudera as the data store.
What I haven't found is the exact syntax of the ingest localToGW command for ingesting a GeoJSON dataset. I have been able to ingest the GDELT files, but when calling the ingest with this command:
geowave ingest localToGW -f geotools-vector --geotools-vector.type geojson test.geojson kustore kustore-spatial
nothing happens (actually test.geojson can have any content; there's no debug output and nothing is stored on the backend).
Thanks in advance for any help!

rfecher
@rfecher
One thing that may help is --debug (it has to come immediately after geowave, so geowave --debug ingest ...) to perhaps get a bit more feedback. Another thing I can say is that --geotools-vector.type is not what you're thinking it is in this case. It filters the ingest to only use that feature type name, so if you had a file with various type names, let's say tracks and waypoints (or really whatever "names"), you could supply it to only ingest one of the feature types.
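Concretely, a hedged re-run of the earlier command with the flag in the right position (same store/index names as above, and without the type filter):

    geowave --debug ingest localToGW -f geotools-vector test.geojson kustore kustore-spatial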
rfecher
@rfecher

Lastly, I can say we're using this GeoTools datastore for the GeoJSON support (which is an "unsupported" GeoTools extension; it does work for the GeoJSON I've tested with, but mileage may vary). So if you're still having issues, it could be worthwhile to make sure it works with that data store directly (one way is by including that library in GeoServer and seeing if you can add the file in as a GeoServer layer; here's a GIS StackExchange thread briefly discussing it).

In the end it may also just be worth quickly writing an ingest format plugin for GeoWave (similar to what's done for GDELT). Here is an example of writing a custom ingest format in GeoWave.

HuiWang
@scially
GeoWave-HBase GeoServer Layer Preview error:
14 Oct 20:22:05 ERROR [client.AsyncRequestFutureImpl] - Cannot get replica 0 location for {"cacheBlocks":true,"totalColumns":1,"row":"zsfwm.83070","families":{"XgA":["ALL"]},"maxVersions":1,"timeRange":["0","9223372036854775807"]}
14 Oct 20:22:05 ERROR [client.AsyncRequestFutureImpl] - Cannot get replica 0 location for {"cacheBlocks":true,"totalColumns":1,"row":"zsfwm.18472","families":{"XgA":["ALL"]},"maxVersions":1,"timeRange":["0","9223372036854775807"]}
.......................

14 Oct 20:23:31 ERROR [dataidx.BatchDataIndexRetrievalIteratorHelper] - Error decoding row
java.util.concurrent.CompletionException: java.lang.NullPointerException
        at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
        at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
        at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:618)
        at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
        at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
        at org.locationtech.geowave.core.store.base.dataidx.BatchIndexRetrievalImpl.lambda$flush$4(BatchIndexRetrievalImpl.java:153)
        at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
        at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
        at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
        at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1596)
        at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
        at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
        at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
        at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
but the geowave query works:
(screenshot attached: image.png)
HuiWang
@scially
Thanks in advance for any help!
rfecher
@rfecher
are you using secondary indexing?
HuiWang
@scially
geowave store add -t hbase --coprocessorJar hdfs://master:9000/hbase/lib/geowave-hbase.jar -z master:2181,slave1:2181,slave2:2181 --gwNamespace geowave datahub-geowave
geowave index add -t spatial -np 4 -PS ROUND_ROBIN datahub-geowave datahub-geowave-index
geowave ingest localToGW -f geotools-vector /opt/data/zsfwm2.shp datahub-geowave datahub-geowave-index
maybe not
HuiWang
@scially
geowave 2.0.0
HuiWang
@scially
Oh... it works when I use a secondary index, but I don't understand why:
geowave store add -t hbase --coprocessorJar hdfs://master:9000/hbase/lib/geowave-hbase.jar --enableSecondaryIndexing -z master:2181,slave1:2181,slave2:2181 --gwNamespace geowave datahub-geowave
HuiWang
@scially
geowave type rm datahub-geowave db_enterpriset_enterprise_polygon
logs this warning:
15 Oct 17:56:51 WARN [operations.HBaseOperations] - Unable to find index to delete