Bringing the scalability of distributed computing to modern geospatial software.
[root@ffe7b9e3d42a geowaveapi]# geowave vector query "select * from acc.uasis limit 1"
Exception in thread "Thread-4" java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
at org.locationtech.geowave.core.store.entities.GeoWaveKeyImpl.<init>(GeoWaveKeyImpl.java:47)
at org.locationtech.geowave.core.store.entities.GeoWaveKeyImpl.<init>(GeoWaveKeyImpl.java:37)
at org.locationtech.geowave.core.store.entities.GeoWaveKeyImpl.<init>(GeoWaveKeyImpl.java:30)
at org.locationtech.geowave.datastore.accumulo.AccumuloRow.<init>(AccumuloRow.java:52)
at org.locationtech.geowave.datastore.accumulo.operations.AccumuloReader.internalNext(AccumuloReader.java:198)
at org.locationtech.geowave.datastore.accumulo.operations.AccumuloReader.access$200(AccumuloReader.java:35)
at org.locationtech.geowave.datastore.accumulo.operations.AccumuloReader$NonMergingIterator.next(AccumuloReader.java:146)
at org.locationtech.geowave.datastore.accumulo.operations.AccumuloReader$NonMergingIterator.next(AccumuloReader.java:125)
at org.locationtech.geowave.core.store.operations.SimpleParallelDecoder$1.run(SimpleParallelDecoder.java:41)
at java.lang.Thread.run(Thread.java:748)
[root@ffe7b9e3d42a geowaveapi]# java -version
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
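That NoSuchMethodError usually means the jars were compiled on JDK 9+ but are being run on Java 8: ByteBuffer.position(int) gained a covariant ByteBuffer return type in Java 9, which is exactly the java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer; descriptor in the trace. If you're building GeoWave yourself, a minimal sketch of the compile-side fix (adapt to your build tool):
# Compiling on JDK 9+ for a Java 8 runtime: --release 8 links against the
# Java 8 API, so ByteBuffer.position(int) resolves to the Java 8 descriptor
# (returning Buffer) rather than the Java 9+ covariant ByteBuffer one.
javac --release 8 Example.java
Otherwise, using artifacts built for Java 8 (or running on a Java 9+ JRE) should avoid it.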
25 Jul 18:20:16 WARN [transport.TIOStreamTransport] - Error closing output stream.
java.io.IOException: The stream is closed
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at org.apache.thrift.transport.TIOStreamTransport.close(TIOStreamTransport.java:110)
at org.apache.thrift.transport.TFramedTransport.close(TFramedTransport.java:89)
at org.apache.accumulo.core.client.impl.ThriftTransportPool$CachedTTransport.close(ThriftTransportPool.java:335)
at org.apache.accumulo.core.client.impl.ThriftTransportPool.returnTransport(ThriftTransportPool.java:595)
at org.apache.accumulo.core.rpc.ThriftUtil.returnClient(ThriftUtil.java:159)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:755)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:367)
at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:748)
Hello all! Newbie here... I'm evaluating geographic tools for big data (Cloudera specifically) and have gone through the GeoWave quickstart; I also tried a variation using the Kudu DB in Cloudera as the data store.
What I haven't found is the exact syntax of the ingest localToGW command for ingesting a GeoJSON dataset. I have been able to ingest the GDELT files, but when calling the ingest with this command:
geowave ingest localToGW -f geotools-vector --geotools-vector.type geojson test.geojson kustore kustore-spatial
nothing happens (test.geojson can actually have any content; there's no debug output and nothing is stored in the backend).
Thanks in advance for any help!
Try adding --debug (it has to come immediately after geowave, so geowave --debug ingest ...) to perhaps get a bit more feedback. Another thing I can say is that --geotools-vector.type is not what you're thinking it is in this case: it filters the ingest to only use that feature type name. So if you had a file with various type names, let's say tracks and waypoints, or really whatever "names", you could supply one of them to ingest only that feature type.
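For example (a sketch; "tracks" stands in for a type name actually present in your file, and the store/index names are reused from your command):
# Only features whose type name is "tracks" would be ingested.
geowave --debug ingest localToGW -f geotools-vector --geotools-vector.type tracks test.geojson kustore kustore-spatial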
Lastly, I can say we're using this geotools datastore for the GeoJSON support (which is an "unsupported" GeoTools extension; it does work for the GeoJSON I've tested with, but mileage may vary). So if you're still having issues, it could be worthwhile to make sure it works with that data store on its own (one way is to include that library in GeoServer and see if you can add the file directly as a GeoServer layer; here's a gisexchange thread briefly discussing it).
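Roughly, that GeoServer check looks like this (the jar name and paths here are assumptions; use the GeoJSON datastore jar matching your GeoServer's GeoTools version):
# Drop the unsupported GeoTools GeoJSON datastore jar onto GeoServer's
# classpath, restart, then try creating a store/layer from test.geojson.
cp gt-geojson-<version>.jar $GEOSERVER_HOME/webapps/geoserver/WEB-INF/lib/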
In the end it may also just be worth quickly writing an ingest format plugin for GeoWave (similar to what's done for GDELT). Here is an example of writing a custom ingest format in GeoWave.
14 Oct 20:22:05 ERROR [client.AsyncRequestFutureImpl] - Cannot get replica 0 location for {"cacheBlocks":true,"totalColumns":1,"row":"zsfwm.83070","families":{"XgA":["ALL"]},"maxVersions":1,"timeRange":["0","9223372036854775807"]}
14 Oct 20:22:05 ERROR [client.AsyncRequestFutureImpl] - Cannot get replica 0 location for {"cacheBlocks":true,"totalColumns":1,"row":"zsfwm.18472","families":{"XgA":["ALL"]},"maxVersions":1,"timeRange":["0","9223372036854775807"]}
.......................
14 Oct 20:23:31 ERROR [dataidx.BatchDataIndexRetrievalIteratorHelper] - Error decoding row
java.util.concurrent.CompletionException: java.lang.NullPointerException
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:618)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.locationtech.geowave.core.store.base.dataidx.BatchIndexRetrievalImpl.lambda$flush$4(BatchIndexRetrievalImpl.java:153)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1596)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
geowave store add -t hbase --coprocessorJar hdfs://master:9000/hbase/lib/geowave-hbase.jar -z master:2181,slave1:2181,slave2:2181 --gwNamespace geowave datahub-geowave
geowave index add -t spatial -np 4 -PS ROUND_ROBIN datahub-geowave datahub-geowave-index
geowave ingest localToGW -f geotools-vector /opt/data/zsfwm2.shp datahub-geowave datahub-geowave-index
maybe not
Oh... it doesn't work when I use secondary indexing, but I don't understand why.
geowave store add -t hbase --coprocessorJar hdfs://master:9000/hbase/lib/geowave-hbase.jar --enableSecondaryIndexing -z master:2181,slave1:2181,slave2:2181 --gwNamespace geowave datahub-geowave
It works when I don't use secondary indexing.
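One guess worth checking (not a confirmed diagnosis): the failing path in your trace goes through GeoWave's data-index retrieval, which secondary indexing depends on, so a mismatch between your client jars and the coprocessor jar could cause this. It's worth verifying that the jar your store points at actually exists in HDFS and matches your GeoWave client version:
# Path taken from the store add commands above.
hdfs dfs -ls hdfs://master:9000/hbase/lib/geowave-hbase.jar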