Bringing the scalability of distributed computing to modern geospatial software.
▶ aws s3 ls s3://geowave-rpms/release-jars/JAR/geowave-tools-1.0.0
2019-06-28 11:26:42 355540754 geowave-tools-1.0.0-RC1-apache-accumulo1.7.jar
2019-06-28 11:26:42 355829795 geowave-tools-1.0.0-RC1-apache.jar
2019-06-28 11:26:50 408998836 geowave-tools-1.0.0-RC1-cdh5.jar
2019-06-28 11:26:53 356281268 geowave-tools-1.0.0-RC1-hdp2.jar
2019-09-06 15:54:10 365597604 geowave-tools-1.0.0-hdp2.jar
RE CLI for ingest: is there a setting for configuring the namespace separator character? My store configuration:
geowave store add -t accumulo -u userxxx -i gwinstance -p passxxx --gwNamespace geowave --zookeeper zk-accumulo:2181 geolife_store
When geowave subsequently attempts to create the metadata table, it uses the underscore (_) separator instead of the expected "dot" (.) separator between the namespace and the table name. I.e., it attempts to create "geowave_GEOWAVE_METADATA" instead of the expected "geowave.GEOWAVE_METADATA". This is failing because my user only has permission to create tables in the "geowave" namespace.
I'm using the new 1.0.0 release of geowave.
geowave store add ...
The primary required parameter is "storename", which is just an arbitrary name you give to that connection configuration so that you can reference it in any subsequent command without needing all the other options. For GeoWaveRDDLoader you need "DataStorePluginOptions", which can be instantiated with any data store's required options, so in your case you can use new DataStorePluginOptions(<HBaseRequiredOptions>) to get that. I'm not sure where you're seeing "storename" come from in GeoWaveRDDLoader, but hopefully that clarifies it.
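For what it's worth, here's a minimal sketch of that wiring (the package names and the GeoWaveRDDLoader.loadRDD signature are my assumptions based on the 1.0.0 layout, so double-check them against your build):

import org.apache.spark.SparkContext;
import org.locationtech.geowave.analytic.spark.GeoWaveRDD;
import org.locationtech.geowave.analytic.spark.GeoWaveRDDLoader;
import org.locationtech.geowave.analytic.spark.RDDOptions;
import org.locationtech.geowave.core.store.cli.remote.options.DataStorePluginOptions;
import org.locationtech.geowave.datastore.hbase.config.HBaseOptions;
import org.locationtech.geowave.datastore.hbase.config.HBaseRequiredOptions;

public class GeoWaveRDDExample {
  public static GeoWaveRDD loadFromHBase(final SparkContext sc) throws Exception {
    // The same connection details you'd pass to "geowave store add";
    // the CLI "storename" is not needed when building the options directly.
    final HBaseRequiredOptions hbaseOptions =
        new HBaseRequiredOptions("zk-host:2181", "geowave", new HBaseOptions());

    // Wrap the required options so the Spark utilities can create the data store.
    final DataStorePluginOptions storeOptions = new DataStorePluginOptions(hbaseOptions);

    // Load the store contents as a GeoWaveRDD; constrain with RDDOptions.setQuery() if needed.
    final RDDOptions rddOptions = new RDDOptions();
    return GeoWaveRDDLoader.loadRDD(sc, storeOptions, rddOptions);
  }
}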
<guava.version>12.0.1</guava.version>
after this line and rebuild that geowave-hbase jar
this.storeOptions = new HBaseRequiredOptions(zkAddress, geowaveNamespace, extraOpts);
and the HbaseOptions class doesn't seem to have anywhere to specify that in its methods
RDDOptions.setQuery() would allow you to choose an index (and look at the DataStore API and examples for how to write data to an index or indices of your choice)
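Roughly something like this, as a sketch assuming the 1.0.0 QueryBuilder API (the index and type names below are just placeholders; verify setQuery's parameter type against your version):

import org.locationtech.geowave.analytic.spark.RDDOptions;
import org.locationtech.geowave.core.store.api.Query;
import org.locationtech.geowave.core.store.api.QueryBuilder;

// Build a query that pins the load to a specific index (and optionally a type)
// instead of letting the loader fall back to the first index it resolves.
final Query<?> query = QueryBuilder.newBuilder()
    .indexName("detectionIndex")  // the index named when the data was written
    .addTypeName("detection")     // hypothetical type name, for illustration
    .build();

final RDDOptions rddOptions = new RDDOptions();
rddOptions.setQuery(query);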
Okay, so I thought this was working, but it looks like it can never find the index name that I specify:
2019-10-03 04:27:11 WARN AbstractGeoWavePersistence:232 - Object 'detectionIndex' not found
even though it's listed if I go into the hbase shell and list tables. It always defaults to our entityActivityIndex.
So when we attempt to run a Spark job intentionally reading entityActivity, it works as intended, but not if we're trying to read another table.
We got an "adapters" error before and don't know where we can find a list of them for our store.
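If it helps, the CLI can list them with geowave store listtypes <storename>, and programmatically something like this should work (a sketch assuming the 1.0.0 DataStore API; getTypes()/getTypeName() are my best guess at the method names):

import org.locationtech.geowave.core.store.api.DataStore;
import org.locationtech.geowave.core.store.api.DataTypeAdapter;
import org.locationtech.geowave.core.store.cli.remote.options.DataStorePluginOptions;

// Print the type (adapter) names registered in the store; these are the names
// you'd reference in queries. storeOptions is the DataStorePluginOptions from above.
final DataStore dataStore = storeOptions.createDataStore();
for (final DataTypeAdapter<?> adapter : dataStore.getTypes()) {
  System.out.println(adapter.getTypeName());
}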
geowave gs layer add ${GEOWAVE_NAMESPACE}-store -ws cite
geowave gs ds add ${GEOWAVE_NAMESPACE}-store -ws cite
Hi, I'm a newbie to GeoWave, and the command 'geowave store listtypes <storename>' gives me the following output with no types:
"
05 Nov 19:40:39 WARN [core.NettyUtil] - Found Netty's native epoll transport, but not running on linux-based operating system. Using NIO instead.
05 Nov 19:40:40 WARN [core.Cluster] - You listed localhost/0:0:0:0:0:0:0:1:9042 in your contact points, but it wasn't found in the control host's system.peers at startup
Available types:
"
The following are the steps I followed to ingest data:
1. geowave store add teststore3 -t cassandra --contactPoints localhost --gwNamespace test3
2. geowave index add -t spatial teststore3 testindex3
3. geowave ingest localtogw sample.csv teststore3 testindex3 -f geotools-vector
Here sample.csv contains columns lat, long. I can see a keyspace 'test3' created in Cassandra with one table named 'index_geowave_metadata'. But when I run DBScan with the below command
'geowave analytic dbscan -cmi 5 -cms 10 -emn 2 -emx 6 -pmd 1000 -orc 4 -hdfs localhost:9870 -jobtracker localhost:8088 -hdfsbase /test_dir teststore3 --query.typeNames '
It gives me an error saying
'Expected a value after parameter --query.typeNames'
What should I do now? Can anyone tell me where I am going wrong?