Bringing the scalability of distributed computing to modern geospatial software.
SPATIAL_IDX
and my initial guess is it may be due to some mishap on that datastore at some point (maybe a failed ingest to the default index?) ... try geowave remote listindex <datastore>
and if SPATIAL_IDX prints to the console, then there must have been an ingest attempt to that index at some point. On a new ingest, the indexing scheme and data type are serialized to the GEOWAVE_METADATA table, and GeoServer is simply referencing that info. So my guess is it finds it in the metadata table and is looking for the index. If it is listed in your datastore now, try a new gwNamespace
and on a clean ingest, try geowave remote listindex <store>
to see if it exists
we have to choose a charset to serialize/deserialize Java Strings within the underlying key/value store. The code change you have, as I understand it, is relevant to GeoTools parsing of a shapefile (reading the DBF). However, when we serialize, we still use our default charset, which we get here.
In order to support configurable charsets, I went ahead and pulled both the GeoTools charset and the GeoWave serialization charset from a Java property, geowave.charset
so I think you should be good once PR #1388 makes it to master. If you are using our command-line tools, you can set the environment variable GEOWAVE_TOOL_JAVA_OPT="-Dgeowave.charset=GBK"
to set Java properties.
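As a rough sketch of the idea described above: a serialization charset resolved from a `geowave.charset` system property, with a fallback default. The helper name and the UTF-8 fallback are assumptions for illustration, not GeoWave's actual implementation.

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetConfig {
    // Hypothetical helper mirroring the geowave.charset property idea;
    // the UTF-8 fallback is an assumption, not necessarily GeoWave's default.
    static Charset serializationCharset() {
        return Charset.forName(
            System.getProperty("geowave.charset", StandardCharsets.UTF_8.name()));
    }

    public static void main(String[] args) {
        // Simulates launching with -Dgeowave.charset=GBK
        System.setProperty("geowave.charset", "GBK");
        Charset cs = serializationCharset();
        String s = "\u4F8B\u5B50"; // a GBK-encodable string
        // Round-trip through the configured charset, as a key/value store would
        String roundTrip = new String(s.getBytes(cs), cs);
        System.out.println(cs.name() + " roundTrip=" + roundTrip.equals(s));
    }
}
```

The point is that both the GeoTools read path and the serialization path should resolve the same property, so setting it once (e.g. via GEOWAVE_TOOL_JAVA_OPT) keeps the two in agreement.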
We are pleased to announce the release of GeoWave v0.9.8!
There is a lot packed into this release in preparation for v1.0.0 in the next development iteration: version 0.9.8 introduces performance optimizations and full support for Apache Cassandra and Amazon DynamoDB.
Some of the significant developments include:
Major New Features
API Improvements
Analytic Improvements
Versioning Updates
...and many, many more; see the change log for details.
@kullaibigdata
Hi all,
we are facing the error below and are very new to this, so please advise.
How do we resolve this issue?
geowave gs addds -ds geowave_eea -ws geowave eea-store
25 Sep 18:23:21 WARN [cli.GeoWaveMain] - Unable to execute operation
com.beust.jcommander.ParameterException: Cannot find store name: eea-store
thanks,
kullai.
eea-store
is meant to be a named configuration for connecting to a backend key/value store, and typically you'd have data in it before trying to add it to GeoServer
geowave config addstore -t hbase -z <zookeeper host:port> eea-store
for example, would configure an HBase connection with the given Zookeeper and name it eea-store
so it can be referenced in subsequent commands
Hi
I did these steps:
geowave config addindex -t spatial eea-spindex --partitionStrategy ROUND_ROBIN (completed)
geowave config addindex -t spatial_temporal eea-hrindex --partitionStrategy ROUND_ROBIN --period HOUR
geowave config addstore eea-store --gwNamespace geowave.eea -t hbase --zookeeper mapr1:5181
geowave ingest localtogw -f geotools-vector /root/AirBase_v7_stations.csv eea-store eea-spindex,eea-hrindex
geowave config geoserver -ws geowave -u admin -p geoserver http://localhost:8080/geoserver
geowave gs addlayer eea-store -a ALL
I didn't give any partition
Just now I executed your command and it's working fine
Now I am getting a different error:
geowave config addstore -t hbase -z mapr1:5181 eea-store
[root@mapr1 ~]# geowave gs addds -ds geowave_eea -ws geowave eea-store
25 Sep 23:09:46 WARN [cli.GeoWaveMain] - Unable to execute operation
javax.ws.rs.ProcessingException: java.net.SocketException: Unexpected end of file from server
Finally I got the error:
[cli.GeoWaveMain] - Unable to execute operation
javax.ws.rs.ProcessingException: java.net.SocketException: Unexpected end of file from server
geotools-vector
just delegates reading the file (or database) to a GeoTools-supported source, and GeoTools support for CSV is an unsupported module
plugins directory
hbase.rootdir
is configured to the S3 bucket and key prefix that you provide to EMR. So the HBase tables will be in S3, and you will be able to terminate the cluster and start another one back up with the data fully intact, as long as you use that same S3 location
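For illustration, an S3-backed root directory like the one described above would look roughly like this in hbase-site.xml (the bucket name and key prefix are placeholders; on EMR this is typically supplied through the cluster's configuration rather than edited by hand):

```xml
<!-- hbase-site.xml: point HBase's root directory at S3
     (my-bucket/hbase-root is a placeholder bucket/prefix) -->
<property>
  <name>hbase.rootdir</name>
  <value>s3://my-bucket/hbase-root</value>
</property>
```

Because the table data lives under that S3 location rather than on cluster-local storage, a new cluster configured with the same value picks up the existing tables.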