mawhitby
@mawhitby
@jalanglois1_twitter Sorry if I'm misunderstanding, but are you talking about building the geowave accumulo jar?
If so, that jar would need to be put in the accumulo/lib directory on HDFS, not in the local accumulo lib directory.
Josée-Anne Langlois
@jalanglois1_twitter
Thanks @mawhitby ! I was putting it in the local accumulo lib directory. Now that the file is on HDFS I can start Accumulo.
mawhitby
@mawhitby
Awesome! Happy to help.
HuiWang
@scially
when i ingest a shapefile and add the layer to geoserver, some fields (Chinese) in openlayers are garbled; the shapefile charset is GBK
rfecher
@rfecher

we have to choose a charset to serialize/deserialize java Strings within the underlying key/value store. The code change you have as I understand it is relevant to geotools parsing of a shapefile (reading DBF). However, when we serialize it we still use our default charset which we get here.

In order to facilitate configurable charsets, I went ahead and pulled both the geotools charset and the geowave serialization charset from a java property geowave.charset so I think you could be good once PR #1388 makes it to master. If you are using our commandline tools you can set the environment variable GEOWAVE_TOOL_JAVA_OPT="-Dgeowave.charset=GBK" to set java properties.
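As a rough sketch of the workflow described above (the `geowave.charset` property and `GEOWAVE_TOOL_JAVA_OPT` variable are as stated; the shapefile path, store, and index names are hypothetical):

```shell
# Pass -Dgeowave.charset=GBK to the CLI's JVM so that both the geotools
# DBF parsing and GeoWave's own String serialization use the GBK charset.
export GEOWAVE_TOOL_JAVA_OPT="-Dgeowave.charset=GBK"

# Any subsequent CLI command now runs with that java property set, e.g.:
geowave ingest localtogw -f geotools-vector /data/stations.shp my-store my-spatial-index
```

Note this only takes effect once PR #1388 is on master, as mentioned above.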

HuiWang
@scially
thanks..
rfecher
@rfecher

We are pleased to announce the release of GeoWave v0.9.8!

There is a lot packed into this release in preparation for 1.0.0 within the next development iteration, including performance optimizations and full support for Apache Cassandra and Amazon DynamoDB.

Some of the significant developments include:

Major New Features

  • Full Feature Support for Apache Cassandra and Amazon DynamoDB with EMR Bootstrap scripts for GeoWave with Apache Cassandra to follow along with all the existing Quickstart Guides
  • Indexing over Configurable Coordinate Reference Systems for vector and raster data

API Improvements

  • gRPC service support for all GeoWave operations as well as bulk ingest and query services, and SPI available for providing additional services at runtime
  • New packaging for gRPC services to include RPMs and Puppet Modules
  • New GeoWave PySpark libraries for direct Python integration with GeoWave

Analytic Improvements

  • EMR bootstrap scripts for a JupyterHub deployment fully integrated with GeoWave
  • Significant advancements have been made on GeoWave’s distributed indexed spatial join with example notebooks available

Versioning Updates

  • Many version updates to include HBase 1.4.6, Accumulo 1.9.2, Spark 2.3.1, Hadoop 2.8.4, GeoServer 2.13.2, GeoTools 19.2, and more
  • Maintaining the same backwards compatibility with older versions of these major components as GeoWave v0.9.7

...and many, many more; see the change log for details.

HuiWang
@scially
TIM截图20180915113220.png
I recommend commenting out this code, because Maven always downloads meta-data.xml and then the download fails.
Very excited to release 0.9.8
rfecher
@rfecher
good point re: xuggle, that was for an old dependency. Removed in PR #1417
HuiWang
@scially
Wow, thank you
rfecher
@rfecher
In preparation for the GeoWave 1.0.0 release, the GeoWave project has renamed all of the Java packages from "mil.nga.giat.geowave" to "org.locationtech.geowave"
also the maven group ID is renamed from "mil.nga.giat" to "org.locationtech.geowave"
kullaibigdata
@kullaibigdata

@kullaibigdata
Hi all
we are facing the below error and are very new to this, please suggest
how to resolve this issue.
geowave gs addds -ds geowave_eea -ws geowave eea-store
25 Sep 18:23:21 WARN [cli.GeoWaveMain] - Unable to execute operation
com.beust.jcommander.ParameterException: Cannot find store name: eea-store

thanks,
kullai.

rfecher
@rfecher
did you ingest any data into eea-store?
eea-store is meant to be a named configuration to connect to a backend keyvalue store and typically you'd have data in it before trying to add it to geoserver
geowave config addstore -t hbase -z <zookeeper host:port> eea-store for example would configure an hbase connection with the given zookeeper and name it eea-store so it can be referenced in subsequent commands
kullaibigdata
@kullaibigdata

Hi

I did this 5 steps

geowave config addindex -t spatial eea-spindex --partitionStrategy ROUND_ROBIN (completed)

geowave config addindex -t spatial_temporal eea-hrindex --partitionStrategy ROUND_ROBIN --period HOUR

geowave config addstore eea-store --gwNamespace geowave.eea -t hbase --zookeeper mapr1:5181

geowave ingest localtogw -f geotools-vector /root/AirBase_v7_stations.csv eea-store eea-spindex,eea-hrindex

geowave config geoserver -ws geowave -u admin -p geoserver http://localhost:8080/geoserver

rfecher
@rfecher
looks good, although you're giving a partition strategy for each index, which won't do anything without --numPartitions <number greater than 1> as well
I tend to always just add the layer to geoserver which also adds a datastore, so I just do something like geowave gs addlayer eea-store -a ALL
kullaibigdata
@kullaibigdata

i didn't give any partition
just now I executed your command and it's working fine

Now I am getting different error that is

geowave config addstore -t hbase -z mapr1:5181 eea-store
[root@mapr1 ~]# geowave gs addds -ds geowave_eea -ws geowave eea-store
25 Sep 23:09:46 WARN [cli.GeoWaveMain] - Unable to execute operation
javax.ws.rs.ProcessingException: java.net.SocketException: Unexpected end of file from server

finally I got the error

[cli.GeoWaveMain] - Unable to execute operation
javax.ws.rs.ProcessingException: java.net.SocketException: Unexpected end of file from server

this one
rfecher
@rfecher
the other issue I see is that geotools-vector just delegates reading the file (or database) to a geotools-supported source - and geotools support for CSV is an unsupported module
it's not included, although you can include any additional modules in our plugins directory
but without special modules added, the ingest would result in no data
also you can simply write your own geowave format plugin to handle whatever data you are interested in
kullaibigdata
@kullaibigdata
yes
I am interested
rfecher
@rfecher
we have several examples from community file formats that are just CSV, such as Microsoft Research's GeoLife trajectory dataset or google's events in GDELT
Hackfred
@Hackfred
Hello, I just discovered GeoWave. The documentation says that it supports HBase as a datastore. If I set up HBase on EMR I can choose between HDFS and S3. As far as I know, the GeoWave documentation says that both are possible. So I set it up on S3. But now I am wondering how to ingest the data into HBase on S3. If I use "LocalToGW", the data is ingested into the nodes, but not into S3. Can anybody see my mistake?
rfecher
@rfecher
the data will go into HBase, and if you set up EMR to use S3 that means hbase.rootdir is configured to the S3 bucket and key prefix that you provide EMR. So the HBase tables will be in S3, and you will be able to terminate the cluster and start another one back up with the data fully intact, as long as you use that same S3 location
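For illustration, this is roughly what that EMR-provisioned setting looks like in hbase-site.xml (the bucket name and key prefix here are hypothetical placeholders):

```xml
<!-- hbase-site.xml: EMR sets this when you choose S3 storage for HBase -->
<property>
  <name>hbase.rootdir</name>
  <value>s3://my-geowave-bucket/hbase-root/</value>
</property>
```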
HuiWang
@scially
I configured many geoservers behind nginx as a reverse proxy; how do I addlayer to this geoserver? is it related to Lock Management?
image
kullaibigdata
@kullaibigdata
Hi, please try the below one
the layer name is your store name
geowave gs addlayer layername
did you configure geowave with geoserver?
HuiWang
@scially

but i have many geoserver, example

http://10.66.150.1:8080/geoserver
http://10.66.150.2:8080/geoserver
...
http://10.66.150.10:8080/geoserver

and i used a reverse proxy: http://10.66.150.1:8900/geoserver

do i need to reconfigure the geowave.properties each time i execute the command gs addlayer name?
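One way to script that, as a hedged sketch (the `geowave config geoserver` and `gs addlayer` commands appear earlier in this conversation; the credentials, workspace, and store name are hypothetical): repoint the CLI's geoserver configuration before each addlayer, once per backend instance.

```shell
# For each backend geoserver behind the proxy, repoint the GeoWave CLI's
# geoserver config, then add the layer to that instance.
for host in 10.66.150.1 10.66.150.2 10.66.150.10; do
  geowave config geoserver -ws geowave -u admin -p geoserver "http://${host}:8080/geoserver"
  geowave gs addlayer eea-store -a ALL
done
```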
kullaibigdata
@kullaibigdata
which one do you need to add the layer to
HuiWang
@scially
e... all
kullaibigdata
@kullaibigdata
ok, did you create the store and namespace?
HuiWang
@scially
ok...thanks... and i have another problem
image.png
Can you explain how to set this parameter?
kullaibigdata
@kullaibigdata
sure
HuiWang
@scially
image.png