rfecher
@rfecher
are you doing a range query with id? seems to me the answer is likely no?
John Meehan
@n0rb3rt
no
rfecher
@rfecher
if you already have the id, are space-time constraints necessary?
John Meehan
@n0rb3rt
id isn't unique
it represents a group of input data tied to an external system
rfecher
@rfecher
ahh, got it
so our index is composable and to me, I'd structure it as <layerName> + <id> + <SFC>
we have a partition key and a sort key that are completely logically separated throughout the code. The SFC is meant to be well-sorted and to support range scans; the other components don't need to be well-sorted and can form part of the partition key. (We call it a partition key, which maps to the partition key in Cassandra and the hash key in DynamoDB; in HBase and Accumulo it just becomes a row ID prefix that leverages the concept of "pre-splitting".)
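To make the partition/sort split concrete, here is a minimal toy sketch (illustrative Python, not GeoWave's actual Java implementation; all names are made up): the partition component need not sort meaningfully, while the SFC-derived sort component keeps nearby points adjacent so range scans stay efficient.

```python
def interleave_bits(x: int, y: int, bits: int = 16) -> int:
    """Z-order (Morton) code: a simple space-filling curve over 2D."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # x bits go to even positions
        z |= ((y >> i) & 1) << (2 * i + 1)  # y bits go to odd positions
    return z

def row_key(layer_name: str, group_id: str, x: int, y: int) -> bytes:
    # Partition component: layer name + id need not sort meaningfully.
    partition = f"{layer_name}|{group_id}".encode()
    # Sort component: SFC value, big-endian so byte order matches numeric order.
    sort = interleave_bits(x, y).to_bytes(4, "big")
    return partition + b"|" + sort

# Within one partition, nearby points get nearby sort keys,
# so a single range scan over the SFC component covers them.
k1 = row_key("roads", "batch-7", 10, 10)
k2 = row_key("roads", "batch-7", 11, 10)
k3 = row_key("roads", "batch-7", 1000, 1000)
assert k1 < k2 < k3
```

The same layout maps naturally onto Cassandra's partition/clustering keys or an HBase row-key prefix, which is the point being made above.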
John Meehan
@n0rb3rt
Ok sounds like I should roll up my sleeves and try it. Thanks for the tips!
rfecher
@rfecher
ha, yeah, let me know if you have questions along the way. Out of the box we provide spatial and spatiotemporal indices through SPI. When I have provided custom indexing, I end up also extending FeatureDataAdapter to make sure the attributes within the feature get mapped correctly; here's one way I've done it for NYC taxi data, and I've done other approaches as well that aren't as shareable. But mapping fields like layer name and ID to the index is where, at the moment, some custom code in our data adapter is required
so I always end up making some one-off hard-coded "adapter" that does the mapping, and it's never very hard, but while you're in there I'd love feedback if anything dawns on you for a decent, more generic approach that wouldn't require that custom code to do the mappings
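The kind of hard-coded field-to-index mapping being described can be sketched generically; this is an illustrative Python toy (GeoWave's FeatureDataAdapter is Java and does considerably more), and all names here are hypothetical:

```python
def map_feature_to_index(feature: dict, mapping: dict) -> dict:
    """Project a feature's attributes onto index dimensions.

    mapping: index dimension name -> feature attribute name.
    A one-off adapter hard-codes exactly this kind of table.
    """
    return {dim: feature[attr] for dim, attr in mapping.items()}

# Hypothetical taxi-trip feature and the mapping an adapter would encode.
taxi_trip = {"pickup_lon": -73.98, "pickup_lat": 40.75, "medallion": "ABC123"}
mapping = {"x": "pickup_lon", "y": "pickup_lat", "id": "medallion"}

assert map_feature_to_index(taxi_trip, mapping) == {
    "x": -73.98, "y": 40.75, "id": "ABC123"}
```

A more generic approach would presumably let this mapping table be supplied as configuration rather than code, which is the feedback being asked for.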
Brad Hards
@bradh
The user guide (https://locationtech.github.io/geowave/userguide.html) says: "A query in GeoWave currently consists of a set of ranges on the dimensions of the primary index. Up to three dimensions, plus temporal optionally, can take advantage of any complex OGC geometry for the query window. For dimensions of four or greater the query can only be a set of ranges on each dimension, e.g., hyper-rectangle, etc."
Does that mean I can do a query for an n-dimensional bounding box in any dimensionality, but can't do an efficient query for (say) a buffered linestring?
Thinking about storing RF emitter information, where the dimensions might be 3D+T plus some kind of transmit frequency (or maybe a frequency range + instantaneous bandwidth).
rfecher
@rfecher
hi @bradh - yes, I think you have it right. For the full scope of geometry types and relationships (the DE-9IM model) we heavily leverage JTS, but for arbitrary dimensionality we generally support basic range constraints (essentially hyper-rectangle intersection). With some custom code an index can be composed of any dimensional definitions you define. The core of the project treats indexing as purely generic and multi-dimensional, and the default indices for spatial and spatial-temporal data are added on top via SPI here in the geowave-geotime module. But the idea is that special-purpose dimensional definitions can be added in the same way that these are.
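The "basic range constraints" idea can be illustrated with a toy sketch (Python, purely illustrative; GeoWave's actual query machinery differs): an n-dimensional query is just a per-dimension interval intersection test, which works the same for the 3D+T+frequency case mentioned above.

```python
def ranges_intersect(query, item):
    """Each argument is a list of (lo, hi) intervals, one per dimension.

    Two hyper-rectangles intersect iff their intervals overlap
    in every dimension.
    """
    return all(qlo <= ihi and ilo <= qhi
               for (qlo, qhi), (ilo, ihi) in zip(query, item))

# Hypothetical 4D example: x, y, altitude, transmit frequency (Hz)
query = [(0, 10), (0, 10), (100, 200), (2.4e9, 2.5e9)]
inside = [(5, 6), (1, 2), (150, 160), (2.45e9, 2.45e9)]
outside = [(5, 6), (1, 2), (150, 160), (5.8e9, 5.8e9)]  # wrong band

assert ranges_intersect(query, inside)
assert not ranges_intersect(query, outside)
```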
HuiWang
@scially

When using another CRS (CGCS2000), the table name in Accumulo is inconsistent with the name of the table queried by GeoServer.

Also, does GeoWave support Gauss projection plane coordinate systems, such as "CGCS2000 / 3-degree Gauss-Kruger CM 114E" (EPSG:4547)?

rfecher
@rfecher
that is strange that it would look for SPATIAL_IDX, and my initial guess is it may be because of some mishap on that datastore at some point (maybe a failed ingest to the default index?). Try geowave remote listindex <datastore>; if SPATIAL_IDX prints to the console, then there must have been an ingest attempt at some point to that index. On a new ingest, the indexing scheme and data type are serialized to the GEOWAVE_METADATA table, and GeoServer is simply referencing that info. So my guess is it finds it in the metadata table and is looking for the index. If it is listed in your datastore now, try a new gwNamespace and, on a clean ingest, try geowave remote listindex <store> to see if it exists
re: EPSG:4547, because it has infinite bounds, I think you'll have to wait for 0.9.8 and it'll work. 0.9.7 works for CRSs with strict bounds, but 0.9.8 will work for infinite-bound CRSs as well.
HuiWang
@scially
ok, thank you, I'll try it
Josée-Anne Langlois
@jalanglois1_twitter
Hello, I'm new to GeoWave. I want to test raster ingestion into an Accumulo data store. I have Accumulo installed on a standalone Ubuntu VM. I built the accumulo-container-singlejar package from source and put the jar file in the $ACCUMULO_HOME/lib directory. But when I do this, I can't restart Accumulo. I get the following error:
lanj2410@ubuntu2:/opt$ /opt/accumulo-1.7.4/bin/start-all.sh
Starting monitor on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting tablet servers .... done
Starting tablet server on localhost
WARN : Max open files on localhost is 1024, recommend 32768
2018-08-01 16:03:58,395 [start.Main] ERROR: Uncaught exception
java.util.ServiceConfigurationError: org.apache.accumulo.start.spi.KeywordExecutable: Provider org.apache.accumulo.server.conf.ConfigSanityCheck not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.apache.accumulo.start.Main.checkDuplicates(Main.java:223)
at org.apache.accumulo.start.Main.getExecutables(Main.java:215)
at org.apache.accumulo.start.Main.main(Main.java:78)
Starting master on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting garbage collector on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting tracer on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Can you help me with that?
rfecher
@rfecher
It looks like a bit of an accumulo configuration issue but I'm not certain as I've never seen that before.
Try following these instructions: https://locationtech.github.io/geowave/userguide.html#accumulo-config as that's the basics I always follow.
mawhitby
@mawhitby
@jalanglois1_twitter Sorry if I'm misunderstanding, but are you talking about building the GeoWave Accumulo jar?
If so, that jar would need to be put in the accumulo/lib directory on HDFS, not in the local Accumulo lib directory.
Josée-Anne Langlois
@jalanglois1_twitter
Thanks @mawhitby ! I was putting it in the local accumulo lib directory. Now that the file is on HDFS I can start Accumulo.
mawhitby
@mawhitby
Awesome! Happy to help.
HuiWang
@scially
When I ingest a shapefile and add the layer to GeoServer, some fields (Chinese) in OpenLayers are garbled; the shapefile charset is GBK
rfecher
@rfecher

we have to choose a charset to serialize/deserialize Java Strings within the underlying key/value store. The code change you have, as I understand it, is relevant to GeoTools' parsing of a shapefile (reading the DBF). However, when we serialize it we still use our default charset, which we get here.

In order to facilitate configurable charsets, I went ahead and pulled both the GeoTools charset and the GeoWave serialization charset from a Java property, geowave.charset, so I think you should be good once PR #1388 makes it to master. If you are using our command-line tools you can set the environment variable GEOWAVE_TOOL_JAVA_OPT="-Dgeowave.charset=GBK" to set Java properties.
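As a quick standalone illustration of why the serialization charset matters (Python toy, not GeoWave code; the specific "wrong" charset below is just an example): bytes written under one charset and read back under another come out garbled, which is exactly the symptom seen in OpenLayers.

```python
# A Chinese field value, as it might appear in a GBK-encoded shapefile.
text = "道路"
raw = text.encode("gbk")  # bytes as stored

# Decoding with a mismatched charset garbles the value...
garbled = raw.decode("iso-8859-1")
assert garbled != text

# ...while decoding with the matching charset round-trips cleanly.
assert raw.decode("gbk") == text
```

This is why the same geowave.charset property has to govern both the shapefile read and the key/value-store serialization.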

HuiWang
@scially
thanks..
rfecher
@rfecher

We are pleased to announce the release of GeoWave v0.9.8!

There is a lot packed into this release in preparation for 1.0.0, which is planned for the next development iteration. Version 0.9.8 introduces performance optimizations and full support for Apache Cassandra and Amazon DynamoDB.

Some of the significant developments include:

Major New Features

  • Full feature support for Apache Cassandra and Amazon DynamoDB, with EMR bootstrap scripts for GeoWave with Apache Cassandra so you can follow along with all the existing Quickstart Guides
  • Indexing over Configurable Coordinate Reference Systems for vector and raster data

API Improvements

  • gRPC service support for all GeoWave operations as well as bulk ingest and query services, and SPI available for providing additional services at runtime
  • New packaging for gRPC services to include RPMs and Puppet Modules
  • New GeoWave PySpark libraries for direct Python integration with GeoWave

Analytic Improvements

  • EMR bootstrap scripts for a JupyterHub deployment fully integrated with GeoWave
  • Significant advancements have been made on GeoWave’s distributed indexed spatial join with example notebooks available

Versioning Updates

  • Many version updates to include HBase 1.4.6, Accumulo 1.9.2, Spark 2.3.1, Hadoop 2.8.4, GeoServer 2.13.2, GeoTools 19.2, and more
  • Maintaining the same backwards compatibility with older versions of these major components as GeoWave v0.9.7

...and many many more, see the change log for details.

HuiWang
@scially
I recommend commenting out this code because Maven always downloads meta-data.xml and then the download fails.
Very excited for the release of 0.9.8!
rfecher
@rfecher
good point re: xuggle, that was for an old dependency. Removed in PR #1417
HuiWang
@scially
Wow, thank you
rfecher
@rfecher
In preparation for the GeoWave 1.0.0 release, the GeoWave project has renamed all of the Java packages from "mil.nga.giat.geowave" to "org.locationtech.geowave"
also, the Maven group ID has been renamed from "mil.nga.giat" to "org.locationtech.geowave"
kullaibigdata
@kullaibigdata

Hi all
We are facing the below error and are very new to this; please suggest how to resolve it.
geowave gs addds -ds geowave_eea -ws geowave eea-store
25 Sep 18:23:21 WARN [cli.GeoWaveMain] - Unable to execute operation
com.beust.jcommander.ParameterException: Cannot find store name: eea-store

thanks,
kullai.

rfecher
@rfecher
did you ingest any data into eea-store?
eea-store is meant to be a named configuration to connect to a backend key/value store, and typically you'd have data in it before trying to add it to GeoServer
geowave config addstore -t hbase -z <zookeeper host:port> eea-store for example would configure an hbase connection with the given zookeeper and name it eea-store so it can be referenced in subsequent commands
kullaibigdata
@kullaibigdata

Hi

I did this 5 steps

geowave config addindex -t spatial eea-spindex --partitionStrategy ROUND_ROBIN (completed)

geowave config addindex -t spatial_temporal eea-hrindex --partitionStrategy ROUND_ROBIN --period HOUR

geowave config addstore eea-store --gwNamespace geowave.eea -t hbase --zookeeper mapr1:5181

geowave ingest localtogw -f geotools-vector /root/AirBase_v7_stations.csv eea-store eea-spindex,eea-hrindex

geowave config geoserver -ws geowave -u admin -p geoserver http://localhost:8080/geoserver

rfecher
@rfecher
looks good, although you're giving a partition strategy for each index, which won't do anything without --numPartitions <number greater than 1> as well
I tend to always just add the layer to GeoServer, which also adds a datastore, so I just do something like geowave gs addlayer eea-store -a ALL
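The partition-strategy point above can be sketched with a toy example (illustrative Python, not GeoWave internals): with a single partition, a round-robin scheme assigns every row the same partition prefix, so it spreads nothing.

```python
def round_robin_prefix(record_index: int, num_partitions: int) -> bytes:
    """Toy round-robin partitioner: cycle records across partition prefixes."""
    return bytes([record_index % num_partitions])

# With one partition, every row gets the identical prefix -- a no-op.
assert {round_robin_prefix(i, 1) for i in range(100)} == {b"\x00"}

# With, say, 4 partitions, writes spread evenly across 4 distinct prefixes.
assert {round_robin_prefix(i, 4) for i in range(100)} == {
    b"\x00", b"\x01", b"\x02", b"\x03"}
```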