rfecher
@rfecher
the official 1.0.0 will be out by the end of the week so hang tight and grab that - but the commands have changed (and they're here to stay, guaranteed through 1.x)
rfecher
@rfecher
for example it's geowave gs ds list or geowave gs datastore list instead of geowave gs listds in your version
rfecher
@rfecher
GeoWave 1.0.0 is officially released! See the announcement on the geowave-dev mailing list for more details.
Grigory
@pomadchin
@rfecher :tada: :tada: :tada:
Davis Silverman
@sinistersnare
Will there be a 1.0.0-apache.jar release on http://locationtech.github.io/geowave/packages.html ? I don't see anything right now.
Also congrats on 1.0! Great work Rich and co.
rfecher
@rfecher
The listing on that page is limited to a certain number of entries, but it is there. The JavaScript does an S3 bucket listing and is just missing a bunch of the jars that are there
Davis Silverman
@sinistersnare
I don't see the release for geowave-tools for 1.0.0
▶ aws s3 ls s3://geowave-rpms/release-jars/JAR/geowave-tools-1.0.0
2019-06-28 11:26:42  355540754 geowave-tools-1.0.0-RC1-apache-accumulo1.7.jar
2019-06-28 11:26:42  355829795 geowave-tools-1.0.0-RC1-apache.jar
2019-06-28 11:26:50  408998836 geowave-tools-1.0.0-RC1-cdh5.jar
2019-06-28 11:26:53  356281268 geowave-tools-1.0.0-RC1-hdp2.jar
2019-09-06 15:54:10  365597604 geowave-tools-1.0.0-hdp2.jar
jhickman-prominent
@jhickman-prominent

RE CLI for ingest: is there a setting for configuring the namespace separator character? My store configuration:

geowave store add -t accumulo -u userxxx -i gwinstance -p passxxx --gwNamespace geowave --zookeeper zk-accumulo:2181 geolife_store

When geowave subsequently attempts to create the metadata table, it uses the underscore (_) separator instead of the expected "dot" (.) separator between the namespace and the table name. I.e., it attempts to create "geowave_GEOWAVE_METADATA" instead of the expected "geowave.GEOWAVE_METADATA". This is failing because my user only has permission to create tables in the "geowave" namespace.

I'm using the new 1.0.0 release of geowave.

jhickman-prominent
@jhickman-prominent
image.png
I think this ^ might be a bug. In Line_110, the code truly is using the underscore (_) character rather than what I believe should be the "dot" (.) character as a namespace separator. I will dig a bit further on this.
rfecher
@rfecher
@jhickman-prominent the geowave namespace is a table prefix and underscores are used for suffixes. An Accumulo namespace uses the '.' separator by convention. So if you want to use "geowave" as your Accumulo namespace, your geowave namespace should include a '.'; otherwise it will use the default Accumulo namespace. For example, with your store above, if you used "geowave.geolife" then all your tables would have that prefix and the Accumulo namespace would be "geowave" as you are expecting.
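For illustration, a minimal sketch of the corrected store configuration, reusing the connection options already shown above (only --gwNamespace changes):

geowave store add -t accumulo -u userxxx -i gwinstance -p passxxx --gwNamespace geowave.geolife --zookeeper zk-accumulo:2181 geolife_store

With that prefix, the metadata table should then be created as something like "geowave.geolife_GEOWAVE_METADATA", which lands inside the "geowave" Accumulo namespace.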
jhickman-prominent
@jhickman-prominent
@rfecher , I made the modifications you described and the artifacts were created correctly. Thanks!
rfecher
@rfecher
np, glad to help
gibranparvez
@gibranparvez
Hi, I was wondering where I could find out the storename of my geowave-hbase instance
I'm trying to write a Spark source using the GeoWaveRDDLoader and it seems to require a storename, but I'm not sure how I can figure that out. I can't perform any store operations in the geowave command line either since I don't know the storename
if I log into my HBase I can see the metadata, but there's no key in there with that
Haocheng Wang
@HaochengNn
image.png
I'm running HBase 1.2.1 in pseudo-distributed mode with Hadoop 2.7.7 on my computer, and everything goes well until I place "geowave-deploy-1.1.0-SNAPSHOT-hbase.jar" into hbase/lib: the HRegionServer quits automatically after I start HBase, and then the HMaster quits. The region server log is shown above. Can anyone give me some idea of how to solve this problem?
rfecher
@rfecher
@gibranparvez "storename" is a commandline concept only ... when you run geowave store add ...the primary required parameter is "storename" which is just an arbitrary name you give to that connection configuration so that you can reference it in any subsequent command without needing all the other options. For GeoWaveRDDLoader you need "DataStorePluginOptions" which can be instantiated with any of the data store's required options. So in your case you can use new DataStorePluginOptions(<HBaseRequiredOptions>) to get that. I'm not sure where you're seeing "storename" come from in GeoWaveRDDLoader, but hopefully that clarifies it
rfecher
@rfecher
@HaochengNn the error message appears to be a mismatch between the version of guava in the geowave jar and the version of guava in HBase. It appears that you are building the geowave jar from source, so perhaps just try adding <guava.version>12.0.1</guava.version> after this line and rebuild that geowave-hbase jar
I think the difference between how you're deploying it and how we do may not afford the same classpath isolation - we add it as a coprocessor library and to the dynamic library path of HBase, but if in your installation you are putting it in the same directory as the core HBase libraries, then whichever guava version happens to come first on the classpath wins, which is trouble for HBase
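As a sketch only (the exact pom and line are the ones linked above, not reproduced here), the property override would sit in a Maven <properties> block of the GeoWave build before rebuilding the geowave-hbase jar:

<properties>
  <!-- pin guava to match the version bundled with HBase, per the suggestion above -->
  <guava.version>12.0.1</guava.version>
</properties>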
Haocheng Wang
@HaochengNn
@rfecher Thank you, It works now!
gibranparvez
@gibranparvez
@rfecher I see. I brought it up because the store loader throws an IOException saying "cannot find store name". But maybe I don't actually need the store loader to get the store options?
gibranparvez
@gibranparvez
2019-09-19 21:39:32 ERROR GeoWaveRDDLoader:91 - Must supply input store to load. Please set storeOptions and try again. Specifically this
rfecher
@rfecher
yep, here's the code and it seems you must be passing in null for the input DataStorePluginOptions ...you should be fine if you instead instantiate the plugin options using HBaseRequiredOptions in the constructor
gibranparvez
@gibranparvez
I see I see. Thanks!
gibranparvez
@gibranparvez
@rfecher I'm having a hard time finding an option in the data store plugin options class or the RDD options classes to specify the index name. Suggestions on this? Right now our storeOptions just consists of
this.storeOptions = new HBaseRequiredOptions(zkAddress, geowaveNamespace, extraOpts); and the HBaseOptions class doesn't seem to have anywhere to specify that in its methods
rfecher
@rfecher
@gibranparvez do you have multiple indices and you'd like the RDD to use one in particular? If so that's a query option, which can be specified through the QueryBuilder API (for example QueryBuilder.newBuilder().indexName("myindex").build() would query everything in "myindex").
a store can have multiple indices (and multiple data types) - it all depends on what you write to it (ingest)
so it wouldn't make sense to configure an index as part of a store, but an RDD is representative of a geowave query, so RDDOptions.setQuery() would allow you to choose an index (and look at the DataStore API and examples for how to write data to an index or indices of your choice)
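Putting those pieces together, a rough sketch of constraining the RDD to one index; the loadRDD signature and the GeoWaveRDD/RDDOptions class names are assumptions based on the discussion above, not confirmed here:

// imports omitted; RDDOptions, GeoWaveRDD, and GeoWaveRDDLoader come from the GeoWave Spark analytics module
RDDOptions rddOptions = new RDDOptions();
rddOptions.setQuery(QueryBuilder.newBuilder().indexName("myindex").build());
GeoWaveRDD rdd = GeoWaveRDDLoader.loadRDD(sparkContext, storeOptions, rddOptions);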
gibranparvez
@gibranparvez
Thanks for the help! The query method worked.
gibranparvez
@gibranparvez

Okay so I thought this was working, but it looks like it can never find the index name that I specify:

2019-10-03 04:27:11 WARN AbstractGeoWavePersistence:232 - Object 'detectionIndex' not found

even though it's listed if I go into the hbase shell and list tables. It always defaults to our entityActivityIndex, so when we attempt to run a Spark job intentionally reading entityActivity it works as intended, but not if we're trying to read another table.

gibranparvez
@gibranparvez
We ended up solving the above by pairing the index name with specifying the type as well
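For reference, a hedged sketch of that pairing; "detection" is a hypothetical type name and addTypeName is assumed to be the QueryBuilder method for it, while "detectionIndex" is the index mentioned above:

// constrain the query to both a specific index and a specific type
rddOptions.setQuery(
    QueryBuilder.newBuilder().indexName("detectionIndex").addTypeName("detection").build());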
gibranparvez
@gibranparvez
Has anyone encountered
2019-10-19 14:57:35,509 WARN [main] cli.GeoWaveMain: Unable to execute operation java.lang.Exception: Error adding GeoServer layer for store 'test-store': {"adapters":[]} GeoServer Response Code = 400
we haven't seen this adapters error before and don't know where we can find a list of them for our store
This is from running geowave gs layer add ${GEOWAVE_NAMESPACE}-store -ws cite
the previous command to that was geowave gs ds add ${GEOWAVE_NAMESPACE}-store -ws cite
Haocheng Wang
@HaochengNn
image.png
Hi, I've run into a problem: only under the "extensions\formats\tdrive" directory can I import "org.locationtech.geowave.datastore.hbase.config.HBaseOptions", but when I do some development under "extensions\formats\geolife" or another format's directory and want to import the HBase-related classes, it fails and says "The import org.locationtech.geowave.datastore cannot be resolved". Can anyone help me solve this?
surajtalari
@surajtalari
Hi, in dbscan what is this parameter? "The following option is required: --query.typeNames" I can't find any documentation regarding it.
surajtalari
@surajtalari
..
rfecher
@rfecher
@gibranparvez hmm, I wonder why the list of adapters in that message is empty ... regardless, you should see an error in the geoserver log that is more descriptive. You can also just directly add the layer through the geoserver admin console.
@HaochengNn the formats do not have a direct dependency on any of the datastore implementations by design ... in general the "ext" folder is made up of plugins that are discovered at runtime, enabling an application's dependencies to be limited to only what you need
@surajtalari this is answered on the mailing list
surajtalari
@surajtalari

Hi, I'm a newbie to geowave and the command 'geowave store listtypes <storename>' gives me the following output with no types.

"
05 Nov 19:40:39 WARN [core.NettyUtil] - Found Netty's native epoll
transport, but not running on linux-based operating system. Using NIO
instead.
05 Nov 19:40:40 WARN [core.Cluster] - You listed
localhost/0:0:0:0:0:0:0:1:9042 in your contact points, but it wasn't found
in the control host's system.peers at startup
Available types:
"
The following are the steps I followed to ingest data:

1. geowave store add teststore3 -t cassandra --contactPoints localhost --gwNamespace test3
2. geowave index add -t spatial teststore3 testindex3
3. geowave ingest localtogw sample.csv teststore3 testindex3 -f geotools-vector

Here sample.csv contains columns lat, long. I can see a keyspace 'test3' created in cassandra with one table named 'index_geowave_metadata'. But when I do DBSCAN with the below command

'geowave analytic dbscan -cmi 5 -cms 10 -emn 2 -emx 6 -pmd 1000 -orc 4 -hdfs localhost:9870 -jobtracker localhost:8088 -hdfsbase /test_dir teststore3 --query.typeNames '

It gives me an error saying

'Expected a value after parameter --query.typeNames'

What should I do now? Can anyone say where I am going wrong?

Haocheng Wang
@HaochengNn
@rfecher thank you!
rfecher
@rfecher
@surajtalari this looks like the same question that was answered yesterday in detail on the geowave-dev mailing list?
Grigory
@pomadchin
hey guys; I'm using a custom data adapter and writing into Cassandra storage;
if I write data into an empty Cassandra / table - everything works perfectly
once I try to reingest everything (by creating a new adapter, etc., etc.) I'm getting
java.lang.NullPointerException:
[info]   at org.locationtech.geowave.core.store.adapter.InternalDataAdapterWrapper.encode(InternalDataAdapterWrapper.java:70)
is there something wrong with the serialization, or did I not specify something in the SPI?