jhickman-prominent
@jhickman-prominent
[screenshot attached]
I think this ^ might be a bug. In Line_110, the code is actually using the underscore (_) character rather than what I believe should be the dot (.) character as a namespace separator. I will dig a bit further on this.
rfecher
@rfecher
@jhickman-prominent the geowave namespace is a table prefix, and underscores are used for suffixes. An Accumulo namespace uses the '.' separator by convention. So if you want to use "geowave" as your Accumulo namespace, then your geowave namespace should include a '.'; otherwise it will use the default Accumulo namespace. For example, with your store above, if you used "geowave.geolife" then all your tables would have that prefix and the Accumulo namespace would be "geowave" as you are expecting.
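For example, when adding the store (a sketch; the store name here is hypothetical and the remaining connection options are elided):

    geowave store add geolife -t accumulo --gwNamespace geowave.geolife [connection options]

All tables created by that store would then carry the geowave.geolife prefix, so Accumulo treats "geowave" as the namespace.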
jhickman-prominent
@jhickman-prominent
@rfecher, I made the modifications you described and the artifacts were created correctly. Thanks!
rfecher
@rfecher
np, glad to help
gibranparvez
@gibranparvez
Hi, I was wondering where I could find the storename of my geowave-hbase instance.
I'm trying to write a Spark source using the GeoWaveRDDLoader and it seems to require a storename, but I'm not sure how I can figure that out. I can't perform any store operations in the geowave command line either, since I don't know the storename.
If I log into my HBase I can see the metadata, but there's no key in there with that name.
Haocheng Wang
@HaochengNn
[screenshot attached]
I'm running HBase 1.2.1 in pseudo-distributed mode with Hadoop 2.7.7 on my computer, and everything goes well until I place "geowave-deploy-1.1.0-SNAPSHOT-hbase.jar" into hbase/lib: HRegionServer quits automatically after I start HBase, and then HMaster quits. The regionserver log is above. Can anyone give me some ideas to solve this problem?
rfecher
@rfecher
@gibranparvez "storename" is a commandline concept only ... when you run geowave store add ...the primary required parameter is "storename" which is just an arbitrary name you give to that connection configuration so that you can reference it in any subsequent command without needing all the other options. For GeoWaveRDDLoader you need "DataStorePluginOptions" which can be instantiated with any of the data store's required options. So in your case you can use new DataStorePluginOptions(<HBaseRequiredOptions>) to get that. I'm not sure where you're seeing "storename" come from in GeoWaveRDDLoader, but hopefully that clarifies it
rfecher
@rfecher
@HaochengNn the error message appears to be a mismatch between the version of guava in the geowave jar and the version of guava in HBase. It appears that you are building the geowave jar from source, so perhaps just try adding <guava.version>12.0.1</guava.version> after this line and rebuild the geowave-hbase jar.
I think the difference between how you're deploying it and how we do may mean you don't get the same classpath isolation - we add it as a coprocessor library and to the dynamic library path of HBase, but if in your installation you are putting it in the same directory as the core HBase libraries, then whichever guava version happens to come first on the classpath wins, which is trouble for HBase.
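That override would sit in the <properties> block of the pom, roughly like this (a sketch; "after this line" above points at the exact spot):

    <properties>
      ...
      <guava.version>12.0.1</guava.version>
    </properties>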
Haocheng Wang
@HaochengNn
@rfecher Thank you, it works now!
gibranparvez
@gibranparvez
@rfecher I see. I brought it up because the store loader throws an IOException saying "cannot find store name". But maybe I don't actually need the store loader to get the store options?
gibranparvez
@gibranparvez
2019-09-19 21:39:32 ERROR GeoWaveRDDLoader:91 - Must supply input store to load. Please set storeOptions and try again. Specifically this
rfecher
@rfecher
yep, here's the code and it seems you must be passing in null for the input DataStorePluginOptions... you should be fine if you instead instantiate the plugin options using HBaseRequiredOptions in the constructor
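For example (a sketch; loadRDD's exact signature is an assumption, and pluginOptions is the DataStorePluginOptions constructed as above):

    // passing a non-null DataStorePluginOptions avoids the "must supply input store" error
    GeoWaveRDD rdd = GeoWaveRDDLoader.loadRDD(sc, pluginOptions, new RDDOptions());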
gibranparvez
@gibranparvez
I see I see. Thanks!
gibranparvez
@gibranparvez
@rfecher I'm having a hard time finding an option in the DataStorePluginOptions class or the RDD options classes to specify an index name. Suggestions on this? Right now our storeOptions just consists of
this.storeOptions = new HBaseRequiredOptions(zkAddress, geowaveNamespace, extraOpts); and the HBaseOptions class doesn't seem to have anywhere to specify that in its methods
rfecher
@rfecher
@gibranparvez do you have multiple indices and you'd like the RDD to use one in particular? If so, that's a query option, which can be specified through the QueryBuilder API (for example, QueryBuilder.newBuilder().indexName("myindex").build() would query everything in "myindex").
a store can have multiple indices (and multiple data types) - it all depends on what you write to it (ingest)
so it wouldn't make sense to configure an index as part of a store, but an RDD is representative of a geowave query, so RDDOptions.setQuery() would allow you to choose an index (and look at the DataStore API and examples for how to write data to an index or indices of your choice)
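Putting those two pieces together (a sketch; the index name is hypothetical):

    RDDOptions rddOptions = new RDDOptions();
    // restrict the RDD's underlying query to one particular index
    rddOptions.setQuery(QueryBuilder.newBuilder().indexName("myindex").build());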
gibranparvez
@gibranparvez
Thanks for the help! The query method worked.
gibranparvez
@gibranparvez

Okay, so I thought this was working, but it looks like it can never find the index name that I specify:

2019-10-03 04:27:11 WARN AbstractGeoWavePersistence:232 - Object 'detectionIndex' not found

even though it's listed if I go into the hbase shell and list tables. It always defaults to our entityActivityIndex, so when we attempt to run a Spark job intentionally reading entityActivity it works as intended, but not if we're trying to read another table.

gibranparvez
@gibranparvez
We ended up solving the above by also specifying the type along with the index.
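In other words, something like this (a sketch; addTypeName is assumed to be the QueryBuilder method for this, and both names are ours):

    // constrain the query by both index and data type
    rddOptions.setQuery(QueryBuilder.newBuilder()
        .indexName("detectionIndex")
        .addTypeName("detection")
        .build());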
gibranparvez
@gibranparvez
Has anyone encountered
2019-10-19 14:57:35,509 WARN [main] cli.GeoWaveMain: Unable to execute operation java.lang.Exception: Error adding GeoServer layer for store 'test-store': {"adapters":[]} GeoServer Response Code = 400
We haven't seen this adapters error before and don't know where we can find a list of them for our store.
This is from running geowave gs layer add ${GEOWAVE_NAMESPACE}-store -ws cite
The previous command was geowave gs ds add ${GEOWAVE_NAMESPACE}-store -ws cite
Haocheng Wang
@HaochengNn
[screenshot attached]
Hi, I've run into a problem: only under the "extensions\formats\tdrive" directory can I import "org.locationtech.geowave.datastore.hbase.config.HBaseOptions", but when I do some development under "extensions\formats\geolife" or another format's directory and want to import the HBase-related classes, it fails with "The import org.locationtech.geowave.datastore cannot be resolved". Can anyone help me solve this?
surajtalari
@surajtalari
Hi, in dbscan what is this parameter? "The following option is required: --query.typeNames". I can't find any documentation regarding this.
surajtalari
@surajtalari
..
rfecher
@rfecher
@gibranparvez hmm, I wonder why the list of adapters in that message is empty... regardless, you should see an error in the geoserver log that is more descriptive. You can also just directly add the layer through the geoserver admin console.
@HaochengNn the formats do not have a direct dependency on any of the datastore implementations by design... in general the "ext" folder comprises plugins that are discovered at runtime, enabling an application's dependencies to be limited to only what you need
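So if you want a format module to compile against the HBase classes, you'd add that dependency to the module explicitly, along these lines (a sketch; the artifactId is assumed from the project's naming convention, the version from the jar mentioned above):

    <dependency>
      <groupId>org.locationtech.geowave</groupId>
      <artifactId>geowave-datastore-hbase</artifactId>
      <version>1.1.0-SNAPSHOT</version>
    </dependency>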
@surajtalari this is answered on the mailing list
surajtalari
@surajtalari

Hi, I'm a newbie to geowave, and the command 'geowave store listtypes <storename>' gives me the following output with no types:

"
05 Nov 19:40:39 WARN [core.NettyUtil] - Found Netty's native epoll transport, but not running on linux-based operating system. Using NIO instead.
05 Nov 19:40:40 WARN [core.Cluster] - You listed localhost/0:0:0:0:0:0:0:1:9042 in your contact points, but it wasn't found in the control host's system.peers at startup
Available types:
"

The following are the steps I followed to ingest data:

1. geowave store add teststore3 -t cassandra --contactPoints localhost --gwNamespace test3
2. geowave index add -t spatial teststore3 testindex3
3. geowave ingest localtogw sample.csv teststore3 testindex3 -f geotools-vector

Here sample.csv contains columns lat, long. I can see a keyspace 'test3' created in cassandra with one table named 'index_geowave_metadata'. But when I do DBScan with the below command:

'geowave analytic dbscan -cmi 5 -cms 10 -emn 2 -emx 6 -pmd 1000 -orc 4 -hdfs localhost:9870 -jobtracker localhost:8088 -hdfsbase /test_dir teststore3 --query.typeNames '

it gives me an error saying:

'Expected a value after parameter --query.typeNames'

What should I do now? Can anyone say where I am going wrong?

Haocheng Wang
@HaochengNn
@rfecher thank you!
rfecher
@rfecher
@surajtalari this looks like the same question that was answered yesterday in detail on the geowave-dev mailing list?
Grigory
@pomadchin
hey guys; I'm using a custom DataAdapter and writing into Cassandra storage;
if I write data into an empty Cassandra table, everything works perfectly
but once I try to re-ingest everything (by creating a new adapter, etc.) I'm getting
java.lang.NullPointerException:
[info]   at org.locationtech.geowave.core.store.adapter.InternalDataAdapterWrapper.encode(InternalDataAdapterWrapper.java:70)
is there something wrong with the serialization, or did I not specify something in the SPI?
toBinary and fromBinary work okay, but it looks like it doesn't perform a proper serialization/deserialization of the adapter(?)
Grigory
@pomadchin
ah, that was indeed SPI; I forgot to add the adapter to the registry...
thanks!
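For anyone hitting the same NPE, the registration Grigory describes looks roughly like this (a sketch; the SPI interface name and the id value are assumptions, and the class must also be declared in META-INF/services):

    // registers the custom adapter with GeoWave's persistable registry
    // so it can be deserialized when read back from the store
    public class MyPersistableRegistry implements PersistableRegistrySpi {
      @Override
      public PersistableIdAndConstructor[] getPersistables() {
        return new PersistableIdAndConstructor[] {
          new PersistableIdAndConstructor((short) 10750, MyDataAdapter::new)
        };
      }
    }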
rfecher
@rfecher
ahh, yeah, thanks for spotting that - I was on a call but about to suggest that
Grigory
@pomadchin

yo, it's me again!
I created a custom field and would like to query by it:

val DIMENSIONS = Array(
  new LongitudeDefinition(),
  new LatitudeDefinition(true),
  new TimeDefinition(Unit.YEAR),
  new MyDefinition()
)

// …

new CustomNameIndex( … )

And the query constraints look like:

val geoConstraints = GeometryUtils.basicConstraintsFromGeometry(queryGeometry)
val temporalConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(startTime.getTime(), endTime.getTime()), false),
    classOf[TimeDefinition],
    classOf[SimpleTimeDefinition]
  )
)
val myConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(i, i), false),
    classOf[MyDefinition]
  )
)

val cons = geoConstraints.merge(temporalConstraints).merge(myConstraints)

When I define myConstraints like this:

val myConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(i, i), false),
    classOf[MyDefinition]
  )
)

It looks like it doesn't filter by my custom definition; I noticed that it goes into UnboundedHilbertSFCOperations and computes the normalized value // etc etc

But if I use

val myConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(depth, depth), false),
    classOf[NumericDimensionDefinition]
  )
)

filtering works fast and correctly O:

It looks like I forgot to register something somewhere again?
Grigory
@pomadchin
so it jumps into SFCDimensionDefinition.normalize when I use MyDefinition
Grigory
@pomadchin
ah no... I had a wrong test and looked in the wrong place... bad night. thanks again! everything works as expected
rfecher
@rfecher
haha, great! keep the questions (and answers) coming ;)