Haocheng Wang
@HaochengNn
Hi, I've run into a problem: only under the "extensions\formats\tdrive" directory can I import "org.locationtech.geowave.datastore.hbase.config.HBaseOptions". When I do some development under "extensions\formats\geolife" or another format's directory and want to import the HBase-related classes, it fails with "The import org.locationtech.geowave.datastore cannot be resolved". Can anyone help me solve this?
surajtalari
@surajtalari
hi, in dbscan, what is this parameter? "The following option is required: --query.typeNames" I can't find any documentation regarding this.
surajtalari
@surajtalari
..
rfecher
@rfecher
@gibranparvez hmm, I wonder why the list of adapters in that message is empty... regardless, you should see a more descriptive error in the geoserver log. You can also just directly add the layer through the geoserver admin console.
@HaochengNn the formats do not have a direct dependency on any of the datastore implementations by design ... in general, the "ext" folder is comprised of plugins that are discovered at runtime, enabling an application's dependencies to be limited to only what you need
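The runtime plugin discovery described here is the SPI mechanism that comes up again later in this chat; in Java it is typically backed by `java.util.ServiceLoader`. A minimal, self-contained sketch of the pattern, with a hypothetical `FormatPlugin` interface standing in for GeoWave's actual SPI interfaces:

```java
import java.util.ServiceLoader;

public class PluginDiscovery {
    // Hypothetical plugin interface; GeoWave's real SPI interfaces differ.
    public interface FormatPlugin {
        String name();
    }

    public static int countPlugins() {
        int count = 0;
        // ServiceLoader scans META-INF/services/<interface-name> files on the
        // classpath; a plugin jar registers itself there, so adding the jar to
        // the classpath is enough to make it discoverable at runtime.
        for (FormatPlugin p : ServiceLoader.load(FormatPlugin.class)) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // No provider file is on the classpath here, so nothing is found --
        // the same symptom as forgetting to register a plugin.
        System.out.println(countPlugins());
    }
}
```

The useful failure mode to know: a missing provider registration doesn't raise an error, it just yields an empty result.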
@surajtalari this is answered on the mailing list
surajtalari
@surajtalari

Hi, I'm a newbie to geowave, and the command 'geowave store listtypes <storename>' gives me the following output with no types:

"
05 Nov 19:40:39 WARN [core.NettyUtil] - Found Netty's native epoll transport, but not running on a linux-based operating system. Using NIO instead.
05 Nov 19:40:40 WARN [core.Cluster] - You listed localhost/0:0:0:0:0:0:0:1:9042 in your contact points, but it wasn't found in the control host's system.peers at startup
Available types:
"

The following are the steps I followed to ingest data:

1. geowave store add teststore3 -t cassandra --contactPoints localhost --gwNamespace test3
2. geowave index add -t spatial teststore3 testindex3
3. geowave ingest localtogw sample.csv teststore3 testindex3 -f geotools-vector

Here sample.csv contains the columns lat and long. I can see a keyspace 'test3' created in cassandra with one table named 'index_geowave_metadata'. But when I do DBSCAN with the command below:

'geowave analytic dbscan -cmi 5 -cms 10 -emn 2 -emx 6 -pmd 1000 -orc 4 -hdfs localhost:9870 -jobtracker localhost:8088 -hdfsbase /test_dir teststore3 --query.typeNames '

it gives me an error saying:

'Expected a value after parameter --query.typeNames'

What should I do now? Can anyone say where I am going wrong?

Haocheng Wang
@HaochengNn
@rfecher thank you!
rfecher
@rfecher
@surajtalari this looks like the same question that was answered yesterday in detail on the geowave-dev mailing list?
Grigory
@pomadchin
hey guys; I'm using a custom DataAdapter and writing into Cassandra storage;
if I write data into an empty Cassandra table, everything works perfectly
but once I try to reingest everything (by creating a new adapter, etc.) I'm getting
java.lang.NullPointerException:
[info]   at org.locationtech.geowave.core.store.adapter.InternalDataAdapterWrapper.encode(InternalDataAdapterWrapper.java:70)
is there something wrong with the serialization, or did I not specify something in the SPI?
toBinary and fromBinary work okay; but it looks like it doesn't perform a proper serialization/deserialization of the adapter(?)
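For context, the toBinary/fromBinary pattern mentioned here is GeoWave's Persistable-style byte serialization. A minimal self-contained sketch of the roundtrip (the class and field layout are hypothetical, not GeoWave's actual interface); note that a correct roundtrip alone is not enough if the class isn't registered via SPI, which turned out to be the issue in this thread:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the toBinary/fromBinary pattern; field layout
// here is hypothetical.
public class MyAdapterState {
    private String typeName;
    private int precision;

    // no-arg constructor so the store can instantiate before fromBinary()
    public MyAdapterState() {}

    public MyAdapterState(String typeName, int precision) {
        this.typeName = typeName;
        this.precision = precision;
    }

    public byte[] toBinary() {
        byte[] nameBytes = typeName.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + nameBytes.length + 4);
        buf.putInt(nameBytes.length); // length prefix for the variable field
        buf.put(nameBytes);
        buf.putInt(precision);
        return buf.array();
    }

    public void fromBinary(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        byte[] nameBytes = new byte[buf.getInt()];
        buf.get(nameBytes);
        typeName = new String(nameBytes, StandardCharsets.UTF_8);
        precision = buf.getInt();
    }

    public String getTypeName() { return typeName; }
    public int getPrecision() { return precision; }
}
```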
Grigory
@pomadchin
ah, that was indeed SPI; I forgot to add the adapter to the registry...
thanks!
rfecher
@rfecher
ahh, yeah, thanks for spotting that - I was on a call but about to suggest that
Grigory
@pomadchin

yo, it's me again!
I created a custom field and would like to query by it:

val DIMENSIONS = Array(
  new LongitudeDefinition(),
  new LatitudeDefinition(true),
  new TimeDefinition(Unit.YEAR),
  new MyDefinition()
)

// … 

new CustomNameIndex( … )

And the query constraints look like:

val geoConstraints = GeometryUtils.basicConstraintsFromGeometry(queryGeometry)
val temporalConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(startTime.getTime(), endTime.getTime()), false),
    classOf[TimeDefinition],
    classOf[SimpleTimeDefinition]
  )
)
val myConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(i, i), false),
    classOf[MyDefinition]
  )
)

val cons = geoConstraints.merge(temporalConstraints).merge(myConstraints)

When I define myConstraints like this:

val myConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(i, i), false),
    classOf[MyDefinition]
  )
)

it looks like it doesn't filter by my custom definition; I noticed that it goes into UnboundedHilbertSFCOperations and computes the normalized value, etc.

But if I use

val myConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(depth, depth), false),
    classOf[NumericDimensionDefinition]
  )
)

filtering works fast and correctly :O

It looks like I forgot to register something somewhere again?
Grigory
@pomadchin
so it jumps into SFCDimensionDefinition.normalize when I use MyDefinition
Grigory
@pomadchin
ah no... I had a wrong test and looked in the wrong place... bad night. Thanks again! Everything works as expected
rfecher
@rfecher
haha, great! keep the questions (and answers) coming ;)
Grigory
@pomadchin
so for instance, I have a custom adapter and I want to create a custom index; how would the key for an entry be built? Is there a way to check what the key would look like without ingesting? Something like index.encodeKey(entry) //> would be some key here; I'm also wondering what happens by default if there are duplicates (by key) in the database?
Grigory
@pomadchin

Hm, also, what is the dataId in adapters? How is it used, and how does it differ from the dimensions used for building an index? And I'm wondering how the actual indexing information is stored in Cassandra?

// sorry for so many questions, just diving into the query / indexing mechanism; and yep, I saw the Key Structure picture, but my Cassandra table actually looks like this only:

(
    partition blob,
    adapter_id smallint,
    sort blob,
    data_id blob,
    vis blob,
    nano_time blob,
    field_mask blob,
    num_duplicates tinyint,
    value blob,
    PRIMARY KEY (partition, adapter_id, sort, data_id, vis, nano_time)
)
Grigory
@pomadchin
Hmmm, maybe I need to implement IndexDependentDataAdapter?
Grigory
@pomadchin
would it make sense to generate an ID based on the entry's dimensions and use it as the dataId?
rfecher
@rfecher
the data ID needs to be unique per data type name (which corresponds to an adapter ID) - so the combination of the adapter ID + data ID pair must be unique within a geowave datastore
the index strategy ends up mapping rows to a sort key and a partition key
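The uniqueness rule above can be sketched as a simple check: treat the (adapter ID, data ID) pair as the logical key and look for collisions. A hedged, self-contained illustration of the invariant, not GeoWave code (real keys are byte arrays, not delimited strings):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the invariant: the (type name / adapter ID, data ID) pair must
// be unique within a store. Delimiter-joined strings are for illustration
// only; real implementations would compare byte arrays.
public class KeyUniqueness {
    public static boolean allUnique(String[][] adapterAndDataIds) {
        Set<String> seen = new HashSet<>();
        for (String[] pair : adapterAndDataIds) {
            // pair[0] = adapter ID, pair[1] = data ID; a repeat means collision
            if (!seen.add(pair[0] + "::" + pair[1])) {
                return false;
            }
        }
        return true;
    }
}
```

Note the consequence: the same data ID is fine across two different types, but not twice within one type.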
Grigory
@pomadchin
if it is an image, what should the dataId be? A combination of spatial bounds + time + something else? Or what is the right way to use it?
rfecher
@rfecher
IndexDependentDataAdapter is likely unnecessary for you. Because the adapter is generally independent of the index by design, sometimes it is necessary for the adapter to get a callback when it has been assigned to an index; one example of this is that, because our index can be configured with any CRS, our vector data adapter assigns the feature type's default CRS to be the same as the index, for efficiency
for the raster data adapter, the data ID is actually empty bytes
though you have a custom adapter that may not follow our RasterDataAdapter ... generally it treats overlap differently than other adapters do
it uses IndexDependentDataAdapter to convert the incoming arbitrarily sized image into tiles that match the grid of the index
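The tiling idea can be sketched independently of GeoWave: given a fixed grid, compute which grid-aligned tiles cover an incoming image's extent. This is only an illustration of the concept, not RasterDataAdapter's actual logic; the grid anchoring and tile size here are assumptions:

```java
// Sketch (not GeoWave's RasterDataAdapter) of splitting an arbitrary extent
// into tiles aligned to a fixed index grid -- the idea behind converting one
// incoming image into grid-matching tiles.
public class GridTiler {
    // Returns [minTileX, minTileY, maxTileX, maxTileY] (inclusive) for an
    // extent, given a grid of tileSize x tileSize cells anchored at the origin.
    public static int[] coveringTiles(double minX, double minY,
                                      double maxX, double maxY,
                                      double tileSize) {
        return new int[] {
            (int) Math.floor(minX / tileSize),
            (int) Math.floor(minY / tileSize),
            (int) Math.floor(maxX / tileSize),
            (int) Math.floor(maxY / tileSize)
        };
    }

    public static int tileCount(int[] range) {
        return (range[2] - range[0] + 1) * (range[3] - range[1] + 1);
    }
}
```

For example, an extent of (5, 5)-(25, 15) over a grid of 10x10 cells spans tile columns 0-2 and rows 0-1, i.e. 6 tiles, each of which would be written as its own row.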
Grigory
@pomadchin

Yep! And I have a custom adapter, and my question is more about how to generate the dataId properly. I looked into IndexDependentDataAdapter to use the index to generate the partition key manually (similar to what is done in the RasterDataAdapter); is that a correct approach? My idea was to get the partition key + sort key from the index and use them as the dataId; or is that bad?

I saw that in other adapters you use either a featureId or a string built from the data's unique parameters; is that something I should aim for?

rfecher
@rfecher
and it also implements RowMergingDataAdapter to inject custom merge strategy logic (defaulted to "NoDataMergeStrategy", which tracks "no data" in the form of footprint boundaries and reserved no-data values; the last one written wins for "data", but it doesn't blanket-overwrite tiles in the no-data case)
Grigory
@pomadchin
hm, at least for now I'm definitely following an easy path :D I need something like FeatureAdapter (I don't need to merge entries and probably won't, but I need to index them (3-5 dims) and query by these dims)
rfecher
@rfecher
well, overlapping data IDs in the raster case are intentional so that merging happens
Grigory
@pomadchin
Ahhhhh
rfecher
@rfecher
hmm, are you creating the index programmatically?
Grigory
@pomadchin
Yep; something like
new CustomNameIndex(
  XZHierarchicalIndexFactory.createFullIncrementalTieredStrategy(
    dimensions, // 4 dims
    Array[Int](
      options.getBias.getSpatialPrecision,
      options.getBias.getSpatialPrecision,
      options.getBias.getTemporalPrecision,
      options.getBias.getSpatialPrecision // just an example of a 4th dim precision
    ),
    SFCType.HILBERT,
    options.getMaxDuplicates
  ),
  indexModel,
  combinedId
)
rfecher
@rfecher
so to the best of my understanding, you can really treat it like vector data rather than what we're doing with tiling the data within the natural gridding of the index and merging overlapping raster tiles ... so make the data ID something unique per row
I don't believe you need to care about IndexDependent... or RowMerging...
I don't think using index sort/partition keys as the data ID would be a good idea (they're not guaranteed unique, plus they're already in the key; one thing the data ID is there for is to absolutely guarantee uniqueness of a key)
with 3-5 dimensions you start to get into extreme unlikeliness of overlapping keys anyway
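One common way to "make the data ID something unique per row" while keeping re-ingests deterministic is to hash the entry's identifying attributes, so the same logical entry always maps to the same ID. A sketch using hypothetical field names (this is not a GeoWave API, just standard-library hashing):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: derive a per-row data ID by hashing identifying attributes.
// The fields (sensor, timestamp, lon, lat) are hypothetical examples.
public class DataIdFactory {
    public static byte[] dataIdFor(String sensor, long timestampMillis,
                                   double lon, double lat) {
        try {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            // join with a delimiter so adjacent fields can't blur together
            String key = sensor + "|" + timestampMillis + "|" + lon + "|" + lat;
            return sha.digest(key.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }
}
```

A random UUID per row would also satisfy uniqueness, but a deterministic hash means re-ingesting the same entry overwrites rather than duplicates it.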
Grigory
@pomadchin
Thanks @rfecher, that makes sense; so I will try to derive some unique string based on the input entry (: thanks!
I also thought to derive it based on some information in the entry and on the index :o
~ get the partition key from the index by passing all dims in + some kind of identifying information from the entry
rfecher
@rfecher
and in answer to another question you had: to just see what keys should be generated for a row in your index, you can call index.getIndexStrategy().getInsertionIds(<BasicNumericDataSet>)
Grigory
@pomadchin
:+1: nice
rfecher
@rfecher
BasicNumericDataSet just wraps NumericData (which can be a range or a single value) per dimension, in the same order as the dimensions defined in your index
basically, what your NumericDimensionField in the CommonIndexModel does within its getNumericData() method gets passed to the index strategy's getInsertionIds() method, which ultimately gets written as the partition and sort keys in the data store
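Conceptually, before the space-filling curve computes keys, each dimension's numeric value is normalized into [0, 1] over that dimension's bounds (roughly what the SFCDimensionDefinition.normalize step mentioned above does). A simplified, self-contained sketch of that normalization, with illustrative bounds; this is not GeoWave's actual SFC code:

```java
// Simplified sketch of per-dimension normalization ahead of SFC key
// computation; bounds and clamping behavior here are assumptions.
public class DimensionNormalizer {
    public static double normalize(double value, double min, double max) {
        if (max <= min) {
            throw new IllegalArgumentException("empty dimension range");
        }
        // clamp to the dimension bounds, then scale into [0, 1]
        double clamped = Math.max(min, Math.min(max, value));
        return (clamped - min) / (max - min);
    }

    // normalize one value per dimension, in the same order as the index model
    public static double[] normalizeAll(double[] values, double[][] bounds) {
        double[] out = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            out[i] = normalize(values[i], bounds[i][0], bounds[i][1]);
        }
        return out;
    }
}
```

This is also why a custom dimension definition that reports the wrong bounds (or isn't consulted at all) silently produces keys that don't filter the way you expect.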