Hi, I'm a newbie to GeoWave, and the command 'geowave store listtypes <storename>' gives me the following
output with no types:
05 Nov 19:40:39 WARN [core.NettyUtil] - Found Netty's native epoll
transport, but not running on linux-based operating system. Using NIO
05 Nov 19:40:40 WARN [core.Cluster] - You listed
localhost/0:0:0:0:0:0:0:1:9042 in your contact points, but it wasn't found
in the control host's system.peers at startup
The following are the steps I followed to ingest data:
1. geowave store add teststore3 -t cassandra --contactPoints localhost
2. geowave index add -t spatial teststore3 testindex3
3. geowave ingest localtogw sample.csv teststore3 testindex3 -f
Here sample.csv contains columns lat, long. I can see a keyspace 'test3'
created in Cassandra with one table named 'index_geowave_metadata'. But when I run DBSCAN
with the below command:
'geowave analytic dbscan -cmi 5 -cms 10 -emn 2 -emx 6 -pmd 1000 -orc 4
-hdfs localhost:9870 -jobtracker localhost:8088 -hdfsbase /test_dir
teststore3 --query.typeNames '
it gives me an error saying
'Expected a value after parameter --query.typeNames'
What should I do now? Can anyone tell me where I'm going wrong?
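(A hedged aside: the error means `--query.typeNames` was given no argument — the command above ends right after the flag. A sketch of the same command with a type name supplied; "sample" is an assumed placeholder, substitute whatever `geowave store listtypes teststore3` reports once ingest succeeds:)

```shell
# Assumed fix: --query.typeNames needs the name of an ingested type.
# "sample" is a placeholder type name, not from the original chat.
geowave analytic dbscan -cmi 5 -cms 10 -emn 2 -emx 6 -pmd 1000 -orc 4 \
  -hdfs localhost:9870 -jobtracker localhost:8088 -hdfsbase /test_dir \
  teststore3 --query.typeNames sample
```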
java.lang.NullPointerException: [info] at org.locationtech.geowave.core.store.adapter.InternalDataAdapterWrapper.encode(InternalDataAdapterWrapper.java:70)
fromBinary works okay, but it looks like it doesn't perform a proper serialization / deserialization of the adapter(?)
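(Not GeoWave's API — a generic, self-contained sketch of the kind of toBinary/fromBinary round-trip check that usually isolates which adapter field is lost during serialization; the field names here are made up:)

```java
import java.nio.ByteBuffer;

// Stand-in for a persistable adapter: not GeoWave's Persistable interface,
// just an illustration of checking that fromBinary reproduces every field
// that toBinary wrote, in the same order.
public class RoundTripCheck {
    static byte[] toBinary(String typeName, int precision) {
        byte[] name = typeName.getBytes();
        ByteBuffer buf = ByteBuffer.allocate(4 + name.length + 4);
        buf.putInt(name.length); // length prefix so fromBinary knows how much to read
        buf.put(name);
        buf.putInt(precision);
        return buf.array();
    }

    static Object[] fromBinary(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        byte[] name = new byte[buf.getInt()];
        buf.get(name);
        int precision = buf.getInt();
        return new Object[] {new String(name), precision};
    }

    public static void main(String[] args) {
        Object[] back = fromBinary(toBinary("myAdapter", 31));
        // A NullPointerException in encode() after deserialization usually
        // means one field was skipped in a round trip like this.
        System.out.println(back[0] + ":" + back[1]);
    }
}
```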
Yo, it's me again!
I created a custom field and would like to query by it:
val DIMENSIONS = Array(
  new LongitudeDefinition(),
  new LatitudeDefinition(true),
  new TimeDefinition(Unit.YEAR),
  new MyDefinition()
) // … new CustomNameIndex( … )
And the query constraints look like:
val geoConstraints = GeometryUtils.basicConstraintsFromGeometry(queryGeometry)
val temporalConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(startTime.getTime(), endTime.getTime()), false),
    classOf[TimeDefinition],
    classOf[SimpleTimeDefinition]
  )
)
val myConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(i, i), false),
    classOf[MyDefinition]
  )
)
val cons = geoConstraints.merge(temporalConstraints).merge(myConstraints)
When I define myConstraints like this:
val myConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(i, i), false),
    classOf[MyDefinition]
  )
)
it looks like it doesn't filter by my custom definition; I noticed that it goes into UnboundedHilbertSFCOperations and computes a normalized value // etc etc
But if I use
val myConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(depth, depth), false),
    classOf[NumericDimensionDefinition]
  )
)
filtering works fast and correctly :O
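(A guess at the mechanics, in plain Java rather than GeoWave's actual ConstraintsByClass: constraints appear to be matched to dimensions by the class token, so a constraint registered under one definition class only applies to dimensions of a compatible class, while one registered under the shared NumericDimensionDefinition supertype can match any numeric dimension. A toy model:)

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of class-keyed constraint dispatch; the class names mirror the
// chat above, but none of this is GeoWave's actual implementation.
public class ConstraintDispatch {
    interface NumericDimensionDefinition {}
    static class MyDefinition implements NumericDimensionDefinition {}
    static class LatitudeDefinition implements NumericDimensionDefinition {}

    static String lookup(Map<Class<?>, String> constraints,
                         NumericDimensionDefinition dim) {
        // Pick the first registered constraint whose class the dimension satisfies.
        for (Map.Entry<Class<?>, String> e : constraints.entrySet()) {
            if (e.getKey().isInstance(dim)) {
                return e.getValue();
            }
        }
        return "unconstrained (full range)";
    }

    public static void main(String[] args) {
        Map<Class<?>, String> byExactClass = new HashMap<>();
        byExactClass.put(MyDefinition.class, "[i, i]");
        // A LatitudeDefinition does not match MyDefinition, so it falls back
        // to the full range -- the "no filtering" behavior described above.
        System.out.println(lookup(byExactClass, new LatitudeDefinition()));
        System.out.println(lookup(byExactClass, new MyDefinition()));
    }
}
```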
index.encodeKey(entry) //> would be some key here; I'm also wondering what happens by default if there are duplicates (by key) in the database?
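(On the duplicates-by-key part: assuming Cassandra-style semantics, a second write with the same primary key is an upsert — the last write wins rather than producing a duplicate row. A toy sketch of that behavior, not GeoWave code:)

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of primary-key upsert semantics (last write wins), which
// is how a Cassandra-style store treats two rows with the same primary key.
public class UpsertSketch {
    public static void main(String[] args) {
        Map<String, String> table = new LinkedHashMap<>();
        String primaryKey = "partition|adapter|sort|dataId";
        table.put(primaryKey, "value-1");
        table.put(primaryKey, "value-2"); // same key: overwrite, no duplicate row
        System.out.println(table.size() + " " + table.get(primaryKey));
    }
}
```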
Hm, also, what is dataId in adapters? How is it used, and how does it differ from the dimensions that are used for building an index? And I'm wondering how the actual indexing information is stored in Cassandra?
// sorry for so many questions, just diving into the query / indexing mechanism, and yep, I saw the
Key Structure picture, but actually my Cassandra table looks like this only:
(
  partition blob,
  adapter_id smallint,
  sort blob,
  data_id blob,
  vis blob,
  nano_time blob,
  field_mask blob,
  num_duplicates tinyint,
  value blob,
  PRIMARY KEY (partition, adapter_id, sort, data_id, vis, nano_time)
)
IndexDependentDataAdapter is likely unnecessary for you. Because the adapter is generally independent of the index by design, sometimes it is necessary for the adapter to get a callback when it's been assigned to an index. In particular, one example of this: because our index can be configured with any CRS, our vector data adapter assigns the feature type's default CRS to be the same as the index.
The raster adapter also uses IndexDependentDataAdapter to convert the incoming arbitrarily sized image into tiles that match the grid of the index.
Yep! And I have a custom adapter, and my question is more like: how do I generate the dataId properly? I looked into IndexDependentDataAdapter to use the index to generate the partition key manually (something similar to what is done in the RasterAdapter); is that a correct approach? My idea was to get the partition key + sort key from the index and use that as the dataId; or is that a bad idea?
I saw that in other adapters you either use the featureId or create a string based on the data's unique parameters; is this something I should aim for?
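(A sketch of the attribute-based approach mentioned just above — deriving the dataId from the record's own unique fields rather than from the index's partition/sort keys, so the id stays stable even if the data is re-indexed. The attribute names are hypothetical, and this is plain Java, not GeoWave's adapter API:)

```java
import java.nio.charset.StandardCharsets;

// Sketch: build a stable dataId from attributes that uniquely identify the
// record, independent of any index. "sensorId" and the timestamp are
// hypothetical attributes standing in for whatever makes an entry unique.
public class DataIdSketch {
    static byte[] dataId(String sensorId, long timestampMillis) {
        // Index-independent: re-ingesting the same record yields the same id,
        // which is what lets the store recognize it as the same entry.
        return (sensorId + "|" + timestampMillis)
            .getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(
            new String(dataId("sensor-42", 1700000000000L), StandardCharsets.UTF_8));
    }
}
```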
RowMergingDataAdapter is for injecting custom merge strategy logic (defaulted to "NoDataMergeStrategy", which tracks "no data" in the form of footprint boundaries and reserved no-data values; the last one written wins for "data", but it doesn't blanket-overwrite tiles in the case of no data)
new CustomNameIndex(
  XZHierarchicalIndexFactory.createFullIncrementalTieredStrategy(
    dimensions, // 4 dims
    Array[Int](
      options.getBias.getSpatialPrecision,
      options.getBias.getSpatialPrecision,
      options.getBias.getTemporalPrecision,
      options.getBias.getSpatialPrecision // just an example of a 4th-dim precision
    ),
    SFCType.HILBERT,
    options.getMaxDuplicates
  ),
  indexModel,
  combinedId
)