Bringing the scalability of distributed computing to modern geospatial software.
geowave store add ...
The primary required parameter is "storename", which is just an arbitrary name you give to that connection configuration so that you can reference it in any subsequent command without needing all the other options. For GeoWaveRDDLoader you need a DataStorePluginOptions, which can be instantiated with any data store's required options, so in your case you can use new DataStorePluginOptions(<HBaseRequiredOptions>) to get that. I'm not sure where you're seeing "storename" come from in GeoWaveRDDLoader, but hopefully that clarifies it.
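For example, roughly (the ZooKeeper address, namespace, and package paths here are assumptions that vary by GeoWave version and deployment):

import org.apache.spark.SparkContext
import org.locationtech.geowave.analytic.spark.{GeoWaveRDDLoader, RDDOptions}
import org.locationtech.geowave.core.store.cli.remote.options.DataStorePluginOptions
import org.locationtech.geowave.datastore.hbase.cli.config.{HBaseOptions, HBaseRequiredOptions}

// Wrap the HBase store's required options so GeoWaveRDDLoader can use them
// (assumes an existing SparkContext named sc).
val requiredOptions = new HBaseRequiredOptions("zk-host:2181", "myNamespace", new HBaseOptions())
val pluginOptions = new DataStorePluginOptions(requiredOptions)
val rdd = GeoWaveRDDLoader.loadRDD(sc, pluginOptions, new RDDOptions())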
<guava.version>12.0.1</guava.version>
after this line and rebuild that geowave-hbase jar
this.storeOptions = new HBaseRequiredOptions(zkAddress, geowaveNamespace, extraOpts);
and the HBaseOptions class doesn't seem to have anywhere to specify that in its methods
RDDOptions.setQuery() would allow you to choose an index (and look at the DataStore API and examples for how to write data to an index or indices of your choice).
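Continuing the sketch above, roughly (the exact query-builder API depends on your GeoWave version, and the index name is just an example):

import org.locationtech.geowave.core.store.api.QueryBuilder
import org.opengis.feature.simple.SimpleFeature

// Constrain the Spark load to a specific index by attaching a query.
val query = QueryBuilder.newBuilder[SimpleFeature]().indexName("detectionIndex").build()
val rddOptions = new RDDOptions()
rddOptions.setQuery(query)
val rdd = GeoWaveRDDLoader.loadRDD(sc, pluginOptions, rddOptions)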
Okay, so I thought this was working, but it looks like it can never find the index name that I specify:
2019-10-03 04:27:11 WARN AbstractGeoWavePersistence:232 - Object 'detectionIndex' not found
even though it's listed if I go into the HBase shell and list tables. It always defaults to our entityActivityIndex, so when we attempt to run a Spark job intentionally reading entityActivity it works as intended, but not if we're trying to read another table.
We saw an "adapters" error before and don't know where we can find a list of them for our store.
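For reference, the type names (adapters) registered in a store can be listed from the CLI:

geowave store listtypes <storename>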
geowave gs layer add ${GEOWAVE_NAMESPACE}-store -ws cite
geowave gs ds add ${GEOWAVE_NAMESPACE}-store -ws cite
Hi, I'm a newbie to GeoWave, and the command 'geowave store listtypes <storename>' gives me the following output with no types:
"
05 Nov 19:40:39 WARN [core.NettyUtil] - Found Netty's native epoll
transport, but not running on linux-based operating system. Using NIO
instead.
05 Nov 19:40:40 WARN [core.Cluster] - You listed
localhost/0:0:0:0:0:0:0:1:9042 in your contact points, but it wasn't found
in the control host's system.peers at startup
Available types:
"
The following are the steps I followed to ingest data:
1. geowave store add teststore3 -t cassandra --contactPoints localhost --gwNamespace test3
2. geowave index add -t spatial teststore3 testindex3
3. geowave ingest localtogw sample.csv teststore3 testindex3 -f geotools-vector
Here sample.csv contains columns lat, long. I can see a keyspace 'test3' created in Cassandra with one table named 'index_geowave_metadata'. But when I do DBScan with the below command:
'geowave analytic dbscan -cmi 5 -cms 10 -emn 2 -emx 6 -pmd 1000 -orc 4
-hdfs localhost:9870 -jobtracker localhost:8088 -hdfsbase /test_dir
teststore3 --query.typeNames '
it gives me an error saying 'Expected a value after parameter --query.typeNames'. What should I do now? Can anyone tell me where I'm going wrong?
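For what it's worth, the error reads literally: --query.typeNames was given no value. Presumably it wants the name of an ingested type, e.g. hypothetically:

geowave analytic dbscan -cmi 5 -cms 10 -emn 2 -emx 6 -pmd 1000 -orc 4 -hdfs localhost:9870 -jobtracker localhost:8088 -hdfsbase /test_dir teststore3 --query.typeNames <typeName>

where <typeName> would come from 'geowave store listtypes teststore3', though the empty listtypes output above suggests the ingest may not have registered any type yet.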
java.lang.NullPointerException:
[info] at org.locationtech.geowave.core.store.adapter.InternalDataAdapterWrapper.encode(InternalDataAdapterWrapper.java:70)
toBinary and fromBinary work okay, but it looks like it doesn't perform a proper serialization / deserialization of the adapter(?)
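As an aside, a minimal sketch of the Persistable round-trip contract (the class name and field are hypothetical, and the package path varies by GeoWave version): anything not written out in toBinary comes back uninitialized after fromBinary, which can surface later as an NPE in encode.

import java.nio.ByteBuffer
import org.locationtech.geowave.core.index.persist.Persistable

// Hypothetical example: every field the object still needs after
// deserialization must round-trip through toBinary/fromBinary.
class MyPersistable(private var depth: Double = 0.0) extends Persistable {
  override def toBinary(): Array[Byte] =
    ByteBuffer.allocate(java.lang.Double.BYTES).putDouble(depth).array()

  override def fromBinary(bytes: Array[Byte]): Unit = {
    depth = ByteBuffer.wrap(bytes).getDouble
  }
}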
Yo, it's me again!
I created a custom field and would like to query by it:
val DIMENSIONS = Array(
  new LongitudeDefinition(),
  new LatitudeDefinition(true),
  new TimeDefinition(Unit.YEAR),
  new MyDefinition()
)
// …
new CustomNameIndex( … )
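(An aside for readers: MyDefinition could be imagined as a bounded numeric dimension; a minimal sketch, assuming GeoWave's BasicDimensionDefinition and made-up bounds:)

import org.locationtech.geowave.core.index.dimension.BasicDimensionDefinition

// Hypothetical: a custom bounded dimension, e.g. a value in [0.0, 11000.0].
class MyDefinition extends BasicDimensionDefinition(0.0, 11000.0)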
And the query constraints look like:
val geoConstraints = GeometryUtils.basicConstraintsFromGeometry(queryGeometry)
val temporalConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(startTime.getTime(), endTime.getTime()), false),
    classOf[TimeDefinition],
    classOf[SimpleTimeDefinition]
  )
)
val myConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(i, i), false),
    classOf[MyDefinition]
  )
)
val cons = geoConstraints.merge(temporalConstraints).merge(myConstraints)
When I define myConstraints like this (the version above with classOf[MyDefinition]), it looks like it doesn't filter by my custom definition; I noticed that it goes into UnboundedHilbertSFCOperations and computes a normalized value, etc.
But if I use:
val myConstraints = new ConstraintsByClass(
  new ConstraintSet(
    new ConstraintData(new NumericRange(depth, depth), false),
    classOf[NumericDimensionDefinition]
  )
)
filtering works fast and correctly O:
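(An aside: one way to hand the merged constraints to a query, sketched under the assumption of GeoWave 1.x class names; the index name is a placeholder:)

import org.locationtech.geowave.core.store.api.QueryBuilder
import org.locationtech.geowave.core.store.query.constraints.BasicQueryByClass
import org.opengis.feature.simple.SimpleFeature

// Wrap the merged ConstraintsByClass as a query constraint and target the
// custom index by name ("myCustomIndex" is a placeholder).
val query = QueryBuilder.newBuilder[SimpleFeature]()
  .indexName("myCustomIndex")
  .constraints(new BasicQueryByClass(cons))
  .build()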
index.encodeKey(entry) //> would be some key here
I'm also wondering what happens by default if there are duplicates (by key) in the database?
Hm, also, what is dataId in adapters? How is it used, and how does it differ from the dimensions that are used for building an index? And I'm wondering how the actual indexing information is stored in Cassandra?
// sorry for so many questions, just diving into the query / indexing mechanism. And yep, I saw the Key Structure picture, but my Cassandra table actually looks like this only:
(
  partition blob,
  adapter_id smallint,
  sort blob,
  data_id blob,
  vis blob,
  nano_time blob,
  field_mask blob,
  num_duplicates tinyint,
  value blob,
  PRIMARY KEY (partition, adapter_id, sort, data_id, vis, nano_time)
)