James Hughes
@jnh5y
I think there's a middle option
and it'd depend quite a bit on use cases, etc
if you have an entity id, labeling it as high cardinality is sane
if you have an index on a low cardinality column, you may want to consider if you really want the index or not
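A minimal sketch of declaring those cardinality hints in a GeoMesa SFT spec string (the 'ais', 'mmsi', and 'status' names here are hypothetical):

import org.locationtech.geomesa.utils.interop.SimpleFeatureTypes;
import org.opengis.feature.simple.SimpleFeatureType;

// 'mmsi' is an entity id, so index it and hint it as high cardinality;
// the low-cardinality 'status' attribute is deliberately left unindexed
SimpleFeatureType sft = SimpleFeatureTypes.createType("ais",
    "mmsi:String:index=true:cardinality=high,"
    + "status:Integer,"
    + "dtg:Date,"
    + "*geom:Point:srid=4326");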
James Srinivasan
@jrs53
yeah, will depend on query - esp thinking binary attrs
James Hughes
@jnh5y
I would almost never index a binary attribute
James Srinivasan
@jrs53
maybe if very skewed
Emilio
@elahrvivaz
basically high-cardinality attribute range queries will be preferred over z3, while regular or low-cardinality ones won't
if you have a join index, there's an additional factor that lowers the preference
James Srinivasan
@jrs53
wasn't going to bother with join indices
does the query planner take account of the z3 range? e.g. let's say I have AIS data. Searching globally for a specific vessel might be more efficient using the attr index, whereas searching in a small geographic area and/or time range for that same vessel may be better using z3
Emilio
@elahrvivaz
it does not... in your example, by default the attribute index has a secondary z-index on it, so it should be able to leverage that and do better than the z3
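A sketch of the kind of query that can be served by the attribute index plus its secondary z-index (the 'ais', 'mmsi', 'geom', and 'dtg' names are made up for illustration):

import org.geotools.data.Query;
import org.geotools.filter.text.cql2.CQLException;
import org.geotools.filter.text.ecql.ECQL;

public class VesselQuery {
    // equality on the indexed 'mmsi' attribute narrows to one vessel;
    // the bbox/DURING predicates can then be applied via the secondary z-index
    public static Query vesselInBox() throws CQLException {
        return new Query("ais", ECQL.toFilter(
            "mmsi = '366999712'"
            + " AND bbox(geom, -5.0, 50.0, 0.0, 55.0)"
            + " AND dtg DURING 2020-01-01T00:00:00Z/2020-01-02T00:00:00Z"));
    }
}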
James Srinivasan
@jrs53
ah ok, can't remember if I added kerberos support for creating indices from the CLI, which is why I'm keen to get it right sooner rather than later
James Hughes
@jnh5y
there's always small/medium scale performance testing and MapReduce (re)ingests!
James Srinivasan
@jrs53
don't think I support Kerberos MR ingest either :-P
gispathfinder
@zyxgis
pureconfig 0.11.0 in GeoMesa 3.0.0 conflicts with pureconfig 0.13.0 in GeoTrellis 3.4.0. How should I handle this problem?
Nithin
@Nithin77210903_twitter

hi guys, I was trying to query using hbase as the datastore.
I'm only getting features when we use this filter with the whole world as the bbox:
Filter spatialFilter = ff.bbox("geom", -180, -90, 180, 90, "EPSG:4326");

when I change the bbox values to match the inserted point object, it's not retrieving anything.
please help here

Emilio
@elahrvivaz
are you seeing any errors? usually if you can retrieve features without any filter (whole world bbox gets optimized out), then it points to an issue with the geomesa-distributed-runtime jar installation
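A quick way to sanity-check that, sketched against the generic GeoTools DataStore API (the 'mySft' and 'geom' names, and the bbox values, are placeholders):

import java.io.IOException;
import org.geotools.data.DataStore;
import org.geotools.data.simple.SimpleFeatureSource;
import org.geotools.factory.CommonFactoryFinder;
import org.opengis.filter.Filter;
import org.opengis.filter.FilterFactory2;

public class BboxSanityCheck {
    // if the full scan returns features but a small bbox around a known
    // point returns none (with no errors logged), suspect the
    // geomesa-distributed-runtime jar installation on the region servers
    public static void check(DataStore ds) throws IOException {
        FilterFactory2 ff = CommonFactoryFinder.getFilterFactory2();
        SimpleFeatureSource source = ds.getFeatureSource("mySft");
        int all = source.getFeatures(Filter.INCLUDE).size();
        int boxed = source.getFeatures(
            ff.bbox("geom", 10.0, 20.0, 11.0, 21.0, "EPSG:4326")).size();
        System.out.println("all=" + all + ", boxed=" + boxed);
    }
}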
Jayashree
@sjayashree01
Hey everyone, quick question: Is there a specific reason why raster support is getting deprecated in GeoMesa?
Emilio
@elahrvivaz
lack of anyone willing to maintain it, mostly
James Hughes
@jnh5y
Also, there are better projects like GeoTrellis which also support rasters in Accumulo (as well as other backends)
Jayashree
@sjayashree01
Sounds good, thanks Emilio and James for clarifying.
James Hughes
@jnh5y
Sure. Were you using GeoMesa's raster support? Or are you looking for a project to use?
Jayashree
@sjayashree01
@jnh5y Honestly, just started exploring GeoMesa and wanted to see the types of vector and raster data and operations that were being supported, when I came across the documentation note about the deprecation.
I'll look into GeoTrellis.
James Hughes
@jnh5y
Sounds good. Yeah, GeoMesa is focused on vector data. GeoTrellis has done great work with raster data. The two types are handled somewhat differently and in my opinion, there's a benefit from having separate projects focused on each
Jayashree
@sjayashree01
Absolutely, thank you for all the help.
cwd293
@cwd293
Hi everyone, quick question: I'm ingesting a CSV file with the GeoMesa command line. After the command finishes, the related tables exist in HBase, but they contain 0 rows of data. What's gone wrong?
cwd293
@cwd293
geomesa-hbase ingest -c cwd -f gpp -s gpp.sft -C gpp.convert rxxx.csv
INFO Creating schema 'gpp'
INFO Running ingestion in local mode
INFO Ingesting 1 file with 1 thread
[============================================================] 100% complete 0 ingested 2 failed in 00:00:01
INFO Local ingestion complete in 00:00:01
INFO Ingested 0 features and failed to ingest 2 features for file: /root/rxxx.csv
Does anyone know what's going on here?
pinis123
@pinis123
@cwd293 Buddy, the regulars in here are all foreign programmers, are you serious?
The command line shows 0 records ingested and 2 failed, so it's expected that there's no data in your HBase tables.
James Hughes
@jnh5y
@cwd293 Hi, I'm using Google Translate. Have you read the log file? That message doesn't provide any information about the error.
loridigia
@loridigia
Sorry guys, does geomesa support a truncate operation from the API?
Emilio
@elahrvivaz
yes, if you do something like ds.getFeatureStore().removeFeatures(Filter.INCLUDE) that should be optimized in most cases to just truncate the data
which back-end are you using?
loridigia
@loridigia
Hbase
Emilio
@elahrvivaz
you could also just delete the schema and re-create it, which will drop and re-create the tables
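Both options, sketched against the generic GeoTools DataStore API:

import java.io.IOException;
import org.geotools.data.DataStore;
import org.geotools.data.simple.SimpleFeatureStore;
import org.opengis.feature.simple.SimpleFeatureType;
import org.opengis.filter.Filter;

public class TruncateSketch {
    // option 1: delete everything; GeoMesa can optimize a Filter.INCLUDE
    // delete into a table truncate on most back-ends
    public static void removeAll(DataStore ds, String typeName) throws IOException {
        SimpleFeatureStore store = (SimpleFeatureStore) ds.getFeatureSource(typeName);
        store.removeFeatures(Filter.INCLUDE);
    }

    // option 2: drop the schema and re-create it, which drops and
    // re-creates the underlying tables
    public static void recreate(DataStore ds, String typeName) throws IOException {
        SimpleFeatureType sft = ds.getSchema(typeName);
        ds.removeSchema(typeName);
        ds.createSchema(sft);
    }
}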
loridigia
@loridigia
Oh great, thanks!
jg895512
@jg895512
so.. I'm finally back to spark streaming. Has there been any progress along that journey (spark streaming of GeoMesa Kafka Data Store) since then? No sense in fighting with my code from early July on this if someone else has made (any) progress...
cwd293
@cwd293
@pinis123 So why did that happen? What's the problem? Please let me know, thanks.
@jnh5y geomesa-hbase ingest -c cwd -f gpp -s gpp.sft -C gpp.convert rxxx.csv
INFO Creating schema 'gpp'
INFO Running ingestion in local mode
INFO Ingesting 1 file with 1 thread
[============================================================] 100% complete 0 ingested 2 failed in 00:00:01
INFO Local ingestion complete in 00:00:01
INFO Ingested 0 features and failed to ingest 2 features for file: /root/rxxx.csv
@jnh5y Do you know what the problem is here?
James Hughes
@jnh5y
@jg895512 I don't think so. Last time anyone thought about Spark Streaming was a long time ago; your work may be the most recent!
@cwd293 there is likely an error with the converter. If you look in logs/geomesa.log, you may find more info
you might try switching to 'raise-errors' mode to help debug the converter: https://www.geomesa.org/documentation/stable/user/convert/parsing_and_validation.html#error-mode
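For example, a sketch of what that looks like in the converter definition (the rest of the gpp.convert fields are omitted):

geomesa.converters.gpp = {
  type = "delimited-text"
  format = "CSV"
  options = {
    error-mode = "raise-errors" // the default mode skips bad records
  }
  // id-field and fields elided
}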
jg895512
@jg895512
@jnh5y thanks, i'll keep plugging away, expect more questions from me here. ;-)
pinis123
@pinis123
@cwd293 You might as well take a screenshot of those two records and post it here so we can take a look.