DimFIlippas
@DimFIlippas
My redis version is 3.2.12. I will upgrade it and check again. Thanks
DimFIlippas
@DimFIlippas
It worked with the latest redis version (v5.0.7). Thanks
Emilio
@elahrvivaz
:+1:
phemmmie
@phemmmie
Hello everyone, I am in need of a freelance GeoMesa developer to kickstart a location tech application
DimFIlippas
@DimFIlippas
Hello again, I ingested one CSV file with 1306910 features: 1) with the FileSystem data store, local ingestion completed in 00:02:00; 2) with Redis, local ingestion completed in 00:02:37. Both with 1 thread. Is this the expected result? Is it normal for FileSystem to be faster than Redis?
John
@Canadianboy122_twitter
Hello, I successfully ingested data through geomesa-accumulo ... But now I would like to ingest data externally. I would like to ingest CSV, for example, but not from the command line in GeoMesa. I would like to use a Python client or something else if possible. I also tried to ingest with the GeoServer REST API but had no success. I need to ingest into a custom created feature and data store in GeoMesa. Thanks for any help.
Emilio
@elahrvivaz
@DimFIlippas it could be due to the fact that Redis creates multiple indices by default, where the FSDS only creates one. you could try just enabling a single index to get an apples-to-apples comparison: https://www.geomesa.org/documentation/user/datastores/index_config.html#customizing-index-creation
is your Redis instance local or remote? did you ingest to a local filesystem for the FSDS? those could also affect things
I would expect Redis to be faster on queries, but I haven't really compared ingest speeds before
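For reference, a single-index schema can be declared up front via the `geomesa.indices.enabled` user data key described in the docs linked above; a minimal sketch, with assumed type and attribute names:

```scala
import org.locationtech.geomesa.utils.geotools.SimpleFeatureTypes

// restrict the schema to just the Z3 index so Redis and the FSDS each
// maintain a single index; the type and attribute names are assumptions
val sft = SimpleFeatureTypes.createType("example",
  "name:String,dtg:Date,*geom:Point:srid=4326;geomesa.indices.enabled='z3'")

// create the schema in either store before re-running the ingest:
// dataStore.createSchema(sft)
```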
@Canadianboy122_twitter did you try WFS-T through GeoServer?
GeoMesa is a scala/java app, so you might be able to use Py4J
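A rough sketch of the Py4J route: run a small gateway in the same JVM as the GeoMesa data store, then drive it from Python via `py4j.java_gateway.JavaGateway`. The entry point and connection parameters below are placeholders for whichever store you use:

```scala
import java.util.{HashMap => JMap}
import org.geotools.data.{DataStore, DataStoreFinder}
import py4j.GatewayServer

// entry point exposed to Python; the parameter map is a placeholder and
// would need the real connection parameters for your store
object IngestGateway {
  def dataStore(): DataStore = {
    val params = new JMap[String, java.io.Serializable]()
    params.put("accumulo.instance.id", "my-instance") // placeholder
    DataStoreFinder.getDataStore(params)
  }

  def main(args: Array[String]): Unit = {
    // Python side: JavaGateway().entry_point.dataStore()
    new GatewayServer(IngestGateway).start()
  }
}
```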
Emilio
@elahrvivaz
We also provide a REST API for use with geojson that you may be able to integrate with more easily: https://www.geomesa.org/documentation/user/geojson.html
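For example, something along these lines; the endpoint path and index name here are assumptions, so check the linked docs for the actual routes:

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

// attributes go in the 'properties' element of the GeoJSON feature
val feature =
  """{
    |  "type": "Feature",
    |  "geometry": { "type": "Point", "coordinates": [10.0, 20.0] },
    |  "properties": { "name": "example", "dtg": "2020-01-01T00:00:00Z" }
    |}""".stripMargin

// hypothetical endpoint; see the geojson docs for the real routes
val request = HttpRequest.newBuilder()
  .uri(URI.create("http://localhost:8080/geomesa/geojson/index/my-index/features"))
  .header("Content-Type", "application/json")
  .POST(HttpRequest.BodyPublishers.ofString(feature))
  .build()

val response = HttpClient.newHttpClient()
  .send(request, HttpResponse.BodyHandlers.ofString())
println(response.body()) // a successful insert should return the feature ids
```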
John
@Canadianboy122_twitter
yes, I saw the REST API but didn't find a way to ingest into my custom feature. I tried a few ways but nothing passed, it only returned [] (it should return an index if successful)
and about WFS-T, I didn't find exactly how to do it, or where to pass the CSV for ingest...
Emilio
@elahrvivaz
you would have to convert your feature to geojson and put all the attributes in the 'properties' element
WFS-T expects GML I think, so you'd have to convert your CSV to that
yeah that looks reasonable
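As a rough illustration of the WFS-T route Emilio mentions, each CSV row would become a GML feature inside a `wfs:Insert`, posted to GeoServer's WFS endpoint; the `ex` namespace, type name, and attribute names below are all assumptions:

```scala
// one CSV row converted to a WFS-T Insert; namespace, type name and
// attribute names are assumptions (mind the axis order for EPSG:4326)
val transaction =
  """<wfs:Transaction service="WFS" version="1.1.0"
    |    xmlns:wfs="http://www.opengis.net/wfs"
    |    xmlns:gml="http://www.opengis.net/gml"
    |    xmlns:ex="http://example.com/ex">
    |  <wfs:Insert>
    |    <ex:myFeature>
    |      <ex:name>example</ex:name>
    |      <ex:dtg>2020-01-01T00:00:00Z</ex:dtg>
    |      <ex:geom>
    |        <gml:Point srsName="EPSG:4326"><gml:pos>20.0 10.0</gml:pos></gml:Point>
    |      </ex:geom>
    |    </ex:myFeature>
    |  </wfs:Insert>
    |</wfs:Transaction>""".stripMargin

// POST this to http://localhost:8080/geoserver/wfs with
// Content-Type: text/xml, e.g. using the HttpClient pattern above
```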
John
@Canadianboy122_twitter
yeah, I tried that but no luck
here is the schema and converter
Emilio
@elahrvivaz
you don't need a converter if you're using the geojson rest api
John
@Canadianboy122_twitter
and can I use a custom schema?
or can't I?
Emilio
@elahrvivaz
you can put whatever you want in the properties
Emilio
@elahrvivaz
no, you'd have to create a new one through the rest api
John
@Canadianboy122_twitter
yeah, I know about that, but I had a problem with reading the data. OK, nvm, I will try again. And about WFS-T, where can I add my features? I can't find any field for adding a feature list or anything like that
Emilio
@elahrvivaz
i'm not sure exactly, you'd have to check the geoserver docs for that
John
@Canadianboy122_twitter
Ok thanks will check. Thank you very much.
James Srinivasan
@jrs53
We're getting this on 2.4.0:
[xxx@hdp-client ~]$ geomesa-accumulo explain -u xxx@XXX.LOCAL --keytab ~/keytabs/xxx  -i hdp-accumulo-instance -c geomesa.noaaWeatherBuoy  -f weatherBuoy -q "dts AFTER 2019-01-01T00:00:00Z"
ERROR Unexpected range type LowerBoundedRange(Z3IndexKey(2557,0))
java.lang.IllegalArgumentException: Unexpected range type LowerBoundedRange(Z3IndexKey(2557,0))
    at org.locationtech.geomesa.index.index.z3.legacy.Z3IndexV5$Z3IndexKeySpaceV5$$anonfun$getRangeBytes$2.apply(Z3IndexV5.scala:122)
    at org.locationtech.geomesa.index.index.z3.legacy.Z3IndexV5$Z3IndexKeySpaceV5$$anonfun$getRangeBytes$2.apply(Z3IndexV5.scala:115)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:396)
    at scala.collection.Iterator$class.toStream(Iterator.scala:1180)
    at scala.collection.AbstractIterator.toStream(Iterator.scala:1194)
    at scala.collection.Iterator$$anonfun$toStream$1.apply(Iterator.scala:1180)
    at scala.collection.Iterator$$anonfun$toStream$1.apply(Iterator.scala:1180)
    at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
For a DURING query, it works:
[xxx@hdp-client ~]$ geomesa-accumulo explain -u xxx@XXX.LOCAL --keytab ~/keytabs/xxx  -i hdp-accumulo-instance -c geomesa.noaaWeatherBuoy  -f weatherBuoy -q "dts DURING 2019-01-01T00:00:00Z/2021-01-01T00:00:00Z"
Planning 'weatherBuoy' dts DURING 2019-01-01T00:00:00+00:00/2021-01-01T00:00:00+00:00
  Original filter: dts DURING 2019-01-01T00:00:00+00:00/2021-01-01T00:00:00+00:00
  Hints: bin[false] arrow[false] density[false] stats[false] sampling[none]
  Sort: none
  Transforms: none
  Strategy selection:
    Query processing took 29ms for 1 options
    Filter plan: FilterPlan[Z3IndexV5(geom,dts)[dts DURING 2019-01-01T00:00:00+00:00/2021-01-01T00:00:00+00:00][None]]
    Strategy selection took 2ms for 1 options
  Strategy 1 of 1: Z3IndexV5(geom,dts)
    Strategy filter: Z3IndexV5(geom,dts)[dts DURING 2019-01-01T00:00:00+00:00/2021-01-01T00:00:00+00:00][None]
    Geometries: FilterValues(List(POLYGON ((-180 -90, 180 -90, 180 90, -180 90, -180 -90))),true,false)
    Intervals: FilterValues(List([2019-01-01T00:00:01Z,2020-12-31T23:59:59Z]),true,false)
    Plan: BatchScanPlan
      Tables: geomesa.noaaWeatherBuoy_z3_v5
      Column Families: F
      Ranges (688): [%01;%00;%0a;F%00;%00;%00;%00;%00;%00;%00;%00;::%01;%00;%0a;F%80;), [%01;%01;%0a;F%00;%00;%00;%00;%00;%00;%00;%00;::%01;%01;%0a;F%80;), [%01;%02;%0a;F%00;%00;%00;%00;%00;%00;%00;%00;::%01;%02;%0a;F%80;), [%01;%03;%0a;F%00;%00;%00;%00;%00;%00;%00;%00;::%01;%03;%0a;F%80;), [%01;%00;%0a;4%00;%00;%00;%00;%00;%00;%00;%00;::%01;%00;%0a;4%80;)
      Iterators (1):
        name:z3, priority:23, class:org.locationtech.geomesa.accumulo.iterators.Z3Iterator, properties:{zo=2, zxy=0:0:2097151:2097151, zt=1497969:2097151,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0:299589, epoch=2556:2661}
    Plan creation took 114ms
  Query planning took 268ms
Emilio
@elahrvivaz
@jrs53 looks like a bug... we added some code to handle open-ended queries with the z3 index, but looks like we didn't fix it in the older index impls
i guess really we should move query planning into the different index versions
James Srinivasan
@jrs53
yeah I remember the improvement, I guess if I update the index version it should be good?
glad we caught this in our testing
Emilio
@elahrvivaz
yeah, looks like things created with 2.3+ should work
i've got a fix up: locationtech/geomesa#2423
James Srinivasan
@jrs53
so now I can start bugging you for a 2.4.1 release ;-)
Emilio
@elahrvivaz
:D
James Srinivasan
@jrs53
what's the deal with libthrift nowadays? We are running on HDP which uses Accumulo 1.7. The docs say for GeoServer we have to use the 1.9 client libs, but also that for earlier versions of accumulo we have to downgrade the deps and rebuild. @jg895512 has found it all works fine for him on an unsecured cluster, but we have issues with our Kerberized cluster with user impersonation in Zeppelin
Emilio
@elahrvivaz
i think you only have to rebuild for the accumulo-spark-runtime jar, which bundles accumulo in it
for geoserver you should just be able to copy in your accumulo-client jars and whatever libthrift version goes with them
possibly there is some conflict or bug with kerberos and accumulo 1.7...
what error are you getting?
James Srinivasan
@jrs53
just thrift connections hanging
and GSS initiate failed, which is a catch-all Kerberos error
Emilio
@elahrvivaz
oh i see that note in the install docs about conflicting jars in geoserver... i don't remember what caused that though
i think that was due to geoserver updating the jars it ships with at some point
James Srinivasan
@jrs53
I'm also seeing some tserver libthrift errors every minute - will need some debugging grrr