Emilio
@elahrvivaz
it just writes it to ES
then you can hook it up to grafana
James Srinivasan
@jrs53
and geoserver doesn't do that by default? (ITYM kibana)
Emilio
@elahrvivaz
kibana, right :)
no, it doesn't
James Srinivasan
@jrs53
gotcha
Emilio
@elahrvivaz
afaik just local files by default
James Srinivasan
@jrs53
the geoserver docs are very vague
it says it can log to a database...but not how
Emilio
@elahrvivaz
monitor hibernate community module?
James Srinivasan
@jrs53
ahh, that sounds like it would do the trick (if hibernate is your thing)
James Hughes
@jnh5y
Morning! Sorry to be late to the party, @jrs53.
Here's how to think about it. The GeoServer monitoring extension (https://docs.geoserver.org/latest/en/user/extensions/monitoring/index.html) is an interface which hooks into the GeoServer request lifecycle. That interface needs some kind of implementation to write out the logged info
So one has to install the GeoServer monitoring extension and an implementation like the Hibernate one (or the GeoMesa-GeoServer Elasticsearch extension)
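A minimal sketch of that configuration, assuming the documented monitoring/monitor.properties file under the GeoServer data directory and the Hibernate storage backend (property names are from the docs but worth checking against your GeoServer version):

# GEOSERVER_DATA_DIR/monitoring/monitor.properties
# keep full request history rather than only currently-executing requests
mode=history
# persist requests via the Hibernate community module instead of in-memory storage
storage=hibernate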
James Srinivasan
@jrs53
aye, looks super handy
feel like putting that in a repo README.md?
James Hughes
@jnh5y
Yeah, we should say a little more about it.
James Srinivasan
@jrs53
is the intention auditing, performance monitoring, or something generic?
James Hughes
@jnh5y
ooooo.... that's a good question
the GeoServer monitoring extension kinda lets you do either....
If auditing may mean blocking queries, you'll need something a little different, but that's totally manageable with GeoServer.
GeoServer is rather extensible....
For performance monitoring, the metrics from the monitoring extension should give you some idea of what's going on
As we were hooking this up, I tried to think through ideas which would let one hook up info from the GeoMesa query planner and/or query execution.....
there are possibilities, which likely involve some wacky interfaces to avoid license trouble and a whole bunch of thread locals ;)
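To make the "blocking queries" idea concrete, here is a hypothetical sketch (not from this discussion; it assumes GeoServer's dispatcher callback API, so verify the class and method names against your version) of a callback that vetoes requests before they run:

import org.geoserver.ows.{AbstractDispatcherCallback, Request}
import org.geoserver.platform.{Operation, ServiceException}

// Hypothetical auditing hook: registered as a Spring bean, GeoServer's dispatcher
// invokes it for every OWS request, and throwing here blocks the request.
class AuditingCallback extends AbstractDispatcherCallback {
  override def operationDispatched(request: Request, operation: Operation): Operation = {
    if (request.getHttpRequest.getRemoteUser == null) {
      throw new ServiceException("anonymous requests are not allowed")
    }
    operation // let everything else through
  }
}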
James Srinivasan
@jrs53
that would be super neat, maybe to propose additional indices
using magic AI pixie dust
James Hughes
@jnh5y
meh, I'm either gonna show that I'm old school or ignorant.... but I mean, if you understand what your query patterns are, the indices to use are kinda obvious....
James Srinivasan
@jrs53
obvious...to you
plus AI makes everything better
James Hughes
@jnh5y
yeah.... I think one could make an application that looked at a sample of data (say 1 million rows) and asked you 5 questions to figure out what indices you ought to use
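A toy sketch of that idea (purely hypothetical, no real GeoMesa API involved): tally the attributes seen in observed query predicates and suggest indexing the frequent ones:

// Hypothetical helper: given the set of attributes referenced by each observed
// query, suggest an attribute index for anything used in at least half of them.
object IndexSuggester {
  def suggest(queries: Seq[Set[String]], threshold: Double = 0.5): Seq[String] = {
    val total = queries.size.toDouble
    queries.flatten
      .groupBy(identity)
      .collect { case (attr, hits) if hits.size / total >= threshold => attr }
      .toSeq
      .sorted
  }
}

// IndexSuggester.suggest(Seq(Set("dtg", "geom"), Set("name"), Set("name", "geom")))
// returns Seq("geom", "name")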
Hani Ramadhan
@haniramadhan
Hello, I am sorry to ask about what may be a very beginner mistake. I am trying to use the GeoMesa code from GitHub, and I tried to run one of the tests using IntelliJ. It returns some errors such as:
  1. package org.locationtech.geomesa.utils.conf does not exist
  2. package org.locationtech.geomesa.utils.geotools.SimpleFeatureTypes.Config does not exist
  3. package WKTUtils$ does not exist
    Can anyone help?
James Srinivasan
@jrs53
Have you run maven inside IntelliJ to make sure all the dependencies etc are fetched?
I'm also guessing some tests will run fine in Maven, but possibly not in IntelliJ - though that is pretty much my workflow
Which test are you trying to run? And what platform?
Hani Ramadhan
@haniramadhan
AttributeIndexTest in org.locationtech.geomesa.index.index. My platform is Windows 10.
James Srinivasan
@jrs53
Some of the tests def won't run on Windows
Emilio
@elahrvivaz
did you import the project into intellij as a maven type?
as James says, windows isn't supported, so ymmv
you can always run tests through mvn test -Dtest=foo
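For example, to run the test above from the command line (geomesa-index-api is a guess at the module containing that test; adjust as needed):

mvn test -Dtest=AttributeIndexTest -pl geomesa-index-api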
MyTen
@letui
when will geomesa support other CRSs? Right now it only supports EPSG:4326.
MyTen
@letui
@elahrvivaz
Hani Ramadhan
@haniramadhan
I think I solved my issue. I needed to use the Java 8 JDK.
gispathfinder
@zyxgis
how do I develop with geomesa-spark using python3?
JB-data
@JB-data
Hi,
for a kerberized cluster with HBase, I see in the docs that I need to add a property to the HBase config for the keytab and principal. For the keytab location it would be:

<property>
     <name>hbase.geomesa.keytab</name>
     <value>/etc/security/keytabs/hbase.geomesa.keytab</value>
</property>

If I run a spark job that will use geomesa, does this mean that I need to make sure each worker node of my cluster has the keytab in this location?
I would expect that if I start my spark shell with the option

pyspark --principal hbase.geomesa  --keytab /locationofkeytab_on_node_thatstartsjob/hbase.geomesa.keytab

that it should recognize the keytab automatically and ship it to the nodes where it executes the jobs?
But that doesn't seem to be the case.

James Srinivasan
@jrs53
I don't know how hbase works, but with geomesa spark on accumulo you have to create a delegation token and send that to the executors using the right config channels (this lives on the geomesa side, not user code)
you might get away with having a common location for the keytab per executor
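For what it's worth, the GeoMesa HBase docs pair that keytab property with a principal; a sketch of both together (the principal value here is a placeholder):

<property>
     <name>hbase.geomesa.principal</name>
     <value>hbase.geomesa@EXAMPLE.COM</value>
</property>
<property>
     <name>hbase.geomesa.keytab</name>
     <value>/etc/security/keytabs/hbase.geomesa.keytab</value>
</property>

With those set, each executor would need the keytab readable at that same absolute path - the "common location per executor" idea above.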
JB-data
@JB-data
I know in the past I was told that a mapreduce ingest would not work on a kerberized cluster.
I managed to run an ingest command with a large file that is on my local file system
I notice that if I run ingest and specify an hdfs location that contains many csvs, the ingest starts a mapreduce job, and this fails.
Is there any other way of running this without mapreduce? Probably not?
I guess the only option is to copy all the files into one big file locally and then ingest from the local file system.
wiosen
@wiosen
Hello, I want to ingest SHP data using 'geomesa convert', but my DBF database is encoded in Windows-1250 and 'geomesa convert' uses UTF-8 by default. Is it possible to ingest data in a different encoding than UTF-8?
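One possible workaround, sketched with plain GeoTools rather than geomesa convert (the "charset" parameter is the GeoTools shapefile store's DBF encoding option - treat that as an assumption to verify against your GeoTools version): open the shapefile with the DBF charset set, then ingest the resulting features as usual:

import java.io.File
import org.geotools.data.DataStoreFinder
import scala.collection.JavaConverters._

// Hypothetical sketch: open the shapefile with a non-UTF-8 DBF attribute encoding
val params = Map[String, java.io.Serializable](
  "url"     -> new File("/data/roads.shp").toURI.toURL,
  "charset" -> "Windows-1250"
)
val store = DataStoreFinder.getDataStore(params.asJava)
// store.getFeatureSource(...) should now decode DBF attributes as Windows-1250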