henry000513
@henry000513
@HSLUCKY
I'm outside
HSLUCKY
@HSLUCKY
I dealt with the ingest problem. It was caused by the .prj file: I moved the file to another directory and it worked. I think it's a bug.
But the .dbf file still can't be ingested; it needs a converter.
mdoakes42
@mdoakes42
Is there a demo of stealth available?
All the videos here seem to be unavailable: http://www.ccri.com/case-studies/stealth/
James Hughes
@jnh5y
@mdoakes42 there is not a public demo of Stealth due to various restrictions.
If you PM me your email address, I'd be happy to set up a demo for you
mdoakes42
@mdoakes42
ok, Thanks Jim
Emilio
@elahrvivaz
@jnh5y any idea why those video links no longer work?
James Hughes
@jnh5y
on the link above? I do not
mdoakes42
@mdoakes42
I'm getting some strange behavior with NiFi loading. Could someone explain how that works, related to my issue below? If I stop and terminate my NiFi processor, restart it, and then load additional data, it seems to corrupt my table somehow. If I'm connected via GeoServer and viewing the points on a WMS map, all my features disappear (and my WFS count goes to zero as well). So it almost looks like a "new" table or something is being created. I need a static table that we load data into; stopping and starting NiFi or our loader processes shouldn't change anything or create new tables/indexes.
After I loaded my test table the first time, I had 6 records in "Test" and 99 in all of the test_ tables.
I then loaded an additional 25 records, and "Test" updated to 7 entries.
mdoakes42
@mdoakes42
test_test_attr_name_5fnew_geom_v8 and test_test_attr_name_geom_v8 both went to 123 records (less the 1 for the header),
but test_test_id_v4 and test_test_z2_geom_v5 stayed at 99,
and zero records are now showing up in GeoServer.
(I also dropped and re-created the data store in GeoServer, and still see zero.)
Emilio
@elahrvivaz
were there any errors in nifi? and/or have you enabled 'FeatureWriterCaching' on the processor?
HSLUCKY
@HSLUCKY

I found that geomesa-hbase stops in org.geotools:gt-metadata:23.0, at:
public static boolean clean(final ByteBuffer buffer) {
    // the issue was fixed in Java 9+, and Java 8 too from a given point, but testing
    // the minor version is annoying
    if (buffer == null
            || !buffer.isDirect()
            || SystemUtils.isJavaVersionAtLeast(JavaVersion.JAVA_9)) {
        return true;
    }

When I debugged it, I set buffer = null in memory and the program worked.
@elahrvivaz

What do I need to do to fix the problem?
HSLUCKY
@HSLUCKY
OK, I dealt with the problem. I removed SystemUtils.isJavaVersionAtLeast(JavaVersion.JAVA_9), modified the jar, and it works.
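For context on why version checks around the Java 8/9 boundary are fiddly: the JVM reports pre-9 versions as "1.x" (e.g. "1.8") but later versions as plain numbers ("9", "11"). A minimal sketch of parsing `java.specification.version` into a major version number; the helper below is purely illustrative, not GeoTools or commons-lang code:

```java
public class JavaMajorVersion {
    // Hypothetical helper: turn a "java.specification.version" string into a
    // major version number ("1.8" -> 8, "11" -> 11)
    static int major(String spec) {
        String[] parts = spec.split("\\.");
        int first = Integer.parseInt(parts[0]);
        // Pre-Java 9 runtimes report versions in the "1.x" form
        return first == 1 ? Integer.parseInt(parts[1]) : first;
    }

    public static void main(String[] args) {
        System.out.println(major("1.8"));
        System.out.println(major("11"));
    }
}
```

This is why a library compiled and tested against one version-string format can misclassify a runtime, which is what a check like `isJavaVersionAtLeast(JAVA_9)` is trying to hide behind an API.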
HSLUCKY
@HSLUCKY
Now I have another question. I used UTF-8 in the SFT schema config, but the code that reads the file to create the datastore uses ISO-8859-1. Is there a parameter to configure the charset? @elahrvivaz
kyzyer
@kyzyer
Is anyone deploying geomesa-hbase on Cloudera CDH 6.x?
HSLUCKY
@HSLUCKY
@kyzyer me
6.3.2
kyzyer
@kyzyer
Did you have any problems when running it? @HSLUCKY
HSLUCKY
@HSLUCKY
Not many @kyzyer. I have raised some issues and they have been resolved.
Now the last problem, with data storage, is an encoding problem @kyzyer. I don't know where to configure the charset for the dbf:
Charset dbfCharset = lookup(DBFCHARSET, params, Charset.class);
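To illustrate the symptom being described (a UTF-8 file read as ISO-8859-1): decoding UTF-8 bytes with the wrong charset mangles any non-ASCII text, while decoding with the right one round-trips cleanly. A self-contained sketch, unrelated to the GeoMesa dbf reader itself:

```java
import java.nio.charset.StandardCharsets;

public class CharsetRoundTrip {
    public static void main(String[] args) {
        // "\u5317\u4eac" is non-ASCII text ("Beijing"); encode it as UTF-8 bytes
        String original = "\u5317\u4eac";
        byte[] utf8 = original.getBytes(StandardCharsets.UTF_8);
        // Decoding those bytes as ISO-8859-1 produces six mangled characters,
        // which is what happens when a reader assumes the wrong default charset
        String wrong = new String(utf8, StandardCharsets.ISO_8859_1);
        // Decoding with the charset the data was written in recovers the text
        String right = new String(utf8, StandardCharsets.UTF_8);
        System.out.println(right.equals(original));
        System.out.println(wrong.equals(original));
    }
}
```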
kyzyer
@kyzyer
Wow, nice. Did you write installation documentation?
kyzyer
@kyzyer
Maybe. I don't have a standard installation process, and there are a lot of problems now; I'd like to get a document.
HSLUCKY
@HSLUCKY
@kyzyer I don't have a standard document, but you can tell me about your problems; maybe I can help you.
I just try and try.
Emilio
@elahrvivaz
@HSLUCKY we don't currently support setting the charset - a while ago a user opened a ticket to do so, if you're interested in working on it: https://geomesa.atlassian.net/browse/GEOMESA-2679
HSLUCKY
@HSLUCKY
@elahrvivaz thanks
JB-data
@JB-data

I'm on an HBase cluster that is kerberized. I only have a test user for this cluster, for which I can kinit and have a keytab.
Can I use this user to execute geomesa-hbase ingest commands?
I assume the user name (hbaseGeomesa) here https://www.geomesa.org/documentation/stable/user/hbase/kerberos.html#development- can be modified into:

<property>
  <name>hbase.geomesa.principal</name>
  <value>mytestuser/_HOST@machineName</value>
</property>

<property>
  <name>hbase.geomesa.keytab</name>
  <value>/etc/security/keytabs/mytestuser.keytab</value>
</property>

in the hbase-site.xml.

As this is HBase on Azure (Azure Blob File System as file storage), will I just have to copy the hbase-site.xml into the GEOMESA_HBASE_HOME/conf folder after adding the 2 extra properties above?
And I assume the keytab location is just the location on the machine where geomesa-hbase is installed and from where I try to run the ingest command (so not on HDFS or ABFS)?
Emilio
@elahrvivaz
that's correct. you won't be able to do a distributed map/reduce ingest, because no one has implemented kerberos for that yet. but you can do a local ingest, and run other geomesa CLI commands
JB-data
@JB-data

When running the simplest geomesa-hbase ingest -c auto_ingest auto_ingest.csv,
I get the following:

ERROR
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.locationtech.geomesa.hbase.utils.HBaseVersions$$anonfun$_createTableAsync$1.apply(HBaseVersions.scala:133)
        at org.locationtech.geomesa.hbase.utils.HBaseVersions$$anonfun$_createTableAsync$1.apply(HBaseVersions.scala:133)
        at org.locationtech.geomesa.hbase.utils.HBaseVersions$.createTableAsync(HBaseVersions.scala:62)
        at org.locationtech.geomesa.hbase.data.HBaseIndexAdapter$$anonfun$createTable$1.apply(HBaseIndexAdapter.scala:122)
        at org.locationtech.geomesa.hbase.data.HBaseIndexAdapter$$anonfun$createTable$1.apply(HBaseIndexAdapter.scala:76)
        at org.locationtech.geomesa.utils.io.package$WithClose$.apply(package.scala:64)
        at org.locationtech.geomesa.hbase.data.HBaseIndexAdapter.createTable(HBaseIndexAdapter.scala:76)
        at org.locationtech.geomesa.index.geotools.GeoMesaDataStore$$anonfun$onSchemaCreated$3.apply(GeoMesaDataStore.scala:202)
        at org.locationtech.geomesa.index.geotools.GeoMesaDataStore$$anonfun$onSchemaCreated$3.apply(GeoMesaDataStore.scala:202)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.locationtech.geomesa.index.geotools.GeoMesaDataStore.onSchemaCreated(GeoMesaDataStore.scala:202)
        at org.locationtech.geomesa.index.geotools.MetadataBackedDataStore.createSchema(MetadataBackedDataStore.scala:159)
        at org.locationtech.geomesa.index.geotools.MetadataBackedDataStore.createSchema(MetadataBackedDataStore.scala:40)
        at org.locationtech.geomesa.tools.ingest.AbstractConverterIngest.run(AbstractConverterIngest.scala:31)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$$anonfun$execute$2.apply(IngestCommand.scala:106)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$$anonfun$execute$2.apply(IngestCommand.scala:105)
        at scala.Option.foreach(Option.scala:257)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$class.execute(IngestCommand.scala:105)
        at org.locationtech.geomesa.hbase.tools.HBaseRunner$$anon$2.execute(HBaseRunner.scala:32)
        at org.locationtech.geomesa.tools.Runner$class.main(Runner.scala:28)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: /hbase/lib/geomesa-hbase-distributed-runtime-hbase2_2.11-3.0.0.jar Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks

I noticed that it was able to create a table in HBase, but not save the data within it.
It looks like it has problems with the distributed runtime jar?
I'm on HBase 2.2.5, which should be the version GeoMesa 3.0.0 was tested with.

Emilio
@elahrvivaz
did you install the distributed-runtime jar into hdfs?
JB-data
@JB-data

So the previous issue was resolved; I'm able to ingest data with geomesa-hbase CLI commands.
I set up GeoServer and was able to add a geomesa-hbase store.
I can see the data in the GML in GeoServer.
When I choose OpenLayers, instead of a visual I get this error:

23 Oct 16:42:35 TRACE [utils.Explainer] - Planning 'auto_ingest' INCLUDE
23 Oct 16:42:35 TRACE [utils.Explainer] -   Original filter: BBOX(geom_Point_srid_4326, -272.8125,-137.8125,272.8125,137.8125)
23 Oct 16:42:35 TRACE [utils.Explainer] -   Hints: bin[false] arrow[false] density[false] stats[false] sampling[none]
23 Oct 16:42:35 TRACE [utils.Explainer] -   Sort: none
23 Oct 16:42:35 TRACE [utils.Explainer] -   Transforms: geom_Point_srid_4326
23 Oct 16:42:35 TRACE [utils.Explainer] -   Strategy selection:
23 Oct 16:42:35 TRACE [utils.Explainer] -     Query processing took 8ms for 1 options
23 Oct 16:42:35 TRACE [utils.Explainer] -     Filter plan: FilterPlan[Z3Index(geom_Point_srid_4326,dtg_Date)[INCLUDE][None]]
23 Oct 16:42:35 TRACE [utils.Explainer] -     Strategy selection took 2ms for 1 options
23 Oct 16:42:35 TRACE [utils.Explainer] -   Strategy 1 of 1: Z3Index(geom_Point_srid_4326,dtg_Date)
23 Oct 16:42:35 TRACE [utils.Explainer] -     Strategy filter: Z3Index(geom_Point_srid_4326,dtg_Date)[INCLUDE][None]
23 Oct 16:42:35 TRACE [utils.Explainer] -     Plan: ScanPlan
23 Oct 16:42:35 TRACE [utils.Explainer] -       Tables: auto_ingest8_auto_5fingest_z3_geom_5fPoint_5fsrid_5f4326_dtg_5fDate_v6
23 Oct 16:42:35 TRACE [utils.Explainer] -       Ranges (1): [::]
23 Oct 16:42:35 TRACE [utils.Explainer] -       Scans (4): [%01;::%02;], [%02;::%03;], [%03;::], [::%01;]
23 Oct 16:42:35 TRACE [utils.Explainer] -       Column families: d
23 Oct 16:42:35 TRACE [utils.Explainer] -       Remote filters: TransformFilter[geom_Point_srid_4326]
23 Oct 16:42:35 TRACE [utils.Explainer] -     Plan creation took 139ms
23 Oct 16:42:35 TRACE [utils.Explainer] -   Query planning took 188ms
23 Oct 16:42:35 WARN [util.DynamicClassLoader] - Failed to identify the fs of dir abfs://storage-fs@ourlake.dfs.core.windows.net/hbasecluster/hbase/lib, ignored
org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "abfs"
...
        at java.security.AccessController.doPrivileged(Native Method)
...
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.ClassNotFoundException: org.locationtech.geomesa.hbase.rpc.filter.CqlTransformFilter

It almost looks like GeoServer cannot read the distributed runtime jar from that location, as it does not recognize abfs (which is the next generation of ADLS, i.e. Microsoft's HDFS).
But is that possible? Since ingestion works, I could add the datastore, and the query plan seems to have been carried out, which suggests the distributed runtime jar was found correctly?

James Hughes
@jnh5y
This is a subtle thing... even though GeoServer does not need to read/write to abfs, it still ends up doing a quick check that involves knowing that "abfs" (or whatever the scheme is) is a valid filesystem.
If you add whatever jar includes the Hadoop FileSystem implementation for abfs to the GeoServer WEB-INF/lib directory, things should work.
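For background on why that check fails: Hadoop resolves a URI scheme to a FileSystem class through configuration keys of the form fs.&lt;scheme&gt;.impl (or via ServiceLoader), and throws UnsupportedFileSystemException: No FileSystem for scheme when nothing is registered. A minimal sketch of that lookup convention using a plain map instead of Hadoop's Configuration; the class name shown is the real hadoop-azure implementation, but the lookup code itself is illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

public class SchemeLookup {
    public static void main(String[] args) {
        // Illustrative stand-in for Hadoop Configuration: scheme -> impl class
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.abfs.impl", "org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem");

        String scheme = "abfs";
        String impl = conf.get("fs." + scheme + ".impl");
        // With no implementation registered for the scheme (e.g. hadoop-azure
        // missing from the classpath), Hadoop would fail the lookup here
        System.out.println(impl != null);
    }
}
```

That implementation class ships in the hadoop-azure artifact, which is why dropping the right jar (plus its dependencies) into WEB-INF/lib makes the scheme resolvable.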
JB-data
@JB-data
OK, great, then I should be able to do it.
This one would be the most logical one:
https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure/3.3.0
but it doesn't do it. I should look further.
James Hughes
@jnh5y
cool. If you find something that works and feel like adding a note to the GeoMesa docs, let us know. @elahrvivaz and I can help identify where to put it and help with contributing, etc.
Emilio
@elahrvivaz
@JB-data for clarification, the distributed runtime jar is only loaded in the hbase region servers. the CLI tools are likely picking up your local hdfs installation, and thus the abfs code. you can see the CLI classpath with geomesa-hbase classpath. as Jim said, you'll need to install the correct hadoop jars into geoserver, since it doesn't have access to the same classpath as the CLI tools
Emilio
@elahrvivaz
@/all we're going to force push the main git branch to clean up some merge commits from 2 weeks ago. hopefully it won't affect anyone too badly
kyzyer
@kyzyer
@HSLUCKY brother.