Emilio
@elahrvivaz
i saw your email to the accumulo list
you could add some new params to specify the proxy through string keys, and we could use those to construct the connector appropriately
replicate however you're creating the connector now
James Srinivasan
@jrs53
The problem is that in Accumulo 1.7, I can't create a KerberosToken for a proxy user because the guard for that fn is too strict. It was corrected to match the docs for 1.9
I'm now using the 1.9.3 client in GeoMesa, and the world's worst Accumulo client (written by yours truly) works, having manually built myself a connector
I was hoping to use that connector to test geomesa directly, but can't due to the serialisation issue
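For reference, the failing path looks roughly like this sketch (assuming the Hadoop UGI proxy-user API; the principal name is a placeholder):

import java.security.PrivilegedExceptionAction
import org.apache.accumulo.core.client.security.tokens.KerberosToken
import org.apache.hadoop.security.UserGroupInformation

// impersonate a proxy user on top of the real (kinit'd) login
val realUser = UserGroupInformation.getLoginUser
val proxyUser = UserGroupInformation.createProxyUser("alice@EXAMPLE.COM", realUser)

val token = proxyUser.doAs(new PrivilegedExceptionAction[KerberosToken] {
  // the 1.7 ctor requires the current user to hold Kerberos credentials,
  // which a proxy user doesn't, so this throws; 1.9 relaxed the check
  override def run(): KerberosToken = new KerberosToken("alice@EXAMPLE.COM")
})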
Emilio
@elahrvivaz
ooh, you wrote your own client? hardcore
James Srinivasan
@jrs53
I refer you to "world's worst..."
Emilio
@elahrvivaz
haha
you could write a new data store factory that wraps the geomesa accumulo one, and creates your connector appropriately
James Srinivasan
@jrs53
nah, because all I am doing is creating a KerberosToken() rather than KerberosToken(keytab,...)
there are use cases not to have a keytab, and this is one of them
// Don't try this at home
val tableScanner = conn.createScanner("geomesa.gdelt", new Authorizations())

// Scanner.iterator() returns a fresh iterator on every call, so grab it once
val iter = tableScanner.iterator
while (iter.hasNext) {
  println(iter.next.getKey.toString)
}
tableScanner.close()
I'm thinking the PR will allow neither keytab nor password to be set in AccumuloDataStoreFactory (currently that's an error), in which case it uses the no-args KerberosToken() ctor. I'd also need to add something to the CLI to handle this - currently omitting both prompts for a password (I guess a new CLI option)
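Something like this hypothetical helper (names are illustrative, not the actual factory code):

import java.io.File
import org.apache.accumulo.core.client.security.tokens.{AuthenticationToken, KerberosToken, PasswordToken}

def createToken(user: String, password: Option[String], keytab: Option[String]): AuthenticationToken =
  (password, keytab) match {
    case (Some(p), None) => new PasswordToken(p)
    case (None, Some(k)) => new KerberosToken(user, new File(k))
    // proposed change: fall back to the current Kerberos login instead of erroring
    case (None, None)    => new KerberosToken()
    case _               => throw new IllegalArgumentException("set at most one of password or keytab")
  }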
Emilio
@elahrvivaz
ah, makes sense
James Srinivasan
@jrs53
not a breaking change?
Emilio
@elahrvivaz
doesn't seem like it would be
jhkcool
@jhkcool
Now I have a problem integrating the SSI algorithm into GeoMesa: the CQLFilter seems to have stopped working. Through debugging, I found that the data at scan time falls within the CQLFilter's range, but it all gets filtered out and the result size is always 0. I tried the default filter with Google S2 before and didn't see this problem.
Is there something I haven't noticed? It's been bothering me all day.
@elahrvivaz
Emilio
@elahrvivaz
@jhkcool make sure that your geomesa coprocessor version matches the geomesa client version
if you're using hbase, you can try setting hbase.remote.filtering to false in the data store params
that should let you debug any filtering locally
jhkcool
@jhkcool
Is this configuration set in the query hints?
Emilio
@elahrvivaz
no, in the data store parameter map
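e.g. something like this (connection values are placeholders, and parameter names other than hbase.remote.filtering may vary by GeoMesa version):

import scala.collection.JavaConverters._
import org.geotools.data.DataStoreFinder

val params = Map[String, java.io.Serializable](
  "hbase.catalog"          -> "mycatalog",
  // run filters client-side so you can step through them in a local debugger
  "hbase.remote.filtering" -> "false"
)
val ds = DataStoreFinder.getDataStore(params.asJava)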
jhkcool
@jhkcool
OK, let me try.
jhkcool
@jhkcool
@elahrvivaz yeah, I set hbase.remote.filtering to false and then the query results are correct. But with remote filtering, the xz2 index's query results are OK while the ssi2 index's results are not.
jhkcool
@jhkcool
I solved it. Something went wrong creating the ssi2 index while I was remote-debugging HBase.
James Hughes
@jnh5y
That makes sense. Did you have the S2 libraries in the geomesa-hbase-distributed-runtime? (That'd be a clear way that things could go wrong)
James Hughes
@jnh5y
@urbanit alright, the students I mentioned just posted their PR here: locationtech/geomesa#2426. As a reminder, when you all put up your work, you'll need to sign the Eclipse ECA and use the git commit sign-off flag (git commit -s)
jg895512
@jg895512
how sensitive is geomesa-kafka to ZooKeeper versions? I have a cluster with HDP and Kafka 2.0.0, but the ZooKeeper versions differ: the cluster has 3.4.10, while gm-kafka looks like it expects 3.5.6, including a GeoServer jar like zookeeper-jute-3.5.6.jar, which isn't even available for 3.4.x. I'm getting various ZooKeeper errors both on the command line and when trying to add the layer in GeoServer. Here is my geomesa-kafka command line listener result:
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /geomesa/ds/kafka/metadata/migration~check
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
        at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1111)
        at org.apache.curator.framework.imps.ExistsBuilderImpl$3.call(ExistsBuilderImpl.java:268)
        at org.apache.curator.framework.imps.ExistsBuilderImpl$3.call(ExistsBuilderImpl.java:257)
        at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:64)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:100)
        at org.apache.curator.framework.imps.ExistsBuilderImpl.pathInForegroundStandard(ExistsBuilderImpl.java:254)
        at org.apache.curator.framework.imps.ExistsBuilderImpl.pathInForeground(ExistsBuilderImpl.java:247)
        at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:206)
        at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:35)
        at org.locationtech.geomesa.utils.zk.ZookeeperMetadata.scanValue(ZookeeperMetadata.scala:58)
        at org.locationtech.geomesa.index.metadata.KeyValueStoreMetadata$class.scanValue(KeyValueStoreMetadata.scala:40)
        at org.locationtech.geomesa.utils.zk.ZookeeperMetadata.scanValue(ZookeeperMetadata.scala:18)
        at org.locationtech.geomesa.index.metadata.TableBasedMetadata$$anon$1.load(TableBasedMetadata.scala:114)
        at org.locationtech.geomesa.index.metadata.TableBasedMetadata$$anon$1.load(TableBasedMetadata.scala:110)
        at com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java
i know with accumulo it matters which client jar versions you have where, etc. I haven't used Kafka in a while and I'm getting a bunch of errors like this as I try to add/access data
Emilio
@elahrvivaz
@jg895512 i think it should work with 3.4.x or 3.5.x
3.5.x requires an extra jar, the -jute
jg895512
@jg895512
any other weirdness you think we should worry about?
Emilio
@elahrvivaz
not that i know of...
zookeeper generally seems pretty flexible with versions
can you use zkCli from the same box where you have the CLI tools?
might be a connection issue or something
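e.g. a quick programmatic check in the same spirit as zkCli (quorum address is a placeholder):

import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}

val zk = new ZooKeeper("zoo1:2181", 5000, new Watcher {
  override def process(event: WatchedEvent): Unit = println(event)
})
Thread.sleep(2000) // the connection is asynchronous, give it a moment
println(zk.getState) // expect CONNECTED; anything else points at the network, not the jars
zk.close()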
James Srinivasan
@jrs53
newest NiFi has an issue with zk versions
Emilio
@elahrvivaz
i don't think that's changed
James Srinivasan
@jrs53
as in zk minor versions sometimes seem incompatible. Unclear (to me) whether the config file just needs changing
Emilio
@elahrvivaz
ah, gotcha
you matched your zk version to the one that's installed?
@jg895512 the install scripts use the 'recommended' version, but you can change the version at the top of the file to match your install
or just copy them manually
jg895512
@jg895512
matched it where? In geoserver I just left what was there from the geomesa accumulo geoserver install
Emilio
@elahrvivaz
match the zk jars in geoserver/CLI (i.e. your client) with the zk version you're connecting to
jg895512
@jg895512
do I need to build the geomesa-kafka packages with the right version though, or will there be a conflict with it expecting 3.5.x when the environment has 3.4?