shortwavedave
@shortwavedave
thank you!
gaurav chhabra
@gc5483_twitter
hey guys, can anyone please tell me what the minimum hardware specs are for running the geomesa docker? Appreciate your help.
Emilio
@elahrvivaz
@gc5483_twitter I'd refer to the accumulo install docs: https://accumulo.apache.org/1.9/accumulo_user_manual.html#_hardware
the docker can be run in different modes (e.g. tserver vs master) - you can run them all on a single host or across multiple hosts
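A rough sketch of the two setups, assuming the compose file from geodocker-accumulo-geomesa (the service names below are assumptions based on common geodocker conventions; check docker-compose.yml for the real ones):

    # single host: bring the whole stack up from the compose file
    cd geodocker-accumulo-geomesa
    docker-compose up -d

    # multiple hosts: start only selected services on each machine,
    # e.g. master-side services on one host, tablet servers elsewhere
    docker-compose up -d zookeeper accumulo-master    # host A
    docker-compose up -d accumulo-tserver             # hosts B, C, ...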
gaurav chhabra
@gc5483_twitter
thanks @elahrvivaz for the help
feng-botian
@feng-botian
Hi everyone, can you please tell me how to start the geoserver docker so it is accessible over the network? It works for localhost:9090, but not for ip:9090. Thanks.
Emilio
@elahrvivaz
@feng-botian when I run cd geodocker-accumulo-geomesa; docker-compose up it binds to my host name, not just localhost
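One quick way to see what the container is actually published on (the container name filter and internal port shown are assumptions; check docker ps for the real values):

    docker ps --format '{{.Names}}\t{{.Ports}}' | grep -i geoserver
    # 0.0.0.0:9090->8080/tcp   means the port is published on all host interfaces, so ip:9090 should work
    # 127.0.0.1:9090->8080/tcp means it is only reachable via localhost:9090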
feng-botian
@feng-botian
@elahrvivaz thanks, the geoserver image was used in this case; it can't be accessed at ip:port, but localhost:port works.
gaurav chhabra
@gc5483_twitter
Hi everyone, I am facing this error: "There are no tablet servers: check that zookeeper and accumulo are running" when I run accumulo shell -u root -p GisPwd inside the accumulo-master_1 docker container. Please help.
All my docker containers are running
James Hughes
@jnh5y
from the Accumulo docker, have you checked that all the Accumulo services are running?
There should be 5 processes: master, gc, tracer, tserver, and monitor (if I recall correctly)
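A minimal check from the host, assuming default geodocker container names and that a JDK with jps is available in the images (use docker ps for the actual names):

    docker exec accumulo-master_1 jps -m     # expect master, gc, tracer and monitor processes
    docker exec accumulo-tserver_1 jps -m    # expect a tserver process on each tablet server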
gaurav chhabra
@gc5483_twitter
yes - James Hughes @jnh5y
gaurav chhabra
@gc5483_twitter
@elahrvivaz - please help
Emilio
@elahrvivaz
@feng-botian are you saying that you are using a different docker image? what image are you using?
@gc5483_twitter check the docker accumulo/zookeeper/hdfs logs for errors - possibly there is a port conflict on the host machine
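A sketch of those checks; the service names are assumptions (take the actual ones from docker-compose.yml) and the ports are the usual defaults:

    docker-compose logs --tail=100 zookeeper accumulo-master accumulo-tserver hdfs-name
    # look for bind/connection errors, then check for port clashes on the host:
    ss -tlnp | grep -E '2181|9997|9999|8020'   # zookeeper, tserver, accumulo master, hdfs namenode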
Ravi Teja Kankanala
@kraviteja
When I do an export and import of tables in the Accumulo shell, I get the error
ERROR: org.apache.accumulo.core.client.AccumuloException: File F0000038.rf does not exist in import dir
How do I go about fixing it?
This is with GeoMesa 2.0.2 and Accumulo 1.9.1
Another issue I have, with the GeoMesa 2.1.0 / Accumulo 1.9.2 setup: when I try to run geomesa-accumulo version I get an error
Error: Could not find or load main class .usr.lib.zookeeper.zookeeper.jar
Please let me know how to fix this as well.
@elahrvivaz
Emilio
@elahrvivaz
@kraviteja I'm not sure about your import export issue, but the other problem was fixed in 2.1.1: https://geomesa.atlassian.net/browse/GEOMESA-2480
Ravi Teja Kankanala
@kraviteja
@elahrvivaz When I tried GeoMesa 2.2.0 with Accumulo 1.9.2, I got an access denied error from S3 even though the cluster has access to S3.
ERROR s3a://gdelt-open-data/events/20171027.export.csv: getFileStatus on s3a://gdelt-open-data/events/20171027.export.csv: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: E2292913F282FA85), S3 Extended Request ID: bsQ0euHh9x7MUteZ06MswgIA1wLq8zbixV4Iied7BkulKZukLmtti4XwmSro2hYrZXYWOjadnEI=
Emilio
@elahrvivaz
@kraviteja you still have to set up authentication, that hasn't changed: https://www.geomesa.org/documentation/user/cli/filesystems.html#enabling-s3-ingest
oh, is that a public bucket? hmm, might be aws jars
you may need this update: locationtech/geomesa@bc8da72
you can just modify the existing install script by hand
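Roughly, that amounts to dropping newer hadoop-aws / aws-java-sdk jars into the tools' lib directory by hand; the path and jar versions below are placeholders and must match the Hadoop version inside the image:

    cd /opt/geomesa/lib    # wherever the geomesa-accumulo tools are unpacked
    curl -LO https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.8.4/hadoop-aws-2.8.4.jar
    curl -LO https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-s3/1.10.6/aws-java-sdk-s3-1.10.6.jar
    curl -LO https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-core/1.10.6/aws-java-sdk-core-1.10.6.jar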
Ravi Teja Kankanala
@kraviteja
This is in AWS EMR
Emilio
@elahrvivaz
aren't you running a docker?
this is the geodocker chat, if you're having general geomesa issues you can hop over to https://gitter.im/locationtech/geomesa
Ravi Teja Kankanala
@kraviteja
Yes, running the docker in AWS EMR using a bootstrap script.
Emilio
@elahrvivaz
so I don't think the docker will get any of the AWS environment then
Ravi Teja Kankanala
@kraviteja
Previously, with no changes, this used to work with the old versions
So do you want me to run that script inside the accumulo-master docker container?
Emilio
@elahrvivaz
yeah, wherever you are running the tools ingest command
I think the bootstrap runs it for you, but you may need the newer aws jars
Rakesh
@rakeshkr00
@elahrvivaz Can GeoMesa 2.3.0 be built against GeoServer 2.12.0 without any issues, just by changing the dependency in the pom.xml?
Emilio
@elahrvivaz
no, the package names for JTS geometries changed
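Concretely, GeoServer 2.12 is built on GeoTools 18, which still uses the old com.vividsolutions JTS packages, while GeoMesa 2.3.0 uses the relocated org.locationtech packages, so the two are not binary-compatible:

    import com.vividsolutions.jts.geom.Geometry   // JTS as used by GeoServer 2.12 / GeoTools 18
    import org.locationtech.jts.geom.Geometry     // relocated JTS as used by GeoMesa 2.3.0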
Rakesh
@rakeshkr00
Thanks
Emilio
@elahrvivaz
np
Rakesh
@rakeshkr00

@elahrvivaz I am trying to dockerize GeoMesa 2.3.0 and am facing an issue. After deploying the GeoMesa image, it throws the error below when ingesting a file residing on HDFS:

2019-05-23 16:36:38,353 ERROR [org.locationtech.geomesa.tools.user] Could not acquire distributed lock at '/org.locationtech.geomesa/ds/geomesa.gdelt' within 2 minutes
java.lang.RuntimeException: Could not acquire distributed lock at '/org.locationtech.geomesa/ds/geomesa.gdelt' within 2 minutes
        at org.locationtech.geomesa.index.geotools.MetadataBackedDataStore$$anonfun$acquireCatalogLock$1.apply(MetadataBackedDataStore.scala:378)
        at org.locationtech.geomesa.index.geotools.MetadataBackedDataStore$$anonfun$acquireCatalogLock$1.apply(MetadataBackedDataStore.scala:378)
        at scala.Option.getOrElse(Option.scala:121)
        at org.locationtech.geomesa.index.geotools.MetadataBackedDataStore.acquireCatalogLock(MetadataBackedDataStore.scala:377)
        at org.locationtech.geomesa.index.geotools.MetadataBackedDataStore.createSchema(MetadataBackedDataStore.scala:123)
        at org.locationtech.geomesa.index.geotools.MetadataBackedDataStore.createSchema(MetadataBackedDataStore.scala:40)
        at org.locationtech.geomesa.tools.ingest.AbstractConverterIngest.run(AbstractConverterIngest.scala:34)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$$anonfun$execute$3.apply(IngestCommand.scala:107)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$$anonfun$execute$3.apply(IngestCommand.scala:106)
        at scala.Option.foreach(Option.scala:257)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$class.execute(IngestCommand.scala:106)
        at org.locationtech.geomesa.accumulo.tools.ingest.AccumuloIngestCommand.execute(AccumuloIngestCommand.scala:21)
        at org.locationtech.geomesa.tools.Runner$class.main(Runner.scala:28)
        at org.locationtech.geomesa.accumulo.tools.AccumuloRunner$.main(AccumuloRunner.scala:29)
        at org.locationtech.geomesa.accumulo.tools.AccumuloRunner.main(AccumuloRunner.scala)
2019-05-23 16:36:38,401 ERROR [org.apache.commons.vfs2.impl.DefaultFileMonitor] Unknown message with code "Unable to check existance ".
org.apache.commons.vfs2.FileSystemException: Unknown message with code "Unable to check existance ".
        at org.apache.commons.vfs2.provider.hdfs.HdfsFileObject.exists(HdfsFileObject.java:256)
        at org.apache.commons.vfs2.impl.DefaultFileMonitor$FileMonitorAgent.check(DefaultFileMonitor.java:508)
        at org.apache.commons.vfs2.impl.DefaultFileMonitor$FileMonitorAgent.access$200(DefaultFileMonitor.java:367)
        at org.apache.commons.vfs2.impl.DefaultFileMonitor.run(DefaultFileMonitor.java:330)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:466)

Any suggestions, please?

Rakesh
@rakeshkr00
hadoop version 2.8.4
Emilio
@elahrvivaz
not sure... looks like maybe a connectivity or configuration issue, where it can't talk to zookeeper and/or hdfs
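A couple of quick checks that can be run from wherever the ingest command runs (the hostnames are assumptions; use whatever the compose file defines, and zkCli.sh only if the zookeeper client is installed):

    zkCli.sh -server zookeeper:2181 ls /      # can we reach ZooKeeper?
    hdfs dfs -ls hdfs://hdfs-name:8020/       # can we reach the HDFS namenode?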
Rakesh
@rakeshkr00
Ok.

@elahrvivaz for GeoMesa 2.2.2, it throws the error below when trying to access an S3 location during ingestion

com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: D1D35381972265C4),

Any suggestions would be helpful.

Emilio
@elahrvivaz
you probably need to set up your aws credentials - there are details in the geomesa documentation
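A minimal sketch of one way to do that, using environment variables picked up by the s3a default credential chain (the ingest arguments are elided; the exact parameters depend on your setup):

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    # alternatively, set fs.s3a.access.key / fs.s3a.secret.key in core-site.xml
    geomesa-accumulo ingest ... s3a://bucket/path/file.csv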