Emilio
@elahrvivaz
this is the geodocker chat, if you're having general geomesa issues you can hop over to https://gitter.im/locationtech/geomesa
Ravi Teja Kankanala
@kraviteja
Yes, running the docker in AWS EMR using a bootstrap script.
Emilio
@elahrvivaz
so I don't think the docker will get any of the AWS environment then
Ravi Teja Kankanala
@kraviteja
Previously, with no changes, it used to work with the older versions.
So do you want me to run that script inside the accumulo-master docker container?
Emilio
@elahrvivaz
yeah, wherever you are running the tools ingest command
i think the bootstrap runs it for you, but you may need the newer aws jars
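something like this is roughly what I mean - the container name, connection params, catalog and converter below are just placeholders, so adjust for your setup:

    # exec into the container that has the geomesa tools installed
    docker exec -it accumulo-master bash

    # then, inside the container, run the ingest
    # (instance, zookeepers, user, password, catalog and converter are placeholders)
    geomesa-accumulo ingest \
      --instance accumulo \
      --zookeepers zookeeper:2181 \
      --user root --password secret \
      --catalog geomesa.gdelt \
      --converter gdelt --spec gdelt \
      s3a://my-bucket/gdelt/2019/01/01.csv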
Rakesh
@rakeshkr00
@elahrvivaz Can Geomesa 2.3.0 be built with Geoserver 2.12.0 without any issues by just changing the dependency in pom.xml?
Emilio
@elahrvivaz
no, the package names for JTS geometries changed
Rakesh
@rakeshkr00
Thanks
Emilio
@elahrvivaz
np
Rakesh
@rakeshkr00

@elahrvivaz I am trying to dockerize Geomesa 2.3.0 and am facing an issue. After deploying the Geomesa image, it throws the error below when ingesting a file residing on HDFS:

2019-05-23 16:36:38,353 ERROR [org.locationtech.geomesa.tools.user] Could not acquire distributed lock at '/org.locationtech.geomesa/ds/geomesa.gdelt' within 2 minutes
java.lang.RuntimeException: Could not acquire distributed lock at '/org.locationtech.geomesa/ds/geomesa.gdelt' within 2 minutes
        at org.locationtech.geomesa.index.geotools.MetadataBackedDataStore$$anonfun$acquireCatalogLock$1.apply(MetadataBackedDataStore.scala:378)
        at org.locationtech.geomesa.index.geotools.MetadataBackedDataStore$$anonfun$acquireCatalogLock$1.apply(MetadataBackedDataStore.scala:378)
        at scala.Option.getOrElse(Option.scala:121)
        at org.locationtech.geomesa.index.geotools.MetadataBackedDataStore.acquireCatalogLock(MetadataBackedDataStore.scala:377)
        at org.locationtech.geomesa.index.geotools.MetadataBackedDataStore.createSchema(MetadataBackedDataStore.scala:123)
        at org.locationtech.geomesa.index.geotools.MetadataBackedDataStore.createSchema(MetadataBackedDataStore.scala:40)
        at org.locationtech.geomesa.tools.ingest.AbstractConverterIngest.run(AbstractConverterIngest.scala:34)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$$anonfun$execute$3.apply(IngestCommand.scala:107)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$$anonfun$execute$3.apply(IngestCommand.scala:106)
        at scala.Option.foreach(Option.scala:257)
        at org.locationtech.geomesa.tools.ingest.IngestCommand$class.execute(IngestCommand.scala:106)
        at org.locationtech.geomesa.accumulo.tools.ingest.AccumuloIngestCommand.execute(AccumuloIngestCommand.scala:21)
        at org.locationtech.geomesa.tools.Runner$class.main(Runner.scala:28)
        at org.locationtech.geomesa.accumulo.tools.AccumuloRunner$.main(AccumuloRunner.scala:29)
        at org.locationtech.geomesa.accumulo.tools.AccumuloRunner.main(AccumuloRunner.scala)
2019-05-23 16:36:38,401 ERROR [org.apache.commons.vfs2.impl.DefaultFileMonitor] Unknown message with code "Unable to check existance ".
org.apache.commons.vfs2.FileSystemException: Unknown message with code "Unable to check existance ".
        at org.apache.commons.vfs2.provider.hdfs.HdfsFileObject.exists(HdfsFileObject.java:256)
        at org.apache.commons.vfs2.impl.DefaultFileMonitor$FileMonitorAgent.check(DefaultFileMonitor.java:508)
        at org.apache.commons.vfs2.impl.DefaultFileMonitor$FileMonitorAgent.access$200(DefaultFileMonitor.java:367)
        at org.apache.commons.vfs2.impl.DefaultFileMonitor.run(DefaultFileMonitor.java:330)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:466)

Any suggestions, please?

Rakesh
@rakeshkr00
hadoop version 2.8.4
Emilio
@elahrvivaz
not sure... looks like maybe a connectivity or configuration issue, where it can't talk to zookeeper and/or hdfs
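a few quick checks from wherever you're launching the ingest might narrow it down (hostnames and paths below are just guesses for your setup):

    # is zookeeper reachable? 'imok' means the server answered (four-letter-word check)
    echo ruok | nc zookeeper-host 2181

    # is hdfs reachable with the client config you have?
    hdfs dfs -ls /

    # and make sure the cluster's site files are actually visible to the tools
    echo $HADOOP_CONF_DIR
    ls /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml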
Rakesh
@rakeshkr00
Ok.

@elahrvivaz for Geomesa 2.2.2, it throws the error below when trying to access an S3 location during ingestion:

com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: D1D35381972265C4),

Any suggestion would be helpful.

Emilio
@elahrvivaz
you probably need to set up your aws credentials - there are details in the geomesa documentation
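for example, if you're not running with an instance profile, exporting the standard variables before the ingest is usually enough, since the default s3a credential chain checks them (values below are placeholders):

    # only needed when there's no IAM instance profile on the box
    export AWS_ACCESS_KEY_ID=your-access-key
    export AWS_SECRET_ACCESS_KEY=your-secret-key
    # then run the geomesa-accumulo ingest against the s3a:// path as usual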
Rakesh
@rakeshkr00
@elahrvivaz I was going through the documentation at https://www.geomesa.org/documentation/user/cli/filesystems.html#configuration and it suggests making changes in core-site.xml. I am not sure what value to set for the variable HADOOP_MAPRED_HOME. Could you please suggest one? I am using EMR 5.18. We have an IAM policy for the EMR cluster, so we wouldn't be setting up credentials. I am using Geomesa 2.2.2 with Hadoop 2.8.4 and Accumulo 1.9.2.
Emilio
@elahrvivaz
i think you can use the provided bin/install-hadoop.sh script, and that will download the aws jars as well
you may also need to copy over the core-site.xml from your hdfs cluster, assuming hdfs is not in the docker
i don't think you would need to set HADOOP_MAPRED_HOME in that case
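roughly like this, from the root of the unpacked tools distribution (paths are assumptions, and the script may prompt you for versions):

    # pulls the hadoop and aws jars into lib/
    ./bin/install-hadoop.sh

    # make the cluster's client config visible to the tools - either copy it in
    # or point HADOOP_CONF_DIR at it
    cp /etc/hadoop/conf/core-site.xml conf/
    export HADOOP_CONF_DIR=/etc/hadoop/conf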
Grigory
@pomadchin
@elahrvivaz hey! we’re spinning up accumulo in docker on EMR, but how do we forward AWS S3 credentials inside? Would it be enough to mount the EC2 hypervisor file inside? Or how do you usually forward credentials into the container?
Emilio
@elahrvivaz
@pomadchin not sure, sorry. just curious, why do you need s3 credentials?
Grigory
@pomadchin

ah; I probably wrote that a bit messily;

we want to ingest data from the S3 bucket...

and we used s3://geomesa-docker/bootstrap-geodocker-accumulo.sh to bootstrap the cluster
Emilio
@elahrvivaz
ah, that should use the EMR hadoop right?
Grigory
@pomadchin
Yep
on EMR hadoop picks everything up from the EC2 instance metadata >_>
Emilio
@elahrvivaz
i think that you would only need the credentials where you launch the job, e.g. where you have the geomesa tools installed
you could just unzip them on one of the hadoop nodes instead of inside a docker
Grigory
@pomadchin
Ah… but do we need to do it on EMR?
since we can perform aws s3 commands from outside the container
Emilio
@elahrvivaz
no, you could do it anywhere that has access to the cluster
you would probably need to copy down the hadoop *-site.xml files
and probably need to copy/install some aws jars too
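something like this on any box that can reach the cluster ($VERSION and the jar locations below are guesses, adjust for your EMR release):

    tar xzf geomesa-accumulo_2.11-$VERSION-bin.tar.gz
    cd geomesa-accumulo_2.11-$VERSION

    # hadoop client config from the cluster
    cp /etc/hadoop/conf/*-site.xml conf/

    # aws jars so s3a:// paths resolve - EMR usually ships these with its hadoop install
    cp /usr/lib/hadoop/hadoop-aws*.jar lib/
    cp /usr/lib/hadoop/lib/aws-java-sdk*.jar lib/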
Grigory
@pomadchin

ah… so here's how we do it right now:

after bootstrapping, we ssh to the master node and go inside the accumulo docker

and perform the CLI ingest there
Emilio
@elahrvivaz
you could install the cli tools on the master node, outside the docker
Grigory
@pomadchin
do you have a quick tip on how to do it?
or a link to docs >_>
Emilio
@elahrvivaz
hadoop master node right?
Grigory
@pomadchin
yep
Emilio
@elahrvivaz
the cli tools should pick up the hadoop conf if you're running it there
you basically just unzip the geomesa tar.gz, and then copy in your accumulo client jars
there's a script bin/install-hadoop-accumulo.sh, but you would want to comment out the hadoop parts if you use that
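a rough sketch on the master node (the jar names and locations are assumptions; the exact client jar list is in the geomesa install docs):

    tar xzf geomesa-accumulo_2.11-$VERSION-bin.tar.gz
    cd geomesa-accumulo_2.11-$VERSION

    # copy in the accumulo client jars from the cluster's accumulo install
    cp /usr/lib/accumulo/lib/accumulo-core*.jar lib/
    cp /usr/lib/accumulo/lib/accumulo-fate*.jar lib/
    cp /usr/lib/accumulo/lib/libthrift*.jar lib/

    # if you use bin/install-hadoop-accumulo.sh instead, edit it first to comment
    # out the hadoop parts, since the node already has hadoop

    # then run bin/geomesa-accumulo ingest from here, outside the docker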
Grigory
@pomadchin
gotcha
thank you!
Emilio
@elahrvivaz
sure, if things don't work let me know
Grigory
@pomadchin
Thanks :+1: