jterry64
@jterry64
Sure, I think Azavea actually helped set up our project originally. This is for Global Forest Watch, to batch process tree cover loss (and related) data
jterry64
@jterry64

Ah, I see the issue in the bootstrap action logs (not sure why this doesn't cause a more obvious error on EMR):

sudo pip3 install tqdm
sudo: pip3: command not found

If I try to run the script you linked above on the master node, I run into the same issue. Do I need to make sure pip3 is installed from a different dependency?

Grigory
@pomadchin
ooooh right;
@jterry64 hm I don’t remember installing it, but one thing that might matter is that I used EMR 6
probably earlier EMR versions don’t have pip3 installed
@jterry64 niiice, so you’re upgrading the project to the fresh gt version?
theoretically it should be easier for you now, and you’re not locked to some old gdal version - everything is installed through conda now
jterry64
@jterry64

Ah, I was using EMR 5, so I'll try with EMR 6.

Yup! A big upgrade actually, we got very behind - we were on 2.2 and still using a bunch of contrib packages that were added in gt 3. So we're just now using gt with gdal for the first time, but it's good we can stay up-to-date on gdal going forward

Grigory
@pomadchin
@jterry64 sounds pretty exciting
nice
pinis123
@pinis123
@pomadchin Hello, following your technical support I processed GeoTIFFs with the COG mechanism. I found that for zoom levels 8 to 10 (or 11 to 15), the row and column numbers correspond to the minZoom of the range, i.e. level 8 (or level 11). So for levels 9 to 10 (or 12 to 15), how should I get the corresponding tile data? Is my approach right: convert z/x/y to a bounding box (bbox), and then use the bbox to get the corresponding tiles, just like these lines of code
// The result of a query
val queryResult: TileLayerRDD[SpatialKey] =
  reader
    .query[SpatialKey, Tile, TileLayerMetadata[SpatialKey]](layerId) // layerId: the LayerId being queried
    .where(Intersects(bounds1) or Intersects(bounds2))
    .result
// Let's create a ValueReader to query tiles by (x, y)
[image attached]
thank you
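
(A minimal sketch of the z/x/y-to-bbox conversion asked about above, assuming a WebMercator TMS grid with 256-pixel tiles on GeoTrellis 3.x; the helper name tileExtent is hypothetical.)

import geotrellis.layer.{SpatialKey, ZoomedLayoutScheme}
import geotrellis.proj4.WebMercator
import geotrellis.vector.Extent

// ZoomedLayoutScheme models the power-of-two TMS pyramid (assumed: WebMercator, 256px tiles).
val scheme = ZoomedLayoutScheme(WebMercator, tileSize = 256)

// Convert a TMS z/x/y address into the extent (bbox) covered by that tile.
def tileExtent(z: Int, x: Int, y: Int): Extent =
  scheme.levelForZoom(z).layout.mapTransform(SpatialKey(x, y))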
pinis123
@pinis123
[image attached]
Cloud Optimized GeoTIFF (COG), thanks @pomadchin
Grigory
@pomadchin
hey @pinis123 I think you used COG Layers, and we have a partial pyramids concept here;
a partial pyramid is a range of zoom levels to which the COGs in this range correspond
Grigory
@pomadchin
All COGs consist of segments (that are Tiles), and each COG has overviews; each overview (including the base IFD) has its own resolution and corresponds to some zoom level
COGLayers were implemented to speed up access to TIFFs: in the COGLayer case the tiffs are indexed, so we can compute which overview of which tiff to read
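
(A minimal sketch of reading one tile from a COG layer by (zoom, x, y), assuming GeoTrellis 3.x's file-backed COG value reader; the catalog path, layer name, and key are placeholders.)

import geotrellis.layer.SpatialKey
import geotrellis.raster.Tile
import geotrellis.store.LayerId
import geotrellis.store.file.cog.FileCOGValueReader

// Point a value reader at an existing COG catalog (placeholder path).
val valueReader = FileCOGValueReader("/path/to/cog/catalog")

// LayerId(name, zoom) selects the partial pyramid; the reader computes which
// overview of which tiff holds the requested key and decodes only that segment.
val tile: Tile =
  valueReader
    .reader[SpatialKey, Tile](LayerId("my-layer", 9))
    .read(SpatialKey(100, 200))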
pinis123
@pinis123
@pomadchin Yes, I used COG Layers. The front-end map component is OpenLayers. Following the TMS (EPSG:3857) specification, I used Akka to provide REST services that read the COGLayers, and then correctly displayed the COGLayer tile data through OpenLayers.
Thanks again
Grigory
@pomadchin
That’s great! @pinis123, thanks for sharing your story here!
pinis123
@pinis123
[image attached]
just like that
A bright smile to you
gispathfinder
@zyxgis

Hi everyone

the versions of pureconfig in GeoTrellis and GeoMesa conflict:
the version of pureconfig in GeoTrellis 3.5.0 is 0.13.0,
but the version of pureconfig in GeoMesa 3.0.0 is 0.11.1

how can I handle this problem?

gispathfinder
@zyxgis

when I use pureconfig 0.11.1

dependencyOverrides ++= Seq(
  "com.github.pureconfig" %% "pureconfig" % "0.11.1"
)

the error is

20/10/22 00:17:14 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 2, gisserver4, executor 2): java.lang.NoClassDefFoundError: Could not initialize class geotrellis.vector.GeomFactory$
    at geotrellis.vector.MultiPolygonConstructors$class.apply(MultiPolygon.scala:31)
    at geotrellis.vector.MultiPolygon$.apply(MultiPolygon.scala:44)
    at geotrellis.vector.MultiPolygonConstructors$class.apply(MultiPolygon.scala:28)
    at geotrellis.vector.MultiPolygon$.apply(MultiPolygon.scala:44)
    at geotrellis.layer.MapKeyTransform.keysForGeometry(MapKeyTransform.scala:172)
    at com.smartmap.task.vectorAnalyze.overlay.UnionTask$$anonfun$13.apply(UnionTask.scala:448)
    at com.smartmap.task.vectorAnalyze.overlay.UnionTask$$anonfun$13.apply(UnionTask.scala:437)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

20/10/22 00:17:14 WARN scheduler.TaskSetManager: Lost task 2.0 in stage 0.0 (TID 3, gisserver4, executor 2): java.lang.NoClassDefFoundError: pureconfig/ConfigSource$
    at geotrellis.vector.conf.JtsConfig$.conf$lzycompute(JtsConfig.scala:42)
    at geotrellis.vector.conf.JtsConfig$.conf(JtsConfig.scala:42)
    at geotrellis.vector.conf.JtsConfig$.jtsConfigToClass(JtsConfig.scala:43)
    at geotrellis.vector.GeomFactory$.<init>(GeomFactory.scala:26)
    at geotrellis.vector.GeomFactory$.<clinit>(GeomFactory.scala)
    at geotrellis.vector.MultiPolygonConstructors$class.apply(MultiPolygon.scala:31)
    at geotrellis.vector.MultiPolygon$.apply(MultiPolygon.scala:44)
    at geotrellis.vector.MultiPolygonConstructors$class.apply(MultiPolygon.scala:28)
    at geotrellis.vector.MultiPolygon$.apply(MultiPolygon.scala:44)
    at geotrellis.layer.MapKeyTransform.keysForGeometry(MapKeyTransform.scala:172)
    at com.smartmap.task.vectorAnalyze.overlay.UnionTask$$anonfun$13.apply(UnionTask.scala:448)
    at com.smartmap.task.vectorAnalyze.overlay.UnionTask$$anonfun$13.apply(UnionTask.scala:437)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: pureconfig.ConfigSource$
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    ... 24 more
gispathfinder
@zyxgis

when I use pureconfig 0.13.0

the error is

Uncaught error from thread [application-akka.actor.default-dispatcher-4]: pureconfig.generic.DerivedConfigWriter$.labelledGenericWriter(Lshapeless/LabelledGeneric;Lshapeless/Lazy;)Lpureconfig/generic/DerivedConfigWriter;, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[application]
java.lang.NoSuchMethodError: pureconfig.generic.DerivedConfigWriter$.labelledGenericWriter(Lshapeless/LabelledGeneric;Lshapeless/Lazy;)Lpureconfig/generic/DerivedConfigWriter;
    at org.locationtech.geomesa.fs.storage.common.package$anon$lazy$macro$356$1.inst$macro$344$lzycompute(package.scala:26)
    at org.locationtech.geomesa.fs.storage.common.package$anon$lazy$macro$356$1.inst$macro$344(package.scala:26)
    at org.locationtech.geomesa.fs.storage.common.package$.<init>(package.scala:26)
    at org.locationtech.geomesa.fs.storage.common.package$.<clinit>(package.scala)
    at org.locationtech.geomesa.fs.storage.common.metadata.MetadataJson$$anonfun$1$$anonfun$apply$1.apply(MetadataJson.scala:56)
    at org.locationtech.geomesa.fs.storage.common.metadata.MetadataJson$$anonfun$1$$anonfun$apply$1.apply(MetadataJson.scala:55)
    at org.locationtech.geomesa.utils.io.package$WithClose$.apply(package.scala:64)
    at org.locationtech.geomesa.fs.storage.common.metadata.MetadataJson$$anonfun$1.apply(MetadataJson.scala:55)
    at org.locationtech.geomesa.fs.storage.common.metadata.MetadataJson$$anonfun$1.apply(MetadataJson.scala:55)
Grigory
@pomadchin
hi @zyxgis I guess the easy answer would be to bump the pureconfig version in geomesa; there is also a possible workaround: how do you start your app? do you build an assembly? if so, you can shade dependencies during the assembly build (i.e. rename pureconfig in gt binaries or in geomesa binaries)
gispathfinder
@zyxgis
@pomadchin Thank you for your help
Grigory
@pomadchin
@zyxgis np! Shading is described here https://github.com/sbt/sbt-assembly#shading
gispathfinder
@zyxgis
@pomadchin
Thanks again
I will learn the tech
gispathfinder
@zyxgis

@pomadchin

Thank you very much again

I added the following code to build.sbt

assemblyShadeRules in assembly := Seq(
  // https://github.com/sbt/sbt-assembly#shading
  ShadeRule
    .rename("pureconfig.generic.**" -> "shade.pureconfig.generic.@1")
    .inLibrary("com.github.pureconfig" %% "pureconfig" % "0.11.1")
    .inProject
)

the problem is solved

Grigory
@pomadchin
Perfect!
caoguangshun
@caoguangshun
[image attached]
@pomadchin hi, when I use gt 2.3.1 renderPng works well, like the image above,
but when I change gt to 3.5.0 renderPng looks like this one:
[image attached]
the code is here

def renderImage(tile: MultibandTile, r_band: Int, g_band: Int, b_band: Int): Png = {
  val (red, green, blue) =
    if (tile.cellType == UShortCellType) {
      // Landsat

      // magic numbers. Fiddled with until visually it looked ok. ¯\_(ツ)_/¯
      val (min, max) = (4000, 15176)

      def clamp(z: Int) = {
        if (isData(z)) { if (z > max) { max } else if (z < min) { min } else { z } }
        else { z }
      }

      val red = tile.band(r_band).convert(IntCellType).map(clamp _).normalize(min, max, 0, 255)
      val green = tile.band(g_band).convert(IntCellType).map(clamp _).normalize(min, max, 0, 255)
      val blue = tile.band(b_band).convert(IntCellType).map(clamp _).normalize(min, max, 0, 255)

      (red, green, blue)
    } else {
      // Planet Labs
      (tile.band(0).combine(tile.band(3)) { (z, m) => if (m == 0) 0 else z },
       tile.band(1).combine(tile.band(3)) { (z, m) => if (m == 0) 0 else z },
       tile.band(2).combine(tile.band(3)) { (z, m) => if (m == 0) 0 else z })
    }

  def clampColor(c: Int): Int =
    if (isNoData(c)) { c }
    else {
      if (c < 0) { 0 }
      else if (c > 255) { 255 }
      else c
    }

  // -255 to 255
  val brightness = 15
  def brightnessCorrect(v: Int): Int =
    if (v > 0) { v + brightness }
    else { v }

  // 0.01 to 7.99
  val gamma = 0.8
  val gammaCorrection = 1 / gamma
  def gammaCorrect(v: Int): Int =
    (255 * math.pow(v / 255.0, gammaCorrection)).toInt

  // -255 to 255
  val contrast: Double = 30.0
  val contrastFactor = (259 * (contrast + 255)) / (255 * (259 - contrast))
  def contrastCorrect(v: Int): Int =
    ((contrastFactor * (v - 128)) + 128).toInt

  def adjust(c: Int): Int = {
    if (isData(c)) {
      var cc = c
      cc = clampColor(brightnessCorrect(cc))
      cc = clampColor(gammaCorrect(cc))
      cc = clampColor(contrastCorrect(cc))
      cc
    } else {
      c
    }
  }

  val adjRed = red.map(adjust _)
  val adjGreen = green.map(adjust _)
  val adjBlue = blue.map(adjust _)

  ArrayMultibandTile(adjRed, adjGreen, adjBlue).renderPng
}

Grigory
@pomadchin
Hey @caoguangshun we changed the way we handle nodata in the renderPng function; it now takes into account the cellType and nodata value of the tile you want to render
Set the nodata value to zero to replicate the old behavior
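
(A minimal sketch of that suggestion, reusing the adjRed/adjGreen/adjBlue tiles from the snippet above; withNoData reinterprets the cells so that 0 counts as nodata when rendering.)

import geotrellis.raster.{ArrayMultibandTile, MultibandTile}

// Treat 0 as nodata so renderPng drops those pixels, replicating the pre-3.x behavior.
val masked: MultibandTile =
  ArrayMultibandTile(adjRed, adjGreen, adjBlue).withNoData(Some(0))
val png = masked.renderPng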
Fenno Vermeij
@fennovj
Hey, I'm investigating point cloud data, and geotrellis-pointcloud seems really suitable, but I'm still a newcomer. Is there any documentation, or a 'minimum working example' for geotrellis-pointcloud that I can use for reference?
pinis123
@pinis123

@pomadchin Hello, I used the HadoopCOGLayer mechanism to successfully implement a TMS service. With the COGLayer mechanism, GeoTIFF files (with partial pyramids) are stored in Hadoop, instead of pre-cut 256*256 tiles and Avro-encoded layer files.

However, now I am wondering: if I don't use HadoopCOGLayer but still want to implement similar TMS/WMS services, where the remote sensing images stored in Hadoop are also GeoTIFFs (maybe without partial pyramids, but still GeoTIFFs rather than pre-cut 256*256 tiles and Avro-encoded layer files), I know how to get tiles dynamically for TMS or WMS, but I am not sure whether GeoTrellis has a way to import a GeoTIFF into Hadoop. If there is such a method, I won't have to write my own Hadoop IO code to do it (just like the upload done by the hadoop fs -put shell command).

Finally, a high-level overview of my thoughts: if possible, I might build a partial pyramid. Mostly I want to improve my design and coding ability in this direction and deepen my understanding.

Very grateful
Thank you so much for your continued support
Eugene Cheipesh
@echeipesh
@fennovj if you're comfortable with Spark you should be able to work back from this test in geotrellis-pointcloud; you'll need a local PDAL install with mbio support in order to actually run it as well.
Fenno Vermeij
@fennovj
Thanks, that helps!
Grigory
@pomadchin
Hi @pinis123 tiff.write(Path("some hdfs path"))
^ this allows tiff writes on hdfs / any backend that is supported by the hadoop io
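
(If that implicit write method isn't available in your setup, a minimal sketch that does the same through the plain Hadoop FileSystem API; the paths below are placeholders.)

import geotrellis.raster.io.geotiff.SinglebandGeoTiff
import geotrellis.raster.io.geotiff.writer.GeoTiffWriter
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Read a local GeoTIFF and serialize it back to bytes.
val tiff = SinglebandGeoTiff("/tmp/local.tif")
val bytes = GeoTiffWriter.write(tiff)

// Stream the bytes to HDFS - the programmatic equivalent of `hadoop fs -put`.
val dst = new Path("hdfs://namenode/data/imagery/local.tif")
val fs = dst.getFileSystem(new Configuration())
val out = fs.create(dst)
try out.write(bytes) finally out.close()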
caoguangshun
@caoguangshun
@pomadchin thanks , I have solved it