arixrobotics
@arixrobotics
Hello, I'm trying to set up AWS using the guide here https://docs.rastervision.io/en/0.12/cloudformation.html
However, when I upload the template.yml file in CloudFormation, this error pops up:
The following resource types are not supported for resource import: AWS::ECR::Repository,AWS::Batch::JobDefinition,AWS::EC2::LaunchTemplate,AWS::Batch::JobDefinition,AWS::Batch::JobDefinition,AWS::Batch::JobDefinition,AWS::EC2::LaunchTemplate,AWS::Batch::ComputeEnvironment,AWS::Batch::ComputeEnvironment,AWS::Batch::JobQueue,AWS::Batch::JobQueue
That's quite a number of unsupported resources... am I missing something here?
2 replies
wassi12
@wassi12
Good evening
1 reply
arixrobotics
@arixrobotics
The output of the predict command is a .tif image - is it possible to output a shapefile or geojson file or other formats? The idea is to view this output as a new layer on top of the raster.
2 replies
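(A minimal sketch of one way to do that with plain rasterio, outside Raster Vision: polygonize the predicted raster into GeoJSON that QGIS can load as a layer. It assumes the prediction stores class ids in band 1; paths and the background class id are placeholders.)

import json

import rasterio
from rasterio import features

with rasterio.open('prediction.tif') as src:
    preds = src.read(1)  # band of predicted class ids
    feats = [
        {'type': 'Feature',
         'properties': {'class_id': int(val)},
         'geometry': geom}
        for geom, val in features.shapes(preds, transform=src.transform)
        if val != 0]  # assumes class 0 is background

# Note: the coordinates are in the raster's CRS, not necessarily WGS84.
with open('prediction.geojson', 'w') as f:
    json.dump({'type': 'FeatureCollection', 'features': feats}, f)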
Andrés Veiro
@veiro

Hi, I'm new to all of this. I am trying to run the local example with the command "rastervision predict https://s3.amazonaws.com/azavea-research-public-data/raster-vision/examples/model-zoo-0.12/spacenet-vegas-buildings-ss/model-bundle.zip https://s3.amazonaws.com/azavea-research-public-data/raster-vision/examples/model-zoo-0.12/spacenet-vegas-buildings-ss/1929.tif prediction.tif"
It gives me the following error:

File "/opt/src/rastervision_aws_s3/rastervision/aws_s3/s3_file_system.py", line 182, in write_bytes
raise NotWritableError ('Could not write {}'. format (uri)) from e
rastervision.pipeline.file_system.file_system.NotWritableError: Could not write s3: //raster-vision-lf-dev/examples/spacenet-vegas-buildings-ss/output_6_27c/predict/1332-0-polygons.json
root @ 89c196e3f178: / opt / src # "

Does anyone know how to fix it?

Christos
@Charmatzis
@veiro did you setup the right data paths?
5 replies
EinCB
@luqinghui
[image attachment: image.png]
2 replies
Saadiq Mohiuddin
@smohiudd
I'm using v0.12. When I use the predict CLI, it works fine with no errors. But when I try to load the Predictor class myself with Predictor(model_bundle, tmp_dir, update_stats, channel_order), I keep getting the following error: rastervision.pipeline.config.ConfigError: The plugin_versions field contains an unrecognized plugin name: rastervision.core. Not sure why I'm getting an error with the Predictor class when the CLI works fine.
2 replies
Saadiq Mohiuddin
@smohiudd
The error seems to be originating from config_dict = upgrade_config(config_dict) in rastervision.core.predictor
Jerome Maleski
@jeromemaleski
I would like to see an online workshop/training for rastervision. Is this a possibility?
Jerome Maleski
@jeromemaleski
I have rastervision working with my data. I am using segmentation on NAIP imagery, starting from the Potsdam example script. The defaults from the Potsdam script worked great and I am getting recall in the 90s for most of my classes! One thing I noticed is that there are prediction problems along the chip seam lines. Is there any way to add a chip buffer to the pipeline? I would like it to use a buffer around the chip when predicting. I also noticed that when I run rastervision predict it only runs on the CPU and only on one core. There does not seem to be an option to run on the GPU or on multiple cores. Is there a way to run predict on the GPU or on several cores?
6 replies
Jerome Maleski
@jeromemaleski
I want to get back the areas from the segmentation predictions, so I need something to convert the raster classes to area sums or to vector geometries. Does anyone have any suggestions for QGIS, GDAL or Python tools to do this?
4 replies
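(A minimal sketch of the per-class area computation with plain rasterio/numpy; it assumes the prediction raster stores class ids in band 1 and uses a projected CRS, and the path is a placeholder.)

import numpy as np
import rasterio

with rasterio.open('prediction.tif') as src:
    labels = src.read(1)
    # pixel area = |pixel width * pixel height| in the raster's CRS units
    pixel_area = abs(src.transform.a * src.transform.e)

for class_id, count in zip(*np.unique(labels, return_counts=True)):
    print('class {}: {:.1f} square units'.format(class_id, count * pixel_area))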
Jerome Maleski
@jeromemaleski
I'm running prediction on NAIP imagery. They can deliver it as uncompressed .tif or compressed .jp2. I would really like to use .jp2 because it is 10 times smaller, but raster-vision predict takes 4 times longer to run: on a .tif tile predict finishes in 5 min, but on a .jp2 tile it finishes in 22 min. Do you know what is causing this difference?
1 reply
Jerome Maleski
@jeromemaleski
I would like to create a training loop where you can take the prediction output, make some manual corrections, then feed it back to the model. I can take the raster output, convert it to a shapefile in QGIS and then try to edit the polygons, but the editing in QGIS is not very good. It would be nice if you could just sort of 'paint' your corrections. Does anyone know of software for doing this? Groundwork would be a nice interface, but I can't upload raster results to edit.
1 reply
Troy Chan
@cschan279

I have tried to follow the instructions in the 0.12 example
to run the following:

rastervision run local rastervision.pytorch_backend.examples.chip_classification.spacenet_rio  -a raw_uri $RAW_URI -a processed_uri $PROCESSED_URI -a root_uri $ROOT_URI -a test True --splits 2

However, I get exceptions as below:

File "/opt/src/rastervision_pytorch_backend/rastervision/pytorch_backend/examples/utils.py", line 95, in save_image_crop
    geojson_vs_config = GeoJSONVectorSourceConfig(uri=label_uri)
  File "pydantic/main.py", line 274, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for GeoJSONVectorSourceConfig
default_class_id
  field required (type=value_error.missing)

I am running those on a docker as below:
docker run -it --gpus all -e DISPLAY=$DISPLAY --volume="$HOME/.Xauthority:/root/.Xauthority:rw" --net=host -p 5000:5000 -v ${RV_QUICKSTART_CODE_DIR}:/opt/src/code -v ${RV_QUICKSTART_OUT_DIR}:/opt/data/output -v ${RV_QUICKSTART_DATA_SET_DIR}:/opt/data/spacenet-dataset/ --name raster-vision quay.io/azavea/raster-vision:pytorch-0.12 bash
Can anyone tell me how to fix it?

4 replies
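(For reference, the validation error above comes from default_class_id being required on GeoJSONVectorSourceConfig in 0.12; a config that passes validation sets it explicitly, as in this sketch with a placeholder uri.)

from rastervision.core.data import GeoJSONVectorSourceConfig

# default_class_id is the class assigned to features that lack a class_id
# property; the 0.12 validator requires it to be set.
vector_source = GeoJSONVectorSourceConfig(
    uri='labels.geojson',  # placeholder
    default_class_id=0,
    ignore_crs_field=True)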
Humberto Yances
@hyances_twitter

Hi! I'm trying to install raster-vision on Ubuntu 20.04 with $ pip3 install rastervision --user, and an issue with NumPy appears during the process:

INFO: pip is looking at multiple versions of numpy to determine which version is compatible with other requirements. This could take a while.
Collecting numpy<1.17
  Using cached numpy-1.16.5.zip (5.1 MB)
  Using cached numpy-1.16.4.zip (5.1 MB)
  Using cached numpy-1.16.3.zip (5.1 MB)
...
  Using cached numpy-1.6.1.zip (3.4 MB)
    ERROR: Command errored out with exit status 1:
     command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-fjj4j8dh/numpy_3e3d59a0af8e474aac1dcfea217a7e8b/setup.py'"'"'; __file__='"'"'/tmp/pip-install-fjj4j8dh/numpy_3e3d59a0af8e474aac1dcfea217a7e8b/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-c36dbayo
         cwd: /tmp/pip-install-fjj4j8dh/numpy_3e3d59a0af8e474aac1dcfea217a7e8b/
...
      File "/tmp/pip-install-fjj4j8dh/numpy_3e3d59a0af8e474aac1dcfea217a7e8b/build/py3k/numpy/distutils/misc_util.py", line 743, in __init__
        raise ValueError("%r is not a directory" % (package_path,))
    ValueError: 'build/py3k/numpy' is not a directory
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

Can you please tell me what I'm doing wrong? Thanks!

2 replies
asoxien
@asoxien
Hi everyone. I am using the given object detection example that detects cars. How can I configure it for multiple object classes? For example, my shapefile has two ids, 0 and 1, for cars and buildings, but the program just considers the default class id, which is 0:
label_source = ObjectDetectionLabelSourceConfig(
    vector_source=GeoJSONVectorSourceConfig(
        uri=label_uri,
        default_class_id=0,
        ignore_crs_field=True))
1 reply
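(A hedged sketch of a two-class setup: each GeoJSON feature can carry its own class_id property matching the ClassConfig ordering, and default_class_id is only the fallback for features without one. Paths are placeholders.)

from rastervision.core.data import (ClassConfig, GeoJSONVectorSourceConfig,
                                    ObjectDetectionLabelSourceConfig)

class_config = ClassConfig(names=['car', 'building'])  # class ids 0 and 1

label_source = ObjectDetectionLabelSourceConfig(
    vector_source=GeoJSONVectorSourceConfig(
        uri='labels.geojson',   # features carry class_id = 0 or 1
        default_class_id=0,     # fallback only, for features without class_id
        ignore_crs_field=True))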
Christos
@Charmatzis
@asoxien hello, I think the classes can be configured using a DatasetConfig https://docs.rastervision.io/en/0.12/api.html?highlight=datasetconfig#datasetconfig
asoxien
@asoxien
@Charmatzis Hello, yes it should be configured in DatasetConfig, but how? All I can find is this:
vector_source=GeoJSONVectorSourceConfig(uri=label_uri, default_class_id=0, ignore_crs_field=True)
but the problem is that default_class_id is not optional and it only considers the default ids in the shapefile. If it is not set in GeoJSONVectorSourceConfig I get an error, even though the docs list it as Optional[int]!
asoxien
@asoxien
@Charmatzis another question please: each time I run the script object_detection.py, it does the analyzing and shaping before the training process, which takes about 1h30, even if I just changed nb_steps. How can I avoid this? Thanks
1 reply
asoxosolox
@asox_gitlab
@Charmatzis @lewfish every time I execute the program it re-runs the chip command. How can I skip this command if it was already done in the first execution? Thanks
2 replies
asoxosolox
@asox_gitlab
@lossyrob @Rabscuttler I want to run rastervision on a local machine/server to deal with Pleiades images. What do you recommend in terms of hardware? Is there any preconfigured machine for this kind of workload? Links to recommendations would be appreciated, thanks.
1 reply
Tobias1234
@Tobias1234
Docker on Windows 10 question: did anyone succeed in installing rastervision in Docker with GPU support on Windows 10? Seems like a cumbersome combo?
Ashwin Nair
@ashnair1
Hi, recently came across raster-vision. I have two questions:
  1. Does it allow for training models on non geo-referenced datasets and running inference on a geotiff?
  2. The README says raster-vision has detectron2 integration. How is this done?
    Thanks
3 replies
Tobias1234
@Tobias1234
Do I have to label every object in an image tile? For instance, must all buildings in a tile be annotated to get a good training result?
I am doing a land use classification (semantic classification).
1 reply
Tobias1234
@Tobias1234
Did anyone succeed in pip installing the latest version of rastervision? I tried pip installing from a virtual conda environment: first installing geopandas, Fiona, GDAL, ... from .whl files, then trying to pip install the latest version of rastervision, but without success so far.
1 reply
asoxien
@asoxien
Hi everyone, when I use the PyTorch image it takes about 10 min for every iteration. With the TensorFlow image and the same configuration, it does about 700 iterations every 10 minutes. In both cases I use the same machine and the same configuration. Is this normal?
2 replies
Tobias1234
@Tobias1234

Hi everyone!
I am running rastervision in Docker Desktop on Windows 10. I am trying to run the tiny_spacenet.py code as in the example. It says there is no such file or directory:
[Errno 2] No such file or directory: 'code/tiny_spacenet.py'
Why is that?
Workflow:

  1. set RV_QUICKSTART_CODE_DIR="C:\Users\tt742\RV2\code"
     set RV_QUICKSTART_OUT_DIR="C:\Users\tt742\RV2\output"
     mkdir RV_QUICKSTART_CODE_DIR
     mkdir RV_QUICKSTART_OUT_DIR
     docker run --rm -it --name devtest4 --mount type=bind,source="C:/Users/tt742/RV2"/code,target=/code --mount type=bind,source="C:/Users/tt742/RV2"/output,target=/output quay.io/azavea/raster-vision:pytorch-0.12 /bin/bash

  2. Testing that the path is writable/readable:
     touch C:/Users/tt742/RV2/code/zzzz1234

  3. Running rastervision run local code/tiny_spacenet.py

root@c27276280db9:/opt/src# rastervision run local code/tiny_spacenet.py
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 233, in <module>
    main()
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 156, in run
    cfgs = get_configs(cfg_module, runner, args)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 62, in get_configs
    spec.loader.exec_module(cfg_module)
  File "<frozen importlib._bootstrap_external>", line 674, in exec_module
  File "<frozen importlib._bootstrap_external>", line 780, in get_code
  File "<frozen importlib._bootstrap_external>", line 832, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'code/tiny_spacenet.py'
root@c27276280db9:/opt/src# rastervision run local -p code/tiny_spacenet.py
Error: no such option: -p
root@c27276280db9:/opt/src# rastervision run local -p tiny_spacenet.py
Error: no such option: -p
root@c27276280db9:/opt/src# rastervision run local code/tiny_spacenet.py
Traceback (most recent call last):
  (same traceback as above, truncated)

2 replies
Ashwin Nair
@ashnair1
Is 1.2 the latest pytorch version that raster-vision is compatible with?
1 reply
Tobias1234
@Tobias1234
Hello, I am running the Potsdam example (https://github.com/azavea/raster-vision/blob/0.12/rastervision_pytorch_backend/rastervision/pytorch_backend/examples/semantic_segmentation/isprs_potsdam.py). I get an error message running "rastervision run local code/semantic_segmentation2.py -a raw_uri C:\Users\tt742\RV2\opt\src\code\dataset -a processed_uri C:\Users\tt742\RV2\opt\src\code\processed-data -a root_uri C:\Users\tt742\RV2\code\opt\src\local-output-a test True --splits 2". The error is "AttributeError: 'SemanticSegmentation' object has no attribute 'test'
C:\Users\tt742\RV2\code\opt\src\local-output-a/Makefile:6: recipe for target '0' failed
make: *** [0] Error 1". Why is that?
1 reply
Ashwin Nair
@ashnair1

Running the following command in the rastervision container

root@b2d81089d4d5:/opt/src# rastervision run local rastervision_pytorch_backend/rastervision/pytorch_backend/examples/chip_classification/spacenet_rio.py     -a raw_uri $RAW_URI -a processed_uri $PROCESSED_URI -a root_uri $ROOT_URI     -a test True --splits 2

throws the following error

Saving test crop to /opt/data/examples/spacenet/rio/processed-data/crops/013022232022.tif...
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 233, in <module>
    main()
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 156, in run
    cfgs = get_configs(cfg_module, runner, args)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 74, in get_configs
    cfgs = _get_configs(runner, **args)
  File "rastervision_pytorch_backend/rastervision/pytorch_backend/examples/chip_classification/spacenet_rio.py", line 72, in get_config
    train_scenes = [make_scene(info) for info in train_scene_info]
  File "rastervision_pytorch_backend/rastervision/pytorch_backend/examples/chip_classification/spacenet_rio.py", line 72, in <listcomp>
    train_scenes = [make_scene(info) for info in train_scene_info]
  File "rastervision_pytorch_backend/rastervision/pytorch_backend/examples/chip_classification/spacenet_rio.py", line 49, in make_scene
    class_config=class_config)
  File "/opt/src/rastervision_pytorch_backend/rastervision/pytorch_backend/examples/utils.py", line 95, in save_image_crop
    geojson_vs_config = GeoJSONVectorSourceConfig(uri=label_uri)
  File "pydantic/main.py", line 274, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for GeoJSONVectorSourceConfig
default_class_id
  field required (type=value_error.missing)

However, if I switch the test argument to False this error doesn't occur. Does anyone know why this happens?

1 reply
Ashwin Nair
@ashnair1
For semantic segmentation, what format does raster-vision expect the labels to be in? I have binary segmentation maps (0 or 255) and was wondering if I need to modify them before passing them into SemanticSegmentationLabelSourceConfig
3 replies
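(If the label raster needs to hold class indices (0/1) rather than 0/255 - which is my reading of the semantic segmentation label source - a one-off remap with plain rasterio/numpy like this sketch would do it; the input/output paths are placeholders.)

import numpy as np
import rasterio

with rasterio.open('mask_0_255.tif') as src:
    profile = src.profile
    mask = src.read(1)

# 255 -> 1, everything else -> 0
remapped = (mask == 255).astype(np.uint8)

profile.update(count=1, dtype='uint8')
with rasterio.open('mask_0_1.tif', 'w', **profile) as dst:
    dst.write(remapped, 1)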
Ashwin Nair
@ashnair1
Given a geotiff and a geojson, does rastervision have the functionality to generate an MS-COCO dataset (image tiles/chips and an annotation file)?
4 replies
Tobias1234
@Tobias1234
Hi, I am trying to train on an RGB image. I have done labeling in GroundWork (I have geojson and tiff images). My idea is to elaborate on the tiny_spacenet example to train on my local data. The channel order [0, 1, 2] should be correct in my case? Do I need to add some environment parameter, or is it something I need to change in the Python code, different from the template? I end up with the rasterio error: rasterio.errors.RasterioIOError: C://Users/tt742/RV2/train.tif: No such file or directory. This is how my code looks:

def get_config(runner):
    root_uri = '/opt/data/output/'
    base_uri = 'C://Users/tt742/RV2'
    train_image_uri = '{}/train.tif'.format(base_uri)
    train_label_uri = '{}/train.geojson'.format(base_uri)
    val_image_uri = '{}/val_image.tif'.format(base_uri)
    val_label_uri = '{}/val_label.geojson'.format(base_uri)
    channel_order = [0, 1, 2]
1 reply
Tobias1234
@Tobias1234
My data is in WGS84
RichardScottOZ
@RichardScottOZ
Hi @lewfish - a suggestion on twitter that you have approached large scale Sentinel dataset building. Here's my current use case https://discourse.pangeo.io/t/best-practices-for-automating-large-scale-sentinel-dataset-building-and-machine-learning/1161/15 - TLDR state/country scale Sentinel 2 mosaics for downstream geology/mineral ML
Tobias1234
@Tobias1234
Hi! How should I best train a model to detect roads? As vector label input data, should I use geojson line objects or polygons? Everything is connected in a road network, which results in one single polygon if using polygons. I tried that but the result wasn't that good. Should I instead use lines with line_bufs={0: XX}? I refer to azavea/raster-vision#711.
2 replies
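(A hedged sketch of the line_bufs option mentioned above, applied to a GeoJSONVectorSourceConfig for LineString road labels; the uri and buffer amount are placeholders.)

from rastervision.core.data import GeoJSONVectorSourceConfig

vector_source = GeoJSONVectorSourceConfig(
    uri='roads.geojson',  # placeholder: LineString road labels
    default_class_id=0,
    ignore_crs_field=True,
    line_bufs={0: 15})    # buffer class-0 lines by 15 units before rasterizing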
Tobias1234
@Tobias1234
Doing transfer learning: I don't understand how to set the init_weights field to the model file I previously created (section 11.3 in the RV documentation). I am familiar with saving and loading pretrained models from cmd in Keras, but this is a somewhat different approach. Where is the model_config file?
2 replies
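(A hedged sketch of setting init_weights on the model config; the weights path is a placeholder and the exact file name produced by a previous train run may differ in your version.)

from rastervision.pytorch_learner import (Backbone,
                                          SemanticSegmentationModelConfig)

model = SemanticSegmentationModelConfig(
    backbone=Backbone.resnet50,
    # hypothetical path to the .pth weights saved by an earlier train run
    init_weights='/opt/data/output/train/last-model.pth')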
Tobias1234
@Tobias1234

Hello everyone! I have an example with one scene that I would like to elaborate on (tiny_spacenet.py with local data). Let's say I want 3 training scenes and one validation scene (no test splits).
How should I specify the training data/labels in the code below? I have my images and labels in two different folders under "base_uri". I couldn't use the RV experimental examples since I got error messages importing some of the modules,
but I really just need something simple.

from os.path import join

from rastervision.core.rv_pipeline import *
from rastervision.core.backend import *
from rastervision.core.data import *
from rastervision.pytorch_backend import *
from rastervision.pytorch_learner import *


def get_config(runner):
    root_uri = '/opt/data/output/'
    base_uri = '/opt/data/data_input'
    train_image_uri = '{}/train.tif'.format(base_uri)
    train_label_uri = '{}/labels2.geojson'.format(base_uri)
    val_image_uri = '{}/val_image2.tif'.format(base_uri)
    val_label_uri = '{}/val_label2.geojson'.format(base_uri)
    channel_order = [0, 1, 2]
    class_config = ClassConfig(
        names=['building', 'background'], colors=['red', 'black'])

    def make_scene(scene_id, image_uri, label_uri):
        """
        - StatsTransformer is used to convert uint16 values to uint8.
        - The GeoJSON does not have a class_id property for each geom,
          so it is inferred as 0 (ie. building) because the default_class_id
          is set to 0.
        - The labels are in the form of GeoJSON which needs to be rasterized
          to use as label for semantic segmentation, so we use a RasterizedSource.
        - The rasterizer set the background (as opposed to foreground) pixels
          to 1 because background_class_id is set to 1.
        """
        raster_source = RasterioSourceConfig(
            uris=[image_uri],
            channel_order=channel_order,
            transformers=[StatsTransformerConfig()])
        vector_source = GeoJSONVectorSourceConfig(
            uri=label_uri, default_class_id=0, ignore_crs_field=True)
        label_source = SemanticSegmentationLabelSourceConfig(
            raster_source=RasterizedSourceConfig(
                vector_source=vector_source,
                rasterizer_config=RasterizerConfig(background_class_id=1)))
        return SceneConfig(
            id=scene_id,
            raster_source=raster_source,
            label_source=label_source)

    dataset = DatasetConfig(
        class_config=class_config,
        train_scenes=[
            make_scene('scene_206', train_image_uri, train_label_uri)
        ],
        validation_scenes=[
            make_scene('scene_26', val_image_uri, val_label_uri)
        ])

    # Use the PyTorch backend for the SemanticSegmentation pipeline.
    chip_sz = 500
    backend = PyTorchSemanticSegmentationConfig(
        model=SemanticSegmentationModelConfig(backbone=Backbone.resnet50),
        solver=SolverConfig(lr=1e-4, num_epochs=5, batch_sz=2))
    chip_options = SemanticSegmentationChipOptions(
        window_method=SemanticSegmentationWindowMethod.random_sample,
        chips_per_scene=10)

    return SemanticSegmentationConfig(
        root_uri=root_uri,
        dataset=dataset,
        backend=backend,
        train_chip_sz=chip_sz,
        predict_chip_sz=chip_sz,
        chip_options=chip_options)
3 replies
Blessings Hadebe
@blessings-h

Hi everyone! I'm having a problem with object detection where I'm getting a (very!) large number of predicted boxes, often with lots of overlap between them. I've set the merge threshold pretty low in the predict options, but the boxes returned have far greater overlap than this.

task = rv.TaskConfig.builder(rv.OBJECT_DETECTION) \
                            .with_chip_size(250) \
                            .with_classes(['tree']) \
                            .with_chip_options(neg_ratio=1.0,
                                               ioa_thresh=0.8,
                                               window_method = 'chip') \
                            .with_predict_options(merge_thresh=0.1,
                                                  score_thresh=0.6) \
                            .build()

Any thoughts on where the problem may lie?

2 replies
Tobias1234
@Tobias1234
Hi, is it possible to use more than 3 channels in the latest release of RV? I have an image with 4 channels [R, G, B, Elevation]. Can I train with 4 channels, or should I use for instance channel_order = [0, 1, 3] (R, G, Elevation)?
1 reply
Tobias1234
@Tobias1234
Hi! There seems to be some difference between the master branch (latest) and quay.io/azavea/raster-vision:pytorch-0.12. I get an error, "pydantic.error_wrappers.ValidationError: 1 validation error for PyTorchSemanticSegmentationConfig data field required (type=value_error.missing)", running the modified tiny_spacenet code below on latest, but it works in the 0.12 version. I want to try training with 4 channels, so I need the master branch. I wonder what causes the error in the latest version?
Code:
from os.path import join

from rastervision.core.rv_pipeline import *
from rastervision.core.backend import *
from rastervision.core.data import *
from rastervision.pytorch_backend import *
from rastervision.pytorch_learner import *


def get_config(runner):
    root_uri = '/opt/data/output/'
    train_image_uris = ['/opt/data/data_input/images/1.tif',
                        '/opt/data/data_input/images/2.tif']
    train_label_uris = ['/opt/data/data_input/labels/1.geojson',
                        '/opt/data/data_input/labels/2.geojson']
    train_scene_ids = ['1', '2']
    train_scene_list = list(zip(train_scene_ids, train_image_uris, train_label_uris))

    val_image_uri = '/opt/data/data_input/images/3.tif'
    val_label_uri = '/opt/data/data_input/labels/3.geojson'
    val_scene_id = '3'
    channel_order = [0, 1, 2]

    train_scenes_input = []

    class_config = ClassConfig(
        names=['building', 'background'], colors=['red', 'black'])

    def make_scene(scene_id, image_uri, label_uri):
        raster_source = RasterioSourceConfig(
            uris=[image_uri],
            channel_order=channel_order,
            transformers=[StatsTransformerConfig()])
        vector_source = GeoJSONVectorSourceConfig(
            uri=label_uri, default_class_id=0, ignore_crs_field=True)
        label_source = SemanticSegmentationLabelSourceConfig(
            raster_source=RasterizedSourceConfig(
                vector_source=vector_source,
                rasterizer_config=RasterizerConfig(background_class_id=1)))
        return SceneConfig(
            id=scene_id,
            raster_source=raster_source,
            label_source=label_source)

    for scene in train_scene_list:
        train_scenes_input.append(make_scene(*scene))

    dataset = DatasetConfig(
        class_config=class_config,
        train_scenes=train_scenes_input,
        validation_scenes=[
            make_scene(val_scene_id, val_image_uri, val_label_uri)
        ])

    # Use the PyTorch backend for the SemanticSegmentation pipeline.
    chip_sz = 500
    backend = PyTorchSemanticSegmentationConfig(
        model=SemanticSegmentationModelConfig(backbone=Backbone.resnet50),
        solver=SolverConfig(lr=1e-4, num_epochs=5, batch_sz=2))
    chip_options = SemanticSegmentationChipOptions(
        window_method=SemanticSegmentationWindowMethod.random_sample,
        chips_per_scene=10)

    return SemanticSegmentationConfig(
        root_uri=root_uri,
        dataset=dataset,
        backend=backend,
        train_chip_sz=chip_sz,
        predict_chip_sz=chip_sz,
        chip_options=chip_options)
7 replies
Tobias1234
@Tobias1234
Hi, I have some problems with my script trying 4-channel semantic segmentation with RGB and elevation: "Training dataset has fewer elements than batch size". I also posted my issue here: azavea/raster-vision#1114. I wonder if it's a data or code issue. I am using Docker, local data and the master branch.
Tobias1234
@Tobias1234
Hi, when using polygons for road labels/training I get the error message: rastervision.core.box.BoxsizeError: size of random square cannot be >= width (or height). Is the shape of the polygons incorrect? Too big?
Tobias1234
@Tobias1234
Testing the predict command by first defining/mounting local paths with "docker run --ipc=host --rm -it --name devtest --mount type=bind,source="C:/Users/tobbe/RV2/RV_BUNDLE_DIR",target=/opt/src/bundle --mount type=bind,source="C:/Users/tobbe/RV2/RV_INFILE_DIR",target=/opt/src/infile --mount type=bind,source="C:/Users/tobbe/RV2/RV_OUTFILE_DIR",target=/opt/data/outfile quay.io/azavea/raster-vision:pytorch-0.12 /bin/bash", then running the command "rastervision predict /opt/src/bundle/model-bundle.zip /opt/src/infile/1.tif /opt/data/outfile/test.tif". The error message says method is missing. Did I miss anything?
1 reply
Tobias1234
@Tobias1234
Hi, I am getting an "unpickling stack underflow" error about 50% of the time when running the script here: azavea/raster-vision#1114. Any suggestions on why this happens?
Simon Planzer
@SPlanzer
Hello, I am trying to deploy AWS infrastructure via the raster-vision CloudFormation template and run Batch jobs in an AWS account that only has private subnets available. The jobs all run fine with public subnets but hang in a "runnable" state in the private subnets. I have added my IPv4 CIDR for the private subnet to the CloudFormation 'CidrRange' parameter. Are there any other modifications I need to make to the CloudFormation template to get the batch jobs to execute successfully via AWS Batch with private subnets?
2 replies
Adeel Hassan
@AdeelH
Raster Vision 0.13 is now released! This release adds a bunch of new features that make RV even more flexible.
For more details, check out: https://docs.rastervision.io/en/0.13/changelog.html
Tobias1234
@Tobias1234

I am trying to follow the Potsdam example with my own data:
https://github.com/azavea/raster-vision/blob/0.13/rastervision_pytorch_backend/rastervision/pytorch_backend/examples/semantic_segmentation/isprs_potsdam.py

I have raster images and labels as vector data with 3 different classes: roads, buildings and trees.
How do I make a similar RGB label image used as input for training as in the Potsdam example?

I am trying to understand the documentation text below:
"In the ISPRS Potsdam example, the following code is used to explicitly create a LabelStore
that writes out the predictions in “RGB” format, where the color of
each pixel represents the class, and predictions of class 0 (ie. car)
are also written out as polygons."

1 reply
Lewis Fishgold
@lewfish
@Tobias1234 If you want to rasterize overlapping polygons see https://rasterio.readthedocs.io/en/latest/topics/features.html#burning-shapes-into-a-raster "The geometries will be rasterized by the “painter’s algorithm” - geometries are handled in order and later geometries will overwrite earlier values." So if you want a polygon to have precedence you will need to put it at the end of the GeoJSON file.
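(A minimal sketch of that rasterio.features approach, with geopandas for reading the vectors; paths, class ids and draw order are placeholders, and the GeoJSONs are assumed to already be in the raster's CRS. Shapes later in the list overwrite earlier ones where they overlap, per the painter's algorithm described above.)

import geopandas as gpd
import rasterio
from rasterio import features

with rasterio.open('image.tif') as src:   # raster that defines the grid/CRS
    out_shape = (src.height, src.width)
    transform = src.transform
    profile = src.profile

shapes = []
for class_id, path in [(1, 'roads.geojson'),       # drawn first
                       (2, 'trees.geojson'),
                       (3, 'buildings.geojson')]:   # drawn last, wins overlaps
    gdf = gpd.read_file(path)
    shapes += [(geom, class_id) for geom in gdf.geometry]

labels = features.rasterize(shapes, out_shape=out_shape, transform=transform,
                            fill=0, dtype='uint8')  # 0 = background

profile.update(count=1, dtype='uint8')
with rasterio.open('labels.tif', 'w', **profile) as dst:
    dst.write(labels, 1)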
@Tobias1234 For elevation, I think it makes sense to use the height relative to some local baseline like the median elevation of the city. (But I'm really not sure what best practices are for processing Lidar to elevation).