asoxien
@asoxien
@Charmatzis Hello, yes it should be configured in DatasetConfig, but how? All I can find is this: vector_source=GeoJSONVectorSourceConfig(
uri=label_uri, default_class_id=0, ignore_crs_field=True). The problem is that default_class_id is not optional and it only considers the default IDs in the shapefile. If it is not set in GeoJSONVectorSourceConfig I get an error, even though the docs say it is Optional[int]!
asoxien
@asoxien
@Charmatzis Another question, please: every time I run the object_detection.py script, it runs the analyze and chip stages before training, which take about 1h30, even if I only changed nb_steps. How can I avoid this? Thanks
1 reply
asoxosolox
@asox_gitlab
@Charmatzis @lewfish Every time I execute the program it runs the chip command. How can I skip this command if it was already done in the first execution? Thanks
2 replies
asoxosolox
@asox_gitlab
@lossyrob @Rabscuttler I want to run rastervision on a local machine/server to deal with Pleiades images. What do you recommend in terms of hardware? Is there any machine preconfigured to deal with things like this? Please share links if you have recommendations. Thanks
1 reply
Tobias1234
@Tobias1234
Docker on Windows 10 question: did anyone succeed in installing rastervision on Docker using GPU on Windows 10? Seems like a cumbersome combo?
Ashwin Nair
@ashnair1
Hi, recently came across raster-vision. I have two questions:
  1. Does it allow for training models on non geo-referenced datasets and running inference on a geotiff?
  2. The README says raster-vision has detectron2 integration. How is this done?
    Thanks
3 replies
Tobias1234
@Tobias1234
Do I have to label every object in an image tile? For instance, must all buildings in a tile be annotated to get a good training result?
I am doing a land use classification (semantic classification).
1 reply
Tobias1234
@Tobias1234
Did anyone succeed in pip-installing the latest version of rastervision? I tried pip install from a virtual conda environment: first installing geopandas, Fiona, GDAL, ... from .whl files, then trying to pip install the latest version of rastervision, but without success so far.
1 reply
asoxien
@asoxien
Hi everyone, when I use the PyTorch image it takes about 10 minutes per iteration. With the TensorFlow image and the same configuration, it does about 700 iterations every 10 minutes. I use the same machine and the same configuration in both cases. Is this normal?
2 replies
Tobias1234
@Tobias1234

Hi everyone!
I am running rastervision with Docker Desktop on Windows 10. I am trying
to run the tiny_spacenet.py code as in the example. It fails with:
[Errno 2] No such file or directory: 'code/tiny_spacenet.py'
Why is that?
Workflow:

  1. set RV_QUICKSTART_CODE_DIR="C:\Users\tt742\RV2\code"
    set RV_QUICKSTART_OUT_DIR="C:\Users\tt742\RV2\output"
    mkdir RV_QUICKSTART_CODE_DIR
    mkdir RV_QUICKSTART_OUT_DIR
    docker run --rm -it --name devtest4 --mount type=bind,source="C:/Users/tt742/RV2"/code,target=/code --mount type=bind,source="C:/Users/tt742/RV2"/output,target=/output quay.io/azavea/raster-vision:pytorch-0.12 /bin/bash

  2. Testing that the path is writable/readable:
    touch C:/Users/tt742/RV2/code/zzzz1234

  3. Running rastervision run local code/tiny_spacenet.py

root@c27276280db9:/opt/src# rastervision run local code/tiny_spacenet.py
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 233, in <module>
    main()
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 156, in run
    cfgs = get_configs(cfg_module, runner, args)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 62, in get_configs
    spec.loader.exec_module(cfg_module)
  File "<frozen importlib._bootstrap_external>", line 674, in exec_module
  File "<frozen importlib._bootstrap_external>", line 780, in get_code
  File "<frozen importlib._bootstrap_external>", line 832, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'code/tiny_spacenet.py'
root@c27276280db9:/opt/src# rastervision run local -p code/tiny_spacenet.py
Error: no such option: -p
root@c27276280db9:/opt/src# rastervision run local -p tiny_spacenet.py
Error: no such option: -p
root@c27276280db9:/opt/src# rastervision run local code/tiny_spacenet.py
(same traceback as above)
2 replies
Ashwin Nair
@ashnair1
Is 1.2 the latest pytorch version that raster-vision is compatible with?
1 reply
Tobias1234
@Tobias1234
Hello, I am running the Potsdam example (https://github.com/azavea/raster-vision/blob/0.12/rastervision_pytorch_backend/rastervision/pytorch_backend/examples/semantic_segmentation/isprs_potsdam.py). I get an error running "rastervision run local code/semantic_segmentation2.py -a raw_uri C:\Users\tt742\RV2\opt\src\code\dataset -a processed_uri C:\Users\tt742\RV2\opt\src\code\processed-data -a root_uri C:\Users\tt742\RV2\code\opt\src\local-output -a test True --splits 2". The error is: "AttributeError: 'SemanticSegmentation' object has no attribute 'test'
C:\Users\tt742\RV2\code\opt\src\local-output/Makefile:6: recipe for target '0' failed
make: *** [0] Error 1". Why is that?
1 reply
Ashwin Nair
@ashnair1

Running the following command in the rastervision container

root@b2d81089d4d5:/opt/src# rastervision run local rastervision_pytorch_backend/rastervision/pytorch_backend/examples/chip_classification/spacenet_rio.py     -a raw_uri $RAW_URI -a processed_uri $PROCESSED_URI -a root_uri $ROOT_URI     -a test True --splits 2

throws the following error

Saving test crop to /opt/data/examples/spacenet/rio/processed-data/crops/013022232022.tif...
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 233, in <module>
    main()
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 156, in run
    cfgs = get_configs(cfg_module, runner, args)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 74, in get_configs
    cfgs = _get_configs(runner, **args)
  File "rastervision_pytorch_backend/rastervision/pytorch_backend/examples/chip_classification/spacenet_rio.py", line 72, in get_config
    train_scenes = [make_scene(info) for info in train_scene_info]
  File "rastervision_pytorch_backend/rastervision/pytorch_backend/examples/chip_classification/spacenet_rio.py", line 72, in <listcomp>
    train_scenes = [make_scene(info) for info in train_scene_info]
  File "rastervision_pytorch_backend/rastervision/pytorch_backend/examples/chip_classification/spacenet_rio.py", line 49, in make_scene
    class_config=class_config)
  File "/opt/src/rastervision_pytorch_backend/rastervision/pytorch_backend/examples/utils.py", line 95, in save_image_crop
    geojson_vs_config = GeoJSONVectorSourceConfig(uri=label_uri)
  File "pydantic/main.py", line 274, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for GeoJSONVectorSourceConfig
default_class_id
  field required (type=value_error.missing)

However, if I switch the test argument to False this error doesn't occur. Does anyone know why this happens?

1 reply
Ashwin Nair
@ashnair1
For semantic segmentation, what format does raster-vision expect the labels to be in? I have binary segmentation maps (0 or 255) and was wondering if I need to modify them before passing them into SemanticSegmentationLabelSourceConfig.
3 replies
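(Not from the thread, just a sketch:) if the labels need to be class indices rather than 0/255 values, a NumPy remap like the following is one way to do it. Whether RV needs this depends on the version and configuration, so treat it as an assumption:

```python
import numpy as np

def remap_binary_mask(mask: np.ndarray) -> np.ndarray:
    """Map a {0, 255} binary mask to {0, 1} class indices."""
    return (mask == 255).astype(np.uint8)

mask = np.array([[0, 255], [255, 0]], dtype=np.uint8)
remapped = remap_binary_mask(mask)  # values are now 0/1 class indices
```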
Ashwin Nair
@ashnair1
Given a GeoTIFF and a GeoJSON, does rastervision have the functionality to generate an MS-COCO dataset (image tiles/chips and an annotation file)?
4 replies
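For reference, an MS-COCO annotation file is plain JSON with images, annotations, and categories arrays; a minimal skeleton looks like the following (field names follow the COCO format; all concrete values here are made up for illustration):

```python
import json

coco = {
    "images": [
        # one entry per image tile/chip
        {"id": 1, "file_name": "chip_0001.png", "width": 512, "height": 512},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixel coordinates of the chip
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [10, 20, 100, 80], "area": 100 * 80, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "building"},
    ],
}

annotation_json = json.dumps(coco, indent=2)
```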
Tobias1234
@Tobias1234
Hi, I am trying to train on an RGB image. I have done labeling in GroundWork (I have GeoJSON and TIFF images). My idea is to elaborate on the tiny_spacenet example to train on my local data. The channel order [0, 1, 2] should be correct in my case? Do I need to add some environment variable, or is it something I need to change in the Python code relative to the template? I end up with the rasterio error: rasterio.errors.RasterioIOError: C://Users/tt742/RV2/train.tif: No such file or directory. This is what my code looks like:

def get_config(runner):
    root_uri = '/opt/data/output/'
    base_uri = 'C://Users/tt742/RV2'
    train_image_uri = '{}/train.tif'.format(base_uri)
    train_label_uri = '{}/train.geojson'.format(base_uri)
    val_image_uri = '{}/val_image.tif'.format(base_uri)
    val_label_uri = '{}/val_label.geojson'.format(base_uri)
    channel_order = [0, 1, 2]
1 reply
Tobias1234
@Tobias1234
My data is in WGS84
RichardScottOZ
@RichardScottOZ
Hi @lewfish - there was a suggestion on Twitter that you have tackled large-scale Sentinel dataset building. Here's my current use case: https://discourse.pangeo.io/t/best-practices-for-automating-large-scale-sentinel-dataset-building-and-machine-learning/1161/15 - TL;DR: state/country-scale Sentinel-2 mosaics for downstream geology/mineral ML
Tobias1234
@Tobias1234
Hi! How should I best train a model to detect roads? As vector label input data, should I use GeoJSON line objects or polygons? Everything is connected in a road network, which results in one single polygon if using polygons. I tried that but the result wasn't that good. Should I rather use lines with line_bufs={0: XX}? I refer to azavea/raster-vision#711.
2 replies
Tobias1234
@Tobias1234
Doing transfer learning: I don't understand how to set the init_weights field to point to the model file I previously created (11.3 in the RV documentation). I am familiar with saving and loading pretrained models from the command line in Keras, but this is a somewhat different approach. Where is the model_config file?
2 replies
Tobias1234
@Tobias1234

Hello everyone! I have an example with one scene that I would like to elaborate on (tiny_spacenet.py with local data). Let's say I want 3 training scenes and one validation scene (no test splits).
How should I specify the training data/labels based on the code below? I have my images and labels in two different folders under base_uri. I couldn't use the RV experimental examples since I got error messages importing some of the modules,
but I really just need something simple.

from os.path import join

from rastervision.core.rv_pipeline import *
from rastervision.core.backend import *
from rastervision.core.data import *
from rastervision.pytorch_backend import *
from rastervision.pytorch_learner import *


def get_config(runner):
    root_uri = '/opt/data/output/'
    base_uri = '/opt/data/data_input'
    train_image_uri = '{}/train.tif'.format(base_uri)
    train_label_uri = '{}/labels2.geojson'.format(base_uri)
    val_image_uri = '{}/val_image2.tif'.format(base_uri)
    val_label_uri = '{}/val_label2.geojson'.format(base_uri)
    channel_order = [0, 1, 2]
    class_config = ClassConfig(
        names=['building', 'background'], colors=['red', 'black'])

    def make_scene(scene_id, image_uri, label_uri):
        """
        - StatsTransformer is used to convert uint16 values to uint8.
        - The GeoJSON does not have a class_id property for each geom,
          so it is inferred as 0 (ie. building) because the default_class_id
          is set to 0.
        - The labels are in the form of GeoJSON which needs to be rasterized
          to use as label for semantic segmentation, so we use a RasterizedSource.
        - The rasterizer sets the background (as opposed to foreground) pixels
          to 1 because background_class_id is set to 1.
        """
        raster_source = RasterioSourceConfig(
            uris=[image_uri],
            channel_order=channel_order,
            transformers=[StatsTransformerConfig()])
        vector_source = GeoJSONVectorSourceConfig(
            uri=label_uri, default_class_id=0, ignore_crs_field=True)
        label_source = SemanticSegmentationLabelSourceConfig(
            raster_source=RasterizedSourceConfig(
                vector_source=vector_source,
                rasterizer_config=RasterizerConfig(background_class_id=1)))
        return SceneConfig(
            id=scene_id,
            raster_source=raster_source,
            label_source=label_source)

    dataset = DatasetConfig(
        class_config=class_config,
        train_scenes=[
            make_scene('scene_206', train_image_uri, train_label_uri)
        ],
        validation_scenes=[
            make_scene('scene_26', val_image_uri, val_label_uri)
        ])

    # Use the PyTorch backend for the SemanticSegmentation pipeline.
    chip_sz = 500
    backend = PyTorchSemanticSegmentationConfig(
        model=SemanticSegmentationModelConfig(backbone=Backbone.resnet50),
        solver=SolverConfig(lr=1e-4, num_epochs=5, batch_sz=2))
    chip_options = SemanticSegmentationChipOptions(
        window_method=SemanticSegmentationWindowMethod.random_sample,
        chips_per_scene=10)

    return SemanticSegmentationConfig(
        root_uri=root_uri,
        dataset=dataset,
        backend=backend,
        train_chip_sz=chip_sz,
        predict_chip_sz=chip_sz,
        chip_options=chip_options)
3 replies
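One way to build the list of training scenes from two folders is to glob the files and pair them by filename stem. This is just a plain-Python sketch; the folder layout, file extensions, and the make_scene signature are assumed from the message above:

```python
from pathlib import Path

def pair_scenes(image_dir: str, label_dir: str):
    """Pair .tif images with .geojson labels that share a filename stem."""
    images = {p.stem: p for p in sorted(Path(image_dir).glob('*.tif'))}
    labels = {p.stem: p for p in sorted(Path(label_dir).glob('*.geojson'))}
    common = sorted(images.keys() & labels.keys())
    return [(stem, str(images[stem]), str(labels[stem])) for stem in common]

# Each (scene_id, image_uri, label_uri) tuple could then be fed to make_scene:
# train_scenes = [make_scene(*t) for t in pair_scenes(image_dir, label_dir)]
```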
Blessings Hadebe
@blessings-h

Hi everyone! I'm having a problem with object detection where I'm getting a (very!) large number of predicted boxes, often with lots of overlap between them. I've set the merge threshold pretty low in the predict options, but the boxes returned have far greater overlap than this.

task = rv.TaskConfig.builder(rv.OBJECT_DETECTION) \
                            .with_chip_size(250) \
                            .with_classes(['tree']) \
                            .with_chip_options(neg_ratio=1.0,
                                               ioa_thresh=0.8,
                                               window_method = 'chip') \
                            .with_predict_options(merge_thresh=0.1,
                                                  score_thresh=0.6) \
                            .build()

Any thoughts on where the problem may lie?

2 replies
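When debugging this kind of issue, it can help to measure the overlap of the returned boxes directly and compare it to the configured merge_thresh. A quick IoU computation in plain Python (not RV's internal merging code, just a diagnostic):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [xmin, ymin, xmax, ymax] boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

overlap = iou([0, 0, 10, 10], [5, 5, 15, 15])  # 25 / 175, about 0.143
```

If pairs of surviving boxes show IoU well above the merge threshold, the merging step is indeed not behaving as configured.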
Tobias1234
@Tobias1234
Hi, is it possible to use more than 3 channels in the latest release of RV? I have an image with 4 channels [R, G, B, Elevation]. Can I train with 4 channels, or should I use, for instance, channel_order = [0, 1, 3] (R, G, Elevation)?
1 reply
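As I understand it, channel_order is just a list of band indices to select; the effect is the same as fancy-indexing the channel axis of an array. A toy NumPy illustration (not RV code):

```python
import numpy as np

# A fake 4-band chip: height x width x [R, G, B, Elevation]
chip = np.zeros((2, 2, 4), dtype=np.uint8)
chip[..., 3] = 42  # mark the elevation band

channel_order = [0, 1, 3]  # keep R, G and Elevation, drop B
subset = chip[..., channel_order]  # shape becomes (2, 2, 3)
```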
Tobias1234
@Tobias1234
Hi! Seems there's some difference in the master branch (latest) vs the quay.io/azavea/raster-vision:pytorch-0.12 image. I get an error, "pydantic.error_wrappers.ValidationError: 1 validation error for PyTorchSemanticSegmentationConfig data field required (type=value_error.missing)", running the modified tiny_spacenet code below on latest, but it works in the 0.12 version. I will try training with 4 channels, so I need the master branch. I wonder what causes the error in the latest version?
Code:
from os.path import join

from rastervision.core.rv_pipeline import *
from rastervision.core.backend import *
from rastervision.core.data import *
from rastervision.pytorch_backend import *
from rastervision.pytorch_learner import *


def get_config(runner):
    root_uri = '/opt/data/output/'
    train_image_uris = ['/opt/data/data_input/images/1.tif',
                        '/opt/data/data_input/images/2.tif']
    train_label_uris = ['/opt/data/data_input/labels/1.geojson',
                        '/opt/data/data_input/labels/2.geojson']
    train_scene_ids = ['1', '2']
    train_scene_list = list(zip(train_scene_ids, train_image_uris, train_label_uris))

    val_image_uri = '/opt/data/data_input/images/3.tif'
    val_label_uri = '/opt/data/data_input/labels/3.geojson'
    val_scene_id = '3'
    channel_order = [0, 1, 2]

    train_scenes_input = []

    class_config = ClassConfig(
        names=['building', 'background'], colors=['red', 'black'])

    def make_scene(scene_id, image_uri, label_uri):
        raster_source = RasterioSourceConfig(
            uris=[image_uri],
            channel_order=channel_order,
            transformers=[StatsTransformerConfig()])
        vector_source = GeoJSONVectorSourceConfig(
            uri=label_uri, default_class_id=0, ignore_crs_field=True)
        label_source = SemanticSegmentationLabelSourceConfig(
            raster_source=RasterizedSourceConfig(
                vector_source=vector_source,
                rasterizer_config=RasterizerConfig(background_class_id=1)))
        return SceneConfig(
            id=scene_id,
            raster_source=raster_source,
            label_source=label_source)

    for scene in train_scene_list:
        train_scenes_input.append(make_scene(*scene))

    dataset = DatasetConfig(
        class_config=class_config,
        train_scenes=train_scenes_input,
        validation_scenes=[
            make_scene(val_scene_id, val_image_uri, val_label_uri)
        ])

    # Use the PyTorch backend for the SemanticSegmentation pipeline.
    chip_sz = 500
    backend = PyTorchSemanticSegmentationConfig(
        model=SemanticSegmentationModelConfig(backbone=Backbone.resnet50),
        solver=SolverConfig(lr=1e-4, num_epochs=5, batch_sz=2))
    chip_options = SemanticSegmentationChipOptions(
        window_method=SemanticSegmentationWindowMethod.random_sample,
        chips_per_scene=10)

    return SemanticSegmentationConfig(
        root_uri=root_uri,
        dataset=dataset,
        backend=backend,
        train_chip_sz=chip_sz,
        predict_chip_sz=chip_sz,
        chip_options=chip_options)
7 replies
Tobias1234
@Tobias1234
Hi, I have some problems with my script trying 4-channel semantic segmentation with RGB and elevation: "Training dataset has fewer elements than batch size". I also posted my issue here: azavea/raster-vision#1114. I wonder if it's a data or code issue. I am using Docker, local data, and the master branch.
Tobias1234
@Tobias1234
Hi, when using polygons for road labels/training I get the error message: rastervision.core.box.BoxsizeError: size of random square cannot be >= width (or height). Is the shape of the polygons incorrect? Too big?
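That error message suggests a scene (or label extent) is smaller than the requested random-window size. A quick sanity check like the following, in plain Python with hypothetical scene dimensions, can confirm which scenes are too small before running the pipeline:

```python
def scenes_too_small(chip_sz: int, scene_sizes: dict) -> list:
    """Return the scenes whose width or height is <= the chip size."""
    return [scene for scene, (w, h) in scene_sizes.items()
            if w <= chip_sz or h <= chip_sz]

# Hypothetical scene dimensions in pixels:
sizes = {'scene_1': (2000, 1500), 'scene_2': (300, 2500)}
offenders = scenes_too_small(500, sizes)  # ['scene_2']
```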
Tobias1234
@Tobias1234
Testing the predict command by first defining/mounting local paths: "docker run --ipc=host --rm -it --name devtest --mount type=bind,source="C:/Users/tobbe/RV2/RV_BUNDLE_DIR",target=/opt/src/bundle --mount type=bind,source="C:/Users/tobbe/RV2/RV_INFILE_DIR",target=/opt/src/infile --mount type=bind,source="C:/Users/tobbe/RV2/RV_OUTFILE_DIR",target=/opt/data/outfile quay.io/azavea/raster-vision:pytorch-0.12 /bin/bash", and then running the command "rastervision predict /opt/src/bundle/model-bundle.zip /opt/src/infile/1.tif /opt/data/outfile/test.tif". The error message says a method is missing. Did I miss anything?
1 reply
Tobias1234
@Tobias1234
Hi, I am getting an "unpickling stack underflow" error about 50% of the time when running the script here: azavea/raster-vision#1114. Any suggestions on why this happens?
Simon Planzer
@SPlanzer
Hello, I am trying to deploy AWS infrastructure via the raster-vision CloudFormation template and run Batch jobs in an AWS account that only has private subnets available. The jobs all run fine with public subnets but hang in a "runnable" state in the private subnets. I have added my IPv4 CIDR for the private subnet to the CloudFormation 'CidrRange' parameter. Are there any other modifications I need to make to the CloudFormation template to get the batch jobs to execute successfully via AWS Batch with private subnets?
2 replies
Adeel Hassan
@AdeelH
Raster Vision 0.13 is now released! This release adds a bunch of new features that make RV even more flexible.
For more details, check out: https://docs.rastervision.io/en/0.13/changelog.html
Tobias1234
@Tobias1234

I am trying to follow the Potsdam example with my own data:
https://github.com/azavea/raster-vision/blob/0.13/rastervision_pytorch_backend/rastervision/pytorch_backend/examples/semantic_segmentation/isprs_potsdam.py

I have raster images and labels as vector data with 3 different labels: roads, buildings, and trees.
How do I make an RGB label image similar to the one used as input for training in the Potsdam example?

I am trying to understand the documentation text below:
"In the ISPRS Potsdam example, the following code is used to explicitly create a LabelStore
that writes out the predictions in “RGB” format, where the color of
each pixel represents the class, and predictions of class 0 (ie. car)
are also written out as polygons."

1 reply
Lewis Fishgold
@lewfish
@Tobias1234 If you want to rasterize overlapping polygons see https://rasterio.readthedocs.io/en/latest/topics/features.html#burning-shapes-into-a-raster "The geometries will be rasterized by the “painter’s algorithm” - geometries are handled in order and later geometries will overwrite earlier values." So if you want a polygon to have precedence you will need to put it at the end of the GeoJSON file.
@Tobias1234 For elevation, I think it makes sense to use the height relative to some local baseline like the median elevation of the city. (But I'm really not sure what best practices are for processing Lidar to elevation).
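The "painter's algorithm" ordering can be illustrated without rasterio: later shapes simply overwrite earlier pixels, so whichever geometry comes last in the file wins where they overlap. A toy NumPy version with rectangles (not rasterio's implementation):

```python
import numpy as np

def paint(canvas, rects):
    """Burn rectangles (value, row0, row1, col0, col1) in order; later wins."""
    for value, r0, r1, c0, c1 in rects:
        canvas[r0:r1, c0:c1] = value
    return canvas

canvas = np.zeros((4, 4), dtype=np.uint8)
# Class 1 is burned first, class 2 second; they overlap at rows/cols 1:3.
paint(canvas, [(1, 0, 3, 0, 3), (2, 1, 4, 1, 4)])
# In the overlap region the later geometry (class 2) overwrote class 1.
```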
Tobias1234
@Tobias1234
Hi, I am running a semantic segmentation pipeline with 4 layers (Docker and the master branch). It runs well but freezes at "rastervision.pytorch_learner.learner: INFO - Plotting predictions...". Still no error message after many hours, but it keeps working without using much CPU. Can it be that the chip size is too big? Hard to predict?
3 replies
Jian Shi
@shijianjian
Hi, Kornia has recently released patch augmentation, as documented here: https://kornia-tutorials.readthedocs.io/en/latest/data_patch_sequential.html. Do you think this could somehow benefit rastervision?
1 reply
BO_Zai
@LinNiu
Hi sir! How do I predict?
[screenshot: image.png]
Why is the command gone?
2 replies
Simon Planzer
@SPlanzer

Is it possible to use raster-vision across AWS accounts (compute and data in separate accounts)? I have been given read access to data in a private S3 bucket I want to train with. This has been set up so that my own AWS account can assume a role created for me in the data account. I have tested this via the AWS S3 CLI and it works as expected (i.e. I can assume the role and access the data). I am now trying to deploy the raster-vision compute resources to my AWS account and read the data from the other account's private bucket, but am failing to do so. I have added a policy to my BatchInstanceIAMRole to allow AssumeRole for the cross-account data role, but my compute environment cannot access the read-only cross-account bucket that is outside my compute account.

Is it possible to have data (in a private S3 bucket) and compute in separate AWS accounts with raster-vision as it is? Or do I need to look at modifying the Python source code to assume the role?

1 reply
Silas Frantz
@s-frantz
Having generalized something super similar for segmentation via GDAL/RasterIO & TensorFlow, I was wondering if you'd be willing to share what you see as the biggest weaknesses / growth areas for raster-vision? I feel like it would be difficult to keep up with the latest advances on the DL side (e.g. band selection, data prep for transfer learning, strategies for class imbalance, loss functions, etc. etc.), but relatively easier to provide stable support on the vanilla GIS data >> image pipeline side. Would have liked to give raster-vision a try originally, but was led down the path of custom development by a healthy skepticism of leading GIS software and a bet on the future of TensorFlow.
3 replies
Jerome Maleski
@jeromemaleski

Hi, I just git-cloned and built the latest rastervision, and when I try to run the Potsdam example I get this error:

AttributeError: module 'torchvision.models.segmentation.segmentation' has no attribute '_segm_resnet'
/opt/src/data/output/Makefile:6: recipe for target '0' failed
make: *** [0] Error 1

Is it possible that this is a version conflict, with the right way to load the model having changed?

my environment:
PyTorch version: 1.9.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A

OS: Ubuntu 16.04.7 LTS (x86_64)
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.10

Python version: 3.7.6 | packaged by conda-forge | (default, Mar 23 2020, 23:03:20) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.11.0-25-generic-x86_64-with-debian-stretch-sid
Is CUDA available: True
CUDA runtime version: 10.2.89
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 460.91.03
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.4
[pip3] torch==1.9.0
[pip3] torchvision==0.10.0
[conda] numpy 1.19.4 py37h7e9df27_1 conda-forge
[conda] torch 1.9.0 pypi_0 pypi
[conda] torchvision 0.10.0 pypi_0 pypi

3 replies
Jerome Maleski
@jeromemaleski
In the Potsdam script, is the data normalized? I could not figure out where in the script the command to normalize the data is. It looks like it skips that step if you do not augment? If I do add the means, I need to add normalized means, but I do not need to normalize the data before running?
19 replies
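For reference, the usual torchvision-style normalization is: scale pixels to [0, 1], then subtract per-channel means and divide by per-channel stds that are themselves expressed on the [0, 1] scale (the ImageNet values below are the common defaults). Whether the Potsdam script does exactly this is the question above; this is just the general recipe sketched in NumPy:

```python
import numpy as np

# ImageNet channel means/stds, expressed on the [0, 1] scale.
MEANS = np.array([0.485, 0.456, 0.406])
STDS = np.array([0.229, 0.224, 0.225])

def normalize(img_uint8: np.ndarray) -> np.ndarray:
    """uint8 HWC image -> float array with roughly zero mean, unit variance."""
    x = img_uint8.astype(np.float32) / 255.0
    return (x - MEANS) / STDS

img = np.full((2, 2, 3), 128, dtype=np.uint8)
out = normalize(img)  # shape (2, 2, 3), values centered near 0
```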
Lewis Fishgold
@lewfish
If you use recent changes to the master branch of RV with your existing CPU Batch compute environment, you will get an out of space error. To fix this, re-create your compute environment, which will use a new version 2 of the Amazon Linux AMI. See azavea/raster-vision#1212
Daniel Volarik
@Daniel7V_gitlab
Can raster-vision work with the Nvidia RTX 30 series (Ampere architecture)? I found that raster-vision version 0.13.1 uses CUDA 10.2, but the RTX 30 series needs CUDA 11.2. Is it possible to build the raster-vision Docker image with the newer CUDA 11.2?
6 replies
BO_Zai
@LinNiu
Hi sir! Can you tell me how to make labels for semantic segmentation?
1 reply
Sagar Khimani
@sagar1899
Can you mention what the commercial uses of Raster Vision are (object detection, chip classification, semantic segmentation)?
2 replies
In which fields can we use object detection and chip classification?
Sagar Khimani
@sagar1899

Currently I have 16 GB RAM and the chip size is 1000.
The training dataset contains 1200 chips of size 1000x1000.
And I'm getting a memory error. Batch size is 16.

It is unable to allocate 9 GB of RAM; should I buy more RAM, or something like that?

So can you mention the maximum recommended chip size?

I'm using a single-band image for training in object detection. The chips are generated single-band, but when I run the train command the following error is shown:

x = torch.tensor(x).permute(2, 0, 1).float() / 255.0
RuntimeError: number of dims don't match in permute

Can I use a single-band image in object detection training?
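The permute error happens because a single-band chip is likely loaded as a 2-D (H, W) array, while permute(2, 0, 1) expects three dims (H, W, C). Adding an explicit channel axis first fixes the shape mismatch; illustrated here in NumPy (the actual RV loading code may differ):

```python
import numpy as np

chip = np.zeros((64, 64), dtype=np.uint8)  # single-band chip: (H, W), 2 dims

# transpose((2, 0, 1)) on a 2-D array hits the same "dims don't match" problem,
# so add a channel axis first: (H, W) -> (H, W, 1), then reorder to CHW.
chip_hwc = chip[..., np.newaxis]
chip_chw = chip_hwc.transpose(2, 0, 1)  # shape (1, 64, 64)
```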

Sagar Khimani
@sagar1899

I'm trying to train uint8-datatype images. In previous comments you mentioned using StatsTransformer, so I made the following changes to the cowc_potsdam.py file:

raster_source = RasterioSourceConfig(
    channel_order=[0, 1, 2],
    uris=[img_path],
    transformers=[StatsTransformerConfig()])

But when I run the chip command it requires a "stats.json" file.
Can you please explain what "stats.json" is and what it contains?
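As far as I understand, stats.json is produced by the pipeline's analyze stage and holds the per-channel statistics that StatsTransformer uses to rescale the imagery. Conceptually it is something like the following (the exact schema and computation may differ by RV version, so treat this as a sketch):

```python
import json
import numpy as np

# A stack of sampled chips: (num_chips, H, W, channels)
chips = np.random.default_rng(0).integers(0, 256, size=(10, 32, 32, 3))

# Per-channel statistics over all pixels of all sampled chips.
stats = {
    'means': chips.mean(axis=(0, 1, 2)).tolist(),
    'stds': chips.std(axis=(0, 1, 2)).tolist(),
}
stats_json = json.dumps(stats)  # roughly what ends up in stats.json
```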