pulkit97
@pulkit97

Hi Developers, I am working on a road segmentation task in which I have an RGB label raster [(255,0,0) - Road and (0,255,0) - Not_Road]. However, when I train the model, the output is completely black, and during training only the null-class metrics have non-zero values, while the precision, recall, and F1 scores for Road and Not_Road are zero in every epoch, as shown below:

2021-12-12 22:57:24:rastervision.pytorch_learner.learner: INFO - metrics: {'epoch': 4, 'train_loss': 0.1367075890302658, 'train_time': '0:00:01.851892', 'val_loss': 0.214578777551651, 'avg_precision': 1.0, 'avg_recall': 0.5575103759765625, 'avg_f1': 0.7158994078636169, 'Road_precision': 0.0, 'Road_recall': 0.0, 'Road_f1': 0.0, 'Not_Road_precision': 0.0, 'Not_Road_recall': 0.0, 'Not_Road_f1': 0.0, 'null_precision': 1.0, 'null_recall': 0.5575103759765625, 'null_f1': 0.7158994078636169, 'valid_time': '0:00:01.097195' }

Below is my training code:

from rastervision.pipeline.file_system import list_paths
from rastervision.core.rv_pipeline import *
from rastervision.core.backend import *
from rastervision.core.data import *
from rastervision.pytorch_backend import *
from rastervision.pytorch_learner import *

CLASS_NAMES = [
    'Road', 'Not_Road'
]
CLASS_COLORS = [
    'red', 'green'
]

class_config = ClassConfig(names=CLASS_NAMES, colors=CLASS_COLORS)

def make_scene(scene_id, image_uri, label_uri, extent):

    channel_order = [0, 1, 2]

    raster_source = RasterioSourceConfig(
            uris=[image_uri], channel_order=channel_order, extent_crop=extent)

    label_source = SemanticSegmentationLabelSourceConfig(
            rgb_class_config=class_config,
            raster_source=RasterioSourceConfig(uris=[label_uri]))

    label_store = SemanticSegmentationLabelStoreConfig(
            rgb=True, vector_output=[PolygonVectorOutputConfig(class_id=0)])

    return SceneConfig(
        id=scene_id,
        raster_source=raster_source,
        label_source=label_source,
        label_store=label_store,
        aoi_uris=['/opt/data/input/aoi_4326.geojson'])

def get_config(runner) -> SemanticSegmentationConfig:
    root_uri = '/opt/data/output/'
    base_uri = '/opt/data/input'


    image_uri = '/opt/data/input/raster_from_gpkg_4326.tif'
    label_uri = '/opt/data/nas_data/gis_data/VHM_Data_IDP/VHM__Eleonor.tif'

    chip_sz = 250

    dataset = DatasetConfig(
        class_config=class_config,
        train_scenes=[
            make_scene('scene_train', image_uri, label_uri, extent=CropOffsets(skip_bottom=0.30))
            # make_scene('scene_train', image_uri, label_uri)
        ],
        validation_scenes=[
            make_scene('scene_val', image_uri, label_uri, extent=CropOffsets(skip_top=0.70))
            # make_scene('scene_val', image_uri, label_uri)
        ]
        )


    backend = PyTorchSemanticSegmentationConfig(
        data=SemanticSegmentationGeoDataConfig(
            scene_dataset=dataset,
            window_opts=GeoDataWindowConfig(
                method=GeoDataWindowMethod.random,
                size=chip_sz,
                size_lims=(chip_sz, chip_sz + 1),
                max_windows=10)),
        model=SemanticSegmentationModelConfig(backbone=Backbone.resnet50, pretrained=True),
        solver=SolverConfig(lr=1e-4, num_epochs=5, batch_sz=2),
        log_tensorboard=True,
        run_tensorboard=False)


    return SemanticSegmentationConfig(
        root_uri=root_uri,
        dataset=dataset,
        backend=backend,
        train_chip_sz=chip_sz,
        predict_chip_sz=chip_sz)
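[Editor's note: one thing worth checking in a situation like this is whether the label raster's pixel values exactly match the colors in the ClassConfig; RGB label sources generally require exact matches, and any unmatched pixel falls into the null class, which would explain non-zero metrics only for null. The helper below is a hypothetical sketch in plain NumPy, not part of the original post or of Raster Vision's API.]

```python
import numpy as np

# Color map matching the post's ClassConfig:
# 'red' -> (255, 0, 0) -> class 0 (Road), 'green' -> (0, 255, 0) -> class 1 (Not_Road)
COLOR_TO_CLASS = {(255, 0, 0): 0, (0, 255, 0): 1}

def rgb_to_class_ids(labels: np.ndarray, null_class: int = 2) -> np.ndarray:
    """Map an (H, W, 3) RGB label array to class ids; unmatched pixels -> null."""
    out = np.full(labels.shape[:2], null_class, dtype=np.uint8)
    for color, class_id in COLOR_TO_CLASS.items():
        mask = np.all(labels == np.array(color, dtype=labels.dtype), axis=-1)
        out[mask] = class_id
    return out

# A 2x2 toy label image: pure red, pure green, and two off-colors that
# would silently become the null class.
toy = np.array([[[255, 0, 0], [0, 255, 0]],
                [[254, 0, 0], [0, 128, 0]]], dtype=np.uint8)
ids = rgb_to_class_ids(toy)
```

If `np.unique` over the real label raster shows values like (254, 0, 0), e.g. from lossy compression or resampling, every pixel lands in the null class, which is consistent with the all-zero Road/Not_Road metrics.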
10 replies
Adeel Hassan
@AdeelH
ICYMI: Raster Vision recently joined the PyTorch Ecosystem and we wrote an article for the PyTorch blog describing it: https://medium.com/pytorch/raster-vision-a-geospatial-deep-learning-framework-cd69ba840a83
Laetitia Lalla
@laetitialalla_gitlab

Hey everyone. First of all, thanks a lot for this great open-source tool; that's a lot of work! I apologise if my question has been asked before, but I was wondering if it is possible to use a custom-made model with Raster Vision.

I created a specific model for semantic segmentation, trained it on a small number of images, and I want to plug it into the Raster Vision framework to train it with more images. However, when I looked into the docs for SemanticSegmentationModelConfig and ExternalModuleConfig, I got the impression that this only works with models from Torch Hub (is that the case?).

So I tried to load a Torch Hub model (for example, a U-Net like this one: https://pytorch.org/hub/mateuszbuda_brain-segmentation-pytorch_unet/) by changing this in the config:
model=SemanticSegmentationModelConfig(
    external_def=ExternalModuleConfig(
        uri='/opt/src/code/brain-segmentation-pytorch-master.zip',
        name='/opt/src/code/pytorchmodel',
        entrypoint='unet')),

However, I got the following error:
"RuntimeError: Sizes of tensors must match except in dimension 1. Got 36 and 37 in dimension 2 (The offending index is 1)"

Does that mean this model is not compatible with the default resnet50 backbone in SemanticSegmentationModelConfig? How can I plug a custom-made model into the Raster Vision framework? Thanks a lot for your help!
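[Editor's note: that particular size-mismatch error is the classic symptom of feeding a U-Net-style model an input whose side length is not divisible by 2**depth: each max-pool floors the size, so an upsampled decoder tensor no longer matches its skip connection. The toy calculation below is pure arithmetic illustrating the mechanism, not the actual model.]

```python
def unet_skip_sizes(size: int, depth: int = 4):
    """Track encoder feature-map sizes (floor-division pooling) and the
    sizes the decoder produces by doubling on the way back up; return
    every (upsampled, skip) pair that fails to match."""
    enc = [size]
    for _ in range(depth):
        enc.append(enc[-1] // 2)  # max-pool halves, flooring odd sizes
    dec = enc[-1]
    mismatches = []
    for skip in reversed(enc[:-1]):
        dec *= 2  # transposed-conv / upsample doubles
        if dec != skip:
            mismatches.append((dec, skip))
        dec = skip  # pretend we cropped/padded so we can continue
    return mismatches

# 256 is safe (divisible by 2**4); an odd intermediate size is not.
assert unet_skip_sizes(256) == []
print(unet_skip_sizes(75))  # -> [(8, 9), (36, 37), (74, 75)]
```

Note that an input side of 75 produces a 36-vs-37 mismatch, exactly the numbers in the error. Padding the chip size to a multiple of 2**depth (e.g. 256 instead of 250) is the usual workaround; whether that is the cause here depends on the model's internals.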

9 replies
Umang Kalra
@theoway
I want to build a REST API for Raster Vision. I tried to install the packages with pip, but it's too complicated and error-prone.
I thought of using the Docker image, but as far as I know I'd have to use S3 for file storage, since I can't keep files in the container.
I wanted to know whether I can use Azure Blob Storage instead of Amazon S3.
If not, how can I install Raster Vision? I want to build a server that can schedule deep learning jobs through a REST API.
Our whole project is Azure-based, so I can't move to S3 for storage.
Adeel Hassan
@AdeelH

Quoting @lewfish from a previous thread re: Azure storage:

This is sometimes a point of confusion, so I'll try to clarify here. RV has support for running on AWS Batch (allowing you to submit the job locally and then have Batch start and stop instances and parallelize the workload), but it's still possible to run RV locally on an Azure GPU instance. You just need to manually start the instance, install RV, run it, and then stop the instance when it's done. You would need to manually transfer the files on and off the instance; or, if your data is on Azure Blob Storage, the GDAL file system can be used to read/write it directly. Also, there is an extendable Runner abstraction, and it should be possible to write an Azure Batch runner, but that would take some work. https://azure.microsoft.com/en-us/services/batch/
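[Editor's note: to make the GDAL route concrete, GDAL's /vsiaz/ virtual file system reads Azure credentials from environment variables, so blob-storage paths can be used wherever a local path would go. The account name, key, and paths below are placeholders.]

```shell
# GDAL's /vsiaz/ driver picks up Azure credentials from the environment
# (both values below are placeholders, not real credentials):
export AZURE_STORAGE_ACCOUNT=mystorageaccount
export AZURE_STORAGE_ACCESS_KEY='base64-encoded-key=='

# With those set, tools built on GDAL can read blobs directly, e.g.:
#   gdalinfo /vsiaz/my-container/imagery/scene.tif
```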

Laetitia Lalla
@laetitialalla_gitlab

Hello everyone,

thanks again for your work on RV! Following your Quickstart tutorial, I managed to do some semantic segmentation with the default resnet model on satellite images.

Now I'm trying to define some AOIs in my input images, but I cannot make RV understand what I want... The training runs without error, but it uses the whole images and not the AOIs I gave it. So I'm wondering how exactly I should define the AOI.

My images are TIFF, not GeoTIFF, so I don't have georeferencing metadata. Can I somehow use a JSON file with pixel coordinates? Or do I HAVE to use a GeoJSON file for the AOI definition? In that case, what should I put for the coordinates? I tried latitude and longitude following the GeoJSON standard, but it didn't work (I guess the georeferencing metadata needed to make sense of it is missing anyway)...

Thanks a lot for your help !

All the best,
Laetitia
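[Editor's note: one low-tech thing to try while debugging, assuming pixel coordinates are tolerated when the image carries no CRS (which may not hold in every RV version), is to write the AOI as an ordinary GeoJSON polygon whose coordinates are pixel offsets. Generating it from Python avoids hand-editing mistakes; the rectangle below is made up.]

```python
import json

# Hypothetical AOI: a rectangle from pixel (100, 100) to (400, 300).
# GeoJSON rings must be closed, i.e. the first point equals the last.
x0, y0, x1, y1 = 100, 100, 400, 300
aoi = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {},
        "geometry": {
            "type": "Polygon",
            "coordinates": [[[x0, y0], [x1, y0], [x1, y1], [x0, y1], [x0, y0]]],
        },
    }],
}

with open("aoi_pixels.geojson", "w") as f:
    json.dump(aoi, f)
```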

5 replies
Pantelis Monogioudis
@pantelis

Hi everyone,

I am new to this library and am trying the Docker container setup. The tag pytorch-latest does not seem to work. The tag pytorch-0.13.1 works as expected (at least on CPU).

docker run --rm -it \
     -v ${RV_QUICKSTART_CODE_DIR}:/opt/src/code  \
     -v ${RV_QUICKSTART_OUT_DIR}:/opt/data/output \
     quay.io/azavea/raster-vision:pytorch-latest /bin/bash

rastervision run local code/tiny_spacenet.py

It produces:

rastervision run local code/tiny_spacenet.py 
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 250, in <module>
    _main()
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 246, in _main
    main()
  File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 1659, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 175, in run
    pipeline_run_name)
  File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 113, in _run_pipeline
    cfg.update()
  File "/opt/src/rastervision_core/rastervision/core/rv_pipeline/semantic_segmentation_config.py", line 101, in update
    super().update()
  File "/opt/src/rastervision_core/rastervision/core/rv_pipeline/rv_pipeline_config.py", line 100, in update
    self.dataset.update(pipeline=self)
  File "/opt/src/rastervision_core/rastervision/core/data/dataset_config.py", line 28, in update
    s.update(pipeline=pipeline)
  File "/opt/src/rastervision_core/rastervision/core/data/scene_config.py", line 100, in update
    self.label_source.update(pipeline=pipeline, scene=self)
  File "/opt/src/rastervision_core/rastervision/core/data/label_source/semantic_segmentation_label_source_config.py", line 24, in update
    self.rgb_class_config.ensure_null_class()
AttributeError: 'NoneType' object has no attribute 'ensure_null_class'

I am also trying to create an environment for RTX Ampere GPUs (e.g. A4000/A8000). Has anyone modified the Docker container to work with CUDA 11+?

Adeel Hassan
@AdeelH
Hi, thanks for catching that. pytorch-latest should be fixed as soon as azavea/raster-vision#1322 gets merged.

I am also trying to create an environment for RTX Ampere GPUs (e.g. A4000/A8000). Has anyone modified the Docker container to work with CUDA 11+?

There is some discussion about it in this thread that might be useful:
https://gitter.im/azavea/raster-vision?at=611527769484630efa34a27d

3 replies
Umang Kalra
@theoway
I'm running Raster Vision on Azure Container Instances. I have a few questions: 1) What is the RAM (container storage) requirement for RV? 2) How many RV commands can I run in parallel on a single Azure Container Instance?
Umang Kalra
@theoway
Also, I was running the chip classification bundle from the Model Zoo, but I'm getting this error:
rastervision.filesystem.filesystem.NotReadableError: Could not read /tmp/tmph6zxrtcv/tmp8z0ovx7i/package/bundle_config.json
This model is stored locally and mounted into the container.
Umang Kalra
@theoway
I tried reading the bundle over HTTP (from the S3 link given in the documentation), but I still get the same error as above.
Umang Kalra
@theoway
I've explained the issue in more detail in azavea/raster-vision#1338
Umang Kalra
@theoway
I was going through this experiment, where it says processed_uri can be set to an S3 path or a local path to store the predictions and results. But the code only imports from rastervision.pipeline.file_system, not rastervision.aws_s3. I'm confused about whether Raster Vision internally figures out which file system to use, or whether we have to use a particular file system explicitly. I'd really appreciate it if someone could explain this bit to me.
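[Editor's note: the usual design behind this kind of API, sketched generically below and not Raster Vision's actual code, is a registry keyed by URI scheme: the pipeline package exposes the generic interface, plugins register a handler for their scheme, and callers never import a backend directly. All class and function names here are illustrative.]

```python
from urllib.parse import urlparse

# A generic scheme-dispatch registry; names are illustrative,
# not Raster Vision's real classes.
_HANDLERS = {}

def register(scheme):
    def deco(cls):
        _HANDLERS[scheme] = cls()
        return cls
    return deco

@register("")        # no scheme -> local path
@register("file")
class LocalFS:
    def describe(self, uri):
        return f"local file: {uri}"

@register("s3")
class S3FS:
    def describe(self, uri):
        return f"S3 object: {uri}"

def get_fs(uri):
    """Pick a handler from the URI's scheme; callers stay backend-agnostic."""
    scheme = urlparse(uri).scheme
    try:
        return _HANDLERS[scheme]
    except KeyError:
        raise ValueError(f"no file system registered for scheme {scheme!r}")
```

Under this pattern, `get_fs("s3://bucket/key")` returns the S3 handler while `get_fs("/tmp/x.tif")` returns the local one, which is why calling code can stick to the generic `file_system` module.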
1 reply
Umang Kalra
@theoway
Can anyone help me out with this?
Umang Kalra
@theoway
Hi! Just got RV working on my setup and boy, it is amazing! For our project, we'll be migrating to AWS, as RV has better support for it. Nonetheless, as promised, I'll add support for Azure as well.
I just had a question: since I'll be changing the code and Dockerfile, will building the image every time take as long as the first time?
I won't be changing much in the Dockerfile, only adding some Python packages for Azure support. (I'm new to the Docker environment, that's why I'm asking 😁)
Adeel Hassan
@AdeelH
Docker caches what it can. Unless you're changing the requirements, the next builds should be pretty quick.
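[Editor's note: the caching behavior follows from Docker's layer model, where each instruction is a layer that is rebuilt only if it or an earlier layer changed. A generic Dockerfile sketch, with illustrative file names rather than RV's actual Dockerfile, shows the usual ordering trick: install dependencies before copying the source, so day-to-day code edits reuse the cached pip layer.]

```dockerfile
FROM python:3.9-slim

# Install dependencies first: this layer stays cached until
# requirements.txt itself changes.
COPY requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt

# Source-code changes only invalidate the layers from here down,
# so routine edits rebuild in seconds rather than minutes.
COPY . /opt/src
WORKDIR /opt/src
```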
Umang Kalra
@theoway
@AdeelH Thanks for clearing this up for me! 😄
Agano oasis
@kenseii
Hello, noob question here: is it possible to use Raster Vision's segmentation on single-band DTM data? Also, is it advisable to train the model on one big TIFF (>10 GB), or to clip the TIFF into smaller files and use those one by one? Thank you
2 replies
Sagar Khimani
@sagar1899

When I run Raster Vision object detection, I've noticed that when my image is larger than 25000x25000 pixels, Python crashes after some time. Can you help me with this?

Is there a limit on image size in Raster Vision?

I'm using CUDA 10.2 and an NVIDIA GeForce RTX 2060 GPU. When I run the train command, the following error is shown:

OSError: The paging file is too small for this operation to complete. Error loading caffe2_detection_ops_gpu.dll or its dependencies.

Adeel Hassan
@AdeelH
@sagar1899 That looks like a Windows and NVIDIA problem rather than an RV problem. This might be a possible solution: https://github.com/ultralytics/yolov3/issues/1643#issuecomment-939545009
NoaMills
@NoaMills
I'm working through the quickstart. When I run:

docker run --rm -it -p 6006:6006 \
    -v ${RV_QUICKSTART_CODE_DIR}:/opt/src/code \
    -v ${RV_QUICKSTART_EXP_DIR}:/opt/data \
    quay.io/azavea/raster-vision:cpu-0.10 /bin/bash

I get the error:

Unable to find image 'quay.io/azavea/raster-vision:cpu-0.10' locally
docker: Error response from daemon: manifest for quay.io/azavea/raster-vision:cpu-0.10 not found: manifest unknown: manifest unknown.
See 'docker run --help'.

Since the image with tag cpu-0.10 is not listed on the quay.io repo, should I assume this version is no longer supported? I have tried two other tags, namely pytorch-latest and pytorch-e9182ab. Both allow the Docker container to run; however, when I then run:

rastervision run local -p tiny_spacenet.py -n

I get the error:

Usage: python -m rastervision.pipeline.cli run [OPTIONS] RUNNER CFG_MODULE [COMMANDS]...
Try 'python -m rastervision.pipeline.cli run --help' for help.
Error: No such option: -p

I've looked at the documentation for rastervision run and, like the error says, -p isn't an option. What command should I use instead?
2 replies
Vijay
@v_i_j_a_y_1_twitter
Hi, I'm new to Raster Vision. In fact, I'm new to computer vision itself. I need to segment buildings, roads, and water bodies in TIFF images, which is how I came across Raster Vision. Since I'm new to Docker and related tooling, I couldn't follow the documentation. I installed Raster Vision through pip3, but how do I make predictions after installing? Which .py file should I run on my custom image file? I'm confused; could someone please provide a solution?
Umang Kalra
@theoway
I wanted to fix some issues in Raster Vision. So far, I've been making code changes and rebuilding the Docker image to check them every time, which gets frustrating after a while 😅
Is there any way to make the changes in the container itself to see how they go? Or some other way to connect my local repo to the running container so I can see changes in real time?
Adeel Hassan
@AdeelH
When you run the docker image via docker/run, it mounts your RV directory to /opt/src inside the container (https://github.com/azavea/raster-vision/blob/b58985231a7383417459e9fadb3190b480167223/docker/run#L112). This means that any changes you make to the RV source code outside the container will automatically be available inside it without having to rebuild.
Umang Kalra
@theoway
@AdeelH Thanks! That's so cool! I'll get on fixing stuff then! 🦸‍♂️
NoaMills
@NoaMills
I've run the quickstart successfully on my local machine, and now I'm trying to run it remotely on a compute cluster. I was able to run the docker image in a singularity container, but I'm running into errors because I can't download data directly from a compute node. Any advice for how to download the quickstart data separately so I can upload it to the cluster?
1 reply
Adeel Hassan
@AdeelH
New blog post showing how to use Raster Vision on a change detection dataset: https://www.azavea.com/blog/2022/04/18/change-detection-with-raster-vision/
There is also a companion Colab notebook (https://colab.research.google.com/drive/1Ex5xbAx4epzReCQDsqvZLoTpF6AtfnHr?usp=sharing) that you can run to try it out for yourself!
Umang Kalra
@theoway
That's awesome! Kudos to the entire Raster Vision team! ❤️
I was wondering if there's a benchmarking resource: a reference doc that tells you which EC2 instance to choose for a given training data size, how to choose the number of splits for batch mode, and how many parallel predict/run commands can be used on a single instance.
4 replies
NoaMills
@NoaMills

I'm having trouble running the quickstart with singularity instead of docker. I'm hoping to use rastervision with singularity so I can run it on an HPC. Here's a minimally working example:

First, download the docker image with singularity:

singularity pull docker://quay.io/azavea/raster-vision:pytorch-0.13

Then define the environment variables, as in the quickstart tutorial (I opted for shorter variable names):

export RV_OUT=$(pwd)/output
export RV_CODE=$(pwd)/code

Run the container with singularity run:

singularity run -B ${RV_CODE}:/opt/src/code -B ${RV_OUT}:/opt/data/output raster-vision_pytorch-0.13.sif

Then call rastervision:

rastervision run local /opt/src/code/tiny_spacenet.py

I run into an error because Singularity, unlike Docker, has a read-only file system: only directories that are bind-mounted can be written to. The quickstart code attempts to access /opt/data/data-cache, which does not exist in the container and cannot be created. Any advice on how I can use Raster Vision with Singularity's read-only filesystem?

The full output with all the warnings and errors is too long to include in one message, so here are the excerpts I believe are most useful:

2022-04-22 12:42:17:rastervision.pipeline.rv_config: WARNING - Root temporary directory cannot be used: /opt/data/tmp. Using root: /tmp/tmp1uf8xk_g (this warning is repeated four times with different temporary root directory names)

OSError: [Errno 30] Read-only file system: '/opt/data/data-cache'
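[Editor's note: a common workaround, sketched below and untested against this exact image, is to bind writable host directories over every path the pipeline writes to, so the read-only image never has to be modified. The rv-cache and rv-tmp directory names are made up; the container paths come from the errors quoted above.]

```shell
# Create writable host directories for everything RV wants to write:
mkdir -p rv-cache rv-tmp

singularity run \
  -B ${RV_CODE}:/opt/src/code \
  -B ${RV_OUT}:/opt/data/output \
  -B $(pwd)/rv-cache:/opt/data/data-cache \
  -B $(pwd)/rv-tmp:/opt/data/tmp \
  raster-vision_pytorch-0.13.sif
```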

9 replies
Sagar Khimani
@sagar1899

Hi,
I'm running Raster Vision 0.12 with Python 3.6.2 on an NVIDIA GeForce RTX 2080 (6 GB, 1920 cores).

The input image is 15000x15000, the predict batch size is 1, and num_workers is 4 (the default); prediction takes 7 minutes.
When I change the predict batch size to 8 or 16, it takes the same amount of time (it uses more GPU memory, but the time is unchanged).
Can you help me reduce prediction time? Which parameters should I change?

Sagar Khimani
@sagar1899
Can someone help me?
NoaMills
@NoaMills
I'm trying to run the spacenet_vegas example here and am having some trouble. Unlike the quickstart, this example passes raw_uri and root_uri arguments into the get_config function, whereas in the quickstart the URIs are hardcoded in the body of get_config. I tried setting default values for these arguments, but I ran into some weird errors. How should I go about specifying raw_uri and root_uri?
1 reply
Sagar Khimani
@sagar1899
I'm using RV 0.12 and Python 3.6.2 on a local Windows system. When I run the command python -m rastervision.pipeline.cli predict "model path" "image_path" "output_path", the following error is shown: Exception: pipeline in model bundle must have predict method. Can you help?
NoaMills
@NoaMills

I'm having trouble running the Raster Vision examples with the test.py script from the GitHub repo. Here's what I do from the cloned repo:

docker run --rm -it raster-vision-pytorch /bin/bash
python "rastervision_pytorch_backend/rastervision/pytorch_backend/examples/test.py" \
    run "spacenet-rio-cc" \
    --remote

Then I get an error saying: FileNotFoundError: [Errno 2] No such file or directory: 'rastervision': 'rastervision'
I can, however, navigate to the directory /opt/src/rastervision_pytorch_backend/rastervision, so I'm confused why the directory can't be found. I've also tried without the --remote flag and I get the same error.

1 reply
alicia
@ecologis
Is anyone consulting on Raster Vision? I'm a noob and willing to pay for training and help applying it to my problem.
1 reply