Saadiq Mohiuddin
@smohiudd

Hello, I'm looking for some help with predictions using the Predictor class. I wasn't able to find any examples in the documentation or GitHub.

I'm using the PyTorch SpaceNet Vegas Buildings predict package from the model zoo along with the sample image.

I define a new instance of the Predictor class and then predict on an image:

predict = rv.Predictor("predict_package.zip", "tmp").predict("1929.tif")

The return is: rastervision.data.label.semantic_segmentation_labels.SemanticSegmentationLabels

How do I access an array of labels from my prediction?

Looking at the SemanticSegmentationLabels class I thought it was something like this:

windows = predict.get_windows()
label_array = predict.get_label_array(windows[0])

But I'm getting this error:

ActivationError: RasterSource must be activated before use

Can anyone please let me know if I'm on the right track or going about this the wrong way? Thanks!

ninexy
@ninexy
@lewfish yeah, I define register_plugin in my class like this:
def register_plugin(plugin_registry):
    plugin_registry.register_aux_command(PREPROCESS,
                                         PreProcessCommand)
g2giovanni
@g2giovanni

@g2giovanni There isn't any direct way to make predictions using a model file. A workaround (as you observed) is to modify an existing predict package, which contains the necessary metadata. The files in the zip file have to be at the root of the zip file, and I think that might be causing the problem. Another potentially deeper issue is that the model file needs to have just the weights state_dict, not the whole pickled model. (See https://pytorch.org/tutorials/beginner/saving_loading_models.html). And if you supply the weights, they need to correspond to a model architecture that Raster Vision knows about (i.e. the ones used by the PyTorch backends).

Thank you @lewfish. My first problem, actually, is that I want to make a prediction using a custom model architecture not included in the PyTorch backends.
I'm planning to develop a new backend plugin in order to load custom PyTorch models, and a simple CLI python module to create a mock bundle_config.json file.
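The state_dict distinction in the quoted reply can be sketched in PyTorch (nn.Linear stands in here for an architecture RV actually knows about, and the file name is arbitrary):

```python
import torch
import torch.nn as nn

# Stand-in model; in RV this would be one of the architectures
# the PyTorch backends know about (e.g. a torchvision model).
model = nn.Linear(2, 2)

# Save only the weights state_dict, not the whole pickled model.
torch.save(model.state_dict(), 'weights.pth')

# To load, instantiate the same architecture first, then restore the weights.
restored = nn.Linear(2, 2)
restored.load_state_dict(torch.load('weights.pth'))
assert torch.equal(model.weight, restored.weight)
```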

Lewis Fishgold
@lewfish
@smohiudd It looks like you are doing the right thing, but that there is a bug in RV. If you use the Predictor via the predict CLI command I think things should work. So I would follow this as a template for what to do: https://github.com/azavea/raster-vision/blob/master/rastervision/cli/main.py#L273-L281 Mainly the thing to follow here is passing an output_uri to predict so that it writes the output to disk. That may be inefficient for your use case, but I think it should work.
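Following that template, an untested sketch of what the call might look like (file names are placeholders from earlier in this thread; the key point is supplying the output path so predictions are written to disk):

```python
import rastervision as rv

# Placeholder file names; this mirrors the predict CLI code path linked above.
predictor = rv.Predictor('predict_package.zip', 'tmp')
# Passing an output URI makes the Predictor write labels to disk,
# avoiding direct reads through an unactivated RasterSource.
predictor.predict('1929.tif', 'predictions.tif')
```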
Lewis Fishgold
@lewfish
@smohiudd BTW, the code for making predictions for semantic segmentation is pretty convoluted and buggy. I've simplified it a lot in the next version of RV (in the rastervision2 package) which is under development now and is in a minimally usable state. So I'm reluctant to try to debug the old version of it. Unfortunately I don't have the corresponding rv2 predict package to share at the moment.
Saadiq Mohiuddin
@smohiudd
Thanks for getting back to me @lewfish. Since posting that a few days ago I went through the CLI functions as you suggested and some other Classes and eventually got it to work. I managed to get the labels without saving to disk. Looking forward to rv2 release!
Joe Morrison
@jmorrison1847
RV is only 15 stars away from 1,000 Github stars, congrats y'all!
Jonny
@JonathanNiesel
Hi, how would you deal with the following problem: the training data is in jpg format and only includes the AOI. However, the data for production are actual .tif tiles covering a very large area compared to the AOI. Should I subset the tif tiles? Do you have other recommendations?
Lewis Fishgold
@lewfish
@JonathanNiesel I'm not sure I understand your question. Are you saying that the prediction imagery covers a very large area so you want to split it into a bunch of scenes so that you can make predictions in parallel? (I didn't understand what you meant by "subsetting the tif files")
Jonny
@JonathanNiesel
Training images are 1250x600 jpg images; however, in production the input images are 16000x10000 tif files. The training images are subsets of the original full-size tif files.
By subsetting I mean splitting the 16000x10000 tif into ca. 1250x600 images, but somehow keeping information about the grid (lat/lon).
Lewis Fishgold
@lewfish
@JonathanNiesel It should be able to make predictions on imagery that size without a problem. Training and testing on different sized scenes doesn't require anything "extra". If you really need it to run faster you could split the images into smaller pieces, but you'd have to write your own script to do that.
Maybe you thought it would run out of memory with such a large image. But RV handles this by using a sliding window to make predictions on windows and then combines the predictions across all windows.
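The sliding-window scheme can be sketched in plain Python (the chip size and stride of 300 here are illustrative, not RV's defaults):

```python
def sliding_windows(width, height, chip_size, stride):
    """Yield (x, y) offsets of chips covering a width x height image."""
    for y in range(0, height, stride):
        for x in range(0, width, stride):
            # Clamp the last chips in each row/column so they stay in bounds.
            yield (min(x, width - chip_size), min(y, height - chip_size))

# A 16000x10000 scene is covered by modestly sized chips; predictions on
# each chip are then combined into a single labels object.
windows = list(sliding_windows(16000, 10000, 300, 300))
print(len(windows))  # 1836 chips
```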
qqaba
@qqaba
I'm just getting started with rastervision and have a few questions. For context, I'm currently attempting to do object detection using PyTorch/Resnet and rastervision 0.10, I've had some success so far, but looking to improve some things.
1) Does the pytorch object detection pipeline normalize the image channels to what the pretrained ImageNet models were trained with? https://pytorch.org/hub/pytorch_vision_resnet/
2) Are there any plans for integration with MLFlow?
3) Is work still being done on the rastervision project, or should things be moved to rastervision2? Are you still accepting pull requests for rastervision1? I've made a few tiny quality-of-life tweaks that I think might be useful (e.g., using the environment's TORCH_HOME if defined vs setting it to /opt/data/torch-cache, making the example docker images not run as the root user, flushing the tb_writer after each epoch, etc.)
Lewis Fishgold
@lewfish
@qqaba 1) No, it doesn't normalize that way. I never noticed a significant difference when using the Imagenet normalization, but that's just anecdotal. It seems like being able to specify a desired mean and std would be a good option to add. 2) No. I'm not very familiar with the project, but we have thought a little about adding support for callbacks so that you could log things to MLFLow or whatever. After the rv2 release (hopefully in a month or so) we could talk about how you could approach adding support for it. 3) No, all development now is for rv2. But we are planning one final release of rv1, so you could add a PR for that. If you made a PR with that stuff I would try to include the corresponding changes in rv2. Those changes sound good and I was planning on making the TORCH_HOME change.
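For reference, the ImageNet normalization in question is just a per-channel shift and scale using torchvision's published statistics; a minimal sketch:

```python
# Per-channel statistics published for torchvision's pretrained models.
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

def normalize(pixel, mean=IMAGENET_MEAN, std=IMAGENET_STD):
    """Normalize one RGB pixel whose values are scaled to [0, 1]."""
    return [(v - m) / s for v, m, s in zip(pixel, mean, std)]

# A pixel exactly at the channel means maps to all zeros.
print(normalize([0.485, 0.456, 0.406]))  # [0.0, 0.0, 0.0]
```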
Joe Morrison
@jmorrison1847
Hey RV friends! The most common request I get from people who are just getting started with Raster Vision is this: "could you share a Jupyter Notebook/tutorial I could use as a starting point?" There are some already available here: https://github.com/azavea/raster-vision-examples
But it would be so awesome if we had more to point to. If you're interested in a straightforward way to make a big impact on the RV project, contributing example notebooks is a really great way to do it.
Lewis Fishgold
@lewfish

We will be releasing the next version of Raster Vision, which is a major refactoring, sometime before June 1. The code is currently being developed in the rastervision2 package and is in a usable, but poorly documented, state. The major changes include:

  • a simpler configuration system that uses a single Config class based on Pydantic in place of the existing Config, ConfigBuilder, and Protobufs, and that does validation, serialization, and deserialization automatically using a declarative representation.
  • generalizing and extracting out the functionality for running configurable pipelines in the cloud. What used to be Experiments are now just instances of Pipeline, and the pipeline package can be used independent of the rest of RV.
  • elimination of the Tensorflow backends, and abstraction of the PyTorch/torchvision backend code, so that all tasks are implemented in a uniform way and it's easier to customize aspects of the training loop.

Before releasing the big refactor of RV, we will have a final release of the existing RV codebase, which will be version 0.11. If there are any small changes you want to get into that final release, please make a PR by next Friday May 5. After that release is made, the old version of RV will be removed from the master branch, the code in rastervision2 will be moved to rastervision, and that will be released in version 0.12.

arixrobotics
@arixrobotics
Hello, I've just started using rastervision, and also GroundWork for labelling. Is there any guide on using the output of GroundWork (downloaded catalog.zip with many files in it) with rastervision?
Lewis Fishgold
@lewfish
For some reason I can't edit the message I posted above about the next releases, but the dates I mentioned are incorrect. I meant to say that any PRs to "old RV" should be submitted by Friday June 5, and that the release of the refactored RV will be by July 1.
VAHID
@VbsmRobotic
I am trying to do object detection using RV. I face the following error:

File "/opt/src/code/test_CarBuildingObjectDetection.py", line 55, in exp_main
    train_label_source = rv.LabelSourceConfig.builder(rv.OBJECT_DETECTION) \
AttributeError: 'ObjectDetectionLabelSourceConfigBuilder' object has no attribute 'with_raster_source'

Does anyone have a solution?
Guillermo E. Ponce-Campos
@gponce-ars
Does raster-vision provide tools to work on labeling your input imagery?
Jerome Maleski
@jeromemaleski
Can someone confirm whether the quickstart docker is still working? I just set it up and ran it. No error messages, but the val-scene.tif is just black. I also ran the Vegas segmentation; it also output a black tiff. I suspect one of the links to the images is not available? But no errors.
Tyler Frazier
@geosimafrica_twitter
Hi, is there a way to run rastervision with TensorFlow 2.2.0? It seems the h5py version keeps throwing an error on installation. Just wondering if this would be possible, thanks!
Tyler Frazier
@geosimafrica_twitter
it seems to want h5py 2.7.1, while it looks like tf needs 2.10.0 when using pip install
Tareq Alqutami
@TareqAlqutami

Hello,
For a semantic segmentation task, when I add a with_vector_output to the labelStore builder to output labels as polygons, it works for the evaluation scenes after training is completed. The output in the predict directory then contains predicted labels in both tif and geojson formats. But when I use the bundle to predict new scenes, it only outputs labels in raster format (.tif).
I tried both versions 0.10 and 0.11 of rastervision but both produce the same results.
Here is the labelStore code that is added to the scenes using .with_label_store:

vector_output = {'mode': 'polygons', 'class_id': 1, 'denoise': 3}
label_store = rv.LabelStoreConfig.builder(rv.SEMANTIC_SEGMENTATION_RASTER) \
                                 .with_vector_output([vector_output]).build()

Here are the scene, groundTruthLabelSource, and predictionLabelStore sections of the bundle_config.json:

"scene": {
  "id": "train_austin1",
  "rasterSource": {
    "sourceType": "RASTERIO_SOURCE",
    "channelOrder": [0, 1, 2],
    "rasterioSource": {
      "uris": ["BUNDLE"],
      "xShiftMeters": 0.0,
      "yShiftMeters": 0.0
    }
  },
  "groundTruthLabelSource": {
    "sourceType": "SEMANTIC_SEGMENTATION_RASTER",
    "semanticSegmentationLabelSource": {
      "source": {
        "sourceType": "RASTERIZED_SOURCE",
        "rasterizedSource": {
          "vectorSource": {
            "geojson": {
              "uri": "/opt/src/code/inria/train/small_test_vector/austin1.geojson"
            },
            "sourceType": "GEOJSON_SOURCE",
            "classIdToFilter": {
              "1": ["==", "DN", 255.0]
            }
          },
          "rasterizerOptions": {
            "backgroundClassId": 2,
            "allTouched": false
          }
        }
      }
    }
  },
  "predictionLabelStore": {
    "storeType": "SEMANTIC_SEGMENTATION_RASTER",
    "semanticSegmentationRasterStore": {
      "rgb": false,
      "vectorOutput": [
        {
          "denoise": 2,
          "uri": "",
          "mode": "polygons",
          "classId": 1,
          "buildingOptions": {
            "minAspectRatio": 1.6180000305175781,
            "elementWidthFactor": 0.5,
            "elementThickness": 0.0010000000474974513
          }
        }
      ]
    }
  }
}

Does anyone know what I am missing?

karlsad
@karlsad
Hi, I am having issues connecting using "winpty docker run --rm -it quay.io/azavea/raster-vision:pytorch-0.12 /bin/bash". Error msg: Unable to find image 'quay.io/azavea/raster-vision:pytorch-0.12' locally. I am using windows 10 (Git Bash). Any ideas?
For info, I am trying to set up the quickstart tutorial in order to learn how to use raster-vision.
Lewis Fishgold
@lewfish
Raster Vision 0.12 is now released! This was a major refactoring intended to simplify the codebase, and make it more flexible and customizable. For more details, please see https://docs.rastervision.io/en/0.12/changelog.html
Rob Emanuele
@lossyrob
:tada: :tada: Congrats @lewfish, this is a ton of awesome work!
Christos
@Charmatzis
@lewfish wow, very good news!
I have a project using raster-vision v0.10 with AWS Batch, and now there are errors due to breaking changes in the latest docker image.
Christos
@Charmatzis

I tried to update it to 0.12 and I now want to execute it in AWS Batch, but I'm missing a few things:

  1. in 0.10 I spin up an image like that

    docker run --rm -it -p 6006:6006  -p 8888:8888 -p 8000:8000 \
      -e AWS_PROFILE=default -v /home/xristos/.aws:/root/.aws:ro \
      -v /home/xristos/.rastervision:/root/.rastervision \
      -v `pwd`/notebooks:/opt/notebooks  \
      -v ${RV_CODE_DIR}:/opt/src/code  \
      -v ${RV_EXP_DIR}:/opt/data \
      quay.io/azavea/raster-vision:pytorch-0.10

    Now how can I pass the AWS profile into the image? The same way?

  2. I really miss the -n flag, which checks that everything is OK and in place

Christos
@Charmatzis
Now, all good. I moved everything to my Linux machine and everything works
Christos
@Charmatzis
One quick question: should the raster sources and the label sources be in EPSG:4326 or EPSG:3857 (is a TMS raster source good)?
Christos
@Charmatzis
Does version 0.12 support the predict command using AWS Batch?
Something like this:
rastervision predict batch --update-stats s3://somewhere/bundle/model-bundle.zip s3://somewhere/predict.json
Jerome Maleski
@jeromemaleski

I found the spacenet dataset for RIO here:

https://spacenet.ai/spacenet-buildings-dataset-v1/
aws s3 ls s3://spacenet-dataset/spacenet/SN1_buildings/

I was able to download it, but it seems to be in a different format from what spacenet_rio_data_prep.ipynb was created for.

arixrobotics
@arixrobotics
Hello, I'm trying to setup aws using the guide here https://docs.rastervision.io/en/0.12/cloudformation.html
However, when I upload the template.yml file in CloudFormation, this error pops up:
The following resource types are not supported for resource import: AWS::ECR::Repository,AWS::Batch::JobDefinition,AWS::EC2::LaunchTemplate,AWS::Batch::JobDefinition,AWS::Batch::JobDefinition,AWS::Batch::JobDefinition,AWS::EC2::LaunchTemplate,AWS::Batch::ComputeEnvironment,AWS::Batch::ComputeEnvironment,AWS::Batch::JobQueue,AWS::Batch::JobQueue
That's quite a number of unsupported resources... am I missing something here?
wassi12
@wassi12
Good evening
arixrobotics
@arixrobotics
The output of the predict command is a .tif image. Is it possible to output a shapefile, geojson, or another format? The idea is to view this output as a new layer on top of the raster.
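Outside of RV itself, one common way to turn a predicted class-ID raster into vector data is GDAL's polygonize tool. A sketch, assuming the prediction is a single-band raster of class IDs (file names are placeholders):

```shell
# Convert a single-band class-ID raster into GeoJSON polygons,
# one feature per contiguous region, with the class ID in a "DN" field.
gdal_polygonize.py prediction.tif -f GeoJSON prediction.geojson
```

The resulting GeoJSON can then be loaded as a vector layer on top of the source raster in QGIS.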
Andrés Veiro
@veiro

Hi, I'm new to all of this. I am trying to run the local example with the command "rastervision predict https://s3.amazonaws.com/azavea-research-public-data/raster-vision/examples/model-zoo-0.12/spacenet-vegas-buildings-ss/model-bundle.zip https://s3.amazonaws.com/azavea-research-public-data/raster-vision/examples/model-zoo-0.12/spacenet-vegas-buildings-ss/1929.tif prediction.tif"
It gives me the following error:

File "/opt/src/rastervision_aws_s3/rastervision/aws_s3/s3_file_system.py", line 182, in write_bytes
    raise NotWritableError('Could not write {}'.format(uri)) from e
rastervision.pipeline.file_system.file_system.NotWritableError: Could not write s3://raster-vision-lf-dev/examples/spacenet-vegas-buildings-ss/output_6_27c/predict/1332-0-polygons.json
root@89c196e3f178:/opt/src#

Does anyone know how to fix it?

Christos
@Charmatzis
@veiro did you setup the right data paths?
Saadiq Mohiuddin
@smohiudd
I'm using v0.12. When I use the predict CLI, it works fine with no errors. But when I try to load the Predictor class myself, Predictor(model_bundle, tmp_dir, update_stats, channel_order), I keep getting the following error: rastervision.pipeline.config.ConfigError: The plugin_versions field contains an unrecognized plugin name: rastervision.core. Not sure why I'm getting an error with the Predictor class when the CLI works fine.
Saadiq Mohiuddin
@smohiudd
The error seems to originate from config_dict = upgrade_config(config_dict) in rastervision.core.predictor.
Jerome Maleski
@jeromemaleski
I would like to see an online workshop/training for rastervision. Is this a possibility?
Jerome Maleski
@jeromemaleski
I have rastervision working with my data. I am using segmentation on NAIP imagery, starting from the Potsdam example script. The defaults from the Potsdam script worked great and I am getting recall in the 90s for most of my classes! One thing I noticed is that there are prediction problems along the chip seam lines. Is there any way to add a chip buffer to the pipeline? I would like it to use a buffer around the chip when predicting. I also noticed that when I run rastervision predict it only runs on the CPU, and only on one core. There does not seem to be an option to run on the GPU or on a number of cores. Is there a way to run predict on the GPU or on several cores?
Jerome Maleski
@jeromemaleski
I want to get back the areas from the segmentation predictions. So I need something to convert the raster classes to area sums or to vector graphics. Does anyone have any suggestions for QGIS, GDAL, or Python tools to do this?
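For the area sums, here is a plain-Python sketch of the pixel-counting approach. It assumes the prediction raster has already been read into a 2-D list of class IDs, and the pixel size is a placeholder (NAIP is typically 0.6 m or 1 m):

```python
from collections import Counter

def class_areas(label_array, pixel_size_m):
    """Sum per-class area in square meters by counting pixels."""
    counts = Counter(v for row in label_array for v in row)
    area_per_pixel = pixel_size_m ** 2
    return {cls: n * area_per_pixel for cls, n in counts.items()}

# Tiny 2x3 example with classes 1 and 2 at 1 m resolution.
labels = [[1, 1, 2],
          [1, 2, 2]]
print(class_areas(labels, pixel_size_m=1.0))  # {1: 3.0, 2: 3.0}
```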
Jerome Maleski
@jeromemaleski
I'm running prediction on NAIP imagery. It can be delivered as uncompressed .tif or compressed .jp2. I would really like to use jp2 because it is 10 times smaller, but raster-vision predict takes 4 times longer to run: on a .tif tile predict finishes in 5 min, but on a .jp2 tile it finishes in 22 min. Do you know what is causing this difference?
Jerome Maleski
@jeromemaleski
I would like to create a training loop where you can take the prediction output, make some manual corrections, then feed it back to the model. I can take the raster output, convert it to a shapefile in QGIS, and then try to edit the polygons, but the editing in QGIS is not very good. It would be nice if you could just sort of 'paint' your corrections. Does anyone know of software you could use to do this? GroundWork would be a nice interface but I can't upload raster results to edit.