NesKamdz70
@NesKamdz70
image.png
@lewfish, Hi, I hope you are well. Thank you for your help; I increased the RAM and it worked. Now I have rastervision installed. I tried to run the quickstart example without success. These are the steps taken:
NesKamdz70
@NesKamdz70
1) Created a directory QuickStartwithRasterVision
2) cd to the created directory
3) Created directories code and rv_root using the commands:
```
export RV_QUICKSTART_CODE_DIR=pwd/code
export RV_QUICKSTART_EXP_DIR=pwd/rv_root
mkdir -p ${RV_QUICKSTART_CODE_DIR} ${RV_QUICKSTART_EXP_DIR}
```
4) Copied tiny_spacenet.py to /code
5) Then ran:
```
docker run --rm -it -p 6006:6006 -v ${RV_QUICKSTART_CODE_DIR}:/opt/src/code -v ${RV_QUICKSTART_EXP_DIR}:/opt/data quay.io/azavea/raster-vision:cpu-latest /bin/bash
```
6) Ran the commands:
```
cd /opt/src/code
rastervision run local -p tiny_spacenet.py -n
```
I got the following error message, please see the picture attached above.
Lewis Fishgold
@lewfish
@NesKamdz70 Could you run ls while you are in /opt/src/code to see if the tiny_spacenet.py file is there? It looks like it's not there, which makes me think the volume mounting is off (ie ${RV_QUICKSTART_CODE_DIR}:/opt/src/code)
NesKamdz70
@NesKamdz70
Yes, it is not there.
NesKamdz70
@NesKamdz70
@lewfish I changed 'pwd' to /D/RasterVisionQuickstart/QuickStartwithRasterVision/code, but tiny_spacenet.py is still not in /opt/src/code.
Rob Emanuele
@lossyrob
NesKamdz70
@NesKamdz70
Thank you Rob, I will try it.
NesKamdz70
@NesKamdz70
image.png
@lossyrob I followed the steps you provided in the link above, but there is still no "tiny_spacenet.py" in the directory "/opt/src/code"; please check in the image attached above whether what I have done is correct.
Lewis Fishgold
@lewfish
@NesKamdz70 Can you run echo ${RV_QUICKSTART_CODE_DIR} and ls ${RV_QUICKSTART_CODE_DIR}? We should check that the volume mounting option (ie -v) is correct.
NesKamdz70
@NesKamdz70
@lewfish Hello, I think I have resolved the issue; the problem was that my D drive was not recognised by the virtual machine. I can now see "tiny_spacenet.py" in the guest code directory. However, I now get an error that 'PYTORCH_SEMANTIC_SEGMENTATION' does not exist; please see the attached picture below:
image.png
Lewis Fishgold
@lewfish
From the second screenshot above it looks like you are using an old version of the Docker image. The Quickstart instructions at https://docs.rastervision.io/en/latest/quickstart.html say to use quay.io/azavea/raster-vision:pytorch-0.10 but you are using quay.io/azavea/raster-vision:cpu-latest.
Lewis Fishgold
@lewfish
@NesKamdz70 ^
NesKamdz70
@NesKamdz70
@lewfish Thank you, it worked.
ninexy
@ninexy
Hi, I'm a newcomer. When I register_aux_command and use it in a CommandConfig, it throws an error:
```
rastervision.registry.RegistryError: No command found for type EXAMPLE
```
The command I ran:
```
rastervision -p example run local -e split_images_sta EXAMPLE -a root_uri /opt/rvspace/sindarasterpro/rastervision/dm/ -a image_uri /opt/rvspace/sindarasterpro/rastervision/data/shptemp/train.tif
```
I don't understand it. Can anyone help me?
g2giovanni
@g2giovanni
Hi, is there a way to use the Rastervision package with PyTorch models not trained with Rastervision? I'd like to use Rastervision just for prediction, using already-trained models. Thank you for the amazing work.
ninexy
@ninexy
@g2giovanni Yeah, rastervision has a predict command, e.g. rastervision predict model_uri srctif_path restif_path
g2giovanni
@g2giovanni
Thank you @ninexy. Unfortunately, as I understand it, the predict command accepts a "Predict Package URI", but my model is not in predict-package format. Maybe I should manually create a bundle_config.json file ... but I don't know how.
ninexy
@ninexy
@g2giovanni I tried deleting the bundle_config.json file from my predict package; then rastervision predict failed.
image.png
g2giovanni
@g2giovanni
I did the same test... I'm trying to figure out how I can create the bundle_config.json manually, like the one created by the bundle command...
Lewis Fishgold
@lewfish
@g2giovanni There isn't any direct way to make predictions using a model file. A workaround (as you observed) is to modify an existing predict package, which contains the necessary metadata. The files in the zip file have to be at the root of the zip file, and I think that might be causing the problem. Another, potentially deeper, issue is that the model file needs to have just the weights state_dict, not the whole pickled model (see https://pytorch.org/tutorials/beginner/saving_loading_models.html). And if you supply the weights, they need to correspond to a model architecture that Raster Vision knows about (i.e. the ones used by the PyTorch backends).
Lewis Fishgold
@lewfish
@ninexy It looks like your custom command is not registered. Did you remember to call the register_plugin function? See https://docs.rastervision.io/en/latest/commands.html#custom-aux-commands
Saadiq Mohiuddin
@smohiudd

Hello, I'm looking for some help with predictions using the Predictor class. I wasn't able to find any examples in the documentation or GitHub.

I'm using the PyTorch SpaceNet Vegas Buildings predict package from the model zoo along with the sample image.

I define a new instance of the Predictor class and then predict on an image:

```
predict = rv.Predictor("predict_package.zip", "tmp").predict("1929.tif")
```

The return is: rastervision.data.label.semantic_segmentation_labels.SemanticSegmentationLabels

How do I access an array of labels from my prediction?

Looking at the SemanticSegmentationLabels class I thought it was something like this:

```
windows = predict.get_windows()
label_array = predict.get_label_array(windows[0])
```

But I'm getting this error:

```
ActivationError: RasterSource must be activated before use
```

Can anyone please let me know if I'm on the right track or going about this the wrong way? Thanks!

ninexy
@ninexy
@lewfish Yeah, I defined register_plugin like this:
```
def register_plugin(plugin_registry):
    plugin_registry.register_aux_command(PREPROCESS,
                                         PreProcessCommand)
```
g2giovanni
@g2giovanni

> @g2giovanni There isn't any direct way to make prediction using a model file. A workaround (as you observed) is to modify an existing predict package which contains necessary metadata. The files in the zip file have to be at the root of the zip file, and I think that might be causing the problem. Another potentially deeper issue is that the model file needs to have just the weights state_dict, not the whole pickled model. (See https://pytorch.org/tutorials/beginner/saving_loading_models.html). And if you supply the weights, they need to correspond to a model architecture that Raster Vision knows about (ie. the ones used by the PyTorch backends).

Thank you @lewfish. My actual problem is that I want to make predictions using a custom model architecture, one not included in the PyTorch backends.
I'm planning to develop a new backend plugin in order to load custom PyTorch models, plus a simple CLI Python module to create a mock bundle_config.json file.

Lewis Fishgold
@lewfish
@smohiudd It looks like you are doing the right thing, but there is a bug in RV. If you use the Predictor via the predict CLI command, I think things should work, so I would follow this as a template for what to do: https://github.com/azavea/raster-vision/blob/master/rastervision/cli/main.py#L273-L281 The main thing to follow here is passing an output_uri to predict so that it writes the output to disk. That may be inefficient for your use case, but I think it should work.
Lewis Fishgold
@lewfish
@smohiudd BTW, the code for making predictions for semantic segmentation is pretty convoluted and buggy. I've simplified it a lot in the next version of RV (in the rastervision2 package) which is under development now and is in a minimally usable state. So I'm reluctant to try to debug the old version of it. Unfortunately I don't have the corresponding rv2 predict package to share at the moment.
Saadiq Mohiuddin
@smohiudd
Thanks for getting back to me @lewfish. Since posting that a few days ago I went through the CLI functions as you suggested and some other Classes and eventually got it to work. I managed to get the labels without saving to disk. Looking forward to rv2 release!
Joe Morrison
@jmorrison1847
RV is only 15 stars away from 1,000 Github stars, congrats y'all!
Jonny
@JonathanNiesel
Hi, how would you deal with the following problem: the training data is in jpg format and only includes the AOI. However, the data for production are actual .tif tiles covering a very large area compared to the AOI. Should I subset the tif tiles? Do you have other recommendations?
Lewis Fishgold
@lewfish
@JonathanNiesel I'm not sure I understand your question. Are you saying that the prediction imagery covers a very large area so you want to split it into a bunch of scenes so that you can make predictions in parallel? (I didn't understand what you meant by "subsetting the tif files")
Jonny
@JonathanNiesel
Training images are 1250x600 jpg images; however, in production the input images are 16000x10000 tif files. The training images are subsets of the original full-size tif files.
By subsetting I mean splitting the 16000x10000 tif into ca. 1250x600 images, while somehow keeping the georeferencing (lat/lon) information.
Lewis Fishgold
@lewfish
@JonathanNiesel RV should be able to make predictions on imagery that size without a problem. Training and testing on different-sized scenes doesn't require anything "extra". If you really need it to run faster you could split the images into smaller pieces, but you'd have to write your own script to do that.
Maybe you thought it would run out of memory with such a large image, but RV handles this by using a sliding window to make predictions on windows and then combining the predictions across all windows.
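The sliding-window idea described here can be sketched as follows. This is a simplified stand-in, not RV's actual implementation:

```python
def sliding_windows(height, width, size, stride=None):
    """Return (row, col, h, w) windows covering a height x width raster.

    Edge windows are clipped to the raster bounds; with stride < size the
    windows overlap, which is when predictions need to be combined.
    """
    stride = stride or size
    windows = []
    for row in range(0, height, stride):
        for col in range(0, width, stride):
            windows.append((row, col,
                            min(size, height - row),
                            min(size, width - col)))
    return windows

# e.g. a 16000x10000 scene covered by 600-pixel chips
chips = sliding_windows(10000, 16000, 600)
```

Each window can then be read directly from the source tif (e.g. via rasterio's windowed reads), which preserves the georeferencing that plain jpg subsets lose.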
qqaba
@qqaba
I'm just getting started with rastervision and have a few questions. For context, I'm currently attempting to do object detection using PyTorch/Resnet and rastervision 0.10, I've had some success so far, but looking to improve some things.
1) Does the pytorch object detection pipeline normalize the image channels to what the pretrained ImageNet models were trained with? https://pytorch.org/hub/pytorch_vision_resnet/
2) Are there any plans for integration with MLFlow?
3) Is work still being done on the rastervision project, or should things be moved to rastervision2? Are you still accepting pull requests for rastervision1? I've made a few tiny quality-of-life tweaks that I think might be useful (e.g., using the environment's TORCH_HOME if defined vs setting it to /opt/data/torch-cache, making the example docker images not run as the root user, flushing the tb_writer after each epoch, etc.)
Lewis Fishgold
@lewfish
@qqaba 1) No, it doesn't normalize that way. I never noticed a significant difference when using the ImageNet normalization, but that's just anecdotal. It seems like being able to specify a desired mean and std would be a good option to add. 2) No. I'm not very familiar with the project, but we have thought a little about adding support for callbacks so that you could log things to MLflow or whatever. After the rv2 release (hopefully in a month or so) we could talk about how you could approach adding support for it. 3) No, all development now is for rv2. But we are planning one final release of rv1, so you could add a PR for that. If you made a PR with that stuff I would try to include the corresponding changes in rv2. Those changes sound good and I was planning on making the TORCH_HOME change.
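For reference, the ImageNet normalization discussed in 1) is just a per-channel shift and scale using the standard torchvision constants. A sketch in plain NumPy, assuming chips already scaled to [0, 1]:

```python
import numpy as np

# Standard ImageNet channel statistics used by torchvision's pretrained models.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def imagenet_normalize(chip):
    """Normalize an HxWx3 float array (values in [0, 1]) per channel."""
    return (chip - IMAGENET_MEAN) / IMAGENET_STD
```

A chip whose pixels equal the per-channel means normalizes to all zeros, which is a quick sanity check.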
Joe Morrison
@jmorrison1847
Hey RV friends! The most common request I get from people who are just getting started with Raster Vision is this: "could you share a Jupyter Notebook/tutorial I could use as a starting point?" There are some already available here: https://github.com/azavea/raster-vision-examples
But it would be so awesome if we had more to point to. If you're interested in a straightforward way to make a big impact on the RV project, example notebooks are a really great way to do it.
Lewis Fishgold
@lewfish

We will be releasing the next version of Raster Vision, which is a major refactoring, sometime before June 1. The code is currently being developed in the rastervision2 package and is in a usable but poorly documented state. The major changes include:

  • a simpler configuration system that uses a single Config class based on Pydantic in place of the existing Config, ConfigBuilder, and Protobufs, and that does validation, serialization, and deserialization automatically using a declarative representation.
generalizing and extracting out the functionality for running configurable pipelines in the cloud. What used to be Experiments are now just instances of Pipeline, and the pipeline package can be used independently of the rest of RV.
  • elimination of the Tensorflow backends, and abstraction of the PyTorch/torchvision backend code, so that all tasks are implemented in a uniform way and it's easier to customize aspects of the training loop.

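To illustrate the first bullet, a declarative Pydantic config looks roughly like this. The class and fields below are made up for illustration, not RV's actual configs:

```python
from pydantic import BaseModel

class BackendConfig(BaseModel):
    # Hypothetical fields; validation, serialization, and deserialization
    # come for free from the declarative field types.
    model_arch: str = 'resnet50'
    lr: float = 1e-4
    num_epochs: int = 10

# Values are validated and coerced on construction; '0.001' becomes a float.
cfg = BackendConfig(model_arch='resnet18', lr='0.001')
```

The same declarative class handles round-tripping to and from JSON, which is what replaces the hand-written ConfigBuilder and Protobuf code.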
Before releasing the big refactor of RV, we will have a final release of the existing RV codebase, which will be version 0.11. If there are any small changes you want to get into that final release, please make a PR by next Friday May 5. After that release is made, the old version of RV will be removed from the master branch, the code in rastervision2 will be moved to rastervision, and that will be released in version 0.12.

arixrobotics
@arixrobotics
Hello, I've just started using rastervision, and also GroundWork for labelling. Is there any guide on using the output of GroundWork (downloaded catalog.zip with many files in it) with rastervision?
Lewis Fishgold
@lewfish
For some reason I can't edit the message I posted above about the next releases, but the dates I mentioned are incorrect. I meant to say that any PRs to "old RV" should be submitted by Friday June 5, and that the release of the refactored RV will be by July 1.
VAHID
@VbsmRobotic
I am trying to do object detection using RV. I get the following error:
```
File "/opt/src/code/test_CarBuildingObjectDetection.py", line 55, in exp_main
    train_label_source = rv.LabelSourceConfig.builder(rv.OBJECT_DETECTION) \
AttributeError: 'ObjectDetectionLabelSourceConfigBuilder' object has no attribute 'with_raster_source'
```
Does anyone have a solution?