Olivier Courtin
@ocourtin
This channel welcomes any kind of question related to Computer Vision with GeoSpatial Imagery,
and especially with Neat-EO.pink.
Worth mentioning that commercial assistance and training sessions are also provided by DataPink.
Isaac Besora Vilardaga
@ibesora
Hi, is there a way to see how the training process is going in Neat-EO.pink?
I'm looking for a way to see validation/loss charts, for example.
Olivier Courtin
@ocourtin
@ibesora on each epoch neo train outputs the loss value, so it gives you a first indication of whether training is converging or not.
And if you want a metrics output, neo eval is the right tool.
And keep in mind you can launch a training over several epochs, check the current accuracy with eval, and if needed resume the training.
HTH,
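For instance, a minimal sketch of that train → eval → resume loop (the paths, the checkpoint filename, and flags such as --epochs, --checkpoint and --resume are assumptions about typical Neat-EO.pink usage, not verbatim from its docs; check `neo train -h` and `neo eval -h` for your version):

```
# Train for a first batch of epochs (flag names are assumptions; verify with neo train -h)
neo train --config config.toml --dataset ds/training --epochs 10 --out model

# Check the current accuracy on a held-out dataset
neo eval --config config.toml --dataset ds/validation --checkpoint model/checkpoint-00010.pth

# If it is still converging, resume from the last checkpoint for more epochs
neo train --config config.toml --dataset ds/training --epochs 20 \
          --checkpoint model/checkpoint-00010.pth --resume --out model
```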
Kelin Christi
@crikeli
hey! great project -- I'm a little confused by the Open Cities challenge example you have given. With the dataset downloaded using "wget -nc https://datapink.net/neo/101/ds.tar", how do I enable the web UI? I am running an instance on GCP and was wondering if there are additional steps I need to take to make the web UI available. Thanks for putting this project out there!
Olivier Courtin
@ocourtin
Hi @crikeli,
The WebUI is simple to enable: you just need a web server running on your GPU instance.
As an example, with Apache2:
`sudo apt install -y apache2 && sudo ln -s ~ /var/www/html/neo`
Then browse to http://<your_instance_ip>/neo to list all your data directories,
and click on the one you want to display.
That's it.
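Spelled out end to end (a sketch, assuming Apache's default docroot /var/www/html and that port 80 is reachable on the instance):

```
# Install Apache2 and expose your home directory under /neo
sudo apt install -y apache2 && sudo ln -s ~ /var/www/html/neo

# Then, from any machine that can reach the instance, browse to:
#   http://<your_instance_ip>/neo
```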
Kelin Christi
@crikeli
Great, thank you for your help @ocourtin :)
Kelin Christi
@crikeli
Hey again @ocourtin, I tried to follow the command you mentioned. I have already run the "neo tile" command from the tutorial and as a result have the train/images directory, which contains "19", "compare.html", "index.html", "leaflet.html", "log" & "tiles.json". I tried running `sudo apt install -y apache2 && sudo ln -s ~ /var/www/html/neo` and then went to the IP of my GPU instance, but had no luck viewing any of the data. For further context, I have a Jupyter server running. Could that be hindering me from being able to view neo? Thanks again!
Kelin Christi
@crikeli
never mind! Figured it out... just had to alter the instance config to enable HTTP traffic.
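For anyone hitting the same wall on GCP, one way to open HTTP traffic from the CLI (a sketch; the rule name, tag, instance name and zone below are illustrative placeholders, and the console's "Allow HTTP traffic" checkbox achieves the same):

```
# Create a firewall rule allowing inbound HTTP (rule and tag names are examples)
gcloud compute firewall-rules create allow-http \
    --allow tcp:80 --target-tags http-server

# Tag the GPU instance so the rule applies to it (instance name/zone are placeholders)
gcloud compute instances add-tags my-gpu-instance \
    --tags http-server --zone us-central1-a
```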
oneOfThePeople
@oneOfThePeople
hi,
my aim is to get house locations,
so I really don't care about bboxes or segmentation.
Does someone have an idea which type of neural network would work best for me?
For now I just take the center of the predicted building,
but maybe I can make my loss or network architecture better.
Thank you
Olivier Courtin
@ocourtin
@oneOfThePeople same as an object detection NN + loss, except you will predict only one point per feature instead of two.
Youthanasia5
@Youthanasia5
Hello everyone! What does sat.py do?
Olivier Courtin
@ocourtin
@Youthanasia5 the neo sat tool is still at a WIP stage.
Its purpose, at this point, is to retrieve Sentinel-2 images.
Kelin Christi
@crikeli
Hey again! I've been playing around with various zoom levels for training and visualizing the data. When I try zoom level 17, I am not able to visualize it using the web UI. Another question I have: after acquiring the datasets and preprocessing them (data + relevant labels), is it possible to create a validation set in the config file? Also, is it possible for me to take the pre-processed data and feed it into other deep learning frameworks such as PyTorch/fastai/TensorFlow? Thanks!
Olivier Courtin
@ocourtin
@crikeli There's no reason the Web UI would handle a specific zoom level differently. It works the same at zoom level 17, 21 or 12...
The only thing to keep in mind is that only one single zoom level is available to display at once, so you have to zoom in far enough first to be able to see the tiles.
In the meantime a pink bbox is displayed over areas where data are available.
A validation dataset can be created by splitting your whole dataset into train + eval:
neo cover --splits 80/20, then neo subset to copy both images and labels; see the sketch below.
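Something along these lines, assuming the ds layout from the 101 tutorial (the exact flags and output paths are assumptions; verify with `neo cover -h` and `neo subset -h` in your version):

```
# Split the tile cover 80/20 into training and validation covers
neo cover --dir ds/images --splits 80/20 --out ds/training.cover ds/validation.cover

# Copy images and labels for each split
neo subset --dir ds/images --cover ds/training.cover   --out ds/training/images
neo subset --dir ds/labels --cover ds/training.cover   --out ds/training/labels
neo subset --dir ds/images --cover ds/validation.cover --out ds/validation/images
neo subset --dir ds/labels --cover ds/validation.cover --out ds/validation/labels
```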
Kelin Christi
@crikeli
Thank you! Will try it out :)
Youthanasia5
@Youthanasia5
Hello everyone. I'm using version 0.6.1. Can you send me the old 101.md file?