    Hristo Vrigazov
    @hristo-vrigazov
    What's up guys, I have added you here so that we can chat about computer vision fun stuff
    To start with, I recently needed a segmentation in a project, and I used the pretrained network of dilated convolutions from here https://github.com/fyu/dilation
    Here are some samples, what do you think? https://s3.amazonaws.com/vision4j/examples.zip
    vigician
    @vigician
    it's funny that it did not recognize the Muslim woman with the hijab :D
    pdatascience
    @pdatascience
    Hello, everyone! Nice to be part of this group!
    Zdravko Andonov
    @zdravkoandonov
    Hi guys, nice to meet you :)
    vigician
    @vigician
    I think Tensorflow has DeepLabV3 built-in, which is the state of the art for segmentation, at least on Pascal VOC
    Hristo Vrigazov
    @hristo-vrigazov
    That's awesome
    Do you guys know of a model for image inpainting?
    I mean, given an image with a hole in it (expressed as a mask, for example), find the values that complete the image semantically?
    something like this, but trained for something other than faces :D
    the paper looks truly amazing though
    Zdravko Andonov
    @zdravkoandonov
    Check that one
    The paper is from NVidia from a few days ago
    Haven't read it in detail but the results seem nice
    Hristo Vrigazov
    @hristo-vrigazov
    This paper looked awesome: http://hi.cs.waseda.ac.jp/~iizuka/projects/completion/en/ , the authors even uploaded a trained model here: https://github.com/satoshiiizuka/siggraph2017_inpainting but it sucked a lot
    This one looks very related, it even cites it. Let's hope they open source it and upload the trained model :D
    Nelson Tsaku
    @Tsakunelson
    Hi everyone, I have an issue pulling the image segmentation Docker image with the right files in it. Help would be most appreciated
    Here is the image I am talking about:
    docker pull vision4j/deeplabv3-pascal-voc-segmentation:benchmark-gpu
    I can't find the content exactly as it is on GitHub
    Where are model.py, train.py and every other file, in fact the whole tensorflow/models/research/ directory, found please?
    Hristo Vrigazov
    @hristo-vrigazov
    @Tsakunelson This is the model from the tensorflow research repo as you guessed, but this image is minimized and intended only for serving (inference).
    Nelson Tsaku
    @Tsakunelson
    Thanks so much Hristo, that makes a lot of sense. I am pretty new to Docker, and thought the image was exactly what is in the GitHub repo. That means that to reproduce (implement) the paper "Deeplab image Segmentation V3" I would have to clone the GitHub repo and start from there, right?
    Hristo Vrigazov
    @hristo-vrigazov

    You are welcome :) Yes, in order to reproduce the paper, you would have to:

    1. Clone this and move to this directory https://github.com/tensorflow/models/tree/master/research/deeplab
    2. Install the necessary packages (tensorflow-gpu)

    Alternatively, you could run:

    nvidia-docker run -it -v <pathToRepo>/research/deeplab:/notebooks/deeplab -p 8888:8888 tensorflow/tensorflow:1.7.0-gpu-py3
    This way you can have Jupyter notebooks with all the packages installed.
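    For reference, the whole flow might look something like this (paths are just placeholders, adjust to wherever you clone the repo):

    # 1. Clone the models repo (the deeplab code lives under research/deeplab)
    git clone https://github.com/tensorflow/models.git
    # 2. Start the official TF 1.7 GPU image and mount the deeplab code into the notebook directory
    nvidia-docker run -it \
        -v $(pwd)/models/research/deeplab:/notebooks/deeplab \
        -p 8888:8888 \
        tensorflow/tensorflow:1.7.0-gpu-py3
    # 3. Open the Jupyter link (with the access token) that the container prints in your browser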
    Nelson Tsaku
    @Tsakunelson
    Thank you Hristo, I will go directly for the second option
    both*
    Nelson Tsaku
    @Tsakunelson
    @hristo-vrigazov the image pulling is still in progress, and I am just curious which IP I should use in order to access the notebook in the Chrome browser? Just 8888:8888?
    Hristo Vrigazov
    @hristo-vrigazov
    Are you running this on your local machine? Just localhost:8888 should do it. In fact, Jupyter should give you a link with an access token
    Nelson Tsaku
    @Tsakunelson
    Yes, I got the token and used 'ip route' to get my host IP address, given that I actually work on a server, in a Linux-based Docker container (nvidia-docker)
    Nelson Tsaku
    @Tsakunelson
    @hristo-vrigazov the notebook is currently running on a linux server from work, thanks to your support. Is there a quick way I can remotely access the notebook on my personal computer from home via my browser?
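    One common way to do this, assuming you have SSH access to the work server from home, would be plain SSH port forwarding; a minimal sketch (hostnames are placeholders):

    # forward local port 8888 to the notebook running on the work server
    ssh -N -L 8888:localhost:8888 your_user@work-server
    # then open http://localhost:8888/?token=... in the browser at home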
    Nelson Tsaku
    @Tsakunelson
    Got it solved by installing Cygwin and XLauncher to open the browser on my Windows host. I appreciate your help Hristo, it's a relief that the whole framework is already set up in a notebook and I don't have to install dependencies anymore.
    Hristo Vrigazov
    @hristo-vrigazov
    @Tsakunelson you are very welcome and I am glad you figured it out :) By the way, would you be interested in contributing in the future? We are basically packaging models per computer vision problem and integrating them, so if you are interested, I can give you more info in the future
    Nelson Tsaku
    @Tsakunelson
    sure, I would definitely be interested; just keep me in the loop
    Nelson Tsaku
    @Tsakunelson
    Talking of model packaging, the nets.mobilenet package (models/research/slim/nets/mobilenet/) is not included in the 'tensorflow/tensorflow:1.7.0-gpu-py3' Docker image I am currently using. There are a good number of calls to that package, but they result in ImportError: no module named nets.
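    If it helps, the usual fix for that ImportError is to put research/ and research/slim from the tensorflow/models repo on PYTHONPATH inside the container; a rough sketch, assuming the repo is cloned or mounted at /notebooks/models (the path is a placeholder):

    # run inside the container, adjust the path to wherever tensorflow/models lives
    export PYTHONPATH=$PYTHONPATH:/notebooks/models/research:/notebooks/models/research/slim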
    Hristo Vrigazov
    @hristo-vrigazov
    Yes, we have not packaged / tested it. We plan to add new models and will definitely consider lightweight models such as MobileNet. By the way, @vigician do you think we should package the repos as well in some images? Here is one use case which we have not covered yet.
    Nelson Tsaku
    @Tsakunelson
    Hi @hristo-vrigazov how can I assist in packaging, training and testing then? Is there a customized platform for that?
    Nelson Tsaku
    @Tsakunelson
    Side question please: do I need to download one of the specified datasets (Pascal VOC, Cityscapes or ADE20K) supported by the deeplab image segmentation Docker image? Or can I actually run/train without the images downloaded? Does it download the images from an online site? I ask because I can't find any directory containing the training images. Thanks
    Hristo Vrigazov
    @hristo-vrigazov
    We currently do not have a customized platform; we write the Dockerfiles by hand. The datasets are not included, you would have to download them and mount them. Those images are meant for inference only (not training). We will soon start working on creating tags in the Docker images for training as well.
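    A minimal sketch of what the mounting could look like (the local dataset path is a placeholder):

    # mount a locally downloaded Pascal VOC dataset next to the deeplab code
    nvidia-docker run -it \
        -v /data/pascal_voc:/datasets/pascal_voc \
        -v $(pwd)/models/research/deeplab:/notebooks/deeplab \
        -p 8888:8888 \
        tensorflow/tensorflow:1.7.0-gpu-py3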
    Nelson Tsaku
    @Tsakunelson
    Quite innovative from CNNs. With time, they tend to act just like the human brain. It's amazing that at some point, massive training isn't required anymore
    May I have the Dockerfile and .yml file of the Docker image you previously provided (nvidia-docker run -it -v <pathToRepo>/research/deeplab:/notebooks/deeplab -p 8888:8888 tensorflow/tensorflow:1.7.0-gpu-py3)? I have the segmented dataset downloaded and ready for mounting. That way, I could update the existing Docker image with an additional tag.
    Hristo Vrigazov
    @hristo-vrigazov
    Oh, yes sure. In general, our Dockerfiles for inference are available here https://github.com/vision4j/vision4j-collection/tree/master/external/deeplabv3-pascal-segmentation
    The one you are using is just plain TensorFlow, so you only need the official TensorFlow Docker image (it's maintained by Google): https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/docker/README.md
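    For the additional tag idea, a rough sketch of what it could look like (the image name, tag and paths here are assumptions, not something we publish):

    # hypothetical Dockerfile for a training-capable tag, written inline for brevity
    cat > Dockerfile.train <<'EOF'
    FROM tensorflow/tensorflow:1.7.0-gpu-py3
    RUN apt-get update && apt-get install -y git
    RUN git clone https://github.com/tensorflow/models.git /notebooks/models
    ENV PYTHONPATH=/notebooks/models/research:/notebooks/models/research/slim
    EOF
    docker build -f Dockerfile.train -t deeplabv3-pascal-voc-segmentation:train .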
    vigician
    @vigician
    @hristo-vrigazov Yes, I think we should definitely create dev tagged images for models - one obvious use case is when one just wants to play with the packaged model. The dev image should have git and probably run git pull on startup. It will be more convenient for us as well.
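    A rough sketch of what such a dev image could look like (everything below is an assumption, just to illustrate the idea):

    # hypothetical dev Dockerfile: keep the repo inside the image and refresh it on startup
    cat > Dockerfile.dev <<'EOF'
    FROM tensorflow/tensorflow:1.7.0-gpu-py3
    RUN apt-get update && apt-get install -y git
    RUN git clone https://github.com/vision4j/vision4j-collection.git /workspace/vision4j-collection
    # pull the latest code on every container start, then drop into a shell
    ENTRYPOINT ["/bin/bash", "-c", "cd /workspace/vision4j-collection && git pull; exec bash"]
    EOF
    docker build -f Dockerfile.dev -t vision4j-dev-sketch .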
    vigician
    @vigician
    @hristo-vrigazov aren't you gonna add some self-driving car problems? :D
    Hristo Vrigazov
    @hristo-vrigazov
    Yeah, probably soon. Lane detection is a really good candidate, although I'm not quite sure who would need a Java API for it apart from some demos.