    Mackenzie Mathis
    @MMathisLab
    @vcorbit if you are using DLC > 2.0.4, you do not need to run that command; it happens automatically when you create the training set!
    KonradDanielewski
    @KonradDanielewski
    Hi, can I downsample frames less during extraction while still using kmeans? Due to the poor lighting of my videos, some body parts get really blurry after downsampling (hard to label)
    Mackenzie Mathis
    @MMathisLab
    @KonradDanielewski there is no downsampling from video to extract frames, so I don't know what you mean. If your data is blurry, you need to increase the frame rate and increase the lighting so you can reduce the exposure time.
    @mashedpoteto please check the Colab demo notebook to correctly set DLClight
    KonradDanielewski
    @KonradDanielewski
    Yeah, I just realized it's the downsampling I did beforehand that makes it blurry
    Sorry to bother you
    Fnoop
    @fnoop
    Is the colormap config value referring to opencv colormaps?
    KonradDanielewski
    @KonradDanielewski
    @Fnoop I think matplotlib's colormaps
    mashedpoteto
    @mashedpoteto
    @fnoop @MMathisLab thank you for the suggestions!! The DLClight option didn't fix the issue, but I used the terminal to label the video using the GUI and then added the file to my Google Drive. This way I could train the network.
    I am currently facing a problem finding the plot-poses directory and the labeled videos. They're not created in my Google Drive directory.
    Anna Zhukovskaya
    @annazhuk
    @MMathisLab I tried creating several different versions of my training dataset and determined that the iterations run normally if I exclude all the labels from three of my videos (not sure what the issue is with these videos, because extract_frames, label_frames, check_labels, and create_training_dataset all seem to work normally with them). I have a related question about the training: is running one training session for 400,000 iterations about the same as running two training sessions (with the same training dataset) for 200,000 iterations each, if the second session is started from the most recent snapshot from the first training session? I thought this might be the case based on how the loss value behaves.
    Mackenzie Mathis
    @MMathisLab
    @annazhuk correct! Okay, good to know, thanks for testing this. It is possible that if the metadata on a video is corrupt, every other step could work fine until training actually starts, so you might want to check out those videos! (A sketch of resuming from a snapshot follows below.)
    @mashedpoteto to be clear, you cannot label on Colab, you can only do so outside of Colab. The DLClight option essentially had no way to “fail” if run as I have it set in the Colab ;)
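    The resume step works out to pointing init_weights at the old snapshot before retraining. A minimal sketch, assuming a standard DLC 2.x project layout; every path below is a hypothetical placeholder:

    import yaml
    import deeplabcut

    # Hypothetical paths -- adapt to your own project layout.
    config_path = '/home/me/MyProject-Me-2019-01-01/config.yaml'
    pose_cfg_path = ('/home/me/MyProject-Me-2019-01-01/dlc-models/iteration-0/'
                     'MyProjectJan1-trainset95shuffle1/train/pose_cfg.yaml')

    with open(pose_cfg_path) as f:
        pose_cfg = yaml.safe_load(f)

    # Point init_weights at the last snapshot of the first session
    # (no .index/.meta/.data extension on the snapshot name).
    pose_cfg['init_weights'] = ('/home/me/MyProject-Me-2019-01-01/dlc-models/iteration-0/'
                                'MyProjectJan1-trainset95shuffle1/train/snapshot-200000')

    with open(pose_cfg_path, 'w') as f:
        yaml.dump(pose_cfg, f)

    # A second 200k-iteration run then continues where the first stopped.
    deeplabcut.train_network(config_path, shuffle=1, maxiters=200000)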
    vcorbit
    @vcorbit
    @MMathisLab, for some reason the labels don't always get changed for me when I create training datasets; I often get errors about paths not being found because the slashes are in the wrong direction. Another question, though: if I add a joint to an existing network and then want to train the network for a new iteration, can I still use a snapshot from the previous training (without the added joint) for init_weights? I just tried to do this and got this error:
    [attached image: image.png]
    mashedpoteto
    @mashedpoteto
    @MMathisLab ahaa! I didn't know that at the beginning. Thank you. The labeled video is not present in my video directory. Do you have any suggestions on what I can do?
    Rudgas
    @Rudgas
    @MMathisLab there seems to be some bug in the refinement window, as it is possible to move to frame -1. This also messes up the label positions for frame 0.
    Rudgas
    @Rudgas
    In the labeled-data folder there are two sub-folders: project and project_labeled. The project_labeled folder contains all the labeled frames from iteration-0, from when I first labelled frames; however, the project folder contains the new frames from the refinement step (outlier extraction). And within this folder I have, for a few frames, two files: img312.png and img312labeled.png. Could this be the cause of the issues?
    I checked machinelabels.csv, and the labels are set incorrectly for the first (0) frame, and can't be set correctly in the GUI; the position resets
    Mackenzie Mathis
    @MMathisLab
    @vcorbit you cannot re-train a network with different bodyparts; you need to start fresh.
    @Rudgas - could you post an issue on github please, thanks! please use the issue template and if you can provide a short screen cap video, that is very helpful!
    @Rudgas you can safely delete the imgXXlabeled.png files, they are just for your viewing. That should not be the cause, but it's easy to check.
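    For example (a minimal sketch; the project path is a hypothetical placeholder):

    import glob
    import os

    # Delete only the '*labeled.png' viewing copies; the original frames
    # and the CollectedData files are untouched.
    for f in glob.glob('/home/me/project/labeled-data/*/*labeled.png'):
        os.remove(f)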
    Rudgas
    @Rudgas
    I opened an issue; however, my network is currently training. If I see this behavior again, I'll add more documentation.
    Fnoop
    @fnoop
    hiya, if I’ve trained DLC to watch eye parts (inner/outer corners, upper/lower eyelids) all using the left eyes of different animals (horses, which look quite different), how do I approach the right eyes? Do I label them as different body parts, or just carry on with the existing body parts? Will DLC cope with the inner/outer corners being ‘reversed’?
    Mackenzie Mathis
    @MMathisLab
    you can train with mirror=true, and that will then treat each eye as an eye, not left vs. right. This is what is done for human legs/arms. If you want to keep track of left vs. right, then you should label both eyes accordingly.
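    For example, in the train/pose_cfg.yaml (a minimal sketch; the path is a hypothetical placeholder):

    import yaml

    pose_cfg_path = ('/home/me/project/dlc-models/iteration-0/'
                     'projectJan1-trainset95shuffle1/train/pose_cfg.yaml')

    with open(pose_cfg_path) as f:
        pose_cfg = yaml.safe_load(f)

    # With mirroring on, flipped copies of frames are used during
    # training, so left and right versions of a part look the same.
    pose_cfg['mirror'] = True

    with open(pose_cfg_path, 'w') as f:
        yaml.dump(pose_cfg, f)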
    Fnoop
    @fnoop
    @MMathisLab Ah, that's great, thanks :) Will that work for four-legged mammals for gait analysis as well, or would it be better to create separate parts? Sorry if I missed this somewhere; I couldn't find anything in the docs or in the forum
    Mackenzie Mathis
    @MMathisLab
    For gait analysis I absolutely label each limb independently; mirror=True is going to treat left and right as the same ;). You can check the default pose_cfg.yaml file on GitHub and Box 2 in the Nature Protocols paper (I think it's documented there)
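    For example, keeping left and right distinct just means naming each limb separately under bodyparts in the project's config.yaml (a minimal sketch; the path and part names are hypothetical placeholders):

    import yaml

    config_path = '/home/me/gait-project/config.yaml'

    with open(config_path) as f:
        cfg = yaml.safe_load(f)

    # One entry per limb, so left vs. right is preserved in the output.
    cfg['bodyparts'] = ['left_front_hoof', 'right_front_hoof',
                        'left_hind_hoof', 'right_hind_hoof']

    with open(config_path, 'w') as f:
        yaml.dump(cfg, f)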
    Fnoop
    @fnoop
    Thanks v much :)
    Shauyin CHAN
    @shauyin520
    Hi, I am using DLC to analyse my video (3.5 GB / 1 hour, 1920x1080 pixels, 15 fps). I trained with 20,000 iterations using MobileNet (since my video is large, I did not set the iteration count higher),
    and the training results are:
    Results for 20000 training iterations: 95 1 train error: 1.57 pixels. Test error: 7.43 pixels.
    With pcutoff of 0.1: train error: 1.57 pixels. Test error: 7.43 pixels.
    Next I continued with the following steps and finished all the procedures.
    I tried to open the filtered video produced by the program and found several problems:
    one is that the labeled animal is not the one I labeled for training before (I have many animals within one video, and most of the animals are marked with a number tag), and the labels jump to other animals with other number tags, or even to animals without a number tag, and sometimes even to the wall of the enclosure where the animals live.
    The second is the results of the plot_poses; they all look ugly:
    [attached plots: hist_filtered.png, trajectory_filtered.png, plot_filtered.png, plot-likelihood_filtered.png]
    Mackenzie Mathis
    @MMathisLab
    I answered your question on GitHub; you haven't told us anything about the data or how much data you labeled
    Mackenzie Mathis
    @MMathisLab
    And 20K iterations is not nearly enough with batch size 1; you need at least 500K.
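    For reference, the iteration cap is just an argument to train_network (a minimal sketch using the signature quoted later in this thread; config_path is a hypothetical placeholder):

    import deeplabcut

    config_path = '/home/me/project/config.yaml'
    # With batch size 1, run well past 20k iterations.
    deeplabcut.train_network(config_path, shuffle=1, displayiters=1000,
                             saveiters=50000, maxiters=500000)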
    Ross Koepke
    @rckoepke

    What's the best way to deal with nvidia-docker not being available on Ubuntu 18.04? (only nvidia-docker2)
    I tried loading from a repo with the older package but am still getting

    The following packages have unmet dependencies:
     nvidia-docker : Depends: sysv-rc (>= 2.88dsf-24) but it is not installable or
                     file-rc (>= 0.8.16) but it is not installable
    E: Unable to correct problems, you have held broken packages.

    and I can't figure out how I got around this in the past

    tfrostig
    @tfrostig
    Hi, is it possible to somehow join multiple projects? What is the correct way if I want to train on all of the videos together?
    Ross Koepke
    @rckoepke

    @tfrostig

    Under: "Create a New Project" heading here:
    https://github.com/AlexEMG/DeepLabCut/blob/master/docs/UseOverviewGuide.md

    The function to invoke multiple videos is shown to be:
    deeplabcut.create_new_project('Name of the project', 'Name of the experimenter', ['Full path of video 1', 'Full path of video 2', 'Full path of video 3'], working_directory='Full path of the working directory', copy_videos=True/False)

    So let's say you already have ProjectA, ProjectB, and ProjectC, but you want to merge them into Project1. If you already have some of the frames of the other videos labeled, they will be in ./ProjectA/labeled-data/<Video1> and ./ProjectA/labeled-data/<Video1>_labeled. So you can copy those two directories (the ones inside ./ProjectA/labeled-data/) to ./Project1/labeled-data/. Then do the same for ./ProjectB/labeled-data/ and ./ProjectC/labeled-data/.

    Assuming you have enough labeled frames covering most of the unique cases in your footage, you can skip deeplabcut.extract_frames(), deeplabcut.check_labels(), and deeplabcut.label_frames() and go straight to deeplabcut.create_training_dataset().

    P.S. I don't think ./ProjectA/labeled-data/<Video1>_labeled is actually used by the software; I think it's generated for human convenience only when deeplabcut.check_labels() is run. The actual labels are stored in ./ProjectA/labeled-data/<Video1>/CollectedData_<User>.csv and/or ./ProjectA/labeled-data/<Video1>/CollectedData_<User>.h5
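
    A rough sketch of the copying step above (only standard-library calls; the project paths are hypothetical placeholders):

    import os
    import shutil

    target = './Project1/labeled-data'
    for project in ('./ProjectA', './ProjectB', './ProjectC'):
        src_root = os.path.join(project, 'labeled-data')
        for video_dir in os.listdir(src_root):
            dst = os.path.join(target, video_dir)
            if not os.path.exists(dst):
                # Copies the frames plus CollectedData_<User>.csv/.h5
                shutil.copytree(os.path.join(src_root, video_dir), dst)

    You will most likely also need the corresponding videos listed in Project1's config.yaml before deeplabcut.create_training_dataset() picks the labels up.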

    tfrostig
    @tfrostig
    thanks!
    Ross Koepke
    @rckoepke

    What's the best way to test that my container has the ability to use the GPU for training?

    Inside Docker, nvcc / nvidia-smi seem to be working fine, e.g.

    ubuntu@4d3df5a62144:/work$ nvcc -V
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2017 NVIDIA Corporation
    Built on Fri_Sep__1_21:08:03_CDT_2017
    Cuda compilation tools, release 9.0, V9.0.176

    However, deeplabcut.train_network(config_path, shuffle=1, trainingsetindex=0, gputouse=0, max_snapshots_to_keep=1, autotune=False, displayiters=None, saveiters=None, maxiters=None)

    results in the CPU maxing out and no rise in GPU memory utilization (it sticks at 0.4 GB / 1 GB whether training or not). Listing processes in nvidia-smi is not supported by my graphics card; however, my GPU does seem to support CUDA 9 with a compute capability of 3.0.

    CUDA V9.0.176
    Nvidia-driver-390.129

    Tom Vajtay
    @tvajtay
    @rckoepke Start IPython and see if you can load TensorFlow and DLC; that should verify the installation
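    For example, in IPython (a minimal sketch for the TF 1.x builds DLC used at the time):

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    # A working GPU setup lists a '/device:GPU:0' entry next to the CPU.
    print(device_lib.list_local_devices())

    # Or, more simply:
    print(tf.test.is_gpu_available())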
    Ross Koepke
    @rckoepke

    @tvajtay

    Doesn't look like I've got TF running on the GPU. Inside the container, running the following in IPython

    import tensorflow as tf

    with tf.device('/gpu:0'):
        a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
        b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
        c = tf.matmul(a, b)

    with tf.Session() as sess:
        print(sess.run(c))

    results in
    InvalidArgumentError (see above for traceback): Cannot assign a device for operation 'MatMul_1': Operation was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device.

    DLC not complaining at all fwiw
    Tom Vajtay
    @tvajtay
    Can you do conda list inside the environment and see if tensorflow-gpu is there?
    Ross Koepke
    @rckoepke

    @tvajtay thanks for teaching me a new command with conda list.

    Between then and now I ran conda install tensorflow-gpu=1.8.0, so it definitely shows up at the moment...

    I'm not sure if it was there earlier; I was using the https://github.com/eqs/DeepLabCut-Docker/blob/master/docker/Dockerfile (eqs) docker image. If you're familiar with it, you may know whether it had tf-gpu installed.

    However, after running conda install tensorflow-gpu=1.8.0 inside the container, I still have the same issue when I repeat the test above (print(sess.run(c)))
    Tom Vajtay
    @tvajtay
    @rckoepke I'm not familiar with the docker container :(
    Ross Koepke
    @rckoepke
    no problem, re-testing with the normal docker container https://github.com/MMathisLab/Docker4DeepLabCut2.0 to see if that works out for me :)
    thanks again