##### Activity
angusharley
@angusharley
Hi, would anyone be able to tell me how to extract the pixel likelihood values within each image? Trying to extract a likelihood function as opposed to a point estimate. Thanks.
6 replies
hackedintothemainframe
@hackedintothemainframe
Hi, I just want to point out that there was a fantastic maDLC (2.2) Colab notebook on the DLC website and GitHub that is no longer there. It was titled 'DeepLabCut 2.2 Toolbox - COLAB', and it was extremely helpful to me when I was brand new to Python, as it would be for incoming users.
jerrykchang
@jerrykchang
Hi, I keep getting "Unknown Error: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above", and it seems like there is a compatibility issue between the versions and drivers. I have an RTX 2060 GPU and installed the latest Studio driver (with the nvidia-smi command it says the driver version is 461.72 and the CUDA version is 11.2). I also installed CUDA 10, and when I type nvcc -V it says the CUDA compiler version is 10.0.130. I created the anaconda environment with the given config file, which includes TensorFlow 1.13 and cuDNN 7.6.5. I checked the table from the TensorFlow documentation and it seems to be OK. Which package and/or driver should I consider upgrading/downgrading?
2 replies
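A common workaround for this cuDNN initialization failure (assuming the version table really does check out) is to let TensorFlow grow its GPU memory allocation instead of grabbing it all at once. A minimal sketch, for TF 1.14+, using an environment variable set before TensorFlow is imported:

```python
import os

# Setting this BEFORE TensorFlow is imported tells it to allocate GPU
# memory incrementally; grabbing all memory up front is a frequent cause
# of "Failed to get convolution algorithm" / cuDNN init errors.
# (On TF 1.13 the equivalent is gpu_options.allow_growth in a ConfigProto.)
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
```

This is only a sketch of one known cause; a genuine driver/CUDA mismatch would still need the versions fixed.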
wolff51n
@wolff51n
Hi all, when adding new labels to a network, during labelling I was prompted with a window saying "New label found in the config file. Do you want to see all the other labels?". I selected no, assuming this would allow me to quickly label just the new points, and retain the previously labelled bodyparts (but just not display them). However, if I now select yes, only the new labels are shown and the older labels appear to be absent. Does this mean I need to re-label the original bodyparts - or are these still saved somewhere?
6 replies
claybaker99
@claybaker99
Hi, I am working to extract outlier frames. My videos were analyzed in Google Colab, and I relocated the dlc-models and evaluation-results folders to my local computer, where I also have the config file. When I select an original (unlabeled) video, there is an error about it not being analyzed. However, when I select a labeled video, it says no suitable videos. Am I approaching this wrong? Any advice would be great! I excluded some folder names in the thrown error, but it has this format: "No suitable videos found in ['~/videos/nd1_tom_8_006_pbs_1DLC_resnet_50_bu_upLight_closedFeb19shuffle1_220000_labeled.mp4']"
21 replies
dlee124065
@dlee124065
Hi, I keep running into an index error every time I try to initiate DLC in the DLC Live GUI. I can't seem to figure out what I am doing wrong. Thanks for your help!
5 replies
weilu1998
@weilu1998
Hi, I am currently using DLC for multiple animal tracking. Each animal was marked with a unique dot combination for individual identification. Should I add each unique dot set as a new multianimalbodypart? For example, individual 1 (marked by 1 dot) would have body part "1 dot", but missing for body part "2 dots". Or should I build an individual classifier that would classify individuals at the end of tracking? Any advice would be welcome. Thanks for your help!
aloksneurobot1
@aloksneurobot1
Hi, this may be very trivial, but I am stuck at the installation step.
The following warning message pops up every time I try to create the env:

(base) PS C:\Users\user\Documents\DeepLabCut-master\conda-environments> conda env create -f DLC-CPU.yaml
Solving environment: failed

ResolvePackageNotFound:

- tensorflow==1.15.5
Did anyone face this? If so, could you guide me as to what might be the issue?
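ResolvePackageNotFound usually means that exact build is not on the conda channels for your platform; TensorFlow 1.15.5 wheels are generally published on PyPI only. One option is to move the spec into the environment file's pip subsection. A hypothetical sketch (the helper name is made up) of that rewrite:

```python
def move_tf_to_pip(lines):
    """Move a conda 'tensorflow==...' spec into a pip subsection.

    Hypothetical helper: conda's solver cannot find pip-only builds,
    so the spec is re-emitted under a '- pip:' block instead.
    """
    kept, tf_spec = [], None
    for line in lines:
        if line.strip().startswith("- tensorflow=="):
            tf_spec = line.strip()[2:]  # e.g. "tensorflow==1.15.5"
        else:
            kept.append(line)
    if tf_spec is not None:
        kept += ["  - pip:", "    - " + tf_spec]
    return kept
```

After rewriting DLC-CPU.yaml this way, `conda env create -f DLC-CPU.yaml` should solve, with pip then pulling TensorFlow from PyPI.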
etarter
@etarter
hey everyone, from experience, what is the recommended bitrate for optimal inference (optimal meaning less time to analyse than the length of the video itself)? Thanks!!
claybaker99
@claybaker99
This is the script that I wrote to convert from a CSV to an HDF; it did not send in the thread.
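Since the script itself never arrived, here is a minimal sketch of what such a CSV-to-HDF conversion typically looks like for DLC pose files. DLC's CSVs carry a 3-level column header (scorer / bodyparts / coords), so they must be read with header=[0, 1, 2]; DLC's own .h5 files use the key "df_with_missing". The scorer/bodypart names below are invented for the demo:

```python
import io
import os
import tempfile

import pandas as pd

# Tiny stand-in for a DLC pose CSV (scorer / bodyparts / coords header).
csv_text = (
    "scorer,DLC_resnet50_demo,DLC_resnet50_demo,DLC_resnet50_demo\n"
    "bodyparts,nose,nose,nose\n"
    "coords,x,y,likelihood\n"
    "0,12.5,40.2,0.98\n"
    "1,13.1,41.0,0.97\n"
)
# Read with the 3-level header and frame index, then write HDF5 under
# the key DLC itself uses.
df = pd.read_csv(io.StringIO(csv_text), header=[0, 1, 2], index_col=0)
out_path = os.path.join(tempfile.mkdtemp(), "poses.h5")
df.to_hdf(out_path, key="df_with_missing", mode="w")
```

For a real file, replace the StringIO with the CSV's path and write the .h5 next to it.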
whygrossman
@whygrossman
Hi everyone, I trained a model in DLC on one computer, then copied everything to run another computer (VPNing in to lab kept making my environment crash). I know that I need to change several things about my config.yaml file so that I can add my new videos to the project, but I'm not sure what to change.
3 replies
NejcKejzar
@NejcKejzar
Hi all! Has anyone been able to run single-animal DLC 2.1.9 on an NVIDIA RTX 3070 GPU with 460 proprietary drivers and CUDA 11.2? I seem to be getting some strange behavior - DLC recognizes the GPU, but then seemingly freezes on "Starting to extract posture". I've been running the same code normally on an NVIDIA TITAN V with 450 proprietary drivers and CUDA 11.0, so I'm wondering if the drivers are the problem (and therefore whether I should downgrade the 460 drivers to 450 and CUDA to 11.0).
NejcKejzar
@NejcKejzar
UPDATE2: downgrading to CUDA 11.0 still hasn't resolved the issue. Attached is the output of the code run in a Jupyter notebook (with the DLC-GPU.yaml conda environment) as well as the terminal output:
NejcKejzar
@NejcKejzar

After about 10-15 min, the "Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR" error pops up. Any ideas?

10 replies
Oded Rinsky
@odedrin
Hi everyone, I've trained a model with DLC and am trying to export it in order to use it with DLC-Live. (Linux is still new to me.) When I try to use the export_model method I get a PermissionError: [Errno 1] Operation not permitted. The traceback shows me the problem occurs when chmod_func() is called. I am using Ubuntu and have sudo rights. Has anyone encountered this issue?
williambrice
@williambrice
Hi everyone, can somebody please tell me how I can generate a config.yaml without a video? I have a dataset with just the labeled data, and I would like to use it to create a project and train with the labeled data. I've been stuck for several days now; please help me.
2 replies
timtensor
@timtensor
Hi, one quick question about analyzing multiple videos in a folder. I have a folder with the following structure:
/content/videos, under which I have a bunch of files 1.mp4, 2.mp4, 3.mp4. I tried to use the following command to analyze the videos and then create filtered output:
deeplabcut.create_labeled_video(config_file,['/content/videos'],videotype='.mp4',) and for creating labels:
deeplabcut.create_labeled_video(config_file,['/content/videos'],videotype='.mp4', draw_skeleton=True,save_frames=True). I was under the impression it would create a bunch of csv files, for example 1.csv, 2.csv, 3.csv, and a bunch of filtered outputs such as 1.mp4, 2.mp4, 3.mp4. However, I got results for only one video: 3.csv and 3.mp4. I was wondering if there is something wrong, or what is the expected outcome?
8 replies
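One thing worth checking, sketched below with invented paths: analyze_videos has to run before create_labeled_video, and passing an explicit list of video files rather than the folder removes any ambiguity about which videos are picked up (a temporary folder stands in for /content/videos so the sketch runs standalone):

```python
import os
import tempfile
from pathlib import Path

# Stand-in for /content/videos: a temp dir with three empty .mp4 files.
folder = tempfile.mkdtemp()
for name in ("1.mp4", "2.mp4", "3.mp4"):
    open(os.path.join(folder, name), "wb").close()

# Build an explicit, sorted list of every .mp4 in the folder.
videos = sorted(str(p) for p in Path(folder).glob("*.mp4"))

# Then (DLC calls commented out so the sketch is self-contained):
# deeplabcut.analyze_videos(config_file, videos, videotype=".mp4", save_as_csv=True)
# deeplabcut.filterpredictions(config_file, videos, videotype=".mp4")
# deeplabcut.create_labeled_video(config_file, videos, videotype=".mp4", draw_skeleton=True)
```

With an explicit list, each video should get its own .csv/.h5 and labeled .mp4, which would explain why only 3.csv/3.mp4 appeared if the folder form only matched one file.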
kl-debug
@kl-debug
Hi everyone. I used Google Colab to train a network, and since then I have been analyzing all of my videos on Colab as well. Recently, the video background slightly changed and some of my labels are off, so I would like to refine my labels in these newer videos and retrain my network. I'm having difficulty figuring out how to take my network off Google Colab and relabel the outlier frames on my home computer. I have downloaded the network folders 'dlc-models' and 'evaluation-results' as well as the h5 files and the raw video files of the videos I would like to relabel. In the DLC GUI there is no option to refine. Can someone offer advice on how to go about doing this, or point me in the direction of some helpful information? I'm having trouble finding any information on this. Thank you!
4 replies
ensorpalacios
@ensorpalacios
Hi everyone. I'm trying to analyse a new video but I get strange behaviour: after some frames (about half of all frames) the network stops producing labels. I checked the integrity of the video (https://github.com/DeepLabCut/DeepLabCut/issues/982#issuecomment-723480607) and it seems fine. I successfully analysed the videos I used for training. Has anyone had a similar issue?
4 replies
kbakhurin
@kbakhurin

I have a question about a general strategy for using the same model on multiple experiments/animals. We were being pretty conservative and creating new models for each experiment that we performed in the lab. Now that we're comfortable with that, we have been wondering what might happen if we tried to make a 'mega-model' that is trained iteratively on new videos as we collect them. The procedure we've been trying out is

1. take an existing model, add new videos, and label them
2. merge datasets
3. create new training set
4. train
5. analyze (the new videos seem to contain mistakes)
6. extract outliers
7. refine labels, merge datasets, create training set, and train again
8. reanalyze the new video with the updated model, which still contains mistakes

at this point, it seems like less work to just make a new model from scratch. Is the idea that eventually by adding new videos, the 'mega-model' will generalize to any new video fairly well and eventually will require little additional work to analyze new experiments? Are people finding this to be the case?

I can see that the 'loss' scores start out at lower values if you retrain, but the newly added videos still contain more error frames than if we had just trained a model from scratch. Maybe I've totally missed the point or am using the wrong strategy.

6 replies
Daniel Hereford
@dhereford:matrix.org
[m]
Hi there, we were mistakenly using the CPU version of DLC, which was leading to long processing times when training the network. We believe that we have now installed the GPU version, but when looking at Task Manager the GPU is only showing ~30% usage. CUDA is using ~90%, however. Is this normal usage for the GPU? We're using an Nvidia 1080 with an i7 7700.
williambrice
@williambrice
This message was deleted
2 replies
williambrice
@williambrice
Hi everyone, I have run deeplabcut.create_training_dataset('/content/drive/MyDrive/openfield-Pranav-2018-10-30/config.yaml'), but I get this error: ValueError: all the input array dimensions except for the concatenation axis must match exactly. Please help me to solve this problem. Thanks!
basicvisual
@basicvisual
Hi, I had a question about the plotting function. There is an inbuilt plotting function for each of the joints defined in the config.yaml file, using the following command:
deeplabcut.plot_trajectories(config_path, ['fullpath/analysis/project/videos/reachingvideo1.avi']). Is there a way to selectively plot a certain joint? It could also be done after the creation of the .csv files; I'm looking for more in post-processing.
4 replies
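For post-processing, one option (a sketch with invented scorer/bodypart names) is to load the .h5/.csv that analyze_videos writes and cross-section a single joint out of the 3-level column index, then plot it with pandas/matplotlib directly:

```python
import numpy as np
import pandas as pd

# Stand-in for the DataFrame you would get from pd.read_hdf("<video>.h5"):
# columns are a 3-level MultiIndex (scorer, bodyparts, coords).
cols = pd.MultiIndex.from_product(
    [["DLC_resnet50_demo"], ["wrist", "elbow"], ["x", "y", "likelihood"]],
    names=["scorer", "bodyparts", "coords"],
)
df = pd.DataFrame(np.random.rand(100, 6), columns=cols)

# Pull out just one joint by dropping the 'bodyparts' level:
wrist = df.xs("wrist", axis=1, level="bodyparts")
# e.g. wrist[("DLC_resnet50_demo", "x")].plot() to plot its x-trajectory
```

The same cross-section works on the CSV if it is read with header=[0, 1, 2].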
Daniel Hereford
@dhereford:matrix.org
[m]
Hi there, is it possible to correct more than 20 outlier frames during the refine labels process? We're having issues with DLC "guessing" where something is when the feature is occluded.
2 replies
cgaletzka
@cgaletzka
Hi, I'm manually extracting frames with the Python GUI. The GUI freezes every time I click the quit button. My MacBook is running Big Sur 11.0.1. To start Python, I use pythonw instead of ipython.
weilu1998
@weilu1998
Hi, I am using DeepLabCut for multi-animal tracking. I use ResNet-50 as the initial weights and trained the model with 50 frames. The pose estimation results label each animal correctly, but it also mislabels other objects as animals. I am wondering whether there is a way to limit the maximum number of animals it detects when analyzing videos? For example, if I know there would be at most 4 animals in all the frames, could it pick only the 4 animals with the highest likelihoods?
Tobias Bollig
@tobiasbollig:matrix.org
[m]
Good morning! I have a training question. I'm using a multi-animal tracking project and now have a huge, overtrained set from which I extracted my coordinates. I want to determine, step by step, how low I can go and still have good recognition. Is there a way to "retrain" the set with a reduced number of frames or videos? Can I simply delete some frames from the CSV and retrain with that?
etarter
@etarter
hello everyone, do you know where one can find more information about the tracking parameters, the ones that are cross-validated: what range can be set and what exactly they mean? (Some are self-explanatory, some are not.) Thanks in advance!!
Daniel Hereford
@dhereford:matrix.org
[m]
We're currently training a network, and we have over 1,200 labeled images for training. We've trained our latest iteration of the network, but on the evaluate-network step we get the "wx._core.wxAssertionError: C++ assertion "Assert failure" failed at ....\src\msw\toolbar.cpp(938) in wxToolBar::Realize(): Could not add bitmap to toolbar" error. The network will still analyze a new video, however, so I'm confused about the purpose of the evaluate-network step in the proper functioning of the model's ability to analyze novel videos. Previous threads with similar issues seem to indicate we have too many frames labeled, so is there a way to delete some of the frames individually, or would starting a new project be necessary to build from a smaller number of frames?
etarter
@etarter
good afternoon everyone, does someone know the meaning of "method: 'm1'" in the tracking parameters to be cross validated? thanks in advance!
jonahpc
@jonahpc

Hello all, I have a few questions regarding triangulation. For starters, I have set up 3 cameras (all perpendicular to one another). I want to triangulate the camera outputs (I understand that triangulation cannot use all 3 cameras at this time, but eventually that is something that will happen), but I have some problems interpreting the output.

How is the 3D space registered? What is the "x", "y", and "z" assigned to? Are different "scans" co-registered to a common space?

basicvisual
@basicvisual
Hello all, the DeepLabCut GUI has a k-means step (before labelling) to cut a video sequence into frames. It also compares (if I understood correctly) two videos (which are nearly similar) and outputs the frames that are totally different. I was wondering, how does that work? And how has it been implemented in the code? (I might have totally misunderstood the k-means step here.)
clmil
@clmil
Hullo! I'm using the 3D demo notebook, but it won't recognize any of my calibration images. They are in calibration_images, the config file matches the file naming convention as directed in the notebook (cameras are cam1 and cam2, images are cam1-1, cam2-1, cam1-2, cam2-2, etc.). Most folks seem to have an issue with it not recognizing corners, but I haven't encountered any documentation about not finding the images at all. I've tried explicitly defining img_path in the config file, which didn't change anything. Would any of you lovely folks have any suggestions?