Discussion forum for www.deeplabcut.org; questions are posted at https://forum.image.sc/tags/deeplabcut
Hi All! I'm very new to DeepLabCut and everything associated with it (Python, Colab, etc.). I have been practicing by running the open field demos. I am mostly able to get things to run (starting with the GUI on my machine, then using the Colab notebook to run training, then back to the GUI on my machine), but I don't run the example for the full recommended number of iterations (since I'm only testing/learning). After analyzing the video, my plots (such as x pixels versus y pixels) are completely blank. I am still able to create a labeled video showing my labeled points. Can anyone help me figure out what step I might have missed? Thanks in advance.
Boosting this, does anyone have advice? I'm pretty sure it's something simple, but I'm a coding newb...
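One quick way to check whether the analysis actually produced coordinates is to load the .h5 file that analyze_videos writes next to the video and plot one body part directly. Below is a minimal sketch; the file name and the assumption of a single-animal project are placeholders, not taken from the post:

import pandas as pd
import matplotlib.pyplot as plt

# Placeholder path to the .h5 produced by deeplabcut.analyze_videos
df = pd.read_hdf("videos/myvideoDLC_resnet50_openfieldshuffle1_10000.h5")

scorer = df.columns.get_level_values("scorer")[0]
bodypart = df[scorer].columns.get_level_values("bodyparts")[0]  # first labeled body part

x = df[scorer][bodypart]["x"]
y = df[scorer][bodypart]["y"]
plt.plot(x, y)
plt.gca().invert_yaxis()  # image coordinates: y grows downward
plt.xlabel("x (pixels)")
plt.ylabel("y (pixels)")
plt.show()

If this plot is also empty, the analysis output itself is missing data; if it looks fine, the problem is likely in the plotting step rather than the analysis.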
Hey DLC community, I have some issues with the refinement part of my project using maDLC.
I did the training phase using Google Colab, and now I want to refine my tracklets using the DLC GUI, but I have multiple issues:
First, I notice that I don't have the same tabs (Label Frames, Create Training Dataset, etc.) as most of the tutorials from the Mathis Lab. And in the tab 'OPT: Refine Tracklets', I can choose the config file and the video, but not the tracklet file (.pickle).
So I tried to do it without, but whenever I use the Flag tool, it gives me the following error:
IndexError: invalid index to scalar variable.
And if I use the Lasso tool to change the ID of my mice, it gives me:
AttributeError: can't set attribute
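In case it helps, tracklet refinement can also be launched from a script or notebook instead of the GUI tab. A minimal sketch, assuming DLC 2.2+ with the maDLC API; the paths below are placeholders, and the exact tracklet pickle name depends on which tracker you used:

import deeplabcut

config = "/path/to/project/config.yaml"      # placeholder
video = "/path/to/video.mp4"                 # placeholder
tracklets = "/path/to/video_el.pickle"       # placeholder: pickle written by convert_detections2tracklets

# Opens the interactive tracklet-refinement tool on the given tracklet file
deeplabcut.refine_tracklets(config, tracklets, video)

This sidesteps the GUI file picker, so it is a reasonable way to confirm whether the pickle itself is the problem.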
Hi Guys,
I'm having some trouble with my model and processing videos. We used 4 videos at about 5 minutes each and extracted around 250 frames total. I use one video at a time, train for ~100k iterations, process a second video, use refine outliers to add a new video, and start a new iteration. I got to iteration four, and the model was performing fairly well. I then tried to process one of our experimental videos. These videos are ~40 minutes long but are identical to the training videos in lighting, experimental hardware, and animals.
We usually measure performance by the percentage of frames under 0.95 certainty. We are shooting for ~5% of frames under 0.95 and will smooth out the other 5% using some post-processing. At the end of iteration 4 we processed an experimental video and got very good certainty numbers; however, all of the actual XY coordinates were wrong. The points were occasionally placed in the correct place, but often they were just flickering around the video. It was strange, because the model was very certain they were all correct, and performance during training was generally good. I then processed a different experimental video and performance was very poor, with over 50% of frames showing certainty below 0.95 across all our labeled points.
The latter leads me to believe there are some subtle changes the model isn't trained for, but what about the former video, where the model was very certain but mislabeled everything? Would the mismatch in video length/size have any effect on the ability to properly label points? Should my training videos be longer, to capture a wider variety of subtle changes in lighting and animal movement? I feel like my labeled frames captured most of the behavior well, so I'm not sure where the problem lies.
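For reference, the "percentage of frames under 0.95 certainty" metric can be computed directly from the .h5 file that analyze_videos writes. A minimal sketch; the file path and threshold are placeholders, and "certainty" here means the per-body-part likelihood column DLC reports:

import pandas as pd

df = pd.read_hdf("experimental_videoDLC_output.h5")   # placeholder path
scorer = df.columns.get_level_values("scorer")[0]
likelihood = df[scorer].xs("likelihood", level="coords", axis=1)

# Fraction of frames below the 0.95 cutoff, per body part and overall
per_bodypart = (likelihood < 0.95).mean()
overall = (likelihood < 0.95).values.mean()
print(per_bodypart)
print(f"Overall: {overall:.1%} of (frame, body part) entries below 0.95")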
I'm running the Docker_TrainNetwork_VideoAnalysis.ipynb notebook and it seems to only be using CUDA device id 0 (I have 8 devices). In the logs for the deeplabcut.train_network() call, all 8 devices are displayed, but the work seems to only go to device 0.
Is there an issue with using multiple GPUs? For some reason, using 4 GPUs instead of 1 (of the same type) only leads to a minor increase in inference speed (from 75 fps to 85 fps), which doesn't make much sense...
Additionally, changing batchsize in the analyze_videos call does not lead to faster inference either.
Am I missing something?
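As far as I know, the TensorFlow-based DLC runs each train/analyze call on a single GPU, so the usual approach is to pin the device per call rather than expect work to spread across all of them. A rough sketch with placeholder paths and example values:

import os
import deeplabcut

# Option 1: restrict which devices TensorFlow can see before it initializes (example id)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

config = "/path/to/config.yaml"   # placeholder

# Option 2: pass the device explicitly per call; batchsize here only affects inference,
# while the training batch size lives in train/pose_cfg.yaml
deeplabcut.train_network(config, shuffle=1, gputouse=0)
deeplabcut.analyze_videos(config, ["/path/to/video.mp4"], shuffle=1,
                          gputouse=0, batchsize=8, save_as_csv=True)

With this setup, running several videos in parallel (one process per GPU) is typically how people use multiple devices for inference.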
Does anyone know how the mirror parameter is used in the train/pose_cfg.yaml?
I found DeepLabCut/DeepLabCut#939:
"Mirror is used when you have two symmetrical points you want treated the same during training, like two knees."
This is certainly not what I want, and it may be true for imgaug, but unless I have misunderstood something, I am sure this is not what the deterministic augmenter does.
The deterministic augmenter seems to handle mirror augmentations in the way that I want: transforming the coordinates of the joints (flipping left to right) and then swapping the joint names, only for the joints that are chiral (i.e. have a symmetric counterpart). The symmetric association of the chiral labels is specified by the list of lists in the all_joints parameter in the pose_cfg.
So by using mirror: true I expect to effectively double my training data and reduce left/right asymmetry in the predictions of my model.
Can anyone confirm this?
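To make the expectation concrete, here is a minimal sketch of the flip-and-swap behaviour described above. This is my own illustration of the idea, not DLC's implementation; the body-part names and image width are made up:

# Horizontal mirror of labeled keypoints: flip x, keep y, swap the names of chiral joints
width = 640                      # image width in pixels (example)
swap = {"leftknee": "rightknee", "rightknee": "leftknee"}   # chiral pairs; "snout" has no counterpart

coords = {"leftknee": (100.0, 200.0), "rightknee": (150.0, 210.0), "snout": (300.0, 50.0)}

mirrored = {swap.get(name, name): (width - 1 - x, y) for name, (x, y) in coords.items()}
print(mirrored)
# {'rightknee': (539.0, 200.0), 'leftknee': (489.0, 210.0), 'snout': (339.0, 50.0)}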
import os
import time

import deeplabcut
import schedule


def getsubfolders(folder):
    ''' returns list of subfolders '''
    return [os.path.join(folder, p) for p in os.listdir(folder) if os.path.isdir(os.path.join(folder, p))]


def do_analysis():
    ''' analyzes every video in every subsubfolder of basepath with the given project '''
    project = r"MSA - V2-LF-20220303"
    shuffle = 1
    prefix = r"A:\AIs"
    projectpath = os.path.join(prefix, project)
    config = os.path.join(projectpath, "config.yaml")

    basepath = r"A:\!!! DLC Input"  # root folder containing the month subfolders
    subfolders = getsubfolders(basepath)
    for subfolder in subfolders:  # this would be January, February, etc. in the upper example
        print("Starting to analyze data in:", subfolder)
        subsubfolders = getsubfolders(subfolder)
        for subsubfolder in subsubfolders:  # this would be February1, etc. in the upper example...
            print("Starting to analyze data in:", subsubfolder)
            for vtype in [".mp4", ".m4v", ".mpg"]:
                deeplabcut.analyze_videos(config, [subsubfolder], shuffle=shuffle, videotype=vtype, save_as_csv=True)


# Run the analysis once a day at 16:15; schedule needs a loop that keeps calling run_pending
schedule.every().day.at("16:15:00").do(do_analysis)

while True:
    schedule.run_pending()
    time.sleep(60)