(base) PS C:\Users\user\Documents\DeepLabCut-master\conda-environments> conda env create -f DLC-CPU.yaml
Collecting package metadata (repodata.json): done
Solving environment: failed
I have a question about a general strategy for using the same model across multiple experiments/animals. We have been fairly conservative, creating a new model for each experiment we performed in the lab. Now that we're comfortable with that, we have been wondering what would happen if we made a 'mega-model' that is trained iteratively on new videos as we collect them. The procedure we've been trying out is
At this point, it seems like less work to just make a new model from scratch. Is the idea that, by adding new videos, the 'mega-model' will eventually generalize fairly well to any new video and require little additional work to analyze new experiments? Are people finding this to be the case?
I can see that the 'loss' scores start out at lower values if you retrain, but the newly added videos still contain more error frames than if we had just trained a model from scratch. Maybe I've totally missed the point or am using the wrong strategy.
appreciate your help! Thanks
config.yaml file using the following command: deeplabcut.plot_trajectories(config_path, ['fullpath/analysis/project/videos/reachingvideo1.avi']). Is there a way to selectively plot a certain joint? It could also be done after the creation of the .csv files. I'm looking for more post-processing options.
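If a single joint is all you need, one option (outside DeepLabCut's own plotting functions) is to read the output .csv with pandas and plot that body part yourself. A minimal sketch, where the scorer string and the 'wrist' body part are made-up placeholders, and a small synthetic table with the same three-row header (scorer / bodyparts / coords) stands in for the real file:

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Real usage would load the analysis output, e.g.:
#   df = pd.read_csv("reachingvideo1DLC.csv", header=[0, 1, 2], index_col=0)
# Here a synthetic table with the same column layout stands in for it.
scorer = "DLC_resnet50_demo"  # hypothetical scorer name
cols = pd.MultiIndex.from_product(
    [[scorer], ["hand", "wrist"], ["x", "y", "likelihood"]],
    names=["scorer", "bodyparts", "coords"],
)
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((100, len(cols))), columns=cols)

joint = "wrist"  # the single joint to plot
x = df[(scorer, joint, "x")]
y = df[(scorer, joint, "y")]
p = df[(scorer, joint, "likelihood")]

mask = p > 0.6  # drop low-confidence detections before plotting
plt.plot(x[mask], y[mask], ".", markersize=2)
plt.gca().invert_yaxis()  # image coordinates: y grows downward
plt.title(f"Trajectory of '{joint}'")
plt.savefig("wrist_trajectory.png")
```

The same column indexing works for any downstream post-processing (filtering, velocity, etc.) once you have the per-joint x/y series.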
Hello all, I have a few questions regarding triangulation. For starters, I have set up 3 cameras (all perpendicular to one another). I want to triangulate the camera outputs (I understand that triangulation cannot use all 3 cameras at this time, but eventually that is something that will happen), but I have some problems interpreting the output.
How is the 3D space registered? What is the "x", "y", and "z" assigned to? Are different "scans" co-registered to a common space?
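As far as I understand, for a calibrated stereo pair the triangulated x/y/z are expressed in the coordinate frame of the first (reference) camera, so all points from one calibrated session share that common space; sessions calibrated separately are not automatically co-registered. A minimal linear (DLT) triangulation sketch (not DeepLabCut's internal code), with toy cameras to show where the coordinates live:

```python
import numpy as np

def triangulate_dlt(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one point seen by two cameras.

    P1, P2: 3x4 projection matrices; pt1, pt2: (x, y) image points.
    The returned 3D point lives in whatever frame the projection
    matrices are written in -- conventionally the first camera's
    frame, which is what defines the "common space" for all points.
    """
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]          # null vector of A (homogeneous 3D point)
    return X[:3] / X[3]

# Toy setup: camera 1 at the origin (the reference frame), camera 2
# shifted 1 unit along x; identity intrinsics for simplicity.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 5.0])   # point in camera-1 coordinates
pt1 = X_true[:2] / X_true[2]                       # projection into camera 1
pt2 = (X_true - [1.0, 0.0, 0.0])[:2] / X_true[2]   # projection into camera 2
X_est = triangulate_dlt(P1, P2, pt1, pt2)          # recovers X_true
```

With noiseless points the estimate matches X_true exactly, and the z axis points along the reference camera's optical axis.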
The GUI has a k-means step (before labelling) to cut a video sequence into frames. It also compares (if I understood correctly) two videos (which are nearly similar) and outputs the frames that are totally different. I was wondering how that works, and how it has been implemented in the code. (I might have totally misunderstood the k-means step here.)
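Regarding the k-means step: the usual idea (and, as far as I can tell, roughly what the frame-extraction code does) is to downsample each frame, flatten it into a vector, cluster the vectors, and keep one representative frame per cluster, so near-duplicate frames collapse into a single pick. A sketch of that idea, not DeepLabCut's actual implementation, using scikit-learn's MiniBatchKMeans:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def select_diverse_frames(frames, n_select, downsample=8, seed=0):
    """Pick visually diverse frames by clustering downsampled images.

    frames: array (n_frames, H, W) of grayscale images. Each frame is
    shrunk, flattened to a vector, and clustered with k-means
    (k = n_select); the frame closest to each cluster centre is kept,
    so near-identical frames end up in one cluster and contribute
    only a single representative.
    """
    small = frames[:, ::downsample, ::downsample]
    X = small.reshape(len(frames), -1).astype(np.float64)
    km = MiniBatchKMeans(n_clusters=n_select, random_state=seed, n_init=3).fit(X)
    picks = []
    for c in range(n_select):
        members = np.where(km.labels_ == c)[0]
        if len(members) == 0:
            continue  # k-means can leave a cluster empty
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        picks.append(int(members[np.argmin(dists)]))
    return sorted(picks)

# Toy demo: 30 frames that are noisy copies of three distinct scenes;
# the selector should return one frame from each scene.
rng = np.random.default_rng(0)
scenes = rng.random((3, 64, 64))
frames = np.repeat(scenes, 10, axis=0) + 0.01 * rng.random((30, 64, 64))
picked = select_diverse_frames(frames, n_select=3)
```

So it isn't comparing videos pairwise as such; clustering just groups similar-looking frames, and only dissimilar representatives survive into the labelling set.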