    mingyu906
    @mingyu906
    @sgoldenlab My project_folder/csv/features_extracted folder is empty as well, so I can't run the Optional step before running the machine model on new data either
    sgoldenlab
    @sgoldenlab
    @mingyu906 you have to extract the frames before you can label the video
    mingyu906
    @mingyu906
    @sgoldenlab I have finished extracting the frames
    sgoldenlab
    @sgoldenlab
    hrmm, try running the feature extraction again and make sure there are CSVs in the features_extracted folder
    mingyu906
    @mingyu906
    @sgoldenlab C:/behaviors/mingyu1/aggression1/project_folder/project_config.ini
    Pose-estimation body part setting for feature extraction: 16
    Extracting features from 0 files...
    All feature extraction complete.
    Pose-estimation body part setting for feature extraction: 16
    Extracting features from 0 files...
    All feature extraction complete.
    I have run the feature extraction again! The folder is still empty
    NIHRBC
    @NIHRBC
    Does your project folder match your project_config.ini?
    sgoldenlab
    @sgoldenlab
    @mingyu906 have you got any files showing in your project_folder/csv/outlier_corrected_movement_location folder?
    NIHRBC
    @NIHRBC
    I had this problem (or a similar one). I had to skip outlier correction.
    mingyu906
    @mingyu906
    @sgoldenlab @NIHRBC no files! I didn't skip outlier correction. And two new csv log files are in the /project_folder/log folder now
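    A quick way to see which step of the SimBA pipeline has produced output is to count the CSVs in each project subfolder. This is a minimal Python sketch using the folder names and project path mentioned in this thread; adjust the path for your own setup:

        import glob
        import os

        project = "C:/behaviors/mingyu1/aggression1/project_folder"  # example path from above
        # each pipeline step writes CSVs into its own subfolder; an empty folder
        # means the preceding step produced no output
        for step in ("csv/input_csv",
                     "csv/outlier_corrected_movement_location",
                     "csv/features_extracted"):
            files = glob.glob(os.path.join(project, step, "*.csv"))
            print(step, "->", len(files), "csv file(s)")

    Here, feature extraction reported "Extracting features from 0 files..." precisely because the upstream folders were empty.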
    mingyu906
    @mingyu906
    I want to know how to fill out the form below. Why is C = 1.5?
    @sgoldenlab @NIHRBC
    mingyu906
    @mingyu906
    I guess I have a problem with this part! Above are my parameters for this step!
    Jia Jie Choong
    @inoejj
    @mingyu906 Can I ask which version of SimBA you are using? Are you using simba-uw-tf?
    Do you have any tracking data in csv/input_csv?
    NIHRBC
    @NIHRBC
    That's the proportion difference you'll accept between - in your case - the nose and tail of the same animal. So, if for some reason the tracking thinks that the nose of animal A is the nose of animal B, but that point is more than - in this example - 1.5x the average nose-tail distance of animal A, it will (correct me if I'm wrong) instead place that point at the last high-confidence point within the criterion.
    It is the amount of variance you'll accept between two points that should belong to the same animal. It wouldn't make sense for the nose of an animal to be on the opposite side from the tail, so it tries to fix anything outside of the boundary you set
    https://github.com/sgoldenlab/simba/blob/master/misc/Outlier_settings.pdf is where they explain this (with a handy diagram!)
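    For illustration, a minimal Python sketch of the heuristic described above; the function and argument names here are hypothetical, not SimBA's actual implementation:

        import numpy as np

        def correct_outliers(points, ref_distance, criterion=1.5):
            # points: (n_frames, 2) array of x, y coordinates for one body part
            # ref_distance: e.g. the animal's mean nose-to-tail length
            corrected = points.astype(float).copy()
            last_good = corrected[0]
            for i in range(1, len(corrected)):
                jump = np.linalg.norm(corrected[i] - last_good)
                if jump > criterion * ref_distance:
                    # outside the criterion: fall back to the last reliable point
                    corrected[i] = last_good
                else:
                    last_good = corrected[i]
            return corrected

    So with criterion = 1.5, a body part is flagged whenever it lands more than 1.5x the reference distance away from its last accepted position.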
    mingyu906
    @mingyu906
    @inoejj Yes, I use simba-uw-tf, but there are no files in csv/input_csv, even though I entered everything step by step according to the instructions! How do I build the tracking data? With DeepLabCut? Thank you for your time
    @NIHRBC Thank you so very much! I got it!
    Jia Jie Choong
    @inoejj
    @mingyu906 Yes, you have to have tracking data from DeepLabCut
    There is a tutorial on how to use DeepLabCut with our GUI
    Then you import the tracking data into your SimBA project folder and you can start labelling behaviors
    mingyu906
    @mingyu906
    @inoejj I just reviewed the whole process again, and I understand now! Thank you for your instructions!
    Enny-96
    @Enny-96

    Hi, I'm new here and I'm trying to install SimBA. I have an error and I can't fix it. Could someone help me please?
    I've run "pip install simba-uw-no-tf" in the Anaconda prompt and the error is the following:
    "ERROR: Could not find a version that satisfies the requirement opencv-python==3.4.5.20 (from simba-uw-no-tf) (from versions: 3.4.8.29, 3.4.9.31, 3.4.9.33, 3.4.10.35, 3.4.10.37, 3.4.11.39, 3.4.11.41, 3.4.11.43, 4.1.2.30, 4.2.0.32, 4.2.0.34, 4.3.0.36, 4.3.0.38, 4.4.0.40, 4.4.0.42, 4.4.0.44)
    ERROR: No matching distribution found for opencv-python==3.4.5.20 (from simba-uw-no-tf)"

    Thanks

    NIHRBC
    @NIHRBC
    have you tried grabbing OpenCV separately? pip install opencv-python==3.4.5.20
    sgoldenlab
    @sgoldenlab
    @Enny-96 , yes please try pip install opencv-python==3.4.5.20
    Can I ask what version of Python you are using for your conda environment?
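    For context: the versions pip offers in the error above (3.4.8.29 and newer) suggest the environment is on a Python version for which no opencv-python 3.4.5.20 wheel exists (likely Python 3.8). A plausible fix, sketched here with an arbitrary environment name (check SimBA's installation docs for the recommended Python version), is to create a conda environment on an older Python before installing:

        conda create -n simba_env python=3.6
        conda activate simba_env
        pip install simba-uw-no-tf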
    Stefanos Stagkourakis
    @stagkourakis
    Hi everyone! I'm just about to try SIMBA and a couple of questions came up:
    1. If during labelling of frames, certain parts of each mouse are covered, is it ok to skip annotating them? For example, if the nose of mouse 2 is not visible at specific frames, should it still be annotated at an "expected" ROI?
    2. If a 10 min behavioral trial is composed of a single animal for the first 5 minutes and 2 animals for the next 5 minutes, is SimBA going to work in this condition?
      Many thanks in advance!!
    sgoldenlab
    @sgoldenlab

    Hi @stagkourakis !

    1. Can you expand a little here: are you using SimBA to label frames for a specific classifier behavior? If the behavior is happening (e.g., let's say that you are interested in mouse 2 sniffing mouse 1) but the nose of mouse 2 is briefly obscured by the other mouse, then go ahead and label that frame as sniffing. Rule of thumb: if you can see the behavior happening, regardless of what is obscured, go ahead and label the frame as having the behavior. If the question is more about the pose-estimation and getting the tracking going, then check with the dlc/dpk/sleap forums.

    2. For these scenarios, we have clipped the videos into two. SimBA will be confused if told to look for two animals and only one is present. So we analyze the one-animal segments with a specific pose-estimator and SimBA classification models, and the two-animal segments with another pose-estimator and classification models. If you have a lot of videos to clip at different time points, I recommend looking into these menus in SimBA: https://github.com/sgoldenlab/simba/blob/master/docs/tutorial_process_videos.md
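    If you would rather script the clipping than use the GUI menus, here is a minimal Python sketch shelling out to ffmpeg; the file names and the 5-minute split point are examples, ffmpeg must be on your PATH, and stream copy (-c copy) cuts at the nearest keyframe:

        import subprocess

        src = "trial.mp4"  # example input video
        # first 5 minutes (one animal), copied without re-encoding
        subprocess.run(["ffmpeg", "-i", src, "-ss", "0", "-t", "300",
                        "-c", "copy", "trial_part1.mp4"], check=True)
        # everything from 5:00 onward (two animals)
        subprocess.run(["ffmpeg", "-i", src, "-ss", "300",
                        "-c", "copy", "trial_part2.mp4"], check=True)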

    Enny-96
    @Enny-96
    hi everybody!
    I'd like to ask whether SimBA is suitable for working with three mice of the same coat color. I'd like to analyze the behavior during an emotional discrimination test. Would you suggest I use SimBA?
    Thanks
    NIHRBC
    @NIHRBC
    Do you have good tracking on them with DLC/SLEAP?
    You'd need to make a custom pose configuration to get started. Some of the features in simBA are only set up for the preconfigured body part configurations (e.g. 16 bp [8 parts on 2 mice]) and there's no support in those features for 3 animals. Classification should be possible, I think (but check with sgoldenlab), but a lot of the plots and analyses just don't support 3.
    mingyu906
    @mingyu906
    @sgoldenlab May I ask: when training the postures with DeepLabCut, it has been running for at least 8 hours and still has no results. Is it normal? I don't know what's wrong.
    Below is the output, and I don't know whether it is normal or not:
    iteration: 536000 loss: 0.0025 lr: 0.002
    iteration: 537000 loss: 0.0026 lr: 0.002
    iteration: 538000 loss: 0.0024 lr: 0.002
    ... (similar lines continue; the loss stays around 0.0024-0.0026 through iteration 575000)
    iteration: 576000 loss: 0.0023 lr: 0.002
    iteration: 577000 loss: 0.0024 lr: 0.002
    iteration: 578000 loss: 0.0024 lr: 0.002
    NIHRBC
    @NIHRBC
    That's model training output, and it looks like it is from DeepLabCut. The default number of iterations in DLC is over a million, so unless you changed that value, this behavior is expected.
    I think DeepLabCut recommends 100-200k iterations, then seeing how that looks before you refine and retrain.
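    For reference, the iteration cap can be passed straight to DeepLabCut's training call, so training stops at, say, 200k iterations instead of the ~1M default; config_path here is a placeholder for your own project's config.yaml:

        import deeplabcut

        config_path = "/path/to/your/project/config.yaml"  # placeholder
        # stop at 200k iterations, print progress every 1000, snapshot every 50000
        deeplabcut.train_network(config_path,
                                 displayiters=1000,
                                 saveiters=50000,
                                 maxiters=200000)
        # evaluate the trained snapshot before deciding whether to refine and retrain
        deeplabcut.evaluate_network(config_path, plotting=True)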
    mingyu906
    @mingyu906
    @NIHRBC Yes, it is the deeplabcut! I got it! Thank you so very much! You are so excellent and helpful, and hopefully, I will grasp SimBA and Deep Learning like you!
    sgoldenlab
    @sgoldenlab
    @mingyu906 you can also join the deeplabcut community and ask questions there too! https://gitter.im/DeepLabCut/community
    mingyu906
    @mingyu906
    @sgoldenlab Hello, I have one more question! When importing videos into a SimBA project, do we need to import the original videos after pre-processing, or the labeled videos produced by DLC?
    Sam A. Golden
    @GoldenNeuron_twitter
    Hi Mingyu, you will also need the original videos if you would like to create visualizations of your classifications.
    mingyu906
    @mingyu906
    @GoldenNeuron_twitter Thank you for the instruction! I got it!