    Eric Thomson
    @EricThomson

    In case this comes up for anyone else: I was in install hell with caiman yesterday. I think it was probably a very localized problem, but just in case anyone hits it: Windows 10, mamba install (which pretty much always works), but yesterday I ran into this:

    CondaVerificationError: The package for ipyparallel located at C:\ProgramData\Miniconda3\envs\mesmerize\pkgs\ipyparallel-8.0.0-pyhd8ed1ab_0
    appears to be corrupted.

    I banged my head against it for a while, trying a lot of useless things. The solution I found: after wiping and creating a new environment, and before doing anything else, do a conda install of ipyparallel. Then everything was happy and caiman was fine. Installing it after hitting the error did not work; it had to be done before...one of those dependency local-minimum mysteries.

    Pat Gunn
    @pgunn
    The most recent release of caiman, released today, fixes a bug in CNMF that may have been painful with certain datasets: an index error you were most likely to hit during a very long CNMF.fit.
    Pat Gunn
    @pgunn
    @EricThomson Wondering if conda install --force-reinstall ipyparallel would've done the trick.
    I guess we all learn a lot about these tools when we see them occasionally stumble
    Eric Thomson
    @EricThomson
    Good idea, I will try that if it comes up again!
    EC-byte
    @EC-byte
    Hi @pgunn, I'm running into an issue trying to run the mask_rcnn method from the demo. It is throwing this error:
    (attached screenshot: Screen Shot 2021-12-02 at 12.11.23 PM.png)
    EC-byte
    @EC-byte
    tensorflow 2.7.0 --- keras 2.7.0 --- caiman 1.9.3. Do you have any idea what's going on here? It seems there are some version compatibility issues between tf and keras for this specific method.
    Thanks! @pgunn
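
    For version-mismatch triage like this, the versions in play can be dumped with only the standard library; `installed_versions` is a hypothetical helper sketched here, not part of caiman:

    ```python
    from importlib.metadata import PackageNotFoundError, version

    def installed_versions(pkgs):
        """Return {package: version string} for quick version-mismatch triage.

        Hypothetical helper (not a caiman API); uses only the stdlib.
        """
        report = {}
        for pkg in pkgs:
            try:
                report[pkg] = version(pkg)
            except PackageNotFoundError:
                report[pkg] = "not installed"
        return report

    print(installed_versions(["tensorflow", "keras", "caiman"]))
    ```

    Pasting that output alongside a bug report is usually enough to spot a tf/keras pairing the code doesn't support.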
    Pat Gunn
    @pgunn
    Yes, I think that's right.
    I will need to see if this can be fixed by adjusting imports or pinning versions.
    Tensorflow 2.7 though; I wasn't aware that was already a thing
    @EC-byte How did you make your environment?
    EC-byte
    @EC-byte
    I used the mamba install method for caiman, and then pip installed keras as per the docs recommendation in the demo code.
    Pat Gunn
    @pgunn
    oh
    That's the volpy-specific documentation...
    hm
    guessing that upgraded your tensorflow and a lot of other stuff.
    maybe beyond what the code currently supports
    I wonder if that code would work using the keras support in current versions of tensorflow.
    If it could, with a few changed imports we could eliminate that manual-install step
    I'll ask Cai, who wrote that code
    EC-byte
    @EC-byte
    Thank you for the quick response, I'll keep troubleshooting with library versions. @pgunn
    Pat Gunn
    @pgunn
    You might try installing keras using conda/mamba rather than pip
    that might avoid the bleeding-edge versions
    EC-byte
    @EC-byte
    Okay. will try this and get back to you - let me know if you hear anything from Cai!
    Pat Gunn
    @pgunn
    To reset after an attempt, you probably want to remove and recreate your caiman environment, as that pip install probably replaced a lot of things in your environment you don't want replaced.
    Pat Gunn
    @pgunn
    Chatted with Cai; we're vendoring a package called mrcnn there, which has outdated software dependencies. That corner of the code is getting harder to keep working (and might eventually be removed).
    I'll see if I can figure out how to make it work with a current python, but expect anything around that functionality to be rough. Sorry.
    EC-byte
    @EC-byte

    No worries, thank you for the quick responses! If it helps, I followed your advice for a fresh env and used mamba to install keras; now I'm confronted with a much longer traceback:

    Configurations:
    BACKBONE resnet50
    BACKBONE_STRIDES [4, 8, 16, 32, 64]
    BATCH_SIZE 1
    BBOX_STD_DEV [0.1 0.1 0.2 0.2]
    COMPUTE_BACKBONE_SHAPE None
    DETECTION_MAX_INSTANCES 100
    DETECTION_MIN_CONFIDENCE 0.7
    DETECTION_NMS_THRESHOLD 0.3
    FPN_CLASSIF_FC_LAYERS_SIZE 1024
    GPU_COUNT 1
    GRADIENT_CLIP_NORM 5.0
    IMAGES_PER_GPU 1
    IMAGE_CHANNEL_COUNT 3
    IMAGE_MAX_DIM 512
    IMAGE_META_SIZE 14
    IMAGE_MIN_DIM 128
    IMAGE_MIN_SCALE 0
    IMAGE_RESIZE_MODE pad64
    IMAGE_SHAPE [512 512 3]
    LEARNING_MOMENTUM 0.9
    LEARNING_RATE 0.001
    LOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}
    MASK_POOL_SIZE 14
    MASK_SHAPE [28, 28]
    MAX_GT_INSTANCES 100
    MEAN_PIXEL [0 0 0]
    MINI_MASK_SHAPE (56, 56)
    NAME neurons
    NUM_CLASSES 2
    POOL_SIZE 7
    POST_NMS_ROIS_INFERENCE 1000
    POST_NMS_ROIS_TRAINING 1000
    PRE_NMS_LIMIT 6000
    ROI_POSITIVE_RATIO 0.33
    RPN_ANCHOR_RATIOS [0.5, 1, 2]
    RPN_ANCHOR_SCALES (16, 32, 64, 128, 256)
    RPN_ANCHOR_STRIDE 1
    RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]
    RPN_NMS_THRESHOLD 0.7
    RPN_TRAIN_ANCHORS_PER_IMAGE 64
    STEPS_PER_EPOCH 100
    TOP_DOWN_PYRAMID_SIZE 256
    TRAIN_BN False
    TRAIN_ROIS_PER_IMAGE 128
    USE_MINI_MASK True
    USE_RPN_ROIS True
    VALIDATION_STEPS 50
    WEIGHT_DECAY 0.0001

    WARNING:tensorflow:AutoGraph could not transform <bound method ProposalLayer.call of <caiman.source_extraction.volpy.mrcnn.model.ProposalLayer object at 0x15792c9a0>> and will run it as-is.
    Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
    Cause: invalid syntax (tmpi08av7zs.py, line 10)
    To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
    1301829 [ag_logging.py: warn():146][45227] AutoGraph could not transform <bound method ProposalLayer.call of <caiman.source_extraction.volpy.mrcnn.model.ProposalLayer object at 0x15792c9a0>> and will run it as-is.
    Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
    Cause: invalid syntax (tmpi08av7zs.py, line 10)
    To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
    WARNING: AutoGraph could not transform <bound method ProposalLayer.call of <caiman.source_extraction.volpy.mrcnn.model.ProposalLayer object at 0x15792c9a0>> and will run it as-is.
    Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
    Cause: invalid syntax (tmpi08av7zs.py, line 10)
    To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
    WARNING:tensorflow:AutoGraph could not transform <bound method PyramidROIAlign.call of <caiman.source_extraction.volpy.mrcnn.model.PyramidROIAlign object at 0x1578ac100>> and will run it as-is.
    Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
    Cause: invalid syntax (tmp6n1a2ob5.py, line 34)
    To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert

    ultimately

    SyntaxError Traceback (most recent call last)
    ~/opt/anaconda3/envs/billionth_test/lib/python3.9/site-packages/tensorflow/python/autograph/impl/api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
    446 program_ctx = converter.ProgramContext(options=options)
    --> 447 converted_f = _convert_actual(target_entity, program_ctx)
    448 if logging.has_verbosity(2):

    ~/opt/anaconda3/envs/billionth_test/lib/python3.9/site-packages/tensorflow/python/autograph/impl/api.py in _convert_actual(entity, program_ctx)
    283
    --> 284 transformed, module, source_map = _TRANSPILER.transform(entity, program_ctx)
    285

    ~/opt/anaconda3/envs/billionth_test/lib/python3.9/site-packages/tensorflow/python/autograph/pyct/transpiler.py in transform(self, obj, user_context)
    285 if inspect.isfunction(obj) or inspect.ismethod(obj):
    --> 286 return self.transform_function(obj, user_context)
    287

    ~/opt/anaconda3/envs/billionth_test/lib/python3.9/site-packages/tensorflow/python/autograph/pyct/transpiler.py in transform_function(self, fn, user_context)
    489 ctx.info.name, fn.code.co_freevars, self.get_extra_locals())
    --> 490 factory.create(
    491 nodes, ctx.namer, future_features=ctx.info.future_features)

    ~/opt/anaconda3/envs/billionth_test/lib/python3.9/site-packages/tensorflow/python/autograph/pyct/transpiler.py in create(self, nodes, namer, inner_factory_name, outer_factory_name, future_features)
    182
    --> 183 module, _, source_map = loader.load_ast(
    184 nodes, include_source_map=True)

    ~/opt/anaconda3/envs/billionth_test/lib/python3.9/site-packages/tensorflow/python/autograph/pyct/loader.py in load_ast(nodes, indentation, include_source_map, delete_on_exit)
    95 source = parser.unparse(nodes, indentation=indentation)
    ---> 96 module, _ = load_source(source, delete_on_exit)
    97

    ~/opt/anaconda3/envs/billionth_test/lib/python3.9/site-packages/tensorflow/python/autograph/pyct/loader.py in load_source(source, delete_on_exit)
    62 module = importlib.util.module_from_spec(spec)
    ---> 63 spec.loader.exec_module(module)
    64 # TODO(mdan): Use our own garbage-collected cache instead of sys.modules.

    ~/opt/anaconda3/envs/billionth_test/lib/python3.9/importlib/_bootstrap_external.py in exec_module(self, module)

    ~/opt/anaconda3/envs/billionth_test/lib/python3.9/importlib/_bootstrap_external.py in get_code(self, fullname)

    ~/opt/anaconda3/envs/billionth_test/lib/python3.9/importlib/_bootstrap_external.py in source_to_code(self, data, path, _optimize)

    ~/opt/anaconda3/envs/billionth_test/lib/python3.9/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)

    SyntaxError: invalid syntax (tmp6n1a2ob5.py, line 34)

    During handling of the above exception, another exception occurred:

    AttributeError Traceback (most recent call last)
    /var/folders/mp/3jb4bv5j5t382dc6bs_93cqm0000gp/T/ipykernel_45227/245315884.py in <module>
    11 elif method == 'maskrcnn': # Important!! Make sure install keras before using mask rcnn.
    12 weights_path = download_model('mask_rcnn') # also make sure you have downloaded the new weight. The weight was updated on Dec 1st 2020.
    ---> 13 ROIs = utils.mrcnn_inference(img=summary_images.transpose([1, 2, 0]), size_range=[5, 22],
    14 weights_path=weights_path, display_result=True) # size parameter decides size range of masks to be selected
    15 cm.movie(ROIs).save(fnames[:-5] + 'mrcnn_ROIs.hdf5')

    ~/opt/anaconda3/envs/billionth_test/lib/python3.9/site-packages/caiman/source_extraction/volpy/utils.py in mrcnn_inference(img, size_range, weights_path, display_result)
    127 DEVICE = "/cpu:0" # /cpu:0 or /gpu:0
    128 with tf.device(DEVICE):
    --> 129 model = modellib.MaskRCNN(mode="inference", model_dir=model_dir,
    130 config=config)
    131

    Pat Gunn
    @pgunn
    Oof. That looks messy
    Can you give me a conda list of your environment?
    EC-byte
    @EC-byte
    That won't fit in this chatbox; can I send it to an email?
    Pat Gunn
    @pgunn
    sure, or pastebin
    Mitchell Lab
    @SiFTW
    Hello, I've slightly tweaked the 3D demo to work with my movies and got memory mapping working etc., but when it comes to cnm.fit() I get OpenCV errors related to ksize and Gaussian blur. I've pasted the stack trace here: https://pastebin.com/Q8xSJm5R
    I assume this is related to my install, but I'm using mamba etc. and believe everything is up to date.
    Any ideas?
    EC-byte
    @EC-byte
    Here's a pastebin @pgunn https://pastebin.com/PJPivkMY
    Pat Gunn
    @pgunn
    @SiFTW If you can find the smallest movie that exhibits the problem, along with the modified demo, and email us a Dropbox link, we can dig further. That's probably the fastest way (also tell us how you installed caiman, what version, ...)
    @EC-byte Ah, thanks, that helps. So it's not a weird version mismatch, but it may be the version limits of mrcnn.
    One thing that may work would be to build a python 3.7 environment as that would get you tensorflow 1.14.0
    And that version is known to work with the mrcnn code
    To do that, you'd do conda create -n caiman37 -c conda-forge python=3.7 caiman
    Pat Gunn
    @pgunn
    Eventually we may need to remove some volpy features as tensorflow 1.14.0 isn't going to keep getting builds for newer python versions :(
    EC-byte
    @EC-byte
    Excellent! This works. Thank you for the help!
    Mitchell Lab
    @SiFTW
    Am I right that movie.resize does not work for 3D movies (so 4D data)? I'm trying to spatially (and potentially temporally) downsample a movie.
    Pat Gunn
    @pgunn
    Looking at the code, I believe that's right.
    You'd probably need to use external tools/code to do that.
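
    Since movie.resize doesn't cover the 4D case, here is a minimal external sketch using plain NumPy block-averaging; `downsample_4d` is a hypothetical helper (not a caiman function) that averages in-plane and over time by integer factors:

    ```python
    import numpy as np

    def downsample_4d(movie, spatial=2, temporal=1):
        """Block-average a (t, z, y, x) movie by integer factors.

        Hypothetical external helper: downsamples in-plane (y, x) by
        `spatial` and over frames by `temporal`; z is left untouched.
        The y and x dimensions must be divisible by `spatial`.
        """
        t, z, y, x = movie.shape
        m = movie[: (t // temporal) * temporal]            # trim so t divides evenly
        m = m.reshape(-1, temporal, z, y, x).mean(axis=1)  # temporal averaging
        m = m.reshape(m.shape[0], z,
                      y // spatial, spatial,
                      x // spatial, spatial).mean(axis=(3, 5))  # spatial averaging
        return m

    movie = np.random.rand(100, 16, 64, 64)   # (t, z, y, x)
    small = downsample_4d(movie, spatial=2, temporal=4)
    print(small.shape)  # (25, 16, 32, 32)
    ```

    Block-averaging rather than striding acts as a crude anti-aliasing filter; for non-integer factors, something like scipy.ndimage.zoom would be a reasonable alternative.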