    FranciscoD_
    @ankursinha:fedora.im
    [m]

    @pgleeson: for Figshare, do we want to limit ourselves to datasets only? At the moment, the way it's implemented is that resources like this one can be imported as a repository:

    https://figshare.com/articles/presentation/Presentation_1_NWB_Query_Engines_Tools_to_Search_Data_Stored_in_Neurodata_Without_Borders_Format_pdf/12962813

    It's only because we haven't made the regex specific enough to limit it to URLs with "dataset" in them.
    Padraig Gleeson
    @pgleeson
    @ankursinha:fedora.im I think it's fine to allow adding a Figshare repository like that. There may well be something in a repo with documents that could form the basis of a workspace, e.g. a spreadsheet, a PDF containing tables, or even plot values extracted from figures in PDFs...
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]
    anhknguyen96: hiya, this is the right channel---are you able to post in here?
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Yes, I think so
    I created a channel that included you and Padraig, not sure why you didn't get the message there
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]
    Let's use the channel here for all our discussion, we should only resort to closed channels if there's something private we need to discuss. Otherwise it's better to "default to open".
    FranciscoD|homeserver
    @sanjay_ankur:matrix.org
    [m]
    This is me from the second account---I don't see a private message invitation there either, unfortunately :/
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    :/ anyway, thanks
    Right now to view NWB files on NWBE, users have to paste a url to the file. Has the option of opening NWBE with an already downloaded NWB file been looked into?
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]
    It's been discussed, but I don't think we've made any progress on that front yet. The idea currently is that NWBE is used in the new Open Source Brain v2 platform. So you'd upload your NWB file as a resource to the workspace, and then open it using NWBE there
    I do seem to remember that one could access a NWB file from the local file system in NWBE, but I can't quite remember what the method was. I think one had to use a special file:///... URL, but I'll have to double-check that.
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    For the task I'm working on, checking compatibility of (possibly) all files on dandi, NWBE having access to the local file system would make for a streamlined workflow
    Please check that, thank you :)
    I also have some ideas, but I think that would require working with Geppetto and/or Java
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]
    But it cannot open files from other systems because that'll require it to first "upload" the file, and this has not yet been implemented
    So, you can't use the instance at nwbexplorer.opensourcebrain.org with local files. You'll have to run your own NWBE instance to be able to use files from your computer
    How are you testing for compatibility? Are you downloading each file and loading it manually in NWBE (is that the idea?)?
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Well, that's the primitive idea, which is also a concern I raised because it seems unfeasible given the number of datasets we have. If compatibility means being able to be viewed and interacted with on NWBE, I don't know of another way besides doing it on a file-by-file basis. The downloading part can be done systematically with datalad, though.
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]
    Hrm, yeh. There are far too many files though, so we're going to have to think of a way to automate this---you doing it one by one is not going to scale (and it'll be very boring too..)
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Agreed...
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]
    So before a NWB file is valid for NWBE, it needs to be valid NWB, right?
    as in, NWBE will probably support a subset of the NWB standard---but the file should adhere to this standard and be valid according to pynwb?
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Yes
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]
    so what we could perhaps do is see what aspects of the NWB standard NWBE supports, and then check against those?
    so:
    • first, check if file is valid against standard using pynwb
    • if it is, check against the parts of the NWB standard that NWBE should already support (a rough sketch of this is below)
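    A minimal sketch of that two-step check, assuming pynwb is installed; the set of types NWBE supports is a placeholder guess and would need to come from the NWBE code itself:

```python
from pynwb import NWBHDF5IO, validate

# Placeholder assumption: neurodata types we guess NWBE can render. The real
# list would have to be derived from the NWBE/geppetto code discussed above.
NWBE_SUPPORTED_TYPES = {"TimeSeries", "ElectricalSeries", "ImageSeries"}

def check_file(path):
    with NWBHDF5IO(path, mode="r", load_namespaces=True) as io:
        # Step 1: is the file valid NWB according to pynwb?
        errors = validate(io=io)  # list of validation errors (exact return shape varies by pynwb version)
        if errors:
            return False, [str(e) for e in errors]
        # Step 2: does it only use types NWBE handles? (acquisition only, for illustration)
        nwbfile = io.read()
        types_used = {type(obj).__name__ for obj in nwbfile.acquisition.values()}
        unsupported = types_used - NWBE_SUPPORTED_TYPES
        return not unsupported, sorted(unsupported)
```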
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Ah makes sense
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]
    so instead of you manually uploading files to NWBE, we take a look at the code to see what bits of the NWB standard are being read/processed by NWBE?
    This should be automatable. And of course, we can pick a few files here and there to verify that the results from this automated validation do reflect what NWBE is able to load
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Totally :) thanks
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]
    I'm afraid this will require you to dive into the NWBE code a little. I did look into it a year ago, so I remember a little of how it works.
    Do you want to go through it yourself and then we can discuss it at a meeting and see what we know about it together?
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Yes
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Also the pynwb and nwb-schema repos?
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]

    here's a quick summary of how it works:

    • it loads NWB using PyNWB and creates a "nwb model"
    • the UI bits are javascript etc., and know how to work with a particular model, called the "geppetto model" (because the tool is called geppetto)
    • to show an NWB file in the UI, it maps bits of the "nwb model" to the "geppetto model".

    So as long as the file can be converted to the "nwb model", it will be converted to the "geppetto model" and shown in the UI elements
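    To make that mapping idea concrete, here is a toy illustration using plain Python dicts; it is not the actual Geppetto/pygeppetto API, just the shape of the "nwb model" to UI-model step (the dict layout is made up):

```python
from pynwb import NWBHDF5IO

# Toy stand-in for the real mapping: NWBE builds a Geppetto model via
# pygeppetto; the dict below only illustrates the idea of translating the
# "nwb model" into something the UI knows how to draw.
def nwb_to_toy_ui_model(path):
    with NWBHDF5IO(path, mode="r", load_namespaces=True) as io:
        nwbfile = io.read()
        model = {"identifier": nwbfile.identifier, "timeseries": []}
        for name, ts in nwbfile.acquisition.items():
            # only objects the mapping layer recognises end up in the UI model
            model["timeseries"].append({
                "name": name,
                "type": type(ts).__name__,
                "unit": getattr(ts, "unit", None),
            })
        return model
```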

    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Cool, thanks!
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]
    awesome, ping us here whenever you need to---I usually keep my element app open so I should get notifications etc.
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]

    anhknguyen96: @pgleeson noted in the slack too:

    What I use is to run “serve” in that directory, then I can browse http://localhost:8080/, find the url to the file and it will load it fine in a local copy of nwbe. Can be installed with npm: https://www.npmjs.com/package/serve

    anhknguyen96
    @anhknguyen96:matrix.org
    [m]

    Hi, a quick update:

    • Bulk testing using nwbinspector (validate_and_test_nwb.py) has been pushed to the dev branch; a rough sketch of the nwbinspector step is below. The script runs in Python 3.9 in a conda environment. It takes two arguments when run in the terminal: --test to test only 5 dandisets, 2 files each (the default is to test all), and --succint to write a less detailed report (results grouped by validation type rather than by file name, which is the default) in /tmp/validation_dandiset_reports. After testing finishes, the dandiset_summary_readme.csv and dandiset_summary.csv files are updated with a validation_result column showing the validation types (e.g. pynwb_validation, critical, etc.) found for the corresponding dandiset. The readme file is then updated (using the update_readme method from nwb_table_readme.py).
    • Since the priority is to get all the dandisets validated with nwbinspector as the first step towards finding incompatibility issues with NWBE, it would be great if you could run the bulk testing script at your earliest convenience. Meanwhile, there are two pertinent items that I think are of lower priority and urgency but worth considering:

      • where to store individual report files: I'm thinking the report file of each dandiset could be embedded in the corresponding cell in the readme file, and the file could either be downloaded by clicking on the link or uploaded to the dandiset's dandiarchive webpage.
      • an environment.yml file, if there's no neat way of resolving the dependency issues (Python venv vs conda for h5py/ros3, Python 3.9 vs 3.10 for datalad).
    • At the same time, I'll start looking at how we can retrieve the NWB version information in each dandiset. I haven't quite understood the relationship between the nwb-schema, hdmf-common-schema, and pynwb versions. I think this issue (NeurodataWithoutBorders/pynwb#1289) elaborates my confusion.
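    The promised sketch of the nwbinspector step (the real logic lives in validate_and_test_nwb.py; this assumes nwbinspector exposes inspect_all(), which walks a directory of NWB files and yields InspectorMessage objects, so check the installed version's API):

```python
from collections import defaultdict
from nwbinspector import inspect_all

# Sketch only: group inspector messages for one downloaded dandiset by
# importance level, similar in spirit to the --succint report described above.
def summarise_dandiset(dandiset_dir):
    by_importance = defaultdict(set)
    for message in inspect_all(path=dandiset_dir):
        by_importance[message.importance.name].add(message.file_path)
    # e.g. {"CRITICAL": {...}, "BEST_PRACTICE_VIOLATION": {...}}
    return {level: sorted(files) for level, files in by_importance.items()}
```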
    Also, I don't think a meeting this week is needed, but let me know otherwise :)
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Actually, it would be a good idea if we meet to discuss the version mismatches and see whether I have a better grasp of the issue by then. The bulk test results should be ready by then and we could talk about them. What do you think?
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Sure 😊
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Hi, would you please at some point run the get_nwb.py script on the dev branch and download an example NWB file, to see if datalad is working for you? The dandiset ID can be anything, but maybe 000003 for reproducibility
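    For reference, a rough equivalent of what the script is expected to do with DataLad's Python API (get_nwb.py itself is the reference; the file path below is a placeholder):

```python
import datalad.api as dl

# DANDI publishes each dandiset as a DataLad dataset under
# https://github.com/dandisets/<id>; 000003 is used here as in the chat.
ds = dl.install(source="https://github.com/dandisets/000003", path="000003")
# Fetch the content of a single NWB file; replace the placeholder path with a
# real one reported by ds.status()
ds.get("sub-XXX/sub-XXX_ses-YYY.nwb")
```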
    Ankur Sinha (FranciscoD@fedora)
    @ankursinha:fedora.im
    [m]
    I selected one file, and that seems to be downloading fine---is there anything specific you want us to test? I don't think either of our machines in the lab can manage the full 2TB download---we probably have the bandwidth, but not the disk space
    anhknguyen96
    @anhknguyen96:matrix.org
    [m]
    Hi, thanks, that's good enough. I'm having problems using datalad get/install with the same script or just the command line, although I don't think significant changes were made since the script was created
    also, more information about pynwb and nwb-schema versions: I think we would have to match the provenance with the nwb-schema version manually until they support that feature
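    One way to pull the NWB version without fully loading the file is to read the root attribute that NWB 2.x files store; a minimal sketch with h5py:

```python
import h5py

# NWB 2.x files record the schema version as a root attribute, so it can be
# read directly even when pynwb cannot fully load the file.
def nwb_version(path):
    with h5py.File(path, "r") as f:
        version = f.attrs.get("nwb_version")
        # the attribute may come back as bytes depending on how the file was written
        return version.decode() if isinstance(version, bytes) else version
```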
    Padraig Gleeson
    @pgleeson
    @anhknguyen96:matrix.org Thanks for the update. Any questions that may require some discussion are probably best opened as GH issues, e.g. about the location of the files (I think it's best to commit them all in a subfolder as text files, with links from the main readme), and it will be easier to check whether they have been resolved.