    Şeyma Bayrak
    @sheyma
    Screenshot_2019-09-09_19-07-53.png
    even better formulation: the image on the right is created using VolumeObj, the one on the left is BrainObj('white'). Is there a way to project the volume on the right onto the surface on the left?
    Şeyma Bayrak
    @sheyma
    Hi everyone,
    Şeyma Bayrak
    @sheyma
    volToStats_map.png
    I solved my question of sampling from a NIfTI volume to the surface with @EtienneCmb 's amazing help! Basically, a combination of nilearn and visbrain does the job (see the attached map). Thanks a lot!
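For readers of this log: the link to the actual solution was lost, but the core of any volume-to-surface projection (what nilearn's `vol_to_surf` does, in its simplest form) is inverting the volume's affine to turn world-space vertex coordinates into voxel indices. A minimal numpy sketch with made-up shapes and affine, for illustration only:

```python
import numpy as np

def sample_volume_at_vertices(vol, affine, vertices):
    """Nearest-neighbour sampling of a 3D volume at world-space vertex
    coordinates: invert the affine to go world -> voxel, round, index."""
    inv = np.linalg.inv(affine)
    homo = np.c_[vertices, np.ones(len(vertices))]       # (n, 4) homogeneous
    ijk = np.rint(homo @ inv.T)[:, :3].astype(int)       # voxel indices
    ijk = np.clip(ijk, 0, np.array(vol.shape) - 1)       # stay in bounds
    return vol[ijk[:, 0], ijk[:, 1], ijk[:, 2]]

# toy example: 2 mm isotropic volume with identity orientation
affine = np.diag([2.0, 2.0, 2.0, 1.0])
vol = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
verts = np.array([[0.0, 0.0, 0.0], [2.0, 4.0, 6.0]])
print(sample_volume_at_vertices(vol, affine, verts))     # -> [ 0. 27.]
```

The resulting per-vertex array can then be passed to the surface object (e.g. `BrainObj.add_activation`) for display.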
    Lucy Owen
    @lucywowen
    Looks great @EtienneCmb! Thanks for following up @sheyma, will definitely be using this new functionality.
    Şeyma Bayrak
    @sheyma
    Has anybody used a self-made colorbar in visbrain? I want to use this colorbar of mine,
    brain_cbar03.png
    but I don't know how to use it in visbrain. Any idea is appreciated!
    Etienne Combrisson
    @EtienneCmb
    Discrete colorbars are not supported by visbrain
    ulyssek
    @ulyssek
    Hi there, and thanks for the great work!
    I'm new here, and I'd like to use visbrain with macaque data,
    but I can't make it work and can't find any documentation about that.
    Any suggestions? :)
    ulyssek
    @ulyssek
    Hi again. I dug into the documentation and I think I figured out my problem. Now I'm looking for a monkey area atlas file (.annot); any clue where I could find one? Thanks!
    Jesús Silva-Rodríguez
    @txusser
    @sheyma Hi, I'm facing exactly the same problem here; can you give me some clue?
    I have the exact same problem: I want to plot some SPM results using visbrain,
    but can produce only Volume objects.
    Jesús Silva-Rodríguez
    @txusser
    Is there a way to project these volume objects onto the surface of the B1 template, to produce fMRI-like activation maps?
    apadee
    @apadee
    Hi! I have a question also regarding colormaps. Is there a way of fixing a specific colour to a specific value? What I would like to do is to have a colorscale consistent between different subjects.
    iPsych
    @iPsych
    @EtienneCmb Hi, can I get the transform from the brain object visualized in the GUI, like self.scatter.node_transform(self.view)? I tried self.tr = self.atlas.mesh._transforms(self.view) but it gave an error, as expected...
    iPsych
    @iPsych
    Since the GUI applies camera, zoom, and centering, I get lost somewhere between the coordinate transforms. Is there any example to get the object or vertex under the mouse pointer?
    Etienne Combrisson
    @EtienneCmb
    @apadee yes, you can use the clim arguments to fix it
    @iPsych if I remember correctly, there's one vispy example illustrating how to get mouse cursor coordinates in 3D. I don't think the example has been merged, so check in the PR section of vispy
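For readers of this log: the reason a shared `clim` fixes subject-consistent colors is that the value-to-colormap mapping then depends only on the limits, not on each subject's own data range. A minimal numpy sketch of that idea (the clim values here are arbitrary):

```python
import numpy as np

def to_cmap_position(values, clim):
    """Map data values to [0, 1] colormap positions with fixed limits,
    as a shared clim=(vmin, vmax) does when passed to visbrain objects."""
    vmin, vmax = clim
    return np.clip((np.asarray(values, float) - vmin) / (vmax - vmin), 0.0, 1.0)

clim = (0.0, 10.0)                       # the same limits for every subject
subj_a = to_cmap_position([0, 5, 10], clim)
subj_b = to_cmap_position([2, 5, 12], clim)
# the value 5 lands at colormap position 0.5 for both subjects,
# even though their data ranges differ; values beyond the limits saturate
print(subj_a, subj_b)
```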
    iPsych
    @iPsych
    @EtienneCmb I found vispy/vispy#1225, but it's an example that uses a scatter. Since the Brain GUI uses a camera, I believe something should be changed. I tried tr=self.atlas.mesh.canvas.transforms
    data6=tr.get_transform()
    print(data6)
    in Brain gui, and got <ChainTransform [ChainTransform, ChainTransform, ChainTransform, ChainTransform, ChainTransform] at 0x14dcbb710>.
    iPsych
    @iPsych
    @EtienneCmb I think it's VisBrain specific question. In vispy/vispy#1225,
    self.tr = self.scatter.node_transform(self.view)
    data = self.tr.map(self.data)[:, :2]
    these two lines convert the original scatter coordinates,
    [[-1.93938247e-01 -3.33470840e-01 -4.85806042e-01]
    [-8.90897346e-01 5.06059808e-01 5.45311491e-01]
    [ 3.07523187e-02 2.22233138e-01 2.89604361e-01]]
    to screen coordinates,
    [[588.0425767 520.57840061]
    [693.44326508 183.05984346]
    [502.02075981 309.80857567]],
    However, in Brain GUI,
    data = self._vbNode.transform.map(self.atlas._xyz)[:, :2]
    only provides the gl-scale transform from visual.py, shown below:
    vist.STTransform(scale=[self._gl_scale] * 3)
    Which attribute should I call .map on to get screen coordinates from coordinates in a source or atlas object?
    iPsych
    @iPsych
    @EtienneCmb Using self._camera.transform.map(self.atlas._xyz) provides some 'reasonable' coordinates, which change properly when I turn the brain. However, they look somehow 'flipped' or inverted, and the negative numbers are definitely not screen coordinates. Should I add the canvas size or transform further?
    [ [ 4.6777946e+01 -1.8140440e+01 4.3870148e+01]
    [ 4.2156826e+01 -3.0580599e+01 5.0884960e+01]
    [ 4.6858597e+01 -6.7612923e+01 5.4059964e-02]]
    [ [-506.04952414 -219.07359128]
    [-499.02184417 -205.82960967]
    [-501.83567476 -194.71121717]]
    iPsych
    @iPsych
    @EtienneCmb To make things simpler: I want to get the distance from the mouse cursor (event.pos) to each of the xyz coordinates in s_obj = SourceObj('SourceExample', xyz, **kwargs).
    I tried the three possible transforms I could find by reading the code:
    self.view.wc._transform.imap
    self._camera.transform.imap
    transforms = self._vbNode.transforms.get_transform(), then transforms.imap
    but nothing seems to work.
    iPsych
    @iPsych
    @EtienneCmb , "get_transform(map_from='visual', map_to='canvas')" resolved the problem.
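For the archive: the key was asking the transform system for the full visual→canvas chain rather than a single link of it. Conceptually, each link in that chain behaves like vispy's STTransform, a scale plus a translation, with imap as the exact inverse. A toy numpy sketch of those semantics (the numbers are made up):

```python
import numpy as np

class STTransformSketch:
    """Toy scale-plus-translate transform with map/imap, mimicking one
    link of the visual -> canvas transform chain in vispy."""
    def __init__(self, scale, translate):
        self.scale = np.asarray(scale, float)
        self.translate = np.asarray(translate, float)

    def map(self, coords):                    # visual -> canvas direction
        return np.asarray(coords, float) * self.scale + self.translate

    def imap(self, coords):                   # canvas -> visual direction
        return (np.asarray(coords, float) - self.translate) / self.scale

# flip y (screen y grows downward) and center on an 800x600 canvas
tr = STTransformSketch(scale=[100.0, -100.0], translate=[400.0, 300.0])
screen = tr.map([[0.5, 0.5]])                      # -> [[450., 250.]]
assert np.allclose(tr.imap(screen), [[0.5, 0.5]])  # imap undoes map
print(screen)
```

Mapping the cursor back with imap (or the sources forward with map) then makes the distance-to-cursor computation a plain 2D subtraction in one shared coordinate frame.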
    apadee
    @apadee

    @apadee yes, you can use the clim arguments to fix it

    Thanks! I tried it before and could not get it right. It turns out I had to provide the same clim argument to both ColorbarObj and SourceObj.project_sources(). Now it works perfectly.

    I have another question though: is there a simple way of integrating the colorbar in the screenshot with Brain.screenshot()?
    Etienne Combrisson
    @EtienneCmb

    I have another question though: is there a simple way of integrating the colorbar in the screenshot with Brain.screenshot()?

    @apadee You should be able to do Brain.screenshot(canvas='colorbar') (you can check the screenshot doc). However, if you want to have a better plotting control I recommend using the SceneObj
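A sketch of the SceneObj route suggested here, assuming visbrain is installed; the object names follow the visbrain docs, but the layout, data, and file name are made up for illustration. The import is guarded so the snippet stays a sketch on machines without visbrain:

```python
import numpy as np

try:
    from visbrain.objects import SceneObj, BrainObj, SourceObj, ColorbarObj
    HAS_VISBRAIN = True
except ImportError:            # visbrain not installed: treat as a sketch only
    HAS_VISBRAIN = False

def brain_with_colorbar(xyz, data):
    """Lay a brain view and its colorbar side by side in one scene, then
    screenshot the whole scene so the colorbar ends up in the same image."""
    sc = SceneObj(size=(1000, 600))
    b_obj = BrainObj('B1', translucent=True)
    s_obj = SourceObj('sources', xyz, data=data)
    # same clim for the projection and (implicitly) the colorbar wrapper
    s_obj.project_sources(b_obj, clim=(float(data.min()), float(data.max())))
    sc.add_to_subplot(b_obj, row=0, col=0)
    sc.add_to_subplot(ColorbarObj(s_obj), row=0, col=1, width_max=150)
    sc.screenshot('brain_with_cbar.png')   # one image, colorbar included
```

Not run here (it needs an OpenGL context); check the SceneObj and screenshot docs for the exact keyword arguments.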

    Etienne Combrisson
    @EtienneCmb
    @iPsych if you need some inspiration, I remember I had to do it, but using a 2D camera. Still, you might find some ideas. Check this
    iPsych
    @iPsych
    @EtienneCmb Hello, can the BrainObj be saved and loaded (one like BrainObj('test_brain', vertices=vert ...))? I tried with pickle, but it shows
    AttributeError: Can't pickle local object 'CbarBase.init.<locals>.fcn'
    Etienne Combrisson
    @EtienneCmb
    Unfortunately no. This is something I tried to implement, but I didn't find an elegant way of doing it, partly because there are many objects in visbrain
    iPsych
    @iPsych
    @EtienneCmb Thanks. I found the 'temporary' saving function implemented in some objects as npz. Doesn't it work?
    Etienne Combrisson
    @EtienneCmb
    I don't think so
    Andreas Allen
    @AndreasAllen5_twitter
    Hi, thank you for developing Visbrain. I started using the tool recently on macOS Catalina and I have a question regarding the text labels in the SourceObj. I'm following the example '06_add_time_series.py' and I noticed the labels are being declared but not used ( s_text = [str(k) for k in range(s_xyz.shape[0])] ). I included the labels in the source object as: s_obj = SourceObj('MySources', s_xyz, text=s_text, symbol='disc', color='green'), however I still cannot see the labels in the GUI. I'm wondering, am I using the text labels properly? I'm not an expert programmer so I may be missing something basic here. Thank you.
    Jingyun (Josh) Chen
    @jingyunc
    @EtienneCmb I had the same issue as @AndreasAllen5_twitter on my MacBook. When I ran the sample script "03_sources.py" from http://visbrain.org/auto_examples/gui_brain/03_sources.html#sphx-glr-auto-examples-gui-brain-03-sources-py , no text labels showed up in the GUI. Can you please advise a fix?
    iPsych
    @iPsych
    @EtienneCmb, I tried to get the depth information of the brain mesh from the GUI canvas via self.view.canvas.render(), but the depth info is all 255 (for empty space) or 127 (for the brain). I believe the 3D information is flattened somewhere between mesh creation and canvas.render. Do you have any hint on where to dig?
    Carolina Migliorelli
    @cmiglio
    Hello, I'm having the same issue as @AndreasAllen5_twitter and @jingyunc when trying to plot text labels. I'm also using macOS Catalina. Has anyone found a solution?
    xuesongwang
    @xuesongwang
    Hey, everyone! Thanks for this amazing tool. I couldn't find an .annot file for the AAL parcellation; the tutorial gives a Destrieux demo only. Can someone please help with that? Thanks very much
    apadee
    @apadee
    I also ran into the labels problem @jingyunc described. I also encountered a labels problem with visbrain.objects.TopoObj: in that case, they appear shifted way above the picture.
    刘政(Barry Liu)
    @BarryLiu97
    Do you guys know how to download files manually? I cannot use download_files because of a connection error.
    I have some sEEG data and want to show the electrodes in an MNI brain. I have already read the tutorials http://visbrain.org/auto_examples/gui_brain/04_connectivity.html#sphx-glr-auto-examples-gui-brain-04-connectivity-py and http://visbrain.org/auto_examples/objects/plot_connectivity_obj.html#sphx-glr-auto-examples-objects-plot-connectivity-obj-py, but if I want to show which electrodes are in the same group, how do I do it?
    I got the https address by debugging the code; is there any faster way?
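This group question got no answer in the log, but one straightforward approach is one SourceObj per group, each with its own color, all added to the same scene. A sketch assuming visbrain is installed (the group names, colors, and radius value are made up); the import is guarded so it stays a sketch elsewhere:

```python
try:
    from visbrain.objects import SceneObj, BrainObj, SourceObj
except ImportError:                      # visbrain not installed: sketch only
    SceneObj = BrainObj = SourceObj = None

def electrodes_by_group(groups):
    """groups: dict mapping a group name to an (xyz_array, color) pair.
    Using one SourceObj per group gives each group its own color and name."""
    sc = SceneObj()
    sc.add_to_subplot(BrainObj('B1', translucent=True))
    for name, (xyz, color) in groups.items():
        # every object added to the same subplot is overlaid on the brain
        sc.add_to_subplot(SourceObj(name, xyz, color=color, radius_min=10.))
    return sc
```

Not run here (it needs an OpenGL context); check the SourceObj docs for the exact color and radius keywords.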
    刘政(Barry Liu)
    @BarryLiu97
    OK, I know how to download templates.
    刘政(Barry Liu)
    @BarryLiu97
    Hi, does anyone know how to specify the path where the brain templates are downloaded? I want to change ~/visbrain_data to another path.
    Not just for downloads; I also want to load templates from a specified path.
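This also got no answer in the log. I don't know of an official visbrain setting for this, but if the library only ever looks at ~/visbrain_data, a filesystem-level workaround is to move the folder and leave a symlink behind. A Python sketch of that workaround, demonstrated in a throwaway temporary directory so it is safe to run anywhere:

```python
import tempfile
from pathlib import Path

def redirect_data_folder(default, custom):
    """Move an existing data folder to `custom` and symlink `default` to it,
    so code that hard-codes `default` keeps working transparently."""
    default, custom = Path(default), Path(custom)
    if default.exists() and not default.is_symlink():
        default.rename(custom)                 # move the real data
    else:
        custom.mkdir(parents=True, exist_ok=True)
    if not default.exists():
        default.symlink_to(custom, target_is_directory=True)
    return default.resolve()

# demo in a temp dir instead of the real ~/visbrain_data
with tempfile.TemporaryDirectory() as tmp:
    home_side = Path(tmp) / 'visbrain_data'    # stands in for ~/visbrain_data
    big_disk = Path(tmp) / 'storage' / 'visbrain_data'
    big_disk.parent.mkdir()
    home_side.mkdir()
    (home_side / 'B1.npz').touch()             # pretend a template exists
    redirect_data_folder(home_side, big_disk)
    print(sorted(p.name for p in home_side.iterdir()))   # -> ['B1.npz']
```

On Windows, creating symlinks may require extra privileges; a directory junction is the equivalent there.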
    Pablo Rodríguez
    @pablrdriguez_gitlab
    Hi! I'm trying to use visbrain but I keep getting error messages (like "'visbrain' is not a package"). I'm using Anaconda on Windows; can anyone help?
    Raulsaurus
    @Raulsaurus_twitter
    Hello guys. I just recently started working with the VisBrain library. I am working on a code to display nodes on a brain surface; basically, creating a scene-object, a source-object, a brain-object, and displaying both source- and brain-object on the scene-object. However, when displaying the scene objects, I get a couple of black holes/patches I want to get rid of. I saw a couple of solutions on the web. I was wondering if you guys have any suggestions to get rid of this bug. Thank you.
    Guhan Sundar
    @guhandi
    Hey, this is an awesome package! For the brain object, is there an option to change how transparent the brain is? It would be nice to specify a float value rather than just a bool to toggle translucent vs opaque