hello!
I'm having a little issue with the ROI recoloring, specifically when defining my own ROI.
I'm loading in what are essentially NIfTI masks and creating the appropriate label, index, and roi_to_color maps, but when I call select_roi(roi_to_color=my_colors) and then preview(), I get extremely dark colors, and it looks like the interior of the ROI is being painted (see these links: https://ibb.co/PwxyXdC, https://ibb.co/XCXSWxD, https://ibb.co/GcyCtdF).
Incidentally, it doesn't matter if I let the Brain GUI select the colors for me; the painting is similarly dark.
I've tried the exact same approach with the default Brodmann areas ROI, and the painting works perfectly.
Any ideas what I might need to change to make the colors visible?
# define the roi object
r_obj = RoiObj(...)
# select your roi's
r_obj.select_roi(...)
# invert normals
r_obj.mesh._normals *= -1
In vispy, the Text object is declared like below:
def __init__(self, text=None, color='black', bold=False,
             italic=False, face='OpenSans', font_size=12, pos=[0, 0, 0],
             rotation=0., anchor_x='center', anchor_y='center',
             method='cpu', font_manager=None):
I can see bold, italic, and font_size in visbrain, but is there any connection to 'face' that might allow changing the font in the canvas to a system font?
Is there a way to modify the io.write_image.write_fig_canvas
function so as to be able to export the canvas as SVG files? It would be convenient, for making figures, to be able to "screenshot" the GUI in a vectorial format.
Hi there. I am trying to use visbrain to read in .rec files. I import the following function: from visbrain.io.read_sleep import read_edf
I can't import this because of a Qt error:
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
I am running this code in a pyspark script, all I need to do is read the data, I don't need a GUI. Any suggestions?
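For running headless (no display, as in a pyspark job), one common workaround sketch is to point Qt at its offscreen platform plugin before anything Qt-based is imported. The environment variable QT_QPA_PLATFORM is standard Qt; whether it fully covers visbrain's import path here is an assumption:

```python
import os

# Assumption: the "xcb" failure comes from Qt trying to reach an X display
# at import time. Selecting the offscreen platform plugin BEFORE any
# Qt-based import removes the need for a display.
os.environ["QT_QPA_PLATFORM"] = "offscreen"

# Qt-dependent modules should only be imported after the variable is set,
# e.g.:
# from visbrain.io.read_sleep import read_edf
```

The ordering matters: if any Qt module is imported first, the platform plugin is already chosen and the variable has no effect.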
--user
option, or check the permissions. Could someone please advise?
Hi, I have a sequence of activations which I'd like to display as an animation on a surface. I can set up the gui.Brain
or use .screenshot
fine, but I'm not sure how to animate my time series of surface activations. Here's my code, with the last three lines being roughly how I'd expect to be able to do this.
from nilearn.plotting.surf_plotting import _check_mesh
from nilearn.surface import load_surf_mesh
import numpy as np
import time
from visbrain.objects import BrainObj
from visbrain.gui import Brain
verts, faces = load_surf_mesh(_check_mesh("fsaverage5").pial_right)
verts *= 1000
activation = np.load("right-visual.npy")
frame = activation[500]
vmax = np.abs(activation).max()
vmin = -vmax
brain_object = BrainObj("fsaverage5", vertices=verts, faces=faces, translucent=False)
brain_object.add_activation(data=frame, clim=(vmin, vmax), cmap="coolwarm")
vb = Brain(brain_obj=brain_object)
vb.show()
for frame in activation:
    brain_object.add_activation(data=frame, clim=(vmin, vmax), cmap="coolwarm")
    brain_object.update()
I'd appreciate any help!
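Whatever the redraw mechanism ends up being, one detail from the snippet above is worth keeping: the color limit should be computed once over the whole series, so every animation frame is painted on the same symmetric scale. A minimal sketch, with toy data standing in for right-visual.npy:

```python
import numpy as np

# Toy stand-in for the real activation time series (frames x vertices).
activation = np.array([[-0.2, 0.5, 1.0],
                       [0.3, -3.0, 2.5]])

# One shared, symmetric color limit across all frames, so that 0 always
# maps to the middle of the "coolwarm" colormap throughout the animation.
vmax = float(np.abs(activation).max())
clim = (-vmax, vmax)  # → (-3.0, 3.0) for this toy series
```

Recomputing clim per frame instead would make the colors rescale from frame to frame, which reads as flicker rather than changing activation.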