@tjjdoman_gitlab I think we need more to go on. When you talk about calling `.render()`, are you then saving that array to an image on disk? When you say "vert filters" and "frag filters", how exactly are you applying these? Is it possible for you to show us a minimal example?
It is hard to tell the difference between the images since they are different sizes...and I don't know what I'm looking at
@djhoese Here is a minimal example of the issue I am seeing:
```python
from vispy.visuals.transforms import MatrixTransform
import numpy as np
import matplotlib.pyplot as plt
from vispy import scene
import vispy

vispy.use('pyqt5')

# Build a 2x2 grid of 400x400 RGBA tiles with semi-transparent overlap regions
images = np.zeros(shape=(2, 2, 400, 400, 4), dtype=np.uint8)
images[..., 3] = 255
images[1, 1, -200:, :, 3] = 100
images[1, 1, :, -200:, 3] = 100
images[1, 0, -200:, :, 3] = 100
images[0, 1, :, -200:, 3] = 100
images[0, 0, ..., 0] = 255
images[1, 0, ..., 1] = 255
images[0, 1, ..., 2] = 255
images[1, 1, ..., :3] = 255

s = scene.SceneCanvas(size=(600, 600), title='actual')
view = s.central_widget.add_view()
# if I comment out the line below, my image changes to a gray square in the corner
view.camera = 'panzoom'

im = np.ndarray((2, 2), dtype=object)
for c, i in enumerate(np.ndindex(im.shape)):
    im[i] = scene.visuals.Image(images[i], parent=view.scene)
    imX = im[i]
    mat = np.eye(4)
    mat[3, 1] = -i[0] * 200  # offset each tile by its grid row
    mat[3, 0] = -i[1] * 200  # ...and by its grid column
    mat[3, 2] = 3 - c
    imX.transform = MatrixTransform(mat)
    imX.order = c
    imX.update()

s.show()
view.camera.set_range(x=(-200, 400), y=(-200, 400))
render = imX.parent.canvas.render()
plt.figure()
plt.imshow(render)
plt.show()
```
The image shown on the canvas (left) shows 9 different colored squares, while the rendered image (right) only shows 3 squares and 3 rectangles.
Thank you for any advice!
`mat[3, 2] = -c * 100` makes it work
@martianmartin There isn't much beyond the examples in the repository and the gallery: https://vispy.org/gallery/scene/index.html
That is a long and broad list of topics you've asked about, though; you've basically asked how all of VisPy works. The plotting API is even less documented, as that interface was never really "finished" by the people who originally wrote it, and we've been working more on improving the Visuals and SceneCanvas than on plotting. If you have a specific question about how they interact, I can try to answer as much as I can.
Over on napari/napari#3357 we noticed that canvas widgets have a default `_border_width` of 1, which can result in unexpected/undesired borders. The only easy way I found to remove that border is to modify the vispy source to make the default `border_width` parameter 0 instead.
Before I open any issues/PRs on vispy, I wanted to check a few things. Would you be OK with changing the default `_border_width` to be 0?
There are some code snippets on the napari issue, but I'm happy to try to create a minimal reproducer. napari's Qt widgets/layout are non-trivial, so it's very possible that the default 1-pixel transparent borders only cause an issue for us (e.g. because of the effective background color of some Qt widget). Though, when calling `grabFrameBuffer` on the native Qt widget, we also get that border, so I think that constrains where that color is coming from.
@andy-sweet Thanks for reaching out.
`canvas.central_widget.add_view()` (or whatever it is) creates a Grid widget as a central_widget implicitly and then a `ViewBox` widget. But I think at least for the view you should be able to import the ViewBox class, instantiate it, then add it to the grid widget with `central_widget.add_widget(view, ...)`. At least I think so; I might be mixing up multiple interfaces.
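Something like this is what I mean (a rough, untested sketch; the exact keyword arguments I pass to `ViewBox` and `add_widget` here are assumptions):

```python
from vispy import scene
from vispy.scene.widgets import ViewBox

canvas = scene.SceneCanvas(keys='interactive', show=True)
grid = canvas.central_widget.add_grid()

# Build the ViewBox ourselves so we control border_width directly,
# instead of relying on add_view()'s default of 1.
view = ViewBox(border_width=0, camera='panzoom')
grid.add_widget(view, row=0, col=0)
```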
Am I OK with border width being set to 0? Sure, probably. It will probably cause a lot of tests to fail. @almarklein @rougier @larsoner any memory of the border width stuff? Looks like it came from a PR by @campagnola in vispy/vispy#1030
@djhoese: thanks for the quick reply! I think you effectively solved this issue by educating me on how `central_widget` and `add_view` work.
We already have a `VispyCanvas` that extends `SceneCanvas`, so by overriding the `SceneCanvas.central_widget` property there and passing `border_width=0` through to `add_view`, I think I'm able to get the desired behavior in napari, so that's probably going to be enough.
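Roughly, the override looks like this (a sketch based on vispy's `central_widget` property; the body here is an approximation, not napari's exact code):

```python
from vispy import scene
from vispy.scene.widgets import Widget


class VispyCanvas(scene.SceneCanvas):
    @property
    def central_widget(self):
        # Same lazy creation as SceneCanvas.central_widget, but with
        # border_width=0 so no 1-pixel border is ever drawn.
        if self._central_widget is None:
            self._central_widget = Widget(
                size=self.size, parent=self.scene, border_width=0
            )
        return self._central_widget
```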
So unless I get some pushback from other people on napari, or you are / someone else is really curious about how many and which tests fail, I'm probably going to avoid opening that PR for now.
`border_width` 0, since the other border widths can be more easily controlled (in `Markers`, for example, you can do that with `Compound._subvisuals` (3 is the index of the ...)
See my comment there, but please respond here (unless that was your question)
```python
import numpy as np
import sys
from vispy import scene


def ImageOnClickRelease(event):
    if event.button == 1:
        print("Position: ", event.pos)
        print(view.camera.center, view.camera.get_state(),
              "map= ", transform.map(event.pos))


canvas = scene.SceneCanvas(keys='interactive', bgcolor='white',
                           size=(800, 600), show=True)
view = canvas.central_widget.add_view()

# Set 2D camera (the camera will scale to the contents in the scene)
view.camera = scene.PanZoomCamera()
view.camera.flip = (0, 1, 0)  # Y-axis should be flipped for displaying images

canvas.events.mouse_release.connect(ImageOnClickRelease)

img_data = np.zeros((100, 100))
points = np.random.randint(100, size=(30, 2))
print(points)
for p in points:
    img_data[tuple(p)] = 0.5  # brighten the pixel at (row, col)

image_visual = scene.visuals.Image(img_data, cmap='grays', clim=(0, 1),
                                   parent=view.scene)
view.camera.set_range(margin=0)
transform = image_visual.transforms.get_transform()

if __name__ == '__main__' and sys.flags.interactive == 0:
    canvas.app.run()
```
`.imap(event.pos)` will give you the correct X/Y coordinate on the image. So that solves the mouse click -> image location problem, but you say you want to know where the bright pixels are. Is the user supposed to click on the bright pixels? What is your end goal? I'm confused because couldn't you check the image data without the clicking to find the bright spots?
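For reference, the handler change I'm suggesting looks something like this (a quick sketch):

```python
def ImageOnClickRelease(event):
    if event.button == 1:
        # imap() maps canvas coordinates back into image coordinates;
        # the first two values are the X/Y pixel position on the image.
        x, y = transform.imap(event.pos)[:2]
        print("Image pixel:", int(x), int(y))
```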
Thanks, it works with `imap()`. I have noticed that the pixel info is in the first two coordinates returned by `imap()`. What are the remaining two coordinates?
@dvsphanindra The transforms are going between coordinate systems. On your screen you have a 2D set of pixels, but in a Visual you could have a 3D coordinate system (e.g. Volumes, Meshes, etc.), while your ImageVisual is just 2D, so the third z coordinate doesn't mean anything for it. The pan/zoom camera is kind of 3D because you are zooming in and out, but it is still a 2D view. I don't remember off the top of my head how the pan/zoom camera internally implements that, but the same kind of point applies.
Typically the 4th dimension is a normalization value. Let me see if I can find the documentation on that, but for your application don't worry too much about it.
Don't ask me any questions about it though. I only ever know as much as I need to when I need to
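That said, a quick way to see that fourth value for yourself (a minimal sketch, my own example, not from the thread): vispy transforms operate on homogeneous (x, y, z, w) vectors, and for a plain 2D transform the trailing value stays 1.

```python
import numpy as np
from vispy.visuals.transforms import STTransform

# Scale by 2 and translate by (10, 5); map a single 2D point.
t = STTransform(scale=(2.0, 2.0), translate=(10.0, 5.0))
pt = t.map([[1.0, 1.0]])[0]
print(pt)  # -> [12.  7.  0.  1.]; the trailing 1.0 is the homogeneous "w"
```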
Hi everyone!
So I'm trying to change the coordinates of two vertices over time in Python and then send this information to OpenGL. I was able to do it in glumpy using:
```python
x = .5 * np.cos(totalTime)
program['position'] = np.array([(x, 0.), (-x, 0.)])
```
In vispy, however, it has been more complicated. I looked a bit in the code and finally managed to do it using:
```python
x = .5 * np.cos(totalTime)
newPos = np.array([(x, 0.), (-x, 0.)])
self.program['position'].base.set_subdata(newPos.astype(np.float32))
```
So this works, but I was wondering why this is so different and whether it's really the simplest way to do that. Does anyone have an idea? Thank you!
(I think my variables are quite clear but if not I would be happy to send a more complete source.)
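For what it's worth, the pattern I usually see in vispy's gloo examples is to keep a handle to the `VertexBuffer` and update it with `set_data()` each frame; a minimal sketch (assuming `program` is an existing `gloo.Program` with a vec2 `position` attribute, as above):

```python
import numpy as np
from vispy import gloo

positions = np.array([(0.5, 0.0), (-0.5, 0.0)], dtype=np.float32)
vbo = gloo.VertexBuffer(positions)
program['position'] = vbo  # bind the buffer to the attribute once

def on_timer(event):
    # Update the buffer contents on each timer tick; vispy uploads the
    # new data to the GPU for us.
    x = 0.5 * np.cos(event.elapsed)
    vbo.set_data(np.array([(x, 0.0), (-x, 0.0)], dtype=np.float32))
```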