Thomas Doman
@tjjdoman_gitlab
^^^final view before rendering
image after rendering vvv
image.png
David Hoese
@djhoese

@tjjdoman_gitlab I think we need more to go on. When you talk about calling .render(), you are then saving that array to an image on disk? When you say "vert filters" and "frag filters", how exactly are you applying these? Is it possible for you to show us a minimal example?

It is hard to tell the difference between the images since they are different sizes...and I don't know what I'm looking at

Thomas Doman
@tjjdoman_gitlab
ok I figured I would check in case there was an obvious solution. I will make an example. Thanks
David Hoese
@djhoese
@tjjdoman_gitlab what version of vispy are you using?
Thomas Doman
@tjjdoman_gitlab
'0.8.1'
David Hoese
@djhoese
ok, darn
Thomas Doman
@tjjdoman_gitlab

@djhoese Here is a minimal example of the issue I am seeing:


from vispy.visuals.transforms import MatrixTransform
import numpy as np
import matplotlib.pyplot as plt
from vispy import scene
import vispy
vispy.use('pyqt5')

images = np.zeros(shape=(2, 2, 400, 400, 4), dtype=np.uint8)
images[..., 3] = 255
images[1, 1, -200:, :, 3] = 100
images[1, 1, :, -200:, 3] = 100
images[1, 0, -200:, :, 3] = 100
images[0, 1, :, -200:, 3] = 100

images[0, 0, ..., 0] = 255
images[1, 0, ..., 1] = 255
images[0, 1, ..., 2] = 255
images[1, 1, ..., :3] = 255

s = scene.SceneCanvas(size=(600, 600), title='actual')
view = s.central_widget.add_view()
# if I comment out the line below, my image changes to a gray square in the corner
view.camera = 'panzoom'

im = np.ndarray((2,2), dtype=object)
for c, i in enumerate(np.ndindex(im.shape)):
    im[i] = scene.visuals.Image(images[i], parent=view.scene)
    imX = im[i]
    mat = np.eye(4)
    mat[3, 1] = -i[0] * 200
    mat[3, 0] = -i[1] * 200
    mat[3, 2] = 3 - c
    imX.transform = MatrixTransform(mat)
    imX.order = c
    imX.update()
s.show()
view.camera.set_range(x=(-200, 400), y=(-200, 400))

render = imX.parent.canvas.render()
plt.figure()
plt.imshow(render)

The image shown on the canvas (left) has 9 different colored squares, while the rendered image (right) only shows 3 squares and 3 rectangles.
Thank you for any advice!

image.png
David Hoese
@djhoese
@tjjdoman_gitlab This definitely has something to do with ordering. If I reverse the order (.order = -c) then I get the same image for both vispy and imshow:
image.png
@tjjdoman_gitlab I don't know why, but changing your last Z matrix entry to mat[3, 2] = -c * 100 makes it work
David Hoese
@djhoese
if I use any multiplier less than 35 in that line then the result isn't the same and the red covers more than expected. Must be some Z-level precision issue
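The workaround above can be sketched with plain numpy. This is my own illustration of the row-vector convention that vispy's MatrixTransform uses (translation stored in the last row, with mat[3, 2] as depth); the helper names here are hypothetical, not vispy API:

```python
import numpy as np

def make_translation(tx, ty, tz):
    """Build a 4x4 translation matrix in row-vector convention
    (translation in the last row, as vispy's MatrixTransform expects)."""
    mat = np.eye(4)
    mat[3, 0] = tx  # x offset
    mat[3, 1] = ty  # y offset
    mat[3, 2] = tz  # z (depth) offset
    return mat

def map_point(mat, xyz):
    """Map a 3D point through the matrix as a homogeneous row vector."""
    p = np.append(np.asarray(xyz, dtype=float), 1.0)  # (x, y, z) -> (x, y, z, 1)
    return (p @ mat)[:3]

# With mat[3, 2] = 3 - c, the four quads land at depths 3, 2, 1, 0: very
# close together. Spreading them out with -c * 100 (depths 0, -100, -200,
# -300) avoids the depth-precision problem described above.
close_depths = [map_point(make_translation(0, 0, 3 - c), (0, 0, 0))[2] for c in range(4)]
spread_depths = [map_point(make_translation(0, 0, -c * 100), (0, 0, 0))[2] for c in range(4)]
```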
martianmartin
@martianmartin
@djhoese I am struggling to grok the dynamics of the SceneCanvas. Specifically I want to implement autoscale of y-axis so that when I pan left and right, the y-axis automatically adjusts to fit the data in my plot widget. More generally though, I am wondering if you can briefly explain or point to any documentation which relates the SceneCanvas to the following key objects: ViewBox, Camera, Visuals, and Transform system. Thanks!
David Hoese
@djhoese

@martianmartin There isn't much beyond the examples in the repository and the gallery: https://vispy.org/gallery/scene/index.html

That is a long and broad list of topics that you've asked about though. You've basically asked for how all of VisPy works. With the plotting API, things are even less documented as that interface was never really "finished" by the people who originally wrote it and we've been working more on improving the Visuals and SceneCanvas rather than plotting. If you have a specific question about how they interact I can try to answer as much as I can.

David Hoese
@djhoese
@kmuehlbauer or @almarklein or anyone, if you can provide some ideas for speeding this up (the user's use case or the TextVisual's handling in general) I'd appreciate the help: https://stackoverflow.com/questions/69871325/text-labels-in-image-pixel-coordinate-system-in-vispy
kmuehlbauer
@kmuehlbauer:matrix.org
[m]
@djhoese Sorry, I can only have a look next week. But one thing mentioned in the docstring is text collections. Was there any initial work on this already? It might be the right thing for the OP.
David Hoese
@djhoese
I don't think they'd be able to mix collections and Visuals/SceneCanvas
Andy Sweet
@andy-sweet

Over on napari/napari#3357 we noticed that canvas widgets have a default _border_width of 1, which can result in unexpected/undesired borders. The only easy way I found to remove that border is to modify the vispy source to make the default border_width parameter 0 instead.

Before I open any issues/PRs on vispy, I wanted to check a few things.

  • Is this an intended behavior of vispy? If not, would you be open to changing the default _border_width to be 0?
  • Can you think of any other ways to make all canvas widgets have a _border_width of 0?

There are some code snippets on the napari issue, but I'm happy to try to create a minimal reproducer - napari's Qt widgets/layout are non-trivial, so it's very possible that the default 1-pixel transparent borders only cause an issue for us (e.g. because of the effective background color of some Qt widget). Though, when calling grabFrameBuffer on the native Qt widget, we're also getting that border, so I think that constrains where that color is coming from.

David Hoese
@djhoese

@andy-sweet Thanks for reaching out.

  • Is this intended? VisPy has been off and on with maintainers for so long that it is likely the people who originally made that suggestion don't even remember making it or why. I'll see if I can track something down in the git history
  • You should be able to "manually" create Widget subclasses (ViewBox, Widget, Grid) and add them to the SceneCanvas instead of relying on the default behavior. For example, canvas.central_widget.add_view() (or whatever it is) implicitly creates a Grid widget as the central_widget, and then add_view creates a ViewBox widget. But at least for the view, I think you should be able to import the ViewBox class, instantiate it, then add it to the grid widget with central_widget.add_widget(view, ...). At least I think so. I might be mixing up multiple interfaces.

Am I OK with border width being set to 0? Sure, probably. It will probably cause a lot of tests to fail. @almarklein @rougier @larsoner any memory of the border width stuff? Looks like it came from a PR by @campagnola in vispy/vispy#1030

Nicolas P. Rougier
@rougier
I don't remember either, but a default of 1 is OK if the border can be disabled (by default); otherwise a default of 0 seems OK. The question is whether the code for drawing the border is run when border=0.
David Hoese
@djhoese
some of the comments in #1030 seem related to having multiple views in the same Canvas. @andy-sweet I'd say make a pull request and we'll see what tests fail
Andy Sweet
@andy-sweet

@djhoese : thanks for the quick reply! I think you effectively solved this issue by educating me on add_view.

We already have a VispyCanvas that extends SceneCanvas, so by overriding the SceneCanvas.central_widget property there and by passing border_width=0 through to add_view, I think I'm able to get the desired behavior in napari, so that's probably going to be enough.

So unless I get some pushback from other people on napari, or you are / someone else is really curious about how many and which tests fail, I'm probably going to avoid opening that PR for now.

David Hoese
@djhoese
I wouldn't mind a PR, but yeah if you don't have the time or the interest then don't worry about it
Andy Sweet
@andy-sweet
@djhoese : I did something slightly simpler, which is to make SceneCanvas.central_widget's border_width 0, since the other border widths can be more easily controlled (e.g. in add_view): vispy/vispy#2255
I'm not sure if this makes sense and I haven't set up my vispy dev environment enough to run all the tests. Feel free to run them on CI, or close this PR if it's just silly.
Jordão Bragantini
@JoOkuma
Hi, is there a way to filter the size of only a single subvisual from a visuals.Compound? I have a Compound that contains [Line(), Text(), Line(), Markers()] and when I try to change the v_size the shader doesn't compile. Thanks in advance.
Lorenzo Gaifas
@brisvag
Not sure what you mean there. Can you provide some minimal example?
If you're trying to access the size of the Markers (for example), you can do that with Compound._subvisuals[3].size (3 is the index of the Markers subvisual)
Jordão Bragantini
@JoOkuma
I'm trying to add a radius to the napari tracks layer. Here you can find the visuals and the filter.
I'm using a filter because that's how the tracks control their tail length.
Jordão Bragantini
@JoOkuma
I want to change the size of markers as the user navigates over the z-slices of the 3d-volume.
Lorenzo Gaifas
@brisvag
ah, I see! On the fork you linked I don't see anything that quickly screams wrong at me, except that you refer to self._current_time in the current_z property, which is probably not what you wanted
is there an issue/pr open in napari so we can discuss there with some context?
loctran15
@loctran15
Hi guys, is there a way to identify the x, y position of a point of an image inside a SceneCanvas? The camera I am using is PanZoomCamera. Thank you
Jordão Bragantini
@JoOkuma
I haven't created the issue/pr yet. I forgot to commit the latest change, this line causes the issue
your suggestion of setting the subvisuals size might be better, but requires more change in the napari layer. I might continue the discussion in the napari board
dvsphanindra
@dvsphanindra
Hi all,
I am using the PanZoom camera for displaying a 2D grayscale image. I would like to determine the pixel under the mouse cursor when the button is clicked. What I am able to access are the coordinates in the scene coordinate framework. Also, how do I determine the pixel where the mouse is clicked even when the image is panned or zoomed? Can someone help me with a small code snippet? Thank you.
David Hoese
@djhoese
@dvsphanindra did you ask this question somewhere else? I could have sworn I just answered this question. I'll have to go searching...

@dvsphanindra Here it is: https://stackoverflow.com/questions/70193398/vispy-2d-coordinate-x-y-of-a-point-of-an-image-inside-a-scenecanvas

See my comment there, but please respond here (unless that was your question)

dvsphanindra
@dvsphanindra
@djhoese Thanks for the reply. I have the same requirement as given in the question, although I am not the one who asked it. Also, let me tell you I have already tried the transform with map and imap. I could not figure out what I was missing because I get wrong outputs with both map and imap. Can you please point out the steps to configure the transform to work properly?
David Hoese
@djhoese
@dvsphanindra it would be easiest if you could provide me a minimal example that I could run (with no extra data files) and I could see how the map/imap calls could be updated to work properly
dvsphanindra
@dvsphanindra
@djhoese Here is an example code of what I am trying to do:
I would like to recover the positions of the randomly generated bright pixels after clicking on the image visual, which I am printing in the ImageOnClickRelease() function
import numpy as np
import sys
from vispy import scene

def ImageOnClickRelease(event):
    if event.button == 1:
        print("Position: ", event.pos)
        print(view.camera.center, view.camera.get_state(), "map= ", transform.map(event.pos))

canvas = scene.SceneCanvas(keys='interactive', bgcolor='white', size=(800, 600), show=True)

view = canvas.central_widget.add_view()
# Set 2D camera (the camera will scale to the contents in the scene)

view.camera = scene.PanZoomCamera()
view.camera.flip = (0, 1, 0) # Y-Axis should be flipped for displaying images
canvas.events.mouse_release.connect(ImageOnClickRelease)

img_data = np.zeros((100,100))
points=np.random.randint(100,size=(30,2))
print(points)
for p in points:
    img_data[p[0],p[1]] = 0.5

image_visual = scene.visuals.Image(img_data, cmap='grays', clim=(0,1), parent=view.scene)
view.camera.set_range(margin=0)
transform = image_visual.transforms.get_transform()

if __name__ == '__main__' and sys.flags.interactive == 0:
    canvas.app.run()
David Hoese
@djhoese
@dvsphanindra Thanks for the example. It made it much easier to test. In your call to get_transform() pass map_to="canvas" and then .imap(event.pos) will give you the correct X/Y coordinate on the image. So that solves the mouse click -> image location problem, but you say you want to know where the bright pixels are. Is the user supposed to click on the bright pixels? What is your end goal? I'm confused because couldn't you check the image data without the clicking to find the bright spots?
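The suggested fix can be sketched as below. get_transform(map_to="canvas") and .imap() are the actual vispy calls being discussed; the FakeCanvasTransform is a purely hypothetical stand-in so the handler can be exercised without opening a canvas:

```python
import numpy as np

def canvas_pos_to_pixel(transform, event_pos):
    """Map a mouse event position to (col, row) image pixel coordinates.

    In a real app, `transform` would come from
    image_visual.transforms.get_transform(map_to="canvas"), and .imap()
    maps canvas (mouse) coordinates back into image coordinates.
    """
    x, y = transform.imap(np.asarray(event_pos, dtype=float))[:2]
    return int(x), int(y)

# Hypothetical stand-in transform, for demonstration only: pretend the
# image is drawn at 4x scale, so canvas -> image divides by 4.
class FakeCanvasTransform:
    def imap(self, pos):
        return np.asarray(pos, dtype=float) / 4.0

pixel = canvas_pos_to_pixel(FakeCanvasTransform(), (400, 120))  # -> (100, 30)
```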
dvsphanindra
@dvsphanindra
@djhoese Thanks for the suggestion. The code is working now. I am working on pattern recognition wherein after the identification of various pixel groups in an image, the user can get info regarding the object pointed by mouse. I just created a small example to demonstrate my requirement.
Also, can you please explain the output of map() and imap()? I have noticed that the pixel info is in the first two coordinates returned by imap(). What are the remaining two coordinates?
David Hoese
@djhoese

@dvsphanindra The transforms are going between coordinate systems. On your screen you have a 2D set of pixels, but in a Visual you could have a 3D coordinate system (ex. Volumes, Meshes, etc), but your ImageVisual is just 2D so the third z coordinate doesn't mean anything regarding that. The pan/zoom camera is kind of 3D because you are zooming in and out, but it is still a 2D view. I don't remember off the top of my head how the pan/zoom camera internally implements that, but same kind of point.

Typically the 4th dimension is a normalization value. Let me see if I can find the documentation on that, but for your application don't worry too much about it
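To illustrate that 4th component (my own sketch of standard homogeneous coordinates, not vispy internals): transforms operate on (x, y, z, w) vectors, and dividing by w recovers the ordinary coordinates. For a plain pan/zoom mapping w stays 1 and z is unused, which is why only the first two values of the imap() result matter here:

```python
import numpy as np

def normalize_homogeneous(vec4):
    """Divide an (x, y, z, w) homogeneous vector by w to get (x, y, z)."""
    vec4 = np.asarray(vec4, dtype=float)
    return vec4[:3] / vec4[3]

# A perspective-style transform can leave w != 1; dividing by w = 2 here
# maps (8, 4, 0, 2) back to the ordinary point (4, 2, 0).
point = normalize_homogeneous([8.0, 4.0, 0.0, 2.0])
```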

dvsphanindra
@dvsphanindra
@djhoese Thank you I got some idea ;)
cgharib
@cgharib

Hi everyone !

So I'm trying to change two vertices' coordinates over time in Python and then send this information to OpenGL. I was able to do it in glumpy using:

x = .5*np.cos(totalTime)
program['position'] = np.array([(x, 0.), (-x, 0.)])

In vispy, however, it has been more complicated. I looked a bit in the code and finally managed to do it using:

x = .5*np.cos(totalTime)
newPos = np.array([(x, 0.), (-x, 0.)])
self.program['position'].base.set_subdata(newPos.astype(np.float32))

So this works, but I was wondering why this is so different and whether it's really the simplest way to do it. Does anyone have an idea? Thank you!

(I think my variables are quite clear but if not I would be happy to send a more complete source.)

David Hoese
@djhoese
@cgharib It is hard to tell based on what you've provided so far. Something like what you have in glumpy should have worked in vispy. If not then I would have expected self.program['position'][:] = np.array(...) to work. What kind of error, if any, are you getting?