David Hoese
@djhoese

@ericgyounkin If you are using VisPy 0.7+ then you should look at adding the ShadingFilter to your MeshVisual: https://vispy.org/api/vispy.visuals.filters.mesh.html#vispy.visuals.filters.mesh.ShadingFilter

You can access the default instance of this through the MeshVisual.shading property or you can create your own filter and attach it after you create the MeshVisual.

You can see examples of this in this example: https://vispy.org/gallery/scene/mesh_shading.html
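
A minimal sketch of the create-your-own-filter route (assuming VisPy 0.7+; the sphere below is just stand-in geometry for whatever mesh you already have):

from vispy import app, scene
from vispy.geometry import create_sphere
from vispy.visuals.filters import ShadingFilter

canvas = scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
view.camera = 'turntable'

# Stand-in geometry; use your own vertices/faces or MeshData here
meshdata = create_sphere(radius=1.0)
mesh = scene.visuals.Mesh(meshdata=meshdata, color=(.5, .7, .5, 1),
                          parent=view.scene)

# Create the filter yourself and attach it after the visual exists
shading = ShadingFilter(shading='smooth')
mesh.attach(shading)

if __name__ == '__main__':
    app.run()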

Eric Younkin
@ericgyounkin
@djhoese thanks for the suggestion, I am using a MarkersVisual similar to https://vispy.org/gallery/scene/point_cloud.html
I do see an attach() method, which can be used to attach a filter? So maybe that is the approach?
David Hoese
@djhoese
Oh! Sorry @ericgyounkin I didn't realize you were using a MarkersVisual. That is going to be more difficult. I don't know too much about the Markers implementation, but I think knowing what symbol you are using would be a good start. Maybe we can come up with some kind of hack for you. That said, I'm not super optimistic as the MarkersVisual seems rather limited as far as customizing the actual look of the individual marker points
Eric Younkin
@ericgyounkin
I use the following lines
scatter = scene.visuals.Markers(parent=self.view.scene)
scatter.set_data(self.displayed_points, edge_color=clrs, face_color=clrs, symbol='o', size=3)
I see that the ShadingFilter appears to let you align the light source with the camera direction. Maybe that is the right approach? And then attach() the Filter to the Visual?
If I rotate the camera to the sweet spot, the lighting is perfect.
[screenshot: point cloud lit correctly once the camera is at the sweet spot]
David Hoese
@djhoese
@ericgyounkin Is the above screenshot with or without the filter? If it's without, I'm not sure the filter will work for any non-mesh visuals (like the Markers). The reason I asked about the symbol you're using is that doing what you want may require editing the OpenGL shader on the markers. I'm actually surprised there is any notion of lighting in the markers, though. I wonder if this is just a side effect of depth testing or the edge color of the markers.
another thing you could try after making your Markers is to do scatter.antialias = False and see if that fixes what you're seeing
Eric Younkin
@ericgyounkin
@djhoese the above screenshot is without the filter, I just moved the camera around until it lit up.
[screenshot: point cloud without the filter]
I am using symbol='o' in the scatter set_data
Eric Younkin
@ericgyounkin
Oh interesting! Adding this makes it perfect
scatter.set_gl_state(blend=False, cull_face=False, depth_test=False)
It seems to be the depth_test that makes the difference. I guess the lighting is based on distance from camera to point? Depth test turns that off? Unsure, I should really learn more about OpenGL, but this seems to be the solution
[screenshot: point cloud with depth_test disabled]
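
Putting the pieces together, a minimal sketch of the setup that works here (random points and colors stand in for displayed_points and clrs):

import numpy as np
from vispy import app, scene

canvas = scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
view.camera = 'turntable'

# Stand-in data for the real point cloud and per-point colors
points = np.random.normal(size=(10000, 3)).astype(np.float32)
colors = np.random.uniform(0.2, 1.0, size=(10000, 4)).astype(np.float32)
colors[:, 3] = 1.0

scatter = scene.visuals.Markers(parent=view.scene)
scatter.set_data(points, edge_color=colors, face_color=colors,
                 symbol='o', size=3)
# Disabling the depth test is what removes the odd "lighting" look,
# but overlapping visuals in the same scene may then draw in the wrong order
scatter.set_gl_state(blend=False, cull_face=False, depth_test=False)

if __name__ == '__main__':
    app.run()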
David Hoese
@djhoese

Eh learning more OpenGL may not be the best thing if this is "good enough" for you right now @ericgyounkin. You may have complications in the future with these settings if you try to put other Visuals in the same canvas as your Markers. Overlapping Visuals with your Markers may end up being drawn weird.

Did the antialiasing change anything (without changing set_gl_state)? My guess is that for some reason the depth testing seems to think that all the markers are behind one another in the original case and decides not to draw some/most of them. Hard to tell right now though.

Eric Younkin
@ericgyounkin
@djhoese it did not. I now see that depth_testing drops fragments if it deems them to be behind something, like you say. Pretty odd, but this is definitely 'good enough'. Thanks so much.
I should also mention that I am getting close to having a real interactive 3d scatter plot widget, not sure if that is something that is worth trying to add to Vispy. It might be more code than you want.
David Hoese
@djhoese
@ericgyounkin How does that differ from the scatter plot available through the plotting API? Oh duh, ours is only 2D, right? A 3D implementation would be appreciated.
qaz10102030
@qaz10102030
Hi all, I want to do real-time object tracking using vispy. I use the regionprops function (from scikit-image) to get the bbox information. Does vispy have an add_patch function like matplotlib and OpenCV, or any suggestions or examples? Thanks!
David Hoese
@djhoese
@qaz10102030 No. You would want to either create a LineVisual for a simple series of lines or use a RectangleVisual. I've never done what you're trying to do, so feel free to tell me if I'm missing something important here.
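
A rough sketch of the LineVisual route (the frame and the regionprops-style (min_row, min_col, max_row, max_col) box below are made up for illustration):

import numpy as np
from vispy import app, scene

canvas = scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
view.camera = scene.PanZoomCamera(aspect=1)

# Stand-in for a video frame
frame = np.random.rand(480, 640).astype(np.float32)
scene.visuals.Image(frame, parent=view.scene)
view.camera.set_range()

# Hypothetical bbox values, e.g. taken from regionprops
min_row, min_col, max_row, max_col = 100, 200, 180, 320
corners = np.array([[min_col, min_row],
                    [max_col, min_row],
                    [max_col, max_row],
                    [min_col, max_row],
                    [min_col, min_row]], dtype=np.float32)
box = scene.visuals.Line(corners, color='red', width=2, parent=view.scene)

# For real-time tracking, keep the Line around and update it each frame:
#     box.set_data(new_corners)

if __name__ == '__main__':
    app.run()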
qaz10102030
@qaz10102030
OK, I got it. Thank you!
black banana
@blackbanana_gitlab
Can you please recommend resources for learning vispy?
David Hoese
@djhoese

@blackbanana_gitlab What are you trying to accomplish? Everything we have is described here: https://vispy.org/

Admittedly it needs a lot of work. As a beginner I would suggest using the SceneCanvas and looking at the Gallery of examples for the "Scene". There are also resources linked on our website for understanding the basics of OpenGL.

martianmartin
@martianmartin
Greetings! I would like to synchronize the x-axis of two plots, so that when I drag one, the other moves as well. I am using the plotting API. After some digging I found the _process_mouse_event() method in SceneCanvas, which finds the VisualNode at event.pos and passes it the event. I tried passing the event to both of the desired ViewBox objects found in VisualNode._visual_ids, but only the second one was affected... Should I be taking a different approach to implementing this functionality, or am I on the right track? Please advise. Thanks in advance!
David Hoese
@djhoese
@martianmartin I'm not sure we've had anyone successfully share axes with vispy plotting yet (I personally don't use the plotting API for my work, so I have little experience with it). Looking at the code, I see there are ways it could be implemented inside vispy in a better way. As for making it work with vispy as-is, using mouse events doesn't seem wrong (I can't think of a better way off the top of my head). Is it possible for you to show me a minimal example of what you're doing?
martianmartin
@martianmartin
@djhoese
def _process_mouse_event(self, event, override_picked=False):
        prof = Profiler()  # noqa
        next_picked = None # sync hack
        deliver_types = ['mouse_press', 'mouse_wheel']
        if self._send_hover_events:
            deliver_types += ['mouse_move']

        picked = self._mouse_handler if not override_picked else override_picked
        if picked is None:
            if event.type in deliver_types:
                picked = self.visual_at(event.pos)

        # NOTE: hack to sync other ViewBox, should check a sync dict or parameter of VisualNodes instead
        if type(picked) == ViewBox and not override_picked:
            next_picked = VisualNode._visual_ids[33]

        # No visual to handle this event; bail out now
        if picked is None:
            return

        # Create an event to pass to the picked visual
        scene_event = SceneMouseEvent(event=event, visual=picked)

        # Deliver the event
        if override_picked: # we know ViewBox handles event directly so don't need to search parents and don't want to change _mouse_handler
            getattr(picked.events, event.type)(scene_event)
        elif picked == self._mouse_handler:
            # If we already have a mouse handler, then no other node may
            # receive the event
            if event.type == 'mouse_release':
                self._mouse_handler = None
            getattr(picked.events, event.type)(scene_event)
        else:
            # If we don't have a mouse handler, then pass the event through
            # the chain of parents until a node accepts the event.
            while picked is not None:
                getattr(picked.events, event.type)(scene_event)
                if scene_event.handled:
                    if event.type == 'mouse_press':
                        self._mouse_handler = picked
                    break
                if event.type in deliver_types:
                    # events that are not handled get passed to parent
                    picked = picked.parent
                    scene_event.visual = picked
                else:
                    picked = None

        # If something in the scene handled the scene_event, then we mark
        # the original event accordingly.
        event.handled = scene_event.handled

        if next_picked: # propagate to synced node
            self._process_mouse_event(event, override_picked=next_picked)
Apologies for the large message; if there's another way I should share, please let me know. Anyway, I hacked this together to test the approach and it's working fine. I think a non-hard-coded approach may be to add an attribute to the VisualNode that we want to sync, listing the other node(s) synced to it, and then simply propagate the event as shown (rough sketch just below). Thoughts?
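
A rough sketch of that idea, using a made-up synced_nodes attribute (not an existing vispy API), as a patch to the method above:

# Mark which nodes should receive each other's events
viewbox_a.synced_nodes = [viewbox_b]
viewbox_b.synced_nodes = [viewbox_a]

# Then in _process_mouse_event, instead of the hard-coded
#     next_picked = VisualNode._visual_ids[33]
# propagate the event to every synced node at the end of the method
# (where next_picked is currently handled):
if picked is not None and not override_picked:
    for node in getattr(picked, 'synced_nodes', []):
        self._process_mouse_event(event, override_picked=node)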
martianmartin
@martianmartin

For context my top level looks like this:

from vispy import plot as vp
...
fig = vp.Fig(size=(800, 800), show=False)

ohlc = fig[0, 0].ohlc_plot((x, o, h, l, c), line_pos, symbol='c', title='BTC/USD 1 Minute Candles',
                      xlabel='Current (pA)', ylabel='Membrane Potential (mV)')
lines_plot = fig[1, 0].plot((x, c))

grid = vp.visuals.GridLines(color=(0, 0, 0, 0.5))
grid.set_gl_state('translucent')
fig[0, 0].view.add(grid)

This results in two ViewBoxes being added to the SceneCanvas, which contain the plots to be synced.

David Hoese
@djhoese

@martianmartin Ok, so that hack works for you right now? I'm not sure I follow the details of this, but is the basic issue that the mouse event handler is finding the "picked" AxisVisual/Widget and calling its mouse handler, but it never "finds" the other axis visual? Oh, or is it not the AxisVisual's mouse handler that is called, but the ViewBox's, and the AxisVisual is responding to those changes? I'm trying to think of other ways this could be done.

Side note: Don't do if type(picked) == ViewBox; instead do if isinstance(picked, ViewBox).

martianmartin
@martianmartin
@djhoese This hack works for now, I just wanted to check if there was some obviously better way that I was missing as I'm new to this library. Thanks for your feedback!
David Hoese
@djhoese
@kmuehlbauer have you done anything like shared axes in vispy? ^
Kai Mühlbauer
@kmuehlbauer
Not that I recall. But if I had, I think I would have done similar things.
Thomas Doman
@tjjdoman_gitlab

Hello, I am having a strange issue with Canvas.render(). I am creating a canvas with scene.SceneCanvas and then a view with canvas.central_widget.add_view(), and placing image visuals within that view. I then attach some vert filters to position the image visuals and some frag filters to correct their color and alter the alpha channel to blend the seams. My final view looks like the first image.

However, once I try to render the canvas to the CPU, it seems as if all of the frag filters are removed (notice the seams around the "H2"). I wonder if it is something to do with the order I apply my filters, and if the rendering is happening before these filters are applied. If so, is there a way to specify at which point in the pipeline the rendering happens? Thanks for the help!

[image: final view before rendering]
[image: the same view after rendering]
David Hoese
@djhoese

@tjjdoman_gitlab I think we need more to go on. When you talk about calling .render(), you are then saving that array to an image on disk? When you say "vert filters" and "frag filters", how exactly are you applying these? Is it possible for you to show us a minimal example?

It is hard to tell the difference between the images since they are different sizes...and I don't know what I'm looking at

Thomas Doman
@tjjdoman_gitlab
ok I figured I would check in case there was an obvious solution. I will make an example. Thanks
David Hoese
@djhoese
@tjjdoman_gitlab what version of vispy are you using?
Thomas Doman
@tjjdoman_gitlab
'0.8.1'
David Hoese
@djhoese
ok, darn
Thomas Doman
@tjjdoman_gitlab

@djhoese Here is a minimal example of the issue I am seeing:


from vispy.visuals.transforms import MatrixTransform
import numpy as np
import matplotlib.pyplot as plt
from vispy import scene
import vispy
vispy.use('pyqt5')

images = np.zeros(shape=(2, 2, 400, 400, 4), dtype=np.uint8)
images[..., 3] = 255
images[1, 1, -200:, :, 3] = 100
images[1, 1, :, -200:, 3] = 100
images[1, 0, -200:, :, 3] = 100
images[0, 1, :, -200:, 3] = 100

images[0,0,..., 0] = 255
images[1,0,..., 1] = 255
images[0,1,..., 2] = 255
images[1,1,...,:3] = 255

s = scene.SceneCanvas(size=(600, 600), title='actual')
view = s.central_widget.add_view()
# if i comment out the below line my image changes to a gray square in the corner
view.camera = 'panzoom'

im = np.ndarray((2,2), dtype=object)
for c, i in enumerate(np.ndindex(im.shape)):
    im[i] = scene.visuals.Image(images[i], parent=view.scene)
    imX = im[i]
    mat = np.eye(4)
    mat[3, 1] = -i[0] * 200
    mat[3, 0] = -i[1] * 200
    mat[3, 2] = 3 - c
    imX.transform = MatrixTransform(mat)
    imX.order = c
    imX.update()
s.show()
view.camera.set_range(x=(-200, 400), y=(-200, 400))

render = imX.parent.canvas.render()
plt.figure()
plt.imshow(render)
plt.show()

The image shown on the canvas (left) shows 9 differently colored squares, while the rendered image (right) only shows 3 squares and 3 rectangles.
Thank you for any advice!

[image: canvas output (left) next to the rendered array shown with imshow (right)]
David Hoese
@djhoese
@tjjdoman_gitlab This definitely has something to do with ordering. If I reverse the order (.order = -c) then I get the same image for both vispy and imshow:
[image: matching canvas and rendered output after reversing the order]
@tjjdoman_gitlab I don't know why, but changing your last Z matrix entry to mat[3, 2] = -c * 100 makes it work
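
For clarity, here is the loop from the example above with that Z-spacing tweak applied (the order-reversal alternative is noted in a comment; everything else is unchanged):

for c, i in enumerate(np.ndindex(im.shape)):
    im[i] = scene.visuals.Image(images[i], parent=view.scene)
    imX = im[i]
    mat = np.eye(4)
    mat[3, 1] = -i[0] * 200
    mat[3, 0] = -i[1] * 200
    mat[3, 2] = -c * 100  # wider Z spacing instead of 3 - c
    imX.transform = MatrixTransform(mat)
    imX.order = c  # alternatively, imX.order = -c reverses the draw order
    imX.update()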
David Hoese
@djhoese
If I use anything less than 35 in that line then the result isn't the same and the red covers more than expected. Must be some Z-level precision issue.