IsolatedSushi
@IsolatedSushi_gitlab
Is there a better way of doing that instead of making a 1,000,000-element array containing the color values per point?
Or does that happen behind the scenes anyway?
David Hoese
@djhoese
As far as I know @IsolatedSushi_gitlab, a color array would be the only way to do this unless you wrote your own shader. But even with your own shader you'd be mapping "some value" -> "some color" so you'd still probably have 1,000,000 values
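One way to build that color array without a Python loop is to vectorize the value -> color mapping with NumPy. This is only a sketch of the mapping itself (the variable names and the two endpoint colors are made up); the resulting (N, 4) RGBA array is the kind of thing you would pass as face_color to Markers.set_data:

```python
import numpy as np

# Hypothetical scalar values, one per point (e.g. depth or intensity).
values = np.linspace(0.0, 1.0, 1_000_000)

# Normalize to [0, 1] and linearly interpolate between two RGBA colors.
lo = np.array([0.0, 0.0, 1.0, 1.0])   # blue at the minimum value
hi = np.array([1.0, 0.0, 0.0, 1.0])   # red at the maximum value
t = (values - values.min()) / (values.max() - values.min())
colors = lo + t[:, np.newaxis] * (hi - lo)   # shape (1_000_000, 4)
```

Broadcasting does the per-point work in C, so even at a million points this is cheap compared to the draw itself.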
Chouran Camara
@chouran
Hi guys! I'm working on a brain-computer interface project for moving video game cameras using EEG signals. I would like to do a demo on a simple window, and just move a 360 degree camera around. Would you have some recommendations? Should I go for gloo or the scene module with the integrated camera objects?
David Hoese
@djhoese
@chouran Go with the SceneCanvas
"gloo" development in VisPy will be "downsized" in future versions of VisPy so it is best to try to stick with the higher level APIs if at all possible
Eric Younkin
@ericgyounkin
Hi, I'm working with the Markers visual + Turntable camera, trying to show a 3D point cloud. I find that when I rotate the camera with the mouse, the points are jittery: they move just slightly on each tick of the rotation. Is that some OpenGL issue? Has anyone seen that before?
Eric Younkin
@ericgyounkin
I think I have isolated the issue. It appears that if the x/y values of the 3D data are too large, you get the jittering. I'll open an issue instead.
David Hoese
@djhoese
@ericgyounkin That could be with floating point precision inside the GPU. There have been multiple issues about it and I'm not sure there is a way around it without changing what data you send to the GPU
code examples are always good though if you do file an issue
Eric Younkin
@ericgyounkin
Understood, thanks. I think I will just center the data on zero, seems like the issue only arises with large x/y values. Thanks for your help.
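Centering large coordinates before upload is a common fix for this kind of jitter, since the data ends up as 32-bit floats on the GPU. A minimal sketch of the idea (the array contents here are made up to stand in for real-world coordinates):

```python
import numpy as np

# Hypothetical point cloud with large absolute x/y values (e.g. projected map coordinates).
points = np.random.default_rng(0).normal(0.0, 10.0, (1000, 3)) + [500000.0, 4000000.0, 0.0]

# Keep the offset so picked/screen coordinates can be mapped back to real-world values.
offset = points.mean(axis=0)
centered = (points - offset).astype(np.float32)  # small values survive float32
```

float32 carries roughly 7 significant decimal digits, so at x ≈ 500,000 the representable steps are coarse enough to show up as pixel-scale jumps; after centering, that precision is spent near zero where the data actually varies.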
Precise Simulation
@precise-simulation
Hi, I'm not sure if this is the right place to ask basic questions, but: I'm visualizing a mesh with vispy.scene.visuals.Mesh(mesh_data), which works very well. But if I later want to change the color of the mesh I use mesh.set_data(mesh_data, color), and I need to both store and pass mesh_data again, which seems a bit of a waste. Is there any functionality to just change the color of an existing mesh object?
David Hoese
@djhoese
@precise-simulation This is the perfect spot to ask this question. I think there was an issue on github related to this recently that might have an optimization that someone is working on (or maybe I worked on it, not sure). You should be able to provide just the color to set_data with no data. Does that work? Or do you get an error?
Precise Simulation
@precise-simulation
If I just provide the color as input then the whole plot disappears (no error is thrown though); only if I add the vertices and faces as input again does it actually show the shape. I'll check the issue, maybe I'm doing something else wrong.
Precise Simulation
@precise-simulation
I looked at https://github.com/vispy/vispy/blob/master/vispy/visuals/mesh.py#L223-L255 and the defaults for all inputs are None, so unless new meshdata is provided everything will be reset. I'm not sure about the proper modification, so for now I just changed my code to: mesh._color = Color(color); mesh.mesh_data_changed()
David Hoese
@djhoese
@precise-simulation I knew I did something related to this recently: vispy/vispy#2002
This doesn't affect the MeshVisual itself, but it does teach us a workaround for your problem
in your example code you said mesh_data, is this a MeshData object or is this just the vertices?
If it is a MeshData object then you should be able to do:
mesh_data.set_vertex_colors(colors)
mesh.set_data(meshdata=mesh_data)
Precise Simulation
@precise-simulation
@djhoese Yes, I think a similar check as in #2002 might be appropriate for "mesh.set_data" as well (for users who don't save the meshdata; I didn't, as I thought there would then effectively be two copies of the mesh, which is maybe unnecessary). Thanks for pointing out the workaround though.
David Hoese
@djhoese
@precise-simulation If you want to file an issue or a pull request with a fix that'd be great
Eric Younkin
@ericgyounkin
Hello, apologies if this is too much code for this forum, but I'm trying to figure out how to get data coordinates from pixel coordinates. I have this so far:
Eric Younkin
@ericgyounkin
import numpy as np
from vispy import visuals, scene, app

canvas = scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
view.camera = scene.TurntableCamera(fov=45)
view.camera.distance = 1000
sctr = scene.visuals.create_visual_node(visuals.MarkersVisual)
scatter = sctr(parent=view.scene)
scatter.set_data(np.stack([np.arange(100) * 100, np.arange(100) * 100, np.linspace(0, 5, 100)], axis=1))
print(canvas.size)
>>> (800, 600)

transform = scatter.node_transform(view)
print(transform.imap((0, 0)))
>>>  [  683.63844969 -1185.20109842   790.42826329     1.58113909]
print(transform.imap((800, 600)))
>>> [  684.80214175 -1185.00753309   789.71082439     1.58113909]

app.run()
These do not appear to be data coordinates. I would expect the corners of the screen to map to negative values at the origin and positive values at (800, 600). Am I not getting the correct transform?
David Hoese
@djhoese
@ericgyounkin
  1. Posting here is fine. Thanks.
  2. No need to do create_visual_node. Instead use scene.visuals.Markers, which is the same thing but already done for you. I've been seeing this function pop up in user code a couple of times lately. Is there a tutorial or example somewhere that showed it being used?
  3. Try using scatter.transforms.get_transform(...) and read its docstring. It lets you choose which "levels" of the overall series of transforms are considered.
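The map/imap pair behind get_transform can be illustrated with plain NumPy. This is not the vispy API — just the affine math a pan/zoom-style transform performs (scale, then translate), with made-up scale and translate values, to show why mapping canvas pixels back into 2D data coordinates is well defined:

```python
import numpy as np

scale = np.array([2.0, -2.0])         # hypothetical zoom; y flipped since canvas y points down
translate = np.array([400.0, 300.0])  # hypothetical pan: data origin at canvas center

def tmap(data_xy):
    """Data -> canvas, analogous to transform.map()."""
    return np.asarray(data_xy, dtype=float) * scale + translate

def timap(canvas_xy):
    """Canvas -> data, analogous to transform.imap(); the exact inverse of tmap."""
    return (np.asarray(canvas_xy, dtype=float) - translate) / scale

print(timap((400.0, 300.0)))       # the canvas center maps back to the data origin
print(timap(tmap((10.0, 20.0))))   # round-trips to the original data point
```

With a 3D camera like TurntableCamera the picture is more complicated: a 2D canvas point corresponds to a whole ray in data space, which is why the TurntableCamera results above look surprising.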
Eric Younkin
@ericgyounkin

Thanks David. Most of the vispy examples/tutorial/visuals and examples/basics/visuals seem to have used create_visual_node. I saw that it didn't really add much, but I thought that might be a future thing vispy was encouraging users to do. I'll move away from it.

For transforms from mouse event reported coordinates to coordinates in the visual coordinate system, it seems like I would want to transform from canvas to visual:

Visual - arbitrary local coordinate frame of the visual. Vertex buffers used by the visual are usually specified in this coordinate system.
Canvas - This coordinate system represents the logical pixel coordinates of the canvas. It has its origin in the top-left corner of the canvas, and is typically the coordinate system that mouse and touch events are reported in.

transform = scatter.transforms.get_transform('canvas', 'visual')
print(transform.map((0, 0)))
>>>  [  683.63844969 -1185.20109842   790.42826329     1.58113909]
print(transform.map((800, 600)))
>>>  [  684.80214175 -1185.00753309   789.71082439     1.58113909]

Seems like the same answer. I guess I'm not sure if transforming a 2D pixel coordinate to a 3D data coordinate even makes sense. I need to study this a little more to understand what it is doing. Has anyone else talked about selecting 3D data using mouse events? I got the mouse event/camera part fine; just this transformation part has me stuck.

David Hoese
@djhoese

@ericgyounkin About the examples, it looks like the examples/tutorial use it a lot when they are making their own Visuals and need to use them with the SceneCanvas. I'll have to look into updating the 5 non-tutorial ones that use it. Thanks.

I don't use the TurntableCamera a lot, but it may be messing up your results. I'll try playing with your example code with the 'panzoom' camera later. You may also need to define an STTransform on the visual scatter.transform = STTransform, but it's early here and I can't seem to get things to make sense.

David Hoese
@djhoese
import numpy as np
from vispy import scene, app
from vispy.visuals.transforms import STTransform
from vispy.scene import visuals

canvas = scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
#view.camera = scene.TurntableCamera(fov=45)
#view.camera.distance = 1000
view.camera = 'panzoom'
scatter = visuals.Markers(parent=view.scene)
data = np.stack([np.arange(100) * 100, np.arange(100) * 100, np.linspace(0, 5, 100)], axis=1)
scatter.set_data(data)
print(canvas.size)

@canvas.events.mouse_release.connect
def on_mouse_release(event):
    print(event.pos)
    transform = scatter.transforms.get_transform('canvas', 'visual')
    print(transform.map(event.pos))
    return event

app.run()
This works for me ^
If I pan around and zoom out I can click on the dots and get the expected coordinates.
@ericgyounkin ^
glumpyfan101
@glumpyfan101
image.png

Hi! I realize this is not entirely the correct place to ask, but seeing as glumpy is the sister project of vispy and since the glumpy chatroom is not very active at the moment, I'll try anyway:

Right now I am learning about data buffering on the GPU. In the program above (image to the left) I try to

  1. Buffer some data to the GPU (for now just the simple float 1)
  2. Have the GPU perform a calculation on that data (doubling the value every cycle)
  3. Read back the data from the buffer (which, after one cycle should give me 2, then 4 etc.)

As you can see (image to the right), Python is not reading back what I expected. I am obviously quite the amateur, and there is probably something conceptually wrong with my approach here. Does anyone see the problem?

David Hoese
@djhoese
@rougier ^
@glumpyfan101 I could be wrong, but "binding" in glumpy is likely only creating a Buffer on the GPU side to pass your data to. It doesn't create a 2-way communication between the CPU and GPU. Additionally, in glumpy, is "activating" a GL Program enough to actually execute it?
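For reference, the behaviour being described — each draw "cycle" doubling a buffered value and the result being readable afterwards — looks like this when simulated on the CPU. On the GPU you would need an explicit readback path (e.g. rendering the result into a texture/framebuffer and reading that back) to observe it; uploading data to a buffer and activating a program does not by itself round-trip shader writes back to Python:

```python
import numpy as np

# The value "uploaded" to the hypothetical GPU buffer.
buf = np.array([1.0], dtype=np.float32)

history = []
for _ in range(3):       # three simulated draw cycles
    buf = buf * 2.0      # the per-cycle doubling the shader was meant to perform
    history.append(float(buf[0]))

print(history)           # [2.0, 4.0, 8.0] — what a working readback would show
```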
Eric Younkin
@ericgyounkin
@djhoese Thanks! This does seem to work fine. I'm going to dig into the Base3dRotationCamera to see if I can figure out how the transforms work. I think my 3d scatter widget is going to need rotations in the end.
chhb
@chhb:tchncs.de
[m]

Hello everyone,

I am trying to create a 2D plot with multiple y axes. I use the scene module. I am able to plot multiple lines in one view but do not understand how to link them to different y axes. I link the axis to the view but do not get how the scene.LinePlot is connected to the axis.

I am also able to plot two views on top of each other, each one with its own y axis. This was a smart solution until I realised that the mouse only affects the top view, and when I want to scroll to the right I only change the x axis of that view. So I would need to keep the views in sync, which I don't know how to do either.

So my question is whether this is possible at all (for a mechanical engineer), how to do it, or whether there is an example I have missed that could guide me.

David Hoese
@djhoese
@chhb:tchncs.de I could be wrong, but I don't think we have any builtin support for two y-axes. What does your code look like so far, though? If you are doing everything with the SceneCanvas and the Visuals then maybe it is possible, but I'm not sure (I don't use the plotting stuff too much)
chhb
@chhb:tchncs.de
[m]
@djhoese: Thank you for your answer. I will try to make it more minimal tomorrow evening (it has grown with my tries) and post it then.
Nicolas P. Rougier
@rougier
Sorry for the lag, I don't get gitter email notifications for some reason. @glumpyfan101 Best would be to open an issue on GitHub.
Eric Younkin
@ericgyounkin
3dview_axes.png

I'm using the AxisWidget and Markers visual to create a scatterplot widget. I'm able to set the domains to the min/max of the data correctly, but I'm not sure how to stretch the axis so that the end of the axis aligns with the end of the data. I've tried using self.axis_x.stretch and the pos keyword in the AxisWidget init, but I'm not really seeing the results. It seems like AxisWidget uses the screen transform over the pos keyword anyway, if I'm understanding it correctly.

Is there some trick to getting the axis to align with the data extents?

self.scatter = scene.visuals.Markers(parent=self.view.scene)
self.scatter.set_data(self.displayed_points, edge_color=clrs, face_color=clrs, symbol='o', size=3)
self.axis_x = scene.AxisWidget(orientation='bottom', domain=(0, self.x.max() - self.x.min()))
self.view.add(self.axis_x)
self.axis_z = scene.AxisWidget(orientation='right', domain=(self.z.min(), self.z.max()))
self.view.add(self.axis_z)
David Hoese
@djhoese
@ericgyounkin A couple ideas and things I wanted to mention:
  1. The "Widget" classes are meant to be used by the VisPy plotting API. At least that's my understanding of them. That said, they are Visuals at the end of the day so this shouldn't be a problem.
  2. You could try adding a .transform = STTransform(...) to scale the widget.
  3. There is a size keyword argument to the AxisWidget that I think specifies how large it is supposed to be.
chhb
@chhb:tchncs.de
[m]
@djhoese: So my code is still quite long:
from vispy import app, scene

import numpy as np

class Canvas(scene.SceneCanvas):
    def __init__(self):
        scene.SceneCanvas.__init__(self, keys='interactive', show=True)
        self.size = 1600, 1200
        self.unfreeze()
        self.grid = self.central_widget.add_grid(margin = 10)
        self.grid.spacing = 0
        self.view = self.grid.add_view(row=1, col=1, border_color='white', bgcolor = 'black')
        self.view2 = self.grid.add_view(row=1, col=1, border_color='white', bgcolor = None)
        self.view.camera = 'panzoom'
        self.view2.camera = 'panzoom'

        self.yaxis = scene.AxisWidget(orientation='left')
        self.yaxis.width_max = 80
        self.grid.add_widget(self.yaxis, row=1, col=0)

        self.yaxis2 = scene.AxisWidget(orientation='right')
        self.yaxis2.width_max = 80
        self.grid.add_widget(self.yaxis2, row=1, col=2)

        self.xaxis = scene.AxisWidget(orientation='bottom')
        self.xaxis.height_max = 80
        self.grid.add_widget(self.xaxis, row=2, col=1)

        self.xaxis2 = scene.AxisWidget(orientation='top')
        self.xaxis2.height_max = 80
        self.grid.add_widget(self.xaxis2, row=0, col=1)

        self.xaxis2.link_view(self.view2)
        self.yaxis2.link_view(self.view2)
        self.xaxis.link_view(self.view)
        self.yaxis.link_view(self.view)

        self.freeze()

canvas = Canvas()
data = np.array([[0.0, 1.0, 2.0], [0.0, 1.5, 0.75]]).transpose()
scene.LinePlot(data, parent = canvas.view.scene, color = 'r')
data = np.array([[0.0, 1.5, 3.0], [0.0, 1.5, 3.75]]).transpose()
scene.LinePlot(data, parent = canvas.view.scene, color = 'y')
canvas.view.camera.set_range()

data = np.array([[0.0, 1.0, 2.0], [1.0, 0.5, 1.75]]).transpose()
scene.LinePlot(data, parent = canvas.view2.scene, color = 'b')
canvas.view2.camera.set_range()

if __name__ == '__main__':
    import sys
    if sys.flags.interactive != 1:
        app.run()
David Hoese
@djhoese

@chhb:tchncs.de This is a difficult one. I don't think this is possible. The AxisWidget has its link_view method which ties itself to the view. Every time the view changes (via the camera) the axis is updated. However, there might be some hack-y stuff you could do by modifying your_axis_widget.transform = STTransform(...). I'm looking at the AxisWidget _view_changed method and notice all it is doing is transforming the axis end points to new coordinates using the newly changed view transform (and every transform between the view and the widget). I'm not sure what some_axis_widget.transform is set to by default in this Widget, but if you customized it you might be able to get this to work...

but it definitely isn't supported out of the box.

chhb
@chhb:tchncs.de
[m]

@djhoese: That is what I feared. 😆 I will try to understand the transform stuff and see if I find a solution.

Thank you very much for looking at my code and for your answer.

David Hoese
@djhoese
yeah, sorry I couldn't give you a better answer. I don't spend a lot of time with the existing plotting functionality in vispy and it is relatively incomplete compared to some of the other parts in vispy.
Eric Younkin
@ericgyounkin
@djhoese thanks for the tip, the size attribute seems like the fix for me. It seems to be in visual coordinates, which is interesting and super helpful. I can just set the size to the data max - data min and it lines up perfectly.
peach1995
@peach1995
Hello everyone, I want to move the canvas with code (not with the mouse).
I want to center the canvas on the position where my mouse clicked.
How can I do it in scene.SceneCanvas?
peach1995
@peach1995
Ok, I have found it. We can use this attribute:
self.canvas = scene.SceneCanvas()
self.view = self.canvas.central_widget.add_view()
self.view.camera = scene.cameras.TurntableCamera(elevation=90, azimuth=0,  # fov=20,
                                                 distance=200.0, center=(0, 0, 0),
                                                 scale_factor=30)
self.view.camera.center = (100,200)
David Hoese
@djhoese
@peach1995 You've got it. The parameter changes depending on the camera you choose. You could also not use a camera and manually change the ".transform" of the view (or other scene canvas node) yourself, but the camera is probably the easiest.