Tarek Sherif
@tsherif
Prepping v0.12.0 for release tsherif/picogl.js#128
Biggest update is support for compressed cubemaps.
Tarek Sherif
@tsherif
PicoGL.js v0.12.0 has just been published. Change log: https://github.com/tsherif/picogl.js/blob/master/CHANGELOG.md
New example showing how to use compressed cubemaps: https://tsherif.github.io/picogl.js/examples/compressed-cubemap.html
Tarek Sherif
@tsherif
Prepping v0.13.0 for release tsherif/picogl.js#140
Biggest updates are support for parallel shader compilation and importing without requiring a bundler.
Tarek Sherif
@tsherif
Prepping v0.14.0 for release tsherif/picogl.js#147
Biggest updates are support for multi-draw and anisotropic texture filtering.
Nehal Patel
@habemus-papadum
Nice to see v0.14.0 -- I created a notebook in Observable demoing how to use picogl in that environment: https://observablehq.com/@habemus-papadum/picogl-101
Tarek Sherif
@tsherif
Awesome @habemus-papadum!
Tarek Sherif
@tsherif
v0.14.0 is live, BTW. Change log is here: https://github.com/tsherif/picogl.js/blob/master/CHANGELOG.md
Tim van Scherpenzeel
@TimvanScherpenzeel
Hi Tarek, how did you generate the compressed cubemaps / textures (.pvr)? Are you using PVRTexTool? Is there a reason you prefer PVR over KTX?
Tarek Sherif
@tsherif
Hi Tim. I used crunch and PVRTexTool (I believe crunch for DXT, PVRTexTool for the rest).
No preference for container format, really. They're all pretty much the same. It's just that PVRTexTool is the only tool I know of that produces PVRTC, and it only generates PVR. And I just didn't want to write a bunch of different parsers.
Tarek Sherif
@tsherif
One minor annoyance I remember with KTX is that it inserts the size of each mip layer before the layer data, so you have to remember to skip over that while parsing (DDS and PVR just lay out the layers consecutively).
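The layout difference Tarek describes can be sketched in plain JavaScript. This is a simplified, hypothetical helper, not a full KTX parser: it only walks the mip-level region of a KTX 1 file, where each level is prefixed by a 4-byte imageSize field and padded to a 4-byte boundary (real files also have a 64-byte identifier/header and key/value data before this region).

```javascript
// Sketch: iterating mip levels in a KTX-style buffer, where each level is
// prefixed by a 4-byte imageSize field that must be skipped while parsing.
function readKtxMipLevels(buffer, byteOffset, numLevels, littleEndian = true) {
  const view = new DataView(buffer);
  const levels = [];
  let offset = byteOffset;
  for (let i = 0; i < numLevels; i++) {
    const imageSize = view.getUint32(offset, littleEndian); // the extra size field
    offset += 4;                                            // skip over it
    levels.push(new Uint8Array(buffer, offset, imageSize));
    offset += imageSize + (3 - ((imageSize + 3) % 4));      // data + pad to 4 bytes
  }
  return levels;
}

// A DDS/PVR-style layout has no size prefix: levels are laid out
// consecutively, so a parser just advances by each level's computed size.
```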
Tim van Scherpenzeel
@TimvanScherpenzeel
Thanks!
Martijn Mulder
@AgentMulder
Can you recommend a book to study WebGL2?
Tarek Sherif
@tsherif
@AgentMulder I think the best resource is this one: https://webgl2fundamentals.org/
Martijn Mulder
@AgentMulder
Where can I find a picogl example that employs indexed drawing (i.e. drawElements(), drawElementsInstanced())?
Tarek Sherif
@tsherif
The spheres in the examples are indexed meshes, so examples that use them will use indexed drawing (e.g. https://tsherif.github.io/picogl.js/examples/oit.html)
Martijn Mulder
@AgentMulder
When I stay under 1000 triangles scattered over different scenes, and the triangles DO share a lot of vertices, is it worth the trouble of indexed drawing?
Tarek Sherif
@tsherif
You pretty much always want to do indexed drawing if there are shared vertices. There are memory wins, and it allows you to take advantage of the vertex cache (https://www.khronos.org/opengl/wiki/Post_Transform_Cache).
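The memory win is easy to see with a quad. A minimal data-layout sketch (plain typed arrays, not picogl-specific):

```javascript
// Non-indexed: two triangles, 6 vertices, the shared edge stored twice.
const nonIndexedPositions = new Float32Array([
  -1, -1,   1, -1,   1, 1,   // triangle 1
  -1, -1,   1,  1,  -1, 1,   // triangle 2 (repeats two vertices)
]);

// Indexed: 4 unique vertices plus a small index buffer.
const positions = new Float32Array([
  -1, -1,   1, -1,   1, 1,   -1, 1,
]);
const indices = new Uint16Array([0, 1, 2, 0, 2, 3]);

// With raw WebGL2 you would upload `indices` to an ELEMENT_ARRAY_BUFFER and
// call gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0).
// The repeated index values (0 and 2 here) are what let the post-transform
// vertex cache reuse already-shaded vertices.
```

Here that's 48 bytes of positions non-indexed vs 32 + 12 bytes indexed; the gap widens quickly for dense meshes where most vertices are shared by several triangles.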
Will
@WHSnyder
does anyone here frequently use this library w/ spector for debugging?
Will
@WHSnyder
I'm pretty new to shader programming and I'm having trouble debugging an app that relies heavily on floating-point textures.
Simon Moos
@Shimmen
@WHSnyder I have used spector extensively while working with picogl.js. It does work and shouldn't be any different from using it with some other WebGL project/library. Do you have any specific questions?
Will
@WHSnyder
A couple of questions I had were actually bugs on my part that took a while to find, but there's still this really annoying issue where canvas framebuffers are displayed as tiny squares in the bottom left of an otherwise empty image.
It's not due to an incorrectly set viewport
I could only find one other mention of this issue on the web but nobody gave a solution
But all in all it's not a big deal, I was just wondering if it could've been a picogl-specific issue.
Also, I'm using picogl in a deferred rendering pipeline to draw 1k+ lights. This is my first time implementing a deferred rendering strategy, so let me know if this plan doesn't make sense: 1) draw normals, texture, geometry to a g-buffer
2) draw 1k+ lights as glPoint primitives
3) adjust glPointSize depending on the distance from the camera and frustum parameters
Will
@WHSnyder
4) calculate lighting per fragment generated
My main question is: why do people draw actual spheres as light volumes instead of glPoints with an adjusted radius? Doesn't my method save a lot of time by skipping a lot of vertex shading?
Will
@WHSnyder
(Given that I'm only rendering point lights)
Simon Moos
@Shimmen
Your reasoning makes sense, but there are some sneaky details! First of all (if I remember correctly), WebGL isn't required to support any point size >2, so right there your plan kind of breaks. So you shouldn't use glPoints for that reason. Secondly, it's good to use properly drawn volumes since they can be run through the depth test, and if some pixels are occluded you won't have to consider light contributions for them (since they're not visible from the camera anyway). Third, drawing 1k+ spheres of low vert count (e.g. 20-50 vertices) isn't that expensive in the grand scheme of things, if you use instancing.
There are also other techniques which avoid drawing light volumes, e.g. tiled deferred and clustered deferred, but that's more advanced stuff.
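The low-vert-count sphere Simon describes can be generated in a few lines. A sketch (the ring/segment counts are illustrative, and the instancing call in the comment assumes per-light data is supplied as instanced attributes; this is not picogl-specific code):

```javascript
// Sketch: generate a coarse unit UV sphere to use as a point-light volume.
// With instancing, this one mesh is uploaded once and drawn for every light
// via gl.drawElementsInstanced(gl.TRIANGLES, indices.length,
// gl.UNSIGNED_SHORT, 0, lightCount), with per-light position/radius
// coming from instanced attributes.
function createSphere(rings, segments) {
  const positions = [];
  const indices = [];
  for (let r = 0; r <= rings; r++) {
    const phi = (r / rings) * Math.PI;             // 0 (north pole) .. PI (south pole)
    for (let s = 0; s <= segments; s++) {
      const theta = (s / segments) * 2 * Math.PI;
      positions.push(
        Math.sin(phi) * Math.cos(theta),
        Math.cos(phi),
        Math.sin(phi) * Math.sin(theta)
      );
    }
  }
  for (let r = 0; r < rings; r++) {
    for (let s = 0; s < segments; s++) {
      const a = r * (segments + 1) + s;            // quad corner on this ring
      const b = a + segments + 1;                  // corner on the next ring
      indices.push(a, b, a + 1, a + 1, b, b + 1);  // two triangles per quad
    }
  }
  return {
    positions: new Float32Array(positions),
    indices: new Uint16Array(indices),
  };
}
```

`createSphere(4, 6)` gives a 35-vertex mesh, right in the 20-50 vertex range mentioned above.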
Will
@WHSnyder
Thanks, this is helpful; I really need more advice from live human beings. So on the first point, is there a spec that details which devices/browsers don't fully support glPoints? I can't find anything definite right now, but maybe it's not even worth investigating... On the second point, that would explain why I can't seem to get depth testing working at all... On the third, does 'isn't that expensive' apply to the slow graphics card in a 2015 MacBook Pro, or by modern standards?
Also, from the names of the last two methods you mention, I assume they use some kind of divide-and-conquer strategy by splitting the frame up into areas. How does this optimize things? Won't the frame always get done only when the slowest tile finishes?
Tarek Sherif
@tsherif
You can get a lot of great info about WebGL feature support from http://webglstats.com, e.g. http://webglstats.com/webgl2/parameter/ALIASED_POINT_SIZE_RANGE
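Alongside the stats site, the supported range can also be checked at runtime. A small sketch (the helper names are made up; the `gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE)` call itself is standard WebGL and returns a `Float32Array` of `[min, max]`):

```javascript
// Sketch: query the guaranteed gl.POINTS size range at runtime.
// `gl` is any WebGL/WebGL2 context (or, for testing, any object exposing
// a matching getParameter and the ALIASED_POINT_SIZE_RANGE enum, 0x846D).
function maxPointSize(gl) {
  const range = gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE); // [min, max]
  return range[1];
}

function supportsPointSize(gl, size) {
  return size <= maxPointSize(gl);
}
```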
Just landed v0.15.0. Change log is here: https://github.com/tsherif/picogl.js/blob/master/CHANGELOG.md
Made some changes to the build setup, so let me know if there are any issues.
Tarek Sherif
@tsherif
Also, I'm now including a source map with the bundle builds. Useful if you're loading via webpack or something similar.
Simon Moos
@Shimmen
Huh, sorry, not sure where the 2px thing came from... Anyway, you still have to consider the performance implications of using glPoints. The vertex/fragment pipeline is highly optimized and very fundamental, while glPoints is slightly more niche, so it's hard to know anything by just guessing. Really, what you should do is not trust me and instead try it out if you've got the time and energy :) I also own a 2015 MacBook Pro, and while it's not very fast for graphics in general, 1,000 × 20 = 20,000 vertices is still a small amount and it should be fine. But make sure to use instanced rendering for it, as I mentioned.
My point was that tiled/clustered doesn't render light volumes but instead figures out which lights affect which parts of the frame and then performs one or more smarter passes.
Finally, I'm not sure this chat is the best place to discuss all of this. I'm glad to help if I can, but this chat-room should probably be more about picogl.js :)
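The binning step behind the tiled approach mentioned above can be sketched in plain JavaScript. The point isn't that tiles finish in parallel; it's that each fragment later loops over only its own tile's short light list instead of all 1k+ lights. This is an illustrative CPU-side sketch under assumed inputs (lights already projected to screen-space circles), not a picogl.js feature:

```javascript
// Sketch of the binning step in tiled deferred shading: each light's
// screen-space bounding circle is assigned to every tile it may touch,
// so shading a fragment only considers its tile's light list.
function binLights(lights, screenWidth, screenHeight, tileSize = 16) {
  const tilesX = Math.ceil(screenWidth / tileSize);
  const tilesY = Math.ceil(screenHeight / tileSize);
  const tiles = Array.from({ length: tilesX * tilesY }, () => []);
  lights.forEach((light, index) => {
    // Clamp the light's pixel-space bounding box to the tile grid.
    const minX = Math.max(0, Math.floor((light.x - light.radius) / tileSize));
    const maxX = Math.min(tilesX - 1, Math.floor((light.x + light.radius) / tileSize));
    const minY = Math.max(0, Math.floor((light.y - light.radius) / tileSize));
    const maxY = Math.min(tilesY - 1, Math.floor((light.y + light.radius) / tileSize));
    for (let ty = minY; ty <= maxY; ty++) {
      for (let tx = minX; tx <= maxX; tx++) {
        tiles[ty * tilesX + tx].push(index); // record this light for the tile
      }
    }
  });
  return { tiles, tilesX, tilesY };
}
```

In a real implementation the per-tile lists are packed into a texture or buffer the fragment shader can read, but the data-reduction idea is the same.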
Will
@WHSnyder
Yea true on that ^, got kinda carried away haha. Mind if I PM you?
Simon Moos
@Shimmen
No, sure, go ahead!
Tarek Sherif
@tsherif
Honestly, if everyone else is ok with it, I'm totally cool with general 3D graphics/GPU conversations happening here.
It's kinda necessary to do anything with picogl, so we can consider it "about picogl" :)