Petr Kobalicek
@kobalicek
And then convert this to a single pixel value
Not saying it's impossible, but I really wonder what performance you would get by trying to calculate it exactly like this.
Bogdan
@xbngnx_twitter

[image: image.png]

In this case, with triangulation you need to fold opacity into the modified colors of the new triangles, resulting in a few blue triangles, a few red triangles, and a few pink triangles. Of course some triangles will be small, but proper antialiasing already requires supporting differentiation of small shapes (up to a 16*16 grid inside a pixel for 256 shades per color channel, with the common 1-byte-per-channel RGB encoding).
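Bogdan's parenthetical can be made concrete: a 16*16 sample grid inside one pixel yields 256 coverage levels, which maps exactly onto an 8-bit alpha value. A minimal sketch of that counting (illustration only, not Blend2D or Skia code; the vertical edge position is a made-up parameter):

```cpp
#include <cassert>

// Count how many samples of a 16x16 grid inside one pixel fall on the
// covered side of a vertical edge at x = edgeX (pixel spans x in [0, 1)).
// 16 * 16 = 256 samples, so the result maps directly onto the 256
// shades of an 8-bit coverage/alpha value.
int coverage16x16(double edgeX) {
    int covered = 0;
    for (int sy = 0; sy < 16; sy++) {
        for (int sx = 0; sx < 16; sx++) {
            double x = (sx + 0.5) / 16.0;  // sample center
            if (x < edgeX)
                covered++;
        }
    }
    return covered;  // 0..256
}
```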

Petr Kobalicek
@kobalicek

Well, I would definitely not spend my time even trying this, as I don't really see how it could be fast. There are trade-offs, and with Blend2D we basically do what others are doing, but we optimize it better. So for example if your stack is AGG or Qt, you would gain performance by switching to Blend2D without sacrificing quality.

But if you need something like you have described - composition at a sub-pixel level - then that's not something Blend2D can help with. You can still multisample and then downscale - I think this could still be faster than doing the triangulation and resolving the intersections and overlaps of everything in the scene, if it's complicated enough.

BTW do you have a link to any project that does the approach you have described? I would take a look to see how viable it is
Bogdan
@xbngnx_twitter
For proper blending you just need to resolve intersections between UI primitives. You can use triangulation or a scanline approach (not for a single path element like Skia does, but for the whole UI scene, by building an edge table for all UI primitives and only then shading pixels by scanning lines).
And resolving intersections is important not only for correct blending (and solving the conflation problem) but for efficient rasterization too. How does blend2d optimize overlapping? Does it suffer from overdrawing? When you have complex UI primitives which overlap, you can spend more time doing extra work (blitting pixels of shapes which will later be hidden by new opaque shapes) than by resolving the intersections between shapes geometrically and blitting only the visible pixels.
Bogdan
@xbngnx_twitter

BTW do you have a link to any project that does the approach you have described? I would take a look to see how viable it is

No, I'm just planning to write one by myself.
I think that fast raw scene rasterization is not the first goal of a rasterizer engine. I would want to have correct blending first, and only then do speed comparisons of rasterizer libraries. And when you have pixel caching and incremental propagation of data changes to pixels (resulting in less work, like rasterizing only a few new pixel rows when you scroll, or a small frame of pixels when you zoom out), then the raw rasterization speed doesn't matter as much.

Petr Kobalicek
@kobalicek

How does blend2d optimize overlapping? Does it suffer from overdrawing? When you have complex UI primitives which overlap, you can spend more time doing extra work (blitting pixels of shapes which will later be hidden by new opaque shapes) than by resolving the intersections between shapes geometrically and blitting only the visible pixels

It doesn't - it does composition, which means that if you render 100 rectangles on top of each other, it renders 100 rectangles (at the moment). I plan to do tile-based optimizations of solid fills in the future, but it's not relevant now, and maybe with a fast cache and banding it wouldn't be as beneficial as it may seem.

Basically Blend2D works this way, those two operations are identical:

Create rendering context:
fillRect(A);
fillRect(B);
Destroy rendering context.

vs

Create rendering context:
fillRect(A);
Destroy rendering context.
Create rendering context:
fillRect(B);
Destroy rendering context.

The second will be slower, but that's it - the output will be identical, because it's two render commands done separately.
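Petr's point can be modeled in a few lines: because each fill is an independent render command composited onto the same target, splitting the commands across two context lifetimes changes only the overhead, never the pixels. A toy sketch (the canvas, fillRect, and color values here are all made up for illustration, not the Blend2D API):

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Canvas is a 4-pixel strip; "rects" are index ranges.
using Canvas = std::array<uint32_t, 4>;

void fillRect(Canvas& c, int begin, int end, uint32_t color) {
    for (int i = begin; i < end; i++)
        c[i] = color;  // opaque SRC_OVER degenerates to overwrite
}

Canvas renderOnePass() {
    Canvas c{};                      // one context, two commands
    fillRect(c, 0, 3, 0xFF0000FFu);  // A
    fillRect(c, 1, 4, 0xFFFF0000u);  // B
    return c;
}

Canvas renderTwoPasses() {
    Canvas c{};                      // two consecutive contexts, same target
    fillRect(c, 0, 3, 0xFF0000FFu);  // context 1: A
    fillRect(c, 1, 4, 0xFFFF0000u);  // context 2: B
    return c;
}
```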

Niki Spahiev
@niki-sp
Petr Kobalicek
@kobalicek
Not now, but I have been thinking about compound rasterization for quite some time
Not sure it would be exactly like that in AGG, but I have some ideas
William Adams
@Wiladams
@kobalicek good on the non-JIT pathway. I've always wondered how performant that might turn out to be. If in most cases there are only a handful of JIT cases, I've wondered if those could just be captured at compile time, and be as performant as JIT.
Petr Kobalicek
@kobalicek
It's possible to write all the pipelines in C++ - in the end, this is what the reference pipeline does. The problem is the number of combinations that would be required for a one-stage pipeline - in a non-JIT pipeline it's just better to sacrifice a bit and make it two-stage for some operations.
Robert M. Münch
@Robert-M-Muench
Does B2D have some image format conversion functions? Like when I have a paletted GIF, can I convert it with B2D to RGBA format?
Petr Kobalicek
@kobalicek
yeah BLPixelConverter
but it's very low-level
it's used by image codecs
Robert M. Münch
@Robert-M-Muench
Yes, I’m loading a GIF and want to convert the data.
Am I right that BLImage::createFromData doesn’t copy the data? Is there a way that it copies the data?
Petr Kobalicek
@kobalicek
The simplest is to create a BLImage, get its data, like:
  BLImageData imageData;
  BLResult result = image.makeMutable(&imageData);
And then change that data
yeah createFromData() doesn't copy the data
it uses it
Robert M. Münch
@Robert-M-Muench
William Adams
@Wiladams
Now the craziness begins
Now you can play with 'topmost' to keep your tree on top.
It really inspires possibilities, doesn't it?
Robert M. Münch
@Robert-M-Muench
Yes, it’s really nice. The tree is already animated… :-)
William Adams
@Wiladams
Reminds me of cursor animation on a sun workstation, circa 1987...
Robert M. Münch
@Robert-M-Muench
Yep :-)
William Adams
@Wiladams
[image: lines.png]
Just a little something. It looks better as a full image.
Basically, I've setup a DPI aware context, and my own user space units.
    gAppSurface->setPpiUnits(systemPpi, 96);
In this case, I take whatever the system pixel density is, and the desired user space units, and do some math.
The 'do some math' looks like this
    inline void setPpiUnits(double ppi, double units)
    {
        fDimensionScale = ppi / units;
        fCtx.scale(fDimensionScale);
        fCtx.userToMeta();
    }
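The ratio is easy to sanity-check: with a hypothetical 144 ppi display and 96 user units per inch, every user-space coordinate gets scaled by 1.5. A one-line sketch of the same math:

```cpp
#include <cassert>

// The scale setPpiUnits() applies: device pixels per inch divided by
// the desired number of user-space units per inch.
double dimensionScale(double ppi, double units) {
    return ppi / units;
}
```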
William Adams
@Wiladams
The trick on Windows, and anywhere else, is figuring out the actual pixel density of the screen. That is, how many pixels are actually per inch. Not "logical" inch, but actual physical inch. So, the following comes into the picture.
    auto dhdc = CreateDC(TEXT("DISPLAY"), NULL, NULL, NULL);

    auto screenWidth = ::GetDeviceCaps(dhdc, HORZSIZE) / 25.4;   // mm -> inches
    auto screenHeight = ::GetDeviceCaps(dhdc, VERTSIZE) / 25.4;  // mm -> inches
    auto pixelWidth = ::GetDeviceCaps(dhdc, HORZRES);            // width in pixels
    auto pixelHeight = ::GetDeviceCaps(dhdc, VERTRES);           // height in pixels
    double screenHPpi = (double)pixelWidth / screenWidth;
    double screenVPpi = (double)pixelHeight / screenHeight;
    systemPpi = (unsigned int)screenVPpi;
    DeleteDC(dhdc);
The HORZSIZE of devicecaps is the number of millimeters across the device, and HORZRES is the horizontal resolution in pixels. Take the pixel count, divide it by the size in inches (millimeters / 25.4), and you've got the real physical pixels per inch.
Use that as the scale for userToMeta(), and you've got a context setup for your user units.
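The arithmetic above checks out with round numbers: a hypothetical display reported as 508 mm wide (exactly 20 inches) at 1920 pixels works out to 96 physical ppi. A sketch of the computation:

```cpp
#include <cassert>
#include <cmath>

// Physical pixels per inch, from what the device reports:
// size in millimeters and resolution in pixels.
double physicalPpi(double sizeMm, double sizePx) {
    double sizeInches = sizeMm / 25.4;
    return sizePx / sizeInches;
}
```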
In the graphics themselves, the black lines on yellow, I'm doing the following
    background(255, 255, 0);
    stroke(0);

    // print lines of different lengths
    // increasing width as we go
    for (size_t i = 1; i < 192; i++) {
        auto w = map(i, 1, 192, 0.25, 2);
        strokeWeight(w);

        line(0, i*2, i, i*2);
    }
So, I go from a thickness of 0.25 to 2 (in user units), while the line length goes from 1 user unit to 192 (2 inches).
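The map() call above looks like the Processing-style linear remap; a hypothetical equivalent, assuming that semantics:

```cpp
#include <cassert>

// Processing-style map(): remap value from [lo1, hi1] onto [lo2, hi2].
double map(double value, double lo1, double hi1, double lo2, double hi2) {
    return lo2 + (value - lo1) * (hi2 - lo2) / (hi1 - lo1);
}
```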
William Adams
@Wiladams
I find it interesting that the lines draw fairly well as the thickness goes from very thin to several times as thick. This is a good thing - it doesn't look all crappy at small scale.
So, nothing to see here really. I was just testing out my user space units code, and thought I'd share one of the calibration experiments. I did in fact use my rule to measure lines on my screen, and 2 inches IS two inches if you do it in this way. That's really nice because it means I can do my Postscript code and display it on the screen, and see the actual size, before exporting to laser cutter, or paper printer. Laser cutter likes 600 dpi, and laser printer likes 1200 dpi. So, I just change the one line to set the pixel density, depending on my output, and things are good and exact.
Robert M. Münch
@Robert-M-Muench
+1
小鱼干
@firefishu
Hi. I found some fonts that install correctly on Windows, but have some errors and warnings on Mac. The fonts load successfully with blend2d, but the text is not rendered. The debugger shows that gb.replacementData() is null.
Also, does blend2d support faux bold and faux italic text?
@kobalicek