    Gregg Irwin
    @greggirwin
    It's always a balancing act between potentially useful things and commonly useful things, weighed against adding too much stuff and making everything harder to use correctly. Include enough rich functionality that you don't need a lot of external bits, as with JS, but not so much that the built-in features are overwhelming and confusing.
    Jose Luis
    @planetsizecpu

    Hi folks, in the last few days I found that the Red console crashes if break is evaluated in a foreach-face loop, as in:

    l: layout [a: area 100x100 red b: box 100x100 green c: field 100x100 blue]
    foreach-face l [if face/size/x = 100 [break]]

    Does it deserve a ticket?

    Gregg Irwin
    @greggirwin
    @planetsizecpu duped here on Win10. Yes, please file a ticket.
    Jose Luis
    @planetsizecpu
    @greggirwin Ok thx!
    Semseddin Moldibi
    @endo64
    I remember I already provided a solution to this issue (using throw), but Doc said we need proper function attributes.
    hiiamboris
    @hiiamboris
    @dockimbel I believe red/red#3805 deserves some more of your attention :)
    corona-nova
    @corona-nova

    Red does not take into account sRGB's nonlinearity @ libred-and-macros

    Should I file an issue? It affects color gradients, anti-aliasing, and transparency.

    It basically means that (1.0 / 2) ≠ 0.5 in sRGB, so math that doesn't compensate for that doesn't work as intended.
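    For anyone wanting to verify the claim numerically, here is a minimal Python sketch of the sRGB transfer function (per IEC 61966-2-1), showing why a gamma-space average of black and white is not a perceptual half-blend:

    ```python
    # sRGB's encoding is nonlinear: a stored value of 0.5 does not mean half
    # the light. Averaging pixels directly in sRGB space therefore darkens
    # the result compared to averaging in linear-light space.

    def srgb_to_linear(c: float) -> float:
        """Decode an sRGB component (0..1) to linear light."""
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c: float) -> float:
        """Encode a linear-light component (0..1) back to sRGB."""
        return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    # Blend pure white (1.0) and pure black (0.0) halfway:
    naive = (1.0 + 0.0) / 2                      # gamma-space math
    correct = linear_to_srgb((srgb_to_linear(1.0) + srgb_to_linear(0.0)) / 2)

    print(round(naive, 3))    # 0.5
    print(round(correct, 3))  # 0.735 -- visibly lighter than the naive blend
    ```

    Note how converting back to sRGB at the end (the "inverse on the other side" of the operation) keeps the endpoints 0.0 and 1.0 unchanged; only the intermediate values shift.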

    corona-nova
    @corona-nova
    [image: complex-pen.gif.1.png]
    above: math done in gamma-space (not compensating for gamma)
    below: mockup interpreting as linear RGB and changing it to sRGB
    [image: 0001.png]
    gamma-corrected (done in linear space)
    notice that lines at diagonal points are darker if it does not compensate for this
    [image: complex-pen.gif.4.png]
    corona-nova
    @corona-nova

    Color gradients look bad if this isn't dealt with. Is it easy to implement? Is the slight slowdown too much? It affects aesthetics; should this [option] be the default?

    It also affects image up/down-scaling and text rendering.

    [image: 0004.png]
    Color gradients look more natural, especially the yellows here.
    These are only mockups; when implemented, it should also apply the inverse on the other side of whatever math operation, to keep the color endpoints the same.
    Gregg Irwin
    @greggirwin
    @qtxie :point_up:
    Maciej Łoziński
    @loziniak
    The copy/part built-in docs state that a pair! argument is possible, but I get an error when using pair!:
    >> copy/part "abcdef" 2x4
    *** Script Error: invalid /part argument: 2x4
    *** Where: copy
    *** Stack:
    Vladimir Vasilyev
    @9214
    @loziniak this applies to image! values only.
    Maciej Łoziński
    @loziniak
    Not a word about it in documentation...
    Gregg Irwin
    @greggirwin

    @gltewalt :point_up:. Though, as will be the case with many things, because documentation is hard, especially as things change, people need to apply some common sense when things don't work as they expect. This is a general rule we need to encourage, for details like this, while bodies of knowledge develop over time.

    In the grand scheme, the question is whether we can and should lock everything down in these early releases. If we leave things open a bit, as we have to with doc strings, it gives us room to change things. That is, if it isn't written down as a final behavior for the language somewhere, it could change. Being up front about that is no different than for any other language.

    Qingtian
    @qtxie
    @corona-nova You mean the color of the gradient is not correct? Does the web browser draw it correctly? Red just uses the OS API to draw gradients; I don't think it's easy to change. You can open an issue on GitHub with code to reproduce it.
    Maciej Łoziński
    @loziniak
    @greggirwin It reminds me a little of the Windows programming environment, with lots of undocumented behaviors which, when used by developers, lock the language in some way anyway. Not to mention the problems it brings to projects like Wine, where people struggle to mimic these behaviors by trial and error to make software work on their platform.
    Changing doc strings along with the code is not that hard, I think. And even when some behavior is missing from the docs, that situation is no worse than being confused by undocumented quirks. What changes is just the source of confusion :-)
    Vladimir Vasilyev
    @9214

    @loziniak Red's design is based on highly polymorphic functions, in which the number of possible argument combinations is fairly large (not to mention that concrete datatypes are abstracted away by typesets); so documenting each of them case-by-case in docstrings is a flawed approach, which narrows our design space and pollutes the function's spec. I second @greggirwin's response WRT common sense - it's rather easy to apply, since the language is built on top of orthogonal features which work consistently across the board.

    You can help @gltewalt by extending the image! documentation with the missing details, although I'm not sure it's the best place to keep such info.

    Maciej Łoziński
    @loziniak
    @9214 thanks, I thought about it. The fact that image! is a series and can be manipulated by series functions using pair!s is worth documenting. It's not so obvious, and can generate questions like "is 2x1 before 1x2 in an image series?". I thought pair! could be used to denote the start and end indexes of a copy/part done on a string! or block!; perhaps I'm not the only one.
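    For intuition, a small sketch of the row-major layout this implies, assuming 1-based pair indexing as in Red (the `pair_to_index` helper is invented here for illustration; it is not a Red or Python built-in):

    ```python
    def pair_to_index(x: int, y: int, width: int) -> int:
        """Map a 1-based XxY pair to a 1-based linear series index, row-major."""
        return (y - 1) * width + x

    width = 10  # hypothetical image width in pixels

    print(pair_to_index(2, 1, width))  # 2  -- second pixel of the first row
    print(pair_to_index(1, 2, width))  # 11 -- first pixel of the second row
    # So 2x1 does come before 1x2 when the image is walked as a series.
    ```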
    Gregg Irwin
    @greggirwin
    And this is where an example is worth a thousand words.
    Nulldata
    @nulldatamap
    Looks like there's a buffer overflow bug in at least decompress/zlib:
    >> decompress/zlib #{78DA636060F8CFC4C0CA08C40C8C0CFC0C409A8551351044B302311323C35A109B1988D918557C1918D597310000597203BB} 52 ; Correct size
    == #{
    000000FF020005010200050001000F0002000504012551000200050502000502
    0100AD00020005030200050601244D000127A600
    }
    >> decompress/zlib #{78DA636060F8CFC4C0CA08C40C8C0CFC0C409A8551351044B302311323C35A109B1988D918557C1918D597310000597203BB} 128 ; Bigger size
    == #{
    000000FF020005010200050001000F0002000504012551000200050502000502
    0100AD00020005030200050601244D000127A600
    }
    >> decompress/zlib #{78DA636060F8CFC4C0CA08C40C8C0CFC0C409A8551351044B302311323C35A109B1988D918557C1918D597310000597203BB} 4 ; Too small size
    == #{
    000000FF020005010200050001000F00010000200000000010000000ACDFD801
    BCDFD801237B0A30303030303046463032303030
    } ; Notice initial data is correct, but the rest is garbage
    With red-09aug19-8a9920e6 on MacOS X
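    For comparison, CPython's zlib module treats the size argument as only an initial buffer hint and grows the buffer as needed, so a wrong guess can neither overflow nor truncate the output. A small sketch of that behavior:

    ```python
    import zlib

    data = b"hello world " * 100          # 1200 bytes uncompressed
    compressed = zlib.compress(data)

    # bufsize is an initial allocation hint; the buffer grows automatically.
    out_small = zlib.decompress(compressed, bufsize=4)      # far too small
    out_big = zlib.decompress(compressed, bufsize=65536)    # far too big

    assert out_small == out_big == data   # both guesses yield correct output
    print(len(out_small))                 # 1200
    ```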
    Nulldata
    @nulldatamap
    Same behavior on master (ccfff52)
    Qingtian
    @qtxie
    @nulldatamap IIRC you need to provide the exact size of the uncompressed data when using the /zlib refinement.
    Vladimir Vasilyev
    @9214
    @qtxie which is at times not trivial at all, and requires passing the size together with the compressed binary! data. I haven't found any way to approximate or infer the uncompressed data size from the input itself (only the compression level, in the zlib magic header).
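    A quick way to see the difference between the two wrappers, sketched with Python's standard library: the 2-byte zlib header carries only window-size and compression-level hints, while gzip appends the uncompressed size (mod 2^32) as its final four little-endian bytes, per RFC 1952:

    ```python
    import gzip
    import struct
    import zlib

    data = b"some payload " * 50          # 650 bytes uncompressed

    z = zlib.compress(data)               # zlib wrapper: 2-byte header, no size
    g = gzip.compress(data)               # gzip wrapper: header + ISIZE trailer

    # The zlib header (typically 78 9c) encodes method/window and a level hint,
    # nothing about the uncompressed length.
    print(z[:2].hex())

    # The gzip trailer ends with the uncompressed length mod 2**32, little-endian.
    (isize,) = struct.unpack("<I", g[-4:])
    print(isize)  # 650
    ```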
    Gregg Irwin
    @greggirwin
    If it has to be exact, and can cause a buffer overflow, that's a problem. The wrapper should either ensure no buffer overflow occurs, based on dst size, or this feature should not be available for general use.
    Vladimir Vasilyev
    @9214
    @greggirwin agreed, current interface goes against the grain of Red's user-friendliness.
    Gregg Irwin
    @greggirwin
    @9214 if we trigger an error rather than overflowing, it should work with your multishot suggestion from red/welcome.
    From what @nulldatamap says they're doing, it sounds like this feature is genuinely needed in some cases. And having a PDF parser will be very cool. We might even be able to sponsor some of that work.
    Vladimir Vasilyev
    @9214
    @greggirwin yes, but that's not an optimal solution. Perhaps we can pre-allocate a sufficiently large (but how much is enough, exactly?) buffer for decompression, but then again that's a memory penalty.
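    One alternative to guessing a destination size up front (sketched here with CPython's zlib, which wraps the same underlying library) is streaming decompression: feed the input in chunks and let the output grow incrementally, so no size hint is needed at all:

    ```python
    import zlib

    def decompress_stream(compressed: bytes, chunk: int = 16384) -> bytes:
        """Decompress zlib data of unknown uncompressed size, chunk by chunk."""
        d = zlib.decompressobj()
        out = bytearray()
        for i in range(0, len(compressed), chunk):
            out += d.decompress(compressed[i:i + chunk])
        out += d.flush()                    # drain any remaining buffered output
        return bytes(out)

    data = b"unknown size ahead of time " * 1000
    assert decompress_stream(zlib.compress(data)) == data
    ```

    The memory cost then tracks the actual uncompressed size rather than a worst-case pre-allocation.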
    Gregg Irwin
    @greggirwin
    The trouble with not knowing is... ;^\
    That is, if there are unknowns, it can't be optimal. ;^)
    Vladimir Vasilyev
    @9214
    OTOH, I'm pretty sure PDF should embed metainfo about compressed stream size, or chunk data into fixed-size blocks.
    Gregg Irwin
    @greggirwin
    You would think. Because they want it to be optimal too.
    Qingtian
    @qtxie
    I'm not familiar with the code, but it should not be difficult to add some checking code and throw an error. It's not intended to be used alone; that is, the compressed data usually comes from some well-defined format - PNG or PDF, for example. In that case, you know the size of the uncompressed data.
    Gregg Irwin
    @greggirwin
    @nulldatamap is the uncompressed size stored in the PDF?
    Nulldata
    @nulldatamap
    @greggirwin It is not; it's not mandated by the spec, and there isn't a field defined for specifying it (PDF 1.7, 7.4.4.3: https://www.adobe.com/content/dam/acom/en/devnet/pdf/pdfs/PDF32000_2008.pdf#G6.1709041), so no
    Nulldata
    @nulldatamap
    Looking at PDFs in the wild I can't find any uncompressed size hints either
    Nulldata
    @nulldatamap
    Here's a compressed object stream from a PDF generated by pdflatex for example:
    5 0 obj
    <<
    /Type /ObjStm
    /N 7
    /First 40
    /Length 489       
    /Filter /FlateDecode
    >>
    stream
    ... 489 bytes of compressed data ...
    endstream
    endobj
    Nulldata
    @nulldatamap
    /Length is the compressed length
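    Which means a reader only needs /Length to slice the stream out of the file, and then has to inflate it with no uncompressed-size hint at all. A minimal Python sketch over a synthetic object (the object bytes and content stream are invented for illustration; real PDF parsing is more involved):

    ```python
    import re
    import zlib

    payload = b"BT /F1 12 Tf (Hello) Tj ET " * 20   # stand-in page content
    deflated = zlib.compress(payload)

    # A minimal stand-in for a PDF stream object, as emitted by e.g. pdflatex.
    obj = (b"<< /Length " + str(len(deflated)).encode()
           + b" /Filter /FlateDecode >>\nstream\n" + deflated + b"\nendstream")

    length = int(re.search(rb"/Length (\d+)", obj).group(1))
    start = obj.index(b"stream\n") + len(b"stream\n")
    stream = obj[start:start + length]               # /Length = compressed size

    assert zlib.decompress(stream) == payload        # no uncompressed-size hint
    ```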
    Qingtian
    @qtxie
    @nulldatamap Do you know why the PDF generator uses the zlib format instead of the gzip format, which contains the data's original size? They both use the deflate algorithm; there's just more meta info in the gzip format.
    Nulldata
    @nulldatamap
    @qtxie I don't know why the spec doesn't use gzip, but it only supplies the FlateDecode filter, which is zlib/deflate, with no gzip alternative