Interesting. So the examples are all too long?
That's weird, but maybe not too bad. What would happen if we were to truncate?
This may have previously worked incidentally.
while bytes_read < compressed_length && uncompressed.len() < max_uncompressed_length {
    let (len, bytes) = decoder.decode_bytes(&compressed[bytes_read..])?;
    bytes_read += len;
    uncompressed.extend_from_slice(bytes);
}
Since the lzw library decodes symbols word-for-word, it's not unlikely that we hit exactly the right length. The new library decodes fixed-size chunks.
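A minimal sketch of the difference being discussed, with the decoder replaced by a stand-in (the 8-byte chunk size and `decode_chunk` are illustrative, not the real lzw API): a fixed-size-chunk decoder can overshoot the expected output length, so the caller truncates afterwards.

```rust
// Stand-in for a chunked decoder: each call appends a fixed-size
// chunk of 8 decoded bytes, regardless of where the limit falls.
fn decode_chunk(out: &mut Vec<u8>) {
    out.extend_from_slice(&[0u8; 8]);
}

fn decode_with_limit(max_uncompressed_length: usize) -> Vec<u8> {
    let mut uncompressed = Vec::new();
    while uncompressed.len() < max_uncompressed_length {
        decode_chunk(&mut uncompressed);
    }
    // A word-for-word decoder could land exactly on the limit; a
    // fixed-size-chunk decoder may overshoot, so clip explicitly.
    uncompressed.truncate(max_uncompressed_length);
    uncompressed
}

fn main() {
    // 10 is not a multiple of the chunk size, so the loop overshoots
    // to 16 bytes and truncate() clips it back down.
    assert_eq!(decode_with_limit(10).len(), 10);
}
```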
buffer.byte_len?
According to the docs: https://developer.gnome.org/gdk-pixbuf/unstable/gdk-pixbuf-The-GdkPixbuf-Structure.html#image-data
Image data in a pixbuf is stored in memory in uncompressed, packed format. Rows in the image are stored top to bottom, and in each row pixels are stored from left to right. There may be padding at the end of a row. The "rowstride" value of a pixbuf, as returned by gdk_pixbuf_get_rowstride(), indicates the number of bytes between rows.
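The layout the quote describes boils down to one offset formula. A small sketch (the channel count and sample values are illustrative; this is not gdk-pixbuf's API):

```rust
/// Byte offset of pixel (x, y) in a packed, row-major pixbuf-style
/// buffer. `rowstride` is the number of bytes between the starts of
/// two consecutive rows and may include padding past the last pixel.
fn pixel_offset(x: usize, y: usize, rowstride: usize, n_channels: usize) -> usize {
    y * rowstride + x * n_channels
}

fn main() {
    // 2x2 RGB image (3 channels), each 6-byte row padded to 8 bytes.
    let (rowstride, n_channels) = (8, 3);
    assert_eq!(pixel_offset(0, 0, rowstride, n_channels), 0);
    assert_eq!(pixel_offset(1, 0, rowstride, n_channels), 3);
    // The second row starts at rowstride, not at width * n_channels.
    assert_eq!(pixel_offset(0, 1, rowstride, n_channels), 8);
}
```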
Hi, I got stuck a bit when trying to implement a combined transformation...
How do I get from a GenericImageView to an ImageBuffer (without any modification) without iterating over rows/cols and calling get_pixel / set_pixel? That seems really inefficient to me.
Also, is there a good way to combine multiple transforms efficiently without copying each time?
For context, I'm implementing EXIF rotation. This means that depending on the input parameter, we'll do nothing, rotate, flip or rotate + flip.
Rotating by 0 or 180 degrees, as well as flipping, can be done in place, so no copy would be necessary; the buffer could be modified directly.
Rotating by 90 and 270 degrees however always requires a copy, because the image may not be square.
Right now the APIs seem to take &self and return a new ImageBuffer. Do you have a good way to solve this? We could use something like fn rotate(&mut self) -> RotationResult with enum RotationResult { Modified, Copied(ImageBuffer) }, but that seems very ugly from an API point of view.
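To make the in-place/copy split concrete, here is a sketch on a raw packed buffer rather than the image crate's types (the channel count and row-major layout are assumptions; none of this is the crate's API):

```rust
/// Rotate a packed, row-major image with `n` channels per pixel by
/// 180 degrees, in place. Pixel i in the linear order maps to pixel
/// (pixels - 1 - i), so no second buffer is needed.
fn rotate180_in_place(buf: &mut [u8], n: usize) {
    assert_eq!(buf.len() % n, 0);
    let pixels = buf.len() / n;
    for i in 0..pixels / 2 {
        let j = pixels - 1 - i;
        for c in 0..n {
            buf.swap(i * n + c, j * n + c);
        }
    }
}

/// Rotate 90 degrees clockwise. The output is height x width, so for
/// non-square images this can never be done in place: a copy is forced.
fn rotate90(buf: &[u8], w: usize, h: usize, n: usize) -> Vec<u8> {
    let mut out = vec![0u8; buf.len()];
    for y in 0..h {
        for x in 0..w {
            // (x, y) maps to (h - 1 - y, x) in the rotated image,
            // whose row width is h pixels.
            let (nx, ny) = (h - 1 - y, x);
            let src = (y * w + x) * n;
            let dst = (ny * h + nx) * n;
            out[dst..dst + n].copy_from_slice(&buf[src..src + n]);
        }
    }
    out
}

fn main() {
    // 3 pixels, 2 channels each: 180 degrees reverses pixel order.
    let mut b = vec![1u8, 2, 3, 4, 5, 6];
    rotate180_in_place(&mut b, 2);
    assert_eq!(b, vec![5, 6, 3, 4, 1, 2]);
}
```

This is the asymmetry behind the RotationResult idea above: 0/180/flip can mutate, 90/270 must allocate.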
This probably doesn't belong in image itself anyways. The GenericImage-based interface is not at all optimized for this work. For example, as you've observed, it isn't efficient to iterate pixels individually, but that's pretty much all you can do with the generic interface, and that's what's done internally. Also, the layout would (I guess) be more efficient (cache-oblivious) if it were a recursive, space-filling curve of macroblocks of pixels instead of row-by-row or col-by-col, but that requires an extra conversion of layouts and is somewhat specialized. That's not done in image currently.
@HeroicKatora: thanks for the reply! sorry, I somehow missed it.
then I guess I'll do the implementation locally only for now. would be nice if multiple transformations could be combined in the future!
Have you seen the flat:: module? It's not possible to go between the buffers without copying, since each of them is an owning buffer with its own allocation strategy. But you can certainly create a view on a GTK buffer which implements GenericImage.
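A sketch of that "view" idea: a non-owning wrapper over a borrowed byte buffer (like one handed out by GTK) that exposes pixel access without copying. The trait here is a simplified stand-in for GenericImage, not the crate's actual definition:

```rust
/// Simplified stand-in for a pixel-access trait like GenericImage.
trait PixelView {
    fn dimensions(&self) -> (u32, u32);
    fn get_pixel(&self, x: u32, y: u32) -> [u8; 3];
}

/// Non-owning view over someone else's packed RGB buffer. Nothing is
/// copied; pixel access is computed from the rowstride on the fly.
struct RgbView<'a> {
    data: &'a [u8],
    width: u32,
    height: u32,
    rowstride: usize, // bytes between rows, may exceed width * 3
}

impl<'a> PixelView for RgbView<'a> {
    fn dimensions(&self) -> (u32, u32) {
        (self.width, self.height)
    }
    fn get_pixel(&self, x: u32, y: u32) -> [u8; 3] {
        let off = y as usize * self.rowstride + x as usize * 3;
        [self.data[off], self.data[off + 1], self.data[off + 2]]
    }
}

fn main() {
    // 2x2 RGB buffer, each 6-byte row padded to rowstride 8.
    let data = [1, 2, 3, 4, 5, 6, 0, 0, 7, 8, 9, 10, 11, 12, 0, 0];
    let v = RgbView { data: &data, width: 2, height: 2, rowstride: 8 };
    assert_eq!(v.get_pixel(1, 1), [10, 11, 12]);
}
```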
PixBuf::from_bytes((*img_buf).into(), ...)
hello! I tried loading a webp from memory and encountered an unsupported error: Err value: Unsupported(UnsupportedError { format: Exact(WebP), kind: GenericFeature("ALPH") })
I assume image does not support webp with alpha? Is this intentional? The supported file table says it only decodes the lossy/luma channel? Should I open an issue about this?
I'm not entirely sure I understand exactly what you intend your code to do. Since Rust does not have dependent typing (types that depend on values), I don't think that would work. You can't use color in the type context of a generic parameter. So, no, there is no way for color to influence the type of img on line 4. However, it might be feasible to have the last line produce a DynamicImage with the previous variant again, as in:
let img = DynamicImage::from(img).convert_into(color);
That's not implemented either but, theoretically, something like this could be possible in Rust.
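A sketch of the distinction: a runtime color value can't pick a generic type parameter, but it can pick an enum variant, which is exactly the DynamicImage pattern. All names below are simplified stand-ins for the crate's types, and the Rgb-to-Luma conversion uses a crude average rather than a proper luma weighting:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum ColorType { L8, Rgb8 }

/// Simplified stand-in for DynamicImage: one variant per pixel layout.
#[derive(Debug, PartialEq)]
enum Dynamic {
    Luma(Vec<u8>),     // one byte per pixel
    Rgb(Vec<[u8; 3]>), // three bytes per pixel
}

impl Dynamic {
    /// Runtime conversion: `color` selects the output *variant*, which
    /// is possible, unlike selecting an output *type parameter*.
    fn convert_into(self, color: ColorType) -> Dynamic {
        match (self, color) {
            (Dynamic::Luma(px), ColorType::Rgb8) => {
                Dynamic::Rgb(px.into_iter().map(|l| [l, l, l]).collect())
            }
            (Dynamic::Rgb(px), ColorType::L8) => {
                // Crude channel average, not a real luma transform.
                Dynamic::Luma(
                    px.into_iter()
                        .map(|[r, g, b]| ((r as u16 + g as u16 + b as u16) / 3) as u8)
                        .collect(),
                )
            }
            (img, _) => img, // already in the requested layout
        }
    }
}

fn main() {
    let img = Dynamic::Luma(vec![7]);
    let rgb = img.convert_into(ColorType::Rgb8);
    assert_eq!(rgb, Dynamic::Rgb(vec![[7, 7, 7]]));
}
```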
image to write it out as a jpeg