from PIL import Image, ImageDraw, ImageFont
# Transparent canvas sized to hold a single emoji glyph
im = Image.new("RGBA", (130, 120))
d = ImageDraw.Draw(im)
# NotoColorEmoji is a bitmap color font; 109 is the size it loads at
font = ImageFont.truetype("NotoColorEmoji.ttf", 109)
d.text((0, 0), "🐶", "#f00", font)
im.save("out.png")
from urllib.request import urlopen
from PIL import Image
url = "https://raw.githubusercontent.com/python-pillow/pillow-logo/main/pillow-logo-248x250.png"
# urlopen() returns a file-like object that Image.open() can read directly
img = Image.open(urlopen(url))
Hi everyone. I'm trying to use Pillow to load some image data which is 16 bits per pixel RGB. Each pixel is 5 bits for the red channel, 5 bits for the green channel, and 6 bits for the blue channel (16 % 3 != 0).
I'm currently using the raw decoder as follows:
Image.frombytes(
    "RGB",
    (width, height),
    data,
    "raw",
    "RGB;16",
    0,
    0
)
Yes, I can see some support for 565 - https://github.com/python-pillow/Pillow/blob/75913950c204aec7e71bc3ca8df3232793301911/src/libImaging/Unpack.c#L720-L748 - but not 556.
By the "bits" decoder, I'm imaging you meant the "bit" decoder. I don't think it will do what you are after - https://github.com/python-pillow/Pillow/blob/main/src/libImaging/BitDecode.c
import struct
from PIL import Image
# black, red, green, blue, white
x = [0b0, 0b11111, 0b1111100000, 0b1111110000000000, 0b1111111111111111]
width = len(x)
height = 1
data = b"".join(struct.pack("H", y) for y in x)
# b'\x00\x00\x1f\x00\xe0\x03\x00\xfc\xff\xff'
print(data)
# Convert 556 data to RGB
pixels = []
for i in range(0, len(data), 2):
    z = struct.unpack("H", data[i:i+2])[0]
    r = int((z & 31) * 255 / 31)
    g = int((z >> 5 & 31) * 255 / 31)
    b = int((z >> 10 & 63) * 255 / 63)
    pixels.append((r, g, b))
im = Image.new("RGB", (width, height))
im.putdata(pixels)
# black, red, green, blue, white
# [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
print(list(im.getpixel((x, 0)) for x in range(5)))
from PIL import Image
# create a list of 4096 24-bit integers
colors = []
while len(colors) < 4096:
    colors.append(0xff0000)  # red
    colors.append(0x00ff00)  # green
    colors.append(0x0000ff)  # blue
    colors.append(0xffffff)  # white
# split the 24-bit integers into 8-bit pixels
pixels = []
for color in colors:
    r = (color >> 16) & 0xff
    g = (color >> 8) & 0xff
    b = color & 0xff
    pixels.append((r, g, b))
im = Image.new("RGB", (64, 64))
im.putdata(pixels)
im.save("out.png")
I want to add a test case for 16-bit RGBA PNGs. The problem is that Pillow currently does not support this format, so the test will fail, which is arguably the correct thing to do, but it also feels a bit rude to add a failing test case. Alternatively, I could test only the 8 high bits, but once 16-bit support is introduced, that test will start failing. Something like this:
import io
import base64
from PIL import Image
image = Image.open(io.BytesIO(base64.b64decode("""
iVBORw0KGgoAAAANSUhEUgAAAAcAAAACEAYAAADEDxojAAAAQ0lEQVQI10WMWw0AIBRCz91MYBcD
WMI89LGCFcyEH1cnG48PABvA/gQ7PVOCC0mS7PLLETakQq2tjQFzrrX3u9Cd9zg0Ai9H03VKQwAA
AABJRU5ErkJggg==""".strip())))
expected_image_data = [
    # First row
    (0xffff, 0x0000, 0x0000, 0xffff),  # R
    (0x0000, 0xffff, 0x0000, 0xffff),  # G
    (0x0000, 0x0000, 0xffff, 0xffff),  # B
    (0x0000, 0x0000, 0x0000, 0xffff),  # Black
    (0xffff, 0xffff, 0xffff, 0xffff),  # White
    (0x0000, 0x0000, 0x0000, 0x0000),  # Transparent
    (0x8080, 0x8080, 0x8080, 0xffff),  # Gray
    # Second row
    (0xffff, 0xffff, 0x0000, 0xffff),  # Yellow
    (0xffff, 0x0000, 0xffff, 0xffff),  # Fuchsia
    (0x0000, 0xffff, 0xffff, 0xffff),  # Cyan
    (0x1212, 0x3434, 0x5656, 0xffff),  # Darkish blue
    (0xaaaa, 0xbbbb, 0xcccc, 0xffff),  # Grayish blue
    (0xffff, 0xffff, 0xffff, 0x8000),  # White, 50 % transparency
    (0xffff, 0xffff, 0xffff, 0x4000),  # White, 25 % transparency
]
# TODO This should be removed once 16 bit support is introduced in the future.
# Truncate to 8 bits as long as Pillow does not support 16 bit PNGs
expected_image_data = [tuple(x >> 8 for x in px) for px in expected_image_data]
assert image.mode == "RGBA"
assert image.size == (7, 2)
assert list(image.getdata()) == expected_image_data
Any thoughts on the best course of action?
If you want to just note it for the future, you could consider leaving a comment on python-pillow/Pillow#1888.
If you would like to add a failing test that doesn't need to call attention to itself until we add support for it, you could mark your test with xfail - https://docs.pytest.org/en/7.1.x/how-to/skipping.html#xfail-mark-test-functions-as-expected-to-fail
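For example, something along these lines; the fixture name here is hypothetical, and the point is just the marker:
import pytest
from PIL import Image

@pytest.mark.xfail(reason="Pillow does not yet read 16-bit RGBA PNGs at full depth")
def test_rgba_16bit_png():
    # Hypothetical fixture; currently decoded at 8 bits, so the assertion fails as expected
    with Image.open("Tests/images/rgba_16bit.png") as im:
        assert im.mode == "RGBA"
        assert im.getpixel((0, 0)) == (0xffff, 0x0000, 0x0000, 0xffff)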
Photoshop.
I wouldn't mind being able to print directly from Python if I could get the colors right, but that seems difficult. I am printing to an inkjet printer that has a GUI configuration menu that is invoked by Windows when you try to print something. I tried to print using WINSPOOL, but I found that the configuration for paper sizes (even normal things like 4''x6'') and paper types (10 kinds of glossy, premium matte, bright paper) is not standardized. It seems that an Epson/HP/Brother printer has a proprietary settings dialog that passes parameters to a proprietary driver, and it is not straightforward to (say) capture the data from the settings dialog once and pass it to the driver.
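For reference, the bare-bones route I was attempting looks roughly like this; it's a sketch that assumes pywin32 is installed, uses a placeholder file name, and prints with whatever defaults the driver currently has, which is exactly the settings problem above:
import win32print
import win32ui
from PIL import Image, ImageWin

printer = win32print.GetDefaultPrinter()
hdc = win32ui.CreateDC()
hdc.CreatePrinterDC(printer)

hdc.StartDoc("photo")
hdc.StartPage()
img = Image.open("photo.png")  # placeholder file name
# A real script would query the page size/DPI instead of using the pixel size as-is
ImageWin.Dib(img).draw(hdc.GetHandleOutput(), (0, 0, img.width, img.height))
hdc.EndPage()
hdc.EndDoc()
hdc.DeleteDC()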
I am not just printing though; everything I print has a QR code on the back side that points to a web site where you can get more information about the object, for instance
HTTPS://GEN5.INFO/$/6-*XERJ0LO3S1GNQP/
is an image that I think is affected by the same color distortion but has such strong perspective that it is 100% successful despite it. Long term I want to export image files that are mipmappable and display them with WebGL with something that is more of a videogame interface (e.g. flip an image over by peeling it like in a Paper Mario game).
^----- I found an answer to my problem. It turns out I have a Dell U2711 monitor which has a wide color gamut. Applications like Photoshop, Windows Photo Viewer and the Firefox web browser apply a color profile that shows up in the screenshot... The screenshot is in the monitor's color space.
Tkinter is ancient and doesn't support color management. When I took screenshots on my laptop monitor, the colors matched in Tkinter, Photoshop, and the screenshot.
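If anyone else runs into this, a rough sketch of converting such a screenshot back to sRGB with Pillow's ImageCms (assuming the screenshot really is in the monitor's profile and that get_display_profile() can locate it; the file names are placeholders):
from PIL import Image, ImageCms

shot = Image.open("screenshot.png")  # pixels are in the monitor's color space
monitor = ImageCms.get_display_profile()  # profile of the primary display, where available
srgb = ImageCms.createProfile("sRGB")
fixed = ImageCms.profileToProfile(shot, monitor, srgb)
fixed.save("screenshot_srgb.png")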
I'm still a little mystified that the anaglyph looks better when it is displayed on a wide-gamut monitor without the color profile applied. I thought it was fishy that the math for the anaglyph fusion was being done in sRGB, but it seemed to work OK so I didn't question it seriously. I rewrote it to transform sRGB to linear light, do the math in linear light with float32s, then go back to sRGB. The resulting printout is better than a printout done with sRGB math, but I still think it looks better on the wide-gamut monitor without the wide-gamut color profile applied. I guess I could apply a transformation like that to the image manually and see if the result is general. I'm pretty convinced that particular image is marginal to begin with, so if anything I'm going to find a better image and maybe tweak the transform.
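For what it's worth, the round trip I mean is roughly this (a sketch using NumPy and the standard sRGB transfer function; the fusion step is only a placeholder for the actual anaglyph math, and the file names are made up):
import numpy as np
from PIL import Image

def srgb_to_linear(c):
    # c is float32 in [0, 1]
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

left = np.asarray(Image.open("left.png").convert("RGB"), np.float32) / 255
right = np.asarray(Image.open("right.png").convert("RGB"), np.float32) / 255

lin_left, lin_right = srgb_to_linear(left), srgb_to_linear(right)
# Placeholder fusion: red channel from the left eye, green/blue from the right
fused = np.dstack([lin_left[..., 0], lin_right[..., 1], lin_right[..., 2]])

out = (linear_to_srgb(fused).clip(0, 1) * 255).round().astype(np.uint8)
Image.fromarray(out).save("anaglyph.png")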
At the moment, no.
But it is a simple setting, so I've created those for you. Let us know if you would like Pillow to offer these normally.