What is this
This is just a simple tech demo. I was curious to see how recognisable an RGB image would be if it were mapped to the screen in the same way LED TVs/monitors draw colors with RGB.
Since each fake pixel is 3*3px, this gives a grand usable screen space of 42*42px!
I went looking for a 42*42 picture to experiment with and found this one (which I selected because it was the only one I could find that wasn't a person).
42px version:
The RGB data was converted to a Lua table using a simple Python script: https://colab.research.google.com/drive/1DrwY74iLzNmt3H5f2VX2sG0UDQxsSFh-?usp=sharing
How this works
Each fake "pixel" is made up of a 3*3px bars of R G B .
To achieve this effect I created a custom color palette focused purely on RGB values. I did experiment with black and white for the extreme values, but I wasn't happy with the results. I added a flag that lets you switch modes between the image and the palette.
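A minimal sketch of the kind of remap involved (the exact color choices here are illustrative guesses, not the cart's): pal(c0,c1,1) remaps the screen palette, and indices 128-143 reach PICO-8's extended palette.

```lua
-- illustrative only: remap draw colors 0-3 to four
-- increasingly bright reds on the screen palette.
-- the trailing 1 targets the screen palette; 136 is
-- an extended-palette red (128+8).
function set_red_shades()
 pal(0,2,1)   -- darkest
 pal(1,136,1)
 pal(2,8,1)
 pal(3,14,1)  -- brightest
end
```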
Each "pixel" is draw with 3 rectfill()
that are stacked vertically and each RGB value is mapped from 0-255 to 4 discreet values `{63,128,191,255}.
The RGB values are then drawn to each 3*3 "pixel" on the screen.
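A rough sketch of what drawing one fake pixel could look like (not the cart's actual code), assuming the screen palette has been remapped so colors 0-3, 4-7 and 8-11 are four shades of red, green and blue respectively:

```lua
-- px,py: fake-pixel coordinates (0-41)
-- r,g,b: source channel values (0-255)
function draw_fake_pixel(px,py,r,g,b)
 local x,y=px*3,py*3
 -- quantize 0-255 into one of four levels
 -- (the cart maps these to 63,128,191,255)
 local function q(v) return flr(v/64) end
 -- three 3x1 bars stacked vertically
 rectfill(x,y,  x+2,y,  q(r))   -- red bar
 rectfill(x,y+1,x+2,y+1,4+q(g)) -- green bar
 rectfill(x,y+2,x+2,y+2,8+q(b)) -- blue bar
end
```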
Outcomes
Does this have any practical value?
> if you shrink the console down to around 42px it looks similar to the original
> However, this method uses 7439 tokens to draw a 42*42px image to the screen 😂
Was it worth doing?
- I think so. I've only had the pico-8 a couple of days, so I am trying to learn what I can, and this taught me how to exploit `pal()`.
You could save tokens by making a function which converts a hex number into a table of three numbers for a color and returns it.
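For example (a hypothetical sketch, not from the cart): since PICO-8 numbers are 16.16 fixed point, a 24-bit color could be stored as 0xrrgg.bb and unpacked back into three channel values:

```lua
-- hypothetical helper: unpack a color stored as
-- 0xrrgg.bb (the blue byte lives in the fractional
-- part of pico-8's 16.16 fixed-point number)
function hex_to_rgb(h)
 local r=band(shr(h,8),0xff)
 local g=band(h,0xff)
 local b=band(shl(h,8),0xff)
 return {r,g,b}
end

-- e.g. hex_to_rgb(0x3f80.ff) --> {63,128,255}
```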