

Here's a simple ebook reader with the first three chapters of A Tale of Two Cities. Press X to advance (once per paragraph / page).

Cart #16672 | 2015-11-15 | License: CC4-BY-NC-SA

This is a tech demo of some tools I've been working on for developing text-based games. It'd be more impressive if it were an actual game, but this victory was hard won so I'm posting it. :)

Notes:

  • Text is stored in cart data, not as string literals in the code.

  • The original source file does have the text as string literals in code. I use a post-processor to extract the string literals, pack them into text data stored in the cart, then replace them with string IDs. I use a custom syntax to flag which strings ought to be extracted so I can still use regular string literals elsewhere. The processing tool lets me adjust the location of the text in memory, so I can set aside space for sprites, sfx, etc. by limiting the size of the text data region.
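The extraction step can be sketched roughly like this. This is a simplified illustration only, assuming the `*"..."` marker and `_t()` lookup function described later in this thread; the real tool parses the Lua source properly rather than regex-matching it.

```python
import re

def extract_strings(lua_src):
    # Find marked string literals, replace each with a _t(id) lookup,
    # and collect the strings for packing into cart data. (Sketch only:
    # a regex like this would break on escaped quotes, which a real
    # Lua parser handles correctly.)
    strings = []

    def repl(match):
        strings.append(match.group(1))
        return '_t(%d)' % (len(strings) - 1)

    # *"..." marks a literal for extraction; plain "..." is left alone
    out = re.sub(r'\*"([^"]*)"', repl, lua_src)
    return out, strings

new_src, table = extract_strings('print(*"hello world") print("keep me")')
# new_src: 'print(_t(0)) print("keep me")'
# table:   ['hello world']
```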

  • Text is compressed using LZW with variable-width codes. The processor has a compressor written in Python, and it appends a decoder written in Lua to the cart code. This Tale of Two Cities excerpt is 25,613 bytes, and compresses to 12,457 bytes for storage in the cart data, about 48% of its original size.
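For the curious, the core of a variable-width LZW compressor looks something like this. This is a minimal Python sketch of the general technique, not the actual p8advent code; the real compressor also packs the emitted codes into a bit stream.

```python
def lzw_compress(text):
    # Start with all 256 single-byte strings; each new phrase gets the
    # next free code. Codes are emitted at the current bit width, which
    # grows by one whenever the next code would no longer fit.
    dictionary = {chr(i): i for i in range(256)}
    width = 9      # dictionary entries from 256 up need 9 bits
    codes = []     # (code, bit_width) pairs; a real packer would
                   # append these to a bit stream
    w = ''
    for ch in text:
        wc = w + ch
        if wc in dictionary:
            w = wc                      # keep extending the current match
        else:
            codes.append((dictionary[w], width))
            dictionary[wc] = len(dictionary)
            if len(dictionary) > (1 << width):
                width += 1              # next code needs one more bit
            w = ch
    if w:
        codes.append((dictionary[w], width))
    return codes

# Repeated phrases compress: 9 input characters become 6 codes here.
print(lzw_compress('abcabcabc'))
```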

  • My LZW implementation is designed to allow random access to strings during program execution. LZW is a dictionary-based compression algorithm, and all strings share the same dictionary for efficient packing. The entire corpus is not decompressed all at once into Lua memory. Instead, the lookup dictionary is calculated from the bit stream and retained in Lua memory so that strings can be decoded on the fly as they are accessed. In the cart data, I use a simple binary layout that gives each string a header with information that helps track the code bit width during decompression, and byte-aligns each string's first and last characters.
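The shared-dictionary, decode-on-demand idea can be sketched like this. It's a simplification: it hands the decoding table back from the compressor instead of reconstructing it from the bit stream as the cart's Lua decoder does, and it omits the variable-width packing and per-string headers.

```python
def pack_corpus(strings):
    # One shared dictionary across all strings, so phrases learned while
    # compressing one string help compress the others.
    dictionary = {chr(i): i for i in range(256)}
    encoded = []
    for s in strings:
        codes, w = [], ''
        for ch in s:
            wc = w + ch
            if wc in dictionary:
                w = wc
            else:
                codes.append(dictionary[w])
                dictionary[wc] = len(dictionary)
                w = ch
        if w:
            codes.append(dictionary[w])
        encoded.append(codes)
    # The reverse lookup table is built once and kept in memory; in the
    # cart, the Lua decoder reconstructs it from the bit stream instead.
    return encoded, {v: k for k, v in dictionary.items()}

def decode_string(encoded, rev, string_id):
    # Decode a single string on the fly; nothing else is decompressed.
    return ''.join(rev[code] for code in encoded[string_id])

texts = ['it was the best of times', 'it was the worst of times']
encoded, rev = pack_corpus(texts)
print(decode_string(encoded, rev, 1))   # prints "it was the worst of times"
```

Decoding with the final dictionary works because every emitted code's phrase was already in the dictionary when it was emitted, and entries are never removed.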

  • This Tale of Two Cities demo gets close to the Lua memory limit with its dictionary. I maximized the size of the dictionary (7,903 entries) to minimize the size of the compressed data. In practice, I'll probably cap the dictionary size to 4,096 entries, which for this text gains a few kilobytes in cart data. But headroom in Lua RAM will be important for real games.

  • The slow scroll of the text in this reader app is artificial, originally intended for use in a text game. Decompression is quite fast after the initial dictionary is built. I have limited interest in making a usable ebook reader cart, but you're welcome to try it. This implementation uses only 292 tokens and 5017 chars, and that could probably be tightened up a bit.

  • This excerpt is 4,634 words. For comparison, Zork I is 14,214 words. Considering Zork had the luxury of paging from a 160k floppy disk and this is packed into a 16k region of cart data, that's not too shabby. :)

I don't know yet if this will actually be useful for a game project, but it was fun to make. The complete code is not ready for public consumption, but here's the Github link anyway: https://github.com/dansanderson/p8advent It's based on and requires picotool.

Happy reading!

-- Dan



@dddaaannn I know this is pretty old by now, but I'm really interested in how you were able to store so much text here. When you said this was pre-packed and stored in 16k of cart data, where did you mean, specifically? This would be really cool to use for other purposes as well.


This experiment was about storing a table of indexable strings as compressed data in the addressable region in a cart, i.e. where the graphics and sound data go. By putting compressed text in the graphics/sound region, you can save multiple chunks across multiple cart files and load them in selectively with reload(). A single cart's graphics/sound region is 16 KB.

If you open the Tale of Two Cities cart in Pico-8 and switch to the sprite sheet editor, you can see all of the text data as rainbow noise. I started with fixed-width LZW codes instead of variable-width and you could see stripes indicating the unused gaps.

If I remember correctly, my experimental tools let you set how much memory to reserve for text, so you can leave some space for graphics/sound as well. I didn't production-harden these tools so they're a little rough. Feel free to mess with them for your own purposes!


Ah, I was looking at the memory map and wondering where it was located. I wasn't sure if there was some other trick involved, like if there was some way to pre-load user data before run time. So this does compromise your ability to use graphics and sound.

Is it something like 0x0000 to 0x4300 (the start of user data)? That might be what confused me; there wasn't an easily delineated 16k chunk in memory.


Correct. This technique takes up addressable cart ROM that would normally be used by graphics/sound data. 0x0000-0x4300 = 16.75 KB.

In a real game using both text and graphics/sound, I would expect to use a small portion of the graphics data for text, and page in additional chunks as needed from auxiliary carts. Each reload() comes with an artificial pause, so how practical this is probably depends on the game.


I can't figure out how to apply the methods here. The github project doesn't work as-is, and even after fixing the errors, if I mark up any of the lines in the test_game.lua file with *, the lua parser chokes.


Ah, you beat me to the punch, dddaaannn.

I was gonna try something "strange" for compressing text and put up some Alice In Wonderland. I'm not there yet though, still working on other projects.


I won't have time to try to repro my own results any time soon, but I'll repeat the note that p8advent requires my other project, picotool.

Going just from code review, it looks like you put picotool on the Python load path (so it can see the pico8 module) then run the advent command as:

./advent --lua somefile.lua

The augmented syntax for the .lua file is to put an asterisk before the opening quote of a string that you want compressed, then pass that string to _t(...) to decompress and use it. The advent program produces a .p8 file with the compressed strings in addressable memory, and the source transformed to use them as expected.

Good luck, and let me know how it goes. Me from the past was a smart cookie but apparently had a lot more free time than I do now. ;)


I've got picotool, and am actually currently using it – it's been fantastically helpful.

the errors, though, are the same as the ones in this issue: https://github.com/dansanderson/p8advent/issues/1

Main issue seems to be some strings are read as strings and some as bytes, and python doesn't like that. I was able to work around that with my very limited python knowledge, but it still had trouble once I added the asterisks to the file.


I'd totally believe there's been a regression in string handling at this point, but just to be sure, can you confirm which version of Python you have installed? picotool was written for Python 3.4 or later.

python --version

The p8tool script starts with a python3 shebang line, but if you run it as "python p8tool" (or whatever) and your Python binary is Python 2, I would expect that to fail.


It's definitely not 2; I did see that it required Python 3. Lemme check... 3.7.4


Thanks for checking. If I weren't working 7 days a week on the day job at the moment I'd troubleshoot. :) Feel free to leave notes on the issue in Github.


cool. Yeah, I'll add my findings to the issue on Github. and if I'm able to make any headway, I'll update here/there.

hope you get a day off soon!


it turned out to be easier to fix the issues than do a comprehensible writeup on the github issue :D

so yeah I forked & made a pull request. I think it works now?


Nice! Thanks for taking a look.


I did mention I was going to try and make my own using my own compression method, dddaaannn. Well, here it is! Actually, there are two different methods I chose to compress text with.

I'd be curious about your thoughts and would appreciate your input, since you mastered this type of cart first.

https://www.lexaloffle.com/bbs/?tid=35608


