Tuesday, March 7, 2023

ASCII Table PDF

Vasudev Ram has a blog with many posts about various programming topics, including Python, Linux, SQL, and PDFs. On the topic of PDF generation, he has a post about generating an ASCII table as a PDF with xtopdf, in which he writes:

Recently, I had the need for an ASCII table lookup, which I searched for and found, thanks to the folks here:

www.ascii-code.com

That gave me the idea of writing a simple program to generate an ASCII table in PDF.

Let's do the same thing in Factor, building a part of that table: the first 32 ASCII characters (0 to 31), which are the control characters.

It might not be widely known, but Factor has built-in support for writing to PDF streams using the formatted output protocol. This supports text styles, including font names, bold and italic styles, foreground and background colors, etc.
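
As a quick illustration (not from the original post), formatted output words like format from io.styles take a hashtable of styles; for example, in the listener (COLOR: comes from colors.constants):

"plain, " write
"bold, " H{ { font-style bold } } format
"and blue" H{ { foreground COLOR: blue } } format nl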

We start by defining the symbols and descriptions of the first 32 ASCII characters. These are all non-printable control characters, so we use an array of strings to render them in the table:

CONSTANT: ASCII {
    "NUL Null char"
    "SOH Start of Heading"
    "STX Start of Text"
    "ETX End of Text"
    "EOT End of Transmission"
    "ENQ Enquiry"
    "ACK Acknowledgment"
    "BEL Bell"
    "BS Back Space"
    "HT Horizontal Tab"
    "LF Line Feed"
    "VT Vertical Tab"
    "FF Form Feed"
    "CR Carriage Return"
    "SO Shift Out / X-On"
    "SI Shift In / X-Off"
    "DLE Data Line Escape"
    "DC1 Device Control 1 (oft. XON)"
    "DC2 Device Control 2"
    "DC3 Device Control 3 (oft. XOFF)"
    "DC4 Device Control 4"
    "NAK Negative Acknowledgement"
    "SYN Synchronous Idle"
    "ETB End of Transmit Block"
    "CAN Cancel"
    "EM End of Medium"
    "SUB Substitute"
    "ESC Escape"
    "FS File Separator"
    "GS Group Separator"
    "RS Record Separator"
    "US Unit Separator"
}

The core printing logic is a header, followed by rows for each character, formatted into a table of decimal, octal, hexadecimal, and binary values along with their symbol and description from the array above:

: ascii. ( -- )
    "ASCII Control Characters - 0 to 31" print nl
    ASCII [
        swap [
            {
                [ >dec ]
                [ >oct 3 CHAR: 0 pad-head ]
                [ >hex 2 CHAR: 0 pad-head ]
                [ >bin 8 CHAR: 0 pad-head ]
            } cleave
        ] dip " " split1 6 narray
    ] map-index {
        "DEC" "OCT" "HEX" "BIN" "Symbol" "Description"
    } prefix format-table unclip
    H{ { font-style bold } } format nl
    [ print ] each ;

Since the UI listener supports formatted streams, you can see the result just by running ascii. in the listener.
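
Rendered as plain text (the listener also shows the header in bold), the output looks roughly like this, with the exact column spacing depending on format-table:

ASCII Control Characters - 0 to 31

DEC OCT HEX BIN      Symbol Description
0   000 00  00000000 NUL    Null char
1   001 01  00000001 SOH    Start of Heading
2   002 02  00000010 STX    Start of Text
...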

Outputting this to a PDF file is now easy. We make sure to set the font to monospace and then run ascii. with our PDF writer, saving the generated PDF output into a file.

: ascii-pdf ( path -- )
    [
        H{ { font-name "monospace" } } [ ascii. ] with-style
    ] with-pdf-writer pdf>string swap utf8 set-file-contents ;
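
Calling it with an output path writes the PDF (the filename here is just an example):

"ascii-table.pdf" ascii-pdf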

We also support writing to HTML streams in a similar manner, so it would be pretty easy to create an ascii-html word that outputs an HTML file using the same printing logic as above, but with our HTML writer instead.

Friday, March 3, 2023

Short UUID

The shortuuid project is a “simple python library that generates concise, unambiguous, URL-safe UUIDs”. I thought it would be a fun exercise to implement this in Factor.

What is a “short UUID”?

You can read the original announcement, but basically it is a string representation of a number using a reduced alphabet that can be used in places like URLs where conciseness is desirable. The author mentions that it provides security by “not divulging information (such as how many rows there are in that particular table, the time difference between one item and the next, etc.)”. However, I think it is more security through obscurity than real security.

In any event, the alphabet used is these 57 characters, chosen to exclude easily confused characters like 0, 1, I, O, and l:

CONSTANT: alphabet
"23456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

We encode a numeric input by repeatedly taking /mod with the alphabet length, indexing into the alphabet with each remainder, until the number is exhausted:

: encode-uuid ( uuid -- shortuuid )
    [ dup 0 > ] [
        alphabet [ length /mod ] [ nth ] bi
    ] "" produce-as nip reverse ;
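
For example, here is an illustrative test using a small integer in place of a real 128-bit UUID value: 12345 = 3×57² + 45×57 + 33, and those base-57 digits map into the alphabet as "5ob".

{ "5ob" } [ 12345 encode-uuid ] unit-test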

We decode using the reverse process, looking up the position of each character in the alphabet and accumulating the numeric value character by character:

: decode-uuid ( shortuuid -- uuid )
    0 [
        alphabet index [ alphabet length * ] dip +
    ] reduce ;
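
Decoding is the inverse operation, so a quick round-trip check passes:

{ 12345 } [ "5ob" decode-uuid ] unit-test

{ 12345 } [ 12345 encode-uuid decode-uuid ] unit-test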

This is available on my GitHub, including features to deal with legacy values generated before version 1.0.0, as well as support for using different alphabets.

Wednesday, March 1, 2023

Geo Timezones

Brad Fitzpatrick wrote a Go package called latlong which efficiently maps a latitude/longitude to a timezone. The original post describing it was on Google+ and is likely lost forever — unless it made it into the Google+ archive before Google+ joined the Google Graveyard.

It tries to have a small binary size (~360 KB), low memory footprint (~1 MB), and incredibly fast lookups (~0.5 microseconds). It does not try to be perfectly accurate when very close to borders.

It’s available in other languages, too!

Huon Wilson ported the library to the Rust programming language, making the code available on GitHub and installable via Cargo. There is even a wrapper for Node.js, installable via npm, that uses a command-line executable written in Go.

When it was announced in 2015, I ported the library to Factor, but missed the opportunity to blog about it. Below, we discuss some details of the implementation, which uses a shapefile of the TZ timezones of the world to divide the world into zones that are assigned timezone values.

The world is divided into 6 zoom levels of tiles (each represented by a key and an index value) that allow us to search a very large area first, then narrow down to more specific geographic areas. Note: we represent each tile as a big-endian packed struct to minimize wasted space in the data files.

The zoom levels are then cached using literal syntax into a zoom-levels constant.

BE-PACKED-STRUCT: tile
    { key uint }
    { idx ushort } ;

SPECIALIZED-ARRAY: tile

CONSTANT: zoom-levels $[
    6 <iota> [
        number>string
        "vocab:geo-tz/zoom" ".dat" surround
        binary file-contents tile cast-array
    ] map
]

Each of the zoom levels references indexes into a leaves data structure that contains 14,110 items, each represented by one of three data types:

  1. Type S is a string.
  2. Type 2 is a one-bit tile.
  3. Type P is a pixmap that is 128 bytes long.

These we load and cache into a unique-leaves constant.

CONSTANT: #leaves 14110

BE-PACKED-STRUCT: one-bit-tile
    { idx0 ushort }
    { idx1 ushort }
    { bits ulonglong } ;

CONSTANT: unique-leaves $[
    "vocab:geo-tz/leaves.dat" binary [
        #leaves [
            read1 {
                { CHAR: S [ { 0 } read-until drop utf8 decode ] }
                { CHAR: 2 [ one-bit-tile read-struct ] }
                { CHAR: P [ 128 read ] }
            } case
        ] replicate
    ] with-file-reader
]

The core logic involves looking up a leaf (which is one of the three types loaded above), given an (x, y) coordinate. If it is a string, we are done. If it is a one-bit-tile, we defer to the appropriate leaf specified by idx0 or idx1. And if it is a pixmap, we have a smidge more logic to detect oceans or defer again to a different leaf.

CONSTANT: ocean-index 0xffff

GENERIC#: lookup-leaf 2 ( leaf x y -- zone/f )

M: string lookup-leaf 2drop ;

M:: one-bit-tile lookup-leaf ( leaf x y -- zone/f )
    leaf bits>> y 3 bits 3 shift x 3 bits bitor bit?
    [ leaf idx1>> ] [ leaf idx0>> ] if
    unique-leaves nth x y lookup-leaf ;

M:: byte-array lookup-leaf ( leaf x y -- zone/f )
    y 3 bits 3 shift x 3 bits bitor 2 * :> i
    i leaf nth 8 shift i 1 + leaf nth +
    dup ocean-index = [ drop f ] [
        unique-leaves nth x y lookup-leaf
    ] if ;

We’re almost done! Given a zoom level, a tile-key helps us find a specific tile whose leaf we can then look up, hopefully finding the timezone associated with the coordinate.

:: lookup-zoom-level ( zoom-level x y tile-key -- zone/f )
    zoom-level [ key>> tile-key >=< ] search swap [
        dup key>> tile-key = [
            idx>> unique-leaves nth x y lookup-leaf
        ] [ drop f ] if
    ] [ drop f ] if ;

Each coordinate is effectively a pixel in the image, so our logic searches from the outermost zoom level to the innermost, trying to look up a timezone in each one using a tile-key built from the coordinate and zoom level.

:: tile-key ( x y level -- tile-key )
    level dup 3 + neg :> n
    y x [ n shift 14 bits ] bi@
    { 0 14 28 } bitfield ;

:: lookup-pixel ( x y -- zone )
    6 <iota> [| level |
        level zoom-levels nth
        x y 2dup level tile-key
        lookup-zoom-level
    ] map-find-last drop ;

Finally, we have enough to implement our public API, converting a given latitude/longitude coordinate to a pixel value, deferring to the word we just defined above.

CONSTANT: deg-pixels 32

:: lookup-zone ( lat lon -- zone )
    lon 180 + deg-pixels * 0 360 deg-pixels * 1 - clamp
    90 lat - deg-pixels * 0 180 deg-pixels * 1 - clamp
    [ >integer ] bi@ lookup-pixel ;
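
To make the conversion concrete, here is the arithmetic from lookup-zone applied to the San Francisco test case below (a worked example, with the clamping omitted since these values are already in range):

! lat = 37.7833, lon = -122.4167, deg-pixels = 32
-122.4167 180 + 32 * >integer .  ! x pixel: 1842
90 37.7833 - 32 * >integer .     ! y pixel: 1670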

And then a couple of test cases to show it’s working:

{ "America/Los_Angeles" } [ 37.7833 -122.4167 lookup-zone ] unit-test

{ "Australia/Sydney" } [ -33.8885 151.1908 lookup-zone ] unit-test

Performance is pretty good: we can perform over 3 million lookups per second, putting the cost per lookup at around 0.33 microseconds. And all of that in less than 70 lines of code.
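
If you want to measure this yourself, a rough benchmark (numbers will vary by machine) using time from tools.time might look like this:

[ 1,000,000 [ 37.7833 -122.4167 lookup-zone drop ] times ] time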

This is available on my GitHub.