- The input is generally expected to be UTF-8 encoded.
- Characters are byte-sequences, 1 to 4 bytes long.
- The lexer engine takes bytes as input, not characters.
- To compensate for this, character-based grammars are rewritten into a byte-based form, where multi-byte characters are represented by rules reflecting their byte sequences.
- Character classes become alternatives of characters, which in turn are byte sequences.
- It is possible to optimize the above into a set of alternatives of sequences of byte-ranges.
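The optimization in the last point can be sketched as follows, assuming nothing about the actual rule representation. The function below splits a codepoint range into alternatives of sequences of byte-ranges, following the UTF-8 encoding layout; it is a simplified illustration restricted to codepoints up to U+07FF, while the real algorithm would also cover 3- and 4-byte sequences.

```python
def split_range_2byte(lo, hi):
    # Split a codepoint range (hi <= U+07FF) into alternatives of
    # sequences of (low, high) byte-ranges.
    # Simplified sketch: 1- and 2-byte UTF-8 sequences only.
    out = []
    if lo <= 0x7F:  # ASCII part is a single byte-range
        out.append([(lo, min(hi, 0x7F))])
    if hi >= 0x80:  # two-byte part: lead byte C0|x, trail byte 80|y
        lo2 = max(lo, 0x80)
        b1lo, b2lo = 0xC0 | (lo2 >> 6), 0x80 | (lo2 & 0x3F)
        b1hi, b2hi = 0xC0 | (hi >> 6), 0x80 | (hi & 0x3F)
        if b1lo == b1hi:
            # Same lead byte throughout: one sequence suffices.
            out.append([(b1lo, b1hi), (b2lo, b2hi)])
        else:
            # First lead byte, possibly partial trail range.
            out.append([(b1lo, b1lo), (b2lo, 0xBF)])
            if b1hi > b1lo + 1:
                # Middle lead bytes cover the full trail range.
                out.append([(b1lo + 1, b1hi - 1), (0x80, 0xBF)])
            # Last lead byte, possibly partial trail range.
            out.append([(b1hi, b1hi), (0x80, b2hi)])
    return out
```

For example, splitting U+0041..U+005A yields the single byte-range `[41-5A]`, while a range crossing the ASCII boundary yields that ASCII byte-range plus two-byte sequences of lead and trail byte-ranges.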
The first simplifies literals without touching their literal-ness, i.e. the result of normalizing a literal is still a literal. The second goes further and is able to break a literal apart into a collection of priority rules representing sequences and alternations of simpler literals.
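Breaking a literal apart bottoms out in single-byte rules. A minimal sketch, where the `("byte", b)` tuples are a hypothetical rule encoding, not the actual representation:

```python
def literal_to_byte_rules(lit):
    # Decompose a string literal into the sequence of its UTF-8 bytes,
    # one hypothetical ("byte", b) rule per byte.
    return [("byte", b) for b in lit.encode("utf-8")]

literal_to_byte_rules("π")  # U+03C0 -> [("byte", 0xCF), ("byte", 0x80)]
```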
Regardless, at the bottom the engine has to support only bytes and byte-ranges, or even only bytes, with the ranges rewritten into alternations. These are finite: at most 256 alternatives for the full range [00-ff].
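Rewriting a byte-range into an alternation is mechanical; a sketch, again using a hypothetical `("byte", b)` rule encoding:

```python
def range_to_alternation(lo, hi):
    # Rewrite a byte-range [lo-hi] into the alternation of its
    # individual bytes, one hypothetical ("byte", b) rule per byte.
    return [("byte", b) for b in range(lo, hi + 1)]

len(range_to_alternation(0x00, 0xFF))  # 256 alternatives for [00-ff]
```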
As a side effect we can support the full range of Unicode character classes, despite Tcl itself not supporting them.
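This works because any character class, whatever its members, reduces to alternatives of plain byte sequences. A sketch, assuming nothing about the actual class representation:

```python
def char_class_to_byte_alternatives(chars):
    # Each character in a class becomes one alternative: the tuple of
    # its UTF-8 bytes (1 to 4 of them).
    return [tuple(ch.encode("utf-8")) for ch in chars]

char_class_to_byte_alternatives(["a", "ß", "€"])
# -> [(0x61,), (0xC3, 0x9F), (0xE2, 0x82, 0xAC)]
```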
Note: The current C runtime supports only bytes, and the grammar reducer targeting it breaks byte-ranges apart as well.