Tcl Library Source Code

Check-in [c74461e613]

Overview
Comment:	Doctools:
- Extended testsuite, markdown special character example.
- Fixed issue in markdown engine with handling of its special characters in verbatim blocks. Should not emit them as quoted; they are not special in such blocks.
Regenerated package docs (version bump & fixes making changes).
Version bump - doctools 1.5.2
B (markdown) T (markdown)
family | ancestors | descendants | both | files | file ages | folders
c74461e6131911185ce55a9ab400446161e261bc007c5ae9f3615113a7739acf
aku 2019-04-10 20:46:17
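The verbatim-block rule this check-in fixes can be sketched in a few lines. This is an illustrative Python sketch, not the doctools emitter: the `emit` helper and its special-character set are hypothetical, and it shows only the rule that special characters are quoted in running text but passed through unchanged in verbatim blocks.

```python
# Hypothetical sketch of the fixed rule -- NOT the doctools implementation.
# Markdown special characters are quoted in running text, but a verbatim
# (code) block must pass them through untouched: they are not special there.
MD_SPECIAL = set("\\`*_{}[]()#+-.!")

def emit(text, verbatim=False):
    """Quote markdown special characters unless inside a verbatim block."""
    if verbatim:
        return text  # nothing is special inside a verbatim block
    return "".join("\\" + c if c in MD_SPECIAL else c for c in text)

print(emit("set a [expr {1*2}]"))                 # quoted for running text
print(emit("set a [expr {1*2}]", verbatim=True))  # set a [expr {1*2}]
```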
Context
2019-04-11
16:25  Updated branch with trunk work in prep for merge back (zig). Resolves most test issues of the branch. check-in: e7e38ee473 user: aku tags: hypnotoad
03:16  imap4 Tkt [e1b2ade141] D - Added category information (Networking). check-in: 5ed7010be0 user: aku tags: trunk
2019-04-10
20:58  Bring md special-char/verbatim fixes to the devguide. check-in: 43904ac14b user: aku tags: doc-overhaul
20:46  Doctools: - Extended testsuite, markdown special character example. - Fixed issue in markdown engine with handling of its special characters in verbatim blocks. Should not emit them as quoted; they are not special in such blocks. Regenerated package docs (version bump & fixes making changes). Version bump - doctools 1.5.2. B (markdown) T (markdown). check-in: c74461e613 user: aku tags: trunk
2019-03-29
04:14  Doctools: - Extended testsuite, nesting of different list types. - Fixed issues in text/markdown engines with this kind of nesting. (Context creation did not clear inherited type markers, and paragraph handling took def markers over other list markers.) Regenerated package docs (version bump, fixes making changes). Version bump - doctools 1.5.1. B (text, markdown) T (text, markdown). check-in: 3ddb294ff0 user: aku tags: trunk
Changes

Changes to embedded/md/tcllib/files/apps/dtplite.md.

- [2] The following directory structure is created when processing a single set of input documents. The file extension used is for output in HTML, but that is not relevant to the structure and was just used to have proper file names.

      output/
          toc.html
          index.html
          files/
              path/to/FOO.html

  The last line in the example shows the document generated for a file FOO located at inputdirectory/path/to/FOO

- [3] When merging many packages into a unified set of documents the generated directory structure is a bit deeper:

      output
          .toc
          .idx
          .tocdoc
          .idxdoc
          .xrf
          toc.html
          index.html
          FOO1/
          ...
          FOO2/
              toc.html
              files/
                  path/to/BAR.html

  Each of the directories FOO1, ... contains the documents generated for the package FOO1, ... and follows the structure shown for use case [2]. The only exception is that there is no per-package index.

  The files ".toc", ".idx", and ".xrf" contain the internal status of the whole output and will be read and updated by the next invocation. Their
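The single-set output layout described above can be reproduced with a short sketch. This is an illustrative Python sketch, not dtplite itself: the temporary directory and the file name FOO.html are hypothetical stand-ins.

```python
# Illustrative sketch of dtplite's single-input-set output layout -- not
# dtplite itself; the temp directory and "FOO" are hypothetical stand-ins.
import tempfile
from pathlib import Path

out = Path(tempfile.mkdtemp()) / "output"
(out / "files" / "path" / "to").mkdir(parents=True)
(out / "toc.html").touch()
(out / "index.html").touch()
(out / "files" / "path" / "to" / "FOO.html").touch()

# List the generated files relative to the output directory.
print(sorted(p.relative_to(out).as_posix() for p in out.rglob("*.html")))
# -> ['files/path/to/FOO.html', 'index.html', 'toc.html']
```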

Changes to embedded/md/tcllib/files/apps/page.md.

the plugin option they are associated with does not understand them, and was not superseded by a plugin option coming after.

Default options are used if and only if the command line did not contain any options at all. They will set the application up as a PEG-based parser generator. The exact list of options is

    -c peg

And now the recognized options and their arguments, if they have any:

- __--help__

- __-h__

................................................................................

* *peg* It sets the application up as a parser generator accepting parsing expression grammars and writing a packrat parser in Tcl. The actual arguments it specifies are:

      --reset --append --reader peg --transform reach --transform use --writer me

- __-r__ *name* Readers. The name of the package for the plugin *name* is "page::reader::*name*". We have five predefined plugins:
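The plugin naming rule quoted above reduces to simple string assembly. A minimal sketch (Python, illustrative only; page resolves plugin packages internally):

```python
# Sketch of the quoted naming rule: reader plugin NAME lives in the
# package "page::reader::NAME" (illustrative; not the page implementation).
def reader_package(name: str) -> str:
    return f"page::reader::{name}"

print(reader_package("peg"))  # page::reader::peg
```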

Changes to embedded/md/tcllib/files/apps/pt.md.

 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740  In this section we are working a complete example, starting with a PEG grammar and ending with running the parser generated from it over some input, following the outline shown in the figure below: ![](\.\./\.\./\.\./image/flow\.png) Our grammar, assumed to the stored in the file "calculator\.peg" is PEG calculator $$Expression$$ Digit <\- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <\- '\-' / '\+' ; Number <\- Sign? Digit\+ ; Expression <\- Term $$AddOp Term$$\* ; MulOp <\- '\*' / '/' ; Term <\- Factor $$MulOp Factor$$\* ; AddOp <\- '\+'/'\-' ; Factor <\- '$$' Expression '$$' / Number ; END; From this we create a snit\-based parser via pt generate snit calculator\.tcl \-class calculator \-name calculator peg calculator\.peg which leaves us with the parser package and class written to the file "calculator\.tcl"\. Assuming that this package is then properly installed in a place where Tcl can find it we can now use this class via a script like package require calculator lassign $argv input set channel $open input r$ set parser $calculator$ set ast $parser parse channel$$parser destroy close $channel \.\.\. now process the returned abstract syntax tree \.\.\. where the abstract syntax tree stored in the variable will look like set ast \{Expression 0 4 \{Factor 0 4 \{Term 0 2 \{Number 0 2 \{Digit 0 0\} \{Digit 1 1\} \{Digit 2 2\} \} \} \{AddOp 3 3\} \{Term 4 4 \{Number 4 4 \{Digit 4 4\} \} \} \} \} assuming that the input file and channel contained the text 120\+5 A more graphical representation of the tree would be ![](\.\./\.\./\.\./image/expr\_ast\.png) Regardless, at this point it is the user's responsibility to work with the tree to reach whatever goal she desires\. I\.e\. 
analyze it, transform it, etc\. The package __[pt::ast](\.\./modules/pt/pt\_astree\.md)__ should be of help here,   | | | | | | | | | | | | | | | | | | | | | < < > > | | | | < < < < | > > > > |  673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740  In this section we are working a complete example, starting with a PEG grammar and ending with running the parser generated from it over some input, following the outline shown in the figure below: ![](\.\./\.\./\.\./image/flow\.png) Our grammar, assumed to the stored in the file "calculator\.peg" is PEG calculator (Expression) Digit <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <- '-' / '+' ; Number <- Sign? Digit+ ; Expression <- Term (AddOp Term)* ; MulOp <- '*' / '/' ; Term <- Factor (MulOp Factor)* ; AddOp <- '+'/'-' ; Factor <- '(' Expression ')' / Number ; END; From this we create a snit\-based parser via pt generate snit calculator.tcl -class calculator -name calculator peg calculator.peg which leaves us with the parser package and class written to the file "calculator\.tcl"\. Assuming that this package is then properly installed in a place where Tcl can find it we can now use this class via a script like package require calculator lassign$argv input set channel [open $input r] set parser [calculator] set ast [$parser parse $channel]$parser destroy close $channel ... now process the returned abstract syntax tree ... 
where the abstract syntax tree stored in the variable will look like set ast {Expression 0 4 {Factor 0 4 {Term 0 2 {Number 0 2 {Digit 0 0} {Digit 1 1} {Digit 2 2} } } {AddOp 3 3} {Term 4 4 {Number 4 4 {Digit 4 4} } } } } assuming that the input file and channel contained the text 120+5 A more graphical representation of the tree would be ![](\.\./\.\./\.\./image/expr\_ast\.png) Regardless, at this point it is the user's responsibility to work with the tree to reach whatever goal she desires\. I\.e\. analyze it, transform it, etc\. The package __[pt::ast](\.\./modules/pt/pt\_astree\.md)__ should be of help here,  Changes to embedded/md/tcllib/files/modules/aes/aes.md.  140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160   randomly and transmitted as the first block of the output\. Errors in encryption affect the current block and the next block after which the cipher will correct itself\. CBC is the most commonly used mode in software encryption\. This is the default mode of operation for this module\. # EXAMPLES % set nil\_block $string repeat \\\\0 16$ % aes::aes \-hex \-mode cbc \-dir encrypt \-key$nil\_block $nil\_block 66e94bd4ef8a2c3b884cfa59ca342b2e set Key $aes::Init cbc sixteen\_bytes\_key\_data sixteen\_byte\_iv$ append ciphertext $aes::Encrypt Key plaintext$ append ciphertext $aes::Encrypt Key additional\_plaintext$ aes::Final$Key # REFERENCES 1. "Advanced Encryption Standard", Federal Information Processing Standards Publication 197, 2001 $$[http://csrc\.nist\.gov/publications/fips/fips197/fips\-197\.pdf](http://csrc\.nist\.gov/publications/fips/fips197/fips\-197\.pdf)$$   | | | | |  140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160   randomly and transmitted as the first block of the output\. Errors in encryption affect the current block and the next block after which the cipher will correct itself\. CBC is the most commonly used mode in software encryption\. 
This is the default mode of operation for this module\. # EXAMPLES % set nil_block [string repeat \\0 16] % aes::aes -hex -mode cbc -dir encrypt -key $nil_block$nil_block 66e94bd4ef8a2c3b884cfa59ca342b2e set Key [aes::Init cbc $sixteen_bytes_key_data$sixteen_byte_iv] append ciphertext [aes::Encrypt $Key$plaintext] append ciphertext [aes::Encrypt $Key$additional_plaintext] aes::Final $Key # REFERENCES 1. "Advanced Encryption Standard", Federal Information Processing Standards Publication 197, 2001 $$[http://csrc\.nist\.gov/publications/fips/fips197/fips\-197\.pdf](http://csrc\.nist\.gov/publications/fips/fips197/fips\-197\.pdf)$$  Changes to embedded/md/tcllib/files/modules/amazon-s3/S3.md.  1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 .... 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 .... 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493   * __\-prefix__ This names the prefix that will be added to all resources\. That is, it is the remote equivalent of __\-directory__\. If it is not specified, the root of the bucket will be treated as the remote directory\. An example may clarify\. S3::Push \-bucket test \-directory /tmp/xyz \-prefix hello/world In this example, /tmp/xyz/pdq\.html will be stored as http://s3\.amazonaws\.com/test/hello/world/pdq\.html in Amazon's servers\. Also, /tmp/xyz/abc/def/Hello will be stored as http://s3\.amazonaws\.com/test/hello/world/abc/def/Hello in Amazon's servers\. Without the __\-prefix__ option, /tmp/xyz/pdq\.html would be stored as http://s3\.amazonaws\.com/test/pdq\.html\. ................................................................................ delete files that have been deleted from one place but not the other yet not copying changed files is untested\. 
# USAGE SUGGESTIONS To fetch a "directory" out of a bucket, make changes, and store it back: file mkdir \./tempfiles S3::Pull \-bucket sample \-prefix of/interest \-directory \./tempfiles \\ \-timestamp aws do\_my\_process \./tempfiles other arguments S3::Push \-bucket sample \-prefix of/interest \-directory \./tempfiles \\ \-compare newer \-delete true To delete files locally that were deleted off of S3 but not otherwise update files: S3::Pull \-bucket sample \-prefix of/interest \-directory \./myfiles \\ \-compare never \-delete true # FUTURE DEVELOPMENTS The author intends to work on several additional projects related to this package, in addition to finishing the unfinished features\. First, a command\-line program allowing browsing of buckets and transfer of files ................................................................................ To handle this change the applications using __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ must be patched, and not this package, nor __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ itself\. Such a patch may be as simple as generally activating __tls1__ support, as shown in the example below\. package require tls tls::init \-tls1 1 ;\# forcibly activate support for the TLS1 protocol \.\.\. your own application code \.\.\. # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *amazon\-s3* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.   | | | | | | | | | | |  1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 .... 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 .... 
1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493   * __\-prefix__ This names the prefix that will be added to all resources\. That is, it is the remote equivalent of __\-directory__\. If it is not specified, the root of the bucket will be treated as the remote directory\. An example may clarify\. S3::Push -bucket test -directory /tmp/xyz -prefix hello/world In this example, /tmp/xyz/pdq\.html will be stored as http://s3\.amazonaws\.com/test/hello/world/pdq\.html in Amazon's servers\. Also, /tmp/xyz/abc/def/Hello will be stored as http://s3\.amazonaws\.com/test/hello/world/abc/def/Hello in Amazon's servers\. Without the __\-prefix__ option, /tmp/xyz/pdq\.html would be stored as http://s3\.amazonaws\.com/test/pdq\.html\. ................................................................................ delete files that have been deleted from one place but not the other yet not copying changed files is untested\. # USAGE SUGGESTIONS To fetch a "directory" out of a bucket, make changes, and store it back: file mkdir ./tempfiles S3::Pull -bucket sample -prefix of/interest -directory ./tempfiles \ -timestamp aws do_my_process ./tempfiles other arguments S3::Push -bucket sample -prefix of/interest -directory ./tempfiles \ -compare newer -delete true To delete files locally that were deleted off of S3 but not otherwise update files: S3::Pull -bucket sample -prefix of/interest -directory ./myfiles \ -compare never -delete true # FUTURE DEVELOPMENTS The author intends to work on several additional projects related to this package, in addition to finishing the unfinished features\. First, a command\-line program allowing browsing of buckets and transfer of files ................................................................................ To handle this change the applications using __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ must be patched, and not this package, nor __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ itself\. 
Such a patch may be as simple as generally activating __tls1__ support, as shown in the example below\. package require tls tls::init -tls1 1 ;# forcibly activate support for the TLS1 protocol ... your own application code ... # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *amazon\-s3* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.  Changes to embedded/md/tcllib/files/modules/amazon-s3/xsxp.md.  139 140 141 142 143 144 145 146 147 148 149 150 151 152 153   * %PCDATA? is like %PCDATA, but returns an empty string if no PCDATA is found\. For example, to fetch the first bold text from the fifth paragraph of the body of your HTML file, xsxp::fetch$pxml \{body p\#4 b\} %PCDATA - __xsxp::fetchall__ *pxml\_list* *path* ?*part*? This iterates over each PXML in *pxml\_list* $$which must be a list of pxmls$$ selecting the indicated path from it, building a new list with the selected data, and returning that new list\.   |  139 140 141 142 143 144 145 146 147 148 149 150 151 152 153   * %PCDATA? is like %PCDATA, but returns an empty string if no PCDATA is found\. For example, to fetch the first bold text from the fifth paragraph of the body of your HTML file, xsxp::fetch $pxml {body p#4 b} %PCDATA - __xsxp::fetchall__ *pxml\_list* *path* ?*part*? This iterates over each PXML in *pxml\_list* $$which must be a list of pxmls$$ selecting the indicated path from it, building a new list with the selected data, and returning that new list\.  Changes to embedded/md/tcllib/files/modules/base64/ascii85.md.  63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89   Ascii85 decodes the given *string* and returns the binary data\. The decoder ignores whitespace in the string, as well as tabs and newlines\. 
# EXAMPLES % ascii85::encode "Hello, world" 87cURD\_\*\#TDfTZ\) % ascii85::encode $string repeat xyz 24$ G^4U$HX^\\H?a^$G^4U$HX^\\H?a^$G^4U$HX^\\H?a^$G^4U$HX^\\H?a^$G^4U$HX^\\H?a^$G ^4U$HX^\\H?a^$ % ascii85::encode \-wrapchar "" $string repeat xyz 24$ G^4U$HX^\\H?a^$G^4U$HX^\\H?a^$G^4U$HX^\\H?a^$G^4U$HX^\\H?a^$G^4U$HX^\\H?a^$G^4U$HX^\\H?a^$ \# NOTE: ascii85 encodes BINARY strings\. % set chemical $encoding convertto utf\-8 "C\\u2088H\\u2081\\u2080N\\u2084O\\u2082"$ % set encoded $ascii85::encode chemical$ 6fN\]R8E,5Pidu\\UiduhZidua % set caffeine $encoding convertfrom utf\-8 \[ascii85::decode encoded$\] # References 1. [http://en\.wikipedia\.org/wiki/Ascii85](http://en\.wikipedia\.org/wiki/Ascii85) 1. Postscript Language Reference Manual, 3rd Edition, page 131\. [http://www\.adobe\.com/devnet/postscript/pdfs/PLRM\.pdf](http://www\.adobe\.com/devnet/postscript/pdfs/PLRM\.pdf)   | | | | | | | | | | |  63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89   Ascii85 decodes the given *string* and returns the binary data\. The decoder ignores whitespace in the string, as well as tabs and newlines\. # EXAMPLES % ascii85::encode "Hello, world" 87cURD_*#TDfTZ) % ascii85::encode [string repeat xyz 24] G^4U[H$X^\H?a^]G^4U[H$X^\H?a^]G^4U[H$X^\H?a^]G^4U[H$X^\H?a^]G^4U[H$X^\H?a^]G ^4U[H$X^\H?a^] % ascii85::encode -wrapchar "" [string repeat xyz 24] G^4U[H$X^\H?a^]G^4U[H$X^\H?a^]G^4U[H$X^\H?a^]G^4U[H$X^\H?a^]G^4U[H$X^\H?a^]G^4U[H$X^\H?a^] # NOTE: ascii85 encodes BINARY strings. % set chemical [encoding convertto utf-8 "C\u2088H\u2081\u2080N\u2084O\u2082"] % set encoded [ascii85::encode$chemical] 6fN]R8E,5Pidu\UiduhZidua % set caffeine [encoding convertfrom utf-8 [ascii85::decode $encoded]] # References 1. [http://en\.wikipedia\.org/wiki/Ascii85](http://en\.wikipedia\.org/wiki/Ascii85) 1. Postscript Language Reference Manual, 3rd Edition, page 131\. 
[http://www\.adobe\.com/devnet/postscript/pdfs/PLRM\.pdf](http://www\.adobe\.com/devnet/postscript/pdfs/PLRM\.pdf)  Changes to embedded/md/tcllib/files/modules/base64/base64.md.  67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91   ignores whitespace in the string\. # EXAMPLES % base64::encode "Hello, world" SGVsbG8sIHdvcmxk % base64::encode $string repeat xyz 20$ eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6 eHl6eHl6eHl6 % base64::encode \-wrapchar "" $string repeat xyz 20$ eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6 \# NOTE: base64 encodes BINARY strings\. % set chemical $encoding convertto utf\-8 "C\\u2088H\\u2081\\u2080N\\u2084O\\u2082"$ % set encoded $base64::encode chemical$ Q\+KCiEjigoHigoBO4oKET\+KCgg== % set caffeine $encoding convertfrom utf\-8 \[base64::decode encoded$\] # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *base64* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.   | | | | | | |  67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91   ignores whitespace in the string\. # EXAMPLES % base64::encode "Hello, world" SGVsbG8sIHdvcmxk % base64::encode [string repeat xyz 20] eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6 eHl6eHl6eHl6 % base64::encode -wrapchar "" [string repeat xyz 20] eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6eHl6 # NOTE: base64 encodes BINARY strings. % set chemical [encoding convertto utf-8 "C\u2088H\u2081\u2080N\u2084O\u2082"] % set encoded [base64::encode$chemical] Q+KCiEjigoHigoBO4oKET+KCgg== % set caffeine [encoding convertfrom utf-8 [base64::decode $encoded]] # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. 
Please report such in the category *base64* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.  Changes to embedded/md/tcllib/files/modules/base64/uuencode.md.  90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117   The uuencoded data header line contains a suggested permissions bit pattern expressed as an octal string\. To change the default of 0644 you can set this option\. For instance, 0755 would be suitable for an executable\. See __chmod$$1$$__\. # EXAMPLES % set d $uuencode::encode "Hello World\!"$ 2&5L;&\\\\@5V\]R;&0A % uuencode::uudecode$d Hello World\! % set d $uuencode::uuencode \-name hello\.txt "Hello World"$ begin 644 hello\.txt \+2&5L;&\\@5V\]R;&0\ \ end % uuencode::uudecode $d \{hello\.txt 644 \{Hello World\}\} # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *base64* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.   | | | | | | < > |  90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117   The uuencoded data header line contains a suggested permissions bit pattern expressed as an octal string\. To change the default of 0644 you can set this option\. For instance, 0755 would be suitable for an executable\. See __chmod$$1$$__\. # EXAMPLES % set d [uuencode::encode "Hello World!"] 2&5L;&\\@5V]R;&0A % uuencode::uudecode$d Hello World! % set d [uuencode::uuencode -name hello.txt "Hello World"] begin 644 hello.txt +2&5L;&\@5V]R;&0  end % uuencode::uudecode $d {hello.txt 644 {Hello World}} # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. 
Please report such in the category *base64* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.  Changes to embedded/md/tcllib/files/modules/base64/yencode.md.  95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111   - \-crc32 boolean The yEnc specification recommends the inclusion of a cyclic redundancy check value in the footer\. Use this option to change the default from *true* to *false*\. % set d $yencode::yencode \-file testfile\.txt$ =ybegin line=128 size=584 name=testfile\.txt \-o\- data not shown \-o\- =yend size=584 crc32=ded29f4f # References 1. [http://www\.yenc\.org/yenc\-draft\.1\.3\.txt](http://www\.yenc\.org/yenc\-draft\.1\.3\.txt) # Bugs, Ideas, Feedback   | | |  95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111   - \-crc32 boolean The yEnc specification recommends the inclusion of a cyclic redundancy check value in the footer\. Use this option to change the default from *true* to *false*\. % set d [yencode::yencode -file testfile.txt] =ybegin line=128 size=584 name=testfile.txt -o- data not shown -o- =yend size=584 crc32=ded29f4f # References 1. [http://www\.yenc\.org/yenc\-draft\.1\.3\.txt](http://www\.yenc\.org/yenc\-draft\.1\.3\.txt) # Bugs, Ideas, Feedback  Changes to embedded/md/tcllib/files/modules/bench/bench_lang_intro.md.  55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 .. 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 ... 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148  number of commands to support the declaration of benchmarks\. A document written in this language is a Tcl script and has the same syntax\. ## Basics One of the most simplest benchmarks which can be written in bench is bench \-desc LABEL \-body \{ set a b \} This code declares a benchmark named __LABEL__ which measures the time it takes to assign a value to a variable\. 
The Tcl code doing this assignment is the __\-body__ of the benchmark\. ## Pre\- and postprocessing ................................................................................ __\-post__\-body, respectively\. In our example, directly drawn from the benchmark suite of Tcllib's __[aes](\.\./aes/aes\.md)__ package, the concrete initialization code constructs the key schedule used by the encryption command whose speed we measure, and the cleanup code releases any resources bound to that schedule\. bench \-desc "AES\-$\{len\} ECB encryption core" __\-pre__ \{ set key $aes::Init ecb k i$ \} \-body \{ aes::Encrypt $key$p \} __\-post__ \{ aes::Final $key \} ## Advanced pre\- and postprocessing Our last example again deals with initialization and cleanup code\. To see the difference to the regular initialization and cleanup discussed in the last section it is necessary to know a bit more about how bench actually measures the speed of the the __\-body__\. ................................................................................ example we used above to demonstrate the necessity for the advanced initialization and cleanup\. Its concrete initialization code constructs a variable refering to a set with specific properties $$The set has a string representation, which is shared$$ affecting the speed of the inclusion command, and the cleanup code releases the temporary variables created by this initialization\. bench \-desc "set include, missing x$times $n" __\-ipre__ \{ set A$sx$$times,n$$ set B $A \} \-body \{ struct::set include A x \} __\-ipost__ \{ unset A B \} # FURTHER READING Now that this document has been digested the reader, assumed to be a *writer* of benchmarks, he should be fortified enough to be able to understand the formal *bench language specfication*\. It will also serve as the detailed specification and cheat sheet for all available commands and their syntax\.   | | | | | | | < > | | | | < >  55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 .. 
76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 ... 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148  number of commands to support the declaration of benchmarks\. A document written in this language is a Tcl script and has the same syntax\. ## Basics One of the most simplest benchmarks which can be written in bench is bench -desc LABEL -body { set a b } This code declares a benchmark named __LABEL__ which measures the time it takes to assign a value to a variable\. The Tcl code doing this assignment is the __\-body__ of the benchmark\. ## Pre\- and postprocessing ................................................................................ __\-post__\-body, respectively\. In our example, directly drawn from the benchmark suite of Tcllib's __[aes](\.\./aes/aes\.md)__ package, the concrete initialization code constructs the key schedule used by the encryption command whose speed we measure, and the cleanup code releases any resources bound to that schedule\. bench -desc "AES-${len} ECB encryption core" __-pre__ { set key [aes::Init ecb $k$i] } -body { aes::Encrypt $key$p } __-post__ { aes::Final $key } ## Advanced pre\- and postprocessing Our last example again deals with initialization and cleanup code\. To see the difference to the regular initialization and cleanup discussed in the last section it is necessary to know a bit more about how bench actually measures the speed of the the __\-body__\. ................................................................................ example we used above to demonstrate the necessity for the advanced initialization and cleanup\. Its concrete initialization code constructs a variable refering to a set with specific properties $$The set has a string representation, which is shared$$ affecting the speed of the inclusion command, and the cleanup code releases the temporary variables created by this initialization\. 
bench -desc "set include, missing x$times $n" __-ipre__ { set A$sx($times,$n) set B $A } -body { struct::set include A x } __-ipost__ { unset A B } # FURTHER READING Now that this document has been digested the reader, assumed to be a *writer* of benchmarks, he should be fortified enough to be able to understand the formal *bench language specfication*\. It will also serve as the detailed specification and cheat sheet for all available commands and their syntax\.  Changes to embedded/md/tcllib/files/modules/blowfish/blowfish.md.  141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160   randomly and transmitted as the first block of the output\. Errors in encryption affect the current block and the next block after which the cipher will correct itself\. CBC is the most commonly used mode in software encryption\. # EXAMPLES % blowfish::blowfish \-hex \-mode ecb \-dir encrypt \-key secret01 "hello, world\!" d0d8f27e7a374b9e2dbd9938dd04195a set Key $blowfish::Init cbc eight\_bytes\_key\_data eight\_byte\_iv$ append ciphertext $blowfish::Encrypt Key plaintext$ append ciphertext $blowfish::Encrypt Key additional\_plaintext$ blowfish::Final$Key # REFERENCES 1. Schneier, B\. "Applied Cryptography, 2nd edition", 1996, ISBN 0\-471\-11709\-9, pub\. John Wiley & Sons\.   | | | |  141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160   randomly and transmitted as the first block of the output\. Errors in encryption affect the current block and the next block after which the cipher will correct itself\. CBC is the most commonly used mode in software encryption\. # EXAMPLES % blowfish::blowfish -hex -mode ecb -dir encrypt -key secret01 "hello, world!" d0d8f27e7a374b9e2dbd9938dd04195a set Key [blowfish::Init cbc $eight_bytes_key_data$eight_byte_iv] append ciphertext [blowfish::Encrypt $Key$plaintext] append ciphertext [blowfish::Encrypt $Key$additional_plaintext] blowfish::Final $Key # REFERENCES 1. Schneier, B\. 
Changes to embedded/md/tcllib/files/modules/cmdline/cmdline.md.

Starting with version 1.5 all errors thrown by the package have a proper
__::errorCode__ for use with Tcl's
__[try](../try/tcllib_try.md)__ command. This code always has the word
__CMDLINE__ as its first element.

# EXAMPLES

    package require Tcl 8.5
    package require try         ;# Tcllib.
    package require cmdline 1.5 ;# First version with proper error-codes.

    # Notes:
    # - Tcl 8.6+ has 'try' as a builtin command and therefore does not
    #   need the 'try' package.
    # - Before Tcl 8.5 we cannot support 'try' and have to use 'catch'.
    #   This then requires a dedicated test (if) on the contents of
    #   ::errorCode to separate the CMDLINE USAGE signal from actual errors.

    set options {
        {a          "set the atime only"}
        {m          "set the mtime only"}
        {c          "do not create non-existent files"}
        {r.arg ""   "use time from ref_file"}
        {t.arg -1   "use specified time"}
    }
    set usage ": MyCommandName \[options] filename ...\noptions:"

    try {
        array set params [::cmdline::getoptions argv $options $usage]
    } trap {CMDLINE USAGE} {msg o} {
        # Trap the usage signal, print the message, and exit the application.
        # Note: Other errors are not caught and passed through to higher levels!
        puts $msg
        exit 1
    }

    if { $params(a) } { set set_atime "true" }
    set has_t [expr {$params(t) != -1}]
    set has_r [expr {[string length $params(r)] > 0}]

    if {$has_t && $has_r} {
        return -code error "Cannot specify both -r and -t"
    } elseif {$has_t} {
        ...
    }

This example, taken (and slightly modified) from the package
__[fileutil](../fileutil/fileutil.md)__, shows how to use cmdline.
First, a list of options is created, then the 'args' list is passed to
cmdline for processing. Subsequently, different options are checked to see
if they have been passed to the script, and what their value is.
Changes to embedded/md/tcllib/files/modules/comm/comm.md.

server for the communication path. As a result, __comm__ works with
multiple interpreters, works on Windows and Macintosh systems, and provides
control over the remote execution path. These commands work just like
__[send](../../../../index.md#send)__ and __winfo interps__:

    ::comm::comm send ?-async? id cmd ?arg arg ...?
    ::comm::comm interps

This is all that is really needed to know in order to use __comm__.

## Commands

The package initializes __::comm::comm__ as the default *chan*.

...

If you find that __::comm::comm send__ doesn't work for a particular
command, try the same thing with Tk's send and see if the result is
different. If there is a problem, please report it. For instance, there was
one report that this command produced an error. Note that the equivalent
__[send](../../../../index.md#send)__ command also produces the same error.

    % ::comm::comm send id llength {a b c}
    wrong # args: should be "llength list"
    % send name llength {a b c}
    wrong # args: should be "llength list"

The __eval__ hook (described below) can be used to change from
__[send](../../../../index.md#send)__'s double eval semantics to single
eval semantics.

## Multiple Channels

...

- __::comm::comm channels__

  This lists all the channels allocated in this Tcl interpreter.

The default configuration parameters for a new channel are:

    -port 0 -local 1 -listen 0 -silent 0

The default channel __::comm::comm__ is created with:

    ::comm::comm new ::comm::comm -port 0 -local 1 -listen 1 -silent 0

## Channel Configuration

The __config__ method acts similarly to __fconfigure__ in that it sets or
queries configuration variables associated with a channel.

- __::comm::comm config__

...

Variables: __chan__, __id__

This hook is invoked before making a connection to the remote named in
*id*. An error return (via __[error](../../../../index.md#error)__) will
abort the connection attempt with the error.
Example:

    % ::comm::comm hook connecting {
        if {[string match {*[02468]} $id]} {
            error "Can't connect to even ids"
        }
    }
    % ::comm::comm send 10000 puts ok
    Connect to remote failed: Can't connect to even ids
    %

- __connected__

  Variables: __chan__, __fid__, __id__, __host__, and

...

Hook invoked when receiving an incoming connection, allowing arbitrary
authentication over socket named by *fid*. An error return (via
__[error](../../../../index.md#error)__) will close the connection with the
error. Note that the peer is named by *remport* and *addr* but that the
remote *id* is still unknown.

Example:

    ::comm::comm hook incoming {
        if {[string match 127.0.0.1 $addr]} {
            error "I don't talk to myself"
        }
    }

- __eval__

  Variables: __chan__, __id__, __cmd__, and __buffer__.

  This hook is invoked after collecting a complete script from a remote but
  *before* evaluating it. This allows complete control over the processing

...

__break__ and __return -code break__ *result* is supported, acting
similarly to __return {}__ and __return -code return__ *result*.

Examples:

1. augmenting a command

       % ::comm::comm send [::comm::comm self] pid
       5013
       % ::comm::comm hook eval {puts "going to execute $buffer"}
       % ::comm::comm send [::comm::comm self] pid
       going to execute pid
       5013

1. short circuiting a command

       % ::comm::comm hook eval {puts "would have executed $buffer"; return 0}
       % ::comm::comm send [::comm::comm self] pid
       would have executed pid
       0

1. Replacing double eval semantics

       % ::comm::comm send [::comm::comm self] llength {a b c}
       wrong # args: should be "llength list"
       % ::comm::comm hook eval {return [uplevel #0 $buffer]}
       return [uplevel #0 $buffer]
       % ::comm::comm send [::comm::comm self] llength {a b c}
       3

1.
Using a slave interpreter

       % interp create foo
       % ::comm::comm hook eval {return [foo eval $buffer]}
       % ::comm::comm send [::comm::comm self] set myvar 123
       123
       % set myvar
       can't read "myvar": no such variable
       % foo eval set myvar
       123

1. Using a slave interpreter (double eval)

       % ::comm::comm hook eval {return [eval foo eval $buffer]}

1. Subverting the script to execute

       % ::comm::comm hook eval {
           switch -- $buffer {
               a {return A-OK}
               b {return B-OK}
               default {error "$buffer is a no-no"}
           }
       }
       % ::comm::comm send [::comm::comm self] pid
       pid is a no-no
       % ::comm::comm send [::comm::comm self] a
       A-OK

- __reply__

  Variables: __chan__, __id__, __buffer__, __ret__, and __return__.

  This hook is invoked after collecting a complete reply script from a remote

...

Variables: __chan__, __id__, and __reason__.

This hook is invoked when the connection to __id__ is lost. Return value
(or thrown error) is ignored. *reason* is an explanatory string indicating
why the connection was lost.

Example:

    ::comm::comm hook lost {
        global myvar
        if {$myvar(id) == $id} {
            myfunc
            return
        }
    }

## Unsupported

These interfaces may change or go away in subsequent releases.

- __::comm::comm remoteid__

...

- __::comm::comm_send__

  Invoking this procedure will substitute the Tk
  __[send](../../../../index.md#send)__ and __winfo interps__ commands with
  these equivalents that use __::comm::comm__.

      proc send {args} {
          eval ::comm::comm send $args
      }
      rename winfo tk_winfo
      proc winfo {cmd args} {
          if {![string match in* $cmd]} {
              return [eval [list tk_winfo $cmd] $args]
          }
          return [::comm::comm interps]
      }

## Security

Starting with version 4.6 of the package an option __-socketcmd__ is
supported, allowing the user of a comm channel to specify which command to
use when opening a socket.
Anything which is API-compatible with the builtin __::socket__ (the
default) can be used.

The envisioned main use is the specification of the __tls::socket__
command, see package __[tls](../../../../index.md#tls)__, to secure the
communication.

    # Load and initialize tls
    package require tls
    tls::init -cafile /path/to/ca/cert -keyfile ...

    # Create secured comm channel
    ::comm::comm new SECURE -socketcmd tls::socket -listen 1
    ...

The sections [Execution Environment](#subsection6) and
[Callbacks](#subsection9) are also relevant to the security of the system,
providing means to restrict the execution to a specific environment,
perform additional authentication, and the like.

## Blocking Semantics

...

being computed the future will not try to deliver the result it got, but
just destroy itself. The future can be configured with a command to call
when the invoker is lost. This enables the user to implement an early abort
of the long-running operation, should this be supported by it.

An example:

    # Procedure invoked by remote clients to run database operations.
    proc select {sql} {
        # Signal the async generation of the result
        set future [::comm::comm return_async]

        # Generate an async db operation and tell it where to deliver the result.
        set query [db query -command [list $future return] $sql]

        # Tell the database system which query to cancel if the connection
        # goes away while it is running.
        $future configure -command [list db cancel $query]

        # Note: The above will work without problem only if the async
        # query will never run its completion callback immediately, but
        # only from the eventloop. Because otherwise the future we wish to
        # configure may already be gone. If that is possible use 'catch'
        # to prevent the error from propagating.
        return
    }

The API of a future object is:

* __$future__ __return__ ?__-code__ *code*? ?*value*?
  Use this method to tell the future that the long-running operation has
  completed. Arguments are an optional return value (defaults to the empty

...

being returned correctly from __comm send__. This has been fixed by
removing the extra level of indirection into the internal procedure
__commSend__. Also added propagation of the *errorCode* variable. This
means that these commands return exactly as they would with
__[send](../../../../index.md#send)__:

    comm send id break
    catch {comm send id break}
    comm send id expr 1 / 0

Added a new hook for reply messages. Reworked method invocation to avoid
the use of comm:* procedures; this also cut the invocation time down by
40%. Documented __comm config__ (as this manual page still listed the
defunct __comm init__!)

...

To handle this change the applications using
__[TLS](../../../../index.md#tls)__ must be patched, and not this package,
nor __[TLS](../../../../index.md#tls)__ itself. Such a patch may be as
simple as generally activating __tls1__ support, as shown in the example
below.

    package require tls
    tls::init -tls1 1 ;# forcibly activate support for the TLS1 protocol

    ... your own application code ...

# Author

John LoVerso, [email protected]

*http://www.opengroup.org/~loverso/tcl-tk/#comm*
Changes to embedded/md/tcllib/files/modules/comm/comm_wire.md.

__concat__enated together by the server to form the full script to execute
on the server side. This emulates the Tcl "eval" semantics. In most cases
it is best to have only one word in the list, a list containing the exact
command.

Examples:

    (a)     {send 1 {{array get tcl_platform}}}
    (b)     {send 1 {array get tcl_platform}}
    (c)     {send 1 {array {get tcl_platform}}}

are all valid representations of the same command. They are generated via

    (a')    send {array get tcl_platform}
    (b')    send array get tcl_platform
    (c')    send array {get tcl_platform}

respectively. Note that (a), generated by (a'), is the usual form, if only
single commands are sent by the client. For example constructed using
__[list](../../../../index.md#list)__, if the command contains variable
arguments.
Like

    send [list array get $the_variable]

These three instructions all invoke the script on the server side. Their
difference is in the treatment of result values, and thus determines if a
reply is expected.

* __send__

...

Like the previous three commands, however the tcl script in the payload is
highly restricted. It has to be a syntactically valid Tcl
__[return](../../../../index.md#return)__ command. This contains result
code, value, error code, and error info.

Examples:

    {reply 1 {return -code 0 {}}}

    {reply 1 {return -code 0 {osVersion 2.4.21-99-default byteOrder
    littleEndian machine i686 platform unix os Linux user andreask
    wordSize 4}}}

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs
and other problems. Please report such in the category *comm* of the
[Tcllib Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report
any ideas for enhancements you may have for either package and/or
documentation.

Changes to embedded/md/tcllib/files/modules/control/control.md.

that debugging efforts can be independently controlled module by module.

    % package require control
    % control::control assert enabled 1
    % namespace eval one namespace import ::control::assert
    % control::control assert enabled 0
    % namespace eval two namespace import ::control::assert
    % one::assert {1 == 0}
    assertion failed: 1 == 0
    % two::assert {1 == 0}

- __control::do__ *body* ?*option test*?

  The __[do](../../../../index.md#do)__ command evaluates the script *body*
  repeatedly *until* the expression *test* becomes true or as long as
  (*while*) *test* is true, depending on the value of *option* being
  __until__ or __while__. If *option* and *test* are omitted

...

-code $code__] within one of those script arguments for any value of
*$code* other than *ok*. In this way, the commands of the __control__
package are limited as compared to Tcl's built-in control flow commands
(such as __if__, __while__, etc.) and those control flow commands that can
be provided by packages coded in C. An example of this difference:

    % package require control
    % proc a {} {while 1 {return -code error a}}
    % proc b {} {control::do {return -code error b} while 1}
    % catch a
    1
    % catch b
    0

# Bugs, Ideas, Feedback
The __[do](\.\./\.\./\.\./\.\./index\.md\#do)__ command evaluates the script *body* repeatedly *until* the expression *test* becomes true or as long as $$*while*$$ *test* is true, depending on the value of *option* being __until__ or __while__\. If *option* and *test* are omitted ................................................................................ \-code $code__\] within one of those script arguments for any value of *$code* other than *ok*\. In this way, the commands of the __control__ package are limited as compared to Tcl's built\-in control flow commands $$such as __if__, __while__, etc\.$$ and those control flow commands that can be provided by packages coded in C\. An example of this difference: % package require control % proc a {} {while 1 {return -code error a}} % proc b {} {control::do {return -code error b} while 1} % catch a 1 % catch b 0 # Bugs, Ideas, Feedback 
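The run-body-first semantics of __control::do__ can be sketched in Python. The `do` helper below is hypothetical, written only to illustrate the while/until behaviour; it is not part of any package:

```python
def do(body, option=None, test=None):
    """Rough analogue of Tcllib's control::do.

    The body runs at least once; afterwards it repeats while the test
    holds (option "while") or until the test holds (option "until").
    """
    body()
    if option == "while":
        while test():
            body()
    elif option == "until":
        while not test():
            body()

# do ... while: the body executes before the test is ever consulted
state = {"n": 0}
do(lambda: state.update(n=state["n"] + 1), "while", lambda: state["n"] < 3)
print(state["n"])  # the body ran 3 times
```

As with the Tcl command, the test is evaluated only after each pass through the body, which is exactly what distinguishes it from a plain `while`.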

Changes to embedded/md/tcllib/files/modules/crc/cksum.md.

Returns the checksum value and releases any resources held by this token\.
Once this command completes the token will be invalid\. The result is a 32 bit
integer value\.

# EXAMPLES

    % crc::cksum "Hello, World!"
    2609532967
    % crc::cksum -format 0x%X "Hello, World!"
    0x9B8A5027
    % crc::cksum -file cksum.tcl
    1828321145
    % set tok [crc::CksumInit]
    % crc::CksumUpdate $tok "Hello, "
    % crc::CksumUpdate $tok "World!"
    % crc::CksumFinal $tok
    2609532967

# AUTHORS

Pat Thoyts
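The POSIX cksum algorithm behind these values (an MSB-first CRC-32 with polynomial 0x04C11DB7, the message length fed in after the data, and a final complement) is compact enough to sketch in Python. This is an independent reimplementation for cross-checking, not Tcllib's code:

```python
def cksum(data: bytes) -> int:
    """POSIX cksum: MSB-first CRC-32 over data plus its length, complemented."""
    crc = 0

    def feed(byte):
        nonlocal crc
        crc ^= byte << 24
        for _ in range(8):
            if crc & 0x80000000:
                crc = ((crc << 1) ^ 0x04C11DB7) & 0xFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFF

    for b in data:
        feed(b)
    n = len(data)
    while n:                      # length bytes, least significant first
        feed(n & 0xFF)
        n >>= 8
    return crc ^ 0xFFFFFFFF

print(cksum(b"Hello, World!"))
```

The printed value should agree with the first `crc::cksum` example above, and with the output of the POSIX `cksum(1)` utility on the same bytes.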

Changes to embedded/md/tcllib/files/modules/crc/crc16.md.

flag is important when processing data from parameters\. If the binary data
looks like one of the options given above then the data will be read as an
option if this marker is not included\. Always use the *\-\-* option
termination flag before giving the data argument\.

# EXAMPLES

    % crc::crc16 -- "Hello, World!"
    64077
    % crc::crc-ccitt -- "Hello, World!"
    26586
    % crc::crc16 -format 0x%X -- "Hello, World!"
    0xFA4D
    % crc::crc16 -file crc16.tcl
    51675

# AUTHORS

Pat Thoyts

# Bugs, Ideas, Feedback
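The plain `crc::crc16` values correspond to the classic reflected CRC-16 (CRC-16/ARC: polynomial 0x8005, processed LSB-first as 0xA001, zero initial value). A minimal Python sketch for comparison, written here for illustration rather than taken from Tcllib:

```python
def crc16_arc(data: bytes) -> int:
    """Bitwise CRC-16/ARC: reflected polynomial 0xA001, init 0, no final xor."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

# The standard check string for CRC catalogues; CRC-16/ARC yields 0xBB3D.
print(hex(crc16_arc(b"123456789")))
```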

Changes to embedded/md/tcllib/files/modules/crc/crc32.md.

Returns the checksum value and releases any resources held by this token\.
Once this command completes the token will be invalid\. The result is a 32 bit
integer value\.

# EXAMPLES

    % crc::crc32 "Hello, World!"
    3964322768
    % crc::crc32 -format 0x%X "Hello, World!"
    0xEC4AC3D0
    % crc::crc32 -file crc32.tcl
    483919716
    % set tok [crc::Crc32Init]
    % crc::Crc32Update $tok "Hello, "
    % crc::Crc32Update $tok "World!"
    % crc::Crc32Final $tok
    3964322768

# AUTHORS

Pat Thoyts
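This is the standard ISO-HDLC/zlib CRC-32 polynomial, so the example values can be reproduced from Python's standard library (an independent cross-check, assuming `crc::crc32`'s default polynomial matches zlib's, which both implement the common PNG/zip CRC-32):

```python
import zlib

value = zlib.crc32(b"Hello, World!") & 0xFFFFFFFF  # mask to an unsigned 32-bit value
print(value)           # decimal form, as in the first example
print(f"0x{value:X}")  # hex form, as produced by -format 0x%X
```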

Changes to embedded/md/tcllib/files/modules/crc/sum.md.

  - \-format *string*

    Return the checksum using an alternative format template\.

# EXAMPLES

    % crc::sum "Hello, World!"
    37287
    % crc::sum -format 0x%X "Hello, World!"
    0x91A7
    % crc::sum -file sum.tcl
    13392

# AUTHORS

Pat Thoyts

# Bugs, Ideas, Feedback
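The default `crc::sum` algorithm is the old BSD checksum: per byte, rotate the 16-bit accumulator right by one, then add the byte. An independent Python rendering of that rule reproduces the first example value:

```python
def bsd_sum(data: bytes) -> int:
    """BSD checksum: rotate the 16-bit accumulator right, then add the byte."""
    s = 0
    for byte in data:
        s = (s >> 1) | ((s & 1) << 15)  # 16-bit rotate right by one
        s = (s + byte) & 0xFFFF
    return s

print(bsd_sum(b"Hello, World!"))  # → 37287, as in the crc::sum example
```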

Changes to embedded/md/tcllib/files/modules/cron/cron.md.

*timecode*\. If *timecode* is expressed as an integer, the timecode is
assumed to be in unixtime\. All other inputs will be interpreted by __clock
scan__ and converted to unix time\. This task can be modified by subsequent
calls to this package's commands by referencing *processname*\. If
*processname* exists, it will be replaced\. If *processname* is not given,
one is generated and returned by the command\.

    ::cron::at start_coffee {Tomorrow at 9:00am} {remote::exec::coffeepot power on}
    ::cron::at shutdown_coffee {Tomorrow at 12:00pm} {remote::exec::coffeepot power off}

  - __::cron::cancel__ *processname*

    This command unregisters the process *processname* and cancels any pending
    commands\. Note: processname can be a process created by either
    __::cron::at__ or __::cron::every__\.

        ::cron::cancel check_mail

  - __::cron::every__ *processname* *frequency* *command*

    This command registers a *command* to be called at the interval of
    *frequency*\. *frequency* is given in seconds\. This task can be modified
    by subsequent calls to this package's commands by referencing
    *processname*\. If *processname* exists, it will be replaced\.

        ::cron::every check_mail 900 ::imap_client::check_mail
        ::cron::every backup_db 3600 {::backup_procedure ::mydb}

  - __::cron::in__ *?processname?* *timecode* *command*

    This command registers a *command* to be called after a delay of time
    specified by *timecode*\. *timecode* is expressed in seconds\. This task
    can be modified by subsequent calls to this package's commands by
    referencing *processname*\. If *processname* exists, it will be
    replaced\.

................................................................................

If the ::cron::time variable is > 0 this command will advance the internal
time, 100ms at a time\. In all other cases this command will generate a
fictitious variable, generate an after call, and vwait the variable:

    set eventid [incr ::cron::eventcount]
    set var ::cron::event_#$eventid
    set $var 0
    ::after $ms "set $var 1"
    ::vwait $var
    ::unset $var

Usage:

................................................................................

does so\.

  - __::cron::wake__ *?who?*

    Wake up cron, and arrange for its event loop to be run during the next Idle
    cycle\.

        ::cron::wake {I just did something important}

Several utility commands are provided that are used internally within cron and
for testing cron, but may or may not be useful in the general cases\.

  - __::cron::clock\_step__ *milliseconds*

    Return a clock time absolute to the epoch which falls on the next border
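The `::cron::every` pattern — a task that re-arms itself at a fixed interval — can be approximated with Python's standard `sched` module. This is a toy sketch with made-up names (`every`, `firings`), bounded to a fixed number of firings so that it terminates:

```python
import sched
import time

scheduler = sched.scheduler(time.monotonic, time.sleep)

def every(interval, command, firings):
    """Run command every interval seconds, re-arming itself firings times."""
    def tick(remaining):
        command()
        if remaining > 1:
            scheduler.enter(interval, 1, tick, (remaining - 1,))
    scheduler.enter(interval, 1, tick, (firings,))

ticks = []
every(0.01, lambda: ticks.append(time.monotonic()), 3)
scheduler.run()   # blocks until all scheduled events have fired
print(len(ticks))
```

Unlike the real package there is no named-process table here, so there is no equivalent of replacing or cancelling a task by *processname*.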

Changes to embedded/md/tcllib/files/modules/csv/csv.md.

The alternate format is activated through specification of the option
__\-alternate__ to the various split commands\.

# EXAMPLE

Using the regular format the record

    123,"123,521.2","Mary says ""Hello, I am Mary""",""

is parsed into the items

    a) 123
    b) 123,521.2
    c) Mary says "Hello, I am Mary"
    d) "

Using the alternate format the result is

    a) 123
    b) 123,521.2
    c) Mary says "Hello, I am Mary"
    d) (the empty string)

instead\. As can be seen only item \(d\) is different, now the empty string
instead of a "\.

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and
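The alternate format's reading of the record — doubled quotes as an escaped quote, a trailing `""` as an empty field — matches what most CSV implementations do. Python's standard csv module, for comparison:

```python
import csv
import io

record = '123,"123,521.2","Mary says ""Hello, I am Mary""",""'
items = next(csv.reader(io.StringIO(record)))
print(items)
# ['123', '123,521.2', 'Mary says "Hello, I am Mary"', '']
```

Note the last field comes out as the empty string, i.e. the same result as csv's alternate format, not the regular one.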

Changes to embedded/md/tcllib/files/modules/defer/defer.md.

identifier returned by __::defer::defer__, __::defer::with__, or
__::defer::autowith__\. Any number of arguments may be supplied, and all of
the IDs supplied will be cancelled\.

# EXAMPLES

    package require defer 1
    apply {{} {
        set fd [open /dev/null]
        defer::defer close $fd
    }}

# REFERENCES

# AUTHORS

Roy Keene
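The same cleanup-on-scope-exit idiom exists in Python's standard library as `contextlib.ExitStack`; a minimal sketch for comparison with the Tcl example:

```python
import os
from contextlib import ExitStack

def work():
    with ExitStack() as stack:
        fd = open(os.devnull, "w")
        stack.callback(fd.close)   # deferred: runs when the with-block exits
        # ... use fd here ...
        return fd

f = work()
print(f.closed)  # True: the deferred close ran on scope exit
```

As with `defer::defer`, the cleanup is registered next to the acquisition and runs automatically when the enclosing scope unwinds, even on an exception.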

Changes to embedded/md/tcllib/files/modules/des/des.md.

OFB is similar to CFB except that the output of the cipher is fed back into the
next round and not the xor'd plain text\. This means that errors only affect a
single block but the cipher is more vulnerable to attack\.

# EXAMPLES

    % set ciphertext [DES::des -mode cbc -dir encrypt -key $secret $plaintext]
    % set plaintext [DES::des -mode cbc -dir decrypt -key $secret $ciphertext]

    set iv [string repeat \\0 8]
    set Key [DES::Init cbc \\0\\1\\2\\3\\4\\5\\6\\7 $iv]
    set ciphertext [DES::Encrypt $Key "somedata"]
    append ciphertext [DES::Encrypt $Key "moredata"]
    DES::Reset $Key $iv
    set plaintext [DES::Decrypt $Key $ciphertext]
    DES::Final $Key

# REFERENCES

  1. "Data Encryption Standard", Federal Information Processing Standards
     Publication 46\-3, 1999,
     \([http://csrc\.nist\.gov/publications/fips/fips46\-3/fips46\-3\.pdf](http://csrc\.nist\.gov/publications/fips/fips46\-3/fips46\-3\.pdf)\)

Changes to embedded/md/tcllib/files/modules/dicttool/dicttool.md.

  - __rmerge__ *args*

    Return a dict which is the product of a recursive merge of all of the
    arguments\. Unlike __dict merge__, this command descends into all of the
    levels of a dict\. Dict keys which end in a : indicate a leaf, which will be
    interpreted as a literal value, and not descended into further\.

        set items [dict merge {
            option {color {default: green}}
        } {
            option {fruit {default: mango}}
        } {
            option {color {default: blue} fruit {widget: select values: {mango apple cherry grape}}}
        }]
        puts [dict print $items]

    Prints the following result:

        option {
            color {
                default: blue
            }
            fruit {
                widget: select
                values: {mango apple cherry grape}
            }
        }

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and
other problems\. Please report such in the category *dict* of the [Tcllib
Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any
ideas for enhancements you may have for either package and/or documentation\.
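A rough Python analogue of a recursive merge with `:`-suffixed leaf keys. This is one simplified reading of the semantics, written only to illustrate the leaf-marker idea; Tcllib's exact merge rules may differ in detail:

```python
def rmerge(*dicts):
    """Recursively merge dicts; keys ending in ':' are leaves and are replaced."""
    out = {}
    for d in dicts:
        for key, value in d.items():
            if (isinstance(value, dict) and not key.endswith(":")
                    and isinstance(out.get(key), dict)):
                out[key] = rmerge(out[key], value)   # descend and merge
            else:
                out[key] = value                     # leaf or new key: replace
    return out

merged = rmerge({"option": {"color": {"default:": "green"}}},
                {"option": {"color": {"default:": "blue"}}})
print(merged)
# {'option': {'color': {'default:': 'blue'}}}
```

The leaf marker is what keeps `default:` from being treated as a nested dict to merge into: a later value simply overwrites the earlier one.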

Changes to embedded/md/tcllib/files/modules/dns/tcllib_dns.md.

users system\. On a unix machine this parses the /etc/resolv\.conf file for
nameservers \(if it exists\) and on Windows systems we examine certain parts
of the registry\. If no nameserver can be found then the loopback address
\(127\.0\.0\.1\) is used as a default\.

# EXAMPLES

    % set tok [dns::resolve www.tcl.tk]
    ::dns::1
    % dns::status $tok
    ok
    % dns::address $tok
    199.175.6.239
    % dns::name $tok
    www.tcl.tk
    % dns::cleanup $tok

Using DNS URIs as queries:

    % set tok [dns::resolve "dns:tcl.tk;type=MX"]
    % set tok [dns::resolve "dns://l.root-servers.net/www.tcl.tk"]

Reverse address lookup:

    % set tok [dns::resolve 127.0.0.1]
    ::dns::1
    % dns::name $tok
    localhost
    % dns::cleanup $tok

Using DNS over TLS \(RFC 7858\):

    % set tok [dns::resolve www.tcl.tk -nameserver dns-tls.bitwiseshift.net -usetls 1 -cafile /etc/ssl/certs/ca-certificates.crt]
    ::dns::12
    % dns::wait $tok
    ok
    % dns::address $tok
    104.25.119.118 104.25.120.118

# REFERENCES

  1. Mockapetris, P\., "Domain Names \- Concepts and Facilities", RFC 1034,
     November 1987\.
     \([http://www\.ietf\.org/rfc/rfc1034\.txt](http://www\.ietf\.org/rfc/rfc1034\.txt)\)

Changes to embedded/md/tcllib/files/modules/dns/tcllib_ip.md.

suitable for displaying to users\.

  - __::ip::distance__ *ipaddr1* *ipaddr2*

    This command computes the \(integer\) distance from IPv4 address
    *ipaddr1* to IPv4 address *ipaddr2*, i\.e\. "ipaddr2 \- ipaddr1"

        % ::ip::distance 1.1.1.1 1.1.1.5
        4

  - __::ip::nextIp__ *ipaddr* ?*offset*?

    This command adds the integer *offset* to the IPv4 address *ipaddr* and
    returns the new IPv4 address\.

        % ::ip::nextIp 1.1.1.1 4
        1.1.1.5

  - __::ip::prefix__ *address*

    Returns the address prefix generated by masking the address part with the
    mask if provided\. If there is no mask then it is equivalent to calling
    __normalize__

................................................................................

  - __::ip::prefixToNative__ *prefix*

    This command converts the string *prefix* from dotted form \(/ format\)
    to native \(hex\) form\. Returns a list containing two elements, ipaddress
    and mask, in this order, in hexadecimal notation\.

        % ip::prefixToNative 1.1.1.0/24
        0x01010100 0xffffff00

  - __::ip::nativeToPrefix__ *nativeList*|*native* ?__\-ipv4__?

    This command converts from native \(hex\) form to dotted form\. It is the
    complement of __::ip::prefixToNative__\.

................................................................................

      * list *native* \(in\)

        A list as returned by __::ip::prefixToNative__\.

    The command returns a list of addresses in dotted form if it was called with
    a list of addresses\. Otherwise a single address in dotted form is
    returned\.

        % ip::nativeToPrefix {0x01010100 0xffffff00} -ipv4
        1.1.1.0/24

  - __::ip::intToString__ *number* ?__\-ipv4__?

    This command converts from an ip address specified as integer number to
    dotted form\.

        ip::intToString 4294967295
        255.255.255.255

  - __::ip::toInteger__ *ipaddr*

    This command converts a dotted form ip into an integer number\.

        % ::ip::toInteger 1.1.1.0
        16843008

  - __::ip::toHex__ *ipaddr*

    This command converts dotted form ip into a hexadecimal number\.

        % ::ip::toHex 1.1.1.0
        0x01010100

  - __::ip::maskToInt__ *ipmask*

    This command converts an ipmask in either dotted \(255\.255\.255\.0\)
    form or mask length form \(24\) into an integer number\.

................................................................................

  - __::ip::broadcastAddress__ *prefix* ?__\-ipv4__?

    This command returns a broadcast address in dotted form for the given route
    *prefix*, either in the form "addr/mask", or in native form\. The result
    is in dotted form\.

        ::ip::broadcastAddress 1.1.1.0/24
        1.1.1.255

        ::ip::broadcastAddress {0x01010100 0xffffff00}
        0x010101ff

  - __::ip::maskToLength__ *dottedMask*|*integerMask*|*hexMask* ?__\-ipv4__?

    This command converts the dotted or integer form of an ipmask to the mask
    length form\.

        ::ip::maskToLength 0xffffff00 -ipv4
        24

        % ::ip::maskToLength 255.255.255.0
        24

  - __::ip::lengthToMask__ *maskLength* ?__\-ipv4__?

    This command converts an ipmask in mask length form to its dotted form\.

        ::ip::lengthToMask 24
        255.255.255.0

  - __::ip::nextNet__ *ipaddr* *ipmask* ?*count*? ?__\-ipv4__?

    This command returns an ipaddress in the same position in the *count* next
    network\. The default value for *count* is __1__\. The address can be
    specified as either integer number or in dotted form\. The

................................................................................

  - __::ip::isOverlap__ *prefix* *prefix*\.\.\.

    This command checks if the given ip prefixes overlap\. All arguments are in
    dotted "addr/mask" form\. All arguments after the first prefix are compared
    against the first prefix\. The result is a boolean value\. It is true if an
    overlap was found for any of the prefixes\.

        % ::ip::isOverlap 1.1.1.0/24 2.1.0.1/32
        0

        ::ip::isOverlap 1.1.1.0/24 2.1.0.1/32 1.1.1.1/32
        1

  - __::ip::isOverlapNative__ ?__\-all__? ?__\-inline__? ?__\-ipv4__?
    *hexipaddr* *hexipmask* *hexiplist*

    This command is similar to __::ip::isOverlap__, however the arguments
    are in the native form, and the form of the result is under greater control
    of the caller\. If the option __\-all__ is specified it checks all

................................................................................

        The first overlapping prefix, or an empty string if there is none\.

      * \-all \-inline

        A list containing the prefixes of all overlaps found, or an empty list
        if there are none\.

        % ::ip::isOverlapNative 0x01010100 0xffffff00 {{0x02010001 0xffffffff}}
        0

        % ::ip::isOverlapNative 0x01010100 0xffffff00 {{0x02010001 0xffffffff} {0x01010101 0xffffffff}}
        2

  - __::ip::ipToLayer2Multicast__ *ipaddr*

    This command converts an ipv4 address in dotted form into a layer 2
    multicast address, also in dotted form\.

        % ::ip::ipToLayer2Multicast 224.0.0.2
        01.00.5e.00.00.02

  - __::ip::ipHostFromPrefix__ *prefix* ?__\-exclude__ *prefixExcludeList*?

    This command returns a host address from a prefix in the form
    "ipaddr/masklen", also making sure that the result is not an address found
    in the *prefixExcludeList*\. The result is an ip address in dotted form\.

        %::ip::ipHostFromPrefix 1.1.1.5/24
        1.1.1.1

        %::ip::ipHostFromPrefix 1.1.1.1/32
        1.1.1.1

  - __::ip::reduceToAggregates__ *prefixlist*

    This command finds nets that overlap and filters out the more specific
    nets\. The prefixes are in either addr/mask form or in native format\. The
    result is a list containing the non\-overlapping ip prefixes from the
    input\.

        % ::ip::reduceToAggregates {1.1.1.0/24 1.1.0.0/8 2.1.1.0/24 1.1.1.1/32 }
        1.0.0.0/8 2.1.1.0/24

  - __::ip::longestPrefixMatch__ *ipaddr* *prefixlist* ?__\-ipv4__?

    This command finds the longest prefix match from a set of prefixes, given a
    specific host address\. The prefixes in the list are in either native or
    dotted form, whereas the host address is in either ipprefix format, dotted
    form, or integer form\. The result is the prefix which is the most specific
    match to the host address\.

        % ::ip::longestPrefixMatch 1.1.1.1 {1.1.1.0/24 1.0.0.0/8 2.1.1.0/24 1.1.1.0/28 }
        1.1.1.0/28

  - __::ip::collapse__ *prefixlist*

    This command takes a list of prefixes and returns a list of prefixes with
    the largest possible subnet masks covering the input, in this manner
    collapsing adjacent prefixes into larger ranges\. This is different from
    __::ip::reduceToAggregates__ in that the latter only removes specific
    nets from a list when they are covered by other elements of the input,
    whereas this command actively merges nets into larger ranges when they are
    adjacent to each other\.

        % ::ip::collapse {1.2.2.0/24 1.2.3.0/24}
        1.2.2.0/23

  - __::ip::subtract__ *prefixlist*

    This command takes a list of prefixes, some of which are prefixed by a
    dash\. These latter *negative* prefixes are used to punch holes into the
    ranges described by the other, *positive*, prefixes\. I\.e\. the negative
    prefixes are subtracted from the positive ones, resulting in a larger list
    of prefixes describing the covered ranges only as positives\.

# EXAMPLES

    % ip::version ::1
    6
    % ip::version 127.0.0.1
    4
    % ip::normalize 127/8
    127.0.0.0/8
    % ip::contract 192.168.0.0
    192.168
    % ip::normalize fec0::1
    fec0:0000:0000:0000:0000:0000:0000:0001
    % ip::contract fec0:0000:0000:0000:0000:0000:0000:0001
    fec0::1
    % ip::equal 192.168.0.4/16 192.168.0.0/16
    1
    % ip::equal fec0::1/10 fec0::fe01/10
    1

# REFERENCES

  1. Postel, J\. "Internet Protocol\." RFC 791, September 1981,
This is different from __::ip::reduceToAggregates__ in that the latter only removes specific nets from a list when they are covered by other elements of the input whereas this command actively merges nets into larger ranges when they are adjacent to each other\. % ::ip::collapse {1.2.2.0/24 1.2.3.0/24} 1.2.2.0/23 - __::ip::subtract__ *prefixlist* This command takes a list of prefixes, some of which are prefixed by a dash\. These latter *negative* prefixes are used to punch holes into the ranges described by the other, *positive*, prefixes\. I\.e\. the negative prefixes are subtracted frrom the positive ones, resulting in a larger list of describes describing the covered ranges only as positives\. # EXAMPLES % ip::version ::1 6 % ip::version 127.0.0.1 4 % ip::normalize 127/8 127.0.0.0/8 % ip::contract 192.168.0.0 192.168 % % ip::normalize fec0::1 fec0:0000:0000:0000:0000:0000:0000:0001 % ip::contract fec0:0000:0000:0000:0000:0000:0000:0001 fec0::1 % ip::equal 192.168.0.4/16 192.168.0.0/16 1 % ip::equal fec0::1/10 fec0::fe01/10 1 # REFERENCES 1. Postel, J\. "Internet Protocol\." RFC 791, September 1981, 

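The prefix arithmetic documented above (broadcast addresses, longest prefix match, collapsing adjacent nets) can be cross-checked with Python's standard `ipaddress` module. The sketch below only illustrates the underlying arithmetic of the examples shown; it does not use or model the Tcl package's API.

```python
import ipaddress

# Broadcast address of a route prefix (cf. ::ip::broadcastAddress 1.1.1.0/24).
bcast = ipaddress.ip_network("1.1.1.0/24").broadcast_address

# Merge adjacent prefixes (cf. ::ip::collapse {1.2.2.0/24 1.2.3.0/24}).
collapsed = list(ipaddress.collapse_addresses(
    [ipaddress.ip_network("1.2.2.0/24"), ipaddress.ip_network("1.2.3.0/24")]))

# Longest prefix match (cf. ::ip::longestPrefixMatch 1.1.1.1 {...}):
# among the prefixes containing the host, pick the most specific one.
prefixes = [ipaddress.ip_network(p)
            for p in ("1.1.1.0/24", "1.0.0.0/8", "2.1.1.0/24", "1.1.1.0/28")]
addr = ipaddress.ip_address("1.1.1.1")
best = max((n for n in prefixes if addr in n), key=lambda n: n.prefixlen)

print(bcast)      # 1.1.1.255
print(collapsed)  # [IPv4Network('1.2.2.0/23')]
print(best)       # 1.1.1.0/28
```

The printed results match the Tcl examples above, which is a quick way to sanity-check the dotted/CIDR arithmetic independently of the package.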
Changes to embedded/md/tcllib/files/modules/docstrip/docstrip.md.

The basic units __docstrip__ operates on are the *lines* of a master source
file\. Extraction consists of selecting some of these lines to be copied
from input text to output text\. The basic distinction is that between *code
lines* \(which are copied and do not begin with a percent character\) and
*comment lines* \(which begin with a percent character and are not
copied\)\.

    docstrip::extract [join {
       {% comment}
       {% more comment !"#$%&/(}
       {some command}
       {  % blah $blah "Not a comment."}
       {% abc;   this is comment}
       {# def;   this is code}
       {ghi}
       {% jkl}
    } \n] {}

returns the same sequence of lines as

    join {
       {some command}
       {  % blah $blah "Not a comment."}
       {# def;   this is code}
       {ghi}
       ""
    } \n

It does not matter to __docstrip__ what format is used for the documentation
in the comment lines, but in order to do better than plain text comments,
one typically uses some markup language\. Most commonly LaTeX is used, as
that is a very established standard and also provides the best support for
mathematical formulae, but the __docstrip::util__ package also gives some
support for *[doctools](\.\./\.\./\.\./\.\./index\.md\#doctools)*\-like
markup\.

................................................................................

line is one of

    '%' '<' STARSLASH EXPRESSION '>'
    '%' '<' PLUSMINUS EXPRESSION '>' CODE

where

    STARSLASH  ::= '*' | '/'
    PLUSMINUS  ::= | '+' | '-'
    EXPRESSION ::= SECONDARY | SECONDARY ',' EXPRESSION | SECONDARY '|' EXPRESSION
    SECONDARY  ::= PRIMARY | PRIMARY '&' SECONDARY
    PRIMARY    ::= TERMINAL | '!' PRIMARY | '(' EXPRESSION ')'
    CODE       ::= { any character except end-of-line }

Comma and vertical bar both denote 'or'\. Ampersand denotes 'and'\.
Exclamation mark denotes 'not'\. A TERMINAL can be any nonempty string of
characters not containing '>', '&', '|', comma, '\(', or '\)', although the
__docstrip__ manual is a bit restrictive and only guarantees proper
operation for strings of letters \(although even the LaTeX core sources make
heavy use also of digits in TERMINALs\)\. The second argument of
__docstrip::extract__ is the list of

................................................................................

TERMINALs count as being 'false' when guard expressions are evaluated\. In
the case of a '%<\**EXPRESSION*>' guard, the lines guarded are all lines up
to the next '%</*EXPRESSION*>' guard with the same *EXPRESSION* \(compared
as strings\)\. The blocks of code delimited by such '\*' and '/' guard lines
must be properly nested\.

    set text [join {
       {begin}
       {%<*foo>}
       {1}
       {%<*bar>}
       {2}
       {%</bar>}
       {%<*!bar>}
       {3}
       {%</!bar>}
       {4}
       {%</foo>}
       {5}
       {%<*bar>}
       {6}
       {%</bar>}
       {end}
    } \n]
    set res [docstrip::extract $text foo]
    append res [docstrip::extract $text {foo bar}]
    append res [docstrip::extract $text bar]

sets $res to the result of

    join {
       {begin}
       {1}
       {3}
       {4}
       {5}
       {end}
       {begin}
       {1}
       {2}
       {4}
       {5}
       {6}
       {end}
       {begin}
       {5}
       {6}
       {end}
       ""
    } \n

In guard lines without a '\*', '/', '\+', or '\-' modifier after the '%<',
the guard applies only to the CODE following the '>' on that single line\. A
'\+' modifier is equivalent to no modifier\. A '\-' modifier is like the
case with no modifier, but the expression is implicitly negated, i\.e\., the
CODE of a '%<\-' guard line is only included if the expression evaluates to
false\.

................................................................................

Metacomment lines are "comment lines which should not be stripped away", but
be extracted like code lines; these are sometimes used for copyright notices
and similar material\. The '%%' prefix is however not kept, but substituted
by the current __\-metaprefix__, which is customarily set to some "comment
until end of line" character \(or character sequence\) of the language of
the code being extracted\.

    set text [join {
       {begin}
       {%<foo> foo}
       {%<+foo>plusfoo}
       {%<-foo>minusfoo}
       {middle}
       {%% some metacomment}
       {%<*foo>}
       {%%another metacomment}
       {%</foo>}
       {end}
    } \n]
    set res [docstrip::extract $text foo -metaprefix {# }]
    append res [docstrip::extract $text bar -metaprefix {#}]

sets $res to the result of

    join {
       {begin}
       { foo}
       {plusfoo}
       {middle}
       {# some metacomment}
       {# another metacomment}
       {end}
       {begin}
       {minusfoo}
       {middle}
       {# some metacomment}
       {end}
       ""
    } \n

Verbatim guards can be used to force code line interpretation of a block of
lines even if some of them happen to look like any other type of lines to
docstrip\. A verbatim guard has the form '%<<*END\-TAG*' and the verbatim
block is terminated by the first line that is exactly '%*END\-TAG*'\.

    set text [join {
       {begin}
       {%<*myblock>}
       {some stupid()}
       { #computer}
       {%<<QQQ-98765}
       {% These three lines are copied verbatim (including percents}
       {%% even if -metaprefix is something different than %%).}
       {%}
       {%QQQ-98765}
       { using*[email protected]}
       {%</myblock>}
       {end}
    } \n]
    set res [docstrip::extract $text myblock -metaprefix {# }]
    append res [docstrip::extract $text {}]

sets $res to the result of

    join {
       {begin}
       {some stupid()}
       { #computer}
       {% These three lines are copied verbatim (including percents}
       {%% even if -metaprefix is something different than %%).}
       {%}
       { using*[email protected]}
       {end}
       {begin}
       {end}
       ""
    } \n

The processing of verbatim guards takes place also inside blocks of lines
which due to some outer block guard will not be copied\.

The final piece of __docstrip__ syntax is that extraction stops at a line
that is exactly "\\endinput"; this is often used to avoid copying random
whitespace at the end of a file\. In the unlikely case that one wants such a
code

................................................................................

that files employing that document format are given the suffix "\.ddt", to
distinguish them from the more traditional LaTeX\-based "\.dtx" files\.

Master source files with "\.dtx" extension are usually set up so that they
can be typeset directly by
__[latex](\.\./\.\./\.\./\.\./index\.md\#latex)__ without any support from
other files\. This is achieved by beginning the file with the lines

    % \iffalse
    %<*driver>
    \documentclass{tclldoc}
    \begin{document}
    \DocInput{*filename.dtx*}
    \end{document}
    %</driver>
    % \fi

or some variation thereof\. The trick is that the file gets read twice\.
With normal LaTeX reading rules, the first two lines are comments and
therefore ignored\. The third line is the document preamble, the fourth line
begins the document body, and the sixth line ends the document, so LaTeX
stops there — non\-comments below that point in the file are never subjected
to the normal LaTeX reading rules\. Before that, however, the \\DocInput
command on the fifth

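The guard semantics described above can be made concrete with a small sketch. The following Python function is a toy re-implementation of docstrip's line selection, for illustration only: it handles block guards ('%<*EXPR>' ... '%</EXPR>'), single-line guards with the '+' and '-' modifiers, and plain '%' comments, but it evaluates only the simplest guard expressions (a terminal or its negation, not the full '&', '|', ',' grammar) and ignores metacomments and verbatim guards.

```python
def extract(lines, terminals):
    """Toy sketch of docstrip's guard handling (illustration only)."""
    out, stack = [], []

    def holds(expr):
        # Simplified EXPRESSION: only TERMINAL or '!' TERMINAL.
        if expr.startswith("!"):
            return expr[1:] not in terminals
        return expr in terminals

    for line in lines:
        active = all(stack)            # inside only 'true' block guards?
        if line.startswith("%<*"):     # block guard opens
            stack.append(holds(line[3:line.index(">")]))
        elif line.startswith("%</"):   # block guard closes
            stack.pop()
        elif line.startswith("%<"):    # single-line guard
            body = line[2:]
            negate = body.startswith("-")
            body = body.lstrip("+-")
            expr, _, code = body.partition(">")
            if active and (holds(expr) != negate):
                out.append(code)
        elif line.startswith("%"):     # ordinary comment line: stripped
            pass
        elif active:                   # code line
            out.append(line)
    return out

# The document's own nesting example, extracted with terminal 'foo':
text = ["begin", "%<*foo>", "1", "%<*bar>", "2", "%</bar>",
        "%<*!bar>", "3", "%</!bar>", "4", "%</foo>", "5",
        "%<*bar>", "6", "%</bar>", "end"]
print(extract(text, {"foo"}))  # ['begin', '1', '3', '4', '5', 'end']
```

Running the same input with terminal sets {foo bar} and {bar} reproduces the other two extraction results shown in the example above, which is a useful way to internalize how nested block guards compose.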
Changes to embedded/md/tcllib/files/modules/docstrip/docstrip_util.md.

 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 ... 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 ... 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 ... 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 ... 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619  terminal '__docstrip\.tcl::catalogue__'\. This supports both the style of collecting all catalogue lines in one place and the style of putting each catalogue line in close proximity of the code that it declares\. Putting catalogue entries next to the code they declare may look as follows % First there's the catalogue entry % \\begin\{tcl\} %pkgProvide foo::bar 1\.0 \{foobar load\} % \\end\{tcl\} % second a metacomment used to include a copyright message % \\begin\{macrocode\} %<\*foobar> %% This file is placed in the public domain\. % \\end\{macrocode\} % third the package implementation % \\begin\{tcl\} namespace eval foo::bar \{ \# \.\.\. some clever piece of Tcl code elided \.\.\. % \\end\{tcl\} % which at some point may have variant code to make use of a % |load|able extension % \\begin\{tcl\} %<\*load> load $file rootname \[info script$\]$info sharedlibextension$ % %<\*\!load> \# \.\.\. even more clever scripted counterpart of the extension \# also elided \.\.\. % \} % % \\end\{tcl\} % and that's it\! 
The corresponding set\-up with __pkgIndex__ would be % First there's the catalogue entry % \\begin\{tcl\} %pkgIndex foobar load % \\end\{tcl\} % second a metacomment used to include a copyright message % \\begin\{tcl\} %<\*foobar> %% This file is placed in the public domain\. % \\end\{tcl\} % third the package implementation % \\begin\{tcl\} package provide foo::bar 1\.0 namespace eval foo::bar \{ \# \.\.\. some clever piece of Tcl code elided \.\.\. % \\end\{tcl\} % which at some point may have variant code to make use of a % |load|able extension % \\begin\{tcl\} %<\*load> load $file rootname \[info script$\]$info sharedlibextension$ % %<\*\!load> \# \.\.\. even more clever scripted counterpart of the extension \# also elided \.\.\. % \} % % \\end\{tcl\} % and that's it\! - __docstrip::util::index\_from\_catalogue__ *dir* *pattern* ?*option* *value* \.\.\.? This command is a sibling of the standard __pkg\_mkIndex__ command, in that it adds package entries to "pkgIndex\.tcl" files\. The difference is that it indexes __[docstrip](docstrip\.md)__\-style source files rather than raw "\.tcl" or loadable library files\. Only packages listed in the ................................................................................ An existing file of the same name as one to be created will be overwritten\. - __docstrip::util::classical\_preamble__ *metaprefix* *message* *target* ?*source* *terminals* \.\.\.? This command returns a preamble in the classical __[docstrip](docstrip\.md)__ style \#\# \#\# This is \TARGET', \#\# generated by the docstrip::util package\. \#\# \#\# The original source files were: \#\# \#\# SOURCE $$with options: \foo,bar'$$ \#\# \#\# Some message line 1 \#\# line2 \#\# line3 if called as docstrip::util::classical\_preamble \{\#\#\}\\ "\\nSome message line 1\\nline2\\nline3" TARGET SOURCE \{foo bar\} The command supports preambles for files generated from multiple sources, even though __modules\_from\_catalogue__ at present does not need that\. 
- __docstrip::util::classical\_postamble__ *metaprefix* *message* *target* ?*source* *terminals* \.\.\.? This command returns a postamble in the classical __[docstrip](docstrip\.md)__ style \#\# Some message line 1 \#\# line2 \#\# line3 \#\# \#\# End of file \TARGET'\. if called as docstrip::util::classical\_postamble \{\#\#\}\\ "Some message line 1\\nline2\\nline3" TARGET SOURCE \{foo bar\} In other words, the *source* and *terminals* arguments are ignored, but supported for symmetry with __classical\_preamble__\. - __docstrip::util::packages\_provided__ *text* ?*setup\-script*? This command returns a list where every even index element is the name of a ................................................................................ *setup\-script* is evaluated in the local context of the __packages\_provided__ procedure just before the *text* is processed\. At that time, the name of the slave command for the safe interpreter that will do this processing is kept in the local variable __c__\. To for example copy the contents of the __::env__ array to the safe interpreter, one might use a *setup\-script* of $c eval $list array set env \[array get ::env$\] # Source processing commands Unlike the previous group of commands, which would use __docstrip::extract__ to extract some code lines and then process those further, the following commands operate on text consisting of all types of lines\. ................................................................................ __emph__asised\. At the time of writing, no project has employed __[doctools](\.\./doctools/doctools\.md)__ markup in master source files, so experience of what works well is not available\. 
A source file could however look as follows % $manpage\_begin gcd n 1\.0$ % $keywords divisor$ % $keywords math$ % $moddesc \{Greatest Common Divisor\}$ % $require gcd \[opt 1\.0$\] % $description$ % % $list\_begin definitions$ % $call \[cmd gcd$ $arg a$ $arg b$\] % The $cmd gcd$ procedure takes two arguments $arg a$ and $arg b$ which % must be integers and returns their greatest common divisor\. proc gcd \{a b\} \{ % The first step is to take the absolute values of the arguments\. % This relieves us of having to worry about how signs will be treated % by the remainder operation\. set a $expr \{abs$$a$$\}$ set b $expr \{abs$$b$$\}$ % The next line does all of Euclid's algorithm\! We can make do % without a temporary variable, since$a is substituted before the % $lb$set a $b$rb$ and thus continues to hold a reference to the % "old" value of $var a$\. while \{$b>0\} \{ set b $expr \{ a % \[set a b$ \}\] \} % In Tcl 8\.3 we might want to use $cmd set$ instead of $cmd return$ % to get the slight advantage of byte\-compilation\. % set a %<\!tcl83> return $a \} % $list\_end$ % % $manpage\_end$ If the above text is fed through __docstrip::util::ddt2man__ then the result will be a syntactically correct __[doctools](\.\./doctools/doctools\.md)__ manpage, even though its purpose is a bit different\. It is suggested that master source code files with ................................................................................ the header of each hunk specifies which case is at hand\. It is normally necessary to manually review both the return value from __[patch](\.\./\.\./\.\./\.\./index\.md\#patch)__ and the patched text itself, as this command cannot adjust comment lines to match new content\. 
An example use would look like set sourceL $split \[docstrip::util::thefile from\.dtx$ \\n\] set terminals \{foo bar baz\} set fromtext $docstrip::util::thefile from\.tcl$ set difftext $exec diff \-\-unified from\.tcl to\.tcl$ set leftover $docstrip::util::patch sourceL terminals fromtext\\ \[docstrip::util::import\_unidiff difftext$ \-metaprefix \{\#\}\] set F $open to\.dtx w$; puts$F $join sourceL \\n$; close $F return$leftover Here, "from\.dtx" was used as source for "from\.tcl", which someone modified into "to\.tcl"\. We're trying to construct a "to\.dtx" which can be used as source for "to\.tcl"\. - __docstrip::util::thefile__ *filename* ?*option* *value* \.\.\.?   | | | | | | | | | | | | | | | | | | | < > | | | | | | | | | | | | | | | | | | | | | | < > | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | < > | | | | | | | | |  110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 ... 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 ... 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 ... 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 ... 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619  terminal '__docstrip\.tcl::catalogue__'\. This supports both the style of collecting all catalogue lines in one place and the style of putting each catalogue line in close proximity of the code that it declares\. 
Putting catalogue entries next to the code they declare may look as follows % First there's the catalogue entry % \begin{tcl} %pkgProvide foo::bar 1.0 {foobar load} % \end{tcl} % second a metacomment used to include a copyright message % \begin{macrocode} %<*foobar> %% This file is placed in the public domain. % \end{macrocode} % third the package implementation % \begin{tcl} namespace eval foo::bar { # ... some clever piece of Tcl code elided ... % \end{tcl} % which at some point may have variant code to make use of a % |load|able extension % \begin{tcl} %<*load> load [file rootname [info script]][info sharedlibextension] % %<*!load> # ... even more clever scripted counterpart of the extension # also elided ... % } % % \end{tcl} % and that's it! The corresponding set\-up with __pkgIndex__ would be % First there's the catalogue entry % \begin{tcl} %pkgIndex foobar load % \end{tcl} % second a metacomment used to include a copyright message % \begin{tcl} %<*foobar> %% This file is placed in the public domain. % \end{tcl} % third the package implementation % \begin{tcl} package provide foo::bar 1.0 namespace eval foo::bar { # ... some clever piece of Tcl code elided ... % \end{tcl} % which at some point may have variant code to make use of a % |load|able extension % \begin{tcl} %<*load> load [file rootname [info script]][info sharedlibextension] % %<*!load> # ... even more clever scripted counterpart of the extension # also elided ... % } % % \end{tcl} % and that's it! - __docstrip::util::index\_from\_catalogue__ *dir* *pattern* ?*option* *value* \.\.\.? This command is a sibling of the standard __pkg\_mkIndex__ command, in that it adds package entries to "pkgIndex\.tcl" files\. The difference is that it indexes __[docstrip](docstrip\.md)__\-style source files rather than raw "\.tcl" or loadable library files\. Only packages listed in the ................................................................................ 
An existing file of the same name as one to be created will be overwritten\. - __docstrip::util::classical\_preamble__ *metaprefix* *message* *target* ?*source* *terminals* \.\.\.? This command returns a preamble in the classical __[docstrip](docstrip\.md)__ style ## ## This is TARGET', ## generated by the docstrip::util package. ## ## The original source files were: ## ## SOURCE (with options: foo,bar') ## ## Some message line 1 ## line2 ## line3 if called as docstrip::util::classical_preamble {##}\ "\nSome message line 1\nline2\nline3" TARGET SOURCE {foo bar} The command supports preambles for files generated from multiple sources, even though __modules\_from\_catalogue__ at present does not need that\. - __docstrip::util::classical\_postamble__ *metaprefix* *message* *target* ?*source* *terminals* \.\.\.? This command returns a postamble in the classical __[docstrip](docstrip\.md)__ style ## Some message line 1 ## line2 ## line3 ## ## End of file TARGET'. if called as docstrip::util::classical_postamble {##}\ "Some message line 1\nline2\nline3" TARGET SOURCE {foo bar} In other words, the *source* and *terminals* arguments are ignored, but supported for symmetry with __classical\_preamble__\. - __docstrip::util::packages\_provided__ *text* ?*setup\-script*? This command returns a list where every even index element is the name of a ................................................................................ *setup\-script* is evaluated in the local context of the __packages\_provided__ procedure just before the *text* is processed\. At that time, the name of the slave command for the safe interpreter that will do this processing is kept in the local variable __c__\. 
For example, to copy the contents of the __::env__ array to the safe interpreter, one might use a *setup\-script* of

    $c eval [list array set env [array get ::env]]

# Source processing commands

Unlike the previous group of commands, which would use __docstrip::extract__ to extract some code lines and then process those further, the following commands operate on text consisting of all types of lines\.

................................................................................

__emph__asised\. At the time of writing, no project has employed __[doctools](\.\./doctools/doctools\.md)__ markup in master source files, so experience of what works well is not available\. A source file could however look as follows

    % [manpage_begin gcd n 1.0]
    % [keywords divisor]
    % [keywords math]
    % [moddesc {Greatest Common Divisor}]
    % [require gcd [opt 1.0]]
    % [description]
    %
    % [list_begin definitions]
    % [call [cmd gcd] [arg a] [arg b]]
    %   The [cmd gcd] procedure takes two arguments [arg a] and [arg b] which
    %   must be integers and returns their greatest common divisor.
    proc gcd {a b} {
    %   The first step is to take the absolute values of the arguments.
    %   This relieves us of having to worry about how signs will be treated
    %   by the remainder operation.
       set a [expr {abs($a)}]
       set b [expr {abs($b)}]
    %   The next line does all of Euclid's algorithm! We can make do
    %   without a temporary variable, since $a is substituted before the
    %   [lb]set a $b[rb] and thus continues to hold a reference to the
    %   "old" value of [var a].
       while {$b>0} { set b [expr { $a % [set a $b] }] }
    %   In Tcl 8.3 we might want to use [cmd set] instead of [cmd return]
    %   to get the slight advantage of byte-compilation.
    %   set a
    %   return $a
    }
    % [list_end]
    %
    % [manpage_end]

If the above text is fed through __docstrip::util::ddt2man__ then the result will be a syntactically correct __[doctools](\.\./doctools/doctools\.md)__ manpage, even though its purpose is a bit different\.
It is suggested that master source code files with

................................................................................

the header of each hunk specifies which case is at hand\. It is normally necessary to manually review both the return value from __[patch](\.\./\.\./\.\./\.\./index\.md\#patch)__ and the patched text itself, as this command cannot adjust comment lines to match new content\. An example use would look like

    set sourceL [split [docstrip::util::thefile from.dtx] \n]
    set terminals {foo bar baz}
    set fromtext [docstrip::util::thefile from.tcl]
    set difftext [exec diff --unified from.tcl to.tcl]
    set leftover [docstrip::util::patch sourceL $terminals $fromtext \
        [docstrip::util::import_unidiff $difftext] -metaprefix {#}]
    set F [open to.dtx w]; puts $F [join $sourceL \n]; close $F
    return $leftover

Here, "from\.dtx" was used as source for "from\.tcl", which someone modified into "to\.tcl"\. We're trying to construct a "to\.dtx" which can be used as source for "to\.tcl"\.

- __docstrip::util::thefile__ *filename* ?*option* *value* \.\.\.?

Changes to embedded/md/tcllib/files/modules/doctools/changelog.md.

ChangeLog\. Each element/entry is then a list of three elements describing the date of the entry, its author, and the comments made, in this order\. The last item in each element/entry, the comments, is a list of sections\. Each section is described by a list containing two elements, a list of file names, and a string containing the true comment associated with the files of the section\.

Old (special characters escaped inside the verbatim block):

    \{ \{ date author \{ \{ \{file \.\.\.\} commenttext \} \.\.\. \} \} \{\.\.\.\} \}

New (verbatim block emitted as\-is):

    { { date author { { {file ...} commenttext } ... } } {...} }

- __::doctools::changelog::flatten__ *entries*

    This command converts a list of entries as generated by __change::scan__ above into a simpler list of plain text blocks each containing all the information of a single entry\.
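The nested entry structure described above can be taken apart with plain Tcl list commands\. A minimal sketch; the sample entry below is invented purely for illustration, shaped like the documented __scan__ result:

```tcl
# Walk a list of ChangeLog entries of the documented shape:
#   { { date author { { {file ...} commenttext } ... } } ... }
# The sample data here is made up for illustration only.
set entries {
    {2019-04-10 aku {
        {{engine.tcl} {Do not escape special characters in verbatim blocks}}
    }}
}

foreach entry $entries {
    # Each entry is: date, author, list of sections.
    lassign $entry date author sections
    puts "$date ($author)"
    foreach section $sections {
        # Each section is: list of file names, comment text.
        lassign $section files comment
        puts "    [join $files {, }]: $comment"
    }
}
```
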

Changes to embedded/md/tcllib/files/modules/doctools/docidx_lang_intro.md.

 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 ... 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 ... 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180  interspersed between them, except for whitespace\. Each markup command is a Tcl command surrounded by a matching pair of __$__ and __$__\. Inside of these delimiters the usual rules for a Tcl command apply with regard to word quotation, nested commands, continuation lines, etc\. I\.e\. \.\.\. $key \{markup language\}$ \.\.\. \.\.\. $manpage thefile \\\\ \{file description\}$ \.\.\. ## Basic structure The most simple document which can be written in docidx is $index\_begin GROUPTITLE TITLE$ $index\_end$ Not very useful, but valid\. This also shows us that all docidx documents consist of only one part where we will list all keys and their references\. A more useful index will contain at least keywords, or short 'keys', i\.e\. the phrases which were indexed\. So: $index\_begin GROUPTITLE TITLE$ $__key markup__$ $__key \{semantic markup\}$__\] $__key \{docidx markup\}__$ $__key \{docidx language\}__$ $__key \{docidx commands\}__$ $index\_end$ In the above example the command __key__ is used to declare the keyword phrases we wish to be part of the index\. However a truly useful index does not only list the keyword phrases, but will also contain references to documents associated with the keywords\. 
Here is a made\-up index for all the manpages in the module *[base64](\.\./\.\./\.\./\.\./index\.md\#base64)*: $index\_begin tcllib/base64 \{De\- & Encoding\}$ $key base64$ $__manpage base64__$ $key encoding$ $__manpage base64__$ $__manpage uuencode__$ $__manpage yencode__$ $key uuencode$ $__manpage uuencode__$ $key yEnc$ $__manpage yencode__$ $key ydecode$ $__manpage yencode__$ $key yencode$ $__manpage yencode__$ $index\_end$ In the above example the command __[manpage](\.\./\.\./\.\./\.\./index\.md\#manpage)__ is used to insert references to documents, using symbolic file names, with each command belonging to the last __key__ command coming before it\. The other command to insert references is ................................................................................ to be used before the __index\_begin__ command opening the document\. Instead of only whitespace the two templating commands __include__ and __vset__ are also allowed, to enable the writer to either set and/or import configuration settings relevant to the table of contents\. I\.e\. it is possible to write $__include FILE__$ $__vset VAR VALUE__$ $index\_begin GROUPTITLE TITLE$ \.\.\. $index\_end$ Even more important, these two commands are allowed anywhere where a markup command is allowed, without regard for any other structure\. $index\_begin GROUPTITLE TITLE$ $__include FILE__$ $__vset VAR VALUE__$ \.\.\. $index\_end$ The only restriction __include__ has to obey is that the contents of the included file must be valid at the place of the inclusion\. I\.e\. a file included before __index\_begin__ may contain only the templating commands __vset__ and __include__, a file included after a key may contain only manape or url references, and other keys, etc\. ................................................................................ characters, namely __$__ and __$__\. 
These commands, __lb__ and __rb__ respectively, are required because our use of $and$ to bracket markup commands makes it impossible to directly use $and$ within the text\. Our example of their use are the sources of the last sentence in the previous paragraph, with some highlighting added\. \.\.\. These commands, $cmd lb$ and $cmd lb$ respectively, are required because our use of $__lb__$ and $__rb__$ to bracket markup commands makes it impossible to directly use $__lb__$ and $__rb__$ within the text\. \.\.\. # FURTHER READING Now that this document has been digested the reader, assumed to be a *writer* of documentation should be fortified enough to be able to understand the formal *[docidx language syntax](docidx\_lang\_syntax\.md)* specification as well\. From here on out the *[docidx language command   | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |  57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 ... 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 ... 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180  interspersed between them, except for whitespace\. Each markup command is a Tcl command surrounded by a matching pair of __$__ and __$__\. Inside of these delimiters the usual rules for a Tcl command apply with regard to word quotation, nested commands, continuation lines, etc\. I\.e\. ... [key {markup language}] ... ... [manpage thefile \\ {file description}] ... ## Basic structure The most simple document which can be written in docidx is [index_begin GROUPTITLE TITLE] [index_end] Not very useful, but valid\. This also shows us that all docidx documents consist of only one part where we will list all keys and their references\. 
A more useful index will contain at least keywords, or short 'keys', i\.e\. the phrases which were indexed\. So: [index_begin GROUPTITLE TITLE] [__key markup__] [__key {semantic markup}]__] [__key {docidx markup}__] [__key {docidx language}__] [__key {docidx commands}__] [index_end] In the above example the command __key__ is used to declare the keyword phrases we wish to be part of the index\. However a truly useful index does not only list the keyword phrases, but will also contain references to documents associated with the keywords\. Here is a made\-up index for all the manpages in the module *[base64](\.\./\.\./\.\./\.\./index\.md\#base64)*: [index_begin tcllib/base64 {De- & Encoding}] [key base64] [__manpage base64__] [key encoding] [__manpage base64__] [__manpage uuencode__] [__manpage yencode__] [key uuencode] [__manpage uuencode__] [key yEnc] [__manpage yencode__] [key ydecode] [__manpage yencode__] [key yencode] [__manpage yencode__] [index_end] In the above example the command __[manpage](\.\./\.\./\.\./\.\./index\.md\#manpage)__ is used to insert references to documents, using symbolic file names, with each command belonging to the last __key__ command coming before it\. The other command to insert references is ................................................................................ to be used before the __index\_begin__ command opening the document\. Instead of only whitespace the two templating commands __include__ and __vset__ are also allowed, to enable the writer to either set and/or import configuration settings relevant to the table of contents\. I\.e\. it is possible to write [__include FILE__] [__vset VAR VALUE__] [index_begin GROUPTITLE TITLE] ... [index_end] Even more important, these two commands are allowed anywhere where a markup command is allowed, without regard for any other structure\. [index_begin GROUPTITLE TITLE] [__include FILE__] [__vset VAR VALUE__] ... 
[index_end] The only restriction __include__ has to obey is that the contents of the included file must be valid at the place of the inclusion\. I\.e\. a file included before __index\_begin__ may contain only the templating commands __vset__ and __include__, a file included after a key may contain only manape or url references, and other keys, etc\. ................................................................................ characters, namely __$__ and __$__\. These commands, __lb__ and __rb__ respectively, are required because our use of $and$ to bracket markup commands makes it impossible to directly use $and$ within the text\. Our example of their use are the sources of the last sentence in the previous paragraph, with some highlighting added\. ... These commands, [cmd lb] and [cmd lb] respectively, are required because our use of [__lb__] and [__rb__] to bracket markup commands makes it impossible to directly use [__lb__] and [__rb__] within the text. ... # FURTHER READING Now that this document has been digested the reader, assumed to be a *writer* of documentation should be fortified enough to be able to understand the formal *[docidx language syntax](docidx\_lang\_syntax\.md)* specification as well\. From here on out the *[docidx language command 

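An index written in docidx markup, like the examples in the intro above, is converted by instantiating an index processor and feeding it the document text\. A minimal sketch, assuming Tcllib's __doctools::idx__ package is available; the input file name is hypothetical:

```tcl
package require doctools::idx

# Create an index processor object targeting HTML output.
::doctools::idx::new conv -format html

# Read a docidx document (hypothetical file name) and convert it.
set chan [open keywords.inc r]
set text [read $chan]
close $chan

puts [conv format $text]
```
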
Changes to embedded/md/tcllib/files/modules/doctools/docidx_lang_syntax.md.

1. The construct \{ X \} stands for zero or more occurrences of X\.

1. The construct \[ X \] stands for zero or one occurrence of X\.

The syntax:

Old (special characters escaped inside the verbatim block):

    index = defs INDEX\_BEGIN \[ contents \] INDEX\_END \{ \}
    defs = \{ INCLUDE | VSET | \}
    contents = keyword \{ keyword \}
    keyword = defs KEY ref \{ ref \}
    ref = MANPAGE | URL | defs

New (verbatim block emitted as\-is):

    index = defs INDEX_BEGIN [ contents ] INDEX_END { }
    defs = { INCLUDE | VSET | }
    contents = keyword { keyword }
    keyword = defs KEY ref { ref }
    ref = MANPAGE | URL | defs

At last a rule we were unable to capture in the EBNF syntax, as it is about the arguments of the markup commands, something which is not modeled here\.

1. The arguments of all markup commands have to be plain text, and/or text markup commands, i\.e\. one of

Changes to embedded/md/tcllib/files/modules/doctools/docidx_plugin_apiref.md.

1. initialize and shutdown each pass

1. query and initialize engine parameters

After the plugin has been loaded and the frontend commands are established the commands will be called in the following sequence:

Old (special characters escaped inside the verbatim block):

    idx\_numpasses \-> n
    idx\_listvariables \-> vars
    idx\_varset var1 value1
    idx\_varset var2 value2
    \.\.\.
    idx\_varset varK valueK
    idx\_initialize
    idx\_setup 1
    \.\.\.
    idx\_setup 2
    \.\.\.
    \.\.\.
    idx\_setup n
    \.\.\.
    idx\_postprocess
    idx\_shutdown
    \.\.\.

New (verbatim block emitted as\-is):

    idx_numpasses -> n
    idx_listvariables -> vars
    idx_varset var1 value1
    idx_varset var2 value2
    ...
    idx_varset varK valueK
    idx_initialize
    idx_setup 1
    ...
    idx_setup 2
    ...
    ...
    idx_setup n
    ...
    idx_postprocess
    idx_shutdown
    ...

I\.e\. first the number of passes and the set of available engine parameters is established, followed by calls setting the parameters\. That second part is optional\.

After that the plugin is initialized, the specified number of passes executed, the final result run through a global post processing step and at last the

Changes to embedded/md/tcllib/files/modules/doctools/doctoc_lang_intro.md.

 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 .. 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 ... 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 ... 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250  interspersed between them, except for whitespace\. Each markup command is a Tcl command surrounded by a matching pair of __$__ and __$__\. Inside of these delimiters the usual rules for a Tcl command apply with regard to word quotation, nested commands, continuation lines, etc\. I\.e\. \.\.\. $division\_start \{Appendix 1\}$ \.\.\. \.\.\. $item thefile \\\\ label \{file description\}$ \.\.\. ## Basic structure The most simple document which can be written in doctoc is $toc\_begin GROUPTITLE TITLE$ $toc\_end$ This also shows us that all doctoc documents consist of only one part where we will list *items* and *divisions*\. The user is free to mix these as she sees fit\. This is a change from version 1 of the language, which did not allow this mixing, but only the use of either a series of items or a series of divisions\. ................................................................................ Symbolic names are used to preserve the convertibility of this format to any output format\. The actual name of any file will be inserted by the chosen formatting engine when converting the input, based on a mapping from symbolic to actual names given to the engine\. 
Here a made up example for a table of contents of this document: $toc\_begin Doctoc \{Language Introduction\}$ $__item 1 DESCRIPTION__$ $__item 1\.1 \{Basic structure\}__$ $__item 1\.2 Items__$ $__item 1\.3 Divisions__$ $__item 2 \{FURTHER READING\}__$ $toc\_end$ ## Divisions One thing of notice in the last example in the previous section is that the referenced sections actually had a nested structure, something which was expressed in the item labels, by using a common prefix for all the sections nested under section 1\. ................................................................................ - __division\_end__ This command closes the last opened and not yet closed division\. Using this we can recast the last example like this $toc\_begin Doctoc \{Language Introduction\}$ $__division\_start DESCRIPTION__$ $item 1 \{Basic structure\}$ $item 2 Items$ $item 3 Divisions$ $__division\_end__$ $__division\_start \{FURTHER READING\}__$ $__division\_end__$ $toc\_end$ Or, to demonstrate deeper nesting $toc\_begin Doctoc \{Language Introduction\}$ $__division\_start DESCRIPTION__$ $__division\_start \{Basic structure\}__$ $item 1 Do$ $item 2 Re$ $__division\_end__$ $__division\_start Items__$ $item a Fi$ $item b Fo$ $item c Fa$ $__division\_end__$ $__division\_start Divisions__$ $item 1 Sub$ $item 1 Zero$ $__division\_end__$ $__division\_end__$ $__division\_start \{FURTHER READING\}__$ $__division\_end__$ $toc\_end$ And do not forget, it is possible to freely mix items and divisions, and to have empty divisions\. 
$toc\_begin Doctoc \{Language Introduction\}$ $item 1 Do$ $__division\_start DESCRIPTION__$ $__division\_start \{Basic structure\}__$ $item 2 Re$ $__division\_end__$ $item a Fi$ $__division\_start Items__$ $item b Fo$ $item c Fa$ $__division\_end__$ $__division\_start Divisions__$ $__division\_end__$ $__division\_end__$ $__division\_start \{FURTHER READING\}__$ $__division\_end__$ $toc\_end$ ## Advanced structure In all previous examples we fudged a bit regarding the markup actually allowed to be used before the __toc\_begin__ command opening the document\. Instead of only whitespace the two templating commands __include__ and __vset__ are also allowed, to enable the writer to either set and/or import configuration settings relevant to the table of contents\. I\.e\. it is possible to write $__include FILE__$ $__vset VAR VALUE__$ $toc\_begin GROUPTITLE TITLE$ \.\.\. $toc\_end$ Even more important, these two commands are allowed anywhere where a markup command is allowed, without regard for any other structure\. $toc\_begin GROUPTITLE TITLE$ $__include FILE__$ $__vset VAR VALUE__$ \.\.\. $toc\_end$ The only restriction __include__ has to obey is that the contents of the included file must be valid at the place of the inclusion\. I\.e\. a file included before __toc\_begin__ may contain only the templating commands __vset__ and __include__, a file included in a division may contain only items or divisions commands, etc\. ................................................................................ characters, namely __$__ and __$__\. These commands, __lb__ and __rb__ respectively, are required because our use of $and$ to bracket markup commands makes it impossible to directly use $and$ within the text\. Our example of their use are the sources of the last sentence in the previous paragraph, with some highlighting added\. \.\.\. 
These commands, $cmd lb$ and $cmd lb$ respectively, are required because our use of $__lb__$ and $__rb__$ to bracket markup commands makes it impossible to directly use $__lb__$ and $__rb__$ within the text\. \.\.\. # FURTHER READING Now that this document has been digested the reader, assumed to be a *writer* of documentation should be fortified enough to be able to understand the formal *[doctoc language syntax](doctoc\_lang\_syntax\.md)* specification as well\. From here on out the *[doctoc language command   | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |  61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 .. 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 ... 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 ... 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250  interspersed between them, except for whitespace\. Each markup command is a Tcl command surrounded by a matching pair of __$__ and __$__\. Inside of these delimiters the usual rules for a Tcl command apply with regard to word quotation, nested commands, continuation lines, etc\. I\.e\. ... [division_start {Appendix 1}] ... ... [item thefile \\ label {file description}] ... ## Basic structure The most simple document which can be written in doctoc is [toc_begin GROUPTITLE TITLE] [toc_end] This also shows us that all doctoc documents consist of only one part where we will list *items* and *divisions*\. The user is free to mix these as she sees fit\. 
This is a change from version 1 of the language, which did not allow this mixing, but only the use of either a series of items or a series of divisions\. ................................................................................ Symbolic names are used to preserve the convertibility of this format to any output format\. The actual name of any file will be inserted by the chosen formatting engine when converting the input, based on a mapping from symbolic to actual names given to the engine\. Here a made up example for a table of contents of this document: [toc_begin Doctoc {Language Introduction}] [__item 1 DESCRIPTION__] [__item 1.1 {Basic structure}__] [__item 1.2 Items__] [__item 1.3 Divisions__] [__item 2 {FURTHER READING}__] [toc_end] ## Divisions One thing of notice in the last example in the previous section is that the referenced sections actually had a nested structure, something which was expressed in the item labels, by using a common prefix for all the sections nested under section 1\. ................................................................................ - __division\_end__ This command closes the last opened and not yet closed division\. 
Using this we can recast the last example like this [toc_begin Doctoc {Language Introduction}] [__division_start DESCRIPTION__] [item 1 {Basic structure}] [item 2 Items] [item 3 Divisions] [__division_end__] [__division_start {FURTHER READING}__] [__division_end__] [toc_end] Or, to demonstrate deeper nesting [toc_begin Doctoc {Language Introduction}] [__division_start DESCRIPTION__] [__division_start {Basic structure}__] [item 1 Do] [item 2 Re] [__division_end__] [__division_start Items__] [item a Fi] [item b Fo] [item c Fa] [__division_end__] [__division_start Divisions__] [item 1 Sub] [item 1 Zero] [__division_end__] [__division_end__] [__division_start {FURTHER READING}__] [__division_end__] [toc_end] And do not forget, it is possible to freely mix items and divisions, and to have empty divisions\. [toc_begin Doctoc {Language Introduction}] [item 1 Do] [__division_start DESCRIPTION__] [__division_start {Basic structure}__] [item 2 Re] [__division_end__] [item a Fi] [__division_start Items__] [item b Fo] [item c Fa] [__division_end__] [__division_start Divisions__] [__division_end__] [__division_end__] [__division_start {FURTHER READING}__] [__division_end__] [toc_end] ## Advanced structure In all previous examples we fudged a bit regarding the markup actually allowed to be used before the __toc\_begin__ command opening the document\. Instead of only whitespace the two templating commands __include__ and __vset__ are also allowed, to enable the writer to either set and/or import configuration settings relevant to the table of contents\. I\.e\. it is possible to write [__include FILE__] [__vset VAR VALUE__] [toc_begin GROUPTITLE TITLE] ... [toc_end] Even more important, these two commands are allowed anywhere where a markup command is allowed, without regard for any other structure\. 
[toc_begin GROUPTITLE TITLE] [__include FILE__] [__vset VAR VALUE__] ... [toc_end] The only restriction __include__ has to obey is that the contents of the included file must be valid at the place of the inclusion\. I\.e\. a file included before __toc\_begin__ may contain only the templating commands __vset__ and __include__, a file included in a division may contain only items or divisions commands, etc\. ................................................................................ characters, namely __$__ and __$__\. These commands, __lb__ and __rb__ respectively, are required because our use of $and$ to bracket markup commands makes it impossible to directly use $and$ within the text\. Our example of their use are the sources of the last sentence in the previous paragraph, with some highlighting added\. ... These commands, [cmd lb] and [cmd lb] respectively, are required because our use of [__lb__] and [__rb__] to bracket markup commands makes it impossible to directly use [__lb__] and [__rb__] within the text. ... # FURTHER READING Now that this document has been digested the reader, assumed to be a *writer* of documentation should be fortified enough to be able to understand the formal *[doctoc language syntax](doctoc\_lang\_syntax\.md)* specification as well\. From here on out the *[doctoc language command 

Changes to embedded/md/tcllib/files/modules/doctools/doctoc_lang_syntax.md.

1. The construct \{ X \} stands for zero or more occurrences of X\.

1. The construct \[ X \] stands for zero or one occurrence of X\.

The syntax:

Old (special characters escaped inside the verbatim block):

    toc = defs TOC\_BEGIN contents TOC\_END \{ \}
    defs = \{ INCLUDE | VSET | \}
    contents = \{ defs entry \} \[ defs \]
    entry = ITEM | division
    division = DIVISION\_START contents DIVISION\_END

New (verbatim block emitted as\-is):

    toc = defs TOC_BEGIN contents TOC_END { }
    defs = { INCLUDE | VSET | }
    contents = { defs entry } [ defs ]
    entry = ITEM | division
    division = DIVISION_START contents DIVISION_END

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *doctools* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.

Changes to embedded/md/tcllib/files/modules/doctools/doctoc_plugin_apiref.md.

1. initialize and shutdown each pass

1. query and initialize engine parameters

After the plugin has been loaded and the frontend commands are established the commands will be called in the following sequence:

Old (special characters escaped inside the verbatim block):

    toc\_numpasses \-> n
    toc\_listvariables \-> vars
    toc\_varset var1 value1
    toc\_varset var2 value2
    \.\.\.
    toc\_varset varK valueK
    toc\_initialize
    toc\_setup 1
    \.\.\.
    toc\_setup 2
    \.\.\.
    \.\.\.
    toc\_setup n
    \.\.\.
    toc\_postprocess
    toc\_shutdown
    \.\.\.

New (verbatim block emitted as\-is):

    toc_numpasses -> n
    toc_listvariables -> vars
    toc_varset var1 value1
    toc_varset var2 value2
    ...
    toc_varset varK valueK
    toc_initialize
    toc_setup 1
    ...
    toc_setup 2
    ...
    ...
    toc_setup n
    ...
    toc_postprocess
    toc_shutdown
    ...

I\.e\. first the number of passes and the set of available engine parameters is established, followed by calls setting the parameters\. That second part is optional\.

After that the plugin is initialized, the specified number of passes executed, the final result run through a global post processing step and at last the

Changes to embedded/md/tcllib/files/modules/doctools/doctools.md.

Before:

    [//000000001]: # (doctools \- Documentation tools)
    [//000000002]: # (Generated from file 'doctools\.man' by tcllib/doctools with format 'markdown')
    [//000000003]: # (Copyright © 2003\-2019 Andreas Kupries )
    [//000000004]: # (doctools\(n\) 1\.5\.1 tcllib "Documentation tools")

    ...

    - [Category](#category)
    - [Copyright](#copyright)

    # SYNOPSIS

    package require Tcl 8\.2
    package require doctools ?1\.5\.1?

    [__::doctools::new__ *objectName* ?*option value*\.\.\.?](#1)
    [__::doctools::help__](#2)
    [__::doctools::search__ *path*](#3)
    [__objectName__ __method__ ?*arg arg \.\.\.*?](#4)
    [*objectName* __configure__](#5)
    [*objectName* __configure__ *option*](#6)

After:

    [//000000001]: # (doctools \- Documentation tools)
    [//000000002]: # (Generated from file 'doctools\.man' by tcllib/doctools with format 'markdown')
    [//000000003]: # (Copyright © 2003\-2019 Andreas Kupries )
    [//000000004]: # (doctools\(n\) 1\.5\.2 tcllib "Documentation tools")

    ...

    - [Category](#category)
    - [Copyright](#copyright)

    # SYNOPSIS

    package require Tcl 8\.2
    package require doctools ?1\.5\.2?

    [__::doctools::new__ *objectName* ?*option value*\.\.\.?](#1)
    [__::doctools::help__](#2)
    [__::doctools::search__ *path*](#3)
    [__objectName__ __method__ ?*arg arg \.\.\.*?](#4)
    [*objectName* __configure__](#5)
    [*objectName* __configure__ *option*](#6)

Changes to embedded/md/tcllib/files/modules/doctools/doctools_lang_intro.md.

consists primarily of text, with markup commands embedded into it\. Each markup
command is a Tcl command surrounded by a matching pair of __\[__ and __\]__\.
Inside of these delimiters the usual rules for a Tcl command apply with regard
to word quotation, nested commands, continuation lines, etc\. I\.e\.

    ... [list_begin enumerated] ...

    ... [call [cmd foo] \\ [arg bar]] ...

    ... [term {complex concept}] ...

    ... [opt "[arg key] [arg value]"] ...

## Basic structure

The most simple document which can be written in doctools is

    [manpage_begin NAME SECTION VERSION]
    [see_also doctools_intro]
    [see_also doctools_lang_cmdref]
    [see_also doctools_lang_faq]
    [see_also doctools_lang_syntax]
    [keywords {doctools commands}]
    [keywords {doctools language}]
    [keywords {doctools markup}]
    [keywords {doctools syntax}]
    [keywords markup]
    [keywords {semantic markup}]
    [description]
    [vset CATEGORY doctools]
    [include ../doctools2base/include/feedback.inc]
    [manpage_end]

This also shows us that all doctools documents are split into two parts, the
*header* and the *body*\. Everything coming before \[__description__\] belongs
to the header, and everything coming after belongs to the body, with the whole
document bracketed by the two __manpage\_\*__ commands\. Before and after these
opening and closing commands we have only *whitespace*\.

...

and they can be used in any order\. However for __titledesc__ and __moddesc__
only the last occurrence is taken\. For the other two the specified information
is accumulated, in the given order\. Regular text is not allowed within the
header\.

Given the above a less minimal example of a document is

    [manpage_begin NAME SECTION VERSION]
    [__copyright {YEAR AUTHOR}__]
    [__titledesc TITLE__]
    [__moddesc MODULE_TITLE__]
    [__require PACKAGE VERSION__]
    [__require PACKAGE__]
    [description]
    [manpage_end]

Remember that the whitespace is optional\. The document

    [manpage_begin NAME SECTION VERSION]
    [copyright {YEAR AUTHOR}][titledesc TITLE][moddesc MODULE_TITLE]
    [require PACKAGE VERSION][require PACKAGE][description]
    [vset CATEGORY doctools]
    [include ../doctools2base/include/feedback.inc]
    [manpage_end]

has the same meaning as the example before\.

On the other hand, if *whitespace* is present it consists not only of any
sequence of characters containing the space character, horizontal and vertical
tabs, carriage return, and newline, but it may contain comment markup as well,
in the form of the __[comment](\.\./\.\./\.\./\.\./index\.md\#comment)__
command\.

    [__comment { ... }__]
    [manpage_begin NAME SECTION VERSION]
    [copyright {YEAR AUTHOR}]
    [titledesc TITLE]
    [moddesc MODULE_TITLE][__comment { ... }__]
    [require PACKAGE VERSION]
    [require PACKAGE]
    [description]
    [manpage_end]
    [__comment { ... }__]

## Advanced structure

In the simple examples of the last section we fudged a bit regarding the markup
actually allowed to be used before the __manpage\_begin__ command opening the
document\.

Instead of only whitespace the two templating commands __include__ and
__vset__ are also allowed, to enable the writer to either set and/or import
configuration settings relevant to the document\. I\.e\. it is possible to
write

    [__include FILE__]
    [__vset VAR VALUE__]
    [manpage_begin NAME SECTION VERSION]
    [description]
    [manpage_end]

Even more important, these two commands are allowed anywhere where a markup
command is allowed, without regard for any other structure\. I\.e\. for example
in the header as well\.

    [manpage_begin NAME SECTION VERSION]
    [__include FILE__]
    [__vset VAR VALUE__]
    [description]
    [manpage_end]

The only restriction __include__ has to obey is that the contents of the
included file must be valid at the place of the inclusion\. I\.e\. a file
included before __manpage\_begin__ may contain only the templating commands
__vset__ and __include__, a file included in the header may contain only
header commands, etc\.

...

The simplest way of structuring the body is through the introduction of
paragraphs\. The command for doing so is __para__\. Each occurrence of this
command closes the previous paragraph and automatically opens the next\. The
first paragraph is automatically opened at the beginning of the body, by
__description__\. In the same manner the last paragraph automatically ends at
__manpage\_end__\.

    [manpage_begin NAME SECTION VERSION]
    [description]
    ...
    [__para__]
    ...
    [__para__]
    ...
    [manpage_end]

Empty paragraphs are ignored\.

A structure coarser than paragraphs are sections, which allow the writer to
split a document into larger, and labeled, pieces\. The command for doing so is
__section__\. Each occurrence of this command closes the previous section and
automatically opens the next, including its first paragraph\. The first section
is automatically opened at the beginning of the body, by __description__
\(This section is labeled "DESCRIPTION"\)\. In the same manner the last section
automatically ends at __manpage\_end__\. Empty sections are *not* ignored\.
We are free to \(not\) use paragraphs within sections\.

    [manpage_begin NAME SECTION VERSION]
    [description]
    ...
    [__section {Section A}__]
    ...
    [para]
    ...
    [__section {Section B}__]
    ...
    [manpage_end]

Between sections and paragraphs we have subsections, to split sections\. The
command for doing so is __subsection__\. Each occurrence of this command
closes the previous subsection and automatically opens the next, including its
first paragraph\. A subsection is automatically opened at the beginning of the
body, by __description__, and at the beginning of each section\. In the same
manner the last subsection automatically ends at __manpage\_end__\. Empty
subsections are *not* ignored\. We are free to \(not\) use paragraphs within
subsections\.

    [manpage_begin NAME SECTION VERSION]
    [description]
    ...
    [section {Section A}]
    ...
    [__subsection {Sub 1}__]
    ...
    [para]
    ...
    [__subsection {Sub 2}__]
    ...
    [section {Section B}]
    ...
    [manpage_end]

## Text markup

Having handled the overall structure a writer can impose on the document we now
take a closer at the text in a paragraph\. While most often this is just the
unadorned content of the document we do have

...

Its argument is a widget name\.

The example demonstrating the use of text markup is an excerpt from the
*[doctools language command reference](doctools\_lang\_cmdref\.md)*, with some
highlighting added\. It shows their use within a block of text, as the
arguments of a list item command \(__call__\), and our ability to nest them\.

    ...
    [call [__cmd arg_def__] [__arg type__] [__arg name__] [__opt__ [__arg mode__]]]

    Text structure. List element. Argument list. Automatically closes the
    previous list element. Specifies the data-[__arg type__] of the described
    argument of a command, its [__arg name__] and its i/o-[__arg mode__]. The
    latter is optional.
    ...

## Escapes

Beyond the 20 commands for simple markup shown in the previous section we have
two more available which are technically simple markup\. However their function
is not the marking up of phrases as specific types of things, but the insertion
of characters, namely __\[__ and __\]__\. These commands, __lb__ and __rb__
respectively, are required because our use of \[ and \] to bracket markup
commands makes it impossible to directly use \[ and \] within the text\.

Our example of their use are the sources of the last sentence in the previous
paragraph, with some highlighting added\.

    ... These commands, [cmd lb] and [cmd lb] respectively, are required
    because our use of [__lb__] and [__rb__] to bracket markup commands makes
    it impossible to directly use [__lb__] and [__rb__] within the text.
    ...

## Cross\-references

The last two commands we have to discuss are for the declaration of
cross\-references between documents, explicit and implicit\. They are
__[keywords](\.\./\.\./\.\./\.\./index\.md\#keywords)__ and __see\_also__\.
Both take an arbitrary number of arguments, all of which have to be plain
unmarked

...

whether she wants to have them at the beginning of the body, or at its end,
maybe near the place a keyword is actually defined by the main content, or
considers them as meta data which should be in the header, etc\.

Our example shows the sources for the cross\-references of this document, with
some highlighting added\. Incidentally they are found at the end of the body\.

    ...
    [__see_also doctools_intro__]
    [__see_also doctools_lang_syntax__]
    [__see_also doctools_lang_cmdref__]
    [__keywords markup {semantic markup}__]
    [__keywords {doctools markup} {doctools language}__]
    [__keywords {doctools syntax} {doctools commands}__]
    [manpage_end]

## Examples

Where ever we can write plain text we can write examples too\. For simple
examples we have the command __example__ which takes a single argument, the
text of the argument\. The example text must not contain markup\. If we wish to
have markup within an example we have to use the 2\-command combination

...

embed examples and lists within an example\. On the other hand, we *can* use
templating commands within example blocks to read their contents from a file
\(Remember section [Advanced structure](#subsection3)\)\.

The source for the very first example in this document \(see section
[Fundamentals](#subsection1)\), with some highlighting added, is

    [__example__ {
        ... [list_begin enumerated] ...
    }]

Using __example\_begin__ / __example\_end__ this would look like

    [__example_begin__]
    ... [list_begin enumerated] ...
    [__example_end__]

## Lists

Where ever we can write plain text we can write lists too\. The main commands
are __list\_begin__ to start a list, and __list\_end__ to close one\. The
opening command takes an argument specifying the type of list started it, and
this in turn determines which of the eight existing list item commands are

...

is a specialized form of a term definition list where the term is the name of a
configuration option for a widget, with its name and class in the option
database\.

Our example is the source of the definition list in the previous paragraph,
with most of the content in the middle removed\.

    ...
    [__list_begin__ definitions]
    [__def__ [const arg]] ([cmd arg_def])

    This opens an argument (declaration) list. It is a specialized form of a
    definition list where the term is an argument name, with its type and
    i/o-mode.

    [__def__ [const itemized]] ([cmd item])

    This opens a general itemized list.

    ...

    [__def__ [const tkoption]] ([cmd tkoption_def])

    This opens a widget option (declaration) list. It is a specialized form of
    a definition list where the term is the name of a configuration option for
    a widget, with its name and class in the option database.

    [__list_end__]
    ...

Note that a list cannot begin in one \(sub\)section and end in another\.
Differently said, \(sub\)section breaks are not allowed within lists and list
items\. An example of this *illegal* construct is

    ...
    [list_begin itemized]
    [item]
    ...
    [__section {ILLEGAL WITHIN THE LIST}__]
    ...
    [list_end]
    ...

# FURTHER READING

Now that this document has been digested the reader, assumed to be a *writer*
of documentation should be fortified enough to be able to understand the formal
*[doctools language syntax](doctools\_lang\_syntax\.md)* specification as
well\. From here on out the *[doctools language command

Changes to embedded/md/tcllib/files/modules/doctools/doctools_lang_syntax.md.

 1. The construct LIST\_BEGIN stands for the markup command __list\_begin__
    with __X__ as its type argument\.

The syntax:

Before:

    manpage = defs MANPAGE\_BEGIN header DESCRIPTION body MANPAGE\_END \{ \}
    defs    = \{ INCLUDE | VSET | \}
    header  = \{ TITLEDESC | MODDESC | COPYRIGHT | REQUIRE | defs | xref \}
    xref    = KEYWORDS | SEE\_ALSO | CATEGORY
    body    = paras \{ SECTION sbody \}
    sbody   = paras \{ SUBSECTION ssbody \}
    ssbody  = paras
    paras   = tblock \{ \(PARA | NL\) tblock \}
    tblock  = \{ | defs | markup | xref | an\_example | a\_list \}
    markup  = ARG | CLASS | CMD | CONST | EMPH | FILE | FUN | LB | METHOD
            | NAMESPACE | OPT | OPTION | PACKAGE | RB | SECTREF | STRONG
            | SYSCMD | TERM | TYPE | URI | USAGE | VAR | WIDGET
    example = EXAMPLE | EXAMPLE\_BEGIN extext EXAMPLE\_END
    extext  = \{ | defs | markup \}
    a\_list = LIST\_BEGIN argd\_list LIST\_END
            | LIST\_BEGIN cmdd\_list LIST\_END
            | LIST\_BEGIN def\_list LIST\_END
            | LIST\_BEGIN enum\_list LIST\_END
            | LIST\_BEGIN item\_list LIST\_END
            | LIST\_BEGIN optd\_list LIST\_END
            | LIST\_BEGIN tkoptd\_list LIST\_END
    argd\_list   = \[ \] \{ ARG\_DEF paras \}
    cmdd\_list   = \[ \] \{ CMD\_DEF paras \}
    def\_list    = \[ \] \{ \(DEF|CALL\) paras \}
    enum\_list   = \[ \] \{ ENUM paras \}
    item\_list   = \[ \] \{ ITEM paras \}
    optd\_list   = \[ \] \{ OPT\_DEF paras \}
    tkoptd\_list = \[ \] \{ TKOPTION\_DEF paras \}

After:

    manpage = defs MANPAGE_BEGIN header DESCRIPTION body MANPAGE_END { }
    defs    = { INCLUDE | VSET | }
    header  = { TITLEDESC | MODDESC | COPYRIGHT | REQUIRE | defs | xref }
    xref    = KEYWORDS | SEE_ALSO | CATEGORY
    body    = paras { SECTION sbody }
    sbody   = paras { SUBSECTION ssbody }
    ssbody  = paras
    paras   = tblock { (PARA | NL) tblock }
    tblock  = { | defs | markup | xref | an_example | a_list }
    markup  = ARG | CLASS | CMD | CONST | EMPH | FILE | FUN | LB | METHOD
            | NAMESPACE | OPT | OPTION | PACKAGE | RB | SECTREF | STRONG
            | SYSCMD | TERM | TYPE | URI | USAGE | VAR | WIDGET
    example = EXAMPLE | EXAMPLE_BEGIN extext EXAMPLE_END
    extext  = { | defs | markup }
    a_list  = LIST_BEGIN argd_list LIST_END
            | LIST_BEGIN cmdd_list LIST_END
            | LIST_BEGIN def_list LIST_END
            | LIST_BEGIN enum_list LIST_END
            | LIST_BEGIN item_list LIST_END
            | LIST_BEGIN optd_list LIST_END
            | LIST_BEGIN tkoptd_list LIST_END
    argd_list   = [ ] { ARG_DEF paras }
    cmdd_list   = [ ] { CMD_DEF paras }
    def_list    = [ ] { (DEF|CALL) paras }
    enum_list   = [ ] { ENUM paras }
    item_list   = [ ] { ITEM paras }
    optd_list   = [ ] { OPT_DEF paras }
    tkoptd_list = [ ] { TKOPTION_DEF paras }

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and
other problems\. Please report such in the category *doctools* of the
[Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report
any ideas for enhancements you may have for either package and/or
documentation\.

Changes to embedded/md/tcllib/files/modules/doctools/doctools_plugin_apiref.md.

 1. initialize and shutdown each pass

 1. query and initialize engine parameters

After the plugin has been loaded and the frontend commands are established the commands will be called in the following sequence:

    fmt_numpasses      -> n
    fmt_listvariables  -> vars
    fmt_varset         var1 value1
    fmt_varset         var2 value2
    ...
    fmt_varset         varK valueK
    fmt_initialize
    fmt_setup 1
    ...
    fmt_setup 2
    ...
    ...
    fmt_setup n
    ...
    fmt_postprocess
    fmt_shutdown
    ...

I\.e\. first the number of passes and the set of available engine parameters is established, followed by calls setting the parameters\. That second part is optional\. After that the plugin is initialized, the specified number of passes executed, the final result run through a global post processing step and at last the
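The call sequence above can be sketched as a small driver loop\. The plugin here is a stand\-in Python object recording its calls, not the real doctools plugin interface; names mirror the fmt\_\* commands listed above:

```python
# Stand-in plugin: records every fmt_* call so the sequence can be inspected.
class RecordingPlugin:
    def __init__(self, passes=2, variables=("header", "footer")):
        self.passes = passes
        self.variables = list(variables)
        self.calls = []

    def fmt_numpasses(self):
        self.calls.append("fmt_numpasses")
        return self.passes

    def fmt_listvariables(self):
        self.calls.append("fmt_listvariables")
        return self.variables

    def fmt_varset(self, var, value):
        self.calls.append(f"fmt_varset {var}")

    def fmt_initialize(self):
        self.calls.append("fmt_initialize")

    def fmt_setup(self, n):
        self.calls.append(f"fmt_setup {n}")

    def fmt_postprocess(self, text):
        self.calls.append("fmt_postprocess")
        return text

    def fmt_shutdown(self):
        self.calls.append("fmt_shutdown")

def run_engine(plugin, settings):
    # Query passes and parameters, set parameters (the optional part),
    # then initialize, run n setup/pass rounds, postprocess, shut down.
    n = plugin.fmt_numpasses()
    known = plugin.fmt_listvariables()
    for var, value in settings.items():
        if var in known:
            plugin.fmt_varset(var, value)
    plugin.fmt_initialize()
    result = ""
    for i in range(1, n + 1):
        plugin.fmt_setup(i)
        result += f"pass {i};"  # stand-in for running the formatting pass
    result = plugin.fmt_postprocess(result)
    plugin.fmt_shutdown()
    return result

p = RecordingPlugin()
run_engine(p, {"header": "Tcllib"})
print(p.calls)
```

The recorded call list matches the sequence shown in the verbatim block: query, varset, initialize, one setup per pass, postprocess, shutdown\.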

Changes to embedded/md/tcllib/files/modules/doctools2idx/idx_export_json.md.

# JSON notation of keyword indices

The JSON format used for keyword indices is a direct translation of the [Keyword index serialization format](#section5), mapping Tcl dictionaries as JSON objects and Tcl lists as JSON arrays\. For example, the Tcl serialization

    doctools::idx {
        label {Keyword Index}
        keywords {
            changelog  {changelog.man cvs.man}
            conversion {doctools.man docidx.man doctoc.man apps/dtplite.man mpexpand.man}
            cvs        cvs.man
        }
        references {
            apps/dtplite.man {manpage dtplite}
            changelog.man    {manpage doctools::changelog}
            cvs.man          {manpage doctools::cvs}
            docidx.man       {manpage doctools::idx}
            doctoc.man       {manpage doctools::toc}
            doctools.man     {manpage doctools}
            mpexpand.man     {manpage mpexpand}
        }
        title {}
    }

is equivalent to the JSON string

    {
        "doctools::idx" : {
            "label"    : "Keyword Index",
            "keywords" : {
                "changelog"  : ["changelog.man","cvs.man"],
                "conversion" : ["doctools.man","docidx.man","doctoc.man","apps\/dtplite.man","mpexpand.man"],
                "cvs"        : ["cvs.man"]
            },
            "references" : {
                "apps\/dtplite.man" : ["manpage","dtplite"],
                "changelog.man"     : ["manpage","doctools::changelog"],
                "cvs.man"           : ["manpage","doctools::cvs"],
                "docidx.man"        : ["manpage","doctools::idx"],
                "doctoc.man"        : ["manpage","doctools::toc"],
                "doctools.man"      : ["manpage","doctools"],
                "mpexpand.man"      : ["manpage","mpexpand"]
            },
            "title" : ""
        }
    }

# Configuration

The JSON export plugin recognizes the following configuration variables and changes its behaviour as they specify\.

- boolean *indented*
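The mapping described above — Tcl dict to JSON object, Tcl list to JSON array — can be exercised with the example's own data\. A sketch building the same structure in Python and serializing it (illustrative only; the real plugin is Tcl):

```python
import json

# The keyword index from the example, as nested dicts/lists mirroring the
# Tcl serialization: dict -> JSON object, list -> JSON array.
idx = {
    "doctools::idx": {
        "label": "Keyword Index",
        "keywords": {
            "changelog": ["changelog.man", "cvs.man"],
            "conversion": ["doctools.man", "docidx.man", "doctoc.man",
                           "apps/dtplite.man", "mpexpand.man"],
            "cvs": ["cvs.man"],
        },
        "references": {
            "apps/dtplite.man": ["manpage", "dtplite"],
            "changelog.man": ["manpage", "doctools::changelog"],
            "cvs.man": ["manpage", "doctools::cvs"],
            "docidx.man": ["manpage", "doctools::idx"],
            "doctoc.man": ["manpage", "doctools::toc"],
            "doctools.man": ["manpage", "doctools"],
            "mpexpand.man": ["manpage", "mpexpand"],
        },
        "title": "",
    }
}

# Serializing and parsing back shows the translation is lossless.
text = json.dumps(idx, indent=4)
roundtrip = json.loads(text)
print(roundtrip["doctools::idx"]["keywords"]["cvs"])  # ['cvs.man']
```

The *indented* configuration variable described below corresponds to passing an indent when serializing, as `json.dumps(..., indent=4)` does here\.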

Changes to embedded/md/tcllib/files/modules/doctools2idx/idx_import_json.md.

# JSON notation of keyword indices

The JSON format used for keyword indices is a direct translation of the [Keyword index serialization format](#section4), mapping Tcl dictionaries as JSON objects and Tcl lists as JSON arrays\. For example, the Tcl serialization

    doctools::idx {
        label {Keyword Index}
        keywords {
            changelog  {changelog.man cvs.man}
            conversion {doctools.man docidx.man doctoc.man apps/dtplite.man mpexpand.man}
            cvs        cvs.man
        }
        references {
            apps/dtplite.man {manpage dtplite}
            changelog.man    {manpage doctools::changelog}
            cvs.man          {manpage doctools::cvs}
            docidx.man       {manpage doctools::idx}
            doctoc.man       {manpage doctools::toc}
            doctools.man     {manpage doctools}
            mpexpand.man     {manpage mpexpand}
        }
        title {}
    }

is equivalent to the JSON string

    {
        "doctools::idx" : {
            "label"    : "Keyword Index",
            "keywords" : {
                "changelog"  : ["changelog.man","cvs.man"],
                "conversion" : ["doctools.man","docidx.man","doctoc.man","apps\/dtplite.man","mpexpand.man"],
                "cvs"        : ["cvs.man"]
            },
            "references" : {
                "apps\/dtplite.man" : ["manpage","dtplite"],
                "changelog.man"     : ["manpage","doctools::changelog"],
                "cvs.man"           : ["manpage","doctools::cvs"],
                "docidx.man"        : ["manpage","doctools::idx"],
                "doctoc.man"        : ["manpage","doctools::toc"],
                "doctools.man"      : ["manpage","doctools"],
                "mpexpand.man"      : ["manpage","mpexpand"]
            },
            "title" : ""
        }
    }

# Keyword index serialization format

Here we specify the format used by the doctools v2 packages to serialize keyword indices as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a

Changes to embedded/md/tcllib/files/modules/doctools2idx/idx_introduction.md.

markup of *tables of contents*, and of general documentation, respectively\. They are described in their own sets of documents, starting at the *DocTools \- Tables Of Contents* and the *DocTools \- General*, respectively\.

# Package Overview

[Package dependency diagram relating the following packages:]

    doctools::idx
    doctools::idx::export            doctools::idx::import
    doctools::config   doctools::include   doctools::paths
    doctools::idx::export::<*>   (docidx, json, html, nroff, wiki, text)
    doctools::idx::import::<*>   (docidx, json)
    doctools::idx::parse         doctools::idx::structure
    doctools::html   doctools::html::cssdefaults   doctools::tcl::parse   doctools::msgcat
    doctools::text   doctools::nroff::man_macros
    doctools::msgcat::idx::<*>   (c, en, de, fr (fr == en for now))

    ~~   Interoperable objects, without actual package dependencies
    --   Package dependency, higher requires lower package
    =    Dynamic dependency through plugin system
    <*>  Multiple packages following the given form of naming.

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *doctools* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.

Changes to embedded/md/tcllib/files/modules/doctools2toc/toc_export_json.md.

# JSON notation of tables of contents

The JSON format used for tables of contents is a direct translation of the [ToC serialization format](#section5), mapping Tcl dictionaries as JSON objects and Tcl lists as JSON arrays\. For example, the Tcl serialization

    doctools::toc {
        items {
            {reference {
                desc {DocTools - Tables of Contents}
                id introduction.man
                label doctools::toc::introduction
            }}
            {division {
                id processing.man
                items {
                    {reference {
                        desc {doctoc serialization utilities}
                        id structure.man
                        label doctools::toc::structure
                    }}
                    {reference {
                        desc {Parsing text in doctoc format}
                        id parse.man
                        label doctools::toc::parse
                    }}
                }
                label Processing
            }}
        }
        label {Table of Contents}
        title TOC
    }

is equivalent to the JSON string

    {
        "doctools::toc" : {
            "items" : [{
                "reference" : {
                    "desc"  : "DocTools - Tables of Contents",
                    "id"    : "introduction.man",
                    "label" : "doctools::toc::introduction"
                }
            },{
                "division" : {
                    "id"    : "processing.man",
                    "items" : [{
                        "reference" : {
                            "desc"  : "doctoc serialization utilities",
                            "id"    : "structure.man",
                            "label" : "doctools::toc::structure"
                        }
                    },{
                        "reference" : {
                            "desc"  : "Parsing text in doctoc format",
                            "id"    : "parse.man",
                            "label" : "doctools::toc::parse"
                        }
                    }],
                    "label" : "Processing"
                }
            }],
            "label" : "Table of Contents",
            "title" : "TOC"
        }
    }

# Configuration

The JSON export plugin recognizes the following configuration variables and changes its behaviour as they specify\.

- boolean *indented*
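The ToC serialization above is recursive: a division carries its own items array, which may again hold references and divisions\. A sketch walking that shape in Python, built from the example's data (illustrative only; the real plugin is Tcl):

```python
import json

# The example's table of contents: one top-level reference and one
# division ("Processing") holding two more references.
toc = {
    "doctools::toc": {
        "items": [
            {"reference": {
                "desc": "DocTools - Tables of Contents",
                "id": "introduction.man",
                "label": "doctools::toc::introduction"}},
            {"division": {
                "id": "processing.man",
                "items": [
                    {"reference": {
                        "desc": "doctoc serialization utilities",
                        "id": "structure.man",
                        "label": "doctools::toc::structure"}},
                    {"reference": {
                        "desc": "Parsing text in doctoc format",
                        "id": "parse.man",
                        "label": "doctools::toc::parse"}},
                ],
                "label": "Processing"}},
        ],
        "label": "Table of Contents",
        "title": "TOC",
    }
}

def count_references(items):
    """Count references, descending recursively into divisions."""
    total = 0
    for item in items:
        if "reference" in item:
            total += 1
        elif "division" in item:
            total += count_references(item["division"]["items"])
    return total

# Round-trip through JSON text, then walk the parsed structure.
parsed = json.loads(json.dumps(toc))
n_refs = count_references(parsed["doctools::toc"]["items"])
print(n_refs)  # 3
```

Any consumer of the JSON form needs this kind of recursion, since divisions nest to arbitrary depth\.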

Changes to embedded/md/tcllib/files/modules/doctools2toc/toc_import_json.md.

# JSON notation of tables of contents

The JSON format used for tables of contents is a direct translation of the [ToC serialization format](#section4), mapping Tcl dictionaries as JSON objects and Tcl lists as JSON arrays\. For example, the Tcl serialization

    doctools::toc {
        items {
            {reference {
                desc {DocTools - Tables of Contents}
                id introduction.man
                label doctools::toc::introduction
            }}
            {division {
                id processing.man
                items {
                    {reference {
                        desc {doctoc serialization utilities}
                        id structure.man
                        label doctools::toc::structure
                    }}
                    {reference {
                        desc {Parsing text in doctoc format}
                        id parse.man
                        label doctools::toc::parse
                    }}
                }
                label Processing
            }}
        }
        label {Table of Contents}
        title TOC
    }

is equivalent to the JSON string

    {
        "doctools::toc" : {
            "items" : [{
                "reference" : {
                    "desc"  : "DocTools - Tables of Contents",
                    "id"    : "introduction.man",
                    "label" : "doctools::toc::introduction"
                }
            },{
                "division" : {
                    "id"    : "processing.man",
                    "items" : [{
                        "reference" : {
                            "desc"  : "doctoc serialization utilities",
                            "id"    : "structure.man",
                            "label" : "doctools::toc::structure"
                        }
                    },{
                        "reference" : {
                            "desc"  : "Parsing text in doctoc format",
                            "id"    : "parse.man",
                            "label" : "doctools::toc::parse"
                        }
                    }],
                    "label" : "Processing"
                }
            }],
            "label" : "Table of Contents",
            "title" : "TOC"
        }
    }

# ToC serialization format

Here we specify the format used by the doctools v2 packages to serialize tables of contents as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a

Changes to embedded/md/tcllib/files/modules/doctools2toc/toc_introduction.md.

markup of *keyword indices*, and of general documentation, respectively\. They are described in their own sets of documents, starting at the *DocTools \- Keyword Indices* and the *DocTools \- General*, respectively\.

# Package Overview

[Package dependency diagram relating the following packages:]

    doctools::toc
    doctools::toc::export            doctools::toc::import
    doctools::config   doctools::include   doctools::paths
    doctools::toc::export::<*>   (doctoc, json, html, nroff, wiki, text)
    doctools::toc::import::<*>   (doctoc, json)
    doctools::toc::parse         doctools::toc::structure
    doctools::html   doctools::html::cssdefaults   doctools::tcl::parse   doctools::msgcat
    doctools::text   doctools::nroff::man_macros
    doctools::msgcat::toc::<*>   (c, en, de, fr (fr == en for now))

    ~~   Interoperable objects, without actual package dependencies
    --   Package dependency, higher requires lower package
    =    Dynamic dependency through plugin system
    <*>  Multiple packages following the given form of naming.

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *doctools* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.

Changes to embedded/md/tcllib/files/modules/dtplite/pkg_dtplite.md.

- \[2\]

    The following directory structure is created when processing a single set of input documents\. The file extension used is for output in HTML, but that is not relevant to the structure and was just used to have proper file names\.

        output/
            toc.html
            index.html
            files/
                path/to/FOO.html

    The last line in the example shows the document generated for a file FOO located at

        inputdirectory/path/to/FOO

- \[3\]

    When merging many packages into a unified set of documents the generated directory structure is a bit deeper:

        output
            .toc
            .idx
            .tocdoc
            .idxdoc
            .xrf
            toc.html
            index.html
            FOO1/
                ...
            FOO2/
                toc.html
                files/
                    path/to/BAR.html

    Each of the directories FOO1, \.\.\. contains the documents generated for the package FOO1, \.\.\. and follows the structure shown for use case \[2\]\. The only exception is that there is no per\-package index\.

    The files "\.toc", "\.idx", and "\.xrf" contain the internal status of the whole output and will be read and updated by the next invocation\. Their

Changes to embedded/md/tcllib/files/modules/fileutil/fileutil.md.

 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 ... 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 ... 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519   This command performs purely lexical normalization on the *path* and returns the changed path as its result\. Symbolic links in the path are *not* resolved\. Examples: fileutil::lexnormalize /foo/\./bar => /foo/bar fileutil::lexnormalize /foo/\.\./bar => /bar - __::fileutil::fullnormalize__ *path* This command resolves all symbolic links in the *path* and returns the changed path as its result\. In contrast to the builtin __file normalize__ this command resolves a symbolic link in the last element of ................................................................................ joined it with the result of __pwd__ to get an absolute filename\. The result of *filtercmd* is a boolean value that indicates if the current file should be included in the list of interesting files\. Example: \# find \.tcl files package require fileutil proc is\_tcl \{name\} \{return $string match \*\.tcl name$\} set tcl\_files $fileutil::find \. is\_tcl$ - __::fileutil::findByPattern__ *basedir* ?__\-regexp__|__\-glob__? ?__\-\-__? *patterns* This command is based upon the __TclX__ command __recursive\_glob__, except that it doesn't allow recursion over more than one directory at a time\. It uses __::fileutil::find__ internally and is thus able to and does follow symbolic links, something the __TclX__ command does not do\. ................................................................................ A concrete example and extreme case is the "/sys" hierarchy under Linux where some hundred devices exist under both "/sys/devices" and "/sys/class" with the two sub\-hierarchies linking to the other, generating millions of legal paths to enumerate\. 
The structure, reduced to three devices, roughly looks like /sys/class/tty/tty0 \-\-> \.\./\.\./dev/tty0 /sys/class/tty/tty1 \-\-> \.\./\.\./dev/tty1 /sys/class/tty/tty2 \-\-> \.\./\.\./dev/tty1 /sys/dev/tty0/bus /sys/dev/tty0/subsystem \-\-> \.\./\.\./class/tty /sys/dev/tty1/bus /sys/dev/tty1/subsystem \-\-> \.\./\.\./class/tty /sys/dev/tty2/bus /sys/dev/tty2/subsystem \-\-> \.\./\.\./class/tty The command __fileutil::find__ currently has no way to escape this\. When having to handle such a pathological hierarchy It is recommended to switch to package __fileutil::traverse__ and the same\-named command it provides, and then use the __\-prefilter__ option to prevent the traverser from following symbolic links, like so: package require fileutil::traverse proc NoLinks \{fileName\} \{ if \{$string equal \[file type fileName$ link\]\} \{ return 0 \} return 1 \} fileutil::traverse T /sys/devices \-prefilter NoLinks T foreach p \{ puts $p \} T destroy # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *fileutil* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas   | | | | | | | | | | | | | < > < | > | | < >  72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 ... 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 ... 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519   This command performs purely lexical normalization on the *path* and returns the changed path as its result\. Symbolic links in the path are *not* resolved\. Examples: fileutil::lexnormalize /foo/./bar => /foo/bar fileutil::lexnormalize /foo/../bar => /bar - __::fileutil::fullnormalize__ *path* This command resolves all symbolic links in the *path* and returns the changed path as its result\. 
In contrast to the builtin __file normalize__ this command resolves a symbolic link in the last element of ................................................................................ joined it with the result of __pwd__ to get an absolute filename\. The result of *filtercmd* is a boolean value that indicates if the current file should be included in the list of interesting files\. Example: # find .tcl files package require fileutil proc is_tcl {name} {return [string match *.tcl$name]} set tcl_files [fileutil::find . is_tcl] - __::fileutil::findByPattern__ *basedir* ?__\-regexp__|__\-glob__? ?__\-\-__? *patterns* This command is based upon the __TclX__ command __recursive\_glob__, except that it doesn't allow recursion over more than one directory at a time\. It uses __::fileutil::find__ internally and is thus able to and does follow symbolic links, something the __TclX__ command does not do\. ................................................................................ A concrete example and extreme case is the "/sys" hierarchy under Linux where some hundred devices exist under both "/sys/devices" and "/sys/class" with the two sub\-hierarchies linking to the other, generating millions of legal paths to enumerate\. The structure, reduced to three devices, roughly looks like /sys/class/tty/tty0 --> ../../dev/tty0 /sys/class/tty/tty1 --> ../../dev/tty1 /sys/class/tty/tty2 --> ../../dev/tty1 /sys/dev/tty0/bus /sys/dev/tty0/subsystem --> ../../class/tty /sys/dev/tty1/bus /sys/dev/tty1/subsystem --> ../../class/tty /sys/dev/tty2/bus /sys/dev/tty2/subsystem --> ../../class/tty The command __fileutil::find__ currently has no way to escape this\. 
When having to handle such a pathological hierarchy, it is recommended to switch to package __fileutil::traverse__ and the same\-named command it provides, and then use the __\-prefilter__ option to prevent the traverser from following symbolic links, like so:

    package require fileutil::traverse

    proc NoLinks {fileName} {
        if {[string equal [file type $fileName] link]} {
            return 0
        }
        return 1
    }

    fileutil::traverse T /sys/devices -prefilter NoLinks
    T foreach p {
        puts $p
    }
    T destroy

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *fileutil* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas
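The prefilter idea is language-independent: prune symlinked directories before descending, and a hierarchy like "/sys" stays finite. The following Python sketch (an illustration under assumed helper names, not part of fileutil) shows the same pattern:

```python
import os

def no_links(path):
    # Analogue of the NoLinks prefilter: reject directories that are
    # symbolic links, so linked sub-hierarchies are never entered.
    return not os.path.islink(path)

def traverse(root, prefilter=no_links):
    """Yield every file under root, skipping pruned directories."""
    for dirpath, dirnames, filenames in os.walk(root, followlinks=False):
        # Prune in place so os.walk does not descend into rejected dirs
        dirnames[:] = [d for d in dirnames
                       if prefilter(os.path.join(dirpath, d))]
        for name in filenames:
            yield os.path.join(dirpath, name)
```

`os.walk(..., followlinks=False)` already refuses to follow directory symlinks; the explicit prefilter mirrors the Tcl design, where the pruning predicate is supplied by the caller.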

Changes to embedded/md/tcllib/files/modules/fileutil/multiop.md.

Returns the current path type limiter\.

# EXAMPLES

The following examples assume that the variable __F__ contains a reference to a multi\-file operation object\.

    $F do copy \\
        the *.dll \\
        from c:/TDK/PrivateOpenSSL/bin \\
        to [installdir_of tls]

    $F do move \\
        the * \\
        from /sources \\
        into /scratch \\
        but not *.html
    # Alternatively use 'except for *.html'.

    $F do \\
        move \\
        the index \\
        from /sources \\
        into /scratch \\
        as pkgIndex.tcl

    $F do \\
        remove \\
        the *.txt \\
        in /scratch

Note that the fact that most commands just modify the object state allows us to use much freer forms as specifications instead of just nearly\-natural language sentences\. For example the second example in this section can be re\-arranged into:

    $F do \\
        from /sources \\
        into /scratch \\
        but not *.html \\
        move \\
        the *

and the result is not only still a valid specification, but even stays relatively readable\.

Further note that the information collected by the commands __but__, __except__, and __as__ is automatically reset after the associated __the__ was executed\. However no other state is reset in that manner, allowing the user to avoid repetitions of unchanging information\. For example the second and third examples of this section can be merged and rewritten into the equivalent:

    $F do \\
        move \\
        the * \\
        from /sources \\
        into /scratch \\
        but not *.html not index \\
        the index \\
        as pkgIndex.tcl

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *fileutil* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.
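The reset behaviour described above, where most verbs merely accumulate state and only the exclusion information is cleared after each __the__ executes, can be sketched in Python. Class and method names here are hypothetical, chosen only to illustrate the design:

```python
class MultiOp:
    """Sketch of a multi-file-operation object: most 'commands' only
    record state; exclusions are reset after each 'the' executes."""

    def __init__(self):
        self.src = None
        self.dst = None
        self.exclusions = []
        self.executed = []            # log of (pattern, src, dst, exclusions)

    def from_(self, directory):
        self.src = directory          # persists across operations
        return self

    def into(self, directory):
        self.dst = directory          # persists across operations
        return self

    def but_not(self, pattern):
        self.exclusions.append(pattern)
        return self

    def the(self, pattern):
        # 'Execute' one operation, then reset only the per-operation data
        self.executed.append(
            (pattern, self.src, self.dst, tuple(self.exclusions)))
        self.exclusions = []          # but/except/as information is cleared
        return self
```

With this sketch, `op.from_("/sources").into("/scratch").but_not("*.html").the("*").the("index")` records two operations that share the source and destination, while only the first carries the exclusion, matching the merged example above.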

Changes to embedded/md/tcllib/files/modules/fileutil/traverse.md.

A concrete example and extreme case is the "/sys" hierarchy under Linux where some hundred devices exist under both "/sys/devices" and "/sys/class" with the two sub\-hierarchies linking to the other, generating millions of legal paths to enumerate\. The structure, reduced to three devices, roughly looks like

    /sys/class/tty/tty0 --> ../../dev/tty0
    /sys/class/tty/tty1 --> ../../dev/tty1
    /sys/class/tty/tty2 --> ../../dev/tty1

    /sys/dev/tty0/bus
    /sys/dev/tty0/subsystem --> ../../class/tty
    /sys/dev/tty1/bus
    /sys/dev/tty1/subsystem --> ../../class/tty
    /sys/dev/tty2/bus
    /sys/dev/tty2/subsystem --> ../../class/tty

When having to handle such a pathological hierarchy, it is recommended to use the __\-prefilter__ option to prevent the traverser from following symbolic links, like so:

    package require fileutil::traverse

    proc NoLinks {fileName} {
        if {[string equal [file type $fileName] link]} {
            return 0
        }
        return 1
    }

    fileutil::traverse T /sys/devices -prefilter NoLinks
    T foreach p {
        puts $p
    }
    T destroy

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *fileutil* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas

Changes to embedded/md/tcllib/files/modules/fumagic/rtcore.md.

This command behaves mostly like __::fileutil::magic::rt::Nv__, except that it compares the fetched and masked value against *val* as specified with *comp* and returns the result of that comparison\.
The argument *comp* has to contain one of Tcl's comparison operators, and the comparison made will be

The special comparison operator __x__ signals that no comparison should be done, or, in other words, that the fetched value will always match *val*\.

- __::fileutil::magic::rt::Nvx__ *type* *offset* ?*qual*?

Changes to embedded/md/tcllib/files/modules/generator/generator.md.

multiple return values and looping over multiple generators at once\.

Writing a generator is also a simple task, much like writing a normal procedure: simply use the __define__ command to define the generator, and then call __yield__ instead of __[return](\.\./\.\./\.\./\.\./index\.md\#return)__\. For example, we can define a generator for looping through the integers in a particular range:

    generator define range {n m} {
        for {set i $n} {$i <= $m} {incr i} { generator yield $i }
    }
    generator foreach x [range 1 10] {
        puts "x = $x"
    }

The above example will print the numbers from 1 to 10 in sequence, as you would expect\. The difference from a normal loop over a list is that the numbers are only generated as they are needed\. If we insert a break into the loop then any remaining numbers in the sequence would never be generated\. To illustrate, we can define a generator that produces the sequence of natural numbers: an infinite series\.
A normal procedure would never return trying to produce this series as a list\. By using a generator we only have to generate those values which are actually used:

    generator define nats {} {
        while 1 { generator yield [incr nat] }
    }
    generator foreach n [nats] {
        if {$n > 100} { break }
    }

# COMMANDS

- __generator__ __define__ *name* *params* *body*

    Creates a new generator procedure\. The arguments to the command are identical to those for __[proc](\.\./\.\./\.\./\.\./index\.md\#proc)__: a

................................................................................

    be used like a __finally__ block in the __[try](\.\./try/tcllib\_try\.md)__ command, except that it is tied to the life\-cycle of the generator rather than to a particular scope\. For example, if we create a generator to iterate over the lines in a text file, we can use __finally__ to ensure that the file is closed whenever the generator is destroyed:

        generator define lines file {
            set in [open $file]
            # Ensure file is always closed
            generator finally close $in
            while {[gets $in line] >= 0} {
                generator yield $line
            }
        }
        generator foreach line [lines /etc/passwd] {
            puts "[incr count]:$line"
            if {$count > 10} { break }
        }
        # File will be closed even on early exit

    If you create a generator that consumes another generator \(such as the standard __map__ and __filter__ generators defined later\), then you should use a __finally__ command to ensure that this generator is destroyed when its parent is\. For example, the __map__ generator is defined as follows:

        generator define map {f xs} {
            generator finally generator destroy $xs
            generator foreach x $xs { generator yield [{*}$f $x] }
        }

- __generator__ __from__ *format* *value*

    Creates a generator from a data structure\. Currently, supported formats are __list__, __dict__, or __string__\. The list format yields each element in turn\. For dictionaries, each key and value are yielded separately\. Finally, strings are yielded a character at a time\.
................................................................................

- __generator__ __to__ *format* *generator*

    Converts a generator into a data structure\. This is the reverse operation of the __from__ command, and supports the same data structures\. The two operations obey the following identity laws \(where __=__ is interpreted appropriately\):

        [generator to $fmt [generator from $fmt $value]] = $value
        [generator from $fmt [generator to $fmt $gen]]   = $gen

# PRELUDE

The following commands are provided as a standard library of generator combinators and functions that perform convenience operations on generators\. The functions in this section are loosely modelled on the equivalent functions from the Haskell Prelude\. *Warning:* most of the functions in this prelude destroy

................................................................................

Apply a function to every element of a generator, returning a new generator of the results\. This is the classic map function from functional programming, applied to generators\. For example, we can generate all the square numbers using the following code \(where __nats__ is defined as earlier\):

    proc square x { expr {$x * $x} }
    generator foreach n [generator map square [nats]] {
        puts "n = $n"
        if {$n > 1000} { break }
    }

- __generator__ __filter__ *predicate* *generator*

    Another classic functional programming gem\. This command returns a generator that yields only those items from the argument generator that satisfy the predicate \(boolean function\)\.
For example, if we had a generator __employees__ that returned a stream of dictionaries representing people, we could filter all those whose salaries are above 100,000 dollars \(or whichever currency you prefer\) using a simple filter:

    proc salary> {amount person} {
        expr {[dict get $person salary] > $amount}
    }
    set fat-cats [generator filter {salary> 100000} $employees]

- __generator__ __reduce__ *function* *zero* *generator*

    This is the classic left\-fold operation\. This command takes a function, an initial value, and a generator of values\. For each element in the generator it applies the function to the current accumulator value \(the *zero* argument initially\) and that element, and then uses the result as the new

................................................................................

    the function to be a binary operator, and the zero argument to be the left identity element of that operation, then we can consider the __reduce__ command as *folding* the operator between each successive pair of values in the generator in a left\-associative fashion\. For example, the sum of a sequence of numbers can be calculated by folding a __\+__ operator between them, with 0 as the identity:

        # sum xs          = reduce + 0 xs
        # sum [range 1 5] = reduce + 0 [range 1 5]
        #                 = reduce + [+ 0 1] [range 2 5]
        #                 = reduce + [+ 1 2] [range 3 5]
        #                 = ...
        #                 = reduce + [+ 10 5]
        #                 = ((((0+1)+2)+3)+4)+5
        #                 = 15
        proc + {a b} { expr {$a + $b} }
        proc sum gen { generator reduce + 0 $gen }
        puts [sum [range 1 10]]

    The __reduce__ operation is an extremely useful one, and a great variety of different operations can be defined using it\. For example, we can define a factorial function as the product of a range using generators\.
This definition is both very clear and also quite efficient \(in both memory and running time\):

    proc * {x y} { expr {$x * $y} }
    proc prod gen { generator reduce * 1 $gen }
    proc fac n { prod [range 1 $n] }

However, while the __reduce__ operation is efficient for finite generators, care should be taken not to apply it to an infinite generator, as this will result in an infinite loop:

    sum [nats]; # Never returns

- __generator__ __foldl__ *function* *zero* *generator*

    This is an alias for the __reduce__ command\.

- __generator__ __foldr__ *function* *zero* *generator*

................................................................................

- __generator__ __iterate__ *function* *init*

    Returns an infinite generator formed by repeatedly applying the function to the initial argument\. For example, the Fibonacci numbers can be defined as follows:

        proc fst pair { lindex $pair 0 }
        proc snd pair { lindex $pair 1 }
        proc nextFib ab { list [snd $ab] [expr {[fst $ab] + [snd $ab]}] }
        proc fibs {} { generator map fst [generator iterate nextFib {0 1}] }

- __generator__ __last__ *generator*

    Returns the last element of the generator \(if it exists\)\.

- __generator__ __length__ *generator*

................................................................................

- __generator__ __splitWhen__ *predicate* *generator*

    Splits the generator into lists of elements using the predicate to identify delimiters\. The resulting lists are returned as a generator\. Elements matching the delimiter predicate are discarded\. For example, to split up a generator using the string "|" as a delimiter:

        set xs [generator from list {a | b | c}]
        generator split {string equal "|"} $xs ;# returns a then b then c

- __generator__ __scanl__ *function* *zero* *generator*

    Similar to __foldl__, but returns a generator of all of the intermediate values for the accumulator argument\. The final element of this generator is equivalent to __foldl__ called on the same arguments\.
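The two points the generator examples make, lazy production of an infinite series and folding with the operator's identity element, carry over directly to Python's generators and `functools.reduce`. This is a cross-language sketch for illustration, not part of the Tcl package:

```python
from functools import reduce
from itertools import count

def prod(xs):
    # Left fold seeded with 1, the identity of *; seeding with 0
    # would collapse every product to 0.
    return reduce(lambda x, y: x * y, xs, 1)

def fac(n):
    return prod(range(1, n + 1))

# count(1) is an infinite generator of naturals; only the prefix
# actually consumed is ever produced, so the search terminates.
first_over_100 = next(n for n in count(1) if n > 100)
```

As with the Tcl __reduce__, applying such a fold to the whole infinite generator instead of a finite prefix would never return.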
Changes to embedded/md/tcllib/files/modules/gpx/gpx.md.

elements: *latitude*, *longitude* and *metadata dictionary*\. *Latitude* and *longitude* are decimal numbers\. The *metadata dictionary* format is described above\. For points in a track, typically there will always be ele \(elevation\) and time metadata keys\.

# EXAMPLE

    % set token [::gpx::Create myGpxFile.gpx]
    % set version [dict get [::gpx::GetGPXMetadata $token] version]
    % set trackCnt [::gpx::GetTrackCount $token]
    % set firstPoint [lindex [::gpx::GetTrackPoints $token 1] 0]
    % lassign $firstPoint lat lon ptMetadata
    % puts "first point in the first track is at $lat, $lon"
    % if {[dict exists $ptMetadata ele]} {
          puts "at elevation [dict get $ptMetadata ele] meters"
      }
    % ::gpx::Cleanup $token

# REFERENCES

1. GPX: the GPS Exchange Format \([http://www\.topografix\.com/gpx\.asp](http://www\.topografix\.com/gpx\.asp)\)

Changes to embedded/md/tcllib/files/modules/grammar_aycock/aycock.md.

 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158  # EXAMPLE The following code demonstrates a trivial desk calculator, admitting only __\+__, __\*__ and parentheses as its operators\. It also shows the format in which the lexical analyzer is expected to present terminal symbols to the parser\. set p $aycock::parser \{ start ::= E \{\} E ::= E \+ T \{expr \{\[lindex \_ 0$ \+ $lindex \_ 2$\}\} E ::= T \{\} T ::= T \* F \{expr \{$lindex \_ 0$ \* $lindex \_ 2$\}\} T ::= F \{\} F ::= NUMBER \{\} F ::= $$E$$ \{lindex $\_ 1\} \}\] puts $p parse \{$$NUMBER \+ NUMBER$$ \* $$NUMBER \+ NUMBER$$ \} \{\{\} 2 \{\} 3 \{\} \{\} \{\} 7 \{\} 1 \{\}\}$$p destroy The example, when run, prints __40__\. # KEYWORDS Aycock, Earley, Horspool, parser, compiler   | | | | | | | | | |  135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158  # EXAMPLE The following code demonstrates a trivial desk calculator, admitting only __\+__, __\*__ and parentheses as its operators\. It also shows the format in which the lexical analyzer is expected to present terminal symbols to the parser\. set p [aycock::parser { start ::= E {} E ::= E + T {expr {[lindex $_ 0] + [lindex$_ 2]}} E ::= T {} T ::= T * F {expr {[lindex $_ 0] * [lindex$_ 2]}} T ::= F {} F ::= NUMBER {} F ::= ( E ) {lindex $_ 1} }] puts [$p parse {( NUMBER + NUMBER ) * ( NUMBER + NUMBER ) } {{} 2 {} 3 {} {} {} 7 {} 1 {}}] $p destroy The example, when run, prints __40__\. # KEYWORDS Aycock, Earley, Horspool, parser, compiler  Changes to embedded/md/tcllib/files/modules/grammar_fa/fa.md.  165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 ... 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 ... 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 ... 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493   overwriting any existing definition\. 
This is the assignment operator for automatons\. It copies the automaton contained in the FA object *srcFA* over the automaton definition in *faName*\. The old contents of *faName* are deleted by this operation\. This operation is in effect equivalent to *faName* __deserialize__ $*srcFA* __serialize__$ - *faName* __\-\->__ *dstFA* This is the reverse assignment operator for automatons\. It copies the automation contained in the object *faName* over the automaton definition in the object *dstFA*\. The old contents of *dstFA* are deleted by this operation\. This operation is in effect equivalent to *dstFA* __deserialize__ $*faName* __serialize__$ - *faName* __serialize__ This method serializes the automaton stored in *faName*\. In other words it returns a tcl *value* completely describing that automaton\. This allows, for example, the transfer of automatons over arbitrary channels, persistence, etc\. This method is also the basis for both the copy ................................................................................ 1) The last element is a dictionary describing the transitions for the state\. The keys are symbols $$or the empty string$$, and the values are sets of successor states\. Assuming the following FA $$which describes the life of a truck driver in a very simple way :$$ Drive \-\- yellow \-\-> Brake \-\- red \-\-> $$Stop$$ \-\- red/yellow \-\-> Attention \-\- green \-\-> Drive $$\.\.\.$$ is the start state\. a possible serialization is grammar::fa \\\\ \{yellow red green red/yellow\} \\\\ \{Drive \{0 0 \{yellow Brake\}\} \\\\ Brake \{0 0 \{red Stop\}\} \\\\ Stop \{1 0 \{red/yellow Attention\}\} \\\\ Attention \{0 0 \{green Drive\}\}\} A possible one, because I did not care about creation order here - *faName* __deserialize__ *serialization* This is the complement to __serialize__\. It replaces the automaton definition in *faName* with the automaton described by the ................................................................................ 
more transitions\.

- *faName* __unreachable\_states__

  Returns the set of states which are not reachable from any start state
  by any number of transitions\. This is

      \[faName states\] \- \[faName reachable\_states\]

- *faName* __reachable__ *s*

  A predicate\. It tests whether the state *s* in the FA *faName* can be
  reached from a start state by one or more transitions\. The result is a
  boolean value\. It will be set to __true__ if the state can be reached,
  and __false__ otherwise\.

................................................................................

more transitions\.

- *faName* __unuseful\_states__

  Returns the set of states which are not able to reach a final state by
  any number of transitions\. This is

      \[faName states\] \- \[faName useful\_states\]

- *faName* __useful__ *s*

  A predicate\. It tests whether the state *s* in the FA *faName* is able
  to reach a final state by one or more transitions\. The result is a
  boolean value\. It will be set to __true__ if the state is useful, and
  __false__ otherwise\.

New text:

overwriting any existing definition\.

This is the assignment operator for automatons\. It copies the automaton
contained in the FA object *srcFA* over the automaton definition in
*faName*\. The old contents of *faName* are deleted by this operation\.

This operation is in effect equivalent to

    *faName* __deserialize__ [*srcFA* __serialize__]

- *faName* __\-\->__ *dstFA*

  This is the reverse assignment operator for automatons\. It copies the
  automation contained in the object *faName* over the automaton
  definition in the object *dstFA*\. The old contents of *dstFA* are
  deleted by this operation\.
This operation is in effect equivalent to

    *dstFA* __deserialize__ [*faName* __serialize__]

- *faName* __serialize__

  This method serializes the automaton stored in *faName*\. In other words
  it returns a tcl *value* completely describing that automaton\. This
  allows, for example, the transfer of automatons over arbitrary channels,
  persistence, etc\. This method is also the basis for both the copy

................................................................................

1) The last element is a dictionary describing the transitions for the
   state\. The keys are symbols \(or the empty string\), and the values are
   sets of successor states\.

Assuming the following FA \(which describes the life of a truck driver in a
very simple way :\)

    Drive -- yellow --> Brake -- red --> (Stop) -- red/yellow --> Attention -- green --> Drive
    (...) is the start state.

a possible serialization is

    grammar::fa \\
    {yellow red green red/yellow} \\
    {Drive {0 0 {yellow Brake}} \\
    Brake {0 0 {red Stop}} \\
    Stop {1 0 {red/yellow Attention}} \\
    Attention {0 0 {green Drive}}}

A possible one, because I did not care about creation order here

- *faName* __deserialize__ *serialization*

  This is the complement to __serialize__\. It replaces the automaton
  definition in *faName* with the automaton described by the

................................................................................

more transitions\.

- *faName* __unreachable\_states__

  Returns the set of states which are not reachable from any start state
  by any number of transitions\. This is

      [faName states] - [faName reachable_states]

- *faName* __reachable__ *s*

  A predicate\. It tests whether the state *s* in the FA *faName* can be
  reached from a start state by one or more transitions\. The result is a
  boolean value\. It will be set to __true__ if the state can be reached,
  and __false__ otherwise\.

................................................................................

more transitions\.
- *faName* __unuseful\_states__

  Returns the set of states which are not able to reach a final state by
  any number of transitions\. This is

      [faName states] - [faName useful_states]

- *faName* __useful__ *s*

  A predicate\. It tests whether the state *s* in the FA *faName* is able
  to reach a final state by one or more transitions\. The result is a
  boolean value\. It will be set to __true__ if the state is useful, and
  __false__ otherwise\.

Changes to embedded/md/tcllib/files/modules/grammar_peg/peg.md.

Old text:

one of the nonterminals N in the expression, and one of the alternative
rules R for N, and then replace the nonterminal in A with the RHS of the
chosen rule\. Here we can see why the terminal symbols are called such\.
They cannot be expanded any further, thus terminate the process of
deriving new expressions\. An example

    Rules
    \(1\)  A <\- a B c
    \(2a\) B <\- d B
    \(2b\) B <\- e

    Some derivations, using starting expression A\.

    A \-/1/\-> a B c \-/2a/\-> a d B c \-/2b/\-> a d e c

A derived expression containing only terminal symbols is a *sentence*\.
The set of all sentences which can be derived from the start expression is
the *language* of the grammar\.

Some definitions for nonterminals and expressions:

................................................................................

overwriting any existing definition\.

This is the assignment operator for grammars\. It copies the grammar
contained in the grammar object *srcPEG* over the grammar definition in
*pegName*\. The old contents of *pegName* are deleted by this operation\.
This operation is in effect equivalent to

    *pegName* __deserialize__ \[*srcPEG* __serialize__\]

- *pegName* __\-\->__ *dstPEG*

  This is the reverse assignment operator for grammars\. It copies the
  automation contained in the object *pegName* over the grammar definition
  in the object *dstPEG*\. The old contents of *dstPEG* are deleted by this
  operation\.

  This operation is in effect equivalent to

      *dstPEG* __deserialize__ \[*pegName* __serialize__\]

- *pegName* __serialize__

  This method serializes the grammar stored in *pegName*\. In other words
  it returns a tcl *value* completely describing that grammar\. This
  allows, for example, the transfer of grammars over arbitrary channels,
  persistence, etc\. This method is also the basis for both the copy
  constructor and the

................................................................................

values produced by the symbol\.

1. The last item is a parsing expression, the *start expression* of the
   grammar\.

Assuming the following PEG for simple mathematical expressions

    Digit      <\- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9'
    Sign       <\- '\+' / '\-'
    Number     <\- Sign? Digit\+
    Expression <\- '\(' Expression '\)' / \(Factor \(MulOp Factor\)\*\)
    MulOp      <\- '\*' / '/'
    Factor     <\- Term \(AddOp Term\)\*
    AddOp      <\- '\+'/'\-'
    Term       <\- Number

a possible serialization is

    grammar::peg \\\\
    \{Expression \{/ \{x \( Expression \)\} \{x Factor \{\* \{x MulOp Factor\}\}\}\} \\\\
     Factor \{x Term \{\* \{x AddOp Term\}\}\} \\\\
     Term Number \\\\
     MulOp \{/ \* /\} \\\\
     AddOp \{/ \+ \-\} \\\\
     Number \{x \{? Sign\} \{\+ Digit\}\} \\\\
     Sign \{/ \+ \-\} \\\\
     Digit \{/ 0 1 2 3 4 5 6 7 8 9\} \\\\
    \} \\\\
    \{Expression value Factor value \\\\
     Term value MulOp value \\\\
     AddOp value Number value \\\\
     Sign value Digit value \\\\
    \}
    Expression

A possible one, because the order of the nonterminals in the dictionary is
not relevant\.
- *pegName* __deserialize__ *serialization*

New text:

one of the nonterminals N in the expression, and one of the alternative
rules R for N, and then replace the nonterminal in A with the RHS of the
chosen rule\. Here we can see why the terminal symbols are called such\.
They cannot be expanded any further, thus terminate the process of
deriving new expressions\. An example

    Rules
    (1)  A <- a B c
    (2a) B <- d B
    (2b) B <- e

    Some derivations, using starting expression A.

    A -/1/-> a B c -/2a/-> a d B c -/2b/-> a d e c

A derived expression containing only terminal symbols is a *sentence*\.
The set of all sentences which can be derived from the start expression is
the *language* of the grammar\.

Some definitions for nonterminals and expressions:

................................................................................

overwriting any existing definition\.

This is the assignment operator for grammars\. It copies the grammar
contained in the grammar object *srcPEG* over the grammar definition in
*pegName*\. The old contents of *pegName* are deleted by this operation\.

This operation is in effect equivalent to

    *pegName* __deserialize__ [*srcPEG* __serialize__]

- *pegName* __\-\->__ *dstPEG*

  This is the reverse assignment operator for grammars\. It copies the
  automation contained in the object *pegName* over the grammar definition
  in the object *dstPEG*\. The old contents of *dstPEG* are deleted by this
  operation\.
This operation is in effect equivalent to

    *dstPEG* __deserialize__ [*pegName* __serialize__]

- *pegName* __serialize__

  This method serializes the grammar stored in *pegName*\. In other words
  it returns a tcl *value* completely describing that grammar\. This
  allows, for example, the transfer of grammars over arbitrary channels,
  persistence, etc\. This method is also the basis for both the copy
  constructor and the

................................................................................

values produced by the symbol\.

1. The last item is a parsing expression, the *start expression* of the
   grammar\.

Assuming the following PEG for simple mathematical expressions

    Digit      <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9'
    Sign       <- '+' / '-'
    Number     <- Sign? Digit+
    Expression <- '(' Expression ')' / (Factor (MulOp Factor)*)
    MulOp      <- '*' / '/'
    Factor     <- Term (AddOp Term)*
    AddOp      <- '+'/'-'
    Term       <- Number

a possible serialization is

    grammar::peg \\
    {Expression {/ {x ( Expression )} {x Factor {* {x MulOp Factor}}}} \\
     Factor {x Term {* {x AddOp Term}}} \\
     Term Number \\
     MulOp {/ * /} \\
     AddOp {/ + -} \\
     Number {x {? Sign} {+ Digit}} \\
     Sign {/ + -} \\
     Digit {/ 0 1 2 3 4 5 6 7 8 9} \\
    } \\
    {Expression value Factor value \\
     Term value MulOp value \\
     AddOp value Number value \\
     Sign value Digit value \\
    }
    Expression

A possible one, because the order of the nonterminals in the dictionary is
not relevant\.

- *pegName* __deserialize__ *serialization*

Changes to embedded/md/tcllib/files/modules/hook/hook.md.

Old text:

change the model's data:

    hook call ::model

The __\.view__ megawidget displays the model state, and needs to know
about model updates\. Consequently, it subscribes to the ::model's hook\.

    hook bind ::model \.view \[list \.view ModelUpdate\]

When the __::model__ calls the hook, the __\.view__s ModelUpdate
subcommand will be called\. Later the __\.view__ megawidget is destroyed\.
In its destructor, it tells the
*[hook](\.\./\.\./\.\./\.\./index\.md\#hook)* that it no longer exists:

    hook forget \.view

All bindings involving __\.view__ are deleted\.

# Credits

Hook has been designed and implemented by William H\. Duquette\.

New text:

change the model's data:

    hook call ::model

The __\.view__ megawidget displays the model state, and needs to know
about model updates\. Consequently, it subscribes to the ::model's hook\.

    hook bind ::model .view [list .view ModelUpdate]

When the __::model__ calls the hook, the __\.view__s ModelUpdate
subcommand will be called\. Later the __\.view__ megawidget is destroyed\.
In its destructor, it tells the
*[hook](\.\./\.\./\.\./\.\./index\.md\#hook)* that it no longer exists:

    hook forget .view

All bindings involving __\.view__ are deleted\.

# Credits

Hook has been designed and implemented by William H\. Duquette\.

Changes to embedded/md/tcllib/files/modules/http/autoproxy.md.

Old text:

To handle this change the applications using
__[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ must be patched, and not this
package, nor __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ itself\. Such a patch
may be as simple as generally activating __tls1__ support, as shown in the
example below\.

    package require tls
    tls::init \-tls1 1 ;\# forcibly activate support for the TLS1 protocol

    \.\.\. your own application code \.\.\.

# COMMANDS

- __::autoproxy::init__

  Initialize the autoproxy package from system resources\. Under unix this
  means we look for environment variables\. Under windows we look for the
  same

................................................................................

The end\-of\-options indicator may be used alone to unset any
authentication details currently enabled\.
# EXAMPLES

    package require autoproxy
    autoproxy::init
    autoproxy::configure \-basic \-username ME \-password SEKRET
    set tok \[http::geturl http://wiki\.tcl\.tk/\]
    http::data $tok

    package require http
    package require tls
    package require autoproxy
    autoproxy::init
    http::register https 443 autoproxy::tls\_socket
    set tok \[http::geturl https://www\.example\.com/\]

# REFERENCES

1. Berners\-Lee, T\., Fielding R\. and Frystyk, H\. "Hypertext Transfer
   Protocol \-\- HTTP/1\.0", RFC 1945, May 1996,
   \([http://www\.rfc\-editor\.org/rfc/rfc1945\.txt](http://www\.rfc\-editor\.org/rfc/rfc1945\.txt)\)

New text:

To handle this change the applications using
__[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ must be patched, and not this
package, nor __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ itself\. Such a patch
may be as simple as generally activating __tls1__ support, as shown in the
example below\.

    package require tls
    tls::init -tls1 1 ;# forcibly activate support for the TLS1 protocol

    ... your own application code ...

# COMMANDS

- __::autoproxy::init__

  Initialize the autoproxy package from system resources\. Under unix this
  means we look for environment variables\. Under windows we look for the
  same

................................................................................

The end\-of\-options indicator may be used alone to unset any
authentication details currently enabled\.

# EXAMPLES

    package require autoproxy
    autoproxy::init
    autoproxy::configure -basic -username ME -password SEKRET
    set tok [http::geturl http://wiki.tcl.tk/]
    http::data $tok

    package require http
    package require tls
    package require autoproxy
    autoproxy::init
    http::register https 443 autoproxy::tls_socket
    set tok [http::geturl https://www.example.com/]

# REFERENCES

1. Berners\-Lee, T\., Fielding R\. and Frystyk, H\. "Hypertext Transfer
   Protocol \-\- HTTP/1\.0", RFC 1945, May 1996,
   \([http://www\.rfc\-editor\.org/rfc/rfc1945\.txt](http://www\.rfc\-editor\.org/rfc/rfc1945\.txt)\)

Changes to embedded/md/tcllib/files/modules/httpd/httpd.md.

Old text:

# Minimal Example

Starting a web service requires starting a class of type
__httpd::server__, and providing that server with one or more URIs to
service, and __httpd::reply__ derived classes to generate them\.

    tool::define ::reply\.hello \{
      method content \{\} \{
        my puts "IRM Dispatch Server"
        my puts " Hello World\! "
        my puts
      \}
    \}
    ::docserver::server create HTTPD port 8015 myaddr 127\.0\.0\.1
    HTTPD add\_uri /\* \[list mixin reply\.hello\]

# Class ::httpd::server

This class is the root object of the webserver\. It is responsible for
opening the socket and providing the initial connection negotiation\.

- constructor ?port ?port?? ?myaddr ?ipaddr?|all? ?server\_string
  ?string?? ?server\_name ?string??

................................................................................

the __puts__ method of the reply, or simply populating the *reply\_body*
variable of the object\. The information returned by the __content__
method is not interpreted in any way\. If an exception is thrown \(via the
__[error](\.\./\.\./\.\./\.\./index\.md\#error)__ command in Tcl, for example\)
the caller will auto\-generate a 500 \{Internal Error\} message\.
A typical implementation of __content__ look like:

    tool::define ::test::content\.file \{
      superclass ::httpd::content\.file
      \# Return a file
      \# Note: this is using the content\.file mixin which looks for the reply\_file variable
      \# and will auto\-compute the Content\-Type
      method content \{\} \{
        my reset
        set doc\_root \[my http\_info get doc\_root\]
        my variable reply\_file
        set reply\_file \[file join $doc\_root index\.html\]
      \}
    \}
    tool::define ::test::content\.time \{
      \# return the current system time
      method content \{\} \{
        my variable reply\_body
        my reply set Content\-Type text/plain
        set reply\_body \[clock seconds\]
      \}
    \}
    tool::define ::test::content\.echo \{
      method content \{\} \{
        my variable reply\_body
        my reply set Content\-Type \[my request get CONTENT\_TYPE\]
        set reply\_body \[my PostData \[my request get CONTENT\_LENGTH\]\]
      \}
    \}
    tool::define ::test::content\.form\_handler \{
      method content \{\} \{
        set form \[my FormData\]
        my reply set Content\-Type \{text/html; charset=UTF\-8\}
        my puts \[my html header \{My Dynamic Page\}\]
        my puts ""
        my puts "You Sent "
        my puts ""
        foreach \{f v\} $form \{
          my puts "$f$v"
        \}
        my puts ""
        my puts "Send some info: "
        my puts ""
        my puts ""
        foreach field \{name rank serial\_number\} \{
          set line "$field "
          my puts $line
        \}
        my puts ""
        my puts \[my html footer\]
      \}
    \}

- method __EncodeStatus__ *status*

  Formulate a standard HTTP status header from he string provided\.

- method FormData

................................................................................

    %a, %d %b %Y %T %Z

- method __TransferComplete__ *args*

  Intended to be invoked from __chan copy__ as a callback\. This closes
  every channel fed to it on the command line, and then destroys the
  object\.

    \#\#\#
    \# Output the body
    \#\#\#
    chan configure $sock \-translation binary \-blocking 0 \-buffering full \-buffersize 4096
    chan configure $chan \-translation binary \-blocking 0 \-buffering full \-buffersize 4096
    if \{$length\} \{
      \#\#\#
      \# Send any POST/PUT/etc content
      \#\#\#
      chan copy $sock $chan \-size $SIZE \-command \[info coroutine\]
      yield
    \}
    catch \{close $sock\}
    chan flush $chan

- method __Url\_Decode__ *string*

  De\-httpizes a string\.

# Class ::httpd::content

New text:

# Minimal Example

Starting a web service requires starting a class of type
__httpd::server__, and providing that server with one or more URIs to
service, and __httpd::reply__ derived classes to generate them\.

    tool::define ::reply.hello {
      method content {} {
        my puts "IRM Dispatch Server"
        my puts " Hello World! "
        my puts
      }
    }
    ::docserver::server create HTTPD port 8015 myaddr 127.0.0.1
    HTTPD add_uri /* [list mixin reply.hello]

# Class ::httpd::server

This class is the root object of the webserver\. It is responsible for
opening the socket and providing the initial connection negotiation\.

- constructor ?port ?port?? ?myaddr ?ipaddr?|all? ?server\_string
  ?string?? ?server\_name ?string??

................................................................................

the __puts__ method of the reply, or simply populating the *reply\_body*
variable of the object\. The information returned by the __content__
method is not interpreted in any way\. If an exception is thrown \(via the
__[error](\.\./\.\./\.\./\.\./index\.md\#error)__ command in Tcl, for example\)
the caller will auto\-generate a 500 \{Internal Error\} message\.

A typical implementation of __content__ look like:

    tool::define ::test::content.file {
      superclass ::httpd::content.file
      # Return a file
      # Note: this is using the content.file mixin which looks for the reply_file variable
      # and will auto-compute the Content-Type
      method content {} {
        my reset
        set doc_root [my http_info get doc_root]
        my variable reply_file
        set reply_file [file join $doc_root index.html]
      }
    }
    tool::define ::test::content.time {
      # return the current system time
      method content {} {
        my variable reply_body
        my reply set Content-Type text/plain
        set reply_body [clock seconds]
      }
    }
    tool::define ::test::content.echo {
      method content {} {
        my variable reply_body
        my reply set Content-Type [my request get CONTENT_TYPE]
        set reply_body [my PostData [my request get CONTENT_LENGTH]]
      }
    }
    tool::define ::test::content.form_handler {
      method content {} {
        set form [my FormData]
        my reply set Content-Type {text/html; charset=UTF-8}
        my puts [my html header {My Dynamic Page}]
        my puts ""
        my puts "You Sent "
        my puts ""
        foreach {f v} $form {
          my puts "$f$v"
        }
        my puts ""
        my puts "Send some info: "
        my puts ""
        my puts ""
        foreach field {name rank serial_number} {
          set line "$field "
          my puts $line
        }
        my puts ""
        my puts [my html footer]
      }
    }

- method __EncodeStatus__ *status*

  Formulate a standard HTTP status header from he string provided\.

- method FormData

................................................................................

    %a, %d %b %Y %T %Z

- method __TransferComplete__ *args*

  Intended to be invoked from __chan copy__ as a callback\. This closes
  every channel fed to it on the command line, and then destroys the
  object\.

    ###
    # Output the body
    ###
    chan configure $sock -translation binary -blocking 0 -buffering full -buffersize 4096
    chan configure $chan -translation binary -blocking 0 -buffering full -buffersize 4096
    if {$length} {
      ###
      # Send any POST/PUT/etc content
      ###
      chan copy $sock $chan -size $SIZE -command [info coroutine]
      yield
    }
    catch {close $sock}
    chan flush $chan

- method __Url\_Decode__ *string*

  De\-httpizes a string\.

# Class ::httpd::content

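Every hunk in this check-in shows the same pattern: markdown metacharacters remain backslash-escaped in ordinary prose, but are now emitted literally inside verbatim blocks, where they carry no markup meaning. As an illustrative sketch only (Python, not the actual Tcl doctools markdown engine, and `md_escape` is a hypothetical name):

```python
# Illustrative sketch of the rule this check-in fixes, NOT the doctools code:
# escape markdown special characters in prose, pass verbatim text through.
MD_SPECIAL = set("\\`*_{}[]()#+-.!")

def md_escape(text, in_verbatim=False):
    """Backslash-escape markdown metacharacters unless inside a verbatim block."""
    if in_verbatim:
        return text  # verbatim blocks: characters are not special, emit as-is
    return "".join("\\" + ch if ch in MD_SPECIAL else ch for ch in text)
```

Before the fix the engine effectively ran the escaping path for verbatim text as well, which is why the old columns above show sequences such as `\{` and `\[` inside code blocks.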
Changes to embedded/md/tcllib/files/modules/imap4/imap4.md.

Old text:

The namespace variable __::imap4::use\_ssl__ can be used to establish to a
secure connection via TSL/SSL if set to true\. In this case default
connection port defaults to 993\. *Note:* For connecting via SSL the Tcl
module *tls* must be already loaded otherwise an error is raised\.

    package require tls ; \# must be loaded for TLS/SSL
    set ::imap4::use\_ssl 1 ; \# request a secure connection
    set chan \[::imap4::open $server\] ; \# default port is now 993

- __::imap4::starttls__ *chan*

  Use this when tasked with connecting to an unsecure port which must be
  changed to a secure port prior to user login\. This feature is known as
  *STARTTLS*\.

................................................................................

*mboxname* \- mailbox name, defaults to "\*"

If __\-inline__ is specified a compact folderlist is returned instead of
the result code\. All flags are converted to lowercase and leading special
characters are removed\.

    \{\{Arc08 noselect\} \{Arc08/Private \{noinferiors unmarked\}\} \{INBOX noinferiors\}\}

- __::imap4::select__ *chan* ?*mailbox*?

  Select a mailbox, 0 is returned on success\.

  *chan* \- imap channel

................................................................................

Currently supported options: *delim* \- hierarchy delimiter only, *match*
\- ref and mbox search patterns \(see __::imap4::folders__\), *names* \-
list of folder names only, *flags* \- list of folder names with flags in
format *\{ \{name \{flags\}\} \.\.\. \}* \(see also compact format in
function __::imap4::folders__\)\.

    \{\{Arc08 \{\{\\NoSelect\}\}\} \{Arc08/Private \{\{\\NoInferiors\} \{\\UnMarked\}\}\} \{INBOX \{\\NoInferiors\}\}\}

- __::imap4::msginfo__ *chan* *msgid* ?*info*? ?*defval*?

  Get information \(from previously collected using fetch\) from a given
  *msgid*\. If the 'info' argument is omitted or a null string, the list of
  available information options for the given message is returned\.

................................................................................

'recent' flagged msgs\), *FLAGS* In conjunction with OK: *PERMFLAGS*,
*UIDNEXT*, *UIDVAL*, *UNSEEN* Div\. states: *CURRENT*, *FOUND*, *PERM*\.

    ::imap4::select $chan INBOX
    puts "\[::imap4::mboxinfo $chan exists\] mails in INBOX"

- __::imap4::isableto__ *chan* ?*capability*?

  Test for capability\. It returns 1 if requested capability is supported,
  0 otherwise\. If *capability* is omitted all capability imap codes are
  retured as list\.

................................................................................

*Imap conditional search flags:* SMALLER, LARGER, ON, SENTBEFORE, SENTON,
SENTSINCE, SINCE, BEFORE \(not implemented\), UID \(not implemented\)

*Logical search conditions:* OR, NOT

    ::imap4::search $chan larger 4000 seen
    puts "Found messages: \[::imap4::mboxinfo $chan found\]"
    Found messages: 1 3 6 7 8 9 13 14 15 19 20

- __::imap4::close__ *chan*

  Close the mailbox\. Permanently removes \\Deleted messages and return to
  the AUTH state\.

................................................................................

* \-FLAGS Remove the flags in *flaglist* to the existing flags for the
  message\.

  For example:

      ::imap4::store $chan $start\_msgid:$end\_msgid \+FLAGS "Deleted"

- __::imap4::expunge__ *chan*

  Permanently removes all messages that have the \\Deleted flag set from
  the currently selected mailbox, without the need to close the
  connection\.

  *chan* \- imap channel

................................................................................

*chan* \- imap channel

# EXAMPLES

    set user myusername
    set pass xtremescrt
    set server imap\.test\.tld
    set FOLDER INBOX
    \# Connect to server
    set imap \[::imap4::open $server\]
    ::imap4::login $imap $user $pass
    ::imap4::select $imap $FOLDER
    \# Output all the information about that mailbox
    foreach info \[::imap4::mboxinfo $imap\] \{
        puts "$info \-> \[::imap4::mboxinfo $imap $info\]"
    \}
    \# fetch 3 records inline
    set fields \{from: to: subject: size\}
    foreach rec \[::imap4::fetch $imap :3 \-inline \{\*\}$fields\] \{
        puts \-nonewline "\#\[incr idx\]\)"
        for \{set j 0\} \{$j<\[llength $fields\]\} \{incr j\} \{
            puts "\\t\[lindex $fields $j\] \[lindex $rec $j\]"
        \}
    \}
    \# Show all the information available about the message ID 1
    puts "Available info about message 1: \[::imap4::msginfo $imap 1\]"
    \# Use the capability stuff
    puts "Capabilities: \[::imap4::isableto $imap\]"
    puts "Is able to imap4rev1? \[::imap4::isableto $imap imap4rev1\]"
    \# Cleanup
    ::imap4::cleanup $imap

# TLS Security Considerations

This package uses the __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ package to
handle the security for __https__ urls and other socket connections\.

................................................................................

To handle this change the applications using
__[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ must be patched, and not this
package, nor __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ itself\. Such a patch
may be as simple as generally activating __tls1__ support, as shown in the
example below\.

    package require tls
    tls::init \-tls1 1 ;\# forcibly activate support for the TLS1 protocol

    \.\.\. your own application code \.\.\.

# REFERENCES

Mark R\. Crispin, "INTERNET MESSAGE ACCESS PROTOCOL \- VERSION 4rev1", RFC
3501, March 2003,
[http://www\.rfc\-editor\.org/rfc/rfc3501\.txt](http://www\.rfc\-editor\.org/rfc/rfc3501\.txt)

New text:

The namespace variable __::imap4::use\_ssl__ can be used to establish to a
secure connection via TSL/SSL if set to true\. In this case default
connection port defaults to 993\. *Note:* For connecting via SSL the Tcl
module *tls* must be already loaded otherwise an error is raised\.

    package require tls ; # must be loaded for TLS/SSL
    set ::imap4::use_ssl 1 ; # request a secure connection
    set chan [::imap4::open $server] ; # default port is now 993

- __::imap4::starttls__ *chan*

  Use this when tasked with connecting to an unsecure port which must be
  changed to a secure port prior to user login\. This feature is known as
  *STARTTLS*\.

................................................................................

*mboxname* \- mailbox name, defaults to "\*"

If __\-inline__ is specified a compact folderlist is returned instead of
the result code\. All flags are converted to lowercase and leading special
characters are removed\.

    {{Arc08 noselect} {Arc08/Private {noinferiors unmarked}} {INBOX noinferiors}}

- __::imap4::select__ *chan* ?*mailbox*?

  Select a mailbox, 0 is returned on success\.
*chan* \- imap channel ................................................................................ Currently supported options: *delim* \- hierarchy delimiter only, *match* \- ref and mbox search patterns $$see __::imap4::folders__$$, *names* \- list of folder names only, *flags* \- list of folder names with flags in format *\{ \{name \{flags\}\} \.\.\. \}* $$see also compact format in function __::imap4::folders__$$\. {{Arc08 {{\NoSelect}}} {Arc08/Private {{\NoInferiors} {\UnMarked}}} {INBOX {\NoInferiors}}} - __::imap4::msginfo__ *chan* *msgid* ?*info*? ?*defval*? Get information $$from previously collected using fetch$$ from a given *msgid*\. If the 'info' argument is omitted or a null string, the list of available information options for the given message is returned\. ................................................................................ 'recent' flagged msgs\), *FLAGS* In conjunction with OK: *PERMFLAGS*, *UIDNEXT*, *UIDVAL*, *UNSEEN* Div\. states: *CURRENT*, *FOUND*, *PERM*\. ::imap4::select $chan INBOX puts "[::imap4::mboxinfo$chan exists] mails in INBOX" - __::imap4::isableto__ *chan* ?*capability*? Test for capability\. It returns 1 if requested capability is supported, 0 otherwise\. If *capability* is omitted all capability imap codes are retured as list\. ................................................................................ *Imap conditional search flags:* SMALLER, LARGER, ON, SENTBEFORE, SENTON, SENTSINCE, SINCE, BEFORE $$not implemented$$, UID $$not implemented$$ *Logical search conditions:* OR, NOT ::imap4::search $chan larger 4000 seen puts "Found messages: [::imap4::mboxinfo$chan found]" Found messages: 1 3 6 7 8 9 13 14 15 19 20 - __::imap4::close__ *chan* Close the mailbox\. Permanently removes \\Deleted messages and return to the AUTH state\. ................................................................................ * \-FLAGS Remove the flags in *flaglist* to the existing flags for the message\. 
For example:

    ::imap4::store $chan $start_msgid:$end_msgid +FLAGS "Deleted"

  - __::imap4::expunge__ *chan*

    Permanently removes all messages that have the \\Deleted flag set from the currently selected mailbox, without the need to close the connection\.

    *chan* \- imap channel

................................................................................

*chan* \- imap channel

# EXAMPLES

    set user myusername
    set pass xtremescrt
    set server imap.test.tld
    set FOLDER INBOX
    # Connect to server
    set imap [::imap4::open $server]
    ::imap4::login $imap $user $pass
    ::imap4::select $imap $FOLDER
    # Output all the information about that mailbox
    foreach info [::imap4::mboxinfo $imap] {
        puts "$info -> [::imap4::mboxinfo $imap $info]"
    }
    # fetch 3 records inline
    set fields {from: to: subject: size}
    foreach rec [::imap4::fetch $imap :3 -inline {*}$fields] {
        puts -nonewline "#[incr idx])"
        for {set j 0} {$j<[llength $fields]} {incr j} {
            puts "\t[lindex $fields $j] [lindex $rec $j]"
        }
    }
    # Show all the information available about the message ID 1
    puts "Available info about message 1: [::imap4::msginfo $imap 1]"
    # Use the capability stuff
    puts "Capabilities: [::imap4::isableto $imap]"
    puts "Is able to imap4rev1? [::imap4::isableto $imap imap4rev1]"
    # Cleanup
    ::imap4::cleanup $imap

# TLS Security Considerations

This package uses the __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ package to handle the security for __https__ urls and other socket connections\.

................................................................................

To handle this change the applications using __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ must be patched, and not this package, nor __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ itself\. Such a patch may be as simple as generally activating __tls1__ support, as shown in the example below\.

    package require tls
    tls::init -tls1 1 ;# forcibly activate support for the TLS1 protocol

    ... your own application code ...

# REFERENCES

Mark R\.
Crispin, "INTERNET MESSAGE ACCESS PROTOCOL \- VERSION 4rev1", RFC 3501, March 2003, [http://www\.rfc\-editor\.org/rfc/rfc3501\.txt](http://www\.rfc\-editor\.org/rfc/rfc3501\.txt)

Changes to embedded/md/tcllib/files/modules/irc/picoirc.md.

break error code to halt further processing\. In this way the application can override the default send via the callback procedure\.

# CALLBACK

The callback must look like:

    proc Callback \{context state args\} \{
    \}

where context is the irc context variable name \(in case you need to pass it back to a picoirc procedure\)\. state is one of a number of states as described below\.

  - __init__

    called just before the socket is created

break error code to halt further processing\. In this way the application can override the default send via the callback procedure\.

# CALLBACK

The callback must look like:

    proc Callback {context state args} {
    }

where context is the irc context variable name \(in case you need to pass it back to a picoirc procedure\)\. state is one of a number of states as described below\.

  - __init__

    called just before the socket is created

Changes to embedded/md/tcllib/files/modules/jpeg/jpeg.md.

thumbnail is included in *file*, and the empty string otherwise\. Note that it is possible to include thumbnails in formats other than JPEG although that is not common\. The command finds thumbnails that are encoded in either the JFXX or EXIF segments of the JPEG information\. If both are present the EXIF thumbnail will take precedence\. Throws an error if *file* is not a JPEG image\.
    set fh \[open thumbnail\.jpg w\+\]
    fconfigure $fh \-translation binary \-encoding binary
    puts \-nonewline $fh \[::jpeg::getThumbnail photo\.jpg\]
    close $fh

  - __::jpeg::getExif__ *file* ?*section*?

    *section* must be one of __main__ or __thumbnail__\. The default is __main__\. Returns a dictionary containing the EXIF information for the specified section\. For example:

    set exif \{
        Make     Canon
        Model    \{Canon DIGITAL IXUS\}
        DateTime \{2001:06:09 15:17:32\}
    \}

    Throws an error if *file* is not a JPEG image\.

  - __::jpeg::getExifFromChannel__ *channel* ?*section*?

    This command is as per __::jpeg::getExif__ except that it uses a previously opened channel\. *channel* should be a seekable channel and

................................................................................

  - __::jpeg::formatExif__ *keys*

    Takes a list of key\-value pairs as returned by __getExif__ and formats many of the values into a more human readable form\. As few as one key\-value pair may be passed in; the entire exif is not required\.

    foreach \{key val\} \[::jpeg::formatExif \[::jpeg::getExif photo\.jpg\]\] \{
        puts "$key: $val"
    \}

    array set exif \[::jpeg::getExif photo\.jpg\]
    puts "max f\-stop: \[::jpeg::formatExif \[list MaxAperture $exif(MaxAperture)\]\]"

  - __::jpeg::exifKeys__

    Returns a list of the EXIF keys which are currently understood\. There may be keys present in __getExif__ data that are not understood\. Those keys will appear in a 4 digit hexadecimal format\.

thumbnail is included in *file*, and the empty string otherwise\. Note that it is possible to include thumbnails in formats other than JPEG although that is not common\. The command finds thumbnails that are encoded in either the JFXX or EXIF segments of the JPEG information\.
If both are present the EXIF thumbnail will take precedence\. Throws an error if *file* is not a JPEG image\.

    set fh [open thumbnail.jpg w+]
    fconfigure $fh -translation binary -encoding binary
    puts -nonewline $fh [::jpeg::getThumbnail photo.jpg]
    close $fh

  - __::jpeg::getExif__ *file* ?*section*?

    *section* must be one of __main__ or __thumbnail__\. The default is __main__\. Returns a dictionary containing the EXIF information for the specified section\. For example:

    set exif {
        Make     Canon
        Model    {Canon DIGITAL IXUS}
        DateTime {2001:06:09 15:17:32}
    }

    Throws an error if *file* is not a JPEG image\.

  - __::jpeg::getExifFromChannel__ *channel* ?*section*?

    This command is as per __::jpeg::getExif__ except that it uses a previously opened channel\. *channel* should be a seekable channel and

................................................................................

  - __::jpeg::formatExif__ *keys*

    Takes a list of key\-value pairs as returned by __getExif__ and formats many of the values into a more human readable form\. As few as one key\-value pair may be passed in; the entire exif is not required\.

    foreach {key val} [::jpeg::formatExif [::jpeg::getExif photo.jpg]] {
        puts "$key: $val"
    }

    array set exif [::jpeg::getExif photo.jpg]
    puts "max f-stop: [::jpeg::formatExif [list MaxAperture $exif(MaxAperture)]]"

  - __::jpeg::exifKeys__

    Returns a list of the EXIF keys which are currently understood\. There may be keys present in __getExif__ data that are not understood\. Those keys will appear in a 4 digit hexadecimal format\.
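The jpeg docs above talk about thumbnails living in JFXX or EXIF application segments\. As a language\-neutral illustration of that layout \(this is a small Python model, not part of the Tcl jpeg package\), a JPEG file is a sequence of marker segments, and EXIF data sits in an APP1 segment whose payload starts with "Exif\\0\\0":

```python
import struct

def jpeg_app_segments(data: bytes):
    """Scan a JPEG byte string and return (marker, payload) pairs for
    APP0..APP15 segments, stopping at start-of-scan (SOS)."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    pos = 2
    segments = []
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:
            break                      # desynchronized; bail out
        marker = data[pos + 1]
        if marker == 0xDA:             # SOS: compressed image data follows
            break
        # big-endian 16-bit length, which counts the length field itself
        (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
        payload = data[pos + 4:pos + 2 + length]
        if 0xE0 <= marker <= 0xEF:     # APP0..APP15
            segments.append((marker, payload))
        pos += 2 + length
    return segments

def has_exif(data: bytes) -> bool:
    # EXIF lives in an APP1 (0xFFE1) segment starting with "Exif\0\0"
    return any(m == 0xE1 and p.startswith(b"Exif\x00\x00")
               for m, p in jpeg_app_segments(data))
```

A file carrying only a JFIF APP0 segment would report no EXIF, which is why the Tcl command has to check both JFXX and EXIF segments when hunting for a thumbnail\.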

Changes to embedded/md/tcllib/files/modules/json/json.md.

throw an error\.

# EXAMPLES

An example of a JSON array converted to Tcl\. A JSON array is returned as a single item with multiple elements\.

    \[
      \{
        "precision": "zip",
        "Latitude":  37\.7668,
        "Longitude": \-122\.3959,
        "Address":   "",
        "City":      "SAN FRANCISCO",
        "State":     "CA",
        "Zip":       "94107",
        "Country":   "US"
      \},
      \{
        "precision": "zip",
        "Latitude":  37\.371991,
        "Longitude": \-122\.026020,
        "Address":   "",
        "City":      "SUNNYVALE",
        "State":     "CA",
        "Zip":       "94085",
        "Country":   "US"
      \}
    \]

    =>

    \{Country US Latitude 37\.7668 precision zip State CA City \{SAN FRANCISCO\} Address \{\} Zip 94107 Longitude \-122\.3959\} \{Country US Latitude 37\.371991 precision zip State CA City SUNNYVALE Address \{\} Zip 94085 Longitude \-122\.026020\}

An example of a JSON object converted to Tcl\. A JSON object is returned as a multi\-element list \(a dict\)\.

    \{
      "Image": \{
        "Width":  800,
        "Height": 600,
        "Title":  "View from 15th Floor",
        "Thumbnail": \{
          "Url":    "http://www\.example\.com/image/481989943",
          "Height": 125,
          "Width":  "100"
        \},
        "IDs": \[116, 943, 234, 38793\]
      \}
    \}

    =>

    Image \{IDs \{116 943 234 38793\} Thumbnail \{Width 100 Height 125 Url http://www\.example\.com/image/481989943\} Width 800 Height 600 Title \{View from 15th Floor\}\}

# RELATED

To write json, instead of parsing it, see package __[json::write](json\_write\.md)__\.

# Bugs, Ideas, Feedback

throw an error\.

# EXAMPLES

An example of a JSON array converted to Tcl\. A JSON array is returned as a single item with multiple elements\.
    [
      {
        "precision": "zip",
        "Latitude":  37.7668,
        "Longitude": -122.3959,
        "Address":   "",
        "City":      "SAN FRANCISCO",
        "State":     "CA",
        "Zip":       "94107",
        "Country":   "US"
      },
      {
        "precision": "zip",
        "Latitude":  37.371991,
        "Longitude": -122.026020,
        "Address":   "",
        "City":      "SUNNYVALE",
        "State":     "CA",
        "Zip":       "94085",
        "Country":   "US"
      }
    ]

    =>

    {Country US Latitude 37.7668 precision zip State CA City {SAN FRANCISCO} Address {} Zip 94107 Longitude -122.3959} {Country US Latitude 37.371991 precision zip State CA City SUNNYVALE Address {} Zip 94085 Longitude -122.026020}

An example of a JSON object converted to Tcl\. A JSON object is returned as a multi\-element list \(a dict\)\.

    {
      "Image": {
        "Width":  800,
        "Height": 600,
        "Title":  "View from 15th Floor",
        "Thumbnail": {
          "Url":    "http://www.example.com/image/481989943",
          "Height": 125,
          "Width":  "100"
        },
        "IDs": [116, 943, 234, 38793]
      }
    }

    =>

    Image {IDs {116 943 234 38793} Thumbnail {Width 100 Height 125 Url http://www.example.com/image/481989943} Width 800 Height 600 Title {View from 15th Floor}}

# RELATED

To write json, instead of parsing it, see package __[json::write](json\_write\.md)__\.

# Bugs, Ideas, Feedback
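The "JSON object becomes an alternating key/value list" mapping shown above can be mirrored in any language\. A rough Python sketch \(a model of the conversion, not the Tcl json package itself\):

```python
import json

# A shortened version of the array from the example above.
doc = '''[
  {"precision": "zip", "Latitude": 37.7668,   "City": "SAN FRANCISCO"},
  {"precision": "zip", "Latitude": 37.371991, "City": "SUNNYVALE"}
]'''

records = json.loads(doc)

def as_tcl_dict(obj: dict) -> list:
    """Flatten a mapping into an alternating key/value list, which is
    how a Tcl dict is represented as a list."""
    flat = []
    for key, value in obj.items():
        flat.extend([key, value])
    return flat

print(as_tcl_dict(records[0]))
```

The array case follows the same rule: the whole array is one list whose elements are the flattened objects\.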

Changes to embedded/md/tcllib/files/modules/lambda/lambda.md.

# DESCRIPTION

This package provides two convenience commands to make the writing of anonymous procedures, i\.e\. lambdas, more __[proc](\.\./\.\./\.\./\.\./index\.md\#proc)__\-like\. Instead of writing, for example,

    set f \{::apply \{\{x\} \{
        \.\.\.\.
    \}\}\}

with its deep nesting of braces, or

    set f \[list ::apply \{\{x y\} \{
        \.\.\.\.
    \}\} $value\_for\_x\]

with a list command to insert some of the arguments of a partial application, just write

    set f \[lambda \{x\} \{
        \.\.\.\.
    \}\]

and

    set f \[lambda \{x y\} \{
        \.\.\.\.
    \} $value\_for\_x\]

# COMMANDS

  - __::lambda__ *arguments* *body* ?*arg*\.\.\.?

    The command constructs an anonymous procedure from the list of arguments, body script and \(optional\) predefined argument values and returns a command

# DESCRIPTION

This package provides two convenience commands to make the writing of anonymous procedures, i\.e\. lambdas, more __[proc](\.\./\.\./\.\./\.\./index\.md\#proc)__\-like\. Instead of writing, for example,

    set f {::apply {{x} {
        ....
    }}}

with its deep nesting of braces, or

    set f [list ::apply {{x y} {
        ....
    }} $value_for_x]

with a list command to insert some of the arguments of a partial application, just write

    set f [lambda {x} {
        ....
    }]

and

    set f [lambda {x y} {
        ....
    } $value_for_x]

# COMMANDS

  - __::lambda__ *arguments* *body* ?*arg*\.\.\.?

    The command constructs an anonymous procedure from the list of arguments, body script and \(optional\) predefined argument values and returns a command

Changes to embedded/md/tcllib/files/modules/lazyset/lazyset.md.

boolean* is specified as true, then 2 arguments are appended corresponding to the name of the variable and the index, otherwise 1 argument is appended containing the name of the variable\. The *commandPrefix* code is run in the same scope as the variable is read\.

# EXAMPLES

    ::lazyset::variable page \{apply \{\{name\} \{
        package require http
        set token \[http::geturl http://www\.tcl\.tk/\]
        set data \[http::data $token\]
        return $data
    \}\}\}

    puts $page

    ::lazyset::variable \-array true page \{apply \{\{name index\} \{
        package require http
        set token \[http::geturl $index\]
        set data \[http::data $token\]
        return $data
    \}\}\}

    puts $page\(http://www\.tcl\.tk/\)

    ::lazyset::variable \-appendArgs false simple \{
        return \-level 0 42
    \}

    puts $simple

# AUTHORS

Roy Keene

boolean* is specified as true, then 2 arguments are appended corresponding to the name of the variable and the index, otherwise 1 argument is appended containing the name of the variable\. The *commandPrefix* code is run in the same scope as the variable is read\.

# EXAMPLES

    ::lazyset::variable page {apply {{name} {
        package require http
        set token [http::geturl http://www.tcl.tk/]
        set data [http::data $token]
        return $data
    }}}

    puts $page

    ::lazyset::variable -array true page {apply {{name index} {
        package require http
        set token [http::geturl $index]
        set data [http::data $token]
        return $data
    }}}

    puts $page(http://www.tcl.tk/)

    ::lazyset::variable -appendArgs false simple {
        return -level 0 42
    }

    puts $simple

# AUTHORS

Roy Keene

Changes to embedded/md/tcllib/files/modules/ldap/ldap.md.
To handle this change the applications using __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ must be patched, and not this package, nor __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ itself\. Such a patch may be as simple as generally activating __tls1__ support, as shown in the example below\.

    package require tls
    tls::init \-tls1 1 ;\# forcibly activate support for the TLS1 protocol

    \.\.\. your own application code \.\.\.

# COMMANDS

  - __::ldap::connect__ *host* ?*port*?

    Opens a LDAPv3 connection to the specified *host*, at the given *port*, and returns a token for the connection\. This token is the *handle*

................................................................................

If *verify\_cert* is set to 1, the default, this checks the server certificate against the known hosts\. If *sni\_servername* is set, the given hostname is used as the hostname for Server Name Indication in the TLS handshake\. Use __::tls::init__ to set up defaults for trusted certificates\.

    tls::init \-cadir /etc/ssl/certs/ca\-certificates\.crt

TLS supports different protocol levels\. In common use are the versions 1\.0, 1\.1 and 1\.2\. By default all those versions are offered\. If you need to modify the acceptable protocols, you can change the ::ldap::tlsProtocols list\.

  - __::ldap::disconnect__ *handle*

................................................................................
*attributes* of all matching objects \(DNs\)\. If the list of *attributes* was empty all attributes are returned\. The command blocks until it has received all results\. The valid *options* are identical to the options listed for __::ldap::searchInit__\. An example of a search expression is

    set filterString "|\(cn=Linus\*\)\(sn=Torvalds\*\)"

The return value of the command is a list of nested dictionaries\. The first level keys are object identifiers \(DNs\), second level keys are attribute names\. In other words, it is in the form

    \{dn1 \{attr1 \{val11 val12 \.\.\.\} attr2 \{val21\.\.\.\} \.\.\.\}\} \{dn2 \{a1 \{v11 \.\.\.\} \.\.\.\}\} \.\.\.

  - __::ldap::searchInit__ *handle* *baseObject* *filterString* *attributes* *options*

    This command initiates a LDAP search below the *baseObject* tree using a complex LDAP search expression *filterString*\. The search gets the specified *attributes* of all matching objects \(DNs\)\. The command itself just starts the search; to retrieve the actual results, use

................................................................................

extensions__, which invoke a search internally\. Error responses from the server due to wrong arguments or similar things are returned with the first __::ldap::searchNext__ call and should be checked and dealt with there\. If the list of requested *attributes* is empty all attributes will be returned\. The parameter *options* specifies the options to be used in the search, and has the following format:

    \{\-option1 value1 \-option2 value2 \.\.\. \}

The following options are available:

  * __\-scope__ base one sub

    Control the scope of the search to be one of __base__, __one__, or __sub__, to specify a base object, one\-level or subtree search\.

................................................................................

This command returns the next entry from a LDAP search initiated by __::ldap::searchInit__\.
It returns only after a new result is received or when no further results are available, but takes care to keep the event loop alive\. The returned entry is a list with two elements: the first is the DN of the entry, the second is the list of attributes and values, in the format:

    dn \{attr1 \{val11 val12 \.\.\.\} attr2 \{val21\.\.\.\} \.\.\.\}

The __::ldap::searchNext__ command returns an empty list at the end of the search\.

  - __::ldap::searchEnd__ *handle*

    This command terminates a LDAP search initiated by

................................................................................

  - __::ldap::modifyMulti__ *handle* *dn* *attrValToReplace* ?*attrValToDelete*? ?*attrValToAdd*?

    This command modifies the object *dn* on the ldap server we are connected to via *handle*\. It replaces attributes with new values, deletes attributes, and adds new attributes with new values\. All arguments are lists with the format:

    attr1 \{val11 val12 \.\.\.\} attr2 \{val21\.\.\.\} \.\.\.

    where each value list may be empty for deleting all attributes\. The optional arguments default to empty lists of attributes to delete and to add\.

  * list *attrValToReplace* \(in\)

    No attributes will be changed if this argument is empty\. The dictionary

................................................................................

# EXAMPLES

A small example, extracted from the test application coming with this code\.
    package require ldap

    \# Connect, bind, add a new object, modify it in various ways

    set handle \[ldap::connect localhost 9009\]

    set dn "cn=Manager, o=University of Michigan, c=US"
    set pw secret

    ldap::bind $handle $dn $pw

    set dn "cn=Test User,ou=People,o=University of Michigan,c=US"

    ldap::add $handle $dn \{
        objectClass     OpenLDAPperson
        cn              \{Test User\}
        mail            test\[email protected]\.com
        uid             testuid
        sn              User
        telephoneNumber \+31415926535
        telephoneNumber \+27182818285
    \}

    set dn "cn=Another User,ou=People,o=University of Michigan,c=US"

    ldap::addMulti $handle $dn \{
        objectClass     \{OpenLDAPperson\}
        cn              \{\{Another User\}\}
        mail            \{test\[email protected]\.com\}
        uid             \{testuid\}
        sn              \{User\}
        telephoneNumber \{\+31415926535 \+27182818285\}
    \}

    \# Replace all attributes
    ldap::modify $handle $dn \[list drink icetea uid JOLO\]

    \# Add some more
    ldap::modify $handle $dn \{\} \{\} \[list drink water drink orangeJuice pager "\+1 313 555 7671"\]

    \# Delete
    ldap::modify $handle $dn \{\} \[list drink water pager ""\]

    \# Move
    ldap::modifyDN $handle $dn "cn=Tester"

    \# Kill the test object, and shut the connection down\.
    set dn "cn=Tester,ou=People,o=University of Michigan,c=US"
    ldap::delete $handle $dn

    ldap::unbind     $handle
    ldap::disconnect $handle

And another example, a simple query, and processing the results\.

    package require ldap

    set handle \[ldap::connect ldap\.acme\.com 389\]
    ldap::bind $handle

    set results \[ldap::search $handle "o=acme,dc=com" "\(uid=jdoe\)" \{\}\]

    foreach result $results \{
        foreach \{object attributes\} $result break

        \# The processing here is similar to what 'parray' does\.
        \# I\.e\. finding the longest attribute name and then
        \# generating properly aligned output listing all attributes
        \# and their values\.
        set width 0
        set sortedAttribs \{\}
        foreach \{type values\} $attributes \{
            if \{\[string length $type\] > $width\} \{
                set width \[string length $type\]
            \}
            lappend sortedAttribs \[list $type $values\]
        \}

        puts "object='$object'"

        foreach sortedAttrib $sortedAttribs \{
            foreach \{type values\} $sortedAttrib break
            foreach value $values \{
                regsub \-all "\\\[\\x01\-\\x1f\\\]" $value ? value
                puts \[format "  %\-$\{width\}s %s" $type $value\]
            \}
        \}
        puts ""
    \}

    ldap::unbind     $handle
    ldap::disconnect $handle

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *ldap* of the [Tcllib

To handle this change the applications using __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ must be patched, and not this package, nor __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ itself\. Such a patch may be as simple as generally activating __tls1__ support, as shown in the example below\.

    package require tls
    tls::init -tls1 1 ;# forcibly activate support for the TLS1 protocol

    ...
your own application code ...

# COMMANDS

  - __::ldap::connect__ *host* ?*port*?

    Opens a LDAPv3 connection to the specified *host*, at the given *port*, and returns a token for the connection\. This token is the *handle*

................................................................................

If *verify\_cert* is set to 1, the default, this checks the server certificate against the known hosts\. If *sni\_servername* is set, the given hostname is used as the hostname for Server Name Indication in the TLS handshake\. Use __::tls::init__ to set up defaults for trusted certificates\.

    tls::init -cadir /etc/ssl/certs/ca-certificates.crt

TLS supports different protocol levels\. In common use are the versions 1\.0, 1\.1 and 1\.2\. By default all those versions are offered\. If you need to modify the acceptable protocols, you can change the ::ldap::tlsProtocols list\.

  - __::ldap::disconnect__ *handle*

................................................................................

*attributes* of all matching objects \(DNs\)\. If the list of *attributes* was empty all attributes are returned\. The command blocks until it has received all results\. The valid *options* are identical to the options listed for __::ldap::searchInit__\. An example of a search expression is

    set filterString "|(cn=Linus*)(sn=Torvalds*)"

The return value of the command is a list of nested dictionaries\. The first level keys are object identifiers \(DNs\), second level keys are attribute names\. In other words, it is in the form

    {dn1 {attr1 {val11 val12 ...} attr2 {val21...} ...}} {dn2 {a1 {v11 ...} ...}} ...

  - __::ldap::searchInit__ *handle* *baseObject* *filterString* *attributes* *options*

    This command initiates a LDAP search below the *baseObject* tree using a complex LDAP search expression *filterString*\. The search gets the specified *attributes* of all matching objects \(DNs\)\.
The command itself just starts the search; to retrieve the actual results, use

................................................................................

extensions__, which invoke a search internally\. Error responses from the server due to wrong arguments or similar things are returned with the first __::ldap::searchNext__ call and should be checked and dealt with there\. If the list of requested *attributes* is empty all attributes will be returned\. The parameter *options* specifies the options to be used in the search, and has the following format:

    {-option1 value1 -option2 value2 ... }

The following options are available:

  * __\-scope__ base one sub

    Control the scope of the search to be one of __base__, __one__, or __sub__, to specify a base object, one\-level or subtree search\.

................................................................................

This command returns the next entry from a LDAP search initiated by __::ldap::searchInit__\. It returns only after a new result is received or when no further results are available, but takes care to keep the event loop alive\. The returned entry is a list with two elements: the first is the DN of the entry, the second is the list of attributes and values, in the format:

    dn {attr1 {val11 val12 ...} attr2 {val21...} ...}

The __::ldap::searchNext__ command returns an empty list at the end of the search\.

  - __::ldap::searchEnd__ *handle*

    This command terminates a LDAP search initiated by

................................................................................

  - __::ldap::modifyMulti__ *handle* *dn* *attrValToReplace* ?*attrValToDelete*? ?*attrValToAdd*?

    This command modifies the object *dn* on the ldap server we are connected to via *handle*\. It replaces attributes with new values, deletes attributes, and adds new attributes with new values\. All arguments are lists with the format:

    attr1 {val11 val12 ...} attr2 {val21...} ...

    where each value list may be empty for deleting all attributes\.
The optional arguments default to empty lists of attributes to delete and to add\.

  * list *attrValToReplace* \(in\)

    No attributes will be changed if this argument is empty\. The dictionary

................................................................................

# EXAMPLES

A small example, extracted from the test application coming with this code\.

    package require ldap

    # Connect, bind, add a new object, modify it in various ways

    set handle [ldap::connect localhost 9009]

    set dn "cn=Manager, o=University of Michigan, c=US"
    set pw secret

    ldap::bind $handle $dn $pw

    set dn "cn=Test User,ou=People,o=University of Michigan,c=US"

    ldap::add $handle $dn {
        objectClass     OpenLDAPperson
        cn              {Test User}
        mail            [email protected]
        uid             testuid
        sn              User
        telephoneNumber +31415926535
        telephoneNumber +27182818285
    }

    set dn "cn=Another User,ou=People,o=University of Michigan,c=US"

    ldap::addMulti $handle $dn {
        objectClass     {OpenLDAPperson}
        cn              {{Another User}}
        mail            {test[email protected].com}
        uid             {testuid}
        sn              {User}
        telephoneNumber {+31415926535 +27182818285}
    }

    # Replace all attributes
    ldap::modify $handle $dn [list drink icetea uid JOLO]

    # Add some more
    ldap::modify $handle $dn {} {} [list drink water drink orangeJuice pager "+1 313 555 7671"]

    # Delete
    ldap::modify $handle $dn {} [list drink water pager ""]

    # Move
    ldap::modifyDN $handle $dn "cn=Tester"

    # Kill the test object, and shut the connection down.
    set dn "cn=Tester,ou=People,o=University of Michigan,c=US"
    ldap::delete $handle $dn

    ldap::unbind     $handle
    ldap::disconnect $handle

And another example, a simple query, and processing the results\.

    package require ldap

    set handle [ldap::connect ldap.acme.com 389]
    ldap::bind $handle

    set results [ldap::search $handle "o=acme,dc=com" "(uid=jdoe)" {}]

    foreach result $results {
        foreach {object attributes} $result break

        # The processing here is similar to what 'parray' does.
        # I.e. finding the longest attribute name and then
        # generating properly aligned output listing all attributes
        # and their values.
        set width 0
        set sortedAttribs {}
        foreach {type values} $attributes {
            if {[string length $type] > $width} {
                set width [string length $type]
            }
            lappend sortedAttribs [list $type $values]
        }

        puts "object='$object'"

        foreach sortedAttrib $sortedAttribs {
            foreach {type values} $sortedAttrib break
            foreach value $values {
                regsub -all "\[\x01-\x1f\]" $value ? value
                puts [format "  %-${width}s %s" $type $value]
            }
        }
        puts ""
    }

    ldap::unbind     $handle
    ldap::disconnect $handle

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *ldap* of the [Tcllib

Changes to embedded/md/tcllib/files/modules/ldap/ldapx.md.

difference is computed from the entry and its internal backup \(see section [OVERVIEW](#section2)\)\. Return value is the computed change list\.

## Entry Example

    package require ldapx

    \#
    \# Create an entry and fill it as a standard entry with
    \# attributes and values
    \#

    ::ldapx::entry create e
    e dn "uid=joe,ou=people,o=mycomp"
    e set1 "uid" "joe"
    e set "objectClass" \{person anotherObjectClass\}
    e set1 "givenName" "Joe"
    e set1 "sn" "User"
    e set "telephoneNumber" \{\+31415926535 \+2182818\}
    e set1 "anotherAttr" "This is a beautiful day, isn't it?"
    puts stdout "e\\n\[e print\]"

    \#
    \# Create a second entry as a backup of the first, and
    \# make some changes on it\.
    \# Entry is named automatically by snit\.
    \#

    set b \[::ldapx::entry create %AUTO%\]
    e backup $b

    puts stdout "$b\\n\[$b print\]"

    $b del "anotherAttr"
    $b del1 "objectClass" "anotherObjectClass"

    \#
    \# Create a change entry, and compute differences between first
    \# and second entry\.
    \#

    ::ldapx::entry create c
    c diff e $b

    puts stdout "$c\\n\[$c print\]"

    \#
    \# Apply changes to first entry\. It should be the same as the
    \# second entry, now\.
    \#

    e apply c

    ::ldapx::entry create nc
    nc diff e $b

    puts stdout "nc\\n\[nc print\]"

    \#
    \# Clean\-up
    \#

    e destroy
    $b destroy
    c destroy
    nc destroy

# LDAP CLASS

................................................................................

Note: in the future, this method should use the LDAP transaction extension provided by OpenLDAP 2\.3 and later\.

## Ldap Example

    package require ldapx

    \#
    \# Connects to the LDAP directory
    \#

    ::ldapx::ldap create l
    set url "ldap://server\.mycomp\.com"
    if \{\! \[l connect $url "cn=admin,o=mycomp" "mypasswd"\]\} then \{
        puts stderr "error: \[l error\]"
        exit 1
    \}

    \#
    \# Search all entries matching some criterion
    \#

    l configure \-scope one
    ::ldapx::entry create e
    set n 0
    l traverse "ou=people,o=mycomp" "\(sn=Joe\*\)" \{sn givenName\} e \{
        puts "dn: \[e dn\]"
        puts "  sn:        \[e get1 sn\]"
        puts "  givenName: \[e get1 givenName\]"
        incr n
    \}
    puts "$n entries found"
    e destroy

    \#
    \# Add a telephone number to some entries
    \# Note this modification cannot be done in the "traverse" operation\.
    \#

    set lent \[l search "ou=people,o=mycomp" "\(sn=Joe\*\)" \{\}\]
    ::ldapx::entry create c
    foreach e $lent \{
        $e backup
        $e add1 "telephoneNumber" "\+31415926535"
        c diff $e
        if \{\! \[l commit c\]\} then \{
            puts stderr "error: \[l error\]"
            exit 1
        \}
        $e destroy
    \}
    c destroy

    l disconnect
    l destroy

# LDIF CLASS

................................................................................
This method writes the entry given in the argument *entry* to the LDIF file\. ## Ldif Example package require ldapx \# This examples reads a LDIF file containing entries, \# compare them to a LDAP directory, and writes on standard \# output an LDIF file containing changes to apply to the \# LDAP directory to match exactly the LDIF file\. ::ldapx::ldif create liin liin channel stdin ::ldapx::ldif create liout liout channel stdout ::ldapx::ldap create la if \{\! $la connect "ldap://server\.mycomp\.com"$\} then \{ puts stderr "error: $la error$" exit 1 \} la configure \-scope one \# Reads LDIF file ::ldapx::entry create e1 ::ldapx::entry create e2 ::ldapx::entry create c while \{$liin read e1$ \!= 0\} \{ set base $e1 superior$ set id $e1 rdn$ if \{$la read base "$$id$$" e2$ == 0\} then \{ e2 reset \} c diff e1 e2 if \{$llength \[c change$\] \!= 0\} then \{ liout write c \} \} la disconnect la destroy e1 destroy e2 destroy c destroy liout destroy   < > | | < > | | | < > | | | < | > | | < > | | < > | | | | | | | | | | | < > | < | > | | | < | < > > | < | > | | | | | < > < > | | < | > | | | | | < > < > | | | | | | < > | | | | | | < | > | < < > >  351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 ... 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 ... 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736   difference is computed from the entry and its internal backup $$see section [OVERVIEW](#section2)$$\. 
Return value is the computed change list\. ## Entry Example package require ldapx # # Create an entry and fill it as a standard entry with # attributes and values # ::ldapx::entry create e e dn "uid=joe,ou=people,o=mycomp" e set1 "uid" "joe" e set "objectClass" {person anotherObjectClass} e set1 "givenName" "Joe" e set1 "sn" "User" e set "telephoneNumber" {+31415926535 +2182818} e set1 "anotherAttr" "This is a beautiful day, isn't it?" puts stdout "e\n[e print]" # # Create a second entry as a backup of the first, and # make some changes on it. # Entry is named automatically by snit. # set b [::ldapx::entry create %AUTO%] e backup$b puts stdout "$b\n[$b print]" $b del "anotherAttr"$b del1 "objectClass" "anotherObjectClass" # # Create a change entry, a compute differences between first # and second entry. # ::ldapx::entry create c c diff e $b puts stdout "$c\n[$c print]" # # Apply changes to first entry. It should be the same as the # second entry, now. # e apply c ::ldapx::entry create nc nc diff e$b puts stdout "nc\n[nc print]" # # Clean-up # e destroy $b destroy c destroy nc destroy # LDAP CLASS ................................................................................ Note: in the future, this method should use the LDAP transaction extension provided by OpenLDAP 2\.3 and later\. ## Ldap Example package require ldapx # # Connects to the LDAP directory # ::ldapx::ldap create l set url "ldap://server.mycomp.com" if {! [l connect$url "cn=admin,o=mycomp" "mypasswd"]} then { puts stderr "error: [l error]" exit 1 } # # Search all entries matching some criterion # l configure -scope one ::ldapx::entry create e set n 0 l traverse "ou=people,o=mycomp" "(sn=Joe*)" {sn givenName} e { puts "dn: [e dn]" puts " sn: [e get1 sn]" puts " givenName: [e get1 givenName]" incr n } puts "$n entries found" e destroy # # Add a telephone number to some entries # Note this modification cannot be done in the "traverse" operation. 
# set lent [l search "ou=people,o=mycomp" "(sn=Joe*)" {}] ::ldapx::entry create c foreach e$lent { $e backup$e add1 "telephoneNumber" "+31415926535" c diff $e if {! [l commit c]} then { puts stderr "error: [l error]" exit 1 }$e destroy } c destroy l disconnect l destroy # LDIF CLASS ................................................................................ This method writes the entry given in the argument *entry* to the LDIF file\. ## Ldif Example package require ldapx # This examples reads a LDIF file containing entries, # compare them to a LDAP directory, and writes on standard # output an LDIF file containing changes to apply to the # LDAP directory to match exactly the LDIF file. ::ldapx::ldif create liin liin channel stdin ::ldapx::ldif create liout liout channel stdout ::ldapx::ldap create la if {! [la connect "ldap://server.mycomp.com"]} then { puts stderr "error: [la error]" exit 1 } la configure -scope one # Reads LDIF file ::ldapx::entry create e1 ::ldapx::entry create e2 ::ldapx::entry create c while {[liin read e1] != 0} { set base [e1 superior] set id [e1 rdn] if {[la read $base "($id)" e2] == 0} then { e2 reset } c diff e1 e2 if {[llength [c change]] != 0} then { liout write c } } la disconnect la destroy e1 destroy e2 destroy c destroy liout destroy 

Changes to embedded/md/tcllib/files/modules/log/log.md.

a *level* determining the importance of the message. The user can then select
which levels to log, what commands to use for the logging of each level and
the channel to write the message to. In the following example the logging of
all messages with level __debug__ is deactivated.

    package require log
    log::lvSuppress debug
    log::log debug "Unseen message" ; # No output

By default all messages associated with an error-level (__emergency__,
__alert__, __critical__, and __error__) are written to __stderr__. Messages
with any other level are written to __stdout__. In the following example the
log module is reconfigured to write __debug__ messages to __stderr__ too.

................................................................................

log any message. In the following example all messages of level __notice__ are
given to the non-standard command __toText__ for logging. This disables the
channel setting for such messages, assuming that __toText__ does not use it by
itself.

    package require log
    log::lvCmd notice toText
    log::log notice "Handled by \"toText\""

Another database maintained by this facility is a map from message levels to
colors. The information in this database has *no* influence on the behaviour
of the module. It is merely provided as a convenience and in anticipation of
the usage of this facility in __tk__-based applications which may want to
colorize message logs.

................................................................................

Like __::log::log__, but *msg* may contain substitutions and variable
references, which are evaluated in the caller scope first. The purpose of this
command is to avoid overhead in the non-logging case, if the log message
building is expensive. Any substitution errors raise an error in the command
execution. The following example shows an xml text representation, which is
only generated in debug mode:

    log::logsubst debug {XML of node $node is '[$node toXml]'}

- __::log::logMsg__ *text*

  Convenience wrapper around __::log::log__. Equivalent to __::log::log info
  text__.

- __::log::logError__ *text*

Changes to embedded/md/tcllib/files/modules/log/logger.md.

The __logger__ package provides a flexible system for logging messages from
different services, at priority levels, with different commands. To begin
using the logger package, we do the following:

    package require logger
    set log [logger::init myservice]
    ${log}::notice "Initialized myservice logging"

    ... code ...

    ${log}::notice "Ending myservice logging"
    ${log}::delete

In the above code, after the package is loaded, the following things happen:

- __logger::init__ *service*

  Initializes the service *service* for logging. The service names are
  actually Tcl namespace names, so they are separated with '::'. The service

................................................................................
Set the script to call when the log instance in question changes its log
level. If called without a command it returns the currently registered
command. The command gets two arguments appended, the old and the new
loglevel. The callback is invoked after all changes have been done. If child
loggers are affected, their callbacks are called before their parents
callback.

    proc lvlcallback {old new} {
        puts "Loglevel changed from $old to $new"
    }
    ${log}::lvlchangeproc lvlcallback

- __${log}::logproc__ *level*

- __${log}::logproc__ *level* *command*

- __${log}::logproc__ *level* *argname* *body*

................................................................................

command currently registered as callback command. __logproc__ specifies which
command will perform the actual logging for a given level. The logger package
ships with default commands for all log levels, but with __logproc__ it is
possible to replace them with custom code. This would let you send your logs
over the network, to a database, or anything else. For example:

    proc logtoserver {txt} {
        variable socket
        puts $socket "Notice: $txt"
    }
    ${log}::logproc notice logtoserver

Trace logs are slightly different: instead of a plain text argument, the
argument provided to the logproc is a dictionary consisting of the __enter__
or __leave__ keyword along with another dictionary of details about the trace.
These include:

  * __proc__ - Name of the procedure being traced.

................................................................................

- __${log}::delproc__

  Set the script to call when the log instance in question is deleted. If
  called without a command it returns the currently registered command. For
  example:

      ${log}::delproc [list closesock $logsock]

- __${log}::delete__

  This command deletes a particular logging service, and its children. You
  must call this to clean up the resources used by a service.

- __${log}::trace__ *command*

................................................................................

This command controls logging of enter/leave traces for specified procedures.
It is used to enable and disable tracing, query tracing status, and specify
which procedures are to be traced. Trace handlers are unregistered when
tracing is disabled. As a result, there is no performance impact on a library
when tracing is disabled, just as with other log level commands.

    proc tracecmd { dict } {
        puts $dict
    }

    set log [::logger::init example]
    ${log}::logproc trace tracecmd

    proc foo { args } {
        puts "In foo"
        bar 1
        return "foo_result"
    }

    proc bar { x } {
        puts "In bar"
        return "bar_result"
    }

    ${log}::trace add foo bar
    ${log}::trace on foo

    # Output:
    enter {proc ::foo level 1 script {} caller {} procargs {args {}}}
    In foo
    enter {proc ::bar level 2 script {} caller ::foo procargs {x 1}}
    In bar
    leave {proc ::bar level 2 script {} caller ::foo status ok result bar_result}
    leave {proc ::foo level 1 script {} caller {} status ok result foo_result}

- __${log}::trace__ __on__

  Turns on trace logging for procedures registered through the
  __[trace](../../../../index.md#trace)__ __add__ command. This is similar to
  the __enable__ command for other logging levels, but allows trace logging to
  take place at any level. The trace logging mechanism takes

................................................................................

# Logprocs and Callstack

The logger package takes extra care to keep the logproc out of the call stack.
This enables logprocs to execute code in the callers scope by using uplevel or
linking to local variables by using upvar. This may fire traces with all usual
side effects.

    # Print caller and current vars in the calling proc
    proc log_local_var {txt} {
        set caller [info level -1]
        set vars [uplevel 1 info vars]
        foreach var [lsort $vars] {
            if {[uplevel 1 [list array exists $var]] == 1} {
                lappend val $var
            } else {
                lappend val $var [uplevel 1 [list set $var]]
            }
        }
        puts "$txt"
        puts "Caller: $caller"
        puts "Variables in callers scope:"
        foreach {var value} $val {
            puts "$var = $value"
        }
    }
    # install as logproc
    ${log}::logproc debug log_local_var

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and
other problems. Please report such in the category *logger* of the
[Tcllib Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report any
ideas for enhancements you may have for either package and/or documentation.

Changes to embedded/md/tcllib/files/modules/log/loggerUtils.md.

  * __-appenderArgs__ appenderArgs

    Additional arguments to apply to the appender. The argument of the option
    is a list of options and their arguments. For example

        logger::utils::applyAppender -serviceCmd $log -appender console -appenderArgs {-conversionPattern {[%M] [%p] - %m}}

    The usual Tcl quoting rules apply.

  * __-levels__ levelList

    The list of levels to apply this appender to. If not specified all levels
    are assumed.

  Example of usage:

      % set log [logger::init testLog]
      ::logger::tree::testLog
      % logger::utils::applyAppender -appender console -serviceCmd $log
      % ${log}::error "this is an error"
      [2005/08/22 10:14:13] [testLog] [global] [error] this is an error

- __::logger::utils::autoApplyAppender__ *command* *command-string* *log* *op* *args*...
  This command is designed to be added via __trace leave__ to calls of
  __logger::init__. It will look at preconfigured state (via
  __::logger::utils::applyAppender__) to autocreate appenders for newly
  created logger instances. It will return its argument *log*.

  Example of usage:

      logger::utils::applyAppender -appender console
      set log [logger::init applyAppender-3]
      ${log}::error "this is an error"

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and
other problems. Please report such in the category *logger* of the
[Tcllib Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report any
ideas for enhancements you may have for either package and/or documentation.

Changes to embedded/md/tcllib/files/modules/map/map_slippy.md.

# API

- __::map::slippy__ __length__ *level*

  This method returns the width/height of a slippy-based map at the specified
  zoom *level*, in pixels. This is, in essence, the result of

      expr { [tiles $level] * [tile size] }

- __::map::slippy__ __tiles__ *level*

  This method returns the width/height of a slippy-based map at the specified
  zoom *level*, in *tiles*.

- __::map::slippy__ __tile size__
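As a worked instance of the length formula above — a minimal sketch, assuming the conventional slippy-map constants (256-pixel tiles and 2^level tiles per side); the `tiles` and `tilesize` procs here are stand-ins for the package's __tiles__ and __tile size__ methods, not the real API:

    # Hypothetical stand-ins for ::map::slippy tiles / tile size,
    # using the usual slippy-map conventions.
    proc tiles {level} { expr {1 << $level} }   ;# 2**level tiles per side
    proc tilesize {}   { return 256 }           ;# pixels per tile edge

    set level 6
    set length [expr {[tiles $level] * [tilesize]}]
    puts "zoom $level: [tiles $level] tiles -> $length pixels"  ;# 64 tiles -> 16384 pixels

So each zoom step doubles the map's pixel width/height, which is why the formula is just a product of the two quantities.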

Changes to embedded/md/tcllib/files/modules/math/bigfloat.md.

eventually a minus sign, it is considered as an integer. Subsequently, no
conversion is done at all.

*trailingZeros* - the number of zeros to append at the end of the
floating-point number to get more precision. It cannot be applied to an
integer.

    # x and y are BigFloats: the first string contained a dot, and the second an e sign
    set x [fromstr -1.000000]
    set y [fromstr 2000e30]
    # let's see how we get integers
    set t 20000000000000
    # the old way (package 1.2) is still supported for backwards compatibility:
    set m [fromstr 10000000000]
    # but we do not need fromstr for integers anymore
    set n -39
    # t, m and n are integers

The *number*'s last digit is considered by the procedure to be true at +/-1.
For example, 1.00 is the interval [0.99, 1.01], and 0.43 the interval
[0.42, 0.44]. The Pi constant may be approximated by the number "3.1415".
This string could be considered as the interval [3.1414, 3.1416] by
__fromstr__. So, when you mean 1.0 as a double, you may have to write 1.000000
to get enough precision. To learn more about this subject, see
[PRECISION](#section7). For example:

    set x [fromstr 1.0000000000]
    # the next line does the same, but smarter
    set y [fromstr 1. 10]

- __tostr__ ?__-nosci__? *number*

  Returns a string form of a BigFloat, in which all digits are exact. *All
  exact digits* means a rounding may occur, for example to zero, if the
  uncertainty interval does not clearly show the true digits. *number* may be
  an integer, causing the command to return exactly the input argument. With
  the __-nosci__ option, the number returned is never shown in scientific
  notation, i.e. not like '3.4523e+5' but like '345230.'.

      puts [tostr [fromstr 0.99999]] ;# 1.0000
      puts [tostr [fromstr 1.00001]] ;# 1.0000
      puts [tostr [fromstr 0.002]]   ;# 0.e-2

  See [PRECISION](#section7) for that matter. See also __iszero__ for how to
  detect zeros, which is useful when performing a division.

- __fromdouble__ *double* ?*decimals*?

  Converts a double (a simple floating-point value) to a BigFloat, with
  exactly *decimals* digits. Without the *decimals* argument, it behaves like
  __fromstr__. Here, the only important feature you might care about is the
  ability to create BigFloats with a fixed number of *decimals*.

      tostr [fromstr 1.111 4]
      # returns : 1.111000 (3 zeros)
      tostr [fromdouble 1.111 4]
      # returns : 1.111

- __todouble__ *number*

  Returns a double, that may be used in *expr*, from a BigFloat.

- __isInt__ *number*

................................................................................

- __int2float__ *integer* ?*decimals*?

  Converts an integer to a BigFloat with *decimals* trailing zeros. The
  default, and minimal, number of *decimals* is 1. When converting back to
  string, one decimal is lost:

      set n 10
      set x [int2float $n]   ;# like fromstr 10.0
      puts [tostr $x]        ;# prints "10."
      set x [int2float $n 3] ;# like fromstr 10.000
      puts [tostr $x]        ;# prints "10.00"

# ARITHMETICS

- __add__ *x* *y*

- __sub__ *x* *y*

................................................................................

  * a BigFloat close enough to zero to raise "divide by zero".

  * the integer 0.

  See here how numbers that are close to zero are converted to strings:

      tostr [fromstr 0.001]     ;# -> 0.e-2
      tostr [fromstr 0.000000]  ;# -> 0.e-5
      tostr [fromstr -0.000001] ;# -> 0.e-5
      tostr [fromstr 0.0]       ;# -> 0.
      tostr [fromstr 0.002]     ;# -> 0.e-2

      set a [fromstr 0.002] ;# uncertainty interval: 0.001, 0.003
      tostr  $a ;# 0.e-2
      iszero $a ;# false
      set a [fromstr 0.001] ;# uncertainty interval: 0.000, 0.002
      tostr  $a ;# 0.e-2
      iszero $a ;# true

- __[equal](../../../../index.md#equal)__ *x* *y*

  Returns 1 if *x* and *y* are equal, 0 elsewhere.

- __compare__ *x* *y*

................................................................................

internals of this library, the uncertainty interval may be slightly wider than
expected, but this should not cause false digits. Now you may ask this
question: What precision am I going to get after calling add, sub, mul or div?
First you set a number from the string representation and, by the way, its
uncertainty is set:

    set a [fromstr 1.230]
    # $a belongs to [1.229, 1.231]
    set a [fromstr 1.000]
    # $a belongs to [0.999, 1.001]
    # $a has a relative uncertainty of 0.1% :
    # 0.001 (the uncertainty) / 1.000 (the medium value)

The uncertainty of the sum, or the difference, of two numbers, is the sum of
their respective uncertainties.

    set a [fromstr 1.230]
    set b [fromstr 2.340]
    set sum [add $a $b]
    # the result is : [3.568, 3.572]
    # (the last digit is known with an uncertainty of 2)
    tostr $sum ;# 3.57

But when, for example, we add or subtract an integer to a BigFloat, the
relative uncertainty of the result is unchanged. So it is desirable not to
convert integers to BigFloats:

    set a [fromstr 0.999999999]
    # now something dangerous
    set b [fromstr 2.000]
    # the result has only 3 digits
    tostr [add $a $b]
    # how to keep precision at its maximum
    puts [tostr [add $a 2]]

For multiplication and division, the relative uncertainty of the product or
the quotient is the sum of the relative uncertainties of the operands.

Take care of division by zero: check each divider with __iszero__.

    set num [fromstr 4.00]
    set denom [fromstr 0.01]
    puts [iszero $denom] ;# true
    set quotient [div $num $denom] ;# error : divide by zero

    # opposites of our operands
    puts [compare $num [opp $num]]     ;# 1
    puts [compare $denom [opp $denom]] ;# 0 !!!
    # No surprise! 0 and its opposite are the same...

Effects of the precision of a number considered equal to zero to the cos
function:

    puts [tostr [cos [fromstr 0. 10]]] ;# -> 1.000000000
    puts [tostr [cos [fromstr 0. 5]]]  ;# -> 1.0000
    puts [tostr [cos [fromstr 0e-10]]] ;# -> 1.000000000
    puts [tostr [cos [fromstr 1e-10]]] ;# -> 1.000000000

BigFloats with different internal representations may be converted to the same
string. For most analysis functions (cosine, square root, logarithm, etc.),
determining the precision of the result is difficult. It seems however that in
many cases, the loss of precision in the result is of one or two digits. There
are some exceptions: for example,

    tostr [exp [fromstr 100.0 10]]
    # returns : 2.688117142e+43 which has only 10 digits of precision,
    # although the entry has 14 digits of precision.

# WHAT ABOUT TCL 8.4 ?

If your setup does not provide Tcl 8.5 but supports 8.4, the package can still
be loaded, switching back to *math::bigfloat* 1.2. Indeed, an important
function introduced in Tcl 8.5 is required - the ability to handle bignums,
which we can do with __expr__. Before 8.5, this ability was provided by
several packages, including the pure-Tcl *math::bignum* package provided by
*tcllib*. In this case, all you need to know is that arguments to the commands
explained here are expected to be in their internal representation. So even
with integers, you will need to call __fromstr__ and __tostr__ in order to
convert them between string and internal representations.

    #
    # with Tcl 8.5
    # ============
    set a [pi 20]
    # round returns an integer and 'everything is a string' applies to integers
    # whatever big they are
    puts [round [mul $a 10000000000]]
    #
    # the same with Tcl 8.4
    # =====================
    set a [pi 20]
    # bignums (arbitrary length integers) need a conversion hook
    set b [fromstr 10000000000]
    # round returns a bignum:
    # before printing it, we need to convert it with 'tostr'
    puts [tostr [round [mul $a $b]]]

# NAMESPACES AND OTHER PACKAGES

We have not yet discussed namespaces because we assumed that you had imported
public commands into the global namespace, like this:

    namespace import ::math::bigfloat::*

If you care much about avoiding name conflicts, this can be resolved by the
following:

    package require math::bigfloat
    # beware: namespace ensembles are not available in Tcl 8.4
    namespace eval ::math::bigfloat {namespace ensemble create -command ::bigfloat}
    # from now on, the bigfloat command takes as subcommands
    # all original math::bigfloat::* commands
    set a [bigfloat sub [bigfloat fromstr 2.000] [bigfloat fromstr 0.530]]
    puts [bigfloat tostr $a]

# EXAMPLES

Guess what happens when you are doing some astronomy. Here is an example:

    # convert accurate angles with a millisecond-rated accuracy
    proc degree-angle {degrees minutes seconds milliseconds} {
        set result 0
        set div 1
        foreach factor {1 1000 60 60} var [list $milliseconds $seconds $minutes $degrees] {
            # we convert each entry var into milliseconds
            set div [expr {$div*$factor}]
            incr result [expr {$var*$div}]
        }
        return [div [int2float $result] $div]
    }
    # load the package
    package require math::bigfloat
    namespace import ::math::bigfloat::*
    # work with angles : a standard formula for navigation (taking bearings)
    set angle1 [deg2rad [degree-angle 20 30 40   0]]
    set angle2 [deg2rad [degree-angle 21  0 50 500]]
    set opposite3 [deg2rad [degree-angle 51  0 50 500]]
    set sinProduct [mul [sin $angle1] [sin $angle2]]
    set cosProduct [mul [cos $angle1] [cos $angle2]]
    set angle3 [asin [add [mul $sinProduct [cos $opposite3]] $cosProduct]]
    puts "angle3 : [tostr [rad2deg $angle3]]"

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and
other problems. Please report such in the category *math :: bignum :: float*
of the [Tcllib Trackers](http://core.tcl.tk/tcllib/reportlist). Please also
report any ideas for enhancements you may have for either package and/or
documentation.
eventually a minus sign, it is considered as an integer. Subsequently, no conversion is done at all.

  *trailingZeros* - the number of zeros to append at the end of the floating-point number to get more precision. It cannot be applied to an integer.

        # x and y are BigFloats : the first string contained a dot, and the second an e sign
        set x [fromstr -1.000000]
        set y [fromstr 2000e30]
        # let's see how we get integers
        set t 20000000000000
        # the old way (package 1.2) is still supported for backwards compatibility :
        set m [fromstr 10000000000]
        # but we do not need fromstr for integers anymore
        set n -39
        # t, m and n are integers

  The *number*'s last digit is considered by the procedure to be exact to +/-1. For example, 1.00 is the interval [0.99, 1.01], and 0.43 the interval [0.42, 0.44]. The Pi constant may be approximated by the number "3.1415". This string could be considered as the interval [3.1414, 3.1416] by __fromstr__. So, when you mean 1.0 as a double, you may have to write 1.000000 to get enough precision. To learn more about this subject, see [PRECISION](#section7). For example:

        set x [fromstr 1.0000000000]
        # the next line does the same, but smarter
        set y [fromstr 1. 10]

- __tostr__ ?__-nosci__? *number*

  Returns a string form of a BigFloat, in which all digits are exact.
  *All exact digits* means a rounding may occur, for example to zero, if the uncertainty interval does not clearly show the true digits. *number* may be an integer, in which case the command returns exactly the input argument. With the __-nosci__ option, the number returned is never shown in scientific notation, i.e. not like '3.4523e+5' but like '345230.'.

        puts [tostr [fromstr 0.99999]] ;# 1.0000
        puts [tostr [fromstr 1.00001]] ;# 1.0000
        puts [tostr [fromstr 0.002]]   ;# 0.e-2

  See [PRECISION](#section7) for that matter. See also __iszero__ for how to detect zeros, which is useful when performing a division.

- __fromdouble__ *double* ?*decimals*?

  Converts a double (a simple floating-point value) to a BigFloat, with exactly *decimals* digits. Without the *decimals* argument, it behaves like __fromstr__. Here, the only important feature you might care about is the ability to create BigFloats with a fixed number of *decimals*.

        tostr [fromstr 1.111 4]
        # returns : 1.111000 (3 zeros)
        tostr [fromdouble 1.111 4]
        # returns : 1.111

- __todouble__ *number*

  Returns a double, usable in *expr*, from a BigFloat.

- __isInt__ *number*

................................................................................

- __int2float__ *integer* ?*decimals*?

  Converts an integer to a BigFloat with *decimals* trailing zeros. The default, and minimal, number of *decimals* is 1. When converting back to string, one decimal is lost:

        set n 10
        set x [int2float $n]   ;# like fromstr 10.0
        puts [tostr $x]        ;# prints "10."
        set x [int2float $n 3] ;# like fromstr 10.000
        puts [tostr $x]        ;# prints "10.00"

# ARITHMETICS

- __add__ *x* *y*

- __sub__ *x* *y*

................................................................................

  * a BigFloat close enough to zero to raise "divide by zero".

  * the integer 0.
  See here how numbers that are close to zero are converted to strings:

        tostr [fromstr 0.001]     ; # -> 0.e-2
        tostr [fromstr 0.000000]  ; # -> 0.e-5
        tostr [fromstr -0.000001] ; # -> 0.e-5
        tostr [fromstr 0.0]       ; # -> 0.
        tostr [fromstr 0.002]     ; # -> 0.e-2
        set a [fromstr 0.002]     ; # uncertainty interval : 0.001, 0.003
        tostr $a  ; # 0.e-2
        iszero $a ; # false
        set a [fromstr 0.001]     ; # uncertainty interval : 0.000, 0.002
        tostr $a  ; # 0.e-2
        iszero $a ; # true

- __[equal](../../../../index.md#equal)__ *x* *y*

  Returns 1 if *x* and *y* are equal, 0 otherwise.

- __compare__ *x* *y*

................................................................................

internals of this library, the uncertainty interval may be slightly wider than expected, but this should not cause false digits.

Now you may ask this question: What precision am I going to get after calling add, sub, mul or div? First you set a number from the string representation and, by the way, its uncertainty is set:

    set a [fromstr 1.230]
    # $a belongs to [1.229, 1.231]
    set a [fromstr 1.000]
    # $a belongs to [0.999, 1.001]
    # $a has a relative uncertainty of 0.1% : 0.001 (the uncertainty) / 1.000 (the medium value)

The uncertainty of the sum, or the difference, of two numbers, is the sum of their respective uncertainties.

    set a [fromstr 1.230]
    set b [fromstr 2.340]
    set sum [add $a $b]
    # the result is : [3.568, 3.572] (the last digit is known with an uncertainty of 2)
    tostr $sum ; # 3.57

But when, for example, we add or subtract an integer to a BigFloat, the relative uncertainty of the result is unchanged. So it is desirable not to convert integers to BigFloats:

    set a [fromstr 0.999999999]
    # now something dangerous
    set b [fromstr 2.000]
    # the result has only 3 digits
    tostr [add $a $b]
    # how to keep precision at its maximum
    puts [tostr [add $a 2]]

For multiplication and division, the relative uncertainty of the product or the quotient is the sum of the relative uncertainties of the operands.
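The multiplication rule above can be checked interactively. A minimal sketch (assuming the package commands are imported into the global namespace, as elsewhere in this page; the exact string returned depends on the rounding described in [PRECISION](#section7)):

```tcl
package require math::bigfloat
namespace import ::math::bigfloat::*

# each factor carries a relative uncertainty of about 0.1%
set a [fromstr 1.000]
set b [fromstr 2.000]

# the product's relative uncertainty is roughly the sum of both,
# about 0.2%, so fewer digits of the result can be trusted
puts [tostr [mul $a $b]]
```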
Take care of division by zero: check each divisor with __iszero__.

    set num [fromstr 4.00]
    set denom [fromstr 0.01]
    puts [iszero $denom]           ;# true
    set quotient [div $num $denom] ;# error : divide by zero
    # opposites of our operands
    puts [compare $num [opp $num]]     ; # 1
    puts [compare $denom [opp $denom]] ; # 0 !!!
    # No surprise ! 0 and its opposite are the same...

Effects of the precision of a number considered equal to zero on the cos function:

    puts [tostr [cos [fromstr 0. 10]]] ; # -> 1.000000000
    puts [tostr [cos [fromstr 0. 5]]]  ; # -> 1.0000
    puts [tostr [cos [fromstr 0e-10]]] ; # -> 1.000000000
    puts [tostr [cos [fromstr 1e-10]]] ; # -> 1.000000000

BigFloats with different internal representations may be converted to the same string.

For most analysis functions (cosine, square root, logarithm, etc.), determining the precision of the result is difficult. It seems however that in many cases, the loss of precision in the result is of one or two digits. There are some exceptions: for example,

    tostr [exp [fromstr 100.0 10]]
    # returns : 2.688117142e+43 which has only 10 digits of precision, although the entry
    # has 14 digits of precision.

# WHAT ABOUT TCL 8.4 ?

If your setup does not provide Tcl 8.5 but supports 8.4, the package can still be loaded, switching back to *math::bigfloat* 1.2. Indeed, an important capability introduced in Tcl 8.5 is required - the ability to handle bignums, which we can do with __expr__. Before 8.5, this ability was provided by several packages, including the pure-Tcl *math::bignum* package provided by *tcllib*. In this case, all you need to know is that arguments to the commands explained here are expected to be in their internal representation. So even with integers, you will need to call __fromstr__ and __tostr__ in order to convert them between string and internal representations.
    #
    # with Tcl 8.5
    # ============
    set a [pi 20]
    # round returns an integer and 'everything is a string' applies to integers
    # whatever big they are
    puts [round [mul $a 10000000000]]
    #
    # the same with Tcl 8.4
    # =====================
    set a [pi 20]
    # bignums (arbitrary length integers) need a conversion hook
    set b [fromstr 10000000000]
    # round returns a bignum:
    # before printing it, we need to convert it with 'tostr'
    puts [tostr [round [mul $a $b]]]

# NAMESPACES AND OTHER PACKAGES

We have not yet discussed namespaces because we assumed that you had imported public commands into the global namespace, like this:

    namespace import ::math::bigfloat::*

If you care about avoiding name conflicts, this can be resolved as follows:

    package require math::bigfloat
    # beware: namespace ensembles are not available in Tcl 8.4
    namespace eval ::math::bigfloat {namespace ensemble create -command ::bigfloat}
    # from now on, the bigfloat command takes as subcommands all original math::bigfloat::* commands
    set a [bigfloat sub [bigfloat fromstr 2.000] [bigfloat fromstr 0.530]]
    puts [bigfloat tostr $a]

# EXAMPLES

Guess what happens when you are doing some astronomy.
Here is an example:

    # convert accurate angles with a millisecond-rated accuracy
    proc degree-angle {degrees minutes seconds milliseconds} {
        set result 0
        set div 1
        foreach factor {1 1000 60 60} var [list $milliseconds $seconds $minutes $degrees] {
            # we convert each entry var into milliseconds
            set div [expr {$div*$factor}]
            incr result [expr {$var*$div}]
        }
        return [div [int2float $result] $div]
    }
    # load the package
    package require math::bigfloat
    namespace import ::math::bigfloat::*
    # work with angles : a standard formula for navigation (taking bearings)
    set angle1 [deg2rad [degree-angle 20 30 40 0]]
    set angle2 [deg2rad [degree-angle 21 0 50 500]]
    set opposite3 [deg2rad [degree-angle 51 0 50 500]]
    set sinProduct [mul [sin $angle1] [sin $angle2]]
    set cosProduct [mul [cos $angle1] [cos $angle2]]
    set angle3 [asin [add [mul $sinProduct [cos $opposite3]] $cosProduct]]
    puts "angle3 : [tostr [rad2deg $angle3]]"

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category *math :: bignum :: float* of the [Tcllib Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report any ideas for enhancements you may have for either package and/or documentation.

Changes to embedded/md/tcllib/files/modules/math/bignum.md.

This section shows some simple examples. This library being just a way to perform math operations, examples may be the simplest way to learn how to work with it. Consult the API section of this man page for information about individual procedures.
    package require math::bignum

    # Multiplication of two bignums
    set a [::math::bignum::fromstr 88888881111111]
    set b [::math::bignum::fromstr 22222220000000]
    set c [::math::bignum::mul $a $b]
    puts [::math::bignum::tostr $c] ; # => will output 1975308271604953086420000000
    set c [::math::bignum::sqrt $c]
    puts [::math::bignum::tostr $c] ; # => will output 44444440277777

    # From/To string conversion in different radix
    set a [::math::bignum::fromstr 1100010101010111001001111010111 2]
    puts [::math::bignum::tostr $a 16] ; # => will output 62ab93d7

    # Factorial example
    proc fact n {
        # fromstr is not needed for 0 and 1
        set z 1
        for {set i 2} {$i <= $n} {incr i} {
            set z [::math::bignum::mul $z [::math::bignum::fromstr $i]]
        }
        return $z
    }
    puts [::math::bignum::tostr [fact 100]]

# API

- __::math::bignum::fromstr__ *string* ?*radix*?

  Convert *string* into a bignum. If *radix* is omitted or zero, the string is interpreted in hex if prefixed with *0x*, in octal if prefixed

Changes to embedded/md/tcllib/files/modules/math/calculus.md.

- __::math::calculus::boundaryValueSecondOrder__ *coeff_func* *force_func* *leftbnd* *rightbnd* *nostep*

  Solve a second order linear differential equation with boundary values at two sides. The equation has to be of the form (the "conservative" form):

        d      dy     d
        -- A(x)--  +  -- B(x)y  +  C(x)y  =  D(x)
        dx     dx     dx

  Ordinarily, such an equation would be written as:

             d2y        dy
        a(x) ---  + b(x)--  + c(x) y  =  D(x)
             dx2        dx

  The first form is easier to discretise (by integrating over a finite volume) than the second form. The relation between the two forms is fairly straightforward:

        A(x)  =  a(x)
        B(x)  =  b(x) - a'(x)
        C(x)  =  c(x) - B'(x)  =  c(x) - b'(x) + a''(x)

  Because of the differentiation, however, it is much easier to ask the user to provide the functions A, B and C directly.

  * *coeff_func*

    Procedure returning the three coefficients (A, B, C) of the equation,

................................................................................

    List of values on the righthand-side

- __::math::calculus::newtonRaphson__ *func* *deriv* *initval*

  Determine the root of an equation given by

        func(x) = 0

  using the method of Newton-Raphson. The procedure takes the following arguments:

  * *func*

    Procedure that returns the value of the function at x

................................................................................

Several of the above procedures take the *names* of procedures as arguments. To avoid problems with the *visibility* of these procedures, the fully-qualified name of these procedures is determined inside the calculus routines. For the user this has only one consequence: the named procedure must be visible in the calling procedure.
For instance:

    namespace eval ::mySpace {
        namespace export calcfunc
        proc calcfunc { x } { return $x }
    }
    #
    # Use a fully-qualified name
    #
    namespace eval ::myCalc {
        proc detIntegral { begin end } {
            return [integral $begin $end 100 ::mySpace::calcfunc]
        }
    }
    #
    # Import the name
    #
    namespace eval ::myCalc {
        namespace import ::mySpace::calcfunc
        proc detIntegral { begin end } {
            return [integral $begin $end 100 calcfunc]
        }
    }

Enhancements for the second-order boundary value problem:

- Other types of boundary conditions (zero gradient, zero flux)

- Other schematisation of the first-order term (now central differences are used, but upstream differences might be useful too).

................................................................................

# EXAMPLES

Let us take a few simple examples:

Integrate x over the interval [0,100] (20 steps):

    proc linear_func { x } { return $x }
    puts "Integral: [::math::calculus::integral 0 100 20 linear_func]"

For simple functions, the alternative could be:

    puts "Integral: [::math::calculus::integralExpr 0 100 20 {$x}]"

Do not forget the braces!

The differential equation for a dampened oscillator:

    x'' + rx' + wx = 0

can be split into a system of first-order equations:

    x' = y
    y' = -ry - wx

Then this system can be solved with code like this:

    proc dampened_oscillator { t xvec } {
        set x  [lindex $xvec 0]
        set x1 [lindex $xvec 1]
        return [list $x1 [expr {-$x1-$x}]]
    }

    set xvec  { 1.0 0.0 }
    set t     0.0
    set tstep 0.1
    for { set i 0 } { $i < 20 } { incr i } {
        set result [::math::calculus::eulerStep $t $tstep $xvec dampened_oscillator]
        puts "Result ($t): $result"
        set t    [expr {$t+$tstep}]
        set xvec $result
    }

Suppose we have the boundary value problem:

    Dy'' + ky = 0
    x = 0: y = 1
    x = L: y = 0

This boundary value problem could originate from the diffusion of a decaying substance.
It can be solved with the following fragment:

    proc coeffs { x } { return [list $::Diff 0.0 $::decay] }
    proc force  { x } { return 0.0 }

    set Diff   1.0e-2
    set decay  0.0001
    set length 100.0

    set y [::math::calculus::boundaryValueSecondOrder \
        coeffs force {0.0 1.0} [list $length 0.0] 100]

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category *math :: calculus* of the [Tcllib Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report any ideas for enhancements you may have for either package and/or documentation.

Changes to embedded/md/tcllib/files/modules/math/combinatorics.md.
- __::math::ln_Gamma__ *z*

  Returns the natural logarithm of the Gamma function for the argument *z*. The Gamma function is defined as the improper integral from zero to positive infinity of

        t**(x-1) * exp(-t) dt

  The approximation used in the Tcl Math Library is from Lanczos, *ISIAM J. Numerical Analysis, series B,* volume 1, p. 86. For "__x__ > 1", the absolute error of the result is claimed to be smaller than 5.5*10**-10 -- that is, the resulting value of Gamma when exp(ln_Gamma(x)) is computed is expected to be precise to better than nine significant figures.

- __::math::factorial__ *x*

  Returns the factorial of the argument *x*.

................................................................................

  It is an error to present *x* <= -1 or *x* > 170, or a value of *x* that is not numeric.

- __::math::choose__ *n k*

  Returns the binomial coefficient *C(n, k)*

        C(n,k) = n! / (k! (n-k)!)

  If both parameters are integers and the result fits in 32 bits, the result is rounded to an integer. Integer results are exact up to at least *n* = 34. Floating point results are precise to better than nine significant figures.

- __::math::Beta__ *z w*

  Returns the Beta function of the parameters *z* and *w*.
        Beta(z,w) = Beta(w,z) = Gamma(z) * Gamma(w) / Gamma(z+w)

  Results are returned as a floating point number precise to better than nine significant digits provided that *w* and *z* are both at least 1.

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems.

Changes to embedded/md/tcllib/files/modules/math/constants.md.

- One for reporting which constants are defined and what values they actually have.

The motivation for this package is that quite often, with (mathematical) computations, you need a good approximation to, say, the ratio of degrees to radians. You can, of course, define this like:

    variable radtodeg [expr {180.0/(4.0*atan(1.0))}]

and use the variable radtodeg whenever you need the conversion.
This has two drawbacks:

- You need to remember the proper formula or value and that is error-prone.

................................................................................

- basic constants like pi, e, gamma (Euler's constant)

- derived values like ln(10) and sqrt(2)

- purely numerical values such as 1/3 that are included for convenience and for the fact that certain seemingly trivial computations like:

        set value [expr {3.0*$onethird}]

  give *exactly* the value you expect (if IEEE arithmetic is available).

The full set of named constants is listed in section [Constants](#section3).

# PROCEDURES
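The body of the PROCEDURES section is elided from this excerpt. As a hedged sketch of typical usage, the package's __::math::constants::constants__ command imports the named constants as variables into the caller's scope (the constant names shown are from the list above):

```tcl
package require math::constants

# create the named constants as variables in the current scope
::math::constants::constants pi radtodeg onethird

puts "pi       = $pi"
puts "radtodeg = $radtodeg"
# gives exactly the value you expect (with IEEE arithmetic)
puts [expr {3.0*$onethird}]
```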

Changes to embedded/md/tcllib/files/modules/math/decimal.md.

perform decimal math operations, examples may be the simplest way to learn how to work with it and to see the difference between using this package and sticking with expr. Consult the API section of this man page for information about individual procedures.

    package require math::decimal

    # Various operations on two numbers.
    # We first convert them to decimal format.
    set a [::math::decimal::fromstr 8.2]
    set b [::math::decimal::fromstr .2]

    # Then we perform our operations. Here we add.
    set c [::math::decimal::+ $a $b]

    # Finally we convert back to string format for presentation to the user.
    puts [::math::decimal::tostr $c] ; # => will output 8.4

    # Other examples
    #
    # Subtraction
    set c [::math::decimal::- $a $b]
    puts [::math::decimal::tostr $c] ; # => will output 8.0

    # Why bother using this instead of simply expr?
    puts [expr {8.2 + .2}] ; # => will output 8.399999999999999
    puts [expr {8.2 - .2}] ; # => will output 7.999999999999999
    # See http://speleotrove.com/decimal to learn more about why this happens.

# API

  - __::math::decimal::fromstr__ *string*

    Convert *string* into a decimal.
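The drift shown in the last two expr lines compounds under repeated binary floating-point arithmetic, which is the core motivation for a decimal package. A self-contained illustration in plain Tcl (no package required):

```tcl
# Adding 0.1 ten times in binary floating point does not yield exactly 1.0,
# because 0.1 has no finite base-2 representation.
set sum 0.0
for {set i 0} {$i < 10} {incr i} {
    set sum [expr {$sum + 0.1}]
}
puts $sum                   ;# => 0.9999999999999999
puts [expr {$sum == 1.0}]   ;# => 0
```

With math::decimal the same ten additions of the decimal value 0.1 would sum to exactly 1.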

Changes to embedded/md/tcllib/files/modules/math/exact.md.

The __math::exact::exactexpr__ command provides a system that performs exact arithmetic over computable real numbers, representing the numbers as algorithms for successive approximation. An example, which implements the high-school quadratic formula, is shown below.

    namespace import math::exact::exactexpr

    proc exactquad {a b c} {
        set d [[exactexpr {sqrt($b*$b - 4*$a*$c)}] ref]
        set r0 [[exactexpr {(-$b - $d) / (2 * $a)}] ref]
        set r1 [[exactexpr {(-$b + $d) / (2 * $a)}] ref]
        $d unref
        return [list $r0 $r1]
    }

    set a [[exactexpr 1] ref]
    set b [[exactexpr 200] ref]
    set c [[exactexpr {(-3/2) * 10**-12}] ref]
    lassign [exactquad $a $b $c] r0 r1
    $a unref; $b unref; $c unref
    puts [list [$r0 asFloat 70] [$r1 asFloat 110]]
    $r0 unref; $r1 unref

The program prints the result:

    -2.000000000000000075e2 7.499999999999999719e-15

Note that if IEEE-754 floating point had been used, a catastrophic roundoff error would yield a smaller root that is a factor of two too high:

    -200.0 1.4210854715202004e-14

The invocations of __exactexpr__ should be fairly self-explanatory. The other commands of note are __ref__ and __unref__. It is necessary for the caller to keep track of references to exact expressions - to call __ref__ every time an exact expression is stored in a variable and __unref__ every time the variable goes out of scope or is overwritten. The __asFloat__ method emits decimal digits as long as the requested precision

Changes to embedded/md/tcllib/files/modules/math/fourier.md.

If the input length N is a power of two then these procedures will utilize the O(N log N) Fast Fourier Transform algorithm. If the input length is not a power of two then the DFT will instead be computed using the naive quadratic algorithm.

Some examples:

    % dft {1 2 3 4}
    {10 0.0} {-2.0 2.0} {-2 0.0} {-2.0 -2.0}
    % inverse_dft {{10 0.0} {-2.0 2.0} {-2 0.0} {-2.0 -2.0}}
    {1.0 0.0} {2.0 0.0} {3.0 0.0} {4.0 0.0}
    % dft {1 2 3 4 5}
    {15.0 0.0} {-2.5 3.44095480118} {-2.5 0.812299240582} {-2.5 -0.812299240582} {-2.5 -3.44095480118}
    % inverse_dft {{15.0 0.0} {-2.5 3.44095480118} {-2.5 0.812299240582} {-2.5 -0.812299240582} {-2.5 -3.44095480118}}
    {1.0 0.0} {2.0 8.881784197e-17} {3.0 4.4408920985e-17} {4.0 4.4408920985e-17} {5.0 -8.881784197e-17}

In the last case, the imaginary parts <1e-16 would have been zero in exact arithmetic, but aren't here due to rounding errors.
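For input lengths that are not a power of two, the package falls back to the naive quadratic transform mentioned above. That fallback is easy to sketch in plain Tcl on the same {Re Im} pair representation used by the examples (a sketch only, not the package's actual implementation):

```tcl
# Naive O(N^2) DFT over a list of {Re Im} pairs.
proc naiveDft {input} {
    set n  [llength $input]
    set pi [expr {4.0*atan(1.0)}]
    set result {}
    for {set k 0} {$k < $n} {incr k} {
        set re 0.0
        set im 0.0
        set j 0
        foreach pair $input {
            lassign $pair xr xi
            # Multiply input sample j by exp(-2*pi*i*k*j/N) and accumulate.
            set angle [expr {-2.0*$pi*$k*$j/$n}]
            set c [expr {cos($angle)}]
            set s [expr {sin($angle)}]
            set re [expr {$re + $xr*$c - $xi*$s}]
            set im [expr {$im + $xr*$s + $xi*$c}]
            incr j
        }
        lappend result [list $re $im]
    }
    return $result
}

# Matches the first dft example above, up to rounding noise:
# {10 0} {-2 2} {-2 0} {-2 -2}
puts [naiveDft {{1 0} {2 0} {3 0} {4 0}}]
```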
Internally, the procedures use a flat list format where every even index element of a list is a real part and every odd index element is an imaginary part. This is reflected in the variable names by Re_ and Im_ prefixes.

Changes to embedded/md/tcllib/files/modules/math/fuzzy.md.

  - __::math::fuzzy::troundn__ *value* *ndigits*

    Rounds the floating-point number off to the specified number of decimals (Pro memorie).

Usage:

    if { [teq $x $y] } { puts "x == y" }
    if { [tne $x $y] } { puts "x != y" }
    if { [tge $x $y] } { puts "x >= y" }
    if { [tgt $x $y] } { puts "x > y" }
    if { [tlt $x $y] } { puts "x < y" }
    if { [tle $x $y] } { puts "x <= y" }

    set fx      [tfloor $x]
    set fc      [tceil $x]
    set rounded [tround $x]
    set roundn  [troundn $x $nodigits]

# TEST CASES

The problems that can occur with floating-point numbers are illustrated by the test cases in the file "fuzzy.test":

  - Several test cases use the ordinary comparisons, and they fail invariably to

Changes to embedded/md/tcllib/files/modules/math/interpolate.md.

  - __::math::interpolate::interp-spatial__ *xyvalues* *coord*

    Use a straightforward interpolation method with weights as function of the inverse distance to interpolate in 2D and N-dimensional space.

    The list xyvalues is a list of lists:

        {   {x1 y1 z1 {v11 v12 v13 v14}}
            {x2 y2 z2 {v21 v22 v23 v24}}
            ...
        }

    The last element of each inner list is either a single number or a list in itself. In the latter case the return value is a list with the same number of elements.

    The method is influenced by the search radius and the power of the inverse distance

................................................................................

# EXAMPLES

*Example of using one-dimensional tables:*

Suppose you have several tabulated functions of one variable:

    x    y1    y2
    0.0  0.0   0.0
    1.0  1.0   1.0
    2.0  4.0   8.0
    3.0  9.0  27.0
    4.0 16.0  64.0

Then to estimate the values at 0.5, 1.5, 2.5 and 3.5, you can use:

    set table [::math::interpolate::defineTable table1 {x y1 y2} {
        -    1    2
        0.0  0.0  0.0
        1.0  1.0  1.0
        2.0  4.0  8.0
        3.0  9.0 27.0
        4.0 16.0 64.0}]

    foreach x {0.5 1.5 2.5 3.5} {
        puts "$x: [::math::interpolate::interp-1d-table $table $x]"
    }

For one-dimensional tables the first row is not used. For two-dimensional tables, the first row represents the values for the second independent variable.

*Example of using the cubic splines:*

Suppose the following values are given:

    x    y
    0.1  1.0
    0.3  2.1
    0.4  2.2
    0.8  4.11
    1.0  4.12

Then to estimate the values at 0.1, 0.2, 0.3, ... 1.0, you can use:

    set coeffs [::math::interpolate::prepare-cubic-splines \
        {0.1 0.3 0.4 0.8 1.0} \
        {1.0 2.1 2.2 4.11 4.12}]

    foreach x {0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0} {
        puts "$x: [::math::interpolate::interp-cubic-splines $coeffs $x]"
    }

to get the following output:

    0.1: 1.0
    0.2: 1.68044117647
    0.3: 2.1
    0.4: 2.2
    0.5: 3.11221507353
    0.6: 4.25242647059
    0.7: 5.41804227941
    0.8: 4.11
    0.9: 3.95675857843
    1.0: 4.12

As you can see, the values at the abscissae are reproduced perfectly.

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category *math :: interpolate* of

Changes to embedded/md/tcllib/files/modules/math/linalg.md.

  - __::math::linearalgebra::dgetrf__ *matrix*

    Computes an LU factorization of a general matrix, using partial pivoting with row interchanges. Returns the permutation vector.

    The factorization has the form

        P * A = L * U

    where P is a permutation matrix, L is lower triangular with unit diagonal elements, and U is upper triangular. Returns the permutation vector, as a list of length n-1. The last entry of the permutation is not stored, since it is implicitly known, with value n (the last row is not swapped with any other row). At index #i of the permutation is stored the index of the row #j which is swapped with row #i at step #i. That means that each index of the

................................................................................

off-diagonal and the main diagonal) and n rows.

  - Element i,j (i = -m,...,m; j = 1,...,n) of "B" corresponds to element k,j of "A" where k = M+i-1 and M is at least (!) n, the number of rows in "B".

  - To set element (i,j) of matrix "B" use:

        setelem B $j [expr {$N+$i-1}] $value

    (There is no convenience procedure for this yet)

# REMARKS ON THE IMPLEMENTATION

There is a difference between the original LA package by Hume and the current implementation. Whereas the LA package uses a linear list, the current package

................................................................................

    namespace import ::math::linearalgebra

results in an error message about "scale". This is due to the fact that Tk defines all its commands in the global namespace. The solution is to import the linear algebra commands in a namespace that is not the global one:

    package require math::linearalgebra
    namespace eval compute {
        namespace import ::math::linearalgebra::*
        ... use the linear algebra version of scale ...
    }

To use Tk's scale command in that same namespace you can rename it:

    namespace eval compute {
        rename ::scale scaleTk
        scaleTk .scale ...
    }

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category *math :: linearalgebra* of the [Tcllib Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report any ideas for enhancements you may have for either package and/or

Changes to embedded/md/tcllib/files/modules/math/machineparameters.md.

# DESCRIPTION

The *math::machineparameters* package is the Tcl equivalent of the DLAMCH LAPACK function. In floating point systems, a floating point number is represented by

    x = +/- d1 d2 ... dt basis^e

where digits satisfy

    0 <= di <= basis - 1, i = 1, t

with the convention:

  - t is the size of the mantissa

  - basis is the basis (the "radix")

................................................................................

prints a report on standard output.

# EXAMPLE

In the following example, one computes the parameters of a desktop under Linux with the following Tcl 8.4.19 properties:

    % parray tcl_platform
    tcl_platform(byteOrder) = littleEndian
    tcl_platform(machine)   = i686
    tcl_platform(os)        = Linux
    tcl_platform(osVersion) = 2.6.24-19-generic
    tcl_platform(platform)  = unix
    tcl_platform(tip,268)   = 1
    tcl_platform(tip,280)   = 1
    tcl_platform(user)      =
    tcl_platform(wordSize)  = 4

The following example creates a machineparameters object, computes the properties and displays them.

    set pp [machineparameters create %AUTO%]
    $pp compute
    $pp print
    $pp destroy

This prints out:

    Machine parameters
    Epsilon : 1.11022302463e-16
    Beta : 2
    Rounding : proper
    Mantissa : 53
    Maximum exponent : 1024
    Minimum exponent : -1021
    Overflow threshold : 8.98846567431e+307
    Underflow threshold : 2.22507385851e-308

That compares well with the results produced by Lapack 3.1.1:

    Epsilon                      = 1.11022302462515654E-016
    Safe minimum                 = 2.22507385850720138E-308
    Base                         = 2.0000000000000000
    Precision                    = 2.22044604925031308E-016
    Number of digits in mantissa = 53.000000000000000
    Rounding mode                = 1.00000000000000000
    Minimum exponent             = -1021.0000000000000
    Underflow threshold          = 2.22507385850720138E-308
    Largest exponent             = 1024.0000000000000
    Overflow threshold           = 1.79769313486231571E+308
    Reciprocal of safe minimum   = 4.49423283715578977E+307

The following example creates a machineparameters object, computes the properties and gets the epsilon for the machine.

    set pp [machineparameters create %AUTO%]
    $pp compute
    set eps [$pp get -epsilon]
    $pp destroy

# REFERENCES

  - "Algorithms to Reveal Properties of Floating-Point Arithmetic", Michael A.
Malcolm, Stanford University, Communications of the ACM, Volume 15, Issue 11 (November 1972), Pages: 949 - 951

Changes to embedded/md/tcllib/files/modules/math/math_geometry.md.

    command.

  - __::math::geometry::distance__ *point1* *point2*

    Compute the distance between the two points and return it as the result of the command. This is in essence the same as

        math::geometry::length [math::geometry::- point1 point2]

  - __::math::geometry::length__ *point*

    Compute the length of the vector and return it as the result of the command.

  - __::math::geometry::s\*__ *factor* *point*

Changes to embedded/md/tcllib/files/modules/math/optimize.md.

Several of the above procedures take the *names* of procedures as arguments. To avoid problems with the *visibility* of these procedures, the fully-qualified name of these procedures is determined inside the optimize routines. For the user this has only one consequence: the named procedure must be visible in the calling procedure. For instance:

    namespace eval ::mySpace {
       namespace export calcfunc
       proc calcfunc { x } { return $x }
    }
    #
    # Use a fully-qualified name
    #
    namespace eval ::myCalc {
       puts [min_bound_1d ::myCalc::calcfunc $begin $end]
    }
    #
    # Import the name
    #
    namespace eval ::myCalc {
       namespace import ::mySpace::calcfunc
       puts [min_bound_1d calcfunc $begin $end]
    }

The simple procedures *minimum* and *maximum* have been deprecated: the alternatives are much more flexible, robust and require fewer function evaluations.

# EXAMPLES

Let us take a few simple examples:

Determine the maximum of f(x) = x^3 exp(-3x), on the interval (0,10):

    proc efunc { x } { expr {$x*$x*$x * exp(-3.0*$x)} }
    puts "Maximum at: [::math::optimize::max_bound_1d efunc 0.0 10.0]"

The maximum allowed error determines the number of steps taken (with each step in the iteration the interval is reduced with a factor 1/2). Hence, a maximum error of 0.0001 is achieved in approximately 14 steps.

An example of a *linear program* is:

Optimise the expression 3x+2y, where:

    x >= 0 and y >= 0  (implicit constraints, part of the
                        definition of linear programs)

    x  +  y <= 1       (constraints specific to the problem)
    2x + 5y <= 10

This problem can be solved as follows:

    set solution [::math::optimize::solveLinearProgram \
        { 3.0 2.0 } \
        { { 1.0 1.0 1.0 }
          { 2.0 5.0 10.0 } } ]

Note, that a constraint like:

    x + y >= 1

can be turned into standard form using:

    -x - y <= -1

The theory of linear programming is the subject of many a text book and the Simplex algorithm that is implemented here is the best-known method to solve this type of problem, but it is not the only one.

# Bugs, Ideas, Feedback

Changes to embedded/md/tcllib/files/modules/math/polynomials.md.

The package defines the following public procedures:

  - __::math::polynomials::polynomial__ *coeffs*

    Return an (encoded) list that defines the polynomial. A polynomial

        f(x) = a + b.x + c.x**2 + d.x**3

    can be defined via:

        set f [::math::polynomials::polynomial [list $a $b $c $d]]

      * list *coeffs*

        Coefficients of the polynomial (in ascending order)

  - __::math::polynomials::polynCmd__ *coeffs*
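Evaluating such an ascending coefficient list is a straightforward application of Horner's rule, sketched here in plain Tcl. (The package itself provides ready-made evaluation commands; this sketch only illustrates the coefficient-list convention.)

```tcl
# Horner's rule over a coefficient list in ascending order {a b c d ...}.
proc evalPoly {coeffs x} {
    set result 0.0
    # Walk from the highest power down to the constant term.
    foreach c [lreverse $coeffs] {
        set result [expr {$result*$x + $c}]
    }
    return $result
}

# f(x) = 1 + 2x + 3x**2 at x = 2: 1 + 4 + 12
puts [evalPoly {1 2 3} 2.0]   ;# => 17.0
```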

Changes to embedded/md/tcllib/files/modules/math/qcomplex.md.

valid \(representations of\) "complex numbers", that is, lists of two numbers defining the real and imaginary part of a complex number \(though this is a mere detail: rely on the *complex* command to construct a valid number.\)

Most procedures implement the basic arithmetic operations or elementary functions whereas several others convert to and from different representations:

    set z [complex 0 1]
    puts "z = [tostring $z]"
    puts "z**2 = [tostring [* $z $z]]"

would result in:

    z = i
    z**2 = -1

# AVAILABLE PROCEDURES

The package implements all or most basic operations and elementary functions.

*The arithmetic operations are:*

Changes to embedded/md/tcllib/files/modules/math/rational_funcs.md.

The package defines the following public procedures:

  - __::math::rationalfunctions::rationalFunction__ *num* *den*

    Return an \(encoded\) list that defines the rational function. A rational function

               1 + x^3
        f(x) = ------------
               1 + 2x + x^2

    can be defined via:

        set f [::math::rationalfunctions::rationalFunction \
                   [list 1 0 0 1] [list 1 2 1]]

      * list *num*

        Coefficients of the numerator of the rational function \(in ascending order\)

      * list *den*

Changes to embedded/md/tcllib/files/modules/math/romberg.md.
dx/sqrt\(1-x**2\) maps to du. Choosing x=sin\(u\), we can find that dx=cos\(u\)*du, and sqrt\(1-x**2\)=cos\(u\). The integral from a to b of f\(x\) is the integral from asin\(a\) to asin\(b\) of f\(sin\(u\)\)*cos\(u\). We can make a function __g__ that accepts an arbitrary function __f__ and the parameter u, and computes this new integrand.

    proc g { f u } {
        set x [expr { sin($u) }]
        set cmd $f; lappend cmd $x; set y [eval $cmd]
        return [expr { $y / cos($u) }]
    }

Now integrating __f__ from *a* to *b* is the same as integrating __g__ from *asin\(a\)* to *asin\(b\)*. It's a little tricky to get __f__ consistently evaluated in the caller's scope; the following procedure does it.

    proc romberg_sine { f a b args } {
        set f [lreplace $f 0 0 \
                   [uplevel 1 [list namespace which [lindex $f 0]]]]
        set f [list g $f]
        return [eval [linsert $args 0 \
                          romberg $f \
                          [expr { asin($a) }] \
                          [expr { asin($b) }]]]
    }

This __romberg_sine__ procedure will do any function with sqrt\(1-x*x\) in the denominator. Our sample function is f\(x\)=exp\(x\)/sqrt\(1-x*x\):

    proc f { x } {
        expr { exp($x) / sqrt( 1. - $x*$x ) }
    }

Integrating it is a matter of applying __romberg_sine__ as we would any of the other __romberg__ procedures:

    foreach { value error } [romberg_sine f -1.0 1.0] break
    puts [format "integral is %.6g +/- %.6g" $value $error]

    integral is 3.97746 +/- 2.3557e-010

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category *math :: calculus* of the [Tcllib Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report any ideas for enhancements you may have for either package and/or documentation.

Changes to embedded/md/tcllib/files/modules/math/special.md.

# OVERVIEW

In the following table several characteristics of the functions in this package are summarized: the domain for the argument, the values for the parameters and error bounds.

    Family       | Function    | Domain x    | Parameter   | Error bound
    -------------+-------------+-------------+-------------+--------------
    Bessel       | J0, J1,     | all of R    | n = integer |   < 1.0e-8
                 | Jn          |             |             |  (|x|<20, n<20)
    Bessel       | J1/2, J-1/2,| x > 0       | n = integer |   exact
    Bessel       | I_n         | all of R    | n = integer |   < 1.0e-6
                 |             |             |             |
    Elliptic     | cn          | 0 <= x <= 1 |     --      |   < 1.0e-10
    functions    | dn          | 0 <= x <= 1 |     --      |   < 1.0e-10
                 | sn          | 0 <= x <= 1 |     --      |   < 1.0e-10
    Elliptic     | K           | 0 <= x < 1  |     --      |   < 1.0e-6
    integrals    | E           | 0 <= x < 1  |     --      |   < 1.0e-6
                 |             |             |             |
    Error        | erf         |             |     --      |
    functions    | erfc        |             |             |
                 |             |             |             |
    Inverse      | invnorm     | 0 < x < 1   |     --      |   < 1.2e-9
    normal       |             |             |             |
    distribution |             |             |             |
                 |             |             |             |
    Exponential  | Ei          | x != 0      |     --      |   < 1.0e-10 (relative)
    integrals    | En          | x > 0       |     --      |   as Ei
                 | li          | x > 0       |     --      |   as Ei
                 | Chi         | x > 0       |     --      |   < 1.0e-8
                 | Shi         | x > 0       |     --      |   < 1.0e-8
                 | Ci          | x > 0       |     --      |   < 2.0e-4
                 | Si          | x > 0       |     --      |   < 2.0e-4
                 |             |             |             |
    Fresnel      | C           | all of R    |     --      |   < 2.0e-3
    integrals    | S           | all of R    |     --      |   < 2.0e-3
                 |             |             |             |
    general      | Beta        | (see Gamma) |     --      |   < 1.0e-9
                 | Gamma       | x != 0,-1,  |     --      |   < 1.0e-9
                 |             | -2, ...     |             |
                 | sinc        | all of R    |     --      |   exact
                 |             |             |             |
    orthogonal   | Legendre    | all of R    | n = 0,1,... |   exact
    polynomials  | Chebyshev   | all of R    | n = 0,1,... |   exact
                 | Laguerre    | all of R    | n = 0,1,... |   exact
                 |             |             | alpha el. R |
                 | Hermite     | all of R    | n = 0,1,... |   exact

*Note:* Some of the error bounds are estimated, as no "formal" bounds were available with the implemented approximation method, others hold for the auxiliary functions used for estimating the primary functions.

The following well-known functions are currently missing from the package:

................................................................................

# THE ORTHOGONAL POLYNOMIALS

For dealing with the classical families of orthogonal polynomials, the package relies on the *math::polynomials* package. To evaluate the polynomial at some coordinate, use the *evalPolyn* command:

    set leg2 [::math::special::legendre 2]
    puts "Value at x=$x: [::math::polynomials::evalPolyn $leg2 $x]"

The return value from the *legendre* and other commands is actually the definition of the corresponding polynomial as used in that package.

# REMARKS ON THE IMPLEMENTATION

It should be noted, that the actual implementation of J0 and J1 depends on

Changes to embedded/md/tcllib/files/modules/math/statistics.md.
are. This is a one-way ANOVA test. The procedure returns a list of the comparison results for each pair of groups. Each element of this list contains: the index of the first group and that of the second group, whether the means are likely to be different \(1\) or not \(0\) and the confidence interval the conclusion is based on.

The groups may also be stored in a nested list:

    test-anova-F 0.05 $A $B $C
    #
    # Or equivalently:
    #
    test-anova-F 0.05 [list $A $B $C]

  * float *alpha* - Significance level

  * list *args*

................................................................................

this list contains: whether the means are likely to be different \(1\) or not \(0\) and the confidence interval the conclusion is based on. The groups may also be stored in a nested list, just as with the ANOVA test.

Note: some care is required if there is only one group to compare the control with:

    test-Dunnett-F 0.05 $control [list $A]

Otherwise the group A is split up into groups of one element - this is due to an ambiguity.

  * float *alpha* - Significance level - either 0.05 or 0.01

................................................................................

*Description of the procedures*

  - __::math::statistics::tstat__ *dof* ?alpha?

    Returns the value of the t-distribution t* satisfying

        P(t*)  = 1 - alpha/2
        P(-t*) = alpha/2

    for the number of degrees of freedom dof.

    Given a sample of normally-distributed data x, with an estimate xbar for the mean and sbar for the standard deviation, the alpha confidence interval for the estimate of the mean can be calculated as

        ( xbar - t* sbar , xbar + t* sbar )

    The return values from this procedure can be compared to an estimated t-statistic to determine whether the estimated value of a parameter is significantly different from zero at the given confidence level.

      * int *dof*

................................................................................

  - __::math::statistics::mv-wls__ *wt1* *weights_and_values*

    Carries out a weighted least squares linear regression for the data points provided, with weights assigned to each point. The linear model is of the form

        y = b0 + b1 * x1 + b2 * x2 ... + bN * xN + error

    and each point satisfies

        yi = b0 + b1 * xi1 + b2 * xi2 + ... + bN * xiN + Residual_i

    The procedure returns a list with the following elements:

      * The r-squared statistic

      * The adjusted r-squared statistic

................................................................................

provided. This procedure simply calls ::mvlinreg::wls with the weights set to 1.0, and returns the same information.

*Example of the use:*

    # Store the value of the unicode value for the "+/-" character
    set pm "\u00B1"

    # Provide some data
    set data {{  -.67 14.18 60.03 -7.5  }
              { 36.97 15.52 34.24 14.61 }
              {-29.57 21.85 83.36 -7.   }
              {-16.9  11.79 51.67 -6.56 }
              { 14.09 16.24 36.97 -12.84}
              { 31.52 20.93 45.99 -25.4 }
              { 24.05 20.69 50.27 17.27 }
              { 22.23 16.91 45.07 -4.3  }
              { 40.79 20.49 38.92 -.73  }
              {-10.35 17.24 58.77 18.78 }}

    # Call the ols routine
    set results [::math::statistics::mv-ols $data]

    # Pretty-print the results
    puts "R-squared: [lindex $results 0]"
    puts "Adj R-squared: [lindex $results 1]"
    puts "Coefficients $pm s.e. -- \[95% confidence interval\]:"
    foreach val [lindex $results 2] se [lindex $results 3] bounds [lindex $results 4] {
        set lb [lindex $bounds 0]
        set ub [lindex $bounds 1]
        puts "   $val $pm $se -- \[$lb to $ub\]"
    }

# STATISTICAL DISTRIBUTIONS

In the literature a large number of probability distributions can be found. The statistics package supports:

  - The normal or Gaussian distribution as well as the log-normal distribution

................................................................................

    - Total number of "observations" in the histogram

  - __::math::statistics::incompleteGamma__ *x* *p* ?tol?

    Evaluate the incomplete Gamma integral

                     1      / x               p-1
        P(p,x) = -------- |     dt exp(-t) * t
                 Gamma(p) / 0

      * float *x* - Value of x \(limit of the integral\)

      * float *p*

................................................................................

  - subdivide

# EXAMPLES

The code below is a small example of how you can examine a set of data:

    # Simple example:
    # - Generate data (as a cheap way of getting some)
    # - Perform statistical analysis to describe the data
    #
    package require math::statistics

    #
    # Two auxiliary procs
    #
    proc pause {time} {
        set wait 0
        after [expr {$time*1000}] {set ::wait 1}
        vwait wait
    }

    proc print-histogram {counts limits} {
        foreach count $counts limit $limits {
            if { $limit != {} } {
                puts [format "<%12.4g\t%d" $limit $count]
                set prev_limit $limit
            } else {
                puts [format ">%12.4g\t%d" $prev_limit $count]
            }
        }
    }

    #
    # Our source of arbitrary data
    #
    proc generateData { data1 data2 } {
        upvar 1 $data1 _data1
        upvar 1 $data2 _data2

        set d1 0.0
        set d2 0.0
        for { set i 0 } { $i < 100 } { incr i } {
            set d1 [expr {10.0-2.0*cos(2.0*3.1415926*$i/24.0)+3.5*rand()}]
            set d2 [expr {0.7*$d2+0.3*$d1+0.7*rand()}]
            lappend _data1 $d1
            lappend _data2 $d2
        }
        return {}
    }

    #
    # The analysis session
    #
    package require Tk
    console show
    canvas .plot1
    canvas .plot2
    pack   .plot1 .plot2 -fill both -side top

    generateData data1 data2

    puts "Basic statistics:"
    set b1 [::math::statistics::basic-stats $data1]
    set b2 [::math::statistics::basic-stats $data2]
    foreach label {mean min max number stdev var} v1 $b1 v2 $b2 {
        puts "$label\t$v1\t$v2"
    }

    puts "Plot the data as function of \"time\" and against each other"
    ::math::statistics::plot-scale .plot1  0 100  0 20
    ::math::statistics::plot-scale .plot2  0 20   0 20
    ::math::statistics::plot-tline  .plot1 $data1
    ::math::statistics::plot-tline  .plot1 $data2
    ::math::statistics::plot-xydata .plot2 $data1 $data2

    puts "Correlation coefficient:"
    puts [::math::statistics::corr $data1 $data2]

    pause 2
    puts "Plot histograms"
    .plot2 delete all
    ::math::statistics::plot-scale .plot2  0 20  0 100
    set limits         [::math::statistics::minmax-histogram-limits 7 16]
    set histogram_data [::math::statistics::histogram $limits $data1]
    ::math::statistics::plot-histogram .plot2 $histogram_data $limits

    puts "First series:"
    print-histogram $histogram_data $limits

    pause 2
    set limits         [::math::statistics::minmax-histogram-limits 0 15 10]
    set histogram_data [::math::statistics::histogram $limits $data2]
    ::math::statistics::plot-histogram .plot2 $histogram_data $limits d2
    .plot2 itemconfigure d2 -fill red

    puts "Second series:"
    print-histogram $histogram_data $limits

    puts "Autocorrelation function:"
    set autoc [::math::statistics::autocorr $data1]
    puts [::math::statistics::map $autoc {[format "%.2f" $x]}]
    puts "Cross-correlation function:"
    set crossc [::math::statistics::crosscorr $data1 $data2]
    puts [::math::statistics::map $crossc {[format "%.2f" $x]}]

    ::math::statistics::plot-scale .plot1 0 100 -1  4
    ::math::statistics::plot-tline .plot1 $autoc  "autoc"
    ::math::statistics::plot-tline .plot1 $crossc "crossc"
    .plot1 itemconfigure autoc  -fill green
    .plot1 itemconfigure crossc -fill yellow

    puts "Quantiles: 0.1, 0.2, 0.5, 0.8, 0.9"
    puts "First:  [::math::statistics::quantiles $data1 {0.1 0.2 0.5 0.8 0.9}]"
    puts "Second: [::math::statistics::quantiles $data2 {0.1 0.2 0.5 0.8 0.9}]"

If you run this example, then the following should be clear:

  - There is a strong correlation between two time series, as displayed by the raw data and especially by the correlation functions.

  - Both time series show a significant periodic component

Changes to embedded/md/tcllib/files/modules/math/symdiff.md.

__sinh__, __sqrt__, __tan__, and __tanh__. Command substitution, backslash substitution, and argument expansion are not accepted.

# Examples

    math::calculus::symdiff::symdiff {($a*$x+$b)*($c*$x+$d)} x
    ==> (($c * (($a * $x) + $b)) + ($a * (($c * $x) + $d)))

    math::calculus::symdiff::jacobian {x {$a * $x + $b * $y}
                                       y {$c * $x + $d * $y}}
    ==> {{$a} {$b}} {{$c} {$d}}

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category *math :: calculus* of the [Tcllib Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report any ideas for enhancements you may have for either package and/or documentation.
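The derivative symdiff reports for (a\*x+b)\*(c\*x+d) is just the product rule, c\*(a\*x+b) + a\*(c\*x+d), and can be cross-checked numerically. The sketch below is Python for illustration; the coefficient values are arbitrary choices made here, not values from the documentation.

```python
# Numerical cross-check of the product-rule result above:
#   d/dx [(a*x + b)*(c*x + d)] = c*(a*x + b) + a*(c*x + d)
a, b, c, d = 2.0, -1.0, 0.5, 3.0   # arbitrary illustrative coefficients

def f(x):
    return (a * x + b) * (c * x + d)

def symdiff_result(x):
    """The derivative as produced symbolically by symdiff."""
    return c * (a * x + b) + a * (c * x + d)

def central_difference(g, x, h=1e-6):
    """Second-order finite-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2.0 * h)

x = 1.7
print(symdiff_result(x), central_difference(f, x))
```

For these coefficients the derivative simplifies to 2x + 5.5, so both numbers agree to within the finite-difference error.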

Changes to embedded/md/tcllib/files/modules/md4/md4.md.

  - __::md4::HMACFinal__ *token*

    These commands are identical to the MD4 equivalent commands.

# EXAMPLES

    % md4::md4 -hex "Tcl does MD4"
    858da9b31f57648a032230447bd15f25
    % md4::hmac -hex -key Sekret "Tcl does MD4"
    c324088e5752872689caedf2a0464758
    % set tok [md4::MD4Init]
    ::md4::1
    % md4::MD4Update $tok "Tcl "
    % md4::MD4Update $tok "does "
    % md4::MD4Update $tok "MD4"
    % md4::Hex [md4::MD4Final $tok]
    858da9b31f57648a032230447bd15f25

# REFERENCES

  1. Rivest, R., "The MD4 Message Digest Algorithm", RFC 1320, MIT, April
     1992.
     ([http://www.rfc-editor.org/rfc/rfc1320.txt](http://www.rfc-editor.org/rfc/rfc1320.txt))

Changes to embedded/md/tcllib/files/modules/md5/md5.md.

  - __::md5::HMACFinal__ *token*

    These commands are identical to the MD5 equivalent commands.

# EXAMPLES

    % md5::md5 -hex "Tcl does MD5"
    8AAC1EE01E20BB347104FABB90310433
    % md5::hmac -hex -key Sekret "Tcl does MD5"
    35BBA244FD56D3EDF5F3C47474DACB5D
    % set tok [md5::MD5Init]
    ::md5::1
    % md5::MD5Update $tok "Tcl "
    % md5::MD5Update $tok "does "
    % md5::MD5Update $tok "MD5"
    % md5::Hex [md5::MD5Final $tok]
    8AAC1EE01E20BB347104FABB90310433

# REFERENCES

  1. Rivest, R., "The MD5 Message-Digest Algorithm", RFC 1321, MIT and RSA
     Data Security, Inc, April 1992.
     ([http://www.rfc-editor.org/rfc/rfc1321.txt](http://www.rfc-editor.org/rfc/rfc1321.txt))

Changes to embedded/md/tcllib/files/modules/md5crypt/md5crypt.md.

    % md5crypt::md5crypt password 01234567
    $1$01234567$b5lh2mHyD2PdJjFfALlEz1
    % md5crypt::aprcrypt password 01234567
    $apr1$01234567$IXBaQywhAhc0d75ZbaSDp/
    % md5crypt::md5crypt password [md5crypt::salt]
    $1$dFmvyRmO$T.V3OmzqeEf3hqJp2WFcb.

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs
and other problems. Please report such in the category *md5crypt* of the
[Tcllib Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report
any ideas for enhancements you may have for either package and/or
documentation.

Changes to embedded/md/tcllib/files/modules/mime/mime.md.

  This command returns a string containing the body of the leaf MIME part
  represented by *token* in canonical form.

  If the __-command__ option is present, then it is repeatedly invoked with a
  fragment of the body as this:

    uplevel #0 $callback [list "data" $fragment]

  (The __-blocksize__ option, if present, specifies the maximum size of each
  fragment passed to the callback.)

  When the end of the body is reached, the callback is invoked as:

    uplevel #0 $callback "end"

  Alternatively, if an error occurs, the callback is invoked as:

    uplevel #0 $callback [list "error" reason]

  Regardless, the return value of the final invocation of the callback is
  propagated upwards by __::mime::getbody__.

  If the __-command__ option is absent, then the return value of
  __::mime::getbody__ is a string containing the MIME part's entire body.

Changes to embedded/md/tcllib/files/modules/mime/smtp.md.
  __sasl__ depends on a lot of the cryptographic (secure) hashes, i.e. all of
  __[md5](../md5/md5.md)__, __[otp](../otp/otp.md)__,
  __[md4](../md4/md4.md)__, __[sha1](../sha1/sha1.md)__, and
  __[ripemd160](../ripemd/ripemd160.md)__.

# EXAMPLE

    proc send_simple_message {recipient email_server subject body} {
        package require smtp
        package require mime

        set token [mime::initialize -canonical text/plain \
            -string $body]
        mime::setheader $token Subject $subject
        smtp::sendmessage $token \
            -recipients $recipient -servers $email_server
        mime::finalize $token
    }

    send_simple_message [email protected].com localhost \
        "This is the subject." "This is the message."

# TLS Security Considerations

This package uses the __[TLS](../../../../index.md#tls)__ package to handle
the security for __https__ urls and other socket connections. Policy
decisions like the set of protocols to support and what ciphers to use

................................................................................

To handle this change the applications using
__[TLS](../../../../index.md#tls)__ must be patched, and not this package,
nor __[TLS](../../../../index.md#tls)__ itself. Such a patch may be as simple
as generally activating __tls1__ support, as shown in the example below.

    package require tls
    tls::init -tls1 1 ;# forcibly activate support for the TLS1 protocol

    ... your own application code ...

# REFERENCES

  1. Jonathan B. Postel, "SIMPLE MAIL TRANSFER PROTOCOL", RFC 821, August
     1982.
     ([http://www.rfc-editor.org/rfc/rfc821.txt](http://www.rfc-editor.org/rfc/rfc821.txt))

  1. J. Klensin, "Simple Mail Transfer Protocol", RFC 2821, April 2001.

Changes to embedded/md/tcllib/files/modules/multiplexer/multiplexer.md.

  It is possible to have different multiplexers running concurrently.

  - __::multiplexer::create__

    The __create__ command creates a new multiplexer 'instance'. For example:

        set mp [::multiplexer::create]

    This instance can then be manipulated like so:

        ${mp}::Init 35100

  - __${multiplexer_instance}::Init__ *port*

    This starts the multiplexer listening on the specified port.

  - __${multiplexer_instance}::Config__ *key* *value*

Changes to embedded/md/tcllib/files/modules/ncgi/ncgi.md.

# EXAMPLES

Uploading a file

HTML (the form markup itself was not captured in this view; the form provides
"Path:" and "Name:" fields):

TCL: upload.cgi

    #!/usr/local/bin/tclsh
    ::ncgi::parse
    set filedata [::ncgi::value filedata]
    set filedesc [::ncgi::value filedesc]

    puts " File uploaded at $filedesc "
    set filename /www/images/$filedesc

    set fh [open $filename w]
    puts -nonewline $fh $filedata
    close $fh

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs
and other problems. Please report such in the category *ncgi* of the [Tcllib
Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report any ideas
for enhancements you may have for either package and/or documentation.

Changes to embedded/md/tcllib/files/modules/nmea/nmea.md.

  - __::nmea::input__ *sentence*

    Processes and dispatches the supplied sentence. If *sentence* contains no
    commas it is treated as a Tcl list, otherwise it must be standard comma
    delimited NMEA data, with an optional checksum and leading __$__.

        nmea::input {$GPGSA,A,3,04,05,,09,12,,,24,,,,,2.5,1.3,2.1*39}
        nmea::input [list GPGSA A 3 04 05 09 12 "" "" 24 "" "" "" 2.5 1.3 2.1]

  - __::nmea::open_port__ *port* ?speed?

    Open the specified COM port and read NMEA sentences when available. Port
    speed is set to 4800bps by default or to *speed*.

  - __::nmea::close_port__

................................................................................

  EOF handler is invoked when End Of File is reached on the open file or
  port.

  The handler procedures, with the exception of the builtin types, must take
  exactly one argument, which is a list of the data values. The DEFAULT
  handler should have two arguments, the sentence type and the data values.
  The EOF handler has no arguments.

    nmea::event gpgsa parse_sat_detail
    nmea::event default handle_unknown

    proc parse_sat_detail {data} {
        puts [lindex $data 1]
    }
    proc handle_unknown {name data} {
        puts "unknown data type $name"
    }

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs
and other problems. Please report such in the category *nmea* of the [Tcllib
Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report any ideas
for enhancements you may have for either package and/or documentation.
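The optional checksum on an NMEA sentence is the XOR of every character between the leading `$` and the `*`, written as two hex digits. A small illustrative verifier (Python; not part of the Tcl nmea package):

```python
# Illustrative NMEA 0183 checksum verification (not part of the Tcl nmea
# package). The value after '*' is the XOR of all characters between the
# leading '$' and the '*', formatted as two uppercase hex digits.

def nmea_checksum_ok(sentence: str) -> bool:
    """Return True when the sentence's trailing checksum matches its payload."""
    payload, _, given = sentence.lstrip("$").partition("*")
    calc = 0
    for ch in payload:
        calc ^= ord(ch)
    return format(calc, "02X") == given.strip().upper()

print(nmea_checksum_ok("$GPGSA,A,3,04,05,,09,12,,,24,,,,,2.5,1.3,2.1*39"))  # True
```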

Changes to embedded/md/tcllib/files/modules/nntp/nntp.md.

  *msgid2*) are queried.

# EXAMPLE

A bigger example for posting a single article.

    package require nntp
    set n [nntp::nntp NNTP_SERVER]
    $n post "From: [email protected].EXT (USER_FULL)
    Path: COMPUTERNAME!USERNAME
    Newsgroups: alt.test
    Subject: Tcl test post -ignore
    Message-ID: <[pid][clock seconds] @COMPUTERNAME>
    Date: [clock format [clock seconds] -format "%a, %d %b %y %H:%M:%S GMT" -gmt true]

    Test message body"

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs
and other problems. Please report such in the category *nntp* of the [Tcllib

Changes to embedded/md/tcllib/files/modules/ntp/ntp_time.md.

  Wait for a query to complete and return the status upon completion.

  - __::time::cleanup__ *token*

    Remove all state variables associated with the request.

    % set tok [::time::gettime ntp2a.mcc.ac.uk]
    % set t [::time::unixtime $tok]
    % ::time::cleanup $tok

    % set tok [::time::getsntp pool.ntp.org]
    % set t [::time::unixtime $tok]
    % ::time::cleanup $tok

    proc on_time {token} {
        if {[time::status $token] eq "ok"} {
            puts [clock format [time::unixtime $token]]
        } else {
            puts [time::error $token]
        }
        time::cleanup $token
    }

    time::getsntp -command on_time pool.ntp.org

# AUTHORS

Pat Thoyts

# Bugs, Ideas, Feedback

Changes to embedded/md/tcllib/files/modules/oauth/oauth.md.
To handle this change the applications using
__[TLS](../../../../index.md#tls)__ must be patched, and not this package,
nor __[TLS](../../../../index.md#tls)__ itself. Such a patch may be as simple
as generally activating __tls1__ support, as shown in the example below.

    package require tls
    tls::init -tls1 1 ;# forcibly activate support for the TLS1 protocol

    ... your own application code ...

# Commands

  - __::oauth::config__

    When this command is invoked without arguments it returns a dictionary
    containing the current values of all options.

................................................................................

      * url *baseURL*

        This argument is the URI path to the OAuth API server. If you plan to
        send a GET query, you should provide a full path.

            HTTP GET
            ::oauth::header {https://api.twitter.com/1.1/users/lookup.json?screen_name=AbiertaMente}

      * url-encoded-string *postQuery*

        When you have to send a header in POST format, you have to put the
        query string into this argument.

            ::oauth::header {https://api.twitter.com/1.1/friendships/create.json} {user_id=158812437&follow=true}

  - __::oauth::query__ *baseURL* ?*postQuery*?

    This procedure will use the settings made with __::oauth::config__ to
    create the basic authentication and then send the command to the server
    API. It takes the same arguments as __::oauth::header__.

................................................................................

Here is an example of how it would work in Twitter. Do not forget to replace
the placeholder tokens and keys of the example with your own tokens and keys
when trying it out.

    % package require oauth
    % package require json
    % oauth::config -consumerkey {your_consumer_key} -consumersecret {your_consumer_key_secret} -accesstoken {your_access_token} -accesstokensecret {your_access_token_secret}
    % set response [oauth::query https://api.twitter.com/1.1/users/lookup.json?screen_name=AbiertaMente]
    % set jsondata [lindex $response 1]
    % set data [json::json2dict $jsondata]
    % set data [lindex $data 0]
    % dict for {key val} $data {puts "$key => $val"}
    id => 158812437
    id_str => 158812437
    name => Un Librepensador
    screen_name => AbiertaMente
    location => Explico mis tuits ahí →
    description => 160Caracteres para un SMS y contaba mi vida entera sin recortar vocales. Ahora en Twitter, podemos usar hasta 140 y a mí me sobrarían 20 para contaros todo lo q
    url => http://t.co/SGs3k9odBn
    entities => url {urls {{url http://t.co/SGs3k9odBn expanded_url http://librepensamiento.es display_url librepensamiento.es indices {0 22}}}} description {urls {}}
    protected => false
    followers_count => 72705
    friends_count => 53099
    listed_count => 258
    created_at => Wed Jun 23 18:29:58 +0000 2010
    favourites_count => 297
    utc_offset => 7200
    time_zone => Madrid
    geo_enabled => false
    verified => false
    statuses_count => 8996
    lang => es
    status => created_at {Sun Oct 12 08:02:38 +0000 2014} id 521209314087018496 id_str 521209314087018496 text {@thesamethanhim http://t.co/WFoXOAofCt} source {Twitter Web Client} truncated false in_reply_to_status_id 521076457490350081 in_reply_to_status_id_str 521076457490350081 in_reply_to_user_id 2282730867 in_reply_to_user_id_str 2282730867 in_reply_to_screen_name thesamethanhim geo null coordinates null place null contributors null retweet_count 0 favorite_count 0 entities {hashtags {} symbols {} urls {{url http://t.co/WFoXOAofCt expanded_url http://www.elmundo.es/internacional/2014/03/05/53173dc1268e3e3f238b458a.html display_url elmundo.es/internacional/… indices {16 38}}} user_mentions {{screen_name thesamethanhim name Ἑλένη id 2282730867 id_str 2282730867 indices {0 15}}}} favorited false retweeted false possibly_sensitive false lang und
    contributors_enabled => false
    is_translator => true
    is_translation_enabled => false
    profile_background_color => 709397
    profile_background_image_url => http://pbs.twimg.com/profile_background_images/704065051/9309c02aa2728bdf543505ddbd408e2e.jpeg
    profile_background_image_url_https => https://pbs.twimg.com/profile_background_images/704065051/9309c02aa2728bdf543505ddbd408e2e.jpeg
    profile_background_tile => true
    profile_image_url => http://pbs.twimg.com/profile_images/2629816665/8035fb81919b840c5cc149755d3d7b0b_normal.jpeg
    profile_image_url_https => https://pbs.twimg.com/profile_images/2629816665/8035fb81919b840c5cc149755d3d7b0b_normal.jpeg
    profile_banner_url => https://pbs.twimg.com/profile_banners/158812437/1400828874
    profile_link_color => FF3300
    profile_sidebar_border_color => FFFFFF
    profile_sidebar_fill_color => A0C5C7
    profile_text_color => 333333
    profile_use_background_image => true
    default_profile => false
    default_profile_image => false
    following => true
    follow_request_sent => false
    notifications => false

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs
and other problems. Please report such in the category *oauth* of the [Tcllib
Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report any ideas

Changes to embedded/md/tcllib/files/modules/oometa/oometa.md.

 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 ... 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 ... 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173  # DESCRIPTION The __oo::meta__ package provides a data registry service for TclOO classes\. # Usage oo::class create animal \{ meta set biodata animal: 1 \} oo::class create mammal \{ superclass animal meta set biodata mammal: 1 \} oo::class create cat \{ superclass mammal meta set biodata diet: carnivore \} cat create felix puts $felix meta dump biodata$ > animal: 1 mammal: 1 diet: carnivore felix meta set biodata likes: \{birds mice\} puts $felix meta get biodata$ > animal: 1 mammal: 1 diet: carnivore likes: \{bird mice\} \# Modify a class mammal meta set biodata metabolism: warm\-blooded puts $felix meta get biodata$ > animal: 1 mammal: 1 metabolism: warm\-blooded diet: carnivore likes: \{birds mice\} \# Overwrite class info felix meta set biodata mammal: yes puts $felix meta get biodata$ > animal: 1 mammal: yes metabolism: warm\-blooded diet: carnivore likes: \{birds mice\} # Concept The concept behind __oo::meta__ is that each class contributes a snippet of *local* data\. When __oo::meta::metadata__ is called, the system walks through the linear ancestry produced by __oo::meta::ancestors__, and recursively combines all of that local data for all of a class' ancestors into a ................................................................................ following: - __oo::meta::info branchget__ ?*key*? ?\.\.\.? Returns a dict representation of the element at *args*, but with any trailing : removed from field names\. 
::oo::meta::info $myclass set option color \{default: green widget: colorselect\} puts \[::oo::meta::info $myclass get option color\] > \{default: green widget: color\} puts \[::oo::meta::info $myclass branchget option color\] > \{default green widget color\} - __oo::meta::info branchset__ ?*key\.\.\.*? *key* *value* Merges *dict* with any other information contained at node ?*key\.\.\.*?, adding a trailing : to all field names\. ::oo::meta::info $myclass branchset option color \{default green widget colorselect\} puts \[::oo::meta::info $myclass get option color\] > \{default: green widget: color\} - __oo::meta::info dump__ *class* Returns the complete snapshot of a class's metadata, as produced by __oo::meta::metadata__ - __oo::meta::info__ *class* __is__ *type* ?*args*? Returns a boolean true or false if the element ?*args*? would match __string is__ *type* *value* ::oo::meta::info $myclass set constant mammal 1 puts \[::oo::meta::info $myclass is true constant mammal\] > 1 - __oo::meta::info__ *class* __[merge](\.\./\.\./\.\./\.\./index\.md\#merge)__ ?*dict*? ?*dict*? ?*\.\.\.*? Combines all of the arguments into a single dict, which is then stored as the new local representation for this class\. ................................................................................ - __oo::define meta__ The package injects a command __oo::define::meta__ which works to provide a class in the process of definition access to __oo::meta::info__, but without having to look the name up\. oo::define myclass \{ meta set foo bar: baz \} - __oo::class method meta__ The package injects a new method __meta__ into __oo::class__ which works to provide a class instance access to __oo::meta::info__\. - __oo::object method meta__   | < > | < > | < | > | | | | | | | | | | | | | | | | | | | | | < >  54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 ... 
107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 ... 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173  # DESCRIPTION The __oo::meta__ package provides a data registry service for TclOO classes\. # Usage oo::class create animal { meta set biodata animal: 1 } oo::class create mammal { superclass animal meta set biodata mammal: 1 } oo::class create cat { superclass mammal meta set biodata diet: carnivore } cat create felix puts [felix meta dump biodata] > animal: 1 mammal: 1 diet: carnivore felix meta set biodata likes: {birds mice} puts [felix meta get biodata] > animal: 1 mammal: 1 diet: carnivore likes: {bird mice} # Modify a class mammal meta set biodata metabolism: warm-blooded puts [felix meta get biodata] > animal: 1 mammal: 1 metabolism: warm-blooded diet: carnivore likes: {birds mice} # Overwrite class info felix meta set biodata mammal: yes puts [felix meta get biodata] > animal: 1 mammal: yes metabolism: warm-blooded diet: carnivore likes: {birds mice} # Concept The concept behind __oo::meta__ is that each class contributes a snippet of *local* data\. When __oo::meta::metadata__ is called, the system walks through the linear ancestry produced by __oo::meta::ancestors__, and recursively combines all of that local data for all of a class' ancestors into a ................................................................................ following: - __oo::meta::info branchget__ ?*key*? ?\.\.\.? Returns a dict representation of the element at *args*, but with any trailing : removed from field names\. ::oo::meta::info$myclass set option color {default: green widget: colorselect} puts [::oo::meta::info $myclass get option color] > {default: green widget: color} puts [::oo::meta::info$myclass branchget option color] > {default green widget color} - __oo::meta::info branchset__ ?*key\.\.\.*? 
*key* *value* Merges *dict* with any other information contained at node ?*key\.\.\.*?, adding a trailing : to all field names\. ::oo::meta::info $myclass branchset option color {default green widget colorselect} puts [::oo::meta::info $myclass get option color] > {default: green widget: color} - __oo::meta::info dump__ *class* Returns the complete snapshot of a class's metadata, as produced by __oo::meta::metadata__ - __oo::meta::info__ *class* __is__ *type* ?*args*? Returns a boolean true or false if the element ?*args*? would match __string is__ *type* *value* ::oo::meta::info $myclass set constant mammal 1 puts [::oo::meta::info $myclass is true constant mammal] > 1 - __oo::meta::info__ *class* __[merge](\.\./\.\./\.\./\.\./index\.md\#merge)__ ?*dict*? ?*dict*? ?*\.\.\.*? Combines all of the arguments into a single dict, which is then stored as the new local representation for this class\. ................................................................................ - __oo::define meta__ The package injects a command __oo::define::meta__ which works to provide a class in the process of definition access to __oo::meta::info__, but without having to look the name up\. oo::define myclass { meta set foo bar: baz } - __oo::class method meta__ The package injects a new method __meta__ into __oo::class__ which works to provide a class instance access to __oo::meta::info__\. - __oo::object method meta__ 
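The __branchset__/__branchget__ pairing documented above can be sketched as follows. This is a minimal, hypothetical session, assuming Tcllib's __oo::meta__ package is installed; the class name widgetlike exists only for illustration:

```tcl
# Hypothetical sketch: assumes Tcllib's oo::meta package is available.
package require oo::meta

oo::class create widgetlike {}

# branchset decorates each field name with a trailing : on storage ...
::oo::meta::info ::widgetlike branchset option color {default green widget colorselect}

# ... so a plain get shows the decorated field names ...
puts [::oo::meta::info ::widgetlike get option color]

# ... while branchget strips the trailing : from the field names again.
puts [::oo::meta::info ::widgetlike branchget option color]
```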

Changes to embedded/md/tcllib/files/modules/ooutil/ooutil.md.

 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 ... 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 ... 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 ... 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182   available to a user of the class and of derived classes\. Note: The command is equivalent to the command __typemethod__ provided by the OO package __[snit](\.\./snit/snit\.md)__ for the same purpose\. Example oo::class create ActiveRecord \{ classmethod find args \{ puts "\[self\] called with arguments: $args" \} \} oo::class create Table \{ superclass ActiveRecord \} puts \[Table find foo bar\] \# ====== \# which will write \# ====== \# ::Table called with arguments: foo bar - __classvariable__ ?*arg*\.\.\.? This command is available within instance methods\. It takes a series of variable names and makes them available in the method's scope\. The originating scope for the variables is the class \(instance\) the object instance belongs to\. In other words, the referenced variables are shared ................................................................................ Note: The command is roughly equivalent to the command __typevariable__ provided by the OO package __[snit](\.\./snit/snit\.md)__ for the same purpose\. The difference is that it cannot be used in the class definition itself\. Example: % oo::class create Foo \{ method bar \{z\} \{ classvariable x y return \[incr x $z\],\[incr y\] \} \} ::Foo % Foo create a ::a % Foo create b ::b % a bar 2 2,1 % a bar 3 5,2 % b bar 7 12,3 % b bar \-1 11,4 % a bar 0 11,5 - __link__ *method*\.\.\. - __link__ \{*alias* *method*\}\.\.\. ................................................................................ 
The alias name under which the method becomes available defaults to the method name, except where explicitly specified through an alias/method pair\. Examples: link foo \# The method foo is now directly accessible as foo instead of my foo\. link \{bar foo\} \# The method foo is now directly accessible as bar\. link a b c \# The methods a, b, and c all become directly accessible under their \# own names\. The main use of this command is expected to be in instance constructors, for convenience, or to set up some methods for use in a mini DSL\. - __ooutil::singleton__ ?*arg*\.\.\.? This command is a meta\-class, i\.e\. a variant of the builtin ................................................................................ __oo::class__ which ensures that it creates only a single instance of the classes defined with it\. Syntax and results are like for __oo::class__\. Example: % oo::class create example \{ self mixin singleton method foo \{\} \{self\} \} ::example % \[example new\] foo ::oo::Obj22 % \[example new\] foo ::oo::Obj22 # AUTHORS Donal Fellows, Andreas Kupries # Bugs, Ideas, Feedback   | | < > | < > | | | | | | | | < < > > | | | | | | | | < > | |  76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 ... 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 ... 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 ... 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182   available to a user of the class and of derived classes\. Note: The command is equivalent to the command __typemethod__ provided by the OO package __[snit](\.\./snit/snit\.md)__ for the same purpose\. 
Example oo::class create ActiveRecord { classmethod find args { puts "[self] called with arguments: $args" } } oo::class create Table { superclass ActiveRecord } puts [Table find foo bar] # ====== # which will write # ====== # ::Table called with arguments: foo bar - __classvariable__ ?*arg*\.\.\.? This command is available within instance methods\. It takes a series of variable names and makes them available in the method's scope\. The originating scope for the variables is the class \(instance\) the object instance belongs to\. In other words, the referenced variables are shared ................................................................................ Note: The command is roughly equivalent to the command __typevariable__ provided by the OO package __[snit](\.\./snit/snit\.md)__ for the same purpose\. The difference is that it cannot be used in the class definition itself\. Example: % oo::class create Foo { method bar {z} { classvariable x y return [incr x $z],[incr y] } } ::Foo % Foo create a ::a % Foo create b ::b % a bar 2 2,1 % a bar 3 5,2 % b bar 7 12,3 % b bar -1 11,4 % a bar 0 11,5 - __link__ *method*\.\.\. - __link__ \{*alias* *method*\}\.\.\. ................................................................................ The alias name under which the method becomes available defaults to the method name, except where explicitly specified through an alias/method pair\. Examples: link foo # The method foo is now directly accessible as foo instead of my foo. link {bar foo} # The method foo is now directly accessible as bar. link a b c # The methods a, b, and c all become directly accessible under their # own names. The main use of this command is expected to be in instance constructors, for convenience, or to set up some methods for use in a mini DSL\. - __ooutil::singleton__ ?*arg*\.\.\.? This command is a meta\-class, i\.e\. a variant of the builtin ................................................................................ 
__oo::class__ which ensures that it creates only a single instance of the classes defined with it\. Syntax and results are like for __oo::class__\. Example: % oo::class create example { self mixin singleton method foo {} {self} } ::example % [example new] foo ::oo::Obj22 % [example new] foo ::oo::Obj22 # AUTHORS Donal Fellows, Andreas Kupries # Bugs, Ideas, Feedback  Changes to embedded/md/tcllib/files/modules/otp/otp.md.  69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87   - __::otp::otp\-sha1__ ?*\-hex*? ?*\-words*? *\-seed seed* *\-count count* *data* - __::otp::otp\-rmd160__ ?*\-hex*? ?*\-words*? *\-seed seed* *\-count count* *data* # EXAMPLES % otp::otp\-md5 \-count 99 \-seed host67821 "My Secret Pass Phrase" $$binary gibberish$$ % otp::otp\-md5 \-words \-count 99 \-seed host67821 "My Secret Pass Phrase" SOON ARAB BURG LIMB FILE WAD % otp::otp\-md5 \-hex \-count 99 \-seed host67821 "My Secret Pass Phrase" e249b58257c80087 # REFERENCES 1. Haller, N\. et al\., "A One\-Time Password System", RFC 2289, February 1998\. [http://www\.rfc\-editor\.org/rfc/rfc2289\.txt](http://www\.rfc\-editor\.org/rfc/rfc2289\.txt)   | | | |  69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87   - __::otp::otp\-sha1__ ?*\-hex*? ?*\-words*? *\-seed seed* *\-count count* *data* - __::otp::otp\-rmd160__ ?*\-hex*? ?*\-words*? *\-seed seed* *\-count count* *data* # EXAMPLES % otp::otp-md5 -count 99 -seed host67821 "My Secret Pass Phrase" (binary gibberish) % otp::otp-md5 -words -count 99 -seed host67821 "My Secret Pass Phrase" SOON ARAB BURG LIMB FILE WAD % otp::otp-md5 -hex -count 99 -seed host67821 "My Secret Pass Phrase" e249b58257c80087 # REFERENCES 1. Haller, N\. et al\., "A One\-Time Password System", RFC 2289, February 1998\. [http://www\.rfc\-editor\.org/rfc/rfc2289\.txt](http://www\.rfc\-editor\.org/rfc/rfc2289\.txt)  Changes to embedded/md/tcllib/files/modules/page/page_util_peg.md.  74 75 76 77 78 79 80 81 82 83 84 85 86 87 88   more users\. 
A used by B and C, B is reachable, C is not, so A now loses the node in the expression for C calling it, or rather, not calling it anymore\. This command updates the cross\-references and which nonterminals are now undefined\. - __::page::util::peg::flatten__ *treequery* *tree* This commands flattens nested sequence and choice operators in the AST   |  74 75 76 77 78 79 80 81 82 83 84 85 86 87 88   more users\. A used by B and C, B is reachable, C is not, so A now loses the node in the expression for C calling it, or rather, not calling it anymore. This command updates the cross\-references and which nonterminals are now undefined\. - __::page::util::peg::flatten__ *treequery* *tree* This commands flattens nested sequence and choice operators in the AST  Changes to embedded/md/tcllib/files/modules/pluginmgr/pluginmgr.md.  154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182   $$underscore for environment variables, backslash for registry entries, and / for directories$$\. Examples: ::pluginmgr::paths ::obj docidx => env DOCIDX\_PLUGINS reg HKEY\_LOCAL\_MACHINE\\SOFTWARE\\docidx\\PLUGINS reg HKEY\_CURRENT\_USER\\SOFTWARE\\docidx\\PLUGINS path ~/\.docidx/plugins ::pluginmgr::paths ::obj doctools::idx => env DOCTOOLS\_PLUGINS env DOCTOOLS\_IDX\_PLUGINS reg HKEY\_LOCAL\_MACHINE\\SOFTWARE\\doctools\\PLUGINS reg HKEY\_LOCAL\_MACHINE\\SOFTWARE\\doctools\\idx\\PLUGINS reg HKEY\_CURRENT\_USER\\SOFTWARE\\doctools\\PLUGINS reg HKEY\_CURRENT\_USER\\SOFTWARE\\doctools\\idx\\PLUGINS path ~/\.doctools/plugin path ~/\.doctools/idx/plugin ## OBJECT COMMAND All commands created by the command __::pluginmgr__ $$See section [PACKAGE COMMANDS](#subsection1)$$ have the following general form and may be used to invoke various operations on their plugin manager object\.   
| | | | | | | | | | | |  154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182   $$underscore for environment variables, backslash for registry entries, and / for directories$$\. Examples: ::pluginmgr::paths ::obj docidx => env DOCIDX_PLUGINS reg HKEY_LOCAL_MACHINE\SOFTWARE\docidx\PLUGINS reg HKEY_CURRENT_USER\SOFTWARE\docidx\PLUGINS path ~/.docidx/plugins ::pluginmgr::paths ::obj doctools::idx => env DOCTOOLS_PLUGINS env DOCTOOLS_IDX_PLUGINS reg HKEY_LOCAL_MACHINE\SOFTWARE\doctools\PLUGINS reg HKEY_LOCAL_MACHINE\SOFTWARE\doctools\idx\PLUGINS reg HKEY_CURRENT_USER\SOFTWARE\doctools\PLUGINS reg HKEY_CURRENT_USER\SOFTWARE\doctools\idx\PLUGINS path ~/.doctools/plugin path ~/.doctools/idx/plugin ## OBJECT COMMAND All commands created by the command __::pluginmgr__ $$See section [PACKAGE COMMANDS](#subsection1)$$ have the following general form and may be used to invoke various operations on their plugin manager object\.  Changes to embedded/md/tcllib/files/modules/pop3/pop3.md.  78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 ... 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289  To handle this change the applications using __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ must be patched, and not this package, nor __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ itself\. Such a patch may be as simple as generally activating __tls1__ support, as shown in the example below\. package require tls tls::init \-tls1 1 ;\# forcibly activate support for the TLS1 protocol \.\.\. your own application code \.\.\. # API - __::pop3::open__ ?__\-msex__ 0|1? ?__\-retr\-mode__ retr|list|slow? ?__\-socketcmd__ cmdprefix? ?__\-stls__ 0|1? ?__\-tls\-callback__ stls\-callback\-command? *host username password* ?*port*? 
Open a socket connection to the server specified by *host*, transmit the *username* and *password* as login information to the server\. The ................................................................................ __\-socketcmd__ or the option __\-stls__ of the command __pop3::open__\. The first method, option __\-socketcmd__, will force the use of the __tls::socket__ command when opening the connection\. This is suitable for POP3 servers which expect SSL connections only\. These will generally be listening on port 995\. package require tls tls::init \-cafile /path/to/ca/cert \-keyfile \.\.\. \# Create secured pop3 channel pop3::open \-socketcmd tls::socket \\\\$thehost $theuser$thepassword \.\.\. The second method, option __\-stls__, will connect to the standard POP3 port and then perform an STARTTLS handshake\. This will only work for POP3 servers which have this capability\. The package will confirm that the server supports STARTTLS and the handshake was performed correctly before proceeding with authentication\. package require tls tls::init \-cafile /path/to/ca/cert \-keyfile \.\.\. \# Create secured pop3 channel pop3::open \-stls 1 \\\\ $thehost$theuser $thepassword \.\.\. # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *pop3* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.   | | | | | | | | | |  78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 ... 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289  To handle this change the applications using __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ must be patched, and not this package, nor __[TLS](\.\./\.\./\.\./\.\./index\.md\#tls)__ itself\. 
Such a patch may be as simple as generally activating __tls1__ support, as shown in the example below\. package require tls tls::init -tls1 1 ;# forcibly activate support for the TLS1 protocol ... your own application code ... # API - __::pop3::open__ ?__\-msex__ 0|1? ?__\-retr\-mode__ retr|list|slow? ?__\-socketcmd__ cmdprefix? ?__\-stls__ 0|1? ?__\-tls\-callback__ stls\-callback\-command? *host username password* ?*port*? Open a socket connection to the server specified by *host*, transmit the *username* and *password* as login information to the server\. The ................................................................................ __\-socketcmd__ or the option __\-stls__ of the command __pop3::open__\. The first method, option __\-socketcmd__, will force the use of the __tls::socket__ command when opening the connection\. This is suitable for POP3 servers which expect SSL connections only\. These will generally be listening on port 995\. package require tls tls::init -cafile /path/to/ca/cert -keyfile ... # Create secured pop3 channel pop3::open -socketcmd tls::socket \\ $thehost $theuser $thepassword ... The second method, option __\-stls__, will connect to the standard POP3 port and then perform an STARTTLS handshake\. This will only work for POP3 servers which have this capability\. The package will confirm that the server supports STARTTLS and the handshake was performed correctly before proceeding with authentication\. package require tls tls::init -cafile /path/to/ca/cert -keyfile ... # Create secured pop3 channel pop3::open -stls 1 \\ $thehost $theuser $thepassword ... # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *pop3* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.  Changes to embedded/md/tcllib/files/modules/pop3d/pop3d.md. 
 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276  The option __\-socket__ $$see [Options](#section2)$$ enables users of the package to override how the server opens its listening socket\. The envisioned main use is the specification of the __tls::socket__ command, see package __[tls](\.\./\.\./\.\./\.\./index\.md\#tls)__, to secure the communication\. package require tls tls::init \\\\ \.\.\. pop3d::new S \-socket tls::socket \.\.\. # References 1. [RFC 1939](http://www\.rfc\-editor\.org/rfc/rfc1939\.txt) 1. [RFC 2449](http://www\.rfc\-editor\.org/rfc/rfc2449\.txt)   | | | |  258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276  The option __\-socket__ $$see [Options](#section2)$$ enables users of the package to override how the server opens its listening socket\. The envisioned main use is the specification of the __tls::socket__ command, see package __[tls](\.\./\.\./\.\./\.\./index\.md\#tls)__, to secure the communication\. package require tls tls::init \\ ... pop3d::new S -socket tls::socket ... # References 1. [RFC 1939](http://www\.rfc\-editor\.org/rfc/rfc1939\.txt) 1. [RFC 2449](http://www\.rfc\-editor\.org/rfc/rfc2449\.txt)  Changes to embedded/md/tcllib/files/modules/pt/pt_astree.md.  227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275   1. The string representation of the value is the canonical representation of a pure Tcl list\. I\.e\. it does not contain superfluous whitespace\. ## Example Assuming the parsing expression grammar below PEG calculator $$Expression$$ Digit <\- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <\- '\-' / '\+' ; Number <\- Sign? 
Digit\+ ; Expression <\- Term $$AddOp Term$$\* ; MulOp <\- '\*' / '/' ; Term <\- Factor $$MulOp Factor$$\* ; AddOp <\- '\+'/'\-' ; Factor <\- '$$' Expression '$$' / Number ; END; and the input string 120\+5 then a parser should deliver the abstract syntax tree below $$except for whitespace$$ set ast \{Expression 0 4 \{Factor 0 4 \{Term 0 2 \{Number 0 2 \{Digit 0 0\} \{Digit 1 1\} \{Digit 2 2\} \} \} \{AddOp 3 3\} \{Term 4 4 \{Number 4 4 \{Digit 4 4\} \} \} \} \} Or, more graphical ![](\.\./\.\./\.\./\.\./image/expr\_ast\.png) # Bugs, Ideas, Feedback   | | | | | | | | | | | | | | | | | < < > > | | | | < < < < > > > >  227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275   1. The string representation of the value is the canonical representation of a pure Tcl list\. I\.e\. it does not contain superfluous whitespace\. ## Example Assuming the parsing expression grammar below PEG calculator (Expression) Digit <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <- '-' / '+' ; Number <- Sign? Digit+ ; Expression <- Term (AddOp Term)* ; MulOp <- '*' / '/' ; Term <- Factor (MulOp Factor)* ; AddOp <- '+'/'-' ; Factor <- '(' Expression ')' / Number ; END; and the input string 120+5 then a parser should deliver the abstract syntax tree below $$except for whitespace$$ set ast {Expression 0 4 {Factor 0 4 {Term 0 2 {Number 0 2 {Digit 0 0} {Digit 1 1} {Digit 2 2} } } {AddOp 3 3} {Term 4 4 {Number 4 4 {Digit 4 4} } } } } Or, more graphical ![](\.\./\.\./\.\./\.\./image/expr\_ast\.png) # Bugs, Ideas, Feedback  Changes to embedded/md/tcllib/files/modules/pt/pt_from_api.md.  185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 ... 
296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 ... 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473   the plugin in a state where another usage cycle can be run without problems\. # Usage To use a converter do \# Get the converter $$single command here, not class$$ package require the\-converter\-package \# Perform the conversion set serial $theconverter convert thegrammartext$ \.\.\. process the result \.\.\. To use a plugin __FOO__ do \# Get an import plugin manager package require pt::peg::import pt::peg::import I \# Run the plugin, and the converter inside\. set serial $I import serial thegrammartext FOO$ \.\.\. process the result \.\.\. # PEG serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expression Grammars as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a PEG ................................................................................ 1. The string representation of the value is the canonical representation of a Tcl dictionary\. I\.e\. it does not contain superfluous whitespace\. ## Example Assuming the following PEG for simple mathematical expressions PEG calculator $$Expression$$ Digit <\- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <\- '\-' / '\+' ; Number <\- Sign? 
Digit\+ ; Expression <\- Term $$AddOp Term$$\* ; MulOp <\- '\*' / '/' ; Term <\- Factor $$MulOp Factor$$\* ; AddOp <\- '\+'/'\-' ; Factor <\- '$$' Expression '$$' / Number ; END; then its canonical serialization $$except for whitespace$$ is pt::grammar::peg \{ rules \{ AddOp \{is \{/ \{t \-\} \{t \+\}\} mode value\} Digit \{is \{/ \{t 0\} \{t 1\} \{t 2\} \{t 3\} \{t 4\} \{t 5\} \{t 6\} \{t 7\} \{t 8\} \{t 9\}\} mode value\} Expression \{is \{x \{n Term\} \{\* \{x \{n AddOp\} \{n Term\}\}\}\} mode value\} Factor \{is \{/ \{x \{t $$\} \{n Expression\} \{t$$\}\} \{n Number\}\} mode value\} MulOp \{is \{/ \{t \*\} \{t /\}\} mode value\} Number \{is \{x \{? \{n Sign\}\} \{\+ \{n Digit\}\}\} mode value\} Sign \{is \{/ \{t \-\} \{t \+\}\} mode value\} Term \{is \{x \{n Factor\} \{\* \{x \{n MulOp\} \{n Factor\}\}\}\} mode value\} \} start \{n Expression\} \} # PE serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expressions as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a ................................................................................ 1. Terminals are *not* encoded as ranges $$where start and end of the range are identical$$\. ## Example Assuming the parsing expression shown on the right\-hand side of the rule Expression <\- Term $$AddOp Term$$\* then its canonical serialization $$except for whitespace$$ is \{x \{n Term\} \{\* \{x \{n AddOp\} \{n Term\}\}\}\} # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *pt* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.   
| | | | | | | | | | | | | | | | | | | | | | | | | | | | < > | < > | |  185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 ... 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 ... 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473   the plugin in a state where another usage cycle can be run without problems\. # Usage To use a converter do # Get the converter (single command here, not class) package require the-converter-package # Perform the conversion set serial [theconverter convert$thegrammartext] ... process the result ... To use a plugin __FOO__ do # Get an import plugin manager package require pt::peg::import pt::peg::import I # Run the plugin, and the converter inside. set serial [I import serial $thegrammartext FOO] ... process the result ... # PEG serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expression Grammars as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a PEG ................................................................................ 1. The string representation of the value is the canonical representation of a Tcl dictionary\. I\.e\. it does not contain superfluous whitespace\. ## Example Assuming the following PEG for simple mathematical expressions PEG calculator (Expression) Digit <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <- '-' / '+' ; Number <- Sign? 
Digit+ ; Expression <- Term (AddOp Term)* ; MulOp <- '*' / '/' ; Term <- Factor (MulOp Factor)* ; AddOp <- '+'/'-' ; Factor <- '(' Expression ')' / Number ; END; then its canonical serialization $$except for whitespace$$ is pt::grammar::peg { rules { AddOp {is {/ {t -} {t +}} mode value} Digit {is {/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}} mode value} Expression {is {x {n Term} {* {x {n AddOp} {n Term}}}} mode value} Factor {is {/ {x {t (} {n Expression} {t )}} {n Number}} mode value} MulOp {is {/ {t *} {t /}} mode value} Number {is {x {? {n Sign}} {+ {n Digit}}} mode value} Sign {is {/ {t -} {t +}} mode value} Term {is {x {n Factor} {* {x {n MulOp} {n Factor}}}} mode value} } start {n Expression} } # PE serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expressions as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a ................................................................................ 1. Terminals are *not* encoded as ranges $$where start and end of the range are identical$$\. ## Example Assuming the parsing expression shown on the right\-hand side of the rule Expression <- Term (AddOp Term)* then its canonical serialization $$except for whitespace$$ is {x {n Term} {* {x {n AddOp} {n Term}}}} # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *pt* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.  Changes to embedded/md/tcllib/files/modules/pt/pt_json_language.md.  
Changes to embedded/md/tcllib/files/modules/pt/pt_json_language.md

In the regenerated document the markdown special characters inside the
verbatim blocks are no longer backslash-quoted: for example
`Sign <\- '\-' / '\+' ;` now reads `Sign <- '-' / '+' ;`, and `\{ ... \}`
becomes plain `{ ... }`. The affected passages now read:

themselves are not translated further, but kept as JSON strings containing a
nested Tcl list, and there is no concept of canonicity for the JSON either.

## Example

Assuming the following PEG for simple mathematical expressions

    PEG calculator (Expression)
        Digit      <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ;
        Sign       <- '-' / '+' ;
        Number     <- Sign? Digit+ ;
        Expression <- Term (AddOp Term)* ;
        MulOp      <- '*' / '/' ;
        Term       <- Factor (MulOp Factor)* ;
        AddOp      <- '+'/'-' ;
        Factor     <- '(' Expression ')' / Number ;
    END;

a JSON serialization for it is

    {
        "pt::grammar::peg" : {
            "rules" : {
                "AddOp"      : {"is" : "\/ {t -} {t +}",                "mode" : "value"},
                "Digit"      : {"is" : "\/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}", "mode" : "value"},
                "Expression" : {"is" : "\/ {x {t (} {n Expression} {t )}} {x {n Factor} {* {x {n MulOp} {n Factor}}}}", "mode" : "value"},
                "Factor"     : {"is" : "x {n Term} {* {x {n AddOp} {n Term}}}", "mode" : "value"},
                "MulOp"      : {"is" : "\/ {t *} {t \/}",               "mode" : "value"},
                "Number"     : {"is" : "x {? {n Sign}} {+ {n Digit}}",  "mode" : "value"},
                "Sign"       : {"is" : "\/ {t -} {t +}",                "mode" : "value"},
                "Term"       : {"is" : "n Number",                      "mode" : "value"}
            },
            "start" : "n Expression"
        }
    }

and a Tcl serialization of the same is

    pt::grammar::peg {
        rules {
            AddOp      {is {/ {t -} {t +}} mode value}
            Digit      {is {/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}} mode value}
            Expression {is {x {n Term} {* {x {n AddOp} {n Term}}}} mode value}
            Factor     {is {/ {x {t (} {n Expression} {t )}} {n Number}} mode value}
            MulOp      {is {/ {t *} {t /}} mode value}
            Number     {is {x {? {n Sign}} {+ {n Digit}}} mode value}
            Sign       {is {/ {t -} {t +}} mode value}
            Term       {is {x {n Factor} {* {x {n MulOp} {n Factor}}}} mode value}
        }
        start {n Expression}
    }

The similarity of the latter to the JSON should be quite obvious.

# PEG serialization format

Here we specify the format used by the Parser Tools to serialize Parsing
Expression Grammars as immutable values for transport, comparison, etc.

[...]

1. The string representation of the value is the canonical representation of
   a Tcl dictionary. I.e. it does not contain superfluous whitespace.

## Example

The example repeats the calculator PEG and its canonical `pt::grammar::peg`
serialization shown above, followed by the "PE serialization format" and
"Bugs, Ideas, Feedback" sections; these are unchanged apart from the same
unquoting, e.g. the PE example now reads
`{x {n Term} {* {x {n AddOp} {n Term}}}}`.
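Because the grammar serialization for pt_json_language.md is plain JSON, any
standard JSON parser can read it; the rule bodies remain strings holding the
nested Tcl lists, exactly as the text notes. A hedged sketch (the abbreviated
document below is an assumption for illustration, not the full serialization):

```python
import json

# A two-rule excerpt in the same shape as the serialization shown above.
doc = '''
{
  "pt::grammar::peg" : {
    "rules" : {
      "AddOp" : {"is" : "\\/ {t -} {t +}", "mode" : "value"},
      "Term"  : {"is" : "n Number",        "mode" : "value"}
    },
    "start" : "n Expression"
  }
}
'''

grammar = json.loads(doc)["pt::grammar::peg"]
print(sorted(grammar["rules"]))   # ['AddOp', 'Term']
print(grammar["start"])           # n Expression
```

Note that the `\/` escape used in the serialized rule bodies is legal JSON and
decodes to a plain `/`, so no special handling is needed.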
Changes to embedded/md/tcllib/files/modules/pt/pt_param.md

__[pt::rde](pt_rdengine.md)__, is not only coded in Tcl, but also relies on
Tcl commands to provide it with control flow (instructions).

# Interaction of the Instructions with the Architectural State

    Instruction             Inputs                  Outputs
    ======================= ======================= ====================
    ast_pop_discard         AS                      -> AS
    ast_pop_rewind          AS                      -> AS, ARS
    ast_push                ARS, AS                 -> AS
    ast_value_push          SV, ARS                 -> ARS
    ======================= ======================= ====================
    error_clear             -                       -> ER
    error_nonterminal sym   ER, LS                  -> ER
    error_pop_merge         ES, ER                  -> ER
    error_push              ES, ER                  -> ES
    ======================= ======================= ====================
    input_next msg          IN                      -> TC, CL, CC, ST, ER
    ======================= ======================= ====================
    loc_pop_discard         LS                      -> LS
    loc_pop_rewind          LS                      -> LS, CL
    loc_push                CL, LS                  -> LS
    ======================= ======================= ====================
    status_fail             -                       -> ST
    status_negate           ST                      -> ST
    status_ok               -                       -> ST
    ======================= ======================= ====================
    symbol_restore sym      NC                      -> CL, ST, ER, SV
    symbol_save sym         CL, ST, ER, SV, LS      -> NC
    ======================= ======================= ====================
    test_alnum              CC                      -> ST, ER
    test_alpha              CC                      -> ST, ER
    test_ascii              CC                      -> ST, ER
    test_char char          CC                      -> ST, ER
    test_ddigit             CC                      -> ST, ER
    test_digit              CC                      -> ST, ER
    test_graph              CC                      -> ST, ER
    test_lower              CC                      -> ST, ER
    test_print              CC                      -> ST, ER
    test_punct              CC                      -> ST, ER
    test_range chars chare  CC                      -> ST, ER
    test_space              CC                      -> ST, ER
    test_upper              CC                      -> ST, ER
    test_wordchar           CC                      -> ST, ER
    test_xdigit             CC                      -> ST, ER
    ======================= ======================= ====================
    value_clear             -                       -> SV
    value_leaf symbol       LS, CL                  -> SV
    value_reduce symbol     ARS, LS, CL             -> SV
    ======================= ======================= ====================

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and
other problems. Please report such in the category *pt* of the
[Tcllib Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report any
ideas for enhancements you may have for either package and/or documentation.
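The instruction/state table above is regular enough to be treated as data, for
example when cross-checking an implementation of the PARAM against the
documented register usage. A hypothetical sketch (the dictionary below encodes
only a few representative rows; names and structure are assumptions, not part
of Tcllib):

```python
# Register names (AS, ARS, SV, ER, IN, ST, ...) are those of the PARAM
# architecture described above; (inputs, outputs) per instruction.
PARAM_IO = {
    "ast_pop_discard": (["AS"], ["AS"]),
    "ast_value_push":  (["SV", "ARS"], ["ARS"]),
    "input_next":      (["IN"], ["TC", "CL", "CC", "ST", "ER"]),
    "status_negate":   (["ST"], ["ST"]),
    "value_reduce":    (["ARS", "LS", "CL"], ["SV"]),
}

def touches(register):
    """All encoded instructions that read or write the given register."""
    return sorted(name for name, (ins, outs) in PARAM_IO.items()
                  if register in ins or register in outs)

print(touches("SV"))  # ['ast_value_push', 'value_reduce']
```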
Changes to embedded/md/tcllib/files/modules/pt/pt_parser_api.md

This method runs the parser using the string in *text* as input. In all other
ways it behaves like the method __parse__, shown above.

# Usage

A generated parser is used like this

    package require the-parser-package ;# Generated by result-formats 'critcl', 'snit' or 'oo' of 'pt'.
    set parser [the-parser-class]
    set ast    [$parser parse $channel]
    ... process the abstract syntax tree ...

When using a grammar interpreter for parsing some differences creep in

    package require the-grammar-package ;# Generated by result-format 'container' of 'pt'.
    set grammar [the-grammar-class]
    package require pt::peg::interp
    set parser  [pt::peg::interp]
    $parser use $grammar
    set ast     [$parser parse $channel]
    $parser destroy
    ... process the abstract syntax tree ...

# AST serialization format

Here we specify the format used by the Parser Tools to serialize Abstract
Syntax Trees (ASTs) as immutable values for transport, comparison, etc. Each
node in an AST represents a nonterminal symbol of a grammar, and the range

[...]

1. The string representation of the value is the canonical representation of
   a pure Tcl list. I.e. it does not contain superfluous whitespace.

## Example

Assuming the parsing expression grammar below

    PEG calculator (Expression)
        Digit      <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ;
        Sign       <- '-' / '+' ;
        Number     <- Sign? Digit+ ;
        Expression <- Term (AddOp Term)* ;
        MulOp      <- '*' / '/' ;
        Term       <- Factor (MulOp Factor)* ;
        AddOp      <- '+'/'-' ;
        Factor     <- '(' Expression ')' / Number ;
    END;

and the input string

    120+5

then a parser should deliver the abstract syntax tree below (except for
whitespace)

    set ast {Expression 0 4
        {Factor 0 4
            {Term 0 2
                {Number 0 2
                    {Digit 0 0}
                    {Digit 1 1}
                    {Digit 2 2}
                }
            }
            {AddOp 3 3}
            {Term 4 4
                {Number 4 4
                    {Digit 4 4}
                }
            }
        }
    }

Or, more graphical

![](../../../../image/expr_ast.png)

# PE serialization format

[...]

1. Terminals are *not* encoded as ranges (where start and end of the range
   are identical).

## Example

Assuming the parsing expression shown on the right-hand side of the rule

    Expression <- Term (AddOp Term)*

then its canonical serialization (except for whitespace) is

    {x {n Term} {* {x {n AddOp} {n Term}}}}

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and
other problems. Please report such in the category *pt* of the
[Tcllib Trackers](http://core.tcl.tk/tcllib/reportlist). Please also report any
ideas for enhancements you may have for either package and/or documentation.
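The AST serialization used in the pt_parser_api example above is a nested
Tcl brace list: symbol, start offset, end offset, then child nodes. Outside of
Tcl it can be read with a small recursive tokenizer; a sketch (an assumption
for illustration, not Tcllib code, and handling only the brace/word subset
these serializations use):

```python
def parse_braces(s):
    """Parse a Tcl-style brace list into nested Python lists of tokens."""
    def walk(i):
        items, token = [], ""
        while i < len(s):
            c = s[i]
            if c == "{":
                sub, i = walk(i + 1)   # i now points at the matching }
                items.append(sub)
            elif c == "}":
                if token:
                    items.append(token)
                return items, i
            elif c.isspace():
                if token:
                    items.append(token)
                    token = ""
            else:
                token += c
            i += 1
        if token:
            items.append(token)
        return items, i
    return walk(0)[0]

# A trimmed version of the AST for input "120+5" shown above.
ast = parse_braces("Expression 0 4 {AddOp 3 3} {Term 4 4 {Number 4 4}}")
print(ast[0], ast[1], ast[2])  # Expression 0 4
print(ast[3])                  # ['AddOp', '3', '3']
```

Each node comes back as `[symbol, start, end, child...]`, mirroring the
structure the serialization format describes.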
This operation is in effect equivalent to *objectName* __deserialize =__ $*source* __serialize__$ - *objectName* __\-\->__ *destination* This method assigns our contents to the PEG object *destination*, overwriting the existing definition\. This is the reverse assignment operator for grammars\. This operation is in effect equivalent to *destination* __deserialize =__ $*objectName* __serialize__$ - *objectName* __serialize__ ?*format*? This method returns our grammar in some textual form usable for transfer, persistent storage, etc\. If no *format* is not specified the returned result is the canonical serialization of the grammar, as specified in the section [PEG serialization format](#section2)\. ................................................................................ 1. The string representation of the value is the canonical representation of a Tcl dictionary\. I\.e\. it does not contain superfluous whitespace\. ## Example Assuming the following PEG for simple mathematical expressions PEG calculator $$Expression$$ Digit <\- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <\- '\-' / '\+' ; Number <\- Sign? Digit\+ ; Expression <\- Term $$AddOp Term$$\* ; MulOp <\- '\*' / '/' ; Term <\- Factor $$MulOp Factor$$\* ; AddOp <\- '\+'/'\-' ; Factor <\- '$$' Expression '$$' / Number ; END; then its canonical serialization $$except for whitespace$$ is pt::grammar::peg \{ rules \{ AddOp \{is \{/ \{t \-\} \{t \+\}\} mode value\} Digit \{is \{/ \{t 0\} \{t 1\} \{t 2\} \{t 3\} \{t 4\} \{t 5\} \{t 6\} \{t 7\} \{t 8\} \{t 9\}\} mode value\} Expression \{is \{x \{n Term\} \{\* \{x \{n AddOp\} \{n Term\}\}\}\} mode value\} Factor \{is \{/ \{x \{t $$\} \{n Expression\} \{t$$\}\} \{n Number\}\} mode value\} MulOp \{is \{/ \{t \*\} \{t /\}\} mode value\} Number \{is \{x \{? 
\{n Sign\}\} \{\+ \{n Digit\}\}\} mode value\} Sign \{is \{/ \{t \-\} \{t \+\}\} mode value\} Term \{is \{x \{n Factor\} \{\* \{x \{n MulOp\} \{n Factor\}\}\}\} mode value\} \} start \{n Expression\} \} # PE serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expressions as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a ................................................................................ 1. Terminals are *not* encoded as ranges $$where start and end of the range are identical$$\. ## Example Assuming the parsing expression shown on the right\-hand side of the rule Expression <\- Term $$AddOp Term$$\* then its canonical serialization $$except for whitespace$$ is \{x \{n Term\} \{\* \{x \{n AddOp\} \{n Term\}\}\}\} # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *pt* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.   | | | | | | | | | | | | | | | | | | | | | < > | < > | |  187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 ... 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 ... 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648   This method assigns the contents of the PEG object *source* to ourselves, overwriting the existing definition\. This is the assignment operator for grammars\. This operation is in effect equivalent to *objectName* __deserialize =__ [*source* __serialize__] - *objectName* __\-\->__ *destination* This method assigns our contents to the PEG object *destination*, overwriting the existing definition\. 
This is the reverse assignment operator for grammars\. This operation is in effect equivalent to *destination* __deserialize =__ [*objectName* __serialize__] - *objectName* __serialize__ ?*format*? This method returns our grammar in some textual form usable for transfer, persistent storage, etc\. If no *format* is not specified the returned result is the canonical serialization of the grammar, as specified in the section [PEG serialization format](#section2)\. ................................................................................ 1. The string representation of the value is the canonical representation of a Tcl dictionary\. I\.e\. it does not contain superfluous whitespace\. ## Example Assuming the following PEG for simple mathematical expressions PEG calculator (Expression) Digit <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <- '-' / '+' ; Number <- Sign? Digit+ ; Expression <- Term (AddOp Term)* ; MulOp <- '*' / '/' ; Term <- Factor (MulOp Factor)* ; AddOp <- '+'/'-' ; Factor <- '(' Expression ')' / Number ; END; then its canonical serialization $$except for whitespace$$ is pt::grammar::peg { rules { AddOp {is {/ {t -} {t +}} mode value} Digit {is {/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}} mode value} Expression {is {x {n Term} {* {x {n AddOp} {n Term}}}} mode value} Factor {is {/ {x {t (} {n Expression} {t )}} {n Number}} mode value} MulOp {is {/ {t *} {t /}} mode value} Number {is {x {? {n Sign}} {+ {n Digit}}} mode value} Sign {is {/ {t -} {t +}} mode value} Term {is {x {n Factor} {* {x {n MulOp} {n Factor}}}} mode value} } start {n Expression} } # PE serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expressions as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a ................................................................................ 1. 
Terminals are *not* encoded as ranges $$where start and end of the range are identical$$\. ## Example Assuming the parsing expression shown on the right\-hand side of the rule Expression <- Term (AddOp Term)* then its canonical serialization $$except for whitespace$$ is {x {n Term} {* {x {n AddOp} {n Term}}}} # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *pt* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.  Changes to embedded/md/tcllib/files/modules/pt/pt_peg_export.md.  314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 ... 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491   1. The string representation of the value is the canonical representation of a Tcl dictionary\. I\.e\. it does not contain superfluous whitespace\. ## Example Assuming the following PEG for simple mathematical expressions PEG calculator $$Expression$$ Digit <\- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <\- '\-' / '\+' ; Number <\- Sign? Digit\+ ; Expression <\- Term $$AddOp Term$$\* ; MulOp <\- '\*' / '/' ; Term <\- Factor $$MulOp Factor$$\* ; AddOp <\- '\+'/'\-' ; Factor <\- '$$' Expression '$$' / Number ; END; then its canonical serialization $$except for whitespace$$ is pt::grammar::peg \{ rules \{ AddOp \{is \{/ \{t \-\} \{t \+\}\} mode value\} Digit \{is \{/ \{t 0\} \{t 1\} \{t 2\} \{t 3\} \{t 4\} \{t 5\} \{t 6\} \{t 7\} \{t 8\} \{t 9\}\} mode value\} Expression \{is \{x \{n Term\} \{\* \{x \{n AddOp\} \{n Term\}\}\}\} mode value\} Factor \{is \{/ \{x \{t $$\} \{n Expression\} \{t$$\}\} \{n Number\}\} mode value\} MulOp \{is \{/ \{t \*\} \{t /\}\} mode value\} Number \{is \{x \{? 
Changes to embedded/md/tcllib/files/modules/pt/pt_peg_export.md

The same mechanical change: the calculator PEG, its canonical
`pt::grammar::peg` serialization, and the PE example
`{x {n Term} {* {x {n AddOp} {n Term}}}}` lose their backslash-quoting inside
the verbatim blocks. The surrounding prose, including the closing "Bugs,
Ideas, Feedback" section, is unchanged.
Changes to embedded/md/tcllib/files/modules/pt/pt_peg_export_container.md

It has no direct formal specification beyond what was said above.

## Example

Assuming the following PEG for simple mathematical expressions

    PEG calculator (Expression)
        Digit      <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ;
        Sign       <- '-' / '+' ;
        Number     <- Sign? Digit+ ;
        Expression <- Term (AddOp Term)* ;
        MulOp      <- '*' / '/' ;
        Term       <- Factor (MulOp Factor)* ;
        AddOp      <- '+'/'-' ;
        Factor     <- '(' Expression ')' / Number ;
    END;

one possible CONTAINER serialization for it is

    snit::type a_pe_grammar {
        constructor {} {
            install myg using pt::peg::container ${selfns}::G
            $myg start {n Expression}
            $myg add   AddOp Digit Expression Factor MulOp Number Sign Term
            $myg modes {
                AddOp      value
                Digit      value
                Expression value
                Factor     value
                MulOp      value
                Number     value
                Sign       value
                Term       value
            }
            $myg rules {
                AddOp      {/ {t -} {t +}}
                Digit      {/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}}
                Expression {/ {x {t \50} {n Expression} {t \51}} {x {n Factor} {* {x {n MulOp} {n Factor}}}}
                Factor     {x {n Term} {* {x {n AddOp} {n Term}}}}
                MulOp      {/ {t *} {t /}}
                Number     {x {? {n Sign}} {+ {n Digit}}}
                Sign       {/ {t -} {t +}}
                Term       {n Number}
            }
            return
        }

        component myg
        delegate method * to myg
    }

# PEG serialization format

Here we specify the format used by the Parser Tools to serialize Parsing
Expression Grammars as immutable values for transport, comparison, etc.
We distinguish between *regular* and *canonical* serializations. While a PEG

[...]

1. The string representation of the value is the canonical representation of
   a Tcl dictionary. I.e. it does not contain superfluous whitespace.

## Example

The example repeats the calculator PEG and its canonical `pt::grammar::peg`
serialization shown above, followed by the "PE serialization format" example
`{x {n Term} {* {x {n AddOp} {n Term}}}}` and the "Bugs, Ideas, Feedback"
section; all are unchanged apart from the unquoting of the markdown specials
in the verbatim blocks.
Digit+ ; Expression <- Term (AddOp Term)* ; MulOp <- '*' / '/' ; Term <- Factor (MulOp Factor)* ; AddOp <- '+'/'-' ; Factor <- '(' Expression ')' / Number ; END; one possible CONTAINER serialization for it is snit::type a_pe_grammar { constructor {} { install myg using pt::peg::container ${selfns}::G$myg start {n Expression} $myg add AddOp Digit Expression Factor MulOp Number Sign Term$myg modes { AddOp value Digit value Expression value Factor value MulOp value Number value Sign value Term value } \$myg rules { AddOp {/ {t -} {t +}} Digit {/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}} Expression {/ {x {t \50} {n Expression} {t \51}} {x {n Factor} {* {x {n MulOp} {n Factor}}}}} Factor {x {n Term} {* {x {n AddOp} {n Term}}}} MulOp {/ {t *} {t /}} Number {x {? {n Sign}} {+ {n Digit}}} Sign {/ {t -} {t +}} Term {n Number} } return } component myg delegate method * to myg } # PEG serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expression Grammars as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a PEG ................................................................................ 1. The string representation of the value is the canonical representation of a Tcl dictionary\. I\.e\. it does not contain superfluous whitespace\. ## Example Assuming the following PEG for simple mathematical expressions PEG calculator (Expression) Digit <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <- '-' / '+' ; Number <- Sign? 
Digit+ ; Expression <- Term (AddOp Term)* ; MulOp <- '*' / '/' ; Term <- Factor (MulOp Factor)* ; AddOp <- '+'/'-' ; Factor <- '(' Expression ')' / Number ; END;

then its canonical serialization \(except for whitespace\) is

    pt::grammar::peg {
        rules {
            AddOp {is {/ {t -} {t +}} mode value}
            Digit {is {/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}} mode value}
            Expression {is {x {n Term} {* {x {n AddOp} {n Term}}}} mode value}
            Factor {is {/ {x {t (} {n Expression} {t )}} {n Number}} mode value}
            MulOp {is {/ {t *} {t /}} mode value}
            Number {is {x {? {n Sign}} {+ {n Digit}}} mode value}
            Sign {is {/ {t -} {t +}} mode value}
            Term {is {x {n Factor} {* {x {n MulOp} {n Factor}}}} mode value}
        }
        start {n Expression}
    }

# PE serialization format

Here we specify the format used by the Parser Tools to serialize Parsing Expressions as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a

................................................................................

  1. Terminals are *not* encoded as ranges \(where start and end of the range are identical\)\.

## Example

Assuming the parsing expression shown on the right\-hand side of the rule

    Expression <- Term (AddOp Term)*

then its canonical serialization \(except for whitespace\) is

    {x {n Term} {* {x {n AddOp} {n Term}}}}

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *pt* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.
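Since the canonical serialization shown above is an ordinary nested Tcl dictionary, it can be inspected with plain `dict` commands. The following is a minimal sketch using only core Tcl (no pt packages required), with a grammar fragment copied from the example:

```tcl
# The canonical PEG serialization is a plain nested Tcl dictionary.
# Inspecting it needs only core dict commands, no pt packages.
set serial {pt::grammar::peg {
    rules {
        Sign   {is {/ {t -} {t +}} mode value}
        Number {is {x {? {n Sign}} {+ {n Digit}}} mode value}
    }
    start {n Expression}
}}

puts [dict get $serial pt::grammar::peg start]
;# n Expression
puts [dict get $serial pt::grammar::peg rules Sign is]
;# / {t -} {t +}
```

The same access pattern works on a full serialization, since nothing here depends on which rules are present.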

Changes to embedded/md/tcllib/files/modules/pt/pt_peg_export_json.md.

 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 ... 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 ... 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543  themselves are not translated further, but kept as JSON strings containing a nested Tcl list, and there is no concept of canonicity for the JSON either\. ## Example Assuming the following PEG for simple mathematical expressions PEG calculator $$Expression$$ Digit <\- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <\- '\-' / '\+' ; Number <\- Sign? Digit\+ ; Expression <\- Term $$AddOp Term$$\* ; MulOp <\- '\*' / '/' ; Term <\- Factor $$MulOp Factor$$\* ; AddOp <\- '\+'/'\-' ; Factor <\- '$$' Expression '$$' / Number ; END; a JSON serialization for it is \{ "pt::grammar::peg" : \{ "rules" : \{ "AddOp" : \{ "is" : "\\/ \{t \-\} \{t \+\}", "mode" : "value" \}, "Digit" : \{ "is" : "\\/ \{t 0\} \{t 1\} \{t 2\} \{t 3\} \{t 4\} \{t 5\} \{t 6\} \{t 7\} \{t 8\} \{t 9\}", "mode" : "value" \}, "Expression" : \{ "is" : "\\/ \{x \{t $$\} \{n Expression\} \{t$$\}\} \{x \{n Factor\} \{\* \{x \{n MulOp\} \{n Factor\}\}\}\}", "mode" : "value" \}, "Factor" : \{ "is" : "x \{n Term\} \{\* \{x \{n AddOp\} \{n Term\}\}\}", "mode" : "value" \}, "MulOp" : \{ "is" : "\\/ \{t \*\} \{t \\/\}", "mode" : "value" \}, "Number" : \{ "is" : "x \{? 
\{n Sign\}\} \{\+ \{n Digit\}\}", "mode" : "value" \}, "Sign" : \{ "is" : "\\/ \{t \-\} \{t \+\}", "mode" : "value" \}, "Term" : \{ "is" : "n Number", "mode" : "value" \} \}, "start" : "n Expression" \} \} and a Tcl serialization of the same is pt::grammar::peg \{ rules \{ AddOp \{is \{/ \{t \-\} \{t \+\}\} mode value\} Digit \{is \{/ \{t 0\} \{t 1\} \{t 2\} \{t 3\} \{t 4\} \{t 5\} \{t 6\} \{t 7\} \{t 8\} \{t 9\}\} mode value\} Expression \{is \{x \{n Term\} \{\* \{x \{n AddOp\} \{n Term\}\}\}\} mode value\} Factor \{is \{/ \{x \{t $$\} \{n Expression\} \{t$$\}\} \{n Number\}\} mode value\} MulOp \{is \{/ \{t \*\} \{t /\}\} mode value\} Number \{is \{x \{? \{n Sign\}\} \{\+ \{n Digit\}\}\} mode value\} Sign \{is \{/ \{t \-\} \{t \+\}\} mode value\} Term \{is \{x \{n Factor\} \{\* \{x \{n MulOp\} \{n Factor\}\}\}\} mode value\} \} start \{n Expression\} \} The similarity of the latter to the JSON should be quite obvious\. # PEG serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expression Grammars as immutable values for transport, comparison, etc\. ................................................................................ 1. The string representation of the value is the canonical representation of a Tcl dictionary\. I\.e\. it does not contain superfluous whitespace\. ## Example Assuming the following PEG for simple mathematical expressions PEG calculator $$Expression$$ Digit <\- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <\- '\-' / '\+' ; Number <\- Sign? 
Digit\+ ; Expression <\- Term $$AddOp Term$$\* ; MulOp <\- '\*' / '/' ; Term <\- Factor $$MulOp Factor$$\* ; AddOp <\- '\+'/'\-' ; Factor <\- '$$' Expression '$$' / Number ; END; then its canonical serialization $$except for whitespace$$ is pt::grammar::peg \{ rules \{ AddOp \{is \{/ \{t \-\} \{t \+\}\} mode value\} Digit \{is \{/ \{t 0\} \{t 1\} \{t 2\} \{t 3\} \{t 4\} \{t 5\} \{t 6\} \{t 7\} \{t 8\} \{t 9\}\} mode value\} Expression \{is \{x \{n Term\} \{\* \{x \{n AddOp\} \{n Term\}\}\}\} mode value\} Factor \{is \{/ \{x \{t $$\} \{n Expression\} \{t$$\}\} \{n Number\}\} mode value\} MulOp \{is \{/ \{t \*\} \{t /\}\} mode value\} Number \{is \{x \{? \{n Sign\}\} \{\+ \{n Digit\}\}\} mode value\} Sign \{is \{/ \{t \-\} \{t \+\}\} mode value\} Term \{is \{x \{n Factor\} \{\* \{x \{n MulOp\} \{n Factor\}\}\}\} mode value\} \} start \{n Expression\} \} # PE serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expressions as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a ................................................................................ 1. Terminals are *not* encoded as ranges $$where start and end of the range are identical$$\. ## Example Assuming the parsing expression shown on the right\-hand side of the rule Expression <\- Term $$AddOp Term$$\* then its canonical serialization $$except for whitespace$$ is \{x \{n Term\} \{\* \{x \{n AddOp\} \{n Term\}\}\}\} # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *pt* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.   
| | | | | | | | | < > | | | | | | | | | | | | | | | | | | | | | | | | < > | < < | > > | | | | | | | | | | < > | < > | | | | | | | | | | | | | | | | | | | < > | < > | |  203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 ... 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 ... 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543  themselves are not translated further, but kept as JSON strings containing a nested Tcl list, and there is no concept of canonicity for the JSON either\. ## Example Assuming the following PEG for simple mathematical expressions PEG calculator (Expression) Digit <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <- '-' / '+' ; Number <- Sign? Digit+ ; Expression <- Term (AddOp Term)* ; MulOp <- '*' / '/' ; Term <- Factor (MulOp Factor)* ; AddOp <- '+'/'-' ; Factor <- '(' Expression ')' / Number ; END; a JSON serialization for it is { "pt::grammar::peg" : { "rules" : { "AddOp" : { "is" : "\/ {t -} {t +}", "mode" : "value" }, "Digit" : { "is" : "\/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}", "mode" : "value" }, "Expression" : { "is" : "\/ {x {t (} {n Expression} {t )}} {x {n Factor} {* {x {n MulOp} {n Factor}}}}", "mode" : "value" }, "Factor" : { "is" : "x {n Term} {* {x {n AddOp} {n Term}}}", "mode" : "value" }, "MulOp" : { "is" : "\/ {t *} {t \/}", "mode" : "value" }, "Number" : { "is" : "x {? 
{n Sign}} {+ {n Digit}}", "mode" : "value" }, "Sign" : { "is" : "\/ {t -} {t +}", "mode" : "value" }, "Term" : { "is" : "n Number", "mode" : "value" } }, "start" : "n Expression" } } and a Tcl serialization of the same is pt::grammar::peg { rules { AddOp {is {/ {t -} {t +}} mode value} Digit {is {/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}} mode value} Expression {is {x {n Term} {* {x {n AddOp} {n Term}}}} mode value} Factor {is {/ {x {t (} {n Expression} {t )}} {n Number}} mode value} MulOp {is {/ {t *} {t /}} mode value} Number {is {x {? {n Sign}} {+ {n Digit}}} mode value} Sign {is {/ {t -} {t +}} mode value} Term {is {x {n Factor} {* {x {n MulOp} {n Factor}}}} mode value} } start {n Expression} } The similarity of the latter to the JSON should be quite obvious\. # PEG serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expression Grammars as immutable values for transport, comparison, etc\. ................................................................................ 1. The string representation of the value is the canonical representation of a Tcl dictionary\. I\.e\. it does not contain superfluous whitespace\. ## Example Assuming the following PEG for simple mathematical expressions PEG calculator (Expression) Digit <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <- '-' / '+' ; Number <- Sign? Digit+ ; Expression <- Term (AddOp Term)* ; MulOp <- '*' / '/' ; Term <- Factor (MulOp Factor)* ; AddOp <- '+'/'-' ; Factor <- '(' Expression ')' / Number ; END; then its canonical serialization $$except for whitespace$$ is pt::grammar::peg { rules { AddOp {is {/ {t -} {t +}} mode value} Digit {is {/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}} mode value} Expression {is {x {n Term} {* {x {n AddOp} {n Term}}}} mode value} Factor {is {/ {x {t (} {n Expression} {t )}} {n Number}} mode value} MulOp {is {/ {t *} {t /}} mode value} Number {is {x {? 
{n Sign}} {+ {n Digit}}} mode value} Sign {is {/ {t -} {t +}} mode value} Term {is {x {n Factor} {* {x {n MulOp} {n Factor}}}} mode value} } start {n Expression} }

# PE serialization format

Here we specify the format used by the Parser Tools to serialize Parsing Expressions as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a

................................................................................

  1. Terminals are *not* encoded as ranges \(where start and end of the range are identical\)\.

## Example

Assuming the parsing expression shown on the right\-hand side of the rule

    Expression <- Term (AddOp Term)*

then its canonical serialization \(except for whitespace\) is

    {x {n Term} {* {x {n AddOp} {n Term}}}}

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *pt* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.
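In the JSON serialization shown above, each parsing expression is wrapped in a JSON string, which is why the `/` operator appears as `\/` there. The escaping step can be sketched in plain Tcl; note that the helper name `json-escape` is invented here for illustration and is not part of the pt API:

```tcl
# Sketch of the string escaping applied when a PE list is embedded in
# the JSON serialization: backslash, double quote, and the optional
# JSON escape for "/" (which the output shown in the docs uses).
# "json-escape" is a hypothetical helper, not part of the pt packages.
proc json-escape {s} {
    string map [list \\ \\\\ \" \\\" / \\/] $s
}

puts [json-escape {/ {t -} {t +}}]
;# \/ {t -} {t +}
```

Escaping `/` as `\/` is optional in JSON; the generator used for these docs emits it, so the sketch does too.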

Changes to embedded/md/tcllib/files/modules/pt/pt_peg_export_peg.md.

 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 ... 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 ... 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540  writing the specification of a grammar easy, something the other formats found in the Parser Tools do not lend themselves too\. It is formally specified by the grammar shown below, written in itself\. For a tutorial / introduction to the language please go and read the *[PEG Language Tutorial](pt\_peg\_language\.md)*\. PEG pe\-grammar\-for\-peg $$Grammar$$ \# \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- \# Syntactical constructs Grammar <\- WHITESPACE Header Definition\* Final EOF ; Header <\- PEG Identifier StartExpr ; Definition <\- Attribute? Identifier IS Expression SEMICOLON ; Attribute <\- $$VOID / LEAF$$ COLON ; Expression <\- Sequence $$SLASH Sequence$$\* ; Sequence <\- Prefix\+ ; Prefix <\- $$AND / NOT$$? Suffix ; Suffix <\- Primary $$QUESTION / STAR / PLUS$$? 
; Primary <\- ALNUM / ALPHA / ASCII / CONTROL / DDIGIT / DIGIT / GRAPH / LOWER / PRINTABLE / PUNCT / SPACE / UPPER / WORDCHAR / XDIGIT / Identifier / OPEN Expression CLOSE / Literal / Class / DOT ; Literal <\- APOSTROPH $$\!APOSTROPH Char$$\* APOSTROPH WHITESPACE / DAPOSTROPH $$\!DAPOSTROPH Char$$\* DAPOSTROPH WHITESPACE ; Class <\- OPENB $$\!CLOSEB Range$$\* CLOSEB WHITESPACE ; Range <\- Char TO Char / Char ; StartExpr <\- OPEN Expression CLOSE ; void: Final <\- "END" WHITESPACE SEMICOLON WHITESPACE ; \# \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- \# Lexing constructs Identifier <\- Ident WHITESPACE ; leaf: Ident <\- $$$\_:$ /$$ $$$\_:$ /$$\* ; Char <\- CharSpecial / CharOctalFull / CharOctalPart / CharUnicode / CharUnescaped ; leaf: CharSpecial <\- "\\\\" $nrt'"\\\[\\$\\\\\] ; leaf: CharOctalFull <\- "\\\\" $0\-2$$0\-7$$0\-7$ ; leaf: CharOctalPart <\- "\\\\" $0\-7$$0\-7$? ; leaf: CharUnicode <\- "\\\\" 'u' HexDigit $$HexDigit \(HexDigit HexDigit?$$?\)? ; leaf: CharUnescaped <\- \!"\\\\" \. ; void: HexDigit <\- $0\-9a\-fA\-F$ ; void: TO <\- '\-' ; void: OPENB <\- "$" ; void: CLOSEB <\- "$" ; void: APOSTROPH <\- "'" ; void: DAPOSTROPH <\- '"' ; void: PEG <\- "PEG" \!$$$\_:$ /$$ WHITESPACE ; void: IS <\- "<\-" WHITESPACE ; leaf: VOID <\- "void" WHITESPACE ; \# Implies that definition has no semantic value\. leaf: LEAF <\- "leaf" WHITESPACE ; \# Implies that definition has no terminals\. void: SEMICOLON <\- ";" WHITESPACE ; void: COLON <\- ":" WHITESPACE ; void: SLASH <\- "/" WHITESPACE ; leaf: AND <\- "&" WHITESPACE ; leaf: NOT <\- "\!" WHITESPACE ; leaf: QUESTION <\- "?" WHITESPACE ; leaf: STAR <\- "\*" WHITESPACE ; leaf: PLUS <\- "\+" WHITESPACE ; void: OPEN <\- "$$" WHITESPACE ; void: CLOSE <\- "$$" WHITESPACE ; leaf: DOT <\- "\." 
WHITESPACE ; leaf: ALNUM <\- "" WHITESPACE ; leaf: ALPHA <\- "" WHITESPACE ; leaf: ASCII <\- "" WHITESPACE ; leaf: CONTROL <\- "" WHITESPACE ; leaf: DDIGIT <\- "" WHITESPACE ; leaf: DIGIT <\- "" WHITESPACE ; leaf: GRAPH <\- "" WHITESPACE ; leaf: LOWER <\- "" WHITESPACE ; leaf: PRINTABLE <\- "" WHITESPACE ; leaf: PUNCT <\- "" WHITESPACE ; leaf: SPACE <\- "" WHITESPACE ; leaf: UPPER <\- "" WHITESPACE ; leaf: WORDCHAR <\- "" WHITESPACE ; leaf: XDIGIT <\- "" WHITESPACE ; void: WHITESPACE <\- $$" " / "\\t" / EOL / COMMENT$$\* ; void: COMMENT <\- '\#' $$\!EOL \.$$\* EOL ; void: EOL <\- "\\n\\r" / "\\n" / "\\r" ; void: EOF <\- \!\. ; \# \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- END; ## Example Our example specifies the grammar for a basic 4\-operation calculator\. PEG calculator $$Expression$$ Digit <\- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <\- '\-' / '\+' ; Number <\- Sign? Digit\+ ; Expression <\- Term $$AddOp Term$$\* ; MulOp <\- '\*' / '/' ; Term <\- Factor $$MulOp Factor$$\* ; AddOp <\- '\+'/'\-' ; Factor <\- '$$' Expression '$$' / Number ; END; Using higher\-level features of the notation, i\.e\. the character classes $$predefined and custom$$, this example can be rewritten as PEG calculator $$Expression$$ Sign <\- $\-\+$ ; Number <\- Sign? \+ ; Expression <\- '$$' Expression '$$' / $$Factor \(MulOp Factor$$\*\) ; MulOp <\- $\*/$ ; Factor <\- Term $$AddOp Term$$\* ; AddOp <\- $\-\+$ ; Term <\- Number ; END; # PEG serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expression Grammars as immutable values for transport, comparison, etc\. ................................................................................ 1. The string representation of the value is the canonical representation of a Tcl dictionary\. I\.e\. it does not contain superfluous whitespace\. 
## Example Assuming the following PEG for simple mathematical expressions PEG calculator $$Expression$$ Digit <\- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <\- '\-' / '\+' ; Number <\- Sign? Digit\+ ; Expression <\- Term $$AddOp Term$$\* ; MulOp <\- '\*' / '/' ; Term <\- Factor $$MulOp Factor$$\* ; AddOp <\- '\+'/'\-' ; Factor <\- '$$' Expression '$$' / Number ; END; then its canonical serialization $$except for whitespace$$ is pt::grammar::peg \{ rules \{ AddOp \{is \{/ \{t \-\} \{t \+\}\} mode value\} Digit \{is \{/ \{t 0\} \{t 1\} \{t 2\} \{t 3\} \{t 4\} \{t 5\} \{t 6\} \{t 7\} \{t 8\} \{t 9\}\} mode value\} Expression \{is \{x \{n Term\} \{\* \{x \{n AddOp\} \{n Term\}\}\}\} mode value\} Factor \{is \{/ \{x \{t $$\} \{n Expression\} \{t$$\}\} \{n Number\}\} mode value\} MulOp \{is \{/ \{t \*\} \{t /\}\} mode value\} Number \{is \{x \{? \{n Sign\}\} \{\+ \{n Digit\}\}\} mode value\} Sign \{is \{/ \{t \-\} \{t \+\}\} mode value\} Term \{is \{x \{n Factor\} \{\* \{x \{n MulOp\} \{n Factor\}\}\}\} mode value\} \} start \{n Expression\} \} # PE serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expressions as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a ................................................................................ 1. Terminals are *not* encoded as ranges $$where start and end of the range are identical$$\. ## Example Assuming the parsing expression shown on the right\-hand side of the rule Expression <\- Term $$AddOp Term$$\* then its canonical serialization $$except for whitespace$$ is \{x \{n Term\} \{\* \{x \{n AddOp\} \{n Term\}\}\}\} # Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *pt* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. 
Please also report any ideas for enhancements you may have for either package and/or documentation\.   | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | < > | < > | |  151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 ... 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 ... 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540  writing the specification of a grammar easy, something the other formats found in the Parser Tools do not lend themselves too\. It is formally specified by the grammar shown below, written in itself\. For a tutorial / introduction to the language please go and read the *[PEG Language Tutorial](pt\_peg\_language\.md)*\. PEG pe-grammar-for-peg (Grammar) # -------------------------------------------------------------------- # Syntactical constructs Grammar <- WHITESPACE Header Definition* Final EOF ; Header <- PEG Identifier StartExpr ; Definition <- Attribute? Identifier IS Expression SEMICOLON ; Attribute <- (VOID / LEAF) COLON ; Expression <- Sequence (SLASH Sequence)* ; Sequence <- Prefix+ ; Prefix <- (AND / NOT)? Suffix ; Suffix <- Primary (QUESTION / STAR / PLUS)? 
; Primary <- ALNUM / ALPHA / ASCII / CONTROL / DDIGIT / DIGIT / GRAPH / LOWER / PRINTABLE / PUNCT / SPACE / UPPER / WORDCHAR / XDIGIT / Identifier / OPEN Expression CLOSE / Literal / Class / DOT ; Literal <- APOSTROPH (!APOSTROPH Char)* APOSTROPH WHITESPACE / DAPOSTROPH (!DAPOSTROPH Char)* DAPOSTROPH WHITESPACE ; Class <- OPENB (!CLOSEB Range)* CLOSEB WHITESPACE ; Range <- Char TO Char / Char ; StartExpr <- OPEN Expression CLOSE ; void: Final <- "END" WHITESPACE SEMICOLON WHITESPACE ; # -------------------------------------------------------------------- # Lexing constructs Identifier <- Ident WHITESPACE ; leaf: Ident <- ([_:] / ) ([_:] / )* ; Char <- CharSpecial / CharOctalFull / CharOctalPart / CharUnicode / CharUnescaped ; leaf: CharSpecial <- "\\" [nrt'"\\] ; leaf: CharOctalFull <- "\\" [0-2][0-7][0-7] ; leaf: CharOctalPart <- "\\" [0-7][0-7]? ; leaf: CharUnicode <- "\\" 'u' HexDigit (HexDigit (HexDigit HexDigit?)?)? ; leaf: CharUnescaped <- !"\\" . ; void: HexDigit <- [0-9a-fA-F] ; void: TO <- '-' ; void: OPENB <- "[" ; void: CLOSEB <- "]" ; void: APOSTROPH <- "'" ; void: DAPOSTROPH <- '"' ; void: PEG <- "PEG" !([_:] / ) WHITESPACE ; void: IS <- "<-" WHITESPACE ; leaf: VOID <- "void" WHITESPACE ; # Implies that definition has no semantic value. leaf: LEAF <- "leaf" WHITESPACE ; # Implies that definition has no terminals. void: SEMICOLON <- ";" WHITESPACE ; void: COLON <- ":" WHITESPACE ; void: SLASH <- "/" WHITESPACE ; leaf: AND <- "&" WHITESPACE ; leaf: NOT <- "!" WHITESPACE ; leaf: QUESTION <- "?" WHITESPACE ; leaf: STAR <- "*" WHITESPACE ; leaf: PLUS <- "+" WHITESPACE ; void: OPEN <- "(" WHITESPACE ; void: CLOSE <- ")" WHITESPACE ; leaf: DOT <- "." 
WHITESPACE ; leaf: ALNUM <- "" WHITESPACE ; leaf: ALPHA <- "" WHITESPACE ; leaf: ASCII <- "" WHITESPACE ; leaf: CONTROL <- "" WHITESPACE ; leaf: DDIGIT <- "" WHITESPACE ; leaf: DIGIT <- "" WHITESPACE ; leaf: GRAPH <- "" WHITESPACE ; leaf: LOWER <- "" WHITESPACE ; leaf: PRINTABLE <- "" WHITESPACE ; leaf: PUNCT <- "" WHITESPACE ; leaf: SPACE <- "" WHITESPACE ; leaf: UPPER <- "" WHITESPACE ; leaf: WORDCHAR <- "" WHITESPACE ; leaf: XDIGIT <- "" WHITESPACE ; void: WHITESPACE <- (" " / "\t" / EOL / COMMENT)* ; void: COMMENT <- '#' (!EOL .)* EOL ; void: EOL <- "\n\r" / "\n" / "\r" ; void: EOF <- !. ; # -------------------------------------------------------------------- END; ## Example Our example specifies the grammar for a basic 4\-operation calculator\. PEG calculator (Expression) Digit <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <- '-' / '+' ; Number <- Sign? Digit+ ; Expression <- Term (AddOp Term)* ; MulOp <- '*' / '/' ; Term <- Factor (MulOp Factor)* ; AddOp <- '+'/'-' ; Factor <- '(' Expression ')' / Number ; END; Using higher\-level features of the notation, i\.e\. the character classes $$predefined and custom$$, this example can be rewritten as PEG calculator (Expression) Sign <- [-+] ; Number <- Sign? + ; Expression <- '(' Expression ')' / (Factor (MulOp Factor)*) ; MulOp <- [*/] ; Factor <- Term (AddOp Term)* ; AddOp <- [-+] ; Term <- Number ; END; # PEG serialization format Here we specify the format used by the Parser Tools to serialize Parsing Expression Grammars as immutable values for transport, comparison, etc\. ................................................................................ 1. The string representation of the value is the canonical representation of a Tcl dictionary\. I\.e\. it does not contain superfluous whitespace\. ## Example Assuming the following PEG for simple mathematical expressions PEG calculator (Expression) Digit <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ; Sign <- '-' / '+' ; Number <- Sign? 
Digit+ ; Expression <- Term (AddOp Term)* ; MulOp <- '*' / '/' ; Term <- Factor (MulOp Factor)* ; AddOp <- '+'/'-' ; Factor <- '(' Expression ')' / Number ; END;

then its canonical serialization \(except for whitespace\) is

    pt::grammar::peg {
        rules {
            AddOp {is {/ {t -} {t +}} mode value}
            Digit {is {/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}} mode value}
            Expression {is {x {n Term} {* {x {n AddOp} {n Term}}}} mode value}
            Factor {is {/ {x {t (} {n Expression} {t )}} {n Number}} mode value}
            MulOp {is {/ {t *} {t /}} mode value}
            Number {is {x {? {n Sign}} {+ {n Digit}}} mode value}
            Sign {is {/ {t -} {t +}} mode value}
            Term {is {x {n Factor} {* {x {n MulOp} {n Factor}}}} mode value}
        }
        start {n Expression}
    }

# PE serialization format

Here we specify the format used by the Parser Tools to serialize Parsing Expressions as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a

................................................................................

  1. Terminals are *not* encoded as ranges \(where start and end of the range are identical\)\.

## Example

Assuming the parsing expression shown on the right\-hand side of the rule

    Expression <- Term (AddOp Term)*

then its canonical serialization \(except for whitespace\) is

    {x {n Term} {* {x {n AddOp} {n Term}}}}

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *pt* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.
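The PE serialization format described above is uniform enough that simple tools can walk it with plain list commands: the first element names the operator, the rest are either arguments or child expressions. As an illustrative sketch — the proc name `pe-nonterminals` is invented here and is not part of the pt packages:

```tcl
# Walk a PE serialization (a nested Tcl list) and collect every
# nonterminal reference {n Symbol} it contains, in document order.
# "pe-nonterminals" is a hypothetical helper, not part of the pt API.
proc pe-nonterminals {pe} {
    switch -exact -- [lindex $pe 0] {
        n {
            # {n Symbol} - a nonterminal reference
            return [list [lindex $pe 1]]
        }
        t {
            # {t char} - a terminal, nothing to collect
            return {}
        }
        default {
            # Operators (x, /, *, +, ?, &, !, ...) carry child PEs;
            # leaf operators like alnum or dot simply have no children.
            set result {}
            foreach sub [lrange $pe 1 end] {
                lappend result {*}[pe-nonterminals $sub]
            }
            return $result
        }
    }
}

puts [pe-nonterminals {x {n Term} {* {x {n AddOp} {n Term}}}}]
;# Term AddOp Term
```

Applied to the canonical serialization of `Expression <- Term (AddOp Term)*` above, it recovers the three nonterminal references in order.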

Changes to embedded/md/tcllib/files/modules/pt/pt_peg_from_json.md.

In the old revision the markdown special characters inside the verbatim blocks below were backslash-quoted, e\.g\. "Sign <\- '\-' / '\+' ;"; the new revision emits them plain, as they are not special in verbatim blocks\. The updated text of the changed hunks:

themselves are not translated further, but kept as JSON strings containing a nested Tcl list, and there is no concept of canonicity for the JSON either\.

## Example

Assuming the following PEG for simple mathematical expressions

    PEG calculator (Expression)
        Digit      <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ;
        Sign       <- '-' / '+' ;
        Number     <- Sign? Digit+ ;
        Expression <- Term (AddOp Term)* ;
        MulOp      <- '*' / '/' ;
        Term       <- Factor (MulOp Factor)* ;
        AddOp      <- '+'/'-' ;
        Factor     <- '(' Expression ')' / Number ;
    END;

a JSON serialization for it is

    {
        "pt::grammar::peg" : {
            "rules" : {
                "AddOp"      : { "is" : "\/ {t -} {t +}", "mode" : "value" },
                "Digit"      : { "is" : "\/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}", "mode" : "value" },
                "Expression" : { "is" : "\/ {x {t (} {n Expression} {t )}} {x {n Factor} {* {x {n MulOp} {n Factor}}}}", "mode" : "value" },
                "Factor"     : { "is" : "x {n Term} {* {x {n AddOp} {n Term}}}", "mode" : "value" },
                "MulOp"      : { "is" : "\/ {t *} {t \/}", "mode" : "value" },
                "Number"     : { "is" : "x {? {n Sign}} {+ {n Digit}}", "mode" : "value" },
                "Sign"       : { "is" : "\/ {t -} {t +}", "mode" : "value" },
                "Term"       : { "is" : "n Number", "mode" : "value" }
            },
            "start" : "n Expression"
        }
    }

and a Tcl serialization of the same is

    pt::grammar::peg {
        rules {
            AddOp      {is {/ {t -} {t +}} mode value}
            Digit      {is {/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}} mode value}
            Expression {is {x {n Term} {* {x {n AddOp} {n Term}}}} mode value}
            Factor     {is {/ {x {t (} {n Expression} {t )}} {n Number}} mode value}
            MulOp      {is {/ {t *} {t /}} mode value}
            Number     {is {x {? {n Sign}} {+ {n Digit}}} mode value}
            Sign       {is {/ {t -} {t +}} mode value}
            Term       {is {x {n Factor} {* {x {n MulOp} {n Factor}}}} mode value}
        }
        start {n Expression}
    }

The similarity of the latter to the JSON should be quite obvious\.

# PEG serialization format

Here we specify the format used by the Parser Tools to serialize Parsing Expression Grammars as immutable values for transport, comparison, etc\.
................................................................................

1. The string representation of the value is the canonical representation of a Tcl dictionary\. I\.e\. it does not contain superfluous whitespace\.

## Example

Assuming the following PEG for simple mathematical expressions

    PEG calculator (Expression)
        Digit      <- '0'/'1'/'2'/'3'/'4'/'5'/'6'/'7'/'8'/'9' ;
        Sign       <- '-' / '+' ;
        Number     <- Sign? Digit+ ;
        Expression <- Term (AddOp Term)* ;
        MulOp      <- '*' / '/' ;
        Term       <- Factor (MulOp Factor)* ;
        AddOp      <- '+'/'-' ;
        Factor     <- '(' Expression ')' / Number ;
    END;

then its canonical serialization \(except for whitespace\) is

    pt::grammar::peg {
        rules {
            AddOp      {is {/ {t -} {t +}} mode value}
            Digit      {is {/ {t 0} {t 1} {t 2} {t 3} {t 4} {t 5} {t 6} {t 7} {t 8} {t 9}} mode value}
            Expression {is {x {n Term} {* {x {n AddOp} {n Term}}}} mode value}
            Factor     {is {/ {x {t (} {n Expression} {t )}} {n Number}} mode value}
            MulOp      {is {/ {t *} {t /}} mode value}
            Number     {is {x {? {n Sign}} {+ {n Digit}}} mode value}
            Sign       {is {/ {t -} {t +}} mode value}
            Term       {is {x {n Factor} {* {x {n MulOp} {n Factor}}}} mode value}
        }
        start {n Expression}
    }

# PE serialization format

Here we specify the format used by the Parser Tools to serialize Parsing Expressions as immutable values for transport, comparison, etc\. We distinguish between *regular* and *canonical* serializations\. While a
................................................................................

1. Terminals are *not* encoded as ranges \(where start and end of the range are identical\)\.

## Example

Assuming the parsing expression shown on the right\-hand side of the rule

    Expression <- Term (AddOp Term)*

then its canonical serialization \(except for whitespace\) is

    {x {n Term} {* {x {n AddOp} {n Term}}}}

# Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *pt* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\.
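The canonical serializations above are Tcl lists nested by braces. As an illustration of how such a value can be taken apart outside of Tcl, here is a hedged Python sketch; `tcl_list` and `pe_tree` are hypothetical helper names, and the parser handles only the brace-and-whitespace subset these serializations actually use, not full Tcl quoting rules.

```python
def tcl_list(s):
    """Split a Tcl list string into its top-level words.

    Toy parser: honors {...} grouping and whitespace only, which is
    all the PE serializations shown above require."""
    words, i, n = [], 0, len(s)
    while i < n:
        while i < n and s[i].isspace():
            i += 1
        if i >= n:
            break
        if s[i] == "{":                    # braced word: find matching brace
            depth, j = 1, i + 1
            while j < n and depth:
                if s[j] == "{":
                    depth += 1
                elif s[j] == "}":
                    depth -= 1
                j += 1
            words.append(s[i + 1 : j - 1])
            i = j
        else:                              # bare word: up to next whitespace
            j = i
            while j < n and not s[j].isspace():
                j += 1
            words.append(s[i:j])
            i = j
    return words

def pe_tree(pe):
    """Recursively expand a PE serialization into nested Python lists.

    Operators (x, /, *, +, ?, &, !) recurse into their arguments;
    t/n leaves are returned as flat [op, detail] pairs."""
    words = tcl_list(pe)
    if words and words[0] in ("x", "/", "*", "+", "?", "&", "!"):
        return [words[0]] + [pe_tree(a) for a in words[1:]]
    return words
```

For example, `pe_tree("x {n Term} {* {x {n AddOp} {n Term}}}")` yields `["x", ["n", "Term"], ["*", ["x", ["n", "AddOp"], ["n", "Term"]]]]`, mirroring the nesting of the canonical serialization.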

Changes to embedded/md/tcllib/files/modules/pt/pt_peg_from_peg.md.

In the old revision the markdown special characters inside the verbatim blocks below were backslash-quoted, e\.g\. "Suffix <\- Primary \(QUESTION / STAR / PLUS\)? ;"; the new revision emits them plain, as they are not special in verbatim blocks\. The updated text of the changed hunk:

writing the specification of a grammar easy, something the other formats found in the Parser Tools do not lend themselves too\. It is formally specified by the grammar shown below, written in itself\. For a tutorial / introduction to the language please go and read the *[PEG Language Tutorial](pt\_peg\_language\.md)*\.

    PEG pe-grammar-for-peg (Grammar)

    # --------------------------------------------------------------------
    # Syntactical constructs

        Grammar    <- WHITESPACE Header Definition* Final EOF ;
        Header     <- PEG Identifier StartExpr ;
        Definition <- Attribute? Identifier IS Expression SEMICOLON ;
        Attribute  <- (VOID / LEAF) COLON ;
        Expression <- Sequence (SLASH Sequence)* ;
        Sequence   <- Prefix+ ;
        Prefix     <- (AND / NOT)? Suffix ;
        Suffix     <- Primary (QUESTION / STAR / PLUS)? ;
        Primary    <- ALNUM / ALPHA / ASCII / CONTROL / DDIGIT / DIGIT / GRAPH / LOWER / PRINTABLE / PUNCT / SPACE / UPPER / WORDCHAR / XDIGIT / Identifier / OPEN Expression CLOSE / Literal / Class / DOT ;
        Literal    <- APOSTROPH (!APOSTROPH Char)* APOSTROPH WHITESPACE / DAPOSTROPH (!DAPOSTROPH Char)* DAPOSTROPH WHITESPACE ;
        Class      <- OPENB (!CLOSEB Range)* CLOSEB WHITESPACE ; Ran