Check-in [5dab3d4c27]


Overview
Comment: merge wordsmithing branch "kbk-527-redaction"
SHA3-256: 5dab3d4c27a77ab1a183b266c3b6670afb58ed23368c21e39083030fd6692b3c
User & Date: sebres 2018-12-03 07:59:09
Context
2018-12-03
19:20
New version of TIP #507 by René Zaumseil check-in: e709b1ffa6 user: fvogel tags: trunk
07:59
merge wordsmithing branch "kbk-527-redaction" check-in: 5dab3d4c27 user: sebres tags: trunk
2018-11-30
21:32
New version of TIP #510 from René Zaumseil. Complete rewrite, with tkpath now included. check-in: 1acb7ad7a5 user: fvogel tags: trunk
19:47
Do some wordsmithing to make the proposal easier to read (the original author is not a native speaker of English). Closed-Leaf check-in: 8d45366d0b user: kennykb tags: kbk-527-redaction
Changes

Changes to index.json.

	"4":{"url":"./tip/4.md","created":"26-Oct-2000","post-history":"","state":"Draft","vote":"Pending","type":"Informative","title":"# TIP 4: Tcl Release and Distribution Philosophy","discussions-to":"news:comp.lang.tcl","author":["Brent Welch <[email protected]>","Donal K. Fellows <[email protected]>","Larry W. Virden <[email protected]>","Larry W. Virden <[email protected]>"],"is-jest":false},
	"3":{"url":"./tip/3.md","created":"14-Sep-2000","post-history":"","state":"Accepted","vote":"Done","type":"Process","title":"# TIP 3: TIP Format","author":["Andreas Kupries <[email protected]>","Donal K. Fellows <[email protected]>"],"is-jest":false},
	"2":{"url":"./tip/2.md","created":"12-Sep-2000","post-history":"","state":"Draft","vote":"Pending","type":"Process","title":"# TIP 2: TIP Guidelines","author":["Andreas Kupries <[email protected]>","Donal K. Fellows <[email protected]>","Don Porter <[email protected]>","Mo DeJong <[email protected]>","Larry W. Virden <[email protected]>","Kevin Kenny <[email protected]>"],"is-jest":false},
	"1":{"url":"./tip/1.md","created":"14-Sep-2000","post-history":"","state":"Active","vote":"No voting","type":"Informational","title":"# TIP 1: TIP Index","author":["TIP Editor <[email protected]>"],"is-jest":false},
	"0":{"url":"./tip/0.md","created":"11-Dec-2000","post-history":"","state":"Final","vote":"Done","type":"Process","title":"# TIP 0: Tcl Core Team Basic Rules","author":["John Ousterhout <[email protected]>"],"is-jest":false},
	"@min": 0,
	"@max": 527
}, "@timestamp": 1543613497}






	"4":{"url":"./tip/4.md","created":"26-Oct-2000","post-history":"","state":"Draft","vote":"Pending","type":"Informative","title":"# TIP 4: Tcl Release and Distribution Philosophy","discussions-to":"news:comp.lang.tcl","author":["Brent Welch <[email protected]>","Donal K. Fellows <[email protected]>","Larry W. Virden <[email protected]>","Larry W. Virden <[email protected]>"],"is-jest":false},
	"3":{"url":"./tip/3.md","created":"14-Sep-2000","post-history":"","state":"Accepted","vote":"Done","type":"Process","title":"# TIP 3: TIP Format","author":["Andreas Kupries <[email protected]>","Donal K. Fellows <[email protected]>"],"is-jest":false},
	"2":{"url":"./tip/2.md","created":"12-Sep-2000","post-history":"","state":"Draft","vote":"Pending","type":"Process","title":"# TIP 2: TIP Guidelines","author":["Andreas Kupries <[email protected]tend.com>","Donal K. Fellows <[email protected]>","Don Porter <[email protected]>","Mo DeJong <[email protected]>","Larry W. Virden <[email protected]>","Kevin Kenny <[email protected]>"],"is-jest":false},
	"1":{"url":"./tip/1.md","created":"14-Sep-2000","post-history":"","state":"Active","vote":"No voting","type":"Informational","title":"# TIP 1: TIP Index","author":["TIP Editor <[email protected]>"],"is-jest":false},
	"0":{"url":"./tip/0.md","created":"11-Dec-2000","post-history":"","state":"Final","vote":"Done","type":"Process","title":"# TIP 0: Tcl Core Team Basic Rules","author":["John Ousterhout <[email protected]>"],"is-jest":false},
	"@min": 0,
	"@max": 527
}, "@timestamp": 1543823872}

Changes to tip/527.md.

	Created:        26-Nov-2018
	Tcl-Version:    8.5
	Tcl-Branch:     sebres-8-6-timerate
	Discussions-To: news:comp.lang.tcl
	Post-History: 
-----

# Abstract

This TIP proposes new command `timerate` as well as measurement toolchain for TCL.

For provide a possibility to compare results between Tcl versions the version 8.5 is suggested as target version for this TIP.
(In case of some objections by extending of old versions, the command `timerate` could be placed into `::tcl::unsupported namespace`.)

Additionally this TIP proposes (optionally) a small framework to test performance. This makes possible to create a test-script for
performance-relevant coverage and allows to compare the results diff-based to find performance degradation between revisions.

# Rationale

Although the already available command `time` can be used for performance measurement, but it has several disadvantages:

 1. the execution is limited by fixed repetition count, so the real execution time is undefined (if the evaluation time of single iteration is unknown or may vary in time in different versions), thus it could grow to very long time.
 2. it does not have a calibration ability.
 3. it uses default script invocation function (`Tcl_EvalObjEx`), therefore the execution of the script differentiates from execution of compiled byte-code (e. g. in the compiled procedure),
    additionally measurement is very imprecise on extremely fast scripts, so it has certain overhead (on execution up to TclNRRunCallbacks resp. corresponding TEBCResume of compiled script
    as well as the extern costs involved like a washout of the CPU-cache, branch misprediction, etc).

<hr/>
# I. Proposal of command `timerate`



	timerate - Time-related execution resp. performance measurement of a script

### Synopsis

	timerate script ?time? ?count?

................................................................................

       - the count how many times it was executed (`lindex $result 2`)

       - the estimated rate per second (`lindex $result 4`)

       - the estimated real execution time without measurement overhead (`lindex $result 6`)

Time is measured in elapsed time using the highest possible timer resolution, not CPU time.
This command may be used to provide information as to how well the script or a tcl-command is performing and can help determine bottlenecks and fine-tune application performance.

In opposition to `time` the execution limited here by fixed time.

Additionally the compiled variant of the script will be used during whole evaluation (as if it were part of a compiled proc),
if parameter -direct is not specified.
Therefore it provides more precise results and prevents very long execution time by slow scripts resp. scripts with unknown execution time of single iteration.


To measure very fast scripts as exactly as possible, a calibration process may be required.



	-calibrate

This parameter used to calibrate timerate calculating the estimated overhead of given script as default overhead for further execution of timerate.
It can take up to 10 seconds if parameter time is not specified.


	-overhead double

This parameter used to supply the measurement overhead of single iteration (in microseconds) that should be ignored during whole evaluation process.


	-direct

Causes direct execution per iteration (not compiled variant of evaluation used, similar to `time` - can be used to measure `Tcl_EvalObjEx` as well as invocation of canonical list).


### Examples

Estimate how fast it takes for a simple Tcl for loop (including operations on variable i) to count to ten:

	# calibrate:
	timerate -calibrate {}
	# measure:
	timerate { for {set i 0} {$i<10} {incr i} {} } 5000

Estimate how fast it takes for a simple Tcl for loop only (ignoring the overhead for operations on variable i) to count to ten:

	# calibrate for overhead of variable operations:
	set i 0; timerate -calibrate {expr {$i<10}; incr i} 1000 
	# measure:
	timerate { for {set i 0} {$i<10} {incr i} {} } 5000

Estimate the rate of calculating the hour using clock format only, ignoring overhead of the rest, without measurement how fast it takes for a whole script:

	# calibrate:
	timerate -calibrate {}
	# estimate overhead:
	set tm 0
	set ovh [lindex [timerate { incr tm [expr {24*60*60}] }] 0]
	# measure using estimated overhead:
	set tm 0
	timerate -overhead $ovh {
	  clock format $tm -format %H
	  incr tm [expr {24*60*60}]; # overhead for this is ignored
	} 5000

### Impact

As the current implementation in [sebres-8-6-timerate](/tcl/timeline?n=100&r=sebres-8-6-timerate) 
(resp. [sebres-8-5-timerate](/tcl/timeline?n=100&r=sebres-8-5-timerate)) shows, no public API's are affected by introducing this,
solely some new functions are provided for Tcl internal API (see tclInt.h).

Don Porter (dgp) has made a review of this targeting 8.7, which can be found in branch [dgp-sebres-timerate-review](/tcl/timeline?n=100&r=dgp-sebres-timerate-review).
This branch already contains the test-performance framework as well as `tests-perf/clock.perf.tcl` script (originate by me in clock speedup branch).

<hr/>
# II. Proposal of test-performance framework `::tclTestPerf`

This small test-suite should provide possibility for batch-based measurement as well as to have evaluable diff-based results
as comparison between revisions, which should allow to find performance degradation on some versions.

### Synopsis

	::tclTestPerf::_test_run time scripts

This command executes a batch of scripts, a "canonical" tcl-list with an exception that rows could be commented (using tcl comment-chars `#`), 
which simply produces an output, can be used as anchor in automated evaluation of test-results (or diff).
The execution of uncommented blocks produces tclsh-similar output so `% command ...`, result of the single execution and result of the timerate invocation.




Special tokens can be used to setup or cleanup, to execute once some script before and after the measurement:




	::tclTestPerf::_test_run 500 {
	  setup {set i 0}
	  {clock format [incr i] -format "%Y-%m-%dT%H:%M:%S" -locale en -timezone :CET}

	  cleanup {unset i}
	  ...
	}

It is recommended to separate multiple scripts into several blocks, because this produces the summary output at end of execution, 
which contains the total, average, min and max time of execution of this script-block (thus making it easier to find a candidate affected by degradation).

### Usage as test-script

	## common test performance framework:
	if {![namespace exists ::tclTestPerf]} {
	  source [file join [file dirname [info script]] test-performance.tcl]
	}






	Created:        26-Nov-2018
	Tcl-Version:    8.5
	Tcl-Branch:     sebres-8-6-timerate
	Discussions-To: news:comp.lang.tcl
	Post-History: 
-----

## Abstract

This TIP proposes a new command `timerate` as well as a measurement toolchain for Tcl.

In order to make it possible to compare results among older Tcl versions, version 8.5 is suggested as the target version for this TIP.
(To forestall the objection that this TIP extends old versions, the command `timerate` shall be placed into the `::tcl::unsupported` namespace in versions prior to 8.7.)

Additionally, this TIP optionally proposes a small framework for testing performance. This makes it possible to create test scripts covering
performance-relevant code, and allows a `diff`-based comparison of the results to find performance degradations between revisions.

## Rationale

Although the existing command `time` can be used for performance measurement, it has several disadvantages:

 1. The execution is limited to a fixed repetition count, so the real execution time is undefined (if the evaluation time of a single iteration is unknown, or varies between versions). The time needed by a test case could grow to be very long.
 2. `time` does not have a calibration ability.
 3. `time` uses the default script invocation function (`Tcl_EvalObjEx`); therefore the speed of the script is different from that of compiled byte-code (_e.g._ in a compiled procedure).
 4. In addition, measurement is very imprecise on extremely fast scripts, because of certain unavoidable overhead costs (the execution of `time` and the interpreter overhead up to the `TclNRRunCallbacks` call that invokes `TEBCResume` for a compiled script).
 5. `time` introduces additional external costs on the script being measured, such as washout of the CPU cache and branch misprediction.

<hr/>


## I. Proposed command: `timerate`

	timerate - Time-related execution and performance measurement of a script

### Synopsis

	timerate script ?time? ?count?

................................................................................

       - the count how many times it was executed (`lindex $result 2`)

       - the estimated rate per second (`lindex $result 4`)

       - the estimated real execution time without measurement overhead (`lindex $result 6`)

Time is measured as elapsed time using the finest available timer resolution, not CPU time.
This command may be used to provide information as to how well a script or Tcl command is performing, and can help determine bottlenecks and fine-tune application performance.
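Given the result format sketched above, the individual fields can be picked out with `lindex`; a hypothetical session (the measured script is invented, and the values are machine-dependent) might look like:

	# measure a trivial script for roughly 1000 milliseconds:
	set result [timerate {string repeat x 100} 1000]
	# pick out the fields by their even-numbered indices:
	set usPerIter [lindex $result 0]  ;# microseconds per iteration
	set count     [lindex $result 2]  ;# how many times it was executed
	set rate      [lindex $result 4]  ;# estimated rate per second
	set netTime   [lindex $result 6]  ;# estimated time without overhead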

As opposed to the `time` command, which runs the tested script for a fixed
number of iterations, the `timerate` command runs it for a fixed time.
Additionally, the compiled variant of the script will be used during the entire measurement, as if the script were part of a compiled procedure,
if the `-direct` option is not specified.

The fixed time period and the possibility of compilation allow for more precise results and prevent very long execution times by slow scripts, making it practical for measuring scripts with highly uncertain execution times.

To measure very fast scripts as precisely as possible, a calibration step may be required.

### Options

`-calibrate`



The `-calibrate` option is used to calibrate `timerate`, calculating the estimated overhead of the given script as the default overhead for future invocations of the `timerate` command. If the `time` parameter is not specified, the `-calibrate` procedure runs for up to 10 seconds.

`-overhead` _double_


The `-overhead` parameter supplies an estimate (in microseconds) of the measurement overhead of each iteration of the tested script. This quantity will be subtracted from the measured time prior to reporting results.

`-direct`


The `-direct` option causes direct execution of the supplied script, without compilation, in a manner similar to the `time` command. It can be used to measure the cost of `Tcl_EvalObjEx`, of the invocation of canonical lists, and of the uncompiled versions of bytecoded commands.
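As an illustrative comparison (the measured script is invented for the example), the same script can be measured both ways:

	# byte-compiled evaluation (the default):
	timerate {lindex {a b c} 1} 1000
	# direct evaluation, comparable to the behaviour of [time]:
	timerate -direct {lindex {a b c} 1} 1000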

### Examples

Estimate how long it takes for a simple Tcl `for` loop (including operations on variable `i`) to count to ten:

	# calibrate:
	timerate -calibrate {}
	# measure:
	timerate { for {set i 0} {$i<10} {incr i} {} } 5000

Estimate how long a simple Tcl `for` loop takes to perform ten iterations, ignoring the overhead of managing the variable that controls the loop:

	# calibrate for overhead of variable operations:
	set i 0; timerate -calibrate {expr {$i<10}; incr i} 1000 
	# measure:
	timerate { for {set i 0} {$i<10} {incr i} {} } 5000

Estimate the speed of calculating the hour of the day using `clock format` only, ignoring the overhead of the portion of the script that prepares the time value for it:

	# calibrate:
	timerate -calibrate {}
	# estimate overhead:
	set tm 0
	set ovh [lindex [timerate { incr tm [expr {24*60*60}] }] 0]
	# measure using estimated overhead:
	set tm 0
	timerate -overhead $ovh {
	  clock format $tm -format %H
	  incr tm [expr {24*60*60}]; # overhead for this is ignored
	} 5000

### Impact

As the current implementation in [sebres-8-6-timerate](/tcl/timeline?n=100&r=sebres-8-6-timerate) 
(or, for Tcl 8.5, [sebres-8-5-timerate](/tcl/timeline?n=100&r=sebres-8-5-timerate)) shows, no public APIs are affected by introducing this change.
A few new functions are added to Tcl's internal API (see `tclInt.h`).

Don Porter (`dgp`) has reviewed a version of this change that targets Tcl 8.7; the review can be found in branch [dgp-sebres-timerate-review](/tcl/timeline?n=100&r=dgp-sebres-timerate-review).
That branch already contains the performance-testing framework as well as a new script, `tests-perf/clock.perf.tcl`, that instruments the `clock` command in preparation for integrating changes that improve its performance.

<hr/>
## II. Proposed performance-testing framework: `::tclTestPerf`

The small test suite in the `::tclTestPerf` namespace allows for batch-based measurement and produces diff-friendly results
that can be compared among revisions to detect performance regressions.

### Synopsis

	::tclTestPerf::_test_run time scripts

The `_test_run` command executes a batch of scripts provided in the syntax of a Tcl list, with the exception that rows may be commented out using Tcl's `#` syntax.
It produces output that can be used for automated comparison of performance measurements.
When the uncommented elements of the script are executed, output is produced that is similar to the transcript of a `tclsh` session. The output for each element consists of:
 * a line consisting of the string `% `, followed by the element being evaluated;
 * a line containing the result returned by evaluating the given element;
 * a line containing the result of applying `timerate` to the given element.

The special tokens, `setup` and `cleanup`, can be used to specify actions
needed to initialize and finalize the test sequence before and after
conducting the measurements; for example:

	::tclTestPerf::_test_run 500 {
	  setup {set i 0}
	  {clock format [incr i] -format "%Y-%m-%dT%H:%M:%S" \
	       -locale en -timezone :CET}
	  cleanup {unset i}
	  ...
	}

It is recommended that the programmer divide multiple scripts into several blocks, because each block produces a separate summary output at the end of execution,
which contains the total, average, minimum, and maximum execution times of the script that the block contains, making it easier to identify which specific element has suffered a performance regression.

### Usage as test-script

	## common test performance framework:
	if {![namespace exists ::tclTestPerf]} {
	  source [file join [file dirname [info script]] test-performance.tcl]
	}
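
Building on the fragment above, a complete performance script might look like the following sketch (the namespace name and the measured scripts are invented for illustration):

	## common test performance framework:
	if {![namespace exists ::tclTestPerf]} {
	  source [file join [file dirname [info script]] test-performance.tcl]
	}

	namespace eval ::tclTestPerf-Example {
	  namespace path {::tclTestPerf}  ;# assumed: makes _test_run visible here

	  _test_run 500 {
	    setup {set s [string repeat x 1000]}
	    {string length $s}
	    {string range $s 100 200}
	    cleanup {unset s}
	  }
	}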