Tcl Source Code

Ticket Change Details
Overview

Artifact ID: b7f4b5a3f4a34b37759ea71855079b0bbddad9837f9642e061e4c39b876bdc25
Ticket: de232b49f26da1c18e07513d4c7caa203cd27910
write-only nonblocking refchan and Tcl internal buffers
User & Date: apnadkarni 2024-04-02 02:49:54
Changes

  1. icomment:
    Nathan,
    
    The reason I backed out your test case modifications was differing semantics.
    You added an additional vwait command, in effect creating a delay that allows
    the connection success/failure to happen in time! I do think the original test
    could have been better targeted, because the puts is irrelevant to the actual
    bug. I may add a separate test case in my branch to illustrate the actual
    connection failure, but I don't want to muddy the waters here.
    
    And regarding your comment -
    
    *If you want to prove your point, and also show that you're not just holding my changes hostage to win this argument, you'll make it fail consistently on all platforms so that it has meaning as a test.*
    
    Some test cases, by their nature, cannot be made to fail reliably on every
    platform every time. This may be because the failure is timing dependent, or
    because it depends on the libraries in use, operating system versions, and so
    on. Examples include failures caused by race conditions in multithreaded
    code - it may take millions of iterations to trigger a failure, and it may
    depend on processor speed, the number of processors, etc. Likewise, 14.11 is
    timing dependent: there is effectively a race between connection completion
    and the delivery of the write event (see the sketch after the change list
    below).
    
    FWIW, on my system 14.11 fails every single time. I do not know whether that
    reproducibility is specific to Windows or to my system. Don indicated it is
    sporadic on his (non-Windows) system, which is an indication that it is
    timing related.
    
    So yes, while ideally failures would be consistently reproducible across all
    platforms all the time, that is not always possible. That does not make the
    test invalid.
    
    And in cases like this, where the cause of the failure has been traced to the
    code under test, there is all the more incentive to keep the test.
    
    Think of a test failure as analogous to a user reporting a crash. I suppose
    your response would be, "Oh, unless you can show that it happens on all
    systems, it is not a bug"!
    
  2. login: "apnadkarni"
  3. mimetype: "text/x-markdown"
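
For readers unfamiliar with the kind of race described in the comment, here is a
minimal, hypothetical Tcl sketch. It is not the actual test 14.11 (which
exercises a write-only nonblocking refchan); it only illustrates how what a
writable handler observes for an asynchronous connect can depend on whether the
connection attempt has resolved by the time the event is delivered. The address,
port, and variable names are assumptions.

```tcl
# Hypothetical sketch (assumed host/port; not the actual test 14.11):
# an asynchronous connect whose writable handler may run before or after
# the connection attempt has actually completed or failed.
set sock [socket -async 127.0.0.1 19999]   ;# port 19999 assumed to have no listener
fconfigure $sock -blocking 0

fileevent $sock writable {
    # -error reports the status of the async connect; what it returns here
    # depends on whether the connection attempt has finished yet.
    set result [fconfigure $sock -error]
    set done 1
}

after 1000 {set done timeout}   ;# safety net so the event loop terminates
vwait done

if {[info exists result]} {
    puts "writable fired; -error reported: '$result'"
} else {
    puts "writable event was not delivered within the timeout"
}
catch {close $sock}
```

Inserting an extra delay (for example another vwait, as in the modified test the
comment refers to) before the status is checked gives the connect time to
succeed or fail, which is why the original and modified versions of the test
behave differently.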