Within the scope of EST, I'm also considering scenarios such as Environmental Qualification (design proving), TAF (Test/Analyses/Fix), Reliability Growth, etc., i.e. long-term tests as well as (relatively) short-term thermal and vibration stress screening.
An obvious factor here is that faults have to be reported to the test operator. Where tests are being directed from an external Test Manager, this is inherent; otherwise there must be some form of communication, either via a test bus or a mission interface. It also needs to be noted that these tests are typically supervised by a "watchman" with a remit to monitor several test sites over a shift.
The actual data presented, and the timeliness with which it is presented, are not necessarily clear-cut matters, and depend on how the tests are managed.
In most cases, it is essential to be able to correlate any failures with the conditions prevailing at the time of failure. Generally, "time tagging" of fail messages will suffice, as the environmental stimulus is frequently controlled by a discrete autonomous process, so more sophisticated methods of correlating UUT behaviour with stimulus are rarely viable. This time-tagging may be performed externally (i.e. external to the JTAG Test Manager) if test results are being reported to the monitoring system in "real-time"; otherwise, if the relaying of results is deferred, then the JTAG Test Manager must add the time-tags.
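The two tagging regimes can be sketched as follows. This is a minimal illustration, not an implementation of any particular Test Manager; the class and field names (`FailMessage`, `DeferredTestManager`, etc.) are hypothetical.

```python
import time
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FailMessage:
    """Illustrative fail report; field names are assumptions, not a real format."""
    test_id: str
    failing_net: str
    timestamp: Optional[float] = None  # filled in by whichever side does the tagging

class DeferredTestManager:
    """Deferred relaying: the JTAG Test Manager must apply the time-tag
    itself at the moment of failure, then buffer results for later transfer."""
    def __init__(self) -> None:
        self.buffered: List[FailMessage] = []

    def report_failure(self, test_id: str, failing_net: str) -> None:
        # Tag at point of failure, since relay to the monitor happens later.
        self.buffered.append(FailMessage(test_id, failing_net, timestamp=time.time()))

    def flush(self) -> List[FailMessage]:
        """Hand accumulated results to the monitoring system and clear the buffer."""
        out, self.buffered = self.buffered, []
        return out

def monitor_receive(msg: FailMessage) -> FailMessage:
    """Real-time relaying: the external monitoring system tags on receipt."""
    if msg.timestamp is None:
        msg.timestamp = time.time()
    return msg
```

Either way, every fail message ends up with a timestamp that can be correlated against the (separately logged) stimulus profile.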
In determining what data is needed timeously by the operator and what can be deferred, it is probably best to consider "critical" and "non-critical" failures, the terms being used here in the context of "test impact" rather than "mission impact".
Critical failures are those where continued operation beyond the point of failure may endanger the UUT, the test facility or personnel: such things as overtemperature, gross over-current, etc., where fire or explosion may occur. In such cases it is essential to shut down power to the UUT and stop any stimulus as quickly as possible. This can be expected to be an automated process, so the relayed fault information must contain sufficient detail to indicate criticality and trip the "alarm" system of the test facility.
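The automated criticality handling described above might look something like this sketch. The fault codes and the split into a classifier plus facility callbacks are illustrative assumptions; real thresholds and shutdown paths are UUT- and facility-specific.

```python
from enum import Enum
from typing import Callable

class Severity(Enum):
    CRITICAL = "critical"          # test must stop: risk to UUT, facility or personnel
    NON_CRITICAL = "non_critical"  # log and continue the test cycle

# Illustrative set of fault codes treated as critical; real values would
# come from the test specification for the particular UUT and facility.
CRITICAL_CONDITIONS = {"overtemperature", "gross_over_current", "smoke_detected"}

def classify(fault_code: str) -> Severity:
    """Map a relayed fault code to a test-impact severity."""
    return Severity.CRITICAL if fault_code in CRITICAL_CONDITIONS else Severity.NON_CRITICAL

def handle_fault(fault_code: str,
                 shutdown_uut: Callable[[], None],
                 trip_alarm: Callable[[str], None]) -> Severity:
    """On a critical fault, cut UUT power/stimulus and trip the facility alarm;
    non-critical faults are simply classified and left to normal reporting."""
    sev = classify(fault_code)
    if sev is Severity.CRITICAL:
        shutdown_uut()          # remove power and stop the environmental stimulus
        trip_alarm(fault_code)  # raise the facility "alarm" so the watchman is alerted
    return sev
```

The point of the sketch is that the relayed message only needs to carry enough to drive `classify`; the shutdown and alarm actions belong to the facility, not the Test Manager.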
At this point, a diagnostic would be helpful in deciding on the next actions. It would generally be considered inadvisable to switch the UUT on again, so the diagnostic must be derivable from either data on failing vectors/failing nets output at the point of failure or test data held in non-volatile storage that is accessible with UUT power off.
For non-critical failures, such as indication of an open circuit between two devices, it may be sufficient to indicate to the operator simply the number of failures that have occurred during each test cycle (in EST it may be expected that a pre-determined test cycle will be executed repeatedly during stimulation).
Where tests are being run concurrently on several boards within a UUT, this very basic level of reporting may be all that is viable, in terms of utilisation of the reporting bus. For externally managed JTAG tests, a greater level of detail on the failures may be available, but in either case, it will usually be left to the operator (or some overall supervisory process) to determine when or if the test run should be terminated.
For EST, there is probably no great penalty in conducting off-line diagnostics. While real-time diagnostics may be helpful, it is important that these are not provided at the cost of significantly extending test execution time. One of the main benefits of using JTAG within EST is to increase the number of electrical test cycles which can be performed during each stimulus cycle, when compared to functional testing. Typical board-level diagnostic techniques will analyse the syndromes from the failing vectors to determine the most probable causes to net or possibly pin level. For a few single faults on unrelated nets, this may not be very time consuming, but if multiple faults manifest then this kind of analysis on a large design may run into minutes.
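As a deliberately naive illustration of why this analysis can grow expensive, the sketch below ranks suspect nets by counting, for each failing vector, the nets that vector exercises. Real diagnostic tools use much richer fault models (stuck-at, bridging, dominance between candidates); the function and the vector-to-net mapping here are hypothetical.

```python
from collections import Counter
from typing import Dict, List, Set, Tuple

def rank_suspect_nets(failing_vectors: List[str],
                      vector_to_nets: Dict[str, Set[str]]) -> List[Tuple[str, int]]:
    """Vote-counting sketch of off-line syndrome analysis: every net exercised
    by a failing vector accrues one vote; the highest-scoring nets are reported
    as the most probable causes. With multiple overlapping faults on a large
    design, the candidate set (and hence the analysis) grows quickly."""
    votes: Counter = Counter()
    for v in failing_vectors:
        votes.update(vector_to_nets.get(v, set()))
    return votes.most_common()
```

Because nothing here needs the UUT powered, this style of analysis fits the off-line, deferred-diagnosis approach suggested above.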
In the case where embedded JTAG testing is employed during EST and the delivery of detailed test data is deferred, then we can consider that there will likely be periodic opportunities to transfer the test data to the supervisory system or monitoring console. The embedded solution therefore needs to have sufficient results storage to be able to accumulate the essential test data (possibly self-diagnosis results or failing vector/net data in some compressed form, rather than full test result vectors). However, the storage requirements may not be easy to determine at design time.
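One pragmatic answer to the unknowable storage requirement is a bounded store that keeps only compressed failing-net records and sheds the oldest data, while counting how much was shed so the supervisory system knows. This is a sketch of that idea, not a description of any particular embedded Test Manager; the record format is an assumption.

```python
from collections import deque
from typing import List, Tuple

class ResultsStore:
    """Bounded storage for deferred test data: holds compact (net, cycle)
    records rather than full result vectors, discarding the oldest record
    when the capacity fixed at design time is exceeded."""
    def __init__(self, capacity: int) -> None:
        self.records: deque = deque(maxlen=capacity)
        self.dropped = 0  # records shed due to the storage bound

    def add(self, record: Tuple[str, int]) -> None:
        if len(self.records) == self.records.maxlen:
            self.dropped += 1  # note the loss so the operator knows data was shed
        self.records.append(record)  # deque drops the oldest automatically when full

    def drain(self) -> List[Tuple[str, int]]:
        """Transfer accumulated data at a periodic reporting opportunity."""
        out = list(self.records)
        self.records.clear()
        return out
```

Sizing `capacity` is exactly the design-time judgement the text identifies; the `dropped` count at least makes an undersized store visible after the fact.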
Leonardo MW Ltd.