PAR scope and direction

Discuss the generic proposals for SJTAG
Ian McIntosh
SJTAG Chair
Posts: 419
Joined: Mon Nov 05, 2007 11:49 pm
Location: Leonardo, UK
Contact:

Re: PAR scope and direction

Post by Ian McIntosh » Thu Feb 15, 2018 10:11 pm

There were a few things that I felt came out of this week's meeting. I won't use the word "Needs" for these because I think that's become a rather overloaded term; let's call them expectations or aspirations:
  1. There is an aspiration to have something that a system specifier can give to sub-system designers that sets out what test support capabilities the sub-system should provide.
  2. There is an aspiration to have something that a COTS vendor can use to advertise to prospective customers what test support capabilities they can expect (which may have different "levels").
  3. There is an aspiration to have something that a system/sub-system designer can use to help select parts that have appropriate test support features and use them within a suitable architecture.
  4. There is an aspiration to have something that test designers/ATPGs can use to understand the test capabilities that are provided by the system/sub-systems.
I'm not convinced that these are all met by the same document. 1 and 3 are things that drive into the design; 2 and 4 are results of the design, although it might be fair to claim that 2 is a declaration of compliance with the design requirement. 1 and 3 (and to a lesser extent, 2) are essentially about planning and executing DFT. 4 is really abstracted from the DFT effort and works with what you have, whether the DFT is good or bad. Often, by the time you get to generating tests you're already well past the point where you can influence the design.

1 is about specifying requirements but it feels like it works best (maybe only works) when it goes along with a standardised form factor for the sub-assembly that defines what interfaces are to be provided. I guess that can include in-house system designs with a supporting Interface Control Document, but I'm wondering if that ends up putting the cart before the horse.
When I first got involved with this group, I think 3 is what I was expecting SJTAG was going to answer for me, but the longer I've been involved, the less I feel that's necessary - there are often bigger functional issues on component selection that will trump testability (unfortunately), and once you've sorted out a few basics the "architecture" side isn't a huge deal. Instead, I've moved towards seeing 4 as being the problem that most wants to be addressed; it's probably the harder task.

In the broad perspective, all of the above indeed fit into the scope of "system test", but we're not trying to solve all of system test for all perspectives of what that might mean - that's just way too big a task. We need to pick our fight(s) carefully.
Ian McIntosh
Testability Lead
Leonardo MW Ltd.

Post by Ian McIntosh » Sun Feb 18, 2018 7:24 pm

Apologies to everyone if I seem to be making a habit of posting things late on a Sunday leaving people little chance to read my comments prior to the meeting, but frankly, this is about the only free time I get these days...
LYUngar1 wrote:
Tue Feb 13, 2018 10:35 pm
I have been accommodating to all types of suggestions to "tone down" the need for test coverage metrics, but it appears that it is still not being included - even in the Need section of the PAR.
Louis, if you're referring to Brad's "SJTAG Need" post, that is merely a record of points captured during last week's discussion - the Need section of the PAR has not yet been written, and indeed Heiko's suggestion for accommodating your wish and your follow-up to that are recorded in this thread.

In fact, when couched in the terms proposed, it becomes something that could be moved to the Scope, as the Scope can include statements to declare things that the standard under consideration intends not to address. So that becomes another option. In either case, the mention of examples of coverage metrics seems to be entirely redundant.

However, this seems to be primarily a debate between Louis, Brad and myself and I'm not seeing much in the way of commentary or opinion from the other members of the group.

LYUngar1
SJTAG Member
Posts: 9
Joined: Thu Nov 30, 2017 9:45 pm

Post by LYUngar1 » Mon Feb 19, 2018 3:52 pm

The debate is not limited to Brad, Ian and Louis.
Heiko Ehrenberg wrote:
Mon Feb 05, 2018 7:20 pm
Perhaps it would help address Louis' concerns if we include a sentence in the Needs statement that says that SJTAG recognizes the need for some form of standardized measurement of test coverage and quality of test, but this standardization effort (this PAR) does not attempt to address that need?
Additionally, there have been discussions during meetings. Terry, Bill Eklow and Jon Stewart have chimed in on the subject of test coverage. Privately, I have had discussions with some people, and I know that test coverage is important to military and avionics folks. I have no objection to continued discussion on the subject, but I feel that at some point the decision was made to drop the matter of test coverage and leave it out of the bullets we are collecting for the Need section. I am objecting to that and wondering why.
Last edited by LYUngar1 on Mon Feb 19, 2018 4:36 pm, edited 2 times in total.

Post by LYUngar1 » Mon Feb 19, 2018 4:16 pm

Bradford Van Treuren wrote:
Wed Feb 14, 2018 12:04 am
I would have to courteously disagree with Louis. Test Coverage is not the only reason for the existence of SJTAG. There are many factors, too many to enumerate in a PAR.
I don't know how you concluded from what I wrote that I promote test coverage as "the only reason for the existence of SJTAG." I don't believe that and I doubt I wrote that.

The issues you raise can arguably be part of the PAR. However, none of them negate the overriding need of the industry to do what we do in order to bring about better tests. So it is not a case of "my" needs vs. "your" needs. I submit again that if there were no need for better tests (as measured by test coverage), the entire subject of testability would be unnecessary.

Unfortunately, many companies treat testability as totally unnecessary, probably because they do not recognize the complexity of test coverage. How can we allow ourselves to make that same mistake?

Bradford Van Treuren
SJTAG Chair Emeritus
Posts: 104
Joined: Fri Nov 16, 2007 2:06 pm
Location: NOKIA / USA

Post by Bradford Van Treuren » Mon Feb 19, 2018 4:57 pm

Since Ian needed to cancel the meeting today, I thought I would use this time to mine important ideas/comments from various email threads and post them to the forum so we have a single repository of ideas. I certainly have not had time to investigate all the discussions, but I will post what I discovered during this mining session.

Mukund Modi wrote on 2 Oct 2017:
Background: In DoD systems, the product's life cycle cost is very important. Unlike commercial systems, the usable life of the product is much longer. This situation requires maintenance and support policies that address issues like design for testability and system structures consisting of supportable and replaceable components. These considerations must be addressed throughout a product's life cycle. The collection of data is essential for test and for improvement of the design in the product's early stages of development.

Over the years there has been little progress made in achieving these DoD maintenance and support requirements. One reason is that standards like 1149.1 and its successors are used to address the design of systems that focus on the production cycle and relate to commercial systems. Such systems usually have short life cycles and a throw-away maintenance concept.

If I understand correctly, this study group is attempting to define a description to better manage how existing 1149 standards are used in larger systems. SJTAG can provide a management or supervisory standard to define the coordination and dependencies of instruments, configuration, and the use of other standards (JTAG, IJTAG, etc.).

I recommend the following:

1. Define the boundaries required for test and support (systems vs component).

2. Define the interfaces at the levels required to test and maintain the systems.

3. Document difficulties in utilizing current test standards.

A system consists of one or more interacting or interdependent groups of components. The system structure is designed to perform a set of functions. Systems have interfaces that allow them to operate in their intended environment. A component of a system can itself be a system.
From an email thread titled "Updated SJTAG notes for October 2" started on 4 Oct 2017 the following discussion was held over email:
On 4 Oct 2017 Louis Ungar reported (following that week's meeting):
In pursuit of reworking the legacy SJTAG definition " a system may be described as an organized collection of components or assemblies that are designed to operate together to perform one or more tasks or functions," I would like to comment that some of the functions are "synergistic" (only exist because of the combined contribution of two or more assemblies) but other functions are distinct irrespective of other assemblies in the system. The tests need to take this into account. One Design for Testability goal should be to allow each of the assemblies to be tested in that independent (isolated) manner.
Ian McIntosh responded on the same day with:
Indeed Louis, but at the risk of pedantry, the description of the lower level assembly (sub-assembly, board or whatever) would define the intended tasks or functions as only the set that can be provided by that level of assembly. So the definition seems to remain valid in the context of whatever you’re considering to be the testable item.

You’re certainly right that it is often the case that “the whole may be more than the sum of its parts”. That seems to lean towards “functional” test; in my experience, functional tests are generally very good at detecting the presence of a fault but not so good at diagnosing where it lies, unless the tests are very carefully designed and very focussed. And that generally requires that the testability is built-in from the bottom up (in my domain that means starting with the board designs) as trying to ask for more testability once something is built is a tough ask. So yes, the sub-assemblies need to be testable in their own right.
Louis Ungar replied:
If we are considering SJTAG (test) interfaces to be based on 1149.1 or 1687, then these mechanisms need to be able to help isolate assemblies from the system as a whole, and this should be alluded to in our definition of a system. A triangle is not merely a collection of 3 lines, but for testability and diagnosability we should be able to test each line for its properties without regard to the triangle. If all interfaces between the lines (corners and angles of the triangle) have the same testability properties (say, 1149.1 or 1687) then we should be able to accomplish that. I am suggesting that we say so in the definition.

So, actually I am not saying we should alter the properties of the definition that imply that “the whole may be more than the sum of its parts”. What I am trying to say, instead, is that the SJTAG definition should (in addition) encompass our desire to test the assemblies separately.
Terry Duepner tried to sum up the discussion with her response on 4 Oct 2017:
It sounds like we are trying to describe a system in a way that guarantees that we will use 1149.1 and 1687 in the solution. My biggest concern is that doing so may limit the ability of whatever standard is eventually defined to be adopted. As we get further up the system hierarchy, the contribution of test programs based on scan chains is significantly reduced. Thus, requiring access be supported up at the highest levels becomes a cost/benefit question where test will often lose out.

In addition, we are trying to bridge a very tools-based testing environment (chip and ICT board level testing), where pattern-based testing is very common, with system level testing, where functional testing rules the roost with its incumbent interdependency of software and hardware. In addition, we are bridging numerous vendor layers. It seems that if our group's task is to document the need for a System Level Test Standard, then we must first find out what the biggest headaches for system test folks are. There are some on our committee who work in this field, and that is a start. I also wonder whether we could get information from other large system designers to understand what types of problems they are seeing, or perhaps get a survey of some sort going at conferences such as AUTOTESTCON, where there is a large proliferation of system test engineers. Once we have discovered and documented those problems, we can bin them into categories to outline the need. With that information, I think it would be the challenge of the standards committee to see if they can come up with a methodology that would meet those needs. Whether or not existing standards can be leveraged would have a lot to do with what the needs are.

All of this points to making sure that we are not trying to force the solution before we even justify the needs. I believe this means we should keep the definitions more generic and not add too much specific detail into them.
Naveen Kumar Srivastava responded to Terry's posting with:
I agree. There is a repository of past discussions and documents, but we need some real-world problem schematics to look at for solutions and hence standards. Right now the universe looks too big. The need here is to write out those problems and look for solutions through a standard.
Louis Ungar then replied:
Great discussion...

Let's not forget that the name of the group is SJTAG, which implies - if it doesn't actually state - that we are trying to apply JTAG to system level test. Unless I am missing something, JTAG is 1149.1 and 1687 (and to lesser degree 1500, 1149.6, 1149.10 and even possibly SPI and I2C). So even if we can avoid these things in the definition of "system" they will pop up in the Scope and/or the Purpose.

I will let our Navy test experts chime in on this, Terry, but in chairing the Testability Panel at AUTOTESTCON for many years, I can tell you that System Level Test for that audience has always centered around being able to diagnose to the responsible replaceable subassembly.

I believe this is no different for other systems. A laptop manufacturer like Dell wants to know which assembly is causing a computer malfunction. When you at National put together an ATE and have a virtual instrument cobbled together from a power supply and a signal generator, you will want to know which one of those two is the culprit if the "system" is not working properly.

Perhaps the definition of the system can bypass the diagnostic issue, but wouldn't it be better to incorporate it there?
The last entry in the discussion came from Ian McIntosh:
Louis, as I now understand your suggestion, you are proposing that a system, for our purposes, should provide means to isolate a sub-assembly from the other parts of the system in order that the sub-assembly might be better tested without interference from other sub-assemblies?

While that's a desirable property, I don't believe it can be a mandatory one. I'd argue that SJTAG could still be applied to a system that does not exhibit such a property, although what it could achieve might be reduced. This raises the notion that we could have "SJTAG compliant" and "SJTAG compatible" systems, where a "compatible" system may not have all the features desired for SJTAG but nevertheless supports its use.
Bradford Van Treuren
Distinguished Member of Technical Staff
NOKIA MN

Post by Bradford Van Treuren » Mon Feb 19, 2018 6:16 pm

LYUngar1 wrote:
Mon Feb 19, 2018 4:16 pm
The issues you raise can arguably be part of the PAR. However, none of them negate the overriding need of the industry to do what we do in order to bring about better tests. So it is not a case of "my" needs vs. "your" needs. I submit again that if there were no need for better tests (as measured by test coverage), the entire subject of testability would be unnecessary.
In Louis' previous post:
If better test coverage of systems is not part of the Need, then I submit there is no need at all for testability, thus no need for SJTAG.
My point was that "TEST" is not the only factor driving the SJTAG effort. The first use case I shared had to do with functional operation and not anything to do with "TEST." The second use case showed how insight into what was done for manufacturing could be leveraged in the product's built-in testing to resolve testability issues ("test coverage" as you termed it) outside the normal use case of boundary-scan test, and something that traditional functional test was unable to perform.

To specifically state "test coverage" as one of the drivers limits the scope of usability for other concepts people find SJTAG to be useful for. The PAR needs to be more open to allow for innovative ideas the user community wants to apply it to. That was the point I tried to make in comparing SJTAG with 1149.1. The JTAG team provided an access mechanism and also provided some specific examples and mandatory concepts (EXTEST, INTEST, BYPASS, etc.) for use cases they felt were required to support and be compliant with 1149.1. The JTAG team, however, did not limit its use to just "TEST" in its purest sense. In fact, section 1.2.4 of the 1149.1 standard is entitled "Use of this standard to achieve other test goals." That desire for openness and freedom for the user to apply the SJTAG standard to whatever need they may have is why I want the PAR to be loosely defined as to the end applications. If we specify some use cases in the PAR, we could limit the scope of how it is used in the real world. That is my caution.

We have to get really creative, as the JTAG team was, in specifying what is to be included as being "Compliant" and what may be included or leveraged to be "Compatible." Specific use cases should not drive compliance; they should help guide the enlightenment as to what has to be mandatory to be "Compliant." We can learn a lot from the JTAG team's actions.
I sure did learn from my colleagues, Rod Tulloss, Najmi Jarwalla, Chi Yau, James Beausang, and Yervant Zorian back in the days when 1149.1 was written. Rod had the insight that this interface (1149.1) could certainly be used for more than just "TEST." This is why he was a driver for 1149.5, as that was a natural extension of 1149.1 beyond the traditional "TEST" cases. I'm not against "test coverage," but I caution that it should not be THE driver defining what SJTAG is about. SJTAG is much more than that.

The second issue I have with "test coverage" is in regards to your comment:
LYUngar1 wrote:
Mon Feb 19, 2018 4:16 pm
I submit again that if there were no need for better tests (as measured by test coverage), the entire subject of testability would be unnecessary.
The issue of "test coverage," and thus the ability to measure such coverage, is an outcome of the particular test being applied. I would argue that SJTAG provides the means for applying "tests" but is not responsible for what the test actually does. The coverage is a factor of the composition of the test and can only be measured based on the particular test being applied. That is a separate issue from the mechanism used to apply such a test.

For example, my embedded boundary-scan capability is used to apply all kinds of different "tests" and applications to the product. There are infrastructure tests on the access link itself, interconnect tests, programming applications, monitoring applications, configuration applications, etc., all performed using the same "IEEE 1149.1 interface." The Manufacturing Test via PC through the 1149.1 interface does the same. Is 1149.1 measuring the "test coverage"? Absolutely not. It is the analysis of the tests being applied that provides the "test coverage." Thus, an interconnect test is going to be analyzed differently from a functional test due to the difference in granularity of the diagnostics available.

SJTAG, I argue, is a means for applying the tests, and the "test coverage" is a factor of what is being applied through that interface. Therein lies the problem I am seeing with the current direction of the discussion. This is the tail wagging the dog in my mind. If we accept the mandate that we have to specify "test coverage" as part of the PAR, then we are expanding SJTAG into a domain it should not be taken into, and at the same time limiting its scope to just "TEST" applications. SJTAG as an interface mechanism and what SJTAG is used for are two distinctly different discussions and should probably be treated through separate PARs.
So "my need" is for providing an interface that can be leveraged to apply all sorts of applications to the board/system, in order to enhance my test capabilities and provide access to functional instrumentation from the same or higher level. "Your need" is requiring a methodology that improves the way testing is being done. They are two very different, but related subjects.
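To put that separation in concrete terms, here is a purely hypothetical sketch - none of the class or method names below come from any real SJTAG or 1149.1 tooling - showing that the access mechanism applies whatever it is given, while coverage, where it exists at all, belongs to the application being run through it:

```python
# Hypothetical illustration only: the names are invented for this sketch
# and do not reflect any real SJTAG or IEEE 1149.1 API.

class ScanInterface:
    """Stands in for an 1149.1-style access mechanism: it applies
    whatever operation it is handed and knows nothing about coverage."""
    def apply(self, application):
        return application.run(self)

class InterconnectTest:
    """A test carries its own notion of coverage; the interface does not."""
    nets_total = 100
    nets_exercised = 87
    def run(self, interface):
        # ...shift test vectors through `interface` here...
        return {"coverage": self.nets_exercised / self.nets_total}

class FlashProgramming:
    """Same access mechanism, but not a test at all."""
    def run(self, interface):
        # ...stream a programming image through `interface` here...
        return {"coverage": None}  # coverage is meaningless for this use

iface = ScanInterface()
print(iface.apply(InterconnectTest())["coverage"])   # 0.87
print(iface.apply(FlashProgramming())["coverage"])   # None
```

The same `ScanInterface` serves both applications; only the analysis of the test itself yields a coverage figure.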

Post by Ian McIntosh » Mon Feb 19, 2018 8:28 pm

LYUngar1 wrote:
Mon Feb 19, 2018 3:52 pm
I have no objection to continued discussion on the subject, but I feel that at some point the decision was made to drop the matter of test coverage and leave it out of the bullets we are collecting for the Need section. I am objecting to that and wondering why.
Louis, I'd ask you to re-read my earlier post: your suggestion has not been left out of the bullets - we simply have not yet assembled a collated list of those bullet points. The bullet point lists that you see in the posts Brad made are simply copies of the comments he collected on the markup slide during the course of that particular meeting. They're a complement to the meeting minutes. At some point (very soon, I hope), all of these things will need to be brought together so we can start sorting the essential from the peripheral.

That brings me to another point, and the main reason for this post and what I had hoped to raise at today's meeting:
After the last meeting I had the (optimistic) notion that I should try to re-draft the Need statement myself, as a straw-man proposal to be shot at, based on what had been commented over the past few weeks. Once again, I was beaten by time, but the little time I did have to think about the comments left me wholly unclear about the direction we think we're going. This kind of comes out in the post I made on Thursday 15th.

I've since realised that, as chair, I've probably allowed discussions to be too wide-ranging and have rather forgotten what this (System Test Access Management) group was sponsored by TTSC to do. The following is quoted from the original participation invitation:
The goal of this study group is to explore the feasibility and to develop a project authorization request (PAR), including the scope and purpose, for an IEEE standard that defines methods to allow, in conjunction with existing methods, for the coordination and control of device, board, and sub-system test interfaces to extend access to the system level, by leveraging existing test interface standards (by defining a description to better manage how they are used in the system).
Perhaps the key part is actually the final bit in parentheses - this is really what we asked TTSC to sponsor and what other groups, like 1687.1, are expecting us to provide. It's really item 4 from my earlier post.

We should consider System Test Access Management as a "project" within the broader scope of SJTAG/System Test, and some of the other topics as candidates for other projects. I appreciate that System Test Access Management might not describe what some members might have hoped to get from this group, but this is probably the essential core to making all the higher level aspirations viable: It's the framework they can build on, or at least a part of that framework.

Post by LYUngar1 » Tue Feb 20, 2018 11:01 pm

I think we are stating and restating our positions over and over again. A good analogy might be to consider SJTAG as a standard for drills. There may be many different considerations and different ideas about how best to accomplish making standard drills, but undoubtedly the underlying need for drills is the necessity to make holes. In this example, the Need for holes is analogous to the Need for better tests as measured by test coverage. The standard need not be about holes and in fact can be fully focused on drills. Holes, however, cannot be ignored.

Post by Bradford Van Treuren » Tue Feb 20, 2018 11:52 pm

You have a bit of a problem with your analogy. As an FAA Certified Airframe & Powerplant mechanic, I used to have to measure the thickness of crankcase bearings during overhaul. The micrometers at the shop did not have a curved platform matching the bearing curve. The proper practice was to use a golden set of drill bits as gauges. We would measure across the shaft of the drill bit together with the bearing and then subtract the diameter of the drill bit to get the true thickness of the bearing. (See http://knowhow.napaonline.com/know-note ... clearance/ with the section on Measuring Bearings to get a good idea.) Notice, I didn't care at all about holes in my application of the drill bit. That is my case in point with SJTAG. Your idea is only one of many justifications for SJTAG and may not even be considered by an end user for their application. You will notice I ignored holes in my application for very valid reasons. Could these golden bits be used to drill holes? Sure, but I would be fired for damaging the golden measuring rods. I was repurposing the drill bits for my application in a way the original designers probably did not consider when designing them. That freedom is what made IEEE 1149.1 so useful for applications other than just test. I want to ensure SJTAG provides that same freedom.

Post by LYUngar1 » Wed Feb 21, 2018 5:52 pm

Brad, with all due respect, your example does not negate the analogy. It shows that you used drill bits for measurements (a kind of test coverage). I suppose your point was that SJTAG can be used for other purposes besides test coverage metrics. Agreed, and I never said it can't be. My complaint is that my request to include in the Need section verbiage about test coverage improvements has not happened. If I understand Ian correctly, inclusion of such a statement in the Need section is simply being delayed. If I understand you correctly, it is of low priority and I can't tell whether or not you wish to leave it out altogether. I am here to say that in my opinion such verbiage in the Need section is necessary.

If the need for better test coverage in the 1149.1 standard was not included, it should have been. Perhaps that is why there are so many people familiar with JTAG who have never heard of boundary scan. They use JTAG to program FPGAs but haven't the foggiest notion about testability. I would like to avoid that for SJTAG. Perhaps it makes no difference to the tool makers who simply want to sell JTAG tools, but it makes a difference to me as a testability consultant that SJTAG improve test coverage (and diagnoses and possibly prognoses).

Post by Ian McIntosh » Thu Feb 22, 2018 7:50 pm

Please let's avoid introducing analogies and debating their merits - we've enough "real" things to sort out.

I keep feeling that one of the things, maybe the main thing, that is causing us to keep revisiting the same issues is a lack of clarity of what this group was created for. I think a contributor to that is that "SJTAG" has almost become an overloaded term and we need to be clearer about the distinction between this System Test Access Management study group and the broader concept of SJTAG (which encompasses more than System Test Access Management). I quickly threw together this little graphic to try to encapsulate the kind of hierarchy (I actually had a vision of something slightly more elaborate, but I wasn't going to spend time fighting with PowerPoint over layout):
[Attachment: SJTAG-STAM.PNG - SJTAG / STAM / device standards relationship]
The concept this is aiming to show is that the (typically device level) standards/interfaces are the "primitives" that are brought together by System Test Access Management, as each of those currently has no way to know how to interoperate with another. The "SJTAG Applications" can then use STAM in order to exploit what the primitives offer. SJTAG then forms part of an even broader picture of applications (i.e. SJTAG does not solve "all of test"). Others might well point out flaws with this diagram, but as a simplistic view of the overall concept, I think it is adequate.

Post by Ian McIntosh » Fri Feb 23, 2018 7:36 pm

I'm not going to manage to draft up a form of words for Need, but I can probably have a go at pulling together the bullets for consideration...
  • We should be looking to utilise any test features that exist within COTS items
  • We need better tools, but that requires that the tools can "see" the features that are available
  • Leveraging the interface standards is not the only way to do this
  • (I'm just copying the following directly from Brad's post as I couldn't see a better way of incorporating it here)
    • SJTAG is intended to improve the ability to test, diagnose and provide prognostic health information about systems.
      • (Analyzing from the top down in decomposition is necessary to know what has to be exposed. How someone implements it is less important if it is clearly documented and usable. Testability “flow down” may be outside of SJTAG scope: Testability Framework Requirements. Available testability “flow up” is what is advertised from the bottom up: Availability of Testability Features.)
    • A standardized method is needed to coordinate
      • (coordinate - exposure of underlying test capabilities that might exist?) (everyone puts testability at their level and don’t usually plan for use at a higher level) (Documentation of what is available at each level is key.)
    • component
      • (component could relate to discretes and not what we want)
    • access topologies,
      • (Board level BIST is more than a component access topology.)
    • interface constraints, and other dependencies at the board and system level
      • (Should really focus on system and sub-systems, which includes boards.)
    • in order to be able to effectively leverage the existing and future component level standards. Thus, a new supervisory standard is required to define the coordination and dependencies of instruments as well as configuration, management, and application of vector based testing at the board and system levels.
      • (The higher up you go in the hierarchy, the more you morph into functional testing.) (Downloading code into modules and executing them is also part of this infrastructure that is needed.)
  • Specific interfaces are not really broad enough for the Needs statement
  • This standard recognizes the need for some form of standardized measurement of test coverage and quality of test, but this standardization effort does not attempt to address that need (Additional comment: As this, or words to this effect, describe an exclusion from the standard, it could become part of the stated Scope instead of the Need)
  • Should be inclusive of diverse Use Cases (and not preclude any), but should they be detailed in the PAR?
In assembling the above, I've filtered out many of the comments that seemed to me to be more "meta level" (for example almost all of the comments arising out of the Feb 5 meeting read more like philosophical discussion) and I may have abbreviated remarks for compactness, so if you think there are key things missing, add them in a reply.

Note that none of this implies anything about the final form of words - it's just trying to bring together what seem to be the key points so we have something more condensed to work from. In the end, I don't really expect that everything above will be included - the PAR is only a thumbnail sketch of the standard.
Ian McIntosh
Testability Lead
Leonardo MW Ltd.

User avatar
Ian McIntosh
SJTAG Chair
Posts: 419
Joined: Mon Nov 05, 2007 11:49 pm
Location: Leonardo, UK
Contact:

Re: PAR scope and direction

Post by Ian McIntosh » Wed Feb 28, 2018 6:52 pm

When I circulated the draft minutes, I mentioned that there were a few things that I didn't have a chance to comment on during the discussion, so I'll say those things here, using extracts from the notes:
Motivation to have a standard that requires a sub-assembly to make information available to the higher level; not motivated to have the other standards used more.
My position is entirely the opposite. While I understand and see value in that motivation, it doesn't hit me as high priority. We're designing most of our product ourselves, so that "need" is met simply through flowing down a requirement and incorporating it in the inter-module ICD; we can do that now without any new standard. The one drop-off is where we might use a COTS board where we can't mandate its ICD, but since we're not going to do any repair work on those, all we need is a Go/NoGo indication, which we will usually get from its own BIT. I could see it being of greater use to a systems integrator assembling a system from purely COTS items, but I think those cases are probably built around some common architecture/backplane scheme (e.g. something like ATCA) which will define pretty much all the interfaces. I could see such a standard being useful, and fitting within the "SJTAG" banner, but I don't see it as crucial and it's certainly not something I (or my management) would be interested in pursuing.
I do see the interoperability of 1149.1, 1149.7, 1687, 1687.1, 1500, 1838, SPI, I2C, etc., as a big shortfall, as tooling struggles (if it manages at all) to pull these together, and largely treats each in isolation from the others. Many of these standards claim to enhance testability which, by and large, they do, but only at the chip level. Making that usable at the board level and above is non-trivial, as is constructing anything that requires two devices on different interfaces to collaborate. It can be done, but it's hard work and difficult to show ROI. In my view (personally, rather than as SJTAG chair), this is what the STAM PAR should be about.
It's possible that the Scope and Purpose could focus on the existing standards for now, while the Need could be wider.
I might structure that suggestion differently: The Scope is clearly the scope of the standard being proposed so per the example would focus on existing standards. The Purpose is where I'd think the wider view (what I think Louis referred to as "the 30 thousand foot view") would go. This allows multiple standards to share a common purpose. The Need should be a justification for why this standard is required (i.e. in the context of the Scope).
One of the end user's needs is to test the boards in a box without opening the box.
I'm fully behind that. But what stuns me is that the Navy guys aren't getting that from their systems anyway! Or perhaps, more likely, the vendors simply aren't making that level of test available to customers? I can't think of one of our systems that wouldn't do that straight off the bat, because that capability is pretty much a crucial feature in doing our own in-house testing and field maintenance: It's in the designer's/vendor's interest to be able to do it, so why wouldn't it be present? With that said, we have situations where we are legally required (to meet export rules) to not provide access to certain modules (for test or otherwise) to end users. Anyway, it feels like a requirements issue again, perhaps one that would benefit from a standard, but I'm not convinced of it, and I think the Mil/Aero industry is moving in a different direction anyway as regards support, but that's maybe a different conversation.
Ian McIntosh
Testability Lead
Leonardo MW Ltd.

User avatar
Heiko Ehrenberg
SJTAG Vice Chair
Posts: 45
Joined: Wed Nov 21, 2007 3:15 pm
Location: GOEPEL Electronics - Austin, TX
Contact:

Re: PAR scope and direction

Post by Heiko Ehrenberg » Wed Feb 28, 2018 7:57 pm

LYUngar1 wrote:
Tue Feb 20, 2018 11:01 pm
I think we are stating and restating our positions over and over again. A good analogy might be to consider SJTAG as a standard for drills. There may be many different considerations and different ideas about how best to accomplish making standard drills, but undoubtedly the underlying need for drills is the necessity to make holes. In this example, the Need for holes is analogous to the Need for better tests as measured by test coverage. The standard need not be about holes and in fact can be fully focused on drills. Holes, however, cannot be ignored.
But who says that everyone needs a round hole, Louis? Some want a square hole, some want a tapered hole, ... Who is to say which one is better or more important? So why limit the "hole making" standard to just drills that make round holes?
- Heiko

User avatar
Heiko Ehrenberg
SJTAG Vice Chair
Posts: 45
Joined: Wed Nov 21, 2007 3:15 pm
Location: GOEPEL Electronics - Austin, TX
Contact:

Re: PAR scope and direction

Post by Heiko Ehrenberg » Wed Feb 28, 2018 8:32 pm

Ian McIntosh wrote:
Fri Feb 23, 2018 7:36 pm
...
SJTAG is intended to improve the ability to test, diagnose and provide prognostic health information about systems.
...
This standard recognizes the need for some form of standardized measurement of test coverage and quality of test, but this standardization effort does not attempt to address that need (Additional comment: As this, or words to this effect, describe an exclusion from the standard, it could become part of the stated Scope instead of the Need)
...
To me, the above acknowledges the need for good testability and test coverage, for certain use cases. But since this study group is supposed to investigate "system test access management", not only quality of test, I don't think we need to say more about it in the PAR's Need statement.

Another way to say what Brad was getting at - I think - is that the "JTAG" standard doesn't define test coverage or what a test should do or how well it should do it; it defines an interface (TAP) and tools (instructions and registers / cells) that can be utilized for "tests" such as interconnect testing, debugging, device programming, etc. Likewise, the aim of the "SJTAG" standard should be to extend that infrastructure to the system level, making the interfaces and tools defined by 1149.1, 1687, and other (!) standards available for users to create their system level tests, whatever they may be (one person may focus on "testing" the outside of chips, another may focus on the inside of chips, and most probably will want to do both - but either way, without a system level infrastructure one cannot get to the "tools" inside the chips).
While test coverage and testability are important topics, this study group was tasked to investigate "system test access management", not a measure of test coverage or testability. Without access there is no coverage or testability. "Test" as such, with a measure of test coverage / testability, is only a subset of the use cases that have been identified in the past by the SJTAG study group.
- Heiko
