
SAR Case Study

3 System Design

3.1 System Process Description

The system process captured customer requirements and converted them into functional and performance processing requirements (Figure 3-1). Signal processor developers then performed functional and performance analyses to properly decompose the system-level description.

The first step captured the requirements in the form of simulatable functions, or an "executable specification." An Executable Specification is a description of a system or subsystem that can be executed in a computer simulation and whose purpose is to reflect the precise behavior of the intended system. At the level of the Executable Specification, all functions were independent of implementation. MIT/Lincoln Laboratory (MIT/LL) generated the Executable Specification for the synthetic aperture radar (SAR) processor and delivered it to Lockheed Martin Advanced Technology Laboratories (LM ATL) as part of the SAR processor's requirements.
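As a rough illustration of the idea (this is not the MIT/LL deliverable; the entity name, ports, and the 1 ms latency below are invented for this sketch), an executable specification can be as simple as a behavioral VHDL description that captures a required function and its input-to-output latency without any implementation detail:

  entity image_former_spec is
    port (
      sample_in  : in  real;   -- abstract input value, independent of any implementation
      sample_rdy : in  bit;
      sample_out : out real    -- abstract output value
    );
  end entity image_former_spec;

  architecture requirement of image_former_spec is
  begin
    process (sample_rdy)
    begin
      if sample_rdy = '1' then
        -- Only the required input-to-output latency is modeled; the assignment is a
        -- placeholder for the specified signal-processing function.
        sample_out <= sample_in after 1 ms;
      end if;
    end process;
  end architecture requirement;

A customer and contractor can simulate such a description against agreed stimuli, which is what removes the ambiguity of a purely written specification.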

Figure 3-1: System design process.

3.1.1 RASSP Innovations in the System Design Process

3.1.1.1 Hierarchical Simulation

The verification methodology of the Rapid Prototyping of Application-Specific Signal Processors (RASSP) process deviated significantly from the traditional approach of physical prototyping. The RASSP methodology uses a top-down, VHSIC Hardware Description Language (VHDL)-based, hierarchical, virtual-prototyping process. This process supported accurate trade-offs by using simulations to verify hardware/software interaction. To the maximum extent possible, users chose models, largely hierarchical VHDL models of the architecture, from the model-year architecture elements in RASSP's reuse library. Users developed new library elements and inserted them into the reuse library as required.

Virtual prototyping played a major role at each level of the top-down design process. Prototyping began at an abstract level and proceeded to more concrete descriptions as details were resolved. Models supported the process at each level of abstraction to verify distinct aspects of the system and its components. The resulting hierarchical model set would support model-year upgrades by providing various degrees of implementation-independent description of the system's functionality.

Virtual prototyping began with an Executable Specification that described the system's signal-processing requirements. Refinement of this specification was a joint effort between the contractor and the customer to obtain a top-level design guideline that captured the customer's needs. The Executable Specification removed the ambiguity associated with written specifications. Token-based performance-model virtual prototyping determined resource requirements such as the number of processors and the amount of memory.

Developers evaluated the impact of different mappings of software functions to processing hardware. At the next level, an abstract, behavioral, virtual prototype verified the correctness and performance of the signal-processor design. At the lowest design level, a detailed behavioral virtual prototype described clock-cycle and bit-level details of the new hardware. All virtual-prototype models verified hardware/software interaction. Table 3-1 summarizes the purposes of the primary virtual prototype models.

Table 3-1: Model Hierarchy

Model                    | Purpose                                                                        | Entity Resolution                                     | Signal Resolution | Time Resolution
Executable Specification | Accurately describe function and system interface requirements                | System and interfaces                                 | Abstract values   | Major event
Performance Model        | Optimize mapping, verify throughput, check resource requirements              | Functional units (processing element, switch, memory) | Abstract tokens   | Major event
Abstract Behavior        | Test numerical correctness, generate test point data, verify overall approach | Functional units (processing element, switch, memory) | Abstract values   | Major event
Detailed Behavior        | Verify design and system interaction                                           | IC, MCM, discretes                                    | Bit-true          | Clock edge
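The performance-model row can be made concrete with a small sketch. The following is illustrative only; the token record fields, the assumed 50 Mword/s processing rate, and the port names are not the interfaces of the RASSP reuse-library models. The point is that a token carries bookkeeping information (size, timestamp) rather than data values, so a processing-element model only converts token size into a delay:

  package perf_tokens is
    type token_t is record
      src_id  : natural;  -- identifier of the producing element
      length  : natural;  -- data size in words; no data values are carried
      t_stamp : time;     -- creation time, for latency bookkeeping
    end record;
  end package perf_tokens;

  library ieee;
  use ieee.std_logic_1164.all;
  use work.perf_tokens.all;

  entity pe_perf_model is
    generic (words_per_sec : real := 50.0e6);  -- assumed processing rate
    port (
      tok_in  : in  token_t;
      tok_rdy : in  std_logic;
      tok_out : out token_t
    );
  end entity pe_perf_model;

  architecture perf of pe_perf_model is
  begin
    process (tok_rdy)
      variable proc_delay : time;
    begin
      if rising_edge(tok_rdy) then
        -- The delay depends only on the token's size; no signal-processing math is done.
        proc_delay := real(tok_in.length) / words_per_sec * 1 sec;
        tok_out    <= tok_in after proc_delay;
      end if;
    end process;
  end architecture perf;

Because such models exchange only tokens, networks of them simulate quickly, which is what makes mapping and throughput trade-offs practical early in the design.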

3.1.1.2 System Tools

Developers used Ascent Logic Corporation's RDD-100 tool to perform requirement analysis, functional analysis, and physical decomposition for the SAR benchmark. RDD-100 is an Entity, Relationship, Attribute (ERA) database tool with a graphical user interface that provides developers with requirement, functional, and physical views of the system. The requirements are related to the functions, and the functions are allocated to the physical architecture. The interrelation of these three views enabled the systems engineers to automatically generate the specification documents from the RDD-100 database. The physical view let developers use PRICE's cost model and Management Sciences, Inc.'s RAM-ILS tool to analyze cost, reliability, and maintainability.

The system engineering tools were not integrated during the SAR benchmark; instead, the tools were used individually. The SAR results and templates were used as examples to drive the tool integration and to define the data exchange necessary for systems trade-off studies. See the RASSP System Process application note for more information on the RASSP system design process and the use of the Integrated System Engineering (ISE) tools (Figure 3-2).

Figure 3-2: RASSP Integrated System Engineering tools.

3.2 System Requirements Analysis

MIT/LL provided the originating requirements for the SAR in a Benchmark Technical Description (BTD) and a Benchmark Executable Specification. The BTD contained programmatic and system requirements. Developers extracted the system requirements from this document and placed them in the RDD-100 database. The database then generated a SAR Configuration Item Development Specification (CIDS) based on these requirements. To achieve this, developers reworded, reordered, and refined the BTD requirements in the database, using the Benchmark Executable Specification for clarification. They then performed requirements analysis and functional analysis in RDD-100 based on the CIDS. As a result, they maintained traceability to the BTD and generated a full set of software and hardware specifications from the RDD-100 database. For more information, see the application note Integrated System Tools.

Figure 3-3 shows the SAR requirements tree. The source document at the top of the tree is the BTD. Each block below the source is a requirement documented by the source. In the upper left corner of each requirement block is the requirement number. If the requirement number has an SOW (statement of work) prefix, then that requirement is a BTD paragraph that does not contain a true requirement for the SAR; it may contain descriptive or non-technical programmatic information. Developers incorporated these types of paragraphs into the CIDS. An example is requirement SOW.2.2, External Interfaces, in the upper left corner. It was only a title paragraph, with the actual requirements located in subparagraphs. The requirements that have no prefix on the associated requirement number, such as 3.1.1.1, are CIDS requirements.

Requirement names are in the center of each requirement block. Each requirement name has a prefix that indicates the origin or the level of the requirement. An example is requirement 3.1.1.1. Its name has the prefix SOW, which indicates that it originated in the BTD but may contain minor re-wording. A lesson learned was that all originating requirements must be kept in original form within the database. The CIDS requirements must be separate entries in the database with traceability to the BTD requirements.

In the requirements diagram, all requirements with a black square in the upper left corner have further decomposition and allocations that can be retrieved by clicking on that requirement. In a more complex system, the whole process would be repeated at each subsystem level.

After generating the CIDS, developers performed Requirements Analysis in parallel with Functional Analysis and System Partitioning tasks. An example is requirement 3.2.1.2.2, SOW PRI Detection. Clicking on that requirement in Figure 3-3 displays the full decomposition, derivation, and allocation of the originating requirement. Decomposition refers to the action of separating multiple requirements into individual requirements. During decomposition, the form of the requirement does not change, and only minor re-wording or budgeting is permitted. Derivation of a requirement changes the basic form of the requirement to make it appropriate for a lower-level subsystem. For example, a noise budget can be decomposed into individual subsystem noise contributions, but it might be derived to word-width and linearity requirements for an A/D (analog-to-digital) converter.

Figure 3-3: SAR requirements diagram.
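To make the decomposition/derivation distinction above concrete, the noise-budget example can be written out as follows. These are generic textbook relations with no values taken from the BTD: decomposition splits the budget into independent subsystem contributions, while derivation re-expresses the A/D converter's share as a word-width requirement using the ideal-quantizer signal-to-quantization-noise relation.

  \sigma_{\mathrm{total}}^{2} \;=\; \sigma_{\mathrm{receiver}}^{2} + \sigma_{\mathrm{A/D}}^{2} + \sigma_{\mathrm{processing}}^{2} \qquad \text{(decomposition)}

  \mathrm{SQNR}_{\mathrm{ideal}} \;\approx\; 6.02\,b + 1.76~\mathrm{dB}
  \;\;\Longrightarrow\;\;
  b \;\ge\; \frac{\mathrm{SQNR}_{\mathrm{req}} - 1.76}{6.02} \qquad \text{(derivation)}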

The first step was to decompose paragraphs into individual requirements or requirement sets. A requirement set was a cluster of requirements that could be allocated to a single function and system element. In addition, a proper requirement set had to be verifiable through a single test, inspection, demonstration, or analysis. In this case, the original requirement paragraph (3.2.1.2.2) was decomposed into six different requirement sets. The requirement number and SOW prefix now contain a letter suffix (3.2.1.2.2.a) to indicate that the requirement is a decomposition of the originating requirement.

The system then traced requirement 3.2.1.2.2.a to the function Get Sensor Data, which was the functionality mandated by the requirement. Get Sensor Data was then allocated to the Data I/O Module. Once the subsystem was assumed (i.e., the Data I/O Module), a specification had to be generated for that subsystem. The requirements in the Data I/O Module specification had to be traced to the originating requirements. In this case, the Data I/O Module requirement DATA Data Synchronization was either a derivation or a decomposition of requirement 3.2.1.2.2.a. The DATA prefix, taken from the equipment name, indicates that this was a requirement from the Data I/O Module specification.

After the initial partitioning and functional allocation was completed, the Subsystems Architecture and Allocations Document [sar_all.pdf] was generated from the database. The following full set of software and hardware development specifications was generated automatically from the RDD-100 database:

Generating all specifications from this database minimized discrepancies among documents. The database also defined hardware/software and hardware/hardware interfaces. The Interface Definition Specification [sar-ids.pdf] was automatically generated from the database.

3.2.1 Executable Specification

MIT/LL also provided an Executable Specification along with the written requirements. The Executable Specification package included a SAR processor model, a test bench, input stimuli consisting of two four-frame input data sets, reference images, and a user's manual.

The Executable Specification model included information on I/O (input/output) timing constraints, I/O protocols, I/O data formats, system latency, and internal function. The external ports were modeled with clock-cycle temporal precision and bit-true data values. By design, the Executable Specification did not contain information on hardware implementation or structure, except where compatibility with external interfaces required it. This allowed maximum flexibility in architecture selection and detailed design. The internal function did not contain timing, except for input-to-output latency. Data was expressed as floating-point values.
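A hedged sketch of that modeling style is shown below; the entity name, port names, and the 16-bit width are assumptions, not those of the delivered model. The external interface is clocked and bit-true, while the internal computation operates on floating-point values:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity exec_spec_io is
    port (
      clk      : in  std_logic;
      din      : in  std_logic_vector(15 downto 0);  -- bit-true sensor data word
      din_vld  : in  std_logic;
      dout     : out std_logic_vector(15 downto 0);  -- bit-true processed word
      dout_vld : out std_logic
    );
  end entity exec_spec_io;

  architecture spec of exec_spec_io is
  begin
    process (clk)
      variable sample : real;
    begin
      if rising_edge(clk) then
        dout_vld <= '0';
        if din_vld = '1' then
          -- Convert the bit-true input word to a floating-point value ...
          sample := real(to_integer(signed(din)));
          -- ... apply a placeholder for the (untimed) internal function ...
          sample := sample * 0.5;
          -- ... and convert back to a bit-true output word.
          dout     <= std_logic_vector(to_signed(integer(sample), 16));
          dout_vld <= '1';
        end if;
      end if;
    end process;
  end architecture spec;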

The purpose of the Executable Specification was to detail processor requirements and to provide a fully functional test bench to verify more detailed VHDL structural models of the SAR processor. Output images from the Executable Specification simulations were compared with outputs from functional simulations [the VHDL virtual prototypes (abstract and detailed behavioral), MATLAB simulations, and Processing Graph Method (PGM) Processing Graph Support Environment (PGSE) simulations] to determine the maximum pixel error difference (Figure 3-4).

Figure 3-4: Executable Specification used as a reference image generator.
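The comparison step in Figure 3-4 might look like the following sketch; the file names, the one-pixel-per-line text format, and real-valued pixels are assumptions made for illustration, not the actual test-bench mechanism:

  use std.textio.all;

  entity pixel_compare_tb is
  end entity pixel_compare_tb;

  architecture check of pixel_compare_tb is
  begin
    process
      file ref_file : text open read_mode is "reference_image.txt";    -- golden image
      file dut_file : text open read_mode is "model_output_image.txt"; -- image under test
      variable lr, ld  : line;
      variable pr, pd  : real;
      variable max_err : real := 0.0;
    begin
      while not endfile(ref_file) loop
        readline(ref_file, lr);  read(lr, pr);
        readline(dut_file, ld);  read(ld, pd);
        if abs(pr - pd) > max_err then
          max_err := abs(pr - pd);   -- track the worst per-pixel difference
        end if;
      end loop;
      report "maximum pixel error = " & real'image(max_err);
      wait;
    end process;
  end architecture check;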

The processor model and test bench consist of about 2400 lines of VHDL code, and MIT/LL reported that the Executable Specification took about 1300 hours to develop. Some of this effort was attributable to first-time learning. More information on the Executable Specification can be found in the following:

The Executable Specification took 142 minutes on a SPARC-10 to process one frame of data for one polarization, and it required a minimum swap space of 256 MB for efficient execution. This long run time was a consequence of using VHDL signals to model bit-level transfers at the interface between the SAR processor and the test bench.

3.3 Functional Analysis

After completing the initial requirement decomposition, developers started a functional analysis. Functional analysis is the decomposition of a system into the individual actions it must perform, where each action is called a function. Functional analysis must include a description of those actions or functions and the implicit and explicit flow of control and data associated with that functionality. Developers performed the SAR functional analysis in conjunction with system partitioning and continued requirement analysis. The result of the functional analysis process was a description of the functional requirements as a set of verifiable and allocable functions and the interaction of those functions. The prime purpose of performing this analysis was to ensure that all implied and partitioning-specific functionality was fully understood.

Although some experts believe a functional analysis can be independent of the system partitioning, the SAR benchmark developers believe this to be impractical, because each low-level function has to be allocable to only one subsystem and verifiable by a single test, demonstration, inspection, or analysis. In addition, some required functionality, such as subsystem-to-subsystem interfaces, may exist only because of the implementation. Omitting this additional required functionality from a functional analysis can compromise the system-partitioning activity by overlooking its impact on hardware loading and software timelines.

The functional decomposition of the SAR began with a simple behavior diagram that showed the SAR function, the outside world, and the data exchanged between these two functions. Figure 3-5 shows the top-level behavior diagram generated from the RDD-100 database. (See the RDD-100 Behavior Diagram Notation Tutorial.) This figure indicates that there are two major input data types and two major output data types: the SAR processor received control information and sensor data from the outside world, and it sent diagnostic data and processed output data back to it. It was not important for the design to know how the data was created or used in the outside world, because the interface was well defined in the BTD.

Figure 3-5: SAR processor behavioral diagram.
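Read as a hardware interface, the diagram's two inputs and two outputs correspond to something like the following hypothetical VHDL entity; the port names and widths are invented here, and the real interfaces are defined in the BTD:

  library ieee;
  use ieee.std_logic_1164.all;

  entity sar_processor is
    port (
      control_in : in  std_logic_vector(31 downto 0);  -- control information
      sensor_in  : in  std_logic_vector(15 downto 0);  -- sensor data
      diag_out   : out std_logic_vector(31 downto 0);  -- diagnostic data
      image_out  : out std_logic_vector(15 downto 0)   -- processed output data
    );
  end entity sar_processor;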

Following the top-level view shown in Figure 3-5, further decompositions were performed to show explicit functionality and then to add implicit functionality. The original functional decomposition was based solely on the requirements, and it proved to be unallocable: when allocations were performed, each hardware/software element appeared to perform disjoint functions instead of highly inter-related functions. As a result, the functional decomposition may have provided information on the SAR as a whole, but it provided little understanding of the individual hardware and software elements. An updated functional decomposition was generated as a result of the system partitioning. Its functions were used as the basis for the final allocation of the functional requirements to the hardware and software elements.

The final functional description in Figure 3-5 is hierarchical and explorable. As in the requirements diagram, clicking any element with a box in the upper left corner shows additional decomposition. Click on the box Benchmark 1 SAR Functional Model to view the first-level decomposition. This level was derived primarily from the hardware/software partitioning. The top-level SAR functions are the following:

  • Processor Element Platform Functions (3.1): All hardware functions performed by the Processor Element, independent of the software
  • Host Interface Platform Functions (3.5): All hardware functions provided by the Host Interface Module, independent of the software
  • Control and Configuration Functions (3.3): All functions performed by the Command Program
  • Data I/O Functions (3.2): All functions provided by the Data I/O Module
  • Data Processing Functions (3.4): All functions provided by the Signal Processing Firmware.

The Processor Element Platform Functions and Host Interface Platform Functions were added during the functional analysis because they were necessary sources and sinks of some Control and Configuration Function data items. These were simple additions but they could have become complex, depending on the system's complexity and the completeness of system-level requirements.

At lower levels there was also required functionality that was not explicitly called for in the BTD or CIDS. An example was the Build Command Line Function within the Control and Configuration Function. Partitioning-specific details were present in the functional decomposition, but they were added in a way that minimized constraining the architecture-selection process.

By detailing functionality to lower levels, a better understanding was gained of the requirements and the partitioning required to satisfy them. An example of this was the Data Processing Functions (3.4). Clicking on this box in the first-level decomposition exposed the next level of detail. From that diagram, the details of the Range Processing and Azimuth Compression Functions could be seen by clicking on either box. The behavior diagrams were taken to this level of detail to understand and partition processing between the Data I/O Module and the COTS (commercial off-the-shelf) signal processors. The final allocation showed FIR (Finite Impulse Response) filtering being done in custom hardware, and the range and azimuth processing being done in the COTS hardware.
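As a rough sketch of the kind of function allocated to the custom hardware, a short direct-form FIR is shown below. The 4-tap length, data widths, and coefficients are placeholders; the benchmark's actual filter parameters are not reproduced here.

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity fir4 is
    port (
      clk   : in  std_logic;
      x_in  : in  signed(15 downto 0);
      y_out : out signed(17 downto 0)
    );
  end entity fir4;

  architecture rtl of fir4 is
    type tap_array  is array (0 to 3) of signed(15 downto 0);
    type coef_array is array (0 to 3) of integer;
    constant coefs : coef_array := (1, 2, 2, 1);             -- placeholder coefficients
    signal   taps  : tap_array  := (others => (others => '0'));
  begin
    process (clk)
      variable acc : signed(35 downto 0);
    begin
      if rising_edge(clk) then
        taps <= x_in & taps(0 to 2);                         -- shift in the newest sample
        acc  := (others => '0');
        for i in 0 to 3 loop
          acc := acc + taps(i) * to_signed(coefs(i), 16);    -- multiply-accumulate
        end loop;
        y_out <= resize(acc, 18);
      end if;
    end process;
  end architecture rtl;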

3.4 System Partitioning

In a large-scale system, system partitioning is the allocation of requirements and functions to major subsystems and their associated software. Because the SAR processor was a relatively small system, the system partitioning task was not as complicated, so developers performed it in conjunction with the Architecture Design activity. Developers understood the domain space, and the desire to use COTS processors further restricted the partitioning alternatives. However, the system partitioning was valuable as a test case to refine the Integrated System Engineering (ISE) tools, schemas, and processes. See the Architecture Design portion of this case study for the specific trade-off set and architecture details.

The functions were decomposed to the point where each leaf-level function could be allocated to a single hardware or software element. To achieve this, a high-level hardware/software partition helped minimize iterations of the functional decomposition. The potential architecture candidates all used the hardware/software tree shown in Figure 3-6; the differences were in the quantities and capabilities of the hardware in the equipment tree. Each of the architectures used in the trade-off study contained a Host Interface Module, Data I/O Assembly, Processor Element Assembly, backplane assembly, and chassis. The software can be considered as a separate tree or as part of the subsystem that hosts it; the tools permitted either view. In this effort, the Signal Processing Software was considered part of the Processor Element Assembly, and the Command Program was considered part of the Host Interface Assembly. This partitioning tree, along with other candidate partitioning trees, was a primary input to the final architecture-selection process.

Figure 3-6: RDD hardware/software tree.

The SAR signal processor contained three hardware elements and two firmware elements. The final architecture was not selected until the architecture design phase, but it contained the following hardware and software elements:

The ISE tools allowed the system designer to more fully consider the cost and reliability impacts of the system partitioning effort. Two candidate architectures, both based on the hardware/software tree in Figure 3-6, were carried into the PRICE cost-estimation and RAM-ILS reliability tools. The primary difference between the two architectures was the quantity and maturity of the Processor Element Modules. Both candidates used COTS modules for the Processor Element Module. Candidate 1 used i860 processors, and five Processor Element Modules were required to meet the latency requirements. Candidate 2 used ADSP21060 SHARC processors, so only three Processor Element Modules were required. The disadvantages of Candidate 2 were a higher cost per module and a higher risk, because the SHARC was a leading-edge processor.

Cost and reliability analyses were performed on the candidates using the ISE tools, assuming a deployment of 500 units and a life cycle of 20 years. For more details of this cost analysis, see the RASSP application note How the RASSP Integrated System Tools Support the Systems Engineering Process. The results in Table 3-2 were obtained in minutes, versus the days or weeks a bottom-up effort involving many people would require.

Table 3-2: Candidate Architecture Cost and Reliability Analysis

Candidate | Development | Production   | Support     | Total        | MTBCF*
1 (i860)  | $1,924,000  | $106,523,000 | $34,456,000 | $142,905,000 | 2068 hours
2 (21060) | $2,257,000  | $94,518,000  | $26,861,000 | $123,638,000 | 3296 hours

* MTBCF = Mean Time Between Component Failure
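As a quick back-of-the-envelope reading of Table 3-2 (assuming the production totals cover exactly the 500-unit deployment, which the source does not state explicitly), the production cost per deployed unit works out to:

  \text{Candidate 1: } \frac{\$106{,}523{,}000}{500} = \$213{,}046 \text{ per unit}
  \qquad
  \text{Candidate 2: } \frac{\$94{,}518{,}000}{500} = \$189{,}036 \text{ per unit}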

The system partitioning done as part of the engineering process was preliminary. On the SAR benchmark, Candidate 2 was selected during architecture selection, although the SHARC module was unavailable at the time. The cost and reliability impacts were instantly available because of the simplicity of RASSP's ISE tools.

3.5 Lessons Learned in System Design

3.5.1 Executable Specification

Lessons learned from using the Executable Specification were the following:

Figure 3-7: Example of test image.

3.5.2 System Tools

The developers had not previously used PRICE's cost tools, so a member of the development team spent several weeks ensuring that the cost estimates were accurate to within 10 percent of bottom-up calculations. The estimate itself was produced in less than a day, or in about 20 percent of the time needed for a full bottom-up calculation. The lessons learned from using the system tools on the SAR benchmark were the following:



Page Status: in-review, January 1998 Dennis Basara