Which of the following is not an example of an attempt to reduce design complexity?

Computing Platforms

Marilyn Wolf, in Computers as Components (Fourth Edition), 2017

4.5.1 Example platforms

The design complexity of the hardware platform can vary greatly, from a totally off-the-shelf solution to a highly customized design. A platform may consist of anywhere from one to dozens of chips.

Open source platforms

Fig. 4.21 shows a BeagleBoard [Bea11]. The BeagleBoard is the result of an open source project to develop a low-cost platform for embedded system projects. The processor is an ARM Cortex™-A8, which also comes with several built-in I/O devices. The board itself includes many connectors and support for a variety of I/O: flash memory, audio, video, etc. The support environment provides basic information about the board design, such as schematics; a variety of software development environments; and many example projects built with the BeagleBoard.

Figure 4.21. A BeagleBoard.

Evaluation boards

Chip vendors often provide their own evaluation boards or evaluation modules for their chips. The evaluation board may be a complete solution or provide what you need with only slight modifications. The hardware design (netlist, board layout, etc.) is typically available from the vendor; companies provide such information to make it easy for customers to use their microprocessors. If the evaluation board does not completely meet your needs, you can modify the design using the netlist and board layout without starting from scratch. Vendors generally do not charge royalties for the hardware board design.

Fig. 4.22 shows an ARM evaluation module. Like the BeagleBoard, this evaluation module includes the basic platform chip and a variety of I/O devices. However, while the main purpose of the BeagleBoard is to serve as an end-use, low-cost board, the evaluation module is primarily intended to support software development and serve as a starting point for a more refined product design. As a result, this evaluation module includes some features that would not appear in a final product, such as the connections to the processor's pins that surround the processor chip itself.

Figure 4.22. An ARM evaluation module.

URL: https://www.sciencedirect.com/science/article/pii/B9780128053874000042

From Array Theory to Shared Aperture Arrays

NICHOLAS FOURIKIS, in Advanced Array Systems, Applications and RF Technologies, 2000

2.4.4 3D/2D Cylindrical Arrays

A 3D array provides range, azimuth, and elevation information and represents a design of maximum complexity. Because a 2D array provides only range and azimuth information, it represents a design of medium complexity. These arrays can be used for radar applications such as air-traffic control and surveillance.

The descriptions of these arrays are necessarily sketchy [113] and we shall focus attention on aspects particularly important to cylindrical arrays. To simplify the switching–commutation subsystem, phase-shifters are used to scan the beam ±15° in azimuth; the commutation is further simplified by dividing the array aperture into three 120° sectors. With this arrangement the commutator need only provide for the commutation of four 30° sectors, enabling it to be realized using simple 4×4 transfer switches.
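The sector arithmetic can be restated explicitly (a worked restatement of the figures above, not additional data from the source):

\[
\frac{360^\circ}{3\ \text{sectors}} = 120^\circ\ \text{per sector},
\qquad
\frac{120^\circ}{30^\circ} = 4\ \text{commutated }30^\circ\ \text{subsectors per sector}.
\]

Each 30° subsector is covered by phase-shifter scanning of ±15° about its center, which is why a simple 4×4 transfer switch suffices for the commutation within each sector.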

The 2D array does not use column network phase-shifters and arrangements can be made for the array to have either several elevation beams or fine beam scan in azimuth.

Despite the sketchy descriptions of the above arrays, their realizations constitute a significant step toward the acceptance of cylindrical arrays as serious contenders for low-cost radars or ESM systems that operate in environments where the targets are at short to medium ranges.

In reference [114] a cylindrical active phased array radar is presented as a low-cost, efficient, and versatile solution suitable for operation in the tactical environment.

URL: https://www.sciencedirect.com/science/article/pii/B9780122629426500044

DSP Integrated Circuits

Lars Wanhammar, in DSP Integrated Circuits, 1999

Meet-In-The-Middle Approach

The aim of a structured design methodology is not only to cope with the high design complexity, but also to increase design efficiency and the probability of an error-free design. As mentioned earlier, the complexity is reduced by imposing a hierarchy of abstractions upon the design. In this way, the system is systematically decomposed into regular and modular blocks. In practice, however, a meet-in-the-middle approach is often used. In this approach, which is illustrated in Figure 1.17, the specification-synthesis process is carried out in essentially a top-down fashion, but the actual design of the building blocks is performed in a bottom-up fashion. The design process is therefore divided into two almost independent parts that meet in the middle. The circuit design phase can be shortened by using efficient circuit design tools or even automatic logic synthesis tools. Often, some of the building blocks are already available in a circuit library.

Figure 1.17. Meet-in-the-middle design approach

URL: https://www.sciencedirect.com/science/article/pii/B9780127345307500015

28th European Symposium on Computer Aided Process Engineering

Marco Colombo, Michael Fairweather, in Computer Aided Chemical Engineering, 2018

Abstract

In nuclear power plants, passive cooling can increase the safety of reactors whilst reducing costs and design complexity. However, the effectiveness of passive mechanisms needs to be carefully proved with reliable computational modelling. This work focuses on assessing the external reactor vessel cooling (ERVC) strategy by means of computational fluid dynamics (CFD). The accuracy of a two-fluid Eulerian-Eulerian CFD model with boiling capabilities is assessed first, followed by its ability to predict ERVC. The CFD model described proves to be a valuable tool for predicting passive cooling by detecting local boiling incipience and providing three-dimensional vessel temperature distributions, as well as thermal stratification and velocity field predictions in the cooling pool. This work is part of a larger programme aimed at assessing CFD capabilities for predicting passive cooling strategies, with a quantitative assessment of ERVC currently ongoing.

URL: https://www.sciencedirect.com/science/article/pii/B9780444642356501807

Data Services

James Bean, in SOA and Web Services Interface Design, 2010

5.2 Multiple and Disparate Data at Rest Sources

With its current scope, the Add New Product example is a somewhat simple composite data service. Although it is supported by several underlying process steps and subordinate services, there is only one data at rest source used in the process (the Product Catalog Database). Unfortunately, in a moderate to large enterprise, this is rarely the case. Rather, there are separate, vertical applications, each managing its own data at rest implementation of the same basic data. This is also an example of the enterprise integration problem that plagues many organizations.

As any enterprise evolves and matures, there are many business and technology decisions that come into play, each of which can have significant implications to data at rest implementations. During periods of growth, some organizations will focus on business acquisition. Often, an acquired business will also have its own enabling technologies, applications, and data. It may be deemed too costly or complex to extract and migrate the data of the acquired business into the operational systems of the enterprise. Yet, the value of that new data is obvious and may have even been part of the acquisition rationale.

Another common example is the use of packaged software applications (sometimes referred to as COTS, i.e., Commercial Off The Shelf applications). While most software packages provide APIs for accessing the underlying data stores, the API functionality may have technology-specific limitations. Since the data at rest implementations and the supporting context of these COTS solutions are parochial to the application, it is rare that the data are structured in the exact same manner as other data at rest implementations of the enterprise.

Another and possibly the most common example is organic growth. As the enterprise has continued to grow, the need for rapid technology solutioning may unfortunately take precedence over more effective architectural patterns and design best practices. Often, this will result in development of vertical and disjoint solutions with significant data replication. As the enterprise continues to evolve, each of these solutions and additional data replications will often shift further away in structure and semantics from their originating sources.

All of these examples and many others have contributed to the integration problems that plague today’s typical enterprise. SOA, Web services, and, more importantly, data services can provide an enabling technology set to help address the challenge. However, these same technology enablers are not exclusively the answer. Data is an enterprise asset. It is largely harvested, acquired, manipulated, enriched, and consumed by business constituents. They are the subject matter experts who understand the business value and semantics of enterprise data. Identifying the applications and data at rest sources of roughly analogous data, and aligning on consistent, cross-organization definitions for this data, are requisite business activities. These important tasks are also part of SOA governance, where the SOA architects and service designers look to converge and rationalize data at rest sources, rather than create further redundancy (see Figure 5.5).

Figure 5.5. A Core Data Concept Implemented Across Multiple Data at Rest Sources

Enterprise integration is not the only value of SOA and Web services, but it is an obvious one. Attaining a cross-line-of-business and cross-application view of enterprise data is a long-desired goal of many an enterprise. While each vertical application is largely functional, there is a significant missed opportunity when data of the same context is locked in separate data at rest implementations. A carefully designed data service can provide a potential solution.

Before developing a data service, the architect or designer must determine the scope of data that will be involved. Data scope describes a context or focal subject and can be referred to as a data domain. Subject matter experts from the business community are critical to defining the data domains of the enterprise and determining which applications host data that should participate in those domains. The data domain for the example Add New Product service is specifically Product data, and the current implementation is limited to a single data at rest source. The Add New Product service can be extended to also consider additional and similar data from the Engineering system. The resulting service will provide a 360-degree view of products for the enterprise.

Data practitioners will often apply an architecture pattern described as the data of record. The data of record for a specific data domain is the authoritative and recognized source of information crossing all lines of business and applications. When the scenario includes multiple data at rest implementations that are also of different structural and semantic definitions, identifying or developing a single data of record can be a significant challenge. However, a data service can hide those complexities and, in many cases, resolve the impedance between data at rest sources of the same domain. The result is a logical data of record.

This approach can have tremendous value to the enterprise. The historic investment in the individual operational systems and data can often be preserved. Disruption to the vertical systems and the business operations they support is minimized. Further, the logical data service can be designed in a manner where the service interface provides options to consumers that allow them to selectively request service functionality (a Web service interface in this example). As with the single data source example, a composite data service is defined above the data sources and hides the complexity of access from consumers. Depending upon the operation exposed by the service, consumers can predetermine if they want to act upon one of the data at rest sources or the entire set of data that has been defined. This is most common with a read-only “Get” service but can also apply to some other service operations.

To enhance the Add New Product data service by adding a second data at rest implementation requires some specific technical information. The designer must have identified the type of data persistence technology in use and the connection methods for accessing the new data at rest source. The designer must also have harvested metadata that describes the specific structures, semantics, and types for the data. Sources of the necessary metadata will usually include data models, process models (for business rules), file layouts, and the system catalog or similar structural descriptors for data at rest implementations. Once this metadata information has been harvested, it must be reviewed by subject matter experts from the business community to help determine which parts of the data should be exposed to service consumers, any business rules that might apply, and the standard by which the data will be returned as data in motion to consumers. This standard also serves as the common canonical for mapping the different data sources to resolve any impedance between the data at rest implementations. Predefined industry canonicals can also be acquired from industry groups or vendors and can help to jump-start the mapping process.

The Add New Product process orchestration will also require modification and extension. Each of the steps will be evaluated to determine whether it is still required and if it needs to be revised or extended. New process steps will also be required to access the additional data at rest implementations. Within the scope of the operations exposed by the service, any potential options will need to be considered. When included in the design of the service, the interface will be revised to include them as request options for consumers (see Figure 5.6).

Figure 5.6. Enhanced Add New Product Service

The data domain that defines the scope of data for the revised Add New Product service is determined to include the already implemented Product Catalog Database. However, the data service will also need to include data from the Engineering Master Database. The existing steps of the orchestrated business process are still required. However, there are also several new process steps and subordinate fine-grained services to be added; the most obvious are the Check Duplicate Engineering Item and Add Engineering Item services.

The designer might consider adding this additional functionality to the existing process steps and supporting services. However, adding this functionality expands the processing of the subordinate fine-grained services and makes them less isolated and reusable. This is an important decision that could have longer term implications to the SOA architecture.

As an example, if the Check Duplicate Items data service is designed to now check both the Product Catalog Database and the Engineering Master Database, depending upon the degree to which the logic is coupled to the data source layer, other composite data services might not be able to reuse it. There are also some important operational design considerations. At the lowest layer of the SOA and data servicing architecture are the database implementations. Using the example of a database, there are a number of operational characteristics that can influence performance, latency, availability, and other technical qualities of the systems in which they reside. While it might initially seem advantageous to localize the database insert behavior for both the Product Catalog Database and the Engineering Master Database into a single service, the resulting database operations can be impacted. If the overall Add New Product data service begins to experience a high volume of requests, the resulting Add Item service will also incur a similar increase in requests. Depending upon the database access methods, the structure of the SQL insert, the defined unit of work at the database level, and the locking schemes of the database implementations, the Add Item service might experience increased latency. Further, the operational applications in the separate engineering and product systems continue to operate on the same databases, and they might also experience latency as a result of locking and contention with the requests from the data services. Although a database administrator (DBA) might be able to address some of the problems in each database, being able to easily identify specific areas of opportunity and to address the root cause of the latency can become complicated. Alternatively, if the Add Item process activities are designed into separate services, and each is limited to operations on its specific database implementation, there is a greater potential to avoid problems and, when they occur, to identify the root cause and resolve them.

As with the Add New Product example, when the service operation is not read only, there is yet another design complexity. Separating the process steps to access each data at rest source independently requires some form of correlation. An anomaly might be encountered when the Check Duplicate Engineering Item service returns an “Item Found” result (the item already exists) but the Check Duplicate Item service returns an “Item Not Found” result; a processing decision must then be made. Either the responses from the two services need to be correlated, or a fail-on-condition event for either service needs to be defined. This might seem trivial at first, but the consistency and integrity of the underlying data at rest sources will rely upon an effective correlation scheme. Depending upon the data at rest database implementations, APIs, and query languages, a recommendation would be to resolve this complexity with a commercial federated query and transaction management solution. Correlating a complete unit of work across federated databases requires support for distributed transactions. The final commit transaction needs to be implemented as the result of transaction coordination across the data at rest sources, or rolled back if the transaction for any one of the participating data at rest sources fails. This example extends to other process steps as well (as with the Add Engineering Item and Add Item process steps). As potential design decision examples:

On any duplicate item found condition, return a fault message to the consumer and end the process.

If one item is determined to already exist, but the other does not, ignore the existing item and insert only the new item.

If one item is determined to already exist, but the other does not, interrogate the existing item to determine if it is in fact the same, and, if yes, insert the new item. Otherwise, return a fault message to the consumer and end the process.

Correlation behavior can in some cases be developed and implemented either as a process step and service that invokes a federated database transaction server, or intrinsically to the overall BPEL orchestrated process as a flow where the steps are invoked in parallel. However, determining which design approach is best should not be driven exclusively by a technology pattern or approach. This is another example of where SOA governance is needed and an area where business subject matter experts can help.
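As a rough illustration of the parallel, fail-on-condition variant, the following Python sketch runs the two duplicate checks concurrently and faults the process if either source reports a hit. The two check functions are hypothetical stand-ins for the subordinate services described above; in the chapter's design this logic lives in the BPEL orchestration, not in application code.

from concurrent.futures import ThreadPoolExecutor

class DuplicateItemFault(Exception):
    """Fault returned to the consumer when either source already has the item."""

def check_duplicate_item(request):
    # Hypothetical stand-in for the Product Catalog duplicate check service.
    return "Item Not Found"

def check_duplicate_engineering_item(request):
    # Hypothetical stand-in for the Engineering Master duplicate check service.
    return "Item Not Found"

def check_duplicates(request):
    # Run both checks in parallel; a hit in either source faults the process.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(check_duplicate_item, request),
                   pool.submit(check_duplicate_engineering_item, request)]
        if any(f.result() == "Item Found" for f in futures):
            raise DuplicateItemFault("item already exists in at least one data at rest source")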

The Add New Product service will be revised to include new process steps in the orchestrated business process, with data access for both the Product Catalog and the Engineering Master Databases. Rather than introducing complex correlation processing for the duplicate checks, the unit of work will follow an “all or nothing” pattern. The duplicate check will be defined to execute in parallel, but a found condition in either database will trigger a fault response to the consumer.

A final correlation is required before committing the inserts to the separate Product Catalog and Engineering Master Databases. Since the final steps are specific to an Add operation, the end result data for both databases must be in a logically consistent state. That is, the Add (insert) of the new items into both the Engineering Master Database and the Product Catalog Database must be successful. There are a number of commercial transaction management products that can help to correlate federated database transactions and distributed queries as a single unit of work. These technologies can also mediate individual transaction failure and rollback. For the Add New Product service example, a similar, but fictitious technology will be implemented as a process step and messaged as a subordinate service (see Figure 5.7).

Figure 5.7. Enhanced Add New Product Sequence Diagram

The previously existing Add New Product process steps will remain as originally defined. The Receive Message step is fundamental and required. This is the step that receives the request message from consumers of the service. The validation step is also requisite and verifies that the received message conforms to the service interface context as defined by an XML Schema.

A new Check Duplicate Engineering Item process step and a service are added to the BPEL definition. This step is analogous to the already defined Check Duplicate Item step. However, the Engineering Master Database is the additional data at rest source used for checking. The original Check Duplicate Item step and service remain as part of the process. A BPEL flow (“<flow/>”) will be added to execute both of the check duplicate process steps in parallel.

The intent is to check both the Engineering Master Database and the Product Catalog Database to determine if the requested item already exists. If the item exists in either database, the overall Add New Product service will return a fault message to the consumer and terminate processing. The metadata that describes the item identifier differs between these two databases. The Engineering Master Database refers to a “PartNo”, while the Product Catalog Database refers to an “ItemID”. This is a common scenario when multiple data at rest implementations are included in the scope of a data service, and it requires that the requested product is expressed in a manner that can be used by each database. Resolution for the enhanced Add New Product data service assumes that the appropriate translations are included in the process.
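A minimal sketch of that translation follows, assuming a hypothetical canonical field name (ProductIdentifier); only the “ItemID” and “PartNo” names come from the chapter's example.

# Hypothetical canonical-to-source field-name mapping; the canonical name
# "ProductIdentifier" is an assumption, "ItemID" and "PartNo" follow the text.
CANONICAL_TO_SOURCE = {
    "ProductCatalog": {"ProductIdentifier": "ItemID"},
    "EngineeringMaster": {"ProductIdentifier": "PartNo"},
}

def translate(canonical_record, source):
    # Rewrite canonical field names into the names the given database expects.
    mapping = CANONICAL_TO_SOURCE[source]
    return {mapping.get(field, field): value for field, value in canonical_record.items()}

For example, translate({"ProductIdentifier": "W-100"}, "EngineeringMaster") yields {"PartNo": "W-100"}.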

Assuming the check for duplicates processing is successful, the process will advance to add processing. Similar to the check duplicates process, a new step to Add Engineering Item will be added to the process. This step is similar to the previously existing Add Item step, where a SQL instruction to “insert” the data will be created. This new SQL statement will then be submitted to the Engineering Master Database for processing. The original Add Item process step and service remain. These two Add steps will be included in a BPEL flow to run in parallel. However, correlation is critical at this point in the process. In order for the databases to be consistent, the Add operation (insert) into each database must be successful. To help accomplish this, a new Correlate Add Item process step and service will be added. The Correlate Add Item step will assume that a federated database server and distributed query processor will be invoked. Note that the specific implementation of database correlation will depend upon the technology employed. As one scenario, the federated database technology may be implemented as a background listener and transaction manager, and a single distributed SQL statement will declare a unit of work and include the database insert statements for both databases. If this were the case, the BPEL steps and referenced services would be declared differently. It might be necessary for the BPEL to include only a single Add Item step and a referenced data service that includes the noted unit of work and distributed SQL statement (see Example 5.2).

Example 5.2

Enhanced New Product BPEL Process

<?xml version="1.0" encoding="UTF-8"?>
<process name="AddNewProduct"
         targetNamespace="http://Widget-Example.com/Inventory"
         xmlns:tns="http://Widget-Example.com/Inventory"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">

  <partnerLinks>
    <partnerLink name="Consumer" partnerLinkType="tns:AddNewProductRequest" myRole="AddNewProductService" partnerRole="Consumer"/>
    <partnerLink name="ValidateMessage" partnerLinkType="tns:ValidationLink" myRole="ValidationServiceRequester" partnerRole="ValidationService"/>
    <partnerLink name="CheckDuplicateItem" partnerLinkType="tns:CheckDuplicateItemLink" myRole="DuplicateItemServiceRequester" partnerRole="DuplicateItemService"/>
    <partnerLink name="CheckDuplicateEngineeringItem" partnerLinkType="tns:CheckDuplicateEngineeringItemLink" myRole="DuplicateEngineeringItemServiceRequester" partnerRole="DuplicateEngineeringItemService"/>
    <partnerLink name="AddItem" partnerLinkType="tns:AddItemLink" myRole="AddItemServiceRequester" partnerRole="AddItemService"/>
    <partnerLink name="AddEngineeringItem" partnerLinkType="tns:AddEngineeringItemLink" myRole="AddEngineeringItemServiceRequester" partnerRole="AddEngineeringItemService"/>
    <partnerLink name="CorrelateAddItems" partnerLinkType="tns:CorrelateAddItemsLink" myRole="CorrelateAddItemsServiceRequester" partnerRole="CorrelateAddItemsService"/>
  </partnerLinks>

  <variables>
    <variable name="AddNewProductRequest" messageType="tns:AddNewProductRequestMessage"/>
    <variable name="ValidationRequest" messageType="tns:AddNewProductRequestMessage"/>
    <variable name="ValidationResponse" messageType="tns:AddNewProductResponseMessage"/>
    <variable name="ValidationFault" messageType="tns:AddNewProductFaultMessage"/>
    <variable name="CheckDuplicateItemRequest" messageType="tns:AddNewProductRequestMessage"/>
    <variable name="CheckDuplicateItemResponse" messageType="tns:AddNewProductResponseMessage"/>
    <variable name="CheckDuplicateItemFault" messageType="tns:AddNewProductFaultMessage"/>
    <variable name="CheckDuplicateEngineeringItemRequest" messageType="tns:AddNewProductRequestMessage"/>
    <variable name="CheckDuplicateEngineeringItemResponse" messageType="tns:AddNewProductResponseMessage"/>
    <variable name="CheckDuplicateEngineeringItemFault" messageType="tns:AddNewProductFaultMessage"/>
    <variable name="AddItemRequest" messageType="tns:AddNewProductRequestMessage"/>
    <variable name="AddItemResponse" messageType="tns:AddNewProductResponseMessage"/>
    <variable name="AddItemFailure" messageType="tns:AddNewProductFaultMessage"/>
    <variable name="AddEngineeringItemRequest" messageType="tns:AddNewProductRequestMessage"/>
    <variable name="AddEngineeringItemResponse" messageType="tns:AddNewProductResponseMessage"/>
    <variable name="AddEngineeringItemFailure" messageType="tns:AddNewProductFaultMessage"/>
    <variable name="CorrelateAddItemsRequest" messageType="tns:AddNewProductRequestMessage"/>
    <variable name="CorrelateAddItemsResponse" messageType="tns:AddNewProductResponseMessage"/>
    <variable name="CorrelateAddItemsFault" messageType="tns:AddNewProductFaultMessage"/>
    <variable name="AddNewProductResponse" messageType="tns:AddNewProductResponseMessage"/>
    <variable name="AddNewProductFault" messageType="tns:AddNewProductFaultMessage"/>
  </variables>

  <faultHandlers>
    <catch faultName="tns:ValidationFault" faultVariable="ValidationFault" faultMessageType="tns:AddNewProductFaultMessage">
      <reply partnerLink="Consumer" portType="tns:AddNewProductPort" operation="Add" variable="AddNewProductFault" faultName="ValidationFault"/>
    </catch>
    <catch faultName="tns:CheckDuplicateEngineeringItemFault" faultVariable="CheckDuplicateEngineeringItemFault" faultMessageType="tns:AddNewProductFaultMessage">
      <reply partnerLink="Consumer" portType="tns:AddNewProductPort" operation="Add" variable="AddNewProductFault" faultName="CheckDuplicateEngineeringItemFault"/>
    </catch>
    <catch faultName="tns:CheckDuplicateItemFault" faultVariable="CheckDuplicateItemFault" faultMessageType="tns:AddNewProductFaultMessage">
      <reply partnerLink="Consumer" portType="tns:AddNewProductPort" operation="Add" variable="AddNewProductFault" faultName="CheckDuplicateItemFault"/>
    </catch>
    <catch faultName="tns:AddEngineeringItemFault" faultVariable="AddEngineeringItemFailure" faultMessageType="tns:AddNewProductFaultMessage">
      <reply partnerLink="Consumer" portType="tns:AddNewProductPort" operation="Add" variable="AddNewProductFault" faultName="AddEngineeringItemFault"/>
    </catch>
    <catch faultName="tns:AddItemFault" faultVariable="AddItemFailure" faultMessageType="tns:AddNewProductFaultMessage">
      <reply partnerLink="Consumer" portType="tns:AddNewProductPort" operation="Add" variable="AddNewProductFault" faultName="AddItemFault"/>
    </catch>
    <catch faultName="tns:CorrelateAddItemsFault" faultVariable="CorrelateAddItemsFault" faultMessageType="tns:AddNewProductFaultMessage">
      <reply partnerLink="Consumer" portType="tns:AddNewProductPort" operation="Add" variable="AddNewProductFault" faultName="CorrelateAddItemsFault"/>
    </catch>
    <catchAll>
      <sequence>
        <assign>
          <copy>
            <from expression="string('ProcessFault')"/>
            <to variable="AddNewProductFault" part="FaultMessage"/>
          </copy>
        </assign>
        <reply partnerLink="Consumer" portType="tns:AddNewProductPort" operation="Add" variable="AddNewProductFault" faultName="ProcessFault"/>
      </sequence>
    </catchAll>
  </faultHandlers>

  <eventHandlers>
    <onMessage partnerLink="ValidateMessage" portType="tns:ValidateMessagePort" operation="Validate" variable="ValidationFault">
      <throw faultName="tns:ValidationFault" faultVariable="ValidationFault"/>
    </onMessage>
    <onMessage partnerLink="CheckDuplicateEngineeringItem" portType="tns:CheckDuplicateEngineeringItemPort" operation="Check" variable="CheckDuplicateEngineeringItemFault">
      <throw faultName="tns:CheckDuplicateEngineeringItemFault" faultVariable="CheckDuplicateEngineeringItemFault"/>
    </onMessage>
    <onMessage partnerLink="CheckDuplicateItem" portType="tns:CheckDuplicateItemPort" operation="Check" variable="CheckDuplicateItemFault">
      <throw faultName="tns:CheckDuplicateItemFault" faultVariable="CheckDuplicateItemFault"/>
    </onMessage>
    <onMessage partnerLink="AddEngineeringItem" portType="tns:AddEngineeringItemPort" operation="Add" variable="AddEngineeringItemFailure">
      <throw faultName="tns:AddEngineeringItemFault" faultVariable="AddEngineeringItemFailure"/>
    </onMessage>
    <onMessage partnerLink="AddItem" portType="tns:AddItemPort" operation="Add" variable="AddItemFailure">
      <throw faultName="tns:AddItemFault" faultVariable="AddItemFailure"/>
    </onMessage>
    <onMessage partnerLink="CorrelateAddItems" portType="tns:CorrelateAddItemsPort" operation="Correlate" variable="CorrelateAddItemsFault">
      <throw faultName="tns:CorrelateAddItemsFault" faultVariable="CorrelateAddItemsFault"/>
    </onMessage>
  </eventHandlers>

  <sequence name="AddNewProductProcess">
    <receive partnerLink="Consumer" portType="tns:AddNewProductPort" operation="Add" variable="AddNewProductRequest" createInstance="yes"/>
    <assign>
      <copy>
        <from variable="AddNewProductRequest" part="RequestMessage"/>
        <to variable="ValidationRequest" part="RequestMessage"/>
      </copy>
    </assign>
    <invoke partnerLink="ValidateMessage" portType="tns:ValidateMessagePort" operation="Validate" inputVariable="ValidationRequest" outputVariable="ValidationResponse" faultVariable="ValidationFault"/>
    <assign>
      <copy>
        <from variable="ValidationResponse" part="ResponseMessage"/>
        <to variable="AddNewProductResponse" part="ValidationResponseMessage"/>
      </copy>
    </assign>
    <assign>
      <copy>
        <from variable="ValidationFault" part="FaultMessage"/>
        <to variable="AddNewProductFault" part="ValidationFaultMessage"/>
      </copy>
    </assign>

    <!-- Check both sources for duplicates in parallel; each branch is its own sequence. -->
    <flow>
      <sequence>
        <assign>
          <copy>
            <from variable="AddNewProductRequest" part="RequestMessage"/>
            <to variable="CheckDuplicateEngineeringItemRequest" part="RequestMessage"/>
          </copy>
        </assign>
        <invoke partnerLink="CheckDuplicateEngineeringItem" portType="tns:CheckDuplicateEngineeringItemPort" operation="Check" inputVariable="CheckDuplicateEngineeringItemRequest" outputVariable="CheckDuplicateEngineeringItemResponse" faultVariable="CheckDuplicateEngineeringItemFault"/>
        <assign>
          <copy>
            <from variable="CheckDuplicateEngineeringItemResponse" part="ResponseMessage"/>
            <to variable="AddNewProductResponse" part="CheckDuplicateEngineeringResponseMessage"/>
          </copy>
        </assign>
        <assign>
          <copy>
            <from variable="CheckDuplicateEngineeringItemFault" part="FaultMessage"/>
            <to variable="AddNewProductFault" part="CheckDuplicateEngineeringFaultMessage"/>
          </copy>
        </assign>
      </sequence>
      <sequence>
        <assign>
          <copy>
            <from variable="AddNewProductRequest" part="RequestMessage"/>
            <to variable="CheckDuplicateItemRequest" part="RequestMessage"/>
          </copy>
        </assign>
        <invoke partnerLink="CheckDuplicateItem" portType="tns:CheckDuplicateItemPort" operation="Check" inputVariable="CheckDuplicateItemRequest" outputVariable="CheckDuplicateItemResponse" faultVariable="CheckDuplicateItemFault"/>
        <assign>
          <copy>
            <from variable="CheckDuplicateItemResponse" part="ResponseMessage"/>
            <to variable="AddNewProductResponse" part="CheckDuplicateItemResponseMessage"/>
          </copy>
        </assign>
        <assign>
          <copy>
            <from variable="CheckDuplicateItemFault" part="FaultMessage"/>
            <to variable="AddNewProductFault" part="CheckDuplicateItemFaultMessage"/>
          </copy>
        </assign>
      </sequence>
    </flow>

    <!-- Add the item to both databases in parallel. -->
    <flow>
      <sequence>
        <assign>
          <copy>
            <from variable="AddNewProductRequest" part="RequestMessage"/>
            <to variable="AddEngineeringItemRequest" part="RequestMessage"/>
          </copy>
        </assign>
        <invoke partnerLink="AddEngineeringItem" portType="tns:AddEngineeringItemPort" operation="Add" inputVariable="AddEngineeringItemRequest" outputVariable="AddEngineeringItemResponse" faultVariable="AddEngineeringItemFailure"/>
        <assign>
          <copy>
            <from variable="AddEngineeringItemFailure" part="FaultMessage"/>
            <to variable="AddNewProductFault" part="AddEngineeringItemFaultMessage"/>
          </copy>
        </assign>
      </sequence>
      <sequence>
        <assign>
          <copy>
            <from variable="AddNewProductRequest" part="RequestMessage"/>
            <to variable="AddItemRequest" part="RequestMessage"/>
          </copy>
        </assign>
        <invoke partnerLink="AddItem" portType="tns:AddItemPort" operation="Add" inputVariable="AddItemRequest" outputVariable="AddItemResponse" faultVariable="AddItemFailure"/>
        <assign>
          <copy>
            <from variable="AddItemFailure" part="FaultMessage"/>
            <to variable="AddNewProductFault" part="AddItemFaultMessage"/>
          </copy>
        </assign>
      </sequence>
    </flow>

    <!-- Correlate the two Add results as one logical unit of work. -->
    <assign>
      <copy>
        <from variable="AddEngineeringItemResponse" part="ResponseMessage"/>
        <to variable="CorrelateAddItemsRequest" part="AddEngineeringItemResponseMessage"/>
      </copy>
    </assign>
    <assign>
      <copy>
        <from variable="AddItemResponse" part="ResponseMessage"/>
        <to variable="CorrelateAddItemsRequest" part="AddItemResponseMessage"/>
      </copy>
    </assign>
    <invoke partnerLink="CorrelateAddItems" portType="tns:CorrelateAddItemsPort" operation="Correlate" inputVariable="CorrelateAddItemsRequest" outputVariable="CorrelateAddItemsResponse" faultVariable="CorrelateAddItemsFault"/>
    <assign>
      <copy>
        <from variable="CorrelateAddItemsResponse" part="ResponseMessage"/>
        <to variable="AddNewProductResponse" part="ResponseMessage"/>
      </copy>
    </assign>
    <reply partnerLink="Consumer" portType="tns:AddNewProductPort" operation="Add" variable="AddNewProductResponse"/>
  </sequence>
</process>

Upon successful insert of the data to both the Engineering Master Database and the Product Catalog Database, and correlation of the database transactions, a reply will be returned to the requesting consumer.
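The unit-of-work idea can also be sketched outside BPEL. The following Python fragment, assuming two hypothetical DB-API connections and table names, commits the inserts to both databases only if both succeed. Note that two sequential commits are still not truly atomic, which is precisely why the chapter defers to a federated transaction manager (XA-style two-phase commit) for the real implementation.

def add_product_unit_of_work(catalog_conn, engineering_conn, item_id, part_no, description):
    # Hypothetical table and column names; connections follow the Python DB-API.
    try:
        catalog_conn.cursor().execute(
            "INSERT INTO ProductCatalog (ItemID, Description) VALUES (?, ?)",
            (item_id, description))
        engineering_conn.cursor().execute(
            "INSERT INTO EngineeringMaster (PartNo, Description) VALUES (?, ?)",
            (part_no, description))
        # Both inserts succeeded: commit both sides as one logical unit of work.
        catalog_conn.commit()
        engineering_conn.commit()
    except Exception:
        # Any failure rolls back both sides so the two databases stay consistent.
        catalog_conn.rollback()
        engineering_conn.rollback()
        raise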

The Add New Product service design and BPEL do not yet take into account other technical complexities such as availability of the two data at rest sources, latency between the services and transactions, and a need for wait states. Further, there might be a need for correlation among sets of process steps (flows), case logic to identify conditional branching, and more advanced exception handling—all of which would be additional design optimizations.

URL: https://www.sciencedirect.com/science/article/pii/B9780123748911000058

GPU-Based Parallel Computing for Fast Circuit Optimization

Yifang Liu, Jiang Hu, in GPU Computing Gems Emerald Edition, 2011

24.1 Introduction, Problem Statement, and Context

Fast circuit optimization techniques are used throughout chip design. Although the pressure of time-to-market is almost never relieved, design complexity keeps growing along with transistor count. In addition, more and more issues need to be considered, from conventional objectives like performance and power to new concerns like process variability and transistor aging. On the other hand, the advancement of chip technology opens new avenues for boosting computing power. One example is the remarkable progress of GPU technology: in the past five years, the computing performance of GPUs has grown from about the same as a CPU to about 10 times that of a CPU in terms of GFLOPS [4]. GPUs are particularly good at coarse-grained parallelism and data-intensive computations. Recently, GPU-based parallel computation has been successfully applied to speed up fault simulation [3] and power-grid simulation [2].

In this work, we propose GPU-based parallel computing techniques for simultaneous gate sizing and threshold voltage assignment. Gate sizing is a classic approach for optimizing performance and power of combinational circuits. Different size implementations of a gate realize the same gate logic, but present a trade-off between the gate's input capacitance and output resistance, thus affecting the signal propagation delay on the gate's fan-in and fan-out paths, as well as the balance between timing performance and power dissipation of the circuit. Threshold voltage (Vt) assignment is a popular technique for reducing leakage power while meeting a timing performance requirement. It trades off the signal propagation delay against the power consumption at a gate. Because both of these circuit optimization methods essentially select an implementation for a logic gate, it is not difficult to perform them simultaneously. It is conceivable that a simultaneous approach is often superior to a separate one in terms of solution quality.

To demonstrate the GPU-based parallelization techniques, we use as an example the simultaneous gate sizing and Vt assignment problem, which minimizes the power dissipation of a combinational circuit under a timing (performance) constraint. The problem is formulated as follows.

Timing-Constrained Power Optimization: Given the netlist of a combinational logic circuit G(V, E), arrival times (AT) at its primary inputs, required arrival times (RAT) at its primary outputs, and a cell library, in which each gate vi has a size implementation set Wi and a Vt implementation set Ui, select an implementation (a specific size and Vt level) for each gate to minimize the power dissipation under timing constraints, that is,

$$
\begin{aligned}
\text{Min:}\quad & \sum_{v_i \in V} p(v_i) \\
\text{s.t.:}\quad & RAT(v_i) \ge AT(v_i), && \forall\, v_i \in I(G) \\
& RAT(v_i) \ge RAT(v_j) + D(v_j, v_i), && \forall\, (v_j, v_i) \in E \\
& w_i \in W_i, \quad u_i \in U_i, && \forall\, v_i \in V
\end{aligned}
$$

where $w_i$ and $u_i$ are the size and the $V_t$ level of gate $v_i$, respectively, $D(v_j, v_i)$ is the signal propagation delay from gate $v_j$ to gate $v_i$, $p(v_i)$ is the power consumption at gate $v_i$, and $I(G)$ is the set of primary inputs of the circuit.
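To make the constraints concrete, the following Python sketch propagates arrival times forward through the DAG in topological order and checks timing at the primary outputs. It is an illustrative restatement of the formulation above, not the authors' implementation; the graph encoding (per-edge delay dictionary and a precomputed topological order) is an assumption.

from collections import defaultdict

def arrival_times(primary_at, edge_delay, topo_order):
    # primary_at: {gate: AT} at primary inputs; edge_delay: {(vj, vi): D(vj, vi)};
    # topo_order: all gates in topological order.
    at = dict(primary_at)
    fanin = defaultdict(list)
    for (vj, vi), d in edge_delay.items():
        fanin[vi].append((vj, d))
    for v in topo_order:
        if fanin[v]:
            # The arrival time at a gate is the latest arrival over its fan-in edges.
            at[v] = max(at[vj] + d for vj, d in fanin[v])
    return at

def timing_met(primary_at, output_rat, edge_delay, topo_order):
    # Timing is met when AT <= RAT at every primary output.
    at = arrival_times(primary_at, edge_delay, topo_order)
    return all(at[v] <= rat for v, rat in output_rat.items())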

For the optimization solution, we focus on discrete algorithms because (1) they can be directly applied with cell library-based timing and power models, and (2) Vt assignment is a highly discrete problem. Discrete gate sizing and Vt assignment faces two interdependent difficulties. First, the underlying topology of a combinational circuit is typically a DAG (directed acyclic graph). The path reconvergence of a DAG makes it difficult to carry out a systematic solution search like dynamic programming (DP). Second, a combinational circuit can be very large, sometimes with tens of thousands of gates. As a result, most of the existing methods are simple heuristics [1]. Recently, a Joint Relaxation and Restriction (JRR) algorithm [5] was proposed to handle the path reconvergence problem and enable a DP-like systematic solution search. Indeed, the systematic search [5] remarkably outperforms previous work. To address the large problem size, a grid-based parallel gate sizing method was introduced in [10]. Although it can obtain high solution quality with very fast speed, it concurrently uses 20 computers and entails significant network bandwidth. In contrast, GPU-based parallelism is much more cost-effective: a GPU card costs only a few hundred dollars, and the local parallelism causes no overhead on network traffic.

It is not straightforward to map a conventional sequential algorithm onto GPU computation and achieve the desired speedup. In general, parallel computation implies that a large computation task must be partitioned among many threads. For the partitioning, one needs to decide its granularity levels, balance the computation load, and minimize the interactions among different threads. Managing data and memory is also important: one needs to properly allocate the data storage to the various parts of a GPU's memory system. Apart from general parallel computing issues, the characteristics of the GPU should be taken into account. For example, the parallelization should be SIMD (single instruction, multiple data) in order to better exploit the advantages of the GPU. In this work, we propose task-scheduling techniques for performing gate sizing/Vt assignment on the GPU. To the best of our knowledge, this is the first work on GPU-based combinational circuit optimization. In the experiment, we compare our parallel version of the joint relaxation and restriction algorithm [5] with its original sequential implementation. The results show that our parallelization achieves speedups of up to 56×, and 39× on average. At the same time, our techniques retain exactly the same solution quality as [5]. Such speedup will allow many systematic optimization approaches, previously regarded as too slow to be practical, to be widely adopted in realistic applications.
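One standard way to expose such parallelism is to levelize the DAG: gates whose fan-ins are all resolved form a wavefront, and each wavefront is the kind of uniform batch that maps naturally onto a SIMD kernel launch. The Python sketch below illustrates the levelization only; the chapter's actual GPU task scheduling is more involved.

from collections import defaultdict

def wavefronts(gates, edges):
    # gates: iterable of gate ids; edges: list of (vj, vi) pairs, vj driving vi.
    indegree = {g: 0 for g in gates}
    fanout = defaultdict(list)
    for vj, vi in edges:
        indegree[vi] += 1
        fanout[vj].append(vi)
    # The first wavefront holds gates with no unresolved fan-ins (primary inputs).
    level = [g for g in gates if indegree[g] == 0]
    levels = []
    while level:
        levels.append(level)
        nxt = []
        for g in level:
            for successor in fanout[g]:
                indegree[successor] -= 1
                if indegree[successor] == 0:
                    nxt.append(successor)
        level = nxt
    return levels  # each inner list could be evaluated as one parallel batch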

URL: https://www.sciencedirect.com/science/article/pii/B9780123849885000243

European Symposium on Computer Aided Process Engineering-12

Ramagopal Uppaluri, ... Antonis Kokossis, in Computer Aided Chemical Engineering, 2002

1 Air separation using vacuum pumps

Membrane network optimisation for enriched oxygen production is targeted using stochastic optimisation techniques. Vacuum application is considered on the permeate side to offer additional network design complexity. Feed, membrane, and network specifications are taken from Bhide and Stern [7] and are summarised in Table 1. Targets for network performance are 10 tons of EPO2 at 30 and 40% O2. Network optimisation considers simultaneous optimisation of the feed flow rate and the network design for the required product specifications. Allocation of vacuum pumps follows a generic allocation methodology developed in this work: a single vacuum pump is allocated for each permeate compartment stream undergoing partial or complete permeate-to-retentate recycle, and a single vacuum pump is allocated for all permeate streams undergoing no recycle and entering the product stream. Only cross flow is considered for the case study.

Table 1. Problem specifications for enriched oxygen production

PH (bar): 1.07
PL (bar): 0.2675
(Per/δ)O2 (kmol/m²·s·bar): 2.0491 × 10⁻⁴
(Per/δ)N2 (kmol/m²·s·bar): 9.509 × 10⁻⁵
Cannmem ($/m²): 23.4
FO2 (kmol/s): 0.047
FN2 (kmol/s): 0.17671
CHPRCOM, CHPFCOM (HP/(kmol/s)): 54927
CFRCOM, CFFCOM ($/HP): 1264
ηRCOM, ηFCOM: 0.75
CHPVAC ($ HP/(kmol/s)): 1718213
ηVAC: 0.5

Results obtained for both cases are summarised in Figure 1. As shown, the optimised objective (TAC) is about $176,000, with an optimal feed rate of about 0.191 kmol/s, for 10 tons of EPO2 at 30% purity. The optimised objective is about $289,000, with an optimal feed rate of about 0.1093 kmol/s, for 10 tons of EPO2 at 40% purity. All the structures generated by the stochastic optimisation showed good confidence, with a standard deviation of about 5% for Markov chain lengths above 40.
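For readers unfamiliar with the Markov chain length parameter, a generic simulated-annealing loop is sketched below in Python; the chain length is the number of candidate moves evaluated at each temperature. This is an illustrative stand-in only, not the authors' optimiser or their TAC objective.

import math
import random

def anneal(initial, neighbour, cost, chain_length=40, t0=1.0, cooling=0.9, t_min=1e-3):
    # neighbour(state) proposes a perturbed network; cost(state) evaluates it.
    state = best = initial
    temperature = t0
    while temperature > t_min:
        for _ in range(chain_length):  # moves per temperature: the Markov chain length
            candidate = neighbour(state)
            delta = cost(candidate) - cost(state)
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                state = candidate
                if cost(state) < cost(best):
                    best = state
        temperature *= cooling
    return best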

Figure 1. Optimised membrane networks for a) 30 % O2 and b) 40 % O2 permeate products

URL: https://www.sciencedirect.com/science/article/pii/S157079460280089X

Introduction to Microelectronics

In Top-Down Digital VLSI Design, 2015

1.2.1 The guinness book of records point of view

In a world obsessed with records, a foremost question asks “How large is that circuit?”.

Die size is a poor metric for design complexity because the geometric dimensions of a circuit greatly vary as a function of technology generation, fabrication depth, and design style.

Transistor count is a much better indication. Still, comparing across logic families is problematic as the number of devices necessary to implement some given function varies.

Gate equivalents attempt to capture a design's hardware complexity independently from its actual circuit style and fabrication technology. One gate equivalent (GE) stands for a two-input nand gate and corresponds to four MOSFETs in static CMOS; a flip-flop takes roughly 7 GEs. Memory circuits are rated according to storage capacity in bits. Gate equivalents and memory capacities are at the basis of the naming convention below.

Clearly, this type of classification is a very arbitrary one in that it attempts to impose boundaries where there are none. Also, it equates one storage bit to one gate equivalent. While this is approximately correct when talking of static RAM (SRAM) with its four- or six-transistor cells, the single-transistor cells found in dynamic RAMs (DRAM) and in ROMs cannot be likened to a two-input nand gate. A better idea is to state storage capacities separately from logic complexity and along with the memory type concerned, e.g. 75 000 GE of logic + 32 Kibit SRAM + 512 bit flash ≈ 108 000 GE overall complexity.
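The closing figure follows from the stated one-bit-per-gate-equivalent approximation (exact only for SRAM, as the text cautions):

\[
75\,000\ \text{GE} + 32 \times 1024\ \text{bit} + 512\ \text{bit}
= 75\,000 + 32\,768 + 512 = 108\,280 \approx 108\,000\ \text{GE}.
\]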

One should not forget that circuit complexity per se is of no merit. Rather than coming up with inflated designs, engineers are challenged to find the simplest and most elegant solutions that satisfy the given specifications in an efficient and dependable way.

URL: https://www.sciencedirect.com/science/article/pii/B9780128007303000010

Embedded Computing

Marilyn Wolf, in Computers as Components (Fourth Edition), 2017

1.6 Summary

Embedded microprocessors are everywhere. Microprocessors allow sophisticated algorithms and user interfaces to be added relatively inexpensively to an amazing variety of products. Microprocessors also help reduce design complexity and time by separating hardware and software design. Embedded system design is much more complex than programming PCs because we must meet multiple design constraints, including performance, cost, and so on. In the remainder of this book, we will build, from the bottom up, a set of techniques that will allow us to conceive, design, and implement sophisticated microprocessor-based systems.

URL: https://www.sciencedirect.com/science/article/pii/B9780128053874000017

Next-Generation Data Center Architectures and Technologies

Stephen R. Smoot, Nam K. Tan, in Private Cloud Computing, 2012

The data center consolidation and virtualization modus operandi

Our modus operandi for the DC consolidation and virtualization is illustrated in Figure 2.2. We will tackle one subject at a time for comprehensiveness, and one module at a time to reduce overall design complexity. Figure 2.2 is the high-level view of the server, storage, and fabric modules shown in Figure 2.1. The DC is generically divided into these three modules. The consolidation and virtualization of each module are discussed separately in this chapter.

Figure 2.2. Data center consolidation and virtualization workflows

In addition, the interface between the modules is as simple as “ABC,” where A is the interface between the server and fabric modules, B is the interface between the server and storage modules, and C is the interface between the fabric and storage modules. These module interconnections or attachment points are covered in the subsequent sections.

URL: https://www.sciencedirect.com/science/article/pii/B9780123849199000027