How would you best describe the throughput or processing components of computers

24th European Symposium on Computer Aided Process Engineering

Baharuddin Maghfuri, ... Masahiko Hirao, in Computer Aided Chemical Engineering, 2014

3.2.2 Impacts on petroleum refining processes

Fig. 3 summarizes changes in process throughputs in PRI for all BUS alternatives. The results reveal that the throughputs of the crude units decrease as the biomass utilization scale increases. Each BUS alternative shows different changes in the throughputs of the secondary processes. For example, the results for alternative 2 indicate that the refinery's cracking processes (FCC, hydrocracking, etc.) and naphtha upgrading processes (C4 isomerization, alkylation, naphtha hydrotreater) tend to decrease in throughput, whereas the throughputs of middle-distillate processes, such as the kerosene hydrotreater and distillate hydrotreater, increase. This is because the refinery needs to reduce gasoline production while maintaining production of middle and heavy products.


Figure 3. Changes in refinery process throughputs for each BUS alternative

A characteristic common to all alternatives is the decrease in cracking process throughputs. Several studies report that cracking heavy distillates and residues is difficult because their heavy-metal and sulfur content can poison catalysts (Rana et al., 2007). Although further evaluation is needed to determine which materials are more suitable for cracking, this result suggests that biomass utilization is a strong candidate to replace the difficult upgrading of heavy oil.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780444634559500027

The Marriott Corporation

Jamshid Gharajedaghi, in Systems Thinking (Third Edition), 2012

12.3.2.2 Project management

Project teams will be organized around specific throughput processes, but will all have cross-functional expertise in information technology, total quality management (TQM), human systems, and design. The objective is proper implementation of the integrated solution (developed by each brand manager) at each site in the region, allowing all of the following critical goals for success to be accomplished simultaneously: reduction of cycle time, elimination of waste, creation of flexibility, and management of total quality. Rollout of any other projects developed by the core knowledge groups or the corporation as a whole will be undertaken by this function.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978012385915000012X

Commonwealth Energy System

Jamshid Gharajedaghi, in Systems Thinking (Third Edition), 2012

13.9.1 Core Knowledge Pool

To discharge the responsibility for latency, synergy, and throughput processes, the executive office will be equipped with a core knowledge pool. The core knowledge pool will be the system's center of expertise that develops and disseminates state-of-the-art knowledge throughout the value chain. All the essential functions of the executive office will be carried out by professionals who operate in pools of expertise and work in an interdisciplinary manner on specific projects. Each member of the pool can be involved in more than one project. The main outputs of these projects include creation of startup businesses, improvement of existing operations' effectiveness, and design of measurement, reward, and early warning systems. Generating system-wide “bench strength” and fostering an entrepreneurial culture are among the by-products of these corporate activities.

The core knowledge pool will be staffed by a select group of top-notch experts who will rotate among all activities. They will be drawn, temporarily or permanently, from internal businesses and recruited or contracted from external sources to contribute to the richness and variety of the system's gene pool. Initially, the nucleus of the core knowledge pool will consist of the design team members who will be involved in the development of design details for the new architecture. These activities will provide an environment for learning by designing, learning by doing, and earning while learning. Core knowledge members will offer their expertise in the context of project teams designed to produce integrated solutions.

The core knowledge pool will also be responsible for creating a common language and acting as a center to reinforce organizational learning. Organizational learning will include reciprocal traffic of knowledge and mutual exchange of people. No new professional will enter the system without first being initiated through the core knowledge pool.

Members of the core knowledge pool will have a good insight into environmental trends, technological developments, and changing customer needs. They will operate in modular cross-disciplinary teams and, through the mechanism of project management, may be assigned to other shared services for systems development and/or to line-management roles for carrying out special missions. The core knowledge pool will also be responsible for developing models for the throughput system, target costing, the measurement and reward system, and an internal market mechanism.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123859150000131

Magnetolithography

Amos Bardea, Ron Naaman, in Advances in Imaging and Electron Physics, 2010

4 Summary and Future Perspective

We have demonstrated a strategy for ML that may allow use of the bottom-up approach for high-throughput processes. The positive and negative ML routes can be applied on a wide range of surfaces for patterning with either small or large molecules or with biomolecules that are sensitive to the chemical environment. The patterning of the substrate by backside ML is not affected by the topography or planarity of the surfaces. An important feature of ML is its ability to keep the surface clean during the patterning process and/or to keep it within a solution throughout the chemical patterning process. This feature may be especially important for biological applications. Here we described the ability to pattern the inside of a tube and to use the patterned substrate for catalyzing reactions in spatially localized regions. The new abilities demonstrated here open up the possibility of inducing chemical and biochemical patterning of the inner tube surfaces, especially when using tubes with a small diameter, as efficient reactors for lab-on-a-chip and as very sensitive biosensors. By applying the capability to chemically pattern the interior of a capillary tube, it is possible to produce linear arrays of probes within the microcapillary, where each probe within the array is patterned at specific and addressable locations along the capillary interior. Hence, the device will be able to detect various components within a sample by flowing the sample through a capillary tube.

We have already demonstrated the ability of ML to pattern both flat and non-flat surfaces with gradient properties on many length scales (Tatikonda et al., 2010). Figure 15 shows the microscopic image of condensed water on the hydrophobic/hydrophilic gradient surface. The decrease in the droplet size from left to right in Figure 15a indicates the increase in the hydrophobic nature of the surface. Also shown are images of water drops (Figure 15b), revealing the change in the contact angle between the water and the substrate as a function of their position on the surface. The gradient that is formed is not limited to a specific substrate or adsorbate and, in principle, patterning with different properties and on any substrate can be achieved. Furthermore, the ML method can be used not only for chemical patterning but also for common microelectronic processes such as etching, deposition, and ion implant. The ability to use the same method for all types of surface patterning simplifies production processes related to applications combining electronics with chemical/biorecognition processes.


Figure 15. (a) Microscopic image of condensed water on the hydrophobic hydrophilic gradient surface. The decrease in the droplet size from left to right indicates the increase in the hydrophobic nature of the surface. (b) Contact angle measurements of water droplets placed on the surface in which the hydrophobicity increases away from the center.

A new means of generating a magnetic field is to use the hard-disk (HD) devices found in computers. In the case of an HD, a magnetic field is patterned by changing the magnetic direction of domains on the HD using a magnetic head. Hence, it is possible, in principle, to pattern any shape on the HD with a resolution equal to the size of a domain. This can be done with software that translates a pattern (e.g., a drawing made on the computer's screen) into a shape on the HD surface. In ML, the HD can therefore combine the functions of the magnetic field source and the mask that defines the spatial distribution and shape of the magnetic field applied to the substrate. After the HD is patterned in the disk drive, the disk is removed and used as a magnetic mask (Figure 16). The resolution depends on the domain size, which in current HD technology is less than 100 nm. This method provides a simple and inexpensive means of patterning surfaces, and it has the potential to become the method of choice in the future.


Figure 16. Scanning electron microscopy image of a pattern obtained on the surface of HD with Fe3O4 nanoparticles. The particles are attracted to the region of maximum field gradient by the force exerted on domain walls.

Another method of patterning a magnetic field on a substrate is to use conductive wires to induce and pattern the field. When current flows through the wires, a magnetic field is created. By switching the current flow sequentially, it is possible to induce different patterned fields at different stages. This method, which we refer to as a "dynamic mask," may be applied either with no permanent magnet or in combination with a permanent magnetic field, so that the current through the wires alters the total field by producing an additional one.

The ML technique is versatile and allows for variability in each part of the setup and in the process itself. Different kinds of magnetic masks can be used—permanent, dynamic masks, or HD. The substrate can be flat or non-flat and the ML route can be either positive or negative. The ML can be applied for molecular or biomolecular patterning or for performing physical patterning for common microelectronic processes such as etching, deposition, and ion implantation. The ML can be used for either regular patterning or patterning gradients of a particular property. All possible variations of ML are summarized in Figure 17.


Figure 17. Scheme describing the different routes in every stage of the ML method.

This chapter describes the ML method as a new approach for patterning surfaces. The ML method is a single-step process and is much simpler and less expensive to apply than the common photolithography method. Moreover, ML does not require application of resists; therefore, the surfaces remain clean and can be easily modified chemically as needed. We demonstrated here that ML does not depend on the surface topography or planarity and can be used for patterning non-flat surfaces and the inside surfaces of a closed volume. The widths of lines made with ML can actually be smaller than those of the lines on the mask used. Importantly, the ML method opens up new possibilities in high-throughput surface patterning, since the same method can be used for all types of surfaces, combining the ability to pattern surfaces both for electronics and for chemical/bioprocesses.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123813121000018

12th International Symposium on Process Systems Engineering and 25th European Symposium on Computer Aided Process Engineering

Funmilayo N. Osuolale, Jie Zhang, in Computer Aided Chemical Engineering, 2015

Abstract

This paper presents a new methodology for optimising the energy efficiency of a crude distillation unit without trading off the product quality and process throughput. It incorporates the second law of thermodynamics which indicates how well a system is performing compared to the optimum possible performance and hence gives a good indication of the actual energy use of a process. Bootstrap aggregated neural networks are used for enhanced model accuracy and reliability. In addition to the process operation objectives, minimising the model prediction confidence bound is incorporated in multi-objective optimisation to improve the reliability of the optimisation results. The economic analysis of the recoverable energy (sum of internal and external exergy losses) reveals the energy saving potential of the method. The proposed method will aid the design and operation of energy efficient crude distillation columns.
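The abstract's two key ingredients, bootstrap aggregation and a prediction-confidence penalty, can be sketched in a few lines. The snippet below is not the authors' method: it substitutes ordinary least-squares lines for neural networks and uses the ensemble's prediction spread as a stand-in for the confidence bound, but the mechanism (fit many models on resampled data, average their predictions, penalize their disagreement) is the same.

```python
import random
import statistics

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one bootstrap replicate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def bootstrap_ensemble(xs, ys, n_models=50, seed=1):
    """Bagging: fit each model on a resampled copy of the training data."""
    random.seed(seed)
    models = []
    for _ in range(n_models):
        idx = [random.randrange(len(xs)) for _ in range(len(xs))]
        models.append(fit_line([xs[i] for i in idx], [ys[i] for i in idx]))
    return models

def predict_with_bound(models, x):
    """Ensemble mean prediction and its spread, a proxy for the confidence
    bound that the paper folds into its multi-objective optimisation."""
    preds = [slope * x + intercept for slope, intercept in models]
    return statistics.mean(preds), statistics.stdev(preds)
```

Far from the training data the ensemble members disagree more, so the spread term steers an optimiser back toward operating regions where the model is reliable, which is the reliability argument the abstract makes.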

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780444635785501079

Operational Thinking

Jamshid Gharajedaghi, in Systems Thinking (Third Edition), 2011

6.3.1 Critical Properties of the Process

The most important element in our throughput scheme is identification of the critical properties of the process. Time, cost, flexibility, and quality are usually among the major factors that determine the success of a throughput process. They form an interdependent set of variables, so any one of them can be improved at the expense of the others. Treating them as independent variables, as is normally done, is an unacceptable mistake. Unfortunately, expertise usually lies in only one of the areas of cycle-time reduction, cost control or waste reduction, and quality control. Each expert tries to suboptimize the single area of his or her concern by manipulating the process, which can lead to incompatibility among the solutions. The challenge is to reduce cycle time, while eliminating waste and ensuring availability of the output (in kind, volume, space, and time), in addition to managing the process in such a way that it is "competent and in control," all at the same time. This can only be done by simulating the throughput process with a dynamic model.
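The claim that these interacting variables can only be balanced by simulating the throughput process can be made concrete with a toy dynamic model. The sketch below (all rates and times hypothetical) simulates a single-stage process with random arrivals and one server, and exposes the time/cost trade-off: paying for faster service cuts cycle time, but only simulation reveals by how much.

```python
import random

def simulate_throughput(arrival_rate, service_time, horizon=10_000, seed=0):
    """Toy single-stage throughput model: jobs arrive randomly at
    `arrival_rate` per unit time and queue for one server with a fixed
    `service_time`. Returns (throughput, mean cycle time)."""
    random.seed(seed)
    t = 0.0                 # clock of the latest arrival
    server_free_at = 0.0    # when the server next becomes idle
    completed = 0
    total_cycle = 0.0
    while t < horizon:
        t += random.expovariate(arrival_rate)   # next arrival time
        start = max(t, server_free_at)          # wait if the server is busy
        finish = start + service_time
        server_free_at = finish
        total_cycle += finish - t               # cycle time = waiting + service
        completed += 1
    return completed / t, total_cycle / completed

# Same arrival stream (same seed), two service speeds:
tp_slow, ct_slow = simulate_throughput(arrival_rate=0.8, service_time=1.0)
tp_fast, ct_fast = simulate_throughput(arrival_rate=0.8, service_time=0.5)
```

With the same arrival stream, halving the service time cuts the mean cycle time far more than proportionally, because queueing delay collapses as utilization drops; that nonlinearity is exactly what a static, one-variable-at-a-time analysis misses.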

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123859150000064

29th European Symposium on Computer Aided Process Engineering

Ouyang Wu, ... Matthias Roth, in Computer Aided Chemical Engineering, 2019

1 Introduction

Multipurpose chemical batch plants are often a challenge to schedule. These processes typically produce a large number of products requiring several processing steps. When scheduling such processes, several factors must be taken into account, including, but not limited to, process throughput, storage constraints, equipment condition, and maintenance concerns. When creating scheduling models, one of the major decisions is the time representation chosen for the problem, which can be either continuous or discrete. Continuous-time problems generally result in far fewer variables than their discrete counterparts and provide an explicit representation of timing decisions. Continuous-time scheduling formulations are broadly classified into precedence-based, single-time-grid, or multiple-time-grid models (Méndez et al., 2006). Continuous-time models have been applied to many different scenarios; for example, Castro and Grossmann (2012) used generalized disjunctive programming to derive generic continuous-time scheduling models.

One aspect of scheduling that has been gaining attention in literature lately is that of integrating equipment condition into production scheduling. Vieira et al. (2017) studied the optimal planning of a continuous biopharmaceutical process with decaying yield. They formulated the problem as a continuous single-time grid formulation making both production scheduling and maintenance scheduling decisions. Other applications of combined maintenance and production scheduling can be found in Biondi et al. (2017) and Dalle Ave et al. (2019) who studied the joint production and maintenance problem of steel plant scheduling under different conditions.

The goal of this work is to investigate the production scheduling of a chemical batch plant with unit degradation. A batch reactor case study considering fouling is presented in which the fouling prolongs the time needed to complete a batch and is highly influenced by the sequence of products produced in the reactor. The problem is formulated using a set of continuous-time MILP-based scheduling models which will be described and compared in the remainder of this work.
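The coupling the case study describes, where fouling prolongs a batch and depends on the sequence of products run in the reactor, can be illustrated without an MILP solver. The toy below (hypothetical products and penalty values) encodes sequence-dependent fouling as extra batch time and finds the best sequence by exhaustive search; the continuous-time MILP models compared in this work address the same structure at realistic scale.

```python
from itertools import permutations

# Base batch times (hypothetical products) and fouling penalties: extra time
# added to a batch depending on which product ran before it in the reactor.
BASE = {"A": 4.0, "B": 3.0, "C": 5.0}
FOULING = {("A", "B"): 1.5, ("B", "A"): 0.2, ("A", "C"): 0.1,
           ("C", "A"): 2.0, ("B", "C"): 0.3, ("C", "B"): 1.0}

def makespan(seq):
    """Total time to run the sequence on one reactor, fouling included."""
    total = BASE[seq[0]]
    for prev, cur in zip(seq, seq[1:]):
        total += BASE[cur] + FOULING[(prev, cur)]
    return total

def best_schedule(products):
    """Exhaustive search over sequences; a stand-in for the MILP models."""
    return min(permutations(products), key=makespan)
```

For three products enumeration is trivial, but the number of sequences grows factorially with the product slate, which is why realistic instances need the MILP formulations described in the remainder of the work.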

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128186343502034

Business Architecture

Jamshid Gharajedaghi, in Systems Thinking (Third Edition), 2011

9.5.3 Recap

Interactive design is a process for operationalizing the most exciting vision of the future that the designers are capable of producing. It is the design of the next generation of their system to replace the existing order.

Design of a throughput process is essentially technology driven, whereas design of organizational processes depends on the paradigm in use.

Winning is fun, but to win, one has to keep score. And the way one keeps score defines the game.

The realization effort is not a one-time proposition. Successive approximations of the next generation design make up the evolutionary process by which the transformation effort is conducted.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978012385915000009X

Carrier Corporation

Jamshid Gharajedaghi, in Systems Thinking (Third Edition), 2012

14.7 Inputs

14.7.1 The Technology

The technology group will be the research and development arm of Carrier, responsible for identifying and nurturing core technologies required by Carrier businesses. Carrier will continuously assess its technological profile to match the emerging needs of the business. The company's desired position with respect to related technologies and components will be defined using the following three categories:

Must have knowledge and control in-house

Must have knowledge and strategic alliances

Must have knowledge to influence independent developers

Table 14.1 summarizes Carrier's present need for core components and corresponding technologies.

Table 14.1. Needed Technologies and Components

Core technology or component | Knowledge and control | Knowledge and alliance | Knowledge and influence
Compressors X
Electronic controls X
Heat transfer devices X
Enclosures, fans, and air movements X
Motors X
Air cleaning devices X
Refrigerants X
Diesel engines X
Building software X

A special group within the technology group will focus on developing service technology (e.g., remote diagnosis capability). This activity will be funded by all the business units.

14.7.2 Operational Support (Process Design)

This unit will focus on increasing throughput of the enterprise. It is organized around modular teams and provides support to all units in Carrier.

Carrier will create centers of expertise that develop and disseminate knowledge throughout the organization. They will be organized into process teams and technology teams. This knowledge is not divided by disciplines, as in universities, but will be cross-disciplinary so it can provide solutions to complex problems.

Process and technology teams will work as internal consultants attacking and improving critical elements of the business. They will educate the rest of the organization on the latest advancements in design and throughput processes. The teams will be expected to market their services to both the executive office and to the operating units. Process teams must re-create themselves in the organization. They are accountable for redesigning, implementing, and handing off systems solutions. Only when the project has been considered a success will the team move on to its next challenge.

Process teams will be organized around specific throughput processes, but all will have cross-functional expertise in information technology, total quality, human systems, and design. Their objective is to produce an integrated solution (a single design) for each of the critical throughput processes so that all of the following critical goals for success can be simultaneously accomplished:

Reduce the time cycle

Eliminate waste

Achieve flexibility

Achieve total quality

Technology teams will be multidisciplinary, with the ability to integrate and apply various technologies to solve specific business problems. They are charged with educating everyone in the company, from engineers to the direct sales force, on the potential impact of emerging technologies on Carrier's business. Incentives for team members will be based on the success of the team as a whole. Teams will be composed of very competent professionals and kept together for an extended period to ensure continuous improvement.

14.7.3 Management Support Services

Management support services include financial, accounting, and administrative services; human resource services; MIS; and quality. These units will all provide services. The function of control will be part of the executive office. All input units will be “performance centers.” For each unit, a set of specific measures of performance will be developed. Each unit will be expected to add value to Carrier through its operations. Therefore, profitability will be a key factor in each unit's performance measurement. The revenues for each unit will be derived from its “sales” to other Carrier units and/or to external customers.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123859150000143

Semiconductor Quantum Science and Technology

Samir Bounouar, ... Stephan Reitzenstein, in Semiconductors and Semimetals, 2020

3.1.1 Wafer bonding

We next discuss the wafer bonding technique, which is of particular importance for the realization of the hybrid or heterogeneous photonic circuits presented in section 5. While QD lasers produced on silicon substrates through a direct wafer fusion bonding process have been demonstrated (Fang et al., 2006), Davanco et al. (2017) showed that direct wafer bonding could be leveraged to produce SiN WG-based photonic circuits incorporating single QDs as single-photon sources. In this method, two wafers of different materials are first brought together and bonded to form a heterogeneous wafer stack. Typically, the substrate of one of the wafers is removed, leaving behind a thin film of material that can be processed alongside the receiving material substrate. Device fabrication can then follow a single high-throughput process flow with completely top-down techniques, in which the different material layers are subsequently etched following high-resolution lithography. This fully top-down fabrication process, which allows high-resolution lithography-based definition of the IQPC geometries in all layers involved, in principle enables fabrication of high-performance devices.

One of the main advantages of such an approach is in scalability—it allows wafer-scale device production by leveraging the massive parallelism enabled by mature, top-down semiconductor fabrication methods. Indeed, the wafer bonding approach has been employed commercially for the creation of heterogeneous integrated III–V semiconductor/silicon-on-insulator photonic devices, such as lasers and transceivers, with optical gain provided by III–V materials (Komljenovic et al., 2018).

An important issue with the wafer bonding approach is that considerable effort is required for bringing two materials together before device fabrication can even be considered. After the two materials are bonded, suboptimal device performance may still arise due to process incompatibility. Increasing the complexity to more than two types of materials on one chip is furthermore challenging. At least for fast device prototyping, deterministic fabrication methods presented in the following section offer advantages over wafer bonding.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/S0080878420300302

What best defines the process of acquiring knowledge through use of the senses?

Cognition is described in the Oxford dictionary as the mental actions or processes involved in acquiring, maintaining and understanding knowledge through thought, experience and the senses (definition of Cognition from the English Oxford Dictionary, 2018), and is described by Licht, Hull and Ballantyne (2014) as the ...

What measures network performance by the maximum amount of data that can pass from one point to another in a unit of time?

Network bandwidth is a measurement indicating the maximum capacity of a wired or wireless communications link to transmit data over a network connection in a given amount of time. Typically, bandwidth is expressed as the number of bits, kilobits, megabits, or gigabits that can be transmitted in 1 second.
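The definition above turns into a one-line calculation once units are kept straight: bandwidth is quoted in bits per second, while file sizes are usually given in bytes. A minimal sketch:

```python
def transfer_time_seconds(payload_bytes, bandwidth_bits_per_s):
    """Lower bound on transfer time implied by the bandwidth definition:
    at most `bandwidth_bits_per_s` bits can cross the link each second.
    Real transfers add protocol overhead and latency on top of this."""
    return payload_bytes * 8 / bandwidth_bits_per_s

# A 100 MB file over a 100 Mbit/s link needs at least 8 seconds.
t = transfer_time_seconds(100 * 10**6, 100 * 10**6)
```

The factor of 8 (bits per byte) is the usual source of off-by-8x estimates when comparing advertised link speeds with observed download rates.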

What best describes the central goal of health informatics?

What best describes the central goal of health informatics? To manage and communicate data, information, knowledge, and wisdom in the delivery of health care.

Is designed to increase the overall throughput of the CPU?

Cache memory helps speed up the overall throughput of the CPU. The speed of a CPU is measured in clock cycles per second (hertz).
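As a loose software analogy (not how hardware caches are implemented), the sketch below shows why a cache raises effective throughput: repeated accesses to the same address skip the slow path entirely. The `slow_fetch` callable is hypothetical.

```python
def make_cached_fetch(slow_fetch):
    """Software analogy of a cache: remember results of `slow_fetch` so that
    repeated accesses to the same address skip the slow path. The returned
    fetch function reports (value, number of slow fetches so far)."""
    cache = {}
    stats = {"slow": 0}
    def fetch(addr):
        if addr not in cache:        # miss: take the slow path once
            stats["slow"] += 1
            cache[addr] = slow_fetch(addr)
        return cache[addr], stats["slow"]   # hits never touch slow_fetch
    return fetch
```

The higher the hit rate, the closer the average access time gets to the fast path, which is the mechanism by which CPU caches raise overall throughput.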