Which layer abstracts away the need for any other layers to care about what hardware is in use: physical, data link, network, or transport?

Virtualization

Dijiang Huang, Huijun Wu, in Mobile Cloud Computing, 2018

Hardware Abstraction Layer (HAL)

In computers, a hardware abstraction layer (HAL) is a layer of programming that allows a computer OS to interact with a hardware device at a general or abstract level rather than at a detailed hardware level. The HAL can be called from either the OS kernel or from a device driver; in either case, the calling program can interact with the device in a more general way than it otherwise could.
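To make the idea concrete, here is a minimal C sketch of such an interface (not taken from the chapter; the struct and function names are illustrative): a table of function pointers that the kernel or a driver calls without knowing which physical device sits behind it.

```c
#include <stddef.h>
#include <stdint.h>

/* Generic device interface exposed by the HAL: callers see only these
 * operations, never the registers of the underlying hardware. */
struct hal_device {
    int (*init)(void *ctx);
    int (*read)(void *ctx, uint8_t *buf, size_t len);
    int (*write)(void *ctx, const uint8_t *buf, size_t len);
    void *ctx;                 /* board-specific state hidden from callers */
};

/* Kernel or driver code written against the abstract interface. */
int log_message(struct hal_device *dev, const char *msg, size_t len)
{
    return dev->write(dev->ctx, (const uint8_t *)msg, len);
}

/* One possible binding: a hypothetical UART on this board. Swapping
 * boards means swapping this table, not the code that calls it. */
extern int uart_init(void *ctx);
extern int uart_read(void *ctx, uint8_t *buf, size_t len);
extern int uart_write(void *ctx, const uint8_t *buf, size_t len);

struct hal_device console_uart = {
    .init = uart_init, .read = uart_read, .write = uart_write, .ctx = NULL,
};
```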

Virtualization at the HAL exploits the similarity between the guest and host architectures to cut down interpretation latency. The virtualization technique maps virtual resources to physical resources and uses the native hardware for computation in the VM. When the emulated machine needs to talk to critical physical resources, the simulator takes over and multiplexes them appropriately. For such a virtualization technology to work correctly, the VM must be able to trap every privileged instruction execution and pass it to the underlying VMM to handle. Hardware-level VMs tend to offer a high degree of isolation (both from other VMs and from the underlying physical machine), wide acceptance, support for different OSes and applications without rebooting or a complicated dual-boot setup, low risk, and easy maintenance. Typical HAL-based virtualization solutions include many popular products such as VMware [233], Virtual PC [130], Denali [273], Plex86 [174], User-Mode Linux [98], and Cooperative Linux [50].
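The trap-and-emulate requirement can be pictured as a dispatch routine in the VMM: every privileged instruction executed by the guest traps, and the VMM emulates its effect against virtual state before resuming the guest. The sketch below is purely conceptual; the trap reasons and structures are invented and do not correspond to any particular hypervisor.

```c
#include <stdint.h>

/* Hypothetical trap reasons delivered to the VMM by the hardware. */
enum trap_reason { TRAP_IO_PORT, TRAP_PAGE_TABLE_WRITE, TRAP_HALT };

struct vcpu {                      /* virtual CPU state kept by the VMM */
    uint64_t regs[16];
    uint64_t virt_page_table_base;
    int      halted;
};

struct trap_info { enum trap_reason reason; uint64_t operand; };

/* Emulate one privileged instruction on behalf of the guest. */
void vmm_handle_trap(struct vcpu *v, const struct trap_info *t)
{
    switch (t->reason) {
    case TRAP_IO_PORT:
        /* Multiplex the physical device: forward or emulate the access. */
        break;
    case TRAP_PAGE_TABLE_WRITE:
        /* Update the guest's *virtual* page-table base, not the real one. */
        v->virt_page_table_base = t->operand;
        break;
    case TRAP_HALT:
        v->halted = 1;             /* let the VMM schedule another VM */
        break;
    }
}
```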

In the third virtualization classification, presented in Section 2.2.3, we describe two of the most popular HAL-based virtualization solutions in detail, i.e., bare-metal (or Type-I) virtualization and host-based (or Type-II) virtualization.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978012809641300003X

Cortex-M3 Programming

Joseph Yiu, in The Definitive Guide to the ARM Cortex-M3 (Second Edition), 2010

10.4.2 Areas of Standardization

The scope of CMSIS involves standardization in the following areas:

Hardware Abstraction Layer (HAL) for Cortex-M processor registers: This includes standardized register definitions for NVIC, System Control Block registers, SYSTICK register, MPU registers, and a number of NVIC and core feature access functions.

Standardized system exception names: This allows OS and middleware to use system exceptions easily without compatibility issues.

Standardized method of header file organization: This makes it easier for users to learn new Cortex microcontroller products and improve software portability.

Common method for system initialization: Each Microcontroller Unit (MCU) vendor provides a SystemInit() function in their device driver library for essential setup and configuration, such as initialization of clocks. Again, this helps new users get started with Cortex-M microcontrollers and aids software portability.

Standardized intrinsic functions: Intrinsic functions are normally used to produce instructions that cannot be generated by IEC/ISO C. By having standardized intrinsic functions, software reusability and portability are considerably improved.

Common access functions for communication: This provides a set of software interface functions for common communication interfaces including universal asynchronous receiver/transmitter (UART), Ethernet, and Serial Peripheral Interface (SPI). By having these common access functions in the device driver library, reusability and portability of embedded software are improved. At the time of writing this book, it is still under development.

Standardized way for embedded software to determine system clock frequency: A software variable called SystemFrequency is defined in the device driver code. This allows an embedded OS to set up the SYSTICK unit based on the system clock frequency.

The CMSIS defines the basic requirements to achieve software reusability and portability. MCU vendors can include additional functions for each peripheral to enrich the features of their software solutions, so using CMSIS does not limit the capability of the embedded products. A short sketch below illustrates these conventions.
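The following fragment uses the CMSIS core functions NVIC_SetPriority(), NVIC_EnableIRQ(), SysTick_Config(), and the __enable_irq()/__WFI() intrinsics together with the vendor-supplied SystemInit() and the SystemFrequency variable named above. The device header name and the IRQ identifier are placeholders; note also that later CMSIS releases renamed SystemFrequency to SystemCoreClock.

```c
#include <stdint.h>
#include "vendor_device.h"   /* placeholder for the vendor's CMSIS device header */

extern uint32_t SystemFrequency;        /* set by SystemInit() */

int main(void)
{
    SystemInit();                        /* vendor clock/PLL setup */

    /* 1 ms tick derived from the standardized clock variable. */
    SysTick_Config(SystemFrequency / 1000u);

    NVIC_SetPriority(TIMER0_IRQn, 2);    /* TIMER0_IRQn is device specific */
    NVIC_EnableIRQ(TIMER0_IRQn);

    __enable_irq();                      /* standardized intrinsic */

    for (;;) {
        __WFI();                         /* wait for interrupt */
    }
}
```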

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781856179638000132

Computing Platforms

Marilyn Wolf, in Computers as Components (Fourth Edition), 2017

Q4-1

Name three major components of a generic computing platform.

Q4-2

What role does the HAL play in the platform?

Q4-3

Draw UML state diagrams for device 1 and device 2 in a four-cycle handshake.

Q4-4

Describe the role of these signals in a bus:

a.

R/W'

b.

data ready

c.

clock

Q4-5

Draw a UML sequence diagram that shows a four-cycle handshake between a bus master and a device.

Q4-6

Define these signal types in a timing diagram:

a.

changing;

b.

stable.

Q4-7

Draw a timing diagram with the following signals (where [t1,t2] is the time interval starting at t1 and ending at t2):

a.

signal A is stable [0,10], changing [10,15], stable [15,30]

b.

signal B is 1 [0,5], falling [5,7], 0 [7,20], changing [20,30]

c.

signal C is changing [0,10], 0 [10,15], rising [15,18], 1 [18,25], changing [25,30]

Q4-8

Draw a timing diagram for a write operation with no wait states.

Q4-9

Draw a timing diagram for a read operation on a bus in which the read includes two wait states.

Q4-10

Draw a timing diagram for a write operation on a bus in which the write takes two wait states.

Q4-11

Draw a timing diagram for a burst write operation that writes four locations.

Q4-12

Draw a UML state diagram for a burst read operation with wait states. One state diagram is for the bus master and the other is for the device being read.

Q4-13

Draw a UML sequence diagram for a burst read operation with wait states.

Q4-14

Draw timing diagrams for

a.

a device becoming bus master

b.

the device returning control of the bus to the CPU

Q4-15

Draw a timing diagram that shows a complete DMA operation, including handing off the bus to the DMA controller, performing the DMA transfer, and returning bus control back to the CPU.

Q4-16

Draw UML state diagrams for a bus mastership transaction in which one side shows the CPU as the default bus master and the other shows the device that can request bus mastership.

Q4-17

Draw a UML sequence diagram for a bus mastership request, grant, and return.

Q4-18

Draw a UML sequence diagram that shows a DMA bus transaction and concurrent processing on the CPU.

Q4-19

Draw a UML sequence diagram for a complete DMA transaction, including the DMA controller requesting the bus, the DMA transaction itself, and returning control of the bus to the CPU.

Q4-20

Draw a UML sequence diagram showing a read operation across a bus bridge.

Q4-21

Draw a UML sequence diagram showing a write operation with wait states across a bus bridge.

Q4-22

Draw a UML sequence diagram for a read transaction that includes a DRAM refresh operation. The sequence diagram should include the CPU, the DRAM interface, and the DRAM internals to show the refresh itself.

Q4-23

Draw a UML sequence diagram for an SDRAM read operation. Show the activity of each of the SDRAM signals.

Q4-24

What is the role of a memory controller in a computing platform?

Q4-25

What hardware factors might be considered when choosing a computing platform?

Q4-26

What software factors might be considered when choosing a computing platform?

Q4-27

Write ARM assembly language code that handles a breakpoint. It should save the necessary registers, call a subroutine to communicate with the host, and upon return from the host, cause the breakpointed instruction to be properly executed.

Q4-28

Assume an A/D converter is supplying samples at 44.1 kHz.

a.

How much time is available per sample for CPU operations?

b.

If the interrupt handler executes 100 instructions obtaining the sample and passing it onto the application routine, how many instructions can be executed on a 20-MHz RISC processor that executes 1 instruction per cycle?

Q4-29

If an interrupt handler executes for too long and the next interrupt occurs before the last call to the handler has finished, what happens?

Q4-30

Consider a system in which an interrupt handler passes on samples to an FIR filter program that runs in the background.

a.

If the interrupt handler takes too long, how does the FIR filter's output change?

b.

If the FIR filter code takes too long, how does its output change?

Q4-31

Assume that your microprocessor implements an ICE instruction that asserts a bus signal that causes a microprocessor in-circuit emulator to start. Also assume that the microprocessor allows all internal registers to be observed and controlled through a boundary scan chain. Draw a UML sequence diagram of the ICE operation, including execution of the ICE instruction, uploading the microprocessor state to the ICE, and returning control to the microprocessor's program. The sequence diagram should include the microprocessor, the microprocessor in-circuit emulator, and the user.

Q4-32

Why might an embedded computing system want to implement a DOS-compatible file system?

Q4-33

Name two example embedded systems that implement a DOS-compatible file system.

Q4-34

You are given a memory system with an overhead O = 2 and a single-word transfer time of 1 (no wait states). You will use this memory system to perform a transfer of 1024 locations. Plot total number of clock cycles T as a function of burst size B for 1 <= B <= 8.

Q4-35

You are given a bus that supports single-word and burst transfers. A single transfer takes 1 clock cycle (no wait states). The overhead of a single-word transfer is 1 clock cycle (O = 1). The overhead of a burst is 3 clock cycles (OB = 3). Which performs a two-word transfer faster: a pair of single transfers or a single burst of two words?

Q4-36

You are given a 2-byte-wide bus that supports single-byte, dual-word (same clock cycle), and burst transfers of up to 8 bytes (4 byte pairs per burst). The overhead of each of these types of transfers is 1 clock cycle (O = OB = 1), and a data transfer takes 1 clock cycle per single or dual word (D = 1). You want to send a 1080P video frame at a resolution of 1920 × 1080 pixels with 3 bytes per pixel. Compare the bus transfer times if the pixels are packed versus sending each pixel as a 2-byte transfer followed by a single-byte transfer.

Q4-37

Determine the design parameters for an audio system:

a.

Determine the total bytes per second required for an audio signal of 16 bits/sample per channel, two channels, sampled at 44.1 kHz.

b.

Given a 20-MHz bus clock, determine the bus width required assuming that nonburst mode transfers are used and D = O = 1.

c.

Given a 20-MHz bus clock, determine the bus width required assuming burst transfers of length four and D = OB = 1.

d.

Assume the data signal now contains both the original audio signal and a compressed version of the audio at a bit rate of 1/10 that of the input audio signal. Assume burst transfers of length four on a 20-MHz bus with D = OB = 1. Will a bus of width 1 be sufficient to handle the combined traffic?

Q4-38

You are designing a bus-based computer system: the input device I1 sends its data to program P1; P1 sends its output to output device O1. Is there any way to overlap bus transfers and computations in this system?

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128053874000042

Multiprocessor Software

Marilyn Wolf, in High-Performance Embedded Computing (Second Edition), 2014

6.4.2 System-on-chip services

The advent of systems-on-chips (SoC) has resulted in a new generation of custom middleware that relies less on standard services and models. SoC middleware has been designed from scratch for several reasons. First, these systems are often power or energy constrained and any services must be implemented very efficiently. Second, although the SoC may be required to use standard services externally, they are not constrained to use standards within the chip. Third, today’s SoCs are composed of a relatively small number of processors. The 50-processor systems of tomorrow may in fact make more use of industry standard services, but today’s systems-on-chips often use customized middleware.

Figure 6.18 shows a typical software stack for an embedded SoC multiprocessor. This stack has several elements:


FIGURE 6.18. A software stack and services in an embedded multiprocessor.

The hardware abstraction layer (HAL) provides a uniform abstraction for devices and other hardware primitives. The HAL abstracts the rest of the software from both the devices themselves and from certain elements of the processor.

The real-time operating system controls basic system resources such as process scheduling and memory.

The interprocess communication layer provides abstract communication services. It may, for example, use the same function for communication between processes whether they run on the same or different PEs (a sketch of this idea follows the list).

The application-specific libraries provide utilities for computation or communication specific to the application.

The application code uses these layers to provide the end service or function.
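As a sketch of how such a stack hides hardware details (names invented for illustration; this is not code from the text), the fragment below shows an interprocess-communication send whose caller never learns whether the destination runs on the same PE or a different one; the HAL and network-on-chip primitives beneath it hide the actual interconnect.

```c
#include <stdint.h>
#include <string.h>

typedef int pe_id_t;
typedef int proc_id_t;

/* Provided by the RTOS when processes are created and mapped to PEs. */
extern pe_id_t local_pe(void);
extern pe_id_t pe_of(proc_id_t dest);

/* HAL-level primitives: one for local queues, one for the on-chip network. */
extern int local_queue_push(proc_id_t dest, const void *msg, size_t len);
extern int noc_send(pe_id_t pe, proc_id_t dest, const void *msg, size_t len);

/* The single function applications see, regardless of where 'dest' lives. */
int ipc_send(proc_id_t dest, const void *msg, size_t len)
{
    if (pe_of(dest) == local_pe())
        return local_queue_push(dest, msg, len);   /* same PE: shared memory */
    return noc_send(pe_of(dest), dest, msg, len);  /* different PE: NoC/HAL  */
}
```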

MultiFlex

Paulin et al. [Pau02a; Pau06] developed the MultiFlex programming environment to support multiple programming models that are supported by hardware accelerators. MultiFlex supports both distributed system object component (DSOC) and symmetric multiprocessing (SMP) models. Each is supported by a high-level programming interface. Figure 6.19 shows the MultiFlex architecture and its DSOC and SMP subsystems. Different parts of the system are mapped onto different parts of the architecture: control functions can run on top of an OS on the host processor; some high-performance functions, such as video, can run on accelerators; some parallelizable operations can go on a hardware multithreaded set of processors; and some operations can go into DSPs. The DSOC and SMP units in the architecture manage communication between the various subsystems.


FIGURE 6.19. Object broker and SMP concurrency engine in MultiFlex.

From Paulin et al. [Pau06] ©2006 IEEE.

The DSOC model is based on a parallel communicating object model from client to server. It uses a message-passing engine for interprocess communication. When passing a message, a wrapper on the client side must marshal the data required for the call. Marshaling may require massaging data types, moving data to a more accessible location, and so on. On the server side, another wrapper unmarshals the data to make it usable to the server.
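A minimal sketch of what the wrappers do, using invented structures rather than the MultiFlex API: the client-side wrapper marshals the call's arguments into a flat message for the message-passing engine, and the server-side wrapper unmarshals them before invoking the object.

```c
#include <stdint.h>
#include <string.h>

/* Flat, self-describing message carried by the message-passing engine. */
struct dsoc_msg {
    uint32_t object_id;      /* which server object to invoke */
    uint32_t method_id;      /* which method of that object   */
    uint32_t arg_len;
    uint8_t  args[64];       /* marshaled arguments           */
};

/* Client-side wrapper: massage the arguments into the message layout. */
void marshal_call(struct dsoc_msg *m, uint32_t obj, uint32_t method,
                  const void *args, uint32_t len)
{
    m->object_id = obj;
    m->method_id = method;
    m->arg_len   = len;
    memcpy(m->args, args, len);   /* may also convert endianness, etc. */
}

/* Server-side wrapper: recover the arguments for the object implementation. */
uint32_t unmarshal_call(const struct dsoc_msg *m, void *args_out)
{
    memcpy(args_out, m->args, m->arg_len);
    return m->method_id;
}
```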

The object request broker (ORB) coordinates object communication. It must be able to handle many objects that execute in parallel. A server farm holds the resources to execute a large number of object requests. The ORB matches a client request to an available server; the client stalls until the request is satisfied. The server takes a request, looks up the appropriate object servers in a table, and returns the results when they are available. Paulin et al. [Pau06] report that 300-MHz RISC processors can perform about 35 million object calls per second.

The SMP model is implemented using software on top of a hardware concurrency engine. The engine appears as a memory-mapped device with a set of memory-mapped addresses for each concurrency object. A protected region for an object can be entered by writing the appropriate address within that object’s address region.
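The memory-mapped style of the concurrency engine can be sketched as follows; the base address, stride, and register offsets are hypothetical and serve only to illustrate entering a protected region by writing an address inside that object's region.

```c
#include <stdint.h>

/* Hypothetical layout: each concurrency object owns a small address window. */
#define CE_BASE         0x40010000u     /* invented base address              */
#define CE_OBJ_STRIDE   0x100u          /* one window per concurrency object  */
#define CE_ENTER_OFF    0x00u
#define CE_LEAVE_OFF    0x04u

static inline volatile uint32_t *ce_reg(unsigned obj, uint32_t off)
{
    return (volatile uint32_t *)(CE_BASE + obj * CE_OBJ_STRIDE + off);
}

static inline void ce_enter(unsigned obj)
{
    /* The engine may stall this access until the region is free. */
    *ce_reg(obj, CE_ENTER_OFF) = 1;
}

static inline void ce_leave(unsigned obj)
{
    *ce_reg(obj, CE_LEAVE_OFF) = 1;     /* release the protected region */
}
```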

ENSEMBLE

ENSEMBLE [Cad01] is a library for large data transfers that allows overlapping computation and communication. The library is designed for use with an annotated form of Java that allows array accesses and data dependencies to be analyzed for single program, multiple data execution. The library provides send and receive functions, including specialized versions for contiguous buffers. The emb_fence() function handles the pending sends and receives. Data transfers are handled by the DMA system, allowing the programs to execute concurrently with transfers.
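ENSEMBLE itself targets annotated Java, but the overlap pattern it supports can be sketched in C with invented emb_send()/emb_recv() wrappers around the emb_fence() idea named above: start the transfers, compute on data that is already local while the DMA engine works, and only then wait at the fence.

```c
#include <stddef.h>

/* Invented prototypes mirroring the send/receive/fence pattern; emb_fence()
 * is the only name taken from the description of ENSEMBLE above. */
extern void emb_send(int dest, const double *buf, size_t n);  /* starts a DMA transfer */
extern void emb_recv(int src, double *buf, size_t n);         /* posts a receive       */
extern void emb_fence(void);                                  /* waits for pending ops */

void exchange_and_compute(int peer, double *halo_in, const double *halo_out,
                          double *local, size_t n_halo, size_t n_local)
{
    emb_recv(peer, halo_in, n_halo);      /* post the receive first                 */
    emb_send(peer, halo_out, n_halo);     /* DMA moves data in the background       */

    for (size_t i = 0; i < n_local; i++)  /* overlap: compute on interior data      */
        local[i] *= 2.0;

    emb_fence();                          /* ensure the incoming data has arrived   */
    /* ... now safe to use halo_in ... */
}
```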

The next example describes the middleware for the TI OMAP.

Example 6.3

Middleware and Services Architecture of the TI OMAP

The following figure shows the layers of software in an OMAP-based system. The DSP provides a software interface to its functions. The C55x supports a standard, known as eXpressDSP, for describing algorithms. This standard hides some of the memory and interfacing requirements of algorithms from application code.


The DSP resource manager provides the basic API for the DSP functions. It controls tasks, data streams between the DSP and CPU, and memory allocation. It also keeps track of resources such as CPU time, CPU utilization, and memory.

The DSPBridge in the OMAP is an architecture-specific interface. It provides abstract communication but only in the special case of a master CPU and a slave DSP. This is a prime example of a tailored middleware service.

Nostrum stack

The Nostrum network-on-chip (NoC) is supported by a communications protocol stack [Mil04]. The basic service provided by the system is to accept a packet with a destination process identifier and to deliver it to its destination. The system requires three compulsory layers: the physical layer is the network-on-chip; the data link layer handles synchronization, error correction, and flow control; and the network layer handles routing and the mapping of logical addresses to destination process IDs. The network layer provides both datagram and virtual circuit services.

Network-on-chip services

Sgroi et al. [Sgr01] based their on-chip networking design methodology on the Metropolis methodology. They successively refine the protocol stack by adding adapters. Adapters can perform a variety of transformations: behavior adapters allow components with different models of computation or protocols to communicate; channel adapters compensate for discrepancies between the desired characteristics of a channel, such as reliability or performance, and the characteristics of the chosen physical channel.

Power management

Benini and De Micheli [Ben01b] developed a methodology for power management of networks-on-chips. They advocate a micronetwork stack with three major levels: the physical layer; an architecture and control layer that consists of the OSI data link, network, and transport layers; and a software layer that consists of system and application software. At the data link layer, the proper choice of error-correction codes and retransmission schemes is key. The media access control algorithm also has a strong influence on power consumption. At the transport layer, services may be either connection-oriented or connectionless. Flow control also has a strong influence on energy consumption.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978012410511900006X

Software, firmware, and drivers

Manish J. Gajjar, in Mobile Sensors and Context-Aware Computing, 2017

Android Sensor Types and Modes

Most mobile devices will have built-in sensors to measure the device orientation, motion, and surrounding environmental parameters like temperature, humidity, and so on. Android platforms support the following categories of sensors [17]:

Motion sensors: This group of sensors measures acceleration or rotational forces along the device’s X–Y–Z coordinates. Examples of such sensors are accelerometers and gyroscopes.

Environmental sensors: This group of sensors measures environmental parameters such as temperature, pressure, humidity, and light intensity. Examples of such sensors are thermometers, barometers, and ambient light sensors.

Position sensors: This group of sensors measures the physical position and orientation of the mobile device. Magnetometers fall under this group of sensors.

The sensor types supported by Android are listed in Chapter 11, Sensor application areas. They are named TYPE_xyz, where xyz would be “ACCELEROMETER,” “AMBIENT_TEMPERATURE,” and so on.

The Android sensor stack has base sensors and composite sensors [22].

Base sensors are not the physical sensors themselves but are named after the underlying physical sensor. A base sensor delivers sensor information after applying various corrections to the raw output of a single underlying physical sensor. Examples of base sensor types are SENSOR_TYPE_ACCELEROMETER, SENSOR_TYPE_HEART_RATE, SENSOR_TYPE_LIGHT, SENSOR_TYPE_PROXIMITY, SENSOR_TYPE_PRESSURE, and SENSOR_TYPE_GYROSCOPE.

Composite sensors are the sensors that deliver sensor data after processing and/or fusing data from multiple physical sensors. Some of the examples of composite sensor types are gravity sensor (accelerometer+gyroscope), geomagnetic rotation vector (accelerometer+magnetometer), and rotation vector sensor (accelerometer+magnetometer+gyroscope).

The behavior of Android sensors is affected by the presence of a hardware FIFO in the sensor. When the sensor stores its events or data in this FIFO instead of reporting them to the HAL, this is known as batching. Batching [23] is implemented only in hardware and helps save power because sensor data or events are collected in the background, grouped, and processed together instead of waking the SOC for each individual event. Batching happens when the events of a particular sensor are delayed up to the maximum reporting latency before being reported to the HAL, or when the sensor has to wait for the SOC to wake up and therefore stores all events until then. A bigger FIFO enables more batching and hence potentially more power savings.

If a sensor does not have a hardware FIFO, or if the maximum reporting latency is set to zero, then the sensor operates in continuous operation [23] mode, where its events are not buffered but are reported immediately to the HAL. This mode is the opposite of batching.
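To make batching versus continuous operation concrete, here is a sketch modeled loosely on the batch() entry point that the Android sensors HAL exposes; the struct, handle, and constants below are simplified stand-ins rather than the real sensors.h definitions. A nonzero maximum reporting latency asks the sensor to batch events in its FIFO, while zero forces continuous reporting.

```c
#include <stdint.h>

/* Simplified stand-in for the sensors-HAL device interface described above. */
struct sensors_device {
    int (*batch)(struct sensors_device *dev, int sensor_handle, int flags,
                 int64_t sampling_period_ns, int64_t max_report_latency_ns);
};

#define HANDLE_ACCEL 1                   /* hypothetical sensor handle */

void configure_accel(struct sensors_device *dev, int low_power)
{
    const int64_t period_ns = 20 * 1000 * 1000;          /* 50 Hz sampling */

    if (low_power) {
        /* Batch: let the hardware FIFO hold up to 10 s of events before
         * they are reported to the HAL. */
        dev->batch(dev, HANDLE_ACCEL, 0, period_ns, 10LL * 1000 * 1000 * 1000);
    } else {
        /* Continuous operation: max_report_latency = 0, events are reported
         * immediately and nothing is buffered in the FIFO. */
        dev->batch(dev, HANDLE_ACCEL, 0, period_ns, 0);
    }
}
```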

Based on the capability of the Android sensors to allow the SOC to enter or wake up from the suspend mode, these sensors can be defined as wake-up sensors or nonwake-up sensors through a flag in sensor definition.

Nonwake-up sensors [24]: These sensors do not prevent the SOC from entering the suspend mode and also do not wake up the SOC to report availability of the sensor data. The nonwake-up sensor behavior with respect to SOC suspend mode [23] is listed below.

During SOC suspend mode:

This sensor type continues to generate the required events and stores them in the sensor's hardware FIFO rather than reporting them to the HAL.

If the hardware FIFO fills up, it wraps around like a circular buffer and new events overwrite the oldest ones.

If the sensor does not have a hardware FIFO, the events are lost.

Exit of SOC from suspend mode:

The hardware FIFO data is delivered to the SOC even if the maximum reporting latency has not elapsed. This saves power because the SOC will not have to be awakened again soon if it decides to reenter suspend mode.

When the SOC is not in suspend mode:

Sensor events can be stored in the FIFO as long as the maximum reporting latency has not elapsed. If the FIFO fills up before the maximum reporting latency elapses, the events are reported to the awake SOC so that no events are lost or dropped.

If the maximum reporting latency elapses, all events in the FIFO are reported to the SOC. For example, if the maximum reporting latency of an accelerometer is 20 seconds while that of the gyroscope is 5 seconds, the batches for the accelerometer and the gyroscope can both happen every 5 seconds: when one event must be reported, all events from all sensors can be reported. If sensors share the hardware FIFO and the maximum reporting latency elapses for one of them, then all events in the FIFO are reported, even if the maximum reporting latency has not elapsed for the other sensors.

If the maximum reporting latency is set to zero, events are delivered immediately to the application, since the SOC is awake. This results in continuous operation.

If the sensor does not have a hardware FIFO, events are reported to the SOC immediately, again resulting in continuous operation.

Wake-up sensors [24]: This type of sensor always has to deliver its data/event irrespective of the SOC power state. These sensors will allow the SOC to go into suspend mode but will wake it up when an event needs to be reported to the SOC. The wake-up sensor behavior with respect to the SOC suspend mode [23] is listed below.

During SOC suspend mode:

This sensor type continues to generate the required events and stores them in the sensor's hardware FIFO rather than reporting them to the HAL.

These sensors wake up the SOC from suspend mode to deliver their events, either before the maximum reporting latency expires or when the hardware FIFO gets full.

If the hardware FIFO gets full, it does not wrap around as it does for nonwake-up sensors. Hence the FIFO should not overflow (which would result in the loss of events) while the SOC takes time to exit suspend mode and start the FIFO flush process; the sketch after this section contrasts the two FIFO behaviors.

If maximum reporting latency is set to zero then the events will wake up the SOC and get reported. This would result in continuous operation.

If the sensor does not have hardware FIFO then the events will wake up the SOC and get reported. This would result in continuous operation.

Exit of the SOC from suspend mode:

This sensor type behaves just like nonwake-up sensors and data from the hardware FIFO is delivered to the SOC even if maximum reporting latency has not elapsed.

When the SOC is not in suspend mode:

This sensor type behaves just like nonwake-up sensors.
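The difference in FIFO behavior between the two classes can be sketched as follows (illustrative code, not actual sensor firmware): a nonwake-up sensor lets its FIFO wrap and overwrite old events while the SOC sleeps, whereas a wake-up sensor raises a wake interrupt early enough that the FIFO never overflows.

```c
#include <stdint.h>

#define FIFO_DEPTH  256
#define WAKE_MARGIN 16               /* flush headroom for wake-up sensors */

struct sensor_fifo {
    uint32_t events[FIFO_DEPTH];
    unsigned head, tail, count;
    int      is_wakeup;              /* wake-up vs. nonwake-up sensor */
};

extern void wake_soc(void);          /* hypothetical: raise the wake interrupt */

/* Called by sensor firmware while the SOC is suspended. */
void fifo_push(struct sensor_fifo *f, uint32_t event)
{
    if (f->is_wakeup && f->count >= FIFO_DEPTH - WAKE_MARGIN)
        wake_soc();                  /* wake-up sensor: flush before overflow */

    if (f->count == FIFO_DEPTH) {
        /* Nonwake-up sensor: wrap like a circular buffer, oldest event lost.
         * (A wake-up sensor is flushed in time so this branch is not taken.) */
        f->tail = (f->tail + 1) % FIFO_DEPTH;
        f->count--;
    }
    f->events[f->head] = event;
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count++;
}
```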

Android sensors generate events in four possible reporting modes [25], namely continuous reporting, on-change reporting, one-shot reporting, and special reporting.

In continuous reporting mode the events are generated at a constant rate as defined by a sampling period parameter setting passed to the batch function defined in HAL. Accelerometers and gyroscopes are examples of sensors using continuous reporting mode.

In on-change reporting mode, events are generated when the sensed value changes, and also upon activation of the sensor at the HAL. These events are reported only after the minimum time between two events, as set by the sampling period parameter of the batch function, has elapsed. Heart rate sensors and step counters are examples of sensors using on-change reporting mode.

In one-shot reporting mode, the sensor deactivates itself when an event occurs and sends that event through the HAL as soon as it is generated. The detected event cannot be stored in the hardware FIFO, and the one-shot sensor must be reactivated to send any further events. Sensors that detect motion resulting in a major change of user location (for example, walking or riding in a moving vehicle) fall under this category; such sensors are also called trigger sensors. For these sensors the maximum reporting latency and sampling period parameters are meaningless.

In special reporting mode, the sensor will generate an event based on a specific event. For example, an accelerometer can use special reporting mode to generate an event when the user takes a step, or it can be used to report the tilt of the mobile device. Hence the underlying physical sensor is used in special reporting mode as a step detector or as a tilt detector.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128016602000069

Installing Linux

Graham Speake, in Eleventh Hour Linux+, 2010

Summary of Exam Objectives

In this chapter, we discussed how Linux was installed and some of the options that are available. The hardware architecture was described, including the processors Linux supports, the HAL, how the Linux kernel interacts with the hardware, the hardware components that are supported, and the Hardware Compatibility List. Installation of Linux from local media, as well as remotely across a network, was presented, highlighting the differences between the two approaches. The generic decisions to be made in installing the operating system were given to ensure that any Linux distribution could be used. The Linux filesystem was introduced and the various types of filesystems that could be created were discussed. The primary and extended partition types for hard disk partitioning were explained and when to use each. In particular, at least one primary partition is needed to hold the bootable operating system. Finally, LVM and RAID were described to show how to create logical volumes and how to stripe, mirror, and parity check disk arrays.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597494977000049

Introduction

Frank Brill, ... Stephen Ramm, in OpenVX Programming Guide, 2020

Abstract

OpenVX is an Application Programming Interface (API) that was created to make computer vision programs run faster on mobile and embedded devices. It is different from other computer vision libraries because, from the very beginning, it was designed as a Hardware Abstraction Layer (HAL) that helps software run efficiently on a wide range of hardware platforms. OpenVX is developed by the OpenVX committee, which is part of the Khronos Group. Khronos is an open, nonprofit consortium; any company or individual can become a member and participate in the development of OpenVX and other standards, including OpenGL, Vulkan, and WebGL. OpenVX is royalty free, but at the same time it is developed by the industry: some of the world's largest silicon vendors are members of the OpenVX committee.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128164259000073

System-Level Design and Hardware/Software Co-design

Marilyn Wolf, in High-Performance Embedded Computing (Second Edition), 2014

7.4 Electronic system-level design

Electronic system-level (ESL) design has become widely accepted as a term for the design of mixed hardware/software systems such as systems-on-chips, driven from high-level modeling languages such as SystemC or MATLAB. ESL methodologies leverage existing hardware and software tools and design flows, concentrating on the refinement of an abstract system description into concrete hardware and software components.

Y-chart

An early representation for tasks in system-level design was the Gajski-Kuhn Y-chart [Gaj83], which was developed for VLSI system design at a time when hardware and software design were largely separate. As shown in Figure 7.22, the Y-chart combines levels of abstraction with forms of specification. Three types of specification form the Y: behavioral, structural, and physical. Levels of abstraction can be represented in each of these forms and so are represented as circles around the center of the Y: system, register-transfer, gate, and transistor. This chart has been referenced and updated by many researchers over the years.


FIGURE 7.22. The Gajski-Kuhn Y-chart [Gaj83].

Daedalus

Nikolov et al. [Nik08] developed the Daedalus system for multimedia MPSoCs. They represent the application as a Kahn process network. Design space exploration uses high-level models of the major hardware components to explore candidate partitionings into hardware and software.

SCE

Domer et al. [Dom08] used a three-level design hierarchy for their System-on-Chip Environment (SCE):

Specification model—The system is specified as a set of behaviors interconnected by abstract communication channels.

Transaction-level model (TLM)—The behaviors are mapped onto the platform architecture, including processors, communication links, and memories. The execution of behavior is scheduled.

Implementation model—A cycle-accurate model. The RTOS and hardware abstraction layer (HAL) are explicitly modeled.

X-chart

Gerstlauer et al. [Ger09] extended the Y-chart concept to an X-chart for SoC design as shown in Figure 7.23. The behavior and nonfunctional constraints together form the system specification. A synthesis process transforms that specification into an implementation consisting of a structure and a set of quality numbers that describe properties such as performance and energy consumption.


FIGURE 7.23. The X-chart model for system-on-chip design [Ger09].

Common intermediate code

Kwon et al. [Kwo09] proposed a common intermediate code (CIC) representation for system-level design. Their CIC includes hardware and software representations. Software is represented at the task level using stylized C code that includes function calls to manage task-level operations, such as initialization, the main body, and completion. The target architecture is described using XML notation in a form known as an architectural information file. A CIC translator compiles the CIC code into executable C code for the tasks on the chosen processors. Translation is performed in four steps: translation of the API, generation of the hardware interface code, OpenMP translation, and task scheduling code.
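The "stylized C code" idea can be sketched as below; the _init/_go/_wrapup split mirrors the initialization, main-body, and completion steps mentioned in the text, but the function names and the channel-access comments are illustrative rather than the actual CIC conventions.

```c
/* A CIC-style task written in stylized C: the translator generates the
 * scheduling code that calls these entry points on the chosen processor. */
#include <stdint.h>

static int32_t coeff[16];
static int32_t state;

void filter_task_init(void)          /* initialization */
{
    for (int i = 0; i < 16; i++)
        coeff[i] = 1;
    state = 0;
}

void filter_task_go(void)            /* main body, invoked per iteration */
{
    /* In a real CIC task, channel-access calls inserted by the translator
     * would read an input sample and write an output sample here. */
    state += coeff[0];
}

void filter_task_wrapup(void)        /* completion */
{
    state = 0;
}
```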

SystemCoDesigner

Keinert et al. [Kei09] proposed SystemCoDesigner to automate design space exploration for the hardware and software components of an ESL design. They represent the design problem using three elements: an architecture template describes the possible CPUs/function units and their interconnection; a process graph describes the application; and a set of constraints restricts the possible mappings. Multi-objective evolutionary algorithms explore the design space using heuristics derived from biological evolution.

System and virtual architectures

Popovici et al. [Pop10] divided MPSoC design into four stages:

System architecture—The system is partitioned into hardware and software components with abstract communication links. The types of hardware and software components are specified only as abstract tasks.

Virtual architecture—Communication between tasks is refined in terms of the hardware/software application programming interface (HdS API), which hides the details of the operating systems and communication systems. Each task is modeled as sequential C code. Memory is explicitly modeled.

Transaction-accurate architecture—Hardware/software interfaces are described in terms of the hardware abstraction layer. The operating system is explicitly modeled. The communication networks are explicitly modeled.

Virtual prototype—The architectures of the processors are explicitly modeled. Communication is modeled by loads and stores. The memory system, including all caches, is fully modeled.

Their system-level synthesis system allows applications to be described in restricted forms of either Simulink or C. They model the virtual architecture and below using SystemC. Design space exploration between levels is guided by the designer.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124105119000071

Digital Set-Top Terminals and Consumer Interfaces

Walter Ciciora, ... Michael Adams, in Modern Cable Television Technology (Second Edition), 2004

22.8.1 Layers of Software

You may think of at least three layers of software running on the microprocessor(s) in a host. The lowest layer is the operating system, just as a desktop computer has an operating system. It is specific to the hardware being used and is typically purchased from a third-party vendor by the host manufacturer for the processor(s) chosen to be used. (This is consistent with the way OCAP is explained in the standards, but note that in some cases there is a layer of software below the operating system, called the hardware abstraction layer (HAL). The HAL allows the operating system to be platform independent.) The highest layer of software is the application software, which may, for example, provide an EPG in the form that the operator wishes to display it. Many other application software packages are possible, from communications to games to messaging to VOD and others. Usually, application software is downloaded to a particular host based on what the user purchases and what the operator wants to provide. Application software is usually downloaded from a central server and may change from time to time.

The problem is that the operator needs to be able to download one software package for one application and expect it to run the same way on all hosts and STTs in the system. This must obtain even if there are different STTs and hosts made by many different manufacturers at different times, using different processors and operating systems, and having different capabilities. If a capability is supported by the hardware in a box, then the same software should enable it in all boxes. Making this happen is the role of middleware. The host manufacturer is responsible for providing middleware for his products that conforms to the appropriate OCAP specification, 1.0 or 2.0, as specified by the cable operator. It is possible to use a host that supports 1.0 but not 2.0. Each specification details the services to be supported and how relevant data is communicated to the host.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781558608283500242

Network slicing

Fabrizio Granelli, in Computing in Communication Networks, 2020

3.3.2 Single owner, multiple tenants – SDN proxy

Supporting multiple virtual networks is now becoming common in many settings, from data centers to service provider networks. In this framework, an alternative technology for implementing network slicing is an SDN proxy, typically controlled by the owner of the physical infrastructure. The SDN proxy provides an abstraction of the network forwarding path that allows it to slice the network. The proxy employs the SDN protocol to define a hardware abstraction layer that logically sits between the control and forwarding paths on a network device, enforcing the rules and agreements that define the network slices and maintaining isolation. The resulting architecture is presented in Fig. 3.7.


Figure 3.7. A network slicing architecture with SDN proxy and multiple orchestrators. The SDN proxy provides topology and capacity slicing to the orchestrators of the different slices (SDNO1 and SDNO2).

The advantage of this scenario is that it enables multiple virtual tenants to deploy their own controllers/SDN orchestrators on the shared infrastructure while maintaining isolation between different slice instances. An example of this solution is FlowVisor [61], illustrated in Fig. 3.8. FlowVisor is implemented as an OpenFlow proxy that intercepts messages between OpenFlow-enabled switches and OpenFlow controllers.


Figure 3.8. Conceptual FlowVisor architecture.

The FlowVisor defines a slice as a set of flows running on a topology of switches. It sits between each OpenFlow controller and the switches to make sure that a guest controller can only observe and control the switches it is supposed to. FlowVisor partitions the link bandwidth by assigning a minimum data rate to the set of flows that make up a slice, and it partitions the flow table in each switch by keeping track of which flow entries belong to each guest controller. See Fig. 3.9 for an example of OpenFlow message exchange in an architecture using FlowVisor.


Figure 3.9. FlowVisor intercepts OpenFlow messages from guest controllers (1) and, using the user's slicing policy (2), transparently rewrites (3) the message to control only a slice of the network. Messages from switches (4) are forwarded only to guests if they match the corresponding slice policy.
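A toy sketch of the slicing check performed between steps (1) and (3): before forwarding a guest controller's flow rule, the proxy verifies that the rule's match falls inside the slice's flowspace, pins it to the slice, and rejects it otherwise. The structures are invented and reflect neither FlowVisor's code nor the OpenFlow wire format.

```c
#include <stdint.h>
#include <stdbool.h>

/* Simplified flow match: a VLAN and a TCP port identify the traffic. */
struct flow_match { uint16_t vlan_id; uint16_t tp_dst; };

struct slice_policy {
    uint16_t vlan_id;            /* the slice owns exactly this VLAN        */
    uint16_t tp_dst_min, tp_dst_max;
    uint32_t min_rate_kbps;      /* bandwidth share reserved for the slice  */
};

/* Steps (2)-(3): accept the rule only if it stays inside the slice's
 * flowspace; otherwise reject it so it cannot touch other slices. */
bool proxy_rewrite_rule(struct flow_match *m, const struct slice_policy *p)
{
    if (m->tp_dst < p->tp_dst_min || m->tp_dst > p->tp_dst_max)
        return false;                    /* reject: outside the flowspace    */
    m->vlan_id = p->vlan_id;             /* pin the rule to the slice's VLAN */
    return true;
}
```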

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128204887000141

Which layer abstracts away the need for any other layers to care about what hardware is in use?

A hardware abstraction layer (HAL) is an abstraction layer, implemented in software, between the physical hardware of a computer and the software that runs on that computer.

What are computer layers?

In computer programming, layering is the organization of programming into separate functional components that interact in some sequential and hierarchical way, with each layer usually having an interface only to the layer above it and the layer below it. Communication programs are often layered.