reference | target
---|---
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Infrastructure Layer: Summary and Discussion <s> This paper reviews the latest efforts to extend PON over other media. After looking at standards progress in the ITU-T and BBF, we explore possible interworking solutions for hybrid access. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Infrastructure Layer: Summary and Discussion <s> We discuss hybrid fiber/copper access networks with a focus on XG-PON/VDSL2 hybrid access networks. We present tutorial material on the XG-PON and VDSL2 protocols as standardized by the ITU. We investigate mechanisms to reduce the functional logic at the device that bridges the fiber and copper segments of the hybrid fiber/copper access network. This device is called a drop-point device. Reduced functional logic translates into lower energy consumption and cost for the drop-point device. We define and analyze the performance of several mechanisms to move some of the VDSL2 functional logic blocks from the drop-point device into the XG-PON Optical Line Terminal. Our analysis uncovers that silence suppression mechanisms are necessary to achieve statistical multiplexing gain when carrying synchronous intermediate VDSL2 data formats across the XG-PON. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Infrastructure Layer: Summary and Discussion <s> The possibility of deploying telecommunication services on top of a fully flow-aware network is appealing. Concepts like Network Service Chaining and Network Function Virtualization expect the information to be manageable at the flow level. But for this concept to be available for the development of user-centric applications, the access network should also be made flow-aware. In this paper we present the integration of a legacy DOCSIS based access network under an OpenFlow Control Framework by using the Hardware Abstraction Layer designed in the FP7 ALIEN project. The result is a dynamic wide area OpenFlow switch that spans from the aggregation switch to the home equipment and hides all the complexity (including the provisioning) of the access technology from an unmodified and standard OpenFlow controller. As a result, the access network can react not only to any kind of user traffic but also to the connection of CPE to the network. The approach used is technology independent, and the results have been successfully demonstrated over a Cisco based DOCSIS access network. <s> BIB003
|
The research to date on the SDN controlled infrastructure layer has resulted in a variety of SDN controlled transceivers as well as a few designs of SDN controlled switching elements. Moreover, the SDN control of switching paradigms and optical performance monitoring has been examined. The SDN infrastructure studies have paid close attention to the physical (photonic) communication aspects. The principle of isolating the control plane from the data plane has been explored with the goals of simplifying network management and making networks more flexible. The completed SDN infrastructure layer studies have indicated that SDN control of the infrastructure layer can reduce costs, facilitate flexible reconfigurable resource management, increase utilization, and lower latency. However, detailed comprehensive optimizations of the infrastructure components and paradigms that minimize capital and operational expenditures remain an important area for future research. Also, further refinements of the optical components and switching paradigms are needed to ease the deployment of SDONs and to make the networks operating on SDON infrastructures more efficient. Moreover, reducing implementation costs, easing adoption by network providers, enabling flexible upgrades to new technologies, and reducing complexity require thorough future research. Most SDON infrastructure studies have focused on a particular network component or networking aspect, e.g., a transceiver or the hybrid packet-circuit switching paradigm, or a particular application context, e.g., data center networking. Future research should comprehensively examine SDON infrastructure components and paradigms to optimize their interactions for a wide set of networking scenarios and application contexts. The SDON infrastructure studies to date have primarily focused on the optical transmission medium. Future research should explore complementary infrastructure components and paradigms to support transmissions in hybrid fiber-wireless and other hybrid fiber-X networks, such as fiber-Digital Subscriber Line (DSL) or fiber-coax cable networks BIB002, BIB003, BIB001. Generally, flexible SDN control can be highly advantageous for hybrid networks composed of heterogeneous network segments. The OpenFlow protocol can facilitate the topology abstraction of the heterogeneous physical transmission media, which in turn facilitates control and optimization at the higher network protocol layers, as illustrated by the sketch below.
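As a rough sketch of how such topology abstraction might look, the following Python fragment models a hybrid fiber-DSL access segment as a single logical OpenFlow switch whose port actions are translated into technology-specific provisioning calls, loosely inspired by the Hardware Abstraction Layer approach of BIB003. All class and method names here are hypothetical illustrations, not an existing API.

```python
# Hypothetical sketch: abstracting a hybrid fiber-DSL access segment as
# one logical OpenFlow switch. Loosely inspired by the HAL of [BIB003];
# every name below is an illustrative assumption.

class SegmentDriver:
    """Base class for technology-specific provisioning back-ends."""
    def provision(self, port: int, vlan: int) -> None:
        raise NotImplementedError

class XgPonDriver(SegmentDriver):
    def provision(self, port: int, vlan: int) -> None:
        # Would translate into OLT-side configuration of the PON segment.
        print(f"XG-PON: map logical port {port} to GEM port, VLAN {vlan}")

class Vdsl2Driver(SegmentDriver):
    def provision(self, port: int, vlan: int) -> None:
        # Would translate into configuration of the drop-point device.
        print(f"VDSL2: configure line {port} for VLAN {vlan}")

class LogicalAccessSwitch:
    """Presents the whole hybrid segment as one OpenFlow switch: the
    controller sees only logical ports; the drivers hide the media."""
    def __init__(self) -> None:
        self.port_map: dict[int, tuple[SegmentDriver, int]] = {}

    def attach(self, logical_port: int, driver: SegmentDriver,
               physical_port: int) -> None:
        self.port_map[logical_port] = (driver, physical_port)

    def apply_flow(self, out_port: int, vlan: int) -> None:
        # Called when the controller installs a flow with an output action.
        driver, phys = self.port_map[out_port]
        driver.provision(phys, vlan)

switch = LogicalAccessSwitch()
switch.attach(1, XgPonDriver(), physical_port=7)
switch.attach(2, Vdsl2Driver(), physical_port=3)
switch.apply_flow(out_port=2, vlan=100)  # -> VDSL2: configure line 3 ...
```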
|
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> IV. SDN CONTROL LAYER <s> There have been many attempts to unify the control and management of circuit and packet switched networks, but none have taken hold. In this paper we propose a simple way to unify both types of network using OpenFlow. The basic idea is that a simple flow abstraction fits well with both types of network, provides a common paradigm for control, and makes it easy to insert new functionality into the network. OpenFlow provides a common API to the underlying hardware, and allows all of the routing, control and management to be defined in software outside the datapath. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> IV. SDN CONTROL LAYER <s> OpenFlow is proposed as an architectural platform and a unified control plane for packet and circuit networks, with the main goal of simplifying network control and management while fostering innovative change in them. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> IV. SDN CONTROL LAYER <s> IP and Transport networks are controlled and operated independently today, leading to significant Capex and Opex inefficiencies for the providers. We discuss a unified approach with OpenFlow, and present a recent demonstration of a unified control plane for OpenFlow enabled IP/Ethernet and TDM switched networks. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> IV. SDN CONTROL LAYER <s> Software defined networking and OpenFlow, which allow operators to control the network using software running on a network operating system within an external controller, provide the maximum flexibility for the operator to control a network, and match the carrier's preferences given its centralized architecture, simplicity, and manageability. In this paper, we report a field trial of an OpenFlow-based unified control plane (UCP) for multilayer multigranularity optical switching networks, verifying its overall feasibility and efficiency, and quantitatively evaluating the latencies for end-to-end path creation and restoration. To the best of our knowledge, the field trial of an OpenFlow-based UCP for optical networks is a world first. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> IV. SDN CONTROL LAYER <s> Software-defined networking (SDN) enables programmable SDN control and management functions at a number of layers, allowing applications to control network resources or information across different technology domains, e.g., Ethernet, wireless, and optical. Current cloud-based services are pushing networks to new boundaries by deploying cutting edge optical technologies to provide scalable and flexible services. SDN combined with the latest optical transport technologies, such as elastic optical networks, enables network operators and cloud service providers to customize their infrastructure dynamically to user/application requirements and therefore minimize the extra capital and operational costs required for hosting new services. In this paper a unified control plane architecture based on OpenFlow for optical SDN tailored to cloud services is introduced. Requirements for its implementation are discussed considering emerging optical transport technologies. Implementations of the architecture are proposed and demonstrated across heterogeneous state-of-the-art optical, packet, and IT resource integrated cloud infrastructure. 
Finally, its performance is evaluated using cloud use cases and its results are discussed. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> IV. SDN CONTROL LAYER <s> Software defined networking (SDN) and flexible grid optical transport technology are two key technologies that allow network operators to customize their infrastructure based on application requirements and therefore minimizing the extra capital and operational costs required for hosting new applications. In this paper, for the first time we report on design, implementation & demonstration of a novel OpenFlow based SDN unified control plane allowing seamless operation across heterogeneous state-of-the-art optical and packet transport domains. We verify and experimentally evaluate OpenFlow protocol extensions for flexible DWDM grid transport technology along with its integration with fixed DWDM grid and layer-2 packet switching. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> IV. SDN CONTROL LAYER <s> OFDM has been considered as a promising candidate for future high-speed optical transmission technology. Based on OFDM, a novel architecture named flexi-grid optical network has been proposed, and it has drawn increasing attention in both academia and industry. In flexi-grid optical networks, with connections setting up and tearing down, the spectrum resources are separated into small non-contiguous spectrum bands, which may lead to inefficient spectrum utilization. The key requirement is spectrum defragmentation, which refers to periodically reconfiguring the network to return it to an optimal state. Spectrum defragmentation should be performed at minimum cost in terms of interrupting services or affecting the QoS (i.e., delay, bandwidth, bitrate). In this paper, we demonstrate for the first time spectrum defragmentation based on software defined networking (SDN) in flexi-grid optical networks. Experimental results are reported on our testbed and verify the feasibility of our proposed architecture. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> IV. SDN CONTROL LAYER <s> Elastic optical networks (EONs) facilitate agile spectrum management in the optical layer. When coupled with software-defined networking, they function as software-defined EONs (SD-EONs) and provide service providers more freedom to customize their infrastructure dynamically. In this paper, we investigate how to overcome spectrum fragmentation in SD-EONs with OpenFlow-controlled online spectrum defragmentation (DF), and conduct system implementations to facilitate highly-efficient online DF. We first consider sequential DF, i.e., the scenario that involves a sequence of lightpath reconfigurations to progressively consolidate the spectrum utilization. We modify our previous DF algorithm to make sure that the reconfigurations can be performed in batches and the “make-before-break” scheme can be applied to all of them. The modified algorithm is implemented in an OpenFlow (OF) controller, and we design OF extensions to facilitate synchronous batch reconfiguration. Then, we further simplify the DF operations by designing and implementing parallel DF that can accomplish all the DF-related lightpath reconfigurations simultaneously.
All these DF implementations are experimentally demonstrated in an SD-EON control plane testbed that consists of 14 stand-alone OF agents and one OF controller, which are all implemented on high-performance Linux servers. The experimental results indicate that our OF-controlled online DF implementations perform well and can improve network performance in an efficient way. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> IV. SDN CONTROL LAYER <s> We experimentally demonstrate the first OpenFlow-enabled transport SDN that performs multi-flow switching by cross-layer optimization and configuring all major hardware elements, including adaptive EDFA-Raman amplifiers, multi-degree superchannel transponders, and flexible grid switching nodes. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> IV. SDN CONTROL LAYER <s> Recently there have been a lot of discussions about the benefits and usages of SDN technologies, but not many practical proposals for how to interconnect the various heterogeneous and disparate networks maintained by the telecommunication companies. The current level of SDN technologies does not show any sign of transforming the whole legacy network into an SDN oriented one all at once; instead, the SDN concept may need to be deployed into small parts of the current heterogeneous mixed network first and then expand its area gradually [1]. Having started to consider new aspects of network services, such as off-loading mobile data, transferring smart content, and adjusting and adapting network resources and traffic volume, the telecommunication companies need to establish a strategy for virtualization of the networks, including metro Ethernet and optical transport networks, for systematic unification of the control mechanisms, and eventually for an SDN based network management and control system operated in real-time. Since the research and development of SDN technology to be adapted onto large-scale telecommunication networks is at the beginning stage, we focus on the activities for the interconnection methods for the disparate heterogeneous networks. This paper especially suggests an SDN-based end-to-end path provisioning architecture for mixed circuit and packet networks. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> IV. SDN CONTROL LAYER <s> This paper gives an overview of the global trends and standardization of transport SDN and introduces SK Telecom's R&D activities on transport SDN in the unified converged transport network (uCTN), which is expected to be a transport infrastructure for 5G. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> IV. SDN CONTROL LAYER <s> This paper presents and experimentally demonstrates the generalized architecture of an Openflow-controlled optical packet switching (OPS) network. Openflow control is enabled by introducing the Openflow/OPS agent into the OPS network, which realizes the Openflow protocol translation and message exchange between the Openflow control plane and the underlying OPS nodes. With software-defined networking (SDN) and the Openflow technique, the complex control functions of the conventional OPS network can be offloaded into a centralized and flexible control plane, while improved control and operations can be provided due to the centralized coordination of network resources.
Furthermore, a contention-aware routing/rerouting strategy as well as a fast network adjustment mechanism is proposed and demonstrated for the first time as advanced Openflow control to route traffic and handle network dynamics. With centralized SDN/Openflow control, the OPS network has the potential to achieve better resource utilization and enhanced network resilience at lower cost and with less node complexity. Our work will accelerate the evolution of both OPS and SDN. <s> BIB012
|
This section surveys the SDON studies that focus on applying SDN principles at the SDN control layer to control the various optical network elements and operational aspects. The main challenges of SDON control include extensions of the OpenFlow protocol for controlling the optical transmission and switching components surveyed in Section III, for controlling the optical spectrum, and for controlling optical networks that span multiple optical network tiers (see Section II-D2). As illustrated in Fig. 6, we first survey SDN control mechanisms and frameworks for controlling infrastructure layer components, namely transceivers as well as optical circuit, packet, and burst switches. More specifically, we survey OpenFlow extensions for controlling the optical infrastructure components. We then survey mechanisms for retro-fitting non-SDN optical network elements so that they can be controlled by OpenFlow. The retro-fitting typically involves the insertion of an abstraction layer into the network elements; the abstraction layer makes the optical hardware controllable by OpenFlow. The retro-fitting studies would also fit into Section III, as the abstraction layer is inserted into the network elements; however, the abstraction mechanisms closely relate to the OpenFlow extensions for optical networking, and we therefore include the retro-fitting studies in this control layer section. We then survey the various SDN control mechanisms for operational aspects of optical networks, including the control of tandem networks that include optical segments. Lastly, we survey SDON controller performance analysis studies.

A. SDN Control of Optical Infrastructure Components

1) Controlling Optical Transceivers with OpenFlow: Recent generations of optical transceivers utilize digital signal processing techniques that allow many parameters of the transceiver to be software controlled (see Sections III-A1 and III-A2). These parameters include the modulation scheme, symbol rate, and wavelength. Yu et al. BIB007 and Chen et al. BIB008 proposed adding a "modulation format" field to the OpenFlow cross-connect table entries to support this programmable feature of software defined optical transceivers. Ji et al. BIB009 created a testbed that places super-channel optical transponders and EDFA optical amplifiers under SDN control and proposed an OpenFlow extension to control these devices. The modulation technique and FEC code for each optical subcarrier of the super-channel transponder, as well as the optical amplifier power level, can be controlled via OpenFlow. Although Ji et al. do not discuss this explicitly, the transponder subcarriers can be treated as OpenFlow switch ports that can be configured through the OpenFlow protocol via port modification messages. It remains unclear in BIB009 how the amplifiers would be controlled via OpenFlow; doing so would allow the SDN controller to adaptively modify amplifiers to compensate for channel impairments while minimizing energy consumption. Liu et al. BIB004 propose configuring optical transponder operation via flow table entries with new transponder specific fields (without providing details). They also propose capturing failure alarms from optical transponders and sending them to the SDN controller via OpenFlow Packet-In messages, which are normally meant to establish new flow connections.
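To make this repurposing of Packet-In messages concrete, the following minimal Python sketch shows how a controller-side handler might distinguish repurposed failure alarms from ordinary table-miss events, along the lines attributed to Liu et al. BIB004. The reason codes, classes, and helper functions are illustrative assumptions, not part of any standard OpenFlow library.

```python
# Controller-side sketch of Packet-In repurposing (after [BIB004]).
# All names and codes below are assumptions for illustration only.
from dataclasses import dataclass

REASON_TABLE_MISS = 0            # standard use: no flow entry matched
REASON_TRANSPONDER_ALARM = 128   # hypothetical extension: failure alarm

@dataclass
class PacketIn:
    datapath_id: int  # switch/transponder that sent the message
    reason: int       # why the message was sent to the controller
    port: int         # ingress (or failed transponder) port
    data: bytes       # packet payload or alarm descriptor

def reroute_lightpaths(dpid: int, port: int) -> None:
    # Placeholder for protection switching: recompute and re-provision
    # all lightpaths that traverse the failed port.
    print(f"protection switching around {dpid}:{port}")

def install_new_flow(msg: PacketIn) -> None:
    # Placeholder for the normal reactive flow setup.
    print(f"installing flow for table-miss from switch {msg.datapath_id}")

def handle_packet_in(msg: PacketIn) -> None:
    if msg.reason == REASON_TRANSPONDER_ALARM:
        reroute_lightpaths(msg.datapath_id, msg.port)  # failure alarm
    else:
        install_new_flow(msg)                          # ordinary table miss

handle_packet_in(PacketIn(1, REASON_TRANSPONDER_ALARM, port=4, data=b""))
```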
Alternatively, a new OpenFlow message type could be created for the purpose of capturing failure alarms BIB004. With failure alarm information, the SDN controller can implement protection switching services.

2) Controlling Optical Circuit Switches with OpenFlow: Circuit switching can be enabled by OpenFlow by adding new circuit switching flow table entries BIB001, BIB003, BIB002, BIB010. The OpenFlow circuit switching addendum discusses the addition of cross-connect tables for this purpose. These cross-connect tables inside the circuit switches are configured via OpenFlow messages. According to the addendum, a cross-connect table entry consists of fields that identify the input, namely the input port, input wavelength, input time slots, and input Virtual Concatenation Group, along with the corresponding fields that identify the output. These cross-connect tables thus cover circuit switching in space, fixed-grid wavelength, and time. Channegowda et al. BIB005, BIB006 extend the capabilities of the OpenFlow circuit switching addendum to support flexible wavelength grid optical switching. Specifically, the wavelength identifier specified in the circuit switching addendum to OpenFlow is replaced with two fields: a center frequency and a slot width. The center frequency field is an integer specifying the offset of the channel center from 193.1 THz in multiples of 6.25 GHz, while the slot width field is a positive integer specifying the spectral width of the channel in multiples of 12.5 GHz (see the illustrative sketch below). An SDN controlled optical network testbed at the University of Bristol has been established to demonstrate the OpenFlow extensions for flexible grid DWDM BIB005. The testbed consists of both fixed-grid and flexible-grid optical switching devices. SK Telecom has also built an SDN controlled optical network testbed BIB011.

3) Controlling Optical Packet and Burst Switches with OpenFlow: OpenFlow flow tables can be utilized in optical packet switches for expressing the forwarding table, whose computation can be offloaded to an SDN controller. This offloading can simplify the design of highly complex optical packet switches BIB012.
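As a concrete illustration of the flexible-grid fields described above, the following Python sketch encodes a flexi-grid cross-connect entry and converts the integer fields into physical frequencies. The dataclass layout and the optional modulation-format field are illustrative assumptions; only the 193.1 THz anchor and the 6.25 GHz / 12.5 GHz granularities are taken from the text.

```python
# Sketch of the flexible-grid OpenFlow fields (after Channegowda et al.
# [BIB005], [BIB006]). The dataclass itself is an illustrative
# assumption; the frequency arithmetic follows the granularities cited
# in the text.
from dataclasses import dataclass

ANCHOR_THZ = 193.1          # grid anchor frequency
CENTER_STEP_GHZ = 6.25      # granularity of the center-frequency field
WIDTH_STEP_GHZ = 12.5       # granularity of the slot-width field

@dataclass
class FlexGridCrossConnect:
    in_port: int
    out_port: int
    n: int   # signed integer: center-frequency offset in 6.25 GHz steps
    m: int   # positive integer: slot width in 12.5 GHz steps
    modulation: str = "DP-QPSK"  # hypothetical "modulation format" field [BIB007], [BIB008]

    def center_frequency_thz(self) -> float:
        return ANCHOR_THZ + self.n * CENTER_STEP_GHZ / 1000.0

    def slot_width_ghz(self) -> float:
        return self.m * WIDTH_STEP_GHZ

# Example: n = -32, m = 4 selects a 50 GHz slot centered at 192.9 THz.
xc = FlexGridCrossConnect(in_port=1, out_port=3, n=-32, m=4)
assert abs(xc.center_frequency_thz() - 192.9) < 1e-9
assert xc.slot_width_ghz() == 50.0
```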
|
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> There have been many attempts to unify the control and management of circuit and packet switched networks, but none have taken hold. In this paper we propose a simple way to unify both types of network using OpenFlow. The basic idea is that a simple flow abstraction fits well with both types of network, provides a common paradigm for control, and makes it easy to insert new functionality into the network. OpenFlow provides a common API to the underlying hardware, and allows all of the routing, control and management to be defined in software outside the datapath. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> OpenFlow is proposed as an architectural platform and a unified control plane for packet and circuit networks, with the main goal of simplifying network control and management while fostering innovative change in them. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> IP and Transport networks are controlled and operated independently today, leading to significant Capex and Opex inefficiencies for the providers. We discuss a unified approach with OpenFlow, and present a recent demonstration of a unified control plane for OpenFlow enabled IP/Ethernet and TDM switched networks. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> A novel software-defined packet over optical networks solution based on the OpenFlow and GMPLS control plane integration is demonstrated. The proposed architecture, experimental setup, and average flow setup time for different optical flows is reported. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> Optical networks are undergoing significant changes, fueled by the exponential growth of traffic due to multimedia services and by the increased uncertainty in predicting the sources of this traffic due to the ever changing models of content providers over the Internet. The change has already begun: simple on-off modulation of signals, which was adequate for bit rates up to 10 Gb/s, has given way to much more sophisticated modulation schemes for 100 Gb/s and beyond. The next bottleneck is the 10-year-old division of the optical spectrum into a fixed "wavelength grid," which will no longer work for 400 Gb/s and above, heralding the need for a more flexible grid. Once both transceivers and switches become flexible, a whole new elastic optical networking paradigm is born. In this article we describe the drivers, building blocks, architecture, and enabling technologies for this new paradigm, as well as early standardization efforts. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> We experimentally present the seamless interworking between OpenFlow and PCE for dynamic wavelength path control in multi-domain WSON, assessing the overall feasibility and quantitatively evaluating both the path computation and lightpath provisioning latencies. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> Control plane techniques are very important for optical networks since they can enable dynamic lightpath provisioning and restoration, improve the network intelligence, and greatly reduce the processing latency and operational expenditure. 
In recent years, there has been great progress in this area, ranging from the traditional generalized multi-protocol label switching (GMPLS) to a path computation element (PCE)/GMPLS-based architecture. The latest studies have focused on an OpenFlow-based control plane for optical networks, which is also known as software-defined networking. In this paper, we review our recent research activities related to the GMPLS-based, PCE/GMPLS-based, and OpenFlow-based control planes for a translucent wavelength switched optical network (WSON). We present enabling techniques for each control plane, and we summarize their advantages and disadvantages. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> Software defined networking and OpenFlow, which allow operators to control the network using software running on a network operating system within an external controller, provide the maximum flexibility for the operator to control a network, and match the carrier's preferences given its centralized architecture, simplicity, and manageability. In this paper, we report a field trial of an OpenFlow-based unified control plane (UCP) for multilayer multigranularity optical switching networks, verifying its overall feasibility and efficiency, and quantitatively evaluating the latencies for end-to-end path creation and restoration. To the best of our knowledge, the field trial of an OpenFlow-based UCP for optical networks is a world first. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> Software-defined networking (SDN) enables programmable SDN control and management functions at a number of layers, allowing applications to control network resources or information across different technology domains, e.g., Ethernet, wireless, and optical. Current cloud-based services are pushing networks to new boundaries by deploying cutting edge optical technologies to provide scalable and flexible services. SDN combined with the latest optical transport technologies, such as elastic optical networks, enables network operators and cloud service providers to customize their infrastructure dynamically to user/application requirements and therefore minimize the extra capital and operational costs required for hosting new services. In this paper a unified control plane architecture based on OpenFlow for optical SDN tailored to cloud services is introduced. Requirements for its implementation are discussed considering emerging optical transport technologies. Implementations of the architecture are proposed and demonstrated across heterogeneous state-of-the-art optical, packet, and IT resource integrated cloud infrastructure. Finally, its performance is evaluated using cloud use cases and its results are discussed. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> Software defined networking (SDN) and flexible grid optical transport technology are two key technologies that allow network operators to customize their infrastructure based on application requirements and therefore minimizing the extra capital and operational costs required for hosting new applications. In this paper, for the first time we report on design, implementation & demonstration of a novel OpenFlow based SDN unified control plane allowing seamless operation across heterogeneous state-of-the-art optical and packet transport domains.
We verify and experimentally evaluate OpenFlow protocol extensions for flexible DWDM grid transport technology along with its integration with fixed DWDM grid and layer-2 packet switching. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> In order to address a diverse and demanding set of service and network drivers, several technology candidates with inherent physical layer (PHY) differences are emerging for future optical access networks. To overcome this PHY divide and enable both cost and bandwidth efficient heterogeneous technology co-existence in future optical access, we propose a novel Orthogonal Frequency Division Multiple Access (OFDMA)-based “meta-MAC”, which encapsulates PHY variations and enables fair inter-technology bandwidth arbitration. The new software-defined meta-MAC is envisioned to work on top of constituent MAC protocols, and exploit virtual OFDMA subcarriers as both finely granular and scalable bandwidth assignment units. We introduce important OFDMA meta-MAC design principles, and propose an elaborate three-stage dynamic resource provisioning scheme that satisfies the key requirements. The performance benefits of the meta-MAC concept and the proposed dynamic resource provisioning schemes in terms of spectrum management flexibility and support of diverse services are verified via real-time traffic simulation, confirming the attractiveness of the new approach for future optical access systems. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> A path computation element (PCE) is briefly defined as a control plane functional component (physical or logical) that is able to perform constrained path computation on a graph representing (a subset of) a network. A stateful PCE is a PCE that is able to consider the set of active connections, and its development is motivated by the fact that such knowledge enables the deployment of improved, more efficient algorithms. Additionally, a stateful PCE is said to be active if it is also able to affect (modify or suggest the modification of) the state of such connections. A stateful active PCE is thus able not only to use the knowledge of the active connections as available information during the computation, but also to reroute existing ones, resulting in a more efficient use of resources and the ability to dynamically arrange and reoptimize the network. An OpenFlow controller is a logically centralized entity that implements a control plane and configures the forwarding plane of the underlying network devices using the OpenFlow protocol. From a control plane perspective, an OpenFlow controller and the aforementioned stateful PCE have several functions in common, for example, in what concerns network topology or connection management. That said, both entities also complement each other, since a PCE is responsible mainly for path computation accessible via an open, standard, and flexible protocol, and the OpenFlow controller assumes the task of the actual data plane forwarding provisioning. In other words, the stateful PCE becomes active by virtue of relying on an OpenFlow controller for the establishment of connections. In this framework, the integration of both entities presents an opportunity allowing a return on investment, reduction of operational expenses, and reduction of time to market, resulting in an efficient approach to operate transport networks. 
In this paper, we detail the design, implementation, and experimental evaluation of a centralized control plane based on a stateful PCE, acting as an OpenFlow controller, targeting the control and management of optical networks. We detail the extensions to both the OpenFlow protocol and the PCE communication protocol (PCEP), addressing the requirements of elastic optical networks as well as the system performance obtained when deployed in a laboratory trial. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> We propose the first optical SDN model enabling performance optimization and comparison of heterogeneous SDN scenarios. We exploit it to minimize latency and compare cost for non-SDN, partial-SDN and full-SDN variants of the same network. <s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> OpenFlow is a protocol that enables networks to evolve and change flexibly, by giving a remote controller the capability of modifying the behavior of network devices. In an OpenFlow network, each device needs to maintain a dedicated and separated connection with a remote controller. All these connections can be described as the OpenFlow control network, that is, the data network which transports control plane information, and can be deployed together with the data infrastructure plane (in-band) or separated (out-of-band), with advantages and disadvantages in both cases. The control network is a critical subsystem since the communication with the controller must be reliable and ideally should be protected against failures. This paper proposes a novel ring architecture to efficiently transport both the data plane and an out-of-band control network. <s> BIB014 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> OFDM has been considered as a promising candidate for future high-speed optical transmission technology. Based on OFDM, a novel architecture named flexi-grid optical network has been proposed, and it has drawn increasing attention in both academia and industry. In flexi-grid optical networks, with connections setting up and tearing down, the spectrum resources are separated into small non-contiguous spectrum bands, which may lead to inefficient spectrum utilization. The key requirement is spectrum defragmentation, which refers to periodically reconfiguring the network to return it to an optimal state. Spectrum defragmentation should be performed at minimum cost in terms of interrupting services or affecting the QoS (i.e., delay, bandwidth, bitrate). In this paper, we demonstrate for the first time spectrum defragmentation based on software defined networking (SDN) in flexi-grid optical networks. Experimental results are reported on our testbed and verify the feasibility of our proposed architecture. <s> BIB015 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> We experimentally demonstrate the first OpenFlow-enabled transport SDN that performs multi-flow switching by cross-layer optimization and configuring all major hardware elements, including adaptive EDFA-Raman amplifiers, multi-degree superchannel transponders, and flexible grid switching nodes. <s> BIB016 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ?
<s> Recently there have been a lot of discussions about the benefits and usages of SDN technologies, but not many practical proposals for how to interconnect the various heterogeneous and disparate networks maintained by the telecommunication companies. The current level of SDN technologies does not show any sign of transforming the whole legacy network into an SDN oriented one all at once; instead, the SDN concept may need to be deployed into small parts of the current heterogeneous mixed network first and then expand its area gradually [1]. Having started to consider new aspects of network services, such as off-loading mobile data, transferring smart content, and adjusting and adapting network resources and traffic volume, the telecommunication companies need to establish a strategy for virtualization of the networks, including metro Ethernet and optical transport networks, for systematic unification of the control mechanisms, and eventually for an SDN based network management and control system operated in real-time. Since the research and development of SDN technology to be adapted onto large-scale telecommunication networks is at the beginning stage, we focus on the activities for the interconnection methods for the disparate heterogeneous networks. This paper especially suggests an SDN-based end-to-end path provisioning architecture for mixed circuit and packet networks. <s> BIB017 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> An optical packet and circuit integrated network (OPCInet) provides both high-speed, inexpensive services and deterministic-delay, low-data-loss services according to the users’ usage scenarios, from the viewpoint of end users. From the viewpoint of network service providers, this network provides large switching capacity with low energy consumption, high flexibility, and efficient resource utilization with a simple control mechanism. This paper presents the recent progress made in the development of OPCInet and its extension to software-defined networking (SDN). We have developed OPCI nodes, which are capable of layer 3 switching from/to an Ethernet frame to/from an optical packet in the optical packet edge part, and include a burst-tolerant optical amplifier and an optical buffer with optical fiber delays in the 100 Gbps optical packet switching part. The OPCI node achieves a packet error rate less than $10^{-4}$ and is used as a node in a lab-network that has access to the Internet. A distributed automatic control works in a control plane for the circuit switching part and in a moving boundary control between optical packet resources and circuit resources. Our optical system for packet and circuit switching works with a centralized control mechanism as well as a distributed control mechanism. We have shown a packet-based SDN system that configures mapping between IP addresses and OPCI node identifiers and switching tables according to the requests from multiple service providers via a web interface. <s> BIB018 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> The growth of intra data center communications, cloud computing and multimedia content applications force transport network providers to allocate resources faster, smarter and dynamically. Software-Defined Networking (SDN) has been proposed to create a unified control plane for transport networks (Transport SDN).
This article presents an overview of Transport SDN proposals based on OpenFlow, the de facto SDN protocol. OpenFlow is at the forefront of the Transport SDN models and several testbeds have proved the implementation of a unified control plane for multi-domain and multi-technology optical transport networks. We show how OpenFlow can be enabled in current and future network devices through agents and new hardware, respectively. Transport SDN can boost the programmability and scalability of the network, increase the network intelligence, and allow for dynamic resource allocation and restoration. The review highlights a rapid development of Transport SDN, which seems to tackle the problems that GMPLS encountered for commercial deployment. Finally, a comparison between the main research efforts towards a multi-domain transport SDN is given. <s> BIB019 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> As Software Defined Networking (SDN) and in particular OpenFlow (OF) availability increases, the desire to extend its use to other scenarios appears. It would be appealing to include substantial parts of the network under OF control, but until recently this implied replacing much of the hardware with OF enabled versions. There are some cases, such as access networks, in which the benefits could be considerable but which involve a great amount of legacy equipment that is difficult to replace. In this case an alternative method of enabling OF on these devices would be useful. In this paper we describe an architecture and software which could enable OF on many access technologies with minimal changes. The software has been written and tested on a Gigabit Ethernet Passive Optical Network (GEPON). The approach is engineered to be easily ported to any access technology with minimal requirements made on that hardware. <s> BIB020 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> SDN and NFV are two novel paradigms that open the way for a more efficient operation and management of networks, allowing the virtualization and centralization of some functions that are distributed in current network architectures. Optical access networks present several characteristics (tree-like topology, distributed shared access to the upstream channel, partial centralization of the complex operations in the OLT device) that make them appealing for the virtualization of some of their functionalities. We propose a novel EPON architecture where OLTs and ONUs are partially virtualized and migrated to the network core following the SDN and NFV paradigms, thus decreasing CAPEX and OPEX, and improving the flexibility and efficiency of network operations. <s> BIB021 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> Elastic optical networks (EONs) facilitate agile spectrum management in the optical layer. When coupled with software-defined networking, they function as software-defined EONs (SD-EONs) and provide service providers more freedom to customize their infrastructure dynamically. In this paper, we investigate how to overcome spectrum fragmentation in SD-EONs with OpenFlow-controlled online spectrum defragmentation (DF), and conduct system implementations to facilitate highly-efficient online DF. We first consider sequential DF, i.e., the scenario that involves a sequence of lightpath reconfigurations to progressively consolidate the spectrum utilization.
We modify our previous DF algorithm to make sure that the reconfigurations can be performed in batches and the “make-before-break” scheme can be applied to all of them. The modified algorithm is implemented in an OpenFlow (OF) controller, and we design OF extensions to facilitate synchronous batch reconfiguration. Then, we further simplify the DF operations by designing and implementing parallel DF that can accomplish all the DF-related lightpath reconfigurations simultaneously. All these DF implementations are experimentally demonstrated in an SD-EON control plane testbed that consists of 14 stand-alone OF agents and one OF controller, which are all implemented on high-performance Linux servers. The experimental results indicate that our OF-controlled online DF implementations perform well and can improve network performance in an efficient way. <s> BIB022 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> Adaptive flexi-grid optical networks should be able to autonomously decide where and when to dynamically setup, reoptimize, and release elastic optical connections, in reaction to network state changes. A stateful path computation element (PCE) is a key element for the introduction of dynamics and adaptation in the generalized multiprotocol label switching (GMPLS)-based distributed control plane for flexi-grid DWDM networks (e.g., global concurrent reoptimization, defragmentation, or elastic inverse-multiplexing), as well as for enabling the standardized deployment of the GMPLS control plane in the software defined network control architecture. First, this paper provides an overview of passive and active stateful PCE architectures for GMPLS-enabled flexi-grid DWDM networks. A passive stateful PCE allows for improved path computation considering not only the network state (TED) but also the global connection state label switched paths database (LSPDB), in comparison with a (stateless) PCE. However, it does not have direct control (modification, rerouting) of path reservations stored in the LSPDB. The lack of control of these label switched paths (LSPs) may result in suboptimal performance. To this end, an active stateful PCE allows for optimal path computation considering the LSPDB for the control of the state (e.g., increase of LSP bandwidth, LSP rerouting) of the stored LSPs. More recently, an active stateful PCE architecture has also been proposed that exposes the capability of setting up and releasing new LSPs. It is known as an active stateful PCE with instantiation capabilities. This paper presents the first prototype implementation and experimental evaluation of an active stateful PCE with instantiation capabilities for the GMPLS-controlled flexi-grid DWDM network of the ADRENALINE testbed. <s> BIB023 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> We propose a global dynamic bandwidth optimization algorithm for software defined optical access and aggregation networks, which can support unified optimizations and efficient scheduling by allocating bandwidth resources from a global network view in real-time. The performance benefits of the proposed algorithm in terms of resource utilization rate, average delay and delay of a single mobile user are verified through network simulation. <s> BIB024 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ?
<s> In this paper, we propose a design for a flat Layer 2 Metro-Core network as part of a Long Reach PON architecture that meets the demands of scalability, efficiency and economy within a modern telecommunications network. We introduce the concept of MAC Address Translation, which is equivalent to Network Address Translation at Layer 3 but applied instead to layer 2. This allows the layer 2 address space to be structured and fits well with the table driven approach of OpenFlow and the wider Software Defined Networks. Without structure at the layer 2 addressing level, the number of flow table rules to support a moderately sized layer 2 network would be very significant, for which there are few if any OpenFlow switches available with adequate TCAM tables. <s> BIB025 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> Virtual machine migration in cloud-computing environments is an important operational technique, and requires significant network bandwidth. We demonstrate that heterogeneous bandwidth (vs. homogeneous bandwidth) for migration reduces significant resource consumption in SDN-enabled optical networks. <s> BIB026 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> The development of software defined networking (SDN) has instigated a growing number of experimental studies which demonstrate the flexibility in network control and management introduced by this technique. Optical networks add new challenges for network designers and operators to successfully dimension and deploy an SDN-based network in the optical domain. At present, few performance evaluations and scalability studies that consider the high bandwidth of the optical domain and the flow characterization from current Internet statistics have been developed. In this paper these parameters are taken as key inputs to study SDN scalability in the optical domain. As a relevant example, an optical ring Metropolitan Area Network (MAN) is analyzed with circuit and packet traffic integrated at the wavelength level. The numerical results characterize the limitations in network dimensioning when considering an SDN controller implementation in the presence of different flow mixes. Employing flow aggregation and/or parallel distributed controllers is outlined as a potential solution to achieve SDN network scalability. <s> BIB027 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> Software defined networking (SDN) allows the rethinking of traditional approaches to network design and architecture. The distribution of the unified control plane can be necessary in several SDN scenarios, particularly for large scale inter-domain optical networks. Distribution is necessary in inter-domain networks due to privacy issues, and can be necessary in large networks to improve scalability and management. This paper proposes a new architectural model in which network elements are grouped by proximity (in clusters) around distributed SDN controllers. The OpenFlow protocol with wavelength switching extensions is used for intra-cluster control while inter-cluster coordination is performed by a new control application. The proposed model is applied to large-scale wavelength switched optical networks (WSON) and is validated by simulation. The results show that increasing the number of controllers is not justifiable if the only concern is the setup time performance. However, a multi-cluster approach is advantageous when lightpaths are created more frequently between nearby nodes.
Also, the clustered SDN can be successfully used in a multi-administrative domain, because inter-domain lightpaths can be created while keeping the privacy of the network information within a cluster. <s> BIB028 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> This paper presents and experimentally demonstrates the generalized architecture of an Openflow-controlled optical packet switching (OPS) network. Openflow control is enabled by introducing the Openflow/OPS agent into the OPS network, which realizes the Openflow protocol translation and message exchange between the Openflow control plane and the underlying OPS nodes. With software-defined networking (SDN) and the Openflow technique, the complex control functions of the conventional OPS network can be offloaded into a centralized and flexible control plane, while improved control and operations can be provided due to the centralized coordination of network resources. Furthermore, a contention-aware routing/rerouting strategy as well as a fast network adjustment mechanism is proposed and demonstrated for the first time as advanced Openflow control to route traffic and handle network dynamics. With centralized SDN/Openflow control, the OPS network has the potential to achieve better resource utilization and enhanced network resilience at lower cost and with less node complexity. Our work will accelerate the evolution of both OPS and SDN. <s> BIB029 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> The optical access networks and aggregation networks need to be controlled together to improve the bandwidth resource availability globally. A unified control architecture for optical access networks and aggregation networks is designed based on a software-defined networking controller, the function modules of which have been described, and the related extended protocol solution has been given. A software-defined dynamic bandwidth optimization (SD-DBO) algorithm is first proposed for optical access and aggregation networks, which can support unified optimizations and efficient scheduling by allocating bandwidth resources from a global network view in real time. The performance of the proposed algorithm has been verified and compared with a traditional DBA algorithm in terms of resource utilization rate and average delay time. Simulation results show that the SD-DBO algorithm performs better. <s> BIB030 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> In this paper, we describe the integration of Software Defined Networking (SDN) in the mobile backhaul as a disruptive approach to streamline the transport network. In this work we leverage SDN to optimize the mobile backhaul transport by removing all mobile specific tunnelling and replacing it with more efficient MPLS or Carrier Grade Ethernet deployed either over electrical or optical networks. The paper also presents the testbed with a complete end-to-end system including off-the-shelf base stations, SDN enabled mobile backhaul switches and virtualized network elements (i.e., Mobility Management Entity (MME), Serving/Packet Gateway (S/P-GW)) running on the cloud. This testbed is currently accepted as a European Telecommunications Standards Institute (ETSI) Proof of Concept and the results are used to describe the benefits for operators and end users. Moreover, an initial design of services based on the proposed virtualized mobile network architecture is proposed.
The results of the testbed show the benefits for mobile operators in terms of Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) savings but, more importantly, the development of services that benefit from optimal usage of resources. <s> BIB031 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> Novel optical access network virtualization and resource allocation algorithms for Internet-of-Things support are proposed and implemented on a real-time SDN-controller platform. 30–50% gains in served request number, traffic prioritization, and revenue are demonstrated. <s> BIB032 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> ? <s> Multicore fibres (MCF) offer the opportunity of both increasing communication capacity and offering enhanced flexibility in the network scenario. Software-defined networks (SDN) are capable of handling novel functionalities coming from the physical layer with the aim of better exploiting the overall connectivity. In this paper, network defragmentation is considered in combination with space-division multiplexing (SDM). In particular, an effective SDN-driven defragmentation technique on a seven-core MCF is demonstrated. This paper includes a networking view of the defragmentation principle exploiting the SDN control plane. At the same time, an accurate model and numerical investigations reveal feasibility and system constraints. Push–pull operation for a coherent DQPSK transmission has been experimentally demonstrated together with full dynamic defragmentation. By using a high-speed integrated dual-output intensity modulator switch for core adaptation in combination with hitless frequency shift, quasi-hitless SDN-driven reconfiguration performance is shown. Defragmentation for 40 Gb/s DQPSK and 80 Gb/s 16QAM signals is demonstrated. Switching from core 2 to core 1, applying a 100 GHz frequency shift, and switching back to core 2 is achieved while losing just 1800 and 2600 symbols, respectively. <s> BIB033
|
[Fig. 6: Classification of the SDN control layer studies surveyed in Section IV]
- Control of Infrastructure Components, Sec. IV-A: Transceiver Control BIB008, BIB015-BIB016; Circuit Switch Control BIB009, BIB001, BIB003, BIB002-BIB017; Packet + Burst Switch Control BIB008, BIB029, BIB018
- Retro-fitting Devices, Sec. IV-B: BIB009, BIB008, BIB010, BIB029, BIB019-BIB020
- Control of Optical Network Operations, Sec. IV-C: PON Control BIB021-BIB011; Spectrum Defragmentation BIB005, BIB015, BIB022, BIB023-BIB033; Tandem Networks, Sec. IV-C3: Metro+Access BIB024, BIB030; Access+Wireless BIB031; Access+Metro+Core BIB025; Data Center BIB026; IoT BIB032
- Hybrid SDN-GMPLS, Sec. IV-D: BIB004, BIB019, BIB023, BIB006, BIB012
- Controller Performance, Sec. IV-E: SDN vs. GMPLS BIB010, BIB007-BIB013; Flow Setup Time BIB008, BIB027; Out-of-Band Control BIB014; Clustered Control BIB028

Cao et al. BIB029 extend the OpenFlow protocol to work with Optical Packet Switching (OPS) devices by creating: (i) an abstraction layer that converts OpenFlow configuration messages to the native OPS configuration, (ii) a process that converts optical packets that do not match a flow table entry to the electrical domain for forwarding to the SDN controller, and (iii) a wavelength identifier extension to the flow table entries. To compensate for the lack of optical buffering, or for limited optical buffering, an SDN controller, with its global view, can provide more effective means to resolve contention that would otherwise lead to packet loss in optical packet switches. Specifically, Cao et al. suggest selecting the path with the most available resources among multiple available paths between two nodes BIB029. Paths can be re-computed periodically or on demand to account for changes in traffic conditions, and monitoring messages can be defined to keep the SDN controller updated on network traffic conditions. Engineers at Japan's National Institute of Information and Communications Technology BIB018 have created an optical circuit and packet switched demonstration system in which the packet portion is SDN controlled. The optical circuit switching is implemented with Wavelength Selective Switches (WSSs), while the optical packet switching is implemented with a Semiconductor Optical Amplifier (SOA) switch. OpenFlow flow tables can also be used to configure optical burst switching devices BIB008. When there is no flow table entry for a burst of packets, the optical burst switching device can send the Burst Header Packet (BHP), rather than the first packet of the burst, to the SDN controller to process the addition of the new flow to the network BIB008.
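A minimal Python sketch of the contention-aware path selection idea of Cao et al. BIB029 follows, under the assumption that the controller's monitoring messages populate a per-link free-capacity map: among the candidate paths between two nodes, the controller picks the path whose bottleneck link has the most spare capacity. The topology and capacity numbers are illustrative assumptions.

```python
# Contention-aware path selection sketch (after Cao et al. [BIB029]).
# Free capacity observed on each directed link, e.g., filled in from
# periodic monitoring messages to the SDN controller (assumed values).
free_capacity = {
    ("A", "B"): 40.0, ("B", "D"): 10.0,   # path 1 bottleneck: 10
    ("A", "C"): 25.0, ("C", "D"): 30.0,   # path 2 bottleneck: 25
}

def bottleneck(path: list[str]) -> float:
    """Spare capacity of the most loaded link along the path."""
    return min(free_capacity[(u, v)] for u, v in zip(path, path[1:]))

def select_path(candidates: list[list[str]]) -> list[str]:
    """Pick the candidate with the largest bottleneck capacity,
    steering new flows away from links prone to contention."""
    return max(candidates, key=bottleneck)

paths = [["A", "B", "D"], ["A", "C", "D"]]
print(select_path(paths))  # -> ['A', 'C', 'D']
```

Re-running this selection periodically or on demand, with refreshed capacity values, corresponds to the path re-computation described above.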
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Retro-fitting Devices to Support OpenFlow <s> We experimentally present a proof-of-concept demonstration of OpenFlow-based wavelength path control for lightpath provisioning in transparent optical networks, assessing its overall feasibility and quantitatively evaluating the network performances. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Retro-fitting Devices to Support OpenFlow <s> We report a world first field trial of an OpenFlow-based unified control plane for multilayer multi-granularity optical networks, verifying its overall feasibility and efficiency, and quantitatively evaluating the latencies for end-to-end path creation and restoration. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Retro-fitting Devices to Support OpenFlow <s> Software-defined networking (SDN) enables programmable SDN control and management functions at a number of layers, allowing applications to control network resources or information across different technology domains, e.g., Ethernet, wireless, and optical. Current cloud-based services are pushing networks to new boundaries by deploying cutting edge optical technologies to provide scalable and flexible services. SDN combined with the latest optical transport technologies, such as elastic optical networks, enables network operators and cloud service providers to customize their infrastructure dynamically to user/application requirements and therefore minimize the extra capital and operational costs required for hosting new services. In this paper a unified control plane architecture based on OpenFlow for optical SDN tailored to cloud services is introduced. Requirements for its implementation are discussed considering emerging optical transport technologies. Implementations of the architecture are proposed and demonstrated across heterogeneous state-of-the-art optical, packet, and IT resource integrated cloud infrastructure. Finally, its performance is evaluated using cloud use cases and its results are discussed. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Retro-fitting Devices to Support OpenFlow <s> Software defined networking and OpenFlow, which allow operators to control the network using software running on a network operating system within an external controller, provide the maximum flexibility for the operator to control a network, and match the carrier's preferences given its centralized architecture, simplicity, and manageability. In this paper, we report a field trial of an OpenFlow-based unified control plane (UCP) for multilayer multigranularity optical switching networks, verifying its overall feasibility and efficiency, and quantitatively evaluating the latencies for end-to-end path creation and restoration. To the best of our knowledge, the field trial of an OpenFlow-based UCP for optical networks is a world first. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Retro-fitting Devices to Support OpenFlow <s> Software defined networking (SDN) and flexible grid optical transport technology are two key technologies that allow network operators to customize their infrastructure based on application requirements and therefore minimizing the extra capital and operational costs required for hosting new applications. 
In this paper, for the first time we report on design, implementation & demonstration of a novel OpenFlow based SDN unified control plane allowing seamless operation across heterogeneous state-of-the-art optical and packet transport domains. We verify and experimentally evaluate OpenFlow protocol extensions for flexible DWDM grid transport technology along with its integration with fixed DWDM grid and layer-2 packet switching. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Retro-fitting Devices to Support OpenFlow <s> The growth of intra data center communications, cloud computing and multimedia content applications force transport network providers to allocate resources faster, smarter and dynamically. Software-defined Networking (SDN) has been proposed to create a unified control plane for transport networks (Transport SDN). This article presents an overview on Transport SDN proposals based on OpenFlow, the de facto SDN protocol. OpenFlow is at the forefront of the Transport SDN models and several testbeds have proved the implementation of a unified control plane for multi-domain and multi-technology optical transport networks. We show how OpenFlow can be enabled in current and future network devices through agents and new hardware respectively. Transport SDN can boost the programmability and scalability of the network, increase the network intelligence and allow for dynamic resource allocation and restoration. The review highlights a rapid development of Transport SDN, which seems to tackle the problems that GMPLS encountered for commercial deployment. Finally a comparison between the main research efforts towards a multi-domain transport SDN is given. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Retro-fitting Devices to Support OpenFlow <s> As Software Defined Networking (SDN) and in particular OpenFlow (OF) availability increases, the desire to extend its use in other scenarios appears. It would be appealing to include substantial parts of the network under OF control but until recently this implied replacing much of the hardware with OF enabled versions. There are some cases, such as access networks, in which the benefits could be considerable but which involve a great amount of legacy equipment that is difficult to replace. In this case an alternative method of enabling OF on these devices would be useful. In this paper we describe an architecture and software which could enable OF on many access technologies with minimal changes. The software has been written and tested on a Gigabit Ethernet Passive Optical Network (GEPON). The approach is engineered to be easily ported to any access technology with minimal requirements made on that hardware. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Retro-fitting Devices to Support OpenFlow <s> This paper presents and experimentally demonstrates the generalized architecture of Openflow-controlled optical packet switching (OPS) network. Openflow control is enabled by introducing the Openflow/OPS agent into the OPS network, which realizes the Openflow protocol translation and message exchange between the Openflow control plane and the underlying OPS nodes.
With software-defined networking (SDN) and Openflow technique, the complex control functions of the conventional OPS network can be offloaded into a centralized and flexible control plane, while promoted control and operations can be provided due to centralized coordination of network resources. Furthermore, a contention-aware routing/rerouting strategy as well as a fast network adjustment mechanism is proposed and demonstrated for the first time as advanced Openflow control to route traffic and handle the network dynamics. With centralized SDN/Openflow control, the OPS network has the potential to have better resource utilization and enhanced network resilience at lower cost and less node complexity. Our work will accelerate the development of both OPS and SDN evolution. <s> BIB008
An abstraction layer can be used to turn non-SDN optical switching devices into OpenFlow controllable switching devices BIB003 , BIB004 , BIB005 , BIB008 , BIB006 . As illustrated in Fig. 7 (traditional non-SDN network elements retro-fitted for control by an SDN controller via OpenFlow through a hardware abstraction layer BIB004 , BIB001 - BIB002 ), the abstraction layer provides a conversion layer between OpenFlow configuration messages and the optical switching devices' native management interface, e.g., the Simple Network Management Protocol (SNMP), the Transaction Language 1 (TL1) protocol, or a proprietary (vendor-specific) API. Additionally, a virtual OpenFlow switch with virtual interfaces that correspond to physical switching ports on the non-SDN switching device completes the abstraction layer BIB004 , BIB001 - BIB002 . When a flow entry is added between two virtual ports in the virtual OpenFlow switch, the abstraction layer uses the switching device's native management interface to add the flow entry between the two corresponding physical ports. A non-SDN PON OLT can be supplemented with a two-port OpenFlow switch and a hardware abstraction layer that converts OpenFlow forwarding rules to control messages understood by the non-SDN OLT BIB007 . Fig. 8 illustrates this OLT retro-fit for SDN control via OpenFlow. In this way, the PON has its switching functions controlled by OpenFlow.
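To make the retro-fitting idea concrete, the following Python sketch outlines how a hardware abstraction agent might translate an OpenFlow-style flow entry between two virtual ports into a native management command. The virtual-to-physical port map, the TL1-style cross-connect syntax, and the send_tl1() stand-in are illustrative assumptions, not the interface of any particular vendor or of the cited prototypes.

```python
# Hypothetical sketch of a hardware abstraction layer (HAL) agent: a virtual
# two-port OpenFlow switch whose flow entries are translated into native
# management commands of the legacy device. All names and command syntax
# are assumptions made for illustration.

VIRTUAL_TO_PHYSICAL = {1: "LINE-1-1", 2: "LINE-1-2"}  # virtual port -> physical port

def send_tl1(command: str) -> None:
    # Stand-in for the device's management session (could equally be SNMP or a CLI).
    print("TL1 >", command)

def on_flow_add(in_port: int, out_port: int, ctag: int) -> None:
    """Translate a flow entry between two virtual ports into a cross-connect
    between the corresponding physical ports on the non-SDN device."""
    src, dst = VIRTUAL_TO_PHYSICAL[in_port], VIRTUAL_TO_PHYSICAL[out_port]
    send_tl1(f"ENT-CRS-STS1::{src},{dst}:{ctag};")

on_flow_add(1, 2, ctag=7)  # flow entry (virtual port 1 -> 2) becomes a cross-connect
```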
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> Optical networks are undergoing significant changes, fueled by the exponential growth of traffic due to multimedia services and by the increased uncertainty in predicting the sources of this traffic due to the ever changing models of content providers over the Internet. The change has already begun: simple on-off modulation of signals, which was adequate for bit rates up to 10 Gb/s, has given way to much more sophisticated modulation schemes for 100 Gb/s and beyond. The next bottleneck is the 10-year-old division of the optical spectrum into a fixed "wavelength grid," which will no longer work for 400 Gb/s and above, heralding the need for a more flexible grid. Once both transceivers and switches become flexible, a whole new elastic optical networking paradigm is born. In this article we describe the drivers, building blocks, architecture, and enabling technologies for this new paradigm, as well as early standardization efforts. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> The paper is devoted to consideration of an innovative access network dedicated to B2B (Business To Business) applications. We present a network design based on passive optical LAN architecture utilizing proven GPON technology. The major advantage of the solution is an introduction of SDN paradigm to PON networking. Thanks to such approach network configuration can be easily adapted to business customers' demands and needs that can change dynamically. The proposed solution provides a high level of service flexibility and supports sophisticated methods allowing user traffic forwarding in effective way within the considered architecture. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> In order to address a diverse and demanding set of service and network drivers, several technology candidates with inherent physical layer (PHY) differences are emerging for future optical access networks. To overcome this PHY divide and enable both cost and bandwidth efficient heterogeneous technology co-existence in future optical access, we propose a novel Orthogonal Frequency Division Multiple Access (OFDMA)-based “meta-MAC”, which encapsulates PHY variations and enables fair inter-technology bandwidth arbitration. The new software-defined meta-MAC is envisioned to work on top of constituent MAC protocols, and exploit virtual OFDMA subcarriers as both finely granular and scalable bandwidth assignment units. We introduce important OFDMA meta-MAC design principles, and propose an elaborate three-stage dynamic resource provisioning scheme that satisfies the key requirements. The performance benefits of the meta-MAC concept and the proposed dynamic resource provisioning schemes in terms of spectrum management flexibility and support of diverse services are verified via real-time traffic simulation, confirming the attractiveness of the new approach for future optical access systems. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. 
SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> SDN and NFV are two novel paradigms that open the way for a more efficient operation and management of networks, allowing the virtualization and centralization of some functions that are distributed in current network architectures. Optical access networks present several characteristics (tree-like topology, distributed shared access to the upstream channel, partial centralization of the complex operations in the OLT device) that make them appealing for the virtualization of some of their functionalities. We propose a novel EPON architecture where OLTs and ONUs are partially virtualized and migrated to the network core following SDN and NFV paradigms, thus decreasing CAPEX and OPEX, and improving the flexibility and efficiency of network operations. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> The paper offers an innovative approach for building future proof access network dedicated to B2B (Business To Business) applications. The conceptual model of considered network is based on three main assumptions. Firstly, we present a network design based on passive optical LAN architecture utilizing proven GPON (Gigabit-capable Passive Optical Network) technology. Secondly, the new business model is proposed. Finally, the major advantage of the solution is an introduction of SDN (Software-Defined Networking) paradigm to GPON area. Thanks to such approach network configuration can be easily adapted to business customers' demands and needs that can change dynamically over the time. The proposed solution provides a high level of service flexibility and supports sophisticated methods allowing users' traffic forwarding in efficient way. The paper extends a description of the OpenFlowPLUS protocol proposed in [18] . Additionally it provides an exemplary logical scheme of traffic forwarding relevant for GPON devices employing the OpenFlowPLUS solution. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> In recent years, Passive Optical Network (PON) is developing rapidly in the access network, in which the high energy consumption problem is attracting more and more attention. In the paper, SDN (Software Defined Network) is first introduced in optical access networks to implement an energy-efficient control mechanism through OpenFlow protocol. Some theoretical analysis work for the energy consumption of this architecture has been conducted. Numeric results show that the proposed SDN based control architecture can reduce the energy consumption of the access network, and facilitates integration of access and metro networks. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> Adaptive flexi-grid optical networks should be able to autonomously decide where and when to dynamically setup, reoptimize, and release elastic optical connections, in reaction to network state changes. 
A stateful path computation element (PCE) is a key element for the introduction of dynamics and adaptation in generalized multiprotocol label switching (GMPLS)-based distributed control plane for flexi-grid DWDM networks (e.g., global concurrent reoptimization, defragmentation, or elastic inverse-multiplexing), as well as for enabling the standardized deployment of the GMPLS control plane in the software defined network control architecture. First, this paper provides an overview of passive and active stateful PCE architectures for GMPLS-enabled flexi-grid DWDM networks. A passive stateful PCE allows for improved path computation considering not only the network state (TED) but also the global connection state label switched paths database (LSPDB), in comparison with a (stateless) PCE. However, it does not have direct control (modification, rerouting) of path reservations stored in the LSPDB. The lack of control of these label switched paths (LSPs) may result in the suboptimal performance. To this end, an active stateful PCE allows for optimal path computation considering the LSPDB for the control of the state (e.g., increase of LSP bandwidth, LSP rerouting) of the stored LSPs. More recently, an active stateful PCE architecture has also been proposed that exposes the capability of setting up and releasing new LSPs. It is known as active stateful PCE with instantiation capabilities. This paper presents the first prototype implementation and experimental evaluation of an active stateful PCE with instantiation capabilities for the GMPLS-controlled flexi-grid DWDM network of the ADRENALINE testbed. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> OFDM has been considered as a promising candidate for future high-speed optical transmission technology. Based on OFDM, a novel architecture named flexi-grid optical network has been proposed, and it has drawn increasing attention in both academic and industry. In flexi-grid optical networks, with connection setting up and tearing down, the spectrum resources are separated into small non-contiguous spectrum bands, which may lead to inefficient spectrum utilization. The key requirement is spectrum defragmentation, which refers to periodically reconfigure the network and return it to its optimal states. Spectrum defragmentation should be operated under minimum cost including interrupting services or affecting the QoS (i.e. delay, bandwidth, bitrate). In this paper, we demonstrate for the first time spectrum defragmentation based on software defined networking (SDN) in flexi-grid optical networks. Experimental results are reported on our testbed and verify the feasibility of our proposed architecture. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> Elastic optical networks (EONs) facilitate agile spectrum management in the optical layer. When coupling with software-defined networking, they function as software-defined EONs (SD-EONs) and provide service providers more freedom to customize their infrastructure dynamically. In this paper, we investigate how to overcome spectrum fragmentation in SD-EONs with OpenFlow-controlled online spectrum defragmentation (DF), and conduct system implementations to facilitate highly-efficient online DF. 
We first consider sequential DF, i.e., the scenario that involves a sequence of lightpath reconfigurations to progressively consolidate the spectrum utilization. We modify our previous DF algorithm to make sure that the reconfigurations can be performed in batches and the “make-before-break” scheme can be applied to all of them. The modified algorithm is implemented in an OpenFlow (OF) controller, and we design OF extensions to facilitate synchronous batch reconfiguration. Then, we further simplify the DF operations by designing and implementing parallel DF that can accomplish all the DF-related lightpath reconfigurations simultaneously. All these DF implementations are experimentally demonstrated in an SD-EON control plane testbed that consists of 14 stand-alone OF agents and one OF controller, which are all implemented based on high-performance Linux servers. The experimental results indicate that our OF-controlled online DF implementations perform well and can improve network performance in an efficient way. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> We propose a global dynamic bandwidth optimization algorithm for software defined optical access and aggregation networks, which can support unified optimizations and efficient scheduling by allocating bandwidth resources from a global network view in real-time. The performance benefits of the proposed algorithm in terms of resource utilization rate, average delay and delay of a single mobile user are verified through network simulation. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> In this invited paper, software defined network (SDN)-based approaches for future cost-effective optical mobile backhaul (MBH) networks are discussed, focusing on key principles, throughput optimization and dynamic service provisioning as its use cases. We propose a novel physical-layer aware throughput optimization algorithm that confirms > 100 Mb/s end-to-end per-cell throughputs with ≥2.5 Gb/s optical links deployed at legacy cell sites. We also demonstrate the first optical line terminal (OLT)-side optical Nyquist filtering of legacy 10G on-off-keying (OOK) signals, enabling dynamic >10 Gb/s Orthogonal Frequency Domain Multiple Access (OFDMA) λ-overlays for MBH over passive optical network (PON) with 40-km transmission distances and 1:128 splitting ratios, without any ONU-side equipment upgrades. The software defined flexible optical access network architecture described in this paper is thus highly promising for future MBH networks. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> In this paper, we propose a design for a flat Layer 2 Metro-Core network as part of a Long Reach PON architecture that meets the demands of scalability, efficiency and economy within a modern telecommunications network. We introduce the concept of Mac Address Translation, which is equivalent to Network Address translation at Layer 3 but applied instead to layer 2. This allows layer 2 address space to be structured and fits well with the table driven approach of OpenFlow and the wider Software Defined Networks.
Without structure at the layer 2 addressing level, the number of flow table rules to support a moderately sized layer 2 network would be very significant, for which there are few if any OpenFlow switch available with adequate TCAM tables. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> Virtual machine migration in cloud-computing environments is an important operational technique, and requires significant network bandwidth. We demonstrate that heterogeneous bandwidth (vs. homogeneous bandwidth) for migration reduces significant resource consumption in SDN-enabled optical networks. <s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> Spectrum fragmentation limits the efficiency of spectrum utilization in elastic optical networks (EONs). This paper studies how to take advantage of the centralized network control and management provided by software-defined EONs (SD-EONs) for realizing OpenFlow-assisted implementation of online defragmentation (DF). We first discuss the overall system design and OpenFlow protocol extensions to support efficient online DF and conduct DF experiments with routing and spectrum assignment (RSA) reconfigurations in a single-domain SD-EON. Then, we propose to realize fragmentation-aware RSA (FA-RSA) in multi-domain SD-EONs with the cooperation of multiple OpenFlow controllers. In order to provision inter-domain lightpaths with restricted domain visibility on intradomain resource utilization, we design and implement an inter-domain protocol to facilitate FA-RSA in multi-domain SD-EONs and demonstrate controlling the spectrum fragmentation on inter-domain links with FA-RSA. Our experimental results indicate that the OpenFlow-controlled DF systems perform well and can improve the performance of SD-EONs effectively. <s> BIB014 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> The optical access networks and aggregation networks are necessary to be controlled together to improve the bandwidth resource availability globally. Unified control architecture for optical access networks and aggregation networks is designed based on software-defined networking controller, the function modules of which have been described and the related extended protocol solution has been given. A software-defined dynamic bandwidth optimization (SD-DBO) algorithm is first proposed for optical access and aggregation networks, which can support unified optimizations and efficient scheduling by allocating bandwidth resources from a global network view in real time. The performance of the proposed algorithm has been verified and compared with traditional DBA algorithm in terms of resource utilization rate and average delay time. Simulation result shows that SD-DBO algorithm performs better. <s> BIB015 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> In this paper, we describe the integration of Software Defined Networking (SDN) in the mobile backhaul as a disruptive approach to streamline the transport network. 
In this work we leverage SDN to optimize the mobile backhaul transport by removing all mobile specific tunnelling and replacing it with more efficient MPLS or Carrier Grade Ethernet deployed either over electrical or optical networks. The paper also presents the testbed with complete end to end system including off the shelf base stations, SDN enabled mobile backhaul switches and virtualized network elements (i.e. Mobility Management Entity (MME), Serving/Packet Gateway (S/P-GW)) running on the cloud. This testbed is currently accepted as European Telecommunications Standards Institute (ETSI) Proof of Concept and the results are used to describe the benefits for operators and end users. Moreover, an initial design of services based on the proposed virtualized mobile network architecture is proposed. The results of the testbed show the benefits for mobile operators in terms of Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) savings but more importantly the development of services that benefit from optimal usage of resources. <s> BIB016 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> Novel optical access network virtualization and resource allocation algorithms for Internet-of-Things support are proposed and implemented on a real-time SDN-controller platform. 30–50% gains in served request number, traffic prioritization, and revenue are demonstrated. <s> BIB017 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. SDN Control of Optical Network Operation 1) Controlling Passive Optical Networks with OpenFlow: <s> Multicore fibres (MCF) offer the opportunity of both increasing communication capacity as well as offering enhanced flexibility in the network scenario. Software-defined networks (SDN) are capable of handling novel functionalities coming from physical layer with the aim of better exploiting overall connectivity. In this paper, network defragmentation is considered combined with space-division multiplexing (SDM). In particular, SDN-driven effective defragmentation technique on a seven-core MCF is demonstrated. This paper includes a networking view about defragmentation principle exploiting SDN control plane. At the same time, an accurate model and numerical investigations reveal feasibility and system constraints. Push–pull operation for a coherent DQPSK transmission has been experimentally demonstrated together with full dynamic defragmentation. By using a high-speed-integrated dual-output intensity modulator switch for core adaptation in combination with hitless frequency shift, quasi-hitless SDN-driven reconfiguration performances are shown. Defragmentation for 40 Gb/s DQPSK and 80 Gb/s 16QAM signals is demonstrated. Switching from core 2 to core 1, 100-GHz frequency shift, and switching back to core 2 is obtained losing just 1800 and 2600 symbols, respectively. <s> BIB018
An SDN controlled PON can be created by upgrading OLTs to SDN-OLTs that can be controlled using a Southbound Interface, such as OpenFlow BIB004 . A centralized PON controller, potentially executing in a data center, controls one or more SDN-OLTs. The advantage of using SDN is the broadened perspective of the PON controller as well as the potentially reduced cost of the SDN-OLT compared to a non-SDN OLT. Parol and Pawlowski BIB002 , BIB005 introduce the SDN paradigm to GPON-based business (B2B) access networks through their OpenFlowPLUS protocol, an OpenFlow extension for GPON devices. Many of the OLT functions operate at timescales that are problematic for the controller due to the latency between the controller and the OLTs. However, Khalili et al. BIB004 identify the ONU registration policy and the coarse timescale DBA policy as functions that operate at timescales that allow effective offloading to an SDN controller. Yan et al. BIB006 further identify OLT and ONU power control for energy savings as a function that can be effectively offloaded to an SDN controller.

There is also a movement to use PONs in edge networks to provide connectivity inside a multitenant building or on a campus with multiple buildings BIB002 , BIB005 . The use of PONs in this edge scenario requires rapid re-provisioning from the OLT. A software controlled PON can provide this needed rapid re-provisioning BIB002 , BIB005 . Kanonakis et al. BIB003 propose leveraging the broad perspective that SDN can provide to perform dynamic bandwidth allocation across several Virtual PONs (VPONs). The VPONs are separated on a physical PON by the wavelength bands that they utilize. Bandwidth allocation is performed at the granularity of the OFDMA subcarriers that compose the optical spectrum.

2) SDN Control of Optical Spectrum Defragmentation: In a departure from the fixed wavelength grid (ITU-T G.694.1), elastic optical networking allows flexible use of the optical spectrum. This flexibility can permit higher spectral efficiency by avoiding consuming an entire fixed-grid wavelength channel when unnecessary and by avoiding unnecessary guard bands in certain circumstances BIB001 . However, this flexibility causes fragmentation of the optical spectrum as flexible grid lightpaths are established and terminated over time. Spectrum fragmentation leads to the circumstance in which there is enough spectral capacity to satisfy a demand, but that capacity is spread over several fragments rather than being consolidated in adjacent spectrum as required. If the fragmentation is not counteracted by a periodic defragmentation process, then overall spectral utilization will suffer. This resource fragmentation problem also appears in computer systems, in main memory and long-term storage. In those contexts, the problem is typically solved by allowing memory to be allocated using non-adjacent segments: memory and storage are partitioned into pages and blocks, respectively, and the allocations of pages to a process or of blocks to a file do not need to be contiguous. With communication spectrum, this would mean combining multiple small bandwidth channels through inverse multiplexing to create a larger channel BIB007 .

An SDN controller can provide a broad network perspective to make the periodic optical spectrum defragmentation process more effective BIB007 . In general, optical spectrum defragmentation operations can reduce lightpath blocking probabilities by anywhere from 3% BIB008 up to as much as 75% BIB009 , BIB014 . Multicore fibers provide additional spectral resources through additional transmission cores to permit quasi-hitless defragmentation BIB018 .
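The following Python fragment gives a minimal flavor of a controller-side defragmentation pass. It is an illustrative first-fit compaction on a single link, not the algorithm of BIB009 or BIB014 , and it ignores physical-layer retuning constraints; each lightpath is assumed to occupy a contiguous block of frequency slots.

```python
# Illustrative sketch of spectrum defragmentation: re-pack lightpaths toward
# the low end of the spectrum on one link, emitting make-before-break
# retuning moves (set up the new placement, then tear down the old one).

def defragment(lightpaths):
    """lightpaths: list of (path_id, start_slot, width) tuples.
    Returns (path_id, old_start, new_start) moves for a first-fit compaction."""
    moves, next_free = [], 0
    for pid, start, width in sorted(lightpaths, key=lambda lp: lp[1]):
        if start != next_free:                     # a gap precedes this lightpath
            moves.append((pid, start, next_free))  # make-before-break retune
        next_free += width
    return moves

print(defragment([("A", 0, 4), ("B", 6, 2), ("C", 10, 4)]))
# -> [('B', 6, 4), ('C', 10, 6)]: two moves close the gaps,
#    leaving slots 10 and above free for a wide new request
```

A production defragmenter would additionally batch the moves and verify that each retune is hitless (e.g., via push-pull retuning), as discussed in the cited studies.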
3) SDN Control of Tandem Networks: a) Metro and Access: Wu et al. BIB010 , BIB015 propose leveraging the broad perspective that SDN can provide to improve bandwidth allocation. Two cooperating stages of SDN controllers, namely (i) an access stage that controls each SDN-OLT individually, and (ii) a metro stage that controls the global bandwidth allocation strategy, can coordinate bandwidth allocation across several physical PONs BIB010 , BIB015 . The bandwidth allocation is managed cooperatively between the two stages of SDN controllers to optimize the utilization of the access and metro network bandwidth. Simulation experiments indicate a 40% increase in network bandwidth utilization as a result of the global coordination, compared to operating the bandwidth allocation only within the individual PONs BIB010 , BIB015 .

b) Access and Wireless: Bojic et al. expand on the concept of SDN controlled OFDMA enabled VPONs BIB003 to provide mobile backhaul service. The backhaul service can be provided for wireless small-cell sites (e.g., micro and femto cells) that utilize millimeter wave frequencies. Each small-cell site contains an OFDMA-PON ONU that receives the backhaul service through the access network over a VPON. An SDN controller assigns bandwidth to each small-cell site by allocating the OFDMA subcarriers of a VPON to the constituent ONU. The SDN controller leverages its broad view of the network to solve the joint bandwidth allocation and routing problem across several network segments; with this broad perspective, the SDN controller can make globally rather than just locally optimal bandwidth allocation and routing decisions. Efficient optimization heuristics, such as genetic algorithms, can provide competitive solutions while mitigating the computational complexity of optimizing large networks; additionally, partitioning the network, with an SDN controller for each partition, keeps the computation tractable as the network scales. Tanaka and Cvijetic BIB011 presented one such optimization formulation for maximizing throughput. Costa-Requena et al. BIB016 described a proof-of-concept LTE testbed in which the network consists of software defined base stations and network functions executing on cloud resources; the testbed is described only in broad qualitative terms, without technical details or mathematical or experimental analysis.

c) Access, Metro, and Core: Slyne and Ruffini BIB012 provide a use case for SDN switching control across network segments: Layer 2 switching across the access, metro, and core networks. Layer 2 (e.g., Ethernet) switching does not scale well because its flat addresses lack hierarchy, which precludes switching rules that match on aggregates of addresses and thereby limits the scaling of these networks. Slyne and Ruffini BIB012 propose using SDN to create hierarchical pseudo-MAC addresses that permit a small number of flow table entries to configure the switching of traffic using Layer 2 addresses across network segments. The pseudo-MAC addresses encode information about the device location to permit simple switching rules. At the entry point of the network, flow table entries are set up to translate from real (non-hierarchical) MAC addresses to hierarchical pseudo-MAC addresses; the reverse translation takes place at the exit point of the network.
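The pseudo-MAC idea can be sketched as follows. The field layout chosen here (16-bit region, OLT, and port identifiers) is an assumption for illustration only, not the exact encoding of BIB012 ; the point is that hierarchical addresses let a single masked flow rule cover an entire aggregate.

```python
# Illustrative pseudo-MAC encoding in the spirit of BIB012. The 16/16/16-bit
# field split is an assumption made for this sketch.

def to_pseudo_mac(region: int, olt: int, port: int) -> str:
    """Pack a hierarchical location (region, OLT, port) into a MAC-like address."""
    raw = (region << 32) | (olt << 16) | port  # 48 bits total
    return ":".join(f"{b:02x}" for b in raw.to_bytes(6, "big"))

# Ingress switches rewrite a host's real MAC to its location-based pseudo-MAC.
# Core switches then need only one masked rule per aggregate, e.g.,
#   match dst = 00:01:00:2a:00:00 / ff:ff:ff:ff:00:00  ->  port toward OLT 0x2a
print(to_pseudo_mac(region=1, olt=0x2A, port=7))  # 00:01:00:2a:00:07
```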
d) DC Virtual Machine Migration: Mandal et al. BIB013 provided a cloud computing use case for SDN bandwidth allocation across network segments: Virtual Machine (VM) migration between data centers. VM migrations require significant network bandwidth. Bandwidth allocation that utilizes the broad perspective that SDN can provide is critical for achieving reasonable VM migration latencies without sacrificing network bandwidth utilization.

e) Internet of Things: Wang et al. BIB017 examine another use case for SDN bandwidth allocation across network segments: the Internet of Things (IoT). Specifically, Wang et al. have developed a Dynamic Bandwidth Allocation (DBA) protocol that exploits SDN control for multicasting and suspending flows. This DBA protocol is studied in the context of a virtualized WDM optical access network that provides IoT services through the distributed ONUs to individual devices. The SDN controller employs multicasting and flow suspension to efficiently prioritize the IoT service requests. Multicasting allows multiple requests to share resources in the central nodes that are responsible for processing a prescribed wavelength in the central office (OLT). Flow suspension allows high-priority requests (e.g., an emergency call) to suspend ongoing low-priority traffic flows (e.g., routine meter readings). Performance results for a real-time SDN controller implementation indicate that the proposed bandwidth (resource) allocation with multicast and flow suspension can improve several key performance metrics, such as the request serving ratio, revenue, and delays, by 30–50% BIB017 .
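A minimal sketch of the flow suspension mechanism is given below; the two-level priority scheme and the per-wavelength capacity model are simplifying assumptions made for illustration, not the protocol of BIB017 .

```python
# Illustrative sketch of SDN-controlled flow suspension: high-priority
# requests preempt low-priority flows when the assumed per-wavelength
# capacity is exhausted; suspended flows can resume once capacity frees up.

active, suspended, CAPACITY = [], [], 4  # concurrent flows per wavelength (assumed)

def admit(flow_id: str, priority: int) -> None:
    """priority 0 = high (e.g., emergency call), 1 = low (e.g., meter reading)."""
    if len(active) < CAPACITY:
        active.append((priority, flow_id))
    elif priority == 0 and any(p == 1 for p, _ in active):
        victim = next(f for f in active if f[0] == 1)  # suspend one low-priority flow
        active.remove(victim)
        suspended.append(victim)
        active.append((priority, flow_id))
    else:
        suspended.append((priority, flow_id))          # wait for capacity

for fid, prio in [("meter-1", 1), ("meter-2", 1), ("cam-1", 1), ("cam-2", 1), ("sos-1", 0)]:
    admit(fid, prio)
print(active)     # sos-1 is admitted; one meter flow has moved to `suspended`
```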
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> As anyone who has operated a large network can attest, enterprise networks are difficult to manage. That they have remained so despite significant commercial and academic efforts suggests the need for a different network management paradigm. Here we turn to operating systems as an instructive example in taming management complexity. In the early days of computing, programs were written in machine languages that had no common abstractions for the underlying physical resources. This made programs hard to write, port, reason about, and debug. Modern operating systems facilitate program development by providing controlled access to high-level abstractions for resources (e.g., memory, storage, communication) and information (e.g., files, directories). These abstractions enable programs to carry out complicated tasks safely and efficiently on a wide variety of computing hardware. In contrast, networks are managed through low-level configuration of individual components. Moreover, these configurations often depend on the underlying network; for example, blocking a user’s access with an ACL entry requires knowing the user’s current IP address. More complicated tasks require more extensive network knowledge; forcing guest users’ port 80 traffic to traverse an HTTP proxy requires knowing the current network topology and the location of each guest. In this way, an enterprise network resembles a computer without an operating system, with network-dependent component configuration playing the role of hardware-dependent machine-language programming. What we clearly need is an “operating system” for networks, one that provides a uniform and centralized programmatic interface to the entire network. Analogous to the read and write access to various resources provided by computer operating systems, a network operating system provides the ability to observe and control a network. A network operating system does not manage the network itself; it merely provides a programmatic interface. Applications implemented on top of the network operating system perform the actual management tasks. The programmatic interface should be general enough to support a broad spectrum of network management applications. Such a network operating system represents two major conceptual departures from the status quo. First, the network operating system presents programs with a centralized programming model; programs are written as if the entire network were present on a single machine (i.e., one would use Dijkstra to compute shortest paths, not Bellman-Ford). This requires (as in [3, 8, 14] and elsewhere) centralizing network state. Second, programs are written in terms of high-level abstractions (e.g., user and host names), not low-level configuration parameters (e.g., IP and MAC addresses). This allows management directives to be enforced independent of the underlying network topology, but it requires that the network operating system carefully maintain the bindings (i.e., mappings) between these abstractions and the low-level configurations. Thus, a network operating system allows management applications to be written as centralized programs over highlevel names as opposed to the distributed algorithms over low-level addresses we are forced to use today. 
While clearly a desirable goal, achieving this transformation from distributed algorithms to centralized programming presents significant technical challenges, and the question we pose here is: Can one build a network operating system at significant scale? <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> A novel software-defined packet over optical networks solution based on the OpenFlow and GMPLS control plane integration is demonstrated. The proposed architecture, experimental setup, and average flow setup time for different optical flows is reported. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> We experimentally present the seamless interworking between OpenFlow and PCE for dynamic wavelength path control in multi-domain WSON, assessing the overall feasibility and quantitatively evaluating both the path computation and lightpath provisioning latencies. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> Control plane techniques are very important for optical networks since they can enable dynamic lightpath provisioning and restoration, improve the network intelligence, and greatly reduce the processing latency and operational expenditure. In recent years, there have been great progresses in this area, ranged from the traditional generalized multi-protocol label switching (GMPLS) to a path computation element (PCE)/GMPLS-based architecture. The latest studies have focused on an OpenFlow-based control plane for optical networks, which is also known as software-defined networking. In this paper, we review our recent research activities related to the GMPLS-based, PCE/GMPLS-based, and OpenFlow-based control planes for a translucent wavelength switched optical network (WSON). We present enabling techniques for each control plane, and we summarize their advantages and disadvantages. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> Software defined networking and OpenFlow, which allow operators to control the network using software running on a network operating system within an external controller, provide the maximum flexibility for the operator to control a network, and match the carrier's preferences given its centralized architecture, simplicity, and manageability. In this paper, we report a field trial of an OpenFlow-based unified control plane (UCP) for multilayer multigranularity optical switching networks, verifying its overall feasibility and efficiency, and quantitatively evaluating the latencies for end-to-end path creation and restoration. To the best of our knowledge, the field trial of an OpenFlow-based UCP for optical networks is a world first. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> We overview the PCE architecture and how it can mitigate some weaknesses of GMPLS-controlled optical networks. We identify some of its own limitations and the way they are being addressed, along with its deployment models in SDN/Openflow. 
<s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> Software-defined networking (SDN) enables programmable SDN control and management functions at a number of layers, allowing applications to control network resources or information across different technology domains, e.g., Ethernet, wireless, and optical. Current cloud-based services are pushing networks to new boundaries by deploying cutting edge optical technologies to provide scalable and flexible services. SDN combined with the latest optical transport technologies, such as elastic optical networks, enables network operators and cloud service providers to customize their infrastructure dynamically to user/application requirements and therefore minimize the extra capital and operational costs required for hosting new services. In this paper a unified control plane architecture based on OpenFlow for optical SDN tailored to cloud services is introduced. Requirements for its implementation are discussed considering emerging optical transport technologies. Implementations of the architecture are proposed and demonstrated across heterogeneous state-of-the-art optical, packet, and IT resource integrated cloud infrastructure. Finally, its performance is evaluated using cloud use cases and its results are discussed. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> Software defined networking (SDN) and flexible grid optical transport technology are two key technologies that allow network operators to customize their infrastructure based on application requirements and therefore minimizing the extra capital and operational costs required for hosting new applications. In this paper, for the first time we report on design, implementation & demonstration of a novel OpenFlow based SDN unified control plane allowing seamless operation across heterogeneous state-of-the-art optical and packet transport domains. We verify and experimentally evaluate OpenFlow protocol extensions for flexible DWDM grid transport technology along with its integration with fixed DWDM grid and layer-2 packet switching. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> A path computation element (PCE) is briefly defined as a control plane functional component (physical or logical) that is able to perform constrained path computation on a graph representing (a subset of) a network. A stateful PCE is a PCE that is able to consider the set of active connections, and its development is motivated by the fact that such knowledge enables the deployment of improved, more efficient algorithms. Additionally, a stateful PCE is said to be active if it is also able to affect (modify or suggest the modification of) the state of such connections. A stateful active PCE is thus able not only to use the knowledge of the active connections as available information during the computation, but also to reroute existing ones, resulting in a more efficient use of resources and the ability to dynamically arrange and reoptimize the network. An OpenFlow controller is a logically centralized entity that implements a control plane and configures the forwarding plane of the underlying network devices using the OpenFlow protocol. 
From a control plane perspective, an OpenFlow controller and the aforementioned stateful PCE have several functions in common, for example, in what concerns network topology or connection management. That said, both entities also complement each other, since a PCE is responsible mainly for path computation accessible via an open, standard, and flexible protocol, and the OpenFlow controller assumes the task of the actual data plane forwarding provisioning. In other words, the stateful PCE becomes active by virtue of relying on an OpenFlow controller for the establishment of connections. In this framework, the integration of both entities presents an opportunity allowing a return on investment, reduction of operational expenses, and reduction of time to market, resulting in an efficient approach to operate transport networks. In this paper, we detail the design, implementation, and experimental evaluation of a centralized control plane based on a stateful PCE, acting as an OpenFlow controller, targeting the control and management of optical networks. We detail the extensions to both the OpenFlow and the PCE communication protocol (PCEP), addressing the requirements of elastic optical networks as well as the system performance, obtained when deployed in a laboratory trial. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> Two testbeds based on GMPLS and OpenFlow are built respectively to validate their performance over large scale optical networks. Blocking probability, wavelength utilization and lightpath setup time are shown on the topology with 1000 nodes. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> We propose the first optical SDN model enabling performance optimization and comparison of heterogeneous SDN scenarios. We exploit it to minimize latency and compare cost for non-SDN, partial-SDN and full-SDN variants of the same network. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> OpenFlow is a protocol that enables networks to evolve and change flexibly, by giving a remote controller the capability of modifying the behavior of network devices. In an OpenFlow network, each device needs to maintain a dedicated and separated connection with a remote controller. All these connections can be described as the OpenFlow control network, that is the data network which transports control plane information, and can be deployed together with the data infrastructure plane (in-band) or separated (out-of-band), with advantages and disadvantages in both cases. The control network is a critical subsystem since the communication with the controller must be reliable and ideally should be protected against failures. This paper proposes a novel ring architecture to efficiently transport both the data plane and an out-of-band control network. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Hybrid SDN-GMPLS Control 1) Generalized MultiProtocol Label Switching (GMPLS): <s> Adaptive flexi-grid optical networks should be able to autonomously decide where and when to dynamically setup, reoptimize, and release elastic optical connections, in reaction to network state changes.
D. Hybrid SDN-GMPLS Control

1) Generalized MultiProtocol Label Switching (GMPLS):
Prior to SDN, MultiProtocol Label Switching (MPLS) offered a mechanism to separate the control and data planes through label switching. With MPLS, packets are forwarded in a connection-oriented manner through Label Switched Paths (LSPs) traversing Label Switching Routers (LSRs). An entity in the network establishes an LSP through a network of LSRs for a particular class of packets and then signals the label-based forwarding table entries to the LSRs. At each hop along an LSP, a packet is assigned a label that determines its forwarding rule at the next hop. At the next hop, that label determines the packet's output port and label for the next hop; the process repeats until the packet reaches the end of the LSP. Several signaling protocols for programming the label-based forwarding table entries inside LSRs have been defined, e.g., through the Resource Reservation Protocol (RSVP). Generalized MPLS (GMPLS) extends MPLS to offer circuit switching capability. Although never commercially deployed BIB005, GMPLS and a centralized Path Computation Element (PCE) BIB006 have been considered for the control of optical networks.

2) Path Computation Element (PCE): A PCE is a concept developed by the IETF (see RFC 4655) to refer to an entity that computes network paths given a topology and some criteria. The PCE concept decouples the path computation action from the forwarding action in switching devices. A PCE could be distributed in every switching element in a network domain, or there could be a single centralized PCE for an entire network domain. The network domain could be an area of an Autonomous System (AS), an AS, a conglomeration of several ASes, or just a group of switching devices relying on one PCE. Some of an SDN controller's functionality falls under the classification of a centralized PCE. However, the PCE concept does not include the external configuration of forwarding tables. Thus, a centralized PCE device does not necessarily have a means to configure the switching elements to provision a computed path.

When the entity requesting path computation is not colocated with the PCE, the PCE Communication Protocol (PCEP) is used over TCP port 4189 to facilitate path computation requests and responses. The PCEP consists of the following message types:
• Session establishment messages (Open, Keepalive, Close)
• PCReq - Path computation request
• PCRep - Path computation reply
• PCNtf - Event notification
• PCErr - Signal of a protocol error

The path computation request message must include the end points of the path and can optionally include the requested bandwidth, the metric to be optimized in the path computation, and a list of links to be included in the path. The path computation reply includes the computed path, expressed in the Explicit Route Object (ERO) format (see RFC 3209), or an indication that there is no path. See RFC 5440 for more details on PCEP.

A PCE has been proposed as a central entity to manage a GMPLS-enabled optical circuit switched network. Specifically, the PCE maintains the network topology in a structure called the Traffic Engineering Database (TED). The traffic engineering modifier (see RFC 2702) signifies that the path computations are made to relieve congestion that is caused by the sub-optimal allocation of network resources. This modifier is used extensively in discussions of MPLS/GMPLS because their use case is traffic engineering; in acronym form the modifier is TE (e.g., TE LSP, RSVP-TE).
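To make the PCEP exchange and the PCE's role concrete, the following minimal Python sketch models a toy path computation request/reply and a centralized PCE that searches a TED-like topology. All class and function names here are illustrative assumptions for this discussion, not an actual PCEP library API; a real implementation would encode these messages per RFC 5440 over TCP port 4189.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PCReq:                              # path computation request (PCReq)
    src: str                              # path end points (mandatory)
    dst: str
    bandwidth: Optional[float] = None     # optional requested bandwidth
    metric: str = "hops"                  # metric to optimize
    include_links: List[tuple] = field(default_factory=list)

@dataclass
class PCRep:                              # path computation reply (PCRep)
    ero: Optional[List[str]]              # Explicit Route Object as a node list,
                                          # or None to indicate that no path exists

def compute_path(req: PCReq, ted: dict) -> PCRep:
    # Toy centralized PCE: breadth-first search over the TED adjacency lists,
    # which minimizes the hop-count metric.
    frontier, visited = [[req.src]], {req.src}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == req.dst:
            return PCRep(ero=path)
        for nxt in ted.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return PCRep(ero=None)

# Example: compute_path(PCReq("A", "C"), {"A": ["B"], "B": ["C"]}).ero -> ['A', 'B', 'C']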
If the PCE is stateful with complete control over its network domain, it also maintains an LSP database recording the provisioned GMPLS lightpaths. A lightpath request can be sent to the PCE, which uses the topology and LSP databases to find the optimal path and then configures the GMPLS-controlled optical circuit switching nodes using NETCONF (see RFC 6241) or proprietary command line interfaces (CLIs) BIB013. This stateful PCE with instantiation capabilities (capabilities to provision lightpaths) operates similarly to an SDN controller. For that reason, GMPLS with a centralized stateful PCE with instantiation capabilities can provide a baseline for the performance analysis of an SDN controller, as well as a mechanism to be blended with an SDN controller for hybrid control BIB007, BIB008, BIB014.

3) Approaches to Hybrid SDN-GMPLS Control: Hybrid GMPLS/PCE and SDN control can be formed by allowing an SDN controller to leverage a centralized PCE to control a portion of the infrastructure using PCEP as the SBI BIB002, BIB013; see illustration a) in Fig. 9. The SDN controller builds higher functionality above what the PCE provides and can possibly control a large network that utilizes several PCEs as well as OpenFlow-controlled network elements. Alternatively, the SDN controller can leverage a PCE for its path computation abilities, with the SDN controller handling the configuration of the network elements to establish a path using an SBI protocol, such as OpenFlow BIB014, BIB003, BIB009; see illustration b) in Fig. 9, and the sketch below.
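A minimal sketch of approach b) follows, reusing the toy PCReq/PCRep classes and compute_path PCE function from above: the SDN controller obtains a route from the PCE (the PCEP role) and then programs each hop itself (the OpenFlow SBI role). The flow table representation and helper names are hypothetical simplifications, not an actual controller API.

flow_tables = {}   # stand-in for the OpenFlow switches: node -> list of flow entries

def provision_lightpath(src, dst, ted):
    rep = compute_path(PCReq(src, dst), ted)      # PCE path computation step
    if rep.ero is None:
        raise RuntimeError("PCE reports no feasible path")
    # SDN controller provisioning step: one flow-mod per hop of the ERO.
    for node, next_node in zip(rep.ero, rep.ero[1:]):
        flow_tables.setdefault(node, []).append(
            {"match": {"dst": dst}, "action": {"forward_to": next_node}})
    return rep.ero

# Example: provision_lightpath("A", "C", {"A": ["B"], "B": ["C"]}) -> ['A', 'B', 'C']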
E. SDN Performance Analysis

1) SDN vs. GMPLS: Liu et al. BIB004 provided a qualitative comparison of GMPLS, GMPLS/PCE, and SDN OpenFlow for the control of wavelength-switched optical networks. Liu et al. noted that there is an evolution toward centralized control from GMPLS to GMPLS/PCE to OpenFlow. Whereas GMPLS offers distributed control, GMPLS/PCE is commonly regarded as having centralized path computation but still distributed provisioning/configuration, while OpenFlow centralizes all of the network control. In our discussion in Section IV-D we noted that a stateful PCE with instantiation capabilities centralizes all network control and is therefore very similar to SDN. Liu et al. have also pointed out that GMPLS/PCE is more technically mature than OpenFlow, with IETF RFCs for GMPLS.

A comparison of GMPLS and OpenFlow has been conducted by Zhao et al. BIB010 for large-scale optical networks. Two testbeds were built, based on GMPLS and on OpenFlow, respectively. Performance metrics, such as blocking probability, wavelength utilization, and lightpath setup time, were evaluated for a 1000-node topology. The results indicated that GMPLS gives a slightly lower blocking probability. However, OpenFlow gives higher wavelength utilization and shorter average lightpath setup times. Thus, the results suggest that OpenFlow is overall advantageous compared to GMPLS in large-scale optical networks.

Cvijetic et al. BIB011 conducted a numerical analysis to compare the computed shortest path lengths for non-SDN, partial-SDN, and full-SDN optical networks. A full-SDN network enables path lengths that are approximately a third of those computed on a non-SDN network. These shorter paths also translate into reduced energy consumption; thus, an SDN-controlled network can achieve both lower network latency and lower energy consumption BIB011.

Experiments conducted on the testbed described in BIB008 show a 4% reduction in lightpath blocking probability using SDN OpenFlow compared to GMPLS for lightpath provisioning. The same experiments show that lightpath setup times can be reduced to nearly half using SDN OpenFlow compared to GMPLS. Finally, the experiments show that an Open vSwitch based controller can process about three times the number of flows per second as a NOX-based controller BIB001.

2) SDN Controller Flow Setup: Veisllari et al. BIB015 evaluated the use of SDN to support both circuit and packet switching in a metropolitan area ring network that interconnects access network segments with a backbone network. This network is assumed to be controlled by a single SDN controller. The objective of the study BIB015 was to determine the effect of the packet service flow size on the SDN controller flow service time required to meet the stability conditions at the controller. Toward this end, Veisllari et al. derived the mean arrival rate of new packet and circuit flows at the controller. This arrival rate function was visualized by varying the length of short-lived ("mice") flows, the fraction of long-lived ("elephant") flows, and the volume of traffic consumed by "elephant" flows.
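The exact arrival rate function of BIB015 is not reproduced here; the following sketch merely illustrates, under simplified assumptions (one setup request per flow, the controller modeled as a single server with flow service rate mu), how the mice/elephant flow mix drives the controller load and the stability condition rho = lambda / mu < 1.

def controller_utilization(traffic_gbps, mice_flow_kbytes, elephant_fraction,
                           elephant_flow_mbytes, setup_rate_mu):
    # Split the offered traffic volume between elephant and mice flows.
    elephant_bps = traffic_gbps * 1e9 * elephant_fraction
    mice_bps = traffic_gbps * 1e9 * (1.0 - elephant_fraction)
    # Each flow triggers one controller request; arrival rate = volume / flow size.
    lam = (mice_bps / (mice_flow_kbytes * 8e3)
           + elephant_bps / (elephant_flow_mbytes * 8e6))
    return lam, lam / setup_rate_mu   # stable only if the second value is < 1

lam, rho = controller_utilization(traffic_gbps=100, mice_flow_kbytes=20,
                                  elephant_fraction=0.8, elephant_flow_mbytes=100,
                                  setup_rate_mu=50_000)
# Here rho is about 2.5: the controller is overloaded, illustrating why flow
# aggregation and/or parallel distributed controllers are needed for scalability.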
Liu et al. BIB005 use a multinational (Japan, China, Spain) NOX:OpenFlow controlled four-wavelength optical circuit and burst switched network to study path setup/release times as well as path restoration times. The optical transponders that can generate failure alarms were also under NOX:OpenFlow control, and these alarms were used to trigger protection switching. The single SDN controller was located in the Japanese portion of the network. The experiments found the path setup time to vary from 250-600 ms and the path release times to vary from 130-450 ms. Path restoration times varied from 250-500 ms. Liu et al. noted that the major contributing factor to these times was the OpenFlow message delivery time BIB005.

3) Out-of-Band Control: Sanchez et al. BIB012 have qualitatively compared four SDN-controlled ring metropolitan network architectures. The architectures vary in whether the SDN control traffic is carried in-band with the data traffic or out-of-band separately from the data traffic. In a single-wavelength ring network, out-of-band control would require a separate physical network that would come at a high cost, but would provide reliability of the network control under failure of the ring network. In a multiwavelength ring network, a separate wavelength can be allocated to carry the control traffic. Sanchez et al. BIB012 focused on a Tunable Transmitter Fixed Receiver (TTFR) WDM ring node architecture. In this architecture, each node receives data on a home wavelength channel and has the capability to transmit on any of the available wavelengths to reach any other node. The addition of the out-of-band control channel on a separate wavelength requires each node to have an additional fixed receiver, thereby increasing cost. Sanchez et al. identified a clear tradeoff between cost and reliability when comparing the four architectures.

4) Clustered SDN Control: Penna et al. BIB016 described partitioning a wavelength-switched optical network into administrative domains or clusters, each controlled by its own SDN controller. The clustering should meet certain performance criteria for the SDN controllers. To permit lightpath establishment across clusters, an inter-cluster lightpath establishment protocol is defined. Each SDN controller provides a lightpath establishment function between any two points in its associated cluster. Each SDN controller also keeps a global view of the network topology. When an SDN controller receives a lightpath establishment request whose computed path traverses other clusters, the SDN controller requests lightpath establishment within those clusters via a WBI. The formation of clusters can be performed such that, for a specified number of clusters, the average distance to each SDN controller is minimized BIB016. The lightpath establishment time decreases exponentially as the number of clusters increases.
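BIB016 does not prescribe a specific clustering method; as one plausible sketch, a simple k-means-style heuristic can group the nodes around a specified number of controller sites so that nodes stay close to their assigned controller:

import random

def cluster_nodes(coords, k, iters=50):
    # Group the nodes (given as (x, y) coordinates) around k controller sites.
    centers = random.sample(coords, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x, y in coords:
            # Assign each node to its nearest controller site.
            i = min(range(k), key=lambda c: (x - centers[c][0])**2
                                            + (y - centers[c][1])**2)
            groups[i].append((x, y))
        # Move each controller site to the centroid of its cluster.
        for i, g in enumerate(groups):
            if g:
                centers[i] = (sum(p[0] for p in g) / len(g),
                              sum(p[1] for p in g) / len(g))
    return centers, groups

# Example: cluster_nodes([(0, 0), (1, 0), (10, 10), (11, 9)], k=2)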
|
V. VIRTUALIZATION
This section surveys control layer mechanisms for virtualizing SDONs. As optical infrastructures typically have high costs, creating multiple VONs over the optical network infrastructure is especially important for access networks, where the costs need to be amortized over relatively few users. Throughout, accounting for the specific optical transmission and signal propagation characteristics is a key challenge for SDON virtualization. Following the classification structure illustrated in Fig. 10, we initially survey virtualization mechanisms for access networks and data center networks, followed by virtualization mechanisms for optical core networks.

A. Access Networks

1) PON Virtualization: A physical PON can be sliced into multiple virtual PONs (VPONs), e.g., by partitioning the OFDMA subcarriers among the VPONs. In addition, virtual MAC queues and processors are isolated to store and process the data from the multiple VPONs, thus creating virtual MAC protocols, as illustrated in Fig. 11(b). The OFDMA transmissions and receptions are processed in a DSP module that is controlled by a central SDN control module. The central SDN control module also controls the different virtual MAC processes in Fig. 11(b), which feed/receive data to/from the DSP module. Additional bandwidth partitioning between VPONs can be achieved through Time Division Multiple Access (TDMA). Simulation studies compared a static allocation of subcarriers to VPONs with a dynamic allocation based on traffic demands. The dynamic allocation achieved significantly higher numbers of supported VPONs on a given network infrastructure, as well as lower packet delays, than the static allocation. A similar strategy for flexibly employing different dynamic bandwidth allocation modules for different groups of ONU queues has been examined in BIB011. Similar OFDMA-based slicing strategies for supporting cloud computing have been examined by Jinno et al. BIB001. Zhou et al. have explored a FlexPON with similar virtualization capabilities. The FlexPON employs OFDM for adaptive transmissions. The isolation of different VPONs is mainly achieved through separate MAC processing. The resulting VPONs allow for flexible port assignments in the ONUs and OLT, which have been demonstrated in a testbed.
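As an illustrative sketch of the dynamic allocation idea (the cited studies' exact algorithms are not reproduced here), the OFDMA subcarriers can be divided among the VPONs in proportion to their current traffic demands:

def allocate_subcarriers(vpon_demands_mbps, total_subcarriers):
    # Demand-proportional allocation of OFDMA subcarriers to VPONs.
    total_demand = sum(vpon_demands_mbps)
    alloc = [int(total_subcarriers * d / total_demand) for d in vpon_demands_mbps]
    # Hand the subcarriers left over from integer rounding to the most loaded VPONs.
    leftover = total_subcarriers - sum(alloc)
    for i in sorted(range(len(alloc)), key=lambda j: vpon_demands_mbps[j],
                    reverse=True)[:leftover]:
        alloc[i] += 1
    return alloc

# Example: allocate_subcarriers([400, 100, 500], 256) -> [102, 25, 129]
# A static allocation would instead fix these shares regardless of demand.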
2) FiWi Access Network Virtualization:

a) Virtualized FiWi Network: Dai et al. BIB006 - BIB004 have examined the virtualization of FiWi networks BIB007, BIB010 to eliminate the differences between the heterogeneous segments (fiber and wireless). The virtualization provides a unified, homogeneous (virtual) view of the FiWi network. The unified network view simplifies flow control and other operational algorithms for traffic transmissions over the heterogeneous network segments. In particular, a virtual resource manager operates the heterogeneous segments. The resource manager permits multiple routes from a given source node to a given destination node. Load balancing across the multiple paths has been examined in BIB005, BIB008. Simulation results indicate that the virtualized FiWi network with load balancing significantly reduces packet delays compared to a conventional FiWi network. An experimental OpenFlow switch testbed of the virtualized FiWi network has been presented in BIB009. Testbed measurements demonstrate seamless networking across the heterogeneous fiber and wireless network segments. Measurements of nodal throughput, link bandwidth utilization, and packet delay indicate performance improvements due to the virtualized FiWi networking approach. Moreover, the FiWi testbed performance is measured for a video service scenario, indicating that the virtualized FiWi networking approach improves the Quality of Experience (QoE) of the video streaming. A mathematical performance model of the virtualized FiWi network has been developed in BIB009.

b) WiMAX-VPON: WiMAX-VPON BIB002, BIB003 is a Layer-2 Virtual Private Network (VPN) design for FiWi access networks. WiMAX-VPON executes a common MAC protocol across the wireless and fiber network segments. A VPN-based admission control mechanism in conjunction with a VPN bandwidth allocation ensures per-flow Quality of Service (QoS). Results from discrete event simulations demonstrate that the proposed WiMAX-VPON achieves favorable performance. Also, Dhaini et al. BIB002, BIB003 demonstrate how the WiMAX-VPON design can be extended to different access network types with polling-based wireless and optical medium access control.

B. Data Centers

1) LIGHTNESS: LIGHTNESS BIB012 is a European research project examining an optical Data Center Network (DCN) capable of providing dynamic, programmable, and highly available DCN connectivity services. Whereas conventional DCNs have rigid control and management platforms, LIGHTNESS strives to introduce flexible control and management through SDN control. The LIGHTNESS architecture comprises server racks that are interconnected through optical packet switches, optical circuit switches, and hybrid Top-of-the-Rack (ToR) switches. The server racks and switches are all controlled and managed by an SDN controller. LIGHTNESS control consists of an SDN controller above the optical physical layer and OpenFlow agents that interact with the optical network and server elements. The SDN controller, in cooperation with the OpenFlow agents, provides a programmable data plane to the virtualization modules. The virtualization creates multiple Virtual Data Centers (VDCs), each with its own virtual computing and memory resources, as well as virtual networking resources, based on a given physical data center. The virtualization is achieved through a VDC planner module and an NFV application that directly interact with the SDN controller. The VDC planner composes the VDC slices through mapping of the VDC requests to the physical SDN-controlled switches and server racks. The VDC slices are monitored by the NFV application, which interfaces with the VDC planner. Based on monitoring data, the NFV application and VDC planner may revise the VDC composition, e.g., transition from optical packet switches to optical circuit switches.
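As a final illustration, the mapping role of a VDC planner can be sketched as a simple admission test: a VDC request is accepted only if all of its virtual machines fit onto the physical racks. A first-fit heuristic is assumed here for illustration; the actual LIGHTNESS VDC planner additionally maps virtual network resources onto the optical packet and circuit switches.

def map_vdc(vm_cpu_demands, rack_cpu_capacities):
    # Return a rack index per VM, or None if the VDC request cannot be mapped.
    free = list(rack_cpu_capacities)         # remaining CPU units per rack
    placement = []
    for demand in vm_cpu_demands:
        for rack, capacity in enumerate(free):
            if capacity >= demand:           # first-fit placement
                free[rack] -= demand
                placement.append(rack)
                break
        else:
            return None                      # reject or revise the VDC composition
    return placement

# Example: map_vdc([8, 4, 16], [16, 16]) -> [0, 0, 1]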
|
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> In the past few years, there has been growing interest in wide area ``All Optical Networks'''' with {\em wavelength division multiplexing\/} (WDM), using {\em wavelength routing}. Due to the huge bandwidth inherent in optical fiber, and the use of WDM to match user and network bandwidths, the wavelength routing architecture is an attractive candidate for future backbone transport networks. A {\em virtual topology\/} over a WDM WAN consists of clear channels between nodes called {\em lightpaths}, with traffic carried from source to destination without electronic switching ``as far as possible'''', but some electronic switching may be performed. Virtual topology design aims at combining the best of optical switching and electronic routing abilities. Designing a virtual topology on a physical network consists of deciding the lightpaths to be set up in terms of their source and destination nodes and wavelength assignment. In this survey we first describe the context and motivations of the virtual topology design problem. We provide a complete formulation of the problem, describe and compare the formulations and theoretical results as well as algorithms, heuristics and some results in the current literature in the field. The reconfigurability issue, which is another attractive characteristic of optical networks, is also discussed and the literature surveyed. This survey is restricted to transport networks with wavelength routing. Similar virtual topology problems also arise in multihop broadcast local area optical networks, but this work does not directly apply to them and corresponding literature is not included in this survey. This survey also relates to the design of a static topology, not one in which individual lightpaths are set up and torn down in response to traffic demand. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> As the operation of our fiber-optic backbone networks migrates from interconnected SONET rings to arbitrary mesh topology, traffic grooming on wavelength-division multiplexing (WDM) mesh networks becomes an extremely important research problem. To address this problem, we propose a new generic graph model for traffic grooming in heterogeneous WDM mesh networks. The novelty of our model is that, by only manipulating the edges of the auxiliary graph created by our model and the weights of these edges, our model can achieve various objectives using different grooming policies, while taking into account various constraints such as transceivers, wavelengths, wavelength-conversion capabilities, and grooming capabilities. Based on the auxiliary graph, we develop an integrated traffic-grooming algorithm (IGABAG) and an integrated grooming procedure (INGPROC) which jointly solve several traffic-grooming subproblems by simply applying the shortest-path computation method. Different grooming policies can be represented by different weight-assignment functions, and the performance of these grooming policies are compared under both nonblocking scenario and blocking scenario. The IGABAG can be applied to both static and dynamic traffic grooming. In static grooming, the traffic-selection scheme is key to achieving good network performance. We propose several traffic-selection schemes based on this model and we evaluate their performance for different network topologies. 
<s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> In WDM optical networks, the physical layer impairments (PLIs) and their significance depend on network type-opaque, translucent, or transparent; the reach-access, metro, or core/long-haul; the number and type of network elements-fiber, wavelengths, amplifiers, switching elements, etc.; and the type of applications-real-time, non-real time, missioncritical, etc. In transparent optical networks, PLIs incurred by non-ideal optical transmission media accumulate along an optical path, and the overall effect determines the feasibility of the lightpaths. If the received signal quality is not within the receiver sensitivity threshold, the receiver may not be able to correctly detect the optical signal and this may result in high bit-error rates. Hence, it is important to understand various PLIs and their effect on optical feasibility, analytical models, and monitoring and mitigation techniques. Introducing optical transparency in the physical layer on one hand leads to a dynamic, flexible optical layer with the possibility of adding intelligence such as optical performance monitoring, fault management, etc. On the other hand, transparency reduces the possibility of client layer interaction with the optical layer at intermediate nodes along the path. This has an impact on network design, planning, control, and management. Hence, it is important to understand the techniques that provide PLI information to the control plane protocols and that use this information efficiently to compute feasible routes and wavelengths. The purpose of this article is to provide a comprehensive survey of various PLIs, their effects, and the available modeling and mitigation techniques. We then present a comprehensive survey of various PLI-aware network design techniques, regenerator placement algorithms, routing and wavelength assignment algorithms, and PLI-aware failure recovery algorithms. Furthermore, we identify several important research issues that need to be addressed to realize dynamically reconfigurable next-generation optical networks. We also argue the need for PLI-aware control plane protocol extensions and present several interesting issues that need to be considered in order for these extensions to be deployed in real-world networks. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> The sustained growth of data traffic volume calls for an introduction of an efficient and scalable transport platform for links of 100 Gb/s and beyond in the future optical network. In this article, after briefly reviewing the existing major technology options, we propose a novel, spectrum- efficient, and scalable optical transport network architecture called SLICE. The SLICE architecture enables sub-wavelength, superwavelength, and multiple-rate data traffic accommodation in a highly spectrum-efficient manner, thereby providing a fractional bandwidth service. Dynamic bandwidth variation of elastic optical paths provides network operators with new business opportunities offering cost-effective and highly available connectivity services through time-dependent bandwidth sharing, energy-efficient network operation, and highly survivable restoration with bandwidth squeezing. 
We also discuss an optical orthogonal frequency-division multiplexing-based flexible-rate transponder and a bandwidth-variable wavelength cross-connect as the enabling technologies of SLICE concept. Finally, we present the performance evaluation and technical challenges that arise in this new network architecture. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Cloud computing platforms are growing from clusters of machines within a data center to networks of data centers with resources spread across the globe. Virtual machine migration within the LAN has changed the scale of resource management from allocating resources on a single server to manipulating pools of resources within a data center. We expect WAN migration to likewise transform the scope of provisioning from a single data center to multiple data centers spread across the country or around the world. In this paper we propose a cloud computing platform linked with a VPN based network infrastructure that provides seamless connectivity between enterprise and data center sites, as well as support for live WAN migration of virtual machines. We describe a set of optimizations that minimize the cost of transferring persistent storage and moving virtual machine memory during migrations over low bandwidth, high latency Internet links. Our evaluation on both a local testbed and across two real data centers demonstrates that these improvements can reduce total migration and pause time by over 30%. During simultaneous migrations of four VMs between Texas and Illinois, CloudNet’s optimizations reduce memory migration time by 65% and lower bandwidth consumption for the storage and memory transfer by 20GB, a 57% reduction. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> With the growth of traffic volume and the emergence of various new applications, future telecom networks are expected to be increasingly heterogeneous with respect to applications supported and underlying technologies employed. To address this heterogeneity, it may be most cost effective to set up different lightpaths at different bit rates in such a backbone telecom mesh network employing optical wavelength-division multiplexing. This approach can be cost effective because low-bit-rate services will need less grooming (i.e., less multiplexing with other low-bit-rate services onto high-capacity wavelengths), while a high-bit-rate service can be accommodated directly on a wavelength itself. Optical networks with mixed line rates (MLRs), e.g., 10/40/100 Gb/s over different wavelength channels, are a new networking paradigm. The unregenerated reach of a lightpath depends on its line rate. So, the assignment of a line rate to a lightpath is a tradeoff between its capacity and transparent reach. Thus, based on their signal-quality constraints (threshold bit error rate), intelligent assignment of line rates to lightpaths can minimize the need for signal regeneration. This constraint on the transparent reach based on threshold signal quality can be relaxed by employing more advanced modulation formats, but with more investment. We propose a design method for MLR optical networks with transceivers employing different modulation formats. Our results demonstrate the tradeoff between a transceiver's cost and its optical reach in overall network design. 
<s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> In this paper, a novel Infrastructure as a Service architecture for future Internet enabled by optical network virtualization is proposed. Central to this architecture is a novel virtual optical network (VON) composition mechanism capable of taking physical layer impairments (PLIs) into account. The impact of PLIs on VON composition is investigated based on both analytical model of PLIs and industrial parameters. Furthermore, the impact of network topology on VON composition is evaluated. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> With the continuing growth in the amount of backbone traffic, improving the cost-effectiveness and ensuring survivability of the underlying optical networks are very important problems facing network service providers today. In this paper, we propose a bandwidth squeezed restoration (BSR) scheme in our recently proposed spectrum-sliced elastic optical path network (SLICE). The proposed BSR takes advantage of elastic bandwidth variation in the optical paths of SLICE. It enables spectrally efficient and highly survivable network recovery for best-effort traffic as well as bandwidth guaranteed traffic, while satisfying the service level specifications required from the client layer networks. We discuss the necessary interworking architectures between the optical path layer and client layer in the BSR in SLICE. We also present a control framework that achieves flexible bandwidth assignment as well as BSR of optical paths in SLICE. Finally, we describe an implementation example of a control plane using generalized multi-protocol label switching (GMPLS). <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Reduction of unnecessary energy consumption is becoming a major concern in wired networking, because of the potential economical benefits and of its expected environmental impact. These issues, usually referred to as "green networking", relate to embedding energy-awareness in the design, in the devices and in the protocols of networks. In this work, we first formulate a more precise definition of the "green" attribute. We furthermore identify a few paradigms that are the key enablers of energy-aware networking research. We then overview the current state of the art and provide a taxonomy of the relevant work, with a special focus on wired networking. At a high level, we identify four branches of green networking research that stem from different observations on the root causes of energy waste, namely (i) Adaptive Link Rate, (ii) Interface proxying, (iii) Energy-aware infrastructures and (iv) Energy-aware applications. In this work, we do not only explore specific proposals pertaining to each of the above branches, but also offer a perspective for research. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> A control plane is a key enabling technique for dynamic and intelligent end-to-end path provisioning in optical networks. In this paper, we present an OpenFlow-based control plane for spectrum sliced elastic optical path networks, called OpenSlice, for dynamic end-to-end path provisioning and IP traffic offloading. 
Experimental demonstration and numerical evaluation show its overall feasibility and efficiency. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> A novel impairment aware optical network virtualization mechanism (Optical FlowVisor) based on software defined networking (OpenFlow-based) paradigm is experimentally demonstrated. End-to-end flow setup time and the performance of virtual switches and OpenFlow controller are reported. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Software Defined Networking (SDN) is a concept which provides the network operators and data centres to flexibly manage their networking equipment using software running on external servers. According to the SDN framework, the control and management of the networks, which is usually implemented in software, is decoupled from the data plane. On the other hand cloud computing materializes the vision of utility computing. Tenants can benefit from on-demand provisioning of networking, storage and compute resources according to a pay-per-use business model. In this work we present the networking issues in IaaS and networking and federation challenges that are currently addressed with existing technologies. We also present innovative software-define networking proposals, which are applied to some of the challenges and could be used in future deployments as efficient solutions. cloud computing networking and the potential contribution of software-defined networking along with some performance evaluation results are presented in this paper. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> New and emerging Internet applications are increasingly becoming high-performance and network-based, relying on optical network and cloud computing services. Due to the accelerated evolution of these applications, the flexibility and efficiency of the underlying optical network infrastructure as well as the cloud computing infrastructure [i.e., data centers (DCs)] become more and more crucial. In order to achieve the required flexibility and efficiency, coordinated provisioning of DCs and optical network interconnecting DCs is essential. In this paper, we address the role of high-performance dynamic optical networks in cloud computing environments. A DC as a service architecture for future cloud computing is proposed. Central to the proposed architecture is the coordinated virtualization of optical network and IT resources of distributed DCs, enabling the composition of virtual infrastructures (VIs). During the composition process of the multiple coexisting but isolated VIs, the unique characteristics of optical networks (e.g., optical layer constraints and impairments) are addressed and taken into account. The proposed VI composition algorithms are evaluated over various network topologies and scenarios. The results provide a set of guidelines for the optical network and DC infrastructure providers to be able to effectively and optimally provision VI services to users and satisfy their requirements. <s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Network virtualization is recognized as an enabling technology for the future Internet. 
It aims to overcome the resistance of the current Internet to architectural change. Application of this technology relies on algorithms that can instantiate virtualized networks on a substrate infrastructure, optimizing the layout for service-relevant metrics. This class of algorithms is commonly known as "Virtual Network Embedding (VNE)" algorithms. This paper presents a survey of current research in the VNE area. Based upon a novel classification scheme for VNE algorithms a taxonomy of current approaches to the VNE problem is provided and opportunities for further research are discussed. <s> BIB014 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Network virtualization can offer more flexibility and better manageability for the future Internet by allowing multiple heterogeneous virtual networks (VN) to coexist on a shared infrastructure provider (InP) network. A major challenge in this respect is the VN embedding problem that deals with the efficient mapping of virtual resources on InP network resources. Previous research focused on heuristic algorithms for the VN embedding problem assuming that the InP network remains operational at all times. In this paper, we remove this assumption by formulating the survivable virtual network embedding (SVNE) problem. We then develop a pro-active, and a hybrid policy heuristic to solve it, and a baseline policy heuristic to compare to. The hybrid policy is based on a fast re-routing strategy and utilizes a pre-reserved quota for backup on each physical link. Our evaluation results show that our proposed heuristics for SVNE outperform the baseline heuristic in terms of long term business profit for the InP, acceptance ratio, bandwidth efficiency, and response time. <s> BIB015 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Optical network virtualization enables network operators to compose and operate multiple independent and application-specific virtual optical networks (VONs) sharing a common physical infrastructure. To achieve this capability, the virtualization mechanism must guarantee isolation between coexisting VONs. In order to satisfy this fundamental requirement, the VON composition mechanism must take into account the impact of physical layer impairments (PLIs). In this paper we propose a new infrastructure as a service architecture utilizing optical network virtualization. We introduce novel PLI-aware VON composition algorithms suitable for single-line-rate (SLR) and mixed-line-rate (MLR) network scenarios. In order to assess the impact of PLIs and guarantee the isolation of multiple coexisting VONs, PLI assessment models for intra- and inter-VON impairments are proposed and adopted in the VON composition process for both SLR and MLR networks. In the SLR networks, the PLI-aware VON composition mechanisms with both heuristic and optimal (MILP) mapping methods are proposed. A replanning strategy is proposed for the MILP mapping method in order to increase its efficiency. In the MLR networks, a new virtual link mapping method suitable for the MLR network scenario and two line rate distribution methods are proposed. With the proposed PLI-aware VON composition methods, multiple coexisting and cost-effective VONs with guaranteed transmission quality can be dynamically composed. 
We evaluate and compare the performance of the proposed VON composition methods through extensive simulation studies with various network scenarios. <s> BIB016 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Abstract Network virtualization can eradicate the ossification of the Internet and stimulate innovation of new network architectures and applications. Optical networks are ideal substrates for provisioning high-bandwidth virtual-network services. In this study, we investigate the problem of network virtualization over both WDM and flexible-grid optical networks by formulating the problems as mixed integer linear programs (MILP). Two heuristics, namely MaxMapping and MinMapping , are developed for each kind of network to solve the problem quickly but suboptimally. Numerical examples show that MinMapping consumes fewer spectrum resources than MaxMapping and performs very close to the optimal results derived by the MILP in both kinds of optical networks, by exploring the opportunities of traffic grooming. Also, it is verified that flexible-grid optical networks can be more spectrum efficient than WDM networks as the substrate for network virtualization. <s> BIB017 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectral-overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to server channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed. <s> BIB018 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Virtualization improves the efficiency of networks by allowing multiple virtual networks to share a single physical network's resources. Next-generation optical transport networks are expected to support virtualization by accommodating multiple virtual networks with different topologies and bit rate requirements. Meanwhile, Optical Orthogonal Frequency-Division Multiplexing (OOFDM) is emerging as a viable technique for efficiently using the optical fiber's bandwidth in an elastic manner. OOFDM partitions the fiber's bandwidth into hundreds or even thousands of OFDM subcarriers that may be allocated to services. 
In this paper, we consider an OOFDM-based optical network and formulate a virtual network mapping problem for both static and dynamic traffic. This problem has several natural applications, such as e-Science, Grid, and cloud computing. The objective for static traffic is to maximize the subcarrier utilization, while minimizing the blocking ratio is the aim for dynamic traffic. Two heuristics are proposed and compared. Simulation results are presented to demonstrate the effectiveness of the proposed approaches. <s> BIB019 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Abstract Network virtualization facilitates the technology advancement via decoupling the traditional Internet Service Providers (ISPs) into the infrastructure provider (InP) and the service provider (SP). Revolutionary technologies hence can be easily employed by the SP and transparently mapped to the physical network managed by the InP after resolving the network embedding problem. In this work, we target on importing resilience to the virtualization context by solving the survivable network embedding ( SNE ) problem. We view the SNE problem from a multi-commodity network flow perspective, and present an Integer Linear Programming (ILP) model for both splittable and non-splittable flow to achieve joint optimal allocation for the working and backup resources. For large-scale problems, we propose two efficient heuristic algorithms for the case with splittable and non-splittable flow, respectively. Our performance evaluation shows that the splittable mapping outperforms the non-splittable mapping in terms of the consumed resources, while the latter bears the advantage of consistent QoS guarantee. <s> BIB020 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> This paper investigates the benefits of dynamic restoration with service relocation in resilient optical clouds. Results from the proposed optimization model show that service availability can be significantly improved by allowing a few service relocations. <s> BIB021 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> We describe bandwidth-on-demand in an evolved multi-layer, SDN-based Cloud Services model. We also show an initial proof-of-concept demonstration of this capability. <s> BIB022 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Optical networks are ideal candidates for the future intraand inter-data center networks, due to their merits of high throughput, low energy consumption, high reliability, and so on. Optical network virtualization is a key technology to realize the deployment of various types of network-based applications on a single optical network infrastructure. Current virtual optical network embedding allocates resources in an exclusive and excessive manner. Typically, the spectrum amount of the virtual optical network’s peak traffic is reserved along the optical paths. It may lead to high user cost and low carrier revenue. In this paper, we propose a dynamic resource pooling and trading mechanism, in which users do no need to reserve the spectrum amount of their peak traffic demands. 
We compare the user cost and carrier revenue of our dynamic mechanism with the traditional exclusive resource allocation, by formulating our mechanism as a Stackelberg game and finding the Subgame Perfect Equilibrium. The numerical results show that our proposed dynamic mechanism can save user cost while increase carrier revenue under certain conditions. <s> BIB023 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> We propose a network-driven transfer mode for cloud operations in a step towards a carrier SDN. Inter-datacenter connectivity is requested in terms of volume of data and completion time. The SDN controller translates and forwards requests to an ABNO controller in charge of a flexgrid network. <s> BIB024 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Mobile computation offloading has been identified as a key enabling technology to overcome the inherent processing power and storage constraints of mobile end devices. To satisfy the low-latency requirements of content-rich mobile applications, existing mobile cloud computing solutions allow mobile devices to access the required resources by accessing a nearby resource-rich cloudlet, suffering increased capital and operational expenditures. To address this issue, in this paper we propose an infrastructure and architectural approach based on the orchestrated planning and operation of Optical Data Center networks and Wireless Access networks. To this end, a novel formulation based on a multi-objective Non Linear Programming model is presented that considers energy efficient virtual infrastructure planning over the converged wireless, optical network interconnecting DCs with mobile devices, taking a holistic view of the infrastructure. Our modelling results identify trends and trade-offs related to end-to-end service delay, resource requirements and energy consumption levels of the infrastructure across the various technology domains. <s> BIB025 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Based on the concept of infrastructure as a service, optical network virtualization can facilitate the sharing of physical infrastructure among different users and applications. In this paper, we design algorithms for both transparent and opaque virtual optical network embedding (VONE) over flexible-grid elastic optical networks. For transparent VONE, we first formulate an integer linear programming (ILP) model that leverages the all-or-nothing multi-commodity flow in graphs. Then, to consider the continuity and consecutiveness of substrate fiber links' (SFLs') optical spectra, we propose a layered-auxiliary-graph (LAG) approach that decomposes the physical infrastructure into several layered graphs according to the bandwidth requirement of a virtual optical network request. With LAG, we design two heuristic algorithms: one applies LAG to achieve integrated routing and spectrum assignment in link mapping (i.e., local resource capacity (LRC)-layered shortest-path routing LaSP), while the other realizes coordinated node and link mapping using LAG (i.e., layered local resource capacity(LaLRC)-LaSP). The simulation results from three different substrate topologies demonstrate that LaLRC-LaSP achieves better blocking performance than LRC-LaSP and an existing benchmark algorithm. 
For the opaque VONE, an ILP model is also formulated. We then design an LRC metric that considers the spectrum consecutiveness of SFLs. With this metric, a novel heuristic for opaque VONE, consecutiveness-aware LRC-K shortest-path-first fit (CaLRC-KSP-FF), is proposed. Simulation results show that compared with the existing algorithms, CaLRC-KSP-FF can reduce the request blocking probability significantly. <s> BIB026 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> We present a flexible virtual optical network provisioning procedure for distance-adaptive flex-grid optical networks. Simulations show a ~3 times increase in effective network capacity by leveraging the combined effect of flexible node mapping and distance-adaptive modulation. <s> BIB027 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> In this paper, we study the problem of survivable impairment-constrained virtual optical network mapping in flexible-grid optical networks (SIC-VONM). The objective is to minimize the total cost of working and backup resources, including transponders, regenerators, and shared infrastructure, for a given set of virtual optical networks, which can survive single link failures. We first provide the problem definition of SIC-VONM, and then formulate the problem as an integer linear program (ILP). We also develop a novel heuristic algorithm, together with a baseline algorithm and a lower bound for comparison. Numerical results show that our proposed heuristic achieves results that are very close to those of the ILP for small-scale problems and that our proposed heuristic can solve large-scale problems very well. <s> BIB028 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> An emerging use case in software-defined networking is to provide efficient mapping of multiple virtual infrastructures (VIs) simultaneously over the same physical substrate (PS), which can increase the resource utilization of the PS, thus improving its provider's revenue. In this paper, for the first time, we investigate a practical and yet theoretically challenging issue related to dynamic VI mapping in software-defined elastic optical networks while considering the presence of possible upgrades of the VIs and the optical layer constraints, which has not been addressed in any of the existing studies. More specifically, we investigate the following aspects: (1) Which revenue models are appropriate? (2) How to map a new VI request or to upgrade an existing VI to maximize the PS provider's revenue? In particular, we study two different revenue models in terms of the incremental pricing policy and the binding pricing policy and propose a number of efficient heuristics to solve the upgrade-aware VI mapping (U-VIM) problem. We also perform comprehensive performance evaluations in different scenarios, and the results show that plan-ahead is a desirable strategy when conducting VI mapping in the presence of VI upgrades.
A “Follow the Sun, Follow the Wind” strategy is proposed for the IP over WDM network to periodically reconfigure the lightpath virtual topology to enable more lightpaths to start or end at nodes where maximum renewable energy is available. We develop a mixed integer linear programming model to design new lightpath virtual topologies. Since the computational complexity of the optimization model is excessive, we also propose a simple but efficient heuristic algorithm to tackle this. Our results indicate that a network operated in this way can significantly reduce non-renewable energy consumption as illustrated in the example network scenarios considered. <s> BIB030 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> This paper presents a novel policy-based mechanism to provide context-aware network-wide policies to Software Defined Networking (SDN) applications, implemented with a policy flow based on property graph models. The proposal has been validated in a transport SDN controller, supporting optical network virtualization via slicing of physical resources such as nodes, links and wavelengths, through use case testbed demonstrations of policy enforcement for SDN applications, including optical equalization and virtual optical network control. Additionally, the policy engine incorporates a simulation-assisted pre-setting mechanism for local policy decisions in case of problems in communication with the controller. <s> BIB031 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> In recent years, OFDM has been the focus of extensive research efforts in optical transmission and networking, initially as a means to overcome physical impairments in optical communications. However, unlike, say, in wireless LANs or xDSL systems where OFDM is deployed as a transmission technology in a single link, in optical networks it is being considered as the technology underlying the novel elastic network paradigm. Consequently, network-wide spectrum management arises as the key challenge to be addressed in network design and control. In this work, we review and classify a range of spectrum management techniques for elastic optical networks, including offline and online routing and spectrum assignment (RSA), distance-adaptive RSA, fragmentation-aware RSA, traffic grooming, and survivability. <s> BIB032 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Cloud computing enables users to receive infrastructure/platform/software as a Service (XaaS) via a shared pool of resources in a pay-as-you-go fashion. Data centers, as the hosts of physical servers, play a key role in the delivery of cloud services. Therefore, interconnection of data centers over a backbone network is one of the major challenges affecting the performance of the cloud system, as well as the operational expenditures of service providers. This article proposes resilient design of a cloud backbone through demand profile-based network virtualization where the data centers are located at the core nodes of an IP over elastic optical network. Three approaches, MOPIC, MRPIC, and RS-MOPIC, are proposed. MOPIC aims to design the cloud backbone with minimum outage probability per demand. MRPIC aims to minimize the usage of network resources while routing the cloud demands toward data centers. 
RS-MOPIC is a hybrid of both approaches aiming to reduce network resource usage while minimizing outage probability. Through simulations of a small-scale cloud scenario, we show that incorporation of manycast provisioning ensures a significantly low outage probability on the order of 10^-7. Furthermore, integration of a resource saving objective into MOPIC can make a compromise between network resource consumption and outage probability of the workloads submitted to the cloud. <s> BIB033 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Datacenters (DCs) are deployed on a large scale to support the ever-increasing demand for data processing to support various applications. The energy consumption of DCs becomes a critical issue. Powering DCs with renewable energy can effectively reduce the brown energy consumption and thus alleviates the energy consumption problem. Owing to geographical deployments of DCs, the renewable energy generation and the data processing demands usually vary in different DCs. Migrating virtual machines (VMs) among DCs according to the availability of renewable energy helps match the energy demands and the renewable energy generation in DCs, and thus maximizes the utilization of renewable energy. Since migrating VMs incurs additional traffic in the network, the VM migration is constrained by the network capacity. The inter-datacenter (inter-DC) VM migration with network capacity constraints is an NP-hard problem. In this paper, we propose two heuristic algorithms that approximate the optimal VM migration solution. Through extensive simulations, we show that the proposed algorithms, by migrating VMs among DCs, can reduce up to 31% of brown energy consumption. <s> BIB034 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Multi-tenancy is a key feature of modern data centers. It allows for the existence of multiple independent virtual infrastructures, called virtual slices, on top of the same physical infrastructure, each of them specially tailored to the tenants' needs. In such a scenario, an optimal mapping of the virtual slices plays a capital role toward an efficient utilization of the data center network resources, potentially saving costs for the data center owner. However, due to the increasing trend of bringing optics to data center networks, specific virtual slice mapping mechanisms accounting for the particularities of the optical medium (e.g., wavelength continuity constraint) have to be investigated. For this, we present an integer linear programming (ILP) model for optimally mapping a set of virtual slices from different tenants in a hybrid optical circuit switching (OCS)/optical packet switching (OPS) data center network with the aim to minimize the necessary optical transponders to be equipped in the network. Additionally, we also present a lightweight heuristic for the cases in which the ILP model scalability is compromised. The benefits of the proposals are highlighted by benchmarking them against a pure OCS solution through extensive simulations. <s> BIB035 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Efficiently mapping multiple virtual infrastructures (VIs) onto the same physical substrate with survivability is one of the fundamental challenges related to network virtualization in transport software-defined networks (T-SDNs).
In this paper, we study the survivable VI mapping problem in T-SDNs with the objective of minimizing the VI request blocking probability. In particular, we address the subproblems of modulation selection and spectrum allocation in the process of provisioning optical channels to support virtual links, taking into consideration the optical layer constraints such as the transmission reach constraint and the spectral continuity constraint. We propose an auxiliary-graph-based algorithm, namely, parallel VI mapping (PAR), to offer dedicated protection against any single physical node or link failure. More specifically, the PAR algorithm can jointly optimize the assignments of mapping the primary and backup VIs by adopting the modified Suurballe algorithm to find the shortest pair of node-disjoint paths for each virtual link. Through extensive simulations, we demonstrate that the PAR algorithm can significantly reduce the VI request blocking probability and improve the traffic-carrying capacity of the networks, compared to the baseline sequential VI mapping approaches. <s> BIB036 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> In this work, we study the availability-aware survivable virtual network embedding (A-SVNE) problem in optical interdatacenter networks that use wavelength-division multiplexing. With A-SVNE, we try to satisfy the availability requirement of each virtual component (i.e., a virtual link or a virtual node) in a virtual network. We first analyze the availability of a virtual component based on the availabilities of the substrate link(s) and node(s). Then, we formulate an integer linear programming model for the A-SVNE problem and propose several time-efficient heuristics. Specifically, we design two node mapping strategies: one is sequential selection using efficient weights defined by the availability information, while the other uses auxiliary graphs to transform the problem into a classical problem in graph theory, i.e., the maximum-weight maximum clique. Finally, we use extensive simulations to compare the proposed A-SVNE algorithms with existing ones in terms of the blocking probability, availability gap, and penalty due to service-level agreement violations, and the results indicate that our algorithms perform better. <s> BIB037 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> We consider survivable virtual network mapping in a multi-domain optical network with the objective of minimizing total network link cost for a given virtual traffic demand that is embedded over the multi-domain optical network. The survivability constraint guarantees the connectivity of virtual nodes after any single optical link failure. We propose a hierarchical software-defined networking (H-SDN)-based control plane to exchange information between domains, and we propose heuristic approaches for mapping virtual links onto multi-domain optical links using partition and contraction mechanisms (PCM) on the virtual topology. We show that the proposed PCM technique can reduce time complexity compared to traditional cut set graph theory approaches. Numerical results show that our heuristic approach is effective in reducing total network cost and increasing the successful mapping rate. <s> BIB038 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. 
Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Network virtualization is meant to improve the efficiency of network infrastructure by sharing a physical substrate network among multiple virtual networks. Virtual network embedding (VNE) determines how to map a virtual network request onto a physical substrate. In this paper, we first overview three possible underlying substrates for interdatacenter networks, namely an electrical-layer-based substrate, an optical-layer-based substrate, and a multilayer-based (optical and electrical layer) substrate. Then, the corresponding VNE problems for the three physical substrates are discussed. The work presented focuses on VNE over a multilayer optical network; a key problem is how to map a virtual network request onto either an electrical or optical substrate. We propose an auxiliary graph model to address multilayer virtual network mapping in a dynamic traffic scenario. Different node-mapping and link-mapping policies can be achieved by adjusting weights of the edges of the auxiliary graph, which depends on the purposes of the network operators. <s> BIB039 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Network virtualization is widely considered to be one of the main paradigms for the future Internet architecture as it provides a number of advantages including scalability, on demand allocation of network resources, and the promise of efficient use of network resources. In this paper, we propose an energy efficient virtual network embedding (EEVNE) approach for cloud computing networks, where power savings are introduced by consolidating resources in the network and data centers. We model our approach in an IP over WDM network using mixed integer linear programming (MILP). The performance of the EEVNE approach is compared with two approaches from the literature: the bandwidth cost approach (CostVNE) and the energy aware approach (VNE-EA). The CostVNE approach optimizes the use of available bandwidth, while the VNE-EA approach minimizes the power consumption by reducing the number of activated nodes and links without taking into account the granular power consumption of the data centers and the different network devices. The results show that the EEVNE model achieves a maximum power saving of 60% (average 20%) compared to the CostVNE model under an energy inefficient data center power profile. We develop a heuristic, real-time energy optimized VNE (REOViNE), with power savings approaching those of the EEVNE model. We also compare the different approaches adopting an energy efficient data center power profile. Furthermore, we study the impact of delay and node location constraints on the energy efficiency of virtual network embedding. We also show how VNE can impact the design of optimally located data centers for minimal power consumption in cloud networks. Finally, we examine the power savings and spectral efficiency benefits that VNE offers in optical orthogonal division multiplexing networks. <s> BIB040 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> The sliceable optical transponder, which can transmit/receive multiple optical flows, was recently proposed to improve a transponder's flexibility. The upper-layer traffic can be offloaded onto an optical layer with “just-enough transponder” resources. Traffic grooming evolves as the optical transponder shifts from fixed to sliceable. 
“Optical-layer grooming” enabled by a sliceable optical transponder can reduce the number of power-consuming components (e.g., IP ports and optical transponders). In this paper, energy-efficient traffic grooming in IP-over-elastic optical networks with a sliceable optical transponder is studied. Three types of bandwidth-variable transponders (BVTs), classified by their sliceability, namely, non-sliceable BVTs, fully sliceable BVTs, and partially sliceable BVTs, are investigated. For each transponder, we develop energy-minimized traffic grooming integer linear programming (ILP) models and corresponding heuristic algorithms. Comprehensive comparisons are performed among the three types of transponders, and two interesting observations emerge. First, we find that significant power savings can be achieved by using a sliceable optical transponder. Second, we find that power savings do not keep improving linearly as transponder sliceability increases, and traditional electrical-layer grooming is still required to work together with optical-layer grooming to reduce power consumption. <s> BIB041 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Flexgrid technology is now considered to be a promising solution for future high-speed network design. In this context, we need a tutorial that covers the key aspects of elastic optical networks. This tutorial paper starts with a brief introduction of the elastic optical network and its unique characteristics. The paper then moves to the architecture of the elastic optical network and its operation principle. To complete the discussion of network architecture, this paper focuses on the different node architectures, and compares their performance in terms of scalability and flexibility. Thereafter, this paper reviews and classifies routing and spectrum allocation (RSA) approaches including their pros and cons. Furthermore, various aspects, namely, fragmentation, modulation, quality-of-transmission, traffic grooming, survivability, energy saving, and networking cost related to RSA, are presented. Finally, the paper explores the experimental demonstrations that have tested the functionality of the elastic optical network, and follows that with the research challenges and open issues posed by flexible networks. <s> BIB042 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> This paper addresses the minimum network cost problem for survivable virtual optical network mapping in flexible bandwidth optical networks. For each virtual link, we provide dedicated-path protection, i.e., primary path and backup path, to guarantee high survivability on the physical network. To simplify the virtual links mapping, an extended auxiliary graph is constructed by coordinating the virtual optical network and the physical network. We develop an integer linear program (ILP) model, the LBSD (the largest bandwidth requirement (LB) of virtual links versus the shortest distance (SD)) mapping approach, and the LCSD (the largest computing (LC) resources requirement versus the shortest distance) mapping approach to minimize the network cost for a given set of VONs. For comparison, we also introduce one baseline mapping approach, named LCLC (the largest computing resources requirement versus the largest computing resources (LC) provisioning), and the lower bound.
Simulation results show that, compared to the LCLC mapping approach, the ILP model and the LBSD and LCSD mapping approaches not only solve the problem of minimizing the total network cost but also guarantee that the spectrum usage and the number of regenerators are minimized. The ILP model and the LBSD mapping approach come very close to a lower bound of the network cost and match a lower bound of the spectrum usage in both the 6-node network and the 14-node network. As a result, our proposed LBSD mapping approach can efficiently reduce the network cost, spectrum usage, and the number of regenerators, coming near the optimal solutions of the ILP model. <s> BIB043 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> With network virtualization, the physical infrastructure can be partitioned into multiple parallel virtual networks for sharing purposes. However, different transport technologies or quality of service (QoS) levels may impact both the requested amount of resources and the characteristics of different virtual instances that can be built on top of a single physical infrastructure. In this paper we propose a novel mixed integer linear programming (MILP) formulation for different schemes of protection in scenarios where multiple virtual topologies run over an elastic optical network. The proposed MILP formulation uses the concept of bandwidth squeezing to guarantee a minimum bandwidth for surviving virtual topologies. It achieves a high level of survivability for traffic that is subject to a different committed service profile for each virtual topology. Case studies are carried out in order to analyze the basic properties of the formulation in small networks, and three heuristics are proposed for larger networks. <s> BIB044 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Data Centers 1) LIGHTNESS: LIGHTNESS [296]- <s> Considering the virtual network infrastructure as a service, optical network virtualization can facilitate the physical infrastructure sharing among different clients and applications that require optical network resources. Obviously, mapping multiple virtual network infrastructures onto the same physical network infrastructure is one of the greatest challenges related to optical network virtualization in flexible bandwidth optical networks. In order to efficiently address the virtual optical network (VON) provisioning problem, we can first obtain the virtual links' order and the virtual nodes' order based on their characteristics, such as the bandwidth requirement on virtual links and computing resources on virtual nodes. We then preconfigure the primary and backup paths for all node-pairs in the physical optical network, and the auxiliary graph is constructed from the preconfigured primary and backup paths. Two VON mapping approaches, namely the power-aware virtual-links mapping (PVLM) approach and the power-aware virtual-nodes mapping (PVNM) approach, are developed to reduce power consumption for a given set of VONs in flexible bandwidth optical networks with distributed data centers. Simulation results show that our proposed PVLM approach can greatly reduce power consumption and save spectrum resources compared to the PVNM approach for the single-line rate and the mixed-line rate in flexible bandwidth optical networks with distributed data centers.
|
2) Cloudnets: Cloudnets BIB012 - BIB005 exploit network virtualization for pooling resources among distributed data centers. Cloudnets support the migration of virtual machines across networks to achieve resource pooling. Cloudnet designs can be supported through optical networks BIB013. Kantarci and Mouftah BIB033 have examined designs for a virtual cloud backbone network that interconnects distributed backbone nodes, whereby each backbone node is associated with one data center. A network resource manager periodically executes a virtualization algorithm to accommodate traffic demands through appropriate resource provisioning. Kantarci and Mouftah BIB033 have developed and evaluated algorithms for three provisioning objectives: minimize the outage probability of the cloud, minimize the resource provisioning, and minimize a tradeoff between resource saving and cloud outage probability. The range of performance characteristics for outage probability, resource consumption, and delays of the provisioning approaches has been evaluated through simulations. The outage probability of optical cloud networks has been reduced in BIB021 through optimized service relocations. Several complementary aspects of optical cloudnet networks have recently been investigated. A multilayer network architecture with an SDN-based network management structure for cloud services has been developed in BIB022. A dynamic variation of the sharing of optical network resources for intra- and inter-data center networking has been examined in BIB023. The dynamic sharing does not statically assign optical network resources to virtual optical networks; instead, the network resources are dynamically assigned according to the time-varying traffic demands. An SDN-based optical transport mode for data center traffic has been explored in BIB024. Virtual machine migration mechanisms that take the characteristics of renewable energy into account have been examined in BIB034, while general energy efficiency mechanisms for optically networked cloud computing resources have been examined in BIB025. C. Metro/Core Networks 1) Virtual Optical Network Embedding: Virtual optical network embedding seeks to map requests for virtual optical networks to a given physical optical network infrastructure (substrate). A virtual optical network consists of both a set of virtual nodes and a set of interconnecting links that need to be mapped to the network substrate. This mapping of virtual networks consisting of both network nodes and links is fundamentally different from the extensively studied virtual topology design for optical wavelength routed networks BIB001, which only considered network links (and did not map nodes). Virtual network embedding of both nodes and links has already been extensively studied in general network graphs BIB014, BIB015. However, virtual optical network embedding requires additional constraints to account for the special optical transmission characteristics, such as the wavelength continuity constraint and the transmission reach constraint. Consequently, several studies have begun to examine virtual network embedding algorithms specifically for optical networks. a) Impairment-Aware Embedding: Peng et al. BIB007, BIB016 have modeled the optical transmission impairments to facilitate the embedding of isolated VONs in a given underlying physical network infrastructure. Specifically, they model the physical (photonic) layer impairments of both single-line rate and mixed-line rates BIB006. Peng et al.
BIB016 consider intra-VON impairments from Amplified Spontaneous Emission (ASE) and inter-VON impairments from non-linear impairments and four wave mixing. These impairments are captured in a Q-factor BIB003, which is considered in the mapping of virtual links to the underlying physical link resources, such as wavelengths and wavebands. b) Embedding on WDM and Flexi-grid Networks: Zhang et al. BIB017 have considered the embedding of overall virtual networks encompassing both virtual nodes and virtual links. Zhang et al. have considered both conventional WDM networks as well as flexi-grid networks. For each network type, they formulate the virtual node and virtual link mapping as a mixed integer linear program. Since the mixed integer linear program is NP-hard, heuristic solution approaches are developed. Specifically, the overall embedding (mapping) problem is divided into a node mapping problem and a link mapping problem. The node mapping problem is heuristically solved through a greedy MinMapping strategy that maps the largest computing resource demand to the node with the minimum remaining computing capacity (a complementary MaxMapping strategy that maps the largest demand to the node with the maximum remaining capacity is also considered). After the node mapping, the link mapping problem is solved with an extended grooming graph BIB002. Comparisons for a small network indicate that the MinMapping strategy approaches the optimal mixed integer linear program solution quite closely, whereas the MaxMapping strategy gives poor results. The evaluations also indicate that the flexi-grid network requires only about half the spectrum compared to an equivalent WDM network for several evaluation scenarios. The embedding of virtual optical networks in the context of elastic flexi-grid optical networking has been further examined in several studies. For a flexi-grid network based on OFDM BIB018, Zhao et al. BIB019 have compared a greedy heuristic that maps requests in decreasing order of the required resources with an arbitrary first-fit benchmark. Gong et al. BIB026 have considered flexi-grid networks with an overall strategy of node mapping followed by link mapping similar to Zhang et al. BIB017. Based on the local resource constraints at each node, Gong et al. have formed a layered auxiliary graph for the node mapping. The link mapping is then solved with a shortest path routing approach. Wang et al. BIB027 have examined an embedding approach based on candidate mapping patterns that could provide the requested resources. The VON is then embedded according to a shortest path routing. Pages et al. BIB035 have considered embeddings that minimize the required optical transponders. c) Survivable Embedding: Survivability of a virtual optical network, i.e., its continued operation in the face of physical node or link failures, is important for many applications that require dependable service. Hu et al. BIB020 developed an embedding that can survive the failure of a single physical node. Ye et al. BIB036 have examined the embedding of virtual optical networks so as to survive the failure of a single physical node or a physical link. Specifically, Ye et al. ensure that each virtual node request is mapped to a primary physical node as well as a distinct backup physical node. Similarly, each virtual link is mapped to a primary physical route as well as a node-disjoint backup physical route.
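As a concrete illustration of the node-disjoint primary/backup routing that underlies such survivable embeddings, consider the following minimal Python sketch. It is not the modified Suurballe algorithm employed in BIB036, but a simple two-step heuristic over a hypothetical substrate graph: the primary route is computed first, and the backup route is then searched in the substrate with the primary route's intermediate nodes removed:

```python
# Minimal sketch of node-disjoint primary/backup routing for a virtual link.
# This is a two-step heuristic over a hypothetical substrate graph, not the
# modified Suurballe algorithm of the PAR approach discussed below.
import networkx as nx

def disjoint_path_pair(substrate, src, dst):
    """Return a (primary, backup) pair of node-disjoint paths, or None."""
    primary = nx.shortest_path(substrate, src, dst, weight="weight")
    pruned = substrate.copy()
    pruned.remove_nodes_from(primary[1:-1])  # keep only src and dst
    try:
        backup = nx.shortest_path(pruned, src, dst, weight="weight")
    except nx.NetworkXNoPath:
        return None  # greedy pruning may fail where a joint search succeeds
    return primary, backup

g = nx.Graph()
g.add_weighted_edges_from([("A", "B", 1), ("B", "D", 1), ("A", "C", 2),
                           ("C", "D", 2), ("B", "C", 1)])
print(disjoint_path_pair(g, "A", "D"))  # (['A', 'B', 'D'], ['A', 'C', 'D'])
```

Note that this greedy two-step search can fail, or return a costlier pair, in topologies where a joint optimization still finds a node-disjoint pair; this is precisely why the study discussed next adopts a modified Suurballe algorithm that jointly optimizes the shortest pair of node-disjoint paths.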
Ye et al. mathematically formulate an optimization problem for the survivable embedding and then propose a Parallel Virtual Infrastructure (VI) Mapping (PAR) algorithm. The PAR algorithm finds distinct candidate physical nodes (with the highest remaining resources) for each virtual node request. The candidate physical nodes are then jointly examined with pairs of shortest node-disjoint paths. The evaluations in BIB036 indicate that the parallel PAR algorithm reduces the blocking probabilities of virtual network requests by 5-20% compared to a sequential algorithm benchmark. A limitation of the survivable embedding BIB036 is that it protects only from a single link or node failure. As the optical infrastructure is expected to penetrate deeper into the access network deployments (e.g., mobile backhaul), it will become necessary to consider multiple failure points. Similar survivable network embedding algorithms that employ node-disjoint shortest paths in conjunction with specific cost metrics for node mappings have been investigated by Xie et al. BIB028 and Chen et al. BIB043. Jiang et al. BIB037 have examined a solution variant based on maximum-weight maximum clique formation. The studies BIB008 - BIB044 have examined so-called bandwidth squeezed restoration for virtual topologies. With bandwidth squeezing, the backup path bandwidths of the surviving virtual topologies are generally lower than the bandwidths on the working paths. Survivable virtual topology design in the context of multidomain optical networks has been studied by Hong et al. BIB038. Hong et al. focused on minimizing the total network link cost for a given virtual traffic demand. A heuristic algorithm for partition and contraction mechanisms based on cut set theory has been proposed for the mapping of virtual links onto multidomain optical networks. A hierarchical SDN control plane is split between local controllers that manage individual domains and a global controller for the overall management. The partition and contraction mechanisms abstract inter- and intra-domain information as a method of contraction. Survivability conditions are ensured individually for the inter- and intra-domains such that survivability is met for the entire network. The evaluations in BIB038 demonstrate successful virtual network mapping at the scale required by commercial Internet service providers and infrastructure providers. d) Dynamic Embedding: The embedding approaches surveyed so far have mainly focused on the offline embedding of a static set of virtual network requests. However, during ongoing network operation, the dynamic embedding of modifications (upgrades) of existing virtual networks and the addition of new virtual networks are important. Ye et al. BIB029 have examined a variety of strategies for upgrading existing virtual topologies. Ye et al. have considered both scenarios without advance planning (knowledge) of virtual network upgrades and scenarios that plan ahead for possible (anticipated) upgrades. For both scenarios, a divide-and-conquer strategy and an integrate-and-cooperate strategy are examined. The divide-and-conquer strategy sequentially maps all the virtual nodes and then the virtual links, as sketched below. In contrast, the integrate-and-cooperate strategy jointly considers the virtual node and virtual link mappings. Without advance planning, these strategies are applied sequentially, as the virtual network requests arrive over time, whereas, with planning, the initial and upgrade requests are jointly considered. Evaluation results indicate that the integrate-and-cooperate strategy slightly increases a revenue measure and the request acceptance ratio compared to the divide-and-conquer strategy. The results also indicate that planning has the potential to substantially increase the revenue and acceptance ratio.
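The sequential two-stage pattern, greedy node mapping followed by link mapping, recurs throughout the embedding studies surveyed above, from the MinMapping strategy of Zhang et al. BIB017 to the divide-and-conquer upgrade strategy. The following minimal Python sketch captures this pattern under simplifying assumptions (a single CPU capacity value per substrate node, at most one virtual node per substrate node, and no spectrum assignment); it is an illustration of the general pattern, not a reimplementation of any of the cited algorithms:

```python
# Minimal sketch of the sequential (divide-and-conquer) embedding pattern:
# greedy node mapping in the spirit of the MinMapping rule, followed by
# shortest-path link mapping. All data structures are simplified examples.
import networkx as nx

def embed_von(substrate, node_cap, vnode_demand, vlinks):
    """node_cap: {substrate node: CPU}; vnode_demand: {virtual node: CPU};
    vlinks: [(virtual node, virtual node)]. Returns (node map, link map)."""
    mapping, residual = {}, dict(node_cap)
    # Stage 1 (node mapping): place the largest demands first, each onto the
    # feasible substrate node with the minimum remaining capacity.
    for vn, demand in sorted(vnode_demand.items(), key=lambda kv: -kv[1]):
        feasible = [n for n, r in residual.items()
                    if r >= demand and n not in mapping.values()]
        if not feasible:
            return None  # request blocked: no feasible substrate node
        host = min(feasible, key=lambda n: residual[n])
        mapping[vn] = host
        residual[host] -= demand
    # Stage 2 (link mapping): route each virtual link on a shortest substrate
    # path (spectrum/wavelength assignment is omitted in this sketch).
    link_map = {(a, b): nx.shortest_path(substrate, mapping[a], mapping[b])
                for (a, b) in vlinks}
    return mapping, link_map
```

An integrate-and-cooperate variant would instead evaluate candidate node placements jointly with the substrate paths (and spectrum) they imply, trading higher computational effort for the slightly better acceptance ratios reported above.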
In a related study, Zhang et al. BIB039 have examined embedding algorithms for virtual network requests that arrive dynamically at a multilayer network consisting of electrical and optical network substrates. e) Energy-efficient Embedding: Motivated by the growing importance of green networking and information technology BIB009, a few studies have begun to consider the energy efficiency of the embedded virtual optical networks. Nonde et al. BIB040 have developed and evaluated mechanisms for embedding virtual cloud networks so as to minimize the overall power consumption, i.e., the aggregate of the power consumption for communication and computing (in the data centers). Nonde et al. have incorporated the power consumption of the communication components, such as transponders and optical switches, as well as the power consumption characteristics of data center servers into a mathematical power minimization model. Nonde et al. then develop a real-time heuristic for energy-optimized virtual network embedding. The heuristic strives to consolidate computing requests in the physical nodes with the least residual computing capacity. This consolidation strategy is motivated by the typical power consumption characteristic of a compute server that has a significant idle power consumption and then grows linearly with increasing computing load; thus, a fully loaded server is more energy-efficient than a lightly loaded server (see the sketch below). The bandwidth demands are then routed between the nodes according to a minimum hop algorithm. The energy-optimized embedding is compared with a cost-optimized embedding that only seeks to minimize the number of utilized wavelength channels. The evaluation results in BIB040 indicate that the energy-optimized embedding significantly reduces the overall energy consumption for low to moderate loads on the physical infrastructure; for high loads, when all physical resources need to be utilized, there are no significant savings. Across the entire load range, the energy-optimized embedding saves on average 20% energy compared to the benchmark minimizing the wavelength channels. Chen BIB045 has examined a similar energy-efficient virtual optical network embedding that considers primary and link-disjoint backup paths, similar to the survivable embeddings in Section V-C1c. More specifically, virtual link requests are mapped in decreasing order of their bandwidth requirements to the shortest physical transmission distance paths, i.e., the highest virtual bandwidth demands are allocated to the shortest physical paths. Evaluations indicate that this link mapping approach roughly halves the power consumption compared to a random node mapping benchmark. Further studies focused on energy savings have examined virtual link embeddings that maximize the usage of nodes with renewable energy BIB030 and the traffic grooming onto sliceable BVTs BIB041.
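The following minimal Python sketch illustrates the linear server power model and the consolidation rule underlying such energy-optimized embedding heuristics; the idle power, peak power, and capacity values are hypothetical placeholders, not the parameters used in BIB040:

```python
# Minimal sketch of the linear server power model and the consolidation rule
# of energy-optimized embedding; all numerical values are hypothetical.
P_IDLE, P_PEAK, CAPACITY = 150.0, 300.0, 100.0  # Watts, Watts, CPU units

def server_power(load):
    """Significant idle power plus a linearly load-dependent component."""
    return 0.0 if load == 0 else P_IDLE + (P_PEAK - P_IDLE) * (load / CAPACITY)

def consolidate(residual, demand):
    """Place a computing demand on the feasible server with the least
    residual capacity, keeping lightly loaded servers free to switch off."""
    feasible = {s: r for s, r in residual.items() if r >= demand}
    if not feasible:
        return None  # request blocked
    host = min(feasible, key=feasible.get)
    residual[host] -= demand
    return host

# Two half-loaded servers draw 2 x 225 W = 450 W, whereas consolidating the
# same total load onto one fully loaded server draws only 300 W:
print(2 * server_power(50), server_power(100) + server_power(0))
```

This simple arithmetic also explains why the savings vanish at high loads: once all servers must be active anyway, consolidation no longer allows any server to be switched off.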
2) Hypervisors for VONs: The operation of VONs over a given underlying physical (substrate) optical network requires an intermediate hypervisor. The hypervisor presents the physical network as multiple isolated VONs to the corresponding VON controllers (with typically one VON controller per VON). In turn, the hypervisor intercepts the control messages issued by a VON controller and controls the physical network to effect the control actions desired by the VON controller for the corresponding VON. Towards the development of an optical network hypervisor, Siquera et al. BIB031 have developed an SDN-based controller for an optical transport architecture. The controller implements a virtualized GMPLS control plane with offloading to facilitate the implementation of hypervisor functionalities, namely the creation of optical virtual private networks, optical network slicing, and optical interface management. A major contribution of Siquera et al. BIB031 is a Transport Network Operating System (T-NOS), which abstracts the physical layer for the controller and could be utilized for hypervisor functionalities. OpenSlice BIB010 is a comprehensive OpenFlow-based hypervisor that creates VONs over underlying elastic optical networks BIB042, BIB032. OpenSlice dynamically provisions end-to-end paths and offloads IP traffic by slicing the optical communications spectrum. The paths are set up through a handshake protocol that fills in cross-connection table entries. The control messages for slicing the optical communications spectrum, such as slot width and modulation format, are carried in extended OpenFlow protocol messages. OpenSlice relies on special distributed network elements, namely bandwidth-variable wavelength cross-connects BIB004 and multiflow optical transponders that have been extended for control through the extended OpenFlow messages. The OpenSlice evaluation includes an experimental demonstration. The evaluation results include path provisioning latency comparisons with a GMPLS-based control plane and indicate that OpenFlow outperforms GMPLS for paths with more than three hops. OpenSlice extensions and refinements to multilayer and multidomain networks are surveyed in Section VII. An alternate centralized Optical FlowVisor that does not require extensions to the distributed network elements has been investigated in BIB011.
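To make the hypervisor's message interception and the spectrum-slicing control fields concrete, the following Python sketch models a hypothetical OpenSlice-style flow entry and the port translation a hypervisor performs between a VON controller and the physical cross-connects. The field names, the example values, and the translation step are illustrative assumptions, not the actual OpenFlow protocol extensions defined by OpenSlice:

```python
# Hypothetical sketch of a hypervisor translating a VON controller's flow
# entry (virtual ports) into a physical cross-connection; field names and
# the translation logic are illustrative assumptions, not the OpenSlice spec.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SpectrumFlowEntry:
    in_port: int
    out_port: int
    center_freq_thz: float  # center frequency of the spectrum slice
    slot_width_ghz: float   # width of the allocated frequency slot
    modulation: str         # e.g., "QPSK" or "16QAM"

class VonHypervisor:
    """Maps the virtual ports of each VON onto physical cross-connect ports,
    isolating the VON controllers from each other and from the substrate."""
    def __init__(self, port_map):
        self.port_map = port_map   # {(von_id, virtual port): physical port}
        self.cross_connects = []   # emulated physical cross-connection table

    def handle_flow_mod(self, von_id, entry):
        # Translate virtual to physical ports; the spectrum slice and
        # modulation format pass through unchanged to the substrate element.
        physical = replace(entry,
                           in_port=self.port_map[(von_id, entry.in_port)],
                           out_port=self.port_map[(von_id, entry.out_port)])
        self.cross_connects.append(physical)  # would be pushed to the BV-WXC
        return physical

hv = VonHypervisor({(1, 1): 7, (1, 2): 9})
print(hv.handle_flow_mod(1, SpectrumFlowEntry(1, 2, 193.1, 37.5, "QPSK")))
```

A centralized hypervisor along the lines of the Optical FlowVisor would keep this translation logic entirely in the controller domain, whereas OpenSlice distributes part of the extended message handling into the network elements themselves.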
|
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> Motivated by the design goals of Global Environment for Network Innovation (GENI), we consider how to support the slicing of link bandwidth resources as well as the virtualization of optical access networks and optical backbone mesh networks. Specifically, in this paper, we study a novel programmable mechanism called optical orthogonal frequency division multiplexing (OFDM)/orthogonal frequency division multiple access (OFDMA) for link virtualization. Unlike conventional time division multiplexing (TDM)/time division multiple access (TDMA) and wavelength division multiplexing (WDM)/wavelength division multiple access (WDMA) methods, optical OFDM/OFDMA utilizes advanced digital signal processing (DSP), parallel signal detection (PSD), and flexible resource management schemes for subwavelength level multiplexing and grooming. Simulations as well as experiments are conducted to demonstrate performance improvements and system benefits including cost-reduction and service transparency. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> A virtualized optical network is proposed as a key to implementing increased agility and flexibility into a cloud computing environment by providing any-to-any connectivity with the appropriate optical bandwidth at the appropriate time. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> A new networking approach based on IP/optical OFDM technologies is proposed, providing an adaptive mechanism of bandwidth provisioning and pipe resizing for dynamic traffic flows. A comparison study is presented to demonstrate its advantages. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> The integration of Ethernet Passive Optical Networks (EPONs) and IEEE 802.16 (WiMAX) has been lately presented as a promising fiber-wireless (FiWi) broadband access network. Conversely, lightweight layer-2 virtual private networks (VPNs) over FiWi, which can provide bandwidth guarantee to the respective users, were only recently addressed by Dhaini et al. In this paper, WiMAX-VPON, the framework proposed by Dhaini et al. to support layer-2 VPNs over EPON-WiMAX, is improved to take into account the polling control overhead when distributing the VPN bandwidth. A new generic analytical model is also presented to evaluate the performance of each registered VPN service. Our proposed model, which can also be used to analyze any polling-based FiWi network, applies to wireless and optical domains and provides performance measurements such as packet queuing delay, end-to-end (from wireless user to optical server) packet delay and average queue size. Numerical results are compared with simulation experiments, and show consistency between both outcomes. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> This paper proposes WiMAX-VPON, a novel framework for establishing layer-2 virtual private networks (VPNs) over the integration of WiMAX and Ethernet passive optical networks, which has lately been considered as a promising candidate for next-generation fiber-wireless backhaul-access networks.
With WiMAX-VPON, layer-2 VPNs support a bundle of service requirements for the respective registered wireless/wired users. These requirements are stipulated in the service level agreement and should be fulfilled by a suite of effective bandwidth management solutions. To achieve this, we propose a novel VPN-based admission control and bandwidth allocation scheme that provides per-stream quality-of-service protection and bandwidth guarantees for real-time flows. The bandwidth allocation is performed via a common medium access control protocol working in both the optical and wireless domains. An event-driven simulation model is implemented to study the effectiveness of the proposed framework. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> The integration of Ethernet passive optical networks (EPONs) with wireless worldwide interoperability for microwave access (WiMAX) is an approved solution for an access network. A resilient packet ring (RPR) is a good candidate for a metro network. Hence RPR, EPON, and WiMAX integration is a viable solution for metro-access network bridging. The present paper examines such integration, including an architecture and a joint media access control (MAC) protocol, as a solution for both access and metro networks. The proposed architecture is reliable due to the dependability of the RPR standard and the protection mechanism employed in the EPON. Moreover, the architecture contains a high fault tolerance against node and connection failure. The suggested MAC protocol includes a multi-level dynamic bandwidth allocation algorithm, a distributed admission control, a scheduler, and a routing algorithm. This MAC protocol aims at maximizing the advantages of the proposed architecture by distributing its functionalities over different parts of the architecture and jointly executing the parts of the MAC protocol. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> Although mobile devices are gaining more and more capabilities (i.e., CPU power, memory, connectivity, ...), they still fall short of executing complex rich media and data analysis applications. Offloading to the cloud is not always a solution, because of the high WAN latencies, especially for applications with real-time constraints such as augmented reality. Therefore the cloud has to be moved closer to the mobile user in the form of cloudlets. Instead of moving a complete virtual machine from the cloud to the cloudlet, we propose a more fine grained cloudlet concept that manages applications on a component level. Cloudlets do not have to be fixed infrastructure close to the wireless access point, but can be formed in a dynamic way with any device in the LAN network with available resources. We present a cloudlet architecture together with a prototype implementation, showing the advantages and capabilities for a mobile real-time augmented reality application. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> A control plane is a key enabling technique for dynamic and intelligent end-to-end path provisioning in optical networks. In this paper, we present an OpenFlow-based control plane for spectrum sliced elastic optical path networks, called OpenSlice, for dynamic end-to-end path provisioning and IP traffic offloading.
Experimental demonstration and numerical evaluation show its overall feasibility and efficiency. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> This paper discusses necessary steps for the migration from today's residential network model to a converged access/aggregation platform based on software defined networks (SDN). <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> This article addresses the potential impact of emerging technologies and solutions, such as software defined networking and network function virtualization, on carriers' network evolution. It is argued that standard hardware advances and these emerging paradigms can bring the most impactful disruption at the network's edge, enabling the deployment of clouds of nodes using standard hardware: it will be possible to virtualize network and service functions, which are provided today by expensive middleboxes, and move them to the edge, as close as possible to users. Specifically, this article identifies some of the key technical challenges behind this vision, such as dynamic allocation, migration, and orchestration of ensembles of virtual machines across wide areas of interconnected edge networks. This evolution of the network will profoundly affect the value chain: it will create new roles and business opportunities, reshaping the entire ICT world. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> Together with the explosive growth of mobile applications and the emergence of the cloud computing concept, mobile cloud computing (MCC) has been introduced as a potential technology for mobile services. MCC integrates cloud computing into the mobile environment and overcomes obstacles related to performance (e.g., battery life, storage, and bandwidth), environment (e.g., heterogeneity, scalability, and availability), and security (e.g., reliability and privacy) discussed in mobile computing. This paper gives a survey of MCC, which helps general readers have an overview of MCC including the definition, architecture, and applications. The issues, existing solutions, and approaches are presented. In addition, the future research directions of MCC are discussed. Copyright © 2011 John Wiley & Sons, Ltd. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> In order to reduce cost and complexity, fiber-wireless (FiWi) networks emerge, combining the huge amount of available bandwidth of fiber networks and the flexibility and mobility of wireless networks. However, there is still a long way to go before fiber and wireless systems can be regarded as fully integrated networks. In this paper, we propose a network virtualization based seamless networking scheme for FiWi networks, including a hierarchical model, service model, service implementation, and dynamic bandwidth assignment (DBA). Then, we evaluate the performance changes after network virtualization is introduced. Throughput for nodes, bandwidth for links, and overheads caused by network virtualization are analyzed. The performance of our proposed networking scheme is evaluated by simulation and real implementations, respectively. The results show that, compared to the traditional networking scheme, our scheme has a better performance.
<s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> Fiber-wireless (FiWi) access networks, which are a combination of fiber networks and wireless networks, have the advantages of both networks, such as high bandwidth, high security, low cost, and flexible access. However, with the increasing need for bandwidth and types of service from users, FiWi networks are still relatively incapable and ossified. To alleviate bandwidth tension and facilitate new service deployment, we attempt to apply network virtualization in FiWi networks, in which the network's control plane and data plane are separated from each other. Based on a previously proposed hierarchical model and service model for FiWi network virtualization, the process of service implementation is described. The performance of FiWi access networks applying network virtualization is analyzed in detail, including bandwidth for links, throughput for nodes, and multipath flow transmission. Simulation results show that the FiWi network with virtualization is superior to that without. <s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Virtualization: Summary and Discussion <s> There is a growing awareness among industry players of reaping the benefits of mobile-cloud convergence by extending today's unmodified cloud to a decentralized two-level cloud-cloudlet architecture based on emerging mobile-edge computing (MEC) capabilities. In light of future 5G mobile networks moving toward decentralization based on cloudlets, intelligent base stations, and MEC, the inherent distributed processing and storage capabilities of radio-and-fiber (R&F) networks may be exploited for new applications, e.g., cognitive assistance, augmented reality, or cloud robotics. In this paper, we first revisit fiber-wireless (FiWi) networks in the context of conventional clouds and emerging cloudlets, thereby highlighting the limitations of conventional radio-over-fiber (RoF) networks such as China Mobile's centralized cloud radio access network (C-RAN) to meet the aforementioned trends. Furthermore, we pay close attention to the specific design challenges of data center networks and revisit our switchless arrayed-waveguide grating (AWG) based network with efficient support of east-west flows and enhanced scalability. <s> BIB014
|
The virtualization studies on access networks BIB001 - BIB003, BIB002 - BIB004 have primarily focused on exploiting and manipulating the specific properties of the optical physical layer (e.g., different OFDMA subcarriers) and MAC layer (e.g., polling-based MAC protocol) of the optical access networks for virtualization. In addition to the virtualization studies on purely optical PON access networks, two sets of studies, namely BIB012 - BIB013 and WiMAX-VPON BIB005, BIB004, have examined virtualization for two forms of FiWi access networks. Future research needs to consider virtualization of a wider set of FiWi network technologies, i.e., FiWi networks that combine optical access networks with a wider variety of wireless access technologies, such as different forms of cellular access or combinations of cellular with other forms of wireless access. Also, virtualization of integrated access and metropolitan area networks BIB006 - BIB009 is an important future research direction. A set of studies has begun to explore optical networking support for SDN-enabled cloudnets that exploit virtualization to dynamically pool resources across distributed data centers. One important direction for future work on cloudnets is to examine moving data center resources closer to the users and the subsequent resource pooling across edge networks BIB010. Also, the exploration of the benefits of FiWi networks for decentralized cloudlets BIB011 - BIB007 that support mobile wireless network services is an important future research direction BIB014. A fairly extensive set of studies has examined virtual network embedding for metro/core networks. The virtual network embedding studies have considered the specific limitations and constraints of optical networks and have begun to explore specialized embedding strategies that strive to meet a specific optimization objective, such as survivability, dynamic adaptability, or energy efficiency. Future research should seek to develop a comprehensive framework of embedding algorithms that can be tuned with weights to achieve prescribed degrees of the different optimization objectives. A relatively smaller set of studies has developed and refined hypervisors for creating VONs over metro/core optical networks. Much of the SDON hypervisor research has centered on the OpenSlice hypervisor concept BIB008. While OpenSlice accounts for the specific characteristics of the optical transmission medium, it is relatively complex, as it requires a distributed implementation with specialized optical networking components. Future research should seek to achieve the hypervisor functionalities with a wider set of common optical components so as to reduce cost and complexity. Overall, SDON hypervisor research should examine the performance-complexity/cost tradeoffs of distributed versus centralized approaches. Within this context of examining the spectrum of distributed to centralized hypervisors, future hypervisor research should further refine and optimize the virtualization mechanisms so as to achieve strict isolation between virtual network slices, as well as low-complexity hypervisor deployment, operation, and maintenance.
|
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> Enterprise network security is typically reactive, and it relies heavily on host security and middleboxes. This approach creates complicated interactions between protocols and systems that can cause incorrect behavior and slow response to attacks. We argue that imbuing the network layer with mechanisms for dynamic access control can remedy these ills. We propose Resonance, a system for securing enterprise networks, where the network elements themselves enforce dynamic access control policies based on both flow-level information and real-time alerts. Resonance uses programmable switches to manipulate traffic at lower layers; these switches take actions (e.g., dropping or redirecting traffic) to enforce high-level security policies based on input from both higher-level security policies and distributed monitoring and inference systems. We describe the design of Resonance, apply it to Georgia Tech's network access control system, show how it can both overcome the current shortcomings and provide new security functions, describe our proposed deployment, and discuss open research questions. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> Data centers interconnected by flexi-grid optical networks are a promising scenario to meet the high burstiness and high-bandwidth requirements of data center applications, because flexi-grid optical networks can allocate spectral resources for applications in a dynamic, tunable and efficient control manner. Meanwhile, as a centralized control architecture, the software defined networking (SDN) enabled by the OpenFlow protocol can provide maximum flexibility for the networks and make a unified control over various resources for the joint optimization of data center and network resources. The time factor is first introduced into the SDN based control architecture for flexi-grid optical networks supporting data center applications. A traffic model considering the time factor is built, and a requirement parameter, i.e., the bandwidth-delay product, is adopted for the service requirement measurement. Then, a time-aware software defined networking (Ta-SDN) based control architecture is designed with an OpenFlow protocol extension. A novel time-correlated PCE (TC-PCE) algorithm is proposed for the time-correlated service under the Ta-SDN based control architecture, which can complete data center selection, path computation, and bandwidth resource allocation. Finally, simulation results show that our proposed Ta-SDN control architecture and time-correlated PCE algorithm can considerably improve the application and network performance in terms of blocking probability. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> OBS over WSON is one of the promising paradigms for future optical networks, which offers statistical multiplexing over high speed optical networks while eliminating electronic bottlenecks. In such legacy IP/WDM networks, control planes at the IP and WDM layers are independently operated and managed, which may not optimize network performance. Recently, the OpenFlow-based Software Defined Network (SDN) architecture has been introduced, which enables unified control protocols for multi-layer networks to improve network agility and automation while reducing capital and operational expenditures.
In this paper, we introduce a Software Defined Optical Network (SDON) architecture and develop a QoS-aware unified control protocol for optical burst switching in OpenFlow-based software-defined optical networks. A novel adaptive-burst assembling algorithm, a latency-aware burst routing and scheduling algorithm, and an effective OpenFlow-based signaling protocol are investigated. The performance of the proposed protocol is evaluated with a well-known GMPLS-based distributed protocol. The proposed QoS-aware unified control protocol significantly improves burst blocking, network throughput, and packet latency while offering better quality of service (QoS) to different classes of traffic with heterogeneous delay requirements compared to the GMPLS-based distributed protocol. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> The process of planning a virtual topology for a Wavelength Devision Multiplexing (WDM) network is called Virtual Topology Design (VTD). The goal of VTD is to find a virtual topology that supports forwarding the expected traffic without congestion. In networks with fluctuating, high traffic demands, it can happen that no single topology fits all changing traffic demands occurring over a longer time. Thus, during operation, the virtual topology has to be reconfigured. Since modern networks tend to be large, VTD algorithms have to scale well with increasing network size, requiring distributed algorithms. Existing distributed VTD algorithms, however, react too slowly on congestion for the real-time reconfiguration of large networks. We propose Selfish Virtual Topology Reconfiguration (SVTR) as a new algorithm for distributed VTD. It combines reconfiguring the virtual topology and routing through a Software Defined Network (SDN). SVTR is used for online, on-the-fly network reconfiguration. Its integrated routing and WDM reconfiguration keeps connection disruption due to network reconfiguration to a minimum and is able to react very quickly to traffic pattern changes. SVTR works by iteratively adapting the virtual topology to the observed traffic patterns without global traffic information and without future traffic estimations. We evaluated SVTR by simulation and found that it significantly lowers congestion in realistic networks and high load scenarios. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> We overview the PCE architecture and how it can mitigate some weaknesses of GMPLS-controlled optical networks. We identify some of its own limitations and the way they are being addressed, along with its deployment models in SDN/Openflow. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> Quality of Service-enabled applications and services rely on Traffic Engineering-based (TE) Label Switched Paths (LSP) established in core networks and controlled by the GMPLS control plane. Path computation process is crucial to achieve the desired TE objective. Its actual effectiveness depends on a number of factors. Mechanisms utilized to update topology and TE information, as well as the latency between path computation and resource reservation, which is typically distributed, may affect path computation efficiency. Moreover, TE visibility is limited in many network scenarios, such as multi-layer, multi-domain and multi-carrier networks, and it may negatively impact resource utilization. 
The Internet Engineering Task Force (IETF) has promoted the Path Computation Element (PCE) architecture, proposing a dedicated network entity devoted to path computation process. The PCE represents a flexible instrument to overcome visibility and distributed provisioning inefficiencies. Communications between path computation clients (PCC) and PCEs, realized through the PCE Protocol (PCEP), also enable inter-PCE communications offering an attractive way to perform TE-based path computation among cooperating PCEs in multi-layer/domain scenarios, while preserving scalability and confidentiality. This survey presents the state-of-the-art on the PCE architecture for GMPLS-controlled networks carried out by research and standardization community. In this work, packet (i.e., MPLS-TE and MPLS-TP) and wavelength/spectrum (i.e., WSON and SSON) switching capabilities are the considered technological platforms, in which the PCE is shown to achieve a number of evident benefits. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> In this invited tutorial paper, we review the changing nature of data center networks and the role played by optoelectronics in future network designs. Conventional network protocols will be reviewed, including Ethernet, Fibre Channel, and InfiniBand, and requirements for WAN connectivity between data centers. The transition to converged networks based on lossless Ethernet will be discussed, including FCoE and RoCE protocols. Industry roadmaps for bandwidth, port density, and scalability of optical links will be presented, and optical transceiver form factors including QSFP, CFP, and active optical cables will be discussed. The role of software defined networking (SDN) in next generation data center networks will also be presented in this context. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> In the High Speed Internet (HSI) service of the Fiber-To-The-Home (FTTH) networks, there are increasingly various applications, such as browsing, video streaming, large downloads and online games. They are competing for the fixed bandwidth on a best-effort basis, and finally resulting in the network congestion and poor quality of experience (QoE). Users want to improve the quality of certain applications. However, today's network service controller (e.g. Broadband Remote Access Server, BRAS) lacks mechanisms to meet the users' desire to enhance the QoE of specific applications. Moreover, BRAS still lacks mechanisms to allocate the bandwidth resources properly for users' different applications according to their “sweet points”. “Sweet points” is a specific bandwidth value. The QoE gets worse quickly when the bandwidth is smaller than the “sweet point”, and keeps the same approximately when the bandwidth is larger than the “sweet point”. In this paper, we proposed a novel BRAS architecture using Software-Defined Networking (SDN) technology, which can improve the user's QoE by adjusting the bandwidth of a specific application to its “sweet point” according to their requirements. To demonstrate the feasibility of our proposed novel BRAS, we built a prototype using SDN to help the user to adjust the bandwidth for the specific application and improve the users' QoE. The experimental results show that users could enhance the QoE of specific applications according to users' preference. 
<s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> Over the years, the demand for high bandwidth services, such as live and on-demand video streaming, steadily increased. The adequate provisioning of such services is challenging and requires complex network management mechanisms to be implemented by Internet service providers (ISPs). In current broadband network architectures, the traffic of subscribers is tunneled through a single aggregation point, independent of the different service types it belongs to. While having a single aggregation point eases the management of subscribers for the ISP, it implies huge bandwidth requirements for the aggregation point and potentially high end-to-end latency for subscribers. An alternative would be a distributed subscriber management, adding more complexity to the management itself. In this paper, a new traffic management architecture is proposed that uses the concept of Software Defined Networking (SDN) to extend the existing Ethernet-based broadband network architecture, enabling a more efficient traffic management for an ISP. By using SDN-enabled home gateways, the ISP can configure traffic flows more dynamically, optimizing throughput in the network, especially for bandwidth-intensive services. Furthermore, a proof-of-concept implementation of the approach is presented to show the general feasibility and study configuration tradeoffs. Analytic considerations and testbed measurements show that the approach scales well with an increasing number of subscriber sessions. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> This work experimentally demonstrates how to control and manage user Quality of Service (QoS) by acting on the switching on-off of the optical Gigabit Ethernet (GbE) interfaces in a wide area network test bed including routers and GPON accesses. The QoS is monitored at the user location by means of active probes developed in the framework of the FP7 MPLANE project. The network topology is managed according to some current Software Defined Network issues and in particular an Orchestrator checks the user quality, the traffic load in the GbE links and manages the network interface reconfiguration when congestion occurs in some network segments. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> We propose joint bandwidth provisioning and base station caching for video delivery in software-defined PONs. Performance evaluation via custom simulation models reveals 30% increase in served video requests and 50% reduction in service response delays. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> This paper presents FlowNAC, a Flow-based Network Access Control solution that allows to grant users the rights to access the network depending on the target service requested. Each service, defined univocally as a set of flows, can be independently requested and multiple services can be authorized simultaneously. Building this proposal over SDN principles has several benefits: SDN adds the appropriate granularity (fine-or coarse-grained) depending on the target scenario and flexibility to dynamically identify the services at data plane as a set of flows to enforce the adequate policy. 
FlowNAC uses a modified version of IEEE 802.1X (novel EAPoL-in-EAPoL encapsulation) to authenticate the users (without the need of a captive portal) and service level access control based on proactive deployment of flows (instead of reactive). Explicit service request avoids misidentifying the target service, as it could happen by analyzing the traffic (e.g. private services). The proposal is evaluated in a challenging scenario (concurrent authentication and authorization processes) with promising results. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> The key current challenges for the industrial application of all optical switching networks are energy consumption, transmission rate, spectrum efficiency, and switching throughput. The energy consumption problem is mainly researched in this paper. From the perspective of components and modules, node equipment, and network levels, different enabling technologies are proposed to overcome this problem, which are also evaluated through different experimental demonstrations. First, high-sampling-rate digital-to-analog converters (DACs) and WSS-based ROADM modules are demonstrated as components and modules for energy-efficient all optical switching networks. Then, an all optical transport network test-bed consisting of 10 Pbit/s level all optical switching nodes based on multi-level and multi-planar switching architecture is experimentally demonstrated for the first time, which can reduce power consumption by 43%. A control architecture for energy-efficient all optical switching networks is built with OpenFlow based software defined networking (SDN), and experimental results are given to verify the performance of this control architecture. Finally, we describe an All Optical Networks Innovation (AONI) project in China, which aims to explore transmission, switching, and networking technologies in all optical switching networks, and then two application scenarios are forecast based on the technical breakthroughs of this project. <s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> This work experimentally demonstrates how to save energy in GbE router network architectures by switching off idle optical Gigabit Ethernet (GbE) interfaces during low traffic periods. Two energy saving approaches, called Fixed Upper Fixed Lower (FUFL) and Dynamic Upper Fixed Lower (DUFL), have been adopted. <s> BIB014 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> The problem of energy consumption in data centers has attracted many researchers interest recently. This paper proposes an optimal energy consumption Software Defined Network (SDN) data center model using the dynamic activation of hosts and switches. We model switches and hosts as queues and formulate a Mixed Integer Linear Programming (MILP) model to minimize energy consumption while guaranteeing Quality of Service (QoS) of data center. Our purpose is minimizing static power, port power, and memory power of data centers. Since the problem is NP-hard, we adopt Simulated Annealing algorithm to obtain the solution. Through numerical experiment, we could observe that our model is able to save reasonable energy compared to the full operation data center model. <s> BIB015 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. 
SDN APPLICATION LAYER <s> A hierarchically controlled IP+Optical multilayer Transport SDN architecture is proposed, which highlights flexible resource provisioning and dynamic cross-layer restorations. The proposals are also demonstrated via an implemented testbed prototype. <s> BIB016 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> Broadband Remote Access Servers (BRASes) are crucial middleboxes in DSL access networks, providing the first IP point in the network for subscribers and enforcing operator policies. The number of functions provided by BRASes, combined with the key role they play in the network, means that these devices are expensive, difficult to change, and constitute a single point of failure. In order to overcome these limitations, we propose to virtualize the BRAS and to enhance it with a control interface that can be exploited by management systems in order to introduce live session migration and higher reliability. Our proof-of-concept implementation shows that our virtual software BRAS is able to handle thousands of sessions while forwarding and shaping traffic at rates of millions of packets per second on commodity hardware, and that the live session migration feature enables the implementation of high-reliability scenarios. <s> BIB017 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> Cloud based datacenters will be most suitable candidates for future software defined networking. The QoS requirements for shared data centers, hosting diverse applications could be successfully achieved through SDN architecture. This paper provides an extension of our previously proposed scheme QAMO that was aimed at achieving tangible QoS in datacenters through controlling bandwidth reservation in Multipath TCP and OBS layer while maintaining throughput efficiency. However, QAMO was designed for traditional networks and did not have the capability to adapt to current network status as expected from future software defined networks. The paper presents an enhanced algorithm called QAMO-SDN that introduces a controller layer in previously proposed architecture and achieves adaptive QoS differentiation based on current network feedback. QAMO-SDN inherits the architecture of QAMO, using Multipath TCP over OBS networks. We evaluate the performance of QAMO-SDN under different network loads and topologies using realistic data center traffic models and present the results of our detailed simulation tests. <s> BIB018 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> Increased network traffic has put great pressure on edge router. It tends to be more expensive and consumes more resources. Buffer is especially the most valuable resource in the router. Given the potential benefits of reducing buffer sizes, a lot of debate on buffer sizing has emerged in the past few years. The small buffer rule, for example, was challenged at edge router. Instead of buffer sizing, the goal of our work is to find out a way to relieve the pressure of edge router and logically enlarge its buffer size without incurring additional costs. In this paper, taking advantage of the global view of SDN, we proposed Software Defined Backpressure Mechanism (SD-BM) to alleviate the pressure of edge router. 
Particularly, we gave Refill and Software Defined Networking based RED (RS-RED) algorithm, which makes it possible to enlarge the network buffer logically and offloads traffic from busy egress router to free ingress devices. Simulation results show that it has comparable performance, both in time delay and loss rate, with edge router which has large buffer in traditional way. The results can have consequences for the design of edge router and the related network. <s> BIB019 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> Software Defined Networks (SDN) such as OpenFlow provides better network management for data center by decoupling control plane from data plane. Current OpenFlow controllers install flow rules with a fixed timeout after which the switch automatically removes the rules from its flow table. However, this fixed timeout has shown many disadvantages. For flows with short packet interval, the timeout may be too large so that flow rules stay in the flow table for too long time and result in unnecessary occupation of flow table; for flows with long packet interval or periodic flows, the timeout may be too short, hence producing too many packet-in events and causing overload on the controller. In this paper, we propose the Intelligent Timeout Master, which can assign suitable timeout to different flows according to their characteristics, as well as conduct a feedback control to adjust the max timeout value according to the current flow table occupation, in an effort to avoid flow table overflow. In our experiments, we use real traffic trace and the result confirms that our Intelligent Timeout Master performs quite well in reducing the number of packet-in events as well as flow table occupation. <s> BIB020 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> This paper outlines the issues in providing a seamless integration between energy-efficient optical access networks and metro networks that preserves the overall latency balance. A solution based on SDN is proposed and detailed. The proposed solution allows to trade the increased delay in the access section, due the utilization of energy efficient schemes, with a reduced delay in the metro section. Experiments in a geographically distributed testbed evaluate the different delay contributions. <s> BIB021 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> As networks grow in size, large-scale failures caused by disasters may lead to huge data loss, especially in an optical network employing wavelength-division multiplexing (WDM). Providing 100 % protection against disasters would require massive and economically unsustainable bandwidth overprovisioning, as disasters are difficult to predict, statistically rare, and may create large-scale failures. Backup reprovisioning schemes are proposed to remedy this problem, but in case of a large-scale disaster, even the flexibility provided by backup reprovisioning may not be enough, given the sudden reduction in available network resource, i.e., resource crunch. To mitigate the adverse effects of resource crunch, an effective resource reallocation is possible by exploiting service heterogeneity, specifically degraded-service tolerance, which makes it possible to provide some level of service, e.g., reduced capacity, to connections that can tolerate degraded service, versus no service at all. 
Software-Defined Networking (SDN) is a promising approach to perform such dynamic changes (redistribution of network resources) as it simplifies network management via centralized control logic. By exploiting these new opportunities, we propose a Backup Reprovisioning with Partial Protection (BRPP) scheme supporting dedicated-path protection, where backup resources are reserved but not provisioned (as in shared-path protection), such that the amount of bandwidth reserved for backups as well as their routings are subject to dynamic changes, given the network state, to increase utilization. The performance of the proposed scheme is evaluated by means of SDN emulation using Mininet environment and OpenDaylight as the controller. <s> BIB022 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> Optical transport networks typically deploy dynamic restoration mechanisms in order to automatically recover optical connections disrupted by network failures. Elastic optical networks (EONs), currently emerging as the next-generation technology to be adopted in optical transport, introduce new challenges for traditional generic multiprotocol label-switching (GMPLS)-based restoration that may seriously impact the achievable recovery time. At the same time, the software-defined networking (SDN) framework is emerging as an alternative control plane. It is therefore important to investigate possible benefits provided by SDN in the implementation of restoration mechanisms for EONs. This paper proposes a dynamic restoration scheme for EONs based on the SDN framework. The proposed scheme contemporarily exploits centralized path computation and node configuration to avoid contentions during the recovery procedure with the final aim of minimizing the recovery time. The performance of the proposed scheme is evaluated by means of simulations in terms of recovery time and restoration blocking probability and compared against three reference schemes based on GMPLS and SDN. <s> BIB023 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> We demonstrate a cross-layer orchestration for packet service over IP-optical networks, in terms of availability and elasticity. Our orchestration built over the SDN concept self-adjusts and cost-efficiently responds to dynamics on network paths, impairments/failures, and topology. <s> BIB024 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> The over-provisioning of capacities in optical networks is not a sustainable approach in the long run. In this paper, we propose a software defined networking scheme for quality of service provisioning through energy efficient assignment of optical transponders, employing bandwidth variable distance adaptive modulation and coding. Our scheme enables avoiding over-provisioning of transponder capacity as well as short-term major changes in equipment allocation for networks with dynamic traffic. We make use of the seasonal auto-regressive integrated moving average model to forecast the statistics of network traffic for an arbitrary time span based on the requirements and the constraints of the service provider. The quality of service measure is defined as the probability of congestion at the core router ports. 
A stochastic linear programming approach is used to provide a solution for energy efficient assignment of optical transponders and electronic switching capacity while ensuring a certain level of quality of service to core routers. The scheduling of optical lightpath capacities is performed for the entire duration of time under consideration, whereas the scheduling of electronic switching capacities is performed based on the short-term dynamics of the traffic. Numerical results show up to 48% improvement in the energy efficiency of optical networks and 45% reduction in the number of optical lightpaths through the implementation of the proposed technique, compared to a design based on employing conventional fixed optical transponders and no traffic rerouting, where both schemes satisfy the congestion probability requirements. <s> BIB025 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> In this letter, we propose a novel SDN-based fast lightpath hopping (LPH) mechanism enabled by time synchronization to protect optical networks from eavesdropping and jamming. In order to realize fast LPH in optical networks, we establish an SDN-based LPH routing and signalling architecture, and set up an integer linear programming (ILP) model for fewest-shared-link multipath computation. We also demonstrate the first fast LPH prototype experiment and results show that an extremely high hop rate upto 1 MHz can be achieved with acceptable bit error rates (BERs) by control/switching separation and precise timing. <s> BIB026 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> Cloud radio access network (C-RAN) has become a promising scenario to accommodate high-performance services with ubiquitous user coverage and real-time cloud computing using cloud BBUs. In our previous work, we implemented cross stratum optimization of optical network and application stratums resources that allows to accommodate the services in optical networks. In view of this, this study extends to consider the multiple dimensional resources optimization of radio, optical and BBU processing in 5G age. We propose a novel multi-stratum resources optimization (MSRO) architecture with network functions virtualization for cloud-based radio over optical fiber networks (C-RoFN) using software defined control. A global evaluation scheme (GES) for MSRO in C-RoFN is introduced based on the proposed architecture. The MSRO can enhance the responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical and BBU resources effectively to maximize radio coverage. The efficiency and feasibility of the proposed architecture are experimentally demonstrated on OpenFlow-based enhanced SDN testbed. The performance of GES under heavy traffic load scenario is also quantitatively evaluated based on MSRO architecture in terms of resource occupation rate and path provisioning latency, compared with other provisioning scheme. <s> BIB027 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> Network virtualization is an emerging technique that enables multiple tenants to share an underlying physical infrastructure, isolating the traffic running over different virtual infrastructures/tenants. This technique aims to improve network utilization, while reducing the complexities in terms of network management for operators. 
Applied to this context, software-defined networking (SDN) paradigm can ease network configurations by enabling network programability and automation, which reduces the amount of operations required from both service and infrastructure providers. SDN techniques are decreasing vendor lock-in issues due to specific configuration methods or protocols. Application-based network operations (ABNO) are a toolbox of key network functional components with the goal of offering application-driven network management. Service provisioning using ABNO may involve direct configuration of data plane elements or delegate it to several control plane modules. We validate the applicability of ABNO to multitenant virtual networks in multitechnology optical domains based on two scenarios, in which multiple control plane instances are orchestrated by the architecture. Congestion detection and failure recovery are chosen to demonstrate fast recalculation and reconfiguration, while hiding the configurations in the physical layer from the upper layer. <s> BIB028 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VI. SDN APPLICATION LAYER <s> The operation of smart power grids will depend on a reliable and flexible communication infrastructure for monitoring and control. Software defined networking (SDN) is emerging as a promising control platform, facilitating network programmability and bandwidth flexibility. We study SDN optical transmission reliability for smart grid applications. We identify the collaboration of the control plane and the data plane in software-defined optical transmission systems as a cyber-physical interdependency where the ‘physical’ fiber network provides the ‘cyber’ control network with means to distribute control and signaling messages and in turn is itself operated by these ‘cyber’ control messages. We examine the robustness of such an interdependent communication system and quantify the advantages of optical layer reconfigurability. <s> BIB029
|
In the SDN paradigm, applications interact with the controllers to implement network services. We organize the survey of the studies on application layer aspects of SDONs according to the main application categories of quality of service (QoS), access control and security, energy efficiency, and failure recovery, as illustrated in Fig. 12.

Fig. 12. Classification of SDN application layer studies on SDONs: QoS (Sec. VI-A): long-term QoS BIB002 , BIB025 ; short-term QoS BIB008 , BIB003 ; virtual topology reconfiguration BIB004 ; QoS routing BIB018 - BIB019 ; QoS management BIB009 , BIB010 ; video applications - BIB011 . Access control and security (Sec. VI-B): flow-based access control BIB012 , BIB001 ; lightpath hopping security BIB026 ; flow timeout BIB020 . Energy efficiency (Sec. VI-C): application control BIB013 - BIB027 ; routing BIB021 , BIB014 - BIB015 . Failure recovery and restoration (Sec. VI-D): network reprovisioning BIB022 ; restoration BIB023 ; reconfiguration BIB028 - BIB024 ; hierarchical survivability BIB016 ; robust power grid BIB029 .

A. QoS 1) Long-term QoS: Time-Aware SDN: Data Center (DC) networks move data back and forth between DCs to balance the computing load and the data storage usage (for upload) BIB007 . These data movements between DCs can span large geographical areas and help ensure DC service QoS for the end users. Load balancing algorithms can exploit the characteristics of the user requests. One such request characteristic is the high degree of time-correlation over various time scales, ranging from several hours of a day (e.g., due to a sporting event) to several days in a year (e.g., due to a political event). Zhao et al. BIB002 have proposed a time-aware SDN application using OpenFlow extensions to dynamically balance the load across the DC resources so as to improve the QoS. Specifically, a time-correlated PCE algorithm based on flexi-grid optical transport (see Section IV-D2) has been proposed. An SDN application monitors the DC resources and applies network rules to preserve the QoS. Evaluations of the algorithm indicate improvements in terms of network blocking probability, global blocking probability, and spectrum consumption ratio. This study did not consider short time scale traffic bursts, which can significantly affect the load conditions. We believe that, in order to avoid pitfalls in the operation of load balancing through PCE algorithms implemented with SDN, a wide range of traffic conditions needs to be considered. The considered traffic range should include short- and long-term traffic variations, which should be traded off with various QoS aspects, such as the type of application and delay constraints, as well as the resulting costs and control overheads. Khodakarami et al. BIB025 have taken steps in this direction by forming a traffic forecasting model for both long-term and short-term forecasts in a wide-area mesh network. Optical lightpaths are then configured based on the overall traffic forecast, while electronic switching capacities are allocated based on the short-term forecasts.

2) Short-Term QoS: Users of a high-speed FTTH access network may request very large bandwidths due to simultaneously running applications that require high data rates. In such a scenario, applications requiring very high data rates may affect each other. For instance, a video conference running simultaneously with the streaming of a sports video may result in call drops in the video conference application and in stalls of the sports video. Li et al. BIB008 proposed an SDN based bandwidth provisioning application in the broadband remote access server (BRAS) BIB017 network. They defined and assigned the minimum bandwidth, which they named the "sweet point", that each application requires to experience good QoE. Li et al. showed that maintaining the "sweet point" bandwidth for each application can significantly improve the QoE while the other applications are still served according to their bandwidth requirements.
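To make the "sweet point" idea concrete, the following minimal Python sketch first guarantees each application its sweet-point bandwidth and then shares the leftover link capacity in proportion to the remaining demands. The sweet-point values and the proportional-sharing policy are illustrative assumptions, not the exact mechanism of BIB008 :

```python
# Minimal sketch of "sweet point" aware bandwidth allocation in the
# spirit of the BRAS application of Li et al.; the sweet-point values
# and the proportional-sharing policy are illustrative assumptions.

SWEET_POINTS_MBPS = {"video_conference": 4.0, "sports_stream": 8.0, "download": 2.0}

def allocate(capacity_mbps, demands_mbps):
    """First guarantee each application its sweet point (scaled down
    uniformly if the link cannot fit all sweet points), then share any
    leftover capacity in proportion to the remaining demands."""
    total_sweet = sum(SWEET_POINTS_MBPS[app] for app in demands_mbps)
    scale = min(1.0, capacity_mbps / total_sweet)
    alloc = {app: SWEET_POINTS_MBPS[app] * scale for app in demands_mbps}
    leftover = capacity_mbps - sum(alloc.values())
    residual = {app: max(0.0, demands_mbps[app] - alloc[app]) for app in demands_mbps}
    total_residual = sum(residual.values())
    if total_residual > 0 and leftover > 0:
        for app in alloc:
            alloc[app] += leftover * residual[app] / total_residual
    return alloc

# A 12 Mb/s link cannot fit all sweet points (14 Mb/s in total), so the
# guarantees are scaled down uniformly to fill the link.
print(allocate(12.0, {"video_conference": 6.0, "sports_stream": 10.0, "download": 5.0}))
```

A deployed controller application would additionally cap each share at the application's actual demand and enforce the computed allocations, e.g., via per-flow rate limiters at the BRAS.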
In a similar study, Patel et al. BIB003 proposed a burst switching mechanism based on a software defined optical network. Bursts typically originate at the edge nodes and the aggregation points due to the statistical multiplexing of high-speed optical transmissions. To ensure QoS for multiple traffic classes, the bursts at the edge nodes have to be managed by deciding their end-to-end paths so as to meet their QoS requirements, such as minimum delay and data rate. In non-SDN based mechanisms, complicated distributed protocols, such as GMPLS BIB005 , BIB006 , are used to route the burst traffic. In the proposed application, the centralized unified control plane decides the routing path for each burst based on latency and QoS requirements. A simplified procedure involves (i) burst evaluation at the edge node, (ii) reporting of the burst information to the SDN controller, and (iii) sending of configurations from the controller to the optical nodes to set up a lightpath, as illustrated in Fig. 13. Simulations indicate performance improvements in terms of throughput, network blocking probability, and latency, along with improved QoS, when compared to non-SDN GMPLS methods.
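The following Python sketch mimics this three-step procedure on the controller side, with an assumed two-path topology and per-hop latencies. The message fields and the "reserve_wavelength" action are hypothetical stand-ins for the OpenFlow-based signaling of BIB003 :

```python
# Sketch of the three-step burst setup of Fig. 13 on the controller
# side: (i) the edge node evaluates the assembled burst, (ii) reports
# its metadata to the SDN controller, and (iii) the controller computes
# a QoS-feasible lightpath and configures the optical nodes. Topology,
# latencies, and message fields are illustrative assumptions.

TOPOLOGY_MS = {  # assumed per-hop latencies in milliseconds
    ("edge", "core1"): 2, ("core1", "egress"): 3,
    ("edge", "core2"): 1, ("core2", "egress"): 6,
}
PATHS = [["edge", "core1", "egress"], ["edge", "core2", "egress"]]

def report_burst(size_bytes, qos_class, max_latency_ms):
    """Step (ii): burst metadata reported by the edge node."""
    return {"size": size_bytes, "class": qos_class, "deadline_ms": max_latency_ms}

def path_latency_ms(path):
    return sum(TOPOLOGY_MS[hop] for hop in zip(path, path[1:]))

def setup_lightpath(burst):
    """Step (iii): choose the lowest-latency path that meets the burst
    deadline and emit per-node configuration actions (stand-ins for the
    controller-to-node signaling)."""
    feasible = [p for p in PATHS if path_latency_ms(p) <= burst["deadline_ms"]]
    if not feasible:
        return None  # burst would be blocked (counts toward blocking probability)
    best = min(feasible, key=path_latency_ms)
    return [{"node": node, "action": "reserve_wavelength"} for node in best]

print(setup_lightpath(report_burst(1_500_000, "premium", max_latency_ms=6)))
```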
|
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 3) Virtual Topology Reconfigurations: <s> The process of planning a virtual topology for a Wavelength Devision Multiplexing (WDM) network is called Virtual Topology Design (VTD). The goal of VTD is to find a virtual topology that supports forwarding the expected traffic without congestion. In networks with fluctuating, high traffic demands, it can happen that no single topology fits all changing traffic demands occurring over a longer time. Thus, during operation, the virtual topology has to be reconfigured. Since modern networks tend to be large, VTD algorithms have to scale well with increasing network size, requiring distributed algorithms. Existing distributed VTD algorithms, however, react too slowly on congestion for the real-time reconfiguration of large networks. We propose Selfish Virtual Topology Reconfiguration (SVTR) as a new algorithm for distributed VTD. It combines reconfiguring the virtual topology and routing through a Software Defined Network (SDN). SVTR is used for online, on-the-fly network reconfiguration. Its integrated routing and WDM reconfiguration keeps connection disruption due to network reconfiguration to a minimum and is able to react very quickly to traffic pattern changes. SVTR works by iteratively adapting the virtual topology to the observed traffic patterns without global traffic information and without future traffic estimations. We evaluated SVTR by simulation and found that it significantly lowers congestion in realistic networks and high load scenarios. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 3) Virtual Topology Reconfigurations: <s> Software defined networking (SDN), originally designed to operate on access Ethernet-based networks, has been recently proposed for different specific networking scenarios, including core or metro/aggregation networks. In this study, we extend this concept to enable a comprehensive control of a converged access, metro and edges of a core network. In particular, a generalized SDN controller is proposed for upstream global QoS traffic engineering of passive optical networks (PONs), Ethernet metro/aggregation segment and IP/MPLS networks through the adoption of an unique interface, in the framework of the Interface to the Routing System (I2RS). Extended OpenFlow functionalities and Path Computation Element Protocol (PCEP) interfaces are encompassed to achieve effective dynamic flow control. <s> BIB002
|
Fig. 14. Illustration of a routing application with integrated control of access, metro, and core networks using SDN and the Interface to the Routing System (I2RS) BIB002 : The SDN controller interacts with the access network, e.g., through the OpenFlow protocol, the metro network, e.g., through the I2RS, and the core network, e.g., through the Path Computation Elements (PCEs).

The QoS experienced by traffic flows greatly depends on their route through a network. Wette et al. BIB001 have examined an application algorithm that reconfigures WDM network virtual topologies (see Section V-C1b) according to the traffic levels. The algorithm considers localized traffic information and the optical resource availability at the nodes. The algorithm does not require synchronization, thus reducing the overhead while simplifying the network design. In the proposed architecture, optical switches are connected to ROADMs. The reconfiguration application manages and controls the optical switches through the SDN controller. A new WDM controller is introduced to configure the lightpaths, taking wavelength conversion and lightpath switching at the ROADMs into consideration. The SDN controller operates on the optical network, which appears to it as a static network, while the WDM controller configures (and re-configures) the ROADMs to create multiple virtual optical networks according to the traffic levels. Evaluation results indicate improved utilization and throughput. The results indicate that virtual topology reconfigurations can significantly increase the flexibility of the network while achieving the desired QoS. However, the control overhead and the delay aspects due to the virtualization and the separation of control and lightwave paths need to be carefully considered.
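A minimal sketch of such a traffic-driven reconfiguration loop is given below. The utilization thresholds and the action interface toward the WDM controller are illustrative assumptions in the spirit of, but not identical to, the SVTR algorithm of BIB001 :

```python
# Minimal sketch of a traffic-driven virtual topology reconfiguration
# loop in the spirit of SVTR: locally observed per-link utilization
# triggers lightpath additions or removals via the WDM controller. The
# thresholds and the action interface are illustrative assumptions.

ADD_THRESHOLD = 0.8   # request an extra lightpath above 80% utilization
DROP_THRESHOLD = 0.2  # release a redundant lightpath below 20% utilization

def reconfigure(virtual_links):
    """Return the reconfiguration actions for one iteration."""
    actions = []
    for link in virtual_links:
        utilization = link["load"] / (link["lightpaths"] * link["capacity"])
        if utilization > ADD_THRESHOLD:
            actions.append(("add_lightpath", link["id"]))
        elif utilization < DROP_THRESHOLD and link["lightpaths"] > 1:
            actions.append(("remove_lightpath", link["id"]))
    return actions

links = [
    {"id": "a-b", "load": 17.0, "lightpaths": 2, "capacity": 10.0},  # 85% utilized
    {"id": "b-c", "load": 1.0, "lightpaths": 2, "capacity": 10.0},   # 5% utilized
]
print(reconfigure(links))  # [('add_lightpath', 'a-b'), ('remove_lightpath', 'b-c')]
```

Iterating such local decisions lets the topology track the observed traffic patterns without global traffic information, at the cost of possibly transient suboptimal configurations.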
|
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> We survey recent developments in the design of large-capacity content-addressable memory (CAM). A CAM is a memory that implements the lookup-table function in a single clock cycle using dedicated comparison circuitry. CAMs are especially popular in network routers for packet forwarding and packet classification, but they are also beneficial in a variety of other applications that require high-speed table lookup. The main CAM-design challenge is to reduce power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density. In this paper, we review CAM-design techniques at the circuit level and at the architectural level. At the circuit level, we review low-power matchline sensing techniques and searchline driving approaches. At the architectural level we review three methods for reducing power consumption. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> This paper presents Ethane, a new network architecture for the enterprise. Ethane allows managers to define a single network-wide fine-grain policy, and then enforces it directly. Ethane couples extremely simple flow-based Ethernet switches with a centralized controller that manages the admittance and routing of flows. While radical, this design is backwards-compatible with existing hosts and switches. We have implemented Ethane in both hardware and software, supporting both wired and wireless hosts. Our operational Ethane network has supported over 300 hosts for the past four months in a large university network, and this deployment experience has significantly affected Ethane's design. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> Enterprise network security is typically reactive, and it relies heavily on host security and middleboxes. This approach creates complicated interactions between protocols and systems that can cause incorrect behavior and slow response to attacks. We argue that imbuing the network layer with mechanisms for dynamic access control can remedy these ills. We propose Resonance, a system for securing enterprise networks, where the network elements themselves enforce dynamic access control policies based on both flow-level information and real-time alerts. Resonance uses programmable switches to manipulate traffic at lower layers; these switches take actions (e.g., dropping or redirecting traffic) to enforce high-level security policies based on input from both higherlevel security policies and distributed monitoring and inference systems. We describe the design of Resonance, apply it to Georgia Tech's network access control system, show how it can both overcome the current shortcomings and provide new security functions, describe our proposed deployment, and discuss open research questions. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> As broadband networks using Fiber-to-the-x (FTTx) technologies are being increasingly deployed in access networks, video service, especially VoD (Video-on-Demand), is becoming more attractive to deploy. To provide efficient VoD service, a cost-effective service model is very important and a lot of research has been conducted over the past decade. 
This paper reviews the existing literature on this topic, focusing on the user behavior in VoD services and bandwidth-saving multicast streaming schemes, which are the most important aspects of VoD service. First, we review the user behavior in VoD such as video popularity, daily access pattern, and interactive VCR (Videocassette Recorder) properties from recent data. Each video title's rental frequency, i.e., video popularity, follows the Zipf distribution, and this popularity can change with time or by service provider's recommendation of videos. This overall request frequency for each video constitutes a specific pattern throughout the day and has a similar pattern every day. Second, we review the bandwidth-saving streaming schemes such as broadcasting, batching, patching, and merging, which use multicast streaming technologies and user buffer memory. We review the mechanism of each multicast streaming technology and compare their differences. We also review the recent trends on multicast streaming technologies, which is summarized as hybrid architecture which combines several multicast streaming technologies to obtain better performance. Next, we review how these multicast streaming technologies implement interactive VCR functions. We classify the VCR interactivity into discontinuous and continuous VCR actions and examine the principles for VCR support in multicast streaming schemes: caching some video data for discontinuous VCR support and allocating contingency channels for continuous VCR support. We review mechanisms of VCR support for different multicast streaming schemes. Through this survey, we provide an in-depth understanding of VoD service deployment. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> The key challenges facing network architecture today are the ability to change rapidly with business needs and to control complexity. The Interface to the Routing System is one form of software-defined networks designed to address specific problems at the Internet scale. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> Software defined networking (SDN), originally designed to operate on access Ethernet-based networks, has been recently proposed for different specific networking scenarios, including core or metro/aggregation networks. In this study, we extend this concept to enable a comprehensive control of a converged access, metro and edges of a core network. In particular, a generalized SDN controller is proposed for upstream global QoS traffic engineering of passive optical networks (PONs), Ethernet metro/aggregation segment and IP/MPLS networks through the adoption of an unique interface, in the framework of the Interface to the Routing System (I2RS). Extended OpenFlow functionalities and Path Computation Element Protocol (PCEP) interfaces are encompassed to achieve effective dynamic flow control. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> Over the years, the demand for high bandwidth services, such as live and on-demand video streaming, steadily increased. The adequate provisioning of such services is challenging and requires complex network management mechanisms to be implemented by Internet service providers (ISPs). In current broadband network architectures, the traffic of subscribers is tunneled through a single aggregation point, independent of the different service types it belongs to. 
While having a single aggregation point eases the management of subscribers for the ISP, it implies huge bandwidth requirements for the aggregation point and potentially high end-to-end latency for subscribers. An alternative would be a distributed subscriber management, adding more complexity to the management itself. In this paper, a new traffic management architecture is proposed that uses the concept of Software Defined Networking (SDN) to extend the existing Ethernet-based broadband network architecture, enabling a more efficient traffic management for an ISP. By using SDN-enabled home gateways, the ISP can configure traffic flows more dynamically, optimizing throughput in the network, especially for bandwidth-intensive services. Furthermore, a proof-of-concept implementation of the approach is presented to show the general feasibility and study configuration tradeoffs. Analytic considerations and testbed measurements show that the approach scales well with an increasing number of subscriber sessions. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> This work experimentally demonstrates how to control and manage user Quality of Service (QoS) by acting on the switching on-off of the optical Gigabit Ethernet (GbE) interfaces in a wide area network test bed including routers and GPON accesses. The QoS is monitored at the user location by means of active probes developed in the framework of the FP7 MPLANE project. The network topology is managed according to some current Software Defined Network issues and in particular an Orchestrator checks the user quality, the traffic load in the GbE links and manages the network interface reconfiguration when congestion occurs in some network segments. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> We propose joint bandwidth provisioning and base station caching for video delivery in software-defined PONs. Performance evaluation via custom simulation models reveals 30% increase in served video requests and 50% reduction in service response delays. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> The paper offers an innovative approach for building future proof access network dedicated to B2B (Business To Business) applications. The conceptual model of considered network is based on three main assumptions. Firstly, we present a network design based on passive optical LAN architecture utilizing proven GPON (Gigabit-capable Passive Optical Network) technology. Secondly, the new business model is proposed. Finally, the major advantage of the solution is an introduction of SDN (Software-Defined Networking) paradigm to GPON area. Thanks to such approach network configuration can be easily adapted to business customers' demands and needs that can change dynamically over the time. The proposed solution provides a high level of service flexibility and supports sophisticated methods allowing users' traffic forwarding in efficient way. The paper extends a description of the OpenFlowPLUS protocol proposed in [18] . Additionally it provides an exemplary logical scheme of traffic forwarding relevant for GPON devices employing the OpenFlowPLUS solution. 
<s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> This paper presents FlowNAC, a Flow-based Network Access Control solution that allows to grant users the rights to access the network depending on the target service requested. Each service, defined univocally as a set of flows, can be independently requested and multiple services can be authorized simultaneously. Building this proposal over SDN principles has several benefits: SDN adds the appropriate granularity (fine-or coarse-grained) depending on the target scenario and flexibility to dynamically identify the services at data plane as a set of flows to enforce the adequate policy. FlowNAC uses a modified version of IEEE 802.1X (novel EAPoL-in-EAPoL encapsulation) to authenticate the users (without the need of a captive portal) and service level access control based on proactive deployment of flows (instead of reactive). Explicit service request avoids misidentifying the target service, as it could happen by analyzing the traffic (e.g. private services). The proposal is evaluated in a challenging scenario (concurrent authentication and authorization processes) with promising results. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> In Software Defined Networks (SDN), applications on the controller could enforce fine-grained control on flows by policies employing more packet fields. These policies are converted to flow entries and stored in switch Flow Table. To store these entries, Flow Table requires large storage space because an entry consisted of more packet fields needs more storage space and the number of entries also increases significantly due to fine-granularity definition of flows. However, Flow Table has limited storage space owing to the constraints of Ternary Content Addressable Memory (TCAM). As a result, the switch Flow Table in SDN faces scalability issue. We address this issue by means of adaptive Flow Table management, namely we manage how long the entries occupy the storage space by setting adaptive timeouts to them. Through this means, the storage space could be reused efficiently and more flows could be supported with the same Flow Table (without updating hardware devices). Our proposed method TimeoutX, for the first time, combines traffic characteristics, flow types and Flow Table utilization ratio to decide the timeout of each entry and it outperforms current timeout setting strategies in both metrics of table miss number and blocked packet number, which indicates TimeoutX could make the best of Flow Table and support more flows. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> Cloud based datacenters will be most suitable candidates for future software defined networking. The QoS requirements for shared data centers, hosting diverse applications could be successfully achieved through SDN architecture. This paper provides an extension of our previously proposed scheme QAMO that was aimed at achieving tangible QoS in datacenters through controlling bandwidth reservation in Multipath TCP and OBS layer while maintaining throughput efficiency. However, QAMO was designed for traditional networks and did not have the capability to adapt to current network status as expected from future software defined networks. 
The paper presents an enhanced algorithm called QAMO-SDN that introduces a controller layer in previously proposed architecture and achieves adaptive QoS differentiation based on current network feedback. QAMO-SDN inherits the architecture of QAMO, using Multipath TCP over OBS networks. We evaluate the performance of QAMO-SDN under different network loads and topologies using realistic data center traffic models and present the results of our detailed simulation tests. <s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> Today, off-line network planning tools are used to optimize transport networks from scratch. A set of pre-planned wavelength services is taken into account for routing optimization. Performing such optimization in an operational transport network may lead to the rerouting of a large amount of wavelengths per link followed by massive adjustments of optical power levels. Hence, an approach is needed that performs on-line optimization of an operational network in a continuous and incremental way, thus enabling a smooth migration towards always optimal routing. To implement such an optimization application, which is also vendor-independent, an open control framework like Transport - Software Defined Networking (T-SDN) is particularly suitable. Our T-SDN optimization application presented here runs on top of a SDN controller which - in turn - is communicating with the network elements. A demonstration environment has been developed to evaluate feasibility and benefits of such on-line network optimization. Beside the application, the demonstrator comprises an Alcatel-Lucent Proof-of-Concept T-SDN controller as well as an emulated transport network based on the Alcatel-Lucent 1830 Photonic Service Switch (PSS). For ease of demonstration, the current network topology and configuration are query results from the SDN controller and thus input parameters for our application. The application performs a continuous routing optimization which is aware of physical impairments of optical transmission. The goal of this incremental optimization process is to affect only a very small number of wavelengths per iteration thus limiting the impact of wavelength rerouting on the photonic layer. The process itself can either be controlled by the network operator or can be run periodically in background to determine inefficient routes and to find better alternatives. If a more efficient route was found, the new path information is posted to the SDN controller which has to implement the rerouting. Our application thus enables the permanent analysis of lightpath deployment in transport networks in order to increase resource utilization and reduce cost per service. <s> BIB014 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> Increased network traffic has put great pressure on edge router. It tends to be more expensive and consumes more resources. Buffer is especially the most valuable resource in the router. Given the potential benefits of reducing buffer sizes, a lot of debate on buffer sizing has emerged in the past few years. The small buffer rule, for example, was challenged at edge router. Instead of buffer sizing, the goal of our work is to find out a way to relieve the pressure of edge router and logically enlarge its buffer size without incurring additional costs. 
In this paper, taking advantage of the global view of SDN, we proposed Software Defined Backpressure Mechanism (SD-BM) to alleviate the pressure of edge router. Particularly, we gave Refill and Software Defined Networking based RED (RS-RED) algorithm, which makes it possible to enlarge the network buffer logically and offloads traffic from busy egress router to free ingress devices. Simulation results show that it has comparable performance, both in time delay and loss rate, with edge router which has large buffer in traditional way. The results can have consequences for the design of edge router and the related network. <s> BIB015 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> Software defined networking (SDN) decouples the network control and data planes. The network intelligence and state are logically centralized and the underlying network infrastructure is abstracted from applications. SDN enhances network security by means of global visibility of the network state where a conflict can be easily resolved from the logically centralized control plane. Hence, the SDN architecture empowers networks to actively monitor traffic and diagnose threats to facilitates network forensics, security policy alteration, and security service insertion. The separation of the control and data planes, however, opens security challenges, such as man-in-the middle attacks, denial of service (DoS) attacks, and saturation attacks. In this paper, we analyze security threats to application, control, and data planes of SDN. The security platforms that secure each of the planes are described followed by various security approaches for network-wide security in SDN. SDN security is analyzed according to security dimensions of the ITU-T recommendation, as well as, by the costs of security solutions. In a nutshell, this paper highlights the present and future security challenges in SDN and future directions for secure SDN. <s> BIB016 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> The predicted prevalence of both Internet of Things (IoT) based devices and the concept of Software Defined Networking (SDN) as a new paradigm in networking, means that consideration is required for how they will interact. Current SDN implementations operate on the principle that on receiving an unrecognised packet, a switch will query a centralised controller for a corresponding rule. Memory limitations within current switch devices dictate that this rule can only be stored for a short period of time before being removed, thus making it likely that the relatively infrequent data samples sent from IoT devices will have a transmission interval longer than this timeout. This paper proposes a Pre-emptive Flow Installation Mechanism (PFIM) that dynamically learns the transmission intervals of periodic network flows and installs the corresponding rules within a switch, prior to the arrival of a packet. A proof-of-concept implementation shows this to have a significant effect on reducing the delay experienced by these flows. <s> BIB017 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> Software Defined Networks (SDN) such as OpenFlow provides better network management for data center by decoupling control plane from data plane. Current OpenFlow controllers install flow rules with a fixed timeout after which the switch automatically removes the rules from its flow table. 
However, this fixed timeout has shown many disadvantages. For flows with short packet intervals, the timeout may be too large, so that flow rules stay in the flow table for too long and result in unnecessary occupation of the flow table; for flows with long packet intervals or periodic flows, the timeout may be too short, hence producing too many packet-in events and causing overload on the controller. In this paper, we propose the Intelligent Timeout Master, which can assign suitable timeouts to different flows according to their characteristics, as well as conduct a feedback control to adjust the maximum timeout value according to the current flow table occupation, in an effort to avoid flow table overflow. In our experiments, we use a real traffic trace and the results confirm that our Intelligent Timeout Master performs quite well in reducing the number of packet-in events as well as the flow table occupation. <s> BIB018 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> 4) End-to-End QoS <s> In this letter, we propose a novel SDN-based fast lightpath hopping (LPH) mechanism enabled by time synchronization to protect optical networks from eavesdropping and jamming. In order to realize fast LPH in optical networks, we establish an SDN-based LPH routing and signalling architecture, and set up an integer linear programming (ILP) model for fewest-shared-link multipath computation. We also demonstrate the first fast LPH prototype experiment and results show that an extremely high hop rate up to 1 MHz can be achieved with acceptable bit error rates (BERs) by control/switching separation and precise timing. <s> BIB019
(MPTCP). Ensuring QoS in such an MPTCP setting while preserving throughput efficiency in a reconfigurable underlying burst switching optical network is a challenging task. Tariq et al. BIB013 have proposed QoS-aware bandwidth reservation for MPTCP in an SDON. The bandwidth reservation proceeds in two stages: (i) path selection for MPTCP, and (ii) OBS wavelength reservation to assign the priorities for latency-sensitive flows. Larger portions of a wavelength reservation are assigned to high-priority flows, resulting in a reduced burst blocking probability while achieving higher MPTCP throughput. The simulation results in BIB013 validate the two-stage algorithm for QoS-aware MPTCP over an SDON, indicating decreased dropping probabilities and increased throughputs. The Interface to the Routing System (I2RS) is a high-level architecture for communicating and interacting with routing systems, such as BGP routers. A routing system may consist of several complex functional entities, such as a Routing Information Base (RIB), an RIB manager, topology and policy databases, along with routing and signalling units. The I2RS provides a programmability platform that enables access to and modification of the configurations of the routing system elements. The I2RS can be extended with SDN principles to achieve global network management and reconfiguration BIB005 . Sgambelluri et al. BIB006 presented an SDN based routing application within the I2RS framework to integrate the control of the access, metro, and core networks, as illustrated in Fig. 14 . The SDN controller communicates with the Path Computation Elements (PCEs) of the core network to create Label Switched Paths (LSPs) based on the information received from the OLTs. Experimental demonstrations validated the routing optimization based on the current traffic status and previous load, as well as the unified control interface for access, metro, and core networks. Ilchmann et al. BIB014 developed an SDN application that communicates with an SDN controller via an HTTP-based REST API. Over time, lightpaths in an optical network can become inefficient for a number of reasons (e.g., optical spectrum fragmentation). The application therefore evaluates the existing lightpaths in an optical network and offers the application user the option to reoptimize the lightpath routing to improve various performance metrics (e.g., path length). The application is user-interactive in that the user can see the number of proposed lightpath routing changes before they are made and can potentially select a subset of the proposed changes to minimize network down-time. At the ingress and egress routers of optical networks (e.g., the edge routers between access and metro networks), buffers are highly non-economical to implement, as they require large buffer sizes to accommodate channel rates of 40 Gb/s or more. To reduce the buffer requirements at the edge routers, Chang et al. BIB015 have proposed a backpressure application referred to as Refill and SDN-based Random Early Detection (RS-RED). RS-RED implements a refill queue at the ingress device and a drop-tail queue at the egress device, whereby both queues are centrally managed by the RS-RED algorithm running on the SDN controller. Simulation results showed that, at the expense of small delay increases, the edge router buffer sizes can be significantly reduced.
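To make the RS-RED coordination concrete, the following minimal Python sketch illustrates the two cooperating queues. It is not the implementation of BIB015; the thresholds, the refill batch size, and the queue representation are illustrative assumptions. The controller computes a RED-style drop probability from the egress queue length and releases a refill batch from the ingress queue only while the egress queue has headroom.

def red_drop_probability(avg_qlen, min_th, max_th, max_p):
    """Classic RED drop probability as a function of the average queue length."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

def rs_red_controller_step(ingress_backlog, egress_qlen, egress_capacity,
                           min_th=20, max_th=60, max_p=0.1, refill_batch=10):
    """One control step: decide how many packets the ingress refill queue may
    release toward the egress, and with which probability arriving packets
    are RED-dropped. Returns (packets_released, drop_probability)."""
    p_drop = red_drop_probability(egress_qlen, min_th, max_th, max_p)
    # Backpressure: release a refill batch only while the egress queue is
    # below the RED minimum threshold, i.e., while the egress has headroom.
    headroom = max(0, egress_capacity - egress_qlen)
    released = min(refill_batch, ingress_backlog, headroom) if egress_qlen < min_th else 0
    return released, p_drop

released, p = rs_red_controller_step(ingress_backlog=100, egress_qlen=15, egress_capacity=80)
print(f"release {released} packets, RED drop probability {p:.2f}")

In this sketch, the ingress queue effectively acts as a logical extension of the egress buffer: packets wait at the ingress, where memory is cheap, instead of overflowing the expensive egress buffer.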
5) QoS Management: Rückert et al. BIB007 proposed an SDN controlled home gateway supporting heterogeneous wired technologies, such as DSL, and wireless technologies, such as LTE and WiFi. SDN controllers managed by the ISPs optimize the traffic flows to each user while accommodating large numbers of users and ensuring their minimum QoS. Additionally, Tego et al. BIB008 demonstrated an experimental SDN based QoS management setup to optimize the energy utilization. GbE links are switched on and off based on the traffic levels. The QoS management reroutes the traffic to avoid congestion and achieve efficient throughput. SDN applications conduct active QoS probing to monitor the network QoS characteristics. Evaluations have indicated that the SDN based techniques achieve significantly higher throughput than non-SDN techniques BIB008 . 6) Video Applications: An application-aware SDN-enabled resource allocation application has been introduced by Chitimalla et al. to improve the video QoE in a PON access network. The resource allocation application uses application-level feedback to schedule the optical resources. The video resolution is incrementally increased or decreased based on the buffer utilization statistics that the client sends to the controller. The scheduler at the OLT schedules the packets based on weights calculated by the SDN controller, whereby the video applications at the clients communicate with the controller to determine the weights. If the network is congested, then the SDN controller communicates to the clients to reduce the video resolution so as to reduce the stalls and to improve the QoE. Caching of video data close to the users is generally beneficial for improving the QoE of video services , BIB004 . Li et al. BIB009 have introduced caching mechanisms for software-defined PONs. In particular, Li et al. have proposed the joint provisioning of the bandwidth to service the video and the cache management, as illustrated in Fig. 15 . Based on the request frequency for specific video content, the Base Station (BS) caches the content with the assistance of the SDN controller. The proposed push-based mechanism delivers (pushes) the video to the BS caches when the PON is not congested. A specific PON transmission sub-band can be used to multicast video content that needs to be cached at multiple BSs. The simulation evaluations in BIB009 indicate that up to 30% additional videos can be serviced while the service response delay is reduced by 50%. B. Access Control and Security 1) Flow-based Access Control: Network Access Control (NAC) is a networking application that regulates the access to network services BIB010 , BIB002 . A NAC based on traffic flows has been developed by Matias et al. BIB011 . FlowNAC exploits the forwarding rules of OpenFlow switches, which are set by a central SDN controller, to control the access of traffic flows to network services. FlowNAC can implement the access control based on various flow identifiers, such as MAC addresses or IP source and destination addresses. Performance evaluations measured the connection times for flows on a testbed and found average connection times on the order of 100 ms for completing the flow access control.
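The flow-level admission logic of a FlowNAC-style application can be sketched as follows. This is a hedged illustration, not the FlowNAC code of BIB011: the controller object, its install_flow() method, and the authorization table are hypothetical stand-ins for a concrete OpenFlow controller API.

# Minimal sketch of flow-based network access control in the spirit of
# FlowNAC BIB011. All names below are hypothetical illustrations.

AUTHORIZED_FLOWS = {
    # (source MAC, destination IP) pairs granted access to a service
    ("00:1b:21:3a:07:54", "10.0.0.42"),
}

def handle_new_flow(controller, switch_id, src_mac, dst_ip):
    """Called on the first packet of an unknown flow (packet-in event):
    install a forwarding rule for authorized flows, a drop rule otherwise."""
    match = {"eth_src": src_mac, "ipv4_dst": dst_ip}
    if (src_mac, dst_ip) in AUTHORIZED_FLOWS:
        controller.install_flow(switch_id, match, action="forward", priority=100)
    else:
        controller.install_flow(switch_id, match, action="drop", priority=100)

Because the decision is taken once per flow and then enforced in the switch hardware, subsequent packets of an admitted flow do not incur any controller round trip.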
In a related study, Nayak et al. BIB003 developed the Resonance flow-based access control system for an enterprise network. In the Resonance system, the network elements, such as the routers themselves, dynamically enforce access control policies. The access control policies are implemented through real-time alerts and flow-based information that is exchanged based on SDN principles. Nayak et al. have demonstrated the Resonance system on a production network at Georgia Tech. The Resonance design can be readily implemented in SDONs and extended to wide area networks. Consider, for example, multiple heterogeneous DCs of multiple organizations that are connected by an optical backbone network. The Resonance system can be extended to provide access control mechanisms, such as authentication and authorization, through such a wide area SDON. 2) Lightpath Hopping Security: The broad network perspective of SDN controllers facilitates the implementation of security functions that require this broad perspective BIB016 . However, SDN may also be vulnerable to a wide range of attacks and vulnerabilities, including unauthorized access, data leakage, data modification, and misconfiguration. Eavesdropping and jamming are security threats on the physical layer and are especially relevant for the optical layer of SDONs. In order to prevent eavesdropping and jamming in an optical lightpath, Li et al. BIB019 have proposed an SDN based fast lightpath hopping mechanism. As illustrated in Fig. 16 , the hopping mechanism operates over multiple lightpath channels. Conventional optical lightpath setup times range from several hundreds of milliseconds to several seconds and would result in a very low hopping frequency. To avoid the optical setup times during each hopping period, an SDN based high-precision time synchronization has been proposed. As a result, a fast hopping mechanism can be implemented and executed in a coordinated manner. A hop frame is defined and guard periods are added in between hop frames. The experimental evaluations indicate that a maximum hopping frequency of 1 MHz can be achieved with a BER of 1 × 10 −3 . However, a shortcoming of such mechanisms is the secure exchange of the hopping sequences between the transmitter and the receiver. Although centralized SDN control provides authenticated provisioning of the hopping sequence, additional mechanisms to secure the hopping sequence from being obtained through man-in-the-middle attacks should be investigated.
Fig. 16 . Overview of the optical lightpath hopping mechanism to secure a link from eavesdropping and jamming BIB019 : The flow marked by the diagonal shading hops from lightpath channel λ4 to λ2, then to λ3 and on to λ1. Transmissions by distinct flows on a given lightpath channel must be separated by at least a guard period.
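A minimal sketch of the hop schedule generation may clarify the mechanism: transmitter and receiver derive the same pseudo-random channel sequence from a shared secret and assign each hop to a fixed-length hop frame. The seed distribution, the frame and guard durations, and the SHA-256-based derivation below are illustrative assumptions, not the scheme of BIB019; the secure exchange of the seed is precisely the open issue noted above.

import hashlib

def hop_sequence(shared_secret: bytes, num_hops: int, num_channels: int):
    """Derive a pseudo-random lightpath hopping sequence from a shared secret,
    so that transmitter and receiver hop through the same channel order."""
    seq = []
    for i in range(num_hops):
        digest = hashlib.sha256(shared_secret + i.to_bytes(8, "big")).digest()
        seq.append(digest[0] % num_channels)
    return seq

# Hop frames at 1 MHz: each 1 us frame carries payload plus a guard period
# that separates transmissions of distinct flows on the same channel.
FRAME_NS, GUARD_NS = 1000, 100
schedule = [(i * FRAME_NS, ch) for i, ch in
            enumerate(hop_sequence(b"example-secret", num_hops=8, num_channels=4))]
print(schedule)  # [(start time in ns, lightpath channel index), ...]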
3) Flow Timeout: SDN flow actions on the forwarding and switching elements generally have a validity period. Upon expiration of the validity period, i.e., the flow action timeout, the forwarding or switching element drops the flow action from the forwarding information base or the flow table. The switching element CPU must be able to access the flow action information with very low latency so as to perform the switching actions at the line rate. Therefore, the flow actions are commonly stored in Ternary Content Addressable Memories (TCAMs) BIB001 , which are limited to storing on the order of thousands of distinct entries. In SDONs, the optical network elements perform the actions set by the SDN controller. These actions have to be stored in a finite memory space. Therefore, it is important to utilize the finite memory space as efficiently as possible BIB017 - BIB012 . In the dynamic timeout approach BIB018 , the SDN controller tracks the TCAM occupancy levels in the switches and adjusts the timeout durations accordingly. However, a shortcoming of such techniques is that the bookkeeping processes at the SDN controllers can become cumbersome for a large network. Therefore, autonomous timeout management techniques that are implemented at the hypervisors can reduce the controller processing load and are an important future research direction.
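The feedback control of the dynamic timeout approach BIB018 can be sketched as a simple timeout assignment function. The specific bounds, the high watermark, and the 1.5x inter-arrival margin below are illustrative assumptions rather than the published parameterization.

def dynamic_idle_timeout(tcam_occupancy, packet_interval_s,
                         t_min=2, t_max=60, high_watermark=0.9):
    """Assign a per-flow idle timeout in the spirit of BIB018: cover the
    observed packet inter-arrival time of the flow, but shrink the upper
    bound as the TCAM fills up."""
    # Feedback control on the maximum timeout: a nearly full flow table
    # forces aggressive eviction to avoid flow table overflow.
    if tcam_occupancy >= high_watermark:
        effective_max = t_min
    else:
        effective_max = t_min + (t_max - t_min) * (1 - tcam_occupancy / high_watermark)
    # A timeout slightly above the packet inter-arrival time avoids repeated
    # packet-in events for periodic flows.
    return min(max(1.5 * packet_interval_s, t_min), effective_max)

print(dynamic_idle_timeout(tcam_occupancy=0.5, packet_interval_s=10))  # -> 15.0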
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Energy Efficiency <s> Since the energy crisis and environmental protection are gaining increasing concerns in recent years, new research topics to devise technological solutions for energy conservation are being investigated in many scientific disciplines. Specifically, due to the rapid growth of energy consumption in ICT (Information and Communication Technologies), lot of attention is being devoted towards "green" ICT solutions. In this paper, we provide a comprehensive survey of the most relevant research activities for minimizing energy consumption in telecom networks, with a specific emphasis on those employing optical technologies. We investigate the energy-minimization opportunities enabled by optical technologies and classify the existing approaches over different network domains, namely core, metro, and access networks. A section is also devoted to describe energy-efficient solutions for some of today's important applications using optical network technology, e.g., grid computing and data centers. We provide an overview of the ongoing standardization efforts in this area. This work presents a comprehensive and timely survey on a growing field of research, as it covers most aspects of energy consumption in optical telecom networks. We aim at providing a comprehensive reference for the growing base of researchers who will work on energy efficiency of telecom networks in the upcoming years. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Energy Efficiency <s> This is Part II of a two-part paper that explores the fundamental limitations on energy consumption in optical communications. Part I covers energy consumption in optical transport. Part II explores the lower bound on energy consumption in optical switches and networks, analyzes the energy performance of a range of switching devices, and presents quantitative models of the lower bounds on energy consumption in these devices. These models are incorporated into a simple model of a global switched network and the lower bound on total network energy consumption is estimated. We compare the results of this bottom-up calculation of the lower bound on network energy with a previous top-down analysis of overall network energy consumption based on real-world data for state-of-the art equipment and “business-as-usual” forward projections. The present analysis confirms a previous finding in that in a global scale network, the energy consumption of the switching infrastructure is larger than the energy consumption of the transport infrastructure. We find that the theoretical lower bounds on transport energy identified in Part I and the switching energy in this paper are more than three orders of magnitude lower than predicted by a “business-as-usual” analysis. In this paper, we explore how the gap between the theoretical lower bounds on energy consumption and current trends in network energy efficiency can be closed. We argue that future research needs to focus on improving the energy efficiency of switching and on devising methods to reduce the quantity of switching infrastructure in the network. Further key strategies for reducing network energy consumption include developing of low-energy transport technologies, reducing the energy overheads associated with peripheral functions that are not central to the transport and switching of data, and reducing the energy consumption of the access network. 
<s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Energy Efficiency <s> Nowadays, most service providers offer their services and support their applications through federated sets of data centers that need to be interconnected using high-capacity telecom transport networks. To provide such high-capacity network channels, data center interconnection is typically based on IP and optical transport networks that ensure certain end-to-end connectivity performance guarantees. However, in the current mode of operation, the control of IP networks, optical networks, and data centers is separately deployed. Enabling even a limited interworking among these separated control systems requires the adoption of complex and inelastic interfaces among the various networks, and this solution is not efficient enough to provide the required quality of service. In this paper, we propose a multi-stratum resource integration (MSRI) architecture for OpenFlow-based data center interconnection using IP and optical transport networks. The control of the architecture is implemented through multiple OpenFlow controllers' cooperation. By exchanging information among multiple controllers, the MSRI can effectively overcome the interworking limitations of a multi-stratum architecture, enable joint optimization of data center and network resources, and enhance the data center responsiveness to end-to-end service demands. Additionally, a service-aware flow estimation strategy for MSRI is introduced based on the proposed architecture. The overall feasibility and efficiency of the proposed architecture are experimentally demonstrated on our optical as a service testbed in terms of blocking probability, resource occupation rate, and path provisioning latency. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Energy Efficiency <s> The key current challenges for the industrial application of all optical switching networks are energy consumption, transmission rate, spectrum efficiency, and switching throughput. The energy consumption problem is mainly researched in this paper. From the perspective of components and modules, node equipment, and network levels, different enabling technologies are proposed to overcome this problem, which are also evaluated through different experimental demonstrations. First, high-sampling-rate digital-to-analog converters (DACs) and WSS-based ROADM modules are demonstrated as components and modules for energy-efficient all optical switching networks. Then, an all optical transport network test-bed consisting of 10 Pbit/s level all optical switching nodes based on multi-level and multi-planar switching architecture is experimentally demonstrated for the first time, which can reduce power consumption by 43%. A control architecture for energy-efficient all optical switching networks is built with OpenFlow based software defined networking (SDN), and experimental results are given to verify the performance of this control architecture. Finally, we describe an All Optical Networks Innovation (AONI) project in China, which aims to explore transmission, switching, and networking technologies in all optical switching networks, and then two application scenarios are forecast based on the technical breakthroughs of this project. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. 
Energy Efficiency <s> This work experimentally demonstrates how to save energy in GbE router network architectures by switching off idle optical Gigabit Ethernet (GbE) interfaces during low traffic periods. Two energy saving approaches, called Fixed Upper Fixed Lower (FUFL) and Dynamic Upper Fixed Lower (DUFL), have been adopted. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Energy Efficiency <s> Data center interconnect with elastic optical network is a promising scenario to meet the high burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented multi-stratum resilience between IP and elastic optical networks that allows to accommodate data center services. In view of this, this study extends to consider the resource integration by breaking the limit of network device, which can enhance the resource utilization. We propose a novel multi-stratum resources integration (MSRI) architecture based on network function virtualization in software defined elastic data center optical interconnect. A resource integrated mapping (RIM) scheme for MSRI is introduced in the proposed architecture. The MSRI can accommodate the data center services with resources integration when the single function or resource is relatively scarce to provision the services, and enhance globally integrated optimization of optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of OpenFlow-based enhanced software defined networking (eSDN) testbed. The performance of RIM scheme under heavy traffic load scenario is also quantitatively evaluated based on MSRI architecture in terms of path blocking probability, provisioning latency and resource utilization, compared with other provisioning schemes. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Energy Efficiency <s> As networks grow in size, large-scale failures caused by disasters may lead to huge data loss, especially in an optical network employing wavelength-division multiplexing (WDM). Providing 100 % protection against disasters would require massive and economically unsustainable bandwidth overprovisioning, as disasters are difficult to predict, statistically rare, and may create large-scale failures. Backup reprovisioning schemes are proposed to remedy this problem, but in case of a large-scale disaster, even the flexibility provided by backup reprovisioning may not be enough, given the sudden reduction in available network resource, i.e., resource crunch. To mitigate the adverse effects of resource crunch, an effective resource reallocation is possible by exploiting service heterogeneity, specifically degraded-service tolerance, which makes it possible to provide some level of service, e.g., reduced capacity, to connections that can tolerate degraded service, versus no service at all. Software-Defined Networking (SDN) is a promising approach to perform such dynamic changes (redistribution of network resources) as it simplifies network management via centralized control logic. 
By exploiting these new opportunities, we propose a Backup Reprovisioning with Partial Protection (BRPP) scheme supporting dedicated-path protection, where backup resources are reserved but not provisioned (as in shared-path protection), such that the amount of bandwidth reserved for backups as well as their routings are subject to dynamic changes, given the network state, to increase utilization. The performance of the proposed scheme is evaluated by means of SDN emulation using Mininet environment and OpenDaylight as the controller. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Energy Efficiency <s> Cloud radio access network (C-RAN) has become a promising scenario to accommodate high-performance services with ubiquitous user coverage and real-time cloud computing using cloud BBUs. In our previous work, we implemented cross stratum optimization of optical network and application stratums resources that allows to accommodate the services in optical networks. In view of this, this study extends to consider the multiple dimensional resources optimization of radio, optical and BBU processing in 5G age. We propose a novel multi-stratum resources optimization (MSRO) architecture with network functions virtualization for cloud-based radio over optical fiber networks (C-RoFN) using software defined control. A global evaluation scheme (GES) for MSRO in C-RoFN is introduced based on the proposed architecture. The MSRO can enhance the responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical and BBU resources effectively to maximize radio coverage. The efficiency and feasibility of the proposed architecture are experimentally demonstrated on OpenFlow-based enhanced SDN testbed. The performance of GES under heavy traffic load scenario is also quantitatively evaluated based on MSRO architecture in terms of resource occupation rate and path provisioning latency, compared with other provisioning scheme. <s> BIB008
The separation of the control plane from the data plane and the global network perspective are unique advantages of SDN for improving the energy efficiency of networks, which is an important goal BIB002 , BIB001 . 1) Power-saving Application Controller: Ji et al. BIB004 have proposed an all-optical energy-efficient network centered around an application controller BIB003 , BIB006 that monitors power consumption characteristics and enforces power savings policies. Ji et al. first introduce energy-efficient variations of Digital-to-Analog Converters (DACs) and wavelength-selective ROADMs as components for their energy-efficient network. Second, Ji et al. introduce an energy-efficient switch architecture that consists of multiple parallel switching planes, whereby each plane consists of three stages, with optical burst switching employed in the second (central) switching stage. Third, Ji et al. detail a multilevel SDN based control architecture for the network built from the introduced components and switch. The control structure accommodates multiple network domains, whereby each network domain can involve multiple switching technologies, such as time-based and frequency-based optical switching. All controllers for the various domains and technologies are placed under the control of an application controller. Dedicated power monitors that are distributed throughout the network update the SDN based application controller about the energy consumption characteristics of each network node. Based on the received energy consumption updates, the application controller executes power-saving strategies. The resulting control actions are signalled by the application controller to the various controllers for the different network domains and technologies. An extension of this multilevel architecture to cloud-based radio access networks has been examined in BIB008 . 2) Energy-Saving Routing: Tego et al. BIB005 have proposed an energy-saving application that switches off under-utilized GbE network links. Specifically, Tego et al. proposed two methods: Fixed Upper Fixed Lower (FUFL) and Dynamic Upper Fixed Lower (DUFL). In FUFL, the IP routing and the connectivity of the logical topology are fixed. The utilization of the physical GbE links (whereby multiple parallel physical links form a logical link) is compared with a threshold to determine whether to switch off or on individual physical links (that support a given logical link).
Fig. 17 . Illustration of the application layer modules of the SDN based network reprovisioning framework for disaster-aware networking BIB007 .
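The reporting and policy loop of such a power-saving application controller might be sketched as follows; all class and method names are hypothetical illustrations rather than the interfaces of BIB004. A monitor update triggers a policy check, and nearly idle nodes are drained and put to sleep via the responsible domain controller.

# Minimal sketch of a power-saving control loop: distributed power monitors
# push per-node consumption readings to an application controller, which
# applies a policy and signals the affected domain controllers.

class PowerSavingApp:
    def __init__(self, domain_controllers, idle_watts_threshold=50.0):
        self.domain_controllers = domain_controllers  # domain id -> controller proxy
        self.idle_watts_threshold = idle_watts_threshold
        self.readings = {}  # node id -> latest power reading in watts

    def on_power_update(self, domain, node, watts, utilization):
        """Called whenever a power monitor pushes a new reading."""
        self.readings[node] = watts
        # Policy: a nearly idle node drawing noticeable power is put to
        # sleep; its traffic is first rerouted by the domain controller.
        if utilization < 0.05 and watts > self.idle_watts_threshold:
            ctrl = self.domain_controllers[domain]
            ctrl.reroute_traffic(away_from=node)
            ctrl.set_power_state(node, "sleep")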
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> The problem of energy consumption in data centers has attracted many researchers' interest recently. This paper proposes an optimal energy consumption Software Defined Network (SDN) data center model using the dynamic activation of hosts and switches. We model switches and hosts as queues and formulate a Mixed Integer Linear Programming (MILP) model to minimize energy consumption while guaranteeing the Quality of Service (QoS) of the data center. Our purpose is to minimize the static power, port power, and memory power of data centers. Since the problem is NP-hard, we adopt a Simulated Annealing algorithm to obtain the solution. Through numerical experiments, we observe that our model is able to save a reasonable amount of energy compared to the fully operating data center model. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> Elastic Optical Networks will drive a high degree of flexibility enabling dynamically configurable lightpath provisioning and re-optimization due to next generation bandwidth variable transponders and switches. In order to guarantee quality of transmission (QoT), novel Operation Administration and Maintenance (OAM) solutions are necessary with respect to existing standard management protocols. For optical networks, scalable mechanisms providing fast and effective QoT alarm information, including localization and, possibly, forecasting critical events, are needed. The introduction of the Application Based Network Operation (ABNO) architecture is pushing towards a dedicated OAM Handler, in charge of collecting OAM information from the network, performing correlations, and triggering control plane reactions. However, serious scalability issues may arise since a centralized element would have to process a potentially huge amount of data. In this paper, a novel hierarchical OAM architecture is proposed that enables multi-level OAM entities to provide the OAM Handler with effective information, obtained by filtering several OAM messages at each layer, so that the overload of the OAM Handler is avoided. Moreover, the NETCONF protocol, typically used for SDN-based node configuration purposes, is proposed and utilized as the OAM protocol, in order to achieve a high degree of convergence and limit the number of utilized protocols. The proposed OAM architecture is implemented and experimentally evaluated in a QoT degradation use case, showing that multi-level localization and local correlation of events allow an aggregated, fast, and scalable OAM information set to be provided to the OAM Handler. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> We test an end-to-end 1:1 protection scheme for combined LR-PON access and core networks using separate but loosely coupled SDN controllers, over a Pan-European network. Fast recovery is achieved in 7 ms in the access and 52 ms in the core. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> A hierarchically controlled IP+Optical multilayer Transport SDN architecture is proposed, which highlights flexible resource provisioning and dynamic cross-layer restorations. The proposals are also demonstrated via an implemented testbed prototype.
<s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> In this paper, we study the interdependency between the power grid and the communication network used to control the grid. A communication node depends on the power grid in order to receive power for operation, and a power node depends on the communication network in order to receive control signals for safe operation. We demonstrate that these dependencies can lead to cascading failures, and it is essential to consider the power flow equations for studying the behavior of such interdependent networks. We propose a two-phase control policy to mitigate the cascade of failures. In the first phase, our control policy finds the non-avoidable failures that occur due to physical disconnection. In the second phase, our algorithm redistributes the power so that all the connected communication nodes have enough power for operation and no power lines overload. We perform a sensitivity analysis to evaluate the performance of our control policy, and show that our control policy achieves close to optimal yield for many scenarios. This analysis can help design robust interdependent grids and associated control policies. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> The growing energy consumption has posed new challenges for the future development of networks. Some earlier work has proposed solutions to improve energy consumption based on the existing control plane, for example, node/link sleeping. This study presents a new possibility to reduce network energy consumption by proposing a new integrated control plane structure utilising Software Defined Networking technologies. The integrated control plane increases the efficiencies of exchanging control information across different network domains, while introducing new possibilities to the routing methods and the control over quality of service (QoS). The structure is defined as an overlay generalised multi-protocol label switching (GMPLS) control model. With the defined structure, the integrated control plane is able to gather information from different domains (i.e., the optical core network and the access networks), and enables energy-efficient networking over a wider area. In the case presented, the integrated control plane collects the network energy related information and the QoS requirements of different types of traffic. This information is used to define the routing behaviours of specific traffic groups (flows). With the flexibility of the routing structure, results show that the energy efficiency of the network can be improved without compromising the QoS for delay/blocking sensitive services. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> The most important features of software defined networks are resource virtualization and centralized optimal control that is fully programmatically realized. This, in turn, gives rise to the development of a mathematical model that allows finding the optimal solution. An important role in the optimization process also belongs to the criterion of optimality (objective function). Taking into account global trends, the paper offers a mathematical model of an SDN-driven transport packet optical network that allows optimizing network resources according to an energy saving criterion.
It will be shown that multipath routing over virtualized IP-links leads to saving 10% of power consumption for every additionally used path. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> This paper outlines the issues in providing a seamless integration between energy-efficient optical access networks and metro networks that preserves the overall latency balance. A solution based on SDN is proposed and detailed. The proposed solution allows trading the increased delay in the access section, due to the utilization of energy efficient schemes, against a reduced delay in the metro section. Experiments in a geographically distributed testbed evaluate the different delay contributions. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> As networks grow in size, large-scale failures caused by disasters may lead to huge data loss, especially in an optical network employing wavelength-division multiplexing (WDM). Providing 100 % protection against disasters would require massive and economically unsustainable bandwidth overprovisioning, as disasters are difficult to predict, statistically rare, and may create large-scale failures. Backup reprovisioning schemes are proposed to remedy this problem, but in case of a large-scale disaster, even the flexibility provided by backup reprovisioning may not be enough, given the sudden reduction in available network resources, i.e., a resource crunch. To mitigate the adverse effects of a resource crunch, an effective resource reallocation is possible by exploiting service heterogeneity, specifically the degraded-service tolerance, which makes it possible to provide some level of service, e.g., reduced capacity, to connections that can tolerate degraded service, versus no service at all. Software-Defined Networking (SDN) is a promising approach to perform such dynamic changes (redistribution of network resources) as it simplifies network management via centralized control logic. By exploiting these new opportunities, we propose a Backup Reprovisioning with Partial Protection (BRPP) scheme supporting dedicated-path protection, where backup resources are reserved but not provisioned (as in shared-path protection), such that the amount of bandwidth reserved for backups, as well as their routings, are subject to dynamic changes, given the network state, to increase utilization. The performance of the proposed scheme is evaluated by means of SDN emulation using the Mininet environment and OpenDaylight as the controller. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> Optical transport networks typically deploy dynamic restoration mechanisms in order to automatically recover optical connections disrupted by network failures. Elastic optical networks (EONs), currently emerging as the next-generation technology to be adopted in optical transport, introduce new challenges for traditional generic multiprotocol label-switching (GMPLS)-based restoration that may seriously impact the achievable recovery time. At the same time, the software-defined networking (SDN) framework is emerging as an alternative control plane. It is therefore important to investigate possible benefits provided by SDN in the implementation of restoration mechanisms for EONs. This paper proposes a dynamic restoration scheme for EONs based on the SDN framework.
The proposed scheme concurrently exploits centralized path computation and node configuration to avoid contentions during the recovery procedure, with the final aim of minimizing the recovery time. The performance of the proposed scheme is evaluated by means of simulations in terms of recovery time and restoration blocking probability, and compared against three reference schemes based on GMPLS and SDN. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> Elastic optical networking (EON), with its flexible use of the optical spectrum, is a promising solution for future metro/core optical networking. For the deployment of EON in a real-operational scenario, dynamic lightpath restoration, driven by an intelligent control plane, is a necessary network function. Dynamic restoration can restore network services automatically and, thus, greatly reduce the operational cost, compared with traditional manual or semistatic lightpath restoration strategies enabled by network operators via a network management system. To this end, in this paper, we present an OpenFlow-enabled dynamic lightpath restoration in elastic optical networks, detailing the restoration framework and algorithm, the failure isolation mechanism, and the proposed OpenFlow protocol extensions. We quantitatively present the restoration performance via control plane experimental tests on the Global Environment for Network Innovations testbed. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> We demonstrate a cross-layer orchestration for packet service over IP-optical networks, in terms of availability and elasticity. Our orchestration, built on the SDN concept, self-adjusts and cost-efficiently responds to dynamics on network paths, impairments/failures, and topology. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> Network virtualization is an emerging technique that enables multiple tenants to share an underlying physical infrastructure, isolating the traffic running over different virtual infrastructures/tenants. This technique aims to improve network utilization, while reducing the complexities in terms of network management for operators. Applied to this context, the software-defined networking (SDN) paradigm can ease network configurations by enabling network programmability and automation, which reduces the amount of operations required from both service and infrastructure providers. SDN techniques are decreasing vendor lock-in issues due to specific configuration methods or protocols. Application-based network operations (ABNO) are a toolbox of key network functional components with the goal of offering application-driven network management. Service provisioning using ABNO may involve direct configuration of data plane elements or delegate it to several control plane modules. We validate the applicability of ABNO to multitenant virtual networks in multitechnology optical domains based on two scenarios, in which multiple control plane instances are orchestrated by the architecture. Congestion detection and failure recovery are chosen to demonstrate fast recalculation and reconfiguration, while hiding the configurations in the physical layer from the upper layer.
<s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> The operation of smart power grids will depend on a reliable and flexible communication infrastructure for monitoring and control. Software defined networking (SDN) is emerging as a promising control platform, facilitating network programmability and bandwidth flexibility. We study SDN optical transmission reliability for smart grid applications. We identify the collaboration of the control plane and the data plane in software-defined optical transmission systems as a cyber-physical interdependency where the ‘physical’ fiber network provides the ‘cyber’ control network with means to distribute control and signaling messages and in turn is itself operated by these ‘cyber’ control messages. We examine the robustness of such an interdependent communication system and quantify the advantages of optical layer reconfigurability. <s> BIB014 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> SDN Controller <s> Cloud computing is dominating Internet services and will continue to expand in the foreseeable future. It is very challenging for network operators to evolve their infrastructures to be more intelligent and agile in resource orchestration. Nowadays, the term optical networks denotes high-capacity telecommunications networks based on optical technologies and components that can provide capacity, provisioning, routing, grooming, or restoration at the wavelength level. The proposed Elastic Optical Networks (EONs) technology is expected to mitigate this problem by adaptively allocating spectral resources according to client traffic demands. In this paper, we focus on survivability problems in dynamic routing in EONs. We propose the Adaptive Survivability (AS) approach to achieve the best trade-off between the efficiency of path protection and the cost of routing. Moreover, we propose an entirely new Routing, Spectrum and Modulation Assignment (RMSA) algorithm to optimize both anycast and unicast traffic flows. Finally, we evaluate the performance of the RMSA algorithms and assess the effectiveness of the AS approach under various network scenarios. The main conclusion is that using the AS approach results in a significant improvement of network performance. <s> BIB015
The traffic on a physical link that is about to be switched off is rerouted onto a parallel physical GbE link (within the same logical link). In contrast, in the DUFL approach, the energy-saving application monitors the load levels on the virtual links. If the load level on a given virtual link falls below a threshold value, then the virtual link topology is reconfigured to eliminate the virtual link with the low load. A general pitfall of such link switch-off techniques is that the energy savings may be achieved at the expense of deteriorating QoS. The QoS should therefore be closely monitored when switching off links and re-routing flows. A similar SDN based routing strategy that strives to save energy while preserving the QoS has been examined in the context of GMPLS optical networks in BIB006 . Multipath routing optimization applications that strive to save energy in an SDN based transport optical network have been presented in BIB007 . A similar SDN based optimization approach for reducing the energy consumption in data centers has been examined by Yoon et al. BIB001 . Yoon et al. formulated a mixed integer linear program that models the switches and hosts as queues. Essentially, the optimization decides on the switches and hosts that could be turned off. As the problem is NP-hard, annealing algorithms are examined. Simulations indicate that energy savings of more than 80% are possible for low data center utilization rates, while the energy savings decrease to less than 40% for high data center utilization rates. Traffic balancing in metro optical access networks through the SDN based reconfiguration of optical subscriber units in TWDM-PON systems for energy savings has additionally been demonstrated in BIB008 . D. Failure Recovery and Restoration 1) Network Reprovisioning: Network disruptions can occur due to various natural and/or man-made factors. Network resource reprovisioning is the process of changing the network configurations, e.g., the network topology and routes, to recover from failures. A Backup Reprovisioning with Partial Protection (BRPP) scheme based on SDN for optical networks has been presented by Savas et al. BIB009 . An SDN application framework, as illustrated in Fig. 17 , was designed to support the reprovisioning with services, such as the provisioning of new connections, risk assessment, as well as service level and backup management. When new requests are received by the BRPP application framework, the statistics module evaluates the network state to find the primary path and a link-disjoint backup path. The computed backup paths are stored as logical links without being provisioned on the physical network. The logical backup module manages and recalculates the logical links when a new backup path cannot be accommodated, or to optimize the existing backup paths (e.g., minimize the backup path distance). Savas et al. introduce a degraded backup path mechanism that reserves not the full, but a lower (degraded) transmission capacity on the backup paths, so as to accommodate more requests. Emulations of the proposed mechanisms indicate improved network utilization while effectively provisioning the backup paths for restoring the network after network failures.
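The degraded backup idea can be captured in a one-line capacity computation; the interface below is an illustrative sketch, not the BRPP implementation of BIB009. A service that tolerates a 40% degradation only needs 60% of its primary capacity reserved on the backup path, leaving spare capacity for additional backups.

def degraded_backup_reservation(primary_gbps, degradation_tolerance,
                                available_backup_gbps):
    """Sketch of the degraded backup idea in BRPP BIB009: reserve only the
    fraction of the primary capacity that the service can tolerate after a
    failure, so that more backup paths fit into the spare capacity.
    degradation_tolerance = 0.4 means the service survives with 60% capacity."""
    needed = primary_gbps * (1.0 - degradation_tolerance)
    if needed <= available_backup_gbps:
        return needed  # logically reserved, not provisioned until a failure
    return None  # backup cannot be accommodated; trigger recalculation

print(degraded_backup_reservation(100, degradation_tolerance=0.4,
                                  available_backup_gbps=70))  # -> 60.0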
As a part of DARPA's core optical networks CORONET project, a non-SDN based Robust Optical Layer End-to-end X-connection (ROLEX) protocol has been demonstrated and presented along with the lessons learned . ROLEX is a distributed protocol for failure recovery that requires a considerable amount of signaling between the nodes for the distributed management. Therefore, to avoid the pitfall of excessive signaling, it may be worthwhile to examine a ROLEX version with centralized SDN control in future research to reduce the recovery time and signaling overhead, as well as the costs of the restored paths, while ensuring the user QoS. 2) Restoration Processing: During a restoration, the network control plane simultaneously triggers the backup provisioning of all disrupted paths. In GMPLS restoration, along with signaling flooding, there can be contention of signaling messages at the network nodes. Contentions may arise due to spectrum conflicts of the lightpaths, or node-configuration overrides, i.e., a new configuration request arrives while a preceding reconfiguration is under way. Giorgetti et al. BIB010 have proposed dynamic restoration in the elastic optical network to avoid signaling contention in SDN (i.e., of OpenFlow messages). Two SDN restoration mechanisms were presented: (i) the independent restoration scheme (SDN-ind), and (ii) the bundle restoration scheme (SDN-bund). In SDN-ind, the controller triggers simultaneous independent flow modification (FlowMod) messages for each backup path to the switches involved in the reconfigurations. During contention, switches enqueue the multiple received FlowMod messages and process them sequentially. Although SDN-ind achieves reduced recovery times compared to non-SDN GMPLS, the waiting of the messages in the queue incurs a delay. In SDN-bund, the backup path reconfigurations are bundled into a single message, i.e., a Bundle FlowMod message, and sent to each involved switch. Each switch then configures the flow modifications in one reconfiguration, eliminating the delay incurred by the queuing of FlowMod messages. A similar OpenFlow enabled restoration in Elastic Optical Networks (EONs) has been studied in BIB011 . 3) Reconfiguration: Aguado et al. BIB013 have demonstrated a failure recovery mechanism with dynamic virtual reconfigurations using SDN as part of the EU FP7 STRAUSS project. They considered multidomain hypervisors and domain-specific controllers to virtualize the multidomain networks. The Application-Based Network Operations (ABNO) framework illustrated in Fig. 18 enables network automation and programmability. ABNO can compute end-to-end optical paths and delegate the configurations to the lower layer domain SDN controllers. Requirements for fast recovery from network failures are on the order of tens of milliseconds, which is challenging to achieve in large-scale networks. ABNO reduces the recovery times by pre-computing the backup connections after the first failure, while the Operation, Administration, and Maintenance (OAM) module BIB002 communicates with the ABNO controller to configure the new end-to-end connections in response to a failure alarm. Failure alarms are triggered by the domain SDN controllers, which monitor the traffic via optical power meters, when the power is below −20 dBm.
Fig. 18 . Illustration of the Application-Based Network Operation (ABNO) architecture: The ABNO controller communicates with the Operation, Administration, and Maintenance (OAM) module, the Path Computation Element (PCE) module, as well as the topology modules and the provisioning manager to control the lower domain SDN controllers so as to recover from network failures BIB013 .
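The alarm path can be sketched as a simple threshold check over the power meter readings; the controller interface below is a hypothetical stand-in, and only the −20 dBm threshold is taken from the description above.

ALARM_THRESHOLD_DBM = -20.0  # power level below which a link is considered failed

def check_power_meters(readings_dbm, abno_controller):
    """Sketch of the alarm path: the domain controller polls its optical
    power meters and notifies the ABNO controller of failed links, which
    then activates the pre-computed backup connections.
    abno_controller.activate_backup() is a hypothetical interface."""
    for link, power_dbm in readings_dbm.items():
        if power_dbm < ALARM_THRESHOLD_DBM:
            abno_controller.activate_backup(failed_link=link)

class PrintingAbno:  # stand-in ABNO controller for illustration
    def activate_backup(self, failed_link):
        print(f"failure alarm on {failed_link}: switching to backup path")

check_power_meters({("A", "B"): -3.1, ("B", "C"): -27.5}, PrintingAbno())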
In order to ensure survivability, an adaptive survivability scheme that takes routing as well as spectrum assignment and modulation into consideration has been explored in BIB015 . A similar design for end-to-end protection and failure recovery has been demonstrated by Slyne et al. BIB003 for a long-reach (LR) PON. LR-PON failures are highly likely due to physical breaks in the long feeder fibers. Along with the high impact of a connectivity breakdown or degraded service, the physical restoration time can be very long. Therefore, 1:1 protection for LR-PONs based on SDN has been proposed, where primary and secondary (backup) OLTs are used without traffic duplication. More specifically, Slyne et al. have devised and demonstrated an OpenFlow-Relay located at the switching unit. The OpenFlow-Relay detects and reports a failure along with the fast updating of the forwarding rules. Experimental demonstrations show the backup OLT carrying the protected traffic within 7.2 ms after a failure event. An experimental demonstration utilizing multiple paths in optical transport networks for failure recovery has been discussed by Kim et al. BIB012 . Kim et al. have used commercial-grade IP WDM network equipment and implemented multipath TCP in an SDN framework to emulate inter-DC communication. They developed an SDN application, consisting of a cross-layer service manager module and a cross-layer multipath transport module, to reconfigure the optical paths for the recovery from connection impairments. Their evaluations show increased bandwidth utilization and reduced cost while being resilient to network impairments, as the cross-layer multipath transport module does not reserve the backup path on the transport network. 4) Hierarchical Survivability: Networks can be made survivable by introducing resource redundancy. However, the cost of the network increases with increased redundancy. Zhang et al. BIB004 have demonstrated a highly survivable IP-Optical multilayered transport network. Hierarchical controllers are placed for multilayer resource provisioning. Optical nodes are controlled by Transport Controllers (TCs), while the higher layers (IP) are controlled by Unified Controllers (UCs). The UCs communicate with the TCs to optimize the routes based on cross-layer information. If a fiber failure causes a service disruption, the TCs may directly set up alternate routes or ask the UCs for optimized routes. A pitfall of such hierarchical control techniques can be long restoration times. However, the cross-layer restorations can recover from high degrees of failures, such as multipoint and concurrent failures.
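The escalation logic of the hierarchical TC/UC restoration might look as follows in a minimal sketch; the controller objects and method names are hypothetical. The trade-off noted above is visible in the code: the escalated cross-layer path computation is slower but succeeds where the purely optical reroute fails.

def restore(tc, uc, failed_link):
    """Sketch of hierarchical restoration: the Transport Controller (TC)
    first attempts a local optical reroute; if no alternate route exists in
    its layer, it escalates to the Unified Controller (UC), which computes a
    cross-layer (IP + optical) route. Both controllers are stand-ins."""
    route = tc.compute_alternate_route(failed_link)
    if route is not None:
        tc.provision(route)  # fast, purely optical restoration
        return route
    route = uc.compute_cross_layer_route(failed_link)  # slower, broader view
    uc.provision(route)
    return route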
5) Robust Power Grid: The lack of a reliable communication infrastructure for power grid management was one of the many reasons for the widespread blackout in the Northeastern U.S.A. in the year 2003, which affected the lives of 50 million people BIB005 . Since then, building a reliable communication infrastructure for the power grid has become an important priority. Rastegarfar et al. BIB014 have proposed a communication infrastructure that is focused on monitoring and can react to and recover from failures so as to reliably support power grid applications. More specifically, their architecture is built on SDN based optical networking for implementing robust power grid control applications. The control plane and the infrastructure in SDN based power grid management exhibit an interdependency, i.e., the physical fiber relies on the control plane for its operations, while the logical control plane relies on the same physical fiber for its signalling communications. Therefore, Rastegarfar et al. focus only on optical protection switching, instead of IP layer protection, for the resilience of the SDN control. Cascaded failure mechanisms were modeled and simulated for two geographical topologies (U.S. and E.U.). In addition, the impacts of cascaded failures were studied for two scenarios: (i) a static optical layer (static OL), and (ii) a dynamic optical layer (dynamic OL). The results for the static OL illustrate that the failure cascades are persistent and closely dependent on the network topology. However, for the dynamic OL (i.e., with reconfiguration of the physical layer), the failure cascades were suppressed by an average of 73%.
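A toy model can illustrate how optical-layer reconfigurability dampens such cascades; the dependency representation and the survival probability are illustrative assumptions (the 73% figure from the results above is reused merely as an example value), not the model of BIB014.

import random

def simulate_cascade(grid_deps, initial_failures, dynamic_ol, reroute_prob=0.73):
    """Toy cascade model: a failed power node takes down the communication
    nodes it powers, which in turn fail the power nodes they control. With a
    dynamic optical layer (dynamic_ol=True), a control connection survives a
    failure with probability reroute_prob because the physical layer can be
    reconfigured around the failure."""
    failed, frontier = set(initial_failures), list(initial_failures)
    while frontier:
        node = frontier.pop()
        for dependent in grid_deps.get(node, []):
            if dependent in failed:
                continue
            if dynamic_ol and random.random() < reroute_prob:
                continue  # reconfiguration keeps the dependent node alive
            failed.add(dependent)
            frontier.append(dependent)
    return failed

deps = {"p1": ["c1"], "c1": ["p2"], "p2": ["c2"]}
print(simulate_cascade(deps, ["p1"], dynamic_ol=False))  # full cascade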
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Application Layer: Summary and Discussion <s> Data center interconnection by flexi-grid optical networks is a promising scenario to meet the high burstiness and high-bandwidth requirements of data center applications, because flexi-grid optical networks can allocate spectral resources for applications in a dynamic, tunable, and efficient control manner. Meanwhile, as a centralized control architecture, software defined networking (SDN) enabled by the OpenFlow protocol can provide maximum flexibility for the networks and unified control over various resources for the joint optimization of data center and network resources. The time factor is first introduced into the SDN based control architecture for flexi-grid optical networks supporting data center applications. A traffic model considering the time factor is built, and a requirement parameter, i.e., the bandwidth-delay product, is adopted for the service requirement measurement. Then, a time-aware software defined networking (Ta-SDN) based control architecture is designed with an OpenFlow protocol extension. A novel time-correlated PCE (TC-PCE) algorithm is proposed for the time-correlated service under the Ta-SDN based control architecture, which can complete data center selection, path computation, and bandwidth resource allocation. Finally, simulation results show that our proposed Ta-SDN control architecture and time-correlated PCE algorithm can improve the application and network performance to a large extent in terms of blocking probability. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Application Layer: Summary and Discussion <s> The process of planning a virtual topology for a Wavelength Division Multiplexing (WDM) network is called Virtual Topology Design (VTD). The goal of VTD is to find a virtual topology that supports forwarding the expected traffic without congestion. In networks with fluctuating, high traffic demands, it can happen that no single topology fits all changing traffic demands occurring over a longer time. Thus, during operation, the virtual topology has to be reconfigured. Since modern networks tend to be large, VTD algorithms have to scale well with increasing network size, requiring distributed algorithms. Existing distributed VTD algorithms, however, react too slowly to congestion for the real-time reconfiguration of large networks. We propose Selfish Virtual Topology Reconfiguration (SVTR) as a new algorithm for distributed VTD. It combines reconfiguring the virtual topology and routing through a Software Defined Network (SDN). SVTR is used for online, on-the-fly network reconfiguration. Its integrated routing and WDM reconfiguration keeps connection disruption due to network reconfiguration to a minimum and is able to react very quickly to traffic pattern changes. SVTR works by iteratively adapting the virtual topology to the observed traffic patterns without global traffic information and without future traffic estimations. We evaluated SVTR by simulation and found that it significantly lowers congestion in realistic networks and high load scenarios. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Application Layer: Summary and Discussion <s> Software defined networking (SDN), originally designed to operate on access Ethernet-based networks, has been recently proposed for different specific networking scenarios, including core or metro/aggregation networks.
In this study, we extend this concept to enable a comprehensive control of converged access, metro, and core network edges. In particular, a generalized SDN controller is proposed for upstream global QoS traffic engineering of passive optical networks (PONs), the Ethernet metro/aggregation segment, and IP/MPLS networks through the adoption of a unique interface, in the framework of the Interface to the Routing System (I2RS). Extended OpenFlow functionalities and Path Computation Element Protocol (PCEP) interfaces are encompassed to achieve effective dynamic flow control. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Application Layer: Summary and Discussion <s> In the High Speed Internet (HSI) service of Fiber-To-The-Home (FTTH) networks, there are increasingly diverse applications, such as browsing, video streaming, large downloads, and online games. These applications compete for the fixed bandwidth on a best-effort basis, ultimately resulting in network congestion and poor quality of experience (QoE). Users want to improve the quality of certain applications. However, today's network service controller (e.g., the Broadband Remote Access Server, BRAS) lacks mechanisms to meet the users' desire to enhance the QoE of specific applications. Moreover, the BRAS still lacks mechanisms to allocate the bandwidth resources properly for users' different applications according to their "sweet points". A "sweet point" is a specific bandwidth value: the QoE deteriorates quickly when the bandwidth is smaller than the "sweet point", and stays approximately the same when the bandwidth is larger than the "sweet point". In this paper, we propose a novel BRAS architecture using Software-Defined Networking (SDN) technology, which can improve the user's QoE by adjusting the bandwidth of a specific application to its "sweet point" according to the user's requirements. To demonstrate the feasibility of our proposed novel BRAS, we built a prototype using SDN to help the user adjust the bandwidth for a specific application and improve the user's QoE. The experimental results show that users could enhance the QoE of specific applications according to their preferences. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Application Layer: Summary and Discussion <s> We propose joint bandwidth provisioning and base station caching for video delivery in software-defined PONs. Performance evaluation via custom simulation models reveals a 30% increase in served video requests and a 50% reduction in service response delays. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Application Layer: Summary and Discussion <s> A hierarchically controlled IP+Optical multilayer Transport SDN architecture is proposed, which highlights flexible resource provisioning and dynamic cross-layer restorations. The proposals are also demonstrated via an implemented testbed prototype. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Application Layer: Summary and Discussion <s> We test an end-to-end 1:1 protection scheme for combined LR-PON access and core networks using separate but loosely coupled SDN controllers, over a Pan-European network. Fast recovery is achieved in 7 ms in the access and 52 ms in the core. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E.
Application Layer: Summary and Discussion <s> Increased network traffic has put great pressure on edge routers. These routers tend to be more expensive and consume more resources. The buffer, in particular, is the most valuable resource in the router. Given the potential benefits of reducing buffer sizes, a lot of debate on buffer sizing has emerged in the past few years. The small buffer rule, for example, was challenged at the edge router. Instead of buffer sizing, the goal of our work is to find a way to relieve the pressure on the edge router and logically enlarge its buffer size without incurring additional costs. In this paper, taking advantage of the global view of SDN, we propose the Software Defined Backpressure Mechanism (SD-BM) to alleviate the pressure on the edge router. In particular, we give the Refill and Software Defined Networking based RED (RS-RED) algorithm, which makes it possible to enlarge the network buffer logically and offloads traffic from the busy egress router to free ingress devices. Simulation results show that it has comparable performance, in both time delay and loss rate, with an edge router that has a large buffer in the traditional way. The results can have consequences for the design of edge routers and the related networks. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Application Layer: Summary and Discussion <s> As networks grow in size, large-scale failures caused by disasters may lead to huge data loss, especially in an optical network employing wavelength-division multiplexing (WDM). Providing 100% protection against disasters would require massive and economically unsustainable bandwidth overprovisioning, as disasters are difficult to predict, statistically rare, and may create large-scale failures. Backup reprovisioning schemes are proposed to remedy this problem, but in case of a large-scale disaster, even the flexibility provided by backup reprovisioning may not be enough, given the sudden reduction in available network resources, i.e., the resource crunch. To mitigate the adverse effects of a resource crunch, an effective resource reallocation is possible by exploiting service heterogeneity, specifically degraded-service tolerance, which makes it possible to provide some level of service, e.g., reduced capacity, to connections that can tolerate degraded service, versus no service at all. Software-Defined Networking (SDN) is a promising approach to perform such dynamic changes (redistribution of network resources) as it simplifies network management via centralized control logic. By exploiting these new opportunities, we propose a Backup Reprovisioning with Partial Protection (BRPP) scheme supporting dedicated-path protection, where backup resources are reserved but not provisioned (as in shared-path protection), such that the amount of bandwidth reserved for backups as well as their routings are subject to dynamic changes, given the network state, to increase utilization. The performance of the proposed scheme is evaluated by means of SDN emulation using the Mininet environment and OpenDaylight as the controller. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Application Layer: Summary and Discussion <s> Optical transport networks typically deploy dynamic restoration mechanisms in order to automatically recover optical connections disrupted by network failures.
Elastic optical networks (EONs), currently emerging as the next-generation technology to be adopted in optical transport, introduce new challenges for traditional generalized multiprotocol label switching (GMPLS) based restoration that may seriously impact the achievable recovery time. At the same time, the software-defined networking (SDN) framework is emerging as an alternative control plane. It is therefore important to investigate possible benefits provided by SDN in the implementation of restoration mechanisms for EONs. This paper proposes a dynamic restoration scheme for EONs based on the SDN framework. The proposed scheme simultaneously exploits centralized path computation and node configuration to avoid contentions during the recovery procedure, with the final aim of minimizing the recovery time. The performance of the proposed scheme is evaluated by means of simulations in terms of recovery time and restoration blocking probability and compared against three reference schemes based on GMPLS and SDN. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Application Layer: Summary and Discussion <s> Elastic optical networking (EON), with its flexible use of the optical spectrum, is a promising solution for future metro/core optical networking. For the deployment of EON in a real operational scenario, dynamic lightpath restoration, driven by an intelligent control plane, is a necessary network function. Dynamic restoration can restore network services automatically and, thus, greatly reduce the operational cost, compared with traditional manual or semistatic lightpath restoration strategies enabled by network operators via a network management system. To this end, in this paper, we present an OpenFlow-enabled dynamic lightpath restoration scheme for elastic optical networks, detailing the restoration framework and algorithm, the failure isolation mechanism, and the proposed OpenFlow protocol extensions. We quantitatively present the restoration performance via control plane experimental tests on the Global Environment for Network Innovations testbed. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Application Layer: Summary and Discussion <s> We demonstrate a cross-layer orchestration for packet service over IP-optical networks, in terms of availability and elasticity. Our orchestration, built over the SDN concept, self-adjusts and cost-efficiently responds to dynamics on network paths, impairments/failures, and topology. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> E. Application Layer: Summary and Discussion <s> Network virtualization is an emerging technique that enables multiple tenants to share an underlying physical infrastructure, isolating the traffic running over different virtual infrastructures/tenants. This technique aims to improve network utilization, while reducing the complexities in terms of network management for operators. Applied to this context, the software-defined networking (SDN) paradigm can ease network configurations by enabling network programmability and automation, which reduces the amount of operations required from both service and infrastructure providers. SDN techniques are decreasing vendor lock-in issues due to specific configuration methods or protocols. Application-based network operations (ABNO) are a toolbox of key network functional components with the goal of offering application-driven network management.
Service provisioning using ABNO may involve direct configuration of data plane elements or delegate it to several control plane modules. We validate the applicability of ABNO to multitenant virtual networks in multitechnology optical domains based on two scenarios, in which multiple control plane instances are orchestrated by the architecture. Congestion detection and failure recovery are chosen to demonstrate fast recalculation and reconfiguration, while hiding the configurations in the physical layer from the upper layer. <s> BIB013
|
The SDON QoS application studies have mainly examined traffic and network management mechanisms that are supported through the OpenFlow protocol and the central SDN controller. The studied SDON QoS applications are structurally very similar in that the traffic conditions or network states (e.g., congestion levels) are probed or monitored by the central SDN controller. The centralized knowledge of the traffic and network is then utilized to allocate or configure resources, such as DC resources in BIB001 , application bandwidths in BIB004 , and topology configurations or routes in BIB002 - BIB003 , BIB008 . A minimal sketch of this generic monitor-and-reallocate control loop is given at the end of this subsection. Future research on SDON QoS needs to further optimize the interactions of the controller with the network applications and the data plane to quickly and correctly react to changing user demands and network conditions, so as to assure consistent QoS. The specific characteristics and requirements of video streaming applications have been considered in the few studies on video QoS - BIB005 . Future SDON QoS research should consider a wider range of specific prominent application traffic types with specific characteristics and requirements; e.g., Voice over IP (VoIP) traffic has relatively low bit rate requirements, but requires low end-to-end latency. Very few studies have considered security and access control for SDONs. The thorough study of the broad topic area of security and privacy is an important future research direction in SDONs, as outlined in Section VIII-C. Energy efficiency is similarly a highly important topic within the SDON research area that has received relatively little attention so far and presents overarching research challenges, see Section VIII-I. One common theme of the SDON application layer studies focused on failure recovery and restoration has been to exploit the global perspective of the SDN control. The global perspective has been exploited for improved planning of the recovery and restoration BIB009 , BIB013 , BIB006 as well as for improved coordination of the execution of the restoration processes BIB010 , BIB011 . Generally, the existing failure recovery and restoration studies have focused on a network (routing) domain that is owned by a particular organizational entity. Future research should seek to examine the tradeoffs when exploiting the global perspective for the orchestration of multiple routing domains, i.e., the failure recovery and restoration techniques surveyed in this section could be combined with the multidomain orchestration techniques surveyed in Section VII. One concrete example of multidomain orchestration could be to coordinate the specific LR-PON access network protection and failure recovery BIB007 with protection and recovery techniques for metropolitan and core network domains, e.g., BIB009 , BIB013 , BIB012 , BIB006 , for improved end-to-end protection and recovery.
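The following minimal sketch illustrates the generic monitor-and-reallocate control loop that these QoS studies share; it is our own simplified Python illustration, not code from any of the cited studies. The "sweet point" values, the proportional scaling policy, and the flow names are arbitrary placeholders inspired by the bandwidth adjustment approach of BIB004 .

from dataclasses import dataclass

@dataclass
class AppFlow:
    flow_id: str
    sweet_point_mbps: float    # bandwidth beyond which QoE barely improves
    allocation_mbps: float = 0.0

def reallocate(flows, link_capacity_mbps):
    """Give every flow its sweet point if capacity allows; otherwise scale
    all sweet points down proportionally. Any leftover capacity is spread
    evenly as best-effort headroom."""
    demand = sum(f.sweet_point_mbps for f in flows)
    scale = min(1.0, link_capacity_mbps / demand) if demand else 0.0
    for f in flows:
        f.allocation_mbps = f.sweet_point_mbps * scale
    spare = link_capacity_mbps - sum(f.allocation_mbps for f in flows)
    for f in flows:
        f.allocation_mbps += spare / len(flows)
    return flows

# One iteration of the monitor-and-reallocate loop, using measured or
# configured sweet points; a real controller would poll flow statistics
# (e.g., via OpenFlow counters) before each iteration.
flows = [AppFlow("video", 8.0), AppFlow("game", 2.0), AppFlow("bulk", 20.0)]
for f in reallocate(flows, link_capacity_mbps=25.0):
    print(f"{f.flow_id}: {f.allocation_mbps:.2f} Mb/s")

The essential point is the structure of the loop: centralized measurement, a global allocation decision, and the push of the updated limits to the data plane.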
|
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> This document specifies the Path Computation Element (PCE) Communication Protocol (PCEP) for communications between a Path Computation Client (PCC) and a PCE, or between two PCEs. Such interactions include path computation requests and path computation replies as well as notifications of specific states related to the use of a PCE in the context of Multiprotocol Label Switching (MPLS) and Generalized MPLS (GMPLS) Traffic Engineering. PCEP is designed to be flexible and extensible so as to easily allow for the addition of further messages and objects, should further requirements be expressed in the future. [STANDARDS-TRACK] <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> Even though software-defined networking (SDN) and the OpenFlow protocol have demonstrated great practicality in the packet domain, there has been some hesitance in extending the OpenFlow specification to circuit and optical switched domains that constitute wide area multi-layer transport networks. This paper presents an overview of various proposals with regard to extending OpenFlow to support circuit switched multi-layer networks. The goal is to shed light on these ideas and propose a way forward. This paper favors a top-down approach, which relies on the transport network's main SDN use case, packet-optical integration, to help identify the sufficient extensions for OpenFlow to support circuit/optical switching. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> Due to a number of recent technology developments, now is the right time to re-examine the use of TCP for very large data transfers. These developments include the deployment of 100 Gigabit per second (Gbps) network backbones, hosts that can easily manage 40 Gbps and higher data transfers, the Science DMZ model, the availability of virtual circuit technology, and wide-area Remote Direct Memory Access (RDMA) protocols. In this paper we show that RDMA works well over wide-area virtual circuits, and uses much less CPU than TCP or UDP. We also characterize the limitations of RDMA in the presence of other traffic, including competing RDMA flows. We conclude that RDMA for Science DMZ to Science DMZ transfers of massive data is a viable and desirable option for high-performance data transfer. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> This paper discusses necessary steps for the migration from today's residential network model to a converged access/aggregation platform based on software defined networks (SDN). <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> OFELIA is an experimental network designed to offer a diverse OpenFlow-enabled infrastructure to allow Software Defined Networking (SDN) experimentation. OFELIA is currently composed of ten sub-testbeds (called islands), most of them in Europe and one in Brazil. An experimenter gets access to a so-called slice: a subset of the testbed resources, like nodes and links, including the OpenFlow programmable switches, to carry out an experiment.
A new network virtualization tool called VeRTIGO has been recently presented to extend the way isolation is achieved between slices (slicing), allowing each experimenter to instantiate an arbitrary virtual network topology on top of a physical testbed. In this paper we present preliminary results obtained by deploying and using VeRTIGO in an experiment running across several OFELIA islands, which has proven to increase flexibility for experimenters willing to play with novel SDN concepts at large scale. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> Wide area networks (WANs) forward traffic through a mix of packet and optical data planes, composed of a variety of devices from different vendors. Multiple forwarding technologies and encapsulation methods are used for each data plane (e.g., IP, MPLS, ATM, SONET, Wavelength Switching). Despite defined standards, the control planes of these devices are usually not interoperable, and different technologies are used to manage each forwarding segment independently (e.g., OpenFlow, TL-1, GMPLS). The result is a lack of coordination between layers and inefficient resource usage. In this paper we discuss the design and implementation of a system that uses unmodified OpenFlow to optimize network utilization across layers, enabling practical bandwidth virtualization. We discuss strategies for scalable traffic monitoring and for minimizing losses on route updates across layers. A prototype of the system was built using a traditional circuit reservation application and an unmodified SDN controller, and its evaluation was performed on a multi-vendor testbed. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> We apply a hierarchical SDN controller to demonstrate multi-layer orchestration of a commercial Optical Network Control platform. We show dynamic allocation of transport resources for bandwidth on demand and congestion control of packet services. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> The growth of the Internet in terms of the number of devices, the number of networks associated with each device, and the mobility of devices and users makes the operation and management of the Internet network infrastructure a very complex challenge. In order to address this challenge, innovative solutions and ideas must be tested and evaluated in real network environments and not only based on simulations or laboratory setups. OFELIA is a European FP7 project and its main objective is to address the aforementioned challenge by building and operating a multi-layer, multi-technology and geographically distributed Future Internet testbed facility, where the network itself is precisely controlled and programmed by the experimenter using the emerging OpenFlow technology. This paper reports on the work done during the first half of the project, the lessons learned, as well as the key advantages of the OFELIA facility for developing and testing new networking ideas. An overview of the challenges that have been faced in the design and implementation of the testbed facility is provided, including the OFELIA Control Framework testbed management software. In addition, early operational experience with the facility since it was opened to the general public, providing five different testbeds or islands, is described. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII.
ORCHESTRATION <s> A multidomain and multitechnology optical network orchestration is demonstrated in an international testbed located in Japan, the U.K., and Spain. The application-based network operations architecture is proposed as a carrier software-defined network solution for provisioning end-to-end optical transport services through a multidomain multitechnology network scenario, consisting of a 46–108 Gb/s variable-capacity OpenFlow-capable optical packet switching network and a programmable, flexi-grid elastic optical path network. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> We argue that the implementation of services in an IP-optical network should be driven by the needs of the specific applications, and explain why this requires a centralized orchestration architecture. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> We experimentally validated a Hierarchically Controlled Software Defined Networking Architecture for IP over Optical Transport Networks. OpenDaylight is chosen as the basic SDN controller for our extensions. Our implementation includes the interaction between controllers and three function module extensions inside OpenDaylight to control the optical network. A successful experiment on end-to-end dynamic connection establishment is conducted across both the IP and OTN layers. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> A network hypervisor is introduced to dynamically deploy multi-tenant virtual networks on top of multi-technology optical networks. It provides an abstract view of each virtual network and enables its control through an independent SDN controller. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> We propose a network orchestrator based on ODENOS for orchestrating multi-layer and multi-domain networks. The network orchestrator can dynamically provide end-to-end paths according to the requests of network services. <s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> VII. ORCHESTRATION <s> We experimentally assess the first Transport API that provides a research-oriented multi-layer approach using YANG/RESTconf. It defines a functional model of a control plane and enables the integration of optical/wireless networks and distributed cloud resources. <s> BIB014
|
As introduced in Section II-A4, orchestration accomplishes higher layer abstract coordination of network services and operations. In the context of SDONs, orchestration has mainly been studied in support of multilayer networking. Multilayer networking in the context of SDN and network virtualization generally refers to networking across multiple network layers and their respective technologies, such as IP, MPLS, and WDM, in combination with networking across multiple routing domains. The concept of multilayer networking is generally an abstraction of providing network services with multiple networking layers (technologies) and multiple routing domains. The different network layers and their technologies are sometimes classified into Layer 0 (e.g., fiber-switch capable), Layer 1 (e.g., lambda switching capable), Layer 1.5 (e.g., TDM SONET/SDH), Layer 2 (e.g., Ethernet), Layer 2.5 (e.g., packet switching capable using MPLS), and Layer 3 (e.g., packet switching capable using IP routing) BIB009 . Routing domains are also commonly referred to as network domains, routing areas, or levels. A recent multilayer networking review article has introduced a range of capability planes to represent the grouping of related functionalities for a given networking technology. The capability planes include the data plane for transmitting and switching data. The control plane and the management plane directly interact with the data plane for controlling and provisioning data plane services as well as for troubleshooting and monitoring the data plane. Furthermore, an authentication and authorization plane, a service plane, and an application plane have been introduced for providing network services to users. Multilayer networking can involve vertical layering or horizontal layering, as illustrated in Fig. 19. In vertical layering, a given layer, e.g., the routing layer, which may employ a particular technology, e.g., the Internet Protocol (IP), uses another (underlying) layer, e.g., the Wavelength Division Multiplexing (WDM) circuit switching layer, to provide services to higher layers. In horizontal layering, services are provided by "stitching" together a service path across multiple routing domains. SDN provides a convenient control framework for these flexible multilayer networks. Several research networks and architectures, such as ESnet, Internet2, GEANT, and the Science DMZ (Demilitarized Zone) model, have experimented with these multilayer networking concepts BIB003 , BIB006 . In particular, SDN based multilayer network architectures, e.g., BIB004 , BIB013 , BIB014 , are formed by conjoining the layered technology regions (i) in a vertical fashion, i.e., multiple technology layers internetwork within a single domain, or (ii) in a horizontal layering fashion across multiple domains, i.e., technology layers internetwork across distinct domains. Horizontal multilayer networking can be viewed as a generalization of vertical multilayer networking in that the horizontal networking may involve the same or different (or even multiple) layers in the distinct domains. As illustrated in Fig. 19, the formed SDN based multilayer network architecture is controlled by an SDN orchestrator.
As illustrated in Fig. 20, we organize the SDON orchestration studies according to their focus into studies that primarily address the orchestration of vertical multilayer (multitechnology) networking, i.e., the vertical networking across multiple layers (that typically implement different technologies) within a given domain, and into studies that primarily address the orchestration of horizontal multilayer (multidomain) networking, i.e., the horizontal networking across multiple routing domains (which may possibly involve different or multiple vertical layers in the different domains). We subclassify the vertical multilayer studies into general (vertical) multilayer networking frameworks and studies focused on supporting specific applications through vertical multilayer networking. We subclassify the multidomain (horizontal multilayer) networking studies into studies on general network domains and studies focused on internetworking with Data Center (DC) network domains. a) Hierarchical Multilayer Control: Felix et al. BIB007 presented a hierarchical SDN control mechanism for packet optical networks. Multilayer optimization techniques are employed at the SDN orchestrator to integrate the optical transport technology with packet services by provisioning end-to-end Ethernet services. Two aspects are investigated, namely (i) bandwidth optimization for the optical transport services, and (ii) congestion control for packet network services in an integrated packet optical network. More specifically, the SDN controller initially allocates the minimum available bandwidth required for the services and then dynamically scales the allocations based on availability. Optical-Virtual Private Networks (O-VPNs) are created over the physical transport network. Services are then mapped to O-VPNs based on class of service requirements. When congestion is detected for a service, the SDN controller switches the service to another O-VPN, thus balancing the traffic to maintain the required class of service. Similar steps towards the orchestration of multilayer networks have been taken within the OFELIA project BIB005 - BIB008 . Specifically, Shirazipour et al. BIB002 have explored extensions to OpenFlow version 1.1 actions to enable multitechnology transport layers, including Ethernet transport and optical transport. The explorations of the extensions include justifications of the use of SDN in circuit-based transport networks. b) Application Centric Orchestration: Gerstel et al. BIB010 proposed an application centric network service provisioning approach based on multilayer orchestration. This approach enables the network applications to directly interact with the physical layer resource allocations to achieve the desired service requirements. Application requirements for a network service may include maximum end-to-end latency, connection setup and hold times, failure protection, as well as security and encryption. In traditional IP networking, packets from multiple applications requiring heterogeneous services are simply aggregated and sent over a common transport link (IP services). As a result, network applications are typically assigned to a single (common) transport service within an optical link. Consider a failure recovery process with multiple available paths. IP networking typically selects the single path with the least end-to-end delay. However, some applications may tolerate higher latencies; therefore, the traffic can be split over multiple restoration paths, achieving better traffic management. The orchestrator needs to interact with multiple network controllers operating across multiple (vertical) layers, supported by north/south bound interfaces, to achieve the application centric control. The dynamic addition of new IP links was demonstrated to accommodate the requirements of multiple application services when the load on the existing IP link increased. A toy sketch of such application centric restoration path assignment is given below.
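The following toy Python sketch illustrates the application centric restoration path assignment idea described above: instead of forcing all recovered traffic onto the single lowest-latency path, each flow is assigned to any restoration path that still meets its individual latency requirement, thereby spreading the load. The function, the flow parameters, and the path attributes are our own illustrative placeholders, not the mechanism implemented in BIB010 .

def assign_restoration_paths(flows, paths):
    """flows: list of (flow_id, rate_gbps, max_latency_ms) tuples;
    paths: {path_id: {"latency_ms": ..., "free_gbps": ...}}."""
    placement = {}
    # Place the most latency-critical flows first.
    for fid, rate, max_lat in sorted(flows, key=lambda f: f[2]):
        feasible = [p for p, a in paths.items()
                    if a["latency_ms"] <= max_lat and a["free_gbps"] >= rate]
        if not feasible:
            placement[fid] = None   # would be blocked or served degraded
            continue
        # Among the feasible paths, pick the one with the most spare
        # capacity, keeping low-latency paths free for critical flows.
        best = max(feasible, key=lambda p: paths[p]["free_gbps"])
        paths[best]["free_gbps"] -= rate
        placement[fid] = best
    return placement

paths = {"short": {"latency_ms": 5, "free_gbps": 10},
         "long": {"latency_ms": 20, "free_gbps": 40}}
flows = [("voip", 1, 10), ("backup", 30, 100), ("video", 8, 25)]
print(assign_restoration_paths(flows, paths))

In this example, the latency-critical VoIP flow stays on the short path, while the latency-tolerant video and backup flows are shifted to the longer restoration path; this is exactly the kind of application-aware traffic splitting that a pure least-delay IP routing policy would not perform.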
2) Application-specific Orchestration: a) Failure Recovery: Generally, network CapEx and OpEx increase as more protection against network failures is added. Khaddam et al. [430] propose an SDN based integration of multiple layers, such as WDM and IP, in a failure recovery mechanism to improve the utilization (i.e., to eventually reduce CapEx and OpEx while maintaining high protection levels). An observation study was conducted over a five year period to understand the impact of network failures on the real deployment of backbone networks. The results showed 75 distinct failures following a Pareto distribution, whereby 48% of the total deployed capacity was affected by the top (i.e., highest impact) 20% of the failures, and 10% of the total deployed capacity was impacted by the top two failure instances alone. These results emphasize the significance of backup capacities in the optical links for restoration processes. However, attaining optimal protection capacities while achieving a high utilization of the optical links is challenging. A failure recovery mechanism is proposed based on a "hybrid" (i.e., combination of optical transport and IP) multilayer optimization. The hybrid mechanism improved the optical link utilization by up to 50%. Specifically, a 30% increase of the transport capacity utilization was achieved by dynamically reusing the remaining capacities in the optical links, i.e., the capacity reserved for failure recoveries. The multilayer optimization technique was validated on an experimental testbed utilizing central path computation (PCE) BIB001 within the SDN framework. Experimental verification of the failure recovery mechanism resulted in recovery times on the order of sub-seconds for MPLS restorations and several seconds for optical WSON restorations. b) Resource Utilization: Liu et al. BIB011 proposed a method to improve resource utilization and to reduce transmission latencies through the processes of virtualization and service abstraction. A centralized SDN control implements the service abstraction layer (to enable SDN orchestration) in order to integrate the network topology management (across both IP and WDM) and the spectrum resource allocation in a single control platform. The SDN orchestrator also achieves dynamic and simultaneous connection establishment across both the IP and OTN layers, reducing the transmission latencies. The control plane design is split between local (child) and root (parent) controllers. The local controller realizes the label switched paths on the optical nodes, while the root controller installs the forwarding rules for the IP layer. An experimental evaluation of average transfer time measurements showed IP layer latencies on the order of several milliseconds and OTN latencies of several hundreds of milliseconds, validating the feasibility of control plane unification for IP over optical transport networks. c) Virtual Optical Networks (VONs): Vilalta et al. BIB012 presented controller orchestration to integrate multiple transport network technologies, such as IP and GMPLS.
The proposed architectural framework devises VONs to enable the virtualization of the physical resources within each domain. VONs are managed by lower level physical controllers (PCs), which are hierarchically managed by an SDN network orchestrator (NO). Network Virtualization Controllers (NVCs) are introduced (on top of the NO) to abstract the virtualized multilayers across multiple domains. End-to-end provisioning of VONs is facilitated through hierarchical control interactions over three levels: the customer controllers, the NO and NVCs, and the PCs. An experimental evaluation demonstrated average VON provisioning delays on the order of several seconds (5 s and 10 s), validating the flexibility of dynamic VON deployments over optical transport networks. Longer provisioning delays may impact network application requirements, such as failure recovery processes, congestion control, and traffic engineering. General pitfalls of such hierarchical structures are the increased control plane complexity, the risk of controller failures, and the need to maintain reliable communication links between the control plane entities. A skeletal illustration of the three-level provisioning flow is given below.
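The following skeletal Python sketch illustrates the three-level provisioning flow described above (customer controller, network orchestrator, physical controllers). All class and method names are our own placeholders rather than the interfaces of BIB012 ; a real deployment would replace the print statements with southbound protocol operations (e.g., OpenFlow or PCEP messages).

class PhysicalController:
    """Domain specific controller (e.g., one GMPLS or OpenFlow domain)."""
    def __init__(self, domain):
        self.domain = domain
    def provision_segment(self, src, dst):
        # A real PC would push cross-connects or flow entries here.
        print(f"[{self.domain}] segment {src} -> {dst} provisioned")
        return (self.domain, src, dst)

class NetworkOrchestrator:
    """Parent controller: stitches per-domain segments into one VON."""
    def __init__(self, pcs):
        self.pcs = pcs   # domain name -> PhysicalController
    def provision_von(self, request):
        # request: ordered list of (domain, src, dst) hops across domains.
        return [self.pcs[d].provision_segment(s, t) for d, s, t in request]

class CustomerController:
    """Tenant-side SDN controller that sees only its abstracted VON."""
    def __init__(self, orchestrator):
        self.orchestrator = orchestrator
    def request_von(self, hops):
        return self.orchestrator.provision_von(hops)

pcs = {d: PhysicalController(d) for d in ("OPS", "EON")}
tenant = CustomerController(NetworkOrchestrator(pcs))
von = tenant.request_von([("OPS", "A", "B"), ("EON", "B", "C")])
print("VON segments:", von)

The hierarchy keeps the per-domain details hidden from the tenant, which mirrors the privacy and abstraction arguments made for hierarchical control structures throughout this section.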
|
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> Services such as content distribution, distributed databases, or inter-data center connectivity place a set of new requirements on the operation of networks. They need on-demand and application-specific reservation of network connectivity, reliability, and resources (such as bandwidth) in a variety of network applications (such as point-to-point connectivity, network virtualization, or mobile back-haul) and in a range of network technologies from packet (IP/MPLS) down to optical. An environment that operates to meet these types of requirements is said to have Application-Based Network Operations (ABNO). ABNO brings together many existing technologies and may be seen as the use of a toolbox of existing components enhanced with a few new elements. This document describes an architecture and framework for ABNO, showing how these components fit together. It provides a cookbook of existing technologies to satisfy the architecture and meet the needs of the applications. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> This paper describes a demonstration of SDN-based optical transport network virtualization and orchestration. Two scenarios are demonstrated: a dynamic setup of optical connectivity services inside a single domain as well as a multidomain service orchestration over a shared optical infrastructure using the architecture defined in the STRAUSS project. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> We introduce an SDN orchestration architecture to enable the introduction of E2E TE policies in a multi-domain, multi-layer network scenario. The ABNO is used as the reference architecture for the SDN orchestration of packet/optical SDN controllers using Flow Service Classification. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> Software-defined networking (SDN) with optical transport techniques enables network operators and data center (DC) operators to provide their resources dynamically on users' demands while minimizing operating and capital expenditures. Current multi-domain networking techniques mainly rely on the path computation element, while SDN has shown considerable potential for this inevitable issue. The first field demonstration of multi-domain software-defined transport networking (SDTN) without global topology information is detailed in this paper. A multi-controller collaboration framework and three schemes, a controller-driven scheme (ConDS), a cloud-driven scheme (ClDS), and a ClDS with dynamic optimization (ClDS-DO), for data center interconnection based on SDTN are demonstrated via field networks. We also extend the OpenFlow protocol and design the JavaScript Object Notation application programming interface to support this framework. Multi-domain lightpaths are automatically provided with limited signaling latency. The blocking performances of the proposed schemes are estimated, and the ClDS-DO has the best performance, approaching the optimal boundary. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B.
Multidomain Orchestration <s> A multidomain and multitechnology optical network orchestration is demonstrated in an international testbed located in Japan, the U.K., and Spain. The application-based network operations architecture is proposed as a carrier software-defined network solution for provisioning end-to-end optical transport services through a multidomain multitechnology network scenario, consisting of a 46–108 Gb/s variable-capacity OpenFlow-capable optical packet switching network and a programmable, flexi-grid elastic optical path network. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> We present the performance of fixed-length, variable-capacity (FL-VC) packets in optical packet-switching (OPS) networks. We show how FL-VC achieves an effective balance between implementation feasibility and the performance of the applications using the network. Focusing on metropolitan area networks and real-world file distributions, we also show that an adequate selection of the packet duration leads to nearly optimal application throughput with respect to conventional variable-length, fixed-bit-rate packets (VL-FBR), and that this optimal packet duration is robust against changes in the workload. Finally, we show that a single fiber delay line per OPS switch managed by a first-fit scheduler can increase throughput to levels similar to those obtained by random access buffers. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> The combination of elastic optical networks (EONs) and software-defined networking (SDN) leads to SD-EONs, which bring a new opportunity for enhancing the programmability and flexibility of optical networks with more freedom for network operators to customize their infrastructure dynamically. In this paper, we investigate how to apply multidomain scenarios to SD-EONs. We design the functionalities in the control plane to facilitate multidomain tasks, and propose an interdomain protocol to enable OpenFlow controllers in different SD-EON domains to operate cooperatively for multidomain routing and spectrum assignment. The proposed system is implemented and experimentally demonstrated in a multinational SD-EON control plane testbed that consists of two geographically distributed domains located in China and the USA, respectively. Experimental results indicate that the proposed system performs well for resource allocation across multiple SD-EON domains. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> A virtualization architecture is presented for deploying multitenant virtual networks on top of multitechnology optical networks. A multidomain network hypervisor (MNH) and a multidomain SDN orchestrator (MSO) are introduced for this purpose. The MNH provides an abstract view of each virtual network and gives control of it to each independent customer SDN controller. The MNH is able to provide virtual networks across heterogeneous control domains (i.e., generalized multiprotocol label switching and OpenFlow) and transport technologies (i.e., optical packet switching and elastic optical networks). The MSO is responsible for providing the necessary end-to-end connectivity. We have designed, implemented, and experimentally evaluated the MNH and MSO in an international testbed across Spain, the UK, Germany, and Japan.
<s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> A multidomain optical transport network composed of heterogeneous optical transport technologies (e.g., flexi/fixed-grid optical circuit switching and optical packet switching) and control plane technologies (e.g., centralized OpenFlow or distributed GMPLS) does not naturally interoperate, and a network orchestration mechanism is required. A network orchestrator allows the composition of end-to-end network service provisioning across multidomain optical networks comprising different transport and control plane technologies. Software-defined networking (SDN) is a key technology to address this requirement, since the separation of the control and data planes makes SDN a suitable candidate for end-to-end provisioning service orchestration across multiple domains with heterogeneous control and transport technologies. This paper presents two different network orchestration architectures based on the application-based network operations (ABNO) framework, which is being defined by the IETF based on standard building blocks. Then, we experimentally assess, in the international testbed of the STRAUSS project, an ABNO-based network orchestrator for end-to-end multi-layer (OPS and flexi-grid OCS) and multidomain provisioning across heterogeneous control domains (SDN/OpenFlow and GMPLS/Stateful PCE), employing dynamic domain abstraction based on virtual node aggregation. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> New and emerging use cases, such as the interconnection of geographically distributed data centers (DCs), are drawing attention to the requirement for dynamic end-to-end service provisioning, spanning multiple and heterogeneous optical network domains. This heterogeneity is not only due to the diverse data transmission and switching technologies, but also due to the different options of control plane techniques. In light of this, the problem of heterogeneous control plane interworking needs to be solved, and in particular, the solution must address the specific issues of multi-domain networks, such as limited domain topology visibility, given the scalability and confidentiality constraints. In this article, some of the recent activities regarding Software-Defined Networking (SDN) orchestration are reviewed to address such a multi-domain control plane interworking problem. Specifically, three different models, including the single SDN controller model, multiple SDN controllers in mesh, and multiple SDN controllers in a hierarchical setting, are presented for the DC interconnection network with multiple SDN/OpenFlow domains or multiple OpenFlow/Generalized Multi-Protocol Label Switching (GMPLS) heterogeneous domains. In addition, two concrete implementations of the orchestration architectures are detailed, showing the overall feasibility and procedures of SDN orchestration for the end-to-end service provisioning in multi-domain data center optical networks. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> New and emerging use cases, such as the interconnection of geographically remote data centers, are drawing attention to the need for provisioning end-to-end connectivity services spanning multiple and heterogeneous network domains.
This heterogeneity is due not only to the data transmission and switching technology (the so-called data plane) but also to the deployed control plane, which may be used within each domain to automate the setup and recovery of such services, dynamically. The choice of a control plane is affected by factors such as availability, maturity, operator's preference, and the ability to satisfy a list of functional requirements. Given the current developments around OpenFlow and software-defined networking (SDN) along with the need to account for existing deployments based on GMPLS, the problem of heterogeneous control plane interworking needs to be solved. The retained solution must equally address the specific issues of multidomain networks, such as limited domain topology visibility, given the scalability and confidentiality constraints that characterize them. In this setting, we propose a functional and protocol architecture for such interworking, based on the key concepts of network abstraction and overarching control, implemented in terms of a hierarchical stateful path computation element (PCE), which provides the orchestration and coordination layer. In the proposed architecture, the PCEP and BGP-LS protocols are extended to support OpenFlow addresses and datapath identifiers, unifying both GMPLS and OpenFlow domains. The solution is deployed in an experimental testbed and validated. Although the main scope of the approach is the interworking of OpenFlow and GMPLS, the same approach can be directly applied to a wide range of multidomain scenarios, with either homogeneous or heterogeneous control technologies. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> Software-defined networking (SDN) and network function virtualization (NFV) have emerged as the most promising candidates for improving network function and protocol programmability and dynamic adjustment of network resources. On the one hand, SDN is responsible for providing an abstraction of network resources through well-defined application programming interfaces. This abstraction enables SDN to perform network virtualization, that is, to slice the physical infrastructure and create multiple coexisting application-specific virtual tenant networks (VTNs) with specific quality-of-service and service-levelagreement requirements, independent of the underlying optical transport technology and network protocols. On the other hand, the notion of NFV relates to deploying network functions that are typically deployed in specialized and dedicated hardware, as software instances [called virtual network functions (VNFs)] running on commodity servers (e.g., in data centers) through software virtualization techniques. Despite all the attention that has been given to virtualizing IP functions (e.g., firewall; authentication, authorization, and accounting) or Long-Term Evolution control functions (e.g., mobility management entity, serving gateway, and packet data network gateway), some transport control functions can also be virtualized and moved to the cloud as a VNF. In this work we propose virtualizing the tenant SDN control functions of a VTN and moving them into the cloud. The control of a VTN is a key requirement associated with network virtualization, since it allows the dynamic programming (i.e., direct control and configuration) of the virtual resources allocated to the VTN. 
We experimentally assess and evaluate the first SDN/NFV orchestration architecture in a multipartner testbed to dynamically deploy independent SDN controller instances for each instantiated VTN and to provide the required connectivity within minutes. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> This paper proposes the hierarchical SDN orchestration of heterogeneous wireless/optical networks. End-to-end connectivity services are provisioned through different network segments by means of a Transport API. The hierarchical approach allows scalability, modularity, and more security. <s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> We propose the combination of optical network virtualization and network function virtualization (NFV) for the deployment of on-demand OpenFlow-controlled virtual optical networks (VONs). Each tenant SDN controller is run on the cloud, so the tenant can control the deployed VON. This paper demonstrates the feasibility of the proposed use case and provides implementation details, in the ADRENALINE testbed, of an NFV orchestrator, which is able to provide multitenancy on top of a heterogeneous transport network by means of network orchestration and virtualization. <s> BIB014 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> We propose and experimentally validate an SDN/NFV orchestrator to dynamically create virtual backhaul tenants over a multi-layer (packet/optical) aggregation network and deploy virtual network functions (vEPC and vSDN controller) to better adapt to the MNO's capacity increase. <s> BIB015 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Multidomain Orchestration <s> Emerging cloud-based applications, running in geographically distributed data centers (DCs), generate new dynamic traffic patterns which call for a more efficient management of the traffic flows. The interconnection of geographically distributed DCs requires automatic and more dynamic provisioning and deletion of end-to-end (E2E) connectivity services through heterogeneous network domains. Each network domain may use a different data transport technology as well as a different control/management system. The fast development of Software Defined Networking (SDN) and the interworking with current control plane technologies, such as Generalized Multi-Protocol Label Switching (GMPLS), demand orchestration over the heterogeneous control instances to provide seamless E2E connectivity services to external applications (i.e., Cloud Computing applications). In this work, we present different orchestration architectures based on the SDN principles which use the Path Computation Element (PCE) as a fundamental component. In particular, a single SDN controller orchestration approach is compared with an orchestration architecture based on the Application Based Network Operations (ABNO) framework defined within the Internet Engineering Task Force (IETF), in order to find the potential benefits and drawbacks of both architectures. Finally, the SDN IT and Network Orchestration (SINO) platform, which integrates the management of the Cloud Computing infrastructure with the network orchestration, is used to validate both architectures by evaluating their performance in providing two inter-DC connectivity services: E2E connectivity and Virtual Machine (VM) migration. <s> BIB016
|
Large scale network deployments typically involve multiple domains, which often have heterogeneous layer technologies. Achieving high utilization of the networking resources while provisioning end-to-end network paths and services across multiple domains and their respective layers and technologies is highly challenging BIB003 - BIB004 . Multidomain SDN orchestration studies have sought to exploit the unified SDN control plane to aid the resource-efficient provisioning across the multiple domains. 1) General Multidomain Networks: a) Optical Multitechnologies Across Multiple Domains: Optical nodes are becoming increasingly reconfigurable (e.g., through variable BVTs and OFDM transceivers, see Section III), adding flexibility to the switching elements. When a single end-to-end service establishment is considered, it is likely that the service is supported by different optical technologies that operate across multiple domains. Yoshida et al. BIB005 have demonstrated SDN based orchestration with emphasis on the physical interconnects between multiple domains and multiple technology specific controllers so as to realize end-to-end services. OpenFlow capabilities have been extended for fixed-length variable-capacity optical packet switching BIB006 . That is, when an optical packet arrives, the optical switch matches the packet label against its flow table; if a rule (flow entry) exists in the switch for the specific label, the defined action is performed on the optical packet. Otherwise, the optical packet is dropped and the controller is notified. Interconnects between optical packet switching networks and elastic optical networks are enabled through a novel OPS-EON interface card. The OPS-EON interface is designed as an extension to a reconfigurable, programmable, and flexi-grid EON supporting the OpenFlow protocol. The testbed implementation of the OPS-EON interface cards demonstrated the orchestration of multiple domain controllers. c) Inter-Domain Protocol: Zhu et al. BIB007 followed a different approach for the SDN multidomain control mechanisms by considering a flat arrangement of controllers, as shown in Fig. 21. Each domain is autonomously managed by an SDN controller specific to the domain. An Inter-Domain Protocol (IDP) was devised to establish the communication between the domain specific controllers to coordinate the lightpath setup across multiple domains. Zhu et al. also proposed a Routing and Spectrum Allocation (RSA) algorithm for the end-to-end provisioning of services in the SD-EONs. The distributed RSA algorithm operates on the domain specific controllers using the IDP. The RSA considers both transparent lightpath connections, i.e., all-optical lightpaths, and translucent lightpath connections, i.e., optical-electrical-optical connections. The benefit of such techniques is privacy, since the domain specific policies and topology information are not shared with other network entities. Neighbor discovery is independently conducted by the domain specific controller or can be initially configured. A domain appears as an abstracted virtual node to all other domain specific controllers. Each controller then computes the shortest path routes between its border nodes within its domain. An experimental setup validating the proposed mechanism was demonstrated across geographically distributed domains in the USA and China. The spectrum assignment aspect of such a multidomain RSA is illustrated by the sketch below.
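The following minimal Python sketch illustrates the spectrum assignment aspect of such a multidomain RSA, in particular the difference between transparent lightpaths (which require spectrum continuity end-to-end) and translucent lightpaths (where optical-electrical-optical conversion at the domain borders permits per-domain spectrum blocks). The slot-mask data layout and the first-fit policy are our own simplifications, not the algorithm of BIB007 .

NUM_SLOTS = 16   # frequency slots per link (arbitrary)

def first_fit(free_masks, demand_slots):
    """Find the lowest contiguous slot block free on ALL of the given links."""
    for start in range(NUM_SLOTS - demand_slots + 1):
        block = range(start, start + demand_slots)
        if all(all(mask[i] for i in block) for mask in free_masks):
            return list(block)
    return None

def assign(path_links, masks, demand_slots, translucent):
    """path_links: per-domain link lists, e.g., [["d1-a", "d1-b"], ["d2-a"]];
    masks: {link: [bool] * NUM_SLOTS} free-slot masks.
    Transparent: one block must be free end to end (spectrum continuity).
    Translucent: O/E/O at the borders allows independent per-domain blocks."""
    if translucent:
        blocks = []
        for links in path_links:
            block = first_fit([masks[l] for l in links], demand_slots)
            if block is None:
                return None
            blocks.append(block)
    else:
        all_masks = [masks[l] for links in path_links for l in links]
        block = first_fit(all_masks, demand_slots)
        if block is None:
            return None
        blocks = [block] * len(path_links)
    for links, block in zip(path_links, blocks):
        for l in links:
            for i in block:
                masks[l][i] = False   # occupy the slots
    return blocks

masks = {l: [True] * NUM_SLOTS for l in ("d1-a", "d1-b", "d2-a")}
for i in range(4):                    # pre-occupy low slots in domain 2 only
    masks["d2-a"][i] = False
print("transparent:", assign([["d1-a", "d1-b"], ["d2-a"]], masks, 3, False))
print("translucent:", assign([["d1-a", "d1-b"], ["d2-a"]], masks, 3, True))

The transparent request is pushed up to a slot block that is free in both domains, whereas the translucent request can use the lowest free block in each domain independently, illustrating why the transparent/translucent distinction matters for multidomain spectrum utilization.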
d) Multidomain Network Hypervisors: Vilalta et al. BIB008 presented a mechanism for virtualizing multitechnology optical, multitenant networks. The Multidomain Network Hypervisor (MNH) creates customer specific virtual network slices managed by the customer specific SDN controllers (residing at the customers' locations), as illustrated in Fig. 22. Physical resources are managed by their domain specific physical SDN controllers. The MNH operates over the network orchestrator and the physical SDN controllers for provisioning VONs on the physical infrastructures. The MNH abstracts both (i) multiple optical transport technologies, such as optical packet switching and Elastic Optical Networks (EONs), and (ii) multiple control domains, such as GMPLS and OpenFlow. Experimental assessments on a testbed achieved VON provisioning within a few seconds (5 s) and control overhead delays on the order of several tens of milliseconds. Related virtualization mechanisms for multidomain optical SDN networks with end-to-end provisioning have been investigated in BIB002 , BIB013 . e) Application-Based Network Operations: Muñoz et al. BIB009 have presented an SDN orchestration mechanism based on the application-based network operations (ABNO) framework, which is being defined by the IETF BIB001 . The ABNO based SDN orchestrator integrates OpenFlow and GMPLS in transport networks. Two SDN orchestration designs have been presented: (i) with centralized physical network topology aware path computation (illustrated in Fig. 23), and (ii) with topology abstraction and distributed path computation. In the centralized design, the OpenFlow and GMPLS controllers (lower level control) expose the physical topology information to the ABNO orchestrator (higher level control). The PCE in the ABNO orchestrator has the global view of the network and can compute end-to-end paths with complete knowledge of the network. The computed paths are then provisioned through the lower level controllers. The pitfalls of such centralized designs are (i) computationally intensive path computations, (ii) continuous updates of topology and traffic information, and (iii) the sharing of confidential network information and policies with other network elements. To reduce the computational load at the orchestrator, the second design implements distributed path computation at the lower level controllers (instead of path computation at the centralized orchestrator). However, such distributed mechanisms may lead to suboptimal solutions due to the limited network knowledge. 2) Multidomain Data Center Orchestration: a) Control Architectures: Geographically distributed DCs are typically interconnected by links traversing multiple domains. The traversed domains may be homogeneous, i.e., have the same type of network technology, e.g., OpenFlow based ROADMs, or may be heterogeneous, i.e., have different types of network technologies, e.g., OpenFlow based ROADMs and GMPLS based WSON. The SDN control structures for a multidomain network can be broadly classified into the categories of (i) a single SDN orchestrator/controller, (ii) multiple mesh SDN controllers, and (iii) multiple hierarchical SDN controllers BIB010 , BIB016 . The single SDN orchestrator/controller has to support heterogeneous SBIs in order to operate with multiple heterogeneous domains, e.g., the Path Computation Element Protocol (PCEP) for GMPLS network domains and the OpenFlow protocol for OpenFlow supported ROADMs. Also, domain specific details, such as the topology, as well as network statistics and configurations, have to be exposed to an external entity, namely the single SDN orchestrator/controller, raising privacy concerns.
Furthermore, a single controller may result in scalability issues. Mesh SDN control connects the domain-specific controllers side-by-side by extending the east/westbound interfaces. Although mesh SDN control addresses the scalability and privacy issues, the distributed nature of the control mechanisms may lead to suboptimal solutions. With hierarchical SDN control, a logically centralized controller (parent SDN controller) is placed above the domain-specific controllers (child SDN controllers), extending the north/southbound interfaces. Domain-specific controllers virtualize the underlying networks inside their domains, exposing only an abstracted view of the domains to the parent controller, which addresses the privacy concerns. Centralized path computation at the parent controller can achieve optimal solutions. Multiple hierarchical levels can address the scalability issues. These advantages of hierarchical SDN control are achieved at the expense of an increased number of network entities, resulting in increased operational complexity. b) Hierarchical PCE: Casellas et al. BIB011 considered DC connectivity involving both intra-DC and inter-DC communication. Intra-DC communication enabled through OpenFlow networks is supported by an OpenFlow controller. Inter-DC communication is enabled by optical transport networks involving more complex control, such as GMPLS, as illustrated in Fig. 24 . To achieve the desired SDN benefits of flexibility and scalability, a common centralized control platform spanning across the heterogeneous control domains is proposed. More specifically, a Hierarchical PCE (H-PCE) aggregates the PCE states from multiple domains. The end-to-end path setup between DCs is orchestrated by a parent PCE (pPCE) element, while the paths are provisioned by the child PCEs (cPCEs) on the physical resources, i.e., the OpenFlow and GMPLS domains. The proposed mechanism utilizes existing protocol interfaces, such as BGP-LS and PCEP, which are extended with OpenFlow to support the H-PCE. c) Virtual-SDN Control: Muñoz et al. BIB012 , BIB014 proposed a mechanism to virtualize the SDN control functions in a DC/cloud by integrating SDN with Network Function Virtualization (NFV). In the considered context, NFV refers to realizing network functions, which were conventionally implemented on specialized hardware modules, as software modules running on generic computing hardware inside a DC. The orchestration of Virtual Network Functions (VNFs) is enabled by an integrated SDN and NFV management framework that dynamically instantiates virtual SDN controllers. The virtual SDN controllers control the Virtual Tenant Networks (VTNs), i.e., virtual multidomain and multitechnology networks. Multiple VNFs running on a Virtual Machine (VM) in a DC are managed by a VNF manager. A virtual SDN controller is responsible for creating, managing, and tearing down the VNFs, thereby achieving flexibility in the control plane management of multilayer and multidomain networks. Additionally, as an extension to the proposed mechanism, the virtualization of the control functions of the LTE Evolved Packet Core (EPC) has been discussed in BIB015 .
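To make the hierarchical (parent/child) control model discussed in this subsection concrete, the following minimal Python sketch shows a parent orchestrator that sees only abstracted domain views and delegates intra-domain provisioning to child controllers. All class and method names are hypothetical illustrations; an actual H-PCE deployment would realize these interactions with the PCEP and BGP-LS protocols.

```python
# Hypothetical sketch of hierarchical (parent/child) multidomain path
# setup; all names are illustrative, not an actual H-PCE implementation.

class ChildController:
    """Domain-specific controller exposing only an abstracted view."""
    def __init__(self, domain_id, border_nodes):
        self.domain_id = domain_id
        self.border_nodes = border_nodes

    def abstract_view(self):
        # The domain appears as a single virtual node with border ports;
        # the internal topology stays private to this controller.
        return {"domain": self.domain_id, "borders": self.border_nodes}

    def provision_segment(self, ingress, egress):
        # Compute and install the intra-domain segment locally, e.g.,
        # a shortest path between the border nodes.
        print(f"[{self.domain_id}] provisioning {ingress} -> {egress}")
        return True


class ParentController:
    """Parent orchestrator: stitches segments across ordered domains."""
    def __init__(self, children):
        self.children = children  # ordered along the end-to-end route

    def setup_path(self, src, dst):
        # The parent sees only the abstracted domain views, not internals.
        route = [c.abstract_view()["domain"] for c in self.children]
        print("inter-domain route:", " -> ".join(route))
        hop_in = src
        for i, child in enumerate(self.children):
            last = (i == len(self.children) - 1)
            hop_out = dst if last else child.border_nodes[-1]
            if not child.provision_segment(hop_in, hop_out):
                return False  # a real system would roll back here
            hop_in = hop_out
        return True


domains = [ChildController("A", ["a1", "a2"]),
           ChildController("B", ["b1", "b2"])]
ParentController(domains).setup_path("hostA", "hostZ")
```

The sketch also reflects the privacy property noted above: the parent never inspects intra-domain topology, it only sequences border-to-border segments.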
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Orchestration: Summary and Discussion <s> We apply a hierarchical SDN controller to demonstrate multi-layer orchestration of a commercial Optical Network Control platform. We show dynamic allocation of transport resources for bandwidth on demand and congestion control of packet services. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Orchestration: Summary and Discussion <s> We argue that the implementation of services in an IP-optical network should be driven by the needs of the specific applications, and explain why this requires a centralized orchestration architecture. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Orchestration: Summary and Discussion <s> IP+WDM transport benefits significantly from advancements in SDN automation, and traffic-engineering central-control optimization, potentially >30%. We discuss adoption opportunities in planning/engineering and operations/maintenance, trade-offs, and requirements for multi-phased evolution to a programmable multi-layer SDN architecture. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Orchestration: Summary and Discussion <s> A network hypervisor is introduced to dynamically deploy multi-tenant virtual networks on top of multi-technology optical networks. It provides an abstract view of each virtual network and enables its control through an independent SDN controller. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Orchestration: Summary and Discussion <s> We demonstrate a control mechanism for multi-domain optical networks with commercial OTN equipments by using hierarchical SDN controllers. A solution based on extended OpenFlow was proposed to support Control Virtual Network Interface in OTN network. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Orchestration: Summary and Discussion <s> New and emerging use cases, such as the interconnection of geographically remote data centers, are drawing attention to the need for provisioning end-to-end connectivity services spanning multiple and heterogeneous network domains. This heterogeneity is due not only to the data transmission and switching technology (the so-called data plane) but also to the deployed control plane, which may be used within each domain to automate the setup and recovery of such services, dynamically. The choice of a control plane is affected by factors such as availability, maturity, operator's preference, and the ability to satisfy a list of functional requirements. Given the current developments around OpenFlow and software-defined networking (SDN) along with the need to account for existing deployments based on GMPLS, the problem of heterogeneous control plane interworking needs to be solved. The retained solution must equally address the specific issues of multidomain networks, such as limited domain topology visibility, given the scalability and confidentiality constraints that characterize them. In this setting, we propose a functional and protocol architecture for such interworking, based on the key concepts of network abstraction and overarching control, implemented in terms of a hierarchical stateful path computation element (PCE), which provides the orchestration and coordination layer. 
In the proposed architecture, the PCEP and BGP-LS protocols are extended to support OpenFlow addresses and datapath identifiers, unifying both GMPLS and OpenFlow domains. The solution is deployed in an experimental testbed and validated. Although the main scope of the approach is the interworking of OpenFlow and GMPLS, the same approach can be directly applied to a wide range of multidomain scenarios, with either homogeneous or heterogeneous control technologies. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Orchestration: Summary and Discussion <s> Software-defined networking (SDN) and network function virtualization (NFV) have emerged as the most promising candidates for improving network function and protocol programmability and dynamic adjustment of network resources. On the one hand, SDN is responsible for providing an abstraction of network resources through well-defined application programming interfaces. This abstraction enables SDN to perform network virtualization, that is, to slice the physical infrastructure and create multiple coexisting application-specific virtual tenant networks (VTNs) with specific quality-of-service and service-levelagreement requirements, independent of the underlying optical transport technology and network protocols. On the other hand, the notion of NFV relates to deploying network functions that are typically deployed in specialized and dedicated hardware, as software instances [called virtual network functions (VNFs)] running on commodity servers (e.g., in data centers) through software virtualization techniques. Despite all the attention that has been given to virtualizing IP functions (e.g., firewall; authentication, authorization, and accounting) or Long-Term Evolution control functions (e.g., mobility management entity, serving gateway, and packet data network gateway), some transport control functions can also be virtualized and moved to the cloud as a VNF. In this work we propose virtualizing the tenant SDN control functions of a VTN and moving them into the cloud. The control of a VTN is a key requirement associated with network virtualization, since it allows the dynamic programming (i.e., direct control and configuration) of the virtual resources allocated to the VTN. We experimentally assess and evaluate the first SDN/NFV orchestration architecture in a multipartner testbed to dynamically deploy independent SDN controller instances for each instantiated VTN and to provide the required connectivity within minutes. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Orchestration: Summary and Discussion <s> The combination of elastic optical networks (EONs) and software-defined networking (SDN) leads to SD-EONs, which bring a new opportunity for enhancing programmability and flexibility of optical networks with more freedom for network operators to customize their infrastructure dynamically. In this paper, we investigate how to apply multidomain scenarios to SD-EONs. We design the functionalities in the control plane to facilitate multidomain tasks, and propose an interdomain protocol to enable OpenFlow controllers in different SD-EON domains to operate cooperatively for multidomain routing and spectrum assignment. The proposed system is implemented and experimentally demonstrated in a multinational SD-EON control plane testbed that consists of two geographically distributed domains located in China and USA, respectively. 
Experimental results indicate that the proposed system performs well for resource allocation across multiple SD-EON domains. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Orchestration: Summary and Discussion <s> New and emerging use cases, such as the interconnection of geographically distributed data centers (DCs), are drawing attention to the requirement for dynamic end-to-end service provisioning, spanning multiple and heterogeneous optical network domains. This heterogeneity is, not only due to the diverse data transmission and switching technologies, but also due to the different options of control plane techniques. In light of this, the problem of heterogeneous control plane interworking needs to be solved, and in particular, the solution must address the specific issues of multi-domain networks, such as limited domain topology visibility, given the scalability and confidentiality constraints. In this article, some of the recent activities regarding the Software-Defined Networking (SDN) orchestration are reviewed to address such a multi-domain control plane interworking problem. Specifically, three different models, including the single SDN controller model, multiple SDN controllers in mesh, and multiple SDN controllers in a hierarchical setting, are presented for the DC interconnection network with multiple SDN/ OpenFlow domains or multiple OpenFlow/ Generalized Multi-Protocol Label Switching (GMPLS) heterogeneous domains. In addition, two concrete implementations of the orchestration architectures are detailed, showing the overall feasibility and procedures of SDN orchestration for the end-to-end service provisioning in multi-domain data center optical networks. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Orchestration: Summary and Discussion <s> Abstract Emerging cloud-based applications, running in geographically distributed data centers (DCs), generate new dynamic traffic patterns which claim for a more efficient management of the traffic flows. Geographically distributed DCs interconnection requires automatic and more dynamic provisioning and deletion of end-to-end (E2E) connectivity services, through heterogeneous network domains. Each network domain may use a different data transport technology but also a different control/management system. The fast development of Software Defined Networking (SDN) and the interworking with current control plane technologies such as Generalized Multi-protocol Label Switching (GMPLS), demand orchestration over the heterogeneous control instances to provide seamless E2E connectivity services to external applications (i.e. Cloud Computing applications). In this work, we present different orchestration architectures based on the SDN principles which use the Path Computation Element (PCE) as a fundamental component. In particular, a single SDN controller orchestration approach is compared with an orchestration architecture based on the Application Based Network Operations (ABNO) defined within the International Engineering Task Force (IETF), in order to find the potential benefits and drawbacks of both architectures. Finally, the SDN IT and Network Orchestration (SINO) platform which integrates the management of Cloud Computing infrastructure with the network orchestration, it is used to validate both architectures by evaluating their performance providing two inter-DC connectivity services: E2E connectivity and Virtual Machine (VM) migration. <s> BIB010
Relatively few SDN orchestration studies to date have focused on vertical multilayer networking within a given domain. These few studies have developed two general orchestration frameworks and have examined a few orchestration strategies for some specific applications. More specifically, one orchestration framework has focused on optimal bandwidth allocation based mainly on congestion BIB001 , while the other framework has focused on exploiting application delay tolerances to route traffic efficiently BIB002 . SDN orchestration of vertical multilayer optical networking is thus still a relatively unexplored area. Future research can develop orchestration frameworks that accommodate the specific optical communication technologies in the various layers and rigorously examine their performance-complexity tradeoffs. Similarly, relatively few applications have been examined to date in the application-specific orchestration studies for vertical multilayer networking BIB003 - BIB004 . The examination of the wide range of existing applications and of newly emerging network applications in the context of SDN orchestrated vertical multilayer networking presents rich research opportunities. The cross-layer perspective of the SDN orchestrator over a given domain could, for instance, be exploited for strengthening security and privacy mechanisms or for accommodating demanding real-time multimedia. Relatively more SDN orchestration studies to date have examined multidomain networking than multilayer networking (within a single domain). As the completed multidomain orchestration studies have demonstrated, SDN orchestration can greatly help in coordinating complex network management decisions across multiple distributed routing domains. The completed studies have illustrated the fundamental tradeoff between centralized decision making in a hierarchical orchestration structure and distributed decision making in a flat orchestration structure. In particular, most studies have focused on hierarchical structures BIB005 , BIB006 , BIB007 , while only one study has mainly focused on a flat orchestration structure BIB008 . In the context of DC internetworking, the studies BIB009 , BIB010 have sought to bring out the tradeoffs between these two structures by examining a range of structures from centralized to distributed. While centralized orchestration can make decisions with a wide knowledge horizon across the states of multiple domains, distributed decision making preserves the privacy of network status information, reduces control traffic, and can make fast localized decisions. Future research needs to shed further light on these complex tradeoffs for a wide range of combinations of optical technologies employed in the various domains. Throughout, it will be critical to abstract and convey the key characteristics of optical physical layer components and switching nodes to the overall orchestration protocols. Optimizing each abstraction step as well as the overall orchestration and examining the various performance tradeoffs are important future research directions. The SDON research and development efforts to date have resulted in insights for making the use of SDN in optical transport networks feasible and have demonstrated the advantages of SDN based optical network management. However, most network and service providers depend on optical transport that must integrate components from multiple industries to complete the network infrastructure.
Often, network and service providers struggle to integrate hardware components and to provide accessible software management to customers. For example, companies that develop optical hardware components do not always provide a complete associated software stack for the hardware components. Thus, network and service providers using the optical hardware components often have to maintain a software development team to integrate the various hardware components into their network through software based management, which is often a costly endeavor. Thus, improving SDN technology so that it seamlessly integrates components from various industries is an essential underlying theme for future SDON research.
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. North Bound Interface <s> Modern telecommunication networks and classical roles of operators are subject to fundamental change. Many network operators are currently seeking for new sources to generate revenue by exposing network capabilities to 3rd party service providers.At the same time we can observe that applications on the World Wide Web (WWW) are becoming more mature in terms of the definition of APIs that are offered towards other services. The combinations of those services are commonly referred to as Web 2.0 mash-ups.This report describes our approach to prototype a policy-based service broker function for Next Generation Networks (NGN)-based telecommunications service delivery platforms to provide flexible service exposure anchor points for service integration into so called mash-ups. The defined exposure API uses Intent-based request constructs to allow a description of services in business terms, i.e. intentions and strategies to achieve them and to organize their publication, search and composition on the basis of these descriptions. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. North Bound Interface <s> Virtualizing resources for easy pooling and accounting, as well as for rapid provisioning and release, is essential for the effective management of modern data centers. Although the compute and storage resources can be virtualized quite effectively, a comprehensive solution for network virtualization has yet to be developed. Our analysis of the requirements for a comprehensive network virtualization solution identified two complimentary steps of ultimate importance. One is specifying the network-related requirements, another is carrying out the requirements of multiple independent tenants in an efficient and scalable manner. We introduce a novel intent-based modeling abstraction for specifying the network as a policy governed service and present an efficient network virtualization architecture, Distributed Overlay Virtual Ethernet network (DOVE), realizing the proposed abstraction. We describe the working prototype of DOVE architecture and report the results of the extensive simulation-based performance study, demonstrating the scalability and the efficiency of the solution. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. North Bound Interface <s> In the cloud era, booming broadband access, and the new services that it facilitates, is posing a greater demand on access technologies. Legacy access technologies are facing lots of challenges: Complexity for multi-point access management; Scalability and energy efficiency in remote nodes; Painfulness of choosing a technology; Difficulty of access network wholesale. To address these challenges, this paper discusses Software-defined Access Networks (SDAN) as the next-gen architecture for access networking. With its simplified access nodes, flexible and programmable line technologies, and cloud home gateway & services, SDAN helps operators construct a simple, agile, elastic, and value-added access network. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. North Bound Interface <s> Software Defined Networking (SDN) and cloud automation enable a large number of diverse parties (network operators, application admins, tenants/end-users) and control programs (SDN Apps, network services) to generate network policies independently and dynamically. 
Yet existing policy abstractions and frameworks do not support natural expression and automatic composition of high-level policies from diverse sources. We tackle the open problem of automatic, correct and fast composition of multiple independently specified network policies. We first develop a high-level Policy Graph Abstraction (PGA) that allows network policies to be expressed simply and independently, and leverage the graph structure to detect and resolve policy conflicts efficiently. Besides supporting ACL policies, PGA also models and composes service chaining policies, i.e., the sequence of middleboxes to be traversed, by merging multiple service chain requirements into conflict-free composed chains. Our system validation using a large enterprise network policy dataset demonstrates practical composition times even for very large inputs, with only sub-millisecond runtime latencies. <s> BIB004
The NorthBound Interface (NBI) comprises the communication from the controller to the applications. This is an important area of future research, as applications and their needs are generally the driving force for deploying SDON infrastructures. Any application, such as video on demand, VoIP, file transfer, or peer-to-peer networking, communicates its requirements via the NBI to the SDN controller, which consequently conducts the necessary actions to implement the service behaviors on the physical network infrastructure. Applications often require specific service behaviors that need to be implemented on the overall network infrastructure. For example, applications requiring high data rates and reliability, such as Netflix, depend on data centers and on the availability of data from servers with highly resilient failure protection mechanisms. The associated management network needs to stack redundant devices so as to safeguard against outages. Services are provided as policies through the NBI to the SDN controller, which in turn generates flow rules for the switching devices. These flow rules can be prioritized based on the customer use cases. An important challenge for future NBI research is to provide a simple interface for a wide variety of service deployments without vendor lock-in, as vendor lock-in generally drives up costs. Also, new forms of communication with the controller, in addition to current techniques, such as REpresentational State Transfer (REST) and HTTP, should be researched. Moreover, future research should develop an NBI framework that spans horizontally across multiple controllers, so that service customers are not restricted to using only a single controller. Future research should also examine control mechanisms that optimally exploit the central SDN control to provide simple and efficient mechanisms for automatic network management and dynamic service deployment BIB003 . The NBI of SDONs is a challenging facet of research and development because of the multitude of interfaces that need to be managed on the physical and transport layers. Optical physical layer components and infrastructures require high capital and operational expenditures, and their management is generally not associated with network or service providers but rather with optical component/infrastructure vendors. Future research should develop novel Application Program Interfaces (APIs) for optical layer components and infrastructures that facilitate SDN control and are amenable to efficient NBI communication. Essentially, the challenge of efficient NBI communication with the SDN controller should be considered when designing the APIs that interface with the physical optical layer components and infrastructures. One specific strategy for simplifying network management and operation could be to explore the grouping of control policies of similar service applications, e.g., applications with similar QoS requirements. The grouping can reduce the number of control policies at the expense of a slightly coarser granularity of the service offerings. The emerging Intent-Based Networking (IBN) paradigm, which expresses services and policies as intents, can provide a specific avenue for simplifying dynamic automatic configuration and virtualization BIB001 , BIB002 . Currently, network applications are deployed by specifying how the network should behave for a specific action. For example, for interdomain routing, the Border Gateway Protocol (BGP) is used, and the network gateways are configured to communicate via BGP.
This complicates the provisioning of services that typically require multiple protocols and limits the flexibility of service provisioning. With IBN, the application expresses an intent, for example, transferring video across multiple domains. This intent is then translated into automated, dynamic configurations of the network elements to communicate data over the domains using the appropriate protocols. The grouping of service policies, such as intents, can facilitate easy and dynamic service provisioning. Intent groups can be described in a graph to simplify the compilation of service policies and to resolve conflicts BIB004 .
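As a rough illustration of the intent abstraction, the following Python sketch declares one high-level intent and compiles it into per-domain policies. All names, fields, and thresholds are hypothetical; the sketch does not follow any specific IBN framework's API.

```python
# Hypothetical illustration of an intent compiled into per-domain
# policies; the structures are illustrative, not a real IBN framework.

from dataclasses import dataclass, field

@dataclass
class Intent:
    src: str
    dst: str
    service: str                  # e.g., "video-multidomain"
    constraints: dict = field(default_factory=dict)

def compile_intent(intent, domain_path):
    """Translate one declarative intent into per-domain policy dicts."""
    policies = []
    for domain in domain_path:
        policies.append({
            "domain": domain,
            "match": {"src": intent.src, "dst": intent.dst},
            # Each domain controller maps the abstract action onto its
            # own technology-specific protocol (OpenFlow, GMPLS, BGP, ...).
            "action": f"forward-{intent.service}",
            "qos": intent.constraints,
        })
    return policies

intent = Intent(src="studioA", dst="broadcastB",
                service="video-multidomain",
                constraints={"min_rate_gbps": 3, "max_latency_ms": 50})
for policy in compile_intent(intent, ["access", "metro", "core"]):
    print(policy)
```

In a graph-based composition along the lines of BIB004 , such per-domain policies would additionally be merged with other tenants' intents and checked for conflicts before installation.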
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Reliability, Security, and Privacy <s> Software defined networking (SDN) decouples the network control and data planes. The network intelligence and state are logically centralized and the underlying network infrastructure is abstracted from applications. SDN enhances network security by means of global visibility of the network state where a conflict can be easily resolved from the logically centralized control plane. Hence, the SDN architecture empowers networks to actively monitor traffic and diagnose threats to facilitates network forensics, security policy alteration, and security service insertion. The separation of the control and data planes, however, opens security challenges, such as man-in-the middle attacks, denial of service (DoS) attacks, and saturation attacks. In this paper, we analyze security threats to application, control, and data planes of SDN. The security platforms that secure each of the planes are described followed by various security approaches for network-wide security in SDN. SDN security is analyzed according to security dimensions of the ITU-T recommendation, as well as, by the costs of security solutions. In a nutshell, this paper highlights the present and future security challenges in SDN and future directions for secure SDN. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Reliability, Security, and Privacy <s> Abstract Internet designed over 40 years ago was originally focused on host-to-host message delivery in a best-effort manner. However, introduction of new applications over the years have brought about new requirements related with throughput, scalability, mobility, security, connectivity, and availability among others. Additionally, convergence of telecommunications, media, and information technology was responsible for transformation of the Internet into an integrated system enabling accessing, distributing, processing, storing, and managing the payload of these messages. Users are now visibly more interested in receiving / accessing information independently of the network location of its host. This consideration in turn revived the interest in named data-driven networking (a.k.a. Information-Centric Networking - ICN). Instead of assuming that networks are limited to the manipulation of network locator space, the basic assumption underneath is that information can be named, addressed, and matched independently of its network location leaving in turn the possibility to match message delivery delay requirements. In this paper, we summarize our research conducted in order to bring a completely different view / perspective of network resilience, originally defined as the ability of a network to assure an acceptable level of service in the face of various faults and challenges to normal operation. That is, instead of maintaining network reachability independently of its actual utility to the “end-points”, our research aimed at exchanging and confronting the key principles that would enable an information-driven resilience (networked) scheme. More precisely, knowing that the user utility function is mainly driven nowadays by information-related criteria such as accessibility (reachability), how to design network resilience schemes that would be directed toward that goal. The main challenge is thus: can one design resilience schemes that combine maximization of end-point utility function and minimization of the network-related cost? 
<s> BIB002
The SDN paradigm is based on a centrally managed network. Faulty behaviors, security infringements, or failures of the control would likely result in extensive disruptions and performance losses that are exacerbated by the centralized nature of the SDN control. Instances of extensive disruptions and losses due to SDN control failures or infringements would likely reduce the trust in SDN deployments. Therefore, it is very important to ensure reliable network operation BIB002 and to provision for the security and privacy of the communication. Hence, reliability, security, and privacy are prominent SDON research challenges. Security in SDONs is a fairly open research area, with only a few published findings. As a few reviewed studies (see Section VI-D) have explored, the central SDN control can facilitate reliable network service by speeding up failure recovery. The central SDN control can continuously scan the network and the status messages from the network devices, or redirect the status messages to a monitoring service that analyzes the network. Security breaches can be contained by broadcasting messages from the controller to all affected devices to block traffic in a specific direction. Future research should refine these reliability functions to optimize automated fault and performance diagnostics as well as reconfigurations for quick failure recovery. Network failures can occur either within the physical layer infrastructure or as errors within the higher protocol layers, e.g., in the classical data link (L2), network (L3), or transport (L4) layers. In the context of SDONs, physical layer failures present important future research opportunities. Physical layer devices need to be carefully monitored by sending feedback from the devices to the controller. The research and development on the communication between the SDN controller and the network devices has mainly focused on sending flow rules to the network devices, while the feedback communicated from the devices to the controller has received relatively little attention. For example, three key OpenFlow message types are Packet-In, Packet-Out, and Flow-Mod. The Packet-In messages are sent from the OpenFlow switches to the controller, the Packet-Out messages are sent from the controller to the devices, and the Flow-Mod messages are used to modify and monitor the flow rules in the flow table. Future research should examine extensions of the Packet-In message to send specific status updates to the controller in support of network and device failure monitoring. These status messages could be monitored by a dedicated failure monitoring service. The status update messages could be broadly defined to cover a wide range of network management aspects, including system health monitoring and network failure protection. A related future research direction is to secure the configuration and operation of SDONs through trusted encryption and key management systems BIB001 . Moreover, mechanisms to ensure the privacy of the communication should be explored. The security and privacy mechanisms should strive to exploit the natural immunity of optical transmission segments to electromagnetic interference. In summary, security and privacy of SDON communication are largely open research areas. The optical physical layer infrastructure has traditionally not been controlled remotely, which in general reduces the occurrences of security breaches.
However, centralized SDN management and control increase the risk of security breaches, requiring extensive research on SDON security, so as to reap the benefits of centralized SDN management and control in a secure manner.
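As a small illustration of such device-to-controller feedback, the sketch below uses the open-source Ryu controller framework to log port-status events; the correlation with optical-layer alarms is left as a hypothetical placeholder comment, as the actual diagnostics logic would be deployment specific.

```python
# Minimal Ryu app sketch: surface switch status events for a failure
# monitoring service. The alarm-correlation step is hypothetical.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls

class FailureMonitor(app_manager.RyuApp):

    @set_ev_cls(ofp_event.EventOFPPortStatus, MAIN_DISPATCHER)
    def _port_status_handler(self, ev):
        msg = ev.msg
        ofp = msg.datapath.ofproto
        reasons = {ofp.OFPPR_ADD: "added",
                   ofp.OFPPR_DELETE: "deleted",
                   ofp.OFPPR_MODIFY: "modified"}
        reason = reasons.get(msg.reason, "unknown")
        # A dedicated monitoring service could correlate these events
        # with optical-layer alarms (e.g., loss of signal) at this point.
        self.logger.info("datapath %s: port %s %s",
                         msg.datapath.id, msg.desc.port_no, reason)
```

The envisioned Packet-In extensions for richer status reporting would feed the same kind of monitoring hook, but with device health data beyond the standard port-status events shown here.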
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Scalability <s> P4 is a high-level language for programming protocol-independent packet processors. P4 works in conjunction with SDN control protocols like OpenFlow. In its current form, OpenFlow explicitly specifies protocol headers on which it operates. This set has grown from 12 to 41 fields in a few years, increasing the complexity of the specification while still not providing the flexibility to add new headers. In this paper we propose P4 as a strawman proposal for how OpenFlow should evolve in the future. We have three goals: (1) Reconfigurability in the field: Programmers should be able to change the way switches process packets once they are deployed. (2) Protocol independence: Switches should not be tied to any specific network protocols. (3) Target independence: Programmers should be able to describe packet-processing functionality independently of the specifics of the underlying hardware. As an example, we describe how to use P4 to configure a switch to add a new hierarchical label. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Scalability <s> With the development of software-defined networking (SDN), people start to realize that the protocol-dependent nature of OpenFlow, i.e., the matching fields are defined according to existing network protocols (e.g., Ethernet and IP), will limit the programmability of forwarding plane and cause scalability issues. In this work, we focus on Protocol-Oblivious Forwarding (POF) [1], which can make the forwarding plane reconfigurable, programmable and future-proof with a protocol-independent instruction set. We design and implement a POF-based flexible flow converging (F-FC) scheme to reduce the number of flow-entries for enhanced scalability. To evaluate the POF system experimentally, we build a network testbed that consists of both commercial and software-based POF switches. Network experiments with real-time video streaming in the proposed POF system demonstrate that our POF-based F-FC approach can outperform conventional schemes. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> D. Scalability <s> Novel optical access network virtualization and resource allocation algorithms for Internet-of-Things support are proposed and implemented on a real-time SDN-controller platform. 30–50% gains in served request number, traffic prioritization, and revenue are demonstrated. <s> BIB003
Optical networks are expensive and are used for high-bandwidth services, such as long-distance network access and data center interconnections. Optical network infrastructures either span long distances between multiple geographically distributed locations or provide short-distance incremental interconnects between computing devices. Scalability in multiple dimensions is therefore an important aspect for future SDON research. For example, a myriad of tiny end devices need to be provided with network access in the emerging Internet of Things (IoT) paradigm BIB003 . The IoT requires access network architectures and protocols to scale vertically (across protocol layers and technologies) and horizontally (across network domains). At the same time, the ongoing growth of multimedia services requires data centers to scale up optical network bandwidths to maintain the quality of experience of the multimedia services. Broadly speaking, scalability in the vertical dimension includes the support for multiple network devices and technologies, while scalability in the horizontal direction includes the communication between a large number of different domains as well as the support for existing non-SDON infrastructures. A specific scalability challenge arising with the SDN infrastructure is that the scalability of the control plane communication (e.g., OpenFlow protocol signaling) and the scalability of the data plane communication, which transports the data plane flows, need to be jointly considered. For example, version 1.4 of the OpenFlow protocol currently supports 34 message types [455] for the communication between the network devices and the controller. This fixed message set limits the functionality of the SBI communication. Recent studies have explored a protocol-agnostic approach BIB001 , BIB002 , i.e., a data plane abstraction that supports arbitrary protocols for the communication between the control plane and the data plane. The protocol-agnostic approach resolves the limitations faced by OpenFlow and, in general, by any particular fixed protocol. Exploring this novel protocol-agnostic approach presents many new SDON research directions. Scalability also requires SDN technology to overlay and scale over existing non-SDN infrastructures. Vendors provide support for known non-SDN devices, but this area is still a challenge. There are no known protocols that could modify the flow tables of existing, so-called "non-OpenFlow" switches. In the case of optical networks, as SDN is still being incrementally deployed, the overlaying with non-SDN infrastructure still requires significant attention. Ideally, the overlay mechanisms should ensure seamless integration and should scale with the growing deployment of SDN technologies while incurring only low costs. Overall, scalability poses highly important future SDON research directions that require economical solutions.
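To illustrate the protocol-oblivious idea of matching raw {offset, length} bit fields instead of named protocol headers, consider the following minimal Python sketch. The data structures are hypothetical simplifications in the spirit of POF-style matching BIB002 , not an implementation of POF or P4 BIB001 .

```python
# Hypothetical sketch of protocol-oblivious matching: flow entries match
# raw byte fields identified by (offset, length), so supporting a new
# protocol requires only new table entries, not new switch logic.

def extract(packet: bytes, offset: int, length: int) -> bytes:
    return packet[offset:offset + length]

# Each entry: list of ((offset, length), expected bytes) pairs -> action.
flow_table = [
    ([((12, 2), b"\x08\x00")], "forward:port2"),          # Ethertype IPv4
    ([((0, 6), b"\xff\xff\xff\xff\xff\xff")], "flood"),   # broadcast MAC
]

def lookup(packet: bytes) -> str:
    for fields, action in flow_table:
        if all(extract(packet, off, ln) == val
               for (off, ln), val in fields):
            return action
    return "send-to-controller"  # table miss: notify the controller

print(lookup(bytes(64)))  # no entry matches -> "send-to-controller"
```

The same offset/length abstraction could in principle describe optical-packet labels, which is what makes protocol-agnostic SBIs attractive for SDONs.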
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> H. Fiber-Wireless (FiWi) Networking <s> The surest way to increase the system capacity of a wireless link is by getting the transmitter and receiver closer to each other, which creates the dual benefits of higher-quality links and more spatial reuse. In a network with nomadic users, this inevitably involves deploying more infrastructure, typically in the form of microcells, hot spots, distributed antennas, or relays. A less expensive alternative is the recent concept of femtocells - also called home base stations - which are data access points installed by home users to get better indoor voice and data coverage. In this article we overview the technical and business arguments for femtocells and describe the state of the art on each front. We also describe the technical challenges facing femtocell networks and give some preliminary ideas for how to overcome them. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> H. Fiber-Wireless (FiWi) Networking <s> Access Network Selection (ANS) providing the most appropriate networking technology for accessing and using services in a heterogeneous wireless environment constitutes the heart of the overall handover management procedure. The aim of this paper is to survey representative vertical handover schemes proposed in related research literature with emphasis laid on the design of the ANS mechanism. Schemes' distinct features are analyzed and the authors discuss on their relative merits and weaknesses. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> H. Fiber-Wireless (FiWi) Networking <s> Energy efficiency in cellular networks is a growing concern for cellular operators to not only maintain profitability, but also to reduce the overall environment effects. This emerging trend of achieving energy efficiency in cellular networks is motivating the standardization authorities and network operators to continuously explore future technologies in order to bring improvements in the entire network infrastructure. In this article, we present a brief survey of methods to improve the power efficiency of cellular networks, explore some research issues and challenges and suggest some techniques to enable an energy efficient or "green" cellular network. Since base stations consume a maximum portion of the total energy used in a cellular system, we will first provide a comprehensive survey on techniques to obtain energy savings in base stations. Next, we discuss how heterogeneous network deployment based on micro, pico and femto-cells can be used to achieve this goal. Since cognitive radio and cooperative relaying are undisputed future technologies in this regard, we propose a research vision to make these technologies more energy efficient. Lastly, we explore some broader perspectives in realizing a "green" cellular network technology <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> H. Fiber-Wireless (FiWi) Networking <s> For more than three decades, stochastic geometry has been used to model large-scale ad hoc wireless networks, and it has succeeded to develop tractable models to characterize and better understand the performance of these networks. Recently, stochastic geometry models have been shown to provide tractable yet accurate performance bounds for multi-tier and cognitive cellular wireless networks. 
Given the need for interference characterization in multi-tier cellular networks, stochastic geometry models provide high potential to simplify their modeling and provide insights into their design. Hence, a new research area dealing with the modeling and analysis of multi-tier and cognitive cellular wireless networks is increasingly attracting the attention of the research community. In this article, we present a comprehensive survey on the literature related to stochastic geometry models for single-tier as well as multi-tier and cognitive cellular wireless networks. A taxonomy based on the target network model, the point process used, and the performance evaluation technique is also presented. To conclude, we discuss the open research challenges and future research directions. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> H. Fiber-Wireless (FiWi) Networking <s> Machine-to-machine communication, a promising technology for the smart city concept, enables ubiquitous connectivity between one or more autonomous devices without or with minimal human interaction. M2M communication is the key technology to support data transfer among sensors and actuators to facilitate various smart city applications (e.g., smart metering, surveillance and security, infrastructure management, city automation, and eHealth). To support massive numbers of machine type communication (MTC) devices, one of the challenging issues is to provide an efficient way for multiple access in the network and to minimize network overload. In this article, we review the M2M communication techniques in Long Term Evolution- Advanced cellular networks and outline the major research issues. Also, we review the different random access overload control mechanisms to avoid congestion caused by random channel access of MTC devices. To this end, we propose a reinforcement learning-based eNB selection algorithm that allows the MTC devices to choose the eNBs (or base stations) to transmit packets in a self-organizing fashion. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> H. Fiber-Wireless (FiWi) Networking <s> The 3GPP has raised the need to revisit the design of next generations of cellular networks in order to make them capable and efficient to provide M2M services. One of the key challenges that has been identified is the need to enhance the operation of the random access channel of LTE and LTE-A. The current mechanism to request access to the system is known to suffer from congestion and overloading in the presence of a huge number of devices. For this reason, different research groups around the globe are working towards the design of more efficient ways of managing the access to these networks in such circumstances. This paper aims to provide a survey of the alternatives that have been proposed over the last years to improve the operation of the random access channel of LTE and LTE-A. A comprehensive discussion of the different alternatives is provided, identifying strengths and weaknesses of each one of them, while drawing future trends to steer the efforts over the same shooting line. In addition, while existing literature has been focused on the performance in terms of delay, the energy efficiency of the access mechanism of LTE will play a key role in the deployment of M2M networks. For this reason, a comprehensive performance evaluation of the energy efficiency of the random access mechanism of LTE is provided in this paper. 
The aim of this computer-based simulation study is to set a baseline performance upon which new and more energy-efficient mechanisms can be designed in the near future. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> H. Fiber-Wireless (FiWi) Networking <s> Next generation 5G mobile system will support the vision of connecting all devices that benefit from a connection. Transport networks need to support the required capacity, latency and flexibility. This paper outlines how 5G transport networks will address these requirements. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> H. Fiber-Wireless (FiWi) Networking <s> Mobile computation offloading has been identified as a key-enabling technology to overcome the inherent processing power and storage constraints of mobile end devices. To satisfy the low-latency requirements of content-rich mobile applications, existing mobile cloud computing solutions allow mobile devices to access the required resources by accessing a nearby resource-rich cloudlet, suffering increased capital and operational expenditures. To address this issue, in this paper, we propose an infrastructure and architectural approach based on the orchestrated planning and operation of optical data center networks and wireless access networks. To this end, a novel formulation based on a multi-objective nonlinear programming model is presented that considers energy-efficient virtual infrastructure planning over the converged wireless, optical network interconnecting DCs with mobile devices, taking a holistic view of the infrastructure. Our modelling results identify trends and trade-offs relating to end-to-end service delay, mobility, resource requirements and energy consumption levels of the infrastructure across the various technology domains. <s> BIB008
The optical (fiber) and wireless network domains have many differences. At the physical layer, wireless networks are characterized by varying channel qualities, potentially high losses, and generally lower transmission bit rates than optical fiber. Wireless end nodes are typically mobile and may connect dynamically to wireless network domains. The mobile wireless nodes are generally the end nodes in a FiWi network and connect via intermediate optical nodes to the Internet. Due to these different characteristics, the management of wireless networks with mobile end nodes is very different from the management of optical network nodes. For example, wireless access points need to maintain their own routing tables to accommodate access for dynamically connected mobile devices. Combining the control of both wireless and optical networks in a single SDN controller requires concrete APIs that handle the respective control functions of the wireless and optical networks. Currently, service providers maintain separate physical management services without a unified logical control and management plane for FiWi networks. Developing integrated controls for FiWi networks can be viewed as a special case of multilayer networking and integration. Developing specialized multilayer networking strategies for FiWi networks is an important future research direction, as many aspects of wireless networks have dramatically advanced in recent years. For instance, the cell structure of wireless cellular networks BIB007 has advanced to femtocell networks BIB001 as well as heterogeneous and multitier cellular structures BIB004 , BIB002 . At the same time, machine-to-machine communication BIB005 , BIB006 and energy savings BIB008 , BIB003 have drawn research attention.
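One conceivable shape for such unified FiWi control APIs is an adapter layer per technology domain. The following Python sketch is purely illustrative (all class and method names are hypothetical) of how optical and wireless control functions might sit behind a common interface in a single controller.

```python
# Purely illustrative adapter-layer sketch for unified FiWi control;
# all names are hypothetical, not a proposal from the surveyed studies.

from abc import ABC, abstractmethod

class DomainAdapter(ABC):
    """Common control interface exposed to the unified SDN controller."""
    @abstractmethod
    def provision_path(self, src: str, dst: str, rate_gbps: float) -> bool:
        ...

class OpticalAdapter(DomainAdapter):
    def provision_path(self, src, dst, rate_gbps):
        # Would assign a wavelength/spectrum slice toward ROADMs/OLTs;
        # static topology, no mobility handling needed.
        print(f"optical: {src} -> {dst} lightpath at {rate_gbps} Gb/s")
        return True

class WirelessAdapter(DomainAdapter):
    def provision_path(self, src, dst, rate_gbps):
        # Would configure access points; must track mobile attachment
        # points and varying channel quality, unlike the optical case.
        print(f"wireless: {src} -> {dst} flow, target {rate_gbps} Gb/s")
        return True

def provision_fiwi(adapters, src, dst, rate_gbps):
    # Unified controller: provision each segment through its adapter.
    return all(a.provision_path(src, dst, rate_gbps) for a in adapters)

provision_fiwi([WirelessAdapter(), OpticalAdapter()],
               "mobile-ue1", "core-gw", 0.1)
```

The design question raised in the text, namely where the mobility-tracking state lives and how it is abstracted toward the common controller, is exactly what such an adapter boundary would have to settle.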
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> I. QoS and Energy Efficiency <s> Energy efficiency in cellular networks is a growing concern for cellular operators to not only maintain profitability, but also to reduce the overall environment effects. This emerging trend of achieving energy efficiency in cellular networks is motivating the standardization authorities and network operators to continuously explore future technologies in order to bring improvements in the entire network infrastructure. In this article, we present a brief survey of methods to improve the power efficiency of cellular networks, explore some research issues and challenges and suggest some techniques to enable an energy efficient or "green" cellular network. Since base stations consume a maximum portion of the total energy used in a cellular system, we will first provide a comprehensive survey on techniques to obtain energy savings in base stations. Next, we discuss how heterogeneous network deployment based on micro, pico and femto-cells can be used to achieve this goal. Since cognitive radio and cooperative relaying are undisputed future technologies in this regard, we propose a research vision to make these technologies more energy efficient. Lastly, we explore some broader perspectives in realizing a "green" cellular network technology <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> I. QoS and Energy Efficiency <s> The paper presents energy efficient routing algorithms based on a novel integrated control plane platform. The centralized control plane structure enables the use of flexible heuristic algorithms for route selection in optical networks. Differentiated routing for various traffic types is used in our previous work. The work presented in this paper further optimizes the energy performance in the whole network by utilizing a multi-objective evolutionary algorithm for route selection. The trade-off between energy optimization and QoS for high priority traffic is examined and results show an overall improvement in energy performance whilst maintaining satisfactory QoS. Energy savings are obtained on the low priority traffic whilst the QoS for the high priority traffic is not degraded. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> I. QoS and Energy Efficiency <s> Software Defined Networking (SDN) and Network Function Virtualization (NFV) provide an alluring vision of how to transform broadcast, contribution and content distribution networks. In our laboratory we assembled a multi-vendor, multi-layer media network environment that used SDN controllers and NFV-based applications to schedule, coordinate, and control media flows across broadcast and contribution network infrastructure. — This paper will share our experiences of investigating, designing and experimenting in order to build the next generation broadcast and contribution network. We will describe our experience of dynamic workflow automation of high-bandwidth broadcast and media services across multi-layered optical network environment using SDN-based technologies for programmatic forwarding plane control and orchestration of key network functions hosted on virtual machines. Finally, we will outline the prospects for the future of how packet and optical technologies might continue to scale to support the transport of increasingly growing broadcast media. <s> BIB003
Different types of applications have vastly different traffic bit rate characteristics and QoS requirements. For instance, streaming high-definition video requires high bit rates, but can tolerate some delays with appropriate playout buffering. On the other hand, VoIP (packet voice) and video conference applications typically have low to moderate bit rates, but require low latencies. Achieving these application-dependent QoS levels in an energy-efficient manner BIB001 - BIB002 is an important future research direction. A related future research direction is to exploit SDN control for QoS adaptations of real-time media and broadcasting services. Broadcasting services typically involve data rates ranging from 3 to 48 Gb/s to deliver video at various resolutions to the users within a reasonable time limit. In addition to managing the QoS, the network has to manage the multicast groups for efficient routing of traffic to the users. Recent studies BIB003 discuss the potential of SDN, NFV, and optical technologies to meet the growing demands of broadcasters and media providers. Moreover, automated QoS provisioning strategies and the incorporation of quality of protection and security into traditional QoS are important directions for future QoS research in SDONs.
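As a toy illustration of application-dependent QoS handling at an SDN controller, the sketch below maps traffic classes to queue and rate/latency targets that could be pushed to switches as flow rules. The classes, queue numbers, and thresholds are hypothetical examples, not values from the surveyed studies.

```python
# Toy sketch: mapping application classes to QoS treatment; all classes,
# queue numbers, and thresholds are hypothetical illustrations.

QOS_POLICIES = {
    # class:        (min rate in Mb/s, max one-way latency in ms, queue)
    "voip":         (0.1,   20, 0),  # low rate, strict latency: top queue
    "videoconf":    (4,     50, 1),
    "hd-streaming": (8,    500, 2),  # high rate, playout-buffered
    "bulk":         (None, None, 3), # best effort
}

def flow_rule_for(app_class: str, src: str, dst: str) -> dict:
    min_rate, max_delay, queue = QOS_POLICIES.get(
        app_class, QOS_POLICIES["bulk"])
    return {
        "match": {"src": src, "dst": dst},
        "actions": [f"set_queue:{queue}", "output:normal"],
        # A controller could also install meters for the rate guarantee
        # and pick low-latency paths when max_delay is tight.
        "guarantees": {"min_rate_mbps": min_rate,
                       "max_delay_ms": max_delay},
    }

print(flow_rule_for("voip", "10.0.0.1", "10.0.0.2"))
```

Grouping applications into such classes mirrors the policy-grouping strategy suggested for the NBI, trading per-application granularity for fewer control policies.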
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> J. Performance Evaluation <s> Software defined networking (SDN) promises a way to more flexible networks that can adapt to changing demands. At the same time these networks should also benefit from simpler management mechanisms. This is achieved by moving the network control out of the forwarding devices to purpose-tailored software-applications on top of a "networking operating system". Currently, the most notable representative of this approach is OpenFlow. In the OpenFlow architecture the operating system is represented by the OpenFlow controller. As the key component of the OpenFlow ecosystem, the behavior and performance of the controller are significant for the entire network. Therefore, it is important to understand these influence factors, when planning an OpenFlow-based SDN deployment. In this work, we introduce a tool to help achieving just that - a flexible OpenFlow controller benchmark. The benchmark creates a set of message-generating virtual switches, which can be configured independently from each other to emulate a certain scenario and also keep their own statistics. This way a granular controller performance analysis is possible. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> J. Performance Evaluation <s> Network emulation has been one of the tools of choice for conducting experiments on commodity hardware. In the absence of an easy to use optical network test-bed, researchers can significantly benefit from the availability of a flexible/programmable optical network emulation platform. Exploiting the lightweight system virtualization, which is recently supported in modern operating systems, in this work we present the architecture of a Software-Defined Network (SDN) emulation platform for transport optical networks and investigate its usage in a use-case scenario. To the best of our knowledge, this is for the first time that an SDN-based emulation platform is proposed for modeling and performance evaluation of optical networks. Coupled with recent trend of extension of SDN towards transport (optical) networks, the presented tool can facilitate the evaluation of innovative idea before actual implementations and deployments. In addition to the architecture of SONEP, a use-case scenario to evaluate the quality of transmission (QoT) of alien wavelengths in transport optical networks, along with performance results are reported in this piece of work. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> J. Performance Evaluation <s> The deployment experience of OpenFlow support in production networks has highlighted variable limitations between network devices and vendors, while the recent integration of OpenFlow control abstractions in 10 GbE switches, increases further the performance requirements to support the switch control plane. This paper presents OFLOPS-Turbo, an effort to integrate OFLOPS, the OpenFlow switch evaluation platform, with OSNT, a hardware-accelerated traffic generation and capture system. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> J. Performance Evaluation <s> I. INTRODUCTION The key component in the Software Defined Networking architecture is the controller or ”networking operating system”. The controller provides a platform for the operation of diverse network control and management applications. 
However, little is known about the stability and performance of current controller applications, which is a requirement for a smooth operation of the network. In case of OpenFlow, the currently most popular realization of SDN, the controller is not specified by the standard. Its performance depends on the specific implementation. As a consequence, some controllers are more suitable for certain tasks than others. Choosing the right controller for a task requires a thorough analysis of the available candidates in terms of system behavior and performance. In this paper, we present the extended platform-independent and flexible OpenFlow controller performance analyzer "OFCProbe" as a follow-up to our previous work with "OFCBenchmark". The new tool features a scalable and modular architecture that allows a deep granular analysis of a controller's behavior and characteristics. It allows the emulation of virtual switches that each provide sophisticated statistics about different aspects of the controller's performance. The virtual switches are arrangeable into topologies to emulate different scenarios and traffic patterns. This way a detailed insight and deep analysis of possible bottlenecks concerning the controller performance or unexpected behavior is possible. Key features of the re-implementation are a more flexible, simulation-style packet generation system as well as Java Selector-based connection handling. In order to highlight the tool's features, we perform some experiments for the Nox and Floodlight controllers in different scenarios. The remainder of this paper is structured as follows. In Section II, we discuss related work in terms of OpenFlow controller performance. The architecture and features of OFCProbe are then introduced in Section III. We show and discuss the results of our example experiments in Section IV before drawing our conclusions in Section V. <s> BIB004
|
Comprehensive performance evaluation methodologies and metrics need to be developed to assess the SDON designs addressing the preceding future research directions ranging from simplicity and efficiency (Section VIII-A) to optical-wireless networks (Section VIII-H). The performance evaluations need to encompass the data plane, the control plane, as well as the overall data and control plane interactions with the SDN interfaces, and need to take virtualization and orchestration mechanisms into consideration. In the case of the SDON infrastructure, the performance evaluations will need to include the optical physical layer BIB002 . While there have been some efforts to develop evaluation frameworks for general SDN switches , BIB003 , such evaluation frameworks need to be adapted to the specific characteristics of SDON architectures. Similarly, some evaluation frameworks for general SDN controllers have been explored BIB001 , BIB004 ; these need to be extended to the specific SDON control mechanisms. Generally, performance metrics obtained with SDN and virtualization mechanisms should be benchmarked against the corresponding conventional network without any SDN or virtualization components. Thus, the performance tradeoffs and costs of the flexibility gained through SDN and virtualization mechanisms can be quantified. These quantified data would then need to be assessed and compared in the context of business needs. To identify some of the important aspects of performance, we analyze the sample architecture in Fig. 14. The SDN controller in the SDON architecture in Fig. 14 spans across multiple elements, such as ONUs, OLTs, routers/switches in the metro section, as well as PCEs in the core section. A meaningful performance evaluation of such a network requires comprehensive analysis of data plane performance aspects and related metrics, including noise spectral analysis, bandwidth and link rate monitoring, as well as evaluation of failure resilience. Performance evaluation mechanisms need to be developed to enable the SDON controller to obtain and analyze these performance data. In addition, mechanisms for control layer performance analysis are needed. The control plane performance evaluation should, for instance, assess the controller efficiency and performance characteristics, such as the OpenFlow message rates and the rates and delays of flow table management actions.
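To illustrate the control plane metrics mentioned above, the following minimal Python sketch derives the flow setup rate and setup latency percentiles from timestamped (request, reply) event pairs. The event format is a hypothetical assumption about what a benchmarking harness, such as the controller benchmarks cited above, would record.

```python
# Minimal sketch: deriving control-plane metrics from timestamped OpenFlow
# events. The (packet_in_time_s, flow_mod_time_s) pair format is a
# hypothetical assumption about the raw data a benchmark harness collects.

from statistics import median, quantiles

def control_plane_metrics(events):
    """events: list of (packet_in_time_s, flow_mod_time_s) tuples, one per flow setup."""
    latencies_ms = [(reply - request) * 1000.0 for request, reply in events]
    span_s = max(r for _, r in events) - min(t for t, _ in events)
    cuts = quantiles(latencies_ms, n=100)  # 99 percentile cut points
    return {
        "flow_setups": len(events),
        "setup_rate_per_s": len(events) / span_s if span_s > 0 else float("nan"),
        "latency_p50_ms": median(latencies_ms),
        "latency_p95_ms": cuts[94],
    }

if __name__ == "__main__":
    # Toy trace: three flow setups observed over roughly 0.2 s.
    demo = [(0.000, 0.004), (0.100, 0.103), (0.200, 0.209)]
    print(control_plane_metrics(demo))
```

The same percentile-style reporting could be applied to other control plane actions, e.g., flow table modification delays, to benchmark an SDON controller against its conventional counterpart.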
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Introduction <s> Energy-proportional designs would enable large energy savings in servers, potentially doubling their efficiency in real-life use. Achieving energy proportionality will require significant improvements in the energy usage profile of every system component, particularly the memory and disk subsystems. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Introduction <s> Energy efficiency is a fundamental consideration for mobile devices. Cloud computing has the potential to save mobile client energy but the savings from offloading the computation need to exceed the energy cost of the additional communication. In this paper we provide an analysis of the critical factors affecting the energy consumption of mobile clients in cloud computing. Further, we present our measurements about the central characteristics of contemporary mobile handheld devices that define the basic balance between local and remote computing. We also describe a concrete example, which demonstrates energy savings. We show that the trade-offs are highly sensitive to the exact characteristics of the workload, data communication patterns and technologies used, and discuss the implications for the design and engineering of energy efficient mobile cloud computing solutions. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Introduction <s> This book is written for computer engineers and scientists active in the development of software and hardware systems. It supplies the understanding and tools needed to effectively evaluate the performance of individual computer and communication systems. It covers the theoretical foundations of the field as well as specific software packages being employed by leaders in the field. <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Introduction <s> Network-based cloud computing is rapidly expanding as an alternative to conventional office-based computing. As cloud computing becomes more widespread, the energy consumption of the network and computing resources that underpin the cloud will grow. This is happening at a time when there is increasing attention being paid to the need to manage energy consumption across the entire information and communications technology (ICT) sector. While data center energy use has received much attention recently, there has been less attention paid to the energy consumption of the transmission and switching networks that are key to connecting users to the cloud. In this paper, we present an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. We show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing where each user performs all computing on their own personal computer (PC).
<s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Introduction <s> Given the diversity of commercial Cloud services, performance evaluations of candidate services would be crucial and beneficial for both service customers (e.g. cost-benefit analysis) and providers (e.g. direction of service improvement). Before an evaluation implementation, the selection of suitable factors (also called parameters or variables) plays a prerequisite role in designing evaluation experiments. However, there seems a lack of systematic approaches to factor selection for Cloud services performance evaluation. In other words, evaluators randomly and intuitively concerned experimental factors in most of the existing evaluation studies. Based on our previous taxonomy and modeling work, this paper proposes a factor framework for experimental design for performance evaluation of commercial Cloud services. This framework capsules the state-of-the-practice of performance evaluation factors that people currently take into account in the Cloud Computing domain, and in turn can help facilitate designing new experiments for evaluating Cloud services. <s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Introduction <s> Data Center Networks (DCNs) are attracting growing interest from both academia and industry to keep pace with the exponential growth in cloud computing and enterprise networks. Modern DCNs are facing two main challenges of scalability and cost-effectiveness. The architecture of a DCN directly impacts on its scalability, while its cost is largely driven by its power consumption. In this paper, we conduct a detailed survey of the most recent advances and research activities in DCNs, with a special focus on the architectural evolution of DCNs and their energy efficiency. The paper provides a qualitative categorization of existing DCN architectures into switch-centric and server-centric topologies as well as their design technologies. Energy efficiency in data centers is discussed in details with survey of existing techniques in energy savings, green data centers and renewable energy approaches. Finally, we outline potential future research directions in DCNs. <s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Introduction <s> Empowering application programmers to make energy-aware decisions is a critical dimension of energy optimization for computer systems. In this paper, we study the energy impact of alternative data management choices by programmers, such as data access patterns, data precision choices, and data organization. Second, we attempt to build a bridge between application-level energy management and hardware-level energy management, by elucidating how various application-level data management features respond to Dynamic Voltage and Frequency Scaling (DVFS). Finally, we apply our findings to real-world applications, demonstrating their potential for guiding application-level energy optimization. The empirical study is particularly relevant in the Big Data era, where data-intensive applications are large energy consumers, and their energy efficiency is strongly correlated to how data are maintained and handled in programs. 
<s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Introduction <s> Cloud computing has emerged as the leading paradigm for information technology businesses. Cloud computing provides a platform to manage and deliver computing services around the world over the Internet. Cloud services have helped businesses utilize computing services on demand with no upfront investments. The cloud computing paradigm has sustained its growth, which has led to increase in size and number of data centers. Data centers with thousands of computing devices are deployed as back end to provide cloud services. Computing devices are deployed redundantly in data centers to ensure 24/7 availability. However, many studies have pointed out that data centers consume large amount of electricity, thus calling for energy-efficiency measures. In this survey, we discuss research issues related to conflicting requirements of maximizing quality of services (QoSs) (availability, reliability, etc.) delivered by the cloud services while minimizing energy consumption of the data center resources. In this paper, we present the concept of inception of data center energy-efficiency controller that can consolidate data center resources with minimal effect on QoS requirements. We discuss software- and hardware-based techniques and architectures for data center resources such as server, memory, and network devices that can be manipulated by the data center controller to achieve energy efficiency. <s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Introduction <s> Data centers are critical, energy-hungry infrastructures that run large-scale Internet-based services. Energy consumption models are pivotal in designing and optimizing energy-efficient operations to curb excessive energy consumption in data centers. In this paper, we survey the state-of-the-art techniques used for energy consumption modeling and prediction for data centers and their components. We conduct an in-depth study of the existing literature on data center power modeling, covering more than 200 models. We organize these models in a hierarchical structure with two main branches focusing on hardware-centric and software-centric power models. Under hardware-centric approaches we start from the digital circuit level and move on to describe higher-level energy consumption models at the hardware component level, server level, data center level, and finally systems of systems level. Under the software-centric approaches we investigate power models developed for operating systems, virtual machines and software applications. This systematic approach allows us to identify multiple issues prevalent in power modeling of different levels of data center systems, including: i) few modeling efforts targeted at power consumption of the entire data center ii) many state-of-the-art power models are based on a few CPU or server metrics, and iii) the effectiveness and accuracy of these power models remain open questions. Based on these observations, we conclude the survey by describing key challenges for future research on constructing effective and accurate data center power models. <s> BIB009
|
Given the requirement of efficient use of computing power and the increasing consideration of global warming, energy consumption management is a crucial concern across the entire information and communication technology (ICT) community, especially in the Cloud computing domain BIB004 . In particular, understanding Cloud applications' energy consumption has been identified as a prerequisite for developing energy-saving mechanisms . Unfortunately, due to Cloud applications' inherent complexity and their environmental heterogeneity, it would be extremely challenging to tune the energy efficiency of a real-world application BIB001 , and even impractical to directly measure its energy consumption. On one hand, the components and data of a modern application can be widely distributed across Cloud environments. On the other hand, the same computing resource in the Cloud can be shared among many different applications. Consequently, most of the related work focused on the energy expenditure in the Cloud infrastructure and IT equipment (e.g., data center energy consumption BIB006 BIB008 ), without considering specific application scenarios or isolating a single application from its surroundings. In particular, without regard to the application runtime, some of the studies essentially emphasized the power consumption in Cloud systems from the hardware's perspective (e.g., BIB009 ). Note that here power (measured in Watts) is defined as the rate at which energy (measured in Joules) is consumed in the Cloud infrastructure (the basic relation between the two quantities is recapped at the end of this introduction). As for the studies investigating Cloud applications' energy consumption, researchers tend to employ modeling approaches to alleviate the aforementioned challenges and complexity, by abstracting real-world objects or processes that are difficult to observe or understand directly . However, since such an abstraction sacrifices (and usually does not need) the complete reflection of the reality to be modeled, current energy consumption models vary in terms of purposes, assumptions, application characteristics and environmental conditions, with possible overlaps between different research works. As a result, different models need to be woven together to reflect a full scope of energy consumption aspects, which is also common in other domains . Therefore, to facilitate understanding the nature of the energy consumption of Cloud applications, it would be useful and valuable to synthesize the state of the art of the existing modeling efforts, which serve as evidence for revealing the reality. When it comes to aggregating evidence for answering research questions in software engineering and computer science, a standard and rigorous methodology is the Systematic Literature Review (SLR) . Thus, we implemented an SLR to identify, examine and synthesize the existing models developed/employed in the relevant studies. Moreover, to help analyze and compare the existing models, we followed the divide-and-conquer strategy to also study the prerequisites of modeling practices: (1) Since the energy for running a Cloud application is driven by the combined mutual effects of the application and its environment BIB007 , we extracted nine generic application execution elements and built up an evidence-based architecture of the application deployment environment.
(2) Considering that Cloud computing scenarios involve numerous and various factors BIB002 , we identified 18 environmental factors and 12 workload factors, as well as their individual influences on Cloud applications' energy consumption. Driven by the aforementioned motivations, our main contributions to the research field can be summarized as follows. First, our deconstruction of Cloud application runtime and deployment environment offers an expandable dictionary of energy-related factors. Benefiting from this dictionary, researchers and practitioners can conveniently screen the existing concerns and choose suitable ones for new energy consumption studies. In fact, pre-listing all the domain-relevant factors has been considered a "tedious but crucial task" for factorial studies in general BIB003 BIB005 . Second, the systematically organized models with unified notations can act as a knowledge artefact for both researchers and practitioners to not only reveal the fundamentals of energy consumption, but also facilitate simulations to deal with a wide range of Cloud application energy efficiency problems. For example, accurate model-based energy consumption simulations would be significantly beneficial for decision making in various trade-off situations. The remainder of this paper is organized as follows. Section 2 briefly describes the methodology employed in our survey, and particularly highlights the research questions and inclusion & exclusion criteria. Section 3 specifies the results of this survey by addressing the predefined research questions. Section 4 lists four trade-off debates to demonstrate both the complexity of the combined effects of multiple factors and the potential research directions that can benefit from our survey. Conclusions and our future work are outlined in Section 5.
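Recapping the power/energy distinction drawn above: energy is the time integral of power, $E = \int_{t_0}^{t_1} P(t)\,dt$, which for a constant power draw reduces to $E = P \cdot \Delta t$. As an illustrative (hypothetical) example, a server drawing a constant 200 W for one hour consumes $200\,\mathrm{W} \times 3600\,\mathrm{s} = 720$ kJ, i.e., 0.2 kWh. Most of the models surveyed below can be read as increasingly refined ways of expressing $P(t)$ as a function of environmental and workload factors.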
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Research Questions <s> The energy consumption of Cloud computing continues to be an area of significant concern as data center growth continues to increase. This paper reports on an energy efficient interoperable Cloud architecture realized as a Cloud toolbox that focuses on reducing the energy consumption of Cloud applications holistically across all deployments models. The architecture supports energy efficiency at service construction, deployment, and operation and interoperability through the use of the Open Virtualization Format (OVF) standard. We discuss our practical experience during implementation and present an initial performance evaluation of the architecture. The results show that the implementing Cloud provider interoperability is feasible and incurs minimal performance overhead during application deployment in comparison to the time taken to instantiate Virtual Machines. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Research Questions <s> Empowering application programmers to make energy-aware decisions is a critical dimension of energy optimization for computer systems. In this paper, we study the energy impact of alternative data management choices by programmers, such as data access patterns, data precision choices, and data organization. Second, we attempt to build a bridge between application-level energy management and hardware-level energy management, by elucidating how various application-level data management features respond to Dynamic Voltage and Frequency Scaling (DVFS). Finally, we apply our findings to real-world applications, demonstrating their potential for guiding application-level energy optimization. The empirical study is particularly relevant in the Big Data era, where data-intensive applications are large energy consumers, and their energy efficiency is strongly correlated to how data are maintained and handled in programs. <s> BIB002
|
During the whole lifecycle of Cloud applications, energy consumption happens mainly when they are being deployed and executed BIB001 . Moreover, as mentioned previously, the energy for executing a Cloud application is essentially caused by the combined mutual effects between the application software and its environmental infrastructure BIB002 . Therefore, we decided to summarize the deployment environments and the runtime execution elements of Cloud applications:

RQ1 What deployment environments of Cloud applications have been discussed in the relevant studies?

RQ2 What execution elements of Cloud applications have been discussed in the relevant studies?

Although there is no doubt that running Cloud applications will cause energy consumption, it is more valuable to identify influential factors to understand why different amounts of energy could be consumed even for the same application to achieve the same (or comparable) performance quality. Following the previous research questions, it is natural to distinguish between the environmental factors and the application workload factors:

RQ3 What environmental factors and their influences have been studied in Cloud application energy consumption?

RQ4 What workload factors and their influences have been studied in Cloud application energy consumption?

Through reviewing the modeling studies, one of our main purposes is to reveal Cloud applications' energy consumption models, because the mathematical models can theoretically explain how the energy is consumed:

RQ5 What models have been developed for abstracting the energy consumption of Cloud applications?
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Inclusion and Exclusion Criteria <s> Distributed processing frameworks, such as Yahoo!'s Hadoop and Google's MapReduce, have been successful at harnessing expansive datacenter resources for large-scale data analysis. However, their effect on datacenter energy efficiency has not been scrutinized. Moreover, the filesystem component of these frameworks effectively precludes scale-down of clusters deploying these frameworks (i.e. operating at reduced capacity). This paper presents our early work on modifying Hadoop to allow scale-down of operational clusters. We find that running Hadoop clusters in fractional configurations can save between 9% and 50% of energy consumption, and that there is a tradeoff between performance energy consumption. We also outline further research into the energy-efficiency of these frameworks. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Inclusion and Exclusion Criteria <s> This paper considers online energy-efficient scheduling of virtual machines (VMs) for Cloud data centers. Each request is associated with a start-time, an end-time, a processing time and a capacity demand from a Physical Machine (PM). The goal is to schedule all of the requests non-preemptively in their start-time-end-time windows, subjecting to PM capacity constraints, such that the total busy time of all used PMs is minimized (called MinTBT-ON for abbreviation). This problem is a fundamental scheduling problem for parallel jobs allocation on multiple machines; it has important applications in power-aware scheduling in cloud computing, optical network design, customer service systems, and other related areas. Offline scheduling to minimize busy time is NP-hard already in the special case where all jobs have the same processing time and can be scheduled in a fixed time interval. One best-known result for MinTBT-ON problem is a g-competitive algorithm for general instances and unit-size jobs using First-Fit algorithm where g is the total capacity of a machine. In this paper, a $(1+\frac{g-2}{k}-\frac{g-1}{k^{2}})$ -competitive algorithm, Dynamic Bipartition-First-Fit (BFF) is proposed and proved for general case, where k is the ratio of the length of the longest interval over the length of the second longest interval for k>1 and g?2. More results in general and special cases are obtained to improve the best-known bounds. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Inclusion and Exclusion Criteria <s> The development and maintenance of cloud sites are often characterized by energy waste and high CO2 emissions. Energy efficiency and the decrease of the CO2 emissions in cloud-based systems can be only obtained by adopting suitable actions and techniques (e.g., utilization of green energy sources, reduction of the number of physical and virtual machines, usage of the greener machines). In order to evaluate the suitability of these different actions, it is necessary to define a measure for greenness of the whole system. For this reason, this paper defines a set of metrics to assess the greenness of a cloud infrastructure. 
In order to provide a detailed view of the behaviour of the system and to facilitate the identification of the causes of the energy waste, metrics have been defined at different layers of the system (i.e., application, virtualization, infrastructure layers). The monitoring infrastructure that is necessary to retrieve all the data required for the assessment of the identified set of metrics is also described. <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Inclusion and Exclusion Criteria <s> Wireless data transmission consumes a significant part of the overall energy consumption of smartphones, due to the popularity of Internet applications. In this paper, we investigate the energy consumption characteristics of data transmission over Wi-Fi, focusing on the effect of Internet flow characteristics and network environment. We present deterministic models that describe the energy consumption of Wi-Fi data transmission with traffic burstiness, network performance metrics like throughput and retransmission rate, and parameters of the power saving mechanisms in use. Our models are practical because their inputs are easily available on mobile platforms without modifying low-level software or hardware components. We demonstrate the practice of model-based energy profiling on Maemo, Symbian, and Android phones, and evaluate the accuracy with physical power measurement of applications including file transfer, web browsing, video streaming, and instant messaging. Our experimental results show that our models are of adequate accuracy for energy profiling and are easy to apply. <s> BIB004
|
In addition to the research questions, we also pre-specified a set of inclusion and exclusion criteria to further shape our research scope, as listed below.

Inclusion Criteria:
1) Publications that profile/characterize the energy consumption of applications deployed in the Cloud environment.
2) Publications that investigate the energy consumption of local applications that have interactions with a Cloud system (e.g., workload offloading).
3) Publications that model an application's (or application component's) energy consumption by denoting the energy consumption of environmental hardware.
4) Publications that reflect the changes in energy consumption of a Cloud-based application (or application component) by measuring the hardware's energy consumption with different workload configurations.
5) Publications that reflect the changes in energy consumption of a Cloud-based application (or application component) by measuring the hardware's energy consumption with different environmental configurations.
6) Publications that provide first-hand and relatively strong evidence through evaluations and peer reviews, such as book chapters and full journal/conference/workshop papers.

Exclusion Criteria:
(1) Publications that investigate the energy consumption of applications running in a local environment (e.g., desktop systems) without addressing any concern related to the Cloud.
(2) Publications that compare energy-saving strategies/algorithms through experiments without energy consumption modeling or factor discussions in a generic sense.
(3) Publications that investigate the energy consumption of packet/frame transfers in the lower layers of the network protocol stack (e.g., BIB004 ). Given our focus on the energy consumption in the application layer, we are concerned with bit/Byte/file data transmission.
(4) Publications that investigate the energy consumption of a Cloud system or its components (e.g., server, cluster or datacenter BIB001 BIB002 ) without regard to a single application (component) scenario. In other words, this type of study could be concerned with the overall workloads from numerous and various applications.
(5) Publications that model the environmental hardware's energy consumption by notating applications' (or application components') energy consumption (e.g., BIB003 ). These studies were not in the context of a single application (component) scenario, either.
(6) Publications that do not contribute first-hand or strong evidence, such as survey papers (i.e., secondary studies), extended abstracts, posters, short/position papers, and industry white papers.
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Review Process <s> Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start from the small and increase resources only when there is a rise in service demand. However, despite the fact that cloud computing offers huge opportunities to the IT industry, the development of cloud computing technology is currently at its infancy, with many issues still to be addressed. In this paper, we present a survey of cloud computing, highlighting its key concepts, architectural principles, state-of-the-art implementation as well as research challenges. The aim of this paper is to provide a better understanding of the design challenges of cloud computing and identify important research directions in this increasingly important area. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Review Process <s> Context: Systematic literature review (SLR) has become an important research methodology in software engineering since the introduction of evidence-based software engineering (EBSE) in 2004. One critical step in applying this methodology is to design and execute appropriate and effective search strategy. This is a time-consuming and error-prone step, which needs to be carefully planned and implemented. There is an apparent need for a systematic approach to designing, executing, and evaluating a suitable search strategy for optimally retrieving the target literature from digital libraries. Objective: The main objective of the research reported in this paper is to improve the search step of undertaking SLRs in software engineering (SE) by devising and evaluating systematic and practical approaches to identifying relevant studies in SE. Method: We have systematically selected and analytically studied a large number of papers (SLRs) to understand the state-of-the-practice of search strategies in EBSE. Having identified the limitations of the current ad-hoc nature of search strategies used by SE researchers for SLRs, we have devised a systematic and evidence-based approach to developing and executing optimal search strategies in SLRs. The proposed approach incorporates the concept of 'quasi-gold standard' (QGS), which consists of collection of known studies, and corresponding 'quasi-sensitivity' into the search process for evaluating search performance. Results: We conducted two participant-observer case studies to demonstrate and evaluate the adoption of the proposed QGS-based systematic search approach in support of SLRs in SE research. Conclusion: We report their findings based on the case studies that the approach is able to improve the rigor of search process in an SLR, as well as it can serve as a supplement to the guidelines for SLRs in EBSE. We plan to further evaluate the proposed approach using a series of case studies on varying research topics in SE. <s> BIB002
|
By using the quasi-gold standard to formulate search strings BIB002 , we retrieved over 3000 publications from the five dominant electronic libraries (namely ACM Digital Library, Google Scholar, IEEE Xplore, ScienceDirect, and SpringerLink), and initially identified 394 studies by quickly scanning their titles and abstracts (note that we only screened the first 50 pages from Google Scholar). In particular, considering that the term "Cloud computing" was coined in 2006 BIB001 , we did not search the literature published before 2006. After further examining the full texts of the initially collected studies against the inclusion & exclusion criteria, we finally selected 76 papers for this survey. Notably, we employed two strategies to reduce selection bias and improve reliability: Firstly, we conducted pilot reviews to establish and refine the inclusion & exclusion criteria in advance. Secondly, we organized regular meetings to discuss unresolved issues and cross-reviewed the borderline papers. Finally, a data extraction schema was developed to guide paper review and data identification in a structured fashion. In detail, the raw data were gradually extracted from the selected studies and aggregated into a big table to facilitate the overall data synthesis. Based on the data analysis, we deliver the review results and discussions by addressing the aforementioned research questions in turn, as specified in the following section.
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> As concerns about global energy consumption increase, the power consumption of the Internet is a matter of increasing importance. We present a network-based model that estimates Internet power consumption including the core, metro, and access networks. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> This paper proposes a Green Cloud model for mobile Cloud computing. The proposed model leverage on the current trend of IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service), and look at new paradigm called "Network as a Service" (NaaS). The Green Cloud model proposes various Telco's revenue generating streams and services with the CaaS (Cloud as a Service) for the near future. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Nowadays, power consumption of data centers has huge impacts on environments. Researchers are seeking to find effective solutions to make data centers reduce power consumption while keep the desired quality of service or service level objectives. Virtual Machine (VM) technology has been widely applied in data center environments due to its seminal features, including reliability, flexibility, and the ease of management. We present the GreenCloud architecture, which aims to reduce data center power consumption, while guarantee the performance from users' perspective. GreenCloud architecture enables comprehensive online-monitoring, live virtual machine migration, and VM placement optimization. To verify the efficiency and effectiveness of the proposed architecture, we take an online real-time game, Tremulous, as a VM application. Evaluation results show that we can save up to 27% of the energy when applying GreenCloud architecture. <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Networked manufacturing technology has provided an effective approach to integrating manufacturing resources over Internet. However, a vast number of resources involving networks, computing, and storage resulted in ever increasing energy consumption. This paper proposed an IT energy-saving approach based on Cloud Computing for networked green manufacturing. The paper first presented the architecture of networked green manufacturing on Cloud Computing to support service-oriented IT resources outsourcing. Then the paper discussed the dynamically-scalable resource utilization mechanism for networked manufacturing. The approach can provide effective support for manufacturing resource virtualization and deliver a variety of services to distributed manufacturing enterprises. Moreover, IT resources of network, computing, and storage can be shared concurrently and scheduled dynamically that are adaptive to practical resource demand of collaborative manufacturing tasks. Thus, a great amount of IT resources can achieve high efficient utilization in networked manufacturing and result in reduced energy consumption to benefit green manufacturing. 
<s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> The expanding scale and density of data centers has made their power consumption an imperative issue. Data center energy management has become of unprecedented importance not only from an economic perspective but also for environment conservation. The recent surge in the popularity of cloud computing for providing rich multimedia services has further necessitated the need to consider energy consumption. Moreover, a recent phenomenon has been the astounding increase in multimedia data traffic over the Internet, which in turn is exerting a new burden on the energy resources. This paper provides a comprehensive overview of the techniques and approaches in the fields of energy efficiency for data centers and large-scale multimedia services. The paper also highlights important challenges in designing and maintaining green data centers and identifies some of the opportunities in offering green streaming service in cloud computing frameworks. <s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> In spite of the dramatic growth in the number of smartphones in the recent years, the energy capacity challenge for these devices has not been solved satisfactorily. Moreover, the global demand for green Information and Communication Technology (ICT) motivates the researchers to consider cloud computing as a new computing paradigm that is promising for green solution. In this paper, we propose new green solutions that save smartphones energy and at the same time achieve the green ICT goal. Our green solution is achieved by what we call Mobile Cloud Computing (MCC). The MCC migrates the content from the main cloud data center to local cloud data center temporary. The Internet Service Provide (ISP) provides the MCC, which holds the required contents for the smartphone network. Our analysis and experiments show that our proposed solution significantly reduces the ICT system energy consumption by 63% - 70%. <s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Network-based cloud computing is rapidly expanding as an alternative to conventional office-based computing. As cloud computing becomes more widespread, the energy consumption of the network and computing resources that underpin the cloud will grow. This is happening at a time when there is increasing attention being paid to the need to manage energy consumption across the entire information and communications technology (ICT) sector. While data center energy use has received much attention recently, there has been less attention paid to the energy consumption of the transmission and switching networks that are key to connecting users to the cloud. In this paper, we present an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. We show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. 
Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing where each user performs all computing on their own personal computer (PC). <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> This article provides an overview of a network-based model of power consumption in Internet infrastructure. This model provides insight into how different parts of the Internet will contribute to network power as Internet access increase over time. The model shows that today the access network dominates the Internet's power consumption and, as access speeds grow, the core network routers will dominate power consumption. The power consumption of data centers and content distribution networks is dominated by the power consumption of data storage for material that is infrequently downloaded and by the transport of the data for material that is frequently downloaded. Based on the model several strategies to improve the energy efficiency of the Internet are presented. <s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device. <s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Researchers and developers use energy models to map out what an application or device's energy usage will be. Application developers most often do not have the capability to manipulate the CPU characteristics that most of these energy models and schedules use as their defining aspect.
We present an energy model for multiprocess applications that centers around the CPU utilization, which application developers can actively affect with the design of their application. <s> BIB010 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Offloading is one major type of collaborations between mobile devices and clouds to achieve less execution time and less energy consumption. Offloading decisions for mobile cloud collaboration involve many decision factors. One of important decision factors is the network unavailability that has not been well studied. This paper presents an offloading decision model that takes network unavailability into consideration. Network with some unavailability can be modeled as an alternating renewal process. Then, application execution time and energy consumption in both ideal network and network with some unavailability are analyzed. Based on the presented theoretical model, an application partition algorithm and a decision module are presented to produce an offloading decision that is resistant to network unavailability. Simulation results demonstrate good performance of proposed scheme, where the proposed partition algorithm is analyzed in different application and cloud scenarios. <s> BIB011 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> With the rise in mobile device adoption, and growth in mobile application market expected to reach $30 billion by the end of 2013, mobile user expectations for pervasive computation and data access are unbounded. Yet, various applications, such as face recognition, speech and object recognition, and natural language processing, exceed the limits of standalone mobile devices. Such applications resort to exploiting larger resources in the cloud, which sparked researching problems arising from data and computational offloading to the cloud. Research in this area has mainly focused on profiling and offloading tasks to remote cloud resources, automatically transforming mobile applications by provisioning and partitioning its execution into offloadable tasks, and more recently, bringing computational resources (e.g. Cloudlets) closer to task initiators in order to save mobile device energy. In this work, we argue for environments in which computational offloading is performed among mobile devices forming what we call a Mobile Device Cloud (MDC). Our contributions are: (1) Implementing an emulation testbed for quantifying the potential gain, in execution time or energy consumed, of offloading tasks to an MDC. This testbed includes a client offloading application, an offloadee server receiving tasks, and a traffic shaper situated between the client and server emulating different communication technologies (Bluetooth 3.0, Bluetooth 4.0, WiFi Direct, WiFi, and 3G). Our evaluation for offloading tasks with different data and computation characteristics to an MDC registers up to 80% and 90% savings in time or energy respectively, as opposed to offloading to the cloud. (2) Providing an MDC experimental platform to enable future evaluation and assessment of MDC-based solutions. We create a testbed, shown in Figure 1, to measure the energy consumed by a mobile device when running or offloading tasks using different communication technologies. 
We build an offloading Android-based mobile application and measure the time taken to offload tasks, execute them, and receive the results from other devices within an MDC. Our experimental results show gains in time and energy savings, up to 50% and 26% respectively, by offloading within MDCs, as opposed to locally executing tasks. (3) Providing solutions that address two major MDC challenges. First, due to mobility, offloadee devices leaving an MDC would seriously compromise performance. Therefore, we propose several social-based offloadee selection algorithms that exploit contact history between devices, as well as friendship relationships or common interests between device owners or users. Second, we provide solutions for balancing power consumption by distributing computational load across MDC members to elongate and MDC's life time. This need occurs when users need to maximize the lifetime of an ensemble of devices that belong to the same user or household. We evaluate the algorithms we propose for addressing these two challenges using the real datasets that contain contact mobility traces and social information for conference attendees over the span of three days. Our results show the impact of choosing the suitable offloadee subset, the gain from leveraging social information, and how MDCs can live longer by balancing power consumption across their members. <s> BIB012 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Energy efficiency for data centers has been recently an active research field. Several efforts have been made at the infrastructure and application levels to achieve energy efficiency and reduction of CO2 emissions. In this paper we approach the problem of application deployment to evaluate its impact on the energy consumption of applications at runtime. We use queuing networks to model different deployment configurations and to perform quantitative analysis to predict application performance and energy consumption. The results are validated against experimental data to confirm the correctness of the models when used for predictions. Comparisons between different configurations in terms of performance and energy consumption are made to suggest the optimal configuration to deploy applications on cloud environments. <s> BIB013 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Advances in sensor cloud computing to support vehicular applications are becoming more important as the need to better utilize computation and communication resources and make them energy efficient. In this paper, we propose a novel approach to minimize energy consumption of processing a vehicular application within mobile wireless sensor networks (MWSN) while satisfying a certain completion time requirement. Specifically, the application can be optimally partitioned, offloaded and executed with helps of peer sensor devices, e.g., a smart phone, thus the proposed solution can be treated as a joint optimization of computing and networking resources. Our theoretical analysis is supplemented by simulation results to show the significance of energy saving by 63% compared to the traditional cloud computing methods. 
Moreover, a prototype cloud system has been developing to validate the efficiency of sensor cloud strategies in dealing with diverse vehicular applications. <s> BIB014 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> It is common practice for mobile devices to offload computationally heavy tasks off to a cloud, which has greater computational resources. In this paper, we consider an environment in which computational offloading is made among collaborative mobile devices.We call such an environment a mobile device cloud (MDC). We highlight the gain in computation time and energy consumption that can be achieved by offloading tasks with given characteristics to nearby devices inside a mobile device cloud. We adopt an experimental approach to measure power consumption in mobile to mobile opportunistic offloading using MDCs. Then, we adopt a data driven approach to evaluate and assess various offloading algorithms in MDCs. We believe that MDCs are not replacing the Cloud, however they present an offloading opportunity for a set of tasks with given characteristics or simply a solution when the cloud is unacceptable or costly. The promise of this approach shown by evaluating these algorithms using real datasets that include contact traces and social information of mobile devices in a conference setting. <s> BIB015 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> This paper presents a quantitative study on the energy-traffic tradeoff problem from the perspective of entire Wireless Local Area Network (WLAN). We propose a novel Energy-Efficient Cooperative Offloading Model (E2COM) for energy-traffic tradeoff, which can ensure the fairness of energy consumption of mobile devices and reduce the computation repetition and eliminate the Internet data traffic redundancy through cooperative execution and sharing computation results. We design an Online Task Scheduling Algorithm (OTS) based on a pricing mechanism and Lyapunov optimization to address the problem without predicting future information on task arrivals, transmission rates and so on. OTS can achieve a desirable trade- off between the energy consumption and Internet data traffic by appropriately setting the tradeoff coefficient. Simulation results demonstrate that E2COM is more efficient than no offloading and cloud offloading for a variety of typical mobile devices, applications and link qualities in WLAN. <s> BIB016 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Providing femto access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smartphones to the so-called femto-cloud. Such offloading promises to be beneficial in terms of battery savings at the mobile terminal (MT) and/or in latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. 
For this problem, we provide in this paper a framework for the joint optimization of the radio and computational resource usage exploiting the tradeoff between energy consumption and latency. Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, and precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total or no offloading is optimal, determines which is the minimum affordable latency in the execution of the application, and analyzes, as a particular case, the minimization of the total consumed energy without latency constraints. <s> BIB017 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Mobile Cloud Computing (MCC) bridges the gap between limited capabilities of mobile devices and the increasing users' demand of mobile multimedia applications, by offloading the computational workloads from local devices to the remote cloud. Current MCC research focuses on making offloading decisions over different methods of a MCC application, but may inappropriately increase the energy consumption if having transmitted a large amount of program states over expensive wireless channels. Limited research has been done on avoiding such energy waste by exploiting the dynamic patterns of applications' run-time execution for workload offloading. In this paper, we adaptively offload the local computational workload with respect to the run-time application dynamics. Our basic idea is to formulate the dynamic executions of user applications using a semi-Markov model, and to further make offloading decisions based on probabilistic estimations of the offloading operation's energy saving. Such estimation is motivated by experimental investigations over practical smart phone applications, and then builds on analytical modeling of methods' execution times and offloading expenses. Systematic evaluations show that our scheme significantly improves the efficiency of workload offloading compared to existing schemes over various smart phone applications. <s> BIB018 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> With the increasing variety of mobile applications, reducing the energy consumption of mobile devices is a major challenge in sustaining multimedia streaming applications. This paper explores how to minimize the energy consumption of the backlight when displaying a video stream without adversely impacting the user's visual experience. First, we model the problem as a dynamic backlight scaling optimization problem. Then, we propose algorithms to solve the fundamental problem and prove the optimality in terms of energy savings. Finally, based on the algorithms, we present a cloud-based energy-saving service. We have also developed a prototype implementation integrated with existing video streaming applications to validate the practicability of the approach. The results of experiments conducted to evaluate the efficacy of the proposed approach are very encouraging and show energy savings of 15-49 percent on commercial mobile devices. 
<s> BIB019 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Many people use smart phones on a daily basis, yet, their energy consumption is pretty high and the battery power lasts typically only for a single day. In the scope of the EnAct project, we investigate potential energy savings on smart phones by offloading computationally expensive tasks into the cloud. Obviously, also the wireless communication for uploading tasks requires energy. For that reason, it is crucial to understand the trade-off between energy consumption for wireless communication and local computation in order to assert that the overall power consumption is decreased. In this paper, we investigate the communications part of that trade-off. We conducted an extensive set of measurement experiments using typical smart phones. This is the first step towards the development of accurate energy models allowing to predict the energy required for offloading a given task. Our measurements include WiFi, 2G, and 3G networks as well as a set of two different devices. According to our findings, WiFi consumes by far the least energy per time unit, yet, this advantage seems to be due to its higher throughput and the implied shorter download time and not due to lower power consumption over time. <s> BIB020 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Interactive cloud computing and cloud-based applications are a rapidly growing sector of the expanding digital economy because they provide access to advanced computing and storage services via simple, compact personal devices. Recent studies have suggested that processing a task in the cloud is more energy-efficient than processing the same task locally. However, these studies have generally ignored the power consumption of the network and end-user devices when accessing the cloud. In this paper, we develop a power consumption model for interactive cloud applications that includes the power consumption of end-user devices and the influence of the applications on the power consumption of the various network elements along the path between the user and the cloud data centre. As examples, we apply our model to Google Drive and Microsoft Skydrive's word processing, presentation and spreadsheet interactive applications. We demonstrate via extensive packet-level traffic measurements that the volume of traffic generated by a session of the application vastly exceeds the amount of data keyed in by the user. This has important implications on the overall power consumption of the service. We show that using the cloud to perform certain tasks consumes more power (by a watt to 10 watts depending on the scenario) than performing the same tasks locally on a low-power consuming computer and a tablet. <s> BIB021 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Mobile cloud computing (MC2) is emerging as a promising computing paradigm which helps alleviate the conflict between resource-constrained mobile devices and resource-consuming mobile applications through computation offloading. In this paper, we analyze the computation offloading problem in cloudlet-based mobile cloud computing. 
Different from most of the previous works which are either from the perspective of a single user or under the setting of a single wireless access point (AP), we research the computation offloading strategy of multiple users via multiple wireless APs. With the widespread deployment of WLAN, offloading via multiple wireless APs will obtain extensive application. Taking energy consumption and delay (including computing and transmission delay) into account, we present a game-theoretic analysis of the computation offloading problem while mimicking the selfish nature of the individuals. In the case of homogeneous mobile users, conditions of Nash equilibrium are analyzed, and an algorithm that admits a Nash equilibrium is proposed. For heterogeneous users, we prove the existence of Nash equilibrium by introducing the definition of exact potential game and design a distributed computation offloading algorithm to help mobile users choose proper offloading strategies. Numerical extensive simulations have been conducted and results demonstrate that the proposed algorithm can achieve desired system performance. <s> BIB022 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> The increase in capabilities of mobile devices to perform computation tasks has led to increase in energy consumption. While offloading the computation tasks helps in reducing the energy consumption, service availability is a cause of major concern. Thus, the main objective of this work is to reduce the energy consumption of mobile device, while maximising the service availability for users. The multi-criteria decision making (MCDM) TOPSIS method prioritises among the service providing resources such as Cloud, Cloudlet and peer mobile devices. The superior one is chosen for offloading. While availing service from a resource, the proposed fuzzy vertical handoff algorithm triggers handoff from a resource to another, when the energy consumption of the device increases or the connection time with the resource decreases. In addition, parallel execution of tasks is performed to conserve energy of the mobile device. The results of experimental setup with opennebula Cloud platform, Cloudlets and Android mobile devices on various network environments, suggest that handoff from one resource to another is by far more beneficial in terms of energy consumption and service availability for mobile users. <s> BIB023 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Combining mobile computing and cloud computing has opened the door recently for numerous applications that were not possible before due to the limited capabilities of mobile devices. Computation intensive applications are offloaded to the cloud, hence saving phone's energy and extending its battery life. However, energy savings are influenced by the wireless network conditions. In this paper, we propose considering contextual network conditions in deciding whether to offload to the cloud or not. An energy model is proposed to predict the energy consumed in offloading data under the current network conditions. Based on this prediction, a decision is taken whether to offload, to execute the application locally, or to delay offloading until detecting improvement in network conditions. 
We evaluated our approach by extending ThinkAir, a computation offloading framework proposed in [1], by our proposed energy model and delayed offloading algorithm. Experimental results showed considerable savings in energy with an average of 57% of the energy consumed by the application compared with the original static decision module implemented by ThinkAir. <s> BIB024 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> The development of cloud computing and virtualization techniques enables mobile devices to overcome the severity of scarce resource constraints by allowing them to offload computation and migrate several computation parts of an application to powerful cloud servers. A mobile device should judiciously determine whether to offload computation as well as what portion of an application should be offloaded to the cloud. This paper considers a mobile computation offloading problem where multiple mobile services in workflows can be invoked to fulfill their complex requirements and makes decisions on whether the services of a workflow should be offloaded. Due to the mobility of portable devices, unstable connectivity of mobile networks can impact the offloading decision. To address this issue, we propose a novel offloading system to design robust offloading decisions for mobile services. Our approach considers the dependency relations among component services and aims to optimize execution time and energy consumption of executing mobile services. To this end, we also introduce a mobility model and a trade-off fault-tolerance mechanism for the offloading system. A genetic algorithm (GA) based offloading method is then designed and implemented after carefully modifying parts of a generic GA to match our special needs for the stated problem. Experimental results are promising and show near-optimal solutions for all of our studied cases with almost linear algorithmic complexity with respect to the problem size. <s> BIB025 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Mobile applications are becoming computationally intensive nowadays due to the increasing convenience, reliance on, and sophistication of smartphones. Nevertheless, battery lifetime remains a major obstacle that prohibits the large-scale adoption of such apps. Mobile cloud computing is a promising solution whereby apps are partially processed in the cloud to minimize the overall energy consumption of smartphones. However, this will not necessarily save energy if there is no systematic mechanism to evaluate the effect of offloading an app onto the cloud. In this paper, we present a mathematical model that represents this energy consumption optimization problem. We propose an algorithm to dynamically solve the problem while taking security measures into account. We also propose the free sequence protocol (FSP) that allows for the dynamic execution of apps according to their call graph. Our experimental setup consists of an Android smartphone and a Java server in the cloud. The results demonstrate that our approach saves battery lifetime and enhances performance. They also show the effects of workload amount, network type, computation cost, security operations, signal strength, and call graph structure on the optimized overall energy consumption.
<s> BIB026 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Highlights: We review and synthesize the existing knowledge and experiences of evaluating commercial Cloud services. The findings identify several research gaps in the Cloud services evaluation domain. A dictionary-like reference is provided for future Cloud services evaluation work. IaaS and PaaS would serve different types of customers, and they cannot be replaced with each other. The Elasticity and Security evaluation of Cloud services could be long-term research challenges. Background: Cloud Computing is increasingly booming in industry with many competing providers and services. Accordingly, evaluation of commercial Cloud services is necessary. However, the existing evaluation studies are relatively chaotic. There exists tremendous confusion and gap between practices and theory about Cloud services evaluation. Aim: To facilitate relieving the aforementioned chaos, this work aims to synthesize the existing evaluation implementations to outline the state-of-the-practice and also identify research opportunities in Cloud services evaluation. Method: Based on a conceptual evaluation model comprising six steps, the systematic literature review (SLR) method was employed to collect relevant evidence to investigate the Cloud services evaluation step by step. Results: This SLR identified 82 relevant evaluation studies. The overall data collected from these studies essentially depicts the current practical landscape of implementing Cloud services evaluation, and in turn can be reused to facilitate future evaluation work. Conclusions: Evaluation of commercial Cloud services has become a world-wide research topic. Some of the findings of this SLR identify several research gaps in the area of Cloud services evaluation (e.g., Elasticity and Security evaluation of commercial Cloud services could be a long-term challenge), while some other findings suggest the trend of applying commercial Cloud services (e.g., compared with PaaS, IaaS seems more suitable for customers and is particularly important in industry). This SLR study itself also confirms some previous experiences and records new evidence-based software engineering (EBSE) lessons. <s> BIB027 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Deployment Environment of Cloud Applications (RQ1) <s> Advances in future computing to support emerging sensor applications are becoming more important as the need to better utilize computation and communication resources and make them energy efficient. As a result, it is predicted that intelligent devices and networks, including mobile wireless sensor networks (MWSN), will become the new interfaces to support future applications. In this paper, we propose a novel approach to minimize energy consumption of processing an application in MWSN while satisfying a certain completion time requirement. Specifically, by introducing the concept of cooperation, the logics and related computation tasks can be optimally partitioned, offloaded and executed with the help of peer sensor nodes, thus the proposed solution can be treated as a joint optimization of computing and networking resources.
Moreover, for a network with multiple mobile wireless sensor nodes, we propose energy efficient cooperation node selection strategies to offer a tradeoff between fairness and energy consumption. Our performance analysis is supplemented by simulation results to show the significant energy saving of the proposed solution. <s> BIB028
It has been identified that the deployment environment has significant effects on the energy consumption of Cloud applications BIB013. Recall that a Cloud application is generally based on a multi-resource collaboration, and its tasks can be deployed into different places, typically including local devices and Cloud virtual machines BIB011. To facilitate locating the energy consumption sources when running Cloud applications, it is useful to outline a generic deployment architecture in the context of Cloud computing. By extracting the information about deployment configurations from the reviewed studies, we draw an evidence-based environmental architecture for Cloud application deployment, as shown in Fig. 1. • Cloud: Located at the far end of the deployment architecture (cf. Fig. 1), the Cloud provides on-demand computing resources for users through the Internet. The Cloud computing paradigm originated as a business model that allows Cloud consumers to avoid upfront infrastructure costs BIB027. Driven by the requirement of energy efficiency in ICT, Cloud computing has also acted as a promising answer to the global demand for green computing BIB006 BIB007 BIB004. Although production data centers can continuously draw tremendous amounts of electricity BIB010, the Cloud has been advocated to be more environmentally friendly than local computing systems, for multiple reasons ranging from the improvement of utilization through resource multitenancy to the replacement of high-power local equipment with lightweight client devices BIB002 BIB003 BIB004 BIB021 BIB005. • Cloudlet: The emergence of the Cloudlet is a crucial evolution in mobile Cloud computing BIB006. As mobile and wearable devices become pervasive, the mobile application market is booming BIB012. Many Cloud-based mobile applications require low latency and high data throughput for their remote interactions and/or workload offloading. However, given the large separation between the local devices and the Cloud, moving computation tasks and transferring data have to go through WAN-scale network hops, which consume considerable energy and incur unacceptable delay and jitter BIB022. To satisfy the resource and performance requirements of mobile applications, a natural approach is to push the Cloud closer to its end users. A Cloudlet can be viewed as a mobile-service-oriented, small-scale data center that sits beside the clients or at the inner edge of the Internet. Empirical studies have shown that, because of the smaller round-trip delay, a nearby Cloudlet presents a better offloading option for computation-intensive workloads than the distant Cloud BIB012 BIB023. • Internet: Recall that accessing the Cloud/Cloudlet relies on the de facto Internet infrastructure BIB006 BIB023 BIB028, and thus the Internet plays an irreplaceable role in the Cloud ecosystem. According to telecommunication network design principles, the Internet can be segmented into three main parts, namely access, metro/edge, and core networks BIB008 BIB021, besides the content distribution networks and data centers. Such a segment model has been used to estimate the overall power consumption of the Internet by integrating those individual components BIB001 BIB008. From the application's perspective, however, the calculation of energy for data transportation through the Internet only involves a small set of network equipment (cf. Equation 30 in Section 3.5.4).
Therefore, to be aligned with the studies on Cloud applications' energy consumption, we simplify the Internet model to be an equipment combination of switches, routers and various links, plus the Cloudlet and Cloud, as illustrated in Fig. 1 (a per-bit calculation over this simplified model is sketched after this list). • Device Cloud: Considering the potentially spare computing resources of surrounding devices, peer-device offloading has been proposed as an effective option for sharing workloads through a Bluetooth ad-hoc network BIB023. A simulation-based theoretical analysis even showed 63% more energy saving than traditional offloading to the Cloud BIB014. In addition to the cooperation between peer devices, the paradigm of device Cloud has naturally evolved from the increasing average quantity of mobile devices per user or household, for running an application among a set of cooperative devices BIB012 BIB015 BIB016. By employing different wireless communication access technologies (e.g., WiFi, 2G/3G, LTE, etc.) and including sensors of various kinds (e.g., GPS, camera sensor, air pollution sensor, etc.), the cooperation among sensor nodes can be extended to a broad range, forming a mobile wireless sensor network BIB014. As a matter of fact, the latest radio frequency technologies and enhanced processing capabilities also make lightweight wireless sensor nodes feasible hosts for sensing applications. Since a sensor is inevitably integrated into a particular piece of electronic equipment (e.g., an environmental monitor or a vehicle diagnostic board) on the client side (or the outer edge of the Internet BIB028), we still treat the mobile wireless sensor network as part of the device Cloud paradigm. • Client Device: Although there are various types of client devices, the client-side energy consumption of Cloud applications has been discussed largely with respect to mobile handsets such as smartphones and tablets. In fact, mobile devices nowadays are becoming the primary computing platform and a mandatory part of daily life for many users BIB024 BIB025 BIB012. Unfortunately, due to the slow development of battery technology compared to semiconductor technologies BIB006 BIB026, the limited battery capacity has been identified as a major bottleneck of mobile handsets, in contrast to wall-socket-powered platforms BIB009 BIB017. Moreover, given the high demand for computationally expensive Cloud applications (e.g., the increasingly popular use cases of multimedia streaming), client devices are experiencing a further increase in local energy consumption BIB018 BIB019 BIB020. Correspondingly, the relevant studies are pervasively concerned with workload offloading strategies in mobile Cloud computing, in order to alleviate the clients' energy shortage.
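To make the simplified Internet model concrete, the following minimal sketch estimates the transport energy of moving data between a client and the Cloud with a per-bit calculation in the style of the segment models cited above: each traversed device contributes its power draw divided by its utilized capacity. This is only an illustration of the modeling idea, not the survey's Equation 30; the hop list, the power and capacity figures, and the utilization and overhead factors are all hypothetical placeholders.

```python
# Illustrative per-bit transport-energy estimate over the simplified Internet
# model (client -> access -> metro/edge -> core -> data center).
# All device names, power draws and capacities below are hypothetical.

HOPS = [
    # (device, power draw in watts, capacity in bits per second)
    ("access_switch", 300.0, 10e9),
    ("edge_router", 2e3, 100e9),
    ("core_router", 10e3, 1e12),
    ("core_router", 10e3, 1e12),
    ("dc_gateway", 5e3, 400e9),
]

UTILIZATION = 0.5  # assumed average utilization of each device
OVERHEAD = 2.0     # assumed factor covering redundancy and cooling overheads

def transport_energy_joules(data_bits: float) -> float:
    """Energy attributed to pushing data_bits through every hop.

    Each device contributes power / (utilization * capacity) joules per bit;
    the overhead factor scales the total for redundancy and facilities.
    """
    per_bit = sum(p / (UTILIZATION * c) for _, p, c in HOPS)
    return OVERHEAD * per_bit * data_bits

if __name__ == "__main__":
    megabyte_bits = 8 * 1e6
    print(f"~{transport_energy_joules(100 * megabyte_bits):.1f} J to move 100 MB")
```

Under these placeholder numbers, moving 100 MB costs on the order of a few hundred joules; the takeaway is that the per-bit contribution of every traversed device, rather than the device count alone, determines whether network transport dominates an application's energy bill.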
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Energy-proportional designs would enable large energy savings in servers, potentially doubling their efficiency in real-life use. Achieving energy proportionality will require significant improvements in the energy usage profile of every system component, particularly the memory and disk subsystems. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> We present a technique that controls the peak power consumption of a high-density server by implementing a feedback controller that uses precise, system-level power measurement to periodically select the highest performance state while keeping the system within a fixed power constraint. A control theoretic methodology is applied to systematically design this control loop with analytic assurances of system stability and controller performance, despite unpredictable workloads and running environments. In a real server we are able to control power over a 1 second period to within 1 W and over an 8 second period to within 0.1 W. Conventional servers respond to power supply constraint situations by using simple open-loop policies to set a safe performance level in order to limit peak power consumption. We show that closed-loop control can provide higher performance under these conditions and implement this technique on an IBM BladeCenter HS20 server. Experimental results demonstrate that closed-loop control provides up to 82% higher application performance compared to open-loop control and up to 17% higher performance compared to a widely used ad-hoc technique. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> It is now critical to reduce the consumption of natural resources, especially petroleum. Even in information systems, we have to reduce the total electrical power consumption. We classify network applications to two types of applications, transaction and communication based ones. In this paper, we consider communication based applications like the file transfer protocol (FTP). A computer named server consumes the electric power to transfer a file to a client depending on the transmission rate. We discuss a model for power consumption of a data transfer application which depends on the total transmission rate and number of clients to which the server concurrently transmits files. A client has to find a server in a set of servers, each of which holds a file so that the power consumption of the server is reduced. We discuss a pair of algorithms PCB (power consumption-based) and TRB (transmission rate-based) to find a server which transmits a file to a client. In the evaluation, we show the total power consumption can be reduced by the algorithms compared with the traditional round-robin algorithm. <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> It is now critical to reduce the consumption of natural resources, especially petroleum to resolve air pollutions. Even in information systems, we have to reduce the total electrical power consumption.
A cloud computing system is composed of a huge number of server computers like Google file systems. There are many discussions on how to reduce the total power consumption of servers, e.g., by turning off servers which are not required to execute requests from clients. A peer-to-peer (P2P) system is another type of information system which is composed of a huge number of peer computers where various types of applications are autonomously performed. In this paper, we consider a P2P system with a data transfer application like the file transfer protocol (FTP). A computer consumes the electric power to transfer a file to another computer depending on the bandwidth. We discuss a model for power consumption of data transfer applications. A client peer has to find a server peer in a set of server peers which holds a file so that the power consumption of the server is reduced. We discuss algorithms to find a server peer which transfers files in a P2P overlay network. <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Energy efficiency is a fundamental consideration for mobile devices. Cloud computing has the potential to save mobile client energy but the savings from offloading the computation need to exceed the energy cost of the additional communication. In this paper we provide an analysis of the critical factors affecting the energy consumption of mobile clients in cloud computing. Further, we present our measurements about the central characteristics of contemporary mobile handheld devices that define the basic balance between local and remote computing. We also describe a concrete example, which demonstrates energy savings. We show that the trade-offs are highly sensitive to the exact characteristics of the workload, data communication patterns and technologies used, and discuss the implications for the design and engineering of energy efficient mobile cloud computing solutions. <s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Cloud computing data centers are becoming increasingly popular for the provisioning of computing resources. The cost and operating expenses of data centers have skyrocketed with the increase in computing capacity. Several governmental, industrial, and academic surveys indicate that the energy utilized by computing and communication units within a data center contributes to a considerable slice of the data center operational costs.
In this paper, we present an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. We show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing where each user performs all computing on their own personal computer (PC). <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Cloud computing clusters distributed computers to provide applications as services and on-demand resources over the Internet. From the perspective of average and total energy consumption, such consolidated resource enhances the energy efficiency on both clients and servers. However, cloud computing has a different power consumption pattern from the traditional storage oriented Internet services. The computation oriented implementation of cloud service broadens the gap between the peak power demand and base power demand of a data center. A higher peak demand implies the need of feeder capacity expansion, which requires a considerable investment. This study proposes a computation related approach to lessen the increasing power demand of cloud service data centers. Through appropriate designs, some frequently used computing algorithms can be performed by either clients or servers. As a model presented in this paper, such client-server balanced computation resource integration suggests an energy-efficient and cost-effective cloud service data center. <s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> In spite of the dramatic growth in the number of smartphones in the recent years, the energy capacity challenge for these devices has not been solved satisfactorily. Moreover, the global demand for green Information and Communication Technology (ICT) motivates the researchers to consider cloud computing as a new computing paradigm that is promising for green solutions. In this paper, we propose new green solutions that save smartphones energy and at the same time achieve the green ICT goal. Our green solution is achieved by what we call Mobile Cloud Computing (MCC). The MCC temporarily migrates the content from the main cloud data center to a local cloud data center. The Internet Service Provider (ISP) provides the MCC, which holds the required contents for the smartphone network. Our analysis and experiments show that our proposed solution significantly reduces the ICT system energy consumption by 63% - 70%. <s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds.
This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device. <s> BIB010 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Cloud computing delivers computing as a utility to users worldwide. A consequence of this model is that cloud data centres have high deployment and operational costs, as well as significant carbon footprints for the environment. We need to develop Green Cloud Computing (GCC) solutions that reduce these deployment and operational costs and thus save energy and reduce adverse environmental impacts. In order to achieve this objective, a thorough understanding of the energy consumption patterns in complex Cloud environments is needed. We present a new energy consumption model and associated analysis tool for Cloud computing environments. We measure energy consumption in Cloud environments based on different runtime tasks. Empirical analysis of the correlation of energy consumption and Cloud data and computational tasks, as well as system performance, will be investigated based on our energy consumption model and analysis tool. Our research results can be integrated into Cloud systems to monitor energy consumption and support static or dynamic system-level optimisation. <s> BIB011 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> With the rise in mobile device adoption, and growth in mobile application market expected to reach $30 billion by the end of 2013, mobile user expectations for pervasive computation and data access are unbounded. Yet, various applications, such as face recognition, speech and object recognition, and natural language processing, exceed the limits of standalone mobile devices. Such applications resort to exploiting larger resources in the cloud, which sparked researching problems arising from data and computational offloading to the cloud. Research in this area has mainly focused on profiling and offloading tasks to remote cloud resources, automatically transforming mobile applications by provisioning and partitioning its execution into offloadable tasks, and more recently, bringing computational resources (e.g. Cloudlets) closer to task initiators in order to save mobile device energy. 
In this work, we argue for environments in which computational offloading is performed among mobile devices forming what we call a Mobile Device Cloud (MDC). Our contributions are: (1) Implementing an emulation testbed for quantifying the potential gain, in execution time or energy consumed, of offloading tasks to an MDC. This testbed includes a client offloading application, an offloadee server receiving tasks, and a traffic shaper situated between the client and server emulating different communication technologies (Bluetooth 3.0, Bluetooth 4.0, WiFi Direct, WiFi, and 3G). Our evaluation for offloading tasks with different data and computation characteristics to an MDC registers up to 80% and 90% savings in time or energy respectively, as opposed to offloading to the cloud. (2) Providing an MDC experimental platform to enable future evaluation and assessment of MDC-based solutions. We create a testbed, shown in Figure 1, to measure the energy consumed by a mobile device when running or offloading tasks using different communication technologies. We build an offloading Android-based mobile application and measure the time taken to offload tasks, execute them, and receive the results from other devices within an MDC. Our experimental results show gains in time and energy savings, up to 50% and 26% respectively, by offloading within MDCs, as opposed to locally executing tasks. (3) Providing solutions that address two major MDC challenges. First, due to mobility, offloadee devices leaving an MDC would seriously compromise performance. Therefore, we propose several social-based offloadee selection algorithms that exploit contact history between devices, as well as friendship relationships or common interests between device owners or users. Second, we provide solutions for balancing power consumption by distributing computational load across MDC members to elongate an MDC's lifetime. This need occurs when users need to maximize the lifetime of an ensemble of devices that belong to the same user or household. We evaluate the algorithms we propose for addressing these two challenges using the real datasets that contain contact mobility traces and social information for conference attendees over the span of three days. Our results show the impact of choosing the suitable offloadee subset, the gain from leveraging social information, and how MDCs can live longer by balancing power consumption across their members. <s> BIB012 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Many mobile applications such as games and social applications are emerging for mobile devices. These powerful applications consume more and more running time and energy, so they are badly confined by mobile devices with limited resources. Since cloud infrastructure has great potential to benefit task execution, this paper presents SmartVirtCloud (SmartVC), a system that can offload methods in applications to achieve better performance in indoor environments. SmartVC decides at runtime whether and when the methods in an application should be executed remotely. Two types of cloud service models, namely load-balancing and application-isolation, are constructed for concurrent requests.
The empirical results show that, by using SmartVC, the CPU-intensive calculation application consumes two orders of magnitude less energy on average; the processing speed of latency-sensitive image translation application gets doubled; the performance of network-intensive picture download application is improved with the increase of picture amount. In addition, the proposed two cloud models support concurrent requests from smartphones very well. <s> BIB013 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Data-intensive applications that involve large amounts of data generation, processing and transmission, have been operated with little attention to energy efficiency. Issues such as management, movement and storage of huge volumes of data may lead to high energy consumption. Replication is a useful solution to decrease data access time and improve performance in these applications, but it may also lead to increase the energy spent in storage and data transmission, by spreading large volumes of data replicas around the network. Thus, utilizing effective strategies for energy saving in these applications is a very critical issue from both the environmental and economical aspects. In this paper, at first we review the current data replication and caching approaches and energy saving methods in the context of data replication. Then, we propose a model for energy consumption during data replication and, finally, we evaluate two schemes for data fetching based on the two critical metrics in Grid environments: energy consumption and data access time. We also compare the gains based on these metrics with the no-caching scenario by using simulation. <s> BIB014 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Many people use smart phones on a daily basis, yet, their energy consumption is pretty high and the battery power lasts typically only for a single day. In the scope of the EnAct project, we investigate potential energy savings on smart phones by offloading computationally expensive tasks into the cloud. Obviously, also the wireless communication for uploading tasks requires energy. For that reason, it is crucial to understand the trade-off between energy consumption for wireless communication and local computation in order to assert that the overall power consumption is decreased. In this paper, we investigate the communications part of that trade-off. We conducted an extensive set of measurement experiments using typical smart phones. This is the first step towards the development of accurate energy models allowing to predict the energy required for offloading a given task. Our measurements include WiFi, 2G, and 3G networks as well as a set of two different devices. According to our findings, WiFi consumes by far the least energy per time unit, yet, this advantage seems to be due to its higher throughput and the implied shorter download time and not due to lower power consumption over time. 
<s> BIB015 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Providing femto access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smartphones to the so-called femto-cloud. Such offloading promises to be beneficial in terms of battery savings at the mobile terminal (MT) and/or in latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. For this problem, we provide in this paper a framework for the joint optimization of the radio and computational resource usage exploiting the tradeoff between energy consumption and latency. Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, and precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total or no offloading is optimal, determines which is the minimum affordable latency in the execution of the application, and analyzes, as a particular case, the minimization of the total consumed energy without latency constraints. <s> BIB016 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Measuring energy consumption is an essential step in the development of policies for the management of energy in every IT system. There is a wide range of methods using both hardware and software for measuring energy consumed by the system accurately. However, most of these methods measure energy consumed by a machine or a cluster of machines. In environments such as Cloud that an application can be built from components with comparable characteristics, measuring energy consumed by a single component can be extremely beneficial. For example, if we can measure energy consumed by different HTTP servers, then we can establish which one consumes less energy performing a given task. As a result, the Cloud provider can provide incentives, so that, application developers use the HTTP server that consume less energy. Indeed, considering size of the Cloud, even a small amount of saving per Virtual Machine can add up to a substantial saving. In this paper, we propose a technique to measure energy consumed by an application via measuring energy consumed by the individual processes of the application. We shall deal with applications that run in a virtualized environment such as Cloud. We present two implementations of our idea to demonstrate the feasibility of the approach. Firstly, a method of measurement with the help of Kernel-Based Virtual Machine running on a typical laptop is presented. Secondly, in a commercial Cloud such as Elastic host, we describe a method of measuring energy consumed by processes such as HTTP servers. This will allow commercial providers to identify which product consumes less energy on their platform. 
<s> BIB017 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> With the increasing variety of mobile applications, reducing the energy consumption of mobile devices is a major challenge in sustaining multimedia streaming applications. This paper explores how to minimize the energy consumption of the backlight when displaying a video stream without adversely impacting the user's visual experience. First, we model the problem as a dynamic backlight scaling optimization problem. Then, we propose algorithms to solve the fundamental problem and prove the optimality in terms of energy savings. Finally, based on the algorithms, we present a cloud-based energy-saving service. We have also developed a prototype implementation integrated with existing video streaming applications to validate the practicability of the approach. The results of experiments conducted to evaluate the efficacy of the proposed approach are very encouraging and show energy savings of 15-49 percent on commercial mobile devices. <s> BIB018 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Modern smartphones permit to run a large variety of applications, i.e. multimedia, games, social network applications, etc. However, this aspect considerably reduces the battery life of these devices. A possible solution to alleviate this problem is to offload part of the application or the whole computation to remote servers, i.e. Cloud Computing. The offloading cannot be performed without considering the issues derived from the nature of the application (i.e. multimedia, games, etc.), which can considerably change the resources necessary to the computation and the type, the frequency and the amount of data to be exchanged with the network. This work shows a framework for automatically building models for the offloading of mobile applications based on evolutionary algorithms and how it can be used to simulate different kinds of mobile applications and to analyze the rules generated. To this aim, a tool for generating mobile datasets, presenting different features, is designed and experiments are performed in different usage conditions in order to demonstrate the utility of the overall framework. <s> BIB019 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Mobile cloud computing (MC2) is emerging as a new computing paradigm that seeks to augment resource-constrained mobile devices for executing computing- and/or data-intensive mobile applications. Nonetheless, the energy-poverty nature of mobile devices has become a stumbling block that greatly impedes the practical application of MC2. Fortunately, for delay-tolerant mobile applications, energy conservation is achievable via two means: (1) dynamic selection of energy-efficient links (e.g., WiFi interface); and (2) deferring data transmission in bad connectivity. In this paper, we study the problem of energy-efficient downlink and uplink data transmission between mobile devices and clouds. 
In the presence of unpredictable data arrival, network availability and link quality, our objective is to minimize the time average energy consumption of a mobile device while ensuring the stability of both device-end and cloud-end queues. To achieve this goal, we propose an online control framework named EcoPlan under which mobile users can make flexible link selection and data transmission scheduling decisions to achieve arbitrary energy-delay tradeoffs. Real-world trace-driven simulations demonstrate the effectiveness of EcoPlan, along with its superior energy-efficiency over alternative WiFi-prioritized, minimum-delay and SALSA schemes. <s> BIB020 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> The increase in capabilities of mobile devices to perform computation tasks has led to increase in energy consumption. While offloading the computation tasks helps in reducing the energy consumption, service availability is a cause of major concern. Thus, the main objective of this work is to reduce the energy consumption of mobile device, while maximising the service availability for users. The multi-criteria decision making (MCDM) TOPSIS method prioritises among the service providing resources such as Cloud, Cloudlet and peer mobile devices. The superior one is chosen for offloading. While availing service from a resource, the proposed fuzzy vertical handoff algorithm triggers handoff from a resource to another, when the energy consumption of the device increases or the connection time with the resource decreases. In addition, parallel execution of tasks is performed to conserve energy of the mobile device. The results of experimental setup with opennebula Cloud platform, Cloudlets and Android mobile devices on various network environments, suggest that handoff from one resource to another is by far more beneficial in terms of energy consumption and service availability for mobile users. <s> BIB021 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> In cloud environments, IT solutions are delivered to users via shared infrastructure, enabling cloud service providers to deploy applications as services according to user QoS (Quality of Service) requirements. One consequence of this cloud model is the huge amount of energy consumption and significant carbon footprints caused by large cloud infrastructures. A key and common objective of cloud service providers is thus to develop cloud application deployment and management solutions with minimum energy consumption while guaranteeing performance and other QoS specified in Service Level Agreements (SLAs). However, finding the best deployment configuration that maximises energy efficiency while guaranteeing system performance is an extremely challenging task, which requires the evaluation of system performance and energy consumption under various workloads and deployment configurations. In order to simplify this process we have developed Stress Cloud, an automatic performance and energy consumption analysis tool for cloud applications in real-world cloud environments. Stress Cloud supports the modelling of realistic cloud application workloads, the automatic generation of load tests, and the profiling of system performance and energy consumption. 
We demonstrate the utility of Stress Cloud by analysing the performance and energy consumption of a cloud application under a broad range of different deployment configurations. <s> BIB022 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> This article investigates the problem of holistic energy consumption in cloud-assisted mobile computing. In particular, since the cloud, assisting a multi-core mobile device, can be considered as a special core with powerful computation capability, the optimization of holistic energy consumption is formulated as a task-core assignment and scheduling problem. Specifically, the energy consumption models for the mobile device, network, cloud, and, more importantly, task interaction are presented, respectively. Based on these energy consumption models, a holistic energy optimization framework is then proposed, where the thermal effect, application execution deadline, transmission power, transmission bandwidth, and adaptive modulation and coding rate are jointly considered. <s> BIB023 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Execution Elements of Cloud Applications (RQ2) <s> Empowering application programmers to make energy-aware decisions is a critical dimension of energy optimization for computer systems. In this paper, we study the energy impact of alternative data management choices by programmers, such as data access patterns, data precision choices, and data organization. Second, we attempt to build a bridge between application-level energy management and hardware-level energy management, by elucidating how various application-level data management features respond to Dynamic Voltage and Frequency Scaling (DVFS). Finally, we apply our findings to real-world applications, demonstrating their potential for guiding application-level energy optimization. The empirical study is particularly relevant in the Big Data era, where data-intensive applications are large energy consumers, and their energy efficiency is strongly correlated to how data are maintained and handled in programs. <s> BIB024
Although Cloud applications can vary infinitely in functionality, we emphasize their generic execution elements. To facilitate identifying these elements, we pre-list three entities (namely Client/User, Method/Task, and Data) that drive, or are driven by, potential execution elements. In the end, nine runtime elements across those entities are extracted from the reviewed papers, as shown in Fig. 2. The discussion of application execution elements is combined into Section 3.2.1. • Downloading & Uploading: In essence, downloading/uploading indicates data access from the user's point of view. We recognize these two activities only when they are specifically discussed in the primary studies, for example file uploading to and downloading from the Cloud BIB007 BIB003 BIB004. In addition, since the radio frequency (RF) module of mobile devices demands different amounts of energy for sending and for receiving data (uploading generally costs more energy than downloading the same amount of data) BIB012 BIB015, we also employ this execution element to cover separate uplink and downlink wireless transmissions BIB016 BIB020. • Interaction: Although the interaction between the client and the remote tasks essentially incurs data exchange, investigating the energy consumption of interactive workloads can be particularly challenging due to the fine granularity of communication BIB005. Moreover, to intentionally study the mutual actions between a Cloud application and its users, it is useful to distinguish interaction from the other types of communication elements. For example, instead of reflecting communication data throughput, this execution element is often highlighted when stressing the server load, such as user connections BIB008 and user requests for playing online games BIB021 or browsing HTTP websites BIB022 BIB017. • Maintenance: If a Cloud application requires data storage, one of its fundamental execution elements is maintaining the availability and integrity of data. In practice, it is common to spread data across different locations to improve data accessibility and reduce the likelihood of data loss BIB001. Given the limited maintenance scenarios in the selected studies, we roughly identify data files to be stored either in remote data centers (e.g., when employing storage as a service) or in local client devices (e.g., when offloading computational workloads only) BIB007. When it comes to remote data maintenance, storing popular contents in the Cloudlet instead of the Cloud has been widely accepted as an energy-efficient strategy, as it reduces the Internet traffic between the content data and their end users BIB009. • Monitoring: When employing Cloud services, monitoring is one of the primary execution tasks, especially in thin-client scenarios BIB007. Considering the limited battery capacity of handset devices, runtime monitoring can be a major concern for the energy consumption of mobile Cloud applications BIB010. Correspondingly, it has been proposed to scale the backlight levels of image frames in particular Cloud applications, like video streaming, in order to reduce the energy consumed by the display modules of client devices BIB018.
• Processing: As the name suggests, we treat processing as the processor-centric execution element, such as mathematical calculation (e.g., generating a particular Fibonacci number BIB013), logic task execution (e.g., workload-resource scheduling BIB011 BIB023), and data processing (e.g., mapping, shuffling and reducing the input data ). Since the processor has been considered the major power consumer in Cloud computing scenarios BIB002, processing appears to be the most common energy-consuming activity, discussed in nearly all the selected studies.

• Reading & Writing: In contrast to data access from the user's point of view (i.e., Downloading & Uploading), the application task's perspective distinguishes two types of energy-consuming data access. The first type focuses on data reading/writing from/to where the data are stored, while the second type emphasizes data transmission through the network. Although not specified in every study, these two element types usually coexist in Cloud applications (e.g., data fetching requires both disk reading and network transfer BIB014). As for Reading & Writing alone, one trend is that disk IO is more power-consuming than memory IO, while another trend is that data writing is generally more power-expensive than reading BIB024.

• Transmission: As mentioned above, the element data transmission mainly concerns application tasks and their data transfer over network resources. Since different tasks of a Cloud application can be executed in a distributed manner, data transmission could take place not only within the Cloud but also between the Cloud and the client (note that we identify Cloud-client data transmission in a study when it does not emphasize Downloading & Uploading or Interaction). In either case, a Cloud application that transfers large amounts of data can incur a significant proportion of its overall energy consumption in transmission, due to two facts: (1) In the Cloud, routers, switches, links and aggregation resources consume more than 30% of the total energy BIB006; (2) On the client side, data communication has significant impacts on mobile devices' energy consumption BIB019. A minimal sketch that combines these execution elements into an additive energy estimate is given after this list.
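To make the additive deconstruction implied by these execution elements concrete, the following sketch estimates an application's energy as the sum of per-element contributions. It is only an illustration of the decomposition above, not a model proposed by any of the reviewed studies; all coefficient names and values are hypothetical placeholders.

```python
# Minimal additive energy model over the execution elements discussed above.
# All coefficients are hypothetical placeholders, not measured values.

ENERGY_COEFF = {
    "processing_j_per_mcycle": 0.8,  # Processing: CPU energy per mega-cycle
    "uplink_j_per_mb": 1.2,          # Uploading (usually costlier than downlink)
    "downlink_j_per_mb": 0.6,        # Downloading
    "disk_read_j_per_mb": 0.2,       # Reading (disk IO > memory IO)
    "disk_write_j_per_mb": 0.3,      # Writing (generally costlier than reading)
}

def estimate_energy(mcycles, upload_mb, download_mb, read_mb, write_mb):
    """Sum the per-element energy contributions, in joules."""
    c = ENERGY_COEFF
    return (mcycles * c["processing_j_per_mcycle"]
            + upload_mb * c["uplink_j_per_mb"]
            + download_mb * c["downlink_j_per_mb"]
            + read_mb * c["disk_read_j_per_mb"]
            + write_mb * c["disk_write_j_per_mb"])

# Example: 500 mega-cycles of processing, 10 MB upload, 40 MB download,
# 20 MB disk read and 5 MB disk write.
print(f"{estimate_energy(500, 10, 40, 20, 5):.1f} J")  # 441.5 J
```

Only the relative ordering of the placeholder coefficients (uplink above downlink, writing above reading) mirrors the trends reported above; the absolute values carry no empirical meaning.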
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Summary <s> We present a technique that controls the peak power consumption of a high-density server by implementing a feedback controller that uses precise, system-level power measurement to periodically select the highest performance state while keeping the system within a fixed power constraint. A control theoretic methodology is applied to systematically design this control loop with analytic assurances of system stability and controller performance, despite unpredictable workloads and running environments. In a real server we are able to control power over a 1 second period to within 1 W and over an 8 second period to within 0.1 W. ::: ::: Conventional servers respond to power supply constraint situations by using simple open-loop policies to set a safe performance level in order to limit peak power consumption. We show that closed-loop control can provide higher performance under these conditions and implement this technique on an IBM BladeCenter HS20 server. Experimental results demonstrate that closed-loop control provides up to 82% higher application performance compared to open-loop control and up to 17% higher performance compared to a widely used ad-hoc technique. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Summary <s> Energy efficiency and parallel I/O performance have become two critical measures in high performance computing (HPC). However, there is little empirical data that characterize the energy-performance behaviors of parallel I/O workload. In this paper, we present a methodology to profile the performance, energy, and energy efficiency of parallel I/O access patterns and report our findings on the impacting factors of parallel I/O energy efficiency. Our study shows that choosing the right buffer size can change the energy-performance efficiency by up to 30 times. High spatial and temporal spacing can also lead to significant improvement in energy-performance efficiency (about 2X). We observe CPU frequency has a more complex impact, depending on the IO operations, spatial and temporal, and memory buffer size. The presented methodology and findings are useful for evaluating the energy efficiency of I/O intensive applications and for providing a guideline to develop energy efficient parallel I/O technology. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Summary <s> Network-based cloud computing is rapidly expanding as an alternative to conventional office-based computing. As cloud computing becomes more widespread, the energy consumption of the network and computing resources that underpin the cloud will grow. This is happening at a time when there is increasing attention being paid to the need to manage energy consumption across the entire information and communications technology (ICT) sector. While data center energy use has received much attention recently, there has been less attention paid to the energy consumption of the transmission and switching networks that are key to connecting users to the cloud. In this paper, we present an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. 
We show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing where each user performs all computing on their own personal computer (PC). <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Summary <s> MapReduce is a programming model for data intensive computing on large-scale distributed systems. With its wide acceptance and deployment, improving the energy efficiency of MapReduce will lead to significant energy savings for data centers and computational grids. In this paper, we study the performance and energy efficiency of the Hadoop implementation of MapReduce under the context of energy-proportional computing. We consider how MapReduce efficiency varies with two runtime configurations: resource allocation that changes the number of available concurrent workers, and DVFS (Dynamic Voltage and Frequency Scaling) that adjusts the processor frequency based on the workloads' computational needs. Our experimental results indicate significant energy savings can be achieved from judicious resource allocation and intelligent DVFS scheduling for computation intensive applications, though the level of improvements depends on both workload characteristic of the MapReduce application and the policy of resource and DVFS scheduling. <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Summary <s> General-purpose computing domain has experienced strategy transfer from scale-up to scale-out in the past decade. In this paper, we take a step further to analyze ARM-processor based cluster against Intel X86 workstation, from both energy-efficiency and cost-efficiency perspectives. Three applications are selected and evaluated to represent diversified applications, including Web server throughput, in-memory database, and video transcoding. Through detailed measurements, we make the observations that the energy-efficiency ratio of the ARM cluster against the Intel workstation varies from 2.6-9.5 in in-memory database, to approximately 1.3 in Web server application, and 1.21 in video transcoding. We also find out that for the Intel processor that adopts dynamic voltage and frequency scaling (DVFS) techniques, the power consumption is not linear with the CPU utilization level. The maximum energy saving achievable from DVFS is 20%. Finally, by utilizing a monthly cost model of data centers, we conclude that ARM cluster based data centers are feasible, and are advantageous in computationally lightweight applications, e.g. in-memory database and network-bounded Web applications. The cost advantage of ARM cluster diminishes progressively for computation-intensive applications, i.e. dynamic Web server application and video transcoding, because the number of ARM processors needed to provide comparable performance increases.
<s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Summary <s> With the recent advent of 4G LTE networks, there has been increasing interest to better understand the performance and power characteristics, compared with 3G/WiFi networks. In this paper, we take one of the first steps in this direction. Using a publicly deployed tool we designed for Android called 4GTest attracting more than 3000 users within 2 months and extensive local experiments, we study the network performance of LTE networks and compare with other types of mobile networks. We observe LTE generally has significantly higher downlink and uplink throughput than 3G and even WiFi, with a median value of 13Mbps and 6Mbps, respectively. We develop the first empirically derived comprehensive power model of a commercial LTE network with less than 6% error rate and state transitions matching the specifications. Using a comprehensive data set consisting of 5-month traces of 20 smartphone users, we carefully investigate the energy usage in 3G, LTE, and WiFi networks and evaluate the impact of configuring LTE-related parameters. Despite several new power saving improvements, we find that LTE is as much as 23 times less power efficient compared with WiFi, and even less power efficient than 3G, based on the user traces and the long high power tail is found to be a key contributor. In addition, we perform case studies of several popular applications on Android in LTE and identify that the performance bottleneck for web-based applications lies less in the network, compared to our previous study in 3G [24]. Instead, the device's processing power, despite the significant improvement compared to our analysis two years ago, becomes more of a bottleneck. <s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Summary <s> Reducing energy consumption without sacrificing service quality is important for cloud computing. Efficient scheduling algorithms HEFT-D and HEFT-DS based on frequency-scaling and state-switching techniques are proposed. Our scheduling algorithms use the fact that the hosts employing a lower frequency or entering a sleeping state may consume less energy without leading to a longer makespan. Experimental results have shown that our algorithms maintain the performance as good as that of HEFT while the energy consumption is reduced. <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Summary <s> This paper presents a quantitative study on the energy-traffic tradeoff problem from the perspective of entire Wireless Local Area Network (WLAN). We propose a novel Energy-Efficient Cooperative Offloading Model (E2COM) for energy-traffic tradeoff, which can ensure the fairness of energy consumption of mobile devices and reduce the computation repetition and eliminate the Internet data traffic redundancy through cooperative execution and sharing computation results. We design an Online Task Scheduling Algorithm (OTS) based on a pricing mechanism and Lyapunov optimization to address the problem without predicting future information on task arrivals, transmission rates and so on. OTS can achieve a desirable trade-off between the energy consumption and Internet data traffic by appropriately setting the tradeoff coefficient.
Simulation results demonstrate that E2COM is more efficient than no offloading and cloud offloading for a variety of typical mobile devices, applications and link qualities in WLAN. <s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Summary <s> With increasingly inexpensive cloud storage and increasingly powerful cloud processing, the cloud has rapidly become the environment to store and analyze data. Most of the large-scale data computations in the cloud heavily rely on the MapReduce paradigm and its Hadoop implementation. Nevertheless, this exponential growth in popularity has significantly impacted power consumption in cloud infrastructures. In this paper, we focus on MapReduce and we investigate the impact of dynamically scaling the frequency of compute nodes on the performance and energy consumption of a Hadoop cluster. To this end, a series of experiments are conducted to explore the implications of Dynamic Voltage Frequency scaling (DVFS) settings on power consumption in Hadoop-clusters. By adapting existing DVFS governors (i.e., performance, powersave, ondemand, conservative and userspace) in the Hadoop cluster, we observe significant variation in performance and power consumption of the cluster with different applications when applying these governors: the different DVFS settings are only sub-optimal for different MapReduce applications. Furthermore, our results reveal that the current CPU governors do not exactly reflect their design goal and may even become ineffective to manage the power consumption in Hadoop clusters. This study aims at providing more clear understanding of the interplay between performance and power management in Hadoop cluster and therefore offers useful insight into designing power-aware techniques for Hadoop systems. <s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Summary <s> Advances in sensor cloud computing to support vehicular applications are becoming more important as the need to better utilize computation and communication resources and make them energy efficient. In this paper, we propose a novel approach to minimize energy consumption of processing a vehicular application within mobile wireless sensor networks (MWSN) while satisfying a certain completion time requirement. Specifically, the application can be optimally partitioned, offloaded and executed with helps of peer sensor devices, e.g., a smart phone, thus the proposed solution can be treated as a joint optimization of computing and networking resources. Our theoretical analysis is supplemented by simulation results to show the significance of energy saving by 63% compared to the traditional cloud computing methods. Moreover, a prototype cloud system has been developing to validate the efficiency of sensor cloud strategies in dealing with diverse vehicular applications. <s> BIB010 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Summary <s> The Smart cities applications are gaining an increasing interest among administrations, citizens and technologists for their suitability in managing the everyday life. 
One of the major challenges is regarding the possibility of managing in an efficient way the presence of multiple applications in a Wireless Heterogeneous Network (HetNet) environment, alongside the presence of a Mobile Cloud Computing (MCC) infrastructure. In this context we propose a utility function model derived from the economic world aiming to measure the Quality of Service (QoS), in order to choose the best access point in a HetNet to offload part of an application on the MCC, aiming to save energy for the Smart Mobile Devices (SMDs) and to reduce computational time. We distinguish three different types of application, considering different offloading percentage of computation and analyzing how the cell association algorithm allows energy saving and shortens computation time. The results show that when the network is overloaded, the proposed utility function allows to respect the target values by achieving higher throughput values, and reducing the energy consumption and the computational time. <s> BIB011
According to the investigated execution elements and deployment environments of Cloud applications, we distribute the selected studies over a bubble plot, as shown in Fig. 3. It is notable that the same study could have been counted in different bubbles, because one energy investigation might include multiple execution elements and different environmental components (e.g., BIB003). With regard to the execution elements, a clear trend is that most studies have focused on task processing and data transmission, which confirms computation and communication as the two major concerns about a Cloud application's energy expense (e.g., using a communication-computation ratio to characterize application workloads and analyze its influence on energy efficiency BIB007). Among the environmental components for Cloud application deployment, client devices and the Cloud have attracted the most research attention. Judging from the research methods employed, the reason seems to be twofold: (1) client devices can be directly controlled and measured; and (2) Cloud data centers can be simplified into local-server simulations, while the local servers are controllable and measurable. Such a distribution confirms that an uncontrollable deployment environment makes addressing a Cloud application's energy consumption more challenging and complex. Correspondingly, by abstracting away the uncontrollable aspects, modeling and model-based simulation would be a practical and effective research approach in this case.

Overall, we have identified 18 environmental factors from the relevant studies. To facilitate tracing back to the reviewed studies, relevant publications are specified for each of the factors. Since the identified factors were not evenly studied, it is useful to reveal to what extent those factors concerned researchers. Here we employ factor-studies as a metric to measure the popularity of the identified factors, i.e., one factor-study of a particular factor indicates that the factor is involved in one study. The popularity distribution is illustrated in Fig. 5. It is clear that the CPU clock frequency has been studied as an outstanding environmental factor, followed by the technology of access points and the network bandwidth. As for the factor-study distribution over the four resource types, we found only five studies for two memory factors and one study for two storage factors. This huge imbalance in factor-studies further confirms computation and communication as the two major energy concerns in the existing research work from the environmental perspective.

In particular, there are conflicting opinions about adjusting CPU clock frequencies for energy saving, particularly through dynamic voltage and frequency scaling (DVFS). Although intelligently scaling frequency can improve energy efficiency, its benefits can be trivial BIB008, and the achievable energy saving could be only 13% BIB002 to 20% BIB005. Furthermore, different applications might reach their best energy efficiency at different optimal frequencies BIB004, and thus the same DVFS scheduling could only be sub-optimal across those different applications BIB009. It is also notable for Access Point Technology that, although WiFi is generally more energy efficient than cellular technologies, the superiority of WiFi becomes marginal if the utilization of cellular is high (for example when transmitting large bulks of data) BIB006. Meanwhile, the efficiency of WiFi under saturation traffic would significantly degrade due to packet loss and retransmissions.
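The DVFS debate above can be illustrated with a textbook first-order power model (a sketch under stated assumptions, not a model taken from any single reviewed study): dynamic power grows roughly cubically with clock frequency while, for a fixed cycle count, execution time shrinks only linearly, so the energy-optimal frequency is workload-dependent and sits between the extremes.

```python
# Illustrative DVFS energy trade-off. Assumes the textbook first-order model
# P(f) = P_STATIC + K * f^3; all constants below are hypothetical.

P_STATIC = 2.0  # W, frequency-independent power (leakage, uncore, ...)
K = 0.5         # W per GHz^3, dynamic-power coefficient
CYCLES = 4.0    # giga-cycles of work to execute

def energy(f_ghz):
    time_s = CYCLES / f_ghz              # runtime shrinks linearly with f
    power_w = P_STATIC + K * f_ghz ** 3  # power grows cubically with f
    return power_w * time_s              # joules

# Sweeping candidate frequencies shows the minimum lies at neither extreme,
# which is one way to see why a single DVFS governor setting can only be
# sub-optimal across applications with different characteristics.
for f in (0.8, 1.2, 1.6, 2.0, 2.4):
    print(f"{f:.1f} GHz -> {energy(f):.2f} J")
```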
As listed above, we have identified 12 workload factors in total. In a similar fashion to Section 3.3.6, we also use numerical factor-studies to reflect to what extent the different workload factors have concerned researchers, as illustrated in Fig. 7. It is again notable that popular factors do not necessarily act as the main contributors to energy consumption. When individual factors' impacts on energy consumption are isolated from one another, the sizes of data and task (the processing workload) emerge as the main energy-related factors of a Cloud application. In fact, there is a wide consensus on these two factors in both the literature and practice: computational tasks occupy the processor, the major power consumer among computing resources BIB001, while data lead to communication and storage costs. Such a factor concentration roughly matches the main environmental factors (cf. Section 3.3.6) in terms of their potential interactions (i.e., task processing and data communication). Since a Cloud application's workload is usually reflected by a combination of factors, in practice, one factor's influence on energy consumption could be correlated with or even constrained by others. For example, task size and task complexity can sometimes interchangeably indicate each other ( BIB010 vs. ); the number of tasks and the data size are frequently used together to represent the overall workload size (e.g., BIB011); while the degree of parallelism in a Cloud application also depends on the resource allocations (e.g., BIB004). We leave more discussion about the combinational influences of factors to Section 4.
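As a concrete reading of the two dominant workload factors, the following break-even sketch compares local execution (driven by task size) against offloading (driven by data size). The coefficients are hypothetical placeholders; the sketch only illustrates why the two factors must be interpreted together, rather than reproducing any reviewed decision model.

```python
# Illustrative offloading break-even analysis over the two dominant workload
# factors (task size and data size). Coefficients are hypothetical.

E_LOCAL_J_PER_MCYCLE = 0.9  # device energy per mega-cycle executed locally
E_TX_J_PER_MB = 1.5         # device energy per MB shipped when offloading

def local_energy(task_mcycles):
    return task_mcycles * E_LOCAL_J_PER_MCYCLE

def offload_energy(data_mb):
    # Device-side cost only: transferring the task's input/output data.
    return data_mb * E_TX_J_PER_MB

def should_offload(task_mcycles, data_mb):
    return offload_energy(data_mb) < local_energy(task_mcycles)

# A compute-heavy, data-light task favours offloading; a data-heavy,
# compute-light task does not -- neither factor decides the outcome alone.
print(should_offload(task_mcycles=400, data_mb=20))  # True
print(should_offload(task_mcycles=50, data_mb=60))   # False
```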
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environmental Factors and their Influences on Energy Consumption of Cloud Applications (RQ3) <s> Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environmental Factors and their Influences on Energy Consumption of Cloud Applications (RQ3) <s> Cloud computing delivers IT solutions as a utility to users. One consequence of this model is that large cloud data centres consume large amounts of energy and produce significant carbon footprints. A common objective of cloud providers is to develop resource provisioning and management solutions that minimise energy consumption while guaranteeing Service Level Agreements (SLAs). In order to achieve this objective, a thorough understanding of energy consumption patterns in complex cloud systems is imperative. We have developed an energy consumption model for cloud computing systems. To operationalise this model, we have conducted extensive experiments to profile the energy consumption in cloud computing systems based on three types of tasks: computation-intensive, data-intensive and communication-intensive tasks. We collected fine-grained energy consumption and performance data with varying system configurations and workloads. Our experimental results show the correlation coefficients of energy consumption, system configuration and workload, as well as system performance in cloud systems. These results can be used for designing energy consumption monitors, and static or dynamic system-level energy consumption optimisation strategies for green cloud computing systems. 
<s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environmental Factors and their Influences on Energy Consumption of Cloud Applications (RQ3) <s> In cloud environments, IT solutions are delivered to users via shared infrastructure, enabling cloud service providers to deploy applications as services according to user QoS (Quality of Service) requirements. One consequence of this cloud model is the huge amount of energy consumption and significant carbon footprints caused by large cloud infrastructures. A key and common objective of cloud service providers is thus to develop cloud application deployment and management solutions with minimum energy consumption while guaranteeing performance and other QoS specified in Service Level Agreements (SLAs). However, finding the best deployment configuration that maximises energy efficiency while guaranteeing system performance is an extremely challenging task, which requires the evaluation of system performance and energy consumption under various workloads and deployment configurations. In order to simplify this process we have developed Stress Cloud, an automatic performance and energy consumption analysis tool for cloud applications in real-world cloud environments. Stress Cloud supports the modelling of realistic cloud application workloads, the automatic generation of load tests, and the profiling of system performance and energy consumption. We demonstrate the utility of Stress Cloud by analysing the performance and energy consumption of a cloud application under a broad range of different deployment configurations. <s> BIB003
Although the environmental architecture is straightforward (cf. Section 3.1), the deployment of a Cloud application could require sophisticated environmental configurations, and different environmental conditions might in turn drive different deployment strategies (e.g., a data distribution that is right under excellent connectivity would be wrong over poor communication channels BIB001). In essence, it is the detailed configurations that expose significant environmental impacts on the energy consumption of Cloud applications BIB002 BIB003. To alleviate the complexity of energy analysis under various deployment configurations, it is valuable to identify individual environmental factors and distinguish their energy influences from one another. Given the fine-grained decomposition of the IT infrastructure , the existing studies were mainly concerned with four Cloud resource types, i.e., computation, communication, memory and storage. We accordingly group and report the identified environmental factors, as organized through an entity-relationship diagram in Fig. 4.
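To show how a single environmental factor can flip a deployment decision, the sketch below varies only the network bandwidth in a first-order transmission-energy model (radio power times transfer time). All constants are hypothetical; the point is that the offloading choice that is right under good connectivity becomes wrong over a poor channel, echoing the example above.

```python
# Illustrative sensitivity of an offloading decision to one environmental
# factor (network bandwidth). Constants are hypothetical placeholders.

RADIO_POWER_W = 1.0         # device radio power while transmitting
LOCAL_EXEC_ENERGY_J = 25.0  # energy to run the task locally instead

def tx_energy(data_mb, bandwidth_mbps):
    tx_time_s = (data_mb * 8) / bandwidth_mbps  # MB -> Mb, then seconds
    return RADIO_POWER_W * tx_time_s            # joules

for bw in (50, 10, 2):  # good WiFi, congested WiFi, poor cellular (Mbps)
    e = tx_energy(data_mb=10, bandwidth_mbps=bw)
    verdict = "offload" if e < LOCAL_EXEC_ENERGY_J else "run locally"
    print(f"{bw:>3} Mbps: {e:5.1f} J -> {verdict}")
```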
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> In spite of the dramatic growth in the number of smartphones in the recent years, the energy capacity challenge for these devices has not been solved satisfactorily. Moreover, the global demand for green Information and Communication Technology (ICT) motivates the researchers to consider cloud computing as a new computing paradigm that is promising for green solution. In this paper, we propose new green solutions that save smartphones energy and at the same time achieve the green ICT goal. Our green solution is achieved by what we call Mobile Cloud Computing (MCC). The MCC migrates the content from the main cloud data center to the local cloud data center temporarily. The Internet Service Provider (ISP) provides the MCC, which holds the required contents for the smartphone network. Our analysis and experiments show that our proposed solution significantly reduces the ICT system energy consumption by 63% - 70%. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> Network-based cloud computing is rapidly expanding as an alternative to conventional office-based computing. As cloud computing becomes more widespread, the energy consumption of the network and computing resources that underpin the cloud will grow. This is happening at a time when there is increasing attention being paid to the need to manage energy consumption across the entire information and communications technology (ICT) sector. While data center energy use has received much attention recently, there has been less attention paid to the energy consumption of the transmission and switching networks that are key to connecting users to the cloud.
In this paper, we present an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. We show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing where each user performs all computing on their own personal computer (PC). <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> The popularity of smartphones is growing every day. Thanks to the more powerful hardware the applications can run more tasks and use broadband network connection, however there are several known issues. For example, under typical usage (messaging, browsing, and gaming) a smartphone can be discharged in one day. This makes the battery life one of the biggest problems of the mobile devices. That is a good motivation to find energy-efficient solutions. One of the possible methods is the “computation offloading” mechanism, which means that some of the tasks are uploaded to the cloud. In this paper we are going to present a new energy-efficient job scheduling model and a measurement infrastructure which is used to analyze the energy consumption of smartphones. Our results are going to be demonstrated through some scenarios where the goal is to save energy. The offloading task is based on LP and scheduling problems. <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> Traditional scheduling research usually targets make span as the only optimization goal, while several isolated efforts addressed the problem by considering at most two objectives. In this paper we propose a general framework and heuristic algorithm for multi-objective static scheduling of scientific workflows in heterogeneous computing environments. The algorithm uses constraints specified by the user for each objective and approximates the optimal solution by applying a double strategy: maximizing the distance to the constraint vector for dominant solutions and minimizing it otherwise. We analyze and classify different objectives with respect to their impact on the optimization process and present a four-objective case study comprising make span, economic cost, energy consumption, and reliability. We implemented the algorithm as part of the ASKALON environment for Grid and Cloud computing. Results for two real-world applications demonstrate that the solutions generated by our algorithm are superior to user-defined constraints most of the time. Moreover, the algorithm outperforms a related bi-criteria heuristic and a bi-criteria genetic algorithm. <s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> Many emerging mobile applications nowadays tend to be computation-intensive due to the increasing popularity and convenience of smartphones.
Nevertheless, a major obstacle prohibits the direct adoption of such applications and that is battery lifetime. Mobile Cloud Computing (MCC) is a promising solution that suggests the partial processing of applications on the cloud to minimize the overall power consumption at the mobile device. However, this does not necessarily save energy if there is no systematic mechanism for evaluating the effect of offloading the application into the cloud. In this paper, we study the factors affecting the power consumption due to offloading, develop a decision model, and verify its correctness by real implementation on an Android device. The results show that the proposed partitioning scheme successfully results in energy savings at the mobile handset and surpasses the energy efficiency of both fully local and fully remote execution. <s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> With the rise in mobile device adoption, and growth in mobile application market expected to reach $30 billion by the end of 2013, mobile user expectations for pervasive computation and data access are unbounded. Yet, various applications, such as face recognition, speech and object recognition, and natural language processing, exceed the limits of standalone mobile devices. Such applications resort to exploiting larger resources in the cloud, which sparked researching problems arising from data and computational offloading to the cloud. Research in this area has mainly focused on profiling and offloading tasks to remote cloud resources, automatically transforming mobile applications by provisioning and partitioning its execution into offloadable tasks, and more recently, bringing computational resources (e.g. Cloudlets) closer to task initiators in order to save mobile device energy. In this work, we argue for environments in which computational offloading is performed among mobile devices forming what we call a Mobile Device Cloud (MDC). Our contributions are: (1) Implementing an emulation testbed for quantifying the potential gain, in execution time or energy consumed, of offloading tasks to an MDC. This testbed includes a client offloading application, an offloadee server receiving tasks, and a traffic shaper situated between the client and server emulating different communication technologies (Bluetooth 3.0, Bluetooth 4.0, WiFi Direct, WiFi, and 3G). Our evaluation for offloading tasks with different data and computation characteristics to an MDC registers up to 80% and 90% savings in time or energy respectively, as opposed to offloading to the cloud. (2) Providing an MDC experimental platform to enable future evaluation and assessment of MDC-based solutions. We create a testbed, shown in Figure 1, to measure the energy consumed by a mobile device when running or offloading tasks using different communication technologies. We build an offloading Android-based mobile application and measure the time taken to offload tasks, execute them, and receive the results from other devices within an MDC. Our experimental results show gains in time and energy savings, up to 50% and 26% respectively, by offloading within MDCs, as opposed to locally executing tasks. (3) Providing solutions that address two major MDC challenges. First, due to mobility, offloadee devices leaving an MDC would seriously compromise performance. 
Therefore, we propose several social-based offloadee selection algorithms that exploit contact history between devices, as well as friendship relationships or common interests between device owners or users. Second, we provide solutions for balancing power consumption by distributing computational load across MDC members to elongate an MDC's life time. This need occurs when users need to maximize the lifetime of an ensemble of devices that belong to the same user or household. We evaluate the algorithms we propose for addressing these two challenges using the real datasets that contain contact mobility traces and social information for conference attendees over the span of three days. Our results show the impact of choosing the suitable offloadee subset, the gain from leveraging social information, and how MDCs can live longer by balancing power consumption across their members. <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> To reduce the energy consumption in mobile devices, intricate applications are divided into several interconnected partitions like Task Interaction Graph (TIG) and are offloaded to cloud resources or nearby surrogates. Dynamic Voltage and Frequency Scaling (DVFS) is an effective technique to reduce the power consumption during mapping and scheduling stages. Most of the existing research works proposed several task scheduling solutions by considering the voltage/frequency scaling at the scheduling stage alone. But, the efficacy of these solutions can be improved by applying the DVFS in both mapping as well as scheduling stages. This research work attempts to apply DVFS in mapping as well as scheduling stages by combining both the task-resource and resource-frequency assignments in a single problem. The idea is to estimate the worst-case global slack time for each task-resource assignment, distributes it over the TIG and slowing down the execution of tasks using dynamic voltage and frequency scaling. This optimal slowdown increases the computation time of TIG without exceeding its worst-case completion time. Further, the proposed work models the code offloading as a Quadratic Assignment Problem (QAP) in Matlab-R2012b and solves it using two-level Genetic Algorithm (GA) of the global optimization toolbox. The effectiveness of the proposed model is assessed by a simulation and the results conclude that there is an average energy savings of 35% in a mobile device. <s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> Offloading is one major type of collaborations between mobile devices and clouds to achieve less execution time and less energy consumption. Offloading decisions for mobile cloud collaboration involve many decision factors. One of important decision factors is the network unavailability that has not been well studied. This paper presents an offloading decision model that takes network unavailability into consideration. Network with some unavailability can be modeled as an alternating renewal process. Then, application execution time and energy consumption in both ideal network and network with some unavailability are analyzed.
Based on the presented theoretical model, an application partition algorithm and a decision module are presented to produce an offloading decision that is resistant to network unavailability. Simulation results demonstrate good performance of proposed scheme, where the proposed partition algorithm is analyzed in different application and cloud scenarios. <s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> Data-intensive applications that involve large amounts of data generation, processing and transmission, have been operated with little attention to energy efficiency. Issues such as management, movement and storage of huge volumes of data may lead to high energy consumption. Replication is a useful solution to decrease data access time and improve performance in these applications, but it may also lead to increase the energy spent in storage and data transmission, by spreading large volumes of data replicas around the network. Thus, utilizing effective strategies for energy saving in these applications is a very critical issue from both the environmental and economical aspects. In this paper, at first we review the current data replication and caching approaches and energy saving methods in the context of data replication. Then, we propose a model for energy consumption during data replication and, finally, we evaluate two schemes for data fetching based on the two critical metrics in Grid environments: energy consumption and data access time. We also compare the gains based on these metrics with the no-caching scenario by using simulation. <s> BIB010 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> Online social networks (OSNs) with their huge number of active users consume significant amount energy both in the data centers and in the transport network. Existing studies focus mainly on the energy consumption in the data centers and do not take into account the energy consumption during the transport of data between end-users and data centers. To indicate the amount of the neglected energy, this paper provides a comprehensive framework and a set of measurements for understanding the energy consumption of cloud applications such as photo sharing in social networks. A new energy model is developed to estimate the energy consumption of cloud applications and applied to sharing photos on Facebook, as an example. Our results indicate that the energy consumption involved in the network and end-user devices for photo sharing is approximately equal to 60% of the energy consumption of all Facebook data enters. Therefore, achieving an energy-efficient cloud service requires energy efficiency improvement in the transport network and end-user devices along with the related data centers. <s> BIB011 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> Mobile Cloud Computing (MCC) is emerging as a main ubiquitous computing platform which enables to leverage the resource limitations of mobile devices and wireless networks by offloading data-intensive computation tasks from resource-poor mobile devices to resource-rich clouds. 
In this paper, we consider an online location-aware offloading problem in a two-tiered mobile cloud computing environment consisting of a local cloudlet and remote clouds, with an objective to fair share the use of the cloudlet by consuming the same proportion of their mobile device energy, while keeping their individual SLA, for which we devise an efficient online algorithm. We also conduct experiments by simulations to evaluate the performance of the proposed algorithm. Experimental results demonstrate that the proposed algorithm is promising and outperforms other heuristics. <s> BIB012 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> Many people use smart phones on a daily basis, yet, their energy consumption is pretty high and the battery power lasts typically only for a single day. In the scope of the EnAct project, we investigate potential energy savings on smart phones by offloading computationally expensive tasks into the cloud. Obviously, also the wireless communication for uploading tasks requires energy. For that reason, it is crucial to understand the trade-off between energy consumption for wireless communication and local computation in order to assert that the overall power consumption is decreased. In this paper, we investigate the communications part of that trade-off. We conducted an extensive set of measurement experiments using typical smart phones. This is the first step towards the development of accurate energy models allowing to predict the energy required for offloading a given task. Our measurements include WiFi, 2G, and 3G networks as well as a set of two different devices. According to our findings, WiFi consumes by far the least energy per time unit, yet, this advantage seems to be due to its higher throughput and the implied shorter download time and not due to lower power consumption over time. <s> BIB013 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> With prosperity of applications on smartphones, energy saving for smartphones has drawn increasing attention. In this paper we devise Phone2Cloud, a computation offloading-based system for energy saving on smartphones in the context of mobile cloud computing. Phone2Cloud offloads computation of an application running on smartphones to the cloud. The objective is to improve energy efficiency of smartphones and at the same time, enhance the application's performance through reducing its execution time. In this way, the user's experience can be improved. We implement the prototype of Phone2Cloud on Android and Hadoop environment. Two sets of experiments, including application experiments and scenario experiments, are conducted to evaluate the system. The experimental results show that Phone2Cloud can effectively save energy for smartphones and reduce the application's execution time. <s> BIB014 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> The Smart cities applications are gaining an increasing interest among administrations, citizens and technologists for their suitability in managing the everyday life.
One of the major challenges is regarding the possibility of managing in an efficient way the presence of multiple applications in a Wireless Heterogeneous Network (HetNet) environment, alongside the presence of a Mobile Cloud Computing (MCC) infrastructure. In this context we propose a utility function model derived from the economic world aiming to measure the Quality of Service (QoS), in order to choose the best access point in a HetNet to offload part of an application on the MCC, aiming to save energy for the Smart Mobile Devices (SMDs) and to reduce computational time. We distinguish three different types of application, considering different offloading percentage of computation and analyzing how the cell association algorithm allows energy saving and shortens computation time. The results show that when the network is overloaded, the proposed utility function allows to respect the target values by achieving higher throughput values, and reducing the energy consumption and the computational time. <s> BIB015 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> Mobile Cloud Computing (MCC) bridges the gap between limited capabilities of mobile devices and the increasing users' demand of mobile multimedia applications, by offloading the computational workloads from local devices to the remote cloud. Current MCC research focuses on making offloading decisions over different methods of a MCC application, but may inappropriately increase the energy consumption if having transmitted a large amount of program states over expensive wireless channels. Limited research has been done on avoiding such energy waste by exploiting the dynamic patterns of applications' run-time execution for workload offloading. In this paper, we adaptively offload the local computational workload with respect to the run-time application dynamics. Our basic idea is to formulate the dynamic executions of user applications using a semi-Markov model, and to further make offloading decisions based on probabilistic estimations of the offloading operation's energy saving. Such estimation is motivated by experimental investigations over practical smart phone applications, and then builds on analytical modeling of methods' execution times and offloading expenses. Systematic evaluations show that our scheme significantly improves the efficiency of workload offloading compared to existing schemes over various smart phone applications. <s> BIB016 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Environmental Factors <s> Abstract Mobile applications are becoming computationally intensive nowadays due to the increasing convenience, reliance on, and sophistication of smartphones. Nevertheless, battery lifetime remains a major obstacle that prohibits the large-scale adoption of such apps. Mobile cloud computing is a promising solution whereby apps are partially processed in the cloud to minimize the overall energy consumption of smartphones. However, this will not necessarily save energy if there is no systematic mechanism to evaluate the effect of offloading an app onto the cloud. In this paper, we present a mathematical model that represents this energy consumption optimization problem. We propose an algorithm to dynamically solve the problem while taking security measures into account. 
Communication Environmental Factors
1) Access Point Technology: Nowadays, diverse network technologies are available for accessing Cloud services in different situations, ranging from traditional Ethernet to modern cellular telecommunication. The energy consumption influenced by these technologies is mainly discussed with regard to client devices BIB011 BIB006. Among the popular access technologies, WiFi and Ethernet generally consume less energy than cellular wireless networks BIB001 BIB002 BIB004 BIB017 BIB018 BIB012; although it provides a lower data rate, Bluetooth can be 80% to 120% more energy efficient than WiFi BIB007; as for cellular networks, LTE (4G) consumes more power than UMTS (3G), followed by EDGE (2G) BIB019 BIB013.
2) Network Bandwidth: Since it indicates the maximum channel capacity, network bandwidth is considered to have a positive impact on reducing both the transmission delay and the energy consumption of Cloud applications BIB020 BIB021. Consequently, bandwidth has become a critical concern for computation offloading in the context of mobile Cloud computing BIB022: offloading is not preferred until the connection has sufficient bandwidth, and the benefit of offloading grows as the network bandwidth increases BIB014 (a minimal sketch of this trade-off appears after this list). In particular, besides the TCP stream bandwidth between different computing resources BIB008 BIB005, researchers are also concerned with the bandwidth of network equipment (e.g., access points BIB015 BIB012 and base stations BIB023).
3) Network Condition: Given the same communication coefficients, better channel quality improves Cloud applications' energy performance BIB026, while poor network conditions worsen both response time and energy efficiency BIB024 BIB025. The network condition can be reflected by the signal strength or the signal-to-noise ratio BIB015. When the signal strength is low, the relevant network devices have to increase their power levels for data transmission BIB019 and correspondingly end up with a higher communication cost BIB017. Furthermore, weak signals lead to a high chance of network unavailability BIB009. In the worst case, significant energy is consumed for frequently re-establishing broken connections rather than for actual data transmission BIB016.
4) Network Equipment Type: Recall that the Internet topology involves various network equipment, and different types of equipment have different power profiles. Thus, the network equipment types are specified particularly when analyzing the communication energy consumption of Cloud applications BIB011 BIB020. For example, the energy for delivering one bit of data through the Internet is associated with the power consumed in multiple gateways, switches, routers, and high-capacity wavelength-division multiplexed fiber links located in different network segments BIB001 BIB003 BIB018.
5) Number of Network Equipment: As mentioned above, a communication line can comprise multiple groups of identical network equipment, and in practice the data traversal hops through different types of equipment in different amounts BIB001 BIB003. In particular, the number of routers (and their power profiles) has been emphasized for the energy expenditure along a data transmission path BIB010.
6) Traffic Load: Although a network equipment's power profile is predefined by its manufacturer, its practical power consumption varies with the equipment's traffic load BIB010. Meanwhile, the traffic load ratio also indicates the resource utilization level of network devices BIB020. Similar to CPU utilization, a higher traffic load increases the communication energy consumption of Cloud applications.
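The sketch below, a minimal illustration rather than a model from any of the surveyed papers, shows how the factors above (bandwidth, signal strength, and computing speedup) feed into the classic offloading trade-off. All method names, parameters, and constants are hypothetical assumptions chosen only to make the comparison concrete.

```java
/**
 * Minimal sketch (not from the surveyed papers) of the offloading
 * trade-off: execute locally vs. transmit the input and wait for the
 * cloud. All constants below are illustrative assumptions.
 */
public class OffloadingTradeoff {

    /** Energy (J) to execute the task locally: power * time. */
    static double localEnergy(double cpuPowerW, double cycles, double localSpeedHz) {
        return cpuPowerW * (cycles / localSpeedHz);
    }

    /** Energy (J) to offload: transmit the input data, then idle while the cloud computes. */
    static double offloadEnergy(double txPowerW, double dataBits, double bandwidthBps,
                                double idlePowerW, double cycles, double cloudSpeedHz) {
        double transferTime = dataBits / bandwidthBps;  // shrinks as bandwidth grows
        double cloudTime = cycles / cloudSpeedHz;       // shrinks with cloud speedup
        return txPowerW * transferTime + idlePowerW * cloudTime;
    }

    public static void main(String[] args) {
        double cycles = 5e9, dataBits = 8e6;            // hypothetical task profile
        double local = localEnergy(2.0, cycles, 1e9);
        // Weak signal: higher transmit power AND lower usable bandwidth.
        double weakSignal = offloadEnergy(2.5, dataBits, 1e6, 0.3, cycles, 10e9);
        double strongSignal = offloadEnergy(1.0, dataBits, 20e6, 0.3, cycles, 10e9);
        System.out.printf("local=%.2fJ, offload(weak)=%.2fJ, offload(strong)=%.2fJ%n",
                local, weakSignal, strongSignal);
    }
}
```

Under these assumed numbers, offloading costs roughly 0.55 J over the strong link but over 20 J over the weak one, versus 10 J locally, which mirrors the qualitative finding above that the same offloading decision can either save or waste energy depending on bandwidth and signal strength.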
Computation Environmental Factors
1) Clock Frequency (and Supply Voltage): A CPU's power consumption is dominantly influenced by its supply voltage BIB023. Since the supply voltage is approximately linearly proportional to the operating clock frequency BIB024, and only the frequency can be altered without making physical adjustments BIB010, most researchers have focused on the clock frequency as a factor BIB011 BIB004 BIB015 BIB002 BIB012 BIB018 BIB016. Intuitively, scheduling a low clock frequency scales down the supply voltage, which eventually brings power savings for the CPU BIB003 (illustrated by the sketch after this list). With relaxed application deadlines, frequency (or voltage) downscaling has become a preferable approach to energy saving BIB017 BIB019 BIB031, especially for non-CPU-intensive workloads BIB005 BIB020 BIB025 BIB006 BIB007 BIB008. In particular, fine-grained frequency levels appear to be more energy friendly for Cloud applications BIB001 BIB013.
2) Computing Speed: The capacity of a Cloud computational resource can be measured by its computing speed in millions of instructions per second (MIPS) BIB014. In general, maintaining a high processing speed consumes more energy BIB031. In mobile Cloud computing, the speeds of client devices and Cloud servers are usually discussed together in order to calculate the computing speedup (i.e., the Cloud-to-client computing speed ratio) BIB021 BIB026. A bigger speedup indicates a better offloading opportunity and can lead to higher application performance and lower energy consumption BIB027 BIB028.
3) CPU Utilization: The studies BIB009 BIB029 considered the power consumption of a server to be an exponential function of its CPU utilization, where high CPU utilization reflects a large underlying workload. Accordingly, higher utilization results in more energy consumption within the same time window BIB008.
4) Number of CPU Cores: The power consumption of a Cloud computational resource depends on the number of its active CPU cores BIB030, with a proportional linear relationship BIB022. Once the physical cores are saturated, adding more workload will not further increase the resource's power usage BIB008. On the other hand, employing more CPU cores to handle an increasing workload consumes significantly more energy due to the increased CPU power and parallelization overhead BIB007. Thus, allocating more resources than necessary inevitably wastes energy BIB022. Note that utilizing more computational resources to improve a Cloud application's processing concurrency is not a concern here. The combined impact of multiple factors on energy consumption is discussed in Section 4.
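As a rough illustration of why frequency downscaling saves energy for deadline-constrained work, the sketch below uses the widely cited dynamic-power relation P = C·V²·f together with the (approximately) linear voltage-frequency mapping mentioned above. The effective capacitance, voltage range, and workload size are assumptions for illustration, not values taken from the surveyed models, and static power is deliberately ignored.

```java
/**
 * Back-of-the-envelope DVFS sketch. The dynamic-power relation
 * P = cEff * V^2 * f and the linear V(f) mapping follow the text above;
 * C_EFF, the voltage range, and the workload size are assumed values.
 */
public class DvfsSketch {

    static final double C_EFF = 1e-9;              // effective switched capacitance (assumed)
    static final double F_MAX = 2.0e9;             // 2 GHz maximum clock frequency
    static final double V_MIN = 0.8, V_MAX = 1.2;  // supply voltage range in volts (assumed)

    /** Supply voltage, modeled as approximately linear in the clock frequency. */
    static double voltage(double f) {
        return V_MIN + (V_MAX - V_MIN) * (f / F_MAX);
    }

    /** Energy (J) for 'cycles' CPU cycles at frequency f: P*t = (C*V^2*f)*(cycles/f). */
    static double energy(double cycles, double f) {
        double v = voltage(f);
        return C_EFF * v * v * cycles;             // f cancels; only V^2 remains per cycle
    }

    public static void main(String[] args) {
        double cycles = 4e9;                       // fixed amount of work
        for (double f : new double[]{0.5e9, 1.0e9, 2.0e9}) {
            double t = cycles / f;
            System.out.printf("f=%.1f GHz: time=%.1fs, energy=%.2fJ%n",
                    f / 1e9, t, energy(cycles, f));
        }
    }
}
```

The output shows the trade-off the text describes: halving the frequency lowers the per-cycle (dynamic) energy through the reduced V², but doubles the execution time, which is why downscaling is attractive mainly when application deadlines are relaxed.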
Memory Environmental Factors
Buffer Size: As a generally predefined factor, the memory buffer size has to be decided by developers before the Cloud application is deployed. Experiments have shown that the choice of buffer size significantly influences the energy costs of not only data I/O methods but also data compression/decompression BIB001 BIB003 BIB004 . For file operations, a buffer size between 64 KB and 256 KB appears to be the most energy-efficient setting BIB004 (see the sketch below). 2) Operating Frequency: The memory operating frequency has been identified as one of the fundamental contributors to memory power consumption BIB002 . Similar to the CPU clock frequency, a higher memory operating frequency also results in higher power consumption.
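To make the buffer-size factor concrete, the following minimal Java sketch reads a file through a tunable buffer so that different sizes can be profiled before deployment, in the spirit of the API measurements in BIB004 . The swept sizes and the use of BufferedInputStream are illustrative assumptions, not the instrumented setup of the cited study.

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Minimal sketch: reading a file through a tunable buffer, so that the
// buffer size can be swept and profiled before deployment. The swept
// sizes are illustrative; BIB004 reports 64-256 KB as most efficient.
public class BufferedRead {

    static long read(String path, int bufferSize) throws IOException {
        long total = 0;
        try (BufferedInputStream in =
                new BufferedInputStream(new FileInputStream(path), bufferSize)) {
            byte[] chunk = new byte[bufferSize];
            int n;
            while ((n = in.read(chunk)) != -1) {
                total += n; // process the chunk here
            }
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        for (int size : new int[] {4 * 1024, 64 * 1024, 256 * 1024}) {
            long start = System.nanoTime();
            long bytes = read(args[0], size);
            System.out.printf("buffer=%dKB bytes=%d time=%.1fms%n",
                    size / 1024, bytes, (System.nanoTime() - start) / 1e6);
        }
    }
}
```

Pairing such a sweep with a power meter or software energy counters is how the surveyed measurements relate buffer size to energy cost.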
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> A cloud can be defined as a pool of computer resources that can host a variety of different workloads, ranging from long-running scientific jobs (e.g., modeling and simulation) to transactional work (e.g., web applications). A cloud computing platform dynamically provisions, configures, reconfigures, and de-provisions servers as needed. Servers in the cloud can be physical machines or virtual machines. Cloud-hosting facilities, including many large businesses that run clouds in-house, became more common as businesses tend to out-source their computing needs more and more. For large-scale clouds power consumption is a major cost factor. Modern computing devices have the ability to run at various frequencies each one with a different power consumption level. Hence, the possibility exists to choose frequencies at which applications run to optimize total power consumption while staying within the constraints of the Service Level Agreements (SLA) that govern the applications. In this paper, we analyze the mathematical relationship of these SLAs and the number of servers that should be used and at what frequencies they should be running. We discuss a proactive provisioning model that includes hardware failures, devices available for services, and devices available for change management, all as a function of time and within constraints of SLAs. We provide scenarios that illustrate the mathematical relationships for a sample cloud and that provides a range of possible power consumption savings for different environments. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> Energy efficiency is a major concern in modern high-performance computing system design. In the past few years, there has been mounting evidence that power usage limits system scale and computing density, and thus, ultimately system performance. However, despite the impact of power and energy on the computer systems community, few studies provide insight to where and how power is consumed on high-performance systems and applications. In previous work, we designed a framework called PowerPack that was the first tool to isolate the power consumption of devices including disks, memory, NICs, and processors in a high-performance cluster and correlate these measurements to application functions. In this work, we extend our framework to support systems with multicore, multiprocessor-based nodes, and then provide in-depth analyses of the energy consumption of parallel applications on clusters of these systems. These analyses include the impacts of chip multiprocessing on power and energy efficiency, and its interaction with application executions. In addition, we use PowerPack to study the power dynamics and energy efficiencies of dynamic voltage and frequency scaling (DVFS) techniques on clusters. Our experiments reveal conclusively how intelligent DVFS scheduling can enhance system energy efficiency while maintaining performance. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> It is now critical to reduce the consumption of natural resources, especially petroleum. 
Even in information systems, we have to reduce the total electrical power consumption. We classify network applications to two types of applications, transaction and communication based ones. In this paper, we consider communication based applications like the file transfer protocol (FTP). A computer named server consumes the electric power to transfer a file to a client depending on the transmission rate. We discuss a model for power consumption of a data transfer application which depends on the total transmission rate and number of clients to which the server concurrently transmits files. A client has to find a server in a set of servers, each of which holds a file so that the power consumption of the server is reduced. We discuss a pair of algorithms PCB (power consumption-based) and TRB (transmission rate-based) to find a server which transmits a file to a client. In the evaluation, we show the total power consumption can be reduced by the algorithms compared with the traditional round-robin algorithm. <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> Power and energy are primary concerns in the design and management of modern cloud computing systems and data centers. Operational costs for powering and cooling large-scale cloud systems will soon exceed acquisition costs. To improve the energy effciency of cloud computing systems and applications, it is critical to profile the power usage of real systems and applications. Many factors influence power and energy usage in cloud systems, including each components electrical specification, the system usage characteristics of the applications, and system software. In this work, we present the power profiling results on a cloud test bed. We combine hardware and software that achieves power and energy profiling at server granularity. We collect the power and energy usage data with varying server/cloud configurations, and quantify their correlation. Our experiments reveal conclusively how different system configurations affect the server/cloud power and energy usage. <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> The cloud computing paradigm enables the work anywhere anytime paradigm by allowing application execution and data storage on remote servers. This is especially useful for mobile computing and communication devices that are constrained in terms of computation power and storage. It is however not clear how preferable cloud-based applications would be for mobile device users. For users of such battery life constrained devices, the most important criteria might be the energy consumed by the applications they run. The goal of this work is to characterize under what scenarios cloud-based applications would be relatively more energy-efficient for users of mobile devices. This work first empirically studies the energy consumption for various types of applications and for multiple classes of devices to make this determination. Subsequently, it presents an analytical model that helps characterize energy consumption of mobile devices under both the cloud and non-cloud application scenarios. Finally, an algorithm GreenSpot is presented that considers application features and energy-performance tradeoffs to determine whether cloud or local execution will be more preferable. 
<s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> Cloud computing delivers computing as a utility to users worldwide. A consequence of this model is that cloud data centres have high deployment and operational costs, as well as significant carbon footprints for the environment. We need to develop Green Cloud Computing (GCC) solutions that reduce these deployment and operational costs and thus save energy and reduce adverse environmental impacts. In order to achieve this objective, a thorough understanding of the energy consumption patterns in complex Cloud environments is needed. We present a new energy consumption model and associated analysis tool for Cloud computing environments. We measure energy consumption in Cloud environments based on different runtime tasks. Empirical analysis of the correlation of energy consumption and Cloud data and computational tasks, as well as system performance, will be investigated based on our energy consumption model and analysis tool. Our research results can be integrated into Cloud systems to monitor energy consumption and support static or dynamic system-level optimisation. <s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> The popularity of smartphones is growing every day. Thanks to the more powerful hardware the applications can run more tasks and use broadband network connection, however there are several known issues. For example, under typical usage (messaging, browsing, and gaming) a smartphone can be discharged in one day. This makes the battery life one of the biggest problems of the mobile devices. That is a good motivation to find energy-efficient solutions. One of the possible methods is the “computation offloading” mechanism, which means that some of the tasks are uploaded to the cloud. In this paper we are going to present a new energy-efficient job scheduling model and a measurement infrastructure which is used to analyze the energy consumption of smartphones. Our results are going to be demonstrated through some scenarios where the goal is to save energy. The offloading task is based on LP and scheduling problems. <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> General-purpose computing domain has experienced strategy transfer from scale-up to scale-out in the past decade. In this paper, we take a step further to analyze ARM-processor based cluster against Intel X86 workstation, from both energy-efficiency and cost-efficiency perspectives. Three applications are selected and evaluated to represent diversified applications, including Web server throughput, in-memory database, and video transcoding. Through detailed measurements, we make the observations that the energy-efficiency ratio of the ARM cluster against the Intel workstation varies from 2.6-9.5 in in-memory database, to approximately 1.3 in Web server application, and 1.21 in video transcoding. We also find out that for the Intel processor that adopts dynamic voltage and frequency scaling (DVFS) techniques, the power consumption is not linear with the CPU utilization level. The maximum energy saving achievable from DVFS is 20%. 
Finally, by utilizing a monthly cost model of data centers, we conclude that ARM cluster based data centers are feasible, and are advantageous in computationally lightweight applications, e.g. in-memory database and network-bounded Web applications. The cost advantage of ARM cluster diminishes progressively for computation-intensive applications, i.e. dynamic Web server application and video transcoding, because the number of ARM processors needed to provide comparable performance increases. <s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> Data-intensive applications that involve large amounts of data generation, processing and transmission, have been operated with little attention to energy efficiency. Issues such as management, movement and storage of huge volumes of data may lead to high energy consumption. Replication is a useful solution to decrease data access time and improve performance in these applications, but it may also lead to increase the energy spent in storage and data transmission, by spreading large volumes of data replicas around the network. Thus, utilizing effective strategies for energy saving in these applications is a very critical issue from both the environmental and economical aspects. In this paper, at first we review the current data replication and caching approaches and energy saving methods in the context of data replication. Then, we propose a model for energy consumption during data replication and, finally, we evaluate two schemes for data fetching based on the two critical metrics in Grid environments: energy consumption and data access time. We also compare the gains based on these metrics with the no-caching scenario by using simulation. <s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> Many emerging mobile applications nowadays tend to be computation-intensive due to the increasing popularity and convenience of smartphones. Nevertheless, a major obstacle prohibits the direct adoption of such applications and that is battery lifetime. Mobile Cloud Computing (MCC) is a promising solution that suggests the partial processing of applications on the cloud to minimize the overall power consumption at the mobile device. However, this does not necessarily save energy if there is no systematic mechanism for evaluating the effect of offloading the application into the cloud. In this paper, we study the factors affecting the power consumption due to offloading, develop a decision model, and verify its correctness by real implementation on an Android device. The results show that the proposed partitioning scheme successfully results in energy savings at the mobile handset and surpasses the energy efficiency of both fully local and fully remote execution. <s> BIB010 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> Reducing energy consumption without scarifying service quality is important for cloud computing. Efficient scheduling algorithms HEFT-D and HEFT-DS based on frequency-scaling and state-switching techniques are proposed. Our scheduling algorithms use the fact that the hosts employing a lower frequency or entering a sleeping state may consume less energy without leading to a longer makespan. 
Experimental results have shown that our algorithms maintain the performance as good as that of HEFT while the energy consumption is reduced. <s> BIB011 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> Cloud computing delivers IT solutions as a utility to users. One consequence of this model is that large cloud data centres consume large amounts of energy and produce significant carbon footprints. A common objective of cloud providers is to develop resource provisioning and management solutions that minimise energy consumption while guaranteeing Service Level Agreements (SLAs). In order to achieve this objective, a thorough understanding of energy consumption patterns in complex cloud systems is imperative. We have developed an energy consumption model for cloud computing systems. To operationalise this model, we have conducted extensive experiments to profile the energy consumption in cloud computing systems based on three types of tasks: computation-intensive, data-intensive and communication-intensive tasks. We collected fine-grained energy consumption and performance data with varying system configurations and workloads. Our experimental results show the correlation coefficients of energy consumption, system configuration and workload, as well as system performance in cloud systems. These results can be used for designing energy consumption monitors, and static or dynamic system-level energy consumption optimisation strategies for green cloud computing systems. <s> BIB012 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> Many people use smart phones on a daily basis, yet, their energy consumption is pretty high and the battery power lasts typically only for a single day. In the scope of the EnAct project, we investigate potential energy savings on smart phones by offloading computationally expensive tasks into the cloud. Obviously, also the wireless communication for uploading tasks requires energy. For that reason, it is crucial to understand the trade-off between energy consumption for wireless communication and local computation in order to assert that the overall power consumption is decreased. In this paper, we investigate the communications part of that trade-off. We conducted an extensive set of measurement experiments using typical smart phones. This is the first step towards the development of accurate energy models allowing to predict the energy required for offloading a given task. Our measurements include WiFi, 2G, and 3G networks as well as a set of two different devices. According to our findings, WiFi consumes by far the least energy per time unit, yet, this advantage seems to be due to its higher throughput and the implied shorter download time and not due to lower power consumption over time. <s> BIB013 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> Online social networks (OSNs) with their huge number of active users consume significant amount energy both in the data centers and in the transport network. Existing studies focus mainly on the energy consumption in the data centers and do not take into account the energy consumption during the transport of data between end-users and data centers. 
To indicate the amount of the neglected energy, this paper provides a comprehensive framework and a set of measurements for understanding the energy consumption of cloud applications such as photo sharing in social networks. A new energy model is developed to estimate the energy consumption of cloud applications and applied to sharing photos on Facebook, as an example. Our results indicate that the energy consumption involved in the network and end-user devices for photo sharing is approximately equal to 60% of the energy consumption of all Facebook data enters. Therefore, achieving an energy-efficient cloud service requires energy efficiency improvement in the transport network and end-user devices along with the related data centers. <s> BIB014 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> This paper presents a quantitative study on the energy-traffic tradeoff problem from the perspective of entire Wireless Local Area Network (WLAN). We propose a novel Energy-Efficient Cooperative Offloading Model (E2COM) for energy-traffic tradeoff, which can ensure the fairness of energy consumption of mobile devices and reduce the computation repetition and eliminate the Internet data traffic redundancy through cooperative execution and sharing computation results. We design an Online Task Scheduling Algorithm (OTS) based on a pricing mechanism and Lyapunov optimization to address the problem without predicting future information on task arrivals, transmission rates and so on. OTS can achieve a desirable trade- off between the energy consumption and Internet data traffic by appropriately setting the tradeoff coefficient. Simulation results demonstrate that E2COM is more efficient than no offloading and cloud offloading for a variety of typical mobile devices, applications and link qualities in WLAN. <s> BIB015 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> There is an increasing interest for cloud services to be provided in a more energy efficient way. The growing deployment of large-scale, complex workflow applications onto cloud computing hosts is being faced with crucial challenges in reducing the power consumption without violating the service level agreement (SLA). In this paper, we consider cloud hosts which can operate in different power states with different capacities respectively, and propose a novel scheduling heuristic for workflows to reduce energy consumption while still meeting deadline constraint. The proposed heuristic is evaluated using simulation with four different real-world applications. The observed results indicates that our heuristic does significantly outperform the existing approaches. <s> BIB016 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> Energy efficiency for data centers has been recently an active research field. Several efforts have been made at the infrastructure and application levels to achieve energy efficiency and reduction of CO2 emissions. In this paper we approach the problem of application deployment to evaluate its impact on the energy consumption of applications at runtime. 
We use queuing networks to model different deployment configurations and to perform quantitative analysis to predict application performance and energy consumption. The results are validated against experimental data to confirm the correctness of the models when used for predictions. Comparisons between different configurations in terms of performance and energy consumption are made to suggest the optimal configuration to deploy applications on cloud environments. <s> BIB017 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> The Smart cities applications are gaining an increasing interest among administrations, citizens and technologists for their suitability in managing the everyday life. One of the major challenges is regarding the possibility of managing in an efficient way the presence of multiple applications in a Wireless Heterogeneous Network (HetNet) environment, alongside the presence of a Mobile Cloud Computing (MCC) infrastructure. In this context we propose a utility function model derived from the economic world aiming to measure the Quality of Service (QoS), in order to choose the best access point in a HetNet to offload part of an application on the MCC, aiming to save energy for the Smart Mobile Devices (SMDs) and to reduce computational time. We distinguish three different types of application, considering different offloading percentage of computation and analyzing how the cell association algorithm allows energy saving and shortens computation time. The results show that when the network is overloaded, the proposed utility function allows to respect the target values by achieving higher throughput values, and reducing the energy consumption and the computational time. <s> BIB018 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> Measuring energy consumption is an essential step in the development of policies for the management of energy in every IT system. There is a wide range of methods using both hardware and software for measuring energy consumed by the system accurately. However, most of these methods measure energy consumed by a machine or a cluster of machines. In environments such as Cloud that an application can be built from components with comparable characteristics, measuring energy consumed by a single component can be extremely beneficial. For example, if we can measure energy consumed by different HTTP servers, then we can establish which one consumes less energy performing a given task. As a result, the Cloud provider can provide incentives, so that, application developers use the HTTP server that consume less energy. Indeed, considering size of the Cloud, even a small amount of saving per Virtual Machine can add up to a substantial saving. In this paper, we propose a technique to measure energy consumed by an application via measuring energy consumed by the individual processes of the application. We shall deal with applications that run in a virtualized environment such as Cloud. We present two implementations of our idea to demonstrate the feasibility of the approach. Firstly, a method of measurement with the help of Kernel-Based Virtual Machine running on a typical laptop is presented. Secondly, in a commercial Cloud such as Elastic host, we describe a method of measuring energy consumed by processes such as HTTP servers. 
This will allow commercial providers to identify which product consumes less energy on their platform. <s> BIB019 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Environmental Factors 1) <s> Interactive cloud computing and cloud-based applications are a rapidly growing sector of the expanding digital economy because they provide access to advanced computing and storage services via simple, compact personal devices. Recent studies have suggested that processing a task in the cloud is more energy-efficient than processing the same task locally. However, these studies have generally ignored the power consumption of the network and end-user devices when accessing the cloud. In this paper, we develop a power consumption model for interactive cloud applications that includes the power consumption of end-user devices and the influence of the applications on the power consumption of the various network elements along the path between the user and the cloud data centre. As examples, we apply our model to Google Drive and Microsoft Skydrive's word processing, presentation and spreadsheet interactive applications. We demonstrate via extensive packet-level traffic measurements that the volume of traffic generated by a session of the application vastly exceeds the amount of data keyed in by the user. This has important implications on the overall power consumption of the service. We show that using the cloud to perform certain tasks consumes more power (by a watt to 10 watts depending on the scenario) than performing the same tasks locally on a low-power consuming computer and a tablet. <s> BIB020
|
Disk Speed: Among all the specifications of a storage device, the disk speed is the one emphasized in the energy expenditure of an application's storage I/O operations BIB009 . The power characteristics of disk speed and the other specifications are essentially determined by storage device manufacturers. 2) Number of Data Sites: Spreading data across different sites is a common practice to improve data availability. Correspondingly, for a Cloud application, the more sites that need to be visited, the more energy and time are consumed on the additional data transmissions BIB009 . 3.3.5. Other Environmental Factors 1) Client Device Type: Although various user handsets do not show big differences in energy consumption when running mobile Cloud applications BIB013 , the client device type does matter when comparing desktops, laptops, and cell phones BIB014 BIB010 BIB015 . Given their different power profiles, replacing a personal computer with a low-power device would make the same Cloud application more energy-efficient in a generic sense BIB020 . When emphasizing the share of power consumed by device communication, however, larger client devices seem preferable for Cloud applications, because, e.g., the WiFi interface accounts for a bigger share of the power consumption in smartphones than in laptops BIB005 . 2) Number of Servers: In a Cloud host, provisioning more virtual machines may require more physical servers BIB004 , and activating more physical servers raises the required power level BIB011 . Meanwhile, the maintenance overhead incurred by provisioning more virtual machines eventually increases the energy consumption per task of an application BIB012 . Therefore, selecting a suitable number of servers requires jointly optimizing the overall power consumption and the total workload BIB001 (see the sketch after this subsection). Similar to the aforementioned number of CPU cores, allocating more servers than necessary wastes energy during the execution of a Cloud application, even when sophisticated energy-saving mechanisms are employed BIB016 . 3) Resource Competition: With the computing resources held constant, fierce resource competition can dramatically increase the corresponding energy consumption, regardless of which resource (component) is contended. For example, configuring more virtual machines within the same physical server increases CPU activity and incurs extra scheduling overhead BIB006 . Hosting multiple application instances in a single virtual machine consumes more energy than running the instances separately BIB017 . Among the resource components, intense competition for access point connections BIB018 , CPU processes BIB007 , memory footprints BIB002 , and disk I/O bandwidth BIB004 has in each case been shown to negatively impact Cloud applications' energy efficiency. 4) Server Type: The relevant studies have addressed physical server, virtual server, and Web server types for their influence on Cloud applications' energy consumption. The physical server type can be further defined by processor count or type (e.g., Intel vs. ARM-based processors) BIB003 BIB008 . Given a particular Cloud server pool, large heterogeneity in server types results in high variance in application execution time BIB011 . As for virtual servers, vertical scaling (adjusting the server type) has clear impacts on the energy consumption and performance of a Cloud application.
However, the specific influences of different virtual machine types are closely related to the application types (workload characteristics) BIB012 . For example, among different HTTP Web servers, Apache and Lighttp are more energy efficient for lightweight workloads, while Nginx consumes relatively less power at higher user arrival rates BIB019 .
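As an illustration of the number-of-servers trade-off, the sketch below applies the widely used linear server power model (idle power plus a utilization-proportional term) to a fixed workload spread over a varying number of servers. All wattage and load values are hypothetical, and the model is a generic approximation rather than the one used in any of the cited studies.

```java
// Minimal sketch, assuming the widely used linear server power model:
// P(u) = P_idle + (P_peak - P_idle) * u, with u the CPU utilization.
// All wattage and load values are hypothetical illustration numbers,
// not measurements from the surveyed studies.
public class ServerEnergyEstimate {

    static double powerWatts(double pIdle, double pPeak, double utilization) {
        return pIdle + (pPeak - pIdle) * utilization;
    }

    public static void main(String[] args) {
        double pIdle = 100.0, pPeak = 250.0; // hypothetical per-server watts
        double hours = 1.0;
        // A fixed total workload (one fully utilized server's worth) is
        // spread evenly over a varying number of servers.
        for (int servers = 1; servers <= 8; servers *= 2) {
            double utilization = 1.0 / servers;
            double energyWh = servers * powerWatts(pIdle, pPeak, utilization) * hours;
            System.out.printf("servers=%d util=%.2f energy=%.0f Wh%n",
                    servers, utilization, energyWh);
        }
    }
}
```

Because the idle power is paid once per active server, the estimated energy for the same workload grows from 250 Wh on one server to 950 Wh on eight, which matches the observation that over-provisioning servers wastes energy even under energy-saving mechanisms.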
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Workload Factors and their Influences on Energy Consumption of Cloud <s> A cloud can be defined as a pool of computer resources that can host a variety of different workloads, ranging from long-running scientific jobs (e.g., modeling and simulation) to transactional work (e.g., web applications). A cloud computing platform dynamically provisions, configures, reconfigures, and de-provisions servers as needed. Servers in the cloud can be physical machines or virtual machines. Cloud-hosting facilities, including many large businesses that run clouds in-house, became more common as businesses tend to out-source their computing needs more and more. For large-scale clouds power consumption is a major cost factor. Modern computing devices have the ability to run at various frequencies each one with a different power consumption level. Hence, the possibility exists to choose frequencies at which applications run to optimize total power consumption while staying within the constraints of the Service Level Agreements (SLA) that govern the applications. In this paper, we analyze the mathematical relationship of these SLAs and the number of servers that should be used and at what frequencies they should be running. We discuss a proactive provisioning model that includes hardware failures, devices available for services, and devices available for change management, all as a function of time and within constraints of SLAs. We provide scenarios that illustrate the mathematical relationships for a sample cloud and that provides a range of possible power consumption savings for different environments. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Workload Factors and their Influences on Energy Consumption of Cloud <s> Cloud computing delivers computing as a utility to users worldwide. A consequence of this model is that cloud data centres have high deployment and operational costs, as well as significant carbon footprints for the environment. We need to develop Green Cloud Computing (GCC) solutions that reduce these deployment and operational costs and thus save energy and reduce adverse environmental impacts. In order to achieve this objective, a thorough understanding of the energy consumption patterns in complex Cloud environments is needed. We present a new energy consumption model and associated analysis tool for Cloud computing environments. We measure energy consumption in Cloud environments based on different runtime tasks. Empirical analysis of the correlation of energy consumption and Cloud data and computational tasks, as well as system performance, will be investigated based on our energy consumption model and analysis tool. Our research results can be integrated into Cloud systems to monitor energy consumption and support static or dynamic system-level optimisation. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Workload Factors and their Influences on Energy Consumption of Cloud <s> The cloud computing paradigm enables the work anywhere anytime paradigm by allowing application execution and data storage on remote servers. This is especially useful for mobile computing and communication devices that are constrained in terms of computation power and storage. 
It is however not clear how preferable cloud-based applications would be for mobile device users. For users of such battery life constrained devices, the most important criteria might be the energy consumed by the applications they run. The goal of this work is to characterize under what scenarios cloud-based applications would be relatively more energy-efficient for users of mobile devices. This work first empirically studies the energy consumption for various types of applications and for multiple classes of devices to make this determination. Subsequently, it presents an analytical model that helps characterize energy consumption of mobile devices under both the cloud and non-cloud application scenarios. Finally, an algorithm GreenSpot is presented that considers application features and energy-performance tradeoffs to determine whether cloud or local execution will be more preferable. <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Workload Factors and their Influences on Energy Consumption of Cloud <s> In this paper, the challenge of scheduling a parallel application on a cloud environment to achieve both time and energy efficiency is addressed. Two energy-aware task scheduling algorithms called the EHEFT and the ECPOP are proposed to address the challenge. These algorithms have the objective of trying to sustain the makespan and energy consumption at the same time. The concept is to use a metric that identify the inefficient processors and shut them down to reduce energy consumption. Then, the task is rescheduled to use fewer processors to obtain more energy efficiency. The experimental results from the simulation show that our enhanced algorithms not only reduce the energy consumption, but also maintain a good quality of the scheduling. This will enable the efficient use of the cloud system as a large scalable computing platform. <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Workload Factors and their Influences on Energy Consumption of Cloud <s> Modern smartphones permit to run a large variety of applications, i.e. multimedia, games, social network applications, etc. However, this aspect considerably reduces the battery life of these devices. A possible solution to alleviate this problem is to offload part of the application or the whole computation to remote servers, i.e. Cloud Computing. The offloading cannot be performed without considering the issues derived from the nature of the application (i.e. multimedia, games, etc.), which can considerably change the resources necessary to the computation and the type, the frequency and the amount of data to be exchanged with the network. This work shows a framework for automatically building models for the offloading of mobile applications based on evolutionary algorithms and how it can be used to simulate different kinds of mobile applications and to analyze the rules generated. To this aim, a tool for generating mobile datasets, presenting different features, is designed and experiments are performed in different usage conditions in order to demonstrate the utility of the overall framework. 
<s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Workload Factors and their Influences on Energy Consumption of Cloud <s> This article investigates the problem of holistic energy consumption in cloud-assisted mobile computing. In particular, since the cloud, assisting a multi-core mobile device, can be considered as a special core with powerful computation capability, the optimization of holistic energy consumption is formulated as a task-core assignment and scheduling problem. Specifically, the energy consumption models for the mobile device, network, cloud, and, more importantly, task interaction are presented, respectively. Based on these energy consumption models, a holistic energy optimization framework is then proposed, where the thermal effect, application execution deadline, transmission power, transmission bandwidth, and adaptive modulation and coding rate are jointly considered. <s> BIB006
|
Applications (RQ4) Since the energy for running a Cloud application is tightly coupled with its workload BIB002 BIB006 , we identify energy-related factors by deconstructing Cloud application workloads. In Cloud environments, an application's workload can be described through one of three different aspects (namely Terminal, Activity, and Object) or a combination of them. Correspondingly, we further organize the workload factors into these three aspects, and use an entity-relationship diagram to illustrate the organization, as shown in Fig. 6 . In particular, we consider application type to be an inherent attribute of a Cloud application, and thus "application type" BIB001 BIB005 BIB003 BIB004 is not regarded as a factor in our survey. In other words, we claim that the type of a Cloud application is already reflected in its workload characteristics (e.g., the specific communication-computation ratio).
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Terminal-related Factors <s> It is now critical to reduce the consumption of natural resources, especially petroleum to resolve air pollutions. Even in information systems, we have to reduce the total electrical power consumption. A cloud computing system is composed of a huge number of server computers like google file systems. There are many discussions on how to reduce the total power consumption of servers, e. g. by turning off servers which are not required to execute requests from clients. A peer-to-peer (P2P) system is another type of information system which is composed of a huge number of peer computers where various types of applications are autonomously performed. In this paper, we consider a P2P system with data transfer application like the file transfer protocol (FTP). A computer consumes the electric power to transfer a file to another computer depending on the bandwidth. We discuss a model for power consumption of data transfer applications. A client peer has to find a server peer in a set of server peers which holds a file so that the power consumption of the server is reduced. We discuss algorithms to find a server peer which transfers file in a P2P overlay network. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Terminal-related Factors <s> It is now critical to reduce the consumption of natural resources, especially petroleum. Even in information systems, we have to reduce the total electrical power consumption. We classify network applications to two types of applications, transaction and communication based ones. In this paper, we consider communication based applications like the file transfer protocol (FTP). A computer named server consumes the electric power to transfer a file to a client depending on the transmission rate. We discuss a model for power consumption of a data transfer application which depends on the total transmission rate and number of clients to which the server concurrently transmits files. A client has to find a server in a set of servers, each of which holds a file so that the power consumption of the server is reduced. We discuss a pair of algorithms PCB (power consumption-based) and TRB (transmission rate-based) to find a server which transmits a file to a client. In the evaluation, we show the total power consumption can be reduced by the algorithms compared with the traditional round-robin algorithm. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Terminal-related Factors <s> Interactive cloud computing and cloud-based applications are a rapidly growing sector of the expanding digital economy because they provide access to advanced computing and storage services via simple, compact personal devices. Recent studies have suggested that processing a task in the cloud is more energy-efficient than processing the same task locally. However, these studies have generally ignored the power consumption of the network and end-user devices when accessing the cloud. 
In this paper, we develop a power consumption model for interactive cloud applications that includes the power consumption of end-user devices and the influence of the applications on the power consumption of the various network elements along the path between the user and the cloud data centre. As examples, we apply our model to Google Drive and Microsoft Skydrive's word processing, presentation and spreadsheet interactive applications. We demonstrate via extensive packet-level traffic measurements that the volume of traffic generated by a session of the application vastly exceeds the amount of data keyed in by the user. This has important implications on the overall power consumption of the service. We show that using the cloud to perform certain tasks consumes more power (by a watt to 10 watts depending on the scenario) than performing the same tasks locally on a low-power consuming computer and a tablet. <s> BIB003
|
The client-side terminals usually act as workload generators in interaction-intensive Cloud applications. 1) Number of Clients: As workload generators, the client-side terminals can be either end users BIB003 or machines BIB001 BIB002 , and the number of clients has been used to reflect the size of the generated workload. Naturally, the more clients an application serves, the more electric energy it consumes (a minimal power-model sketch is given below).
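The sketch below is a minimal illustration of this relationship, loosely following the transmission-rate-based power models of BIB001 BIB002 , in which server power grows with the aggregate rate at which files are transmitted to concurrently served clients. The idle power and per-Mbps coefficient are hypothetical values, not parameters fitted in the cited studies.

```java
// Minimal sketch, loosely following the transmission-rate-based power
// models of BIB001 and BIB002: server power grows with the aggregate
// rate at which files are transmitted to concurrently served clients.
// The idle power and per-Mbps coefficient are hypothetical values.
public class TransferPowerModel {

    static double powerWatts(int clients, double ratePerClientMbps) {
        double pMin = 80.0;        // hypothetical idle power of the server
        double wattsPerMbps = 0.5; // hypothetical marginal transmission cost
        return pMin + wattsPerMbps * clients * ratePerClientMbps;
    }

    public static void main(String[] args) {
        for (int clients : new int[] {1, 10, 50, 100}) {
            System.out.printf("clients=%d power=%.0f W%n",
                    clients, powerWatts(clients, 10.0));
        }
    }
}
```

Server-selection algorithms such as PCB and TRB BIB002 exploit exactly this kind of model: a client is directed to the server whose estimated power increase for the new transfer is smallest.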
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Power-aware scheduling problem has been a recent issue in cluster systems not only for operational cost due to electricity cost, but also for system reliability. As recent commodity processors support multiple operating points under various supply voltage levels, Dynamic Voltage Scaling (DVS) scheduling algorithms can reduce power consumption by controlling appropriate voltage levels. In this paper, we provide power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints on DVS-enabled cluster systems in order to minimize power consumption as well as to meet the deadlines specified by application users. A bag-of-tasks application should finish all the sub-tasks before the deadline, so that the DVS scheduling scheme should consider the deadline as well. We provide the DVS scheduling algorithms for both time-shared and space-shared resource sharing policies. The simulation results show that the proposed algorithms reduce much power consumption compared to static voltage schemes. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Energy efficiency and parallel I/O performance have become two critical measures in high performance computing (HPC). However, there is little empirical data that characterize the energy-performance behaviors of parallel I/O workload. In this paper, we present a methodology to profile the performance, energy, and energy efficiency of parallel I/O access patterns and report our findings on the impacting factors of parallel I/O energy efficiency. Our study shows that choosing the right buffer size can change the energy-performance efficiency by up to 30 times. High spatial and temporal spacing can also lead to significant improvement in energy-performance efficiency (about 2X). We observe CPU frequency has a more complex impact, depending on the IO operations, spatial and temporal, and memory buffer size. The presented methodology and findings are useful for evaluating the energy efficiency of I/O intensive applications and for providing a guideline to develop energy efficient parallel I/O technology. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> It is now critical to reduce the consumption of natural resources, especially petroleum. Even in information systems, we have to reduce the total electrical power consumption. We classify network applications to two types of applications, transaction and communication based ones. In this paper, we consider communication based applications like the file transfer protocol (FTP). A computer named server consumes the electric power to transfer a file to a client depending on the transmission rate. We discuss a model for power consumption of a data transfer application which depends on the total transmission rate and number of clients to which the server concurrently transmits files. A client has to find a server in a set of servers, each of which holds a file so that the power consumption of the server is reduced. We discuss a pair of algorithms PCB (power consumption-based) and TRB (transmission rate-based) to find a server which transmits a file to a client. 
In the evaluation, we show the total power consumption can be reduced by the algorithms compared with the traditional round-robin algorithm. <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> It is now critical to reduce the consumption of natural resources, especially petroleum to resolve air pollutions. Even in information systems, we have to reduce the total electrical power consumption. A cloud computing system is composed of a huge number of server computers like google file systems. There are many discussions on how to reduce the total power consumption of servers, e. g. by turning off servers which are not required to execute requests from clients. A peer-to-peer (P2P) system is another type of information system which is composed of a huge number of peer computers where various types of applications are autonomously performed. In this paper, we consider a P2P system with data transfer application like the file transfer protocol (FTP). A computer consumes the electric power to transfer a file to another computer depending on the bandwidth. We discuss a model for power consumption of data transfer applications. A client peer has to find a server peer in a set of server peers which holds a file so that the power consumption of the server is reduced. We discuss algorithms to find a server peer which transfers file in a P2P overlay network. <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Energy efficiency is a fundamental consideration for mobile devices. Cloud computing has the potential to save mobile client energy but the savings from offloading the computation need to exceed the energy cost of the additional communication. ::: ::: In this paper we provide an analysis of the critical factors affecting the energy consumption of mobile clients in cloud computing. Further, we present our measurements about the central characteristics of contemporary mobile handheld devices that define the basic balance between local and remote computing. We also describe a concrete example, which demonstrates energy savings. ::: ::: We show that the trade-offs are highly sensitive to the exact characteristics of the workload, data communication patterns and technologies used, and discuss the implications for the design and engineering of energy efficient mobile cloud computing solutions. <s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Energy efficiency is a major concern in modern high-performance computing system design. In the past few years, there has been mounting evidence that power usage limits system scale and computing density, and thus, ultimately system performance. However, despite the impact of power and energy on the computer systems community, few studies provide insight to where and how power is consumed on high-performance systems and applications. In previous work, we designed a framework called PowerPack that was the first tool to isolate the power consumption of devices including disks, memory, NICs, and processors in a high-performance cluster and correlate these measurements to application functions. 
In this work, we extend our framework to support systems with multicore, multiprocessor-based nodes, and then provide in-depth analyses of the energy consumption of parallel applications on clusters of these systems. These analyses include the impacts of chip multiprocessing on power and energy efficiency, and its interaction with application executions. In addition, we use PowerPack to study the power dynamics and energy efficiencies of dynamic voltage and frequency scaling (DVFS) techniques on clusters. Our experiments reveal conclusively how intelligent DVFS scheduling can enhance system energy efficiency while maintaining performance. <s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Cloud computing clusters distributed computers to provide applications as services and on-demand resources over Internet. From the perspective of average and total energy consumption, such consolidated resource enhances the energy efficiency on both clients and servers. However, cloud computing has a different power consumption pattern from the traditional storage oriented Internet services. The computation oriented implementation of cloud service broadens the gap between the peak power demand and base power demand of a data center. A higher peak demand implies the need of feeder capacity expansion, which requires a considerable investment. This study proposes a computation related approach to lessen the increasing power demand of cloud service data centers. Through appropriated designs, some frequently used computing algorithms can be performed by either clients or servers. As a model presented in this paper, such client-server balanced computation resource integration suggests an energy-efficient and cost-effective cloud service data center. <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Network-based cloud computing is rapidly expanding as an alternative to conventional office-based computing. As cloud computing becomes more widespread, the energy consumption of the network and computing resources that underpin the cloud will grow. This is happening at a time when there is increasing attention being paid to the need to manage energy consumption across the entire information and communications technology (ICT) sector. While data center energy use has received much attention recently, there has been less attention paid to the energy consumption of the transmission and switching networks that are key to connecting users to the cloud. In this paper, we present an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. We show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circum- stances cloud computing can consume more energy than conventional computing where each user performs all com- puting on their own personal computer (PC). 
<s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> MapReduce is a programming model for data intensive computing on large-scale distributed systems. With its wide acceptance and deployment, improving the energy efficiency of MapReduce will lead to significant energy savings for data centers and computational grids. In this paper, we study the performance and energy efficiency of the Hadoop implementation of MapReduce under the context of energy-proportional computing. We consider how MapReduce efficiency varies with two runtime configurations: resource allocation that changes the number of available concurrent workers, and DVFS (Dynamic Voltage and Frequency Scaling) that adjusts the processor frequency based on the workloads' computational needs. Our experimental results indicate significant energy savings can be achieved from judicious resource allocation and intelligent DVFS scheduling for computation intensive applications, though the level of improvements depends on both workload characteristic of the MapReduce application and the policy of resource and DVFS scheduling. <s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Power and energy are primary concerns in the design and management of modern cloud computing systems and data centers. Operational costs for powering and cooling large-scale cloud systems will soon exceed acquisition costs. To improve the energy effciency of cloud computing systems and applications, it is critical to profile the power usage of real systems and applications. Many factors influence power and energy usage in cloud systems, including each components electrical specification, the system usage characteristics of the applications, and system software. In this work, we present the power profiling results on a cloud test bed. We combine hardware and software that achieves power and energy profiling at server granularity. We collect the power and energy usage data with varying server/cloud configurations, and quantify their correlation. Our experiments reveal conclusively how different system configurations affect the server/cloud power and energy usage. <s> BIB010 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Cloud computing delivers computing as a utility to users worldwide. A consequence of this model is that cloud data centres have high deployment and operational costs, as well as significant carbon footprints for the environment. We need to develop Green Cloud Computing (GCC) solutions that reduce these deployment and operational costs and thus save energy and reduce adverse environmental impacts. In order to achieve this objective, a thorough understanding of the energy consumption patterns in complex Cloud environments is needed. We present a new energy consumption model and associated analysis tool for Cloud computing environments. We measure energy consumption in Cloud environments based on different runtime tasks. Empirical analysis of the correlation of energy consumption and Cloud data and computational tasks, as well as system performance, will be investigated based on our energy consumption model and analysis tool. 
Our research results can be integrated into Cloud systems to monitor energy consumption and support static or dynamic system-level optimisation. <s> BIB011 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> General-purpose computing domain has experienced strategy transfer from scale-up to scale-out in the past decade. In this paper, we take a step further to analyze ARM-processor based cluster against Intel X86 workstation, from both energy-efficiency and cost-efficiency perspectives. Three applications are selected and evaluated to represent diversified applications, including Web server throughput, in-memory database, and video transcoding. Through detailed measurements, we make the observations that the energy-efficiency ratio of the ARM cluster against the Intel workstation varies from 2.6-9.5 in in-memory database, to approximately 1.3 in Web server application, and 1.21 in video transcoding. We also find out that for the Intel processor that adopts dynamic voltage and frequency scaling (DVFS) techniques, the power consumption is not linear with the CPU utilization level. The maximum energy saving achievable from DVFS is 20%. Finally, by utilizing a monthly cost model of data centers, we conclude that ARM cluster based data centers are feasible, and are advantageous in computationally lightweight applications, e.g. in-memory database and network-bounded Web applications. The cost advantage of ARM cluster diminishes progressively for computation-intensive applications, i.e. dynamic Web server application and video transcoding, because the number of ARM processors needed to provide comparable performance increases. <s> BIB012 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Cloud computing delivers IT solutions as a utility to users. One consequence of this model is that large cloud data centres consume large amounts of energy and produce significant carbon footprints. A common objective of cloud providers is to develop resource provisioning and management solutions that minimise energy consumption while guaranteeing Service Level Agreements (SLAs). In order to achieve this objective, a thorough understanding of energy consumption patterns in complex cloud systems is imperative. We have developed an energy consumption model for cloud computing systems. To operationalise this model, we have conducted extensive experiments to profile the energy consumption in cloud computing systems based on three types of tasks: computation-intensive, data-intensive and communication-intensive tasks. We collected fine-grained energy consumption and performance data with varying system configurations and workloads. Our experimental results show the correlation coefficients of energy consumption, system configuration and workload, as well as system performance in cloud systems. These results can be used for designing energy consumption monitors, and static or dynamic system-level energy consumption optimisation strategies for green cloud computing systems. 
<s> BIB013 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Providing femto access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smartphones to the so-called femto-cloud. Such offloading promises to be beneficial in terms of battery savings at the mobile terminal (MT) and/or in latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. For this problem, we provide in this paper a framework for the joint optimization of the radio and computational resource usage exploiting the tradeoff between energy consumption and latency. Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, and precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total or no offloading is optimal, determines which is the minimum affordable latency in the execution of the application, and analyzes, as a particular case, the minimization of the total consumed energy without latency constraints. <s> BIB014 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> With the proliferation of virtualization and cloud computing, optimizing the power usage effectiveness of enterprise data centers has become a laudable goal and a critical requirement in IT operations all over the world. While a significant body of research exists to measure, monitor, and control the greenness level of hardware components, significant research efforts are needed to relate hardware energy consumption to energy consumption due to program execution. In this paper we report on our investigations to characterize power consumption profiles for different types of compute and memory intensive software applications. In particular, we focus on studying the effects of CPU loads on the power consumption of compute servers by monitoring rack power consumption in a data center. We conducted a series of experiments with a variety of processes of different complexity to understand and characterize the effect on power consumption. Combining processes of varying complexity with varying resource allocations produces different energy consumption levels. The challenge is to optimize process orchestration based on a power consumption framework to accrue energy savings. Our ultimate goal is to develop smart adaptive green computing techniques, such as adaptive job scheduling and resource provisioning, to reduce overall power consumption in data centers or clouds. <s> BIB015 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Measuring energy consumption is an essential step in the development of policies for the management of energy in every IT system. There is a wide range of methods using both hardware and software for measuring energy consumed by the system accurately. 
However, most of these methods measure energy consumed by a machine or a cluster of machines. In environments such as Cloud that an application can be built from components with comparable characteristics, measuring energy consumed by a single component can be extremely beneficial. For example, if we can measure energy consumed by different HTTP servers, then we can establish which one consumes less energy performing a given task. As a result, the Cloud provider can provide incentives, so that, application developers use the HTTP server that consume less energy. Indeed, considering size of the Cloud, even a small amount of saving per Virtual Machine can add up to a substantial saving. In this paper, we propose a technique to measure energy consumed by an application via measuring energy consumed by the individual processes of the application. We shall deal with applications that run in a virtualized environment such as Cloud. We present two implementations of our idea to demonstrate the feasibility of the approach. Firstly, a method of measurement with the help of Kernel-Based Virtual Machine running on a typical laptop is presented. Secondly, in a commercial Cloud such as Elastic host, we describe a method of measuring energy consumed by processes such as HTTP servers. This will allow commercial providers to identify which product consumes less energy on their platform. <s> BIB016 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Empowering application programmers to make energy-aware decisions is a critical dimension of energy optimization for computer systems. In this paper, we study the energy impact of alternative data management choices by programmers, such as data access patterns, data precision choices, and data organization. Second, we attempt to build a bridge between application-level energy management and hardware-level energy management, by elucidating how various application-level data management features respond to Dynamic Voltage and Frequency Scaling (DVFS). Finally, we apply our findings to real-world applications, demonstrating their potential for guiding application-level energy optimization. The empirical study is particularly relevant in the Big Data era, where data-intensive applications are large energy consumers, and their energy efficiency is strongly correlated to how data are maintained and handled in programs. <s> BIB017 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Interactive cloud computing and cloud-based applications are a rapidly growing sector of the expanding digital economy because they provide access to advanced computing and storage services via simple, compact personal devices. Recent studies have suggested that processing a task in the cloud is more energy-efficient than processing the same task locally. However, these studies have generally ignored the power consumption of the network and end-user devices when accessing the cloud. In this paper, we develop a power consumption model for interactive cloud applications that includes the power consumption of end-user devices and the influence of the applications on the power consumption of the various network elements along the path between the user and the cloud data centre. 
As examples, we apply our model to Google Drive and Microsoft Skydrive's word processing, presentation and spreadsheet interactive applications. We demonstrate via extensive packet-level traffic measurements that the volume of traffic generated by a session of the application vastly exceeds the amount of data keyed in by the user. This has important implications on the overall power consumption of the service. We show that using the cloud to perform certain tasks consumes more power (by a watt to 10 watts depending on the scenario) than performing the same tasks locally on a low-power consuming computer and a tablet. <s> BIB018 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> In cloud environments, IT solutions are delivered to users via shared infrastructure, enabling cloud service providers to deploy applications as services according to user QoS (Quality of Service) requirements. One consequence of this cloud model is the huge amount of energy consumption and significant carbon footprints caused by large cloud infrastructures. A key and common objective of cloud service providers is thus to develop cloud application deployment and management solutions with minimum energy consumption while guaranteeing performance and other QoS specified in Service Level Agreements (SLAs). However, finding the best deployment configuration that maximises energy efficiency while guaranteeing system performance is an extremely challenging task, which requires the evaluation of system performance and energy consumption under various workloads and deployment configurations. In order to simplify this process we have developed Stress Cloud, an automatic performance and energy consumption analysis tool for cloud applications in real-world cloud environments. Stress Cloud supports the modelling of realistic cloud application workloads, the automatic generation of load tests, and the profiling of system performance and energy consumption. We demonstrate the utility of Stress Cloud by analysing the performance and energy consumption of a cloud application under a broad range of different deployment configurations. <s> BIB019 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Activity-related Factors <s> Scalable and fault-tolerant information systems like cloud systems are realized in server cluster systems. Server cluster systems are equipped with virtual machines to provide applications with scalable and fault-tolerant services. Scalable and fault-tolerant application services can be provided by balancing processing load among virtual machines to perform application processes. On the other hand, a large amount of electric energy is consumed in a server cluster system since multiple virtual machines are performed on multiple servers which consume electric energy to perform application processes. In order to design and implement an energy-aware server cluster system, the computation model and power consumption model of a server to perform application processes on multiple virtual machines have to be defined. In this paper, we first define the computation model of a virtual machine to perform application processes. We also define the power consumption model of a server to perform application processes on virtual machines. <s> BIB020
|
Recalling the previous analysis in Section 3.2, here we identify factors mainly related to those generic application execution elements. 1) (Data) Access Pattern: Data accessing refers to reading and writing activities. Simple access patterns concern the activities only, such as one-time access, repeat access, and cyclic access; sophisticated access patterns are associated with both the activities and the spatial distance between data locations, such as sequential access, nested access, and random access BIB002 . When accessing the same amount of data, longer-distance traversals naturally consume more energy. For example, random access has been empirically verified to be significantly more energy-expensive BIB017 . 2) (Data) Transmission Rate: Without exceeding physical bandwidths, the power consumed in both servers and network equipment is a proportional function of the total data transmission rate in a Cloud application BIB003 BIB004 BIB018 . However, data transfer at a higher bit rate is more energy efficient (i.e., less energy consumption per bit) BIB005 , and therefore the downloading speed should be set as high as possible to save energy for client devices BIB014 . On the contrary, the energy consumption per bit was identified to be an increasing function of the data uploading rate from mobile devices. Considering that low-speed traffic flows' impact on the overall power consumption is generally negligible BIB018 , decreasing the uploading speed has been argued to be an energy-optimal solution on the client side (given a flexible time limit) BIB014 . 3) Number of (User) Connections: For a Cloud application at runtime, one "connection" indicates an active user session, no matter what activity is issued from the client side. When more user sessions are active, the application incurs more energy consumption BIB013 BIB007 . The user connections can be sequential, overlapped, or concurrent (e.g., file downloading from the Cloud BIB008 ). In the concurrent case, more user activities lead to an increase in Cloud resource usage, and the extra scheduling and synchronizing overhead can in turn increase each user request's processing time BIB019 . 4) Processing Concurrency: Concurrent processing activities commonly exist in parallel applications, and the concurrency can be measured by the number of processes. Due to the scheduling overhead, both the overall and the per-task energy consumption could increase with the number of processes BIB011 BIB013 . However, unlike the other types of activities, concurrency generally serves to speed up workload processing rather than to change the workload size. Accordingly, despite the extra scheduling overhead, increasing the degree of parallelism in a Cloud application can still significantly improve its energy efficiency (i.e., the workload-energy ratio) BIB006 BIB012 BIB009 . In particular, when memory footprints are relatively small, starting multiple processes within fewer computing resources can be even more energy-friendly BIB015 , until reaching the maximum utilization or physical limits of the resources (e.g., the total number of hyperthreads) BIB020 BIB010 . 5) (User/Task) Arrival Rate: Following the convention of the primary studies, we also use "arrival rate" to represent the frequency of user interactions and task processing. 
In general, a faster user arrival rate BIB016 and a shorter inter-arrival time between two consecutive tasks BIB001 both imply a heavier workload, and correspondingly result in higher power consumption of a Cloud application. Note that the actual energy consumption ultimately depends on the application's execution time, as specified above (a toy model combining these activity-related factors is sketched below).
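To make these activity-related correlations concrete, the following minimal sketch combines the rate-proportional and connection-proportional power terms into a toy model. It is an illustration under assumed values rather than a model taken from any surveyed study: the linear form and the constants `P_IDLE`, `ALPHA`, and `BETA` are all hypothetical.

```python
# Toy activity-related power model (illustrative assumptions only):
# power grows linearly with the data transmission rate and with the
# number of active user connections; energy is power times duration.

P_IDLE = 120.0   # assumed baseline server power (watts)
ALPHA = 4.0e-9   # assumed watts per bit/s of transmission rate
BETA = 1.5       # assumed watts per active user connection

def power(rate_bps, connections):
    """Instantaneous power draw (watts) at a given activity level."""
    return P_IDLE + ALPHA * rate_bps + BETA * connections

def energy(rate_bps, connections, seconds):
    """Energy (joules) over an interval of constant activity."""
    return power(rate_bps, connections) * seconds

# A higher bit rate raises power but shortens the transfer, so the
# energy per bit falls -- the downloading-side effect noted above.
data_bits = 8e9  # a 1 GB payload
for rate in (1e8, 1e9):  # 100 Mbit/s versus 1 Gbit/s
    duration = data_bits / rate
    joules = energy(rate, 10, duration)
    print(f"{rate:.0e} bit/s: {joules:,.0f} J total, {joules / data_bits:.2e} J/bit")
```

Because the fixed idle term dominates whenever the busy interval is long, shortening a download by raising its rate lowers the energy per bit; on the uploading side, where the per-bit transmission cost itself climbs with the rate, the recommendation reverses, as discussed above.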
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Power-aware scheduling problem has been a recent issue in cluster systems not only for operational cost due to electricity cost, but also for system reliability. As recent commodity processors support multiple operating points under various supply voltage levels, Dynamic Voltage Scaling (DVS) scheduling algorithms can reduce power consumption by controlling appropriate voltage levels. In this paper, we provide power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints on DVS-enabled cluster systems in order to minimize power consumption as well as to meet the deadlines specified by application users. A bag-of-tasks application should finish all the sub-tasks before the deadline, so that the DVS scheduling scheme should consider the deadline as well. We provide the DVS scheduling algorithms for both time-shared and space-shared resource sharing policies. The simulation results show that the proposed algorithms reduce much power consumption compared to static voltage schemes. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Energy efficiency and parallel I/O performance have become two critical measures in high performance computing (HPC). However, there is little empirical data that characterize the energy-performance behaviors of parallel I/O workload. In this paper, we present a methodology to profile the performance, energy, and energy efficiency of parallel I/O access patterns and report our findings on the impacting factors of parallel I/O energy efficiency. Our study shows that choosing the right buffer size can change the energy-performance efficiency by up to 30 times. High spatial and temporal spacing can also lead to significant improvement in energy-performance efficiency (about 2X). We observe CPU frequency has a more complex impact, depending on the IO operations, spatial and temporal, and memory buffer size. The presented methodology and findings are useful for evaluating the energy efficiency of I/O intensive applications and for providing a guideline to develop energy efficient parallel I/O technology. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Energy efficiency is a fundamental consideration for mobile devices. Cloud computing has the potential to save mobile client energy but the savings from offloading the computation need to exceed the energy cost of the additional communication. In this paper we provide an analysis of the critical factors affecting the energy consumption of mobile clients in cloud computing. Further, we present our measurements about the central characteristics of contemporary mobile handheld devices that define the basic balance between local and remote computing. We also describe a concrete example, which demonstrates energy savings. We show that the trade-offs are highly sensitive to the exact characteristics of the workload, data communication patterns and technologies used, and discuss the implications for the design and engineering of energy efficient mobile cloud computing solutions. 
<s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Cloud computing clusters distributed computers to provide applications as services and on-demand resources over Internet. From the perspective of average and total energy consumption, such consolidated resource enhances the energy efficiency on both clients and servers. However, cloud computing has a different power consumption pattern from the traditional storage oriented Internet services. The computation oriented implementation of cloud service broadens the gap between the peak power demand and base power demand of a data center. A higher peak demand implies the need of feeder capacity expansion, which requires a considerable investment. This study proposes a computation related approach to lessen the increasing power demand of cloud service data centers. Through appropriated designs, some frequently used computing algorithms can be performed by either clients or servers. As a model presented in this paper, such client-server balanced computation resource integration suggests an energy-efficient and cost-effective cloud service data center. <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Energy consumption has become a major concern to the widespread deployment of cloud data centers. The growing importance for parallel applications in the cloud introduces significant challenges in reducing the power consumption drawn by the hosted servers. In this paper, we propose an enhanced energy-efficient scheduling (EES) algorithm to reduce energy consumption while meeting the performance-based service level agreement (SLA). Since slacking non-critical jobs can achieve significant power saving, we exploit the slack room and allocate them in a global manner in our schedule. Using random generated and real-life application workflows, our results demonstrate that EES is able to reduce considerable energy consumption while still meeting SLA. <s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> The popularity of smartphones is growing every day. Thanks to the more powerful hardware the applications can run more tasks and use broadband network connection, however there are several known issues. For example, under typical usage (messaging, browsing, and gaming) a smartphone can be discharged in one day. This makes the battery life one of the biggest problems of the mobile devices. That is a good motivation to find energy-efficient solutions. One of the possible methods is the “computation offloading” mechanism, which means that some of the tasks are uploaded to the cloud. In this paper we are going to present a new energy-efficient job scheduling model and a measurement infrastructure which is used to analyze the energy consumption of smartphones. Our results are going to be demonstrated through some scenarios where the goal is to save energy. The offloading task is based on LP and scheduling problems. <s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> General-purpose computing domain has experienced strategy transfer from scale-up to scale-out in the past decade. 
In this paper, we take a step further to analyze ARM-processor based cluster against Intel X86 workstation, from both energy-efficiency and cost-efficiency perspectives. Three applications are selected and evaluated to represent diversified applications, including Web server throughput, in-memory database, and video transcoding. Through detailed measurements, we make the observations that the energy-efficiency ratio of the ARM cluster against the Intel workstation varies from 2.6-9.5 in in-memory database, to approximately 1.3 in Web server application, and 1.21 in video transcoding. We also find out that for the Intel processor that adopts dynamic voltage and frequency scaling (DVFS) techniques, the power consumption is not linear with the CPU utilization level. The maximum energy saving achievable from DVFS is 20%. Finally, by utilizing a monthly cost model of data centers, we conclude that ARM cluster based data centers are feasible, and are advantageous in computationally lightweight applications, e.g. in-memory database and network-bounded Web applications. The cost advantage of ARM cluster diminishes progressively for computation-intensive applications, i.e. dynamic Web server application and video transcoding, because the number of ARM processors needed to provide comparable performance increases. <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Cloud computing delivers computing as a utility to users worldwide. A consequence of this model is that cloud data centres have high deployment and operational costs, as well as significant carbon footprints for the environment. We need to develop Green Cloud Computing (GCC) solutions that reduce these deployment and operational costs and thus save energy and reduce adverse environmental impacts. In order to achieve this objective, a thorough understanding of the energy consumption patterns in complex Cloud environments is needed. We present a new energy consumption model and associated analysis tool for Cloud computing environments. We measure energy consumption in Cloud environments based on different runtime tasks. Empirical analysis of the correlation of energy consumption and Cloud data and computational tasks, as well as system performance, will be investigated based on our energy consumption model and analysis tool. Our research results can be integrated into Cloud systems to monitor energy consumption and support static or dynamic system-level optimisation. <s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Traditional scheduling research usually targets make span as the only optimization goal, while several isolated efforts addressed the problem by considering at most two objectives. In this paper we propose a general framework and heuristic algorithm for multi-objective static scheduling of scientific workflows in heterogeneous computing environments. The algorithm uses constraints specified by the user for each objective and approximates the optimal solution by applying a double strategy: maximizing the distance to the constraint vector for dominant solutions and minimizing it otherwise. We analyze and classify different objectives with respect to their impact on the optimization process and present a four-objective case study comprising make span, economic cost, energy consumption, and reliability. 
We implemented the algorithm as part of the ASKALON environment for Grid and Cloud computing. Results for two real-world applications demonstrate that the solutions generated by our algorithm are superior to user-defined constraints most of the time. Moreover, the algorithm outperforms a related bi-criteria heuristic and a bi-criteria genetic algorithm. <s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Data-intensive applications that involve large amounts of data generation, processing and transmission, have been operated with little attention to energy efficiency. Issues such as management, movement and storage of huge volumes of data may lead to high energy consumption. Replication is a useful solution to decrease data access time and improve performance in these applications, but it may also lead to increase the energy spent in storage and data transmission, by spreading large volumes of data replicas around the network. Thus, utilizing effective strategies for energy saving in these applications is a very critical issue from both the environmental and economical aspects. In this paper, at first we review the current data replication and caching approaches and energy saving methods in the context of data replication. Then, we propose a model for energy consumption during data replication and, finally, we evaluate two schemes for data fetching based on the two critical metrics in Grid environments: energy consumption and data access time. We also compare the gains based on these metrics with the no-caching scenario by using simulation. <s> BIB010 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Cloud computing delivers IT solutions as a utility to users. One consequence of this model is that large cloud data centres consume large amounts of energy and produce significant carbon footprints. A common objective of cloud providers is to develop resource provisioning and management solutions that minimise energy consumption while guaranteeing Service Level Agreements (SLAs). In order to achieve this objective, a thorough understanding of energy consumption patterns in complex cloud systems is imperative. We have developed an energy consumption model for cloud computing systems. To operationalise this model, we have conducted extensive experiments to profile the energy consumption in cloud computing systems based on three types of tasks: computation-intensive, data-intensive and communication-intensive tasks. We collected fine-grained energy consumption and performance data with varying system configurations and workloads. Our experimental results show the correlation coefficients of energy consumption, system configuration and workload, as well as system performance in cloud systems. These results can be used for designing energy consumption monitors, and static or dynamic system-level energy consumption optimisation strategies for green cloud computing systems. <s> BIB011 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> To reduce the energy consumption in mobile devices, intricate applications are divided into several interconnected partitions like Task Interaction Graph (TIG) and are of floaded to cloud resources or nearby surrogates. 
Dynamic Voltage and Frequency Scaling (DVFS) is an effective technique to reduce the power consumption during mapping and scheduling stages. Most of the existing research works proposed several task scheduling solutions by considering the voltage/frequency scaling at the scheduling stage alone. But, the efficacy of these solutions can be improved by applying the DVFS in both mapping as well as scheduling stages. This research work attempts to apply DVFS in mapping as well as scheduling stages by combining both the task-resource and resource-frequency assignments in a single problem. The idea is to estimate the worst-case global slack time for each task-resource assignment, distributes it over the TIG and slowing down the execution of tasks using dynamic voltage and frequency scaling. This optimal slowdown increases the computation time of TIG without exceeding its worst-case completion time. Further, the proposed work models the code offloading as a Quadratic Assignment Problem (QAP) in Matlab-R2012b and solves it using two-level Genetic Algorithm (GA) of the global optimization toolbox. The effectiveness of the proposed model is assessed by a simulation and the results conclude that there is an average energy savings of 35% in a mobile device. <s> BIB012 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Reducing energy consumption without scarifying service quality is important for cloud computing. Efficient scheduling algorithms HEFT-D and HEFT-DS based on frequency-scaling and state-switching techniques are proposed. Our scheduling algorithms use the fact that the hosts employing a lower frequency or entering a sleeping state may consume less energy without leading to a longer makespan. Experimental results have shown that our algorithms maintain the performance as good as that of HEFT while the energy consumption is reduced. <s> BIB013 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Offloading is one major type of collaborations between mobile devices and clouds to achieve less execution time and less energy consumption. Offloading decisions for mobile cloud collaboration involve many decision factors. One of important decision factors is the network unavailability that has not been well studied. This paper presents an offloading decision model that takes network unavailability into consideration. Network with some unavailability can be modeled as an alternating renewal process. Then, application execution time and energy consumption in both ideal network and network with some unavailability are analyzed. Based on the presented theoretical model, an application partition algorithm and a decision module are presented to produce an offloading decision that is resistant to network unavailability. Simulation results demonstrate good performance of proposed scheme, where the proposed partition algorithm is analyzed in different application and cloud scenarios. <s> BIB014 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Many mobile applications such as games and social applications are emerging for mobile devices. These powerful applications consume more and more running time and energy. So they are badly confined by mobile device with limited resource. 
Since cloud infrastructure has great potential to benefit task execution, this paper presents SmartVirtCloud (SmartVC). A system can offload methods in applications to achieve better performance in indoor environment. SmartVC decides at runtime whether and when the methods in application should be executed remotely. And two types of cloud service models, namely load-balancing and application-isolation, are constructed for concurrent requests. The empirical results show that, by using SmartVC, the CPU-intensive calculation application consumes two orders of magnitude less energy on average; the processing speed of latency-sensitive image translation application gets doubled; the performance of network-intensive picture download application is improved with the increase of picture amount. In addition, the proposed two cloud models support concurrent requests from smartphones very well. <s> BIB015 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Many emerging mobile applications nowadays tend to be computation-intensive due to the increasing popularity and convenience of smartphones. Nevertheless, a major obstacle prohibits the direct adoption of such applications and that is battery lifetime. Mobile Cloud Computing (MCC) is a promising solution that suggests the partial processing of applications on the cloud to minimize the overall power consumption at the mobile device. However, this does not necessarily save energy if there is no systematic mechanism for evaluating the effect of offloading the application into the cloud. In this paper, we study the factors affecting the power consumption due to offloading, develop a decision model, and verify its correctness by real implementation on an Android device. The results show that the proposed partitioning scheme successfully results in energy savings at the mobile handset and surpasses the energy efficiency of both fully local and fully remote execution. <s> BIB016 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> With the rise in mobile device adoption, and growth in mobile application market expected to reach $30 billion by the end of 2013, mobile user expectations for pervasive computation and data access are unbounded. Yet, various applications, such as face recognition, speech and object recognition, and natural language processing, exceed the limits of standalone mobile devices. Such applications resort to exploiting larger resources in the cloud, which sparked researching problems arising from data and computational offloading to the cloud. Research in this area has mainly focused on profiling and offloading tasks to remote cloud resources, automatically transforming mobile applications by provisioning and partitioning its execution into offloadable tasks, and more recently, bringing computational resources (e.g. Cloudlets) closer to task initiators in order to save mobile device energy. In this work, we argue for environments in which computational offloading is performed among mobile devices forming what we call a Mobile Device Cloud (MDC). Our contributions are: (1) Implementing an emulation testbed for quantifying the potential gain, in execution time or energy consumed, of offloading tasks to an MDC. 
This testbed includes a client offloading application, an offloadee server receiving tasks, and a traffic shaper situated between the client and server emulating different communication technologies (Bluetooth 3.0, Bluetooth 4.0, WiFi Direct, WiFi, and 3G). Our evaluation for offloading tasks with different data and computation characteristics to an MDC registers up to 80% and 90% savings in time or energy respectively, as opposed to offloading to the cloud. (2) Providing an MDC experimental platform to enable future evaluation and assessment of MDC-based solutions. We create a testbed, shown in Figure 1, to measure the energy consumed by a mobile device when running or offloading tasks using different communication technologies. We build an offloading Android-based mobile application and measure the time taken to offload tasks, execute them, and receive the results from other devices within an MDC. Our experimental results show gains in time and energy savings, up to 50% and 26% respectively, by offloading within MDCs, as opposed to locally executing tasks. (3) Providing solutions that address two major MDC challenges. First, due to mobility, offloadee devices leaving an MDC would seriously compromise performance. Therefore, we propose several social-based offloadee selection algorithms that exploit contact history between devices, as well as friendship relationships or common interests between device owners or users. Second, we provide solutions for balancing power consumption by distributing computational load across MDC members to elongate an MDC's lifetime. This need occurs when users need to maximize the lifetime of an ensemble of devices that belong to the same user or household. We evaluate the algorithms we propose for addressing these two challenges using the real datasets that contain contact mobility traces and social information for conference attendees over the span of three days. Our results show the impact of choosing the suitable offloadee subset, the gain from leveraging social information, and how MDCs can live longer by balancing power consumption across their members. <s> BIB017 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Advances in sensor cloud computing to support vehicular applications are becoming more important as the need to better utilize computation and communication resources and make them energy efficient. In this paper, we propose a novel approach to minimize energy consumption of processing a vehicular application within mobile wireless sensor networks (MWSN) while satisfying a certain completion time requirement. Specifically, the application can be optimally partitioned, offloaded and executed with helps of peer sensor devices, e.g., a smart phone, thus the proposed solution can be treated as a joint optimization of computing and networking resources. Our theoretical analysis is supplemented by simulation results to show the significance of energy saving by 63% compared to the traditional cloud computing methods. Moreover, a prototype cloud system has been developing to validate the efficiency of sensor cloud strategies in dealing with diverse vehicular applications. <s> BIB018 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> This paper presents a quantitative study on the energy-traffic tradeoff problem from the perspective of entire Wireless Local Area Network (WLAN). 
We propose a novel Energy-Efficient Cooperative Offloading Model (E2COM) for energy-traffic tradeoff, which can ensure the fairness of energy consumption of mobile devices and reduce the computation repetition and eliminate the Internet data traffic redundancy through cooperative execution and sharing computation results. We design an Online Task Scheduling Algorithm (OTS) based on a pricing mechanism and Lyapunov optimization to address the problem without predicting future information on task arrivals, transmission rates and so on. OTS can achieve a desirable trade-off between the energy consumption and Internet data traffic by appropriately setting the tradeoff coefficient. Simulation results demonstrate that E2COM is more efficient than no offloading and cloud offloading for a variety of typical mobile devices, applications and link qualities in WLAN. <s> BIB019 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Modern smartphones permit to run a large variety of applications, i.e. multimedia, games, social network applications, etc. However, this aspect considerably reduces the battery life of these devices. A possible solution to alleviate this problem is to offload part of the application or the whole computation to remote servers, i.e. Cloud Computing. The offloading cannot be performed without considering the issues derived from the nature of the application (i.e. multimedia, games, etc.), which can considerably change the resources necessary to the computation and the type, the frequency and the amount of data to be exchanged with the network. This work shows a framework for automatically building models for the offloading of mobile applications based on evolutionary algorithms and how it can be used to simulate different kinds of mobile applications and to analyze the rules generated. To this aim, a tool for generating mobile datasets, presenting different features, is designed and experiments are performed in different usage conditions in order to demonstrate the utility of the overall framework. <s> BIB020 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> There is an increasing interest for cloud services to be provided in a more energy efficient way. The growing deployment of large-scale, complex workflow applications onto cloud computing hosts is being faced with crucial challenges in reducing the power consumption without violating the service level agreement (SLA). In this paper, we consider cloud hosts which can operate in different power states with different capacities respectively, and propose a novel scheduling heuristic for workflows to reduce energy consumption while still meeting deadline constraint. The proposed heuristic is evaluated using simulation with four different real-world applications. The observed results indicates that our heuristic does significantly outperform the existing approaches. <s> BIB021 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Improving performance of a mobile application by offloading its computation onto a cloudlet has become a prevalent paradigm. Among mobile applications, the category of interactive data-streaming applications is emerging while having not yet received sufficient attention. 
During computation offloading, the performance of this category of applications (including response time and throughput) depends on network latency and bandwidth between the mobile device and the cloudlet. Although a single cloudlet can provide satisfactory network latency, the bandwidth is always the bottleneck of the throughput. To address this issue, we propose to use multiple cloudlets for computation offloading so as to alleviate the bandwidth bottleneck. In addition, we propose to use multiple module instances to complete a module, enabling more fine-grained computation partitioning, since data processing in many modules of data-streaming applications could be highly parallelized. Specifically, at first we apply a fine-grained data-flow model to characterize mobile interactive data-streaming applications. Then we build a unified optimization framework that achieves maximization of the overall utilities of all mobile users, and design an efficient heuristic for the optimization problem, which is able to make trade-off between throughput and energy consumption at each mobile device. At the end we verify our algorithm with extensive simulation. The results show that the overall utility achieved by our heuristic is close to the precise optimum, and our multiple-cloudlet mechanism significantly outperforms the single-cloudlet mechanism. <s> BIB022 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> The Smart cities applications are gaining an increasing interest among administrations, citizens and technologists for their suitability in managing the everyday life. One of the major challenges is regarding the possibility of managing in an efficient way the presence of multiple applications in a Wireless Heterogeneous Network (HetNet) environment, alongside the presence of a Mobile Cloud Computing (MCC) infrastructure. In this context we propose a utility function model derived from the economic world aiming to measure the Quality of Service (QoS), in order to choose the best access point in a HetNet to offload part of an application on the MCC, aiming to save energy for the Smart Mobile Devices (SMDs) and to reduce computational time. We distinguish three different types of application, considering different offloading percentage of computation and analyzing how the cell association algorithm allows energy saving and shortens computation time. The results show that when the network is overloaded, the proposed utility function allows to respect the target values by achieving higher throughput values, and reducing the energy consumption and the computational time. <s> BIB023 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Mobile Cloud Computing (MCC) is emerging as a main ubiquitous computing platform which enables to leverage the resource limitations of mobile devices and wireless networks by offloading data-intensive computation tasks from resource-poor mobile devices to resource-rich clouds. In this paper, we consider an online location-aware offloading problem in a two-tiered mobile cloud computing environment consisting of a local cloudlet and remote clouds, with an objective to fair share the use of the cloudlet by consuming the same proportional of their mobile device energy, while keeping their individual SLA, for which we devise an efficient online algorithm. 
We also conduct experiments by simulations to evaluate the performance of the proposed algorithm. Experimental results demonstrate that the proposed algorithm is promising and outperforms other heuristics. <s> BIB024 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Abstract Mobile applications are becoming computationally intensive nowadays due to the increasing convenience, reliance on, and sophistication of smartphones. Nevertheless, battery lifetime remains a major obstacle that prohibits the large-scale adoption of such apps. Mobile cloud computing is a promising solution whereby apps are partially processed in the cloud to minimize the overall energy consumption of smartphones. However, this will not necessarily save energy if there is no systematic mechanism to evaluate the effect of offloading an app onto the cloud. In this paper, we present a mathematical model that represents this energy consumption optimization problem. We propose an algorithm to dynamically solve the problem while taking security measures into account. We also propose the free sequence protocol (FSP) that allows for the dynamic execution of apps according to their call graph. Our experimental setup consists of an Android smartphone and a Java server in the cloud. The results demonstrate that our approach saves battery lifetime and enhances performance. They also show the effects of workload amount, network type, computation cost, security operations, signal strength, and call graph structure on the optimized overall energy consumption. <s> BIB025 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> This article investigates the problem of holistic energy consumption in cloud-assisted mobile computing. In particular, since the cloud, assisting a multi-core mobile device, can be considered as a special core with powerful computation capability, the optimization of holistic energy consumption is formulated as a task-core assignment and scheduling problem. Specifically, the energy consumption models for the mobile device, network, cloud, and, more importantly, task interaction are presented, respectively. Based on these energy consumption models, a holistic energy optimization framework is then proposed, where the thermal effect, application execution deadline, transmission power, transmission bandwidth, and adaptive modulation and coding rate are jointly considered. <s> BIB026 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> Interactive cloud computing and cloud-based applications are a rapidly growing sector of the expanding digital economy because they provide access to advanced computing and storage services via simple, compact personal devices. Recent studies have suggested that processing a task in the cloud is more energy-efficient than processing the same task locally. However, these studies have generally ignored the power consumption of the network and end-user devices when accessing the cloud. In this paper, we develop a power consumption model for interactive cloud applications that includes the power consumption of end-user devices and the influence of the applications on the power consumption of the various network elements along the path between the user and the cloud data centre. 
As examples, we apply our model to Google Drive and Microsoft Skydrive's word processing, presentation and spreadsheet interactive applications. We demonstrate via extensive packet-level traffic measurements that the volume of traffic generated by a session of the application vastly exceeds the amount of data keyed in by the user. This has important implications on the overall power consumption of the service. We show that using the cloud to perform certain tasks consumes more power (by a watt to 10 watts depending on the scenario) than performing the same tasks locally on a low-power consuming computer and a tablet. <s> BIB027 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> 1) <s> The development of cloud computing and virtualization techniques enables mobile devices to overcome the severity of scarce resource constrained by allowing them to offload computation and migrate several computation parts of an application to powerful cloud servers. A mobile device should judiciously determine whether to offload computation as well as what portion of an application should be offloaded to the cloud. This paper considers a mobile computation offloading problem where multiple mobile services in workflows can be invoked to fulfill their complex requirements and makes decision on whether the services of a workflow should be offloaded. Due to the mobility of portable devices, unstable connectivity of mobile networks can impact the offloading decision. To address this issue, we propose a novel offloading system to design robust offloading decisions for mobile services. Our approach considers the dependency relations among component services and aims to optimize execution time and energy consumption of executing mobile services. To this end, we also introduce a mobility model and a trade-off fault-tolerance mechanism for the offloading system. A genetic algorithm (GA) based offloading method is then designed and implemented after carefully modifying parts of a generic GA to match our special needs for the stated problem. Experimental results are promising and show near-optimal solutions for all of our studied cases with almost linear algorithmic complexity with respect to the problem size. <s> BIB028
|
1) Data Location: Locality can be a significant contributor to the energy consumption of data accessing. As mentioned under Data Access Pattern, it is the data location that essentially determines the different patterns' influences BIB002 . Thus, moving data closer to where they are needed appears to be an energy-saving principle. For example, the collocated data-and-compute configuration delivers the best energy profile BIB025 , while distributing data and compute nodes into different layers will result in more energy consumption. 2) Overall Data Size: The existing studies exhibit a consensus on the positive correlation between the overall data size and the energy consumption of a Cloud application, even though the correlation was studied in various contexts. For instance, the input data size is a major driver behind the computation workload BIB018 BIB019 ; the energy incurred by accessing activities mainly depends on the data length BIB010 ; and the amount of data to be transmitted is one of the discriminating factors for communication energy cost BIB020 BIB026 . 3) (Data) Transaction Size: Unlike the energy consumption that grows proportionally to the overall data size, small-data transactions in a Cloud application show a negative correlation with the energy consumption. In practice, the data block per transaction can vary from several bytes to multiple megabytes BIB002 . Given the same amount of data in an application, dealing with smaller-data-size transactions would cause longer execution time and higher energy expense BIB011 . Consequently, packing a set of small data requests into a bulk transaction becomes an effective approach to improving the application's energy efficiency BIB002 BIB003 . Note that the aforementioned data segments involved in application tasks do not necessarily act as transactional data pieces, because a task might further comprise numerous transactions. 4) Number of Tasks: By representing Cloud applications as task interaction graphs (e.g., directed acyclic graphs), the number of task nodes and edges has been used to reflect the whole workload (i.e., the graph size) BIB012 BIB013 BIB005 BIB001 BIB014 BIB021 . Since more tasks usually imply more data and more application activities at runtime, the corresponding application execution will inevitably require more energy BIB010 . Moreover, considering the extra overhead and energy for task scheduling, a larger number of tasks in a Cloud application will lead to higher average energy consumption per task BIB011 . 5) Task Complexity: The computational complexity of tasks or functional modules is closely associated with the Cloud application's energy consumption BIB006 BIB022 BIB027 , as complex computation requires more computing resources and/or causes longer execution time. To verify this association, the empirical studies varied task complexity mainly by adding functions BIB004 and increasing the load of mathematical calculations BIB007 BIB015 , while the simulation study BIB018 characterized the complexity of the computation algorithm as a Gamma-distributed random variable. 6) Task Size: As mentioned previously, a composite-object task can further be defined as a combination of the input/output data and the computation workload BIB028 , and therefore the size of a task can partially be reflected by the data size BIB008 , possibly together with the computation complexity BIB018 . 
To avoid duplication, we only focus on the amount of computation workload BIB016 , which has been widely depicted as the number of CPU cycles BIB012 , floating-point operations BIB023 BIB017 , or processing instructions BIB028 BIB009 BIB001 BIB024 . In fact, the CPU cycles of a computation task have been treated as a linear function of the data input to the task BIB019 , and the computation complexity can also be translated into a particular number of instructions.
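To make these workload-related factors concrete, the Python sketch below estimates a task's computation energy by first mapping its input data size to CPU cycles through the linear relationship noted above BIB019 , and then scaling the cycle count by a task-complexity factor. All names and coefficients (CYCLES_PER_BYTE, BASE_CYCLES, CPU_FREQ_HZ, CPU_POWER_W) are illustrative assumptions rather than values taken from the surveyed studies.

```python
# Illustrative sketch: translating workload factors into a computation-energy
# estimate. Every coefficient below is a hypothetical placeholder, not a
# value reported by the surveyed models.

CYCLES_PER_BYTE = 25.0   # slope of the assumed linear cycles-vs-input relation
BASE_CYCLES = 1.0e6      # fixed per-task overhead (scheduling, setup)
CPU_FREQ_HZ = 2.0e9      # assumed processor frequency
CPU_POWER_W = 30.0       # assumed active CPU power draw

def task_cycles(input_bytes: int, complexity_factor: float = 1.0) -> float:
    """CPU cycles as a linear function of input data size, scaled by a
    task-complexity factor (more complex algorithms -> more cycles)."""
    return BASE_CYCLES + complexity_factor * CYCLES_PER_BYTE * input_bytes

def task_energy_joules(input_bytes: int, complexity_factor: float = 1.0) -> float:
    """Energy = power * time, with time = cycles / frequency."""
    exec_time_s = task_cycles(input_bytes, complexity_factor) / CPU_FREQ_HZ
    return CPU_POWER_W * exec_time_s

# More tasks, larger inputs, and higher complexity all increase total energy,
# mirroring factors 2), 4), 5) and 6) above.
workload = [(10_000_000, 1.0), (10_000_000, 2.5), (50_000_000, 1.0)]
total = sum(task_energy_joules(size, c) for size, c in workload)
print(f"estimated computation energy: {total:.3f} J")
```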
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-implicit Energy Consumption Model <s> Power and energy are primary concerns in the design and management of modern cloud computing systems and data centers. Operational costs for powering and cooling large-scale cloud systems will soon exceed acquisition costs. To improve the energy effciency of cloud computing systems and applications, it is critical to profile the power usage of real systems and applications. Many factors influence power and energy usage in cloud systems, including each components electrical specification, the system usage characteristics of the applications, and system software. In this work, we present the power profiling results on a cloud test bed. We combine hardware and software that achieves power and energy profiling at server granularity. We collect the power and energy usage data with varying server/cloud configurations, and quantify their correlation. Our experiments reveal conclusively how different system configurations affect the server/cloud power and energy usage. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-implicit Energy Consumption Model <s> Traditional scheduling research usually targets make span as the only optimization goal, while several isolated efforts addressed the problem by considering at most two objectives. In this paper we propose a general framework and heuristic algorithm for multi-objective static scheduling of scientific workflows in heterogeneous computing environments. The algorithm uses constraints specified by the user for each objective and approximates the optimal solution by applying a double strategy: maximizing the distance to the constraint vector for dominant solutions and minimizing it otherwise. We analyze and classify different objectives with respect to their impact on the optimization process and present a four-objective case study comprising make span, economic cost, energy consumption, and reliability. We implemented the algorithm as part of the ASKALON environment for Grid and Cloud computing. Results for two real-world applications demonstrate that the solutions generated by our algorithm are superior to user-defined constraints most of the time. Moreover, the algorithm outperforms a related bi-criteria heuristic and a bi-criteria genetic algorithm. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-implicit Energy Consumption Model <s> Cloud computing delivers computing as a utility to users worldwide. A consequence of this model is that cloud data centres have high deployment and operational costs, as well as significant carbon footprints for the environment. We need to develop Green Cloud Computing (GCC) solutions that reduce these deployment and operational costs and thus save energy and reduce adverse environmental impacts. In order to achieve this objective, a thorough understanding of the energy consumption patterns in complex Cloud environments is needed. We present a new energy consumption model and associated analysis tool for Cloud computing environments. We measure energy consumption in Cloud environments based on different runtime tasks. 
Empirical analysis of the correlation of energy consumption and Cloud data and computational tasks, as well as system performance, will be investigated based on our energy consumption model and analysis tool. Our research results can be integrated into Cloud systems to monitor energy consumption and support static or dynamic system-level optimisation. <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-implicit Energy Consumption Model <s> This paper describes an approach for service composition optimization and its application in cloud computing to streamline resource usage that in turn contributes towards energy efficiency. The suitability and usefulness of the approach is evaluated by experimentation. In the experiments, physical hosts at various cloud sites represent candidate services that are brought together in a composition to satisfy the requirements of applications. The composition is optimized based on functional and non-functional criteria to determine a set of cloud services representing energy efficient deployment configurations. We also propose a runtime adaptation model that can help in minimizing energy consumption of cloud applications at runtime. <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-implicit Energy Consumption Model <s> Cloud computing delivers IT solutions as a utility to users. One consequence of this model is that large cloud data centres consume large amounts of energy and produce significant carbon footprints. A common objective of cloud providers is to develop resource provisioning and management solutions that minimise energy consumption while guaranteeing Service Level Agreements (SLAs). In order to achieve this objective, a thorough understanding of energy consumption patterns in complex cloud systems is imperative. We have developed an energy consumption model for cloud computing systems. To operationalise this model, we have conducted extensive experiments to profile the energy consumption in cloud computing systems based on three types of tasks: computation-intensive, data-intensive and communication-intensive tasks. We collected fine-grained energy consumption and performance data with varying system configurations and workloads. Our experimental results show the correlation coefficients of energy consumption, system configuration and workload, as well as system performance in cloud systems. These results can be used for designing energy consumption monitors, and static or dynamic system-level energy consumption optimisation strategies for green cloud computing systems. <s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-implicit Energy Consumption Model <s> Improving performance of a mobile application by offloading its computation onto a cloudlet has become a prevalent paradigm. Among mobile applications, the category of interactive data-streaming applications is emerging while having not yet received sufficient attention. During computation offloading, the performance of this category of applications (including response time and throughput) depends on network latency and bandwidth between the mobile device and the cloudlet. Although a single cloudlet can provide satisfactory network latency, the bandwidth is always the bottleneck of the throughput. 
To address this issue, we propose to use multiple cloudlets for computation offloading so as to alleviate the bandwidth bottleneck. In addition, we propose to use multiple module instances to complete a module, enabling more fine-grained computation partitioning, since data processing in many modules of data-streaming applications could be highly parallelized. Specifically, at first we apply a fine-grained data-flow model to characterize mobile interactive data-streaming applications. Then we build a unified optimization framework that achieves maximization of the overall utilities of all mobile users, and design an efficient heuristic for the optimization problem, which is able to make trade-off between throughput and energy consumption at each mobile device. At the end we verify our algorithm with extensive simulation. The results show that the overall utility achieved by our heuristic is close to the precise optimum, and our multiple-cloudlet mechanism significantly outperforms the single-cloudlet mechanism. <s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-implicit Energy Consumption Model <s> Recently, MapReduce has been a popular distributed programming framework for solving data-intensive applications. However, a large-scale MapReduce cluster has inevitable machine/node failures and considerable energy consumption. To solve these problems, MapReduce has employed several policies for replicating input data, storing/replicating intermediate data, and re-executing failed tasks. In this study, we concentrate on two typical policies for storing/replicating intermediate data, and derive the job completion reliability (JCR for short) and job energy consumption (JEC for short) of a MapReduce cluster when the two policies are individually employed. The two policies are further analyzed and compared given various scenarios in which jobs with different input data sizes, numbers of reduce tasks, and other parameters are run in a MapReduce cluster with two extreme parallel execution capabilities. From the analytical results, MapReduce managers are able to comprehend how the two policies influence the JCR and JEC of a MapReduce cluster. <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-implicit Energy Consumption Model <s> Long running applications on resource-constrained mobile devices can lead to software aging, which is a critical impediment to the mobile users due to its pervasive nature. Mobile offloading that migrates computation-intensive parts of applications from mobile devices onto resource-rich cloud servers, is an effective way for enhancing the availability of mobile services as it can postpone or prevent the software aging in mobile devices. Through partitioning the execution between the device side and the cloud side, the mobile device can have the most benefit from offloading in reducing utilisation of the device and increasing its lifetime. In this paper, we propose a path-based offloading partitioning (POP) algorithm to determine which portions of the application tasks to run on mobile devices and which portions on cloud servers with different cost models in mobile environments. 
The evaluation results show that the partial offloading scheme can significantly improve performance and reduce energy consumption by optimally distributing tasks between mobile devices and cloud servers, and can well adapt to changes in the environment. <s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-implicit Energy Consumption Model <s> Interactive cloud computing and cloud-based applications are a rapidly growing sector of the expanding digital economy because they provide access to advanced computing and storage services via simple, compact personal devices. Recent studies have suggested that processing a task in the cloud is more energy-efficient than processing the same task locally. However, these studies have generally ignored the power consumption of the network and end-user devices when accessing the cloud. In this paper, we develop a power consumption model for interactive cloud applications that includes the power consumption of end-user devices and the influence of the applications on the power consumption of the various network elements along the path between the user and the cloud data centre. As examples, we apply our model to Google Drive and Microsoft Skydrive's word processing, presentation and spreadsheet interactive applications. We demonstrate via extensive packet-level traffic measurements that the volume of traffic generated by a session of the application vastly exceeds the amount of data keyed in by the user. This has important implications on the overall power consumption of the service. We show that using the cloud to perform certain tasks consumes more power (by a watt to 10 watts depending on the scenario) than performing the same tasks locally on a low-power consuming computer and a tablet. <s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-implicit Energy Consumption Model <s> This article investigates the problem of holistic energy consumption in cloud-assisted mobile computing. In particular, since the cloud, assisting a multi-core mobile device, can be considered as a special core with powerful computation capability, the optimization of holistic energy consumption is formulated as a task-core assignment and scheduling problem. Specifically, the energy consumption models for the mobile device, network, cloud, and, more importantly, task interaction are presented, respectively. Based on these energy consumption models, a holistic energy optimization framework is then proposed, where the thermal effect, application execution deadline, transmission power, transmission bandwidth, and adaptive modulation and coding rate are jointly considered. <s> BIB010
As the name suggests, the environment-implicit energy consumption models are purely based on the analysis of Cloud applications, with little consideration of the deployment environment. Without loss of generality, we exploit the widely employed directed acyclic graph (DAG) as a generic model of Cloud application A in our discussion, as shown in Equation (1):

$$A := \big(\{n_i \mid 1 \le i \le N\},\ \{(n_i, n_j) \mid 1 \le i, j \le N\}\big) \quad (1)$$

where the application's DAG comprises N nodes and at most N × N edges. By partitioning A into functional pieces, each node n_i indicates a workload task, while each edge (n_i, n_j) represents the precedence constraint between two consecutive tasks. Unlike the application modeling in BIB006 BIB008 , we treat data transmission as a workload task represented by a node instead of an edge. By focusing only on the execution duration and the required energy unit of each workload task, the most straightforward energy consumption model of A was given in BIB002 BIB004 :

$$E(A) = \sum_{i=1}^{N} e(n_i) \cdot T(n_i) \quad (2)$$

where E(·) represents a generic energy consumption function, while T(·) is a generic makespan function. Note that e(n_i) is the energy unit consumed by the task n_i during a unit of time, which essentially is a workload-oriented notation BIB003 BIB005 in contrast to the power consumption in environmental resources. In addition to the task energy per time unit, there are also other types of workload-oriented energy units, e.g., energy per user or energy per bit BIB009 . When individual workload tasks have the same functionality, they can be grouped together to facilitate energy consumption modeling. For example, in the context of a MapReduce workflow, there are generally mapping, shuffling and reducing tasks. Correspondingly, the study BIB007 defined a function-group-based energy consumption model as:

$$E(A) = E_{map}(A) + E_{shuffle}(A) + E_{reduce}(A) \quad (3)$$

Recall that there are mainly four types of infrastructural resources (cf. Section 3.3). Without necessarily knowing the environmental details, we can similarly group the tasks that are related to the same resource-intensive workload. As for the task interactions, their energy consumption comprises an integration of task computation and information communication between tasks BIB010 . Although few modeling studies were concerned with the four resource types simultaneously, we summarize such a resource-group-based energy consumption model, inspired by the empirical investigation BIB001 , as shown below:

$$E(A) = E_{cpu}(A) + E_{net}(A) + E_{mem}(A) + E_{disk}(A) \quad (4)$$

Since this model is inherently associated with Cloud applications' deployment environment, we further treat it as a bridge between the environment-implicit and the following environment-specific energy consumption models.
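As a concrete illustration of Equations (2)-(4), the following minimal Python sketch evaluates the task-based model and its function-group and resource-group decompositions over a toy task set. The tasks, their energy units e(n_i), and their durations are invented for illustration only.

```python
# Minimal sketch of the environment-implicit models in Equations (2)-(4).
# Each task n_i carries an energy unit e(n_i) (energy per time unit), a
# duration T(n_i), a functional group, and a dominant resource type.
# All numbers are invented for illustration.

tasks = [
    # (name, e(n_i) [J/s], T(n_i) [s], function group, resource group)
    ("n1", 20.0, 3.0, "map",     "cpu"),
    ("n2", 22.0, 2.5, "map",     "cpu"),
    ("n3",  8.0, 4.0, "shuffle", "net"),
    ("n4", 15.0, 5.0, "reduce",  "disk"),
]

# Equation (2): E(A) = sum_i e(n_i) * T(n_i)
energy_total = sum(e * t for _, e, t, _, _ in tasks)

# Equation (3): E(A) = E_map + E_shuffle + E_reduce (group by functionality)
energy_by_function = {}
for _, e, t, group, _ in tasks:
    energy_by_function[group] = energy_by_function.get(group, 0.0) + e * t

# Equation (4): E(A) = E_cpu + E_net + E_mem + E_disk (group by resource type)
energy_by_resource = {"cpu": 0.0, "net": 0.0, "mem": 0.0, "disk": 0.0}
for _, e, t, _, res in tasks:
    energy_by_resource[res] += e * t

print(f"E(A) = {energy_total:.1f} J")   # identical total under all groupings
print("per function:", energy_by_function)
print("per resource:", energy_by_resource)
```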
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> As frequencies and feature size scale faster than operating voltages, power density is increasing in every processor generation. Along with that, leakage (highly dependent on temperature) has become an important source of power. Due to the non uniformity of on-chip power density, localized hot spots may create transient high temperature in a restricted area of the chip. These temperatures are source of errors and reduce chip reliability. This paper evaluates clustered architectures as an effective way to distribute power across the chip in order to reduce chip temperature. The proposed quadcluster architecture reduces 33% peak temperature and 12% average. Along with this, “cluster-hopping” decreases temperature in the chip because of disabling some of the clustered backends during a period of time: peak temperatures are reduced 37% and average temperature of the processor 14% with an extra penalty of 3%. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> Power-aware scheduling problem has been a recent issue in cluster systems not only for operational cost due to electricity cost, but also for system reliability. As recent commodity processors support multiple operating points under various supply voltage levels, Dynamic Voltage Scaling (DVS) scheduling algorithms can reduce power consumption by controlling appropriate voltage levels. In this paper, we provide power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints on DVS-enabled cluster systems in order to minimize power consumption as well as to meet the deadlines specified by application users. A bag-of-tasks application should finish all the sub-tasks before the deadline, so that the DVS scheduling scheme should consider the deadline as well. We provide the DVS scheduling algorithms for both time-shared and space-shared resource sharing policies. The simulation results show that the proposed algorithms reduce much power consumption compared to static voltage schemes. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> A cloud can be defined as a pool of computer resources that can host a variety of different workloads, ranging from long-running scientific jobs (e.g., modeling and simulation) to transactional work (e.g., web applications). A cloud computing platform dynamically provisions, configures, reconfigures, and de-provisions servers as needed. Servers in the cloud can be physical machines or virtual machines. Cloud-hosting facilities, including many large businesses that run clouds in-house, became more common as businesses tend to out-source their computing needs more and more. For large-scale clouds power consumption is a major cost factor. Modern computing devices have the ability to run at various frequencies each one with a different power consumption level. Hence, the possibility exists to choose frequencies at which applications run to optimize total power consumption while staying within the constraints of the Service Level Agreements (SLA) that govern the applications. 
In this paper, we analyze the mathematical relationship of these SLAs and the number of servers that should be used and at what frequencies they should be running. We discuss a proactive provisioning model that includes hardware failures, devices available for services, and devices available for change management, all as a function of time and within constraints of SLAs. We provide scenarios that illustrate the mathematical relationships for a sample cloud and that provides a range of possible power consumption savings for different environments. <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> Energy efficiency is a major concern in modern high-performance computing system design. In the past few years, there has been mounting evidence that power usage limits system scale and computing density, and thus, ultimately system performance. However, despite the impact of power and energy on the computer systems community, few studies provide insight to where and how power is consumed on high-performance systems and applications. In previous work, we designed a framework called PowerPack that was the first tool to isolate the power consumption of devices including disks, memory, NICs, and processors in a high-performance cluster and correlate these measurements to application functions. In this work, we extend our framework to support systems with multicore, multiprocessor-based nodes, and then provide in-depth analyses of the energy consumption of parallel applications on clusters of these systems. These analyses include the impacts of chip multiprocessing on power and energy efficiency, and its interaction with application executions. In addition, we use PowerPack to study the power dynamics and energy efficiencies of dynamic voltage and frequency scaling (DVFS) techniques on clusters. Our experiments reveal conclusively how intelligent DVFS scheduling can enhance system energy efficiency while maintaining performance. <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> Reducing energy consumption for high end computing can bring various benefits such as, reduce operating costs, increase system reliability, and environment respect. This paper aims to develop scheduling heuristics and to present application experience for reducing power consumption of parallel tasks in a cluster with the Dynamic Voltage Frequency Scaling (DVFS) technique. In this paper, formal models are presented for precedence-constrained parallel tasks, DVFS enabled clusters, and energy consumption. This paper studies the slack time for non-critical jobs, extends their execution time and reduces the energy consumption without increasing the task’s execution time as a whole. Additionally, Green Service Level Agreement is also considered in this paper. By increasing task execution time within an affordable limit, this paper develops scheduling heuristics to reduce energy consumption of a tasks execution and discusses the relationship between energy consumption and task execution time. Models and scheduling heuristics are examined with a simulation study. Test results justify the design and implementation of proposed energy aware scheduling heuristics in the paper. 
<s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> MapReduce is a programming model for data intensive computing on large-scale distributed systems. With its wide acceptance and deployment, improving the energy efficiency of MapReduce will lead to significant energy savings for data centers and computational grids. In this paper, we study the performance and energy efficiency of the Hadoop implementation of MapReduce under the context of energy-proportional computing. We consider how MapReduce efficiency varies with two runtime configurations: resource allocation that changes the number of available concurrent workers, and DVFS (Dynamic Voltage and Frequency Scaling) that adjusts the processor frequency based on the workloads' computational needs. Our experimental results indicate significant energy savings can be achieved from judicious resource allocation and intelligent DVFS scheduling for computation intensive applications, though the level of improvements depends on both workload characteristic of the MapReduce application and the policy of resource and DVFS scheduling. <s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device. <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> Energy consumption has become a major concern to the widespread deployment of cloud data centers. The growing importance for parallel applications in the cloud introduces significant challenges in reducing the power consumption drawn by the hosted servers. In this paper, we propose an enhanced energy-efficient scheduling (EES) algorithm to reduce energy consumption while meeting the performance-based service level agreement (SLA). 
Since slacking non-critical jobs can achieve significant power saving, we exploit the slack room and allocate them in a global manner in our schedule. Using random generated and real-life application workflows, our results demonstrate that EES is able to reduce considerable energy consumption while still meeting SLA. <s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> The cloud computing paradigm enables the work anywhere anytime paradigm by allowing application execution and data storage on remote servers. This is especially useful for mobile computing and communication devices that are constrained in terms of computation power and storage. It is however not clear how preferable cloud-based applications would be for mobile device users. For users of such battery life constrained devices, the most important criteria might be the energy consumed by the applications they run. The goal of this work is to characterize under what scenarios cloud-based applications would be relatively more energy-efficient for users of mobile devices. This work first empirically studies the energy consumption for various types of applications and for multiple classes of devices to make this determination. Subsequently, it presents an analytical model that helps characterize energy consumption of mobile devices under both the cloud and non-cloud application scenarios. Finally, an algorithm GreenSpot is presented that considers application features and energy-performance tradeoffs to determine whether cloud or local execution will be more preferable. <s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> With the development of cloud computing, more and more data-intensive workflows have been deployed on virtualized datacenters. As a result, the energy spent on massive data accessing grows rapidly. In this paper, an energy aware scheduling algorithm is proposed, which introduces a novel heuristic called Minimal Data-Accessing Energy Path for scheduling data-intensive workflows aiming to reduce the energy consumption of intensive data accessing. Extensive experiments based on both synthetical and real workloads are conducted to investigate the effectiveness and performance of the proposed scheduling approach. The experimental results show that the proposed heuristic scheduling can significantly reduce the energy consumption of storing/retrieving intermediate data generated during the execution of data intensive workflow. In addition, it exhibits better robustness than existing algorithms when cloud systems are in presence of I/O intensive workloads. <s> BIB010 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> In this paper, the challenge of scheduling a parallel application on a cloud environment to achieve both time and energy efficiency is addressed. Two energy-aware task scheduling algorithms called the EHEFT and the ECPOP are proposed to address the challenge. These algorithms have the objective of trying to sustain the makespan and energy consumption at the same time. The concept is to use a metric that identify the inefficient processors and shut them down to reduce energy consumption. 
Then, the task is rescheduled to use fewer processors to obtain more energy efficiency. The experimental results from the simulation show that our enhanced algorithms not only reduce the energy consumption, but also maintain a good quality of the scheduling. This will enable the efficient use of the cloud system as a large scalable computing platform. <s> BIB011 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> Offloading is one major type of collaborations between mobile devices and clouds to achieve less execution time and less energy consumption. Offloading decisions for mobile cloud collaboration involve many decision factors. One of important decision factors is the network unavailability that has not been well studied. This paper presents an offloading decision model that takes network unavailability into consideration. Network with some unavailability can be modeled as an alternating renewal process. Then, application execution time and energy consumption in both ideal network and network with some unavailability are analyzed. Based on the presented theoretical model, an application partition algorithm and a decision module are presented to produce an offloading decision that is resistant to network unavailability. Simulation results demonstrate good performance of proposed scheme, where the proposed partition algorithm is analyzed in different application and cloud scenarios. <s> BIB012 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> To reduce the energy consumption in mobile devices, intricate applications are divided into several interconnected partitions like Task Interaction Graph (TIG) and are of floaded to cloud resources or nearby surrogates. Dynamic Voltage and Frequency Scaling (DVFS) is an effective technique to reduce the power consumption during mapping and scheduling stages. Most of the existing research works proposed several task scheduling solutions by considering the voltage/frequency scaling at the scheduling stage alone. But, the efficacy of these solutions can be improved by applying the DVFS in both mapping as well as scheduling stages. This research work attempts to apply DVFS in mapping as well as scheduling stages by combining both the task-resource and resource-frequency assignments in a single problem. The idea is to estimate the worst-case global slack time for each task-resource assignment, distributes it over the TIG and slowing down the execution of tasks using dynamic voltage and frequency scaling. This optimal slowdown increases the computation time of TIG without exceeding its worst-case completion time. Further, the proposed work models the code offloading as a Quadratic Assignment Problem (QAP) in Matlab-R2012b and solves it using two-level Genetic Algorithm (GA) of the global optimization toolbox. The effectiveness of the proposed model is assessed by a simulation and the results conclude that there is an average energy savings of 35% in a mobile device. <s> BIB013 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> Reducing energy consumption without scarifying service quality is important for cloud computing. 
Efficient scheduling algorithms HEFT-D and HEFT-DS based on frequency-scaling and state-switching techniques are proposed. Our scheduling algorithms use the fact that the hosts employing a lower frequency or entering a sleeping state may consume less energy without leading to a longer makespan. Experimental results have shown that our algorithms maintain the performance as good as that of HEFT while the energy consumption is reduced. <s> BIB014 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> There is an increasing interest for cloud services to be provided in a more energy efficient way. The growing deployment of large-scale, complex workflow applications onto cloud computing hosts is being faced with crucial challenges in reducing the power consumption without violating the service level agreement (SLA). In this paper, we consider cloud hosts which can operate in different power states with different capacities respectively, and propose a novel scheduling heuristic for workflows to reduce energy consumption while still meeting deadline constraint. The proposed heuristic is evaluated using simulation with four different real-world applications. The observed results indicates that our heuristic does significantly outperform the existing approaches. <s> BIB015 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> This paper presents a quantitative study on the energy-traffic tradeoff problem from the perspective of entire Wireless Local Area Network (WLAN). We propose a novel Energy-Efficient Cooperative Offloading Model (E2COM) for energy-traffic tradeoff, which can ensure the fairness of energy consumption of mobile devices and reduce the computation repetition and eliminate the Internet data traffic redundancy through cooperative execution and sharing computation results. We design an Online Task Scheduling Algorithm (OTS) based on a pricing mechanism and Lyapunov optimization to address the problem without predicting future information on task arrivals, transmission rates and so on. OTS can achieve a desirable trade- off between the energy consumption and Internet data traffic by appropriately setting the tradeoff coefficient. Simulation results demonstrate that E2COM is more efficient than no offloading and cloud offloading for a variety of typical mobile devices, applications and link qualities in WLAN. <s> BIB016 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> Most recently existing studies pay too much attention on low energy consumption or execution time for tasks with precedence constraint in heterogeneous computing systems. In most cases, system reliability is more important than other performance metrics. Energy consumption and system reliability are two conflicting objectives. In this study, we present a novel bi-objective genetic algorithm BOGA to pursuit low energy consumption and high system reliability simultaneously. The proposed BOGA can offer the users more flexibility to submit their jobs to a data center. 
In the comparison with excellent algorithms multi-objective heterogeneous earliest finish time MOHEFT and Multi-objective Differential Evolution MODE, BOGA is significantly better in terms of finding spread of compromise solutions. <s> BIB017 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Overall Energy Consumption Model <s> Data centers are critical, energy-hungry infrastructures that run large-scale Internet-based services. Energy consumption models are pivotal in designing and optimizing energy-efficient operations to curb excessive energy consumption in data centers. In this paper, we survey the state-of-the-art techniques used for energy consumption modeling and prediction for data centers and their components. We conduct an in-depth study of the existing literature on data center power modeling, covering more than 200 models. We organize these models in a hierarchical structure with two main branches focusing on hardware-centric and software-centric power models. Under hardware-centric approaches we start from the digital circuit level and move on to describe higher-level energy consumption models at the hardware component level, server level, data center level, and finally systems of systems level. Under the software-centric approaches we investigate power models developed for operating systems, virtual machines and software applications. This systematic approach allows us to identify multiple issues prevalent in power modeling of different levels of data center systems, including: i) few modeling efforts targeted at power consumption of the entire data center ii) many state-of-the-art power models are based on a few CPU or server metrics, and iii) the effectiveness and accuracy of these power models remain open questions. Based on these observations, we conclude the survey by describing key challenges for future research on constructing effective and accurate data center power models. <s> BIB018
When it comes to the environment of a Cloud application, we are only concerned with the IT equipment, excluding cooling and other facilities. From the viewpoint of resource partitioning, the deployment environment of Cloud applications has normally been modeled as a resource pool comprising a set of resource items:

$$R(A) = \{r_1, r_2, \ldots, r_M\} \quad (5)$$

where r_i is the i-th resource item within the pool R(A) consisting of M resource items. Since we focus on the environmental resources with respect to a single Cloud application, in this survey, we clarify that R(A) is only composed of the resource items employed by the aforementioned Cloud application A. Moreover, the employed resource items might have different types BIB010 , and resource items of the same type are not necessarily identical BIB011 . Then, the energy consumption of A can be modeled based on the involved resources' power consumption. In fact, a key characteristic of environment-specific modeling is that it relies on the power consumption of environmental resources. For example, by denoting the power consumed in the resource item r_i at time t as P(r_i, t), the studies BIB004 BIB006 modeled the energy expense of a parallel application A running with M resource items during the time interval (t_1, t_2):

$$E(A) = \sum_{i=1}^{M} \int_{t_1}^{t_2} P(r_i, t)\, dt \quad (6)$$

If we define every resource item to be a combination of various power-consuming components, P(r_i, t) of resource r_i can further be specified into $\sum_{j \in \Omega} P(r_{i,j}, t)$, where Ω is the set of power-consuming components BIB010 . By dividing Ω into the aforementioned four resource types (namely cpu, net, mem and disk for short), we are able to update Equation (6) and make it compatible with Equation (4):

$$E(A) = \sum_{i=1}^{M} \int_{t_1}^{t_2} \big(P(r_{i,cpu}, t) + P(r_{i,net}, t) + P(r_{i,mem}, t) + P(r_{i,disk}, t)\big)\, dt \quad (7)$$

Focusing on the CMOS circuits involved in the IT resources BIB008 , since a CMOS circuit has two power consumption components (namely static power and dynamic power), a Cloud application's energy consumption can be distinguished between the static and dynamic parts BIB012 , as shown in Equation (8):

$$E(A) = \big(P_{static}(R(A)) + P_{dynamic}(R(A))\big) \cdot T(A) \quad (8)$$

where P_static(R(A)) and P_dynamic(R(A)) represent the average static and dynamic power consumed in the application environment R(A) during the application runtime T(A). In theory, Static Power indicates the essential power for keeping IT resources in the power-on state (e.g., maintaining the basic circuits and system clock), which is independent of any workload BIB013 and cannot be avoided until the whole system is turned off BIB010 BIB017 . As such, the static power consumption is normally modeled as a constant that does not scale with other factors BIB003 . In practice, the reverse-bias leakage between diffused regions and the substrate will also result in a particular amount of static power consumption, and this leakage is proportionally influenced by the temperature BIB001 . Further considering the proportional impact of dynamic power on the temperature, some studies estimated the static power as a fraction α of its dynamic counterpart, where the fraction is usually less than 30% BIB002 BIB005 . Thus, during the execution of an application, the static energy consumption can be expressed as:

$$E_{static}(A) = \alpha \cdot P_{dynamic}(R(A)) \cdot T(A), \quad \alpha < 30\% \quad (9)$$

Dynamic Power is the dynamic utilization of power in the environmental IT resources when dealing with workloads. Since the dynamic power dominates the whole power consumption in the popular CMOS technology BIB017 , most of the relevant studies only employed the dynamic power for modeling the energy consumption of Cloud applications (e.g., BIB013 BIB011 ).
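In practice, the integral in Equation (6) can be approximated by summing sampled power readings over the execution interval. The Python sketch below shows such a discretized evaluation together with an Equation (9)-style static-energy estimate; the power traces, the sampling interval DT, the function name dynamic_energy, and the fraction ALPHA are all illustrative assumptions, not measurements from the cited studies.

```python
# Sketch of the environment-specific overall model: Equation (6) integrates
# each resource item's power P(r_i, t) over (t1, t2); here the integral is
# approximated by a Riemann sum over sampled power readings.

DT = 1.0  # assumed sampling interval in seconds

# Hypothetical per-resource power traces P(r_i, t), one sample per DT [W].
power_traces = {
    "r1": [95.0, 120.0, 130.0, 110.0],
    "r2": [80.0,  85.0, 140.0, 135.0],
}

def dynamic_energy(traces: dict, dt: float) -> float:
    """E(A) ~= sum_i sum_t P(r_i, t) * dt  (discretized Equation (6))."""
    return sum(sum(trace) * dt for trace in traces.values())

e_dynamic = dynamic_energy(power_traces, DT)

# Equation (9)-style estimate: static power as a fraction (< 30%) of its
# dynamic counterpart, giving E_static = alpha * E_dynamic as an upper bound.
ALPHA = 0.3
e_static = ALPHA * e_dynamic

print(f"E_dynamic = {e_dynamic:.1f} J, E_static <= {e_static:.1f} J")
print(f"E(A) <= {e_dynamic + e_static:.1f} J")
```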
Furthermore, from the perspective of a system rather than of a CMOS gate, we distinguish between the active and idle power consumption according to different load levels of a particular IT resource during the execution of a Cloud application BIB007 BIB015 . Active Power refers to the power for actively executing tasks on an IT resource (i.e., > 0% load), and Idle Power indicates the power consumption when the IT resource is ready to work while doing nothing (i.e., 0% load). Note that IT resources are not truly static at idle states, because there are still backend workloads. To be aligned with the definition of dynamic power (when dealing with workloads), we clarify that static power is excluded when discussing active power and idle power in this survey. In fact, the study BIB013 has combined idle power with static power (e.g., the power corresponding to the sleep resource state BIB014 BIB009 ) into the so-called standby power. Therefore, by focusing on the dynamic power, the dynamic energy expense for completing the Cloud application A can be modeled as:

$$E_{dynamic}(A) = P_{idle}(R(A)) \cdot T_{idle}(R(A)) + P_{active}(R(A)) \cdot T_{active}(R(A)) \quad (10)$$

where T_idle(R(A)) and T_active(R(A)) respectively indicate the average idle time and the average active time of the environmental IT resources R(A). It is noteworthy that T_idle(R(A)) + T_active(R(A)) = T(A). Since different resource items may be idle at different moments during the continuous execution of the Cloud application A, it is improper to use fixed fractions of T(A) to calculate A's idle and active energy consumption. When it comes to E_active(A), one of the active energy components reflects the energy used for driving the data flow of the Cloud application A. The data flow might comprise various interactive execution elements (cf. Fig. 2) with respect not only to network equipment (e.g., BIB013 ) but also to other types of resources (e.g., BIB010 ). From the perspective of a single resource item r_i, the corresponding data flow can be distinguished as either data input or data output. By emphasizing the input/output channel between two consecutive resource items, the energy consumption of A's data flow has been modeled as follows:

$$E_{flow}(A) = \sum_{(r_i \to r_j)} \big(P_{out}(r_i) + P_{in}(r_j)\big) \cdot \frac{D(r_i \to r_j)}{\Phi(r_i \to r_j)} \quad (11)$$

where P_out(r_i) (resp. P_in(r_j)) is the power of resource r_i (resp. r_j) when outputting/inputting the data D(r_i → r_j), and Φ(r_i → r_j) refers to the data throughput between those two different resource items r_i and r_j . Instead of emphasizing the input/output channel, Equation (11) has been rewritten in BIB016 by focusing on the input/output activities of individual resource items:

$$E_{flow}(A) = \sum_{i=1}^{M} \left( P_{out}(r_i) \cdot \frac{D(r_i \to)}{\Phi(r_i \to)} + P_{in}(r_i) \cdot \frac{D(r_i \leftarrow)}{\Phi(r_i \leftarrow)} \right) \quad (12)$$

where D(r_i →)/D(r_i ←) represents the size of the output/input data of the resource item r_i, and Φ(r_i →)/Φ(r_i ←) indicates the data throughput when r_i is outputting/inputting data.
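The channel-based data-flow model in Equation (11) can be illustrated with a short Python sketch: the energy of moving D(r_i → r_j) bits is the sum of the endpoint powers times the transfer duration D/Φ. The resource names, power values, and throughputs below are hypothetical placeholders.

```python
# Sketch of the data-flow energy model in Equation (11): the energy of a
# transfer is (P_out(r_i) + P_in(r_j)) * D / Phi, assuming both endpoints
# are active for the whole transfer duration D / Phi.

# (source, destination, data size D [bits], throughput Phi [bits/s])
transfers = [
    ("r1", "r2", 8.0e9, 1.0e9),   # 1 GB over a 1 Gbit/s channel
    ("r2", "r3", 2.0e9, 1.0e8),   # 250 MB over a 100 Mbit/s channel
]

P_OUT = {"r1": 4.0, "r2": 4.5, "r3": 3.5}  # output power per resource [W]
P_IN  = {"r1": 3.0, "r2": 3.2, "r3": 2.8}  # input power per resource [W]

# Equation (11): sum over the input/output channels (r_i -> r_j)
e_flow = sum(
    (P_OUT[src] + P_IN[dst]) * size / throughput
    for src, dst, size, throughput in transfers
)
print(f"data-flow energy: {e_flow:.1f} J")
```

Equation (12) would instead accumulate each resource item's own input and output terms, which yields the same total when every transfer's two endpoints are counted once each.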
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> Power-aware scheduling problem has been a recent issue in cluster systems not only for operational cost due to electricity cost, but also for system reliability. As recent commodity processors support multiple operating points under various supply voltage levels, Dynamic Voltage Scaling (DVS) scheduling algorithms can reduce power consumption by controlling appropriate voltage levels. In this paper, we provide power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints on DVS-enabled cluster systems in order to minimize power consumption as well as to meet the deadlines specified by application users. A bag-of-tasks application should finish all the sub-tasks before the deadline, so that the DVS scheduling scheme should consider the deadline as well. We provide the DVS scheduling algorithms for both time-shared and space-shared resource sharing policies. The simulation results show that the proposed algorithms reduce much power consumption compared to static voltage schemes. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> Reducing energy consumption has been an essential technique for Cloud resources or datacenters, not only for operational cost, but also for system reliability. As Cloud computing becomes emergent for Anything as a Service (XaaS) paradigm, modern real-time Cloud services are also available throughout Cloud computing. In this work, we investigate power-aware provisioning of virtual machines for real-time services. Our approach is (i) to model a real-time service as a real-time virtual machine request; and (ii) to provision virtual machines of datacenters using DVFS (Dynamic Voltage Frequency Scaling) schemes. We propose several schemes to reduce power consumption and show their performance throughout simulation results. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start from the small and increase resources only when there is a rise in service demand. However, despite the fact that cloud computing offers huge opportunities to the IT industry, the development of cloud computing technology is currently at its infancy, with many issues still to be addressed. In this paper, we present a survey of cloud computing, highlighting its key concepts, architectural principles, state-of-the-art implementation as well as research challenges. The aim of this paper is to provide a better understanding of the design challenges of cloud computing and identify important research directions in this increasingly important area. 
<s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> Context: Systematic literature review (SLR) has become an important research methodology in software engineering since the introduction of evidence-based software engineering (EBSE) in 2004. One critical step in applying this methodology is to design and execute appropriate and effective search strategy. This is a time-consuming and error-prone step, which needs to be carefully planned and implemented. There is an apparent need for a systematic approach to designing, executing, and evaluating a suitable search strategy for optimally retrieving the target literature from digital libraries. Objective: The main objective of the research reported in this paper is to improve the search step of undertaking SLRs in software engineering (SE) by devising and evaluating systematic and practical approaches to identifying relevant studies in SE. Method: We have systematically selected and analytically studied a large number of papers (SLRs) to understand the state-of-the-practice of search strategies in EBSE. Having identified the limitations of the current ad-hoc nature of search strategies used by SE researchers for SLRs, we have devised a systematic and evidence-based approach to developing and executing optimal search strategies in SLRs. The proposed approach incorporates the concept of 'quasi-gold standard' (QGS), which consists of collection of known studies, and corresponding 'quasi-sensitivity' into the search process for evaluating search performance. Results: We conducted two participant-observer case studies to demonstrate and evaluate the adoption of the proposed QGS-based systematic search approach in support of SLRs in SE research. Conclusion: We report their findings based on the case studies that the approach is able to improve the rigor of search process in an SLR, as well as it can serve as a supplement to the guidelines for SLRs in EBSE. We plan to further evaluate the proposed approach using a series of case studies on varying research topics in SE. <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> Cloud computing clusters distributed computers to provide applications as services and on-demand resources over Internet. From the perspective of average and total energy consumption, such consolidated resource enhances the energy efficiency on both clients and servers. However, cloud computing has a different power consumption pattern from the traditional storage oriented Internet services. The computation oriented implementation of cloud service broadens the gap between the peak power demand and base power demand of a data center. A higher peak demand implies the need of feeder capacity expansion, which requires a considerable investment. This study proposes a computation related approach to lessen the increasing power demand of cloud service data centers. Through appropriated designs, some frequently used computing algorithms can be performed by either clients or servers. As a model presented in this paper, such client-server balanced computation resource integration suggests an energy-efficient and cost-effective cloud service data center. 
<s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> The popularity of smartphones is growing every day. Thanks to the more powerful hardware the applications can run more tasks and use broadband network connection, however there are several known issues. For example, under typical usage (messaging, browsing, and gaming) a smartphone can be discharged in one day. This makes the battery life one of the biggest problems of the mobile devices. That is a good motivation to find energy-efficient solutions. One of the possible methods is the “computation offloading” mechanism, which means that some of the tasks are uploaded to the cloud. In this paper we are going to present a new energy-efficient job scheduling model and a measurement infrastructure which is used to analyze the energy consumption of smartphones. Our results are going to be demonstrated through some scenarios where the goal is to save energy. The offloading task is based on LP and scheduling problems. <s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> Energy consumption has become a major concern to the widespread deployment of cloud data centers. The growing importance for parallel applications in the cloud introduces significant challenges in reducing the power consumption drawn by the hosted servers. In this paper, we propose an enhanced energy-efficient scheduling (EES) algorithm to reduce energy consumption while meeting the performance-based service level agreement (SLA). Since slacking non-critical jobs can achieve significant power saving, we exploit the slack room and allocate them in a global manner in our schedule. Using random generated and real-life application workflows, our results demonstrate that EES is able to reduce considerable energy consumption while still meeting SLA. <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> The cloud computing paradigm enables the work anywhere anytime paradigm by allowing application execution and data storage on remote servers. This is especially useful for mobile computing and communication devices that are constrained in terms of computation power and storage. It is however not clear how preferable cloud-based applications would be for mobile device users. For users of such battery life constrained devices, the most important criteria might be the energy consumed by the applications they run. The goal of this work is to characterize under what scenarios cloud-based applications would be relatively more energy-efficient for users of mobile devices. This work first empirically studies the energy consumption for various types of applications and for multiple classes of devices to make this determination. Subsequently, it presents an analytical model that helps characterize energy consumption of mobile devices under both the cloud and non-cloud application scenarios. Finally, an algorithm GreenSpot is presented that considers application features and energy-performance tradeoffs to determine whether cloud or local execution will be more preferable. 
<s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> To reduce the energy consumption in mobile devices, intricate applications are divided into several interconnected partitions like Task Interaction Graph (TIG) and are of floaded to cloud resources or nearby surrogates. Dynamic Voltage and Frequency Scaling (DVFS) is an effective technique to reduce the power consumption during mapping and scheduling stages. Most of the existing research works proposed several task scheduling solutions by considering the voltage/frequency scaling at the scheduling stage alone. But, the efficacy of these solutions can be improved by applying the DVFS in both mapping as well as scheduling stages. This research work attempts to apply DVFS in mapping as well as scheduling stages by combining both the task-resource and resource-frequency assignments in a single problem. The idea is to estimate the worst-case global slack time for each task-resource assignment, distributes it over the TIG and slowing down the execution of tasks using dynamic voltage and frequency scaling. This optimal slowdown increases the computation time of TIG without exceeding its worst-case completion time. Further, the proposed work models the code offloading as a Quadratic Assignment Problem (QAP) in Matlab-R2012b and solves it using two-level Genetic Algorithm (GA) of the global optimization toolbox. The effectiveness of the proposed model is assessed by a simulation and the results conclude that there is an average energy savings of 35% in a mobile device. <s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> This paper presents a quantitative study on the energy-traffic tradeoff problem from the perspective of entire Wireless Local Area Network (WLAN). We propose a novel Energy-Efficient Cooperative Offloading Model (E2COM) for energy-traffic tradeoff, which can ensure the fairness of energy consumption of mobile devices and reduce the computation repetition and eliminate the Internet data traffic redundancy through cooperative execution and sharing computation results. We design an Online Task Scheduling Algorithm (OTS) based on a pricing mechanism and Lyapunov optimization to address the problem without predicting future information on task arrivals, transmission rates and so on. OTS can achieve a desirable trade- off between the energy consumption and Internet data traffic by appropriately setting the tradeoff coefficient. Simulation results demonstrate that E2COM is more efficient than no offloading and cloud offloading for a variety of typical mobile devices, applications and link qualities in WLAN. <s> BIB010 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> Advances in sensor cloud computing to support vehicular applications are becoming more important as the need to better utilize computation and communication resources and make them energy efficient. In this paper, we propose a novel approach to minimize energy consumption of processing a vehicular application within mobile wireless sensor networks (MWSN) while satisfying a certain completion time requirement. 
Specifically, the application can be optimally partitioned, offloaded and executed with the help of peer sensor devices, e.g., a smart phone, thus the proposed solution can be treated as a joint optimization of computing and networking resources. Our theoretical analysis is supplemented by simulation results that show an energy saving of 63% compared to traditional cloud computing methods. Moreover, a prototype cloud system has been developed to validate the efficiency of sensor cloud strategies in dealing with diverse vehicular applications. <s> BIB011 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> Mobile Cloud Computing (MCC) is emerging as a main ubiquitous computing platform which enables to leverage the resource limitations of mobile devices and wireless networks by offloading data-intensive computation tasks from resource-poor mobile devices to resource-rich clouds. In this paper, we consider an online location-aware offloading problem in a two-tiered mobile cloud computing environment consisting of a local cloudlet and remote clouds, with an objective to fairly share the use of the cloudlet by consuming the same proportion of their mobile device energy, while keeping their individual SLA, for which we devise an efficient online algorithm. We also conduct experiments by simulations to evaluate the performance of the proposed algorithm. Experimental results demonstrate that the proposed algorithm is promising and outperforms other heuristics. <s> BIB012 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> With the development of cloud computing, the problem of scheduling workflow in cloud system attracts a large amount of attention. In general, the cloud workflow scheduling problem requires to consider a variety of optimization objectives with some constraints. Traditional workflow scheduling methods focus on single optimization goal like makespan and single constraint like deadline or budget. In this paper, we first make a unified formalization of the optimality problem of multi-constraint and multi-objective cloud workflow scheduling using pareto optimality theory. We also present a two-constraint and two-objective case study, considering deadline, budget constraints and energy consumption, reliability objectives. A general list scheduling algorithm and a tuning mechanism are designed to solve this problem. Through extensive experiments, it confirms the efficiency of the unified multi-constraint and multi-objective cloud workflow scheduling system. <s> BIB013 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> Mobile cloud computing (MC2) is emerging as a promising computing paradigm which helps alleviate the conflict between resource-constrained mobile devices and resource-consuming mobile applications through computation offloading. In this paper, we analyze the computation offloading problem in cloudlet-based mobile cloud computing.
Different from most of the previous works which are either from the perspective of a single user or under the setting of a single wireless access point (AP), we research the computation offloading strategy of multiple users via multiple wireless APs. With the widespread deployment of WLAN, offloading via multiple wireless APs will obtain extensive application. Taking energy consumption and delay (including computing and transmission delay) into account, we present a game-theoretic analysis of the computation offloading problem while mimicking the selfish nature of the individuals. In the case of homogeneous mobile users, conditions of Nash equilibrium are analyzed, and an algorithm that admits a Nash equilibrium is proposed. For heterogeneous users, we prove the existence of Nash equilibrium by introducing the definition of exact potential game and design a distributed computation offloading algorithm to help mobile users choose proper offloading strategies. Extensive numerical simulations have been conducted and results demonstrate that the proposed algorithm can achieve desired system performance. <s> BIB014 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> Scalable and fault-tolerant information systems like cloud systems are realized in server cluster systems. Server cluster systems are equipped with virtual machines to provide applications with scalable and fault-tolerant services. Scalable and fault-tolerant application services can be provided by balancing processing load among virtual machines to perform application processes. On the other hand, a large amount of electric energy is consumed in a server cluster system since multiple virtual machines are performed on multiple servers which consume electric energy to perform application processes. In order to design and implement an energy-aware server cluster system, the computation model and power consumption model of a server to perform application processes on multiple virtual machines have to be defined. In this paper, we first define the computation model of a virtual machine to perform application processes. We also define the power consumption model of a server to perform application processes on virtual machines. <s> BIB015 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Environment-specific Computation Energy Consumption Model <s> Advances in future computing to support emerging sensor applications are becoming more important as the need to better utilize computation and communication resources and make them energy efficient. As a result, it is predicted that intelligent devices and networks, including mobile wireless sensor networks (MWSN), will become the new interfaces to support future applications. In this paper, we propose a novel approach to minimize energy consumption of processing an application in MWSN while satisfying a certain completion time requirement. Specifically, by introducing the concept of cooperation, the logics and related computation tasks can be optimally partitioned, offloaded and executed with the help of peer sensor nodes, thus the proposed solution can be treated as a joint optimization of computing and networking resources.
Moreover, for a network with multiple mobile wireless sensor nodes, we propose energy efficient cooperation node selection strategies to offer a tradeoff between fairness and energy consumption. Our performance analysis is supplemented by simulation results to show the significant energy saving of the proposed solution. <s> BIB016
|
Following the convention of Equations (2) and (5), here we consider a computation-intensive task n_cpu running on the compute resource r_cpu. As mentioned above, the dynamic power dominates the power consumption of a CPU's CMOS circuits, and the dynamic CPU power generally depends on the supply voltage v and operating frequency f via the relation P = k·v²·f. The operating frequency-based model specified in Equation (13) has widely been used for applications' energy consumption in both client devices and Cloud servers:

E(n_cpu) = k·a²·f³·T(n_cpu)    (13)

where the energy coefficient k depends on the CPU's chip architecture; the linearly proportional relationship between the operating clock frequency f and the supply voltage v is modeled as v = a·f, where a is a constant coefficient. It is evident that the consumed energy of the task n_cpu is directly proportional to its makespan, i.e. E(n_cpu) ∝ T(n_cpu) BIB006 . However, a task's makespan varies in practice due to dynamic changes in CPU capacity caused by possible voltage scaling at runtime. If using τ to denote the time for executing the task n_cpu at the maximum processing capacity, then the practical execution time T(n_cpu) would be τ·(v_max/v) BIB013 or τ·(f_max/f) BIB002 . In particular, the levels of voltage v and frequency f are within the ranges [v_min, v_max] and [f_min, f_max] respectively. Accordingly, the previous frequency-based energy consumption model has been updated by BIB002 into:

E(n_cpu) = k·a²·f³·τ·(f_max/f) = k·a²·τ·f_max·f²

Recall that the computation workload induced by a task can be measured in CPU cycles (cf. Task Size in Section 3.4). Suppose the task n_cpu comprises C cycles in total. Its makespan can directly be calculated as C/f at frequency f. Then, as proposed in BIB001 BIB014 BIB010 , the energy consumption of such a task can be modeled as:

E(n_cpu) = k·a²·f³·(C/f) = k·a²·C·f²

In the extreme case, the operating frequency is assumed changeable after every single CPU cycle BIB011 BIB016 . Given the single cycle time 1/f_c at frequency f_c, one CPU cycle's energy consumption can be represented as k·a²·f_c², and thus the task's energy consumption can be expressed as:

E(n_cpu) = Σ_{c=1}^{C} k·a²·f_c²

Considering that f_c ∈ [f_min, f_max] and there are only limited frequency levels within [f_min, f_max], we can categorize the CPU cycles into different frequency-level groups. By using δ_f to denote the execution fraction of the task n_cpu at the frequency f BIB009 , the energy consumption model can be rewritten with regard to either the CPU-cycle fractions (i.e. C·δ_f) or the execution-time fractions (i.e. T(n_cpu)·δ_f), as shown below:

E(n_cpu) = Σ_f k·a²·(C·δ_f)·f², or E(n_cpu) = Σ_f k·a²·(T(n_cpu)·δ_f)·f³

As explained in Equation (10), the idle state of compute resources caused by a task is generally unavoidable due to workload offloading or imbalanced parallel execution. In particular, a compute resource is considered to be idle when its operating frequency (or supply voltage) reaches the lowest level f_min (or v_min) BIB007 . Accordingly, by focusing on the dynamic power, the dynamic energy expense for running the task n_cpu on the resource r_cpu can be separated into its active and idle parts:

E(n_cpu) = k·a²·f³·T_active(n_cpu) + k·a²·f_min³·T_idle(n_cpu)

Similar to Equations (6) and (7), it is also common to model Cloud application energy consumption without specifying power details such as the operating frequency. For example, by assuming the resource power P(r_cpu) and the compute speed S(r_cpu) to be constant when running the task n_cpu, the consumed energy was calculated in BIB012 through:

E(n_cpu) = P(r_cpu)·(W(n_cpu)/S(r_cpu))

where W(·) is a generic workload function, and then W(n_cpu) refers to the workload of the task n_cpu. It is clear that the idle state of the compute resource has been excluded in this case.
Therefore, we particularly label the model above as an active energy consumption model. Instead of a constant value, the power consumed in a compute resource has been identified to be an exponential function of the resource utilization BIB005 . By using P_idle(r_cpu) and P_full(r_cpu) to respectively represent the compute resource's empty-load and full-load powers, the energy consumption for running the task n_cpu on the compute resource can be modeled as an exponential interpolation between P_idle(r_cpu) and P_full(r_cpu) over the task's lifetime BIB003 , where α and β are resource-specific parameters that need to be determined through empirical measurements. The context-dependent notation U(t) denotes the utilization of the compute resource at time t. In the straightforward case, U(t) directly equals the CPU load fraction BIB008 . As for a multi-CPU server, U(t) was estimated as the number of active CPU cores among all the available ones BIB015 . Considering that the compute resource utilization would also be proportional to the workload being dealt with, the study BIB005 further modeled U(t) = γ·W(n_cpu, t) + λ, where γ and λ are both resource-specific parameters, and the workload W(n_cpu, t) was measured by the number of user connections at time t.
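To make the computation models above concrete, the following minimal Python sketch evaluates the frequency-based form of Equation (13), the cycle-based variant, and the fraction-based variant side by side. It is an illustrative sketch only: the chip coefficients k and a, the frequency levels, and the cycle count are hypothetical placeholders rather than measured values.

    # Sketch of the DVFS-based computation energy models (hypothetical parameters).

    def energy_fixed_frequency(k, a, f, makespan):
        # Equation (13)-style model: E = k * a^2 * f^3 * T(n_cpu).
        return k * (a ** 2) * (f ** 3) * makespan

    def energy_per_cycle(k, a, cycles, f):
        # Cycle-based model: the makespan is C/f, so E = k * a^2 * C * f^2.
        return k * (a ** 2) * cycles * (f ** 2)

    def energy_by_fractions(k, a, cycles, fractions):
        # Fraction-based model: fractions maps each frequency level f to its
        # execution share delta_f, so E = sum_f k * a^2 * (C * delta_f) * f^2.
        assert abs(sum(fractions.values()) - 1.0) < 1e-9
        return sum(k * (a ** 2) * (cycles * delta) * (f ** 2)
                   for f, delta in fractions.items())

    # Hypothetical example: a 10^9-cycle task, 60% of cycles at 1 GHz, 40% at 2 GHz.
    k, a, C = 1e-27, 1.0, 1e9
    print(energy_per_cycle(k, a, C, 2.0e9))                       # run entirely at 2 GHz
    print(energy_by_fractions(k, a, C, {1.0e9: 0.6, 2.0e9: 0.4}))

As the two printed values illustrate, slowing part of the execution down to a lower frequency level reduces the consumed energy at the cost of a longer makespan, which is exactly the slack-exploiting trade-off discussed above.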
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> This book is written for computer engineers and scientists active in the development of software and hardware systems. It supplies the understanding and tools needed to effectively evaluate the performance of individual computer and communication systems. It covers the theoretical foundations of the field as well as specific software packages being employed by leaders in the field. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> In spite of the dramatic growth in the number of smartphones in the recent years, the energy capacity challenge for these devices has not been solved satisfactorily. Moreover, the global demand for green Information and Communication Technology (ICT) motivates the researchers to consider cloud computing as a new computing paradigm that is promising for green solutions. In this paper, we propose new green solutions that save smartphones energy and at the same time achieve the green ICT goal. Our green solution is achieved by what we call Mobile Cloud Computing (MCC). The MCC migrates the content from the main cloud data center to a local cloud data center temporarily. The Internet Service Provider (ISP) provides the MCC, which holds the required contents for the smartphone network. Our analysis and experiments show that our proposed solution significantly reduces the ICT system energy consumption by 63% - 70%. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> Network-based cloud computing is rapidly expanding as an alternative to conventional office-based computing. As cloud computing becomes more widespread, the energy consumption of the network and computing resources that underpin the cloud will grow. This is happening at a time when there is increasing attention being paid to the need to manage energy consumption across the entire information and communications technology (ICT) sector. While data center energy use has received much attention recently, there has been less attention paid to the energy consumption of the transmission and switching networks that are key to connecting users to the cloud. In this paper, we present an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. We show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing where each user performs all computing on their own personal computer (PC). <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> The popularity of smartphones is growing every day.
Thanks to the more powerful hardware, the applications can run more tasks and use broadband network connection; however, there are several known issues. For example, under typical usage (messaging, browsing, and gaming) a smartphone can be discharged in one day. This makes the battery life one of the biggest problems of the mobile devices. That is a good motivation to find energy-efficient solutions. One of the possible methods is the “computation offloading” mechanism, which means that some of the tasks are uploaded to the cloud. In this paper we are going to present a new energy-efficient job scheduling model and a measurement infrastructure which is used to analyze the energy consumption of smartphones. Our results are going to be demonstrated through some scenarios where the goal is to save energy. The offloading task is based on LP and scheduling problems. <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> The cloud computing paradigm enables the work anywhere anytime paradigm by allowing application execution and data storage on remote servers. This is especially useful for mobile computing and communication devices that are constrained in terms of computation power and storage. It is however not clear how preferable cloud-based applications would be for mobile device users. For users of such battery life constrained devices, the most important criteria might be the energy consumed by the applications they run. The goal of this work is to characterize under what scenarios cloud-based applications would be relatively more energy-efficient for users of mobile devices. This work first empirically studies the energy consumption for various types of applications and for multiple classes of devices to make this determination. Subsequently, it presents an analytical model that helps characterize energy consumption of mobile devices under both the cloud and non-cloud application scenarios. Finally, an algorithm GreenSpot is presented that considers application features and energy-performance tradeoffs to determine whether cloud or local execution will be more preferable. <s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> To reduce the energy consumption in mobile devices, intricate applications are divided into several interconnected partitions like Task Interaction Graph (TIG) and are offloaded to cloud resources or nearby surrogates. Dynamic Voltage and Frequency Scaling (DVFS) is an effective technique to reduce the power consumption during mapping and scheduling stages. Most of the existing research works proposed several task scheduling solutions by considering the voltage/frequency scaling at the scheduling stage alone. But, the efficacy of these solutions can be improved by applying the DVFS in both mapping as well as scheduling stages. This research work attempts to apply DVFS in mapping as well as scheduling stages by combining both the task-resource and resource-frequency assignments in a single problem. The idea is to estimate the worst-case global slack time for each task-resource assignment, distribute it over the TIG, and slow down the execution of tasks using dynamic voltage and frequency scaling. This optimal slowdown increases the computation time of TIG without exceeding its worst-case completion time.
Further, the proposed work models the code offloading as a Quadratic Assignment Problem (QAP) in Matlab-R2012b and solves it using two-level Genetic Algorithm (GA) of the global optimization toolbox. The effectiveness of the proposed model is assessed by a simulation and the results conclude that there is an average energy savings of 35% in a mobile device. <s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> Offloading is one major type of collaborations between mobile devices and clouds to achieve less execution time and less energy consumption. Offloading decisions for mobile cloud collaboration involve many decision factors. One of important decision factors is the network unavailability that has not been well studied. This paper presents an offloading decision model that takes network unavailability into consideration. Network with some unavailability can be modeled as an alternating renewal process. Then, application execution time and energy consumption in both ideal network and network with some unavailability are analyzed. Based on the presented theoretical model, an application partition algorithm and a decision module are presented to produce an offloading decision that is resistant to network unavailability. Simulation results demonstrate good performance of proposed scheme, where the proposed partition algorithm is analyzed in different application and cloud scenarios. <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> Data-intensive applications that involve large amounts of data generation, processing and transmission, have been operated with little attention to energy efficiency. Issues such as management, movement and storage of huge volumes of data may lead to high energy consumption. Replication is a useful solution to decrease data access time and improve performance in these applications, but it may also lead to increase the energy spent in storage and data transmission, by spreading large volumes of data replicas around the network. Thus, utilizing effective strategies for energy saving in these applications is a very critical issue from both the environmental and economical aspects. In this paper, at first we review the current data replication and caching approaches and energy saving methods in the context of data replication. Then, we propose a model for energy consumption during data replication and, finally, we evaluate two schemes for data fetching based on the two critical metrics in Grid environments: energy consumption and data access time. We also compare the gains based on these metrics with the no-caching scenario by using simulation. <s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> This paper presents a quantitative study on the energy-traffic tradeoff problem from the perspective of entire Wireless Local Area Network (WLAN). We propose a novel Energy-Efficient Cooperative Offloading Model (E2COM) for energy-traffic tradeoff, which can ensure the fairness of energy consumption of mobile devices and reduce the computation repetition and eliminate the Internet data traffic redundancy through cooperative execution and sharing computation results. 
We design an Online Task Scheduling Algorithm (OTS) based on a pricing mechanism and Lyapunov optimization to address the problem without predicting future information on task arrivals, transmission rates and so on. OTS can achieve a desirable trade-off between the energy consumption and Internet data traffic by appropriately setting the tradeoff coefficient. Simulation results demonstrate that E2COM is more efficient than no offloading and cloud offloading for a variety of typical mobile devices, applications and link qualities in WLAN. <s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> Mobile Cloud Computing (MCC) is emerging as a main ubiquitous computing platform which enables to leverage the resource limitations of mobile devices and wireless networks by offloading data-intensive computation tasks from resource-poor mobile devices to resource-rich clouds. In this paper, we consider an online location-aware offloading problem in a two-tiered mobile cloud computing environment consisting of a local cloudlet and remote clouds, with an objective to fairly share the use of the cloudlet by consuming the same proportion of their mobile device energy, while keeping their individual SLA, for which we devise an efficient online algorithm. We also conduct experiments by simulations to evaluate the performance of the proposed algorithm. Experimental results demonstrate that the proposed algorithm is promising and outperforms other heuristics. <s> BIB010 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> The Smart cities applications are gaining an increasing interest among administrations, citizens and technologists for their suitability in managing the everyday life. One of the major challenges is regarding the possibility of managing in an efficient way the presence of multiple applications in a Wireless Heterogeneous Network (HetNet) environment, alongside the presence of a Mobile Cloud Computing (MCC) infrastructure. In this context we propose a utility function model derived from the economic world aiming to measure the Quality of Service (QoS), in order to choose the best access point in a HetNet to offload part of an application on the MCC, aiming to save energy for the Smart Mobile Devices (SMDs) and to reduce computational time. We distinguish three different types of application, considering different offloading percentage of computation and analyzing how the cell association algorithm allows energy saving and shortens computation time. The results show that when the network is overloaded, the proposed utility function allows to respect the target values by achieving higher throughput values, and reducing the energy consumption and the computational time. <s> BIB011 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> Advances in sensor cloud computing to support vehicular applications are becoming more important as the need to better utilize computation and communication resources and make them energy efficient. In this paper, we propose a novel approach to minimize energy consumption of processing a vehicular application within mobile wireless sensor networks (MWSN) while satisfying a certain completion time requirement.
Specifically, the application can be optimally partitioned, offloaded and executed with the help of peer sensor devices, e.g., a smart phone, thus the proposed solution can be treated as a joint optimization of computing and networking resources. Our theoretical analysis is supplemented by simulation results that show an energy saving of 63% compared to traditional cloud computing methods. Moreover, a prototype cloud system has been developed to validate the efficiency of sensor cloud strategies in dealing with diverse vehicular applications. <s> BIB012 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> Many people use smart phones on a daily basis, yet, their energy consumption is pretty high and the battery power lasts typically only for a single day. In the scope of the EnAct project, we investigate potential energy savings on smart phones by offloading computationally expensive tasks into the cloud. Obviously, also the wireless communication for uploading tasks requires energy. For that reason, it is crucial to understand the trade-off between energy consumption for wireless communication and local computation in order to assert that the overall power consumption is decreased. In this paper, we investigate the communications part of that trade-off. We conducted an extensive set of measurement experiments using typical smart phones. This is the first step towards the development of accurate energy models allowing to predict the energy required for offloading a given task. Our measurements include WiFi, 2G, and 3G networks as well as a set of two different devices. According to our findings, WiFi consumes by far the least energy per time unit, yet, this advantage seems to be due to its higher throughput and the implied shorter download time and not due to lower power consumption over time. <s> BIB013 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> The increase in capabilities of mobile devices to perform computation tasks has led to increase in energy consumption. While offloading the computation tasks helps in reducing the energy consumption, service availability is a cause of major concern. Thus, the main objective of this work is to reduce the energy consumption of mobile device, while maximising the service availability for users. The multi-criteria decision making (MCDM) TOPSIS method prioritises among the service providing resources such as Cloud, Cloudlet and peer mobile devices. The superior one is chosen for offloading. While availing service from a resource, the proposed fuzzy vertical handoff algorithm triggers handoff from a resource to another, when the energy consumption of the device increases or the connection time with the resource decreases. In addition, parallel execution of tasks is performed to conserve energy of the mobile device. The results of experimental setup with opennebula Cloud platform, Cloudlets and Android mobile devices on various network environments, suggest that handoff from one resource to another is by far more beneficial in terms of energy consumption and service availability for mobile users.
<s> BIB014 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> Combining mobile computing and cloud computing has opened the door recently for numerous applications that were not possible before due to the limited capabilities of mobile devices. Computation intensive applications are offloaded to the cloud, hence saving phone's energy and extending its battery life. However, energy savings are influenced by the wireless network conditions. In this paper, we propose considering contextual network conditions in deciding whether to offload to the cloud or not. An energy model is proposed to predict the energy consumed in offloading data under the current network conditions. Based on this prediction, a decision is taken whether to offload, to execute the application locally, or to delay offloading until detecting improvement in network conditions. We evaluated our approach by extending Think Air, a computation offloading framework proposed in [1], by our proposed energy model and delayed offloading algorithm. Experimental results showed considerable savings in energy with an average of 57% of the energy consumed by the application compared with the original static decision module implemented by Think Air. <s> BIB015 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> Mobile cloud computing (MC2) is emerging as a promising computing paradigm which helps alleviate the conflict between resource-constrained mobile devices and resource-consuming mobile applications through computation offloading. In this paper, we analyze the computation offloading problem in cloudlet-based mobile cloud computing. Different from most of the previous works which are either from the perspective of a single user or under the setting of a single wireless access point (AP), we research the computation offloading strategy of multiple users via multiple wireless APs. With the widespread deployment of WLAN, offloading via multiple wireless APs will obtain extensive application. Taking energy consumption and delay (including computing and transmission delay) into account, we present a game-theoretic analysis of the computation offloading problem while mimicking the selfish nature of the individuals. In the case of homogeneous mobile users, conditions of Nash equilibrium are analyzed, and an algorithm that admits a Nash equilibrium is proposed. For heterogeneous users, we prove the existence of Nash equilibrium by introducing the definition of exact potential game and design a distributed computation offloading algorithm to help mobile users choose proper offloading strategies. Extensive numerical simulations have been conducted and results demonstrate that the proposed algorithm can achieve desired system performance. <s> BIB016 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> Mobile cloud computing (MC2) is emerging as a new computing paradigm that seeks to augment resource-constrained mobile devices for executing computing- and/or data-intensive mobile applications. Nonetheless, the energy-poverty nature of mobile devices has become a stumbling block that greatly impedes the practical application of MC2.
Fortunately, for delay-tolerant mobile applications, energy conservation is achievable via two means: (1) dynamic selection of energy-efficient links (e.g., WiFi interface); and (2) deferring data transmission in bad connectivity. In this paper, we study the problem of energy-efficient downlink and uplink data transmission between mobile devices and clouds. In the presence of unpredictable data arrival, network availability and link quality, our objective is to minimize the time average energy consumption of a mobile device while ensuring the stability of both device-end and cloud-end queues. To achieve this goal, we propose an online control framework named EcoPlan under which mobile users can make flexible link selection and data transmission scheduling decisions to achieve arbitrary energy-delay tradeoffs. Real-world trace-driven simulations demonstrate the effectiveness of EcoPlan, along with its superior energy-efficiency over alternative WiFi-prioritized, minimum-delay and SALSA schemes. <s> BIB017 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> The development of cloud computing and virtualization techniques enables mobile devices to overcome the severity of scarce resource constraints by allowing them to offload computation and migrate several computation parts of an application to powerful cloud servers. A mobile device should judiciously determine whether to offload computation as well as what portion of an application should be offloaded to the cloud. This paper considers a mobile computation offloading problem where multiple mobile services in workflows can be invoked to fulfill their complex requirements and makes decisions on whether the services of a workflow should be offloaded. Due to the mobility of portable devices, unstable connectivity of mobile networks can impact the offloading decision. To address this issue, we propose a novel offloading system to design robust offloading decisions for mobile services. Our approach considers the dependency relations among component services and aims to optimize execution time and energy consumption of executing mobile services. To this end, we also introduce a mobility model and a trade-off fault-tolerance mechanism for the offloading system. A genetic algorithm (GA) based offloading method is then designed and implemented after carefully modifying parts of a generic GA to match our special needs for the stated problem. Experimental results are promising and show near-optimal solutions for all of our studied cases with almost linear algorithmic complexity with respect to the problem size. <s> BIB018 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> This article investigates the problem of holistic energy consumption in cloud-assisted mobile computing. In particular, since the cloud, assisting a multi-core mobile device, can be considered as a special core with powerful computation capability, the optimization of holistic energy consumption is formulated as a task-core assignment and scheduling problem. Specifically, the energy consumption models for the mobile device, network, cloud, and, more importantly, task interaction are presented, respectively.
Based on these energy consumption models, a holistic energy optimization framework is then proposed, where the thermal effect, application execution deadline, transmission power, transmission bandwidth, and adaptive modulation and coding rate are jointly considered. <s> BIB019 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> Interactive cloud computing and cloud-based applications are a rapidly growing sector of the expanding digital economy because they provide access to advanced computing and storage services via simple, compact personal devices. Recent studies have suggested that processing a task in the cloud is more energy-efficient than processing the same task locally. However, these studies have generally ignored the power consumption of the network and end-user devices when accessing the cloud. In this paper, we develop a power consumption model for interactive cloud applications that includes the power consumption of end-user devices and the influence of the applications on the power consumption of the various network elements along the path between the user and the cloud data centre. As examples, we apply our model to Google Drive and Microsoft Skydrive's word processing, presentation and spreadsheet interactive applications. We demonstrate via extensive packet-level traffic measurements that the volume of traffic generated by a session of the application vastly exceeds the amount of data keyed in by the user. This has important implications on the overall power consumption of the service. We show that using the cloud to perform certain tasks consumes more power (by a watt to 10 watts depending on the scenario) than performing the same tasks locally on a low-power consuming computer and a tablet. <s> BIB020 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Communication Energy Consumption Model <s> Advances in future computing to support emerging sensor applications are becoming more important as the need to better utilize computation and communication resources and make them energy efficient. As a result, it is predicted that intelligent devices and networks, including mobile wireless sensor networks (MWSN), will become the new interfaces to support future applications. In this paper, we propose a novel approach to minimize energy consumption of processing an application in MWSN while satisfying a certain completion time requirement. Specifically, by introducing the concept of cooperation, the logics and related computation tasks can be optimally partitioned, offloaded and executed with the help of peer sensor nodes, thus the proposed solution can be treated as a joint optimization of computing and networking resources. Moreover, for a network with multiple mobile wireless sensor nodes, we propose energy efficient cooperation node selection strategies to offer a tradeoff between fairness and energy consumption. Our performance analysis is supplemented by simulation results to show the significant energy saving of the proposed solution. <s> BIB021
|
Similarly, we define a communication-intensive task n_net of A to facilitate our discussion. As explained in Section 3.5.2, the task n_net can be thought of as a data flow across the involved resource pool R(n_net), and then E(n_net) can directly be derived from Equations (11) and (12) BIB001 BIB006 BIB009 . Given the generic architecture for the physical environment of Cloud applications (cf. Section 3.1), the resource items can be grouped into Client, Internet, Cloudlet, and Cloud resources. Accordingly, the communication energy consumption of a Cloud application can roughly be divided into four parts BIB002 , as modeled below:

E(n_net) = E_client(n_net) + E_internet(n_net) + E_cloudlet(n_net) + E_cloud(n_net)

It is noteworthy that Equations (11) and (12) are still valid and can be reused for each of the four energy parts by adapting the resource pool. As the most controllable part, the client side attracts most of the research efforts on modeling communication energy consumption. By treating a client device as a single resource item, a straightforward approach is to follow Equation (12) to estimate the communication energy consumed in client devices, as follows:

E_client(n_net) = P_send(r_client,net)·(D_send(n_net)/Φ_send(r_client)) + P_recv(r_client,net)·(D_recv(n_net)/Φ_recv(r_client))    (22)

Without distinguishing the power and data BIB010 between sending and receiving, Equation (22) can be simplified to:

E_client(n_net) = P(r_client,net)·(D(n_net)/Φ(r_client)), or E_client(n_net) = P(r_client,net)·(2·D(n_net)/Φ(r_client))    (23)

where P(r_client,net) and Φ(r_client) are respectively the transmission power and data throughput of the client device r_client. Note that we use r_client,net to emphasize the power consumed in the network component of the resource r_client; and the notation Φ(r_client) completely ignores the data transmission directions. As such, the first expression in Equation (23) views D(n_net) as the overall roundtrip data in the task n_net, while in the second expression D(n_net) is doubled to imply the data transmission along both directions. Considering the influence of uncertain channel quality (e.g., transmission errors), the factor Network Condition (cf. Section 3.3) was introduced to the previous cases BIB014 :

E_client(n_net) = P(r_client,net)·(β_1·D_send(n_net) + β_2·D_recv(n_net))/Φ(r_client)

where β_1 and β_2 are the channel condition parameters for sending and receiving data respectively, and their values are required to be tested by the client device r_client itself BIB014 . We note that, in this model, the data sending and receiving powers of r_client are assumed to be identical. By using regression analysis and Wolfram Mathematica, the study BIB015 even ignored the data transmission power, and proposed the following energy consumption model:

E_client(n_net) = (α/Φ(r_client) + β)·D(n_net)

where α and β are constant parameters that need to be determined through experimental measurements. Resorting to the Shannon Formula, Φ(r_client) was further modeled as Φ(r_client) = (Φ(r_ap)/number of clients)·log_2(1 + SNR/Distance(r_client, r_ap)²), with regard to the signal-to-noise ratio SNR, the bandwidth Φ(r_ap) and resource competition of the access point r_ap, and the distance between r_ap and r_client BIB011 . By replacing transmission throughput with channel quality, the studies BIB012 BIB021 proposed the following convex monomial function to describe the energy used to transmit D(n_net) bits of data:

E_client(n_net) = γ·(D(n_net))^o/Θ

where γ denotes the energy coefficient in the order of less than 10⁻², Θ represents the channel state with variable value 0 < Θ < 1 at different time slots, and o refers to the order of the monomial that depends on the transmission scheduling policy. For instance, the one-shot policy o = 1 is used to indicate that the channel state has the biggest influence on the data transmission, and the transmission is finished in one time slot only.
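As an illustration of the client-side models above, the following Python sketch combines the two-way model of Equation (22), the simplified variant of Equation (23), and the Shannon-style throughput share of an access point. All power, bandwidth, SNR, and distance values are hypothetical placeholders rather than measured figures.

    import math

    def client_energy_two_way(p_send, p_recv, d_send, d_recv, thr_send, thr_recv):
        # Equation (22)-style model: separate send/receive powers and throughputs.
        return p_send * d_send / thr_send + p_recv * d_recv / thr_recv

    def client_energy_simplified(p_net, d_roundtrip, throughput):
        # Equation (23)-style model: one power/throughput pair for roundtrip data.
        return p_net * d_roundtrip / throughput

    def shannon_throughput(ap_bandwidth, n_clients, snr, distance):
        # Shannon-style share of the access point bandwidth, as modeled above.
        return (ap_bandwidth / n_clients) * math.log2(1 + snr / distance ** 2)

    # Hypothetical example: upload 5 MB and download 1 MB over a shared WiFi AP.
    thr = shannon_throughput(ap_bandwidth=54e6, n_clients=5, snr=100.0, distance=3.0)
    print(client_energy_two_way(1.3, 1.0, 5 * 8e6, 1 * 8e6, thr, thr))
    print(client_energy_simplified(1.2, 6 * 8e6, thr))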
Without conflicting with such a one-shot policy, a further simplified model proposed a directly proportional relation between the energy consumption of a communication task and its data size, i.e. E_client(n_net) ∝ D(n_net) BIB004 BIB013 , as shown below:

E_client(n_net) = λ·D(n_net)

where λ is a linear or quantile regression parameter that can be related to the employed access point technology BIB013 . By analogy with the CMOS concern, the network energy of client devices, E_client(n_net), can also be separated into a static part and a dynamic part BIB007 , where the dynamic part covers the idle and active states BIB005 . In particular, the active energy for wireless communication between the mobile device's RF module and different access points (cellular vs. WiFi) was emphasized by BIB016 BIB017 , as modeled below. To save space, here we replace the task n_net with a dot.

E_client(·) = E_RF_ramp(·) + E_RF_transmit(·) + E_RF_tail(·)    (cellular)
E_client(·) = E_RF_scan(·) + E_RF_transmit(·) + E_RF_hold(·)    (WiFi)

where E_RF_ramp(·) refers to the extra energy for switching the RF circuitries from low- to high-power states before the initiation of cellular data transmission; E_RF_tail(·) indicates the tail energy of the high-power duration after the cellular data transmission ends; E_RF_scan(·) represents the energy for scanning and associating to an available WiFi access point; E_RF_transmit(·) includes both the uplink and the downlink data transmission energy BIB018 that can be calculated through Equation (22), with the power value and data throughput adapted to the chosen access point technology; while E_RF_hold(·) is the energy for keeping the access point interface active during the data transmissions. Besides the client-side wireless network, the Internet was studied as another communication part for mobile Cloud applications in BIB019 . The communication energy consumed in the Internet was identified to be related to the data size, the traffic load ratio and the transmission delay. However, the negative correlation between the transmission delay and the corresponding energy consumption conflicts with the other relevant studies and seems to be incorrect, thus our survey does not include the model proposed in BIB019 . By focusing only on the routers in the network path of a Cloud application, the study BIB008 simplified the Internet architecture, and used the number of routers and their power profiles to model the data transmission energy:

E_internet(n_net) = Σ_{r_router ∈ R(n_net)} P(r_router, Φ(r_router))·(D(n_net)/Φ(r_router))

where P(r_router, Φ(r_router)) represents the power of the router r_router at the data throughput Φ(r_router), which implies that the router's power varies depending on its traffic load. In practice, given the different network segments of the Internet, the routers can be specified and classified according to their functions and locations, such as broadband gateway routers and edge/core routers. Moreover, the network path of a Cloud application also includes other types of network facilities like Ethernet switches and WDM transport equipment BIB020 . In detail, the user traffic over the Internet has been assumed to generally require three hops (over two switches, one broadband gateway router, and one edge router) before reaching the core network, and eight hops (over eight WDM links across nine core routers) within the core network BIB003 :

E_internet(n_net) = 4·D(n_net)·(2·P(r_switch)/Φ(r_switch) + P(r_broad)/Φ(r_broad) + P(r_edge)/Φ(r_edge) + 18·P(r_core)/Φ(r_core) + 4·P(r_wdm)/Φ(r_wdm))
The number of core routers is doubled to reflect the hardware redundancy of the core network, while the number of WDM links is halved to reflect the core hops between co-located equipment. The overall factor of four further covers extra power consumption under the redundancy policy (factor of 2) and high power expenditure at low network utilization (factor of 2). Note that we removed the factor of 1.5 for cooling and other overheads from the original study. Similarly, by assuming two hops (over one switch, one edge router, and one gateway router) for accessing a server within a data center BIB002 BIB003 , the energy consumption of user traffic with respect to both the Cloudlet and the Cloud can be modeled as:

E_cloudlet(n_net) = E_cloud(n_net) = 4·D(n_net)·(P(r_switch)/Φ(r_switch) + P(r_edge)/Φ(r_edge) + P(r_gateway)/Φ(r_gateway))

where P(r_gateway) and Φ(r_gateway) respectively indicate the power and the maximum capacity of the gateway router.
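The hop-based transport models above reduce to simple per-bit sums over the equipment on the network path. The following Python sketch implements both the core-network model and the two-hop data-center access model under the stated hop assumptions; the equipment powers (in watts) and capacities (in bits per second) are hypothetical placeholders that would need to be replaced with vendor datasheet or measurement figures.

    def internet_transport_energy(d_bits, p, c):
        # Per-bit energy over two switches, one broadband gateway router, one edge
        # router, 18 core-router traversals, and 4 WDM links, scaled by the overall
        # factor of 4 for redundancy and low utilization as discussed above.
        per_bit = (2 * p['switch'] / c['switch'] + p['broad'] / c['broad']
                   + p['edge'] / c['edge'] + 18 * p['core'] / c['core']
                   + 4 * p['wdm'] / c['wdm'])
        return 4 * d_bits * per_bit

    def datacenter_access_energy(d_bits, p, c):
        # Two-hop access inside a data center: one switch, one edge router,
        # and one gateway router, under the same overall factor of 4.
        per_bit = (p['switch'] / c['switch'] + p['edge'] / c['edge']
                   + p['gateway'] / c['gateway'])
        return 4 * d_bits * per_bit

    power = {'switch': 3.8e3, 'broad': 1.7e3, 'edge': 4.2e3,
             'core': 10.9e3, 'wdm': 136.0, 'gateway': 5.1e3}    # hypothetical watts
    capacity = {'switch': 160e9, 'broad': 660e9, 'edge': 560e9,
                'core': 640e9, 'wdm': 40e9, 'gateway': 660e9}   # hypothetical bps
    print(internet_transport_energy(8e9, power, capacity))      # a 1 GB transfer
    print(datacenter_access_energy(8e9, power, capacity))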
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Energy Consumption Model <s> Energy-proportional designs would enable large energy savings in servers, potentially doubling their efficiency in real-life use. Achieving energy proportionality will require significant improvements in the energy usage profile of every system component, particularly the memory and disk subsystems. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Energy Consumption Model <s> It is now critical to reduce the consumption of natural resources, especially petroleum. Even in information systems, we have to reduce the total electrical power consumption. We classify network applications into two types of applications, transaction and communication based ones. In this paper, we consider communication based applications like the file transfer protocol (FTP). A computer named server consumes the electric power to transfer a file to a client depending on the transmission rate. We discuss a model for power consumption of a data transfer application which depends on the total transmission rate and number of clients to which the server concurrently transmits files. A client has to find a server in a set of servers, each of which holds a file so that the power consumption of the server is reduced. We discuss a pair of algorithms PCB (power consumption-based) and TRB (transmission rate-based) to find a server which transmits a file to a client. In the evaluation, we show the total power consumption can be reduced by the algorithms compared with the traditional round-robin algorithm. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Energy Consumption Model <s> It is now critical to reduce the consumption of natural resources, especially petroleum, to resolve air pollution. Even in information systems, we have to reduce the total electrical power consumption. A cloud computing system is composed of a huge number of server computers like google file systems. There are many discussions on how to reduce the total power consumption of servers, e.g. by turning off servers which are not required to execute requests from clients. A peer-to-peer (P2P) system is another type of information system which is composed of a huge number of peer computers where various types of applications are autonomously performed. In this paper, we consider a P2P system with data transfer application like the file transfer protocol (FTP). A computer consumes the electric power to transfer a file to another computer depending on the bandwidth. We discuss a model for power consumption of data transfer applications. A client peer has to find a server peer in a set of server peers which holds a file so that the power consumption of the server is reduced. We discuss algorithms to find a server peer which transfers a file in a P2P overlay network. <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Energy Consumption Model <s> Network-based cloud computing is rapidly expanding as an alternative to conventional office-based computing. As cloud computing becomes more widespread, the energy consumption of the network and computing resources that underpin the cloud will grow.
This is happening at a time when there is increasing attention being paid to the need to manage energy consumption across the entire information and communications technology (ICT) sector. While data center energy use has received much attention recently, there has been less attention paid to the energy consumption of the transmission and switching networks that are key to connecting users to the cloud. In this paper, we present an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. We show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing where each user performs all computing on their own personal computer (PC). <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Energy Consumption Model <s> In spite of the dramatic growth in the number of smartphones in the recent years, the energy capacity challenge for these devices has not been solved satisfactorily. Moreover, the global demand for green Information and Communication Technology (ICT) motivates the researchers to consider cloud computing as a new computing paradigm that is promising for green solutions. In this paper, we propose new green solutions that save smartphones energy and at the same time achieve the green ICT goal. Our green solution is achieved by what we call Mobile Cloud Computing (MCC). The MCC migrates the content from the main cloud data center to a local cloud data center temporarily. The Internet Service Provider (ISP) provides the MCC, which holds the required contents for the smartphone network. Our analysis and experiments show that our proposed solution significantly reduces the ICT system energy consumption by 63% - 70%. <s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Energy Consumption Model <s> MapReduce is a programming model for data intensive computing on large-scale distributed systems. With its wide acceptance and deployment, improving the energy efficiency of MapReduce will lead to significant energy savings for data centers and computational grids. In this paper, we study the performance and energy efficiency of the Hadoop implementation of MapReduce under the context of energy-proportional computing. We consider how MapReduce efficiency varies with two runtime configurations: resource allocation that changes the number of available concurrent workers, and DVFS (Dynamic Voltage and Frequency Scaling) that adjusts the processor frequency based on the workloads' computational needs. Our experimental results indicate significant energy savings can be achieved from judicious resource allocation and intelligent DVFS scheduling for computation intensive applications, though the level of improvements depends on both workload characteristic of the MapReduce application and the policy of resource and DVFS scheduling.
<s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Energy Consumption Model <s> With the development of cloud computing, more and more data-intensive workflows have been deployed on virtualized datacenters. As a result, the energy spent on massive data accessing grows rapidly. In this paper, an energy aware scheduling algorithm is proposed, which introduces a novel heuristic called Minimal Data-Accessing Energy Path for scheduling data-intensive workflows aiming to reduce the energy consumption of intensive data accessing. Extensive experiments based on both synthetic and real workloads are conducted to investigate the effectiveness and performance of the proposed scheduling approach. The experimental results show that the proposed heuristic scheduling can significantly reduce the energy consumption of storing/retrieving intermediate data generated during the execution of data intensive workflow. In addition, it exhibits better robustness than existing algorithms when cloud systems are in presence of I/O intensive workloads. <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Energy Consumption Model <s> Data-intensive applications that involve large amounts of data generation, processing and transmission, have been operated with little attention to energy efficiency. Issues such as management, movement and storage of huge volumes of data may lead to high energy consumption. Replication is a useful solution to decrease data access time and improve performance in these applications, but it may also lead to an increase in the energy spent in storage and data transmission, by spreading large volumes of data replicas around the network. Thus, utilizing effective strategies for energy saving in these applications is a very critical issue from both the environmental and economical aspects. In this paper, at first we review the current data replication and caching approaches and energy saving methods in the context of data replication. Then, we propose a model for energy consumption during data replication and, finally, we evaluate two schemes for data fetching based on the two critical metrics in Grid environments: energy consumption and data access time. We also compare the gains based on these metrics with the no-caching scenario by using simulation. <s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Energy Consumption Model <s> There is an increasing interest in cloud services being provided in a more energy efficient way. The growing deployment of large-scale, complex workflow applications onto cloud computing hosts is being faced with crucial challenges in reducing the power consumption without violating the service level agreement (SLA). In this paper, we consider cloud hosts which can operate in different power states with different capacities respectively, and propose a novel scheduling heuristic for workflows to reduce energy consumption while still meeting deadline constraint. The proposed heuristic is evaluated using simulation with four different real-world applications. The observed results indicate that our heuristic does significantly outperform the existing approaches.
<s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Storage Energy Consumption Model <s> Online social networks (OSNs) with their huge number of active users consume a significant amount of energy both in the data centers and in the transport network. Existing studies focus mainly on the energy consumption in the data centers and do not take into account the energy consumption during the transport of data between end-users and data centers. To indicate the amount of the neglected energy, this paper provides a comprehensive framework and a set of measurements for understanding the energy consumption of cloud applications such as photo sharing in social networks. A new energy model is developed to estimate the energy consumption of cloud applications and applied to sharing photos on Facebook, as an example. Our results indicate that the energy consumption involved in the network and end-user devices for photo sharing is approximately equal to 60% of the energy consumption of all Facebook data centers. Therefore, achieving an energy-efficient cloud service requires energy efficiency improvement in the transport network and end-user devices along with the related data centers. <s> BIB010
|
Given a storage-intensive task $n_{disk}$, in addition to the data input/output analysis BIB007 in alignment with Equation (11), the major concern is accessing data stored in hard disk arrays through content servers BIB004. Naturally, the energy consumption of $n_{disk}$ can be split into two parts, incurred in the disk arrays (i.e. $E_{array}(n_{disk})$) and in the content servers (i.e. $E_{server}(n_{disk})$) respectively: $E(n_{disk}) = E_{array}(n_{disk}) + E_{server}(n_{disk})$. Suppose the data $D(n_{disk})$ involved in, or to be accessed by, the task $n_{disk}$ are pre-stored in the disk array $r_{array}$ (for the case of writing, we assume that a storage area of the same size has been pre-booked in the disk array). Then, the energy for storing the data during the lifecycle $T(n_{disk})$ of the task can be calculated through $E_{array}(n_{disk}) = 2 \cdot \frac{D(n_{disk})}{D(r_{array})} \cdot P(r_{array}) \cdot T(n_{disk})$, where $P(r_{array})$ indicates the power of the hard disk array, $D(r_{array})$ stands for the disk content capacity, and the initial factor of 2 accounts for the redundancy policy in storage. As before, we removed the factor of 1.5 that reflects cooling and extra overheads from the power of the hard disk array. For conciseness, we define each task $n_{disk}$ to include only a one-shot access to the data $D(n_{disk})$; multiple data accesses can be viewed as multiple tasks. Then, the data-accessing energy consumed in a content server $r_{server}$ has been modeled by focusing either on the accessing time BIB008 or on the data size BIB004, e.g. $E_{server}(n_{disk}) = P(r_{server,disk}) \cdot \frac{D(n_{disk})}{\Phi(r_{server})}$ (34), where $P(r_{server,disk})$ refers to the power consumed in the storage component of $r_{server}$, and $\Phi(r_{server})$ represents the maximum data throughput over $r_{server}$. In particular, a factor for the extra power requirements of other overheads can also be added to Equation (34) BIB005. If multiple clients are allowed to access data simultaneously within the same task $n_{disk}$, the energy consumption located at $r_{server}$ between times $t_1$ and $t_2$ was given in BIB002 BIB003 without emphasizing the storage component, expressed in terms of a coefficient $\alpha$ that depends on the content server type, a factor $\beta_t \geq 1$ proportional to the number of clients at time $t$, and the data throughput $\Phi(r_{server}, t)$ over $r_{server}$ at time $t$.
3.5.6. Summary
Given the identified 30+ models, it is evident that there is no one-size-fits-all approach to modeling the energy consumption of Cloud applications. Various energy consumption models apply to different situations, emphasizing and combining different factors. By deconstructing and analyzing the existing models, however, we see a regular pattern in the modeling efforts, i.e. a focus on the power characteristics of the resources together with the way the resources are utilized by application workloads. This regular pattern confirms the statement that a Cloud application's energy consumption involves a mutual effect between its workload and environmental factors. Furthermore, by distinguishing between different power consumption components, we see three viewpoints on the energy consumption of Cloud applications, which we name Effective, Active, and Incremental energy consumption. Effective Energy Consumption, i.e. $E(A) = E_{active}(R(A)) + E_{idle}(R(A))$, includes both the active and the idle power consumed in the environmental resources of a Cloud application. In particular, the idle power consumption is included for two reasons. Firstly, idle Cloud resources would have to keep a standby state and wait for new jobs, so that they can be rented again at any time BIB009.
Secondly, the idle power consumption will still be meaningful and effective if it is used for maintaining the application's accessibility and/or the data availability BIB001. Active Energy Consumption, i.e. $E(A) = E_{active}(R(A))$, includes only the active power consumed in the environmental resources of a Cloud application. Although the idle power consumption should not be excluded in practice, as mentioned above, focusing on the active power consumption is useful for investigating the energy consumption incurred by dynamic application activities. Incremental Energy Consumption, i.e. $E(A) = E_{active}(R(A)) + E_{idle}(R(A)) - P_{idle}(R(A)) \cdot T(A)$, captures the increase in power consumption above the idle baseline of the environmental resources of a Cloud application. In other words, the incremental consumption is the top-up part of the active power consumption relative to its idle counterpart. Since various IT equipment has widely different dynamic power ranges (e.g., network devices operating below 50% utilization may still incur nearly their maximum power consumption) BIB004 BIB001, emphasizing the incremental energy consumption can reduce possible investigation bias by excluding this background noise BIB010 BIB006.
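To make the above concrete, the following minimal Python sketch evaluates the storage energy model and the three energy-consumption viewpoints for a hypothetical task. The function names and all numerical values are our own illustrative assumptions, not parameters taken from the surveyed models, and the server-side term follows the data-size-based access form reconstructed above.

```python
# Minimal sketch of the storage energy model and the three energy
# viewpoints above. All names and numbers are illustrative assumptions.

def storage_energy(d_task, d_array, p_array, t_task, p_server_disk, throughput):
    """E(n_disk) = E_array(n_disk) + E_server(n_disk).

    The factor of 2 models the storage redundancy policy; the server
    part follows the data-size-based access model (cf. Equation (34)).
    """
    e_array = 2.0 * (d_task / d_array) * p_array * t_task
    e_server = p_server_disk * d_task / throughput
    return e_array + e_server

def energy_viewpoints(p_active, p_idle, t_active, t_total):
    """Effective, Active, and Incremental energy of an application that
    occupies a resource for t_total seconds and keeps it busy for t_active."""
    effective = p_active * t_active + p_idle * (t_total - t_active)
    active = p_active * t_active
    incremental = effective - p_idle * t_total  # = (p_active - p_idle) * t_active
    return effective, active, incremental

# A task accessing 50 GB on a 10 TB array (units: W, s, GB, GB/s).
print(storage_energy(d_task=50, d_array=10_000, p_array=800.0,
                     t_task=3600.0, p_server_disk=60.0, throughput=0.5))
print(energy_viewpoints(p_active=200.0, p_idle=120.0,
                        t_active=1800.0, t_total=3600.0))
```

Note how the incremental viewpoint reduces to $(P_{active} - P_{idle}) \cdot t_{active}$, i.e. exactly the top-up above the idle baseline described above.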
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> Reducing energy consumption for high end computing can bring various benefits such as, reduce operating costs, increase system reliability, and environment respect. This paper aims to develop scheduling heuristics and to present application experience for reducing power consumption of parallel tasks in a cluster with the Dynamic Voltage Frequency Scaling (DVFS) technique. In this paper, formal models are presented for precedence-constrained parallel tasks, DVFS enabled clusters, and energy consumption. This paper studies the slack time for non-critical jobs, extends their execution time and reduces the energy consumption without increasing the task’s execution time as a whole. Additionally, Green Service Level Agreement is also considered in this paper. By increasing task execution time within an affordable limit, this paper develops scheduling heuristics to reduce energy consumption of a tasks execution and discusses the relationship between energy consumption and task execution time. Models and scheduling heuristics are examined with a simulation study. Test results justify the design and implementation of proposed energy aware scheduling heuristics in the paper. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> Energy efficiency is a fundamental consideration for mobile devices. Cloud computing has the potential to save mobile client energy but the savings from offloading the computation need to exceed the energy cost of the additional communication. ::: ::: In this paper we provide an analysis of the critical factors affecting the energy consumption of mobile clients in cloud computing. Further, we present our measurements about the central characteristics of contemporary mobile handheld devices that define the basic balance between local and remote computing. We also describe a concrete example, which demonstrates energy savings. ::: ::: We show that the trade-offs are highly sensitive to the exact characteristics of the workload, data communication patterns and technologies used, and discuss the implications for the design and engineering of energy efficient mobile cloud computing solutions. <s> BIB002 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> MapReduce is a programming model for data intensive computing on large-scale distributed systems. With its wide acceptance and deployment, improving the energy efficiency of MapReduce will lead to significant energy savings for data centers and computational grids. In this paper, we study the performance and energy efficiency of the Hadoop implementation of MapReduce under the context of energy-proportional computing. We consider how MapReduce efficiency varies with two runtime configurations: resource allocation that changes the number of available concurrent workers, and DVFS (Dynamic Voltage and Frequency Scaling) that adjusts the processor frequency based on the workloads' computational needs. 
Our experimental results indicate significant energy savings can be achieved from judicious resource allocation and intelligent DVFS scheduling for computation intensive applications, though the level of improvements depends on both workload characteristic of the MapReduce application and the policy of resource and DVFS scheduling. <s> BIB003 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> Power and energy are primary concerns in the design and management of modern cloud computing systems and data centers. Operational costs for powering and cooling large-scale cloud systems will soon exceed acquisition costs. To improve the energy effciency of cloud computing systems and applications, it is critical to profile the power usage of real systems and applications. Many factors influence power and energy usage in cloud systems, including each components electrical specification, the system usage characteristics of the applications, and system software. In this work, we present the power profiling results on a cloud test bed. We combine hardware and software that achieves power and energy profiling at server granularity. We collect the power and energy usage data with varying server/cloud configurations, and quantify their correlation. Our experiments reveal conclusively how different system configurations affect the server/cloud power and energy usage. <s> BIB004 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device. <s> BIB005 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> Researchers and developers use energy models to map out what an application or device's energy usage will be. Application developers most often do not have the capability to manipulate the CPU characteristics that most of these energy models and schedules use as their defining aspect. 
We present an energy model for multiprocess applications that centers around the CPU utilization, which application developers can actively affect with the design of their application. <s> BIB006 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> Cloud computing delivers computing as a utility to users worldwide. A consequence of this model is that cloud data centres have high deployment and operational costs, as well as significant carbon footprints for the environment. We need to develop Green Cloud Computing (GCC) solutions that reduce these deployment and operational costs and thus save energy and reduce adverse environmental impacts. In order to achieve this objective, a thorough understanding of the energy consumption patterns in complex Cloud environments is needed. We present a new energy consumption model and associated analysis tool for Cloud computing environments. We measure energy consumption in Cloud environments based on different runtime tasks. Empirical analysis of the correlation of energy consumption and Cloud data and computational tasks, as well as system performance, will be investigated based on our energy consumption model and analysis tool. Our research results can be integrated into Cloud systems to monitor energy consumption and support static or dynamic system-level optimisation. <s> BIB007 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> The cloud computing paradigm enables the work anywhere anytime paradigm by allowing application execution and data storage on remote servers. This is especially useful for mobile computing and communication devices that are constrained in terms of computation power and storage. It is however not clear how preferable cloud-based applications would be for mobile device users. For users of such battery life constrained devices, the most important criteria might be the energy consumed by the applications they run. The goal of this work is to characterize under what scenarios cloud-based applications would be relatively more energy-efficient for users of mobile devices. This work first empirically studies the energy consumption for various types of applications and for multiple classes of devices to make this determination. Subsequently, it presents an analytical model that helps characterize energy consumption of mobile devices under both the cloud and non-cloud application scenarios. Finally, an algorithm GreenSpot is presented that considers application features and energy-performance tradeoffs to determine whether cloud or local execution will be more preferable. <s> BIB008 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> Cloud computing delivers IT solutions as a utility to users. One consequence of this model is that large cloud data centres consume large amounts of energy and produce significant carbon footprints. A common objective of cloud providers is to develop resource provisioning and management solutions that minimise energy consumption while guaranteeing Service Level Agreements (SLAs). In order to achieve this objective, a thorough understanding of energy consumption patterns in complex cloud systems is imperative. We have developed an energy consumption model for cloud computing systems. 
To operationalise this model, we have conducted extensive experiments to profile the energy consumption in cloud computing systems based on three types of tasks: computation-intensive, data-intensive and communication-intensive tasks. We collected fine-grained energy consumption and performance data with varying system configurations and workloads. Our experimental results show the correlation coefficients of energy consumption, system configuration and workload, as well as system performance in cloud systems. These results can be used for designing energy consumption monitors, and static or dynamic system-level energy consumption optimisation strategies for green cloud computing systems. <s> BIB009 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> With the rise in mobile device adoption, and growth in mobile application market expected to reach $30 billion by the end of 2013, mobile user expectations for pervasive computation and data access are unbounded. Yet, various applications, such as face recognition, speech and object recognition, and natural language processing, exceed the limits of standalone mobile devices. Such applications resort to exploiting larger resources in the cloud, which sparked researching problems arising from data and computational offloading to the cloud. Research in this area has mainly focused on profiling and offloading tasks to remote cloud resources, automatically transforming mobile applications by provisioning and partitioning its execution into offloadable tasks, and more recently, bringing computational resources (e.g. Cloudlets) closer to task initiators in order to save mobile device energy. In this work, we argue for environments in which computational offloading is performed among mobile devices forming what we call a Mobile Device Cloud (MDC). Our contributions are: (1) Implementing an emulation testbed for quantifying the potential gain, in execution time or energy consumed, of offloading tasks to an MDC. This testbed includes a client offloading application, an offloadee server receiving tasks, and a traffic shaper situated between the client and server emulating different communication technologies (Bluetooth 3.0, Bluetooth 4.0, WiFi Direct, WiFi, and 3G). Our evaluation for offloading tasks with different data and computation characteristics to an MDC registers up to 80% and 90% savings in time or energy respectively, as opposed to offloading to the cloud. (2) Providing an MDC experimental platform to enable future evaluation and assessment of MDC-based solutions. We create a testbed, shown in Figure 1, to measure the energy consumed by a mobile device when running or offloading tasks using different communication technologies. We build an offloading Android-based mobile application and measure the time taken to offload tasks, execute them, and receive the results from other devices within an MDC. Our experimental results show gains in time and energy savings, up to 50% and 26% respectively, by offloading within MDCs, as opposed to locally executing tasks. (3) Providing solutions that address two major MDC challenges. First, due to mobility, offloadee devices leaving an MDC would seriously compromise performance. Therefore, we propose several social-based offloadee selection algorithms that exploit contact history between devices, as well as friendship relationships or common interests between device owners or users. 
Second, we provide solutions for balancing power consumption by distributing computational load across MDC members to elongate an MDC's lifetime. This need occurs when users need to maximize the lifetime of an ensemble of devices that belong to the same user or household. We evaluate the algorithms we propose for addressing these two challenges using the real datasets that contain contact mobility traces and social information for conference attendees over the span of three days. Our results show the impact of choosing the suitable offloadee subset, the gain from leveraging social information, and how MDCs can live longer by balancing power consumption across their members. <s> BIB010 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> Nowadays enormous amounts of energy are consumed by Cloud infrastructures and this trend is still growing. An existing solution to lower this consumption is to turn off as many servers as possible, but these solutions do not involve the user as a main lever to save energy. We propose a system that proposes to the user to run her application with degraded performance. A user choosing an energy-efficient run promotes a better consolidation of the Virtual Machines in the Cloud and thus may help turning off more servers. We experimented our system on Grid'5000 and we used the Montage workflow as a benchmark. Experimentation results show promising outcomes. In energy-efficiency mode, the energy consumed can be significantly reduced to the cost of a low increase of the execution time. <s> BIB011 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> Empowering application programmers to make energy-aware decisions is a critical dimension of energy optimization for computer systems. In this paper, we study the energy impact of alternative data management choices by programmers, such as data access patterns, data precision choices, and data organization. Second, we attempt to build a bridge between application-level energy management and hardware-level energy management, by elucidating how various application-level data management features respond to Dynamic Voltage and Frequency Scaling (DVFS). Finally, we apply our findings to real-world applications, demonstrating their potential for guiding application-level energy optimization. The empirical study is particularly relevant in the Big Data era, where data-intensive applications are large energy consumers, and their energy efficiency is strongly correlated to how data are maintained and handled in programs. <s> BIB012 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> This paper presents the results of a formative study conducted to determine the effects of computation offloading in mobile applications by comparing "application performance" (chiefly energy consumption and response time). The study examined two general execution scenarios: (1) computation is performed locally on a mobile device, and (2) when it is offloaded entirely to the cloud. The study also carefully considered the underlying network characteristics as an important factor affecting the performance. More specifically, we refactored two mobile applications to offload their computationally intensive functionality to execute in the cloud.
We then profiled these applications under different network conditions, and carefully measured “application performance” in each case. The results were not as conclusive as we had expected. On fast networks, offloading is almost always beneficial. However, on slower networks, the offloading cost-benefit analysis is not as clear cut. The characteristics of the data transferred between the mobile device and the cloud may be a deciding factor in determining whether offloading a computation would improve performance. <s> BIB013 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Trade-off Debates <s> Interactive cloud computing and cloud-based applications are a rapidly growing sector of the expanding digital economy because they provide access to advanced computing and storage services via simple, compact personal devices. Recent studies have suggested that processing a task in the cloud is more energy-efficient than processing the same task locally. However, these studies have generally ignored the power consumption of the network and end-user devices when accessing the cloud. In this paper, we develop a power consumption model for interactive cloud applications that includes the power consumption of end-user devices and the influence of the applications on the power consumption of the various network elements along the path between the user and the cloud data centre. As examples, we apply our model to Google Drive and Microsoft Skydrive's word processing, presentation and spreadsheet interactive applications. We demonstrate via extensive packet-level traffic measurements that the volume of traffic generated by a session of the application vastly exceeds the amount of data keyed in by the user. This has important implications on the overall power consumption of the service. We show that using the cloud to perform certain tasks consumes more power (by a watt to 10 watts depending on the scenario) than performing the same tasks locally on a low-power consuming computer and a tablet. <s> BIB014
|
As mentioned in Sections 3.3 and 3.4, we try to isolate the influence of individual factors on energy consumption, to avoid a combinatorial explosion of the factorial discussions. It is noteworthy, however, that the energy expense of a Cloud application is inevitably affected by the combination of multiple factors, as demonstrated in the mathematical models (cf. Section 3.5). Although studying the effects of various factor combinations on energy consumption is out of the scope of this survey, we particularly highlight a set of trade-off debates that would be worth further investigation, and we believe that model-based simulations would be the key to investigating the concerns raised by these debates.
• Resource Allocation Level: To improve the energy efficiency of a Cloud application, there is evidence advocating both less-than-enough and more-than-enough resource allocations. By provisioning "under-the-just-enough" servers, the authors of BIB011 showed that a data-intensive Cloud application can save up to 24% in energy consumption at the cost of only around a 6% increase in execution time. In general cases, however, Cloud applications are supposed to achieve greater energy efficiency by utilizing more processors, in order to finish more quickly and free the processors sooner. In other words, the energy saving for a Cloud application can be realized by returning its environmental resources to the idle state earlier BIB006.
• Degree of Application Parallelism: This debate can be viewed as a counterpart of the above one from the application's perspective. By tailoring the resource allocations to the degree of parallelism BIB003, the overall energy consumption can decrease significantly with improved processing concurrency in a Cloud application BIB004, because the increased parallelism has more chances to reduce the processing time and thus outweigh the influence of the resource increase BIB001. Nevertheless, considering the theoretical limit on the energy saving of parallel executions BIB006, it is impossible to infinitely enhance the energy efficiency of a Cloud application by increasing its degree of parallelism, not to mention that the increased overhead of process scheduling would meanwhile cause more energy consumption BIB007 BIB009.
• Downscaling CPU Frequency: In addition to the conflicting opinions on the effectiveness of adjusting CPU frequencies (cf. Section 3.3.6), there is also a debate on saving energy by downscaling the CPU frequency. Considering the cubic relationship between a CPU's power and its clock frequency (cf. Equation (13)), in theory three quarters of the energy can be saved by halving the processor's clock speed: halving $f$ cuts the power to $(1/2)^3 = 1/8$ of its original value while doubling the execution time, so the energy becomes $2 \times 1/8 = 1/4$ of the original. In practice, unfortunately, blindly downscaling the CPU frequency often increases energy consumption BIB012, and computation-intensive applications in particular would be less energy efficient when operating processors at lower frequencies BIB004. This debate is still driven by the aforementioned "race to idle", depending on whether reducing power consumption can bring overwhelming energy benefits.
• Workload Offloading: In mobile Cloud computing, offloading local workloads to external resources has widely been considered effective for shortening applications' execution time and extending mobile devices' battery life, because powerful remote servers can generally offer a significant speedup for mobile applications BIB005 BIB013.
However, simply offloading workloads has been proven not always to be energy efficient BIB014, unless the workload is characterized by a relatively small communication-computation ratio. Correspondingly, the communication-computation ratio has frequently been employed as a trade-off indicator to help determine the right circumstances for workload offloading BIB002 BIB010 BIB008; a minimal sketch of this trade-off is given below.
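To illustrate the role of the communication-computation ratio, the following minimal Python sketch compares local execution against offloading under a classic first-order model: offloading pays off only when the radio energy for shipping the data plus the idle-waiting energy is smaller than the energy of computing locally. The parameter names, the numbers, and the simple decision rule are our own illustrative assumptions, not the model of any single surveyed study.

```python
# Minimal sketch of the offloading trade-off. All names and numbers
# are illustrative assumptions.

def local_energy(cycles, p_compute, f_local):
    """Energy (J) to execute `cycles` CPU cycles locally at f_local Hz."""
    return p_compute * cycles / f_local

def offload_energy(data_bits, p_radio, bandwidth, cycles, f_cloud, p_idle):
    """Energy (J) to transmit the task data, then idle-wait while the
    (f_cloud-fast) cloud executes the task."""
    return p_radio * data_bits / bandwidth + p_idle * cycles / f_cloud

# A task of 1e9 cycles exchanging 2 MB; device: 0.9 W compute, 1.3 W radio,
# 0.3 W idle; 5 Mbit/s uplink; the cloud is 10x faster than the 1 GHz CPU.
e_local = local_energy(1e9, p_compute=0.9, f_local=1e9)
e_cloud = offload_energy(2 * 8e6, p_radio=1.3, bandwidth=5e6,
                         cycles=1e9, f_cloud=1e10, p_idle=0.3)
print(f"local: {e_local:.2f} J, offload: {e_cloud:.2f} J, "
      f"offload saves energy: {e_cloud < e_local}")
```

With these numbers the large data transfer makes offloading lose; shrinking the data volume or growing the computational demand, i.e. a smaller communication-computation ratio, flips the decision.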
|
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Conclusions and Future Work <s> With the number of high-density servers in data centers rapidly increasing, power control with performance optimization has become a key challenge to gain a high return on investment, by safely accommodating the maximized number of servers allowed by the limited power supply and cooling facilities in a data center. Various power control solutions have been recently proposed for high-density servers and different components in a server to avoid system failures due to power overload or overheating. Existing solutions, unfortunately, either rely only on the processor for server power control, with the assumption that it is the only major power consumer, or limit power only for a single component, such as main memory. As a result, the synergy between the processor and main memory is impaired by uncoordinated power adaptations, resulting in degraded overall system performance. In this paper, we propose a novel power control solution that can precisely limit the peak power consumption of a server below a desired budget. Our solution adapts the power states of both the processor and memory in a coordinated manner, based on their power demands, to achieve optimized system performance. Our solution also features a control algorithm that is designed rigorously based on advanced feedback control theory for guaranteed control accuracy and system stability. Compared with two state-of-the-art server power control solutions, experimental results show that our solution, on average, achieves up to 23% better performance than one baseline for CPU-intensive benchmarks and doubles the performance of the other baseline when the power budget is tight. <s> BIB001 </s> A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates <s> Conclusions and Future Work <s> Nowadays enormous amounts of energy are consumed by Cloud infrastructures and this trend is still growing. An existing solution to lower this consumption is to turn off as many servers as possible, but these solutions do not involve the user as a main lever to save energy. We propose a system that proposes to the user to run her application with degraded performance. A user choosing an energy-efficient run promotes a better consolidation of the Virtual Machines in the Cloud and thus may help turning off more servers. We experimented our system on Grid'5000 and we used the Montage workflow as a benchmark. Experimentation results show promising outcomes. In energy-efficiency mode, the energy consumed can be significantly reduced to the cost of a low increase of the execution time. <s> BIB002
|
The energy consumption of Cloud computing is predicted to keep growing and even to quadruple the current annual consumption by 2020 BIB002. Thus, the efficient use of computing power and the management of energy consumption have become crucial topics for engineering Cloud applications. With modeling as a prevalent approach to addressing energy consumption, a substantially large and varied set of models has emerged. This drives us to use the SLR as a rigorous surveying approach to study the existing modeling efforts as evidence to build up a knowledge foundation for investigating Cloud applications' energy consumption. In particular, by deconstructing Cloud computing scenarios, we find that the controllable environmental components (especially client devices) and the application execution elements related to task processing and data communication have attracted most of the research attention as well as the modeling efforts. By identifying energy-related factors, this survey confirms computation and communication to be the existing researchers' major concerns about the energy consumption of Cloud applications. Correspondingly, Task Size and Data Size have been considered to be the main workload factors, which would largely interact with CPU Clock Frequency and Network Bandwidth (and the Access Point Technology used in the client devices) as the main environmental factors. On the contrary, the energy consumption of data storage has attracted little attention, and few studies have intensively investigated and modeled the energy of Cloud applications' memory footprints. Such a finding indicates crucial research gaps that require further research efforts in the future. In fact, storage policies in different Cloud environments, which partly relate to the application's nature, may result in a considerably high persistence of the application's data in Cloud storage, and in turn give rise to energy consumption for keeping the data. Not to mention that the degree of data distribution (for protection purposes) can also negatively affect the energy consumption of data storage. Meanwhile, given the increasing trend of in-memory Cloud computing (e.g., Apache Spark), memory has become a significant contributor to the power consumption of Cloud infrastructures BIB001. More importantly, our work has advocated divide-and-conquer as a principal approach to studying energy consumption in the Cloud computing domain. On the one hand, decomposing an energy consumption scenario can help clarify the atomic energy concerns and mitigate the complexity of the corresponding problem. On the other hand, gradually recomposing major energy concerns can facilitate the iterative and incremental development of energy consumption models, in order to address the complicated trade-offs and even debates with respect to energy efficiency. Naturally, we will unfold our future work along two directions. The first direction is to gradually expand the knowledge artefact (including both factors and models) established in this survey. The second direction is to implement model-driven simulations to reveal further knowledge about the combinational factorial effects on Cloud applications' energy consumption.
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> I. INTRODUCTION <s> Much of systems biology aims to predict the behaviour of biological systems on the basis of the set of molecules involved. Understanding the interactions between these molecules is therefore crucial to such efforts. Although many thousands of interactions are known, precise molecular details are available for only a tiny fraction of them. The difficulties that are involved in experimentally determining atomic structures for interacting proteins make predictive methods essential for progress. Structural details can ultimately turn abstract system representations into models that more accurately reflect biological reality. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> I. INTRODUCTION <s> Protein–protein interactions (PPIs) are central to most biological processes. Although efforts have been devoted to the development of methodology for predicting PPIs and protein interaction networks, the application of most existing methods is limited because they need information about protein homology or the interaction marks of the protein partners. In the present work, we propose a method for PPI prediction using only the information of protein sequences. This method was developed based on a learning algorithm-support vector machine combined with a kernel function and a conjoint triad feature for describing amino acids. More than 16,000 diverse PPI pairs were used to construct the universal model. The prediction ability of our approach is better than that of other sequence-based PPI prediction methods because it is able to predict PPI networks. Different types of PPI networks have been effectively mapped with our method, suggesting that, even with only sequence information, this method could be applied to the exploration of networks for any newly discovered protein with unknown biological relativity. In addition, such supplementary experimental information can enhance the prediction ability of the method. <s> BIB002 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> I. INTRODUCTION <s> Human cancer cells typically harbour multiple chromosomal aberrations, nucleotide substitutions and epigenetic modifications that drive malignant transformation. The Cancer Genome Atlas (TCGA) pilot project aims to assess the value of large-scale multi-dimensional analysis of these molecular characteristics in human cancer and to provide the data rapidly to the research community. Here we report the interim integrative analysis of DNA copy number, gene expression and DNA methylation aberrations in 206 glioblastomas - the most common type of primary adult brain cancer - and nucleotide sequence aberrations in 91 of the 206 glioblastomas. This analysis provides new insights into the roles of ERBB2, NF1 and TP53, uncovers frequent mutations of the phosphatidylinositol-3-OH kinase regulatory subunit gene PIK3R1, and provides a network view of the pathways altered in the development of glioblastoma. Furthermore, integration of mutation, DNA methylation and clinical treatment data reveals a link between MGMT promoter methylation and a hypermutator phenotype consequent to mismatch repair deficiency in treated glioblastomas, an observation with potential clinical implications.
Together, these findings establish the feasibility and power of TCGA, demonstrating that it can rapidly expand knowledge of the molecular basis of cancer. <s> BIB003 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> I. INTRODUCTION <s> Compared to the available protein sequences of different organisms, the number of revealed protein–protein interactions (PPIs) is still very limited. So many computational methods have been developed to facilitate the identification of novel PPIs. However, the methods only using the information of protein sequences are more universal than those that depend on some additional information or predictions about the proteins. In this article, a sequence-based method is proposed by combining a new feature representation using auto covariance (AC) and support vector machine (SVM). AC accounts for the interactions between residues a certain distance apart in the sequence, so this method adequately takes the neighbouring effect into account. When performed on the PPI data of yeast Saccharomyces cerevisiae, the method achieved a very promising prediction result. An independent data set of 11 474 yeast PPIs was used to evaluate this prediction model and the prediction accuracy is 88.09%. The performance of this method is superior to those of the existing sequence-based methods, so it can be a useful supplementary tool for future proteomics studies. The prediction software and all data sets used in this article are freely available at http://www.scucic.cn/Predict_PPI/index.htm. <s> BIB004 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> I. INTRODUCTION <s> Systems biology is increasingly popular, but to many biologists it remains unclear what this new discipline actually encompasses. This brief personal perspective starts by outlining the aesthetic qualities that motivate systems biologists, discusses which activities do not belong to the core of systems biology, and finally explores the crucial link with synthetic biology. It concludes by attempting to define systems biology as the research endeavor that aims at providing the scientific foundation for successful synthetic biology. <s> BIB005 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> I. INTRODUCTION <s> General properties of the antagonistic biomolecular interactions between viruses and their hosts (exogenous interactions) remain poorly understood, and may differ significantly from known principles governing the cooperative interactions within the host (endogenous interactions). Systems biology approaches have been applied to study the combined interaction networks of virus and human proteins, but such efforts have so far revealed only low-resolution patterns of host-virus interaction. Here, we layer curated and predicted 3D structural models of human-virus and human-human protein complexes on top of traditional interaction networks to reconstruct the human-virus structural interaction network. This approach reveals atomic resolution, mechanistic patterns of host-virus interaction, and facilitates systematic comparison with the host's endogenous interactions. We find that exogenous interfaces tend to overlap with and mimic endogenous interfaces, thereby competing with endogenous binding partners.
The endogenous interfaces mimicked by viral proteins tend to participate in multiple endogenous interactions which are transient and regulatory in nature. While interface overlap in the endogenous network results largely from gene duplication followed by divergent evolution, viral proteins frequently achieve interface mimicry without any sequence or structural similarity to an endogenous binding partner. Finally, while endogenous interfaces tend to evolve more slowly than the rest of the protein surface, exogenous interfaces--including many sites of endogenous-exogenous overlap--tend to evolve faster, consistent with an evolutionary "arms race" between host and pathogen. These significant biophysical, functional, and evolutionary differences between host-pathogen and within-host protein-protein interactions highlight the distinct consequences of antagonism versus cooperation in biological networks. <s> BIB006 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> I. INTRODUCTION <s> Recent advances in high-throughput technologies have led to the emergence of systems biology as a holistic science to achieve more precise modeling of complex diseases. Many predict the emergence of personalized medicine in the near future. We are, however, moving from two-tiered health systems to a two-tiered personalized medicine. Omics facilities are restricted to affluent regions, and personalized medicine is likely to widen the growing gap in health systems between high and low-income countries. This is mirrored by an increasing lag between our ability to generate and analyze big data. Several bottlenecks slow-down the transition from conventional to personalized medicine: generation of cost-effective high-throughput data; hybrid education and multidisciplinary teams; data storage and processing; data integration and interpretation; and individual and global economic relevance. This review provides an update of important developments in the analysis of big data and forward strategies to accelerate the global transition to personalized medicine. <s> BIB007 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> I. INTRODUCTION <s> Machine learning methods are becoming increasingly important in the analysis of large-scale genomic, epigenomic, proteomic and metabolic data sets. In this Review, the authors consider the applications of supervised, semi-supervised and unsupervised machine learning methods to genetic and genomic studies. They provide general guidelines for the selection and application of algorithms that are best suited to particular study designs. <s> BIB008 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> I. INTRODUCTION <s> "Big Data" is immersed in many disciplines, including computer vision, economics, online resources, bioinformatics and so on. Increasing researches are conducted on data mining and machine learning for uncovering and predicting related domain knowledge. Protein-protein interaction is one of the main areas in bioinformatics as it is the basis of the biological functions. However, most pathogen-host protein-protein interactions, which would be able to reveal much more infectious mechanisms between pathogen and host, are still up for further investigation. 
Considering a decent feature representation of pathogen-host protein-protein interactions (PHPPI), currently there is not a well structured database for research purposes, not even for infection mechanism studies for different species of pathogens. In this paper, we will survey the PHPPI researches and construct a public PHPPI dataset by ourselves for future research. It results in an utterly big and imbalanced data set associated with high dimension and large quantity. Several machine learning methodologies are also discussed in this paper to imply possible analytics solutions in near future. This paper contributes to a new, yet challenging, research area in applying data analytic technologies in bioinformatics, by learning and predicting pathogen-host protein-protein interactions. <s> BIB009 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> I. INTRODUCTION <s> In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. <s> BIB010 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> I. INTRODUCTION <s> In big data research related to bioinformatics, one of the most critical areas is proteomics. In this paper, we focus on the protein-protein interactions, especially on pathogen-host protein-protein interactions (PHPPIs), which reveals the critical molecular process in biology. Conventionally, biologists apply in-lab methods, including small-scale biochemical, biophysical, genetic experiments and large-scale experiment methods (e.g. yeast-two-hybrid analysis), to identify the interactions. These in-lab methods are time consuming and labor intensive. Since the interactions between proteins from different species play very critical roles for both the infectious diseases and drug design, the motivation behind this study is to provide a basic framework for biologists, which is based on big data analytics and deep learning models. Our work contributes in leveraging unsupervised learning model, in which we focus on stacked denoising autoencoders, to achieve a more efficient prediction performance on PHPPI. In this paper, we further detail the framework based on unsupervised learning model for PHPPI researches, while curating a large imbalanced PHPPI dataset. Our model demonstrates a better result with the unsupervised learning model on PHPPI dataset. 
<s> BIB011 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> I. INTRODUCTION <s> Nowadays more and more data are being sequenced and accumulated in system biology, which brings the data analytics researchers to a brand new era, namely ‘big data’, to extract the inner relationship and knowledge from the huge amount of data. Bridging the gap between computational methodology and biology to accelerate the development of biology analytics has been a hot area. In this paper, we focus on these enormous amounts of data generated with the speedy development of high throughput technologies during the past decades, especially for protein-protein interactions, which are the critical molecular process in biology. Since pathogen-host protein-protein interactions are the major and basic problems for not only infectious diseases but also drug design, molecular level interactions between pathogen and host play very critical role for the study of infection mechanisms. In this paper, we built a basic framework for analyzing the specific problems about pathogen-host protein-protein interactions (PHPPI), meanwhile, we also presented the state-of-art deep learning method results on prediction of PHPPI comparing with other machine learning methods. Utilizing the evaluation methods, specifically by considering the high skewed imbalanced ratio and huge amount of data, we detailed the pipeline solution on both storing and learning for PHPPI. This work contributes as a basis for a further investigation of protein and protein-protein interactions, with the collaboration of data analytics results from the vast amount of data dispersedly available in biology literature. <s> BIB012
|
In this paper, we describe how computational-intelligence methods can help solve key problems and uncover the dominant mechanisms involved in proteomics research. Because proteomics represents the large-scale study of proteins, it relies upon the investigation of several aspects, including when, where, and how proteins function, and how proteins interact with each other. Recently, an abundance of experimental data has accumulated, propelling hypothesis-driven biomedical research into the big-data era. Given the continuous growth and availability of large-scale multi-omics data, both protein-protein interaction (PPI) networks and structural analyses involving proteomics remain hot topics. The exploration of proteomics data sources, such as those from the European Bioinformatics Institute, promotes the transformation of biomedical research into system-level, mechanistic studies aimed at a comprehensive and holistic understanding of biological systems BIB007. Although challenges, such as the need for specialised domain knowledge and data issues, might hinder proteomics research, such data-driven work to obtain extensive information about systems from large amounts of raw data is currently popular in both academia and industry BIB010. Systems biology BIB005 is the comprehensive study of biological processes that aims to present a holistic view and analysis of them. Specifically, systems biology aims to understand and further predict the behaviour of biological systems BIB001 and includes studies on functional genomics and proteomics. Several studies have focused on genomics data, mostly from The Cancer Genome Atlas (TCGA) BIB003, given that a nearly complete genomic map for humans and other species has been provided by genome-sequencing projects BIB001. These studies provided insights into gene-related networks and a fuller understanding of how sets of molecules interact with each other BIB008. Three-dimensional (3D) structures of these molecules are the most critical information for deriving such relationships. Our study focuses on proteomics, and specifically on host-pathogen protein-protein interactions (HPPPIs). Despite the prevalence of protein interactions between species, most early studies were performed within a single species due to the limited availability of proteomics data at the time BIB002, BIB004. Several recent studies demonstrated progress on PPIs between different species, referred to as "interspecies PPIs", which offer important information for further analysis of infection mechanisms BIB001, BIB011. However, beyond the existence of these interactions, their structural information is vital to understanding them. We anticipate that studying the experimentally identified data collected via open databases BIB009 enables a comprehensive survey of the structural principles underlying the PPIs identified between host and pathogen. These HPPPIs are experimentally verified and manually recorded in curated systems, include information regarding infection pathways in their interaction networks, and are able to reveal much more about the infection mechanisms between hosts and pathogens. We first investigated a previous HPPPI study BIB009 and expanded our work based on the preliminary sequence information BIB011, BIB012 to exploit the online available and experimentally verified HPPPI data. However, these studies focused simply on the prediction of binary protein interactions.
Going beyond these studies, we expect to leverage the structural information of the HPPPI data to build structural-interaction networks (SINs), rather than simply classifying pairs of proteins as interacting or not. The structural information of HPPPIs captures various protein properties, from which systems biology might extract highly convincing network-analysis results and trustworthy statistics by combining the corresponding structural and domain data with atomic-resolution networks. Therefore, the structural-principle analysis of HPPPI networks is discussed and surveyed in the following sections, which cover most branches closely associated with protein structural information. This analysis is achieved via the SIN, an atomic-resolution PPI network BIB006. Protein structural information is another experimentally determined set of 3D data, as previously described. It mainly contains several protein properties, including domain information, family annotation, and secondary/tertiary structure. Because there are few 3D-specific studies offering an atomic view of HPPPIs, we provide an overview of the progress made by biologists in relation to bioinformatics, including 3D structural databases and analyses based on structural information. Our efforts will help readers navigate the gaps between biological analysis and computational modelling. This includes:
• Protein secondary/tertiary structure prediction
• Domain-domain interaction prediction
These provide the basics for the reconstruction of a SIN. The remainder of the paper is organised as follows. We first present the preliminary concepts in Section 2, including sequence information and its representation algorithms, structural information, and domain-domain interactions. Section 3 lists the public repositories and databases. In Section 4, a variety of machine-learning algorithms developed and applied for protein-structure analysis and domain prediction are discussed, and a detailed process to layer curated 3D structural models on top of the binary interaction network is described in Section 5. Section 5 also provides a linkage between model knowledge and analysis. The challenges of building a structural interaction model are discussed in Section 6, and we conclude the review in Section 7.
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> A. SEQUENCE INFORMATION <s> Protein–protein interactions (PPIs) are central to most biological processes. Although efforts have been devoted to the development of methodology for predicting PPIs and protein interaction networks, the application of most existing methods is limited because they need information about protein homology or the interaction marks of the protein partners. In the present work, we propose a method for PPI prediction using only the information of protein sequences. This method was developed based on a learning algorithm-support vector machine combined with a kernel function and a conjoint triad feature for describing amino acids. More than 16,000 diverse PPI pairs were used to construct the universal model. The prediction ability of our approach is better than that of other sequence-based PPI prediction methods because it is able to predict PPI networks. Different types of PPI networks have been effectively mapped with our method, suggesting that, even with only sequence information, this method could be applied to the exploration of networks for any newly discovered protein with unknown biological relativity. In addition, such supplementary experimental information can enhance the prediction ability of the method. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> A. SEQUENCE INFORMATION <s> "Big Data" is immersed in many disciplines, including computer vision, economics, online resources, bioinformatics and so on. Increasing researches are conducted on data mining and machine learning for uncovering and predicting related domain knowledge. Protein-protein interaction is one of the main areas in bioinformatics as it is the basis of the biological functions. However, most pathogen-host protein-protein interactions, which would be able to reveal much more infectious mechanisms between pathogen and host, are still up for further investigation. Considering a decent feature representation of pathogen-host protein-protein interactions (PHPPI), currently there is not a well structured database for research purposes, not even for infection mechanism studies for different species of pathogens. In this paper, we will survey the PHPPI researches and construct a public PHPPI dataset by ourselves for future research. It results in an utterly big and imbalanced data set associated with high dimension and large quantity. Several machine learning methodologies are also discussed in this paper to imply possible analytics solutions in near future. This paper contributes to a new, yet challenging, research area in applying data analytic technologies in bioinformatics, by learning and predicting pathogen-host protein-protein interactions. <s> BIB002
|
Proteins are composed of various numbers of amino acids as their basic building blocks. The concatenated string of amino acids forming the folded protein constitutes its primary sequence information. Typically, there are 20 different proteinogenic amino acids BIB001 , although a few additional amino acid codes appear in human and pathogen protein sequences BIB002 , including selenocysteine (U), pyrrolysine (O), aspartate or asparagine (B), and glutamate or glutamine (Z). Figure 1 shows the 20 different amino acids. Sequence representation is a vital preprocessing step for efficiently and effectively feeding data into any computational model built for protein classification and regression tasks. In mainstream sequence-representation algorithms, the protein sequence is denoted as X = x_1, x_2, . . . , x_n; for this paper we fix the amino acid alphabet size at 20. The different sequence-representation algorithms provide as much information as possible to the computational model, in vectors of different lengths. Because sequence information is comparatively easy to obtain via high-throughput technology, it is the primary input for both protein-structure prediction and interaction prediction.
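To make this preprocessing step concrete, the sketch below illustrates two common sequence-representation schemes touched on in this section: a one-hot (local-coding) encoding and the conjoint-triad descriptor underlying BIB001 , which groups the 20 amino acids into seven classes by dipole and side-chain volume and counts class triads. This is a minimal illustration under those assumptions, not the exact implementation of the cited studies.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 proteinogenic amino acids

# Conjoint-triad classes (the published seven-class grouping by
# dipole and side-chain volume).
CT_CLASSES = ["AGV", "ILFP", "YMTS", "HNQW", "RK", "DE", "C"]
CT_INDEX = {aa: c for c, group in enumerate(CT_CLASSES) for aa in group}

def one_hot(sequence):
    """Encode a sequence X = x_1..x_n as an n x 20 binary matrix."""
    mat = np.zeros((len(sequence), len(AMINO_ACIDS)))
    for i, aa in enumerate(sequence):
        if aa in AMINO_ACIDS:  # skip ambiguous codes (B, Z, U, O, X)
            mat[i, AMINO_ACIDS.index(aa)] = 1.0
    return mat

def conjoint_triad(sequence):
    """343-dimensional (7^3) frequency vector of consecutive class triads."""
    classes = [CT_INDEX[aa] for aa in sequence if aa in CT_INDEX]
    vec = np.zeros(7 ** 3)
    for a, b, c in zip(classes, classes[1:], classes[2:]):
        vec[a * 49 + b * 7 + c] += 1.0
    return vec / max(vec.max(), 1.0)  # one common normalisation choice

print(one_hot("MKT").shape)               # (3, 20)
print(conjoint_triad("MKTAYIAKQR").shape) # (343,)
```

Note how the two schemes trade off information content against vector length: the one-hot matrix grows with the sequence, whereas the conjoint triad maps any sequence to a fixed 343-dimensional vector suitable for classifiers that require fixed-size input.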
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. STRUCTURAL INFORMATION <s> We present a new method for predicting the secondary structure of globular proteins based on non-linear neural network models. Network models learn from existing protein structures how to predict the secondary structure of local sequences of amino acids. The average success rate of our method on a testing set of proteins non-homologous with the corresponding training set was 64.3% on three types of secondary structure (alpha-helix, beta-sheet, and coil), with correlation coefficients of C alpha = 0.41, C beta = 0.31 and Ccoil = 0.41. These quality indices are all higher than those of previous methods. The prediction accuracy for the first 25 residues of the N-terminal sequence was significantly better. We conclude from computational experiments on real and artificial structures that no method based solely on local information in the protein sequence is likely to produce significantly better results for non-homologous proteins. The performance of our method of homologous proteins is much better than for non-homologous proteins, but is not as good as simply assuming that homologous sequences have identical structures. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. STRUCTURAL INFORMATION <s> BackgroundThe accuracy of protein secondary structure prediction has been improving steadily towards the 88% estimated theoretical limit. There are two types of prediction algorithms: Single-sequence prediction algorithms imply that information about other (homologous) proteins is not available, while algorithms of the second type imply that information about homologous proteins is available, and use it intensively. The single-sequence algorithms could make an important contribution to studies of proteins with no detected homologs, however the accuracy of protein secondary structure prediction from a single-sequence is not as high as when the additional evolutionary information is present.ResultsIn this paper, we further refine and extend the hidden semi-Markov model (HSMM) initially considered in the BSPSS algorithm. We introduce an improved residue dependency model by considering the patterns of statistically significant amino acid correlation at structural segment borders. We also derive models that specialize on different sections of the dependency structure and incorporate them into HSMM. In addition, we implement an iterative training method to refine estimates of HSMM parameters. The three-state-per-residue accuracy and other accuracy measures of the new method, IPSSP, are shown to be comparable or better than ones for BSPSS as well as for PSIPRED, tested under the single-sequence condition.ConclusionsWe have shown that new dependency models and training methods bring further improvements to single-sequence protein secondary structure prediction. The results are obtained under cross-validation conditions using a dataset with no pair of sequences having significant sequence similarity. As new sequences are added to the database it is possible to augment the dependency structure and obtain even higher accuracy. Current and future advances should contribute to the improvement of function prediction for orphan proteins inscrutable to current similarity search methods. 
<s> BIB002 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. STRUCTURAL INFORMATION <s> This review provides an exposition to the important problems of (i) structure prediction in protein folding and (ii) de novo protein design. The recent advances in protein folding are reviewed based on a classification of the approaches in comparative modeling, fold recognition, and first principles methods with and without database information. The advances towards the challenging problem of loop structure prediction and the first principles method, ASTRO-FOLD, along with the developments in the area of force-fields development have been discussed. Finally, the recent progress in the area of de novo protein design is presented with focus on template flexibility, in silico sequence selection, and successful peptide and protein designs. <s> BIB003 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. STRUCTURAL INFORMATION <s> Evolutionarily related proteins have similar sequences. Such similarity is called homology and can be described using substitution matrices such as Blosum 60. Naturally occurring homologous proteins usually have similar stable tertiary structures and this fact is used in so-called homology modeling. In contrast, the artificial protein designed by the Regan group has 50% identical sequence to the B1 domain of Streptococcal IgG-binding protein and a structure similar to the protein Rop. In this study, we asked the question whether artificial similar protein sequences (pseudohomologs) tend to encode similar protein structures, such as proteins existing in nature. To answer this question, we designed sets of protein sequences (pseudohomologs) homologous to sequences having known three-dimensional structures (template structures), same number of identities, same composition and equal level of homology, according to Blosum 60 substitution matrix as the known natural homolog. We compared the structural features of homologs and pseudohomologs by fitting them to the template structure. The quality of such structures was evaluated by threading potentials. The packing quality was measured using three-dimensional homology models. The packing quality of the models was worse for the “pseudohomologs” than for real homologs. The native homologs have better threading potentials (indicating better sequence-structure fit) in the native structure than the designed sequences. Therefore, we have shown that threading potentials and proper packing are evolutionarily more strongly conserved than sequence homology measured using the Blosum 60 matrix. Our results indicate that three-dimensional protein structure is evolutionarily more conserved than expected due to sequence conservation. <s> BIB004 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. STRUCTURAL INFORMATION <s> Predicting protein secondary structure is a fundamental problem in protein structure prediction. Here we present a new supervised generative stochastic network (GSN) based method to predict local secondary structure with deep hierarchical representations. GSN is a recently proposed deep learning technique (Bengio & Thibodeau-Laufer, 2013) to globally train deep generative model. 
We present the supervised extension of GSN, which learns a Markov chain to sample from a conditional distribution, and applied it to protein structure prediction. To scale the model to full-sized, high-dimensional data, like protein sequences with hundreds of amino acids, we introduce a convolutional architecture, which allows efficient learning across multiple layers of hierarchical representations. Our architecture uniquely focuses on predicting structured low-level labels informed with both low and high-level representations learned by the model. In our application this corresponds to labeling the secondary structure state of each amino-acid residue. We trained and tested the model on separate sets of non-homologous proteins sharing less than 30% sequence identity. Our model achieves 66.4% Q8 accuracy on the CB513 dataset, better than the previously reported best performance 64.9% (Wang et al., 2011) for this challenging secondary structure prediction problem. <s> BIB005 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. STRUCTURAL INFORMATION <s> BackgroundSecondary structures prediction of proteins is important to many protein structure modeling applications. Correct prediction of secondary structures can significantly reduce the degrees of freedom in protein tertiary structure modeling and therefore reduces the difficulty of obtaining high resolution 3D models.MethodsIn this work, we investigate a template-based approach to enhance 8-state secondary structure prediction accuracy. We construct structural templates from known protein structures with certain sequence similarity. The structural templates are then incorporated as features with sequence and evolutionary information to train two-stage neural networks. In case of structural templates absence, heuristic structural information is incorporated instead.ResultsAfter applying the template-based 8-state secondary structure prediction method, the 7-fold cross-validated Q8 accuracy is 78.85%. Even templates from structures with only 20%~30% sequence similarity can help improve the 8-state prediction accuracy. More importantly, when good templates are available, the prediction accuracy of less frequent secondary structures, such as 3-10 helices, turns, and bends, are highly improved, which are useful for practical applications.ConclusionsOur computational results show that the templates containing structural information are effective features to enhance 8-state secondary structure predictions. Our prediction algorithm is implemented on a web server named "C8-SCORPION" available at: http://hpcr.cs.odu.edu/c8scorpion. <s> BIB006 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. STRUCTURAL INFORMATION <s> BackgroundThe advent of human genome sequencing project has led to a spurt in the number of protein sequences in the databanks. Success of structure based drug discovery severely hinges on the availability of structures. Despite significant progresses in the area of experimental protein structure determination, the sequence-structure gap is continually widening. Data driven homology based computational methods have proved successful in predicting tertiary structures for sequences sharing medium to high sequence similarities. With dwindling similarities of query sequences, advanced homology/ ab initio hybrid approaches are being explored to solve structure prediction problem. 
Here we describe Bhageerath-H, a homology/ ab initio hybrid software/server for predicting protein tertiary structures with advancing drug design attempts as one of the goals.ResultsBhageerath-H web-server was validated on 75 CASP10 targets which showed TM-scores ≥0.5 in 91% of the cases and Cα RMSDs ≤5Å from the native in 58% of the targets, which is well above the CASP10 water mark. Comparison with some leading servers demonstrated the uniqueness of the hybrid methodology in effectively sampling conformational space, scoring best decoys and refining low resolution models to high and medium resolution.ConclusionBhageerath-H methodology is web enabled for the scientific community as a freely accessible web server. The methodology is fielded in the on-going CASP11 experiment. <s> BIB007 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. STRUCTURAL INFORMATION <s> In this paper, we evaluated the performance of an evolutionary-based protein secondary structure (PSS) prediction model which uses the information of amino acid sequences extracted by a clustering technique. The dimension of the classifier's inputs is reduced using a k-means clustering method on sequence segments. The proposed PSS classifier is based on a Genetic Programming (GP) approach that uses IF rules for a multi-target classifier. The GP classifier is evaluated by using protein sequences and the sequence information obtained from the k-means clustering. The GP prediction model's performance is compared with those of feed-forward artificial neural networks (ANNs) and support vector machines (SVMs). The prediction methods are examined with two protein datasets RS126 and CB513. The performance of the three classification models are measured according to Q 3 and segment overlap (SOV) scores. The prediction models which use clustered data result in average 2% higher prediction accuracy than those using sequence data. In addition, the experimental results indicate the GP model's prediction scores are in average 3% higher than those of the ANN and SVMs models when amino acid sequences or clustered information are explored. <s> BIB008 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. STRUCTURAL INFORMATION <s> Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80 percent and many wonder if prediction cannot be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphical processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. 
This deep learning network approach was used to predict SS for a fully independent test dataset of 198 proteins, achieving a Q3 accuracy of 80.7 percent and a Sov accuracy of 74.2 percent. <s> BIB009 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. STRUCTURAL INFORMATION <s> Protein secondary structure prediction is an important problem in bioinformatics. Inspired by the recent successes of deep neural networks, in this paper, we propose an end-to-end deep network that predicts protein secondary structures from integrated local and global contextual features. Our deep architecture leverages convolutional neural networks with different kernel sizes to extract multiscale local contextual features. In addition, considering long-range dependencies existing in amino acid sequences, we set up a bidirectional neural network consisting of gated recurrent unit to capture global contextual features. Furthermore, multi-task learning is utilized to predict secondary structure labels and amino-acid solvent accessibility simultaneously. Our proposed deep network demonstrates its effectiveness by achieving state-of-the-art performance, i.e., 69.7% Q8 accuracy on the public benchmark CB513, 76.9% Q8 accuracy on CASP10 and 73.1% Q8 accuracy on CASP11. Our model and results are publicly available. <s> BIB010
|
Protein sequences exhibit various lengths; chains with fewer than 50 amino acids are generally referred to as peptides and contain only primary-level information. At the secondary-structure level, folding forms common local structures such as α-helices and β-sheets (formed from β-strands); stretches without such regular structure are referred to as random coil. Further folding of the secondary-structure elements yields the tertiary structure. Some proteins consist of more than one polypeptide chain, and hence of multiple tertiary structures; this higher-level arrangement is referred to as quaternary structure. We illustrate the 3D structure of protective antigen (UniProt ID: 'P13423') in Fig. 2 . Experimental protein-structure determination in the wet lab, by X-ray crystallography, NMR spectroscopy or cryo-electron microscopy, is extremely time-consuming and expensive. Therefore, ab initio methods based on computational modelling are a current focus of academic and industrial research. Owing to the limitations of these experimental methods, fewer than 0.5% of all sequenced proteins have solved structures BIB008 . Studies of secondary structure prediction rely on the Dictionary of Protein Secondary Structure (DSSP); the secondary-structure level is better defined and clearer than the tertiary and quaternary levels. Additionally, secondary structure can be analysed efficiently using the sequence information of the primary structure. The secondary structure is classically predefined with three types of motifs, α-helix, β-strand and coil, and prediction over these three states is measured by Q3 accuracy BIB010 , BIB003 - BIB009 . Statistical models and machine-learning methods have steadily improved Q3 predictive accuracy from 65% to 80%. Recently, a more challenging problem targeting the eight-category prediction (Q8) defined in DSSP was described. These eight categories describe the secondary structure with additional elements: 3₁₀-helix, α-helix, π-helix, β-strand, β-bridge, β-turn, bend and loop/irregular BIB005 , BIB006 . Achieving more accurate secondary structure predictions requires not only an efficient model but also sufficient feature representations derived from the sequence information; the models involved are introduced in Section 4. The key challenge in secondary structure prediction concerns proteins that have no close homologs and no experimentally verified 3D structures. To achieve sufficient feature representations, most studies combine protein-sequence information, amino acid profile information, and local and global sequence context BIB010 , BIB002 , BIB001 , . In this survey, we focus first on the eight-category secondary structure prediction. Fig. 3 provides an example of the tertiary structure of the protective antigen protein (UniProt ID: P13423). Prediction at this level normally involves homology modelling BIB007 , also known as comparative modelling, in which candidate structures are derived from amino acid sequence alignment by mapping amino acids between different sequences. Homology modelling exploits the evolutionary observation that proteins with similar amino acid sequences tend to share similar tertiary structures and accomplish related biological functions BIB004 . Structural information is a prerequisite for structural interaction networks, given that it provides atom-level information.
In Section 3, we will describe related databases available for acquiring such information.
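The relation between the Q8 and Q3 label sets just described can be stated concretely. The sketch below shows the conventional reduction of the eight DSSP states to the three Q3 classes; several reduction schemes exist in the literature, and this is only the most common one.

```python
# Conventional DSSP 8-state -> 3-state reduction (one common scheme;
# some studies instead map G and B to coil).
Q8_TO_Q3 = {
    "H": "H",  # alpha-helix
    "G": "H",  # 3-10 helix
    "I": "H",  # pi-helix
    "E": "E",  # beta-strand
    "B": "E",  # beta-bridge
    "T": "C",  # beta-turn
    "S": "C",  # bend
    "L": "C",  # loop / irregular (also written '-' or 'C' in some tools)
}

def reduce_to_q3(q8_labels):
    """Map a string of DSSP Q8 labels to Q3 labels, defaulting to coil."""
    return "".join(Q8_TO_Q3.get(s, "C") for s in q8_labels)

print(reduce_to_q3("HHHGEELTS"))  # -> 'HHHHEECCC'
```

This mapping explains why Q8 prediction is strictly harder than Q3: a Q8 predictor induces a Q3 predictor for free, but not vice versa.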
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. DOMAIN-DOMAIN INTERACTIONS <s> Recent advances in high-throughput experimental methods for the identification of protein interactions have resulted in a large amount of diverse data that are somewhat incomplete and contradictory. As valuable as they are, such experimental approaches studying protein interactomes have certain limitations that can be complemented by the computational methods for predicting protein interactions. In this review we describe different approaches to predict protein interaction partners as well as highlight recent achievements in the prediction of specific domains mediating protein-protein interactions. We discuss the applicability of computational methods to different types of prediction problems and point out limitations common to all of them. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. DOMAIN-DOMAIN INTERACTIONS <s> The database of 3D interacting domains (3did) is a collection of protein interactions for which high-resolution 3D structures are known. 3did exploits structural information to provide the crucial molecular details necessary for understanding how protein interactions occur. Besides interactions between globular domains, the new release of 3did also contains a hand-curated set of transient peptide-mediated interactions. The interactions are grouped in Interaction Types, based on the mode of binding, and the different binding interfaces used in each type are also identified and catalogued. A web-based tool to query 3did is available at http://3did.irbbarcelona.org. <s> BIB002 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. DOMAIN-DOMAIN INTERACTIONS <s> DOMINE is a comprehensive collection of known and predicted domain–domain interactions (DDIs) compiled from 15 different sources. The updated DOMINE includes 2285 new domain–domain interactions (DDIs) inferred from experimentally characterized high-resolution three-dimensional structures, and about 3500 novel predictions by five computational approaches published over the last 3 years. These additions bring the total number of unique DDIs in the updated version to 26 219 among 5140 unique Pfam domains, a 23% increase compared to 20 513 unique DDIs among 4346 unique domains in the previous version. The updated version now contains 6634 known DDIs, and features a new classification scheme to assign confidence levels to predicted DDIs. DOMINE will serve as a valuable resource to those studying protein and domain interactions. Most importantly, DOMINE will not only serve as an excellent reference to bench scientists testing for new interactions but also to bioinformaticans seeking to predict novel protein–protein interactions based on the DDIs. The contents of the DOMINE are available at http://domine.utdallas.edu. <s> BIB003 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. DOMAIN-DOMAIN INTERACTIONS <s> The database iPfam, available at http://ipfam.org, catalogues Pfam domain interactions based on known 3D structures that are found in the Protein Data Bank, providing interaction data at the molecular level. 
Previously, the iPfam domain–domain interaction data was integrated within the Pfam database and website, but it has now been migrated to a separate database. This allows for independent development, improving data access and giving clearer separation between the protein family and interactions datasets. In addition to domain–domain interactions, iPfam has been expanded to include interaction data for domain bound small molecule ligands. Functional annotations are provided from source databases, supplemented by the incorporation of Wikipedia articles where available. iPfam (version 1.0) contains >9500 domain–domain and 15 500 domain–ligand interactions. The new website provides access to this data in a variety of ways, including interactive visualizations of the interaction data. <s> BIB004
|
Given a protein sequence, protein domains are distinctive functional or structural subsegments. Most protein domains fold into independently stable 3D structures, and domains combine in different arrangements to form unique proteins with different functions BIB003 . Therefore, binary PPI networks can be further considered at the domain level, especially when the interacting proteins are large. Although most proteins consist of multiple domains, an interacting protein pair often involves only a single domain-domain interaction at the actual binding site. Domain-level interactions thus provide a finer-resolution view of the binary PPI network. For HPPPI investigations, this reveals the interaction location or pathological interactions and can help guide drug development targeting infectious diseases. To acquire a comprehensive understanding of how domain interactions are mediated, the primary method involves analysis of individual interactions using experimentally determined 3D structures. However, this information is available for only a small fraction of proteins, meaning that domain-level PPI data are not readily accessible. Several existing databases, including 3did BIB002 and iPfam BIB004 , provide domain-domain interactions identified from experimentally determined 3D structures. Other databases provide combined interactions, in which part of the data is derived experimentally and the rest is computationally predicted. DOMINE BIB001 includes both 3D-structure-based and predicted domain-domain interactions, and reports the predicted interactions at three confidence levels, namely 'High', 'Middle' and 'Low'. Two primary methods, association BIB001 and maximum-likelihood estimation , have been introduced for this domain-domain interaction-prediction task. The essential information utilised by these models includes domain information from the protein sequence and binary PPI information. To provide a general understanding of domain-domain interactions associated with binary PPIs, Fig. 4 shows a basic diagram for domain-domain-interaction prediction : 'Protein A' interacts with 'Protein B', while 'Protein C' does not interact with 'Protein D'. The domains of each protein are identified using the related databases, with the Protein Data Bank (PDB) as the commonly suggested source. The domain compositions of the two groups of protein pairs are then compared to identify the interacting domains between the proteins.
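As a concrete illustration of the association method for the setting of Fig. 4, the sketch below scores each candidate domain pair by the fraction of protein pairs containing it that are observed to interact. The protein names, domain identifiers and labels are illustrative assumptions mirroring the figure; the cited methods additionally resolve multi-domain ambiguity, e.g. via maximum-likelihood estimation.

```python
from itertools import product

# Illustrative inputs: protein -> set of Pfam-style domain identifiers,
# and labelled protein pairs (1 = interacting, 0 = non-interacting).
domains = {
    "ProteinA": {"PF00001", "PF00002"},
    "ProteinB": {"PF00010"},
    "ProteinC": {"PF00002"},
    "ProteinD": {"PF00010"},
}
pairs = [("ProteinA", "ProteinB", 1), ("ProteinC", "ProteinD", 0)]

def association_scores(domains, pairs):
    """Score(d1, d2) = (# interacting pairs containing d1-d2) /
                       (# all labelled pairs containing d1-d2)."""
    hits, totals = {}, {}
    for p, q, label in pairs:
        for d1, d2 in product(domains[p], domains[q]):
            key = tuple(sorted((d1, d2)))
            totals[key] = totals.get(key, 0) + 1
            hits[key] = hits.get(key, 0) + label
    return {k: hits[k] / totals[k] for k in totals}

for pair, score in association_scores(domains, pairs).items():
    print(pair, score)
# ('PF00001', 'PF00010') scores 1.0, while ('PF00002', 'PF00010')
# scores 0.5 because PF00002 also occurs in the non-interacting C-D pair.
```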
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. PROTEIN FAMILIES AND DOMAIN DATABASES <s> The database of 3D interacting domains (3did) is a collection of protein interactions for which high-resolution 3D structures are known. 3did exploits structural information to provide the crucial molecular details necessary for understanding how protein interactions occur. Besides interactions between globular domains, the new release of 3did also contains a hand-curated set of transient peptide-mediated interactions. The interactions are grouped in Interaction Types, based on the mode of binding, and the different binding interfaces used in each type are also identified and catalogued. A web-based tool to query 3did is available at http://3did.irbbarcelona.org. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. PROTEIN FAMILIES AND DOMAIN DATABASES <s> Pfam, available via servers in the UK (http://pfam.sanger.ac.uk/) and the USA (http://pfam.janelia.org/), is a widely used database of protein families, containing 14 831 manually curated entries in the current release, version 27.0. Since the last update article 2 years ago, we have generated 1182 new families and maintained sequence coverage of the UniProt Knowledgebase (UniProtKB) at nearly 80%, despite a 50% increase in the size of the underlying sequence database. Since our 2012 article describing Pfam, we have also undertaken a comprehensive review of the features that are provided by Pfam over and above the basic family data. For each feature, we determined the relevance, computational burden, usage statistics and the functionality of the feature in a website context. As a consequence of this review, we have removed some features, enhanced others and developed new ones to meet the changing demands of computational biology. Here, we describe the changes to Pfam content. Notably, we now provide family alignments based on four different representative proteome sequence data sets and a new interactive DNA search interface. We also discuss the mapping between Pfam and known 3D structures. <s> BIB002 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. PROTEIN FAMILIES AND DOMAIN DATABASES <s> The database iPfam, available at http://ipfam.org, catalogues Pfam domain interactions based on known 3D structures that are found in the Protein Data Bank, providing interaction data at the molecular level. Previously, the iPfam domain–domain interaction data was integrated within the Pfam database and website, but it has now been migrated to a separate database. This allows for independent development, improving data access and giving clearer separation between the protein family and interactions datasets. In addition to domain–domain interactions, iPfam has been expanded to include interaction data for domain bound small molecule ligands. Functional annotations are provided from source databases, supplemented by the incorporation of Wikipedia articles where available. iPfam (version 1.0) contains >9500 domain–domain and 15 500 domain–ligand interactions. The new website provides access to this data in a variety of ways, including interactive visualizations of the interaction data. <s> BIB003 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. 
PROTEIN FAMILIES AND DOMAIN DATABASES <s> The database of 3D interacting domains (3did, available online for browsing and bulk download at http://3did.irbbarcelona.org) is a catalog of protein–protein interactions for which a high-resolution 3D structure is known. 3did collects and classifies all structural templates of domain–domain interactions in the Protein Data Bank, providing molecular details for such interactions. The current version also includes a pipeline for the discovery and annotation of novel domain–motif interactions. For every interaction, 3did identifies and groups different binding modes by clustering similar interfaces into ‘interaction topologies’. By maintaining a constantly updated collection of domain-based structural interaction templates, 3did is a reference source of information for the structural characterization of protein interaction networks. 3did is updated every 6 months. <s> BIB004 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. PROTEIN FAMILIES AND DOMAIN DATABASES <s> In the last two years the Pfam database (http://pfam.xfam.org) has undergone a substantial reorganisation to reduce the effort involved in making a release, thereby permitting more frequent releases. Arguably the most significant of these changes is that Pfam is now primarily based on the UniProtKB reference proteomes, with the counts of matched sequences and species reported on the website restricted to this smaller set. Building families on reference proteomes sequences brings greater stability, which decreases the amount of manual curation required to maintain them. It also reduces the number of sequences displayed on the website, whilst still providing access to many important model organisms. Matches to the full UniProtKB database are, however, still available and Pfam annotations for individual UniProtKB sequences can still be retrieved. Some Pfam entries (1.6%) which have no matches to reference proteomes remain; we are working with UniProt to see if sequences from them can be incorporated into reference proteomes. Pfam-B, the automatically-generated supplement to Pfam, has been removed. The current release (Pfam 29.0) includes 16 295 entries and 559 clans. The facility to view the relationship between families within a clan has been improved by the introduction of a new tool. <s> BIB005
|
As an important database of protein domains and families, Pfam provides a comprehensive map of protein domains and families BIB002 , BIB005 . It is regularly updated; at the time of writing, the latest version is Pfam 31.0, released in March 2017 and containing 16,712 protein families. Although amino acids are the elements comprising a protein sequence, functions arise in multi-residue sequence regions called domains. Identifying these domains provides details and insights into the functional mechanism of the protein. Structural information supplies bond-level details of the interactions between proteins, which is more concrete than the binary HPPPI networks provided by HPPPI databases. Therefore, iPfam is used in SIN studies to identify domain-domain interactions between proteins BIB003 . iPfam was developed by the Howard Hughes Medical Institute and currently harbors more than 9,500 domain-domain interactions. iPfam builds on two continuously updated databases, PDB and Pfam, which are well established for 3D structure and domain information, respectively; most structures in the PDB also contain multiple domains. 3did is another domain-domain interaction database for 3D-interacting domains between proteins, collecting protein interactions for which high-resolution 3D structures are known BIB001 , BIB004 . By using iPfam and 3did to achieve domain-level resolution of HPPPIs, a SIN considers proteins in their precise spatial relationships by layering domain-domain interactions on top of the conventional PPI networks. As protein-sequence information accumulates at a staggering rate, these data exhibit the big-data characteristics of high volume, high velocity, high variety, high value and high veracity (the 5Vs). This, together with big-data analytics, including machine-learning technologies, makes the structural and domain-domain-interaction prediction problems tractable. In the following section, we introduce the related computational models and methods for SIN construction, including machine-learning methodologies.
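As a practical aside before turning to those models, the snippet below sketches how the atom-level information stored in these repositories can be read programmatically, here using Biopython's Bio.PDB module; the use of Biopython and the local file name are assumptions, and any structure file downloaded from the PDB would do.

```python
from Bio.PDB import PDBParser  # pip install biopython

# Assumed local file, e.g. a PDB entry downloaded beforehand;
# replace "example.pdb" with any structure file of interest.
parser = PDBParser(QUIET=True)
structure = parser.get_structure("example", "example.pdb")

# PDB hierarchy: structure -> model -> chain -> residue -> atom.
for model in structure:
    for chain in model:
        residues = [r for r in chain if r.id[0] == " "]  # standard residues only
        print(f"Chain {chain.id}: {len(residues)} residues")
        for residue in residues[:3]:
            if "CA" in residue:
                # C-alpha coordinates give the atom-level detail a SIN needs.
                print(residue.get_resname(), residue["CA"].coord)
    break  # the first model suffices for X-ray structures
```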
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> A. BAYESIAN STATISTICS <s> A comparison of neural network methods, and Bayesian statistical methods, is presented for prediction of the secondary structure of proteins given their primary sequence. The Bayesian method makes the unphysical assumption that the probability of an amino acid occurring in each position in the protein is independent of the amino acids occurring elsewhere. However, we find the predictive accuracy of the Bayesian method to be only minimally less than the accuracy of the most sophisticated methods used to date. We present the relationship of neural network methods to Bayesian statistical methods and show that in principle neural methods offer considerable power, although apparently it is not particularly useful for this problem. In the process, we derive a neural formalism in which the output neurons directly represent the conditional probabilities of structure class. The probabilistic formalism allows introduction of a new objective function, the mutual information, which translates the notion of correlation as a measure of predictive accuracy into a useful training measure. Although a similar accuracy to other approaches (utilising a Mean Square Error) is achieved using this new measure, the accuracy on the training set is significantly, and tantalisingly, higher, even though the number of adjustable parameters remains the same. The mutual information measure predicts a greater fraction of helix and sheet structures correctly than the mean square error measure, at the expense of coil accuracy -- precisely as it was designed to do. By combining the two objective functions, we obtain a marginally improved accuracy of 64.4%, with Mathews coefficients $C_\alpha$, $C_\beta$ and $C_{coil}$ of 0.40, 0.32 and 0.42 respectively. However, since all methods to date perform only slightly better than the Bayes algorithm which entails the drastic assumption of independence of amino acids, one is forced to conclude that little progress has been made on this problem despite the application of a variety of sophisticated algorithms such as neural networks, and that further advances will require a better understanding of the relevant biophysics. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> A. BAYESIAN STATISTICS <s> Summary: We have created the GOR V web server for protein secondary structure prediction. The GOR V algorithm combines information theory, Bayesian statistics and evolutionary information. In its fifth version, the GOR method reached (with the full jack-knife procedure) an accuracy of prediction Q3 of 73.5%. Although GOR V has been among the most successful methods, its online unavailability has been a deterrent to its popularity. Here, we remedy this situation by creating the GOR V server. ::: ::: Availability: The GOR V server is freely accessible to public users and private institutions at http://gor.bb.iastate.edu/ ::: ::: Contact: [email protected] <s> BIB002
|
The earliest studies on protein secondary structure prediction mainly focused on the use of Bayesian statistics BIB001 - BIB002 . Basically, Bayesian statistics describes this problem through the information function

I(S; R) = log [ P(S|R) / P(S) ]    (1)

where P(S|R) is the conditional probability of observing a conformation S when a residue (amino acid) R is present, and P(S) is the probability of observing S.
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> According to the conditional probabilities definition, P(S|R) = P(S, R)/P(R). P(S, R) <s> Publisher Summary This chapter discusses the Garnier–Osguthorpe–Robson (GOR) method for predicting protein secondary structure from amino acid sequence. The chapter presents the major principles used by the GOR method and some results obtained with an updated version of this method. The GOR method is one of the most popular of the secondary structure prediction schemes. Through the successive incorporation of observed frequencies of single, then pairs of residues on a local sequence of 17 residues, the accuracy of the GOR method has improved from about 55% up to 64.4%. The GOR method has the advantage over neural network-based methods or nearest-neighbor methods in that it clearly identifies what is taken into account for the prediction and what is neglected. The method provides estimates of probabilities for the three secondary structures at each residue position that can be useful for further application of the method. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> According to the conditional probabilities definition, P(S|R) = P(S, R)/P(R). P(S, R) <s> Summary: We have created the GOR V web server for protein secondary structure prediction. The GOR V algorithm combines information theory, Bayesian statistics and evolutionary information. In its fifth version, the GOR method reached (with the full jack-knife procedure) an accuracy of prediction Q3 of 73.5%. Although GOR V has been among the most successful methods, its online unavailability has been a deterrent to its popularity. Here, we remedy this situation by creating the GOR V server. ::: ::: Availability: The GOR V server is freely accessible to public users and private institutions at http://gor.bb.iastate.edu/ ::: ::: Contact: [email protected] <s> BIB002
|
is the joint probability of S and R. Through the use of Eq. (1), I(S; R) can be estimated from a database of known protein sequences and their corresponding secondary structures. Specifically, a previous study BIB001 showed that the Garnier-Osguthorpe-Robson (GOR) method, based on information theory, used a 17-amino-acid sequence window to extract properties from protein sequences. The GOR method incorporates the observed frequencies of single residues, and then of pairs of residues, within this local window of 17 residues to build the Bayesian model, followed by estimation of the probabilities of the Q3 structures; this incremental refinement increased the accuracy from about 55% to 64.4%. Later, the GOR V algorithm BIB002 combined information theory, Bayesian statistics and evolutionary information over the twenty known amino acid types to achieve a Q3 accuracy of 73.5%.
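A minimal sketch of this estimation follows: the singleton information values I(S; R) of Eq. (1) are computed from label/residue co-occurrence counts in a training set, and a residue is assigned the highest-scoring state. The pair terms and the 17-residue window of the full GOR method are omitted for brevity, the toy training data are purely illustrative, and add-one smoothing is an assumption made to keep the sketch well defined.

```python
import math
from collections import Counter

# Toy training data: (sequence, Q3 labels) pairs -- purely illustrative.
train = [("MKTAYIAK", "HHHHCCEE"), ("GAVLIVKR", "EEEECHHH")]

pair_counts, res_counts, state_counts, n = Counter(), Counter(), Counter(), 0
for seq, ss in train:
    for aa, s in zip(seq, ss):
        pair_counts[(s, aa)] += 1
        res_counts[aa] += 1
        state_counts[s] += 1
        n += 1

def information(s, aa):
    """I(S; R) = log [ P(S|R) / P(S) ], Eq. (1), with add-one smoothing."""
    p_s_given_r = (pair_counts[(s, aa)] + 1) / (res_counts[aa] + 3)
    p_s = state_counts[s] / n
    return math.log(p_s_given_r / p_s)

# Predict the state of residue 'A' as the argmax over the three states.
print(max("HEC", key=lambda s: information(s, "A")))
```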
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. SUPPORT VECTOR MACHINE (SVM) <s> We present a new method for predicting the secondary structure of globular proteins based on non-linear neural network models. Network models learn from existing protein structures how to predict the secondary structure of local sequences of amino acids. The average success rate of our method on a testing set of proteins non-homologous with the corresponding training set was 64.3% on three types of secondary structure (alpha-helix, beta-sheet, and coil), with correlation coefficients of C alpha = 0.41, C beta = 0.31 and Ccoil = 0.41. These quality indices are all higher than those of previous methods. The prediction accuracy for the first 25 residues of the N-terminal sequence was significantly better. We conclude from computational experiments on real and artificial structures that no method based solely on local information in the protein sequence is likely to produce significantly better results for non-homologous proteins. The performance of our method of homologous proteins is much better than for non-homologous proteins, but is not as good as simply assuming that homologous sequences have identical structures. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. SUPPORT VECTOR MACHINE (SVM) <s> We have introduced a new method of protein secondary structure prediction which is based on the theory of support vector machine (SVM). SVM represents a new approach to supervised pattern classification which has been successfully applied to a wide range of pattern recognition problems, including object recognition, speaker identification, gene function prediction with microarray expression profile, etc. In these cases, the performance of SVM either matches or is significantly better than that of traditional machine learning approaches, including neural networks.The first use of the SVM approach to predict protein secondary structure is described here. Unlike the previous studies, we first constructed several binary classifiers, then assembled a tertiary classifier for three secondary structure states (helix, sheet and coil) based on these binary classifiers. The SVM method achieved a good performance of segment overlap accuracy SOV=76.2 % through sevenfold cross validation on a database of 513 non-homologous protein chains with multiple sequence alignments, which out-performs existing methods. Meanwhile three-state overall per-residue accuracy Q(3) achieved 73.5 %, which is at least comparable to existing single prediction methods. Furthermore a useful "reliability index" for the predictions was developed. In addition, SVM has many attractive features, including effective avoidance of overfitting, the ability to handle large feature spaces, information condensing of the given data set, etc. The SVM method is conveniently applied to many other pattern classification tasks in biology. <s> BIB002 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. SUPPORT VECTOR MACHINE (SVM) <s> The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. 
Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. ::: ::: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition. <s> BIB003
|
The use of SVMs to predict protein secondary structure was first introduced in 2001 BIB002 , the SVM itself having been proposed in 1995 BIB003 . It was not the first machine-learning approach applied to protein secondary structure prediction, yet at the time it achieved the best overall performance on the Q3 task. As in earlier research using neural-network-based methods BIB001 , the encoding scheme for the input layer is a local-coding scheme that denotes every amino acid with a 21-dimensional orthogonal binary vector, e.g., (1, 0, . . . , 0) for the first amino acid type, (0, 1, 0, . . . , 0) for the second, and so on, with the 21st dimension marking window positions beyond the chain termini. For the output layer, the Q3 task was first decomposed into several binary classifiers, which were then assembled into a tertiary classifier for the three secondary structure states. A previous study BIB002 considered the SVM a superior model owing to its effective avoidance of overfitting and its ability to handle large feature spaces. In detail, the authors BIB002 selected the radial basis function as the kernel for training the SVM, achieving a Q3 accuracy of 73.5%.
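A hedged sketch of this setup with scikit-learn is shown below: windows of locally coded residues are classified into the three states with an RBF-kernel SVM. The window size, the random stand-in data and the library choice are illustrative assumptions, not the cited configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Illustrative stand-in data: 200 windows of 13 residues, each residue
# encoded as a 21-dimensional one-hot vector (20 amino acids + 1
# terminal spacer), flattened to 13 * 21 = 273 features.
X = rng.integers(0, 2, size=(200, 13 * 21)).astype(float)
y = rng.integers(0, 3, size=200)  # 0: helix, 1: sheet, 2: coil

# RBF-kernel SVM; scikit-learn assembles the tertiary (3-class)
# classifier from binary ones internally (one-vs-one), mirroring the
# assembly strategy described above.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(clf.predict(X[:5]))
```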
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. RANDOM FORESTS <s> Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, aaa, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. RANDOM FORESTS <s> We investigate the idea of using diversified multiple trees for Microarray data classification. We propose an algorithm of Maximally Diversified Multiple Trees (MDMT), which makes use of a set of unique trees in the decision committee. We compare MDMT with some well-known ensemble methods, namely AdaBoost, Bagging, and Random Forests. We also compare MDMT with a diversified decision tree algorithm, Cascading and Sharing trees (CS4), which forms the decision committee by using a set of trees with distinct roots. Based on seven Microarray data sets, both MDMT and CS4 are more accurate on average than AdaBoost, Bagging, and Random Forests. Based on a sign test of 95% confidence, both MDMT and CS4 perform better than majority traditional ensemble methods tested. We discuss differences between MDMT and CS4. <s> BIB002 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. RANDOM FORESTS <s> Motivation: Identification of protein interaction sites has significant impact on understanding protein function, elucidating signal transduction networks and drug design studies. With the exponentially growing protein sequence data, predictive methods using sequence information only for protein interaction site prediction have drawn increasing interest. In this article, we propose a predictive model for identifying protein interaction sites. Without using any structure data, the proposed method extracts a wide range of features from protein sequences. A random forest-based integrative model is developed to effectively utilize these features and to deal with the imbalanced data classification problem commonly encountered in binding site predictions. ::: ::: Results: We evaluate the predictive method using 2829 interface residues and 24 616 non-interface residues extracted from 99 polypeptide chains in the Protein Data Bank. The experimental results show that the proposed method performs significantly better than two other sequence-based predictive methods and can reliably predict residues involved in protein interaction sites. 
Furthermore, we apply the method to predict interaction sites and to construct three protein complexes: the DnaK molecular chaperone system, 1YUW and 1DKG, which provide new insight into the sequence–function relationship. We show that the predicted interaction sites can be valuable as a first approach for guiding experimental methods investigating protein–protein interactions and localizing the specific interface residues. ::: ::: Availability: Datasets and software are available at http://ittc.ku.edu/~xwchen/bindingsite/prediction. ::: ::: Contact: [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online. <s> BIB003 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> C. RANDOM FORESTS <s> Predicting protein-protein interaction (PPI) sites from protein sequences is still a challenge task in computational biology. There exists a severe class imbalance phenomenon in predicting PPI sites, which leads to a decrease in overall performance for traditional statistical machine-learning-based classifiers, such as SVM and random forests. In this study, an ensemble of SVM and sample-weighted random forests (SSWRF) was proposed to deal with class imbalance. An SVM classifier was trained and applied to estimate the weights of training samples. Then, the training samples with estimated weights were utilized to train a sample-weighted random forests (SWRF). In addition, a lower-dimensional feature representation method, which consists of evolutionary conservation, hydrophobic property, solvent accessibility features derived from a target residue and its neighbors, was developed to improve the discriminative capability for PPI sites prediction. The analysis of feature importance shows that the proposed feature representation method is an effective representation for predicting PPI sites. The proposed SSWRF achieved 22.4% and 35.1% in MCC and F-measure, respectively, on independent validation dataset Dtestset72, and achieved 15.2% and 36.5% in MCC and F-measure, respectively, on PDBtestset164. Computational comparisons between existing PPI sites predictors on benchmark datasets demonstrated that the proposed SSWRF is effective for PPI sites prediction and outperforms the state-of-the-art sequence-based method (i.e., LORIS) released most recently. The benchmark datasets used in this study and the source codes of the proposed method are publicly available at http://csbio.njust.edu.cn/bioinf/SSWRF for academic use. <s> BIB004
|
Apart from secondary structure prediction, domain-domain interaction prediction is also critical to the SIN. The random forest model was introduced to build a multi-classifier over a dataset with 1050-dimensional features BIB003 . Additionally, another study BIB004 showed that an ensemble model of random forests and SVMs was able to predict domain-interacting sites. Derived from the decision-tree model, random forests leverage the power of randomisation to increase model performance BIB002 , BIB001 . They are able to deal with imbalanced-data problems via the voting mechanism, while their random feature selection benefits the model in the case of high-dimensional data.
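The imbalance-aware use of random forests described above can be sketched as follows with scikit-learn; the synthetic 1050-dimensional data mimic the feature dimensionality of BIB003 , and balanced class weights are one simple way (among several) to counter the dominance of non-interface residues.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for sequence-derived residue features:
# 1000 residues x 1050 features, ~10% labelled as interface sites.
X = rng.normal(size=(1000, 1050))
y = (rng.random(1000) < 0.1).astype(int)

# Random feature selection at each split handles the high dimension;
# balanced class weights mitigate the interface/non-interface imbalance.
clf = RandomForestClassifier(
    n_estimators=200,
    max_features="sqrt",
    class_weight="balanced",
    random_state=0,
).fit(X, y)

print(clf.predict_proba(X[:5])[:, 1])  # interface-site probabilities
```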
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> D. ARTIFICIAL NEURAL NETWORKS <s> We present a new method for predicting the secondary structure of globular proteins based on non-linear neural network models. Network models learn from existing protein structures how to predict the secondary structure of local sequences of amino acids. The average success rate of our method on a testing set of proteins non-homologous with the corresponding training set was 64.3% on three types of secondary structure (alpha-helix, beta-sheet, and coil), with correlation coefficients of C alpha = 0.41, C beta = 0.31 and Ccoil = 0.41. These quality indices are all higher than those of previous methods. The prediction accuracy for the first 25 residues of the N-terminal sequence was significantly better. We conclude from computational experiments on real and artificial structures that no method based solely on local information in the protein sequence is likely to produce significantly better results for non-homologous proteins. The performance of our method of homologous proteins is much better than for non-homologous proteins, but is not as good as simply assuming that homologous sequences have identical structures. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> D. ARTIFICIAL NEURAL NETWORKS <s> Publisher Summary The first step in a PHD prediction is generating a multiple sequence alignment. The second step involves feeding the alignment into a neural network system. Correctness of the multiple sequence alignment is as crucial for prediction accuracy as is the fact that the alignment contains a broad spectrum of homologous sequences. This chapter describes three prediction methods that use evolutionary information as input to neural network systems to predict secondary structure (PHDsec), relative solvent accessibility (PHDacc), and transmembrane helices (PHDhtm). It illustrates the possibilities and limitations in practical applications of these methods with results from careful cross-validation experiments on large sets of unique protein structures. All predictions are made available by an automatic Email prediction service. The baseline conclusion after some 30,000 requests to the service is that 1-D predictions have become accurate enough to be used as a starting point for the expert-driven modeling of protein structure. <s> BIB002 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> D. ARTIFICIAL NEURAL NETWORKS <s> Compared with the protein 3-class secondary structure (SS) prediction, the 8-class prediction gains less attention and is also much more challenging, especially for proteins with few sequence homologs. This paper presents a new probabilistic method for 8-class SS prediction using conditional neural fields (CNFs), a recently invented probabilistic graphical model. This CNF method not only models the complex relationship between sequence features and SS, but also exploits the interdependency among SS types of adjacent residues. In addition to sequence profiles, our method also makes use of non-evolutionary information for SS prediction. Tested on the CB513 and RS126 data sets, our method achieves Q8 accuracy of 64.9 and 64.7%, respectively, which are much better than the SSpro8 web server (51.0 and 48.0%, respectively). 
Our method can also be used to predict other structure properties (e.g. solvent accessibility) of a protein or the SS of RNA. <s> BIB003 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> D. ARTIFICIAL NEURAL NETWORKS <s> Predicting protein secondary structure is a fundamental problem in protein structure prediction. Here we present a new supervised generative stochastic network (GSN) based method to predict local secondary structure with deep hierarchical representations. GSN is a recently proposed deep learning technique (Bengio & Thibodeau-Laufer, 2013) to globally train deep generative model. We present the supervised extension of GSN, which learns a Markov chain to sample from a conditional distribution, and applied it to protein structure prediction. To scale the model to full-sized, high-dimensional data, like protein sequences with hundreds of amino acids, we introduce a convolutional architecture, which allows efficient learning across multiple layers of hierarchical representations. Our architecture uniquely focuses on predicting structured low-level labels informed with both low and high-level representations learned by the model. In our application this corresponds to labeling the secondary structure state of each amino-acid residue. We trained and tested the model on separate sets of non-homologous proteins sharing less than 30% sequence identity. Our model achieves 66.4% Q8 accuracy on the CB513 dataset, better than the previously reported best performance 64.9% (Wang et al., 2011) for this challenging secondary structure prediction problem. <s> BIB004 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> D. ARTIFICIAL NEURAL NETWORKS <s> Protein secondary structure prediction is an important problem in bioinformatics. Inspired by the recent successes of deep neural networks, in this paper, we propose an end-to-end deep network that predicts protein secondary structures from integrated local and global contextual features. Our deep architecture leverages convolutional neural networks with different kernel sizes to extract multiscale local contextual features. In addition, considering long-range dependencies existing in amino acid sequences, we set up a bidirectional neural network consisting of gated recurrent unit to capture global contextual features. Furthermore, multi-task learning is utilized to predict secondary structure labels and amino-acid solvent accessibility simultaneously. Our proposed deep network demonstrates its effectiveness by achieving state-of-the-art performance, i.e., 69.7% Q8 accuracy on the public benchmark CB513, 76.9% Q8 accuracy on CASP10 and 73.1% Q8 accuracy on CASP11. Our model and results are publicly available. <s> BIB005
|
To the best of our knowledge, artificial neural networks were first introduced to protein secondary structure prediction as a fully connected three-layer network trained with back-propagation BIB001 . Later, the authors of BIB002 used a two-tier neural network architecture for prediction; however, the improvement in Q3 accuracy has since stalled. Recently, Q8 accuracy has become the focus of academia and industry, with deep learning techniques applied to improve performance. In BIB003 , probabilistic graphical models that combine conditional neural fields (CNFs) with neural networks were deployed to improve Q8 accuracy. The features are extracted from the position-specific scoring matrix (PSSM) and the physico-chemical properties of the amino acids. Both the complex relationship between sequence and secondary structure, and the interdependency among the secondary structure types of adjacent amino acids, were modelled with the CNFs BIB003 . Generative stochastic networks (GSNs) were utilised to learn a generative model of the data distribution without explicitly specifying a probabilistic graphical model BIB004 . Specifically, this supervised extension of GSNs learns a Markov chain to sample from a conditional distribution and was applied to tackle the Q8 problem of protein secondary structure prediction. In the data preprocessing step, a sequence length of 700 was chosen empirically as the cut-off threshold to balance efficiency and sequence coverage. The main features were evolutionary information (the PSSM feature) and sequence information (a one-hot binary vector). The model achieved 66.4% accuracy on the Q8 problem. The most recent Q8 result was reported in BIB005 , which proposed a deep convolutional and recurrent neural network; a minimal sketch of this style of architecture is given at the end of this subsection. The feature encoding of the protein sequence remained partially similar to the local-coding scheme. In this model, a feature embedding layer maps the sequence information and the profile feature (from PSI-BLAST) to a denser matrix; multiple convolutional layers and stacked bidirectional recurrent layers then learn local and global context from this matrix, and fully connected and softmax layers on top form the per-residue classifier. Considering further properties of protein structure, the iterative use of predicted features, including backbone angles and dihedrals based on Cα atoms, improves secondary structure prediction accuracy . Stacked sparse auto-encoders with three hidden layers of 150 neurons each were introduced, and the method achieved an accuracy of 80.8% on recent CASP targets. Various models have been discussed in this section; our goal, however, is to stack these different data types atop the binary HPPPI network to achieve structural principles analysis. In the following section, we discuss the structural interaction network.
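To make the convolutional-recurrent design of BIB005 more concrete, the following is a minimal, illustrative PyTorch sketch of a Q8 predictor combining multiscale 1-D convolutions with a bidirectional GRU. All layer sizes, kernel widths, and the 42-dimensional per-residue input (one-hot plus PSSM) are our own assumptions for illustration, not the published hyper-parameters.

```python
import torch
import torch.nn as nn

class Q8Predictor(nn.Module):
    def __init__(self, n_features=42, n_classes=8, hidden=128):
        super().__init__()
        # Multiscale local context: parallel 1-D convolutions with
        # different kernel sizes along the residue axis (cf. BIB005).
        self.convs = nn.ModuleList(
            [nn.Conv1d(n_features, 64, k, padding=k // 2) for k in (3, 7, 11)]
        )
        # Global context: a bidirectional GRU over the whole sequence.
        self.gru = nn.GRU(64 * 3, hidden, batch_first=True, bidirectional=True)
        # Per-residue classifier over the 8 DSSP secondary structure states.
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):            # x: (batch, length, n_features)
        x = x.transpose(1, 2)        # Conv1d expects (batch, channels, length)
        x = torch.cat([torch.relu(c(x)) for c in self.convs], dim=1)
        x, _ = self.gru(x.transpose(1, 2))
        return self.out(x)           # logits: (batch, length, n_classes)

model = Q8Predictor()
x = torch.randn(2, 700, 42)          # 700-residue window, one-hot + PSSM input
print(model(x).shape)                # torch.Size([2, 700, 8])
```

Training such a model with a per-residue cross-entropy loss over the eight DSSP states completes the pipeline described above.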
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> V. STRUCTURAL INTERACTION NETWORK <s> The Human Gene Mutation Database (HGMD®) is a comprehensive core collection of germline mutations in nuclear genes that underlie or are associated with human inherited disease. Here, we summarize the history of the database and its current resources. By December 2008, the database contained over 85,000 different lesions detected in 3,253 different genes, with new entries currently accumulating at a rate exceeding 9,000 per annum. Although originally established for the scientific study of mutational mechanisms in human genes, HGMD has since acquired a much broader utility for researchers, physicians, clinicians and genetic counselors as well as for companies specializing in biopharmaceuticals, bioinformatics and personalized genomics. HGMD was first made publicly available in April 1996, and a collaboration was initiated in 2006 between HGMD and BIOBASE GmbH. This cooperative agreement covers the exclusive worldwide marketing of the most up-to-date (subscription) version of HGMD, HGMD Professional, to academic, clinical and commercial users. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> V. STRUCTURAL INTERACTION NETWORK <s> General properties of the antagonistic biomolecular interactions between viruses and their hosts (exogenous interactions) remain poorly understood, and may differ significantly from known principles governing the cooperative interactions within the host (endogenous interactions). Systems biology approaches have been applied to study the combined interaction networks of virus and human proteins, but such efforts have so far revealed only low-resolution patterns of host-virus interaction. Here, we layer curated and predicted 3D structural models of human-virus and human-human protein complexes on top of traditional interaction networks to reconstruct the human-virus structural interaction network. This approach reveals atomic resolution, mechanistic patterns of host-virus interaction, and facilitates systematic comparison with the host's endogenous interactions. We find that exogenous interfaces tend to overlap with and mimic endogenous interfaces, thereby competing with endogenous binding partners. The endogenous interfaces mimicked by viral proteins tend to participate in multiple endogenous interactions which are transient and regulatory in nature. While interface overlap in the endogenous network results largely from gene duplication followed by divergent evolution, viral proteins frequently achieve interface mimicry without any sequence or structural similarity to an endogenous binding partner. Finally, while endogenous interfaces tend to evolve more slowly than the rest of the protein surface, exogenous interfaces--including many sites of endogenous-exogenous overlap--tend to evolve faster, consistent with an evolutionary "arms race" between host and pathogen. These significant biophysical, functional, and evolutionary differences between host-pathogen and within-host protein-protein interactions highlight the distinct consequences of antagonism versus cooperation in biological networks. <s> BIB002 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> V. 
STRUCTURAL INTERACTION NETWORK <s> To better understand the molecular mechanisms and genetic basis of human disease, we systematically examine relationships between 3,949 genes, 62,663 mutations and 3,453 associated disorders by generating a three-dimensional, structurally resolved human interactome. This network consists of 4,222 high-quality binary protein-protein interactions with their atomic-resolution interfaces. We find that in-frame mutations (missense point mutations and in-frame insertions and deletions) are enriched on the interaction interfaces of proteins associated with the corresponding disorders, and that the disease specificity for different mutations of the same gene can be explained by their location within an interface. We also predict 292 candidate genes for 694 unknown disease-to-gene associations with proposed molecular mechanism hypotheses. This work indicates that knowledge of how in-frame disease mutations alter specific interactions is critical to understanding pathogenesis. Structurally resolved interaction networks should be valuable tools for interpreting the wealth of data being generated by large-scale structural genomics and disease association studies. <s> BIB003 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> V. STRUCTURAL INTERACTION NETWORK <s> General principles governing biomolecular interactions between species are expected to differ significantly from known principles governing the interactions within species, yet these principles remain poorly understood at the systems level. A key reason for this knowledge gap is the lack of a detailed three-dimensional, atomistic view of biomolecular interaction networks between species. Recent progress in structural biology, systems biology, and computational biology has enabled accurate and large-scale construction of three-dimensional structural models of nodes and edges for protein-protein interaction networks within and between species. The resulting within- and between-species structural interaction networks have provided new biophysical, functional, and evolutionary insights into species interactions and infectious disease. Here, we review the nascent field of between-species structural systems biology, focusing on interactions between host and pathogens such as viruses. <s> BIB004 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> V. STRUCTURAL INTERACTION NETWORK <s> "Big Data" is immersed in many disciplines, including computer vision, economics, online resources, bioinformatics and so on. Increasing researches are conducted on data mining and machine learning for uncovering and predicting related domain knowledge. Protein-protein interaction is one of the main areas in bioinformatics as it is the basis of the biological functions. However, most pathogen-host protein-protein interactions, which would be able to reveal much more infectious mechanisms between pathogen and host, are still up for further investigation. Considering a decent feature representation of pathogen-host protein-protein interactions (PHPPI), currently there is not a well structured database for research purposes, not even for infection mechanism studies for different species of pathogens. In this paper, we will survey the PHPPI researches and construct a public PHPPI dataset by ourselves for future research. 
It results in an utterly big and imbalanced data set associated with high dimension and large quantity. Several machine learning methodologies are also discussed in this paper to imply possible analytics solutions in near future. This paper contributes to a new, yet challenging, research area in applying data analytic technologies in bioinformatics, by learning and predicting pathogen-host protein-protein interactions. <s> BIB005
|
Since the principles of protein interactions between hosts and pathogens remain poorly understood, an ensemble of binary HPPPI networks and structural information provides an efficient option for mining this knowledge using a systems biology approach. A previous study analysed 3,949 genes, 62,663 mutations and 3,453 associated disorders using a 3D structurally resolved human interactome network BIB003 . By integrating data from iPfam, 3did and the Human Gene Mutation Database (HGMD) BIB001 , a high-quality binary PPI network with atomic-resolution interfaces was built BIB003 , providing key insights into in-frame mutations, their locations within interfaces, and the disease specificity of different mutations in the same gene, which could not be acquired from a low-resolution network. The original interaction network obtained from literature-curated databases BIB003 contained 82,823 pairs; however, after filtering out proteins without experimentally determined structures, only 4,222 structurally resolved interactions between 2,816 proteins remained. Building a structural interaction network therefore still requires substantial effort in experimental structure determination or computational prediction, because only a tiny fraction of these binary PPIs can be analysed together with their corresponding structural information.
Our previous study BIB005 collected the experimental protein interaction data from published databases, choosing those that are manually curated. TABLE 3 shows HPPPI statistics for five bacterial species. The HPPPI network is further illustrated for Clostridium botulinum in Fig. 5 [44], which shows six primary human proteins interacting with nine Clostridium botulinum proteins, resulting in 44 HPPPI connections derived from the PHISTO database (http://www.phisto.org/index.xhtml). These interactions are considered exogenous interactions. To further analyse interactions from the PPI network, we embedded this information with structural information. There are two classes of physical protein-protein interaction: interactions mediated by two domains, and interactions between short motifs and domains. Several structural principles analyses have been demonstrated within the human-virus protein-protein interaction network BIB002 . The SIN approach in the human-virus PPI network reveals atomic-resolution, mechanistic patterns and allows systematic comparison with human endogenous interactions. Figure 6 shows an example detailing how to layer structure and domain-domain interaction information on top of the binary PPI network BIB002 , BIB004 . Further analysis revealed that the ''Pathogen protein'' mimics the action of ''Host Protein3''. Layering the 3D structural information over the details of the protein interaction allows two different classes of protein interaction to be derived (Fig. 7 and Fig. 8); the renderings were generated with PyMOL . The examples present non-overlapping protein-protein interactions, given by the 3D structures 1F5Q-1BUH, and an overlapping protein-protein interaction, given by 4MI8-2P1L; here, 1F5Q, 1BUH, 4MI8 and 2P1L are PDB ids. Host-pathogen PPI networks reveal specific pathogen protein functions, and global analyses of these networks help to identify critical proteins BIB004 . Although Fig. 6 provides essential mappings via the overlapping interfaces, annotating the experimental HPPPI networks with 3D structural information provides further insight, because PPIs can form both between two globular domains and between one short linear motif (a short functional segment considered at the secondary structure level) and a globular domain. Superimposing structures of the HPPPI can help to visually reveal these details.
FIGURE 7. The overlapping structure interaction: the red string is the human protein Beclin-1, annotated with 5EFM as its PDB id. The compound (in yellow) formed by human protein ''Beclin-1'' and Gamma Herpesvirus protein ''v-Bcl2'' is associated with the compound (in blue) formed by human protein ''Beclin-1'' and human protein ''BCL-XL''. The 3D structure of the yellow compound can be fetched by PDB id 4MI8, and the blue one by 2P1L .
FIGURE 8. The non-overlapping structure interaction: the interaction is linked by the human protein ''CDK2'', with PDB id 5MHQ. The yellow compound is the interaction between Gamma Herpesvirus ''Cyclin'' and human protein ''CDK2''. The purple compound is formed by human proteins ''CKS1'' and ''CDK2'' .
Several methods to assemble structural information with a binary HPPPI network include:
• Using only experimentally determined structural information: both proteins in the HPPPI can be mapped to determined structures;
• Using both experimentally determined and computationally predicted structural information: one of the proteins in the HPPPI cannot be mapped to a determined structure;
• Using only computationally inferred structural information: neither protein in the HPPPI can be mapped to a determined structure.
Computationally predicted structural information mainly comes from homology modelling, which searches for homologous proteins with determined structures according to the BLAST E-value; the method is widely used in bioinformatics because protein structure and function are primarily determined by sequence BIB002 . Typically, for host-pathogen protein-protein interactions, we hypothesise that pathogen proteins imitating the binding activities of host proteins offer insight into the primary mechanisms associated with infection. Given a SIN, several types of statistics may help to propose and support this hypothesis. As a specific example, a previous study BIB002 analysed the exogenous and endogenous interactions in a human-virus SIN model. The fraction of exogenous-interface residues that overlap with endogenous interfaces indicates potential infectious targets, although the mapping of endogenous interfaces is not guaranteed to be complete BIB002 (a sketch of this statistic is given below). To better understand the mimicry mechanism that may explain the viral infection process, similarity statistics can be computed at the z-score and E-value levels; since mimicry occurs between host and pathogen proteins, such statistics might help elucidate potential activities. Overall, a SIN combined with binary protein-protein interactions enables precise analysis based on statistics over 3D structure and domain information.
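As an illustration of the overlap statistic referenced above, the following is a minimal Python sketch. It assumes interfaces are available as sets of residue positions on the shared host protein; the exact residue-level definition used in BIB002 may differ.

```python
# Hypothetical helper: interfaces are assumed to be given as sets of
# residue positions (e.g., parsed from structures of the host protein
# in complex with a viral vs. an endogenous partner).

def interface_overlap(exo_iface: set, endo_iface: set) -> float:
    """Fraction of exogenous (host-pathogen) interface residues that
    also lie on an endogenous (host-host) interface; values near 1
    are consistent with interface mimicry."""
    if not exo_iface:
        return 0.0
    return len(exo_iface & endo_iface) / len(exo_iface)

# Toy example: host residues 10-25 bind the viral protein, while
# residues 18-30 bind an endogenous partner; 18-25 is the shared patch.
exo = set(range(10, 26))
endo = set(range(18, 31))
print(f"overlap = {interface_overlap(exo, endo):.2f}")  # overlap = 0.50
```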
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> A. FEASIBLE AND EFFICIENT FEATURE REPRESENTATION <s> Predicting protein secondary structure is a fundamental problem in protein structure prediction. Here we present a new supervised generative stochastic network (GSN) based method to predict local secondary structure with deep hierarchical representations. GSN is a recently proposed deep learning technique (Bengio & Thibodeau-Laufer, 2013) to globally train deep generative model. We present the supervised extension of GSN, which learns a Markov chain to sample from a conditional distribution, and applied it to protein structure prediction. To scale the model to full-sized, high-dimensional data, like protein sequences with hundreds of amino acids, we introduce a convolutional architecture, which allows efficient learning across multiple layers of hierarchical representations. Our architecture uniquely focuses on predicting structured low-level labels informed with both low and high-level representations learned by the model. In our application this corresponds to labeling the secondary structure state of each amino-acid residue. We trained and tested the model on separate sets of non-homologous proteins sharing less than 30% sequence identity. Our model achieves 66.4% Q8 accuracy on the CB513 dataset, better than the previously reported best performance 64.9% (Wang et al., 2011) for this challenging secondary structure prediction problem. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> A. FEASIBLE AND EFFICIENT FEATURE REPRESENTATION <s> Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80 percent and many wonder if prediction cannot be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphical processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test dataset of 198 proteins, achieving a Q3 accuracy of 80.7 percent and a Sov accuracy of 74.2 percent. <s> BIB002 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> A. FEASIBLE AND EFFICIENT FEATURE REPRESENTATION <s> Protein secondary structure prediction is an important problem in bioinformatics. 
Inspired by the recent successes of deep neural networks, in this paper, we propose an end-to-end deep network that predicts protein secondary structures from integrated local and global contextual features. Our deep architecture leverages convolutional neural networks with different kernel sizes to extract multiscale local contextual features. In addition, considering long-range dependencies existing in amino acid sequences, we set up a bidirectional neural network consisting of gated recurrent unit to capture global contextual features. Furthermore, multi-task learning is utilized to predict secondary structure labels and amino-acid solvent accessibility simultaneously. Our proposed deep network demonstrates its effectiveness by achieving state-of-the-art performance, i.e., 69.7% Q8 accuracy on the public benchmark CB513, 76.9% Q8 accuracy on CASP10 and 73.1% Q8 accuracy on CASP11. Our model and results are publicly available. <s> BIB003 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> A. FEASIBLE AND EFFICIENT FEATURE REPRESENTATION <s> In big data research related to bioinformatics, one of the most critical areas is proteomics. In this paper, we focus on the protein-protein interactions, especially on pathogen-host protein-protein interactions (PHPPIs), which reveals the critical molecular process in biology. Conventionally, biologists apply in-lab methods, including small-scale biochemical, biophysical, genetic experiments and large-scale experiment methods (e.g. yeast-two-hybrid analysis), to identify the interactions. These in-lab methods are time consuming and labor intensive. Since the interactions between proteins from different species play very critical roles for both the infectious diseases and drug design, the motivation behind this study is to provide a basic framework for biologists, which is based on big data analytics and deep learning models. Our work contributes in leveraging unsupervised learning model, in which we focus on stacked denoising autoencoders, to achieve a more efficient prediction performance on PHPPI. In this paper, we further detail the framework based on unsupervised learning model for PHPPI researches, while curating a large imbalanced PHPPI dataset. Our model demonstrates a better result with the unsupervised learning model on PHPPI dataset. <s> BIB004 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> A. FEASIBLE AND EFFICIENT FEATURE REPRESENTATION <s> Nowadays more and more data are being sequenced and accumulated in system biology, which brings the data analytics researchers to a brand new era, namely ‘big data’, to extract the inner relationship and knowledge from the huge amount of data. Bridging the gap between computational methodology and biology to accelerate the development of biology analytics has been a hot area. In this paper, we focus on these enormous amounts of data generated with the speedy development of high throughput technologies during the past decades, especially for protein-protein interactions, which are the critical molecular process in biology. Since pathogen-host protein-protein interactions are the major and basic problems for not only infectious diseases but also drug design, molecular level interactions between pathogen and host play very critical role for the study of infection mechanisms. 
In this paper, we built a basic framework for analyzing the specific problems about pathogen-host protein-protein interactions (PHPPI), meanwhile, we also presented the state-of-art deep learning method results on prediction of PHPPI comparing with other machine learning methods. Utilizing the evaluation methods, specifically by considering the high skewed imbalanced ratio and huge amount of data, we detailed the pipeline solution on both storing and learning for PHPPI. This work contributes as a basis for a further investigation of protein and protein-protein interactions, with the collaboration of data analytics results from the vast amount of data dispersedly available in biology literature. <s> BIB005
|
For computational models, especially those operating on protein sequences, feature representation remains a challenging topic, and various representation methods exist BIB004 - BIB005 , [20] , BIB002 - . Previous results indicate that different representation methods yield different performance across species, even as additional protein sequence information continues to be generated experimentally; this can be observed for both a small dataset (e.g., Clostridium botulinum) and a big dataset (e.g., Bacillus anthracis). Models based on deep learning techniques provide end-to-end frameworks for learning from big data sets, and their automatic feature extraction is a promising option for protein sequence research. Previously, we successfully employed a stacked denoising autoencoder as an unsupervised learning model to extract high-level features for model learning BIB004 ; the result showed a promising direction for introducing deep neural networks. Before data are input into learning models, several traditional feature representation methods are widely used, including the one-hot vector method, the PSSM feature, and other statistical methods shown in TABLE 1 (a sketch of the first two is given below). Deep learning techniques were also first introduced in protein secondary structure prediction BIB001 , BIB003 and in HPPPI prediction tasks BIB004 . In terms of feature representation, deep learning techniques can harness high-dimensional data in large volumes, extracting richer feature information to further improve model performance.
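As a concrete illustration of the traditional encodings mentioned above, the following Python sketch builds the per-residue one-hot vector and concatenates it with a PSSM row. The amino acid ordering and the zero-filled PSSM placeholder are our own assumptions; in practice the PSSM would come from PSI-BLAST.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"           # assumed alphabet ordering
AA_INDEX = {a: i for i, a in enumerate(AA)}

def one_hot(seq: str) -> np.ndarray:
    """(len(seq), 20) binary matrix; unknown residues stay all-zero."""
    m = np.zeros((len(seq), 20), dtype=np.float32)
    for i, a in enumerate(seq):
        j = AA_INDEX.get(a)
        if j is not None:
            m[i, j] = 1.0
    return m

def encode(seq: str, pssm: np.ndarray) -> np.ndarray:
    """Concatenate one-hot and PSSM features per residue -> (L, 40)."""
    return np.hstack([one_hot(seq), pssm.astype(np.float32)])

seq = "MKTAYIAK"
pssm = np.zeros((len(seq), 20))        # placeholder for a PSI-BLAST PSSM
print(encode(seq, pssm).shape)          # (8, 40)
```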
|
Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. IMBALANCED DATA <s> Motivation: Identification of protein interaction sites has significant impact on understanding protein function, elucidating signal transduction networks and drug design studies. With the exponentially growing protein sequence data, predictive methods using sequence information only for protein interaction site prediction have drawn increasing interest. In this article, we propose a predictive model for identifying protein interaction sites. Without using any structure data, the proposed method extracts a wide range of features from protein sequences. A random forest-based integrative model is developed to effectively utilize these features and to deal with the imbalanced data classification problem commonly encountered in binding site predictions. ::: ::: Results: We evaluate the predictive method using 2829 interface residues and 24 616 non-interface residues extracted from 99 polypeptide chains in the Protein Data Bank. The experimental results show that the proposed method performs significantly better than two other sequence-based predictive methods and can reliably predict residues involved in protein interaction sites. Furthermore, we apply the method to predict interaction sites and to construct three protein complexes: the DnaK molecular chaperone system, 1YUW and 1DKG, which provide new insight into the sequence–function relationship. We show that the predicted interaction sites can be valuable as a first approach for guiding experimental methods investigating protein–protein interactions and localizing the specific interface residues. ::: ::: Availability: Datasets and software are available at http://ittc.ku.edu/~xwchen/bindingsite/prediction. ::: ::: Contact: [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online. <s> BIB001 </s> Structural Principles Analysis of Host-Pathogen Protein-Protein Interactions: A Structural Bioinformatics Survey <s> B. IMBALANCED DATA <s> "Big Data" is immersed in many disciplines, including computer vision, economics, online resources, bioinformatics and so on. Increasing researches are conducted on data mining and machine learning for uncovering and predicting related domain knowledge. Protein-protein interaction is one of the main areas in bioinformatics as it is the basis of the biological functions. However, most pathogen-host protein-protein interactions, which would be able to reveal much more infectious mechanisms between pathogen and host, are still up for further investigation. Considering a decent feature representation of pathogen-host protein-protein interactions (PHPPI), currently there is not a well structured database for research purposes, not even for infection mechanism studies for different species of pathogens. In this paper, we will survey the PHPPI researches and construct a public PHPPI dataset by ourselves for future research. It results in an utterly big and imbalanced data set associated with high dimension and large quantity. Several machine learning methodologies are also discussed in this paper to imply possible analytics solutions in near future. This paper contributes to a new, yet challenging, research area in applying data analytic technologies in bioinformatics, by learning and predicting pathogen-host protein-protein interactions. <s> BIB002
|
Another challenging issue is the imbalanced ratio among different classes of structural information, such as the eight categories of protein secondary structure. For structure prediction, domain-domain interaction, and host-pathogen protein-protein interaction problems, handling the imbalanced class ratio is important for improving model performance. The ratio of non-interface residues to interface residues is about 9:1 BIB001 . In the structure prediction task, the class distributions of both the Q3 and the Q8 tasks are likewise imbalanced and differ between protein families; in particular, for Q8, some structure classes are only rarely observed. In a previous study, interacting pairs and non-interacting pairs were defined with a 1:100 ratio, which is highly skewed BIB002 . With the continuous expansion and availability of structural and domain data, the issue of imbalanced data in biological domains intensifies; the sketch below shows two standard counter-measures.
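The following is a minimal Python sketch of two standard counter-measures for such skewed ratios: inverse-frequency class weights and random undersampling of the majority class. The weighting heuristic follows the common "balanced" formula; labels, feature dimensions, and seeds are illustrative.

```python
import numpy as np

def class_weights(y: np.ndarray) -> dict:
    """Weight each class inversely to its frequency (the common
    'balanced' heuristic: n_samples / (n_classes * class_count))."""
    classes, counts = np.unique(y, return_counts=True)
    return {int(c): len(y) / (len(classes) * n) for c, n in zip(classes, counts)}

def undersample(X: np.ndarray, y: np.ndarray, seed: int = 0):
    """Randomly drop majority-class (label 0) rows until balanced."""
    rng = np.random.default_rng(seed)
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    keep = rng.choice(neg, size=len(pos), replace=False)
    idx = np.concatenate([pos, keep])
    return X[idx], y[idx]

# 9:1 skew, as reported for non-interface vs. interface residues (BIB001).
y = np.array([0] * 900 + [1] * 100)
X = np.random.rand(len(y), 8)               # 8 dummy features per sample
print(class_weights(y))                      # {0: ~0.56, 1: 5.0}
Xb, yb = undersample(X, y)
print(yb.mean())                             # 0.5 after balancing
```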
|
A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Recent years have seen an increasing interest in the scheduling of mixed-criticality real-time systems. These systems are composed of groups of tasks with different levels of criticality deployed over the same processor(s). Such systems must be able to accommodate additional execution-time requirements that may occasionally be needed. When overload conditions develop, critical tasks must still meet their timing constraints at the expense of less critical tasks. Zero-slack scheduling algorithms are promising candidates for such systems. These algorithms guarantee that all tasks meet their deadlines when no overload occurs, and that criticality ordering is satisfied under overloads. Unfortunately, when mutually exclusive resources are shared across tasks, these guarantees are voided. Furthermore, the dual-execution modes of tasks in mixed-criticality systems violate the assumptions of traditional real-time synchronization protocols like PCP and hence the latter cannot be used directly. In this paper, we develop extensions to real-time synchronization protocols (Priority Inheritance and Priority Ceiling Protocol) that coordinate the mode changes of the zero-slack scheduler. We analyze the properties of these new protocols and the blocking terms they introduce. We maintain the deadlock avoidance property of our PCP extension, called the Priority and Criticality Ceiling Protocol (PCCP), and limit the blocking to only one critical section for each of the zero-slack scheduling execution modes. We also develop techniques to accommodate the blocking terms arising from synchronization, in calculating the zero-slack instants used by the scheduler. Finally, we conduct an experimental evaluation of PCCP. Our evaluation shows that PCCP is able to take advantage of the capacity of zero-slack schedulers to reclaim unused over-provisioning of resources that are only used in critical execution modes. This allows PCCP to accommodate larger blocking terms. <s> BIB001 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Methods such as rollback and modular redundancy are efficient to correct transient errors. In hard real-time systems, however, correction has a strong impact on response times, also on tasks that were not directly affected by errors. Due to deadline misses, these tasks eventually fail to provide correct service. In this paper we present a reliability analysis for periodic task sets and static priorities that includes realistic detection and roll-back scenarios and covers a hyperperiod instead of just a critical instant and therefore leads to much higher accuracy than previous approaches. The approach is compared with Monte-Carlo simulation to demonstrate the accuracy and with previous approaches covering critical instants to evaluate the improvements. <s> BIB002 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> The scheduling of mixed-criticality implicit-deadline sporadic task systems on identical multiprocessor platforms is considered. Two approaches, one for global and another for partitioned scheduling, are described. Theoretical analyses and simulation experiments are used to compare the global and partitioned scheduling approaches. 
<s> BIB003 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> This paper proposes a design methodology that enhances the classical system-level design flow for embedded systems to introduce reliability-awareness. The mapping and scheduling step is extended to support the application of hardening techniques to fulfill the required fault management properties that the final system must exhibit; moreover, the methodology allows the designer to specify that only some parts of the systems need to be hardened against faults. The reference architecture is a complex distributed one, constituted by resources with different characteristics in terms of performance and available fault detection/tolerance mechanisms. The approach is evaluated and compared against the most recent and relevant work, with an in-depth analysis on a large set of benchmarks. <s> BIB004 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Ethernet is widely recognized as an attractive networking technology for modern distributed real-time systems. However, standard Ethernet components require specific modifications and hardware support to provide strict latency guarantees necessary for safety-critical applications. Although this is a well-stated fact, the design of hardware components for real-time communication remains mostly unexplored. This becomes evident from the few solutions reporting prototypes and experimental validation, which hinders the consolidation of Ethernet in real-world distributed applications. This paper presents Atacama, the first open-source framework based on reconfigurable hardware for mixed-criticality communication in multi-segmented Ethernet networks. Atacama uses specialized modules for time-triggered communication of real-time data, which seamlessly integrate with a standard infrastructure using regular best-effort traffic. Atacama enables low and highly predictable communication latency on multi-segmented 1Gbps networks, easy optimization of devices for specific application scenarios, and rapid prototyping of new protocol characteristics. Researchers can use the open-source design to verify our results and build upon the framework, which aims to accelerate the development, validation, and adoption of Ethernet-based solutions in real-time applications. <s> BIB005 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> In this paper, we deal with the schedule synthesis problem of mixed-criticality cyber-physical systems (MCCPS), which are composed of hard real-time tasks and feedback control tasks. The real-time tasks are associated with deadlines that must always be satisfied whereas feedback control tasks are characterized by their Quality of Control (QoC) which needs to be optimized. A straight-forward approach to the above scheduling problem is to translate the QoC requirements into deadline constraints and then, to apply traditional real-time scheduling techniques such as Deadline Monotonic (DM). In this work, we show that such scheduling leads to overly conservative results and hence is not efficient in the above context. On the other hand, methods from the mixed-criticality systems (MC) literature mainly focus on tasks with different criticality levels and certification issues. 
However, in MCCPS, the tasks may not be fully characterized by only criticality levels, but they may further be classified according to their criticality types, e.g., deadline-critical real-time tasks and QoC-critical feedback control tasks. On the contrary to traditional deadline-driven scheduling, scheduling MCCPS requires to integrate both, deadline-driven and QoC-driven techniques which gives rise to a challenging scheduling problem. In this paper, we present a multi-layered schedule synthesis scheme for MCCPS that aims to jointly schedule deadline-critical, and QoC-critical tasks at different scheduling layers. Our scheduling framework (i) integrates a number of QoC-oriented metrics to capture the QoC requirements in the schedule synthesis (ii) uses arrival curves from real-time calculus which allow a general characterization of task triggering patterns compared to simple task models such as periodic or sporadic, and (iii) has pseudo-polynomial complexity. Finally, we show the applicability of our scheduling scheme by a number of experiments. <s> BIB006 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Synchronous languages are widely used to design safety-critical embedded systems. These languages are based on the synchrony hypothesis, asserting that all tasks must complete instantaneously at each logical time step. This assertion is, however, unsuitable for the design of mixed-criticality systems, where some tasks can tolerate missed deadlines. This paper proposes a novel extension to the synchronous approach for supporting three levels of task criticality: life, mission, and non-critical. We achieve this by relaxing the synchrony hypothesis to allow tasks that can tolerate bounded or unbounded deadline misses. We address the issue of task communication between multi-rate, mixed-criticality tasks, and propose a deterministic lossless communication model. To maximize system utilization, we present a hybrid static and dynamic scheduling approach that executes schedulable tasks during slack time. Extensive benchmarking shows that our approach can schedule up to 15% more task sets and achieve an average of 5.38% better system utilization than the Early-Release EDF (ER-EDF) approach. Tasks are scheduled fairer under our approach and achieve consistently higher execution frequencies, but require more preemptions. <s> BIB007 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Real-time systems are becoming increasingly complex. A modern car, for example, requires a multitude of control tasks, such as braking, active suspension, and collision avoidance. These tasks not only exhibit different degrees of safety criticality but also change their criticalities as the driving mode changes. For instance, the suspension task is a critical part of the stability of the car at high speed, but it is only a comfort feature at low speed. Therefore, it is crucial to ensure timing guarantees for the system with respect to the tasks' criticalities, not only within each mode but also during mode changes. This paper presents a partitioned multi-processor scheduling scheme for multi-modal mixed-criticality real-time systems. Our scheme consists of a packing algorithm and a scheduling algorithm for each processor that take into account both mode changes and criticalities. 
The packing algorithm maximizes the schedulable utilization across modes using the sustained criticality of each task, which captures the overall criticality of the task across modes. The scheduling algorithm combines Rate-Monotonic scheduling with a mode transition enforcement mechanism that relies on the transitional zero-slack instants of tasks to control low-criticality tasks during mode changes, so as to preserve the schedulability of high-criticality tasks. We also present an implementation of our scheduler in the Linux operating system, as well as an experimental evaluation to illustrate its practicality. Our evaluation shows that our scheme can provide close to twice as much tolerance to overloads (ductility) compared to a mode-agnostic scheme. <s> BIB008 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> We propose HLC-PCP (Highest-Locker Criticality, Priority-Ceiling Protocol), which extends the well-known Priority Ceiling Protocol (PCP) to be applicable to AMC (Adaptive Mixed-Criticality), a variant of MCS. We present methods for worst-case blocking time computation with HLC-PCP, used for schedulability analysis of AMC with resource sharing, for both the dual-criticality model and the general multi-criticality model. This helps relax one of the key limiting assumptions of most MCS work, that is, tasks with different levels of criticality do not have common shared resources. Today's safety-critical Cyber-Physical Systems (CPS) often need to integrate multiple diverse applications with varying levels of importance, or criticality. Mixed-Criticality Scheduling (MCS) has been proposed with the objectives of achieving certification at multiple criticality levels and efficient utilization of hardware resources. Current work on MCS typically assumes tasks at different criticality levels are independent and do not share any resources (data). We propose HLC-PCP (Highest-Locker Criticality, Priority-Ceiling Protocol), which extends the well-known Priority Ceiling Protocol (PCP) to be applicable to AMC (Adaptive Mixed-Criticality), a variant of MCS. We present methods for worst-case blocking time computation with HLC-PCP, used for schedulability analysis of AMC with resource sharing, for both the dual-criticality model and the general multi-criticality model. <s> BIB009 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> In mixed-criticality systems, highly critical tasks must be temporally and logically isolated from faults in lower-criticality tasks. Such strict isolation, however, is difficult to ensure even for independent tasks, and has not yet been attained if low- and high-criticality tasks share resources subject to mutual exclusion constraints (e.g., Shared data structures, peripheral I/O devices, or OS services), as it is often the case in practical systems. Taking a pragmatic, systems-oriented point of view, this paper argues that traditional real-time locking approaches are unsuitable in a mixed-criticality context: locking is a cooperative activity and requires trust, which is inherently in conflict with the paramount isolation requirements. Instead, a solution based on resource servers (in the microkernel sense) is proposed, and MC-IPC, a novel synchronous multiprocessor IPC protocol for invoking such servers, is presented. The MC-IPC protocol enables strict temporal and logical isolation among mutually untrusted tasks and thus can be used to share resources among tasks of different criticalities. 
It is shown to be practically viable with a prototype implementation in LITMUSRT and validated with a case study involving several antagonistic failure modes. Finally, MC-IPC is shown to offer analytical benefits in the context of Vestal's mixed-criticality task model. <s> BIB010 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> In the past, we have silently accepted that energy consumption in real-time and embedded systems is subordinate to time. That is, we have tried to reduce energy always under the constraint that all deadlines must be met. In mixed-criticality systems however, schedulers respect that some tasks are more important than others and guarantee their completion even at the expense of others. We believe in these systems the role of the energy budget has changed and it is time to ask whether energy has surpassed timeliness. Investigating energy as a further dimension of mixed-criticality systems, we show in a realistic scenario that a subordinate handling of energy can lead to violations of the mixed-criticality guarantees that can only be avoided if energy becomes an equally important resource as time. <s> BIB011 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> The design and analysis of real-time scheduling algorithms for safety-critical systems is a challenging problem due to the temporal dependencies among different design constraints. This paper considers scheduling sporadic tasks with three interrelated design constraints: (i) meeting the hard deadlines of application tasks, (ii) providing fault tolerance by executing backups, and (iii) respecting the criticality of each task to facilitate system's certification. First, a new approach to model mixed-criticality systems from the perspective of fault tolerance is proposed. Second, a uniprocessor fixed-priority scheduling algorithm, called fault-tolerant mixed-criticality (FTMC) scheduling, is designed for the proposed model. The FTMC algorithm executes backups to recover from task errors caused by hardware or software faults. Third, a sufficient schedulability test is derived, when satisfied for a (mixed-criticality) task set, guarantees that all deadlines are met even if backups are executed to recover from errors. Finally, evaluations illustrate the effectiveness of the proposed test. <s> BIB012 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> We consider in this paper fault-tolerant mixed-criticality scheduling, where heterogeneous safety guarantees must be provided to functionalities (tasks) of varying criticalities (importances). We model explicitly the safety requirements for tasks of different criticalities according to safety standards, assuming hardware transient faults. We further provide analysis techniques to bound the effects of task killing and service degradation on the system safety and schedulability. Based on our model and analysis, we show that our problem can be converted to a conventional mixed-criticality scheduling problem. Thus, we broaden the scope of applicability of the conventional mixed-criticality scheduling techniques. Our proposed techniques are validated with a realistic flight management system application and extensive simulations. <s> BIB013 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> This paper presents a static mapping optimization technique for fault-tolerant mixed-criticality MPSoCs. 
The uncertainties imposed by system hardening and mixed criticality algorithms, such as dynamic task dropping, make the worst-case response time analysis difficult for such systems. We tackle this challenge and propose a worst-case analysis framework that considers both reliability and mixed-criticality concerns. On top of that, we build up a design space exploration engine that optimizes fault-tolerant mixed-criticality MPSoCs and provides worst-case guarantees. We study the mapping optimization considering judicious task dropping, that may impose a certain service degradation. Extensive experiments with real-life and synthetic benchmarks confirm the effectiveness of the proposed technique. <s> BIB014 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> This paper presents a novel mapping optimization technique for mixed critical multi-core systems with different reliability requirements. For this scope, we derived a quantitative reliability metric and presented a scheduling analysis that certifies given mixed-criticality constraints. Our framework is capable of investigating re-execution, passive replication, and modular redundancy with optimized voter placement, while typical hardening approaches consider only one or two of these techniques. The proposed technique complies with existing safety standards and is power-efficient, as demonstrated by our experiments. <s> BIB015 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Multi- and many-core processors are becoming increasingly popular in embedded systems. Many of these processors now feature hardware virtualization capabilities, such as the ARM Cortex A15, and x86 processors with Intel VT-x or AMD-V support. Hardware virtualization offers opportunities to partition physical resources, including processor cores, memory and I/O devices amongst guest virtual machines. Mixed criticality systems and services can then co-exist on the same platform in separate virtual machines. However, traditional virtual machine systems are too expensive because of the costs of trapping into hypervisors to multiplex and manage machine physical resources on behalf of separate guests. For example, hypervisors are needed to schedule separate VMs on physical processor cores. In this paper, we discuss the design of the Quest-V separation kernel, which partitions services of different criticalities in separate virtual machines, or sandboxes. Each sandbox encapsulates a subset of machine physical resources that it manages without requiring intervention of a hypervisor. Moreover, a hypervisor is not needed for normal operation, except to bootstrap the system and establish communication channels between sandboxes. <s> BIB016 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Multicore systems are being increasingly used for embedded system deployments, even in safety-critical domains. Co-hosting applications of different criticality levels in the same platform requires sufficient isolation among them, which has given rise to the mixed-criticality scheduling problem and several recently proposed policies. Such policies typically employ runtime mechanisms to monitor task execution, detect exceptional events like task overruns, and react by switching scheduling mode. Implementing such mechanisms efficiently is crucial for any scheduler to detect runtime events and react in a timely manner, without compromising the system’s safety. 
This paper investigates implementation alternatives for these mechanisms and empirically evaluates the effect of their runtime overhead on the schedulability of mixed-criticality applications. Specifically, we implement in user-space two state-of-the-art scheduling policies: the flexible time-triggered FTTS [1] and the partitioned EDFVD [2], and measure their runtime overheads on a 60-core Intel R Xeon Phi and a 4-core Intel R Core i5 for the first time. Based on extensive executions of synthetic task sets and an industrial avionic application, we show that these overheads cannot be neglected, esp. on massively multicore architectures, where they can incur a schedulability loss up to 97%. Evaluating runtime mechanisms early in the design phase and integrating their overheads into schedulability analysis seem therefore inevitable steps in the design of mixed-criticality systems. The need for verifiably bounded overheads motivates the development of novel timing-predictable architectures and runtime environments specifically targeted for mixed-criticality applications. <s> BIB017 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> We present a new graph-based real-time task model that can specify complex job arrival patterns and global state-based mode switching. The mode switching is of a mixed-criticality style, meaning that it allows immediate changes to the parameters of active jobs upon mode switches. The resulting task model generalizes previously proposed task graph models as well as mixed-criticality (sporadic) task models; the merging of these mutually incomparable modeling paradigms allows formulation of new types of tasks. A sufficient schedulability analysis for EDF on preemptive uniprocessors is developed for the proposed model. <s> BIB018 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Heterogeneous multicore platforms have become an attractive choice to deploy mixed criticality systems demanding diverse computational requirements. One of the major challenges is to efficiently harness the computational power of these multicore platforms while deploying mixed criticality applications. The problem is acerbated with an additional demand of energy efficiency. It is particularly relevant for the battery powered embedded systems. We propose a partitioning algorithm for unrelated heterogeneous multicore platforms to map mixed criticality applications that ensures the timeliness property and reduces the energy consumption. <s> BIB019 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> The embedded system industry is facing an increasing pressure for migrating from single-core to multi- and many-core platforms for size, performance and cost purposes. Real-time embedded system design follows this trend by integrating multiple applications with different safety criticality levels into a common platform. Scheduling mixed-criticality applications on today's multi/many-core platforms and providing safe worst-case response time bounds for the real-time applications is challenging given the shared platform resources. For instance, sharing of memory buses introduces delays due to contention, which are non-negligible. Bounding these delays is not trivial, as one needs to model all possible interference scenarios. In this work, we introduce a combined analysis of computing, memory and communication scheduling in a mixed-criticality setting. 
In particular, we propose: (1) a mixed-criticality scheduling policy for cluster-based many-core systems with two shared resource classes, i.e., a shared multi-bank memory within each cluster, and a network-on-chip for inter-cluster communication and access to external memories; (2) a response time analysis for the proposed scheduling policy, which takes into account the interferences from the two classes of shared resources; and (3) a design exploration framework and algorithms for optimizing the resource utilizations under mixed-criticality timing constraints. The considered cluster-based architecture model describes closely state-of-the-art many-core platforms, such as the Kalray MPPA®-256. The applicability of the approach is demonstrated with a real-world avionics application. Also, the scheduling policy is compared against state-of-the-art scheduling policies based on extensive simulations with synthetic task sets. <s> BIB020 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> In their widely-cited survey on mixed-criticality systems, Burns and Davis describe a very general model for representing mixed-criticality sporadic tasks. In this general model multiple estimates, at differing levels of assurance, are specified for each of the three parameters -- worst-case execution time (WCET), relative deadline, and period -- characterizing a 3-parameter sporadic task. The preemptive uniprocessor scheduling of systems of such tasks is considered. A scheduling algorithm is presented, proved correct, and quantitatively characterized via the speedup factor metric for dual-criticality systems of such tasks. To our knowledge, this is the first work to conduct any form of analysis of task systems that are represented using this general model. <s> BIB021 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> The scheduling for mixed-criticality (MC) systems, where multiple activities have different certification requirements and thus different criticality on a shared hardware platform, has recently become an important research focus. In this work, considering that multicore processors have emerged as the de-facto platform for modern embedded systems, we propose a novel and efficient criticality-aware task partitioning algorithm (CA-TPA) for a set of periodic MC tasks running on multicore systems. We employ the state-of-the art EDF-VD scheduler on each core. Our work is based on the observation that the utilizations of MC tasks at different criticality levels can have quite large variations, hence when a task is allocated, its utilization contribution on different processors may vary by large margins and this can significantly affect the schedulability of tasks. During partitioning, CA-TPA sorts the tasks according to their utilization contributions on individual processors. Several heuristics are investigated to balance the workload on processors with the objective of improving the schedulability of tasks under CA-TPA. The simulation results show that our proposed CA-TPA scheme is effective, giving much higher schedulability ratios when compared to the classical partitioning schemes. <s> BIB022 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> We propose a probabilistic scheduling framework for the design and development of mixed-criticality systems, i.e., where tasks with different levels of criticality need to be scheduled on a shared resource. 
Whereas highly critical tasks normally require hard real-time guarantees, less or non-critical ones may be degraded or even temporarily discarded at runtime. We hence propose giving probabilistic (instead of deterministic) real-time guarantees on low-criticality tasks. This simplifies the analysis and reduces conservativeness on the one hand. On the other hand, probabilistic guarantees can be tuned by the designer to reach a desired level of assurance. We illustrate these and other benefits of our framework based on extensive simulations. <s> BIB023 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Model-based design using Synchronous Reactive (SR) models enables early design and verification of application functionality in a platform-independent manner, and the implementation on the target platform should guarantee the preservation of application semantic properties. Mixed-Criticality Scheduling (MCS) is an effective approach to addressing diverse certification requirements of safety-critical systems that integrate multiple subsystems with different levels of criticality. This article considers fixed-priority scheduling of mixed-criticality SR models, and considers two scheduling approaches: Adaptive MCS and Elastic MCS. We formulate the optimization problem of minimizing the total system cost of added functional delays in the implementation while guaranteeing schedulability, and present an optimal algorithm based on branch-and-bound search, and an efficient heuristic algorithm. <s> BIB024 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Many algorithms have recently been studied for scheduling mixed-criticality (MC) tasks. However, most existing MC scheduling algorithms guarantee the timely executions of high-criticality (HC) tasks at the expense of discarding low-criticality (LC) tasks, which can cause serious service interruption for such tasks. In this work, aiming at providing guaranteed services for LC tasks, we study an elastic mixed-criticality (E-MC) task model for dual-criticality systems. Specifically, the model allows each LC task to specify its maximum period (i.e., minimum service level) and a set of early-release points. We propose an early-release (ER) mechanism that enables LC tasks to be released more frequently and thus improve their service levels at runtime, with both conservative and aggressive approaches to exploiting system slack being considered, which is applied to both earliest deadline first (EDF) and preference-oriented earliest-deadline schedulers. We formally prove the correctness of the proposed early-release--earliest deadline first scheduler on guaranteeing the timeliness of all tasks through judicious management of the early releases of LC tasks. The proposed model and schedulers are evaluated through extensive simulations. The results show that by moderately relaxing the service requirements of LC tasks in MC task sets (i.e., by having LC tasks’ maximum periods in the E-MC model be two to three times their desired MC periods), most transformed E-MC task sets can be successfully scheduled without sacrificing the timeliness of HC tasks. Moreover, with the proposed ER mechanism, the runtime performance of tasks (e.g., execution frequencies of LC tasks, response times, and jitters of HC tasks) can be significantly improved under the ER schedulers when compared to that of the state-of-the-art earliest deadline first—virtual deadline scheduler. 
<s> BIB025 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> This paper studies real-time scheduling of mixed-criticality systems where low-criticality tasks are still guaranteed some service in the high-criticality mode, with reduced execution budgets. First, we present a utilization-based schedulability test for such systems under EDF-VD scheduling. Second, we quantify the suboptimality of EDF-VD (with our test condition) in terms of speedup factors. In general, the speedup factor is a function with respect to the ratio between the amount of resource required by different types of tasks in different criticality modes, and reaches 4/3 in the worst case. Furthermore, we show that the proposed utilization-based schedulability test and speedup factor results apply to the elastic mixed-criticality model as well. Experiments show effectiveness of our proposed method and confirm the theoretical suboptimality results. <s> BIB026 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Many existing studies on mixed-criticality (MC) scheduling assume that low-criticality budgets for high-criticality applications are known apriori. These budgets are primarily used as guidance to determine when the scheduler should switch the system mode from low to high. Based on this key observation, in this paper we propose a dynamic MC scheduling model under which low-criticality budgets for individual high-criticality applications are determined at runtime as opposed to being fixed offline. To ensure sufficient budget for high-criticality applications at all times, we use offline schedulability analysis to determine a system-wide total low-criticality budget allocation for all the high-criticality applications combined. This total budget is used as guidance in our model to determine the need for a mode-switch. The runtime strategy then distributes this total budget among the various applications depending on their execution requirement and with the objective of postponing mode-switch as much as possible. We show that this runtime strategy is able to postpone mode-switches for a longer time than any strategy that uses a fixed low-criticality budget allocation for each application. Finally, since we are able to control the total budget allocation for high-criticality applications before mode-switch, we also propose techniques to determine these budgets considering system-wide objectives such as schedulability and service guarantee for low-criticality applications. <s> BIB027 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> This work presents CArb, an arbiter for controlling accesses to the shared memory bus in multi-core mixed criticality systems. CArb is a requirement-aware arbiter that optimally allocates service to tasks based on their requirements. It is also criticality-aware since it incorporates criticality as a first-class principle in arbitration decisions. CArb supports any number of criticality levels and does not impose any restrictions on mapping tasks to processors. Hence, it operates in tandem with existing processor scheduling policies. In addition, CArb is able to dynamically adapt memory bus arbitration at run time to respond to increases in the monitored execution times of tasks. Utilizing this adaptation, CArb is able to offset these increases; hence, postpones the system need to switch to a degraded mode. 
We prototype CArb, and evaluate it with an avionics case-study from Honeywell as well as synthetic experiments. <s> BIB028 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> We consider a battery-less real-time embedded system equipped with an energy harvester. It scavenges energy from an environmental resource according to some stochastic patterns. The success of jobs is threatened in the case of energy shortage, which might be due to lack of harvested energy, losses originated from the super-capacitor self-discharge, as well as power consumption of executed tasks. The periodic real-time tasks of the system follow a dual-criticality model. In addition, each task has a minimum required success ratio that needs to be satisfied in steady state. We analytically evaluate the behavior of such a system in terms of its energy-related success ratio for a given schedule. Based on these results, we propose a scheduling algorithm that satisfies both temporal and success-ratio constraints of the jobs, while respecting task criticalities and corresponding system modes. The accuracy of the analytical method as well as its dependence on the numerical computations and other model assumptions are extensively discussed through comparison with simulation results. Also, the efficacy of the proposed scheduling algorithm is studied through comparison to some existing non-mixed- and mixed-criticality scheduling algorithms. <s> BIB029 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> In this paper we study a general energy minimization problem for mixed-criticality systems on multi-cores, considering different system operation modes, and static & dynamic energy consumption. While making global scheduling decisions, trade-offs in energy consumption between different modes and also between static and dynamic energy consumption are required. Thus, such a problem is challenging. To this end, we first develop an optimal solution analytically for unicore and a corresponding low-complexity heuristic. Leveraging this, we further propose energy-aware mapping techniques and explore energy savings for multi-cores. To the best of our knowledge, we are the first to investigate mixed-criticality energy minimization in such a general setting. The effectiveness of our approaches in energy reduction is demonstrated through both extensive simulations and a realistic industrial application. <s> BIB030 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Analyze resource demand of MC task set under reliability and deadline constraints. Develop a heuristic approach to solve the formulated problem. Evaluate the proposed approach through simulation under various scenarios. Achieve up to 10% more energy saving comparing with the existing approaches. This paper studies the energy minimization problem in mixed-criticality systems that have stringent reliability and deadline constraints. We first analyze the resource demand of a mixed-criticality task set that has both reliability and deadline requirements. Based on the analysis, we present a heuristic task scheduling algorithm that minimizes system's energy consumption and at the same time also guarantees system's reliability and deadline constraints. Extensive experiments are conducted to evaluate and validate the performance of the proposed algorithm.
The empirical results show that the algorithm further improves energy saving by up to 10% compared with the approaches proposed in our earlier work. <s> BIB031 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Mixed-criticality is a significant recent trend in the embedded system industry, where common computing platforms are utilized to host functionalities of varying criticality levels. To date, most scheduling techniques have focused on the timing aspect of this problem, while functional safety (i.e. fault-tolerance) is often neglected. This paper presents design methodologies to guarantee both safety and schedulability for real-time mixed-criticality systems on identical multicores. Assuming hardware/software transient errors, we model safety requirements on different criticality levels explicitly according to safety standards; based on this, we further propose fault-tolerant mixed-criticality scheduling techniques with task replication and re-execution to enhance system safety. To cope with runtime urgencies where critical tasks do not succeed after a certain number of trials, our techniques can perform system reconfigurations (task killing or service degradation) in those situations to reallocate system resources to the critical tasks. Due to explicit modeling of safety, we can quantify the impact of task killing and service degradation on system feasibility (safety and schedulability), enabling a rigorous design. To this end, we derive analysis techniques when reconfigurations are triggered either globally (synchronously) on all cores or locally (asynchronously) on each core. To our best knowledge, this is the first work on fault-tolerant mixed-criticality scheduling on multicores, matching theoretical insights with industrial safety standards. Our proposed techniques are validated with an industrial application and extensive simulations. <s> BIB032 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> We propose the integration of a network-on-chip-based MPSoC in mixed-criticality systems, i.e. systems running applications with different criticality levels in terms of completing their execution within predefined time limits. An MPSoC contains tiles that can be either CPUs or memories, and we connect them with an instance of a customizable point-to-point interconnect from STMicroelectronics called STNoC. We explore whether the on-chip network capacity is sufficient for meeting the deadlines of external high critical workloads, and at the same time for serving less critical workloads that are generated internally. To evaluate the on-chip network we vary its configuration parameters, such as the link-width, and the Quality-of-Service (QoS), in specific the number (1 or 2) and type (high or low priority) of virtual channels (VCs), and the relative priority of packets from different flows sharing the same VC. <s> BIB033 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> The integration of mixed-critical tasks into a platform is an increasingly important trend in the design of real-time systems due to its efficient resource usage. With a growing variety of activation patterns considered in real-time systems, some of them capture arbitrary activation patterns. As a consequence, the existing scheduling approaches in mixed-criticality systems (MCs), which assume the sporadic tasks with implicit deadlines, have sometimes become inapplicable or are ineffective. 
In this paper, we extend the sporadically activated task model to the arbitrarily activated task model in MCs with the preemptive fixed-task-priority schedule. By using the event arrival curve to model task activations, we present the necessary and sufficient schedulability tests that are based on the well-established results from Real-Time Calculus. We propose to use the busy-window analysis to do the sufficient test because it has been shown to be tighter than the sufficient test of using Real-Time Calculus. According... <s> BIB034 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> In this paper we present a probabilistic response time analysis for mixed criticality real-time systems running on a single processor according to a fixed priority pre-emptive scheduling policy. The analysis extends the existing state of the art probabilistic analysis to the case of mixed criticalities, taking into account both the level of assurance at which each task needs to be certified, as well as the possible criticalities at which the system may execute. The proposed analysis is formally presented as well as explained with the aid of an illustrative example. <s> BIB035 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> We design a novel DRAM controller that bundles and executes memory requests of hard real-time applications in consecutive rounds based on their type to reduce read/write switching delay. At the same time, our controller provides a configurable, guaranteed bandwidth for soft real-time requests. We show that there is a fundamental trade-off between the latency guarantee for hard real-time requests and the bandwidth provided to soft requests. Finally, we compare our approach analytically and experimentally with the current state-of-the-art real-time memory controller for single-rank DRAM devices, which applies type reordering at the level of DRAM commands rather than requests. Our evaluation shows that for tasks exhibiting average row hit ratios, or for which computing a row hit guarantee might be difficult, our controller provides both smaller guaranteed latency and larger bandwidth. <s> BIB036 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> The multicore revolution is having limited impact in safety-critical application domains. A key reason is the "one-out-of-m" problem: when validating real-time constraints on an m-core platform, excessive analysis pessimism can effectively negate the processing capacity of the additional m-1 cores so that only "one core's worth" of capacity is utilized even though m cores are available. Two approaches have been investigated previously to address this problem: mixed-criticality allocation techniques, which provision less-critical software components less pessimistically, and hardware-management techniques, which make the underlying platform itself more predictable. A better way forward may be to combine both approaches, but to show this, fundamentally new criticality-cognizant hardware-management tradeoffs must be explored. Such tradeoffs are investigated herein in the context of a new variant of a mixed-criticality framework, called MC^2, that supports configurable criticality-based hardware management. This framework allows specific DRAM memory banks and areas of the last-level cache (LLC) to be allocated to certain groups of tasks.
A linear-programming-based optimization framework is presented for sizing such LLC areas, subject to conditions for ensuring MC^2 schedulability. The effectiveness of the overall framework in resolving hardware-management and scheduling tradeoffs is investigated in the context of a large-scale overhead-aware schedulability study. This study was guided by extensive trace data obtained by executing benchmark programs on the new variant of MC^2 presented herein. This study shows that mixed-criticality allocation and hardware-management techniques can be much more effective when applied together instead of alone. <s> BIB037 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Embedded systems are increasingly based on multi-core platforms to accommodate a growing number of applications, some of which have real-time requirements. Resources, such as off-chip DRAM, are typically shared between the applications using memory interconnects with different arbitration policies to cater to diverse bandwidth and latency requirements. However, traditional centralized interconnects are not scalable as the number of clients increase. Similarly, current distributed interconnects either cannot satisfy the diverse requirements or have decoupled arbitration stages, resulting in larger area, power and worst-case latency. The four main contributions of this article are: 1) a Globally Arbitrated Memory Tree (GAMT) with a distributed architecture that scales well with the number of cores, 2) an RTL-level implementation that can be configured with five arbitration policies (three distinct and two as special cases), 3) the concept of mixed arbitration policies that allows the policy to be selected individually per core, and 4) a worst-case analysis for a mixed arbitration policy that combines TDM and FBSP arbitration. We compare the performance of GAMT with centralized implementations and show that it can run up to four times faster and have over 51 and 37 percent reduction in area and power consumption, respectively, for a given bandwidth. <s> BIB038 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> The Internet of Things (IoT) is gaining momentum and may positively influence the automation of energy-efficiency management of smart buildings. However, the development of IoT-enabled applications still takes tremendous efforts due to the lack of proper tools. Many software components have to be developed from scratch, thus requiring huge amounts of effort, as developers must have a deep understanding of the technologies, the new application domain, and the interplay with legacy systems. In this paper we introduce the IMPReSS Systems Development Platform (SDP) that aims at reducing the complexity of developing IoT-enabled applications for supporting sensor data collection in buildings, managing automated system changes according to the context, and real-time prioritization of devices for controlling energy usage. The effectiveness of the SDP for the development of IoT-based context-aware and mixed-criticality applications was assessed by using it in four scenarios involving energy efficiency management in public buildings. Qualitative studies were undertaken with application developers in order to evaluate their perception of five key components of the SDP with regard to usability. The study revealed significant and encouraging results.
Further, a quantitative performance analysis explored the scalability limits of the IMPReSS communication components. <s> BIB039 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Integration of safety-critical tasks with different certification requirements onto a common hardware platform has become a growing tendency in the design of real-time and embedded systems. In the past decade, great efforts have been made to develop techniques for handling uncertainties in task worst-case execution time, quality-of-service, and schedulability of mixed-criticality systems. However, few works take fault-tolerance as a design requirement. In this paper, we address the scheduling of fault-tolerant mixed-criticality systems to ensure the safety of tasks at different levels of criticalities in the presence of transient faults. We adopt task re-execution as the fault-tolerant technique. Extensive simulations were performed to validate the effectiveness of our algorithm. Simulation results show that our algorithm results in up to 15.8% and 94.4% improvement in system reliability and schedule feasibility as compared to existing techniques, which contributes to a more safe system. <s> BIB040 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> A function model for the description of distributed end-to-end computations is called a task graph. Multiple functions with different criticality levels are supported by one electronic control unit (ECU), and one function is distributed over multiple ECUs in integrated automotive architecture. Considering the inherent heterogeneity, interaction, and diverse nature of such an architecture, automotive embedded systems have evolved to automotive cyber-physical systems (ACPS), which consist of multiple distributed automotive functions with different criticality levels. Efficient scheduling strategies can fully utilize ECUs in ACPS for high performance. However, ACPS should deal with joint challenges of heterogeneity, dynamics, parallelism, safety, and criticality, and these challenges are the key issues that will be solved in the next generation automotive open system architecture adaptive platform. This study first proposes a fairness-based dynamic scheduling algorithm FDS_MIMF to minimize the individual makespans (i.e., schedule lengths) of functions from a high performance perspective. FDS_MIMF can respond autonomously to the joint challenges of heterogeneity, dynamics, and parallelism of ACPS. To further respond autonomously to the joint challenges of heterogeneity, dynamics, parallelism, safety, and criticality of ACPS, we present an adaptive dynamic scheduling algorithm ADS_MIMF to achieve low deadline miss ratios (DMRs) of safety-critical functions from a timing constraint perspective while maintaining the acceptable overall makespan of ACPS from a high performance perspective. ADS_MIMF is implemented by changing up and down the criticality level of ACPS to adjust the execution of different functions on different criticality levels without increasing the time complexity. Experimental results indicate that FDS_MIMF can obtain short overall makespan, whereas ADS_MIMF can reduce the DMR values of high-criticality functions while still keeping satisfactory performance of ACPS. 
<s> BIB041 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Upcoming high-bandwidth protocols like Ethernet TSN feature mechanisms for redundant and deterministic (scheduled) message delivery to integrate safety- and real-time-critical applications and, thus, realize mixed-criticality systems. In existing design approaches, the message routing and system scheduling are generated in two entirely separated design steps, ignoring and/or not exploiting the distinct interrelations between routing and scheduling decisions. In this paper, we first introduce an exact approach to generate an implementation with a valid routing and a valid schedule in a single step by solving a 0-1 ILP. Second, we show that the 0-1 ILP formulation can be utilized in a design space exploration to optimize the routing and schedule with respect to, e.g., interference imposed on non-scheduled traffic or the number of configured port slots. We demonstrate the optimization potential of the proposed approach using a mixed-criticality system from the automotive domain. <s> BIB042 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Safety-critical embedded systems are often subject to multiple certification requirements from different certification authorities, giving rise to the concept of Mixed-Criticality Systems. Preemption Threshold Scheduling (PTS) is an effective technique for reducing stack memory usage by selectively disabling preemption between pairs of tasks. In this paper, we consider the AUTOSAR standard in automotive embedded software development, where each task consists of multiple runnables that are scheduled with static priority and preemption threshold. We address the problems of design synthesis from an AUTOSAR model to minimize stack usage for mixed-criticality systems with preemption threshold scheduling, and present algorithms for schedulability analysis and system stack usage minimization. Experimental results demonstrate that our approach can significantly reduce the system stack usage. <s> BIB043 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Cluster-based scheduling is recently gaining importance to be applied to mixed-criticality real-time systems on multicore processors platform. In this approach, the cores are grouped into clusters, and tasks that are partitioned among different clusters are scheduled by global scheduler in each cluster. This research work introduces a new cluster-based task allocation scheme for the mixed-criticality real-time task sets on multicore processors. For task allocation, smaller clusters sizes (sub-clusters) are used for mixed-criticality tasks in low criticality mode, while relatively larger cluster sizes are used for high criticality tasks in high criticality mode.
In this research paper, the mixed-criticality task set is allocated to clusters using a worst-fit heuristic. The tasks from each cluster are also allocated to its sub-clusters, using the same worst-fit heuristic. A fixed-priority response time analysis approach based on Audsley's approach is used for the schedulability analysis of tasks in each cluster and sub-cluster. If a high criticality job is not completed after its worst case execution time in low mode, then the system is switched to high criticality mode. After the mode switch, all the low-criticality tasks are discarded and only high criticality tasks are further executed in high criticality mode. Simulation results indicate that the percentage of schedulable task sets significantly increases under cluster scheduling as compared to partitioned and global mixed-criticality scheduling schemes. <s> BIB044 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Introduction <s> Partitioning is a widespread technique that enables the execution of mixed-criticality applications in the same hardware platform. New challenges for the next generation of partitioned systems include the use of multiprocessor architectures and distribution standards in order to open up this technique to a heterogeneous set of emerging scenarios (e.g., cyber-physical systems). This work describes a system architecture that enables the use of data-centric distribution middleware in partitioned real-time embedded systems based on a hypervisor for multi-core, and it focuses on the analysis of the available architectural configurations. We also present an application-case study to evaluate and identify the possible trade-offs among the different configurations. <s> BIB045
|
With the development of science and technology, the scale and complexity of the functionality integrated into modern embedded systems have increased rapidly, driving up the cost of embedded systems. To save costs, there has been a trend towards integrating applications with different key functionalities, each previously deployed in an independent subsystem, onto a common hardware platform with shared processor resources, which has promoted the emergence of mixed-criticality (MC) systems. To cope with this growing complexity under embedded hardware constraints such as size, functionality, and energy consumption, integrating multiple critical functions previously deployed in independent subsystems into a unified platform has become the direction of today's embedded real-time system design, and the study of MC systems has accordingly become a research hotspot. MC systems have two or more criticality levels, and tasks are classified according to their degree of urgency: more critical tasks are classified as high-criticality (HC) tasks, while less critical tasks are classified as low-criticality (LC) tasks. More important functionalities should be given a higher criticality to ensure that their safety needs are met. In MC systems, failure to perform HC tasks can have catastrophic consequences, whereas failure to perform LC tasks results only in a reduced user experience, so varying degrees of protection must be provided to applications at different criticality levels. If all tasks execute within their deadlines, the system is called MC schedulable. There are many studies on schedulability in MC systems. BIB003 BIB017 BIB021 BIB022 BIB007 BIB034 BIB018 BIB001 BIB008 BIB009 Most of them guarantee the correct operation of the HC tasks while reducing or even abandoning the execution of LC tasks. However, this reduces the quality-of-service (QoS) of the system, so it is unreasonable to simply neglect LC tasks. Considering this, researchers began to investigate the execution of LC tasks to improve the QoS of these systems. BIB023 BIB035 BIB024 BIB025 BIB026 BIB027 BIB028 BIB036 BIB037 BIB010 BIB038 At the same time, improving system QoS brings higher energy costs. Many MC systems are battery-powered and therefore have limited energy, and higher energy consumption severely restricts the performance of embedded devices. Therefore, the application model is abstracted from the specific application scenario, and constraints on the correct execution of HC tasks, real-time behavior, and low power consumption are established. Under the premise of ensuring the correct execution of the HC tasks in an MC system and the real-time performance of the embedded system, finding a suitable low-power scheduling strategy is very meaningful; hence, energy needs to be considered when designing and developing an MC system. BIB029 BIB019 BIB011 BIB030 BIB039 BIB031 The above studies have made tremendous efforts to develop techniques for dealing with the uncertainty of schedulability, QoS, and energy consumption in MC systems. However, few papers regard fault tolerance, which represents the resilience of the system when an error occurs, as a design requirement. Research on fault tolerance is therefore necessary for MC systems to enhance their robustness, as it counters potential failures and enables a high level of safety and reliability.
Recently, there has been a substantial body of research on fault tolerance in MC systems. BIB012 BIB040 BIB002 BIB013 BIB032 BIB004 BIB014 BIB015 BIB016 With the rapid development of MC systems, they have been applied in more and more fields. BIB033 BIB041 BIB042 BIB043 BIB044 BIB020 BIB005 BIB006 BIB045 Contribution and organization: This paper outlines a number of state-of-the-art techniques for MC systems. The rest of the paper is organized as follows. First, we discuss the importance of researching MC systems and describe their advantages, prospects, and present situation (Sec. 1), and then we present the schedulability study of MC systems (Sec. 2). Next, considering that designing for system schedulability may result in a degradation of QoS, we examine some methods to address this problem in MC systems (Sec. 3). Since enhancing the QoS of the system may in turn lead to excessive energy consumption, we then discuss some techniques to ensure energy efficiency (Sec. 4). In addition, we introduce research on fault tolerance, which is taken as a requirement for system design (Sec. 5). Finally, we introduce some applications of MC systems in different domains (Sec. 6). For a better understanding of the paper, we list the main abbreviations in Table 1.
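To make the dual-criticality task model sketched above concrete, the following minimal Python fragment shows one common way such tasks are represented; the names (Crit, MCTask) and the exact interface are our own illustrative choices rather than part of any cited work.

```python
from dataclasses import dataclass
from enum import Enum

class Crit(Enum):
    LO = 0  # low criticality (LC)
    HI = 1  # high criticality (HC)

@dataclass
class MCTask:
    """Implicit-deadline sporadic MC task (deadline equals period)."""
    period: float         # minimum inter-arrival time T (= deadline D)
    crit: Crit            # criticality level of the task
    wcet_lo: float        # optimistic low-assurance WCET estimate C(LO)
    wcet_hi: float = 0.0  # pessimistic high-assurance WCET C(HI); HC tasks only

    def utilization(self, level: Crit) -> float:
        """Utilization of the task under the given assurance level."""
        c = self.wcet_lo if level is Crit.LO else max(self.wcet_lo, self.wcet_hi)
        return c / self.period
```

Under the usual dual-criticality semantics, the system starts in LO mode; if some HC job executes beyond its wcet_lo without completing, the scheduler switches to HI mode, after which LC tasks may be degraded or dropped so that every HC task can still meet its deadline up to wcet_hi.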
|
A Review of Recent Techniques in Mixed-Criticality Systems <s> Schedulability-Aware Techniques in MC Systems <s> In fixed-priority scheduling, the priority of a job, once assigned, may not change. A new fixed-priority algorithm for scheduling systems of periodic tasks upon identical multiprocessors is proposed. This algorithm has an achievable utilization of (m+1)/2 upon m unit-capacity processors. It is proven that this algorithm is optimal from the perspective of achievable utilization in the sense that no fixed-priority algorithm for scheduling periodic task systems upon identical multiprocessors may have an achievable utilization greater than (m+1)/2. <s> BIB001 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Schedulability-Aware Techniques in MC Systems <s> SymTA/S is a system-level performance and timing analysis approach based on formal scheduling analysis techniques and symbolic simulation. The tool supports heterogeneous architectures, complex task dependencies and context aware analysis. It determines system-level performance data such as end-to-end latencies, bus and processor utilisation, and worst-case scheduling scenarios. SymTA/S furthermore combines optimisation algorithms with system sensitivity analysis for rapid design space exploration. The paper gives an overview of current research interests in the SymTA/S project. <s> BIB002 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Schedulability-Aware Techniques in MC Systems <s> Many safety-critical embedded systems are subject to certification requirements. However, only a subset of the functionality of the system may be safety-critical and hence subject to certification, the rest of the functionality is non safety-critical and does not need to be certified, or is certified to a lower level. The resulting mixed criticality system offers challenges both for static schedulability analysis and run-time monitoring. This paper considers a novel implementation scheme for fixed priority uniprocessor scheduling of mixed criticality systems. The scheme requires that jobs have their execution times monitored (as is usually the case in high integrity systems). An optimal priority assignment scheme is derived and sufficient response-time analysis is provided. The new scheme formally dominates those previously published. Evaluations illustrate the benefits of the scheme. <s> BIB003 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Schedulability-Aware Techniques in MC Systems <s> Systems in many safety-critical application domains are subject to certification requirements. For any given system, however, it may be the case that only a subset of its functionality is safety-critical and hence subject to certification, the rest of the functionality is non safety critical and does not need to be certified, or is certified to a lower level of assurance. An algorithm called EDF-VD (for Earliest Deadline First with Virtual Deadlines) is described for the scheduling of such mixed-criticality task systems. Analyses of EDF-VD significantly superior to previously-known ones are presented, based on metrics such as processor speedup factor (EDF-VD is proved to be optimal with respect to this metric) and utilization bounds. <s> BIB004 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Schedulability-Aware Techniques in MC Systems <s> A common trend in real-time safety-critical embedded systems is to integrate multiple applications on a single platform.
Such systems are known as mixed-criticality (MC) systems as the applications are usually characterized by different criticality levels (CLs). Nowadays, multicore platforms are promoted due to cost and performance benefits. However, certification of multicore MC systems is challenging because concurrently executed applications with different CLs may block each other when accessing shared platform resources. Most of the existing research on multicore MC scheduling ignores the effects of resource sharing on the execution times of applications. This paper proposes a MC scheduling strategy which explicitly accounts for these effects. Applications are executed by a flexible time-triggered criticality-monotonic scheduling scheme. Schedulers on different cores are dynamically synchronized such that only a statically known subset of applications of the same CL can interfere on shared resources, e. g., memories, buses. Therefore, the timing effects of resource sharing are bounded and we quantify them at design time. We combine this scheduling strategy with a mapping optimization technique for achieving better resource utilization. The efficiency of the approach is demonstrated through extensive simulations as well as comparisons with traditional temporal partitioning and state-of-the-art scheduling algorithms. It is also validated on a real-world avionics system. <s> BIB005 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Schedulability-Aware Techniques in MC Systems <s> Synchronous languages are widely used to design safety-critical embedded systems. These languages are based on the synchrony hypothesis, asserting that all tasks must complete instantaneously at each logical time step. This assertion is, however, unsuitable for the design of mixed-criticality systems, where some tasks can tolerate missed deadlines. This paper proposes a novel extension to the synchronous approach for supporting three levels of task criticality: life, mission, and non-critical. We achieve this by relaxing the synchrony hypothesis to allow tasks that can tolerate bounded or unbounded deadline misses. We address the issue of task communication between multi-rate, mixed-criticality tasks, and propose a deterministic lossless communication model. To maximize system utilization, we present a hybrid static and dynamic scheduling approach that executes schedulable tasks during slack time. Extensive benchmarking shows that our approach can schedule up to 15% more task sets and achieve an average of 5.38% better system utilization than the Early-Release EDF (ER-EDF) approach. Tasks are scheduled fairer under our approach and achieve consistently higher execution frequencies, but require more preemptions. <s> BIB006 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Schedulability-Aware Techniques in MC Systems <s> Multicore systems are being increasingly used for embedded system deployments, even in safety-critical domains. Co-hosting applications of different criticality levels in the same platform requires sufficient isolation among them, which has given rise to the mixed-criticality scheduling problem and several recently proposed policies. Such policies typically employ runtime mechanisms to monitor task execution, detect exceptional events like task overruns, and react by switching scheduling mode. Implementing such mechanisms efficiently is crucial for any scheduler to detect runtime events and react in a timely manner, without compromising the system’s safety. 
This paper investigates implementation alternatives for these mechanisms and empirically evaluates the effect of their runtime overhead on the schedulability of mixed-criticality applications. Specifically, we implement in user-space two state-of-the-art scheduling policies: the flexible time-triggered FTTS [1] and the partitioned EDFVD [2], and measure their runtime overheads on a 60-core Intel® Xeon Phi and a 4-core Intel® Core i5 for the first time. Based on extensive executions of synthetic task sets and an industrial avionic application, we show that these overheads cannot be neglected, esp. on massively multicore architectures, where they can incur a schedulability loss up to 97%. Evaluating runtime mechanisms early in the design phase and integrating their overheads into schedulability analysis seem therefore inevitable steps in the design of mixed-criticality systems. The need for verifiably bounded overheads motivates the development of novel timing-predictable architectures and runtime environments specifically targeted for mixed-criticality applications. <s> BIB007 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Schedulability-Aware Techniques in MC Systems <s> Systems in many safety-critical application domains are subject to certification requirements. For any given system, however, it may be the case that only a subset of its functionality is safety-critical and hence subject to certification; the rest of the functionality is non-safety-critical and does not need to be certified, or is certified to lower levels of assurance. The certification-cognizant runtime scheduling of such mixed-criticality systems is considered. An algorithm called EDF-VD (for Earliest Deadline First with Virtual Deadlines) is presented: this algorithm can schedule systems for which any number of criticality levels are defined. Efficient implementations of EDF-VD, as well as associated schedulability tests for determining whether a task system can be correctly scheduled using EDF-VD, are presented. For up to 13 criticality levels, analyses of EDF-VD, based on metrics such as processor speedup factor and utilization bounds, are derived, and conditions under which EDF-VD is optimal with respect to these metrics are identified. Finally, two extensions of EDF-VD are discussed that enhance its applicability. The extensions are aimed at scheduling a wider range of task sets, while preserving the favorable worst-case resource usage guarantees of the basic algorithm. <s> BIB008 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Schedulability-Aware Techniques in MC Systems <s> We present a new graph-based real-time task model that can specify complex job arrival patterns and global state-based mode switching. The mode switching is of a mixed-criticality style, meaning that it allows immediate changes to the parameters of active jobs upon mode switches. The resulting task model generalizes previously proposed task graph models as well as mixed-criticality (sporadic) task models; the merging of these mutually incomparable modeling paradigms allows formulation of new types of tasks. A sufficient schedulability analysis for EDF on preemptive uniprocessors is developed for the proposed model.
<s> BIB009 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Schedulability-Aware Techniques in MC Systems <s> The scheduling for mixed-criticality (MC) systems, where multiple activities have different certification requirements and thus different criticality on a shared hardware platform, has recently become an important research focus. In this work, considering that multicore processors have emerged as the de-facto platform for modern embedded systems, we propose a novel and efficient criticality-aware task partitioning algorithm (CA-TPA) for a set of periodic MC tasks running on multicore systems. We employ the state-of-the art EDF-VD scheduler on each core. Our work is based on the observation that the utilizations of MC tasks at different criticality levels can have quite large variations, hence when a task is allocated, its utilization contribution on different processors may vary by large margins and this can significantly affect the schedulability of tasks. During partitioning, CA-TPA sorts the tasks according to their utilization contributions on individual processors. Several heuristics are investigated to balance the workload on processors with the objective of improving the schedulability of tasks under CA-TPA. The simulation results show that our proposed CA-TPA scheme is effective, giving much higher schedulability ratios when compared to the classical partitioning schemes. <s> BIB010 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Schedulability-Aware Techniques in MC Systems <s> The integration of mixed-critical tasks into a platform is an increasingly important trend in the design of real-time systems due to its efficient resource usage. With a growing variety of activation patterns considered in real-time systems, some of them capture arbitrary activation patterns. As a consequence, the existing scheduling approaches in mixed-criticality systems (MCs), which assume the sporadic tasks with implicit deadlines, have sometimes become inapplicable or are ineffective. In this paper, we extend the sporadically activated task model to the arbitrarily activated task model in MCs with the preemptive fixed-task-priority schedule. By using the event arrival curve to model task activations, we present the necessary and sufficient schedulability tests that are based on the well-established results from Real-Time Calculus. We propose to use the busy-window analysis to do the sufficient test because it has been shown to be tighter than the sufficient test of using Real-Time Calculus. According... <s> BIB011
|
The tasks in an MC system are divided into HC and LC tasks, as mentioned above. Most papers assume that each HC task has two WCET estimates, a low-level WCET validated to a low level of assurance and a high-level WCET validated to a high level of assurance, while LC tasks require only low-level assurance and therefore have only a low-level WCET. During operation, if any HC task exceeds its low-level WCET, all LC tasks are abandoned so that the system retains sufficient resources to satisfy the high-level assurance. The system is referred to as MC schedulable if all tasks meet their deadlines when every task executes within its low-level WCET, and the HC tasks still meet their deadlines even when executing up to their high-level WCETs. Schedulability of tasks is the key property of an MC system. To meet the schedulability of the system, most MC systems ensure the correct operation of HC tasks while neglecting or even abandoning the execution of LC tasks. We introduce the current research status below. Much research has been devoted to using the EDF-VD scheduling algorithm to maintain the schedulability of MC systems (a minimal sketch of the EDF-VD schedulability test is given at the end of this passage). Baruah et al. 1 considered the scheduling of implicit-deadline sporadic tasks in MC systems on uniform multiprocessor platforms and introduced two policies: the MC scheduling algorithm earliest deadline first with virtual deadlines (EDF-VD) BIB001 on uniprocessors, and the EDF-based global scheduling algorithm called fpEDF BIB004 on multiprocessors. Sigrist et al. BIB007 empirically assessed the influence of runtime overheads on the schedulability of MC systems, determined alternative solutions for implementing common MC mechanisms on multicores, and examined two scheduling policies: the FTTS BIB005 and the partitioned EDF-VD. BIB001 Baruah 3 studied the scheduling of tasks in MC systems specified with the three-parameter model of sporadic tasks on preemptive uniprocessor platforms and extended the EDF-VD scheduling algorithm, BIB003 which schedules sporadic MC tasks in which multiple values are used only for the WCET parameter. Ramanathan and Easwaran 4 focused on the problem of MC scheduling on partitioned multiprocessors and used three different scheduling approaches, ECDF, BIB004 EDF-VD, BIB001 and Adaptive MC (AMC), BIB003 to decrease the maximum difference between the overall LC utilization and the overall HC utilization assigned to each processor. However, as the schedulability conditions of the EDF-VD scheduler show, BIB008 the utilizations of MC tasks at the other criticality levels also play an important role besides the maximum utilization. Han et al. BIB010 used the EDF-VD algorithm BIB001 on each core and proposed a novel criticality-aware task partitioning algorithm for a set of periodic MC tasks running on multicore systems. According to the synchrony hypothesis, synchronous languages assert that all tasks must complete instantaneously at each logical time step. However, this assertion is unsuitable for the design of MC systems, in which some tasks, such as LC tasks, can tolerate missed deadlines. Yip et al. BIB006 proposed a new extension to the synchronous approach for supporting three levels of task criticality (life, mission, and non-critical). They achieved this by relaxing the synchrony hypothesis to allow tasks that can tolerate bounded or unbounded deadline misses. However, relaxing the synchrony hypothesis to allow mixed criticality conflicts with the synchronous communication model. To solve this, they used a common lossless buffering method with bounded queue sizes.
They then proposed a lossless communication model to allow life- and mission-critical tasks to communicate at a certain frequency. To schedule life- and mission-critical tasks, they developed a static scheduling approach using integer linear programming, which maximizes system utilization by assigning slack time across all tasks proportionally. To further increase runtime utilization, tasks are scheduled dynamically during slack whenever possible. Extensive results showed that the proposed approach can schedule more task sets and achieve better system utilization than the ER-EDF 67 approach. Other works used the fixed-priority (FP) scheduling algorithm to maintain the schedulability of MC systems. Guan et al. proposed an effective algorithm named PLRS to schedule certifiable MC sporadic task systems. PLRS applies FP job scheduling and allocates job priorities by balancing and exploiting the asymmetric influences between the workloads at distinct criticality levels. Chen et al. studied FP scheduling of MC systems on a uniprocessor platform in a more general way that uses different priority orderings in different stages of execution, and presented a novel priority assignment scheme called heuristic priority assignment (HPA) based on the popular optimal priority assignment (OPA) algorithm. BIB003 Hu et al. BIB011 analyzed the schedulability of dual-criticality (DC) systems with arbitrarily activated tasks. They extended the sporadically activated task model to an arbitrarily activated task model in MC systems under preemptive fixed-priority task scheduling. By applying the event arrival curve to upper-bound task activations, they integrated well-established results from Real-Time Calculus (RTC) BIB002 to analyze the schedulability of arbitrarily activated tasks in MC systems. However, the scheduling and analysis of MC systems using sporadic MC task models have been criticized for their limited applicability to many real systems. To address this, Ekberg and Yi BIB009 proposed a novel task model called MS-DRT, which integrates complex job arrival patterns with global mode switching, and adopted a structured EDF-schedulability analysis approach in which each mode of the system is analyzed relatively separately by abstracting the effects of the other modes. In FP scheduling, unbounded priority inversion can be prevented by using priority inheritance protocols, but these protocols do not work when zero-slack (ZS) schedulers are applied to schedule MC task sets.
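Because EDF-VD recurs throughout the works above, the following is a minimal sketch of its sufficient schedulability test for dual-criticality, implicit-deadline sporadic task sets on a preemptive uniprocessor, as commonly stated in the literature following Baruah et al.'s analysis; the function name and the tuple-based task encoding are our own illustrative assumptions.

```python
def edf_vd_test(lc_tasks, hc_tasks):
    """Sufficient EDF-VD test for dual-criticality implicit-deadline tasks.

    lc_tasks: list of (C, T) pairs for LC tasks (single WCET level).
    hc_tasks: list of (C_lo, C_hi, T) triples for HC tasks.
    Returns (schedulable, x), where x is the scaling factor giving each
    HC task the virtual deadline x * T in LO mode.
    """
    u_lo_lo = sum(c / t for c, t in lc_tasks)       # LC load
    u_hi_lo = sum(cl / t for cl, _, t in hc_tasks)  # HC load, low-level WCETs
    u_hi_hi = sum(ch / t for _, ch, t in hc_tasks)  # HC load, high-level WCETs

    if u_lo_lo + u_hi_hi <= 1.0:  # worst case already fits: plain EDF, x = 1
        return True, 1.0
    if u_lo_lo >= 1.0:            # LC load alone overloads the processor
        return False, None

    x = u_hi_lo / (1.0 - u_lo_lo)     # smallest x keeping LO mode feasible
    if x * u_lo_lo + u_hi_hi <= 1.0:  # HI-mode condition (carry-over jobs)
        return True, x
    return False, None
```

Intuitively, shrinking HC deadlines by the factor x in LO mode makes HC jobs finish early enough that, if a mode switch happens, sufficient time remains before their true deadlines to accommodate the pessimistic high-level WCETs.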
|
A Review of Recent Techniques in Mixed-Criticality Systems <s> 11 <s> Abstract : The priority ceiling protocol is a new technique that addresses the priority inversion problem, i.e., the possibility that a high-priority task can be delayed by a low-priority task. Under the priority ceiling protocol, a high priority task can be blocked at most once by a lower priority task. This paper defines how to apply the protocol to Ada. In particular, restrictions on the use of task priorities in Ada are defined as well as restrictions on the use of Ada tasking constructs. An extensive example illustrating the behavior guaranteed by the protocol is given. This paper was presented at the 2nd International Workshop on Real-Time Ada Issues in May 1988. <s> BIB001 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> 11 <s> The recurring real-time task model for hard-real-time task is studied from a feasibility-analysis perspective. This model generalizes earlier models such as the sporadic task model and the generalized multiframe task model. Algorithms are presented for the static-priority and dynamic-priority feasibility-analysis of systems of independent recurring real-time tasks in a preemptive uniprocessor environment. <s> BIB002 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> 11 <s> Many safety-critical embedded systems are subject to certification requirements. However, only a subset of the functionality of the system may be safety-critical and hence subject to certification, the rest of the functionality is non safety-critical and does not need to be certified, or is certified to a lower level. The resulting mixed criticality system offers challenges both for static schedulability analysis and run-time monitoring. This paper considers a novel implementation scheme for fixed priority uniprocessor scheduling of mixed criticality systems. The scheme requires that jobs have their execution times monitored (as is usually the case in high integrity systems). An optimal priority assignment scheme is derived and sufficient response-time analysis is provided. The new scheme formally dominates those previously published. Evaluations illustrate the benefits of the scheme. <s> BIB003 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> 11 <s> We propose HLC-PCP (Highest-Locker Criticality, Priority-Ceiling Protocol), which extends the well-known Priority Ceiling Protocol (PCP) to be applicable to AMC (Adaptive Mixed-Criticality), a variant of MCS. We present methods for worst-case blocking time computation with HLC-PCP, used for schedulability analysis of AMC with resource sharing, for both the dual-criticality model and the general multi-criticality model. This helps relax one of the key limiting assumptions of most MCS work, that is, tasks with different levels of criticality do not have common shared resources. Today's safety-critical Cyber-Physical Systems (CPS) often need to integrate multiple diverse applications with varying levels of importance, or criticality. Mixed-Criticality Scheduling (MCS) has been proposed with the objectives of achieving certification at multiple criticality levels and efficient utilization of hardware resources. Current work on MCS typically assumes tasks at different criticality levels are independent and do not share any resources (data). 
<s> BIB004
|
presented two protocols that extend real-time synchronization protocols such as PCP to harmonize the mode changes of the ZSs and to limit the criticality and priority inversions caused by resource sharing. They also developed methods to adjust the blocking terms introduced by synchronization when computing the ZS instants applied by the scheduler. Niz et al. proposed an end-to-end ZS rate-monotonic method (ZSRM) for real-time pipelines, extending an MC scheduler originally developed for non-pipelined systems. They calculated the ZS instant for each task in the task set such that the resulting ZS instants make the task set schedulable. Under ZSRM, each task is associated with a parameter called its ZS instant, and whenever an HC task has not completed by its ZS instant, all LC tasks are suspended to meet the deadline of the HC task. Niz and Phan 13 also proposed a partitioned scheduling method for multimodal MC-RTS on multiprocessor platforms. The scheduling approach is used for scheduling tasks on each processor. It extends the algorithm with a mode-transition enforcement mechanism, which relies on the transitional ZS times during a task's mode change to manage the LC tasks so that the schedulability of HC tasks is preserved; the execution time of LC tasks is managed during mode changes to keep HC tasks schedulable. Based on response-time calculation, BIB002 the schedulability analysis approaches for systems under AMC scheduling are too complicated for optimization purposes. Zhao and Zeng therefore presented a schedulability analysis method based on the request bound function (RBF) for AMC-scheduled MC systems, called AMC-RBF; to motivate AMC-RBF, they formulated the schedulable region according to AMC-RBF. Zhao et al. BIB004 proposed a protocol named HLC-PCP, which extends PCP BIB001 to protect shared resources under AMC. They also presented an approach for worst-case blocking time calculation with HLC-PCP, used for AMC schedulability analysis with shared resources, for both the general multi-criticality model and the DC model. Since the blocking-time analysis in HLC-PCP depends only on the characteristics of the AMC scheduling algorithm, not on the algorithm used for worst-case response time (WCRT) analysis, they modified the WCRT analysis equations for AMC-RBF, which compute the WCRT of each task under AMC, BIB003 to account for resource sharing simply by adding the blocking-time terms. The widely used mode-switch methods assume that all HC tasks remain schedulable even when the LC tasks are abandoned or degraded. However, such methods trigger a mode switch immediately after any task overruns, which can be pessimistic and abrupt. Considering this, Hu et al. 16 first addressed this issue and presented light-weight mode-switch methods that can effectively move the system out of the critical mode. The main idea is to enforce an overrun budget for all tasks by supervising task execution and updating a common overrun budget. The overrun budget can thus be adaptively replenished using run-time information and shared among all tasks, so that the mode switch can be delayed as much as possible. Experimental results showed that the proposed mode-switch methods reduce the mode-switch frequency and the number of abandoned jobs, and improve the scheduling time ratio of all tasks in the system. However, the schedulability conditions of the above methods do not include the effect of system overheads; therefore, Chisholm et al.
considered an overhead called cache-related preemption delays (CRPDs), i.e., the delays incurred when a preempted task reloads lines evicted from the shared last-level caches (LLCs) after a preemption. They utilized preemption-centric accounting to charge the CRPD cost, where the execution time of the preempting job is inflated to "pay" for the CRPD overheads incurred when each preempted job resumes execution after the preempting job completes. They then formulated schedulability conditions that account for CRPD overheads, integrating these delays into the schedulability analysis. Results showed that the proposed techniques can achieve schedulability improvements.
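To make the flavor of these response-time analyses concrete, the sketch below (our own, not the exact equations of the works above) iterates the classic AMC-rtb recurrences for a dual-criticality task set, extended with a per-task blocking term B_i of the kind that HLC-PCP bounds; the task encoding and all names are illustrative.

```python
from math import ceil

def rta(ci, bi, interferers, limit=10**6):
    """Iterate R = ci + bi + sum(ceil(R / Tj) * Cj) to a fixed point."""
    r = ci + bi
    while r <= limit:
        nxt = ci + bi + sum(ceil(r / t) * c for (c, t) in interferers)
        if nxt == r:
            return r
        r = nxt
    return None  # did not converge below `limit`

def amc_rtb(tasks):
    """tasks: dicts with keys prio (0 = highest), T, D, C_LO, C_HI,
    crit ('LO' or 'HI'), and B (blocking bound, e.g. from HLC-PCP).
    Sufficient AMC-rtb-style test: True if all deadlines hold in both modes."""
    by_prio = sorted(tasks, key=lambda t: t["prio"])
    for i, ti in enumerate(by_prio):
        hp = by_prio[:i]                      # higher-priority tasks
        # LO mode: every higher-priority task interferes with its LO budget.
        r_lo = rta(ti["C_LO"], ti["B"], [(tj["C_LO"], tj["T"]) for tj in hp])
        if r_lo is None or r_lo > ti["D"]:
            return False
        if ti["crit"] == "HI":
            # HI mode: HI tasks interfere with C_HI; LC interference is
            # capped at what fits before the mode switch, i.e. within R_i^LO.
            lc_carry = sum(ceil(r_lo / tj["T"]) * tj["C_LO"]
                           for tj in hp if tj["crit"] == "LO")
            hp_hi = [(tj["C_HI"], tj["T"]) for tj in hp if tj["crit"] == "HI"]
            r_hi = rta(ti["C_HI"] + lc_carry, ti["B"], hp_hi)
            if r_hi is None or r_hi > ti["D"]:
                return False
    return True
```

Note how the LO-mode response time R_i^LO appears inside the LC carry-over term: LC jobs can only interfere with an HC job before the mode switch, which is what keeps the HI-mode bound from being needlessly pessimistic.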
|
A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> The authors present the architecture of a general-purpose broadband-ISDN (B-ISDN) switch chip and, in particular, its novel feature: the weighted round-robin cell (packet) multiplexing algorithm and its implementation in hardware. The flow control and buffer management strategies that allow the chip to operate at top performance under congestion are given, and the reason why this multiplexing scheme should be used under those circumstances is explained. The chip architecture and how the key choices were made are discussed. The statistical performance of the switch is analyzed. The critical parts of the chip have been laid out and simulated, thus proving the feasibility of the architecture. Chip sizes of four to ten links with link throughput of 0.5 to 1 Gb/s and with about 1000 virtual circuits per switch have been realized. The results of simulations of the chip are presented. <s> BIB001 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> In fixed-priority scheduling, the priority of a job, once assigned, may not change. A new fixed-priority algorithm for scheduling systems of periodic tasks upon identical multiprocessors is proposed. This algorithm has an achievable utilization of (m+1)/2 upon m unit-capacity processors. It is proven that this algorithm is optimal from the perspective of achievable utilization in the sense that no fixed-priority algorithm for scheduling periodic task systems upon identical multiprocessors may have an achievable utilization greater than (m+1)/2. <s> BIB002 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> Many safety-critical embedded systems are subject to certification requirements. However, only a subset of the functionality of the system may be safety-critical and hence subject to certification, the rest of the functionality is non safety-critical and does not need to be certified, or is certified to a lower level. The resulting mixed criticality system offers challenges both for static schedulability analysis and run-time monitoring. This paper considers a novel implementation scheme for fixed priority uniprocessor scheduling of mixed criticality systems. The scheme requires that jobs have their execution times monitored (as is usually the case in high integrity systems). An optimal priority assignment scheme is derived and sufficient response-time analysis is provided. The new scheme formally dominates those previously published. Evaluations illustrate the benefits of the scheme. <s> BIB003 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> Systems in many safety-critical application domains are subject to certification requirements. For any given system, however, it may be the case that only a subset of its functionality is safety-critical and hence subject to certification, the rest of the functionality is non safety critical and does not need to be certified, or is certified to a lower level of assurance. An algorithm called EDF-VD (for Earliest Deadline First with Virtual Deadlines) is described for the scheduling of such mixed-criticality task systems. Analyses of EDF-VD significantly superior to previously-known ones are presented, based on metrics such as processor speedup factor (EDF-VD is proved to be optimal with respect to this metric) and utilization bounds.
<s> BIB004 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> Mixed-criticality scheduling algorithms, which attempt to reclaim system capacity lost to worst-case execution time pessimism, seem to hold great promise for multicore real-time systems, where such loss is particularly severe. However, the unique nature of these algorithms gives rise to a number of major challenges for the would-be implementer. This paper describes the first implementation of a mixed-criticality scheduling framework on a multicore system. We experimentally evaluate design trade-offs that arise when seeking to isolate tasks of different criticalities and to maintain overheads commensurate with a standard RTOS. We also evaluate a key property needed for such a system to be practical: that the system be robust to breaches of the optimistic execution-time assumptions used in mixed-criticality analysis. <s> BIB005 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> We generalize the commonly used mixed-criticality sporadic task model to let all task parameters (execution-time, deadline and period) change between criticality modes. In addition, new tasks may be added in higher criticality modes and the modes may be arranged using any directed acyclic graph, where the nodes represent the different criticality modes and the edges the possible mode switches. We formulate demand bound functions for mixed-criticality sporadic tasks and use these to determine EDF-schedulability. Tasks have different demand bound functions for each criticality mode. We show how to shift execution demand between different criticality modes by tuning the relative deadlines. This allows us to shape the demand characteristics of each task. We propose efficient algorithms for tuning all relative deadlines of a task set in order to shape the total demand to the available supply of the computing platform. Experiments indicate that this approach is successful in practice. This new approach has the added benefit of supporting hierarchical scheduling frameworks. <s> BIB006 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> We consider in this paper fault-tolerant mixed-criticality scheduling, where heterogeneous safety guarantees must be provided to functionalities (tasks) of varying criticalities (importances). We model explicitly the safety requirements for tasks of different criticalities according to safety standards, assuming hardware transient faults. We further provide analysis techniques to bound the effects of task killing and service degradation on the system safety and schedulability. Based on our model and analysis, we show that our problem can be converted to a conventional mixed-criticality scheduling problem. Thus, we broaden the scope of applicability of the conventional mixed-criticality scheduling techniques. Our proposed techniques are validated with a realistic flight management system application and extensive simulations. <s> BIB007 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> In mixed-criticality systems, highly critical tasks must be temporally and logically isolated from faults in lower-criticality tasks.
Such strict isolation, however, is difficult to ensure even for independent tasks, and has not yet been attained if low- and high-criticality tasks share resources subject to mutual exclusion constraints (e.g., shared data structures, peripheral I/O devices, or OS services), as is often the case in practical systems. Taking a pragmatic, systems-oriented point of view, this paper argues that traditional real-time locking approaches are unsuitable in a mixed-criticality context: locking is a cooperative activity and requires trust, which is inherently in conflict with the paramount isolation requirements. Instead, a solution based on resource servers (in the microkernel sense) is proposed, and MC-IPC, a novel synchronous multiprocessor IPC protocol for invoking such servers, is presented. The MC-IPC protocol enables strict temporal and logical isolation among mutually untrusted tasks and thus can be used to share resources among tasks of different criticalities. It is shown to be practically viable with a prototype implementation in LITMUSRT and validated with a case study involving several antagonistic failure modes. Finally, MC-IPC is shown to offer analytical benefits in the context of Vestal's mixed-criticality task model. <s> BIB008 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> The multicore revolution is having limited impact on safety-critical cyber-physical systems. The key reason is the "one out of m" problem: certifying the real-time correctness of a system running on m cores can necessitate pessimistic analysis that easily negates the processing capacity of the "additional" m - 1 cores. In safety-critical domains such as avionics, this has led to the common practice of simply disabling all but one core. In this paper, the usage of mixed-criticality (MC) scheduling and analysis techniques is considered to alleviate such analysis pessimism. Under MC analysis, a single system with components of different criticality levels is viewed as a set of different per-criticality-level systems. More optimistic analysis assumptions are made when certifying lower criticality levels. Unfortunately, this can lead to transient overloads at these levels, compromising real-time guarantees. This paper presents the first multicore MC framework that addresses this problem. This framework makes scheduling decisions in a virtual time domain that can be "stretched" until the effects of a transient overload have abated. Such effects dissipate more quickly if virtual time is "stretched" more aggressively, but this may reduce the quality of the work performed. This trade off is analyzed experimentally herein. <s> BIB009 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> We propose a probabilistic scheduling framework for the design and development of mixed-criticality systems, i.e., where tasks with different levels of criticality need to be scheduled on a shared resource. Whereas highly critical tasks normally require hard real-time guarantees, less or non-critical ones may be degraded or even temporarily discarded at runtime. We hence propose giving probabilistic (instead of deterministic) real-time guarantees on low-criticality tasks. This simplifies the analysis and reduces conservativeness on the one hand. On the other hand, probabilistic guarantees can be tuned by the designer to reach a desired level of assurance.
We illustrate these and other benefits of our framework based on extensive simulations. <s> BIB010 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> Many algorithms have recently been studied for scheduling mixed-criticality (MC) tasks. However, most existing MC scheduling algorithms guarantee the timely executions of high-criticality (HC) tasks at the expense of discarding low-criticality (LC) tasks, which can cause serious service interruption for such tasks. In this work, aiming at providing guaranteed services for LC tasks, we study an elastic mixed-criticality (E-MC) task model for dual-criticality systems. Specifically, the model allows each LC task to specify its maximum period (i.e., minimum service level) and a set of early-release points. We propose an early-release (ER) mechanism that enables LC tasks to be released more frequently and thus improve their service levels at runtime, with both conservative and aggressive approaches to exploiting system slack being considered, which is applied to both earliest deadline first (EDF) and preference-oriented earliest-deadline schedulers. We formally prove the correctness of the proposed early-release--earliest deadline first scheduler on guaranteeing the timeliness of all tasks through judicious management of the early releases of LC tasks. The proposed model and schedulers are evaluated through extensive simulations. The results show that by moderately relaxing the service requirements of LC tasks in MC task sets (i.e., by having LC tasks’ maximum periods in the E-MC model be two to three times their desired MC periods), most transformed E-MC task sets can be successfully scheduled without sacrificing the timeliness of HC tasks. Moreover, with the proposed ER mechanism, the runtime performance of tasks (e.g., execution frequencies of LC tasks, response times, and jitters of HC tasks) can be significantly improved under the ER schedulers when compared to that of the state-of-the-art earliest deadline first—virtual deadline scheduler. <s> BIB011 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> This paper studies real-time scheduling of mixed-criticality systems where low-criticality tasks are still guaranteed some service in the high-criticality mode, with reduced execution budgets. First, we present a utilization-based schedulability test for such systems under EDF-VD scheduling. Second, we quantify the suboptimality of EDF-VD (with our test condition) in terms of speedup factors. In general, the speedup factor is a function with respect to the ratio between the amount of resource required by different types of tasks in different criticality modes, and reaches 4/3 in the worst case. Furthermore, we show that the proposed utilization-based schedulability test and speedup factor results apply to the elastic mixed-criticality model as well. Experiments show effectiveness of our proposed method and confirm the theoretical suboptimality results. <s> BIB012 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> Many existing studies on mixed-criticality (MC) scheduling assume that low-criticality budgets for high-criticality applications are known apriori. These budgets are primarily used as guidance to determine when the scheduler should switch the system mode from low to high. 
Based on this key observation, in this paper we propose a dynamic MC scheduling model under which low-criticality budgets for individual high-criticality applications are determined at runtime as opposed to being fixed offline. To ensure sufficient budget for high-criticality applications at all times, we use offline schedulability analysis to determine a system-wide total low-criticality budget allocation for all the high-criticality applications combined. This total budget is used as guidance in our model to determine the need for a mode-switch. The runtime strategy then distributes this total budget among the various applications depending on their execution requirement and with the objective of postponing mode-switch as much as possible. We show that this runtime strategy is able to postpone mode-switches for a longer time than any strategy that uses a fixed low-criticality budget allocation for each application. Finally, since we are able to control the total budget allocation for high-criticality applications before mode-switch, we also propose techniques to determine these budgets considering system-wide objectives such as schedulability and service guarantee for low-criticality applications. <s> BIB013 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> 11 <s> In this paper we present a probabilistic response time analysis for mixed criticality real-time systems running on a single processor according to a fixed priority pre-emptive scheduling policy. The analysis extends the existing state of the art probabilistic analysis to the case of mixed criticalities, taking into account both the level of assurance at which each task needs to be certified, as well as the possible criticalities at which the system may execute. The proposed analysis is formally presented as well as explained with the aid of an illustrative example. <s> BIB014 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> The multicore revolution is having limited impact in safety-critical application domains. A key reason is the "one-out-of-m" problem: when validating real-time constraints on an m-core platform, excessive analysis pessimism can effectively negate the processing capacity of the additional m-1 cores so that only "one core's worth" of capacity is utilized even though m cores are available. Two approaches have been investigated previously to address this problem: mixed-criticality allocation techniques, which provision less-critical software components less pessimistically, and hardware-management techniques, which make the underlying platform itself more predictable. A better way forward may be to combine both approaches, but to show this, fundamentally new criticality-cognizant hardware-management tradeoffs must be explored. Such tradeoffs are investigated herein in the context of a new variant of a mixed-criticality framework, called MC^2, that supports configurable criticality-based hardware management. This framework allows specific DRAM memory banks and areas of the last-level cache (LLC) to be allocated to certain groups of tasks. A linear-programming-based optimization framework is presented for sizing such LLC areas, subject to conditions for ensuring MC^2 schedulability.
The effectiveness of the overall framework in resolving hardware-management and scheduling tradeoffs is investigated in the context of a large-scale overhead-aware schedulability study. This study was guided by extensive trace data obtained by executing benchmark programs on the new variant of MC^2 presented herein. This study shows that mixed-criticality allocation and hardware-management techniques can be much more effective when applied together instead of alone. <s> BIB015 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> Embedded systems are increasingly based on multi-core platforms to accommodate a growing number of applications, some of which have real-time requirements. Resources, such as off-chip DRAM, are typically shared between the applications using memory interconnects with different arbitration policies to cater to diverse bandwidth and latency requirements. However, traditional centralized interconnects are not scalable as the number of clients increases. Similarly, current distributed interconnects either cannot satisfy the diverse requirements or have decoupled arbitration stages, resulting in larger area, power and worst-case latency. The four main contributions of this article are: 1) a Globally Arbitrated Memory Tree (GAMT) with a distributed architecture that scales well with the number of cores, 2) an RTL-level implementation that can be configured with five arbitration policies (three distinct and two as special cases), 3) the concept of mixed arbitration policies that allows the policy to be selected individually per core, and 4) a worst-case analysis for a mixed arbitration policy that combines TDM and FBSP arbitration. We compare the performance of GAMT with centralized implementations and show that it can run up to four times faster and have over 51 and 37 percent reduction in area and power consumption, respectively, for a given bandwidth. <s> BIB016 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> QoS-Oriented Techniques in MC Systems <s> Industrial embedded systems are cost sensitive, and hardware cost of industrial production should be reduced for high profit. The functional safety requirement must be satisfied according to industrial functional safety standards. This study proposes three hardware cost optimization algorithms for functional safety-critical parallel applications on heterogeneous distributed embedded systems during the design phase. The explorative hardware cost optimization (EHCO), enhanced EHCO (EEHCO), and simplified EEHCO (SEEHCO) algorithms are proposed step by step. Experimental results reveal that EEHCO can obtain minimum hardware cost, whereas SEEHCO is efficient for large-scale parallel applications compared with the existing algorithms. <s> BIB017
|
However, the methods above guarantee the timely execution of HC tasks at the expense of discarding LC tasks, which may cause serious service interruption for those tasks. To ensure the schedulability of HC tasks, LC tasks are discarded immediately upon a criticality change from LC to HC mode. This approach completely ignores the QoS of LC tasks. However, in some applications it is still desirable to provide LC tasks with some (possibly degraded) QoS level even in HC mode. In addition, it has been shown recently that terminating tasks can actually violate the security of the system. BIB007 We introduce some solutions to these problems as follows. Based on a controlled randomization, Masrur BIB010 proposed a probabilistic scheduling framework that gives probabilistic real-time assurances to LC tasks while providing deterministic timing assurances for HC tasks. In their model, jobs are not dropped to increase the schedulability of other tasks; instead, all tasks need to be verified to be within certain thresholds of failure probability depending on their criticalities and the criticality of the system. These tasks are scheduled under an FP scheme on one processor, for which priorities are allocated according to some policy. BIB004 Based on an FP preemptive scheduling policy, Abdeddaim and Maxim BIB014 introduced a probabilistic MC model for MC-RTSs operating on a single processor, where each task has a given criticality and its WCET is a discrete random variable. The possible execution times of each task are grouped into WCET sets of different criticalities according to their probability of occurrence. They also proposed an analysis to calculate response-time distributions for tasks based on the probabilistic MC model; besides these response-time distributions, deadline-miss probabilities can also be extracted. Al-bayati et al. considered a scheduling model of MC systems 74 that attempts to supply a decreased level of service to LC tasks rather than simply discarding them. BIB003 They realized this goal by proposing a novel MC partitioning algorithm, called the dual-partitioned MC algorithm, which permits limited migration of LC tasks to improve partitioning efficiency while keeping the advantages of partitioned systems. The algorithm consists of two phases: an optimization phase and a partitioning phase. Huang et al. extended the EDF-VD 62 scheduling method to ensure a degraded service to the LC tasks when the HC tasks exceed their LC WCET. They gave approximations of the demand bounds of all tasks under different operating modes, defining the demand bound of a task in a given interval as the sum of the runtimes of all its task instances. They also presented an analytical approach to bound offline the service resetting time provided to the LC tasks, and reconfigured the system with a simple runtime mechanism to ensure maximally degraded services for LC tasks. Zhao and Al-Bayati 22 used the elastic MC task model for FP scheduling to supply a decreased level of QoS to LC tasks in High mode, and proposed a schedulability analysis method for elastic MC. As the optimization goal is to improve system service performance while ensuring schedulability, they proposed two optimization algorithms based on OPA for allocating task priorities, optionally adding delays to edges to produce a schedulable, semantics-preserving realization. However, degrading or terminating the services of less critical tasks may still lead to great service/performance loss.
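Several of the degraded-service schemes above start from EDF-VD. As a point of reference, the following sketch (our own; the task tuples and names are illustrative) implements the standard utilization-based EDF-VD test for implicit-deadline dual-criticality task sets, returning the virtual-deadline scaling factor x that HC tasks would use:

```python
def edf_vd_test(tasks):
    """tasks: iterable of (crit, u_lo, u_hi) with crit in {'LO', 'HI'};
    u_lo/u_hi are LO-/HI-mode utilizations (u_hi is ignored for LO tasks).
    Returns (schedulable, x), x being the virtual-deadline scaling factor."""
    u_lo_lo = sum(ul for c, ul, _ in tasks if c == 'LO')   # LO tasks @ LO
    u_hi_lo = sum(ul for c, ul, _ in tasks if c == 'HI')   # HI tasks @ LO
    u_hi_hi = sum(uh for c, _, uh in tasks if c == 'HI')   # HI tasks @ HI
    if u_lo_lo + u_hi_hi <= 1.0:
        return True, 1.0        # plain EDF with the real deadlines suffices
    if u_lo_lo >= 1.0:
        return False, None      # LO tasks alone overload the processor
    x = u_hi_lo / (1.0 - u_lo_lo)          # smallest usable scaling factor
    ok = x <= 1.0 and x * u_lo_lo + u_hi_hi <= 1.0
    return ok, (x if ok else None)         # HI task i gets deadline x * D_i
```

Degraded-service variants, such as the extension by Huang et al. or the imprecise model of Liu et al. BIB012, keep this structure but account for the reduced HI-mode budgets of LC tasks in the HI-mode utilization term rather than dropping those tasks entirely.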
To mitigate such service loss when timing emergencies arise from task overruns, Huang et al. used DVFS to speed up the processor so that all tasks can still meet their deadlines. They computed offline a minimum processor speedup that guarantees the schedulability of the system in the critical operation mode; to ensure schedulability in Low mode, the sum of the request bound functions of all tasks in any time interval must not exceed the offered processing resource. BIB006 To provide guaranteed services for LC tasks, Su et al. BIB011 studied an elastic MC task model for DC systems. Specifically, the model permits each LC task to designate a set of early-release (ER) points and its minimum service level. After an LC task finishes executing its current instance, it can release its next job instance at one of its ER points. To accommodate the ER executions of LC tasks at runtime, they proposed an ER mechanism that releases LC tasks more frequently and therefore increases their service levels at run-time, considering both aggressive and conservative methods for exploiting system slack. Liu et al. BIB012 studied real-time scheduling of MC systems where LC tasks are still guaranteed some service in High mode, with reduced execution budgets. They considered the imprecise MC model, which increases the schedulability of LC tasks in High mode by decreasing their execution time. While the system is in Low mode, the tasks are treated as real-time tasks scheduled with the EDF algorithm under virtual deadlines, and they proposed a utilization-based schedulability test for such systems under EDF-VD scheduling. BIB002 Gu and Easwaran BIB013 proposed a dynamic scheduling model of MC systems under which LC budgets for individual HC applications are decided at runtime rather than being fixed offline. To guarantee adequate budget for HC applications at all times, they used offline schedulability analysis for all the HC applications combined to decide a system-wide total LC budget allocation, taking into account system-wide goals such as service guarantees and schedulability for LC applications. The runtime policy then allocates this budget to each application according to its execution needs and delays the mode switch as much as possible. Hassan and Patel 27 introduced CArb, a configurable, criticality- and requirement-aware arbiter that governs accesses to shared memories and buses in multicore MC systems. CArb employs two-tier WRR BIB001 arbitration to manage these accesses; a generic sketch of WRR arbitration is given after this paragraph. CArb optimally assigns service to tasks at startup through loaded configurable schedules, and if the present set of memory requirements of all tasks is unschedulable, it prioritizes tasks of higher criticality. CArb does not impose any restrictions on mapping tasks to processors and supports an arbitrary number of criticality levels. Hassan et al. proposed a new method, PMC, for scheduling memory requests in MC systems. The method supports any number of criticality levels by allowing the MC system designer to specify memory requirements per task. They introduced a framework that builds optimal schedules to arbitrate requests to off-chip memory with a tight time-division-multiplexing scheduler, and they employed a mixed-page policy that dynamically switches between open- and close-page policies according to the size of the request.
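As a reference point for the arbitration schemes above, here is a minimal, generic weighted round-robin arbiter (our own sketch, not CArb's actual two-tier algorithm; the requestor names and weights are illustrative):

```python
from collections import deque

class WRRArbiter:
    """Weighted round-robin: the requestor at the head of the round may use
    up to its weight in consecutive slots, then the turn passes on."""
    def __init__(self, weights):             # weights: {requestor: slots/turn}
        self.weights = dict(weights)
        self.order = deque(self.weights)     # cyclic service order
        self.credit = self.weights[self.order[0]]

    def _advance(self):
        self.order.rotate(-1)                # next requestor's turn
        self.credit = self.weights[self.order[0]]

    def grant(self, pending):
        """pending: set of requestors with queued requests.
        Returns the requestor granted this slot, or None if all are idle."""
        for _ in range(len(self.order)):
            head = self.order[0]
            if self.credit > 0 and head in pending:
                self.credit -= 1
                if self.credit == 0:
                    self._advance()
                return head
            self._advance()   # idle requestor forfeits its turn (work-conserving)
        return None

# Example: requestor 'A' gets two slots per turn, 'B' and 'C' one each.
arb = WRRArbiter({'A': 2, 'B': 1, 'C': 1})
print([arb.grant({'A', 'B'}) for _ in range(4)])   # e.g. ['A', 'A', 'B', 'A']
```

CArb layers a second tier of this kind of arbitration across criticality levels on top of per-level schedules and reconfigures the schedules at runtime, which this sketch omits.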
Guo and Pellizzoni 28 designed a new DRAM controller that bundles and executes memory requests of hard real-time (HRT) applications in consecutive rounds according to their type, significantly reducing read/write switching delay. An open-page policy is applied to soft real-time (SRT) requestors to increase bandwidth by exploiting row-buffer locality, while a close-page policy is applied to HRT requestors to achieve predictability. They then proposed a request-reordering scheduling policy targeting MC systems; the request scheduler applies different arbitration policies to SRT and HRT banks. Kim et al. BIB015 addressed shared-hardware interference caused by the operating system (OS). They presented a new platform, which extends a framework named MC2 78 by adding support for a variety of hardware-management methods. BIB005 Specifically, they offered management of both the DRAM 80 memory banks and the LLC. To assign DRAM banks and LLC colors correctly to tasks, they built page pools for each bank and color combination. Resources shared among cores, such as main memories, memory buses, and caches, may also introduce much pessimism into scheduling and analysis. BIB009 Chisholm et al. 31 first considered trade-offs on multicore platforms for data sharing, where capacity loss is decreased by using both hardware-management methods BIB017 and MC configuration assumptions, and described a novel implementation of MC2 that extends the previous one by enabling tasks to communicate over shared memory and offers approaches for managing the DRAM memory banks and LLC. However, these approaches do not include in their scheduling or analysis the inter-task interference resulting from accessing resources shared among cores, such as main memories, memory buses, and caches. Brandenburg BIB008 proposed a solution based on resource servers and presented MC-IPC, a novel synchronous multiprocessor inter-process communication (IPC) protocol for invoking such servers. The MC-IPC protocol enforces strict logical and temporal isolation among mutually untrusted tasks and can therefore be employed to share resources among tasks of different criticalities. Modern distributed interconnects either cannot meet diverse requirements or have decoupled arbitration stages, leading to larger area, power, and worst-case latency. Gomony et al. BIB016 solved this problem by presenting the globally arbitrated memory tree (GAMT). GAMT employs a distributed architecture that can be configured with five arbitration policies (FBSP, CCSP, TDM, PBS 84 and RR 85 ). They introduced the new concept of mixed arbitration policies, where the arbitration policy is selected individually per client instead of jointly for all clients, to further improve arbitration flexibility without affecting cost. They then proposed a new mixed arbitration policy aimed at mixed-time-criticality systems, combining non-work-conserving TDM, to achieve temporal isolation, with work-conserving FBSP, to address diversity and decrease average latency. Finally, they performed a worst-case analysis of the mixed policy within the latency-rate (LR) 86 framework. Results showed that GAMT runs up to four times faster and reduces area and power consumption compared with centralized implementations.
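The LR framework used in the GAMT analysis characterizes an arbiter by a service latency Θ and an allocated rate ρ. As a rough illustration (a simplified sketch of our own that assumes fixed-length TDM slots and ignores refresh and DRAM command-level effects), the LR parameters of a plain non-work-conserving TDM arbiter can be read off its slot table:

```python
def tdm_lr_params(table, client, slot_cycles):
    """table: cyclic TDM slot table, e.g. ['A', 'B', 'A', 'C'];
    returns (theta, rho): a worst-case service latency in cycles and the
    allocated rate (fraction of slots) for `client`, where the latency is
    governed by the largest gap between the client's slots in the table."""
    F = len(table)
    slots = [i for i, c in enumerate(table) if c == client]
    if not slots:
        raise ValueError("client has no slot in the table")
    # Largest run of foreign slots a just-missed request may wait through.
    gaps = [(slots[(k + 1) % len(slots)] - s - 1) % F
            for k, s in enumerate(slots)]
    theta = (max(gaps) + 1) * slot_cycles   # worst gap plus resynchronization
    rho = len(slots) / F                    # guaranteed long-run service rate
    return theta, rho

# Example: client 'A' owns 2 of 4 slots of 16 cycles each -> (32, 0.5).
print(tdm_lr_params(['A', 'B', 'A', 'C'], 'A', slot_cycles=16))
```

A mixed policy such as GAMT's then keeps TDM's Θ/ρ guarantees for isolation-critical clients while letting FBSP redistribute unused slots to reduce the average latency of the others.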
|
A Review of Recent Techniques in Mixed-Criticality Systems <s> Energy-Efficient Techniques in MC Systems <s> Many safety-critical embedded systems are subject to certification requirements; some systems may be required to meet multiple sets of certification requirements, from different certification authorities. Certification requirements in such "mixed-criticality" systems give rise to interesting scheduling problems, that cannot be satisfactorily addressed using techniques from conventional scheduling theory. In this paper, we study a formal model for representing such mixed-criticality workloads. We demonstrate first the intractability of determining whether a system specified in this model can be scheduled to meet all its certification requirements, even for systems subject to two sets of certification requirements. Then we quantify, via the metric of processor speedup factor, the effectiveness of two techniques, reservation-based scheduling and priority-based scheduling, that are widely used in scheduling such mixed-criticality systems, showing that the latter of the two is superior to the former. We also show that the speedup factors are tight for these two techniques. <s> BIB001 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Energy-Efficient Techniques in MC Systems <s> Systems in many safety-critical application domains are subject to certification requirements. For any given system, however, it may be the case that only a subset of its functionality is safety-critical and hence subject to certification, the rest of the functionality is non safety critical and does not need to be certified, or is certified to a lower level of assurance. An algorithm called EDF-VD (for Earliest Deadline First with Virtual Deadlines) is described for the scheduling of such mixed-criticality task systems. Analyses of EDF-VD significantly superior to previously-known ones are presented, based on metrics such as processor speedup factor (EDF-VD is proved to be optimal with respect to this metric) and utilization bounds. <s> BIB002 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Energy-Efficient Techniques in MC Systems <s> Scheduling mixed-criticality systems is a challenging problem. Recently a number of new techniques are developed to schedule such systems, among which an approach called OCBP has shown interesting properties and drawn considerable attentions. OCBP explores the job-level priority order in a very flexible manner to drastically improve the system schedulability. However, the job priority exploration in OCBP involves nontrivial overheads. In this work, we propose a new algorithm LPA (Lazy Priority Adjustment) based on the OCBP approach, which improves the state-of-the-art OCBP-based scheduling algorithm PLRS in both schedulability and run-time efficiency. Firstly, while the time-complexity of PLRS' online priority management is quadratic, our new algorithm LPA has linear time-complexity at run-time. Secondly, we present an approach to calculate tighter upper bounds of the busy period size, and thereby can greatly reduce the run-time space requirement. Thirdly, the tighter busy period size bounds also improve the schedulability in terms of acceptance ratio. Experiments with synthetic workloads show improvements of LPA in all the above three aspects.
<s> BIB003 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Energy-Efficient Techniques in MC Systems <s> Voltage and Frequency Scaling (VFS) can effectively reduce energy consumption at system level. Most work in this field has focused on deadline-constrained applications with finite schedule lengths. However, in typical real-time streaming, processing is repeatedly activated by indefinitely long data streams and operations on successive data instances are overlapped to achieve a tight throughput. A particular application domain where such characteristics co-exist with stringent energy consumption constraints is baseband processing. Such behavior requires new VFS scheduling policies. This paper addresses throughput-constrained VFS problems for real-time streaming with discrete frequency levels on a heterogeneous multiprocessor. We propose scaling algorithms for two platform types: with dedicated VFS switches per processor, and with a single, global VFS switch. We formulate Local VFS using Mixed Integer Linear Programming (MILP). For the global variant, we propose a 3-stage heuristic incorporating MILP. Experiments on our modem benchmarks show that the discrete local VFS algorithm achieves energy savings close to its continuous counterpart, and local VFS is more effective than global VFS. As an example, for a WLAN receiver, running on a modem realized as a heterogeneous multiprocessor, the continuous local VFS algorithm reduces energy consumption by 29%, while the discrete local and global algorithms reduce energy by 28% and 16%, respectively, when compared to an on/off energy saving policy. <s> BIB004 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Energy-Efficient Techniques in MC Systems <s> As we are moving towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown a significant growth of sensor deployments over the past decade and has predicted a significant increment of the growth rate in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it. Collection, modelling, reasoning, and distribution of context in relation to sensor data plays critical role in this challenge. Context-aware computing has proven to be successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We present the necessary background by introducing the IoT paradigm and context-aware fundamentals at the beginning. Then we provide an in-depth analysis of context life cycle. We evaluate a subset of projects (50) which represent the majority of research and commercial solutions proposed in the field of context-aware computing conducted over the last decade (2001-2011) based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and IoT. Our goal is not only to analyse, compare and consolidate past research work but also to appreciate their findings and discuss their applicability towards the IoT.
<s> BIB005 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Energy-Efficient Techniques in MC Systems <s> In the past, we have silently accepted that energy consumption in real-time and embedded systems is subordinate to time. That is, we have tried to reduce energy always under the constraint that all deadlines must be met. In mixed-criticality systems however, schedulers respect that some tasks are more important than others and guarantee their completion even at the expense of others. We believe in these systems the role of the energy budget has changed and it is time to ask whether energy has surpassed timeliness. Investigating energy as a further dimension of mixed-criticality systems, we show in a realistic scenario that a subordinate handling of energy can lead to violations of the mixed-criticality guarantees that can only be avoided if energy becomes an equally important resource as time. <s> BIB006 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Energy-Efficient Techniques in MC Systems <s> Heterogeneous multicore platforms have become an attractive choice to deploy mixed criticality systems demanding diverse computational requirements. One of the major challenges is to efficiently harness the computational power of these multicore platforms while deploying mixed criticality applications. The problem is acerbated with an additional demand of energy efficiency. It is particularly relevant for the battery powered embedded systems. We propose a partitioning algorithm for unrelated heterogeneous multicore platforms to map mixed criticality applications that ensures the timeliness property and reduces the energy consumption. <s> BIB007 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Energy-Efficient Techniques in MC Systems <s> This review covers research on the topic of mixed criticality systems that has been published since Vestal’s 2007 paper. It covers the period up to and including December 2015. The review is organised into the following topics: introduction and motivation, models, single processor analysis (including job-based, hard and soft tasks, fixed priority and EDF scheduling, shared resources and static and synchronous scheduling), multiprocessor analysis, related topics, realistic models, formal treatments, and systems issues. An appendix lists funded projects in the area of mixed criticality. <s> BIB008 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Energy-Efficient Techniques in MC Systems <s> We consider a battery-less real-time embedded system equipped with an energy harvester. It scavenges energy from an environmental resource according to some stochastic patterns. The success of jobs is threatened in the case of energy shortage, which might be due to lack of harvested energy, losses originated from the super-capacitor self-discharge, as well as power consumption of executed tasks. The periodic real-time tasks of the system follow a dual-criticality model. In addition, each task has a minimum required success ratio that needs to be satisfied in steady state. We analytically evaluate the behavior of such a system in terms of its energy-related success ratio for a given schedule. Based on these results, we propose a scheduling algorithm that satisfies both temporal and success-ratio constraints of the jobs, while respecting task criticalities and corresponding system modes.
The accuracy of the analytical method as well as its dependence on the numerical computations and other model assumptions are extensively discussed through comparison with simulation results. Also, the efficacy of the proposed scheduling algorithm is studied through comparison to some existing non-mixed- and mixed-criticality scheduling algorithms. <s> BIB009 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Energy-Efficient Techniques in MC Systems <s> In this paper we study a general energy minimization problem for mixed-criticality systems on multi-cores, considering different system operation modes, and static & dynamic energy consumption. While making global scheduling decisions, trade-offs in energy consumption between different modes and also between static and dynamic energy consumption are required. Thus, such a problem is challenging. To this end, we first develop an optimal solution analytically for unicore and a corresponding low-complexity heuristic. Leveraging this, we further propose energy-aware mapping techniques and explore energy savings for multi-cores. To the best of our knowledge, we are the first to investigate mixed-criticality energy minimization in such a general setting. The effectiveness of our approaches in energy reduction is demonstrated through both extensive simulations and a realistic industrial application. <s> BIB010 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Energy-Efficient Techniques in MC Systems <s> Analyze resource demand of MC task set under reliability and deadline constraints. Develop a heuristic approach to solve the formulated problem. Evaluate the proposed approach through simulation under various scenarios. Achieve up to 10% more energy saving comparing with the existing approaches. This paper studies the energy minimization problem in mixed-criticality systems that have stringent reliability and deadline constraints. We first analyze the resource demand of a mixed-criticality task set that has both reliability and deadline requirements. Based on the analysis, we present a heuristic task scheduling algorithm that minimizes system's energy consumption and at the same time also guarantees system's reliability and deadline constraints. Extensive experiments are conducted to evaluate and validate the performance of the proposed algorithm. The empirical results show that the algorithm further improves energy saving by up to 10% compared with the approaches proposed in our earlier work. <s> BIB011 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Energy-Efficient Techniques in MC Systems <s> The Internet of Things (IoT) is gaining momentum and may positively influence the automation of energy-efficiency management of smart buildings. However, the development of IoT-enabled applications still takes tremendous efforts due to the lack of proper tools. Many software components have to be developed from scratch, thus requiring huge amounts of effort, as developers must have a deep understanding of the technologies, the new application domain, and the interplay with legacy systems. In this paper we introduce the IMPReSS Systems Development Platform (SDP) that aims at reducing the complexity of developing IoT-enabled applications for supporting sensor data collection in buildings, managing automated system changes according to the context, and real-time prioritization of devices for controlling energy usage.
The effectiveness of the SDP for the development of IoT-based context-aware and mixed-criticality applications was assessed by using it in four scenarios involving energy efficiency management in public buildings. Qualitative studies were undertaken with application developers in order to evaluate their perception of five key components of the SDP with regard to usability. The study revealed significant and encouraging results. Further, a quantitative performance analysis explored the scalability limits of the IMPReSS communication components. <s> BIB012
|
In the past two decades, energy management has become a key design and operational focus for many real-time embedded platforms. In fact, effective energy management is crucial to any battery-powered embedded system. In these systems, it is not always practical or feasible to recharge or replace the battery. Therefore, minimizing the energy consumption of embedded devices for a longer life expectancy has important practical and economic benefits. However, existing approaches that ensure the QoS of the system may increase energy consumption to the point where an MC system cannot function normally or even collapses, which is a serious threat for energy-limited MC systems. Therefore, studying the energy optimization problem of MC systems becomes inevitable. In this section, we introduce some research on optimizing the energy consumption of MC systems. Since a task may fail to complete due to a lack of available energy, Asyaban et al. BIB009 considered MC systems with energy harvesters. Such systems not only need to fulfill MC requirements within their deadlines, but also need to meet their energy-related constraints. To ensure these constraints, suitable task scheduling is required at each moment to manage the available energy of the super-capacitor, while taking into account the stochastic pattern of energy arrival, the power consumption of the super-capacitor, and the energy usage of the MC tasks. Therefore, they proposed a method based on the interaction of energy availability with MC scheduling and its analytical evaluation. Based on this analysis, they presented a scheduling algorithm which guarantees lower bounds on job success probabilities. The algorithm builds on the features and non-trivial insights of RTSs with success-ratio constraints in the presence of energy harvesting, and it can be further extended to deal with non-MC systems as well as MC systems with advanced loss models. To guarantee timely completion of all tasks while conserving as much battery power as possible under this constraint, Awan et al. BIB007 mapped the given tasks of different criticality levels onto a heterogeneous multicore platform using partitioned scheduling, such that the available computational power of the hardware platform is exploited efficiently and the energy cost is minimized while guaranteeing the timeliness of the system. To this end, they proposed an algorithm that optimizes the energy in Low mode by manipulating the ordering of the tasks while respecting timeliness. In this algorithm, they initially rank the task set by the computed density difference according to the energy cost in Low mode and perform the allocation with the ILLED algorithm, which ranks the tasks according to a metric named density difference while allocating them to their preferred cores. If the system is schedulable under this ranking, the minimum-energy allocation is achieved; otherwise, they rank the task set by the density difference computed according to the utilization in Low mode and perform the allocation. They gradually promote the HC tasks in the ranking to obtain a set of feasible allocations and choose the one with the lowest energy consumption. Volp et al. BIB006 re-evaluated the role of the energy budget in MC systems, noting that, among all FP scheduling methods, those following OCBP BIB001 produce the smallest acceleration bound.
BIB003 Accordingly, they turned OCBP into an MC scheduler for energy-constrained systems by rearranging the sequence in which jobs are presented to the OCBP priority-assignment procedure. They computed the maximum energy cost over all possible criticality-level transitions of the tasks when scheduled with the proposed method, and compared the results with the schedules obtained from an unsorted job list presented to OCBP. The results showed that, compared to the unsorted OCBP scheduler, the proposed algorithm requires a smaller budget while still ensuring schedulability. With the drastically increasing computational demands and battery-operated nature of MC systems, minimizing their energy consumption is becoming crucial as well. Huang et al. applied DVFS BIB004 to MC systems to minimize energy. The main idea of using DVFS to minimize energy cost is to stretch task execution times as much as possible by lowering the processor frequency, so that tasks still finish in time. They showed that DVFS can also support critical tasks when they overrun, by speeding up the processor to meet deadlines, which further enables the system to reserve smaller time budgets. Since overruns are rare, such a scheme can greatly decrease the expected energy cost of MC systems. To solve the resulting energy minimization problem, they developed a convex program that combines DVFS with the MC scheduling method EDF-VD, BIB002 addressing DC tasks with implicit deadlines; experimental results validated the proposed methods. Narayana et al. BIB010 studied a general energy minimization problem for MC systems on multicores, using a general setting in which both static and dynamic energy consumption are considered for all system operation modes. When making global scheduling decisions, trade-offs in energy cost among different modes, as well as between static and dynamic energy consumption, are required. To handle the energy costs of the different modes and minimize the total energy jointly, they divided the problem into two sub-problems: (1) first perform energy-aware task mapping; (2) then apply and extend a unicore DVFS 90 method to all cores. Since the formulation is convex, they used the KKT optimality conditions to solve it. For partitioned scheduling, they proposed a new approach to this energy minimization problem that separates tasks of different criticality levels onto different cores. The results showed that the proposed methods are effective and can save energy.
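As a rough illustration of how DVFS interacts with an MC schedulability test (a simplified sketch of our own, not the convex programs above: it assumes continuous frequencies, WCETs that scale as C/s, and HI mode running at full speed), one can binary-search the slowest LO-mode speed that keeps the EDF-VD test from the earlier sketch passing:

```python
def min_lo_speed(tasks, smin=0.4, smax=1.0, eps=1e-3):
    """tasks: (crit, u_lo, u_hi) tuples with utilizations at full speed.
    Finds the slowest LO-mode speed s (LO utilizations scale as u/s) such
    that the EDF-VD test still passes; HI-mode budgets stay at full speed."""
    def feasible(s):
        scaled = [(c, ul / s, uh) for (c, ul, uh) in tasks]
        ok, _ = edf_vd_test(scaled)   # from the earlier EDF-VD sketch
        return ok
    if not feasible(smax):
        return None                   # infeasible even at full speed
    lo, hi = smin, smax
    while hi - lo > eps:
        mid = (lo + hi) / 2           # feasibility is monotone in s
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    return hi
```

Expected energy then follows from a power model such as P(s) = P_static + c·s^3 integrated over the (mostly LO-mode) busy time; with non-negligible static power, the slowest feasible speed is not always the most energy-efficient operating point, which is precisely the trade-off the convex formulations above capture.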
Despite intensive studies on energy efficiency, such power-saving methods have rarely been applied in safety-critical areas. Lenz et al. studied the architectural requirements of a low-power MC system, such as fault tolerance and safety arguments, power and energy efficiency, and predictability and real-time features. Based on this, they introduced the European project SAFEPOWER, which addresses power management in MC systems under real-time response, predictability, and power- and energy-efficiency requirements. SAFEPOWER provides a comprehensive set of verification, simulation, and analysis tools for low-power MC systems, including software/hardware reference platforms that assist in observing, implementing, and testing such applications. In the SAFEPOWER project, the network-on-chip (NoC) system generator will be extended to support low-power and predictable NoC technologies. In addition, integrating design space exploration (DSE) tools into NoC-system builders will benefit system design, as designers can then focus on application design while the NoC-system builders generate full FPGA implementations and the DSE tools compute efficient implementations. Public buildings consume a significant percentage of energy in industrialized countries, so building management systems must address the problem of decreasing energy use. To meet this challenge, Kamienski et al. BIB012 developed an Internet of Things (IoT) systems development platform (SDP) within the IMPReSS project, which aims at managing the real-time prioritization of devices for governing energy usage. The IMPReSS SDP takes the energy-efficiency management of public buildings as its first goal, but it is also applicable to any system aimed at a smarter society. The platform contains a variety of components that ease the development of IoT applications, including IoT wireless communications management, analytics and management, BIB005 context-aware data storage, MC resource management, BIB008 ready-made middleware and software components, and tools for rapidly developing user interfaces. In addition, typical middleware components for handling the energy-efficiency management of public buildings are provided, hiding the implementation complexity from application developers. Results showed that the IMPReSS SDP is effective for developing IoT-based and MC applications. Unlike the work above, which does not take reliability constraints into consideration, Li et al. BIB011 addressed the problem of scheduling MC task sets so as to minimize the energy consumption of the system while satisfying both deadline and reliability constraints. They first established the theoretical foundation for deciding whether the reliability and deadline constraints of the tasks can be met under the EDF-VD 62 scheduling method for given task execution frequencies and virtual deadline assignments. Based on this theoretical analysis, they developed a heuristic-search-based frequency assignment algorithm that determines the lowest task execution frequencies and minimizes the energy consumption of the system while ensuring both task schedulability and the reliability constraints. Results showed that the proposed method can save more energy.
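To make the reliability dimension concrete, the following sketch (our own, using a transient-fault model that is common in the DVFS-reliability literature and not necessarily the exact model of Li et al. BIB011) estimates the probability that a task run at a scaled frequency completes correctly, given a number of full-speed recovery re-executions; all constants are illustrative, and the schedulability side (scaled WCETs plus recovery budgets under EDF-VD) must be checked separately.

```python
from math import exp

LAMBDA0 = 1e-6   # average transient-fault rate at full speed (illustrative)
D_EXP   = 2.0    # sensitivity of the fault rate to voltage scaling (illustrative)
F_MIN   = 0.4    # lowest available normalized frequency

def fault_rate(f):
    """Transient-fault rate rises as frequency/voltage drop (common model)."""
    return LAMBDA0 * 10 ** (D_EXP * (1.0 - f) / (1.0 - F_MIN))

def task_reliability(c, f, recoveries=0):
    """Probability that a task with full-speed WCET c, run at normalized
    frequency f, completes correctly, given `recoveries` re-executions
    at full speed after each detected fault."""
    r_scaled = exp(-fault_rate(f) * c / f)     # fault-free primary run
    r_full = exp(-fault_rate(1.0) * c)         # one full-speed re-execution
    fail = (1.0 - r_scaled) * (1.0 - r_full) ** recoveries
    return 1.0 - fail

def min_reliable_freq(c, target, recoveries=1, step=0.01):
    """Sweep upward for the lowest frequency meeting a reliability target,
    e.g. target = 1 - 1e-9 per job."""
    f = F_MIN
    while f <= 1.0:
        if task_reliability(c, f, recoveries) >= target:
            return f
        f += step
    return None
```

The sketch exposes the tension the heuristic of Li et al. navigates: lowering the frequency saves dynamic energy but raises the fault rate and stretches execution, so both the reliability check above and the deadline test must hold simultaneously.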
|
A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> One of the proposed techniques for meeting the severe reliability requirements inherent in certain future computer applications is described. This technique involves the use of triple-modular redundancy, which is essentially the use of the two-out-of-three voting concept at a low level. Effects of imperfect voting circuitry and of various interconnections of logical elements are assessed. A hypothetical triple-modular redundant computer is subjected to a Monte Carlo program on the IBM 704, which simulates component failures. Reliability is thereby determined and compared with reliability obtained by analytical calculations based on simplifying assumptions. <s> BIB001 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> Evolution of the N-version software approach to the tolerance of design faults is reviewed. Principal requirements for the implementation of N-version software are summarized and the DEDIX distributed supervisor and testbed for the execution of N-version software is described. Goals of current research are presented and some potential benefits of the N-version approach are identified. <s> BIB002 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> This article reviews the principal requirements of the IEC 61508 international standard relating to the specification and design of hardware and software in programmable electronic systems intended for use in safety-related applications. <s> BIB003 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> In fixed-priority scheduling, the priority of a job, once assigned, may not change. A new fixed-priority algorithm for scheduling systems of periodic tasks upon identical multiprocessors is proposed. This algorithm has an achievable utilization of (m+1)/2 upon m unit-capacity processors. It is proven that this algorithm is optimal from the perspective of achievable utilization in the sense that no fixed-priority algorithm for scheduling periodic task systems upon identical multiprocessors may have an achievable utilization greater than (m+1)/2. <s> BIB004 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> Methods such as rollback and modular redundancy are efficient to correct transient errors. In hard real-time systems, however, correction has a strong impact on response times, also on tasks that were not directly affected by errors. Due to deadline misses, these tasks eventually fail to provide correct service. In this paper we present a reliability analysis for periodic task sets and static priorities that includes realistic detection and roll-back scenarios and covers a hyperperiod instead of just a critical instant and therefore leads to much higher accuracy than previous approaches. The approach is compared with Monte-Carlo simulation to demonstrate the accuracy and with previous approaches covering critical instants to evaluate the improvements. <s> BIB005 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> This paper proposes a design methodology that enhances the classical system-level design flow for embedded systems to introduce reliability-awareness.
The mapping and scheduling step is extended to support the application of hardening techniques to fulfill the required fault management properties that the final system must exhibit; moreover, the methodology allows the designer to specify that only some parts of the systems need to be hardened against faults. The reference architecture is a complex distributed one, constituted by resources with different characteristics in terms of performance and available fault detection/tolerance mechanisms. The approach is evaluated and compared against the most recent and relevant work, with an in-depth analysis on a large set of benchmarks. <s> BIB006 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> In this paper, we propose a novel analytical method, called scheduling time bound analysis, to find a tight upper bound of the worst-case response time in a distributed real-time embedded system, considering execution time variations of tasks, jitter of input arrivals, and scheduling anomaly behavior in a multi-tasking system all together. By analyzing the graph topology and worst-case scheduling scenarios, we measure the conservative scheduling time bound of each task. The proposed method supports an arbitrary mixture of preemptive and non-preemptive processing elements. Its speed is comparable to compositional approaches while it gives a much tighter bound. The advantages of the proposed approach compared with related work were verified by experimental results with randomly generated task graphs and a real-life automotive application. <s> BIB007 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> The design and analysis of real-time scheduling algorithms for safety-critical systems is a challenging problem due to the temporal dependencies among different design constraints. This paper considers scheduling sporadic tasks with three interrelated design constraints: (i) meeting the hard deadlines of application tasks, (ii) providing fault tolerance by executing backups, and (iii) respecting the criticality of each task to facilitate system's certification. First, a new approach to model mixed-criticality systems from the perspective of fault tolerance is proposed. Second, a uniprocessor fixed-priority scheduling algorithm, called fault-tolerant mixed-criticality (FTMC) scheduling, is designed for the proposed model. The FTMC algorithm executes backups to recover from task errors caused by hardware or software faults. Third, a sufficient schedulability test is derived which, when satisfied for a (mixed-criticality) task set, guarantees that all deadlines are met even if backups are executed to recover from errors. Finally, evaluations illustrate the effectiveness of the proposed test. <s> BIB008 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> We consider in this paper fault-tolerant mixed-criticality scheduling, where heterogeneous safety guarantees must be provided to functionalities (tasks) of varying criticalities (importances). We model explicitly the safety requirements for tasks of different criticalities according to safety standards, assuming hardware transient faults. We further provide analysis techniques to bound the effects of task killing and service degradation on the system safety and schedulability.
Based on our model and analysis, we show that our problem can be converted to a conventional mixed-criticality scheduling problem. Thus, we broaden the scope of applicability of the conventional mixed-criticality scheduling techniques. Our proposed techniques are validated with a realistic flight management system application and extensive simulations. <s> BIB009 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> This paper presents a static mapping optimization technique for fault-tolerant mixed-criticality MPSoCs. The uncertainties imposed by system hardening and mixed criticality algorithms, such as dynamic task dropping, make the worst-case response time analysis difficult for such systems. We tackle this challenge and propose a worst-case analysis framework that considers both reliability and mixed-criticality concerns. On top of that, we build up a design space exploration engine that optimizes fault-tolerant mixed-criticality MPSoCs and provides worst-case guarantees. We study the mapping optimization considering judicious task dropping, that may impose a certain service degradation. Extensive experiments with real-life and synthetic benchmarks confirm the effectiveness of the proposed technique. <s> BIB010 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> This paper presents a novel mapping optimization technique for mixed critical multi-core systems with different reliability requirements. For this scope, we derived a quantitative reliability metric and presented a scheduling analysis that certifies given mixed-criticality constraints. Our framework is capable of investigating re-execution, passive replication, and modular redundancy with optimized voter placement, while typical hardening approaches consider only one or two of these techniques. The proposed technique complies with existing safety standards and is power-efficient, as demonstrated by our experiments. <s> BIB011 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> Multi- and many-core processors are becoming increasingly popular in embedded systems. Many of these processors now feature hardware virtualization capabilities, such as the ARM Cortex A15, and x86 processors with Intel VT-x or AMD-V support. Hardware virtualization offers opportunities to partition physical resources, including processor cores, memory and I/O devices amongst guest virtual machines. Mixed criticality systems and services can then co-exist on the same platform in separate virtual machines. However, traditional virtual machine systems are too expensive because of the costs of trapping into hypervisors to multiplex and manage machine physical resources on behalf of separate guests. For example, hypervisors are needed to schedule separate VMs on physical processor cores. In this paper, we discuss the design of the Quest-V separation kernel, which partitions services of different criticalities in separate virtual machines, or sandboxes. Each sandbox encapsulates a subset of machine physical resources that it manages without requiring intervention of a hypervisor. Moreover, a hypervisor is not needed for normal operation, except to bootstrap the system and establish communication channels between sandboxes. 
<s> BIB012 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> Mixed-criticality is a significant recent trend in the embedded system industry, where common computing platforms are utilized to host functionalities of varying criticality levels. To date, most scheduling techniques have focused on the timing aspect of this problem, while functional safety (i.e. fault-tolerance) is often neglected. This paper presents design methodologies to guarantee both safety and schedulability for real-time mixed-criticality systems on identical multicores. Assuming hardware/software transient errors, we model safety requirements on different criticality levels explicitly according to safety standards; based on this, we further propose fault-tolerant mixed-criticality scheduling techniques with task replication and re-execution to enhance system safety. To cope with runtime urgencies where critical tasks do not succeed after a certain number of trials, our techniques can perform system reconfigurations (task killing or service degradation) in those situations to reallocate system resources to the critical tasks. Due to explicit modeling of safety, we can quantify the impact of task killing and service degradation on system feasibility (safety and schedulability), enabling a rigorous design. To this end, we derive analysis techniques when reconfigurations are triggered either globally (synchronously) on all cores or locally (asynchronously) on each core. To our best knowledge, this is the first work on fault-tolerant mixed-criticality scheduling on multicores, matching theoretical insights with industrial safety standards. Our proposed techniques are validated with an industrial application and extensive simulations. <s> BIB013 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Fault-Tolerant Techniques in MC Systems <s> Integration of safety-critical tasks with different certification requirements onto a common hardware platform has become a growing tendency in the design of real-time and embedded systems. In the past decade, great efforts have been made to develop techniques for handling uncertainties in task worst-case execution time, quality-of-service, and schedulability of mixed-criticality systems. However, few works take fault-tolerance as a design requirement. In this paper, we address the scheduling of fault-tolerant mixed-criticality systems to ensure the safety of tasks at different levels of criticalities in the presence of transient faults. We adopt task re-execution as the fault-tolerant technique. Extensive simulations were performed to validate the effectiveness of our algorithm. Simulation results show that our algorithm results in up to 15.8% and 94.4% improvement in system reliability and schedule feasibility as compared to existing techniques, which contributes to a more safe system. <s> BIB014
|
The above research has made tremendous efforts to develop techniques for dealing with the uncertainty in task schedulability, QoS, and energy consumption of MC systems. Like other electronic systems, MC systems are susceptible to transient faults. These systems must mitigate the effects of faults and provide recovery mechanisms when faults occur. However, few works take fault tolerance, which represents the system's capacity for repair in the presence of faults, as a design requirement. Therefore, researching fault tolerance makes sense for MC systems in order to enhance their robustness. We present the state of this research in this section. A schedule that meets real-time constraints can affect, or be affected by, fault-tolerance constraints. Pathan BIB008 proposed a new method to model MC systems from the fault-tolerance perspective. For this new model, they designed a uniprocessor FP scheduling algorithm named fault-tolerant MC (FTMC) scheduling. To ensure both functional and temporal correctness, the FTMC algorithm executes backups to recover from task errors resulting from software or hardware faults. A task is considered erroneous when a fault adversely influences the task's functionality. Whenever a task's job is ready for execution, the FTMC algorithm first schedules the primary task. If the primary is found to be erroneous, the backups are dispatched one by one until the output is correct. A backup has the same priority as its primary task. Results showed that the proposed test is effective.
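The FTMC dispatch rule just described can be captured in a few lines. This is a minimal sketch assuming an application-level acceptance test (e.g., a range or sanity check) detects erroneous outputs; that detection mechanism is our illustrative assumption, not part of the cited model.

```python
def run_with_backups(primary, backups, is_correct):
    """FTMC-style recovery: run the primary job; on error, dispatch
    backups one by one (at the same priority) until one is correct."""
    out = primary()
    if is_correct(out):
        return out
    for backup in backups:          # released only if still needed
        out = backup()
        if is_correct(out):
            return out
    raise RuntimeError("all copies erroneous: signal a criticality-mode change")
```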
Zhou et al. BIB014 addressed fault-tolerant MC task scheduling to guarantee the safety of tasks at different criticality levels in the presence of transient faults on a uniprocessor platform. They proposed a fault-tolerant scheduling method called Slice-EDF-VD, which is based on EDF-VD. BIB004 The Slice-EDF-VD algorithm uses re-execution to increase reliability, and adopts period transformation and the utilization of idle time to improve schedule feasibility. With these two efforts, system safety can be upgraded. Results showed that Slice-EDF-VD can improve system schedule feasibility and reliability compared to existing approaches, helping to create safer systems. Approaches such as modular redundancy and rollback can effectively correct transient errors. However, in hard RTSs, the correction has a strong effect on the response times of tasks, including tasks not directly influenced by the error. The goal of Axer et al. BIB005 is therefore to maximize reliability while minimizing over-provisioning. To reach this goal, they proposed a new algorithm. Unlike other approaches, they considered a representative hyperperiod, rather than just the critical instant, as this yields tighter reliability bounds. The first step is to list all feasible schedules for each job of a task over the entire hyperperiod. The second step is to turn these schedules into probabilities, from which the reliability function can be derived. The proposed algorithm showed very good accuracy and significantly reduced analysis time for realistic parameters. Huang et al. BIB009 studied the fault-tolerant MC scheduling problem under hardware transient faults. Based on established safety standards, they explicitly model the safety needs of the different criticality levels. They analyzed the effects of service degradation, task killing, and task re-execution on system schedulability and safety. Then, they proposed a scheduling algorithm for this problem in which all tasks of equal importance share the same re-execution profile, and all HC tasks share the same adaptation profile. With these restrictions, the problem can be converted into a conventional MC scheduling problem, so a large class of existing MC scheduling methods can be reused. Extensive simulations showed that the proposed techniques are valid.
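The conversion idea behind Huang et al.'s approach can be illustrated by inflating each task's high-mode demand according to its re-execution profile, after which an ordinary MC schedulability test applies. The per-level re-execution counts below are illustrative placeholders, not values from the paper.

```python
REEXEC_PROFILE = {"LO": 0, "HI": 2}     # assumed re-executions per criticality level

def inflate_for_faults(tasks):
    """tasks: dicts with 'c_lo', 'c_hi', 'period', 'level'. A task that may
    be re-executed k times contributes (1 + k) * C to its HI-mode demand."""
    return [{**t, "c_hi": (1 + REEXEC_PROFILE[t["level"]]) * t["c_hi"]}
            for t in tasks]
```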
Zeng et al. BIB013 studied fault-tolerant MC scheduling under hardware/software transient faults on multicores. They proposed technologies that make it possible to reconfigure systems at runtime so that MC tasks can still be guaranteed under an emergency. They explicitly modeled the safety needs of the different criticality levels based on safety standards. BIB003 On this basis, they further presented fault-tolerant MC scheduling methods with task re-execution and replication, and formulated the problem with adaptation and redundancy profiles to obtain a feasible design. To cope with unsuccessful critical tasks under runtime emergencies, these methods can perform system reconfiguration to redistribute system resources to the critical tasks. Because safety is modeled explicitly, the effect of service degradation and task killing on system feasibility can be quantified, allowing a rigorous design. To this end, they distinguished between global and local reconfiguration in the proposed techniques, enabling trade-offs between system feasibility and analysis complexity. Bolchini and Miele BIB006 proposed a methodology and framework to implement embedded systems with MC fault-management needs, based on an enhanced system-level synthesis. The reference architecture is a distributed one, consisting of resources with different performance characteristics and available fault-tolerance mechanisms. The proposed method extends the traditional hardware/software co-design paradigm to support the specification of reliability-related needs, the application of fault-tolerance methods, and the exploitation of architectural characteristics, to obtain a final system able to control transient failures. In this method, the scheduling and mapping step is extended to support the application of hardening methods that achieve the required fault-management attributes the final system has to exhibit. Moreover, the method enables the designer to designate that only some sections of the system need to be hardened against faults. Kang et al. BIB010 proposed an optimization method for fault-tolerant MC MPSoCs. Besides the traditional hardening methods of replication and re-execution, they also proposed an MC scheduling scheme with task dropping that guarantees HC applications deliver their service and that their WCRTs are ensured. The uncertainties caused by system hardening and MC algorithms, such as dynamic task dropping, make it difficult to analyze the WCRTs of such systems. To tackle this challenge, they proposed a worst-case analysis framework that considers both reliability and MC concerns. Moreover, they constructed a design space exploration engine to optimize fault-tolerant MC MPSoCs and offer worst-case guarantees. Experiments confirmed that the proposed technique is effective. Kang et al. BIB011 also proposed a reliability-aware mapping optimization method for multicore MC systems that is consistent with existing standards. This technique is a universal framework that considers judicious voter placement, active/passive replication and its combinations, as well as re-execution. To allow a quantitative and comparative evaluation of candidate mappings, they proposed a probability-based reliability metric. To cover the hardening methods more comprehensively, they adopt selective voter placement and passive replication with respect to the given limits. The fault-management method selected for each task to improve reliability may lead to uncertain behaviors; to solve this problem, they used an analysis technique BIB007 that can be replaced by any existing method that supports variable execution times. Experimental results proved that the proposed approach is effective. West et al. BIB012 introduced Quest-V, which uses hardware virtualization to separate system components into sandboxes. Sandboxes administer their own subsets of machine resources, performing scheduling, I/O management, and memory management without the participation of a hypervisor. Inter-sandbox communication is achieved by shared-memory channels mapped through extended page table (EPT) entries. Only trusted monitors can change entries in these EPTs, which prevents guests from accessing memory areas in a remote sandbox. If a fault or security breach does happen in a monitor, it is possible to use N-versioning BIB002 or triple modular redundancy BIB001 methods to keep the system operational. Therefore, the design of Quest-V allows lower-criticality services to be separated from higher-criticality ones, and essential services to be replicated across different sandboxes to guarantee availability in the event of faults. Compared with traditional hypervisors, Quest-V monitors occupy a small memory space; they are only used to divide resources at boot time, assist fault recovery, and build inter-sandbox communication channels. Al-bayati et al. considered the issue of scheduling and designing certified fault-tolerant MC systems. To deal effectively with faults and task overruns, they proposed a four-mode model (a normal mode plus separate modes for transient faults, execution-time overruns, and their combination) that handles overruns and faults in distinct modes when either occurs. This model, integrated with the optional continuation of LC tasks, increases the QoS of these tasks while offering the same assurance to HC tasks. Experimental results showed that the proposed model can improve system reliability and schedule feasibility while achieving QoS improvements for LC tasks.
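As a reference point for the replication mechanisms mentioned above, the classic two-out-of-three vote of triple modular redundancy BIB001 is trivially small; this sketch assumes the redundant outputs are comparable by exact equality.

```python
def tmr_vote(a, b, c):
    """Return the majority of three redundant outputs; a triple
    disagreement is an uncorrectable fault and is reported as such."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: uncorrectable triple disagreement")
```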
|
A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> The paper presents exact schedulability analyses for real-time systems scheduled at runtime with a static priority pre-emptive dispatcher. The tasks to be scheduled are allowed to experience internal blocking (from other tasks with which they share resources) and (with certain restrictions) to release jitter, such as waiting for a message to arrive. The analysis presented is more general than that previously published and subsumes, for example, techniques based on the Rate Monotonic approach. In addition to presenting the relevant theory, an existing avionics case study is described and analysed. The predictions that follow from this analysis are seen to be in close agreement with the behaviour exhibited during simulation studies. <s> BIB001 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> Time-triggered (TT) Ethernet is a new real-time communication protocol that is fully compatible with Ethernet and provides in addition to the standard Ethernet service a deterministic real-time communication service for distributed real-time systems. This paper elaborates on basic concepts in real-time communication, elicits real-time communication requirements and discusses some innate conflicts that must be considered in any real-time protocol design. In the second part, the rationale and the principles of operation of TT-Ethernet are presented and common properties of all members of the TT-Ethernet protocol family are discussed. <s> BIB002 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> Distributed real-time applications implement distributed applications with timeliness requirements. Such systems require a deterministic communication medium with bounded communication delays. Ethernet is a widely used commodity network with many appliances and network components and represents a natural fit for real-time application; unfortunately, standard Ethernet provides no bounded communication delays. Conditional state-based communication schedules provide expressive means for specifying and executing with choice points, while staying verifiable. Such schedules implement an arbitration scheme and provide the developer with means to fit the arbitration scheme to the application demands instead of requiring the developer to tweak the application to fit a predefined scheme. An evaluation of this approach as software prototypes showed that jitter and execution overhead may diminish the gains. This work successfully addresses this problem with a synthesized soft processor. We present results around the development of the soft processor, the design choices, and the measurements on throughput and robustness. <s> BIB003 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> This paper presents a framework for schedule integration of time-triggered systems tailored to the automotive domain. In-vehicle networks might be very large and complex and hence obtaining a schedule for a fully synchronous system becomes a challenging task since all bus and processor constraints as well as end-to-end-timing constraints have to be taken concurrently into account. Existing optimization approaches apply the schedule optimization to the entire network, limiting their application due to scalability issues.
In contrast, the presented framework obtains the schedule for the entire network, using a two-step approach where for each cluster a local schedule is obtained first and the local schedules are then merged to the global schedule. This approach is also in accordance with the design process in the automotive industry where different subsystems are developed independently to reduce the design complexity and are finally combined in the integration stage. In this paper, a generic framework for schedule integration of time-triggered systems is presented. Further, we show how this framework is implemented for a FlexRay network using an Integer Linear Programming (ILP) approach which might also be easily adapted to other protocols. A realistic case study and a scalability analysis give evidence of the applicability and efficiency of our approach. <s> BIB004 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> A common trend in real-time safety-critical embedded systems is to integrate multiple applications on a single platform. Such systems are known as mixed-criticality (MC) systems as the applications are usually characterized by different criticality levels (CLs). Nowadays, multicore platforms are promoted due to cost and performance benefits. However, certification of multicore MC systems is challenging because concurrently executed applications with different CLs may block each other when accessing shared platform resources. Most of the existing research on multicore MC scheduling ignores the effects of resource sharing on the execution times of applications. This paper proposes a MC scheduling strategy which explicitly accounts for these effects. Applications are executed by a flexible time-triggered criticality-monotonic scheduling scheme. Schedulers on different cores are dynamically synchronized such that only a statically known subset of applications of the same CL can interfere on shared resources, e. g., memories, buses. Therefore, the timing effects of resource sharing are bounded and we quantify them at design time. We combine this scheduling strategy with a mapping optimization technique for achieving better resource utilization. The efficiency of the approach is demonstrated through extensive simulations as well as comparisons with traditional temporal partitioning and state-of-the-art scheduling algorithms. It is also validated on a real-world avionics system. <s> BIB005 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> Ethernet is widely recognized as an attractive networking technology for modern distributed real-time systems. However, standard Ethernet components require specific modifications and hardware support to provide strict latency guarantees necessary for safety-critical applications. Although this is a well-stated fact, the design of hardware components for real-time communication remains mostly unexplored. This becomes evident from the few solutions reporting prototypes and experimental validation, which hinders the consolidation of Ethernet in real-world distributed applications. This paper presents Atacama, the first open-source framework based on reconfigurable hardware for mixed-criticality communication in multi-segmented Ethernet networks. Atacama uses specialized modules for time-triggered communication of real-time data, which seamlessly integrate with a standard infrastructure using regular best-effort traffic.
Atacama enables low and highly predictable communication latency on multi-segmented 1Gbps networks, easy optimization of devices for specific application scenarios, and rapid prototyping of new protocol characteristics. Researchers can use the open-source design to verify our results and build upon the framework, which aims to accelerate the development, validation, and adoption of Ethernet-based solutions in real-time applications. <s> BIB006 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> In this paper, we deal with the schedule synthesis problem of mixed-criticality cyber-physical systems (MCCPS), which are composed of hard real-time tasks and feedback control tasks. The real-time tasks are associated with deadlines that must always be satisfied whereas feedback control tasks are characterized by their Quality of Control (QoC) which needs to be optimized. A straight-forward approach to the above scheduling problem is to translate the QoC requirements into deadline constraints and then, to apply traditional real-time scheduling techniques such as Deadline Monotonic (DM). In this work, we show that such scheduling leads to overly conservative results and hence is not efficient in the above context. On the other hand, methods from the mixed-criticality systems (MC) literature mainly focus on tasks with different criticality levels and certification issues. However, in MCCPS, the tasks may not be fully characterized by only criticality levels, but they may further be classified according to their criticality types, e.g., deadline-critical real-time tasks and QoC-critical feedback control tasks. On the contrary to traditional deadline-driven scheduling, scheduling MCCPS requires to integrate both, deadline-driven and QoC-driven techniques which gives rise to a challenging scheduling problem. In this paper, we present a multi-layered schedule synthesis scheme for MCCPS that aims to jointly schedule deadline-critical, and QoC-critical tasks at different scheduling layers. Our scheduling framework (i) integrates a number of QoC-oriented metrics to capture the QoC requirements in the schedule synthesis (ii) uses arrival curves from real-time calculus which allow a general characterization of task triggering patterns compared to simple task models such as periodic or sporadic, and (iii) has pseudo-polynomial complexity. Finally, we show the applicability of our scheduling scheme by a number of experiments. <s> BIB007 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> In the development of real-time embedded applications, especially those on systems-on-chip, an efficient use of RAM memory is as important as the effective scheduling of the computation resources. The protection of communication and state variables accessed by concurrent tasks must provide real-time schedulability guarantees while using the least amount of memory. Several schemes, including preemption thresholds, have been developed to improve schedulability and save stack space by selectively disabling preemption. However, the design synthesis problem is still open. In this article, we target the assignment of the scheduling parameters to minimize memory usage for systems of practical interest, including designs compliant with automotive standards. We propose algorithms either proven optimal or shown to improve on randomized optimization methods like simulated annealing.
<s> BIB008 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> The embedded system industry is facing an increasing pressure for migrating from single-core to multi- and many-core platforms for size, performance and cost purposes. Real-time embedded system design follows this trend by integrating multiple applications with different safety criticality levels into a common platform. Scheduling mixed-criticality applications on today's multi/many-core platforms and providing safe worst-case response time bounds for the real-time applications is challenging given the shared platform resources. For instance, sharing of memory buses introduces delays due to contention, which are non-negligible. Bounding these delays is not trivial, as one needs to model all possible interference scenarios. In this work, we introduce a combined analysis of computing, memory and communication scheduling in a mixed-criticality setting. In particular, we propose: (1) a mixed-criticality scheduling policy for cluster-based many-core systems with two shared resource classes, i.e., a shared multi-bank memory within each cluster, and a network-on-chip for inter-cluster communication and access to external memories; (2) a response time analysis for the proposed scheduling policy, which takes into account the interferences from the two classes of shared resources; and (3) a design exploration framework and algorithms for optimizing the resource utilizations under mixed-criticality timing constraints. The considered cluster-based architecture model describes closely state-of-the-art many-core platforms, such as the Kalray MPPA®-256. The applicability of the approach is demonstrated with a real-world avionics application. Also, the scheduling policy is compared against state-of-the-art scheduling policies based on extensive simulations with synthetic task sets. <s> BIB009 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> We propose the integration of a network-on-chip-based MPSoC in mixed-criticality systems, i.e. systems running applications with different criticality levels in terms of completing their execution within predefined time limits. An MPSoC contains tiles that can be either CPUs or memories, and we connect them with an instance of a customizable point-to-point interconnect from STMicroelectronics called STNoC. We explore whether the on-chip network capacity is sufficient for meeting the deadlines of external high critical workloads, and at the same time for serving less critical workloads that are generated internally. To evaluate the on-chip network we vary its configuration parameters, such as the link-width, and the Quality-of-Service (QoS), in specific the number (1 or 2) and type (high or low priority) of virtual channels (VCs), and the relative priority of packets from different flows sharing the same VC.
In this paper, we first introduce an exact approach to generate an implementation with a valid routing and a valid schedule in a single step by solving a 0-1 ILP. Second, we show that the 0-1 ILP formulation can be utilized in a design space exploration to optimize the routing and schedule with respect to, e.g., interference imposed on non-scheduled traffic or the number of configured port slots. We demonstrate the optimization potential of the proposed approach using a mixed-criticality system from the automotive domain. <s> BIB011 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> Safety-critical embedded systems are often subject to multiple certification requirements from different certification authorities, giving rise to the concept of Mixed-Criticality Systems. Preemption Threshold Scheduling (PTS) is an effective technique for reducing stack memory usage by selectively disabling preemption between pairs of tasks. In this paper, we consider the AUTOSAR standard in automotive embedded software development, where each task consists of multiple runnables that are scheduled with static priority and preemption threshold. We address the problems of design synthesis from an AUTOSAR model to minimize stack usage for mixed-criticality systems with preemption threshold scheduling, and present algorithms for schedulability analysis and system stack usage minimization. Experimental results demonstrate that our approach can significantly reduce the system stack usage. <s> BIB012 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> The multicore revolution is having limited impact in safety-critical application domains. A key reason is the "one-out-of-m" problem: when validating real-time constraints on an m-core platform, excessive analysis pessimism can effectively negate the processing capacity of the additional m-1 cores so that only "one core's worth" of capacity is utilized even though m cores are available. Two approaches have been investigated previously to address this problem: mixed-criticality allocation techniques, which provision less-critical software components less pessimistically, and hardware-management techniques, which make the underlying platform itself more predictable. A better way forward may be to combine both approaches, but to show this, fundamentally new criticality-cognizant hardware-management tradeoffs must be explored. Such tradeoffs are investigated herein in the context of a new variant of a mixed-criticality framework, called MC^2, that supports configurable criticality-based hardware management. This framework allows specific DRAM memory banks and areas of the last-level cache (LLC) to be allocated to certain groups of tasks.
A linear-programming-based optimization framework is presented for sizing such LLC areas, subject to conditions for ensuring MC^2 schedulability. The effectiveness of the overall framework in resolving hardware-management and scheduling tradeoffs is investigated in the context of a large-scale overhead-aware schedulability study. This study was guided by extensive trace data obtained by executing benchmark programs on the new variant of MC^2 presented herein. This study shows that mixed-criticality allocation and hardware-management techniques can be much more effective when applied together instead of alone. <s> BIB013 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> Partitioning is a widespread technique that enables the execution of mixed-criticality applications in the same hardware platform. New challenges for the next generation of partitioned systems include the use of multiprocessor architectures and distribution standards in order to open up this technique to a heterogeneous set of emerging scenarios (e.g., cyber-physical systems). This work describes a system architecture that enables the use of data-centric distribution middleware in partitioned real-time embedded systems based on a hypervisor for multi-core, and it focuses on the analysis of the available architectural configurations. We also present an application-case study to evaluate and identify the possible trade-offs among the different configurations. <s> BIB014 </s> A Review of Recent Techniques in Mixed-Criticality Systems <s> Specific Applications in MC Systems <s> Both response time and reliability are important functional safety properties that must be simultaneously satisfied, following the automotive functional safety standard ISO 26262. Safety verification pertains to checking if an application meets a safe set of design specifications and complies with regulations. Introducing verification in the early design phase not only complies with the latest automotive functional safety standard but also avoids unnecessary design effort or reduces the design burden of the late design optimization phase. This study presents a fast functional safety verification (FFSV) method for a distributed automotive application during the early design phase. The first method FFSV1 finds the solution with the minimum response time under the reliability requirement, and the second method FFSV2 finds the solution with the maximum reliability under the response time requirement. We combine FFSV1 and FFSV2 to create union FFSV (UFFSV), which can obtain acceptance ratios higher than those of current methods. Experiments on real-life and synthetic distributed automotive applications show that UFFSV can obtain higher acceptance ratios than their existing counterparts.
|
With the rapid development of embedded systems, MC system applications are more and more widely used. Typical applications include cyber-physical systems (CPSs), automotive systems, BIB015 grids, and so on. We introduce some of the main applications of MC systems below. Petrakis et al. BIB010 introduced the integration of a NoC-based MPSoC in MC systems. They examined whether the NoC capacity is sufficient to meet the deadlines of external HC workloads while also serving the less critical, internally generated workloads. To evaluate the NoC, they varied its configuration parameters, such as the QoS, the link width, and the relative priority of packets from different streams sharing the same virtual channel. Another traffic scenario is to assign priorities to different traffic sharing the same virtual channel. This is realized by QoS fair bandwidth distribution, a low-cost arbitration method that distributes network-interface target bandwidth among different initiators during peak-demand times. The results showed that this method can balance cost and performance. A NoC must offer performance segregation for safety-critical traffic while maintaining low latency for the remaining traffic. Tobuschat and Ernst proposed a run-time configurable NoC design that enables latency assurances for safety-critical traffic with a reduced adverse effect on the performance of best-effort traffic. They prioritized guaranteed-delay traffic in the NoC routers and switched priorities only when needed, based on the actual congestion. For this, they derived the slack time of each flow through timing analysis and stored the slack time in the packet header. The slack time is then evaluated and updated in each router to manage the leftover slack. This allows the latency slack of critical applications to be exploited, while offering sufficient independence between different criticality levels with regard to timing properties. Experimental results showed that the approach offers sufficient isolation while decreasing the adverse influences on safety-critical applications.
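The slack-carrying arbitration of Tobuschat and Ernst can be paraphrased as the per-hop rule below; the header field names and the boost threshold are illustrative assumptions, not details from the paper.

```python
SLACK_BOOST_THRESHOLD = 50   # cycles; illustrative value

def update_packet_priority(pkt, cycles_elapsed):
    """Each hop deducts the time already consumed from the slack stored in
    the packet header; once little slack remains, the packet is promoted so
    its deadline can still be met."""
    pkt["slack"] -= cycles_elapsed
    pkt["high_priority"] = pkt["slack"] < SLACK_BOOST_THRESHOLD
    return pkt
```

While a packet still holds ample slack, best-effort traffic may overtake it, which is exactly how the scheme trades critical-traffic slack for average-case latency.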
Effective scheduling strategies can take full advantage of the electronic control units in automotive CPS for high performance. However, automotive CPS must address the common challenges of parallelism, dynamism, heterogeneity, criticality, and security. To tackle these challenges, Xie et al. 54 first presented a fairness-based dynamic scheduling algorithm called FDS MIMF to minimize schedule lengths from a high-performance angle. FDS MIMF is able to respond autonomously to the common challenges of parallelism, dynamism, and heterogeneity in automotive CPS. To further respond to these challenges, they proposed an adaptive dynamic scheduling algorithm called ADS MIMF to achieve low deadline miss rates of the functions from a timing-constraint angle while keeping the overall make-span of the automotive CPS acceptable from a high-performance point of view. ADS MIMF works by decreasing and increasing the criticality level of the automotive CPS in order to adjust the execution of different functions at different criticality levels without increasing the time complexity. Experiments showed that FDS MIMF obtains a shorter overall make-span, while ADS MIMF can decrease the deadline miss rates of HC functions while maintaining the high performance of automotive CPS. Smirnov et al. BIB011 considered an MCS from the automotive domain and introduced Pseudo-Boolean constraints which guarantee that scheduled messages have a valid global schedule and a valid routing in the network. These constraints can be used to produce, in a single step, an effective MC communication network that carries scheduled traffic. They then extended these constraints to generate a valid schedule that, in a single step, allocates the required global offset to all scheduled messages. They used an idea similar to that of BIB004, where the schedule is produced by developing SMT constraints for every possible pair of overlapping transmission slots, but they reformulated these as Pseudo-Boolean constraints. Experimental results showed that the approach performs better on hard scheduling problems and is suitable for the multiobjective optimization of MC systems. Zhao et al. BIB012 addressed the design synthesis problem of implementing the AUTOSAR model in automotive MC systems, where each task is composed of multiple runnables with preemption thresholds and static priorities. For the priority assignment and the mapping from runnables to tasks, they used the PA-DMMPT algorithm BIB008 to set task priorities. By allocating higher priorities to higher-criticality tasks, they modified PA-DMMPT to break ties when applying Deadline Monotonic. They then presented a heuristic algorithm named HeuPADMMPT, which extends PA-DMMPT by restricting that only runnables of the same criticality can be mapped to the same task. Results showed that HeuPADMMPT can significantly decrease system stack usage. Recently, cluster-based scheduling has become increasingly important for deploying real-time MC systems on multicore processor platforms. In these approaches, the cores are divided into clusters, and a global scheduler schedules the tasks partitioned among the different clusters. Ali and Kim BIB013 introduced a novel cluster-based task-distribution method for real-time task sets on multicore processors in MC systems. For task allocation, smaller cluster sizes are utilized for MC tasks under LC mode, while relatively larger cluster sizes are utilized for HC tasks under HI mode. The MC task set is assigned to clusters employing a worst-fit heuristic, and the tasks of each cluster are assigned to its sub-clusters using the same worst-fit heuristic. For schedulability analysis, they used FP RTA BIB001 following Audsley's approach for each cluster of the MC task set and for each sub-cluster. Results showed that the proportion of schedulable task sets under cluster scheduling improves significantly compared with global and partitioned MC scheduling methods.
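Two of the ingredients named in this paragraph — a worst-fit partitioning heuristic and Audsley-style fixed-priority response-time analysis (RTA) BIB001 — are standard and can be sketched directly; the implicit-deadline task model and the priority ordering below are simplifying assumptions.

```python
import math

def rta_feasible(tasks):
    """tasks: list of (C, T), highest priority first, deadlines = periods.
    Classic fixed-point iteration: R = C_i + sum(ceil(R / T_j) * C_j)."""
    for i, (c_i, t_i) in enumerate(tasks):
        r, prev = c_i, 0
        while r != prev:
            prev = r
            r = c_i + sum(math.ceil(prev / t_j) * c_j for c_j, t_j in tasks[:i])
            if r > t_i:          # response time exceeds the deadline
                return False
    return True

def worst_fit(tasks, n_clusters):
    """Assign each task to the currently least-utilized cluster."""
    clusters = [[] for _ in range(n_clusters)]
    util = [0.0] * n_clusters
    for c, t in sorted(tasks, key=lambda x: x[0] / x[1], reverse=True):
        k = util.index(min(util))
        clusters[k].append((c, t))
        util[k] += c / t
    return clusters
```

In the cluster-based scheme above, `worst_fit` would be applied twice — once across clusters and once across each cluster's sub-clusters — with `rta_feasible` used as the per-cluster acceptance test.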
Giannopoulou et al. BIB009 extended the state of the art for MC systems by presenting a unified analysis approach for computing, memory, and communication scheduling. To model such communication flows and architectures through the NoC, they concretized and extended the system model introduced in earlier work. BIB005 In addition, they introduced an inter-cluster communication protocol with formally proven timing features. To schedule MC applications on a cluster-based architecture, they proposed an MC scheduling method called FTTS that implements global timing isolation between applications of different criticality to provide provable properties. This is realized by allowing only applications of equal importance to be executed in parallel and, therefore, to interfere on the communication infrastructure and shared memory. The results showed that FTTS performs better in terms of schedulability. Standard Ethernet cannot supply the hard latency assurances needed by distributed safety-critical applications such as industrial control, automobiles, and avionics. BIB002 Carvajal et al. BIB006 introduced a real-time Ethernet framework named Atacama for multi-segmented MC traffic networks. Atacama employs a time-triggered method, BIB002 which integrates an ASIP BIB003 to coordinate the switching of time-sensitive data for each station performing real-time tasks. Moreover, Atacama adopts a custom forwarding path that supplies predictable, low propagation delay across multiple switches. Using experimental data from the prototype implementation, they derived a communication-delay model for real-time frames. The model offers a precise upper limit on the end-to-end delay among distributed real-time tasks, limited only by physical characteristics such as clock-domain drift and uncertainty in the physical links. Experiments showed that the latency assurances are robust to best-effort traffic. Atacama enables researchers to validate the devices, build upon the framework, and test the resources in their own applications. Compared to common deadline-driven scheduling methods, scheduling MCCPS requires a combination of QoC-driven and timing-driven methods, which leads to a challenging scheduling problem. To tackle this problem, Schneider et al. BIB007 proposed a multi-layered scheduling algorithm for MC CPS composed of feedback control tasks and HRT tasks. The proposed algorithm integrates timing analysis methods based on real-time calculus (RTC) into a multi-layered scheduling (MLS) framework. Real-time tasks are scheduled in the top layer based on a timing-driven scheduling scheme, and control tasks are allocated priorities in the second layer subject to QoC optimization. The proposed approach significantly increases the overall QoC while ensuring schedulability. New challenges for partitioned systems include the use of distribution standards and multiprocessor architectures to open this technique up to CPSs. To address these challenges, Pérez et al. BIB014 studied three policies: (1) the use of a multiprocessor approach in which one core is specially allocated to communications, which avoids the additional delays inherent to the time-window configuration; (2) the use of priority-based scheduling, at the partition level, for deciding the order in which partitions are executed; and (3) the combination of both scheduling schemes in a multiprocessor approach that allows space and time isolation to be ensured on a subset of the cores, so that partitions with certification requirements can be executed there.
|
A Survey on Cloud Data Security using Image Steganography <s> I. INTRODUCTION <s> Cloud is simply a network of computers. It refers to a network of computers owned by one person or company, where other people or companies can store their data. In personal machines, all relevant data are stored in a single physical storage device. Cloud storage refers to a virtual storage area that can span across many different physical storage devices. When cloud storage is used, some of the files would be stored in various physical servers located in faraway countries. Since most users do not know where their physical files are, using cloud storage can be thought of as a vague, untouchable thing like a cloud itself. One of the major issues faced by users dealing with cloud storage is the security of the data. Many encryption schemes, mainly attribute-based and hierarchical ones, have been implemented to provide data confidentiality and access control for cloud storage, but they fail to address the security issues inside the cloud. The proposed system includes mediated certificateless encryption, an advanced encryption scheme that offers more security for cloud data sharing, and a steganographic method that enhances the security of data inside the cloud. The steganography approach reduces falsification by unauthorized users. Implementing mediated certificateless encryption with steganography shows that the performance of the system is better in comparison with other schemes; moreover, embedding the secret text inside the noise using the Least Significant Bit image-embedding technique protects the data from attackers. <s> BIB001 </s> A Survey on Cloud Data Security using Image Steganography <s> I. INTRODUCTION <s> With the development of cloud storage systems and their application in complex environments, their data security has received more and more attention. On the one hand, node crashes or external invasion are likely to lead to incomplete data; on the other hand, when the data is incomplete, because the cloud service provider deliberately conceals it or due to other factors, the user cannot be promptly informed of the change. In view of the above problems, this paper carries out in-depth research and puts forward a secure storage system that ensures data availability even when data integrity is compromised. In this paper, we begin with the availability of data; the research focuses on the confidentiality of data, the recovery of lost data, and data retrieval. In this paper, we propose a data secure storage scheme based on Tornado codes (DSBT) by combining the techniques of symmetric encryption and erasure codes. The scheme uses a boot password to solve the key preservation and management problem of traditional data encryption; the system design uses the redundancy of the error-correcting Tornado erasure code in order to recover lost data; and a keyed hash is combined with the error-correcting Tornado code so as to solve the problem of data tampering. On this basis, the paper continues to carry out research on proofs of retrievability (POR). Based on the classic POR algorithm built on BLS short signatures, a trusted log is introduced and used to provide the user with the test results. Finally, combined with the DSBT scheme, the computational efficiency of the POR algorithm is optimized so that it is independent of the file size, achieving constant computational complexity.
According to the above scheme, this paper implements a secure cloud storage prototype system based on Cassandra. Tests show that the system can provide strong data-loss recovery ability, effectively resist Byzantine faults, offer prominent retrievability-detection ability, and achieve very high computational efficiency, especially in the face of large files. This paper studies the modeling and analysis methods of some key problems of data security in cloud storage, such as encrypted storage, integrity verification, access control, and verification. It optimizes the access-control strategy through a data segmentation and refinement-rules algorithm, verifies cloud data integrity using data labels, ensures data availability using a replica strategy, strengthens security with strong authentication, improves algorithm efficiency with an attribute-encryption method using signcryption technology, and uses timed encryption and a DHT network to ensure that the ciphertext and keys of deleted data are removed, so as to establish a security scheme for cloud storage with privacy-protection characteristics. <s> BIB002
|
Cloud computing provides flexible services for users by combining many resources and applications based on a pay-as-you-need concept . One of the services provided by the cloud is data storage. This service provides fast distribution, low cost and reliability BIB001 . When data is stored in cloud storage, the storage devices are vulnerable to internal leakage, hacking and other threats that may lead to a loss of data confidentiality BIB002 . Some of the data stored in the cloud is very sensitive, such as banking and government information, and must be protected against unauthorized parties including the cloud service provider BIB001 . Many studies use cryptographic techniques to protect the confidentiality of cloud data , but the main disadvantage of encryption is that, although the data is encrypted and becomes unreadable, it still visibly exists as secret data. An attacker could decrypt the data given enough time . Steganography is a way to address this problem, since it allows the user to hide data inside another object such as a text, image, audio or video file; these techniques increase the security of sensitive data . In this paper, we focus on image steganography to protect cloud data. Fig. 1 illustrates the usage of image steganography in a cloud environment. In this paper, we examine the existing cloud data security techniques that use image steganography. This paper is structured as follows. Section II presents an overview of cloud computing. Section III gives an overview of steganography. In Section IV, we introduce image steganography. In Section V, we review some recent techniques for cloud data security using image steganography. In Section VI, we compare the techniques based on different aspects and discuss the current status. In Section VII, we discuss future work.
|
A Survey on Cloud Data Security using Image Steganography <s> A. Service Model of Cloud Computing <s> Abstract According to a Forbes’ report published in 2015, cloud-based security spending is expected to increase by 42%. According to another research, the IT security expenditure had increased to 79.1% by 2015, showing an increase of more than 10% each year. International Data Corporation (IDC) in 2011 showed that 74.6% of enterprise customers ranked security as a major challenge. This paper summarizes a number of peer-reviewed articles on security threats in cloud computing and the preventive methods. The objective of our research is to understand the cloud components, security issues, and risks, along with emerging solutions that may potentially mitigate the vulnerabilities in the cloud. It is a commonly accepted fact that since 2008, cloud is a viable hosting platform; however, the perception with respect to security in the cloud is that it needs significant improvements to realise higher rates of adaption in the enterprise scale. As identified by another research, many of the issues confronting the cloud computing need to be resolved urgently. The industry has made significant advances in combatting threats to cloud computing, but there is more to be done to achieve a level of maturity that currently exists with traditional/on-premise hosting. <s> BIB001 </s> A Survey on Cloud Data Security using Image Steganography <s> A. Service Model of Cloud Computing <s> Cloud computing is one of the largest developments in the field of information technology during recent years. It is a service oriented computing which offers everything as a service via the internet by the pay-as-you-go model. It becomes more desirable for all organizations (such as education, banking, healthcare and manufacturing) and also for personal use as it provides a flexible, scalable, and reliable infrastructure and services. For the user, the most important issue is to store, retrieve and transmit the data over the cloud network and storage in a secure manner. Steganography and cryptography are some of the security techniques applied in the cloud to secure the user data transmitting. The objective of steganography is to hide the existence of communication from the unintended users; whereas cryptography encrypts the data to make it more secure. Steganography is considered as the most effective technique for securing the communication in the cloud. Digital images are most commonly used as a cover medium in steganography. In the literature, there exist several image steganography techniques for hiding information in images; which were developed and implemented in the time domain as well as in the frequency domain. Fundamentals of spatial domain and frequency domain techniques are reviewed in this paper with emphasis on the Least Significant Bit (LSB) and the Discrete Cosine Transform (DCT) techniques. <s> BIB002
|
• Software as a Service (SaaS): The user can only use the applications provided by the provider, without the ability to manage them BIB001 .
• Platform as a Service (PaaS): The user creates applications on the cloud infrastructure and is able to deploy and manage those applications BIB001 .
• Infrastructure as a Service (IaaS): The user is provided with fundamental computing resources, such as networks, storage and processing BIB001 .
• File Storage as a Service (FSaaS): The cloud provides the ability to store, manage and access data from a browser interface; the cloud provider holds the maintenance responsibility and oversees the storage infrastructure BIB002 .
|
A Survey on Cloud Data Security using Image Steganography <s> B. Deployment Model of Cloud Computing <s> Abstract According to a Forbes’ report published in 2015, cloud-based security spending is expected to increase by 42%. According to another research, the IT security expenditure had increased to 79.1% by 2015, showing an increase of more than 10% each year. International Data Corporation (IDC) in 2011 showed that 74.6% of enterprise customers ranked security as a major challenge. This paper summarizes a number of peer-reviewed articles on security threats in cloud computing and the preventive methods. The objective of our research is to understand the cloud components, security issues, and risks, along with emerging solutions that may potentially mitigate the vulnerabilities in the cloud. It is a commonly accepted fact that since 2008, cloud is a viable hosting platform; however, the perception with respect to security in the cloud is that it needs significant improvements to realise higher rates of adaption in the enterprise scale. As identified by another research, many of the issues confronting the cloud computing need to be resolved urgently. The industry has made significant advances in combatting threats to cloud computing, but there is more to be done to achieve a level of maturity that currently exists with traditional/on-premise hosting. <s> BIB001
|
• Private cloud: The cloud service provider makes the resources and applications available to its cloud users. Users must subscribe to benefit from the resources, and they pay based on the subscription BIB001 .
• Public cloud: Users use the resources dynamically over the Internet, and they pay based on their usage BIB001 .
• Hybrid cloud: It consists of distributed private clouds linked together under central management. The payment system in this model is complex BIB001 .
|
A Survey on Cloud Data Security using Image Steganography <s> C. Cloud Computing Security Requirements <s> Cloud computing is one of the largest developments occurred in the field of information technology during recent years. This model has become more desirable for all institutions, organizations and also for personal use thanks to the storage of ‘valuable information’ at low costs, access to such information from anywhere in the world as well as its ease of use and low cost. In this paper, the services constituting the cloud architecture and deployment models are examined, and the main factors in the provision of security requirements of all those models as well as points to be taken into consideration are described in detail. In addition, the methods and tools considering how security, confidentiality and integrity of the information or data that forms the basis of modern technology are implemented in cloud computing architecture are examined. Finally, it is proposed in the paper that the use of data hiding methods in terms of access security in cloud computing architecture and the security of the stored data would be very effective in securing information. <s> BIB001 </s> A Survey on Cloud Data Security using Image Steganography <s> C. Cloud Computing Security Requirements <s> The cloud computing technology provides computing resources as services over the internet. Efficiency and cost-effectiveness are the main drivers for cloud computing adoption since it promises better scalability over legacy enterprise systems. With all benefits found in cloud technology, there are still some security issues because information and system components are completely controlled by an external company. Most of the discussions on cloud computing security topic are mainly focused on the organizational means to overcome these issues. This paper focusses on the main obstacles to adopting cloud computing technology in Saudi Arabia. It will also cover the technical means to secure cloud computing environment along with real cloud hacking scenarios. <s> BIB002
|
• Audit: It includes authentication and authorization, ensuring a user's identity by implementing a strong verification process BIB001 .
• Confidentiality: Protects data stored in the database from unauthorized users BIB001 .
• Integrity: Ensures data consistency and protects data from alteration BIB002 .
|
A Survey on Cloud Data Security using Image Steganography <s> A. Types of Steganography <s> Abstract This paper presents a literature review of image steganography techniques in the spatial domain for last 5 years. The research community has already done lots of noteworthy research in image steganography. Even though it is interesting to highlight that the existing embedding techniques may not be perfect, the objective of this paper is to provide a comprehensive survey and to highlight the pros and cons of existing up-to-date techniques for researchers that are involved in the designing of image steganographic system. In this article, the general structure of the steganographic system and classifications of image steganographic techniques with its properties in spatial domain are exploited. Furthermore, different performance matrices and steganalysis detection attacks are also discussed. The paper concludes with recommendations and good practices drawn from the reviewed techniques. <s> BIB001 </s> A Survey on Cloud Data Security using Image Steganography <s> A. Types of Steganography <s> One of the latest trends in IT sector is cloud computing. It develops the capabilities of organizations dynamically without training new employees, obtaining new software licenses or investing in infrastructure. At present, user keeps and share a high amount of data on cloud, and hence, the security of cloud computing is necessary so that there is no threat to any of the user’s data. Steganography is becoming a standard practice for both cloud users and cloud service providers as a mechanism against unauthorized surveillance. Steganography refers to writing hidden messages in a way that only the sender and receiver have the ability to safely know and transfer the hidden information in the means of communications. The aim of this paper is to provide an overview of steganography in cloud computing and compare various studies on the basis of technique selection, carrier formats, payload capacity and embedding algorithm to open important research directions. <s> BIB002
|
• Text steganography: Uses a text file to hide secret data .
• Image steganography: Hides secret data in a cover image .
• Audio steganography: Uses an audio file to conceal secret data .
• Video steganography: Hides secret data in a video file .
• DNA-based steganography: Employs the randomness of DNA to embed secret data BIB001 .
• Protocol steganography: Hides secret data in network protocols such as IP, TCP and UDP BIB002 .
|
A Survey on Cloud Data Security using Image Steganography <s> B. Objectives of Steganography <s> Cloud computing is one of the largest developments in the field of information technology during recent years. It is a service oriented computing which offers everything as a service via the internet by the pay-as-you-go model. It becomes more desirable for all organizations (such as education, banking, healthcare and manufacturing) and also for personal use as it provides a flexible, scalable, and reliable infrastructure and services. For the user, the most important issue is to store, retrieve and transmit the data over the cloud network and storage in a secure manner. Steganography and cryptography are some of the security techniques applied in the cloud to secure the user data transmitting. The objective of steganography is to hide the existence of communication from the unintended users; whereas cryptography encrypts the data to make it more secure. Steganography is considered as the most effective technique for securing the communication in the cloud. Digital images are most commonly used as a cover medium in steganography. In the literature, there exist several image steganography techniques for hiding information in images; which were developed and implemented in the time domain as well as in the frequency domain. Fundamentals of spatial domain and frequency domain techniques are reviewed in this paper with emphasis on the Least Significant Bit (LSB) and the Discrete Cosine Transform (DCT) techniques. <s> BIB001
|
• Security: An attacker is unable to detect the secret data BIB001 .
• Payload (Capacity): A large amount of data can be hidden in the cover object BIB001 (a capacity bound for LSB embedding is sketched after this list).
• Invisibility (Quality): The changes in the cover object are undetectable by the Human Visual System (HVS) BIB001 .
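To make the payload objective concrete, for plain k-bit LSB embedding the upper bound follows directly from the image dimensions: a W×H image with C channels can carry at most W·H·C·k bits. The helper below is a minimal illustration; the 512×512 RGB figures are assumed for the example only and do not come from the cited works:

```python
def lsb_capacity_bytes(width: int, height: int,
                       channels: int = 3, bits_per_channel: int = 1) -> int:
    """Upper bound on payload for k-bit LSB embedding: W*H*C*k bits."""
    return (width * height * channels * bits_per_channel) // 8

# e.g., a 512x512 RGB cover with one LSB used per channel holds at most:
print(lsb_capacity_bytes(512, 512))  # 98304 bytes (96 KiB)
```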
|
A Survey on Cloud Data Security using Image Steganography <s> IV. IMAGE STEGANOGRAPHY OVERVIEW <s> Cloud computing is one of the largest developments in the field of information technology during recent years. It is a service oriented computing which offers everything as a service via the internet by the pay-as-you-go model. It becomes more desirable for all organizations (such as education, banking, healthcare and manufacturing) and also for personal use as it provides a flexible, scalable, and reliable infrastructure and services. For the user, the most important issue is to store, retrieve and transmit the data over the cloud network and storage in a secure manner. Steganography and cryptography are some of the security techniques applied in the cloud to secure the user data transmitting. The objective of steganography is to hide the existence of communication from the unintended users; whereas cryptography encrypts the data to make it more secure. Steganography is considered as the most effective technique for securing the communication in the cloud. Digital images are most commonly used as a cover medium in steganography. In the literature, there exist several image steganography techniques for hiding information in images; which were developed and implemented in the time domain as well as in the frequency domain. Fundamentals of spatial domain and frequency domain techniques are reviewed in this paper with emphasis on the Least Significant Bit (LSB) and the Discrete Cosine Transform (DCT) techniques. <s> BIB001
|
This section provides an overview of image steganography, some of its techniques and the types of images. Image steganography is the process of hiding secret data in a cover image to produce a stego image BIB001 . A. Some Image Steganography Techniques:
• Least Significant Bit (LSB) based Steganography: Hides the bits of the secret data in the LSBs of the cover image. This is the most popular technique in use (see the sketch following this list).
• Discrete Cosine Transform (DCT): Hides the secret data in a subdivision of the quantized DCT coefficients .
• Discrete Wavelet Transform (DWT): Mathematically decomposes the image into a set of wavelets BIB001 . This technique is used for medical and military applications .
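The following is a minimal NumPy sketch of the LSB technique described above; it is an illustration of the general idea rather than the implementation of any reviewed paper, and the randomly generated array stands in for a real cover image:

```python
import numpy as np

def embed_lsb(cover: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the least significant bits of a uint8 image array."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("payload exceeds LSB capacity of the cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs only
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    """Read back n_bytes from the least significant bits."""
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in cover
secret = b"cloud secret"
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, len(secret)) == secret
```

Because only the lowest bit of each channel changes, the pixel values differ from the cover by at most 1, which is why LSB embedding is typically invisible to the HVS.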
|
A Survey on Cloud Data Security using Image Steganography <s> B. Types of Images <s> Cloud computing is one of the largest developments in the field of information technology during recent years. It is a service oriented computing which offers everything as a service via the internet by the pay-as-you-go model. It becomes more desirable for all organizations (such as education, banking, healthcare and manufacturing) and also for personal use as it provides a flexible, scalable, and reliable infrastructure and services. For the user, the most important issue is to store, retrieve and transmit the data over the cloud network and storage in a secure manner. Steganography and cryptography are some of the security techniques applied in the cloud to secure the user data transmitting. The objective of steganography is to hide the existence of communication from the unintended users; whereas cryptography encrypts the data to make it more secure. Steganography is considered as the most effective technique for securing the communication in the cloud. Digital images are most commonly used as a cover medium in steganography. In the literature, there exist several image steganography techniques for hiding information in images; which were developed and implemented in the time domain as well as in the frequency domain. Fundamentals of spatial domain and frequency domain techniques are reviewed in this paper with emphasis on the Least Significant Bit (LSB) and the Discrete Cosine Transform (DCT) techniques. <s> BIB001
|
• Binary images: consist of black and white pixels BIB001 .
• Grayscale images: consist of pixels with shades of gray BIB001 (see the array view below).
• Color images: use a combination of red, green and blue to specify the pixels' colors BIB001 .
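Viewed as arrays, which is the representation the embedding techniques above operate on, these three image types differ only in value range and shape. A small NumPy illustration with made-up 4×4 examples:

```python
import numpy as np

binary    = np.random.randint(0, 2,   size=(4, 4),    dtype=np.uint8)  # 0 = black, 1 = white
grayscale = np.random.randint(0, 256, size=(4, 4),    dtype=np.uint8)  # 0..255 shades of gray
color     = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)  # R, G, B per pixel
print(binary.shape, grayscale.shape, color.shape)  # (4, 4) (4, 4) (4, 4, 3)
```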
|
A Survey on Cloud Data Security using Image Steganography <s> V. CURRENT WORKS PROPOSED FOR CLOUD DATA SECURITY USING IMAGE STEGANOGRAPHY <s> Cloud is simply a network of computers. It refers to a network of computers owned by one person or company, where other people or companies can store their data. In personal machines, every relevant data is stored in a single physical storage device. Cloud storage refers to a virtual storage area that can span across many different physical storage devices. When cloud storage is used, some of the files would be stored in various physical servers located at far away countries. Since most users do not know where their physical files are, using cloud storage can be thought of as a vague, untouchable thing like a cloud itself. One of the major issues faced by user while dealing with cloud storage is security of the data. Many of encryption schemes mainly attribute based and other hierarchical based are implemented to provide data confidentiality and access control to cloud storage where they are failed to address the security issues inside cloud. The proposed system includes a Mediated certificateless encryption which is an advanced encryption scheme that offers more security to the cloud data sharing and a steganographic method which enhances the security of data inside the cloud. Steganography approach reduces the falsification of unauthorized users. By implementing mediated certificateless encryption with steganography shows the performance of the system is better in comparison with other schemes and also embedding the secret text inside the noise using Least Significant Bit image embedding technique which protects the data from the attackers. <s> BIB001 </s> A Survey on Cloud Data Security using Image Steganography <s> V. CURRENT WORKS PROPOSED FOR CLOUD DATA SECURITY USING IMAGE STEGANOGRAPHY <s> Data security is a major issue in computer science and information technology. In the cloud computing environment, it is a serious issue because data is located in different places. In the cloud, environment data is maintained by the third party so it is harder to maintain security for user’s data. There is a prominent need for security for cloud data, so we proposed an approach which provides better results as compared to previous approaches. In this work, we tried to secure the data by using image steganography partition random edge-based technique. In the next step, partition the original image into a number of parts, apply edge-based algorithms, select random pixels (prime number-based pixels) from each part, and insert the data into them. The performance of this proposed analysis is calculated by using MSE and PSNR values. Results are good as compared to several existing edge-based algorithms of image steganography. <s> BIB002 </s> A Survey on Cloud Data Security using Image Steganography <s> V. CURRENT WORKS PROPOSED FOR CLOUD DATA SECURITY USING IMAGE STEGANOGRAPHY <s> Now days storage of data in cloud plays a vital role in any place but data security for cloud environment is a major problem now a days because the data is maintained and its is organized by third party from different locations of different places. Before storing the data in the cloud environment the user must and should give security from unauthorized access. So in this paper we proposed new algorithm to secure user data by using image Steganography and image Segmentation. In this the data is hidden different segments of image by using image segmentation. 
The performance of proposed algorithm is evaluated by considering various parameters like PSNR, MSE values and the results are compared with various existing algorithms for various sizes of images. <s> BIB003 </s> A Survey on Cloud Data Security using Image Steganography <s> V. CURRENT WORKS PROPOSED FOR CLOUD DATA SECURITY USING IMAGE STEGANOGRAPHY <s> The architecture development of cloud computing technology is growing tremendously in recent times, which leads to improvement of scalability, accessibility and cost reduction measures in the IT sectors of all enterprises. In this service, the data storage without reviewing security policies and procedures is a challenging task and probabilities of extracting secret information by an unauthorized intervention are more. However, to prevent the breaches of security in the cloud service, the steganography art plays an essential role in the data communication medium to improve the security measures, and it is an indispensable technique for hiding the secret information into a cover object. This paper describes the implementation of new steganography method with International Data Encryption Standard Algorithm (IDEA) and Least Significant Bit Grouping (LSBG) algorithm for embedding the secret information into an original image and extracting the same. The result shows the improvement of data embedding capacity and reduces the issues related to data security by effective utilization of this new approach, which reveals the remarkable achievement of the combinational execution of steganography and cryptography technique. The IDEA and LSBG have some vital qualities such as data confidentiality, integrity verification, capacity and robustness, which are crucial factors to achieve successful implementation of steganography process in data security system. The effectiveness and properties of the stego image can be evaluated by some specific measures like mean squared error, root mean squared error, peak signal to noise ratio and structural similarity index matrix to analyse the image quality. The results show that the proposed technique outperforms the existing methodologies and resolves the data security problem in data transmission and storage system of cloud computing services. <s> BIB004
|
In this section, we review some works proposed for cloud data security using image steganography.

Mohis and Devipriya BIB001 proposed an improved approach that increases the security of public cloud data by using mediated certificateless public key encryption (MCL-PKE) and an LSB steganography algorithm. The proposed system consists of three modules: a registration module, a cloud module and an embedding module. In the registration module, the user registers with the cloud and generates public and private keys; the user keeps the private key and transfers the public key to the Key Generation Centre (KGC). In the cloud module, when a user requests data, the Security Mediator (SEM) checks whether the user is legitimate; if so, it partially decrypts the data and provides it to the user, who then fully decrypts it with the private key. In the embedding module, before storing the data in the cloud, the user embeds the sensitive data within an image. The authors compared the proposed approach with other systems. The proposed approach reduces overhead on the owner side and reduces unauthorized access to the data. However, this technique does not produce a high-quality stego image and does not allow hiding a large amount of data.

Ebrahim et al. combined encryption and steganography to prevent unauthorized access to cloud data. The proposed model has three phases. In the first phase, a hash value of the secret data is computed using SHA-256, and RSA is then used to encrypt the hash value and the session key. In the second phase, AES-256 encrypts the secret data. In the third phase, an advanced LSB algorithm hides the encrypted data in a cover image. The authors evaluated the proposed model and compared it with other models. The results show that this model provides security against cryptanalysis and steganalysis attacks and statistical changes, and produces a stego image with high quality.

Seshubhavan et al. used steganography and genetic algorithms to secure data in the cloud. The proposed technique tries to insert the secret data into suitable pixels of the cover image without affecting the characteristics of the cover image. This technique works only on grayscale images; therefore, if the cover image is a color image, it is first converted to grayscale. The least significant and most significant bits are then extracted and converted into arrays of 0's and 1's. The AES algorithm encrypts the secret data, and the key is converted into an array of 0's and 1's. The two arrays are combined and split into an R block and an L block. These segments are fed to a genetic algorithm to produce an address block, which is used to embed the secret data in the cover image and produce the stego image that is stored in the cloud database. This algorithm was compared with other existing algorithms. The results show that the proposed algorithm achieves better quality but does not provide a high payload capacity.

Rahman et al. proposed a new combination of encryption and steganography to secure cloud data. They used the Blowfish algorithm to encrypt the secret data, the E-LSB algorithm to embed the encrypted data in a cover image, and SHA-256 to preserve the integrity of the produced stego image. The analysis of the proposed model shows that it provides security against statistical and visual attacks.

Suneetha and Kumar BIB002 improved the security of cloud data by using a partitioned random edge-based technique for image steganography, which they expected to reduce the differences between the cover image and the produced stego image. In the embedding process, the cover image is converted to grayscale and partitioned into 9 parts. Then, the Canny edge detection method identifies the edge pixels, and a prime-number-based rule selects random pixels of the image. After that, the secret data is encrypted and the key is embedded in the selected pixels. The authors compared their method with other existing methods, and the results show that it performs better and works on different types of data. It provides security against steganalysis attacks. This work focuses on security and quality but ignores the amount of data that can be embedded in the cover image.

Kumar and Suneetha BIB003 used image segmentation along with image steganography to increase the security of data in a cloud environment. To embed secret data in a cover image, the cover image is converted to a black-and-white or grayscale image, and an image segmentation technique identifies and extracts the iris part of the cover image. Canny edge detection then selects the edge pixels of the inner and outer circles, and the RSA algorithm encrypts the secret data. The secret key is hidden in the selected pixels and the stego image is stored in the cloud. The authors analyzed the technique, and the results show that it provides better security than other existing techniques based on steganography and segmentation.

Shanthakumari and Malliga BIB004 proposed a combination of the International Data Encryption Algorithm (IDEA) and the Least Significant Bit Grouping (LSBG) algorithm to improve the security and the capacity of data embedding in the cover image. In the embedding phase, the IDEA algorithm encrypts the secret data, then LSBG embeds the encrypted data into the cover image to produce a stego image, which is uploaded to the cloud. In the extracting phase, the stego image is downloaded from the cloud, LSBG extracts the secret data, and IDEA decryption recovers it. The authors evaluated the proposed technique and compared it with other techniques. The results show that this technique provides good security for the secret data, produces a stego image with high quality, and increases the embedding capacity.
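Most of the reviewed schemes instantiate the same encrypt-then-embed pattern: hash the secret for integrity, encrypt it, LSB-embed the ciphertext, and reverse the steps on retrieval. The sketch below illustrates that pattern using only the Python standard library; the SHA-256-derived XOR keystream is a deliberately simple stand-in for the AES, Blowfish or IDEA ciphers actually used in the papers, and all function names here are hypothetical:

```python
import hashlib

def keystream_cipher(data: bytes, key: bytes) -> bytes:
    """Stand-in stream cipher: XOR with a SHA-256-derived keystream.
    The reviewed works use AES, Blowfish or IDEA here instead."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def protect(secret: bytes, key: bytes) -> tuple[bytes, str]:
    """Encrypt the secret and return (ciphertext, integrity digest).
    The ciphertext is what would then be LSB-embedded in the cover image."""
    digest = hashlib.sha256(secret).hexdigest()  # integrity tag, as in Rahman et al.
    return keystream_cipher(secret, key), digest

def recover(ciphertext: bytes, key: bytes, digest: str) -> bytes:
    """Decrypt data extracted from the stego image and verify its integrity."""
    secret = keystream_cipher(ciphertext, key)  # XOR cipher is its own inverse
    if hashlib.sha256(secret).hexdigest() != digest:
        raise ValueError("integrity check failed: stego image was altered")
    return secret

ciphertext, tag = protect(b"account: 12345", b"shared key")
# ...embed `ciphertext` with an LSB routine such as the one sketched in
# Section IV, upload the stego image, later download and extract it...
assert recover(ciphertext, b"shared key", tag) == b"account: 12345"
```

The design point shared by all the reviewed works is that steganography hides the existence of the message while the cipher protects its content, so an attacker must defeat both layers.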
|
A Survey on Cloud Data Security using Image Steganography <s> VI. DISCUSSION <s> Data security is a major issue in computer science and information technology. In the cloud computing environment, it is a serious issue because data is located in different places. In the cloud, environment data is maintained by the third party so it is harder to maintain security for user’s data. There is a prominent need for security for cloud data, so we proposed an approach which provides better results as compared to previous approaches. In this work, we tried to secure the data by using image steganography partition random edge-based technique. In the next step, partition the original image into a number of parts, apply edge-based algorithms, select random pixels (prime number-based pixels) from each part, and insert the data into them. The performance of this proposed analysis is calculated by using MSE and PSNR values. Results are good as compared to several existing edge-based algorithms of image steganography. <s> BIB001 </s> A Survey on Cloud Data Security using Image Steganography <s> VI. DISCUSSION <s> Now days storage of data in cloud plays a vital role in any place but data security for cloud environment is a major problem now a days because the data is maintained and its is organized by third party from different locations of different places. Before storing the data in the cloud environment the user must and should give security from unauthorized access. So in this paper we proposed new algorithm to secure user data by using image Steganography and image Segmentation. In this the data is hidden different segments of image by using image segmentation. The performance of proposed algorithm is evaluated by considering various parameters like PSNR, MSE values and the results are compared with various existing algorithms for various sizes of images. <s> BIB002 </s> A Survey on Cloud Data Security using Image Steganography <s> VI. DISCUSSION <s> The architecture development of cloud computing technology is growing tremendously in recent times, which leads to improvement of scalability, accessibility and cost reduction measures in the IT sectors of all enterprises. In this service, the data storage without reviewing security policies and procedures is a challenging task and probabilities of extracting secret information by an unauthorized intervention are more. However, to prevent the breaches of security in the cloud service, the steganography art plays an essential role in the data communication medium to improve the security measures, and it is an indispensable technique for hiding the secret information into a cover object. This paper describes the implementation of new steganography method with International Data Encryption Standard Algorithm (IDEA) and Least Significant Bit Grouping (LSBG) algorithm for embedding the secret information into an original image and extracting the same. The result shows the improvement of data embedding capacity and reduces the issues related to data security by effective utilization of this new approach, which reveals the remarkable achievement of the combinational execution of steganography and cryptography technique. The IDEA and LSBG have some vital qualities such as data confidentiality, integrity verification, capacity and robustness, which are crucial factors to achieve successful implementation of steganography process in data security system. 
The effectiveness and properties of the stego image can be evaluated by some specific measures like mean squared error, root mean squared error, peak signal to noise ratio and structural similarity index matrix to analyse the image quality. The results show that the proposed technique outperforms the existing methodologies and resolves the data security problem in data transmission and storage system of cloud computing services. <s> BIB003
|
In this section, we compare the current techniques based on different aspects and discuss the current status. Table I shows a comparison of the reviewed techniques based on the algorithms they use, their advantages and their drawbacks. From Table I, we conclude that no technique is strong in every respect; each technique has its own strengths and weaknesses. In Table II, we compare the current techniques based on the steganography objectives: security, capacity and quality. From Table II, we can conclude that all proposed techniques satisfy the security objective, and five of them produce a stego image with high quality, but only one technique allows hiding a large amount of data. The reviewed techniques work on different types of images: and BIB001 are suitable for grayscale images, BIB002 works on black-and-white or grayscale images, and some, such as , and BIB003 , are suitable for color images.
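The quality comparisons above rest on the MSE and PSNR metrics reported by the reviewed works. For reference, the following is a small NumPy sketch of how these values are conventionally computed for a cover/stego pair; the arrays below are placeholders, not data from the papers:

```python
import numpy as np

def mse(cover: np.ndarray, stego: np.ndarray) -> float:
    """Mean squared error between cover and stego images."""
    diff = cover.astype(np.float64) - stego.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(cover: np.ndarray, stego: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means less visible distortion."""
    err = mse(cover, stego)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)    # placeholder images
stego = cover.copy()
stego ^= np.random.randint(0, 2, cover.shape, dtype=np.uint8)  # flip some LSBs
print(f"MSE = {mse(cover, stego):.4f}, PSNR = {psnr(cover, stego):.2f} dB")
```

A high PSNR (typically above 40 dB for LSB-style methods) indicates a stego image that satisfies the invisibility objective discussed earlier.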
|
Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> We have demonstrated new switching and gas-sensing effects in integrated optics using input and output grating couplers and Bragg reflector gratings with 1200 lines/mm on planar SiO2–TiO2 waveguides. Switching is actuated by adsorption or desorption of water or other adsorbates on the waveguide surface through a change in the effective index of the guided modes under the grating. We derived theoretically the ultimate sensitivity limits of the grating devices employed either as switches or as gas sensors. Switching requires the adsorption and desorption, respectively, of less than one H2O monolayer. Sensors can detect variations in surface coverage of 1/100 of an H2O monolayer. <s> BIB001 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> This paper describes the design and fabrication of a sensitive integrated chemo-optical sensor supplied with on-chip fiber-to-waveguide connectors. The sensor is designed for TE-polarized light with wavelength of 633 nm. The fiber-to-chip connectors are based on easily fabricated silicon V-grooves combined with a smooth sawcut. The sawcut is defining the channel waveguide endface. The sensor is based on a phase modulated Mach-Zehnder interferometer, using the electro-optic effect of the waveguiding material zinc oxide (ZnO). The fiber-to-chip connector units have a typical coupling efficiency of 0.1?1%. The electro-optical voltage × length product V? is 15 ± 4 V cm at frequencies above 100 Hz. Preliminary experiments on the general (passive) sensor response showing its expected high sensitivity are discussed. <s> BIB002 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> The operation of a novel device – a waveguide surface plasmon resonance sensor with a UV-written Bragg grating – is theoretically analysed using two methods. In the simple perturbation approach, the metal/dielectric layer system supporting the resonance excitation of the surface plasma wave is considered to be a perturbation of the original dielectric waveguide with Bragg grating that is analysed using a coupled-mode theory. The second approach consists of the rigorous method of bi-directional mode expansion and propagation using the Floquet mode formalism developed recently for the analysis of waveguide grating structures. The results of both approaches are mutually compared, and the operation characteristics of this novel sensing device are briefly described. <s> BIB003 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> An opto-chemical in-fibre Bragg grating (FBG) sensor for refractive index measurement in liquids has been developed using fibre side-polishing technology. At a polished site where the fibre cladding has partly been removed, a FBG is exposed to a liquid analyte via evanescent field interaction of the guided fibre mode. The Bragg wavelength of the FBG is obtained in terms of its dependence on the refractive index of the analyte. Modal and wavelength dependences have been investigated both theoretically and experimentally in order to optimize the structure of the sensor. Using working wavelengths far above the cut-off wavelength results in an enhancement of the sensitivity of the sensor. Measurements with different mode configurations lead to the separation of cross sensitivities. 
Besides this, a second FBG located in the unpolished part can be used to compensate for temperature effects. Application examples for monitoring fuels of varying quality as well as salt concentrations under deep borehole conditions are presented. <s> BIB004 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> In this paper, we propose a novel silicon-on-insulator (SOI) Michelson interferometer sensor with waveguide Bragg reflective gratings as a high-sensitivity temperature sensor. Due to the SOI waveguide, the Bragg reflective grating has a larger thermal expansion coefficient than the fiber Bragg grating (FBG); the temperature sensitivity of the SOI Michelson interferometer sensor is 20 times higher than the FBG sensor. © 2001 John Wiley & Sons, Inc. Microwave Opt Technol Lett 30: 321–322, 2001. <s> BIB005 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> This paper demonstrates the development of optical temperature sensor based on the etched silica-based planar waveguide Bragg grating. Topics include design and fabrication of the etched planar waveguide Bragg grating optical temperature sensor. The typical bandwidth and reflectivity of the surface etched grating has been ∼0.2 nm and ∼9 %, respectively, at a wavelength of ∼1552 nm. The temperature-induced wavelength change is found to be slightly non-linear over ∼200 °C temperature range. Typically, the temperature-induced fractional Bragg wavelength shift measured in this experiment is 0.0132 nm/°C with linear curve fit. Theoretical models with nonlinear temperature effect for the grating response based on waveguide and plate deformation theories agree with experiments to within acceptable tolerance. <s> BIB006 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> Abstract A new design of a highly sensitive optical biosensor is described. It consists of a planar waveguide with a corrugated surface, tuned to be a resonant Bragg reflector. Only the heights of the corrugation are covered by an opto-chemical transducer layer that selectively adsorbs target biomolecules. Upon adsorption, the corrugation depth rises and reflection coefficient changes. This is used to monitor surface reactions. Theoretical analysis and mathematical modeling proves that the device sensitivity compares favorably with current known designs. The proposed biosensor is shown to possess a unique capability of unambiguous discrimination between surface reactions and refraction index changes in the bulk solution. <s> BIB007 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> The analysis of temperature effects on a single guided mode optical fiber coupled into a deposited layer planar waveguide is presented. The pressure effects are investigated through a Bragg grating structure. Using the transfer matrix and mode expansion propagation methods the characteristics of an integrated temperature and pressure sensors are performed. Two kinds of polymers are used as a planar waveguide materials. Employing the thermooptic effects of a polymer planar waveguide, the resonant wavelength of the device is very sensitive to ambient temperature. A relatively good agreement between our results obtained using rigorous approaches and those reported in the experiment is obtained. 
<s> BIB008 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> The refractive index of some polymers can be locally increased in a controllable way by UV-excimer laser irradiation. Thus by mask lithographic methods integrated-optical waveguiding and dispersive structures are generated in the surface of a planar polymer chip. By this way a polymeric Bragg sensor component in integrated-optical form was fabricated by the UV-light of an excimer laser. The functional properties of the polymeric Bragg sensor have been investigated in dependence on the irradiation parameters and the temperature. <s> BIB009 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> Abstract A sensor believed to be the first truly integrated optical sensor capable of detecting the liquid–solid phase transition of water is presented. The condensation, freezing, melting and evaporation of water are all detected with a planar silica Bragg grating operating in the 1.5 μm telecommunications window. Additionally, use of the sensor allows recognition of supercooled liquid at temperatures below the melting point of water. The device, well suited for integrated optics, is fabricated by direct UV writing with simultaneous definition of the grating. The Bragg grating is exposed and water is allowed to condense over it. Interaction with the evanescent field causes small changes in effective index (5 × 10 −6 ) which can be detected, a sufficient sensitivity to identify the phase transitions of water clearly. <s> BIB010 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> Solid-to-liquid and gas-to-liquid phase changes in water and ordered-to-isotropic phase changes in a nematic liquid crystal are detected with an optical sensor. A planar Bragg grating defined purely by refractive index modulation is covered with a water or liquid crystal overcladding and the temperature is controlled to trigger phase changes. Measurement of the Bragg wavelength allows changes of effective refractive index to be detected and discontinuities in behaviour caused by phase transitions can be clearly identified. <s> BIB011 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> Polymethylmethacrylate is irradiated by a UV-laser in order to modify its optical properties photochemically. Thus, by a lithographic method, the refractive index can be locally increased in a controllable way permitting the manufacturing of integrated-optical waveguiding and dispersive structures at the surface of a planar polymer chip. By this method, a polymeric Bragg sensor in integrated-optical form was fabricated by the UV-light of an excimer laser. The surface topography and the functional properties of the planar polymeric deformation Bragg sensor have been examined. Experiments concerning the evanescent field of the sensor have also been carried out in order to clarify the Bragg reflection mechanism. <s> BIB012 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> Abstract The aim of the presented investigations was to develop a technique of producing Bragg’s grating couplers on planar waveguides. Waveguides are obtained by means of the sol-gel technology. 
The introduction of a light beam into the structure of the waveguide is in the case of planar or strip optical systems always an essential technical problem, requiring simple and reproducible solutions without extending excessively the waveguide structure. The paper presents a technology of producing grating couplers by impressing the pattern of the network while forming the planar waveguide structure applying the sol-gel method. Some remarks concerning the sol-gel technology are also presented. The results of investigations on grating couplers obtained in such a way have been discussed, too. Attention has been drawn to the possibility of using such structures in optoelectronic sensors, particularly gas sensors, including sensors of water vapour as well as toxic gases. <s> BIB013 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> A highly sensitive waveguide Bragg grating (WBG) sensor for measuring small changes of the refractive index of the surrounding liquid is presented. By using an open top ridge waveguide with a small core, the evanescent field interaction of the guided mode with the liquid analyte on the top of the waveguide is enhanced. The sensitivity measured via a shift in the resonance wavelength of the Bragg grating as high as 1 pm of wavelength shift for a change of 4 × 10−5 in the refractive index around 1.402 is realized. With a polarization insensitive Bragg grating, the polarization dependence of the sensor is improved. A theoretical analysis for the sensitivity of ridge waveguide sensors is given. The experimental results are in good agreement with the theoretical analysis. <s> BIB014 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> In this paper, a theoretical model of a new planar integrated surface plasmon-polariton (SPP)-excitation-based refractive-index sensor is presented and comprehensively investigated. The main principle of operation of this device is based on energy transfer by means of a corrugated metal grating between a p-polarized guided mode propagating in a waveguide layer and the SPP propagating in the opposite direction in a metal layer. The corrugated grating is engraved in the metal layer in contact with the sensed medium. This device is free from any moving parts and can be simply integrated into any planar-waveguide system. Our sensor simulations are based on the transfer-matrix method with the mode-matching technique and have been performed at commercialized telecom wavelengths. <s> BIB015 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> A theoretical model of a new integrated planar surface plasmon-polariton (SPP) refractive index sensor is presented and comprehensively investigated. The main principle of operation of this device is based on high efficiency energy transfer between a p-polarized guided mode propagating in a waveguide layer of the structure and the SPP propagating in the opposite direction in a metal layer separated from the waveguide layer by a dielectric buffer. The high efficiency energy transfer is realised by means of a properly designed Bragg grating imprinted in the waveguide layer. This device is compact, free from any moving parts and can easily be integrated into any planar scheme. Our simulations for the sensor operating at the well developed and commercialised telecom wavelengths are based on coupled mode theory. 
<s> BIB016 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> We demonstrate a new sensor concept for the measurement of oscillating electric fields that is based on Bragg gratings in LiNbO3:Ti channel waveguides. This miniaturized sensor that works in a retroreflective scheme does not require metallic electrodes and can be directly immersed in an oscillating electric field. The electric field induces a shift of the Bragg wavelength of the reflection grating that is due to the electro-optic effect. The operating point of the sensor is chosen by adjusting the laser wavelength to the slope of the spectral reflectivity function of the grating. In this way the magnitude of an external electric field is measured precisely as the amplitude of modulated reflected light intensity by using a lock-in amplifier. The sensor principle is demonstrated by detecting low-frequency electric fields ranging from 50 V/cm to 5 kV/cm without any conducting parts of the sensor head. Furthermore, the ability of the sensor to determine the three-dimensional orientation of an external electric field by a single rotation along the waveguide direction is demonstrated. <s> BIB017 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> Abstract Thermally stabilized channel waveguides with Bragg gratings were fabricated by the space-selective precipitation technique of crystalline Ge nanoparticles using KrF excimer laser irradiation. The periodic structures consisting of Ge nanoparticles were formed in Ge–B–SiO 2 thin glass films after exposure to an interference pattern of the laser followed by annealing at 600 °C. The channel waveguides with the periodic structures were fabricated by the cladding of the patterned Cr layers on the films. The diffraction peak for the TE-like mode of 11.8 dB depth was observed clearly at a wavelength of 1526.4 nm, indicating that the periodic structure also served as the optical band-pass filter in optical communication wavelength. The spectral shape, diffraction efficiency, and diffraction wavelength remained unchanged even after annealing at 400 °C. Furthermore, a low temperature dependence of the diffraction wavelength – as low as 8.1 pm/°C – was achieved. The diffraction efficiency was further enhanced after subsequent annealing at 600 °C. The space-selective precipitation technique is expected to be useful for the fabrication of highly reliable optical filters or durable sensing devices operating at high temperature. <s> BIB018 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> This article reviews the recent progress in optical biosensors that use the label-free detection protocol, in which biomolecules are unlabeled or unmodified, and are detected in their natural forms. In particular, it will focus on the optical biosensors that utilize the refractive index change as the sensing transduction signal. Various optical label-free biosensing platforms will be introduced, including, but not limited to, surface plasmon resonance, interferometers, waveguides, fiber gratings, ring resonators, and photonic crystals. Emphasis will be given to the description of optical structures and their respective sensing mechanisms. Examples of detecting various types of biomolecules will be presented. Wherever possible, the sensing performance of each optical structure will be evaluated and compared in terms of sensitivity and detection limit. 
<s> BIB019 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> Refractive index sensors using self-forming microchannels embedded in borophosphosilicate glass and monolithically integrated with silica waveguides are presented. Fabricated devices presented include solid-core and liquid-core directional couplers, liquid-core modal interferometers, Mach-Zehnder interferometers, segmented waveguides, and microchannel grating devices. Sensitivities of these devices are calculated and compared with each other and to other well-known devices. Experimental characterizations show that the performance of fabricated devices agrees well with calculations. <s> BIB020 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> Sensing via fiber optics has occupied R&D groups for over 40 years, and some important transitions into the commercial sector have been achieved. We look at the fundamental concepts involved in the various sensing approaches, and the differentiators which have led to commercial impact. We also look to the future of fiber-optic sensors. <s> BIB021 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> Bragg reflection waveguide devices are fabricated on a flexible substrate by using a post-lift-off process in order to obtain highly uniform grating patterns on a wide range. In this process, the flexible substrate formed by spin-coating on a silicon wafer is lifted-off at the end of fabrication procedures. The flexible Bragg reflector exhibits very sharp transmission spectrum with a 3-dB bandwidth of 0.1 nm and a 10-dB bandwidth of 0.4 nm, which is provided by the grating pattern with excellent uniformity. Athermal operation of the flexible Bragg reflector is also demonstrated through the optimization of thermal expansion property of the plastic substrate by controlling the thickness of two polymer substrate materials. The flexible substrate made of 0.7-mum SU-8 layers sandwiching 100-mum NOA61 film provides an optimized thermal expansion property compensating the thermooptic effect of the polymer waveguide. The temperature dependence of the Bragg reflector is reduced to -0.011 nm/degC by the plastic substrate. <s> BIB022 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Introduction <s> A submicrometer integrated optical sensor based on Bragg gratings in silicon-on-insulator technology is theoretically proposed in this paper. The grating analysis is performed using a mixed numerical approach based on the finite-element method and coupled mode theory. The possibility to use third-order instead of first-order grating is discussed and performances compared, thus overcoming fabrication problems associated to submicrometer scale features. A detection limit of approximately 10-4 refractive index unit has been calculated for a 173-mum-long grating. Strategies to further improve this value have been discussed too. Finally, fabrication tolerances influence on optimized gratings has been investigated. <s> BIB023
|
A wide range of optical sensing technologies exists and is subject to intensive development due to a number of driving factors. In the fields of process control and automation, for example, there is a desire to monitor concentrations and compositions in real time without the risk of damaging or contaminating high-value product. It is common for such applications to occur in volatile or flammable environments where spark-free or intrinsically safe technology is a prerequisite. Whilst there are many fields with disparate motivations and sensing requirements, they do in many ways share a common goal: that of rapid, accurate, and safe detection in a potentially harmful environment. Existing review papers discuss the broad range of optical sensing technologies BIB019 and the motivations for using an integrated optical format in terms of compatibility with microfluidics BIB020 . In this paper we concentrate on a review of results on planar Bragg grating sensors, which are a recent addition to the field and offer attractive advantages.

A detailed description of the range of different techniques and technologies available from the generic class of "optical sensors" would form an extensive review and could include a vast array of applications from particle counting to vibration detection. The various optical sensor technologies may therefore be segmented in a wide number of ways, but it is helpful to distinguish between sensors in which the light does not physically pass into the measurand material (such as a Bragg grating sensor for strain) and sensors in which the optical field does pass into the measurand. In this paper we are concerned with the latter type of device. It is further useful to consider the means by which the light interacts; this may be either through the refractive index of the measurand or via an absorption or other energy-exchanging interaction with the measurand. In that sense we can choose to characterise both types of linear properties of the light interacting in the material via a complex refractive index (n* = n − iκ), where n is the real index at a particular wavelength and κ represents the absorption. There are, of course, nonlinear interactions too, such as the Raman effect, which are outside the scope of this paper. By thinking in terms of the complex refractive index we can classify techniques into those that make use of the imaginary part (iκ), such as absorption spectroscopy, and those that make use of the real part of the index (n), such as refractometry. In these terms we see that absorption spectroscopy can be viewed as investigating how (iκ) varies with wavelength. However, in this paper we are primarily interested in those techniques that make use of the refractive index properties of the measurand, and particularly techniques in which the light interaction in the sensor and measurand modifies the modal properties of the light. A modal picture is familiar in fibre optics, where a mode represents a solution to the laws of electromagnetic propagation that is constant in form along an invariant refractive index structure. This modal concept is distinct from refractive index properties (such as are used in a refractometer) in which the light propagation is effectively free-space-like, with refraction occurring at a set of discrete boundaries.
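To make the complex-index classification above concrete, the standard textbook relations (not specific to any of the cited works) linking the imaginary part κ to measurable absorption are:

```latex
% Complex refractive index: the real part governs refraction,
% the imaginary part governs absorption.
\begin{equation}
  n^{*} = n - i\kappa
\end{equation}
% The intensity absorption coefficient, as in the Beer--Lambert law
% I(z) = I_0 \, e^{-\alpha z}, follows from \kappa at vacuum wavelength \lambda:
\begin{equation}
  \alpha = \frac{4\pi\kappa}{\lambda}
\end{equation}
```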
Thus from the huge possible range of sensor technologies we are led to consider firstly those that primarily sense the real part of the refractive index, and then specialise to those techniques in which modal interactions are used. Within this category of modal devices the most familiar devices make use of surface plasmon resonance BIB003 and provide a well-established technology BIB015 BIB016 . Such plasmon devices are well established in the literature; for a recent review of plasmonics the book by Maier [6] provides a wealth of information. Plasmon-based sensors are used in a number of commercial instruments produced by companies such as Biacore, Biosensing Instruments Inc, Sensata, and ICx Nomadics. In a plasmon-type sensor the light propagation and modal properties are strongly dependent on the properties of a thin film of a metallic conductor (most commonly gold), in which the modal coupling properties are modified by the refractive index of the surrounding dielectric (the measurand). In contrast to SPR sensors, the types of devices in this paper make use of dielectric waveguides in which there are no metallic elements and in which the modal properties are dominated by the real part of the refractive index of the waveguide and of the measurand. The most familiar format for such a device is an optical fibre sensor. A recent review article BIB021 covers the whole area of fibre sensing. In this review we are specifically interested in devices in which guidance occurs by total internal reflection in a higher index core, and where the waveguide structure is processed to allow light to interact with a measurand fluid. This interaction causes a change in optical path length, which can be sensed in a number of ways but typically is either interferometric or via a change in the response of a grating structure. More recently, researchers have started to exploit the advantages of planar integration as a way to allow enhanced functionality devices to be made, in which microfluidics and multiple sensor elements can be incorporated into a single device. Such devices have a common physical operating principle, in that they all operate by having a dielectric waveguide in which the propagating mode is allowed to partially interact with the measurand, and where the optical path change associated with that interaction is measured. For example, work by Heideman et al. BIB002 describes the operation of a Mach-Zehnder sensor which measures fringe changes in the interferometer output; in contrast, early work by Tiefenthaler BIB001 used a surface grating to measure water absorption on a planar waveguide. More recently, work by Schroeder et al. BIB004 showed how multiple gratings at different operating wavelengths may be used to measure and correct for temperature variation and also gain information on the variation of refractive index with wavelength; however, this device used a side-polished fibre embedded in a block, which is not simple to fabricate. A relatively small number of Bragg grating-based devices have been considered and implemented in planar form. They have been demonstrated in a variety of different material platforms such as polymers BIB008 BIB009 BIB012 BIB022 , Sol-gel systems BIB013 , Silicon-on-Insulator (SOI) BIB023 BIB005 , Lithium Niobate BIB017 , and Silica-on-Silicon BIB010 BIB011 .
The wave-guiding and grating structures in Bragg-based optical sensors have been fabricated with a number of approaches, leading to sensing elements with ridge waveguides BIB014 , UV written waveguides and gratings BIB010 BIB011 , corrugated/etched Bragg gratings BIB006 BIB007 , or even Bragg gratings formed through selective precipitation of nanoparticles BIB018 . However, only a very limited number of designs have proved viable in terms of commercialisation. Recently, Stratophase Ltd has commercialised a direct UV writing technology following its original development at the University of Southampton BIB010 BIB011 . The method allows the inscription of waveguides and gratings onto planar substrates. This technique enjoys the benefits of planar integration and ease of applying microfluidics, and also makes use of telecommunication-grade single-mode fibre components and measurement technology, allowing for tremendous refractive index sensitivity while exploiting the temperature compensation advantages first demonstrated by Schroeder et al. BIB004 . Moving on from consideration of the physical mechanisms, in the context of the work presented here, perhaps the most widely used tool is the benchtop refractometer, upon which samples taken from processing steps are analysed in a lab to determine solution concentrations of sugar, alcohol, or solvents. Inline variants of this style of equipment have started to reach the market in recent years so that offline measurements may be replaced with at-line or inline measurements to save time and money and to reduce safety and contamination concerns. Generally these tools require an electrical signal at the point of measurement, which can be problematic in volatile environments that require intrinsically safe equipment to minimise the risk of explosion. This paper presents a review of recent advances that have been made in the development of an optical sensor that has proven to be suitable in the application areas described above. The sensor, an evanescent wave device based on planar Bragg gratings, offers both the required sensitivity for concentration measurements and process monitoring and is also suited to industrial environments where robust and reliable devices must be installed. The all-optical measurement means that there is no ignition risk, making the technology highly suitable for volatile environments. Additionally, because the underlying principles are the same as those used in the field of telecommunications, multiple devices may be networked and multiplexed over very large distances. Multiple devices located at separate and distant locations can easily be monitored from a single analysis base station to maximise convenience and minimise cost. To describe the sensing technology, an overview of the fabrication technique and its advantageous features is given. This is followed by an analysis of the device sensitivity to refractive index and temperature. To complete the presentation, two examples of industrial applications are given.
|
Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Background <s> It is demonstrated that direct ultraviolet writing of waveguides is a method suitable for mass production of compact variable optical attenuators with low insertion loss, low polarization-dependent loss, and high dynamic range. The fabrication setup is shown to be robust, providing good device performance over a period of many months without maintenance. <s> BIB001 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Background <s> We evaluate a wavelength interrogation technique based on an arrayed waveguide grating (AWG). Initial results show that the Bragg wavelength of fiber Bragg grating (FBG) sensors can be precisely interrogated by thermally scanning an AWG-based demultiplexer. The technique potentially offers a low-cost, compact, and high-performance solution for the interrogation of FBG distributed sensors and multisensor arrays. <s> BIB002 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Background <s> In this letter, we discuss the use of cyclic arrayed waveguide gratings in the construction of a multiplexed fiber-optic sensor system. The basic components are described and their role in interrogating sensors is discussed. The letter concludes with a proposed design for an expansive, low cost, highly maintainable system. <s> BIB003 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Background <s> We report a compact high-resolution arrayed waveguide grating (AWG) interrogator system designed to measure the relative wavelength spacing between two individual resonances of a tilted fiber Bragg grating (TFBG) refractometer. The TFBG refractometer benefits from an internal wavelength and power reference provided by the core mode reflection resonance that can be used to determine cladding mode perturbations with high accuracy. The AWG interrogator is a planar waveguide device fabricated on a silicon-on-insulator platform, having 50 channels with a 0.18 nm wavelength separation and a footprint of 8 mm × 8 mm. By overlaying two adjacent interference orders of the AWG we demonstrate simultaneous monitoring of two widely separated resonances in real time with high wavelength resolution. The standard deviation of the measured wavelength shifts is 1.2 pm, and it is limited by the resolution of the optical spectrum analyzer used for the interrogator calibration measurements. <s> BIB004
|
The core technology of the sensors discussed here is that of the Bragg grating, a structure that has been known for decades and has always been recognised as having the potential for use as a sensing element. Most commonly employed in optical fibres, the Bragg grating reflects optical wavelengths according to the relation λ_B = 2n_eff Λ, where λ_B is the Bragg wavelength at which maximum reflectivity occurs and Λ is the period of the refractive index modulation that defines the grating. The effective index of the waveguide that contains the Bragg grating, n_eff, is a combined refractive index of the core and cladding that the optical mode interacts with. From this equation it can be seen that as the material surrounding the Bragg grating changes, variation in the effective refractive index causes the reflected wavelength to shift. This forms the basis of the use of Bragg gratings as sensors and is shown conceptually in Figure 1 . The specific devices that will be outlined in the subsequent discussion use the technique of UV writing. This approach is highly flexible and has been extensively refined for the creation and postfabrication trimming BIB001 of optical devices suitable for, amongst other applications, the telecommunications industry. The sensors described here are fabricated using a unique extension to the UV writing technology known as Direct Grating Writing, which simultaneously creates a waveguide and a Bragg grating in a planar substrate. Early work on the conversion of the UV written substrates into liquid sensors has been presented along with demonstrations of their use as tunable filters and refractometers. Whilst this early work highlights some of the opportunities for such devices, the level of development was not initially suited towards full commercial exploitation. Such planar Bragg grating devices are appealing for sensing applications for several reasons. (1) Multiple wavelengths may be used, offering the possibility for analyte identification through optical dispersion measurements and also providing a range of evanescent field penetration depths which may provide additional information on the dimensions of biological entities. (2) Multiple gratings and sensing regions may be incorporated on a single chip, an advantage particularly in immunoassay-based biodetection where it is advantageous to test for multiple different targets simultaneously without the need for duplication of equipment or time delays. (3) The monolithic silicon chip-based design is robust, requires no electrical signal, is resistant to a wide range of chemicals, and is thus suitable for deployment in a wide range of environments. In order to get the maximum possible performance from a Bragg sensor the optical interrogation method is critical. Many industrial applications require multiple measurements to be made 24 hours per day, and so relatively high capital cost can be tolerated as the cost is shared. For top-end performance the interrogation can come at a relatively high financial cost, although technology improvements and commercial competition provide a strong drive for cost reduction over time. Additionally, the devices presented here are designed to operate in the telecommunications wavelength band. As such it is possible to have sensors positioned at distances of up to several kilometres from the optical source. More advantageous still, signals from multiple sensors at different locations may be multiplexed in such a way as to have many sensors all monitored by a single read-out unit. Thus the cost per sensor, which is often more important than total cost, is dramatically reduced.
Furthermore, the cost per measurement is lower still because of the opportunity to incorporate multiple sensing regions on a single chip. In addition there is the possibility to integrate the sensing element with the interrogation system by deploying, for example, Bragg gratings or Arrayed Waveguide Gratings BIB004 BIB003 BIB002 within the same chip, a technology that is highly compatible and readily available in the Silica-on-Silicon platform. This combination could lead to compact and self-contained sensing systems.
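As a minimal numerical sketch of the Bragg relation given above (a hedged illustration: the effective index and grating period below are assumed values, not measured device parameters), the following snippet shows how a small effective-index change maps to a reflected-wavelength shift:

```python
# Minimal sketch of the first-order Bragg relation: lambda_B = 2 * n_eff * period.
# All numerical values are illustrative assumptions, not device specifications.

def bragg_wavelength_nm(n_eff: float, period_nm: float) -> float:
    """First-order Bragg wavelength (nm) for a given effective index and grating period (nm)."""
    return 2.0 * n_eff * period_nm

n_eff = 1.447     # assumed effective index of a silica waveguide
period = 535.0    # assumed grating period in nm
lam = bragg_wavelength_nm(n_eff, period)          # ~1548 nm, in the telecom band

# An analyte-induced change in effective index shifts the reflected wavelength.
dn = 1e-4
shift_nm = bragg_wavelength_nm(n_eff + dn, period) - lam   # ~0.107 nm (~107 pm)
print(f"Bragg wavelength: {lam:.2f} nm, shift for dn = {dn}: {shift_nm * 1e3:.0f} pm")
```

A shift of this order (roughly 100 pm for a 10^−4 index change) sits comfortably above the picometre-level resolution of typical commercial Bragg grating interrogators.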
|
Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Journal of Sensors 5 <s> Presents an efficient method for the design of complex fiber Bragg gratings. The method relies on the synthesis of the impulse response of the grating by means of a differential layer-peeling algorithm. The algorithm developed takes into account all the multiple reflections inside the grating, giving an exact solution to the inverse scattering problem. Its low algorithmic complexity enables the synthesis of long fiber gratings. The method is illustrated by designing several filters with interest for optical fiber communication systems: dispersionless bandpass filters and second- and third order dispersion compensators. <s> BIB001 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Journal of Sensors 5 <s> A single-step technique for defining 2D channel waveguide structures with internal Bragg gratings in photosensitive germanosilica-on-silicon using two interfering focused UV beams is presented. Through software control, grating detuning across the S-, C-, and L-wavelength bands is also demonstrated. <s> BIB002
|
The relationship between translation speed and the rate at which the UV beam is modulated plays an important role in the creation of Bragg structures. Variation of the duty cycle can be used to control the strength of the Bragg grating. Duty cycle refers to the percentage of time that the UV beam is turned on in the process of writing a Bragg grating. Previously, the modulation of the UV beam whilst the sample is translated was discussed. The duration of this modulation as a percentage of the Bragg period gives the duty cycle. A duty cycle of 100 percent results in a waveguide with no refractive index modulation, whilst a duty cycle of 0 percent results in no refractive index change being written into the sample whatsoever. It is desirable for the average refractive index of a Bragg grating to be close to, if not identical to, the waveguide at either end of the grating. Although the link between UV power and induced refractive index change is not perfectly linear, to a close approximation fluence and duty cycle may be used to index-match waveguides and gratings very simply. A grating written with a 50-percent duty cycle with a given UV power must be translated under the UV beam at half the speed used for a waveguide written with the same power. This gives the same fluence and therefore approximately the same average refractive index in both the grating and waveguide. Similarly, a 90-percent duty cycle grating would be translated at 90 percent of the waveguide translation speed to "fluence match" the two structures. This simple relation is possible because the waveguides and gratings are being written into a "blank" substrate where no waveguide exists beforehand. In the case of fibre Bragg gratings this is slightly different, as the gratings are added to a pre-existing waveguide, and so the average index of the waveguide and fibre cannot be the same unless special extra steps are taken. Duty cycle may also be used to change the strength of the Bragg gratings written. Generally speaking, a higher duty cycle results in a lower contrast between the grating planes, giving a weaker response with a narrower bandwidth than one written with a lower duty cycle. An additional degree of flexibility may be achieved by modulating the UV beam at a rate very slightly different to the intrinsic period of the UV intensity modulation. The cumulative effect of the multiple UV exposures used to create a grating produces a grating with a period that is equal to the period between exposures, not the intrinsic period of the interference pattern BIB002 . In this way, gratings may be written spanning hundreds of nanometres using exactly the same process. Refinement of the UV writing process has allowed an immensely flexible technique for waveguide and grating production to be developed. Using specially developed software packages it becomes a simple matter to create scripts which, when loaded into the UV writing system, can produce straight and curved waveguides with sophisticated grating structures at multiple wavelengths in a single process. Not only can bespoke waveguide and grating designs be rapidly created without the need for expensive phase masks, but also no clean room facilities are needed, making the overall infrastructure requirements low. Some examples of grating spectra that may be written using this process are given below. All structures were created using the same proprietary software package to create the required grating responses.
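As a minimal arithmetic sketch of the fluence-matching rule just described (the waveguide writing speed used is an assumed illustrative value):

```python
# Fluence matching: keep the average UV fluence (and hence average index) of a
# grating equal to that of a plain waveguide by scaling translation speed by
# the duty cycle.

def grating_speed(waveguide_speed: float, duty_cycle: float) -> float:
    """Translation speed that fluence-matches a grating of the given duty cycle
    (0-1] to a waveguide written at waveguide_speed with the same UV power."""
    if not 0.0 < duty_cycle <= 1.0:
        raise ValueError("duty cycle must be in (0, 1]")
    return waveguide_speed * duty_cycle

v_wg = 100.0  # assumed waveguide writing speed (arbitrary units)
print(grating_speed(v_wg, 0.5))   # 50.0 -> a 50% duty cycle grating is written at half speed
print(grating_speed(v_wg, 0.9))   # 90.0 -> a 90% duty cycle grating at 90% of the speed
```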
Figure 3 compares a straightforward uniform grating with another grating of similar properties but which has been apodised using a cosine squared function. The reduction in sidelobes is clear, albeit at the expense of a slightly broader reflection peak. As the gratings are written in a manner that is close to being plane by plane, it is straightforward to achieve high levels of control of the grating structure along its length. For example, phase shifts may be inserted in order to achieve a sharp dip in the reflection response, which may in some cases be advantageous when determining the centre wavelength of peaks. Such a spectrum is shown in Figure 4 . Periodic spacing of phase shifts can be used to generate more elaborate grating structures which provide multiwavelength responses. An example of this is given in Figure 5 , which is a demonstration of a superstructured grating providing over 15 wavelengths that may be used for sensing purposes over a 130 nm wavelength span. This grating, just 4 mm in length, opens up opportunities to measure refractive index over sufficiently wide ranges to allow dispersion characterisation and thus material fingerprinting to be performed. This approach may be extended, as shown in Figure 6 , to use multiple superstructured gratings that are interlaced such that they allow a greater spectral density of measurements to be performed from a compact device, again written in a single step and just millimetres in dimensions. Drawing on the technology of fibre Bragg gratings it should be possible, if required, to fully control the spectral properties of Bragg gratings through the use of the novel design strategies employed in fibres BIB001 , implemented through the DGW fabrication technique. This advantage gives this particular planar platform further flexibility to develop customised, purpose-oriented sensors combined in a compact integrated form.
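A minimal sketch of the cosine-squared apodisation envelope used for the grating of Figure 3 follows. This is only the index-contrast weighting along the grating; the resulting spectrum is not modelled, and the plane count is an assumed illustrative figure (a 4 mm grating with a period around half a micron contains on the order of 7,000-8,000 planes):

```python
# Cosine-squared (equivalently sin^2) apodisation envelope: zero index contrast
# at the grating ends, full contrast at the centre, which suppresses the
# spectral sidelobes of a uniform grating.
import math

def cos2_apodisation(n_planes: int) -> list[float]:
    """Weight in [0, 1] applied to each grating plane's index contrast."""
    return [math.sin(math.pi * i / (n_planes - 1)) ** 2 for i in range(n_planes)]

env = cos2_apodisation(7480)                 # assumed plane count for a ~4 mm grating
print(env[0], env[len(env) // 2], env[-1])   # 0 at the ends, ~1 at the centre
```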
|
Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Creating a Sensor. <s> The proportion of power carried in the superstrate medium by the guided modes of integrated optical waveguides can be increased by the addition of a thin high index film. Enhanced refractive index sensing is demonstrated using channel waveguide Mach-Zehnder interferometers with Ta2O5 overlayers. Sensitivity increases by a factor greater than 50, and a detection limit better than 5×10^−7 are obtained. This approach is broadly applicable to sensing at waveguide surfaces where the strength of evanescent fields dictates performance. <s> BIB001 </s> Planar Bragg Grating Sensors—Fabrication and Applications: A Review <s> Creating a Sensor. <s> Sensing via fiber optics has occupied R&D groups for over 40 years, and some important transitions into the commercial sector have been achieved. We look at the fundamental concepts involved in the various sensing approaches, and the differentiators which have led to commercial impact. We also look to the future of fiber-optic sensors. <s> BIB002
|
In their as-written state, the UV written samples are intrinsically sensitive to temperature or stress and can be used as sensors in a manner comparable to that widely used in fibre sensing BIB002 . To exploit the advantages of the planar geometry, this temperature sensing ability may be combined with multiple liquid refractive index sensing regions. Conversion to a sensor uses relatively simple principles. The Bragg wavelength of a grating is determined by the effective index of the waveguiding structure in which the grating is defined. In other words, the combined refractive indices of the waveguide core and cladding play a key role in setting the wavelength or wavelengths at which the Bragg grating operates. If we remove the upper cladding and replace it with something else, the effective index and thus the Bragg wavelength are changed. In this way, the device can now be made into a sensor. Liquid that is used to replace the upper cladding in the vicinity of the Bragg grating will control the Bragg wavelength. As the properties of that liquid change, so the Bragg wavelength changes. The cladding over the sensor gratings is removed using a wet etch. The etchant is delivered to the silica surface using a microfluidic flow cell which allows the chemical to come into contact with only the areas of the chip where etching is required. The Bragg response is monitored throughout the etching process to ensure that the etch is allowed to proceed until sufficiently deep to ensure that the cladding is removed, but that it is stopped before the response is degraded. Following etching, the penetration depth of the optical mode into the liquid analyte is relatively low. To extend the penetration further, a high-index overlayer may be applied to the etched surface BIB001 . This has the effect of lifting the optical mode up towards the analyte, resulting in a much higher penetration depth and a much greater sensitivity to refractive index. A number of methods and materials may be used to achieve this effect depending on the desired upper surface material and the required refractive index sensitivity. To date several materials have been utilised, including silica, silicon nitride, silicon oxynitride, fluorinated polymers, titania, zirconia, and alumina. The different materials all require different processing steps and result in different end products. To perform measurements, the etched and overlayered chips are pigtailed using single mode fibre, and optical spectra are obtained with the use of commercially available Bragg grating interrogators. Figure 7 shows a photograph of a typical sensor chip after fabrication but before optical fibre pigtailing. The etched window containing the sensing region can be seen as a small oval to the right of the chip. Temperature measurement gratings are embedded in the left-hand side of the chip. It should be noted that the process of etching the chip and subsequently adding a high-index overlayer brings about a very high level of birefringence in the Bragg grating section of the device. In many applications this would not be tolerable as it would cause large amounts of polarisation dependence. Here, the effect is sufficiently large that TE and TM modes can be independently resolved. This means that instead of requiring polarisation control to obtain a reliable mode of operation, the simpler route of using an unpolarised optical source can, if desired, be taken.
Similarly, whilst overall optical loss would in many applications be required to be minimal, it is of less concern here. It is critical to measure changes in Bragg wavelength as accurately as possible, but this is not dependent on the optical power that is reflected.
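As a hedged sketch of how such a wavelength measurement would be converted into a refractive index reading, assuming a locally linear device sensitivity (the sensitivity figure below is an illustrative assumption, not a specification of the devices described here):

```python
# Convert a measured Bragg shift into an analyte index change via an assumed
# linear sensitivity S in nm per refractive index unit (RIU).

def index_change(shift_pm: float, sensitivity_nm_per_riu: float) -> float:
    """Analyte refractive index change inferred from a Bragg shift in picometres."""
    return (shift_pm * 1e-3) / sensitivity_nm_per_riu

S = 100.0  # assumed sensitivity of an overlayered sensor, nm/RIU
print(index_change(50.0, S))  # a 50 pm shift -> 5e-4 RIU
```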
|
Machine Translation Evaluation: A Survey <s> Introduction <s> We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models. Within our framework, we carry out a large number of experiments to understand better and explain why phrase-based models out-perform word-based models. Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations. Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance. Learning only syntactically motivated phrases degrades the performance of our systems. <s> BIB001 </s> Machine Translation Evaluation: A Survey <s> Introduction <s> The ACL-2005 Workshop on Parallel Texts hosted a shared task on building statistical machine translation systems for four European language pairs: French-English, German-English, Spanish-English, and Finnish-English. Eleven groups participated in the event. This paper describes the goals, the task definition and resources, as well as results and some analysis. <s> BIB002 </s> Machine Translation Evaluation: A Survey <s> Introduction <s> We present a statistical phrase-based translation model that uses hierarchical phrases---phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntax-based translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrase-based model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system. <s> BIB003 </s> Machine Translation Evaluation: A Survey <s> Introduction <s> We evaluated machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. Evaluation was done automatically using the Bleu score and manually on fluency and adequacy. <s> BIB004 </s> Machine Translation Evaluation: A Survey <s> Introduction <s> Parallel corpora are crucial for training SMT systems. However, for many language pairs they are available only in very limited quantities. For these language pairs a huge portion of phrases encountered at run-time will be unknown. We show how techniques from paraphrasing can be used to deal with these otherwise unknown source language phrases. Our results show that augmenting a state-of-the-art SMT system with paraphrases leads to significantly improved coverage and translation quality. For a training corpus with 10,000 sentence pairs we increase the coverage of unique test set unigrams from 48% to 90%, with more than half of the newly covered items accurately translated, as opposed to none in current approaches. <s> BIB005 </s> Machine Translation Evaluation: A Survey <s> Introduction <s> We describe an open-source toolkit for statistical machine translation whose novel contributions are (a) support for linguistically motivated factors, (b) confusion network decoding, and (c) efficient data formats for translation models and language models. 
In addition to the SMT decoder, the toolkit also includes a wide variety of tools for training, tuning and applying the system to many translation tasks. <s> BIB006 </s> Machine Translation Evaluation: A Survey <s> Introduction <s> We describe methods for improving the performance of statistical machine translation (SMT) between four linguistically different languages, i.e., Chinese, English, Japanese, and Korean by using morphosyntactic knowledge. For the purpose of reducing the translation ambiguities and generating grammatically correct and fluent translation output, we address the use of shallow linguistic knowledge, that is: (1) enriching a word with its morphosyntactic features, (2) obtaining shallow linguistically-motivated phrase pairs, (3) iteratively refining word alignment using filtered phrase pairs, and (4) building a language model from morphosyntactically enriched words. Previous studies reported that the introduction of syntactic features into SMT models resulted in only a slight improvement in performance in spite of the heavy computational expense, however, this study demonstrates the effectiveness of morphosyntactic features, when reliable, discriminative features are used. Our experimental results show that word representations that incorporate morphosyntactic features significantly improve the performance of the translation model and language model. Moreover, we show that refining the word alignment using fine-grained phrase pairs is effective in improving system performance. <s> BIB007 </s> Machine Translation Evaluation: A Survey <s> Introduction <s> Automatic word alignment plays a critical role in statistical machine translation. Unfortunately, the relationship between alignment quality and statistical machine translation performance has not been well understood. In the recent literature, the alignment task has frequently been decoupled from the translation task and assumptions have been made about measuring alignment quality for machine translation which, it turns out, are not justified. In particular, none of the tens of papers published over the last five years has shown that significant decreases in alignment error rate (AER) result in significant increases in translation performance. This paper explains this state of affairs and presents steps towards measuring alignment quality in a way which is predictive of statistical machine translation performance. <s> BIB008 </s> Machine Translation Evaluation: A Survey <s> Introduction <s> This paper presents the results of the WMT10 and MetricsMATR10 shared tasks, which included a translation task, a system combination task, and an evaluation task. We conducted a large-scale manual evaluation of 104 machine translation systems and 41 system combination entries. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 26 metrics. This year we also investigated increasing the number of human judgments by hiring non-expert annotators through Amazon's Mechanical Turk. <s> BIB009 </s> Machine Translation Evaluation: A Survey <s> Introduction <s> In this paper, we develop an approach called syntax-based reordering (SBR) to handling the fundamental problem of word ordering for statistical machine translation (SMT).
We propose to alleviate the word order challenge including morpho-syntactical and statistical information in the context of a pre-translation reordering framework aimed at capturing short- and long-distance word distortion dependencies. We examine the proposed approach from the theoretical and experimental points of view discussing and analyzing its advantages and limitations in comparison with some of the state-of-the-art reordering methods. In the final part of the paper, we describe the results of applying the syntax-based model to translation tasks with a great need for reordering (Chinese-to-English and Arabic-to-English). The experiments are carried out on standard phrase-based and alternative N-gram-based SMT systems. We first investigate sparse training data scenarios, in which the translation and reordering models are trained on a sparse bilingual data, then scaling the method to a large training set and demonstrating that the improvement in terms of translation quality is maintained. <s> BIB010 </s> Machine Translation Evaluation: A Survey <s> Introduction <s> This paper describes a method for the automatic inference of structural transfer rules to be used in a shallow-transfer machine translation (MT) system from small parallel corpora. The structural transfer rules are based on alignment templates, like those used in statistical MT. Alignment templates are extracted from sentence-aligned parallel corpora and extended with a set of restrictions which are derived from the bilingual dictionary of the MT system and control their application as transfer rules. The experiments conducted using three difierent language pairs in the free/open-source MT platform Apertium show that translation quality is improved as compared to word-for-word translation (when no transfer rules are used), and that the resulting translation quality is close to that obtained using hand-coded transfer rules. The method we present is entirely unsupervised and benefits from information in the rest of modules of the MT system in which the inferred rules are applied. <s> BIB011 </s> Machine Translation Evaluation: A Survey <s> Introduction <s> This paper presents the results of the WMT14 shared tasks, which included a standard news translation task, a separate medical translation task, a task for run-time estimation of machine translation quality, and a metrics task. This year, 143 machine translation systems from 23 institutions were submitted to the ten translation directions in the standard translation task. An additional 6 anonymized systems were included, and were then evaluated both automatically and manually. The quality estimation task had four subtasks, with a total of 10 teams, submitting 57 entries <s> BIB012
|
Machine translation (MT) began as early as the 1950s, and has developed rapidly since the 1990s due to the growth in storage and computing power and the wide availability of multilingual and bilingual corpora. There are many important works in the MT area; to mention some in chronological order: the IBM Watson research group designed five statistical MT models and methods for estimating the parameters of these models given bilingual translation corpora; BIB001 proposed the statistical phrase-based MT model; Och presented Minimum Error Rate Training (MERT) for log-linear statistical machine translation models; BIB002 introduced a shared task of building statistical machine translation (SMT) systems for four European language pairs; BIB003 proposed a hierarchical phrase-based SMT model that is learned from a bitext without syntactic information; (Menezes et al., 2006) introduced a syntactically informed phrasal SMT system for English-to-Spanish translation using a phrase translation model based on global reordering and dependency trees; BIB006 developed the open-source SMT toolkit Moses; BIB007 utilized shallow linguistic knowledge to improve word alignment and language model quality between linguistically different languages; BIB008 discussed the relationship between word alignment and the quality of machine translation; BIB011 described an unsupervised method for the automatic inference of structural transfer rules for a shallow-transfer machine translation system; and BIB010 designed an effective syntax-based reordering approach to address the word ordering problem. Neural MT (NMT) is a recently active topic that conducts the automatic translation workflow very differently from the traditional phrase-based SMT methods. Instead of training the different MT components separately, an NMT model utilizes an artificial neural network (ANN) to learn the model jointly to maximize the translation performance, through a recurrent neural network (RNN) encoder and decoder (Wolk and Marasek, 2015). There are far more representative MT works than we have listed here. Due to the widespread development of MT systems, MT evaluation has become more and more important to tell us how well the MT systems perform and whether they make progress. However, MT evaluation is difficult because natural languages are highly ambiguous and different languages do not always express the same content in the same way. There are several events that promote the development of MT and MT evaluation research. One of them is the NIST open machine translation evaluation series (OpenMT), which were very prestigious evaluation campaigns running from 2001 (Group, 2010). The innovation of MT and evaluation methods has also been promoted by the annual Workshop on Statistical Machine Translation (WMT) BIB004 BIB005 BIB009 BIB009 BIB009 BIB009 BIB012 organized by the special interest group in machine translation (SIGMT) since 2006. These evaluation campaigns focus on European languages. There are roughly two tracks in the annual WMT workshop: the translation task and the evaluation task. The tested language pairs are clearly divided into two directions, i.e., English-to-other and other-to-English, covering French, German, Spanish, Czech, Hungarian, Haitian Creole and Russian. Another promotion is the international workshop on spoken language translation (IWSLT), which has been organized annually since 2004.
This campaign has a stronger focus on speech translation, including English and Asian languages, e.g., Chinese, Japanese and Korean. Better evaluation metrics will surely be helpful to the development of better MT systems. Due to all the above efforts, MT evaluation research has developed rapidly. This paper is organized as follows: Sections 2 and 3 discuss human assessment methods and automatic evaluation methods respectively, Section 4 introduces methods for evaluating the MT evaluation metrics themselves, Section 5 covers advanced MT evaluation, Section 6 presents the discussion and related works, and a perspective is given in Section 7.
|
Machine Translation Evaluation: A Survey <s> Fluency, Adequacy and Comprehension <s> We address the text-to-text generation problem of sentence-level paraphrasing --- a phenomenon distinct from and more difficult than word- or phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems. <s> BIB001 </s> Machine Translation Evaluation: A Survey <s> Fluency, Adequacy and Comprehension <s> Abstract The quality of machine translation is rapidly evolving. Today one can find several machine translation systems on the web that provide reasonable translations, although the systems are not perfect. In some specific domains, the quality may decrease. A recently proposed approach to this domain is neural machine translation. It aims at building a jointly-tuned single neural network that maximizes translation performance, a very different approach from traditional statistical machine translation. Recently proposed neural machine translation models often belong to the encoder-decoder family in which a source sentence is encoded into a fixed length vector that is, in turn, decoded to generate a translation. The present research examines the effects of different training methods on a Polish-English Machine Translation system used for medical data. The European Medicines Agency parallel text corpus was used as the basis for training of neural and statistical network-based translation systems. The main machine translation evaluation metrics have also been used in analysis of the systems. A comparison and implementation of a real-time medical translator is the main focus of our experiments. <s> BIB002
|
In the 1990s, the Advanced Research Projects Agency (ARPA) created a methodology to evaluate machine translation systems using adequacy, fluency and comprehension (Church and Hovy, 1991) in MT evaluation campaigns BIB002 . The evaluator is asked to look at each fragment, delimited by a syntactic constituent and containing sufficient information, and judge its adequacy on a scale of 1 to 5. The results are computed by averaging the judgments over all of the decisions in the translation set. The fluency evaluation is conducted in the same manner as the adequacy evaluation, except that the evaluator makes intuitive judgments on a sentence-by-sentence basis for each translation. The evaluators are asked to determine whether the translation is good English without reference to the correct translation. The fluency evaluation determines whether the sentence is well-formed and fluent in context. The modified comprehension measure developed into "informativeness", whose objective is to measure a system's ability to produce a translation that conveys sufficient information, such that people can gain the necessary information from it. Developed from the reference set of expert translations, each of six questions has six possible answers, including "none of the above" and "cannot be determined". BIB001 conducted research developing accuracy into several kinds, including simple string accuracy, generation string accuracy, and two corresponding tree-based accuracies. Reeder (2004) shows the correlation between fluency and the number of words it takes to distinguish between human translation and machine translation.
|
Machine Translation Evaluation: A Survey <s> Segment Ranking <s> This paper presents the results of the WMT10 and MetricsMATR10 shared tasks, which included a translation task, a system combination task, and an evaluation task. We conducted a large-scale manual evaluation of 104 machine translation systems and 41 system combination entries. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 26 metrics. This year we also investigated increasing the number of human judgments by hiring non-expert annotators through Amazon's Mechanical Turk. <s> BIB001
|
In the WMT metrics task, human assessment based on segment ranking is usually employed. Judges are frequently asked to provide a complete ranking over all the candidate translations of the same source segment BIB001 BIB001 . In the recent WMT tasks, five systems are randomly selected for the judges to rank. Each time, the source segment and the reference translation are presented to the judges together with the candidate translations of the five systems. The judges rank the systems from 1 to 5, allowing ties. For each ranking, there is the potential to provide as many as 10 pairwise results if there are no ties. The collected pairwise rankings can be used to assign a score to each participating system to reflect the quality of its automatic translations. The assigned score can also be utilized to reflect how frequently a system is judged to be better or worse than other systems when they are compared on the same source segment, according to the following formula: score = #better pairwise rankings / (#total pairwise comparisons − #tie comparisons).
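A minimal sketch of this scoring computation, with assumed counts, is:

```python
# WMT-style pairwise score: fraction of non-tied pairwise comparisons a system
# wins. The counts below are illustrative assumptions.

def pairwise_score(wins: int, total_comparisons: int, ties: int) -> float:
    return wins / (total_comparisons - ties)

print(pairwise_score(wins=120, total_comparisons=300, ties=60))  # 0.5
```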
|
Machine Translation Evaluation: A Survey <s> Precision and Recall <s> Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations. <s> BIB001 </s> Machine Translation Evaluation: A Survey <s> Precision and Recall <s> Following the recent adoption by the machine translation community of automatic evaluation using the BLEU/NIST scoring process, we conduct an in-depth study of a similar idea for evaluating summaries. The results show that automatic evaluation using unigram co-occurrences between summary pairs correlates surprising well with human evaluations, based on various statistical metrics; while direct application of the BLEU evaluation procedure does not always give good results. <s> BIB002 </s> Machine Translation Evaluation: A Survey <s> Precision and Recall <s> Evaluation of MT evaluation measures is limited by inconsistent human judgment data. Nonetheless, machine translation can be evaluated using the well-known measures precision, recall, and their average, the F-measure. The unigram-based F-measure has significantly higher correlation with human judgments than recently proposed alternatives. More importantly, this standard measure has an intuitive graphical interpretation, which can facilitate insight into how MT systems might be improved. <s> BIB003 </s> Machine Translation Evaluation: A Survey <s> Precision and Recall <s> In this paper we describe two new objective automatic evaluation methods for machine translation. The first method is based on longest common subsequence between a candidate translation and a set of reference translations. Longest common subsequence takes into account sentence level structure similarity naturally and identifies longest co-occurring in-sequence n-grams automatically. The second method relaxes strict n-gram matching to skip-bigram matching. Skip-bigram is any pair of words in their sentence order. Skip-bigram cooccurrence statistics measure the overlap of skip-bigrams between a candidate translation and a set of reference translations. The empirical results show that both methods correlate with human judgments very well in both adequacy and fluency. <s> BIB004 </s> Machine Translation Evaluation: A Survey <s> Precision and Recall <s> We describe METEOR, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machine-produced translation and human-produced reference translations. Unigrams can be matched based on their surface forms, stemmed forms, and meanings; furthermore, METEOR can be easily extended to include more advanced matching strategies. Once all generalized unigram matches between the two strings have been found, METEOR computes a score for this matching using a combination of unigram-precision, unigram-recall, and a measure of fragmentation that is designed to directly capture how well-ordered the matched words in the machine translation are in relation to the reference. We evaluate METEOR by measuring the correlation between the metric scores and human judgments of translation quality.
We compute the Pearson R correlation value between its scores and human quality assessments of the LDC TIDES 2003 Arabic-to-English and Chinese-to-English datasets. We perform segment-by-segment correlation, and show that METEOR gets an R correlation value of 0.347 on the Arabic data and 0.331 on the Chinese data. This is shown to be an improvement on using simply unigram-precision, unigram-recall and their harmonic F1 combination. We also perform experiments to show the relative contributions of the various mapping modules. <s> BIB005
|
The widely used evaluation metric BLEU BIB001 is based on the degree of n-gram overlap between the strings of words produced by the machine and the human translation references at the corpus level. BLEU computes the precision p_n for n-grams of size 1 to 4, combined with a brevity penalty (BP): BLEU = BP · exp(Σ_{n=1}^{N} λ_n log p_n), with BP = 1 if c > r and BP = e^{(1 − r/c)} otherwise, where c is the total length of the candidate translation corpus, and r refers to the sum of the effective reference sentence lengths in the corpus. If there are multiple references for each candidate sentence, then the nearest length as compared to the candidate sentence is selected as the effective one. In the BLEU metric, the n-gram precision weights λ_n are usually selected as uniform weights. However, the 4-gram precision value is usually very low or even zero when the test corpus is small. To weight more heavily those n-grams that are more informative, the NIST metric was proposed with information weights added. Furthermore, NIST replaces the geometric mean of co-occurrences with the arithmetic average of n-gram counts, extends the n-grams into 5-grams (N = 5), and selects the average length of the reference translations instead of the nearest length. ROUGE BIB002 is a recall-oriented automated evaluation metric, which was initially developed for summaries. Following the adoption by the machine translation community of automatic evaluation using the BLEU/NIST scoring process, Lin conducted a study of a similar idea for evaluating summaries. ROUGE has also been applied to automatic machine translation evaluation work BIB004 . BIB003 conducted experiments to examine how standard measures such as precision, recall and F-measure can be applied to the evaluation of MT, and showed comparisons of these standard measures with some existing alternative evaluation measures. F-measure is the combination of precision (P) and recall (R); it was first employed in information retrieval and has later been adopted by information extraction, MT evaluation and other tasks. BIB005 designed a novel evaluation metric, METEOR. METEOR is based on a general concept of flexible unigram matching, unigram precision and unigram recall, including the matching of words that are simple morphological variants of each other (sharing an identical stem) and words that are synonyms of each other. To measure how well-ordered the matched words in the candidate translation are in relation to the human reference, METEOR introduces a penalty coefficient based on the number of matched chunks.
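The following is a minimal single-reference BLEU sketch following the formulas above, with uniform weights and N = 4. It is illustrative only: production implementations (and the NIST variant) add smoothing, multiple references, and other refinements:

```python
# Minimal single-reference BLEU with clipped n-gram precisions and brevity penalty.
import math
from collections import Counter

def ngrams(tokens: list[str], n: int) -> Counter:
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate: list[str], reference: list[str], max_n: int = 4) -> float:
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        clipped = sum(min(count, ref[gram]) for gram, count in cand.items())
        if clipped == 0:
            return 0.0  # unsmoothed: any zero n-gram precision zeroes the score
        log_prec_sum += math.log(clipped / sum(cand.values())) / max_n  # uniform weights
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1.0 - r / c)  # brevity penalty
    return bp * math.exp(log_prec_sum)

hyp = "the cat sat on the mat".split()
ref = "the cat sat on a mat".split()
print(f"{bleu(hyp, ref):.3f}")  # ~0.537
```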
|
Machine Translation Evaluation: A Survey <s> Word Order <s> Many machine translation (MT) evaluation metrics have been shown to correlate better with human judgment than BLEU. In principle, tuning on these metrics should yield better systems than tuning on BLEU. However, due to issues such as speed, requirements for linguistic resources, and optimization difficulty, they have not been widely adopted for tuning. This paper presents PORT, a new MT evaluation metric which combines precision, recall and an ordering metric and which is primarily designed for tuning MT systems. PORT does not require external resources and is quick to compute. It has a better correlation with human judgment than BLEU. We compare PORT-tuned MT systems to BLEU-tuned baselines in five experimental conditions involving four language pairs. PORT tuning achieves consistently better performance than BLEU tuning, according to four automated metrics (including BLEU) and to human evaluation: in comparisons of outputs from 300 source sentences, human judges preferred the PORT-tuned output 45.3% of the time (vs. 32.7% BLEU tuning preferences and 22.0% ties). <s> BIB001 </s> Machine Translation Evaluation: A Survey <s> Word Order <s> With the rapid development of machine translation (MT), the MT evaluation becomes very important to timely tell us whether the MT system makes any progress. The conventional MT evaluation methods tend to calculate the similarity between hypothesis translations offered by automatic translation systems and reference translations offered by professional translators. There are several weaknesses in existing evaluation metrics. Firstly, the designed incomprehensive factors result in language-bias problem, which means they perform well on some special language pairs but weak on other language pairs. Secondly, they tend to use no linguistic features or too many linguistic features, of which no usage of linguistic feature draws a lot of criticism from the linguists and too many linguistic features make the model weak in repeatability. Thirdly, the employed reference translations are very expensive and sometimes not available in the practice. In this paper, the authors propose an unsupervised MT evaluation metric using universal part-of-speech tagset without relying on reference translations. The authors also explore the performances of the designed metric on traditional supervised evaluation tasks. Both the supervised and unsupervised experiments show that the designed methods yield higher correlation scores with human judgments. <s> BIB002
|
The right word order plays an important role in ensuring a high-quality translation output. However, linguistic diversity also allows different appearances or structures of a sentence expressing the same content. How to penalize genuinely wrong word order (a wrongly structured sentence) while not penalizing "correctly" different order, where a candidate sentence whose word order differs from the reference is nevertheless well structured, attracts a lot of interest from researchers in the NLP literature. In fact, the Levenshtein distance and n-gram based measures also contain word order information. Featuring the explicit assessment of word order and word choice, (Wong and Kit, 2009) developed the evaluation metric ATEC (assessment of text essential characteristics). It is also based on precision and recall criteria, but with a designed position difference penalty coefficient attached. The word choice is assessed by matching word forms at various linguistic levels, including surface form, stem, sound and sense, and further by weighing the informativeness of each word. Combining precision, order, and recall information together, BIB001 developed the automatic evaluation metric PORT, initially intended for the tuning of MT systems to output higher quality translations. Another evaluation metric, LEPOR BIB002 , is proposed as the combination of many evaluation factors including an n-gram based word order penalty in addition to precision, recall, and a sentence-length penalty. The LEPOR metric yields excellent performance on the English-to-other (Spanish, German, French, Czech and Russian) language pairs in the ACL-WMT13 metrics shared tasks at system-level evaluation.
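To illustrate the general idea of a position-difference penalty (a simplified illustration only, not the exact ATEC or LEPOR formulation; real metrics align words far more carefully):

```python
# Toy position-difference penalty: matched words are penalized by their
# normalized displacement between candidate and reference, so a reshuffled but
# lexically identical output is penalized while a correctly ordered one is not.

def position_penalty(candidate: list[str], reference: list[str]) -> float:
    """Mean normalized position difference over words shared by both sentences."""
    diffs = []
    for i, word in enumerate(candidate):
        if word in reference:
            j = reference.index(word)  # first occurrence; a crude alignment
            diffs.append(abs(i / len(candidate) - j / len(reference)))
    return sum(diffs) / len(diffs) if diffs else 1.0

ref = "the cat sat on a mat".split()
print(position_penalty("the cat sat on a mat".split(), ref))  # 0.0 (same order)
print(position_penalty("on a mat the cat sat".split(), ref))  # 0.5 (reordered)
```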
|
Machine Translation Evaluation: A Survey <s> Syntactic Similarity <s> Automatic evaluation of machine translation, based on computing n-gram similarity between system output and human reference translations, has revolutionized the development of MT systems. We explore the use of syntactic information, including constituent labels and head-modier dependencies, in computing similarity between output and reference. Our results show that adding syntactic information to the evaluation metric improves both sentence-level and corpus-level correlation with human judgments. <s> BIB001 </s> Machine Translation Evaluation: A Survey <s> Syntactic Similarity <s> Evaluation and error analysis of machine translation output are important but difficult tasks. In this work, we propose a novel method for obtaining more details about actual translation errors in the generated output by introducing the decomposition of Word Error Rate (Wer) and Position independent word Error Rate (Per) over different Part-of-Speech (Pos) classes. Furthermore, we investigate two possible aspects of the use of these decompositions for automatic error analysis: estimation of inflectional errors and distribution of missing words over Pos classes. The obtained results are shown to correspond to the results of a human error analysis. The results obtained on the European Parliament Plenary Session corpus in Spanish and English give a better overview of the nature of translation errors as well as ideas of where to put efforts for possible improvements of the translation system. <s> BIB002 </s> Machine Translation Evaluation: A Survey <s> Syntactic Similarity <s> Current metrics for evaluating machine translation quality have the huge drawback that they require human-quality reference translations. We propose a truly automatic evaluation metric based on ibm1 lexicon probabilities which does not need any reference translations. Several variants of ibm1 scores are systematically explored in order to find the most promising directions. Correlations between the new metrics and human judgments are calculated on the data of the third, fourth and fifth shared tasks of the Statistical Machine Translation Workshop. Five different European languages are taken into account: English, Spanish, French, German and Czech. The results show that the ibm1 scores are competitive with the classic evaluation metrics, the most promising being ibm1 scores calculated on morphemes and pos-4grams. <s> BIB003 </s> Machine Translation Evaluation: A Survey <s> Syntactic Similarity <s> We present a pilot study on an evaluation method which is able to rank translation outputs with no reference translation, given only their source sentence. The system employs a statistical classifier trained upon existing human rankings, using several features derived from analysis of both the source and the target sentences. Development experiments on one language pair showed that the method has considerably good correlation with human ranking when using features obtained from a PCFG parser. <s> BIB004 </s> Machine Translation Evaluation: A Survey <s> Syntactic Similarity <s> We introduce a novel semi-automated metric, MEANT, that assesses translation utility by matching semantic role fillers, producing scores that correlate with human judgment as well as HTER but at much lower labor cost. 
As machine translation systems improve in lexical choice and fluency, the shortcomings of widespread n-gram based, fluency-oriented MT evaluation metrics such as BLEU, which fail to properly evaluate adequacy, become more apparent. But more accurate, non-automatic adequacy-oriented MT evaluation metrics like HTER are highly labor-intensive, which bottlenecks the evaluation cycle. We first show that when using untrained monolingual readers to annotate semantic roles in MT output, the non-automatic version of the metric HMEANT achieves a 0.43 correlation coefficient with human adequacy judgments at the sentence level, far superior to BLEU at only 0.20, and equal to the far more expensive HTER. We then replace the human semantic role annotators with automatic shallow semantic parsing to further automate the evaluation metric, and show that even the semi-automated evaluation metric achieves a 0.34 correlation coefficient with human adequacy judgment, which is still about 80% as closely correlated as HTER despite an even lower labor cost for the evaluation procedure. The results show that our proposed metric is significantly better correlated with human judgment on adequacy than current widespread automatic evaluation metrics, while being much more cost effective than HTER. <s> BIB005 </s> Machine Translation Evaluation: A Survey <s> Syntactic Similarity <s> This paper presents the utilization of chunk phrases to facilitate evaluation of machine translation. Since most of current researches on evaluation take great effects to evaluate translation quality on content relevance and readability, we further introduce high-level abstract information such as semantic similarity and topic model into this phrase-based evaluation metric. The proposed metric mainly involves three parts: calculating phrase similarity, determining weight to each phrase, and finding maximum similarity map. Experiments on MTC Part 2 (LDC2003T17) show our metric, compared with other popular metrics such as BLEU, MAXSIM and METEOR, achieves comparable correlation with human judgements at segment-level and significant higher correlation at document-level. TITLE AND ABSTRACT IN ANOTHER LANGUAGE (CHINESE) <s> BIB006 </s> Machine Translation Evaluation: A Survey <s> Syntactic Similarity <s> We introduce the first fully automatic, fully semantic frame based MT evaluation metric, MEANT, that outperforms all other commonly used automatic metrics in correlating with human judgment on translation adequacy. Recent work on HMEANT, which is a human metric, indicates that machine translation can be better evaluated via semantic frames than other evaluation paradigms, requiring only minimal effort from monolingual humans to annotate and align semantic frames in the reference and machine translations. We propose a surprisingly effective Occam's razor automation of HMEANT that combines standard shallow semantic parsing with a simple maximum weighted bipartite matching algorithm for aligning semantic frames. The matching criterion is based on lexical similarity scoring of the semantic role fillers through a simple context vector model which can readily be trained using any publicly available large monolingual corpus. Sentence level correlation analysis, following standard NIST MetricsMATR protocol, shows that this fully automated version of HMEANT achieves significantly higher Kendall correlation with human adequacy judgments than BLEU, NIST, METEOR, PER, CDER, WER, or TER. 
Furthermore, we demonstrate that performing the semantic frame alignment automatically actually tends to be just as good as performing it manually. Despite its high performance, fully automated MEANT is still able to preserve HMEANT's virtues of simplicity, representational transparency, and inexpensiveness. <s> BIB007 </s> Machine Translation Evaluation: A Survey <s> Syntactic Similarity <s> Many treebanks have been developed in recent years for different languages. But these treebanks usually employ different syntactic tag sets. This forms an obstacle for other researchers to take full advantages of them, especially when they undertake the multilingual research. To address this problem and to facilitate future research in unsupervised induction of syntactic structures, some researchers have developed a universal POS tag set. However, the disaccord problem of the phrase tag sets remains unsolved. Trying to bridge the phrase level tag sets of multilingual treebanks, this paper designs a phrase mapping between the French Treebank and the English Penn Treebank. Furthermore, one of the potential applications of this mapping work is explored in the machine translation evaluation task. This novel evaluation model developed without using reference translations yields promising results as compared to the state-of-the-art evaluation metrics. <s> BIB008 </s> Machine Translation Evaluation: A Survey <s> Syntactic Similarity <s> With the rapid development of machine translation (MT), the MT evaluation becomes very important to timely tell us whether the MT system makes any progress. The conventional MT evaluation methods tend to calculate the similarity between hypothesis translations offered by automatic translation systems and reference translations offered by professional translators. There are several weaknesses in existing evaluation metrics. Firstly, the designed incomprehensive factors result in language-bias problem, which means they perform well on some special language pairs but weak on other language pairs. Secondly, they tend to use no linguistic features or too many linguistic features, of which no usage of linguistic feature draws a lot of criticism from the linguists and too many linguistic features make the model weak in repeatability. Thirdly, the employed reference translations are very expensive and sometimes not available in the practice. In this paper, the authors propose an unsupervised MT evaluation metric using universal part-of-speech tagset without relying on reference translations. The authors also explore the performances of the designed metric on traditional supervised evaluation tasks. Both the supervised and unsupervised experiments show that the designed methods yield higher correlation scores with human judgments. <s> BIB009
|
Syntactic similarity methods usually employ features such as morphological part-of-speech information, phrase categories, or sentence structure, generated by linguistic tools such as a language parser or chunker. In grammar, a part of speech (POS) is a linguistic category of words or lexical items, generally defined by the syntactic or morphological behavior of the lexical item. Common linguistic categories of lexical items include noun, verb, adjective, adverb, and preposition. To reflect the syntactic quality of automatically translated sentences, some researchers incorporate POS information into their evaluations. Using IBM model 1, BIB003 evaluate translation quality by calculating similarity scores between the source and target (translated) sentences without using reference translations, based on morphemes, 4-gram POS sequences, and lexicon probabilities. The evaluation metric TESLA combines synonyms from bilingual phrase tables and POS information in its matching task. Other similar works using POS information include (Giménez and Márquez, 2007; BIB002 BIB009). In linguistics, a phrase may refer to any group of words that forms a constituent and so functions as a single unit in the syntax of a sentence. To measure an MT system's performance in translating new text-types, and to examine in what ways the system itself could be extended to deal with them, one research work focuses on an English-to-Danish machine-translation system. The syntactic constructions are explored with more complex linguistic knowledge, such as the identification of fronted adverbial subordinate clauses and prepositional phrases. Assuming that similar grammatical structures should occur in both the source and its translation, BIB004 perform the evaluation on source (German) and target (English) sentences, employing features such as the sentence length ratio, unknown words, and phrase counts, including noun phrases, verb phrases, and prepositional phrases. Other similar works using phrase similarity include BIB006, which uses noun phrases and verb phrases from chunking, (Echizen-ya and Araki, 2010), which uses only noun phrase chunking in automatic evaluation, and BIB008, which designs a universal phrase tagset for French-to-English MT evaluation. Syntax is the study of the principles and processes by which sentences are constructed in particular languages. To address the overall goodness of the translated sentence's structure, BIB001 employ constituent labels and head-modifier dependencies from a language parser as syntactic features for MT evaluation. They compute the similarity of dependency trees. Their experiments show that adding syntactic information can improve evaluation performance, especially for predicting the fluency of hypothesis translations. Other works using syntactic information in evaluation include BIB005 and BIB007, which use an automatic shallow parser.
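As an illustration of how POS information can enter an evaluation score, the following hedged Python sketch computes a clipped POS 4-gram precision between two tag sequences; it assumes the sequences come from some external POS tagger and is not any specific published metric.

```python
from collections import Counter

def pos_ngram_precision(hyp_tags, ref_tags, n=4):
    """Fraction of POS n-grams in the hypothesis tag sequence that also
    occur in the reference tag sequence (counts clipped to the reference)."""
    def ngrams(tags):
        return Counter(tuple(tags[i:i + n]) for i in range(len(tags) - n + 1))
    hyp_ng, ref_ng = ngrams(hyp_tags), ngrams(ref_tags)
    if not hyp_ng:
        return 0.0
    overlap = sum(min(count, ref_ng[gram]) for gram, count in hyp_ng.items())
    return overlap / sum(hyp_ng.values())

# Tag sequences would normally come from a POS tagger; shown inline here:
hyp = ["PRON", "VERB", "DET", "NOUN", "ADP", "DET", "NOUN"]
ref = ["PRON", "VERB", "DET", "ADJ", "NOUN", "ADP", "DET", "NOUN"]
print(pos_ngram_precision(hyp, ref))  # 0.25
```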
|
Machine Translation Evaluation: A Survey <s> Semantic Similarity <s> Standard alphabetical procedures for organizing lexical information put together words that are spelled alike and scatter words with similar or related meanings haphazardly through the list. Unfortunately, there is no obvious alternative, no other simple way for lexicographers to keep track of what has been done or for readers to find the word they are looking for. But a frequent objection to this solution is that finding things on an alphabetical list can be tedious and time-consuming. Many people who would like to refer to a dictionary decide not to bother with it because finding the information would interrupt their work and break their train of thought. <s> BIB001 </s> Machine Translation Evaluation: A Survey <s> Semantic Similarity <s> We address the text-to-text generation problem of sentence-level paraphrasing --- a phenomenon distinct from and more difficult than word- or phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems. <s> BIB002 </s> Machine Translation Evaluation: A Survey <s> Semantic Similarity <s> In this paper we investigate the possibility of evaluating MT quality and fluency at the sentence level in the absence of reference translations. We measure the correlation between automatically-generated scores and human judgments, and we evaluate the per- formance of our system when used as a classifier for identifying highly dysfluent and ill- formed sentences. We show that we can substantially improve on the correlation between language model perplexity scores and human judgment by combining these perplexity scores with class probabilities from a machine-learned classifier. The classifier uses linguis- tic features and has been trained to distinguish human translations from machine transla- tions. We show that this approach also performs well in identifying dysfluent sentences. <s> BIB003 </s> Machine Translation Evaluation: A Survey <s> Semantic Similarity <s> Synchronous Context-Free Grammars (SCFGs) have been successfully exploited as translation models in machine translation applications. When parsing with an SCFG, computational complexity grows exponentially with the length of the rules, in the worst case. In this paper we examine the problem of factorizing each rule of an input SCFG to a generatively equivalent set of rules, each having the smallest possible length. Our algorithm works in time O(n log n), for each rule of length n. This improves upon previous results and solves an open problem about recognizing permutations that can be factored. <s> BIB004 </s> Machine Translation Evaluation: A Survey <s> Semantic Similarity <s> We describe an open-source toolkit for statistical machine translation whose novel contributions are (a) support for linguistically motivated factors, (b) confusion network decoding, and (c) efficient data formats for translation models and language models. In addition to the SMT decoder, the toolkit also includes a wide variety of tools for training, tuning and applying the system to many translation tasks. 
<s> BIB005 </s> Machine Translation Evaluation: A Survey <s> Semantic Similarity <s> This document describes the approach by the NLP Group at the Technical University of Catalonia (UPC-LSI), for the shared task on Automatic Evaluation of Machine Translation at the ACL 2008 Third SMT Workshop. <s> BIB006 </s> Machine Translation Evaluation: A Survey <s> Semantic Similarity <s> This paper addresses the problem of Named Entity Recognition in Query (NERQ), which involves detection of the named entity in a given query and classification of the named entity into predefined classes. NERQ is potentially useful in many applications in web search. The paper proposes taking a probabilistic approach to the task using query log data and Latent Dirichlet Allocation. We consider contexts of a named entity (i.e., the remainders of the named entity in queries) as words of a document, and classes of the named entity as topics. The topic model is constructed by a novel and general learning method referred to as WS-LDA (Weakly Supervised Latent Dirichlet Allocation), which employs weakly supervised learning (rather than unsupervised learning) using partially labeled seed entities. Experimental results show that the proposed method based on WS-LDA can accurately perform NERQ, and outperform the baseline methods. <s> BIB007 </s> Machine Translation Evaluation: A Survey <s> Semantic Similarity <s> We present two evaluation measures for Machine Translation (MT), which are defined as error rates extended by block moves. In contrast to Ter, these measures are constrained in a way that allows for an exact calculation in polynomial time. We then investigate three methods to estimate the standard error of error rates, and compare them to bootstrap estimates. We assess the correlation of our proposed measures with human judgment using data from the National Institute of Standards and Technology (NIST) 2008 MetricsMATR workshop. <s> BIB008 </s> Machine Translation Evaluation: A Survey <s> Semantic Similarity <s> We introduce a novel semi-automated metric, MEANT, that assesses translation utility by matching semantic role fillers, producing scores that correlate with human judgment as well as HTER but at much lower labor cost. As machine translation systems improve in lexical choice and fluency, the shortcomings of widespread n-gram based, fluency-oriented MT evaluation metrics such as BLEU, which fail to properly evaluate adequacy, become more apparent. But more accurate, non-automatic adequacy-oriented MT evaluation metrics like HTER are highly labor-intensive, which bottlenecks the evaluation cycle. We first show that when using untrained monolingual readers to annotate semantic roles in MT output, the non-automatic version of the metric HMEANT achieves a 0.43 correlation coefficient with human adequacy judgments at the sentence level, far superior to BLEU at only 0.20, and equal to the far more expensive HTER. We then replace the human semantic role annotators with automatic shallow semantic parsing to further automate the evaluation metric, and show that even the semi-automated evaluation metric achieves a 0.34 correlation coefficient with human adequacy judgment, which is still about 80% as closely correlated as HTER despite an even lower labor cost for the evaluation procedure. The results show that our proposed metric is significantly better correlated with human judgment on adequacy than current widespread automatic evaluation metrics, while being much more cost effective than HTER. 
<s> BIB009 </s> Machine Translation Evaluation: A Survey <s> Semantic Similarity <s> We argue that failing to capture the degree of contribution of each semantic frame in a sentence explains puzzling results in recent work on the MEANT family of semantic MT evaluation metrics, which have disturbingly indicated that dissociating semantic roles and fillers from their predicates actually improves correlation with human adequacy judgments even though, intuitively, properly segregating event frames should more accurately reflect the preservation of meaning. Our analysis finds that both properly structured and flattened representations fail to adequately account for the contribution of each semantic frame to the overall sentence. We then show that the correlation of HMEANT, the human variant of MEANT, can be greatly improved by introducing a simple length-based weighting scheme that approximates the degree of contribution of each semantic frame to the overall sentence. The new results also show that, without flattening the structure of semantic frames, weighting the degree of each frame's contribution gives HMEANT higher correlations than the previously best-performing flattened model, as well as HTER. <s> BIB010 </s> Machine Translation Evaluation: A Survey <s> Semantic Similarity <s> In this paper we introduce a number of new features for quality estimation in machine translation that were developed for the WMT 2012 quality estimation shared task. We find that very simple features such as indicators of certain characters are able to outperform complex features that aim to model the connection between two languages. <s> BIB011 </s> Machine Translation Evaluation: A Survey <s> Semantic Similarity <s> We introduce the first fully automatic, fully semantic frame based MT evaluation metric, MEANT, that outperforms all other commonly used automatic metrics in correlating with human judgment on translation adequacy. Recent work on HMEANT, which is a human metric, indicates that machine translation can be better evaluated via semantic frames than other evaluation paradigms, requiring only minimal effort from monolingual humans to annotate and align semantic frames in the reference and machine translations. We propose a surprisingly effective Occam's razor automation of HMEANT that combines standard shallow semantic parsing with a simple maximum weighted bipartite matching algorithm for aligning semantic frames. The matching criterion is based on lexical similarity scoring of the semantic role fillers through a simple context vector model which can readily be trained using any publicly available large monolingual corpus. Sentence level correlation analysis, following standard NIST MetricsMATR protocol, shows that this fully automated version of HMEANT achieves significantly higher Kendall correlation with human adequacy judgments than BLEU, NIST, METEOR, PER, CDER, WER, or TER. Furthermore, we demonstrate that performing the semantic frame alignment automatically actually tends to be just as good as performing it manually. Despite its high performance, fully automated MEANT is still able to preserve HMEANT's virtues of simplicity, representational transparency, and inexpensiveness. <s> BIB012
|
As a contrast to the syntactic information, which captures the overall grammaticality or sentence-structure similarity, the semantic similarity between the automatic translations and the source sentences (or references) can be measured by employing semantic features. To capture the semantic equivalence of sentences or text fragments, named-entity knowledge is brought in from the literature on named-entity recognition, which aims to identify and classify atomic elements in text into different entity categories (Marsh and Perzanowski, 1998; BIB007). The commonly used entity categories include the names of persons, locations, organizations, and time expressions. In the MEDAR 2011 evaluation campaign, one baseline system based on Moses BIB005 utilizes the OpenNLP toolkit to perform named-entity detection, in addition to other packages. Low performance on named entities causes a drop in fluency and adequacy. In the quality estimation task of machine translation at WMT 2012, BIB011 introduces features including named entities, in addition to discriminative word lexicons, neural networks, back-off behavior, and edit distance. Experiments on individual features show that, in terms of increasing the correlation with human judgments, the named-entity feature contributes nearly the most among all features. Synonyms are words with the same or close meanings. One of the widely used synonym databases in the NLP literature is WordNet BIB001, an English lexical database grouping English words into sets of synonyms. WordNet classifies words mainly into four part-of-speech (POS) categories, namely noun, verb, adjective, and adverb, excluding prepositions, determiners, etc. Synonymous words or phrases are organized into units called synsets. Each synset sits within a hierarchical structure, with words at different levels according to their semantic relations. Textual entailment is usually treated as a directional relation between text fragments. If the truth of one text fragment TA follows from another text fragment TB, then there is a directional relation between TA and TB (TB ⇒ TA). Instead of pure logical or mathematical entailment, textual entailment in natural language processing (NLP) is usually performed with a relaxed or loose definition. For instance, if it can be inferred from text fragment TB that text fragment TA is most likely true, then the relation TB ⇒ TA also holds. The relation being directional also means that the inverse inference (TA ⇒ TB) is not guaranteed to hold (Dagan and Glickman, 2004). Recently, Castillo and Estrella (2012) present a new approach for MT evaluation based on the task of "Semantic Textual Similarity". This problem is addressed using a textual entailment engine based on WordNet semantic features. Paraphrasing restates the meaning of a passage or text using other words, and can be seen as bidirectional textual entailment (Androutsopoulos and Malakasiotis, 2010). In contrast to metaphrase, the literal, word-by-word and line-by-line translation, a paraphrase represents a dynamic equivalent. Further linguistic background on paraphrasing is introduced in the works of (Meteer and Shaked, 1988) and BIB002. A newer evaluation metric, TER-Plus (TERp), considers sequences of words in the reference to be paraphrases of a sequence of words in the hypothesis if that phrase pair occurs in the TERp phrase table.
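A minimal sketch of synonym matching against WordNet is shown below; it assumes NLTK and its WordNet data are installed, performs no word-sense disambiguation, and its recall definition is an illustrative simplification rather than the actual matching of any published metric.

```python
from nltk.corpus import wordnet  # requires: pip install nltk; nltk.download('wordnet')

def share_synset(word_a, word_b):
    """Two words are treated as synonyms if any of their WordNet synsets overlap."""
    return bool(set(wordnet.synsets(word_a)) & set(wordnet.synsets(word_b)))

def synonym_recall(hypothesis, reference):
    """A reference word counts as covered if the hypothesis contains the exact
    word or any WordNet synonym of it (surface level, no disambiguation)."""
    hyp, ref = hypothesis.lower().split(), reference.lower().split()
    covered = sum(1 for r in ref if any(r == h or share_synset(r, h) for h in hyp))
    return covered / len(ref) if ref else 0.0

print(synonym_recall("the movie was excellent", "the film was excellent"))  # 1.0
```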
The semantic roles are employed by some researchers as linguistic features in MT evaluation. To utilize semantic roles, the sentences are usually first shallow parsed and entity tagged. The semantic roles are then used to specify the arguments and adjuncts that occur in both the candidate translation and the reference translation. For instance, the semantic roles introduced by (Giménez and Márquez, 2007; BIB006) include causative agent, adverbial adjunct, directional adjunct, negation marker, and predication adjunct, among others. As a further development, BIB009 and BIB010 design the metric MEANT to capture predicate-argument relations as structural relations in semantic frames, which are not reflected by the flat semantic role label features in the work of (Giménez and Márquez, 2007). Furthermore, instead of using uniform weights, BIB012 weight the different types of semantic roles according to their relative importance to the adequate preservation of meaning, which is empirically determined. Generally, semantic roles account for the semantic structure of a segment and have proved effective for assessing adequacy in the above papers. Language models are also utilized by MT and MT evaluation researchers. A statistical language model usually assigns a probability to a sequence of words by means of a probability distribution. BIB003 propose LM-SVM, a language-model, support-vector-machine method investigating the possibility of evaluating MT quality and fluency in the absence of reference translations. They evaluate the performance of the system when used as a classifier for identifying highly dysfluent and ill-formed sentences. (Stanojević and Sima'an, 2014a) design a novel sentence-level MT evaluation metric, BEER, which has the advantage of incorporating a large number of features in a linear model to maximize the correlation with human judgments. To obtain smoother sentence-level scores, they explore two kinds of less sparse features: "character n-grams" (e.g., stem checking) and "abstract ordering patterns" (permutation trees). They further investigate the model with more dense features, such as adequacy features, fluency features, and features based on permutation trees (Stanojević and Sima'an, 2014c). In the latest version, they extend the permutation-tree model BIB004 into a permutation-forest model (Stanojević and Sima'an, 2014b), showing stable, good performance on different language pairs in the WMT sentence-level evaluation task. Generally, the linguistic features mentioned above, including both syntactic and semantic features, are combined in two ways: either by following a machine learning approach BIB008, or by combining a wide variety of metrics in a simpler and more straightforward way, as in BIB006.
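To illustrate one of the dense features mentioned above, the following sketch computes a standalone character n-gram F-score; note that BEER combines many such features in a trained linear model rather than using any single score directly, so this is an assumption-laden, isolated illustration.

```python
from collections import Counter

def char_ngram_fscore(hypothesis, reference, n=4, beta=1.0):
    """F-score over character n-grams (spaces removed), one example of the
    dense, less sparse features that sentence-level metrics can combine."""
    def ngrams(sentence):
        chars = sentence.replace(" ", "")
        return Counter(chars[i:i + n] for i in range(len(chars) - n + 1))
    hyp, ref = ngrams(hypothesis), ngrams(reference)
    overlap = sum(min(count, ref[gram]) for gram, count in hyp.items())
    precision = overlap / sum(hyp.values()) if hyp else 0.0
    recall = overlap / sum(ref.values()) if ref else 0.0
    if precision + recall == 0.0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

print(char_ngram_fscore("he arrived late", "he arrives late"))  # 0.6
```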
|
Machine Translation Evaluation: A Survey <s> Spearman rank Correlation <s> This paper presents the results of the WMT10 and MetricsMATR10 shared tasks, which included a translation task, a system combination task, and an evaluation task. We conducted a large-scale manual evaluation of 104 machine translation systems and 41 system combination entries. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 26 metrics. This year we also investigated increasing the number of human judgments by hiring non-expert annotators through Amazon's Mechanical Turk. <s> BIB001
|
The Spearman rank correlation coefficient, a simplified version of the Pearson correlation coefficient, is another algorithm used to measure the correlation between automatic evaluation and manual judgments, especially in recent years BIB001. When there are no ties, the Spearman rank correlation coefficient, sometimes denoted rs, is calculated as:

r_s = 1 − (6 Σ d_i²) / (n(n² − 1))

where d_i is the difference (D-value) between the two corresponding rank variables (x_i − y_i) in X = {x_1, x_2, ..., x_n} and Y = {y_1, y_2, ..., y_n} describing the system ϕ. It has been used in recent years to measure the correlation between the automatic ranking and the reference ranking BIB001.
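The formula translates directly into code; a small sketch for tie-free rankings:

```python
def spearman_rs(ranks_x, ranks_y):
    """Spearman rank correlation without ties, using the d_i of the formula above."""
    n = len(ranks_x)
    sum_d_sq = sum((x - y) ** 2 for x, y in zip(ranks_x, ranks_y))
    return 1 - 6 * sum_d_sq / (n * (n ** 2 - 1))

# Metric ranking vs. human ranking of five MT systems:
print(spearman_rs([1, 2, 3, 4, 5], [2, 1, 3, 5, 4]))  # 0.8
```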
|
Machine Translation Evaluation: A Survey <s> Kendall's <s> The measurement of rank correlation introduction to the general theory of rank correlation tied ranks tests of significance proof of the results of chapter 4 the problem of m ranking proof of the result of chapter 6 partial rank correlation ranks and variate values proof of the result of chapter 9 paired comparisons proof of the results of chapter 11 some further applications. <s> BIB001 </s> Machine Translation Evaluation: A Survey <s> Kendall's <s> Ordering information is a critical task for natural language generation applications. In this paper we propose an approach to information ordering that is particularly suited for text-to-text generation. We describe a model that learns constraints on sentence order from a corpus of domain-specific texts and an algorithm that yields the most likely order among several alternatives. We evaluate the automatically generated orderings against authored texts from our corpus and against human subjects that are asked to mimic the model's task. We also assess the appropriateness of such a model for multidocument summarization. <s> BIB002
|
It is defined as:

τ = (num concordant pairs − num discordant pairs) / (total pairs)    (18)

A comprehensive treatment of Kendall's τ is given in BIB001. Overview works on Kendall's τ show its application in calculating how much a system ordering differs from the reference order. More concretely, BIB002 proposes the use of Kendall's τ, a measure of rank correlation, to estimate the distance between a system-generated and a human-generated gold-standard order.
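A direct implementation of Eq. (18) for tie-free rankings might look as follows:

```python
from itertools import combinations

def kendall_tau(ranks_x, ranks_y):
    """Kendall's tau as in Eq. (18): (concordant - discordant) / total pairs,
    assuming no ties in either ranking."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(ranks_x, ranks_y), 2):
        if (xi - xj) * (yi - yj) > 0:
            concordant += 1
        else:
            discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else 0.0

print(kendall_tau([1, 2, 3, 4], [1, 3, 2, 4]))  # (5 - 1) / 6 ≈ 0.667
```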
|
Machine Translation Evaluation: A Survey <s> Advanced Quality Estimation <s> This paper presents the results of the WMT10 and MetricsMATR10 shared tasks, which included a translation task, a system combination task, and an evaluation task. We conducted a large-scale manual evaluation of 104 machine translation systems and 41 system combination entries. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 26 metrics. This year we also investigated increasing the number of human judgments by hiring non-expert annotators through Amazon's Mechanical Turk. <s> BIB001 </s> Machine Translation Evaluation: A Survey <s> Advanced Quality Estimation <s> This paper presents the results of the WMT14 shared tasks, which included a standard news translation task, a separate medical translation task, a task for run-time estimation of machine translation quality, and a metrics task. This year, 143 machine translation systems from 23 institutions were submitted to the ten translation directions in the standard translation task. An additional 6 anonymized systems were included, and were then evaluated both automatically and manually. The quality estimation task had four subtasks, with a total of 10 teams, submitting 57 entries <s> BIB002
|
In recent years, some MT evaluation methods that do not use manually provided gold reference translations have been proposed. They are usually called "Quality Estimation (QE)" methods. Some of the related works have already been mentioned in previous sections. The latest quality estimation tasks of MT can be found from WMT12 to WMT15 BIB001 BIB002. The organizers defined a novel evaluation metric that provides some advantages over traditional ranking metrics. The designed criterion, DeltaAvg, assumes that the reference test set has a number associated with each entry that represents its extrinsic value. Given these values, their metric does not need an explicit reference ranking, the way the Spearman ranking correlation does. The goal of the DeltaAvg metric is to measure how valuable a proposed ranking is according to the extrinsic values associated with the test entries. For the scoring task, they use two evaluation metrics that have traditionally been used for measuring performance on regression tasks: Mean Absolute Error (MAE) as a primary metric, and Root Mean Squared Error (RMSE) as a secondary metric. For a given test set S with entries s_i, 1 ≤ i ≤ |S|, they denote by H(s_i) the proposed score for entry s_i (hypothesis), and by V(s_i) the reference value for entry s_i (gold-standard value):

MAE = (1/N) Σ |H(s_i) − V(s_i)|,    RMSE = sqrt((1/N) Σ (H(s_i) − V(s_i))²)

where N = |S|. Both these metrics are nonparametric, automatic and deterministic (and therefore consistent), and extrinsically interpretable.
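Both error measures are straightforward to compute; a small sketch with made-up scores:

```python
import math

def mae(predicted, gold):
    """Mean Absolute Error between predicted scores H(s_i) and gold values V(s_i)."""
    return sum(abs(h - v) for h, v in zip(predicted, gold)) / len(predicted)

def rmse(predicted, gold):
    """Root Mean Squared Error over the same entries."""
    return math.sqrt(sum((h - v) ** 2 for h, v in zip(predicted, gold)) / len(predicted))

h = [3.2, 4.1, 2.5]  # predicted quality scores H(s_i)
v = [3.0, 4.5, 2.0]  # gold-standard values V(s_i)
print(mae(h, v), rmse(h, v))  # ≈ 0.367, ≈ 0.387
```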
|
Machine Translation Evaluation: A Survey <s> Discussion and Related Works <s> This introductory text to statistical machine translation (SMT) provides all of the theories and methods needed to build a statistical machine translator, such as Google Language Tools and Babelfish. In general, statistical techniques allow automatic translation systems to be built quickly for any language-pair using only translated texts and generic software. With increasing globalization, statistical machine translation will be central to communication and commerce. Based on courses and tutorials, and classroom-tested globally, it is ideal for instruction or self-study, for advanced undergraduates and graduate students in computer science and/or computational linguistics, and researchers in natural language processing. The companion website provides open-source corpora and tool-kits. <s> BIB001
|
So far, the human judgment scores of MT results are usually considered the gold standard that automatic evaluation metrics should try to approach. However, some improper handling in the process also causes problems. For instance, in the ACL WMT 2011 English-Czech task, the multi-annotator agreement kappa value k is very low, and even the exact same string produced by two systems is ranked differently each time by the same annotator. The evaluation results are highly affected by the manual reference translations. How to ensure the quality of reference translations and the agreement level of human judgments are two important problems. Automatic evaluation metrics are indirect measures of translation quality, because they usually use various string-distance algorithms to measure the closeness between the machine translation system outputs and the manually offered reference translations, and they are validated by calculating correlation scores with manual MT evaluation. Furthermore, automatic evaluation metrics tend to ignore the relevance of words BIB001; for instance, named entities and core concepts are more important than punctuation and determiners, but most automatic evaluation metrics put the same weight on each word of a sentence. Third, automatic evaluation metrics usually yield scores that are meaningless in isolation: they are very test-set specific, and the absolute value is not informative. For instance, what is the meaning of a score of -16094 from the MTeRater metric, or of 1.98 from ROSE (Song and Cohn, 2011)? As mentioned in BIB001, automatic evaluation metrics should try to achieve the following goals: low cost, reducing the time and money spent on carrying out evaluation; tunability, so that system performance can be automatically optimized towards the metric; meaningfulness, so that the score gives an intuitive interpretation of translation quality; consistency, so that repeated use of the metric gives the same results; and correctness, so that the metric ranks better systems higher. Of these, the low-cost, tunable, and consistent characteristics are easily achieved by metric developers, but the remaining two goals (meaningful and correct) usually remain challenges for NLP researchers. There are some earlier related works offering an MT evaluation survey or literature review. For instance, in the DARPA GALE report, researchers first introduced the automatic and semi-automatic MT evaluation measures, and the task and human-in-the-loop measures; then they described the MT metrology in the GALE program, which focuses on the HTER metric as the standard method used in GALE; finally, they compared some automatic metrics and explored some other usages of the metrics, such as optimization in MT parameter training. In another research project report, EuroMatrix (EuroMatrix, 2007), researchers first gave an introduction to MT history; then they covered human evaluation of MT and objective evaluation of MT as the two main sections of the work; finally, they introduced a list of evaluation measures popular at that time, including WER, SER, CDER, X-Score, D-score, NIST, RED, IER, and TER. Márquez introduced the Asiya online interface developed by their institute for MT output error analysis, where they also briefly mentioned the MT evaluation developments of lexical measures and linguistically motivated measures, and pointed out the challenges in the quality estimation task.
Our work differs from the previous ones by introducing recent developments in MT evaluation models, a different classification spanning manual to automatic evaluation measures, an introduction to the recent QE tasks, and a concise organization of the content.
|
Machine Translation Evaluation: A Survey <s> Perspective <s> Parallel corpora are crucial for training SMT systems. However, for many language pairs they are available only in very limited quantities. For these language pairs a huge portion of phrases encountered at run-time will be unknown. We show how techniques from paraphrasing can be used to deal with these otherwise unknown source language phrases. Our results show that augmenting a state-of-the-art SMT system with paraphrases leads to significantly improved coverage and translation quality. For a training corpus with 10,000 sentence pairs we increase the coverage of unique test set unigrams from 48% to 90%, with more than half of the newly covered items accurately translated, as opposed to none in current approaches. <s> BIB001 </s> Machine Translation Evaluation: A Survey <s> Perspective <s> We evaluated machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. Evaluation was done automatically using the Bleu score and manually on fluency and adequacy. <s> BIB002 </s> Machine Translation Evaluation: A Survey <s> Perspective <s> This paper presents our metric (UoWLSTM) submitted in the WMT-15 metrics task. Many state-of-the-art Machine Translation (MT) evaluation metrics are complex, involve extensive external resources (e.g. for paraphrasing) and require tuning to achieve the best results. We use a metric based on dense vector spaces and Long Short Term Memory (LSTM) networks, which are types of Recurrent Neural Networks (RNNs). For WMT15 our new metric is the best performing metric overall according to Spearman and Pearson (Pre-TrueSkill) and second best according to Pearson (TrueSkill) system level correlation. <s> BIB003
|
In this section, we mention several aspects that are useful and will attract much attention in the further development of the MT evaluation field. The first concerns lexical similarity and linguistic features. Because natural languages are expressive and ambiguous at different levels (Giménez and Márquez, 2007), lexical-similarity-based metrics limit their scope to the lexical dimension and are not sufficient to determine whether two sentences convey the same meaning. For instance, the studies of BIB001 and BIB002 report that lexical similarity metrics tend to favor statistical machine translation systems. If the evaluated systems belong to different types, including rule-based, human-aided, and statistical systems, then lexical similarity metrics such as BLEU show a strong disagreement between their rankings and those provided by human evaluators. So linguistic features are very important in the MT evaluation procedure. However, inappropriate, excessive, or abusive use of such features makes a metric difficult to promote. In the future, how to utilize linguistic features in a more accurate, flexible, and simplified way will be one tendency in MT evaluation. Furthermore, MT evaluation from the perspective of semantic similarity is more reasonable and comes closer to human judgments, so it should receive more attention. Secondly, the Quality Estimation tasks differ from traditional evaluation in several ways, such as extracting reference-independent features from input sentences and their translations, obtaining quality scores based on models produced from training data, predicting the quality of an unseen translated text at system run-time, filtering out sentences that are not good enough for post-processing, and selecting the best translation among multiple systems, so they will continue to attract many researchers. Thirdly, some advanced or challenging technologies that can be tried for MT evaluation include deep learning BIB003, semantic logic forms, and decipherment models.
|
Identifying Botnets Intrusion & Prevention – A Review <s> c. Detecting and Neutralizing the C & C Servers <s> This book presents information on how to analyze risks to your networks and the steps needed to select and deploy the appropriate countermeasures to reduce your exposure to physical and network threats. It also imparts the skills and knowledge needed to identify and counter some fundamental security risks and requirements, inlcuding Internet security threats and measures (audit trails IP sniffing/spoofing etc.) and how to implement security policies and procedures.In addition, this book also covers security and network design with respect to particular vulnerabilities and threats. It also covers risk assessment and mitigation and auditing and testing of security systems.From this book, the reader will also learn about applying the standards and technologies required to build secure VPNs, configure client software and server operating systems, IPsec-enabled routers, firewalls and SSL clients.Chapter coverage includes identifying vulnerabilities and implementing appropriate countermeasures to prevent and mitigate threats to mission-critical processes. Techniques are explored for creating a business continuity plan (BCP) and the methodology for building an infrastructure that supports its effective implementation.A public key infrastructure (PKI) is an increasingly critical component for ensuring confidentiality, integrity and authentication in an enterprise. This comprehensive book will provide essential knowledge and skills needed to select, design and deploy a PKI to secure existing and future applications. This book will include discussion of vulnerability scanners to detect security weaknesses and prevention techniques, as well as allowing access to key services while maintaining systems security. Chapters contributed by leaders in the field cover theory and practice of computer security technology, allowing the reader to develop a new level of technical expertise. This book's comprehensive and up-to-date coverage of security issues facilitates learning and allows the reader to remain current and fully informed from multiple viewpoints.Presents methods of analysis and problem-solving techniques, enhancing the readers grasp of the material and ability to implement practical solutions. <s> BIB001
|
C & C traffic detection and bot elimination still do not dismantle the entire botnet at once. To achieve this in a centralized botnet, access to the C & C servers must be removed. BotSniffer, developed in 2008 [9], is an approach similar to BotHunter that offers several improvements, including the handling of encrypted traffic, since it does not rely only on content inspection to correlate messages. This approach does not require advance knowledge of the bot's signature or the identity of the C & C servers. By analyzing network traces, BotSniffer detects the spatial-temporal correlation among C & C traffic belonging to the same botnet. It can therefore detect both the bot members and the C & C server(s) with a low false positive rate BIB001 [10].
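The spatial-temporal correlation idea can be illustrated with a toy Python sketch. This is not BotSniffer's actual algorithm; the event format, thresholds, and names are invented for illustration. The intuition: hosts that contact the same server at nearly the same times, with nearly identical message sizes, look more like a coordinated bot crowd than independent users.

```python
from collections import defaultdict
from statistics import pstdev

def correlated_crowds(events, max_time_spread=2.0, max_size_spread=1.0, min_hosts=3):
    """Toy spatial-temporal correlation check over network events.
    events: iterable of (host, server, timestamp_seconds, message_size)."""
    by_server = defaultdict(list)
    for host, server, ts, size in events:
        by_server[server].append((host, ts, size))
    suspicious = []
    for server, rows in by_server.items():
        if len({host for host, _, _ in rows}) < min_hosts:
            continue  # too few hosts to form a crowd
        times = [ts for _, ts, _ in rows]
        sizes = [size for _, _, size in rows]
        # tight timing across many hosts + homogeneous responses => correlated crowd
        if pstdev(times) <= max_time_spread and pstdev(sizes) <= max_size_spread:
            suspicious.append(server)
    return suspicious

events = [("h1", "irc.x", 100.0, 64), ("h2", "irc.x", 100.5, 64), ("h3", "irc.x", 101.0, 65)]
print(correlated_crowds(events))  # ['irc.x']
```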
|
Identifying Botnets Intrusion & Prevention – A Review <s> d. Attacking Encrypted C & C Channels <s> This book presents information on how to analyze risks to your networks and the steps needed to select and deploy the appropriate countermeasures to reduce your exposure to physical and network threats. It also imparts the skills and knowledge needed to identify and counter some fundamental security risks and requirements, inlcuding Internet security threats and measures (audit trails IP sniffing/spoofing etc.) and how to implement security policies and procedures.In addition, this book also covers security and network design with respect to particular vulnerabilities and threats. It also covers risk assessment and mitigation and auditing and testing of security systems.From this book, the reader will also learn about applying the standards and technologies required to build secure VPNs, configure client software and server operating systems, IPsec-enabled routers, firewalls and SSL clients.Chapter coverage includes identifying vulnerabilities and implementing appropriate countermeasures to prevent and mitigate threats to mission-critical processes. Techniques are explored for creating a business continuity plan (BCP) and the methodology for building an infrastructure that supports its effective implementation.A public key infrastructure (PKI) is an increasingly critical component for ensuring confidentiality, integrity and authentication in an enterprise. This comprehensive book will provide essential knowledge and skills needed to select, design and deploy a PKI to secure existing and future applications. This book will include discussion of vulnerability scanners to detect security weaknesses and prevention techniques, as well as allowing access to key services while maintaining systems security. Chapters contributed by leaders in the field cover theory and practice of computer security technology, allowing the reader to develop a new level of technical expertise. This book's comprehensive and up-to-date coverage of security issues facilitates learning and allows the reader to remain current and fully informed from multiple viewpoints.Presents methods of analysis and problem-solving techniques, enhancing the readers grasp of the material and ability to implement practical solutions. <s> BIB001
|
Though some of the approaches can detect encrypted C & C traffic, the presence of encryption makes botnet research and analysis much harder. The first step in dealing with these advanced botnets is to penetrate the encryption that protects the C & C channels [15] [5] BIB001. Many encryption schemes that support key exchange, like SSL/TLS, are susceptible to man-in-the-middle (MITM) attacks. Therefore, the two possible attacks on encrypted C & C channels are: gray-box analysis, where the bot communicates with a local machine impersonating the C & C server, and a full MITM attack, in which the bot communicates with the true C & C server, as in Figure 4 BIB001. The first attack determines the authentication information required to join the live botnet. However, it does not allow the observer to see the interaction with the larger botnet, specifically with the botmaster. The second attack reveals the full interaction with the botnet, including all botmaster commands, which can allow the observer to literally take over the botnet. He can then log in as the botmaster and issue a command such as Agobot's .bot.remove to disconnect all bots from the botnet and permanently remove them from the infected computers. Unfortunately, there are legal issues with this approach because it constitutes unauthorized access to all the botnet computers, despite the fact that it is in fact a benign command to remove the bot software BIB001.
|
Identifying Botnets Intrusion & Prevention – A Review <s> VII. BOTMASTER TRACEBACK <s> This book presents information on how to analyze risks to your networks and the steps needed to select and deploy the appropriate countermeasures to reduce your exposure to physical and network threats. It also imparts the skills and knowledge needed to identify and counter some fundamental security risks and requirements, inlcuding Internet security threats and measures (audit trails IP sniffing/spoofing etc.) and how to implement security policies and procedures.In addition, this book also covers security and network design with respect to particular vulnerabilities and threats. It also covers risk assessment and mitigation and auditing and testing of security systems.From this book, the reader will also learn about applying the standards and technologies required to build secure VPNs, configure client software and server operating systems, IPsec-enabled routers, firewalls and SSL clients.Chapter coverage includes identifying vulnerabilities and implementing appropriate countermeasures to prevent and mitigate threats to mission-critical processes. Techniques are explored for creating a business continuity plan (BCP) and the methodology for building an infrastructure that supports its effective implementation.A public key infrastructure (PKI) is an increasingly critical component for ensuring confidentiality, integrity and authentication in an enterprise. This comprehensive book will provide essential knowledge and skills needed to select, design and deploy a PKI to secure existing and future applications. This book will include discussion of vulnerability scanners to detect security weaknesses and prevention techniques, as well as allowing access to key services while maintaining systems security. Chapters contributed by leaders in the field cover theory and practice of computer security technology, allowing the reader to develop a new level of technical expertise. This book's comprehensive and up-to-date coverage of security issues facilitates learning and allows the reader to remain current and fully informed from multiple viewpoints.Presents methods of analysis and problem-solving techniques, enhancing the readers grasp of the material and ability to implement practical solutions. <s> BIB001
|
The botnet field is quite challenging, with problems such as encrypted C & C channels, obfuscated binaries, fast-flux proxies protecting central C & C servers, customized communication protocols, and many more, as shown in Figure 5. The only permanent solution to the botnet problem is to go after the root cause, namely the botmasters. The most challenging task is locating them, since they are very good at concealing their identities and locations, taking precautions on multiple levels to ensure that their connections cannot be traced. This is due to the expected disastrous consequences should a trace be successful. As of now, there is no published work that would allow automated botmaster traceback on the Internet, and it remains an open problem BIB001. Therefore, the only technique that can help mitigate the botnet problem is the Intrusion Detection System (IDS), which can identify unencrypted IRC traffic even at the ISP level based on transport-layer flow statistics. Traceback Challenges: One way to find the botmaster is to track the botnet C & C traffic. However, although the botmaster originates the botnet C & C traffic, he hides by disguising his link to the C & C traffic via various traffic-laundering techniques that make tracking C & C traffic more difficult, and he further conceals his activities by encrypting his traffic to and from the C & C servers. Moreover, the botmaster only needs to be online briefly and send small amounts of traffic to interact with his botnet, reducing the chances of live traceback. Stepping Stones: These are intermediate hosts used for traffic laundering. The attacker sets them up in a chain leading from the botmaster's true location to the C & C server. Stepping stones can be any network redirection services, such as SSH servers, proxies, IRC bouncers (BNCs), or virtual private networks (VPNs). These usually run on compromised hosts, which are under the attacker's control and lack the audit/logging mechanisms needed to trace traffic, making manual traceback tedious and time-consuming BIB001. The major challenge posed by stepping stones is that all routing information from the previous hop (IP headers, TCP headers, and the like) is stripped from the data before it is sent out on a new, separate connection, preserving only the content of the packet, which renders many existing tracing schemes useless.
|
Identifying Botnets Intrusion & Prevention – A Review <s> Low-Latency Anonymous Network: <s> This book presents information on how to analyze risks to your networks and the steps needed to select and deploy the appropriate countermeasures to reduce your exposure to physical and network threats. It also imparts the skills and knowledge needed to identify and counter some fundamental security risks and requirements, inlcuding Internet security threats and measures (audit trails IP sniffing/spoofing etc.) and how to implement security policies and procedures.In addition, this book also covers security and network design with respect to particular vulnerabilities and threats. It also covers risk assessment and mitigation and auditing and testing of security systems.From this book, the reader will also learn about applying the standards and technologies required to build secure VPNs, configure client software and server operating systems, IPsec-enabled routers, firewalls and SSL clients.Chapter coverage includes identifying vulnerabilities and implementing appropriate countermeasures to prevent and mitigate threats to mission-critical processes. Techniques are explored for creating a business continuity plan (BCP) and the methodology for building an infrastructure that supports its effective implementation.A public key infrastructure (PKI) is an increasingly critical component for ensuring confidentiality, integrity and authentication in an enterprise. This comprehensive book will provide essential knowledge and skills needed to select, design and deploy a PKI to secure existing and future applications. This book will include discussion of vulnerability scanners to detect security weaknesses and prevention techniques, as well as allowing access to key services while maintaining systems security. Chapters contributed by leaders in the field cover theory and practice of computer security technology, allowing the reader to develop a new level of technical expertise. This book's comprehensive and up-to-date coverage of security issues facilitates learning and allows the reader to remain current and fully informed from multiple viewpoints.Presents methods of analysis and problem-solving techniques, enhancing the readers grasp of the material and ability to implement practical solutions. <s> BIB001
|
Besides laundering the botnet C & C traffic across stepping stones and different protocols, a sophisticated botmaster could anonymize the C & C traffic by routing it through a low-latency anonymous communication system. The botmaster could use Tor as a virtual tunnel to anonymize his TCP-based C & C traffic to the IRC server of the botnet, as well as utilize Tor's hidden services to anonymize the IRC server of the botnet BIB001. Encryption: Most of the stepping-stone chain can be encrypted to protect it against content inspection, which could otherwise reveal information about the botnet and botmaster. This can be done using a number of methods, including SSH (Secure Shell) tunneling, SSL/TLS (Secure Socket Layer / Transport Layer Security) enabled BNCs, and IPsec tunneling. Using encryption defeats all content-based tracing approaches BIB001 [1] [11].
|
Identifying Botnets Intrusion & Prevention – A Review <s> Traceback Beyond the Internet: <s> This book presents information on how to analyze risks to your networks and the steps needed to select and deploy the appropriate countermeasures to reduce your exposure to physical and network threats. It also imparts the skills and knowledge needed to identify and counter some fundamental security risks and requirements, inlcuding Internet security threats and measures (audit trails IP sniffing/spoofing etc.) and how to implement security policies and procedures.In addition, this book also covers security and network design with respect to particular vulnerabilities and threats. It also covers risk assessment and mitigation and auditing and testing of security systems.From this book, the reader will also learn about applying the standards and technologies required to build secure VPNs, configure client software and server operating systems, IPsec-enabled routers, firewalls and SSL clients.Chapter coverage includes identifying vulnerabilities and implementing appropriate countermeasures to prevent and mitigate threats to mission-critical processes. Techniques are explored for creating a business continuity plan (BCP) and the methodology for building an infrastructure that supports its effective implementation.A public key infrastructure (PKI) is an increasingly critical component for ensuring confidentiality, integrity and authentication in an enterprise. This comprehensive book will provide essential knowledge and skills needed to select, design and deploy a PKI to secure existing and future applications. This book will include discussion of vulnerability scanners to detect security weaknesses and prevention techniques, as well as allowing access to key services while maintaining systems security. Chapters contributed by leaders in the field cover theory and practice of computer security technology, allowing the reader to develop a new level of technical expertise. This book's comprehensive and up-to-date coverage of security issues facilitates learning and allows the reader to remain current and fully informed from multiple viewpoints.Presents methods of analysis and problem-solving techniques, enhancing the readers grasp of the material and ability to implement practical solutions. <s> BIB001
|
Despite the control measures put in place to monitor traffic, there are additional traceback challenges beyond the reach of the Internet (see Figure 5). Any IP-based traceback method assumes that the true source IP belongs to the computer being used by the attacker. However, in many scenarios this is not true, e.g., for Internet-connected mobile phone networks, open wireless (Wi-Fi) networks, and public computers such as those in libraries and Internet cafes. Most modern cell phones support text-messaging services such as the Short Message Service (SMS), and many smart phones also have full-featured IM software. Therefore, the botmaster can use a mobile device to control the botnet from any location with cell phone reception, using a protocol translation service or a special IRC client for mobile phones BIB001 . For an IRC botnet, such a service would receive the incoming SMS or IM message, repackage it as an IRC message, and send it on to the C & C server (possibly via more stepping stones), as shown in Figure 6. Fig. 6. Using a cell phone to evade Internet-based traceback BIB001 . To eliminate the need for protocol translation, the botmaster can run a native IRC client on a smart phone with Internet access. However, tracing this setup poses several problems BIB001 . To begin with, such a trace requires substantial manual work and the cooperation of yet another organization, making a real-time trace unlikely. Moreover, the carrier cannot determine the name of the subscriber if a prepaid cell phone is used. Finally, the tracer could obtain only an approximate physical location based on cell-site triangulation; even if this can be done in real time, it may be of little use if the botmaster is in a crowded public place. A minimal sketch of the SMS-to-IRC translation step is given below.
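To illustrate how little logic the protocol translation step described above actually requires (and hence why the traceback burden falls almost entirely on the cellular carrier), here is a hedged one-function Python sketch; the channel name and framing are illustrative assumptions, and registration, encoding, and onward stepping stones are omitted.

    def sms_to_irc(sms_text: str, channel: str = "#example") -> bytes:
        """Repackage an inbound SMS body as a single IRC PRIVMSG line (RFC 2812 framing)."""
        return f"PRIVMSG {channel} :{sms_text}\r\n".encode("utf-8")

Because the translation is a simple string rewrite, no artifact of the originating SMS survives in the resulting IRC stream, leaving the carrier's records as the only remaining trail.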
|
Scene Flow Estimation: A Survey <s> Introduction <s> Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Introduction <s> Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Introduction <s> Just as optical flow is the two-dimensional motion of points in an image, scene flow is the three-dimensional motion of points in the world. The fundamental difficulty with optical flow is that only the normal flow can be computed directly from the image measurements, without some form of smoothing or regularization. In this paper, we begin by showing that the same fundamental limitation applies to scene flow; however, many cameras are used to image the scene. There are then two choices when computing scene flow: 1) perform the regularization in the images or 2) perform the regularization on the surface of the object in the scene. In this paper, we choose to compute scene flow using regularization in the images. We describe three algorithms, the first two for computing scene flow from optical flows and the third for constraining scene structure from the inconsistencies in multiple optical flows. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Introduction <s> Disparity flow depicts the 3D motion of a scene in the disparity space of a given view and can be considered as view-dependent scene flow. A novel algorithm is presented to compute disparity maps and disparity flow maps in an integrated process. Consequently, the disparity flow maps obtained help to enforce the temporal consistency between disparity maps of adjacent frames. The disparity maps found also provide the spatial correspondence information that can be used to cross-validate disparity flow maps of different views. Two different optimization approaches are integrated in the presented algorithm for searching optimal disparity values and disparity flows. The local winner-take-all approach runs faster, whereas the global dynamic programming based approach produces better results. All major computations are performed in the image space of the given view, leading to an efficient implementation on programmable graphics hardware.
Experimental results on captured stereo sequences demonstrate the algorithm's capability of estimating both 3D depth and 3D motion in real-time. Quantitative performance evaluation using synthetic data with ground truth is also provided. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Introduction <s> In this paper a novel approach for estimating the three dimensional motion field of the visible world from stereo image sequences is proposed. This approach combines dense variational optical flow estimation, including spatial regularization, with Kalman filtering for temporal smoothness and robustness. The result is a dense, robust, and accurate reconstruction of the three-dimensional motion field of the current scene that is computed in real-time. Parallel implementation on a GPU and an FPGA yields a vision-system which is directly applicable in real-world scenarios, like automotive driver assistance systems or in the field of surveillance. Within this paper we systematically show that the proposed algorithm is physically motivated and that it outperforms existing approaches with respect to computation time and accuracy. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Introduction <s> Building upon recent developments in optical flow and stereo matching estimation, we propose a variational framework for the estimation of stereoscopic scene flow, i.e., the motion of points in the three-dimensional world from stereo image sequences. The proposed algorithm takes into account image pairs from two consecutive times and computes both depth and a 3D motion vector associated with each point in the image. In contrast to previous works, we partially decouple the depth estimation from the motion estimation, which has many practical advantages. The variational formulation is quite flexible and can handle both sparse or dense disparity maps. The proposed method is very efficient; with the depth map being computed on an FPGA, and the scene flow computed on the GPU, the proposed algorithm runs at frame rates of 20 frames per second on QVGA images (320×240 pixels). Furthermore, we present solutions to two important problems in scene flow estimation: violations of intensity consistency between input images, and the uncertainty measures for the scene flow result. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Introduction <s> Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Introduction <s> This paper presents the first method to compute dense scene flow in real-time for RGB-D cameras. It is based on a variational formulation where brightness constancy and geometric consistency are imposed. 
Accounting for the depth data provided by RGB-D cameras, regularization of the flow field is imposed on the 3D surface (or set of surfaces) of the observed scene instead of on the image plane, leading to more geometrically consistent results. The minimization problem is efficiently solved by a primal-dual algorithm which is implemented on a GPU, achieving a previously unseen temporal performance. Several tests have been conducted to compare our approach with a state-of-the-art work (RGB-D flow) where quantitative and qualitative results are evaluated. Moreover, an additional set of experiments have been carried out to show the applicability of our work to estimate motion in real-time. Results demonstrate the accuracy of our approach, which outperforms the RGB-D flow, and which is able to estimate heterogeneous and non-rigid motions at a high frame rate. <s> BIB008
|
Scene flow is the three-dimensional motion field of a surface in world space; in other words, it gives the three-dimensional displacement vector of each surface point between two frames. Like most computer vision problems, scene flow estimation is essentially an ill-posed energy minimization problem with three unknowns per point. Prior knowledge of multiple kinds is required to make the energy function solvable from just a few pairs of images. Hence, it is essential to make full use of the information in the data source and to weigh the different priors appropriately for good performance. This paper attempts to reveal such clues by providing a comprehensive literature survey of the field. Scene flow was first introduced by Vedula in 1999 BIB003 and has seen constant progress over the years. Diverse data sources have emerged, so scene flow estimation no longer requires a complicated array of cameras. The conventional framework derived from optical flow estimation BIB001 BIB002 has been extended to this three-dimensional motion estimation task, while diverse ideas and optimization schemes have improved performance noticeably. Widely studied learning-based methods have also been utilized for scene flow estimation BIB007 , bringing fresh blood to this integrated field. Moreover, a few methods have achieved real-time estimation with GPU implementations at QVGA (320 × 240) resolution BIB004 BIB005 BIB006 BIB008 , which ensures promising efficiency. The emergence of these methods indicates that scene flow estimation will soon be widely utilized and applied in practice. The paper is organized as follows. Section 2 illustrates the relevant issues, challenges, and applications of scene flow as background. Section 3 provides a classification of scene flow methods in terms of three major components. Emerging publicly available datasets and the diverse evaluation protocols are presented and analyzed in Section 4. Section 5 raises a few questions to briefly discuss the content mentioned above, and provides a vision of the future. Finally, a conclusion is presented in Section 6. A small numerical sketch of how scene flow projects to optical flow is given below.
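As a minimal numerical illustration of the definition (and of why the problem is ill-posed from a single view), the following NumPy sketch uses a hypothetical pinhole camera with made-up intrinsics: the 2D optical flow induced by a 3D scene flow vector is the difference between the projections of the surface point before and after its displacement. Note that many distinct 3D displacements would project to the same 2D vector, which is exactly the ambiguity the priors discussed above must resolve.

    import numpy as np

    # Hypothetical pinhole intrinsics: focal lengths and principal point.
    K = np.array([[500.0,   0.0, 320.0],
                  [  0.0, 500.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def project(X):
        """Project a 3D point in camera coordinates to pixel coordinates."""
        x = K @ X
        return x[:2] / x[2]

    X_t = np.array([0.2, -0.1, 2.0])         # surface point at frame t (meters)
    scene_flow = np.array([0.05, 0.0, 0.1])  # its 3D displacement to frame t+1

    # The induced optical flow is the difference of the two projections.
    optical_flow = project(X_t + scene_flow) - project(X_t)
    print(optical_flow)  # approximately [9.52, 1.19] pixels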
|
Scene Flow Estimation: A Survey <s> Optical flow <s> Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> This contribution investigates local differential techniques for estimating optical flow and its derivatives based on the brightness change constraint. By using the tensor calculus representation we build the Taylor expansion of the gray-value derivatives as well as of the optical flow in a spatiotemporal neighborhood. Such a formulation simplifies a unifying framework for all existing local differential approaches and allows us to derive new systems of equations to estimate the optical flow and its derivatives. We also tested various optical flow estimation approaches on real image sequences recorded by a calibrated camera fixed on the arm of a robot. By moving the arm of the robot along a precisely defined trajectory we can determine the true displacement rate of scene surface elements projected into the image plane and compare it quantitatively with the results of different optical flow estimators. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> Two-dimensional image motion is the projection of the three-dimensional motion of objects, relative to a visual sensor, onto its image plane. Sequences of time-ordered images allow the estimation of projected two-dimensional image motion as either instantaneous image velocities or discrete image displacements. These are usually called the optical flow field or the image velocity field. Provided that optical flow is a reliable approximation to two-dimensional image motion, it may then be used to recover the three-dimensional motion of the visual sensor (to within a scale factor) and the three-dimensional surface structure (shape or relative depth) through assumptions concerning the structure of the optical flow field, the three-dimensional environment, and the motion of the sensor.
Optical flow may also be used to perform motion detection, object segmentation, time-to-collision and focus of expansion calculations, motion compensated encoding, and stereo disparity measurement. We investigate the computation of optical flow in this survey: widely known methods for estimating optical flow are classified and examined by scrutinizing the hypothesis and assumptions they use. The survey concludes with a discussion of current research issues. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> We study an energy functional for computing optical flow that combines three assumptions: a brightness constancy assumption, a gradient constancy assumption, and a discontinuity-preserving spatio-temporal smoothness constraint. In order to allow for large displacements, linearisations in the two data terms are strictly avoided. We present a consistent numerical scheme based on two nested fixed point iterations. By proving that this scheme implements a coarse-to-fine warping strategy, we give a theoretical foundation for warping which has been used on a mainly experimental basis so far. Our evaluation demonstrates that the novel method gives significantly smaller angular errors than previous techniques for optical flow estimation. We show that it is fairly insensitive to parameter variations, and we demonstrate its excellent robustness under noise. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> The quantitative evaluation of optical flow algorithms by Barron et al. led to significant advances in the performance of optical flow methods. The challenges for optical flow today go beyond the datasets and evaluation methods proposed in that paper and center on problems associated with nonrigid motion, real sensor noise, complex natural scenes, and motion discontinuities. Our goal is to establish a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture; realistic synthetic sequences; high frame-rate video used to study interpolation error; and modified stereo sequences of static scenes. In addition to the average angular error used in Barron et al., we compute the absolute flow endpoint error, measures for frame interpolation error, improved statistics, and flow accuracy at motion boundaries and in textureless regions. We evaluate the performance of several well-known methods on this data to establish the current state of the art. Our database is freely available on the Web together with scripts for scoring and publication of the results at http://vision.middlebury.edu/flow/. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that “classical” flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. 
Moreover, we find that while median filtering of intermediate flow fields during optimization is a key to recent performance gains, it leads to higher energy solutions. To understand the principles behind this phenomenon, we derive a new objective that formalizes the median filtering heuristic. This objective includes a nonlocal term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that ranks at the top of the Middlebury benchmark. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> A common problem of optical flow estimation in the multiscale variational framework is that fine motion structures cannot always be correctly estimated, especially for regions with significant and abrupt displacement variation. A novel extended coarse-to-fine (EC2F) refinement framework is introduced in this paper to address this issue, which reduces the reliance of flow estimates on their initial values propagated from the coarse level and enables recovering many motion details in each scale. The contribution of this paper also includes adaptation of the objective function to handle outliers and development of a new optimization procedure. The effectiveness of our algorithm is demonstrated by Middlebury optical flow benchmarkmarking and by experiments on challenging examples that involve large-displacement motion. <s> BIB008 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> Despite the significant progress in terms of accuracy achieved by recent variational optical flow methods, the correct handling of large displacements still poses a severe problem for many algorithms. In particular if the motion exceeds the size of an object, standard coarse-to-fine estimation schemes fail to produce meaningful results. While the integration of point correspondences may help to overcome this limitation, such strategies often deteriorate the performance for small displacements due to false or ambiguous matches. In this paper we address the aforementioned problem by proposing an adaptive integration strategy for feature matches. The key idea of our approach is to use the matching energy of the baseline method to carefully select those locations where feature matches may potentially improve the estimation. This adaptive selection does not only reduce the runtime compared to an exhaustive search, it also improves the reliability of the estimation by identifying unnecessary and unreliable features and thus by excluding spurious matches. Results for the Middlebury benchmark and several other image sequences demonstrate that our approach succeeds in handling large displacements in such a way that the performance for small displacements is not compromised. Moreover, experiments even indicate that image sequences with small displacements can benefit from carefully selected point correspondences. <s> BIB009 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> We present an optical flow algorithm for large displacement motions. Most existing optical flow methods use the standard coarse-to-fine framework to deal with large displacement motions which has intrinsic limitations. Instead, we formulate the motion estimation problem as a motion segmentation problem. We use approximate nearest neighbor fields to compute an initial motion field and use a robust algorithm to compute a set of similarity transformations as the motion candidates for segmentation. 
To account for deviations from similarity transformations, we add local deformations in the segmentation process. We also observe that small objects can be better recovered using translations as the motion candidates. We fuse the motion results obtained under similarity transformations and under translations together before a final refinement. Experimental validation shows that our method can successfully handle large displacement motions. Although we particularly focus on large displacement motions in this work, we make no sacrifice in terms of overall performance. In particular, our method ranks at the top of the Middlebury benchmark. <s> BIB010 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> Variational optical flow techniques allow the estimation of flow fields from spatio-temporal derivatives. They are based on minimizing a functional that contains a data term and a regularization term. Recently, numerous approaches have been presented for improving the accuracy of the estimated flow fields. Among them, tensor voting has been shown to be particularly effective in the preservation of flow discontinuities. This paper presents an adaptation of the data term by using anisotropic stick tensor voting in order to gain robustness against noise and outliers with significantly lower computational cost than (full) tensor voting. In addition, an anisotropic complementary smoothness term depending on directional information estimated through stick tensor voting is utilized in order to preserve discontinuity capabilities of the estimated flow fields. Finally, a weighted non-local term that depends on both the estimated directional information and the occlusion state of pixels is integrated during the optimization process in order to denoise the final flow field. The proposed approach yields state-of-the-art results on the Middlebury benchmark. <s> BIB011 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> Since two years there is a recent trend in optical flow estimation to improve the results of state-of-the-art variational methods by applying additional filtering steps such as median filters, bilateral filters, and non-local techniques. So far, however, the application of such filters has been restricted to two-frame optical flow methods. In this paper, we go beyond this two-frame case and investigate the usefulness of such filtering steps for multi-frame optical flow estimation. Thereby we consider both the application to single flow fields as well as the filtering of the entire spatio-temporal flow volume. In this context, we propose the use of a joint trilateral filter that processes all flow fields simultaneously while imposing consistency of joint flow structures at the same time. Evaluations on the Middlebury benchmark clearly demonstrate the success of our filtering strategy. Achieving rank 3, our method yields state-of-the art results and significantly outperforms the baseline method providing considerably sharper results. <s> BIB012 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> In this paper we present a novel non-rigid optical flow algorithm for dense image correspondence and non-rigid registration. The algorithm uses a unique Laplacian Mesh Energy term to encourage local smoothness whilst simultaneously preserving non-rigid deformation. Laplacian deformation approaches have become popular in graphics research as they enable mesh deformations to preserve local surface shape. 
In this work we propose a novel Laplacian Mesh Energy formula to ensure such sensible local deformations between image pairs. We express this wholly within the optical flow optimization, and show its application in a novel coarse-to-fine pyramidal approach. Our algorithm achieves the state-of-the-art performance in all trials on the Garg et al. dataset, and top tier performance on the Middlebury evaluation. <s> BIB013 </s> Scene Flow Estimation: A Survey <s> Optical flow <s> This paper proposes an optical flow algorithm by adapting Approximate Nearest Neighbor Fields (ANNF) to obtain a pixel level optical flow between image sequence. Patch similarity based coherency is performed to refine the ANNF maps. Further improvement in mapping between the two images are obtained by fusing bidirectional ANNF maps between pair of images. Thus a highly accurate pixel level flow is obtained between the pair of images. Using pyramidal cost optimization, the pixel level optical flow is further optimized to a sub-pixel level. The proposed approach is evaluated on the middlebury dataset and the performance obtained is comparable with the state of the art approaches. Furthermore, the proposed approach can be used to compute large displacement optical flow as evaluated using MPI Sintel dataset. <s> BIB014
|
Optical flow is a two-dimensional motion field. The global variational Horn-Schunck (H-S) method and the local total-least-squares (TLS) Lucas-Kanade (L-K) method have led the optical flow and scene flow fields over the years BIB001 BIB002 . Early works were studied and categorized, with quantitative evaluation models, by Barron and Otte BIB004 BIB003 . Afterwards, Brox implemented the coarse-to-fine strategy to deal with large displacements BIB005 , while Sun studied the statistics of optical flow methods to find the best modeling choices BIB007 . Baker proposed a thorough taxonomy of optical flow methods and introduced the Middlebury dataset for evaluation BIB006 , together with comparisons of error evaluation methodologies, statistics, and datasets. Optical flow estimation has by now reached a promising status. A segmentation-based method using an approximate nearest-neighbor field to handle large displacements currently ranks at the top of the Middlebury benchmark in terms of both endpoint error (EPE) and average angular error (AAE) BIB010 , with EPE varying from 0.07 px to 0.41 px and AAE from 0.99° to 2.39° across the different sequences. A similar method reached promising results as well BIB014 . Moreover, a variety of methods achieve top-tier performance while each addressing a different problem. Rushwan utilized a tensor-voting method to preserve discontinuities BIB011 . Xu introduced a novel extended coarse-to-fine optimization framework for large displacements BIB008 , while Stoll combined feature matching with variational estimation to keep small-displacement areas from being compromised BIB009 ; he also introduced a multi-frame method utilizing a joint trilateral filter BIB012 . To handle non-rigid optical flow, Li proposed a Laplacian mesh energy formulation which combines both Laplacian deformation and mesh deformation BIB013 .
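Since the H-S formulation BIB001 remains the template for the variational methods surveyed above, a minimal single-scale NumPy sketch of its Jacobi iteration is given below as a didactic baseline. It deliberately omits the coarse-to-fine pyramids, robust penalties, and filtering steps that the modern methods add, so it is a sketch of the classic scheme rather than a competitive implementation.

    import numpy as np
    from scipy.ndimage import convolve, correlate

    def horn_schunck(I1, I2, alpha=15.0, n_iter=100):
        """Classic single-scale Horn-Schunck optical flow between two grayscale frames."""
        I1, I2 = I1.astype(np.float64), I2.astype(np.float64)
        # Forward-difference stencils averaged over both frames, applied as
        # correlations so the kernels act exactly as written.
        kx = np.array([[-0.25, 0.25], [-0.25, 0.25]])
        ky = np.array([[-0.25, -0.25], [0.25, 0.25]])
        kt = np.array([[0.25, 0.25], [0.25, 0.25]])
        Ix = correlate(I1, kx) + correlate(I2, kx)
        Iy = correlate(I1, ky) + correlate(I2, ky)
        It = correlate(I2, kt) - correlate(I1, kt)
        # Weighted local average of the flow, used by the smoothness term.
        avg = np.array([[1/12, 1/6, 1/12],
                        [1/6,  0.0, 1/6 ],
                        [1/12, 1/6, 1/12]])
        u, v = np.zeros_like(I1), np.zeros_like(I1)
        for _ in range(n_iter):
            u_bar, v_bar = convolve(u, avg), convolve(v, avg)
            # Jacobi update derived from the Euler-Lagrange equations of the
            # quadratic data-plus-smoothness energy.
            common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
            u = u_bar - Ix * common
            v = v_bar - Iy * common
        return u, v

Many of the refinements discussed above, such as coarse-to-fine warping BIB005 , non-local filtering BIB007 , and feature-match integration BIB009 , can be read as replacing one term or step of this basic scheme.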
|