Dependable and Adaptive Distributed Systems
10th DADS Track of the 30th ACM Symposium on Applied Computing
9th DADS 2014
8th DADS 2013
7th DADS 2012
6th DADS 2011
5th DADS 2010
4th DADS 2009
3rd DADS 2008
2nd DADS 2007
1st DADS 2006
April 13–17, 2015
The Symposium on Applied Computing has been a primary gathering forum for applied computer scientists, computer engineers, software engineers, and application developers from around the world. SAC 2015 is sponsored by the ACM Special Interest Group on Applied Computing and is hosted by Seoul National University, Kyungpook National University, Soongsil University, and Dongguk University. The SRC Program is sponsored by Microsoft Research.
The track provides a forum for scientists and engineers in academia and industry to present and discuss their latest research findings on selected topics in dependable and adaptive distributed systems. The track is structured in two sessions:
A Taxonomy of Reliable Request-Response Protocols
Naghmeh Ivaki, Nuno Laranjeiro and Filipe Araujo
Reliable request-response interactions, in which the server never executes a given request more than once, are being used to support business and safety-critical operations in diverse sectors, such as banking, e-commerce, or healthcare. This form of interaction can be quite difficult to implement, because the client, server, or communication channel may fail, potentially requiring diverse and complex recovery procedures. In this paper we address the following question: can we provide a meaningful taxonomy of reliable request-response protocols? We generate valid sequences of client and server actions, organize the generated sequences into a prefix tree, and classify them according to their reliability semantics and memory requirements. The tree reveals three families of protocols matching common real-world implementations that try to deliver exactly-once or at-most-once semantics. The strict organization of the protocols provides a solid foundation for creating correct services, and we show that it also serves to easily identify fallacies and pitfalls of existing implementations.
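The enumeration idea at the heart of the taxonomy — valid action sequences organized into a prefix tree and labeled with reliability semantics — can be pictured with a small sketch. This is an illustrative simplification, not the authors' implementation; the action names and the classification rule below are hypothetical.

```python
# Sketch: organize client/server action sequences into a prefix tree
# and attach a (toy) reliability label to each sequence.

def build_prefix_tree(sequences):
    """Insert each action sequence into a nested-dict prefix tree."""
    root = {}
    for seq in sequences:
        node = root
        for action in seq:
            node = node.setdefault(action, {})
    return root

def classify(seq):
    """Toy semantics: retrying without deduplication risks duplicate
    execution (at-least-once); retry plus dedup approximates
    exactly-once; no retry gives at-most-once."""
    if "retry" not in seq:
        return "at-most-once"
    return "exactly-once" if "dedup" in seq else "at-least-once"

sequences = [
    ("send", "ack"),
    ("send", "timeout", "retry", "ack"),
    ("send", "timeout", "retry", "dedup", "ack"),
]
tree = build_prefix_tree(sequences)
labels = {seq: classify(seq) for seq in sequences}
```

The shared prefixes (here, `send`) collapse in the tree, which is what lets the taxonomy group whole families of protocols under a common ancestor.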
Workload characterization model for optimal resource allocation in cloud middleware
Shruti Kunde and Tridib Mukherjee
With increasing focus on inter-operability across cloud offerings to leverage their disparate capabilities, it has become more and more important to enable a flexible framework for sharing heterogeneous resources in the cloud infrastructure. At the same time, it is imperative to be aware of the performance implications of hosting application workloads on different resources in order to guarantee Service Level Agreements (SLAs) to the applications. This paper focuses on experimental characterization of the performance implications of different heterogeneous resources in hosting big-data analytics application workloads (one of the most critical application classes in modern times). To create the knowledge on which the recommendations are based, we benchmark the performance of big-data analytics applications using a Hadoop cluster setup. Specifically, we study parameters of interest such as turnaround time and throughput, which are most likely to influence the choice of infrastructure for a particular application. Our experiments are conducted on varied platforms, both internal to Xerox and on external cloud providers. We present a model, based on our experiments, that facilitates the characterization of heterogeneous applications, thus enabling the cloud middleware to select an appropriate infrastructure and metrics in order to attain the desired SLA.
Modeling Dependable Systems with Continuous Time Bayesian Networks
In the domain of information systems, modeling for dependability is an established method. Most approaches dealing with structural or probabilistic modeling do not consider time information and handle only discrete data. But in reality, systems have time-varying behavior and numerous measures are continuous. In this paper, an approach for modeling dependable information systems for fault prediction is presented. The method considers time behavior and continuous variables. The technique is based on continuous time Bayesian networks (CTBN), which make assumptions about time to transition or time to failure feasible. A drawback of CTBN is that only discrete data is processed, so continuous variables have to be discretized. This is carried out by grouping measures with similar distributions, with the restriction that values from the continuous range are nearby. Furthermore, this technique is capable of performing a data reduction such that subsequent computations can be done with moderate hardware resources. Based on such preprocessed data, the structure of the Bayesian network (BN) is learned by a Max-Min Hill-Climbing (MMHC) algorithm. Known misbehavior (e.g., faults) is incorporated into the BN by introducing auxiliary variables. A structural model generated in this way forms the backbone of a continuous time Bayesian network. CTBN parameter estimation (e.g., of time characteristics) is then doable by established learning methods.
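The discretization step the abstract alludes to — mapping continuous measures onto a small set of states before CTBN learning — can be sketched as a simple binning pass. This is only an illustration; the paper groups values by distribution similarity rather than fixed thresholds, and the bin edges below are hypothetical.

```python
# Sketch: discretize a continuous measure into states by binning,
# as required before CTBN structure/parameter learning.
import bisect

def discretize(values, edges):
    """Map each continuous value to the index of its bin.
    `edges` are sorted thresholds; n edges yield n+1 states."""
    return [bisect.bisect_right(edges, v) for v in values]

edges = [0.3, 0.7]                          # two thresholds -> three states
states = discretize([0.1, 0.5, 0.9], edges)  # -> [0, 1, 2]
```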
Evaluation of an adaptive framework for resilient executions of high demanding Monte Carlo applications
Manuel Rodríguez-Pascual, Antonio Juan Rubio-Montero and Rafael Mayo-García
Solving certain calculations in time is crucial for some industrial, medical, or research areas. However, problems with high computational requirements are especially constrained by the availability and dependability of the computational resources. Over the last decade, Distributed Computing Infrastructures have consolidated as the platform that can address this issue. Grid and Cloud infrastructures can currently supply users with thousands of resources of different types. Nevertheless, despite the advances achieved, the nature of these infrastructures ultimately makes them unpredictable, especially grids. Consequently, users continuously experience failures and poor performance, which renders these infrastructures unfeasible for some calculations. An instrument to deal with this lack of dependability is to build adaptive algorithms specifically designed to increase the reliability of certain types of applications on these heterogeneous and dynamic infrastructures. In this work, the suitability of the Montera2 framework is evaluated for Monte Carlo calculations. For this purpose, the proposed approach is compared with the basic tools offered by current middleware.
Optimal Planning for Architecture-Based Self-Adaptation Via Model Checking of Stochastic Games
Javier Camara, David Garlan, Bradley Schmerl and Ashutosh Pandey
Architecture-based approaches to self-adaptation rely on architectural descriptions to reason about the best way of adapting the structure and behavior of complex software-intensive systems at runtime, either by choosing among a set of predefined adaptation strategies, or by automatically generating adaptation plans. While predefined strategy selection approaches have a low computational overhead and often facilitate dealing with uncertainty (e.g., by accounting explicitly for contingencies derived from unexpected outcomes of actions), they require additional designer effort for the specification of strategies and are unable to guarantee optimal solutions. In contrast, runtime plan generation is able to explore a richer solution space and provide optimal solutions in some cases, but is more limited when dealing with uncertainty, and incurs higher computational overheads. In this paper, we propose an approach to optimal adaptation plan generation for architecture-based self-adaptation via model checking of stochastic multiplayer games (SMGs). Our approach enables: (i) trade-off analysis among different qualities by means of utility functions and preferences, and (ii) explicit modeling of uncertainty both as probabilistic outcomes of adaptation actions and through explicit modeling of the environment. Building on the concepts embodied in the Rainbow framework for architecture-based self-adaptation, we illustrate the scalability of our approach on Znn.com, a benchmark case study that reproduces the typical infrastructure for a news website.
For details, see the SAC poster page.
Scalable Model for Dynamic Configuration and Power Management in Virtualized Heterogeneous Web Clusters
André F. Monteiro and Orlando Loques
This paper presents a model for dynamic configuration of Virtualized Application Servers (VAS) based on an algorithm of linear complexity, achieving the scalability needed for big data centers and cloud computing platforms. We use advanced virtualization techniques, such as agile cloning and co-allocation of VAS, enabling rapid configuration actions and fine-grained QoS control. We propose a sequential resource allocation that considers the power costs of the configuration actions. The goal is to minimize the power consumption of the processing environment while maintaining the quality of service (QoS) of the applications. The experiments evaluate our model against the Linux CPU governors and a state-of-the-art approach based on optimization. The results show that our model conserves up to 49.6% of the energy required by a cluster designed for a peak workload scenario, with a negligible impact on the applications' performance.
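As a rough illustration of power-aware sequential placement, one can assign each workload to an already-active server where it fits and power on a new server only when necessary. This is a much-simplified greedy first-fit sketch, not the paper's linear-complexity algorithm; the demands and capacity below are hypothetical.

```python
# Sketch: greedy first-fit placement that keeps the number of powered-on
# servers low, a stand-in for power-aware sequential resource allocation.

def place(demands, capacity):
    """Assign each demand to the first active server with room,
    powering on a new server only when none fits.
    Returns (assignment list, number of active servers)."""
    servers = []       # residual capacity of each active server
    assignment = []
    for d in demands:
        for i, free in enumerate(servers):
            if free >= d:
                servers[i] = free - d
                assignment.append(i)
                break
        else:
            servers.append(capacity - d)   # power on a new server
            assignment.append(len(servers) - 1)
    return assignment, len(servers)

assignment, active = place([0.5, 0.4, 0.6], 1.0)  # -> ([0, 0, 1], 2)
```

Packing workloads onto fewer active servers is what creates the opportunity to power down the rest, which is where the energy savings come from.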
An Experimental Evaluation of Machine-to-Machine Coordination Middleware
Filipe Campos and José Pereira
The vision of the Internet-of-Things (IoT) embodies the seamless discovery, configuration, and interoperability of networked devices in various settings, ranging from home automation and multimedia to autonomous vehicles and manufacturing equipment. As these applications become increasingly critical, the middleware coping with Machine-to-Machine (M2M) communication and coordination has to deal with fault tolerance and increasing complexity, while still abiding by the resource constraints of target devices. In this paper, we focus on configuration management and coordination of services in an M2M scenario. On one hand, we consider ZooKeeper, originally developed for cloud data centers, offering a simple file-system abstraction, and embodying replication for fault-tolerance and scalability based on a consensus protocol. On the other hand, we consider the Devices Profile for Web Services (DPWS) stack with replicated services based on our implementation of the Raft consensus protocol. We show that the latter offers adequate performance for the targeted applications while providing more flexibility.
Architectural Patterns for Software-based Fail-Operational Behavior of Cyber-Physical Systems
Dulcineia Penha and Gereon Weiss
To deal with fail-operational requirements in today's cyber-physical systems, engineers have to resort to concepts such as redundancy, monitoring, and special shutdown procedures. Hardware-based redundancy approaches are not applicable to many embedded systems domains like automotive systems, e.g., because of the prohibitive costs of the mechanisms. In this scenario, adaptability concepts can be used to fulfill these fail-operational requirements while enabling optimized resource utilization. However, the applicability of such concepts highly depends on the support for engineering during system development. We propose an approach to cope with the challenges of fail-operational behavior of CPS in which engineers are supported by design concepts for realizing safety, reliability, and adaptability requirements through the use of architectural patterns. The approach allows expressing concepts for fail-operational behavior, such as redundancy and graceful degradation, at the software architecture level. With our approach, the effort for developing adaptive CPS can be kept low by utilizing generic patterns for general and recurring safety-relevant mechanisms. This is highlighted by different automotive case studies which demonstrate the applicability of the introduced approach.
Karl M. Göschka (Main contact chair)
Vienna University of Technology
Institute of Information Systems
Distributed Systems Group
A-1040 Vienna, Austria
phone: +43 664 180 6946
fax: +43 664 188 6275
Karl dot Goeschka (at) tuwien dot ac dot at
Universidade do Minho
Computer Science Department
Campus de Gualtar
4710-057 Braga, Portugal
phone: +351 253 604 452 / Internal: 4452
fax: +351 253 604 471
rco (at) di dot uminho dot pt
Imperial College London
Department of Computing
South Kensington Campus
180 Queen's Gate
London SW7 2AZ, United Kingdom
phone: +44 (20) 7594 8314
fax: +44 (20) 7581 8024
prp (the at sign goes here) doc (dot) ic (dot) ac (dot) uk
University of Auckland
Department of Computer Science
Private Bag 92019
Auckland 1142, New Zealand
phone: +64 9 373 7599 ext. 86137
g dot russello at auckland dot ac dot nz
Paper submission: October 10, 2014 (11:59 PM Pacific Time; extended)
Author notification: November 30, 2014
Camera-ready papers: December 15, 2014
For general information about SAC, please visit: http://www.acm.org/conferences/sac/sac2015/
If you have further questions, please do not hesitate to contact us: email@example.com