Market analysts agree that serverless computing has strong market potential, with projected compound annual growth rates ranging from 21% to 28% through 2028 and a projected market value of $36.8 billion by that time. Although serverless computing has gained significant attention in industry and academia over the past years, there is still no consensus on its distinguishing characteristics and no precise understanding of how these characteristics differ from those of classical cloud computing. For example, there is no wide agreement on whether serverless is solely a set of requirements from the cloud user's perspective or whether it should also mandate specific implementation choices on the provider side, such as an autoscaling mechanism to achieve elasticity. Similarly, there is no agreement on whether serverless covers only the operational side, or whether it should also include specific programming models, interfaces, or calling protocols.
In this talk, we seek to dispel this confusion by evaluating the essential conceptual characteristics of serverless computing as a paradigm, while putting the various terms around it into perspective. We examine how the term serverless computing, and related terms, are used today. We trace the historical evolution leading to serverless computing, from mainframe virtualization in the 1960s through Grid and cloud computing up to today. We review existing cloud computing service models, including IaaS, PaaS, SaaS, CaaS, FaaS, and BaaS, discussing how they relate to the serverless paradigm.
Slides are available at https://go.uniwue.de/serverless (Password: "serverless")
From: Samuel Kounev.
Contact: samuel.kounev@uni-wuerzburg.de.
Unikernels pack applications, OS primitives, and drivers into a single binary that can be executed directly on top of a hypervisor, resulting in lean images, fast boot times, and a small attack surface. In theory, this sounds like an excellent isolation mechanism for serverless computing at the edge. Through empirical investigation, we evaluate whether current implementations hold up to this promise, comparing OSv, Nanos, gVisor, Docker, and Linux on Firecracker as FaaS isolation mechanisms at the edge.
Slides [PDF]
Tobias Pfandzelter.
Contact: tp@3s.tu-berlin.de.
Function-as-a-Service (FaaS) is a popular cloud computing model in which applications are implemented as workflows of multiple independent functions. While cloud providers usually offer composition services for such workflows, they do not support cross-platform workflows, forcing developers to hardcode the composition logic. Furthermore, FaaS workflows tend to be slow due to cascading cold starts, inter-function latency, and data download latency on the critical path. In this paper, we propose GeoFF, a serverless choreography middleware that executes FaaS workflows across different public and private FaaS platforms, including ad-hoc workflow recomposition. Furthermore, GeoFF supports function pre-warming and data pre-fetching. This minimizes end-to-end workflow latency by taking cold starts and data download latency off the critical path. In experiments with our proof-of-concept prototype and a realistic application, we were able to reduce end-to-end latency by more than 50%.
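The pre-warming idea can be sketched as follows. This is an illustrative sketch, not GeoFF's actual API: `invoke` is a hypothetical stand-in for an HTTP call to a deployed function, and the three step names are made up.

```python
from concurrent.futures import ThreadPoolExecutor


def invoke(name, payload=None, warm_only=False):
    """Stand-in for an HTTP call to a FaaS function; warm_only pings the
    function so the platform spins up an instance without doing work."""
    return None if warm_only else f"{name}({payload})"


def run_workflow(steps, data):
    with ThreadPoolExecutor() as pool:
        for i, step in enumerate(steps):
            # Take the next cold start off the critical path: ping the
            # following function while the current one is still executing.
            if i + 1 < len(steps):
                pool.submit(invoke, steps[i + 1], warm_only=True)
            data = invoke(step, data)
    return data


print(run_workflow(["f1", "f2", "f3"], "x"))  # → f3(f2(f1(x)))
```

By the time each step finishes, its successor has (ideally) already been warmed, so the sequential chain no longer pays one cold start per hop.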
Slides [PDF]
Valentin Carl.
Contact: carl@tu-berlin.de.
In a continuous deployment setting, Function-as-a-Service (FaaS) applications frequently receive updated releases, each of which can cause a performance regression. While continuous benchmarking, i.e., comparing benchmark results of the updated and the previous version, can detect such regressions, the performance variability of FaaS platforms necessitates thousands of function calls, making continuous benchmarking time-intensive and expensive. In this paper, we propose DuetFaaS, an approach which adapts duet benchmarking to FaaS applications. With DuetFaaS, we deploy two versions of a FaaS function in a single cloud function instance and execute them in parallel to reduce the impact of platform variability. We evaluate our approach against state-of-the-art approaches, running on AWS Lambda. Overall, DuetFaaS requires fewer invocations to accurately detect performance regressions than other state-of-the-art approaches. In 99.65% of evaluated cases, our approach provides smaller confidence interval sizes than the compared approaches, and can reduce the size by up to 98.23%.
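The core duet idea can be illustrated in a few lines. This is a minimal sketch, not the DuetFaaS implementation: the shared `noise` factor simulates platform variability hitting both co-located versions alike, which is exactly what makes paired measurements cancel it out.

```python
import random
import statistics


def run_pairs(old_fn, new_fn, n_pairs):
    """Run both versions back-to-back inside the same (simulated) function
    instance and record per-pair latency ratios, so that platform noise
    affects both measurements of a pair equally."""
    ratios = []
    for _ in range(n_pairs):
        noise = random.uniform(1.0, 1.5)  # shared platform variability
        t_old = old_fn() * noise
        t_new = new_fn() * noise
        ratios.append(t_new / t_old)      # noise cancels in the ratio
    return ratios


def regression_detected(ratios, threshold=1.05):
    # A median pairwise ratio above the threshold suggests a regression.
    return statistics.median(ratios) > threshold


pairs = run_pairs(lambda: 10.0, lambda: 12.0, 100)  # new version ~20% slower
print(regression_detected(pairs))  # → True
```

Because the noise multiplies both sides of each pair, far fewer invocations are needed than when benchmarking the two versions in separate, independently noisy instances.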
Slides [PDF]
Tim Dockenfuß.
Contact: dockenfuss@tu-berlin.de.
The massive growth of mobile and IoT devices demands geographically distributed computing systems for optimal performance, privacy, and scalability. However, existing edge-to-cloud serverless platforms lack location awareness, resulting in inefficient network usage and increased latency. In this paper, we propose GeoFaaS, a novel edge-to-cloud Function-as-a-Service (FaaS) platform that leverages real-time client location information for transparent request execution on the nearest available FaaS node. If needed, GeoFaaS transparently offloads requests to the cloud when edge resources are overloaded, thus, ensuring consistent execution without user intervention. GeoFaaS has a modular and decentralized architecture: building on the single-node FaaS system tinyFaaS, GeoFaaS works as a stand-alone edge-to-cloud FaaS platform but can also integrate and act as a routing layer for existing FaaS services, e.g., in the cloud. To evaluate our approach, we implemented an open-source proof-of-concept prototype and studied performance and fault-tolerance behavior in experiments.
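The routing decision at the heart of this design can be sketched as follows. The node names, locations, and the 0.9 load threshold are illustrative assumptions, not GeoFaaS internals.

```python
import math


def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))


def route(client_loc, nodes, cloud="cloud"):
    # Prefer the nearest edge node with spare capacity; transparently
    # fall back to the cloud when every edge node is overloaded.
    candidates = [(haversine_km(client_loc, n["loc"]), name)
                  for name, n in nodes.items() if n["load"] < 0.9]
    return min(candidates)[1] if candidates else cloud


nodes = {  # illustrative edge nodes with their current load
    "berlin": {"loc": (52.52, 13.40), "load": 0.50},
    "paris":  {"loc": (48.86, 2.35),  "load": 0.95},
}
print(route((52.40, 13.06), nodes))  # → berlin (paris is overloaded)
```

A client near Berlin is served by the Berlin node; a client near Paris is also sent to Berlin here, because the Paris node is over the load threshold and cloud offloading only kicks in when no edge node has capacity.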
Slides [PDF]
Mohammadreza Malekabbasi.
Contact: malekabbasi@tu-berlin.de.
Running microbenchmark suites often and early in the development process enables developers to identify performance issues in their application. Microbenchmark suites of complex applications can comprise hundreds of individual benchmarks and take multiple hours to evaluate meaningfully, making running those benchmarks as part of CI/CD pipelines infeasible. In this paper, we reduce the total execution time of microbenchmark suites by leveraging the massive scalability and elasticity of FaaS (Function-as-a-Service) platforms. While using FaaS enables users to quickly scale up to thousands of parallel function instances to speed up microbenchmarking, the performance variation and low control over the underlying computing resources complicate reliable benchmarking. We present ElastiBench, an architecture for executing microbenchmark suites on cloud FaaS platforms, and evaluate it on code changes from an open-source time series database. Our evaluation shows that our prototype can produce reliable results (~95% of performance changes accurately detected) in a quarter of the time (≤15min vs. ~4h) and at lower cost ($0.49 vs. $1.18) compared to cloud-based virtual machines.
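The fan-out idea can be sketched as below. This is a simplified illustration, not the ElastiBench architecture: `invoke_benchmark` stands in for one FaaS invocation running a single microbenchmark, and a thread pool simulates the platform's parallelism.

```python
import statistics
from concurrent.futures import ThreadPoolExecutor


def invoke_benchmark(bench):
    """Stand-in for one FaaS invocation executing a single microbenchmark;
    a real deployment would call a cloud function here."""
    name, workload = bench
    return name, workload()  # runtime reported back by the function


# A synthetic 100-benchmark suite with deterministic "runtimes".
suite = [(f"bench-{i}", lambda i=i: float(i % 7 + 1)) for i in range(100)]

# Fan the whole suite out at once instead of running it sequentially;
# on a FaaS platform this turns hours of wall-clock time into minutes.
with ThreadPoolExecutor(max_workers=32) as pool:
    results = dict(pool.map(invoke_benchmark, suite))

print(len(results), statistics.median(results.values()))  # → 100 4.0
```

The hard part, which this sketch omits, is the reliability side: because each instance runs on uncontrolled hardware, results must be aggregated statistically rather than trusted individually.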
Slides [PDF]
Trever Schirmer.
Contact: ts@3s.tu-berlin.de.
Serverless computing eases application deployment for developers, yet rightsizing serverless functions remains a challenge. Rightsizing a serverless function is necessary to ensure optimal cost and/or performance. In this work, we identify that using parametric regression can significantly simplify function rightsizing compared to current state-of-the-art techniques that use black-box optimization. With this insight, we build a tool called Parrotfish. Parrotfish achieves substantially lower exploration costs (1.81-9.96×) than state-of-the-art tools, while reducing the cost of running the functions by 25.74% on average.
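The benefit of a parametric model can be sketched as follows. This is not Parrotfish's actual model or interface; it assumes a common hyperbolic latency model t(m) = a/m + b for CPU-bound functions, hypothetical sample data, and a made-up 2-second latency objective.

```python
# Hypothetical exploration samples (memory size in MB -> mean duration in s);
# the key idea is that a handful of samples suffices to fit a parametric
# model, instead of black-box-probing many configurations.
samples = {128: 8.1, 512: 2.2, 1024: 1.2}

# Fit the model t(m) = a/m + b, i.e. a simple linear regression of
# duration against 1/memory.
xs = [1.0 / m for m in samples]
ts = list(samples.values())
n = len(xs)
x_bar, t_bar = sum(xs) / n, sum(ts) / n
a = (sum((x - x_bar) * (t - t_bar) for x, t in zip(xs, ts))
     / sum((x - x_bar) ** 2 for x in xs))
b = t_bar - a * x_bar

# Cost grows with allocated memory, so pick the smallest configuration
# whose predicted latency still meets the (assumed) 2-second objective.
configs = range(128, 10241, 64)
best = next(m for m in configs if a / m + b <= 2.0)
print(best)
```

Once a and b are fitted, every candidate configuration can be evaluated analytically, which is why exploration cost drops so sharply compared to invoking the function at each memory size.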
Slides [PDF]
Arshia Moghimi.
Contact: amoghimi@student.ubc.ca.
Over three years ago, we built SeBS, a serverless benchmarking suite, to address the need for an automatic, representative, and easy-to-use benchmarking framework for FaaS applications. SeBS has found applications in many research projects, and we integrated new features to address the ongoing changes in FaaS platforms, such as workflows and data movement. We continue the work to support new workloads and frameworks, and we are trying to identify upcoming trends and paradigm shifts to support researchers with reliable and reproducible benchmarks. Serverless needs an open, portable, standardized benchmarking framework to drive future progress, and we hope SeBS will help our community achieve that goal.
Slides [PDF]
Marcin Copik.
Contact: marcin.copik@inf.ethz.ch.
Data is everywhere, from healthcare to human infrastructure, driven by the surge of sensors and the proliferation of internet-connected devices. To meet this challenge, the data engineering domain has expanded monumentally in recent years in both research and industry. The discipline has also been dramatically impacted by Artificial Intelligence (AI) and Machine Learning (ML), spurring research on the speed, performance, and optimization of such processes. Traditionally, data engineering, Machine Learning, and Artificial Intelligence workloads have been executed on large clusters in a data center environment, requiring considerable investment in both hardware and maintenance. With the advent of the public cloud, it is now possible to run large applications across nodes without owning or maintaining hardware. Serverless computing has emerged in cloud and open-source varieties to meet such needs, allowing users to focus on application code and take advantage of bundled CPU and memory configurations without dealing primarily with the semantics of horizontal scalability and resource allocation. This talk discusses Serverless Cylon, a high-performance, distributed-memory parallel dataframe library developed to run on AWS serverless compute.
Slides [PDF]
Mills Staylor.
Contact: qad5gv@virginia.edu.
In this talk, we outline our work for a new autoscaling approach suited for microservices and function chains. Key characteristics of our approach include the focus on high-level service level objectives given as tail latencies, coordinated scaling for multiple services/functions at the same time, and increased explainability and self-monitoring.
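A tail-latency-driven scaling decision of this flavor can be sketched as below. This is an illustrative single-service controller under assumed thresholds, not the authors' algorithm (which additionally coordinates multiple services/functions); returning the reason alongside the decision hints at the explainability aspect.

```python
import math


def tail(samples, q=0.99):
    """Empirical q-quantile (nearest-rank) of a list of latency samples."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(q * len(s)))]


def desired_replicas(latency_ms, slo_ms, replicas):
    # Proportional controller on the tail-latency SLO: scale up roughly
    # in proportion to the SLO violation, scale down cautiously.
    p99 = tail(latency_ms)
    if p99 > slo_ms:
        step = max(1, math.ceil(replicas * (p99 / slo_ms - 1)))
        return replicas + step, f"p99 {p99}ms exceeds SLO {slo_ms}ms"
    if p99 < 0.5 * slo_ms and replicas > 1:
        return replicas - 1, f"p99 {p99}ms well under SLO {slo_ms}ms"
    return replicas, "within SLO"


print(desired_replicas([200.0] * 100, 100.0, 4))  # scales 4 -> 8
```

Driving the controller from the SLO itself, rather than from proxy metrics such as CPU utilization, is what lets the operator state the goal ("p99 below 100 ms") directly.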
Slides [PDF]
Martin Straesser.
Contact: martin.straesser@uni-wuerzburg.de.
There is high societal and business interest in building metaverses, a fundamentally new way for humans to interact seamlessly with a (largely) digital world. Novel use-cases with large societal impact, such as working remotely, travelling digitally, and interacting and gaming online, rely on recent technological leaps in extended reality (XR) hardware devices, and complex hardware and software ecosystems around these devices. However, because many metaverse ecosystems are currently emerging, understanding how these systems perform in practice is an open research challenge. This challenge is compounded by a lack of publicly available data describing the behavior of these systems. In this work, we address this challenge by designing Dizi, the first workload trace archive for metaverse applications. Dizi's archive includes user-input and resource utilization traces, and tools that partially automate the trace-collection process, enabling developers to efficiently explore performance trade-offs for metaverse applications. We show that Dizi has low overhead, offers high replay fidelity, and supports heterogeneous applications and devices. We perform an extensive set of real-world experiments using Dizi. Our main findings include that, surprisingly, using XR devices in a streamed setup is possible with network bandwidths as low as 20 Mbps, and that Meta's flagship Quest Pro performs similarly to the older Quest 2 in common applications. Dizi, including all traces and tools, follows the FAIR principles and Open Science best practices and will become publicly available upon project completion.
Slides [PDF]
Jesse Donkervliet.
Contact: j.donkervliet@gmail.com.
This report proposes a novel tool to evaluate autoscaling algorithms within Kubernetes environments. Kubernetes offers a variety of autoscaling options, categorized as pre-emptive (predictive) or reactive (responsive). Additionally, these algorithms can be classified as cluster autoscalers (adding/removing nodes) or workload autoscalers (adjusting pod replicas). This diversity creates challenges for deployment personnel, especially in Infrastructure as a Service (IaaS) contexts. IaaS providers lack visibility into deployed applications yet must adhere to service level agreements (SLAs), often leading to over-provisioning. Furthermore, the growing range of Kubernetes cluster types, including geographically distributed edge clusters, adds complexity. Considering these factors and the potential for multi-cluster algorithms, a vast number of permutations require testing. Our proposed tool addresses this gap by facilitating highly reproducible experimentation that maintains realistic application behavior. This enables the evaluation of various hypotheses to identify optimal autoscaling configurations.
Slides [PDF]
Ranjan Ojha.
Contact: ojha@zhaw.ch.
In the last decade, the development of flash technology has brought a giant leap in the performance of storage devices. Nowadays, high-end storage devices can deliver single-digit microsecond latency and several GiB/s of bandwidth. Together with Compute Express Link (CXL) technology, these fast storage devices can be attached to the PCIe link via a memory interface, providing several TiBs of memory space at a much lower cost than DDR memory. However, due to physical limitations, these CXL-enabled SSDs cannot provide the same level of performance as DDR memory attached to the CPU memory bus. Previous studies have shown the interface overhead of CXL disaggregated memory. In this study, we perform a first-of-its-kind performance characterization of real CXL-enabled SSDs. We investigate three research questions: (1) what is the baseline performance of CXL-enabled SSDs, e.g., latency and bandwidth; (2) how does the performance of the SSDs affect the end-to-end performance exposed to the host; and (3) how does the type of workload affect the performance of CXL-enabled SSDs?
Zebin Ren.
Contact: z.ren@vu.nl.