Andrea will explore recent advances in Memory Management related to the KVM Virtualization Hypervisor and the Kernel technologies that, when properly combined, create the Container abstraction.
Andrea will provide a high-level perspective on the most notable milestones in the long-term evolution of the Linux Virtual Memory and Virtualization subsystems. In addition, Andrea will explore recent advances in Memory Management related to the KVM Virtualization Hypervisor and the Kernel technologies that, when properly combined, create the Container abstraction. Virtualization and Containers are the Linux Kernel foundations leveraged by Kubernetes, OpenShift, oVirt and OpenStack, and Andrea will explore some of the tradeoffs between the two.
Habitat is a new Open Source project for building and maintaining cloud native applications. It provides a build environment and a self-contained runtime for your apps. This talk will provide an introduction to the Habitat tools and workflow.
Container Orchestration Systems make for a great operational experience when deploying and managing containers. But that’s only part of the story when running containers in production. How do you build containers that contain only what you need (no build systems/tools, for example)? How do you orchestrate the configuration of your application after the containers have been launched? How do you make it easy to modify an application config while keeping the containers immutable? How can you give your developers a means to declare dependencies for their applications?
Habitat, our open-source project for application automation, simplifies container management by packaging applications in a compact, atomic, and easily auditable format that makes it easier to deploy your application on various container runtimes, natively on the system, or with Habitat’s own built-in runtime supervisor. This talk will provide an introduction to the open source Habitat project, its tools, and the methods it uses to produce artifacts for your immutable applications.
The Linux 4.x series introduced a powerful new programmable tracing engine (BPF) that lets you actually look inside the kernel at runtime. This talk will show you how to exploit this engine in order to debug problems or identify performance bottlenecks in a complex environment like a cloud.
This talk will cover the latest Linux superpowers that let you see what is happening “under the hood” of the Linux kernel at runtime. I will explain how to exploit these “superpowers” to measure and trace complex events at runtime in a cloud environment. For example, we will see how to measure the latency distribution of filesystem I/O and the details of storage device operations, such as individual block I/O request timeouts or TCP buffer allocations, investigate the stack traces of certain events, identify memory leaks, find performance bottlenecks, and a whole lot more.
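Tools built on this engine typically aggregate latencies into power-of-two histograms directly in kernel space, so only the summary crosses into userspace. A minimal Python sketch of that bucketing logic (the function names are illustrative, not any real tool's API):

```python
# Sketch of the log2 histogram bucketing that BPF-based latency tools
# (e.g. bcc's biolatency) perform in kernel space. Names are illustrative.

def log2_bucket(latency_us):
    """Return the power-of-two bucket index for a latency value."""
    bucket = 0
    while (1 << (bucket + 1)) <= latency_us:
        bucket += 1
    return bucket

def histogram(samples_us):
    """Aggregate latency samples into log2 buckets, like a BPF map would."""
    hist = {}
    for s in samples_us:
        b = log2_bucket(s)
        hist[b] = hist.get(b, 0) + 1
    return hist

# 5us and 7us fall in the [4, 8) bucket, 100us in [64, 128)
print(histogram([5, 7, 100]))
```

In a real BPF tool only the per-bucket counters are copied out periodically, which is what makes tracing millions of events per second cheap.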
This talk presents an approach to writing user-space audio drivers that expose their functionality as REST services rather than as kernel modules.
At a point in history where everything is connected, and REST is increasingly becoming a common language of interaction between software components, why not use the REST paradigm to “talk” directly to the hardware? This talk presents an approach to writing user-space audio drivers that expose their functionality through REST services.
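As a rough illustration of the idea, here is a minimal Python sketch of a user-space "driver" exposing a control as a REST endpoint. The /volume resource and its JSON shape are invented for illustration; a real driver would talk to the audio hardware behind these handlers.

```python
# Minimal sketch of a user-space "driver" exposing a control as a REST
# endpoint instead of a kernel module. The /volume resource is hypothetical.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

STATE = {"volume": 50}  # stands in for real hardware state

class AudioHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/volume":
            body = json.dumps(STATE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), AudioHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = json.load(urlopen("http://127.0.0.1:%d/volume" % server.server_port))
print(reply)
server.shutdown()
```

Any client that speaks HTTP can now query the "hardware", which is exactly the interoperability argument the talk makes.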
Cgroups and Namespaces are the shoes and shorts of the container race, not in any particular order. They have been around for a while, but not many see the usage and power they have. The talk is a collection of recipes showing how these were used to solve infrastructure problems I have encountered.
Cgroups and Namespaces are the building blocks of containers and resource isolation. It’s probably wise to know how to use them before you go about porting your infrastructure to containers: it helps you understand what will work and what won’t. It’s like reading up on relational algebra before doing relational database modelling. Having worked in the infrastructure space for over a decade, I would like to talk about scenarios that I faced and how cgroups and namespace isolation addressed those situations. The talk is aimed at aspiring, beginner and intermediate level system engineers, to help them understand their turf better.
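A good first step in exploring cgroups is to look at which cgroups a process belongs to, reported in /proc/&lt;pid&gt;/cgroup. A small Python sketch that parses that format (the sample content is hard-coded so the sketch runs anywhere):

```python
# Parse the /proc/<pid>/cgroup format: one line per hierarchy, in the form
# hierarchy-id:comma-separated-controllers:cgroup-path.
# The sample below is hard-coded (illustrative container ID) so the
# sketch is self-contained.
SAMPLE = """\
12:memory:/docker/3f4e
5:cpu,cpuacct:/docker/3f4e
1:name=systemd:/docker/3f4e
"""

def parse_proc_cgroup(text):
    """Map controller name -> cgroup path, one entry per controller."""
    result = {}
    for line in text.strip().splitlines():
        _, controllers, path = line.split(":", 2)
        for ctrl in controllers.split(","):
            result[ctrl] = path
    return result

print(parse_proc_cgroup(SAMPLE)["memory"])
```

Pointing the parser at open("/proc/self/cgroup").read() on a real host shows immediately whether a process is confined by a container runtime or running in the root cgroup.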
Raw packet capture is necessary for network monitoring and troubleshooting. With the advent of fast networks, the PF_RING framework has been introduced to accelerate packet capture and transmission on commodity hardware. This talk aims to introduce PF_RING with an eye on containers and namespaces.
Raw packet capture is necessary for network monitoring and troubleshooting; however, on modern networks it is not possible to capture and transmit packets at wire speed using general-purpose operating systems. For this reason, about a decade ago, PF_RING was introduced to accelerate packet capture and analysis on commodity hardware. Today PF_RING has a modular architecture, supporting almost all network adapters, including commercial network adapters specialised in packet capture activities.
With the advent of containers, process isolation has become extremely easy and effective, to the point that even the use of ordinary virtual machines has in some cases been reconsidered. Containers are an operating-system-level virtualization method for running multiple isolated Linux systems on a single host. Isolation is provided by features like namespaces in the Linux kernel. Namespaces isolate system resources including the network, in addition to process IDs, hostnames, user IDs, and filesystems.
This talk aims to introduce the PF_RING framework, with an eye on containers, to see what exactly happens under the hood with respect to raw packet capture and network namespaces.
Xen is an Open Source, Type 1, OS agnostic, hypervisor. Back in 2003, it paved the way to the cloud, by inventing paravirtualization. Nowadays, it is what runs Amazon EC2, and it’s about to go into cars! So, come and learn about Xen’s architecture, and how it evolved during all these years.
The talk will cover the following topics:
- a (quick) introduction to virtualization, and the challenges it poses
- virtualization on x86 and ARM
- virtualization and its role within the cloud
- a description of the Xen architecture
- host/guest OS support & supported virtualization modes
- the latest innovations in the Xen world
- how the Xen community works
Focus will be put on explaining:
- the high level of security that the Xen architecture enables, by exposing a really small attack surface
- how Xen is a fit for server consolidation, client virtualization (for the security-paranoid), cloud computing, and mobile/embedded systems
Containers provide increased security through isolation and rule-based access control. While this is a great improvement, this proved to be a challenge at Datadog for effectively instrumenting and monitoring containerised workloads. In this talk, we will go through several of the technical issues we encountered while developing container-aware instrumentation, and how what we learned can be leveraged to improve your deployment’s security and performance.
Cgroup hierarchies: limits and accounting
Kernel namespacing: what do --net, --pid, --privileged imply?
Host-local traffic through Unix Domain Sockets: performance gains and origin detection thanks to ancillary data
How to secure your Docker socket?
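On the ancillary-data point above: over a Unix domain socket the kernel itself can tell you exactly which process is on the other end, via the Linux-specific SO_PEERCRED socket option, which is what makes reliable origin detection possible. A self-contained Python sketch using a socketpair:

```python
# Origin detection over a Unix domain socket: SO_PEERCRED (Linux-specific)
# returns the peer's credentials as struct ucred { pid, uid, gid }.
# Unlike a network address, these cannot be spoofed by the peer.
import os
import socket
import struct

a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# struct ucred is three native ints: pid, uid, gid
creds = b.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                     struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)

print("peer pid=%d uid=%d gid=%d" % (pid, uid, gid))
a.close(); b.close()
```

Since both ends of the socketpair live in this process, the reported pid matches os.getpid(); in an agent, the same call identifies which container or service connected.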
The introduction of ARM-based machines in the networking/server market segments has fostered a standardization process of the ARM software ecosystem, aimed at providing a software stack on top of which virtualization and cloud services can be built seamlessly using existing hypervisors and standard software libraries.
This talk will highlight the process through which the ARM Linux kernels, starting from ARM 32-bit kernels up to the latest ARM 64-bit ones, integrated firmware standards such as device tree, UEFI, ACPI, Trusted Firmware and its PSCI (Power State Coordination Interface) implementation, in order to explain the ongoing effort aimed at building machines suitable for supporting standardized virtualization and cloud services in the ARM ecosystem.
The talk will also provide details on how the ARM v8 architecture, ARM IPs (IOMMU) and buses such as PCI Express targeted at building ARM enterprise systems were integrated in the ARM 64-bit kernel code and device drivers to support standard virtualization software stack subsystems (e.g. VFIO).
The talk will introduce: 1. the basic concepts of a real-time operating system, highlighting the difference between “fast” and “predictable” systems; 2. the advantages and limits of using the RT Patch; 3. how to program real-time tasks in user space; 4. the latency results that can be obtained on x86_64.
The basic concepts of a real-time operating system will be introduced, highlighting the difference between “fast” and “predictable” systems. The explanations will be accompanied by practical examples that illustrate the concept of predictability (e.g. read-write locks, priority inversion).
A very brief overview will be given of the options for building a real-time system that can exploit the potential of Linux: 1. a heterogeneous system with real-time Linux on one CPU/core and another operating system on another CPU/core; 2. a real-time operating system that runs Linux as one of its tasks; 3. patches to the Linux source code to obtain ‘total’ predictability (the RT Patch) directly in Linux, namely: executing interrupts in kernel threads; implementing rtmutexes with priority inheritance to solve the priority inversion problem; converting all the spinlocks and rwlocks in the kernel into rtmutexes.
The advantages and limits of the RT Patches will be illustrated. The presentation will conclude with a practical case based on an Intel architecture and a standard Linux distribution, showing: - the tweaks needed to configure the BIOS and Linux - the precautions to take when programming real-time tasks in user space - how latencies are measured under various usage conditions - the results that can be obtained.
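The priority inversion problem that rtmutexes with priority inheritance solve can be illustrated with a toy timeline: a low-priority task L holds a lock that the high-priority task H needs, and a medium-priority task M preempts L, so H ends up waiting for M as well. The time units below are arbitrary and purely illustrative:

```python
# Toy timeline illustrating priority inversion and why the RT Patch's
# priority inheritance helps. Three tasks: L (low) holds a lock,
# H (high) needs it, M (medium) just burns CPU. All numbers are
# arbitrary time units chosen for illustration.

def finish_time_of_H(inheritance, crit_section=2, m_work=5):
    t = 0
    # L has entered its critical section when H arrives and blocks on the lock.
    if inheritance:
        # L is boosted to H's priority: M cannot preempt the critical section.
        t += crit_section          # L finishes the critical section
        t += 1                     # H runs its own work
        # (M runs afterwards; it no longer delays H)
    else:
        # M preempts L while L still holds the lock.
        t += m_work                # all of M's work delays H
        t += crit_section          # only then can L release the lock
        t += 1                     # H finally runs
    return t

print("without inheritance:", finish_time_of_H(False))
print("with inheritance:   ", finish_time_of_H(True))
```

Without inheritance, H's completion time depends on how long the unrelated task M runs, which is exactly the loss of predictability the talk warns about.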
Docker is a great technology that allows developers to build and deploy the infrastructure of an application in one source code image, but security is one of its biggest challenges. In this talk, we present the best practices and lessons learned from security reviews of docker image deployments.
These could be the main talking points:
1- Introduction to the docker security ecosystem, examining the main parts of a docker application.
2- Tools for auditing docker images to detect vulnerabilities, such as docker-bench-security and lynis.
The goal of these tools is to detect potential vulnerabilities in docker images/containers and to monitor running docker containers in order to detect anomalous activities.
3- Other tools for testing the security of a docker container.
We can use tools such as Jenkins/TravisCI for automated testing, and Coveralls to ensure all lines of code inside the docker image are tested.
4- Security best practices around deploying Docker containers in production.
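As a toy illustration of the kind of checks tools like docker-bench-security automate, here is a Python sketch that scans a Dockerfile for a few well-known red flags. The rules below are a small illustrative subset, not any real tool's rule set:

```python
# Toy Dockerfile audit: a tiny, illustrative subset of the checks that
# real auditing tools automate. The sample Dockerfile is invented.
DOCKERFILE = """\
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl
ADD app.tar.gz /opt/app
EXPOSE 22
"""

def audit_dockerfile(text):
    findings = []
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    if not any(l.startswith("USER ") for l in lines):
        findings.append("container runs as root (no USER instruction)")
    for l in lines:
        if l.startswith("FROM") and l.endswith(":latest"):
            findings.append("unpinned base image (:latest tag)")
        if l.startswith("ADD "):
            findings.append("ADD used where COPY would do")
        if l.startswith("EXPOSE 22"):
            findings.append("SSH port exposed inside a container")
    return findings

for f in audit_dockerfile(DOCKERFILE):
    print("WARN:", f)
```

Running checks like these in CI, before an image ever reaches production, is one of the best practices the talk advocates.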
During this talk, we will briefly recall the characteristics of the Linux SCHED_DEADLINE real-time scheduler, then we will illustrate two extensions recently proposed to add the reclaiming and the frequency scaling features, respectively. Moreover, we will show how a set of tools have been used to check the correctness and the performance of the implementation.
Since release 3.14, the Linux kernel has featured the SCHED_DEADLINE scheduler, specifically designed for embedded real-time systems. This scheduler lets you specify the amount of CPU runtime to assign to a task and how frequently the task must receive it in order to meet its timing constraints. Recently, our research group has proposed a couple of extensions to this scheduler: to add the reclaiming of unused CPU runtime (i.e. the GRUB algorithm, merged in kernel 4.13) and to reduce energy consumption through frequency scaling (i.e., GRUB-PA). Besides summarizing the characteristics of the scheduler and how to use it, we will describe these two extensions and the design choices made during the implementation. Finally, we will show how various open-source (e.g., ftrace, kernelshark, LISA) and commercial tools were used on a multi-core ARM platform to check the correctness of the implementation.
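One detail worth noting: SCHED_DEADLINE performs an admission test before accepting a task, rejecting it when the total utilisation (runtime/period summed over all deadline tasks) would exceed the available CPU bandwidth. A Python sketch of that bandwidth check; the 95% cap mirrors the kernel's default real-time bandwidth limit:

```python
# Sketch of the SCHED_DEADLINE admission (bandwidth) test: the sum of
# runtime/period over all deadline tasks must stay within the CPU
# capacity, otherwise sched_setattr() fails with EBUSY.

def admits(tasks, cpus=1, cap=0.95):
    """tasks = [(runtime_us, period_us), ...]; cap mirrors the kernel's
    default 95% real-time bandwidth limit."""
    utilisation = sum(runtime / period for runtime, period in tasks)
    return utilisation <= cpus * cap

# 10ms every 100ms + 30ms every 50ms = 0.1 + 0.6 = 0.7 -> admitted
print(admits([(10_000, 100_000), (30_000, 50_000)]))
# adding 30ms every 100ms pushes utilisation to 1.0 -> rejected on 1 CPU
print(admits([(10_000, 100_000), (30_000, 50_000), (30_000, 100_000)]))
```

Reclaiming (GRUB) does not change this admission test; it lets admitted tasks consume the runtime that other admitted tasks leave unused.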
A Network of Namespaces (NoN) interconnects network namespaces running on different hosts as if they were on the same (virtual) Local Area Network. It is possible to set up and maintain a NoN using VLANs, veth, kernel bridge definitions,… but it would be daunting work for system administrators.
VXVDE and VXVDEX implement zero-configuration NoN. Starting a namespace connected to a NoN is as simple as typing a command like: “vdens vxvde://“. This new approach is fast (with roughly the same performance figures as the kernel’s VXLAN implementation) and it runs on vanilla Linux kernels.
VXVDEX provides NoN with access control.
Users can grant or deny network-related ambient capabilities to their processes (e.g. using cado commands: cado is like sudo, but it grants rights at the capability level).
This talk introduces the concept of NoN, gives some examples of usage scenarios and provides a live demo of the tools.
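For a flavour of what travels on the wire: VXVDE uses the VXLAN encapsulation format, in which a 24-bit VXLAN Network Identifier (VNI) tells virtual networks apart. A Python sketch of building and parsing the 8-byte VXLAN header defined in RFC 7348 (helper names are illustrative):

```python
# Build and parse the 8-byte VXLAN header (RFC 7348):
#   byte 0: flags (0x08 = "VNI present"), bytes 1-3: reserved,
#   bytes 4-6: 24-bit VNI, byte 7: reserved.
import struct

def vxlan_header(vni):
    # flags in the top byte of the first word; VNI in the upper
    # 24 bits of the second word.
    return struct.pack("!II", 0x08 << 24, vni << 8)

def parse_vni(header):
    flags, word = struct.unpack("!II", header)
    assert flags >> 24 == 0x08, "not a valid VXLAN header"
    return word >> 8

hdr = vxlan_header(42)
print(len(hdr), parse_vni(hdr))
```

Each encapsulated Ethernet frame follows this header inside a UDP datagram, which is what lets VXVDE interoperate with vanilla kernels.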
Programmers talk about building an OS or developing a new programming language, but why never about building a File System? Is it even a thing? How hard can it be? There is more to it than meets the eye, and with this talk I want to show how complex yet fun it can be.
Introduction: Operating Systems & Programming Languages - who hasn’t dreamt of building one? But the complexity and time investment is something very few of us can put in. However, we never hear people talk about wanting to build a File System. I believe the reason for that is that we think it is part of the OS, but to be honest it need not be. In my opinion, building a File System gives you all the technical challenges you might expect in building an OS, but with an earlier payout. (Imagine a USB stick with your own file format that no one else can access.) In this talk, I want to go through the basic concepts one might need to understand to implement a File System on their own.
Planned Talking Points
- Why build a FS?
- What is a FS? Program? Format? Data?
- How & where does a FS write data? Write to file? Write to disk?
- Reasoning about a FS: how does a FS interpret data; data in memory vs. data on disk
- Data structures: super block; i-node; direct, indirect and double indirect blocks; visualization of a large file
- Standard values used by a FS: disk block size; file system block size
- File system operations under the microscope: initializing a FS; mounting & unmounting a FS; creating, reading from, writing to, deleting and renaming files & directories
- How to use your own File System: a file on disk (quick and easy); Filesystem in UserSpacE (FUSE) - your gateway to let others use your FS
- References: some suggestions on where to find more information
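The direct/indirect/double-indirect pointer scheme in the outline above directly determines the maximum file size a classic Unix-style i-node can address. A small Python sketch of the arithmetic, using ext2-like illustrative defaults (12 direct pointers, 4-byte block pointers, 4 KiB blocks):

```python
# Maximum file size addressable by a classic Unix-style i-node:
# direct pointers address blocks directly, an indirect block holds a
# full block of pointers, a double-indirect block holds pointers to
# indirect blocks. Default values are ext2-like and illustrative.

def max_file_size(block_size=4096, ptr_size=4,
                  direct=12, indirect=1, double_indirect=1):
    ptrs_per_block = block_size // ptr_size       # 1024 with these defaults
    blocks = (direct
              + indirect * ptrs_per_block
              + double_indirect * ptrs_per_block ** 2)
    return blocks * block_size

# 12 direct + 1024 indirect + 1024*1024 double-indirect blocks of 4 KiB
print(max_file_size() / 2**30, "GiB")
```

Working through this arithmetic also explains the "visualization of a large file" point: almost all of a big file's blocks are reached through the double-indirect tree.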
In this talk, we will learn about the evolution of software development over the last 50 years and how it has led us to the patterns we use today. Understanding the past allows us to appreciate the present, and we will look at the approaches through each decade since the 1970s. We will take a quick look at how languages have evolved from assembler to flat procedural languages like C to attempts to model real world subjects with objects. Why rising complexity in software has driven us to look at the way we test our systems. How provisioning has changed from a manual install with floppy disks to modern infrastructure as code and why demand and the requirements for elasticity have driven this need. We will also take a quick look at some predictions for the future and the direction that the industry is heading.
Takeaways: By the end of this talk, you will have a keen appreciation of the current patterns in modern software development and the history of their evolution including: Packages and package management, Modern security, Code level testing, Integration testing, Infrastructure as code, Running applications at scale, Designing for failure, Continuous deployment
Special session of all Linux Professional Institute (LPI) certification exams during LinuxLab 2017.
Please note you will need a separate ticket to access the exam: https://www.eventbrite.it/e/biglietti-lpi-examlab-linuxlab-firenze-2017-39649442550
Image capture is one of the broadest and most complex fields of today’s computing applications. Capturing and displaying images with an embedded platform poses additional challenges, introduced by the rapidly increasing complexity of the dedicated hardware blocks often found on modern Systems on Chip designed for mobile and industrial computing. Using real-world examples of image sensors, connection buses and processing blocks, this presentation provides an overview of current industry-standard technologies, with an introduction to the Video4Linux2 kernel framework for driver development and its userspace APIs.
Talk outline:
- light, colors, pixels
- image sensors
- the Video4Linux2 framework
- video capture drivers
- image capture in userspace
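One small, self-contained piece of the V4L2 userspace API: pixel formats are identified by FOURCC codes, four ASCII characters packed into a 32-bit value. A Python rendering of the kernel's v4l2_fourcc() macro:

```python
# FOURCC pixel-format codes as used by Video4Linux2: four ASCII bytes
# packed little-endian into a 32-bit integer, mirroring the kernel's
# v4l2_fourcc() macro.

def v4l2_fourcc(a, b, c, d):
    return ord(a) | ord(b) << 8 | ord(c) << 16 | ord(d) << 24

def fourcc_name(code):
    """Recover the four characters from a FOURCC code."""
    return "".join(chr((code >> s) & 0xFF) for s in (0, 8, 16, 24))

YUYV = v4l2_fourcc('Y', 'U', 'Y', 'V')  # packed YUV 4:2:2
print(hex(YUYV), fourcc_name(YUYV))
```

These are the codes a capture application negotiates with the driver through the VIDIOC_S_FMT ioctl when choosing a pixel format.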
For years network traffic monitoring has focused on protocols and IP addresses/ports. Today users demand more behaviour-oriented tools able to characterise user traffic and prevent device/IoT-specific data exchanges. This talk shows how to achieve this using open source software on embedded systems.
ntopng is an open source network traffic application based on nDPI, a library for deep-packet inspection (both available as open source at http://github.com/ntop ). The talk describes the challenges of modern, content/user-oriented network traffic monitoring, where we need to move from the traditional packet-oriented paradigm (IP X contacted host Y) to user-oriented (user A is talking on a Skype call with user B) and IoT-aware (my television is trying to send an email, is this allowed?) traffic patterns. This talk describes the challenges of these monitoring activities, and how to make them efficient enough to run on cheap embedded devices. The core talk topics include network traffic monitoring, Linux netfilter, and embedded devices (including the RaspberryPI).
How to convert a traditional web infrastructure based on apache/nginx and a database into a container- and microservice-based infrastructure on the Mesosphere platform.
The talk describes a hypothetical migration process from the old platforms hosting internal web applications and sites (typically based on apache/nginx and a database) towards a container-based platform, specifically Marathon + Mesos. The talk takes a structural point of view, analysing the old and above all the new platform, and also showing how applications can evolve on a platform of this kind, oriented towards microservices and APIs.
Thousands of payment terminals are nowadays connected to the network, and security and its enforcement are top priorities to avoid threats. Operations with payment terminals need to rely on practical, easy-to-use tools that guarantee a high level of security, as in Android payment terminals.
In this talk we will describe advanced features to ensure security in payment terminals. We will give a general introduction to Android payment terminals and describe their main differences from old-style payment solutions. The talk will cover and analyze the following topics:
- Secure Boot, with reference to the IMX6 and OMAP4 architectures
- Network security
- SELinux and hardening enforcement
- WiFi and Bluetooth restrictions
- Strong application signature verification

Each topic will include a theoretical analysis and very practical examples from working experience on actual products, such as PCI-certified tablet-based touch-screen payment terminals. Those terminals are already deployed in the market, and are in fact considered the world’s first to run a customized Android Lollipop OS on custom hardware.
Designing custom hardware, though it might seem intimidating at first, can simplify many activities. It solves certain issues, but also raises many others. Paweł will present a fresh approach to providing remote access (including device flashing, debugging and power management) to embedded devices.
This talk will introduce novices to the topic of remote access to devices and previous attempts at providing it. The presentation will also introduce components of the new Tizen GNU/Linux distribution’s testing laboratory: Boruta (board farm management system), Weles (LAVA-compatible light testing framework), Perun (binary image testing system) and MuxPi (successor of the SD MUX board, showcased during the Embedded Linux Conference 2016 in San Diego). It will hopefully encourage discussion on this topic, or inspire attendees to try this open hardware design out in their own setups and share feedback on what can be further improved.
Multi-core, multi-ISA processors need complex SW infrastructures to be effectively used. The talk presents such an infrastructure with an emphasis on inter-process communication and the programming model it encourages. An HPC application that proves its capabilities and performance is also shown.
Several silicon vendors are making available multi-core chips where different cores have very different characteristics: some may be powerful microprocessors, others may be small microcomputers, others DSPs, not to mention GPUs and FPGAs. In general, they can be considered examples of multi-ISA architectures (architectures with a functional asymmetry). A typical use case for these architectures is the consolidation of the several computing nodes present in a car, which must handle tasks as different as the control of servo devices, the infotainment, and the connection to the internet and to GPS. Another use case is embedded HPC.

All these devices cannot be run by a single OS kernel, and an application must be distributed across a set of autonomous processing elements, each with its own run-time environment, and each with its own partition of the HW resources of the chip (I/O, memory, cache). If we want to use these platforms effectively, all the autonomous processing elements must be able to interact with each other. A multi-vendor, de facto standard for in-chip interprocessor communications already exists, rpmsg on shared memory, but it has several limits: • It is only the equivalent of a Data-link layer service. • Its API is different on different RTOSs. • On Linux, even though it is already supported as part of the main branch, it is implemented as a bus driver. Its API is accessible only from kernel space, and this is true in particular for the definition of service access points.

We have complemented rpmsg with two transport protocols, supporting respectively unreliable and reliable, message-based communications. Both protocols are interfaced on Linux via sockets, of type DGRAM and SEQPACKET respectively, and this allows users to create their own communication endpoints in the protocol family rpmsg. The same protocols have been implemented in a portable form for RTOSs as well, where they have also been made available through a socket-like API.
Finally, a remote procedure call environment has been developed on top of the socket API: this RPC environment has been derived from eRPC, but we have enhanced it in several ways, e.g. with the introduction of a global broker that makes it possible to relocate services across the different processing elements of the chip. All developments have been based either on SW provided by silicon vendors or on open source SW: Linux, FreeRTOS, OpenAMP (a portable version of rpmsg and its port on FreeRTOS and Zynq), and eRPC. All developments will become available as open source SW. As a demonstrator we have implemented a computer vision application for blob analysis, where the main program, running on the Linux-managed main processing unit of a TI Sitara AM572x, invokes via eRPC the functions of the TI image processing library that runs on the 2 DSPs of the HW platform.
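The SEQPACKET socket type mentioned above is what gives the reliable rpmsg transport its message-oriented semantics: reliable like a stream, but preserving message boundaries. Since the rpmsg protocol family needs the actual multi-core hardware, this Python sketch demonstrates the same semantics with an AF_UNIX socketpair:

```python
# SOCK_SEQPACKET semantics (as used by the reliable rpmsg transport):
# reliable and connection-oriented like a stream, but each recv()
# returns exactly one sent message, never a concatenation or a fragment.
# AF_UNIX stands in here for the hardware-backed rpmsg protocol family.
import socket

a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_SEQPACKET)
a.send(b"first message")
a.send(b"second")

m1 = b.recv(4096)  # exactly "first message"
m2 = b.recv(4096)  # exactly "second"
print(m1, m2)
a.close(); b.close()
```

Preserving message boundaries is what makes such a socket a natural carrier for RPC frames: the eRPC layer never has to re-segment a byte stream.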