Ceph Days London 2024

Bringing Ceph to London

Register Now!

A full-day event dedicated to sharing Ceph’s transformative power and fostering the vibrant Ceph community in London!

The expert Ceph team, Ceph’s customers and partners, and the Ceph community join forces to discuss the status of the Ceph project, recent improvements and the roadmap, and Ceph community news. The day ends with a networking reception to foster more Ceph learning.

Registration will be limited!

Important Dates

  • CFP Opens: 2024-06-10
  • CFP Closes: 2024-06-30
  • Speakers receive confirmation of acceptance: 2024-07-03
  • Schedule Announcement: 2024-07-08
  • Event Date: 2024-07-17

Schedule

8:00 AM - Check-in and Breakfast
9:00 AM - Welcome
Phil Williams (Canonical)
9:10 AM - Keynote: State of Ceph

A look at the newest Ceph release, current development priorities, and the latest activity in the Ceph community.

Neha Ojha (IBM)
9:30 AM - Crimson Project (Squid Release)

In this presentation, I will introduce the Crimson project and explain the rationale behind it. As Squid is the first release to include Crimson as a tech preview, the presentation will cover the current status and future goals of the project.

Matan Breizman (IBM)
10:00 AM - Taming a Cephalopod Swarm: Multi-cluster Monitoring Comes to the Ceph Dashboard

Traditionally matching Ceph core's fast pace, the Dashboard is steadily surpassing the CLI with enhanced features: monitoring, centralized logging, guided workflows... and the pioneering multi-cluster monitoring. One Ceph Dashboard to rule them all.

Ernesto Puerta (IBM)
10:30 AM - Tea/Coffee Break
11:00 AM - We ran out of IOPS! Adding NVMe devices to an overworked S3 service

Our Ceph service was deployed with all-HDD OSDs, intended for OpenStack Cinder volumes. We found radosgw/S3 was the use case that really took off, and we ran out of HDD IOPS performance in the rgw pools. Adding NVMe OSDs restored stability and sanity.

David Holland (Wellcome Sanger)
11:10 AM - Crimson: experiments to reach saturation

In this talk, we describe the experiments we conducted to gain an empirical understanding of the performance of Crimson (the new technology OSD for Ceph). We show our methodology and discuss the results, as well as next steps.

Jose Juan Palacios Perez (IBM)
11:20 AM - All things in moderation: Our experience applying cluster-wide rate limiting to Ceph object storage

We built a distributed QoS system to limit both request and data-transfer rates on a per-user level across our Ceph object storage clusters. We present learnings from a couple of years of using this to protect our clusters and our users' experience.

Jacques Heunis (Bloomberg)
11:50 AM - Unlocking Ceph's Potential with NVMe-oF Integration

Discover how NVMe-oF integrates with Ceph to enhance data storage efficiency. We'll cover implementation challenges and solutions for high availability, performance, and scalability. Join us to unlock Ceph's full potential with NVMe-oF.

Orit Wasserman (IBM)
12:20 PM - Lunch
1:30 PM - Integrating NVMe-oF with Ceph and Juju

This talk describes the development and usage of a Juju charm that lets users create NVMe-oF devices backed by RBD images in a scalable, user-friendly way while providing high-availability guarantees.

Luciano Lo Giudice (Canonical)
2:00 PM - Making the right hardware choices

With the increasing diversity of ever-changing hardware options available, this talk will look through some of those options and what makes for a sensible configuration for Ceph.

Darren Soothill (Croit)
2:30 PM - DisTRaC: Transient Ceph On RAM

DisTRaC is a fast and scalable open-source deployment tool for creating quick and efficient Ceph storage systems on high-performance computing resources utilising RAM. The talk introduces DisTRaC and shows use cases and results using this tool.

Gabryel Mason-Williams (Rosalind Franklin Institute)
2:40 PM - Ceph Performance Benchmarking and CBT (Ceph Benchmarking Tool) Improvement Plans

CBT (Ceph Benchmarking Tool) is a utility that benchmarks clusters to highlight the maximum performance of the system. This talk covers the CBT vision and improvement plans to simplify the tool and generate fully comprehensive reports with comparisons.

Lee Sanders (IBM)
2:50 PM - NVMe-oF and VMware Integrations for Ceph

Ceph NVMe over TCP unleashes the power of high-performing NVMe drives with the scalability of disaggregated storage. Find out how the new protocol exposes high-performing block access over networks, and how VMware environments can benefit from it!

Mike Bukhart (IBM)
3:00 PM - Snack Break
3:30 PM - The art of redundancy across failure domains

Proper cluster design is an art, especially with complex failure domains. This talk covers some aspects of role instance distribution, best practices for placing redundant instances, and minimizing cluster footprint while leaving room for future growth.

Matthias Muench (Red Hat)
4:00 PM - Next Generation Erasure Coding

Erasure coding has lower storage overheads than replicated pools, but does not perform as well for short random I/O. This talk looks at a variety of techniques that we are planning on implementing to improve erasure coding performance.

Bill Scales (IBM)
4:30 PM - Sunbeam and Ceph sitting in a tree

OpenStack Sunbeam is a re-think of how we deploy and operate OpenStack clouds, including how we integrate with Ceph in the form of MicroCeph. Discover more about the motivations for this project and what we're doing technically.

James Page (Canonical)
5:00 PM - Closing Remarks
Phil, Neha, Danny, Orit
5:15 PM - Networking Reception