Ceph Days India

Bringing Ceph to India

Registration SOLD OUT

A full-day event dedicated to sharing Ceph's transformative power and fostering the vibrant Ceph community, together with the community in India!

The expert Ceph team, Ceph’s customers and partners, and the Ceph community join forces to discuss things like the status of the Ceph project, recent Ceph project improvements and roadmap, and Ceph community news. The day ends with a networking reception, to foster more Ceph learning.

The CFP is now open and registration is limited!

Important Dates

  • CFP Opens: 2023-02-21
  • CFP Closes: 2023-03-29
  • Speakers receive confirmation of acceptance: 2023-04-03
  • Schedule Announcement: 2023-04-05
  • Event Date: 2023-05-05

Schedule

9:00 - Keynote: Kickoff with Community updates

Gaurav Sitlani
IBM
9:20 - CephFS Under The Hood

This talk goes into detail on how the Ceph File System works under the hood. We start by explaining what CephFS is and how it has an edge over other distributed file systems. We then uncover the on-disk format, explaining in detail how and where CephFS stores its metadata and user data. Building on that, we introduce the concept of snapshots, for which the on-disk format is essential background. Lastly, we lightly touch upon the concept of "caps" (capabilities).
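
A concrete way to see the "where" of CephFS metadata is that it lives in its own RADOS pool, separate from the data pool. The sketch below (not part of the talk) lists the objects in that pool with the python-rados bindings; the pool name cephfs.myfs.meta is an assumption based on the default naming for a filesystem called myfs.

    # Minimal sketch, assuming a cluster reachable via /etc/ceph/ceph.conf and a
    # CephFS metadata pool named "cephfs.myfs.meta" (default naming for a
    # filesystem called "myfs"); adjust both to your deployment.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('cephfs.myfs.meta')  # metadata has its own pool
        try:
            # Directory objects, the MDS journal, session tables, etc. all appear
            # here as ordinary RADOS objects.
            for obj in ioctx.list_objects():
                print(obj.key)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()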

Venky Shankar

IBM India Pvt. Ltd.

10:05 - Ceph RBD integration with DPUs

The DPU runs the Ceph client, CRUSH, and librbd libraries to virtualize an RBD image and present it to the host as a PCIe-connected NVMe disk, much like NVMe-oF. All read and write requests to the cluster are converted from PCIe/NVMe commands into RADOS requests. A special daemon runs on the DPU cores and acts as a client to the Ceph cluster: it fetches the cluster maps, runs CRUSH, detects failures, redirects TCP connections to new OSDs to fetch data, and load-balances the traffic.
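
For context, the I/O path the DPU daemon embeds is the ordinary librbd one: open an image and issue reads and writes, which librbd and CRUSH translate into RADOS requests to the right OSDs. A minimal host-side sketch with the python-rbd bindings follows; the pool name "rbd" and image name "demo" are placeholders, not details from the talk.

    # Minimal librbd sketch: the same client logic a DPU would run behind its
    # NVMe front end. Pool "rbd" and image "demo" are placeholders.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    rbd.RBD().create(ioctx, 'demo', 1 * 1024**3)      # 1 GiB image
    with rbd.Image(ioctx, 'demo') as image:
        image.write(b'hello from the client', 0)      # turned into RADOS writes
        print(image.read(0, 21))                      # turned into RADOS reads

    ioctx.close()
    cluster.shutdown()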

Varada Raja Kumar Kari

AMD India Pvt Ltd

10:40 - When should we use cephadm or cephadm-ansible?

"In an Octopus release, the cephadm utility was introduced to properly manage a single Ceph cluster. However, if we need to scale the size of multiple Ceph clusters that are geographically distributed, or if we want to automate the steps for tuning a single cluster, we cannot use cephadm.

Therefore, to automate and manage simple tasks across multiple clusters or multiple on a single cluster, we can use Ansible to perform these tasks at scale by using cephadm-ansible instead of cephadm."

Kritik Sachdeva

IBM

11:00 - Break
11:20 - Critical Troubleshooting Tools for Rook Clusters

While Rook simplifies the deployment and management of Ceph in Kubernetes, it can still be challenging to troubleshoot issues that arise. Common issues include network connectivity problems, losing mon quorum, removing failed OSDs, and so on. This talk will present tools that let admins restore mon quorum from a single remaining healthy mon, remove failed OSDs, debug mon and OSD pods, and much more.

Subham Kumar Rai

Red Hat

11:55 - Enabling Read Affinity for workloads with Rook

Rook spreads OSDs across the nodes of the cluster to provide redundancy and availability, using node topology labels that are also added at the desired level in the CRUSH map. Currently, reads for containerised workloads (pods) are served from the primary OSD of the PG, which may be located on a different node, zone, or even region.

By leveraging the read affinity feature, reads are served from OSDs in proximity to the client, reducing data transfer and improving performance.
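
Outside of Kubernetes, the same idea can be exercised through client-side options; the sketch below is an assumption about one way to do so with librbd, not the Rook code path itself. It tells the client where it sits in the CRUSH topology and asks for localized reads (pool "rbd", image "demo", and the zone/host values are placeholders).

    # Rough sketch (assumption, not the Rook mechanism): a librbd client that
    # declares its CRUSH location and prefers nearby replicas for reads.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    # Set before connect(): the client's topology position and the read policy.
    cluster.conf_set('crush_location', 'zone=zone-a host=worker-1')
    cluster.conf_set('rbd_read_from_replica_policy', 'localize')
    cluster.connect()

    ioctx = cluster.open_ioctx('rbd')
    with rbd.Image(ioctx, 'demo') as image:
        data = image.read(0, 4096)   # served from a nearby OSD when possible

    ioctx.close()
    cluster.shutdown()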

Rakshit

IBM

12:20 - RADOS Gateway Integration with HashiCorp Vault

RGW integration with Vault makes encryption more flexible. While OSD encryption uses one key per underlying block device, encryption configured with a key management service becomes flexible through bucket policies: keys can now be unique at the bucket level or even the object level. Vault itself can be configured to be highly available and to provide encryption as a service. This ensures that the keys are stored safely within Vault, much like in the case of an HSM.
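
From the client's point of view, per-object keys are requested through standard S3 SSE-KMS headers once RGW is wired to Vault. The sketch below is an illustration only: the endpoint, credentials, bucket, and key ID are placeholders, not values from the talk.

    # SSE-KMS upload sketch: ask RGW to encrypt this object server-side with a
    # KMS-managed key (looked up in Vault). All identifiers are placeholders.
    import boto3

    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.com:8080',
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
    )

    s3.put_object(
        Bucket='secure-bucket',
        Key='report.pdf',
        Body=b'...object payload...',
        ServerSideEncryption='aws:kms',
        SSEKMSKeyId='my-vault-key',
    )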

K Gopal Krishna

croit GmbH

12:55 - Enhancing Observability using Tracing in Ceph

With the introduction of OpenTelemetry tracing in Ceph, we can identify abnormalities in your Ceph cluster more easily. This brings your Ceph cluster to a much-improved monitoring state, with visibility into its background distributed processes. This, in turn, adds to the ways Ceph can be debugged, "making Ceph more transparent" when identifying abnormalities and troubleshooting performance issues.
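
As a generic illustration of what a trace captures (not Ceph's internal instrumentation), the snippet below uses the OpenTelemetry Python SDK to emit a parent span with a nested child span, the same parent/child structure a traced distributed operation would export to a collector.

    # Generic OpenTelemetry example, printed to the console rather than sent to
    # a collector; not Ceph code.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("demo")

    with tracer.start_as_current_span("client_request") as parent:
        parent.set_attribute("op", "write")
        with tracer.start_as_current_span("sub_operation"):
            pass  # the traced sub-step would run here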

Deepika Upadhyay

Koor Technologies Inc.

1:15 - Break
2:00 - Quiz
2:15 - IBM Ceph and Storage Insights integration strategy and plan

Storage Insights integration will enable multi-cluster monitoring across multiple Ceph clusters, providing a single tool for monitoring high-level metrics such as cluster health and fullness, without drill-down or click-through. Most Ceph customers have more than one cluster deployed, and multi-cluster monitoring has long been a gap in Ceph management tooling, so the Storage Insights integration will help fill it. Storage Insights will be integrated into the call-home workflow of IBM Ceph, and IBM Ceph will in turn be integrated into the alerting workflow of Storage Insights. This near real-time alerting capability in Storage Insights will be a significant value add for Ceph.

Vijay Patidar

IBM

2:35 - Monitoring and Centralized Logging in Ceph

The objective of this talk is to highlight the various aspects and importance of two of the pillars of observability in a Ceph storage cluster: metrics and logs. We will cover the current architecture of metrics collection and logging, the technology stack used, and how you can easily deploy them in Ceph. The talk will also highlight the value of centralized logging, which makes it easy to view and manage logs from a dashboard view, and includes a short demo at the end.
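
On the metrics side, the pipeline starts at the ceph-mgr prometheus module, which serves a plain-text exposition endpoint (default port 9283) for Prometheus to scrape. A quick sanity check of that endpoint is sketched below; the mgr host name is a placeholder.

    # Fetch the metrics exposition the ceph-mgr prometheus module serves and
    # print a few cluster-level gauges. "mgr-host.example.com" is a placeholder.
    import requests

    resp = requests.get('http://mgr-host.example.com:9283/metrics', timeout=10)
    resp.raise_for_status()

    for line in resp.text.splitlines():
        if line.startswith('ceph_health_status') or line.startswith('ceph_cluster_total'):
            print(line)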

Avan Thakkar & Aashish Sharma

IBM

3:15 - Autoscaling with KEDA for Ceph Object Store aka RGW

Scaling your object store is complex. Payloads vary in size: objects can be as large as virtual machine images or as small as emails. They also vary in behaviour: some workloads mostly read, write, and list objects, others delete objects, and some keep them forever. Using CPU and RAM to autoscale the pods horizontally or vertically is limited and may have adverse effects. Treating our object store as a queueing system that converts HTTP requests into actions on disks may just be the solution!

Jiffin Tony Thottan

IBM

3:50 - External Rook Ceph Cluster

You may have an existing Ceph cluster that you want to integrate with Kubernetes, or you may want centralized Ceph management in a single cluster connected to multiple Kubernetes clusters. What's the solution? An external Rook-Ceph cluster.

An external cluster is a Ceph configuration that is managed outside of the local K8s cluster.

This lightning talk will give a quick overview of the Rook external cluster, including its deployment, a demo, and how the latest Ceph features can be used with it.

Parth Arora

IBM

4:15 - Break
4:45 - NVMeOF support for Ceph

Provide access to RBD volumes via generic NVMe block devices

Rahul Lepakshi

IBM

5:20 - Teuthology: Integration Test Framework

Teuthology is a test framework for Ceph that is used to run the vast majority of Ceph's tests. In the presentation we will talk about its infrastructure, its installation, and how to get started with it.

Subhashree Mishra & Yash Ajgaonkar

Red Hat

Quiz
5:50 - Ceph Days closing note

Join the Ceph announcement list, or follow Ceph on social media, for Ceph event updates.