Ceph Days Vancouver
Bringing Ceph to Vancouver
A full-day event dedicated to sharing Ceph's transformative power and fostering the vibrant Ceph community, hosted in Vancouver and co-located with the OpenInfra Summit!
The expert Ceph team, Ceph's customers and partners, and the Ceph community join forces to discuss the state of the Ceph project, recent improvements and the roadmap, and Ceph community news. The day ends with a networking reception to foster further Ceph learning.
The CFP is now open and registration is limited!
Important Dates
- CFP Opens: 2023-03-27
- CFP Closes: 2023-05-17
- Speakers receive confirmation of acceptance: 2023-05-19
- Schedule Announcement: 2023-05-22
- Event Date: 2023-06-15
|10:20||State of the Cephalopod|
|10:50||Ceph Solution Design Tool|
In this talk, OSNexus Founder & CEO Steve Umbehocker gives a live demo of the scale-out solution design tool used to build Ceph clusters.
|11:00||A Beginner's Guide to Ceph|
New to Ceph? You're not alone! Much like a seventh-grade science class, starting with explicit definitions of core concepts proves invaluable for grasping the bigger picture. Join us as we review the basics of "the future of storage".
|11:40||Storage in Containers - Introduction to Rook|
This talk introduces the Rook project to attendees of all levels and experience. Rook is an open-source, cloud-native storage operator for Kubernetes, providing the platform, framework, and support for Ceph to integrate natively with Kubernetes. A deep dive into the Ceph storage provider shows how Rook delivers stable block, shared file system, and object storage for your production data. Rook was accepted as a CNCF graduated project in October 2020.
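For orientation (not part of the talk itself), a minimal CephCluster custom resource is the kind of manifest Rook consumes to deploy Ceph on Kubernetes. The image tag, name, and storage settings below are illustrative, not a recommended production configuration:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v17   # illustrative release tag
  dataDirHostPath: /var/lib/rook   # where config/state lives on each host
  mon:
    count: 3                       # three monitors for quorum
  storage:
    useAllNodes: true              # consume devices on every node...
    useAllDevices: true            # ...and every empty device found
```

Applying a manifest like this with `kubectl` causes the Rook operator to bring up monitors and OSDs on the matching nodes.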
|12:20||Improving Business Continuity for an Existing Large Scale Ceph Infrastructure|
The IT Department at CERN operates a large-scale computing and storage infrastructure for scientific data processing and service provisioning to its user community. Ceph is a critical part of this picture. We have recently evaluated different Ceph features to offer solutions for High(er) Availability for Business Continuity and Disaster Recovery. Here we report on plans for RBD backups, object storage multisite replication, and CephFS backups to S3 (and tape) via a restic-based orchestrator.
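As a rough sketch of the building blocks that abstract mentions, the commands below show where each feature is driven from. Pool, repository URL, and path names are illustrative, and all of these assume working clusters and credentials:

```shell
# RBD backups via mirroring: enable per-image mirroring on a pool
rbd mirror pool enable mypool image

# Object storage multisite replication is configured through RGW
# zones and zonegroups; e.g. inspect the current period:
radosgw-admin period get

# CephFS backups to S3 with restic (an orchestrator drives runs like these):
restic -r s3:https://s3.example.com/cephfs-backups init
restic -r s3:https://s3.example.com/cephfs-backups backup /mnt/cephfs/project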
|14:00||Ceph Integration in Red Hat OpenStack Platform: ceph-ansible to cephadm and Nautilus to Quincy|
- Ceph integration overview
- Ceph deployment tools used in Red Hat OpenStack Platform: ceph-ansible to cephadm
|14:20||Lessons Learned in Ceph Operations for Science|
In 2013, CERN began using Ceph as reliable, flexible, future-proof storage for its on-premises cloud. From one 3 PB cluster, it has grown to support the lab's entire IT infrastructure, now offering 100 PB across data centres with use cases ranging from basic IT services to databases, HPC, and others. This talk recaps how Ceph is operated at that scale: the challenges in this environment, the performance and scalability improvements made along the way, and key lessons that may apply to Ceph users in the future.
Dan van der Ster
|15:00||Ceph troubleshooting in a containerized environment|
- Identifying problems
- Diagnosing problems
- Log collection and understanding logs
- Troubleshooting Ceph OSDs
- Troubleshooting the Ceph MGR
- Troubleshooting Ceph daemon memory leaks
- Troubleshooting crashing/segfaulting Ceph daemons
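The first steps above typically start from a handful of standard CLI checks. The commands below are a sketch of that workflow; they assume a running cluster (run them on a node or inside a toolbox container), and `osd.0` is a placeholder daemon name:

```shell
# First-line checks on any cluster:
ceph status                 # overall health, mon quorum, OSD up/in counts
ceph health detail          # expands each active health warning
ceph osd tree               # which OSDs are down, and where they live

# Recent daemon crashes are recorded by the crash module:
ceph crash ls
ceph crash info <crash-id>

# In a cephadm/containerized deployment, daemon logs go to the
# container runtime rather than /var/log/ceph by default:
cephadm logs --name osd.0   # wraps journalctl for the given daemon
```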
|15:40||Data Security and Storage Hardening in Rook and Ceph|
We explore the Ceph security model. Digging deeper into the stack, we examine hardening options appropriate to a variety of threat profiles, including defining a threat model, limiting the blast radius of an attack with separate security zones, encryption at rest and in flight using FIPS 140-2 validated ciphers, hardened builds and default configurations, and user access controls and key management. Data retention and secure deletion are also addressed.
|16:20||Testing S3 implementations: RGW & Beyond, client & server perspectives|
How do you test S3 implementations?
- How do such tests relate to RGW and s3-tests (client & server)?
- Recent work in testing
- Does a given S3 client or server handle requests in compliance with the spec? Where are the spec gaps? Where do requests and responses mismatch the spec?
This talk builds on recent RGW testing work:
- "RGW:S3 SDK Compatibility" Saptarshi Majumder work for GSoC 2021
- “S3 API Validation and Reporting (mazi-test)” - Joannah Nanjekye’s work as a DigitalOcean Intern in 2021 (Ceph Outreach alumni)
|16:50||Natively Scalable CephFS NFS Gateways with OpenStack Manila|
Consuming shared file system storage easily and securely in multi-tenant OpenStack clouds usually necessitates NFS gateways to CephFS. This solution is widely embraced, despite its scale limitations and the lack of the level of support that native Ceph cluster services enjoy. This presentation provides an update on ongoing work within the OpenStack, Ceph, and NFS-Ganesha communities to allow scaling the CephFS NFS gateway according to the deployer's needs.
Goutham Pacha Ravi & Carlos Eduardo da Silva
|17:30||Ceph Operations BoF|
Join the Ceph announcement list or follow Ceph on social media for Ceph event updates.