Ceph Community Newsletter, July 2019 edition

thingee

Announcements

Ceph Upstream Documenter Opportunity

While the Ceph community continues to grow and the software improves, an essential part of our success will be a focus on improving our documentation.

We’re excited to announce a new contract opportunity, funded by the Ceph Foundation, to help with this initiative. Read more for the full contract description and how to apply.

Ceph User Survey Discussion

It's that time of the year again to put together this year's user survey. The user survey gives the Ceph community insight into how people are using Ceph and where we should be focusing our efforts. You can see last year's survey in this blog post. The previous set of questions and the answer options for this year's survey are being discussed on both the ceph-users mailing list and an etherpad.

New Ceph Foundation Member: SWITCH

Eight months ago the Ceph Foundation was announced with 31 founding organization members to create an open, collaborative, and neutral home for project stakeholders to coordinate their development and community investments in the Ceph ecosystem. Since then, three more members have joined, two as General members and one as an Associate member.

Today we’re excited to announce our newest Associate member of the Foundation, SWITCH. Read more

Don't forget your Nautilus release shirt

On March 19 we announced the release of Ceph Nautilus! Take a look at our blog post that captures the major features and upgrade notes. Watch the talk from Sage Weil, co-creator and project leader, on the state of the cephalopod. We're now pleased to announce the availability of official Ceph Nautilus shirts in the Ceph store!

Project updates

All changes for July

RADOS

  • The version of rocksdb has been updated to v6.1.2, which incorporates a number of fixes and improvements since 5.17.2.

  • The default number of concurrent bluestore rocksdb compactions has been increased from 1 to 2, which provides improved performance with OMAP-heavy write workloads.

  • Recovery of RGW metadata pools and CephFS metadata pools now has a higher default priority.

  • Several improvements in the crash module (see the example after this list):

    • "ceph crash ls-new" - reports info about new crashes

    • "ceph crash archive" - archives a crash report

    • "ceph crash archive-all" - archives all new crash reports

    • recent crashes raise a `RECENT_CRASH` health warning for two weeks by default; the duration can be controlled with "ceph config set mgr/crash/warn_recent_interval", and a duration of 0 disables the warning

    • crash reports are retained for a year by default; the retention period can be changed with "ceph config set mgr/crash/retain_interval"
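
As a quick, hedged sketch of how these pieces fit together (the crash ID below is a placeholder, and interval values are assumed to be in seconds):

    # list crash reports that have not yet been acknowledged
    ceph crash ls-new

    # inspect, then acknowledge a single crash (placeholder ID), or archive everything at once
    ceph crash info 2019-07-01_12:00:00.000000Z_00000000-0000-0000-0000-000000000000
    ceph crash archive 2019-07-01_12:00:00.000000Z_00000000-0000-0000-0000-000000000000
    ceph crash archive-all

    # tune the health-warning window (0 disables it) and the retention period
    ceph config set mgr mgr/crash/warn_recent_interval 0
    ceph config set mgr mgr/crash/retain_interval 31536000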

CephFS

  • An attempt to revive inline data support in the kernel client driver was determined not to be worth the effort. Current thinking is to remove file inlining from CephFS in Octopus.
  • `ceph fs volume` and related interfaces are reaching feature completeness in 14.2.2 (see the example after this list).
  • Support for the kernel client to reconnect after being blacklisted is nearly complete. The current patchset is in testing. See the related threads on ceph-devel for more information.
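
A brief, hedged sketch of the volumes interface; the volume and subvolume names are placeholders:

    # create a CephFS volume and list existing volumes
    ceph fs volume create vol1
    ceph fs volume ls

    # carve out a subvolume and fetch the path clients should mount
    ceph fs subvolume create vol1 subvol1
    ceph fs subvolume getpath vol1 subvol1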

RBD

  • Long-running background tasks can now be scheduled to run via the MGR 'rbd_support' module (see the example after this list)
    • ceph rbd task add remove <image-spec>
    • ceph rbd task add flatten <image-spec>
    • ceph rbd task add trash remove <image-id-spec>
    • ceph rbd task add migration execute <image-spec>
    • ceph rbd task add migration commit <image-spec>
    • ceph rbd task add migration abort <image-spec>
    • ceph rbd task cancel <task-id>
    • ceph rbd task list [<task-id>]
    • Note: these will also be integrated with the standard 'rbd' CLI in the near future
  • RBD parent image persistent cache
    • Data blocks from parent images can now optionally be cached persistently on librbd client nodes
    • This offloads read requests for "golden" base images from the cluster
    • More information
  • RBD online image (re-)sparsify now supports EC-backed data pools
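
A short, hedged example of driving these background tasks, assuming a cloned image named mypool/myimage (a placeholder name):

    # schedule flattening of a cloned image as a background MGR task
    ceph rbd task add flatten mypool/myimage

    # check on scheduled and running tasks
    ceph rbd task list

    # online re-sparsify via the existing rbd CLI, now usable with EC-backed data pools
    rbd sparsify mypool/myimage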

Dashboard

Rook

Orchestrator

  • Nautilus: DeepSea orchestrator now supports configuring the Ceph Dashboard (v14.2.3)

  • Work started on adding container support for the SSH orchestrator

  • Rook orchestrator now supports `ceph orchestrator host ls`
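
For context, a hedged sketch of wiring the Nautilus orchestrator CLI to the Rook backend (module and backend names as understood at the time of writing):

    # enable the Rook integration and the orchestrator CLI module
    ceph mgr module enable rook
    ceph mgr module enable orchestrator_cli

    # point the CLI at the Rook backend and list the hosts it knows about
    ceph orchestrator set backend rook
    ceph orchestrator host ls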

Releases

Ceph Planet

Project Meeting Recordings

Ceph Developers Monthly

Ceph DocUBetter

Ceph Performance Weekly

Ceph Orchestration

Recent events we were at

Cephalocon Barcelona

Our recent big Ceph event! We enjoyed hearing user stories and presentations from members of the community. Watch over 70 videos now!

Upcoming conferences

Please coordinate your Ceph CFPs with the community on our CFP coordination pad.