Ceph Community Newsletter, July 2019 edition
Announcements ¶
Ceph Upstream Documenter Opportunity ¶
While the Ceph community continues to grow and the software improves, an essential part of our success will be a focus on improving our documentation.
We’re excited to announce a new contract opportunity, funded by the Ceph Foundation, to help with this initiative. Read more for the full contract description and how to apply.
Ceph User Survey Discussion ¶
It's that time of the year again to put together this year's user survey. The user survey gives the Ceph community insight into how people are using Ceph and where we should be spending our efforts. You can see last year's survey in this blog post. The previous set of questions and the answer options that will be available are being discussed on both the ceph-users mailing list and the etherpad.
New Ceph Foundation Member: SWITCH ¶
Eight months ago the Ceph Foundation was announced with 31 founding organization members to create an open, collaborative, and neutral home for project stakeholders to coordinate their development and community investments in the Ceph ecosystem. Since then, three more members have joined, two as General members and one as an Associate member.
Today we’re excited to announce our newest associate member of the Foundation, SWITCH. Read more
Don't forget your Nautilus release shirt ¶
On March 19 we announced the release of Ceph Nautilus! Take a look at our blog post that captures the major features and upgrade notes. Watch the talk from Sage Weil, co-creator and project leader, on the state of the cephalopod. We're now pleased to announce the availability of official Ceph Nautilus shirts in the Ceph store!
Project updates ¶
Rados ¶
The version of rocksdb has been updated to v6.1.2, which incorporates a number of fixes and improvements since 5.17.2.
The number of concurrent bluestore rocksdb compactions has been changed to 2 from the earlier default of 1, which should improve performance for OMAP-heavy write workloads.
Recovery of RGW metadata pools and CephFS metadata pools now has a higher default priority.
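These two changes are new defaults, but both can be inspected or tuned per cluster. Below is a minimal sketch of how one might check them; the presence of max_background_compactions inside bluestore_rocksdb_options and the example pool name are assumptions, so verify against your release's documentation.

```sh
# Hedged sketch: inspect/tune the RADOS changes mentioned above.
# Option and pool names here are assumptions/examples, not a reference.

# Show the bluestore RocksDB option string on an example OSD (osd.0);
# the concurrent-compaction setting is assumed to appear as
# max_background_compactions=2 with the new default.
ceph config show-with-defaults osd.0 | grep bluestore_rocksdb_options

# Check and raise the recovery priority of a metadata pool
# (pool name "cephfs_metadata" is just an example).
ceph osd pool get cephfs_metadata recovery_priority
ceph osd pool set cephfs_metadata recovery_priority 5
```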
Several improvements in the crash module (a short usage sketch follows this list):
- "ceph crash ls-new" reports info about new crashes
- "ceph crash archive <id>" archives a single crash report
- "ceph crash archive-all" archives all new crash reports
- Recent crashes raise a ``RECENT_CRASH`` health warning for two weeks by default; the duration can be controlled with "ceph config set mgr/crash/warn_recent_interval <seconds>" (a duration of 0 disables the warning)
- Crash reports are retained for a year by default; the retention period can be changed with "ceph config set mgr/crash/retain_interval <seconds>"
CephFS ¶
- Attempt to revive inline support in the kernel client driver was determined not to be worth the effort. Current thinking is to remove file inlining from CephFS in Octopus.
- `ceph fs volume` and related interfaces are reaching feature completeness in 14.2.2 (a brief sketch follows this list).
- Support for the kernel client to reconnect after getting blacklisted is nearly complete. The current patchset is in testing. See related threads on ceph-devel for more information.
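As a rough illustration of the volumes interface mentioned above, the commands below create a volume and carve out a subvolume; the names are invented and exact flags may differ between point releases, so treat this as a sketch rather than a reference.

```sh
# Hedged sketch: the mgr "volumes" interface (names are examples).
ceph fs volume create myvol                    # creates a filesystem plus its pools
ceph fs subvolumegroup create myvol mygroup    # optional grouping of subvolumes
ceph fs subvolume create myvol mysub --group_name mygroup --size 10737418240
ceph fs subvolume getpath myvol mysub --group_name mygroup   # path to mount or export
```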
RBD ¶
- Long-running background tasks can now be scheduled to run via the MGR 'rbd_support' module (a usage sketch follows at the end of this section):
  - ceph rbd task add remove <image-spec>
  - ceph rbd task add flatten <image-spec>
  - ceph rbd task add trash remove <image-id-spec>
  - ceph rbd task add migration execute <image-spec>
  - ceph rbd task add migration commit <image-spec>
  - ceph rbd task add migration abort <image-spec>
  - ceph rbd task cancel <task-id>
  - ceph rbd task list [<task-id>]
  - Note: these will also be integrated w/ the standard 'rbd' CLI in the near future
- RBD parent image persistent cache
  - Data blocks for parent images are now optionally cached persistently on librbd client nodes
  - This offloads read requests for "golden" base images from the cluster
  - More information
- RBD online image (re-)sparsify now supports EC-backed data pools
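To make the task interface above concrete, here is a hedged sketch of scheduling and tracking a background removal; the image and pool names are invented, and output formats may differ by release.

```sh
# Hedged sketch: offloading a slow image removal to the rbd_support module.
# "mypool/big-image" is an invented example image spec.
ceph rbd task add remove mypool/big-image   # returns a task description with an id
ceph rbd task list                          # check progress of all scheduled tasks
ceph rbd task cancel <task-id>              # abort it if scheduled by mistake
```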
Dashboard ¶
Work on dashboard features for Ceph Octopus is progressing well - see https://pad.ceph.com/p/ceph-dashboard-octopus-priorities for the prioritized list.
Noteworthy dashboard features that were merged in the past 5 weeks:
- Prevent deletion of iSCSI IQNs with open sessions — https://github.com/ceph/ceph/pull/29133
- Show iSCSI gateways status in the health page — https://github.com/ceph/ceph/pull/29112
- Allow disabling redirection on standby Dashboards — https://github.com/ceph/ceph/pull/29088
- Integrate progress mgr module events into dashboard tasks list — https://github.com/ceph/ceph/pull/29048
- Provide user enable/disable capability — https://github.com/ceph/ceph/pull/29046
- Allow users to change their password on the UI — https://github.com/ceph/ceph/pull/28935
- Evict a CephFS client — https://github.com/ceph/ceph/pull/28898
- Display iSCSI "logged in" info — https://github.com/ceph/ceph/pull/28265
- Watch for pool PGs increase and decrease — https://github.com/ceph/ceph/pull/28006
- Allow viewing and setting pool quotas — https://github.com/ceph/ceph/pull/27945
- Silence Prometheus Alertmanager alerts — https://github.com/ceph/ceph/pull/27277
Ongoing feature development work:
- progress: support rbd_support module async tasks — https://github.com/ceph/ceph/pull/29424
- Controller models proposal — https://github.com/ceph/ceph/pull/29383
- Orchestrator integration initial works — https://github.com/ceph/ceph/pull/29127
- Force change the password — https://github.com/ceph/ceph/pull/28405
Rook ¶
ceph/ceph:v14.2.2 is out, and Rook is being updated to use it: https://github.com/rook/rook/pull/3489
Lots of work done on making Ceph work on Kubernetes PersistentVolumes
Orchestrator ¶
Nautilus: the DeepSea orchestrator now supports configuring the Ceph Dashboard (v14.2.3)
Work started on adding container support for the SSH orchestrator
Rook orchestrator now supports `ceph orchestrator host ls`
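For context, the orchestrator CLI in Nautilus is driven through a mgr module; the sketch below shows roughly how the Rook backend would be selected before running the host listing mentioned above, with the module and backend names given as my best understanding rather than a verified reference.

```sh
# Hedged sketch: pointing the orchestrator CLI at the Rook backend.
ceph mgr module enable rook                    # backend module (assumed name)
ceph mgr module enable orchestrator_cli        # CLI front-end (assumed name)
ceph orchestrator set backend rook             # select the backend
ceph orchestrator host ls                      # list hosts via the backend
```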
Releases ¶
Ceph Planet ¶
Project Meeting Recordings ¶
Ceph Developers Monthly ¶
Ceph DocUBetter ¶
Ceph Performance Weekly ¶
Ceph Orchestration ¶
Recent events we were at ¶
Cephalocon Barcelona ¶
Our recent big Ceph event! We enjoyed hearing user stories and presentations from members of the community. Watch over 70 videos now!
Upcoming conferences ¶
Please coordinate your Ceph CFPs with the community on our CFP coordination pad.
- Ceph Day CERN - September 16th
- Ceph Day London - October 24th