Ceph Community Newsletter, June 2020 Edition

thingee

Announcements

Virtual Ceph Days

Thank you to everyone who participated in our Virtual Ceph Day questionnaire. We are still finalizing a platform and details. If you were previously selected as a presenter for the planned Cephalocon 2020 event, consider submitting your talk to the CFP.

Ceph Tech Talk for July 2020: A Different Scale - Running small ceph clusters in multiple data centers

The Ceph community keeps the tech talks going in 2020! We’ve heard everything from introductions to the new Octopus release, which brings many improvements and enhancements, to exciting tales of solving the bug of the year in Ceph.

This month we decided that small Ceph clusters needed some attention, and Yuval Freund has experience running clusters at this scale across multiple data centers. Catch us live on July 23rd at 17:00 UTC!

On August 27th, hear about STS (Secure Token Service) in RGW from Pritha Srivastava.

If the time doesn’t work for you, make sure to subscribe to Ceph on YouTube and watch the recording at your convenience.

Subscribe to the community calendar

More on Ceph Tech Talks

User Survey 2019 Update

The user survey results are expected to be released on July 20th. We received 405 responses from 60 different countries. Thank you to everyone who took the time to fill out this information.

Ceph Octopus Shirts

Where are they? Your community manager is off to a bit of a late start this year, but the process is underway. The artwork is complete; Spreadshirt will be pressing and shipping the shirts, and finally providing us with a real storefront!

By the way, did we mention Spreadshirt deploys Ceph for their own internal use?

If you contributed to Ceph for the Octopus release, you should receive an email to request a free shirt sometime in July.

Project updates

Ceph Admin

  • batch backport May (1) (pr#34893, Michael Fritch, Ricardo Marques, Matthew Oliver, Sebastian Wagner, Joshua Schmid, Zac Dover, Varsha Rao)
  • batch backport May (2) (pr#35188, Michael Fritch, Sebastian Wagner, Kefu Chai, Georgios Kyratsas, Kiefer Chang, Joshua Schmid, Patrick Seidensal, Varsha Rao, Matthew Oliver, Zac Dover, Juan Miguel Olmo Martínez, Tim Serong, Alexey Miasoedov, Ricardo Marques, Satoru Takeuchi)
  • batch backport June (1) (pr#35347, Sebastian Wagner, Zac Dover, Georgios Kyratsas, Kiefer Chang, Ricardo Marques, Patrick Seidensal, Patrick Donnelly, Joshua Schmid, Matthew Oliver, Varsha Rao, Juan Miguel Olmo Martínez, Michael Fritch)
  • batch backport June (2) (pr#35475, Sebastian Wagner, Kiefer Chang, Joshua Schmid, Michael Fritch, shinhwagk, Kefu Chai, Juan Miguel Olmo Martínez, Daniel Pivonka)

Ceph Volume

  • add and delete lvm tags in a single lvchange call (pr#35452, Jan Fajerski; see the sketch after this list)
  • add ceph.osdspec_affinity tag (pr#35134, Joshua Schmid)
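
To illustrate the single-call tagging change, here is a minimal sketch of the pattern; the LV path and tag values are illustrative and ceph-volume’s internals differ, but the point stands: one `lvchange` invocation can carry several `--addtag`/`--deltag` flags instead of shelling out once per tag.

```python
import subprocess

# ceph-volume stores its metadata as LVM tags; lvchange accepts multiple
# tag operations per invocation, so adds and deletes batch into one call.
# The LV path and tag values below are illustrative.
lv_path = '/dev/ceph-vg/osd-block-0'
subprocess.run(
    [
        'lvchange',
        '--addtag', 'ceph.osd_id=0',
        '--addtag', 'ceph.osdspec_affinity=default_spec',
        '--deltag', 'ceph.osd_id=None',  # drop a stale tag in the same call
        lv_path,
    ],
    check=True,
)
```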

CephFS

  • allow pool names with hyphen and period (pr#35251, Ramana Raja)
  • bash_completion: Do not auto complete obsolete and hidden cmds (pr#34996, Kotresh HR)
  • cephfs-shell: Change tox testenv name to py3 (pr#34998, Kefu Chai, Varsha Rao, Aditya Srivastava)
  • client: expose Client::ll_register_callback via libcephfs (pr#35150, Jeff Layton)
  • client: fix Finisher assert failure (pr#34999, Xiubo Li)
  • client: only set MClientCaps::FLAG_SYNC when flushing dirty auth caps (pr#34997, Jeff Layton)
  • fuse: add the ‘-d’ option back for libfuse (pr#35449, Xiubo Li)
  • mds: Handle blacklisted error in purge queue (pr#35148, Varsha Rao)
  • mds: preserve ESlaveUpdate logevent until receiving OP_FINISH (pr#35253, songxinying)
  • mds: take xlock in the order requests start locking (pr#35252, “Yan, Zheng”)
  • src/client/fuse_ll: compatible with libfuse3.5 or higher (pr#35450, Jeff Layton, Xiubo Li)
  • vstart_runner: set mounted to True at the end of mount() (pr#35447, Rishabh Dave)

Dashboard

Orchestrator

  • Nautilus: DeepSea orchestrator now supports configuring the Ceph Dashboard (v14.2.3)
  • Work started on adding container support for the SSH orchestrator
  • Rook orchestrator now supports `ceph orchestrator host ls`

Rados

  • ceph-bluestore-tool: the ability to add/remove/resize db and wal for an existing bluestore osd
  • osd: pg merging has merged
  • osd, bluestore: new single osd_memory_target option to control osd memory consumption (obsoletes bluestore_cache_size; see the sketch after this list)
  • mon: track hardware device health for mons, just like osds
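
For the curious, the new memory option can be set cluster-wide with `ceph config set osd osd_memory_target <bytes>`. Here is the same thing as a minimal sketch through the Python `rados` bindings; the 4 GiB value and the conffile path are illustrative, and it assumes an admin-capable client:

```python
import json
import rados

# Assumes an admin-capable client and a default conffile location.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Equivalent to: ceph config set osd osd_memory_target 4294967296
    cmd = json.dumps({
        'prefix': 'config set',
        'who': 'osd',
        'name': 'osd_memory_target',
        'value': str(4 * 1024**3),  # 4 GiB per OSD daemon
    })
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    assert ret == 0, outs
finally:
    cluster.shutdown()
```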

Rados Block Device

  • Live image migration: an in-use image can be migrated to a new pool or to a new image with different layout settings with minimal downtime (see the sketch after this list).
  • Simplified mirroring setup: The monitor addresses and CephX keys for remote clusters can now be stored in the local Ceph cluster.
  • Initial support for namespace isolation: a single RBD pool can be used to store RBD images for multiple tenants
  • Simplified configuration overrides: global, pool-, and image-level configuration overrides now supported
  • Image timestamps: last modified and accessed timestamps are now supported
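
If you want to try these features out, here is a minimal sketch using the Python `rados` and `rbd` bindings. The pool names, image names, and the `tenant-a` namespace are illustrative, the conffile path assumes a default install, and error handling is omitted:

```python
import rados
import rbd

# Illustrative names; assumes a reachable cluster and a client with
# permissions on both pools.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    src = cluster.open_ioctx('old-pool')
    dst = cluster.open_ioctx('new-pool')
    r = rbd.RBD()

    # Live image migration: prepare links the source to the new target,
    # execute copies the data in the background, and commit removes the
    # source once the copy is complete.
    r.migration_prepare(src, 'my-image', dst, 'my-image')
    r.migration_execute(dst, 'my-image')
    r.migration_commit(dst, 'my-image')

    # Namespace isolation: carve a per-tenant namespace out of one pool.
    r.namespace_create(dst, 'tenant-a')
    dst.set_namespace('tenant-a')
    r.create(dst, 'tenant-image', 1 * 1024**3)  # 1 GiB image

    # Image timestamps: last modified and accessed times.
    with rbd.Image(dst, 'tenant-image') as img:
        print(img.modify_timestamp(), img.access_timestamp())

    src.close()
    dst.close()
finally:
    cluster.shutdown()
```

Note that clients should reopen the image from the target pool after the prepare step; commit only once nothing still references the source.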

Rados Gateway

  • add “rgw-orphan-list” tool and “radosgw-admin bucket radoslist …” (pr#34991, J. Eric Ivancich)
  • amqp: fix the “routable” delivery mode (pr#35433, Yuval Lifshitz)
  • anonymous swift to obj that don’t exist should 401 (pr#35120, Matthew Oliver)
  • fix bug where bucket listing end marker not always set correctly (pr#34993, J. Eric Ivancich)
  • fix rgw tries to fetch anonymous user (pr#34988, Or Friedmann)
  • fix some list buckets handle leak (pr#34985, Tianshan Qu)
  • gc: Clearing off urgent data in bufferlist, before (pr#35434, Pritha Srivastava)
  • lc: enable thread-parallelism in RGWLC (pr#35431, Matt Benjamin)
  • notifications: fix zero size in notifications (pr#34940, J. Eric Ivancich, Yuval Lifshitz)
  • notifications: version id was not sent in versioned buckets (pr#35254, Yuval Lifshitz)
  • radosgw-admin: fix infinite loops in ‘datalog list’ (pr#34989, Casey Bodley)
  • url: fix amqp urls with vhosts (pr#35432, Yuval Lifshitz)

Releases

Ceph Planet

Project Meeting Recordings

All meetings

Ceph Tech Talk

Ceph Code Walkthrough

Ceph Crimson/SeaStore OSD Weekly

Ceph Developers Monthly

Ceph DocUBetter

Ceph Performance Weekly

Ceph Orchestration

Ceph Science Working Group