July 22, 2019

v14.2.2 Nautilus released

This is the second bug fix release of the Ceph Nautilus release series. We recommend that all Nautilus users upgrade to this release. If you are upgrading from an older release of Ceph, follow the general guidelines for upgrading to Nautilus (see Upgrading from Mimic or Luminous).

Notable Changes

  • The no{up,down,in,out} related commands have been revamped. There are now two ways to set the no{up,down,in,out} flags: the old ceph osd [un]set <flag> command, which sets cluster-wide flags; and the new ceph osd [un]set-group <flags> <who> command, which sets flags in batch at the granularity of any CRUSH node or device class.
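
    For example, assuming two OSDs osd.0 and osd.1 and a CRUSH host named host1 (the IDs and the host name are illustrative), the flags can be managed either cluster-wide or per group:

    ceph osd set noout                           # cluster-wide (old form)
    ceph osd unset noout
    ceph osd set-group noout,noin osd.0 osd.1    # only these OSDs (new form)
    ceph osd unset-group noout,noin osd.0 osd.1
    ceph osd set-group noout host1               # an entire CRUSH node
    ceph osd unset-group noout host1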

  • radosgw-admin introduces two subcommands for managing expire-stale objects that might be left behind after a bucket reshard in earlier versions of RGW. One subcommand lists such objects and the other deletes them; see the troubleshooting section of the dynamic resharding docs for details.
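
    The exact command names are documented there; as a rough sketch, assuming the subcommands are objects expire-stale list and objects expire-stale rm and a bucket named mybucket, the workflow would look like:

    radosgw-admin objects expire-stale list --bucket mybucket
    radosgw-admin objects expire-stale rm --bucket mybucket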

  • Earlier Nautilus releases (14.2.1 and 14.2.0) have an issue where deploying a single new (Nautilus) BlueStore OSD on an upgraded cluster (i.e. one that was originally deployed pre-Nautilus) breaks the pool utilization stats reported by ceph df. Until all OSDs have been reprovisioned or updated (via ceph-bluestore-tool repair), the pool stats will show values that are lower than the
    true value. This is resolved in 14.2.2, such that the cluster only switches to using the more accurate per-pool stats after all OSDs are 14.2.2 (or later), are BlueStore, and (if they were created prior to Nautilus) have been updated via the repair function.
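
    A minimal sketch of updating one pre-Nautilus OSD via the repair function, assuming OSD id 0 with its data at the default path /var/lib/ceph/osd/ceph-0 (the OSD must be stopped while the repair runs):

    systemctl stop ceph-osd@0
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0
    systemctl start ceph-osd@0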

  • The default value for mon_crush_min_required_version has been changed from firefly to hammer, which means the cluster will issue a health warning if your CRUSH tunables are older than hammer. There is generally a small (but non-zero) amount of data that will move around by making the switch to hammer tunables; for more information, see Tunables.

    If possible, we recommend that you set the oldest allowed client to hammer or later. You can tell what the current oldest allowed client is with:

    ceph osd dump | grep min_compat_client
    

    If the current value is older than hammer, you can tell whether it is safe to make this change by verifying that there are no clients older than hammer currently connected to the cluster with:

    ceph features
    

    The newer straw2 CRUSH bucket type was introduced in hammer, and ensuring that all clients are hammer or newer allows new features only supported for straw2 buckets to be used, including the crush-compat mode for the Balancer.
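
    One way to apply the recommendation above, raising both the oldest allowed client and the CRUSH tunables profile to hammer (the tunables change triggers the data movement mentioned earlier), is:

    ceph osd set-require-min-compat-client hammer
    ceph osd crush tunables hammer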

Changelog

Nathan Cutler
