The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • May 2, 2019
    Part – 1 : BlueStore (Default vs. Tuned) Performance Comparison

    Acknowledgments We would like to thank BBVA, Cisco, and Intel for providing the cutting-edge hardware used to run a Red Hat Ceph Storage 3.2 all-flash performance POC. The tests and results presented in this blog series are a joint effort of the partnership formed by BBVA, Intel, Cisco, and Red Hat. All partners …Read more

  • April 30, 2019
    New in Nautilus: crash dump telemetry

    When Ceph daemons encounter software bugs, unexpected state, failed assertions, or other exceptional cases, they dump a stack trace and recent internal log activity to their log file in /var/log/ceph. On modern systems, systemd will restart the daemon and life will go on, often without the cluster administrator even realizing that there was a problem. This …Read more
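    In Nautilus, these crash reports can be inspected with the `ceph crash` manager commands, and anonymized reports can be sent upstream via the telemetry module. A minimal sketch (the crash ID shown is a placeholder, not a real report):

    ```shell
    # List crash reports collected from daemons on this cluster.
    ceph crash ls

    # Show the full metadata and stack trace for one report
    # (the ID below is illustrative).
    ceph crash info 2019-04-30_12:34:56.789012Z_example

    # Opt in to sending anonymized reports upstream.
    ceph mgr module enable telemetry
    ceph telemetry on
    ```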

  • April 29, 2019
    v14.2.1 Nautilus released

    This is the first bug fix release of the Ceph Nautilus release series. We recommend that all Nautilus users upgrade to this release. When upgrading from older releases of Ceph, the general guidelines for upgrading to Nautilus must be followed (Upgrading from Mimic or Luminous). Notable Changes: The default value for mon_crush_min_required_version has been changed from firefly to …Read more

  • April 18, 2019
    New in Nautilus: ceph-iscsi Improvements

    The ceph-iscsi project provides a framework, REST API, and CLI tool for creating and managing iSCSI targets and gateways for Ceph via LIO. It is the successor to, and a consolidation of, two formerly separate projects, ceph-iscsi-cli and ceph-iscsi-config, which were initially started in 2016 by Paul Cuzner at Red Hat. While this is not a new feature of Ceph Nautilus per se, improving …Read more
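    As a sketch of the gwcli workflow from the upstream documentation (the IQN, gateway hostname, and IP address below are placeholders):

    ```shell
    # Start the interactive shell provided by ceph-iscsi.
    gwcli

    # Inside gwcli: create an iSCSI target (example IQN).
    /> cd /iscsi-targets
    /iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw

    # Register a gateway node for the target
    # (hostname and IP are examples).
    /iscsi-targets> cd iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/gateways
    /iscsi-target...-igw/gateways> create ceph-gw-1 10.172.19.21
    ```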

  • April 12, 2019
    New in Nautilus: device management and failure prediction

    Ceph storage clusters ultimately rely on physical hardware devices (HDDs or SSDs) that can fail. Starting in Nautilus, management and tracking of physical devices is handled by Ceph itself. Furthermore, we’ve added infrastructure to collect device health metrics (e.g., SMART) and to predict device failures before they happen, either via a built-in pre-trained prediction model, or via …Read more
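    A minimal sketch of the new `ceph device` commands in Nautilus (the device ID shown is a placeholder):

    ```shell
    # List known devices and which daemons use them.
    ceph device ls

    # Fetch SMART health metrics for one device
    # (the device ID below is illustrative).
    ceph device get-health-metrics SEAGATE_ST4000NM0023_EXAMPLE

    # Enable periodic health-metric scraping and the built-in
    # (local) failure prediction mode.
    ceph device monitoring on
    ceph config set global device_failure_prediction_mode local
    ```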

  • April 12, 2019
    v12.2.12 Luminous released

    This is the twelfth bug fix release of the Luminous v12.2.x long term stable release series. We recommend that all users upgrade to this release. Notable Changes: In 12.2.11 and earlier releases, keyring caps were not checked for validity, so the caps string could be anything. As of 12.2.12, caps strings are validated and providing …Read more
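    For illustration, a well-formed caps string of the kind 12.2.12 now validates (the client name and pool are examples):

    ```shell
    # Create a key whose caps strings use valid syntax; since
    # 12.2.12, malformed caps are rejected rather than stored as-is.
    ceph auth get-or-create client.example \
        mon 'allow r' \
        osd 'allow rw pool=rbd'
    ```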

  • April 10, 2019
    Ceph Community Newsletter, March 2019 edition

    Announcements Nautilus stable release is out! On March 19 we announced the new release of Ceph Nautilus! Take a look at our blog post that captures the major features and upgrade notes. Cephalocon Barcelona, May 19-20: Registration and sponsor slots still available! We’re very excited for the upcoming Cephalocon in Barcelona! We have a convenient …Read more

  • April 9, 2019
    Cephalocon Barcelona

    Cephalocon Barcelona aims to bring together technologists and adopters from across the globe to showcase Ceph’s history and its future, demonstrate real-world applications, and highlight vendor solutions. Join us in Barcelona, Spain on 19-20 May 2019 for our second international conference event! Cephalocon Barcelona is co-located with KubeCon + CloudNativeCon Europe, which takes place over …Read more

  • April 4, 2019
    New in Nautilus: PG merging and autotuning

    Since the beginning, choosing and tuning the PG count in Ceph has been one of the more frustrating parts of managing a Ceph cluster. Guidance for choosing an appropriate PG count for a pool is confusing, inconsistent between sources, and frequently surrounded by caveats and exceptions. And most importantly, if a bad value is chosen, it can’t always be …Read more
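    A minimal sketch of the Nautilus PG autoscaler (the pool name is an example):

    ```shell
    # Enable the PG autoscaler manager module.
    ceph mgr module enable pg_autoscaler

    # Let Ceph tune pg_num automatically for one pool.
    ceph osd pool set example-pool pg_autoscale_mode on

    # Review current vs. recommended PG counts per pool.
    ceph osd pool autoscale-status
    ```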

  • March 19, 2019
    v14.2.0 Nautilus released

    We’re glad to announce v14.2.0, the first release of the Nautilus stable series. There are a lot of changes across components from the previous Ceph release, and we advise everyone to go through the release and upgrade notes carefully. Major Changes from Mimic: Dashboard: The Ceph Dashboard has gained a lot of new functionality: Support for …Read more

  • March 13, 2019
    v13.2.5 Mimic released

    This is the fifth bugfix release of the Mimic v13.2.x long term stable release series. We recommend that all Mimic users upgrade. Notable Changes: This release fixes the PG log hard limit bug that was introduced in 13.2.2. A flag called pglog_hardlimit has been introduced, which is off by default. Enabling this flag will limit …Read more
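    As a sketch, the flag is enabled cluster-wide with a single command; per the release notes it requires that every OSD already runs a release containing the fix:

    ```shell
    # Enable the hard limit on PG log length. Only do this after
    # all OSDs in the cluster have been upgraded to 13.2.5 (or a
    # release carrying the fix).
    ceph osd set pglog_hardlimit
    ```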

  • March 6, 2019
    Deploying a Ceph+NFS Server Cluster with Rook

    With Rook it’s possible to deploy a Ceph cluster on top of Kubernetes (also known as k8s). The Ceph cluster can use storage on each individual k8s cluster node just as it does when it is deployed on regular hosts. Newer versions of Rook and Ceph also support the deployment of a CephFS-to-NFS gateway …Read more
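    As a sketch, a basic Rook-managed Ceph cluster is deployed from the example manifests in the Rook repository (the paths below follow the Rook 1.x layout and may differ between versions):

    ```shell
    # Deploy the Rook operator and a Ceph cluster from the
    # example manifests shipped in the Rook source tree.
    kubectl apply -f cluster/examples/kubernetes/ceph/common.yaml
    kubectl apply -f cluster/examples/kubernetes/ceph/operator.yaml
    kubectl apply -f cluster/examples/kubernetes/ceph/cluster.yaml

    # Verify the Ceph pods come up in the rook-ceph namespace.
    kubectl -n rook-ceph get pods
    ```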