The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • July 24, 2019
    New in Nautilus: Orchestrator

    Ceph Nautilus introduces a new orchestrator interface that provides the ability to control external deployment tools like ceph-ansible, DeepSea, or Rook. The vision is to provide a bridge between administrators, Ceph, and external deployment systems. To accomplish that, the orchestrator interface enables the Ceph dashboard or the ceph command-line tool to access data provided by different deployment tools …Read more
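
As a hedged illustration of how the bridge is driven (commands as documented for the Nautilus orchestrator CLI; choosing Rook as the backend is an assumption about your deployment), the interface is exercised entirely through the ceph tool:

```shell
# Enable the orchestrator CLI mgr module and the backend module,
# then point the orchestrator at that backend (rook, ansible, and
# deepsea are the pluggable backends in Nautilus).
ceph mgr module enable orchestrator_cli
ceph mgr module enable rook
ceph orchestrator set backend rook

# Confirm the backend is reachable, then query it for inventory.
ceph orchestrator status
ceph orchestrator device ls
```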

  • July 22, 2019
    v14.2.2 Nautilus released

    This is the second bug fix release of the Ceph Nautilus release series. We recommend that all Nautilus users upgrade to this release. When upgrading from older releases of Ceph, the general guidelines for upgrading to Nautilus must be followed (see Upgrading from Mimic or Luminous). Notable Changes: The no{up,down,in,out} related commands have been revamped. There are now 2 …Read more
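
To illustrate the revamp of those flags (a sketch based on the v14.2.x release notes; osd.0 and osd.1 are placeholder targets), flags can now be applied cluster-wide as before or to a specific subset of OSDs:

```shell
# Old behavior, unchanged: set a flag cluster-wide.
ceph osd set noout

# New in Nautilus: set flags for specific OSDs, CRUSH nodes, or
# device classes in one batch with set-group / unset-group.
ceph osd set-group noup,noout osd.0 osd.1
ceph osd unset-group noup,noout osd.0 osd.1
```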

  • July 11, 2019
    Part 4: RHCS 3.2 BlueStore Advanced Performance Investigation

    Introduction Recap: In Blog Episode-3 we covered RHCS cluster scale-out performance and observed that adding 60% more hardware resources yielded 95% higher IOPS, demonstrating the scale-out nature of a Red Hat Ceph Storage cluster. This is the fourth episode of the performance blog series on RHCS 3.2 BlueStore running …Read more
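
As a back-of-the-envelope sanity check of the numbers quoted in that recap (a hypothetical sketch, not code from the blog series), 60% more hardware yielding 95% more IOPS implies super-linear scaling:

```python
def scaling_efficiency(resource_growth: float, iops_growth: float) -> float:
    """Observed throughput gain relative to the resources added.

    Values above 1.0 indicate super-linear scale-out for the workload.
    """
    return (1 + iops_growth) / (1 + resource_growth)

# +60% hardware, +95% IOPS — the figures quoted in the recap above.
eff = scaling_efficiency(0.60, 0.95)
print(f"scaling efficiency: {eff:.2f}x")  # ~1.22x
```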

  • July 1, 2019
    New Ceph Foundation associate member: SWITCH

    Eight months ago the Ceph Foundation was announced with 31 founding organization members, creating an open, collaborative, and neutral home for project stakeholders to coordinate their development and community investments in the Ceph ecosystem. Since then, three more members have joined: two as General members and one as an Associate member. Today we’re excited …Read more

  • June 18, 2019
    New in Nautilus: RBD Performance Monitoring

    Prior to Nautilus, Ceph storage administrators had no access to built-in RBD performance monitoring and metrics-gathering tools. While a storage administrator could monitor high-level cluster or OSD I/O metrics, these were often too coarse-grained to determine the source of noisy-neighbor workloads running on top of RBD images. The best available workaround, …Read more
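
For context, the new tooling surfaces as rbd perf image iostat/iotop, backed by a mgr module; a minimal sketch (the pool name rbd here is a placeholder):

```shell
# Enable the mgr module that aggregates per-image RBD metrics.
ceph mgr module enable rbd_support

# Top-like view of the busiest images, and an iostat-style report
# of per-image IOPS, throughput, and latency for one pool.
rbd perf image iotop
rbd perf image iostat rbd
```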

  • June 5, 2019
    Ceph Community Newsletter, April 2019 edition

    Announcements: Nautilus shirts are now available! On March 19 we announced the new release of Ceph Nautilus; take a look at our blog post that captures the major features and upgrade notes, and watch the talk from Sage Weil, co-creator and project leader, on the state of the cephalopod. We’re now pleased to announce the availability of …Read more

  • June 4, 2019
    v13.2.6 Mimic released

    This is the sixth bugfix release of the Mimic v13.2.x long-term stable release series. We recommend all Mimic users upgrade. Notable Changes: Ceph v13.2.6 now packages Python bindings for python3.6 instead of python3.4, because EPEL7 recently switched from python3.4 to python3.6 as its native python3. See the announcement for more details on the background …Read more

  • May 18, 2019
    New in Nautilus: CephFS Improvements

    Work continues to improve the CephFS file system in Nautilus. As with the rest of Ceph, we have been dedicating significant developer time to improving usability and stability. The following sections go through each of these efforts in detail. MDS Stability: MDS stability has been a major focus for developers in the past two releases. …Read more

  • May 14, 2019
    New in Nautilus: New Dashboard Functionality

    The Ceph Dashboard shipped with Ceph Mimic was the first step in replacing the original read-only dashboard with a more flexible and extensible architecture and adding management functionality derived from the openATTIC project. One goal for the team working on the dashboard for Ceph Nautilus was to reach feature parity with openATTIC, and we’re quite …Read more

  • May 10, 2019
    New in Nautilus: RADOS Highlights

    Nautilus comes with a bunch of new features and improvements for RADOS. BlueStore: To begin with, BlueStore is even more awesome now! If you were ever wondering how BlueStore uses space on your devices, wonder no more: with Nautilus, BlueStore space utilization reporting is much more granular and accurate, with separate accounting of space …Read more
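
One place the finer-grained accounting shows up is ceph osd df, which in Nautilus breaks each OSD's raw usage down into object data, omap, and BlueStore internal metadata (a sketch; exact column layout may vary by minor release):

```shell
# Per-OSD space utilization, with separate DATA / OMAP / META
# accounting; the tree variant groups OSDs by CRUSH hierarchy.
ceph osd df
ceph osd df tree
```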

  • May 9, 2019
    Part 3: RHCS BlueStore Performance Scalability (3 vs 5 nodes)

    Introduction: Welcome to episode 3 of the performance blog series. In this blog, we will explain the performance increase we get when scaling out the Ceph OSD node count of an RHCS cluster. A traditional scale-up storage architecture is built around two controllers connected to disk shelves. When the controllers reach 100% utilization, they create a …Read more

  • May 6, 2019
    Part 2: Ceph Block Storage Performance on All-Flash Cluster with BlueStore Backend

    Introduction Recap: In Blog Episode-1 we covered the RHCS and BlueStore introduction, lab hardware details, benchmarking methodology, and a performance comparison between the default Ceph configuration and a tuned Ceph configuration. This is the second episode of the performance blog series on RHCS 3.2 BlueStore running on the all-flash cluster. There is no rule of thumb to categorize block …Read more