We are excited that Rook has reached a huge milestone… v1.0 has been released! Congrats to the Rook community for all the hard work to reach this critical milestone. This is another great release with many improvements for Ceph that solidify its use in production with Kubernetes clusters. Of all the many features and bug …
Rook is an orchestrator for storage services that run in a Kubernetes cluster. In the Rook v0.8 release, we are excited to say that the orchestration around Ceph has stabilized to the point of being declared Beta. If you haven’t yet started a Ceph cluster with Rook, now is the time to take it for a …
My previous post showed you how to get deduplication working on Linux with VDO. In some ways, that’s the post that could cause trouble – if you start using VDO across a number of hosts, how can you easily establish monitoring or even alerting? So that’s the problem we’re going to focus on in this …
Whether you’re using proprietary storage arrays or software defined storage, the actual cost of capacity can sometimes provoke responses like, “why do you need all that space?” or “OK, but that’s all the storage you’re going to get, so make it last”. The problem is that storage is a commodity resource, it’s like toner …
Hey Cephers! We are starting this new section on the Ceph website to share project highlights in a monthly newsletter. We hope you enjoy it! Project updates The SUSE OpenAttic team is porting their management dashboard upstream into ceph-mgr, where it will replace the current ‘dashboard’ module and be expanded to include greater management …
There is no doubt that Ansible is a pretty cool automation engine for provisioning and configuration management. ceph-ansible builds on this versatility to deliver what is probably the most flexible Ceph deployment tool out there. However, some of you may not want to get to grips with Ansible before you install Ceph… weird, right? No, not really. …
The Ceph manager service (ceph-mgr) was introduced in the Kraken release, and in Luminous it has been extended with a number of new Python modules. One of these is a module exporting overall cluster status and performance to Zabbix. Enabling the Zabbix module The Zabbix module is included in the ceph-mgr package, so if you’ve …
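As a quick sketch of what enabling the module typically looks like (the Zabbix server hostname below is a placeholder; the `zabbix` module name and these `ceph` subcommands come from the Luminous ceph-mgr module):

```shell
# Enable the Zabbix module in ceph-mgr (Luminous and later).
ceph mgr module enable zabbix

# Point the module at your Zabbix server (zabbix.example.com is a placeholder).
ceph zabbix config-set zabbix_host zabbix.example.com

# Review the configuration, then trigger a send manually to test it.
ceph zabbix config-show
ceph zabbix send
```

These commands require a running cluster with the mgr daemon active; the module then pushes data to Zabbix via zabbix_sender on a fixed interval.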
Recently I’ve been working on converging glusterfs with oVirt – hyperconverged, open source style. oVirt has supported glusterfs storage domains for a while, but in the past a virtual disk was stored as a single file on a gluster volume. This helps some workloads, but file distribution and functions like self heal and rebalance have …
These days hyperconverged strategies are everywhere. But when you think about it, sharing the finite resources within a physical host requires an effective means of prioritisation and enforcement. Luckily, the Linux kernel already provides an infrastructure for this in the shape of cgroups, and the interface to these controls is now simplified with systemd integration. …
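As an illustration of that systemd interface, a drop-in unit file can cap a service’s CPU and memory through cgroups (the unit name and the limit values here are purely illustrative):

```ini
# /etc/systemd/system/ceph-osd@.service.d/resources.conf (illustrative drop-in)
[Service]
# Allow at most two CPUs' worth of time for this service
CPUQuota=200%
# Hard memory cap, enforced by the kernel via cgroups
MemoryLimit=4G
```

After adding the drop-in, reload and restart the service (`systemctl daemon-reload`, then `systemctl restart ceph-osd@0` for an instance) for the limits to take effect.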
The schedule for the next OpenStack Summit in Tokyo this year was announced a few days ago, and one of my submissions was accepted. The presentation “99.999% available OpenStack Cloud – A builder’s guide” is scheduled for Thursday, October 29, 09:50 – 10:30.
Other presentations from the Ceph community have also been accepted:
Check out the links or the schedule for the dates and times of the talks.
The Ceph integration tests run via teuthology rely on workunits found in the Ceph repository. For instance: when the /cephtool/test.sh workunit is modified, it is pushed to a wip- branch in the official Ceph git repository, and the gitbuilder will automatically build packages …
Ceph integration tests are vital and expensive. Unlike unit tests, which can be run on a laptop, they require multiple machines to deploy an actual Ceph cluster. As the community of Ceph developers expands, the community lab needs to …