Planet Ceph

Aggregated news from external sources

  • March 25, 2020
    Storage infrastructure for everyone: Lowering the bar to installing Ceph

    The last few years have seen Ceph continue to mature in stability, scalability and performance to become a leading open source storage platform. However, getting started with Ceph has typically required the administrator to first learn automation products like Ansible. While learning Ansible brings its own rewards, wouldn’t it be great if you could simply skip …Read more

  • March 10, 2020
    What's new in Red Hat Ceph Storage 4: A Beast of a front end, default support for BlueStore, and Cockpit installer support

    Today Red Hat announced Red Hat Ceph Storage 4, a major release that brings a number of improvements in scalability, monitoring, management, and security. We have also designed Ceph Storage 4 to be easier to get started with. Let’s tour some of its most interesting features. Read More

  • March 9, 2020
    What’s new between the Mass Open Cloud and Red Hat Ceph Storage?

    We at Red Hat are proud to have the opportunity to work with so many interesting and innovative organizations. One such group is the Mass Open Cloud (MOC), a non-profit initiative that includes universities, government organizations and businesses, and provides reliable and cost-effective storage to support both its public and private clouds …Read more

  • February 13, 2020
    Scaling Ceph to a billion objects and beyond

    This is the sixth post in the Red Hat Ceph object storage performance series. In this post we will take a deep dive into how we scale-tested Ceph with more than one billion objects, and share the performance secrets we discovered in the process. Read More

  • January 31, 2020
    Twenty Thousand Features under the Sea

    The Nautilus technology is the cornerstone of a roaring 2020: Red Hat Ceph Storage 4 brings the Nautilus codebase to our portfolio of marquee-name customers, and lays the foundation for our Ceph storage product portfolio for the rest of the year. 4.0 is integrated with OpenStack Platform 16 from the start, enabling customers to roll out the …Read more

  • January 9, 2020
    brctl: Adding a bridged network interface

    Preface: An earlier post covered configuring a bridged network interface; such a bridge is typically needed when doing virtualization by hand. That approach modifies the NIC’s configuration files and therefore changes the environment’s original configuration, but in many cases I only want a bridge briefly and would rather not break the network. If something does go wrong, remotely rebooting the machine restores everything, so I don’t have to keep hunting for what was misconfigured. Of course, if you can work on the machine over a direct connection, modifying the configuration files is still the recommended approach. Install the necessary package: yum install bridge-utils. Then pick the NIC you want to modify: [root@lab101 ~]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.101 netmask 255.255.255.0 broadcast 192.168.0.255 inet6 fe80::20c:29ff:fe19:3efb prefixlen 64 scopeid 0x20<link> ether 00:0c:29:19:3e:fb txqueuelen 1000 (Ethernet) RX packets 181 bytes 16447 (16.0 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 98 bytes 16871 (16.4 KiB) TX …Read more
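
    For context, here is a minimal sketch of the non-persistent brctl workflow the excerpt describes, assuming (as in the output above) the NIC is ens33 with address 192.168.0.101/24. The bridge name br0 and the gateway 192.168.0.1 are illustrative assumptions, not details taken from the post.

        # create the bridge and enslave the physical NIC (requires bridge-utils)
        brctl addbr br0
        brctl addif br0 ens33
        # move the address from the NIC onto the bridge; over SSH, join these
        # commands with && on one line so the session survives the move
        ip addr flush dev ens33
        ip addr add 192.168.0.101/24 dev br0
        ip link set br0 up
        # restore the default route lost by the flush (gateway address assumed)
        ip route add default via 192.168.0.1

    Because nothing here edits the NIC configuration files, a reboot reverts the whole change, which is exactly the safety net the post is after.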

  • December 9, 2019
    Creating a Management Routing Instance (VRF) on Juniper QFX5100

    For a Ceph cluster I have two Juniper QFX5100 switches running as a Virtual Chassis. This Virtual Chassis is currently only performing L2 forwarding, but I wanted to move this to an L3 setup where the QFX switches use Dynamic Routing (BGP) and thus act as the gateway(s) for the Ceph servers. This works great, but …Read more

  • December 5, 2019
    Comparing Red Hat Ceph Storage 3.3 BlueStore/Beast performance with Red Hat Ceph Storage 2.0 Filestore/Civetweb

    This post is the sequel to the object storage performance testing we did two years back, based on the Red Hat Ceph Storage 2.0 FileStore OSD backend and Civetweb RGW frontend. In this post, we will compare the performance of the latest available (at the time of writing) Ceph Storage, i.e. version 3.3 (BlueStore OSD backend …Read more

  • November 25, 2019
    KubeCon San Diego: Rook Deep Dive

    Date: 21/11/19. Video: my talk starts at 22 minutes in. If the slides don’t render properly in the web viewer, please download them. Source: Sebastian Han (KubeCon San Diego: Rook Deep Dive)

  • November 20, 2019
    Ceph RGW dynamic bucket sharding: performance investigation and guidance

    In part 4 of a series on Ceph performance, we take a look at RGW bucket sharding strategies and performance impacts. Read More

  • October 30, 2019
    Achieving maximum performance from a fixed size Ceph object storage cluster

    We have tested a variety of configurations, object sizes, and client worker counts in order to maximize the throughput of a seven-node Ceph cluster for small and large object workloads. As detailed in the first post, the Ceph cluster was built using a single OSD (Object Storage Device) configured per HDD, having a total …Read more

  • October 22, 2019
    Installing Ceph the Easy-Peasy Way

    with Paul Cuzner (Red Hat). Lowering the bar to installing Ceph: The last few years have seen Ceph continue to mature in stability, scale and performance to become the leading open source storage platform. However, getting started with Ceph has typically involved the administrator first learning automation products like Ansible. While learning Ansible brings …Read more
