Planet Ceph

Aggregated news from external sources

  • December 9, 2019
    Creating a Management Routing Instance (VRF) on Juniper QFX5100

    For a Ceph cluster I have two Juniper QFX5100 switches running as a Virtual Chassis. This Virtual Chassis currently performs only L2 forwarding, but I wanted to move to an L3 setup where the QFX switches use dynamic routing (BGP) and thus act as the gateway(s) for the Ceph servers. This works great, but …Read more

  • December 5, 2019
    Comparing Red Hat Ceph Storage 3.3 BlueStore/Beast performance with Red Hat Ceph Storage 2.0 Filestore/Civetweb

    This post is the sequel to the object storage performance testing we did two years ago based on the Red Hat Ceph Storage 2.0 FileStore OSD backend and Civetweb RGW frontend. In this post, we will compare the performance of the latest available (at the time of writing) Ceph Storage, i.e. version 3.3 (BlueStore OSD backend …Read more

  • November 25, 2019
    KubeCon San Diego: Rook Deep Dive

    Date: 21/11/19. Video: my talk starts at 22 minutes in. If the slides don’t render properly in the web viewer, please download them. Source: Sebastian Han (KubeCon San Diego: Rook Deep Dive)

  • November 20, 2019
    Ceph RGW dynamic bucket sharding: performance investigation and guidance

    In part 4 of a series on Ceph performance, we take a look at RGW bucket sharding strategies and performance impacts. Read More
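
    As quick context for the knobs involved, a minimal sketch of how bucket shard counts can be inspected and resharded with radosgw-admin is shown below; the bucket name mybucket and the target of 64 shards are placeholder values, not guidance from the post.

        # List buckets with their current shard count and fill status
        radosgw-admin bucket limit check

        # Manually reshard one bucket to 64 shards; dynamic resharding does the
        # equivalent automatically when rgw_dynamic_resharding is enabled
        radosgw-admin bucket reshard --bucket=mybucket --num-shards=64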

  • October 30, 2019
    Achieving maximum performance from a fixed size Ceph object storage cluster

    We have tested a variety of configurations, object sizes, and client worker counts in order to maximize the throughput of a seven-node Ceph cluster for small and large object workloads. As detailed in the first post, the Ceph cluster was built using a single OSD (Object Storage Device) configured per HDD, having a total …Read more

  • October 22, 2019
    Installing Ceph the Easy-Peasy Way

    with Paul Cuzner (Red Hat). Lowering the bar to installing Ceph: The last few years have seen Ceph continue to mature in stability, scale, and performance to become the leading open source storage platform. However, getting started with Ceph has typically involved the administrator learning automation products like Ansible first. While learning Ansible brings …Read more

  • October 22, 2019
    Red Hat Ceph Storage RGW deployment strategies and sizing guidance

    Starting in Red Hat Ceph Storage 3.0, Red Hat added support for Containerized Storage Daemons (CSD), which allows the software-defined storage components (Ceph MON, OSD, MGR, RGW, etc.) to run within containers. CSD avoids the need for dedicated nodes for storage services, reducing both CAPEX and OPEX by co-locating containerized storage daemons. …Read more

  • October 10, 2019
    Red Hat Ceph object store on Dell EMC servers (Part 1)

    Organizations are increasingly being tasked with managing billions of files and tens to hundreds of petabytes of data. Object storage is well suited to these challenges, both in the public cloud and on-premises. Organizations need to understand how to best configure and deploy software, hardware, and network components to serve a diverse range of data …Read more

  • September 25, 2019
    Red Hat Ceph Storage 3.3 BlueStore compression performance

    With the BlueStore OSD backend, Red Hat Ceph Storage gained a new capability known as “on-the-fly data compression” that helps save disk space. Compression can be enabled or disabled on each Ceph pool created on BlueStore OSDs. In addition, using the Ceph CLI, the compression algorithm and mode can be changed at any time, regardless …Read more
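
    A minimal sketch of the per-pool controls mentioned above, assuming a BlueStore-backed pool named mypool; the algorithm and mode shown are illustrative values, not recommendations from the post.

        # Enable on-the-fly compression for one pool (pool name is a placeholder)
        ceph osd pool set mypool compression_algorithm snappy
        ceph osd pool set mypool compression_mode aggressive

        # Turn compression back off for that pool
        ceph osd pool set mypool compression_mode none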

  • September 19, 2019
    Visualizing ceph osd tree

    Preface: It has been a long time since I last worked with a really large cluster. When you come to a new cluster, and it is big enough, it takes quite a while to understand its structure. You can of course read the output of ceph osd tree directly, but here I turn the osd tree into a structured rendering, essentially a simple structure diagram. That makes it much easier to explain to other people what changes you have made to the CRUSH map; explaining it by pointing at the plain text output tends to leave most listeners lost, so a convenient way to produce a diagram is welcome. I wrote a small tool for my own use, which also lets us see the effect of simple adjustments to the structure. Creating a simulated cluster: the environment is a single machine and no disks are needed; we simulate the structure of a large cluster with 40 hosts:

        seq 1 40 | xargs -i ceph osd crush add-bucket lab{} host
        seq 1 40 | xargs -i ceph osd crush move lab{} root=default

    Create 960 OSDs:

        seq 1 960 | xargs -i ceph osd create

    Place them on the designated hosts:

        #!/bin/sh
        for osd in `seq 0 959`
        do
        host=$(( (($osd / 24)) + 1 ))
        ceph osd crush create-or-move osd.$osd …Read more
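
    The placement loop in the excerpt is cut off by the feed; a sketch of the same idea (24 OSDs per host, host buckets lab1..lab40 as above) could look like the following, where the weight of 1.0 is an assumed placeholder and the loop body is not the author's original script.

        #!/bin/sh
        # Distribute the 960 simulated OSDs across the 40 lab hosts, 24 per host
        for osd in $(seq 0 959); do
            host=$(( (osd / 24) + 1 ))
            ceph osd crush create-or-move osd.$osd 1.0 host=lab$host
        done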

  • September 8, 2019
    Ceph storage benchmarks on high-performance Arm

    About Arm: WDLabs previously announced a Ceph cluster of roughly 500 Arm nodes. That was a microserver design: a dual-core Cortex-A9 ARM processor running at 1.3 GHz with 1 GB of RAM, soldered directly onto the drive’s PCB, with options for 2 GB of RAM and ECC protection. There are similar implementations in China, such as the commercial Arm storage NxCells from Shenzhen Ruichi (瑞驰). That is also a microserver architecture and handles cold-storage scenarios fairly well, but it is not what I want to discuss today. Today’s subject is a fairly new platform built on high-performance Arm, i.e. a large-motherboard design comparable to x86. Most people know Arm as small, low power, and low clock speed; that is the market Arm used to aim at, much like Intel’s Atom. Early on, the company I worked for also tried building 1U Ceph storage on Atom motherboards, but for various reasons it went no further. In fact Arm is also pushing into high-performance territory, but this is quite new and not everyone has access to such hardware. Here I publish part of the test data from our machines; it may change your impression of Arm and give you one more option the next time you select hardware. High-performance Arm system description:

        System Information
        PROCESSOR: Ampere eMAG ARMv8 @ 3.00GHz, Core Count: 32, Scaling Driver: cppc_cpufreq conservative
        GRAPHICS: ASPEED, Screen: 1024×768
        MOTHERBOARD: MiTAC RAPTOR, BIOS Version: 0.11, Chipset: Ampere Computing LLC Skylark
        Network: 2 x Intel 82599ES 10-Gigabit SFI/SFP+ + Intel I210
        MEMORY: 2 x 32 GB …Read more

  • September 3, 2019
    Auto-starting BlueStore OSDs

    Preface: There are many articles about OSD auto-start, some with very detailed analysis, so I will not repeat them here; this post covers when you need it and how to do it. Use cases: a machine’s system disk fails and the OS has to be reinstalled, so the related metadata on the system disk is gone, but the data disks are still intact and must be kept; or a disk has to be moved to another machine and started there, but that machine holds none of the related information. Procedure: handling the auto-start. First scan LVM:

        vgscan
        pvscan
        lvscan

    The scenario in this post assumes LVM itself is undamaged; if LVM is broken, that becomes an LVM recovery problem. The starting point here is an intact OSD data disk, i.e. the disk itself is fine. Query the OSD-related disk information:

        lvdisplay | grep "LV Path" | grep ceph
        LV Path /dev/ceph-b748833c-b646-4b1c-a2ef-f50576b0a165/osd-block-38657557-5ce3-43a1-861a-e690c880ddf6
        LV Path /dev/ceph-aa2304f1-a098-4990-8f3a-46f176d4cece/osd-block-f8a30c38-48fd-465c-9982-14cd22d00d21
        LV Path /dev/ceph-8b987af1-f10a-4c9a-a096-352e63c7ef83/osd-block-07d1c423-8777-4eea-8a1d-34dc06f840ae
        LV Path /dev/ceph-f39ac1da-2811-4486-8690-4ccfb1e45e18/osd-block-0cb9186e-6512-4582-a30d-9fb4cf03c964
        LV Path /dev/ceph-6167d452-a121-4602-836a-ab378cf6eccc/osd-block-2e77e3b5-9d5c-4d5f-bf18-c33ddf0bbc0a

    Note the identifier after osd-block; it also appears in the output of osd dump. Let’s look up osd-block-38657557-5ce3-43a1-861a-e690c880ddf6:

        [root@node1 ~]# ceph osd dump | grep 38657557-5ce3-43a1-861a-e690c880ddf6
        osd.31 down in weight 1 up_from 395 up_thru 395 down_at 399 last_clean_interval [391,392) 66.66.66.60:6830/10392 66.66.66.60:6847/10392 66.66.66.60:6875/10392 66.66.66.60:6882/10392 …Read more
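
    If the OSD logical volumes are intact as described, one common way to bring the OSDs back up on the freshly installed host is to let ceph-volume rediscover and activate them; this is a general sketch (the cluster’s ceph.conf and keyrings must already be back on the node) and not necessarily the exact procedure the post goes on to describe.

        # Show the OSD metadata ceph-volume can recover from the intact LVM volumes
        ceph-volume lvm list

        # Recreate tmpfs mounts and systemd units, then start all local OSDs
        ceph-volume lvm activate --all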
