Planet Ceph

Aggregated news from external sources

  • September 3, 2019
Auto-starting bluestore OSDs

    Preface: There are already plenty of detailed articles on OSD auto-start, so this post does not repeat that analysis; it covers when to use it and how. Use cases: a machine's system disk has failed and the OS must be reinstalled, so the local metadata is gone, but the data disks are intact and must be preserved; or a disk needs to be moved to and started on another machine that has none of the related metadata. Procedure: first scan LVM:
    vgscan; pvscan; lvscan
    This post assumes LVM itself is undamaged; if LVM is corrupted, that is a separate LVM recovery problem. The premise here is a complete OSD data disk, i.e. the disk itself is fine. Query the OSD-related disk info:
    lvdisplay | grep "LV Path" | grep ceph
    LV Path /dev/ceph-b748833c-b646-4b1c-a2ef-f50576b0a165/osd-block-38657557-5ce3-43a1-861a-e690c880ddf6
    LV Path /dev/ceph-aa2304f1-a098-4990-8f3a-46f176d4cece/osd-block-f8a30c38-48fd-465c-9982-14cd22d00d21
    LV Path /dev/ceph-8b987af1-f10a-4c9a-a096-352e63c7ef83/osd-block-07d1c423-8777-4eea-8a1d-34dc06f840ae
    LV Path /dev/ceph-f39ac1da-2811-4486-8690-4ccfb1e45e18/osd-block-0cb9186e-6512-4582-a30d-9fb4cf03c964
    LV Path /dev/ceph-6167d452-a121-4602-836a-ab378cf6eccc/osd-block-2e77e3b5-9d5c-4d5f-bf18-c33ddf0bbc0a
    Note the field after osd-block: it is recorded in the output of ceph osd dump. Let's look up osd-block-38657557-5ce3-43a1-861a-e690c880ddf6:
    [root@node1 ~]# ceph osd dump | grep 38657557-5ce3-43a1-861a-e690c880ddf6
    osd.31 down in weight 1 up_from 395 up_thru 395 down_at 399 last_clean_interval [391,392) 66.66.66.60:6830/10392 66.66.66.60:6847/10392 66.66.66.60:6875/10392 66.66.66.60:6882/10392 …Read more
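    The LV-to-OSD mapping above can be reproduced mechanically: the OSD fsid is simply the suffix of the LV path after "osd-block-". A minimal shell sketch, using one of the sample LV paths from the excerpt (on a real rebuilt node you would then run `ceph-volume lvm activate --all` to recreate the tmpfs mounts and systemd units):

```shell
# Sample LV path as printed by: lvdisplay | grep "LV Path" | grep ceph
lv_path="/dev/ceph-b748833c-b646-4b1c-a2ef-f50576b0a165/osd-block-38657557-5ce3-43a1-861a-e690c880ddf6"

# The OSD fsid is everything after "osd-block-"; this is the id to grep
# for in `ceph osd dump` to find which osd number the disk belongs to.
osd_fsid="${lv_path##*osd-block-}"
echo "$osd_fsid"

# On the rebuilt node (not run here), re-activate all discovered OSDs:
#   ceph-volume lvm activate --all
```

    The same extraction works in a loop over every ceph LV on the node.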

  • September 3, 2019
    Limiting OSD memory usage in Ceph Luminous

    Introduction: Since Ceph reached the L release, performance has improved greatly, but one thing I have always been wary of is memory usage; early on, heavy dd workloads could bring memory down, presumably because the internal design needs more memory. Recently I have been testing an ARM server, a 36-bay machine, which naturally needs a lot of memory; with the old rule of thumb of reserving 4G per OSD, 36 bays would need 144G, and that is before counting the memory consumed during recovery. In many cases we do not need speed, only stability. Test environment: fairly simple, one 36-bay ARM machine and one x86 machine connected over 10GbE; the cluster is set to one replica, and tests are run from the x86 machine with the rados command. Before/after comparison: first a baseline run with the defaults, using a read test:
    rados -p rbd -t 64 bench 300 seq --run-name 4Mt16 ···
    2019-09-03 15:19:20.478841 min lat: 0.188154 max lat: 0.658198 avg lat: 0.227437
    sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
    20 63 5620 5557 1111.24 1124 0.223682 0.227437
    21 63 5901 5838 1111.84 1124 …Read more
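    The excerpt cuts off before the actual tuning knob. For reference, on later Luminous point releases (12.2.9 and up) the per-OSD memory cap for bluestore is `osd_memory_target`, while earlier Luminous builds size the cache directly via `bluestore_cache_size`. A hedged ceph.conf sketch; the 2 GiB value is an illustrative choice for a dense 36-bay box, not a figure from the post:

```ini
[osd]
# Cap each bluestore OSD's memory (Luminous 12.2.9+); value in bytes.
osd_memory_target = 2147483648
# On earlier Luminous builds, shrink the bluestore cache instead:
# bluestore_cache_size = 1073741824
```

    With 36 OSDs, a 2 GiB target budgets roughly 72G for OSD daemons, half the 144G that the 4G-per-OSD rule of thumb would require.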

  • August 21, 2019
    Refreshingly Luminous

    After an almost seven month team effort focusing on our next-generation Rook and Ceph Nautilus-based storage products, we have taken a little bit of time to refresh the releases currently in production. We are pleased to announce the availability of Red Hat Ceph Storage 3.3, our sixteenth RHCS release. Red Hat Ceph Storage 3.3 delivers …Read more

  • August 5, 2019
    Ceph and China's home-grown storage

    Preface: "Domestic production" (国产化) is a term born of China's particular environment. You might think it exists to protect our own industry, or that it is just a banner some companies fly; earlier home-grown operating systems left a poor reputation, to the point that some people in our own industry believe it amounts to slapping a new name on something with no real technology behind it. That is not entirely the case: the industry also has many people quietly doing serious research, for example Dr. Wang Li, who has often shared Ceph-related technology. Source: zphj1987@gmail (ceph的国产化存储)

  • August 1, 2019
    Red Hat OpenStack Platform with Red Hat Ceph Storage: MySQL Database Performance on Ceph RBD

    In Part 1 of this series, we detailed the hardware and software architecture of our testing lab, as well as benchmarking methodology and Ceph cluster baseline performance. Read More

  • July 25, 2019
    Mounting bluestore objects on the filesystem for extraction

    Preface: With filestore, PGs were exposed directly in the filesystem, so you could browse or copy them in place; in extreme cases, when multiple OSDs will not start and a PG cannot be exported, operating on the PG's objects directly can be a last resort for recovering data. With bluestore this is no longer so direct. I once hit a case where an OSD would not start, was stuck in an internal loop, the PG could not be exported, and the OSD just hung there. In fact, bluestore also provides an interface to expose objects directly. Hands-on: pick PG 1.7:
    [root@lab101 ceph]# ceph pg dump | grep 1.7
    dumped all
    1.7 128 0 0 0 0 524353536 1583 1583 active+clean 2019-07-26 10:05:17.715749 14'3583 14:3670 [1] 1 [1] 1 0'0 2019-07-26 10:01:20.337218 0'0 2019-07-26 10:01:20.337218
    We can see PG 1.7 has 128 objects stored on osd.1. Check the mount point:
    [root@lab101 ceph]# df -h | grep ceph-1
    tmpfs 16G 48K 16G 1% /var/lib/ceph/osd/ceph-1
    It is mounted on tmpfs. First stop osd.1, then mount the OSD's data:
    [root@lab101 ceph]# ceph-objectstore-tool --op …Read more
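    The excerpt truncates the command. One known way to expose a stopped bluestore OSD's objects as a browsable tree is ceph-objectstore-tool's fuse op; this sketch only assembles the command line, since running it requires a real, stopped OSD (the mount point path is an illustrative choice, not from the post):

```shell
# Assemble (but do not run) the FUSE mount command for a stopped OSD.
# The data path follows the post's environment (osd.1); adjust to your node.
osd_path="/var/lib/ceph/osd/ceph-1"
mnt="/mnt/osd1-fuse"
cmd="ceph-objectstore-tool --op fuse --data-path $osd_path --mountpoint $mnt"
echo "$cmd"
# Once mounted, PG 1.7's objects appear as files under $mnt and can be
# copied out; unmount afterwards with: fusermount -u $mnt
```

    This is the bluestore counterpart of simply cd-ing into a filestore PG directory.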

  • July 22, 2019
    Red Hat OpenStack Platform with Red Hat Ceph Storage: Radosbench baseline performance evaluation

    Red Hat Ceph Storage is a popular storage choice for Red Hat OpenStack Platform. Customers around the world run their hyperscale, production workloads on Red Hat Ceph Storage and Red Hat OpenStack Platform. This is driven by the high level of integration between Ceph storage and OpenStack private cloud platforms. With each release of both platforms, the …Read more

  • July 12, 2019
    Peccary Book Part Deux!

    Amazon Web Services Guide de l’administrateur système — sounds familiar? It should! AWS System Administration, better known as the Peccary Book is now available in French. Our thanks to monsieur Olivier Engler for his outstanding translation work, featuring both detailed feedback and a timely delivery. Source: Federico Lucifredi (Peccary Book Part Deux!)

  • May 24, 2019
    KubeCon Barcelona: Rook, Ceph, and ARM: A Caffeinated Tutorial

    Date: 22/05/19 Video: Source: Sebastian Han (KubeCon Barcelona: Rook, Ceph, and ARM: A Caffeinated Tutorial)

  • May 17, 2019
    Rook just landed on operatorhub.io

    I’m excited to announce that, just in time for Cephalocon and KubeCon Europe, Rook has landed on operatorhub.io. It was quite a challenge to get it accepted, but in the end my pull request got merged :). If you want to know what this means for upstream you should look at this article. Source: Sebastian …Read more

  • May 9, 2019
    Hey! What’s up?!

    It has been a long time since I’ve given updates or even blogged. Let’s take some time here (while on a plane) to update you on what I’m doing these days. Moving away from ceph-ansible/container In 2014, I launched ceph-ansible, a set of playbooks to deploy, manage and upgrade Ceph with the …Read more

  • April 29, 2019
    Open Infrastructure Summit Denver: Rook 101

    Date: 30/04/19 Video: Source: Sebastian Han (Open Infrastructure Summit Denver: Rook 101)
