Planet Ceph

Aggregated news from external sources

  • July 24, 2020
    SUSE Enterprise Storage delivers best CephFS benchmark on Arm

    Since its introduction in 2006, Ceph has been used predominantly for volume-intensive use cases where performance is not the most critical aspect. Because the technology has many attractive capabilities, there has been a desire to extend the use of Ceph into areas such as High-Performance Computing. Ceph deployments in HPC environments are usually as an …Read more

  • July 14, 2020
    No installer screen when installing Ubuntu 18 on a large arm64 server

    Preface: I have recently been working with large arm servers and needed a few Ubuntu-related things, and I ran into some problems during OS installation, so I am recording them here. Huawei Kunpeng server: the default CentOS installs all went smoothly, but installing the latest Ubuntu 18 gave a garbled screen over IPMI, and I could not find the cause anywhere. I then noticed the official documentation was written against 18.04.1, tried 18.04.1, and it worked; every other version showed a garbled screen, so just use that release directly. Ampere server: likewise there is basically no related documentation, and what is online mostly suggests disabling various parameters. What actually worked is the following: in the GRUB edit screen, append console=tty0 iommu.passthrough=1 after the ---, then press Ctrl+X to boot. I am not sure whether this also works on the Huawei machine above; the machine was taken away, so I will try it if I get the chance. Source: zphj1987@gmail (arm64大服务器安装ubuntu18看不到安装界面)
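
    For concreteness, a minimal sketch of the GRUB edit the post describes, assuming a stock Ubuntu 18.04 arm64 installer entry; the kernel path and the existing options shown are illustrative and vary by image:

        # Highlight the install entry in the GRUB menu and press 'e' to edit it,
        # then append the two parameters after the '---' on the 'linux' line:
        linux /install/vmlinuz ... quiet --- console=tty0 iommu.passthrough=1
        # Press Ctrl+X to boot with the modified command line.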

  • June 24, 2020
    SUSE Enterprise Storage 7 first public beta!

    SUSE is happy to announce its first public beta for SUSE Enterprise Storage 7, the latest software-defined storage solution built on the Octopus release of the open source Ceph technology. Read More

  • June 17, 2020
    Ceph's PG balancing plugin: balancer

    Preface: Older versions of Ceph used reweight or osd weight to adjust balance; this post introduces the use of Ceph's newer built-in balancer plugin. The official documentation has a fairly detailed operations manual you can consult. Usage: check whether the plugin is enabled:

        [root@node1 ceph]# ceph mgr module ls
        {
            "enabled_modules": [
                "balancer",
                "restful",
                "status"
            ],
            "disabled_modules": [
                "dashboard",
                "influx",
                "localpool",
                "prometheus",
                "selftest",
                "telemetry",
                "zabbix"
            ]
        }

    balancer is enabled by default. Check the balancer's activity:

        [root@node1 ceph]# ceph balancer status
        {
            "last_optimize_duration": "",
            "plans": [],
            "mode": "none",
            "active": false,
            "optimize_result": "",
            "last_optimize_started": ""
        }

    You can see that active is false. There are manual and automatic methods; I generally use the automatic one and then turn it off once the adjustment is done. First set the compatibility mode: ceph balancer mode …Read more
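
    Since the excerpt cuts off mid-command, here is a minimal sketch of the automatic workflow the post is describing, using the standard ceph balancer subcommands; the choice of mode is illustrative (crush-compat works with older clients, while upmap is the usual choice when every client speaks Luminous or later):

        ceph balancer mode crush-compat   # compatibility mode for mixed/older clients
        ceph balancer on                  # let the mgr rebalance PGs automatically
        ceph balancer status              # "active": true while optimization runs
        ceph balancer off                 # turn it back off once PGs are even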

  • June 9, 2020
    Updating the Nautilus cornerstone of Red Hat’s Ceph Storage platform

    Red Hat Ceph Storage 4 brought the upstream Ceph Nautilus codebase to our customers, and laid out the foundation of our Ceph storage product portfolio for the rest of the year. 4.0 is integrated with OpenStack Platform 16 from the start, enabling customers to roll out the latest and greatest across the Red Hat Portfolio. …Read more

  • May 28, 2020
    Building a Ceph-powered Cloud

    Deploying a containerized Red Hat Ceph Storage 4 cluster for Red Hat OpenStack Platform 16, with John Fulton (Red Hat) and Gregory Charot (Red Hat). Ceph is the most popular storage backend for OpenStack by a wide margin, as has been reported by the OpenStack Foundation’s survey every year since its inception. In the …Read more

  • May 28, 2020
    Building a Ceph-powered Cloud: Deploying a containerized Red Hat Ceph Storage 4 cluster for Red Hat OpenStack Platform 16

    Ceph is the most popular storage backend for OpenStack by a wide margin, as has been reported by the OpenStack Foundation’s survey every year since its inception. In the latest survey, conducted during the Summer of 2019, Ceph outclassed other options by an even greater margin than it did in the past, with a 75% …Read more

  • April 28, 2020
    Ceph at Red Hat Summit 2020

    Sage, Uday, and I put our best efforts into making sure that the new virtual venue for the Red Hat Summit would not diminish customer access and visibility into our future plans for Ceph. We delivered an unprecedented 18-month roadmap for the downstream, enterprise-class supported product, showcasing the “secret deck” that is usually reserved for …Read more

  • April 16, 2020
    Ceph Block Performance Monitoring

    Putting noisy neighbors in their place with “RBD Top” and QoS, with Jason Dillaman (Red Hat). Prior to Red Hat Ceph Storage 4, Ceph storage administrators have not had access to built-in RBD performance monitoring and metrics gathering tools. While a storage administrator could monitor high-level cluster or OSD I/O metrics, oftentimes this was too coarse-grained …Read more

  • April 15, 2020
    Ceph Block Performance Monitoring: Putting noisy neighbors in their place with RBD top and QoS

    Prior to Red Hat Ceph Storage 4, Ceph storage administrators have not had access to built-in RBD performance monitoring and metrics gathering tools. While a storage administrator could monitor high-level cluster or OSD I/O metrics, oftentimes this was too coarse-grained to determine the source of noisy neighbor workloads running on top of RBD images. Read More
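
    As a rough illustration of the tooling these two posts refer to, here is a sketch using the rbd perf counters and per-image QoS settings that arrived with the Nautilus-based releases; the pool and image names are hypothetical:

        # "Top"-style view of the busiest RBD images (needs the rbd_support mgr module)
        rbd perf image iotop --pool rbd
        # One-shot, iostat-style per-image listing of IOPS and throughput
        rbd perf image iostat --pool rbd
        # Cap a noisy neighbor at 500 IOPS via librbd QoS (image name is hypothetical)
        rbd config image set rbd/noisy-vm rbd_qos_iops_limit 500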

  • April 10, 2020
    The Power User’s Path to Ceph

    Deploying a containerized Ceph Storage 4 cluster using ceph-ansible, with Guillaume Abrioux (Red Hat) and Paul Cuzner (Red Hat). Introduction: The landscape of modern IT infrastructure is dominated by software-defined networking, public cloud, hybrid cloud and software-defined storage. The shift from legacy hardware-centric architectures to embrace software-defined infrastructure requires a …Read more

  • April 9, 2020
    Deploying a containerized Red Hat Ceph Storage 4 cluster using ceph-ansible

    The landscape of modern IT infrastructure is dominated by software-defined networking, public cloud, hybrid cloud and software-defined storage. The shift from legacy hardware-centric architectures to embrace software-defined infrastructure requires a more mature orchestration “engine” to manage changes across distributed systems. Read More
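
    A minimal sketch of the kind of run both ceph-ansible posts walk through, assuming the stable-4.0 branch that Red Hat Ceph Storage 4 is based on; the inventory path and container image shown are illustrative:

        # group_vars/all.yml (excerpt): request a containerized deployment
        #   containerized_deployment: true
        #   ceph_docker_image: rhceph/rhceph-4-rhel8   # illustrative image name
        cd /usr/share/ceph-ansible
        cp site-container.yml.sample site-container.yml
        ansible-playbook -i /etc/ansible/hosts site-container.yml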
