Ceph erasure coding overhead in a nutshell
Calculating the storage overhead of a replicated pool in Ceph is easy. You divide the amount of space you have by the “size” (amount of replicas) parameter of your storage pool.
Let’s work with some rough numbers: 64 OSDs of 4TB each.
Raw size: 64 * 4 = 256TB
Size 2: 256 / 2 = 128TB
Size 3: 256 / 3 = 85.33TB
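The arithmetic above can be sketched in a few lines of Python (the function name is illustrative, not part of any Ceph tooling):

```python
def replicated_usable_tb(n_osd, osd_size_tb, size):
    """Usable capacity of a replicated pool: raw space divided by the replica count."""
    return n_osd * osd_size_tb / size

print(replicated_usable_tb(64, 4, 2))           # 128.0
print(round(replicated_usable_tb(64, 4, 3), 2)) # 85.33
```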
Replicated pools are expensive in terms of overhead: Size 2 provides the same resilience and overhead as RAID-1, while Size 3 provides more resilience than RAID-1 at the cost of even more overhead.
Explaining what Erasure coding is about gets complicated quickly.
What’s appealing with erasure coding is that it can provide the same (or better) resiliency than replicated pools but with less storage overhead - at the cost of the computing it requires.
Ceph has had erasure coding support for a good while already, and interesting documentation is available.
The thing with erasure coded pools, though, is that in most cases you'll need a cache tier in front of them to be able to use them.
This makes for a perfect synergy of slower/larger/less expensive drives for your erasure coded pool and faster, more expensive drives in front as your cache tier.
To calculate the overhead of an erasure coded pool, you need to know the ‘k’ and ‘m’ values of your erasure code profile.
When the encoding function is called, it returns chunks of the same size: data chunks, which can be concatenated to reconstruct the original object, and coding chunks, which can be used to rebuild a lost chunk.

k: the number of data chunks, i.e. the number of chunks into which the original object is divided. For instance, if k = 2, a 10KB object will be divided into k chunks of 5KB each.

m: the number of coding chunks, i.e. the number of additional chunks computed by the encoding functions. If there are 2 coding chunks, 2 OSDs can fail without losing data.
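To make the k/m idea concrete, here is a toy sketch of k=2, m=1 using simple XOR parity. This is only an illustration of "k data chunks plus m coding chunks" — Ceph's actual plugins (jerasure, Reed-Solomon and friends) use more general codes, and the function names here are made up:

```python
def encode_k2_m1(data: bytes):
    """Toy k=2, m=1 encoding: split the object into two equal data chunks
    and compute one XOR parity chunk. Not Ceph's real algorithm."""
    half = (len(data) + 1) // 2
    d0 = data[:half]
    d1 = data[half:].ljust(half, b"\0")  # pad so both chunks are the same size
    parity = bytes(a ^ b for a, b in zip(d0, d1))
    return d0, d1, parity

def recover_d1(d0, parity):
    """Rebuild the lost second data chunk from the surviving chunk and the parity."""
    return bytes(a ^ b for a, b in zip(d0, parity))

# A 10-byte object becomes two 5-byte data chunks plus one 5-byte coding chunk;
# losing d1 (one OSD) is recoverable from d0 and the parity.
d0, d1, parity = encode_k2_m1(b"0123456789")
```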
The formula to calculate the usable space (and from it, the overhead) of an erasure coded pool is:
nOSD * k / (k+m) * OSD Size
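The formula translates directly into a small helper (the function name is my own, not Ceph's):

```python
def ec_usable_tb(n_osd, osd_size_tb, k, m):
    """Usable capacity of an erasure coded pool:
    k out of every k+m chunks hold actual data."""
    return n_osd * osd_size_tb * k / (k + m)

print(round(ec_usable_tb(64, 4, 4, 2), 2))  # 170.67
print(ec_usable_tb(64, 4, 2, 2))            # 128.0
```

For instance, with our 64 OSDs of 4TB, a k=4/m=2 profile yields about 170.67TB usable while tolerating two OSD failures; k=2/m=2 also tolerates two failures, like a size-3 replicated pool, but yields 128TB instead of 85.33TB.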
Finally, let’s look at a few different erasure coding profile configurations based on 64 OSDs of 4TB each, ranging from m=1 to m=4 and k=1 to k=10: