RBD image bigger than your Ceph cluster
Some experiments with gigantic, overprovisioned RBD images.
First, create a large image, let's say 1 PB:
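The original listing is missing here; a sketch of what it likely contained, using an illustrative image name (huge) in the default rbd pool. At the time, rbd create took the size in megabytes, so 1 PB is 1073741824 MB:

```shell
# Create a 1 PB image; --size is in MB, and 1073741824 MB = 2^30 MB = 1 PB.
# Creating it is instant: RADOS objects are only allocated on first write.
rbd create huge --size 1073741824 --pool rbd

# Check the result; note the size and the block_name_prefix
rbd info huge --pool rbd
```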
Problems arise as soon as you attempt to delete the image. Now try to remove it:
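The original listing is missing here too; roughly, it would have shown something like this (same illustrative names as above):

```shell
# Deletion walks every object the image *could* contain, even ones that
# were never written, so on a 1 PB image the progress counter crawls.
time rbd rm huge --pool rbd
# ...interrupt with Ctrl-C unless you are prepared to wait a very long time
```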
Keeping an index of every existing object would be terribly inefficient, since maintaining it would kill performance. The major downside of this design is that when shrinking or deleting an image, RBD must look for all objects above the shrink size.
In Dumpling and later, RBD can do this in parallel, controlled by --rbd-concurrent-management-ops
(an undocumented option), which defaults to 10.
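Like most Ceph configuration options, it can be passed straight on the rbd command line. A sketch, reusing the illustrative image name from above:

```shell
# Issue 20 deletion ops in flight instead of the default 10;
# this speeds up "rbd rm" roughly in proportion, at the cost of more
# load on the OSDs while the removal runs.
rbd rm huge --pool rbd --rbd-concurrent-management-ops 20
```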
You still have another option: if you have never written to the image, you can simply delete the rbd_header
object. You can find it by listing all the objects contained in the image; something like rados -p <your-pool> ls | grep <block_name_prefix>
will do the trick. After this, removing the RBD image will take a second.
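The final listing is also missing; a sketch of the sequence it likely walked through, with illustrative names (huge, pool rbd). Note that the header object's name depends on the image format: format 1 uses <image-name>.rbd, while format 2 uses rbd_header.<id>:

```shell
# Grab the block_name_prefix so we can find the image's objects
rbd info huge --pool rbd | grep block_name_prefix

# List the objects; for a never-written image, only the header exists
rados -p rbd ls | grep huge

# Delete the header object directly (format 1 naming shown here)
rados -p rbd rm huge.rbd

# With the header gone, the removal completes almost instantly
rbd rm huge --pool rbd
```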