While setting up a new cluster on Ceph Luminous, I needed to spread S3 buckets across different types of storage devices (in my case, SSD and HDD). There are many instructions on the Internet for doing this on Ceph Jewel, but in Luminous the process has changed significantly and the old instructions no longer work. This scenario is also not described in the official documentation, and the configuration process is not entirely trivial.
Task
Once again, the task: each node of the cluster has a certain number of HDDs and SSDs installed. When creating an S3 bucket, it should be possible to specify on which type of device (HDD or SSD) it will be stored.
Placing pools on different devices
Let's look at the current replication rules. By default, there should only be a “replicated_rule” entry:
ceph osd crush rule ls
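On a fresh cluster the output is typically just that one line:

replicated_rule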
Thanks to a new feature in Luminous, Ceph can determine the device type itself, so we can easily split the devices between different replication rules:
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd
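To check which class Luminous assigned to each OSD, look at the CLASS column of ceph osd tree. A fragment of the output on a hypothetical node with one HDD and one SSD (IDs and weights are illustrative) might look like this:

ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       1.81798 root default
-3       1.81798     host node1
 0   hdd 0.90899         osd.0      up  1.00000 1.00000
 1   ssd 0.90899         osd.1      up  1.00000 1.00000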
Remove the old default rule:
ceph osd crush rule rm replicated_rule
Now create an additional pool that will store S3 objects, placed on the SSDs:
ceph osd pool create default.rgw.buckets.data.ssd 8 8 replicated replicated_ssd
And place the default data pool on the HDDs:
ceph osd pool set default.rgw.buckets.data crush_rule replicated_hdd
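To double-check that both pools ended up on the intended rules, query their crush_rule property; each command prints the rule currently assigned to the pool:

ceph osd pool get default.rgw.buckets.data crush_rule
ceph osd pool get default.rgw.buckets.data.ssd crush_rule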
Naturally, you can do it the other way around and place the default pool on the SSDs.
Configure Rados Gateway
This is the most interesting part, and the reason this article was written.
A freshly installed cluster comes without a default realm; it is not entirely clear why. Create a realm named “default” and set it as the default:
radosgw-admin realm create --rgw-realm=default --default
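You can verify the result with radosgw-admin realm list: the new realm should be listed, and its id should appear in the default_info field of the output:

radosgw-admin realm list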
Add an additional placement target for SSD buckets to the default zonegroup:
radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id="ssd-placement"
And add the same placement target to the default zone:
radosgw-admin zone placement add --rgw-zone=default --placement-id="ssd-placement" --data-pool="default.rgw.buckets.data.ssd" --index-pool="default.rgw.buckets.index"
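After this, radosgw-admin zone get --rgw-zone=default should show two entries under placement_pools; the new one would look roughly like the following fragment (exact fields may vary between Luminous minor versions):

{
    "key": "ssd-placement",
    "val": {
        "index_pool": "default.rgw.buckets.index",
        "data_pool": "default.rgw.buckets.data.ssd",
        "data_extra_pool": "",
        "index_type": 0
    }
}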
Here the index of all objects (both HDD and SSD buckets) is stored in the single pool “default.rgw.buckets.index”, but you could also create a separate pool for the index.
Tie the zonegroup "default" to the realm "default" and commit the changes:
radosgw-admin zonegroup modify --rgw-zonegroup=default --rgw-realm=default
radosgw-admin period update --commit
The last step is to restart the Rados Gateway:
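On a typical systemd-based deployment this would be something like the following (the instance name rgw.&lt;hostname&gt; depends on how the gateway was deployed, so adjust it to your setup):

systemctl restart ceph-radosgw@rgw.$(hostname -s)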
Now we can create a new bucket on the SSD (note the leading colon; it did not work without it):
s3cmd mb s3://test --bucket-location=:ssd-placement
Or create a bucket with default placement (in our case on the HDD):
s3cmd mb s3://test
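To verify where a bucket actually landed, radosgw-admin bucket stats prints its metadata; for a bucket created with the SSD placement, the placement_rule field should say “ssd-placement”:

radosgw-admin bucket stats --bucket=test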
I hope my little note will save someone time in solving a similar problem.