Good News, Everyone!
Linux kernel 5.0 is already here, appearing in experimental distributions such as Arch, openSUSE Tumbleweed, and Fedora.
And if you look at the Ubuntu Disco Dingo and Red Hat 8 release candidates, it becomes clear: kernel 5.0 will soon make its way from enthusiasts' desktops to serious servers.
Someone will say: so what? Just another release, nothing special. Linus Torvalds himself said:
I'd like to point out (yet again) that we don't do feature-based releases, and that "5.0" doesn't mean anything more than that the 4.x numbers started getting big enough that I ran out of fingers and toes.
However, even the floppy disk module (for those who don't know: these are the diskettes the size of a shirt's breast pocket, with a capacity of 1.44 MB) got patched...
And here is why:
It's all about the multi-queue block layer (blk-mq). There are plenty of introductory articles about it on the Internet, so let's get straight to the point. The transition to blk-mq started long ago and has been advancing slowly: multi-queue SCSI appeared (the scsi_mod.use_blk_mq kernel parameter), then the new schedulers mq-deadline, bfq, and so on...
[root@fedora-29 sblkdev]# cat /sys/block/sda/queue/scheduler
[mq-deadline] none
By the way, what's yours?
The number of block device drivers that work the old way has been shrinking, and in 5.0 the blk_init_queue() function was removed as unnecessary. So the good old code from lwn.net/Articles/58720 (dated 2003) not only no longer compiles, it has lost its relevance. Moreover, the new distributions being prepared for release this year use the multi-queue block layer in their default configuration. For example, in Manjaro 18 the kernel is version 4.19, yet blk-mq is on by default.
Therefore, we can assume that with 5.0 the transition to blk-mq is complete. For me this is an important event that will require rewriting code and doing additional testing, which in itself promises bugs large and small, as well as a few fallen servers ("It must be done, Fedya, it must!" (c)).
By the way, if someone thinks this turning point doesn't apply to RHEL 8 because its kernel was "frozen" at version 4.18, you are mistaken. In the fresh RHEL 8 RC, new features from 5.0 have already migrated in, and the blk_init_queue() function has been cut out as well (probably while pulling yet another check-in from github.com/torvalds/linux into their sources).
In general, a "frozen" kernel version has long been a marketing concept for Linux distributors such as SUSE and Red Hat. The system reports that the version is, say, 4.4, while in fact it has the functionality of a fresh 4.8 vanilla kernel. Meanwhile the official website proclaims something like: "In the new distribution we have kept a stable 4.4 kernel for you."
But we digress ...
So. To make it clearer how all this works, we need a new simple block device driver.
The source is at github.com/CodeImp/sblkdev. I suggest discussing it, sending pull requests, and opening issues; I will fix things. QA has not checked it yet.
In the rest of the article I will try to explain what everything is for, so there is a lot of code ahead.
I apologize right away that the Linux kernel coding style is not fully respected, and yes, I do not like goto.
So let's start with the entry points.
static int __init sblkdev_init(void)
{
    int ret = SUCCESS;

    _sblkdev_major = register_blkdev(_sblkdev_major, _sblkdev_name);
    if (_sblkdev_major <= 0) {
        printk(KERN_WARNING "sblkdev: unable to get major number\n");
        return -EBUSY;
    }

    ret = sblkdev_add_device();
    if (ret)
        unregister_blkdev(_sblkdev_major, _sblkdev_name);

    return ret;
}

static void __exit sblkdev_exit(void)
{
    sblkdev_remove_device();

    if (_sblkdev_major > 0)
        unregister_blkdev(_sblkdev_major, _sblkdev_name);
}

module_init(sblkdev_init);
module_exit(sblkdev_exit);
Obviously, the sblkdev_init() function runs when the module is loaded, and sblkdev_exit() when it is unloaded.
The register_blkdev() function registers the block device and allocates a major number for it; unregister_blkdev() frees this number.
The key structure of our module is sblkdev_device_t.
It contains all the information about the device that the kernel module needs, in particular: the capacity of the block device, the data itself (it's that simple), and pointers to the disk and the queue.
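The authoritative definition lives in the repository; as a reference point, here is a minimal sketch of what such a structure might look like, given the fields just described (the exact names and types here are my assumptions, not necessarily the original code):

typedef struct sblkdev_device_s {
    sector_t capacity;             /* device capacity in 512-byte sectors */
    u8 *data;                      /* the buffer that emulates the storage */
    atomic_t open_counter;         /* how many times the device was opened */

    struct blk_mq_tag_set tag_set; /* tag set for the multi-queue block layer */
    struct request_queue *queue;   /* the device's request queue */

    struct gendisk *disk;          /* the gendisk representing our block device */
} sblkdev_device_t;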
All initialization of the block device is done in the sblkdev_add_device() function.
static int sblkdev_add_device(void)
{
    int ret = SUCCESS;

    sblkdev_device_t *dev = kzalloc(sizeof(sblkdev_device_t), GFP_KERNEL);
    if (dev == NULL) {
        printk(KERN_WARNING "sblkdev: unable to allocate %ld bytes\n", sizeof(sblkdev_device_t));
        return -ENOMEM;
    }
    _sblkdev_device = dev;

    do {
        ret = sblkdev_allocate_buffer(dev);
        if (ret)
            break;

#if 0
        /* ... (the listing is truncated here) */
We allocate memory for the structure and allocate a buffer for storing the data. Nothing special here.
Next, we initialize the request queue, either with a single function, blk_mq_init_sq_queue(), or with two at once: blk_mq_alloc_tag_set() + blk_mq_init_queue().
By the way, if you look at the source of blk_mq_init_sq_queue(), you will see that it is just a wrapper over blk_mq_alloc_tag_set() and blk_mq_init_queue(); it appeared in kernel 4.20. It also hides many of the queue parameters behind defaults, which makes it look much simpler. Which option is better is up to you, but I prefer the more explicit one.
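For illustration, here is roughly what the two variants might look like (a sketch, not the exact repository code: the queue depth of 128 is an arbitrary assumption, and the break statements assume the do {} while (0) pattern from the listing above):

/* Variant 1: the single-call wrapper (appeared in kernel 4.20). */
dev->queue = blk_mq_init_sq_queue(&dev->tag_set, &_mq_ops, 128,
                                  BLK_MQ_F_SHOULD_MERGE);
if (IS_ERR(dev->queue)) {
    ret = PTR_ERR(dev->queue);
    break;
}
dev->queue->queuedata = dev;

/* Variant 2: the explicit pair, with the tag set parameters spelled out. */
dev->tag_set.ops = &_mq_ops;
dev->tag_set.nr_hw_queues = 1;           /* a single queue is enough for us */
dev->tag_set.queue_depth = 128;
dev->tag_set.numa_node = NUMA_NO_NODE;
dev->tag_set.cmd_size = 0;               /* no per-request driver payload */
dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
dev->tag_set.driver_data = dev;

ret = blk_mq_alloc_tag_set(&dev->tag_set);
if (ret)
    break;

dev->queue = blk_mq_init_queue(&dev->tag_set);
if (IS_ERR(dev->queue)) {
    ret = PTR_ERR(dev->queue);
    break;
}
dev->queue->queuedata = dev;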
The key element of this code is the global variable _mq_ops.
static struct blk_mq_ops _mq_ops = {
    .queue_rq = queue_rq,
};
This is where we hook up the function that actually processes requests, but more about it a little later. The important thing is that we have marked the entry point to the request handler.
Now that we have created a queue, we can create an instance of the disk.
Nothing much has changed here: the disk is allocated, its parameters are set, and the disk is added to the system. One thing worth explaining is the disk->flags parameter. It lets us tell the system that the disk is removable or, for example, that it contains no partitions and there is no need to look for any.
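As a reference, a minimal sketch of how this step might look (the naming follows the rest of the article; the flag choice, minor count, and disk name are illustrative assumptions):

struct gendisk *disk = alloc_disk(1); /* one minor number: no partitions */
if (disk == NULL) {
    ret = -ENOMEM;
    break;
}

disk->flags |= GENHD_FL_NO_PART_SCAN; /* don't scan the disk for partitions */
disk->flags |= GENHD_FL_REMOVABLE;    /* present the disk as removable media */

disk->major = _sblkdev_major;
disk->first_minor = 0;
disk->fops = &_fops;                  /* the block_device_operations shown below */
disk->private_data = dev;
disk->queue = dev->queue;
sprintf(disk->disk_name, "sblkdev0");
set_capacity(disk, dev->capacity);    /* capacity in 512-byte sectors */

dev->disk = disk;
add_disk(disk);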
Disk management goes through the _fops structure.
static const struct block_device_operations _fops = {
    .owner = THIS_MODULE,
    .open = _open,
    .release = _release,
    .ioctl = _ioctl,
#ifdef CONFIG_COMPAT
    .compat_ioctl = _compat_ioctl,
#endif
};
The entry points _open and _release are not very interesting for a simple block device module: apart from atomically incrementing and decrementing a counter, there is nothing there. I also left compat_ioctl unimplemented, since systems with a 64-bit kernel and a 32-bit user-space environment do not look promising to me.
But _ioctl lets you process system requests to this disk. When a disk appears, the system tries to learn more about it. You can answer some requests however you see fit (for example, pretend to be a new CD), but the general rule is this: if you do not want to respond to a request you are not interested in, simply return the error code -ENOTTY. By the way, if necessary, this is also where you can add your own request handlers for this particular disk.
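To make this concrete, here is a hedged sketch of what such a handler might look like. The HDIO_GETGEO branch and the fake geometry values are purely illustrative assumptions, not the repository code (linux/hdreg.h and linux/uaccess.h would be needed for the declarations used here):

static int _ioctl(struct block_device *bdev, fmode_t mode,
                  unsigned int cmd, unsigned long arg)
{
    switch (cmd) {
    case HDIO_GETGEO: {
        struct hd_geometry geo;
        sblkdev_device_t *dev = bdev->bd_disk->private_data;

        /* invent a plausible CHS geometry for the given capacity */
        geo.start = 0;
        geo.heads = 4;
        geo.sectors = 16;
        geo.cylinders = (unsigned short)(dev->capacity / (geo.heads * geo.sectors));

        if (copy_to_user((void __user *)arg, &geo, sizeof(geo)))
            return -EFAULT;
        return 0;
    }
    default:
        return -ENOTTY; /* not interested: let the kernel know */
    }
}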
So, we have added the device; now we need to take care of releasing its resources. This isn't Rust here, you know.
static void sblkdev_remove_device(void)
{
    sblkdev_device_t *dev = _sblkdev_device;
    if (dev) {
        if (dev->disk)
            del_gendisk(dev->disk);

        if (dev->queue) {
            blk_cleanup_queue(dev->queue);
            dev->queue = NULL;
        }

        if (dev->tag_set.tags)
            blk_mq_free_tag_set(&dev->tag_set);

        if (dev->disk) {
            put_disk(dev->disk);
            dev->disk = NULL;
        }

        sblkdev_free_buffer(dev);

        kfree(dev);
        _sblkdev_device = NULL;

        printk(KERN_WARNING "sblkdev: simple block device was removed\n");
    }
}
In principle, everything is obvious: we delete the disk object from the system and release the queue, after which we free our buffers (the data areas).
And now the most important part: processing requests in the queue_rq() function.
static blk_status_t queue_rq(struct blk_mq_hw_ctx *hctx, const struct blk_mq_queue_data *bd)
{
    blk_status_t status = BLK_STS_OK;
    struct request *rq = bd->rq;

    blk_mq_start_request(rq);
    /* ... (the listing is truncated here) */
To begin, let's consider the parameters. The first, struct blk_mq_hw_ctx *hctx, is the state of the hardware queue. In our case we do without a hardware queue, so it is unused.
The second parameter, const struct blk_mq_queue_data *bd, is a very laconic structure, which I am not afraid to present to you in its entirety:
struct blk_mq_queue_data {
    struct request *rq;
    bool last;
};
It turns out that this is, in essence, the same struct request that has been around since times even the chronicler elixir.bootlin.com does not remember. So we take the request and start processing it, which we announce to the kernel by calling blk_mq_start_request(). When the request processing is complete, we notify the kernel by calling blk_mq_end_request().
A small note here: blk_mq_end_request() is, in essence, a wrapper over calls to blk_update_request() + __blk_mq_end_request(). When you use blk_mq_end_request(), you cannot specify how many bytes were actually processed; it assumes that everything was.
The alternative pair has a peculiarity of its own: blk_update_request() is exported only for GPL modules. That is, if you want to create a proprietary kernel module (may fate spare you this thorny path), you cannot use blk_update_request(). So here the choice is yours.
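Schematically, the two ways to complete a request look like this (a sketch; status and nr_bytes are assumed to come from the processing done above):

/* Variant 1: report the whole request as done in one call. */
blk_mq_end_request(rq, status);

/* Variant 2: report progress explicitly (blk_update_request() is a
 * GPL-only export; it returns true while part of the request remains). */
if (blk_update_request(rq, status, nr_bytes)) {
    /* some bytes were not processed; decide how to finish the rest */
}
__blk_mq_end_request(rq, status);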
I moved the actual transfer of bytes from the request to the buffer and back into the do_simple_request() function.
static int do_simple_request(struct request *rq, unsigned int *nr_bytes)
{
    int ret = SUCCESS;
    struct bio_vec bvec;
    struct req_iterator iter;
    sblkdev_device_t *dev = rq->q->queuedata;
    loff_t pos = blk_rq_pos(rq) << SECTOR_SHIFT;
    loff_t dev_size = (loff_t)(dev->capacity << SECTOR_SHIFT);

    printk(KERN_WARNING "sblkdev: request start from sector %ld \n", blk_rq_pos(rq));

    rq_for_each_segment(bvec, rq, iter) {
        unsigned long b_len = bvec.bv_len;
        void *b_buf = page_address(bvec.bv_page) + bvec.bv_offset;

        if ((pos + b_len) > dev_size)
            b_len = (unsigned long)(dev_size - pos);

        if (rq_data_dir(rq))
            /* ... (the listing is truncated here) */
There is nothing new here: rq_for_each_segment iterates over all the bios and, within them, over all the bio_vec structures, letting us get at the pages holding the request data.
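The listing above is cut off at the direction check; the elided part presumably boils down to something like this (a sketch assuming dev->data is the internal buffer and *nr_bytes accumulates the processed length):

if (rq_data_dir(rq)) {
    /* WRITE: copy from the request pages into our buffer */
    memcpy(dev->data + pos, b_buf, b_len);
} else {
    /* READ: copy from our buffer into the request pages */
    memcpy(b_buf, dev->data + pos, b_len);
}

pos += b_len;
*nr_bytes += b_len;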
What are your impressions? It all seems simple, doesn't it? Processing a request boils down to copying data between the request's pages and the internal buffer. Quite worthy of a simple block device driver, right?
But there is a problem: it is not fit for real use!
The essence of the problem is that the request handler queue_rq() is called inside a loop that processes requests from a list. I don't know exactly what kind of locking is used for that list, a spinlock or RCU (I don't want to lie; whoever knows, correct me), but if you try to use, say, a mutex inside the request handler, a debug kernel swears at you and warns: sleeping here is not allowed. That is, you cannot use ordinary synchronization primitives or vmalloc'ed memory (virtually contiguous memory), the kind that can end up in swap with everything that entails, because the process cannot go to sleep.
Therefore, the options are: either only spinlocks or RCU, with the buffer organized as an array of pages, a list, or a tree, as implemented in linux/drivers/block/brd.c; or deferred processing in another thread, as implemented in linux/drivers/block/loop.c.
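For instance, access to a shared buffer from queue_rq() could be guarded like this (a sketch; the dev->lock field is my assumption and is not part of the structure described earlier):

/* queue_rq() runs in atomic context, so only non-sleeping locks are safe. */
spin_lock(&dev->lock);
ret = do_simple_request(rq, &nr_bytes);
spin_unlock(&dev->lock);

/* By contrast, mutex_lock(&dev->mutex) here would make a debug kernel
 * complain: "BUG: sleeping function called from invalid context". */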
I don't think I need to describe how to build the module, load it into the system, and unload it. No innovations on that front, and thanks for that :) So if someone wants to try it out, they will surely figure it out.
Just don't try it on your favorite laptop right away! Spin up a virtual machine, or at least make a backup to a network share.
By the way, Veeam Backup for Linux 3.0.1.1046 is already available. Just don't try to run VAL 3.0.1.1046 on kernel 5.0 or higher: veeamsnap will not build. And some multi-queue innovations are still in testing.