There is an SSD disk /dev/sda and a RAID array /dev/md3, on which I created an LVM volume group named vg1 and a logical volume named lv1:

# lvcreate -L 100G -n lv1 vg1
# mkfs -t ext4 /dev/vg1/lv1
Flashcache itself is installed from the ELRepo repository:

# rpm -Uvh http://elrepo.reloumirrors.net/elrepo/el6/x86_64/RPMS/elrepo-release-6-4.el6.elrepo.noarch.rpm
# yum -y install kmod-flashcache flashcache-utils
Flashcache supports three caching modes:

writethrough - read and write operations are saved in the cache, and data is written to the disk immediately. Data integrity is guaranteed in this mode.
writearound - the same as the previous mode, except that only reads are saved in the cache.
writeback - the fastest mode: read and write operations are saved in the cache, and the data is flushed to the disk in the background after some time. This mode is less safe from the point of view of data integrity, since there is a risk that data will not be written to disk in case of a sudden server failure or loss of power.

The package includes three utilities: flashcache_create, flashcache_load and flashcache_destroy. The first is used to create a caching device; the other two are needed only in writeback mode, to load an existing caching device and to destroy it, respectively.
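For example, after a server reboot a writeback cache is not created anew but loaded back from the SSD. A minimal sketch, assuming the SSD is /dev/sda and the cache device was originally named cachedev:

# flashcache_load /dev/sda cachedev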
flashcache_create has the following basic parameters:

-p - caching mode. Required. It can take the values thru, around and back to enable the writethrough, writearound and writeback modes, respectively.
-s - cache size. If this parameter is not specified, the entire SSD disk will be used for the cache.
-b - block size. The default is 4 KB, which is optimal for most uses.

Let's create a caching device in writethrough mode:

# flashcache_create -p thru cachedev /dev/sda /dev/vg1/lv1
cachedev cachedev, ssd_devname /dev/sda, disk_devname /dev/vg1/lv1 cache mode WRITE_THROUGH
block_size 8, cache_size 0
Flashcache metadata will use 335MB of your 7842MB main memory
This command creates a caching device named cachedev, operating in writethrough mode on the SSD /dev/sda for the block device /dev/vg1/lv1. A /dev/mapper/cachedev device should appear, and the dmsetup status command should display statistics for various cache operations:

# dmsetup status
vg1-lv1: 0 209715200 linear
cachedev: 0 3463845888 flashcache stats:
	reads(142), writes(0)
	read hits(50), read hit percent(35)
	write hits(0) write hit percent(0)
	replacement(0), write replacement(0)
	write invalidates(0), read invalidates(0)
	pending enqueues(0), pending inval(0)
	no room(0)
	disk reads(92), disk writes(0) ssd reads(50) ssd writes(92)
	uncached reads(0), uncached writes(0), uncached IO requeue(0)
	uncached sequential reads(0), uncached sequential writes(0)
	pid_adds(0), pid_dels(0), pid_drops(0) pid_expiry(0)
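The -s and -b parameters were left at their defaults above. To dedicate only part of the SSD to the cache and set the block size explicitly, the same device could instead have been created like this (the 40g size here is purely illustrative, not a value required by flashcache):

# flashcache_create -p thru -s 40g -b 4k cachedev /dev/sda /dev/vg1/lv1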
Now the device can be mounted:

# mount /dev/mapper/cachedev /lv1/

From this point on, all operations on files in the /lv1/ directory will be cached on the SSD.

Instead of a single logical volume, you can also cache the entire array /dev/md3:

# flashcache_create -p thru cachedev /dev/sda /dev/md3
In this case, so that LVM activates the volume group through the cache device rather than directly on the array, exclude /dev/md3 from scanning by adding a filter to the /etc/lvm/lvm.conf file:

filter = [ "r/md3/" ]

and restrict device scanning to the /dev/mapper directory:

scan = [ "/dev/mapper" ]

After that, activate the volume group:

# vgchange -ay
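For reference, both settings belong to the devices section of lvm.conf, so the edited fragment ends up looking roughly like this (the rest of the section is left at the distribution defaults):

devices {
    scan = [ "/dev/mapper" ]
    filter = [ "r/md3/" ]
}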
For performance testing I used sequential reads with dd in 1 and 4 threads, as well as the iozone benchmark, installed from the RPMforge repository:

# rpm -Uvh http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
# yum -y install iozone

iozone was run on the cached file system:

# cd /lv1/
# iozone -a -i0 -i1 -i2 -s8G -r64k
According to the results, the writeback mode takes the lead on write operations as well.

Sequential read in one thread:

# dd if=/dev/vg1/lv1 of=/dev/null bs=1M count=1024 iflag=direct

Sequential read in 4 threads:

# dd if=/dev/vg1/lv1 of=/dev/null bs=1M count=1024 iflag=direct skip=1024 &
# dd if=/dev/vg1/lv1 of=/dev/null bs=1M count=1024 iflag=direct skip=2048 &
# dd if=/dev/vg1/lv1 of=/dev/null bs=1M count=1024 iflag=direct skip=3072 &
# dd if=/dev/vg1/lv1 of=/dev/null bs=1M count=1024 iflag=direct skip=4096

Between runs, the page cache was dropped:

# echo 3 > /proc/sys/vm/drop_caches
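To make the four-thread run easier to repeat, the same commands can be wrapped in a small script. This is only a sketch; the device path and offsets match the dd commands above, and it must be run as root:

#!/bin/bash
# drop the page cache so the result is not served from RAM
echo 3 > /proc/sys/vm/drop_caches
# start three readers in the background at different offsets
for skip in 1024 2048 3072; do
    dd if=/dev/vg1/lv1 of=/dev/null bs=1M count=1024 iflag=direct skip=$skip &
done
# fourth reader runs in the foreground
dd if=/dev/vg1/lv1 of=/dev/null bs=1M count=1024 iflag=direct skip=4096
# wait for the background dd processes before reading their statistics
wait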
Another test was reading a large number of small files from the /lv1/sites/ directory. The total size of the files was about 20 GB, and their number was about 760 thousand.

# cd /lv1/sites/
# echo 3 > /proc/sys/vm/drop_caches
# time find . -type f -print0 | xargs -0 cat >/dev/null
To disable caching, unmount the file system, deactivate the volume group, and remove the caching device:

# umount /lv1/
# vgchange -an
# dmsetup remove cachedev
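If the cache was created in writeback mode, its metadata and any unflushed data remain on the SSD after the device is removed. To get rid of them completely, the cache can also be destroyed with the third utility mentioned above (all cached data will be lost), for example:

# flashcache_destroy /dev/sda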
Source: https://habr.com/ru/post/151268/