Description

A comparison of hardware RAID on an LSI MegaRAID controller against a software ZFS mirror on the same four SAS drives, benchmarked with fio (option reference: https://fio.readthedocs.io/en/latest/fio_doc.html).

Or just jump straight to the results.

Hardware Raid

3.10.0-862.14.4.el7.x86_64

CentOS Linux release 7.9.2009 (Core)

1x LSI MegaRAID SAS 9260-4i 512MB

02:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS 2108 [Liberator] [1000:0079] (rev 05)

4x HP MB2000FBUCL SAS 2TB 7.2K 6G drives

HW Raid:

https://www.broadcom.com/support/knowledgebase/1211161503234/how-to-create-a-raid-10-50-or-60

No virtual drives configured yet:

/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -a0
                                     

Adapter 0 -- Virtual Drive Information:
Adapter 0: No Virtual Drive Configured.

Build Raid 5

/opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r5[252:0,252:1,252:2,252:3] -a0
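
The [252:0,...] arguments are enclosure:slot pairs. If you need to look them up for your chassis, the physical drive list shows them:

/opt/MegaRAID/MegaCli/MegaCli64 -PDList -a0 | grep -E 'Enclosure Device ID|Slot Number'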

Build Raid 10

[root@lab /]# /opt/MegaRAID/MegaCli/MegaCli64 -CfgSpanAdd -R10 -Array0[252:0,252:1] -Array1[252:2,252:3] -a0
                                     
Adapter 0: Created VD 0

Adapter 0: Configured the Adapter!!

Exit Code: 0x00
[root@lab /]# /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -a0
                                     

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
Size                : 3.637 TB
State               : Optimal
Stripe Size         : 256 KB
Number Of Drives per span:2
Span Depth          : 2
Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAhead, Direct, No Write Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None
Bad Blocks Exist: No



Exit Code: 0x00

Enable drive write cache

[root@lab /]# /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp -EnDskCache -L0 -a0
                                     
Set Disk Cache Policy to Enabled on Adapter 0, VD 0 (target id: 0) success

Exit Code: 0x00
[root@lab /]# /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -a0
                                     

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
Size                : 3.637 TB
State               : Optimal
Stripe Size         : 256 KB
Number Of Drives per span:2
Span Depth          : 2
Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAhead, Direct, No Write Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Enabled
Encryption Type     : None
Bad Blocks Exist: No



Exit Code: 0x00
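
Note the controller cache itself is still WriteThrough: the default policy is "No Write Cache if Bad BBU" and this card's battery is evidently failed or missing. If you accept the risk of losing in-flight writes on power loss, write-back can be forced with something like:

/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp WB -L0 -a0
/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp CachedBadBBU -L0 -a0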

partition

[root@lab /]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mkpart                                                           
Partition name?  []? data                                                 
File system type?  [ext2]? ext4                                           
Start? 1049kB                                                             
End? 100%                                                                 
(parted) p                                                                
Model: LSI MR9260-4i (scsi)
Disk /dev/sdb: 4000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  4000GB  4000GB               data

(parted) quit                                                             
Information: You may need to update /etc/fstab.
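
The same partitioning can be scripted non-interactively; on a blank disk this would also write the GPT label first:

parted -s /dev/sdb mklabel gpt mkpart data ext4 1MiB 100%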

format

[root@lab /]# mkfs.ext4 /dev/sdb1 
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
244121600 inodes, 976485888 blocks
48824294 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=3124756480
29800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
	102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done       
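
Note mkfs reports Stride=0, Stripe width=0: it cannot see the RAID geometry behind the controller. For this RAID 10 layout (256KB stripe, 2 data spans, 4KB blocks) the geometry could be supplied by hand, e.g.:

mkfs.ext4 -E stride=64,stripe-width=128 /dev/sdb1

(stride = 256KB stripe / 4KB block = 64 blocks; stripe-width = stride x 2 data drives = 128.)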

mount

[root@lab /]# mount /dev/sdb1 /data
[root@lab /]# df -h /data
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       3.6T   89M  3.4T   1% /data
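
ext4 reserves 5% of the blocks for root by default, which is why df reports 3.4T available on a 3.6T filesystem. On a pure data volume that reserve can be reclaimed:

tune2fs -m 0 /dev/sdb1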

XFS format and mount:

[root@lab /]# mkfs.xfs /dev/sdb1
mkfs.xfs: /dev/sdb1 appears to contain an existing filesystem (ext4).
mkfs.xfs: Use the -f option to force overwrite.
[root@lab /]# mkfs.xfs -f /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=244121472 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=976485888, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=476799, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@lab /]# mount /dev/sdb1 /data
[root@lab /]# df -h /data
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       3.7T   33M  3.7T   1% /data
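
mkfs.xfs likewise detected no stripe geometry (sunit=0 swidth=0 blks); it could be given explicitly for this RAID 10 layout, e.g.:

mkfs.xfs -f -d su=256k,sw=2 /dev/sdb1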

Software Raid (ZFS mirror)

To use O_DIRECT you need to build and install the latest ZFS (2.1.99 at the time of writing): https://openzfs.github.io/openzfs-docs/Developer%20Resources/Custom%20Packages.html#red-hat-centos-and-fedora
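
The build itself boils down to roughly the following, a sketch of that page's kmod route (dependency packages omitted; see the linked page for the full list):

git clone https://github.com/openzfs/zfs
cd zfs
sh autogen.sh
./configure
make -j1 rpm-utils rpm-kmod
yum localinstall *.$(uname -p).rpm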

For a fair comparison the same controller was used, with a single-drive RAID 0 volume per disk and cache enabled on all of them (this MegaRAID has no IT/HBA firmware). A test with an LSI HBA is included in the Extras.

Raid 0 volume setup.

[root@lab /]# /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdDel -L0 -a0

                                     
Adapter 0: Deleted Virtual Drive-0(target id-0)

Exit Code: 0x00
[root@lab /]#  /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r0[252:0] -a0
                                     
Adapter 0: Created VD 0

Adapter 0: Configured the Adapter!!

Exit Code: 0x00
[root@lab /]#  /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r0[252:1] -a0
                                     
Adapter 0: Created VD 1

Adapter 0: Configured the Adapter!!

Exit Code: 0x00
[root@lab /]#  /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r0[252:2] -a0
                                     
Adapter 0: Created VD 2

Adapter 0: Configured the Adapter!!

Exit Code: 0x00
[root@lab /]#  /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r0[252:3] -a0
                                     
Adapter 0: Created VD 3

Adapter 0: Configured the Adapter!!

Exit Code: 0x00

[root@lab data]# /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp -EnDskCache -L0 -a0
                                     
Set Disk Cache Policy to Enabled on Adapter 0, VD 0 (target id: 0) success

Exit Code: 0x00
[root@lab data]# /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp -EnDskCache -L1 -a0
                                     
Set Disk Cache Policy to Enabled on Adapter 0, VD 1 (target id: 1) success

Exit Code: 0x00
[root@lab data]# /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp -EnDskCache -L2 -a0
                                     
Set Disk Cache Policy to Enabled on Adapter 0, VD 2 (target id: 2) success

Exit Code: 0x00
[root@lab data]# /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp -EnDskCache -L3 -a0
                                     
Set Disk Cache Policy to Enabled on Adapter 0, VD 3 (target id: 3) success

Exit Code: 0x00
[root@lab /]# zpool create -f data mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
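
(Side note: sdX names can shuffle between reboots; for a long-lived pool the persistent /dev/disk/by-id/ paths are the safer choice. The mapping to sdX is visible with:)

ls -l /dev/disk/by-id/ | grep -E 'sd[b-e]$'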

[root@lab /]# zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	data        ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	    sde     ONLINE       0     0     0

errors: No known data errors
[root@lab /]# cd /data

Stress testing

GitLab sample:

https://docs.gitlab.com/ee/administration/operations/filesystem_benchmarking.html

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample

ext4 results

[root@lab data]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
test: Laying out IO file (1 file / 16384MiB)
Jobs: 1 (f=1): [m(1)][32.1%][r=2474KiB/s,w=824KiB/s][r=618,w=206 IOPS][eta 49m:57s]      

[----8<----]

[root@lab ~]# iostat -x 5 /dev/sdb
Linux 3.10.0-862.14.4.el7.x86_64 (lab.home.lan) 	28/05/21 	_x86_64_	(2 CPU)

(discarding the first two reports; iostat's first sample covers averages since boot)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.05    0.00    4.42   30.42    0.00   64.11

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.60  734.00  312.40  2936.00 24684.00    52.79    67.71   65.49   78.44   35.06   0.96 100.02

[----8<----]

[root@lab data]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
test: Laying out IO file (1 file / 16384MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=5329KiB/s,w=1837KiB/s][r=1332,w=459 IOPS][eta 00m:01s]   
test: (groupid=0, jobs=1): err= 0: pid=2257: Fri May 28 11:52:30 2021
   read: IOPS=754, BW=3018KiB/s (3090kB/s)(11.0GiB/4169132msec)
   bw (  KiB/s): min=    8, max= 6072, per=100.00%, avg=3019.86, stdev=721.91, samples=8331
   iops        : min=    2, max= 1518, avg=754.94, stdev=180.48, samples=8331
  write: IOPS=251, BW=1006KiB/s (1030kB/s)(4097MiB/4169132msec)
   bw (  KiB/s): min=    8, max= 2000, per=100.00%, avg=1007.07, stdev=242.10, samples=8330
   iops        : min=    2, max=  500, avg=251.74, stdev=60.53, samples=8330
  cpu          : usr=2.71%, sys=9.34%, ctx=3421251, majf=1, minf=25
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=3145447,1048857,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=3018KiB/s (3090kB/s), 3018KiB/s-3018KiB/s (3090kB/s-3090kB/s), io=11.0GiB (12.9GB), run=4169132-4169132msec
  WRITE: bw=1006KiB/s (1030kB/s), 1006KiB/s-1006KiB/s (1030kB/s-1030kB/s), io=4097MiB (4296MB), run=4169132-4169132msec

Disk stats (read/write):
  sdb: ios=3145392/1216347, merge=0/2128, ticks=237759412/39134897, in_queue=276899518, util=100.00%

XFS results

[root@lab data]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample

test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
test: Laying out IO file (1 file / 16384MiB)
Jobs: 1 (f=1): [m(1)][50.6%][r=3843KiB/s,w=1217KiB/s][r=960,w=304 IOPS][eta 26m:12s] 

[----8<----]


[root@lab ~]# iostat -x 5 /dev/sdb

[----8<----]

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.16    0.00    4.65    0.00    0.00   94.19

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.00  840.80  285.00  3363.20  1140.00     8.00    63.86   57.16   67.73   25.95   0.89 100.00

[----8<----]

[root@lab data]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample

test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
test: Laying out IO file (1 file / 16384MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=5249KiB/s,w=1865KiB/s][r=1312,w=466 IOPS][eta 00m:01s]
test: (groupid=0, jobs=1): err= 0: pid=4015: Fri May 28 13:31:58 2021
   read: IOPS=936, BW=3745KiB/s (3835kB/s)(11.0GiB/3359499msec)
   bw (  KiB/s): min= 1392, max= 6064, per=99.98%, avg=3744.27, stdev=504.35, samples=6718
   iops        : min=  348, max= 1516, avg=936.04, stdev=126.08, samples=6718
  write: IOPS=312, BW=1249KiB/s (1279kB/s)(4097MiB/3359499msec)
   bw (  KiB/s): min=  440, max= 1928, per=100.00%, avg=1248.50, stdev=189.26, samples=6718
   iops        : min=  110, max=  482, avg=312.10, stdev=47.32, samples=6718
  cpu          : usr=3.30%, sys=11.51%, ctx=3362052, majf=0, minf=26
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=3145447,1048857,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=3745KiB/s (3835kB/s), 3745KiB/s-3745KiB/s (3835kB/s-3835kB/s), io=11.0GiB (12.9GB), run=3359499-3359499msec
  WRITE: bw=1249KiB/s (1279kB/s), 1249KiB/s-1249KiB/s (1279kB/s-1279kB/s), io=4097MiB (4296MB), run=3359499-3359499msec

Disk stats (read/write):
  sdb: ios=3145437/1049067, merge=0/1, ticks=196521131/17985696, in_queue=214510587, util=100.00%

ZFS results

[root@lab data]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [m(1)][2.4%][r=456KiB/s,w=144KiB/s][r=114,w=36 IOPS][eta 06h:14m:54s]


[root@lab ~]# zpool iostat -v 60

              capacity     operations     bandwidth 
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        16.1G  3.61T    113    155  14.2M  13.0M
  mirror    8.02G  1.80T     55     79  7.00M  6.55M
    sdb         -      -     27     39  3.50M  3.28M
    sdc         -      -     27     40  3.50M  3.28M
  mirror    8.06G  1.80T     57     75  7.19M  6.43M
    sdd         -      -     28     36  3.55M  3.21M
    sde         -      -     29     39  3.64M  3.21M
----------  -----  -----  -----  -----  -----  -----

[root@lab ~]# iostat -x 5 /dev/sdb /dev/sdc /dev/sdd /dev/sde

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.31    0.00    4.49   48.11    0.00   47.09

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.00   29.00   37.60  3712.00  2923.30   199.26     0.40    6.06    9.17    3.67   4.30  28.64
sdc               0.00     0.00   25.60   35.20  3276.80  2923.30   203.95     0.30    4.95    8.31    2.51   3.96  24.08
sdd               0.00     0.00   26.20   37.60  3353.60  3070.20   201.37     0.48    7.76   10.47    5.87   4.78  30.48
sde               0.00     0.00   24.40   38.80  3123.20  3070.20   195.99     0.34    5.32    8.53    3.30   3.98  25.18

[root@lab ~]# iostat -x 60 /dev/sdb /dev/sdc /dev/sdd /dev/sde

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.31    0.00    5.18   48.08    0.00   46.44

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.00   29.12   37.77  3726.93  3165.72   206.11     0.35    5.25    8.21    2.96   4.02  26.86
sdc               0.00     0.00   28.25   42.05  3616.00  3165.72   192.94     0.33    4.71    8.32    2.28   3.76  26.46
sdd               0.00     0.00   27.92   44.40  3573.33  3308.03   190.31     0.31    4.30    8.24    1.82   3.58  25.89
sde               0.00     0.00   27.83   38.35  3562.67  3308.03   207.63     0.35    5.23    8.36    2.96   4.01  26.56



[root@lab ~]# zpool iostat -r 5

data          sync_read    sync_write    async_read    async_write      scrub         trim    
req_size      ind    agg    ind    agg    ind    agg    ind    agg    ind    agg    ind    agg
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
512             0      0      0      0      0      0     13      0      0      0      0      0
1K              0      0      3      0      0      0     14      2      0      0      0      0
2K              0      0      0      0      0      0     21      2      0      0      0      0
4K              0      0      0      0      0      0      9      7      0      0      0      0
8K              0      0      0      0      0      0      0      3      0      0      0      0
16K             0      0      0      0      0      0      0      1      0      0      0      0
32K             0      0      0      0      0      0     40      0      0      0      0      0
64K             0      0      0      0      0      0      0      0      0      0      0      0
128K          109      0      0      0      0      0     31      1      0      0      0      0
256K            0      0      0      0      0      0      0      1      0      0      0      0
512K            0      0      0      0      0      0      0      5      0      0      0      0
1M              0      0      0      0      0      0      0      0      0      0      0      0
2M              0      0      0      0      0      0      0      0      0      0      0      0
4M              0      0      0      0      0      0      0      0      0      0      0      0
8M              0      0      0      0      0      0      0      0      0      0      0      0
16M             0      0      0      0      0      0      0      0      0      0      0      0
----------------------------------------------------------------------------------------------

[root@lab ~]# zpool iostat -r 60

data          sync_read    sync_write    async_read    async_write      scrub         trim    
req_size      ind    agg    ind    agg    ind    agg    ind    agg    ind    agg    ind    agg
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
512             0      0      0      0      0      0     12      0      0      0      0      0
1K              0      0      3      0      0      0     11      2      0      0      0      0
2K              0      0      0      0      0      0     12      2      0      0      0      0
4K              0      0      0      0      0      0      7      6      0      0      0      0
8K              0      0      0      0      0      0      5      5      0      0      0      0
16K             0      0      0      0      0      0      0      1      0      0      0      0
32K             0      0      0      0      0      0     43      0      0      0      0      0
64K             0      0      0      0      0      0      0      0      0      0      0      0
128K          111      0      0      0      0      0     40      0      0      0      0      0
256K            0      0      0      0      0      0      0      1      0      0      0      0
512K            0      0      0      0      0      0      0      5      0      0      0      0
1M              0      0      0      0      0      0      0      0      0      0      0      0
2M              0      0      0      0      0      0      0      0      0      0      0      0
4M              0      0      0      0      0      0      0      0      0      0      0      0
8M              0      0      0      0      0      0      0      0      0      0      0      0
16M             0      0      0      0      0      0      0      0      0      0      0      0
----------------------------------------------------------------------------------------------


Killed off: at this rate the run was going to take over six hours, so it was stopped after roughly 20 minutes. The partial results:


[root@lab data]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [m(1)][2.7%][r=444KiB/s,w=132KiB/s][r=111,w=33 IOPS][eta 06h:22m:30s]
^Cbs: 1 (f=1): [m(1)][4.6%][r=420KiB/s,w=148KiB/s][r=105,w=37 IOPS][eta 06h:51m:04s]
fio: terminating on signal 2

test: (groupid=0, jobs=1): err= 0: pid=10279: Fri May 28 15:57:50 2021
   read: IOPS=121, BW=487KiB/s (499kB/s)(566MiB/1189603msec)
   bw (  KiB/s): min=  120, max=29152, per=99.93%, avg=486.66, stdev=1025.51, samples=2379
   iops        : min=   30, max= 7288, avg=121.55, stdev=256.38, samples=2379
  write: IOPS=40, BW=162KiB/s (166kB/s)(188MiB/1189603msec)
   bw (  KiB/s): min=   32, max=10024, per=100.00%, avg=162.00, stdev=343.90, samples=2379
   iops        : min=    8, max= 2506, avg=40.37, stdev=85.98, samples=2379
  cpu          : usr=0.42%, sys=3.67%, ctx=163184, majf=0, minf=26
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=144846,48240,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=487KiB/s (499kB/s), 487KiB/s-487KiB/s (499kB/s-499kB/s), io=566MiB (593MB), run=1189603-1189603msec
  WRITE: bw=162KiB/s (166kB/s), 162KiB/s-162KiB/s (166kB/s-166kB/s), io=188MiB (198MB), run=1189603-1189603msec

Application:

fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=16g --iodepth=1 --runtime=30 --time_based --end_fsync=1

(--end_fsync=1 makes fio fsync the file when the timed window ends and includes the flush in the reported runtime, which is why the "30 second" ext4 run below took 571 seconds.)

ext4 results

[root@lab data]# fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=16g --iodepth=1 --runtime=30 --time_based --end_fsync=1
random-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=1
fio-3.7
Starting 1 process
random-write: Laying out IO file (1 file / 16384MiB)
Jobs: 1 (f=1): [F(1)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]      

[----8<----]

[root@lab ~]# iostat -x 5 /dev/sdb
Linux 3.10.0-862.14.4.el7.x86_64 (lab.home.lan) 	28/05/21 	_x86_64_	(2 CPU)

[----8<----]

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.10    0.00   35.52   63.15    0.00    1.23

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00   244.80    0.00  678.20     0.00  6137.60    18.10     6.85   10.09    0.00   10.09   1.47  99.92

[----8<----]

[root@lab data]# fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=16g --iodepth=1 --runtime=30 --time_based --end_fsync=1
random-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=1
fio-3.7
Starting 1 process
random-write: Laying out IO file (1 file / 16384MiB)
Jobs: 1 (f=1): [F(1)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]00s]  
random-write: (groupid=0, jobs=1): err= 0: pid=3377: Fri May 28 12:20:22 2021
  write: IOPS=458, BW=1835KiB/s (1879kB/s)(1024MiB/571230msec)
    slat (nsec): min=1323, max=5506.0k, avg=3188.84, stdev=14325.24
    clat (nsec): min=494, max=121278k, avg=109990.83, stdev=3119390.73
     lat (usec): min=10, max=121282, avg=113.18, stdev=3119.50
    clat percentiles (usec):
     |  1.00th=[    10],  5.00th=[    10], 10.00th=[    11], 20.00th=[    13],
     | 30.00th=[    14], 40.00th=[    14], 50.00th=[    14], 60.00th=[    14],
     | 70.00th=[    14], 80.00th=[    15], 90.00th=[    18], 95.00th=[    27],
     | 99.00th=[    78], 99.50th=[   167], 99.90th=[  5800], 99.95th=[107480],
     | 99.99th=[115868]
   bw (  KiB/s): min= 1536, max=202512, per=100.00%, avg=34935.92, stdev=69315.53, samples=60
   iops        : min=  384, max=50628, avg=8733.97, stdev=17328.89, samples=60
  lat (nsec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (usec)   : 2=0.03%, 4=0.01%, 10=6.94%, 20=85.07%, 50=6.42%
  lat (usec)   : 100=0.76%, 250=0.32%, 500=0.05%, 750=0.08%, 1000=0.07%
  lat (msec)   : 2=0.10%, 4=0.04%, 10=0.04%, 20=0.01%, 250=0.08%
  cpu          : usr=0.19%, sys=0.23%, ctx=292179, majf=0, minf=50
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,262029,0,1 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=1835KiB/s (1879kB/s), 1835KiB/s-1835KiB/s (1879kB/s-1879kB/s), io=1024MiB (1073MB), run=571230-571230msec

Disk stats (read/write):
  sdb: ios=4/345334, merge=0/117717, ticks=40/2837796, in_queue=2837557, util=98.76%

XFS results

[root@lab data]# fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=16g --iodepth=1 --runtime=30 --time_based --end_fsync=1

random-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=1
fio-3.7
Starting 1 process
random-write: Laying out IO file (1 file / 16384MiB)
Jobs: 1 (f=1): [F(1)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]      

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.31    0.00    9.81   49.79    0.00   40.08

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.40    0.00 3212.60     0.00 14391.20     8.96   285.70   88.95    0.00   88.95   0.31 100.00

[root@lab data]# fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=16g --iodepth=1 --runtime=30 --time_based --end_fsync=1

random-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=1
fio-3.7
Starting 1 process
random-write: Laying out IO file (1 file / 16384MiB)
Jobs: 1 (f=1): [F(1)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]      
random-write: (groupid=0, jobs=1): err= 0: pid=4555: Fri May 28 13:37:43 2021
  write: IOPS=3215, BW=12.6MiB/s (13.2MB/s)(1634MiB/130047msec)
    slat (nsec): min=1360, max=1298.0k, avg=4677.68, stdev=8583.16
    clat (nsec): min=499, max=199770k, avg=64993.68, stdev=1811343.94
     lat (usec): min=13, max=199776, avg=69.67, stdev=1811.54
    clat percentiles (usec):
     |  1.00th=[   16],  5.00th=[   17], 10.00th=[   17], 20.00th=[   17],
     | 30.00th=[   19], 40.00th=[   21], 50.00th=[   21], 60.00th=[   22],
     | 70.00th=[   22], 80.00th=[   23], 90.00th=[   36], 95.00th=[   39],
     | 99.00th=[  145], 99.50th=[  194], 99.90th=[  627], 99.95th=[42206],
     | 99.99th=[92799]
   bw (  KiB/s): min= 4648, max=169176, per=100.00%, avg=55760.08, stdev=65326.93, samples=60
   iops        : min= 1162, max=42294, avg=13940.00, stdev=16331.75, samples=60
  lat (nsec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (usec)   : 2=0.02%, 4=0.01%, 10=0.01%, 20=36.41%, 50=60.11%
  lat (usec)   : 100=1.16%, 250=1.94%, 500=0.21%, 750=0.05%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 50=0.02%, 100=0.03%, 250=0.01%
  cpu          : usr=1.96%, sys=2.50%, ctx=421286, majf=0, minf=49
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,418208,0,1 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=12.6MiB/s (13.2MB/s), 12.6MiB/s-12.6MiB/s (13.2MB/s-13.2MB/s), io=1634MiB (1713MB), run=130047-130047msec

Disk stats (read/write):
  sdb: ios=0/374777, merge=0/1856, ticks=0/31449016, in_queue=31448839, util=96.32%

ZFS results

[root@lab data]# fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=16g --iodepth=1 --runtime=30 --time_based --end_fsync=1
random-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=1
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=544KiB/s][r=0,w=136 IOPS][eta 00m:00s]
random-write: (groupid=0, jobs=1): err= 0: pid=6925: Fri May 28 16:00:05 2021
  write: IOPS=163, BW=652KiB/s (668kB/s)(19.1MiB/30012msec)
    slat (usec): min=5, max=252, avg=28.05, stdev=10.84
    clat (usec): min=59, max=50661, avg=6083.87, stdev=5259.78
     lat (usec): min=66, max=50686, avg=6111.91, stdev=5259.39
    clat percentiles (usec):
     |  1.00th=[  184],  5.00th=[  260], 10.00th=[  285], 20.00th=[ 1074],
     | 30.00th=[ 3228], 40.00th=[ 4621], 50.00th=[ 5800], 60.00th=[ 7111],
     | 70.00th=[ 8356], 80.00th=[ 9634], 90.00th=[10814], 95.00th=[11863],
     | 99.00th=[27132], 99.50th=[35914], 99.90th=[44303], 99.95th=[44827],
     | 99.99th=[50594]
   bw (  KiB/s): min=  199, max=  936, per=99.87%, avg=651.13, stdev=178.57, samples=60
   iops        : min=   49, max=  234, avg=162.65, stdev=44.72, samples=60
  lat (usec)   : 100=0.06%, 250=3.74%, 500=12.29%, 750=0.06%, 1000=0.37%
  lat (msec)   : 2=10.87%, 4=8.22%, 10=48.00%, 20=14.49%, 50=1.88%
  lat (msec)   : 100=0.02%
  cpu          : usr=0.89%, sys=0.81%, ctx=5152, majf=0, minf=48
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,4892,0,1 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=652KiB/s (668kB/s), 652KiB/s-652KiB/s (668kB/s-668kB/s), io=19.1MiB (20.0MB), run=30012-30012msec

NAS conditions:

fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=30 --time_based --end_fsync=1

ext4 results

[root@lab data]# fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=30 --time_based --end_fsync=1

random-write: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [F(1)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]    
random-write: (groupid=0, jobs=1): err= 0: pid=3614: Fri May 28 12:27:32 2021
  write: IOPS=82, BW=82.7MiB/s (86.7MB/s)(3835MiB/46387msec)
    slat (usec): min=38, max=463, avg=144.85, stdev=50.25
    clat (usec): min=1004, max=28102, avg=7530.40, stdev=5686.94
     lat (usec): min=1062, max=28290, avg=7675.25, stdev=5699.75
    clat percentiles (usec):
     |  1.00th=[ 1029],  5.00th=[ 1045], 10.00th=[ 1057], 20.00th=[ 1156],
     | 30.00th=[ 2073], 40.00th=[ 2606], 50.00th=[ 9110], 60.00th=[10028],
     | 70.00th=[11076], 80.00th=[11994], 90.00th=[13960], 95.00th=[16450],
     | 99.00th=[24249], 99.50th=[26084], 99.90th=[26608], 99.95th=[26608],
     | 99.99th=[28181]
   bw (  KiB/s): min=63488, max=720896, per=100.00%, avg=133060.17, stdev=146327.99, samples=59
   iops        : min=   62, max=  704, avg=129.90, stdev=142.91, samples=59
  lat (msec)   : 2=28.89%, 4=12.07%, 10=17.11%, 20=39.71%, 50=2.22%
  cpu          : usr=1.29%, sys=1.56%, ctx=4407, majf=0, minf=51
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,3835,0,1 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=82.7MiB/s (86.7MB/s), 82.7MiB/s-82.7MiB/s (86.7MB/s-86.7MB/s), io=3835MiB (4021MB), run=46387-46387msec

Disk stats (read/write):
  sdb: ios=0/45494, merge=0/56130, ticks=0/11065295, in_queue=11067180, util=96.57%

XFS results

[root@lab data]# fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=30 --time_based --end_fsync=1

random-write: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [F(1)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]    
random-write: (groupid=0, jobs=1): err= 0: pid=4803: Fri May 28 13:39:02 2021
  write: IOPS=109, BW=110MiB/s (115MB/s)(4376MiB/39920msec)
    slat (usec): min=36, max=1019, avg=145.69, stdev=55.99
    clat (usec): min=896, max=152721, avg=6543.55, stdev=5631.90
     lat (usec): min=947, max=152934, avg=6689.24, stdev=5637.86
    clat percentiles (usec):
     |  1.00th=[   930],  5.00th=[   963], 10.00th=[   979], 20.00th=[  1139],
     | 30.00th=[  1893], 40.00th=[  2507], 50.00th=[  7111], 60.00th=[  8094],
     | 70.00th=[  9634], 80.00th=[ 11600], 90.00th=[ 13566], 95.00th=[ 15139],
     | 99.00th=[ 19006], 99.50th=[ 22414], 99.90th=[ 25297], 99.95th=[ 25560],
     | 99.99th=[152044]
   bw (  KiB/s): min=89932, max=872448, per=100.00%, avg=151845.44, stdev=139054.16, samples=59
   iops        : min=   87, max=  852, avg=148.22, stdev=135.81, samples=59
  lat (usec)   : 1000=16.04%
  lat (msec)   : 2=16.64%, 4=13.96%, 10=24.84%, 20=27.79%, 50=0.71%
  lat (msec)   : 250=0.02%
  cpu          : usr=1.64%, sys=2.31%, ctx=5464, majf=0, minf=51
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,4376,0,1 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=4376MiB (4589MB), run=39920-39920msec

Disk stats (read/write):
  sdb: ios=0/178139, merge=0/28338, ticks=0/5858194, in_queue=5858060, util=95.74%

ZFS results

[root@lab data]# fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=30 --time_based --end_fsync=1

random-write: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.7
Starting 1 process

Jobs: 1 (f=1): [F(1)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]    
random-write: (groupid=0, jobs=1): err= 0: pid=7038: Fri May 28 16:01:37 2021
  write: IOPS=199, BW=199MiB/s (209MB/s)(6973MiB/35012msec)
    slat (usec): min=38, max=3600, avg=118.76, stdev=115.40
    clat (usec): min=658, max=61598, avg=4167.51, stdev=2070.11
     lat (usec): min=719, max=61693, avg=4286.26, stdev=2080.64
    clat percentiles (usec):
     |  1.00th=[  750],  5.00th=[ 1037], 10.00th=[ 2900], 20.00th=[ 3425],
     | 30.00th=[ 3621], 40.00th=[ 3851], 50.00th=[ 4047], 60.00th=[ 4293],
     | 70.00th=[ 4555], 80.00th=[ 4948], 90.00th=[ 5735], 95.00th=[ 6456],
     | 99.00th=[ 8291], 99.50th=[ 9372], 99.90th=[34341], 99.95th=[44827],
     | 99.99th=[61604]
   bw (  KiB/s): min=133120, max=772096, per=100.00%, avg=237942.68, stdev=91340.54, samples=60
   iops        : min=  130, max=  754, avg=232.33, stdev=89.15, samples=60
  lat (usec)   : 750=0.93%, 1000=3.73%
  lat (msec)   : 2=3.76%, 4=40.27%, 10=50.90%, 20=0.29%, 50=0.09%
  lat (msec)   : 100=0.04%
  cpu          : usr=2.49%, sys=0.74%, ctx=9817, majf=0, minf=49
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,6973,0,1 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=199MiB/s (209MB/s), 199MiB/s-199MiB/s (209MB/s-209MB/s), io=6973MiB (7312MB), run=35012-35012msec

VM:

fio -size=16GB -direct=1 -rw=randrw -rwmixread=69 -bs=4K -ioengine=libaio -iodepth=12 -runtime=30 -numjobs=4 -time_based -group_reporting -name=vm

ext4 results

[root@lab data]# sudo fio -size=16GB -direct=1 -rw=randrw -rwmixread=69 -bs=4K -ioengine=libaio -iodepth=12 -runtime=30 -numjobs=4 -time_based -group_reporting -name=vm
vm: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=12
...
fio-3.7
Starting 4 processes
vm: Laying out IO file (1 file / 16384MiB)
vm: Laying out IO file (1 file / 16384MiB)

[----8<----]

[root@lab ~]# iostat -x 5 /dev/sdb

[----8<----]

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           8.79    0.00   35.15   55.44    0.00    0.63

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.00    0.00  964.60     0.00 303884.80   630.07   141.15  146.25    0.00  146.25   1.04 100.00

[----8<----]

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.80    0.00    5.50    0.00    0.00   92.70

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.40  717.60  326.00  2870.40  1305.60     8.00    47.87   45.89   65.46    2.80   0.96 100.02

[----8<----]

[root@lab data]# sudo fio -size=16GB -direct=1 -rw=randrw -rwmixread=69 -bs=4K -ioengine=libaio -iodepth=12 -runtime=30 -numjobs=4 -time_based -group_reporting -name=vm
vm: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=12
...
fio-3.7
Starting 4 processes
vm: Laying out IO file (1 file / 16384MiB)
vm: Laying out IO file (1 file / 16384MiB)
vm: Laying out IO file (1 file / 16384MiB)
vm: Laying out IO file (1 file / 16384MiB)
Jobs: 4 (f=4): [m(4)][100.0%][r=3032KiB/s,w=1300KiB/s][r=758,w=325 IOPS][eta 00m:00s]
vm: (groupid=0, jobs=4): err= 0: pid=3691: Fri May 28 12:32:22 2021
   read: IOPS=712, BW=2850KiB/s (2918kB/s)(83.9MiB/30134msec)
    slat (usec): min=6, max=986, avg=81.81, stdev=27.96
    clat (usec): min=62, max=1264.8k, avg=66097.97, stdev=87266.39
     lat (usec): min=118, max=1264.9k, avg=66181.60, stdev=87266.60
    clat percentiles (msec):
     |  1.00th=[    6],  5.00th=[    9], 10.00th=[   11], 20.00th=[   15],
     | 30.00th=[   20], 40.00th=[   26], 50.00th=[   35], 60.00th=[   48],
     | 70.00th=[   66], 80.00th=[   96], 90.00th=[  161], 95.00th=[  230],
     | 99.00th=[  439], 99.50th=[  527], 99.90th=[  760], 99.95th=[  869],
     | 99.99th=[ 1200]
   bw (  KiB/s): min=  280, max= 1152, per=25.06%, avg=713.87, stdev=164.05, samples=240
   iops        : min=   70, max=  288, avg=178.41, stdev=41.02, samples=240
  write: IOPS=330, BW=1321KiB/s (1352kB/s)(38.9MiB/30134msec)
    slat (usec): min=8, max=2183, avg=97.37, stdev=37.04
    clat (usec): min=268, max=87497, avg=2268.27, stdev=3843.12
     lat (usec): min=296, max=87597, avg=2367.50, stdev=3843.75
    clat percentiles (usec):
     |  1.00th=[  330],  5.00th=[  355], 10.00th=[  371], 20.00th=[  457],
     | 30.00th=[  725], 40.00th=[ 1057], 50.00th=[ 1352], 60.00th=[ 1598],
     | 70.00th=[ 1811], 80.00th=[ 2040], 90.00th=[ 4883], 95.00th=[ 8979],
     | 99.00th=[21365], 99.50th=[27395], 99.90th=[38011], 99.95th=[42206],
     | 99.99th=[87557]
   bw (  KiB/s): min=  136, max=  608, per=25.12%, avg=331.52, stdev=92.98, samples=240
   iops        : min=   34, max=  152, avg=82.80, stdev=23.25, samples=240
  lat (usec)   : 100=0.01%, 250=0.01%, 500=6.91%, 750=2.86%, 1000=2.38%
  lat (msec)   : 2=12.90%, 4=3.01%, 10=9.03%, 20=15.66%, 50=21.03%
  lat (msec)   : 100=13.16%, 250=10.14%, 500=2.44%, 750=0.37%, 1000=0.06%
  cpu          : usr=1.03%, sys=3.11%, ctx=31178, majf=0, minf=123
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=21468,9950,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=12

Run status group 0 (all jobs):
   READ: bw=2850KiB/s (2918kB/s), 2850KiB/s-2850KiB/s (2918kB/s-2918kB/s), io=83.9MiB (87.9MB), run=30134-30134msec
  WRITE: bw=1321KiB/s (1352kB/s), 1321KiB/s-1321KiB/s (1352kB/s-1352kB/s), io=38.9MiB (40.8MB), run=30134-30134msec

Disk stats (read/write):
  sdb: ios=21400/9951, merge=0/10, ticks=1406969/22236, in_queue=1434164, util=98.92%

XFS results

[root@lab data]# fio -size=16GB -direct=1 -rw=randrw -rwmixread=69 -bs=4K -ioengine=libaio -iodepth=12 -runtime=30 -numjobs=4 -time_based -group_reporting -name=vm
vm: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=12
...
fio-3.7
Starting 4 processes
vm: Laying out IO file (1 file / 16384MiB)
vm: Laying out IO file (1 file / 16384MiB)


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           6.90    0.00   35.33   45.08    0.00   12.69

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.00    0.00  964.00     0.00 308224.00   639.47   142.33  147.67    0.00  147.67   1.04 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.69    0.00    5.50    0.00    0.00   92.80

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.00  696.20  322.20  2784.80  1288.80     8.00    47.87   46.79   67.06    2.98   0.98 100.00

[root@lab data]# fio -size=16GB -direct=1 -rw=randrw -rwmixread=69 -bs=4K -ioengine=libaio -iodepth=12 -runtime=30 -numjobs=4 -time_based -group_reporting -name=vm
vm: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=12
...
fio-3.7
Starting 4 processes
vm: Laying out IO file (1 file / 16384MiB)
vm: Laying out IO file (1 file / 16384MiB)
vm: Laying out IO file (1 file / 16384MiB)
vm: Laying out IO file (1 file / 16384MiB)
Jobs: 4 (f=4): [m(4)][100.0%][r=2842KiB/s,w=1357KiB/s][r=710,w=339 IOPS][eta 00m:00s]
vm: (groupid=0, jobs=4): err= 0: pid=4906: Fri May 28 13:43:48 2021
   read: IOPS=709, BW=2837KiB/s (2905kB/s)(83.5MiB/30137msec)
    slat (usec): min=6, max=825, avg=83.11, stdev=27.41
    clat (usec): min=75, max=1236.6k, avg=65736.88, stdev=84427.41
     lat (usec): min=136, max=1236.7k, avg=65821.81, stdev=84427.45
    clat percentiles (msec):
     |  1.00th=[    6],  5.00th=[    8], 10.00th=[   11], 20.00th=[   14],
     | 30.00th=[   20], 40.00th=[   26], 50.00th=[   35], 60.00th=[   48],
     | 70.00th=[   67], 80.00th=[   97], 90.00th=[  159], 95.00th=[  228],
     | 99.00th=[  409], 99.50th=[  506], 99.90th=[  735], 99.95th=[  844],
     | 99.99th=[  995]
   bw (  KiB/s): min=  328, max= 1064, per=25.05%, avg=710.80, stdev=153.13, samples=240
   iops        : min=   82, max=  266, avg=177.65, stdev=38.30, samples=240
  write: IOPS=329, BW=1319KiB/s (1351kB/s)(38.8MiB/30137msec)
    slat (usec): min=9, max=517, avg=96.72, stdev=28.78
    clat (usec): min=262, max=935354, avg=3580.84, stdev=14626.74
     lat (usec): min=305, max=935472, avg=3679.38, stdev=14626.83
    clat percentiles (usec):
     |  1.00th=[   330],  5.00th=[   359], 10.00th=[   375], 20.00th=[   494],
     | 30.00th=[   799], 40.00th=[  1123], 50.00th=[  1434], 60.00th=[  1696],
     | 70.00th=[  1893], 80.00th=[  3163], 90.00th=[  8586], 95.00th=[ 15008],
     | 99.00th=[ 36963], 99.50th=[ 43254], 99.90th=[ 58983], 99.95th=[ 70779],
     | 99.99th=[935330]
   bw (  KiB/s): min=  111, max=  576, per=25.11%, avg=331.16, stdev=90.06, samples=240
   iops        : min=   27, max=  144, avg=82.71, stdev=22.54, samples=240
  lat (usec)   : 100=0.01%, 250=0.01%, 500=6.43%, 750=2.60%, 1000=2.38%
  lat (msec)   : 2=12.06%, 4=2.61%, 10=9.73%, 20=16.18%, 50=21.54%
  lat (msec)   : 100=13.19%, 250=10.49%, 500=2.40%, 750=0.30%, 1000=0.06%
  cpu          : usr=0.97%, sys=3.18%, ctx=31010, majf=0, minf=127
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=21377,9939,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=12

Run status group 0 (all jobs):
   READ: bw=2837KiB/s (2905kB/s), 2837KiB/s-2837KiB/s (2905kB/s-2905kB/s), io=83.5MiB (87.6MB), run=30137-30137msec
  WRITE: bw=1319KiB/s (1351kB/s), 1319KiB/s-1319KiB/s (1351kB/s-1351kB/s), io=38.8MiB (40.7MB), run=30137-30137msec

Disk stats (read/write):
  sdb: ios=21276/9915, merge=0/3, ticks=1392927/37304, in_queue=1433770, util=99.79%

ZFS results

[root@lab data]# fio -size=16GB -direct=1 -rw=randrw -rwmixread=69 -bs=4K -ioengine=libaio -iodepth=12 -runtime=30 -numjobs=4 -time_based -group_reporting -name=vm

vm: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=12
...
fio-3.7
Starting 4 processes
vm: Laying out IO file (1 file / 16384MiB)
vm: Laying out IO file (1 file / 16384MiB)
vm: Laying out IO file (1 file / 16384MiB)
vm: Laying out IO file (1 file / 16384MiB)
Jobs: 4 (f=4): [m(4)][100.0%][r=1041KiB/s,w=508KiB/s][r=260,w=127 IOPS][eta 00m:00s]
vm: (groupid=0, jobs=4): err= 0: pid=15168: Fri May 28 16:07:38 2021
   read: IOPS=213, BW=855KiB/s (876kB/s)(25.1MiB/30013msec)
    slat (usec): min=17, max=124935, avg=12736.39, stdev=9509.98
    clat (usec): min=21, max=659287, avg=138509.59, stdev=59984.75
     lat (msec): min=17, max=710, avg=151.25, stdev=65.08
    clat percentiles (msec):
     |  1.00th=[   62],  5.00th=[   80], 10.00th=[   90], 20.00th=[  102],
     | 30.00th=[  110], 40.00th=[  117], 50.00th=[  125], 60.00th=[  133],
     | 70.00th=[  144], 80.00th=[  163], 90.00th=[  203], 95.00th=[  239],
     | 99.00th=[  405], 99.50th=[  456], 99.90th=[  584], 99.95th=[  600],
     | 99.99th=[  659]
   bw (  KiB/s): min=   48, max=  407, per=25.36%, avg=216.79, stdev=68.96, samples=234
   iops        : min=   12, max=  101, avg=54.09, stdev=17.24, samples=234
  write: IOPS=100, BW=404KiB/s (413kB/s)(11.8MiB/30013msec)
    slat (usec): min=43, max=136318, avg=12527.55, stdev=9621.96
    clat (usec): min=9, max=617648, avg=138011.62, stdev=59659.62
     lat (msec): min=18, max=694, avg=150.55, stdev=64.66
    clat percentiles (msec):
     |  1.00th=[   61],  5.00th=[   78], 10.00th=[   89], 20.00th=[  102],
     | 30.00th=[  109], 40.00th=[  117], 50.00th=[  126], 60.00th=[  133],
     | 70.00th=[  144], 80.00th=[  161], 90.00th=[  201], 95.00th=[  245],
     | 99.00th=[  384], 99.50th=[  439], 99.90th=[  592], 99.95th=[  609],
     | 99.99th=[  617]
   bw (  KiB/s): min=   16, max=  224, per=25.28%, avg=101.88, stdev=38.65, samples=234
   iops        : min=    4, max=   56, avg=25.37, stdev= 9.67, samples=234
  lat (usec)   : 10=0.01%, 20=0.02%, 50=0.01%
  lat (msec)   : 20=0.03%, 50=0.29%, 100=18.50%, 250=76.85%, 500=4.05%
  lat (msec)   : 750=0.23%
  cpu          : usr=0.34%, sys=2.33%, ctx=10436, majf=0, minf=116
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=99.7%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=6419,3028,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=12

Run status group 0 (all jobs):
   READ: bw=855KiB/s (876kB/s), 855KiB/s-855KiB/s (876kB/s-876kB/s), io=25.1MiB (26.3MB), run=30013-30013msec
  WRITE: bw=404KiB/s (413kB/s), 404KiB/s-404KiB/s (413kB/s-413kB/s), io=11.8MiB (12.4MB), run=30013-30013msec

Results

Gitlab filesystem performance test (4k randrw, 75/25 mix, iodepth 64):

  ext4:  read 754 IOPS (3018KiB/s), write 251 IOPS (1006KiB/s)
  xfs:   read 936 IOPS (3745KiB/s), write 312 IOPS (1249KiB/s)
  zfs:   read 121 IOPS (487KiB/s),  write  40 IOPS (162KiB/s)  (run aborted after ~20 minutes)

Application performance test (4k randwrite, iodepth 1, 30s + end_fsync):

  ext4:  458 IOPS (1835KiB/s)
  xfs:   3215 IOPS (12.6MiB/s)
  zfs:   163 IOPS (652KiB/s)

NAS performance test (1m randwrite, iodepth 1, 30s + end_fsync):

  ext4:  82.7MiB/s
  xfs:   110MiB/s
  zfs:   199MiB/s

VM Host performance test (4k randrw, 69/31 mix, 4 jobs, iodepth 12, 30s):

  ext4:  read 2850KiB/s, write 1321KiB/s
  xfs:   read 2837KiB/s, write 1319KiB/s
  zfs:   read  855KiB/s, write  404KiB/s

See the Extras below for the extended ZFS test results (other block sizes, HBA, ashift=12, RAIDz1).

Conclusion

If you are running a service with persistent data you absolutely want XFS paired with hardware RAID for your raw storage, especially for databases. It is the clear performance winner on the GitLab storage benchmarking test, and XFS also wins for raw random write performance, a big win for master databases.

Where ZFS comes into its own is large-block performance. For network-attached storage, backups, archive, or any at-scale data retention it is the clear choice, given you can also snapshot at low cost and do incremental sends between hosts. ZFS needs an HBA, not a RAID controller: it needs to read the SMART data of the attached drives. In the extra tests the same GitLab 4k fio was run against an HBA setup, with very similar results to the RAID 0 devices with cache enabled. Tuning is not going to help here either: ZFS options like compression, recordsize, logbias and sync will not give you more IOPS, which is where it falls down at small block sizes with copy-on-write. Mirrored ZIL devices would help, or switching every device to SSDs.
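
For reference, the tunables referred to are set along these lines (sync=disabled in particular trades away crash consistency for synchronous writes):

zfs set compression=lz4 data
zfs set recordsize=4k data
zfs set logbias=throughput data
zfs set sync=disabled data    # unsafe: sync writes are acknowledged before reaching stable storage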

I should add that I did these tests with what I had at the time; new database deployments are 100% SSD now. 4k blocks are the most intense benchmark you can run against storage, and a 70% read / 30% write split is typical for large databases that do not fit in memory.

Extras

More ZFS tests

[root@lab data]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=8k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample8k

test: (g=0): rw=randrw, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
test: Laying out IO file (1 file / 16384MiB)
^Cbs: 1 (f=1): [m(1)][0.9%][r=904KiB/s,w=384KiB/s][r=113,w=48 IOPS][eta 03h:49m:00s] 
fio: terminating on signal 2

test: (groupid=0, jobs=1): err= 0: pid=16495: Fri May 28 16:12:02 2021
   read: IOPS=114, BW=915KiB/s (937kB/s)(115MiB/129064msec)
   bw (  KiB/s): min=   96, max= 1264, per=100.00%, avg=918.51, stdev=158.85, samples=256
   iops        : min=   12, max=  158, avg=114.68, stdev=19.88, samples=256
  write: IOPS=37, BW=303KiB/s (311kB/s)(38.2MiB/129064msec)
   bw (  KiB/s): min=   32, max=  526, per=100.00%, avg=304.80, stdev=75.11, samples=256
   iops        : min=    4, max=   65, avg=37.94, stdev= 9.39, samples=256
  cpu          : usr=0.46%, sys=3.86%, ctx=18491, majf=0, minf=23
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=14763,4896,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=915KiB/s (937kB/s), 915KiB/s-915KiB/s (937kB/s-937kB/s), io=115MiB (121MB), run=129064-129064msec
  WRITE: bw=303KiB/s (311kB/s), 303KiB/s-303KiB/s (311kB/s-311kB/s), io=38.2MiB (40.1MB), run=129064-129064msec

[root@lab data]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=16k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample16k
test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
test: Laying out IO file (1 file / 16384MiB)
^Cbs: 1 (f=1): [m(1)][1.8%][r=2128KiB/s,w=576KiB/s][r=133,w=36 IOPS][eta 02h:04m:06s]
fio: terminating on signal 2

test: (groupid=0, jobs=1): err= 0: pid=19079: Fri May 28 16:16:18 2021
   read: IOPS=104, BW=1668KiB/s (1708kB/s)(225MiB/138396msec)
   bw (  KiB/s): min=  288, max= 2240, per=100.00%, avg=1679.67, stdev=301.76, samples=273
   iops        : min=   18, max=  140, avg=104.81, stdev=18.86, samples=273
  write: IOPS=34, BW=552KiB/s (565kB/s)(74.6MiB/138396msec)
   bw (  KiB/s): min=   96, max=  960, per=100.00%, avg=556.18, stdev=137.05, samples=273
   iops        : min=    6, max=   60, avg=34.56, stdev= 8.58, samples=273
  cpu          : usr=0.42%, sys=3.62%, ctx=18271, majf=0, minf=24
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=14430,4775,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=1668KiB/s (1708kB/s), 1668KiB/s-1668KiB/s (1708kB/s-1708kB/s), io=225MiB (236MB), run=138396-138396msec
  WRITE: bw=552KiB/s (565kB/s), 552KiB/s-552KiB/s (565kB/s-565kB/s), io=74.6MiB (78.2MB), run=138396-138396msec
[root@lab data]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=32k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample32k
test: (g=0): rw=randrw, bs=(R) 32.0KiB-32.0KiB, (W) 32.0KiB-32.0KiB, (T) 32.0KiB-32.0KiB, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
test: Laying out IO file (1 file / 16384MiB)
^Cbs: 1 (f=1): [m(1)][11.4%][r=2850KiB/s,w=1057KiB/s][r=89,w=33 IOPS][eta 52m:31s]    
fio: terminating on signal 2

test: (groupid=0, jobs=1): err= 0: pid=24577: Fri May 28 16:24:38 2021
   read: IOPS=110, BW=3541KiB/s (3626kB/s)(1405MiB/406302msec)
   bw (  KiB/s): min=  766, max= 4928, per=100.00%, avg=3539.83, stdev=622.60, samples=811
   iops        : min=   23, max=  154, avg=110.49, stdev=19.46, samples=811
  write: IOPS=36, BW=1182KiB/s (1210kB/s)(469MiB/406302msec)
   bw (  KiB/s): min=  255, max= 2112, per=100.00%, avg=1181.35, stdev=299.70, samples=811
   iops        : min=    7, max=   66, avg=36.76, stdev= 9.40, samples=811
  cpu          : usr=0.46%, sys=4.06%, ctx=58264, majf=0, minf=24
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=44954,15003,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=3541KiB/s (3626kB/s), 3541KiB/s-3541KiB/s (3626kB/s-3626kB/s), io=1405MiB (1473MB), run=406302-406302msec
  WRITE: bw=1182KiB/s (1210kB/s), 1182KiB/s-1182KiB/s (1210kB/s-1210kB/s), io=469MiB (492MB), run=406302-406302msec

With HBA

02:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [1000:0072] (rev 03)

[root@lab data]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
test: Laying out IO file (1 file / 16384MiB)
^Cbs: 1 (f=1): [m(1)][0.6%][r=504KiB/s,w=180KiB/s][r=126,w=45 IOPS][eta 07h:18m:20s]
fio: terminating on signal 2

test: (groupid=0, jobs=1): err= 0: pid=3109: Fri May 28 18:20:46 2021
   read: IOPS=119, BW=478KiB/s (489kB/s)(71.4MiB/153097msec)
   bw (  KiB/s): min=  168, max=  656, per=100.00%, avg=477.76, stdev=91.73, samples=305
   iops        : min=   42, max=  164, avg=119.31, stdev=22.96, samples=305
  write: IOPS=39, BW=158KiB/s (162kB/s)(23.7MiB/153097msec)
   bw (  KiB/s): min=   56, max=  256, per=100.00%, avg=158.15, stdev=39.48, samples=305
   iops        : min=   14, max=   64, avg=39.39, stdev= 9.87, samples=305
  cpu          : usr=0.48%, sys=3.91%, ctx=22399, majf=0, minf=25
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=18283,6055,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=478KiB/s (489kB/s), 478KiB/s-478KiB/s (489kB/s-489kB/s), io=71.4MiB (74.9MB), run=153097-153097msec
  WRITE: bw=158KiB/s (162kB/s), 158KiB/s-158KiB/s (162kB/s-162kB/s), io=23.7MiB (24.8MB), run=153097-153097msec

ashift=12 (2**12=4096 byte sectors)
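
These drives report 512-byte sectors (see the parted output above); ashift=12 forces 4K-aligned writes regardless. What a drive itself advertises can be checked with, for example:

blockdev --getss --getpbsz /dev/sdb   # logical and physical sector size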

[root@lab ~]# zpool create -f data -o ashift=12 mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

[root@lab ~]# zpool get ashift
NAME  PROPERTY  VALUE   SOURCE
data  ashift    12      local

[root@lab ~]# zdb | grep ashift
            ashift: 12
            ashift: 12
[root@lab data]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample

RAIDz1

[root@lab /]# zpool create -f data raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

[root@lab /]# zfs list -t all
NAME   USED  AVAIL     REFER  MOUNTPOINT
data   132K  5.31T     32.9K  /data
[root@lab /]# zpool status
  pool: data
 state: ONLINE
config:

  NAME        STATE     READ WRITE CKSUM
  data        ONLINE       0     0     0
    raidz1-0  ONLINE       0     0     0
      sdb     ONLINE       0     0     0
      sdc     ONLINE       0     0     0
      sdd     ONLINE       0     0     0
      sde     ONLINE       0     0     0

errors: No known data errors
[root@lab data]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=16G --filename=./sample
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
test: Laying out IO file (1 file / 16384MiB)
^Cbs: 1 (f=1): [m(1)][0.1%][r=432KiB/s,w=120KiB/s][r=108,w=30 IOPS][eta 09h:41m:08s]
fio: terminating on signal 2

test: (groupid=0, jobs=1): err= 0: pid=3661: Fri May 28 18:26:13 2021
   read: IOPS=90, BW=362KiB/s (370kB/s)(16.8MiB/47514msec)
   bw (  KiB/s): min=  160, max=  488, per=100.00%, avg=361.37, stdev=76.78, samples=94
   iops        : min=   40, max=  122, avg=90.22, stdev=19.21, samples=94
  write: IOPS=30, BW=124KiB/s (127kB/s)(5884KiB/47514msec)
   bw (  KiB/s): min=   39, max=  216, per=100.00%, avg=123.67, stdev=34.17, samples=94
   iops        : min=    9, max=   54, avg=30.80, stdev= 8.59, samples=94
  cpu          : usr=0.35%, sys=3.54%, ctx=5656, majf=0, minf=24
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=4297,1471,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=362KiB/s (370kB/s), 362KiB/s-362KiB/s (370kB/s-370kB/s), io=16.8MiB (17.6MB), run=47514-47514msec
  WRITE: bw=124KiB/s (127kB/s), 124KiB/s-124KiB/s (127kB/s-127kB/s), io=5884KiB (6025kB), run=47514-47514msec

RAIDz1 NAS test (same fio NAS command as above):

random-write: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.7
Starting 1 process
random-write: Laying out IO file (1 file / 16384MiB)
Jobs: 1 (f=1): [F(1)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]    
random-write: (groupid=0, jobs=1): err= 0: pid=3925: Fri May 28 18:30:14 2021
  write: IOPS=230, BW=230MiB/s (241MB/s)(7602MiB/33020msec)
    slat (usec): min=38, max=2549, avg=109.53, stdev=90.32
    clat (usec): min=669, max=76457, avg=3821.41, stdev=2099.87
     lat (usec): min=730, max=76562, avg=3930.94, stdev=2105.48
    clat percentiles (usec):
     |  1.00th=[  848],  5.00th=[ 1090], 10.00th=[ 2835], 20.00th=[ 3097],
     | 30.00th=[ 3261], 40.00th=[ 3359], 50.00th=[ 3556], 60.00th=[ 3851],
     | 70.00th=[ 4228], 80.00th=[ 4686], 90.00th=[ 5342], 95.00th=[ 5932],
     | 99.00th=[ 7373], 99.50th=[ 8160], 99.90th=[34341], 99.95th=[45351],
     | 99.99th=[76022]
   bw (  KiB/s): min=173732, max=778240, per=100.00%, avg=259364.35, stdev=89941.85, samples=60
   iops        : min=  169, max=  760, avg=253.18, stdev=87.88, samples=60
  lat (usec)   : 750=0.29%, 1000=3.29%
  lat (msec)   : 2=4.26%, 4=56.77%, 10=35.15%, 20=0.08%, 50=0.12%
  lat (msec)   : 100=0.04%
  cpu          : usr=2.77%, sys=0.59%, ctx=11690, majf=0, minf=47
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,7602,0,1 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=230MiB/s (241MB/s), 230MiB/s-230MiB/s (241MB/s-241MB/s), io=7602MiB (7971MB), run=33020-33020msec