Understanding iostat with Linux software RAID

I'm trying to understand what I see in iostat, specifically the differences between the output for md and sd devices.



I have a couple of quite large CentOS Linux servers, each with an E3-1230 CPU, 16 GB RAM and four 2 TB SATA disk drives. Most are JBOD, but one is configured with software RAID 1+0. The servers have a very similar type and amount of load, but the %util figures I get from iostat on the software RAID one are much higher than on the others, and I'm trying to understand why. All servers are usually 80-90% idle with regard to CPU.
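
For reference, the snapshots below are iostat's extended device report. A minimal way to capture a comparable interval sample, assuming the sysstat package that provides iostat is installed, is something like:

    # Extended per-device statistics (the columns shown below); the second
    # report covers only the 30-second interval rather than the time since boot.
    iostat -x 30 2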



Example of iostat on a server without RAID:




avg-cpu:   %user   %nice %system %iowait  %steal   %idle
            9.26    0.19    1.15    2.55    0.00   86.84

Device:   rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
sdb         2.48    9.45   10.45   13.08  1977.55  1494.06    147.50      2.37  100.61   3.86   9.08
sdc         4.38   24.11   13.25   20.69  1526.18  1289.87     82.97      1.40   41.14   3.94  13.36
sdd         0.06    1.28    1.43    2.50   324.67   587.49    232.32      0.45  113.73   2.77   1.09
sda         0.28    1.06    1.33    0.97   100.89    61.63     70.45      0.06   27.14   2.46   0.57
dm-0        0.00    0.00    0.17    0.24     4.49     1.96     15.96      0.01   18.09   3.38   0.14
dm-1        0.00    0.00    0.09    0.12     0.74     0.99      8.00      0.00    4.65   0.36   0.01
dm-2        0.00    0.00    1.49    3.34   324.67   587.49    188.75      0.45   93.64   2.25   1.09
dm-3        0.00    0.00   17.73   42.82  1526.17  1289.87     46.50      0.35    5.72   2.21  13.36
dm-4        0.00    0.00    0.11    0.03     0.88     0.79     12.17      0.00   19.48   0.87   0.01
dm-5        0.00    0.00    0.00    0.00     0.00     0.00      8.00      0.00    1.17   1.17   0.00
dm-6        0.00    0.00   12.87   20.44  1976.66  1493.27    104.17      2.77   83.01   2.73   9.08
dm-7        0.00    0.00    1.36    1.58    95.65    58.68     52.52      0.09   29.20   1.55   0.46


Example of iostat on a server with RAID 1+0:




avg-cpu:   %user   %nice %system %iowait  %steal   %idle
            7.55    0.25    1.01    3.35    0.00   87.84

Device:   rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
sdb        42.21   31.78   18.47   59.18  8202.18  2040.94    131.91      2.07   26.65   4.02  31.20
sdc        44.93   27.92   18.96   55.88  8570.70  1978.15    140.94      2.21   29.48   4.60  34.45
sdd        45.75   28.69   14.52   55.10  8093.17  1978.16    144.66      0.21    2.95   3.94  27.42
sda        45.05   32.59   18.22   58.37  8471.04  2040.93    137.24      1.57   20.56   5.04  38.59
md1         0.00    0.00   18.17  162.73  3898.45  4013.90     43.74      0.00    0.00   0.00   0.00
md0         0.00    0.00    0.00    0.00     0.00     0.00      4.89      0.00    0.00   0.00   0.00
dm-0        0.00    0.00    0.07    0.26     3.30     2.13     16.85      0.04  135.54  73.73   2.38
dm-1        0.00    0.00    0.25    0.22     2.04     1.79      8.00      0.24  500.99  11.64   0.56
dm-2        0.00    0.00   15.55  150.63  2136.73  1712.31     23.16      1.77   10.66   2.93  48.76
dm-3        0.00    0.00    2.31    2.37  1756.39  2297.67    867.42      2.30  492.30  13.08   6.11


So my questions are:



1) Why is %util so much higher on the server with RAID than on the one without?



2) On the non-RAID server, the combined %util of the physical devices (sd*) is more or less the same as that of the combined LVM devices (dm-*). Why is that not the case on the RAID server?



3) Why do the software RAID devices (md*) appear virtually idle, while the underlying physical devices (sd*) are busy? My first thought was that it might be caused by a RAID check, but /proc/mdstat shows everything is fine.
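
For reference, array health and any background check or resync can be confirmed along these lines (using the md1 name from the output above):

    cat /proc/mdstat                    # array state and progress of any resync/check
    mdadm --detail /dev/md1             # per-array detail: state, failed and spare devices
    cat /sys/block/md1/md/sync_action   # reports "idle" when no check or resync is running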



Edit: Apologies, I thought the question was clear, but it seems there is some confusion about it. The question is not about the difference in %util between drives on one server, but about why the total/average %util on one server is so much higher than on the other. I hope that clarifies any misunderstanding.
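
A quick way to compare those totals from a saved report (report.txt here is just a hypothetical file holding one extended iostat report like the ones above) is a small awk filter:

    # Sum the last column (%util) separately for physical and LVM devices.
    awk '$1 ~ /^sd/  { sd += $NF }
         $1 ~ /^dm-/ { dm += $NF }
         END { printf "sd* total: %.2f   dm-* total: %.2f\n", sd, dm }' report.txt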










  • Where are you seeing that the RAID device is idle? I see md1 with 162 writes per second. That is more than 4 times as much as all the drives in your other server combined. – Patrick, Jul 10 '14 at 14:09

  • I was referring to the %idle column. – RCD, Jul 10 '14 at 15:31

  • The only thing that has %idle is the CPU, and that's about the same on both hosts. – Patrick, Jul 10 '14 at 15:39

  • Are you saying the numbers in the %idle column are just for the CPU? If so, why is the overall CPU 87% idle? – RCD, Jul 10 '14 at 17:47

  • Are you really asking why there is a ~1% difference between all the counters in the two scenarios? – Braiam, Jul 10 '14 at 20:47

Tags: lvm mdadm iostat

asked Jul 10 '14 at 14:00 by RCD
edited Jul 25 '14 at 4:58

1 Answer

non-RAID



Device:   rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
dm-3        0.00    0.00   17.73   42.82  1526.17  1289.87     46.50      0.35    5.72   2.21  13.36


RAID



Device:   rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
dm-2        0.00    0.00   15.55  150.63  2136.73  1712.31     23.16      1.77   10.66   2.93  48.76


On the RAID server, avgrq-sz is lower and w/s is higher: a larger number of smaller I/O requests. The I/O is therefore likely more "random", and more disk seeks mean slower I/O.
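
For context on what %util actually measures: iostat derives it from the "time spent doing I/Os" counter in /proc/diskstats (field 13, counting the major, minor and device-name columns), sampled over an interval. A rough sketch of that calculation for a single device, with the device name and interval chosen arbitrarily here:

    # Read the per-device "time spent doing I/Os" counter (milliseconds) twice
    # and turn the delta into a busy percentage over the interval.
    dev=sda; interval=10
    t1=$(awk -v d="$dev" '$3 == d { print $13 }' /proc/diskstats)
    sleep "$interval"
    t2=$(awk -v d="$dev" '$3 == d { print $13 }' /proc/diskstats)
    # busy ms / total ms * 100  ==  delta / (interval * 10)
    echo "scale=2; ($t2 - $t1) / ($interval * 10)" | bc

The md layer has not always maintained that busy-time counter the way a normal disk queue does, which may be why the md1 and md0 rows above show 0.00 for await, svctm and %util.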






answered Nov 25 at 15:50 by sourcejedi
edited Nov 25 at 16:06 by steve