Good morning,
unfortunately the issue persists. :-(
I started a fio benchmark with the command below, and I think the results inside the VM are not that bad:
localhost:/var/cache # fio --name=db_workload --rw=randrw --rwmixread=75 --bs=4k --iodepth=32 --numjobs=4 --size=4G --direct=1 --ioengine=libaio --group_reporting --time_based --runtime=60s
db_workload: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.23
Starting 4 processes
db_workload: Laying out IO file (1 file / 4096MiB)
db_workload: Laying out IO file (1 file / 4096MiB)
db_workload: Laying out IO file (1 file / 4096MiB)
db_workload: Laying out IO file (1 file / 4096MiB)
Jobs: 4 (f=4): [m(4)][100.0%][r=661MiB/s,w=220MiB/s][r=169k,w=56.4k IOPS][eta 00m:00s]
db_workload: (groupid=0, jobs=4): err= 0: pid=21868: Mon Apr 28 23:07:55 2025
  read: IOPS=175k, BW=683MiB/s (717MB/s)(40.0GiB/60001msec)
    slat (usec): min=5, max=12945, avg= 8.35, stdev=18.46
    clat (usec): min=59, max=20430, avg=498.40, stdev=159.95
     lat (usec): min=67, max=20437, avg=507.18, stdev=161.59
    clat percentiles (usec):
     |  1.00th=[  277],  5.00th=[  322], 10.00th=[  355], 20.00th=[  400],
     | 30.00th=[  433], 40.00th=[  457], 50.00th=[  482], 60.00th=[  502],
     | 70.00th=[  529], 80.00th=[  570], 90.00th=[  644], 95.00th=[  725],
     | 99.00th=[ 1012], 99.50th=[ 1123], 99.90th=[ 1582], 99.95th=[ 2114],
     | 99.99th=[ 3916]
   bw (  KiB/s): min=583370, max=783159, per=100.00%, avg=700494.43, stdev=8698.45, samples=476
   iops        : min=145842, max=195791, avg=175122.90, stdev=2174.64, samples=476
  write: IOPS=58.3k, BW=228MiB/s (239MB/s)(13.3GiB/60001msec); 0 zone resets
    slat (usec): min=5, max=9798, avg= 8.82, stdev=17.87
    clat (usec): min=137, max=20658, avg=657.23, stdev=255.53
     lat (usec): min=169, max=20684, avg=666.49, stdev=256.36
    clat percentiles (usec):
     |  1.00th=[  363],  5.00th=[  420], 10.00th=[  465], 20.00th=[  529],
     | 30.00th=[  570], 40.00th=[  603], 50.00th=[  644], 60.00th=[  676],
     | 70.00th=[  709], 80.00th=[  758], 90.00th=[  840], 95.00th=[  938],
     | 99.00th=[ 1188], 99.50th=[ 1303], 99.90th=[ 2409], 99.95th=[ 4047],
     | 99.99th=[11863]
   bw (  KiB/s): min=194664, max=259887, per=100.00%, avg=233489.29, stdev=2903.91, samples=476
   iops        : min=48666, max=64971, avg=58371.66, stdev=725.97, samples=476
  lat (usec)   : 100=0.01%, 250=0.19%, 500=47.69%, 750=43.56%, 1000=6.91%
  lat (msec)   : 2=1.57%, 4=0.06%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=21.07%, sys=44.77%, ctx=588078, majf=0, minf=66
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=10497859,3499428,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=683MiB/s (717MB/s), 683MiB/s-683MiB/s (717MB/s-717MB/s), io=40.0GiB (42.0GB), run=60001-60001msec
  WRITE: bw=228MiB/s (239MB/s), 228MiB/s-228MiB/s (239MB/s-239MB/s), io=13.3GiB (14.3GB), run=60001-60001msec

Disk stats (read/write):
    dm-0: ios=10464280/3496628, merge=0/0, ticks=3139072/1055620, in_queue=4194692, util=100.00%, aggrios=10497938/3507977, aggrmerge=0/109, aggrticks=3211177/1083242, aggrin_queue=4295038, aggrutil=30.55%
  vda: ios=10497938/3507977, merge=0/109, ticks=3211177/1083242, in_queue=4295038, util=30.55%
How can I log or trace the performance issue in the gromox-http service?
Is that possible?
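For example, something like this is what I have in mind (just a sketch with standard Linux tools, nothing Gromox-specific; I am assuming the systemd unit and the process are both called gromox-http, and pidstat comes from the sysstat package):

# follow the gromox-http service log live while reproducing the slowness
journalctl -u gromox-http -f

# sample CPU and disk I/O of the gromox-http process once per second
pidstat -u -d -p "$(pidof gromox-http)" 1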
This is the config of the VM:
- vCPU: 6 cores (load approx. 30%)
- RAM: 32 GB
- Disk: 300 GB LVM-thin (cache mode: Write-Back (unsafe)) -> to check whether this is the bottleneck, see the iostat sketch below
- Disk and Ethernet controller: VirtIO Single
Host:
- Lenovo System x3650 M4
- 6x 2.5" SAS HDDs (15k rpm)
- 256 GB RAM
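Regarding the disk question above: my plan is to run something like this on the host while fio runs in the VM (again only a sketch; iostat is also part of the sysstat package, and the device names will differ on your system):

# extended per-device stats, refreshed every second; consistently high
# await and %util on the SAS disks would point at the host storage
iostat -x 1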
Thanks! :-)