[ext4] 079f9927c7: aim7.jobs-per-min 376.0% improvement
by kernel test robot
Greetings,
FYI, we noticed a 376.0% improvement in aim7.jobs-per-min due to commit:
commit: 079f9927c7bfa026d963db1455197159ebe5b534 ("ext4: gracefully handle ext4_break_layouts() failure during truncate")
https://git.kernel.org/cgit/linux/kernel/git/tytso/ext4.git dev
in testcase: aim7
on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384GB of memory
with the following parameters (see the job.yaml sketch below):
disk: 1BRD_48G
fs: ext4
test: creat-clo
load: 1000
cpufreq_governor: performance
test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
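For orientation, the parameters above map to plain key/value entries in the attached job.yaml. Below is a minimal sketch of that fragment, assuming the key names match the parameter list; the job file attached to this email is authoritative and also carries the other keys shown in the configuration string further down (kconfig, rootfs, tbox_group, etc.), which are omitted here:

  # sketch of the aim7 parameter fragment expected in job.yaml (attached)
  disk: 1BRD_48G                 # 48G brd ramdisk used as the test device
  fs: ext4                       # filesystem under test
  test: creat-clo                # AIM7 file create/close sub-workload
  load: 1000                     # AIM7 job load (number of simulated tasks)
  cpufreq_governor: performance  # pin the cpufreq governor for stable results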
Details are below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-7/performance/1BRD_48G/ext4/x86_64-rhel-7.6/1000/debian-x86_64-2018-04-03.cgz/lkp-ivb-ep01/creat-clo/aim7
commit:
ee0ed02ca9 ("ext4: do not delete unlinked inode from orphan list on failed truncate")
079f9927c7 ("ext4: gracefully handle ext4_break_layouts() failure during truncate")
ee0ed02ca93ef1ec 079f9927c7bfa026d963db14551
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:4 25% 1:4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
:4 25% 1:4 dmesg.WARNING:stack_recursion
%stddev %change %stddev
\ | \
16534 +376.0% 78703 ± 4% aim7.jobs-per-min
363.16 -78.9% 76.58 ± 5% aim7.time.elapsed_time
363.16 -78.9% 76.58 ± 5% aim7.time.elapsed_time.max
594747 -82.5% 103989 ± 5% aim7.time.involuntary_context_switches
97265 ± 3% -37.4% 60892 aim7.time.minor_page_faults
13398 -80.7% 2581 ± 5% aim7.time.system_time
33.72 ± 3% -9.2% 30.61 aim7.time.user_time
1221874 -70.9% 355969 ± 23% aim7.time.voluntary_context_switches
42.74 ± 4% -7.2% 39.67 ± 2% boot-time.boot
1510 ± 5% -7.9% 1391 ± 3% boot-time.idle
8.54 +125.9% 19.30 ± 7% iostat.cpu.idle
91.18 -12.7% 79.61 iostat.cpu.system
0.27 ± 3% +307.2% 1.09 ± 3% iostat.cpu.user
8.04 ± 2% +9.3 17.30 ± 8% mpstat.cpu.all.idle%
0.01 ± 18% -0.0 0.00 ± 42% mpstat.cpu.all.iowait%
91.68 -10.1 81.60 mpstat.cpu.all.sys%
0.27 ± 3% +0.8 1.10 ± 3% mpstat.cpu.all.usr%
1127458 ± 8% +26.5% 1426646 ± 6% numa-numastat.node0.local_node
1129163 ± 8% +26.8% 1431830 ± 6% numa-numastat.node0.numa_hit
1087298 ± 8% +37.4% 1494060 ± 6% numa-numastat.node1.local_node
1098684 ± 8% +36.7% 1501858 ± 6% numa-numastat.node1.numa_hit
38000204 ± 89% -83.4% 6302127 ± 13% cpuidle.C1.time
441337 ± 94% -64.7% 155896 ± 21% cpuidle.C1.usage
7.57e+08 ± 15% -44.7% 4.19e+08 ± 18% cpuidle.C6.time
1339886 ± 6% -56.4% 584695 ± 6% cpuidle.C6.usage
15308 ± 21% +388.7% 74812 ± 36% cpuidle.POLL.time
8842 ± 23% +925.3% 90663 ± 38% cpuidle.POLL.usage
8.00 +134.4% 18.75 ± 7% vmstat.cpu.id
91.00 -13.2% 79.00 ± 2% vmstat.cpu.sy
2201 +339.2% 9669 ± 5% vmstat.io.bo
739.25 -35.9% 474.00 vmstat.memory.buff
40.75 -17.2% 33.75 vmstat.procs.r
81511 +1.7% 82928 vmstat.system.in
2166 -13.1% 1883 meminfo.Active(file)
176619 -57.3% 75501 ± 3% meminfo.AnonHugePages
986.50 ± 75% -96.1% 38.75 ±173% meminfo.Mlocked
110576 +39.9% 154732 meminfo.SUnreclaim
46793 ± 7% -25.0% 35102 ± 5% meminfo.Shmem
190857 +21.5% 231855 meminfo.Slab
8496 +368.5% 39805 ± 3% meminfo.max_used_kB
10510 ± 19% +15.5% 12138 ± 19% numa-meminfo.node0.Mapped
502.75 ± 74% -96.8% 16.25 ±173% numa-meminfo.node0.Mlocked
64390 ± 14% +40.0% 90175 ± 4% numa-meminfo.node0.SUnreclaim
32715 ± 24% -60.4% 12939 ± 74% numa-meminfo.node0.Shmem
108863 ± 7% +20.2% 130824 ± 8% numa-meminfo.node0.Slab
120127 ± 10% -69.6% 36463 ± 31% numa-meminfo.node1.AnonHugePages
171951 ± 11% -19.5% 138407 ± 13% numa-meminfo.node1.AnonPages
46192 ± 21% +38.6% 64015 ± 5% numa-meminfo.node1.SUnreclaim
82005 ± 10% +22.6% 100526 ± 10% numa-meminfo.node1.Slab
2620 ± 20% +15.8% 3033 ± 19% numa-vmstat.node0.nr_mapped
125.25 ± 75% -96.8% 4.00 ±173% numa-vmstat.node0.nr_mlock
8177 ± 24% -60.5% 3232 ± 74% numa-vmstat.node0.nr_shmem
16098 ± 14% +39.9% 22524 ± 4% numa-vmstat.node0.nr_slab_unreclaimable
42987 ± 11% -19.5% 34596 ± 13% numa-vmstat.node1.nr_anon_pages
120.25 ± 81% -95.4% 5.50 ±173% numa-vmstat.node1.nr_mlock
11547 ± 21% +38.4% 15983 ± 6% numa-vmstat.node1.nr_slab_unreclaimable
995499 ± 12% +13.7% 1132178 ± 7% numa-vmstat.node1.numa_hit
818394 ± 15% +17.2% 958772 ± 8% numa-vmstat.node1.numa_local
2749 -7.0% 2556 turbostat.Avg_MHz
433741 ± 96% -65.9% 148066 ± 23% turbostat.C1
1337165 ± 6% -56.5% 581717 ± 6% turbostat.C6
5.16 ± 15% +8.0 13.12 ± 18% turbostat.C6%
6.00 ± 4% +30.3% 7.82 ± 9% turbostat.CPU%c1
1.56 ± 35% +389.9% 7.66 ± 24% turbostat.CPU%c6
29977039 -77.6% 6718772 ± 3% turbostat.IRQ
1.04 ± 16% +451.8% 5.72 ± 17% turbostat.Pkg%pc2
0.01 ±100% +8000.0% 0.41 ±101% turbostat.Pkg%pc3
0.07 ± 97% +755.2% 0.62 ± 52% turbostat.Pkg%pc6
29420 -84.9% 4430 ± 5% turbostat.SMI
76258 -7.8% 70292 proc-vmstat.nr_active_anon
541.25 -13.1% 470.50 proc-vmstat.nr_active_file
69429 -4.1% 66607 proc-vmstat.nr_anon_pages
1774 ± 24% -68.2% 564.75 ± 11% proc-vmstat.nr_dirtied
276750 -1.1% 273776 proc-vmstat.nr_file_pages
4975 +1.9% 5068 proc-vmstat.nr_inactive_anon
486.25 -1.9% 477.25 proc-vmstat.nr_inactive_file
23712 -6.5% 22169 proc-vmstat.nr_kernel_stack
5858 +3.8% 6079 proc-vmstat.nr_mapped
246.50 ± 75% -96.1% 9.50 ±173% proc-vmstat.nr_mlock
11698 ± 7% -25.0% 8775 ± 5% proc-vmstat.nr_shmem
20069 -3.9% 19281 proc-vmstat.nr_slab_reclaimable
27647 +40.1% 38730 proc-vmstat.nr_slab_unreclaimable
1724 ± 26% -69.6% 524.50 ± 10% proc-vmstat.nr_written
76258 -7.8% 70292 proc-vmstat.nr_zone_active_anon
541.25 -13.1% 470.50 proc-vmstat.nr_zone_active_file
4975 +1.9% 5068 proc-vmstat.nr_zone_inactive_anon
486.25 -1.9% 477.25 proc-vmstat.nr_zone_inactive_file
48176 ± 4% -75.1% 12005 ± 27% proc-vmstat.numa_hint_faults
31019 ± 6% -84.0% 4952 ± 55% proc-vmstat.numa_hint_faults_local
2253964 +31.3% 2958574 proc-vmstat.numa_hit
2240865 +31.4% 2945589 proc-vmstat.numa_local
60592 ± 21% -63.0% 22390 ± 53% proc-vmstat.numa_pte_updates
3620485 +51.5% 5486303 proc-vmstat.pgalloc_normal
919928 -71.5% 262105 ± 4% proc-vmstat.pgfault
3395671 +55.1% 5267921 proc-vmstat.pgfree
809823 -3.2% 783689 proc-vmstat.pgpgout
2267 ± 11% -27.2% 1651 ± 4% slabinfo.buffer_head.active_objs
2269 ± 11% -26.7% 1663 ± 5% slabinfo.buffer_head.num_objs
1211 -25.9% 898.00 ± 9% slabinfo.dquot.active_objs
1211 -25.9% 898.00 ± 9% slabinfo.dquot.num_objs
3575 ± 3% -13.6% 3090 slabinfo.ext4_inode_cache.active_objs
3654 ± 3% -14.8% 3114 slabinfo.ext4_inode_cache.num_objs
34698 ± 2% +582.6% 236857 ± 3% slabinfo.filp.active_objs
1192 +524.6% 7445 ± 3% slabinfo.filp.active_slabs
38161 +524.4% 238278 ± 3% slabinfo.filp.num_objs
1192 +524.6% 7445 ± 3% slabinfo.filp.num_slabs
3296 -11.0% 2934 slabinfo.fscrypt_ctx.active_objs
3296 -11.0% 2934 slabinfo.fscrypt_ctx.num_objs
3140 -16.3% 2627 ± 3% slabinfo.kmalloc-128.active_objs
3140 -16.3% 2627 ± 3% slabinfo.kmalloc-128.num_objs
5591 ± 3% -9.0% 5088 slabinfo.kmalloc-96.active_objs
5605 ± 3% -9.1% 5095 slabinfo.kmalloc-96.num_objs
2595 ± 5% +10.7% 2873 ± 2% slabinfo.kmalloc-rcl-64.active_objs
2595 ± 5% +10.7% 2873 ± 2% slabinfo.kmalloc-rcl-64.num_objs
1016 ± 3% -12.6% 888.00 ± 8% slabinfo.kmem_cache_node.active_objs
1056 ± 3% -12.1% 928.00 ± 7% slabinfo.kmem_cache_node.num_objs
1162 -11.5% 1028 slabinfo.names_cache.active_objs
1195 -11.4% 1059 slabinfo.names_cache.num_objs
287.75 ± 12% -51.6% 139.25 ± 19% slabinfo.nfs_commit_data.active_objs
287.75 ± 12% -51.6% 139.25 ± 19% slabinfo.nfs_commit_data.num_objs
225.00 ± 12% -59.6% 91.00 slabinfo.nfs_read_data.active_objs
225.00 ± 12% -59.6% 91.00 slabinfo.nfs_read_data.num_objs
13453 -11.3% 11935 slabinfo.proc_inode_cache.active_objs
1067 ± 6% -19.2% 862.25 ± 12% slabinfo.task_group.active_objs
1067 ± 6% -19.2% 862.25 ± 12% slabinfo.task_group.num_objs
2.99 ± 14% +158.5% 7.73 ± 7% perf-stat.i.MPKI
1.129e+10 -10.5% 1.01e+10 perf-stat.i.branch-instructions
0.54 ± 6% +1.2 1.73 ± 7% perf-stat.i.branch-miss-rate%
23864805 +87.5% 44737022 ± 4% perf-stat.i.branch-misses
27.92 +1.8 29.71 perf-stat.i.cache-miss-rate%
23518787 ± 4% +25.8% 29586748 perf-stat.i.cache-misses
84242323 ± 5% +18.4% 99771184 perf-stat.i.cache-references
1.98 ± 2% +25.3% 2.48 ± 2% perf-stat.i.cpi
1.098e+11 -5.8% 1.035e+11 perf-stat.i.cpu-cycles
1819 -54.8% 822.64 ± 4% perf-stat.i.cpu-migrations
4591 ± 4% -31.3% 3153 perf-stat.i.cycles-between-cache-misses
0.19 ± 20% +0.5 0.68 ± 29% perf-stat.i.dTLB-load-miss-rate%
31335134 ± 23% +254.2% 1.11e+08 ± 32% perf-stat.i.dTLB-load-misses
1.843e+10 -11.9% 1.624e+10 perf-stat.i.dTLB-loads
2954542 ± 22% +95.9% 5788569 ± 24% perf-stat.i.dTLB-store-misses
1.264e+09 +131.4% 2.925e+09 ± 4% perf-stat.i.dTLB-stores
64.60 ± 34% +29.0 93.61 ± 2% perf-stat.i.iTLB-load-miss-rate%
1328625 ± 20% +276.4% 5001400 ± 16% perf-stat.i.iTLB-load-misses
5.944e+10 -11.2% 5.281e+10 perf-stat.i.instructions
46008 ± 17% -78.8% 9761 ± 18% perf-stat.i.instructions-per-iTLB-miss
0.53 -10.6% 0.48 ± 2% perf-stat.i.ipc
2433 +28.5% 3127 perf-stat.i.minor-faults
47.71 -4.8 42.88 perf-stat.i.node-load-miss-rate%
13410002 ± 7% -13.0% 11663161 ± 4% perf-stat.i.node-load-misses
40.76 -2.0 38.75 perf-stat.i.node-store-miss-rate%
6166336 ± 2% +57.1% 9689578 ± 3% perf-stat.i.node-store-misses
8743609 ± 2% +55.2% 13568157 ± 3% perf-stat.i.node-stores
2433 +28.5% 3127 perf-stat.i.page-faults
1.42 ± 6% +33.3% 1.89 perf-stat.overall.MPKI
0.21 +0.2 0.44 ± 4% perf-stat.overall.branch-miss-rate%
27.94 +1.7 29.66 perf-stat.overall.cache-miss-rate%
1.85 +6.1% 1.96 perf-stat.overall.cpi
4678 ± 4% -25.2% 3497 perf-stat.overall.cycles-between-cache-misses
0.17 ± 23% +0.5 0.68 ± 33% perf-stat.overall.dTLB-load-miss-rate%
63.84 ± 35% +30.3 94.15 ± 2% perf-stat.overall.iTLB-load-miss-rate%
46330 ± 16% -76.5% 10887 ± 18% perf-stat.overall.instructions-per-iTLB-miss
0.54 -5.7% 0.51 perf-stat.overall.ipc
47.98 -4.4 43.56 perf-stat.overall.node-load-miss-rate%
1.126e+10 -11.5% 9.964e+09 perf-stat.ps.branch-instructions
23803167 +85.5% 44150955 ± 4% perf-stat.ps.branch-misses
23452823 ± 4% +24.5% 29196539 perf-stat.ps.cache-misses
84006313 ± 5% +17.2% 98455946 perf-stat.ps.cache-references
1.095e+11 -6.8% 1.021e+11 perf-stat.ps.cpu-cycles
1814 -55.3% 811.78 ± 4% perf-stat.ps.cpu-migrations
31246611 ± 23% +250.4% 1.095e+08 ± 32% perf-stat.ps.dTLB-load-misses
1.837e+10 -12.8% 1.602e+10 perf-stat.ps.dTLB-loads
2946207 ± 22% +93.8% 5711211 ± 24% perf-stat.ps.dTLB-store-misses
1.261e+09 +128.9% 2.886e+09 ± 4% perf-stat.ps.dTLB-stores
1324897 ± 20% +272.5% 4935070 ± 16% perf-stat.ps.iTLB-load-misses
5.927e+10 -12.1% 5.21e+10 perf-stat.ps.instructions
2426 +27.4% 3092 perf-stat.ps.minor-faults
79779 -1.0% 78978 perf-stat.ps.msec
13371979 ± 7% -13.9% 11507768 ± 4% perf-stat.ps.node-load-misses
6148850 ± 2% +55.5% 9560388 ± 3% perf-stat.ps.node-store-misses
8718962 ± 2% +53.5% 13387860 ± 3% perf-stat.ps.node-stores
2426 +27.4% 3092 perf-stat.ps.page-faults
2.168e+13 -81.0% 4.115e+12 ± 4% perf-stat.total.instructions
155607 ± 7% -84.5% 24158 sched_debug.cfs_rq:/.exec_clock.avg
158019 ± 7% -83.9% 25412 sched_debug.cfs_rq:/.exec_clock.max
154697 ± 7% -84.5% 23983 sched_debug.cfs_rq:/.exec_clock.min
499.00 ± 6% -54.7% 226.10 ± 28% sched_debug.cfs_rq:/.exec_clock.stddev
26441 ± 2% -27.3% 19235 ± 3% sched_debug.cfs_rq:/.load.avg
77245 ± 19% -28.8% 54981 ± 16% sched_debug.cfs_rq:/.load.max
4994 ± 71% +148.9% 12431 sched_debug.cfs_rq:/.load.min
16095 ± 28% -31.8% 10978 ± 11% sched_debug.cfs_rq:/.load.stddev
44.75 ± 18% +92.2% 86.02 ± 11% sched_debug.cfs_rq:/.load_avg.avg
300.77 ± 19% +124.0% 673.88 ± 31% sched_debug.cfs_rq:/.load_avg.max
9.08 ± 10% -54.6% 4.12 ± 66% sched_debug.cfs_rq:/.load_avg.min
63.85 ± 20% +165.7% 169.68 ± 15% sched_debug.cfs_rq:/.load_avg.stddev
5946728 ± 7% -83.4% 989775 sched_debug.cfs_rq:/.min_vruntime.avg
6057788 ± 7% -82.2% 1075477 ± 3% sched_debug.cfs_rq:/.min_vruntime.max
5666459 ± 8% -83.9% 915130 sched_debug.cfs_rq:/.min_vruntime.min
61219 ± 7% -34.2% 40297 ± 37% sched_debug.cfs_rq:/.min_vruntime.stddev
0.83 ± 4% -18.3% 0.68 sched_debug.cfs_rq:/.nr_running.avg
0.22 ± 71% +127.0% 0.50 sched_debug.cfs_rq:/.nr_running.min
0.57 ± 9% -87.3% 0.07 ± 39% sched_debug.cfs_rq:/.nr_spread_over.avg
6.20 ± 17% -83.9% 1.00 ± 35% sched_debug.cfs_rq:/.nr_spread_over.max
1.10 ± 10% -78.9% 0.23 ± 35% sched_debug.cfs_rq:/.nr_spread_over.stddev
7.72 ±103% +348.0% 34.59 ± 30% sched_debug.cfs_rq:/.removed.load_avg.avg
79.24 ±100% +544.7% 510.88 sched_debug.cfs_rq:/.removed.load_avg.max
23.18 ±100% +442.4% 125.74 ± 13% sched_debug.cfs_rq:/.removed.load_avg.stddev
356.27 ±103% +346.6% 1591 ± 29% sched_debug.cfs_rq:/.removed.runnable_sum.avg
3682 ±100% +538.4% 23507 sched_debug.cfs_rq:/.removed.runnable_sum.max
1069 ±100% +440.7% 5785 ± 13% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
2.15 ±102% +333.0% 9.32 ± 18% sched_debug.cfs_rq:/.removed.util_avg.avg
25.76 ±100% +543.4% 165.75 ± 8% sched_debug.cfs_rq:/.removed.util_avg.max
6.54 ±100% +450.1% 35.99 ± 10% sched_debug.cfs_rq:/.removed.util_avg.stddev
20.86 ± 3% -28.3% 14.95 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.avg
64.47 ± 26% -43.8% 36.25 ± 14% sched_debug.cfs_rq:/.runnable_load_avg.max
13.48 ± 33% -45.3% 7.37 ± 14% sched_debug.cfs_rq:/.runnable_load_avg.stddev
23012 ± 4% -29.4% 16252 ± 2% sched_debug.cfs_rq:/.runnable_weight.avg
65365 ± 27% -37.8% 40658 ± 19% sched_debug.cfs_rq:/.runnable_weight.max
4983 ± 72% +149.2% 12418 sched_debug.cfs_rq:/.runnable_weight.min
13411 ± 36% -48.3% 6932 ± 14% sched_debug.cfs_rq:/.runnable_weight.stddev
279326 ± 14% -75.3% 69126 ± 16% sched_debug.cfs_rq:/.spread0.avg
390244 ± 10% -60.3% 154863 ± 20% sched_debug.cfs_rq:/.spread0.max
61180 ± 7% -34.1% 40294 ± 37% sched_debug.cfs_rq:/.spread0.stddev
382.72 ± 12% -56.6% 166.00 ± 56% sched_debug.cfs_rq:/.util_avg.min
233.66 ± 17% -28.1% 167.97 ± 6% sched_debug.cfs_rq:/.util_est_enqueued.avg
562074 ± 5% -41.3% 330076 ± 9% sched_debug.cpu.avg_idle.avg
153410 ± 17% -92.0% 12251 ±118% sched_debug.cpu.avg_idle.min
227868 ± 5% +21.8% 277629 ± 5% sched_debug.cpu.avg_idle.stddev
222329 ± 5% -65.5% 76724 sched_debug.cpu.clock.avg
222336 ± 5% -65.5% 76728 sched_debug.cpu.clock.max
222321 ± 5% -65.5% 76719 sched_debug.cpu.clock.min
4.39 ± 15% -45.6% 2.39 ± 24% sched_debug.cpu.clock.stddev
222329 ± 5% -65.5% 76724 sched_debug.cpu.clock_task.avg
222336 ± 5% -65.5% 76728 sched_debug.cpu.clock_task.max
222321 ± 5% -65.5% 76719 sched_debug.cpu.clock_task.min
4.39 ± 15% -45.6% 2.39 ± 24% sched_debug.cpu.clock_task.stddev
21.54 ± 4% -28.9% 15.32 ± 5% sched_debug.cpu.cpu_load[0].avg
68.51 ± 24% -53.1% 32.12 ± 7% sched_debug.cpu.cpu_load[0].max
13.77 ± 32% -49.0% 7.02 ± 4% sched_debug.cpu.cpu_load[0].stddev
22.15 ± 4% -26.3% 16.33 ± 4% sched_debug.cpu.cpu_load[1].avg
78.08 ± 32% -53.1% 36.62 ± 15% sched_debug.cpu.cpu_load[1].max
6.70 ± 19% -45.9% 3.62 ± 40% sched_debug.cpu.cpu_load[1].min
14.37 ± 33% -55.3% 6.42 ± 4% sched_debug.cpu.cpu_load[1].stddev
22.47 ± 4% -23.1% 17.28 ± 4% sched_debug.cpu.cpu_load[2].avg
22.77 ± 4% -19.5% 18.34 ± 2% sched_debug.cpu.cpu_load[3].avg
22.98 ± 2% -13.8% 19.81 sched_debug.cpu.cpu_load[4].avg
93.10 ± 14% +70.0% 158.25 ± 13% sched_debug.cpu.cpu_load[4].max
15.03 ± 24% +57.8% 23.72 ± 14% sched_debug.cpu.cpu_load[4].stddev
1838 ± 4% -29.5% 1297 ± 3% sched_debug.cpu.curr->pid.avg
6582 ± 4% -58.1% 2761 sched_debug.cpu.curr->pid.max
361.45 ± 80% +106.9% 747.88 ± 8% sched_debug.cpu.curr->pid.min
1008 ± 10% -47.2% 531.98 sched_debug.cpu.curr->pid.stddev
26427 ± 2% -27.0% 19302 ± 2% sched_debug.cpu.load.avg
77266 ± 19% -29.1% 54762 ± 14% sched_debug.cpu.load.max
16043 ± 27% -31.3% 11015 ± 10% sched_debug.cpu.load.stddev
0.00 ± 3% -11.5% 0.00 ± 4% sched_debug.cpu.next_balance.stddev
182784 ± 7% -77.9% 40349 sched_debug.cpu.nr_load_updates.avg
192548 ± 6% -75.2% 47823 ± 4% sched_debug.cpu.nr_load_updates.max
179866 ± 7% -79.1% 37656 sched_debug.cpu.nr_load_updates.min
2037 ± 2% -15.1% 1729 ± 11% sched_debug.cpu.nr_load_updates.stddev
0.95 ± 6% -24.7% 0.72 sched_debug.cpu.nr_running.avg
0.47 ± 11% -20.2% 0.37 ± 12% sched_debug.cpu.nr_running.stddev
40112 ± 7% -76.4% 9447 ± 12% sched_debug.cpu.nr_switches.avg
52113 ± 3% -65.3% 18084 ± 6% sched_debug.cpu.nr_switches.max
36473 ± 8% -81.5% 6753 ± 17% sched_debug.cpu.nr_switches.min
3224 ± 6% -20.8% 2553 ± 7% sched_debug.cpu.nr_switches.stddev
18.87 ± 5% -43.6% 10.64 ± 4% sched_debug.cpu.nr_uninterruptible.avg
852.94 ± 4% -89.8% 86.88 ± 16% sched_debug.cpu.nr_uninterruptible.max
-286.66 -85.2% -42.38 sched_debug.cpu.nr_uninterruptible.min
194.19 ± 4% -86.0% 27.19 ± 4% sched_debug.cpu.nr_uninterruptible.stddev
38344 ± 7% -76.8% 8892 ± 14% sched_debug.cpu.sched_count.avg
44285 ± 5% -72.3% 12276 ± 11% sched_debug.cpu.sched_count.max
35777 ± 8% -80.9% 6843 ± 18% sched_debug.cpu.sched_count.min
1748 ± 7% -26.6% 1282 ± 9% sched_debug.cpu.sched_count.stddev
10951 ± 7% -77.0% 2523 ± 22% sched_debug.cpu.sched_goidle.avg
13008 ± 7% -69.8% 3926 ± 14% sched_debug.cpu.sched_goidle.max
10335 ± 7% -82.9% 1764 ± 27% sched_debug.cpu.sched_goidle.min
619.45 ± 7% -22.1% 482.74 ± 12% sched_debug.cpu.sched_goidle.stddev
25678 ± 7% -78.9% 5421 ± 13% sched_debug.cpu.ttwu_count.avg
33687 ± 14% -80.6% 6542 ± 11% sched_debug.cpu.ttwu_count.max
23414 ± 8% -78.8% 4973 ± 14% sched_debug.cpu.ttwu_count.min
2076 ± 25% -82.8% 356.21 ± 10% sched_debug.cpu.ttwu_count.stddev
5437 ± 7% -82.6% 945.25 ± 5% sched_debug.cpu.ttwu_local.avg
7134 ± 7% -76.9% 1651 ± 13% sched_debug.cpu.ttwu_local.max
4648 ± 7% -84.4% 726.00 ± 10% sched_debug.cpu.ttwu_local.min
486.67 ± 14% -67.0% 160.43 ± 18% sched_debug.cpu.ttwu_local.stddev
222322 ± 5% -65.5% 76718 sched_debug.cpu_clk
218751 ± 5% -66.6% 73147 sched_debug.ktime
222764 ± 5% -65.4% 77160 sched_debug.sched_clk
125327 -76.6% 29327 ± 17% softirqs.CPU0.RCU
20714 ± 3% -57.8% 8743 ± 6% softirqs.CPU0.SCHED
153988 -73.1% 41422 ± 5% softirqs.CPU0.TIMER
124970 -76.3% 29675 ± 22% softirqs.CPU1.RCU
15792 -64.5% 5602 ± 8% softirqs.CPU1.SCHED
154998 ± 3% -74.0% 40293 ± 10% softirqs.CPU1.TIMER
123582 -77.2% 28183 ± 19% softirqs.CPU10.RCU
15646 -73.4% 4156 ± 7% softirqs.CPU10.SCHED
149793 ± 3% -75.6% 36510 ± 4% softirqs.CPU10.TIMER
124322 -77.6% 27802 ± 21% softirqs.CPU11.RCU
15923 -69.4% 4872 ± 3% softirqs.CPU11.SCHED
150655 ± 3% -74.7% 38147 ± 3% softirqs.CPU11.TIMER
126447 -74.8% 31910 ± 22% softirqs.CPU12.RCU
16156 -66.5% 5413 ± 18% softirqs.CPU12.SCHED
154057 ± 2% -73.2% 41265 ± 8% softirqs.CPU12.TIMER
126457 -76.5% 29743 ± 21% softirqs.CPU13.RCU
15979 -69.9% 4808 softirqs.CPU13.SCHED
154436 ± 7% -73.3% 41274 ± 8% softirqs.CPU13.TIMER
125940 ± 2% -77.1% 28826 ± 22% softirqs.CPU14.RCU
16497 ± 3% -70.2% 4916 softirqs.CPU14.SCHED
153940 ± 2% -74.8% 38789 ± 4% softirqs.CPU14.TIMER
124842 -78.1% 27353 ± 22% softirqs.CPU15.RCU
15957 -69.9% 4806 ± 2% softirqs.CPU15.SCHED
156045 ± 3% -75.0% 39007 ± 3% softirqs.CPU15.TIMER
124602 -77.3% 28319 ± 20% softirqs.CPU16.RCU
16843 ± 6% -71.1% 4874 softirqs.CPU16.SCHED
153923 ± 3% -74.3% 39521 ± 3% softirqs.CPU16.TIMER
123307 -76.9% 28498 ± 21% softirqs.CPU17.RCU
15719 ± 2% -68.9% 4886 ± 3% softirqs.CPU17.SCHED
153920 ± 5% -74.4% 39333 ± 3% softirqs.CPU17.TIMER
123677 -77.2% 28196 ± 21% softirqs.CPU18.RCU
16079 -69.4% 4922 ± 4% softirqs.CPU18.SCHED
148412 -73.7% 39028 ± 5% softirqs.CPU18.TIMER
124392 -78.1% 27286 ± 21% softirqs.CPU19.RCU
15882 ± 2% -69.4% 4855 ± 6% softirqs.CPU19.SCHED
152830 ± 6% -74.9% 38385 ± 3% softirqs.CPU19.TIMER
125331 -76.5% 29431 ± 21% softirqs.CPU2.RCU
15735 -70.6% 4624 ± 7% softirqs.CPU2.SCHED
148379 -71.0% 43101 ± 14% softirqs.CPU2.TIMER
123961 -76.0% 29706 ± 20% softirqs.CPU20.RCU
16273 ± 3% -68.3% 5157 ± 9% softirqs.CPU20.SCHED
151943 ± 2% -74.5% 38701 ± 3% softirqs.CPU20.TIMER
125243 ± 2% -76.2% 29763 ± 21% softirqs.CPU21.RCU
15652 ± 2% -69.6% 4760 ± 5% softirqs.CPU21.SCHED
161256 ± 12% -75.9% 38857 ± 4% softirqs.CPU21.TIMER
124479 -75.9% 30023 ± 19% softirqs.CPU22.RCU
16100 -68.2% 5119 ± 2% softirqs.CPU22.SCHED
150035 -73.4% 39884 ± 4% softirqs.CPU22.TIMER
125176 -77.3% 28403 ± 19% softirqs.CPU23.RCU
15832 -68.6% 4969 ± 7% softirqs.CPU23.SCHED
167218 ± 11% -76.4% 39544 ± 6% softirqs.CPU23.TIMER
124460 -76.9% 28810 ± 23% softirqs.CPU24.RCU
15964 -69.2% 4912 ± 4% softirqs.CPU24.SCHED
153473 -74.8% 38717 ± 3% softirqs.CPU24.TIMER
125379 ± 2% -77.8% 27828 ± 20% softirqs.CPU25.RCU
15938 -69.3% 4887 ± 6% softirqs.CPU25.SCHED
153816 ± 4% -74.6% 39145 ± 4% softirqs.CPU25.TIMER
124840 -76.9% 28822 ± 21% softirqs.CPU26.RCU
16065 -69.4% 4920 softirqs.CPU26.SCHED
151409 -74.1% 39241 ± 7% softirqs.CPU26.TIMER
124537 -76.9% 28727 ± 23% softirqs.CPU27.RCU
16079 -70.2% 4794 ± 3% softirqs.CPU27.SCHED
154904 ± 2% -74.1% 40193 ± 14% softirqs.CPU27.TIMER
124537 -76.6% 29154 ± 17% softirqs.CPU28.RCU
16165 -69.1% 4992 ± 2% softirqs.CPU28.SCHED
149455 ± 2% -74.1% 38671 ± 5% softirqs.CPU28.TIMER
124603 -77.7% 27737 ± 17% softirqs.CPU29.RCU
16574 ± 7% -70.8% 4837 ± 4% softirqs.CPU29.SCHED
154094 ± 7% -74.9% 38727 ± 4% softirqs.CPU29.TIMER
125497 ± 2% -77.6% 28139 ± 21% softirqs.CPU3.RCU
17110 ± 5% -66.6% 5713 ± 15% softirqs.CPU3.SCHED
159439 -74.4% 40765 ± 4% softirqs.CPU3.TIMER
125858 -76.7% 29336 ± 20% softirqs.CPU30.RCU
15553 -70.8% 4539 ± 3% softirqs.CPU30.SCHED
149468 ± 2% -74.8% 37703 ± 6% softirqs.CPU30.TIMER
124502 -78.2% 27131 ± 21% softirqs.CPU31.RCU
15962 -69.6% 4850 ± 4% softirqs.CPU31.SCHED
150482 ± 4% -74.9% 37815 ± 4% softirqs.CPU31.TIMER
122223 ± 2% -74.1% 31635 ± 16% softirqs.CPU32.RCU
15947 ± 2% -68.6% 5015 ± 4% softirqs.CPU32.SCHED
161780 ± 9% -73.7% 42498 ± 7% softirqs.CPU32.TIMER
121276 -77.5% 27260 ± 19% softirqs.CPU33.RCU
15742 -69.1% 4858 ± 2% softirqs.CPU33.SCHED
154469 ± 7% -72.2% 42993 ± 15% softirqs.CPU33.TIMER
120980 -78.5% 26031 ± 19% softirqs.CPU34.RCU
15924 ± 2% -68.7% 4987 softirqs.CPU34.SCHED
151995 ± 2% -74.4% 38894 ± 5% softirqs.CPU34.TIMER
121999 -76.2% 28990 ± 23% softirqs.CPU35.RCU
15665 ± 2% -68.4% 4954 ± 5% softirqs.CPU35.SCHED
155429 ± 2% -75.1% 38760 ± 2% softirqs.CPU35.TIMER
121908 -77.5% 27433 ± 24% softirqs.CPU36.RCU
15870 -69.2% 4893 ± 2% softirqs.CPU36.SCHED
152401 ± 3% -74.2% 39339 ± 4% softirqs.CPU36.TIMER
120644 -78.0% 26489 ± 19% softirqs.CPU37.RCU
15743 ± 2% -69.6% 4789 ± 5% softirqs.CPU37.SCHED
152409 ± 4% -74.5% 38878 ± 4% softirqs.CPU37.TIMER
121224 -75.9% 29208 ± 19% softirqs.CPU38.RCU
15934 -67.4% 5198 ± 8% softirqs.CPU38.SCHED
148974 ± 2% -73.7% 39206 ± 7% softirqs.CPU38.TIMER
120939 -77.8% 26904 ± 22% softirqs.CPU39.RCU
15635 -69.9% 4702 ± 6% softirqs.CPU39.SCHED
163698 ± 15% -76.9% 37831 ± 4% softirqs.CPU39.TIMER
126050 -76.9% 29174 ± 21% softirqs.CPU4.RCU
16129 -69.3% 4954 ± 3% softirqs.CPU4.SCHED
153560 -74.5% 39176 ± 4% softirqs.CPU4.TIMER
125855 -78.0% 27679 ± 22% softirqs.CPU5.RCU
16649 ± 3% -70.9% 4839 softirqs.CPU5.SCHED
154576 ± 5% -74.7% 39120 ± 4% softirqs.CPU5.TIMER
124711 -77.2% 28483 ± 20% softirqs.CPU6.RCU
16290 ± 2% -68.6% 5118 ± 6% softirqs.CPU6.SCHED
151528 -72.5% 41640 ± 13% softirqs.CPU6.TIMER
126001 -77.2% 28694 ± 21% softirqs.CPU7.RCU
16072 -68.8% 5013 ± 6% softirqs.CPU7.SCHED
155073 ± 2% -75.4% 38089 ± 4% softirqs.CPU7.TIMER
125571 -77.2% 28640 ± 21% softirqs.CPU8.RCU
16083 -69.6% 4887 ± 2% softirqs.CPU8.SCHED
148865 ± 2% -74.1% 38616 ± 5% softirqs.CPU8.TIMER
125533 ± 2% -77.8% 27916 ± 18% softirqs.CPU9.RCU
16028 -69.4% 4905 ± 3% softirqs.CPU9.SCHED
152644 ± 6% -74.9% 38343 ± 4% softirqs.CPU9.TIMER
4971208 -77.0% 1142693 ± 20% softirqs.RCU
645925 -68.9% 201062 ± 2% softirqs.SCHED
6149792 ± 2% -74.3% 1577446 ± 3% softirqs.TIMER
617.50 ± 18% -77.9% 136.75 ± 43% interrupts.37:IR-PCI-MSI.524289-edge.eth0-TxRx-0
306.75 ± 20% -76.3% 72.75 ± 33% interrupts.38:IR-PCI-MSI.524290-edge.eth0-TxRx-1
447.00 ± 35% -86.4% 61.00 ± 37% interrupts.39:IR-PCI-MSI.524291-edge.eth0-TxRx-2
392.50 ± 26% -86.8% 51.75 ± 7% interrupts.40:IR-PCI-MSI.524292-edge.eth0-TxRx-3
251.50 ± 13% -74.0% 65.50 ± 57% interrupts.41:IR-PCI-MSI.524293-edge.eth0-TxRx-4
385.75 ± 50% -79.5% 79.00 ± 38% interrupts.42:IR-PCI-MSI.524294-edge.eth0-TxRx-5
297.00 ± 36% -68.9% 92.50 ± 57% interrupts.43:IR-PCI-MSI.524295-edge.eth0-TxRx-6
404.00 ± 34% -80.8% 77.50 ± 31% interrupts.44:IR-PCI-MSI.524296-edge.eth0-TxRx-7
140895 ± 5% -67.3% 46076 ± 9% interrupts.CAL:Function_call_interrupts
3751 ± 5% -69.6% 1141 ± 9% interrupts.CPU0.CAL:Function_call_interrupts
732484 -78.3% 158839 ± 3% interrupts.CPU0.LOC:Local_timer_interrupts
5101 ± 35% -79.9% 1027 ±173% interrupts.CPU0.NMI:Non-maskable_interrupts
5101 ± 35% -79.9% 1027 ±173% interrupts.CPU0.PMI:Performance_monitoring_interrupts
7691 ± 10% -67.0% 2537 ± 22% interrupts.CPU0.RES:Rescheduling_interrupts
3323 ± 8% -65.4% 1150 ± 9% interrupts.CPU1.CAL:Function_call_interrupts
732357 -78.4% 158544 ± 3% interrupts.CPU1.LOC:Local_timer_interrupts
7105 ± 7% -54.5% 3231 ± 28% interrupts.CPU1.RES:Rescheduling_interrupts
3609 ± 6% -66.8% 1199 ± 8% interrupts.CPU10.CAL:Function_call_interrupts
732159 -78.4% 158400 ± 4% interrupts.CPU10.LOC:Local_timer_interrupts
6721 ± 8% -69.3% 2065 ± 10% interrupts.CPU10.RES:Rescheduling_interrupts
3638 ± 8% -69.4% 1114 ± 12% interrupts.CPU11.CAL:Function_call_interrupts
732161 -78.4% 158371 ± 3% interrupts.CPU11.LOC:Local_timer_interrupts
6935 ± 6% -68.0% 2219 ± 21% interrupts.CPU11.RES:Rescheduling_interrupts
3571 ± 8% -66.5% 1196 ± 9% interrupts.CPU12.CAL:Function_call_interrupts
732160 -78.4% 158295 ± 3% interrupts.CPU12.LOC:Local_timer_interrupts
6735 ± 9% -68.7% 2109 ± 10% interrupts.CPU12.RES:Rescheduling_interrupts
3586 ± 9% -67.8% 1155 ± 8% interrupts.CPU13.CAL:Function_call_interrupts
732074 -78.4% 158256 ± 3% interrupts.CPU13.LOC:Local_timer_interrupts
6965 ± 6% -72.6% 1910 ± 8% interrupts.CPU13.RES:Rescheduling_interrupts
3399 ± 5% -68.3% 1077 ± 18% interrupts.CPU14.CAL:Function_call_interrupts
732544 -78.3% 158727 ± 4% interrupts.CPU14.LOC:Local_timer_interrupts
6709 ± 7% -69.1% 2076 ± 12% interrupts.CPU14.RES:Rescheduling_interrupts
3554 ± 11% -65.9% 1213 ± 9% interrupts.CPU15.CAL:Function_call_interrupts
731864 -78.4% 158337 ± 4% interrupts.CPU15.LOC:Local_timer_interrupts
6902 ± 7% -72.2% 1917 ± 16% interrupts.CPU15.RES:Rescheduling_interrupts
3287 ± 5% -64.4% 1171 ± 8% interrupts.CPU16.CAL:Function_call_interrupts
732135 -78.4% 158356 ± 3% interrupts.CPU16.LOC:Local_timer_interrupts
6625 ± 9% -69.0% 2053 ± 10% interrupts.CPU16.RES:Rescheduling_interrupts
3585 ± 10% -69.5% 1094 ± 16% interrupts.CPU17.CAL:Function_call_interrupts
730713 -78.3% 158566 ± 3% interrupts.CPU17.LOC:Local_timer_interrupts
6874 ± 7% -72.2% 1910 ± 7% interrupts.CPU17.RES:Rescheduling_interrupts
3655 ± 9% -68.4% 1156 ± 12% interrupts.CPU18.CAL:Function_call_interrupts
732401 -78.3% 158690 ± 4% interrupts.CPU18.LOC:Local_timer_interrupts
6716 ± 10% -69.9% 2021 ± 8% interrupts.CPU18.RES:Rescheduling_interrupts
3337 ± 10% -66.7% 1110 ± 13% interrupts.CPU19.CAL:Function_call_interrupts
731334 -78.4% 158031 ± 4% interrupts.CPU19.LOC:Local_timer_interrupts
7403 ± 6% -72.7% 2024 ± 14% interrupts.CPU19.RES:Rescheduling_interrupts
404.00 ± 34% -80.8% 77.50 ± 31% interrupts.CPU2.44:IR-PCI-MSI.524296-edge.eth0-TxRx-7
3530 -69.9% 1063 ± 10% interrupts.CPU2.CAL:Function_call_interrupts
732052 -78.4% 158430 ± 3% interrupts.CPU2.LOC:Local_timer_interrupts
6823 ± 7% -72.3% 1888 ± 7% interrupts.CPU2.RES:Rescheduling_interrupts
3376 ± 5% -64.6% 1195 ± 6% interrupts.CPU20.CAL:Function_call_interrupts
732255 -78.4% 158361 ± 4% interrupts.CPU20.LOC:Local_timer_interrupts
6615 ± 8% -67.6% 2146 ± 10% interrupts.CPU20.RES:Rescheduling_interrupts
3514 ± 10% -67.8% 1131 ± 6% interrupts.CPU21.CAL:Function_call_interrupts
732132 -78.4% 158381 ± 3% interrupts.CPU21.LOC:Local_timer_interrupts
6889 ± 6% -70.9% 2008 ± 16% interrupts.CPU21.RES:Rescheduling_interrupts
3519 ± 11% -65.7% 1208 ± 9% interrupts.CPU22.CAL:Function_call_interrupts
732144 -78.4% 158339 ± 4% interrupts.CPU22.LOC:Local_timer_interrupts
6662 ± 9% -68.1% 2124 ± 15% interrupts.CPU22.RES:Rescheduling_interrupts
3470 ± 6% -66.4% 1165 ± 8% interrupts.CPU23.CAL:Function_call_interrupts
732050 -78.3% 158538 ± 4% interrupts.CPU23.LOC:Local_timer_interrupts
6803 ± 8% -65.3% 2361 ± 36% interrupts.CPU23.RES:Rescheduling_interrupts
3551 ± 6% -67.2% 1165 ± 3% interrupts.CPU24.CAL:Function_call_interrupts
732140 -78.3% 158510 ± 4% interrupts.CPU24.LOC:Local_timer_interrupts
6626 ± 9% -68.1% 2113 ± 9% interrupts.CPU24.RES:Rescheduling_interrupts
3697 ± 5% -68.3% 1171 ± 11% interrupts.CPU25.CAL:Function_call_interrupts
732078 -78.3% 158523 ± 3% interrupts.CPU25.LOC:Local_timer_interrupts
6899 ± 6% -69.4% 2110 ± 7% interrupts.CPU25.RES:Rescheduling_interrupts
617.50 ± 18% -77.9% 136.75 ± 43% interrupts.CPU26.37:IR-PCI-MSI.524289-edge.eth0-TxRx-0
3624 ± 4% -68.2% 1152 ± 6% interrupts.CPU26.CAL:Function_call_interrupts
732096 -78.4% 158339 ± 4% interrupts.CPU26.LOC:Local_timer_interrupts
6622 ± 8% -69.3% 2035 ± 12% interrupts.CPU26.RES:Rescheduling_interrupts
3581 ± 12% -67.1% 1179 ± 11% interrupts.CPU27.CAL:Function_call_interrupts
732145 -78.4% 158379 ± 3% interrupts.CPU27.LOC:Local_timer_interrupts
6845 ± 6% -70.9% 1994 ± 11% interrupts.CPU27.RES:Rescheduling_interrupts
306.75 ± 20% -76.3% 72.75 ± 33% interrupts.CPU28.38:IR-PCI-MSI.524290-edge.eth0-TxRx-1
3353 ± 7% -66.5% 1123 ± 3% interrupts.CPU28.CAL:Function_call_interrupts
732259 -78.4% 158392 ± 4% interrupts.CPU28.LOC:Local_timer_interrupts
6668 ± 8% -70.1% 1996 ± 8% interrupts.CPU28.RES:Rescheduling_interrupts
3535 ± 8% -65.6% 1215 ± 8% interrupts.CPU29.CAL:Function_call_interrupts
732060 -78.4% 158465 ± 4% interrupts.CPU29.LOC:Local_timer_interrupts
6888 ± 6% -71.2% 1986 ± 11% interrupts.CPU29.RES:Rescheduling_interrupts
3596 ± 5% -66.8% 1194 ± 12% interrupts.CPU3.CAL:Function_call_interrupts
732432 -78.4% 158445 ± 4% interrupts.CPU3.LOC:Local_timer_interrupts
7473 ± 9% -73.2% 2004 ± 20% interrupts.CPU3.RES:Rescheduling_interrupts
447.00 ± 35% -86.4% 61.00 ± 37% interrupts.CPU30.39:IR-PCI-MSI.524291-edge.eth0-TxRx-2
3491 ± 9% -66.3% 1176 ± 10% interrupts.CPU30.CAL:Function_call_interrupts
732194 -78.4% 158307 ± 4% interrupts.CPU30.LOC:Local_timer_interrupts
6652 ± 9% -69.4% 2033 ± 8% interrupts.CPU30.RES:Rescheduling_interrupts
3428 ± 3% -66.3% 1156 ± 13% interrupts.CPU31.CAL:Function_call_interrupts
732419 -78.4% 158523 ± 4% interrupts.CPU31.LOC:Local_timer_interrupts
6850 ± 8% -70.8% 1998 ± 8% interrupts.CPU31.RES:Rescheduling_interrupts
392.50 ± 26% -86.8% 51.75 ± 7% interrupts.CPU32.40:IR-PCI-MSI.524292-edge.eth0-TxRx-3
3432 ± 3% -66.2% 1159 ± 8% interrupts.CPU32.CAL:Function_call_interrupts
732243 -78.4% 158318 ± 4% interrupts.CPU32.LOC:Local_timer_interrupts
6604 ± 7% -71.0% 1918 ± 9% interrupts.CPU32.RES:Rescheduling_interrupts
3538 -67.8% 1138 ± 17% interrupts.CPU33.CAL:Function_call_interrupts
732395 -78.4% 158278 ± 4% interrupts.CPU33.LOC:Local_timer_interrupts
6785 ± 9% -69.5% 2069 ± 19% interrupts.CPU33.RES:Rescheduling_interrupts
251.50 ± 13% -74.0% 65.50 ± 57% interrupts.CPU34.41:IR-PCI-MSI.524293-edge.eth0-TxRx-4
3734 ± 7% -68.7% 1167 ± 8% interrupts.CPU34.CAL:Function_call_interrupts
732181 -78.4% 158515 ± 4% interrupts.CPU34.LOC:Local_timer_interrupts
6577 ± 8% -68.5% 2073 ± 6% interrupts.CPU34.RES:Rescheduling_interrupts
3546 ± 2% -67.7% 1147 ± 8% interrupts.CPU35.CAL:Function_call_interrupts
732115 -78.4% 158122 ± 3% interrupts.CPU35.LOC:Local_timer_interrupts
6784 ± 9% -67.4% 2209 ± 8% interrupts.CPU35.RES:Rescheduling_interrupts
385.75 ± 50% -79.5% 79.00 ± 38% interrupts.CPU36.42:IR-PCI-MSI.524294-edge.eth0-TxRx-5
3620 ± 3% -66.9% 1198 ± 9% interrupts.CPU36.CAL:Function_call_interrupts
732242 -78.4% 158318 ± 4% interrupts.CPU36.LOC:Local_timer_interrupts
6537 ± 7% -69.7% 1983 ± 9% interrupts.CPU36.RES:Rescheduling_interrupts
3466 ± 3% -68.0% 1108 ± 16% interrupts.CPU37.CAL:Function_call_interrupts
732076 -78.4% 158298 ± 4% interrupts.CPU37.LOC:Local_timer_interrupts
6727 ± 7% -69.0% 2086 ± 18% interrupts.CPU37.RES:Rescheduling_interrupts
297.00 ± 36% -68.9% 92.50 ± 57% interrupts.CPU38.43:IR-PCI-MSI.524295-edge.eth0-TxRx-6
3454 ± 4% -66.0% 1173 ± 3% interrupts.CPU38.CAL:Function_call_interrupts
731970 -78.4% 158222 ± 4% interrupts.CPU38.LOC:Local_timer_interrupts
6481 ± 9% -69.2% 1994 ± 7% interrupts.CPU38.RES:Rescheduling_interrupts
3626 ± 6% -69.1% 1122 ± 16% interrupts.CPU39.CAL:Function_call_interrupts
732075 -78.5% 157421 ± 4% interrupts.CPU39.LOC:Local_timer_interrupts
6762 ± 7% -71.2% 1945 ± 3% interrupts.CPU39.RES:Rescheduling_interrupts
3578 ± 7% -67.5% 1161 ± 12% interrupts.CPU4.CAL:Function_call_interrupts
732402 -78.4% 158505 ± 4% interrupts.CPU4.LOC:Local_timer_interrupts
6713 ± 8% -68.3% 2126 ± 15% interrupts.CPU4.RES:Rescheduling_interrupts
3331 ± 6% -64.4% 1187 ± 10% interrupts.CPU5.CAL:Function_call_interrupts
732138 -78.4% 158438 ± 4% interrupts.CPU5.LOC:Local_timer_interrupts
1468 ±113% +254.6% 5208 ± 33% interrupts.CPU5.NMI:Non-maskable_interrupts
1468 ±113% +254.6% 5208 ± 33% interrupts.CPU5.PMI:Performance_monitoring_interrupts
7136 ± 8% -72.1% 1989 ± 9% interrupts.CPU5.RES:Rescheduling_interrupts
3461 ± 5% -69.5% 1056 ± 3% interrupts.CPU6.CAL:Function_call_interrupts
732368 -78.3% 158758 ± 4% interrupts.CPU6.LOC:Local_timer_interrupts
6924 ± 13% -70.1% 2068 ± 5% interrupts.CPU6.RES:Rescheduling_interrupts
3401 ± 10% -66.7% 1132 ± 13% interrupts.CPU7.CAL:Function_call_interrupts
732061 -78.4% 158412 ± 4% interrupts.CPU7.LOC:Local_timer_interrupts
7002 ± 4% -65.2% 2437 ± 20% interrupts.CPU7.RES:Rescheduling_interrupts
3596 ± 9% -68.5% 1132 ± 8% interrupts.CPU8.CAL:Function_call_interrupts
732166 -78.4% 158317 ± 4% interrupts.CPU8.LOC:Local_timer_interrupts
6761 ± 8% -71.0% 1963 ± 3% interrupts.CPU8.RES:Rescheduling_interrupts
3537 ± 4% -68.5% 1112 ± 11% interrupts.CPU9.CAL:Function_call_interrupts
731979 -78.4% 158238 ± 4% interrupts.CPU9.LOC:Local_timer_interrupts
5093 ± 34% -100.0% 0.75 ±110% interrupts.CPU9.NMI:Non-maskable_interrupts
5093 ± 34% -100.0% 0.75 ±110% interrupts.CPU9.PMI:Performance_monitoring_interrupts
7101 ± 7% -70.9% 2063 ± 12% interrupts.CPU9.RES:Rescheduling_interrupts
29285268 -78.4% 6335519 ± 4% interrupts.LOC:Local_timer_interrupts
273604 -69.4% 83805 ± 9% interrupts.RES:Rescheduling_interrupts
90.95 -90.9 0.00 perf-profile.calltrace.cycles-pp.ext4_truncate.ext4_setattr.notify_change.do_truncate.path_openat
92.48 -83.6 8.89 ± 9% perf-profile.calltrace.cycles-pp.ext4_setattr.notify_change.do_truncate.path_openat.do_filp_open
92.67 -82.9 9.72 ± 8% perf-profile.calltrace.cycles-pp.notify_change.do_truncate.path_openat.do_filp_open.do_sys_open
93.03 -82.3 10.76 ± 8% perf-profile.calltrace.cycles-pp.do_truncate.path_openat.do_filp_open.do_sys_open.do_syscall_64
44.25 -44.2 0.00 perf-profile.calltrace.cycles-pp.ext4_orphan_del.ext4_truncate.ext4_setattr.notify_change.do_truncate
43.72 -43.7 0.00 perf-profile.calltrace.cycles-pp.ext4_orphan_add.ext4_truncate.ext4_setattr.notify_change.do_truncate
42.08 -42.1 0.00 perf-profile.calltrace.cycles-pp.__mutex_lock.ext4_orphan_add.ext4_truncate.ext4_setattr.notify_change
41.71 -41.7 0.00 perf-profile.calltrace.cycles-pp.__mutex_lock.ext4_orphan_del.ext4_truncate.ext4_setattr.notify_change
40.77 -40.8 0.00 perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.ext4_orphan_add.ext4_truncate.ext4_setattr
40.47 -40.5 0.00 perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.ext4_orphan_del.ext4_truncate.ext4_setattr
95.68 -5.2 90.47 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
95.72 -5.1 90.62 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
96.07 -3.9 92.17 perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
96.11 -3.7 92.37 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
96.13 -3.7 92.43 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat
97.30 -3.1 94.21 perf-profile.calltrace.cycles-pp.creat
0.76 -0.3 0.42 ± 57% perf-profile.calltrace.cycles-pp.crc32c_pcl_intel_update.creat
0.00 +0.6 0.61 ± 4% perf-profile.calltrace.cycles-pp.__ext4_get_inode_loc.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty
0.00 +0.7 0.65 ± 2% perf-profile.calltrace.cycles-pp.__vfs_getxattr.cap_inode_need_killpriv.security_inode_need_killpriv.dentry_needs_remove_privs.do_truncate
0.00 +0.7 0.67 ± 8% perf-profile.calltrace.cycles-pp.selinux_inode_permission.security_inode_permission.link_path_walk.path_openat.do_filp_open
0.00 +0.7 0.71 ± 2% perf-profile.calltrace.cycles-pp.cap_inode_need_killpriv.security_inode_need_killpriv.dentry_needs_remove_privs.do_truncate.path_openat
0.00 +0.7 0.72 ± 7% perf-profile.calltrace.cycles-pp.security_inode_permission.link_path_walk.path_openat.do_filp_open.do_sys_open
0.00 +0.8 0.75 perf-profile.calltrace.cycles-pp.security_inode_need_killpriv.dentry_needs_remove_privs.do_truncate.path_openat.do_filp_open
0.00 +0.8 0.77 ± 4% perf-profile.calltrace.cycles-pp.lockref_put_return.dput.path_openat.do_filp_open.do_sys_open
0.00 +0.8 0.79 perf-profile.calltrace.cycles-pp.dentry_needs_remove_privs.do_truncate.path_openat.do_filp_open.do_sys_open
0.00 +0.8 0.81 ± 7% perf-profile.calltrace.cycles-pp.getname_flags.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
0.00 +0.8 0.82 ± 15% perf-profile.calltrace.cycles-pp.try_to_wake_up.wake_up_q.rwsem_wake.call_rwsem_wake.up_write
0.00 +0.9 0.86 ± 7% perf-profile.calltrace.cycles-pp.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.ext4_setattr
0.00 +0.9 0.88 ± 14% perf-profile.calltrace.cycles-pp.wake_up_q.rwsem_wake.call_rwsem_wake.up_write.path_openat
0.00 +0.9 0.89 ± 5% perf-profile.calltrace.cycles-pp.lockref_get_not_zero.dget_parent.fscrypt_file_open.ext4_file_open.do_dentry_open
0.00 +0.9 0.91 ± 5% perf-profile.calltrace.cycles-pp.dget_parent.fscrypt_file_open.ext4_file_open.do_dentry_open.path_openat
0.00 +0.9 0.91 ± 6% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.path_openat
0.00 +0.9 0.94 ± 7% perf-profile.calltrace.cycles-pp.lockref_get_not_dead.legitimize_path.unlazy_walk.complete_walk.path_openat
0.00 +1.0 1.04 ± 3% perf-profile.calltrace.cycles-pp.dput.path_openat.do_filp_open.do_sys_open.do_syscall_64
0.00 +1.1 1.13 ± 8% perf-profile.calltrace.cycles-pp.legitimize_path.unlazy_walk.complete_walk.path_openat.do_filp_open
0.00 +1.2 1.17 ± 7% perf-profile.calltrace.cycles-pp.unlazy_walk.complete_walk.path_openat.do_filp_open.do_sys_open
0.00 +1.2 1.17 ± 6% perf-profile.calltrace.cycles-pp.__alloc_file.alloc_empty_file.path_openat.do_filp_open.do_sys_open
0.00 +1.2 1.20 ± 11% perf-profile.calltrace.cycles-pp.jbd2_journal_stop.__ext4_journal_stop.__mark_inode_dirty.ext4_setattr.notify_change
0.00 +1.2 1.22 ± 7% perf-profile.calltrace.cycles-pp.complete_walk.path_openat.do_filp_open.do_sys_open.do_syscall_64
0.00 +1.2 1.23 ± 7% perf-profile.calltrace.cycles-pp.alloc_empty_file.path_openat.do_filp_open.do_sys_open.do_syscall_64
0.00 +1.3 1.25 ± 11% perf-profile.calltrace.cycles-pp.__ext4_journal_stop.__mark_inode_dirty.ext4_setattr.notify_change.do_truncate
0.00 +1.4 1.37 ± 4% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.5 1.53 ± 13% perf-profile.calltrace.cycles-pp.fscrypt_file_open.ext4_file_open.do_dentry_open.path_openat.do_filp_open
0.00 +1.6 1.60 ± 4% perf-profile.calltrace.cycles-pp.ext4_do_update_inode.ext4_mark_iloc_dirty.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty
0.00 +1.6 1.63 ± 7% perf-profile.calltrace.cycles-pp.rwsem_wake.call_rwsem_wake.up_write.path_openat.do_filp_open
0.00 +1.7 1.65 ± 7% perf-profile.calltrace.cycles-pp.call_rwsem_wake.up_write.path_openat.do_filp_open.do_sys_open
0.00 +1.7 1.72 ± 5% perf-profile.calltrace.cycles-pp.ext4_mark_iloc_dirty.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.ext4_setattr
0.00 +1.7 1.73 ± 11% perf-profile.calltrace.cycles-pp.ext4_file_open.do_dentry_open.path_openat.do_filp_open.do_sys_open
0.00 +1.8 1.82 ± 5% perf-profile.calltrace.cycles-pp.link_path_walk.path_openat.do_filp_open.do_sys_open.do_syscall_64
0.00 +1.8 1.83 ± 3% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
0.00 +2.0 1.98 ± 7% perf-profile.calltrace.cycles-pp.up_write.path_openat.do_filp_open.do_sys_open.do_syscall_64
0.00 +2.0 1.99 ± 3% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
0.62 ± 4% +2.0 2.66 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
0.63 ± 4% +2.1 2.71 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.close
0.54 ± 4% +2.1 2.66 ± 5% perf-profile.calltrace.cycles-pp.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.ext4_setattr.notify_change
0.00 +2.7 2.65 ± 6% perf-profile.calltrace.cycles-pp.do_dentry_open.path_openat.do_filp_open.do_sys_open.do_syscall_64
0.00 +2.7 2.69 ± 14% perf-profile.calltrace.cycles-pp.start_this_handle.jbd2__journal_start.ext4_dirty_inode.__mark_inode_dirty.ext4_setattr
0.83 ± 5% +2.9 3.69 ± 3% perf-profile.calltrace.cycles-pp.close
0.00 +3.2 3.18 ± 12% perf-profile.calltrace.cycles-pp.jbd2__journal_start.ext4_dirty_inode.__mark_inode_dirty.ext4_setattr.notify_change
0.99 +5.4 6.38 ± 9% perf-profile.calltrace.cycles-pp.ext4_dirty_inode.__mark_inode_dirty.ext4_setattr.notify_change.do_truncate
1.09 +6.6 7.72 ± 9% perf-profile.calltrace.cycles-pp.__mark_inode_dirty.ext4_setattr.notify_change.do_truncate.path_openat
0.60 ± 5% +63.4 64.03 perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.path_openat
0.67 ± 4% +65.6 66.25 perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.path_openat.do_filp_open
0.67 ± 4% +65.6 66.27 perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.path_openat.do_filp_open.do_sys_open
0.75 ± 4% +66.2 66.91 perf-profile.calltrace.cycles-pp.down_write.path_openat.do_filp_open.do_sys_open.do_syscall_64
90.96 -91.0 0.00 perf-profile.children.cycles-pp.ext4_truncate
83.85 -83.8 0.00 perf-profile.children.cycles-pp.__mutex_lock
92.49 -83.6 8.91 ± 9% perf-profile.children.cycles-pp.ext4_setattr
92.68 -82.9 9.74 ± 8% perf-profile.children.cycles-pp.notify_change
93.03 -82.3 10.77 ± 8% perf-profile.children.cycles-pp.do_truncate
44.28 -44.3 0.00 perf-profile.children.cycles-pp.ext4_orphan_del
43.76 -43.8 0.00 perf-profile.children.cycles-pp.ext4_orphan_add
81.93 -17.8 64.14 perf-profile.children.cycles-pp.osq_lock
95.69 -5.2 90.51 perf-profile.children.cycles-pp.path_openat
95.72 -5.1 90.64 perf-profile.children.cycles-pp.do_filp_open
96.08 -3.9 92.19 perf-profile.children.cycles-pp.do_sys_open
97.32 -3.0 94.29 perf-profile.children.cycles-pp.creat
96.88 -1.6 95.26 perf-profile.children.cycles-pp.do_syscall_64
96.91 -1.5 95.37 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
2.71 -1.0 1.73 ± 5% perf-profile.children.cycles-pp.ext4_mark_iloc_dirty
1.54 ± 3% -0.7 0.87 ± 7% perf-profile.children.cycles-pp.ext4_reserve_inode_write
2.13 ± 2% -0.5 1.62 ± 5% perf-profile.children.cycles-pp.ext4_do_update_inode
0.78 -0.2 0.56 ± 5% perf-profile.children.cycles-pp.crc32c_pcl_intel_update
0.79 ± 7% -0.2 0.62 ± 4% perf-profile.children.cycles-pp.__ext4_get_inode_loc
0.19 ± 7% -0.1 0.05 ± 58% perf-profile.children.cycles-pp.__brelse
0.43 ± 11% -0.1 0.36 perf-profile.children.cycles-pp.ext4_inode_csum_set
0.47 ± 3% -0.1 0.41 ± 5% perf-profile.children.cycles-pp.__getblk_gfp
0.26 ± 7% -0.1 0.20 ± 16% perf-profile.children.cycles-pp.__ext4_journal_get_write_access
0.39 ± 4% -0.0 0.34 ± 6% perf-profile.children.cycles-pp.__find_get_block
0.47 ± 3% -0.0 0.43 ± 3% perf-profile.children.cycles-pp.hrtimer_interrupt
0.14 ± 9% -0.0 0.11 ± 9% perf-profile.children.cycles-pp.jbd2_journal_get_write_access
0.07 ± 7% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.schedule
0.10 ± 4% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.jbd2_write_access_granted
0.00 +0.1 0.05 perf-profile.children.cycles-pp.lockref_get
0.00 +0.1 0.05 ± 9% perf-profile.children.cycles-pp.security_file_free
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.inode_has_perm
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.mntput_no_expire
0.00 +0.1 0.06 ± 15% perf-profile.children.cycles-pp.__fd_install
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.setattr_prepare
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.rcu_cblist_dequeue
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.__x64_sys_creat
0.14 ± 5% +0.1 0.20 ± 9% perf-profile.children.cycles-pp.rcu_all_qs
0.00 +0.1 0.06 ± 17% perf-profile.children.cycles-pp.__wake_up_common_lock
0.09 ± 4% +0.1 0.15 ± 2% perf-profile.children.cycles-pp.do_unlinkat
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.__check_heap_object
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.setattr_copy
0.00 +0.1 0.07 ± 22% perf-profile.children.cycles-pp.__lookup_mnt
0.00 +0.1 0.07 ± 20% perf-profile.children.cycles-pp.__sb_end_write
0.00 +0.1 0.08 ± 11% perf-profile.children.cycles-pp.selinux_task_getsecid
0.35 ± 4% +0.1 0.43 ± 7% perf-profile.children.cycles-pp.jbd2_journal_dirty_metadata
0.09 ± 4% +0.1 0.17 ± 4% perf-profile.children.cycles-pp.unlink
0.00 +0.1 0.08 ± 21% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.00 +0.1 0.09 ± 9% perf-profile.children.cycles-pp.current_time
0.01 ±173% +0.1 0.10 ± 4% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.1 0.10 ± 5% perf-profile.children.cycles-pp.selinux_file_alloc_security
0.00 +0.1 0.10 ± 13% perf-profile.children.cycles-pp.security_task_getsecid
0.00 +0.1 0.10 ± 8% perf-profile.children.cycles-pp.__close_fd
0.00 +0.1 0.10 ± 12% perf-profile.children.cycles-pp.creat_clo
0.07 ± 6% +0.1 0.17 ± 8% perf-profile.children.cycles-pp.ext4_discard_preallocations
0.00 +0.1 0.10 ± 21% perf-profile.children.cycles-pp.set_root
0.00 +0.1 0.10 ± 8% perf-profile.children.cycles-pp.__slab_free
0.00 +0.1 0.11 ± 7% perf-profile.children.cycles-pp.ext4_break_layouts
0.11 ± 4% +0.1 0.22 ± 7% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.00 +0.1 0.11 ± 14% perf-profile.children.cycles-pp.__legitimize_mnt
0.00 +0.1 0.12 ± 12% perf-profile.children.cycles-pp.generic_permission
0.00 +0.1 0.12 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.12 ± 13% perf-profile.children.cycles-pp.__fsnotify_parent
0.00 +0.1 0.12 ± 16% perf-profile.children.cycles-pp.file_free_rcu
0.00 +0.1 0.13 ± 15% perf-profile.children.cycles-pp.down_read
0.03 ±102% +0.1 0.16 ± 6% perf-profile.children.cycles-pp.memset_erms
0.06 ± 9% +0.1 0.19 ± 10% perf-profile.children.cycles-pp.path_init
0.00 +0.1 0.14 ± 9% perf-profile.children.cycles-pp.avc_has_perm
0.00 +0.1 0.14 ± 3% perf-profile.children.cycles-pp.__call_rcu
0.00 +0.1 0.14 ± 5% perf-profile.children.cycles-pp.task_work_add
0.04 ± 59% +0.1 0.19 ± 16% perf-profile.children.cycles-pp.__sb_start_write
0.01 ±173% +0.2 0.17 ± 8% perf-profile.children.cycles-pp.ima_file_check
0.03 ±100% +0.2 0.19 ± 7% perf-profile.children.cycles-pp.xattr_resolve_name
0.04 ±103% +0.2 0.20 ± 13% perf-profile.children.cycles-pp.__mnt_want_write
0.05 ± 8% +0.2 0.22 ± 10% perf-profile.children.cycles-pp.fput_many
0.03 ±100% +0.2 0.20 ± 11% perf-profile.children.cycles-pp.inode_security_rcu
0.01 ±173% +0.2 0.19 ± 6% perf-profile.children.cycles-pp.__d_lookup_rcu
0.06 +0.2 0.23 ± 7% perf-profile.children.cycles-pp.__alloc_fd
0.23 ± 7% +0.2 0.41 ± 8% perf-profile.children.cycles-pp._cond_resched
0.04 ± 57% +0.2 0.23 ± 10% perf-profile.children.cycles-pp.terminate_walk
0.21 ± 6% +0.2 0.40 ± 3% perf-profile.children.cycles-pp.osq_unlock
0.00 +0.2 0.19 ± 5% perf-profile.children.cycles-pp._raw_spin_trylock
0.00 +0.2 0.19 ± 19% perf-profile.children.cycles-pp.__virt_addr_valid
0.17 ± 2% +0.2 0.37 ± 2% perf-profile.children.cycles-pp.ext4_xattr_get
0.00 +0.2 0.20 ± 44% perf-profile.children.cycles-pp.__follow_mount_rcu
0.10 ± 5% +0.2 0.30 ± 4% perf-profile.children.cycles-pp.wake_q_add
0.34 ± 3% +0.2 0.54 ± 14% perf-profile.children.cycles-pp._raw_spin_lock
0.00 +0.2 0.21 ± 3% perf-profile.children.cycles-pp.new_slab
0.06 ± 14% +0.2 0.27 ± 3% perf-profile.children.cycles-pp.selinux_inode_setattr
0.07 ± 5% +0.2 0.29 ± 3% perf-profile.children.cycles-pp.selinux_file_open
0.08 ± 5% +0.2 0.30 ± 9% perf-profile.children.cycles-pp.truncate_pagecache
0.07 ± 13% +0.2 0.29 ± 5% perf-profile.children.cycles-pp.avc_has_perm_noaudit
0.08 ± 11% +0.2 0.30 ± 8% perf-profile.children.cycles-pp.fsnotify
0.06 ± 11% +0.2 0.29 ± 9% perf-profile.children.cycles-pp.security_file_alloc
0.07 ± 13% +0.2 0.30 ± 2% perf-profile.children.cycles-pp.security_inode_setattr
0.07 ± 13% +0.2 0.30 ± 7% perf-profile.children.cycles-pp.__inode_security_revalidate
0.07 ± 11% +0.2 0.32 ± 3% perf-profile.children.cycles-pp.inode_permission
0.00 +0.2 0.25 ± 2% perf-profile.children.cycles-pp.___slab_alloc
0.10 ± 24% +0.3 0.35 ± 12% perf-profile.children.cycles-pp.mnt_want_write
0.00 +0.3 0.26 perf-profile.children.cycles-pp.__slab_alloc
0.09 ± 13% +0.3 0.34 ± 5% perf-profile.children.cycles-pp.filp_close
0.07 ± 12% +0.3 0.34 ± 34% perf-profile.children.cycles-pp.ret_from_fork
0.07 ± 12% +0.3 0.34 ± 34% perf-profile.children.cycles-pp.kthread
0.09 ± 8% +0.3 0.38 ± 2% perf-profile.children.cycles-pp.security_file_open
0.08 ± 10% +0.3 0.38 ± 5% perf-profile.children.cycles-pp.unmap_mapping_pages
0.07 ± 11% +0.3 0.37 ± 13% perf-profile.children.cycles-pp.__check_object_size
0.00 +0.3 0.32 ± 34% perf-profile.children.cycles-pp.run_ksoftirqd
0.00 +0.3 0.33 ± 35% perf-profile.children.cycles-pp.smpboot_thread_fn
0.10 ± 13% +0.4 0.45 ± 11% perf-profile.children.cycles-pp.__softirqentry_text_start
0.12 ± 3% +0.4 0.48 ± 7% perf-profile.children.cycles-pp.ext4_release_file
0.06 ± 14% +0.4 0.41 ± 10% perf-profile.children.cycles-pp.rcu_core
0.24 ± 6% +0.4 0.61 ± 13% perf-profile.children.cycles-pp.__might_sleep
0.30 ± 7% +0.4 0.67 ± 3% perf-profile.children.cycles-pp.___might_sleep
0.10 ± 8% +0.4 0.49 ± 2% perf-profile.children.cycles-pp.kmem_cache_free
0.11 ± 10% +0.4 0.49 ± 19% perf-profile.children.cycles-pp.lookup_fast
0.10 ± 4% +0.4 0.49 ± 11% perf-profile.children.cycles-pp.strncpy_from_user
0.11 ± 11% +0.4 0.51 ± 6% perf-profile.children.cycles-pp.may_open
0.12 ± 8% +0.4 0.53 ± 4% perf-profile.children.cycles-pp.__x64_sys_close
0.25 +0.4 0.66 ± 2% perf-profile.children.cycles-pp.__vfs_getxattr
0.18 ± 3% +0.4 0.60 ± 13% perf-profile.children.cycles-pp._raw_read_lock
2.26 ± 2% +0.4 2.68 ± 5% perf-profile.children.cycles-pp.ext4_mark_inode_dirty
0.13 ± 7% +0.5 0.58 ± 16% perf-profile.children.cycles-pp.walk_component
0.26 ± 3% +0.5 0.72 ± 2% perf-profile.children.cycles-pp.cap_inode_need_killpriv
0.27 ± 3% +0.5 0.75 perf-profile.children.cycles-pp.security_inode_need_killpriv
0.00 +0.5 0.49 ± 16% perf-profile.children.cycles-pp.__d_lookup
0.33 ± 5% +0.5 0.82 ± 15% perf-profile.children.cycles-pp.try_to_wake_up
0.00 +0.5 0.51 ± 15% perf-profile.children.cycles-pp.d_lookup
0.28 +0.5 0.82 ± 2% perf-profile.children.cycles-pp.dentry_needs_remove_privs
0.34 ± 6% +0.5 0.89 ± 14% perf-profile.children.cycles-pp.wake_up_q
0.10 ± 15% +0.6 0.70 ± 18% perf-profile.children.cycles-pp.add_transaction_credits
0.27 ± 4% +0.6 0.90 ± 12% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.17 ± 3% +0.6 0.81 ± 7% perf-profile.children.cycles-pp.getname_flags
0.15 ± 14% +0.7 0.83 ± 13% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.17 ± 7% +0.7 0.85 ± 3% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.19 ± 3% +0.8 0.96 ± 6% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.23 ± 5% +0.8 1.02 ± 7% perf-profile.children.cycles-pp.selinux_inode_permission
0.24 ± 5% +0.9 1.09 ± 7% perf-profile.children.cycles-pp.security_inode_permission
0.06 ± 7% +0.9 0.92 ± 7% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.00 +0.9 0.89 ± 5% perf-profile.children.cycles-pp.lockref_get_not_zero
0.04 ± 58% +0.9 0.94 ± 7% perf-profile.children.cycles-pp.lockref_get_not_dead
0.24 ± 6% +0.9 1.14 ± 5% perf-profile.children.cycles-pp.kmem_cache_alloc
0.00 +0.9 0.91 ± 5% perf-profile.children.cycles-pp.dget_parent
0.26 ± 6% +1.0 1.21 ± 11% perf-profile.children.cycles-pp.jbd2_journal_stop
0.29 ± 4% +1.0 1.26 ± 11% perf-profile.children.cycles-pp.__ext4_journal_stop
0.19 ± 7% +1.0 1.18 ± 6% perf-profile.children.cycles-pp.__alloc_file
0.21 ± 5% +1.0 1.24 ± 7% perf-profile.children.cycles-pp.alloc_empty_file
0.08 ± 5% +1.0 1.13 ± 8% perf-profile.children.cycles-pp.legitimize_path
0.33 ± 3% +1.1 1.40 ± 4% perf-profile.children.cycles-pp.__fput
0.09 ± 11% +1.1 1.19 ± 7% perf-profile.children.cycles-pp.unlazy_walk
0.10 ± 12% +1.1 1.22 ± 7% perf-profile.children.cycles-pp.complete_walk
0.00 +1.3 1.29 ± 4% perf-profile.children.cycles-pp.lockref_put_return
0.43 ± 3% +1.4 1.84 ± 3% perf-profile.children.cycles-pp.task_work_run
0.41 ± 5% +1.4 1.84 ± 4% perf-profile.children.cycles-pp.link_path_walk
0.05 +1.5 1.53 ± 13% perf-profile.children.cycles-pp.fscrypt_file_open
0.21 ± 4% +1.5 1.74 ± 11% perf-profile.children.cycles-pp.ext4_file_open
0.12 ± 3% +1.5 1.65 ± 7% perf-profile.children.cycles-pp.rwsem_wake
0.12 ± 4% +1.5 1.66 ± 7% perf-profile.children.cycles-pp.call_rwsem_wake
0.48 ± 2% +1.5 2.01 ± 3% perf-profile.children.cycles-pp.exit_to_usermode_loop
0.12 ± 5% +1.8 1.90 ± 5% perf-profile.children.cycles-pp.dput
0.33 ± 7% +1.9 2.25 ± 6% perf-profile.children.cycles-pp.up_write
0.58 ± 6% +2.1 2.69 ± 14% perf-profile.children.cycles-pp.start_this_handle
0.45 ± 2% +2.2 2.66 ± 6% perf-profile.children.cycles-pp.do_dentry_open
0.69 ± 5% +2.5 3.18 ± 12% perf-profile.children.cycles-pp.jbd2__journal_start
0.85 ± 5% +2.9 3.75 ± 3% perf-profile.children.cycles-pp.close
0.99 +5.4 6.39 ± 9% perf-profile.children.cycles-pp.ext4_dirty_inode
1.09 +6.6 7.72 ± 9% perf-profile.children.cycles-pp.__mark_inode_dirty
0.67 ± 4% +65.7 66.34 perf-profile.children.cycles-pp.rwsem_down_write_failed
0.67 ± 4% +65.7 66.34 perf-profile.children.cycles-pp.call_rwsem_down_write_failed
0.87 ± 4% +66.7 67.56 perf-profile.children.cycles-pp.down_write
81.35 -17.7 63.68 perf-profile.self.cycles-pp.osq_lock
0.53 ± 3% -0.5 0.03 ±100% perf-profile.self.cycles-pp.ext4_reserve_inode_write
0.57 -0.5 0.11 ± 4% perf-profile.self.cycles-pp.ext4_mark_iloc_dirty
0.90 ± 3% -0.4 0.52 ± 7% perf-profile.self.cycles-pp.ext4_do_update_inode
0.18 ± 6% -0.1 0.05 ± 58% perf-profile.self.cycles-pp.__brelse
0.65 ± 2% -0.1 0.52 ± 4% perf-profile.self.cycles-pp.crc32c_pcl_intel_update
0.21 ± 24% -0.1 0.13 ± 10% perf-profile.self.cycles-pp.__ext4_get_inode_loc
0.12 ± 27% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.ext4_inode_csum_set
0.10 ± 7% -0.0 0.08 ± 6% perf-profile.self.cycles-pp.jbd2_write_access_granted
0.00 +0.1 0.05 perf-profile.self.cycles-pp.lockref_get
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.__vfs_getxattr
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.mntput_no_expire
0.11 ± 4% +0.1 0.16 ± 11% perf-profile.self.cycles-pp.rcu_all_qs
0.00 +0.1 0.05 ± 9% perf-profile.self.cycles-pp.inode_security_rcu
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.inode_has_perm
0.00 +0.1 0.06 ± 15% perf-profile.self.cycles-pp.__fd_install
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.try_to_wake_up
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.may_open
0.00 +0.1 0.06 ± 14% perf-profile.self.cycles-pp.d_lookup
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.security_file_open
0.00 +0.1 0.06 perf-profile.self.cycles-pp.filp_close
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.ext4_break_layouts
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.rcu_cblist_dequeue
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.do_truncate
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp.cap_inode_need_killpriv
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp.__check_heap_object
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.wake_up_q
0.00 +0.1 0.07 ± 17% perf-profile.self.cycles-pp.creat_clo
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.path_init
0.00 +0.1 0.07 ± 10% perf-profile.self.cycles-pp.selinux_task_getsecid
0.00 +0.1 0.07 ± 26% perf-profile.self.cycles-pp.__lookup_mnt
0.00 +0.1 0.07 ± 11% perf-profile.self.cycles-pp.__x64_sys_close
0.00 +0.1 0.07 ± 15% perf-profile.self.cycles-pp.__mark_inode_dirty
0.35 ± 4% +0.1 0.42 ± 8% perf-profile.self.cycles-pp.jbd2_journal_dirty_metadata
0.00 +0.1 0.07 ± 11% perf-profile.self.cycles-pp.security_inode_permission
0.00 +0.1 0.07 ± 20% perf-profile.self.cycles-pp.__sb_end_write
0.00 +0.1 0.08 ± 14% perf-profile.self.cycles-pp.getname_flags
0.00 +0.1 0.08 ± 19% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.00 +0.1 0.08 ± 5% perf-profile.self.cycles-pp.unmap_mapping_pages
0.00 +0.1 0.08 ± 8% perf-profile.self.cycles-pp.do_filp_open
0.00 +0.1 0.08 ± 8% perf-profile.self.cycles-pp.close
0.00 +0.1 0.09 ± 4% perf-profile.self.cycles-pp.walk_component
0.00 +0.1 0.09 ± 4% perf-profile.self.cycles-pp.__call_rcu
0.00 +0.1 0.09 ± 21% perf-profile.self.cycles-pp.__sb_start_write
0.11 ± 14% +0.1 0.20 ± 7% perf-profile.self.cycles-pp._cond_resched
0.00 +0.1 0.09 ± 7% perf-profile.self.cycles-pp.selinux_file_alloc_security
0.00 +0.1 0.09 ± 4% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.1 0.09 ± 15% perf-profile.self.cycles-pp.selinux_inode_setattr
0.00 +0.1 0.09 ± 23% perf-profile.self.cycles-pp.set_root
0.00 +0.1 0.10 ± 17% perf-profile.self.cycles-pp.ext4_discard_preallocations
0.00 +0.1 0.10 perf-profile.self.cycles-pp.exit_to_usermode_loop
0.11 ± 6% +0.1 0.21 ± 5% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.00 +0.1 0.10 ± 8% perf-profile.self.cycles-pp.__slab_free
0.00 +0.1 0.10 ± 10% perf-profile.self.cycles-pp.creat
0.00 +0.1 0.11 ± 4% perf-profile.self.cycles-pp.__check_object_size
0.00 +0.1 0.11 ± 6% perf-profile.self.cycles-pp.lookup_fast
0.00 +0.1 0.11 ± 17% perf-profile.self.cycles-pp.__legitimize_mnt
0.00 +0.1 0.11 ± 19% perf-profile.self.cycles-pp.ext4_release_file
0.00 +0.1 0.11 ± 13% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.00 +0.1 0.11 ± 7% perf-profile.self.cycles-pp.__alloc_fd
0.00 +0.1 0.11 ± 4% perf-profile.self.cycles-pp.task_work_add
0.00 +0.1 0.11 ± 7% perf-profile.self.cycles-pp.selinux_file_open
0.00 +0.1 0.12 ± 9% perf-profile.self.cycles-pp.strncpy_from_user
0.00 +0.1 0.12 ± 12% perf-profile.self.cycles-pp.generic_permission
0.00 +0.1 0.12 ± 12% perf-profile.self.cycles-pp.__fsnotify_parent
0.00 +0.1 0.12 ± 16% perf-profile.self.cycles-pp.file_free_rcu
0.00 +0.1 0.12 ± 65% perf-profile.self.cycles-pp.__follow_mount_rcu
0.00 +0.1 0.12 ± 6% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.03 ±100% +0.1 0.15 ± 7% perf-profile.self.cycles-pp.memset_erms
0.20 ± 4% +0.1 0.32 ± 6% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +0.1 0.13 ± 10% perf-profile.self.cycles-pp.task_work_run
0.00 +0.1 0.13 ± 11% perf-profile.self.cycles-pp.__inode_security_revalidate
0.00 +0.1 0.14 ± 11% perf-profile.self.cycles-pp.avc_has_perm
0.04 ± 57% +0.1 0.18 ± 12% perf-profile.self.cycles-pp.do_sys_open
0.15 ± 7% +0.1 0.30 ± 6% perf-profile.self.cycles-pp.ext4_setattr
0.04 ± 57% +0.1 0.18 ± 4% perf-profile.self.cycles-pp.inode_permission
0.00 +0.1 0.15 perf-profile.self.cycles-pp.new_slab
0.00 +0.2 0.16 ± 45% perf-profile.self.cycles-pp.dput
0.04 ±102% +0.2 0.19 ± 14% perf-profile.self.cycles-pp.__mnt_want_write
0.03 ±100% +0.2 0.19 ± 7% perf-profile.self.cycles-pp.xattr_resolve_name
0.05 ± 8% +0.2 0.22 ± 8% perf-profile.self.cycles-pp.do_dentry_open
0.01 ±173% +0.2 0.18 ± 4% perf-profile.self.cycles-pp.__d_lookup_rcu
0.01 ±173% +0.2 0.18 ± 13% perf-profile.self.cycles-pp.__fput
0.00 +0.2 0.18 ± 11% perf-profile.self.cycles-pp.rwsem_wake
0.00 +0.2 0.19 ± 4% perf-profile.self.cycles-pp._raw_spin_trylock
0.21 ± 6% +0.2 0.40 ± 3% perf-profile.self.cycles-pp.osq_unlock
0.00 +0.2 0.19 ± 19% perf-profile.self.cycles-pp.__virt_addr_valid
0.06 ± 9% +0.2 0.25 ± 9% perf-profile.self.cycles-pp.do_syscall_64
0.06 ± 13% +0.2 0.26 ± 4% perf-profile.self.cycles-pp.notify_change
0.10 ± 5% +0.2 0.30 ± 4% perf-profile.self.cycles-pp.wake_q_add
0.07 ± 13% +0.2 0.28 ± 7% perf-profile.self.cycles-pp.avc_has_perm_noaudit
0.08 ± 11% +0.2 0.30 ± 9% perf-profile.self.cycles-pp.fsnotify
0.07 ± 7% +0.2 0.31 ± 16% perf-profile.self.cycles-pp.link_path_walk
0.00 +0.3 0.26 perf-profile.self.cycles-pp.jbd2__journal_start
0.08 ± 10% +0.3 0.35 ± 2% perf-profile.self.cycles-pp.kmem_cache_free
0.23 ± 7% +0.3 0.55 ± 16% perf-profile.self.cycles-pp.__might_sleep
0.06 ± 13% +0.3 0.39 ± 6% perf-profile.self.cycles-pp.__alloc_file
0.29 ± 6% +0.4 0.65 ± 5% perf-profile.self.cycles-pp.___might_sleep
0.21 ± 11% +0.4 0.58 ± 4% perf-profile.self.cycles-pp.up_write
0.00 +0.4 0.40 ± 16% perf-profile.self.cycles-pp.__d_lookup
0.11 ± 7% +0.4 0.52 ± 7% perf-profile.self.cycles-pp.selinux_inode_permission
0.11 ± 6% +0.4 0.52 ± 7% perf-profile.self.cycles-pp.kmem_cache_alloc
0.18 ± 3% +0.4 0.60 ± 13% perf-profile.self.cycles-pp._raw_read_lock
0.13 ± 11% +0.6 0.69 ± 18% perf-profile.self.cycles-pp.path_openat
0.10 ± 15% +0.6 0.70 ± 18% perf-profile.self.cycles-pp.add_transaction_credits
0.15 ± 12% +0.7 0.83 ± 13% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.17 ± 4% +0.7 0.85 ± 6% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.17 ± 7% +0.7 0.85 ± 3% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.13 ± 6% +0.7 0.85 ± 4% perf-profile.self.cycles-pp.down_write
0.00 +0.8 0.82 ± 7% perf-profile.self.cycles-pp.lockref_get_not_zero
0.04 ± 58% +0.8 0.88 ± 6% perf-profile.self.cycles-pp.lockref_get_not_dead
0.00 +0.8 0.84 ± 2% perf-profile.self.cycles-pp.rwsem_down_write_failed
0.06 ± 7% +0.8 0.90 ± 6% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.23 ± 5% +0.9 1.09 ± 13% perf-profile.self.cycles-pp.jbd2_journal_stop
0.27 ± 8% +1.1 1.38 ± 13% perf-profile.self.cycles-pp.start_this_handle
0.00 +1.3 1.27 ± 4% perf-profile.self.cycles-pp.lockref_put_return
aim7.jobs-per-min
90000 +-+-----------------------------------------------------------------+
| O O O O |
80000 +-+ O O |
70000 O-O O O O O O O O O O |
| O O O O O O |
60000 +-+ |
50000 +-+ |
| |
40000 +-+ |
30000 +-+ |
| |
20000 +-+.+.+.+.+.+.+.+.+.+.+.+ + + + +.+.+.+.+.+.+.+.+.+.+.+.+.+.|
10000 +-+ : : : : : : : : |
| : : : : : : : : |
0 +-+-----------------------------------------------------------------+
aim7.time.system_time
14000 +-+-----------------------------------------------------------------+
| +.+.+.+.+ +.+ + : + : + +.+.+.+.+.+.+.+.+ |
12000 +-+ : : : : : |
| : : : : : |
10000 +-+ : : : : : : |
| : : : : : : : : |
8000 +-+ : : : : : : : : |
| : : : : : : : : |
6000 +-+ : : : : : : : : |
| : : : : : : : : |
4000 +-+ : : : : : : : : |
O O O O O O O O O O O O O O O O O : O O |
2000 +-+ : : O O : O O |
| : : : : |
0 +-+-----------------------------------------------------------------+
aim7.time.elapsed_time
400 +-+-------------------------------------------------------------------+
|.+.+.+.+. .+.+.+..+.+.+.+ + + + +.+.+. .+.+..+.+.+.+.+. .+.+.|
350 +-+ + : : : : : + + |
300 +-+ : : : : : |
| : : : : : |
250 +-+ : : : : : : : : |
| : : : : : : : : |
200 +-+ : : : : : : : : |
| : : : : : : : : |
150 +-+ : : : : : : : : |
100 +-+ : : : : : : : : |
O O O O O O O O O O O O O O O O O O O O O O O |
50 +-+ : : : : |
| : : : : |
0 +-+-------------------------------------------------------------------+
aim7.time.elapsed_time.max
400 +-+-------------------------------------------------------------------+
|.+.+.+.+. .+.+.+..+.+.+.+ + + + +.+.+. .+.+..+.+.+.+.+. .+.+.|
350 +-+ + : : : : : + + |
300 +-+ : : : : : |
| : : : : : |
250 +-+ : : : : : : : : |
| : : : : : : : : |
200 +-+ : : : : : : : : |
| : : : : : : : : |
150 +-+ : : : : : : : : |
100 +-+ : : : : : : : : |
O O O O O O O O O O O O O O O O O O O O O O O |
50 +-+ : : : : |
| : : : : |
0 +-+-------------------------------------------------------------------+
aim7.time.minor_page_faults
120000 +-+----------------------------------------------------------------+
| |
100000 +-+.+.+.+.+. .+. .+.+ + + + +.+.+. .+. .+.+.+. .+. .+.|
| + +.+.+ : : : : : + + + +.+ |
| : : : : : |
80000 +-+ : :: :: :: : |
| O O O: : : : : :: : |
60000 O-O O O O O O O O O :O:O:O:O:OO:O:O O O |
| : : : : : :: : |
40000 +-+ : : : : :: : : |
| : : : : :: : : |
| :: :: : :: |
20000 +-+ : : : : |
| : : : : |
0 +-+----------------------------------------------------------------+
aim7.time.voluntary_context_switches
1.4e+06 +-+---------------------------------------------------------------+
|.+.+. .+. .+ .+. .+ .+.+ |
1.2e+06 +-+ +.+.+ + + + : + + .+.+.+ +.+.+.+.+.+.+.+.|
| : : + : + |
1e+06 +-+ : : : : : |
| : :: : :: : |
800000 +-+ : : : : : : : : |
| : : : : : : : : |
600000 +-+ O : : : : : : : : |
| O O O O:O: :O: : : :O: |
400000 O-+ O O : : : : : : : : |
| O O O O O ::O ::O:O:O ::O O O |
200000 +-+ : : : : |
| : : : : |
0 +-+---------------------------------------------------------------+
aim7.time.involuntary_context_switches
700000 +-+----------------------------------------------------------------+
| |
600000 +-+.+.+.+.+.+.+.+.+.+.+.+ + + + +.+.+.+.+.+.+.+.+.+.+.+.+.+.|
| : : : : : |
500000 +-+ : : : : : |
| : :: :: :: : |
400000 +-+ : : : : : :: : |
| : : : : : :: : |
300000 +-+ : : : : : :: : |
| : : : : :: : : |
200000 +-+ : : : : :: : : |
| :: :: : :: |
100000 O-O O O O O O O O O O O O O O O O OO O O O O |
| : : : : |
0 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm/page_alloc.c] 60f9751638: will-it-scale.per_process_ops 9.1% improvement
by kernel test robot
Greeting,
FYI, we noticed a 9.1% improvement of will-it-scale.per_process_ops due to commit:
commit: 60f97516388a0fc63bcaf31a1cb81d22d4b765b4 ("mm/page_alloc.c: Set ppc->high fraction default to 512")
https://git.kernel.org/cgit/linux/kernel/git/saeed/linux.git net-next-mlx4
in testcase: will-it-scale
on test machine: 192 threads Skylake-SP with 256G memory
with following parameters:
nr_task: 100%
mode: process
test: page_fault2
cpufreq_governor: performance
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see whether the testcase will scale. It builds both a process-based and a threads-based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
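For reference, the page_fault2 case measured here is essentially a tight mmap/dirty/munmap loop over a private mapping of a tmpfs-backed file, which is what drives the shmem_fault/copy_user_highpage entries on the allocation side and the release_pages/free_pcppages_bulk entries on the teardown side of the profile below. The following is only a hedged sketch of one such iteration, not the upstream source (that lives in the will-it-scale repository linked above); the mapping size and the /dev/shm temp-file location are illustrative assumptions.

/* Hedged approximation of a page_fault2-style worker, not the upstream testcase. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAPLEN (128UL * 1024 * 1024)    /* illustrative size, not the real one */

int main(void)
{
        long pagesz = sysconf(_SC_PAGESIZE);
        char path[] = "/dev/shm/wis2-XXXXXX";  /* tmpfs backing => shmem_fault on first touch */
        int fd = mkstemp(path);

        if (fd < 0 || ftruncate(fd, MAPLEN) < 0) {
                perror("setup");
                return 1;
        }
        unlink(path);

        for (;;) {
                /* private file mapping: the first store to each page is a CoW fault */
                char *p = mmap(NULL, MAPLEN, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE, fd, 0);
                if (p == MAP_FAILED) {
                        perror("mmap");
                        return 1;
                }
                for (size_t off = 0; off < MAPLEN; off += pagesz)
                        p[off] = 1;     /* one minor fault + one page allocation per page */
                munmap(p, MAPLEN);      /* pages go back to the allocator via release_pages() */
        }
}

With nr_task=100% every CPU runs a loop like this, so the allocate and free paths all contend on the zone lock, which is presumably why the native_queued_spin_lock_slowpath share under get_page_from_freelist and free_pcppages_bulk is the part of the profile that moves with this commit.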
In addition to that, the commit also has a significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | aim7: aim7.jobs-per-min 9.8% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | load=3000 |
| | test=page_test |
+------------------+-----------------------------------------------------------------------+
| testcase: change | lmbench3: lmbench3.PIPE.bandwidth.MB/sec 21.9% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | mode=development |
| | nr_threads=50% |
| | test=PIPE |
| | test_memory_size=50% |
| | ucode=0xb00002e |
+------------------+-----------------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/process/100%/debian-x86_64-2018-04-03.cgz/lkp-skl-4sp1/page_fault2/will-it-scale
commit:
78ca31f6bc ("net/mlx4: Change number of max MSIXs from 64 to 1024")
60f9751638 ("mm/page_alloc.c: Set ppc->high fraction default to 512")
78ca31f6bc2c76ac 60f97516388a0fc63bcaf31a1cb
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 4% 2:4 perf-profile.calltrace.cycles-pp.sync_regs.error_entry.testcase
5:4 8% 5:4 perf-profile.calltrace.cycles-pp.error_entry.testcase
5:4 8% 5:4 perf-profile.children.cycles-pp.error_entry
2:4 4% 2:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
53775 +9.1% 58672 will-it-scale.per_process_ops
10325022 +9.1% 11265130 will-it-scale.workload
2.01 ± 4% +0.2 2.20 ± 2% mpstat.cpu.all.usr%
142061 ± 11% -11.6% 125624 ± 2% softirqs.CPU109.TIMER
167.29 +2.6% 171.59 turbostat.RAMWatt
4969 -8.3% 4556 vmstat.system.cs
7.776e+08 +9.7% 8.528e+08 numa-numastat.node0.local_node
7.776e+08 +9.7% 8.528e+08 numa-numastat.node0.numa_hit
43894 ± 16% -62.5% 16439 ± 66% numa-numastat.node0.other_node
28438 ± 44% +58.7% 45125 ± 12% numa-numastat.node1.other_node
4.72 -17.6% 3.89 irq_exception_noise.__do_page_fault.80th
46.52 -83.0% 7.91 ± 3% irq_exception_noise.__do_page_fault.90th
50.66 +63.9% 83.01 irq_exception_noise.__do_page_fault.95th
57.32 +78.4% 102.24 ± 2% irq_exception_noise.__do_page_fault.99th
3066 ± 6% -16.2% 2569 ± 5% irq_exception_noise.softirq_nr
3.128e+09 +9.1% 3.412e+09 proc-vmstat.numa_hit
3.128e+09 +9.1% 3.412e+09 proc-vmstat.numa_local
3.129e+09 +9.1% 3.412e+09 proc-vmstat.pgalloc_normal
3.115e+09 +9.1% 3.397e+09 proc-vmstat.pgfault
3.128e+09 +9.1% 3.412e+09 proc-vmstat.pgfree
5751 ± 10% -25.3% 4297 ± 6% slabinfo.eventpoll_pwq.active_objs
5751 ± 10% -25.3% 4297 ± 6% slabinfo.eventpoll_pwq.num_objs
6512 ± 10% -15.4% 5510 ± 9% slabinfo.kmalloc-rcl-64.active_objs
6512 ± 10% -15.4% 5510 ± 9% slabinfo.kmalloc-rcl-64.num_objs
524.50 ± 8% +14.7% 601.50 ± 8% slabinfo.skbuff_fclone_cache.active_objs
524.50 ± 8% +14.7% 601.50 ± 8% slabinfo.skbuff_fclone_cache.num_objs
29885158 ± 2% -31.1% 20579741 ± 7% sched_debug.cfs_rq:/.MIN_vruntime.max
29885158 ± 2% -31.1% 20579741 ± 7% sched_debug.cfs_rq:/.max_vruntime.max
34663 ± 6% -24.6% 26135 ± 16% sched_debug.cpu.nr_switches.stddev
34440 ± 6% -25.1% 25811 ± 16% sched_debug.cpu.sched_count.stddev
695.10 ± 35% +31.8% 916.17 ± 21% sched_debug.cpu.sched_goidle.stddev
2742 -8.8% 2500 sched_debug.cpu.ttwu_count.avg
17218 ± 6% -25.1% 12900 ± 16% sched_debug.cpu.ttwu_count.stddev
2607 -9.2% 2367 sched_debug.cpu.ttwu_local.avg
17131 ± 6% -25.2% 12821 ± 17% sched_debug.cpu.ttwu_local.stddev
34992 ± 12% +57.5% 55106 ± 6% numa-meminfo.node0.KReclaimable
9098 ± 10% +95.6% 17793 ± 23% numa-meminfo.node0.Mapped
34992 ± 12% +57.5% 55106 ± 6% numa-meminfo.node0.SReclaimable
63787 ± 7% +28.8% 82188 ± 8% numa-meminfo.node0.SUnreclaim
98780 ± 9% +39.0% 137296 ± 6% numa-meminfo.node0.Slab
14203 ± 14% -26.0% 10516 ± 14% numa-meminfo.node1.Mapped
272149 ± 5% -3.5% 262648 ± 4% numa-meminfo.node1.Unevictable
49906 ± 6% -31.6% 34140 ± 20% numa-meminfo.node2.KReclaimable
24382 ± 29% -32.1% 16553 ± 2% numa-meminfo.node2.PageTables
49906 ± 6% -31.6% 34140 ± 20% numa-meminfo.node2.SReclaimable
78716 ± 11% -24.4% 59505 ± 11% numa-meminfo.node2.SUnreclaim
128624 ± 9% -27.2% 93646 ± 14% numa-meminfo.node2.Slab
2278 ± 10% +97.5% 4499 ± 21% numa-vmstat.node0.nr_mapped
8748 ± 12% +57.5% 13776 ± 6% numa-vmstat.node0.nr_slab_reclaimable
15946 ± 7% +28.9% 20547 ± 8% numa-vmstat.node0.nr_slab_unreclaimable
3.835e+08 +11.6% 4.279e+08 numa-vmstat.node0.numa_hit
3.834e+08 +11.6% 4.279e+08 numa-vmstat.node0.numa_local
43664 ± 17% -62.3% 16459 ± 65% numa-vmstat.node0.numa_other
3605 ± 14% -26.3% 2655 ± 14% numa-vmstat.node1.nr_mapped
68036 ± 5% -3.5% 65661 ± 4% numa-vmstat.node1.nr_unevictable
68036 ± 5% -3.5% 65661 ± 4% numa-vmstat.node1.nr_zone_unevictable
3.865e+08 +10.6% 4.274e+08 numa-vmstat.node1.numa_hit
3.864e+08 +10.6% 4.273e+08 numa-vmstat.node1.numa_local
6083 ± 29% -32.0% 4139 ± 2% numa-vmstat.node2.nr_page_table_pages
12477 ± 6% -31.6% 8535 ± 20% numa-vmstat.node2.nr_slab_reclaimable
19677 ± 11% -24.4% 14876 ± 11% numa-vmstat.node2.nr_slab_unreclaimable
3.884e+08 +10.3% 4.283e+08 numa-vmstat.node2.numa_hit
3.883e+08 +10.3% 4.281e+08 numa-vmstat.node2.numa_local
3.878e+08 +10.7% 4.294e+08 numa-vmstat.node3.numa_hit
3.877e+08 +10.7% 4.293e+08 numa-vmstat.node3.numa_local
26.76 -4.6% 25.52 ± 2% perf-stat.i.MPKI
1.088e+10 +5.6% 1.149e+10 perf-stat.i.branch-instructions
67273401 -2.1% 65836751 perf-stat.i.branch-misses
59.45 +3.0 62.49 perf-stat.i.cache-miss-rate%
8.705e+08 +4.7% 9.113e+08 perf-stat.i.cache-misses
4721 -11.1% 4197 perf-stat.i.context-switches
7.66 -6.2% 7.19 perf-stat.i.cpi
55.33 -3.9% 53.20 perf-stat.i.cpu-migrations
494.82 -3.7% 476.56 ± 2% perf-stat.i.cycles-between-cache-misses
1.497e+10 +6.3% 1.592e+10 perf-stat.i.dTLB-loads
93308105 +7.7% 1.005e+08 perf-stat.i.dTLB-store-misses
7.58e+09 +7.8% 8.174e+09 perf-stat.i.dTLB-stores
1.141e+08 ± 3% +10.7% 1.262e+08 ± 6% perf-stat.i.iTLB-loads
5.464e+10 +6.0% 5.793e+10 perf-stat.i.instructions
0.13 +6.6% 0.14 perf-stat.i.ipc
10174625 +8.2% 11004859 perf-stat.i.minor-faults
1.962e+08 +18.7% 2.328e+08 perf-stat.i.node-loads
5.10 +1.6 6.74 ± 5% perf-stat.i.node-store-miss-rate%
3031373 +11.4% 3377511 perf-stat.i.node-store-misses
56918216 -9.5% 51536120 perf-stat.i.node-stores
10178999 +8.2% 11010518 perf-stat.i.page-faults
26.72 -6.4% 25.00 perf-stat.overall.MPKI
0.62 -0.0 0.57 perf-stat.overall.branch-miss-rate%
59.60 +3.3 62.92 perf-stat.overall.cache-miss-rate%
7.65 -6.1% 7.18 perf-stat.overall.cpi
480.23 -5.0% 456.23 perf-stat.overall.cycles-between-cache-misses
0.13 +6.5% 0.14 perf-stat.overall.ipc
0.43 ± 10% -0.1 0.34 perf-stat.overall.node-load-miss-rate%
5.06 +1.1 6.15 perf-stat.overall.node-store-miss-rate%
1617960 -2.1% 1584621 perf-stat.overall.path-length
1.081e+10 +5.6% 1.142e+10 perf-stat.ps.branch-instructions
67186069 -2.3% 65653528 perf-stat.ps.branch-misses
8.644e+08 +4.8% 9.056e+08 perf-stat.ps.cache-misses
4711 -10.1% 4234 perf-stat.ps.context-switches
55.38 ± 2% -3.7% 53.33 perf-stat.ps.cpu-migrations
1.488e+10 +6.3% 1.582e+10 perf-stat.ps.dTLB-loads
92669520 +7.8% 99918550 perf-stat.ps.dTLB-store-misses
7.535e+09 +7.8% 8.123e+09 perf-stat.ps.dTLB-stores
5.429e+10 +6.0% 5.756e+10 perf-stat.ps.instructions
10113507 +8.3% 10947922 perf-stat.ps.minor-faults
1.948e+08 +18.8% 2.314e+08 perf-stat.ps.node-loads
3010974 +11.5% 3357072 perf-stat.ps.node-store-misses
56529573 -9.4% 51222328 perf-stat.ps.node-stores
10112933 +8.3% 10949314 perf-stat.ps.page-faults
1.671e+13 +6.9% 1.785e+13 perf-stat.total.instructions
35.35 -2.9 32.44 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu
35.28 -2.9 32.41 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages
32.09 -2.4 29.65 ± 6% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu.unmap_page_range
32.45 -2.4 30.05 ± 6% perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.tlb_flush_mmu.unmap_page_range.unmap_vmas
39.69 -2.1 37.56 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault
39.95 -2.1 37.82 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault
40.17 -2.1 38.07 perf-profile.calltrace.cycles-pp.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
37.95 -2.0 35.91 perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault
37.87 -2.0 35.87 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
33.87 -1.0 32.90 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.unmap_page_range.unmap_vmas.unmap_region
34.06 -0.9 33.15 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
39.77 -0.8 38.98 perf-profile.calltrace.cycles-pp.munmap
39.76 -0.8 38.98 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
39.76 -0.8 38.98 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
39.74 -0.8 38.96 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
39.74 -0.8 38.96 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
39.74 -0.8 38.96 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
39.74 -0.8 38.96 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
4.28 -0.3 3.94 ± 3% perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
4.22 -0.3 3.90 ± 3% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu.tlb_finish_mmu
4.46 -0.2 4.27 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
4.50 -0.2 4.32 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
4.48 -0.2 4.30 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap
1.13 -0.1 1.03 perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
1.34 -0.1 1.25 perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
1.42 -0.1 1.35 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
1.68 -0.1 1.62 perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.56 -0.1 1.50 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.69 +0.0 0.73 perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault
0.79 +0.1 0.85 perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode.testcase
0.99 +0.1 1.10 perf-profile.calltrace.cycles-pp._raw_spin_lock.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
58.52 +0.7 59.19 perf-profile.calltrace.cycles-pp.page_fault.testcase
59.28 +0.8 60.03 perf-profile.calltrace.cycles-pp.testcase
6.92 +1.1 7.97 perf-profile.calltrace.cycles-pp.copy_page.copy_user_highpage.__handle_mm_fault.handle_mm_fault.__do_page_fault
7.00 +1.1 8.05 perf-profile.calltrace.cycles-pp.copy_user_highpage.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.47 ± 2% +1.3 2.80 ± 13% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault
1.43 ± 2% +1.3 2.77 ± 14% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte
0.84 +1.3 2.18 ± 57% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.unmap_page_range
0.85 +1.3 2.19 ± 57% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.unmap_page_range.unmap_vmas
2.38 +1.4 3.77 ± 10% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault
2.46 +1.4 3.86 ± 10% perf-profile.calltrace.cycles-pp.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
4.21 +1.5 5.76 ± 7% perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
4.16 +1.6 5.71 ± 7% perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
74.45 -4.9 69.59 ± 2% perf-profile.children.cycles-pp._raw_spin_lock
36.35 -2.8 33.57 ± 5% perf-profile.children.cycles-pp.free_pcppages_bulk
36.76 -2.7 34.02 ± 5% perf-profile.children.cycles-pp.free_unref_page_list
39.87 -2.1 37.73 perf-profile.children.cycles-pp.get_page_from_freelist
40.09 -2.1 37.96 perf-profile.children.cycles-pp.__alloc_pages_nodemask
40.21 -2.1 38.09 perf-profile.children.cycles-pp.alloc_pages_vma
75.70 -2.1 73.61 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
38.42 -1.1 37.29 perf-profile.children.cycles-pp.release_pages
38.54 -1.1 37.46 perf-profile.children.cycles-pp.tlb_flush_mmu
39.77 -0.8 38.98 perf-profile.children.cycles-pp.munmap
39.89 -0.8 39.10 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
39.89 -0.8 39.10 perf-profile.children.cycles-pp.do_syscall_64
39.74 -0.8 38.96 perf-profile.children.cycles-pp.__vm_munmap
39.74 -0.8 38.96 perf-profile.children.cycles-pp.__x64_sys_munmap
39.74 -0.8 38.96 perf-profile.children.cycles-pp.unmap_region
39.74 -0.8 38.97 perf-profile.children.cycles-pp.__do_munmap
4.51 -0.2 4.33 perf-profile.children.cycles-pp.tlb_finish_mmu
1.13 -0.1 1.04 perf-profile.children.cycles-pp.find_get_entry
1.44 -0.1 1.37 perf-profile.children.cycles-pp.shmem_getpage_gfp
1.34 -0.1 1.27 perf-profile.children.cycles-pp.find_lock_entry
1.57 -0.1 1.51 perf-profile.children.cycles-pp.shmem_fault
1.69 -0.1 1.63 perf-profile.children.cycles-pp.__do_fault
0.12 +0.0 0.13 perf-profile.children.cycles-pp.___might_sleep
0.06 +0.0 0.07 perf-profile.children.cycles-pp.mem_cgroup_update_lru_size
0.12 ± 7% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.22 ± 4% +0.0 0.23 ± 2% perf-profile.children.cycles-pp.__count_memcg_events
0.16 +0.0 0.18 ± 4% perf-profile.children.cycles-pp.page_remove_rmap
0.30 +0.0 0.33 ± 2% perf-profile.children.cycles-pp.xas_load
0.25 +0.0 0.28 perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.34 +0.0 0.38 perf-profile.children.cycles-pp.__mod_lruvec_state
0.33 +0.0 0.36 perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.34 ± 2% +0.0 0.38 perf-profile.children.cycles-pp.__mod_memcg_state
0.68 +0.0 0.72 perf-profile.children.cycles-pp.sync_regs
0.70 +0.0 0.74 perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.92 +0.0 0.97 perf-profile.children.cycles-pp.native_irq_return_iret
0.79 +0.1 0.85 perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.20 ± 3% +0.1 0.27 ± 6% perf-profile.children.cycles-pp.free_pages_and_swap_cache
1.00 +0.1 1.07 ± 2% perf-profile.children.cycles-pp.__list_del_entry_valid
60.07 +0.8 60.86 perf-profile.children.cycles-pp.testcase
7.00 +1.1 8.05 perf-profile.children.cycles-pp.copy_user_highpage
6.94 +1.1 7.99 perf-profile.children.cycles-pp.copy_page
2.40 +1.4 3.79 ± 10% perf-profile.children.cycles-pp.pagevec_lru_move_fn
2.47 +1.4 3.87 ± 10% perf-profile.children.cycles-pp.__lru_cache_add
4.19 +1.5 5.74 ± 7% perf-profile.children.cycles-pp.alloc_set_pte
4.21 +1.6 5.77 ± 7% perf-profile.children.cycles-pp.finish_fault
2.45 ± 2% +2.8 5.26 ± 33% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
75.70 -2.1 73.61 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.82 -0.1 0.71 perf-profile.self.cycles-pp.find_get_entry
0.73 ± 2% -0.1 0.64 perf-profile.self.cycles-pp.get_page_from_freelist
0.10 +0.0 0.11 perf-profile.self.cycles-pp.__mod_zone_page_state
0.05 +0.0 0.06 perf-profile.self.cycles-pp.mem_cgroup_commit_charge
0.12 +0.0 0.13 perf-profile.self.cycles-pp.___might_sleep
0.14 ± 3% +0.0 0.15 ± 3% perf-profile.self.cycles-pp.page_remove_rmap
0.36 +0.0 0.39 perf-profile.self.cycles-pp.__handle_mm_fault
0.21 ± 3% +0.0 0.23 ± 3% perf-profile.self.cycles-pp.__count_memcg_events
0.25 +0.0 0.28 ± 3% perf-profile.self.cycles-pp.xas_load
0.38 +0.0 0.41 perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.68 +0.0 0.71 perf-profile.self.cycles-pp.sync_regs
0.34 +0.0 0.37 perf-profile.self.cycles-pp.__mod_memcg_state
0.17 ± 3% +0.0 0.20 ± 4% perf-profile.self.cycles-pp.free_unref_page_list
0.28 +0.0 0.32 ± 2% perf-profile.self.cycles-pp.release_pages
0.67 +0.0 0.71 perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.92 +0.1 0.97 perf-profile.self.cycles-pp.native_irq_return_iret
0.20 ± 4% +0.1 0.27 ± 6% perf-profile.self.cycles-pp.free_pages_and_swap_cache
0.99 +0.1 1.07 ± 2% perf-profile.self.cycles-pp.__list_del_entry_valid
1.90 +0.1 1.99 perf-profile.self.cycles-pp.testcase
0.85 +0.1 0.99 ± 4% perf-profile.self.cycles-pp.free_pcppages_bulk
0.96 +0.3 1.25 ± 3% perf-profile.self.cycles-pp.unmap_page_range
6.91 +1.0 7.96 perf-profile.self.cycles-pp.copy_page
412.00 ± 62% -49.2% 209.25 ± 28% interrupts.33:PCI-MSI.26738690-edge.eth0-TxRx-1
453.50 ± 25% -49.3% 229.75 ± 19% interrupts.34:PCI-MSI.26738691-edge.eth0-TxRx-2
267.50 ± 63% +169.4% 720.75 ± 50% interrupts.35:PCI-MSI.26738692-edge.eth0-TxRx-3
1131833 +10.7% 1252460 ± 4% interrupts.CAL:Function_call_interrupts
134.50 ± 34% +99.3% 268.00 ± 25% interrupts.CPU1.RES:Rescheduling_interrupts
453.50 ± 25% -49.3% 229.75 ± 19% interrupts.CPU10.34:PCI-MSI.26738691-edge.eth0-TxRx-2
5847 +11.6% 6524 ± 4% interrupts.CPU100.CAL:Function_call_interrupts
5845 +11.5% 6517 ± 4% interrupts.CPU101.CAL:Function_call_interrupts
5593 ± 10% +17.0% 6546 ± 4% interrupts.CPU102.CAL:Function_call_interrupts
5861 +12.0% 6563 ± 5% interrupts.CPU104.CAL:Function_call_interrupts
5876 +9.1% 6411 ± 6% interrupts.CPU105.CAL:Function_call_interrupts
5856 +11.1% 6506 ± 4% interrupts.CPU106.CAL:Function_call_interrupts
49.50 ±112% +929.8% 509.75 ± 96% interrupts.CPU107.RES:Rescheduling_interrupts
31.75 ±108% +704.7% 255.50 ± 62% interrupts.CPU108.RES:Rescheduling_interrupts
5901 +11.0% 6547 ± 5% interrupts.CPU109.CAL:Function_call_interrupts
267.50 ± 63% +169.4% 720.75 ± 50% interrupts.CPU11.35:PCI-MSI.26738692-edge.eth0-TxRx-3
5908 +12.0% 6619 ± 5% interrupts.CPU110.CAL:Function_call_interrupts
5965 +11.3% 6640 ± 5% interrupts.CPU111.CAL:Function_call_interrupts
5930 +11.2% 6593 ± 5% interrupts.CPU112.CAL:Function_call_interrupts
5963 +11.9% 6675 ± 5% interrupts.CPU113.CAL:Function_call_interrupts
5970 +11.0% 6627 ± 4% interrupts.CPU118.CAL:Function_call_interrupts
5984 +9.8% 6571 ± 4% interrupts.CPU119.CAL:Function_call_interrupts
50.75 ± 92% +590.1% 350.25 ± 67% interrupts.CPU12.RES:Rescheduling_interrupts
5970 +10.4% 6589 ± 4% interrupts.CPU120.CAL:Function_call_interrupts
5926 +9.5% 6488 ± 5% interrupts.CPU121.CAL:Function_call_interrupts
5992 +10.3% 6610 ± 4% interrupts.CPU122.CAL:Function_call_interrupts
5788 ± 6% +13.8% 6586 ± 5% interrupts.CPU123.CAL:Function_call_interrupts
6009 +9.5% 6581 ± 5% interrupts.CPU125.CAL:Function_call_interrupts
5991 +11.3% 6668 ± 5% interrupts.CPU126.CAL:Function_call_interrupts
5975 +10.6% 6608 ± 4% interrupts.CPU127.CAL:Function_call_interrupts
5989 +10.5% 6615 ± 5% interrupts.CPU128.CAL:Function_call_interrupts
5925 +11.9% 6632 ± 5% interrupts.CPU129.CAL:Function_call_interrupts
5930 +10.6% 6559 ± 5% interrupts.CPU130.CAL:Function_call_interrupts
5940 +11.1% 6600 ± 5% interrupts.CPU131.CAL:Function_call_interrupts
5902 +11.2% 6562 ± 4% interrupts.CPU132.CAL:Function_call_interrupts
5863 +11.6% 6545 ± 4% interrupts.CPU133.CAL:Function_call_interrupts
5870 ± 3% +12.1% 6580 ± 3% interrupts.CPU134.CAL:Function_call_interrupts
5888 +11.9% 6587 ± 4% interrupts.CPU135.CAL:Function_call_interrupts
5828 +10.6% 6446 ± 4% interrupts.CPU136.CAL:Function_call_interrupts
5898 +11.1% 6551 ± 4% interrupts.CPU137.CAL:Function_call_interrupts
5936 ± 2% +10.9% 6581 ± 4% interrupts.CPU138.CAL:Function_call_interrupts
5864 +10.3% 6470 ± 4% interrupts.CPU14.CAL:Function_call_interrupts
5900 ± 2% +12.3% 6628 ± 5% interrupts.CPU140.CAL:Function_call_interrupts
5922 ± 2% +11.4% 6595 ± 5% interrupts.CPU142.CAL:Function_call_interrupts
5916 ± 2% +12.1% 6629 ± 5% interrupts.CPU143.CAL:Function_call_interrupts
5785 ± 4% +12.2% 6491 ± 4% interrupts.CPU145.CAL:Function_call_interrupts
424.75 ± 89% -87.3% 53.75 ± 73% interrupts.CPU145.RES:Rescheduling_interrupts
5939 +10.6% 6570 ± 5% interrupts.CPU147.CAL:Function_call_interrupts
5304 ± 13% +23.5% 6552 ± 4% interrupts.CPU149.CAL:Function_call_interrupts
5839 +11.0% 6481 ± 4% interrupts.CPU15.CAL:Function_call_interrupts
24.50 ± 60% +612.2% 174.50 ±101% interrupts.CPU15.RES:Rescheduling_interrupts
5600 ± 10% +17.6% 6583 ± 4% interrupts.CPU150.CAL:Function_call_interrupts
1062 ± 93% -87.8% 129.75 ±107% interrupts.CPU150.RES:Rescheduling_interrupts
5951 ± 2% +10.6% 6580 ± 5% interrupts.CPU151.CAL:Function_call_interrupts
402.25 ± 33% -80.8% 77.25 ±100% interrupts.CPU151.RES:Rescheduling_interrupts
5921 +11.7% 6612 ± 4% interrupts.CPU152.CAL:Function_call_interrupts
394.00 ± 80% -80.8% 75.75 ±119% interrupts.CPU152.RES:Rescheduling_interrupts
308.25 ± 40% -88.0% 37.00 ± 85% interrupts.CPU153.RES:Rescheduling_interrupts
5917 +10.9% 6563 ± 3% interrupts.CPU154.CAL:Function_call_interrupts
5880 +12.0% 6587 ± 3% interrupts.CPU155.CAL:Function_call_interrupts
183.75 ± 22% -69.3% 56.50 ±125% interrupts.CPU155.RES:Rescheduling_interrupts
5875 +11.7% 6564 ± 4% interrupts.CPU156.CAL:Function_call_interrupts
164.50 ± 38% -77.2% 37.50 ± 82% interrupts.CPU156.RES:Rescheduling_interrupts
5905 +11.1% 6560 ± 4% interrupts.CPU157.CAL:Function_call_interrupts
5895 +11.1% 6549 ± 4% interrupts.CPU158.CAL:Function_call_interrupts
5978 +10.4% 6598 ± 4% interrupts.CPU159.CAL:Function_call_interrupts
73.75 ± 39% -90.2% 7.25 ± 5% interrupts.CPU159.RES:Rescheduling_interrupts
5835 +11.0% 6479 ± 4% interrupts.CPU16.CAL:Function_call_interrupts
5979 ± 2% +10.2% 6592 ± 4% interrupts.CPU160.CAL:Function_call_interrupts
252.25 ±110% -82.1% 45.25 ±138% interrupts.CPU160.RES:Rescheduling_interrupts
5901 ± 2% +9.9% 6484 ± 4% interrupts.CPU161.CAL:Function_call_interrupts
5970 +11.1% 6633 ± 5% interrupts.CPU162.CAL:Function_call_interrupts
5837 ± 4% +12.9% 6591 ± 5% interrupts.CPU163.CAL:Function_call_interrupts
5872 ± 4% +13.0% 6637 ± 5% interrupts.CPU165.CAL:Function_call_interrupts
435.75 ±107% -78.7% 93.00 ± 87% interrupts.CPU165.RES:Rescheduling_interrupts
5950 ± 2% +10.9% 6598 ± 5% interrupts.CPU167.CAL:Function_call_interrupts
5641 ± 7% +16.9% 6592 ± 5% interrupts.CPU168.CAL:Function_call_interrupts
5835 +11.0% 6475 ± 5% interrupts.CPU17.CAL:Function_call_interrupts
5950 +10.3% 6561 ± 5% interrupts.CPU170.CAL:Function_call_interrupts
5971 +10.2% 6578 ± 5% interrupts.CPU172.CAL:Function_call_interrupts
5952 +11.1% 6615 ± 4% interrupts.CPU173.CAL:Function_call_interrupts
5988 ± 2% +9.6% 6562 ± 4% interrupts.CPU174.CAL:Function_call_interrupts
5980 ± 2% +10.4% 6601 ± 4% interrupts.CPU175.CAL:Function_call_interrupts
5940 ± 2% +11.4% 6616 ± 4% interrupts.CPU176.CAL:Function_call_interrupts
5933 +11.1% 6592 ± 4% interrupts.CPU178.CAL:Function_call_interrupts
5898 ± 2% +11.4% 6570 ± 4% interrupts.CPU179.CAL:Function_call_interrupts
5857 +11.0% 6500 ± 5% interrupts.CPU18.CAL:Function_call_interrupts
40.25 ±128% +120.5% 88.75 ± 70% interrupts.CPU18.RES:Rescheduling_interrupts
5937 +12.0% 6648 ± 4% interrupts.CPU180.CAL:Function_call_interrupts
5919 ± 2% +11.8% 6619 ± 4% interrupts.CPU181.CAL:Function_call_interrupts
5914 ± 2% +11.3% 6583 ± 4% interrupts.CPU182.CAL:Function_call_interrupts
5880 ± 2% +12.1% 6593 ± 4% interrupts.CPU183.CAL:Function_call_interrupts
5886 +11.8% 6578 ± 4% interrupts.CPU184.CAL:Function_call_interrupts
22.00 ± 39% +152.3% 55.50 ± 75% interrupts.CPU184.RES:Rescheduling_interrupts
5942 ± 3% +11.3% 6611 ± 5% interrupts.CPU185.CAL:Function_call_interrupts
5979 ± 3% +8.4% 6483 ± 4% interrupts.CPU186.CAL:Function_call_interrupts
5954 ± 3% +10.0% 6551 ± 4% interrupts.CPU187.CAL:Function_call_interrupts
5949 ± 3% +10.3% 6563 ± 4% interrupts.CPU188.CAL:Function_call_interrupts
5856 +10.7% 6480 ± 4% interrupts.CPU19.CAL:Function_call_interrupts
24.50 ± 54% +427.6% 129.25 ± 67% interrupts.CPU19.RES:Rescheduling_interrupts
5867 +9.3% 6413 ± 4% interrupts.CPU20.CAL:Function_call_interrupts
5872 +9.7% 6440 ± 4% interrupts.CPU21.CAL:Function_call_interrupts
5939 +9.3% 6489 ± 4% interrupts.CPU22.CAL:Function_call_interrupts
5952 +8.8% 6475 ± 4% interrupts.CPU23.CAL:Function_call_interrupts
5947 +9.1% 6488 ± 4% interrupts.CPU26.CAL:Function_call_interrupts
5633 ± 9% +14.9% 6473 ± 3% interrupts.CPU27.CAL:Function_call_interrupts
5949 +8.3% 6441 ± 3% interrupts.CPU28.CAL:Function_call_interrupts
5922 +8.9% 6449 ± 3% interrupts.CPU29.CAL:Function_call_interrupts
6014 +7.7% 6477 ± 3% interrupts.CPU3.CAL:Function_call_interrupts
37.25 ± 84% +276.5% 140.25 ± 88% interrupts.CPU3.RES:Rescheduling_interrupts
5915 +10.0% 6504 ± 4% interrupts.CPU32.CAL:Function_call_interrupts
5900 +10.0% 6491 ± 4% interrupts.CPU33.CAL:Function_call_interrupts
5892 +10.2% 6495 ± 4% interrupts.CPU34.CAL:Function_call_interrupts
5865 +10.7% 6494 ± 5% interrupts.CPU35.CAL:Function_call_interrupts
5862 +10.3% 6465 ± 4% interrupts.CPU36.CAL:Function_call_interrupts
5787 ± 2% +12.3% 6497 ± 4% interrupts.CPU37.CAL:Function_call_interrupts
5982 +8.5% 6492 ± 4% interrupts.CPU4.CAL:Function_call_interrupts
5903 +10.2% 6508 ± 5% interrupts.CPU41.CAL:Function_call_interrupts
5890 +10.7% 6520 ± 5% interrupts.CPU42.CAL:Function_call_interrupts
5902 +7.8% 6361 ± 4% interrupts.CPU43.CAL:Function_call_interrupts
5894 +10.6% 6519 ± 4% interrupts.CPU45.CAL:Function_call_interrupts
5905 +10.4% 6521 ± 4% interrupts.CPU46.CAL:Function_call_interrupts
5910 ± 2% +10.0% 6500 ± 4% interrupts.CPU47.CAL:Function_call_interrupts
5895 ± 2% +10.7% 6524 ± 4% interrupts.CPU48.CAL:Function_call_interrupts
5777 ± 2% +12.3% 6488 ± 4% interrupts.CPU49.CAL:Function_call_interrupts
5904 +9.7% 6478 ± 4% interrupts.CPU51.CAL:Function_call_interrupts
5769 ± 4% +13.3% 6534 ± 4% interrupts.CPU52.CAL:Function_call_interrupts
5318 ± 10% +23.0% 6542 ± 4% interrupts.CPU53.CAL:Function_call_interrupts
5607 ± 7% +17.2% 6569 ± 4% interrupts.CPU54.CAL:Function_call_interrupts
1119 ± 60% -85.5% 162.50 ±146% interrupts.CPU54.RES:Rescheduling_interrupts
5885 +10.5% 6503 ± 5% interrupts.CPU55.CAL:Function_call_interrupts
552.50 ± 33% -80.8% 106.25 ± 86% interrupts.CPU55.RES:Rescheduling_interrupts
5883 +10.8% 6518 ± 4% interrupts.CPU56.CAL:Function_call_interrupts
917.00 ± 66% -91.9% 74.00 ±121% interrupts.CPU56.RES:Rescheduling_interrupts
662.75 ± 72% -90.1% 65.75 ± 82% interrupts.CPU57.RES:Rescheduling_interrupts
5866 +11.0% 6510 ± 5% interrupts.CPU58.CAL:Function_call_interrupts
339.75 ± 14% -81.4% 63.25 ± 86% interrupts.CPU58.RES:Rescheduling_interrupts
5834 +11.8% 6524 ± 4% interrupts.CPU59.CAL:Function_call_interrupts
309.50 ± 22% -79.7% 62.75 ± 92% interrupts.CPU59.RES:Rescheduling_interrupts
5859 +11.5% 6531 ± 4% interrupts.CPU60.CAL:Function_call_interrupts
5863 +11.2% 6522 ± 5% interrupts.CPU61.CAL:Function_call_interrupts
5899 +10.9% 6544 ± 5% interrupts.CPU62.CAL:Function_call_interrupts
5888 +11.4% 6559 ± 5% interrupts.CPU63.CAL:Function_call_interrupts
5814 ± 3% +13.0% 6567 ± 5% interrupts.CPU64.CAL:Function_call_interrupts
5912 +10.5% 6535 ± 4% interrupts.CPU65.CAL:Function_call_interrupts
5994 +9.4% 6556 ± 4% interrupts.CPU66.CAL:Function_call_interrupts
287.75 ± 71% -89.5% 30.25 ±108% interrupts.CPU66.RES:Rescheduling_interrupts
5748 ± 4% +14.7% 6591 ± 5% interrupts.CPU67.CAL:Function_call_interrupts
5741 ± 5% +14.3% 6564 ± 4% interrupts.CPU69.CAL:Function_call_interrupts
5950 +10.4% 6571 ± 5% interrupts.CPU70.CAL:Function_call_interrupts
5807 ± 2% +13.8% 6611 ± 4% interrupts.CPU71.CAL:Function_call_interrupts
5679 ± 7% +16.8% 6636 ± 5% interrupts.CPU72.CAL:Function_call_interrupts
5843 +12.4% 6570 ± 5% interrupts.CPU73.CAL:Function_call_interrupts
164.50 ± 47% -63.4% 60.25 ±113% interrupts.CPU73.RES:Rescheduling_interrupts
5975 ± 2% +9.2% 6526 ± 4% interrupts.CPU74.CAL:Function_call_interrupts
5934 +10.9% 6578 ± 5% interrupts.CPU76.CAL:Function_call_interrupts
5899 +10.9% 6544 ± 5% interrupts.CPU79.CAL:Function_call_interrupts
19.25 ± 19% +1671.4% 341.00 ±143% interrupts.CPU81.RES:Rescheduling_interrupts
5882 +12.5% 6615 ± 5% interrupts.CPU82.CAL:Function_call_interrupts
5917 +12.6% 6662 ± 4% interrupts.CPU83.CAL:Function_call_interrupts
5874 +11.9% 6572 ± 4% interrupts.CPU84.CAL:Function_call_interrupts
5926 ± 2% +11.6% 6616 ± 5% interrupts.CPU85.CAL:Function_call_interrupts
5948 +11.5% 6630 ± 5% interrupts.CPU86.CAL:Function_call_interrupts
5905 +11.0% 6554 ± 4% interrupts.CPU87.CAL:Function_call_interrupts
5832 ± 4% +13.3% 6610 ± 4% interrupts.CPU88.CAL:Function_call_interrupts
5935 +11.5% 6615 ± 5% interrupts.CPU89.CAL:Function_call_interrupts
412.00 ± 62% -49.2% 209.25 ± 28% interrupts.CPU9.33:PCI-MSI.26738690-edge.eth0-TxRx-1
5892 +11.8% 6589 ± 4% interrupts.CPU90.CAL:Function_call_interrupts
5903 +10.6% 6526 ± 5% interrupts.CPU91.CAL:Function_call_interrupts
5857 +12.9% 6614 ± 5% interrupts.CPU92.CAL:Function_call_interrupts
5918 +11.9% 6626 ± 5% interrupts.CPU94.CAL:Function_call_interrupts
5851 +12.3% 6572 ± 5% interrupts.CPU95.CAL:Function_call_interrupts
5888 +8.2% 6372 ± 6% interrupts.CPU96.CAL:Function_call_interrupts
irq_exception_noise.__do_page_fault.80th
5.2 +-+-------------------------------------------------------------------+
| |
5 +-+ + + +. + |
| : + .+ .+ : : : + :: + |
4.8 +-+.+. : + + .+.++.+ + .+. .+.+. : : .+.+. : + : :.+.+. .+. + +|
| + + + + + + + + + + +.+ |
4.6 +-+ |
| |
4.4 +-+ |
| |
4.2 +-+ O O |
O O O O O |
4 +-+ O O O O O O |
| O O O O |
3.8 +-+------------------------O-O-----O-O--------------------------------+
irq_exception_noise.__do_page_fault.90th
50 +-+--------------------------------------------------------------------+
|.+.+.+ +.+.+.+.+.+.+.+.+.+.+.+.+.+ +.+.+.+ +.+.+.+.+.+.+.+.+.+.+.|
45 +-+ |
40 +-+ |
| |
35 +-+ |
30 +-+ |
| |
25 +-+ |
20 +-+ |
| |
15 +-+ |
10 +-+ |
O O O O O O O O O O O O O O O O O O OO O |
5 +-+--------------------------------------------------------------------+
irq_exception_noise.__do_page_fault.95th
90 +-+--------------------------------------------------------------------+
| O O O O O O |
85 O-+ O O O O O O O O O |
80 +-+ O O O O O |
| |
75 +-+ |
70 +-+ |
| |
65 +-+ |
60 +-+ |
| |
55 +-+ .+ .+ +. .+. |
50 +-+.+.+ + .+.+. .+.+.+ + .+.+.+.+.+ +.+.+.+ +. .+. .+.+.+.+.+.+.+.|
| + + + + + |
45 +-+--------------------------------------------------------------------+
irq_exception_noise.__do_page_fault.99th
110 +-+---------------O---------------------------------------------------+
O O O O O O O O O O |
100 +-+ O O O O O O O O |
| O O |
| |
90 +-+ |
| |
80 +-+ |
| |
70 +-+ |
| |
| .+ .+. .+. |
60 +-+.+.+ + .+.+.+.++. .+.+.+.+.+.+.+ +.+.+.+ +. .+ .+.+.+.+. .+.|
| + + + +.+ + |
50 +-+-------------------------------------------------------------------+
will-it-scale.per_process_ops
60000 +-+-----------------------------------------------------------------+
| O O O |
59000 O-O O O O O O O O O O O O O |
58000 +-+ O O O O |
| |
57000 +-+ |
56000 +-+ |
| |
55000 +-+ |
54000 +-+.+. .+. .+.+.+. +. .+. .+ .+.+ +.+. .+ .+.|
|.+ +.++.+.+ + + +.+ +.+.+.+ +.+ : : + +.+.+ |
53000 +-+ : : |
52000 +-+ :: |
| + |
51000 +-+-----------------------------------------------------------------+
will-it-scale.workload
1.16e+07 +-+--------------------------------------------------------------+
| O |
1.14e+07 +-O O O O OO |
1.12e+07 O-+ OO O O O O O O OO O |
| O O |
1.1e+07 +-+ |
1.08e+07 +-+ |
| |
1.06e+07 +-+ |
1.04e+07 +-+.+ .+. .+. .+. .+ +. .+.|
|.+ +.+.+.+.++.+.+.+.++ +.+ ++.+.+ ++.+ : + +.+.+.++ |
1.02e+07 +-+ : : |
1e+07 +-+ :: |
| + |
9.8e+06 +-+--------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-bdw-ep2: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/3000/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep2/page_test/aim7
commit:
78ca31f6bc ("net/mlx4: Change number of max MSIXs from 64 to 1024")
60f9751638 ("mm/page_alloc.c: Set ppc->high fraction default to 512")
78ca31f6bc2c76ac 60f97516388a0fc63bcaf31a1cb
---------------- ---------------------------
%stddev %change %stddev
\ | \
238624 +9.8% 262037 aim7.jobs-per-min
75.68 -8.9% 68.93 aim7.time.elapsed_time
75.68 -8.9% 68.93 aim7.time.elapsed_time.max
1049552 +11.8% 1173879 aim7.time.involuntary_context_switches
5515 -10.8% 4922 aim7.time.system_time
9174 +29.7% 11901 ± 7% aim7.time.voluntary_context_switches
10400 ± 6% +38.0% 14348 ± 29% cpuidle.C1.usage
2341 -2.3% 2287 turbostat.Avg_MHz
13791996 -11.3% 12234596 ± 4% turbostat.IRQ
1754 +14.3% 2004 ± 5% vmstat.procs.r
14408 +19.7% 17252 vmstat.system.cs
993.50 ± 7% +16.5% 1157 ± 7% slabinfo.pool_workqueue.active_objs
994.00 ± 7% +16.5% 1158 ± 7% slabinfo.pool_workqueue.num_objs
3468 +10.8% 3845 ± 2% slabinfo.task_struct.active_objs
92628 +9.4% 101293 ± 2% slabinfo.vm_area_struct.active_objs
1093130 ± 2% +23.9% 1354826 proc-vmstat.nr_active_anon
1053719 +20.6% 1270896 ± 2% proc-vmstat.nr_anon_pages
48624 +11.9% 54408 ± 3% proc-vmstat.nr_kernel_stack
84948 +18.6% 100760 ± 2% proc-vmstat.nr_page_table_pages
1093130 ± 2% +23.9% 1354825 proc-vmstat.nr_zone_active_anon
4371572 +21.5% 5309764 ± 4% meminfo.Active
4371404 +21.5% 5309596 ± 4% meminfo.Active(anon)
4210160 +18.7% 4995961 ± 4% meminfo.AnonPages
48481 +11.7% 54139 ± 3% meminfo.KernelStack
6891666 +15.5% 7958463 ± 3% meminfo.Memused
339393 ± 3% +16.6% 395859 ± 5% meminfo.PageTables
314.38 +140.3% 755.50 ± 46% sched_debug.cfs_rq:/.load.min
8605941 -38.3% 5311208 ± 25% sched_debug.cfs_rq:/.min_vruntime.avg
19256166 -45.7% 10464484 ± 30% sched_debug.cfs_rq:/.min_vruntime.max
1156285 ± 5% +59.8% 1847295 ± 14% sched_debug.cfs_rq:/.min_vruntime.min
7139595 -62.7% 2665437 ± 67% sched_debug.cfs_rq:/.min_vruntime.stddev
45.54 -26.8% 33.32 ± 5% sched_debug.cfs_rq:/.nr_spread_over.avg
1417 ± 10% -51.2% 691.25 ± 20% sched_debug.cfs_rq:/.nr_spread_over.max
181.01 ± 6% -48.8% 92.72 ± 17% sched_debug.cfs_rq:/.nr_spread_over.stddev
314.25 +135.8% 740.88 ± 48% sched_debug.cfs_rq:/.runnable_weight.min
8385173 ± 60% -53.0% 3943418 ± 13% sched_debug.cfs_rq:/.spread0.max
7143576 -62.6% 2669695 ± 67% sched_debug.cfs_rq:/.spread0.stddev
1.38 ±113% +12545.5% 173.88 ± 24% sched_debug.cfs_rq:/.util_avg.min
335.18 ± 5% -18.4% 273.49 ± 6% sched_debug.cfs_rq:/.util_avg.stddev
551.94 ± 12% +55.7% 859.12 ± 18% sched_debug.cfs_rq:/.util_est_enqueued.avg
0.50 +15750.0% 79.25 ± 29% sched_debug.cfs_rq:/.util_est_enqueued.min
22.88 ± 20% +977.4% 246.49 ± 52% sched_debug.cpu.clock.stddev
22.88 ± 20% +977.5% 246.49 ± 52% sched_debug.cpu.clock_task.stddev
7.95 ± 4% +9.1% 8.68 ± 3% sched_debug.cpu.cpu_load[4].avg
314.38 +140.3% 755.50 ± 46% sched_debug.cpu.load.min
0.00 ± 7% +527.3% 0.00 ± 49% sched_debug.cpu.next_balance.stddev
1275 ± 3% +19.2% 1519 ± 9% sched_debug.cpu.nr_load_updates.stddev
9.34 ± 8% +41.1% 13.18 ± 13% sched_debug.cpu.nr_running.avg
0.50 +225.0% 1.62 ± 33% sched_debug.cpu.nr_running.min
6071 +16.1% 7051 sched_debug.cpu.nr_switches.avg
4285 ± 2% +31.8% 5647 ± 5% sched_debug.cpu.nr_switches.min
12.75 ± 33% +102.0% 25.75 ± 22% sched_debug.cpu.nr_uninterruptible.max
-13.50 +116.7% -29.25 sched_debug.cpu.nr_uninterruptible.min
4.40 ± 16% +145.6% 10.81 ± 13% sched_debug.cpu.nr_uninterruptible.stddev
5836 +13.4% 6619 sched_debug.cpu.sched_count.avg
4208 ± 3% +26.7% 5331 ± 5% sched_debug.cpu.sched_count.min
39.75 ± 3% +57.9% 62.75 ± 12% sched_debug.cpu.ttwu_count.min
415.63 -14.3% 356.26 ± 8% sched_debug.cpu.ttwu_count.stddev
33.25 +11.7% 37.12 ± 4% sched_debug.cpu.ttwu_local.min
355.25 ± 4% -19.0% 287.79 ± 12% sched_debug.cpu.ttwu_local.stddev
1.951e+10 +2.6% 2.003e+10 perf-stat.i.branch-instructions
79324533 +5.7% 83859108 ± 2% perf-stat.i.branch-misses
11.68 +0.8 12.52 perf-stat.i.cache-miss-rate%
1.017e+08 +15.1% 1.17e+08 perf-stat.i.cache-misses
7.578e+08 +5.3% 7.982e+08 perf-stat.i.cache-references
14736 +21.4% 17888 perf-stat.i.context-switches
2.088e+11 -2.0% 2.047e+11 perf-stat.i.cpu-cycles
5761 ± 4% +35.1% 7783 ± 6% perf-stat.i.cpu-migrations
2.372e+10 +3.8% 2.462e+10 perf-stat.i.dTLB-loads
0.34 -0.0 0.34 perf-stat.i.dTLB-store-miss-rate%
35552575 +8.1% 38436033 perf-stat.i.dTLB-store-misses
9.735e+09 +8.9% 1.06e+10 perf-stat.i.dTLB-stores
20313022 ± 2% +8.3% 21989394 perf-stat.i.iTLB-load-misses
8.706e+10 +3.2% 8.988e+10 perf-stat.i.instructions
3965 ± 3% -6.6% 3705 ± 3% perf-stat.i.instructions-per-iTLB-miss
6218805 +9.4% 6804504 perf-stat.i.minor-faults
2220320 +21.5% 2696669 ± 9% perf-stat.i.node-load-misses
20226466 +15.6% 23372135 perf-stat.i.node-loads
13.29 ± 12% +3.8 17.13 ± 3% perf-stat.i.node-store-miss-rate%
1690422 +12.4% 1900866 perf-stat.i.node-store-misses
16181623 +9.8% 17770316 perf-stat.i.node-stores
6235897 +9.3% 6816015 perf-stat.i.page-faults
8.70 +2.1% 8.88 perf-stat.overall.MPKI
0.41 +0.0 0.42 perf-stat.overall.branch-miss-rate%
13.43 +1.2 14.65 perf-stat.overall.cache-miss-rate%
2.40 -5.1% 2.28 perf-stat.overall.cpi
2052 -14.7% 1749 perf-stat.overall.cycles-between-cache-misses
0.42 +5.3% 0.44 perf-stat.overall.ipc
9.45 +0.2 9.66 perf-stat.overall.node-store-miss-rate%
78236970 +5.4% 82446273 ± 2% perf-stat.ps.branch-misses
1.006e+08 +14.3% 1.151e+08 perf-stat.ps.cache-misses
7.495e+08 +4.8% 7.853e+08 perf-stat.ps.cache-references
14636 +20.5% 17633 perf-stat.ps.context-switches
2.066e+11 -2.5% 2.013e+11 perf-stat.ps.cpu-cycles
5740 ± 4% +33.2% 7648 ± 7% perf-stat.ps.cpu-migrations
2.347e+10 +3.2% 2.422e+10 perf-stat.ps.dTLB-loads
35182612 +7.5% 37827421 perf-stat.ps.dTLB-store-misses
9.63e+09 +8.3% 1.043e+10 perf-stat.ps.dTLB-stores
20099572 ± 2% +7.7% 21642568 perf-stat.ps.iTLB-load-misses
6180094 +8.7% 6720452 perf-stat.ps.minor-faults
2193415 ± 2% +20.7% 2647735 ± 9% perf-stat.ps.node-load-misses
20022606 +14.5% 22934734 perf-stat.ps.node-loads
1672498 +11.8% 1869846 perf-stat.ps.node-store-misses
16024684 +9.1% 17490720 perf-stat.ps.node-stores
6180093 +8.7% 6719421 perf-stat.ps.page-faults
6.694e+12 -6.1% 6.285e+12 perf-stat.total.instructions
12.14 ±103% -9.9 2.27 ±173% perf-profile.calltrace.cycles.do_page_fault.page_fault
12.14 ±103% -9.9 2.27 ±173% perf-profile.calltrace.cycles.__do_page_fault.do_page_fault.page_fault
12.14 ±103% -9.9 2.27 ±173% perf-profile.calltrace.cycles.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
9.82 ±107% -9.8 0.00 perf-profile.calltrace.cycles.__schedule.schedule.worker_thread.kthread.ret_from_fork
9.82 ±107% -9.8 0.00 perf-profile.calltrace.cycles.find_busiest_group.load_balance.pick_next_task_fair.__schedule.schedule
9.82 ±107% -9.8 0.00 perf-profile.calltrace.cycles.kthread.ret_from_fork
9.82 ±107% -9.8 0.00 perf-profile.calltrace.cycles.perf_remove_from_context.perf_event_release_kernel.perf_release.__fput.task_work_run
9.82 ±107% -9.8 0.00 perf-profile.calltrace.cycles.event_function_call.perf_remove_from_context.perf_event_release_kernel.perf_release.__fput
9.82 ±107% -9.8 0.00 perf-profile.calltrace.cycles.smp_call_function_single.event_function_call.perf_remove_from_context.perf_event_release_kernel.perf_release
9.82 ±107% -9.8 0.00 perf-profile.calltrace.cycles.ret_from_fork
9.82 ±107% -9.8 0.00 perf-profile.calltrace.cycles.worker_thread.kthread.ret_from_fork
9.82 ±107% -9.8 0.00 perf-profile.calltrace.cycles.schedule.worker_thread.kthread.ret_from_fork
9.82 ±107% -9.8 0.00 perf-profile.calltrace.cycles.pick_next_task_fair.__schedule.schedule.worker_thread.kthread
9.82 ±107% -9.8 0.00 perf-profile.calltrace.cycles.load_balance.pick_next_task_fair.__schedule.schedule.worker_thread
16.07 ± 63% -8.9 7.14 ±173% perf-profile.calltrace.cycles.task_work_run.do_exit.do_group_exit.get_signal.do_signal
16.07 ± 63% -8.9 7.14 ±173% perf-profile.calltrace.cycles.__fput.task_work_run.do_exit.do_group_exit.get_signal
16.07 ± 63% -8.9 7.14 ±173% perf-profile.calltrace.cycles.perf_release.__fput.task_work_run.do_exit.do_group_exit
16.07 ± 63% -8.9 7.14 ±173% perf-profile.calltrace.cycles.perf_event_release_kernel.perf_release.__fput.task_work_run.do_exit
9.82 ±107% -7.5 2.27 ±173% perf-profile.calltrace.cycles.update_sd_lb_stats.find_busiest_group.load_balance.pick_next_task_fair.__schedule
6.70 ±100% -4.4 2.27 ±173% perf-profile.calltrace.cycles.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.70 ±100% -4.4 2.27 ±173% perf-profile.calltrace.cycles.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.70 ±100% -4.4 2.27 ±173% perf-profile.calltrace.cycles.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.70 ±100% -4.4 2.27 ±173% perf-profile.calltrace.cycles.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64
0.00 +14.6 14.56 ± 23% perf-profile.calltrace.cycles.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.82 ±107% -9.8 0.00 perf-profile.children.cycles.kthread
9.82 ±107% -9.8 0.00 perf-profile.children.cycles.perf_remove_from_context
9.82 ±107% -9.8 0.00 perf-profile.children.cycles.ret_from_fork
9.82 ±107% -9.8 0.00 perf-profile.children.cycles.worker_thread
9.82 ±107% -9.8 0.00 perf-profile.children.cycles.schedule
16.07 ± 63% -8.9 7.14 ±173% perf-profile.children.cycles.task_work_run
16.07 ± 63% -8.9 7.14 ±173% perf-profile.children.cycles.__fput
16.07 ± 63% -8.9 7.14 ±173% perf-profile.children.cycles.perf_release
16.07 ± 63% -8.9 7.14 ±173% perf-profile.children.cycles.perf_event_release_kernel
9.82 ±107% -7.5 2.27 ±173% perf-profile.children.cycles.update_sd_lb_stats
9.82 ±107% -7.5 2.27 ±173% perf-profile.children.cycles.find_busiest_group
9.82 ±107% -7.5 2.27 ±173% perf-profile.children.cycles.load_balance
9.82 ±107% -7.5 2.27 ±173% perf-profile.children.cycles.pick_next_task_fair
11.25 ±101% -7.1 4.17 ±173% perf-profile.children.cycles.___might_sleep
6.70 ±100% -6.7 0.00 perf-profile.children.cycles.free_pgd_range
6.70 ±100% -6.7 0.00 perf-profile.children.cycles.free_p4d_range
9.82 ±107% -5.3 4.54 ±173% perf-profile.children.cycles.smp_call_function_single
9.82 ±107% -5.3 4.54 ±173% perf-profile.children.cycles.event_function_call
6.70 ±100% -4.4 2.27 ±173% perf-profile.children.cycles.__x64_sys_execve
6.70 ±100% -4.4 2.27 ±173% perf-profile.children.cycles.__do_execve_file
6.70 ±100% -4.4 2.27 ±173% perf-profile.children.cycles.search_binary_handler
6.70 ±100% -4.4 2.27 ±173% perf-profile.children.cycles.load_elf_binary
0.00 +12.3 12.29 ± 26% perf-profile.children.cycles.link_path_walk
0.00 +14.6 14.56 ± 23% perf-profile.children.cycles.do_filp_open
0.00 +14.6 14.56 ± 23% perf-profile.children.cycles.path_openat
0.00 +20.4 20.40 ± 39% perf-profile.children.cycles.do_sys_open
11.25 ±101% -7.1 4.17 ±173% perf-profile.self.cycles.___might_sleep
6.70 ±100% -2.2 4.54 ±173% perf-profile.self.cycles.smp_call_function_single
35954 ± 2% -8.3% 32952 ± 4% softirqs.CPU0.TIMER
33924 -9.8% 30589 softirqs.CPU12.TIMER
33942 ± 2% -9.8% 30629 ± 5% softirqs.CPU14.TIMER
33732 ± 2% -7.8% 31094 ± 3% softirqs.CPU15.TIMER
33659 ± 2% -8.8% 30713 ± 3% softirqs.CPU17.TIMER
33919 -8.9% 30897 ± 4% softirqs.CPU18.TIMER
34395 -10.4% 30801 ± 6% softirqs.CPU19.TIMER
3083 ± 8% +187.1% 8850 ± 37% softirqs.CPU2.RCU
33611 ± 3% -7.7% 31038 ± 3% softirqs.CPU20.TIMER
33836 ± 3% -9.5% 30628 softirqs.CPU23.TIMER
33967 ± 2% -12.0% 29882 ± 3% softirqs.CPU24.TIMER
35481 ± 7% -15.3% 30066 ± 2% softirqs.CPU25.TIMER
33818 -10.8% 30159 ± 2% softirqs.CPU27.TIMER
33489 -9.6% 30259 ± 2% softirqs.CPU28.TIMER
33545 -10.3% 30075 ± 2% softirqs.CPU29.TIMER
33940 -11.9% 29915 ± 2% softirqs.CPU30.TIMER
34286 ± 2% -12.2% 30104 ± 2% softirqs.CPU31.TIMER
33721 -11.6% 29811 ± 3% softirqs.CPU32.TIMER
33742 -12.2% 29612 ± 3% softirqs.CPU33.TIMER
33571 -11.4% 29745 ± 3% softirqs.CPU34.TIMER
33867 -12.7% 29565 ± 3% softirqs.CPU35.TIMER
33966 ± 2% -12.1% 29862 ± 3% softirqs.CPU36.TIMER
34002 ± 2% -11.5% 30077 ± 3% softirqs.CPU37.TIMER
33946 -11.2% 30144 ± 3% softirqs.CPU38.TIMER
33555 -10.8% 29933 ± 3% softirqs.CPU39.TIMER
34694 ± 6% -12.3% 30421 ± 7% softirqs.CPU4.TIMER
33590 -11.3% 29782 ± 3% softirqs.CPU40.TIMER
33179 ± 2% -10.6% 29662 ± 3% softirqs.CPU41.TIMER
33880 ± 4% -11.6% 29941 ± 2% softirqs.CPU43.TIMER
33510 -8.8% 30572 ± 3% softirqs.CPU44.TIMER
34723 ± 7% -15.1% 29488 ± 6% softirqs.CPU45.TIMER
33641 -8.6% 30763 ± 4% softirqs.CPU46.TIMER
36893 ± 4% -10.3% 33082 ± 3% softirqs.CPU48.TIMER
2765 ± 2% +146.4% 6814 ± 39% softirqs.CPU49.RCU
33024 -6.9% 30741 ± 3% softirqs.CPU52.TIMER
33338 -9.0% 30329 ± 3% softirqs.CPU55.TIMER
33356 ± 2% -10.9% 29718 ± 4% softirqs.CPU56.TIMER
33314 -9.5% 30143 ± 4% softirqs.CPU57.TIMER
33000 ± 3% -8.1% 30332 ± 4% softirqs.CPU58.TIMER
38046 ± 21% -19.5% 30644 ± 4% softirqs.CPU59.TIMER
34805 -10.0% 31316 ± 3% softirqs.CPU6.TIMER
33131 ± 2% -7.8% 30547 ± 4% softirqs.CPU60.TIMER
33120 ± 2% -7.9% 30503 ± 2% softirqs.CPU61.TIMER
33280 -9.5% 30133 ± 2% softirqs.CPU62.TIMER
33268 -7.9% 30624 ± 4% softirqs.CPU63.TIMER
33359 -9.6% 30145 ± 3% softirqs.CPU65.TIMER
33056 ± 2% -9.9% 29795 ± 3% softirqs.CPU67.TIMER
33303 -11.3% 29548 ± 2% softirqs.CPU69.TIMER
33401 -11.2% 29658 ± 2% softirqs.CPU70.TIMER
33579 -12.1% 29529 ± 2% softirqs.CPU71.TIMER
33535 ± 2% -12.3% 29412 ± 2% softirqs.CPU72.TIMER
37862 ± 21% -21.7% 29664 ± 2% softirqs.CPU73.TIMER
34032 ± 3% -13.1% 29576 ± 2% softirqs.CPU74.TIMER
33764 ± 2% -13.4% 29247 ± 3% softirqs.CPU75.TIMER
33684 ± 2% -12.9% 29325 ± 3% softirqs.CPU76.TIMER
33420 ± 2% -11.6% 29548 ± 2% softirqs.CPU77.TIMER
33191 -11.5% 29378 ± 3% softirqs.CPU78.TIMER
33195 -11.1% 29510 ± 2% softirqs.CPU79.TIMER
33332 -11.4% 29527 ± 2% softirqs.CPU80.TIMER
33492 ± 2% -11.5% 29655 ± 2% softirqs.CPU81.TIMER
33277 -9.6% 30068 ± 4% softirqs.CPU82.TIMER
33155 -11.2% 29427 ± 2% softirqs.CPU83.TIMER
33205 -10.5% 29730 ± 2% softirqs.CPU84.TIMER
32874 ± 2% -10.2% 29537 ± 2% softirqs.CPU85.TIMER
33248 -11.0% 29605 ± 2% softirqs.CPU86.TIMER
41246 ± 19% -28.4% 29532 softirqs.CPU87.TIMER
262187 +92.7% 505328 ± 16% softirqs.RCU
2984407 -9.7% 2696016 ± 3% softirqs.TIMER
156252 -11.3% 138634 ± 4% interrupts.CPU0.LOC:Local_timer_interrupts
155582 -11.4% 137894 ± 4% interrupts.CPU1.LOC:Local_timer_interrupts
155294 -10.9% 138304 ± 4% interrupts.CPU10.LOC:Local_timer_interrupts
154946 -10.8% 138222 ± 4% interrupts.CPU11.LOC:Local_timer_interrupts
155903 -11.3% 138329 ± 4% interrupts.CPU12.LOC:Local_timer_interrupts
155226 -10.9% 138278 ± 4% interrupts.CPU13.LOC:Local_timer_interrupts
155668 -12.2% 136627 ± 3% interrupts.CPU14.LOC:Local_timer_interrupts
155963 -11.4% 138236 ± 4% interrupts.CPU15.LOC:Local_timer_interrupts
155761 -11.5% 137868 ± 4% interrupts.CPU16.LOC:Local_timer_interrupts
154649 -10.8% 138021 ± 4% interrupts.CPU17.LOC:Local_timer_interrupts
155951 -11.6% 137907 ± 4% interrupts.CPU18.LOC:Local_timer_interrupts
156031 -11.9% 137535 ± 4% interrupts.CPU19.LOC:Local_timer_interrupts
155323 -11.3% 137839 ± 4% interrupts.CPU2.LOC:Local_timer_interrupts
155295 -11.4% 137530 ± 4% interrupts.CPU20.LOC:Local_timer_interrupts
155878 -11.3% 138258 ± 4% interrupts.CPU21.LOC:Local_timer_interrupts
155826 -11.4% 138126 ± 4% interrupts.CPU22.LOC:Local_timer_interrupts
154996 -10.7% 138421 ± 4% interrupts.CPU23.LOC:Local_timer_interrupts
155823 -11.6% 137681 ± 3% interrupts.CPU24.LOC:Local_timer_interrupts
155900 -11.3% 138258 ± 4% interrupts.CPU25.LOC:Local_timer_interrupts
156026 -11.1% 138675 ± 4% interrupts.CPU26.LOC:Local_timer_interrupts
156065 -11.2% 138624 ± 4% interrupts.CPU27.LOC:Local_timer_interrupts
155134 -10.8% 138317 ± 4% interrupts.CPU28.LOC:Local_timer_interrupts
155977 -11.3% 138287 ± 4% interrupts.CPU29.LOC:Local_timer_interrupts
155019 -10.7% 138485 ± 4% interrupts.CPU3.LOC:Local_timer_interrupts
156170 -11.5% 138240 ± 4% interrupts.CPU30.LOC:Local_timer_interrupts
156008 -11.3% 138312 ± 4% interrupts.CPU31.LOC:Local_timer_interrupts
155070 -11.2% 137728 ± 3% interrupts.CPU32.LOC:Local_timer_interrupts
155798 -11.7% 137637 ± 3% interrupts.CPU33.LOC:Local_timer_interrupts
155199 -11.4% 137518 ± 3% interrupts.CPU34.LOC:Local_timer_interrupts
155905 -11.7% 137607 ± 3% interrupts.CPU35.LOC:Local_timer_interrupts
155961 -11.9% 137475 ± 3% interrupts.CPU36.LOC:Local_timer_interrupts
9.25 ±118% +370.3% 43.50 ± 87% interrupts.CPU36.RES:Rescheduling_interrupts
155888 -11.7% 137651 ± 3% interrupts.CPU37.LOC:Local_timer_interrupts
8.50 ±133% +2229.4% 198.00 ±140% interrupts.CPU37.RES:Rescheduling_interrupts
155894 -11.3% 138300 ± 4% interrupts.CPU38.LOC:Local_timer_interrupts
155998 -11.3% 138325 ± 4% interrupts.CPU39.LOC:Local_timer_interrupts
155942 -12.4% 136662 ± 4% interrupts.CPU4.LOC:Local_timer_interrupts
155795 -11.7% 137517 ± 3% interrupts.CPU40.LOC:Local_timer_interrupts
155043 -11.3% 137505 ± 3% interrupts.CPU41.LOC:Local_timer_interrupts
156027 -11.4% 138257 ± 4% interrupts.CPU42.LOC:Local_timer_interrupts
155105 -10.9% 138176 ± 4% interrupts.CPU43.LOC:Local_timer_interrupts
155785 -11.3% 138155 ± 4% interrupts.CPU44.LOC:Local_timer_interrupts
155037 -11.9% 136645 ± 3% interrupts.CPU45.LOC:Local_timer_interrupts
155780 -11.2% 138308 ± 4% interrupts.CPU46.LOC:Local_timer_interrupts
154545 -10.6% 138183 ± 4% interrupts.CPU47.LOC:Local_timer_interrupts
155901 -11.2% 138377 ± 4% interrupts.CPU48.LOC:Local_timer_interrupts
155844 -11.9% 137312 ± 4% interrupts.CPU49.LOC:Local_timer_interrupts
155883 -11.2% 138494 ± 4% interrupts.CPU5.LOC:Local_timer_interrupts
155349 -11.9% 136908 ± 3% interrupts.CPU50.LOC:Local_timer_interrupts
155083 -10.8% 138342 ± 4% interrupts.CPU51.LOC:Local_timer_interrupts
154977 -10.8% 138283 ± 4% interrupts.CPU52.LOC:Local_timer_interrupts
154864 -11.3% 137418 ± 4% interrupts.CPU53.LOC:Local_timer_interrupts
155844 -11.2% 138326 ± 4% interrupts.CPU54.LOC:Local_timer_interrupts
155776 -11.3% 138109 ± 4% interrupts.CPU55.LOC:Local_timer_interrupts
155486 -12.2% 136576 ± 4% interrupts.CPU56.LOC:Local_timer_interrupts
155812 -11.8% 137402 ± 4% interrupts.CPU57.LOC:Local_timer_interrupts
154795 -11.2% 137388 ± 4% interrupts.CPU58.LOC:Local_timer_interrupts
155118 -11.3% 137590 ± 4% interrupts.CPU59.LOC:Local_timer_interrupts
155919 -11.3% 138296 ± 4% interrupts.CPU6.LOC:Local_timer_interrupts
154221 -10.8% 137520 ± 4% interrupts.CPU60.LOC:Local_timer_interrupts
154838 -10.8% 138165 ± 4% interrupts.CPU61.LOC:Local_timer_interrupts
155591 -11.7% 137447 ± 4% interrupts.CPU62.LOC:Local_timer_interrupts
155969 -11.8% 137496 ± 4% interrupts.CPU63.LOC:Local_timer_interrupts
155891 -11.8% 137460 ± 4% interrupts.CPU64.LOC:Local_timer_interrupts
155293 -11.5% 137478 ± 4% interrupts.CPU65.LOC:Local_timer_interrupts
155143 -10.9% 138276 ± 4% interrupts.CPU66.LOC:Local_timer_interrupts
155074 -10.7% 138471 ± 4% interrupts.CPU67.LOC:Local_timer_interrupts
155145 -10.9% 138232 ± 4% interrupts.CPU68.LOC:Local_timer_interrupts
156022 -11.3% 138366 ± 4% interrupts.CPU69.LOC:Local_timer_interrupts
5.50 ± 81% +3395.5% 192.25 ±152% interrupts.CPU69.RES:Rescheduling_interrupts
155096 -11.3% 137612 ± 4% interrupts.CPU7.LOC:Local_timer_interrupts
155904 -11.3% 138305 ± 4% interrupts.CPU70.LOC:Local_timer_interrupts
156019 -11.3% 138393 ± 4% interrupts.CPU71.LOC:Local_timer_interrupts
156061 -11.5% 138062 ± 3% interrupts.CPU72.LOC:Local_timer_interrupts
155876 -11.5% 138010 ± 4% interrupts.CPU73.LOC:Local_timer_interrupts
155994 -11.9% 137467 ± 3% interrupts.CPU74.LOC:Local_timer_interrupts
155981 -11.7% 137752 ± 3% interrupts.CPU75.LOC:Local_timer_interrupts
155979 -11.8% 137545 ± 3% interrupts.CPU76.LOC:Local_timer_interrupts
13.75 ±136% +280.0% 52.25 ± 88% interrupts.CPU76.RES:Rescheduling_interrupts
155815 -11.3% 138268 ± 4% interrupts.CPU77.LOC:Local_timer_interrupts
155821 -11.7% 137609 ± 3% interrupts.CPU78.LOC:Local_timer_interrupts
155875 -11.3% 138327 ± 4% interrupts.CPU79.LOC:Local_timer_interrupts
155000 -11.3% 137467 ± 4% interrupts.CPU8.LOC:Local_timer_interrupts
155831 -11.8% 137520 ± 3% interrupts.CPU80.LOC:Local_timer_interrupts
155820 -11.2% 138418 ± 4% interrupts.CPU81.LOC:Local_timer_interrupts
155902 -11.3% 138272 ± 4% interrupts.CPU82.LOC:Local_timer_interrupts
5.75 ±163% +1556.5% 95.25 ± 99% interrupts.CPU82.RES:Rescheduling_interrupts
155904 -11.3% 138251 ± 4% interrupts.CPU83.LOC:Local_timer_interrupts
155971 -11.4% 138129 ± 4% interrupts.CPU84.LOC:Local_timer_interrupts
155061 -11.0% 138013 ± 4% interrupts.CPU85.LOC:Local_timer_interrupts
155985 -11.6% 137878 ± 4% interrupts.CPU86.LOC:Local_timer_interrupts
155927 -11.2% 138438 ± 4% interrupts.CPU87.LOC:Local_timer_interrupts
155191 -11.5% 137353 ± 4% interrupts.CPU9.LOC:Local_timer_interrupts
13692319 -11.4% 12137408 ± 4% interrupts.LOC:Local_timer_interrupts
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_threads/rootfs/tbox_group/test/test_memory_size/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/development/50%/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/PIPE/50%/lmbench3/0xb00002e
commit:
78ca31f6bc ("net/mlx4: Change number of max MSIXs from 64 to 1024")
60f9751638 ("mm/page_alloc.c: Set ppc->high fraction default to 512")
78ca31f6bc2c76ac 60f97516388a0fc63bcaf31a1cb
---------------- ---------------------------
%stddev %change %stddev
\ | \
45852 +21.9% 55876 lmbench3.PIPE.bandwidth.MB/sec
5463 -22.6% 4228 ± 30% lmbench3.time.percent_of_cpu_this_job_got
10056 ± 6% -13.5% 8699 ± 2% lmbench3.time.system_time
33336 ± 5% +36.2% 45403 ± 16% softirqs.CPU17.SCHED
38.65 +12.4% 43.45 ± 7% boot-time.boot
2898 +15.0% 3332 ± 9% boot-time.idle
49.14 +12.6 61.75 ± 18% mpstat.cpu.all.idle%
50.06 -12.7 37.34 ± 31% mpstat.cpu.all.sys%
1408 ± 6% +23.6% 1740 ± 6% slabinfo.avc_xperms_data.active_objs
1408 ± 6% +23.6% 1740 ± 6% slabinfo.avc_xperms_data.num_objs
49.00 +25.9% 61.67 ± 17% vmstat.cpu.id
49.25 -25.5% 36.67 ± 30% vmstat.cpu.sy
1701 -4.7% 1622 ± 3% proc-vmstat.nr_page_table_pages
6121 ± 56% +167.0% 16341 ± 11% proc-vmstat.numa_pages_migrated
6121 ± 56% +167.0% 16341 ± 11% proc-vmstat.pgmigrate_success
1788 ± 72% -77.9% 394.67 ± 21% numa-meminfo.node1.Inactive
1701 ± 79% -80.1% 338.00 ± 7% numa-meminfo.node1.Inactive(anon)
6188 ± 9% -12.0% 5447 ± 2% numa-meminfo.node1.KernelStack
2623 ± 32% -43.2% 1489 ± 19% numa-meminfo.node1.PageTables
425.00 ± 79% -80.2% 84.33 ± 7% numa-vmstat.node1.nr_inactive_anon
6188 ± 9% -12.0% 5446 ± 2% numa-vmstat.node1.nr_kernel_stack
655.75 ± 32% -43.2% 372.33 ± 19% numa-vmstat.node1.nr_page_table_pages
425.00 ± 79% -80.2% 84.33 ± 7% numa-vmstat.node1.nr_zone_inactive_anon
1493 -23.2% 1146 ± 27% turbostat.Avg_MHz
54.03 -12.2 41.82 ± 26% turbostat.Busy%
10.32 ± 3% -2.0 8.27 ± 25% turbostat.C1%
204.25 -13.1% 177.40 ± 14% turbostat.PkgWatt
11.88 -6.1% 11.16 ± 2% turbostat.RAMWatt
317.75 ± 42% -56.2% 139.33 ± 38% interrupts.36:PCI-MSI.1572867-edge.eth0-TxRx-3
94.00 ± 6% +39.4% 131.00 ± 28% interrupts.92:PCI-MSI.1572923-edge.eth0-TxRx-59
317.75 ± 42% -56.2% 139.33 ± 38% interrupts.CPU3.36:PCI-MSI.1572867-edge.eth0-TxRx-3
2329 ± 5% +37.0% 3191 ± 33% interrupts.CPU44.CAL:Function_call_interrupts
2148 ± 10% +51.9% 3263 ± 35% interrupts.CPU50.CAL:Function_call_interrupts
94.00 ± 6% +39.4% 131.00 ± 28% interrupts.CPU59.92:PCI-MSI.1572923-edge.eth0-TxRx-59
2298 ± 2% +41.7% 3257 ± 35% interrupts.CPU59.CAL:Function_call_interrupts
29892 ± 4% -6.0% 28112 ± 4% interrupts.CPU60.RES:Rescheduling_interrupts
5013 ± 62% -75.8% 1215 ± 59% interrupts.CPU65.NMI:Non-maskable_interrupts
5013 ± 62% -75.8% 1215 ± 59% interrupts.CPU65.PMI:Performance_monitoring_interrupts
2296 ± 3% +41.4% 3247 ± 33% interrupts.CPU71.CAL:Function_call_interrupts
4566 ± 40% -63.8% 1652 ± 79% interrupts.CPU87.NMI:Non-maskable_interrupts
4566 ± 40% -63.8% 1652 ± 79% interrupts.CPU87.PMI:Performance_monitoring_interrupts
2215 ± 10% +42.5% 3156 ± 33% interrupts.CPU9.CAL:Function_call_interrupts
589.00 ± 8% -19.3% 475.33 ± 5% interrupts.TLB:TLB_shootdowns
1.465e+10 -24.8% 1.102e+10 ± 30% perf-stat.i.branch-instructions
1.88 ± 13% +2.2 4.09 ± 63% perf-stat.i.cache-miss-rate%
1.331e+11 -23.4% 1.019e+11 ± 28% perf-stat.i.cpu-cycles
1.715e+10 -22.9% 1.322e+10 ± 30% perf-stat.i.dTLB-loads
56.76 ± 2% -5.7 51.11 ± 4% perf-stat.i.iTLB-load-miss-rate%
6.077e+10 -23.6% 4.641e+10 ± 29% perf-stat.i.instructions
23695 ± 3% -27.7% 17142 ± 25% perf-stat.i.instructions-per-iTLB-miss
19.67 ± 2% +12.1% 22.05 perf-stat.overall.MPKI
0.49 +0.1 0.59 ± 15% perf-stat.overall.branch-miss-rate%
19041 ± 8% -29.6% 13402 ± 39% perf-stat.overall.cycles-between-cache-misses
17134 ± 2% -17.6% 14124 ± 13% perf-stat.overall.instructions-per-iTLB-miss
1.457e+10 -24.8% 1.096e+10 ± 30% perf-stat.ps.branch-instructions
1.323e+11 -23.4% 1.014e+11 ± 28% perf-stat.ps.cpu-cycles
1.706e+10 -22.9% 1.315e+10 ± 30% perf-stat.ps.dTLB-loads
6.043e+10 -23.6% 4.617e+10 ± 29% perf-stat.ps.instructions
1.16e+13 ± 6% -12.2% 1.018e+13 ± 2% perf-stat.total.instructions
40010 ± 15% -26.4% 29449 ± 32% sched_debug.cfs_rq:/.exec_clock.avg
38212 ± 14% -30.1% 26726 ± 33% sched_debug.cfs_rq:/.exec_clock.min
322.75 ± 10% +158.0% 832.72 ± 70% sched_debug.cfs_rq:/.load_avg.max
3421408 ± 15% -28.0% 2464418 ± 35% sched_debug.cfs_rq:/.min_vruntime.avg
3493241 ± 16% -27.1% 2546488 ± 34% sched_debug.cfs_rq:/.min_vruntime.max
3277398 ± 15% -29.5% 2310348 ± 33% sched_debug.cfs_rq:/.min_vruntime.min
0.65 ± 13% -27.8% 0.47 ± 22% sched_debug.cfs_rq:/.nr_running.avg
0.06 ± 14% +74.4% 0.10 ± 26% sched_debug.cfs_rq:/.nr_spread_over.avg
753.19 ± 12% -25.4% 561.82 ± 27% sched_debug.cfs_rq:/.util_avg.avg
1654 ± 12% -15.5% 1398 ± 13% sched_debug.cfs_rq:/.util_avg.max
607.16 ± 13% -25.1% 454.68 ± 30% sched_debug.cfs_rq:/.util_est_enqueued.avg
1376 ± 9% -31.1% 947.89 ± 15% sched_debug.cfs_rq:/.util_est_enqueued.max
228.22 ± 4% -27.7% 165.11 ± 15% sched_debug.cfs_rq:/.util_est_enqueued.stddev
10073 ± 56% +274.5% 37720 ± 53% sched_debug.cpu.avg_idle.min
150879 ± 7% -21.4% 118605 ± 6% sched_debug.cpu.avg_idle.stddev
1.15 ± 20% -60.2% 0.46 ± 45% sched_debug.cpu.cpu_load[1].min
1855684 ± 5% -23.1% 1427848 ± 27% sched_debug.cpu.nr_switches.avg
4011313 ± 8% -20.5% 3190464 ± 24% sched_debug.cpu.nr_switches.max
1754340 ± 5% -22.9% 1351964 ± 26% sched_debug.cpu.nr_switches.min
-7.67 +43.4% -10.99 sched_debug.cpu.nr_uninterruptible.min
1855325 ± 5% -23.1% 1427549 ± 27% sched_debug.cpu.sched_count.avg
4008427 ± 8% -20.4% 3189563 ± 24% sched_debug.cpu.sched_count.max
1754161 ± 5% -23.0% 1351450 ± 26% sched_debug.cpu.sched_count.min
915862 ± 5% -22.8% 706844 ± 26% sched_debug.cpu.sched_goidle.avg
1988061 ± 8% -20.1% 1588229 ± 24% sched_debug.cpu.sched_goidle.max
871494 ± 5% -22.8% 672571 ± 26% sched_debug.cpu.sched_goidle.min
938616 ± 6% -23.3% 719865 ± 27% sched_debug.cpu.ttwu_count.avg
2019971 ± 8% -21.0% 1596694 ± 24% sched_debug.cpu.ttwu_count.max
879784 ± 5% -23.1% 676751 ± 26% sched_debug.cpu.ttwu_count.min
120270 ± 66% -53.4% 55992 ± 84% sched_debug.cpu.ttwu_local.max
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
Re: [LKP] [btrfs] 302167c50b: fio.write_bw_MBps -12.4% regression
by Huang, Ying
Josef Bacik <josef(a)toxicpanda.com> writes:
> On Fri, May 24, 2019 at 03:46:17PM +0800, Huang, Ying wrote:
>> "Huang, Ying" <ying.huang(a)intel.com> writes:
>>
>> > "Huang, Ying" <ying.huang(a)intel.com> writes:
>> >
>> >> Hi, Josef,
>> >>
>> >> kernel test robot <rong.a.chen(a)intel.com> writes:
>> >>
>> >>> Greeting,
>> >>>
>> >>> FYI, we noticed a -12.4% regression of fio.write_bw_MBps due to commit:
>> >>>
>> >>>
>> >>> commit: 302167c50b32e7fccc98994a91d40ddbbab04e52 ("btrfs: don't end
>> >>> the transaction for delayed refs in throttle")
>> >>> https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git pending-fixes
>> >>>
>> >>> in testcase: fio-basic
>> >>> on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
>> >>> with following parameters:
>> >>>
>> >>> runtime: 300s
>> >>> nr_task: 8t
>> >>> disk: 1SSD
>> >>> fs: btrfs
>> >>> rw: randwrite
>> >>> bs: 4k
>> >>> ioengine: sync
>> >>> test_size: 400g
>> >>> cpufreq_governor: performance
>> >>> ucode: 0xb00002e
>> >>>
>> >>> test-description: Fio is a tool that will spawn a number of threads
>> >>> or processes doing a particular type of I/O action as specified by
>> >>> the user.
>> >>> test-url: https://github.com/axboe/fio
>> >>>
>> >>>
>> >>
>> >> Do you have time to take a look at this regression?
>> >
>> > Ping
>>
>> Ping again.
>>
>
> This happens because we now rely more on on-demand flushing than on the catch-up
> flushing that happened before. This is just one case where it's slightly worse;
> overall this change provides better latencies, and even in this result it
> provided better completion latencies because we're not randomly flushing at the
> end of a transaction. It does appear to be costing writes in that they will
> spend more time flushing than before, so you get slightly lower throughput on
> pure small-write workloads. I can't actually see the slowdown locally.
>
> This patch is here to stay; it just shows we need to continue to refine the
> flushing code to be less spiky/painful. Thanks,
Thanks for the detailed explanation. We will ignore this regression.
Best Regards,
Huang, Ying
8d2bd61bd8 ("sched/core: Clean up sched_init() a bit"): BUG: unable to handle page fault for address: 00ba0396
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
commit 8d2bd61bd89260d5da5eaef7cf264587f19a7e0d
Author: Qian Cai <cai(a)lca.pw>
AuthorDate: Wed May 22 14:20:06 2019 -0400
Commit: Ingo Molnar <mingo(a)kernel.org>
CommitDate: Fri May 24 08:57:29 2019 +0200
sched/core: Clean up sched_init() a bit
Compiling a kernel with both FAIR_GROUP_SCHED=n and RT_GROUP_SCHED=n
will generate a warning using W=1:
kernel/sched/core.c: In function 'sched_init':
kernel/sched/core.c:5906:32: warning: variable 'ptr' set but not used
Use this opportunity to tidy up the code a bit by removing unnecessary
indentation and lines.
Signed-off-by: Qian Cai <cai(a)lca.pw>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Link: http://lkml.kernel.org/r/1558549206-13031-1-git-send-email-cai@lca.pw
[ Also remove some of the unnecessary #endif comments. ]
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
54dee40637 Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
8d2bd61bd8 sched/core: Clean up sched_init() a bit
7668c966e0 Merge branch 'linus'
+-------------------------------------------------------+------------+------------+------------+
| | 54dee40637 | 8d2bd61bd8 | 7668c966e0 |
+-------------------------------------------------------+------------+------------+------------+
| boot_successes | 49 | 0 | 0 |
| boot_failures | 0 | 16 | 13 |
| BUG:kernel_hang_in_boot_stage | 0 | 3 | |
| BUG:unable_to_handle_page_fault_for_address | 0 | 2 | 1 |
| Oops:#[##] | 0 | 13 | 13 |
| EIP:rq_online_rt | 0 | 2 | 1 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 4 | 6 |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 11 | 12 |
| EIP:sched_rt_period_timer | 0 | 9 | 7 |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 9 | 7 |
| EIP:enqueue_rt_entity | 0 | 2 | 5 |
+-------------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 0.029282] Initializing CPU#1
[ 0.032615] kvm-clock: cpu 1, msr 20e6041, secondary cpu clock
[ 0.032615] masked ExtINT on CPU#1
[ 0.193558] KVM setup async PF for cpu 1
[ 0.193558] kvm-stealtime: cpu 1, msr 1e54d8c0
[ 0.193558] BUG: unable to handle page fault for address: 00ba0396
[ 0.193558] #PF: supervisor read access in kernel mode
[ 0.193558] #PF: error_code(0x0000) - not-present page
[ 0.193558] *pde = 00000000
[ 0.193558] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[ 0.193558] CPU: 1 PID: 14 Comm: cpuhp/1 Tainted: G T 5.2.0-rc1-00166-g8d2bd61 #177
[ 0.193558] EIP: rq_online_rt+0x6d/0xf5
[ 0.193558] Code: 93 d0 06 00 00 05 c4 00 00 00 e8 4f 5b 00 00 83 c4 0c 5b 5e 5f 5d c3 8b 93 d0 06 00 00 8b 80 c8 00 00 00 8b 34 90 85 f6 74 c9 <8b> 8e 94 03 00 00 8d 81 cc 00 00 00 89 4d e8 89 45 f0 e8 97 0f 9a
[ 0.193558] EAX: 40093140 EBX: 5e555600 ECX: 14c073f2 EDX: 00000001
[ 0.193558] ESI: 00ba0002 EDI: 420e9200 EBP: 4005fef8 ESP: 4005fee0
[ 0.193558] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 EFLAGS: 00010002
[ 0.193558] CR0: 80050033 CR2: 00ba0396 CR3: 020d8000 CR4: 00000690
[ 0.193558] Call Trace:
[ 0.193558] ? cpudl_set_freecpu+0x13/0x17
[ 0.193558] ? set_rq_online+0x42/0x4a
[ 0.193558] ? sched_cpu_activate+0xc5/0xdd
[ 0.193558] ? cpuhp_invoke_callback+0x8c/0x13a
[ 0.193558] ? cpuhp_thread_fun+0xcf/0x11e
[ 0.193558] ? smpboot_thread_fn+0x17b/0x18f
[ 0.193558] ? kthread+0xd3/0xd5
[ 0.193558] ? smpboot_destroy_threads+0x65/0x65
[ 0.193558] ? __list_del_entry+0x1e/0x1e
[ 0.193558] ? ret_from_fork+0x1e/0x28
[ 0.193558] CR2: 0000000000ba0396
[ 0.193558] random: get_random_bytes called from init_oops_id+0x23/0x3a with crng_init=0
[ 0.193558] ---[ end trace ceabaa41d6971a28 ]---
[ 0.193558] EIP: rq_online_rt+0x6d/0xf5
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 7668c966e0f348d25e72f1328dd26341922db815 4dde821e4296e156d133b98ddc4c45861935a4fb --
git bisect good 01360de1d7b5b45251ec97307190c87a5588a5fc # 17:38 G 11 0 0 0 Merge branch 'x86/asm'
git bisect bad 25e083f9d3e0223945de0fa9682f94b404a0d704 # 17:42 B 0 11 27 2 Merge branch 'perf/urgent'
git bisect good 11f0e441c61714ecdec83290323cc7922883e06b # 17:47 G 10 0 2 2 Merge branch 'x86/apic'
git bisect bad 4a1021adac53483be80dcfea2032ce031b225783 # 18:01 B 0 10 24 0 Merge branch 'sched/core'
git bisect bad 8d2bd61bd89260d5da5eaef7cf264587f19a7e0d # 18:06 B 0 10 26 2 sched/core: Clean up sched_init() a bit
# first bad commit: [8d2bd61bd89260d5da5eaef7cf264587f19a7e0d] sched/core: Clean up sched_init() a bit
git bisect good 54dee406374ce8adb352c48e175176247cb8db7c # 18:13 G 31 0 2 2 Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
# extra tests with debug options
git bisect bad 8d2bd61bd89260d5da5eaef7cf264587f19a7e0d # 18:17 B 0 11 25 0 sched/core: Clean up sched_init() a bit
# extra tests on HEAD of tip/master
git bisect bad 7668c966e0f348d25e72f1328dd26341922db815 # 18:17 B 0 13 30 0 Merge branch 'linus'
# extra tests on tree/branch tip/sched/core
git bisect bad 8d2bd61bd89260d5da5eaef7cf264587f19a7e0d # 18:22 B 0 13 30 3 sched/core: Clean up sched_init() a bit
# extra tests with first bad commit reverted
git bisect good 8dad6492e10fe44e39e1296fe727fe9a0bb41135 # 18:37 G 10 0 0 0 Revert "sched/core: Clean up sched_init() a bit"
# extra tests on tree/branch tip/master
git bisect bad 7668c966e0f348d25e72f1328dd26341922db815 # 18:38 B 0 13 30 0 Merge branch 'linus'
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[sched] b0f464c9c8: WARNING:at_kernel/sched/sched.h:#pick_next_task_fair
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: b0f464c9c862c0c912ac3fe1edf8a157517157d0 ("sched: Add task_struct pointer to sched_class::set_curr_task")
https://github.com/digitalocean/linux-coresched coresched
in testcase: locktorture
with following parameters:
runtime: 300s
test: cpuhotplug
test-description: This torture test consists of creating a number of kernel threads which acquire the lock and hold it for specific amount of time, thus simulating different critical region behaviors.
test-url: https://www.kernel.org/doc/Documentation/locking/locktorture.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 2G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+------------------------------------------------------+------------+------------+
| | 4f53939e5f | b0f464c9c8 |
+------------------------------------------------------+------------+------------+
| boot_successes | 5 | 10 |
| boot_failures | 1 | 16 |
| BUG:kernel_reboot-without-warning_in_test_stage | 1 | |
| WARNING:at_kernel/sched/sched.h:#pick_next_task_fair | 0 | 16 |
| RIP:pick_next_task_fair | 0 | 16 |
| WARNING:at_kernel/sched/sched.h:#sched_cpu_dying | 0 | 16 |
| RIP:sched_cpu_dying | 0 | 16 |
+------------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 276.263169] WARNING: CPU: 0 PID: 11 at kernel/sched/sched.h:1719 pick_next_task_fair+0x754/0x780
[ 276.265858] Modules linked in: locktorture torture ppdev snd_pcm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel snd_timer snd soundcore bochs_drm pcspkr joydev ttm serio_raw drm_kms_helper drm ata_generic pata_acpi i2c_piix4 parport_pc floppy parport qemu_fw_cfg
[ 276.272254] CPU: 0 PID: 11 Comm: migration/0 Not tainted 5.1.0-10881-gb0f464c #1
[ 276.274218] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 276.276415] RIP: 0010:pick_next_task_fair+0x754/0x780
[ 276.277773] Code: 48 0f af 85 18 01 00 00 48 83 c1 01 48 f7 f1 48 89 83 10 0a 00 00 e9 6a fa ff ff bf 02 00 00 00 e8 e1 33 ff ff e9 08 fa ff ff <0f> 0b e9 1d fb ff ff 80 3d e7 50 61 01 00 0f 85 30 fc ff ff e8 83
[ 276.282716] RSP: 0018:ffffbab3c0377d08 EFLAGS: 00010006
[ 276.284174] RAX: ffffffff90011dc0 RBX: ffffa0316be2aec0 RCX: ffffffff9064d0c0
[ 276.286075] RDX: ffffbab3c0377d90 RSI: 0000000000000001 RDI: ffffa0316be2aec0
[ 276.287968] RBP: ffffbab3c0377dd0 R08: 000000405290c768 R09: 0000000000000005
[ 276.289876] R10: ffffbab3c0377d48 R11: 000000000000138e R12: ffffffff9064d0c0
[ 276.291768] R13: ffffffff90011fc0 R14: ffffbab3c0377d90 R15: 0000000000000000
[ 276.293658] FS: 0000000000000000(0000) GS:ffffa0316be00000(0000) knlGS:0000000000000000
[ 276.295803] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 276.297325] CR2: 00005608c15f8d60 CR3: 000000007efe0000 CR4: 00000000000006f0
[ 276.299238] Call Trace:
[ 276.301855] sched_cpu_dying+0x29a/0x3c0
[ 276.302925] ? sched_cpu_starting+0xd0/0xd0
[ 276.304034] cpuhp_invoke_callback+0x86/0x5d0
[ 276.305221] ? cpu_disable_common+0x1cf/0x1f0
[ 276.306375] take_cpu_down+0x60/0xa0
[ 276.307330] multi_cpu_stop+0x68/0xe0
[ 276.308308] ? cpu_stopper_thread+0x100/0x100
[ 276.309469] cpu_stopper_thread+0x94/0x100
[ 276.310591] ? smpboot_thread_fn+0x2f/0x1e0
[ 276.311725] ? smpboot_thread_fn+0x74/0x1e0
[ 276.312843] ? smpboot_thread_fn+0x14e/0x1e0
[ 276.313987] smpboot_thread_fn+0x149/0x1e0
[ 276.315077] ? sort_range+0x20/0x20
[ 276.316034] kthread+0x11e/0x140
[ 276.316903] ? kthread_park+0x90/0x90
[ 276.317889] ret_from_fork+0x35/0x40
[ 276.318851] ---[ end trace 4938a64101275eb2 ]---
To reproduce:
# build kernel
cd linux
cp config-5.1.0-10881-gb0f464c .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
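The lkp qemu recipe above reproduces the exact environment. As a rougher, stand-alone smoke test on a kernel already built from this commit, locktorture can also be loaded directly; the snippet below is only an illustration (the parameter values are guesses, not the ones lkp's cpuhotplug job actually uses):
# exercise lock torturing together with CPU on/offlining, which is the path that warns above
modprobe locktorture torture_type=mutex_lock onoff_holdoff=10 onoff_interval=3
sleep 300
rmmod locktorture
dmesg | grep -iE 'locktorture|WARNING'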
Thanks,
lkp
71ab3fd618: kernel_selftests.net.pmtu.sh.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 71ab3fd618a57999aca56f35c8a68193737ef5a0 ("fold into pmtu run_test refactor")
https://github.com/dsahern/linux 5.2-next-nh-v3
in testcase: kernel_selftests
with following parameters:
group: kselftests-02
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
KERNEL SELFTESTS: linux_headers_dir is /usr/src/linux-headers-x86_64-rhel-7.6-71ab3fd618a57999aca56f35c8a68193737ef5a0
media_tests test: not in Makefile
2019-05-24 13:31:27 make TARGETS=media_tests
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-71ab3fd618a57999aca56f35c8a68193737ef5a0/tools/testing/selftests/media_tests'
gcc -I../ -I../../../../usr/include/ media_device_test.c -o /usr/src/perf_selftests-x86_64-rhel-7.6-71ab3fd618a57999aca56f35c8a68193737ef5a0/tools/testing/selftests/media_tests/media_device_test
gcc -I../ -I../../../../usr/include/ media_device_open.c -o /usr/src/perf_selftests-x86_64-rhel-7.6-71ab3fd618a57999aca56f35c8a68193737ef5a0/tools/testing/selftests/media_tests/media_device_open
gcc -I../ -I../../../../usr/include/ video_device_test.c -o /usr/src/perf_selftests-x86_64-rhel-7.6-71ab3fd618a57999aca56f35c8a68193737ef5a0/tools/testing/selftests/media_tests/video_device_test
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-71ab3fd618a57999aca56f35c8a68193737ef5a0/tools/testing/selftests/media_tests'
ignored_by_lkp media_tests test
selftests: net: fib-onlink-tests.sh
========================================
########################################
Configuring interfaces
RTNETLINK answers: File exists
not ok 1..14 selftests: net: fib-onlink-tests.sh [FAIL]
selftests: net: pmtu.sh
========================================
TEST: ipv4: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
RTNETLINK answers: Network is unreachable
TEST: ipv4: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding MTU
TEST: ipv6: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
RTNETLINK answers: Network is unreachable
TEST: ipv6: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding MTU
TEST: IPv4 over vxlan4: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv4 over vxlan4: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on vxlan interface
TEST: IPv6 over vxlan4: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv6 over vxlan4: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on vxlan interface
TEST: IPv4 over vxlan6: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv4 over vxlan6: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on vxlan interface
TEST: IPv6 over vxlan6: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv6 over vxlan6: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on vxlan interface
TEST: IPv4 over geneve4: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv4 over geneve4: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on geneve interface
TEST: IPv6 over geneve4: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv6 over geneve4: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on geneve interface
TEST: IPv4 over geneve6: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv4 over geneve6: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on geneve interface
TEST: IPv6 over geneve6: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv6 over geneve6: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on geneve interface
TEST: IPv4 over fou4: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv4 over fou4: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on fou interface
TEST: IPv6 over fou4: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv6 over fou4: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on fou interface
TEST: IPv4 over fou6: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv4 over fou6: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on fou interface
TEST: IPv6 over fou6: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv6 over fou6: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on fou interface
TEST: IPv4 over gue4: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv4 over gue4: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on gue interface
TEST: IPv6 over gue4: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv6 over gue4: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on gue interface
TEST: IPv4 over gue6: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv4 over gue6: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on gue interface
TEST: IPv6 over gue6: PMTU exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: IPv6 over gue6: PMTU exceptions - nexthop objects [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on gue interface
xfrm6 not supported
TEST: vti6: PMTU exceptions [SKIP]
xfrm6 not supported
TEST: vti6: PMTU exceptions - nexthop objects [SKIP]
xfrm4 not supported
TEST: vti4: PMTU exceptions [SKIP]
xfrm4 not supported
TEST: vti4: PMTU exceptions - nexthop objects [SKIP]
TEST: vti4: default MTU assignment [ OK ]
TEST: vti4: default MTU assignment - nexthop objects [ OK ]
TEST: vti6: default MTU assignment [ OK ]
TEST: vti6: default MTU assignment - nexthop objects [ OK ]
TEST: vti4: MTU setting on link creation [ OK ]
TEST: vti4: MTU setting on link creation - nexthop objects [ OK ]
TEST: vti6: MTU setting on link creation [ OK ]
TEST: vti6: MTU setting on link creation - nexthop objects [ OK ]
TEST: vti6: MTU changes on link changes [ OK ]
TEST: vti6: MTU changes on link changes - nexthop objects [ OK ]
TEST: ipv4: cleanup of cached exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: ipv4: cleanup of cached exceptions - nexthop objects [ OK ]
TEST: ipv6: cleanup of cached exceptions [ OK ]
new routing
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Object "nexthop" is unknown, try "ip help".
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
Error: either "to" is duplicate, or "nhid" is a garbage.
TEST: ipv6: cleanup of cached exceptions - nexthop objects [ OK ]
not ok 1..15 selftests: net: pmtu.sh [FAIL]
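Every failing case above is a "- nexthop objects" variant, and each failure is preceded by iproute2's "Object "nexthop" is unknown, try "ip help"." message, which suggests the ip binary in the test image simply predates nexthop-object support, so the setup commands fail before the kernel's PMTU handling is ever exercised. A pre-flight check along the following lines (purely illustrative, not part of pmtu.sh) distinguishes that situation from a real kernel failure:
# succeeds only when both the installed ip binary and the running kernel understand nexthop objects
if ip nexthop show >/dev/null 2>&1; then
    echo "nexthop objects supported; the '- nexthop objects' cases can run"
else
    echo "no 'ip nexthop' support; expect those cases to fail during setup" >&2
fi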
To reproduce:
# build kernel
cd linux
cp config-5.1.0-rc7-02038-g71ab3fd .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[btrfs] 302167c50b: fio.write_bw_MBps -12.4% regression
by kernel test robot
Greeting,
FYI, we noticed a -12.4% regression of fio.write_bw_MBps due to commit:
commit: 302167c50b32e7fccc98994a91d40ddbbab04e52 ("btrfs: don't end the transaction for delayed refs in throttle")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git pending-fixes
in testcase: fio-basic
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
with following parameters:
runtime: 300s
nr_task: 8t
disk: 1SSD
fs: btrfs
rw: randwrite
bs: 4k
ioengine: sync
test_size: 400g
cpufreq_governor: performance
ucode: 0xb00002e
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
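For anyone who wants to approximate the workload without the lkp harness, the parameters above (8 tasks doing 4k sync random writes for 300s against a 400g working set on a btrfs-formatted SSD) correspond roughly to a plain fio invocation like the one below. This is only a sketch: the job file lkp generates may set additional options, and /mnt/btrfs merely stands in for wherever the test SSD is mounted.
fio --name=randwrite --directory=/mnt/btrfs --rw=randwrite --bs=4k \
    --ioengine=sync --numjobs=8 --size=400g --runtime=300 --time_based \
    --group_reporting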
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/ucode:
4k/gcc-7/performance/1SSD/btrfs/sync/x86_64-rhel-7.2/8t/debian-x86_64-2018-04-03.cgz/300s/randwrite/lkp-bdw-ep3b/400g/fio-basic/0xb00002e
commit:
a627947076 ("Btrfs: fix deadlock when allocating tree block during leaf/node split")
302167c50b ("btrfs: don't end the transaction for delayed refs in throttle")
a6279470762c19ba 302167c50b32e7fccc98994a91
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
0.02 ± 4% -0.0 0.01 fio.latency_100ms%
41.36 ± 2% -14.7 26.66 ± 12% fio.latency_100us%
0.85 ± 6% +0.3 1.14 ± 14% fio.latency_10us%
0.01 +0.0 0.02 ± 3% fio.latency_2000ms%
0.02 ± 18% -0.0 0.01 ± 5% fio.latency_20ms%
0.50 ± 11% +0.1 0.56 ± 11% fio.latency_20us%
0.03 ± 11% -0.0 0.01 ± 10% fio.latency_250ms%
8.90 ± 5% -2.1 6.80 ± 3% fio.latency_250us%
0.03 ± 7% -0.0 0.02 ± 7% fio.latency_500ms%
0.03 ± 15% -0.0 0.01 fio.latency_50ms%
41.49 ± 3% +16.2 57.73 ± 5% fio.latency_50us%
44895412 ± 2% -12.5% 39295860 fio.time.file_system_outputs
36.25 ± 3% -16.6% 30.25 fio.time.percent_of_cpu_this_job_got
98.06 ± 3% -18.2% 80.23 fio.time.system_time
5558064 ± 2% -12.7% 4851975 fio.time.voluntary_context_switches
5610728 ± 2% -12.5% 4909544 fio.workload
72.97 ± 2% -12.4% 63.91 fio.write_bw_MBps
427.18 ± 2% +14.2% 487.93 fio.write_clat_mean_us
13691 ± 2% +43.7% 19669 fio.write_clat_stddev
18680 ± 2% -12.4% 16360 fio.write_iops
0.97 -0.7 0.30 ± 2% mpstat.cpu.iowait%
3.94 ± 3% -1.5 2.40 mpstat.cpu.sys%
2875717 -13.4% 2489058 softirqs.BLOCK
5107622 ± 3% +27.5% 6510241 ± 4% softirqs.RCU
30695 ± 15% -30.2% 21424 ± 11% numa-meminfo.node0.Writeback
179069 ± 19% +134.0% 419038 ± 20% numa-meminfo.node1.Active
36182 ±105% +701.8% 290125 ± 30% numa-meminfo.node1.Active(file)
1.096e+09 ± 3% -22.2% 8.531e+08 ± 7% cpuidle.C1.time
57940399 -34.0% 38218420 ± 4% cpuidle.C1.usage
13565831 ± 7% -67.4% 4420507 ± 16% cpuidle.POLL.time
4064467 ± 5% -72.0% 1136676 ± 12% cpuidle.POLL.usage
124.33 ± 2% -59.2% 50.74 ± 3% iostat.sda.avgqu-sz
18410 -13.2% 15975 iostat.sda.w/s
300245 -21.0% 237217 iostat.sda.wkB/s
9.15 ± 10% -42.0% 5.31 ± 19% iostat.sda.wrqm/s
300252 -21.0% 237234 vmstat.io.bo
1.00 -100.0% 0.00 vmstat.procs.b
3.00 -33.3% 2.00 vmstat.procs.r
392814 -36.9% 247683 vmstat.system.cs
12975351 -10.0% 11683920 meminfo.Inactive
12742134 -10.1% 11450539 meminfo.Inactive(file)
1336423 -10.4% 1197060 meminfo.SUnreclaim
36875 ± 15% -35.8% 23682 ± 8% meminfo.Writeback
97963 ± 4% -9.3% 88890 ± 2% meminfo.max_used_kB
9315760 ± 11% -24.4% 7044222 ± 9% numa-vmstat.node0.nr_dirtied
7593 ± 15% -30.2% 5301 ± 8% numa-vmstat.node0.nr_writeback
9253810 ± 11% -24.4% 6992866 ± 9% numa-vmstat.node0.nr_written
9053 ±105% +699.4% 72375 ± 30% numa-vmstat.node1.nr_active_file
9053 ±105% +699.4% 72375 ± 30% numa-vmstat.node1.nr_zone_active_file
197.50 ± 2% -20.8% 156.50 ± 4% turbostat.Avg_MHz
7.59 ± 4% -1.1 6.45 ± 7% turbostat.Busy%
57935368 -34.0% 38214519 ± 4% turbostat.C1
3.97 ± 3% -0.9 3.10 ± 7% turbostat.C1%
117.34 ± 5% -10.4% 105.14 ± 3% turbostat.PkgWatt
6.93 -5.8% 6.53 ± 3% turbostat.RAMWatt
23703837 -21.2% 18668822 proc-vmstat.nr_dirtied
11565487 +2.6% 11866577 proc-vmstat.nr_free_pages
3186566 -10.0% 2867899 proc-vmstat.nr_inactive_file
14987 -2.0% 14683 proc-vmstat.nr_kernel_stack
203124 -2.2% 198730 proc-vmstat.nr_slab_reclaimable
334281 -10.4% 299452 proc-vmstat.nr_slab_unreclaimable
23643508 -21.2% 18622029 proc-vmstat.nr_written
3186566 -10.0% 2867899 proc-vmstat.nr_zone_inactive_file
9200220 ± 4% -16.8% 7655217 ± 2% proc-vmstat.numa_hit
9182883 ± 4% -16.8% 7637938 ± 2% proc-vmstat.numa_local
15866899 ± 3% -34.3% 10421136 ± 2% proc-vmstat.pgalloc_normal
15347481 -37.3% 9620050 ± 3% proc-vmstat.pgfree
94578712 -21.2% 74490196 proc-vmstat.pgpgout
1.653e+09 -28.2% 1.188e+09 ± 2% perf-stat.i.branch-instructions
16239810 ± 6% -20.2% 12960638 ± 7% perf-stat.i.cache-misses
1.771e+08 ± 4% -21.6% 1.389e+08 ± 6% perf-stat.i.cache-references
397106 -37.0% 250140 perf-stat.i.context-switches
1.75e+10 ± 5% -21.7% 1.37e+10 ± 6% perf-stat.i.cpu-cycles
8.56 ± 17% -55.8% 3.79 ± 15% perf-stat.i.cpu-migrations
2.408e+09 -24.3% 1.823e+09 ± 2% perf-stat.i.dTLB-loads
1.351e+09 ± 6% -18.8% 1.097e+09 ± 2% perf-stat.i.dTLB-stores
6077563 ± 3% -14.6% 5188983 ± 6% perf-stat.i.iTLB-loads
8.756e+09 -25.6% 6.518e+09 perf-stat.i.instructions
48.01 ± 18% +12.6 60.57 ± 7% perf-stat.i.node-load-miss-rate%
2697176 ± 11% -36.8% 1705410 ± 12% perf-stat.i.node-loads
50.90 ± 16% +12.8 63.72 ± 5% perf-stat.overall.node-load-miss-rate%
486504 ± 2% -15.1% 412869 ± 2% perf-stat.overall.path-length
1.648e+09 -28.2% 1.184e+09 ± 2% perf-stat.ps.branch-instructions
16185048 ± 6% -20.2% 12917198 ± 7% perf-stat.ps.cache-misses
1.765e+08 ± 4% -21.6% 1.384e+08 ± 6% perf-stat.ps.cache-references
395744 -37.0% 249290 perf-stat.ps.context-switches
1.744e+10 ± 5% -21.7% 1.365e+10 ± 6% perf-stat.ps.cpu-cycles
8.54 ± 17% -55.7% 3.78 ± 15% perf-stat.ps.cpu-migrations
2.4e+09 -24.3% 1.817e+09 ± 2% perf-stat.ps.dTLB-loads
1.347e+09 ± 6% -18.8% 1.094e+09 ± 2% perf-stat.ps.dTLB-stores
6056751 ± 3% -14.6% 5171616 ± 6% perf-stat.ps.iTLB-loads
8.727e+09 -25.6% 6.497e+09 perf-stat.ps.instructions
2688159 ± 11% -36.8% 1699709 ± 12% perf-stat.ps.node-loads
2.729e+12 -25.7% 2.026e+12 perf-stat.total.instructions
7679 ± 2% -37.9% 4771 ± 7% sched_debug.cfs_rq:/.exec_clock.avg
25109 ± 10% -20.3% 20001 ± 12% sched_debug.cfs_rq:/.exec_clock.max
6099 ± 20% -24.1% 4629 ± 7% sched_debug.cfs_rq:/.exec_clock.stddev
96721 ± 8% -43.2% 54939 ± 37% sched_debug.cfs_rq:/.load.avg
243210 ± 4% -27.0% 177643 ± 21% sched_debug.cfs_rq:/.load.stddev
105.27 ± 15% -43.2% 59.81 ± 22% sched_debug.cfs_rq:/.load_avg.avg
197.18 ± 11% -21.2% 155.31 ± 8% sched_debug.cfs_rq:/.load_avg.stddev
0.13 ± 6% -31.5% 0.09 ± 25% sched_debug.cfs_rq:/.nr_running.avg
49.64 ± 12% -49.3% 25.18 ± 28% sched_debug.cfs_rq:/.runnable_load_avg.avg
689.54 ± 4% -9.3% 625.71 ± 5% sched_debug.cfs_rq:/.runnable_load_avg.max
142.56 ± 7% -26.4% 104.98 ± 12% sched_debug.cfs_rq:/.runnable_load_avg.stddev
97240 ± 8% -46.4% 52094 ± 33% sched_debug.cfs_rq:/.runnable_weight.avg
243593 ± 4% -28.5% 174272 ± 19% sched_debug.cfs_rq:/.runnable_weight.stddev
147.89 ± 8% -27.2% 107.65 ± 13% sched_debug.cfs_rq:/.util_avg.avg
192.27 ± 8% -20.2% 153.44 ± 8% sched_debug.cfs_rq:/.util_avg.stddev
43.61 ± 16% -60.4% 17.27 ± 44% sched_debug.cfs_rq:/.util_est_enqueued.avg
493.75 ± 16% -44.6% 273.67 ± 33% sched_debug.cfs_rq:/.util_est_enqueued.max
120.70 ± 13% -52.0% 57.95 ± 35% sched_debug.cfs_rq:/.util_est_enqueued.stddev
26.69 ± 32% -43.1% 15.20 ± 13% sched_debug.cpu.cpu_load[0].avg
107.63 ± 13% -22.0% 84.01 ± 8% sched_debug.cpu.cpu_load[0].stddev
28.23 ± 30% -46.4% 15.13 ± 10% sched_debug.cpu.cpu_load[1].avg
96.80 ± 14% -19.5% 77.96 ± 4% sched_debug.cpu.cpu_load[1].stddev
28.35 ± 27% -50.8% 13.93 ± 13% sched_debug.cpu.cpu_load[2].avg
26.83 ± 28% -54.8% 12.13 ± 16% sched_debug.cpu.cpu_load[3].avg
76.35 ± 21% -27.2% 55.61 ± 9% sched_debug.cpu.cpu_load[3].stddev
24.61 ± 29% -58.0% 10.35 ± 18% sched_debug.cpu.cpu_load[4].avg
67.78 ± 23% -29.6% 47.73 ± 16% sched_debug.cpu.cpu_load[4].stddev
217.01 ± 9% -29.1% 153.85 ± 11% sched_debug.cpu.curr->pid.avg
65004 ± 18% -52.3% 31025 ± 31% sched_debug.cpu.load.avg
200774 ± 8% -31.1% 138243 ± 19% sched_debug.cpu.load.stddev
0.09 ± 12% -33.7% 0.06 ± 17% sched_debug.cpu.nr_running.avg
0.27 ± 5% -16.5% 0.23 ± 9% sched_debug.cpu.nr_running.stddev
735069 -32.2% 498554 sched_debug.cpu.nr_switches.avg
2860144 ± 11% -27.4% 2076064 ± 13% sched_debug.cpu.nr_switches.max
665483 ± 24% -31.6% 455234 ± 9% sched_debug.cpu.nr_switches.stddev
0.13 ± 7% -30.1% 0.09 ± 12% sched_debug.cpu.nr_uninterruptible.avg
735117 -32.2% 498430 sched_debug.cpu.sched_count.avg
2858539 ± 11% -27.4% 2076509 ± 13% sched_debug.cpu.sched_count.max
665356 ± 24% -31.6% 454947 ± 9% sched_debug.cpu.sched_count.stddev
366543 -32.2% 248579 sched_debug.cpu.sched_goidle.avg
1428344 ± 11% -27.4% 1036752 ± 13% sched_debug.cpu.sched_goidle.max
332365 ± 24% -31.6% 227301 ± 9% sched_debug.cpu.sched_goidle.stddev
368002 -32.2% 249386 sched_debug.cpu.ttwu_count.avg
3059342 -9.8% 2760232 slabinfo.Acpi-State.active_objs
60835 -10.0% 54758 slabinfo.Acpi-State.active_slabs
3102644 -10.0% 2792672 slabinfo.Acpi-State.num_objs
60835 -10.0% 54758 slabinfo.Acpi-State.num_slabs
40884 ± 7% -42.6% 23477 ± 21% slabinfo.avc_xperms_data.active_objs
323.25 ± 7% -41.8% 188.00 ± 21% slabinfo.avc_xperms_data.active_slabs
41459 ± 7% -41.8% 24144 ± 21% slabinfo.avc_xperms_data.num_objs
323.25 ± 7% -41.8% 188.00 ± 21% slabinfo.avc_xperms_data.num_slabs
1524 ± 18% -25.4% 1136 ± 11% slabinfo.biovec-128.active_objs
1536 ± 18% -24.8% 1155 ± 11% slabinfo.biovec-128.num_objs
1681 ± 7% -20.8% 1331 ± 13% slabinfo.biovec-64.active_objs
1681 ± 7% -20.8% 1331 ± 13% slabinfo.biovec-64.num_objs
2654 ± 10% -56.1% 1166 ± 13% slabinfo.biovec-max.active_objs
671.00 ± 10% -55.3% 300.00 ± 12% slabinfo.biovec-max.active_slabs
2685 ± 10% -55.3% 1201 ± 12% slabinfo.biovec-max.num_objs
671.00 ± 10% -55.3% 300.00 ± 12% slabinfo.biovec-max.num_slabs
21641 ± 9% -12.3% 18989 ± 7% slabinfo.btrfs_delayed_ref_head.active_objs
22866 ± 8% -10.1% 20556 ± 7% slabinfo.btrfs_delayed_ref_head.num_objs
67913 ± 4% -12.5% 59451 ± 3% slabinfo.btrfs_extent_buffer.active_objs
1237 ± 4% -14.7% 1055 ± 3% slabinfo.btrfs_extent_buffer.active_slabs
71775 ± 4% -14.7% 61246 ± 3% slabinfo.btrfs_extent_buffer.num_objs
1237 ± 4% -14.7% 1055 ± 3% slabinfo.btrfs_extent_buffer.num_slabs
6184518 -10.1% 5562477 slabinfo.btrfs_extent_map.active_objs
110462 -10.1% 99345 slabinfo.btrfs_extent_map.active_slabs
6185888 -10.1% 5563352 slabinfo.btrfs_extent_map.num_objs
110462 -10.1% 99345 slabinfo.btrfs_extent_map.num_slabs
26097 ± 3% -27.1% 19016 ± 9% slabinfo.btrfs_ordered_extent.active_objs
673.75 ± 4% -26.8% 493.50 ± 9% slabinfo.btrfs_ordered_extent.active_slabs
26301 ± 4% -26.8% 19264 ± 9% slabinfo.btrfs_ordered_extent.num_objs
673.75 ± 4% -26.8% 493.50 ± 9% slabinfo.btrfs_ordered_extent.num_slabs
13863 ± 5% -39.9% 8328 ± 17% slabinfo.btrfs_path.active_objs
387.25 ± 5% -39.4% 234.50 ± 16% slabinfo.btrfs_path.active_slabs
13954 ± 5% -39.3% 8467 ± 16% slabinfo.btrfs_path.num_objs
387.25 ± 5% -39.4% 234.50 ± 16% slabinfo.btrfs_path.num_slabs
13884 ± 9% -25.6% 10330 ± 15% slabinfo.kmalloc-128.active_objs
439.75 ± 8% -24.7% 331.25 ± 15% slabinfo.kmalloc-128.active_slabs
14089 ± 8% -24.6% 10617 ± 15% slabinfo.kmalloc-128.num_objs
439.75 ± 8% -24.7% 331.25 ± 15% slabinfo.kmalloc-128.num_slabs
1554 ± 3% -10.8% 1386 ± 5% slabinfo.kmalloc-rcl-96.active_objs
1554 ± 3% -10.8% 1386 ± 5% slabinfo.kmalloc-rcl-96.num_objs
10158 ± 8% -28.3% 7284 ± 15% slabinfo.mnt_cache.active_objs
10369 ± 8% -26.9% 7581 ± 14% slabinfo.mnt_cache.num_objs
1660 ± 7% -15.2% 1408 ± 11% slabinfo.skbuff_fclone_cache.active_objs
1660 ± 7% -15.2% 1408 ± 11% slabinfo.skbuff_fclone_cache.num_objs
17.20 ± 15% -10.1 7.14 ± 5% perf-profile.calltrace.cycles-pp.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper.process_one_work.worker_thread
19.67 ± 13% -9.6 10.08 ± 7% perf-profile.calltrace.cycles-pp.btrfs_finish_ordered_io.normal_work_helper.process_one_work.worker_thread.kthread
14.18 ± 16% -9.5 4.73 ± 6% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper.process_one_work
20.52 ± 13% -9.1 11.40 ± 5% perf-profile.calltrace.cycles-pp.normal_work_helper.process_one_work.worker_thread.kthread.ret_from_fork
27.59 ± 9% -8.7 18.88 ± 4% perf-profile.calltrace.cycles-pp.ret_from_fork
27.59 ± 9% -8.7 18.88 ± 4% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
24.79 ± 10% -6.3 18.45 ± 4% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
25.03 ± 9% -6.2 18.79 ± 4% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
5.57 ± 21% -4.2 1.36 ± 7% perf-profile.calltrace.cycles-pp.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper
5.55 ± 21% -4.2 1.35 ± 7% perf-profile.calltrace.cycles-pp.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
4.87 ± 20% -3.6 1.31 ± 10% perf-profile.calltrace.cycles-pp.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper
4.84 ± 20% -3.6 1.28 ± 10% perf-profile.calltrace.cycles-pp.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
3.84 ± 24% -3.1 0.75 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait_event.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot
3.76 ± 24% -3.0 0.72 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait_event.btrfs_tree_lock.btrfs_lock_root_node
3.60 ± 22% -2.8 0.81 ± 9% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait_event.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot
3.54 ± 22% -2.7 0.79 ± 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait_event.btrfs_tree_read_lock.btrfs_read_lock_root_node
3.47 ± 19% -2.7 0.80 ± 6% perf-profile.calltrace.cycles-pp.prepare_to_wait_event.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written
3.25 ± 17% -2.4 0.85 ± 10% perf-profile.calltrace.cycles-pp.prepare_to_wait_event.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written
1.85 ± 8% -1.2 0.65 ± 3% perf-profile.calltrace.cycles-pp.unlock_up.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper
1.83 ± 9% -1.2 0.63 ± 4% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.unlock_up.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
1.45 ± 17% -1.2 0.26 ±100% perf-profile.calltrace.cycles-pp.btrfs_search_slot.setup_leaf_for_split.btrfs_duplicate_item.btrfs_mark_extent_written.btrfs_finish_ordered_io
1.71 ± 8% -1.1 0.60 ± 5% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.unlock_up.btrfs_search_slot.btrfs_mark_extent_written
1.69 ± 9% -1.1 0.59 ± 6% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.unlock_up.btrfs_search_slot
1.63 ± 9% -1.1 0.57 ± 7% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.unlock_up
2.12 ± 13% -0.7 1.43 ± 5% perf-profile.calltrace.cycles-pp.setup_leaf_for_split.btrfs_duplicate_item.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper
2.75 ± 10% -0.7 2.09 ± 5% perf-profile.calltrace.cycles-pp.btrfs_duplicate_item.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper.process_one_work
0.76 ± 5% -0.2 0.57 ± 8% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.73 ± 5% -0.2 0.56 ± 8% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
0.99 ± 7% -0.1 0.87 ± 6% perf-profile.calltrace.cycles-pp.__btrfs_cow_block.btrfs_cow_block.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
0.99 ± 7% -0.1 0.87 ± 7% perf-profile.calltrace.cycles-pp.btrfs_cow_block.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper
0.88 ± 10% +0.3 1.21 ± 13% perf-profile.calltrace.cycles-pp.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
0.27 ±100% +0.4 0.62 ± 10% perf-profile.calltrace.cycles-pp.setup_items_for_insert.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io
0.99 ± 10% +0.4 1.38 ± 7% perf-profile.calltrace.cycles-pp.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io.normal_work_helper
1.71 ± 17% +0.5 2.17 ± 13% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.29 ±100% +0.5 0.76 ± 12% perf-profile.calltrace.cycles-pp.__next_timer_interrupt.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select
1.50 ± 7% +0.5 2.00 ± 9% perf-profile.calltrace.cycles-pp.add_pending_csums.btrfs_finish_ordered_io.normal_work_helper.process_one_work.worker_thread
1.49 ± 7% +0.5 2.00 ± 9% perf-profile.calltrace.cycles-pp.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io.normal_work_helper.process_one_work
0.31 ±103% +0.5 0.83 ± 12% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.15 ±173% +0.6 0.75 ± 6% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io
0.14 ±173% +0.6 0.75 ± 12% perf-profile.calltrace.cycles-pp.split_leaf.setup_leaf_for_split.btrfs_duplicate_item.btrfs_mark_extent_written.btrfs_finish_ordered_io
0.29 ±100% +0.6 0.91 ± 27% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.08 ± 18% +0.6 1.71 ± 12% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.7 0.66 ± 13% perf-profile.calltrace.cycles-pp.push_leaf_right.split_leaf.setup_leaf_for_split.btrfs_duplicate_item.btrfs_mark_extent_written
1.21 ± 19% +0.7 1.92 ± 12% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
1.37 ± 20% +0.8 2.20 ± 12% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
3.28 ± 11% +1.1 4.33 ± 10% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +1.1 1.13 ± 24% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.flush_space
0.00 +1.1 1.14 ± 24% perf-profile.calltrace.cycles-pp.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space
0.00 +1.1 1.15 ± 23% perf-profile.calltrace.cycles-pp.btrfs_write_and_wait_transaction.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
2.54 ± 17% +1.3 3.81 ± 12% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.00 +1.4 1.35 ± 16% perf-profile.calltrace.cycles-pp.btrfs_run_delayed_refs.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
0.00 +1.4 1.35 ± 16% perf-profile.calltrace.cycles-pp.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space
3.59 ± 12% +1.9 5.50 ± 8% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.15 ±173% +2.5 2.67 ± 14% perf-profile.calltrace.cycles-pp.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread.kthread.ret_from_fork
0.15 ±173% +2.5 2.67 ± 14% perf-profile.calltrace.cycles-pp.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread.kthread
0.00 +2.7 2.67 ± 14% perf-profile.calltrace.cycles-pp.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread
6.62 ± 16% +3.0 9.58 ± 5% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
7.43 ± 10% +3.3 10.76 ± 2% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
52.83 ± 4% +4.2 57.01 ± 4% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
62.44 ± 3% +7.2 69.61 ± 3% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
69.18 ± 3% +8.3 77.44 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
69.26 ± 3% +8.3 77.52 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
69.25 ± 3% +8.3 77.52 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
69.97 ± 3% +8.8 78.74 perf-profile.calltrace.cycles-pp.secondary_startup_64
fio.write_clat_stddev
21000 +-+-----------------------------------------------------------------+
| O O O O |
20000 O-O O O O O O O O O O O O O O O O O O O |
19000 +-+ O O O |
| |
18000 +-+ |
17000 +-+ |
| |
16000 +-+ |
15000 +-+ |
| |
14000 +-+ .+ .+.. .+. .+. .+. .+.+.+.+.+.. .+.|
13000 +-+. .+. + .+.+.+.+.+ + +.+ + +..+.+.+ +.+.+ |
| + + |
12000 +-+-----------------------------------------------------------------+
fio.latency_2000ms_
0.019 O-+-----------------------------O---O------------O------------------+
| O O O O O O O O |
0.018 +-+ O O |
0.017 +-+ O O O O |
| O O O O O O O |
0.016 +-O |
0.015 +-+ O |
| |
0.014 +-+ |
0.013 +-+ |
| |
0.012 +-+ |
0.011 +-+ |
| |
0.01 +-+-----------------------------------------------------------------+
fio.time.voluntary_context_switches
5.8e+06 +-+---------------------------------------------------------------+
| + + |
5.6e+06 +-+ + +.. : + .+. .+ : : |
|. + : + : : +. + + : : : +.|
| + : + + : + +. .+. + : .+. .+.+.+ :+ |
5.4e+06 +-+ +.+ + + + + +.+.+..+.+ + |
| |
5.2e+06 +-+ |
| |
5e+06 +-+ O O O O O |
| O O O O O O |
O O O O O O O O O O O O O O |
4.8e+06 +-+ O O |
| |
4.6e+06 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm] e52271917f: BUG:sleeping_function_called_from_invalid_context_at_mm/slab.h
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: e52271917f9f5159c791eda8ba748a66d659c27e ("[PATCH v4 5/7] mm: rework non-root kmem_cache lifecycle management")
url: https://github.com/0day-ci/linux/commits/Roman-Gushchin/mm-reparent-slab-...
in testcase: nvml
with following parameters:
group: obj
test: non-pmem
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the following changes (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------------------------------------------+------------+------------+
| | ff756a15f3 | e52271917f |
+------------------------------------------------------------------------------+------------+------------+
| boot_successes | 5 | 4 |
| boot_failures | 861 | 852 |
| BUG:kernel_reboot-without-warning_in_test_stage | 738 | 163 |
| BUG:kernel_hang_in_boot_stage | 120 | 122 |
| BUG:soft_lockup-CPU##stuck_for#s | 4 | 1 |
| RIP:free_unref_page | 1 | |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 4 | 1 |
| RIP:free_reserved_area | 3 | 1 |
| BUG:sleeping_function_called_from_invalid_context_at_mm/slab.h | 0 | 560 |
| BUG:scheduling_while_atomic | 0 | 561 |
| WARNING:at_lib/usercopy.c:#_copy_to_user | 0 | 116 |
| RIP:_copy_to_user | 0 | 116 |
| WARNING:at_arch/x86/kernel/fpu/signal.c:#copy_fpstate_to_sigframe | 0 | 534 |
| RIP:copy_fpstate_to_sigframe | 0 | 532 |
| WARNING:at_arch/x86/kernel/signal.c:#do_signal | 0 | 527 |
| RIP:do_signal | 0 | 526 |
| WARNING:at_lib/usercopy.c:#_copy_from_user | 0 | 389 |
| RIP:_copy_from_user | 0 | 388 |
| kernel_BUG_at_mm/vmalloc.c | 0 | 304 |
| invalid_opcode:#[##] | 0 | 304 |
| RIP:__get_vm_area_node | 0 | 301 |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 294 |
| Kernel_panic-not_syncing:Aiee,killing_interrupt_handler | 0 | 155 |
| WARNING:at_fs/read_write.c:#vfs_write | 0 | 15 |
| RIP:vfs_write | 0 | 15 |
| BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/rwsem.c | 0 | 101 |
| BUG:sleeping_function_called_from_invalid_context_at_include/linux/uaccess.h | 0 | 54 |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode= | 0 | 47 |
| BUG:sleeping_function_called_from_invalid_context_at_lib/iov_iter.c | 0 | 1 |
| BUG:sleeping_function_called_from_invalid_context_at_fs/dcache.c | 0 | 57 |
| BUG:sleeping_function_called_from_invalid_context_at_mm/memory.c | 0 | 1 |
| BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/mutex.c | 0 | 104 |
| BUG:kernel_hang_in_test_stage | 0 | 5 |
| WARNING:at_arch/x86/include/asm/uaccess.h:#strncpy_from_user | 0 | 4 |
| RIP:strncpy_from_user | 0 | 4 |
| WARNING:at_fs/read_write.c:#vfs_read | 0 | 4 |
| RIP:vfs_read | 0 | 4 |
| BUG:sleeping_function_called_from_invalid_context_at_mm/filemap.c | 0 | 3 |
| BUG:sleeping_function_called_from_invalid_context_at_mm/page_alloc.c | 0 | 8 |
| BUG:sleeping_function_called_from_invalid_context_at_mm/gup.c | 0 | 1 |
| BUG:sleeping_function_called_from_invalid_context_at_include/linux/freezer.h | 0 | 1 |
| BUG:sleeping_function_called_from_invalid_context_at/kb | 0 | 1 |
+------------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 1025.218323] BUG: sleeping function called from invalid context at mm/slab.h:457
[ 1025.222456] in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: systemd
[ 1025.225612] CPU: 0 PID: 1 Comm: systemd Not tainted 5.1.0-12240-ge522719 #1
[ 1025.226200] BUG: scheduling while atomic: systemd-journal/187/0x00000031
[ 1025.228830] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1025.228832] Call Trace:
[ 1025.228850] dump_stack+0x5c/0x7b
[ 1025.230292] ___might_sleep+0xf1/0x110
[ 1025.233781] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1025.237363] kmem_cache_alloc+0x170/0x1c0
[ 1025.237371] security_file_alloc+0x24/0x90
[ 1025.237379] __alloc_file+0x4f/0xe0
[ 1025.237384] ? security_inode_permission+0x30/0x50
[ 1025.273837] alloc_empty_file+0x43/0xe0
[ 1025.278046] path_openat+0x4a/0x1550
[ 1025.282100] ? terminate_walk+0xed/0x100
[ 1025.286222] ? path_parentat+0x3c/0x80
[ 1025.290224] ? filename_parentat+0x10b/0x190
[ 1025.294306] do_filp_open+0x9b/0x110
[ 1025.298138] ? __d_lookup+0x65/0x150
[ 1025.301905] ? cp_new_stat+0x150/0x180
[ 1025.305720] ? _cond_resched+0x19/0x30
[ 1025.309143] ? kernfs_dop_revalidate+0xab/0xc0
[ 1025.311459] ? lookup_dcache+0x3b/0x60
[ 1025.313618] ? __check_object_size+0xcf/0x1a0
[ 1025.315852] ? do_sys_open+0x1bd/0x250
[ 1025.317905] do_sys_open+0x1bd/0x250
[ 1025.319925] do_syscall_64+0x5b/0x1e0
[ 1025.321948] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1025.324202] RIP: 0033:0x7f70f2df9840
[ 1025.326146] Code: 73 01 c3 48 8b 0d 68 77 20 00 f7 d8 64 89 01 48 83 c8 ff c3 66 0f 1f 44 00 00 83 3d 89 bb 20 00 00 75 10 b8 02 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 31 c3 48 83 ec 08 e8 1e f6 ff ff 48 89 04 24
[ 1025.332881] RSP: 002b:00007ffd9115fa28 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
[ 1025.335760] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f70f2df9840
[ 1025.338563] RDX: 0000000000000000 RSI: 0000000000080101 RDI: 000055b3e928f740
[ 1025.341351] RBP: 00007ffd9115faf0 R08: 0000000000000000 R09: 0000000000000040
[ 1025.344061] R10: 0000000000000075 R11: 0000000000000246 R12: 000055b3e928f740
[ 1025.346701] R13: 00007ffd9115faf0 R14: 00007ffd9115fac0 R15: 000055b3e925b4c0
[ 1025.349282] CPU: 1 PID: 187 Comm: systemd-journal Not tainted 5.1.0-12240-ge522719 #1
[ 1025.349486] BUG: scheduling while atomic: systemd/1/0x00000009
[ 1025.353183] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1025.353185] Call Trace:
[ 1025.353202] dump_stack+0x5c/0x7b
[ 1025.353209] __schedule_bug+0x55/0x70
[ 1025.355515] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1025.359746] __schedule+0x560/0x670
[ 1025.359755] ? ep_item_poll+0x3f/0xb0
[ 1025.359757] ? ep_poll+0x23e/0x510
[ 1025.390358] schedule+0x34/0xb0
[ 1025.392950] schedule_hrtimeout_range_clock+0x19e/0x1b0
[ 1025.396331] ? ep_read_events_proc+0xe0/0xe0
[ 1025.399354] ? ep_scan_ready_list+0x228/0x250
[ 1025.402581] ? ep_poll+0x23e/0x510
[ 1025.405262] ep_poll+0x21f/0x510
[ 1025.407839] ? wake_up_q+0x80/0x80
[ 1025.410156] do_epoll_wait+0xbd/0xe0
[ 1025.412637] __x64_sys_epoll_wait+0x1a/0x20
[ 1025.415418] do_syscall_64+0x5b/0x1e0
[ 1025.418110] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1025.420985] RIP: 0033:0x7f3c827f42e3
[ 1025.423515] Code: 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 83 3d 29 54 2b 00 00 75 13 49 89 ca b8 e8 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 34 c3 48 83 ec 08 e8 0b c2 00 00 48 89 04 24
[ 1025.432467] RSP: 002b:00007ffd45cf3588 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[ 1025.436394] RAX: ffffffffffffffda RBX: 00005652bbd42200 RCX: 00007f3c827f42e3
[ 1025.440324] RDX: 0000000000000017 RSI: 00007ffd45cf3590 RDI: 0000000000000008
[ 1025.444213] RBP: 00007ffd45cf37b0 R08: 000000005ce3fecc R09: 00007ffd45d850a0
[ 1025.448172] R10: 00000000ffffffff R11: 0000000000000246 R12: 00007ffd45cf3590
[ 1025.452040] R13: 0000000000000001 R14: ffffffffffffffff R15: 00058965eeb2926e
[ 1025.455776] CPU: 0 PID: 1 Comm: systemd Tainted: G W 5.1.0-12240-ge522719 #1
[ 1025.458800] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1025.461791] Call Trace:
[ 1025.463352] dump_stack+0x5c/0x7b
[ 1025.465118] __schedule_bug+0x55/0x70
[ 1025.466952] __schedule+0x560/0x670
[ 1025.468812] schedule+0x34/0xb0
[ 1025.470607] exit_to_usermode_loop+0x5c/0xf0
[ 1025.471094] systemd-journal[187]: segfault at 7f3c808da000 ip 00007f3c8321ba88 sp 00007ffd45cf0780 error 6 in libsystemd-shared-232.so[7f3c830f7000+192000]
[ 1025.472523] do_syscall_64+0x1a7/0x1e0
[ 1025.472528] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1025.472532] RIP: 0033:0x7f70f2df9840
[ 1025.472535] Code: 73 01 c3 48 8b 0d 68 77 20 00 f7 d8 64 89 01 48 83 c8 ff c3 66 0f 1f 44 00 00 83 3d 89 bb 20 00 00 75 10 b8 02 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 31 c3 48 83 ec 08 e8 1e f6 ff ff 48 89 04 24
[ 1025.472536] RSP: 002b:00007ffd9115fa28 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
[ 1025.472538] RAX: 0000000000000018 RBX: 0000000000000000 RCX: 00007f70f2df9840
[ 1025.472541] RDX: 0000000000000000 RSI: 0000000000080101 RDI: 000055b3e928f740
[ 1025.480301] Code: 08 4c 8d 4c 24 20 31 d2 49 89 d8 89 ee 4c 89 e7 e8 ad d4 ff ff 85 c0 78 40 48 8b 44 24 20 48 8b 74 24 08 48 c7 00 00 00 00 00 <48> 89 58 08 40 88 28 49 8b 94 24 c8 00 00 00 48 89 b2 88 00 00 00
[ 1025.482147] RBP: 00007ffd9115faf0 R08: 0000000000000000 R09: 0000000000000040
[ 1025.482148] R10: 0000000000000075 R11: 0000000000000246 R12: 000055b3e928f740
[ 1025.482149] R13: 00007ffd9115faf0 R14: 00007ffd9115fac0 R15: 000055b3e925b4c0
[ 1025.548055] BUG: scheduling while atomic: systemd-journal/187/0x00000041
[ 1025.552815] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1025.568486] BUG: scheduling while atomic: systemd/1/0x00000093
[ 1025.572547] CPU: 1 PID: 187 Comm: systemd-journal Tainted: G W 5.1.0-12240-ge522719 #1
[ 1025.574964] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1025.579611] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1025.579614] Call Trace:
[ 1025.579633] dump_stack+0x5c/0x7b
[ 1025.579641] __schedule_bug+0x55/0x70
[ 1025.605125] __schedule+0x560/0x670
[ 1025.608455] ? force_sig_info+0xc7/0xe0
[ 1025.611994] ? async_page_fault+0x8/0x30
[ 1025.615444] schedule+0x34/0xb0
[ 1025.617623] exit_to_usermode_loop+0x5c/0xf0
[ 1025.619711] prepare_exit_to_usermode+0xa0/0xe0
[ 1025.621827] retint_user+0x8/0x8
[ 1025.623637] RIP: 0033:0x7f3c8321ba88
[ 1025.625519] Code: 08 4c 8d 4c 24 20 31 d2 49 89 d8 89 ee 4c 89 e7 e8 ad d4 ff ff 85 c0 78 40 48 8b 44 24 20 48 8b 74 24 08 48 c7 00 00 00 00 00 <48> 89 58 08 40 88 28 49 8b 94 24 c8 00 00 00 48 89 b2 88 00 00 00
[ 1025.632024] RSP: 002b:00007ffd45cf0780 EFLAGS: 00010202
[ 1025.634366] RAX: 00007f3c808d9ff8 RBX: 000000000000005d RCX: 000000000022eff8
[ 1025.637138] RDX: 0000000000000000 RSI: 000000000022eff8 RDI: 00005652bbd44140
[ 1025.639921] RBP: 0000000000000001 R08: 000000000022f055 R09: 00005652bbd44140
[ 1025.642722] R10: 00007ffd45cf0730 R11: 00000000001eb892 R12: 00005652bbd43ea0
[ 1025.645511] R13: 00007ffd45cf08b0 R14: 00007ffd45cf08a8 R15: 000000003d5f1097
[ 1025.648317] CPU: 0 PID: 1 Comm: systemd Tainted: G W 5.1.0-12240-ge522719 #1
[ 1025.652678] meminfo[1008]: segfault at 5636a4cb5f1c ip 00005636a3adfbec sp 00007ffd8a3a5a50 error 7
[ 1025.652682] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1025.652683] in dash[5636a3ad4000+1b000]
[ 1025.657131] Call Trace:
[ 1025.657150] dump_stack+0x5c/0x7b
[ 1025.657157] __schedule_bug+0x55/0x70
[ 1025.660476] Code: 89 c6 74 06 48 8b 43 10 8b 30 89 ef e8 55 79 ff ff 41 83 fd 01 0f 84 73 01 00 00 0f b7 43 1c 48 8b 53 10 8d 48 01 48 c1 e0 04 <66> 89 4b 1c 48 8d 1c 02 48 8d 05 a5 30 21 00 48 89 43 08 8b 05 67
[ 1025.663501] __schedule+0x560/0x670
[ 1025.663509] ? __sys_sendmsg+0x5e/0xa0
[ 1025.663513] schedule+0x34/0xb0
[ 1025.665307] BUG: scheduling while atomic: meminfo/1008/0x000000a7
[ 1025.668183] exit_to_usermode_loop+0x5c/0xf0
[ 1025.668186] do_syscall_64+0x1a7/0x1e0
[ 1025.670259] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1025.679316] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1025.679322] RIP: 0033:0x7f70f2df9e67
[ 1025.679326] Code: 89 02 48 c7 c0 ff ff ff ff eb d0 0f 1f 84 00 00 00 00 00 8b 05 6a b5 20 00 85 c0 75 2e 48 63 ff 48 63 d2 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 01 c3 48 8b 15 11 71 20 00 f7 d8 64 89 02 48
[ 1025.679327] RSP: 002b:00007ffd9115f8f8 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
[ 1025.679330] RAX: 00000000000000d9 RBX: 000055b3e9200a80 RCX: 00007f70f2df9e67
[ 1025.679333] RDX: 0000000000004040 RSI: 00007ffd9115f940 RDI: 000000000000002a
[ 1025.741650] RBP: 00007ffd9115f9b0 R08: 0000000000000002 R09: 000055b3e928f5b0
[ 1025.746173] R10: 0000000000000090 R11: 0000000000000246 R12: 00007ffd9115f940
[ 1025.750679] R13: 00007ffd9115fa40 R14: 000055b3e928f100 R15: 00007ffd9115f900
[ 1025.755203] CPU: 1 PID: 1008 Comm: meminfo Tainted: G W 5.1.0-12240-ge522719 #1
[ 1025.763548] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1025.771669] Call Trace:
[ 1025.776170] dump_stack+0x5c/0x7b
[ 1025.781246] __schedule_bug+0x55/0x70
[ 1025.786466] __schedule+0x560/0x670
[ 1025.791572] ? force_sig_info+0xc7/0xe0
[ 1025.797122] ? async_page_fault+0x8/0x30
[ 1025.802590] schedule+0x34/0xb0
[ 1025.807556] exit_to_usermode_loop+0x5c/0xf0
[ 1025.813222] prepare_exit_to_usermode+0xa0/0xe0
[ 1025.818974] retint_user+0x8/0x8
[ 1025.823834] RIP: 0033:0x5636a3adfbec
[ 1025.828729] Code: 89 c6 74 06 48 8b 43 10 8b 30 89 ef e8 55 79 ff ff 41 83 fd 01 0f 84 73 01 00 00 0f b7 43 1c 48 8b 53 10 8d 48 01 48 c1 e0 04 <66> 89 4b 1c 48 8d 1c 02 48 8d 05 a5 30 21 00 48 89 43 08 8b 05 67
[ 1025.845269] RSP: 002b:00007ffd8a3a5a50 EFLAGS: 00010246
[ 1025.847989] BUG: scheduling while atomic: dbus-daemon/261/0x00000005
[ 1025.849294] RAX: 0000000000000000 RBX: 00005636a4cb5f00 RCX: 0000000000000001
[ 1025.849296] RDX: 00005636a4cb5f00 RSI: 0000000000000000 RDI: 0000000001200011
[ 1025.849297] RBP: 000000000000734f R08: 00007f434abb5700 R09: 00007f434a52a360
[ 1025.849298] R10: 00007f434abb59d0 R11: 0000000000000246 R12: 00005636a3cf2bc8
[ 1025.849299] R13: 0000000000000002 R14: 00005636a3aea582 R15: 00005636a3cf2bc8
[ 1025.850974] note: systemd-journal[187] exited with preempt_count 2
[ 1025.853356] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1025.982919] BUG: scheduling while atomic: systemd-logind/281/0x00000007
[ 1025.986290] CPU: 0 PID: 261 Comm: dbus-daemon Tainted: G W 5.1.0-12240-ge522719 #1
[ 1025.988978] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1025.992777] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1025.992779] Call Trace:
[ 1025.992800] dump_stack+0x5c/0x7b
[ 1025.992808] __schedule_bug+0x55/0x70
[ 1025.992817] __schedule+0x560/0x670
[ 1026.052486] ? ep_item_poll+0x3f/0xb0
[ 1026.055863] ? ep_poll+0x23e/0x510
[ 1026.058868] schedule+0x34/0xb0
[ 1026.061839] schedule_hrtimeout_range_clock+0x19e/0x1b0
[ 1026.065339] ? ep_read_events_proc+0xe0/0xe0
[ 1026.068467] ? ep_scan_ready_list+0x228/0x250
[ 1026.071926] ? __switch_to_asm+0x40/0x70
[ 1026.074961] ? __switch_to_asm+0x34/0x70
[ 1026.077998] ? ep_poll+0x23e/0x510
[ 1026.080820] ep_poll+0x21f/0x510
[ 1026.083604] ? wake_up_q+0x80/0x80
[ 1026.086336] do_epoll_wait+0xbd/0xe0
[ 1026.089050] __x64_sys_epoll_wait+0x1a/0x20
[ 1026.091861] do_syscall_64+0x5b/0x1e0
[ 1026.094535] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1026.097636] RIP: 0033:0x7f60d105c2e3
[ 1026.100314] Code: 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 83 3d 29 54 2b 00 00 75 13 49 89 ca b8 e8 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 34 c3 48 83 ec 08 e8 0b c2 00 00 48 89 04 24
[ 1026.109332] RSP: 002b:00007ffda738f118 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[ 1026.113430] RAX: ffffffffffffffda RBX: ffffffffffffffff RCX: 00007f60d105c2e3
[ 1026.117441] RDX: 0000000000000040 RSI: 00007ffda738f120 RDI: 0000000000000004
[ 1026.121480] RBP: 00007ffda738f4d0 R08: 0000000000000401 R09: 00007ffda73a80b0
[ 1026.125452] R10: 00000000ffffffff R11: 0000000000000246 R12: 000055f82d37b500
[ 1026.129312] R13: 0000000000000001 R14: 000055f82d3971e8 R15: 000055f82d39cd00
[ 1026.133177] CPU: 1 PID: 281 Comm: systemd-logind Tainted: G W 5.1.0-12240-ge522719 #1
[ 1026.136258] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1026.137642] meminfo[29519]: segfault at 5636a4cb6360 ip 00005636a3adab0b sp 00007ffd8a3a5850 error 7 in dash[5636a3ad4000+1b000]
[ 1026.139203] Call Trace:
[ 1026.139220] dump_stack+0x5c/0x7b
[ 1026.139227] __schedule_bug+0x55/0x70
[ 1026.146531] Code: 5b 5d 41 5c 41 5d c3 0f 1f 84 00 00 00 00 00 31 db 45 85 e4 74 dc 48 89 ef e8 81 c8 ff ff 48 8d 78 18 e8 88 5b 00 00 48 89 c3 <49> 89 45 00 48 c7 00 00 00 00 00 48 8d 7b 13 b8 ff ff ff ff 48 89
[ 1026.148112] __schedule+0x560/0x670
[ 1026.148119] ? ep_item_poll+0x3f/0xb0
[ 1026.151038] BUG: scheduling while atomic: meminfo/29519/0x0000000b
[ 1026.152903] ? ep_poll+0x23e/0x510
[ 1026.152907] schedule+0x34/0xb0
[ 1026.152911] schedule_hrtimeout_range_clock+0x19e/0x1b0
[ 1026.152913] ? ep_read_events_proc+0xe0/0xe0
[ 1026.152914] ? ep_scan_ready_list+0x228/0x250
[ 1026.152916] ? ep_poll+0x23e/0x510
[ 1026.152917] ep_poll+0x21f/0x510
[ 1026.152920] ? _cond_resched+0x19/0x30
[ 1026.152926] ? wake_up_q+0x80/0x80
[ 1026.152930] do_epoll_wait+0xbd/0xe0
[ 1026.162286] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1026.164714] __x64_sys_epoll_wait+0x1a/0x20
[ 1026.164722] do_syscall_64+0x5b/0x1e0
[ 1026.164729] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1026.233261] RIP: 0033:0x7f60b575a2e3
[ 1026.236611] Code: 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 83 3d 29 54 2b 00 00 75 13 49 89 ca b8 e8 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 34 c3 48 83 ec 08 e8 0b c2 00 00 48 89 04 24
[ 1026.247869] RSP: 002b:00007ffcb8705c48 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[ 1026.253346] RAX: ffffffffffffffda RBX: 0000563d3e17b280 RCX: 00007f60b575a2e3
[ 1026.258686] RDX: 000000000000000b RSI: 00007ffcb8705c50 RDI: 0000000000000004
[ 1026.263861] RBP: 00007ffcb8705de0 R08: 0000563d3e187aa0 R09: 0000563d3e187c00
[ 1026.269119] R10: 00000000ffffffff R11: 0000000000000246 R12: 00007ffcb8705c50
[ 1026.274380] R13: 0000000000000001 R14: ffffffffffffffff R15: 00007ffcb8705ee0
[ 1026.279813] CPU: 0 PID: 29519 Comm: meminfo Tainted: G W 5.1.0-12240-ge522719 #1
[ 1026.284461] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1026.288825] Call Trace:
[ 1026.291505] dump_stack+0x5c/0x7b
[ 1026.294329] __schedule_bug+0x55/0x70
[ 1026.297276] __schedule+0x560/0x670
[ 1026.300041] ? force_sig_info+0xc7/0xe0
[ 1026.302841] ? async_page_fault+0x8/0x30
[ 1026.305748] schedule+0x34/0xb0
[ 1026.308355] exit_to_usermode_loop+0x5c/0xf0
[ 1026.311269] prepare_exit_to_usermode+0xa0/0xe0
[ 1026.314286] retint_user+0x8/0x8
[ 1026.316873] RIP: 0033:0x5636a3adab0b
[ 1026.319498] Code: 5b 5d 41 5c 41 5d c3 0f 1f 84 00 00 00 00 00 31 db 45 85 e4 74 dc 48 89 ef e8 81 c8 ff ff 48 8d 78 18 e8 88 5b 00 00 48 89 c3 <49> 89 45 00 48 c7 00 00 00 00 00 48 8d 7b 13 b8 ff ff ff ff 48 89
[ 1026.327848] RSP: 002b:00007ffd8a3a5850 EFLAGS: 00010206
[ 1026.330976] RAX: 00005636a4cb8d50 RBX: 00005636a4cb8d50 RCX: 00007f434a795b00
[ 1026.334636] RDX: 00005636a4cb8d50 RSI: 0000000000000000 RDI: 0000000000000001
[ 1026.338287] RBP: 00005636a4cb85f0 R08: 000000000000ffff R09: 0000000000000030
[ 1026.341808] R10: 00007f434a526430 R11: 0000000000000246 R12: 0000000000000001
[ 1026.345387] R13: 00005636a4cb6360 R14: 0000000000000001 R15: 0000000000000000
[ 1026.349691] BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:65
[ 1026.349780] BUG: scheduling while atomic: systemd/1/0x00000005
[ 1026.354095] in_atomic(): 1, irqs_disabled(): 0, pid: 29520, name: systemd-cgroups
[ 1026.356679] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1026.360422] CPU: 0 PID: 29520 Comm: systemd-cgroups Tainted: G W 5.1.0-12240-ge522719 #1
[ 1026.376244] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1026.380767] Call Trace:
[ 1026.383659] dump_stack+0x5c/0x7b
[ 1026.386775] ___might_sleep+0xf1/0x110
[ 1026.390034] down_write+0x1c/0x50
[ 1026.393029] vma_link+0x42/0xb0
[ 1026.395960] mmap_region+0x423/0x660
[ 1026.398922] do_mmap+0x3e3/0x580
[ 1026.401762] vm_mmap_pgoff+0xd2/0x120
[ 1026.404666] elf_map+0x8c/0x120
[ 1026.407450] load_elf_binary+0x6a3/0x1180
[ 1026.410467] search_binary_handler+0x91/0x1e0
[ 1026.413508] __do_execve_file+0x761/0x940
[ 1026.416610] do_execve+0x21/0x30
[ 1026.419328] call_usermodehelper_exec_async+0x1a8/0x1c0
[ 1026.422561] ? recalc_sigpending+0x17/0x50
[ 1026.425493] ? call_usermodehelper+0xa0/0xa0
[ 1026.428404] ret_from_fork+0x35/0x40
[ 1026.431159] CPU: 1 PID: 1 Comm: systemd Tainted: G W 5.1.0-12240-ge522719 #1
[ 1026.431293] note: systemd-cgroups[29520] exited with preempt_count 6
[ 1026.433971] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1026.433972] Call Trace:
[ 1026.433984] dump_stack+0x5c/0x7b
[ 1026.433989] __schedule_bug+0x55/0x70
[ 1026.433997] __schedule+0x560/0x670
[ 1026.437738] note: meminfo[29519] exited with preempt_count 2
[ 1026.440163] ? hrtimer_start_range_ns+0x1e4/0x2c0
[ 1026.440166] schedule+0x34/0xb0
[ 1026.440169] schedule_hrtimeout_range_clock+0xbb/0x1b0
[ 1026.440171] ? __hrtimer_init+0xb0/0xb0
[ 1026.440178] poll_schedule_timeout+0x4d/0x80
[ 1026.440181] do_sys_poll+0x3d6/0x590
[ 1026.440187] ? ep_poll_callback+0x26f/0x2e0
[ 1026.440192] ? __wake_up_common+0x76/0x170
[ 1026.443001] BUG: scheduling while atomic: dbus-daemon/261/0x00000005
[ 1026.444172] ? _cond_resched+0x19/0x30
[ 1026.444174] ? mutex_lock+0x21/0x40
[ 1026.444177] ? set_fd_set+0x50/0x50
[ 1026.444183] ? import_iovec+0x8d/0xb0
[ 1026.444188] ? unix_stream_recvmsg+0x53/0x70
[ 1026.444192] ? __unix_insert_socket+0x40/0x40
[ 1026.444199] ? ___sys_recvmsg+0x1ab/0x250
[ 1026.446953] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1026.448644] ? __switch_to_asm+0x40/0x70
[ 1026.448646] ? __switch_to_asm+0x34/0x70
[ 1026.448647] ? __switch_to_asm+0x40/0x70
[ 1026.448648] ? __switch_to_asm+0x34/0x70
[ 1026.448650] ? __switch_to_asm+0x40/0x70
[ 1026.448652] ? __switch_to_asm+0x34/0x70
[ 1026.501741] ? __switch_to_asm+0x40/0x70
[ 1026.503383] ? __switch_to_asm+0x34/0x70
[ 1026.504999] ? __switch_to_asm+0x40/0x70
[ 1026.506617] ? __switch_to_asm+0x34/0x70
[ 1026.508233] ? __switch_to_asm+0x40/0x70
[ 1026.509824] ? __switch_to_asm+0x34/0x70
[ 1026.511384] ? __switch_to_asm+0x40/0x70
[ 1026.512969] ? __switch_to_asm+0x34/0x70
[ 1026.514576] ? __switch_to_asm+0x40/0x70
[ 1026.516156] ? __switch_to_asm+0x34/0x70
[ 1026.517713] ? __switch_to_asm+0x40/0x70
[ 1026.519272] ? __switch_to_asm+0x34/0x70
[ 1026.520793] ? __switch_to_asm+0x40/0x70
[ 1026.522328] ? __switch_to_asm+0x34/0x70
[ 1026.523833] ? __switch_to_asm+0x40/0x70
[ 1026.525358] ? __switch_to_asm+0x34/0x70
[ 1026.526825] ? __switch_to_asm+0x40/0x70
[ 1026.528256] ? __switch_to_asm+0x34/0x70
[ 1026.529680] ? __switch_to_asm+0x40/0x70
[ 1026.531141] ? __switch_to_asm+0x34/0x70
[ 1026.532558] ? __switch_to_asm+0x40/0x70
[ 1026.533967] ? __switch_to_asm+0x34/0x70
[ 1026.535395] ? kvm_clock_get_cycles+0x14/0x20
[ 1026.536895] ? ktime_get_ts64+0x4c/0xe0
[ 1026.538283] ? __x64_sys_ppoll+0xbb/0x110
[ 1026.539683] __x64_sys_ppoll+0xbb/0x110
[ 1026.541087] do_syscall_64+0x5b/0x1e0
[ 1026.542409] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1026.544007] RIP: 0033:0x7f70f2b2992d
[ 1026.545341] Code: 48 8b 52 08 48 89 44 24 10 8b 05 ee ed 2b 00 48 89 54 24 18 48 8d 54 24 10 85 c0 75 30 41 b8 08 00 00 00 b8 0f 01 00 00 0f 05 <48> 3d 00 f0 ff ff 77 73 48 83 c4 20 5b 5d 41 5c c3 66 90 8b 05 ba
[ 1026.550658] RSP: 002b:00007ffd9115f7f0 EFLAGS: 00000246 ORIG_RAX: 000000000000010f
[ 1026.552914] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f70f2b2992d
[ 1026.555097] RDX: 00007ffd9115f800 RSI: 0000000000000001 RDI: 00007ffd9115f840
[ 1026.557347] RBP: 0000000000000001 R08: 0000000000000008 R09: 00007ffd911eb0b0
[ 1026.559525] R10: 0000000000000000 R11: 0000000000000246 R12: 00000000017d7838
[ 1026.561705] R13: 000055b3e9200a80 R14: 0000000000000000 R15: 000000003ea29a30
[ 1026.563887] CPU: 0 PID: 261 Comm: dbus-daemon Tainted: G W 5.1.0-12240-ge522719 #1
[ 1026.567501] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1026.570383] note: meminfo[1008] exited with preempt_count 2
[ 1026.570931] Call Trace:
[ 1026.574448] dump_stack+0x5c/0x7b
[ 1026.576334] __schedule_bug+0x55/0x70
[ 1026.578318] __schedule+0x560/0x670
[ 1026.580243] ? ep_item_poll+0x3f/0xb0
[ 1026.582542] ? ep_poll+0x23e/0x510
[ 1026.584452] schedule+0x34/0xb0
[ 1026.586332] schedule_hrtimeout_range_clock+0x19e/0x1b0
[ 1026.588750] ? ep_read_events_proc+0xe0/0xe0
[ 1026.590926] ? ep_scan_ready_list+0x228/0x250
[ 1026.593511] ? __switch_to_asm+0x40/0x70
[ 1026.595722] ? __switch_to_asm+0x34/0x70
[ 1026.597925] ? ep_poll+0x23e/0x510
[ 1026.600019] ep_poll+0x21f/0x510
[ 1026.602107] ? wake_up_q+0x80/0x80
[ 1026.604158] do_epoll_wait+0xbd/0xe0
[ 1026.606235] __x64_sys_epoll_wait+0x1a/0x20
[ 1026.608492] do_syscall_64+0x5b/0x1e0
[ 1026.610581] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1026.613180] RIP: 0033:0x7f60d105c2e3
[ 1026.615508] Code: 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 83 3d 29 54 2b 00 00 75 13 49 89 ca b8 e8 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 34 c3 48 83 ec 08 e8 0b c2 00 00 48 89 04 24
[ 1026.623765] RSP: 002b:00007ffda738f118 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[ 1026.627784] RAX: ffffffffffffffda RBX: ffffffffffffffff RCX: 00007f60d105c2e3
[ 1026.631737] RDX: 0000000000000040 RSI: 00007ffda738f120 RDI: 0000000000000004
[ 1026.635748] RBP: 00007ffda738f4d0 R08: 0000000000000401 R09: 00007ffda73a80b0
[ 1026.639522] R10: 00000000ffffffff R11: 0000000000000246 R12: 000055f82d37b500
[ 1026.643262] R13: 0000000000000001 R14: 000055f82d3971e8 R15: 000055f82d39cd00
[ 1026.778152] BUG: scheduling while atomic: systemd/1/0x000000cf
[ 1026.782050] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1026.797533] CPU: 1 PID: 1 Comm: systemd Tainted: G W 5.1.0-12240-ge522719 #1
[ 1026.801475] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1026.805600] Call Trace:
[ 1026.807512] dump_stack+0x5c/0x7b
[ 1026.809734] __schedule_bug+0x55/0x70
[ 1026.812039] __schedule+0x560/0x670
[ 1026.814245] ? hrtimer_start_range_ns+0x1e4/0x2c0
[ 1026.816807] schedule+0x34/0xb0
[ 1026.818834] schedule_hrtimeout_range_clock+0xbb/0x1b0
[ 1026.821629] ? __hrtimer_init+0xb0/0xb0
[ 1026.824902] poll_schedule_timeout+0x4d/0x80
[ 1026.829051] do_sys_poll+0x3d6/0x590
[ 1026.831529] ? ep_poll_callback+0x26f/0x2e0
[ 1026.833992] ? __wake_up_common+0x76/0x170
[ 1026.837040] ? _cond_resched+0x19/0x30
[ 1026.840385] ? mutex_lock+0x21/0x40
[ 1026.843059] ? set_fd_set+0x50/0x50
[ 1026.844962] ? import_iovec+0x8d/0xb0
[ 1026.846680] ? unix_stream_recvmsg+0x53/0x70
[ 1026.848624] ? __unix_insert_socket+0x40/0x40
[ 1026.851521] ? ___sys_recvmsg+0x1ab/0x250
[ 1026.854455] ? flush_workqueue+0x1a9/0x420
[ 1026.856529] ? list_lru_add+0xbf/0x1b0
[ 1026.858244] ? kvm_clock_get_cycles+0x14/0x20
[ 1026.860170] ? ktime_get_ts64+0x4c/0xe0
[ 1026.862266] ? __x64_sys_ppoll+0xbb/0x110
[ 1026.865057] __x64_sys_ppoll+0xbb/0x110
[ 1026.867461] do_syscall_64+0x5b/0x1e0
[ 1026.869307] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1026.871899] RIP: 0033:0x7f70f2b2992d
[ 1026.874535] Code: 48 8b 52 08 48 89 44 24 10 8b 05 ee ed 2b 00 48 89 54 24 18 48 8d 54 24 10 85 c0 75 30 41 b8 08 00 00 00 b8 0f 01 00 00 0f 05 <48> 3d 00 f0 ff ff 77 73 48 83 c4 20 5b 5d 41 5c c3 66 90 8b 05 ba
[ 1026.882409] RSP: 002b:00007ffd9115f7e0 EFLAGS: 00000246 ORIG_RAX: 000000000000010f
[ 1026.885278] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f70f2b2992d
[ 1026.888507] RDX: 00007ffd9115f7f0 RSI: 0000000000000001 RDI: 00007ffd9115f830
[ 1026.891872] RBP: 0000000000000001 R08: 0000000000000008 R09: 00007ffd911eb0b0
[ 1026.894542] R10: 0000000000000000 R11: 0000000000000246 R12: 00000000017d783d
[ 1026.897386] R13: 000055b3e9200a80 R14: 0000000000000000 R15: 000000003ea92388
[ 1026.934533] BUG: scheduling while atomic: dbus-daemon/261/0x00000009
[ 1026.938388] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1026.955687] CPU: 1 PID: 261 Comm: dbus-daemon Tainted: G W 5.1.0-12240-ge522719 #1
[ 1026.961323] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1026.965624] Call Trace:
[ 1026.967513] dump_stack+0x5c/0x7b
[ 1026.970336] __schedule_bug+0x55/0x70
[ 1026.973714] __schedule+0x560/0x670
[ 1026.976381] ? ep_item_poll+0x3f/0xb0
[ 1026.978878] ? ep_poll+0x23e/0x510
[ 1026.981032] schedule+0x34/0xb0
[ 1026.983865] schedule_hrtimeout_range_clock+0x19e/0x1b0
[ 1026.987829] ? ep_read_events_proc+0xe0/0xe0
[ 1026.990498] ? ep_scan_ready_list+0x228/0x250
[ 1026.993743] ? __switch_to_asm+0x40/0x70
[ 1026.997050] ? __switch_to_asm+0x34/0x70
[ 1027.000404] ? ep_poll+0x23e/0x510
[ 1027.003260] ep_poll+0x21f/0x510
[ 1027.005401] ? wake_up_q+0x80/0x80
[ 1027.007492] do_epoll_wait+0xbd/0xe0
[ 1027.009642] __x64_sys_epoll_wait+0x1a/0x20
[ 1027.012888] do_syscall_64+0x5b/0x1e0
[ 1027.015546] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1027.018343] RIP: 0033:0x7f60d105c2e3
[ 1027.020584] Code: 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 83 3d 29 54 2b 00 00 75 13 49 89 ca b8 e8 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 34 c3 48 83 ec 08 e8 0b c2 00 00 48 89 04 24
[ 1027.031748] systemd[1]: segfault at 7f70f2de6830 ip 00007f70f2b024ba sp 00007ffd9115f8d0 error 7 in libc-2.24.so[7f70f2a4a000+195000]
[ 1027.032437] RSP: 002b:00007ffda738f118 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[ 1027.032441] RAX: ffffffffffffffda RBX: ffffffffffffffff RCX: 00007f60d105c2e3
[ 1027.032443] RDX: 0000000000000040 RSI: 00007ffda738f120 RDI: 0000000000000004
[ 1027.032444] RBP: 00007ffda738f4d0 R08: 0000000000000402 R09: 00007ffda73a80b0
[ 1027.032446] R10: 00000000ffffffff R11: 0000000000000246 R12: 000055f82d37b500
[ 1027.032447] R13: 0000000000000001 R14: 000055f82d3971e8 R15: 000055f82d39cd00
[ 1027.034233] BUG: scheduling while atomic: rs:main Q:Reg/264/0x00000007
[ 1027.039915] Code: 48 85 db 74 ce 41 bc ca 00 00 00 eb 0c 0f 1f 00 48 8b 5b 08 48 85 db 74 ba 48 8b 3b 48 8b 47 10 48 85 c0 74 05 ff d0 48 8b 3b <f0> ff 4f 28 0f 94 c0 84 c0 74 db 8b 47 2c 85 c0 74 d4 48 83 c7 28
[ 1027.039929] BUG: scheduling while atomic: systemd/1/0x00000101
[ 1027.044540] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1027.048883] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1027.053364] CPU: 1 PID: 264 Comm: rs:main Q:Reg Tainted: G W 5.1.0-12240-ge522719 #1
[ 1027.116462] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1027.121503] Call Trace:
[ 1027.123566] dump_stack+0x5c/0x7b
[ 1027.125524] __schedule_bug+0x55/0x70
[ 1027.127513] __schedule+0x560/0x670
[ 1027.129604] ? get_futex_key+0x337/0x420
[ 1027.132763] schedule+0x34/0xb0
[ 1027.134848] futex_wait_queue_me+0xd3/0x150
[ 1027.136947] futex_wait+0xeb/0x250
[ 1027.138862] ? set_page_dirty+0xe/0xb0
[ 1027.140830] ? simple_write_end+0x4e/0x140
[ 1027.143048] ? generic_perform_write+0x142/0x1d0
[ 1027.146190] do_futex+0x12c/0x970
[ 1027.148027] ? new_sync_write+0x12d/0x1d0
[ 1027.150363] __x64_sys_futex+0x134/0x180
[ 1027.153496] ? ksys_write+0x66/0xe0
[ 1027.156447] do_syscall_64+0x5b/0x1e0
[ 1027.158726] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1027.160999] RIP: 0033:0x7f7f3b98217f
[ 1027.163370] Code: 30 83 f8 20 75 15 be 8b 00 00 00 b8 ca 00 00 00 0f 05 83 f8 00 41 0f 94 c0 eb 0f be 80 00 00 00 45 30 c0 b8 ca 00 00 00 0f 05 <8b> 3c 24 e8 49 2d 00 00 48 8b 7c 24 08 be 01 00 00 00 31 c0 f0 0f
[ 1027.171164] RSP: 002b:00007f7f38f40c70 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[ 1027.174246] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f7f3b98217f
[ 1027.178390] RDX: 00000000000002ff RSI: 0000000000000080 RDI: 000055b0a6fbc28c
[ 1027.182242] RBP: 000055b0a6fbc288 R08: 000055b0a6fbc000 R09: 000000000000017f
[ 1027.185219] R10: 0000000000000000 R11: 0000000000000246 R12: 00007f7f38f40cd0
[ 1027.189140] R13: 0000000000000000 R14: 000055b0a52e6290 R15: 0000000000000000
[ 1027.191974] CPU: 0 PID: 1 Comm: systemd Tainted: G W 5.1.0-12240-ge522719 #1
[ 1027.192447] (systemd)[29521]: segfault at 7f70f2b252d0 ip 00007f70f2b252d0 sp 00007ffd9115f628 error 14 in libc-2.24.so[7f70f2a4a000+195000]
[ 1027.196520] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1027.196522] Call Trace:
[ 1027.196539] dump_stack+0x5c/0x7b
[ 1027.196549] __schedule_bug+0x55/0x70
[ 1027.204249] Code: Bad RIP value.
[ 1027.208742] __schedule+0x560/0x670
[ 1027.208749] ? force_sig_info+0xc7/0xe0
[ 1027.210544] BUG: scheduling while atomic: (systemd)/29521/0x0000000b
[ 1027.213570] ? async_page_fault+0x8/0x30
[ 1027.213575] schedule+0x34/0xb0
[ 1027.213585] exit_to_usermode_loop+0x5c/0xf0
[ 1027.213589] prepare_exit_to_usermode+0xa0/0xe0
[ 1027.216777] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1027.219831] retint_user+0x8/0x8
[ 1027.219838] RIP: 0033:0x7f70f2b024ba
[ 1027.219843] Code: 48 85 db 74 ce 41 bc ca 00 00 00 eb 0c 0f 1f 00 48 8b 5b 08 48 85 db 74 ba 48 8b 3b 48 8b 47 10 48 85 c0 74 05 ff d0 48 8b 3b <f0> ff 4f 28 0f 94 c0 84 c0 74 db 8b 47 2c 85 c0 74 d4 48 83 c7 28
[ 1027.271149] RSP: 002b:00007ffd9115f8d0 EFLAGS: 00010246
[ 1027.274596] RAX: 0000000000000000 RBX: 00007ffd9115f8d0 RCX: 00007f70f2b0238b
[ 1027.278564] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007f70f2de6808
[ 1027.282528] RBP: 00007ffd9115f920 R08: 00007f70f455a500 R09: 000055b3e7eb2e01
[ 1027.286508] R10: 00007f70f455a7d0 R11: 0000000000000246 R12: 00000000000000ca
[ 1027.290492] R13: 0000000000007351 R14: 0000000000000000 R15: 0000000000000000
[ 1027.294496] CPU: 1 PID: 29521 Comm: (systemd) Tainted: G W 5.1.0-12240-ge522719 #1
[ 1027.295208] BUG: scheduling while atomic: systemd-logind/281/0x00000011
[ 1027.297676] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1027.297678] Call Trace:
[ 1027.297697] dump_stack+0x5c/0x7b
[ 1027.297705] __schedule_bug+0x55/0x70
[ 1027.297714] __schedule+0x560/0x670
[ 1027.297719] ? force_sig_info+0xc7/0xe0
[ 1027.297725] ? async_page_fault+0x8/0x30
[ 1027.297728] schedule+0x34/0xb0
[ 1027.297735] exit_to_usermode_loop+0x5c/0xf0
[ 1027.297738] prepare_exit_to_usermode+0xa0/0xe0
[ 1027.297740] retint_user+0x8/0x8
[ 1027.297744] RIP: 0033:0x7f70f2b252d0
[ 1027.297755] Code: Bad RIP value.
[ 1027.301693] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1027.304924] RSP: 002b:00007ffd9115f628 EFLAGS: 00010206
[ 1027.304926] RAX: 0000000000000018 RBX: 0000000000000018 RCX: 00007f70f2afd9b1
[ 1027.304927] RDX: 00007ffd9115f630 RSI: 0000000000000018 RDI: 0000000000000001
[ 1027.304928] RBP: 00007ffd9115f8e0 R08: 0000000000000000 R09: 0000000000000000
[ 1027.304929] R10: 0000000000000008 R11: 0000000000000202 R12: 0000000000000001
[ 1027.304930] R13: 00007ffd9115f740 R14: 000055b3e928b260 R15: 00007ffd9115f9c0
[ 1027.305238] BUG: scheduling while atomic: dbus-daemon/261/0x00000005
[ 1027.307700] CPU: 0 PID: 281 Comm: systemd-logind Tainted: G W 5.1.0-12240-ge522719 #1
[ 1027.309610] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1027.312390] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1027.312392] Call Trace:
[ 1027.312410] dump_stack+0x5c/0x7b
[ 1027.312418] __schedule_bug+0x55/0x70
[ 1027.415964] __schedule+0x560/0x670
[ 1027.419154] ? hrtimer_start_range_ns+0x1e4/0x2c0
[ 1027.422716] schedule+0x34/0xb0
[ 1027.425681] schedule_hrtimeout_range_clock+0xbb/0x1b0
[ 1027.429219] ? __hrtimer_init+0xb0/0xb0
[ 1027.432331] poll_schedule_timeout+0x4d/0x80
[ 1027.435930] do_sys_poll+0x3d6/0x590
[ 1027.438802] ? ep_poll_callback+0x26f/0x2e0
[ 1027.441774] ? __wake_up_common+0x76/0x170
[ 1027.444701] ? _cond_resched+0x19/0x30
[ 1027.447584] ? mutex_lock+0x21/0x40
[ 1027.450358] ? set_fd_set+0x50/0x50
[ 1027.453318] ? import_iovec+0x8d/0xb0
[ 1027.456025] ? unix_stream_recvmsg+0x53/0x70
[ 1027.458847] ? __unix_insert_socket+0x40/0x40
[ 1027.461731] ? ___sys_recvmsg+0x1ab/0x250
[ 1027.464491] ? shmem_evict_inode+0x87/0x240
[ 1027.467349] ? init_wait_var_entry+0x40/0x40
[ 1027.470286] ? fsnotify_grab_connector+0x45/0x90
[ 1027.473221] ? fsnotify_destroy_marks+0x22/0xf0
[ 1027.476178] ? __seccomp_filter+0x96/0x6c0
[ 1027.479045] ? __dentry_kill+0x14f/0x1a0
[ 1027.481733] ? kvm_clock_get_cycles+0x14/0x20
[ 1027.484542] ? ktime_get_ts64+0x4c/0xe0
[ 1027.487323] ? __x64_sys_ppoll+0xbb/0x110
[ 1027.490153] __x64_sys_ppoll+0xbb/0x110
[ 1027.492889] do_syscall_64+0x5b/0x1e0
[ 1027.495502] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1027.498483] RIP: 0033:0x7f60b575092d
[ 1027.501132] Code: 48 8b 52 08 48 89 44 24 10 8b 05 ee ed 2b 00 48 89 54 24 18 48 8d 54 24 10 85 c0 75 30 41 b8 08 00 00 00 b8 0f 01 00 00 0f 05 <48> 3d 00 f0 ff ff 77 73 48 83 c4 20 5b 5d 41 5c c3 66 90 8b 05 ba
[ 1027.510407] RSP: 002b:00007ffcb8705a70 EFLAGS: 00000246 ORIG_RAX: 000000000000010f
[ 1027.514922] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f60b575092d
[ 1027.518889] RDX: 00007ffcb8705a80 RSI: 0000000000000001 RDI: 00007ffcb8705ac0
[ 1027.522841] RBP: 0000000000000001 R08: 0000000000000008 R09: 00007ffcb872d0b0
[ 1027.526843] R10: 0000000000000000 R11: 0000000000000246 R12: 00000000017d783b
[ 1027.530702] R13: 0000563d3e17ce30 R14: 0000000000000000 R15: 000000003eb10744
[ 1027.534670] CPU: 1 PID: 261 Comm: dbus-daemon Tainted: G W 5.1.0-12240-ge522719 #1
[ 1027.539363] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1027.543416] Call Trace:
[ 1027.544863] dump_stack+0x5c/0x7b
[ 1027.546559] __schedule_bug+0x55/0x70
[ 1027.548809] __schedule+0x560/0x670
[ 1027.551199] ? ep_item_poll+0x3f/0xb0
[ 1027.554014] ? ep_poll+0x23e/0x510
[ 1027.556385] schedule+0x34/0xb0
[ 1027.557935] schedule_hrtimeout_range_clock+0x19e/0x1b0
[ 1027.559980] ? ep_read_events_proc+0xe0/0xe0
[ 1027.562110] ? ep_scan_ready_list+0x228/0x250
[ 1027.564859] ? __switch_to_asm+0x40/0x70
[ 1027.566505] ? __switch_to_asm+0x34/0x70
[ 1027.568179] ? ep_poll+0x23e/0x510
[ 1027.570290] ep_poll+0x21f/0x510
[ 1027.571741] ? wake_up_q+0x80/0x80
[ 1027.573237] do_epoll_wait+0xbd/0xe0
[ 1027.575255] __x64_sys_epoll_wait+0x1a/0x20
[ 1027.577511] do_syscall_64+0x5b/0x1e0
[ 1027.579095] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1027.581373] RIP: 0033:0x7f60d105c2e3
[ 1027.583288] Code: 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 83 3d 29 54 2b 00 00 75 13 49 89 ca b8 e8 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 34 c3 48 83 ec 08 e8 0b c2 00 00 48 89 04 24
[ 1027.591216] RSP: 002b:00007ffda738f118 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[ 1027.593894] RAX: ffffffffffffffda RBX: ffffffffffffffff RCX: 00007f60d105c2e3
[ 1027.597199] RDX: 0000000000000040 RSI: 00007ffda738f120 RDI: 0000000000000004
[ 1027.601159] RBP: 00007ffda738f4d0 R08: 0000000000000402 R09: 00007ffda73a80b0
[ 1027.603940] R10: 00000000ffffffff R11: 0000000000000246 R12: 000055f82d37b500
[ 1027.606577] R13: 0000000000000001 R14: 000055f82d3971e8 R15: 000055f82d39cd00
[ 1027.614459] BUG: sleeping function called from invalid context at mm/slab.h:457
[ 1027.621770] in_atomic(): 1, irqs_disabled(): 0, pid: 264, name: rs:main Q:Reg
[ 1027.627321] CPU: 1 PID: 264 Comm: rs:main Q:Reg Tainted: G W 5.1.0-12240-ge522719 #1
[ 1027.634447] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1027.641503] Call Trace:
[ 1027.645089] dump_stack+0x5c/0x7b
[ 1027.649052] ___might_sleep+0xf1/0x110
[ 1027.654233] __kmalloc+0x186/0x220
[ 1027.659092] security_prepare_creds+0x8a/0xa0
[ 1027.664729] prepare_creds+0xd5/0x110
[ 1027.669133] do_faccessat+0x3c/0x230
[ 1027.673566] ? ksys_write+0x66/0xe0
[ 1027.677681] do_syscall_64+0x5b/0x1e0
[ 1027.681915] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1027.686496] RIP: 0033:0x7f7f3aa8d9c7
[ 1027.690557] Code: 83 c4 08 48 3d 01 f0 ff ff 73 01 c3 48 8b 0d c8 d4 2b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 0f 1f 44 00 00 b8 15 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d a1 d4 2b 00 f7 d8 64 89 01 48
[ 1027.703729] RSP: 002b:00007f7f38f403f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000015
[ 1027.708542] RAX: ffffffffffffffda RBX: 00007f7f38f404e0 RCX: 00007f7f3aa8d9c7
[ 1027.711834] RDX: 00007f7f3ab1a850 RSI: 0000000000000000 RDI: 00007f7f3ab16f43
[ 1027.715464] RBP: 00007f7f38f4050c R08: 00007f7f300362ec R09: 00007f7f3aadfda0
[ 1027.718680] R10: 000055b0a5515280 R11: 0000000000000246 R12: 00007f7f38f40660
[ 1027.722348] R13: 000055b0a6fc9b20 R14: 000055b0a6fc9c10 R15: 0000000000000001
[ 1027.725522] BUG: scheduling while atomic: rs:main Q:Reg/264/0x00000005
[ 1027.729102] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1027.742015] CPU: 1 PID: 264 Comm: rs:main Q:Reg Tainted: G W 5.1.0-12240-ge522719 #1
[ 1027.746595] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1027.750411] Call Trace:
[ 1027.752982] dump_stack+0x5c/0x7b
[ 1027.755243] __schedule_bug+0x55/0x70
[ 1027.757282] __schedule+0x560/0x670
[ 1027.759982] schedule+0x34/0xb0
[ 1027.762139] exit_to_usermode_loop+0x5c/0xf0
[ 1027.765267] do_syscall_64+0x1a7/0x1e0
[ 1027.767852] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1027.771188] RIP: 0033:0x7f7f3aa8d9c7
[ 1027.773594] Code: 83 c4 08 48 3d 01 f0 ff ff 73 01 c3 48 8b 0d c8 d4 2b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 0f 1f 44 00 00 b8 15 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d a1 d4 2b 00 f7 d8 64 89 01 48
[ 1027.782778] RSP: 002b:00007f7f38f403f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000015
[ 1027.787564] RAX: fffffffffffffffe RBX: 00007f7f38f404e0 RCX: 00007f7f3aa8d9c7
[ 1027.790392] RDX: 00007f7f3ab1a850 RSI: 0000000000000000 RDI: 00007f7f3ab16f43
[ 1027.794576] RBP: 00007f7f38f4050c R08: 00007f7f300362ec R09: 00007f7f3aadfda0
[ 1027.797345] R10: 000055b0a5515280 R11: 0000000000000246 R12: 00007f7f38f40660
[ 1027.801338] R13: 000055b0a6fc9b20 R14: 000055b0a6fc9c10 R15: 0000000000000001
[ 1027.806481] rs:main Q:Reg[264]: segfault at 7f7f300371d8 ip 00007f7f3aa2a525 sp 00007f7f38f40630 error 6 in libc-2.24.so[7f7f3a9b2000+195000]
[ 1027.812964] Code: 01 16 32 00 48 29 e8 31 c9 48 8d 34 2a 48 39 fb 0f 95 c1 48 83 cd 01 48 83 c8 01 48 c1 e1 02 48 89 73 58 48 09 cd 48 89 6a 08 <48> 89 46 08 48 8d 42 10 48 83 c4 48 5b 5d 41 5c 41 5d 41 5e 41 5f
[ 1027.821383] BUG: scheduling while atomic: rs:main Q:Reg/264/0x00000003
[ 1027.824319] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1027.842129] CPU: 1 PID: 264 Comm: rs:main Q:Reg Tainted: G W 5.1.0-12240-ge522719 #1
[ 1027.846058] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1027.850241] Call Trace:
[ 1027.852537] dump_stack+0x5c/0x7b
[ 1027.856448] __schedule_bug+0x55/0x70
[ 1027.859812] __schedule+0x560/0x670
[ 1027.861958] ? force_sig_info+0xc7/0xe0
[ 1027.864876] ? async_page_fault+0x8/0x30
[ 1027.868539] schedule+0x34/0xb0
[ 1027.872143] exit_to_usermode_loop+0x5c/0xf0
[ 1027.875005] prepare_exit_to_usermode+0xa0/0xe0
[ 1027.877260] retint_user+0x8/0x8
[ 1027.880534] RIP: 0033:0x7f7f3aa2a525
[ 1027.883724] Code: 01 16 32 00 48 29 e8 31 c9 48 8d 34 2a 48 39 fb 0f 95 c1 48 83 cd 01 48 83 c8 01 48 c1 e1 02 48 89 73 58 48 09 cd 48 89 6a 08 <48> 89 46 08 48 8d 42 10 48 83 c4 48 5b 5d 41 5c 41 5d 41 5e 41 5f
[ 1027.894440] RSP: 002b:00007f7f38f40630 EFLAGS: 00010202
[ 1027.898589] RAX: 0000000000000e31 RBX: 00007f7f30000020 RCX: 0000000000000004
[ 1027.902200] RDX: 00007f7f30036fc0 RSI: 00007f7f300371d0 RDI: 00007f7f3ad4bb00
[ 1027.906383] RBP: 0000000000000215 R08: 00007f7f30000000 R09: 0000000000037000
[ 1027.911204] R10: 00007f7f30037000 R11: 0000000000000206 R12: 0000000000000040
[ 1027.914322] R13: 00007f7f30036fc0 R14: 0000000000001000 R15: 0000000000000230
[ 1027.918621] BUG: scheduling while atomic: rs:main Q:Reg/264/0x00000003
[ 1027.923333] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1027.937497] CPU: 1 PID: 264 Comm: rs:main Q:Reg Tainted: G W 5.1.0-12240-ge522719 #1
[ 1027.941786] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1027.947455] Call Trace:
[ 1027.949422] dump_stack+0x5c/0x7b
[ 1027.951408] __schedule_bug+0x55/0x70
[ 1027.953837] __schedule+0x560/0x670
[ 1027.956709] ? enqueue_task_fair+0x1a4/0x960
[ 1027.960171] ? wait_for_completion+0x123/0x1c0
[ 1027.962341] schedule+0x34/0xb0
[ 1027.965777] schedule_timeout+0x1f2/0x310
[ 1027.969002] ? ttwu_do_wakeup+0x1e/0x150
[ 1027.972630] ? wait_for_completion+0x123/0x1c0
[ 1027.975991] wait_for_completion+0x15b/0x1c0
[ 1027.979743] ? wake_up_q+0x80/0x80
[ 1027.983889] do_coredump+0x350/0xfa0
[ 1027.987053] ? __show_regs+0xae/0x2d0
[ 1027.990200] ? __module_text_address+0xe/0x60
[ 1027.993195] get_signal+0x16a/0x8a0
[ 1027.996308] ? __switch_to_asm+0x34/0x70
[ 1027.999713] ? __switch_to_asm+0x40/0x70
[ 1028.003131] ? async_page_fault+0x8/0x30
[ 1028.006003] do_signal+0x36/0x670
[ 1028.009099] ? __switch_to+0x101/0x470
[ 1028.012206] ? __schedule+0x25d/0x670
[ 1028.015399] ? async_page_fault+0x8/0x30
[ 1028.018597] ? async_page_fault+0x8/0x30
[ 1028.021154] exit_to_usermode_loop+0x89/0xf0
[ 1028.023804] prepare_exit_to_usermode+0xa0/0xe0
[ 1028.026953] retint_user+0x8/0x8
[ 1028.029906] RIP: 0033:0x7f7f3aa2a525
[ 1028.033031] Code: 01 16 32 00 48 29 e8 31 c9 48 8d 34 2a 48 39 fb 0f 95 c1 48 83 cd 01 48 83 c8 01 48 c1 e1 02 48 89 73 58 48 09 cd 48 89 6a 08 <48> 89 46 08 48 8d 42 10 48 83 c4 48 5b 5d 41 5c 41 5d 41 5e 41 5f
[ 1028.042156] RSP: 002b:00007f7f38f40630 EFLAGS: 00010202
[ 1028.045456] RAX: 0000000000000e31 RBX: 00007f7f30000020 RCX: 0000000000000004
[ 1028.049389] RDX: 00007f7f30036fc0 RSI: 00007f7f300371d0 RDI: 00007f7f3ad4bb00
[ 1028.053725] RBP: 0000000000000215 R08: 00007f7f30000000 R09: 0000000000037000
[ 1028.056513] systemd[1]: segfault at 7ffd9115f2f8 ip 000055b3e7e387df sp 00007ffd9115f300 error 7 in systemd[55b3e7e00000+ed000]
[ 1028.057954] R10: 00007f7f30037000 R11: 0000000000000206 R12: 0000000000000040
[ 1028.057956] R13: 00007f7f30036fc0 R14: 0000000000001000 R15: 0000000000000230
[ 1028.060376] note: (systemd)[29521] exited with preempt_count 2
[ 1028.062283] Code: d2 31 c0 be 11 00 00 00 bf 38 00 00 00 e8 49 f3 fe ff 85 c0 49 89 c4 0f 88 af 02 00 00 0f 84 4e 01 00 00 48 8d 74 24 20 89 c7 <e8> 3c f3 fe ff 85 c0 41 89 c6 0f 88 49 03 00 00 44 8b 74 24 28 41
[ 1028.077142] BUG: scheduling while atomic: systemd/1/0x00000101
[ 1028.079432] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1028.090606] CPU: 0 PID: 1 Comm: systemd Tainted: G W 5.1.0-12240-ge522719 #1
[ 1028.093539] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1028.096557] Call Trace:
[ 1028.098204] dump_stack+0x5c/0x7b
[ 1028.099913] __schedule_bug+0x55/0x70
[ 1028.101807] __schedule+0x560/0x670
[ 1028.103655] ? force_sig_info+0xc7/0xe0
[ 1028.105457] ? async_page_fault+0x8/0x30
[ 1028.107335] schedule+0x34/0xb0
[ 1028.108967] exit_to_usermode_loop+0x5c/0xf0
[ 1028.110918] prepare_exit_to_usermode+0xa0/0xe0
[ 1028.112914] retint_user+0x8/0x8
[ 1028.114616] RIP: 0033:0x55b3e7e387df
[ 1028.116383] Code: d2 31 c0 be 11 00 00 00 bf 38 00 00 00 e8 49 f3 fe ff 85 c0 49 89 c4 0f 88 af 02 00 00 0f 84 4e 01 00 00 48 8d 74 24 20 89 c7 <e8> 3c f3 fe ff 85 c0 41 89 c6 0f 88 49 03 00 00 44 8b 74 24 28 41
[ 1028.122835] RSP: 002b:00007ffd9115f300 EFLAGS: 00010202
[ 1028.125101] RAX: 0000000000007352 RBX: 00007ffd9115f3a0 RCX: 00007f70f2b2e469
[ 1028.127834] RDX: 00007f70f2dfa1de RSI: 00007ffd9115f320 RDI: 0000000000007352
[ 1028.130613] RBP: 000000000000000b R08: 0000000000000000 R09: 0000000000000013
[ 1028.133346] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000007352
[ 1028.136015] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 1028.291136] BUG: scheduling while atomic: systemd/1/0x00000101
[ 1028.293508] note: systemd[29522] exited with preempt_count 10
[ 1028.295468] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1028.298227] note: systemd-cgroups[29523] exited with preempt_count 6
[ 1028.315213] CPU: 1 PID: 1 Comm: systemd Tainted: G W 5.1.0-12240-ge522719 #1
[ 1028.321359] note: systemd[29524] exited with preempt_count 8
[ 1028.323241] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1028.323243] Call Trace:
[ 1028.323264] dump_stack+0x5c/0x7b
[ 1028.337985] __schedule_bug+0x55/0x70
[ 1028.341117] __schedule+0x560/0x670
[ 1028.344308] ? _do_fork+0x13e/0x430
[ 1028.348012] schedule+0x34/0xb0
[ 1028.351461] exit_to_usermode_loop+0x5c/0xf0
[ 1028.355570] do_syscall_64+0x1a7/0x1e0
[ 1028.358617] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1028.362529] RIP: 0033:0x7f70f2b2e469
[ 1028.366126] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ff 49 2b 00 f7 d8 64 89 01 48
[ 1028.376240] RSP: 002b:00007ffd9115ed38 EFLAGS: 00000246 ORIG_RAX: 0000000000000038
[ 1028.382006] RAX: 0000000000007354 RBX: 00007ffd9115ede0 RCX: 00007f70f2b2e469
[ 1028.386191] RDX: 00007f70f2dfa1de RSI: 0000000000000000 RDI: 0000000000000011
[ 1028.390447] RBP: 000000000000000b R08: 0000000000000000 R09: 000055b3e7ed5478
[ 1028.395542] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000007352
[ 1028.399842] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 1028.404556] systemd[1]: segfault at 7f70f455a2f8 ip 00007f70f420e1c6 sp 00007ffd9115e3d0 error 7 in libsystemd-shared-232.so[7f70f4127000+192000]
[ 1028.414950] Code: 10 78 51 44 89 e0 83 e0 07 3b 05 45 60 13 00 0f 8e bf 00 00 00 89 d8 f7 d8 48 8b 94 24 28 08 00 00 64 48 33 14 25 28 00 00 00 <44> 89 55 00 0f 85 a6 00 00 00 48 81 c4 38 08 00 00 5b 5d 41 5c 41
[ 1028.427866] BUG: scheduling while atomic: systemd/1/0x00000005
[ 1028.433726] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1028.454314] CPU: 1 PID: 1 Comm: systemd Tainted: G W 5.1.0-12240-ge522719 #1
[ 1028.461356] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1028.467907] Call Trace:
[ 1028.471795] dump_stack+0x5c/0x7b
[ 1028.477133] __schedule_bug+0x55/0x70
[ 1028.481254] __schedule+0x560/0x670
[ 1028.485185] ? force_sig_info+0xc7/0xe0
[ 1028.490054] ? async_page_fault+0x8/0x30
[ 1028.494260] schedule+0x34/0xb0
[ 1028.497970] exit_to_usermode_loop+0x5c/0xf0
[ 1028.503246] prepare_exit_to_usermode+0xa0/0xe0
[ 1028.507376] retint_user+0x8/0x8
[ 1028.511137] RIP: 0033:0x7f70f420e1c6
[ 1028.515620] Code: 10 78 51 44 89 e0 83 e0 07 3b 05 45 60 13 00 0f 8e bf 00 00 00 89 d8 f7 d8 48 8b 94 24 28 08 00 00 64 48 33 14 25 28 00 00 00 <44> 89 55 00 0f 85 a6 00 00 00 48 81 c4 38 08 00 00 5b 5d 41 5c 41
[ 1028.530270] RSP: 002b:00007ffd9115e3d0 EFLAGS: 00010246
[ 1028.534156] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00000000fffffffe
[ 1028.538848] RDX: 0000000000000000 RSI: 00007ffd9115d910 RDI: 0000000000000000
[ 1028.543472] RBP: 00007f70f455a2f8 R08: 000000000000ff00 R09: 0000000000000076
[ 1028.549765] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[ 1028.554497] R13: 000055b3e7ec89a7 R14: 000055b3e7eb9900 R15: 00007ffd9115ec50
[ 1028.559959] systemd[1]: segfault at 7ffd9115ddf8 ip 000055b3e7e387df sp 00007ffd9115de00 error 7 in systemd[55b3e7e00000+ed000]
[ 1028.563711] note: systemd[29525] exited with preempt_count 8
[ 1028.568045] Code: d2 31 c0 be 11 00 00 00 bf 38 00 00 00 e8 49 f3 fe ff 85 c0 49 89 c4 0f 88 af 02 00 00 0f 84 4e 01 00 00 48 8d 74 24 20 89 c7 <e8> 3c f3 fe ff 85 c0 41 89 c6 0f 88 49 03 00 00 44 8b 74 24 28 41
[ 1028.581705] BUG: scheduling while atomic: systemd/1/0x00000101
[ 1028.586305] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1028.606428] CPU: 1 PID: 1 Comm: systemd Tainted: G W 5.1.0-12240-ge522719 #1
[ 1028.612931] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1028.617898] Call Trace:
[ 1028.621540] dump_stack+0x5c/0x7b
[ 1028.625205] __schedule_bug+0x55/0x70
[ 1028.628488] __schedule+0x560/0x670
[ 1028.631736] ? force_sig_info+0xc7/0xe0
[ 1028.635875] ? async_page_fault+0x8/0x30
[ 1028.639305] schedule+0x34/0xb0
[ 1028.642904] exit_to_usermode_loop+0x5c/0xf0
[ 1028.647455] prepare_exit_to_usermode+0xa0/0xe0
[ 1028.651738] retint_user+0x8/0x8
[ 1028.654830] RIP: 0033:0x55b3e7e387df
[ 1028.657923] Code: d2 31 c0 be 11 00 00 00 bf 38 00 00 00 e8 49 f3 fe ff 85 c0 49 89 c4 0f 88 af 02 00 00 0f 84 4e 01 00 00 48 8d 74 24 20 89 c7 <e8> 3c f3 fe ff 85 c0 41 89 c6 0f 88 49 03 00 00 44 8b 74 24 28 41
[ 1028.669529] RSP: 002b:00007ffd9115de00 EFLAGS: 00010206
[ 1028.674148] RAX: 0000000000007355 RBX: 00007ffd9115dea0 RCX: 00007f70f2b2e469
[ 1028.678643] RDX: 00007f70f2dfa1de RSI: 00007ffd9115de20 RDI: 0000000000007355
[ 1028.684384] RBP: 000000000000000b R08: 0000000000000000 R09: 00007ffd9115ed30
[ 1028.688712] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000007355
[ 1028.694207] R13: 0000000000000000 R14: 000055b3e7eb9900 R15: 00007ffd9115ec50
[ 1028.698943] BUG: sleeping function called from invalid context at mm/slab.h:457
[ 1028.705135] in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: systemd
[ 1028.709499] CPU: 1 PID: 1 Comm: systemd Tainted: G W 5.1.0-12240-ge522719 #1
[ 1028.715113] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1028.720857] Call Trace:
[ 1028.723657] dump_stack+0x5c/0x7b
[ 1028.726746] ___might_sleep+0xf1/0x110
[ 1028.729978] kmem_cache_alloc_node_trace+0x1cf/0x1f0
[ 1028.734557] __get_vm_area_node+0x7a/0x170
[ 1028.738118] __vmalloc_node_range+0x6d/0x260
[ 1028.742207] ? _do_fork+0xce/0x430
[ 1028.745957] copy_process+0x8d6/0x1b60
[ 1028.749353] ? _do_fork+0xce/0x430
[ 1028.752351] ? copy_fpstate_to_sigframe+0x318/0x3b0
[ 1028.756138] _do_fork+0xce/0x430
[ 1028.759602] do_syscall_64+0x5b/0x1e0
[ 1028.762430] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1028.766866] RIP: 0033:0x7f70f2b2e469
[ 1028.769766] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ff 49 2b 00 f7 d8 64 89 01 48
[ 1028.780821] RSP: 002b:00007ffd9115d838 EFLAGS: 00000246 ORIG_RAX: 0000000000000038
[ 1028.786738] RAX: ffffffffffffffda RBX: 00007ffd9115d8e0 RCX: 00007f70f2b2e469
[ 1028.791246] RDX: 00007f70f2dfa1de RSI: 0000000000000000 RDI: 0000000000000011
[ 1028.796548] RBP: 000000000000000b R08: 0000000000000000 R09: 00007f70f425c6ad
[ 1028.801917] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000007355
[ 1028.806264] R13: 0000000000000000 R14: 000055b3e7eb9900 R15: 00007ffd9115ec50
[ 1028.812652] BUG: scheduling while atomic: systemd/1/0x00000101
[ 1028.816688] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1028.836033] CPU: 1 PID: 1 Comm: systemd Tainted: G W 5.1.0-12240-ge522719 #1
[ 1028.841916] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1028.848165] Call Trace:
[ 1028.851169] dump_stack+0x5c/0x7b
[ 1028.855034] __schedule_bug+0x55/0x70
[ 1028.858866] __schedule+0x560/0x670
[ 1028.862040] ? _do_fork+0x13e/0x430
[ 1028.865329] schedule+0x34/0xb0
[ 1028.869115] exit_to_usermode_loop+0x5c/0xf0
[ 1028.873514] do_syscall_64+0x1a7/0x1e0
[ 1028.876621] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1028.880199] RIP: 0033:0x7f70f2b2e469
[ 1028.883704] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ff 49 2b 00 f7 d8 64 89 01 48
[ 1028.893950] RSP: 002b:00007ffd9115d838 EFLAGS: 00000246 ORIG_RAX: 0000000000000038
[ 1028.899852] RAX: 0000000000007356 RBX: 00007ffd9115d8e0 RCX: 00007f70f2b2e469
[ 1028.904040] RDX: 00007f70f2dfa1de RSI: 0000000000000000 RDI: 0000000000000011
[ 1028.909513] RBP: 000000000000000b R08: 0000000000000000 R09: 00007f70f425c6ad
[ 1028.913649] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000007355
[ 1028.917973] R13: 0000000000000000 R14: 000055b3e7eb9900 R15: 00007ffd9115ec50
[ 1028.923936] oom-killer[1012]: segfault at 55d8de51ca6c ip 000055d8dde77bec sp 00007ffec5ba4130 error 7 in dash[55d8dde6c000+1b000]
[ 1028.931540] Code: 89 c6 74 06 48 8b 43 10 8b 30 89 ef e8 55 79 ff ff 41 83 fd 01 0f 84 73 01 00 00 0f b7 43 1c 48 8b 53 10 8d 48 01 48 c1 e0 04 <66> 89 4b 1c 48 8d 1c 02 48 8d 05 a5 30 21 00 48 89 43 08 8b 05 67
[ 1028.937120] note: dmesg[29527] exited with preempt_count 20
[ 1028.943263] BUG: scheduling while atomic: oom-killer/1012/0x000000ab
[ 1028.943265] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1028.971922] CPU: 1 PID: 1012 Comm: oom-killer Tainted: G W 5.1.0-12240-ge522719 #1
[ 1028.977277] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1028.982421] Call Trace:
[ 1028.985520] dump_stack+0x5c/0x7b
[ 1028.989498] __schedule_bug+0x55/0x70
[ 1028.993886] __schedule+0x560/0x670
[ 1028.997362] ? force_sig_info+0xc7/0xe0
[ 1029.000658] ? async_page_fault+0x8/0x30
[ 1029.004928] schedule+0x34/0xb0
[ 1029.007887] exit_to_usermode_loop+0x5c/0xf0
[ 1029.012051] prepare_exit_to_usermode+0xa0/0xe0
[ 1029.016064] retint_user+0x8/0x8
[ 1029.019036] RIP: 0033:0x55d8dde77bec
[ 1029.022100] Code: 89 c6 74 06 48 8b 43 10 8b 30 89 ef e8 55 79 ff ff 41 83 fd 01 0f 84 73 01 00 00 0f b7 43 1c 48 8b 53 10 8d 48 01 48 c1 e0 04 <66> 89 4b 1c 48 8d 1c 02 48 8d 05 a5 30 21 00 48 89 43 08 8b 05 67
[ 1029.033718] RSP: 002b:00007ffec5ba4130 EFLAGS: 00010256
[ 1029.037691] RAX: 0000000000000000 RBX: 000055d8de51ca50 RCX: 0000000000000001
[ 1029.044253] RDX: 000055d8de51cbc0 RSI: 0000000000000000 RDI: 0000000001200011
[ 1029.049040] RBP: 0000000000007357 R08: 00007fe496bda700 R09: 000000000000001f
[ 1029.054197] R10: 00007fe496bda9d0 R11: 0000000000000246 R12: 000055d8de51c580
[ 1029.059459] R13: 0000000000000000 R14: 00007ffec5ba4180 R15: 000055d8de51c580
[ 1029.069642] note: systemd[29526] exited with preempt_count 8
[ 1029.075197] note: oom-killer[1012] exited with preempt_count 2
[ 1029.076618] WARNING: CPU: 0 PID: 1 at arch/x86/kernel/fpu/signal.c:167 copy_fpstate_to_sigframe+0x393/0x3b0
[ 1029.083698] note: systemd[29528] exited with preempt_count 8
[ 1029.088386] Modules linked in: sr_mod cdrom sg crct10dif_pclmul crc32_pclmul ppdev crc32c_intel ghash_clmulni_intel bochs_drm ttm aesni_intel drm_kms_helper ata_generic pata_acpi crypto_simd syscopyarea sysfillrect sysimgblt cryptd fb_sys_fops snd_pcm glue_helper joydev snd_timer drm snd serio_raw soundcore pcspkr ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 1029.088425] CPU: 0 PID: 1 Comm: systemd Tainted: G W 5.1.0-12240-ge522719 #1
[ 1029.088433] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1029.122518] RIP: 0010:copy_fpstate_to_sigframe+0x393/0x3b0
[ 1029.127225] Code: c0 f7 d8 e9 fb fd ff ff bb f2 ff ff ff e9 3a fe ff ff 0f 0b e9 3b fd ff ff 83 ca f2 eb d9 49 c7 c4 20 0b 8e b7 e9 60 ff ff ff <0f> 0b e9 b1 fc ff ff b8 ff ff ff ff e9 c8 fd ff ff e8 27 17 06 00
[ 1029.138673] RSP: 0000:ffffb07e00c5fdf8 EFLAGS: 00010206
[ 1029.143275] RAX: 0000000080000100 RBX: ffffb07e00c5ff58 RCX: ffffb07e00c5fe80
[ 1029.148328] RDX: 0000000000000200 RSI: 00007ffd9115c680 RDI: 00007ffd9115c680
[ 1029.153473] RBP: 00007ffd9115c680 R08: 00007ffd9115c4c0 R09: ffffffffb5ab1f0b
[ 1029.158429] R10: 00007ffd9115c680 R11: 0000000000000000 R12: ffff98f046dd8af8
[ 1029.163529] R13: 000000000000000b R14: ffff98f046dd8000 R15: 00007ffd9115c4b8
[ 1029.168297] FS: 00007f70f455a500(0000) GS:ffff98f17fc00000(0000) knlGS:0000000000000000
[ 1029.173510] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1029.177915] CR2: 00007ffd9115c8f8 CR3: 00000001bf09a000 CR4: 00000000000006f0
[ 1029.182603] Call Trace:
[ 1029.185775] do_signal+0x57f/0x670
[ 1029.189322] ? async_page_fault+0x8/0x30
[ 1029.192814] exit_to_usermode_loop+0x89/0xf0
[ 1029.196356] prepare_exit_to_usermode+0xa0/0xe0
[ 1029.200001] retint_user+0x8/0x8
[ 1029.203114] RIP: 0033:0x55b3e7e387df
[ 1029.206360] Code: d2 31 c0 be 11 00 00 00 bf 38 00 00 00 e8 49 f3 fe ff 85 c0 49 89 c4 0f 88 af 02 00 00 0f 84 4e 01 00 00 48 8d 74 24 20 89 c7 <e8> 3c f3 fe ff 85 c0 41 89 c6 0f 88 49 03 00 00 44 8b 74 24 28 41
[ 1029.216737] RSP: 002b:00007ffd9115c900 EFLAGS: 00010202
[ 1029.220774] RAX: 0000000000007358 RBX: 00007ffd9115c9a0 RCX: 00007f70f2b2e469
[ 1029.225287] RDX: 00007f70f2dfa1de RSI: 00007ffd9115c920 RDI: 0000000000007358
[ 1029.229884] RBP: 000000000000000b R08: 0000000000000000 R09: 00007ffd9115d830
[ 1029.234573] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000007358
[ 1029.239053] R13: 0000000000000000 R14: 000055b3e7eb9900 R15: 00007ffd9115d750
[ 1029.243783] ---[ end trace cb78806200d92def ]---
To reproduce:
# build kernel
cd linux
cp config-5.1.0-12240-ge522719 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
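
The steps above can also be run as one script. The following is only a sketch under a few assumptions not stated in this report: the attached job-script and the config file sit next to the kernel tree, and <bzImage> is the standard x86_64 build output at arch/x86/boot/bzImage.

#!/bin/sh
# Hypothetical wrapper around the reproduction steps listed above; adjust paths as needed.
set -e
cd linux
cp ../config-5.1.0-12240-ge522719 .config          # assumes the config was saved one level up
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
cd ..
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
# job-script is attached in the original email; its location here is an assumption.
bin/lkp qemu -k ../linux/arch/x86/boot/bzImage ../job-script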
Thanks,
Rong Chen