[sched/fair] 3c29e651e1: hackbench.throughput -15.2% regression
by kernel test robot
Greetings,
FYI, we noticed a -15.2% regression of hackbench.throughput due to commit:
commit: 3c29e651e16dd3b3179cfb2d055ee9538e37515c ("sched/fair: Fall back to sched-idle CPU if idle CPU isn't found")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: hackbench
on test machine: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
with the following parameters:
nr_threads: 100%
mode: threads
ipc: pipe
cpufreq_governor: performance
ucode: 0xca
test-description: Hackbench is both a benchmark and a stress test for the Linux kernel scheduler.
test-url: https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/sc...
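For a sense of what this configuration exercises: in pipe/threads mode, hackbench runs groups of sender/receiver thread pairs exchanging small messages over pipes, so every write wakes a reader blocked on the pipe. A minimal standalone sketch of that pattern follows; it is an illustration only (plain C with POSIX pipes and pthreads, arbitrary message size and loop count), not the hackbench source the harness runs.

/* Sketch of a hackbench-style pipe sender/receiver pair (illustration only). */
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

#define MSG_SIZE 100     /* hackbench defaults to small (~100-byte) messages */
#define NR_LOOPS 100000  /* arbitrary iteration count for this sketch */

static int pipefd[2];

static void *sender(void *arg)
{
	char buf[MSG_SIZE] = { 0 };
	int i;

	(void)arg;
	/* Each write wakes the reader sleeping on the kernel's pipe wait queue. */
	for (i = 0; i < NR_LOOPS; i++)
		if (write(pipefd[1], buf, sizeof(buf)) != sizeof(buf))
			exit(1);
	return NULL;
}

static void *receiver(void *arg)
{
	char buf[MSG_SIZE];
	int i;

	(void)arg;
	for (i = 0; i < NR_LOOPS; i++)
		if (read(pipefd[0], buf, sizeof(buf)) <= 0)
			exit(1);
	return NULL;
}

int main(void)
{
	pthread_t tx, rx;

	if (pipe(pipefd))
		return 1;
	/* The real benchmark runs many such pairs per group, all in parallel. */
	pthread_create(&rx, NULL, receiver, NULL);
	pthread_create(&tx, NULL, sender, NULL);
	pthread_join(tx, NULL);
	pthread_join(rx, NULL);
	return 0;
}

Built with cc -O2 -pthread, this exercises the same pipe_write() -> __wake_up_common_lock() -> pipe_read()/pipe_wait() path that dominates the profiles further down.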
In addition, the commit also has a significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput -11.3% regression |
| test machine | 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | ipc=pipe |
| | iterations=18 |
| | mode=threads |
| | nr_threads=1600% |
| | ucode=0x43 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput -10.6% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | ipc=pipe |
| | mode=process |
| | nr_threads=50% |
| | ucode=0xb000038 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput -6.8% regression |
| test machine | 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory |
| test parameters | cpufreq_governor=performance |
| | ipc=pipe |
| | mode=threads |
| | nr_threads=100% |
| | ucode=0xb8 |
+------------------+-----------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
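For context on the change itself: per its subject, the commit under test makes the scheduler's wake-up CPU selection fall back, when no fully idle CPU is found, to a CPU whose runnable tasks are all SCHED_IDLE (the base commit of the comparisons below, 43e9f7f231, adds the per-cfs_rq SCHED_IDLE task accounting such a check relies on). The standalone model below illustrates only the shape of that fallback policy; the struct and function names are hypothetical, and this is not the kernel's select_idle_sibling()/select_idle_cpu() code.

/* Hypothetical per-CPU snapshot, for illustration only. */
struct cpu_stat {
	int nr_running;     /* all runnable tasks on this CPU          */
	int nr_sched_idle;  /* of those, tasks in the SCHED_IDLE class */
};

static int cpu_is_idle(const struct cpu_stat *c)
{
	return c->nr_running == 0;
}

/* "sched-idle": busy, but only with SCHED_IDLE tasks. */
static int cpu_is_sched_idle(const struct cpu_stat *c)
{
	return c->nr_running > 0 && c->nr_running == c->nr_sched_idle;
}

/*
 * Old policy: return the first idle CPU, or -1 if none is found.
 * New policy (the commit): if the scan finds no idle CPU, fall back to the
 * first sched-idle CPU seen during the same scan.
 */
int pick_wakeup_cpu(const struct cpu_stat *cpus, int nr_cpus)
{
	int i, sched_idle = -1;

	for (i = 0; i < nr_cpus; i++) {
		if (cpu_is_idle(&cpus[i]))
			return i;
		if (sched_idle < 0 && cpu_is_sched_idle(&cpus[i]))
			sched_idle = i;
	}
	return sched_idle;  /* may still be -1: nothing suitable found */
}

In both comparisons below, the wake-up path is where the extra cycles show up: select_idle_sibling(), available_idle_cpu() and cpumask_next_wrap() all gain a noticeable share of self/children time after the change, which is consistent with the scan doing more work per wake-up.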
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.6/threads/100%/debian-x86_64-20191114.cgz/lkp-cfl-e1/hackbench/0xca
commit:
43e9f7f231 ("sched/fair: Start tracking SCHED_IDLE tasks count in cfs_rq")
3c29e651e1 ("sched/fair: Fall back to sched-idle CPU if idle CPU isn't found")
43e9f7f231e40e45 3c29e651e16dd3b3179cfb2d055
---------------- ---------------------------
%stddev %change %stddev
\ | \
69492 ± 3% -15.2% 58899 hackbench.throughput
7.727e+08 ± 7% +38.3% 1.069e+09 ± 2% hackbench.time.involuntary_context_switches
131708 ± 4% -15.4% 111382 hackbench.time.minor_page_faults
684.98 ± 2% -2.5% 667.72 hackbench.time.user_time
1.755e+09 ± 2% +11.2% 1.953e+09 hackbench.time.voluntary_context_switches
4.32e+08 ± 4% -15.6% 3.648e+08 hackbench.workload
186.50 ± 12% -16.6% 155.50 ± 11% interrupts.TLB:TLB_shootdowns
4147892 ± 5% +19.6% 4961643 vmstat.system.cs
0.72 ± 10% -0.1 0.58 ± 17% mpstat.cpu.all.idle%
0.00 ±156% +0.0 0.01 ± 62% mpstat.cpu.all.soft%
1038 ± 9% +18.5% 1230 ± 4% slabinfo.avc_xperms_data.active_objs
1038 ± 9% +18.5% 1230 ± 4% slabinfo.avc_xperms_data.num_objs
40647036 ± 10% -34.6% 26589504 proc-vmstat.numa_hit
40647036 ± 10% -34.6% 26589504 proc-vmstat.numa_local
40724029 ± 10% -34.5% 26661546 proc-vmstat.pgalloc_normal
763906 ± 2% -7.5% 706781 proc-vmstat.pgfault
40701242 ± 10% -34.6% 26637947 proc-vmstat.pgfree
1636058 ± 11% -45.0% 899629 ± 11% turbostat.C1
0.10 ± 7% -0.0 0.07 ± 7% turbostat.C1%
15201 ± 25% +113.3% 32422 ± 8% turbostat.C1E
0.01 ± 57% +0.0 0.03 ± 13% turbostat.C1E%
9093 ± 11% +217.2% 28841 ± 11% turbostat.C3
0.00 ±173% +0.0 0.03 ± 13% turbostat.C3%
16977 ± 2% -92.8% 1222 ± 98% turbostat.C8
0.17 ± 4% -0.2 0.01 ±110% turbostat.C8%
0.13 -100.0% 0.00 turbostat.CPU%c7
9623106 ± 5% -35.1% 6241651 ± 8% cpuidle.C1.time
1636668 ± 11% -45.0% 899926 ± 11% cpuidle.C1.usage
899494 ± 33% +272.6% 3351412 ± 5% cpuidle.C1E.time
15327 ± 25% +112.6% 32592 ± 8% cpuidle.C1E.usage
435674 ± 20% +643.5% 3239351 ± 14% cpuidle.C3.time
9179 ± 11% +215.2% 28933 ± 11% cpuidle.C3.usage
16850924 ± 2% -94.2% 979516 ± 86% cpuidle.C8.time
17150 ± 2% -92.0% 1378 ± 88% cpuidle.C8.usage
6022328 ± 18% -57.1% 2584773 ± 7% cpuidle.POLL.time
5339703 ± 19% -60.1% 2130089 ± 7% cpuidle.POLL.usage
14.06 ± 25% +64.0% 23.07 ± 13% sched_debug.cfs_rq:/.runnable_load_avg.min
25.10 ± 5% -15.8% 21.12 ± 7% sched_debug.cfs_rq:/.runnable_load_avg.stddev
13339 ± 24% +45.7% 19433 ± 7% sched_debug.cfs_rq:/.runnable_weight.min
166.51 ± 24% +48.9% 247.86 ± 7% sched_debug.cfs_rq:/.util_est_enqueued.min
253.56 ± 2% -12.3% 222.30 ± 9% sched_debug.cfs_rq:/.util_est_enqueued.stddev
159602 ± 23% +55.6% 248411 ± 6% sched_debug.cpu.avg_idle.max
51567 ± 24% +49.1% 76905 ± 11% sched_debug.cpu.avg_idle.stddev
11576 ± 5% -9.5% 10478 sched_debug.cpu.curr->pid.avg
9976 ± 8% -16.8% 8305 ± 9% sched_debug.cpu.curr->pid.min
4.81 ± 15% +43.3% 6.89 ± 8% sched_debug.cpu.nr_running.min
6.68 ± 4% -17.0% 5.54 ± 8% sched_debug.cpu.nr_running.stddev
76003928 ± 2% +21.0% 91975904 sched_debug.cpu.nr_switches.avg
78115404 ± 2% +23.4% 96397551 sched_debug.cpu.nr_switches.max
74275755 ± 2% +19.0% 88359903 sched_debug.cpu.nr_switches.min
1043054 ± 15% +83.9% 1917752 ± 10% sched_debug.cpu.nr_switches.stddev
-685.26 +64.8% -1129 sched_debug.cpu.nr_uninterruptible.min
15470 ± 9% -31.0% 10674 ± 9% softirqs.CPU0.SCHED
13756 ± 10% -40.8% 8140 ± 4% softirqs.CPU1.SCHED
13645 ± 11% -39.7% 8227 ± 3% softirqs.CPU10.SCHED
14065 ± 12% -38.8% 8609 ± 2% softirqs.CPU11.SCHED
13948 ± 11% -39.9% 8381 ± 6% softirqs.CPU12.SCHED
13823 ± 12% -39.7% 8332 ± 9% softirqs.CPU13.SCHED
73323 +13.4% 83165 ± 11% softirqs.CPU14.RCU
13872 ± 12% -36.5% 8809 ± 7% softirqs.CPU14.SCHED
13726 ± 12% -40.4% 8175 ± 4% softirqs.CPU15.SCHED
13780 ± 11% -34.8% 8987 ± 12% softirqs.CPU2.SCHED
13895 ± 10% -40.5% 8266 ± 6% softirqs.CPU3.SCHED
221879 +13.4% 251589 ± 14% softirqs.CPU3.TIMER
13873 ± 10% -40.4% 8262 ± 6% softirqs.CPU4.SCHED
72625 +12.0% 81355 ± 10% softirqs.CPU5.RCU
13876 ± 13% -41.4% 8137 ± 3% softirqs.CPU5.SCHED
14118 ± 12% -40.8% 8354 ± 3% softirqs.CPU6.SCHED
13503 ± 12% -40.0% 8099 softirqs.CPU7.SCHED
13549 ± 11% -38.2% 8374 ± 8% softirqs.CPU8.SCHED
14273 ± 10% -40.1% 8548 ± 6% softirqs.CPU9.SCHED
223184 ± 11% -38.9% 136383 ± 5% softirqs.SCHED
55.80 +9.9% 61.34 perf-stat.i.MPKI
1.32 +0.1 1.39 perf-stat.i.branch-miss-rate%
1.066e+08 +3.7% 1.105e+08 perf-stat.i.branch-misses
0.15 ± 8% -0.0 0.11 ± 3% perf-stat.i.cache-miss-rate%
2521584 ± 9% -26.5% 1854353 ± 3% perf-stat.i.cache-misses
2.27e+09 +8.0% 2.451e+09 perf-stat.i.cache-references
4162783 ± 4% +19.7% 4981708 perf-stat.i.context-switches
1.52 +2.0% 1.55 perf-stat.i.cpi
227633 ± 14% +75.1% 398694 ± 3% perf-stat.i.cpu-migrations
33931 ± 12% +42.7% 48428 ± 2% perf-stat.i.cycles-between-cache-misses
0.01 -0.0 0.01 ± 4% perf-stat.i.dTLB-load-miss-rate%
1408442 ± 2% -16.4% 1177727 ± 4% perf-stat.i.dTLB-load-misses
1.229e+10 -2.1% 1.203e+10 perf-stat.i.dTLB-loads
0.00 ± 2% -0.0 0.00 ± 4% perf-stat.i.dTLB-store-miss-rate%
35757 ± 2% -16.0% 30032 ± 3% perf-stat.i.dTLB-store-misses
7.449e+09 -2.3% 7.279e+09 perf-stat.i.dTLB-stores
39246970 -5.5% 37079604 perf-stat.i.iTLB-load-misses
76813 ± 8% -26.2% 56724 ± 6% perf-stat.i.iTLB-loads
4.096e+10 -1.9% 4.02e+10 perf-stat.i.instructions
1062 +4.1% 1105 perf-stat.i.instructions-per-iTLB-miss
0.66 -1.9% 0.65 perf-stat.i.ipc
1210 -7.1% 1124 perf-stat.i.minor-faults
105889 ± 10% -24.8% 79596 ± 5% perf-stat.i.node-loads
174508 ± 7% -24.4% 131915 perf-stat.i.node-stores
1210 -7.1% 1124 perf-stat.i.page-faults
55.42 ± 2% +10.0% 60.98 perf-stat.overall.MPKI
1.31 +0.1 1.38 perf-stat.overall.branch-miss-rate%
0.11 ± 11% -0.0 0.08 ± 3% perf-stat.overall.cache-miss-rate%
1.51 +1.9% 1.54 perf-stat.overall.cpi
24740 ± 9% +34.9% 33377 ± 3% perf-stat.overall.cycles-between-cache-misses
0.01 -0.0 0.01 ± 4% perf-stat.overall.dTLB-load-miss-rate%
0.00 ± 2% -0.0 0.00 ± 4% perf-stat.overall.dTLB-store-miss-rate%
1043 +3.9% 1084 perf-stat.overall.instructions-per-iTLB-miss
0.66 -1.9% 0.65 perf-stat.overall.ipc
57826 ± 3% +15.6% 66874 perf-stat.overall.path-length
1.064e+08 +3.7% 1.104e+08 perf-stat.ps.branch-misses
2517524 ± 9% -26.5% 1851335 ± 3% perf-stat.ps.cache-misses
2.266e+09 +8.0% 2.447e+09 perf-stat.ps.cache-references
4155898 ± 4% +19.7% 4973458 perf-stat.ps.context-switches
227258 ± 14% +75.1% 398041 ± 3% perf-stat.ps.cpu-migrations
1406119 ± 2% -16.4% 1175777 ± 4% perf-stat.ps.dTLB-load-misses
1.227e+10 -2.1% 1.201e+10 perf-stat.ps.dTLB-loads
35699 ± 2% -16.0% 29983 ± 3% perf-stat.ps.dTLB-store-misses
7.437e+09 -2.3% 7.267e+09 perf-stat.ps.dTLB-stores
39182154 -5.5% 37018206 perf-stat.ps.iTLB-load-misses
76695 ± 8% -26.2% 56638 ± 6% perf-stat.ps.iTLB-loads
4.09e+10 -1.9% 4.013e+10 perf-stat.ps.instructions
1208 -7.1% 1122 perf-stat.ps.minor-faults
105716 ± 10% -24.8% 79465 ± 5% perf-stat.ps.node-loads
174234 ± 7% -24.4% 131703 perf-stat.ps.node-stores
1208 -7.1% 1122 perf-stat.ps.page-faults
29.39 ±104% -29.4 0.00 perf-profile.calltrace.cycles-pp.start_thread
15.86 ±105% -15.9 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_write.start_thread
14.88 ±105% -14.9 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write.start_thread
14.75 ±105% -14.8 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write.start_thread
13.53 ±104% -13.5 0.00 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write.start_thread
12.51 ±104% -12.5 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_read.start_thread
12.45 ±105% -12.4 0.00 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
11.21 ±105% -11.2 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_read.start_thread
11.08 ±105% -11.1 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read.start_thread
10.47 ±105% -10.5 0.00 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read.start_thread
9.73 ±105% -9.7 0.00 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
3.09 ± 12% -0.6 2.50 ± 2% perf-profile.calltrace.cycles-pp.copy_page_to_iter.pipe_read.new_sync_read.vfs_read.ksys_read
2.45 ± 14% -0.5 1.95 ± 2% perf-profile.calltrace.cycles-pp.copy_page_from_iter.pipe_write.new_sync_write.vfs_write.ksys_write
1.15 ± 13% -0.2 0.95 ± 4% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter.pipe_write.new_sync_write.vfs_write
0.99 ± 13% -0.2 0.83 ± 5% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter.pipe_write.new_sync_write
0.92 ± 15% +0.2 1.10 perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.84 ± 23% +0.3 1.10 ± 2% perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.99 ± 19% +0.3 1.26 ± 3% perf-profile.calltrace.cycles-pp.update_curr.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.97 ± 23% +0.3 1.26 perf-profile.calltrace.cycles-pp.__enqueue_entity.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.29 ±100% +0.3 0.60 ± 5% perf-profile.calltrace.cycles-pp.reschedule_interrupt.__lock_text_start.__wake_up_common_lock.pipe_write.new_sync_write
0.55 ± 62% +0.3 0.89 ± 7% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.dequeue_task_fair.__schedule.schedule
0.28 ±100% +0.4 0.63 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__schedule.schedule.exit_to_usermode_loop
0.33 ±100% +0.4 0.69 ± 2% perf-profile.calltrace.cycles-pp.prepare_to_wait.pipe_wait.pipe_read.new_sync_read.vfs_read
0.29 ±100% +0.4 0.65 ± 4% perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
1.00 ± 27% +0.4 1.37 ± 2% perf-profile.calltrace.cycles-pp.check_preempt_wakeup.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function
0.57 ± 63% +0.4 0.95 ± 5% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate
1.46 ± 31% +0.4 1.86 ± 2% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.72 ± 60% +0.4 1.14 ± 2% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.29 ±100% +0.4 0.70 ± 6% perf-profile.calltrace.cycles-pp.pick_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.34 ±100% +0.4 0.77 ± 7% perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.update_load_avg.enqueue_entity.enqueue_task_fair.activate_task
1.26 ± 26% +0.5 1.72 perf-profile.calltrace.cycles-pp.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common
1.13 ± 26% +0.5 1.59 perf-profile.calltrace.cycles-pp.update_load_avg.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.35 ±100% +0.5 0.83 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
1.28 ± 28% +0.5 1.77 ± 3% perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
1.40 ± 26% +0.5 1.89 perf-profile.calltrace.cycles-pp.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.44 ±100% +0.5 0.95 ± 7% perf-profile.calltrace.cycles-pp.put_prev_entity.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop
2.33 ± 17% +0.5 2.88 ± 2% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.pipe_wait.pipe_read
1.33 ± 28% +0.6 1.89 ± 2% perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
1.40 ± 24% +0.6 1.97 ± 4% perf-profile.calltrace.cycles-pp.available_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
0.83 ± 66% +0.6 1.40 ± 5% perf-profile.calltrace.cycles-pp.native_write_msr
5.01 ± 8% +0.8 5.81 ± 2% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
1.75 ± 39% +0.9 2.61 ± 6% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
1.33 ± 67% +1.0 2.28 ± 6% perf-profile.calltrace.cycles-pp.switch_fpu_return.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.56 ± 12% +1.2 7.78 perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
8.96 ± 9% +1.6 10.56 perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
9.23 ± 10% +1.6 10.87 perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
9.17 ± 10% +1.6 10.81 perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
3.72 ± 31% +1.7 5.44 ± 6% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.37 ± 8% +1.9 5.32 ± 4% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
4.37 ± 10% +2.3 6.65 ± 5% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
15.27 ± 10% +2.3 17.59 perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.new_sync_read
15.59 ± 10% +2.4 17.96 perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.new_sync_read.vfs_read
3.00 ± 71% +2.6 5.60 ± 6% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.10 ± 71% +2.7 5.77 ± 6% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
18.05 ± 10% +2.7 20.73 perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.new_sync_read.vfs_read.ksys_read
28.21 ± 5% +3.7 31.87 perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write.ksys_write
24.29 ± 8% +4.7 28.98 perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
23.38 ± 8% +4.7 28.08 perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
23.72 ± 8% +4.7 28.43 perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write
31.18 ± 41% +12.7 43.86 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
33.44 ± 40% +13.2 46.61 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
64.81 ± 43% +27.5 92.34 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
65.30 ± 43% +27.7 92.99 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
29.39 ±104% -29.4 0.00 perf-profile.children.cycles-pp.start_thread
16.09 ±105% -16.1 0.00 perf-profile.children.cycles-pp.__GI___libc_write
12.73 ±104% -12.7 0.00 perf-profile.children.cycles-pp.__GI___libc_read
5.80 ± 10% -1.2 4.65 ± 3% perf-profile.children.cycles-pp.security_file_permission
4.10 ± 16% -1.0 3.10 ± 6% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
3.21 ± 9% -0.7 2.55 ± 3% perf-profile.children.cycles-pp.copy_page_to_iter
1.45 ± 32% -0.6 0.83 perf-profile.children.cycles-pp.entry_SYSCALL_64
3.04 ± 8% -0.6 2.44 ± 3% perf-profile.children.cycles-pp.selinux_file_permission
2.56 ± 11% -0.6 1.99 ± 2% perf-profile.children.cycles-pp.copy_page_from_iter
2.80 ± 9% -0.5 2.25 ± 2% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
3.38 ± 7% -0.5 2.87 ± 2% perf-profile.children.cycles-pp.mutex_lock
1.62 ± 12% -0.4 1.23 ± 3% perf-profile.children.cycles-pp.___might_sleep
1.82 ± 9% -0.4 1.43 ± 4% perf-profile.children.cycles-pp.syscall_return_via_sysret
2.09 ± 12% -0.4 1.72 ± 6% perf-profile.children.cycles-pp.file_has_perm
0.90 ± 20% -0.3 0.57 ± 9% perf-profile.children.cycles-pp.__mutex_lock
1.14 ± 12% -0.3 0.82 ± 3% perf-profile.children.cycles-pp.__inode_security_revalidate
1.26 ± 10% -0.3 0.97 ± 4% perf-profile.children.cycles-pp.copyin
1.70 ± 8% -0.3 1.42 ± 3% perf-profile.children.cycles-pp.copyout
1.25 ± 11% -0.3 0.98 ± 3% perf-profile.children.cycles-pp.fsnotify
1.04 ± 9% -0.3 0.79 ± 3% perf-profile.children.cycles-pp._cond_resched
1.72 ± 7% -0.2 1.48 ± 5% perf-profile.children.cycles-pp.fput_many
1.16 ± 10% -0.2 0.92 ± 4% perf-profile.children.cycles-pp.__might_sleep
1.09 ± 8% -0.2 0.85 ± 3% perf-profile.children.cycles-pp.__fsnotify_parent
0.83 ± 13% -0.2 0.63 ± 3% perf-profile.children.cycles-pp.__might_fault
0.69 ± 11% -0.1 0.56 perf-profile.children.cycles-pp.current_time
0.99 ± 4% -0.1 0.89 ± 4% perf-profile.children.cycles-pp.touch_atime
0.65 ± 6% -0.1 0.55 ± 6% perf-profile.children.cycles-pp.atime_needs_update
0.39 ± 4% -0.1 0.29 perf-profile.children.cycles-pp.rcu_all_qs
0.13 ± 39% -0.1 0.04 ± 59% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.34 ± 11% -0.1 0.26 ± 9% perf-profile.children.cycles-pp.preempt_schedule_common
0.49 ± 5% -0.1 0.41 ± 2% perf-profile.children.cycles-pp.wake_up_q
0.24 ± 2% -0.1 0.17 ± 4% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.30 ± 11% -0.1 0.23 ± 3% perf-profile.children.cycles-pp.inode_has_perm
0.23 ± 8% -0.1 0.18 ± 6% perf-profile.children.cycles-pp.__sb_end_write
0.21 ± 9% -0.1 0.16 ± 9% perf-profile.children.cycles-pp.__x64_sys_read
0.29 ± 8% -0.0 0.24 ± 2% perf-profile.children.cycles-pp.timespec64_trunc
0.21 ± 8% -0.0 0.17 ± 8% perf-profile.children.cycles-pp.__x64_sys_write
0.22 ± 7% -0.0 0.19 ± 2% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.08 ± 16% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.bpf_fd_pass
0.10 ± 4% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.perf_exclude_event
0.08 ± 24% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.rcu_note_context_switch
0.17 ± 19% +0.1 0.24 ± 8% perf-profile.children.cycles-pp.wakeup_preempt_entity
0.23 ± 19% +0.1 0.32 ± 3% perf-profile.children.cycles-pp.resched_curr
0.61 ± 8% +0.1 0.72 ± 6% perf-profile.children.cycles-pp.sched_clock
0.67 ± 8% +0.1 0.78 ± 6% perf-profile.children.cycles-pp.sched_clock_cpu
0.50 ± 9% +0.1 0.62 ± 3% perf-profile.children.cycles-pp.account_entity_enqueue
0.61 ± 10% +0.2 0.79 perf-profile.children.cycles-pp.account_entity_dequeue
0.43 ± 19% +0.2 0.62 ± 7% perf-profile.children.cycles-pp.clear_buddies
1.42 ± 14% +0.2 1.66 ± 4% perf-profile.children.cycles-pp.native_write_msr
1.17 ± 13% +0.3 1.42 ± 2% perf-profile.children.cycles-pp.check_preempt_wakeup
0.85 ± 23% +0.3 1.12 ± 7% perf-profile.children.cycles-pp.put_prev_entity
0.94 ± 14% +0.3 1.22 ± 5% perf-profile.children.cycles-pp.___perf_sw_event
1.45 ± 11% +0.3 1.73 perf-profile.children.cycles-pp.check_preempt_curr
1.45 ± 12% +0.3 1.76 perf-profile.children.cycles-pp.__enqueue_entity
1.60 ± 10% +0.3 1.91 perf-profile.children.cycles-pp.ttwu_do_wakeup
1.12 ± 9% +0.3 1.43 ± 3% perf-profile.children.cycles-pp.__update_load_avg_se
1.64 ± 8% +0.4 2.00 ± 4% perf-profile.children.cycles-pp.available_idle_cpu
0.96 ± 19% +0.4 1.34 ± 7% perf-profile.children.cycles-pp.pick_next_entity
1.85 ± 16% +0.4 2.28 ± 6% perf-profile.children.cycles-pp.switch_fpu_return
1.90 ± 9% +0.6 2.50 ± 3% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
3.22 ± 13% +0.6 3.87 perf-profile.children.cycles-pp.update_load_avg
4.03 ± 10% +0.6 4.68 ± 2% perf-profile.children.cycles-pp.update_curr
3.01 ± 14% +0.7 3.72 ± 2% perf-profile.children.cycles-pp.reweight_entity
5.10 ± 8% +0.8 5.87 ± 2% perf-profile.children.cycles-pp.enqueue_entity
4.71 ± 15% +1.1 5.77 ± 4% perf-profile.children.cycles-pp.pick_next_task_fair
6.73 ± 11% +1.1 7.87 perf-profile.children.cycles-pp.dequeue_task_fair
9.07 ± 9% +1.5 10.61 perf-profile.children.cycles-pp.enqueue_task_fair
9.32 ± 9% +1.6 10.89 perf-profile.children.cycles-pp.ttwu_do_activate
91.46 +1.6 93.03 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
9.27 ± 9% +1.6 10.84 perf-profile.children.cycles-pp.activate_task
4.16 ± 26% +1.6 5.78 ± 6% perf-profile.children.cycles-pp.exit_to_usermode_loop
90.79 +1.7 92.47 perf-profile.children.cycles-pp.do_syscall_64
3.48 ± 8% +1.9 5.36 ± 4% perf-profile.children.cycles-pp.select_idle_sibling
4.44 ± 10% +2.2 6.67 ± 5% perf-profile.children.cycles-pp.select_task_rq_fair
18.46 ± 9% +2.3 20.81 perf-profile.children.cycles-pp.pipe_wait
19.93 ± 11% +3.5 23.46 ± 2% perf-profile.children.cycles-pp.__schedule
28.82 ± 5% +3.6 32.45 perf-profile.children.cycles-pp.__wake_up_common_lock
19.89 ± 12% +3.7 23.62 ± 2% perf-profile.children.cycles-pp.schedule
23.96 ± 8% +4.5 28.45 perf-profile.children.cycles-pp.try_to_wake_up
24.53 ± 8% +4.5 29.06 perf-profile.children.cycles-pp.__wake_up_common
23.91 ± 8% +4.6 28.46 perf-profile.children.cycles-pp.autoremove_wake_function
2.76 ± 9% -0.5 2.23 ± 2% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.46 ± 18% -0.4 1.06 ± 4% perf-profile.self.cycles-pp.pipe_write
1.81 ± 9% -0.4 1.43 ± 4% perf-profile.self.cycles-pp.syscall_return_via_sysret
1.56 ± 12% -0.4 1.19 ± 3% perf-profile.self.cycles-pp.___might_sleep
2.09 ± 7% -0.3 1.81 ± 2% perf-profile.self.cycles-pp.mutex_lock
1.86 ± 8% -0.3 1.59 ± 3% perf-profile.self.cycles-pp.selinux_file_permission
1.21 ± 11% -0.3 0.96 ± 2% perf-profile.self.cycles-pp.fsnotify
1.70 ± 7% -0.2 1.46 ± 5% perf-profile.self.cycles-pp.fput_many
1.04 ± 11% -0.2 0.82 ± 3% perf-profile.self.cycles-pp.__might_sleep
1.01 ± 8% -0.2 0.80 ± 3% perf-profile.self.cycles-pp.__fsnotify_parent
1.04 ± 10% -0.2 0.83 perf-profile.self.cycles-pp.entry_SYSCALL_64
0.67 ± 10% -0.2 0.51 ± 5% perf-profile.self.cycles-pp.new_sync_write
0.64 ± 12% -0.2 0.48 ± 5% perf-profile.self.cycles-pp.vfs_write
1.67 -0.2 1.51 ± 2% perf-profile.self.cycles-pp.pipe_read
0.58 ± 17% -0.1 0.44 ± 7% perf-profile.self.cycles-pp.file_has_perm
0.44 ± 18% -0.1 0.31 ± 7% perf-profile.self.cycles-pp.__mutex_lock
0.49 ± 13% -0.1 0.35 ± 3% perf-profile.self.cycles-pp.copy_page_from_iter
0.64 ± 8% -0.1 0.52 ± 4% perf-profile.self.cycles-pp.copy_page_to_iter
0.67 ± 7% -0.1 0.55 ± 3% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.30 ± 5% -0.1 0.20 ± 4% perf-profile.self.cycles-pp.ksys_write
0.40 ± 12% -0.1 0.30 ± 7% perf-profile.self.cycles-pp.security_file_permission
0.13 ± 38% -0.1 0.04 ± 59% perf-profile.self.cycles-pp.mutex_spin_on_owner
0.36 ± 11% -0.1 0.28 ± 3% perf-profile.self.cycles-pp.__inode_security_revalidate
0.76 ± 5% -0.1 0.68 ± 5% perf-profile.self.cycles-pp.do_syscall_64
0.21 ± 2% -0.1 0.14 ± 8% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.26 ± 12% -0.1 0.20 ± 4% perf-profile.self.cycles-pp.inode_has_perm
0.29 ± 11% -0.1 0.23 ± 6% perf-profile.self.cycles-pp.current_time
0.28 ± 4% -0.1 0.21 ± 2% perf-profile.self.cycles-pp.rcu_all_qs
0.22 ± 7% -0.1 0.17 ± 7% perf-profile.self.cycles-pp.__sb_end_write
0.31 -0.1 0.26 ± 7% perf-profile.self.cycles-pp.ksys_read
0.20 ± 11% -0.0 0.16 ± 9% perf-profile.self.cycles-pp.__x64_sys_read
0.26 ± 8% -0.0 0.22 perf-profile.self.cycles-pp.timespec64_trunc
0.20 ± 5% -0.0 0.16 ± 9% perf-profile.self.cycles-pp.__x64_sys_write
0.24 ± 9% -0.0 0.21 ± 6% perf-profile.self.cycles-pp.__might_fault
0.09 ± 9% -0.0 0.06 perf-profile.self.cycles-pp.copyin
0.14 ± 5% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.07 ± 10% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.sched_clock
0.16 ± 5% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.ttwu_do_wakeup
0.20 ± 12% +0.0 0.24 ± 5% perf-profile.self.cycles-pp.__list_add_valid
0.10 ± 26% +0.0 0.14 ± 10% perf-profile.self.cycles-pp.update_cfs_rq_h_load
0.15 ± 16% +0.1 0.21 ± 11% perf-profile.self.cycles-pp.wakeup_preempt_entity
0.35 ± 7% +0.1 0.43 ± 4% perf-profile.self.cycles-pp.prepare_to_wait
0.55 ± 2% +0.1 0.63 ± 3% perf-profile.self.cycles-pp.__fget_light
0.22 ± 18% +0.1 0.30 ± 3% perf-profile.self.cycles-pp.resched_curr
0.48 ± 11% +0.1 0.58 ± 3% perf-profile.self.cycles-pp.check_preempt_wakeup
0.39 ± 11% +0.1 0.50 perf-profile.self.cycles-pp.account_entity_enqueue
0.85 ± 7% +0.1 0.97 ± 4% perf-profile.self.cycles-pp.enqueue_entity
0.43 ± 16% +0.1 0.55 ± 7% perf-profile.self.cycles-pp.pick_next_entity
0.86 ± 10% +0.1 0.99 perf-profile.self.cycles-pp.dequeue_task_fair
0.37 ± 21% +0.2 0.55 ± 6% perf-profile.self.cycles-pp.clear_buddies
0.45 ± 14% +0.2 0.65 perf-profile.self.cycles-pp.account_entity_dequeue
1.41 ± 14% +0.2 1.65 ± 4% perf-profile.self.cycles-pp.native_write_msr
0.80 ± 16% +0.3 1.06 ± 4% perf-profile.self.cycles-pp.___perf_sw_event
1.09 ± 9% +0.3 1.39 ± 2% perf-profile.self.cycles-pp.__update_load_avg_se
0.84 ± 16% +0.3 1.14 ± 7% perf-profile.self.cycles-pp.select_task_rq_fair
1.45 ± 12% +0.3 1.75 perf-profile.self.cycles-pp.__enqueue_entity
1.62 ± 8% +0.4 1.99 ± 4% perf-profile.self.cycles-pp.available_idle_cpu
1.83 ± 16% +0.4 2.25 ± 6% perf-profile.self.cycles-pp.switch_fpu_return
2.15 ± 11% +0.5 2.60 ± 5% perf-profile.self.cycles-pp.update_curr
1.88 ± 9% +0.6 2.45 ± 3% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
1.21 ± 9% +0.7 1.92 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
0.80 ± 11% +1.4 2.17 ± 3% perf-profile.self.cycles-pp.select_idle_sibling
hackbench.throughput
74000 +-+-----------------------------------------------------------------+
72000 +-+.+ .+ |
|.+ : .+. .+. + |
70000 +-+ : + + +. : |
68000 +-+ + + : |
| :: |
66000 +-+ + |
64000 +-+ |
62000 +-+ |
| |
60000 +-+ O O OO O O O O O O O OO |
58000 +-+ O O O O O O O O O O O
| O O OO O O |
56000 O-+ O O O O O O |
54000 +-O-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-hsw-ep4: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/ipc/iterations/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/18/x86_64-rhel-7.6/threads/1600%/debian-x86_64-2019-11-14.cgz/lkp-hsw-ep4/hackbench/0x43
commit:
43e9f7f231 ("sched/fair: Start tracking SCHED_IDLE tasks count in cfs_rq")
3c29e651e1 ("sched/fair: Fall back to sched-idle CPU if idle CPU isn't found")
43e9f7f231e40e45 3c29e651e16dd3b3179cfb2d055
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
176075 ± 2% -11.3% 156152 hackbench.throughput
891.79 +12.3% 1001 hackbench.time.elapsed_time
891.79 +12.3% 1001 hackbench.time.elapsed_time.max
1.889e+09 ± 9% +33.1% 2.514e+09 ± 3% hackbench.time.involuntary_context_switches
42939 ± 3% +17.0% 50220 hackbench.time.system_time
18978 +3.2% 19592 hackbench.time.user_time
2.558e+09 ± 8% +33.5% 3.415e+09 ± 2% hackbench.time.voluntary_context_switches
43133704 ± 3% +39.2% 60035429 ± 33% cpuidle.C1.time
48661 +17.7% 57274 ± 5% meminfo.Shmem
19043 ± 2% -14.0% 16377 meminfo.max_used_kB
2502 +1.3% 2535 turbostat.Avg_MHz
3.368e+08 ± 5% +30.6% 4.398e+08 turbostat.IRQ
66.25 +4.5% 69.25 vmstat.cpu.sy
29.25 ± 2% -8.5% 26.75 vmstat.cpu.us
5000443 ± 7% +18.6% 5931772 vmstat.system.cs
370173 ± 3% +17.1% 433616 vmstat.system.in
98786923 ± 7% +74.5% 1.724e+08 numa-numastat.node0.local_node
98791606 ± 7% +74.5% 1.724e+08 numa-numastat.node0.numa_hit
4686 ± 99% +297.8% 18642 ± 24% numa-numastat.node0.other_node
1.818e+08 -51.4% 88447082 ± 3% numa-numastat.node1.local_node
1.819e+08 -51.4% 88451833 ± 3% numa-numastat.node1.numa_hit
18694 ± 25% -74.6% 4753 ± 97% numa-numastat.node1.other_node
155316 +1.5% 157632 proc-vmstat.nr_active_anon
99.50 +5.3% 104.75 ± 3% proc-vmstat.nr_dirtied
4440 +4.9% 4656 proc-vmstat.nr_inactive_anon
6283 +1.9% 6399 proc-vmstat.nr_mapped
12248 ± 2% +16.6% 14276 ± 5% proc-vmstat.nr_shmem
98.75 +5.3% 104.00 proc-vmstat.nr_written
155316 +1.5% 157632 proc-vmstat.nr_zone_active_anon
4440 +4.9% 4656 proc-vmstat.nr_zone_inactive_anon
2.807e+08 -7.1% 2.609e+08 proc-vmstat.numa_hit
2.807e+08 -7.1% 2.609e+08 proc-vmstat.numa_local
2.83e+08 -7.0% 2.632e+08 proc-vmstat.pgalloc_normal
3212744 +2.3% 3286078 proc-vmstat.pgfault
2.828e+08 -7.0% 2.631e+08 proc-vmstat.pgfree
39216 ± 35% +153.3% 99342 ± 4% numa-vmstat.node0.nr_active_anon
31864 ± 44% +211.0% 99111 ± 4% numa-vmstat.node0.nr_anon_pages
11183 ± 3% +4212.8% 482321 ± 2% numa-vmstat.node0.nr_kernel_stack
499.50 ± 23% +66.9% 833.75 ± 8% numa-vmstat.node0.nr_page_table_pages
11417 -63.5% 4168 ± 5% numa-vmstat.node0.nr_shmem
15905 ± 4% -12.5% 13919 ± 2% numa-vmstat.node0.nr_slab_reclaimable
17996 ± 3% +587.4% 123708 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
39216 ± 35% +153.3% 99342 ± 4% numa-vmstat.node0.nr_zone_active_anon
52471566 ± 6% +73.3% 90915199 numa-vmstat.node0.numa_hit
52466577 ± 6% +73.2% 90896366 numa-vmstat.node0.numa_local
4990 ± 92% +277.5% 18836 ± 24% numa-vmstat.node0.numa_other
114712 ± 12% -50.1% 57234 ± 7% numa-vmstat.node1.nr_active_anon
114545 ± 12% -58.2% 47874 ± 9% numa-vmstat.node1.nr_anon_pages
462075 -96.6% 15717 ± 67% numa-vmstat.node1.nr_kernel_stack
871.50 ± 13% -37.9% 541.00 ± 13% numa-vmstat.node1.nr_page_table_pages
691.25 ± 22% +1359.2% 10086 ± 8% numa-vmstat.node1.nr_shmem
11357 ± 5% +18.1% 13414 ± 4% numa-vmstat.node1.nr_slab_reclaimable
118594 -85.5% 17243 ± 16% numa-vmstat.node1.nr_slab_unreclaimable
114712 ± 12% -50.1% 57234 ± 7% numa-vmstat.node1.nr_zone_active_anon
93308245 ± 2% -49.7% 46921467 ± 4% numa-vmstat.node1.numa_hit
93128224 ± 2% -49.8% 46755145 ± 4% numa-vmstat.node1.numa_local
157762 ± 35% +154.4% 401333 ± 4% numa-meminfo.node0.Active
157721 ± 35% +154.4% 401249 ± 4% numa-meminfo.node0.Active(anon)
128232 ± 44% +212.2% 400337 ± 4% numa-meminfo.node0.AnonPages
63487 ± 4% -12.0% 55850 ± 2% numa-meminfo.node0.KReclaimable
11691 ± 3% +4087.3% 489553 numa-meminfo.node0.KernelStack
1170596 ± 4% +104.2% 2390602 numa-meminfo.node0.MemUsed
2003 ± 23% +67.3% 3351 ± 8% numa-meminfo.node0.PageTables
63487 ± 4% -12.0% 55850 ± 2% numa-meminfo.node0.SReclaimable
72301 ± 3% +591.9% 500285 numa-meminfo.node0.SUnreclaim
45749 -63.5% 16679 ± 5% numa-meminfo.node0.Shmem
135789 ± 3% +309.6% 556136 numa-meminfo.node0.Slab
460025 ± 11% -50.2% 228882 ± 7% numa-meminfo.node1.Active
459902 ± 11% -50.2% 228802 ± 7% numa-meminfo.node1.Active(anon)
459237 ± 11% -58.3% 191354 ± 9% numa-meminfo.node1.AnonPages
45568 ± 5% +17.8% 53657 ± 4% numa-meminfo.node1.KReclaimable
468955 -96.7% 15461 ± 67% numa-meminfo.node1.KernelStack
2241450 -51.9% 1078199 ± 3% numa-meminfo.node1.MemUsed
3493 ± 13% -38.0% 2167 ± 13% numa-meminfo.node1.PageTables
45568 ± 5% +17.8% 53657 ± 4% numa-meminfo.node1.SReclaimable
479010 -85.6% 68750 ± 16% numa-meminfo.node1.SUnreclaim
2779 ± 23% +1354.1% 40419 ± 8% numa-meminfo.node1.Shmem
524580 -76.7% 122408 ± 8% numa-meminfo.node1.Slab
250988 ± 19% -52.0% 120407 ± 33% sched_debug.cfs_rq:/.MIN_vruntime.avg
1469922 ± 14% -32.1% 998119 ± 33% sched_debug.cfs_rq:/.MIN_vruntime.stddev
20766 ± 9% -25.3% 15519 ± 10% sched_debug.cfs_rq:/.load.avg
43820 ± 19% -40.6% 26044 ± 36% sched_debug.cfs_rq:/.load.stddev
250988 ± 19% -52.0% 120407 ± 33% sched_debug.cfs_rq:/.max_vruntime.avg
1469922 ± 14% -32.1% 998119 ± 33% sched_debug.cfs_rq:/.max_vruntime.stddev
10990567 ± 9% +22.9% 13509735 ± 7% sched_debug.cfs_rq:/.min_vruntime.stddev
102.40 -33.3% 68.27 ± 57% sched_debug.cfs_rq:/.removed.load_avg.max
4711 -33.4% 3136 ± 57% sched_debug.cfs_rq:/.removed.runnable_sum.max
19930 ± 10% -25.1% 14917 ± 10% sched_debug.cfs_rq:/.runnable_weight.avg
-5713498 -349.0% 14226386 ± 8% sched_debug.cfs_rq:/.spread0.avg
8106715 ± 13% +283.1% 31057182 ± 6% sched_debug.cfs_rq:/.spread0.max
-18025645 -99.7% -45393 sched_debug.cfs_rq:/.spread0.min
11027552 ± 9% +21.5% 13398318 ± 7% sched_debug.cfs_rq:/.spread0.stddev
570.46 ± 5% +109.2% 1193 ± 21% sched_debug.cfs_rq:/.util_est_enqueued.avg
2898 ± 11% +90.0% 5507 ± 10% sched_debug.cfs_rq:/.util_est_enqueued.max
563.27 ± 13% +149.3% 1404 ± 18% sched_debug.cfs_rq:/.util_est_enqueued.stddev
155578 ± 22% +84.1% 286457 ± 28% sched_debug.cpu.avg_idle.min
1678 ± 43% +105.9% 3457 ± 31% sched_debug.cpu.clock.stddev
1678 ± 43% +105.9% 3457 ± 31% sched_debug.cpu.clock_task.stddev
50207 -12.3% 44017 ± 7% sched_debug.cpu.curr->pid.avg
82673 -19.5% 66514 ± 5% sched_debug.cpu.curr->pid.max
746.05 ± 11% +554.3% 4881 ± 8% sched_debug.cpu.curr->pid.min
30728 ± 2% -40.8% 18187 ± 10% sched_debug.cpu.curr->pid.stddev
0.00 ± 43% +106.0% 0.00 ± 31% sched_debug.cpu.next_balance.stddev
20.49 ± 6% +149.0% 51.03 ± 23% sched_debug.cpu.nr_running.avg
157.83 ± 9% +77.0% 279.37 ± 8% sched_debug.cpu.nr_running.max
29.95 ± 11% +137.1% 70.99 ± 16% sched_debug.cpu.nr_running.stddev
29803727 ± 9% +29.6% 38623870 ± 3% sched_debug.cpu.nr_switches.avg
51549476 ± 10% +22.5% 63171598 ± 3% sched_debug.cpu.nr_switches.max
12792570 ± 3% +43.5% 18362678 ± 4% sched_debug.cpu.nr_switches.min
2.03 ± 80% +430.9% 10.76 ± 38% sched_debug.cpu.nr_uninterruptible.avg
6.99 ± 4% +48.7% 10.39 ± 5% perf-stat.i.MPKI
7.49 ± 5% -1.2 6.29 ± 5% perf-stat.i.cache-miss-rate%
27202880 ± 2% +46.9% 39966416 ± 2% perf-stat.i.cache-misses
4.662e+08 ± 5% +71.5% 7.998e+08 ± 5% perf-stat.i.cache-references
5989366 ± 4% +12.9% 6760958 ± 5% perf-stat.i.context-switches
197319 ± 12% -19.2% 159504 ± 3% perf-stat.i.cpu-migrations
7415 ± 2% -23.3% 5686 ± 3% perf-stat.i.cycles-between-cache-misses
0.61 ± 35% -0.3 0.35 ± 4% perf-stat.i.dTLB-load-miss-rate%
1.506e+08 ± 38% -44.9% 82967539 ± 5% perf-stat.i.dTLB-load-misses
0.22 ± 4% -0.0 0.20 ± 4% perf-stat.i.dTLB-store-miss-rate%
35974976 ± 5% -13.4% 31146077 ± 6% perf-stat.i.dTLB-store-misses
1.627e+10 -3.7% 1.566e+10 perf-stat.i.dTLB-stores
46665620 ± 3% -14.6% 39838962 ± 5% perf-stat.i.iTLB-load-misses
2191 ± 5% +21.4% 2660 ± 7% perf-stat.i.instructions-per-iTLB-miss
0.47 -2.5% 0.46 perf-stat.i.ipc
6660 ± 8% -19.1% 5391 ± 8% perf-stat.i.minor-faults
62.93 ± 2% +11.1 74.06 perf-stat.i.node-load-miss-rate%
12537290 ± 4% +73.5% 21750457 ± 4% perf-stat.i.node-load-misses
5459308 ± 3% -7.4% 5057404 ± 3% perf-stat.i.node-loads
29.39 ± 2% -4.1 25.29 ± 2% perf-stat.i.node-store-miss-rate%
2863317 +13.6% 3252632 ± 2% perf-stat.i.node-store-misses
6381925 ± 2% +53.5% 9796918 ± 4% perf-stat.i.node-stores
6638 ± 8% -19.4% 5348 ± 8% perf-stat.i.page-faults
4.89 ± 4% +78.4% 8.73 perf-stat.overall.MPKI
1.29 +0.0 1.31 perf-stat.overall.branch-miss-rate%
6.13 ± 4% -1.3 4.85 ± 3% perf-stat.overall.cache-miss-rate%
2.15 +2.0% 2.19 perf-stat.overall.cpi
7187 -27.9% 5182 ± 3% perf-stat.overall.cycles-between-cache-misses
0.63 ± 39% -0.3 0.36 ± 5% perf-stat.overall.dTLB-load-miss-rate%
0.23 ± 5% -0.0 0.21 ± 3% perf-stat.overall.dTLB-store-miss-rate%
75.65 -1.1 74.59 perf-stat.overall.iTLB-load-miss-rate%
1604 ± 3% +15.9% 1859 perf-stat.overall.instructions-per-iTLB-miss
0.47 -2.0% 0.46 perf-stat.overall.ipc
63.14 ± 2% +13.6 76.70 perf-stat.overall.node-load-miss-rate%
30.74 ± 2% -5.1 25.62 ± 2% perf-stat.overall.node-store-miss-rate%
47754 ± 2% +11.7% 53321 perf-stat.overall.path-length
24893507 +40.6% 34993733 ± 2% perf-stat.ps.cache-misses
4.072e+08 ± 5% +77.1% 7.212e+08 perf-stat.ps.cache-references
4968881 ± 7% +19.0% 5911450 perf-stat.ps.context-switches
1.789e+11 +1.3% 1.812e+11 perf-stat.ps.cpu-cycles
1.59e+08 ± 39% -44.1% 88801351 ± 6% perf-stat.ps.dTLB-load-misses
2.502e+10 -2.0% 2.453e+10 perf-stat.ps.dTLB-loads
39138578 ± 5% -12.6% 34190129 ± 3% perf-stat.ps.dTLB-store-misses
1.675e+10 -3.7% 1.613e+10 perf-stat.ps.dTLB-stores
51901139 ± 2% -14.3% 44461197 perf-stat.ps.iTLB-load-misses
16711264 ± 2% -9.3% 15153636 ± 2% perf-stat.ps.iTLB-loads
3573 -9.1% 3247 perf-stat.ps.minor-faults
10427704 ± 3% +73.2% 18055799 ± 3% perf-stat.ps.node-load-misses
6085553 ± 4% -10.0% 5479559 ± 2% perf-stat.ps.node-loads
2566511 +13.4% 2911367 ± 2% perf-stat.ps.node-store-misses
5784179 ± 2% +46.2% 8454210 ± 3% perf-stat.ps.node-stores
3573 -9.1% 3247 perf-stat.ps.page-faults
7.427e+13 ± 2% +11.7% 8.293e+13 perf-stat.total.instructions
127394 ± 5% +19.1% 151665 softirqs.CPU0.RCU
321761 ± 3% +12.8% 363107 ± 3% softirqs.CPU0.TIMER
124526 ± 5% +20.2% 149697 ± 3% softirqs.CPU1.RCU
123768 ± 4% +24.7% 154332 ± 2% softirqs.CPU10.RCU
130315 ± 3% +22.1% 159111 softirqs.CPU11.RCU
126684 ± 4% +26.1% 159755 ± 2% softirqs.CPU12.RCU
124649 ± 9% +27.8% 159298 ± 3% softirqs.CPU13.RCU
124528 ± 4% +23.2% 153372 ± 2% softirqs.CPU14.RCU
326150 ± 2% +9.8% 357991 ± 4% softirqs.CPU14.TIMER
121923 ± 6% +17.4% 143119 softirqs.CPU15.RCU
122114 ± 5% +16.7% 142560 softirqs.CPU16.RCU
121425 ± 6% +14.4% 138861 ± 3% softirqs.CPU17.RCU
330948 ± 2% +22.2% 404570 ± 16% softirqs.CPU17.TIMER
312317 ± 4% +18.2% 369091 ± 2% softirqs.CPU18.TIMER
9626 ± 2% +15.0% 11067 ± 3% softirqs.CPU19.SCHED
322308 ± 3% +21.1% 390454 ± 2% softirqs.CPU19.TIMER
124281 ± 5% +22.8% 152670 ± 2% softirqs.CPU2.RCU
9452 ± 2% +15.6% 10927 ± 3% softirqs.CPU20.SCHED
318575 ± 6% +22.3% 389741 ± 3% softirqs.CPU20.TIMER
9327 ± 3% +16.7% 10886 ± 3% softirqs.CPU21.SCHED
306628 ± 4% +23.3% 377962 ± 3% softirqs.CPU21.TIMER
9424 ± 2% +17.6% 11084 ± 3% softirqs.CPU22.SCHED
302796 ± 4% +24.2% 376126 ± 2% softirqs.CPU22.TIMER
9798 ± 2% +11.4% 10918 ± 4% softirqs.CPU23.SCHED
312009 ± 4% +23.6% 385626 ± 2% softirqs.CPU23.TIMER
9559 ± 3% +13.2% 10822 softirqs.CPU24.SCHED
311772 ± 4% +23.0% 383504 ± 2% softirqs.CPU24.TIMER
9514 ± 4% +13.2% 10766 ± 3% softirqs.CPU25.SCHED
307640 ± 4% +22.4% 376479 ± 2% softirqs.CPU25.TIMER
9573 ± 3% +13.8% 10897 ± 3% softirqs.CPU26.SCHED
314607 ± 9% +21.1% 380918 ± 2% softirqs.CPU26.TIMER
9385 +13.7% 10668 ± 2% softirqs.CPU27.SCHED
301949 ± 4% +25.2% 378140 ± 3% softirqs.CPU27.TIMER
9483 ± 2% +14.7% 10874 ± 2% softirqs.CPU28.SCHED
299462 ± 3% +24.9% 373896 ± 2% softirqs.CPU28.TIMER
9505 ± 3% +13.3% 10769 ± 3% softirqs.CPU29.SCHED
307087 ± 4% +23.8% 380066 ± 3% softirqs.CPU29.TIMER
124992 ± 4% +19.4% 149253 ± 3% softirqs.CPU3.RCU
9630 ± 4% +13.6% 10942 ± 3% softirqs.CPU30.SCHED
306382 ± 4% +25.4% 384355 ± 2% softirqs.CPU30.TIMER
9496 ± 6% +12.6% 10689 ± 2% softirqs.CPU31.SCHED
308689 ± 4% +24.5% 384419 ± 2% softirqs.CPU31.TIMER
9714 +12.6% 10937 ± 4% softirqs.CPU32.SCHED
304137 ± 3% +35.9% 413240 ± 13% softirqs.CPU32.TIMER
9598 ± 2% +13.4% 10882 ± 5% softirqs.CPU33.SCHED
9622 +13.8% 10949 ± 3% softirqs.CPU34.SCHED
305882 ± 4% +27.3% 389315 ± 4% softirqs.CPU34.TIMER
9409 ± 4% +15.3% 10853 ± 4% softirqs.CPU35.SCHED
306215 ± 4% +23.8% 379222 ± 2% softirqs.CPU35.TIMER
321617 ± 4% +12.9% 362966 ± 3% softirqs.CPU36.TIMER
134843 ± 6% +19.4% 161053 ± 2% softirqs.CPU37.RCU
130954 ± 6% +20.4% 157716 softirqs.CPU38.RCU
129884 ± 5% +17.8% 152978 softirqs.CPU39.RCU
124744 ± 4% +20.1% 149790 ± 3% softirqs.CPU4.RCU
130593 ± 5% +17.2% 153092 ± 2% softirqs.CPU40.RCU
134185 ± 5% +19.9% 160871 softirqs.CPU41.RCU
133336 ± 5% +18.1% 157409 softirqs.CPU42.RCU
134486 ± 4% +18.7% 159613 softirqs.CPU43.RCU
129857 ± 6% +18.5% 153887 softirqs.CPU44.RCU
131816 ± 5% +20.0% 158144 softirqs.CPU45.RCU
132691 ± 5% +22.4% 162412 softirqs.CPU46.RCU
138736 ± 3% +18.7% 164668 softirqs.CPU47.RCU
132971 ± 6% +24.2% 165142 softirqs.CPU48.RCU
136142 ± 3% +20.8% 164493 softirqs.CPU49.RCU
125011 ± 4% +23.6% 154462 ± 2% softirqs.CPU5.RCU
133660 ± 4% +19.5% 159753 ± 2% softirqs.CPU50.RCU
140405 ± 4% +19.5% 167772 softirqs.CPU51.RCU
140523 ± 4% +18.5% 166576 softirqs.CPU52.RCU
138849 ± 5% +14.6% 159157 ± 4% softirqs.CPU53.RCU
332171 ± 2% +21.4% 403260 ± 15% softirqs.CPU53.TIMER
133037 ± 4% +14.0% 151688 ± 2% softirqs.CPU54.RCU
9584 +14.3% 10959 ± 2% softirqs.CPU54.SCHED
310603 ± 4% +18.0% 366472 ± 3% softirqs.CPU54.TIMER
9565 ± 2% +13.8% 10889 ± 5% softirqs.CPU55.SCHED
321231 ± 3% +21.3% 389753 ± 2% softirqs.CPU55.TIMER
136633 ± 4% +12.6% 153854 softirqs.CPU56.RCU
9430 ± 2% +15.6% 10898 ± 5% softirqs.CPU56.SCHED
319477 ± 6% +22.1% 390107 ± 2% softirqs.CPU56.TIMER
128672 ± 4% +14.1% 146791 ± 3% softirqs.CPU57.RCU
9493 ± 2% +13.1% 10740 ± 3% softirqs.CPU57.SCHED
305670 ± 4% +23.5% 377597 ± 2% softirqs.CPU57.TIMER
127129 ± 4% +15.6% 146993 ± 2% softirqs.CPU58.RCU
9529 ± 3% +13.6% 10824 ± 4% softirqs.CPU58.SCHED
302317 ± 4% +23.8% 374371 ± 3% softirqs.CPU58.TIMER
9427 ± 2% +14.0% 10743 ± 2% softirqs.CPU59.SCHED
310985 ± 4% +23.5% 384119 ± 3% softirqs.CPU59.TIMER
124937 ± 4% +21.6% 151962 ± 3% softirqs.CPU6.RCU
9435 ± 2% +14.1% 10762 ± 3% softirqs.CPU60.SCHED
311646 ± 4% +22.8% 382731 ± 3% softirqs.CPU60.TIMER
9585 +13.5% 10877 ± 2% softirqs.CPU61.SCHED
306857 ± 4% +22.8% 376944 ± 2% softirqs.CPU61.TIMER
9483 ± 2% +12.5% 10671 ± 5% softirqs.CPU62.SCHED
9620 ± 4% +9.7% 10556 ± 4% softirqs.CPU63.SCHED
301085 ± 3% +24.7% 375397 ± 3% softirqs.CPU63.TIMER
300443 ± 4% +23.7% 371620 ± 2% softirqs.CPU64.TIMER
9667 +11.2% 10751 ± 4% softirqs.CPU65.SCHED
307194 ± 4% +23.4% 379020 ± 3% softirqs.CPU65.TIMER
9493 ± 3% +14.9% 10909 ± 3% softirqs.CPU66.SCHED
306226 ± 4% +25.1% 383140 ± 3% softirqs.CPU66.TIMER
9674 ± 3% +13.8% 11010 ± 3% softirqs.CPU67.SCHED
307870 ± 4% +25.0% 384793 ± 2% softirqs.CPU67.TIMER
9451 ± 2% +14.2% 10792 ± 4% softirqs.CPU68.SCHED
301873 ± 3% +26.1% 380748 softirqs.CPU68.TIMER
9542 +13.8% 10863 ± 3% softirqs.CPU69.SCHED
316206 ± 8% +20.5% 380893 ± 2% softirqs.CPU69.TIMER
128158 ± 4% +20.4% 154294 ± 2% softirqs.CPU7.RCU
9456 ± 3% +13.5% 10734 ± 3% softirqs.CPU70.SCHED
305361 ± 4% +36.4% 416546 ± 15% softirqs.CPU70.TIMER
9520 ± 3% +15.4% 10985 ± 4% softirqs.CPU71.SCHED
305605 ± 4% +24.6% 380653 ± 2% softirqs.CPU71.TIMER
122831 ± 4% +21.6% 149424 ± 2% softirqs.CPU8.RCU
123870 ± 4% +23.0% 152396 ± 2% softirqs.CPU9.RCU
9357957 ± 5% +15.4% 10794953 softirqs.RCU
23209193 ± 3% +15.5% 26806870 ± 3% softirqs.TIMER
4.79 ± 4% -3.1 1.71 ± 16% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.85 ± 4% -2.4 1.41 ± 16% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.39 ± 4% -2.0 1.35 ± 16% perf-profile.calltrace.cycles-pp.copy_page_to_iter.pipe_read.new_sync_read.vfs_read.ksys_read
2.93 ± 5% -1.9 0.99 ± 18% perf-profile.calltrace.cycles-pp.copy_page_from_iter.pipe_write.new_sync_write.vfs_write.ksys_write
3.25 ± 4% -1.9 1.31 ± 10% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_write.new_sync_write.vfs_write.ksys_write
2.14 ± 4% -1.2 0.91 ± 14% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_read.ksys_read.do_syscall_64
2.06 ± 4% -1.2 0.85 ± 13% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_write.ksys_write.do_syscall_64
1.52 ± 3% -1.0 0.49 ± 59% perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.pipe_read.new_sync_read.vfs_read
1.46 ± 3% -1.0 0.46 ± 59% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.pipe_read.new_sync_read
1.53 ± 8% -0.8 0.68 ± 12% perf-profile.calltrace.cycles-pp.touch_atime.pipe_read.new_sync_read.vfs_read.ksys_read
1.65 ± 4% -0.8 0.81 ± 10% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.45 ± 5% -0.7 0.74 ± 10% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.35 ± 5% -0.7 0.68 ± 10% perf-profile.calltrace.cycles-pp.__fget.__fget_light.__fdget_pos.ksys_write.do_syscall_64
1.28 ± 8% -0.7 0.62 ± 12% perf-profile.calltrace.cycles-pp.file_update_time.pipe_write.new_sync_write.vfs_write.ksys_write
1.29 ± 3% -0.6 0.67 ± 9% perf-profile.calltrace.cycles-pp.mutex_lock.pipe_write.new_sync_write.vfs_write.ksys_write
0.64 ± 11% +0.3 0.96 ± 7% perf-profile.calltrace.cycles-pp.__lock_text_start.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
0.00 +0.5 0.55 ± 3% perf-profile.calltrace.cycles-pp.pick_next_entity.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop
0.00 +0.6 0.55 ± 5% perf-profile.calltrace.cycles-pp.pick_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.00 +0.6 0.57 ± 6% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.00 +0.6 0.61 ± 7% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.dequeue_task_fair.__schedule.schedule
0.00 +0.7 0.69 ± 8% perf-profile.calltrace.cycles-pp.finish_task_switch.__schedule.schedule.pipe_wait.pipe_read
0.00 +0.7 0.70 ± 6% perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.00 +0.7 0.71 ± 11% perf-profile.calltrace.cycles-pp.__enqueue_entity.put_prev_entity.pick_next_task_fair.__schedule.schedule
0.00 +0.7 0.72 ± 7% perf-profile.calltrace.cycles-pp.update_load_avg.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.00 +0.7 0.72 ± 6% perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.dequeue_task_fair.__schedule.schedule.pipe_wait
0.00 +0.8 0.76 ± 10% perf-profile.calltrace.cycles-pp.__enqueue_entity.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.00 +0.8 0.77 ± 2% perf-profile.calltrace.cycles-pp.update_curr.dequeue_entity.dequeue_task_fair.__schedule.schedule
0.00 +0.8 0.77 ± 5% perf-profile.calltrace.cycles-pp.__switch_to
0.00 +0.8 0.77 ± 6% perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
0.00 +0.8 0.79 ± 5% perf-profile.calltrace.cycles-pp.update_load_avg.dequeue_entity.dequeue_task_fair.__schedule.schedule
0.00 +0.8 0.81 ± 4% perf-profile.calltrace.cycles-pp.mutex_lock.pipe_wait.pipe_read.new_sync_read.vfs_read
0.00 +0.8 0.82 ± 6% perf-profile.calltrace.cycles-pp.update_rq_clock.__schedule.schedule.pipe_wait.pipe_read
0.00 +0.8 0.82 ± 6% perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.00 +0.9 0.90 ± 3% perf-profile.calltrace.cycles-pp.find_next_bit.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up
0.00 +0.9 0.91 ± 4% perf-profile.calltrace.cycles-pp.update_rq_clock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.00 +0.9 0.91 ± 6% perf-profile.calltrace.cycles-pp.check_preempt_wakeup.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function
0.00 +0.9 0.92 ± 2% perf-profile.calltrace.cycles-pp.native_write_msr
0.00 +1.1 1.05 ± 5% perf-profile.calltrace.cycles-pp.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.00 +1.1 1.08 ± 8% perf-profile.calltrace.cycles-pp.put_prev_entity.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop
0.00 +1.1 1.13 ± 5% perf-profile.calltrace.cycles-pp.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.00 +1.2 1.17 ± 9% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.00 +1.2 1.18 ± 2% perf-profile.calltrace.cycles-pp.__switch_to_asm
0.00 +1.2 1.20 ± 9% perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule.pipe_wait.pipe_read
0.00 +1.3 1.30 ± 5% perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.00 +1.3 1.34 ± 6% perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
0.00 +1.6 1.62 ± 4% perf-profile.calltrace.cycles-pp.switch_fpu_return.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.9 1.88 ± 5% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.pipe_wait.pipe_read
0.00 +1.9 1.93 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.13 ±173% +2.0 2.11 ± 5% perf-profile.calltrace.cycles-pp.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
0.00 +2.1 2.06 ± 4% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
0.00 +2.4 2.38 ± 4% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.00 +2.7 2.65 ± 4% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.35 ±103% +4.3 4.67 ± 4% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.22 ±173% +4.6 4.81 ± 4% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.23 ±173% +4.7 4.97 ± 4% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.72 ± 23% +5.0 5.72 ± 5% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
0.69 ± 17% +5.6 6.25 ± 4% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
0.74 ± 17% +5.8 6.50 ± 4% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.74 ± 18% +5.8 6.55 ± 5% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
15.99 ± 54% +7.5 23.51 ± 6% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
13.13 +7.7 20.86 ± 3% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.49 +8.7 20.19 ± 3% perf-profile.calltrace.cycles-pp.pipe_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
1.31 ± 27% +8.9 10.21 ± 7% perf-profile.calltrace.cycles-pp.available_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
1.81 ± 22% +11.2 12.98 ± 5% perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.new_sync_read
1.89 ± 22% +11.4 13.28 ± 5% perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.new_sync_read.vfs_read
2.18 ± 21% +13.1 15.28 ± 4% perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.new_sync_read.vfs_read.ksys_read
2.01 ± 25% +19.1 21.10 ± 7% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
2.38 ± 23% +20.0 22.40 ± 7% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
19.01 ± 3% +23.4 42.38 ± 4% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
21.13 ± 53% +23.8 44.92 ± 5% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
17.85 ± 4% +23.9 41.76 ± 4% perf-profile.calltrace.cycles-pp.pipe_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
19.16 ± 53% +24.6 43.74 ± 5% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.79 ± 22% +30.7 34.45 ± 6% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
3.84 ± 23% +30.9 34.69 ± 6% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write
4.13 ± 21% +31.0 35.10 ± 6% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
5.77 ± 14% +31.0 36.76 ± 6% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write.ksys_write
9.20 ± 6% -6.0 3.16 ± 12% perf-profile.children.cycles-pp.syscall_return_via_sysret
8.88 ± 3% -5.8 3.03 ± 12% perf-profile.children.cycles-pp.entry_SYSCALL_64
8.72 ± 4% -5.3 3.39 ± 11% perf-profile.children.cycles-pp.security_file_permission
4.27 ± 3% -2.4 1.89 ± 9% perf-profile.children.cycles-pp.selinux_file_permission
3.15 ± 5% -2.1 1.07 ± 12% perf-profile.children.cycles-pp.file_has_perm
3.48 ± 3% -2.0 1.46 ± 10% perf-profile.children.cycles-pp.copy_page_to_iter
3.80 ± 3% -2.0 1.79 ± 6% perf-profile.children.cycles-pp.mutex_unlock
3.02 ± 5% -1.9 1.11 ± 11% perf-profile.children.cycles-pp.copy_page_from_iter
3.31 ± 4% -1.6 1.75 ± 6% perf-profile.children.cycles-pp.__fdget_pos
2.55 ± 3% -1.5 1.02 ± 10% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
3.17 ± 4% -1.5 1.70 ± 6% perf-profile.children.cycles-pp.__fget_light
2.10 ± 7% -1.4 0.71 ± 12% perf-profile.children.cycles-pp.fsnotify
2.52 ± 6% -1.3 1.23 ± 8% perf-profile.children.cycles-pp.__fget
1.77 ± 6% -1.1 0.66 ± 9% perf-profile.children.cycles-pp.avc_has_perm
1.89 ± 5% -1.1 0.78 ± 10% perf-profile.children.cycles-pp.___might_sleep
1.44 ± 4% -1.0 0.48 ± 12% perf-profile.children.cycles-pp.__inode_security_revalidate
1.33 ± 10% -0.9 0.40 ± 16% perf-profile.children.cycles-pp.current_time
1.56 ± 3% -0.9 0.66 ± 9% perf-profile.children.cycles-pp.copyout
1.56 ± 8% -0.8 0.73 ± 7% perf-profile.children.cycles-pp.touch_atime
1.33 ± 4% -0.8 0.53 ± 10% perf-profile.children.cycles-pp.__might_sleep
1.15 ± 9% -0.8 0.36 ± 12% perf-profile.children.cycles-pp.atime_needs_update
1.17 ± 4% -0.7 0.43 ± 13% perf-profile.children.cycles-pp.copyin
1.23 ± 5% -0.7 0.54 ± 9% perf-profile.children.cycles-pp.__fsnotify_parent
1.32 ± 7% -0.6 0.68 ± 6% perf-profile.children.cycles-pp.file_update_time
1.02 ± 3% -0.6 0.38 ± 12% perf-profile.children.cycles-pp.__might_fault
2.54 ± 2% -0.5 2.03 perf-profile.children.cycles-pp.mutex_lock
0.78 ± 4% -0.4 0.36 ± 10% perf-profile.children.cycles-pp.fput_many
0.53 ± 4% -0.4 0.15 ± 21% perf-profile.children.cycles-pp.inode_has_perm
0.44 ± 12% -0.3 0.11 ± 22% perf-profile.children.cycles-pp.iov_iter_init
0.76 ± 4% -0.3 0.44 ± 8% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.53 ± 3% -0.3 0.21 ± 6% perf-profile.children.cycles-pp.rcu_all_qs
0.41 ± 9% -0.3 0.12 ± 15% perf-profile.children.cycles-pp.timespec64_trunc
0.97 -0.3 0.71 ± 3% perf-profile.children.cycles-pp._cond_resched
0.47 ± 7% -0.3 0.21 ± 11% perf-profile.children.cycles-pp.__x64_sys_read
0.45 ± 7% -0.3 0.20 ± 10% perf-profile.children.cycles-pp.__x64_sys_write
0.37 ± 4% -0.2 0.17 ± 16% perf-profile.children.cycles-pp.generic_pipe_buf_confirm
0.44 ± 12% -0.2 0.24 ± 3% perf-profile.children.cycles-pp.__sb_start_write
0.31 ± 6% -0.2 0.12 ± 13% perf-profile.children.cycles-pp.__sb_end_write
0.36 ± 7% -0.2 0.18 ± 6% perf-profile.children.cycles-pp.rw_verify_area
0.26 ± 14% -0.2 0.09 ± 14% perf-profile.children.cycles-pp.ktime_get_coarse_real_ts64
0.29 ± 8% -0.2 0.14 ± 16% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.22 ± 39% -0.2 0.06 ± 11% perf-profile.children.cycles-pp.__vfs_read
0.21 ± 19% -0.1 0.11 ± 9% perf-profile.children.cycles-pp.__mutex_unlock_slowpath
0.12 ± 9% -0.1 0.03 ±100% perf-profile.children.cycles-pp.__vfs_write
0.12 ± 17% -0.0 0.08 ± 11% perf-profile.children.cycles-pp.wake_up_q
0.08 ± 10% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.kill_fasync
0.11 ± 7% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.09 ± 5% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.01 ±173% +0.1 0.06 ± 6% perf-profile.children.cycles-pp.__mnt_want_write
0.07 ± 25% +0.1 0.12 ± 27% perf-profile.children.cycles-pp.__softirqentry_text_start
0.08 ± 30% +0.1 0.14 ± 26% perf-profile.children.cycles-pp.irq_exit
0.06 ± 6% +0.1 0.12 ± 10% perf-profile.children.cycles-pp.__mark_inode_dirty
0.00 +0.1 0.06 perf-profile.children.cycles-pp.preempt_schedule_common
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.is_cpu_allowed
0.03 ±100% +0.1 0.10 ± 8% perf-profile.children.cycles-pp.native_apic_msr_eoi_write
0.00 +0.1 0.07 ± 5% perf-profile.children.cycles-pp.__x2apic_send_IPI_dest
0.01 ±173% +0.1 0.09 ± 32% perf-profile.children.cycles-pp.migrate_task_rq_fair
0.00 +0.1 0.07 ± 14% perf-profile.children.cycles-pp.update_cfs_rq_h_load
0.05 ± 61% +0.1 0.13 ± 5% perf-profile.children.cycles-pp.native_irq_return_iret
0.00 +0.1 0.08 ± 8% perf-profile.children.cycles-pp.rcu_note_context_switch
0.00 +0.1 0.08 ± 13% perf-profile.children.cycles-pp.smp_reschedule_interrupt
0.00 +0.1 0.09 ± 4% perf-profile.children.cycles-pp.native_load_tls
0.03 ±102% +0.1 0.12 ± 30% perf-profile.children.cycles-pp.set_task_cpu
0.00 +0.1 0.09 ± 11% perf-profile.children.cycles-pp.check_cfs_rq_runtime
0.00 +0.1 0.10 ± 5% perf-profile.children.cycles-pp.rb_next
0.10 ± 10% +0.1 0.20 ± 9% perf-profile.children.cycles-pp.generic_update_time
0.00 +0.1 0.10 ± 29% perf-profile.children.cycles-pp.__x64_sys_exit
0.00 +0.1 0.10 ± 28% perf-profile.children.cycles-pp.do_exit
0.00 +0.1 0.11 ± 8% perf-profile.children.cycles-pp.rb_insert_color
0.00 +0.1 0.11 ± 7% perf-profile.children.cycles-pp.__list_add_valid
0.00 +0.1 0.12 ± 7% perf-profile.children.cycles-pp.resched_curr
0.00 +0.1 0.12 ± 6% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.13 ± 10% +0.1 0.26 ± 4% perf-profile.children.cycles-pp.__list_del_entry_valid
0.00 +0.1 0.15 ± 4% perf-profile.children.cycles-pp.set_next_buddy
0.00 +0.2 0.16 ± 5% perf-profile.children.cycles-pp.deactivate_task
0.05 ± 9% +0.2 0.22 ± 7% perf-profile.children.cycles-pp.rb_erase
0.06 ± 13% +0.2 0.23 ± 9% perf-profile.children.cycles-pp.finish_wait
0.17 ± 12% +0.2 0.35 ± 2% perf-profile.children.cycles-pp.anon_pipe_buf_release
0.01 ±173% +0.2 0.20 ± 8% perf-profile.children.cycles-pp.wakeup_preempt_entity
0.00 +0.2 0.21 ± 4% perf-profile.children.cycles-pp.cpumask_next
0.07 ± 22% +0.3 0.40 ± 2% perf-profile.children.cycles-pp.prepare_to_wait
0.03 ±100% +0.4 0.39 ± 5% perf-profile.children.cycles-pp.update_min_vruntime
0.03 ±100% +0.4 0.39 ± 12% perf-profile.children.cycles-pp.cpus_share_cache
0.06 ± 14% +0.4 0.43 perf-profile.children.cycles-pp.clear_buddies
0.11 ± 15% +0.4 0.50 ± 3% perf-profile.children.cycles-pp.cpuacct_charge
0.10 ± 26% +0.4 0.49 ± 4% perf-profile.children.cycles-pp.native_sched_clock
0.32 ± 31% +0.4 0.75 ± 14% perf-profile.children.cycles-pp.sched_ttwu_pending
0.11 ± 26% +0.4 0.56 ± 3% perf-profile.children.cycles-pp.sched_clock
0.34 ± 32% +0.5 0.80 ± 15% perf-profile.children.cycles-pp.scheduler_ipi
0.08 ± 12% +0.5 0.56 ± 7% perf-profile.children.cycles-pp.account_entity_enqueue
0.13 ± 26% +0.5 0.62 ± 3% perf-profile.children.cycles-pp.sched_clock_cpu
0.14 ± 26% +0.6 0.77 ± 4% perf-profile.children.cycles-pp.__calc_delta
0.08 ± 16% +0.7 0.76 ± 3% perf-profile.children.cycles-pp.account_entity_dequeue
0.84 ± 5% +0.7 1.54 ± 5% perf-profile.children.cycles-pp.__lock_text_start
1.61 ± 16% +0.7 2.31 ± 9% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.42 ± 31% +0.7 1.12 ± 13% perf-profile.children.cycles-pp.reschedule_interrupt
0.24 ± 28% +0.9 1.11 perf-profile.children.cycles-pp.native_write_msr
0.14 ± 22% +0.9 1.01 ± 5% perf-profile.children.cycles-pp.check_preempt_wakeup
0.24 ± 32% +0.9 1.12 ± 7% perf-profile.children.cycles-pp.finish_task_switch
0.22 ± 22% +0.9 1.10 ± 3% perf-profile.children.cycles-pp.find_next_bit
0.22 ± 23% +0.9 1.12 ± 3% perf-profile.children.cycles-pp.set_next_entity
0.21 ± 32% +0.9 1.14 ± 3% perf-profile.children.cycles-pp.update_cfs_group
0.16 ± 23% +1.0 1.15 ± 4% perf-profile.children.cycles-pp.check_preempt_curr
0.21 ± 30% +1.0 1.21 ± 2% perf-profile.children.cycles-pp.__switch_to
0.23 ± 21% +1.0 1.24 ± 2% perf-profile.children.cycles-pp.__switch_to_asm
0.16 ± 38% +1.0 1.20 ± 4% perf-profile.children.cycles-pp.___perf_sw_event
0.15 ± 26% +1.0 1.20 ± 3% perf-profile.children.cycles-pp.pick_next_entity
0.15 ± 20% +1.1 1.21 ± 7% perf-profile.children.cycles-pp.put_prev_entity
0.17 ± 22% +1.1 1.23 ± 4% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.19 ± 10% +1.3 1.52 ± 10% perf-profile.children.cycles-pp.__enqueue_entity
0.24 ± 23% +1.3 1.59 ± 6% perf-profile.children.cycles-pp.__update_load_avg_se
0.28 ± 23% +1.4 1.64 ± 3% perf-profile.children.cycles-pp.switch_fpu_return
0.31 ± 29% +1.6 1.88 ± 5% perf-profile.children.cycles-pp.update_rq_clock
0.44 ± 21% +1.8 2.24 ± 4% perf-profile.children.cycles-pp.cpumask_next_wrap
0.33 ± 20% +1.8 2.15 ± 4% perf-profile.children.cycles-pp.dequeue_entity
23.28 ± 2% +2.0 25.27 ± 2% perf-profile.children.cycles-pp.ksys_read
0.46 ± 25% +2.1 2.57 ± 5% perf-profile.children.cycles-pp.update_load_avg
0.20 ± 21% +2.1 2.34 ± 4% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.91 ± 29% +2.2 3.12 ± 11% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.38 ± 23% +2.4 2.80 ± 4% perf-profile.children.cycles-pp.reweight_entity
0.52 ± 18% +2.4 2.96 ± 3% perf-profile.children.cycles-pp.enqueue_entity
0.49 ± 23% +2.6 3.10 ± 4% perf-profile.children.cycles-pp.update_curr
20.92 ± 2% +3.2 24.12 ± 2% perf-profile.children.cycles-pp.vfs_read
0.21 ± 23% +3.4 3.65 ± 7% perf-profile.children.cycles-pp._raw_spin_lock
0.76 ± 24% +3.9 4.67 ± 3% perf-profile.children.cycles-pp.pick_next_task_fair
0.66 ± 34% +4.4 5.06 ± 2% perf-profile.children.cycles-pp.exit_to_usermode_loop
0.81 ± 22% +5.0 5.83 ± 4% perf-profile.children.cycles-pp.dequeue_task_fair
1.01 ± 21% +5.8 6.86 ± 4% perf-profile.children.cycles-pp.enqueue_task_fair
1.06 ± 21% +6.0 7.10 ± 4% perf-profile.children.cycles-pp.activate_task
1.07 ± 21% +6.1 7.15 ± 4% perf-profile.children.cycles-pp.ttwu_do_activate
13.19 +7.8 20.94 ± 3% perf-profile.children.cycles-pp.new_sync_read
11.61 +8.7 20.29 ± 3% perf-profile.children.cycles-pp.pipe_read
1.45 ± 25% +8.9 10.35 ± 7% perf-profile.children.cycles-pp.available_idle_cpu
78.85 ± 2% +9.8 88.66 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
77.90 ± 2% +10.1 88.00 perf-profile.children.cycles-pp.do_syscall_64
2.42 ± 21% +12.9 15.36 ± 4% perf-profile.children.cycles-pp.pipe_wait
2.71 ± 26% +15.4 18.12 ± 4% perf-profile.children.cycles-pp.__schedule
2.76 ± 25% +15.6 18.37 ± 4% perf-profile.children.cycles-pp.schedule
27.59 ± 2% +18.6 46.17 ± 3% perf-profile.children.cycles-pp.ksys_write
2.26 ± 23% +19.0 21.30 ± 7% perf-profile.children.cycles-pp.select_idle_sibling
24.98 ± 2% +19.9 44.88 ± 3% perf-profile.children.cycles-pp.vfs_write
2.48 ± 23% +20.0 22.46 ± 7% perf-profile.children.cycles-pp.select_task_rq_fair
19.06 ± 3% +23.3 42.41 ± 4% perf-profile.children.cycles-pp.new_sync_write
17.98 ± 4% +23.9 41.93 ± 4% perf-profile.children.cycles-pp.pipe_write
4.03 ± 21% +30.6 34.60 ± 6% perf-profile.children.cycles-pp.try_to_wake_up
3.97 ± 22% +30.8 34.74 ± 6% perf-profile.children.cycles-pp.autoremove_wake_function
4.30 ± 20% +30.9 35.19 ± 6% perf-profile.children.cycles-pp.__wake_up_common
6.06 ± 14% +31.2 37.21 ± 5% perf-profile.children.cycles-pp.__wake_up_common_lock
24.41 ± 6% -15.5 8.96 ± 10% perf-profile.self.cycles-pp.do_syscall_64
9.18 ± 6% -6.0 3.14 ± 12% perf-profile.self.cycles-pp.syscall_return_via_sysret
8.63 ± 5% -5.6 3.00 ± 12% perf-profile.self.cycles-pp.entry_SYSCALL_64
3.71 ± 3% -2.0 1.73 ± 6% perf-profile.self.cycles-pp.mutex_unlock
2.47 ± 3% -1.5 0.96 ± 10% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
2.76 ± 3% -1.4 1.35 ± 8% perf-profile.self.cycles-pp.selinux_file_permission
2.02 ± 7% -1.3 0.68 ± 12% perf-profile.self.cycles-pp.fsnotify
2.46 ± 6% -1.3 1.20 ± 8% perf-profile.self.cycles-pp.__fget
1.75 ± 6% -1.1 0.64 ± 10% perf-profile.self.cycles-pp.avc_has_perm
1.83 ± 5% -1.1 0.75 ± 10% perf-profile.self.cycles-pp.___might_sleep
1.75 -1.0 0.77 ± 7% perf-profile.self.cycles-pp.pipe_write
1.20 ± 4% -0.7 0.46 ± 10% perf-profile.self.cycles-pp.__might_sleep
1.29 ± 11% -0.7 0.58 ± 10% perf-profile.self.cycles-pp.new_sync_read
1.63 ± 4% -0.7 0.97 ± 5% perf-profile.self.cycles-pp.pipe_read
1.15 ± 5% -0.7 0.49 ± 8% perf-profile.self.cycles-pp.__fsnotify_parent
0.89 ± 8% -0.6 0.27 ± 15% perf-profile.self.cycles-pp.copy_page_from_iter
0.89 ± 4% -0.6 0.29 ± 14% perf-profile.self.cycles-pp.security_file_permission
0.83 ± 5% -0.6 0.24 ± 21% perf-profile.self.cycles-pp.file_has_perm
0.90 ± 2% -0.5 0.41 ± 6% perf-profile.self.cycles-pp.new_sync_write
0.67 ± 10% -0.5 0.20 ± 15% perf-profile.self.cycles-pp.current_time
0.83 ± 4% -0.4 0.41 ± 10% perf-profile.self.cycles-pp.copy_page_to_iter
0.72 ± 7% -0.4 0.31 ± 9% perf-profile.self.cycles-pp.vfs_write
0.75 ± 4% -0.4 0.35 ± 9% perf-profile.self.cycles-pp.fput_many
1.01 ± 4% -0.3 0.69 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.45 ± 4% -0.3 0.14 ± 19% perf-profile.self.cycles-pp.inode_has_perm
0.54 ± 8% -0.3 0.23 ± 9% perf-profile.self.cycles-pp.file_update_time
0.41 ± 10% -0.3 0.11 ± 20% perf-profile.self.cycles-pp.iov_iter_init
0.49 ± 7% -0.3 0.19 ± 8% perf-profile.self.cycles-pp.atime_needs_update
0.44 ± 7% -0.3 0.15 ± 14% perf-profile.self.cycles-pp.ksys_read
0.44 ± 7% -0.3 0.17 ± 9% perf-profile.self.cycles-pp.__inode_security_revalidate
0.41 ± 2% -0.3 0.15 ± 8% perf-profile.self.cycles-pp.rcu_all_qs
0.43 ± 8% -0.3 0.17 ± 11% perf-profile.self.cycles-pp.ksys_write
0.37 ± 9% -0.3 0.11 ± 17% perf-profile.self.cycles-pp.timespec64_trunc
0.42 ± 14% -0.2 0.18 ± 8% perf-profile.self.cycles-pp.__sb_start_write
0.39 ± 7% -0.2 0.17 ± 11% perf-profile.self.cycles-pp.__x64_sys_write
0.40 ± 8% -0.2 0.17 ± 8% perf-profile.self.cycles-pp.__x64_sys_read
0.41 ± 7% -0.2 0.20 ± 11% perf-profile.self.cycles-pp.__wake_up_common_lock
0.60 ± 4% -0.2 0.39 ± 7% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
1.25 -0.2 1.04 ± 2% perf-profile.self.cycles-pp.mutex_lock
0.30 ± 6% -0.2 0.12 ± 15% perf-profile.self.cycles-pp.__sb_end_write
0.35 ± 5% -0.2 0.18 ± 3% perf-profile.self.cycles-pp.rw_verify_area
0.34 ± 7% -0.2 0.18 ± 8% perf-profile.self.cycles-pp.touch_atime
0.32 ± 4% -0.2 0.15 ± 14% perf-profile.self.cycles-pp.generic_pipe_buf_confirm
0.23 ± 16% -0.2 0.08 ± 11% perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
0.74 ± 3% -0.2 0.59 ± 5% perf-profile.self.cycles-pp.vfs_read
0.60 ± 3% -0.2 0.45 ± 4% perf-profile.self.cycles-pp.__fget_light
0.27 ± 8% -0.1 0.12 ± 15% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.18 ± 45% -0.1 0.04 ± 58% perf-profile.self.cycles-pp.__vfs_read
0.18 ± 5% -0.1 0.06 ± 17% perf-profile.self.cycles-pp.__fdget_pos
0.25 ± 5% -0.1 0.14 ± 11% perf-profile.self.cycles-pp.__might_fault
0.07 ± 7% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.kill_fasync
0.01 ±173% +0.1 0.06 ± 17% perf-profile.self.cycles-pp.sched_ttwu_pending
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.scheduler_ipi
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.rcu_note_context_switch
0.01 ±173% +0.1 0.07 ± 12% perf-profile.self.cycles-pp.generic_update_time
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.cpumask_next
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.ttwu_do_activate
0.00 +0.1 0.06 perf-profile.self.cycles-pp.__mnt_want_write
0.06 ± 14% +0.1 0.12 ± 10% perf-profile.self.cycles-pp.__mark_inode_dirty
0.00 +0.1 0.06 ± 13% perf-profile.self.cycles-pp.is_cpu_allowed
0.00 +0.1 0.07 ± 13% perf-profile.self.cycles-pp.sched_clock_cpu
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.sched_clock
0.03 ±100% +0.1 0.10 ± 11% perf-profile.self.cycles-pp.native_apic_msr_eoi_write
0.00 +0.1 0.07 ± 14% perf-profile.self.cycles-pp.update_cfs_rq_h_load
0.00 +0.1 0.08 perf-profile.self.cycles-pp.native_load_tls
0.05 ± 60% +0.1 0.13 ± 5% perf-profile.self.cycles-pp.native_irq_return_iret
0.00 +0.1 0.09 ± 9% perf-profile.self.cycles-pp.ttwu_do_wakeup
0.00 +0.1 0.09 ± 11% perf-profile.self.cycles-pp.rb_next
0.00 +0.1 0.10 ± 8% perf-profile.self.cycles-pp.__list_add_valid
0.00 +0.1 0.10 ± 10% perf-profile.self.cycles-pp.rb_insert_color
0.00 +0.1 0.11 ± 9% perf-profile.self.cycles-pp.resched_curr
0.00 +0.1 0.11 ± 7% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.32 ± 7% +0.1 0.45 ± 5% perf-profile.self.cycles-pp.__wake_up_common
0.13 ± 10% +0.1 0.25 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.00 +0.1 0.13 ± 5% perf-profile.self.cycles-pp.autoremove_wake_function
0.00 +0.1 0.13 ± 8% perf-profile.self.cycles-pp.check_preempt_curr
0.04 ± 58% +0.1 0.18 ± 10% perf-profile.self.cycles-pp.finish_wait
0.00 +0.1 0.14 ± 5% perf-profile.self.cycles-pp.set_next_buddy
0.00 +0.1 0.15 ± 3% perf-profile.self.cycles-pp.put_prev_entity
0.00 +0.2 0.15 ± 5% perf-profile.self.cycles-pp.deactivate_task
0.05 ± 9% +0.2 0.21 ± 5% perf-profile.self.cycles-pp.rb_erase
0.00 +0.2 0.15 ± 10% perf-profile.self.cycles-pp.exit_to_usermode_loop
0.00 +0.2 0.18 ± 6% perf-profile.self.cycles-pp.wakeup_preempt_entity
0.15 ± 7% +0.2 0.34 ± 2% perf-profile.self.cycles-pp.anon_pipe_buf_release
0.03 ±100% +0.2 0.21 ± 8% perf-profile.self.cycles-pp.pipe_wait
0.03 ±102% +0.2 0.24 ± 3% perf-profile.self.cycles-pp.activate_task
0.01 ±173% +0.2 0.23 ± 3% perf-profile.self.cycles-pp.prepare_to_wait
0.03 ±100% +0.2 0.25 ± 4% perf-profile.self.cycles-pp.set_next_entity
0.02 ±173% +0.2 0.26 ± 9% perf-profile.self.cycles-pp.dequeue_entity
0.04 ±100% +0.3 0.29 ± 5% perf-profile.self.cycles-pp.enqueue_entity
0.08 ± 31% +0.3 0.34 ± 2% perf-profile.self.cycles-pp.schedule
0.78 ± 3% +0.3 1.09 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.09 ± 24% +0.3 0.42 ± 7% perf-profile.self.cycles-pp.finish_task_switch
0.03 ±100% +0.3 0.37 ± 6% perf-profile.self.cycles-pp.update_min_vruntime
0.03 ±102% +0.4 0.38 perf-profile.self.cycles-pp.clear_buddies
0.03 ±100% +0.4 0.39 ± 12% perf-profile.self.cycles-pp.cpus_share_cache
0.07 ± 22% +0.4 0.44 ± 7% perf-profile.self.cycles-pp.check_preempt_wakeup
0.09 ± 27% +0.4 0.47 ± 4% perf-profile.self.cycles-pp.native_sched_clock
0.07 ± 26% +0.4 0.45 ± 4% perf-profile.self.cycles-pp.pick_next_entity
0.11 ± 15% +0.4 0.50 ± 3% perf-profile.self.cycles-pp.cpuacct_charge
0.07 ± 14% +0.5 0.53 ± 8% perf-profile.self.cycles-pp.account_entity_enqueue
0.10 ± 21% +0.5 0.56 ± 8% perf-profile.self.cycles-pp.try_to_wake_up
0.10 ± 18% +0.5 0.62 ± 6% perf-profile.self.cycles-pp.dequeue_task_fair
0.14 ± 26% +0.6 0.75 ± 4% perf-profile.self.cycles-pp.__calc_delta
0.06 ± 13% +0.6 0.68 ± 4% perf-profile.self.cycles-pp.account_entity_dequeue
0.15 ± 27% +0.6 0.79 ± 6% perf-profile.self.cycles-pp.enqueue_task_fair
0.17 ± 30% +0.7 0.89 ± 4% perf-profile.self.cycles-pp.update_load_avg
0.15 ± 20% +0.7 0.88 ± 4% perf-profile.self.cycles-pp.reweight_entity
0.15 ± 23% +0.8 0.96 ± 3% perf-profile.self.cycles-pp.pick_next_task_fair
0.24 ± 30% +0.9 1.11 perf-profile.self.cycles-pp.native_write_msr
0.20 ± 23% +0.9 1.07 ± 5% perf-profile.self.cycles-pp.select_task_rq_fair
0.20 ± 22% +0.9 1.07 ± 3% perf-profile.self.cycles-pp.find_next_bit
0.20 ± 31% +0.9 1.11 ± 3% perf-profile.self.cycles-pp.update_cfs_group
0.20 ± 28% +0.9 1.14 ± 2% perf-profile.self.cycles-pp.__switch_to
0.14 ± 37% +1.0 1.13 ± 5% perf-profile.self.cycles-pp.___perf_sw_event
0.21 ± 24% +1.0 1.22 ± 2% perf-profile.self.cycles-pp.__switch_to_asm
0.24 ± 23% +1.1 1.31 ± 5% perf-profile.self.cycles-pp.cpumask_next_wrap
0.23 ± 32% +1.3 1.49 ± 5% perf-profile.self.cycles-pp.update_rq_clock
0.19 ± 28% +1.3 1.45 ± 6% perf-profile.self.cycles-pp.update_curr
0.23 ± 23% +1.3 1.55 ± 6% perf-profile.self.cycles-pp.__update_load_avg_se
0.19 ± 10% +1.3 1.51 ± 10% perf-profile.self.cycles-pp.__enqueue_entity
0.28 ± 22% +1.3 1.63 ± 3% perf-profile.self.cycles-pp.switch_fpu_return
0.11 ± 19% +1.6 1.72 ± 5% perf-profile.self.cycles-pp._raw_spin_lock
0.42 ± 26% +1.7 2.12 ± 2% perf-profile.self.cycles-pp.__schedule
0.19 ± 20% +2.0 2.20 ± 4% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.91 ± 29% +2.2 3.11 ± 12% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.32 ± 19% +7.7 8.05 ± 7% perf-profile.self.cycles-pp.select_idle_sibling
1.44 ± 25% +8.8 10.23 ± 7% perf-profile.self.cycles-pp.available_idle_cpu
439.00 +11.7% 490.50 interrupts.100:IR-PCI-MSI.1572923-edge.eth0-TxRx-59
436.25 +13.1% 493.25 interrupts.101:IR-PCI-MSI.1572924-edge.eth0-TxRx-60
436.25 +12.4% 490.50 interrupts.102:IR-PCI-MSI.1572925-edge.eth0-TxRx-61
436.25 +12.4% 490.50 interrupts.103:IR-PCI-MSI.1572926-edge.eth0-TxRx-62
509.75 ± 3% +545.1% 3288 ±120% interrupts.42:IR-PCI-MSI.1572867-edge.eth0-TxRx-3
474.00 ± 3% +49.7% 709.75 ± 24% interrupts.46:IR-PCI-MSI.1572871-edge.eth0-TxRx-7
588.00 ± 35% +196.0% 1740 ± 50% interrupts.51:IR-PCI-MSI.1572876-edge.eth0-TxRx-12
436.75 ± 2% +12.8% 492.50 interrupts.55:IR-PCI-MSI.1572880-edge.eth0-TxRx-16
438.00 +39.5% 611.00 ± 33% interrupts.57:IR-PCI-MSI.1572882-edge.eth0-TxRx-18
436.25 +13.8% 496.25 interrupts.58:IR-PCI-MSI.1572883-edge.eth0-TxRx-19
436.25 +14.3% 498.50 interrupts.59:IR-PCI-MSI.1572884-edge.eth0-TxRx-20
437.00 +12.2% 490.50 interrupts.60:IR-PCI-MSI.1572885-edge.eth0-TxRx-21
436.25 +12.4% 490.50 interrupts.61:IR-PCI-MSI.1572886-edge.eth0-TxRx-22
436.25 +12.4% 490.50 interrupts.62:IR-PCI-MSI.1572887-edge.eth0-TxRx-23
441.00 +11.8% 493.00 interrupts.63:IR-PCI-MSI.1572888-edge.eth0-TxRx-24
437.25 +13.3% 495.25 interrupts.64:IR-PCI-MSI.1572889-edge.eth0-TxRx-25
436.25 +12.4% 490.50 interrupts.65:IR-PCI-MSI.1572890-edge.eth0-TxRx-26
436.25 +12.4% 490.50 interrupts.68:IR-PCI-MSI.1572893-edge.eth0-TxRx-29
436.25 +12.4% 490.50 interrupts.69:IR-PCI-MSI.1572894-edge.eth0-TxRx-30
436.25 +12.4% 490.50 interrupts.71:IR-PCI-MSI.1572896-edge.eth0-TxRx-32
437.25 ± 2% +12.2% 490.50 interrupts.74:IR-PCI-MSI.1572897-edge.eth0-TxRx-33
436.25 +12.6% 491.25 interrupts.75:IR-PCI-MSI.1572898-edge.eth0-TxRx-34
440.75 +12.4% 495.25 ± 2% interrupts.76:IR-PCI-MSI.1572899-edge.eth0-TxRx-35
436.25 +12.6% 491.00 interrupts.77:IR-PCI-MSI.1572900-edge.eth0-TxRx-36
436.75 +13.1% 493.75 interrupts.78:IR-PCI-MSI.1572901-edge.eth0-TxRx-37
436.25 +12.8% 492.25 interrupts.79:IR-PCI-MSI.1572902-edge.eth0-TxRx-38
436.25 +12.8% 492.25 interrupts.80:IR-PCI-MSI.1572903-edge.eth0-TxRx-39
436.25 +12.4% 490.50 interrupts.81:IR-PCI-MSI.1572904-edge.eth0-TxRx-40
436.25 +12.4% 490.50 interrupts.82:IR-PCI-MSI.1572905-edge.eth0-TxRx-41
436.25 +12.4% 490.50 interrupts.83:IR-PCI-MSI.1572906-edge.eth0-TxRx-42
436.25 +14.6% 499.75 ± 2% interrupts.84:IR-PCI-MSI.1572907-edge.eth0-TxRx-43
436.25 +12.4% 490.50 interrupts.85:IR-PCI-MSI.1572908-edge.eth0-TxRx-44
436.50 +12.4% 490.50 interrupts.86:IR-PCI-MSI.1572909-edge.eth0-TxRx-45
436.25 +12.4% 490.50 interrupts.87:IR-PCI-MSI.1572910-edge.eth0-TxRx-46
436.25 +12.4% 490.50 interrupts.88:IR-PCI-MSI.1572911-edge.eth0-TxRx-47
436.25 +12.6% 491.00 interrupts.89:IR-PCI-MSI.1572912-edge.eth0-TxRx-48
441.25 ± 3% +14.1% 503.25 ± 3% interrupts.90:IR-PCI-MSI.1572913-edge.eth0-TxRx-49
436.25 +12.4% 490.50 interrupts.91:IR-PCI-MSI.1572914-edge.eth0-TxRx-50
436.25 +14.4% 499.25 ± 3% interrupts.92:IR-PCI-MSI.1572915-edge.eth0-TxRx-51
439.25 ± 2% +11.7% 490.50 interrupts.94:IR-PCI-MSI.1572917-edge.eth0-TxRx-53
437.75 +12.1% 490.50 interrupts.95:IR-PCI-MSI.1572918-edge.eth0-TxRx-54
436.50 +12.4% 490.50 interrupts.96:IR-PCI-MSI.1572919-edge.eth0-TxRx-55
436.25 +12.4% 490.50 interrupts.97:IR-PCI-MSI.1572920-edge.eth0-TxRx-56
439.00 ± 2% +11.7% 490.50 interrupts.98:IR-PCI-MSI.1572921-edge.eth0-TxRx-57
442.25 ± 3% +10.9% 490.50 interrupts.99:IR-PCI-MSI.1572922-edge.eth0-TxRx-58
1989 +12.1% 2230 interrupts.9:IR-IO-APIC.9-fasteoi.acpi
6596036 ± 10% -21.1% 5206443 ± 6% interrupts.CAL:Function_call_interrupts
86097 ± 13% -18.4% 70229 ± 7% interrupts.CPU0.CAL:Function_call_interrupts
1785369 +12.4% 2006294 interrupts.CPU0.LOC:Local_timer_interrupts
102823 ± 12% -18.3% 84031 ± 5% interrupts.CPU0.TLB:TLB_shootdowns
388425 ± 12% +27.8% 496548 ± 4% interrupts.CPU0.TRM:Thermal_event_interrupts
1989 +12.1% 2230 interrupts.CPU1.9:IR-IO-APIC.9-fasteoi.acpi
90370 ± 8% -21.4% 70995 ± 6% interrupts.CPU1.CAL:Function_call_interrupts
1784834 +12.4% 2006046 interrupts.CPU1.LOC:Local_timer_interrupts
3088391 ± 15% -20.9% 2441882 ± 7% interrupts.CPU1.RES:Rescheduling_interrupts
105391 ± 8% -18.6% 85793 ± 6% interrupts.CPU1.TLB:TLB_shootdowns
388373 ± 12% +27.9% 496558 ± 4% interrupts.CPU1.TRM:Thermal_event_interrupts
88236 ± 11% -19.5% 71001 ± 7% interrupts.CPU10.CAL:Function_call_interrupts
1785471 +12.3% 2004740 interrupts.CPU10.LOC:Local_timer_interrupts
3040774 ± 4% -21.1% 2398396 ± 8% interrupts.CPU10.RES:Rescheduling_interrupts
104483 ± 10% -18.6% 85025 ± 5% interrupts.CPU10.TLB:TLB_shootdowns
388417 ± 12% +27.8% 496560 ± 4% interrupts.CPU10.TRM:Thermal_event_interrupts
1785039 +12.3% 2005274 interrupts.CPU11.LOC:Local_timer_interrupts
925.75 ±173% +461.3% 5196 ± 58% interrupts.CPU11.NMI:Non-maskable_interrupts
925.75 ±173% +461.3% 5196 ± 58% interrupts.CPU11.PMI:Performance_monitoring_interrupts
3094193 ± 14% -21.5% 2430228 ± 8% interrupts.CPU11.RES:Rescheduling_interrupts
102398 ± 10% -16.1% 85884 ± 5% interrupts.CPU11.TLB:TLB_shootdowns
388413 ± 12% +27.8% 496474 ± 4% interrupts.CPU11.TRM:Thermal_event_interrupts
588.00 ± 35% +196.0% 1740 ± 50% interrupts.CPU12.51:IR-PCI-MSI.1572876-edge.eth0-TxRx-12
87438 ± 9% -18.1% 71580 ± 7% interrupts.CPU12.CAL:Function_call_interrupts
1785376 +12.3% 2005068 interrupts.CPU12.LOC:Local_timer_interrupts
103057 ± 10% -16.4% 86186 ± 6% interrupts.CPU12.TLB:TLB_shootdowns
388416 ± 12% +27.8% 496490 ± 4% interrupts.CPU12.TRM:Thermal_event_interrupts
88735 ± 9% -19.4% 71496 ± 9% interrupts.CPU13.CAL:Function_call_interrupts
1785460 +12.3% 2005445 interrupts.CPU13.LOC:Local_timer_interrupts
104230 ± 9% -17.5% 86038 ± 6% interrupts.CPU13.TLB:TLB_shootdowns
388422 ± 12% +27.8% 496563 ± 4% interrupts.CPU13.TRM:Thermal_event_interrupts
88327 ± 12% -19.0% 71507 ± 7% interrupts.CPU14.CAL:Function_call_interrupts
1785094 +12.3% 2005441 interrupts.CPU14.LOC:Local_timer_interrupts
103317 ± 11% -18.1% 84581 ± 6% interrupts.CPU14.TLB:TLB_shootdowns
388412 ± 12% +27.8% 496564 ± 4% interrupts.CPU14.TRM:Thermal_event_interrupts
86972 ± 10% -17.3% 71898 ± 7% interrupts.CPU15.CAL:Function_call_interrupts
1784668 +12.4% 2005375 interrupts.CPU15.LOC:Local_timer_interrupts
3039799 ± 9% -21.1% 2397505 ± 9% interrupts.CPU15.RES:Rescheduling_interrupts
103367 ± 12% -16.9% 85892 ± 6% interrupts.CPU15.TLB:TLB_shootdowns
388342 ± 12% +27.9% 496559 ± 4% interrupts.CPU15.TRM:Thermal_event_interrupts
436.75 ± 2% +12.8% 492.50 interrupts.CPU16.55:IR-PCI-MSI.1572880-edge.eth0-TxRx-16
1784750 +12.4% 2005330 interrupts.CPU16.LOC:Local_timer_interrupts
55.25 ±171% +7696.8% 4307 ± 64% interrupts.CPU16.NMI:Non-maskable_interrupts
55.25 ±171% +7696.8% 4307 ± 64% interrupts.CPU16.PMI:Performance_monitoring_interrupts
2969996 ± 10% -18.7% 2413141 ± 9% interrupts.CPU16.RES:Rescheduling_interrupts
102367 ± 11% -13.7% 88305 ± 3% interrupts.CPU16.TLB:TLB_shootdowns
388423 ± 12% +27.8% 496563 ± 4% interrupts.CPU16.TRM:Thermal_event_interrupts
1785363 +12.3% 2005849 interrupts.CPU17.LOC:Local_timer_interrupts
388418 ± 12% +27.8% 496560 ± 4% interrupts.CPU17.TRM:Thermal_event_interrupts
438.00 +39.5% 611.00 ± 33% interrupts.CPU18.57:IR-PCI-MSI.1572882-edge.eth0-TxRx-18
88992 ± 12% -22.3% 69145 ± 6% interrupts.CPU18.CAL:Function_call_interrupts
1785398 +12.1% 2001316 interrupts.CPU18.LOC:Local_timer_interrupts
2199105 ± 7% +115.2% 4731679 ± 7% interrupts.CPU18.RES:Rescheduling_interrupts
108465 ± 8% -23.0% 83467 ± 5% interrupts.CPU18.TLB:TLB_shootdowns
329294 ± 3% +15.4% 380131 ± 2% interrupts.CPU18.TRM:Thermal_event_interrupts
436.25 +13.8% 496.25 interrupts.CPU19.58:IR-PCI-MSI.1572883-edge.eth0-TxRx-19
92059 ± 6% -24.2% 69776 ± 5% interrupts.CPU19.CAL:Function_call_interrupts
1784665 +12.2% 2001579 interrupts.CPU19.LOC:Local_timer_interrupts
1284622 ± 7% +225.8% 4185208 ± 4% interrupts.CPU19.RES:Rescheduling_interrupts
108492 ± 7% -23.4% 83059 ± 4% interrupts.CPU19.TLB:TLB_shootdowns
329166 ± 3% +15.5% 380141 ± 2% interrupts.CPU19.TRM:Thermal_event_interrupts
1784512 +12.4% 2005572 interrupts.CPU2.LOC:Local_timer_interrupts
3091132 ± 8% -22.3% 2401342 ± 4% interrupts.CPU2.RES:Rescheduling_interrupts
102562 ± 9% -16.5% 85643 ± 5% interrupts.CPU2.TLB:TLB_shootdowns
388380 ± 12% +27.8% 496544 ± 4% interrupts.CPU2.TRM:Thermal_event_interrupts
436.25 +14.3% 498.50 interrupts.CPU20.59:IR-PCI-MSI.1572884-edge.eth0-TxRx-20
91785 ± 9% -23.9% 69853 ± 4% interrupts.CPU20.CAL:Function_call_interrupts
1785284 +12.1% 2001952 interrupts.CPU20.LOC:Local_timer_interrupts
1281621 ± 12% +203.6% 3890516 ± 7% interrupts.CPU20.RES:Rescheduling_interrupts
106297 ± 9% -19.9% 85180 ± 3% interrupts.CPU20.TLB:TLB_shootdowns
329293 ± 3% +15.4% 380131 ± 2% interrupts.CPU20.TRM:Thermal_event_interrupts
437.00 +12.2% 490.50 interrupts.CPU21.60:IR-PCI-MSI.1572885-edge.eth0-TxRx-21
92635 ± 7% -23.9% 70512 ± 6% interrupts.CPU21.CAL:Function_call_interrupts
1785480 +12.1% 2001723 interrupts.CPU21.LOC:Local_timer_interrupts
1595447 ± 6% +187.8% 4591004 interrupts.CPU21.RES:Rescheduling_interrupts
107575 ± 7% -21.6% 84296 ± 5% interrupts.CPU21.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380086 ± 2% interrupts.CPU21.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU22.61:IR-PCI-MSI.1572886-edge.eth0-TxRx-22
92674 ± 10% -24.4% 70047 ± 6% interrupts.CPU22.CAL:Function_call_interrupts
1784881 +12.1% 2001482 interrupts.CPU22.LOC:Local_timer_interrupts
1605143 ± 6% +179.5% 4486945 ± 5% interrupts.CPU22.RES:Rescheduling_interrupts
108749 ± 10% -23.3% 83409 ± 5% interrupts.CPU22.TLB:TLB_shootdowns
329295 ± 3% +15.4% 380136 ± 2% interrupts.CPU22.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU23.62:IR-PCI-MSI.1572887-edge.eth0-TxRx-23
90975 ± 11% -23.6% 69492 ± 5% interrupts.CPU23.CAL:Function_call_interrupts
1785236 +12.1% 2001811 interrupts.CPU23.LOC:Local_timer_interrupts
1600988 ± 2% +180.2% 4486667 ± 5% interrupts.CPU23.RES:Rescheduling_interrupts
106819 ± 10% -22.3% 83007 ± 4% interrupts.CPU23.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380146 ± 2% interrupts.CPU23.TRM:Thermal_event_interrupts
441.00 +11.8% 493.00 interrupts.CPU24.63:IR-PCI-MSI.1572888-edge.eth0-TxRx-24
91919 ± 11% -23.1% 70689 ± 5% interrupts.CPU24.CAL:Function_call_interrupts
1784996 +12.1% 2000922 interrupts.CPU24.LOC:Local_timer_interrupts
1612237 ± 5% +188.4% 4649702 ± 2% interrupts.CPU24.RES:Rescheduling_interrupts
108187 ± 8% -22.5% 83831 ± 3% interrupts.CPU24.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380122 ± 2% interrupts.CPU24.TRM:Thermal_event_interrupts
437.25 +13.3% 495.25 interrupts.CPU25.64:IR-PCI-MSI.1572889-edge.eth0-TxRx-25
88417 ± 12% -20.8% 70029 ± 4% interrupts.CPU25.CAL:Function_call_interrupts
1785404 +12.0% 1999622 interrupts.CPU25.LOC:Local_timer_interrupts
1612074 ± 2% +182.4% 4551699 interrupts.CPU25.RES:Rescheduling_interrupts
105324 ± 10% -20.1% 84201 ± 4% interrupts.CPU25.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380141 ± 2% interrupts.CPU25.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU26.65:IR-PCI-MSI.1572890-edge.eth0-TxRx-26
92632 ± 12% -23.5% 70907 ± 5% interrupts.CPU26.CAL:Function_call_interrupts
1784730 +12.1% 2000264 interrupts.CPU26.LOC:Local_timer_interrupts
1639148 ± 6% +188.0% 4720149 ± 4% interrupts.CPU26.RES:Rescheduling_interrupts
108099 ± 11% -21.6% 84727 ± 4% interrupts.CPU26.TLB:TLB_shootdowns
329271 ± 3% +15.4% 379984 ± 2% interrupts.CPU26.TRM:Thermal_event_interrupts
91312 ± 12% -23.4% 69963 ± 6% interrupts.CPU27.CAL:Function_call_interrupts
1785171 +12.1% 2001743 interrupts.CPU27.LOC:Local_timer_interrupts
1704847 ± 11% +179.4% 4764019 ± 4% interrupts.CPU27.RES:Rescheduling_interrupts
106515 ± 11% -21.6% 83491 ± 5% interrupts.CPU27.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380080 ± 2% interrupts.CPU27.TRM:Thermal_event_interrupts
89178 ± 8% -22.0% 69565 ± 6% interrupts.CPU28.CAL:Function_call_interrupts
1785194 +12.1% 2001836 interrupts.CPU28.LOC:Local_timer_interrupts
1593025 ± 8% +185.7% 4551782 ± 5% interrupts.CPU28.RES:Rescheduling_interrupts
104408 ± 10% -19.2% 84365 ± 6% interrupts.CPU28.TLB:TLB_shootdowns
329294 ± 3% +15.4% 380142 ± 2% interrupts.CPU28.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU29.68:IR-PCI-MSI.1572893-edge.eth0-TxRx-29
90850 ± 12% -21.9% 70993 ± 6% interrupts.CPU29.CAL:Function_call_interrupts
1785398 +12.1% 2000545 interrupts.CPU29.LOC:Local_timer_interrupts
1657508 ± 5% +179.4% 4631109 ± 3% interrupts.CPU29.RES:Rescheduling_interrupts
106383 ± 13% -20.8% 84284 ± 4% interrupts.CPU29.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380133 ± 2% interrupts.CPU29.TRM:Thermal_event_interrupts
509.75 ± 3% +545.1% 3288 ±120% interrupts.CPU3.42:IR-PCI-MSI.1572867-edge.eth0-TxRx-3
87008 ± 11% -19.9% 69730 ± 5% interrupts.CPU3.CAL:Function_call_interrupts
1784446 +12.4% 2005368 interrupts.CPU3.LOC:Local_timer_interrupts
3169674 ± 9% -20.4% 2521963 ± 8% interrupts.CPU3.RES:Rescheduling_interrupts
103703 ± 12% -17.1% 85958 ± 7% interrupts.CPU3.TLB:TLB_shootdowns
388356 ± 12% +27.9% 496551 ± 4% interrupts.CPU3.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU30.69:IR-PCI-MSI.1572894-edge.eth0-TxRx-30
92043 ± 12% -24.2% 69765 ± 5% interrupts.CPU30.CAL:Function_call_interrupts
1785175 +12.1% 2001248 interrupts.CPU30.LOC:Local_timer_interrupts
1637465 ± 6% +191.3% 4769701 ± 3% interrupts.CPU30.RES:Rescheduling_interrupts
106836 ± 11% -22.5% 82810 ± 5% interrupts.CPU30.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380130 ± 2% interrupts.CPU30.TRM:Thermal_event_interrupts
1785301 +12.0% 1999270 interrupts.CPU31.LOC:Local_timer_interrupts
62.75 ±173% +4261.4% 2736 ±113% interrupts.CPU31.NMI:Non-maskable_interrupts
62.75 ±173% +4261.4% 2736 ±113% interrupts.CPU31.PMI:Performance_monitoring_interrupts
1589500 ± 5% +184.1% 4515325 interrupts.CPU31.RES:Rescheduling_interrupts
106576 ± 11% -21.5% 83698 ± 5% interrupts.CPU31.TLB:TLB_shootdowns
329293 ± 3% +15.4% 380131 ± 2% interrupts.CPU31.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU32.71:IR-PCI-MSI.1572896-edge.eth0-TxRx-32
93348 ± 12% -24.4% 70595 ± 8% interrupts.CPU32.CAL:Function_call_interrupts
1785245 +12.1% 2001145 interrupts.CPU32.LOC:Local_timer_interrupts
1626883 ± 6% +176.3% 4494456 ± 5% interrupts.CPU32.RES:Rescheduling_interrupts
107151 ± 12% -21.4% 84236 ± 5% interrupts.CPU32.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380131 ± 2% interrupts.CPU32.TRM:Thermal_event_interrupts
437.25 ± 2% +12.2% 490.50 interrupts.CPU33.74:IR-PCI-MSI.1572897-edge.eth0-TxRx-33
91487 ± 12% -22.7% 70758 ± 3% interrupts.CPU33.CAL:Function_call_interrupts
1785371 +12.0% 1999987 interrupts.CPU33.LOC:Local_timer_interrupts
1578761 ± 6% +201.2% 4754762 ± 6% interrupts.CPU33.RES:Rescheduling_interrupts
106071 ± 12% -21.3% 83430 ± 2% interrupts.CPU33.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380130 ± 2% interrupts.CPU33.TRM:Thermal_event_interrupts
436.25 +12.6% 491.25 interrupts.CPU34.75:IR-PCI-MSI.1572898-edge.eth0-TxRx-34
91116 ± 11% -24.3% 68943 ± 7% interrupts.CPU34.CAL:Function_call_interrupts
1785353 +12.1% 2001525 interrupts.CPU34.LOC:Local_timer_interrupts
1553828 ± 4% +186.0% 4444035 ± 3% interrupts.CPU34.RES:Rescheduling_interrupts
105575 ± 10% -20.7% 83762 ± 4% interrupts.CPU34.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380139 ± 2% interrupts.CPU34.TRM:Thermal_event_interrupts
440.75 +12.4% 495.25 ± 2% interrupts.CPU35.76:IR-PCI-MSI.1572899-edge.eth0-TxRx-35
91396 ± 11% -25.6% 67986 ± 11% interrupts.CPU35.CAL:Function_call_interrupts
1785237 +12.0% 2000288 interrupts.CPU35.LOC:Local_timer_interrupts
1602616 ± 10% +189.7% 4642115 ± 3% interrupts.CPU35.RES:Rescheduling_interrupts
107669 ± 12% -21.3% 84736 ± 4% interrupts.CPU35.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380145 ± 2% interrupts.CPU35.TRM:Thermal_event_interrupts
436.25 +12.6% 491.00 interrupts.CPU36.77:IR-PCI-MSI.1572900-edge.eth0-TxRx-36
93553 ± 8% -19.8% 75044 ± 6% interrupts.CPU36.CAL:Function_call_interrupts
1784636 +12.4% 2005389 interrupts.CPU36.LOC:Local_timer_interrupts
2588984 ± 6% +25.8% 3256348 ± 6% interrupts.CPU36.RES:Rescheduling_interrupts
107053 ± 7% -19.0% 86754 ± 5% interrupts.CPU36.TLB:TLB_shootdowns
388423 ± 12% +27.8% 496560 ± 4% interrupts.CPU36.TRM:Thermal_event_interrupts
436.75 +13.1% 493.75 interrupts.CPU37.78:IR-PCI-MSI.1572901-edge.eth0-TxRx-37
89518 ± 10% -18.9% 72619 ± 5% interrupts.CPU37.CAL:Function_call_interrupts
1784648 +12.4% 2005651 interrupts.CPU37.LOC:Local_timer_interrupts
3112476 ± 10% -20.8% 2464103 ± 5% interrupts.CPU37.RES:Rescheduling_interrupts
102576 ± 10% -14.4% 87794 ± 6% interrupts.CPU37.TLB:TLB_shootdowns
388412 ± 12% +27.8% 496563 ± 4% interrupts.CPU37.TRM:Thermal_event_interrupts
436.25 +12.8% 492.25 interrupts.CPU38.79:IR-PCI-MSI.1572902-edge.eth0-TxRx-38
92566 ± 10% -20.6% 73466 ± 6% interrupts.CPU38.CAL:Function_call_interrupts
1785132 +12.4% 2005883 interrupts.CPU38.LOC:Local_timer_interrupts
3163520 ± 10% -26.0% 2340820 ± 11% interrupts.CPU38.RES:Rescheduling_interrupts
105823 ± 10% -18.0% 86785 ± 4% interrupts.CPU38.TLB:TLB_shootdowns
388401 ± 12% +27.8% 496554 ± 4% interrupts.CPU38.TRM:Thermal_event_interrupts
436.25 +12.8% 492.25 interrupts.CPU39.80:IR-PCI-MSI.1572903-edge.eth0-TxRx-39
93197 ± 11% -21.8% 72854 ± 7% interrupts.CPU39.CAL:Function_call_interrupts
1784624 +12.4% 2005934 interrupts.CPU39.LOC:Local_timer_interrupts
105953 ± 11% -20.8% 83925 ± 6% interrupts.CPU39.TLB:TLB_shootdowns
388418 ± 12% +27.8% 496548 ± 4% interrupts.CPU39.TRM:Thermal_event_interrupts
86883 ± 13% -17.8% 71449 ± 7% interrupts.CPU4.CAL:Function_call_interrupts
1784433 +12.4% 2005462 interrupts.CPU4.LOC:Local_timer_interrupts
3119739 ± 9% -21.1% 2461192 ± 4% interrupts.CPU4.RES:Rescheduling_interrupts
104829 ± 11% -19.1% 84802 ± 5% interrupts.CPU4.TLB:TLB_shootdowns
388417 ± 12% +27.8% 496563 ± 4% interrupts.CPU4.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU40.81:IR-PCI-MSI.1572904-edge.eth0-TxRx-40
91881 ± 12% -21.4% 72188 ± 8% interrupts.CPU40.CAL:Function_call_interrupts
1784831 +12.4% 2005991 interrupts.CPU40.LOC:Local_timer_interrupts
3101288 ± 10% -17.2% 2569221 ± 7% interrupts.CPU40.RES:Rescheduling_interrupts
105884 ± 10% -19.0% 85772 ± 5% interrupts.CPU40.TLB:TLB_shootdowns
388415 ± 12% +27.8% 496564 ± 4% interrupts.CPU40.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU41.82:IR-PCI-MSI.1572905-edge.eth0-TxRx-41
91739 ± 12% -19.9% 73475 ± 7% interrupts.CPU41.CAL:Function_call_interrupts
1785231 +12.3% 2005398 interrupts.CPU41.LOC:Local_timer_interrupts
3223405 ± 11% -27.9% 2325498 ± 5% interrupts.CPU41.RES:Rescheduling_interrupts
104410 ± 11% -18.7% 84846 ± 6% interrupts.CPU41.TLB:TLB_shootdowns
388408 ± 12% +27.8% 496557 ± 4% interrupts.CPU41.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU42.83:IR-PCI-MSI.1572906-edge.eth0-TxRx-42
1785354 +12.4% 2006034 interrupts.CPU42.LOC:Local_timer_interrupts
3068304 ± 9% -22.1% 2390347 ± 6% interrupts.CPU42.RES:Rescheduling_interrupts
107102 ± 9% -20.7% 84890 ± 4% interrupts.CPU42.TLB:TLB_shootdowns
388423 ± 12% +27.8% 496564 ± 4% interrupts.CPU42.TRM:Thermal_event_interrupts
436.25 +14.6% 499.75 ± 2% interrupts.CPU43.84:IR-PCI-MSI.1572907-edge.eth0-TxRx-43
92178 ± 9% -18.3% 75306 ± 7% interrupts.CPU43.CAL:Function_call_interrupts
1784759 +12.4% 2005986 interrupts.CPU43.LOC:Local_timer_interrupts
6434 ± 7% -64.3% 2297 ±106% interrupts.CPU43.NMI:Non-maskable_interrupts
6434 ± 7% -64.3% 2297 ±106% interrupts.CPU43.PMI:Performance_monitoring_interrupts
104773 ± 9% -17.6% 86369 ± 6% interrupts.CPU43.TLB:TLB_shootdowns
388425 ± 12% +27.8% 496556 ± 4% interrupts.CPU43.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU44.85:IR-PCI-MSI.1572908-edge.eth0-TxRx-44
92387 ± 10% -21.9% 72174 ± 9% interrupts.CPU44.CAL:Function_call_interrupts
1784603 +12.4% 2006064 interrupts.CPU44.LOC:Local_timer_interrupts
106179 ± 9% -18.6% 86461 ± 6% interrupts.CPU44.TLB:TLB_shootdowns
388426 ± 12% +27.8% 496562 ± 4% interrupts.CPU44.TRM:Thermal_event_interrupts
436.50 +12.4% 490.50 interrupts.CPU45.86:IR-PCI-MSI.1572909-edge.eth0-TxRx-45
93641 ± 7% -21.9% 73155 ± 6% interrupts.CPU45.CAL:Function_call_interrupts
1784811 +12.4% 2005654 interrupts.CPU45.LOC:Local_timer_interrupts
3093195 ± 11% -15.6% 2609354 ± 4% interrupts.CPU45.RES:Rescheduling_interrupts
106463 ± 7% -19.6% 85593 ± 5% interrupts.CPU45.TLB:TLB_shootdowns
388408 ± 12% +27.8% 496564 ± 4% interrupts.CPU45.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU46.87:IR-PCI-MSI.1572910-edge.eth0-TxRx-46
91739 ± 10% -19.0% 74279 ± 8% interrupts.CPU46.CAL:Function_call_interrupts
1784609 +12.4% 2005522 interrupts.CPU46.LOC:Local_timer_interrupts
2970471 ± 9% -21.0% 2345439 ± 8% interrupts.CPU46.RES:Rescheduling_interrupts
108158 ± 10% -19.8% 86743 ± 4% interrupts.CPU46.TLB:TLB_shootdowns
388368 ± 12% +27.8% 496469 ± 4% interrupts.CPU46.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU47.88:IR-PCI-MSI.1572911-edge.eth0-TxRx-47
92856 ± 9% -20.0% 74327 ± 8% interrupts.CPU47.CAL:Function_call_interrupts
1784835 +12.4% 2005700 interrupts.CPU47.LOC:Local_timer_interrupts
3116242 ± 10% -20.4% 2480398 ± 5% interrupts.CPU47.RES:Rescheduling_interrupts
104927 ± 9% -18.6% 85393 ± 6% interrupts.CPU47.TLB:TLB_shootdowns
388403 ± 12% +27.8% 496554 ± 4% interrupts.CPU47.TRM:Thermal_event_interrupts
436.25 +12.6% 491.00 interrupts.CPU48.89:IR-PCI-MSI.1572912-edge.eth0-TxRx-48
92889 ± 11% -21.5% 72890 ± 8% interrupts.CPU48.CAL:Function_call_interrupts
1784772 +12.4% 2006205 interrupts.CPU48.LOC:Local_timer_interrupts
3175210 ± 12% -19.7% 2548339 ± 13% interrupts.CPU48.RES:Rescheduling_interrupts
105183 ± 11% -18.6% 85618 ± 6% interrupts.CPU48.TLB:TLB_shootdowns
388426 ± 12% +27.8% 496563 ± 4% interrupts.CPU48.TRM:Thermal_event_interrupts
441.25 ± 3% +14.1% 503.25 ± 3% interrupts.CPU49.90:IR-PCI-MSI.1572913-edge.eth0-TxRx-49
90134 ± 10% -18.3% 73603 ± 6% interrupts.CPU49.CAL:Function_call_interrupts
1785145 +12.4% 2005932 interrupts.CPU49.LOC:Local_timer_interrupts
3185609 ± 11% -17.6% 2626502 ± 4% interrupts.CPU49.RES:Rescheduling_interrupts
102737 ± 10% -17.3% 84932 ± 5% interrupts.CPU49.TLB:TLB_shootdowns
388425 ± 12% +27.8% 496560 ± 4% interrupts.CPU49.TRM:Thermal_event_interrupts
92114 ± 9% -23.2% 70721 ± 6% interrupts.CPU5.CAL:Function_call_interrupts
1785135 +12.3% 2005424 interrupts.CPU5.LOC:Local_timer_interrupts
3210646 ± 12% -24.8% 2415935 ± 4% interrupts.CPU5.RES:Rescheduling_interrupts
107349 ± 9% -18.9% 87034 ± 6% interrupts.CPU5.TLB:TLB_shootdowns
388418 ± 12% +27.8% 496564 ± 4% interrupts.CPU5.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU50.91:IR-PCI-MSI.1572914-edge.eth0-TxRx-50
92406 ± 10% -20.2% 73771 ± 8% interrupts.CPU50.CAL:Function_call_interrupts
1785200 +12.4% 2006488 interrupts.CPU50.LOC:Local_timer_interrupts
106635 ± 9% -19.9% 85431 ± 6% interrupts.CPU50.TLB:TLB_shootdowns
388419 ± 12% +27.8% 496564 ± 4% interrupts.CPU50.TRM:Thermal_event_interrupts
436.25 +14.4% 499.25 ± 3% interrupts.CPU51.92:IR-PCI-MSI.1572915-edge.eth0-TxRx-51
91491 ± 9% -20.8% 72473 ± 6% interrupts.CPU51.CAL:Function_call_interrupts
1784683 +12.4% 2005527 interrupts.CPU51.LOC:Local_timer_interrupts
3051494 ± 8% -24.6% 2300032 ± 10% interrupts.CPU51.RES:Rescheduling_interrupts
104954 ± 10% -20.2% 83775 ± 5% interrupts.CPU51.TLB:TLB_shootdowns
388264 ± 12% +27.9% 496462 ± 4% interrupts.CPU51.TRM:Thermal_event_interrupts
92938 ± 11% -19.7% 74594 ± 7% interrupts.CPU52.CAL:Function_call_interrupts
1785451 +12.4% 2006171 interrupts.CPU52.LOC:Local_timer_interrupts
3025681 ± 11% -17.1% 2506800 ± 10% interrupts.CPU52.RES:Rescheduling_interrupts
107100 ± 11% -19.6% 86156 ± 6% interrupts.CPU52.TLB:TLB_shootdowns
388422 ± 12% +27.8% 496563 ± 4% interrupts.CPU52.TRM:Thermal_event_interrupts
439.25 ± 2% +11.7% 490.50 interrupts.CPU53.94:IR-PCI-MSI.1572917-edge.eth0-TxRx-53
93000 ± 12% -20.9% 73549 ± 7% interrupts.CPU53.CAL:Function_call_interrupts
1785290 +12.3% 2005441 interrupts.CPU53.LOC:Local_timer_interrupts
3212623 ± 7% -18.2% 2629478 ± 10% interrupts.CPU53.RES:Rescheduling_interrupts
107303 ± 11% -17.5% 88567 ± 10% interrupts.CPU53.TLB:TLB_shootdowns
388411 ± 12% +27.8% 496557 ± 4% interrupts.CPU53.TRM:Thermal_event_interrupts
437.75 +12.1% 490.50 interrupts.CPU54.95:IR-PCI-MSI.1572918-edge.eth0-TxRx-54
1785170 +12.0% 2000090 interrupts.CPU54.LOC:Local_timer_interrupts
5351 ± 26% -64.5% 1900 ± 99% interrupts.CPU54.NMI:Non-maskable_interrupts
5351 ± 26% -64.5% 1900 ± 99% interrupts.CPU54.PMI:Performance_monitoring_interrupts
2066106 ± 9% +92.7% 3982033 ± 6% interrupts.CPU54.RES:Rescheduling_interrupts
108150 ± 12% -21.2% 85173 ± 5% interrupts.CPU54.TLB:TLB_shootdowns
329294 ± 3% +15.4% 380146 ± 2% interrupts.CPU54.TRM:Thermal_event_interrupts
436.50 +12.4% 490.50 interrupts.CPU55.96:IR-PCI-MSI.1572919-edge.eth0-TxRx-55
91164 ± 12% -19.9% 73014 ± 6% interrupts.CPU55.CAL:Function_call_interrupts
1785472 +12.0% 1999115 interrupts.CPU55.LOC:Local_timer_interrupts
1286654 ± 6% +228.2% 4223034 ± 5% interrupts.CPU55.RES:Rescheduling_interrupts
104943 ± 11% -19.5% 84466 ± 5% interrupts.CPU55.TLB:TLB_shootdowns
329292 ± 3% +15.4% 379918 ± 2% interrupts.CPU55.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU56.97:IR-PCI-MSI.1572920-edge.eth0-TxRx-56
96934 ± 13% -23.4% 74215 ± 6% interrupts.CPU56.CAL:Function_call_interrupts
1785010 +12.1% 2000169 interrupts.CPU56.LOC:Local_timer_interrupts
1312758 ± 17% +194.5% 3866522 ± 3% interrupts.CPU56.RES:Rescheduling_interrupts
109307 ± 12% -22.1% 85139 ± 4% interrupts.CPU56.TLB:TLB_shootdowns
329273 ± 3% +15.4% 380131 ± 2% interrupts.CPU56.TRM:Thermal_event_interrupts
439.00 ± 2% +11.7% 490.50 interrupts.CPU57.98:IR-PCI-MSI.1572921-edge.eth0-TxRx-57
94960 ± 11% -22.9% 73246 ± 5% interrupts.CPU57.CAL:Function_call_interrupts
1785297 +12.0% 2000019 interrupts.CPU57.LOC:Local_timer_interrupts
1666475 ± 10% +169.6% 4492937 interrupts.CPU57.RES:Rescheduling_interrupts
107534 ± 10% -22.0% 83871 ± 5% interrupts.CPU57.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380136 ± 2% interrupts.CPU57.TRM:Thermal_event_interrupts
442.25 ± 3% +10.9% 490.50 interrupts.CPU58.99:IR-PCI-MSI.1572922-edge.eth0-TxRx-58
92818 ± 12% -21.4% 72925 ± 5% interrupts.CPU58.CAL:Function_call_interrupts
1785429 +12.0% 1999528 interrupts.CPU58.LOC:Local_timer_interrupts
1606064 ± 5% +176.8% 4446179 ± 6% interrupts.CPU58.RES:Rescheduling_interrupts
103696 ± 11% -19.5% 83458 ± 4% interrupts.CPU58.TLB:TLB_shootdowns
329293 ± 3% +15.4% 380022 ± 2% interrupts.CPU58.TRM:Thermal_event_interrupts
439.00 +11.7% 490.50 interrupts.CPU59.100:IR-PCI-MSI.1572923-edge.eth0-TxRx-59
93876 ± 13% -20.5% 74669 ± 6% interrupts.CPU59.CAL:Function_call_interrupts
1785085 +12.0% 1999992 interrupts.CPU59.LOC:Local_timer_interrupts
1634812 ± 5% +174.5% 4487501 ± 4% interrupts.CPU59.RES:Rescheduling_interrupts
106929 ± 13% -21.0% 84522 ± 5% interrupts.CPU59.TLB:TLB_shootdowns
329291 ± 3% +15.4% 380111 ± 2% interrupts.CPU59.TRM:Thermal_event_interrupts
89930 ± 9% -20.5% 71520 ± 7% interrupts.CPU6.CAL:Function_call_interrupts
1785220 +12.3% 2005468 interrupts.CPU6.LOC:Local_timer_interrupts
106307 ± 7% -19.9% 85162 ± 6% interrupts.CPU6.TLB:TLB_shootdowns
388421 ± 12% +27.8% 496564 ± 4% interrupts.CPU6.TRM:Thermal_event_interrupts
436.25 +13.1% 493.25 interrupts.CPU60.101:IR-PCI-MSI.1572924-edge.eth0-TxRx-60
98595 ± 11% -25.6% 73379 ± 5% interrupts.CPU60.CAL:Function_call_interrupts
1785116 +12.1% 2000324 interrupts.CPU60.LOC:Local_timer_interrupts
1604314 ± 8% +186.8% 4601614 ± 5% interrupts.CPU60.RES:Rescheduling_interrupts
110442 ± 10% -24.0% 83932 ± 5% interrupts.CPU60.TLB:TLB_shootdowns
329256 ± 3% +15.5% 380140 ± 2% interrupts.CPU60.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU61.102:IR-PCI-MSI.1572925-edge.eth0-TxRx-61
93035 ± 11% -21.4% 73124 ± 5% interrupts.CPU61.CAL:Function_call_interrupts
1785176 +12.1% 2000960 interrupts.CPU61.LOC:Local_timer_interrupts
4567 ± 67% -79.2% 952.00 ±171% interrupts.CPU61.NMI:Non-maskable_interrupts
4567 ± 67% -79.2% 952.00 ±171% interrupts.CPU61.PMI:Performance_monitoring_interrupts
1575186 ± 6% +194.9% 4644760 ± 4% interrupts.CPU61.RES:Rescheduling_interrupts
103665 ± 11% -18.7% 84259 ± 5% interrupts.CPU61.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380127 ± 2% interrupts.CPU61.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU62.103:IR-PCI-MSI.1572926-edge.eth0-TxRx-62
95188 ± 10% -21.9% 74376 ± 6% interrupts.CPU62.CAL:Function_call_interrupts
1785028 +12.0% 1999262 interrupts.CPU62.LOC:Local_timer_interrupts
1613834 ± 8% +184.7% 4594499 ± 3% interrupts.CPU62.RES:Rescheduling_interrupts
106856 ± 10% -21.4% 83987 ± 5% interrupts.CPU62.TLB:TLB_shootdowns
329270 ± 3% +15.4% 380120 ± 2% interrupts.CPU62.TRM:Thermal_event_interrupts
94411 ± 11% -21.3% 74269 ± 5% interrupts.CPU63.CAL:Function_call_interrupts
1785322 +12.0% 1999237 interrupts.CPU63.LOC:Local_timer_interrupts
1669218 ± 6% +187.2% 4794684 ± 3% interrupts.CPU63.RES:Rescheduling_interrupts
105402 ± 10% -18.3% 86076 ± 8% interrupts.CPU63.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380141 ± 2% interrupts.CPU63.TRM:Thermal_event_interrupts
96060 ± 11% -22.6% 74364 ± 6% interrupts.CPU64.CAL:Function_call_interrupts
1785304 +12.0% 1999571 interrupts.CPU64.LOC:Local_timer_interrupts
1681501 ± 7% +164.0% 4438647 ± 3% interrupts.CPU64.RES:Rescheduling_interrupts
108534 ± 10% -22.8% 83813 ± 5% interrupts.CPU64.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380142 ± 2% interrupts.CPU64.TRM:Thermal_event_interrupts
94178 ± 9% -22.7% 72758 ± 8% interrupts.CPU65.CAL:Function_call_interrupts
1785338 +12.0% 1999408 interrupts.CPU65.LOC:Local_timer_interrupts
1674108 ± 4% +182.2% 4724262 ± 3% interrupts.CPU65.RES:Rescheduling_interrupts
107519 ± 10% -22.6% 83259 ± 5% interrupts.CPU65.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380128 ± 2% interrupts.CPU65.TRM:Thermal_event_interrupts
95135 ± 8% -22.4% 73795 ± 5% interrupts.CPU66.CAL:Function_call_interrupts
1785413 +12.0% 2000352 interrupts.CPU66.LOC:Local_timer_interrupts
616.75 ±171% +902.4% 6182 ± 28% interrupts.CPU66.NMI:Non-maskable_interrupts
616.75 ±171% +902.4% 6182 ± 28% interrupts.CPU66.PMI:Performance_monitoring_interrupts
1646166 ± 6% +169.3% 4433267 ± 6% interrupts.CPU66.RES:Rescheduling_interrupts
107190 ± 7% -22.5% 83034 ± 4% interrupts.CPU66.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380118 ± 2% interrupts.CPU66.TRM:Thermal_event_interrupts
95627 ± 10% -22.4% 74249 ± 5% interrupts.CPU67.CAL:Function_call_interrupts
1785314 +12.1% 2001128 interrupts.CPU67.LOC:Local_timer_interrupts
1611146 ± 4% +183.8% 4572204 ± 7% interrupts.CPU67.RES:Rescheduling_interrupts
109283 ± 11% -22.5% 84676 ± 4% interrupts.CPU67.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380142 ± 2% interrupts.CPU67.TRM:Thermal_event_interrupts
95479 ± 11% -20.6% 75791 ± 5% interrupts.CPU68.CAL:Function_call_interrupts
1785192 +12.0% 1999411 interrupts.CPU68.LOC:Local_timer_interrupts
1575394 ± 7% +189.9% 4567277 ± 4% interrupts.CPU68.RES:Rescheduling_interrupts
106972 ± 12% -18.6% 87039 ± 5% interrupts.CPU68.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380129 ± 2% interrupts.CPU68.TRM:Thermal_event_interrupts
93067 ± 14% -19.9% 74538 ± 4% interrupts.CPU69.CAL:Function_call_interrupts
1785257 +12.1% 2000636 interrupts.CPU69.LOC:Local_timer_interrupts
1578663 ± 9% +192.9% 4624518 ± 3% interrupts.CPU69.RES:Rescheduling_interrupts
104271 ± 12% -18.5% 84989 ± 2% interrupts.CPU69.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380066 ± 2% interrupts.CPU69.TRM:Thermal_event_interrupts
474.00 ± 3% +49.7% 709.75 ± 24% interrupts.CPU7.46:IR-PCI-MSI.1572871-edge.eth0-TxRx-7
92062 ± 9% -19.7% 73884 ± 5% interrupts.CPU7.CAL:Function_call_interrupts
1784266 +12.4% 2004980 interrupts.CPU7.LOC:Local_timer_interrupts
2505703 ± 11% -19.0% 2029187 ± 5% interrupts.CPU7.RES:Rescheduling_interrupts
107698 ± 8% -18.4% 87903 ± 6% interrupts.CPU7.TLB:TLB_shootdowns
388281 ± 12% +27.8% 496347 ± 4% interrupts.CPU7.TRM:Thermal_event_interrupts
92337 ± 12% -17.5% 76148 ± 2% interrupts.CPU70.CAL:Function_call_interrupts
1785161 +12.0% 1999757 interrupts.CPU70.LOC:Local_timer_interrupts
1562360 ± 4% +170.8% 4231208 ± 9% interrupts.CPU70.RES:Rescheduling_interrupts
103862 ± 10% -18.8% 84371 ± 3% interrupts.CPU70.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380141 ± 2% interrupts.CPU70.TRM:Thermal_event_interrupts
95829 ± 12% -21.2% 75509 ± 5% interrupts.CPU71.CAL:Function_call_interrupts
1784894 +12.1% 2001287 interrupts.CPU71.LOC:Local_timer_interrupts
1533567 ± 7% +196.9% 4553440 interrupts.CPU71.RES:Rescheduling_interrupts
105883 ± 11% -20.4% 84248 ± 6% interrupts.CPU71.TLB:TLB_shootdowns
329247 ± 3% +15.5% 380146 ± 2% interrupts.CPU71.TRM:Thermal_event_interrupts
88372 ± 11% -20.3% 70389 ± 7% interrupts.CPU8.CAL:Function_call_interrupts
1784452 +12.4% 2005305 interrupts.CPU8.LOC:Local_timer_interrupts
103992 ± 10% -17.4% 85887 ± 5% interrupts.CPU8.TLB:TLB_shootdowns
388422 ± 12% +27.8% 496557 ± 4% interrupts.CPU8.TRM:Thermal_event_interrupts
89290 ± 10% -19.1% 72221 ± 6% interrupts.CPU9.CAL:Function_call_interrupts
1784716 +12.4% 2005710 interrupts.CPU9.LOC:Local_timer_interrupts
104340 ± 10% -16.0% 87647 ± 3% interrupts.CPU9.TLB:TLB_shootdowns
388408 ± 12% +27.8% 496557 ± 4% interrupts.CPU9.TRM:Thermal_event_interrupts
1.285e+08 +12.2% 1.442e+08 interrupts.LOC:Local_timer_interrupts
144.00 +50.0% 216.00 interrupts.MCP:Machine_check_polls
1.677e+08 ± 8% +50.4% 2.522e+08 ± 2% interrupts.RES:Rescheduling_interrupts
7624145 ± 10% -19.7% 6125835 ± 5% interrupts.TLB:TLB_shootdowns
25836833 ± 8% +22.2% 31559779 ± 2% interrupts.TRM:Thermal_event_interrupts
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.6/process/50%/debian-x86_64-2019-11-14.cgz/lkp-bdw-ep6/hackbench/0xb000038
commit:
43e9f7f231 ("sched/fair: Start tracking SCHED_IDLE tasks count in cfs_rq")
3c29e651e1 ("sched/fair: Fall back to sched-idle CPU if idle CPU isn't found")
43e9f7f231e40e45 3c29e651e16dd3b3179cfb2d055
---------------- ---------------------------
       fail:runs  %reproduction    fail:runs
           |             |             |
         1:4           -25%            :4     dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
134091 -10.6% 119937 hackbench.throughput
3.352e+09 ± 2% +16.1% 3.892e+09 ± 5% hackbench.time.involuntary_context_switches
535571 ± 2% +10.6% 592420 ± 4% numa-meminfo.node0.FilePages
12614 ± 18% -12.2% 11078 ± 18% numa-meminfo.node1.Mapped
3472 ± 6% +18.0% 4096 slabinfo.skbuff_head_cache.active_objs
3568 ± 6% +15.7% 4128 slabinfo.skbuff_head_cache.num_objs
730128 ± 5% -52.2% 349073 ± 25% turbostat.C3
14.77 +5.9% 15.64 turbostat.RAMWatt
24613783 ± 5% -32.6% 16595193 ± 23% numa-numastat.node0.local_node
24632941 ± 5% -32.6% 16605181 ± 23% numa-numastat.node0.numa_hit
25827961 -29.9% 18097150 ± 19% numa-numastat.node1.local_node
25836948 -29.9% 18115676 ± 19% numa-numastat.node1.numa_hit
50467170 ± 2% -31.2% 34733456 ± 10% proc-vmstat.numa_hit
50439016 ± 2% -31.2% 34704933 ± 10% proc-vmstat.numa_local
50622650 ± 2% -31.1% 34896854 ± 10% proc-vmstat.pgalloc_normal
50548381 ± 2% -31.1% 34834391 ± 10% proc-vmstat.pgfree
79.00 +2.2% 80.75 vmstat.cpu.sy
19.00 -7.9% 17.50 ± 2% vmstat.cpu.us
13450755 +4.6% 14066265 vmstat.system.cs
1122772 -12.1% 986681 vmstat.system.in
133891 ± 2% +10.6% 148112 ± 4% numa-vmstat.node0.nr_file_pages
12301995 ± 4% -30.2% 8590153 ± 22% numa-vmstat.node0.numa_hit
12282910 ± 4% -30.1% 8580139 ± 22% numa-vmstat.node0.numa_local
12964354 ± 5% -31.2% 8914408 ± 18% numa-vmstat.node1.numa_hit
12806275 ± 5% -31.7% 8746928 ± 19% numa-vmstat.node1.numa_local
33816695 ± 10% +17.8% 39824104 ± 7% cpuidle.C1.time
21015951 ± 3% +19.6% 25127666 ± 7% cpuidle.C1E.time
730991 ± 5% -52.1% 350128 ± 25% cpuidle.C3.usage
1.497e+08 ± 28% +41.6% 2.119e+08 ± 15% cpuidle.C6.time
6771631 ± 4% -45.2% 3713804 ± 15% cpuidle.POLL.time
5209842 ± 4% -44.1% 2913245 ± 15% cpuidle.POLL.usage
26965506 +5.5% 28444381 ± 4% sched_debug.cfs_rq:/.min_vruntime.avg
28257967 +6.6% 30126596 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
582203 ± 9% +37.0% 797353 ± 8% sched_debug.cfs_rq:/.min_vruntime.stddev
4.65 ± 8% -10.7% 4.16 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.stddev
4620 ± 6% -10.7% 4125 ± 4% sched_debug.cfs_rq:/.runnable_weight.stddev
582831 ± 9% +36.6% 796226 ± 7% sched_debug.cfs_rq:/.spread0.stddev
154.73 ± 2% -14.7% 132.00 ± 3% sched_debug.cfs_rq:/.util_avg.stddev
120.93 ± 25% +43.5% 173.50 ± 19% sched_debug.cfs_rq:/.util_est_enqueued.min
184.93 ± 2% -11.0% 164.54 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.stddev
182135 ± 18% +57.1% 286098 ± 6% sched_debug.cpu.avg_idle.avg
185926 ± 19% +49.0% 276991 ± 9% sched_debug.cpu.avg_idle.stddev
3.32 ± 4% -10.8% 2.96 ± 3% sched_debug.cpu.nr_running.stddev
46710201 +9.0% 50934866 ± 5% sched_debug.cpu.nr_switches.avg
50053735 +16.9% 58491253 ± 7% sched_debug.cpu.nr_switches.max
43751407 +6.7% 46698664 ± 4% sched_debug.cpu.nr_switches.min
1344975 ± 10% +78.9% 2406504 ± 16% sched_debug.cpu.nr_switches.stddev
1.63 ± 3% -32.8% 1.09 ± 11% sched_debug.cpu.nr_uninterruptible.avg
1017 ± 27% -59.5% 411.75 ± 16% interrupts.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
6547829 -14.4% 5603910 ± 5% interrupts.CPU16.RES:Rescheduling_interrupts
1017 ± 27% -59.5% 411.75 ± 16% interrupts.CPU19.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
6801598 -14.7% 5801847 ± 5% interrupts.CPU19.RES:Rescheduling_interrupts
6609756 -13.7% 5703084 ± 8% interrupts.CPU20.RES:Rescheduling_interrupts
6446878 ± 2% -11.8% 5689145 ± 7% interrupts.CPU21.RES:Rescheduling_interrupts
6634825 -9.4% 6008203 ± 6% interrupts.CPU24.RES:Rescheduling_interrupts
6801182 -9.8% 6137528 ± 3% interrupts.CPU25.RES:Rescheduling_interrupts
6687252 -12.9% 5821967 ± 6% interrupts.CPU28.RES:Rescheduling_interrupts
6583387 -10.9% 5868659 ± 6% interrupts.CPU29.RES:Rescheduling_interrupts
6585631 -16.6% 5490726 ± 9% interrupts.CPU37.RES:Rescheduling_interrupts
6759973 -13.2% 5867800 ± 7% interrupts.CPU57.RES:Rescheduling_interrupts
6459616 ± 2% -10.5% 5783047 ± 6% interrupts.CPU58.RES:Rescheduling_interrupts
6580818 ± 2% -16.9% 5468945 ± 3% interrupts.CPU60.RES:Rescheduling_interrupts
6723178 ± 2% -12.4% 5887778 ± 3% interrupts.CPU63.RES:Rescheduling_interrupts
6480129 -16.2% 5427168 ± 7% interrupts.CPU64.RES:Rescheduling_interrupts
6439558 ± 2% -11.6% 5691160 ± 5% interrupts.CPU65.RES:Rescheduling_interrupts
6662822 -12.0% 5866390 ± 7% interrupts.CPU68.RES:Rescheduling_interrupts
6602178 -12.0% 5809760 ± 7% interrupts.CPU7.RES:Rescheduling_interrupts
6616172 -14.1% 5685889 ± 7% interrupts.CPU73.RES:Rescheduling_interrupts
6591477 -13.8% 5682655 ± 8% interrupts.CPU81.RES:Rescheduling_interrupts
13.97 +15.8% 16.18 perf-stat.i.MPKI
1.70 -0.1 1.65 perf-stat.i.branch-miss-rate%
3.607e+08 -1.6% 3.549e+08 perf-stat.i.branch-misses
1.22 ± 6% +0.8 2.01 ± 18% perf-stat.i.cache-miss-rate%
17650711 ± 6% +98.0% 34953649 ± 18% perf-stat.i.cache-misses
1.496e+09 +17.3% 1.755e+09 perf-stat.i.cache-references
13503062 +4.6% 14122231 perf-stat.i.context-switches
2946654 ± 2% +18.5% 3491269 ± 3% perf-stat.i.cpu-migrations
29365 ± 8% -30.6% 20387 ± 31% perf-stat.i.cycles-between-cache-misses
0.35 ± 2% +0.0 0.36 ± 2% perf-stat.i.dTLB-store-miss-rate%
2.048e+10 -2.1% 2.006e+10 perf-stat.i.dTLB-stores
55.17 -1.4 53.74 perf-stat.i.iTLB-load-miss-rate%
1.019e+08 +5.5% 1.074e+08 perf-stat.i.iTLB-loads
3068 -3.0% 2975 perf-stat.i.minor-faults
9195204 ± 6% +113.0% 19589023 ± 18% perf-stat.i.node-load-misses
28132 ± 6% +125.3% 63396 ± 13% perf-stat.i.node-loads
58.92 -9.8 49.16 ± 4% perf-stat.i.node-store-miss-rate%
3588196 ± 6% +60.5% 5760265 ± 19% perf-stat.i.node-store-misses
2415953 ± 5% +145.2% 5924042 ± 18% perf-stat.i.node-stores
3068 -3.0% 2975 perf-stat.i.page-faults
13.96 +16.0% 16.19 perf-stat.overall.MPKI
1.70 -0.1 1.65 perf-stat.overall.branch-miss-rate%
1.18 ± 6% +0.8 1.99 ± 18% perf-stat.overall.cache-miss-rate%
13823 ± 6% -47.8% 7221 ± 20% perf-stat.overall.cycles-between-cache-misses
0.35 ± 2% +0.0 0.36 ± 2% perf-stat.overall.dTLB-store-miss-rate%
55.11 -1.4 53.70 perf-stat.overall.iTLB-load-miss-rate%
59.75 -10.5 49.23 perf-stat.overall.node-store-miss-rate%
78192 +13.0% 88370 perf-stat.overall.path-length
3.601e+08 -1.6% 3.543e+08 perf-stat.ps.branch-misses
17622620 ± 6% +98.0% 34899005 ± 18% perf-stat.ps.cache-misses
1.494e+09 +17.3% 1.752e+09 perf-stat.ps.cache-references
13480657 +4.6% 14099389 perf-stat.ps.context-switches
2941749 ± 2% +18.5% 3485631 ± 3% perf-stat.ps.cpu-migrations
2.045e+10 -2.1% 2.003e+10 perf-stat.ps.dTLB-stores
1.017e+08 +5.5% 1.072e+08 perf-stat.ps.iTLB-loads
3064 -3.0% 2972 perf-stat.ps.minor-faults
9180406 ± 6% +113.0% 19558101 ± 18% perf-stat.ps.node-load-misses
28101 ± 6% +125.3% 63314 ± 13% perf-stat.ps.node-loads
3582467 ± 6% +60.5% 5751138 ± 19% perf-stat.ps.node-store-misses
2412086 ± 5% +145.2% 5914636 ± 18% perf-stat.ps.node-stores
3064 -3.0% 2972 perf-stat.ps.page-faults
223485 +9.1% 243918 ± 5% softirqs.CPU0.TIMER
11887 ± 11% -18.2% 9723 ± 6% softirqs.CPU1.SCHED
215411 +9.2% 235267 ± 6% softirqs.CPU1.TIMER
11293 ± 3% -14.3% 9678 ± 10% softirqs.CPU10.SCHED
213916 +9.8% 234908 ± 5% softirqs.CPU10.TIMER
212758 +7.7% 229141 ± 5% softirqs.CPU11.TIMER
11098 ± 2% -15.9% 9337 ± 10% softirqs.CPU12.SCHED
213397 +9.0% 232706 ± 6% softirqs.CPU12.TIMER
212419 +8.3% 230115 ± 6% softirqs.CPU13.TIMER
10777 ± 4% -12.5% 9426 ± 6% softirqs.CPU14.SCHED
11241 ± 3% -11.5% 9950 ± 6% softirqs.CPU15.SCHED
214604 +10.0% 236116 ± 8% softirqs.CPU15.TIMER
11147 -14.9% 9482 ± 8% softirqs.CPU16.SCHED
216343 +8.4% 234558 ± 6% softirqs.CPU16.TIMER
214680 +7.0% 229748 ± 6% softirqs.CPU17.TIMER
212791 +8.6% 230988 ± 6% softirqs.CPU19.TIMER
11211 ± 4% -10.4% 10047 ± 6% softirqs.CPU2.SCHED
219918 +9.8% 241513 ± 6% softirqs.CPU2.TIMER
10825 ± 4% -14.3% 9273 ± 7% softirqs.CPU20.SCHED
215209 +8.8% 234149 ± 6% softirqs.CPU20.TIMER
10869 ± 4% -11.4% 9626 ± 4% softirqs.CPU21.SCHED
216225 +8.7% 235105 ± 5% softirqs.CPU21.TIMER
13013 ± 7% -14.0% 11192 ± 3% softirqs.CPU22.SCHED
220406 +8.9% 239975 ± 4% softirqs.CPU22.TIMER
11925 -14.9% 10145 ± 3% softirqs.CPU23.SCHED
220231 +7.6% 237056 ± 5% softirqs.CPU23.TIMER
11140 ± 2% -11.5% 9861 ± 7% softirqs.CPU24.SCHED
216091 +8.9% 235338 ± 5% softirqs.CPU24.TIMER
11438 ± 3% -16.3% 9578 ± 2% softirqs.CPU25.SCHED
214880 +9.4% 235143 ± 5% softirqs.CPU25.TIMER
214888 +8.8% 233874 ± 5% softirqs.CPU26.TIMER
11410 ± 2% -13.7% 9844 ± 10% softirqs.CPU27.SCHED
215012 +8.7% 233790 ± 4% softirqs.CPU27.TIMER
220618 ± 2% +7.8% 237788 ± 5% softirqs.CPU28.TIMER
11096 ± 5% -14.3% 9513 ± 6% softirqs.CPU29.SCHED
217194 +8.2% 235108 ± 5% softirqs.CPU29.TIMER
11445 ± 2% -16.7% 9536 ± 10% softirqs.CPU3.SCHED
214692 +9.1% 234221 ± 5% softirqs.CPU3.TIMER
11127 ± 5% -10.5% 9954 ± 6% softirqs.CPU30.SCHED
214861 +8.2% 232532 ± 5% softirqs.CPU30.TIMER
217582 +7.8% 234493 ± 4% softirqs.CPU31.TIMER
214606 +8.8% 233473 ± 4% softirqs.CPU32.TIMER
11299 ± 2% -11.7% 9975 ± 6% softirqs.CPU33.SCHED
11503 ± 2% -16.0% 9658 ± 10% softirqs.CPU34.SCHED
213233 +10.9% 236580 ± 6% softirqs.CPU34.TIMER
11162 ± 2% -11.6% 9863 ± 7% softirqs.CPU35.SCHED
11241 -13.9% 9678 ± 6% softirqs.CPU36.SCHED
215681 +9.2% 235446 ± 4% softirqs.CPU36.TIMER
11358 -15.2% 9632 ± 6% softirqs.CPU37.SCHED
219691 ± 2% +18.5% 260407 ± 13% softirqs.CPU37.TIMER
11570 ± 2% -13.1% 10049 ± 6% softirqs.CPU38.SCHED
215559 +8.2% 233189 ± 4% softirqs.CPU38.TIMER
11295 ± 4% -13.1% 9819 ± 10% softirqs.CPU39.SCHED
215781 ± 2% +7.6% 232108 ± 5% softirqs.CPU39.TIMER
11506 ± 7% -13.0% 10006 ± 6% softirqs.CPU4.SCHED
214973 +8.9% 234140 ± 6% softirqs.CPU4.TIMER
11304 ± 2% -13.7% 9759 ± 10% softirqs.CPU40.SCHED
213827 +10.1% 235341 ± 4% softirqs.CPU40.TIMER
11462 ± 4% -13.5% 9913 ± 8% softirqs.CPU41.SCHED
215879 ± 2% +7.2% 231499 ± 5% softirqs.CPU41.TIMER
11418 ± 3% -14.0% 9819 ± 8% softirqs.CPU43.SCHED
214864 +8.3% 232698 ± 4% softirqs.CPU43.TIMER
10706 ± 5% -13.9% 9223 ± 11% softirqs.CPU44.SCHED
217386 +9.4% 237916 ± 5% softirqs.CPU44.TIMER
215191 +9.7% 236017 ± 5% softirqs.CPU45.TIMER
214214 +9.9% 235466 ± 6% softirqs.CPU46.TIMER
214471 +9.9% 235675 ± 5% softirqs.CPU47.TIMER
11251 ± 2% -13.0% 9785 ± 8% softirqs.CPU48.SCHED
212500 +9.9% 233451 ± 6% softirqs.CPU48.TIMER
213609 +9.9% 234657 ± 5% softirqs.CPU49.TIMER
216836 +8.5% 235225 ± 5% softirqs.CPU5.TIMER
215854 +9.0% 235383 ± 5% softirqs.CPU50.TIMER
11468 -14.4% 9820 ± 9% softirqs.CPU51.SCHED
215402 +8.8% 234360 ± 5% softirqs.CPU51.TIMER
214152 +8.8% 233038 ± 5% softirqs.CPU52.TIMER
212843 +8.4% 230716 ± 5% softirqs.CPU53.TIMER
11373 ± 2% -14.9% 9679 ± 7% softirqs.CPU54.SCHED
214008 +9.5% 234380 ± 5% softirqs.CPU54.TIMER
212619 +7.5% 228539 ± 5% softirqs.CPU55.TIMER
11136 ± 3% -16.4% 9312 ± 10% softirqs.CPU56.SCHED
213404 +9.0% 232542 ± 6% softirqs.CPU56.TIMER
11171 ± 4% -11.9% 9837 ± 7% softirqs.CPU57.SCHED
212283 +8.6% 230474 ± 6% softirqs.CPU57.TIMER
11137 ± 4% -14.0% 9578 ± 6% softirqs.CPU58.SCHED
11226 ± 4% -17.2% 9289 ± 4% softirqs.CPU59.SCHED
214193 +23.1% 263668 ± 22% softirqs.CPU59.TIMER
217195 +8.2% 234975 ± 5% softirqs.CPU6.TIMER
11051 ± 4% -14.4% 9460 ± 9% softirqs.CPU60.SCHED
217131 +7.9% 234363 ± 5% softirqs.CPU60.TIMER
11206 ± 3% -12.6% 9793 ± 9% softirqs.CPU61.SCHED
211791 +9.2% 231302 ± 5% softirqs.CPU61.TIMER
214956 ± 2% +8.8% 233770 ± 5% softirqs.CPU62.TIMER
11180 ± 3% -14.8% 9526 ± 8% softirqs.CPU63.SCHED
213413 +8.7% 232008 ± 5% softirqs.CPU63.TIMER
215674 +8.1% 233086 ± 6% softirqs.CPU64.TIMER
11032 ± 3% -15.7% 9302 ± 7% softirqs.CPU65.SCHED
215870 +9.0% 235386 ± 5% softirqs.CPU65.TIMER
11403 ± 3% -14.2% 9786 ± 6% softirqs.CPU66.SCHED
215968 +9.4% 236175 ± 5% softirqs.CPU66.TIMER
11098 ± 3% -7.4% 10281 ± 4% softirqs.CPU67.SCHED
217846 +8.3% 236023 ± 4% softirqs.CPU67.TIMER
11094 -12.2% 9740 ± 7% softirqs.CPU68.SCHED
215512 +9.4% 235796 ± 5% softirqs.CPU68.TIMER
11321 ± 3% -14.1% 9724 ± 3% softirqs.CPU69.SCHED
213564 +9.0% 232867 ± 5% softirqs.CPU69.TIMER
218822 +8.8% 238080 ± 6% softirqs.CPU7.TIMER
11507 ± 2% -12.9% 10026 ± 4% softirqs.CPU70.SCHED
213928 +9.4% 233986 ± 4% softirqs.CPU70.TIMER
213528 +8.5% 231730 ± 4% softirqs.CPU71.TIMER
11262 ± 2% -11.1% 10013 ± 5% softirqs.CPU72.SCHED
11145 ± 3% -14.5% 9533 ± 8% softirqs.CPU73.SCHED
216467 +9.2% 236409 ± 5% softirqs.CPU73.TIMER
11447 ± 4% -13.2% 9935 ± 4% softirqs.CPU74.SCHED
212578 +9.1% 232022 ± 5% softirqs.CPU74.TIMER
215256 +7.8% 232072 ± 5% softirqs.CPU75.TIMER
11324 ± 4% -11.8% 9985 ± 3% softirqs.CPU76.SCHED
214170 +8.6% 232516 ± 5% softirqs.CPU76.TIMER
11646 ± 5% -15.7% 9824 ± 7% softirqs.CPU77.SCHED
213706 +8.0% 230880 ± 5% softirqs.CPU77.TIMER
11397 ± 6% -13.9% 9810 ± 10% softirqs.CPU78.SCHED
11179 ± 3% -11.5% 9894 ± 5% softirqs.CPU79.SCHED
213215 +8.4% 231081 ± 5% softirqs.CPU79.TIMER
11165 ± 4% -16.2% 9352 ± 9% softirqs.CPU8.SCHED
216508 ± 2% +7.9% 233692 ± 6% softirqs.CPU8.TIMER
11203 ± 2% -13.9% 9643 ± 4% softirqs.CPU80.SCHED
216446 +8.3% 234469 ± 5% softirqs.CPU80.TIMER
11151 ± 2% -12.8% 9726 ± 6% softirqs.CPU81.SCHED
11373 ± 3% -13.0% 9891 ± 6% softirqs.CPU82.SCHED
213597 +8.9% 232608 ± 5% softirqs.CPU82.TIMER
214308 +7.9% 231148 ± 5% softirqs.CPU83.TIMER
214384 +20.9% 259149 ± 15% softirqs.CPU84.TIMER
11458 -13.7% 9887 ± 7% softirqs.CPU86.SCHED
215459 +8.2% 233058 ± 5% softirqs.CPU86.TIMER
11303 ± 4% -12.4% 9904 ± 8% softirqs.CPU87.SCHED
214185 +7.9% 231116 ± 5% softirqs.CPU87.TIMER
11030 ± 2% -13.0% 9593 ± 8% softirqs.CPU9.SCHED
213229 +8.7% 231788 ± 6% softirqs.CPU9.TIMER
995635 -12.9% 867688 ± 5% softirqs.SCHED
19052052 +8.6% 20690944 ± 5% softirqs.TIMER
31.32 -1.6 29.74 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
30.24 -1.5 28.76 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
26.06 -0.9 25.17 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.35 ± 5% -0.9 1.48 ± 12% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common
25.36 -0.8 24.54 perf-profile.calltrace.cycles-pp.pipe_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
2.75 ± 4% -0.7 2.08 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
4.24 ± 2% -0.6 3.60 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64
3.84 ± 2% -0.6 3.23 ± 2% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
7.83 -0.5 7.30 ± 2% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
1.68 ± 6% -0.5 1.17 ± 10% perf-profile.calltrace.cycles-pp.update_cfs_group.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
8.02 -0.5 7.53 ± 2% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
8.08 -0.5 7.60 ± 2% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
1.63 ± 4% -0.4 1.19 ± 6% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_write.new_sync_write.vfs_write.ksys_write
1.71 ± 6% -0.4 1.28 ± 8% perf-profile.calltrace.cycles-pp.update_cfs_group.dequeue_task_fair.__schedule.schedule.pipe_wait
2.60 -0.4 2.19 ± 4% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.03 ± 2% -0.4 1.64 ± 4% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.47 -0.4 3.08 ± 4% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
1.95 ± 2% -0.3 1.61 ± 3% perf-profile.calltrace.cycles-pp.copy_page_from_iter.pipe_write.new_sync_write.vfs_write.ksys_write
2.67 -0.3 2.35 ± 3% perf-profile.calltrace.cycles-pp.copy_page_to_iter.pipe_read.new_sync_read.vfs_read.ksys_read
0.56 -0.3 0.26 ±100% perf-profile.calltrace.cycles-pp.fsnotify.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.22 ± 4% -0.3 0.94 ± 2% perf-profile.calltrace.cycles-pp.mutex_lock.pipe_write.new_sync_write.vfs_write.ksys_write
1.25 ± 2% -0.3 1.00 ± 5% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_write.ksys_write.do_syscall_64
2.64 -0.2 2.41 ± 2% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
1.53 -0.2 1.34 ± 7% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_read.ksys_read.do_syscall_64
0.59 ± 4% -0.2 0.40 ± 57% perf-profile.calltrace.cycles-pp.__enqueue_entity.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
1.34 -0.2 1.16 ± 6% perf-profile.calltrace.cycles-pp.update_load_avg.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
1.07 ± 2% -0.2 0.90 ± 4% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter.pipe_write.new_sync_write.vfs_write
1.05 -0.2 0.88 ± 6% perf-profile.calltrace.cycles-pp.update_load_avg.dequeue_entity.dequeue_task_fair.__schedule.schedule
1.56 ± 2% -0.2 1.38 ± 2% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.pipe_read.new_sync_read
1.17 ± 4% -0.2 1.01 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
0.98 ± 2% -0.2 0.82 ± 4% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.66 ± 2% -0.2 1.50 ± 5% perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.pipe_read.new_sync_read.vfs_read
0.73 ± 3% -0.2 0.57 ± 2% perf-profile.calltrace.cycles-pp.file_has_perm.security_file_permission.vfs_read.ksys_read.do_syscall_64
0.96 ± 2% -0.1 0.82 ± 3% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter.pipe_write.new_sync_write
0.89 ± 2% -0.1 0.77 ± 3% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.66 ± 6% -0.1 0.54 ± 2% perf-profile.calltrace.cycles-pp.file_has_perm.security_file_permission.vfs_write.ksys_write.do_syscall_64
2.42 -0.1 2.32 perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.pipe_wait.pipe_read
0.89 ± 4% -0.1 0.79 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.pipe_write.new_sync_write
0.74 ± 2% -0.1 0.66 ± 3% perf-profile.calltrace.cycles-pp.update_curr.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.84 -0.1 0.79 ± 3% perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.56 -0.0 0.54 ± 2% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.77 ± 2% +0.1 0.82 perf-profile.calltrace.cycles-pp.check_preempt_wakeup.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function
0.90 ± 3% +0.1 0.97 ± 2% perf-profile.calltrace.cycles-pp.native_write_msr
0.69 ± 3% +0.1 0.77 ± 3% perf-profile.calltrace.cycles-pp.update_rq_clock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.80 ± 4% +0.1 0.89 ± 2% perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
1.10 +0.1 1.20 perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.73 ± 3% +0.1 0.83 ± 2% perf-profile.calltrace.cycles-pp.__switch_to
1.05 +0.1 1.16 perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
0.63 ± 4% +0.1 0.75 ± 2% perf-profile.calltrace.cycles-pp.put_prev_entity.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop
0.99 ± 5% +0.1 1.12 ± 5% perf-profile.calltrace.cycles-pp.set_task_cpu.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
1.07 ± 3% +0.1 1.21 ± 2% perf-profile.calltrace.cycles-pp.__switch_to_asm
0.64 ± 6% +0.2 0.80 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule.pipe_wait.pipe_read
1.41 ± 3% +0.2 1.59 ± 2% perf-profile.calltrace.cycles-pp.switch_fpu_return.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.77 ± 5% +0.2 1.00 ± 5% perf-profile.calltrace.cycles-pp.load_new_mm_cr3.switch_mm_irqs_off.__schedule.schedule.exit_to_usermode_loop
1.73 ± 3% +0.3 2.06 ± 3% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.14 ±173% +0.5 0.63 ± 5% perf-profile.calltrace.cycles-pp.update_rq_clock.__schedule.schedule.pipe_wait.pipe_read
0.13 ±173% +0.5 0.64 ± 5% perf-profile.calltrace.cycles-pp.migrate_task_rq_fair.set_task_cpu.try_to_wake_up.autoremove_wake_function.__wake_up_common
1.85 ± 5% +0.5 2.38 ± 3% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.00 +0.6 0.59 ± 4% perf-profile.calltrace.cycles-pp.mutex_lock.pipe_wait.pipe_read.new_sync_read.vfs_read
0.00 +0.6 0.60 ± 5% perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.dequeue_task_fair.__schedule.schedule.pipe_wait
0.00 +0.7 0.67 ± 5% perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.15 ±173% +1.0 1.14 ± 12% perf-profile.calltrace.cycles-pp.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
87.66 +1.1 88.78 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
86.87 +1.1 88.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.59 ± 4% +1.4 6.96 ± 3% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.72 ± 4% +1.4 7.12 ± 3% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.87 ± 4% +1.4 7.28 ± 3% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.70 ± 30% +3.3 4.98 ± 13% perf-profile.calltrace.cycles-pp.available_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
35.54 +3.3 38.87 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
34.28 +3.5 37.82 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
31.12 +4.1 35.22 ± 2% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
30.49 +4.1 34.63 ± 2% perf-profile.calltrace.cycles-pp.pipe_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
22.54 ± 2% +6.0 28.53 ± 3% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write.ksys_write
20.58 ± 2% +6.2 26.76 ± 3% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
19.97 ± 2% +6.2 26.16 ± 3% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write
19.73 ± 3% +6.2 25.92 ± 3% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
3.39 ± 20% +7.0 10.39 ± 12% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
4.89 ± 15% +7.1 11.98 ± 10% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
31.34 -1.6 29.75 perf-profile.children.cycles-pp.ksys_read
30.28 -1.5 28.79 perf-profile.children.cycles-pp.vfs_read
4.14 ± 5% -1.3 2.85 ± 7% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
3.51 ± 6% -1.0 2.54 ± 8% perf-profile.children.cycles-pp.update_cfs_group
26.10 -0.9 25.20 perf-profile.children.cycles-pp.new_sync_read
25.42 -0.8 24.59 perf-profile.children.cycles-pp.pipe_read
4.67 -0.8 3.86 ± 4% perf-profile.children.cycles-pp.security_file_permission
4.33 ± 2% -0.7 3.65 ± 2% perf-profile.children.cycles-pp.syscall_return_via_sysret
4.24 ± 2% -0.6 3.60 ± 3% perf-profile.children.cycles-pp.entry_SYSCALL_64
2.18 ± 4% -0.5 1.64 ± 6% perf-profile.children.cycles-pp.mutex_unlock
7.92 -0.5 7.40 ± 3% perf-profile.children.cycles-pp.enqueue_task_fair
8.10 -0.5 7.61 ± 3% perf-profile.children.cycles-pp.activate_task
8.16 -0.5 7.67 ± 3% perf-profile.children.cycles-pp.ttwu_do_activate
4.12 ± 4% -0.5 3.64 ± 5% perf-profile.children.cycles-pp._raw_spin_lock
2.83 -0.4 2.38 ± 6% perf-profile.children.cycles-pp.selinux_file_permission
3.53 -0.4 3.13 ± 4% perf-profile.children.cycles-pp.enqueue_entity
1.99 ± 2% -0.4 1.64 ± 3% perf-profile.children.cycles-pp.copy_page_from_iter
3.34 -0.4 2.99 ± 4% perf-profile.children.cycles-pp.update_load_avg
2.70 -0.3 2.39 ± 3% perf-profile.children.cycles-pp.copy_page_to_iter
2.66 -0.3 2.35 ± 3% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.93 ± 3% -0.3 0.64 ± 7% perf-profile.children.cycles-pp.__mutex_unlock_slowpath
1.41 ± 4% -0.3 1.13 perf-profile.children.cycles-pp.file_has_perm
2.75 -0.3 2.49 ± 2% perf-profile.children.cycles-pp.dequeue_entity
2.29 ± 2% -0.3 2.04 perf-profile.children.cycles-pp.mutex_lock
0.85 ± 3% -0.2 0.61 ± 4% perf-profile.children.cycles-pp.__inode_security_revalidate
0.59 ± 3% -0.2 0.39 ± 6% perf-profile.children.cycles-pp.__mutex_lock
1.79 ± 2% -0.2 1.59 ± 4% perf-profile.children.cycles-pp.__fdget_pos
1.70 ± 2% -0.2 1.52 ± 3% perf-profile.children.cycles-pp.__fget_light
0.93 ± 3% -0.2 0.75 perf-profile.children.cycles-pp.avc_has_perm
1.09 ± 2% -0.2 0.92 ± 4% perf-profile.children.cycles-pp.copyin
2.33 ± 2% -0.2 2.16 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.96 ± 2% -0.2 0.81 ± 2% perf-profile.children.cycles-pp.___might_sleep
1.67 ± 2% -0.2 1.52 ± 5% perf-profile.children.cycles-pp.copyout
0.83 ± 4% -0.1 0.68 ± 2% perf-profile.children.cycles-pp._cond_resched
0.33 ± 5% -0.1 0.19 ± 10% perf-profile.children.cycles-pp.preempt_schedule_common
0.42 ± 2% -0.1 0.28 ± 8% perf-profile.children.cycles-pp.wake_up_q
0.70 ± 3% -0.1 0.59 ± 3% perf-profile.children.cycles-pp.__might_sleep
0.65 ± 5% -0.1 0.54 ± 7% perf-profile.children.cycles-pp.__fsnotify_parent
0.52 ± 2% -0.1 0.43 perf-profile.children.cycles-pp.__might_fault
0.53 ± 5% -0.1 0.45 perf-profile.children.cycles-pp.current_time
0.84 ± 3% -0.1 0.77 perf-profile.children.cycles-pp.fsnotify
0.16 ± 7% -0.1 0.09 ± 13% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.15 ± 8% -0.1 0.08 ± 15% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.22 ± 3% -0.1 0.16 ± 7% perf-profile.children.cycles-pp.wake_q_add
0.49 -0.1 0.44 perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.26 ± 4% -0.1 0.21 ± 2% perf-profile.children.cycles-pp.rcu_all_qs
0.08 ± 14% -0.1 0.03 ±100% perf-profile.children.cycles-pp.__vfs_read
0.25 ± 4% -0.0 0.20 ± 5% perf-profile.children.cycles-pp.finish_wait
0.38 -0.0 0.34 ± 2% perf-profile.children.cycles-pp.__x64_sys_read
0.43 ± 3% -0.0 0.39 ± 2% perf-profile.children.cycles-pp.prepare_to_wait
0.14 ± 20% -0.0 0.10 ± 4% perf-profile.children.cycles-pp.task_tick_fair
0.16 ± 19% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.scheduler_tick
0.10 ± 8% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.18 ± 8% -0.0 0.14 ± 3% perf-profile.children.cycles-pp.inode_has_perm
0.21 ± 2% -0.0 0.18 ± 6% perf-profile.children.cycles-pp.deactivate_task
0.20 ± 2% -0.0 0.17 ± 4% perf-profile.children.cycles-pp.generic_pipe_buf_confirm
0.13 ± 3% -0.0 0.11 ± 4% perf-profile.children.cycles-pp.__sb_end_write
0.14 ± 9% -0.0 0.12 perf-profile.children.cycles-pp.timespec64_trunc
0.14 ± 3% -0.0 0.11 ± 3% perf-profile.children.cycles-pp.native_irq_return_iret
0.11 ± 4% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.smp_reschedule_interrupt
0.17 ± 4% -0.0 0.15 perf-profile.children.cycles-pp.iov_iter_init
0.11 ± 4% -0.0 0.09 ± 10% perf-profile.children.cycles-pp.native_apic_msr_eoi_write
0.09 ± 4% -0.0 0.07 perf-profile.children.cycles-pp.__x2apic_send_IPI_dest
0.15 -0.0 0.13 ± 3% perf-profile.children.cycles-pp.rb_insert_color
0.18 ± 2% -0.0 0.17 ± 2% perf-profile.children.cycles-pp.set_next_buddy
0.08 ± 5% -0.0 0.07 perf-profile.children.cycles-pp.interrupt_entry
0.23 ± 2% +0.0 0.24 perf-profile.children.cycles-pp.cpumask_next
0.67 +0.0 0.69 perf-profile.children.cycles-pp.__calc_delta
0.15 ± 3% +0.0 0.18 ± 6% perf-profile.children.cycles-pp.wakeup_preempt_entity
0.17 ± 6% +0.1 0.23 ± 3% perf-profile.children.cycles-pp.remove_entity_load_avg
0.14 ± 6% +0.1 0.19 ± 7% perf-profile.children.cycles-pp.attach_entity_load_avg
0.80 ± 3% +0.1 0.86 perf-profile.children.cycles-pp.check_preempt_wakeup
0.37 ± 3% +0.1 0.45 ± 4% perf-profile.children.cycles-pp.account_entity_enqueue
0.79 ± 4% +0.1 0.88 ± 5% perf-profile.children.cycles-pp.finish_task_switch
0.80 ± 2% +0.1 0.89 ± 2% perf-profile.children.cycles-pp.put_prev_entity
0.54 ± 2% +0.1 0.65 perf-profile.children.cycles-pp.account_entity_dequeue
1.01 ± 5% +0.1 1.13 ± 5% perf-profile.children.cycles-pp.set_task_cpu
0.49 ± 7% +0.2 0.65 ± 5% perf-profile.children.cycles-pp.migrate_task_rq_fair
1.23 ± 4% +0.2 1.40 ± 2% perf-profile.children.cycles-pp.__switch_to_asm
1.15 ± 2% +0.2 1.33 ± 2% perf-profile.children.cycles-pp.__switch_to
1.42 ± 3% +0.2 1.59 ± 2% perf-profile.children.cycles-pp.switch_fpu_return
0.76 ± 3% +0.2 0.96 ± 3% perf-profile.children.cycles-pp.pick_next_entity
2.21 +0.2 2.42 perf-profile.children.cycles-pp.reweight_entity
2.20 ± 2% +0.2 2.43 perf-profile.children.cycles-pp.load_new_mm_cr3
1.24 ± 5% +0.2 1.48 ± 4% perf-profile.children.cycles-pp.update_rq_clock
1.43 ± 5% +0.3 1.73 ± 2% perf-profile.children.cycles-pp.___perf_sw_event
0.38 ± 15% +0.3 0.68 ± 9% perf-profile.children.cycles-pp.find_next_bit
3.96 +0.3 4.28 perf-profile.children.cycles-pp.pick_next_task_fair
4.43 ± 2% +0.4 4.82 perf-profile.children.cycles-pp.switch_mm_irqs_off
1.11 +0.7 1.83 ± 4% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.46 ± 26% +0.7 1.20 ± 12% perf-profile.children.cycles-pp.cpumask_next_wrap
87.76 +1.1 88.86 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
86.98 +1.1 88.10 perf-profile.children.cycles-pp.do_syscall_64
21.71 +1.3 23.02 perf-profile.children.cycles-pp.__schedule
5.98 ± 4% +1.4 7.34 ± 3% perf-profile.children.cycles-pp.exit_to_usermode_loop
21.59 +1.4 22.99 perf-profile.children.cycles-pp.schedule
1.73 ± 29% +3.3 5.03 ± 13% perf-profile.children.cycles-pp.available_idle_cpu
35.55 +3.3 38.88 perf-profile.children.cycles-pp.ksys_write
34.31 +3.5 37.84 perf-profile.children.cycles-pp.vfs_write
31.14 +4.1 35.23 ± 2% perf-profile.children.cycles-pp.new_sync_write
30.56 +4.2 34.74 ± 2% perf-profile.children.cycles-pp.pipe_write
22.96 ± 2% +6.0 28.93 ± 3% perf-profile.children.cycles-pp.__wake_up_common_lock
20.11 ± 2% +6.1 26.19 ± 3% perf-profile.children.cycles-pp.try_to_wake_up
20.66 ± 2% +6.2 26.82 ± 3% perf-profile.children.cycles-pp.__wake_up_common
20.01 ± 2% +6.2 26.19 ± 3% perf-profile.children.cycles-pp.autoremove_wake_function
3.46 ± 20% +7.0 10.48 ± 12% perf-profile.children.cycles-pp.select_idle_sibling
4.92 ± 15% +7.1 12.01 ± 10% perf-profile.children.cycles-pp.select_task_rq_fair
11.85 ± 3% -2.1 9.72 ± 2% perf-profile.self.cycles-pp.do_syscall_64
4.13 ± 5% -1.3 2.85 ± 8% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
3.49 ± 6% -1.0 2.52 ± 9% perf-profile.self.cycles-pp.update_cfs_group
4.33 ± 2% -0.7 3.64 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret
4.24 ± 2% -0.6 3.60 ± 3% perf-profile.self.cycles-pp.entry_SYSCALL_64
2.12 ± 4% -0.5 1.59 ± 6% perf-profile.self.cycles-pp.mutex_unlock
2.61 -0.3 2.31 ± 3% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.60 ± 2% -0.3 1.30 ± 5% perf-profile.self.cycles-pp.update_load_avg
1.48 ± 3% -0.2 1.25 ± 2% perf-profile.self.cycles-pp.mutex_lock
1.95 -0.2 1.75 ± 7% perf-profile.self.cycles-pp.selinux_file_permission
1.06 ± 5% -0.2 0.86 ± 5% perf-profile.self.cycles-pp.pipe_write
1.46 ± 2% -0.2 1.28 ± 3% perf-profile.self.cycles-pp.pipe_read
1.65 ± 2% -0.2 1.49 ± 4% perf-profile.self.cycles-pp.__fget_light
0.91 ± 3% -0.2 0.74 perf-profile.self.cycles-pp.avc_has_perm
0.94 ± 2% -0.2 0.79 ± 2% perf-profile.self.cycles-pp.___might_sleep
1.84 -0.1 1.72 ± 2% perf-profile.self.cycles-pp.update_curr
1.05 -0.1 0.93 perf-profile.self.cycles-pp.enqueue_task_fair
0.63 ± 3% -0.1 0.52 ± 2% perf-profile.self.cycles-pp.__might_sleep
0.60 ± 6% -0.1 0.51 ± 7% perf-profile.self.cycles-pp.__fsnotify_parent
0.29 ± 3% -0.1 0.19 ± 5% perf-profile.self.cycles-pp.__mutex_lock
0.38 ± 4% -0.1 0.29 ± 4% perf-profile.self.cycles-pp.vfs_write
0.51 -0.1 0.43 ± 4% perf-profile.self.cycles-pp.copy_page_to_iter
0.49 -0.1 0.41 ± 2% perf-profile.self.cycles-pp.new_sync_write
0.35 ± 4% -0.1 0.28 ± 2% perf-profile.self.cycles-pp.copy_page_from_iter
0.81 ± 3% -0.1 0.75 perf-profile.self.cycles-pp.fsnotify
0.26 -0.1 0.19 ± 3% perf-profile.self.cycles-pp.__inode_security_revalidate
0.20 ± 4% -0.1 0.13 ± 6% perf-profile.self.cycles-pp.__mutex_unlock_slowpath
0.29 ± 5% -0.1 0.23 ± 3% perf-profile.self.cycles-pp.file_has_perm
0.22 -0.1 0.16 ± 8% perf-profile.self.cycles-pp.wake_q_add
0.57 ± 2% -0.1 0.52 perf-profile.self.cycles-pp.new_sync_read
0.34 ± 2% -0.1 0.29 ± 3% perf-profile.self.cycles-pp.dequeue_entity
0.26 ± 3% -0.1 0.21 ± 3% perf-profile.self.cycles-pp.ksys_read
0.28 ± 2% -0.0 0.23 perf-profile.self.cycles-pp.security_file_permission
0.22 ± 3% -0.0 0.17 ± 4% perf-profile.self.cycles-pp.finish_wait
0.25 -0.0 0.21 ± 4% perf-profile.self.cycles-pp.ksys_write
0.88 -0.0 0.83 perf-profile.self.cycles-pp.__lock_text_start
0.66 -0.0 0.61 perf-profile.self.cycles-pp.vfs_read
1.37 -0.0 1.32 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.29 ± 4% -0.0 0.25 perf-profile.self.cycles-pp.current_time
0.20 ± 3% -0.0 0.16 ± 4% perf-profile.self.cycles-pp.rcu_all_qs
0.10 ± 8% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.mutex_spin_on_owner
0.42 -0.0 0.39 ± 2% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.34 -0.0 0.31 ± 2% perf-profile.self.cycles-pp.__x64_sys_read
0.09 ± 5% -0.0 0.05 ± 9% perf-profile.self.cycles-pp.wake_up_q
0.15 ± 8% -0.0 0.12 ± 4% perf-profile.self.cycles-pp.inode_has_perm
0.14 ± 5% -0.0 0.11 ± 9% perf-profile.self.cycles-pp.rb_next
0.20 ± 2% -0.0 0.17 ± 4% perf-profile.self.cycles-pp.deactivate_task
0.11 ± 4% -0.0 0.08 ± 8% perf-profile.self.cycles-pp.__fdget_pos
0.24 ± 2% -0.0 0.22 ± 3% perf-profile.self.cycles-pp.prepare_to_wait
0.14 ± 3% -0.0 0.11 ± 3% perf-profile.self.cycles-pp.native_irq_return_iret
0.13 ± 7% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.timespec64_trunc
0.06 -0.0 0.04 ± 57% perf-profile.self.cycles-pp.copyout
0.11 ± 4% -0.0 0.09 ± 10% perf-profile.self.cycles-pp.native_apic_msr_eoi_write
0.17 ± 4% -0.0 0.15 ± 2% perf-profile.self.cycles-pp.generic_pipe_buf_confirm
0.18 ± 2% -0.0 0.16 ± 4% perf-profile.self.cycles-pp.__might_fault
0.17 ± 2% -0.0 0.16 ± 2% perf-profile.self.cycles-pp.set_next_buddy
0.15 ± 2% -0.0 0.13 ± 3% perf-profile.self.cycles-pp.rb_insert_color
0.12 ± 3% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.__sb_end_write
0.10 ± 8% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
0.06 -0.0 0.05 perf-profile.self.cycles-pp.kill_fasync
0.14 ± 3% +0.0 0.16 ± 5% perf-profile.self.cycles-pp.exit_to_usermode_loop
0.66 +0.0 0.68 perf-profile.self.cycles-pp.__calc_delta
0.16 ± 5% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.native_load_tls
0.14 ± 3% +0.0 0.17 ± 4% perf-profile.self.cycles-pp.wakeup_preempt_entity
0.67 +0.0 0.70 perf-profile.self.cycles-pp.dequeue_task_fair
0.23 ± 3% +0.0 0.27 ± 3% perf-profile.self.cycles-pp._cond_resched
0.17 ± 4% +0.0 0.21 ± 6% perf-profile.self.cycles-pp.activate_task
0.35 ± 3% +0.0 0.40 perf-profile.self.cycles-pp.pick_next_entity
1.20 +0.1 1.25 perf-profile.self.cycles-pp.select_task_rq_fair
0.13 ± 6% +0.1 0.18 ± 6% perf-profile.self.cycles-pp.attach_entity_load_avg
0.87 ± 3% +0.1 0.96 ± 3% perf-profile.self.cycles-pp.pick_next_task_fair
0.27 ± 6% +0.1 0.36 ± 7% perf-profile.self.cycles-pp.migrate_task_rq_fair
0.32 ± 2% +0.1 0.41 ± 4% perf-profile.self.cycles-pp.account_entity_enqueue
0.42 +0.1 0.56 ± 2% perf-profile.self.cycles-pp.account_entity_dequeue
2.24 ± 3% +0.1 2.39 ± 2% perf-profile.self.cycles-pp.switch_mm_irqs_off
1.09 ± 2% +0.2 1.26 ± 2% perf-profile.self.cycles-pp.__switch_to
1.20 ± 4% +0.2 1.37 ± 2% perf-profile.self.cycles-pp.__switch_to_asm
1.40 ± 3% +0.2 1.58 ± 2% perf-profile.self.cycles-pp.switch_fpu_return
0.89 ± 6% +0.2 1.10 ± 5% perf-profile.self.cycles-pp.update_rq_clock
2.19 ± 2% +0.2 2.42 perf-profile.self.cycles-pp.load_new_mm_cr3
1.33 ± 4% +0.3 1.60 ± 3% perf-profile.self.cycles-pp.___perf_sw_event
0.34 ± 14% +0.3 0.66 ± 9% perf-profile.self.cycles-pp.find_next_bit
0.28 ± 23% +0.4 0.72 ± 12% perf-profile.self.cycles-pp.cpumask_next_wrap
0.89 +0.7 1.58 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
1.04 +0.7 1.76 ± 4% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.65 ± 12% +3.0 3.61 ± 13% perf-profile.self.cycles-pp.select_idle_sibling
1.71 ± 29% +3.3 4.97 ± 13% perf-profile.self.cycles-pp.available_idle_cpu
***************************************************************************************************
lkp-cfl-e1: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.6/threads/100%/debian-x86_64-2018-04-03.cgz/lkp-cfl-e1/hackbench/0xb8
commit:
43e9f7f231 ("sched/fair: Start tracking SCHED_IDLE tasks count in cfs_rq")
3c29e651e1 ("sched/fair: Fall back to sched-idle CPU if idle CPU isn't found")
43e9f7f231e40e45 3c29e651e16dd3b3179cfb2d055
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:2 -50% :4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
1:2 -50% :4 dmesg.WARNING:stack_recursion
%stddev %change %stddev
\ | \
49845 -6.8% 46466 hackbench.throughput
8.776e+08 +13.0% 9.915e+08 hackbench.time.involuntary_context_switches
93623 -6.3% 87707 hackbench.time.minor_page_faults
740.08 -1.4% 729.54 hackbench.time.user_time
1.569e+09 +3.5% 1.623e+09 hackbench.time.voluntary_context_switches
3.072e+08 -6.2% 2.88e+08 hackbench.workload
21.45 -1.7% 21.09 boot-time.boot
1747212 ± 3% -31.0% 1205072 ± 14% cpuidle.POLL.time
1289000 ± 4% -36.6% 817407 ± 13% cpuidle.POLL.usage
4038486 +6.3% 4293331 vmstat.system.cs
394711 -10.2% 354353 ± 2% vmstat.system.in
10059 -15.4% 8506 ± 5% softirqs.CPU0.SCHED
309881 -27.3% 225159 ± 4% softirqs.CPU2.TIMER
220105 +20.8% 265807 ± 16% softirqs.CPU6.TIMER
120937 ± 3% -17.6% 99667 ± 2% softirqs.SCHED
18883 -8.9% 17200 ± 3% slabinfo.kmalloc-32.active_objs
18883 -8.9% 17200 ± 3% slabinfo.kmalloc-32.num_objs
8573 ± 2% -7.6% 7924 slabinfo.lsm_file_cache.active_objs
8573 ± 2% -7.6% 7924 slabinfo.lsm_file_cache.num_objs
5280 -8.8% 4814 slabinfo.task_delay_info.active_objs
5280 -8.8% 4814 slabinfo.task_delay_info.num_objs
65295 -1.9% 64034 proc-vmstat.nr_active_anon
60605 -1.5% 59671 proc-vmstat.nr_anon_pages
12935 -0.9% 12820 proc-vmstat.nr_slab_unreclaimable
65295 -1.9% 64034 proc-vmstat.nr_zone_active_anon
24774010 -16.6% 20673098 proc-vmstat.numa_hit
24774010 -16.6% 20673098 proc-vmstat.numa_local
24839243 -16.5% 20735391 proc-vmstat.pgalloc_normal
673024 -2.4% 657142 proc-vmstat.pgfault
24822112 -16.5% 20717243 proc-vmstat.pgfree
247.95 ± 14% -19.1% 200.59 ± 4% sched_debug.cfs_rq:/.load_avg.max
43.55 ± 3% -19.9% 34.86 ± 6% sched_debug.cfs_rq:/.load_avg.min
55.51 ± 6% -13.2% 48.19 ± 7% sched_debug.cfs_rq:/.load_avg.stddev
20116 ± 8% -12.8% 17531 ± 5% sched_debug.cfs_rq:/.runnable_weight.min
1288 +7.5% 1384 ± 3% sched_debug.cfs_rq:/.util_avg.max
54763 ± 9% +130.8% 126390 ± 25% sched_debug.cpu.avg_idle.avg
301502 ± 18% +84.9% 557406 ± 14% sched_debug.cpu.avg_idle.max
82233 ± 22% +122.7% 183153 ± 17% sched_debug.cpu.avg_idle.stddev
0.00 ± 12% -12.0% 0.00 ± 8% sched_debug.cpu.next_balance.stddev
16.92 ± 4% -7.5% 15.66 ± 3% sched_debug.cpu.nr_running.avg
2.92 ± 10% -20.9% 2.31 ± 16% sched_debug.cpu.nr_uninterruptible.avg
2977 ± 50% -73.0% 803.75 ± 43% interrupts.133:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
546.00 ± 23% +119.5% 1198 ± 16% interrupts.135:IR-PCI-MSI.2097156-edge.eth1-TxRx-3
13831842 -11.5% 12245495 ± 2% interrupts.CPU0.RES:Rescheduling_interrupts
45259 ± 5% +68.9% 76440 ± 15% interrupts.CPU0.TRM:Thermal_event_interrupts
2977 ± 50% -73.0% 803.75 ± 43% interrupts.CPU1.133:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
45283 ± 5% +68.9% 76474 ± 15% interrupts.CPU1.TRM:Thermal_event_interrupts
13947887 -11.0% 12413074 interrupts.CPU10.RES:Rescheduling_interrupts
45275 ± 5% +68.9% 76468 ± 15% interrupts.CPU10.TRM:Thermal_event_interrupts
45286 ± 5% +68.9% 76468 ± 15% interrupts.CPU11.TRM:Thermal_event_interrupts
13579741 -11.8% 11974820 ± 5% interrupts.CPU12.RES:Rescheduling_interrupts
45286 ± 5% +68.9% 76471 ± 15% interrupts.CPU12.TRM:Thermal_event_interrupts
13668073 -11.3% 12119346 interrupts.CPU13.RES:Rescheduling_interrupts
45286 ± 5% +68.9% 76474 ± 15% interrupts.CPU13.TRM:Thermal_event_interrupts
13578574 -7.8% 12518854 interrupts.CPU14.RES:Rescheduling_interrupts
45286 ± 5% +68.9% 76467 ± 15% interrupts.CPU14.TRM:Thermal_event_interrupts
13689117 -10.1% 12310290 ± 2% interrupts.CPU15.RES:Rescheduling_interrupts
45286 ± 5% +68.9% 76473 ± 15% interrupts.CPU15.TRM:Thermal_event_interrupts
13924470 -10.5% 12461613 interrupts.CPU2.RES:Rescheduling_interrupts
45264 ± 5% +68.9% 76467 ± 15% interrupts.CPU2.TRM:Thermal_event_interrupts
546.00 ± 23% +119.5% 1198 ± 16% interrupts.CPU3.135:IR-PCI-MSI.2097156-edge.eth1-TxRx-3
13951884 -10.5% 12480739 interrupts.CPU3.RES:Rescheduling_interrupts
45286 ± 5% +68.9% 76469 ± 15% interrupts.CPU3.TRM:Thermal_event_interrupts
45285 ± 5% +68.9% 76471 ± 15% interrupts.CPU4.TRM:Thermal_event_interrupts
14030690 -10.7% 12530992 ± 3% interrupts.CPU5.RES:Rescheduling_interrupts
45286 ± 5% +68.9% 76471 ± 15% interrupts.CPU5.TRM:Thermal_event_interrupts
45285 ± 5% +68.9% 76474 ± 15% interrupts.CPU6.TRM:Thermal_event_interrupts
45285 ± 5% +68.9% 76472 ± 15% interrupts.CPU7.TRM:Thermal_event_interrupts
13319036 -10.2% 11966842 ± 2% interrupts.CPU8.RES:Rescheduling_interrupts
45280 ± 5% +68.7% 76405 ± 15% interrupts.CPU8.TRM:Thermal_event_interrupts
13401328 -11.8% 11816711 ± 3% interrupts.CPU9.RES:Rescheduling_interrupts
45279 ± 5% +68.9% 76467 ± 15% interrupts.CPU9.TRM:Thermal_event_interrupts
2.19e+08 -9.8% 1.974e+08 ± 2% interrupts.RES:Rescheduling_interrupts
724498 ± 5% +68.9% 1223435 ± 15% interrupts.TRM:Thermal_event_interrupts
53.44 +4.2% 55.68 perf-stat.i.MPKI
0.12 -0.0 0.10 ± 4% perf-stat.i.cache-miss-rate%
1764490 -13.1% 1533362 ± 3% perf-stat.i.cache-misses
1.767e+09 +3.6% 1.832e+09 perf-stat.i.cache-references
4055785 +6.2% 4306694 perf-stat.i.context-switches
336154 +21.5% 408363 perf-stat.i.cpu-migrations
34528 ± 5% +10.3% 38089 ± 3% perf-stat.i.cycles-between-cache-misses
0.01 ± 6% -0.0 0.01 ± 6% perf-stat.i.dTLB-load-miss-rate%
899577 ± 6% -15.2% 762553 ± 5% perf-stat.i.dTLB-load-misses
0.00 ± 7% -0.0 0.00 ± 4% perf-stat.i.dTLB-store-miss-rate%
22816 ± 7% -13.2% 19813 ± 4% perf-stat.i.dTLB-store-misses
6.044e+09 -1.2% 5.969e+09 perf-stat.i.dTLB-stores
31331199 -3.4% 30272777 perf-stat.i.iTLB-load-misses
1061 +3.1% 1094 perf-stat.i.instructions-per-iTLB-miss
1077 -2.9% 1046 perf-stat.i.minor-faults
73018 -10.9% 65028 ± 2% perf-stat.i.node-loads
122632 ± 2% -9.3% 111181 ± 4% perf-stat.i.node-stores
1077 -2.9% 1046 perf-stat.i.page-faults
53.29 +4.2% 55.53 perf-stat.overall.MPKI
0.10 -0.0 0.08 ± 3% perf-stat.overall.cache-miss-rate%
25264 +15.3% 29120 ± 3% perf-stat.overall.cycles-between-cache-misses
0.01 ± 6% -0.0 0.01 ± 6% perf-stat.overall.dTLB-load-miss-rate%
0.00 ± 7% -0.0 0.00 ± 4% perf-stat.overall.dTLB-store-miss-rate%
1058 +2.9% 1089 perf-stat.overall.instructions-per-iTLB-miss
65158 +6.6% 69487 perf-stat.overall.path-length
1761814 -13.1% 1530883 ± 3% perf-stat.ps.cache-misses
1.765e+09 +3.7% 1.829e+09 perf-stat.ps.cache-references
4048770 +6.2% 4299578 perf-stat.ps.context-switches
335565 +21.5% 407697 perf-stat.ps.cpu-migrations
898100 ± 6% -15.2% 761293 ± 5% perf-stat.ps.dTLB-load-misses
22777 ± 7% -13.2% 19781 ± 4% perf-stat.ps.dTLB-store-misses
6.034e+09 -1.2% 5.96e+09 perf-stat.ps.dTLB-stores
31277565 -3.4% 30222833 perf-stat.ps.iTLB-load-misses
1075 -2.9% 1044 perf-stat.ps.minor-faults
72896 -10.9% 64922 ± 2% perf-stat.ps.node-loads
122424 ± 2% -9.3% 111005 ± 4% perf-stat.ps.node-stores
1075 -2.9% 1044 perf-stat.ps.page-faults
39.37 ±100% -36.4 2.98 ±173% perf-profile.calltrace.cycles-pp.start_thread
21.68 ±100% -20.0 1.64 ±173% perf-profile.calltrace.cycles-pp.__GI___libc_write.start_thread
20.14 ±100% -18.6 1.52 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write.start_thread
20.00 ±100% -18.5 1.51 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write.start_thread
18.05 ±100% -16.7 1.39 ±173% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write.start_thread
16.75 ±100% -15.5 1.22 ±173% perf-profile.calltrace.cycles-pp.__GI___libc_read.start_thread
16.70 ±100% -15.4 1.26 ±173% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
14.98 ±100% -13.9 1.09 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_read.start_thread
14.82 ±100% -13.7 1.08 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read.start_thread
13.95 ±100% -12.9 1.03 ±173% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read.start_thread
12.97 ±100% -12.0 0.95 ±173% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
3.69 ± 6% -1.1 2.58 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common
4.25 ± 6% -1.0 3.29 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
1.54 ± 10% -0.5 1.05 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
1.07 ± 16% -0.5 0.60 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.pipe_write.new_sync_write
1.14 ± 2% -0.3 0.80 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__schedule.schedule.pipe_wait
0.55 ± 4% -0.3 0.26 ±100% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_read.new_sync_read.vfs_read.ksys_read
1.90 ± 9% -0.2 1.67 ± 7% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.71 ± 7% -0.2 1.48 ± 7% perf-profile.calltrace.cycles-pp.__fget.__fget_light.__fdget_pos.ksys_write.do_syscall_64
0.68 ± 19% +0.2 0.86 perf-profile.calltrace.cycles-pp.update_cfs_group.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.67 ± 16% +0.2 0.88 ± 4% perf-profile.calltrace.cycles-pp.update_cfs_group.dequeue_task_fair.__schedule.schedule.pipe_wait
0.29 ±100% +0.4 0.64 ± 4% perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
0.31 ±100% +0.4 0.67 ± 4% perf-profile.calltrace.cycles-pp.mutex_lock.pipe_wait.pipe_read.new_sync_read.vfs_read
0.32 ±100% +0.4 0.68 ± 2% perf-profile.calltrace.cycles-pp.pick_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.29 ±100% +0.4 0.68 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.35 ±100% +0.4 0.78 perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.dequeue_task_fair.__schedule.schedule
1.23 ± 19% +0.4 1.68 ± 2% perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
1.29 ± 18% +0.5 1.76 ± 5% perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.00 +0.5 0.53 ± 2% perf-profile.calltrace.cycles-pp.update_load_avg.set_next_entity.pick_next_task_fair.__schedule.schedule
0.00 +0.6 0.59 ± 7% perf-profile.calltrace.cycles-pp.__enqueue_entity.put_prev_entity.pick_next_task_fair.__schedule.schedule
8.82 +0.6 9.46 ± 2% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
0.47 ±100% +0.7 1.14 ± 6% perf-profile.calltrace.cycles-pp.put_prev_entity.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop
2.21 ± 10% +0.7 2.88 ± 4% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
9.10 +0.7 9.79 perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
9.04 +0.7 9.74 ± 2% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.71 ±100% +0.9 1.61 ± 3% perf-profile.calltrace.cycles-pp.__switch_to
0.68 ± 99% +0.9 1.60 ± 3% perf-profile.calltrace.cycles-pp.native_write_msr
0.86 ±100% +0.9 1.79 ± 2% perf-profile.calltrace.cycles-pp.__switch_to_asm
6.54 +1.0 7.53 ± 2% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
26.96 +1.0 27.95 perf-profile.calltrace.cycles-pp.pipe_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
1.42 ± 63% +1.0 2.47 ± 3% perf-profile.calltrace.cycles-pp.switch_fpu_return.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.54 ± 8% +1.2 5.79 ± 4% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
14.99 +1.3 16.27 perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.new_sync_read
15.35 +1.3 16.66 perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.new_sync_read.vfs_read
27.37 +1.3 28.70 ± 2% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write.ksys_write
17.81 +1.6 19.38 perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.new_sync_read.vfs_read.ksys_read
3.95 ± 11% +1.8 5.70 ± 6% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
23.38 +2.0 25.33 ± 2% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
24.30 +2.0 26.27 ± 2% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
23.73 +2.0 25.71 ± 2% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write
5.24 ± 10% +2.0 7.25 ± 6% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
3.23 ± 57% +2.7 5.98 ± 4% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.34 ± 57% +2.8 6.18 ± 4% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
39.37 ±100% -36.4 2.98 ±173% perf-profile.children.cycles-pp.start_thread
21.68 ±100% -20.0 1.67 ±173% perf-profile.children.cycles-pp.__GI___libc_write
16.99 ±100% -15.7 1.24 ±173% perf-profile.children.cycles-pp.__GI___libc_read
6.84 ± 10% -2.1 4.71 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
45.35 -1.0 44.33 perf-profile.children.cycles-pp.ksys_write
6.70 ± 5% -1.0 5.70 ± 2% perf-profile.children.cycles-pp._raw_spin_lock
5.90 ± 4% -0.6 5.34 ± 6% perf-profile.children.cycles-pp.security_file_permission
3.35 ± 9% -0.6 2.80 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
42.14 -0.5 41.63 perf-profile.children.cycles-pp.vfs_write
3.41 -0.4 3.01 perf-profile.children.cycles-pp.__fdget_pos
3.30 -0.4 2.91 perf-profile.children.cycles-pp.__fget_light
2.71 ± 2% -0.4 2.32 perf-profile.children.cycles-pp.__fget
3.23 -0.3 2.93 ± 3% perf-profile.children.cycles-pp.mutex_lock
2.76 -0.3 2.47 ± 5% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
2.21 ± 3% -0.3 1.93 ± 10% perf-profile.children.cycles-pp.file_has_perm
3.14 -0.2 2.90 ± 4% perf-profile.children.cycles-pp.copy_page_to_iter
3.03 ± 5% -0.2 2.79 ± 5% perf-profile.children.cycles-pp.selinux_file_permission
2.10 ± 5% -0.2 1.86 ± 2% perf-profile.children.cycles-pp.mutex_unlock
1.50 ± 5% -0.2 1.27 ± 2% perf-profile.children.cycles-pp.fput_many
0.68 ± 17% -0.2 0.48 ± 4% perf-profile.children.cycles-pp.__mutex_lock
1.66 ± 2% -0.2 1.47 ± 4% perf-profile.children.cycles-pp.copyout
1.73 -0.2 1.57 ± 2% perf-profile.children.cycles-pp.update_rq_clock
1.30 -0.1 1.17 ± 6% perf-profile.children.cycles-pp.copyin
0.27 ± 11% -0.1 0.15 ± 9% perf-profile.children.cycles-pp.preempt_schedule_common
1.25 ± 3% -0.1 1.15 ± 3% perf-profile.children.cycles-pp.fsnotify
0.79 -0.1 0.69 ± 7% perf-profile.children.cycles-pp.reschedule_interrupt
0.91 -0.1 0.82 ± 5% perf-profile.children.cycles-pp._cond_resched
0.80 ± 2% -0.1 0.71 ± 3% perf-profile.children.cycles-pp.file_update_time
0.29 ± 10% -0.1 0.24 ± 5% perf-profile.children.cycles-pp.rb_insert_color
0.26 -0.0 0.21 ± 2% perf-profile.children.cycles-pp.native_irq_return_iret
0.07 ± 14% -0.0 0.03 ±100% perf-profile.children.cycles-pp.scheduler_ipi
0.22 ± 6% -0.0 0.18 ± 4% perf-profile.children.cycles-pp.__x64_sys_read
0.25 ± 4% -0.0 0.21 ± 2% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.27 -0.0 0.23 ± 9% perf-profile.children.cycles-pp.timespec64_trunc
0.23 ± 2% -0.0 0.20 ± 5% perf-profile.children.cycles-pp.smp_reschedule_interrupt
0.22 -0.0 0.20 ± 5% perf-profile.children.cycles-pp.__x64_sys_write
0.24 -0.0 0.22 ± 3% perf-profile.children.cycles-pp.__list_add_valid
0.14 -0.0 0.12 ± 10% perf-profile.children.cycles-pp.kill_fasync
0.11 ± 4% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.perf_exclude_event
0.10 +0.0 0.12 ± 8% perf-profile.children.cycles-pp.is_cpu_allowed
0.09 ± 11% +0.0 0.12 ± 7% perf-profile.children.cycles-pp.rcu_note_context_switch
0.07 +0.0 0.10 ± 10% perf-profile.children.cycles-pp.pkg_thermal_notify
0.17 ± 3% +0.0 0.20 ± 9% perf-profile.children.cycles-pp.deactivate_task
0.16 +0.0 0.21 ± 10% perf-profile.children.cycles-pp.update_cfs_rq_h_load
0.08 ± 6% +0.0 0.12 ± 10% perf-profile.children.cycles-pp.intel_thermal_interrupt
0.08 ± 6% +0.0 0.12 ± 6% perf-profile.children.cycles-pp.smp_thermal_interrupt
0.08 +0.1 0.13 ± 9% perf-profile.children.cycles-pp.thermal_interrupt
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.default_wake_function
0.19 ± 5% +0.1 0.25 ± 5% perf-profile.children.cycles-pp.wakeup_preempt_entity
0.19 ± 5% +0.1 0.27 ± 4% perf-profile.children.cycles-pp.finish_wait
0.89 ± 8% +0.1 0.98 perf-profile.children.cycles-pp.__calc_delta
0.63 ± 4% +0.1 0.74 ± 4% perf-profile.children.cycles-pp.native_sched_clock
0.33 ± 4% +0.1 0.44 ± 5% perf-profile.children.cycles-pp.set_next_buddy
1.23 ± 3% +0.1 1.34 ± 3% perf-profile.children.cycles-pp.check_preempt_wakeup
0.69 ± 4% +0.1 0.81 ± 3% perf-profile.children.cycles-pp.sched_clock
1.52 ± 3% +0.1 1.65 ± 2% perf-profile.children.cycles-pp.set_next_entity
0.73 ± 4% +0.1 0.88 ± 3% perf-profile.children.cycles-pp.sched_clock_cpu
0.57 ± 4% +0.2 0.74 ± 4% perf-profile.children.cycles-pp.account_entity_dequeue
1.21 ± 4% +0.2 1.39 ± 2% perf-profile.children.cycles-pp.__update_load_avg_se
1.99 ± 3% +0.2 2.20 ± 3% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
1.64 ± 6% +0.2 1.85 perf-profile.children.cycles-pp.__switch_to_asm
1.60 ± 4% +0.2 1.85 ± 3% perf-profile.children.cycles-pp.update_cfs_group
3.91 +0.2 4.15 perf-profile.children.cycles-pp.update_curr
1.04 ± 5% +0.3 1.30 ± 4% perf-profile.children.cycles-pp.___perf_sw_event
1.05 ± 9% +0.3 1.33 ± 2% perf-profile.children.cycles-pp.pick_next_entity
1.57 ± 3% +0.3 1.89 perf-profile.children.cycles-pp.native_write_msr
1.01 ± 9% +0.3 1.33 ± 5% perf-profile.children.cycles-pp.put_prev_entity
2.17 ± 7% +0.3 2.51 ± 2% perf-profile.children.cycles-pp.switch_fpu_return
1.93 ± 7% +0.4 2.36 perf-profile.children.cycles-pp.__switch_to
2.99 ± 4% +0.6 3.57 ± 2% perf-profile.children.cycles-pp.reweight_entity
8.89 +0.6 9.50 ± 2% perf-profile.children.cycles-pp.enqueue_task_fair
9.16 +0.7 9.82 perf-profile.children.cycles-pp.ttwu_do_activate
9.09 +0.7 9.76 ± 2% perf-profile.children.cycles-pp.activate_task
5.18 ± 6% +0.9 6.04 ± 2% perf-profile.children.cycles-pp.pick_next_task_fair
6.71 +0.9 7.63 ± 2% perf-profile.children.cycles-pp.dequeue_task_fair
28.01 +1.4 29.36 ± 2% perf-profile.children.cycles-pp.__wake_up_common_lock
4.91 ± 7% +1.4 6.28 ± 4% perf-profile.children.cycles-pp.exit_to_usermode_loop
18.19 +1.4 19.56 perf-profile.children.cycles-pp.pipe_wait
4.03 ± 11% +1.7 5.75 ± 6% perf-profile.children.cycles-pp.select_idle_sibling
23.86 ± 2% +1.8 25.71 ± 2% perf-profile.children.cycles-pp.try_to_wake_up
24.44 +1.9 26.36 ± 2% perf-profile.children.cycles-pp.__wake_up_common
23.84 +1.9 25.77 ± 2% perf-profile.children.cycles-pp.autoremove_wake_function
5.29 ± 10% +2.0 7.29 ± 6% perf-profile.children.cycles-pp.select_task_rq_fair
20.27 ± 2% +2.3 22.53 ± 2% perf-profile.children.cycles-pp.__schedule
20.36 ± 2% +2.5 22.82 ± 2% perf-profile.children.cycles-pp.schedule
6.82 ± 10% -2.1 4.69 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
2.67 ± 2% -0.4 2.29 perf-profile.self.cycles-pp.__fget
2.72 -0.3 2.44 ± 5% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.22 -0.2 0.98 ± 2% perf-profile.self.cycles-pp.update_rq_clock
1.36 ± 3% -0.2 1.13 ± 6% perf-profile.self.cycles-pp.pipe_write
1.48 ± 5% -0.2 1.24 ± 3% perf-profile.self.cycles-pp.fput_many
2.04 ± 5% -0.2 1.81 ± 2% perf-profile.self.cycles-pp.mutex_unlock
1.96 -0.2 1.73 ± 2% perf-profile.self.cycles-pp.mutex_lock
0.66 ± 12% -0.1 0.53 ± 10% perf-profile.self.cycles-pp.file_has_perm
0.38 ± 10% -0.1 0.28 ± 9% perf-profile.self.cycles-pp.ksys_read
0.34 ± 7% -0.1 0.25 ± 7% perf-profile.self.cycles-pp.__mutex_lock
1.21 ± 3% -0.1 1.11 ± 2% perf-profile.self.cycles-pp.fsnotify
1.00 -0.1 0.94 ± 5% perf-profile.self.cycles-pp.__might_sleep
0.60 ± 4% -0.1 0.55 ± 8% perf-profile.self.cycles-pp.vfs_write
0.29 ± 10% -0.1 0.23 ± 3% perf-profile.self.cycles-pp.rb_insert_color
0.32 ± 6% -0.0 0.28 ± 6% perf-profile.self.cycles-pp.ksys_write
0.26 -0.0 0.21 ± 2% perf-profile.self.cycles-pp.native_irq_return_iret
0.07 ± 14% -0.0 0.03 ±100% perf-profile.self.cycles-pp.scheduler_ipi
0.36 ± 5% -0.0 0.32 ± 8% perf-profile.self.cycles-pp.__sb_start_write
0.29 -0.0 0.24 ± 4% perf-profile.self.cycles-pp.check_preempt_curr
0.22 ± 6% -0.0 0.17 ± 6% perf-profile.self.cycles-pp.__x64_sys_read
0.25 -0.0 0.21 ± 8% perf-profile.self.cycles-pp.timespec64_trunc
0.22 ± 4% -0.0 0.19 ± 11% perf-profile.self.cycles-pp.wake_q_add
0.21 ± 2% -0.0 0.18 ± 6% perf-profile.self.cycles-pp.__x64_sys_write
0.21 ± 2% -0.0 0.20 perf-profile.self.cycles-pp.__list_add_valid
0.11 ± 4% -0.0 0.10 ± 7% perf-profile.self.cycles-pp.kill_fasync
0.06 ± 9% +0.0 0.07 ± 5% perf-profile.self.cycles-pp.rcu_note_context_switch
0.15 +0.0 0.18 ± 6% perf-profile.self.cycles-pp.autoremove_wake_function
0.16 +0.0 0.19 ± 9% perf-profile.self.cycles-pp.deactivate_task
0.26 +0.0 0.29 ± 5% perf-profile.self.cycles-pp.rcu_all_qs
0.33 ± 3% +0.0 0.37 ± 4% perf-profile.self.cycles-pp.set_next_entity
0.18 ± 5% +0.0 0.22 ± 6% perf-profile.self.cycles-pp.wakeup_preempt_entity
0.03 ±100% +0.0 0.07 perf-profile.self.cycles-pp.check_cfs_rq_runtime
0.16 ± 6% +0.0 0.20 ± 6% perf-profile.self.cycles-pp.exit_to_usermode_loop
0.16 +0.0 0.20 ± 12% perf-profile.self.cycles-pp.update_cfs_rq_h_load
0.20 ± 2% +0.1 0.25 ± 2% perf-profile.self.cycles-pp.activate_task
0.41 ± 6% +0.1 0.46 perf-profile.self.cycles-pp.schedule
1.96 +0.1 2.02 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.15 ± 6% +0.1 0.22 perf-profile.self.cycles-pp.finish_wait
0.41 ± 4% +0.1 0.48 ± 6% perf-profile.self.cycles-pp.account_entity_enqueue
0.61 ± 4% +0.1 0.71 ± 3% perf-profile.self.cycles-pp.native_sched_clock
0.83 ± 2% +0.1 0.93 perf-profile.self.cycles-pp.dequeue_task_fair
0.29 ± 3% +0.1 0.40 ± 5% perf-profile.self.cycles-pp.set_next_buddy
1.02 +0.1 1.15 ± 7% perf-profile.self.cycles-pp.enqueue_task_fair
0.46 ± 12% +0.2 0.61 ± 3% perf-profile.self.cycles-pp.pick_next_entity
1.17 ± 4% +0.2 1.35 ± 2% perf-profile.self.cycles-pp.__update_load_avg_se
0.42 ± 8% +0.2 0.61 ± 5% perf-profile.self.cycles-pp.account_entity_dequeue
1.64 ± 6% +0.2 1.85 perf-profile.self.cycles-pp.__switch_to_asm
1.23 ± 4% +0.2 1.46 ± 3% perf-profile.self.cycles-pp.reweight_entity
1.07 ± 5% +0.2 1.31 ± 4% perf-profile.self.cycles-pp.select_task_rq_fair
0.90 ± 5% +0.2 1.14 ± 5% perf-profile.self.cycles-pp.___perf_sw_event
1.57 ± 4% +0.2 1.82 ± 2% perf-profile.self.cycles-pp.update_cfs_group
1.17 ± 3% +0.3 1.42 ± 7% perf-profile.self.cycles-pp.update_load_avg
1.56 ± 4% +0.3 1.88 perf-profile.self.cycles-pp.native_write_msr
2.16 ± 7% +0.3 2.50 ± 2% perf-profile.self.cycles-pp.switch_fpu_return
1.84 ± 8% +0.4 2.25 ± 2% perf-profile.self.cycles-pp.__switch_to
1.22 +0.5 1.73 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
0.92 ± 11% +1.3 2.27 ± 6% perf-profile.self.cycles-pp.select_idle_sibling
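
Across both configurations, the profiles above show most of the added cost landing in the wakeup path: self time in select_idle_sibling() rises sharply, along with its helpers such as available_idle_cpu(), cpumask_next_wrap() and find_next_bit() on the larger machine. That matches what the commit title describes: when no fully idle CPU is found, the wakeup scan keeps going and falls back to a CPU running only SCHED_IDLE tasks. The following is a minimal toy model of that selection step, assuming a flat CPU array; the struct, helper name and topology are made up for illustration and this is not the kernel's implementation. It shows why a fully loaded hackbench run, where no CPU is ever truly idle, pays for the whole scan on every wakeup.

/*
 * Toy model of a "prefer idle, fall back to sched-idle" wakeup scan.
 * NOT the kernel code; a self-contained sketch for illustration only.
 */
#include <stdio.h>

struct cpu_state {
    int idle;        /* no runnable tasks at all */
    int sched_idle;  /* only SCHED_IDLE tasks runnable */
};

/* Hypothetical helper: prefer an idle CPU, else remember a sched-idle one. */
static int pick_wakeup_cpu(const struct cpu_state *cpus, int nr_cpus, int prev)
{
    int fallback = -1;
    int i;

    for (i = 0; i < nr_cpus; i++) {
        int cpu = (prev + i) % nr_cpus;   /* start scanning near the previous CPU */

        if (cpus[cpu].idle)
            return cpu;                   /* best case: a truly idle CPU */
        if (fallback < 0 && cpus[cpu].sched_idle)
            fallback = cpu;               /* the new fallback candidate */
    }
    /*
     * With every CPU busy (as in a saturated hackbench run) the scan never
     * terminates early, and either the sched-idle fallback or the previous
     * CPU is returned after visiting all candidates.
     */
    return fallback >= 0 ? fallback : prev;
}

int main(void)
{
    struct cpu_state cpus[4] = { {0, 0}, {0, 1}, {0, 0}, {0, 0} };

    printf("picked CPU %d\n", pick_wakeup_cpu(cpus, 4, 2));  /* prints 1 */
    return 0;
}

In this model the scan length, and hence the time spent in the select_idle_sibling()-like loop, grows with the number of CPUs whenever the machine is saturated, which is consistent with the larger regressions seen on the 72- and 88-thread machines.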
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[ptrace] 201766a20e: kernel_selftests.seccomp.make_fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 201766a20e30f982ccfe36bebfad9602c3ff574a ("ptrace: add PTRACE_GET_SYSCALL_INFO request")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: kernel_selftests
with the following parameters:
group: kselftests-02
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
2019-07-26 16:52:03 make run_tests -C seccomp
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-201766a20e30f982ccfe36bebfad9602c3ff574a/tools/testing/selftests/seccomp'
gcc -Wl,-no-as-needed -Wall seccomp_bpf.c -lpthread -o seccomp_bpf
In file included from seccomp_bpf.c:51:0:
seccomp_bpf.c: In function ‘tracer_ptrace’:
seccomp_bpf.c:1787:20: error: ‘PTRACE_EVENTMSG_SYSCALL_ENTRY’ undeclared (first use in this function)
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^
../kselftest_harness.h:608:13: note: in definition of macro ‘__EXPECT’
__typeof__(_expected) __exp = (_expected); \
^~~~~~~~~
seccomp_bpf.c:1787:2: note: in expansion of macro ‘EXPECT_EQ’
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^~~~~~~~~
seccomp_bpf.c:1787:20: note: each undeclared identifier is reported only once for each function it appears in
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^
../kselftest_harness.h:608:13: note: in definition of macro ‘__EXPECT’
__typeof__(_expected) __exp = (_expected); \
^~~~~~~~~
seccomp_bpf.c:1787:2: note: in expansion of macro ‘EXPECT_EQ’
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^~~~~~~~~
seccomp_bpf.c:1788:6: error: ‘PTRACE_EVENTMSG_SYSCALL_EXIT’ undeclared (first use in this function)
: PTRACE_EVENTMSG_SYSCALL_EXIT, msg);
^
../kselftest_harness.h:608:13: note: in definition of macro ‘__EXPECT’
__typeof__(_expected) __exp = (_expected); \
^~~~~~~~~
seccomp_bpf.c:1787:2: note: in expansion of macro ‘EXPECT_EQ’
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^~~~~~~~~
Makefile:12: recipe for target 'seccomp_bpf' failed
make: *** [seccomp_bpf] Error 1
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-201766a20e30f982ccfe36bebfad9602c3ff574a/tools/testing/selftests/seccomp'
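The failure is a plain build break: seccomp_bpf.c now references PTRACE_EVENTMSG_SYSCALL_ENTRY and PTRACE_EVENTMSG_SYSCALL_EXIT, which are only declared by UAPI headers that already contain the ptrace commit above, so compiling the selftest against older installed headers fails as shown. A minimal compatibility sketch -- an illustration of one way to keep the test building, not necessarily the upstream fix -- is to carry fallback definitions whose values match include/uapi/linux/ptrace.h:
/*
 * Compatibility sketch for building against pre-commit headers.
 * The fallback values below match include/uapi/linux/ptrace.h as
 * introduced by the ptrace commit; the #ifndef guards are no-ops
 * once new enough headers are installed.
 */
#include <stdio.h>
#include <sys/ptrace.h>
#ifndef PTRACE_EVENTMSG_SYSCALL_ENTRY
# define PTRACE_EVENTMSG_SYSCALL_ENTRY 1
#endif
#ifndef PTRACE_EVENTMSG_SYSCALL_EXIT
# define PTRACE_EVENTMSG_SYSCALL_EXIT  2
#endif
int main(void)
{
	printf("ENTRY=%d EXIT=%d\n",
	       (int)PTRACE_EVENTMSG_SYSCALL_ENTRY,
	       (int)PTRACE_EVENTMSG_SYSCALL_EXIT);
	return 0;
}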
To reproduce:
# build kernel
cd linux
cp config-5.2.0-10889-g201766a20e30f9 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[x86/mce] 1de08dccd3: will-it-scale.per_process_ops -14.1% regression
by kernel test robot
Greeting,
FYI, we noticed a -14.1% regression of will-it-scale.per_process_ops due to commit:
commit: 1de08dccd383482a3e88845d3554094d338f5ff9 ("x86/mce: Add a struct mce.kflags field")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: will-it-scale
on test machine: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
with following parameters:
nr_task: 100%
mode: process
test: malloc1
cpufreq_governor: performance
ucode: 0x11
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
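For context, a malloc1 worker in will-it-scale boils down to an allocate/touch/free loop; the sketch below assumes that pattern (it is not copied from the benchmark source, and the size is a hypothetical choice) and shows why the munmap path -- unmap_region, lru_add_drain and the release_pages spinlock seen in the perf profile further down -- is where the regression surfaces:
/*
 * Sketch of a will-it-scale "malloc1"-style worker, assuming the usual
 * allocate/touch/free pattern.  Large allocations go through mmap()/munmap()
 * in glibc, so each iteration exercises the munmap teardown path.
 */
#include <stdlib.h>
#include <string.h>
#define ALLOC_SIZE (128UL * 1024 * 1024)   /* hypothetical size, large enough for mmap */
static unsigned long worker(unsigned long iterations)
{
	unsigned long ops = 0;
	for (unsigned long i = 0; i < iterations; i++) {
		char *p = malloc(ALLOC_SIZE);
		if (!p)
			break;
		memset(p, 0, ALLOC_SIZE);   /* fault the pages in */
		free(p);                    /* munmap() the region back to the kernel */
		ops++;
	}
	return ops;
}
int main(void)
{
	return worker(100) ? 0 : 1;
}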
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/process/100%/debian-x86_64-20191114.cgz/lkp-knm01/malloc1/will-it-scale/0x11
commit:
9554bfe403 ("x86/mce: Convert the CEC to use the MCE notifier")
1de08dccd3 ("x86/mce: Add a struct mce.kflags field")
9554bfe403bdfc08 1de08dccd383482a3e88845d355
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:4 25% 1:4 dmesg.WARNING:at_ip___perf_sw_event/0x
%stddev %change %stddev
\ | \
668.00 -14.1% 573.75 will-it-scale.per_process_ops
192559 -14.1% 165344 will-it-scale.workload
424371 -20.3% 338331 ± 8% vmstat.system.in
0.00 ± 13% +0.0 0.00 ± 22% mpstat.cpu.all.soft%
0.54 -0.1 0.47 ± 3% mpstat.cpu.all.usr%
1.205e+08 -13.7% 1.039e+08 numa-numastat.node0.local_node
1.205e+08 -13.7% 1.039e+08 numa-numastat.node0.numa_hit
61585280 -13.1% 53521568 numa-vmstat.node0.numa_hit
61585799 -13.1% 53522027 numa-vmstat.node0.numa_local
1.203e+08 -13.8% 1.037e+08 proc-vmstat.numa_hit
1.203e+08 -13.8% 1.037e+08 proc-vmstat.numa_local
1.205e+08 -13.7% 1.04e+08 proc-vmstat.pgalloc_normal
60608363 -13.6% 52339576 proc-vmstat.pgfault
1.204e+08 -13.8% 1.038e+08 proc-vmstat.pgfree
0.04 ± 9% +17.4% 0.05 ± 7% sched_debug.cfs_rq:/.nr_running.stddev
52.52 ± 5% +11.9% 58.79 ± 2% sched_debug.cfs_rq:/.util_avg.stddev
2049590 ± 7% -12.8% 1788136 ± 4% sched_debug.cpu.avg_idle.avg
418.68 ± 2% -22.1% 326.13 ± 2% sched_debug.cpu.clock.stddev
418.68 ± 2% -22.1% 326.14 ± 2% sched_debug.cpu.clock_task.stddev
158439 ± 8% +29.6% 205376 ± 16% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 -18.8% 0.00 ± 2% sched_debug.cpu.next_balance.stddev
0.00 ±173% +500.0% 0.00 ± 33% sched_debug.cpu.nr_uninterruptible.avg
-49.04 +76.9% -86.75 sched_debug.cpu.nr_uninterruptible.min
1117 +13.3% 1266 ± 3% sched_debug.cpu.sched_count.min
36.92 -2.4 34.52 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
36.91 -2.4 34.51 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
36.30 -2.3 33.98 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
36.30 -2.3 33.99 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.55 -0.3 0.26 ±100% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.pagevec_lru_move_fn
0.54 ± 2% -0.3 0.26 ±100% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.release_pages
1.02 -0.1 0.94 perf-profile.calltrace.cycles-pp.page_fault
0.81 -0.1 0.73 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_page_fault.page_fault
0.98 -0.1 0.90 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
0.77 -0.1 0.68 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_page_fault.page_fault
0.69 -0.1 0.61 perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_page_fault.page_fault
0.65 -0.1 0.58 ± 5% perf-profile.calltrace.cycles-pp.apic_timer_interrupt._raw_spin_unlock_irqrestore.release_pages.tlb_flush_mmu.tlb_finish_mmu
0.67 ± 2% -0.1 0.60 ± 6% perf-profile.calltrace.cycles-pp.apic_timer_interrupt._raw_spin_unlock_irqrestore.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
0.66 -0.1 0.59 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_unlock_irqrestore.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
0.62 -0.1 0.56 ± 4% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.release_pages.tlb_flush_mmu
0.64 ± 2% -0.1 0.57 ± 6% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.pagevec_lru_move_fn.lru_add_drain_cpu
0.66 -0.1 0.60 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_unlock_irqrestore.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
47.88 +0.1 48.00 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
47.86 +0.1 48.00 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap
47.74 +0.1 47.89 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
96.85 +0.3 97.18 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
46.33 +0.3 46.67 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
46.30 +0.3 46.64 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu
47.55 +0.3 47.90 perf-profile.calltrace.cycles-pp.lru_add_drain.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
47.55 +0.4 47.90 perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap.__vm_munmap
47.52 +0.4 47.87 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap
96.64 +0.4 97.01 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
46.22 +0.5 46.74 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
46.20 +0.5 46.72 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
60.91 +2.6 63.55 perf-profile.calltrace.cycles-pp.munmap
60.60 +2.6 63.24 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
60.84 +2.6 63.47 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
60.59 +2.6 63.23 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
60.83 +2.6 63.47 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
2.21 -0.3 1.90 ± 6% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
2.24 -0.3 1.93 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
2.13 -0.3 1.85 ± 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.84 -0.2 1.60 ± 5% perf-profile.children.cycles-pp.hrtimer_interrupt
1.57 -0.2 1.39 ± 4% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.66 -0.1 0.55 perf-profile.children.cycles-pp.vm_mmap_pgoff
0.67 -0.1 0.56 perf-profile.children.cycles-pp.ksys_mmap_pgoff
1.07 -0.1 0.96 ± 5% perf-profile.children.cycles-pp.tick_sched_timer
1.03 -0.1 0.93 ± 5% perf-profile.children.cycles-pp.tick_sched_handle
1.01 -0.1 0.91 ± 5% perf-profile.children.cycles-pp.update_process_times
0.58 -0.1 0.49 perf-profile.children.cycles-pp.do_mmap
1.01 -0.1 0.92 perf-profile.children.cycles-pp.do_page_fault
1.05 -0.1 0.97 perf-profile.children.cycles-pp.page_fault
0.83 -0.1 0.74 ± 2% perf-profile.children.cycles-pp.handle_mm_fault
0.79 ± 2% -0.1 0.71 ± 6% perf-profile.children.cycles-pp.scheduler_tick
0.92 ± 2% -0.1 0.84 ± 3% perf-profile.children.cycles-pp.unmap_vmas
0.78 -0.1 0.70 ± 2% perf-profile.children.cycles-pp.__handle_mm_fault
0.88 -0.1 0.80 ± 3% perf-profile.children.cycles-pp.unmap_page_range
0.70 -0.1 0.62 perf-profile.children.cycles-pp.handle_pte_fault
0.43 -0.1 0.36 perf-profile.children.cycles-pp.mmap_region
0.47 ± 2% -0.1 0.41 ± 2% perf-profile.children.cycles-pp.mmap64
0.55 -0.0 0.50 ± 4% perf-profile.children.cycles-pp.task_tick_fair
0.18 ± 2% -0.0 0.13 ± 3% perf-profile.children.cycles-pp.perf_event_mmap
0.11 -0.0 0.08 ± 6% perf-profile.children.cycles-pp.perf_iterate_sb
0.31 -0.0 0.27 ± 3% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.26 -0.0 0.23 perf-profile.children.cycles-pp.get_page_from_freelist
0.22 -0.0 0.19 ± 3% perf-profile.children.cycles-pp.pte_alloc_one
0.24 ± 2% -0.0 0.21 ± 4% perf-profile.children.cycles-pp.__pte_alloc
0.50 -0.0 0.47 ± 2% perf-profile.children.cycles-pp.change_protection
0.50 -0.0 0.47 ± 2% perf-profile.children.cycles-pp.change_p4d_range
0.50 -0.0 0.47 ± 2% perf-profile.children.cycles-pp.change_prot_numa
0.11 ± 4% -0.0 0.09 ± 4% perf-profile.children.cycles-pp.free_unref_page_list
0.50 -0.0 0.47 ± 2% perf-profile.children.cycles-pp.task_work_run
0.50 -0.0 0.48 perf-profile.children.cycles-pp.exit_to_usermode_loop
0.32 ± 2% -0.0 0.29 ± 4% perf-profile.children.cycles-pp.___might_sleep
0.50 -0.0 0.47 ± 2% perf-profile.children.cycles-pp.task_numa_work
0.16 ± 2% -0.0 0.14 ± 3% perf-profile.children.cycles-pp.prep_new_page
0.16 ± 2% -0.0 0.14 ± 3% perf-profile.children.cycles-pp.alloc_pages_vma
0.14 ± 3% -0.0 0.12 ± 4% perf-profile.children.cycles-pp.clear_page_erms
0.07 ± 6% -0.0 0.05 perf-profile.children.cycles-pp.kmem_cache_free
0.07 -0.0 0.05 ± 8% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.17 ± 2% -0.0 0.15 ± 4% perf-profile.children.cycles-pp._cond_resched
0.12 -0.0 0.10 ± 4% perf-profile.children.cycles-pp.get_unmapped_area
0.09 ± 4% -0.0 0.07 ± 10% perf-profile.children.cycles-pp._raw_spin_lock
0.13 -0.0 0.11 ± 4% perf-profile.children.cycles-pp.__anon_vma_prepare
0.13 ± 3% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.free_pgtables
0.10 ± 4% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.16 -0.0 0.15 ± 3% perf-profile.children.cycles-pp.irq_exit
0.13 -0.0 0.12 ± 3% perf-profile.children.cycles-pp.free_pgd_range
0.10 -0.0 0.09 ± 4% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.05 +0.0 0.08 ± 6% perf-profile.children.cycles-pp.__mod_memcg_state
0.07 +0.0 0.10 perf-profile.children.cycles-pp.__mod_lruvec_state
0.14 ± 13% +0.0 0.18 ± 6% perf-profile.children.cycles-pp.__remove_hrtimer
47.91 +0.1 48.04 perf-profile.children.cycles-pp.tlb_finish_mmu
47.90 +0.1 48.03 perf-profile.children.cycles-pp.tlb_flush_mmu
47.81 +0.1 47.96 perf-profile.children.cycles-pp.release_pages
98.55 +0.1 98.70 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
98.53 +0.1 98.68 perf-profile.children.cycles-pp.do_syscall_64
96.91 +0.3 97.24 perf-profile.children.cycles-pp.__x64_sys_munmap
96.89 +0.3 97.22 perf-profile.children.cycles-pp.__vm_munmap
47.73 +0.3 48.06 perf-profile.children.cycles-pp.pagevec_lru_move_fn
96.88 +0.3 97.21 perf-profile.children.cycles-pp.__do_munmap
47.60 +0.4 47.95 perf-profile.children.cycles-pp.lru_add_drain
47.59 +0.4 47.94 perf-profile.children.cycles-pp.lru_add_drain_cpu
96.67 +0.4 97.03 perf-profile.children.cycles-pp.unmap_region
92.78 +0.8 93.60 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
92.83 +0.8 93.67 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
60.92 +2.6 63.56 perf-profile.children.cycles-pp.munmap
0.17 ± 6% -0.1 0.06 ± 13% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.47 ± 2% -0.1 0.42 ± 4% perf-profile.self.cycles-pp.unmap_page_range
0.10 ± 4% -0.0 0.07 ± 12% perf-profile.self.cycles-pp.hrtimer_interrupt
0.11 -0.0 0.08 ± 10% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.44 -0.0 0.41 ± 3% perf-profile.self.cycles-pp.change_p4d_range
0.08 -0.0 0.06 ± 9% perf-profile.self.cycles-pp.perf_iterate_sb
0.14 -0.0 0.12 ± 3% perf-profile.self.cycles-pp.clear_page_erms
0.08 ± 5% -0.0 0.07 ± 6% perf-profile.self.cycles-pp._raw_spin_lock
0.07 ± 7% -0.0 0.05 perf-profile.self.cycles-pp.kmem_cache_free
0.08 -0.0 0.07 ± 6% perf-profile.self.cycles-pp.release_pages
0.09 -0.0 0.08 ± 5% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.05 ± 8% +0.0 0.07 ± 14% perf-profile.self.cycles-pp.___perf_sw_event
0.05 +0.0 0.07 ± 5% perf-profile.self.cycles-pp.__mod_memcg_state
0.00 +0.1 0.14 ± 9% perf-profile.self.cycles-pp.__remove_hrtimer
92.77 +0.8 93.60 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
124732 +11.7% 139275 softirqs.CPU0.TIMER
125382 +13.1% 141792 ± 2% softirqs.CPU10.TIMER
124413 +11.5% 138750 softirqs.CPU100.TIMER
123657 +12.2% 138733 softirqs.CPU101.TIMER
123899 +12.1% 138896 softirqs.CPU102.TIMER
124228 +11.7% 138717 softirqs.CPU103.TIMER
123619 +12.3% 138882 softirqs.CPU104.TIMER
123585 +12.7% 139247 softirqs.CPU105.TIMER
123593 +12.0% 138384 softirqs.CPU106.TIMER
123642 +12.2% 138741 softirqs.CPU107.TIMER
123926 +11.7% 138456 softirqs.CPU108.TIMER
123798 +12.0% 138608 softirqs.CPU109.TIMER
124136 +12.0% 139018 softirqs.CPU11.TIMER
123935 +12.5% 139487 softirqs.CPU110.TIMER
124699 +11.1% 138505 softirqs.CPU111.TIMER
123708 +12.3% 138865 softirqs.CPU112.TIMER
123122 +13.1% 139227 softirqs.CPU113.TIMER
123665 +16.1% 143543 ± 7% softirqs.CPU114.TIMER
123887 +11.6% 138272 softirqs.CPU115.TIMER
123873 +11.8% 138498 softirqs.CPU116.TIMER
123987 +11.6% 138362 softirqs.CPU117.TIMER
123891 +11.6% 138254 softirqs.CPU118.TIMER
123315 +16.1% 143144 ± 4% softirqs.CPU119.TIMER
123661 +14.9% 142093 ± 3% softirqs.CPU12.TIMER
123199 +12.3% 138367 softirqs.CPU120.TIMER
123372 +12.1% 138262 softirqs.CPU121.TIMER
123641 +11.6% 137956 softirqs.CPU122.TIMER
123389 +12.6% 138884 softirqs.CPU123.TIMER
123309 +12.4% 138608 softirqs.CPU124.TIMER
123604 +11.9% 138283 softirqs.CPU125.TIMER
123456 +12.2% 138513 softirqs.CPU126.TIMER
123526 +12.8% 139288 softirqs.CPU127.TIMER
123496 +11.9% 138241 softirqs.CPU128.TIMER
123381 +12.9% 139252 softirqs.CPU129.TIMER
125036 +11.1% 138906 softirqs.CPU13.TIMER
123722 +12.5% 139159 softirqs.CPU130.TIMER
123523 +12.1% 138454 softirqs.CPU131.TIMER
123760 +11.8% 138329 softirqs.CPU132.TIMER
123569 +12.2% 138624 softirqs.CPU133.TIMER
123669 +12.1% 138657 softirqs.CPU134.TIMER
122996 +12.7% 138591 softirqs.CPU135.TIMER
123011 +12.6% 138503 softirqs.CPU136.TIMER
123141 +12.6% 138624 softirqs.CPU137.TIMER
123505 +12.0% 138360 softirqs.CPU138.TIMER
123513 +12.0% 138343 softirqs.CPU139.TIMER
124622 +18.6% 147768 ± 6% softirqs.CPU14.TIMER
123088 +12.3% 138241 softirqs.CPU140.TIMER
123186 +13.2% 139395 softirqs.CPU141.TIMER
123249 +12.8% 139061 ± 2% softirqs.CPU142.TIMER
123274 +12.2% 138333 softirqs.CPU143.TIMER
123241 +12.5% 138599 ± 2% softirqs.CPU144.TIMER
123183 +12.1% 138053 softirqs.CPU145.TIMER
124668 +10.7% 137947 softirqs.CPU146.TIMER
123275 +12.7% 138880 softirqs.CPU147.TIMER
123278 +12.3% 138403 softirqs.CPU148.TIMER
123611 +12.3% 138764 softirqs.CPU149.TIMER
124133 +12.0% 139055 softirqs.CPU15.TIMER
123195 +12.1% 138071 softirqs.CPU150.TIMER
123687 +11.7% 138126 softirqs.CPU151.TIMER
123483 +11.9% 138191 softirqs.CPU152.TIMER
123281 +12.1% 138248 softirqs.CPU153.TIMER
124097 +11.3% 138090 softirqs.CPU154.TIMER
123803 +11.7% 138348 softirqs.CPU155.TIMER
122720 +12.6% 138171 softirqs.CPU157.TIMER
123267 +12.7% 138929 softirqs.CPU158.TIMER
123387 +12.0% 138189 softirqs.CPU159.TIMER
124075 +12.0% 138944 softirqs.CPU16.TIMER
123571 +12.4% 138834 softirqs.CPU160.TIMER
123578 +12.0% 138378 softirqs.CPU161.TIMER
123664 +11.9% 138397 softirqs.CPU162.TIMER
123363 +11.9% 138061 softirqs.CPU163.TIMER
123053 +12.4% 138264 softirqs.CPU164.TIMER
123412 +12.1% 138367 softirqs.CPU165.TIMER
123721 +11.7% 138219 softirqs.CPU166.TIMER
123512 +11.8% 138098 softirqs.CPU167.TIMER
123392 +12.1% 138382 softirqs.CPU168.TIMER
123559 +11.9% 138276 softirqs.CPU169.TIMER
124075 +12.5% 139563 softirqs.CPU17.TIMER
123019 +12.5% 138428 softirqs.CPU170.TIMER
123467 +12.0% 138328 softirqs.CPU171.TIMER
123040 +12.3% 138224 softirqs.CPU173.TIMER
123997 +11.6% 138334 softirqs.CPU174.TIMER
123787 +11.8% 138437 softirqs.CPU175.TIMER
123315 +12.2% 138419 softirqs.CPU176.TIMER
123771 +12.1% 138716 softirqs.CPU177.TIMER
123016 +12.2% 138058 softirqs.CPU178.TIMER
122844 +12.4% 138072 softirqs.CPU179.TIMER
123981 +12.1% 138975 softirqs.CPU18.TIMER
123511 +11.8% 138041 softirqs.CPU180.TIMER
123415 +12.0% 138171 softirqs.CPU181.TIMER
122954 +12.9% 138845 softirqs.CPU182.TIMER
123291 +12.0% 138113 softirqs.CPU183.TIMER
122910 +12.4% 138175 softirqs.CPU184.TIMER
123015 +12.8% 138812 ± 2% softirqs.CPU185.TIMER
123197 +11.5% 137396 softirqs.CPU186.TIMER
122914 +12.2% 137870 softirqs.CPU187.TIMER
122854 +12.7% 138509 softirqs.CPU188.TIMER
122864 +12.5% 138211 softirqs.CPU189.TIMER
124311 +11.5% 138654 softirqs.CPU19.TIMER
122961 +12.4% 138166 softirqs.CPU190.TIMER
123015 +12.3% 138134 softirqs.CPU191.TIMER
122906 +12.3% 138029 softirqs.CPU192.TIMER
122988 +12.3% 138098 softirqs.CPU193.TIMER
122863 +11.9% 137464 softirqs.CPU194.TIMER
122883 +12.3% 138039 softirqs.CPU195.TIMER
123157 +12.1% 138023 softirqs.CPU196.TIMER
122831 +12.5% 138204 softirqs.CPU197.TIMER
123113 +11.9% 137755 softirqs.CPU198.TIMER
122809 +12.2% 137771 softirqs.CPU199.TIMER
123833 +12.3% 139118 softirqs.CPU20.TIMER
122908 +12.1% 137806 softirqs.CPU200.TIMER
122641 +12.4% 137887 softirqs.CPU201.TIMER
123187 +12.2% 138253 softirqs.CPU202.TIMER
122997 +12.2% 138012 softirqs.CPU203.TIMER
123088 +11.7% 137542 softirqs.CPU204.TIMER
122928 +12.2% 137903 softirqs.CPU205.TIMER
122990 +12.1% 137911 softirqs.CPU206.TIMER
123028 +12.2% 138095 softirqs.CPU207.TIMER
122473 +12.7% 138050 softirqs.CPU208.TIMER
122665 +12.8% 138325 softirqs.CPU209.TIMER
124436 +11.5% 138798 softirqs.CPU21.TIMER
122387 +12.7% 137900 softirqs.CPU210.TIMER
122693 +12.3% 137746 softirqs.CPU211.TIMER
122651 +12.3% 137710 softirqs.CPU212.TIMER
122995 +11.9% 137644 softirqs.CPU213.TIMER
122918 +11.9% 137535 softirqs.CPU214.TIMER
122662 +12.0% 137431 softirqs.CPU215.TIMER
122226 +12.3% 137244 softirqs.CPU216.TIMER
122549 +12.1% 137338 softirqs.CPU217.TIMER
123475 +11.5% 137651 softirqs.CPU218.TIMER
123553 +11.4% 137635 softirqs.CPU219.TIMER
123621 +13.3% 140099 ± 2% softirqs.CPU22.TIMER
122924 +12.0% 137726 softirqs.CPU220.TIMER
123174 +12.3% 138348 softirqs.CPU221.TIMER
122630 +12.5% 137977 softirqs.CPU222.TIMER
123731 +11.8% 138294 softirqs.CPU223.TIMER
123492 +11.8% 138023 softirqs.CPU224.TIMER
123019 +11.9% 137651 softirqs.CPU225.TIMER
123061 +12.0% 137781 softirqs.CPU226.TIMER
123436 +11.8% 137983 softirqs.CPU227.TIMER
122711 +12.3% 137768 softirqs.CPU228.TIMER
122361 +13.2% 138515 softirqs.CPU229.TIMER
124044 +12.2% 139183 softirqs.CPU23.TIMER
122462 +13.2% 138573 softirqs.CPU230.TIMER
122970 +12.5% 138298 softirqs.CPU231.TIMER
123005 +12.2% 137962 softirqs.CPU232.TIMER
122716 +12.4% 137939 softirqs.CPU233.TIMER
122591 +12.4% 137826 softirqs.CPU234.TIMER
122794 +12.4% 138058 softirqs.CPU235.TIMER
122607 +12.6% 138015 softirqs.CPU236.TIMER
122744 +12.8% 138401 softirqs.CPU237.TIMER
122680 +12.2% 137683 softirqs.CPU238.TIMER
122589 +12.3% 137729 softirqs.CPU239.TIMER
122741 +12.5% 138046 softirqs.CPU240.TIMER
124001 +11.3% 137964 softirqs.CPU241.TIMER
122283 +12.5% 137507 softirqs.CPU242.TIMER
122722 +12.4% 137958 softirqs.CPU243.TIMER
123393 +11.4% 137463 softirqs.CPU244.TIMER
122456 +12.4% 137610 softirqs.CPU245.TIMER
122995 +12.3% 138090 softirqs.CPU246.TIMER
123687 +11.4% 137814 softirqs.CPU247.TIMER
122494 +12.9% 138288 softirqs.CPU248.TIMER
122634 +12.6% 138146 softirqs.CPU249.TIMER
124923 +11.8% 139698 softirqs.CPU25.TIMER
122661 +12.3% 137714 softirqs.CPU250.TIMER
122343 +12.5% 137689 softirqs.CPU251.TIMER
122846 +11.8% 137353 softirqs.CPU252.TIMER
122455 +12.3% 137525 softirqs.CPU253.TIMER
122334 +13.1% 138369 softirqs.CPU254.TIMER
122023 +13.2% 138083 ± 2% softirqs.CPU256.TIMER
122317 +12.2% 137246 softirqs.CPU257.TIMER
122558 +11.6% 136825 softirqs.CPU258.TIMER
122482 +15.8% 141829 ± 6% softirqs.CPU259.TIMER
123637 +12.2% 138768 softirqs.CPU26.TIMER
122120 +12.5% 137405 softirqs.CPU260.TIMER
122386 +12.2% 137365 softirqs.CPU262.TIMER
122398 +12.4% 137560 softirqs.CPU263.TIMER
122108 +12.5% 137359 softirqs.CPU264.TIMER
122101 +15.4% 140873 ± 5% softirqs.CPU265.TIMER
122163 +12.2% 137050 softirqs.CPU266.TIMER
122077 +12.6% 137416 softirqs.CPU267.TIMER
122318 +12.3% 137341 softirqs.CPU268.TIMER
122063 +12.5% 137290 softirqs.CPU269.TIMER
123854 +12.0% 138752 softirqs.CPU27.TIMER
122323 +12.0% 137011 softirqs.CPU270.TIMER
122091 +12.4% 137199 softirqs.CPU271.TIMER
122042 +13.0% 137944 softirqs.CPU272.TIMER
122898 +11.4% 136920 softirqs.CPU273.TIMER
122100 +12.5% 137340 softirqs.CPU274.TIMER
122223 +12.2% 137141 softirqs.CPU275.TIMER
122367 +11.8% 136818 softirqs.CPU276.TIMER
122216 +12.2% 137067 softirqs.CPU277.TIMER
122035 +12.2% 136880 softirqs.CPU278.TIMER
122296 +12.4% 137442 softirqs.CPU279.TIMER
124072 +12.2% 139264 softirqs.CPU28.TIMER
122058 +12.3% 137060 softirqs.CPU280.TIMER
121889 +12.5% 137107 softirqs.CPU281.TIMER
121953 +12.4% 137058 softirqs.CPU282.TIMER
122136 +12.3% 137153 softirqs.CPU283.TIMER
122152 +11.9% 136661 softirqs.CPU284.TIMER
122126 +12.1% 136891 softirqs.CPU285.TIMER
121523 +12.7% 136947 softirqs.CPU286.TIMER
117427 +12.6% 132264 ± 2% softirqs.CPU287.TIMER
124082 +11.7% 138601 softirqs.CPU29.TIMER
124520 +11.4% 138684 softirqs.CPU3.TIMER
125786 ± 3% +10.4% 138855 softirqs.CPU30.TIMER
124607 +11.5% 138958 softirqs.CPU31.TIMER
123701 +13.1% 139886 softirqs.CPU32.TIMER
124593 +11.9% 139391 softirqs.CPU33.TIMER
123719 +12.0% 138526 softirqs.CPU34.TIMER
123959 +11.9% 138665 softirqs.CPU35.TIMER
123758 +12.0% 138556 softirqs.CPU36.TIMER
123856 +11.9% 138597 softirqs.CPU37.TIMER
124053 +16.7% 144775 ± 6% softirqs.CPU38.TIMER
123675 +11.8% 138281 softirqs.CPU39.TIMER
124228 +12.0% 139164 softirqs.CPU4.TIMER
123900 +12.3% 139175 softirqs.CPU40.TIMER
123892 +12.4% 139211 softirqs.CPU41.TIMER
127063 ± 3% +9.1% 138615 softirqs.CPU42.TIMER
123679 +12.2% 138760 softirqs.CPU43.TIMER
124702 +11.9% 139566 softirqs.CPU44.TIMER
123975 +11.9% 138712 softirqs.CPU45.TIMER
124174 +11.6% 138531 softirqs.CPU46.TIMER
123644 +12.1% 138571 softirqs.CPU48.TIMER
123687 +12.3% 138843 softirqs.CPU49.TIMER
124610 +11.6% 139078 softirqs.CPU5.TIMER
124146 +15.8% 143709 ± 3% softirqs.CPU51.TIMER
123635 +12.8% 139412 softirqs.CPU52.TIMER
124065 +12.1% 139088 softirqs.CPU53.TIMER
124147 +14.2% 141788 ± 3% softirqs.CPU54.TIMER
123762 +12.2% 138905 softirqs.CPU55.TIMER
125582 +10.6% 138868 softirqs.CPU57.TIMER
125328 +18.7% 148744 ± 11% softirqs.CPU58.TIMER
123995 +12.1% 138967 softirqs.CPU59.TIMER
124120 +12.1% 139157 softirqs.CPU6.TIMER
124023 +12.3% 139227 softirqs.CPU60.TIMER
123781 +12.1% 138727 softirqs.CPU61.TIMER
123569 +12.5% 138970 softirqs.CPU62.TIMER
123608 +12.5% 139074 softirqs.CPU63.TIMER
123407 +13.1% 139543 softirqs.CPU64.TIMER
127045 ± 4% +9.2% 138760 softirqs.CPU65.TIMER
126610 ± 5% +9.5% 138633 softirqs.CPU66.TIMER
123612 +12.1% 138603 softirqs.CPU67.TIMER
123604 +12.5% 139113 softirqs.CPU68.TIMER
123505 +12.2% 138554 softirqs.CPU69.TIMER
124325 +11.7% 138838 softirqs.CPU7.TIMER
124090 +12.5% 139591 softirqs.CPU70.TIMER
123699 +11.9% 138451 softirqs.CPU71.TIMER
124613 +13.9% 141945 ± 3% softirqs.CPU72.TIMER
123431 +12.3% 138639 softirqs.CPU73.TIMER
123853 +12.1% 138852 softirqs.CPU75.TIMER
124259 +11.8% 138957 softirqs.CPU76.TIMER
123966 +12.1% 138962 softirqs.CPU77.TIMER
123577 +12.1% 138498 softirqs.CPU78.TIMER
123749 +12.0% 138577 softirqs.CPU79.TIMER
124280 +11.7% 138772 softirqs.CPU8.TIMER
124029 +11.7% 138583 softirqs.CPU80.TIMER
123492 +12.4% 138826 softirqs.CPU81.TIMER
124304 +11.6% 138753 softirqs.CPU82.TIMER
123952 +12.0% 138766 softirqs.CPU83.TIMER
123672 +15.0% 142254 ± 3% softirqs.CPU85.TIMER
123910 +12.4% 139236 softirqs.CPU86.TIMER
123664 +12.3% 138908 softirqs.CPU87.TIMER
124068 +12.1% 139076 softirqs.CPU88.TIMER
123802 +12.2% 138885 softirqs.CPU89.TIMER
123826 +12.0% 138649 softirqs.CPU90.TIMER
124086 +11.7% 138621 softirqs.CPU91.TIMER
123586 +12.4% 138925 softirqs.CPU92.TIMER
123696 +12.5% 139160 softirqs.CPU93.TIMER
123720 +11.8% 138367 softirqs.CPU94.TIMER
124536 +11.3% 138666 softirqs.CPU95.TIMER
123950 +12.2% 139071 softirqs.CPU96.TIMER
123876 +12.1% 138860 softirqs.CPU97.TIMER
123700 +12.2% 138807 softirqs.CPU98.TIMER
123452 +12.5% 138905 softirqs.CPU99.TIMER
35617264 +12.0% 39908742 softirqs.TIMER
554.75 ± 65% +547.1% 3590 ± 96% interrupts.32:IR-PCI-MSI.2097155-edge.eth0-TxRx-2
452452 -21.0% 357575 ± 8% interrupts.CPU0.LOC:Local_timer_interrupts
1315 ± 10% +55.9% 2051 ± 16% interrupts.CPU0.RES:Rescheduling_interrupts
452347 -21.1% 357112 ± 8% interrupts.CPU1.LOC:Local_timer_interrupts
452532 -21.3% 356223 ± 8% interrupts.CPU10.LOC:Local_timer_interrupts
448763 -20.9% 354890 ± 8% interrupts.CPU100.LOC:Local_timer_interrupts
448680 -20.8% 355364 ± 8% interrupts.CPU101.LOC:Local_timer_interrupts
447608 -20.7% 355149 ± 8% interrupts.CPU102.LOC:Local_timer_interrupts
448227 -20.9% 354352 ± 8% interrupts.CPU103.LOC:Local_timer_interrupts
448996 -20.6% 356349 ± 8% interrupts.CPU104.LOC:Local_timer_interrupts
448510 -20.7% 355821 ± 8% interrupts.CPU105.LOC:Local_timer_interrupts
447226 -20.7% 354825 ± 8% interrupts.CPU106.LOC:Local_timer_interrupts
446828 -20.9% 353563 ± 8% interrupts.CPU107.LOC:Local_timer_interrupts
446384 -20.5% 355009 ± 8% interrupts.CPU108.LOC:Local_timer_interrupts
446823 -20.7% 354511 ± 8% interrupts.CPU109.LOC:Local_timer_interrupts
452246 -21.1% 356984 ± 8% interrupts.CPU11.LOC:Local_timer_interrupts
447034 -20.5% 355451 ± 8% interrupts.CPU110.LOC:Local_timer_interrupts
447783 -20.9% 354154 ± 8% interrupts.CPU111.LOC:Local_timer_interrupts
446657 -20.5% 355120 ± 8% interrupts.CPU112.LOC:Local_timer_interrupts
445392 -20.3% 354902 ± 8% interrupts.CPU113.LOC:Local_timer_interrupts
448354 -20.7% 355621 ± 8% interrupts.CPU114.LOC:Local_timer_interrupts
72.00 ± 84% +293.8% 283.50 ± 28% interrupts.CPU114.RES:Rescheduling_interrupts
447962 -20.8% 354707 ± 8% interrupts.CPU115.LOC:Local_timer_interrupts
447215 -20.7% 354848 ± 8% interrupts.CPU116.LOC:Local_timer_interrupts
447530 -20.8% 354411 ± 8% interrupts.CPU117.LOC:Local_timer_interrupts
448647 -20.9% 354998 ± 8% interrupts.CPU118.LOC:Local_timer_interrupts
447753 -20.6% 355316 ± 8% interrupts.CPU119.LOC:Local_timer_interrupts
554.75 ± 65% +547.1% 3590 ± 96% interrupts.CPU12.32:IR-PCI-MSI.2097155-edge.eth0-TxRx-2
451965 -20.9% 357310 ± 8% interrupts.CPU12.LOC:Local_timer_interrupts
448234 -20.7% 355312 ± 8% interrupts.CPU120.LOC:Local_timer_interrupts
447794 -20.7% 354971 ± 8% interrupts.CPU121.LOC:Local_timer_interrupts
448691 -20.4% 357243 ± 8% interrupts.CPU122.LOC:Local_timer_interrupts
5632 ± 32% -33.5% 3745 interrupts.CPU122.NMI:Non-maskable_interrupts
5632 ± 32% -33.5% 3745 interrupts.CPU122.PMI:Performance_monitoring_interrupts
447313 -20.2% 356771 ± 9% interrupts.CPU123.LOC:Local_timer_interrupts
91.50 ± 57% -59.0% 37.50 ±128% interrupts.CPU123.RES:Rescheduling_interrupts
448249 -20.3% 357447 ± 8% interrupts.CPU124.LOC:Local_timer_interrupts
448297 -20.6% 355804 ± 8% interrupts.CPU125.LOC:Local_timer_interrupts
449032 -21.0% 354647 ± 8% interrupts.CPU126.LOC:Local_timer_interrupts
447604 -20.8% 354651 ± 8% interrupts.CPU127.LOC:Local_timer_interrupts
447392 -20.4% 356084 ± 8% interrupts.CPU128.LOC:Local_timer_interrupts
169.25 ± 48% +405.2% 855.00 ±119% interrupts.CPU128.RES:Rescheduling_interrupts
447344 -20.7% 354766 ± 8% interrupts.CPU129.LOC:Local_timer_interrupts
6632 ± 24% -29.3% 4686 ± 33% interrupts.CPU129.NMI:Non-maskable_interrupts
6632 ± 24% -29.3% 4686 ± 33% interrupts.CPU129.PMI:Performance_monitoring_interrupts
452316 -21.2% 356567 ± 8% interrupts.CPU13.LOC:Local_timer_interrupts
449103 -20.4% 357697 ± 8% interrupts.CPU130.LOC:Local_timer_interrupts
449086 -20.5% 357205 ± 8% interrupts.CPU131.LOC:Local_timer_interrupts
450238 -21.0% 355846 ± 8% interrupts.CPU132.LOC:Local_timer_interrupts
447135 -20.8% 354142 ± 8% interrupts.CPU133.LOC:Local_timer_interrupts
446450 -20.1% 356602 ± 8% interrupts.CPU134.LOC:Local_timer_interrupts
449356 -20.5% 357068 ± 9% interrupts.CPU135.LOC:Local_timer_interrupts
447938 -20.5% 356253 ± 8% interrupts.CPU136.LOC:Local_timer_interrupts
447648 -20.6% 355408 ± 8% interrupts.CPU137.LOC:Local_timer_interrupts
3763 +74.3% 6559 ± 24% interrupts.CPU137.NMI:Non-maskable_interrupts
3763 +74.3% 6559 ± 24% interrupts.CPU137.PMI:Performance_monitoring_interrupts
446747 -20.3% 355856 ± 8% interrupts.CPU138.LOC:Local_timer_interrupts
447237 -20.8% 354392 ± 8% interrupts.CPU139.LOC:Local_timer_interrupts
452138 -21.0% 357163 ± 8% interrupts.CPU14.LOC:Local_timer_interrupts
448154 -20.5% 356451 ± 8% interrupts.CPU140.LOC:Local_timer_interrupts
176.25 ± 53% +131.8% 408.50 ± 38% interrupts.CPU140.RES:Rescheduling_interrupts
447880 -20.6% 355665 ± 8% interrupts.CPU141.LOC:Local_timer_interrupts
447680 -20.7% 354855 ± 8% interrupts.CPU142.LOC:Local_timer_interrupts
446659 -20.6% 354589 ± 8% interrupts.CPU143.LOC:Local_timer_interrupts
445820 -20.5% 354379 ± 8% interrupts.CPU144.LOC:Local_timer_interrupts
6541 ± 24% -42.8% 3738 interrupts.CPU144.NMI:Non-maskable_interrupts
6541 ± 24% -42.8% 3738 interrupts.CPU144.PMI:Performance_monitoring_interrupts
447253 -20.6% 355128 ± 8% interrupts.CPU145.LOC:Local_timer_interrupts
447763 -20.8% 354763 ± 8% interrupts.CPU146.LOC:Local_timer_interrupts
447772 -20.7% 354890 ± 8% interrupts.CPU147.LOC:Local_timer_interrupts
446939 -20.7% 354440 ± 8% interrupts.CPU148.LOC:Local_timer_interrupts
447464 -20.7% 354793 ± 8% interrupts.CPU149.LOC:Local_timer_interrupts
452738 -21.2% 356559 ± 8% interrupts.CPU15.LOC:Local_timer_interrupts
446799 -20.6% 354599 ± 8% interrupts.CPU150.LOC:Local_timer_interrupts
447879 -20.9% 354287 ± 8% interrupts.CPU151.LOC:Local_timer_interrupts
448426 -20.6% 355888 ± 8% interrupts.CPU152.LOC:Local_timer_interrupts
449366 -20.7% 356542 ± 8% interrupts.CPU153.LOC:Local_timer_interrupts
447867 -20.7% 355067 ± 8% interrupts.CPU154.LOC:Local_timer_interrupts
447719 -20.7% 355013 ± 8% interrupts.CPU155.LOC:Local_timer_interrupts
446511 -20.3% 355972 ± 8% interrupts.CPU156.LOC:Local_timer_interrupts
449192 -20.9% 355497 ± 8% interrupts.CPU157.LOC:Local_timer_interrupts
446758 -20.5% 355083 ± 8% interrupts.CPU158.LOC:Local_timer_interrupts
447084 -20.6% 355107 ± 8% interrupts.CPU159.LOC:Local_timer_interrupts
452662 -21.1% 357106 ± 8% interrupts.CPU16.LOC:Local_timer_interrupts
447831 -20.8% 354684 ± 8% interrupts.CPU160.LOC:Local_timer_interrupts
109.50 ± 15% -37.2% 68.75 ± 22% interrupts.CPU160.RES:Rescheduling_interrupts
447477 -20.6% 355188 ± 8% interrupts.CPU161.LOC:Local_timer_interrupts
448495 -20.8% 355285 ± 8% interrupts.CPU162.LOC:Local_timer_interrupts
449288 -20.9% 355477 ± 8% interrupts.CPU163.LOC:Local_timer_interrupts
447985 -20.6% 355568 ± 8% interrupts.CPU164.LOC:Local_timer_interrupts
446601 -20.5% 355191 ± 8% interrupts.CPU165.LOC:Local_timer_interrupts
448135 -20.6% 355818 ± 8% interrupts.CPU166.LOC:Local_timer_interrupts
448461 -20.6% 356227 ± 8% interrupts.CPU167.LOC:Local_timer_interrupts
447391 -20.7% 354733 ± 8% interrupts.CPU168.LOC:Local_timer_interrupts
446612 -20.5% 354909 ± 8% interrupts.CPU169.LOC:Local_timer_interrupts
452748 -21.3% 356314 ± 8% interrupts.CPU17.LOC:Local_timer_interrupts
447238 -20.4% 355830 ± 8% interrupts.CPU170.LOC:Local_timer_interrupts
448308 -20.9% 354608 ± 8% interrupts.CPU171.LOC:Local_timer_interrupts
449043 -21.2% 354035 ± 8% interrupts.CPU172.LOC:Local_timer_interrupts
449395 -20.9% 355374 ± 9% interrupts.CPU173.LOC:Local_timer_interrupts
5700 ± 32% -17.8% 4687 ± 33% interrupts.CPU173.NMI:Non-maskable_interrupts
5700 ± 32% -17.8% 4687 ± 33% interrupts.CPU173.PMI:Performance_monitoring_interrupts
446781 -20.4% 355457 ± 8% interrupts.CPU174.LOC:Local_timer_interrupts
446541 -20.7% 354087 ± 8% interrupts.CPU175.LOC:Local_timer_interrupts
447728 -20.6% 355499 ± 8% interrupts.CPU176.LOC:Local_timer_interrupts
447740 -20.5% 355885 ± 8% interrupts.CPU177.LOC:Local_timer_interrupts
6623 ± 24% -43.3% 3758 interrupts.CPU177.NMI:Non-maskable_interrupts
6623 ± 24% -43.3% 3758 interrupts.CPU177.PMI:Performance_monitoring_interrupts
447747 -20.7% 355148 ± 8% interrupts.CPU178.LOC:Local_timer_interrupts
447285 -20.6% 354994 ± 8% interrupts.CPU179.LOC:Local_timer_interrupts
450911 -21.1% 355809 ± 8% interrupts.CPU18.LOC:Local_timer_interrupts
447180 -20.7% 354485 ± 8% interrupts.CPU180.LOC:Local_timer_interrupts
447702 -20.7% 354973 ± 8% interrupts.CPU181.LOC:Local_timer_interrupts
447897 -20.7% 355132 ± 8% interrupts.CPU182.LOC:Local_timer_interrupts
449321 -21.0% 355184 ± 8% interrupts.CPU183.LOC:Local_timer_interrupts
448357 -20.8% 354979 ± 8% interrupts.CPU184.LOC:Local_timer_interrupts
165.00 ± 61% -81.2% 31.00 ± 63% interrupts.CPU184.RES:Rescheduling_interrupts
447698 -20.4% 356305 ± 8% interrupts.CPU185.LOC:Local_timer_interrupts
446780 -20.6% 354611 ± 8% interrupts.CPU186.LOC:Local_timer_interrupts
447678 -20.6% 355625 ± 8% interrupts.CPU187.LOC:Local_timer_interrupts
447756 -20.3% 356660 ± 8% interrupts.CPU188.LOC:Local_timer_interrupts
448842 -20.5% 356728 ± 8% interrupts.CPU189.LOC:Local_timer_interrupts
452463 -21.2% 356696 ± 8% interrupts.CPU19.LOC:Local_timer_interrupts
448558 -20.6% 355985 ± 8% interrupts.CPU190.LOC:Local_timer_interrupts
6581 ± 24% -28.9% 4680 ± 34% interrupts.CPU190.NMI:Non-maskable_interrupts
6581 ± 24% -28.9% 4680 ± 34% interrupts.CPU190.PMI:Performance_monitoring_interrupts
448186 -20.7% 355605 ± 8% interrupts.CPU191.LOC:Local_timer_interrupts
447640 -20.7% 354849 ± 8% interrupts.CPU192.LOC:Local_timer_interrupts
447828 -20.5% 355818 ± 8% interrupts.CPU193.LOC:Local_timer_interrupts
449769 -20.7% 356689 ± 8% interrupts.CPU194.LOC:Local_timer_interrupts
449120 -20.4% 357570 ± 9% interrupts.CPU195.LOC:Local_timer_interrupts
5641 ± 32% -33.7% 3738 interrupts.CPU195.NMI:Non-maskable_interrupts
5641 ± 32% -33.7% 3738 interrupts.CPU195.PMI:Performance_monitoring_interrupts
448037 -20.4% 356497 ± 7% interrupts.CPU196.LOC:Local_timer_interrupts
446302 -20.4% 355172 ± 8% interrupts.CPU197.LOC:Local_timer_interrupts
451541 -21.3% 355341 ± 8% interrupts.CPU198.LOC:Local_timer_interrupts
449452 -21.1% 354503 ± 9% interrupts.CPU199.LOC:Local_timer_interrupts
450751 -20.8% 356778 ± 8% interrupts.CPU2.LOC:Local_timer_interrupts
451672 -21.2% 356011 ± 8% interrupts.CPU20.LOC:Local_timer_interrupts
447614 -20.9% 354143 ± 8% interrupts.CPU200.LOC:Local_timer_interrupts
446364 -20.6% 354456 ± 8% interrupts.CPU201.LOC:Local_timer_interrupts
447150 -20.4% 355847 ± 8% interrupts.CPU202.LOC:Local_timer_interrupts
5662 ± 32% -17.4% 4678 ± 34% interrupts.CPU202.NMI:Non-maskable_interrupts
5662 ± 32% -17.4% 4678 ± 34% interrupts.CPU202.PMI:Performance_monitoring_interrupts
447324 -20.5% 355784 ± 8% interrupts.CPU203.LOC:Local_timer_interrupts
450353 -21.1% 355551 ± 8% interrupts.CPU204.LOC:Local_timer_interrupts
449486 -21.1% 354766 ± 8% interrupts.CPU205.LOC:Local_timer_interrupts
47.50 ±118% -84.7% 7.25 ± 15% interrupts.CPU205.RES:Rescheduling_interrupts
447640 -20.7% 355083 ± 8% interrupts.CPU206.LOC:Local_timer_interrupts
447443 -20.4% 356366 ± 8% interrupts.CPU207.LOC:Local_timer_interrupts
446651 -20.5% 355032 ± 8% interrupts.CPU208.LOC:Local_timer_interrupts
446807 -20.4% 355505 ± 8% interrupts.CPU209.LOC:Local_timer_interrupts
5643 ± 32% -33.5% 3751 interrupts.CPU209.NMI:Non-maskable_interrupts
5643 ± 32% -33.5% 3751 interrupts.CPU209.PMI:Performance_monitoring_interrupts
452259 -21.0% 357396 ± 8% interrupts.CPU21.LOC:Local_timer_interrupts
447655 -20.7% 354771 ± 8% interrupts.CPU210.LOC:Local_timer_interrupts
447098 -20.7% 354473 ± 8% interrupts.CPU211.LOC:Local_timer_interrupts
446681 -20.6% 354768 ± 8% interrupts.CPU212.LOC:Local_timer_interrupts
446397 -20.4% 355254 ± 8% interrupts.CPU213.LOC:Local_timer_interrupts
7.75 ± 5% +1032.3% 87.75 ±105% interrupts.CPU213.RES:Rescheduling_interrupts
449196 -20.8% 355834 ± 8% interrupts.CPU214.LOC:Local_timer_interrupts
447889 -20.7% 355386 ± 8% interrupts.CPU215.LOC:Local_timer_interrupts
448595 -21.1% 353795 ± 8% interrupts.CPU216.LOC:Local_timer_interrupts
448232 -21.0% 354308 ± 8% interrupts.CPU217.LOC:Local_timer_interrupts
446761 -20.6% 354787 ± 8% interrupts.CPU218.LOC:Local_timer_interrupts
446095 -20.5% 354569 ± 8% interrupts.CPU219.LOC:Local_timer_interrupts
451994 -21.1% 356839 ± 8% interrupts.CPU22.LOC:Local_timer_interrupts
448837 -21.0% 354621 ± 8% interrupts.CPU220.LOC:Local_timer_interrupts
448495 -21.1% 353771 ± 8% interrupts.CPU221.LOC:Local_timer_interrupts
446731 -20.6% 354886 ± 8% interrupts.CPU222.LOC:Local_timer_interrupts
446768 -20.7% 354324 ± 8% interrupts.CPU223.LOC:Local_timer_interrupts
446684 -20.6% 354750 ± 8% interrupts.CPU224.LOC:Local_timer_interrupts
446780 -20.6% 354621 ± 8% interrupts.CPU225.LOC:Local_timer_interrupts
448071 -20.6% 355846 ± 8% interrupts.CPU226.LOC:Local_timer_interrupts
447211 -20.6% 355043 ± 8% interrupts.CPU227.LOC:Local_timer_interrupts
447185 -20.7% 354717 ± 8% interrupts.CPU228.LOC:Local_timer_interrupts
446926 -20.8% 353965 ± 8% interrupts.CPU229.LOC:Local_timer_interrupts
452110 -20.8% 358288 ± 8% interrupts.CPU23.LOC:Local_timer_interrupts
448401 -20.3% 357175 ± 7% interrupts.CPU230.LOC:Local_timer_interrupts
449244 -20.8% 355611 ± 8% interrupts.CPU231.LOC:Local_timer_interrupts
449265 -21.1% 354397 ± 8% interrupts.CPU232.LOC:Local_timer_interrupts
5647 ± 33% -33.6% 3747 interrupts.CPU232.NMI:Non-maskable_interrupts
5647 ± 33% -33.6% 3747 interrupts.CPU232.PMI:Performance_monitoring_interrupts
447186 -20.7% 354486 ± 8% interrupts.CPU233.LOC:Local_timer_interrupts
448107 -21.0% 354228 ± 8% interrupts.CPU234.LOC:Local_timer_interrupts
7537 -37.9% 4681 ± 35% interrupts.CPU234.NMI:Non-maskable_interrupts
7537 -37.9% 4681 ± 35% interrupts.CPU234.PMI:Performance_monitoring_interrupts
447040 -20.7% 354506 ± 8% interrupts.CPU235.LOC:Local_timer_interrupts
447193 -20.9% 353777 ± 8% interrupts.CPU236.LOC:Local_timer_interrupts
446268 -20.6% 354295 ± 8% interrupts.CPU237.LOC:Local_timer_interrupts
449634 -20.8% 356255 ± 8% interrupts.CPU238.LOC:Local_timer_interrupts
449337 -20.8% 355992 ± 8% interrupts.CPU239.LOC:Local_timer_interrupts
451802 -20.9% 357381 ± 8% interrupts.CPU24.LOC:Local_timer_interrupts
447287 -20.7% 354800 ± 8% interrupts.CPU240.LOC:Local_timer_interrupts
446264 -20.8% 353437 ± 8% interrupts.CPU241.LOC:Local_timer_interrupts
447521 -20.6% 355538 ± 8% interrupts.CPU242.LOC:Local_timer_interrupts
448368 -20.8% 355108 ± 8% interrupts.CPU243.LOC:Local_timer_interrupts
448794 -21.2% 353861 ± 8% interrupts.CPU244.LOC:Local_timer_interrupts
448265 -21.1% 353723 ± 8% interrupts.CPU245.LOC:Local_timer_interrupts
447404 -20.6% 355385 ± 8% interrupts.CPU246.LOC:Local_timer_interrupts
448265 -20.7% 355456 ± 8% interrupts.CPU247.LOC:Local_timer_interrupts
447316 -20.7% 354842 ± 8% interrupts.CPU248.LOC:Local_timer_interrupts
447886 -20.7% 355020 ± 8% interrupts.CPU249.LOC:Local_timer_interrupts
452147 -21.3% 355786 ± 8% interrupts.CPU25.LOC:Local_timer_interrupts
448406 -20.8% 355289 ± 8% interrupts.CPU250.LOC:Local_timer_interrupts
448566 -20.9% 354726 ± 8% interrupts.CPU251.LOC:Local_timer_interrupts
447607 -21.0% 353757 ± 8% interrupts.CPU252.LOC:Local_timer_interrupts
28.50 ± 68% +173.7% 78.00 ± 39% interrupts.CPU252.RES:Rescheduling_interrupts
448037 -20.9% 354453 ± 8% interrupts.CPU253.LOC:Local_timer_interrupts
449413 -20.8% 356025 ± 8% interrupts.CPU254.LOC:Local_timer_interrupts
449548 -20.8% 356202 ± 8% interrupts.CPU255.LOC:Local_timer_interrupts
41.00 ±110% -84.8% 6.25 ± 13% interrupts.CPU255.RES:Rescheduling_interrupts
449239 -21.0% 354887 ± 8% interrupts.CPU256.LOC:Local_timer_interrupts
447192 -20.8% 354224 ± 8% interrupts.CPU257.LOC:Local_timer_interrupts
448177 -20.7% 355300 ± 8% interrupts.CPU258.LOC:Local_timer_interrupts
448275 -20.6% 355711 ± 8% interrupts.CPU259.LOC:Local_timer_interrupts
451573 -21.3% 355400 ± 8% interrupts.CPU26.LOC:Local_timer_interrupts
449188 -20.7% 356207 ± 8% interrupts.CPU260.LOC:Local_timer_interrupts
449309 -20.8% 355852 ± 8% interrupts.CPU261.LOC:Local_timer_interrupts
447847 -20.8% 354847 ± 8% interrupts.CPU262.LOC:Local_timer_interrupts
446950 -20.5% 355229 ± 8% interrupts.CPU263.LOC:Local_timer_interrupts
6496 ± 24% -42.7% 3722 interrupts.CPU263.NMI:Non-maskable_interrupts
6496 ± 24% -42.7% 3722 interrupts.CPU263.PMI:Performance_monitoring_interrupts
449945 -20.8% 356417 ± 8% interrupts.CPU264.LOC:Local_timer_interrupts
450439 -21.0% 355899 ± 8% interrupts.CPU265.LOC:Local_timer_interrupts
450339 -20.7% 357335 ± 8% interrupts.CPU266.LOC:Local_timer_interrupts
450174 -20.2% 359229 ± 9% interrupts.CPU267.LOC:Local_timer_interrupts
451002 -20.9% 356683 ± 8% interrupts.CPU268.LOC:Local_timer_interrupts
450462 -21.1% 355569 ± 8% interrupts.CPU269.LOC:Local_timer_interrupts
452188 -21.0% 357345 ± 8% interrupts.CPU27.LOC:Local_timer_interrupts
453310 -21.7% 355167 ± 8% interrupts.CPU270.LOC:Local_timer_interrupts
451241 -21.2% 355504 ± 8% interrupts.CPU271.LOC:Local_timer_interrupts
451114 -21.3% 355154 ± 8% interrupts.CPU272.LOC:Local_timer_interrupts
450134 -21.1% 355326 ± 8% interrupts.CPU273.LOC:Local_timer_interrupts
450431 -20.9% 356194 ± 8% interrupts.CPU274.LOC:Local_timer_interrupts
19.00 ± 45% +173.7% 52.00 ± 62% interrupts.CPU274.RES:Rescheduling_interrupts
450807 -20.8% 357003 ± 8% interrupts.CPU275.LOC:Local_timer_interrupts
453075 -21.5% 355714 ± 8% interrupts.CPU276.LOC:Local_timer_interrupts
450048 -21.0% 355644 ± 8% interrupts.CPU277.LOC:Local_timer_interrupts
449561 -20.7% 356522 ± 8% interrupts.CPU278.LOC:Local_timer_interrupts
450017 -20.4% 358363 ± 9% interrupts.CPU279.LOC:Local_timer_interrupts
451493 -21.2% 355798 ± 8% interrupts.CPU28.LOC:Local_timer_interrupts
160.00 ± 5% +51.4% 242.25 ± 9% interrupts.CPU28.RES:Rescheduling_interrupts
450158 -21.0% 355626 ± 8% interrupts.CPU280.LOC:Local_timer_interrupts
450175 -21.0% 355780 ± 8% interrupts.CPU281.LOC:Local_timer_interrupts
450065 -20.9% 356051 ± 8% interrupts.CPU282.LOC:Local_timer_interrupts
447777 -20.5% 355761 ± 8% interrupts.CPU283.LOC:Local_timer_interrupts
450110 -20.9% 356052 ± 8% interrupts.CPU284.LOC:Local_timer_interrupts
6524 ± 24% -28.6% 4657 ± 34% interrupts.CPU284.NMI:Non-maskable_interrupts
6524 ± 24% -28.6% 4657 ± 34% interrupts.CPU284.PMI:Performance_monitoring_interrupts
449420 -20.9% 355525 ± 8% interrupts.CPU285.LOC:Local_timer_interrupts
449800 -20.7% 356713 ± 8% interrupts.CPU286.LOC:Local_timer_interrupts
459915 -20.2% 366810 ± 9% interrupts.CPU287.LOC:Local_timer_interrupts
451800 -21.2% 355897 ± 8% interrupts.CPU29.LOC:Local_timer_interrupts
451158 -21.0% 356357 ± 8% interrupts.CPU3.LOC:Local_timer_interrupts
452347 -21.3% 355924 ± 8% interrupts.CPU30.LOC:Local_timer_interrupts
3777 +98.4% 7492 interrupts.CPU30.NMI:Non-maskable_interrupts
3777 +98.4% 7492 interrupts.CPU30.PMI:Performance_monitoring_interrupts
128.75 ± 31% +86.2% 239.75 ± 10% interrupts.CPU30.RES:Rescheduling_interrupts
451634 -20.8% 357831 ± 8% interrupts.CPU31.LOC:Local_timer_interrupts
452787 -20.9% 358017 ± 8% interrupts.CPU32.LOC:Local_timer_interrupts
70.50 ± 38% +279.8% 267.75 ± 54% interrupts.CPU32.RES:Rescheduling_interrupts
451923 -20.9% 357380 ± 8% interrupts.CPU33.LOC:Local_timer_interrupts
452722 -21.4% 355941 ± 8% interrupts.CPU34.LOC:Local_timer_interrupts
448603 -20.4% 357004 ± 8% interrupts.CPU35.LOC:Local_timer_interrupts
452210 -21.3% 355998 ± 8% interrupts.CPU36.LOC:Local_timer_interrupts
451882 -21.3% 355544 ± 8% interrupts.CPU37.LOC:Local_timer_interrupts
453383 -21.4% 356383 ± 8% interrupts.CPU38.LOC:Local_timer_interrupts
452809 -21.0% 357598 ± 8% interrupts.CPU39.LOC:Local_timer_interrupts
452315 -21.0% 357529 ± 8% interrupts.CPU4.LOC:Local_timer_interrupts
450608 -21.2% 354930 ± 8% interrupts.CPU40.LOC:Local_timer_interrupts
450081 -20.9% 355990 ± 8% interrupts.CPU41.LOC:Local_timer_interrupts
450043 -20.8% 356507 ± 8% interrupts.CPU42.LOC:Local_timer_interrupts
22.75 ± 48% +236.3% 76.50 ± 40% interrupts.CPU42.RES:Rescheduling_interrupts
450140 -20.8% 356378 ± 8% interrupts.CPU43.LOC:Local_timer_interrupts
449695 -20.7% 356421 ± 8% interrupts.CPU44.LOC:Local_timer_interrupts
18.50 ± 43% +625.7% 134.25 ± 80% interrupts.CPU44.RES:Rescheduling_interrupts
450168 -20.8% 356636 ± 8% interrupts.CPU45.LOC:Local_timer_interrupts
451761 -21.3% 355664 ± 8% interrupts.CPU46.LOC:Local_timer_interrupts
450482 -20.8% 356781 ± 8% interrupts.CPU47.LOC:Local_timer_interrupts
53.75 ± 80% +245.6% 185.75 ± 57% interrupts.CPU47.RES:Rescheduling_interrupts
450849 -21.0% 356318 ± 8% interrupts.CPU48.LOC:Local_timer_interrupts
449530 -20.8% 356024 ± 8% interrupts.CPU49.LOC:Local_timer_interrupts
452185 -21.2% 356424 ± 8% interrupts.CPU5.LOC:Local_timer_interrupts
452374 -21.2% 356685 ± 8% interrupts.CPU50.LOC:Local_timer_interrupts
451158 -20.7% 357779 ± 9% interrupts.CPU51.LOC:Local_timer_interrupts
451039 -20.6% 357966 ± 8% interrupts.CPU52.LOC:Local_timer_interrupts
5682 ± 32% -34.2% 3740 interrupts.CPU52.NMI:Non-maskable_interrupts
5682 ± 32% -34.2% 3740 interrupts.CPU52.PMI:Performance_monitoring_interrupts
450978 -21.0% 356243 ± 8% interrupts.CPU53.LOC:Local_timer_interrupts
452251 -21.5% 355052 ± 9% interrupts.CPU54.LOC:Local_timer_interrupts
450986 -21.2% 355254 ± 8% interrupts.CPU55.LOC:Local_timer_interrupts
450685 -21.0% 356148 ± 8% interrupts.CPU56.LOC:Local_timer_interrupts
449475 -20.9% 355563 ± 8% interrupts.CPU57.LOC:Local_timer_interrupts
451278 -20.7% 357853 ± 8% interrupts.CPU58.LOC:Local_timer_interrupts
169.75 ± 60% -63.6% 61.75 ± 99% interrupts.CPU58.RES:Rescheduling_interrupts
450102 -20.5% 357925 ± 8% interrupts.CPU59.LOC:Local_timer_interrupts
451723 -21.3% 355340 ± 8% interrupts.CPU6.LOC:Local_timer_interrupts
454798 -21.5% 356820 ± 8% interrupts.CPU60.LOC:Local_timer_interrupts
7545 -50.2% 3758 interrupts.CPU60.NMI:Non-maskable_interrupts
7545 -50.2% 3758 interrupts.CPU60.PMI:Performance_monitoring_interrupts
451436 -21.3% 355493 ± 8% interrupts.CPU61.LOC:Local_timer_interrupts
450910 -20.8% 357077 ± 8% interrupts.CPU62.LOC:Local_timer_interrupts
450686 -20.3% 359334 ± 9% interrupts.CPU63.LOC:Local_timer_interrupts
450216 -20.8% 356564 ± 8% interrupts.CPU64.LOC:Local_timer_interrupts
449965 -21.1% 354963 ± 8% interrupts.CPU65.LOC:Local_timer_interrupts
450709 -21.0% 355952 ± 8% interrupts.CPU66.LOC:Local_timer_interrupts
450160 -21.0% 355775 ± 8% interrupts.CPU67.LOC:Local_timer_interrupts
450297 -21.1% 355375 ± 8% interrupts.CPU68.LOC:Local_timer_interrupts
449937 -20.9% 356013 ± 8% interrupts.CPU69.LOC:Local_timer_interrupts
5674 ± 32% -17.2% 4700 ± 34% interrupts.CPU69.NMI:Non-maskable_interrupts
5674 ± 32% -17.2% 4700 ± 34% interrupts.CPU69.PMI:Performance_monitoring_interrupts
451977 -21.0% 356885 ± 8% interrupts.CPU7.LOC:Local_timer_interrupts
449651 -20.9% 355504 ± 8% interrupts.CPU70.LOC:Local_timer_interrupts
451073 -20.9% 356589 ± 8% interrupts.CPU71.LOC:Local_timer_interrupts
447324 -20.8% 354292 ± 8% interrupts.CPU72.LOC:Local_timer_interrupts
6588 ± 24% -43.2% 3740 interrupts.CPU72.NMI:Non-maskable_interrupts
6588 ± 24% -43.2% 3740 interrupts.CPU72.PMI:Performance_monitoring_interrupts
240.50 ± 7% -36.0% 154.00 ± 50% interrupts.CPU72.RES:Rescheduling_interrupts
447170 -20.7% 354707 ± 8% interrupts.CPU73.LOC:Local_timer_interrupts
448611 -20.8% 355143 ± 8% interrupts.CPU74.LOC:Local_timer_interrupts
447417 -20.8% 354345 ± 8% interrupts.CPU75.LOC:Local_timer_interrupts
448190 -20.8% 355035 ± 8% interrupts.CPU76.LOC:Local_timer_interrupts
448346 -20.9% 354442 ± 8% interrupts.CPU77.LOC:Local_timer_interrupts
6594 ± 24% -43.4% 3735 interrupts.CPU77.NMI:Non-maskable_interrupts
6594 ± 24% -43.4% 3735 interrupts.CPU77.PMI:Performance_monitoring_interrupts
448452 -20.6% 355866 ± 8% interrupts.CPU78.LOC:Local_timer_interrupts
447509 -20.7% 354660 ± 8% interrupts.CPU79.LOC:Local_timer_interrupts
451912 -20.9% 357687 ± 8% interrupts.CPU8.LOC:Local_timer_interrupts
449808 -20.9% 355596 ± 8% interrupts.CPU80.LOC:Local_timer_interrupts
449111 -21.0% 354994 ± 8% interrupts.CPU81.LOC:Local_timer_interrupts
6642 ± 24% -29.3% 4699 ± 34% interrupts.CPU81.NMI:Non-maskable_interrupts
6642 ± 24% -29.3% 4699 ± 34% interrupts.CPU81.PMI:Performance_monitoring_interrupts
447536 -20.8% 354299 ± 8% interrupts.CPU82.LOC:Local_timer_interrupts
448241 -21.0% 354323 ± 8% interrupts.CPU83.LOC:Local_timer_interrupts
447498 -20.7% 355042 ± 8% interrupts.CPU84.LOC:Local_timer_interrupts
484.50 ± 82% -80.6% 94.00 ± 47% interrupts.CPU84.RES:Rescheduling_interrupts
446794 -20.5% 355249 ± 8% interrupts.CPU85.LOC:Local_timer_interrupts
448191 -20.9% 354594 ± 8% interrupts.CPU86.LOC:Local_timer_interrupts
447635 -20.9% 353877 ± 8% interrupts.CPU87.LOC:Local_timer_interrupts
448706 -20.8% 355250 ± 8% interrupts.CPU88.LOC:Local_timer_interrupts
3738 +50.5% 5627 ± 33% interrupts.CPU88.NMI:Non-maskable_interrupts
3738 +50.5% 5627 ± 33% interrupts.CPU88.PMI:Performance_monitoring_interrupts
448312 -20.9% 354709 ± 8% interrupts.CPU89.LOC:Local_timer_interrupts
452329 -21.0% 357333 ± 8% interrupts.CPU9.LOC:Local_timer_interrupts
449864 -21.1% 354745 ± 8% interrupts.CPU90.LOC:Local_timer_interrupts
451193 -21.5% 353997 ± 8% interrupts.CPU91.LOC:Local_timer_interrupts
447166 -20.4% 356159 ± 8% interrupts.CPU92.LOC:Local_timer_interrupts
447364 -20.7% 354752 ± 8% interrupts.CPU93.LOC:Local_timer_interrupts
449764 -21.0% 355410 ± 8% interrupts.CPU94.LOC:Local_timer_interrupts
448393 -20.7% 355788 ± 8% interrupts.CPU95.LOC:Local_timer_interrupts
450191 -21.1% 355404 ± 8% interrupts.CPU96.LOC:Local_timer_interrupts
447651 -20.6% 355343 ± 8% interrupts.CPU97.LOC:Local_timer_interrupts
447679 -20.8% 354531 ± 8% interrupts.CPU98.LOC:Local_timer_interrupts
447448 -20.9% 353795 ± 8% interrupts.CPU99.LOC:Local_timer_interrupts
1.293e+08 -20.8% 1.024e+08 ± 8% interrupts.LOC:Local_timer_interrupts
will-it-scale.per_process_ops
680 +---------------------------------------------------------------------+
| ++.++. .+ .++. + +.++.+ +.++.+.++. : ++.+.+ ++.++.+.++.++.|
660 |-+ + + +.+ ++.+.+ |
640 |-+ |
| |
620 |-+ |
| |
600 |-+ |
| O O OO O O |
580 |-+O OO O O O OO |
560 |-O O O |
| O O |
540 |-+ OO O O O OO O OO O |
| O |
520 +---------------------------------------------------------------------+
will-it-scale.workload
200000 +------------------------------------------------------------------+
195000 |.+ .+ +. +. .+ |
| ++.++. +. +.+ + +.++.+ ++.+.++.+ : ++.++ +.++.++.++.++.|
190000 |-+ + + +.+ +.++.+ |
185000 |-+ |
| |
180000 |-+ |
175000 |-+ |
170000 |-+ O O O |
| O O O O O O |
165000 |-+ O O O O OO O |
160000 |-O |
| O O OO OO OO OO O |
155000 |-+ O OO |
150000 +------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[x86, sched] 1567c3e346: vm-scalability.median -15.8% regression
by kernel test robot
Greeting,
FYI, we noticed a -15.8% regression of vm-scalability.median due to commit:
commit: 1567c3e3467cddeb019a7b53ec632f834b6a9239 ("x86, sched: Add support for frequency invariance")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: vm-scalability
on test machine: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
with following parameters:
runtime: 300s
size: 8T
test: anon-cow-seq
cpufreq_governor: performance
ucode: 0x16
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
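For context, anon-cow-seq stresses copy-on-write on anonymous, THP-backed memory; the sketch below assumes the usual parent-populates/child-writes pattern (it is not taken from the suite, and the region size is a hypothetical choice), which lines up with the do_huge_pmd_wp_page entries in the profile at the end of this report:
/*
 * Sketch of an "anon-cow-seq"-style case, assuming the parent populates an
 * anonymous THP-backed region, forks, and the child writes through it
 * sequentially so the writes take copy-on-write faults.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#define REGION_SIZE (1UL << 30)   /* hypothetical 1 GiB region */
int main(void)
{
	char *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	madvise(p, REGION_SIZE, MADV_HUGEPAGE);   /* ask for THP, best effort */
	memset(p, 1, REGION_SIZE);                /* parent populates the region */
	pid_t pid = fork();
	if (pid == 0) {
		/* child: sequential writes trigger a COW fault per page/huge page */
		for (size_t off = 0; off < REGION_SIZE; off += 4096)
			p[off] = 2;
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	return 0;
}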
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-20191114.cgz/300s/8T/lkp-hsw-4ex1/anon-cow-seq/vm-scalability/0x16
commit:
2a4b03ffc6 ("sched/fair: Prevent unlimited runtime on throttled group")
1567c3e346 ("x86, sched: Add support for frequency invariance")
2a4b03ffc69f2ded 1567c3e3467cddeb019a7b53ec6
---------------- ---------------------------
%stddev %change %stddev
\ | \
210443 -15.8% 177280 vm-scalability.median
0.07 ± 6% -64.9% 0.02 ± 16% vm-scalability.median_stddev
30242018 -16.0% 25399016 vm-scalability.throughput
241304 ± 2% +53.9% 371337 ± 2% vm-scalability.time.involuntary_context_switches
8973780 -12.4% 7862380 vm-scalability.time.minor_page_faults
10507 +7.6% 11301 vm-scalability.time.percent_of_cpu_this_job_got
17759 +22.0% 21664 ± 2% vm-scalability.time.system_time
14801 -10.4% 13262 vm-scalability.time.user_time
1023965 -63.3% 376066 ± 4% vm-scalability.time.voluntary_context_switches
7.967e+09 -11.1% 7.082e+09 vm-scalability.workload
2985 -11.4% 2643 ± 3% slabinfo.khugepaged_mm_slot.active_objs
2985 -11.4% 2643 ± 3% slabinfo.khugepaged_mm_slot.num_objs
36024164 -42.9% 20554898 ± 40% cpuidle.C1.time
7.277e+09 ± 41% -72.1% 2.027e+09 ±160% cpuidle.C3.time
17453257 ± 19% -74.4% 4462508 ±163% cpuidle.C3.usage
214574 ± 10% -50.2% 106926 ± 33% cpuidle.POLL.usage
26.26 -5.4 20.89 mpstat.cpu.all.idle%
0.02 ± 8% -0.0 0.01 ± 9% mpstat.cpu.all.iowait%
40.23 +8.8 49.05 mpstat.cpu.all.sys%
33.49 -3.4 30.05 ± 2% mpstat.cpu.all.usr%
1989988 -20.6% 1580840 ± 4% numa-numastat.node1.local_node
2026261 -20.6% 1608134 ± 4% numa-numastat.node1.numa_hit
3614123 ± 44% -56.4% 1576459 ± 5% numa-numastat.node2.local_node
3641345 ± 44% -56.0% 1603122 ± 5% numa-numastat.node2.numa_hit
26.50 -20.8% 21.00 vmstat.cpu.id
32.75 -10.7% 29.25 vmstat.cpu.us
9800 -34.3% 6434 vmstat.system.cs
290059 -6.9% 270063 ± 4% vmstat.system.in
1.012e+08 +12.3% 1.136e+08 meminfo.Active
1.012e+08 +12.3% 1.136e+08 meminfo.Active(anon)
92123057 +14.3% 1.053e+08 meminfo.AnonHugePages
1.009e+08 +12.3% 1.134e+08 meminfo.AnonPages
1.044e+08 +11.9% 1.169e+08 meminfo.Memused
25495045 +15.0% 29324284 numa-meminfo.node1.Active
25494976 +15.0% 29324244 numa-meminfo.node1.Active(anon)
23164225 +17.2% 27157731 numa-meminfo.node1.AnonHugePages
25417044 +15.1% 29267058 numa-meminfo.node1.AnonPages
26192254 +14.4% 29959010 numa-meminfo.node1.MemUsed
24839952 ± 3% +18.2% 29349199 numa-meminfo.node2.Active
24839901 ± 3% +18.2% 29349040 numa-meminfo.node2.Active(anon)
22604826 ± 3% +20.4% 27211067 numa-meminfo.node2.AnonHugePages
24777054 ± 3% +18.2% 29282273 numa-meminfo.node2.AnonPages
25718372 ± 2% +16.8% 30037056 numa-meminfo.node2.MemUsed
199045 ± 96% -99.6% 743.75 ± 52% numa-meminfo.node2.PageTables
54935 ± 20% -24.5% 41470 ± 10% numa-meminfo.node2.SUnreclaim
418.75 ± 94% -93.7% 26.50 ± 49% numa-meminfo.node3.Inactive(file)
4864 ± 15% +26.9% 6174 ± 10% numa-meminfo.node3.KernelStack
95371 ±172% +233.9% 318444 ± 57% numa-meminfo.node3.PageTables
6347307 +14.8% 7286035 numa-vmstat.node1.nr_active_anon
6328825 +14.9% 7268986 numa-vmstat.node1.nr_anon_pages
11261 +16.9% 13162 numa-vmstat.node1.nr_anon_transparent_hugepages
6347284 +14.8% 7285923 numa-vmstat.node1.nr_zone_active_anon
1324428 -15.2% 1122566 ± 3% numa-vmstat.node1.numa_hit
1204942 -16.0% 1011998 ± 3% numa-vmstat.node1.numa_local
6207902 ± 3% +17.7% 7308204 numa-vmstat.node2.nr_active_anon
6195059 ± 3% +17.6% 7287608 numa-vmstat.node2.nr_anon_pages
11047 ± 3% +19.7% 13218 numa-vmstat.node2.nr_anon_transparent_hugepages
49684 ± 96% -99.6% 186.00 ± 52% numa-vmstat.node2.nr_page_table_pages
13734 ± 20% -24.5% 10367 ± 10% numa-vmstat.node2.nr_slab_unreclaimable
6207738 ± 3% +17.7% 7308081 numa-vmstat.node2.nr_zone_active_anon
2125860 ± 40% -49.5% 1073183 ± 2% numa-vmstat.node2.numa_hit
2015486 ± 42% -52.2% 963317 ± 2% numa-vmstat.node2.numa_local
104.25 ± 94% -93.8% 6.50 ± 49% numa-vmstat.node3.nr_inactive_file
4864 ± 15% +26.7% 6160 ± 10% numa-vmstat.node3.nr_kernel_stack
23833 ±172% +230.0% 78649 ± 57% numa-vmstat.node3.nr_page_table_pages
104.25 ± 94% -93.8% 6.50 ± 49% numa-vmstat.node3.nr_zone_inactive_file
1.12 ± 19% -0.5 0.63 ± 20% perf-profile.calltrace.cycles-pp.do_huge_pmd_numa_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.81 ± 19% -0.2 0.60 ± 20% perf-profile.calltrace.cycles-pp.reuse_swap_page.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.33 ± 22% -0.2 0.17 ± 4% perf-profile.children.cycles-pp.migrate_misplaced_transhuge_page
0.09 ± 23% -0.1 0.03 ±100% perf-profile.children.cycles-pp.ttwu_do_activate
0.08 ± 19% -0.1 0.03 ±100% perf-profile.children.cycles-pp.enqueue_entity
0.15 ± 20% -0.1 0.10 ± 15% perf-profile.children.cycles-pp.wake_up_page_bit
0.14 ± 23% -0.1 0.09 ± 14% perf-profile.children.cycles-pp.__wake_up_common
0.14 ± 20% -0.1 0.09 ± 13% perf-profile.children.cycles-pp.autoremove_wake_function
0.09 ± 20% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.enqueue_task_fair
0.09 ± 23% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.activate_task
0.15 ± 21% -0.0 0.11 ± 17% perf-profile.children.cycles-pp.try_to_wake_up
0.15 ± 11% +0.0 0.17 ± 4% perf-profile.children.cycles-pp.schedule
0.08 ± 10% +0.0 0.12 ± 6% perf-profile.children.cycles-pp.newidle_balance
0.12 ± 16% +0.0 0.17 ± 9% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.1 0.05 perf-profile.children.cycles-pp.balance_fair
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.arch_scale_freq_tick
0.18 ± 10% +0.1 0.23 ± 8% perf-profile.children.cycles-pp.load_balance
0.00 +0.1 0.09 ± 9% perf-profile.children.cycles-pp.smpboot_thread_fn
0.40 ± 23% +0.3 0.68 ± 17% perf-profile.children.cycles-pp.task_tick_fair
0.01 ±173% +0.3 0.30 ± 16% perf-profile.children.cycles-pp.update_cfs_group
0.01 ±173% +0.3 0.30 ± 16% perf-profile.self.cycles-pp.update_cfs_group
8587 ± 12% -29.1% 6090 ± 9% sched_debug.cfs_rq:/.exec_clock.stddev
0.53 ± 45% -92.1% 0.04 ±173% sched_debug.cfs_rq:/.load_avg.min
0.54 ± 19% +38.3% 0.74 ± 10% sched_debug.cfs_rq:/.nr_running.avg
4.81 ± 12% +266.0% 17.59 ± 7% sched_debug.cfs_rq:/.nr_spread_over.avg
24.01 ± 8% +96.4% 47.14 ± 4% sched_debug.cfs_rq:/.nr_spread_over.max
5.02 ± 10% +123.0% 11.19 ± 6% sched_debug.cfs_rq:/.nr_spread_over.stddev
313.47 ± 18% -55.1% 140.75 ± 51% sched_debug.cfs_rq:/.runnable_load_avg.max
31.75 ± 9% -54.5% 14.46 ± 44% sched_debug.cfs_rq:/.runnable_load_avg.stddev
0.54 ± 19% +38.2% 0.75 ± 9% sched_debug.cpu.nr_running.avg
11567 ± 8% -28.4% 8277 ± 7% sched_debug.cpu.nr_switches.avg
4760 ± 9% -31.7% 3249 ± 4% sched_debug.cpu.nr_switches.stddev
-36.17 +31.8% -47.65 sched_debug.cpu.nr_uninterruptible.min
15.88 ± 2% +35.3% 21.49 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
9718 ± 10% -34.1% 6402 ± 9% sched_debug.cpu.sched_count.avg
4294 ± 8% -37.4% 2688 ± 8% sched_debug.cpu.sched_count.stddev
3936 ± 10% -53.6% 1828 ± 12% sched_debug.cpu.sched_goidle.avg
2045 ± 8% -50.0% 1022 ± 10% sched_debug.cpu.sched_goidle.stddev
4616 ± 9% -36.5% 2933 ± 10% sched_debug.cpu.ttwu_count.avg
11005 ± 9% -22.0% 8579 ± 9% sched_debug.cpu.ttwu_count.max
2271 ± 8% -34.9% 1478 ± 8% sched_debug.cpu.ttwu_count.stddev
915.40 ± 8% +42.3% 1303 ± 6% sched_debug.cpu.ttwu_local.avg
3927 ± 5% +19.3% 4684 ± 10% sched_debug.cpu.ttwu_local.max
488.40 ± 9% +40.6% 686.55 ± 3% sched_debug.cpu.ttwu_local.stddev
25252488 +12.3% 28363408 proc-vmstat.nr_active_anon
25182652 +12.4% 28296707 proc-vmstat.nr_anon_pages
44870 +14.2% 51263 proc-vmstat.nr_anon_transparent_hugepages
148.25 -5.7% 139.75 proc-vmstat.nr_dirtied
10541824 -3.0% 10230220 proc-vmstat.nr_dirty_background_threshold
21109424 -3.0% 20485453 proc-vmstat.nr_dirty_threshold
1.06e+08 -2.9% 1.029e+08 proc-vmstat.nr_free_pages
369.75 -1.4% 364.50 proc-vmstat.nr_inactive_file
99622 +7.3% 106911 proc-vmstat.nr_page_table_pages
144.25 -4.9% 137.25 proc-vmstat.nr_written
25252485 +12.3% 28363396 proc-vmstat.nr_zone_active_anon
369.75 -1.4% 364.50 proc-vmstat.nr_zone_inactive_file
2953466 -10.6% 2639571 proc-vmstat.numa_hint_faults
661711 -9.8% 596566 proc-vmstat.numa_hint_faults_local
11245421 -11.8% 9923535 proc-vmstat.numa_hit
5838019 -7.0% 5428548 proc-vmstat.numa_huge_pte_updates
11136518 -11.9% 9814538 proc-vmstat.numa_local
201352 ± 12% +234.6% 673715 ± 7% proc-vmstat.numa_pages_migrated
2.99e+09 -7.0% 2.78e+09 proc-vmstat.numa_pte_updates
2.382e+09 -18.9% 1.931e+09 proc-vmstat.pgalloc_normal
10046733 -11.2% 8918565 proc-vmstat.pgfault
2.381e+09 -18.9% 1.93e+09 proc-vmstat.pgfree
5.942e+08 -42.6% 3.409e+08 proc-vmstat.pgmigrate_fail
201352 ± 12% +234.6% 673715 ± 7% proc-vmstat.pgmigrate_success
3478289 -11.1% 3091553 proc-vmstat.thp_fault_alloc
8.85 ± 7% -16.8% 7.36 ± 12% perf-stat.i.MPKI
2.766e+10 ± 4% -7.4% 2.562e+10 ± 3% perf-stat.i.branch-instructions
20.31 +1.8 22.10 ± 4% perf-stat.i.cache-miss-rate%
5.089e+08 ± 4% -9.5% 4.604e+08 ± 3% perf-stat.i.cache-references
9955 ± 3% -33.0% 6665 ± 4% perf-stat.i.context-switches
1.73 ± 4% +9.9% 1.90 ± 3% perf-stat.i.cpi
1.287e+11 ± 4% +11.2% 1.431e+11 ± 3% perf-stat.i.cpu-cycles
485.10 ± 5% +108.0% 1009 ± 4% perf-stat.i.cpu-migrations
2.215e+10 ± 4% -6.5% 2.071e+10 ± 3% perf-stat.i.dTLB-loads
7.427e+09 ± 4% -8.1% 6.829e+09 ± 3% perf-stat.i.dTLB-stores
958368 ± 5% -30.9% 662160 ± 2% perf-stat.i.iTLB-load-misses
8.707e+10 ± 4% -7.4% 8.062e+10 ± 3% perf-stat.i.instructions
142754 ± 2% +11.3% 158927 ± 2% perf-stat.i.instructions-per-iTLB-miss
0.67 -13.3% 0.58 perf-stat.i.ipc
33135 ± 4% -10.7% 29584 ± 2% perf-stat.i.minor-faults
70.79 +6.0 76.82 perf-stat.i.node-load-miss-rate%
41028170 ± 4% +14.1% 46801837 ± 3% perf-stat.i.node-load-misses
1.85 ± 5% +0.9 2.78 ± 5% perf-stat.i.node-store-miss-rate%
584076 ± 6% +70.0% 992640 ± 4% perf-stat.i.node-store-misses
53445296 ± 5% -19.0% 43298204 ± 7% perf-stat.i.node-stores
33137 ± 4% -10.7% 29584 ± 2% perf-stat.i.page-faults
5.85 -1.7% 5.75 perf-stat.overall.MPKI
1.48 +19.6% 1.77 perf-stat.overall.cpi
1156 +16.7% 1350 ± 5% perf-stat.overall.cycles-between-cache-misses
92129 ± 6% +25.2% 115378 perf-stat.overall.instructions-per-iTLB-miss
0.67 -16.4% 0.56 perf-stat.overall.ipc
75.40 +3.8 79.17 perf-stat.overall.node-load-miss-rate%
1.11 ± 2% +1.2 2.33 ± 7% perf-stat.overall.node-store-miss-rate%
2.704e+10 -11.0% 2.408e+10 perf-stat.ps.branch-instructions
1.091e+08 -8.5% 99896941 ± 5% perf-stat.ps.cache-misses
4.979e+08 -12.5% 4.357e+08 perf-stat.ps.cache-references
9775 -34.7% 6386 ± 2% perf-stat.ps.context-switches
1.262e+11 +6.5% 1.344e+11 perf-stat.ps.cpu-cycles
487.90 ± 5% +100.7% 979.41 perf-stat.ps.cpu-migrations
2.166e+10 -10.1% 1.948e+10 perf-stat.ps.dTLB-loads
7.26e+09 -11.3% 6.441e+09 perf-stat.ps.dTLB-stores
927296 ± 5% -29.1% 657143 perf-stat.ps.iTLB-load-misses
8.512e+10 -10.9% 7.581e+10 perf-stat.ps.instructions
32070 -11.2% 28469 perf-stat.ps.minor-faults
40537100 +9.2% 44251909 ± 4% perf-stat.ps.node-load-misses
13226459 ± 2% -12.0% 11644845 ± 4% perf-stat.ps.node-loads
582949 ± 3% +65.6% 965166 ± 3% perf-stat.ps.node-store-misses
52116857 -21.9% 40679765 ± 8% perf-stat.ps.node-stores
32070 -11.2% 28469 perf-stat.ps.page-faults
2.642e+13 -11.4% 2.341e+13 perf-stat.total.instructions
229.75 ± 52% -34.1% 151.50 interrupts.102:PCI-MSI.1572921-edge.eth0-TxRx-57
156.75 ± 3% +27.3% 199.50 ± 29% interrupts.47:PCI-MSI.1572866-edge.eth0-TxRx-2
1603 ± 22% -15.6% 1352 ± 4% interrupts.CPU1.NMI:Non-maskable_interrupts
1603 ± 22% -15.6% 1352 ± 4% interrupts.CPU1.PMI:Performance_monitoring_interrupts
1007 ± 19% -38.7% 618.00 ± 8% interrupts.CPU100.RES:Rescheduling_interrupts
1016 ± 19% -33.9% 671.75 ± 5% interrupts.CPU101.RES:Rescheduling_interrupts
2046 ± 32% -19.9% 1638 ± 30% interrupts.CPU102.NMI:Non-maskable_interrupts
2046 ± 32% -19.9% 1638 ± 30% interrupts.CPU102.PMI:Performance_monitoring_interrupts
956.75 ± 18% -33.1% 640.25 ± 5% interrupts.CPU102.RES:Rescheduling_interrupts
909.50 ± 20% -29.1% 644.50 ± 12% interrupts.CPU103.RES:Rescheduling_interrupts
959.50 ± 24% -31.4% 658.00 ± 5% interrupts.CPU104.RES:Rescheduling_interrupts
921.75 ± 14% -30.2% 643.50 ± 3% interrupts.CPU105.RES:Rescheduling_interrupts
901.50 ± 14% -27.8% 650.50 ± 11% interrupts.CPU106.RES:Rescheduling_interrupts
2670 ± 3% -37.5% 1670 ± 31% interrupts.CPU107.NMI:Non-maskable_interrupts
2670 ± 3% -37.5% 1670 ± 31% interrupts.CPU107.PMI:Performance_monitoring_interrupts
2849 -31.6% 1948 ± 33% interrupts.CPU109.NMI:Non-maskable_interrupts
2849 -31.6% 1948 ± 33% interrupts.CPU109.PMI:Performance_monitoring_interrupts
1410 ± 20% +51.0% 2130 ± 23% interrupts.CPU11.NMI:Non-maskable_interrupts
1410 ± 20% +51.0% 2130 ± 23% interrupts.CPU11.PMI:Performance_monitoring_interrupts
2820 ± 3% -30.7% 1955 ± 32% interrupts.CPU111.NMI:Non-maskable_interrupts
2820 ± 3% -30.7% 1955 ± 32% interrupts.CPU111.PMI:Performance_monitoring_interrupts
2808 -30.1% 1962 ± 33% interrupts.CPU116.NMI:Non-maskable_interrupts
2808 -30.1% 1962 ± 33% interrupts.CPU116.PMI:Performance_monitoring_interrupts
622190 -8.1% 571803 ± 4% interrupts.CPU118.LOC:Local_timer_interrupts
621861 -8.3% 570381 ± 4% interrupts.CPU119.LOC:Local_timer_interrupts
2024 ± 25% -30.0% 1417 ± 8% interrupts.CPU12.NMI:Non-maskable_interrupts
2024 ± 25% -30.0% 1417 ± 8% interrupts.CPU12.PMI:Performance_monitoring_interrupts
622130 -8.0% 572612 ± 4% interrupts.CPU120.LOC:Local_timer_interrupts
622145 -8.4% 569980 ± 4% interrupts.CPU121.LOC:Local_timer_interrupts
2131 ± 33% -24.4% 1611 ± 26% interrupts.CPU121.NMI:Non-maskable_interrupts
2131 ± 33% -24.4% 1611 ± 26% interrupts.CPU121.PMI:Performance_monitoring_interrupts
621983 -7.9% 572602 ± 4% interrupts.CPU123.LOC:Local_timer_interrupts
2219 ± 34% -40.0% 1332 ± 5% interrupts.CPU124.NMI:Non-maskable_interrupts
2219 ± 34% -40.0% 1332 ± 5% interrupts.CPU124.PMI:Performance_monitoring_interrupts
622061 -8.0% 572198 ± 4% interrupts.CPU125.LOC:Local_timer_interrupts
817.50 ± 12% -46.8% 435.25 ± 13% interrupts.CPU128.RES:Rescheduling_interrupts
820.75 ± 24% -48.8% 420.00 ± 25% interrupts.CPU129.RES:Rescheduling_interrupts
750.00 ± 25% -42.9% 428.00 ± 24% interrupts.CPU130.RES:Rescheduling_interrupts
2285 ± 22% -38.1% 1415 interrupts.CPU133.NMI:Non-maskable_interrupts
2285 ± 22% -38.1% 1415 interrupts.CPU133.PMI:Performance_monitoring_interrupts
779.25 ± 30% -50.5% 386.00 ± 23% interrupts.CPU133.RES:Rescheduling_interrupts
741.25 ± 28% -47.7% 387.75 ± 28% interrupts.CPU135.RES:Rescheduling_interrupts
728.00 ± 28% -46.2% 391.50 ± 31% interrupts.CPU137.RES:Rescheduling_interrupts
749.25 ± 29% -53.9% 345.50 ± 27% interrupts.CPU138.RES:Rescheduling_interrupts
751.25 ± 27% -49.0% 383.00 ± 29% interrupts.CPU139.RES:Rescheduling_interrupts
688.00 ± 24% -49.8% 345.25 ± 27% interrupts.CPU140.RES:Rescheduling_interrupts
726.00 ± 27% -55.1% 326.25 ± 32% interrupts.CPU141.RES:Rescheduling_interrupts
2272 ± 29% -40.7% 1347 ± 2% interrupts.CPU15.NMI:Non-maskable_interrupts
2272 ± 29% -40.7% 1347 ± 2% interrupts.CPU15.PMI:Performance_monitoring_interrupts
156.75 ± 3% +27.3% 199.50 ± 29% interrupts.CPU2.47:PCI-MSI.1572866-edge.eth0-TxRx-2
2562 ± 9% -38.6% 1573 ± 28% interrupts.CPU2.NMI:Non-maskable_interrupts
2562 ± 9% -38.6% 1573 ± 28% interrupts.CPU2.PMI:Performance_monitoring_interrupts
1344 ± 20% -32.8% 903.75 interrupts.CPU20.RES:Rescheduling_interrupts
1314 ± 16% -29.9% 921.50 ± 5% interrupts.CPU21.RES:Rescheduling_interrupts
120.00 ± 47% -98.3% 2.00 ± 93% interrupts.CPU22.TLB:TLB_shootdowns
2245 ± 24% -35.7% 1443 ± 24% interrupts.CPU23.NMI:Non-maskable_interrupts
2245 ± 24% -35.7% 1443 ± 24% interrupts.CPU23.PMI:Performance_monitoring_interrupts
1338 ± 18% -32.8% 899.00 ± 12% interrupts.CPU23.RES:Rescheduling_interrupts
1243 ± 16% -28.0% 895.75 ± 4% interrupts.CPU24.RES:Rescheduling_interrupts
1360 ± 30% -33.3% 907.50 ± 15% interrupts.CPU25.RES:Rescheduling_interrupts
1273 ± 20% -34.0% 841.00 ± 6% interrupts.CPU26.RES:Rescheduling_interrupts
1273 ± 15% -31.3% 874.25 ± 8% interrupts.CPU27.RES:Rescheduling_interrupts
1315 ± 16% -40.6% 781.00 ± 11% interrupts.CPU28.RES:Rescheduling_interrupts
1320 ± 20% -39.2% 802.75 ± 5% interrupts.CPU29.RES:Rescheduling_interrupts
2094 ± 30% -22.7% 1619 ± 35% interrupts.CPU30.NMI:Non-maskable_interrupts
2094 ± 30% -22.7% 1619 ± 35% interrupts.CPU30.PMI:Performance_monitoring_interrupts
1301 ± 19% -40.6% 773.00 ± 5% interrupts.CPU30.RES:Rescheduling_interrupts
1253 ± 16% -36.9% 791.00 ± 15% interrupts.CPU31.RES:Rescheduling_interrupts
1231 ± 17% -41.5% 720.25 ± 8% interrupts.CPU32.RES:Rescheduling_interrupts
1177 ± 11% -37.3% 738.00 ± 14% interrupts.CPU33.RES:Rescheduling_interrupts
1195 ± 18% -34.0% 788.75 ± 21% interrupts.CPU34.RES:Rescheduling_interrupts
2567 ± 8% -35.1% 1666 ± 35% interrupts.CPU35.NMI:Non-maskable_interrupts
2567 ± 8% -35.1% 1666 ± 35% interrupts.CPU35.PMI:Performance_monitoring_interrupts
1223 ± 18% -39.1% 745.25 ± 8% interrupts.CPU35.RES:Rescheduling_interrupts
621753 -7.9% 572936 ± 4% interrupts.CPU36.LOC:Local_timer_interrupts
2527 ± 10% -34.0% 1667 ± 31% interrupts.CPU40.NMI:Non-maskable_interrupts
2527 ± 10% -34.0% 1667 ± 31% interrupts.CPU40.PMI:Performance_monitoring_interrupts
622323 -7.9% 572967 ± 4% interrupts.CPU45.LOC:Local_timer_interrupts
1832 ± 17% +38.7% 2541 ± 3% interrupts.CPU46.NMI:Non-maskable_interrupts
1832 ± 17% +38.7% 2541 ± 3% interrupts.CPU46.PMI:Performance_monitoring_interrupts
621906 -8.0% 572330 ± 4% interrupts.CPU47.LOC:Local_timer_interrupts
621972 -8.1% 571378 ± 4% interrupts.CPU48.LOC:Local_timer_interrupts
622275 -8.2% 571410 ± 4% interrupts.CPU49.LOC:Local_timer_interrupts
2459 ± 24% -41.7% 1433 ± 10% interrupts.CPU49.NMI:Non-maskable_interrupts
2459 ± 24% -41.7% 1433 ± 10% interrupts.CPU49.PMI:Performance_monitoring_interrupts
622031 -7.9% 573184 ± 4% interrupts.CPU50.LOC:Local_timer_interrupts
622081 -7.9% 572943 ± 4% interrupts.CPU51.LOC:Local_timer_interrupts
622140 -7.8% 573769 ± 4% interrupts.CPU52.LOC:Local_timer_interrupts
622074 -7.6% 574558 ± 4% interrupts.CPU53.LOC:Local_timer_interrupts
229.75 ± 52% -34.1% 151.50 interrupts.CPU57.102:PCI-MSI.1572921-edge.eth0-TxRx-57
1052 ± 38% -59.2% 429.25 ± 16% interrupts.CPU61.RES:Rescheduling_interrupts
830.75 ± 26% -53.3% 388.25 ± 27% interrupts.CPU62.RES:Rescheduling_interrupts
774.50 ± 28% -48.5% 399.00 ± 32% interrupts.CPU67.RES:Rescheduling_interrupts
785.00 ± 33% -51.6% 380.00 ± 32% interrupts.CPU68.RES:Rescheduling_interrupts
890.25 ± 31% -58.3% 371.50 ± 27% interrupts.CPU69.RES:Rescheduling_interrupts
821.50 ± 30% -46.4% 440.50 ± 36% interrupts.CPU70.RES:Rescheduling_interrupts
1897 ± 24% +40.4% 2663 ± 4% interrupts.CPU84.NMI:Non-maskable_interrupts
1897 ± 24% +40.4% 2663 ± 4% interrupts.CPU84.PMI:Performance_monitoring_interrupts
1431 ± 17% -43.5% 808.25 ± 21% interrupts.CPU9.RES:Rescheduling_interrupts
1064 ± 18% -30.1% 743.75 ± 16% interrupts.CPU91.RES:Rescheduling_interrupts
1076 ± 10% -39.6% 649.75 ± 10% interrupts.CPU92.RES:Rescheduling_interrupts
1042 ± 17% -33.8% 690.00 ± 10% interrupts.CPU93.RES:Rescheduling_interrupts
1065 ± 18% -33.4% 709.50 ± 5% interrupts.CPU94.RES:Rescheduling_interrupts
2731 ± 7% -30.6% 1894 ± 28% interrupts.CPU95.NMI:Non-maskable_interrupts
2731 ± 7% -30.6% 1894 ± 28% interrupts.CPU95.PMI:Performance_monitoring_interrupts
998.75 ± 19% -28.4% 715.00 ± 7% interrupts.CPU96.RES:Rescheduling_interrupts
2.75 ± 47% +2009.1% 58.00 ±148% interrupts.CPU96.TLB:TLB_shootdowns
952.50 ± 27% -29.5% 671.25 ± 12% interrupts.CPU97.RES:Rescheduling_interrupts
2667 ± 2% -50.2% 1328 ± 4% interrupts.CPU98.NMI:Non-maskable_interrupts
2667 ± 2% -50.2% 1328 ± 4% interrupts.CPU98.PMI:Performance_monitoring_interrupts
973.00 ± 20% -28.5% 695.25 ± 4% interrupts.CPU98.RES:Rescheduling_interrupts
991.00 ± 18% -34.0% 654.00 ± 8% interrupts.CPU99.RES:Rescheduling_interrupts
135782 ± 14% -25.0% 101847 ± 6% interrupts.RES:Rescheduling_interrupts
140559 ± 2% -11.8% 124028 ± 6% softirqs.CPU0.RCU
145814 ± 3% -13.7% 125791 ± 5% softirqs.CPU1.RCU
138760 -12.0% 122124 ± 6% softirqs.CPU100.RCU
16104 -17.4% 13295 ± 5% softirqs.CPU100.SCHED
16422 ± 5% -16.9% 13653 ± 7% softirqs.CPU101.SCHED
15960 -14.5% 13650 ± 6% softirqs.CPU102.SCHED
15804 ± 8% -13.8% 13616 ± 7% softirqs.CPU103.SCHED
139949 -12.0% 123102 ± 5% softirqs.CPU104.RCU
16155 ± 3% -14.6% 13799 ± 8% softirqs.CPU104.SCHED
16018 ± 4% -14.3% 13725 ± 10% softirqs.CPU105.SCHED
15392 ± 5% -14.4% 13172 ± 6% softirqs.CPU106.SCHED
15218 ± 4% -15.5% 12852 ± 9% softirqs.CPU107.SCHED
143209 -14.2% 122899 ± 5% softirqs.CPU108.RCU
144618 -14.9% 123106 ± 4% softirqs.CPU109.RCU
144915 -14.1% 124470 ± 6% softirqs.CPU110.RCU
145038 -14.7% 123770 ± 5% softirqs.CPU111.RCU
138846 -12.2% 121867 ± 6% softirqs.CPU112.RCU
137438 -11.9% 121069 ± 6% softirqs.CPU113.RCU
141152 -13.4% 122234 ± 5% softirqs.CPU114.RCU
141545 -13.8% 122025 ± 5% softirqs.CPU115.RCU
140302 -13.0% 122108 ± 6% softirqs.CPU116.RCU
140548 -13.2% 122057 ± 5% softirqs.CPU117.RCU
142370 -14.3% 122021 ± 5% softirqs.CPU118.RCU
136694 -11.3% 121193 ± 5% softirqs.CPU119.RCU
137193 -11.5% 121415 ± 6% softirqs.CPU120.RCU
137125 -11.9% 120804 ± 5% softirqs.CPU121.RCU
140181 -13.0% 121893 ± 5% softirqs.CPU122.RCU
137405 -11.4% 121741 ± 5% softirqs.CPU124.RCU
139350 -12.5% 122000 ± 6% softirqs.CPU125.RCU
145339 -14.5% 124311 ± 5% softirqs.CPU126.RCU
14887 ± 18% -32.6% 10033 ± 15% softirqs.CPU126.SCHED
144917 -15.6% 122366 ± 5% softirqs.CPU127.RCU
15094 ± 20% -37.0% 9516 ± 21% softirqs.CPU127.SCHED
140946 -13.8% 121461 ± 6% softirqs.CPU128.RCU
138106 ± 2% -11.1% 122812 ± 6% softirqs.CPU129.RCU
137799 -11.3% 122221 ± 6% softirqs.CPU130.RCU
137425 -10.9% 122427 ± 6% softirqs.CPU131.RCU
140854 -12.9% 122753 ± 5% softirqs.CPU132.RCU
15140 ± 24% -35.8% 9716 ± 19% softirqs.CPU132.SCHED
140764 ± 2% -12.7% 122926 ± 5% softirqs.CPU133.RCU
139626 -11.7% 123235 ± 5% softirqs.CPU134.RCU
141210 ± 2% -12.6% 123477 ± 5% softirqs.CPU135.RCU
142996 ± 3% -14.1% 122901 ± 5% softirqs.CPU136.RCU
140567 ± 2% -12.5% 123022 ± 6% softirqs.CPU140.RCU
14192 ± 21% -34.2% 9336 ± 18% softirqs.CPU141.SCHED
135514 ± 2% -9.8% 122294 ± 5% softirqs.CPU142.RCU
134945 ± 2% -10.0% 121501 ± 6% softirqs.CPU143.RCU
145145 -13.0% 126309 ± 5% softirqs.CPU16.RCU
144489 ± 2% -13.3% 125217 ± 5% softirqs.CPU17.RCU
138706 ± 4% -12.0% 122092 ± 5% softirqs.CPU18.RCU
15348 ± 4% -10.0% 13810 ± 8% softirqs.CPU18.SCHED
140870 ± 2% -10.7% 125784 ± 3% softirqs.CPU19.RCU
16392 ± 5% -15.5% 13843 ± 6% softirqs.CPU19.SCHED
143741 ± 2% -12.4% 125852 ± 5% softirqs.CPU2.RCU
140088 -11.7% 123663 ± 5% softirqs.CPU20.RCU
16092 ± 3% -14.0% 13832 ± 5% softirqs.CPU20.SCHED
138206 -11.0% 123025 ± 5% softirqs.CPU21.RCU
16877 ± 4% -16.7% 14051 ± 4% softirqs.CPU21.SCHED
138653 -11.4% 122814 ± 5% softirqs.CPU22.RCU
17090 ± 3% -19.3% 13797 ± 5% softirqs.CPU22.SCHED
16476 ± 3% -15.6% 13910 ± 5% softirqs.CPU23.SCHED
138808 -11.2% 123204 ± 5% softirqs.CPU24.RCU
16779 -16.2% 14060 ± 5% softirqs.CPU24.SCHED
139811 ± 2% -11.6% 123555 ± 5% softirqs.CPU25.RCU
16734 ± 3% -15.8% 14083 ± 7% softirqs.CPU25.SCHED
138909 -10.9% 123758 ± 5% softirqs.CPU26.RCU
16619 ± 5% -15.0% 14133 ± 5% softirqs.CPU26.SCHED
140257 -12.3% 123025 ± 5% softirqs.CPU27.RCU
17013 ± 5% -17.3% 14061 ± 5% softirqs.CPU27.SCHED
137824 -11.2% 122428 ± 6% softirqs.CPU28.RCU
16470 ± 2% -15.4% 13927 ± 6% softirqs.CPU28.SCHED
135294 ± 2% -9.6% 122364 ± 6% softirqs.CPU29.RCU
17350 ± 3% -20.7% 13761 ± 5% softirqs.CPU29.SCHED
142300 ± 3% -13.1% 123621 ± 6% softirqs.CPU3.RCU
17067 -18.4% 13927 ± 5% softirqs.CPU30.SCHED
17110 ± 4% -17.6% 14099 ± 7% softirqs.CPU31.SCHED
145321 -14.0% 124914 ± 4% softirqs.CPU32.RCU
16952 ± 3% -16.0% 14234 ± 7% softirqs.CPU32.SCHED
143684 -14.0% 123636 ± 5% softirqs.CPU33.RCU
16728 ± 2% -17.3% 13838 ± 5% softirqs.CPU33.SCHED
143358 -13.4% 124129 ± 5% softirqs.CPU34.RCU
17103 ± 3% -17.7% 14082 ± 6% softirqs.CPU34.SCHED
143430 -13.0% 124848 ± 5% softirqs.CPU35.RCU
16705 -17.7% 13746 ± 4% softirqs.CPU35.SCHED
139206 -11.5% 123261 ± 7% softirqs.CPU36.RCU
140252 -12.7% 122378 ± 5% softirqs.CPU37.RCU
140086 -12.6% 122492 ± 5% softirqs.CPU39.RCU
140518 -13.4% 121743 ± 5% softirqs.CPU40.RCU
138605 -11.7% 122371 ± 5% softirqs.CPU41.RCU
141519 -13.5% 122418 ± 5% softirqs.CPU42.RCU
142030 -13.7% 122637 ± 5% softirqs.CPU43.RCU
140694 -13.4% 121845 ± 4% softirqs.CPU44.RCU
140504 -12.1% 123545 ± 5% softirqs.CPU45.RCU
140277 -12.7% 122435 ± 5% softirqs.CPU46.RCU
137267 -11.2% 121919 ± 5% softirqs.CPU47.RCU
140839 -12.9% 122646 ± 5% softirqs.CPU48.RCU
141816 -13.3% 122937 ± 5% softirqs.CPU49.RCU
140868 ± 2% -12.0% 123942 ± 6% softirqs.CPU5.RCU
143643 -14.1% 123392 ± 6% softirqs.CPU50.RCU
141326 -12.7% 123403 ± 6% softirqs.CPU51.RCU
141583 -13.4% 122613 ± 6% softirqs.CPU52.RCU
141100 -13.2% 122445 ± 5% softirqs.CPU53.RCU
139956 -12.1% 122981 ± 6% softirqs.CPU54.RCU
14018 ± 17% -29.8% 9845 ± 15% softirqs.CPU54.SCHED
140114 -11.3% 124217 ± 5% softirqs.CPU55.RCU
143329 -14.8% 122069 ± 6% softirqs.CPU56.RCU
141146 -12.3% 123718 ± 6% softirqs.CPU57.RCU
15264 ± 19% -34.6% 9988 ± 17% softirqs.CPU57.SCHED
140809 -12.4% 123330 ± 6% softirqs.CPU58.RCU
15201 ± 20% -36.3% 9678 ± 20% softirqs.CPU58.SCHED
140568 -12.0% 123769 ± 5% softirqs.CPU59.RCU
15360 ± 20% -33.3% 10252 ± 19% softirqs.CPU59.SCHED
139409 ± 3% -10.9% 124281 ± 6% softirqs.CPU6.RCU
141767 -12.7% 123768 ± 5% softirqs.CPU60.RCU
142046 -12.9% 123657 ± 5% softirqs.CPU61.RCU
15389 ± 22% -34.7% 10052 ± 18% softirqs.CPU61.SCHED
141719 -12.7% 123700 ± 5% softirqs.CPU62.RCU
142499 -12.9% 124068 ± 6% softirqs.CPU63.RCU
14667 ± 20% -31.8% 10005 ± 18% softirqs.CPU63.SCHED
144437 -14.1% 124008 ± 6% softirqs.CPU64.RCU
15191 ± 22% -35.9% 9736 ± 19% softirqs.CPU64.SCHED
141156 ± 2% -12.3% 123743 ± 6% softirqs.CPU65.RCU
139859 -11.2% 124140 ± 6% softirqs.CPU66.RCU
141282 -11.9% 124535 ± 6% softirqs.CPU67.RCU
144228 -13.7% 124434 ± 6% softirqs.CPU68.RCU
142536 -13.0% 123956 ± 6% softirqs.CPU69.RCU
141103 ± 2% -11.8% 124488 ± 6% softirqs.CPU7.RCU
140849 ± 2% -11.5% 124683 ± 6% softirqs.CPU70.RCU
141347 ± 2% -12.4% 123863 ± 6% softirqs.CPU71.RCU
142552 ± 3% -13.6% 123193 ± 5% softirqs.CPU72.RCU
142074 ± 4% -12.3% 124659 ± 7% softirqs.CPU73.RCU
141649 -13.1% 123155 ± 5% softirqs.CPU74.RCU
143525 ± 4% -14.0% 123493 ± 6% softirqs.CPU75.RCU
142772 ± 3% -13.9% 122974 ± 7% softirqs.CPU76.RCU
141178 ± 3% -12.9% 122944 ± 5% softirqs.CPU77.RCU
140777 ± 3% -12.8% 122799 ± 5% softirqs.CPU78.RCU
143039 ± 4% -14.3% 122572 ± 5% softirqs.CPU79.RCU
140336 ± 3% -11.8% 123811 ± 6% softirqs.CPU8.RCU
141727 ± 2% -13.0% 123316 ± 5% softirqs.CPU80.RCU
141813 ± 2% -13.0% 123363 ± 5% softirqs.CPU81.RCU
141978 ± 2% -11.6% 125442 ± 3% softirqs.CPU82.RCU
137611 ± 3% -10.7% 122950 ± 5% softirqs.CPU84.RCU
141077 ± 2% -12.0% 124188 ± 5% softirqs.CPU86.RCU
139221 ± 3% -11.9% 122606 ± 5% softirqs.CPU87.RCU
138177 ± 4% -11.7% 121980 ± 5% softirqs.CPU88.RCU
139882 ± 2% -11.1% 124329 ± 5% softirqs.CPU9.RCU
141843 ± 2% -13.6% 122541 ± 6% softirqs.CPU90.RCU
16251 ± 3% -15.5% 13733 ± 5% softirqs.CPU90.SCHED
143175 ± 2% -14.1% 122926 ± 6% softirqs.CPU91.RCU
17088 ± 3% -19.2% 13801 ± 7% softirqs.CPU91.SCHED
144768 -14.7% 123542 ± 5% softirqs.CPU92.RCU
16851 ± 2% -20.1% 13462 ± 4% softirqs.CPU92.SCHED
144398 ± 4% -13.4% 125018 ± 3% softirqs.CPU93.RCU
16917 ± 3% -19.9% 13559 ± 6% softirqs.CPU93.SCHED
142957 ± 2% -13.6% 123521 ± 5% softirqs.CPU94.RCU
16758 ± 2% -17.8% 13773 ± 4% softirqs.CPU94.SCHED
141885 -13.8% 122263 ± 6% softirqs.CPU95.RCU
16161 ± 5% -14.7% 13789 ± 6% softirqs.CPU95.SCHED
138647 -11.8% 122297 ± 5% softirqs.CPU96.RCU
16590 ± 3% -16.8% 13804 ± 8% softirqs.CPU96.SCHED
15776 ± 3% -13.0% 13731 ± 7% softirqs.CPU97.SCHED
16115 -10.6% 14405 ± 5% softirqs.CPU98.SCHED
140827 -12.8% 122846 ± 5% softirqs.CPU99.RCU
16252 ± 4% -16.2% 13623 ± 5% softirqs.CPU99.SCHED
20190421 -12.1% 17737370 ± 5% softirqs.RCU
2133170 -16.1% 1790167 ± 4% softirqs.SCHED
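Note that the %change column above is simply the relative delta between the two commits' averaged values; for example, the headline vm-scalability.median figure can be checked with bc (the robot averages several runs, so rounding may differ in the last digit):
echo 'scale=3; (177280 - 210443) * 100 / 210443' | bc    # => -15.758, reported as -15.8%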
vm-scalability.time.system_time
22500 +-------------------------------------------------------------------+
22000 |-+ O O |
| O O O O |
21500 |-O O O O O O O |
21000 |-+ O O O O O O O O O O O |
20500 |-+ O O O |
20000 |-+ |
| |
19500 |-+ |
19000 |-+ |
18500 |-+ + |
18000 |.+.. + + : |
| .. + + : .+.. +.. + |
17500 |-+ +.+ + + + + |
17000 +-------------------------------------------------------------------+
vm-scalability.time.percent_of_cpu_this_job_got
11400 +-------------------------------------------------------------------+
11300 |-+ O O O O O O O O O |
| O O O O O O O O O O O O O O O O |
11200 |-+ O O |
11100 |-+ |
| |
11000 |-+ |
10900 |-+ |
10800 |-+ |
| |
10700 |-+ |
10600 |-+ |
|.+.. .+. .+. .+.. .+ |
10500 |-+ +.+. +. +.+..+ + |
10400 +-------------------------------------------------------------------+
vm-scalability.time.voluntary_context_switches
1.3e+06 +-----------------------------------------------------------------+
1.2e+06 |.+ |
| + .+. .+. .+ |
1.1e+06 |-+ + .+.+..+.+..+ +. + |
1e+06 |-+ + |
| |
900000 |-+ |
800000 |-+ |
700000 |-+ |
| |
600000 |-+ |
500000 |-+ |
| O O O O O O O O O O O O O |
400000 |-O O O O O O O O O O O O O O |
300000 +-----------------------------------------------------------------+
vm-scalability.throughput
3.2e+07 +-----------------------------------------------------------------+
| +.. |
3.1e+07 |++ |
| +.+. .+. .+.+.+..+.+.+ |
3e+07 |-+ +. +. |
| |
2.9e+07 |-+ |
| |
2.8e+07 |-+ |
| |
2.7e+07 |-+ |
| |
2.6e+07 |-+ O O O O O O O O O O O O O |
| O O O O O O O O O O O O |
2.5e+07 +-----------------------------------------------------------------+
vm-scalability.median
220000 +------------------------------------------------------------------+
215000 |-+.. |
|+ +.+.. .+. .+.+..+.+.+..+ |
210000 |-+ + +. |
205000 |-+ |
| |
200000 |-+ |
195000 |-+ |
190000 |-+ |
| |
185000 |-+ |
180000 |-+ O O O O O O O O O O O O O O O |
| O O O O O O O O O O O |
175000 |-+ O |
170000 +------------------------------------------------------------------+
vm-scalability.workload
8.4e+09 +-----------------------------------------------------------------+
| + |
8.2e+09 |:++ |
|: + |
8e+09 |-+ +.+.+..+.+..+.+.+..+.+.+ |
| |
7.8e+09 |-+ |
| |
7.6e+09 |-+ |
| |
7.4e+09 |-+ |
| |
7.2e+09 |-+ |
| O O O O O O O O O O O O O O O O O O O O O O O O O O O |
7e+09 +-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Oliver Sang
Re: [ext4] d3b6f23f71: stress-ng.fiemap.ops_per_sec -60.5% regression
by Xing Zhengjun
Thanks for your quick response. If you need any more test information
about the regression, please let me know.
On 4/13/2020 6:56 PM, Ritesh Harjani wrote:
>
>
> On 4/13/20 2:07 PM, Xing Zhengjun wrote:
>> Hi Harjani,
>>
>> Do you have time to take a look at this? Thanks.
>
> Hello Xing,
>
> I do want to look into this. But as of now I am stuck with another
> mballoc failure issue. I will get back to this once I have a handle
> on that one.
>
> BTW, are you planning to take a look at this?
>
> -ritesh
>
>
>>
>> On 4/7/2020 4:00 PM, kernel test robot wrote:
>>> Greeting,
>>>
>>> FYI, we noticed a -60.5% regression of stress-ng.fiemap.ops_per_sec
>>> due to commit:
>>>
>>>
>>> commit: d3b6f23f71670007817a5d59f3fbafab2b794e8c ("ext4: move
>>> ext4_fiemap to use iomap framework")
>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>>>
>>> in testcase: stress-ng
>>> on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz
>>> with 192G memory
>>> with following parameters:
>>>
>>> nr_threads: 10%
>>> disk: 1HDD
>>> testtime: 1s
>>> class: os
>>> cpufreq_governor: performance
>>> ucode: 0x500002c
>>> fs: ext4
>>>
>>>
>>>
>>>
>>>
>>>
>>> Details are as below:
>>> -------------------------------------------------------------------------------------------------->
>>>
>>>
>>>
>>> To reproduce:
>>>
>>> git clone https://github.com/intel/lkp-tests.git
>>> cd lkp-tests
>>> bin/lkp install job.yaml # job file is attached in this email
>>> bin/lkp run job.yaml
>>>
>>> =========================================================================================
>>>
>>> class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
>>>
>>> os/gcc-7/performance/1HDD/ext4/x86_64-rhel-7.6/10%/debian-x86_64-20191114.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002c
>>>
>>>
>>> commit:
>>> b2c5764262 ("ext4: make ext4_ind_map_blocks work with fiemap")
>>> d3b6f23f71 ("ext4: move ext4_fiemap to use iomap framework")
>>>
>>> b2c5764262edded1 d3b6f23f71670007817a5d59f3f
>>> ---------------- ---------------------------
>>> fail:runs %reproduction fail:runs
>>> | | |
>>>         fail:runs   %reproduction   fail:runs
>>>             |             |             |
>>>            :4            25%           1:4   dmesg.WARNING:at#for_ip_interrupt_entry/0x
>>>           2:4             5%           2:4   perf-profile.calltrace.cycles-pp.sync_regs.error_entry
>>>           2:4             6%           3:4   perf-profile.calltrace.cycles-pp.error_entry
>>>           3:4             9%           3:4   perf-profile.children.cycles-pp.error_entry
>>>           0:4             1%           0:4   perf-profile.self.cycles-pp.error_entry
>>>        %stddev        %change       %stddev
>>>            \             |             \
>>>          28623          +28.2%       36703 ± 12%   stress-ng.daemon.ops
>>>          28632          +28.2%       36704 ± 12%   stress-ng.daemon.ops_per_sec
>>>         566.00 ± 22%    -53.2%      265.00 ± 53%   stress-ng.dev.ops
>>>         278.81 ± 22%    -53.0%      131.00 ± 54%   stress-ng.dev.ops_per_sec
>>>          73160          -60.6%       28849 ± 3%    stress-ng.fiemap.ops
>>>          72471          -60.5%       28612 ± 3%    stress-ng.fiemap.ops_per_sec
>>>          23421 ± 12%    +21.2%       28388 ± 6%    stress-ng.filename.ops
>>>          22638 ± 12%    +20.3%       27241 ± 6%    stress-ng.filename.ops_per_sec
>>>          21.25 ± 7%     -10.6%       19.00 ± 3%    stress-ng.iomix.ops
>>>          38.75 ± 49%    -47.7%       20.25 ± 96%   stress-ng.memhotplug.ops
>>>          34.45 ± 52%    -51.8%       16.62 ±106%   stress-ng.memhotplug.ops_per_sec
>>>           1734 ± 10%    +31.4%        2278 ± 10%   stress-ng.resources.ops
>>>         807.56 ± 5%     +35.2%        1091 ± 8%    stress-ng.resources.ops_per_sec
>>>        1007356 ± 3%     -16.5%      840642 ± 9%    stress-ng.revio.ops
>>>        1007692 ± 3%     -16.6%      840711 ± 9%    stress-ng.revio.ops_per_sec
>>>          21812 ± 3%     +16.0%       25294 ± 5%    stress-ng.sysbadaddr.ops
>>>          21821 ± 3%     +15.9%       25294 ± 5%    stress-ng.sysbadaddr.ops_per_sec
>>>         440.75 ± 4%     +21.9%      537.25 ± 9%    stress-ng.sysfs.ops
>>>         440.53 ± 4%     +21.9%      536.86 ± 9%    stress-ng.sysfs.ops_per_sec
>>>       13286582          -11.1%    11805520 ± 6%    stress-ng.time.file_system_outputs
>>>       68253896           +2.4%    69860122          stress-ng.time.minor_page_faults
>>>         197.00 ± 4%     -15.9%      165.75 ± 12%   stress-ng.xattr.ops
>>>         192.45 ± 5%     -16.1%      161.46 ± 11%   stress-ng.xattr.ops_per_sec
>>>          15310          +62.5%       24875 ± 22%   stress-ng.zombie.ops
>>>          15310          +62.5%       24874 ± 22%   stress-ng.zombie.ops_per_sec
>>> 203.50 ± 12% -47.3% 107.25 ± 49% vmstat.io.bi
>>> 861318 ± 18% -29.7% 605884 ± 5% meminfo.AnonHugePages
>>> 1062742 ± 14% -20.2% 847853 ± 3% meminfo.AnonPages
>>> 31093 ± 6% +9.6% 34090 ± 3% meminfo.KernelStack
>>> 7151 ± 34% +55.8% 11145 ± 9% meminfo.Mlocked
>>> 1.082e+08 ± 5% -40.2% 64705429 ± 31%
>>> numa-numastat.node0.local_node
>>> 1.082e+08 ± 5% -40.2% 64739883 ± 31%
>>> numa-numastat.node0.numa_hit
>>> 46032662 ± 21% +104.3% 94042918 ± 20%
>>> numa-numastat.node1.local_node
>>> 46074205 ± 21% +104.2% 94072810 ± 20%
>>> numa-numastat.node1.numa_hit
>>> 3942 ± 3% +14.2% 4501 ± 4%
>>> slabinfo.pool_workqueue.active_objs
>>> 4098 ± 3% +14.3% 4683 ± 4%
>>> slabinfo.pool_workqueue.num_objs
>>> 4817 ± 7% +13.3% 5456 ± 8%
>>> slabinfo.proc_dir_entry.active_objs
>>> 5153 ± 6% +12.5% 5797 ± 8%
>>> slabinfo.proc_dir_entry.num_objs
>>> 18598 ± 13% -33.1% 12437 ± 20%
>>> sched_debug.cfs_rq:/.load.avg
>>> 452595 ± 56% -71.4% 129637 ± 76%
>>> sched_debug.cfs_rq:/.load.max
>>> 67675 ± 35% -55.1% 30377 ± 42%
>>> sched_debug.cfs_rq:/.load.stddev
>>> 18114 ± 12% -33.7% 12011 ± 20%
>>> sched_debug.cfs_rq:/.runnable_weight.avg
>>> 448215 ± 58% -72.8% 121789 ± 82%
>>> sched_debug.cfs_rq:/.runnable_weight.max
>>> 67083 ± 37% -56.3% 29305 ± 43%
>>> sched_debug.cfs_rq:/.runnable_weight.stddev
>>> -38032 +434.3% -203212 sched_debug.cfs_rq:/.spread0.avg
>>> -204466 +95.8% -400301 sched_debug.cfs_rq:/.spread0.min
>>> 90.02 ± 25% -58.1% 37.69 ± 52%
>>> sched_debug.cfs_rq:/.util_est_enqueued.avg
>>> 677.54 ± 6% -39.3% 411.50 ± 22%
>>> sched_debug.cfs_rq:/.util_est_enqueued.max
>>> 196.57 ± 8% -47.6% 103.05 ± 36%
>>> sched_debug.cfs_rq:/.util_est_enqueued.stddev
>>> 3.34 ± 23% +34.1% 4.48 ± 4%
>>> sched_debug.cpu.clock.stddev
>>> 3.34 ± 23% +34.1% 4.48 ± 4%
>>> sched_debug.cpu.clock_task.stddev
>>> 402872 ± 7% -11.9% 354819 ± 2%
>>> proc-vmstat.nr_active_anon
>>> 1730331 -9.5% 1566418 ± 5% proc-vmstat.nr_dirtied
>>> 31042 ± 6% +9.3% 33915 ± 3%
>>> proc-vmstat.nr_kernel_stack
>>> 229047 -2.4% 223615 proc-vmstat.nr_mapped
>>> 74008 ± 7% +20.5% 89163 ± 8% proc-vmstat.nr_written
>>> 402872 ± 7% -11.9% 354819 ± 2%
>>> proc-vmstat.nr_zone_active_anon
>>> 50587 ± 11% -25.2% 37829 ± 14%
>>> proc-vmstat.numa_pages_migrated
>>> 457500 -23.1% 351918 ± 31%
>>> proc-vmstat.numa_pte_updates
>>> 81382485 +1.9% 82907822 proc-vmstat.pgfault
>>> 2.885e+08 ± 5% -13.3% 2.502e+08 ± 6% proc-vmstat.pgfree
>>> 42206 ± 12% -46.9% 22399 ± 49% proc-vmstat.pgpgin
>>> 431233 ± 13% -64.8% 151736 ±109% proc-vmstat.pgrotated
>>> 176754 ± 7% -40.2% 105637 ± 31%
>>> proc-vmstat.thp_fault_alloc
>>> 314.50 ± 82% +341.5% 1388 ± 44%
>>> proc-vmstat.unevictable_pgs_stranded
>>> 1075269 ± 14% -41.3% 631388 ± 17% numa-meminfo.node0.Active
>>> 976056 ± 12% -39.7% 588727 ± 19%
>>> numa-meminfo.node0.Active(anon)
>>> 426857 ± 22% -36.4% 271375 ± 13%
>>> numa-meminfo.node0.AnonHugePages
>>> 558590 ± 19% -36.4% 355402 ± 14%
>>> numa-meminfo.node0.AnonPages
>>> 1794824 ± 9% -28.8% 1277157 ± 20%
>>> numa-meminfo.node0.FilePages
>>> 8517 ± 92% -82.7% 1473 ± 89%
>>> numa-meminfo.node0.Inactive(file)
>>> 633118 ± 2% -41.7% 368920 ± 36% numa-meminfo.node0.Mapped
>>> 2958038 ± 12% -27.7% 2139271 ± 12%
>>> numa-meminfo.node0.MemUsed
>>> 181401 ± 5% -13.7% 156561 ± 4%
>>> numa-meminfo.node0.SUnreclaim
>>> 258124 ± 6% -13.0% 224535 ± 5% numa-meminfo.node0.Slab
>>> 702083 ± 16% +31.0% 919406 ± 11% numa-meminfo.node1.Active
>>> 38663 ±107% +137.8% 91951 ± 31%
>>> numa-meminfo.node1.Active(file)
>>> 1154975 ± 7% +41.6% 1635593 ± 12%
>>> numa-meminfo.node1.FilePages
>>> 395813 ± 25% +62.8% 644533 ± 16%
>>> numa-meminfo.node1.Inactive
>>> 394313 ± 25% +62.5% 640686 ± 16%
>>> numa-meminfo.node1.Inactive(anon)
>>> 273317 +88.8% 515976 ± 25% numa-meminfo.node1.Mapped
>>> 2279237 ± 6% +25.7% 2865582 ± 7%
>>> numa-meminfo.node1.MemUsed
>>> 10830 ± 18% +29.6% 14033 ± 9%
>>> numa-meminfo.node1.PageTables
>>> 149390 ± 3% +23.2% 184085 ± 3%
>>> numa-meminfo.node1.SUnreclaim
>>> 569542 ± 16% +74.8% 995336 ± 21% numa-meminfo.node1.Shmem
>>> 220774 ± 5% +20.3% 265656 ± 3% numa-meminfo.node1.Slab
>>> 35623587 ± 5% -11.7% 31444514 ± 3% perf-stat.i.cache-misses
>>> 2.576e+08 ± 5% -6.8% 2.4e+08 ± 2%
>>> perf-stat.i.cache-references
>>> 3585 -7.3% 3323 ± 5%
>>> perf-stat.i.cpu-migrations
>>> 180139 ± 2% +4.2% 187668 perf-stat.i.minor-faults
>>> 69.13 +2.6 71.75 perf-stat.i.node-load-miss-rate%
>>> 4313695 ± 2% -7.4% 3994957 ± 2%
>>> perf-stat.i.node-load-misses
>>> 5466253 ± 11% -17.3% 4521173 ± 6% perf-stat.i.node-loads
>>> 2818674 ± 6% -15.8% 2372542 ± 5% perf-stat.i.node-stores
>>> 227810 +4.6% 238290 perf-stat.i.page-faults
>>> 12.67 ± 4% -7.2% 11.76 ± 2% perf-stat.overall.MPKI
>>> 1.01 ± 4% -0.0 0.97 ± 3%
>>> perf-stat.overall.branch-miss-rate%
>>> 1044 +13.1% 1181 ± 4%
>>> perf-stat.overall.cycles-between-cache-misses
>>> 40.37 ± 4% +3.6 44.00 ± 2%
>>> perf-stat.overall.node-store-miss-rate%
>>> 36139526 ± 5% -12.5% 31625519 ± 3% perf-stat.ps.cache-misses
>>> 2.566e+08 ± 5% -6.9% 2.389e+08 ± 2%
>>> perf-stat.ps.cache-references
>>> 3562 -7.2% 3306 ± 5%
>>> perf-stat.ps.cpu-migrations
>>> 179088 +4.2% 186579 perf-stat.ps.minor-faults
>>> 4323383 ± 2% -7.5% 3999214 perf-stat.ps.node-load-misses
>>> 5607721 ± 10% -18.5% 4568664 ± 6% perf-stat.ps.node-loads
>>> 2855134 ± 7% -16.4% 2387345 ± 5% perf-stat.ps.node-stores
>>> 226270 +4.6% 236709 perf-stat.ps.page-faults
>>> 242305 ± 10% -42.4% 139551 ± 18%
>>> numa-vmstat.node0.nr_active_anon
>>> 135983 ± 17% -37.4% 85189 ± 10%
>>> numa-vmstat.node0.nr_anon_pages
>>> 209.25 ± 16% -38.1% 129.50 ± 10%
>>> numa-vmstat.node0.nr_anon_transparent_hugepages
>>> 449367 ± 9% -29.7% 315804 ± 20%
>>> numa-vmstat.node0.nr_file_pages
>>> 2167 ± 90% -80.6% 419.75 ± 98%
>>> numa-vmstat.node0.nr_inactive_file
>>> 157405 ± 3% -41.4% 92206 ± 35%
>>> numa-vmstat.node0.nr_mapped
>>> 2022 ± 30% -73.3% 539.25 ± 91%
>>> numa-vmstat.node0.nr_mlock
>>> 3336 ± 10% -24.3% 2524 ± 25%
>>> numa-vmstat.node0.nr_page_table_pages
>>> 286158 ± 10% -41.2% 168337 ± 37%
>>> numa-vmstat.node0.nr_shmem
>>> 45493 ± 5% -14.1% 39094 ± 4%
>>> numa-vmstat.node0.nr_slab_unreclaimable
>>> 242294 ± 10% -42.4% 139547 ± 18%
>>> numa-vmstat.node0.nr_zone_active_anon
>>> 2167 ± 90% -80.6% 419.75 ± 98%
>>> numa-vmstat.node0.nr_zone_inactive_file
>>> 54053924 ± 8% -39.3% 32786242 ± 34%
>>> numa-vmstat.node0.numa_hit
>>> 53929628 ± 8% -39.5% 32619715 ± 34%
>>> numa-vmstat.node0.numa_local
>>> 9701 ±107% +136.9% 22985 ± 31%
>>> numa-vmstat.node1.nr_active_file
>>> 202.50 ± 16% -25.1% 151.75 ± 23%
>>> numa-vmstat.node1.nr_anon_transparent_hugepages
>>> 284922 ± 7% +43.3% 408195 ± 13%
>>> numa-vmstat.node1.nr_file_pages
>>> 96002 ± 26% +67.5% 160850 ± 17%
>>> numa-vmstat.node1.nr_inactive_anon
>>> 68077 ± 2% +90.3% 129533 ± 25%
>>> numa-vmstat.node1.nr_mapped
>>> 138482 ± 15% +79.2% 248100 ± 22%
>>> numa-vmstat.node1.nr_shmem
>>> 37396 ± 3% +23.3% 46094 ± 3%
>>> numa-vmstat.node1.nr_slab_unreclaimable
>>> 9701 ±107% +136.9% 22985 ± 31%
>>> numa-vmstat.node1.nr_zone_active_file
>>> 96005 ± 26% +67.5% 160846 ± 17%
>>> numa-vmstat.node1.nr_zone_inactive_anon
>>> 23343661 ± 17% +99.9% 46664267 ± 23%
>>> numa-vmstat.node1.numa_hit
>>> 23248487 ± 17% +100.5% 46610447 ± 23%
>>> numa-vmstat.node1.numa_local
>>> 105745 ± 23% +112.6% 224805 ± 24% softirqs.CPU0.NET_RX
>>> 133310 ± 36% -45.3% 72987 ± 52% softirqs.CPU1.NET_RX
>>> 170110 ± 55% -66.8% 56407 ±147% softirqs.CPU11.NET_RX
>>> 91465 ± 36% -65.2% 31858 ±112% softirqs.CPU13.NET_RX
>>> 164491 ± 57% -77.7% 36641 ±121% softirqs.CPU15.NET_RX
>>> 121069 ± 55% -99.3% 816.75 ± 96% softirqs.CPU17.NET_RX
>>> 81019 ± 4% -8.7% 73967 ± 4% softirqs.CPU20.RCU
>>> 72143 ± 63% -89.8% 7360 ±172% softirqs.CPU22.NET_RX
>>> 270663 ± 17% -57.9% 113915 ± 45% softirqs.CPU24.NET_RX
>>> 20149 ± 76% +474.1% 115680 ± 62% softirqs.CPU26.NET_RX
>>> 14033 ± 70% +977.5% 151211 ± 75% softirqs.CPU27.NET_RX
>>> 27834 ± 94% +476.1% 160357 ± 28% softirqs.CPU28.NET_RX
>>> 35346 ± 68% +212.0% 110290 ± 30% softirqs.CPU29.NET_RX
>>> 34347 ±103% +336.5% 149941 ± 32% softirqs.CPU32.NET_RX
>>> 70077 ± 3% +10.8% 77624 ± 3% softirqs.CPU34.RCU
>>> 36453 ± 84% +339.6% 160253 ± 42% softirqs.CPU36.NET_RX
>>> 72367 ± 2% +10.6% 80043 softirqs.CPU37.RCU
>>> 25239 ±118% +267.7% 92799 ± 45% softirqs.CPU38.NET_RX
>>> 4995 ±170% +1155.8% 62734 ± 62% softirqs.CPU39.NET_RX
>>> 4641 ±145% +1611.3% 79432 ± 90% softirqs.CPU42.NET_RX
>>> 7192 ± 65% +918.0% 73225 ± 66% softirqs.CPU45.NET_RX
>>> 1772 ±166% +1837.4% 34344 ± 63% softirqs.CPU46.NET_RX
>>> 13149 ± 81% +874.7% 128170 ± 58% softirqs.CPU47.NET_RX
>>> 86484 ± 94% -92.6% 6357 ±172% softirqs.CPU48.NET_RX
>>> 129128 ± 27% -95.8% 5434 ±172% softirqs.CPU55.NET_RX
>>> 82772 ± 59% -91.7% 6891 ±164% softirqs.CPU56.NET_RX
>>> 145313 ± 57% -87.8% 17796 ± 88% softirqs.CPU57.NET_RX
>>> 118160 ± 33% -86.3% 16226 ±109% softirqs.CPU58.NET_RX
>>> 94576 ± 56% -94.1% 5557 ±173% softirqs.CPU6.NET_RX
>>> 82900 ± 77% -66.8% 27508 ±171% softirqs.CPU62.NET_RX
>>> 157291 ± 30% -81.1% 29656 ±111% softirqs.CPU64.NET_RX
>>> 135101 ± 28% -80.2% 26748 ± 90% softirqs.CPU67.NET_RX
>>> 146574 ± 56% -100.0% 69.75 ± 98% softirqs.CPU68.NET_RX
>>> 81347 ± 2% -9.0% 74024 ± 2% softirqs.CPU68.RCU
>>> 201729 ± 37% -99.6% 887.50 ±107% softirqs.CPU69.NET_RX
>>> 108454 ± 78% -97.9% 2254 ±169% softirqs.CPU70.NET_RX
>>> 55289 ±104% -89.3% 5942 ±172% softirqs.CPU71.NET_RX
>>> 10112 ±172% +964.6% 107651 ± 89% softirqs.CPU72.NET_RX
>>> 3136 ±171% +1522.2% 50879 ± 66% softirqs.CPU73.NET_RX
>>> 13353 ± 79% +809.2% 121407 ±101% softirqs.CPU74.NET_RX
>>> 75194 ± 3% +10.3% 82957 ± 5% softirqs.CPU75.RCU
>>> 11002 ±173% +1040.8% 125512 ± 61% softirqs.CPU76.NET_RX
>>> 2463 ±173% +2567.3% 65708 ± 77% softirqs.CPU78.NET_RX
>>> 25956 ± 3% -7.8% 23932 ± 3% softirqs.CPU78.SCHED
>>> 16366 ±150% +340.7% 72125 ± 91% softirqs.CPU82.NET_RX
>>> 14553 ±130% +1513.4% 234809 ± 27% softirqs.CPU93.NET_RX
>>> 26314 -9.2% 23884 ± 3% softirqs.CPU93.SCHED
>>> 4582 ± 88% +4903.4% 229268 ± 23% softirqs.CPU94.NET_RX
>>> 11214 ±111% +1762.5% 208867 ± 18% softirqs.CPU95.NET_RX
>>> 1.53 ± 27% -0.5 0.99 ± 17%
>>> perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
>>>
>>> 1.52 ± 27% -0.5 0.99 ± 17%
>>> perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
>>>
>>> 1.39 ± 29% -0.5 0.88 ± 21%
>>> perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64
>>>
>>> 1.39 ± 29% -0.5 0.88 ± 21%
>>> perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
>>>
>>> 0.50 ± 59% +0.3 0.81 ± 13%
>>> perf-profile.calltrace.cycles-pp.filemap_map_pages.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_page_fault
>>>
>>> 5.70 ± 9% +0.8 6.47 ± 7%
>>> perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
>>>
>>> 5.48 ± 9% +0.8 6.27 ± 7%
>>> perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.get_signal
>>>
>>> 5.49 ± 9% +0.8 6.28 ± 7%
>>> perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.get_signal.do_signal
>>>
>>> 4.30 ± 4% +1.3 5.60 ± 7%
>>> perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.prepare_exit_to_usermode
>>>
>>> 4.40 ± 4% +1.3 5.69 ± 7%
>>> perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
>>>
>>> 4.37 ± 4% +1.3 5.66 ± 7%
>>> perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
>>>
>>> 4.36 ± 4% +1.3 5.66 ± 7%
>>> perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
>>>
>>> 4.33 ± 4% +1.3 5.62 ± 7%
>>> perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
>>>
>>> 4.44 ± 4% +1.3 5.74 ± 7%
>>> perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
>>>
>>> 3.20 ± 10% -2.4 0.78 ±156%
>>> perf-profile.children.cycles-pp.copy_page
>>> 0.16 ± 9% -0.1 0.08 ± 64%
>>> perf-profile.children.cycles-pp.irq_work_interrupt
>>> 0.16 ± 9% -0.1 0.08 ± 64%
>>> perf-profile.children.cycles-pp.smp_irq_work_interrupt
>>> 0.24 ± 5% -0.1 0.17 ± 18%
>>> perf-profile.children.cycles-pp.irq_work_run_list
>>> 0.16 ± 9% -0.1 0.10 ± 24%
>>> perf-profile.children.cycles-pp.irq_work_run
>>> 0.16 ± 9% -0.1 0.10 ± 24%
>>> perf-profile.children.cycles-pp.printk
>>> 0.23 ± 6% -0.1 0.17 ± 9%
>>> perf-profile.children.cycles-pp.__do_execve_file
>>> 0.08 ± 14% -0.1 0.03 ±100%
>>> perf-profile.children.cycles-pp.delay_tsc
>>> 0.16 ± 6% -0.1 0.11 ± 9%
>>> perf-profile.children.cycles-pp.load_elf_binary
>>> 0.16 ± 7% -0.0 0.12 ± 13%
>>> perf-profile.children.cycles-pp.search_binary_handler
>>> 0.20 ± 7% -0.0 0.15 ± 10%
>>> perf-profile.children.cycles-pp.call_usermodehelper_exec_async
>>> 0.19 ± 6% -0.0 0.15 ± 11%
>>> perf-profile.children.cycles-pp.do_execve
>>> 0.08 ± 10% -0.0 0.04 ± 59%
>>> perf-profile.children.cycles-pp.__vunmap
>>> 0.15 ± 3% -0.0 0.11 ± 7%
>>> perf-profile.children.cycles-pp.rcu_idle_exit
>>> 0.12 ± 10% -0.0 0.09 ± 14%
>>> perf-profile.children.cycles-pp.__switch_to_asm
>>> 0.09 ± 13% -0.0 0.07 ± 5%
>>> perf-profile.children.cycles-pp.des3_ede_encrypt
>>> 0.06 ± 11% +0.0 0.09 ± 13%
>>> perf-profile.children.cycles-pp.mark_page_accessed
>>> 0.15 ± 5% +0.0 0.19 ± 12%
>>> perf-profile.children.cycles-pp.apparmor_cred_prepare
>>> 0.22 ± 8% +0.0 0.27 ± 11%
>>> perf-profile.children.cycles-pp.mem_cgroup_throttle_swaprate
>>> 0.17 ± 2% +0.0 0.22 ± 12%
>>> perf-profile.children.cycles-pp.security_prepare_creds
>>> 0.95 ± 17% +0.3 1.22 ± 14%
>>> perf-profile.children.cycles-pp.filemap_map_pages
>>> 5.92 ± 8% +0.7 6.65 ± 7%
>>> perf-profile.children.cycles-pp.get_signal
>>> 5.66 ± 9% +0.8 6.44 ± 7%
>>> perf-profile.children.cycles-pp.mmput
>>> 5.65 ± 9% +0.8 6.43 ± 7%
>>> perf-profile.children.cycles-pp.exit_mmap
>>> 4.40 ± 4% +1.3 5.70 ± 7%
>>> perf-profile.children.cycles-pp.prepare_exit_to_usermode
>>> 4.45 ± 4% +1.3 5.75 ± 7%
>>> perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
>>>
>>> 3.16 ± 10% -2.4 0.77 ±155%
>>> perf-profile.self.cycles-pp.copy_page
>>> 0.08 ± 14% -0.1 0.03 ±100%
>>> perf-profile.self.cycles-pp.delay_tsc
>>> 0.12 ± 10% -0.0 0.09 ± 14%
>>> perf-profile.self.cycles-pp.__switch_to_asm
>>> 0.08 ± 12% -0.0 0.06 ± 17%
>>> perf-profile.self.cycles-pp.enqueue_task_fair
>>> 0.09 ± 13% -0.0 0.07 ± 5%
>>> perf-profile.self.cycles-pp.des3_ede_encrypt
>>> 0.07 ± 13% +0.0 0.08 ± 19%
>>> perf-profile.self.cycles-pp.__lru_cache_add
>>> 0.19 ± 9% +0.0 0.22 ± 10%
>>> perf-profile.self.cycles-pp.mem_cgroup_throttle_swaprate
>>> 0.15 ± 5% +0.0 0.19 ± 11%
>>> perf-profile.self.cycles-pp.apparmor_cred_prepare
>>> 0.05 ± 58% +0.0 0.09 ± 13%
>>> perf-profile.self.cycles-pp.mark_page_accessed
>>> 0.58 ± 10% +0.2 0.80 ± 20%
>>> perf-profile.self.cycles-pp.release_pages
>>> 0.75 ±173% +1.3e+05% 1005 ±100%
>>> interrupts.127:PCI-MSI.31981660-edge.i40e-eth0-TxRx-91
>>> 820.75 ±111% -99.9% 0.50 ±173%
>>> interrupts.47:PCI-MSI.31981580-edge.i40e-eth0-TxRx-11
>>> 449.25 ± 86% -100.0% 0.00
>>> interrupts.53:PCI-MSI.31981586-edge.i40e-eth0-TxRx-17
>>> 33.25 ±157% -100.0% 0.00
>>> interrupts.57:PCI-MSI.31981590-edge.i40e-eth0-TxRx-21
>>> 0.75 ±110% +63533.3% 477.25 ±162%
>>> interrupts.61:PCI-MSI.31981594-edge.i40e-eth0-TxRx-25
>>> 561.50 ±160% -100.0% 0.00
>>> interrupts.65:PCI-MSI.31981598-edge.i40e-eth0-TxRx-29
>>> 82921 ± 8% -11.1% 73748 ± 6%
>>> interrupts.CPU11.CAL:Function_call_interrupts
>>> 66509 ± 30% -32.6% 44828 ± 8%
>>> interrupts.CPU14.TLB:TLB_shootdowns
>>> 43105 ± 98% -90.3% 4183 ± 21%
>>> interrupts.CPU17.RES:Rescheduling_interrupts
>>> 148719 ± 70% -69.4% 45471 ± 16%
>>> interrupts.CPU17.TLB:TLB_shootdowns
>>> 85589 ± 42% -52.2% 40884 ± 5%
>>> interrupts.CPU20.TLB:TLB_shootdowns
>>> 222472 ± 41% -98.0% 4360 ± 45%
>>> interrupts.CPU22.RES:Rescheduling_interrupts
>>> 0.50 ±173% +95350.0% 477.25 ±162%
>>> interrupts.CPU25.61:PCI-MSI.31981594-edge.i40e-eth0-TxRx-25
>>> 76029 ± 10% +14.9% 87389 ± 5%
>>> interrupts.CPU25.CAL:Function_call_interrupts
>>> 399042 ± 6% +13.4% 452479 ± 8%
>>> interrupts.CPU27.LOC:Local_timer_interrupts
>>> 561.00 ±161% -100.0% 0.00
>>> interrupts.CPU29.65:PCI-MSI.31981598-edge.i40e-eth0-TxRx-29
>>> 7034 ± 46% +1083.8% 83279 ±138%
>>> interrupts.CPU29.RES:Rescheduling_interrupts
>>> 17829 ± 99% -71.0% 5172 ± 16%
>>> interrupts.CPU30.RES:Rescheduling_interrupts
>>> 5569 ± 15% +2414.7% 140059 ± 94%
>>> interrupts.CPU31.RES:Rescheduling_interrupts
>>> 37674 ± 16% +36.6% 51473 ± 25%
>>> interrupts.CPU31.TLB:TLB_shootdowns
>>> 47905 ± 39% +76.6% 84583 ± 38%
>>> interrupts.CPU34.TLB:TLB_shootdowns
>>> 568.75 ±140% +224.8% 1847 ± 90%
>>> interrupts.CPU36.NMI:Non-maskable_interrupts
>>> 568.75 ±140% +224.8% 1847 ± 90%
>>> interrupts.CPU36.PMI:Performance_monitoring_interrupts
>>> 4236 ± 25% +2168.5% 96092 ± 90%
>>> interrupts.CPU36.RES:Rescheduling_interrupts
>>> 52717 ± 27% +43.3% 75565 ± 28%
>>> interrupts.CPU37.TLB:TLB_shootdowns
>>> 41418 ± 9% +136.6% 98010 ± 50%
>>> interrupts.CPU39.TLB:TLB_shootdowns
>>> 5551 ± 8% +847.8% 52615 ± 66%
>>> interrupts.CPU40.RES:Rescheduling_interrupts
>>> 4746 ± 25% +865.9% 45841 ± 91%
>>> interrupts.CPU42.RES:Rescheduling_interrupts
>>> 37556 ± 11% +24.6% 46808 ± 6%
>>> interrupts.CPU42.TLB:TLB_shootdowns
>>> 21846 ±124% -84.4% 3415 ± 46%
>>> interrupts.CPU48.RES:Rescheduling_interrupts
>>> 891.50 ± 22% -35.2% 577.25 ± 40%
>>> interrupts.CPU49.NMI:Non-maskable_interrupts
>>> 891.50 ± 22% -35.2% 577.25 ± 40%
>>> interrupts.CPU49.PMI:Performance_monitoring_interrupts
>>> 20459 ±120% -79.2% 4263 ± 14%
>>> interrupts.CPU49.RES:Rescheduling_interrupts
>>> 59840 ± 21% -23.1% 46042 ± 16%
>>> interrupts.CPU5.TLB:TLB_shootdowns
>>> 65200 ± 19% -34.5% 42678 ± 9%
>>> interrupts.CPU51.TLB:TLB_shootdowns
>>> 70923 ±153% -94.0% 4270 ± 29%
>>> interrupts.CPU53.RES:Rescheduling_interrupts
>>> 65312 ± 22% -28.7% 46578 ± 14%
>>> interrupts.CPU56.TLB:TLB_shootdowns
>>> 65828 ± 24% -33.4% 43846 ± 4%
>>> interrupts.CPU59.TLB:TLB_shootdowns
>>> 72558 ±156% -93.2% 4906 ± 9%
>>> interrupts.CPU6.RES:Rescheduling_interrupts
>>> 68698 ± 34% -32.6% 46327 ± 18%
>>> interrupts.CPU61.TLB:TLB_shootdowns
>>> 109745 ± 44% -57.4% 46711 ± 16%
>>> interrupts.CPU62.TLB:TLB_shootdowns
>>> 89714 ± 44% -48.5% 46198 ± 7%
>>> interrupts.CPU63.TLB:TLB_shootdowns
>>> 59380 ±136% -91.5% 5066 ± 13%
>>> interrupts.CPU69.RES:Rescheduling_interrupts
>>> 40094 ± 18% +133.9% 93798 ± 44%
>>> interrupts.CPU78.TLB:TLB_shootdowns
>>> 129884 ± 72% -55.3% 58034 ±157%
>>> interrupts.CPU8.RES:Rescheduling_interrupts
>>> 69984 ± 11% +51.4% 105957 ± 20%
>>> interrupts.CPU80.CAL:Function_call_interrupts
>>> 32857 ± 10% +128.7% 75131 ± 36%
>>> interrupts.CPU80.TLB:TLB_shootdowns
>>> 35726 ± 16% +34.6% 48081 ± 12%
>>> interrupts.CPU82.TLB:TLB_shootdowns
>>> 73820 ± 17% +28.2% 94643 ± 8%
>>> interrupts.CPU84.CAL:Function_call_interrupts
>>> 38829 ± 28% +190.3% 112736 ± 42%
>>> interrupts.CPU84.TLB:TLB_shootdowns
>>> 36129 ± 4% +47.6% 53329 ± 13%
>>> interrupts.CPU85.TLB:TLB_shootdowns
>>> 4693 ± 7% +1323.0% 66793 ±145%
>>> interrupts.CPU86.RES:Rescheduling_interrupts
>>> 38003 ± 11% +94.8% 74031 ± 43%
>>> interrupts.CPU86.TLB:TLB_shootdowns
>>> 78022 ± 3% +7.9% 84210 ± 3%
>>> interrupts.CPU87.CAL:Function_call_interrupts
>>> 36359 ± 6% +54.9% 56304 ± 48%
>>> interrupts.CPU88.TLB:TLB_shootdowns
>>> 89031 ±105% -95.0% 4475 ± 40%
>>> interrupts.CPU9.RES:Rescheduling_interrupts
>>> 40085 ± 11% +60.6% 64368 ± 27%
>>> interrupts.CPU91.TLB:TLB_shootdowns
>>> 42244 ± 10% +44.8% 61162 ± 35%
>>> interrupts.CPU94.TLB:TLB_shootdowns
>>> 40959 ± 15% +109.4% 85780 ± 41%
>>> interrupts.CPU95.TLB:TLB_shootdowns
>>>
>>>
>>> stress-ng.fiemap.ops
>>>
>>>   [wrapped ASCII trend plot; bisect-good samples cluster around 70000-75000 ops, bisect-bad samples around 28000-35000 ops]
>>>
>>> stress-ng.fiemap.ops_per_sec
>>>
>>>   [wrapped ASCII trend plot; bisect-good samples cluster around 70000-75000 ops/sec, bisect-bad samples around 28000-35000 ops/sec]
>>> [*] bisect-good sample
>>> [O] bisect-bad sample
>>>
>>>
>>>
>>> Disclaimer:
>>> Results have been estimated based on internal Intel analysis and are
>>> provided
>>> for informational purposes only. Any difference in system hardware or
>>> software
>>> design or configuration may affect actual performance.
>>>
>>>
>>> Thanks,
>>> Rong Chen
>>>
>>>
>>>
>>
>
--
Zhengjun Xing
[sched/fair] 070f5e860e: reaim.jobs_per_min -10.5% regression
by kernel test robot
Greeting,
FYI, we noticed a -10.5% regression of reaim.jobs_per_min due to commit:
commit: 070f5e860ee2bf588c99ef7b4c202451faa48236 ("sched/fair: Take into account runnable_avg to classify group")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: reaim
on test machine: 4 threads Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz with 4G memory
with following parameters:
runtime: 300s
nr_task: 100%
test: five_sec
cpufreq_governor: performance
ucode: 0x21
test-description: REAIM is an updated and improved version of AIM 7 benchmark.
test-url: https://sourceforge.net/projects/re-aim-7/
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
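To A/B the two kernels directly rather than re-running the robot's bisect, one option is to check out each of the commits listed below and repeat the same job on both boots. A rough sketch for one side of the comparison (the build and install steps are only an assumption, use whatever workflow fits the test box):
git -C linux checkout 9f68395333        # parent commit; repeat with 070f5e860e for the suspect kernel
make -C linux olddefconfig              # assumes the x86_64-rhel-7.6 config is already in place as .config
make -C linux -j"$(nproc)" bzImage modules
# install and boot this kernel on the test machine, then:
bin/lkp install job.yaml
bin/lkp run job.yaml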
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/100%/debian-x86_64-20191114.cgz/300s/lkp-ivb-d04/five_sec/reaim/0x21
commit:
9f68395333 ("sched/pelt: Add a new runnable average signal")
070f5e860e ("sched/fair: Take into account runnable_avg to classify group")
9f68395333ad7f5b 070f5e860ee2bf588c99ef7b4c2
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
4:4 -18% 3:4 perf-profile.children.cycles-pp.error_entry
3:4 -12% 3:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
0.68 -10.4% 0.61 reaim.child_systime
67235 -10.5% 60195 reaim.jobs_per_min
16808 -10.5% 15048 reaim.jobs_per_min_child
97.90 -1.2% 96.70 reaim.jti
72000 -10.8% 64216 reaim.max_jobs_per_min
0.36 +11.3% 0.40 reaim.parent_time
1.56 ± 3% +79.1% 2.80 ± 6% reaim.std_dev_percent
0.00 ± 7% +145.9% 0.01 ± 9% reaim.std_dev_time
104276 -16.0% 87616 reaim.time.involuntary_context_switches
15511157 -2.4% 15144312 reaim.time.minor_page_faults
55.00 -7.3% 51.00 reaim.time.percent_of_cpu_this_job_got
88.01 -12.4% 77.12 reaim.time.system_time
79.97 -3.2% 77.38 reaim.time.user_time
216380 -3.4% 208924 reaim.time.voluntary_context_switches
50800 -2.4% 49600 reaim.workload
30.40 ± 2% -4.7% 28.97 ± 2% boot-time.boot
9.38 -0.7 8.66 ± 3% mpstat.cpu.all.sys%
7452 +7.5% 8014 vmstat.system.cs
1457802 ± 16% +49.3% 2176122 ± 13% cpuidle.C1.time
48523684 +43.4% 69570233 ± 22% cpuidle.C1E.time
806543 ± 2% +20.7% 973406 ± 11% cpuidle.C1E.usage
14328 ± 6% +14.5% 16410 ± 8% cpuidle.POLL.time
43300 ± 4% +13.5% 49150 ± 5% softirqs.CPU0.SCHED
118751 -9.3% 107763 softirqs.CPU1.RCU
41679 ± 3% +14.1% 47546 ± 4% softirqs.CPU1.SCHED
42688 ± 3% +12.3% 47931 ± 4% softirqs.CPU2.SCHED
41730 ± 2% +17.7% 49115 ± 4% softirqs.CPU3.SCHED
169399 +14.4% 193744 ± 2% softirqs.SCHED
3419 +1.0% 3453 proc-vmstat.nr_kernel_stack
16365616 -1.8% 16077850 proc-vmstat.numa_hit
16365616 -1.8% 16077850 proc-vmstat.numa_local
93908 -1.6% 92389 proc-vmstat.pgactivate
16269664 -3.9% 15629529 ± 2% proc-vmstat.pgalloc_normal
15918803 -2.3% 15557936 proc-vmstat.pgfault
16644610 -2.0% 16310898 proc-vmstat.pgfree
20125 ±123% +161.7% 52662 ± 30% sched_debug.cfs_rq:/.load.min
348749 ± 10% -11.2% 309562 ± 11% sched_debug.cfs_rq:/.load.stddev
1096 ± 6% -14.4% 938.42 ± 7% sched_debug.cfs_rq:/.load_avg.max
448.46 ± 8% -17.5% 370.19 ± 10% sched_debug.cfs_rq:/.load_avg.stddev
117372 -10.2% 105432 sched_debug.cfs_rq:/.min_vruntime.avg
135242 ± 4% -9.2% 122811 sched_debug.cfs_rq:/.min_vruntime.max
0.53 ± 8% +17.6% 0.62 ± 6% sched_debug.cfs_rq:/.nr_running.avg
29.79 ± 30% -51.0% 14.58 ± 35% sched_debug.cfs_rq:/.nr_spread_over.max
10.21 ± 34% -59.7% 4.12 ± 52% sched_debug.cfs_rq:/.nr_spread_over.stddev
78.25 ± 40% +3304.7% 2664 ± 94% sched_debug.cpu.curr->pid.min
294309 ± 2% +34.3% 395172 ± 12% sched_debug.cpu.nr_switches.min
9.58 ± 35% +84.8% 17.71 ± 40% sched_debug.cpu.nr_uninterruptible.max
-6.88 +120.6% -15.17 sched_debug.cpu.nr_uninterruptible.min
6.41 ± 30% +95.2% 12.52 ± 33% sched_debug.cpu.nr_uninterruptible.stddev
286185 +33.4% 381734 ± 13% sched_debug.cpu.sched_count.min
180416 +11.0% 200247 sched_debug.cpu.sched_goidle.avg
116264 ± 3% +44.6% 168090 ± 15% sched_debug.cpu.sched_goidle.min
476.00 ± 8% +92.4% 915.75 ± 3% interrupts.CAL:Function_call_interrupts
110.50 ± 24% +101.1% 222.25 ± 4% interrupts.CPU0.CAL:Function_call_interrupts
1381 ± 29% +23.7% 1709 ± 26% interrupts.CPU0.NMI:Non-maskable_interrupts
1381 ± 29% +23.7% 1709 ± 26% interrupts.CPU0.PMI:Performance_monitoring_interrupts
3319 ± 9% +50.4% 4991 ± 2% interrupts.CPU0.RES:Rescheduling_interrupts
41.25 ± 30% +274.5% 154.50 ± 15% interrupts.CPU0.TLB:TLB_shootdowns
116.25 ± 23% +96.1% 228.00 ± 16% interrupts.CPU1.CAL:Function_call_interrupts
1183 ± 10% +43.1% 1692 ± 23% interrupts.CPU1.NMI:Non-maskable_interrupts
1183 ± 10% +43.1% 1692 ± 23% interrupts.CPU1.PMI:Performance_monitoring_interrupts
3335 ± 7% +60.4% 5350 ± 5% interrupts.CPU1.RES:Rescheduling_interrupts
36.25 ± 30% +344.1% 161.00 ± 8% interrupts.CPU1.TLB:TLB_shootdowns
131.25 ± 11% +81.1% 237.75 ± 11% interrupts.CPU2.CAL:Function_call_interrupts
3247 ± 2% +62.4% 5274 interrupts.CPU2.RES:Rescheduling_interrupts
34.50 ± 36% +357.2% 157.75 ± 7% interrupts.CPU2.TLB:TLB_shootdowns
118.00 ± 13% +93.0% 227.75 ± 9% interrupts.CPU3.CAL:Function_call_interrupts
3155 ± 4% +68.7% 5322 ± 3% interrupts.CPU3.RES:Rescheduling_interrupts
38.50 ± 16% +303.9% 155.50 ± 3% interrupts.CPU3.TLB:TLB_shootdowns
13057 ± 2% +60.4% 20939 interrupts.RES:Rescheduling_interrupts
150.50 ± 27% +317.8% 628.75 ± 3% interrupts.TLB:TLB_shootdowns
2.00 +0.1 2.09 ± 3% perf-stat.i.branch-miss-rate%
10.26 +1.1 11.36 ± 7% perf-stat.i.cache-miss-rate%
2009706 ± 2% +5.4% 2117525 ± 3% perf-stat.i.cache-misses
16867421 -4.5% 16106908 perf-stat.i.cache-references
7514 +7.6% 8083 perf-stat.i.context-switches
1.51 -3.0% 1.47 perf-stat.i.cpi
2.523e+09 ± 3% -8.8% 2.301e+09 ± 2% perf-stat.i.cpu-cycles
124.54 +157.8% 321.08 perf-stat.i.cpu-migrations
1842 ± 10% -18.6% 1498 ± 6% perf-stat.i.cycles-between-cache-misses
752585 ± 2% -4.1% 721714 perf-stat.i.dTLB-store-misses
590441 +2.7% 606399 perf-stat.i.iTLB-load-misses
68766 +4.0% 71488 ± 2% perf-stat.i.iTLB-loads
1.847e+09 ± 3% -4.7% 1.76e+09 ± 2% perf-stat.i.instructions
3490 ± 4% -8.5% 3195 ± 3% perf-stat.i.instructions-per-iTLB-miss
0.68 +3.7% 0.70 perf-stat.i.ipc
51861 -2.1% 50797 perf-stat.i.minor-faults
51861 -2.1% 50797 perf-stat.i.page-faults
2.68 ± 2% +0.1 2.78 perf-stat.overall.branch-miss-rate%
11.91 +1.2 13.14 ± 2% perf-stat.overall.cache-miss-rate%
1.37 -4.3% 1.31 perf-stat.overall.cpi
1255 -13.4% 1087 ± 2% perf-stat.overall.cycles-between-cache-misses
3127 ± 3% -7.2% 2901 ± 2% perf-stat.overall.instructions-per-iTLB-miss
0.73 +4.5% 0.76 perf-stat.overall.ipc
2002763 ± 2% +5.4% 2110303 ± 3% perf-stat.ps.cache-misses
16809816 -4.5% 16051656 perf-stat.ps.cache-references
7489 +7.6% 8055 perf-stat.ps.context-switches
2.514e+09 ± 3% -8.8% 2.293e+09 ± 2% perf-stat.ps.cpu-cycles
124.12 +157.8% 319.95 perf-stat.ps.cpu-migrations
750010 ± 2% -4.1% 719223 perf-stat.ps.dTLB-store-misses
588424 +2.7% 604314 perf-stat.ps.iTLB-load-misses
68533 +4.0% 71246 ± 2% perf-stat.ps.iTLB-loads
1.841e+09 ± 3% -4.7% 1.754e+09 ± 2% perf-stat.ps.instructions
51683 -2.1% 50622 perf-stat.ps.minor-faults
51683 -2.1% 50622 perf-stat.ps.page-faults
5.577e+11 ± 3% -5.1% 5.292e+11 ± 2% perf-stat.total.instructions
7.35 ± 17% -2.7 4.60 ± 10% perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
7.74 ± 20% -2.7 5.00 ± 6% perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
10.14 ± 8% -2.7 7.44 ± 6% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.66 ± 8% -2.6 8.07 ± 8% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
7.10 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.write._fini
7.10 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp._fini
7.09 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write._fini
7.09 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write._fini
7.09 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write._fini
7.09 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.devkmsg_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
7.09 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.vprintk_emit.devkmsg_emit.devkmsg_write.new_sync_write.vfs_write
7.09 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.devkmsg_emit.devkmsg_write.new_sync_write.vfs_write.ksys_write
6.20 ± 8% -2.1 4.08 ± 5% perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.new_sync_write
5.15 ± 11% -1.8 3.38 ± 4% perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write
5.05 ± 11% -1.7 3.31 ± 3% perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit
7.41 ± 10% -1.1 6.29 ± 5% perf-profile.calltrace.cycles-pp.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
7.57 ± 11% -1.1 6.46 ± 5% perf-profile.calltrace.cycles-pp.execve
7.46 ± 10% -1.1 6.37 ± 5% perf-profile.calltrace.cycles-pp.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
7.46 ± 10% -1.1 6.37 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
7.46 ± 10% -1.1 6.37 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
7.03 ± 5% -1.1 5.95 ± 10% perf-profile.calltrace.cycles-pp.brk
5.90 ± 7% -0.9 4.98 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.brk
5.84 ± 7% -0.9 4.93 ± 9% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
15.77 ± 2% -0.9 14.88 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.86 ± 2% -0.9 14.97 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
3.88 ± 6% -0.9 2.99 ± 5% perf-profile.calltrace.cycles-pp.kill
1.70 ± 23% -0.8 0.90 ± 10% perf-profile.calltrace.cycles-pp.delay_tsc.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
4.88 ± 8% -0.8 4.08 ± 8% perf-profile.calltrace.cycles-pp.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
2.39 ± 27% -0.7 1.67 ± 5% perf-profile.calltrace.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve
2.29 ± 30% -0.7 1.59 ± 5% perf-profile.calltrace.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file
2.27 ± 30% -0.7 1.58 ± 5% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
3.11 ± 5% -0.6 2.47 ± 9% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.kill
3.07 ± 5% -0.6 2.45 ± 9% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
2.09 ± 18% -0.4 1.67 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
2.82 ± 9% -0.4 2.40 ± 12% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
2.80 ± 9% -0.4 2.38 ± 12% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
1.11 ± 33% -0.4 0.71 ± 10% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.flush_old_exec.load_elf_binary
1.05 ± 15% -0.4 0.69 ± 13% perf-profile.calltrace.cycles-pp.vt_console_print.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write
1.03 ± 17% -0.4 0.68 ± 13% perf-profile.calltrace.cycles-pp.lf.vt_console_print.console_unlock.vprintk_emit.devkmsg_emit
1.03 ± 17% -0.4 0.68 ± 13% perf-profile.calltrace.cycles-pp.con_scroll.lf.vt_console_print.console_unlock.vprintk_emit
1.03 ± 17% -0.4 0.68 ± 13% perf-profile.calltrace.cycles-pp.fbcon_scroll.con_scroll.lf.vt_console_print.console_unlock
0.96 ± 16% -0.3 0.66 ± 12% perf-profile.calltrace.cycles-pp.fbcon_putcs.fbcon_redraw.fbcon_scroll.con_scroll.lf
1.85 ± 4% -0.3 1.58 ± 8% perf-profile.calltrace.cycles-pp.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_page_fault
0.89 ± 15% -0.3 0.62 ± 12% perf-profile.calltrace.cycles-pp.kill_pid_info.kill_something_info.__x64_sys_kill.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.67 ± 5% -0.3 1.41 ± 9% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault
1.02 ± 7% -0.3 0.77 ± 12% perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
0.94 ± 16% -0.2 0.70 ± 5% perf-profile.calltrace.cycles-pp.clear_page_erms.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
0.98 ± 16% -0.2 0.74 ± 7% perf-profile.calltrace.cycles-pp.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault
1.03 ± 6% -0.2 0.79 ± 10% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
1.00 ± 10% -0.2 0.77 ± 9% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.87 ± 10% -0.2 0.66 ± 15% perf-profile.calltrace.cycles-pp.shmem_file_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
1.41 ± 3% -0.2 1.23 ± 7% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault
1.88 ± 5% -0.1 1.73 perf-profile.calltrace.cycles-pp.__x64_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.87 ± 5% -0.1 1.73 perf-profile.calltrace.cycles-pp._do_fork.__x64_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.34 ± 11% +7.3 17.66 ± 8% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
10.18 ± 11% +7.3 17.52 ± 8% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
11.32 ± 9% +7.7 19.03 ± 8% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
11.32 ± 9% +7.7 19.05 ± 8% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
11.32 ± 9% +7.7 19.05 ± 8% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
11.02 ± 5% +8.1 19.14 ± 7% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
16.04 ± 6% +9.1 25.17 ± 8% perf-profile.calltrace.cycles-pp.secondary_startup_64
55.98 -7.0 48.94 ± 4% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
55.67 -7.0 48.67 ± 4% perf-profile.children.cycles-pp.do_syscall_64
10.60 ± 16% -3.3 7.30 ± 8% perf-profile.children.cycles-pp.vprintk_emit
13.02 ± 7% -3.0 9.99 ± 7% perf-profile.children.cycles-pp.write
9.92 ± 13% -2.8 7.08 ± 4% perf-profile.children.cycles-pp.console_unlock
10.26 ± 8% -2.7 7.53 ± 6% perf-profile.children.cycles-pp.new_sync_write
10.79 ± 8% -2.6 8.18 ± 8% perf-profile.children.cycles-pp.vfs_write
10.95 ± 8% -2.6 8.36 ± 8% perf-profile.children.cycles-pp.ksys_write
7.17 ± 16% -2.5 4.69 ± 7% perf-profile.children.cycles-pp.devkmsg_write
7.17 ± 16% -2.5 4.69 ± 7% perf-profile.children.cycles-pp.devkmsg_emit
8.65 ± 16% -2.4 6.21 ± 4% perf-profile.children.cycles-pp.serial8250_console_write
8.53 ± 17% -2.4 6.11 ± 4% perf-profile.children.cycles-pp.uart_console_write
7.13 ± 17% -2.4 4.71 ± 6% perf-profile.children.cycles-pp._fini
8.46 ± 16% -2.4 6.07 ± 4% perf-profile.children.cycles-pp.wait_for_xmitr
8.34 ± 16% -2.4 5.97 ± 4% perf-profile.children.cycles-pp.serial8250_console_putchar
5.80 ± 16% -1.6 4.21 ± 6% perf-profile.children.cycles-pp.io_serial_in
7.85 ± 10% -1.2 6.67 ± 5% perf-profile.children.cycles-pp.execve
7.72 ± 11% -1.2 6.55 ± 5% perf-profile.children.cycles-pp.__do_execve_file
5.19 ± 13% -1.1 4.05 ± 8% perf-profile.children.cycles-pp.mmput
5.16 ± 13% -1.1 4.03 ± 8% perf-profile.children.cycles-pp.exit_mmap
7.76 ± 10% -1.1 6.64 ± 5% perf-profile.children.cycles-pp.__x64_sys_execve
7.11 ± 5% -1.1 6.01 ± 10% perf-profile.children.cycles-pp.brk
3.92 ± 6% -0.9 3.03 ± 5% perf-profile.children.cycles-pp.kill
2.63 ± 17% -0.8 1.85 perf-profile.children.cycles-pp.delay_tsc
4.89 ± 8% -0.8 4.12 ± 8% perf-profile.children.cycles-pp.__x64_sys_brk
2.48 ± 27% -0.7 1.74 ± 4% perf-profile.children.cycles-pp.flush_old_exec
3.02 ± 12% -0.7 2.28 ± 12% perf-profile.children.cycles-pp.unmap_page_range
3.15 ± 11% -0.7 2.40 ± 12% perf-profile.children.cycles-pp.unmap_vmas
2.25 ± 19% -0.5 1.75 ± 11% perf-profile.children.cycles-pp.unmap_region
1.27 ± 11% -0.4 0.86 ± 8% perf-profile.children.cycles-pp.vt_console_print
1.24 ± 12% -0.4 0.85 ± 9% perf-profile.children.cycles-pp.lf
1.24 ± 12% -0.4 0.85 ± 9% perf-profile.children.cycles-pp.con_scroll
1.24 ± 12% -0.4 0.85 ± 9% perf-profile.children.cycles-pp.fbcon_scroll
1.79 ± 9% -0.4 1.41 ± 4% perf-profile.children.cycles-pp.release_pages
1.22 ± 11% -0.4 0.85 ± 9% perf-profile.children.cycles-pp.fbcon_redraw
1.17 ± 12% -0.4 0.82 ± 10% perf-profile.children.cycles-pp.fbcon_putcs
1.16 ± 13% -0.3 0.82 ± 10% perf-profile.children.cycles-pp.bit_putcs
0.90 ± 16% -0.3 0.62 ± 12% perf-profile.children.cycles-pp.kill_pid_info
0.95 ± 10% -0.3 0.68 ± 6% perf-profile.children.cycles-pp.drm_fb_helper_cfb_imageblit
0.95 ± 11% -0.3 0.68 ± 6% perf-profile.children.cycles-pp.cfb_imageblit
1.24 ± 7% -0.2 1.01 ± 6% perf-profile.children.cycles-pp.new_sync_read
0.71 ± 4% -0.2 0.49 ± 23% perf-profile.children.cycles-pp.___perf_sw_event
0.55 ± 31% -0.2 0.33 ± 16% perf-profile.children.cycles-pp.unlink_anon_vmas
0.89 ± 11% -0.2 0.67 ± 15% perf-profile.children.cycles-pp.shmem_file_read_iter
0.60 ± 20% -0.2 0.39 ± 20% perf-profile.children.cycles-pp.__send_signal
1.06 ± 6% -0.2 0.85 ± 16% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.88 -0.2 0.68 ± 6% perf-profile.children.cycles-pp.__perf_sw_event
1.49 ± 5% -0.2 1.29 ± 7% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.56 ± 12% -0.2 0.37 ± 11% perf-profile.children.cycles-pp.do_send_sig_info
1.65 ± 8% -0.2 1.47 ± 4% perf-profile.children.cycles-pp.perf_event_mmap
0.69 ± 2% -0.2 0.52 ± 16% perf-profile.children.cycles-pp.page_remove_rmap
0.61 ± 5% -0.2 0.44 ± 15% perf-profile.children.cycles-pp.free_unref_page_list
0.60 ± 6% -0.2 0.43 ± 15% perf-profile.children.cycles-pp.__vm_munmap
0.77 ± 12% -0.2 0.62 ± 12% perf-profile.children.cycles-pp.__might_sleep
0.39 ± 12% -0.2 0.24 ± 18% perf-profile.children.cycles-pp.time
0.46 ± 14% -0.1 0.34 ± 14% perf-profile.children.cycles-pp.lru_add_drain_cpu
0.57 ± 8% -0.1 0.47 ± 14% perf-profile.children.cycles-pp.shmem_undo_range
0.41 ± 12% -0.1 0.30 ± 15% perf-profile.children.cycles-pp.copy_fpstate_to_sigframe
0.76 ± 7% -0.1 0.67 ± 8% perf-profile.children.cycles-pp.__x64_sys_rt_sigreturn
0.26 ± 16% -0.1 0.17 ± 17% perf-profile.children.cycles-pp.mark_page_accessed
0.12 ± 47% -0.1 0.04 ±103% perf-profile.children.cycles-pp.sigaction
0.23 ± 12% -0.1 0.15 ± 11% perf-profile.children.cycles-pp.__vm_enough_memory
0.12 ± 18% -0.1 0.05 ±106% perf-profile.children.cycles-pp.__vsprintf_chk
0.23 ± 20% -0.1 0.17 ± 13% perf-profile.children.cycles-pp.d_add
0.13 ± 23% -0.1 0.07 ± 58% perf-profile.children.cycles-pp.fput_many
0.13 ± 14% -0.1 0.08 ± 24% perf-profile.children.cycles-pp.vfs_unlink
0.11 ± 20% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.04 ± 63% +0.0 0.08 ± 23% perf-profile.children.cycles-pp.uncharge_page
0.06 ± 22% +0.0 0.10 ± 36% perf-profile.children.cycles-pp.sched_exec
0.44 ± 4% +0.0 0.48 ± 4% perf-profile.children.cycles-pp.close
0.14 ± 22% +0.1 0.21 ± 17% perf-profile.children.cycles-pp.pick_next_task_fair
0.10 ± 17% +0.1 0.17 ± 23% perf-profile.children.cycles-pp.__anon_vma_prepare
0.00 +0.1 0.07 ± 24% perf-profile.children.cycles-pp.update_sd_lb_stats
0.07 ± 34% +0.1 0.15 ± 42% perf-profile.children.cycles-pp.file_free_rcu
0.15 ± 27% +0.1 0.23 ± 21% perf-profile.children.cycles-pp.__strcasecmp
0.20 ± 21% +0.1 0.29 ± 8% perf-profile.children.cycles-pp.__pte_alloc
0.14 ± 47% +0.1 0.23 ± 27% perf-profile.children.cycles-pp.update_blocked_averages
0.09 ± 44% +0.1 0.19 ± 18% perf-profile.children.cycles-pp.schedule_idle
0.00 +0.1 0.10 ± 33% perf-profile.children.cycles-pp.newidle_balance
0.00 +0.1 0.10 ± 18% perf-profile.children.cycles-pp.__vmalloc_node_range
0.21 ± 15% +0.1 0.32 ± 25% perf-profile.children.cycles-pp.__wake_up_common
0.63 ± 8% +0.1 0.77 ± 6% perf-profile.children.cycles-pp.rcu_do_batch
0.76 ± 14% +0.1 0.90 ± 9% perf-profile.children.cycles-pp.rcu_core
0.07 ± 90% +0.2 0.27 ±109% perf-profile.children.cycles-pp.security_mmap_addr
0.46 ± 26% +0.3 0.75 ± 13% perf-profile.children.cycles-pp.__sched_text_start
11.32 ± 9% +7.7 19.05 ± 8% perf-profile.children.cycles-pp.start_secondary
11.03 ± 5% +8.1 19.16 ± 7% perf-profile.children.cycles-pp.intel_idle
14.78 ± 6% +8.5 23.24 ± 8% perf-profile.children.cycles-pp.cpuidle_enter
14.76 ± 6% +8.5 23.24 ± 8% perf-profile.children.cycles-pp.cpuidle_enter_state
16.04 ± 6% +9.1 25.17 ± 8% perf-profile.children.cycles-pp.secondary_startup_64
16.04 ± 6% +9.1 25.17 ± 8% perf-profile.children.cycles-pp.cpu_startup_entry
16.04 ± 6% +9.1 25.19 ± 8% perf-profile.children.cycles-pp.do_idle
5.79 ± 16% -1.6 4.21 ± 6% perf-profile.self.cycles-pp.io_serial_in
2.62 ± 17% -0.8 1.85 perf-profile.self.cycles-pp.delay_tsc
5.11 ± 4% -0.6 4.56 ± 5% perf-profile.self.cycles-pp.do_syscall_64
1.44 ± 6% -0.3 1.15 ± 5% perf-profile.self.cycles-pp.unmap_page_range
0.94 ± 11% -0.3 0.68 ± 6% perf-profile.self.cycles-pp.cfb_imageblit
0.65 ± 6% -0.2 0.42 ± 23% perf-profile.self.cycles-pp.___perf_sw_event
1.42 ± 5% -0.2 1.22 ± 7% perf-profile.self.cycles-pp.prepare_exit_to_usermode
0.65 ± 13% -0.2 0.47 ± 9% perf-profile.self.cycles-pp.do_page_fault
0.65 ± 9% -0.1 0.52 ± 5% perf-profile.self.cycles-pp.release_pages
0.24 ± 20% -0.1 0.15 ± 16% perf-profile.self.cycles-pp.mark_page_accessed
0.16 ± 28% -0.1 0.08 ± 69% perf-profile.self.cycles-pp.free_unref_page_commit
0.12 ± 24% -0.1 0.04 ± 59% perf-profile.self.cycles-pp.__do_munmap
0.10 ± 24% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.04 ± 57% +0.0 0.07 ± 19% perf-profile.self.cycles-pp.__sbrk
0.04 ± 57% +0.0 0.08 ± 23% perf-profile.self.cycles-pp.update_load_avg
0.04 ± 57% +0.0 0.08 ± 23% perf-profile.self.cycles-pp.uncharge_page
0.26 ± 11% +0.1 0.39 ± 12% perf-profile.self.cycles-pp.copy_page
0.49 ± 13% +0.1 0.63 ± 13% perf-profile.self.cycles-pp.get_page_from_freelist
11.00 ± 5% +8.1 19.14 ± 7% perf-profile.self.cycles-pp.intel_idle
reaim.time.system_time
90 +----------------------------------------------------------------------+
| +. .+ ++.++ .+ |
88 |+.+ +.++. + +++ ++.++.+ +.++.+++.+++.+++. +.++ .+ + :.+++.+|
| + + + + ++.++ + |
86 |-+ |
| |
84 |-+ |
| |
82 |-+ O |
|O OO OO O |
80 |-+ |
| OO OOO O O O O |
78 |-+ O O O O O OO O |
| O OOO O |
76 +----------------------------------------------------------------------+
reaim.time.percent_of_cpu_this_job_got
55 +--------------------------------------------------------------------+
| |
54.5 |-+ |
54 |-+ |
| |
53.5 |-+ |
| |
53 |-+OOO OOO |
| |
52.5 |-+ |
52 |O+ OOO OOO OOO OOO OO |
| |
51.5 |-+ |
| |
51 +--------------------------------------------------------------------+
reaim.parent_time
0.405 +-------------------------------------------------------------------+
0.4 |-+ O |
| O O OOO OO |
0.395 |-+ |
0.39 |-+ |
0.385 |-+ OOOO OOO OOO OOO OOOO |
0.38 |O+OOO |
| |
0.375 |-+ |
0.37 |-+ |
0.365 |-+ |
0.36 |-+ |
| +. .+ +. .+ +. + +. +++. + .+|
0.355 |+. +.+ ++.++ +++. ++.++++ + +++ + +++ + ++.++ + +. : + + |
0.35 +-------------------------------------------------------------------+
reaim.child_systime
0.69 +--------------------------------------------------------------------+
| +.+ ++. ++.+ .++ ++ +++.+ .++ .+++. + .+ : + .+|
0.68 |+.++ ++.+ + ++ +.+ ++ + + +.+++ +.+ + |
0.67 |-+ |
| |
0.66 |-+ |
0.65 |-+ |
| O |
0.64 |O+O OOO |
0.63 |-+ O |
| O O O O O |
0.62 |-+ OO O O O O O |
0.61 |-+ O OO O OO OOOO |
| |
0.6 +--------------------------------------------------------------------+
reaim.jobs_per_min
69000 +-------------------------------------------------------------------+
68000 |-.++ + + .++ .+ .++ + .+++.+ |
|+ +.+ ++.+++.+ + + +++.+++.+++.+++.++++ +.+ + + +.+ +. |
67000 |-+ + + +|
66000 |-+ |
| |
65000 |-+ |
64000 |-+ |
63000 |-+ |
|O OOO O O O O |
62000 |-+ O O OOO O OOO OOOO |
61000 |-+ |
| O OOO OO |
60000 |-+ OO |
59000 +-------------------------------------------------------------------+
reaim.jobs_per_min_child
17500 +-------------------------------------------------------------------+
| |
17000 |-.+++. + + .+++.+ .++ + .+++.+ |
|+ + ++.+++.+ + +++.+++.+++.+++.++++ +.+ + + +.+++.+|
| + |
16500 |-+ |
| |
16000 |-+ |
| |
15500 |O+OOO OOOO OOO OOO OOO OOOO |
| |
| O O OO |
15000 |-+ OOO O |
| |
14500 +-------------------------------------------------------------------+
reaim.max_jobs_per_min
76000 +-------------------------------------------------------------------+
| |
74000 |-+ + + + |
| :: : :: |
| : : : :: : |
72000 |+.++ ++++.+++.+++.+++.++++.+++.+++.+++.++++.+++.+++.+ + +++.+++.+|
| |
70000 |-+ |
| |
68000 |-+ O |
| |
| |
66000 |O+O O OOOO OOO OOO OOO OOOO |
| |
64000 +-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
4becb7ee5b ("net/x25: Fix x25_neigh refcnt leak when x25 .."): [ 89.261843] BUG: kernel NULL pointer dereference, address: 00000074
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 4becb7ee5b3d2829ed7b9261a245a77d5b7de902
Author: Xiyu Yang <xiyuyang19(a)fudan.edu.cn>
AuthorDate: Sat Apr 25 21:06:25 2020 +0800
Commit: David S. Miller <davem(a)davemloft.net>
CommitDate: Mon Apr 27 11:20:30 2020 -0700
net/x25: Fix x25_neigh refcnt leak when x25 disconnect
x25_connect() invokes x25_get_neigh(), which returns a reference to the
specified x25_neigh object and stores it in "x25->neighbour" with an
increased refcnt.
When the x25 connect succeeds and returns, the reference is still held by
"x25->neighbour", so the refcount should be decreased in
x25_disconnect() to keep the refcount balanced.
The reference counting issue happens in x25_disconnect(), which forgets
to drop the refcnt taken by x25_get_neigh() in x25_connect(),
causing a refcnt leak.
Fix this issue by calling x25_neigh_put() before x25_disconnect()
returns.
Signed-off-by: Xiyu Yang <xiyuyang19(a)fudan.edu.cn>
Signed-off-by: Xin Tan <tanxin.ctf(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
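The faulting address 00000074 and EIP of x25_disconnect in the dmesg below suggest the newly added put runs on a NULL x25->neighbour (the refcount appears to sit at a small offset into the object), e.g. for a socket released without ever completing x25_connect(). The following is only a toy userspace sketch of the get/put pattern the commit message describes, with made-up names and a plain integer standing in for the kernel's refcount_t; it is not the actual kernel code or the eventual fix.

#include <stdio.h>
#include <stdlib.h>

/* Toy model of the pattern described above; all names are illustrative,
 * not the real kernel structures. */
struct neigh {
	int refcnt;
};

struct sock_state {
	struct neigh *neighbour;	/* reference stashed at connect time */
};

static struct neigh *neigh_get(struct neigh *n)
{
	if (n)
		n->refcnt++;		/* models x25_get_neigh() bumping the count */
	return n;
}

static void neigh_put(struct neigh *n)
{
	if (n && --n->refcnt == 0)	/* NULL guard: never-connected sockets have no neighbour */
		free(n);
}

static void toy_connect(struct sock_state *s, struct neigh *n)
{
	s->neighbour = neigh_get(n);	/* take and stash a reference */
}

static void toy_disconnect(struct sock_state *s)
{
	neigh_put(s->neighbour);	/* drop the stashed reference so the counts balance */
	s->neighbour = NULL;
}

int main(void)
{
	struct neigh *n = calloc(1, sizeof(*n));
	struct sock_state s = { 0 };

	n->refcnt = 1;			/* creator's reference */
	toy_connect(&s, n);
	toy_disconnect(&s);		/* refcnt back to 1: balanced */
	toy_disconnect(&s);		/* disconnect without connect: must not crash */
	neigh_put(n);			/* creator drops the last reference, object is freed */

	puts("refcounts stayed balanced");
	return 0;
}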
095f5614bf net/tls: Fix sk_psock refcnt leak in bpf_exec_tx_verdict()
4becb7ee5b net/x25: Fix x25_neigh refcnt leak when x25 disconnect
+-------------------------------------------------------+------------+------------+
| | 095f5614bf | 4becb7ee5b |
+-------------------------------------------------------+------------+------------+
| boot_successes | 29 | 1 |
| boot_failures | 4 | 10 |
| BUG:kernel_timeout_in_boot_stage | 1 | |
| BUG:kernel_hang_in_test_stage | 2 | |
| BUG:kernel_hang_in_boot_stage | 1 | 1 |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 9 |
| Oops:#[##] | 0 | 9 |
| EIP:x25_disconnect | 0 | 9 |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 9 |
| WARNING:at_lib/refcount.c:#refcount_warn_saturate | 0 | 1 |
| EIP:refcount_warn_saturate | 0 | 1 |
+-------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
Stopping syslogd/klogd: stopped syslogd (pid 459)
stopped klogd (pid 462)
done
Deconfiguring network interfaces... done.
Sending all processes the TERM signal...
[ 89.261843] BUG: kernel NULL pointer dereference, address: 00000074
[ 89.263892] #PF: supervisor write access in kernel mode
[ 89.264352] #PF: error_code(0x0002) - not-present page
[ 89.264799] *pde = 00000000
[ 89.265057] Oops: 0002 [#1] SMP
[ 89.265338] CPU: 1 PID: 785 Comm: trinity-c2 Not tainted 5.7.0-rc2-00379-g4becb7ee5b3d2 #1
[ 89.303957] EIP: x25_disconnect+0x81/0xbc
[ 89.304969] Code: b3 7c 02 00 00 75 0d 89 d8 ff 93 08 03 00 00 0f ba 6b 50 00 b8 a0 b9 f8 81 e8 a6 70 03 00 8b 8b 50 03 00 00 83 ca ff 8d 41 74 <f0> 0f c1 51 74 83 fa 01 75 09 89 c8 e8 12 32 81 ff eb 0e 85 d2 7f
[ 89.309273] EAX: 00000074 EBX: f25fb800 ECX: 00000000 EDX: ffffffff
[ 89.310597] ESI: 00000000 EDI: 00000008 EBP: f2ff5ed0 ESP: f2ff5ec0
[ 89.312086] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00010286
[ 89.313295] CR0: 80050033 CR2: 00000074 CR3: 72eb6000 CR4: 00140690
[ 89.314409] Call Trace:
[ 89.314796] x25_release+0x98/0xec
[ 89.317726] __sock_release+0x26/0x78
[ 89.318307] sock_close+0xd/0x11
[ 89.332917] __fput+0xe5/0x1a2
[ 89.333443] ____fput+0x8/0xa
[ 89.334210] task_work_run+0x53/0x76
[ 89.334789] do_exit+0x404/0x8f8
[ 89.335286] do_group_exit+0x82/0x82
[ 89.335833] __ia32_sys_exit_group+0x10/0x10
[ 89.336506] do_fast_syscall_32+0x8c/0xc5
[ 89.337749] entry_SYSENTER_32+0xaa/0x102
[ 89.338246] EIP: 0x77fc1c3d
[ 89.338588] Code: Bad RIP value.
[ 89.339050] EAX: ffffffda EBX: 00000000 ECX: 00000000 EDX: 00000000
[ 89.339782] ESI: 00000080 EDI: 09d30ef8 EBP: 0000006e ESP: 7fc0c0fc
[ 89.340549] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000216
[ 89.341451] Modules linked in:
[ 89.341834] CR2: 0000000000000074
[ 89.342300] ---[ end trace 4adddd6044784e2e ]---
[ 89.342971] EIP: x25_disconnect+0x81/0xbc
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start b54e1dda887def1d16df3f47692ce7fbaccfb7d1 6a8b55ed4056ea5559ebe4f6a4b247f627870d4c --
git bisect bad 8ea28476ea8059845ba55223fc779048553f4914 # 06:25 B 0 3 19 0 Merge 'nsaenz-linux-rpi/for-next' into devel-hourly-2020042823
git bisect bad 00be51a8460ac2298cf6515ca5ff90ec0214f986 # 07:05 B 0 4 20 0 Merge 'linux-review/Mason-Yang/mtd-spi-nor-macronix-Add-support-for-mx25l512-mx25u512/20200426-125136' into devel-hourly-2020042823
git bisect bad 5b07957c29cdcb3f9fd2850460abe343b3cb6edd # 07:56 B 0 2 18 0 Merge 'linux-review/Like-Xu/KVM-x86-pmu-Support-full-width-counting/20200428-055206' into devel-hourly-2020042823
git bisect bad ea46db9609519c8a9cbe7bfec63194adefd51a2d # 10:26 B 0 6 22 0 Merge 'linux-review/Ranjani-Sridharan/Kconfig-updates-for-DMIC-and-SOF-HDMI-support/20200428-093102' into devel-hourly-2020042823
git bisect good 98e97b9813c233f075a74c9a89e62e3ca35b00d3 # 14:34 G 10 0 0 0 Merge 'linux-review/Anders-Roxell/memory-tegra-mark-PM-functions-as-__maybe_unused/20200428-094935' into devel-hourly-2020042823
git bisect good e16c3f98a1906ef3b1a2c8d61c937b7a4f6a7628 # 16:07 G 11 0 0 0 Merge 'linux-review/sathyanarayanan-kuppuswamy-linux-intel-com/PCI-AER-Use-_OSC-negotiation-to-determine-AER-ownership/20200428-040550' into devel-hourly-2020042823
git bisect bad 60da1f95aa465e3b6ea917b753cfc8e8e0796459 # 17:01 B 1 1 1 1 Merge 'linux-review/Toke-H-iland-J-rgensen/wireguard-Use-tunnel-helpers-for-decapsulating-ECN-markings/20200428-082513' into devel-hourly-2020042823
git bisect good 7358cb29b9fd5a2553da6210e824052406698177 # 17:44 G 10 0 1 1 Merge 'linux-review/Eric-Dumazet/fq_codel-fix-TCA_FQ_CODEL_DROP_BATCH_SIZE-sanity-checks/20200427-190619' into devel-hourly-2020042823
git bisect good ffe419ae8a3e08aa9bad4878b99f5543d4ee5d6b # 18:17 G 10 0 0 0 Merge 'linux-review/UPDATE-20200428-085738/Sakari-Ailus/IPU3-ImgU-driver-parameter-struct-fixes/20200416-195812' into devel-hourly-2020042823
git bisect bad bae361c54fb6ac6eba3b4762f49ce14beb73ef13 # 18:56 B 0 4 20 0 bnxt_en: Improve AER slot reset.
git bisect bad 4becb7ee5b3d2829ed7b9261a245a77d5b7de902 # 19:28 B 0 2 18 0 net/x25: Fix x25_neigh refcnt leak when x25 disconnect
git bisect good 18e6719c141e472fe3b9dce2d089eb89fdbce0b5 # 20:05 G 10 0 3 3 Merge branch 'vsock-virtio-fixes-about-packet-delivery-to-monitoring-devices'
git bisect good 095f5614bfe16e5b3e191b34ea41b10d6fdd4ced # 21:02 G 10 0 1 1 net/tls: Fix sk_psock refcnt leak in bpf_exec_tx_verdict()
# first bad commit: [4becb7ee5b3d2829ed7b9261a245a77d5b7de902] net/x25: Fix x25_neigh refcnt leak when x25 disconnect
git bisect good 095f5614bfe16e5b3e191b34ea41b10d6fdd4ced # 21:14 G 30 0 1 2 net/tls: Fix sk_psock refcnt leak in bpf_exec_tx_verdict()
# extra tests with debug options
git bisect bad 4becb7ee5b3d2829ed7b9261a245a77d5b7de902 # 21:31 B 0 2 18 0 net/x25: Fix x25_neigh refcnt leak when x25 disconnect
# extra tests on revert first bad commit
git bisect good c56c1e56fe4c60e83308391f3faf5100ff5d3874 # 22:30 G 10 0 1 1 Revert "net/x25: Fix x25_neigh refcnt leak when x25 disconnect"
# good: [c56c1e56fe4c60e83308391f3faf5100ff5d3874] Revert "net/x25: Fix x25_neigh refcnt leak when x25 disconnect"
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org
Re: [mm/debug] fa6726c1e7: kernel_BUG_at_include/linux/mm.h
by Catalin Marinas
On Tue, Apr 28, 2020 at 04:41:11AM -0400, Qian Cai wrote:
> On Apr 28, 2020, at 1:54 AM, Anshuman Khandual <Anshuman.Khandual(a)arm.com> wrote:
> > That is true. There is a slight change in the rules, making the default 'y'
> > explicit only when both ARCH_HAS_DEBUG_VM_PGTABLE and DEBUG_VM are enabled.
> >
> > +config DEBUG_VM_PGTABLE
> > + bool "Debug arch page table for semantics compliance"
> > + depends on MMU
> > + depends on !IA64 && !ARM
> > + depends on ARCH_HAS_DEBUG_VM_PGTABLE || EXPERT
> > + default y if ARCH_HAS_DEBUG_VM_PGTABLE && DEBUG_VM
> > + help
> >
> > The default is really irrelevant as the config option can be set explicitly.
>
> That could also explain it. Until not long ago it was only “default
> y if DEBUG_VM”, which caused the robot to save a .config with
> DEBUG_VM_PGTABLE=y by default.
>
> Even though you changed the rule recently, it has no effect because the
> robot can “make oldconfig” from the saved config for each linux-next
> tree run, so the breakage will go on.
I'm not entirely sure that's the case. This report still points at the
old commit fa6726c1e7 which has:
+ depends on ARCH_HAS_DEBUG_VM_PGTABLE || EXPERT
+ default n if !ARCH_HAS_DEBUG_VM_PGTABLE
+ default y if DEBUG_VM
In -next we now have commit 647d9a0de34c, subsequently modified by
commit 0a8646638865. So hopefully with the latest -next tree we won't
see this report.
We could as well remove the 'depends on ... || EXPERT' part but I'd
rather leave this around with a default n (as in current -next) in case
others want to have a go. If that's still causing problems, we can
remove the '|| EXPERT' part, so there won't be any further regressions.
--
Catalin
90f0081848 ("kernfs: fix possibility of NULL pointer .."): WARNING: lock held when returning to user space!
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/youngjun/kernfs-fix-possibility-...
commit 90f0081848b85a7943dab6b1036c40f68ee60264
Author: youngjun <her0gyugyu(a)gmail.com>
AuthorDate: Mon Apr 27 09:48:36 2020 -0700
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Tue Apr 28 09:27:55 2020 +0800
kernfs: fix possibility of NULL pointer dereference.
When the dentry is negative, "kernfs_dentry_node" returns NULL.
In this case, "kernfs_root" dereferences a NULL pointer.
Signed-off-by: youngjun <her0gyugyu(a)gmail.com>
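The lockdep splat further down (kn->active held at kernfs_iop_rename) suggests the added NULL check bails out of the rename path after an active reference has already been taken, so the early return leaves it held when returning to user space. Below is only a minimal userspace sketch of that general hazard, with entirely made-up names and a pthread mutex standing in for the kernel lock; it is not the kernfs code itself.

#include <pthread.h>
#include <stdio.h>

/* Toy illustration: an early bail-out added after a lock has already been
 * taken must unwind through the unlock path, otherwise the caller returns
 * with the lock still held. */
struct node {
	int valid;
};

static pthread_mutex_t active_lock = PTHREAD_MUTEX_INITIALIZER;

static struct node *lookup(struct node *maybe)
{
	return maybe && maybe->valid ? maybe : NULL;	/* "negative dentry" case */
}

static int toy_rename(struct node *src)
{
	int ret = -1;

	pthread_mutex_lock(&active_lock);	/* models taking kn->active */

	struct node *kn = lookup(src);
	if (!kn)
		goto out_unlock;		/* bail out, but still drop the lock */

	ret = 0;				/* the real rename work would go here */

out_unlock:
	pthread_mutex_unlock(&active_lock);
	return ret;
}

int main(void)
{
	struct node negative = { .valid = 0 };

	printf("rename on negative entry: %d (lock released either way)\n",
	       toy_rename(&negative));
	return 0;
}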
96fa72ffb2 Merge 5.7-rc3 into driver-core-next
90f0081848 kernfs: fix possibility of NULL pointer dereference.
+------------------------------------------------+------------+------------+
| | 96fa72ffb2 | 90f0081848 |
+------------------------------------------------+------------+------------+
| boot_successes | 41 | 6 |
| boot_failures | 0 | 8 |
| WARNING:lock_held_when_returning_to_user_space | 0 | 8 |
| is_leaving_the_kernel_with_locks_still_held | 0 | 8 |
+------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[child3:684] vm86old (113) returned ENOSYS, marking as inactive.
[child1:686] io_cancel (249) returned ENOSYS, marking as inactive.
[child1:686] mq_getsetattr (282) returned ENOSYS, marking as inactive.
[ 18.050958]
[ 18.051083] ================================================
[ 18.051565] WARNING: lock held when returning to user space!
[ 18.052193] 5.7.0-rc3-00008-g90f0081848b85a #1 Not tainted
[ 18.052819] ------------------------------------------------
[ 18.053465] trinity-c1/686 is leaving the kernel with locks still held!
[ 18.054215] 1 lock held by trinity-c1/686:
[ 18.054687] #0: f42f28f8 (kn->active#6){++++}-{0:0}, at: kernfs_iop_rename+0x2b/0xd0
[child2:683] mq_getsetattr (282) returned ENOSYS, marking as inactive.
[child2:683] pkey_alloc (381) returned ENOSYS, marking as inactive.
[ 18.077451] VFS: Warning: trinity-c1 using old stat() call. Recompile your binary.
[child1:686] io_setup (245) returned ENOSYS, marking as inactive.
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 6af4b6f59d9d3cd72325056b7b25cf9016952263 6a8b55ed4056ea5559ebe4f6a4b247f627870d4c --
git bisect bad 705b896e3b86bfc9653f17c55c1c23b6e6369aad # 05:30 B 1 1 0 0 Merge 'linux-review/yuiko-oshino-microchip-com/net-phy-microchip_t1-add-lan87xx_phy_init-to-initialize-the-lan87xx-phy/20200422-105747' into devel-hourly-2020042821
git bisect bad d2f89f2e48319d5aa0916afb7409ba79a111bdca # 05:56 B 0 3 19 0 Merge 'linux-review/Shijie-Hu/hugetlbfs-Get-unmapped-area-below-TASK_UNMAPPED_BASE-for-hugetlbfs/20200428-071603' into devel-hourly-2020042821
git bisect good d6b6606d1fab7b5c0dcc6b71266136c8bf423eb1 # 06:41 G 13 0 0 0 Merge 'linux-review/David-Ahern/net-Add-support-for-XDP-in-egress-path/20200428-135444' into devel-hourly-2020042821
git bisect bad 6488dc0bee3801181a7e064817cff5cf0838ea47 # 07:04 B 1 4 0 0 Merge 'linux-review/Nishad-Kamdar/NFS-Use-the-correct-style-for-SPDX-License-Identifier/20200427-164819' into devel-hourly-2020042821
git bisect bad eadb2a0d40b96f5a93dd880e60b92656d8a358fe # 07:34 B 0 4 21 0 Merge 'linux-review/Wei-Yongjun/ath11k-use-GFP_ATOMIC-under-spin-lock/20200428-062853' into devel-hourly-2020042821
git bisect bad c2f6bab338be7aedaa3b4f5b14b2d9b1e98fa950 # 08:29 B 0 1 17 0 Merge 'linux-review/Wei-Yongjun/ath11k-remove-redundant-dev_err-call-in-ath11k_ahb_probe/20200428-070454' into devel-hourly-2020042821
git bisect bad 6d50d87806ace0b238c44be99e163f028e62b312 # 09:23 B 1 3 0 0 Merge 'linux-review/Anthony-Felice/net-tc35815-Fix-phydev-supported-advertising-mask/20200428-044618' into devel-hourly-2020042821
git bisect bad 3ae490f0ef7a579e0009e63e7a368f0bc2868fcf # 10:07 B 2 5 0 0 Merge 'linux-review/youngjun/kernfs-fix-possibility-of-NULL-pointer-dereference/20200428-092748' into devel-hourly-2020042821
git bisect good 1463aa7c6bee023a635540246f9b1e99e0c23f3f # 10:36 G 14 0 0 0 Merge 'linux-review/Gustavo-A-R-Silva/mtd-lpddr-Fix-bad-logic-bug-in-print_drs_error/20200428-120038' into devel-hourly-2020042821
git bisect good 69b07ee33eb12a505d55e3e716fc7452496b9041 # 13:32 G 14 0 1 1 debugfs: Use the correct style for SPDX License Identifier
git bisect good fbc35b45f9f6a971341b9462c6e94c257e779fb5 # 14:30 G 13 0 1 1 Add documentation on meaning of -EPROBE_DEFER
git bisect bad 90f0081848b85a7943dab6b1036c40f68ee60264 # 15:54 B 0 1 17 0 kernfs: fix possibility of NULL pointer dereference.
git bisect good 96fa72ffb2155dba9ba8c5d282a1ff19ed32f177 # 16:12 G 13 0 0 0 Merge 5.7-rc3 into driver-core-next
# first bad commit: [90f0081848b85a7943dab6b1036c40f68ee60264] kernfs: fix possibility of NULL pointer dereference.
git bisect good 96fa72ffb2155dba9ba8c5d282a1ff19ed32f177 # 16:20 G 39 0 1 1 Merge 5.7-rc3 into driver-core-next
# extra tests with debug options
git bisect bad 90f0081848b85a7943dab6b1036c40f68ee60264 # 17:00 B 1 1 0 0 kernfs: fix possibility of NULL pointer dereference.
# extra tests on head commit of linux-review/youngjun/kernfs-fix-possibility-of-NULL-pointer-dereference/20200428-092748
git bisect bad 90f0081848b85a7943dab6b1036c40f68ee60264 # 17:17 B 6 8 0 0 kernfs: fix possibility of NULL pointer dereference.
# bad: [90f0081848b85a7943dab6b1036c40f68ee60264] kernfs: fix possibility of NULL pointer dereference.
# extra tests on revert first bad commit
git bisect good cc2b347449ca6666b089dca93c3769e9e8813440 # 18:59 G 13 0 0 0 Revert "kernfs: fix possibility of NULL pointer dereference."
# good: [cc2b347449ca6666b089dca93c3769e9e8813440] Revert "kernfs: fix possibility of NULL pointer dereference."
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org
7322464a68 ("nsproxy: attach to namespaces via pidfds"): WARNING: CPU: 0 PID: 808 at lib/refcount.c:25 refcount_warn_saturate
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Christian-Brauner/nsproxy-attach...
commit 7322464a68c444dccde385b3b696f48e6d1bb5cc
Author: Christian Brauner <christian.brauner(a)ubuntu.com>
AuthorDate: Mon Apr 27 16:36:46 2020 +0200
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Tue Apr 28 08:20:37 2020 +0800
nsproxy: attach to namespaces via pidfds
For quite a while we have been thinking about using pidfds to attach to
namespaces. This patchset has existed for about a year already but we've
wanted to wait to see how the general api would be received and adopted.
Now that more and more programs in userspace have started using pidfds
for process management it's time to send this one out.
This patch makes it possible to use pidfds to attach to the namespaces
of another process, i.e. they can be passed as the first argument to the
setns() syscall. When only a single namespace type is specified the
semantics are equivalent to passing an nsfd. That means
setns(nsfd, CLONE_NEWNET) equals setns(pidfd, CLONE_NEWNET). However,
when a pidfd is passed, multiple namespace flags can be specified in the
second setns() argument and setns() will attach the caller to all the
specified namespaces all at once or to none of them. If 0 is specified
together with a pidfd then setns() will interpret it the same way 0 is
interpreted together with a nsfd argument, i.e. attach to any/all
namespaces.
The obvious example where this is useful is a standard container
manager interacting with a running container: pushing and pulling files
or directories, injecting mounts, attaching/execing any kind of process,
managing network devices all these operations require attaching to all
or at least multiple namespaces at the same time. Given that nowadays
most containers are spawned with all namespaces enabled we're currently
looking at at least 14 syscalls, 7 to open the /proc/<pid>/ns/<ns>
nsfds, another 7 to actually perform the namespace switch. With time
namespaces we're looking at about 16 syscalls.
(We could amortize the first 7 or 8 syscalls for opening the nsfds by
stashing them in each container's monitor process but that would mean
we need to send around those file descriptors through unix sockets
every time we want to interact with the container, or keep on-disk
state. Even in scenarios where a caller wants to join a particular
namespace in a particular order callers still profit from batching
other namespaces. That mostly applies to the user namespace but
all container runtimes I found join the user namespace first no matter
if it privileges or deprivileges the container.)
With pidfds this becomes a single syscall no matter how many namespaces
are supposed to be attached to.
A decently designed, large-scale container manager usually isn't the
parent of any of the containers it spawns so the containers don't die
when it crashes or needs to update or reinitialize. This means that
interacting with containers through pids is inherently racy for the
manager, especially on systems where the maximum pid number is not
significantly bumped. This is even more problematic since we often spawn
and manage thousands or tens of thousands of containers. Interacting with a
container through a pid thus can become risky quite quickly. Especially
since we allow for an administrator to enable advanced features such as
syscall interception where we're performing syscalls in lieu of the
container. In all of those cases we use pidfds if they are available and
we pass them around as stable references. Using them to setns() to the
target process namespaces is as reliable as using nsfds. Either the
target process is already dead and we get ESRCH or we manage to attach
to its namespaces but we can't accidently attach to another process'
namespaces. So pidfds lend themselves to be used with this api.
Apart from significantly reducing the number of syscalls from double
digits to single digits, which is a decent reason post-spectre/meltdown,
this also allows switching to a set of namespaces atomically, i.e.
either attaching to all the specified namespaces succeeds or we fail. If
we fail we haven't changed a single namespace. There are currently three
namespaces that can fail (other than for ENOMEM which really is not
very interesting since we then have other problems anyway) for
non-trivial reasons, user, mount, and pid namespaces. We can fail to
attach to a pid namespace if it is not our current active pid namespace
or a descendant of it. We can fail to attach to a user namespace because
we are multi-threaded, because our current mount namespace shares
filesystem state with other tasks, or because we're trying to setns()
to the same user namespace, i.e. the target task has the same user
namespace as we do. We can fail to attach to a mount namespace because
it shares filesystem state with other tasks or because we fail to lookup
the new root for the new mount namespace. In most non-pathological
scenarios these issues can be somewhat mitigated. But there's e.g.
still an inherent race between trying to setns() to the mount namespace
of a task and that task spawning a child with CLONE_FS. If that process
runs in a new user namespace we must have already setns()ed into the new
user namespace otherwise we fail to attach to the mount namespace. There
are other cases similar to that and we've had issues where we're
half-attached to some namespace and failing in the middle. I've talked
about some of these problems during the hallway track (something only the
pre-COVID-19 generation will remember) of Plumber in Los Angeles in
2018(?). Even if all these issues could be avoided with super careful
userspace coding it would be nicer to have this done in-kernel. There's
not a lot of cost associated with this extension for the kernel and
pidfds seem to lend themselves nicely for this.
Cc: Eric W. Biederman <ebiederm(a)xmission.com>
Cc: Serge Hallyn <serge(a)hallyn.com>
Cc: Aleksa Sarai <cyphar(a)cyphar.com>
Signed-off-by: Christian Brauner <christian.brauner(a)ubuntu.com>
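As a rough illustration of the userspace side this describes, the sketch below opens a pidfd for a target process and attaches to several of its namespaces in one setns() call. It assumes a kernel that carries this change and headers that define SYS_pidfd_open; the target pid and the particular flag combination are arbitrary examples, and the caller needs the usual privileges (e.g. CAP_SYS_ADMIN over the target namespaces) for the switch to succeed.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}

	pid_t pid = (pid_t)atoi(argv[1]);

	/* pidfd_open(2) may lack a libc wrapper, so go through syscall(2). */
	int pidfd = (int)syscall(SYS_pidfd_open, pid, 0);
	if (pidfd < 0) {
		perror("pidfd_open");
		return 1;
	}

	/* With this patch, several namespace flags can be passed at once:
	 * either we join all of the requested namespaces or none of them. */
	if (setns(pidfd, CLONE_NEWNET | CLONE_NEWUTS | CLONE_NEWIPC) < 0) {
		perror("setns");
		close(pidfd);
		return 1;
	}

	close(pidfd);
	printf("attached to the target's net, uts and ipc namespaces\n");
	return 0;
}

Compiled and run against the pid of a container's init process, this replaces the seven-plus open()/setns() round trips on /proc/<pid>/ns/<ns> files that the commit message counts.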
51184ae37e Merge tag 'for-5.7-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux
7322464a68 nsproxy: attach to namespaces via pidfds
+---------------------------------------------------+------------+------------+
| | 51184ae37e | 7322464a68 |
+---------------------------------------------------+------------+------------+
| boot_successes | 44 | 0 |
| boot_failures | 1 | 16 |
| Mem-Info | 1 | |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 10 |
| Oops:#[##] | 0 | 16 |
| EIP:__ia32_sys_setns | 0 | 11 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 16 |
| WARNING:at_lib/refcount.c:#refcount_warn_saturate | 0 | 6 |
| EIP:refcount_warn_saturate | 0 | 6 |
| BUG:unable_to_handle_page_fault_for_address | 0 | 6 |
| EIP:get_pid_task | 0 | 5 |
+---------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 9.233966] VFS: Warning: trinity-c3 using old stat() call. Recompile your binary.
[child3:813] set_mempolicy (276) returned ENOSYS, marking as inactive.
[ 9.237518] warning: process `trinity-c2' used the deprecated sysctl system call with
[ 9.238724] ------------[ cut here ]------------
[ 9.239358] refcount_t: addition on 0; use-after-free.
[ 9.240095] WARNING: CPU: 0 PID: 808 at lib/refcount.c:25 refcount_warn_saturate+0xba/0x120
[ 9.241429] Modules linked in:
[ 9.241851] CPU: 0 PID: 808 Comm: trinity-c3 Tainted: G S 5.7.0-rc3-00013-g7322464a68c44 #1
[ 9.243138] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 9.244244] EIP: refcount_warn_saturate+0xba/0x120
[ 9.244884] Code: 58 e9 87 00 00 00 8d b4 26 00 00 00 00 8d 76 00 80 3d f4 84 f2 d1 00 75 74 68 24 07 bc d1 c6 05 f4 84 f2 d1 01 e8 36 56 c9 ff <0f> 0b 58 eb 5e 90 80 3d f3 84 f2 d1 00 75 54 68 50 07 bc d1 c6 05
[ 9.247345] EAX: 0000002a EBX: f66d0b44 ECX: 0000008b EDX: f7314300
[ 9.248175] ESI: 00000000 EDI: f7314300 EBP: f7317f64 ESP: f7317f60
[ 9.249001] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010292
[ 9.249909] CR0: 80050033 CR2: b6d3c000 CR3: 2c9e6000 CR4: 000006d0
[ 9.250755] Call Trace:
[ 9.251099] get_pid_task+0x5e/0xa0
[ 9.251576] __ia32_sys_setns+0xcd/0x440
[ 9.252100] ? __task_pid_nr_ns+0xb7/0xd0
[ 9.252643] do_int80_syscall_32+0x45/0xd0
[ 9.253199] entry_INT80_32+0xf4/0xf4
[ 9.253691] EIP: 0x809b132
[ 9.254065] Code: 89 c8 c3 90 8d 74 26 00 85 c0 c7 01 01 00 00 00 75 d8 a1 6c 94 a8 08 eb d1 66 90 66 90 66 90 66 90 66 90 66 90 66 90 90 cd 80 <c3> 8d b6 00 00 00 00 8d bc 27 00 00 00 00 8b 10 a3 94 94 a8 08 85
[ 9.256568] EAX: ffffffda EBX: 0000011c ECX: 00000000 EDX: 4a03bed6
[ 9.257399] ESI: fffffffb EDI: 0d258544 EBP: 0000c000 ESP: bfea7cd8
[ 9.258231] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000296
[ 9.259145] ---[ end trace 6da55ee5a0c4840d ]---
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 5d84712bd468a94c6dc944824cbe62278e9ba112 6a8b55ed4056ea5559ebe4f6a4b247f627870d4c --
git bisect bad 56a717a7c44e387d6d9ac05bdb00177ed94073c7 # 02:19 B 0 1 17 0 Merge 'linux-review/Chuck-Lever/NFS-RDMA-client-patches-for-v5-7-rc/20200421-201520' into devel-hourly-2020042818
git bisect bad 6b419cca3362544aa5b545cfd147f80268dbf995 # 02:48 B 0 4 20 0 Merge 'linux-review/Nishad-Kamdar/f2fs-Use-the-correct-style-for-SPDX-License-Identifier/20200427-163732' into devel-hourly-2020042818
git bisect good 31d4507e44281e3a526306083d7240ae9727ca59 # 03:18 G 13 0 0 0 Merge 'linux-review/Denis-Kirjanov/xen-networking-add-basic-XDP-support-for-xen-netfront/20200428-083754' into devel-hourly-2020042818
git bisect bad 39c0c195552fd7f3999a276a4d32a04f990541a4 # 03:53 B 1 3 1 1 Merge 'linux-review/Mateusz-Gorski/Add-support-for-different-DMIC-configurations/20200428-074312' into devel-hourly-2020042818
git bisect good 57e099d6b915c6fdf563859662b3c49766011ba9 # 04:29 G 13 0 6 6 Merge 'linux-review/Jin-Yao/perf-stat-Fix-uncore-event-mixed-metric-with-workload-error-issue/20200428-082344' into devel-hourly-2020042818
git bisect good 01c0ef45fd240689365da8765fe716700fac6ad6 # 05:00 G 13 0 4 4 Merge 'linux-review/Nishad-Kamdar/NFS-Use-the-correct-style-for-SPDX-License-Identifier/20200427-164819' into devel-hourly-2020042818
git bisect good 864d22c79992955a3bcb81ec9b65be07ffdfe3a4 # 08:35 G 14 0 7 7 Merge 'linux-review/Marek-Szyprowski/Minor-WM8994-MFD-codec-fixes/20200428-055602' into devel-hourly-2020042818
git bisect good 1ac57c71a6f74f5d705f4718bc0cad04b9de22b7 # 09:19 G 13 0 7 7 Merge 'linux-review/madhuparnabhowmik10-gmail-com/rapidio-Avoid-data-race-between-file-operation-callbacks-and-mport_cdev_add/20200428-031110' into devel-hourly-2020042818
git bisect good 98d171a4cb8c07fa091f8ad5cbe9187771ce15ef # 10:06 G 13 0 6 6 Merge 'linux-review/Masahiro-Yamada/kbuild-remove-target/20200427-162454' into devel-hourly-2020042818
git bisect bad 771b1c0092cd189d26703e37d6d5a62d064841d4 # 10:53 B 1 1 1 1 Merge 'linux-review/Christian-Brauner/nsproxy-attach-to-namespaces-via-pidfds/20200428-082028' into devel-hourly-2020042818
git bisect bad 7322464a68c444dccde385b3b696f48e6d1bb5cc # 15:15 B 1 3 1 1 nsproxy: attach to namespaces via pidfds
# first bad commit: [7322464a68c444dccde385b3b696f48e6d1bb5cc] nsproxy: attach to namespaces via pidfds
git bisect good 51184ae37e0518fd90cb437a2fbc953ae558cd0d # 16:08 G 47 0 5 5 Merge tag 'for-5.7-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux
# extra tests with debug options
git bisect bad 7322464a68c444dccde385b3b696f48e6d1bb5cc # 16:41 B 3 1 3 3 nsproxy: attach to namespaces via pidfds
# extra tests on head commit of linux-review/Christian-Brauner/nsproxy-attach-to-namespaces-via-pidfds/20200428-082028
git bisect bad 7322464a68c444dccde385b3b696f48e6d1bb5cc # 16:48 B 0 11 32 5 nsproxy: attach to namespaces via pidfds
# bad: [7322464a68c444dccde385b3b696f48e6d1bb5cc] nsproxy: attach to namespaces via pidfds
# extra tests on revert first bad commit
git bisect good a845404448ebed36cbab42e09ce85fcebd48e701 # 17:19 G 15 0 1 1 Revert "nsproxy: attach to namespaces via pidfds"
# good: [a845404448ebed36cbab42e09ce85fcebd48e701] Revert "nsproxy: attach to namespaces via pidfds"
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org