[drm/i915] 04ff178484: phoronix-test-suite.supertuxkart.1024x768.Fullscreen.Ultimate.1.GranParadisoIsland.frames_per_second -30.4% regression
by kernel test robot
Greetings,
FYI, we noticed a -30.4% regression of phoronix-test-suite.supertuxkart.1024x768.Fullscreen.Ultimate.1.GranParadisoIsland.frames_per_second due to commit:
commit: 04ff1784840f5f954a656c7e8795c76467e29128 ("[Intel-gfx] [PATCH] drm/i915: Enable WaProgramMgsrForCorrectSliceSpecificMmioReads for Gen9")
url: https://github.com/0day-ci/linux/commits/Cooper-Chiou/drm-i915-Enable-WaP...
base: git://anongit.freedesktop.org/drm-intel for-linux-next
in testcase: phoronix-test-suite
on test machine: 4 threads Intel(R) Core(TM) i7-7567U CPU @ 3.50GHz with 32G memory
with the following parameters:
need_x: true
test: supertuxkart-1.5.2
option_a: Fullscreen
option_b: Ultimate
option_c: 1
option_d: Gran Paradiso Island [Approximately 275k triangles; advanced graphics]
cpufreq_governor: performance
ucode: 0xd6
test-description: The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available; it provides an extensible framework to which new tests can be easily added.
test-url: http://www.phoronix-test-suite.com/
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/need_x/option_a/option_b/option_c/option_d/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/true/Fullscreen/Ultimate/1/Gran Paradiso Island [Approximately 275k triangles; advanced graphics]/debian-x86_64-phoronix/lkp-kbl-nuc1/supertuxkart-1.5.2/phoronix-test-suite/0xd6
commit:
400d4953f1 ("drm/i915/pll: Centralize PLL_ENABLE register lookup")
04ff178484 ("drm/i915: Enable WaProgramMgsrForCorrectSliceSpecificMmioReads for Gen9")
400d4953f1f434d5 04ff1784840f5f954a656c7e879
---------------- ---------------------------
%stddev %change %stddev
\ | \
35.32 -30.4% 24.59 phoronix-test-suite.supertuxkart.1024x768.Fullscreen.Ultimate.1.GranParadisoIsland.frames_per_second
3638 ± 15% -39.2% 2211 ± 4% phoronix-test-suite.time.involuntary_context_switches
32.00 -30.5% 22.25 phoronix-test-suite.time.percent_of_cpu_this_job_got
phoronix-test-suite.supertuxkart.1024x768.Fullscreen.Ultimate.1.GranParadisoIsland.frames_per_second
36 +----------------------------------------------------------------------+
|.+.+..+.+.+.+..+.+.+.+.+..+.+.+.+..+.+.+.+.+..+.+.+.+.+..+ +.+..+.+.|
34 |-+ |
| |
| |
32 |-+ |
| |
30 |-+ |
| |
28 |-+ |
| |
| |
26 |-+ |
| O O O O O O O O O O O O O O O O O O O O O O |
24 +----------------------------------------------------------------------+
phoronix-test-suite.time.percent_of_cpu_this_job_got
32 +----------------------------------------------------------------------+
| + +. + + +.+.+. |
30 |-+ |
| |
| |
28 |-+ |
| |
26 |-+ |
| |
24 |-+ |
| O O |
| |
22 |-O O O O O O O O O O O O O O O O O O O |
| O |
20 +----------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[xfs] db962cd266: Assertion_failed
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: db962cd266c4d3230436aec2899186510797d49f ("[PATCH v14 4/4] xfs: use current->journal_info to avoid transaction reservation recursion")
url: https://github.com/0day-ci/linux/commits/Yafang-Shao/xfs-avoid-transactio...
base: https://git.kernel.org/cgit/fs/xfs/xfs-linux.git for-next
in testcase: filebench
version: filebench-x86_64-22620e6-1_20201112
with the following parameters:
disk: 1HDD
fs: btrfs
test: fivestreamwrite.f
cpufreq_governor: performance
ucode: 0x42e
on test machine: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory
caused the following changes (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 552.905799] XFS: Assertion failed: !current->journal_info, file: fs/xfs/xfs_trans.h, line: 280
[ 552.916574] ------------[ cut here ]------------
[ 552.940036] kernel BUG at fs/xfs/xfs_message.c:110!
[ 552.955784] invalid opcode: 0000 [#1] SMP PTI
[ 552.971010] CPU: 46 PID: 3793 Comm: kexec-lkp Not tainted 5.10.0-rc5-00044-gdb962cd266c4 #1
[ 552.981331] Hardware name: Intel Corporation S2600WP/S2600WP, BIOS SE5C600.86B.02.02.0002.122320131210 12/23/2013
[ 552.993907] RIP: 0010:assfail+0x23/0x28 [xfs]
[ 552.999797] Code: 67 fc ff ff 0f 0b c3 0f 1f 44 00 00 41 89 c8 48 89 d1 48 89 f2 48 c7 c6 30 58 be c0 e8 82 f9 ff ff 80 3d b1 80 0a 00 00 74 02 <0f> 0b 0f 0b c3 48 8d 45 10 48 89 e2 4c 89 e6 48 89 1c 24 48 89 44
[ 553.022798] RSP: 0018:ffffc90006a139c8 EFLAGS: 00010202
[ 553.029624] RAX: 0000000000000000 RBX: ffff889c3edea700 RCX: 0000000000000000
[ 553.038646] RDX: 00000000ffffffc0 RSI: 000000000000000a RDI: ffffffffc0bd7bab
[ 553.047600] RBP: ffffc90006a13a14 R08: 0000000000000000 R09: 0000000000000000
[ 553.056536] R10: 000000000000000a R11: f000000000000000 R12: 0000000000000000
[ 553.065546] R13: 0000000000000000 R14: ffff889c3ede91c8 R15: ffff888f44608000
[ 553.074455] FS: 00007ffff7fc9580(0000) GS:ffff889bea380000(0000) knlGS:0000000000000000
[ 553.084494] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 553.091837] CR2: 00005555555a1420 CR3: 0000001c30fbe002 CR4: 00000000001706e0
[ 553.100720] Call Trace:
[ 553.104459] xfs_trans_reserve+0x225/0x320 [xfs]
[ 553.110556] xfs_trans_roll+0x6e/0xe0 [xfs]
[ 553.116134] xfs_defer_trans_roll+0x104/0x2a0 [xfs]
[ 553.122489] ? xfs_extent_free_create_intent+0x62/0xc0 [xfs]
[ 553.129780] xfs_defer_finish_noroll+0xb8/0x620 [xfs]
[ 553.136299] xfs_defer_finish+0x11/0xa0 [xfs]
[ 553.142017] xfs_itruncate_extents_flags+0x141/0x440 [xfs]
[ 553.149053] xfs_setattr_size+0x3da/0x480 [xfs]
[ 553.154939] ? setattr_prepare+0x6a/0x1e0
[ 553.160250] xfs_vn_setattr+0x70/0x120 [xfs]
[ 553.165833] notify_change+0x364/0x500
[ 553.170820] ? do_truncate+0x76/0xe0
[ 553.175673] do_truncate+0x76/0xe0
[ 553.180184] path_openat+0xe6c/0x10a0
[ 553.184981] do_filp_open+0x91/0x100
[ 553.189707] ? __check_object_size+0x136/0x160
[ 553.195493] do_sys_openat2+0x20d/0x2e0
[ 553.200481] do_sys_open+0x44/0x80
[ 553.204926] do_syscall_64+0x33/0x40
[ 553.209588] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 553.215867] RIP: 0033:0x7ffff7ef11ae
[ 553.220579] Code: 25 00 00 41 00 3d 00 00 41 00 74 48 48 8d 05 59 65 0d 00 8b 00 85 c0 75 69 89 f2 b8 01 01 00 00 48 89 fe bf 9c ff ff ff 0f 05 <48> 3d 00 f0 ff ff 0f 87 a6 00 00 00 48 8b 4c 24 28 64 48 33 0c 25
[ 553.242870] RSP: 002b:00007fffffffc980 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
[ 553.251949] RAX: ffffffffffffffda RBX: 000055555556afcc RCX: 00007ffff7ef11ae
[ 553.260586] RDX: 0000000000000241 RSI: 00005555555aaa40 RDI: 00000000ffffff9c
[ 553.269217] RBP: 0000555555577bd0 R08: 00005555555a250f R09: 00005555555783b0
[ 553.277804] R10: 00000000000001b6 R11: 0000000000000246 R12: 00005555555aaa40
[ 553.286406] R13: 00000000fffffffd R14: 00005555555a1400 R15: 0000000000000010
[ 553.294926] Modules linked in: dm_mod xfs btrfs blake2b_generic xor zstd_compress raid6_pq libcrc32c sd_mod t10_pi sg intel_rapl_msr intel_rapl_common sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm mgag200 irqbypass crct10dif_pclmul drm_kms_helper crc32_pclmul crc32c_intel ghash_clmulni_intel isci syscopyarea sysfillrect sysimgblt rapl fb_sys_fops ahci libsas libahci ipmi_si scsi_transport_sas mei_me intel_cstate ipmi_devintf ioatdma drm mei intel_uncore ipmi_msghandler libata dca wmi joydev ip_tables
[ 553.349820] ---[ end trace 41e34856cd03d8f3 ]---
[ 553.359002] RIP: 0010:assfail+0x23/0x28 [xfs]
[ 553.364558] Code: 67 fc ff ff 0f 0b c3 0f 1f 44 00 00 41 89 c8 48 89 d1 48 89 f2 48 c7 c6 30 58 be c0 e8 82 f9 ff ff 80 3d b1 80 0a 00 00 74 02 <0f> 0b 0f 0b c3 48 8d 45 10 48 89 e2 4c 89 e6 48 89 1c 24 48 89 44
[ 553.386866] RSP: 0018:ffffc90006a139c8 EFLAGS: 00010202
[ 553.393357] RAX: 0000000000000000 RBX: ffff889c3edea700 RCX: 0000000000000000
[ 553.402093] RDX: 00000000ffffffc0 RSI: 000000000000000a RDI: ffffffffc0bd7bab
[ 553.410746] RBP: ffffc90006a13a14 R08: 0000000000000000 R09: 0000000000000000
[ 553.419499] R10: 000000000000000a R11: f000000000000000 R12: 0000000000000000
[ 553.428122] R13: 0000000000000000 R14: ffff889c3ede91c8 R15: ffff888f44608000
[ 553.436764] FS: 00007ffff7fc9580(0000) GS:ffff889bea380000(0000) knlGS:0000000000000000
[ 553.446562] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 553.453670] CR2: 00005555555a1420 CR3: 0000001c30fbe002 CR4: 00000000001706e0
[ 553.462302] Kernel panic - not syncing: Fatal exception
[ 553.513856] Kernel Offset: disabled
ACPI MEMORY or I/O RESET_REG.
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Oliver Sang
[mm/swap] aae466b005: vm-scalability.throughput -2.7% regression
by kernel test robot
Greetings,
FYI, we noticed a -2.7% regression of vm-scalability.throughput due to commit:
commit: aae466b0052e1888edd1d7f473d4310d64936196 ("mm/swap: implement workingset detection for anonymous LRU")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: vm-scalability
on test machine: 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory
with the following parameters:
runtime: 300
thp_enabled: never
thp_defrag: always
nr_task: 32
nr_ssd: 1
test: swap-w-rand-mt
cpufreq_governor: performance
ucode: 0x4003003
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
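The commit extends workingset (refault-distance) detection, previously used for file pages, to the anonymous LRU: on eviction a shadow entry records an eviction timestamp, and on refault the page is activated if its refault distance fits within the list size. A hedged Python sketch of that idea, with illustrative names rather than the kernel's:

```python
from collections import OrderedDict

class Lru:
    """Toy LRU with shadow entries for refault-distance detection.

    A sketch of the mechanism only: the kernel tracks eviction
    timestamps in shadow radix-tree entries and compares the refault
    distance against the size of the list the page would join.
    """

    def __init__(self, size):
        self.size = size              # pages that fit in memory
        self.pages = OrderedDict()    # resident pages, LRU order
        self.shadows = {}             # evicted page -> eviction "time"
        self.clock = 0                # advances on every eviction
        self.active = set()           # pages promoted to the active list

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)
            return "hit"
        state = "miss"
        if page in self.shadows:
            # Refault distance: evictions since this page was evicted.
            distance = self.clock - self.shadows.pop(page)
            if distance <= self.size:
                # Evicted too recently: part of the workingset, activate.
                self.active.add(page)
                state = "refault-activate"
            else:
                state = "refault"
        self.pages[page] = True
        if len(self.pages) > self.size:
            victim, _ = self.pages.popitem(last=False)
            self.active.discard(victim)
            self.shadows[victim] = self.clock
            self.clock += 1
        return state
```

With a 2-page LRU, touching a third page evicts the first; touching the evicted page again refaults at distance 1 and gets activated, which is the behavior the commit adds for anonymous pages.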
In addition, the commit has a significant impact on the following tests:
+------------------+------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 12.8% improvement |
| test machine | 96 threads Intel(R) Xeon(R) CPU @ 2.30GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | nr_pmem=2 |
| | nr_task=8 |
| | priority=1 |
| | test=swap-w-seq |
| | thp_defrag=never |
| | thp_enabled=never |
| | ucode=0x4003003 |
+------------------+------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median -1.3% regression |
| test machine | 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | nr_ssd=1 |
| | nr_task=32 |
| | runtime=300 |
| | test=swap-w-rand-mt |
| | thp_defrag=always |
| | thp_enabled=always |
| | ucode=0x4003003 |
+------------------+------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 17.2% improvement |
| test machine | 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory |
| test parameters | cpufreq_governor=performance |
| | nr_pmem=4 |
| | nr_task=8 |
| | priority=1 |
| | test=swap-w-seq |
| | thp_defrag=never |
| | thp_enabled=never |
| | ucode=0x16 |
+------------------+------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_ssd/nr_task/rootfs/runtime/tbox_group/test/testcase/thp_defrag/thp_enabled/ucode:
gcc-9/performance/x86_64-rhel-8.3/1/32/debian-10.4-x86_64-20200603.cgz/300/lkp-csl-2ap1/swap-w-rand-mt/vm-scalability/always/never/0x4003003
commit:
3852f6768e ("mm/swapcache: support to handle the shadow entries")
aae466b005 ("mm/swap: implement workingset detection for anonymous LRU")
3852f6768ede542e aae466b0052e1888edd1d7f473d
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 5% 1:4 perf-profile.children.cycles-pp.error_entry
0:4 2% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
3.97 +33.0% 5.28 ± 2% vm-scalability.free_time
6216 -2.4% 6065 vm-scalability.median
184.21 +57.2 241.41 ± 2% vm-scalability.stddev%
198352 -2.7% 193018 vm-scalability.throughput
368.12 +6.4% 391.63 vm-scalability.time.elapsed_time
368.12 +6.4% 391.63 vm-scalability.time.elapsed_time.max
18610 ± 29% -47.9% 9704 ± 11% vm-scalability.time.involuntary_context_switches
320.00 ± 4% -6.3% 299.75 ± 2% vm-scalability.time.percent_of_cpu_this_job_got
11.22 -4.9% 10.67 iostat.cpu.iowait
0.11 -0.0 0.09 mpstat.cpu.all.soft%
30334488 -44.9% 16726456 cpuidle.POLL.time
6991777 -46.0% 3774649 ± 2% cpuidle.POLL.usage
164847 ± 2% +387.8% 804132 slabinfo.radix_tree_node.active_objs
3155 ± 2% +372.3% 14905 slabinfo.radix_tree_node.active_slabs
176738 ± 2% +372.3% 834722 slabinfo.radix_tree_node.num_objs
3155 ± 2% +372.3% 14905 slabinfo.radix_tree_node.num_slabs
481751 ± 4% +11.1% 535142 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
44029 ± 48% +720.6% 361329 ± 44% sched_debug.cfs_rq:/.spread0.max
0.03 ± 10% +17.7% 0.03 ± 12% sched_debug.cpu.nr_running.avg
0.16 ± 5% +8.0% 0.17 ± 5% sched_debug.cpu.nr_running.stddev
2921 ± 29% -39.4% 1770 ± 11% sched_debug.cpu.nr_switches.min
129531 ± 10% +33.7% 173189 ± 10% sched_debug.cpu.nr_switches.stddev
284476 -5.3% 269500 ± 2% vmstat.io.bi
829640 -4.6% 791105 vmstat.io.bo
689.50 ± 24% -52.7% 326.00 ± 17% vmstat.memory.buff
1695753 +18.6% 2012004 ± 2% vmstat.memory.cache
24761938 +21.1% 29998510 vmstat.memory.free
284470 -5.3% 269494 ± 2% vmstat.swap.si
829635 -4.6% 791100 vmstat.swap.so
147287 -4.9% 140140 vmstat.system.cs
39473212 ± 2% -65.6% 13576005 ± 8% meminfo.Active
39472833 ± 2% -65.6% 13575638 ± 8% meminfo.Active(anon)
28016 ± 11% +27.6% 35751 ± 5% meminfo.CmaFree
1.295e+08 +15.8% 1.499e+08 meminfo.Inactive
1.295e+08 +15.8% 1.499e+08 meminfo.Inactive(anon)
179771 +209.6% 556606 meminfo.KReclaimable
23750082 +22.5% 29103301 meminfo.MemAvailable
24371669 +21.2% 29537348 meminfo.MemFree
268.00 ± 4% +452.0% 1479 ± 42% meminfo.Mlocked
179771 +209.6% 556606 meminfo.SReclaimable
501211 +76.2% 883374 meminfo.Slab
2015638 ± 20% +81.6% 3659798 ± 15% proc-vmstat.compact_daemon_free_scanned
2015638 ± 20% +81.6% 3659798 ± 15% proc-vmstat.compact_free_scanned
9901379 -65.8% 3389780 ± 8% proc-vmstat.nr_active_anon
41693055 -3.5% 40233791 proc-vmstat.nr_anon_pages
591382 +22.5% 724480 proc-vmstat.nr_dirty_background_threshold
1184213 +22.5% 1450733 proc-vmstat.nr_dirty_threshold
7034 ± 12% +27.6% 8976 ± 5% proc-vmstat.nr_free_cma
6089969 +21.9% 7425242 proc-vmstat.nr_free_pages
32343769 +15.8% 37442761 proc-vmstat.nr_inactive_anon
153.50 ± 28% -63.2% 56.50 ± 26% proc-vmstat.nr_inactive_file
11447 -4.9% 10883 ± 2% proc-vmstat.nr_mapped
66.75 ± 4% +453.9% 369.75 ± 42% proc-vmstat.nr_mlock
225376 -2.8% 219090 proc-vmstat.nr_page_table_pages
135218 ± 2% -10.8% 120652 ± 8% proc-vmstat.nr_shmem
44994 +209.1% 139062 proc-vmstat.nr_slab_reclaimable
47878384 +5.7% 50604735 proc-vmstat.nr_vmscan_write
77049882 +1.4% 78117892 proc-vmstat.nr_written
9901589 -65.8% 3390472 ± 8% proc-vmstat.nr_zone_active_anon
32344898 +15.8% 37443693 proc-vmstat.nr_zone_inactive_anon
153.25 ± 28% -63.1% 56.50 ± 26% proc-vmstat.nr_zone_inactive_file
222877 ±173% +7663.4% 17302906 ± 3% proc-vmstat.pgdeactivate
3.082e+08 +1.4% 3.125e+08 proc-vmstat.pgpgout
222877 ±173% +7663.4% 17302906 ± 3% proc-vmstat.pgrefill
77049882 +1.4% 78117893 proc-vmstat.pswpout
3950 ± 8% +27.7% 5043 ± 6% proc-vmstat.swap_ra_hit
1.582e+09 -3.1% 1.532e+09 perf-stat.i.branch-instructions
148737 -4.9% 141447 perf-stat.i.context-switches
2.114e+09 -3.8% 2.033e+09 perf-stat.i.dTLB-loads
1.112e+09 -3.8% 1.07e+09 perf-stat.i.dTLB-stores
58.64 -0.9 57.71 perf-stat.i.iTLB-load-miss-rate%
8601022 ± 3% -6.8% 8018116 ± 2% perf-stat.i.iTLB-load-misses
7.824e+09 -3.0% 7.588e+09 perf-stat.i.instructions
71817 ± 2% -5.3% 68008 ± 2% perf-stat.i.major-faults
1.10 ± 2% -5.9% 1.04 ± 2% perf-stat.i.metric.K/sec
25.50 -3.4% 24.62 perf-stat.i.metric.M/sec
271982 -5.5% 257059 perf-stat.i.minor-faults
86.69 -2.5 84.16 ± 2% perf-stat.i.node-load-miss-rate%
3592897 ± 3% -7.2% 3335717 perf-stat.i.node-store-misses
343799 -5.4% 325068 perf-stat.i.page-faults
87.63 -2.2 85.44 perf-stat.overall.node-load-miss-rate%
44275 +3.2% 45673 perf-stat.overall.path-length
1.581e+09 -3.1% 1.532e+09 perf-stat.ps.branch-instructions
147972 -4.9% 140775 ± 2% perf-stat.ps.context-switches
2.112e+09 -3.8% 2.032e+09 perf-stat.ps.dTLB-loads
1.111e+09 -3.7% 1.069e+09 perf-stat.ps.dTLB-stores
8576754 ± 3% -6.7% 8000293 ± 2% perf-stat.ps.iTLB-load-misses
7.82e+09 -3.0% 7.587e+09 perf-stat.ps.instructions
71434 ± 2% -5.3% 67666 ± 2% perf-stat.ps.major-faults
275553 -5.7% 259748 perf-stat.ps.minor-faults
3597708 ± 3% -7.1% 3342209 ± 2% perf-stat.ps.node-store-misses
346987 -5.6% 327415 perf-stat.ps.page-faults
2.89e+12 +3.2% 2.982e+12 perf-stat.total.instructions
2797818 ± 13% -62.0% 1062303 ± 18% numa-vmstat.node0.nr_active_anon
7606015 ± 4% +18.5% 9015289 ± 3% numa-vmstat.node0.nr_inactive_anon
2841 ± 20% -23.7% 2167 ± 12% numa-vmstat.node0.nr_mapped
21.25 ± 17% +364.7% 98.75 ± 41% numa-vmstat.node0.nr_mlock
399.50 ± 58% +40928.8% 163910 ± 57% numa-vmstat.node0.nr_page_table_pages
23030 ± 37% -49.7% 11581 ± 35% numa-vmstat.node0.nr_shmem
10778 ± 18% +154.0% 27381 ± 28% numa-vmstat.node0.nr_slab_reclaimable
2797850 ± 13% -62.0% 1062427 ± 18% numa-vmstat.node0.nr_zone_active_anon
7606315 ± 4% +18.5% 9015536 ± 3% numa-vmstat.node0.nr_zone_inactive_anon
2434112 ± 4% -70.0% 729671 ± 7% numa-vmstat.node1.nr_active_anon
1478062 ± 6% +24.7% 1842766 ± 4% numa-vmstat.node1.nr_free_pages
8286756 +15.8% 9597629 numa-vmstat.node1.nr_inactive_anon
13.75 ± 6% +678.2% 107.00 ± 52% numa-vmstat.node1.nr_mlock
11056 ± 7% +265.0% 40351 ± 11% numa-vmstat.node1.nr_slab_reclaimable
2434168 ± 4% -70.0% 729866 ± 7% numa-vmstat.node1.nr_zone_active_anon
8287096 +15.8% 9597821 numa-vmstat.node1.nr_zone_inactive_anon
2143905 ± 13% -62.8% 797472 ± 24% numa-vmstat.node2.nr_active_anon
1494773 ± 5% +29.0% 1927632 ± 9% numa-vmstat.node2.nr_free_pages
16.50 ± 29% +400.0% 82.50 ± 44% numa-vmstat.node2.nr_mlock
11153 ± 14% +211.9% 34787 ± 26% numa-vmstat.node2.nr_slab_reclaimable
2143957 ± 13% -62.8% 797647 ± 24% numa-vmstat.node2.nr_zone_active_anon
2483365 ± 7% -67.9% 797268 ± 4% numa-vmstat.node3.nr_active_anon
7022 ± 12% +27.8% 8975 ± 5% numa-vmstat.node3.nr_free_cma
1480903 +26.7% 1876124 numa-vmstat.node3.nr_free_pages
8081690 ± 3% +17.4% 9491688 numa-vmstat.node3.nr_inactive_anon
8163 ± 8% -19.9% 6535 numa-vmstat.node3.nr_kernel_stack
11935 ± 4% +205.0% 36402 ± 4% numa-vmstat.node3.nr_slab_reclaimable
20598 ± 13% -21.9% 16083 ± 9% numa-vmstat.node3.nr_slab_unreclaimable
2483418 ± 7% -67.9% 797444 ± 4% numa-vmstat.node3.nr_zone_active_anon
8081950 ± 3% +17.4% 9491926 numa-vmstat.node3.nr_zone_inactive_anon
18061676 ± 9% +16.2% 20994129 ± 9% numa-vmstat.node3.numa_hit
17948663 ± 9% +16.0% 20828234 ± 9% numa-vmstat.node3.numa_local
11235292 ± 14% -62.1% 4253332 ± 18% numa-meminfo.node0.Active
11235045 ± 14% -62.1% 4253174 ± 18% numa-meminfo.node0.Active(anon)
30372185 ± 5% +18.7% 36043049 ± 3% numa-meminfo.node0.Inactive
30372149 ± 5% +18.7% 36043021 ± 3% numa-meminfo.node0.Inactive(anon)
43141 ± 18% +154.4% 109749 ± 28% numa-meminfo.node0.KReclaimable
43141 ± 18% +154.4% 109749 ± 28% numa-meminfo.node0.SReclaimable
92323 ± 37% -49.7% 46402 ± 35% numa-meminfo.node0.Shmem
142930 ± 14% +51.3% 216266 ± 15% numa-meminfo.node0.Slab
9771157 ± 4% -70.1% 2923440 ± 7% numa-meminfo.node1.Active
9771112 ± 4% -70.1% 2923413 ± 7% numa-meminfo.node1.Active(anon)
33103934 +15.9% 38369090 numa-meminfo.node1.Inactive
33103732 +15.9% 38369003 numa-meminfo.node1.Inactive(anon)
44315 ± 7% +264.5% 161527 ± 11% numa-meminfo.node1.KReclaimable
5921070 ± 6% +24.8% 7388272 ± 4% numa-meminfo.node1.MemFree
44315 ± 7% +264.5% 161527 ± 11% numa-meminfo.node1.SReclaimable
107817 ± 3% +121.5% 238839 ± 5% numa-meminfo.node1.Slab
8607727 ± 13% -62.9% 3193249 ± 23% numa-meminfo.node2.Active
8607678 ± 13% -62.9% 3193160 ± 23% numa-meminfo.node2.Active(anon)
44678 ± 14% +212.0% 139381 ± 26% numa-meminfo.node2.KReclaimable
5985691 ± 4% +29.1% 7729464 ± 8% numa-meminfo.node2.MemFree
44678 ± 14% +212.0% 139381 ± 26% numa-meminfo.node2.SReclaimable
120407 ± 19% +81.0% 217905 ± 20% numa-meminfo.node2.Slab
9966813 ± 7% -68.0% 3193310 ± 4% numa-meminfo.node3.Active
9966773 ± 7% -68.0% 3193215 ± 4% numa-meminfo.node3.Active(anon)
32288714 ± 3% +17.5% 37944173 numa-meminfo.node3.Inactive
32288483 ± 3% +17.5% 37944122 numa-meminfo.node3.Inactive(anon)
47843 ± 4% +204.8% 145824 ± 4% numa-meminfo.node3.KReclaimable
8154 ± 8% -19.9% 6535 numa-meminfo.node3.KernelStack
12199 ± 5% +18.9% 14501 ± 7% numa-meminfo.node3.Mapped
5928259 ± 2% +26.9% 7523247 ± 2% numa-meminfo.node3.MemFree
47843 ± 4% +204.8% 145824 ± 4% numa-meminfo.node3.SReclaimable
82430 ± 13% -21.9% 64341 ± 9% numa-meminfo.node3.SUnreclaim
130274 ± 8% +61.3% 210167 ± 3% numa-meminfo.node3.Slab
0.01 ± 13% +45.5% 0.01 ± 15% perf-sched.sch_delay.avg.ms.__sched_text_start.__sched_text_start.do_syslog.part.0
0.00 ± 34% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__sched_text_start.__sched_text_start.preempt_schedule_common._cond_resched.do_swap_page
0.01 ± 6% +24.0% 0.01 ± 5% perf-sched.sch_delay.avg.ms.__sched_text_start.__sched_text_start.schedule_timeout.rcu_gp_kthread.kthread
0.01 ± 13% +45.5% 0.01 ± 15% perf-sched.sch_delay.max.ms.__sched_text_start.__sched_text_start.do_syslog.part.0
0.01 ± 40% -80.9% 0.00 ± 19% perf-sched.sch_delay.max.ms.__sched_text_start.__sched_text_start.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
0.00 ± 72% -100.0% 0.00 perf-sched.sch_delay.max.ms.__sched_text_start.__sched_text_start.preempt_schedule_common._cond_resched.do_swap_page
0.01 ± 20% +75.0% 0.03 ± 22% perf-sched.sch_delay.max.ms.__sched_text_start.__sched_text_start.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
0.03 ± 25% +63.0% 0.05 ± 43% perf-sched.sch_delay.max.ms.__sched_text_start.__sched_text_start.schedule_timeout.rcu_gp_kthread.kthread
0.82 ±147% -92.7% 0.06 ± 69% perf-sched.sch_delay.max.ms.__sched_text_start.__sched_text_start.smpboot_thread_fn.kthread.ret_from_fork
3.48 ± 2% -10.3% 3.12 ± 2% perf-sched.total_wait_and_delay.average.ms
3.48 ± 2% -10.3% 3.12 ± 2% perf-sched.total_wait_time.average.ms
208.52 ± 3% +8.8% 226.98 ± 2% perf-sched.wait_and_delay.avg.ms.__sched_text_start.__sched_text_start.do_task_dead.do_exit.do_group_exit
74.43 ± 56% -92.1% 5.90 ±173% perf-sched.wait_and_delay.avg.ms.__sched_text_start.__sched_text_start.preempt_schedule_common._cond_resched.generic_perform_write
346.68 +164.3% 916.39 ± 35% perf-sched.wait_and_delay.avg.ms.__sched_text_start.__sched_text_start.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait
4.05 +61.0% 6.52 ± 12% perf-sched.wait_and_delay.avg.ms.__sched_text_start.__sched_text_start.schedule_timeout.rcu_gp_kthread.kthread
374.36 ± 10% +98.3% 742.31 ± 2% perf-sched.wait_and_delay.avg.ms.__sched_text_start.__sched_text_start.smpboot_thread_fn.kthread.ret_from_fork
3.50 ± 42% -92.9% 0.25 ±173% perf-sched.wait_and_delay.count.__sched_text_start.__sched_text_start.preempt_schedule_common._cond_resched.generic_perform_write
2462 -38.5% 1514 ± 13% perf-sched.wait_and_delay.count.__sched_text_start.__sched_text_start.schedule_timeout.rcu_gp_kthread.kthread
5945 ± 8% -58.0% 2498 perf-sched.wait_and_delay.count.__sched_text_start.__sched_text_start.smpboot_thread_fn.kthread.ret_from_fork
1918 ± 22% +132.6% 4463 ± 21% perf-sched.wait_and_delay.max.ms.__sched_text_start.__sched_text_start.do_task_dead.do_exit.do_group_exit
142.24 ± 49% -95.9% 5.90 ±173% perf-sched.wait_and_delay.max.ms.__sched_text_start.__sched_text_start.preempt_schedule_common._cond_resched.generic_perform_write
5.03 +5481.4% 280.75 ± 35% perf-sched.wait_and_delay.max.ms.__sched_text_start.__sched_text_start.schedule_timeout.rcu_gp_kthread.kthread
7708 ± 12% -83.5% 1270 ± 34% perf-sched.wait_and_delay.max.ms.__sched_text_start.__sched_text_start.smpboot_thread_fn.kthread.ret_from_fork
208.52 ± 3% +8.8% 226.97 ± 2% perf-sched.wait_time.avg.ms.__sched_text_start.__sched_text_start.do_task_dead.do_exit.do_group_exit
0.36 ± 79% -100.0% 0.00 perf-sched.wait_time.avg.ms.__sched_text_start.__sched_text_start.preempt_schedule_common._cond_resched.do_swap_page
0.13 ± 31% -88.8% 0.01 ±173% perf-sched.wait_time.avg.ms.__sched_text_start.__sched_text_start.preempt_schedule_common._cond_resched.down_read
74.43 ± 56% -92.1% 5.90 ±173% perf-sched.wait_time.avg.ms.__sched_text_start.__sched_text_start.preempt_schedule_common._cond_resched.generic_perform_write
346.67 +164.3% 916.38 ± 35% perf-sched.wait_time.avg.ms.__sched_text_start.__sched_text_start.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait
2.11 ± 17% +60.0% 3.37 ± 38% perf-sched.wait_time.avg.ms.__sched_text_start.__sched_text_start.schedule_timeout.__skb_wait_for_more_packets.unix_dgram_recvmsg
4.04 +61.1% 6.51 ± 12% perf-sched.wait_time.avg.ms.__sched_text_start.__sched_text_start.schedule_timeout.rcu_gp_kthread.kthread
374.35 ± 10% +98.3% 742.30 ± 2% perf-sched.wait_time.avg.ms.__sched_text_start.__sched_text_start.smpboot_thread_fn.kthread.ret_from_fork
1918 ± 22% +132.6% 4463 ± 21% perf-sched.wait_time.max.ms.__sched_text_start.__sched_text_start.do_task_dead.do_exit.do_group_exit
31.60 ±130% -99.1% 0.28 ± 90% perf-sched.wait_time.max.ms.__sched_text_start.__sched_text_start.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
1.33 ± 85% -100.0% 0.00 perf-sched.wait_time.max.ms.__sched_text_start.__sched_text_start.preempt_schedule_common._cond_resched.do_swap_page
0.14 ± 26% -89.5% 0.01 ±173% perf-sched.wait_time.max.ms.__sched_text_start.__sched_text_start.preempt_schedule_common._cond_resched.down_read
142.24 ± 49% -95.9% 5.90 ±173% perf-sched.wait_time.max.ms.__sched_text_start.__sched_text_start.preempt_schedule_common._cond_resched.generic_perform_write
2.11 ± 17% +60.0% 3.37 ± 38% perf-sched.wait_time.max.ms.__sched_text_start.__sched_text_start.schedule_timeout.__skb_wait_for_more_packets.unix_dgram_recvmsg
5.01 +5501.3% 280.74 ± 35% perf-sched.wait_time.max.ms.__sched_text_start.__sched_text_start.schedule_timeout.rcu_gp_kthread.kthread
7708 ± 12% -83.5% 1270 ± 34% perf-sched.wait_time.max.ms.__sched_text_start.__sched_text_start.smpboot_thread_fn.kthread.ret_from_fork
0.76 ± 20% +0.2 0.96 ± 3% perf-profile.calltrace.cycles-pp.mem_cgroup_charge.__read_swap_cache_async.read_swap_cache_async.swapin_readahead.do_swap_page
0.88 ± 22% +0.3 1.16 ± 6% perf-profile.calltrace.cycles-pp.timekeeping_max_deferment.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
0.59 ± 59% +0.3 0.89 ± 12% perf-profile.calltrace.cycles-pp.ktime_get.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.28 ±100% +0.3 0.62 ± 5% perf-profile.calltrace.cycles-pp.mem_cgroup_uncharge_swap.mem_cgroup_charge.__read_swap_cache_async.read_swap_cache_async.swapin_readahead
1.43 ± 20% +0.4 1.83 ± 4% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
0.54 ± 59% +0.4 0.94 ± 8% perf-profile.calltrace.cycles-pp.xas_load.find_get_entry.pagecache_get_page.lookup_swap_cache.do_swap_page
0.56 ± 60% +0.4 0.97 ± 8% perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.lookup_swap_cache.do_swap_page.__handle_mm_fault
0.56 ± 60% +0.4 0.98 ± 8% perf-profile.calltrace.cycles-pp.pagecache_get_page.lookup_swap_cache.do_swap_page.__handle_mm_fault.handle_mm_fault
0.13 ±173% +0.4 0.55 ± 6% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.64 ± 60% +0.4 1.09 ± 8% perf-profile.calltrace.cycles-pp.lookup_swap_cache.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
1.56 ± 20% +0.5 2.07 ± 4% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
1.54 ± 34% +0.6 2.12 ± 11% perf-profile.calltrace.cycles-pp.__swap_writepage.pageout.shrink_page_list.shrink_inactive_list.shrink_lruvec
3.38 ±162% -3.3 0.13 ± 25% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
3.37 ±160% -3.2 0.15 ± 7% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.65 ± 21% -0.3 0.37 ± 10% perf-profile.children.cycles-pp.poll_idle
0.32 ± 4% -0.2 0.10 ± 22% perf-profile.children.cycles-pp.rcu_core
0.23 ± 17% -0.1 0.10 perf-profile.children.cycles-pp.kmem_cache_free
0.16 ± 19% -0.1 0.05 ± 9% perf-profile.children.cycles-pp.__slab_free
0.14 ± 23% -0.1 0.08 ± 29% perf-profile.children.cycles-pp.find_vma
0.15 ± 10% -0.0 0.10 ± 19% perf-profile.children.cycles-pp.xas_create_range
0.15 ± 2% +0.0 0.18 ± 6% perf-profile.children.cycles-pp.__mod_lruvec_state
0.09 ± 20% +0.0 0.12 ± 13% perf-profile.children.cycles-pp.update_cfs_group
0.05 ± 58% +0.0 0.09 ± 10% perf-profile.children.cycles-pp.io_serial_in
0.05 ± 59% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.rcu_nmi_enter
0.20 ± 7% +0.0 0.24 ± 5% perf-profile.children.cycles-pp.__count_memcg_events
0.04 ± 60% +0.0 0.08 ± 19% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.03 ±100% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.__blk_mq_alloc_request
0.03 ±102% +0.0 0.07 ± 14% perf-profile.children.cycles-pp.__rq_qos_throttle
0.08 ± 21% +0.0 0.13 ± 9% perf-profile.children.cycles-pp.blk_throtl_bio
0.01 ±173% +0.0 0.06 ± 13% perf-profile.children.cycles-pp.wbt_wait
0.04 ± 58% +0.1 0.09 ± 15% perf-profile.children.cycles-pp.account_process_tick
0.01 ±173% +0.1 0.06 ± 13% perf-profile.children.cycles-pp.timerqueue_add
0.00 +0.1 0.05 perf-profile.children.cycles-pp.blk_mq_get_tag
0.09 ± 24% +0.1 0.14 ± 12% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.08 ± 29% +0.1 0.13 ± 9% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.04 ± 58% +0.1 0.11 ± 12% perf-profile.children.cycles-pp.workingset_age_nonresident
0.04 ± 58% +0.1 0.11 ± 14% perf-profile.children.cycles-pp.xas_init_marks
0.28 ± 9% +0.1 0.35 ± 9% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.28 ± 26% +0.1 0.35 ± 5% perf-profile.children.cycles-pp.submit_bio_checks
0.10 ± 31% +0.1 0.18 ± 4% perf-profile.children.cycles-pp.__frontswap_load
0.03 ±100% +0.1 0.11 ± 17% perf-profile.children.cycles-pp.xas_clear_mark
0.31 ± 16% +0.1 0.39 ± 8% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.14 ± 21% +0.1 0.23 ± 10% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.10 ± 24% +0.1 0.19 ± 6% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.45 ± 19% +0.1 0.55 ± 6% perf-profile.children.cycles-pp.schedule_idle
0.50 ± 22% +0.1 0.62 ± 7% perf-profile.children.cycles-pp.end_swap_bio_read
0.13 ± 24% +0.2 0.28 ± 8% perf-profile.children.cycles-pp.cpumask_any_but
0.38 ± 31% +0.2 0.54 ± 13% perf-profile.children.cycles-pp.bio_alloc_bioset
0.54 ± 22% +0.2 0.71 ± 6% perf-profile.children.cycles-pp.mem_cgroup_uncharge_swap
0.00 +0.2 0.18 ± 9% perf-profile.children.cycles-pp.workingset_refault
0.43 ± 20% +0.2 0.63 ± 5% perf-profile.children.cycles-pp._find_next_bit
0.80 ± 24% +0.2 1.00 ± 7% perf-profile.children.cycles-pp.blk_mq_submit_bio
0.88 ± 22% +0.3 1.16 ± 7% perf-profile.children.cycles-pp.timekeeping_max_deferment
1.14 ± 25% +0.3 1.43 ± 6% perf-profile.children.cycles-pp.submit_bio
1.12 ± 25% +0.3 1.41 ± 6% perf-profile.children.cycles-pp.submit_bio_noacct
0.80 ± 29% +0.3 1.10 ± 9% perf-profile.children.cycles-pp.xas_load
0.73 ± 33% +0.4 1.09 ± 8% perf-profile.children.cycles-pp.lookup_swap_cache
0.67 ± 34% +0.4 1.03 ± 9% perf-profile.children.cycles-pp.find_get_entry
0.67 ± 33% +0.4 1.04 ± 8% perf-profile.children.cycles-pp.pagecache_get_page
1.44 ± 21% +0.4 1.85 ± 4% perf-profile.children.cycles-pp.tick_nohz_next_event
1.57 ± 21% +0.5 2.08 ± 4% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.00 +0.5 0.53 ± 12% perf-profile.children.cycles-pp.shrink_active_list
1.58 ± 29% +0.5 2.12 ± 11% perf-profile.children.cycles-pp.__swap_writepage
3.38 ±163% -3.3 0.13 ± 25% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.15 ± 21% -0.1 0.05 ± 9% perf-profile.self.cycles-pp.__slab_free
0.12 ± 22% -0.1 0.05 ± 64% perf-profile.self.cycles-pp.vmacache_find
0.18 ± 6% -0.0 0.14 ± 5% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.07 ± 20% +0.0 0.10 ± 7% perf-profile.self.cycles-pp.page_counter_try_charge
0.13 ± 7% +0.0 0.16 ± 7% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.09 ± 20% +0.0 0.12 ± 13% perf-profile.self.cycles-pp.update_cfs_group
0.11 ± 21% +0.0 0.14 ± 8% perf-profile.self.cycles-pp.mem_cgroup_id_put_many
0.05 ± 58% +0.0 0.09 ± 10% perf-profile.self.cycles-pp.io_serial_in
0.04 ± 59% +0.0 0.08 ± 19% perf-profile.self.cycles-pp.propagate_protected_usage
0.20 ± 5% +0.0 0.23 ± 4% perf-profile.self.cycles-pp.__count_memcg_events
0.10 ± 10% +0.0 0.14 ± 12% perf-profile.self.cycles-pp.xas_create
0.04 ± 58% +0.1 0.09 ± 15% perf-profile.self.cycles-pp.account_process_tick
0.01 ±173% +0.1 0.06 ± 13% perf-profile.self.cycles-pp.__swap_writepage
0.06 ± 66% +0.1 0.11 ± 6% perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.04 ± 58% +0.1 0.10 ± 10% perf-profile.self.cycles-pp.workingset_age_nonresident
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.workingset_refault
0.01 ±173% +0.1 0.08 ± 24% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.00 +0.1 0.07 ± 22% perf-profile.self.cycles-pp.__swap_duplicate
0.23 ± 22% +0.1 0.30 ± 9% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.07 ± 62% +0.1 0.16 ± 4% perf-profile.self.cycles-pp.__frontswap_load
0.35 ± 19% +0.1 0.45 ± 7% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +0.1 0.10 ± 15% perf-profile.self.cycles-pp.xas_clear_mark
0.39 ± 20% +0.2 0.58 ± 6% perf-profile.self.cycles-pp._find_next_bit
0.58 ± 27% +0.2 0.81 ± 9% perf-profile.self.cycles-pp.xas_load
0.88 ± 22% +0.3 1.16 ± 7% perf-profile.self.cycles-pp.timekeeping_max_deferment
550.50 ± 37% -53.4% 256.50 ± 57% interrupts.CPU0.RES:Rescheduling_interrupts
471.50 ± 94% -85.9% 66.25 ± 46% interrupts.CPU1.RES:Rescheduling_interrupts
251.25 ± 34% -75.7% 61.00 ± 35% interrupts.CPU10.RES:Rescheduling_interrupts
681.25 ±101% -81.9% 123.25 ± 33% interrupts.CPU100.NMI:Non-maskable_interrupts
681.25 ±101% -81.9% 123.25 ± 33% interrupts.CPU100.PMI:Performance_monitoring_interrupts
256.25 ± 80% -60.7% 100.75 ± 25% interrupts.CPU102.NMI:Non-maskable_interrupts
256.25 ± 80% -60.7% 100.75 ± 25% interrupts.CPU102.PMI:Performance_monitoring_interrupts
240.50 ± 35% -49.3% 122.00 ± 32% interrupts.CPU104.NMI:Non-maskable_interrupts
240.50 ± 35% -49.3% 122.00 ± 32% interrupts.CPU104.PMI:Performance_monitoring_interrupts
1330 ±115% -86.1% 184.50 ± 53% interrupts.CPU108.NMI:Non-maskable_interrupts
1330 ±115% -86.1% 184.50 ± 53% interrupts.CPU108.PMI:Performance_monitoring_interrupts
375.25 ± 51% -46.4% 201.00 ± 62% interrupts.CPU11.NMI:Non-maskable_interrupts
375.25 ± 51% -46.4% 201.00 ± 62% interrupts.CPU11.PMI:Performance_monitoring_interrupts
414.25 ± 37% -69.2% 127.50 ± 30% interrupts.CPU111.NMI:Non-maskable_interrupts
414.25 ± 37% -69.2% 127.50 ± 30% interrupts.CPU111.PMI:Performance_monitoring_interrupts
2112995 ± 11% -28.8% 1504089 ± 42% interrupts.CPU114.TLB:TLB_shootdowns
31.00 ± 36% +126.6% 70.25 ± 65% interrupts.CPU118.RES:Rescheduling_interrupts
1831 ±103% -86.4% 249.00 ± 83% interrupts.CPU12.NMI:Non-maskable_interrupts
1831 ±103% -86.4% 249.00 ± 83% interrupts.CPU12.PMI:Performance_monitoring_interrupts
241.75 ± 72% -83.5% 40.00 ± 36% interrupts.CPU12.RES:Rescheduling_interrupts
106.75 ± 37% +64.9% 176.00 ± 22% interrupts.CPU125.RES:Rescheduling_interrupts
850681 ± 26% +37.7% 1171518 ± 18% interrupts.CPU126.CAL:Function_call_interrupts
99.50 ± 61% +97.2% 196.25 ± 16% interrupts.CPU126.RES:Rescheduling_interrupts
124.50 ± 38% +89.6% 236.00 ± 22% interrupts.CPU129.RES:Rescheduling_interrupts
284.00 ± 65% -84.6% 43.75 ± 42% interrupts.CPU13.RES:Rescheduling_interrupts
988716 ± 18% +35.6% 1340401 ± 2% interrupts.CPU131.CAL:Function_call_interrupts
1052717 ± 14% +50.1% 1580609 ± 20% interrupts.CPU133.CAL:Function_call_interrupts
151.50 ± 56% +94.2% 294.25 ± 21% interrupts.CPU135.RES:Rescheduling_interrupts
55.75 ± 38% +86.5% 104.00 ± 23% interrupts.CPU136.RES:Rescheduling_interrupts
41.75 ± 41% +91.0% 79.75 ± 33% interrupts.CPU137.RES:Rescheduling_interrupts
82031 ± 44% -77.9% 18146 ± 23% interrupts.CPU15.CAL:Function_call_interrupts
470.25 ± 38% -68.8% 146.50 ± 20% interrupts.CPU15.NMI:Non-maskable_interrupts
470.25 ± 38% -68.8% 146.50 ± 20% interrupts.CPU15.PMI:Performance_monitoring_interrupts
478.25 ± 91% -83.4% 79.50 ± 80% interrupts.CPU15.RES:Rescheduling_interrupts
83996 ± 12% -62.1% 31856 ±109% interrupts.CPU16.CAL:Function_call_interrupts
266121 ± 37% -77.9% 58704 ± 82% interrupts.CPU16.TLB:TLB_shootdowns
216830 ± 74% -90.0% 21706 ± 80% interrupts.CPU17.CAL:Function_call_interrupts
155845 ±104% -91.7% 12939 ±118% interrupts.CPU18.CAL:Function_call_interrupts
305444 ± 48% -81.6% 56176 ±145% interrupts.CPU18.TLB:TLB_shootdowns
580700 ± 24% +100.8% 1165795 ± 11% interrupts.CPU187.CAL:Function_call_interrupts
1143784 ± 33% +56.2% 1786340 ± 17% interrupts.CPU187.TLB:TLB_shootdowns
403282 ± 41% +83.7% 741015 ± 26% interrupts.CPU189.CAL:Function_call_interrupts
127301 ± 69% -75.3% 31438 ±152% interrupts.CPU19.CAL:Function_call_interrupts
139476 ± 41% -85.3% 20557 ± 82% interrupts.CPU20.CAL:Function_call_interrupts
241687 ± 55% -90.2% 23612 ±132% interrupts.CPU21.CAL:Function_call_interrupts
390203 ± 56% -76.1% 93439 ±128% interrupts.CPU21.TLB:TLB_shootdowns
1374 ± 3% -41.8% 800.25 ± 28% interrupts.CPU24.RES:Rescheduling_interrupts
291.25 ± 38% -69.3% 89.50 ± 57% interrupts.CPU25.RES:Rescheduling_interrupts
372.50 ± 61% -90.9% 34.00 ± 16% interrupts.CPU27.RES:Rescheduling_interrupts
382.25 ± 17% -70.5% 112.75 ± 65% interrupts.CPU29.RES:Rescheduling_interrupts
109264 ± 20% -42.8% 62530 ± 35% interrupts.CPU3.CAL:Function_call_interrupts
348.75 ± 52% -58.7% 144.00 ± 21% interrupts.CPU3.NMI:Non-maskable_interrupts
348.75 ± 52% -58.7% 144.00 ± 21% interrupts.CPU3.PMI:Performance_monitoring_interrupts
184.25 ± 67% -69.2% 56.75 ± 39% interrupts.CPU3.RES:Rescheduling_interrupts
866.75 ±101% -74.6% 220.25 ± 51% interrupts.CPU32.NMI:Non-maskable_interrupts
866.75 ±101% -74.6% 220.25 ± 51% interrupts.CPU32.PMI:Performance_monitoring_interrupts
386.25 ± 77% -81.0% 73.50 ± 62% interrupts.CPU32.RES:Rescheduling_interrupts
175.75 ± 47% -84.8% 26.75 ± 38% interrupts.CPU33.RES:Rescheduling_interrupts
86801 ± 77% -78.3% 18806 ± 56% interrupts.CPU34.CAL:Function_call_interrupts
290.25 ± 54% -77.1% 66.50 ± 74% interrupts.CPU34.RES:Rescheduling_interrupts
509.25 ± 85% -72.1% 142.00 ± 19% interrupts.CPU4.NMI:Non-maskable_interrupts
509.25 ± 85% -72.1% 142.00 ± 19% interrupts.CPU4.PMI:Performance_monitoring_interrupts
233.75 ± 67% +103.7% 476.25 ± 20% interrupts.CPU41.NMI:Non-maskable_interrupts
233.75 ± 67% +103.7% 476.25 ± 20% interrupts.CPU41.PMI:Performance_monitoring_interrupts
102481 ± 60% -86.2% 14192 ± 89% interrupts.CPU42.CAL:Function_call_interrupts
199657 ± 62% -70.0% 59900 ±120% interrupts.CPU42.TLB:TLB_shootdowns
99111 ± 36% -64.7% 34962 ±106% interrupts.CPU44.CAL:Function_call_interrupts
133788 ± 70% -76.7% 31187 ± 87% interrupts.CPU47.CAL:Function_call_interrupts
255715 ± 45% -72.7% 69684 ± 87% interrupts.CPU47.TLB:TLB_shootdowns
750.25 ± 24% -51.7% 362.25 ± 74% interrupts.CPU48.RES:Rescheduling_interrupts
111872 ± 61% -68.1% 35692 ± 13% interrupts.CPU51.CAL:Function_call_interrupts
248476 ± 54% -65.9% 84695 ± 57% interrupts.CPU51.TLB:TLB_shootdowns
130521 ± 86% -75.4% 32054 ± 81% interrupts.CPU53.CAL:Function_call_interrupts
330056 ± 64% -76.1% 78788 ±107% interrupts.CPU53.TLB:TLB_shootdowns
122572 ± 36% -73.8% 32077 ± 65% interrupts.CPU54.CAL:Function_call_interrupts
223.75 ± 73% -81.3% 41.75 ± 60% interrupts.CPU54.RES:Rescheduling_interrupts
261777 ± 29% -69.6% 79586 ±101% interrupts.CPU54.TLB:TLB_shootdowns
38695 ± 78% -73.9% 10112 ± 83% interrupts.CPU59.CAL:Function_call_interrupts
91774 ± 39% -75.9% 22107 ± 69% interrupts.CPU6.CAL:Function_call_interrupts
337.75 ± 54% -82.3% 59.75 ± 48% interrupts.CPU6.RES:Rescheduling_interrupts
38136 ± 70% -76.6% 8929 ± 95% interrupts.CPU62.CAL:Function_call_interrupts
107221 ± 74% -82.2% 19106 ±131% interrupts.CPU62.TLB:TLB_shootdowns
50363 ± 79% -85.2% 7477 ± 51% interrupts.CPU63.CAL:Function_call_interrupts
107471 ± 89% -65.0% 37647 ±100% interrupts.CPU63.TLB:TLB_shootdowns
196470 ± 27% -89.8% 20111 ±146% interrupts.CPU66.CAL:Function_call_interrupts
251626 ± 28% -86.9% 33021 ±168% interrupts.CPU66.TLB:TLB_shootdowns
244907 ± 51% -95.9% 10117 ± 95% interrupts.CPU67.CAL:Function_call_interrupts
351943 ± 41% -94.1% 20599 ± 98% interrupts.CPU67.TLB:TLB_shootdowns
111121 ± 44% -92.8% 7951 ± 95% interrupts.CPU68.CAL:Function_call_interrupts
169523 ± 45% -80.0% 33881 ±144% interrupts.CPU68.TLB:TLB_shootdowns
128876 ± 44% -97.2% 3560 ± 35% interrupts.CPU69.CAL:Function_call_interrupts
196198 ± 26% -96.1% 7648 ± 82% interrupts.CPU69.TLB:TLB_shootdowns
86918 ± 43% -78.8% 18404 ±138% interrupts.CPU70.CAL:Function_call_interrupts
137071 ± 53% -69.6% 41713 ±137% interrupts.CPU70.TLB:TLB_shootdowns
248192 ± 54% -99.1% 2141 ± 28% interrupts.CPU71.CAL:Function_call_interrupts
351778 ± 50% -99.4% 2248 ±171% interrupts.CPU71.TLB:TLB_shootdowns
72843 ± 39% -59.9% 29213 ± 73% interrupts.CPU79.CAL:Function_call_interrupts
296.25 ± 75% -95.6% 13.00 ± 39% interrupts.CPU79.RES:Rescheduling_interrupts
294.75 ± 44% -51.8% 142.00 ± 14% interrupts.CPU8.NMI:Non-maskable_interrupts
294.75 ± 44% -51.8% 142.00 ± 14% interrupts.CPU8.PMI:Performance_monitoring_interrupts
631.50 ± 60% -71.2% 181.75 ± 13% interrupts.CPU82.NMI:Non-maskable_interrupts
631.50 ± 60% -71.2% 181.75 ± 13% interrupts.CPU82.PMI:Performance_monitoring_interrupts
180.75 ± 53% -76.1% 43.25 ±118% interrupts.CPU83.RES:Rescheduling_interrupts
224.25 ± 45% -74.6% 57.00 ±126% interrupts.CPU84.RES:Rescheduling_interrupts
152.75 ± 76% -67.1% 50.25 ±110% interrupts.CPU85.RES:Rescheduling_interrupts
69834 ± 44% -89.6% 7248 ± 34% interrupts.CPU88.CAL:Function_call_interrupts
175679 ± 56% -84.4% 27337 ± 52% interrupts.CPU88.TLB:TLB_shootdowns
79758 ± 46% -67.9% 25563 ±130% interrupts.CPU89.CAL:Function_call_interrupts
172259 ± 39% -90.6% 16168 ± 96% interrupts.CPU91.CAL:Function_call_interrupts
163998 ± 41% -87.5% 20465 ±101% interrupts.CPU92.CAL:Function_call_interrupts
272932 ± 52% -79.7% 55479 ±115% interrupts.CPU92.TLB:TLB_shootdowns
171203 ± 30% -88.9% 19071 ±112% interrupts.CPU93.CAL:Function_call_interrupts
281620 ± 34% -85.6% 40571 ± 74% interrupts.CPU93.TLB:TLB_shootdowns
1737 ±123% -92.9% 123.50 ± 17% interrupts.CPU96.NMI:Non-maskable_interrupts
1737 ±123% -92.9% 123.50 ± 17% interrupts.CPU96.PMI:Performance_monitoring_interrupts
353.50 ± 48% -60.5% 139.50 ± 44% interrupts.CPU97.NMI:Non-maskable_interrupts
353.50 ± 48% -60.5% 139.50 ± 44% interrupts.CPU97.PMI:Performance_monitoring_interrupts
432.00 ± 42% -66.6% 144.25 ± 21% interrupts.CPU99.NMI:Non-maskable_interrupts
432.00 ± 42% -66.6% 144.25 ± 21% interrupts.CPU99.PMI:Performance_monitoring_interrupts
25708 ± 6% -20.9% 20326 ± 8% interrupts.RES:Rescheduling_interrupts
35428 ± 11% -32.4% 23935 ± 12% softirqs.CPU0.RCU
32512 -32.2% 22047 ± 8% softirqs.CPU1.RCU
31237 ± 2% -31.1% 21518 ± 8% softirqs.CPU10.RCU
33358 ± 2% -34.0% 22013 ± 8% softirqs.CPU100.RCU
34383 ± 5% -34.9% 22384 ± 4% softirqs.CPU101.RCU
34539 ± 3% -36.4% 21956 ± 7% softirqs.CPU102.RCU
33724 ± 6% -35.7% 21681 ± 9% softirqs.CPU103.RCU
33883 ± 2% -36.8% 21411 ± 8% softirqs.CPU104.RCU
33349 ± 3% -34.5% 21843 ± 7% softirqs.CPU105.RCU
33947 ± 3% -35.6% 21869 ± 8% softirqs.CPU106.RCU
35088 ± 2% -37.9% 21804 ± 8% softirqs.CPU107.RCU
35392 -37.6% 22077 ± 9% softirqs.CPU108.RCU
34530 -36.0% 22086 ± 9% softirqs.CPU109.RCU
31070 ± 5% -30.1% 21721 ± 7% softirqs.CPU11.RCU
35908 ± 2% -40.7% 21300 ± 8% softirqs.CPU110.RCU
35501 -38.7% 21779 ± 7% softirqs.CPU111.RCU
34344 ± 2% -42.2% 19837 ± 6% softirqs.CPU112.RCU
33845 -41.0% 19980 ± 7% softirqs.CPU113.RCU
33862 ± 3% -39.7% 20433 ± 7% softirqs.CPU114.RCU
32765 ± 3% -39.1% 19942 ± 7% softirqs.CPU115.RCU
32934 ± 2% -38.6% 20224 ± 5% softirqs.CPU116.RCU
32819 ± 3% -39.5% 19865 ± 6% softirqs.CPU117.RCU
33093 ± 3% -39.9% 19885 ± 7% softirqs.CPU118.RCU
32575 ± 4% -38.9% 19898 ± 6% softirqs.CPU119.RCU
31681 ± 7% -31.0% 21852 ± 7% softirqs.CPU12.RCU
33330 ± 6% -41.4% 19546 ± 6% softirqs.CPU120.RCU
33723 ± 5% -43.1% 19200 ± 5% softirqs.CPU121.RCU
33744 ± 6% -42.7% 19336 ± 6% softirqs.CPU122.RCU
33521 ± 8% -41.4% 19648 ± 5% softirqs.CPU123.RCU
33386 ± 5% -40.7% 19799 ± 5% softirqs.CPU124.RCU
33788 ± 4% -43.8% 18979 ± 6% softirqs.CPU125.RCU
33692 ± 6% -41.4% 19756 ± 5% softirqs.CPU126.RCU
33196 ± 7% -41.8% 19309 ± 5% softirqs.CPU127.RCU
34241 ± 7% -42.0% 19853 ± 7% softirqs.CPU128.RCU
33830 ± 6% -42.1% 19600 ± 7% softirqs.CPU129.RCU
31799 ± 5% -31.3% 21860 ± 9% softirqs.CPU13.RCU
33276 ± 4% -41.2% 19566 ± 7% softirqs.CPU130.RCU
34680 ± 5% -42.8% 19829 ± 6% softirqs.CPU131.RCU
34512 ± 4% -42.3% 19913 ± 7% softirqs.CPU132.RCU
35534 ± 2% -44.7% 19654 ± 6% softirqs.CPU133.RCU
34896 ± 3% -43.0% 19889 ± 8% softirqs.CPU134.RCU
34745 ± 5% -42.9% 19831 ± 7% softirqs.CPU135.RCU
35559 ± 7% -44.1% 19890 ± 6% softirqs.CPU136.RCU
34349 ± 6% -41.7% 20009 ± 8% softirqs.CPU137.RCU
34036 ± 5% -41.9% 19771 ± 6% softirqs.CPU138.RCU
34760 ± 2% -42.6% 19946 ± 7% softirqs.CPU139.RCU
31258 ± 2% -31.2% 21499 ± 7% softirqs.CPU14.RCU
33466 ± 4% -40.8% 19814 ± 7% softirqs.CPU140.RCU
32509 ± 5% -40.8% 19246 ± 7% softirqs.CPU141.RCU
33244 ± 3% -35.0% 21617 ± 8% softirqs.CPU142.RCU
32706 ± 5% -38.6% 20067 ± 7% softirqs.CPU143.RCU
33261 ± 4% -37.7% 20717 ± 5% softirqs.CPU144.RCU
32254 ± 2% -37.2% 20271 ± 4% softirqs.CPU145.RCU
32274 ± 4% -38.1% 19985 ± 4% softirqs.CPU146.RCU
32816 ± 4% -37.4% 20532 ± 6% softirqs.CPU147.RCU
33441 ± 8% -38.9% 20441 ± 6% softirqs.CPU148.RCU
33248 ± 6% -40.0% 19953 ± 7% softirqs.CPU149.RCU
30279 ± 9% -27.9% 21839 ± 9% softirqs.CPU15.RCU
32601 ± 6% -38.2% 20155 ± 5% softirqs.CPU150.RCU
32860 ± 4% -37.1% 20673 ± 8% softirqs.CPU151.RCU
32643 ± 6% -37.0% 20569 ± 8% softirqs.CPU152.RCU
33337 ± 8% -39.2% 20261 ± 5% softirqs.CPU153.RCU
32641 ± 7% -37.2% 20493 ± 6% softirqs.CPU154.RCU
32548 ± 5% -36.8% 20580 ± 5% softirqs.CPU155.RCU
33385 ± 4% -38.5% 20531 ± 5% softirqs.CPU156.RCU
33250 ± 8% -39.9% 19972 ± 4% softirqs.CPU157.RCU
33614 ± 5% -39.0% 20507 ± 7% softirqs.CPU158.RCU
32542 ± 7% -37.4% 20365 ± 6% softirqs.CPU159.RCU
30962 ± 4% -33.2% 20694 ± 6% softirqs.CPU16.RCU
33130 ± 2% -40.5% 19714 ± 4% softirqs.CPU160.RCU
32930 ± 4% -39.2% 20029 ± 5% softirqs.CPU161.RCU
33810 ± 5% -39.5% 20452 ± 7% softirqs.CPU162.RCU
33694 ± 4% -39.9% 20249 ± 4% softirqs.CPU163.RCU
32579 ± 3% -38.7% 19955 ± 6% softirqs.CPU164.RCU
31732 ± 3% -36.8% 20067 ± 4% softirqs.CPU165.RCU
32614 ± 3% -37.1% 20502 ± 7% softirqs.CPU166.RCU
32179 ± 3% -37.2% 20218 ± 4% softirqs.CPU167.RCU
31673 ± 4% -40.9% 18722 ± 4% softirqs.CPU168.RCU
31132 ± 6% -39.3% 18899 ± 4% softirqs.CPU169.RCU
31814 ± 4% -34.4% 20885 ± 5% softirqs.CPU17.RCU
32315 ± 6% -44.3% 17995 ± 3% softirqs.CPU170.RCU
31903 ± 6% -42.2% 18436 ± 4% softirqs.CPU171.RCU
30810 ± 6% -39.0% 18786 ± 3% softirqs.CPU172.RCU
31117 ± 6% -40.0% 18661 ± 6% softirqs.CPU173.RCU
31954 ± 7% -41.8% 18597 ± 4% softirqs.CPU174.RCU
31720 ± 5% -41.1% 18696 ± 4% softirqs.CPU175.RCU
34100 ± 10% -41.9% 19817 ± 5% softirqs.CPU176.RCU
32522 ± 6% -41.6% 19004 ± 6% softirqs.CPU177.RCU
32041 ± 7% -41.7% 18685 ± 8% softirqs.CPU178.RCU
32337 ± 7% -41.1% 19031 ± 6% softirqs.CPU179.RCU
31987 ± 8% -34.0% 21115 ± 5% softirqs.CPU18.RCU
32733 ± 3% -40.8% 19385 ± 8% softirqs.CPU180.RCU
32337 ± 5% -40.0% 19418 ± 8% softirqs.CPU181.RCU
31730 ± 5% -40.5% 18894 ± 7% softirqs.CPU182.RCU
32190 -41.7% 18778 ± 6% softirqs.CPU183.RCU
32604 ± 5% -41.5% 19066 ± 7% softirqs.CPU184.RCU
32316 -41.1% 19049 ± 6% softirqs.CPU185.RCU
31497 ± 2% -41.6% 18383 ± 5% softirqs.CPU186.RCU
31844 ± 4% -40.9% 18828 ± 7% softirqs.CPU187.RCU
31207 ± 6% -37.6% 19479 ± 7% softirqs.CPU188.RCU
31677 ± 5% -40.2% 18932 ± 7% softirqs.CPU189.RCU
31643 ± 4% -34.0% 20898 ± 5% softirqs.CPU19.RCU
31618 ± 8% -38.9% 19308 ± 5% softirqs.CPU190.RCU
30165 ± 6% -34.8% 19667 ± 5% softirqs.CPU191.RCU
30591 ± 4% -28.1% 21993 ± 5% softirqs.CPU2.RCU
45537 +10.8% 50453 ± 2% softirqs.CPU2.SCHED
31294 ± 5% -33.7% 20740 ± 5% softirqs.CPU20.RCU
32036 ± 3% -34.9% 20853 ± 5% softirqs.CPU21.RCU
31837 ± 3% -34.8% 20770 ± 6% softirqs.CPU22.RCU
31914 ± 2% -34.8% 20815 ± 6% softirqs.CPU23.RCU
32142 ± 4% -36.9% 20279 ± 6% softirqs.CPU24.RCU
30685 ± 5% -34.5% 20098 ± 6% softirqs.CPU25.RCU
30461 ± 5% -33.3% 20307 ± 5% softirqs.CPU26.RCU
29637 ± 9% -31.0% 20444 ± 6% softirqs.CPU27.RCU
30580 ± 5% -33.5% 20334 ± 5% softirqs.CPU28.RCU
30860 ± 5% -35.7% 19856 ± 9% softirqs.CPU29.RCU
30808 ± 4% -29.4% 21739 ± 9% softirqs.CPU3.RCU
30612 ± 5% -33.8% 20276 ± 5% softirqs.CPU30.RCU
30416 ± 5% -34.4% 19957 ± 6% softirqs.CPU31.RCU
32305 ± 8% -34.3% 21232 ± 7% softirqs.CPU32.RCU
32214 ± 5% -34.5% 21085 ± 6% softirqs.CPU33.RCU
31564 ± 4% -33.1% 21101 ± 7% softirqs.CPU34.RCU
32098 ± 6% -34.2% 21115 ± 7% softirqs.CPU35.RCU
31331 ± 4% -27.0% 22869 ± 13% softirqs.CPU36.RCU
31247 ± 5% -32.2% 21178 ± 7% softirqs.CPU37.RCU
32514 ± 8% -35.0% 21125 ± 6% softirqs.CPU38.RCU
31150 ± 4% -29.4% 21977 ± 8% softirqs.CPU39.RCU
30868 ± 4% -29.4% 21808 ± 7% softirqs.CPU4.RCU
31274 ± 5% -32.2% 21217 ± 6% softirqs.CPU40.RCU
31369 ± 6% -32.9% 21039 ± 7% softirqs.CPU41.RCU
31353 ± 5% -33.6% 20822 ± 6% softirqs.CPU42.RCU
32650 ± 5% -34.9% 21267 ± 7% softirqs.CPU43.RCU
31688 ± 4% -32.6% 21360 ± 5% softirqs.CPU44.RCU
31966 ± 7% -34.1% 21053 ± 7% softirqs.CPU45.RCU
31681 ± 5% -33.4% 21111 ± 7% softirqs.CPU46.RCU
31963 ± 5% -34.3% 20989 ± 7% softirqs.CPU47.RCU
32375 ± 6% -33.5% 21544 ± 6% softirqs.CPU48.RCU
31389 ± 5% -28.6% 22426 ± 9% softirqs.CPU49.RCU
31171 ± 4% -28.6% 22264 ± 7% softirqs.CPU5.RCU
32538 ± 6% -34.2% 21408 ± 4% softirqs.CPU50.RCU
31689 ± 5% -31.9% 21577 ± 7% softirqs.CPU51.RCU
31219 ± 7% -30.8% 21588 ± 6% softirqs.CPU52.RCU
31505 ± 5% -33.2% 21052 ± 6% softirqs.CPU53.RCU
31950 ± 6% -32.7% 21498 ± 5% softirqs.CPU54.RCU
31274 ± 5% -32.4% 21147 ± 5% softirqs.CPU55.RCU
31105 ± 5% -31.4% 21350 ± 7% softirqs.CPU56.RCU
31774 ± 5% -32.9% 21325 ± 4% softirqs.CPU57.RCU
30439 ± 10% -29.3% 21506 ± 6% softirqs.CPU58.RCU
31582 ± 5% -30.5% 21947 ± 5% softirqs.CPU59.RCU
31230 ± 4% -30.4% 21728 ± 7% softirqs.CPU6.RCU
31606 ± 5% -30.5% 21952 ± 4% softirqs.CPU60.RCU
31319 ± 6% -32.7% 21076 ± 5% softirqs.CPU61.RCU
31538 ± 7% -33.1% 21088 ± 6% softirqs.CPU62.RCU
31193 ± 5% -31.1% 21499 ± 7% softirqs.CPU63.RCU
32026 ± 3% -35.3% 20715 ± 5% softirqs.CPU64.RCU
32074 ± 3% -34.6% 20982 ± 5% softirqs.CPU65.RCU
33788 ± 6% -38.3% 20852 ± 5% softirqs.CPU66.RCU
32471 ± 5% -34.9% 21149 ± 5% softirqs.CPU67.RCU
32427 ± 6% -35.7% 20861 ± 6% softirqs.CPU68.RCU
32462 ± 4% -35.2% 21044 ± 6% softirqs.CPU69.RCU
31535 ± 4% -30.6% 21879 ± 7% softirqs.CPU7.RCU
43714 ± 5% +11.5% 48741 ± 2% softirqs.CPU7.SCHED
32081 ± 2% -35.0% 20854 ± 6% softirqs.CPU70.RCU
32576 ± 3% -35.4% 21031 ± 5% softirqs.CPU71.RCU
30737 ± 4% -35.1% 19939 ± 4% softirqs.CPU72.RCU
30253 ± 4% -35.3% 19568 ± 5% softirqs.CPU73.RCU
30874 ± 7% -37.0% 19449 ± 5% softirqs.CPU74.RCU
29806 ± 4% -34.3% 19591 ± 5% softirqs.CPU75.RCU
29904 ± 5% -35.0% 19451 ± 5% softirqs.CPU76.RCU
29914 ± 4% -34.7% 19541 ± 4% softirqs.CPU77.RCU
30603 ± 5% -36.8% 19352 ± 5% softirqs.CPU78.RCU
30164 ± 5% -34.1% 19885 ± 7% softirqs.CPU79.RCU
30473 ± 4% -28.6% 21745 ± 7% softirqs.CPU8.RCU
30256 ± 4% -34.1% 19941 ± 6% softirqs.CPU80.RCU
29945 ± 4% -33.3% 19961 ± 5% softirqs.CPU81.RCU
30259 ± 4% -34.0% 19972 ± 7% softirqs.CPU82.RCU
30326 ± 4% -34.0% 20016 ± 6% softirqs.CPU83.RCU
30662 ± 5% -35.0% 19939 ± 5% softirqs.CPU84.RCU
30592 ± 4% -34.8% 19953 ± 6% softirqs.CPU85.RCU
30065 ± 4% -34.5% 19702 ± 6% softirqs.CPU86.RCU
30100 ± 4% -34.2% 19805 ± 6% softirqs.CPU87.RCU
30493 ± 6% -33.3% 20336 ± 4% softirqs.CPU88.RCU
30934 ± 6% -35.7% 19883 ± 6% softirqs.CPU89.RCU
31023 ± 5% -30.4% 21580 ± 6% softirqs.CPU9.RCU
30607 ± 5% -35.8% 19653 ± 5% softirqs.CPU90.RCU
31895 ± 3% -37.3% 20002 ± 6% softirqs.CPU91.RCU
30994 ± 5% -34.7% 20242 ± 7% softirqs.CPU92.RCU
31973 ± 5% -38.0% 19833 ± 5% softirqs.CPU93.RCU
31326 ± 7% -35.8% 20111 ± 6% softirqs.CPU94.RCU
32100 ± 4% -36.3% 20461 ± 7% softirqs.CPU95.RCU
34644 ± 2% -37.9% 21527 ± 7% softirqs.CPU96.RCU
34061 ± 5% -36.2% 21721 ± 7% softirqs.CPU97.RCU
34668 ± 3% -38.2% 21433 ± 6% softirqs.CPU98.RCU
34388 ± 3% -37.3% 21564 ± 7% softirqs.CPU99.RCU
6196758 ± 4% -36.5% 3932333 ± 6% softirqs.RCU
vm-scalability.median
7000 +--------------------------------------------------------------------+
|.+.+.+.+.+.++.+ +.+.+.+.+.+.+.+.++.+.+.+ +.+.+. |
6000 |-O O O O O OO : : O O O O O O OO O O : O : O O +.+.+.+O O O O O |
| : : : : |
5000 |-+ : : : : |
| : : : : |
4000 |-+ : : : : |
| : : : : |
3000 |-+ : : : : |
| : : : : |
2000 |-+ : : : : |
| : : |
1000 |-+ : : |
| : : |
0 +--------------------------------------------------------------------+
vm-scalability.stddev_
4500 +--------------------------------------------------------------------+
| O |
4000 |-+ |
3500 |-+ |
| |
3000 |-+ |
2500 |-+ |
| |
2000 |-+ |
1500 |-+ |
| |
1000 |-+ |
500 |-+ |
| O O O O O OO O O.+.O O O O OO O O O O.+.O O.+.+.+.+ O O O O |
0 +--------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-csl-2sp4: 96 threads Intel(R) Xeon(R) CPU @ 2.30GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_pmem/nr_task/priority/rootfs/tbox_group/test/testcase/thp_defrag/thp_enabled/ucode:
gcc-9/performance/x86_64-rhel-8.3/2/8/1/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp4/swap-w-seq/vm-scalability/never/never/0x4003003
commit:
3852f6768e ("mm/swapcache: support to handle the shadow entries")
aae466b005 ("mm/swap: implement workingset detection for anonymous LRU")
3852f6768ede542e aae466b0052e1888edd1d7f473d
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:3 29% 0:2 perf-profile.calltrace.cycles-pp.sync_regs.error_entry.do_access
:3 34% 1:2 perf-profile.calltrace.cycles-pp.error_entry.do_access
1:3 11% 1:2 perf-profile.children.cycles-pp.error_entry
0:3 1% 0:2 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
542839 +18.8% 644645 ± 9% vm-scalability.median
4306633 +12.8% 4855863 ± 4% vm-scalability.throughput
30.01 -3.1% 29.07 vm-scalability.time.elapsed_time
30.01 -3.1% 29.07 vm-scalability.time.elapsed_time.max
36096 ± 27% -64.0% 12994 ± 41% vm-scalability.time.involuntary_context_switches
5106690 ± 5% +26.8% 6477410 ± 13% vm-scalability.time.maximum_resident_set_size
2001 ± 56% -69.5% 611.00 ± 12% vm-scalability.time.voluntary_context_switches
5476 ± 29% +311.6% 22539 ± 78% cpuidle.C1.usage
5283 ± 16% -36.1% 3375 ± 9% vmstat.system.cs
20.69 +3.5% 21.42 ± 2% boot-time.boot
1529 ± 3% +7.5% 1643 boot-time.idle
89.51 +1.3% 90.68 iostat.cpu.idle
8.65 -14.6% 7.39 ± 11% iostat.cpu.system
36029 ± 14% +68.2% 60587 ± 30% meminfo.CmaFree
697090 ± 5% +9.4% 762414 ± 2% meminfo.DirectMap4k
0.00 ± 10% +0.0 0.00 mpstat.cpu.all.iowait%
0.35 ± 5% -0.3 0.05 ± 18% mpstat.cpu.all.soft%
1.813e+08 ± 2% -41.0% 1.07e+08 ± 17% perf-node.node-loads
38.67 ± 2% -32.8% 26.00 ± 7% perf-node.node-local-load-ratio
54.67 -61.6% 21.00 ± 9% perf-node.node-local-store-ratio
2.167e+08 ± 4% -20.8% 1.717e+08 ± 9% perf-node.node-store-misses
2.671e+08 ± 2% -82.3% 47402485 ± 2% perf-node.node-stores
22519 ± 3% -20.3% 17951 ± 9% numa-meminfo.node0.Active
22324 ± 3% -20.6% 17736 ± 10% numa-meminfo.node0.Active(anon)
13921 ± 2% +15.8% 16121 ± 14% numa-meminfo.node0.Mapped
5201 ± 20% +32.3% 6881 ± 3% numa-meminfo.node1.Active
5084 ± 20% +34.0% 6811 ± 3% numa-meminfo.node1.Active(anon)
65575 ± 4% +12.4% 73725 ± 5% numa-meminfo.node1.KReclaimable
7093 ± 2% +7.4% 7615 ± 2% numa-meminfo.node1.KernelStack
17090 ± 4% -13.8% 14733 ± 5% numa-meminfo.node1.Mapped
66670 ± 5% -23.4% 51082 ± 14% numa-meminfo.node1.PageTables
65575 ± 4% +12.4% 73725 ± 5% numa-meminfo.node1.SReclaimable
3308500 ± 20% +140.2% 7945839 ± 36% numa-numastat.node0.local_node
1675907 ± 6% +140.5% 4030944 ± 4% numa-numastat.node0.numa_foreign
3318819 ± 20% +140.3% 7976782 ± 36% numa-numastat.node0.numa_hit
5171847 ± 14% -62.5% 1939771 ± 34% numa-numastat.node0.numa_miss
5182188 ± 14% -62.0% 1970716 ± 34% numa-numastat.node0.other_node
15171228 -31.1% 10449562 ± 23% numa-numastat.node1.local_node
5171847 ± 14% -62.5% 1939771 ± 34% numa-numastat.node1.numa_foreign
15191898 -31.2% 10449560 ± 23% numa-numastat.node1.numa_hit
1675907 ± 6% +140.5% 4030944 ± 4% numa-numastat.node1.numa_miss
1696577 ± 7% +137.6% 4030962 ± 4% numa-numastat.node1.other_node
1430 ± 4% +14.6% 1640 ± 2% slabinfo.Acpi-Parse.active_objs
1430 ± 4% +14.6% 1640 ± 2% slabinfo.Acpi-Parse.num_objs
1495 ± 7% +28.5% 1921 ± 9% slabinfo.dmaengine-unmap-16.active_objs
1495 ± 7% +28.5% 1921 ± 9% slabinfo.dmaengine-unmap-16.num_objs
4330 ± 9% +8.6% 4704 ± 4% slabinfo.eventpoll_pwq.active_objs
4330 ± 9% +8.6% 4704 ± 4% slabinfo.eventpoll_pwq.num_objs
893.33 +5.4% 941.50 ± 4% slabinfo.filp.active_slabs
28601 +5.4% 30133 ± 4% slabinfo.filp.num_objs
893.33 +5.4% 941.50 ± 4% slabinfo.filp.num_slabs
790.00 ± 11% +16.2% 918.00 ± 6% slabinfo.kmem_cache_node.active_objs
828.00 ± 10% +15.3% 955.00 ± 6% slabinfo.kmem_cache_node.num_objs
1983 ± 9% +16.6% 2312 ± 10% slabinfo.radix_tree_node.active_slabs
111093 ± 9% +16.6% 129533 ± 10% slabinfo.radix_tree_node.num_objs
1983 ± 9% +16.6% 2312 ± 10% slabinfo.radix_tree_node.num_slabs
4330 ± 6% -15.0% 3680 slabinfo.skbuff_head_cache.active_objs
4330 ± 6% -15.0% 3680 slabinfo.skbuff_head_cache.num_objs
5566 ± 3% -20.5% 4424 ± 9% numa-vmstat.node0.nr_active_anon
3524 ± 3% +15.2% 4061 ± 13% numa-vmstat.node0.nr_mapped
5576 ± 2% -20.7% 4424 ± 9% numa-vmstat.node0.nr_zone_active_anon
1054096 ± 10% +164.4% 2787187 ± 19% numa-vmstat.node0.numa_foreign
2769819 ± 18% +83.1% 5070160 ± 30% numa-vmstat.node0.numa_hit
2716892 ± 17% +83.2% 4977012 ± 32% numa-vmstat.node0.numa_local
3395314 ± 14% -63.4% 1241437 ± 54% numa-vmstat.node0.numa_miss
3448253 ± 12% -61.3% 1334595 ± 55% numa-vmstat.node0.numa_other
1265 ± 21% +34.8% 1706 ± 4% numa-vmstat.node1.nr_active_anon
8775 ± 13% +74.8% 15335 ± 29% numa-vmstat.node1.nr_free_cma
103.33 ± 4% -37.1% 65.00 ± 30% numa-vmstat.node1.nr_isolated_anon
7103 ± 2% +7.3% 7619 ± 2% numa-vmstat.node1.nr_kernel_stack
16671 ± 4% -23.3% 12782 ± 14% numa-vmstat.node1.nr_page_table_pages
1276 ± 21% +33.8% 1707 ± 4% numa-vmstat.node1.nr_zone_active_anon
3395812 ± 14% -63.4% 1241615 ± 54% numa-vmstat.node1.numa_foreign
1054335 ± 10% +164.4% 2787638 ± 19% numa-vmstat.node1.numa_miss
1166568 ± 4% +145.1% 2859690 ± 20% numa-vmstat.node1.numa_other
15105 ± 2% -45.0% 8309 ± 50% proc-vmstat.allocstall_movable
30459 ± 49% -71.7% 8615 ± 48% proc-vmstat.compact_isolated
6848 ± 3% -10.4% 6136 ± 6% proc-vmstat.nr_active_anon
9005 ± 14% +68.2% 15144 ± 30% proc-vmstat.nr_free_cma
132.33 ± 5% -23.7% 101.00 ± 7% proc-vmstat.nr_isolated_anon
6868 ± 3% -10.7% 6136 ± 6% proc-vmstat.nr_zone_active_anon
7563 ± 27% -64.8% 2660 ± 61% proc-vmstat.numa_hint_faults
10414558 ± 9% -19.1% 8428117 ± 2% proc-vmstat.numa_pte_updates
1031362 -20.5% 820126 proc-vmstat.pgalloc_dma32
31078215 -24.0% 23617831 proc-vmstat.pgalloc_normal
32164957 -23.9% 24479418 proc-vmstat.pgfree
4152 -23.2% 3188 ± 3% proc-vmstat.pgmajfault
17471 ± 45% -74.5% 4458 ± 49% proc-vmstat.pgmigrate_success
46356943 ± 9% -28.9% 32956826 ± 17% proc-vmstat.pgscan_anon
29201063 ± 14% -28.2% 20954738 ± 2% proc-vmstat.pgscan_direct
4248 ± 2% -21.8% 3321 proc-vmstat.pswpin
36037 +3.2% 37204 proc-vmstat.slabs_scanned
0.00 +2.8e+10% 284.55 ± 22% sched_debug.cfs_rq:/.MIN_vruntime.avg
0.00 +2.7e+12% 27316 ± 22% sched_debug.cfs_rq:/.MIN_vruntime.max
2720 ± 18% +589.5% 18759 ± 31% sched_debug.cfs_rq:/.load.avg
46202 ± 36% +3321.4% 1580776 ± 32% sched_debug.cfs_rq:/.load.max
8163 ± 17% +1865.3% 160439 ± 32% sched_debug.cfs_rq:/.load.stddev
382.50 ±121% -91.3% 33.40 ± 12% sched_debug.cfs_rq:/.load_avg.avg
17337 ±129% -94.4% 972.00 sched_debug.cfs_rq:/.load_avg.max
2438 ±128% -94.4% 136.89 ± 8% sched_debug.cfs_rq:/.load_avg.stddev
0.00 +2.8e+10% 284.55 ± 22% sched_debug.cfs_rq:/.max_vruntime.avg
0.00 +2.7e+12% 27316 ± 22% sched_debug.cfs_rq:/.max_vruntime.max
16104 ± 13% +54.7% 24908 ± 36% sched_debug.cfs_rq:/.min_vruntime.avg
9293 ± 32% +72.6% 16037 ± 32% sched_debug.cfs_rq:/.min_vruntime.min
3035 ± 5% +38.0% 4189 ± 24% sched_debug.cfs_rq:/.min_vruntime.stddev
1.00 +100.0% 2.00 sched_debug.cfs_rq:/.nr_running.max
0.32 ± 7% +15.6% 0.37 ± 2% sched_debug.cfs_rq:/.nr_running.stddev
20.87 ± 39% -75.4% 5.12 ±100% sched_debug.cfs_rq:/.removed.load_avg.avg
140.08 ± 20% -64.3% 49.95 ±100% sched_debug.cfs_rq:/.removed.load_avg.stddev
355.11 ± 4% -14.2% 304.76 sched_debug.cfs_rq:/.runnable_avg.stddev
-759.95 -1050.2% 7221 ± 72% sched_debug.cfs_rq:/.spread0.avg
12531 ± 16% +76.2% 22078 ± 25% sched_debug.cfs_rq:/.spread0.max
-7559 -81.2% -1421 sched_debug.cfs_rq:/.spread0.min
3045 ± 4% +38.5% 4219 ± 24% sched_debug.cfs_rq:/.spread0.stddev
355.02 ± 4% -14.1% 304.79 sched_debug.cfs_rq:/.util_avg.stddev
34.72 ± 12% -21.4% 27.28 ± 2% sched_debug.cfs_rq:/.util_est_enqueued.avg
772814 ± 7% +5.6% 816403 ± 6% sched_debug.cpu.avg_idle.avg
1396289 ± 27% +314.0% 5781242 ± 71% sched_debug.cpu.avg_idle.max
2225 ± 32% +512.8% 13639 ± 81% sched_debug.cpu.avg_idle.min
0.12 ± 11% +14.7% 0.14 ± 7% sched_debug.cpu.nr_running.avg
1.00 +100.0% 2.00 sched_debug.cpu.nr_running.max
0.32 ± 4% +15.2% 0.37 ± 2% sched_debug.cpu.nr_running.stddev
6.76 ± 9% +11.6% 7.55 ± 5% sched_debug.cpu.nr_uninterruptible.stddev
4.47 ±122% +308.4% 18.27 ± 78% sched_debug.cpu.sched_count.avg
272.67 ±136% +313.5% 1127 ± 79% sched_debug.cpu.sched_count.max
32.56 ±133% +325.6% 138.57 ± 78% sched_debug.cpu.sched_count.stddev
2.26 ±122% +305.0% 9.15 ± 77% sched_debug.cpu.sched_goidle.avg
137.67 ±136% +310.4% 565.00 ± 79% sched_debug.cpu.sched_goidle.max
16.42 ±133% +322.8% 69.44 ± 78% sched_debug.cpu.sched_goidle.stddev
0.98 ± 57% +796.7% 8.79 ± 68% sched_debug.cpu.ttwu_count.avg
51.00 ± 76% +952.0% 536.50 ± 75% sched_debug.cpu.ttwu_count.max
6.18 ± 72% +971.6% 66.18 ± 73% sched_debug.cpu.ttwu_count.stddev
0.08 ± 55% -80.3% 0.02 ± 6% sched_debug.cpu.ttwu_local.avg
0.36 ± 47% -65.4% 0.13 ± 3% sched_debug.cpu.ttwu_local.stddev
17248 ± 18% -59.1% 7054 ± 50% softirqs.CPU0.RCU
9395 ± 25% -63.7% 3412 ± 34% softirqs.CPU1.RCU
9451 ± 17% -70.5% 2785 ± 27% softirqs.CPU11.RCU
8411 ± 33% -60.8% 3294 ± 2% softirqs.CPU13.RCU
8757 ± 18% -70.4% 2590 ± 27% softirqs.CPU14.RCU
15363 ± 11% -82.6% 2668 ± 23% softirqs.CPU24.RCU
14609 ± 20% -77.0% 3361 ± 26% softirqs.CPU25.RCU
11185 ± 21% -73.8% 2931 ± 31% softirqs.CPU26.RCU
11890 ± 31% -68.1% 3790 ± 5% softirqs.CPU27.RCU
11480 ± 29% -73.2% 3073 ± 17% softirqs.CPU28.RCU
10461 ± 33% -63.2% 3847 ± 6% softirqs.CPU29.RCU
13942 ± 32% -78.2% 3045 ± 25% softirqs.CPU30.RCU
12582 ± 3% -77.0% 2889 ± 38% softirqs.CPU31.RCU
12318 ± 24% -70.8% 3600 ± 10% softirqs.CPU32.RCU
9897 ± 10% -44.1% 5537 ± 37% softirqs.CPU33.RCU
10918 ± 9% -72.4% 3018 ± 21% softirqs.CPU34.RCU
14239 ± 33% -76.2% 3386 ± 4% softirqs.CPU35.RCU
11469 ± 23% -74.6% 2908 ± 28% softirqs.CPU36.RCU
12694 ± 50% -77.9% 2805 ± 26% softirqs.CPU37.RCU
13133 ± 19% -79.1% 2746 ± 27% softirqs.CPU38.RCU
12577 ± 7% -78.8% 2672 ± 30% softirqs.CPU39.RCU
12311 ± 14% -75.6% 3002 ± 34% softirqs.CPU40.RCU
11091 ± 3% -73.5% 2940 ± 31% softirqs.CPU41.RCU
9579 ± 25% -72.0% 2683 ± 37% softirqs.CPU42.RCU
10558 ± 14% -74.3% 2713 ± 37% softirqs.CPU43.RCU
10360 ± 37% -69.2% 3188 ± 30% softirqs.CPU44.RCU
9697 ± 20% -71.5% 2763 ± 24% softirqs.CPU45.RCU
8819 ± 39% -67.4% 2873 ± 24% softirqs.CPU46.RCU
10626 ± 29% -74.2% 2738 ± 21% softirqs.CPU47.RCU
7931 ± 24% -69.7% 2402 ± 25% softirqs.CPU48.RCU
11642 ± 23% -72.4% 3219 ± 43% softirqs.CPU72.RCU
10515 ± 28% -76.8% 2442 ± 28% softirqs.CPU73.RCU
13581 ± 60% -79.1% 2837 ± 42% softirqs.CPU75.RCU
14199 ± 53% -81.4% 2639 ± 33% softirqs.CPU76.RCU
8791 ± 31% -71.5% 2504 ± 30% softirqs.CPU77.RCU
13260 ± 32% -81.8% 2418 ± 32% softirqs.CPU78.RCU
13741 ± 24% -79.4% 2837 ± 24% softirqs.CPU79.RCU
11007 ± 25% -77.6% 2463 ± 32% softirqs.CPU80.RCU
10950 ± 33% -77.2% 2494 ± 33% softirqs.CPU81.RCU
10367 ± 30% -74.8% 2610 ± 23% softirqs.CPU82.RCU
12792 ± 18% -81.6% 2353 ± 34% softirqs.CPU83.RCU
11857 ± 32% -79.2% 2465 ± 37% softirqs.CPU85.RCU
9573 ± 38% -73.9% 2498 ± 30% softirqs.CPU86.RCU
8645 ± 18% -70.6% 2542 ± 34% softirqs.CPU87.RCU
10293 ± 27% -76.5% 2417 ± 33% softirqs.CPU88.RCU
10604 ± 29% -72.7% 2896 ± 22% softirqs.CPU89.RCU
8119 ± 24% -70.8% 2371 ± 31% softirqs.CPU90.RCU
9698 ± 25% -65.8% 3321 ± 48% softirqs.CPU95.RCU
862186 ± 10% -67.6% 279218 ± 27% softirqs.RCU
8.23 -23.0% 6.34 ± 8% perf-stat.i.MPKI
6.462e+09 -4.4% 6.175e+09 perf-stat.i.branch-instructions
0.56 +0.0 0.57 perf-stat.i.branch-miss-rate%
30675041 -2.2% 29993559 perf-stat.i.branch-misses
1.099e+08 ± 4% -23.4% 84100899 ± 9% perf-stat.i.cache-misses
1.996e+08 -27.2% 1.453e+08 ± 9% perf-stat.i.cache-references
5330 ± 18% -40.5% 3170 ± 9% perf-stat.i.context-switches
3.42e+10 -10.1% 3.074e+10 ± 6% perf-stat.i.cpu-cycles
274.89 -19.8% 220.40 perf-stat.i.cpu-migrations
496.32 ± 5% +25.8% 624.53 ± 16% perf-stat.i.cycles-between-cache-misses
6.592e+09 -7.9% 6.069e+09 ± 3% perf-stat.i.dTLB-loads
0.13 +0.0 0.16 ± 13% perf-stat.i.dTLB-store-miss-rate%
2.832e+09 ± 2% -13.9% 2.439e+09 ± 7% perf-stat.i.dTLB-stores
2194729 -4.3% 2099808 ± 2% perf-stat.i.iTLB-loads
2.588e+10 -6.8% 2.413e+10 ± 3% perf-stat.i.instructions
104.43 -24.2% 79.16 ± 16% perf-stat.i.major-faults
0.36 -10.1% 0.32 ± 6% perf-stat.i.metric.GHz
167.95 -7.8% 154.81 ± 3% perf-stat.i.metric.M/sec
810491 +2.6% 831441 perf-stat.i.minor-faults
63.14 ± 3% +6.8 69.97 ± 5% perf-stat.i.node-load-miss-rate%
9153385 ± 7% +16.6% 10675997 ± 5% perf-stat.i.node-load-misses
5449241 ± 5% -33.6% 3619910 ± 7% perf-stat.i.node-loads
48.01 +25.4 73.36 ± 10% perf-stat.i.node-store-miss-rate%
8802770 ± 11% -79.6% 1794154 ± 20% perf-stat.i.node-stores
810595 +2.6% 831521 perf-stat.i.page-faults
7.71 -22.0% 6.01 ± 6% perf-stat.overall.MPKI
0.48 +0.0 0.49 perf-stat.overall.branch-miss-rate%
55.07 ± 3% +2.8 57.89 perf-stat.overall.cache-miss-rate%
311.03 ± 2% +17.7% 365.94 ± 2% perf-stat.overall.cycles-between-cache-misses
0.15 +0.0 0.17 ± 5% perf-stat.overall.dTLB-store-miss-rate%
62.67 ± 2% +12.0 74.69 perf-stat.overall.node-load-miss-rate%
43.92 +36.5 80.41 ± 3% perf-stat.overall.node-store-miss-rate%
7132 -9.3% 6468 ± 4% perf-stat.overall.path-length
6.258e+09 -4.4% 5.979e+09 perf-stat.ps.branch-instructions
29818678 -2.7% 29004012 perf-stat.ps.branch-misses
1.065e+08 ± 4% -23.5% 81492231 ± 9% perf-stat.ps.cache-misses
1.934e+08 -27.2% 1.408e+08 ± 9% perf-stat.ps.cache-references
5145 ± 18% -40.3% 3070 ± 9% perf-stat.ps.context-switches
3.31e+10 -10.1% 2.975e+10 ± 6% perf-stat.ps.cpu-cycles
266.05 -20.0% 212.96 perf-stat.ps.cpu-migrations
6.386e+09 -8.0% 5.876e+09 ± 3% perf-stat.ps.dTLB-loads
2.743e+09 ± 2% -13.9% 2.363e+09 ± 7% perf-stat.ps.dTLB-stores
2123501 -4.4% 2029324 ± 2% perf-stat.ps.iTLB-loads
2.507e+10 -6.8% 2.336e+10 ± 3% perf-stat.ps.instructions
101.46 -24.8% 76.33 ± 16% perf-stat.ps.major-faults
784848 +2.6% 805278 perf-stat.ps.minor-faults
8879280 ± 6% +16.5% 10347043 ± 5% perf-stat.ps.node-load-misses
5280243 ± 5% -33.5% 3511677 ± 7% perf-stat.ps.node-loads
8532944 ± 11% -79.6% 1737146 ± 19% perf-stat.ps.node-stores
784950 +2.6% 805354 perf-stat.ps.page-faults
7.758e+11 -9.3% 7.036e+11 ± 4% perf-stat.total.instructions
16.00 ± 19% -9.7 6.28 ± 37% perf-profile.calltrace.cycles-pp.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask
15.76 ± 19% -9.5 6.25 ± 37% perf-profile.calltrace.cycles-pp.shrink_lruvec.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
15.72 ± 19% -9.5 6.24 ± 37% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages.try_to_free_pages
11.57 ± 37% -9.1 2.51 ±100% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault
10.80 ± 40% -8.5 2.32 ±100% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page
10.79 ± 40% -8.5 2.32 ±100% perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma
11.52 ± 2% -8.4 3.10 ±100% perf-profile.calltrace.cycles-pp.ret_from_fork
11.52 ± 2% -8.4 3.10 ±100% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
12.38 ± 36% -7.5 4.88 ± 15% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
12.26 ± 36% -7.5 4.79 ± 17% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault
13.21 ± 15% -7.1 6.14 ± 36% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
8.08 ± 7% -5.1 2.94 ±100% perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
8.08 ± 7% -5.1 2.94 ±100% perf-profile.calltrace.cycles-pp.balance_pgdat.kswapd.kthread.ret_from_fork
8.07 ± 7% -5.1 2.94 ±100% perf-profile.calltrace.cycles-pp.shrink_node.balance_pgdat.kswapd.kthread.ret_from_fork
8.05 ± 8% -5.1 2.93 ±100% perf-profile.calltrace.cycles-pp.shrink_lruvec.shrink_node.balance_pgdat.kswapd.kthread
8.04 ± 7% -5.1 2.93 ±100% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_lruvec.shrink_node.balance_pgdat.kswapd
7.51 ± 9% -4.7 2.77 ±100% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node.balance_pgdat
5.02 ± 5% -2.4 2.62 ± 35% perf-profile.calltrace.cycles-pp.try_to_unmap_flush_dirty.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
5.01 ± 5% -2.4 2.62 ± 35% perf-profile.calltrace.cycles-pp.arch_tlbbatch_flush.try_to_unmap_flush_dirty.shrink_page_list.shrink_inactive_list.shrink_lruvec
4.66 ± 13% -2.4 2.27 ± 64% perf-profile.calltrace.cycles-pp.pageout.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
4.91 ± 5% -2.4 2.55 ± 34% perf-profile.calltrace.cycles-pp.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush_dirty.shrink_page_list.shrink_inactive_list
4.60 ± 4% -2.1 2.47 ± 33% perf-profile.calltrace.cycles-pp.smp_call_function_single.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush_dirty.shrink_page_list
3.96 ± 13% -2.1 1.86 ± 65% perf-profile.calltrace.cycles-pp.__swap_writepage.pageout.shrink_page_list.shrink_inactive_list.shrink_lruvec
3.89 ± 13% -2.1 1.81 ± 65% perf-profile.calltrace.cycles-pp.bdev_write_page.__swap_writepage.pageout.shrink_page_list.shrink_inactive_list
3.23 ± 13% -2.0 1.24 ±100% perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
3.48 ± 14% -1.9 1.58 ± 65% perf-profile.calltrace.cycles-pp.pmem_rw_page.bdev_write_page.__swap_writepage.pageout.shrink_page_list
2.06 ± 35% -1.8 0.25 ±100% perf-profile.calltrace.cycles-pp.page_referenced.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
1.94 ± 26% -1.4 0.57 ±100% perf-profile.calltrace.cycles-pp.pmem_do_write.pmem_rw_page.bdev_write_page.__swap_writepage.pageout
1.94 ± 26% -1.4 0.57 ±100% perf-profile.calltrace.cycles-pp.write_pmem.pmem_do_write.pmem_rw_page.bdev_write_page.__swap_writepage
1.86 ± 25% -1.3 0.55 ±100% perf-profile.calltrace.cycles-pp.__memcpy_flushcache.write_pmem.pmem_do_write.pmem_rw_page.bdev_write_page
0.96 ± 34% -0.7 0.28 ±100% perf-profile.calltrace.cycles-pp.mem_cgroup_swapout.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec
0.56 ± 8% +0.3 0.82 perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.36 ± 70% +0.4 0.73 ± 6% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
1.61 ± 8% +0.4 2.06 ± 22% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.68 ± 17% +0.5 1.21 ± 37% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt
0.20 ±141% +0.6 0.82 ± 31% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack
1.29 ± 15% +1.1 2.42 ± 45% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
1.31 ± 15% +1.1 2.46 ± 45% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
1.31 ± 15% +1.2 2.47 ± 45% perf-profile.calltrace.cycles-pp.asm_call_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.17 ±141% +1.3 1.47 ± 15% perf-profile.calltrace.cycles-pp.clear_shadow_from_swap_cache.swapcache_free_entries.free_swap_slot.__swap_entry_free.free_swap_and_cache
29.19 ± 21% +4.7 33.91 ± 23% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
32.27 ± 18% +7.3 39.52 ± 12% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
32.59 ± 18% +7.4 40.00 ± 11% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
34.63 ± 16% +7.9 42.53 ± 9% perf-profile.calltrace.cycles-pp.secondary_startup_64
34.42 ± 16% +8.0 42.44 ± 9% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
34.42 ± 16% +8.0 42.45 ± 9% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
34.42 ± 16% +8.0 42.45 ± 9% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
25.02 ± 11% -15.8 9.27 ± 57% perf-profile.children.cycles-pp.shrink_node
24.75 ± 11% -15.5 9.23 ± 57% perf-profile.children.cycles-pp.shrink_lruvec
24.71 ± 11% -15.5 9.23 ± 57% perf-profile.children.cycles-pp.shrink_inactive_list
21.26 ± 7% -12.3 8.96 ± 56% perf-profile.children.cycles-pp.shrink_page_list
17.87 ± 19% -11.3 6.54 ± 39% perf-profile.children.cycles-pp.__alloc_pages_slowpath
16.97 ± 20% -10.6 6.34 ± 38% perf-profile.children.cycles-pp.try_to_free_pages
16.95 ± 20% -10.6 6.34 ± 38% perf-profile.children.cycles-pp.do_try_to_free_pages
18.60 ± 19% -9.8 8.84 ± 10% perf-profile.children.cycles-pp.__alloc_pages_nodemask
11.52 ± 2% -8.2 3.31 ± 87% perf-profile.children.cycles-pp.kthread
11.52 ± 2% -8.2 3.31 ± 87% perf-profile.children.cycles-pp.ret_from_fork
12.53 ± 36% -7.6 4.89 ± 15% perf-profile.children.cycles-pp.alloc_pages_vma
8.08 ± 7% -5.1 2.94 ±100% perf-profile.children.cycles-pp.kswapd
8.08 ± 7% -5.1 2.94 ±100% perf-profile.children.cycles-pp.balance_pgdat
5.15 ± 5% -2.5 2.63 ± 35% perf-profile.children.cycles-pp.try_to_unmap_flush_dirty
5.15 ± 5% -2.5 2.63 ± 35% perf-profile.children.cycles-pp.arch_tlbbatch_flush
5.05 ± 5% -2.5 2.56 ± 34% perf-profile.children.cycles-pp.on_each_cpu_cond_mask
4.79 ± 13% -2.5 2.29 ± 65% perf-profile.children.cycles-pp.pageout
3.64 ± 15% -2.4 1.24 ± 66% perf-profile.children.cycles-pp.rmap_walk_anon
4.78 ± 4% -2.3 2.50 ± 34% perf-profile.children.cycles-pp.smp_call_function_single
4.06 ± 14% -2.2 1.88 ± 65% perf-profile.children.cycles-pp.__swap_writepage
3.99 ± 14% -2.2 1.83 ± 65% perf-profile.children.cycles-pp.bdev_write_page
2.92 ± 10% -2.1 0.84 ± 49% perf-profile.children.cycles-pp.add_to_swap
3.56 ± 14% -2.0 1.59 ± 65% perf-profile.children.cycles-pp.pmem_rw_page
2.03 ± 14% -1.9 0.09 ± 44% perf-profile.children.cycles-pp.rcu_do_batch
2.46 ± 59% -1.9 0.53 ± 6% perf-profile.children.cycles-pp._raw_spin_lock_irq
2.08 ± 14% -1.9 0.21 ± 65% perf-profile.children.cycles-pp.add_to_swap_cache
3.33 ± 14% -1.9 1.47 ± 70% perf-profile.children.cycles-pp.__remove_mapping
2.12 ± 42% -1.8 0.29 perf-profile.children.cycles-pp.worker_thread
2.08 ± 14% -1.8 0.27 ± 77% perf-profile.children.cycles-pp.rcu_core
2.10 ± 43% -1.8 0.29 perf-profile.children.cycles-pp.process_one_work
2.43 ± 25% -1.8 0.63 ± 68% perf-profile.children.cycles-pp.page_referenced
2.42 ± 14% -1.6 0.82 ± 58% perf-profile.children.cycles-pp.__softirqentry_text_start
1.60 ± 14% -1.6 0.03 ±100% perf-profile.children.cycles-pp.xas_create_range
1.59 ± 14% -1.6 0.04 ±100% perf-profile.children.cycles-pp.xas_create
1.64 ± 13% -1.5 0.10 ± 50% perf-profile.children.cycles-pp.kmem_cache_free
1.37 ± 12% -1.3 0.04 ±100% perf-profile.children.cycles-pp.__slab_free
1.28 ± 23% -1.2 0.04 ±100% perf-profile.children.cycles-pp.run_ksoftirqd
1.31 ± 22% -1.2 0.07 ± 23% perf-profile.children.cycles-pp.smpboot_thread_fn
2.13 ± 16% -1.2 0.96 ± 65% perf-profile.children.cycles-pp.pmem_do_write
2.11 ± 16% -1.2 0.95 ± 65% perf-profile.children.cycles-pp.write_pmem
2.09 ± 16% -1.2 0.94 ± 65% perf-profile.children.cycles-pp.__memcpy_flushcache
1.22 ± 27% -0.9 0.30 ± 80% perf-profile.children.cycles-pp.page_referenced_one
1.17 ± 20% -0.8 0.33 ± 70% perf-profile.children.cycles-pp.page_vma_mapped_walk
1.53 ± 7% -0.8 0.71 ± 67% perf-profile.children.cycles-pp.try_to_unmap
1.41 ± 11% -0.8 0.62 ± 64% perf-profile.children.cycles-pp.end_page_writeback
0.87 ± 12% -0.7 0.18 ± 61% perf-profile.children.cycles-pp.__delete_from_swap_cache
0.80 ± 28% -0.7 0.14 ±100% perf-profile.children.cycles-pp.isolate_lru_pages
1.17 ± 9% -0.6 0.57 ± 66% perf-profile.children.cycles-pp.try_to_unmap_one
0.85 ± 21% -0.6 0.27 ± 47% perf-profile.children.cycles-pp.page_lock_anon_vma_read
1.49 ± 6% -0.5 0.95 ± 34% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.59 ± 19% -0.4 0.18 ± 60% perf-profile.children.cycles-pp.down_read_trylock
0.46 ± 35% -0.4 0.06 ±100% perf-profile.children.cycles-pp.move_pages_to_lru
1.01 ± 5% -0.3 0.68 ± 31% perf-profile.children.cycles-pp.sysvec_call_function_single
0.93 ± 9% -0.3 0.60 ± 30% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.34 ± 63% -0.3 0.03 ±100% perf-profile.children.cycles-pp.__libc_fork
0.33 ± 65% -0.3 0.03 ±100% perf-profile.children.cycles-pp.__do_sys_clone
0.33 ± 65% -0.3 0.03 ±100% perf-profile.children.cycles-pp._do_fork
0.33 ± 65% -0.3 0.03 ±100% perf-profile.children.cycles-pp.copy_process
0.91 ± 5% -0.3 0.62 ± 32% perf-profile.children.cycles-pp.__sysvec_call_function_single
0.46 ± 12% -0.2 0.21 ± 67% perf-profile.children.cycles-pp.put_swap_page
0.27 ± 51% -0.2 0.04 ±100% perf-profile.children.cycles-pp.smp_call_function_many_cond
1.05 ± 10% -0.2 0.83 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.26 ± 35% -0.2 0.05 ±100% perf-profile.children.cycles-pp.__isolate_lru_page
0.60 ± 14% -0.2 0.41 ± 8% perf-profile.children.cycles-pp.xas_store
0.25 ± 61% -0.2 0.06 ± 16% perf-profile.children.cycles-pp.__x64_sys_execve
0.25 ± 61% -0.2 0.06 ± 16% perf-profile.children.cycles-pp.execve
0.25 ± 61% -0.2 0.06 ± 16% perf-profile.children.cycles-pp.do_execveat_common
0.39 ± 15% -0.2 0.21 ± 31% perf-profile.children.cycles-pp.flush_tlb_func_common
0.19 ± 42% -0.2 0.03 ±100% perf-profile.children.cycles-pp.shrink_slab
0.22 ± 7% -0.1 0.07 ±100% perf-profile.children.cycles-pp.test_clear_page_writeback
0.27 ± 26% -0.1 0.13 ± 23% perf-profile.children.cycles-pp.up_read
0.24 ± 22% -0.1 0.12 ± 39% perf-profile.children.cycles-pp.native_flush_tlb_local
0.21 ± 4% -0.1 0.08 ± 29% perf-profile.children.cycles-pp.start_kernel
0.31 ± 13% -0.1 0.20 ± 12% perf-profile.children.cycles-pp.propagate_protected_usage
0.20 ± 14% -0.1 0.09 ± 22% perf-profile.children.cycles-pp.page_mapping
0.50 ± 3% -0.1 0.41 ± 4% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.11 ± 32% -0.1 0.03 ±100% perf-profile.children.cycles-pp.unlock_page
0.13 ± 12% -0.1 0.05 ±100% perf-profile.children.cycles-pp.clear_page_dirty_for_io
0.11 ± 25% -0.1 0.04 ±100% perf-profile.children.cycles-pp.__set_page_dirty_no_writeback
0.15 ± 28% -0.1 0.07 perf-profile.children.cycles-pp._find_next_bit
0.09 ± 9% -0.1 0.03 ±100% perf-profile.children.cycles-pp.cpumask_any_but
0.07 ± 14% -0.0 0.03 ±100% perf-profile.children.cycles-pp.inc_node_page_state
0.08 ± 14% -0.0 0.06 ± 9% perf-profile.children.cycles-pp._cond_resched
0.08 ± 31% +0.0 0.11 ± 18% perf-profile.children.cycles-pp.calc_global_load_tick
0.07 ± 14% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.07 ± 14% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.07 ± 18% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.__radix_tree_lookup
0.02 ±141% +0.0 0.06 ± 16% perf-profile.children.cycles-pp.do_swap
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.xas_start
0.02 ±141% +0.1 0.08 ± 37% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.27 ± 12% +0.1 0.33 ± 15% perf-profile.children.cycles-pp.scheduler_tick
0.15 ± 11% +0.1 0.24 ± 19% perf-profile.children.cycles-pp.memcpy_erms
0.15 ± 11% +0.1 0.24 ± 19% perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
0.02 ±141% +0.1 0.10 ± 42% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.14 ± 17% +0.1 0.23 ± 43% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.30 ± 9% +0.1 0.41 ± 23% perf-profile.children.cycles-pp.timekeeping_max_deferment
0.00 +0.1 0.11 ± 4% perf-profile.children.cycles-pp.xas_clear_mark
0.02 ±141% +0.1 0.15 ± 6% perf-profile.children.cycles-pp.xas_init_marks
0.51 ± 8% +0.2 0.73 ± 6% perf-profile.children.cycles-pp.tick_nohz_next_event
0.46 ± 20% +0.2 0.68 ± 22% perf-profile.children.cycles-pp.update_process_times
0.46 ± 20% +0.2 0.68 ± 22% perf-profile.children.cycles-pp.tick_sched_handle
0.56 ± 8% +0.3 0.83 perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.43 ± 17% +0.3 0.71 ± 35% perf-profile.children.cycles-pp.clockevents_program_event
0.59 ± 17% +0.3 0.93 ± 32% perf-profile.children.cycles-pp.tick_sched_timer
0.08 ± 26% +0.4 0.51 ± 21% perf-profile.children.cycles-pp.xas_load
1.62 ± 8% +0.5 2.07 ± 22% perf-profile.children.cycles-pp.menu_select
0.05 ± 70% +0.5 0.56 ± 21% perf-profile.children.cycles-pp.xas_find
0.82 ± 15% +0.5 1.35 ± 37% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.51 ± 4% +1.0 1.52 ± 16% perf-profile.children.cycles-pp.clear_shadow_from_swap_cache
29.35 ± 21% +4.6 33.95 ± 23% perf-profile.children.cycles-pp.intel_idle
32.79 ± 18% +7.3 40.07 ± 11% perf-profile.children.cycles-pp.cpuidle_enter
32.78 ± 18% +7.3 40.06 ± 11% perf-profile.children.cycles-pp.cpuidle_enter_state
34.63 ± 16% +7.9 42.53 ± 9% perf-profile.children.cycles-pp.secondary_startup_64
34.63 ± 16% +7.9 42.53 ± 9% perf-profile.children.cycles-pp.cpu_startup_entry
34.63 ± 16% +7.9 42.53 ± 9% perf-profile.children.cycles-pp.do_idle
34.42 ± 16% +8.0 42.45 ± 9% perf-profile.children.cycles-pp.start_secondary
4.20 ± 5% -2.0 2.25 ± 30% perf-profile.self.cycles-pp.smp_call_function_single
1.33 ± 12% -1.3 0.04 ±100% perf-profile.self.cycles-pp.__slab_free
1.97 ± 16% -1.1 0.90 ± 64% perf-profile.self.cycles-pp.__memcpy_flushcache
1.10 ± 12% -0.6 0.50 ± 64% perf-profile.self.cycles-pp.end_page_writeback
0.54 ± 15% -0.4 0.12 ±100% perf-profile.self.cycles-pp.page_vma_mapped_walk
0.55 ± 11% -0.4 0.14 ±100% perf-profile.self.cycles-pp.shrink_page_list
0.56 ± 20% -0.4 0.17 ± 57% perf-profile.self.cycles-pp.down_read_trylock
0.28 ± 40% -0.2 0.04 ±100% perf-profile.self.cycles-pp.move_pages_to_lru
0.32 ± 18% -0.2 0.07 ±100% perf-profile.self.cycles-pp.page_lock_anon_vma_read
0.26 ± 35% -0.2 0.05 ±100% perf-profile.self.cycles-pp.__isolate_lru_page
0.25 ± 19% -0.2 0.06 ±100% perf-profile.self.cycles-pp.rmap_walk_anon
0.38 ± 8% -0.2 0.21 ± 56% perf-profile.self.cycles-pp.try_to_unmap_one
0.29 ± 14% -0.2 0.13 ± 61% perf-profile.self.cycles-pp.add_to_swap_cache
0.21 ± 23% -0.2 0.06 ±100% perf-profile.self.cycles-pp.page_referenced_one
0.17 ± 20% -0.1 0.03 ±100% perf-profile.self.cycles-pp.xas_create
0.31 ± 13% -0.1 0.18 ± 16% perf-profile.self.cycles-pp.propagate_protected_usage
0.26 ± 28% -0.1 0.12 ± 19% perf-profile.self.cycles-pp.up_read
0.24 ± 22% -0.1 0.12 ± 39% perf-profile.self.cycles-pp.native_flush_tlb_local
0.16 ± 24% -0.1 0.04 ±100% perf-profile.self.cycles-pp.isolate_lru_pages
0.18 ± 15% -0.1 0.09 ± 17% perf-profile.self.cycles-pp.page_mapping
0.11 ± 36% -0.1 0.03 ±100% perf-profile.self.cycles-pp.unlock_page
0.13 ± 26% -0.1 0.07 ± 7% perf-profile.self.cycles-pp._find_next_bit
0.09 ± 19% -0.1 0.03 ±100% perf-profile.self.cycles-pp.pageout
0.13 ± 12% -0.0 0.09 ± 17% perf-profile.self.cycles-pp.flush_tlb_func_common
0.07 ± 7% -0.0 0.03 ±100% perf-profile.self.cycles-pp.test_clear_page_writeback
0.06 ± 7% -0.0 0.03 ±100% perf-profile.self.cycles-pp.inc_node_page_state
0.16 ± 10% -0.0 0.12 ± 4% perf-profile.self.cycles-pp.get_page_from_freelist
0.06 ± 13% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.__radix_tree_lookup
0.06 ± 19% +0.0 0.09 perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.08 ± 31% +0.0 0.11 ± 18% perf-profile.self.cycles-pp.calc_global_load_tick
0.34 ± 7% +0.0 0.38 ± 6% perf-profile.self.cycles-pp.swap_range_free
0.02 ±141% +0.1 0.08 ± 37% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.00 +0.1 0.06 ± 16% perf-profile.self.cycles-pp.do_swap
0.09 ± 10% +0.1 0.15 ± 41% perf-profile.self.cycles-pp.irqtime_account_irq
0.00 +0.1 0.08 ± 6% perf-profile.self.cycles-pp.xas_find
0.15 ± 11% +0.1 0.24 ± 19% perf-profile.self.cycles-pp.memcpy_erms
0.12 ± 10% +0.1 0.21 ± 12% perf-profile.self.cycles-pp.xas_store
0.15 ± 9% +0.1 0.24 ± 4% perf-profile.self.cycles-pp.tick_nohz_next_event
0.11 ± 14% +0.1 0.21 ± 21% perf-profile.self.cycles-pp.clear_shadow_from_swap_cache
0.30 ± 11% +0.1 0.41 ± 23% perf-profile.self.cycles-pp.timekeeping_max_deferment
0.00 +0.1 0.11 ± 9% perf-profile.self.cycles-pp.xas_clear_mark
0.02 ±141% +0.4 0.45 ± 21% perf-profile.self.cycles-pp.xas_load
29.34 ± 21% +4.6 33.95 ± 23% perf-profile.self.cycles-pp.intel_idle
1309 ± 31% -95.8% 55.00 ± 38% interrupts.CPU1.NMI:Non-maskable_interrupts
1309 ± 31% -95.8% 55.00 ± 38% interrupts.CPU1.PMI:Performance_monitoring_interrupts
113374 ± 41% -100.0% 55.00 ± 7% interrupts.CPU11.CAL:Function_call_interrupts
257.33 ± 58% -74.4% 66.00 ± 46% interrupts.CPU11.NMI:Non-maskable_interrupts
257.33 ± 58% -74.4% 66.00 ± 46% interrupts.CPU11.PMI:Performance_monitoring_interrupts
123300 ± 42% -100.0% 2.00 ±100% interrupts.CPU11.TLB:TLB_shootdowns
71595 ± 21% +346.8% 319882 ± 25% interrupts.CPU13.CAL:Function_call_interrupts
39.33 ±112% +80.5% 71.00 ± 66% interrupts.CPU13.RES:Rescheduling_interrupts
79142 ± 23% +327.8% 338596 ± 25% interrupts.CPU13.TLB:TLB_shootdowns
121354 ± 28% -99.9% 70.50 ± 31% interrupts.CPU14.CAL:Function_call_interrupts
282.67 ± 73% +397.1% 1405 ± 7% interrupts.CPU14.NMI:Non-maskable_interrupts
282.67 ± 73% +397.1% 1405 ± 7% interrupts.CPU14.PMI:Performance_monitoring_interrupts
133142 ± 28% -100.0% 1.00 ±100% interrupts.CPU14.TLB:TLB_shootdowns
79518 ± 47% -99.4% 497.50 ± 90% interrupts.CPU17.CAL:Function_call_interrupts
87505 ± 47% -99.5% 431.50 ±100% interrupts.CPU17.TLB:TLB_shootdowns
57594 ± 58% -99.9% 77.00 ± 49% interrupts.CPU18.CAL:Function_call_interrupts
62678 ± 58% -100.0% 1.50 ± 33% interrupts.CPU18.TLB:TLB_shootdowns
83503 ± 32% -99.9% 95.00 ± 36% interrupts.CPU19.CAL:Function_call_interrupts
387.33 ± 75% -82.3% 68.50 ± 6% interrupts.CPU19.NMI:Non-maskable_interrupts
387.33 ± 75% -82.3% 68.50 ± 6% interrupts.CPU19.PMI:Performance_monitoring_interrupts
92869 ± 29% -100.0% 2.00 ±100% interrupts.CPU19.TLB:TLB_shootdowns
76523 ± 29% -98.8% 934.00 ± 93% interrupts.CPU2.CAL:Function_call_interrupts
1128 ± 68% -89.7% 116.00 ± 65% interrupts.CPU2.NMI:Non-maskable_interrupts
1128 ± 68% -89.7% 116.00 ± 65% interrupts.CPU2.PMI:Performance_monitoring_interrupts
84432 ± 28% -99.3% 550.00 ± 99% interrupts.CPU2.TLB:TLB_shootdowns
42900 ± 35% -99.9% 50.50 ± 4% interrupts.CPU20.CAL:Function_call_interrupts
47261 ± 35% -100.0% 1.00 ±100% interrupts.CPU20.TLB:TLB_shootdowns
55343 ± 5% -99.8% 133.50 ± 64% interrupts.CPU21.CAL:Function_call_interrupts
557.33 ± 70% -83.9% 89.50 ± 19% interrupts.CPU21.NMI:Non-maskable_interrupts
557.33 ± 70% -83.9% 89.50 ± 19% interrupts.CPU21.PMI:Performance_monitoring_interrupts
60700 ± 3% -100.0% 1.50 ±100% interrupts.CPU21.TLB:TLB_shootdowns
65151 ± 29% -99.9% 65.50 ± 31% interrupts.CPU23.CAL:Function_call_interrupts
1511 ±132% -96.6% 51.50 ± 28% interrupts.CPU23.NMI:Non-maskable_interrupts
1511 ±132% -96.6% 51.50 ± 28% interrupts.CPU23.PMI:Performance_monitoring_interrupts
71450 ± 26% -100.0% 2.50 ± 20% interrupts.CPU23.TLB:TLB_shootdowns
917.33 ± 65% +289.9% 3576 ± 39% interrupts.CPU24.NMI:Non-maskable_interrupts
917.33 ± 65% +289.9% 3576 ± 39% interrupts.CPU24.PMI:Performance_monitoring_interrupts
304398 ± 14% +354.3% 1382899 ± 17% interrupts.CPU25.CAL:Function_call_interrupts
331035 ± 15% +351.5% 1494674 ± 15% interrupts.CPU25.TLB:TLB_shootdowns
198328 ± 28% +632.6% 1452903 ± 4% interrupts.CPU26.CAL:Function_call_interrupts
494.00 ± 20% +343.8% 2192 ± 35% interrupts.CPU26.NMI:Non-maskable_interrupts
494.00 ± 20% +343.8% 2192 ± 35% interrupts.CPU26.PMI:Performance_monitoring_interrupts
215319 ± 28% +623.9% 1558705 ± 4% interrupts.CPU26.TLB:TLB_shootdowns
58.33 ± 88% -86.3% 8.00 ± 12% interrupts.CPU28.RES:Rescheduling_interrupts
189955 ± 55% -99.9% 116.50 ± 47% interrupts.CPU29.CAL:Function_call_interrupts
1128 ± 49% -92.0% 90.50 ± 22% interrupts.CPU29.NMI:Non-maskable_interrupts
1128 ± 49% -92.0% 90.50 ± 22% interrupts.CPU29.PMI:Performance_monitoring_interrupts
205120 ± 55% -100.0% 1.00 ±100% interrupts.CPU29.TLB:TLB_shootdowns
127609 ± 33% -96.9% 3959 ± 98% interrupts.CPU3.CAL:Function_call_interrupts
139191 ± 32% -97.2% 3873 ±100% interrupts.CPU3.TLB:TLB_shootdowns
281773 ± 58% -99.8% 476.50 ± 87% interrupts.CPU30.CAL:Function_call_interrupts
1645 ± 12% -94.7% 88.00 ± 20% interrupts.CPU30.NMI:Non-maskable_interrupts
1645 ± 12% -94.7% 88.00 ± 20% interrupts.CPU30.PMI:Performance_monitoring_interrupts
105.00 ± 59% -91.9% 8.50 ± 52% interrupts.CPU30.RES:Rescheduling_interrupts
303723 ± 58% -99.9% 432.00 ± 99% interrupts.CPU30.TLB:TLB_shootdowns
236289 ± 46% -87.7% 29164 ± 99% interrupts.CPU32.CAL:Function_call_interrupts
918.67 ± 36% -90.1% 90.50 ± 14% interrupts.CPU32.NMI:Non-maskable_interrupts
918.67 ± 36% -90.1% 90.50 ± 14% interrupts.CPU32.PMI:Performance_monitoring_interrupts
135.00 ± 56% -97.0% 4.00 ±100% interrupts.CPU32.RES:Rescheduling_interrupts
255505 ± 46% -88.1% 30485 ± 99% interrupts.CPU32.TLB:TLB_shootdowns
260418 ± 32% -100.0% 70.00 ± 30% interrupts.CPU34.CAL:Function_call_interrupts
1693 ± 41% -92.1% 134.00 ± 48% interrupts.CPU34.NMI:Non-maskable_interrupts
1693 ± 41% -92.1% 134.00 ± 48% interrupts.CPU34.PMI:Performance_monitoring_interrupts
283853 ± 32% -100.0% 2.50 ± 20% interrupts.CPU34.TLB:TLB_shootdowns
306303 -100.0% 72.00 ± 19% interrupts.CPU35.CAL:Function_call_interrupts
1685 ± 23% -94.3% 96.50 ± 8% interrupts.CPU35.NMI:Non-maskable_interrupts
1685 ± 23% -94.3% 96.50 ± 8% interrupts.CPU35.PMI:Performance_monitoring_interrupts
377.33 ± 88% -97.5% 9.50 ± 89% interrupts.CPU35.RES:Rescheduling_interrupts
331951 ± 2% -100.0% 1.50 ± 33% interrupts.CPU35.TLB:TLB_shootdowns
2148 ± 86% -95.9% 88.50 ± 17% interrupts.CPU36.NMI:Non-maskable_interrupts
2148 ± 86% -95.9% 88.50 ± 17% interrupts.CPU36.PMI:Performance_monitoring_interrupts
1007 ± 33% -93.0% 71.00 interrupts.CPU39.NMI:Non-maskable_interrupts
1007 ± 33% -93.0% 71.00 interrupts.CPU39.PMI:Performance_monitoring_interrupts
89.67 ± 57% -98.3% 1.50 ± 33% interrupts.CPU39.RES:Rescheduling_interrupts
227185 ± 39% -100.0% 45.00 ± 6% interrupts.CPU40.CAL:Function_call_interrupts
297.33 ±105% -99.8% 0.50 ±100% interrupts.CPU40.RES:Rescheduling_interrupts
248670 ± 39% -100.0% 1.50 ±100% interrupts.CPU40.TLB:TLB_shootdowns
281008 ± 24% -100.0% 52.50 ± 2% interrupts.CPU41.CAL:Function_call_interrupts
2233 ± 34% -96.8% 70.50 ± 2% interrupts.CPU41.NMI:Non-maskable_interrupts
2233 ± 34% -96.8% 70.50 ± 2% interrupts.CPU41.PMI:Performance_monitoring_interrupts
78.00 ± 35% -98.1% 1.50 ±100% interrupts.CPU41.RES:Rescheduling_interrupts
304440 ± 24% -100.0% 4.00 ± 25% interrupts.CPU41.TLB:TLB_shootdowns
295913 ± 47% -100.0% 52.00 ± 5% interrupts.CPU42.CAL:Function_call_interrupts
2330 ± 57% -96.4% 84.50 ± 25% interrupts.CPU42.NMI:Non-maskable_interrupts
2330 ± 57% -96.4% 84.50 ± 25% interrupts.CPU42.PMI:Performance_monitoring_interrupts
321984 ± 48% -100.0% 2.00 ±100% interrupts.CPU42.TLB:TLB_shootdowns
168874 ± 32% -100.0% 75.00 ± 25% interrupts.CPU43.CAL:Function_call_interrupts
182538 ± 32% -100.0% 1.50 ±100% interrupts.CPU43.TLB:TLB_shootdowns
212342 ± 41% -99.9% 107.50 ± 49% interrupts.CPU44.CAL:Function_call_interrupts
475.00 ± 32% -80.6% 92.00 ± 26% interrupts.CPU44.NMI:Non-maskable_interrupts
475.00 ± 32% -80.6% 92.00 ± 26% interrupts.CPU44.PMI:Performance_monitoring_interrupts
73.33 ±124% -97.3% 2.00 ± 50% interrupts.CPU44.RES:Rescheduling_interrupts
229440 ± 41% -100.0% 1.50 ±100% interrupts.CPU44.TLB:TLB_shootdowns
239600 ± 62% -100.0% 52.50 ± 2% interrupts.CPU45.CAL:Function_call_interrupts
2233 ± 41% -95.4% 103.50 ± 8% interrupts.CPU45.NMI:Non-maskable_interrupts
2233 ± 41% -95.4% 103.50 ± 8% interrupts.CPU45.PMI:Performance_monitoring_interrupts
259462 ± 62% -100.0% 9.00 ±100% interrupts.CPU45.TLB:TLB_shootdowns
142832 ± 79% -100.0% 52.50 ± 2% interrupts.CPU46.CAL:Function_call_interrupts
88.33 ±104% -99.4% 0.50 ±100% interrupts.CPU46.RES:Rescheduling_interrupts
153950 ± 79% -100.0% 1.50 ±100% interrupts.CPU46.TLB:TLB_shootdowns
223934 ± 51% -100.0% 51.50 ± 4% interrupts.CPU47.CAL:Function_call_interrupts
1107 ± 6% -89.8% 112.50 ± 4% interrupts.CPU47.NMI:Non-maskable_interrupts
1107 ± 6% -89.8% 112.50 ± 4% interrupts.CPU47.PMI:Performance_monitoring_interrupts
59.67 ± 92% -98.3% 1.00 ±100% interrupts.CPU47.RES:Rescheduling_interrupts
242184 ± 51% -100.0% 1.00 ±100% interrupts.CPU47.TLB:TLB_shootdowns
1904 ± 6% -17.4% 1573 ± 8% interrupts.CPU48.NMI:Non-maskable_interrupts
1904 ± 6% -17.4% 1573 ± 8% interrupts.CPU48.PMI:Performance_monitoring_interrupts
54.33 ± 72% -96.3% 2.00 ±100% interrupts.CPU48.RES:Rescheduling_interrupts
615.67 ± 10% -91.4% 53.00 ± 35% interrupts.CPU49.NMI:Non-maskable_interrupts
615.67 ± 10% -91.4% 53.00 ± 35% interrupts.CPU49.PMI:Performance_monitoring_interrupts
801.33 ± 67% -82.0% 144.00 ± 13% interrupts.CPU50.NMI:Non-maskable_interrupts
801.33 ± 67% -82.0% 144.00 ± 13% interrupts.CPU50.PMI:Performance_monitoring_interrupts
45295 ± 20% -99.4% 271.50 ± 81% interrupts.CPU51.CAL:Function_call_interrupts
211.33 ± 77% -63.3% 77.50 ± 8% interrupts.CPU51.NMI:Non-maskable_interrupts
211.33 ± 77% -63.3% 77.50 ± 8% interrupts.CPU51.PMI:Performance_monitoring_interrupts
48928 ± 18% -99.5% 220.50 ±100% interrupts.CPU51.TLB:TLB_shootdowns
47166 ± 20% -99.9% 54.50 ± 10% interrupts.CPU52.CAL:Function_call_interrupts
50380 ± 20% -100.0% 3.50 ±100% interrupts.CPU52.TLB:TLB_shootdowns
35411 ± 35% -99.8% 61.50 ± 5% interrupts.CPU54.CAL:Function_call_interrupts
38542 ± 36% -100.0% 1.50 ±100% interrupts.CPU54.TLB:TLB_shootdowns
18108 ± 37% -97.4% 469.00 ± 67% interrupts.CPU55.CAL:Function_call_interrupts
423.00 ± 61% -72.9% 114.50 ± 14% interrupts.CPU55.NMI:Non-maskable_interrupts
423.00 ± 61% -72.9% 114.50 ± 14% interrupts.CPU55.PMI:Performance_monitoring_interrupts
19986 ± 36% -98.2% 367.50 ±100% interrupts.CPU55.TLB:TLB_shootdowns
95159 ± 59% -99.9% 69.00 ± 21% interrupts.CPU56.CAL:Function_call_interrupts
106645 ± 60% -100.0% 1.00 ±100% interrupts.CPU56.TLB:TLB_shootdowns
30072 ± 48% -99.3% 215.00 ± 73% interrupts.CPU57.CAL:Function_call_interrupts
393.33 ± 98% -83.0% 67.00 ± 44% interrupts.CPU57.NMI:Non-maskable_interrupts
393.33 ± 98% -83.0% 67.00 ± 44% interrupts.CPU57.PMI:Performance_monitoring_interrupts
32538 ± 49% -99.6% 132.00 ± 97% interrupts.CPU57.TLB:TLB_shootdowns
88184 ± 34% -84.5% 13690 ± 97% interrupts.CPU58.CAL:Function_call_interrupts
66.33 ±125% -97.7% 1.50 ± 33% interrupts.CPU58.RES:Rescheduling_interrupts
96406 ± 34% -85.7% 13817 ± 98% interrupts.CPU58.TLB:TLB_shootdowns
150.67 ± 23% -56.2% 66.00 interrupts.CPU61.NMI:Non-maskable_interrupts
150.67 ± 23% -56.2% 66.00 interrupts.CPU61.PMI:Performance_monitoring_interrupts
59359 ± 14% +626.9% 431469 ± 6% interrupts.CPU62.CAL:Function_call_interrupts
366.33 ± 70% +522.2% 2279 ± 26% interrupts.CPU62.NMI:Non-maskable_interrupts
366.33 ± 70% +522.2% 2279 ± 26% interrupts.CPU62.PMI:Performance_monitoring_interrupts
64879 ± 14% +602.7% 455931 ± 6% interrupts.CPU62.TLB:TLB_shootdowns
151.00 ± 11% -37.7% 94.00 ± 5% interrupts.CPU64.NMI:Non-maskable_interrupts
151.00 ± 11% -37.7% 94.00 ± 5% interrupts.CPU64.PMI:Performance_monitoring_interrupts
13947 ± 64% -98.9% 147.50 ± 65% interrupts.CPU66.CAL:Function_call_interrupts
15318 ± 63% -99.3% 114.00 ±100% interrupts.CPU66.TLB:TLB_shootdowns
786.33 ±112% -88.7% 89.00 ± 26% interrupts.CPU67.NMI:Non-maskable_interrupts
786.33 ±112% -88.7% 89.00 ± 26% interrupts.CPU67.PMI:Performance_monitoring_interrupts
79917 ± 80% -99.9% 52.50 ± 6% interrupts.CPU68.CAL:Function_call_interrupts
212.67 ± 62% -59.1% 87.00 ± 22% interrupts.CPU68.NMI:Non-maskable_interrupts
212.67 ± 62% -59.1% 87.00 ± 22% interrupts.CPU68.PMI:Performance_monitoring_interrupts
53982 ± 65% -99.9% 58.50 ± 2% interrupts.CPU69.CAL:Function_call_interrupts
327.33 ± 41% -83.7% 53.50 ± 32% interrupts.CPU69.NMI:Non-maskable_interrupts
327.33 ± 41% -83.7% 53.50 ± 32% interrupts.CPU69.PMI:Performance_monitoring_interrupts
58974 ± 63% -100.0% 1.50 ±100% interrupts.CPU69.TLB:TLB_shootdowns
27087 ± 30% -96.1% 1051 ± 83% interrupts.CPU7.CAL:Function_call_interrupts
433.67 ± 33% -71.6% 123.00 ± 21% interrupts.CPU7.NMI:Non-maskable_interrupts
433.67 ± 33% -71.6% 123.00 ± 21% interrupts.CPU7.PMI:Performance_monitoring_interrupts
29611 ± 30% -97.5% 750.00 ±100% interrupts.CPU7.TLB:TLB_shootdowns
30032 ± 57% -99.6% 128.00 ± 57% interrupts.CPU71.CAL:Function_call_interrupts
32874 ± 57% -99.8% 77.50 ± 96% interrupts.CPU71.TLB:TLB_shootdowns
225415 ± 25% -100.0% 67.50 ± 14% interrupts.CPU72.CAL:Function_call_interrupts
735.33 ± 34% +135.9% 1734 ± 23% interrupts.CPU72.NMI:Non-maskable_interrupts
735.33 ± 34% +135.9% 1734 ± 23% interrupts.CPU72.PMI:Performance_monitoring_interrupts
243030 ± 25% -100.0% 0.50 ±100% interrupts.CPU72.TLB:TLB_shootdowns
267730 ± 75% -99.3% 1820 ± 97% interrupts.CPU75.CAL:Function_call_interrupts
290980 ± 76% -99.4% 1770 ±100% interrupts.CPU75.TLB:TLB_shootdowns
265672 ± 44% -90.3% 25882 ± 99% interrupts.CPU76.CAL:Function_call_interrupts
971.33 ± 37% -86.8% 128.50 ± 22% interrupts.CPU76.NMI:Non-maskable_interrupts
971.33 ± 37% -86.8% 128.50 ± 22% interrupts.CPU76.PMI:Performance_monitoring_interrupts
288037 ± 44% -90.5% 27254 ±100% interrupts.CPU76.TLB:TLB_shootdowns
259900 ± 63% -100.0% 64.50 ± 13% interrupts.CPU77.CAL:Function_call_interrupts
2126 ± 52% -95.6% 92.50 ± 21% interrupts.CPU77.NMI:Non-maskable_interrupts
2126 ± 52% -95.6% 92.50 ± 21% interrupts.CPU77.PMI:Performance_monitoring_interrupts
280644 ± 63% -100.0% 1.50 ±100% interrupts.CPU77.TLB:TLB_shootdowns
233432 ± 31% -100.0% 54.00 ± 9% interrupts.CPU78.CAL:Function_call_interrupts
2428 ± 50% -97.0% 73.50 ± 53% interrupts.CPU78.NMI:Non-maskable_interrupts
2428 ± 50% -97.0% 73.50 ± 53% interrupts.CPU78.PMI:Performance_monitoring_interrupts
226.00 ±110% -98.5% 3.50 ± 14% interrupts.CPU78.RES:Rescheduling_interrupts
253071 ± 32% -100.0% 0.50 ±100% interrupts.CPU78.TLB:TLB_shootdowns
252321 ± 17% -99.9% 311.50 ± 62% interrupts.CPU79.CAL:Function_call_interrupts
1040 ± 7% -85.9% 147.00 ± 12% interrupts.CPU79.NMI:Non-maskable_interrupts
1040 ± 7% -85.9% 147.00 ± 12% interrupts.CPU79.PMI:Performance_monitoring_interrupts
211.00 ±111% -94.3% 12.00 ± 83% interrupts.CPU79.RES:Rescheduling_interrupts
274447 ± 16% -99.9% 232.50 ± 98% interrupts.CPU79.TLB:TLB_shootdowns
109041 ± 28% -94.2% 6313 ± 98% interrupts.CPU8.CAL:Function_call_interrupts
122228 ± 32% -94.6% 6592 ± 99% interrupts.CPU8.TLB:TLB_shootdowns
261240 ± 44% -100.0% 53.00 ± 7% interrupts.CPU80.CAL:Function_call_interrupts
1136 ± 39% -94.9% 57.50 ± 42% interrupts.CPU80.NMI:Non-maskable_interrupts
1136 ± 39% -94.9% 57.50 ± 42% interrupts.CPU80.PMI:Performance_monitoring_interrupts
317.67 ±125% -99.7% 1.00 ±100% interrupts.CPU80.RES:Rescheduling_interrupts
281152 ± 43% -100.0% 2.00 ±100% interrupts.CPU80.TLB:TLB_shootdowns
273.00 ±130% -96.5% 9.50 ± 78% interrupts.CPU81.RES:Rescheduling_interrupts
190306 ± 52% -99.9% 106.50 ± 48% interrupts.CPU82.CAL:Function_call_interrupts
1698 ± 92% -94.6% 91.50 ± 23% interrupts.CPU82.NMI:Non-maskable_interrupts
1698 ± 92% -94.6% 91.50 ± 23% interrupts.CPU82.PMI:Performance_monitoring_interrupts
58.67 ± 95% -81.2% 11.00 ± 9% interrupts.CPU82.RES:Rescheduling_interrupts
206202 ± 52% -100.0% 1.50 ±100% interrupts.CPU82.TLB:TLB_shootdowns
318334 ± 17% -100.0% 55.00 ± 7% interrupts.CPU83.CAL:Function_call_interrupts
1359 ± 53% -90.9% 124.00 ± 15% interrupts.CPU83.NMI:Non-maskable_interrupts
1359 ± 53% -90.9% 124.00 ± 15% interrupts.CPU83.PMI:Performance_monitoring_interrupts
151.00 ±112% -96.7% 5.00 ± 60% interrupts.CPU83.RES:Rescheduling_interrupts
344952 ± 18% -100.0% 1.50 ± 33% interrupts.CPU83.TLB:TLB_shootdowns
1052 ± 31% -91.3% 91.50 ± 23% interrupts.CPU84.NMI:Non-maskable_interrupts
1052 ± 31% -91.3% 91.50 ± 23% interrupts.CPU84.PMI:Performance_monitoring_interrupts
63.33 ± 76% -92.9% 4.50 ± 55% interrupts.CPU84.RES:Rescheduling_interrupts
259296 ± 45% -100.0% 70.00 ± 5% interrupts.CPU85.CAL:Function_call_interrupts
132.67 ± 76% -98.1% 2.50 ± 60% interrupts.CPU85.RES:Rescheduling_interrupts
281595 ± 46% -100.0% 1.00 ±100% interrupts.CPU85.TLB:TLB_shootdowns
303.33 ±105% -98.7% 4.00 ±100% interrupts.CPU86.RES:Rescheduling_interrupts
173576 ± 25% -68.9% 53961 ± 93% interrupts.CPU87.CAL:Function_call_interrupts
1120 ± 31% -94.0% 67.00 ± 49% interrupts.CPU87.NMI:Non-maskable_interrupts
1120 ± 31% -94.0% 67.00 ± 49% interrupts.CPU87.PMI:Performance_monitoring_interrupts
188725 ± 27% -69.8% 57018 ± 93% interrupts.CPU87.TLB:TLB_shootdowns
197104 ± 41% -100.0% 52.00 ± 3% interrupts.CPU88.CAL:Function_call_interrupts
60.67 ± 92% -96.7% 2.00 interrupts.CPU88.RES:Rescheduling_interrupts
214605 ± 42% -100.0% 2.00 ± 50% interrupts.CPU88.TLB:TLB_shootdowns
229259 ± 46% -100.0% 58.00 ± 12% interrupts.CPU89.CAL:Function_call_interrupts
1911 ± 39% -95.3% 90.50 ± 18% interrupts.CPU89.NMI:Non-maskable_interrupts
1911 ± 39% -95.3% 90.50 ± 18% interrupts.CPU89.PMI:Performance_monitoring_interrupts
53.67 ± 95% -99.1% 0.50 ±100% interrupts.CPU89.RES:Rescheduling_interrupts
250104 ± 46% -100.0% 1.50 ± 33% interrupts.CPU89.TLB:TLB_shootdowns
40214 ± 37% -99.1% 354.00 ± 84% interrupts.CPU9.CAL:Function_call_interrupts
387.33 ±112% -87.5% 48.50 ± 29% interrupts.CPU9.NMI:Non-maskable_interrupts
387.33 ±112% -87.5% 48.50 ± 29% interrupts.CPU9.PMI:Performance_monitoring_interrupts
114.33 ±119% -98.3% 2.00 ±100% interrupts.CPU9.RES:Rescheduling_interrupts
43311 ± 38% -99.6% 169.50 ± 98% interrupts.CPU9.TLB:TLB_shootdowns
86000 ± 41% -99.9% 54.00 interrupts.CPU90.CAL:Function_call_interrupts
1558 ± 38% -94.6% 83.50 ± 13% interrupts.CPU90.NMI:Non-maskable_interrupts
1558 ± 38% -94.6% 83.50 ± 13% interrupts.CPU90.PMI:Performance_monitoring_interrupts
91513 ± 40% -100.0% 2.00 ± 50% interrupts.CPU90.TLB:TLB_shootdowns
107289 ± 56% -99.9% 56.00 ± 10% interrupts.CPU91.CAL:Function_call_interrupts
695.00 ± 46% -79.9% 140.00 ± 30% interrupts.CPU91.NMI:Non-maskable_interrupts
695.00 ± 46% -79.9% 140.00 ± 30% interrupts.CPU91.PMI:Performance_monitoring_interrupts
116833 ± 56% -100.0% 2.00 ± 50% interrupts.CPU91.TLB:TLB_shootdowns
174141 ± 53% -100.0% 53.00 ± 5% interrupts.CPU92.CAL:Function_call_interrupts
520.33 ± 35% -82.3% 92.00 ± 10% interrupts.CPU92.NMI:Non-maskable_interrupts
520.33 ± 35% -82.3% 92.00 ± 10% interrupts.CPU92.PMI:Performance_monitoring_interrupts
189666 ± 54% -100.0% 2.50 ± 60% interrupts.CPU92.TLB:TLB_shootdowns
110083 ± 63% -98.3% 1857 ± 96% interrupts.CPU93.CAL:Function_call_interrupts
1136 ± 56% -93.5% 73.50 ± 7% interrupts.CPU93.NMI:Non-maskable_interrupts
1136 ± 56% -93.5% 73.50 ± 7% interrupts.CPU93.PMI:Performance_monitoring_interrupts
118874 ± 62% -98.5% 1797 ± 99% interrupts.CPU93.TLB:TLB_shootdowns
58617 ± 40% -96.9% 1813 ± 96% interrupts.CPU94.CAL:Function_call_interrupts
63251 ± 40% -97.2% 1756 ± 99% interrupts.CPU94.TLB:TLB_shootdowns
1872 ± 27% -95.7% 80.00 interrupts.CPU95.NMI:Non-maskable_interrupts
1872 ± 27% -95.7% 80.00 interrupts.CPU95.PMI:Performance_monitoring_interrupts
83.00 ±103% -86.1% 11.50 ± 56% interrupts.CPU95.RES:Rescheduling_interrupts
7935 ± 72% -82.2% 1414 ± 10% interrupts.RES:Rescheduling_interrupts
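For readers post-processing these tables, each percentage-change row above follows the pattern "base [± stddev%] change% head [± stddev%] metric". Below is a minimal illustrative sketch (not part of lkp-tests; the helper name and field layout are assumptions) that parses one such row; rows whose middle column is an absolute delta rather than a percentage are simply skipped.

```python
import re

# Matches rows like: "7935 ± 72%  -82.2%  1414 ± 10%  interrupts.RES:..."
# The "± N%" stddev annotations are optional (omitted when stddev is 0).
ROW = re.compile(
    r"^\s*(?P<base>[\d.e+]+)\s*(?:±\s*(?P<base_sd>\d+)%)?\s*"
    r"(?P<change>[+-][\d.]+)%\s*"
    r"(?P<head>[\d.e+]+)\s*(?:±\s*(?P<head_sd>\d+)%)?\s*"
    r"(?P<metric>\S+)\s*$"
)

def parse_row(line):
    """Parse a single comparison row; return None for non-matching lines
    (headers, separators, or rows with an absolute-delta middle column)."""
    m = ROW.match(line)
    if not m:
        return None
    d = m.groupdict()
    return {
        "metric": d["metric"],
        "base": float(d["base"]),
        "head": float(d["head"]),
        "change_pct": float(d["change"]),
        "base_stddev_pct": int(d["base_sd"]) if d["base_sd"] else 0,
        "head_stddev_pct": int(d["head_sd"]) if d["head_sd"] else 0,
    }

row = parse_row(
    "   7935 ± 72%     -82.2%   1414 ± 10%  "
    "interrupts.RES:Rescheduling_interrupts"
)
```

Large stddev percentages (e.g. ±100%) flag runs with high variance, so a filter on `base_stddev_pct`/`head_stddev_pct` is a reasonable first step before treating a change as significant.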
***************************************************************************************************
lkp-csl-2ap1: 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_ssd/nr_task/rootfs/runtime/tbox_group/test/testcase/thp_defrag/thp_enabled/ucode:
gcc-9/performance/x86_64-rhel-8.3/1/32/debian-10.4-x86_64-20200603.cgz/300/lkp-csl-2ap1/swap-w-rand-mt/vm-scalability/always/always/0x4003003
commit:
3852f6768e ("mm/swapcache: support to handle the shadow entries")
aae466b005 ("mm/swap: implement workingset detection for anonymous LRU")
3852f6768ede542e aae466b0052e1888edd1d7f473d
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:2 50% 1:4 dmesg.BUG:kernel_NULL_pointer_dereference,address
:2 50% 1:4 dmesg.Kernel_panic-not_syncing:Fatal_exception
:2 50% 1:4 dmesg.Oops:#[##]
:2 50% 1:4 dmesg.RIP:kobject_del
:2 50% 1:4 dmesg.RIP:native_smp_send_reschedule
:2 50% 1:4 dmesg.WARNING:at_arch/x86/kernel/apic/ipi.c:#native_smp_send_reschedule
0:2 64% 1:4 perf-profile.children.cycles-pp.error_entry
0:2 16% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
1.08 ± 5% +92.8% 2.08 ± 14% vm-scalability.free_time
6536 -1.3% 6450 vm-scalability.median
320.31 +5.2% 337.04 vm-scalability.time.elapsed_time
320.31 +5.2% 337.04 vm-scalability.time.elapsed_time.max
86351 ± 12% -35.3% 55851 ± 25% vm-scalability.time.involuntary_context_switches
873.12 ± 4% -22.2% 679.71 ± 15% vm-scalability.time.system_time
3.09 -8.9% 2.81 ± 5% iostat.cpu.system
36054075 -46.3% 19366911 ± 6% cpuidle.POLL.time
8107090 -45.8% 4394214 ± 7% cpuidle.POLL.usage
0.13 -0.0 0.11 ± 10% mpstat.cpu.all.soft%
1.76 ± 4% -0.4 1.39 ± 16% mpstat.cpu.all.sys%
3989891 ± 35% +199.8% 11962962 ± 8% numa-numastat.node0.local_node
4021317 ± 34% +197.8% 11974410 ± 8% numa-numastat.node0.numa_hit
11809980 ± 38% -62.8% 4394489 ± 83% numa-numastat.node3.local_node
11835379 ± 38% -62.7% 4417959 ± 83% numa-numastat.node3.numa_hit
1044721 -14.8% 890319 ± 6% vmstat.io.bo
1117642 +37.0% 1530674 vmstat.memory.cache
11365041 +45.4% 16527872 ± 8% vmstat.memory.free
1.641e+08 -14.5% 1.403e+08 ± 5% vmstat.memory.swpd
1044714 -14.8% 890313 ± 6% vmstat.swap.so
1749 ± 5% -28.7% 1246 ± 6% perf-stat.i.cpu-migrations
0.06 +0.1 0.13 ± 15% perf-stat.i.dTLB-load-miss-rate%
1136787 +80.8% 2055617 ± 14% perf-stat.i.dTLB-load-misses
6103425 -5.0% 5795625 ± 5% perf-stat.i.iTLB-loads
0.37 -6.6% 0.35 ± 5% perf-stat.i.ipc
1.20 -6.6% 1.12 ± 3% perf-stat.i.metric.K/sec
7855002 -16.6% 6547298 ± 16% perf-stat.i.node-load-misses
0.05 ± 3% +0.1 0.10 ± 19% perf-stat.overall.dTLB-load-miss-rate%
1751 ± 5% -28.9% 1245 ± 6% perf-stat.ps.cpu-migrations
1129808 +81.1% 2046346 ± 14% perf-stat.ps.dTLB-load-misses
6074224 -5.0% 5773002 ± 5% perf-stat.ps.iTLB-loads
7853233 -16.8% 6531571 ± 16% perf-stat.ps.node-load-misses
74363186 ± 7% -44.9% 40944206 ± 5% meminfo.Active
74362739 ± 7% -44.9% 40943748 ± 5% meminfo.Active(anon)
13818 ± 8% +67.4% 23128 ± 25% meminfo.CmaFree
1.088e+08 ± 4% +25.7% 1.368e+08 meminfo.Inactive
1.088e+08 ± 4% +25.7% 1.368e+08 meminfo.Inactive(anon)
137327 +296.6% 544704 ± 5% meminfo.KReclaimable
10247553 ± 4% +51.5% 15527799 ± 9% meminfo.MemAvailable
10886951 ± 3% +46.7% 15976591 ± 9% meminfo.MemFree
137327 +296.6% 544704 ± 5% meminfo.SReclaimable
11874 +54.2% 18312 ± 14% meminfo.Shmem
452079 +89.7% 857727 ± 3% meminfo.Slab
135711 ± 6% +108.7% 283289 ± 41% meminfo.SwapCached
4123 ± 20% -51.3% 2007 ± 29% meminfo.Writeback
34270 ± 5% -9.5% 31010 ± 3% slabinfo.dmaengine-unmap-16.active_objs
824.00 ± 5% -9.4% 746.25 ± 3% slabinfo.dmaengine-unmap-16.active_slabs
34644 ± 5% -9.5% 31355 ± 3% slabinfo.dmaengine-unmap-16.num_objs
824.00 ± 5% -9.4% 746.25 ± 3% slabinfo.dmaengine-unmap-16.num_slabs
815.00 +32.9% 1082 ± 3% slabinfo.mnt_cache.active_objs
815.00 +32.9% 1082 ± 3% slabinfo.mnt_cache.num_objs
15050 ± 2% -10.3% 13502 ± 4% slabinfo.proc_inode_cache.num_objs
89702 +771.6% 781844 ± 5% slabinfo.radix_tree_node.active_objs
1619 +799.4% 14566 ± 5% slabinfo.radix_tree_node.active_slabs
90726 +799.2% 815771 ± 5% slabinfo.radix_tree_node.num_objs
1619 +799.4% 14566 ± 5% slabinfo.radix_tree_node.num_slabs
7382 ± 2% +19.8% 8844 ± 5% slabinfo.shmem_inode_cache.active_objs
7382 ± 2% +19.8% 8844 ± 5% slabinfo.shmem_inode_cache.num_objs
108.44 ± 5% -61.6% 41.65 ± 24% sched_debug.cfs_rq:/.exec_clock.min
95883 ± 3% -22.1% 74741 ± 15% sched_debug.cfs_rq:/.min_vruntime.avg
6602 ± 59% +1084.4% 78195 ± 34% sched_debug.cfs_rq:/.spread0.max
1562922 ± 16% -12.8% 1362192 ± 12% sched_debug.cpu.avg_idle.max
0.00 ± 5% -9.0% 0.00 ± 3% sched_debug.cpu.next_balance.stddev
3953 ± 34% -53.7% 1828 ± 8% sched_debug.cpu.nr_switches.min
109192 ± 6% +34.2% 146571 ± 11% sched_debug.cpu.nr_switches.stddev
15.54 ± 3% -10.5% 13.90 ± 4% sched_debug.cpu.nr_uninterruptible.stddev
2843 ± 45% -72.3% 787.31 ± 16% sched_debug.cpu.sched_count.min
109111 ± 6% +34.2% 146397 ± 11% sched_debug.cpu.sched_count.stddev
1289 ± 49% -76.3% 305.32 ± 19% sched_debug.cpu.sched_goidle.min
52858 ± 7% +36.6% 72229 ± 11% sched_debug.cpu.sched_goidle.stddev
322.33 ± 5% -19.7% 258.99 ± 14% sched_debug.cpu.ttwu_count.min
271.42 -15.5% 229.38 ± 10% sched_debug.cpu.ttwu_local.min
55848 ± 6% +31.3% 73313 ± 11% sched_debug.cpu.ttwu_local.stddev
38086 +16.9% 44526 ± 7% softirqs.CPU1.SCHED
364.50 ± 44% +2507.6% 9504 ± 93% softirqs.CPU10.NET_RX
68420 -31.7% 46741 ± 33% softirqs.CPU152.RCU
66729 ± 9% -28.6% 47676 ± 35% softirqs.CPU158.RCU
71681 ± 11% -35.7% 46123 ± 35% softirqs.CPU160.RCU
69240 ± 9% -31.6% 47350 ± 35% softirqs.CPU161.RCU
72321 ± 8% -34.1% 47673 ± 35% softirqs.CPU162.RCU
66391 ± 14% -28.5% 47457 ± 36% softirqs.CPU165.RCU
63766 ± 13% -25.6% 47441 ± 35% softirqs.CPU166.RCU
67404 ± 12% -32.1% 45782 ± 37% softirqs.CPU169.RCU
69859 ± 13% -34.5% 45756 ± 35% softirqs.CPU170.RCU
67546 ± 9% -31.2% 46484 ± 36% softirqs.CPU171.RCU
68951 ± 9% -32.4% 46621 ± 37% softirqs.CPU173.RCU
67616 ± 7% -31.7% 46173 ± 37% softirqs.CPU174.RCU
39793 +10.8% 44074 ± 2% softirqs.CPU176.SCHED
67016 ± 19% -27.9% 48329 ± 38% softirqs.CPU177.RCU
67339 ± 15% -28.7% 48008 ± 36% softirqs.CPU180.RCU
68397 ± 11% -29.7% 48092 ± 37% softirqs.CPU181.RCU
70026 ± 10% -30.9% 48411 ± 38% softirqs.CPU182.RCU
70803 ± 2% -34.3% 46509 ± 35% softirqs.CPU186.RCU
39877 +10.7% 44144 ± 2% softirqs.CPU188.SCHED
36241 ± 4% +17.8% 42697 softirqs.CPU24.SCHED
40263 -23.0% 30987 ± 11% softirqs.CPU38.SCHED
36666 ± 6% +19.6% 43839 ± 2% softirqs.CPU56.SCHED
36812 +17.3% 43176 ± 3% softirqs.CPU75.SCHED
4239 ± 12% +367.7% 19827 ± 52% softirqs.NET_RX
115809 ± 48% +2278.9% 2754993 ± 26% proc-vmstat.compact_daemon_migrate_scanned
11776 -50.7% 5808 ± 22% proc-vmstat.compact_success
18598991 ± 7% -44.9% 10248481 ± 4% proc-vmstat.nr_active_anon
45770728 -3.0% 44393376 proc-vmstat.nr_anon_pages
85912 -8.6% 78555 ± 2% proc-vmstat.nr_anon_transparent_hugepages
253989 +49.2% 378960 ± 8% proc-vmstat.nr_dirty_background_threshold
508600 +49.2% 758848 ± 8% proc-vmstat.nr_dirty_threshold
279185 +13.8% 317589 ± 9% proc-vmstat.nr_file_pages
3457 ± 6% +66.9% 5770 ± 25% proc-vmstat.nr_free_cma
2713646 +46.1% 3964193 ± 8% proc-vmstat.nr_free_pages
27207952 ± 4% +25.8% 34220425 proc-vmstat.nr_inactive_anon
90.50 ± 38% -42.3% 52.25 ± 71% proc-vmstat.nr_inactive_file
3204 ± 10% -17.1% 2656 ± 9% proc-vmstat.nr_isolated_anon
2957 +54.4% 4564 ± 15% proc-vmstat.nr_shmem
34261 +297.3% 136110 ± 4% proc-vmstat.nr_slab_reclaimable
724.50 -35.8% 465.00 ± 29% proc-vmstat.nr_writeback
18599012 ± 7% -44.9% 10248642 ± 4% proc-vmstat.nr_zone_active_anon
27208816 ± 4% +25.8% 34221203 proc-vmstat.nr_zone_inactive_anon
90.50 ± 38% -42.8% 51.75 ± 72% proc-vmstat.nr_zone_inactive_file
703.00 -35.7% 452.00 ± 30% proc-vmstat.nr_zone_write_pending
3.385e+08 -10.2% 3.038e+08 ± 7% proc-vmstat.pgpgout
83774568 -10.9% 74674345 ± 7% proc-vmstat.pgrotated
2.043e+08 ± 8% -26.8% 1.496e+08 ± 12% proc-vmstat.pgscan_direct
84560845 -10.4% 75806985 ± 7% proc-vmstat.pgsteal_anon
44538776 -22.5% 34526377 ± 3% proc-vmstat.pgsteal_direct
84619799 -10.2% 75956512 ± 7% proc-vmstat.pswpout
121047 ± 8% -26.4% 89044 ± 22% proc-vmstat.slabs_scanned
615.00 ± 6% +96.7% 1209 ± 37% proc-vmstat.swap_ra_hit
207587 -10.2% 186311 ± 3% proc-vmstat.thp_fault_alloc
19001220 ± 6% -48.3% 9815424 ± 12% numa-meminfo.node0.Active
19001053 ± 6% -48.3% 9815137 ± 12% numa-meminfo.node0.Active(anon)
26175277 ± 5% +29.9% 34004831 ± 4% numa-meminfo.node0.Inactive
26175253 ± 5% +29.9% 34004819 ± 4% numa-meminfo.node0.Inactive(anon)
43040 ± 4% +230.6% 142276 ± 17% numa-meminfo.node0.KReclaimable
9504 ± 3% -18.3% 7765 ± 5% numa-meminfo.node0.KernelStack
5901 +23.0% 7256 ± 20% numa-meminfo.node0.Mapped
2684201 ± 7% +53.8% 4128329 ± 6% numa-meminfo.node0.MemFree
206184 ± 4% -27.7% 149049 ± 14% numa-meminfo.node0.PageTables
43040 ± 4% +230.6% 142276 ± 17% numa-meminfo.node0.SReclaimable
110348 ± 2% -18.6% 89858 ± 3% numa-meminfo.node0.SUnreclaim
5091 ± 10% -64.8% 1790 ±116% numa-meminfo.node0.Shmem
153389 +51.3% 232135 ± 10% numa-meminfo.node0.Slab
18726155 ± 7% -48.6% 9622670 ± 17% numa-meminfo.node1.Active
18725898 ± 7% -48.6% 9622633 ± 17% numa-meminfo.node1.Active(anon)
27336363 ± 4% +29.3% 35337382 ± 5% numa-meminfo.node1.Inactive
27336247 ± 4% +29.3% 35337252 ± 5% numa-meminfo.node1.Inactive(anon)
30936 ± 9% +410.8% 158013 ± 23% numa-meminfo.node1.KReclaimable
30936 ± 9% +410.8% 158013 ± 23% numa-meminfo.node1.SReclaimable
102012 ± 4% +124.8% 229332 ± 16% numa-meminfo.node1.Slab
18780713 ± 8% -41.5% 10981759 ± 24% numa-meminfo.node2.Active
18780699 ± 8% -41.5% 10981704 ± 24% numa-meminfo.node2.Active(anon)
260006 +18.5% 308084 ± 11% numa-meminfo.node2.FilePages
27126472 ± 5% +23.7% 33567811 ± 6% numa-meminfo.node2.Inactive
27126417 ± 5% +23.7% 33567753 ± 6% numa-meminfo.node2.Inactive(anon)
31525 ± 5% +347.0% 140930 ± 22% numa-meminfo.node2.KReclaimable
2856577 ± 3% +43.3% 4094648 ± 9% numa-meminfo.node2.MemFree
31525 ± 5% +347.0% 140930 ± 22% numa-meminfo.node2.SReclaimable
94052 ± 2% +124.9% 211533 ± 13% numa-meminfo.node2.Slab
1071 ± 2% -48.8% 548.25 ± 65% numa-meminfo.node2.Writeback
17886451 ± 8% -41.1% 10536003 ± 19% numa-meminfo.node3.Active
17886443 ± 8% -41.1% 10535924 ± 19% numa-meminfo.node3.Active(anon)
28171067 ± 4% +20.1% 33819890 ± 6% numa-meminfo.node3.Inactive
28170898 ± 4% +20.1% 33819879 ± 6% numa-meminfo.node3.Inactive(anon)
31804 +225.7% 103572 ± 21% numa-meminfo.node3.KReclaimable
10075 ± 6% -28.9% 7162 ± 25% numa-meminfo.node3.Mapped
2651367 ± 9% +58.4% 4199157 ± 15% numa-meminfo.node3.MemFree
31804 +225.7% 103572 ± 21% numa-meminfo.node3.SReclaimable
102638 +80.1% 184817 ± 8% numa-meminfo.node3.Slab
1103 ± 2% -62.4% 415.25 ± 50% numa-meminfo.node3.Writeback
4750797 ± 6% -48.4% 2453627 ± 12% numa-vmstat.node0.nr_active_anon
675471 ± 5% +52.6% 1030659 ± 7% numa-vmstat.node0.nr_free_pages
6538855 ± 5% +30.0% 8502807 ± 4% numa-vmstat.node0.nr_inactive_anon
9503 ± 3% -18.2% 7771 ± 5% numa-vmstat.node0.nr_kernel_stack
51494 ± 4% -27.6% 37274 ± 14% numa-vmstat.node0.nr_page_table_pages
1272 ± 11% -64.8% 448.00 ±116% numa-vmstat.node0.nr_shmem
10750 ± 4% +230.9% 35572 ± 17% numa-vmstat.node0.nr_slab_reclaimable
27587 ± 2% -18.6% 22460 ± 3% numa-vmstat.node0.nr_slab_unreclaimable
1543166 ± 20% +183.1% 4368165 ± 11% numa-vmstat.node0.nr_vmscan_write
1542970 ± 20% +183.1% 4367994 ± 11% numa-vmstat.node0.nr_written
4750809 ± 6% -48.4% 2453693 ± 12% numa-vmstat.node0.nr_zone_active_anon
6538943 ± 5% +30.0% 8502999 ± 4% numa-vmstat.node0.nr_zone_inactive_anon
924215 ± 53% +177.0% 2560038 ± 48% numa-vmstat.node0.numa_foreign
2217911 ± 29% +154.8% 5650919 ± 7% numa-vmstat.node0.numa_hit
2155706 ± 31% +161.5% 5637333 ± 7% numa-vmstat.node0.numa_local
4674978 ± 7% -48.4% 2411058 ± 16% numa-vmstat.node1.nr_active_anon
63.50 ± 62% -85.8% 9.00 ±132% numa-vmstat.node1.nr_active_file
6836617 ± 4% +29.1% 8824282 ± 5% numa-vmstat.node1.nr_inactive_anon
7742 ± 9% +410.4% 39516 ± 23% numa-vmstat.node1.nr_slab_reclaimable
250.50 ± 6% -37.0% 157.75 ± 34% numa-vmstat.node1.nr_writeback
4674967 ± 7% -48.4% 2411093 ± 16% numa-vmstat.node1.nr_zone_active_anon
63.50 ± 62% -85.8% 9.00 ±132% numa-vmstat.node1.nr_zone_active_file
6836844 ± 4% +29.1% 8824484 ± 5% numa-vmstat.node1.nr_zone_inactive_anon
244.50 ± 5% -37.0% 154.00 ± 35% numa-vmstat.node1.nr_zone_write_pending
575290 ± 2% +67.1% 961547 ± 29% numa-vmstat.node1.numa_foreign
4700028 ± 8% -41.6% 2746294 ± 24% numa-vmstat.node2.nr_active_anon
65049 +18.2% 76893 ± 11% numa-vmstat.node2.nr_file_pages
718878 ± 4% +42.7% 1025683 ± 8% numa-vmstat.node2.nr_free_pages
6772066 ± 5% +23.9% 8389077 ± 6% numa-vmstat.node2.nr_inactive_anon
7859 ± 5% +348.7% 35266 ± 22% numa-vmstat.node2.nr_slab_reclaimable
234.00 ± 10% -50.6% 115.50 ± 61% numa-vmstat.node2.nr_writeback
4700031 ± 8% -41.6% 2746330 ± 24% numa-vmstat.node2.nr_zone_active_anon
6772290 ± 5% +23.9% 8389269 ± 6% numa-vmstat.node2.nr_zone_inactive_anon
226.00 ± 10% -51.0% 110.75 ± 61% numa-vmstat.node2.nr_zone_write_pending
4466820 ± 8% -41.0% 2635768 ± 19% numa-vmstat.node3.nr_active_anon
3487 ± 7% +66.9% 5820 ± 25% numa-vmstat.node3.nr_free_cma
667183 ± 11% +57.7% 1052209 ± 15% numa-vmstat.node3.nr_free_pages
7043232 ± 4% +20.0% 8450818 ± 6% numa-vmstat.node3.nr_inactive_anon
2519 ± 6% -27.7% 1821 ± 24% numa-vmstat.node3.nr_mapped
7951 +225.7% 25892 ± 21% numa-vmstat.node3.nr_slab_reclaimable
4244224 ± 37% -48.6% 2180229 ± 69% numa-vmstat.node3.nr_vmscan_write
296.00 ± 17% -62.2% 111.75 ± 59% numa-vmstat.node3.nr_writeback
4243937 ± 37% -48.6% 2180124 ± 69% numa-vmstat.node3.nr_written
4466803 ± 8% -41.0% 2635791 ± 19% numa-vmstat.node3.nr_zone_active_anon
7043545 ± 4% +20.0% 8450932 ± 6% numa-vmstat.node3.nr_zone_inactive_anon
290.50 ± 18% -62.7% 108.25 ± 60% numa-vmstat.node3.nr_zone_write_pending
4851710 ± 38% -47.5% 2546955 ± 72% numa-vmstat.node3.numa_hit
4744738 ± 39% -48.5% 2442319 ± 76% numa-vmstat.node3.numa_local
691.00 ± 47% +2554.2% 18340 ± 92% interrupts.31:PCI-MSI.524289-edge.eth0-TxRx-0
278.50 ± 50% +165.7% 740.00 ± 48% interrupts.CPU0.NMI:Non-maskable_interrupts
278.50 ± 50% +165.7% 740.00 ± 48% interrupts.CPU0.PMI:Performance_monitoring_interrupts
109.00 ± 22% +597.7% 760.50 ±107% interrupts.CPU1.NMI:Non-maskable_interrupts
109.00 ± 22% +597.7% 760.50 ±107% interrupts.CPU1.PMI:Performance_monitoring_interrupts
691.00 ± 47% +2554.2% 18340 ± 92% interrupts.CPU10.31:PCI-MSI.524289-edge.eth0-TxRx-0
143.50 ± 20% +227.0% 469.25 ± 24% interrupts.CPU10.NMI:Non-maskable_interrupts
143.50 ± 20% +227.0% 469.25 ± 24% interrupts.CPU10.PMI:Performance_monitoring_interrupts
107.00 ± 29% +70.1% 182.00 ± 25% interrupts.CPU100.NMI:Non-maskable_interrupts
107.00 ± 29% +70.1% 182.00 ± 25% interrupts.CPU100.PMI:Performance_monitoring_interrupts
76888 ± 54% +499.7% 461102 ± 57% interrupts.CPU101.CAL:Function_call_interrupts
90857 ± 55% +504.5% 549258 ± 60% interrupts.CPU101.TLB:TLB_shootdowns
202.00 ± 50% -63.0% 74.75 ± 27% interrupts.CPU102.RES:Rescheduling_interrupts
287.50 ± 48% -71.0% 83.25 ± 46% interrupts.CPU103.RES:Rescheduling_interrupts
238.00 ± 36% -51.3% 116.00 ± 52% interrupts.CPU104.RES:Rescheduling_interrupts
221.50 ± 25% -51.2% 108.00 ± 45% interrupts.CPU105.RES:Rescheduling_interrupts
151.50 ± 25% +237.8% 511.75 ± 23% interrupts.CPU106.NMI:Non-maskable_interrupts
151.50 ± 25% +237.8% 511.75 ± 23% interrupts.CPU106.PMI:Performance_monitoring_interrupts
277.50 ± 42% -58.9% 114.00 ± 44% interrupts.CPU106.RES:Rescheduling_interrupts
276.50 ± 42% -58.6% 114.50 ± 30% interrupts.CPU107.RES:Rescheduling_interrupts
275.00 ± 28% -59.2% 112.25 ± 25% interrupts.CPU108.RES:Rescheduling_interrupts
290.00 ± 22% -55.7% 128.50 ± 24% interrupts.CPU109.RES:Rescheduling_interrupts
307.50 ± 60% -74.8% 77.50 ± 13% interrupts.CPU11.RES:Rescheduling_interrupts
259.50 ± 17% -49.0% 132.25 ± 26% interrupts.CPU110.RES:Rescheduling_interrupts
282.00 ± 32% -45.4% 154.00 ± 45% interrupts.CPU111.RES:Rescheduling_interrupts
62672 ± 41% +419.3% 325467 ± 45% interrupts.CPU112.CAL:Function_call_interrupts
76545 ± 41% +394.7% 378647 ± 46% interrupts.CPU112.TLB:TLB_shootdowns
218.50 ± 27% +182.3% 616.75 ± 49% interrupts.CPU116.NMI:Non-maskable_interrupts
218.50 ± 27% +182.3% 616.75 ± 49% interrupts.CPU116.PMI:Performance_monitoring_interrupts
66666 ± 26% +459.0% 372656 ± 70% interrupts.CPU118.CAL:Function_call_interrupts
78025 ± 28% +456.6% 434295 ± 72% interrupts.CPU118.TLB:TLB_shootdowns
59429 ± 63% +297.0% 235937 ± 32% interrupts.CPU119.CAL:Function_call_interrupts
68515 ± 64% +293.8% 269805 ± 34% interrupts.CPU119.TLB:TLB_shootdowns
219.50 ± 28% -48.6% 112.75 ± 41% interrupts.CPU12.RES:Rescheduling_interrupts
101.00 +212.1% 315.25 ± 31% interrupts.CPU124.RES:Rescheduling_interrupts
110.50 ± 19% +273.5% 412.75 ± 44% interrupts.CPU125.RES:Rescheduling_interrupts
94.50 ± 8% +256.3% 336.75 ± 84% interrupts.CPU126.NMI:Non-maskable_interrupts
94.50 ± 8% +256.3% 336.75 ± 84% interrupts.CPU126.PMI:Performance_monitoring_interrupts
120.00 ± 14% +271.7% 446.00 ± 30% interrupts.CPU126.RES:Rescheduling_interrupts
131.50 ± 23% +131.2% 304.00 ± 33% interrupts.CPU127.RES:Rescheduling_interrupts
133.00 ± 42% +119.9% 292.50 ± 25% interrupts.CPU128.RES:Rescheduling_interrupts
171.50 ± 21% +62.4% 278.50 ± 24% interrupts.CPU130.RES:Rescheduling_interrupts
241.50 +86.4% 450.25 ± 50% interrupts.CPU132.RES:Rescheduling_interrupts
251.50 ± 2% +67.6% 421.50 ± 26% interrupts.CPU134.RES:Rescheduling_interrupts
135.50 ± 23% -41.9% 78.75 ± 22% interrupts.CPU14.RES:Rescheduling_interrupts
36.50 ± 6% +217.1% 115.75 ± 51% interrupts.CPU141.RES:Rescheduling_interrupts
16.50 ± 27% +1084.8% 195.50 ± 80% interrupts.CPU142.RES:Rescheduling_interrupts
152.50 ± 4% +54.6% 235.75 ± 34% interrupts.CPU144.NMI:Non-maskable_interrupts
152.50 ± 4% +54.6% 235.75 ± 34% interrupts.CPU144.PMI:Performance_monitoring_interrupts
608.50 -62.2% 230.25 ± 34% interrupts.CPU145.NMI:Non-maskable_interrupts
608.50 -62.2% 230.25 ± 34% interrupts.CPU145.PMI:Performance_monitoring_interrupts
7770 ± 15% +307.5% 31664 ±104% interrupts.CPU15.CAL:Function_call_interrupts
7746 ± 14% +451.0% 42681 ± 81% interrupts.CPU15.TLB:TLB_shootdowns
179321 +37.6% 246659 ± 14% interrupts.CPU154.CAL:Function_call_interrupts
474.50 ± 5% -26.4% 349.25 ± 24% interrupts.CPU154.RES:Rescheduling_interrupts
223397 ± 3% +27.8% 285599 ± 16% interrupts.CPU154.TLB:TLB_shootdowns
158.50 ± 6% +321.6% 668.25 ±123% interrupts.CPU165.NMI:Non-maskable_interrupts
158.50 ± 6% +321.6% 668.25 ±123% interrupts.CPU165.PMI:Performance_monitoring_interrupts
155.00 ± 3% +730.2% 1286 ± 65% interrupts.CPU167.NMI:Non-maskable_interrupts
155.00 ± 3% +730.2% 1286 ± 65% interrupts.CPU167.PMI:Performance_monitoring_interrupts
227937 ± 38% -72.5% 62775 ±112% interrupts.CPU169.CAL:Function_call_interrupts
260271 ± 38% -70.9% 75774 ±113% interrupts.CPU169.TLB:TLB_shootdowns
231108 ± 29% -75.3% 57057 ± 92% interrupts.CPU171.CAL:Function_call_interrupts
352.50 ± 46% -75.8% 85.25 ±104% interrupts.CPU171.RES:Rescheduling_interrupts
263976 ± 31% -76.1% 62985 ± 93% interrupts.CPU171.TLB:TLB_shootdowns
249893 ± 22% -69.3% 76666 ± 93% interrupts.CPU173.CAL:Function_call_interrupts
313.50 ± 25% -64.0% 112.75 ± 85% interrupts.CPU173.RES:Rescheduling_interrupts
295531 ± 25% -68.0% 94592 ± 95% interrupts.CPU173.TLB:TLB_shootdowns
368.00 ± 14% -60.9% 143.75 ± 75% interrupts.CPU174.RES:Rescheduling_interrupts
423.00 -57.3% 180.50 ± 73% interrupts.CPU175.RES:Rescheduling_interrupts
550.00 -71.7% 155.50 ± 90% interrupts.CPU176.RES:Rescheduling_interrupts
528.50 ± 5% -67.0% 174.25 ±117% interrupts.CPU178.RES:Rescheduling_interrupts
225636 ± 32% -64.7% 79543 ±105% interrupts.CPU179.CAL:Function_call_interrupts
402.00 ± 32% -49.6% 202.75 ± 34% interrupts.CPU179.NMI:Non-maskable_interrupts
402.00 ± 32% -49.6% 202.75 ± 34% interrupts.CPU179.PMI:Performance_monitoring_interrupts
501.50 ± 3% -64.1% 180.25 ± 98% interrupts.CPU179.RES:Rescheduling_interrupts
266959 ± 34% -65.1% 93250 ±101% interrupts.CPU179.TLB:TLB_shootdowns
420.50 ± 12% -54.6% 191.00 ± 60% interrupts.CPU180.RES:Rescheduling_interrupts
460.50 -59.6% 186.25 ± 85% interrupts.CPU181.RES:Rescheduling_interrupts
249398 ± 34% -50.9% 122544 ± 41% interrupts.CPU182.CAL:Function_call_interrupts
298945 ± 36% -51.9% 143906 ± 42% interrupts.CPU182.TLB:TLB_shootdowns
484.00 ± 24% -54.4% 220.50 ±101% interrupts.CPU183.RES:Rescheduling_interrupts
228.50 ± 4% -65.2% 79.50 ± 80% interrupts.CPU184.RES:Rescheduling_interrupts
237.50 -32.1% 161.25 ± 40% interrupts.CPU186.NMI:Non-maskable_interrupts
237.50 -32.1% 161.25 ± 40% interrupts.CPU186.PMI:Performance_monitoring_interrupts
119.50 ± 20% -51.9% 57.50 ± 59% interrupts.CPU187.RES:Rescheduling_interrupts
100.50 ± 11% -61.4% 38.75 ± 62% interrupts.CPU188.RES:Rescheduling_interrupts
25279 ± 55% -84.0% 4034 ±108% interrupts.CPU19.CAL:Function_call_interrupts
154.50 ± 18% +299.7% 617.50 ± 46% interrupts.CPU19.NMI:Non-maskable_interrupts
154.50 ± 18% +299.7% 617.50 ± 46% interrupts.CPU19.PMI:Performance_monitoring_interrupts
27988 ± 59% -77.4% 6335 ±162% interrupts.CPU19.TLB:TLB_shootdowns
78795 ± 17% -66.5% 26361 ±132% interrupts.CPU191.CAL:Function_call_interrupts
89793 ± 17% -69.3% 27599 ±139% interrupts.CPU191.TLB:TLB_shootdowns
136.50 ± 3% +701.1% 1093 ± 69% interrupts.CPU2.NMI:Non-maskable_interrupts
136.50 ± 3% +701.1% 1093 ± 69% interrupts.CPU2.PMI:Performance_monitoring_interrupts
264.00 -62.9% 98.00 ± 42% interrupts.CPU2.RES:Rescheduling_interrupts
190.50 ± 16% +179.0% 531.50 ± 42% interrupts.CPU20.NMI:Non-maskable_interrupts
190.50 ± 16% +179.0% 531.50 ± 42% interrupts.CPU20.PMI:Performance_monitoring_interrupts
85184 ± 18% -81.5% 15741 ±122% interrupts.CPU22.CAL:Function_call_interrupts
97815 ± 17% -78.7% 20803 ±107% interrupts.CPU22.TLB:TLB_shootdowns
36354 ± 30% -80.5% 7107 ±136% interrupts.CPU23.CAL:Function_call_interrupts
41265 ± 31% -83.2% 6931 ±168% interrupts.CPU23.TLB:TLB_shootdowns
187656 ± 14% -61.3% 72695 ± 97% interrupts.CPU24.CAL:Function_call_interrupts
105763 ± 67% -89.0% 11633 ± 79% interrupts.CPU25.CAL:Function_call_interrupts
118461 ± 60% -78.6% 25319 ±100% interrupts.CPU25.TLB:TLB_shootdowns
12080 ± 13% -56.8% 5220 ± 55% interrupts.CPU26.CAL:Function_call_interrupts
235.50 ± 28% -72.0% 66.00 ± 35% interrupts.CPU27.RES:Rescheduling_interrupts
35432 ± 7% -84.5% 5502 ±117% interrupts.CPU28.CAL:Function_call_interrupts
392.00 ± 5% -74.4% 100.50 ± 75% interrupts.CPU28.RES:Rescheduling_interrupts
46283 ± 10% -81.3% 8648 ±153% interrupts.CPU28.TLB:TLB_shootdowns
222.00 -70.4% 65.75 ± 48% interrupts.CPU3.RES:Rescheduling_interrupts
253.00 ± 3% -71.3% 72.50 ± 14% interrupts.CPU35.RES:Rescheduling_interrupts
7384 ± 74% -84.1% 1177 ± 21% interrupts.CPU39.CAL:Function_call_interrupts
12116 ± 94% -99.5% 66.50 ± 80% interrupts.CPU39.TLB:TLB_shootdowns
9942 ± 22% -71.4% 2839 ± 77% interrupts.CPU4.CAL:Function_call_interrupts
261.00 ± 6% -79.6% 53.25 ± 17% interrupts.CPU4.RES:Rescheduling_interrupts
10765 ± 26% -66.5% 3606 ±138% interrupts.CPU4.TLB:TLB_shootdowns
25523 ± 92% -89.8% 2615 ± 88% interrupts.CPU40.CAL:Function_call_interrupts
113.00 ± 32% +80.3% 203.75 ± 25% interrupts.CPU44.NMI:Non-maskable_interrupts
113.00 ± 32% +80.3% 203.75 ± 25% interrupts.CPU44.PMI:Performance_monitoring_interrupts
8.50 ± 17% +1120.6% 103.75 ±100% interrupts.CPU45.RES:Rescheduling_interrupts
541.50 ± 55% -84.2% 85.50 ± 57% interrupts.CPU49.RES:Rescheduling_interrupts
150.00 ± 49% -68.2% 47.75 ± 47% interrupts.CPU5.RES:Rescheduling_interrupts
327.00 ± 22% -83.6% 53.75 ± 17% interrupts.CPU50.RES:Rescheduling_interrupts
145.00 +118.3% 316.50 ± 72% interrupts.CPU51.NMI:Non-maskable_interrupts
145.00 +118.3% 316.50 ± 72% interrupts.CPU51.PMI:Performance_monitoring_interrupts
165.50 ± 5% -72.2% 46.00 ± 19% interrupts.CPU51.RES:Rescheduling_interrupts
108.00 ± 32% +147.7% 267.50 ± 61% interrupts.CPU52.NMI:Non-maskable_interrupts
108.00 ± 32% +147.7% 267.50 ± 61% interrupts.CPU52.PMI:Performance_monitoring_interrupts
251.00 ± 46% -59.5% 101.75 ± 73% interrupts.CPU52.RES:Rescheduling_interrupts
146.00 +165.9% 388.25 ± 46% interrupts.CPU53.NMI:Non-maskable_interrupts
146.00 +165.9% 388.25 ± 46% interrupts.CPU53.PMI:Performance_monitoring_interrupts
160.00 ± 9% +100.2% 320.25 ± 58% interrupts.CPU54.NMI:Non-maskable_interrupts
160.00 ± 9% +100.2% 320.25 ± 58% interrupts.CPU54.PMI:Performance_monitoring_interrupts
132.50 ± 4% +298.9% 528.50 ± 63% interrupts.CPU55.NMI:Non-maskable_interrupts
132.50 ± 4% +298.9% 528.50 ± 63% interrupts.CPU55.PMI:Performance_monitoring_interrupts
25836 ± 36% -94.7% 1374 ± 26% interrupts.CPU56.CAL:Function_call_interrupts
321.00 ± 33% -86.6% 43.00 ± 7% interrupts.CPU56.RES:Rescheduling_interrupts
38679 ± 28% -99.4% 243.00 ± 73% interrupts.CPU56.TLB:TLB_shootdowns
21567 ± 14% -91.2% 1907 ± 61% interrupts.CPU57.CAL:Function_call_interrupts
121.50 ± 18% +180.5% 340.75 ± 52% interrupts.CPU57.NMI:Non-maskable_interrupts
121.50 ± 18% +180.5% 340.75 ± 52% interrupts.CPU57.PMI:Performance_monitoring_interrupts
272.00 ± 35% -81.2% 51.00 ± 28% interrupts.CPU57.RES:Rescheduling_interrupts
41527 ± 15% -97.9% 862.25 ±136% interrupts.CPU57.TLB:TLB_shootdowns
248.00 ± 45% -74.6% 63.00 ± 46% interrupts.CPU58.RES:Rescheduling_interrupts
258.00 ± 25% -58.9% 106.00 ± 94% interrupts.CPU59.RES:Rescheduling_interrupts
465.00 ± 2% -91.7% 38.75 ± 39% interrupts.CPU6.RES:Rescheduling_interrupts
101.50 ± 27% -53.9% 46.75 ± 13% interrupts.CPU60.RES:Rescheduling_interrupts
186.00 ± 44% -66.7% 62.00 ± 29% interrupts.CPU61.RES:Rescheduling_interrupts
6410 ± 70% -68.3% 2030 ± 48% interrupts.CPU62.CAL:Function_call_interrupts
174.50 ± 58% -65.0% 61.00 ± 24% interrupts.CPU62.RES:Rescheduling_interrupts
6054 ± 86% -86.4% 821.00 ±146% interrupts.CPU62.TLB:TLB_shootdowns
41658 ± 55% -71.7% 11775 ±150% interrupts.CPU64.CAL:Function_call_interrupts
47312 ± 55% -75.0% 11834 ±168% interrupts.CPU64.TLB:TLB_shootdowns
18320 ± 37% -91.5% 1556 ± 15% interrupts.CPU7.CAL:Function_call_interrupts
273.50 ± 39% -83.5% 45.00 ± 58% interrupts.CPU7.RES:Rescheduling_interrupts
20101 ± 38% -97.5% 504.75 ± 36% interrupts.CPU7.TLB:TLB_shootdowns
45476 ± 7% -67.5% 14783 ±122% interrupts.CPU70.CAL:Function_call_interrupts
152.50 +43.9% 219.50 ± 22% interrupts.CPU70.NMI:Non-maskable_interrupts
152.50 +43.9% 219.50 ± 22% interrupts.CPU70.PMI:Performance_monitoring_interrupts
56380 -65.3% 19561 ±109% interrupts.CPU70.TLB:TLB_shootdowns
148.50 ± 4% +744.3% 1253 ± 53% interrupts.CPU71.NMI:Non-maskable_interrupts
148.50 ± 4% +744.3% 1253 ± 53% interrupts.CPU71.PMI:Performance_monitoring_interrupts
63152 ± 21% -87.8% 7723 ± 78% interrupts.CPU73.CAL:Function_call_interrupts
531.50 ± 51% -69.0% 164.50 ± 58% interrupts.CPU73.RES:Rescheduling_interrupts
83608 ± 24% -79.0% 17535 ±100% interrupts.CPU73.TLB:TLB_shootdowns
21856 ± 49% -83.4% 3629 ±103% interrupts.CPU74.CAL:Function_call_interrupts
311.50 ± 17% -63.6% 113.50 ± 92% interrupts.CPU74.RES:Rescheduling_interrupts
32415 ± 40% -78.9% 6828 ±163% interrupts.CPU74.TLB:TLB_shootdowns
663.00 ± 39% -83.4% 110.25 ±102% interrupts.CPU75.RES:Rescheduling_interrupts
245.50 ± 40% -73.3% 65.50 ± 90% interrupts.CPU76.RES:Rescheduling_interrupts
34156 ± 4% -75.5% 8365 ±111% interrupts.CPU76.TLB:TLB_shootdowns
193.00 ± 43% -52.6% 91.50 ± 96% interrupts.CPU77.RES:Rescheduling_interrupts
209.00 ± 28% -83.9% 33.75 ± 25% interrupts.CPU78.RES:Rescheduling_interrupts
21964 ± 25% -72.9% 5954 ±168% interrupts.CPU78.TLB:TLB_shootdowns
14935 ± 19% -90.6% 1408 ± 31% interrupts.CPU79.CAL:Function_call_interrupts
310.50 ± 3% -87.4% 39.00 ± 82% interrupts.CPU79.RES:Rescheduling_interrupts
18945 -98.6% 259.75 ±108% interrupts.CPU79.TLB:TLB_shootdowns
217.00 -67.1% 71.50 ± 80% interrupts.CPU8.RES:Rescheduling_interrupts
224.00 ± 51% -75.0% 56.00 ±107% interrupts.CPU82.RES:Rescheduling_interrupts
14294 ± 4% -75.3% 3527 ± 85% interrupts.CPU83.CAL:Function_call_interrupts
309.50 ± 23% -49.3% 157.00 ± 55% interrupts.CPU83.NMI:Non-maskable_interrupts
309.50 ± 23% -49.3% 157.00 ± 55% interrupts.CPU83.PMI:Performance_monitoring_interrupts
251.50 ± 30% -68.4% 79.50 ± 79% interrupts.CPU83.RES:Rescheduling_interrupts
20072 ± 30% -63.1% 7408 ±128% interrupts.CPU83.TLB:TLB_shootdowns
14272 ± 22% -91.2% 1260 ± 32% interrupts.CPU84.CAL:Function_call_interrupts
294.00 ± 3% -83.2% 49.25 ± 64% interrupts.CPU84.RES:Rescheduling_interrupts
23916 ± 51% -99.4% 132.75 ±166% interrupts.CPU84.TLB:TLB_shootdowns
155.00 ± 37% -52.7% 73.25 ±110% interrupts.CPU85.RES:Rescheduling_interrupts
13973 ± 9% -91.1% 1250 ± 22% interrupts.CPU86.CAL:Function_call_interrupts
265.50 ± 29% -84.2% 42.00 ± 40% interrupts.CPU86.RES:Rescheduling_interrupts
14770 ± 9% -99.6% 61.00 ±158% interrupts.CPU86.TLB:TLB_shootdowns
26510 ± 18% -57.5% 11270 ± 84% interrupts.CPU88.CAL:Function_call_interrupts
95.50 ± 24% -38.0% 59.25 ± 32% interrupts.CPU9.RES:Rescheduling_interrupts
243.00 ± 10% -39.6% 146.75 ± 28% interrupts.CPU90.NMI:Non-maskable_interrupts
243.00 ± 10% -39.6% 146.75 ± 28% interrupts.CPU90.PMI:Performance_monitoring_interrupts
81359 ± 69% -59.5% 32911 ±143% interrupts.CPU91.CAL:Function_call_interrupts
92043 ± 69% -51.6% 44544 ±140% interrupts.CPU91.TLB:TLB_shootdowns
115461 ± 55% -60.2% 45972 ±134% interrupts.CPU92.CAL:Function_call_interrupts
91135 ± 7% +316.9% 379964 ± 45% interrupts.CPU98.CAL:Function_call_interrupts
105442 ± 8% +337.6% 461433 ± 50% interrupts.CPU98.TLB:TLB_shootdowns
42028 ± 2% +77.2% 74484 ± 12% interrupts.NMI:Non-maskable_interrupts
42028 ± 2% +77.2% 74484 ± 12% interrupts.PMI:Performance_monitoring_interrupts
38859 ± 12% -30.2% 27136 ± 13% interrupts.RES:Rescheduling_interrupts
10.69 ± 12% -10.4 0.28 ±173% perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
22.09 ± 4% -8.5 13.55 ± 15% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
65.03 -8.0 57.01 ± 9% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
63.55 -7.9 55.69 ± 10% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
7.80 -7.6 0.20 ±173% perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
7.57 -7.4 0.20 ±173% perf-profile.calltrace.cycles-pp.clear_subpage.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
17.22 ± 4% -7.1 10.15 ± 18% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
6.31 -6.1 0.16 ±173% perf-profile.calltrace.cycles-pp.clear_page_erms.clear_subpage.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault
15.23 ± 3% -5.3 9.95 ± 12% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
15.24 ± 3% -5.2 10.05 ± 12% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
15.31 ± 3% -4.3 11.00 ± 13% perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
15.32 ± 3% -4.3 11.06 ± 13% perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
9.97 ± 3% -4.2 5.81 ± 18% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
9.98 ± 3% -4.2 5.83 ± 18% perf-profile.calltrace.cycles-pp.asm_call_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
9.84 ± 3% -4.1 5.73 ± 18% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
4.39 -3.4 0.96 ± 10% perf-profile.calltrace.cycles-pp.try_to_unmap_one.rmap_walk_anon.try_to_unmap.shrink_page_list.shrink_inactive_list
4.42 -3.4 1.02 ± 10% perf-profile.calltrace.cycles-pp.rmap_walk_anon.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec
4.42 -3.4 1.02 ± 10% perf-profile.calltrace.cycles-pp.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
15.46 ± 3% -2.8 12.63 ± 14% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
5.47 ± 4% -2.5 2.97 ± 20% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt
4.19 ± 5% -2.0 2.18 ± 18% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack
3.91 ± 5% -1.9 2.00 ± 23% perf-profile.calltrace.cycles-pp.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
3.38 ± 2% -1.8 1.62 ± 21% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
3.35 ± 2% -1.8 1.60 ± 21% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
3.20 ± 7% -1.6 1.64 ± 25% perf-profile.calltrace.cycles-pp.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
3.19 ± 7% -1.6 1.63 ± 25% perf-profile.calltrace.cycles-pp.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
3.17 ± 7% -1.5 1.63 ± 25% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt
2.09 ± 5% -1.3 0.75 ± 16% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
2.98 -1.2 1.81 ± 14% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt
3.02 ± 4% -0.9 2.15 ± 7% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
2.29 ± 3% -0.8 1.44 ± 14% perf-profile.calltrace.cycles-pp.ktime_get.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack
2.68 ± 5% -0.8 1.88 ± 7% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
1.48 ± 12% -0.5 0.97 ± 10% perf-profile.calltrace.cycles-pp.ktime_get.tick_nohz_irq_exit.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
1.52 ± 11% -0.5 1.02 ± 11% perf-profile.calltrace.cycles-pp.tick_nohz_irq_exit.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
1.32 ± 12% -0.4 0.88 ± 15% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu
1.52 ± 4% -0.4 1.11 ± 18% perf-profile.calltrace.cycles-pp.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
1.55 ± 9% -0.4 1.14 ± 7% perf-profile.calltrace.cycles-pp.timekeeping_max_deferment.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
1.37 ± 7% -0.4 0.96 ± 17% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.77 ± 16% -0.3 0.45 ± 58% perf-profile.calltrace.cycles-pp._raw_spin_trylock.rebalance_domains.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack
0.90 ± 11% -0.2 0.66 ± 20% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt
1.03 ± 15% -0.2 0.81 ± 17% perf-profile.calltrace.cycles-pp.ktime_get.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.6 0.63 ± 16% perf-profile.calltrace.cycles-pp.page_referenced.shrink_active_list.shrink_lruvec.shrink_node.balance_pgdat
0.00 +0.7 0.70 ± 28% perf-profile.calltrace.cycles-pp.__swap_writepage.pageout.shrink_page_list.shrink_inactive_list.shrink_lruvec
0.31 ±100% +0.7 1.03 ± 18% perf-profile.calltrace.cycles-pp.__sched_text_start.schedule.io_schedule.swap_readpage.read_swap_cache_async
0.00 +0.7 0.72 ± 17% perf-profile.calltrace.cycles-pp.mem_cgroup_uncharge_swap.mem_cgroup_charge.__read_swap_cache_async.read_swap_cache_async.swapin_readahead
0.31 ±100% +0.7 1.04 ± 18% perf-profile.calltrace.cycles-pp.schedule.io_schedule.swap_readpage.read_swap_cache_async.swapin_readahead
0.31 ±100% +0.7 1.04 ± 19% perf-profile.calltrace.cycles-pp.io_schedule.swap_readpage.read_swap_cache_async.swapin_readahead.do_swap_page
1.59 ± 8% +0.7 2.32 ± 16% perf-profile.calltrace.cycles-pp.__read_swap_cache_async.read_swap_cache_async.swapin_readahead.do_swap_page.__handle_mm_fault
0.00 +0.8 0.75 ± 19% perf-profile.calltrace.cycles-pp.submit_bio_noacct.submit_bio.swap_readpage.read_swap_cache_async.swapin_readahead
0.00 +0.8 0.76 ± 19% perf-profile.calltrace.cycles-pp.submit_bio.swap_readpage.read_swap_cache_async.swapin_readahead.do_swap_page
0.00 +0.8 0.77 ± 27% perf-profile.calltrace.cycles-pp.arch_stack_walk.stack_trace_save_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
0.00 +0.8 0.77 ± 10% perf-profile.calltrace.cycles-pp.shrink_active_list.shrink_lruvec.shrink_node.balance_pgdat.kswapd
0.00 +0.8 0.82 ± 27% perf-profile.calltrace.cycles-pp.stack_trace_save_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate
0.00 +0.9 0.91 ± 33% perf-profile.calltrace.cycles-pp.pageout.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
0.00 +0.9 0.93 ± 25% perf-profile.calltrace.cycles-pp.page_vma_mapped_walk.page_referenced_one.rmap_walk_anon.page_referenced.shrink_page_list
0.00 +1.0 1.04 ± 11% perf-profile.calltrace.cycles-pp.xas_load.find_get_entry.pagecache_get_page.lookup_swap_cache.do_swap_page
0.00 +1.1 1.06 ± 11% perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.lookup_swap_cache.do_swap_page.__handle_mm_fault
0.00 +1.1 1.06 ± 11% perf-profile.calltrace.cycles-pp.pagecache_get_page.lookup_swap_cache.do_swap_page.__handle_mm_fault.handle_mm_fault
0.00 +1.1 1.11 ± 17% perf-profile.calltrace.cycles-pp.mem_cgroup_charge.__read_swap_cache_async.read_swap_cache_async.swapin_readahead.do_swap_page
1.57 ± 9% +1.1 2.71 ± 22% perf-profile.calltrace.cycles-pp.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event.handle_edge_irq
0.00 +1.1 1.15 ± 12% perf-profile.calltrace.cycles-pp.lookup_swap_cache.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
1.58 ± 9% +1.1 2.73 ± 22% perf-profile.calltrace.cycles-pp.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event.handle_edge_irq.asm_call_on_stack
1.61 ± 10% +1.2 2.81 ± 22% perf-profile.calltrace.cycles-pp.handle_irq_event_percpu.handle_irq_event.handle_edge_irq.asm_call_on_stack.common_interrupt
0.00 +1.3 1.26 ± 16% perf-profile.calltrace.cycles-pp.page_referenced_one.rmap_walk_anon.page_referenced.shrink_page_list.shrink_inactive_list
1.64 ± 10% +1.3 2.92 ± 22% perf-profile.calltrace.cycles-pp.handle_irq_event.handle_edge_irq.asm_call_on_stack.common_interrupt.asm_common_interrupt
1.67 ± 10% +1.3 3.00 ± 22% perf-profile.calltrace.cycles-pp.handle_edge_irq.asm_call_on_stack.common_interrupt.asm_common_interrupt.cpuidle_enter_state
1.67 ± 10% +1.3 3.00 ± 22% perf-profile.calltrace.cycles-pp.asm_call_on_stack.common_interrupt.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +1.4 1.36 ± 25% perf-profile.calltrace.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
0.00 +1.4 1.38 ± 18% perf-profile.calltrace.cycles-pp.rmap_walk_anon.page_referenced.shrink_page_list.shrink_inactive_list.shrink_lruvec
1.76 ± 10% +1.4 3.15 ± 22% perf-profile.calltrace.cycles-pp.common_interrupt.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
1.77 ± 10% +1.4 3.18 ± 22% perf-profile.calltrace.cycles-pp.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.00 +1.7 1.65 ± 24% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.end_swap_bio_read
0.00 +1.7 1.71 ± 24% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.end_swap_bio_read.blk_update_request.blk_mq_end_request
0.00 +1.7 1.71 ± 24% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.end_swap_bio_read.blk_update_request
0.80 ± 12% +1.7 2.52 ± 17% perf-profile.calltrace.cycles-pp.swap_readpage.read_swap_cache_async.swapin_readahead.do_swap_page.__handle_mm_fault
0.00 +1.7 1.75 ± 13% perf-profile.calltrace.cycles-pp.page_referenced.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
0.25 ±100% +1.8 2.01 ± 24% perf-profile.calltrace.cycles-pp.try_to_wake_up.end_swap_bio_read.blk_update_request.blk_mq_end_request.nvme_irq
0.27 ±100% +1.9 2.12 ± 24% perf-profile.calltrace.cycles-pp.end_swap_bio_read.blk_update_request.blk_mq_end_request.nvme_irq.__handle_irq_event_percpu
0.33 ±100% +2.0 2.29 ± 23% perf-profile.calltrace.cycles-pp.blk_update_request.blk_mq_end_request.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu
0.00 +2.0 2.01 ± 38% perf-profile.calltrace.cycles-pp.smp_call_function_many_cond.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush_dirty.shrink_page_list
0.35 ±100% +2.0 2.38 ± 23% perf-profile.calltrace.cycles-pp.blk_mq_end_request.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event
0.00 +2.0 2.04 ± 38% perf-profile.calltrace.cycles-pp.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush_dirty.shrink_page_list.shrink_inactive_list
0.00 +2.1 2.07 ± 38% perf-profile.calltrace.cycles-pp.try_to_unmap_flush_dirty.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
0.00 +2.1 2.07 ± 38% perf-profile.calltrace.cycles-pp.arch_tlbbatch_flush.try_to_unmap_flush_dirty.shrink_page_list.shrink_inactive_list.shrink_lruvec
2.40 +2.5 4.86 ± 16% perf-profile.calltrace.cycles-pp.read_swap_cache_async.swapin_readahead.do_swap_page.__handle_mm_fault.handle_mm_fault
2.41 +2.5 4.88 ± 16% perf-profile.calltrace.cycles-pp.swapin_readahead.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
4.04 +3.0 7.02 ± 21% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node.balance_pgdat
4.11 +3.4 7.48 ± 21% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_lruvec.shrink_node.balance_pgdat.kswapd
4.12 +4.1 8.26 ± 19% perf-profile.calltrace.cycles-pp.shrink_lruvec.shrink_node.balance_pgdat.kswapd.kthread
4.13 +4.2 8.32 ± 19% perf-profile.calltrace.cycles-pp.shrink_node.balance_pgdat.kswapd.kthread.ret_from_fork
4.13 +4.2 8.33 ± 19% perf-profile.calltrace.cycles-pp.balance_pgdat.kswapd.kthread.ret_from_fork
4.13 +4.2 8.34 ± 19% perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
2.71 +4.9 7.56 ± 15% perf-profile.calltrace.cycles-pp.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
10.69 ± 12% -10.3 0.41 ±114% perf-profile.children.cycles-pp.do_huge_pmd_anonymous_page
20.17 ± 4% -7.9 12.24 ± 15% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
65.62 -7.7 57.92 ± 9% perf-profile.children.cycles-pp.cpuidle_enter
65.59 -7.7 57.89 ± 9% perf-profile.children.cycles-pp.cpuidle_enter_state
7.83 -7.5 0.32 ±106% perf-profile.children.cycles-pp.clear_huge_page
7.61 -7.3 0.32 ±106% perf-profile.children.cycles-pp.clear_subpage
17.52 ± 4% -7.1 10.39 ± 17% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
6.51 -6.0 0.55 ± 51% perf-profile.children.cycles-pp.clear_page_erms
5.58 ± 8% -5.5 0.08 ±173% perf-profile.children.cycles-pp.__alloc_pages_slowpath
15.35 ± 3% -5.3 10.01 ± 12% perf-profile.children.cycles-pp.__handle_mm_fault
15.37 ± 3% -5.3 10.10 ± 12% perf-profile.children.cycles-pp.handle_mm_fault
5.75 ± 8% -5.0 0.73 ± 29% perf-profile.children.cycles-pp.__alloc_pages_nodemask
15.44 ± 3% -4.4 11.06 ± 13% perf-profile.children.cycles-pp.do_user_addr_fault
15.45 ± 3% -4.3 11.12 ± 13% perf-profile.children.cycles-pp.exc_page_fault
10.16 ± 3% -4.2 5.99 ± 17% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
5.14 -4.1 1.00 ± 13% perf-profile.children.cycles-pp.try_to_unmap_one
10.03 ± 3% -4.1 5.90 ± 17% perf-profile.children.cycles-pp.hrtimer_interrupt
15.46 ± 2% -4.1 11.36 ± 16% perf-profile.children.cycles-pp.asm_call_on_stack
5.15 -4.1 1.06 ± 13% perf-profile.children.cycles-pp.try_to_unmap
15.60 ± 3% -3.6 12.01 ± 14% perf-profile.children.cycles-pp.asm_exc_page_fault
5.50 -2.7 2.79 ± 10% perf-profile.children.cycles-pp.rmap_walk_anon
5.60 ± 4% -2.5 3.07 ± 19% perf-profile.children.cycles-pp.__hrtimer_run_queues
2.67 -2.3 0.34 ± 9% perf-profile.children.cycles-pp.load_balance
2.53 -2.3 0.27 ± 8% perf-profile.children.cycles-pp.find_busiest_group
2.50 ± 2% -2.2 0.26 ± 8% perf-profile.children.cycles-pp.update_sd_lb_stats
2.35 -2.2 0.11 ± 4% perf-profile.children.cycles-pp.newidle_balance
2.42 -2.2 0.23 ± 9% perf-profile.children.cycles-pp.pick_next_task_fair
2.08 ± 4% -2.0 0.04 ± 59% perf-profile.children.cycles-pp.worker_thread
4.28 ± 5% -2.0 2.26 ± 16% perf-profile.children.cycles-pp.tick_sched_timer
4.00 ± 4% -1.9 2.12 ± 21% perf-profile.children.cycles-pp.irq_exit_rcu
3.46 ± 3% -1.8 1.68 ± 20% perf-profile.children.cycles-pp.tick_sched_handle
3.44 ± 3% -1.8 1.67 ± 20% perf-profile.children.cycles-pp.update_process_times
2.45 ± 5% -1.7 0.74 ± 29% perf-profile.children.cycles-pp.alloc_pages_vma
2.77 -1.7 1.08 ± 18% perf-profile.children.cycles-pp.schedule
5.62 ± 9% -1.7 3.94 ± 13% perf-profile.children.cycles-pp.ktime_get
3.26 ± 7% -1.6 1.66 ± 24% perf-profile.children.cycles-pp.__softirqentry_text_start
3.26 ± 7% -1.6 1.67 ± 24% perf-profile.children.cycles-pp.do_softirq_own_stack
3.07 -1.5 1.56 ± 15% perf-profile.children.cycles-pp.__sched_text_start
2.15 ± 4% -1.4 0.80 ± 13% perf-profile.children.cycles-pp.scheduler_tick
3.03 -1.2 1.86 ± 13% perf-profile.children.cycles-pp.clockevents_program_event
3.04 ± 4% -0.9 2.17 ± 7% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
2.71 ± 5% -0.8 1.89 ± 7% perf-profile.children.cycles-pp.tick_nohz_next_event
0.83 ± 8% -0.8 0.06 ± 22% perf-profile.children.cycles-pp.__rq_qos_throttle
0.81 ± 10% -0.7 0.09 ± 5% perf-profile.children.cycles-pp.idle_cpu
0.72 ± 2% -0.7 0.04 ±110% perf-profile.children.cycles-pp.__rq_qos_done
0.72 -0.7 0.04 ±110% perf-profile.children.cycles-pp.wbt_done
0.73 -0.7 0.08 ± 51% perf-profile.children.cycles-pp.blk_mq_free_request
0.88 ± 5% -0.6 0.27 ± 67% perf-profile.children.cycles-pp.rcu_core
0.72 ± 6% -0.6 0.14 ± 14% perf-profile.children.cycles-pp.swap_duplicate
0.72 ± 6% -0.5 0.19 ± 13% perf-profile.children.cycles-pp.__swap_duplicate
1.86 -0.5 1.33 ± 13% perf-profile.children.cycles-pp._raw_spin_lock
1.54 ± 11% -0.5 1.02 ± 11% perf-profile.children.cycles-pp.tick_nohz_irq_exit
1.18 ± 4% -0.5 0.70 ± 28% perf-profile.children.cycles-pp.__swap_writepage
1.34 ± 12% -0.4 0.90 ± 14% perf-profile.children.cycles-pp.rebalance_domains
1.45 ± 6% -0.4 1.03 ± 17% perf-profile.children.cycles-pp.tick_irq_enter
1.56 ± 9% -0.4 1.15 ± 7% perf-profile.children.cycles-pp.timekeeping_max_deferment
1.62 ± 3% -0.4 1.23 ± 17% perf-profile.children.cycles-pp.irq_enter_rcu
1.23 ± 2% -0.4 0.84 ± 16% perf-profile.children.cycles-pp.blk_mq_submit_bio
0.42 ± 4% -0.3 0.10 ± 15% perf-profile.children.cycles-pp.blk_flush_plug_list
0.42 ± 4% -0.3 0.10 ± 15% perf-profile.children.cycles-pp.blk_mq_flush_plug_list
0.42 ± 6% -0.3 0.10 ± 15% perf-profile.children.cycles-pp.blk_mq_sched_insert_requests
0.90 ± 16% -0.3 0.59 ± 17% perf-profile.children.cycles-pp._raw_spin_trylock
0.66 ± 7% -0.3 0.36 ± 13% perf-profile.children.cycles-pp.lapic_next_deadline
0.73 -0.3 0.44 ± 26% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.95 -0.3 0.70 ± 18% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.92 -0.2 0.67 ± 16% perf-profile.children.cycles-pp.irqtime_account_irq
0.48 ± 16% -0.2 0.23 ± 16% perf-profile.children.cycles-pp.do_syscall_64
0.49 ± 16% -0.2 0.25 ± 20% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
0.92 ± 9% -0.2 0.68 ± 19% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.32 ± 4% -0.2 0.08 ± 15% perf-profile.children.cycles-pp.page_remove_rmap
0.52 ± 8% -0.2 0.29 ± 14% perf-profile.children.cycles-pp.calc_global_load_tick
0.36 -0.2 0.15 ± 61% perf-profile.children.cycles-pp.note_gp_changes
0.48 ± 5% -0.2 0.27 ± 24% perf-profile.children.cycles-pp.update_rq_clock
0.47 ± 4% -0.2 0.25 ± 49% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.23 ± 2% -0.2 0.03 ±100% perf-profile.children.cycles-pp.nvme_pci_complete_rq
0.30 ± 20% -0.2 0.10 ± 24% perf-profile.children.cycles-pp.update_ts_time_stats
0.28 ± 17% -0.2 0.09 ± 17% perf-profile.children.cycles-pp.nr_iowait_cpu
0.67 ± 6% -0.2 0.48 ± 15% perf-profile.children.cycles-pp.sched_clock_cpu
0.70 ± 2% -0.2 0.52 ± 19% perf-profile.children.cycles-pp.poll_idle
0.32 ± 10% -0.2 0.16 ± 14% perf-profile.children.cycles-pp.update_blocked_averages
0.28 ± 3% -0.2 0.11 ± 4% perf-profile.children.cycles-pp.nvme_map_data
0.22 ± 6% -0.2 0.06 perf-profile.children.cycles-pp.cpumask_next_and
0.59 ± 10% -0.2 0.43 ± 15% perf-profile.children.cycles-pp.sched_clock
0.29 -0.2 0.13 ± 25% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.34 ± 4% -0.1 0.20 ± 32% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.35 ± 8% -0.1 0.20 ± 5% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.55 ± 9% -0.1 0.41 ± 15% perf-profile.children.cycles-pp.native_sched_clock
0.34 ± 10% -0.1 0.20 ± 5% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.31 ± 12% -0.1 0.17 ± 16% perf-profile.children.cycles-pp.run_rebalance_domains
0.26 ± 9% -0.1 0.14 ± 19% perf-profile.children.cycles-pp.irq_work_run_list
0.25 ± 12% -0.1 0.13 ± 21% perf-profile.children.cycles-pp.asm_sysvec_irq_work
0.25 ± 12% -0.1 0.13 ± 21% perf-profile.children.cycles-pp.sysvec_irq_work
0.25 ± 12% -0.1 0.13 ± 21% perf-profile.children.cycles-pp.__sysvec_irq_work
0.25 ± 12% -0.1 0.13 ± 21% perf-profile.children.cycles-pp.irq_work_run
0.25 ± 12% -0.1 0.13 ± 21% perf-profile.children.cycles-pp.irq_work_single
0.25 ± 12% -0.1 0.13 ± 21% perf-profile.children.cycles-pp.printk
0.25 ± 12% -0.1 0.13 ± 21% perf-profile.children.cycles-pp.vprintk_emit
0.25 ± 12% -0.1 0.13 ± 21% perf-profile.children.cycles-pp.console_unlock
0.23 ± 11% -0.1 0.11 ± 19% perf-profile.children.cycles-pp.uart_console_write
0.23 ± 11% -0.1 0.11 ± 19% perf-profile.children.cycles-pp.serial8250_console_putchar
0.23 ± 13% -0.1 0.12 ± 21% perf-profile.children.cycles-pp.serial8250_console_write
0.23 ± 11% -0.1 0.12 ± 19% perf-profile.children.cycles-pp.wait_for_xmitr
0.14 ± 49% -0.1 0.05 ± 62% perf-profile.children.cycles-pp.ksys_read
0.16 ± 12% -0.1 0.07 ± 12% perf-profile.children.cycles-pp.___might_sleep
0.36 ± 4% -0.1 0.27 ± 10% perf-profile.children.cycles-pp.nvme_queue_rq
0.13 ± 46% -0.1 0.04 ± 59% perf-profile.children.cycles-pp.vfs_read
0.15 ± 20% -0.1 0.07 ± 58% perf-profile.children.cycles-pp.io_serial_in
0.22 ± 6% -0.1 0.14 ± 16% perf-profile.children.cycles-pp.rcu_idle_exit
0.18 ± 27% -0.1 0.10 ± 31% perf-profile.children.cycles-pp.cpumask_any_but
0.22 ± 4% -0.1 0.15 ± 20% perf-profile.children.cycles-pp.arch_scale_freq_tick
0.10 -0.1 0.03 ±100% perf-profile.children.cycles-pp.select_task_rq_fair
0.15 ± 10% -0.1 0.08 ± 17% perf-profile.children.cycles-pp.rcu_dynticks_eqs_exit
0.20 -0.1 0.14 ± 15% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.08 -0.1 0.03 ±100% perf-profile.children.cycles-pp.do_execveat_common
0.08 -0.1 0.03 ±100% perf-profile.children.cycles-pp.execve
0.08 -0.1 0.03 ±100% perf-profile.children.cycles-pp.__x64_sys_execve
0.13 ± 7% -0.1 0.08 ± 15% perf-profile.children.cycles-pp.rcu_eqs_exit
0.15 ± 3% -0.0 0.11 ± 23% perf-profile.children.cycles-pp.rcu_dynticks_eqs_enter
0.12 -0.0 0.08 ± 16% perf-profile.children.cycles-pp.call_cpuidle
0.17 ± 15% -0.0 0.12 ± 25% perf-profile.children.cycles-pp.update_irq_load_avg
0.12 ± 13% -0.0 0.08 ± 24% perf-profile.children.cycles-pp.rcu_eqs_enter
0.08 ± 12% -0.0 0.04 ± 60% perf-profile.children.cycles-pp._cond_resched
0.11 ± 4% -0.0 0.07 ± 20% perf-profile.children.cycles-pp.account_process_tick
0.10 -0.0 0.07 ± 7% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.16 ± 18% -0.0 0.13 ± 20% perf-profile.children.cycles-pp.irqentry_enter
0.14 ± 18% -0.0 0.10 ± 28% perf-profile.children.cycles-pp.rcu_nmi_enter
0.07 -0.0 0.04 ± 58% perf-profile.children.cycles-pp.get_cpu_device
0.12 ± 4% -0.0 0.10 ± 19% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.06 +0.0 0.09 ± 24% perf-profile.children.cycles-pp.set_next_entity
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.bio_associate_blkg
0.00 +0.1 0.06 ± 14% perf-profile.children.cycles-pp.blk_mq_get_tag
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.blk_mq_start_request
0.09 ± 17% +0.1 0.15 ± 16% perf-profile.children.cycles-pp.__mod_lruvec_state
0.10 ± 10% +0.1 0.16 ± 21% perf-profile.children.cycles-pp.xas_init_marks
0.00 +0.1 0.07 ± 39% perf-profile.children.cycles-pp.scan_swap_map_slots
0.00 +0.1 0.07 ± 12% perf-profile.children.cycles-pp.__blk_mq_alloc_request
0.00 +0.1 0.07 ± 33% perf-profile.children.cycles-pp.bio_init
0.14 ± 3% +0.1 0.21 ± 24% perf-profile.children.cycles-pp.rmqueue
0.10 ± 5% +0.1 0.17 ± 15% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.00 +0.1 0.07 ± 34% perf-profile.children.cycles-pp.__isolate_lru_page
0.00 +0.1 0.07 ± 15% perf-profile.children.cycles-pp.__perf_sw_event
0.00 +0.1 0.07 ± 17% perf-profile.children.cycles-pp.blk_throtl_bio
0.00 +0.1 0.07 ± 42% perf-profile.children.cycles-pp.map_swap_entry
0.00 +0.1 0.08 ± 14% perf-profile.children.cycles-pp.unlock_page
0.03 ±100% +0.1 0.11 ± 19% perf-profile.children.cycles-pp.__switch_to
0.00 +0.1 0.08 ± 23% perf-profile.children.cycles-pp.get_swap_device
0.08 ± 12% +0.1 0.16 ± 28% perf-profile.children.cycles-pp.__orc_find
0.00 +0.1 0.08 ± 41% perf-profile.children.cycles-pp.get_swap_pages
0.00 +0.1 0.08 ± 34% perf-profile.children.cycles-pp.page_mapping
0.06 +0.1 0.14 ± 26% perf-profile.children.cycles-pp.__list_del_entry_valid
0.07 ± 20% +0.1 0.16 ± 21% perf-profile.children.cycles-pp.xas_clear_mark
0.00 +0.1 0.09 ± 25% perf-profile.children.cycles-pp.pagevec_move_tail
0.00 +0.1 0.09 ± 26% perf-profile.children.cycles-pp.swap_range_free
0.00 +0.1 0.09 ± 27% perf-profile.children.cycles-pp.xas_find
0.03 ±100% +0.1 0.12 ± 18% perf-profile.children.cycles-pp.update_cfs_group
0.00 +0.1 0.09 ± 31% perf-profile.children.cycles-pp.free_unref_page_list
0.00 +0.1 0.10 ± 55% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.00 +0.1 0.10 ± 37% perf-profile.children.cycles-pp.blk_attempt_plug_merge
0.00 +0.1 0.10 ± 19% perf-profile.children.cycles-pp.___perf_sw_event
0.09 ± 22% +0.1 0.19 ± 13% perf-profile.children.cycles-pp.__mod_node_page_state
0.00 +0.1 0.10 ± 30% perf-profile.children.cycles-pp.kernel_text_address
0.03 ±100% +0.1 0.13 ± 28% perf-profile.children.cycles-pp.__kernel_text_address
0.00 +0.1 0.11 ± 25% perf-profile.children.cycles-pp.rmqueue_bulk
0.00 +0.1 0.11 ± 23% perf-profile.children.cycles-pp.page_lock_anon_vma_read
0.00 +0.1 0.11 ± 30% perf-profile.children.cycles-pp.rotate_reclaimable_page
0.00 +0.1 0.11 ± 15% perf-profile.children.cycles-pp.do_page_add_anon_rmap
0.15 ± 10% +0.1 0.27 ± 20% perf-profile.children.cycles-pp.dequeue_entity
0.06 ± 9% +0.1 0.18 ± 23% perf-profile.children.cycles-pp.__mod_memcg_state
0.03 ±100% +0.1 0.15 ± 28% perf-profile.children.cycles-pp.unwind_get_return_address
0.17 ± 5% +0.1 0.30 ± 20% perf-profile.children.cycles-pp.dequeue_task_fair
0.00 +0.1 0.13 ± 49% perf-profile.children.cycles-pp.page_swapcount
0.17 ± 29% +0.1 0.31 ± 11% perf-profile.children.cycles-pp.__blk_mq_try_issue_directly
0.10 ± 5% +0.1 0.23 ± 31% perf-profile.children.cycles-pp.tlb_is_not_lazy
0.00 +0.1 0.14 ± 35% perf-profile.children.cycles-pp.cgroup_throttle_swaprate
0.12 ± 4% +0.1 0.27 ± 20% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.00 +0.1 0.15 ± 56% perf-profile.children.cycles-pp.swap_writepage
0.00 +0.2 0.15 ± 15% perf-profile.children.cycles-pp.workingset_refault
0.06 ± 16% +0.2 0.22 ± 32% perf-profile.children.cycles-pp.cpumask_next
0.00 +0.2 0.16 ± 30% perf-profile.children.cycles-pp.__count_memcg_events
0.03 ±100% +0.2 0.19 ± 18% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.09 +0.2 0.25 ± 21% perf-profile.children.cycles-pp.__irqentry_text_start
0.00 +0.2 0.16 ± 35% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.00 +0.2 0.16 ± 19% perf-profile.children.cycles-pp.bio_alloc_bioset
0.00 +0.2 0.17 ± 37% perf-profile.children.cycles-pp.put_swap_page
0.05 +0.2 0.22 ± 20% perf-profile.children.cycles-pp.xas_start
0.03 ±100% +0.2 0.21 ± 15% perf-profile.children.cycles-pp.swap_cgroup_record
0.00 +0.2 0.18 ± 17% perf-profile.children.cycles-pp.clear_shadow_from_swap_cache
0.31 +0.2 0.50 ± 9% perf-profile.children.cycles-pp.schedule_idle
0.05 +0.2 0.24 ± 56% perf-profile.children.cycles-pp.llist_add_batch
0.05 +0.2 0.24 ± 19% perf-profile.children.cycles-pp.submit_bio_checks
0.00 +0.2 0.20 ± 33% perf-profile.children.cycles-pp.end_swap_bio_write
0.00 +0.2 0.20 ± 33% perf-profile.children.cycles-pp.end_page_writeback
0.03 ±100% +0.2 0.22 ± 22% perf-profile.children.cycles-pp.move_pages_to_lru
0.03 ±100% +0.2 0.22 ± 16% perf-profile.children.cycles-pp.blk_mq_try_issue_directly
0.31 ± 6% +0.2 0.51 ± 27% perf-profile.children.cycles-pp.unwind_next_frame
0.00 +0.2 0.20 ± 17% perf-profile.children.cycles-pp.__frontswap_load
0.00 +0.2 0.20 ± 44% perf-profile.children.cycles-pp.get_swap_page
0.00 +0.2 0.20 ± 18% perf-profile.children.cycles-pp.mem_cgroup_id_put_many
0.00 +0.2 0.20 ± 71% perf-profile.children.cycles-pp.default_send_IPI_mask_sequence_phys
0.04 ±100% +0.2 0.24 ± 38% perf-profile.children.cycles-pp.try_charge
0.00 +0.2 0.21 ± 43% perf-profile.children.cycles-pp.mem_cgroup_swapout
0.06 ± 9% +0.2 0.27 ± 26% perf-profile.children.cycles-pp.isolate_lru_pages
0.32 ± 6% +0.2 0.53 ± 32% perf-profile.children.cycles-pp.flush_tlb_mm_range
0.11 ± 23% +0.2 0.32 ± 15% perf-profile.children.cycles-pp.finish_task_switch
0.10 ± 15% +0.2 0.31 ± 44% perf-profile.children.cycles-pp.prep_new_page
0.00 +0.2 0.22 ± 26% perf-profile.children.cycles-pp.up_read
0.00 +0.2 0.23 ± 47% perf-profile.children.cycles-pp._swap_info_get
0.03 ±100% +0.2 0.26 ± 10% perf-profile.children.cycles-pp.__swp_swapcount
0.00 +0.2 0.25 ± 17% perf-profile.children.cycles-pp.lookup_swap_cgroup_id
0.00 +0.2 0.25 ± 18% perf-profile.children.cycles-pp.sync_regs
0.24 ± 2% +0.3 0.50 ± 26% perf-profile.children.cycles-pp.pmdp_clear_flush_young
0.07 ± 14% +0.3 0.34 ± 20% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.31 ± 3% +0.3 0.59 ± 34% perf-profile.children.cycles-pp.get_page_from_freelist
0.25 ± 4% +0.3 0.53 ± 14% perf-profile.children.cycles-pp.add_to_swap_cache
0.21 ± 2% +0.3 0.49 ± 40% perf-profile.children.cycles-pp.add_to_swap
0.30 ± 4% +0.3 0.60 ± 10% perf-profile.children.cycles-pp.total_mapcount
0.50 ± 8% +0.3 0.83 ± 27% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.00 +0.3 0.34 ± 31% perf-profile.children.cycles-pp.down_read_trylock
0.00 +0.3 0.34 ± 22% perf-profile.children.cycles-pp.get_swap_bio
0.45 ± 8% +0.3 0.80 ± 27% perf-profile.children.cycles-pp.arch_stack_walk
0.04 ±100% +0.4 0.39 ± 30% perf-profile.children.cycles-pp.lru_cache_add
0.10 ± 30% +0.4 0.46 ± 21% perf-profile.children.cycles-pp.delete_from_swap_cache
0.07 ± 7% +0.4 0.43 ± 15% perf-profile.children.cycles-pp.swapcache_free_entries
0.07 ± 14% +0.4 0.45 ± 16% perf-profile.children.cycles-pp.free_swap_slot
0.08 ± 12% +0.4 0.47 ± 18% perf-profile.children.cycles-pp.page_counter_cancel
0.08 ± 6% +0.4 0.47 ± 16% perf-profile.children.cycles-pp.__swap_entry_free
0.08 ± 12% +0.4 0.48 ± 18% perf-profile.children.cycles-pp.page_counter_uncharge
0.11 ± 27% +0.4 0.53 ± 19% perf-profile.children.cycles-pp.reuse_swap_page
0.12 ± 8% +0.7 0.79 ± 16% perf-profile.children.cycles-pp.mem_cgroup_uncharge_swap
0.68 ± 2% +0.7 1.38 ± 25% perf-profile.children.cycles-pp.__account_scheduler_latency
1.02 ± 5% +0.7 1.74 ± 24% perf-profile.children.cycles-pp.ttwu_do_activate
1.60 ± 8% +0.7 2.33 ± 15% perf-profile.children.cycles-pp.__read_swap_cache_async
1.00 ± 6% +0.7 1.74 ± 24% perf-profile.children.cycles-pp.enqueue_task_fair
0.03 ±100% +0.7 0.77 ± 10% perf-profile.children.cycles-pp.shrink_active_list
0.92 ± 5% +0.8 1.69 ± 24% perf-profile.children.cycles-pp.enqueue_entity
1.21 ± 5% +0.8 2.05 ± 25% perf-profile.children.cycles-pp.try_to_wake_up
0.23 ± 2% +0.9 1.12 ± 19% perf-profile.children.cycles-pp.page_vma_mapped_walk
0.14 ± 3% +0.9 1.03 ± 66% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.12 ± 4% +0.9 1.05 ± 65% perf-profile.children.cycles-pp.__sysvec_call_function
0.15 ± 6% +1.0 1.13 ± 64% perf-profile.children.cycles-pp.sysvec_call_function
0.03 ±100% +1.1 1.12 ± 12% perf-profile.children.cycles-pp.find_get_entry
0.03 ±100% +1.1 1.13 ± 12% perf-profile.children.cycles-pp.pagecache_get_page
0.03 ±100% +1.1 1.15 ± 12% perf-profile.children.cycles-pp.lookup_swap_cache
0.10 ± 5% +1.2 1.26 ± 13% perf-profile.children.cycles-pp.xas_load
1.65 ± 8% +1.3 2.91 ± 20% perf-profile.children.cycles-pp.nvme_irq
0.33 ± 7% +1.3 1.60 ± 9% perf-profile.children.cycles-pp.page_referenced_one
1.66 ± 8% +1.3 2.94 ± 20% perf-profile.children.cycles-pp.__handle_irq_event_percpu
1.69 ± 9% +1.3 3.01 ± 20% perf-profile.children.cycles-pp.handle_irq_event_percpu
0.25 ± 8% +1.4 1.62 ± 21% perf-profile.children.cycles-pp.mem_cgroup_charge
1.71 ± 10% +1.4 3.13 ± 20% perf-profile.children.cycles-pp.handle_irq_event
1.75 ± 10% +1.5 3.21 ± 20% perf-profile.children.cycles-pp.handle_edge_irq
1.84 ± 10% +1.5 3.36 ± 20% perf-profile.children.cycles-pp.common_interrupt
1.85 ± 10% +1.6 3.40 ± 20% perf-profile.children.cycles-pp.asm_common_interrupt
0.50 ± 26% +1.6 2.09 ± 38% perf-profile.children.cycles-pp.arch_tlbbatch_flush
0.43 ± 25% +1.7 2.14 ± 24% perf-profile.children.cycles-pp.end_swap_bio_read
0.80 ± 11% +1.7 2.53 ± 17% perf-profile.children.cycles-pp.swap_readpage
0.65 ± 6% +1.7 2.38 ± 6% perf-profile.children.cycles-pp.page_referenced
0.81 ± 12% +1.8 2.58 ± 35% perf-profile.children.cycles-pp.on_each_cpu_cond_mask
0.79 ± 11% +1.8 2.56 ± 35% perf-profile.children.cycles-pp.smp_call_function_many_cond
0.17 ± 19% +1.9 2.08 ± 38% perf-profile.children.cycles-pp.try_to_unmap_flush_dirty
0.54 ± 24% +1.9 2.45 ± 21% perf-profile.children.cycles-pp.blk_update_request
0.59 ± 22% +2.0 2.56 ± 21% perf-profile.children.cycles-pp.blk_mq_end_request
2.40 +2.5 4.86 ± 16% perf-profile.children.cycles-pp.read_swap_cache_async
2.41 +2.5 4.88 ± 16% perf-profile.children.cycles-pp.swapin_readahead
4.13 +4.2 8.33 ± 19% perf-profile.children.cycles-pp.balance_pgdat
4.13 +4.2 8.34 ± 19% perf-profile.children.cycles-pp.kswapd
2.71 +4.9 7.57 ± 15% perf-profile.children.cycles-pp.do_swap_page
6.35 -5.8 0.52 ± 52% perf-profile.self.cycles-pp.clear_page_erms
3.35 -2.9 0.45 ± 8% perf-profile.self.cycles-pp.try_to_unmap_one
5.09 ± 10% -1.6 3.51 ± 12% perf-profile.self.cycles-pp.ktime_get
1.71 ± 3% -1.5 0.18 ± 14% perf-profile.self.cycles-pp.update_sd_lb_stats
1.15 -1.1 0.04 ±104% perf-profile.self.cycles-pp.clear_subpage
0.81 ± 10% -0.7 0.08 ± 10% perf-profile.self.cycles-pp.idle_cpu
1.83 -0.6 1.26 ± 13% perf-profile.self.cycles-pp._raw_spin_lock
1.55 ± 9% -0.4 1.15 ± 7% perf-profile.self.cycles-pp.timekeeping_max_deferment
0.79 -0.3 0.45 ± 12% perf-profile.self.cycles-pp.tick_nohz_next_event
0.65 ± 7% -0.3 0.36 ± 13% perf-profile.self.cycles-pp.lapic_next_deadline
0.85 ± 12% -0.3 0.59 ± 17% perf-profile.self.cycles-pp._raw_spin_trylock
0.51 ± 9% -0.2 0.29 ± 14% perf-profile.self.cycles-pp.calc_global_load_tick
0.83 ± 10% -0.2 0.62 ± 20% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.66 -0.2 0.46 ± 20% perf-profile.self.cycles-pp.irqtime_account_irq
0.28 ± 20% -0.2 0.09 ± 17% perf-profile.self.cycles-pp.nr_iowait_cpu
0.26 ± 21% -0.2 0.07 ± 12% perf-profile.self.cycles-pp.update_rq_clock
0.33 ± 7% -0.2 0.15 ± 44% perf-profile.self.cycles-pp.__softirqentry_text_start
0.39 ± 2% -0.2 0.21 ± 50% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.64 -0.2 0.48 ± 18% perf-profile.self.cycles-pp.poll_idle
0.29 -0.2 0.13 ± 25% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.34 ± 4% -0.1 0.20 ± 32% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.53 ± 10% -0.1 0.39 ± 17% perf-profile.self.cycles-pp.native_sched_clock
0.33 ± 10% -0.1 0.20 ± 5% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.21 ± 16% -0.1 0.08 ± 8% perf-profile.self.cycles-pp.xas_store
0.19 ± 5% -0.1 0.07 ±105% perf-profile.self.cycles-pp.note_gp_changes
0.34 ± 15% -0.1 0.26 ± 17% perf-profile.self.cycles-pp._find_next_bit
0.15 ± 20% -0.1 0.07 ± 58% perf-profile.self.cycles-pp.io_serial_in
0.15 ± 10% -0.1 0.07 ± 17% perf-profile.self.cycles-pp.___might_sleep
0.22 ± 6% -0.1 0.15 ± 20% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.15 ± 10% -0.1 0.08 ± 8% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.15 ± 3% -0.1 0.08 ± 30% perf-profile.self.cycles-pp.hrtimer_interrupt
0.15 ± 10% -0.1 0.09 ± 25% perf-profile.self.cycles-pp.rebalance_domains
0.14 ± 14% -0.1 0.08 ± 17% perf-profile.self.cycles-pp.rcu_dynticks_eqs_exit
0.12 ± 21% -0.1 0.05 ± 9% perf-profile.self.cycles-pp.update_blocked_averages
0.09 -0.1 0.03 ±102% perf-profile.self.cycles-pp.load_balance
0.09 -0.0 0.04 ± 63% perf-profile.self.cycles-pp.rmqueue
0.12 -0.0 0.08 ± 16% perf-profile.self.cycles-pp.call_cpuidle
0.09 -0.0 0.05 ± 62% perf-profile.self.cycles-pp.rcu_idle_exit
0.15 -0.0 0.11 ± 20% perf-profile.self.cycles-pp.rcu_dynticks_eqs_enter
0.17 ± 15% -0.0 0.12 ± 25% perf-profile.self.cycles-pp.update_irq_load_avg
0.11 ± 4% -0.0 0.07 ± 20% perf-profile.self.cycles-pp.account_process_tick
0.12 ± 13% -0.0 0.08 ± 17% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.12 ± 4% -0.0 0.10 ± 17% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.07 ± 7% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.get_cpu_device
0.21 ± 2% -0.0 0.18 ± 5% perf-profile.self.cycles-pp.tick_sched_timer
0.06 ± 9% +0.0 0.09 ± 15% perf-profile.self.cycles-pp.xas_create
0.07 ± 7% +0.1 0.12 ± 23% perf-profile.self.cycles-pp.__delete_from_swap_cache
0.07 ± 7% +0.1 0.12 ± 17% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.07 ± 14% +0.1 0.13 ± 21% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.__blk_queue_split
0.00 +0.1 0.07 ± 34% perf-profile.self.cycles-pp.__isolate_lru_page
0.00 +0.1 0.07 ± 10% perf-profile.self.cycles-pp.do_page_add_anon_rmap
0.00 +0.1 0.07 ± 26% perf-profile.self.cycles-pp.get_swap_device
0.00 +0.1 0.07 ± 17% perf-profile.self.cycles-pp.unlock_page
0.00 +0.1 0.07 ± 27% perf-profile.self.cycles-pp.isolate_lru_pages
0.00 +0.1 0.08 ± 34% perf-profile.self.cycles-pp.page_mapping
0.06 +0.1 0.14 ± 27% perf-profile.self.cycles-pp.__list_del_entry_valid
0.03 ±100% +0.1 0.10 ± 18% perf-profile.self.cycles-pp.__switch_to
0.00 +0.1 0.08 ± 24% perf-profile.self.cycles-pp.swap_cgroup_record
0.00 +0.1 0.08 ± 19% perf-profile.self.cycles-pp.___perf_sw_event
0.08 ± 12% +0.1 0.16 ± 28% perf-profile.self.cycles-pp.__orc_find
0.00 +0.1 0.08 ± 21% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.00 +0.1 0.08 ± 10% perf-profile.self.cycles-pp.__swp_swapcount
0.07 ± 14% +0.1 0.16 ± 21% perf-profile.self.cycles-pp.xas_clear_mark
0.03 ±100% +0.1 0.12 ± 17% perf-profile.self.cycles-pp.update_cfs_group
0.00 +0.1 0.09 ± 46% perf-profile.self.cycles-pp.get_swap_page
0.09 ± 22% +0.1 0.18 ± 13% perf-profile.self.cycles-pp.__mod_node_page_state
0.00 +0.1 0.10 ± 33% perf-profile.self.cycles-pp.move_pages_to_lru
0.06 ± 9% +0.1 0.16 ± 21% perf-profile.self.cycles-pp.nvme_irq
0.00 +0.1 0.11 ± 32% perf-profile.self.cycles-pp.do_swap_page
0.06 ± 9% +0.1 0.17 ± 21% perf-profile.self.cycles-pp.__mod_memcg_state
0.00 +0.1 0.13 ± 49% perf-profile.self.cycles-pp.lru_cache_add
0.09 ± 11% +0.1 0.22 ± 28% perf-profile.self.cycles-pp.tlb_is_not_lazy
0.00 +0.1 0.13 ± 33% perf-profile.self.cycles-pp.cgroup_throttle_swaprate
0.03 ±100% +0.1 0.16 ± 41% perf-profile.self.cycles-pp.try_charge
0.00 +0.1 0.15 ± 40% perf-profile.self.cycles-pp.mem_cgroup_charge
0.00 +0.1 0.15 ± 30% perf-profile.self.cycles-pp.shrink_page_list
0.00 +0.2 0.15 ± 34% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.12 ± 12% +0.2 0.28 ± 10% perf-profile.self.cycles-pp.__sched_text_start
0.00 +0.2 0.16 ± 30% perf-profile.self.cycles-pp.__count_memcg_events
0.03 ±100% +0.2 0.20 ± 22% perf-profile.self.cycles-pp.xas_start
0.00 +0.2 0.19 ± 16% perf-profile.self.cycles-pp.__frontswap_load
0.00 +0.2 0.19 ± 17% perf-profile.self.cycles-pp.mem_cgroup_id_put_many
0.00 +0.2 0.21 ± 28% perf-profile.self.cycles-pp.up_read
0.03 ±100% +0.2 0.23 ± 57% perf-profile.self.cycles-pp.llist_add_batch
0.10 ± 15% +0.2 0.31 ± 25% perf-profile.self.cycles-pp.__account_scheduler_latency
0.00 +0.2 0.22 ± 47% perf-profile.self.cycles-pp._swap_info_get
0.00 +0.2 0.24 ± 18% perf-profile.self.cycles-pp.lookup_swap_cgroup_id
0.00 +0.2 0.24 ± 18% perf-profile.self.cycles-pp.sync_regs
0.04 ±100% +0.3 0.29 ± 17% perf-profile.self.cycles-pp.__handle_mm_fault
0.08 ± 6% +0.3 0.33 ± 16% perf-profile.self.cycles-pp.add_to_swap_cache
0.00 +0.3 0.26 ± 22% perf-profile.self.cycles-pp.finish_task_switch
0.30 ± 3% +0.3 0.58 ± 10% perf-profile.self.cycles-pp.total_mapcount
0.03 ±100% +0.3 0.31 ± 15% perf-profile.self.cycles-pp.swap_readpage
0.00 +0.3 0.32 ± 31% perf-profile.self.cycles-pp.down_read_trylock
0.07 ± 20% +0.3 0.41 ± 19% perf-profile.self.cycles-pp.page_counter_cancel
0.14 ± 11% +0.8 0.97 ± 16% perf-profile.self.cycles-pp.page_vma_mapped_walk
0.03 ±100% +1.0 1.00 ± 13% perf-profile.self.cycles-pp.xas_load
0.58 ± 19% +1.0 1.62 ± 33% perf-profile.self.cycles-pp.smp_call_function_many_cond
0.18 ± 2% +3.6 3.79 ± 82% perf-profile.self.cycles-pp.do_access
***************************************************************************************************
lkp-hsw-4ex1: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_pmem/nr_task/priority/rootfs/tbox_group/test/testcase/thp_defrag/thp_enabled/ucode:
gcc-9/performance/x86_64-rhel-8.3/4/8/1/debian-10.4-x86_64-20200603.cgz/lkp-hsw-4ex1/swap-w-seq/vm-scalability/never/never/0x16
commit:
3852f6768e ("mm/swapcache: support to handle the shadow entries")
aae466b005 ("mm/swap: implement workingset detection for anonymous LRU")
3852f6768ede542e aae466b0052e1888edd1d7f473d
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 1% 2:4 perf-profile.children.cycles-pp.error_entry
1:4 2% 2:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
23.24 ± 3% +8.6% 25.25 vm-scalability.free_time
451459 ± 2% +17.6% 531040 ± 2% vm-scalability.median
3613724 ± 2% +17.2% 4233483 ± 2% vm-scalability.throughput
147.90 ± 2% -11.1% 131.45 ± 2% vm-scalability.time.elapsed_time
147.90 ± 2% -11.1% 131.45 ± 2% vm-scalability.time.elapsed_time.max
126092 -40.2% 75396 ± 5% vm-scalability.time.involuntary_context_switches
709.75 +3.4% 733.75 vm-scalability.time.percent_of_cpu_this_job_got
858.01 ± 6% -12.7% 748.79 ± 3% vm-scalability.time.system_time
2555 ± 11% -86.5% 345.25 ± 19% vm-scalability.time.voluntary_context_switches
0.96 ± 22% +23.2% 1.18 ± 2% iostat.cpu.user
0.46 ± 2% -0.4 0.08 ± 6% mpstat.cpu.all.soft%
19058794 ± 73% -51.3% 9284205 ±108% numa-numastat.node2.numa_foreign
1166704 +28.5% 1499626 vmstat.memory.cache
2470874 ± 2% +12.5% 2779610 ± 2% vmstat.swap.so
5097 -25.6% 3792 ± 3% vmstat.system.cs
775916 ± 6% +15.8% 898759 vmstat.system.in
3.27e+09 ± 3% -9.3% 2.964e+09 ± 2% perf-node.node-load-misses
1.755e+09 ± 4% -16.1% 1.473e+09 ± 9% perf-node.node-loads
40.00 ± 4% -62.5% 15.00 perf-node.node-local-store-ratio
2.337e+09 ± 2% -39.6% 1.412e+09 ± 2% perf-node.node-store-misses
1.594e+09 ± 3% -83.9% 2.563e+08 ± 2% perf-node.node-stores
11725373 +15.5% 13547085 ± 6% meminfo.DirectMap2M
179916 +187.0% 516441 meminfo.KReclaimable
1116912 ± 5% +19.9% 1339332 ± 4% meminfo.MemAvailable
179916 +187.0% 516441 meminfo.SReclaimable
413707 +80.4% 746239 meminfo.Slab
218196 ± 2% +12.4% 245320 ± 2% meminfo.max_used_kB
11422 ± 11% +182.9% 32315 ± 25% numa-vmstat.node0.nr_slab_reclaimable
30123 +11.0% 33440 ± 4% numa-vmstat.node1.nr_page_table_pages
11318 ± 10% +187.6% 32549 ± 15% numa-vmstat.node1.nr_slab_reclaimable
14498502 +11.0% 16097678 ± 4% numa-vmstat.node1.nr_vmscan_write
14498504 +11.0% 16097680 ± 4% numa-vmstat.node1.nr_written
2099 ± 74% -67.5% 681.25 ± 58% numa-vmstat.node2.nr_active_anon
11274 ± 13% +172.3% 30698 ± 11% numa-vmstat.node2.nr_slab_reclaimable
2101 ± 74% -67.6% 681.50 ± 58% numa-vmstat.node2.nr_zone_active_anon
11491764 ± 72% -54.4% 5245129 ±107% numa-vmstat.node2.numa_foreign
10784 ± 7% +210.9% 33532 ± 6% numa-vmstat.node3.nr_slab_reclaimable
6117 ± 6% -15.4% 5175 ± 2% slabinfo.files_cache.active_objs
6124 ± 5% -15.5% 5175 ± 2% slabinfo.files_cache.num_objs
1269 ± 3% +21.3% 1539 ± 5% slabinfo.khugepaged_mm_slot.active_objs
1269 ± 3% +21.3% 1539 ± 5% slabinfo.khugepaged_mm_slot.num_objs
12779 ± 3% -15.2% 10839 ± 4% slabinfo.pde_opener.active_objs
12779 ± 3% -15.2% 10839 ± 4% slabinfo.pde_opener.num_objs
166610 ± 2% +352.8% 754410 slabinfo.radix_tree_node.active_objs
3337 ± 3% +318.4% 13961 slabinfo.radix_tree_node.active_slabs
167671 ± 2% +366.3% 781839 slabinfo.radix_tree_node.num_objs
3337 ± 3% +318.4% 13961 slabinfo.radix_tree_node.num_slabs
3109 ± 4% -10.4% 2785 slabinfo.sighand_cache.active_objs
3131 ± 4% -10.4% 2807 ± 2% slabinfo.sighand_cache.num_objs
8020 ± 4% -12.5% 7021 ± 2% slabinfo.task_delay_info.active_objs
8020 ± 4% -12.5% 7021 ± 2% slabinfo.task_delay_info.num_objs
45388 ± 11% +184.5% 129110 ± 25% numa-meminfo.node0.KReclaimable
45388 ± 11% +184.5% 129110 ± 25% numa-meminfo.node0.SReclaimable
114445 ± 14% +74.5% 199762 ± 18% numa-meminfo.node0.Slab
45359 ± 9% +187.0% 130162 ± 15% numa-meminfo.node1.KReclaimable
120458 +11.0% 133695 ± 4% numa-meminfo.node1.PageTables
45359 ± 9% +187.0% 130162 ± 15% numa-meminfo.node1.SReclaimable
109895 ± 16% +71.3% 188201 ± 13% numa-meminfo.node1.Slab
8457 ± 73% -67.5% 2751 ± 59% numa-meminfo.node2.Active
8389 ± 74% -67.5% 2726 ± 58% numa-meminfo.node2.Active(anon)
45582 ± 13% +169.3% 122741 ± 11% numa-meminfo.node2.KReclaimable
45582 ± 13% +169.3% 122741 ± 11% numa-meminfo.node2.SReclaimable
97401 ± 13% +73.7% 169199 ± 6% numa-meminfo.node2.Slab
43179 ± 6% +210.4% 134010 ± 6% numa-meminfo.node3.KReclaimable
43179 ± 6% +210.4% 134010 ± 6% numa-meminfo.node3.SReclaimable
91559 ± 9% +106.1% 188662 ± 8% numa-meminfo.node3.Slab
20729509 ± 16% -7.8e+06 12912354 ± 25% syscalls.sys_close.noise.100%
28960042 ± 11% -8.2e+06 20719899 ± 14% syscalls.sys_close.noise.2%
27766391 ± 11% -8.4e+06 19401559 ± 15% syscalls.sys_close.noise.25%
28908895 ± 11% -8.2e+06 20661833 ± 14% syscalls.sys_close.noise.5%
25426814 ± 13% -8.5e+06 16899949 ± 18% syscalls.sys_close.noise.50%
22869034 ± 14% -8e+06 14822779 ± 21% syscalls.sys_close.noise.75%
32447789 ± 25% -69.7% 9828013 ± 98% syscalls.sys_openat.max
1.654e+09 ± 39% -1.4e+09 2.337e+08 ± 39% syscalls.sys_openat.noise.100%
1.706e+09 ± 38% -1.4e+09 2.834e+08 ± 32% syscalls.sys_openat.noise.2%
1.693e+09 ± 38% -1.4e+09 2.712e+08 ± 34% syscalls.sys_openat.noise.25%
1.706e+09 ± 38% -1.4e+09 2.829e+08 ± 32% syscalls.sys_openat.noise.5%
1.678e+09 ± 38% -1.4e+09 2.548e+08 ± 36% syscalls.sys_openat.noise.50%
1.667e+09 ± 39% -1.4e+09 2.438e+08 ± 38% syscalls.sys_openat.noise.75%
1174 ± 22% +53.6% 1803 ± 5% syscalls.sys_write.med
5.911e+08 ± 64% +2e+09 2.555e+09 ± 2% syscalls.sys_write.noise.100%
6.002e+08 ± 63% +2e+09 2.56e+09 ± 2% syscalls.sys_write.noise.2%
5.985e+08 ± 64% +2e+09 2.559e+09 ± 2% syscalls.sys_write.noise.25%
6.002e+08 ± 63% +2e+09 2.56e+09 ± 2% syscalls.sys_write.noise.5%
5.953e+08 ± 64% +2e+09 2.558e+09 ± 2% syscalls.sys_write.noise.50%
5.93e+08 ± 64% +2e+09 2.557e+09 ± 2% syscalls.sys_write.noise.75%
52505 ± 2% +8.7% 57048 ± 4% proc-vmstat.allocstall_movable
1329701 ± 53% -76.6% 311376 ± 84% proc-vmstat.compact_daemon_migrate_scanned
3092 ±104% -98.4% 48.75 ± 73% proc-vmstat.compact_daemon_wake
91.25 ± 12% -83.0% 15.50 ± 39% proc-vmstat.compact_fail
1470217 ± 46% -68.7% 459729 ± 57% proc-vmstat.compact_migrate_scanned
97.50 ± 13% -81.3% 18.25 ± 32% proc-vmstat.compact_stall
3161 ± 98% -96.3% 117.50 ± 72% proc-vmstat.kswapd_low_wmark_hit_quickly
5191554 -2.4% 5065321 proc-vmstat.nr_anon_pages
5184927 -2.4% 5059469 proc-vmstat.nr_inactive_anon
187.75 ± 4% -9.1% 170.75 proc-vmstat.nr_isolated_anon
111810 +1.6% 113619 proc-vmstat.nr_page_table_pages
44657 +188.9% 129025 proc-vmstat.nr_slab_reclaimable
58446 -1.7% 57450 proc-vmstat.nr_slab_unreclaimable
5183621 -2.4% 5059220 proc-vmstat.nr_zone_inactive_anon
56750562 ± 10% -26.1% 41935745 ± 19% proc-vmstat.numa_pte_updates
1625054 ± 9% -100.0% 0.00 proc-vmstat.pgalloc_dma
8718205 ± 6% -40.3% 5207463 ± 26% proc-vmstat.pgalloc_dma32
1.346e+08 -30.0% 94227189 proc-vmstat.pgalloc_normal
1.451e+08 -31.4% 99498336 proc-vmstat.pgfree
63642 ± 3% -7.3% 59028 ± 5% proc-vmstat.slabs_scanned
456.00 ±127% +223.0% 1473 ± 21% proc-vmstat.swap_ra
345.00 ±165% +286.0% 1331 ± 20% proc-vmstat.swap_ra_hit
621120 ± 3% -12.0% 546816 ± 2% proc-vmstat.unevictable_pgs_scanned
5.92e+09 ± 3% +6.5% 6.304e+09 perf-stat.i.branch-instructions
61633368 ± 2% -24.6% 46490910 perf-stat.i.cache-misses
5041 -26.5% 3706 ± 3% perf-stat.i.context-switches
388.19 -23.9% 295.51 perf-stat.i.cpu-migrations
809.81 ± 8% +21.0% 980.06 ± 7% perf-stat.i.cycles-between-cache-misses
6.351e+09 +6.7% 6.776e+09 ± 2% perf-stat.i.dTLB-loads
0.15 ± 5% +0.0 0.18 ± 5% perf-stat.i.dTLB-store-miss-rate%
3.606e+09 ± 3% -7.4% 3.339e+09 ± 2% perf-stat.i.dTLB-stores
6823772 ± 5% +14.2% 7794812 ± 8% perf-stat.i.iTLB-load-misses
2.4e+10 ± 3% +4.7% 2.513e+10 perf-stat.i.instructions
112.11 +3.1% 115.57 perf-stat.i.metric.M/sec
666516 ± 2% +12.5% 749805 ± 2% perf-stat.i.minor-faults
61.38 ± 2% +20.6 82.01 perf-stat.i.node-store-miss-rate%
15267649 ± 4% -25.5% 11374283 ± 10% perf-stat.i.node-store-misses
10435945 ± 6% -79.8% 2104898 ± 10% perf-stat.i.node-stores
666572 ± 2% +12.5% 749872 ± 2% perf-stat.i.page-faults
659.86 ± 6% +28.5% 848.20 ± 6% perf-stat.overall.cycles-between-cache-misses
0.16 ± 3% +0.0 0.20 ± 3% perf-stat.overall.dTLB-store-miss-rate%
59.26 ± 2% +25.0 84.28 perf-stat.overall.node-store-miss-rate%
8026 -6.9% 7474 perf-stat.overall.path-length
5.902e+09 ± 3% +6.3% 6.274e+09 perf-stat.ps.branch-instructions
61374057 ± 2% -24.7% 46230779 perf-stat.ps.cache-misses
5019 -26.6% 3682 ± 3% perf-stat.ps.context-switches
386.19 -24.1% 293.29 perf-stat.ps.cpu-migrations
6.332e+09 +6.5% 6.744e+09 ± 2% perf-stat.ps.dTLB-loads
3.595e+09 ± 3% -7.6% 3.323e+09 ± 2% perf-stat.ps.dTLB-stores
6808353 ± 5% +14.1% 7769606 ± 8% perf-stat.ps.iTLB-load-misses
2.392e+10 ± 2% +4.6% 2.502e+10 perf-stat.ps.instructions
666070 ± 2% +12.2% 747635 ± 2% perf-stat.ps.minor-faults
15144045 ± 4% -25.7% 11246617 ± 10% perf-stat.ps.node-store-misses
10417852 ± 6% -79.9% 2096205 ± 10% perf-stat.ps.node-stores
666126 ± 2% +12.2% 747701 ± 2% perf-stat.ps.page-faults
3.555e+12 -6.8% 3.311e+12 perf-stat.total.instructions
12185 ± 8% +182.0% 34361 ± 8% sched_debug.cfs_rq:/.exec_clock.max
1188 ± 14% -98.8% 13.90 ± 53% sched_debug.cfs_rq:/.exec_clock.min
2713 ± 12% +171.7% 7373 ± 8% sched_debug.cfs_rq:/.exec_clock.stddev
158216 ± 20% -27.7% 114407 ± 17% sched_debug.cfs_rq:/.load.stddev
43.24 ± 15% -23.4% 33.11 ± 19% sched_debug.cfs_rq:/.load_avg.avg
1022 ± 3% -28.2% 733.92 ± 3% sched_debug.cfs_rq:/.load_avg.max
170.08 ± 8% -25.7% 126.42 ± 9% sched_debug.cfs_rq:/.load_avg.stddev
119278 ± 11% +110.4% 250918 ± 6% sched_debug.cfs_rq:/.min_vruntime.max
23859 ± 13% +115.1% 51319 ± 14% sched_debug.cfs_rq:/.min_vruntime.stddev
0.06 ± 48% -42.5% 0.04 ± 11% sched_debug.cfs_rq:/.nr_spread_over.avg
0.23 ± 26% -32.0% 0.16 ± 8% sched_debug.cfs_rq:/.nr_spread_over.stddev
-50753 +175.6% -139895 sched_debug.cfs_rq:/.spread0.avg
-83164 +122.8% -185253 sched_debug.cfs_rq:/.spread0.min
23868 ± 13% +115.0% 51318 ± 14% sched_debug.cfs_rq:/.spread0.stddev
140364 ± 14% +23.7% 173587 ± 4% sched_debug.cpu.avg_idle.stddev
678.88 ± 4% +20.0% 814.72 ± 5% sched_debug.cpu.clock_task.stddev
0.00 ± 26% +566.5% 0.00 ± 91% sched_debug.cpu.next_balance.stddev
3913 -17.4% 3233 ± 2% sched_debug.cpu.nr_switches.avg
14639 ± 12% +55.0% 22684 ± 19% sched_debug.cpu.nr_switches.max
1791 ± 8% -37.4% 1121 ± 7% sched_debug.cpu.nr_switches.min
2060 ± 8% +27.8% 2633 ± 6% sched_debug.cpu.nr_switches.stddev
2394 ± 2% -28.3% 1716 ± 5% sched_debug.cpu.sched_count.avg
10139 ± 24% +79.3% 18179 ± 24% sched_debug.cpu.sched_count.max
961.92 ± 11% -54.9% 434.25 sched_debug.cpu.sched_count.min
1375 ± 14% +47.3% 2025 ± 10% sched_debug.cpu.sched_count.stddev
626.15 ± 4% -19.1% 506.24 ± 5% sched_debug.cpu.sched_goidle.avg
4252 ± 25% +109.7% 8919 ± 24% sched_debug.cpu.sched_goidle.max
277.58 ± 11% -63.6% 101.00 ± 4% sched_debug.cpu.sched_goidle.min
554.97 ± 9% +71.0% 949.07 ± 11% sched_debug.cpu.sched_goidle.stddev
1105 ± 2% -30.2% 771.36 ± 4% sched_debug.cpu.ttwu_count.avg
4824 ± 23% +89.2% 9126 ± 21% sched_debug.cpu.ttwu_count.max
364.25 ± 13% -61.1% 141.83 sched_debug.cpu.ttwu_count.min
727.66 ± 8% +44.9% 1054 ± 8% sched_debug.cpu.ttwu_count.stddev
754.79 -25.7% 560.55 ± 5% sched_debug.cpu.ttwu_local.avg
1844 ± 5% +94.2% 3581 ± 25% sched_debug.cpu.ttwu_local.max
281.33 ± 10% -50.5% 139.25 sched_debug.cpu.ttwu_local.min
338.12 ± 10% +70.6% 576.68 ± 5% sched_debug.cpu.ttwu_local.stddev
4.68 ± 4% -4.0 0.73 ± 44% perf-profile.calltrace.cycles-pp.add_to_swap_cache.add_to_swap.shrink_page_list.shrink_inactive_list.shrink_lruvec
6.00 ± 4% -3.2 2.75 ± 13% perf-profile.calltrace.cycles-pp.add_to_swap.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
1.82 ± 15% -1.5 0.28 ±100% perf-profile.calltrace.cycles-pp.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page
1.74 ± 5% -1.4 0.29 ±100% perf-profile.calltrace.cycles-pp.__delete_from_swap_cache.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec
2.55 ± 11% -1.4 1.16 ± 12% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault
1.44 ± 15% -1.0 0.47 ±106% perf-profile.calltrace.cycles-pp.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
1.43 ± 16% -1.0 0.47 ±106% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt
1.43 ± 16% -1.0 0.47 ±106% perf-profile.calltrace.cycles-pp.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
1.55 ± 16% -0.8 0.70 ± 68% perf-profile.calltrace.cycles-pp.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
2.10 ± 15% -0.5 1.61 ± 12% perf-profile.calltrace.cycles-pp.mem_cgroup_charge.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
2.12 ± 8% +0.4 2.52 ± 10% perf-profile.calltrace.cycles-pp.end_page_writeback.pmem_rw_page.bdev_write_page.__swap_writepage.pageout
2.13 ± 5% +0.4 2.58 ± 11% perf-profile.calltrace.cycles-pp.try_to_unmap_one.rmap_walk_anon.try_to_unmap.shrink_page_list.shrink_inactive_list
0.95 ± 26% +0.5 1.47 ± 13% perf-profile.calltrace.cycles-pp.swap_writepage.pageout.shrink_page_list.shrink_inactive_list.shrink_lruvec
1.28 ± 4% +0.5 1.81 ± 14% perf-profile.calltrace.cycles-pp.page_counter_uncharge.mem_cgroup_swapout.__remove_mapping.shrink_page_list.shrink_inactive_list
1.27 ± 4% +0.5 1.80 ± 14% perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.mem_cgroup_swapout.__remove_mapping.shrink_page_list
0.54 ± 71% +0.9 1.41 ± 10% perf-profile.calltrace.cycles-pp.__test_set_page_writeback.bdev_write_page.__swap_writepage.pageout.shrink_page_list
2.52 ± 5% +0.9 3.44 ± 14% perf-profile.calltrace.cycles-pp.mem_cgroup_swapout.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec
5.49 ± 4% -4.8 0.69 ± 44% perf-profile.children.cycles-pp.__softirqentry_text_start
4.76 ± 4% -3.7 1.09 ± 12% perf-profile.children.cycles-pp.add_to_swap_cache
3.74 ± 5% -3.6 0.16 ± 19% perf-profile.children.cycles-pp.xas_create_range
3.74 ± 5% -3.5 0.27 ± 15% perf-profile.children.cycles-pp.xas_create
6.06 ± 4% -3.3 2.77 ± 13% perf-profile.children.cycles-pp.add_to_swap
2.32 ± 11% -1.6 0.70 ± 44% perf-profile.children.cycles-pp.do_softirq_own_stack
2.61 ± 11% -1.6 1.04 ± 35% perf-profile.children.cycles-pp.irq_exit_rcu
5.20 ± 11% -1.5 3.68 ± 11% perf-profile.children.cycles-pp.get_page_from_freelist
3.78 ± 14% -1.3 2.45 ± 12% perf-profile.children.cycles-pp.prep_new_page
3.61 ± 14% -1.3 2.32 ± 12% perf-profile.children.cycles-pp.clear_page_erms
1.21 ± 7% -1.0 0.21 ± 20% perf-profile.children.cycles-pp.xas_store
1.75 ± 5% -0.9 0.86 ± 13% perf-profile.children.cycles-pp.__delete_from_swap_cache
2.17 ± 14% -0.5 1.63 ± 12% perf-profile.children.cycles-pp.mem_cgroup_charge
0.61 ± 17% -0.5 0.15 ± 25% perf-profile.children.cycles-pp.worker_thread
0.58 ± 19% -0.4 0.13 ± 24% perf-profile.children.cycles-pp.process_one_work
0.82 ± 7% -0.3 0.56 ± 10% perf-profile.children.cycles-pp.rmqueue
0.75 ± 13% -0.2 0.52 ± 8% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.55 ± 6% -0.2 0.37 ± 11% perf-profile.children.cycles-pp.free_pcppages_bulk
0.77 ± 13% -0.2 0.59 ± 5% perf-profile.children.cycles-pp.__count_memcg_events
0.19 ± 30% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.__libc_fork
0.48 ± 13% -0.1 0.34 ± 13% perf-profile.children.cycles-pp.try_charge
0.36 ± 13% -0.1 0.23 ± 14% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
0.34 ± 12% -0.1 0.22 ± 13% perf-profile.children.cycles-pp.do_syscall_64
0.15 ± 22% -0.1 0.03 ±100% perf-profile.children.cycles-pp.__do_sys_clone
0.15 ± 22% -0.1 0.03 ±100% perf-profile.children.cycles-pp._do_fork
0.21 ± 7% -0.1 0.11 ± 23% perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
0.35 ± 5% -0.1 0.27 ± 13% perf-profile.children.cycles-pp.rmqueue_bulk
0.15 ± 14% -0.1 0.09 ± 23% perf-profile.children.cycles-pp.drm_atomic_helper_dirtyfb
0.15 ± 14% -0.1 0.09 ± 23% perf-profile.children.cycles-pp.memcpy_toio
0.15 ± 14% -0.1 0.09 ± 23% perf-profile.children.cycles-pp.drm_atomic_helper_commit
0.15 ± 14% -0.1 0.09 ± 23% perf-profile.children.cycles-pp.commit_tail
0.15 ± 14% -0.1 0.09 ± 23% perf-profile.children.cycles-pp.drm_atomic_helper_commit_tail
0.15 ± 14% -0.1 0.09 ± 23% perf-profile.children.cycles-pp.drm_atomic_helper_commit_planes
0.15 ± 14% -0.1 0.09 ± 23% perf-profile.children.cycles-pp.mgag200_simple_display_pipe_update
0.15 ± 14% -0.1 0.09 ± 23% perf-profile.children.cycles-pp.mgag200_handle_damage
0.15 ± 14% -0.1 0.09 ± 23% perf-profile.children.cycles-pp.drm_fb_memcpy_dstclip
0.10 ± 8% -0.0 0.06 ± 14% perf-profile.children.cycles-pp.schedule
0.08 ± 19% +0.0 0.11 ± 12% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.14 ± 3% +0.0 0.17 ± 10% perf-profile.children.cycles-pp.default_send_IPI_single_phys
0.12 ± 3% +0.0 0.16 ± 9% perf-profile.children.cycles-pp.scan_swap_map_try_ssd_cluster
0.10 ± 11% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.inc_node_page_state
0.03 ±102% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.03 ±102% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.13 ± 11% +0.0 0.18 ± 14% perf-profile.children.cycles-pp.cpumask_any_but
0.23 ± 8% +0.1 0.29 ± 10% perf-profile.children.cycles-pp.scan_swap_map_slots
0.15 ± 12% +0.1 0.23 ± 7% perf-profile.children.cycles-pp.lock_page_memcg
0.31 ± 5% +0.1 0.39 ± 8% perf-profile.children.cycles-pp.test_clear_page_writeback
0.29 ± 6% +0.1 0.38 ± 11% perf-profile.children.cycles-pp.get_swap_pages
0.37 ± 7% +0.1 0.46 ± 16% perf-profile.children.cycles-pp.llist_reverse_order
0.39 ± 8% +0.1 0.52 ± 11% perf-profile.children.cycles-pp.tlb_is_not_lazy
0.35 ± 8% +0.1 0.48 ± 14% perf-profile.children.cycles-pp.zswap_frontswap_store
0.45 ± 2% +0.2 0.63 ± 14% perf-profile.children.cycles-pp.page_swapcount
0.49 ± 3% +0.2 0.67 ± 14% perf-profile.children.cycles-pp.try_to_free_swap
0.82 ± 4% +0.2 1.01 ± 11% perf-profile.children.cycles-pp.put_swap_page
0.55 ± 9% +0.2 0.79 ± 14% perf-profile.children.cycles-pp.__frontswap_store
0.81 ± 6% +0.3 1.07 ± 13% perf-profile.children.cycles-pp.get_swap_page
0.62 ± 7% +0.3 0.89 ± 17% perf-profile.children.cycles-pp.page_swap_info
0.58 ± 8% +0.3 0.85 ± 17% perf-profile.children.cycles-pp.swap_set_page_dirty
2.18 ± 8% +0.4 2.56 ± 10% perf-profile.children.cycles-pp.end_page_writeback
1.05 ± 3% +0.4 1.45 ± 11% perf-profile.children.cycles-pp._swap_info_get
1.04 ± 5% +0.4 1.44 ± 11% perf-profile.children.cycles-pp.__test_set_page_writeback
1.07 ± 6% +0.4 1.48 ± 13% perf-profile.children.cycles-pp.swap_writepage
2.17 ± 5% +0.4 2.61 ± 11% perf-profile.children.cycles-pp.try_to_unmap_one
2.23 ± 5% +0.5 2.71 ± 15% perf-profile.children.cycles-pp.__sysvec_call_function_single
2.52 ± 6% +0.5 3.05 ± 15% perf-profile.children.cycles-pp.sysvec_call_function_single
1.30 ± 4% +0.5 1.83 ± 14% perf-profile.children.cycles-pp.page_counter_uncharge
1.29 ± 4% +0.5 1.82 ± 14% perf-profile.children.cycles-pp.page_counter_cancel
2.56 ± 5% +0.9 3.46 ± 14% perf-profile.children.cycles-pp.mem_cgroup_swapout
3.38 ± 13% -1.2 2.17 ± 12% perf-profile.self.cycles-pp.clear_page_erms
0.77 ± 13% -0.2 0.59 ± 5% perf-profile.self.cycles-pp.__count_memcg_events
0.22 ± 8% -0.1 0.08 ± 15% perf-profile.self.cycles-pp.xas_store
0.32 ± 15% -0.1 0.18 ± 12% perf-profile.self.cycles-pp.try_charge
0.34 ± 9% -0.1 0.22 ± 9% perf-profile.self.cycles-pp.mem_cgroup_charge
0.35 ± 7% -0.1 0.24 ± 17% perf-profile.self.cycles-pp.xas_create
0.11 ± 10% -0.1 0.04 ± 58% perf-profile.self.cycles-pp.smp_call_function_many_cond
0.18 ± 4% -0.0 0.13 ± 16% perf-profile.self.cycles-pp.rmqueue_bulk
0.13 ± 13% -0.0 0.09 ± 13% perf-profile.self.cycles-pp.rmqueue
0.12 ± 15% -0.0 0.08 ± 14% perf-profile.self.cycles-pp.page_remove_rmap
0.15 ± 15% -0.0 0.12 ± 21% perf-profile.self.cycles-pp.prep_new_page
0.09 ± 13% +0.0 0.12 ± 3% perf-profile.self.cycles-pp.inc_node_page_state
0.05 ± 58% +0.0 0.08 ± 8% perf-profile.self.cycles-pp.scan_swap_map_slots
0.01 ±173% +0.0 0.05 ± 9% perf-profile.self.cycles-pp.dec_zone_page_state
0.01 ±173% +0.0 0.05 ± 9% perf-profile.self.cycles-pp.mutex_unlock
0.01 ±173% +0.0 0.06 ± 20% perf-profile.self.cycles-pp.irqtime_account_process_tick
0.13 ± 11% +0.1 0.19 ± 16% perf-profile.self.cycles-pp.mem_cgroup_swapout
0.08 ± 15% +0.1 0.15 ± 14% perf-profile.self.cycles-pp.arch_tlbbatch_flush
0.15 ± 11% +0.1 0.21 ± 10% perf-profile.self.cycles-pp.lock_page_memcg
0.37 ± 7% +0.1 0.46 ± 16% perf-profile.self.cycles-pp.llist_reverse_order
0.18 ± 15% +0.1 0.28 ± 13% perf-profile.self.cycles-pp.__frontswap_store
0.57 +0.1 0.67 ± 11% perf-profile.self.cycles-pp.__delete_from_swap_cache
0.60 ± 5% +0.1 0.70 ± 12% perf-profile.self.cycles-pp.add_to_swap_cache
0.33 ± 7% +0.1 0.45 ± 13% perf-profile.self.cycles-pp.zswap_frontswap_store
0.45 ± 2% +0.1 0.57 ± 14% perf-profile.self.cycles-pp.flush_tlb_func_common
0.38 ± 7% +0.1 0.52 ± 13% perf-profile.self.cycles-pp.get_swap_page
0.74 ± 3% +0.2 0.91 ± 10% perf-profile.self.cycles-pp.try_to_unmap_one
0.57 ± 9% +0.2 0.81 ± 16% perf-profile.self.cycles-pp.page_swap_info
0.72 ± 3% +0.3 1.03 ± 11% perf-profile.self.cycles-pp.__test_set_page_writeback
1.68 ± 6% +0.3 2.00 ± 9% perf-profile.self.cycles-pp.end_page_writeback
0.98 ± 3% +0.4 1.35 ± 12% perf-profile.self.cycles-pp._swap_info_get
1.22 ± 5% +0.5 1.75 ± 14% perf-profile.self.cycles-pp.page_counter_cancel
52345 ± 17% -84.0% 8384 ± 41% softirqs.CPU0.RCU
19063 ± 7% -20.7% 15125 ± 6% softirqs.CPU0.SCHED
36466 ± 32% -86.4% 4972 ± 11% softirqs.CPU1.RCU
36608 ± 40% -87.4% 4628 ± 19% softirqs.CPU10.RCU
26212 ± 19% -86.4% 3553 ± 2% softirqs.CPU100.RCU
27448 ± 17% -86.5% 3706 ± 6% softirqs.CPU101.RCU
27842 ± 29% -87.2% 3562 ± 4% softirqs.CPU102.RCU
26457 ± 29% -87.1% 3409 ± 4% softirqs.CPU103.RCU
27085 ± 19% -86.7% 3602 ± 6% softirqs.CPU104.RCU
27395 ± 20% -86.8% 3611 ± 8% softirqs.CPU105.RCU
25853 ± 23% -86.6% 3457 ± 5% softirqs.CPU106.RCU
27375 ± 24% -86.5% 3696 ± 6% softirqs.CPU107.RCU
41902 ± 27% -90.1% 4168 ± 47% softirqs.CPU108.RCU
18997 ± 7% -15.2% 16115 ± 3% softirqs.CPU108.SCHED
35065 ± 28% -91.6% 2936 ± 11% softirqs.CPU109.RCU
35638 ± 34% -85.2% 5271 ± 41% softirqs.CPU11.RCU
34068 ± 26% -91.6% 2847 ± 12% softirqs.CPU110.RCU
19277 ± 6% -18.1% 15790 ± 9% softirqs.CPU110.SCHED
34270 ± 24% -89.9% 3470 ± 31% softirqs.CPU111.RCU
34839 ± 33% -90.4% 3339 ± 3% softirqs.CPU112.RCU
38104 ± 27% -91.3% 3308 ± 5% softirqs.CPU113.RCU
32552 ± 31% -90.9% 2966 ± 8% softirqs.CPU114.RCU
33623 ± 26% -90.2% 3303 ± 7% softirqs.CPU115.RCU
35550 ± 27% -90.4% 3417 ± 8% softirqs.CPU116.RCU
33356 ± 27% -89.8% 3415 ± 5% softirqs.CPU117.RCU
33993 ± 32% -90.0% 3405 ± 12% softirqs.CPU118.RCU
36253 ± 26% -90.3% 3519 ± 3% softirqs.CPU119.RCU
34824 ± 31% -87.1% 4494 ± 9% softirqs.CPU12.RCU
32768 ± 39% -88.8% 3661 ± 11% softirqs.CPU120.RCU
36094 ± 23% -90.6% 3401 ± 3% softirqs.CPU121.RCU
34497 ± 31% -89.0% 3807 ± 12% softirqs.CPU122.RCU
36262 ± 25% -88.4% 4193 ± 32% softirqs.CPU123.RCU
34252 ± 25% -90.1% 3394 ± 5% softirqs.CPU124.RCU
36674 ± 21% -88.3% 4301 ± 25% softirqs.CPU125.RCU
27951 ± 7% -88.9% 3101 ± 10% softirqs.CPU126.RCU
18871 ± 7% -14.7% 16104 ± 3% softirqs.CPU126.SCHED
23136 ± 14% -88.8% 2600 ± 4% softirqs.CPU127.RCU
19431 ± 5% -22.5% 15053 ± 14% softirqs.CPU127.SCHED
27391 ± 5% -88.3% 3205 softirqs.CPU128.RCU
25272 ± 14% -86.9% 3319 ± 4% softirqs.CPU129.RCU
35293 ± 39% -88.5% 4050 ± 11% softirqs.CPU13.RCU
26272 ± 10% -88.4% 3041 ± 10% softirqs.CPU130.RCU
25115 ± 5% -86.9% 3291 ± 3% softirqs.CPU131.RCU
23529 ± 6% -86.9% 3071 ± 10% softirqs.CPU132.RCU
27781 ± 13% -86.5% 3755 ± 15% softirqs.CPU133.RCU
26065 ± 16% -87.4% 3289 ± 10% softirqs.CPU134.RCU
23998 ± 10% -86.3% 3277 ± 3% softirqs.CPU135.RCU
26249 ± 18% -87.3% 3328 ± 5% softirqs.CPU136.RCU
21876 ± 13% -82.2% 3887 ± 14% softirqs.CPU137.RCU
22414 ± 10% -82.1% 4001 ± 18% softirqs.CPU138.RCU
24824 ± 5% -85.5% 3595 ± 9% softirqs.CPU139.RCU
34692 ± 44% -88.7% 3921 ± 6% softirqs.CPU14.RCU
22513 ± 12% -84.8% 3418 ± 7% softirqs.CPU140.RCU
23522 ± 5% -85.0% 3517 ± 7% softirqs.CPU141.RCU
22534 ± 11% -84.6% 3469 ± 8% softirqs.CPU142.RCU
25471 ± 22% -84.8% 3869 ± 18% softirqs.CPU143.RCU
33424 ± 43% -87.6% 4133 ± 9% softirqs.CPU15.RCU
32536 ± 25% -89.7% 3340 ± 4% softirqs.CPU16.RCU
31138 ± 29% -88.7% 3526 ± 7% softirqs.CPU17.RCU
51275 ± 11% -92.3% 3956 ± 11% softirqs.CPU18.RCU
16486 ± 6% -21.1% 13010 ± 12% softirqs.CPU18.SCHED
37422 ± 27% -86.3% 5144 ± 42% softirqs.CPU19.RCU
18953 ± 6% -29.2% 13419 ± 20% softirqs.CPU19.SCHED
38774 ± 41% -88.8% 4336 ± 11% softirqs.CPU2.RCU
19636 ± 2% -19.1% 15894 ± 7% softirqs.CPU2.SCHED
37496 ± 28% -90.0% 3733 ± 8% softirqs.CPU20.RCU
19096 ± 5% -19.5% 15372 ± 19% softirqs.CPU20.SCHED
36543 ± 22% -89.9% 3688 ± 5% softirqs.CPU21.RCU
33600 ± 22% -88.2% 3950 ± 16% softirqs.CPU22.RCU
38695 ± 21% -90.1% 3830 ± 4% softirqs.CPU23.RCU
35007 ± 22% -89.8% 3559 ± 7% softirqs.CPU24.RCU
19064 ± 4% -12.9% 16604 ± 8% softirqs.CPU24.SCHED
37485 ± 21% -90.1% 3728 ± 6% softirqs.CPU25.RCU
33627 ± 28% -88.8% 3763 ± 7% softirqs.CPU26.RCU
36026 ± 24% -89.4% 3829 ± 3% softirqs.CPU27.RCU
34690 ± 29% -88.4% 4028 ± 5% softirqs.CPU28.RCU
36943 ± 24% -89.5% 3882 ± 3% softirqs.CPU29.RCU
38484 ± 41% -85.7% 5499 ± 39% softirqs.CPU3.RCU
34943 ± 26% -89.3% 3743 ± 3% softirqs.CPU30.RCU
34022 ± 31% -88.5% 3927 ± 5% softirqs.CPU31.RCU
34577 ± 26% -90.2% 3392 ± 7% softirqs.CPU32.RCU
34324 ± 39% -90.2% 3379 ± 4% softirqs.CPU33.RCU
32528 ± 21% -89.1% 3560 ± 11% softirqs.CPU34.RCU
35054 ± 38% -86.3% 4802 ± 28% softirqs.CPU35.RCU
55830 ± 22% -91.4% 4827 ± 19% softirqs.CPU36.RCU
18131 ± 5% -32.8% 12178 ± 16% softirqs.CPU36.SCHED
44535 ± 25% -91.5% 3800 ± 6% softirqs.CPU37.RCU
18780 ± 6% -17.0% 15579 ± 12% softirqs.CPU37.SCHED
46422 ± 32% -90.5% 4423 ± 17% softirqs.CPU38.RCU
18645 ± 6% -27.3% 13557 ± 20% softirqs.CPU38.SCHED
43345 ± 22% -88.7% 4918 ± 36% softirqs.CPU39.RCU
32960 ± 30% -88.1% 3917 ± 4% softirqs.CPU4.RCU
44490 ± 29% -91.7% 3685 ± 2% softirqs.CPU40.RCU
46120 ± 29% -91.8% 3798 ± 2% softirqs.CPU41.RCU
42257 ± 32% -91.6% 3551 ± 4% softirqs.CPU42.RCU
46039 ± 20% -90.9% 4188 ± 21% softirqs.CPU43.RCU
44088 ± 29% -91.8% 3604 softirqs.CPU44.RCU
41672 ± 31% -91.2% 3647 softirqs.CPU45.RCU
44499 ± 25% -91.9% 3597 ± 8% softirqs.CPU46.RCU
47826 ± 27% -92.4% 3616 softirqs.CPU47.RCU
40295 ± 41% -91.3% 3518 softirqs.CPU48.RCU
41600 ± 24% -91.6% 3497 softirqs.CPU49.RCU
36523 ± 32% -88.8% 4102 ± 6% softirqs.CPU5.RCU
43678 ± 34% -92.3% 3347 ± 4% softirqs.CPU50.RCU
44579 ± 28% -90.7% 4125 ± 15% softirqs.CPU51.RCU
43503 ± 28% -92.2% 3396 ± 3% softirqs.CPU52.RCU
44468 ± 18% -90.5% 4219 ± 33% softirqs.CPU53.RCU
49302 ± 5% -90.4% 4747 ± 22% softirqs.CPU54.RCU
16826 ± 6% -23.7% 12844 ± 10% softirqs.CPU54.SCHED
34574 ± 13% -87.8% 4220 ± 18% softirqs.CPU55.RCU
18223 ± 8% -35.1% 11830 ± 25% softirqs.CPU55.SCHED
34444 ± 10% -88.1% 4102 ± 21% softirqs.CPU56.RCU
29792 ± 13% -87.5% 3728 ± 11% softirqs.CPU57.RCU
29979 ± 15% -86.7% 3997 ± 16% softirqs.CPU58.RCU
29522 ± 13% -87.5% 3687 ± 8% softirqs.CPU59.RCU
34227 ± 39% -88.9% 3815 ± 6% softirqs.CPU6.RCU
19618 ± 4% -10.2% 17615 ± 4% softirqs.CPU6.SCHED
29331 ± 13% -88.2% 3454 ± 8% softirqs.CPU60.RCU
30844 ± 15% -88.5% 3555 ± 4% softirqs.CPU61.RCU
30676 ± 16% -87.1% 3947 ± 18% softirqs.CPU62.RCU
31209 ± 20% -89.1% 3399 ± 8% softirqs.CPU63.RCU
32203 ± 16% -89.0% 3550 ± 12% softirqs.CPU64.RCU
26618 ± 18% -86.7% 3543 ± 4% softirqs.CPU65.RCU
28476 ± 20% -86.5% 3856 ± 11% softirqs.CPU66.RCU
29713 ± 12% -87.1% 3842 ± 14% softirqs.CPU67.RCU
28361 ± 25% -87.2% 3628 ± 6% softirqs.CPU68.RCU
30363 ± 18% -87.3% 3850 ± 9% softirqs.CPU69.RCU
40268 ± 49% -90.1% 3997 ± 6% softirqs.CPU7.RCU
29067 ± 11% -83.9% 4679 ± 38% softirqs.CPU70.RCU
30143 ± 14% -87.5% 3758 ± 4% softirqs.CPU71.RCU
33102 ± 37% -90.7% 3088 ± 6% softirqs.CPU72.RCU
29408 ± 29% -88.2% 3474 ± 5% softirqs.CPU73.RCU
19427 -18.5% 15841 ± 10% softirqs.CPU73.SCHED
31208 ± 34% -89.4% 3312 softirqs.CPU74.RCU
19059 ± 2% -12.2% 16733 ± 7% softirqs.CPU74.SCHED
29231 ± 42% -88.2% 3446 ± 2% softirqs.CPU75.RCU
27628 ± 26% -86.2% 3814 ± 21% softirqs.CPU76.RCU
26377 ± 27% -87.0% 3428 ± 2% softirqs.CPU77.RCU
26349 ± 50% -86.3% 3622 ± 18% softirqs.CPU78.RCU
30531 ± 48% -88.9% 3389 ± 7% softirqs.CPU79.RCU
35512 ± 42% -88.0% 4272 ± 4% softirqs.CPU8.RCU
25522 ± 36% -86.9% 3339 ± 16% softirqs.CPU80.RCU
27427 ± 48% -86.2% 3784 ± 16% softirqs.CPU81.RCU
25383 ± 40% -82.8% 4368 ± 33% softirqs.CPU82.RCU
26528 ± 40% -84.2% 4180 ± 15% softirqs.CPU83.RCU
26637 ± 33% -85.5% 3849 ± 9% softirqs.CPU84.RCU
27814 ± 30% -86.6% 3735 ± 8% softirqs.CPU85.RCU
27615 ± 39% -82.7% 4764 ± 19% softirqs.CPU86.RCU
27935 ± 40% -85.1% 4154 ± 14% softirqs.CPU87.RCU
27323 ± 26% -84.0% 4380 ± 15% softirqs.CPU88.RCU
26788 ± 28% -85.9% 3788 ± 2% softirqs.CPU89.RCU
34530 ± 43% -88.9% 3845 ± 11% softirqs.CPU9.RCU
34234 ± 12% -90.2% 3338 softirqs.CPU90.RCU
19353 ± 3% -24.2% 14671 ± 2% softirqs.CPU90.SCHED
27937 ± 28% -85.9% 3926 ± 15% softirqs.CPU91.RCU
19542 ± 6% -15.2% 16566 ± 3% softirqs.CPU91.SCHED
28497 ± 22% -87.9% 3441 ± 14% softirqs.CPU92.RCU
28320 ± 18% -89.1% 3074 ± 4% softirqs.CPU93.RCU
26306 ± 11% -85.4% 3840 ± 27% softirqs.CPU94.RCU
19738 ± 4% -12.9% 17184 ± 9% softirqs.CPU94.SCHED
26514 ± 8% -87.0% 3439 ± 23% softirqs.CPU95.RCU
25298 ± 14% -86.9% 3318 ± 10% softirqs.CPU96.RCU
19776 ± 3% -13.9% 17031 ± 6% softirqs.CPU96.SCHED
27549 ± 12% -87.5% 3438 ± 6% softirqs.CPU97.RCU
28550 ± 25% -88.7% 3235 ± 9% softirqs.CPU98.RCU
26684 ± 19% -87.0% 3473 ± 6% softirqs.CPU99.RCU
4756845 ± 5% -88.5% 544700 softirqs.RCU
30919 ± 3% -13.8% 26641 ± 2% softirqs.TIMER
145.50 ± 2% -56.0% 64.00 ±100% interrupts.175:PCI-MSI.512000-edge.ahci[0000:00:1f.2]
112.25 ± 3% -8.7% 102.50 ± 4% interrupts.45:PCI-MSI.1572864-edge.eth0-TxRx-0
78296627 +5.0% 82208891 interrupts.CAL:Function_call_interrupts
112.25 ± 3% -8.7% 102.50 ± 4% interrupts.CPU0.45:PCI-MSI.1572864-edge.eth0-TxRx-0
1231976 ± 21% +185.1% 3511906 ± 22% interrupts.CPU0.CAL:Function_call_interrupts
1343291 ± 20% +181.1% 3775855 ± 22% interrupts.CPU0.TLB:TLB_shootdowns
608271 ± 75% -99.3% 4379 ±170% interrupts.CPU10.CAL:Function_call_interrupts
457.50 ± 61% -79.9% 91.75 ± 38% interrupts.CPU10.NMI:Non-maskable_interrupts
457.50 ± 61% -79.9% 91.75 ± 38% interrupts.CPU10.PMI:Performance_monitoring_interrupts
89.00 ± 59% -89.9% 9.00 ± 79% interrupts.CPU10.RES:Rescheduling_interrupts
657079 ± 75% -99.3% 4336 ±173% interrupts.CPU10.TLB:TLB_shootdowns
411365 ± 29% -99.7% 1267 ± 64% interrupts.CPU100.CAL:Function_call_interrupts
444274 ± 30% -99.7% 1249 ± 69% interrupts.CPU100.TLB:TLB_shootdowns
659.50 ± 68% -74.6% 167.50 ± 22% interrupts.CPU102.NMI:Non-maskable_interrupts
659.50 ± 68% -74.6% 167.50 ± 22% interrupts.CPU102.PMI:Performance_monitoring_interrupts
302129 ± 60% -99.9% 166.75 ± 59% interrupts.CPU103.CAL:Function_call_interrupts
323.75 ± 53% -56.4% 141.00 ± 34% interrupts.CPU103.NMI:Non-maskable_interrupts
323.75 ± 53% -56.4% 141.00 ± 34% interrupts.CPU103.PMI:Performance_monitoring_interrupts
327333 ± 60% -100.0% 32.50 ±166% interrupts.CPU103.TLB:TLB_shootdowns
312284 ± 48% -53.9% 144043 ± 50% interrupts.CPU104.CAL:Function_call_interrupts
338648 ± 48% -54.3% 154674 ± 50% interrupts.CPU104.TLB:TLB_shootdowns
115.50 ± 72% -81.0% 22.00 ± 19% interrupts.CPU108.RES:Rescheduling_interrupts
626496 ± 83% -90.8% 57927 ±172% interrupts.CPU11.CAL:Function_call_interrupts
78.00 ± 60% -73.1% 21.00 ± 86% interrupts.CPU11.RES:Rescheduling_interrupts
679997 ± 83% -90.5% 64580 ±173% interrupts.CPU11.TLB:TLB_shootdowns
81.25 ± 51% -80.3% 16.00 ± 11% interrupts.CPU111.RES:Rescheduling_interrupts
87.50 ± 71% -89.7% 9.00 ± 40% interrupts.CPU112.RES:Rescheduling_interrupts
645406 ± 66% -89.8% 66133 ±103% interrupts.CPU113.CAL:Function_call_interrupts
103.50 ± 48% -89.6% 10.75 ± 47% interrupts.CPU113.RES:Rescheduling_interrupts
698561 ± 66% -89.8% 71091 ±104% interrupts.CPU113.TLB:TLB_shootdowns
78.00 ± 68% -86.2% 10.75 ± 41% interrupts.CPU114.RES:Rescheduling_interrupts
1141 ± 56% -72.3% 316.50 ±106% interrupts.CPU115.NMI:Non-maskable_interrupts
1141 ± 56% -72.3% 316.50 ±106% interrupts.CPU115.PMI:Performance_monitoring_interrupts
91.00 ± 62% -90.1% 9.00 ± 26% interrupts.CPU115.RES:Rescheduling_interrupts
757124 ± 58% -60.1% 302332 ±132% interrupts.CPU116.CAL:Function_call_interrupts
98.25 ± 46% -88.0% 11.75 ± 63% interrupts.CPU116.RES:Rescheduling_interrupts
823491 ± 58% -60.9% 322256 ±132% interrupts.CPU116.TLB:TLB_shootdowns
628006 ± 76% -73.8% 164708 ±146% interrupts.CPU117.CAL:Function_call_interrupts
987.50 ± 49% -85.9% 139.25 ± 30% interrupts.CPU117.NMI:Non-maskable_interrupts
987.50 ± 49% -85.9% 139.25 ± 30% interrupts.CPU117.PMI:Performance_monitoring_interrupts
109.25 ± 69% -93.1% 7.50 ± 52% interrupts.CPU117.RES:Rescheduling_interrupts
681610 ± 76% -74.4% 174707 ±146% interrupts.CPU117.TLB:TLB_shootdowns
104.00 ± 55% -87.7% 12.75 ± 78% interrupts.CPU119.RES:Rescheduling_interrupts
471.25 ± 54% -80.9% 90.00 ± 38% interrupts.CPU12.NMI:Non-maskable_interrupts
471.25 ± 54% -80.9% 90.00 ± 38% interrupts.CPU12.PMI:Performance_monitoring_interrupts
58.25 ± 79% -76.4% 13.75 ± 88% interrupts.CPU12.RES:Rescheduling_interrupts
878.50 ± 72% -88.3% 103.00 ± 43% interrupts.CPU120.NMI:Non-maskable_interrupts
878.50 ± 72% -88.3% 103.00 ± 43% interrupts.CPU120.PMI:Performance_monitoring_interrupts
64.00 ± 77% -85.5% 9.25 ± 68% interrupts.CPU120.RES:Rescheduling_interrupts
868.50 ± 54% -84.1% 138.50 ± 45% interrupts.CPU121.NMI:Non-maskable_interrupts
868.50 ± 54% -84.1% 138.50 ± 45% interrupts.CPU121.PMI:Performance_monitoring_interrupts
597802 ± 74% -99.8% 1291 ±164% interrupts.CPU122.CAL:Function_call_interrupts
804.25 ± 83% -85.9% 113.25 ± 65% interrupts.CPU122.NMI:Non-maskable_interrupts
804.25 ± 83% -85.9% 113.25 ± 65% interrupts.CPU122.PMI:Performance_monitoring_interrupts
95.50 ± 48% -92.7% 7.00 ± 56% interrupts.CPU122.RES:Rescheduling_interrupts
649164 ± 75% -99.8% 1290 ±173% interrupts.CPU122.TLB:TLB_shootdowns
114.50 ± 54% -93.7% 7.25 ± 48% interrupts.CPU123.RES:Rescheduling_interrupts
629610 ± 63% -69.6% 191537 ±168% interrupts.CPU124.CAL:Function_call_interrupts
1011 ± 58% -86.9% 132.50 ± 45% interrupts.CPU124.NMI:Non-maskable_interrupts
1011 ± 58% -86.9% 132.50 ± 45% interrupts.CPU124.PMI:Performance_monitoring_interrupts
70.25 ± 70% -85.4% 10.25 ± 44% interrupts.CPU124.RES:Rescheduling_interrupts
683705 ± 63% -69.7% 206886 ±168% interrupts.CPU124.TLB:TLB_shootdowns
739662 ± 49% -82.9% 126387 ±117% interrupts.CPU125.CAL:Function_call_interrupts
968.50 ± 63% -84.4% 151.00 ± 27% interrupts.CPU125.NMI:Non-maskable_interrupts
968.50 ± 63% -84.4% 151.00 ± 27% interrupts.CPU125.PMI:Performance_monitoring_interrupts
66.25 ± 66% -72.8% 18.00 ± 97% interrupts.CPU125.RES:Rescheduling_interrupts
804430 ± 50% -83.3% 134653 ±118% interrupts.CPU125.TLB:TLB_shootdowns
501165 ± 14% +221.8% 1612759 ± 13% interrupts.CPU126.CAL:Function_call_interrupts
545091 ± 14% +217.3% 1729798 ± 13% interrupts.CPU126.TLB:TLB_shootdowns
303.75 ± 35% +1609.7% 5193 ± 40% interrupts.CPU127.NMI:Non-maskable_interrupts
303.75 ± 35% +1609.7% 5193 ± 40% interrupts.CPU127.PMI:Performance_monitoring_interrupts
81.00 ± 59% -78.1% 17.75 ± 68% interrupts.CPU13.RES:Rescheduling_interrupts
221283 ± 18% -61.8% 84423 ± 90% interrupts.CPU132.CAL:Function_call_interrupts
240847 ± 18% -62.5% 90210 ± 89% interrupts.CPU132.TLB:TLB_shootdowns
228853 ± 19% -64.6% 81121 ±115% interrupts.CPU135.CAL:Function_call_interrupts
514.00 ± 87% -74.7% 130.25 ± 15% interrupts.CPU135.NMI:Non-maskable_interrupts
514.00 ± 87% -74.7% 130.25 ± 15% interrupts.CPU135.PMI:Performance_monitoring_interrupts
250600 ± 19% -65.4% 86702 ±116% interrupts.CPU135.TLB:TLB_shootdowns
319279 ± 47% -92.1% 25144 ±150% interrupts.CPU136.CAL:Function_call_interrupts
343.25 ± 76% -63.6% 125.00 ± 25% interrupts.CPU136.NMI:Non-maskable_interrupts
343.25 ± 76% -63.6% 125.00 ± 25% interrupts.CPU136.PMI:Performance_monitoring_interrupts
349192 ± 47% -92.4% 26548 ±150% interrupts.CPU136.TLB:TLB_shootdowns
270185 ± 37% -78.4% 58300 ±172% interrupts.CPU139.CAL:Function_call_interrupts
378.00 ± 56% -71.6% 107.25 ± 27% interrupts.CPU139.NMI:Non-maskable_interrupts
378.00 ± 56% -71.6% 107.25 ± 27% interrupts.CPU139.PMI:Performance_monitoring_interrupts
293913 ± 37% -78.9% 62022 ±173% interrupts.CPU139.TLB:TLB_shootdowns
78.00 ± 63% -76.3% 18.50 ±111% interrupts.CPU14.RES:Rescheduling_interrupts
162067 ± 56% -81.2% 30462 ±165% interrupts.CPU140.CAL:Function_call_interrupts
175459 ± 56% -82.0% 31590 ±166% interrupts.CPU140.TLB:TLB_shootdowns
253.25 ± 32% -50.4% 125.50 ± 35% interrupts.CPU143.NMI:Non-maskable_interrupts
253.25 ± 32% -50.4% 125.50 ± 35% interrupts.CPU143.PMI:Performance_monitoring_interrupts
66.75 ± 72% -82.4% 11.75 ± 49% interrupts.CPU15.RES:Rescheduling_interrupts
508268 ± 63% -100.0% 79.25 ± 22% interrupts.CPU16.CAL:Function_call_interrupts
1082 ± 98% -93.1% 75.25 ± 11% interrupts.CPU16.NMI:Non-maskable_interrupts
1082 ± 98% -93.1% 75.25 ± 11% interrupts.CPU16.PMI:Performance_monitoring_interrupts
58.50 ± 46% -86.3% 8.00 ± 81% interrupts.CPU16.RES:Rescheduling_interrupts
549265 ± 63% -100.0% 4.00 ±133% interrupts.CPU16.TLB:TLB_shootdowns
52.25 ± 65% -66.5% 17.50 ± 57% interrupts.CPU17.RES:Rescheduling_interrupts
1258584 ± 15% +208.1% 3878023 ± 32% interrupts.CPU18.CAL:Function_call_interrupts
1369478 ± 15% +204.9% 4175783 ± 31% interrupts.CPU18.TLB:TLB_shootdowns
629511 ± 88% +241.1% 2147207 ± 76% interrupts.CPU2.CAL:Function_call_interrupts
678734 ± 89% +237.7% 2292112 ± 75% interrupts.CPU2.TLB:TLB_shootdowns
620.75 ± 72% -82.7% 107.50 ± 33% interrupts.CPU21.NMI:Non-maskable_interrupts
620.75 ± 72% -82.7% 107.50 ± 33% interrupts.CPU21.PMI:Performance_monitoring_interrupts
50.50 ± 71% -71.8% 14.25 ± 84% interrupts.CPU21.RES:Rescheduling_interrupts
63.75 ± 43% -78.8% 13.50 ± 99% interrupts.CPU22.RES:Rescheduling_interrupts
66.75 ± 50% -80.5% 13.00 ± 88% interrupts.CPU23.RES:Rescheduling_interrupts
54.25 ± 62% -77.0% 12.50 ± 17% interrupts.CPU25.RES:Rescheduling_interrupts
323.25 ± 93% -74.2% 83.25 ± 8% interrupts.CPU28.NMI:Non-maskable_interrupts
323.25 ± 93% -74.2% 83.25 ± 8% interrupts.CPU28.PMI:Performance_monitoring_interrupts
71.75 ± 37% -83.6% 11.75 ±109% interrupts.CPU29.RES:Rescheduling_interrupts
657.25 ± 54% -83.5% 108.50 ± 30% interrupts.CPU3.NMI:Non-maskable_interrupts
657.25 ± 54% -83.5% 108.50 ± 30% interrupts.CPU3.PMI:Performance_monitoring_interrupts
81.75 ± 86% -78.0% 18.00 ± 42% interrupts.CPU3.RES:Rescheduling_interrupts
846.25 ± 65% -88.7% 96.00 ± 15% interrupts.CPU30.NMI:Non-maskable_interrupts
846.25 ± 65% -88.7% 96.00 ± 15% interrupts.CPU30.PMI:Performance_monitoring_interrupts
54.50 ± 54% -85.3% 8.00 ± 49% interrupts.CPU30.RES:Rescheduling_interrupts
545049 ± 65% -98.4% 8859 ±171% interrupts.CPU31.CAL:Function_call_interrupts
612.00 ± 63% -84.8% 92.75 ± 14% interrupts.CPU31.NMI:Non-maskable_interrupts
612.00 ± 63% -84.8% 92.75 ± 14% interrupts.CPU31.PMI:Performance_monitoring_interrupts
49.75 ± 83% -78.4% 10.75 ± 54% interrupts.CPU31.RES:Rescheduling_interrupts
593689 ± 65% -98.4% 9350 ±173% interrupts.CPU31.TLB:TLB_shootdowns
67.75 ± 84% -85.6% 9.75 ± 42% interrupts.CPU33.RES:Rescheduling_interrupts
138.75 ± 65% -78.7% 29.50 ± 25% interrupts.CPU36.RES:Rescheduling_interrupts
133.75 ± 81% -84.1% 21.25 ± 49% interrupts.CPU37.RES:Rescheduling_interrupts
108.75 ± 54% -82.3% 19.25 ± 45% interrupts.CPU38.RES:Rescheduling_interrupts
112.50 ± 75% -93.8% 7.00 ± 42% interrupts.CPU40.RES:Rescheduling_interrupts
117.75 ± 69% -89.0% 13.00 ± 44% interrupts.CPU41.RES:Rescheduling_interrupts
889287 ± 63% -69.2% 273776 ±142% interrupts.CPU42.CAL:Function_call_interrupts
917.25 ± 65% -84.8% 139.25 ± 48% interrupts.CPU42.NMI:Non-maskable_interrupts
917.25 ± 65% -84.8% 139.25 ± 48% interrupts.CPU42.PMI:Performance_monitoring_interrupts
110.75 ± 72% -91.0% 10.00 ± 50% interrupts.CPU42.RES:Rescheduling_interrupts
970285 ± 63% -69.8% 292582 ±142% interrupts.CPU42.TLB:TLB_shootdowns
945929 ± 47% -54.3% 432248 ±109% interrupts.CPU43.CAL:Function_call_interrupts
1431 ± 84% -74.5% 365.50 ±134% interrupts.CPU43.NMI:Non-maskable_interrupts
1431 ± 84% -74.5% 365.50 ±134% interrupts.CPU43.PMI:Performance_monitoring_interrupts
91.25 ± 46% -93.7% 5.75 ± 44% interrupts.CPU43.RES:Rescheduling_interrupts
1030879 ± 47% -54.7% 467326 ±108% interrupts.CPU43.TLB:TLB_shootdowns
120.25 ± 47% -93.6% 7.75 ± 23% interrupts.CPU44.RES:Rescheduling_interrupts
869076 ± 61% -79.1% 182037 ±126% interrupts.CPU45.CAL:Function_call_interrupts
1007 ± 52% -88.8% 113.00 ± 61% interrupts.CPU45.NMI:Non-maskable_interrupts
1007 ± 52% -88.8% 113.00 ± 61% interrupts.CPU45.PMI:Performance_monitoring_interrupts
119.00 ± 65% -88.4% 13.75 ± 37% interrupts.CPU45.RES:Rescheduling_interrupts
945574 ± 61% -79.6% 192547 ±127% interrupts.CPU45.TLB:TLB_shootdowns
121.00 ± 36% -87.6% 15.00 ± 48% interrupts.CPU46.RES:Rescheduling_interrupts
96.50 ± 69% -85.8% 13.75 ±116% interrupts.CPU47.RES:Rescheduling_interrupts
91.25 ± 70% -91.2% 8.00 ± 31% interrupts.CPU48.RES:Rescheduling_interrupts
113.00 ± 73% -93.1% 7.75 ±108% interrupts.CPU49.RES:Rescheduling_interrupts
471.25 ± 60% -81.3% 88.25 ± 22% interrupts.CPU5.NMI:Non-maskable_interrupts
471.25 ± 60% -81.3% 88.25 ± 22% interrupts.CPU5.PMI:Performance_monitoring_interrupts
72.25 ± 54% -60.2% 28.75 ± 87% interrupts.CPU5.RES:Rescheduling_interrupts
959091 ± 62% -94.9% 48533 ±172% interrupts.CPU50.CAL:Function_call_interrupts
137.75 ± 80% -95.5% 6.25 ± 6% interrupts.CPU50.RES:Rescheduling_interrupts
1039830 ± 62% -95.0% 52192 ±173% interrupts.CPU50.TLB:TLB_shootdowns
1333 ± 57% -88.1% 158.50 ± 30% interrupts.CPU51.NMI:Non-maskable_interrupts
1333 ± 57% -88.1% 158.50 ± 30% interrupts.CPU51.PMI:Performance_monitoring_interrupts
121.25 ± 51% -89.5% 12.75 ± 45% interrupts.CPU51.RES:Rescheduling_interrupts
939728 ± 55% -87.8% 114957 ±173% interrupts.CPU52.CAL:Function_call_interrupts
103.25 ± 63% -94.2% 6.00 ± 39% interrupts.CPU52.RES:Rescheduling_interrupts
1020727 ± 55% -88.1% 121660 ±173% interrupts.CPU52.TLB:TLB_shootdowns
1039918 ± 36% -89.8% 106181 ±122% interrupts.CPU53.CAL:Function_call_interrupts
1284 ± 78% -91.1% 114.00 ± 52% interrupts.CPU53.NMI:Non-maskable_interrupts
1284 ± 78% -91.1% 114.00 ± 52% interrupts.CPU53.PMI:Performance_monitoring_interrupts
104.00 ± 68% -94.7% 5.50 ± 27% interrupts.CPU53.RES:Rescheduling_interrupts
1134197 ± 36% -90.1% 112045 ±121% interrupts.CPU53.TLB:TLB_shootdowns
1168628 ± 8% +182.2% 3297943 ± 17% interrupts.CPU54.CAL:Function_call_interrupts
963.25 ± 47% +350.1% 4336 ± 34% interrupts.CPU54.NMI:Non-maskable_interrupts
963.25 ± 47% +350.1% 4336 ± 34% interrupts.CPU54.PMI:Performance_monitoring_interrupts
80.75 ± 40% -48.3% 41.75 ± 25% interrupts.CPU54.RES:Rescheduling_interrupts
1275853 ± 8% +177.1% 3535200 ± 18% interrupts.CPU54.TLB:TLB_shootdowns
392.25 ± 34% +1015.9% 4377 ± 29% interrupts.CPU55.NMI:Non-maskable_interrupts
392.25 ± 34% +1015.9% 4377 ± 29% interrupts.CPU55.PMI:Performance_monitoring_interrupts
84.75 ± 68% -72.6% 23.25 ± 94% interrupts.CPU6.RES:Rescheduling_interrupts
707.00 ± 42% -77.1% 162.25 ± 30% interrupts.CPU61.NMI:Non-maskable_interrupts
707.00 ± 42% -77.1% 162.25 ± 30% interrupts.CPU61.PMI:Performance_monitoring_interrupts
405740 ± 55% -76.0% 97283 ±115% interrupts.CPU63.CAL:Function_call_interrupts
684.50 ± 70% -77.1% 157.00 ± 37% interrupts.CPU63.NMI:Non-maskable_interrupts
684.50 ± 70% -77.1% 157.00 ± 37% interrupts.CPU63.PMI:Performance_monitoring_interrupts
440306 ± 55% -76.6% 102844 ±115% interrupts.CPU63.TLB:TLB_shootdowns
315353 ± 24% -62.0% 119836 ±106% interrupts.CPU65.CAL:Function_call_interrupts
340082 ± 24% -62.6% 127245 ±106% interrupts.CPU65.TLB:TLB_shootdowns
356445 ± 33% -88.5% 41068 ± 70% interrupts.CPU66.CAL:Function_call_interrupts
387163 ± 33% -88.6% 44312 ± 71% interrupts.CPU66.TLB:TLB_shootdowns
390653 ± 33% -91.5% 33124 ±172% interrupts.CPU67.CAL:Function_call_interrupts
461.25 ± 50% -72.1% 128.75 ± 43% interrupts.CPU67.NMI:Non-maskable_interrupts
461.25 ± 50% -72.1% 128.75 ± 43% interrupts.CPU67.PMI:Performance_monitoring_interrupts
424248 ± 33% -91.6% 35682 ±173% interrupts.CPU67.TLB:TLB_shootdowns
362929 ± 57% -67.3% 118702 ±125% interrupts.CPU68.CAL:Function_call_interrupts
393487 ± 57% -68.0% 125761 ±125% interrupts.CPU68.TLB:TLB_shootdowns
396812 ± 29% -91.7% 32913 ±168% interrupts.CPU71.CAL:Function_call_interrupts
407.00 ± 48% -71.3% 117.00 ± 35% interrupts.CPU71.NMI:Non-maskable_interrupts
407.00 ± 48% -71.3% 117.00 ± 35% interrupts.CPU71.PMI:Performance_monitoring_interrupts
432737 ± 29% -91.1% 38455 ±169% interrupts.CPU71.TLB:TLB_shootdowns
617468 ± 68% +224.7% 2004642 ± 58% interrupts.CPU72.CAL:Function_call_interrupts
670964 ± 68% +220.4% 2149931 ± 58% interrupts.CPU72.TLB:TLB_shootdowns
62.50 ± 63% -72.0% 17.50 ± 37% interrupts.CPU75.RES:Rescheduling_interrupts
58.50 ± 67% -68.4% 18.50 ± 84% interrupts.CPU77.RES:Rescheduling_interrupts
584390 ± 73% -92.3% 45050 ±111% interrupts.CPU8.CAL:Function_call_interrupts
498.50 ± 43% -82.6% 86.50 ± 18% interrupts.CPU8.NMI:Non-maskable_interrupts
498.50 ± 43% -82.6% 86.50 ± 18% interrupts.CPU8.PMI:Performance_monitoring_interrupts
71.25 ± 62% -94.0% 4.25 ± 67% interrupts.CPU8.RES:Rescheduling_interrupts
634390 ± 73% -92.5% 47709 ±111% interrupts.CPU8.TLB:TLB_shootdowns
305172 ±104% -98.9% 3263 ±136% interrupts.CPU80.CAL:Function_call_interrupts
332295 ±104% -99.0% 3256 ±144% interrupts.CPU80.TLB:TLB_shootdowns
306939 ± 97% -100.0% 77.25 ± 23% interrupts.CPU82.CAL:Function_call_interrupts
45.50 ± 79% -80.2% 9.00 ± 53% interrupts.CPU82.RES:Rescheduling_interrupts
332644 ± 98% -100.0% 0.75 ±110% interrupts.CPU82.TLB:TLB_shootdowns
626.25 ± 82% -78.1% 137.25 ± 44% interrupts.CPU84.NMI:Non-maskable_interrupts
626.25 ± 82% -78.1% 137.25 ± 44% interrupts.CPU84.PMI:Performance_monitoring_interrupts
145.50 ± 2% -56.0% 64.00 ±100% interrupts.CPU87.175:PCI-MSI.512000-edge.ahci[0000:00:1f.2]
330258 ±115% -100.0% 114.75 ± 59% interrupts.CPU88.CAL:Function_call_interrupts
936.25 ±104% -85.1% 139.50 ± 58% interrupts.CPU88.NMI:Non-maskable_interrupts
936.25 ±104% -85.1% 139.50 ± 58% interrupts.CPU88.PMI:Performance_monitoring_interrupts
360088 ±116% -100.0% 41.50 ±164% interrupts.CPU88.TLB:TLB_shootdowns
84.75 ± 77% -69.0% 26.25 ± 97% interrupts.CPU9.RES:Rescheduling_interrupts
673363 ± 17% +226.0% 2195380 ± 32% interrupts.CPU90.CAL:Function_call_interrupts
732427 ± 17% +223.3% 2367615 ± 32% interrupts.CPU90.TLB:TLB_shootdowns
8947 ± 18% -71.3% 2570 ± 19% interrupts.RES:Rescheduling_interrupts
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Oliver Sang
[dyndbg] e83b4a5011: leaking-addresses.proc.__dyndbg_callsites.
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: e83b4a5011dc4c54048fa264da5923d3253df8fd ("[RFC PATCH v2 02/19] dyndbg: split struct _ddebug, move display fields to new _ddebug_callsite")
url: https://github.com/0day-ci/linux/commits/Jim-Cromie/dynamic-debug-diet-pl...
base: https://git.kernel.org/cgit/linux/kernel/git/arnd/asm-generic.git master
in testcase: leaking-addresses
version: leaking-addresses-x86_64-4f19048-1_20201111
with the following parameters:
ucode: 0xde
on test machine: 4 threads Intel(R) Core(TM) i7-7567U CPU @ 3.50GHz with 32G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang(a)intel.com>
2020-12-30 15:54:07 ./leaking_addresses.pl --output-raw result/scan.out
2020-12-30 15:54:27 ./leaking_addresses.pl --input-raw result/scan.out --squash-by-filename
Total number of results from scan (incl dmesg): 160105
dmesg output:
[ 0.058423] mapped IOAPIC to ffffffffff5fb000 (fec00000)
Results squashed by filename (excl dmesg). Displaying [<number of results> <filename>], <example result>
[5 .altinstr_aux] 0xffffffffc0131a49
[50 .symtab] 0xffffffffc00c2000
[2 .noinstr.text] 0xffffffffc0762a40
[41 .data] 0xffffffffc009f000
[158484 kallsyms] ffffffff81000000 T startup_64
[22 .smp_locks] 0xffffffffc009d094
[1 .rodata.cst16.mask2] 0xffffffffc00de0e0
[1 key] 1000000000007 ff980000000007ff febeffdfffefffff fffffffffffffffe
[49 __mcount_loc] 0xffffffffc009d03c
[46 .orc_unwind_ip] 0xffffffffc009d3a0
[24 __ksymtab_strings] 0xffffffffc0118048
[7 .fixup] 0xffffffffc00d62ea
[1 .rodata.cst16.mask1] 0xffffffffc00de0d0
[6 __tracepoints] 0xffffffffc02efcc0
[6 _ftrace_events] 0xffffffffc02efb80
[1 .data..cacheline_aligned] 0xffffffffc064ac80
[31 .bss] 0xffffffffc009f500
[38 .rodata.str1.8] 0xffffffffc009d170
[35 .text.unlikely] 0xffffffffc009cbaf
[2 .rodata.cst16.bswap_mask] 0xffffffffc0098070
[37 .exit.text] 0xffffffffc009cc70
[14 .parainstructions] 0xffffffffc02b4d88
[50 .strtab] 0xffffffffc00c2bb8
[50 .note.Linux] 0xffffffffc009d024
[18 __dyndbg] 0xffffffffc009f0c8
[50 .text] 0xffffffffc009c000
[6 __tracepoints_strings] 0xffffffffc02eb7d0
[50 .note.gnu.build-id] 0xffffffffc009d000
[339 blacklist] 0xffffffff81c00860-0xffffffff81c00880 asm_exc_divide_error
[11 __ex_table] 0xffffffffc00d1128
[1 _ftrace_eval_map] 0xffffffffc0985148
[14 __ksymtab_gpl] 0xffffffffc02b403c
[9 .init.rodata] 0xffffffffc00c1000
[1 _error_injection_whitelist] 0xffffffffc098ab70
[6 .ref.data] 0xffffffffc02efba0
[10 .data..read_mostly] 0xffffffffc009f108
[42 .rodata.str1.1] 0xffffffffc009d09c
[1 ___srcu_struct_ptrs] 0xffffffffc027a000
[140 printk_formats] 0xffffffff8234e927 : "CPU_ON"
[1 devices] B: KEY=1000000000007 ff980000000007ff febeffdfffefffff fffffffffffffffe
[1 uevent] KEY=1000000000007 ff980000000007ff febeffdfffefffff fffffffffffffffe
[18 __ksymtab] 0xffffffffc011803c
[6 __bpf_raw_tp_map] 0xffffffffc02efb20
[50 .gnu.linkonce.this_module] 0xffffffffc009f140
[50 modules] netconsole 20480 0 - Live 0xffffffffc00c0000
[1 .rodata.cst32.byteshift_table] 0xffffffffc00de100
[14 .altinstr_replacement] 0xffffffffc03d39ca
[25 __jump_table] 0xffffffffc009e000
[7 .static_call_sites] 0xffffffffc0987088
[46 .orc_unwind] 0xffffffffc009d544
[42 .rodata] 0xffffffffc009d2c0
[6 __tracepoints_ptrs] 0xffffffffc02eb7bc
[4 .init.data] 0xffffffffc0069000
[14 .altinstructions] 0xffffffffc03e5846
[10 .data.once] 0xffffffffc03e65b4
[6 .static_call.text] 0xffffffffc02e2b24
[18 __dyndbg_callsites] 0xffffffffc009f0f0
[40 .init.text] 0xffffffffc00c0000
[19 __param] 0xffffffffc009d378
[25 __bug_table] 0xffffffffc01ee070
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Oliver Sang
[kasan] 97593cad00: RIP:kasan_record_aux_stack
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: 97593cad003c668e2532cb2939a24a031f8de52d ("kasan: sanitize objects when metadata doesn't fit")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: trinity
version: trinity-i386-4d2343bd-1_20200320
with the following parameters:
runtime: 300s
test-description: Trinity is a linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+--------------------------------------------------------------------------+------------+------------+
| | 3933c17571 | 97593cad00 |
+--------------------------------------------------------------------------+------------+------------+
| boot_successes | 4 | 1 |
| boot_failures | 0 | 3 |
| BUG:sleeping_function_called_from_invalid_context_at_arch/x86/mm/fault.c | 0 | 3 |
| RIP:kasan_record_aux_stack | 0 | 3 |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 3 |
| Oops:#[##] | 0 | 3 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 3 |
+--------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 235.553325] BUG: sleeping function called from invalid context at arch/x86/mm/fault.c:1351
[ 235.554684] in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 7515, name: trinity-c1
[ 235.555890] 2 locks held by trinity-c1/7515:
[ 235.556506] #0: ffffffff8323dd38 (&ids->rwsem){....}-{3:3}, at: semctl_down+0x6d/0x686
[ 235.557684] #1: ffff888128ccc868 (&mm->mmap_lock#2){....}-{3:3}, at: do_user_addr_fault+0x196/0x59e
[ 235.559020] CPU: 1 PID: 7515 Comm: trinity-c1 Not tainted 5.10.0-g97593cad003c #2
[ 235.560317] Call Trace:
[ 235.560767] dump_stack+0x7d/0xa3
[ 235.561371] ___might_sleep+0x2c4/0x2df
[ 235.562063] ? do_user_addr_fault+0x196/0x59e
[ 235.562834] do_user_addr_fault+0x234/0x59e
[ 235.563519] exc_page_fault+0x70/0x8b
[ 235.564112] asm_exc_page_fault+0x1b/0x20
[ 235.564754] RIP: 0010:kasan_record_aux_stack+0x64/0x74
[ 235.565603] Code: 48 f7 fe 8b 47 24 49 89 f0 8d 70 ff 41 0f af f0 48 01 ce 48 29 d3 48 39 f3 48 0f 46 f3 e8 6f e5 ff ff bf 00 08 00 00 48 89 c3 <8b> 40 08 89 43 0c e8 fb e2 ff ff 89 43 08 5b c3 53 48 89 f3 e8 61
[ 235.568479] RSP: 0000:ffff88811f29fce8 EFLAGS: 00010046
[ 235.569415] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff88813a800000
[ 235.570645] RDX: 0000000000000080 RSI: ffff88813a800000 RDI: 0000000000000800
[ 235.571721] RBP: 00000000001ea2c0 R08: 0000000000000000 R09: 0000000000000001
[ 235.572728] R10: ffffed1027500013 R11: ffff88813a800093 R12: ffff88813a800080
[ 235.573700] R13: 0000000000000000 R14: ffff88813a8000f8 R15: 0000000000000246
[ 235.574793] ? kasan_record_aux_stack+0x5c/0x74
[ 235.575536] ? sem_more_checks+0x6c/0x6c
[ 235.576171] call_rcu+0xbe/0x96f
[ 235.576668] ? lock_downgrade+0x46b/0x46b
[ 235.577343] ? do_nocb_bypass_wakeup_timer+0x65/0x65
[ 235.578220] semctl_down+0x602/0x686
[ 235.579015] ? sem_lock_and_putref+0x1b/0x1b
[ 235.579762] ? kvm_sched_clock_read+0x5/0xd
[ 235.580517] ? paravirt_sched_clock+0x5/0x8
[ 235.581259] compat_ksys_semctl+0x1a8/0x1de
[ 235.582005] ? semctl_main+0x81b/0x81b
[ 235.582675] ? lock_downgrade+0x46b/0x46b
[ 235.583340] ? get_vtime_delta+0x83/0x115
[ 235.583994] ? do_write_seqcount_end+0x12/0x42
[ 235.584724] do_int80_syscall_32+0x38/0x45
[ 235.585383] entry_INT80_compat+0x82/0x87
[ 235.586014] RIP: 0023:0xf7ef1a02
[ 235.586543] Code: 95 01 00 05 25 36 02 00 83 ec 14 8d 80 e8 99 ff ff 50 6a 02 e8 1f ff 00 00 c7 04 24 7f 00 00 00 e8 7e 87 01 00 66 90 90 cd 80 <c3> 8d b6 00 00 00 00 8d bc 27 00 00 00 00 8b 1c 24 c3 8d b6 00 00
[ 235.589764] RSP: 002b:00000000ffe6b0f8 EFLAGS: 00000292 ORIG_RAX: 000000000000018a
[ 235.591049] RAX: ffffffffffffffda RBX: 0000000000000081 RCX: 0000000000000001
[ 235.592230] RDX: 0000000000000000 RSI: 0000000000004000 RDI: 00000000000000ff
[ 235.593329] RBP: 000000007aeed3f6 R08: 0000000000000000 R09: 0000000000000000
[ 235.594454] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 235.595609] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 235.596810] BUG: kernel NULL pointer dereference, address: 0000000000000008
[ 235.598027] #PF: supervisor read access in kernel mode
[ 235.598952] #PF: error_code(0x0000) - not-present page
[ 235.599857] PGD 8000000118306067 P4D 8000000118306067 PUD 11ac1e067 PMD 15b2e9067 PTE 0
[ 235.601232] Oops: 0000 [#1] SMP KASAN PTI
[ 235.601936] CPU: 1 PID: 7515 Comm: trinity-c1 Tainted: G W 5.10.0-g97593cad003c #2
[ 235.603475] RIP: 0010:kasan_record_aux_stack+0x64/0x74
[ 235.604329] Code: 48 f7 fe 8b 47 24 49 89 f0 8d 70 ff 41 0f af f0 48 01 ce 48 29 d3 48 39 f3 48 0f 46 f3 e8 6f e5 ff ff bf 00 08 00 00 48 89 c3 <8b> 40 08 89 43 0c e8 fb e2 ff ff 89 43 08 5b c3 53 48 89 f3 e8 61
[ 235.607111] RSP: 0000:ffff88811f29fce8 EFLAGS: 00010046
[ 235.607964] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff88813a800000
[ 235.609165] RDX: 0000000000000080 RSI: ffff88813a800000 RDI: 0000000000000800
[ 235.610409] RBP: 00000000001ea2c0 R08: 0000000000000000 R09: 0000000000000001
[ 235.611834] R10: ffffed1027500013 R11: ffff88813a800093 R12: ffff88813a800080
[ 235.613110] R13: 0000000000000000 R14: ffff88813a8000f8 R15: 0000000000000246
[ 235.614379] FS: 0000000000000000(0000) GS:ffff8881e8a00000(0063) knlGS:00000000f7eec840
[ 235.615685] CS: 0010 DS: 002b ES: 002b CR0: 0000000080050033
[ 235.616545] CR2: 0000000000000008 CR3: 00000001100b0000 CR4: 00000000000406a0
[ 235.617684] Call Trace:
[ 235.618092] ? sem_more_checks+0x6c/0x6c
[ 235.618706] call_rcu+0xbe/0x96f
[ 235.619190] ? lock_downgrade+0x46b/0x46b
[ 235.619825] ? do_nocb_bypass_wakeup_timer+0x65/0x65
[ 235.620678] semctl_down+0x602/0x686
[ 235.621319] ? sem_lock_and_putref+0x1b/0x1b
[ 235.622096] ? kvm_sched_clock_read+0x5/0xd
[ 235.622855] ? paravirt_sched_clock+0x5/0x8
[ 235.623616] compat_ksys_semctl+0x1a8/0x1de
[ 235.624377] ? semctl_main+0x81b/0x81b
[ 235.625066] ? lock_downgrade+0x46b/0x46b
[ 235.625799] ? get_vtime_delta+0x83/0x115
[ 235.626468] ? do_write_seqcount_end+0x12/0x42
[ 235.627214] do_int80_syscall_32+0x38/0x45
[ 235.627854] entry_INT80_compat+0x82/0x87
[ 235.628552] RIP: 0023:0xf7ef1a02
[ 235.629128] Code: 95 01 00 05 25 36 02 00 83 ec 14 8d 80 e8 99 ff ff 50 6a 02 e8 1f ff 00 00 c7 04 24 7f 00 00 00 e8 7e 87 01 00 66 90 90 cd 80 <c3> 8d b6 00 00 00 00 8d bc 27 00 00 00 00 8b 1c 24 c3 8d b6 00 00
[ 235.632253] RSP: 002b:00000000ffe6b0f8 EFLAGS: 00000292 ORIG_RAX: 000000000000018a
[ 235.633500] RAX: ffffffffffffffda RBX: 0000000000000081 RCX: 0000000000000001
[ 235.634649] RDX: 0000000000000000 RSI: 0000000000004000 RDI: 00000000000000ff
[ 235.635722] RBP: 000000007aeed3f6 R08: 0000000000000000 R09: 0000000000000000
[ 235.636865] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 235.638130] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 235.639416] Modules linked in: mousedev crc32c_intel evdev psmouse autofs4
[ 235.640669] CR2: 0000000000000008
[ 235.641281] ---[ end trace 21817c93fd871d30 ]---
To reproduce:
# build kernel
cd linux
cp config-5.10.0-g97593cad003c .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Oliver Sang
[trace] 2158a32526: BUG:using_smp_processor_id()in_preemptible
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: 2158a32526b660c34eedc2eaef2bbd623227dd84 ("[PATCH] trace: Remove get/put_cpu() from function_trace_init")
url: https://github.com/0day-ci/linux/commits/Qiujun-Huang/trace-Remove-get-pu...
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git c2208046bba6842dc232a600dc5cafc2fca41078
in testcase: kernel-selftests
version: kernel-selftests-x86_64-b5a583fb-1_20201015
with the following parameters:
group: group-01
ucode: 0xe2
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: 8 threads Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz with 32G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang(a)intel.com>
kern :err : [ 89.914811] BUG: using smp_processor_id() in preemptible [00000000] code: ftracetest/10346
kern :warn : [ 89.923315] caller is function_trace_init+0x3d/0x80
kern :warn : [ 89.928379] CPU: 6 PID: 10346 Comm: ftracetest Tainted: G I 5.10.0-rc5-00048-g2158a32526b6 #1
kern :warn : [ 89.938618] Hardware name: /NUC6i7KYB, BIOS KYSKLi70.86A.0041.2016.0817.1130 08/17/2016
kern :warn : [ 89.947067] Call Trace:
kern :warn : [ 89.949611] dump_stack+0x8d/0xb5
kern :warn : [ 89.953090] check_preemption_disabled+0xc3/0xe0
kern :warn : [ 89.957991] function_trace_init+0x3d/0x80
kern :warn : [ 89.962284] tracing_set_tracer+0x138/0x220
kern :warn : [ 89.966620] tracing_set_trace_write+0x95/0xe0
kern :warn : [ 89.971222] ? ksys_write+0x68/0xe0
kern :warn : [ 89.974839] vfs_write+0xee/0x3c0
kern :warn : [ 89.978288] ksys_write+0x68/0xe0
kern :warn : [ 89.981707] do_syscall_64+0x33/0x40
kern :warn : [ 89.985450] entry_SYSCALL_64_after_hwframe+0x44/0xa9
user :notice: [ 89.988062] # Check success of execveat(5, '../execveat', 0)... [OK]
kern :warn : [ 89.990669] RIP: 0033:0x7f7d1f648504
kern :warn : [ 89.990678] Code: 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b3 0f 1f 80 00 00 00 00 48 8d 05 f9 61 0d 00 8b 00 85 c0 75 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 41 54 49 89 d4 55 48 89 f5 53
kern :warn : [ 90.000988] RSP: 002b:00007fff326418a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
kern :warn : [ 90.000990] RAX: ffffffffffffffda RBX: 0000000000000009 RCX: 00007f7d1f648504
kern :warn : [ 90.000991] RDX: 0000000000000009 RSI: 0000563de1bc3920 RDI: 0000000000000001
kern :warn : [ 90.000992] RBP: 0000563de1bc3920 R08: 000000000000000a R09: 00007f7d1f6d8e80
kern :warn : [ 90.000994] R10: 000000000000000a R11: 0000000000000246 R12: 00007f7d1f71a760
kern :warn : [ 90.000995] R13: 0000000000000009 R14: 00007f7d1f715760 R15: 0000000000000009
user :notice: [ 90.068488] # Check success of execveat(7, 'execveat', 0)... [OK]
user :notice: [ 90.077748] # Check success of execveat(9, 'execveat', 0)... [OK]
user :notice: [ 91.944852] # Check success of execveat(-100, '/usr/src/perf_selfte...ftests/exec/execveat', 0)... [OK]
user :notice: [ 91.958685] # Check success of execveat(99, '/usr/src/perf_selfte...ftests/exec/execveat', 0)... [OK]
user :notice: [ 91.971263] # Check success of execveat(11, '', 4096)... [OK]
user :notice: [ 91.980178] # Check success of execveat(20, '', 4096)... [OK]
user :notice: [ 91.989100] # Check success of execveat(12, '', 4096)... [OK]
user :notice: [ 91.998064] # Check success of execveat(17, '', 4096)... [OK]
user :notice: [ 92.007057] # Check success of execveat(17, '', 4096)... [OK]
user :notice: [ 92.016034] # Check success of execveat(18, '', 4096)... [OK]
user :notice: [ 92.025178] # Check failure of execveat(11, '', 0) with ENOENT... [OK]
user :notice: [ 92.035300] # Check failure of execveat(11, '(null)', 4096) with EFAULT... [OK]
user :notice: [ 92.046380] # Check success of execveat(7, 'execveat.symlink', 0)... [OK]
user :notice: [ 92.056637] # Check success of execveat(9, 'execveat.symlink', 0)... [OK]
user :notice: [ 92.067857] # Check success of execveat(-100, '/usr/src/perf_selfte...xec/execveat.symlink', 0)... [OK]
user :notice: [ 92.080671] # Check success of execveat(13, '', 4096)... [OK]
user :notice: [ 92.090051] # Check success of execveat(13, '', 4352)... [OK]
user :notice: [ 92.100312] # Check failure of execveat(7, 'execveat.symlink', 256) with ELOOP... [OK]
user :notice: [ 92.112462] # Check failure of execveat(9, 'execveat.symlink', 256) with ELOOP... [OK]
kern :warn : [ 93.511413] **********************************************************
kern :warn : [ 93.518345] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE **
kern :warn : [ 93.525104] ** **
kern :warn : [ 93.531889] ** trace_printk() being used. Allocating extra memory. **
kern :warn : [ 93.538660] ** **
kern :warn : [ 93.545394] ** This means that this is a DEBUG kernel and it is **
kern :warn : [ 93.552308] ** unsafe for production use. **
kern :warn : [ 93.559159] ** **
kern :warn : [ 93.566088] ** If you see this message and you are not debugging **
kern :warn : [ 93.572951] ** the kernel, report this immediately to your vendor! **
kern :warn : [ 93.579780] ** **
kern :warn : [ 93.586630] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE **
kern :warn : [ 93.593501] **********************************************************
user :notice: [ 96.683973] # Check failure of execveat(-100, '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-2158a32526b660c34eedc2eaef2bbd623227dd84/tools/testing/selftests/exec/execveat.symlink', 256) with ELOOP... [OK]
user :notice: [ 96.706608] # Check failure of execveat(7, 'pipe', 0) with EACCES... [OK]
user :notice: [ 96.716823] # Check success of execveat(5, '../script', 0)... [OK]
user :notice: [ 96.726318] # Check success of execveat(7, 'script', 0)... [OK]
user :notice: [ 96.735349] # Check success of execveat(9, 'script', 0)... [OK]
user :notice: [ 96.745231] # Check success of execveat(-100, '/usr/src/perf_selfte...elftests/exec/script', 0)... [OK]
user :notice: [ 96.757894] # Check success of execveat(16, '', 4096)... [OK]
user :notice: [ 96.766599] # Check success of execveat(16, '', 4352)... [OK]
user :notice: [ 96.775638] # Check failure of execveat(21, '', 4096) with ENOENT... [OK]
user :notice: [ 96.785800] # Check failure of execveat(10, 'script', 0) with ENOENT... [OK]
user :notice: [ 96.795985] # Check success of execveat(19, '', 4096)... [OK]
user :notice: [ 96.804779] # Check success of execveat(19, '', 4096)... [OK]
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Oliver Sang
[blokc/blk] d9ad3bf8cb: xfstests.generic.299.fail
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: d9ad3bf8cb2c5fa6738a48e303b2b0f50770700a ("[PATCH] blokc/blk-merge: remove the next_bvec label in __blk_bios_map_sg()linux-block(a)vger.kernel.org (open list:BLOCK LAYER)")
url: https://github.com/0day-ci/linux/commits/sh/blokc-blk-merge-remove-the-ne...
base: https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git for-next
in testcase: xfstests
version: xfstests-x86_64-d41dcbd-1_20201218
with the following parameters:
disk: 4HDD
fs: btrfs
test: generic-group-14
ucode: 0x21
test-description: xfstests is a regression test suite for xfs and other file systems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: 4 threads Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz with 8G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang(a)intel.com>
2020-12-29 18:23:58 export TEST_DIR=/fs/sda1
2020-12-29 18:23:58 export TEST_DEV=/dev/sda1
2020-12-29 18:23:58 export FSTYP=btrfs
2020-12-29 18:23:58 export SCRATCH_MNT=/fs/scratch
2020-12-29 18:23:58 mkdir /fs/scratch -p
2020-12-29 18:23:58 export SCRATCH_DEV_POOL="/dev/sda2 /dev/sda3 /dev/sda4"
2020-12-29 18:23:58 sed "s:^:generic/:" //lkp/benchmarks/xfstests/tests/generic-group-14
2020-12-29 18:23:58 ./check generic/280 generic/281 generic/282 generic/283 generic/284 generic/285 generic/286 generic/287 generic/288 generic/289 generic/290 generic/291 generic/292 generic/293 generic/294 generic/295 generic/296 generic/298 generic/299
FSTYP -- btrfs
PLATFORM -- Linux/x86_64 lkp-ivb-d02 5.10.0-gd9ad3bf8cb2c #1 SMP Fri Dec 25 09:02:24 CST 2020
MKFS_OPTIONS -- /dev/sda2
MOUNT_OPTIONS -- /dev/sda2 /fs/scratch
generic/280 [not run] disk quotas not supported by this filesystem type: btrfs
generic/281 3s
generic/282 2s
generic/283 3s
generic/284 2s
generic/285 0s
generic/286 2s
generic/287 1s
generic/288 [not run] FITRIM not supported on /fs/scratch
generic/289 1s
generic/290 1s
generic/291 2s
generic/292 2s
generic/293 2s
generic/294 1s
generic/295 2s
generic/296 1s
generic/298 _check_btrfs_filesystem: filesystem on /dev/sda2 is inconsistent
(see /lkp/benchmarks/xfstests/results//generic/298.full for details)
generic/299 _check_btrfs_filesystem: filesystem on /dev/sda2 is inconsistent
(see /lkp/benchmarks/xfstests/results//generic/299.full for details)
Ran: generic/280 generic/281 generic/282 generic/283 generic/284 generic/285 generic/286 generic/287 generic/288 generic/289 generic/290 generic/291 generic/292 generic/293 generic/294 generic/295 generic/296 generic/298 generic/299
Not run: generic/280 generic/288
Failures: generic/298 generic/299
Failed 2 of 19 tests
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Oliver Sang
[mm/filemap.c] 06c0444290: stress-ng.sendfile.ops_per_sec 26.7% improvement
by kernel test robot
Greeting,
FYI, we noticed a 26.7% improvement of stress-ng.sendfile.ops_per_sec due to commit:
commit: 06c0444290cecf04c89c62e6d448b8461507d247 ("mm/filemap.c: generic_file_buffered_read() now uses find_get_pages_contig")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: stress-ng
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
with the following parameters:
nr_threads: 100%
disk: 1HDD
testtime: 30s
class: pipe
cpufreq_governor: performance
ucode: 0x5003003
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
pipe/gcc-9/performance/1HDD/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp5/stress-ng/30s/0x5003003
commit:
723ef24b9b ("mm/filemap/c: break generic_file_buffered_read up into multiple functions")
06c0444290 ("mm/filemap.c: generic_file_buffered_read() now uses find_get_pages_contig")
723ef24b9b379e59 06c0444290cecf04c89c62e6d44
---------------- ---------------------------
%stddev %change %stddev
\ | \
14865658 +26.7% 18839172 ± 2% stress-ng.sendfile.ops
495501 +26.7% 627957 ± 2% stress-ng.sendfile.ops_per_sec
17943 ± 12% +36.5% 24500 ± 5% proc-vmstat.numa_hint_faults
1585 ±170% +3019.7% 49447 ± 63% numa-numastat.node0.other_node
85104 ± 3% -56.2% 37244 ± 83% numa-numastat.node1.other_node
169349 ± 19% -21.8% 132475 meminfo.AnonHugePages
301479 ± 10% -11.8% 265842 meminfo.AnonPages
333754 ± 9% -10.6% 298511 meminfo.Inactive
333754 ± 9% -10.6% 298511 meminfo.Inactive(anon)
11540 ± 12% -15.2% 9785 ± 3% sched_debug.cfs_rq:/.load.avg
17531 ± 77% -77.9% 3878 ± 9% sched_debug.cfs_rq:/.load.stddev
28103 ± 22% +50.3% 42227 ± 30% sched_debug.cpu.avg_idle.min
6188 ± 20% +38.9% 8595 ± 4% sched_debug.cpu.curr->pid.min
495.48 ± 24% -46.2% 266.34 ± 18% sched_debug.cpu.curr->pid.stddev
3.336e+10 -5.6% 3.148e+10 ± 6% perf-stat.i.branch-instructions
0.03 ± 7% +0.0 0.04 ± 43% perf-stat.i.dTLB-load-miss-rate%
0.01 ± 18% +0.0 0.01 ± 10% perf-stat.i.dTLB-store-miss-rate%
6253 ± 3% -18.2% 5117 ± 9% perf-stat.i.instructions-per-iTLB-miss
0.64 +0.0 0.67 perf-stat.overall.branch-miss-rate%
3.264e+10 -5.5% 3.084e+10 ± 5% perf-stat.ps.branch-instructions
0.01 ± 5% +13.5% 0.01 ± 2% perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.05 ± 83% +1464.2% 0.85 ±157% perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
494.00 ± 4% +734.5% 4122 ± 3% perf-sched.wait_and_delay.count.preempt_schedule_common._cond_resched.__splice_from_pipe.splice_from_pipe.direct_splice_actor
5392 ± 3% -98.4% 87.00 ± 10% perf-sched.wait_and_delay.count.preempt_schedule_common._cond_resched.generic_file_buffered_read.generic_file_splice_read.splice_direct_to_actor
18.80 ± 20% -68.8% 5.86 ± 73% perf-sched.wait_and_delay.max.ms.preempt_schedule_common._cond_resched.generic_file_buffered_read.generic_file_splice_read.splice_direct_to_actor
0.59 ± 24% -45.0% 0.32 ± 22% perf-sched.wait_time.avg.ms.wait_for_partner.fifo_open.do_dentry_open.path_openat
18.80 ± 20% -68.8% 5.86 ± 73% perf-sched.wait_time.max.ms.preempt_schedule_common._cond_resched.generic_file_buffered_read.generic_file_splice_read.splice_direct_to_actor
7626 -38.2% 4716 ± 33% interrupts.CPU30.NMI:Non-maskable_interrupts
7626 -38.2% 4716 ± 33% interrupts.CPU30.PMI:Performance_monitoring_interrupts
18521 ± 44% +241.7% 63289 ± 80% interrupts.CPU42.RES:Rescheduling_interrupts
30879 ± 63% +91.0% 58971 ± 31% interrupts.CPU49.RES:Rescheduling_interrupts
37970 ± 19% +115.4% 81806 ± 33% interrupts.CPU5.CAL:Function_call_interrupts
48131 ± 35% -55.7% 21307 ± 23% interrupts.CPU65.RES:Rescheduling_interrupts
33689 ± 39% +186.7% 96598 ± 88% interrupts.CPU7.CAL:Function_call_interrupts
37234 ± 52% +76.5% 65709 ± 45% interrupts.CPU71.CAL:Function_call_interrupts
22154 ± 18% +126.8% 50249 ± 70% interrupts.CPU82.RES:Rescheduling_interrupts
16632 ± 60% +310.7% 68311 ± 51% interrupts.CPU9.CAL:Function_call_interrupts
17920 ± 45% +264.1% 65238 ± 48% interrupts.CPU95.CAL:Function_call_interrupts
83.19 -3.7 79.44 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
83.28 -3.8 79.53 ± 5% perf-profile.children.cycles-pp.do_syscall_64
0.68 ± 3% +0.0 0.71 ± 2% perf-profile.children.cycles-pp.sched_clock
0.32 ± 12% +0.1 0.39 ± 6% perf-profile.children.cycles-pp.set_next_buddy
0.00 +0.1 0.07 ± 12% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
1.31 +0.1 1.39 ± 3% perf-profile.children.cycles-pp.__update_load_avg_se
3.41 +0.2 3.56 ± 2% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.08 +0.0 0.10 ± 5% perf-profile.self.cycles-pp.perf_trace_run_bpf_submit
0.08 ± 10% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.__bitmap_and
0.28 ± 9% +0.1 0.33 ± 9% perf-profile.self.cycles-pp.pipe_poll
0.93 +0.1 0.99 perf-profile.self.cycles-pp.switch_mm_irqs_off
0.29 ± 13% +0.1 0.36 ± 7% perf-profile.self.cycles-pp.set_next_buddy
1.28 +0.1 1.36 ± 3% perf-profile.self.cycles-pp.__update_load_avg_se
9207 ± 21% -40.3% 5500 ± 26% softirqs.CPU10.SCHED
8966 ± 22% -36.9% 5653 ± 18% softirqs.CPU11.SCHED
9107 ± 21% -41.4% 5341 ± 19% softirqs.CPU12.SCHED
8925 ± 23% -41.6% 5215 ± 14% softirqs.CPU13.SCHED
9040 ± 21% -41.5% 5285 ± 16% softirqs.CPU14.SCHED
8972 ± 23% -40.6% 5327 ± 16% softirqs.CPU15.SCHED
9021 ± 22% -41.5% 5279 ± 13% softirqs.CPU16.SCHED
8932 ± 20% -40.0% 5357 ± 15% softirqs.CPU17.SCHED
8870 ± 21% -39.0% 5409 ± 20% softirqs.CPU18.SCHED
8866 ± 23% -39.9% 5330 ± 19% softirqs.CPU19.SCHED
9042 ± 21% -37.1% 5683 ± 16% softirqs.CPU2.SCHED
8898 ± 22% -40.7% 5274 ± 20% softirqs.CPU20.SCHED
8989 ± 22% -39.8% 5412 ± 18% softirqs.CPU21.SCHED
8876 ± 22% -41.2% 5223 ± 19% softirqs.CPU22.SCHED
8892 ± 19% -38.7% 5455 ± 16% softirqs.CPU23.SCHED
6924 ± 31% +58.8% 10997 ± 7% softirqs.CPU24.SCHED
7098 ± 30% +64.3% 11663 ± 6% softirqs.CPU25.SCHED
7040 ± 32% +63.8% 11534 ± 7% softirqs.CPU26.SCHED
6977 ± 32% +63.6% 11416 ± 7% softirqs.CPU27.SCHED
7088 ± 30% +58.8% 11255 ± 8% softirqs.CPU28.SCHED
6857 ± 33% +65.5% 11352 ± 6% softirqs.CPU29.SCHED
9142 ± 25% -41.4% 5358 ± 19% softirqs.CPU3.SCHED
7061 ± 29% +62.5% 11472 ± 7% softirqs.CPU30.SCHED
6878 ± 30% +65.2% 11362 ± 8% softirqs.CPU31.SCHED
7173 ± 30% +64.9% 11828 ± 5% softirqs.CPU32.SCHED
7013 ± 31% +62.6% 11405 ± 8% softirqs.CPU33.SCHED
14139 ± 27% +30.8% 18492 ± 24% softirqs.CPU34.RCU
7033 ± 32% +58.8% 11166 ± 7% softirqs.CPU34.SCHED
6963 ± 29% +61.5% 11248 ± 7% softirqs.CPU35.SCHED
7012 ± 30% +61.6% 11332 ± 9% softirqs.CPU36.SCHED
6923 ± 29% +63.4% 11310 ± 8% softirqs.CPU37.SCHED
7070 ± 32% +59.8% 11298 ± 8% softirqs.CPU38.SCHED
6818 ± 31% +67.0% 11389 ± 7% softirqs.CPU39.SCHED
9088 ± 20% -42.2% 5250 ± 15% softirqs.CPU4.SCHED
7040 ± 29% +61.5% 11368 ± 7% softirqs.CPU40.SCHED
6980 ± 29% +62.2% 11321 ± 7% softirqs.CPU41.SCHED
6926 ± 29% +64.3% 11379 ± 8% softirqs.CPU42.SCHED
7062 ± 30% +57.4% 11114 ± 7% softirqs.CPU43.SCHED
6960 ± 31% +62.1% 11279 ± 9% softirqs.CPU44.SCHED
6854 ± 31% +65.0% 11310 ± 7% softirqs.CPU45.SCHED
7152 ± 29% +58.9% 11362 ± 9% softirqs.CPU46.SCHED
6828 ± 31% +64.2% 11210 ± 8% softirqs.CPU47.SCHED
8903 ± 21% -41.9% 5170 ± 17% softirqs.CPU48.SCHED
9048 ± 21% -41.0% 5336 ± 19% softirqs.CPU49.SCHED
8974 ± 23% -42.9% 5124 ± 18% softirqs.CPU5.SCHED
8742 ± 23% -40.8% 5177 ± 17% softirqs.CPU50.SCHED
8647 ± 23% -38.3% 5335 ± 19% softirqs.CPU51.SCHED
8783 ± 22% -41.7% 5118 ± 10% softirqs.CPU52.SCHED
8659 ± 26% -40.2% 5175 ± 18% softirqs.CPU53.SCHED
8852 ± 23% -40.2% 5292 ± 15% softirqs.CPU54.SCHED
9070 ± 20% -43.2% 5153 ± 16% softirqs.CPU55.SCHED
9191 ± 17% -42.7% 5266 ± 15% softirqs.CPU56.SCHED
8884 ± 24% -41.8% 5171 ± 16% softirqs.CPU57.SCHED
8986 ± 23% -40.5% 5344 ± 18% softirqs.CPU58.SCHED
9501 ± 24% -44.9% 5233 ± 20% softirqs.CPU59.SCHED
8897 ± 21% -38.5% 5467 ± 18% softirqs.CPU6.SCHED
9260 ± 18% -42.4% 5335 ± 22% softirqs.CPU60.SCHED
8966 ± 21% -42.3% 5170 ± 15% softirqs.CPU61.SCHED
8963 ± 22% -39.2% 5454 ± 16% softirqs.CPU62.SCHED
8948 ± 22% -39.0% 5462 ± 17% softirqs.CPU63.SCHED
8980 ± 22% -41.1% 5289 ± 16% softirqs.CPU64.SCHED
8969 ± 20% -41.4% 5260 ± 15% softirqs.CPU65.SCHED
8891 ± 23% -41.4% 5211 ± 20% softirqs.CPU66.SCHED
9193 ± 23% -40.0% 5520 ± 14% softirqs.CPU67.SCHED
8936 ± 24% -40.7% 5296 ± 16% softirqs.CPU68.SCHED
8871 ± 22% -40.0% 5320 ± 16% softirqs.CPU69.SCHED
8962 ± 19% -41.8% 5216 ± 17% softirqs.CPU7.SCHED
8671 ± 24% -38.7% 5315 ± 19% softirqs.CPU70.SCHED
7198 ± 29% +53.9% 11076 ± 4% softirqs.CPU72.SCHED
7133 ± 29% +61.0% 11488 ± 9% softirqs.CPU73.SCHED
6952 ± 30% +66.9% 11602 ± 8% softirqs.CPU74.SCHED
6975 ± 31% +60.8% 11214 ± 7% softirqs.CPU75.SCHED
6985 ± 31% +58.4% 11065 ± 10% softirqs.CPU76.SCHED
6811 ± 31% +63.6% 11146 ± 6% softirqs.CPU77.SCHED
7006 ± 29% +62.0% 11347 ± 7% softirqs.CPU78.SCHED
6827 ± 32% +65.8% 11316 ± 9% softirqs.CPU79.SCHED
8957 ± 19% -40.8% 5304 ± 18% softirqs.CPU8.SCHED
7102 ± 32% +59.7% 11345 ± 8% softirqs.CPU80.SCHED
7023 ± 30% +60.3% 11258 ± 8% softirqs.CPU81.SCHED
7046 ± 31% +57.8% 11121 ± 6% softirqs.CPU82.SCHED
6966 ± 30% +57.1% 10941 ± 8% softirqs.CPU83.SCHED
6953 ± 30% +62.0% 11261 ± 10% softirqs.CPU85.SCHED
6884 ± 31% +63.0% 11220 ± 9% softirqs.CPU86.SCHED
6765 ± 32% +66.3% 11249 ± 8% softirqs.CPU87.SCHED
6963 ± 29% +63.8% 11403 ± 7% softirqs.CPU88.SCHED
6869 ± 31% +63.6% 11241 ± 8% softirqs.CPU89.SCHED
9002 ± 21% -43.2% 5115 ± 18% softirqs.CPU9.SCHED
6759 ± 33% +69.4% 11450 ± 9% softirqs.CPU90.SCHED
7003 ± 30% +58.9% 11130 ± 9% softirqs.CPU91.SCHED
6994 ± 28% +64.9% 11533 ± 9% softirqs.CPU92.SCHED
6839 ± 32% +66.1% 11358 ± 6% softirqs.CPU93.SCHED
7121 ± 29% +59.2% 11339 ± 7% softirqs.CPU94.SCHED
6798 ± 32% +67.9% 11412 ± 7% softirqs.CPU95.SCHED
stress-ng.sendfile.ops
2e+07 +-----------------------------------------------------------------+
| O OO OO |
1.9e+07 |-+O OOOOOOO OO OOOO |
|OOOO O O OOO O |
| O O O O |
1.8e+07 |-+ |
| |
1.7e+07 |-+ |
| |
1.6e+07 |-+ |
| |
| |
1.5e+07 |+++++ + ++++++++++ +++++++++++ ++++++ +++++++ ++++++++++ + +++++|
|+ ++++ + +++ +++++ + ++ +++++ ++++++++ +++++ + + ++++ + |
1.4e+07 +-----------------------------------------------------------------+
stress-ng.sendfile.ops_per_sec
660000 +------------------------------------------------------------------+
640000 |-+ OO OOOOO O |
|OOOOOOOO O OOOOOOO |
620000 |-+OO O |
600000 |-+ O O O |
| |
580000 |-+ |
560000 |-+ |
540000 |-+ |
| |
520000 |-+ |
500000 |-+ + + + + + |
|++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++++++|
480000 |-+ ++ + + ++ + + + + + +++++ ++ + + ++ + + |
460000 +------------------------------------------------------------------+
7000 +--------------------------------------------------------------------+
| |
6000 |-+ + + + + + + |
|++++++:++++++++++++:+++++++++++::++++:+ + ++++++:+++++++++++::++++++|
5000 |-+ ++++ + ++ + ++ + + + ++ + ++++ + + ++ + + + ++ + ++ |
| |
4000 |-+ |
| |
3000 |-+ |
| |
2000 |-+ |
| |
1000 |-+ |
| |
0 +--------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Oliver Sang
[btrfs] 196d59ab9c: fxmark.hdd_btrfs_MWCL_9_bufferedio.works/sec -16.4% regression
by kernel test robot
Greeting,
FYI, we noticed a -16.4% regression of fxmark.hdd_btrfs_MWCL_9_bufferedio.works/sec due to commit:
Please note there are also other regressions in fxmark tests, e.g.:
56097 -29.1% 39797 fxmark.hdd_btrfs_MWCL_18_bufferedio.works/sec
53599 -13.7% 46230 ± 3% fxmark.hdd_btrfs_MWCL_72_bufferedio.works/sec
commit: 196d59ab9ccc975d8d29292845d227cdf4423ef8 ("btrfs: switch extent buffer tree lock to rw_semaphore")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: fxmark
on test machine: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
with following parameters:
disk: 1HDD
media: hdd
test: MWCL
fstype: btrfs
directio: bufferedio
cpufreq_governor: performance
ucode: 0x11
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/bufferedio/1HDD/btrfs/x86_64-rhel-8.3/hdd/debian-10.4-x86_64-20200603.cgz/lkp-knm01/MWCL/fxmark/0x11
commit:
ecdcf3c259 ("btrfs: open code insert_orphan_item")
196d59ab9c ("btrfs: switch extent buffer tree lock to rw_semaphore")
ecdcf3c259e4c36e 196d59ab9ccc975d8d29292845d
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
0:4            8%           0:4     perf-profile.children.cycles-pp.error_return
0:4 12% 0:4 perf-profile.children.cycles-pp.error_entry
0:4 11% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
123.68 +88.8% 233.54 fxmark.hdd_btrfs_MWCL_18_bufferedio.idle_sec
24.30 +89.1% 45.95 fxmark.hdd_btrfs_MWCL_18_bufferedio.idle_util
0.51 ± 13% +46.3% 0.75 ± 4% fxmark.hdd_btrfs_MWCL_18_bufferedio.iowait_sec
0.10 ± 13% +46.6% 0.15 ± 5% fxmark.hdd_btrfs_MWCL_18_bufferedio.iowait_util
4.79 ± 3% -14.1% 4.11 ± 2% fxmark.hdd_btrfs_MWCL_18_bufferedio.softirq_sec
0.94 ± 3% -14.0% 0.81 ± 2% fxmark.hdd_btrfs_MWCL_18_bufferedio.softirq_util
338.07 -31.2% 232.52 fxmark.hdd_btrfs_MWCL_18_bufferedio.sys_sec
66.42 -31.1% 45.75 fxmark.hdd_btrfs_MWCL_18_bufferedio.sys_util
10.98 -21.0% 8.66 fxmark.hdd_btrfs_MWCL_18_bufferedio.user_sec
2.16 -20.9% 1.70 fxmark.hdd_btrfs_MWCL_18_bufferedio.user_util
1683695 -29.1% 1194164 fxmark.hdd_btrfs_MWCL_18_bufferedio.works
56097 -29.1% 39797 fxmark.hdd_btrfs_MWCL_18_bufferedio.works/sec
241.75 ± 3% +71.4% 414.43 fxmark.hdd_btrfs_MWCL_27_bufferedio.idle_sec
31.25 ± 3% +71.3% 53.52 fxmark.hdd_btrfs_MWCL_27_bufferedio.idle_util
478.99 ± 2% -35.8% 307.43 fxmark.hdd_btrfs_MWCL_27_bufferedio.sys_sec
61.90 -35.9% 39.71 fxmark.hdd_btrfs_MWCL_27_bufferedio.sys_util
11.25 -12.8% 9.81 ± 3% fxmark.hdd_btrfs_MWCL_27_bufferedio.user_sec
1.45 -12.8% 1.27 ± 3% fxmark.hdd_btrfs_MWCL_27_bufferedio.user_util
1707056 -22.6% 1321210 fxmark.hdd_btrfs_MWCL_27_bufferedio.works
56877 -22.6% 44009 fxmark.hdd_btrfs_MWCL_27_bufferedio.works/sec
1.37 ± 5% -77.9% 0.30 ± 30% fxmark.hdd_btrfs_MWCL_2_bufferedio.idle_sec
2.29 ± 5% -78.1% 0.50 ± 30% fxmark.hdd_btrfs_MWCL_2_bufferedio.idle_util
1.11 ± 11% -13.9% 0.96 ± 9% fxmark.hdd_btrfs_MWCL_2_bufferedio.softirq_sec
1.86 ± 11% -14.9% 1.58 ± 9% fxmark.hdd_btrfs_MWCL_2_bufferedio.softirq_util
341.20 ± 2% +75.6% 598.98 fxmark.hdd_btrfs_MWCL_36_bufferedio.idle_sec
32.89 ± 2% +75.5% 57.71 fxmark.hdd_btrfs_MWCL_36_bufferedio.idle_util
0.46 ± 27% +138.5% 1.08 ± 19% fxmark.hdd_btrfs_MWCL_36_bufferedio.iowait_sec
0.04 ± 27% +138.0% 0.10 ± 19% fxmark.hdd_btrfs_MWCL_36_bufferedio.iowait_util
635.07 -40.5% 377.89 fxmark.hdd_btrfs_MWCL_36_bufferedio.sys_sec
61.22 -40.5% 36.41 fxmark.hdd_btrfs_MWCL_36_bufferedio.sys_util
1747939 -15.5% 1476154 fxmark.hdd_btrfs_MWCL_36_bufferedio.works
58241 -15.6% 49175 fxmark.hdd_btrfs_MWCL_36_bufferedio.works/sec
487.29 +62.8% 793.11 ± 2% fxmark.hdd_btrfs_MWCL_45_bufferedio.idle_sec
37.52 +62.7% 61.04 ± 2% fxmark.hdd_btrfs_MWCL_45_bufferedio.idle_util
744.34 -41.0% 439.31 ± 4% fxmark.hdd_btrfs_MWCL_45_bufferedio.sys_sec
57.31 -41.0% 33.81 ± 4% fxmark.hdd_btrfs_MWCL_45_bufferedio.sys_util
1759079 -10.8% 1568784 ± 3% fxmark.hdd_btrfs_MWCL_45_bufferedio.works
58631 -10.9% 52267 ± 3% fxmark.hdd_btrfs_MWCL_45_bufferedio.works/sec
6.38 -18.0% 5.23 ± 7% fxmark.hdd_btrfs_MWCL_4_bufferedio.idle_sec
5.47 -20.5% 4.35 ± 7% fxmark.hdd_btrfs_MWCL_4_bufferedio.idle_util
1.75 ± 5% -33.2% 1.17 ± 8% fxmark.hdd_btrfs_MWCL_4_bufferedio.softirq_sec
1.50 ± 5% -35.3% 0.97 ± 8% fxmark.hdd_btrfs_MWCL_4_bufferedio.softirq_util
659.89 +52.6% 1007 fxmark.hdd_btrfs_MWCL_54_bufferedio.idle_sec
42.16 +53.0% 64.49 fxmark.hdd_btrfs_MWCL_54_bufferedio.idle_util
8.08 +13.2% 9.14 ± 2% fxmark.hdd_btrfs_MWCL_54_bufferedio.softirq_sec
0.52 +13.5% 0.59 ± 2% fxmark.hdd_btrfs_MWCL_54_bufferedio.softirq_util
833.14 -42.2% 481.74 fxmark.hdd_btrfs_MWCL_54_bufferedio.sys_sec
53.22 -42.0% 30.85 fxmark.hdd_btrfs_MWCL_54_bufferedio.sys_util
827.45 +47.9% 1223 fxmark.hdd_btrfs_MWCL_63_bufferedio.idle_sec
45.34 +47.9% 67.04 fxmark.hdd_btrfs_MWCL_63_bufferedio.idle_util
8.54 +16.8% 9.98 fxmark.hdd_btrfs_MWCL_63_bufferedio.softirq_sec
0.47 +16.8% 0.55 fxmark.hdd_btrfs_MWCL_63_bufferedio.softirq_util
919.60 -43.1% 522.80 ± 2% fxmark.hdd_btrfs_MWCL_63_bufferedio.sys_sec
50.38 -43.2% 28.64 ± 2% fxmark.hdd_btrfs_MWCL_63_bufferedio.sys_util
1017 +54.2% 1569 fxmark.hdd_btrfs_MWCL_72_bufferedio.idle_sec
48.62 +53.9% 74.84 fxmark.hdd_btrfs_MWCL_72_bufferedio.idle_util
983.26 -55.2% 440.30 ± 4% fxmark.hdd_btrfs_MWCL_72_bufferedio.sys_sec
46.99 -55.3% 21.00 ± 3% fxmark.hdd_btrfs_MWCL_72_bufferedio.sys_util
1610685 -13.9% 1387293 ± 3% fxmark.hdd_btrfs_MWCL_72_bufferedio.works
53599 -13.7% 46230 ± 3% fxmark.hdd_btrfs_MWCL_72_bufferedio.works/sec
29.75 +89.7% 56.45 ± 2% fxmark.hdd_btrfs_MWCL_9_bufferedio.idle_sec
11.62 +86.9% 21.72 ± 2% fxmark.hdd_btrfs_MWCL_9_bufferedio.idle_util
7.64 -10.8% 6.82 fxmark.hdd_btrfs_MWCL_9_bufferedio.irq_util
3.10 ± 6% -15.1% 2.64 ± 3% fxmark.hdd_btrfs_MWCL_9_bufferedio.softirq_sec
1.21 ± 6% -16.4% 1.01 ± 3% fxmark.hdd_btrfs_MWCL_9_bufferedio.softirq_util
194.38 -10.0% 174.90 fxmark.hdd_btrfs_MWCL_9_bufferedio.sys_sec
75.94 -11.4% 67.29 fxmark.hdd_btrfs_MWCL_9_bufferedio.sys_util
8.95 -11.7% 7.91 fxmark.hdd_btrfs_MWCL_9_bufferedio.user_sec
3.50 -13.1% 3.04 fxmark.hdd_btrfs_MWCL_9_bufferedio.user_util
1359770 -16.4% 1136680 fxmark.hdd_btrfs_MWCL_9_bufferedio.works
45317 -16.4% 37885 fxmark.hdd_btrfs_MWCL_9_bufferedio.works/sec
549.29 -3.3% 531.11 fxmark.time.elapsed_time
549.29 -3.3% 531.11 fxmark.time.elapsed_time.max
1096944 -15.4% 927632 ± 2% fxmark.time.file_system_outputs
67.00 -22.4% 52.00 fxmark.time.percent_of_cpu_this_job_got
356.30 -25.2% 266.46 fxmark.time.system_time
13.97 -7.7% 12.90 fxmark.time.user_time
687910 ± 2% -14.9% 585190 fxmark.time.voluntary_context_switches
92480 +57.6% 145791 cpuidle.POLL.usage
45.12 +17.7% 53.12 iostat.cpu.idle
47.26 -17.5% 39.01 iostat.cpu.system
4465896 -11.0% 3976872 numa-numastat.node0.local_node
4465847 -10.9% 3976837 numa-numastat.node0.numa_hit
48.04 +8.2 56.24 mpstat.cpu.all.idle%
1.00 ± 5% -0.1 0.94 ± 4% mpstat.cpu.all.soft%
43.86 -8.1 35.72 mpstat.cpu.all.sys%
44.75 +17.9% 52.75 vmstat.cpu.id
51.00 -16.7% 42.50 vmstat.cpu.sy
23992 -12.0% 21119 vmstat.io.bo
2338252 -9.9% 2106851 vmstat.memory.cache
11.50 ± 4% -63.0% 4.25 ± 10% vmstat.procs.r
92744 +6.5% 98768 vmstat.system.cs
175350 -18.3% 143184 meminfo.Active
174192 -18.4% 142148 meminfo.Active(file)
214134 -21.0% 169130 ± 3% meminfo.Dirty
152957 -23.9% 116365 ± 4% meminfo.Inactive(file)
914555 -16.8% 760508 meminfo.KReclaimable
914555 -16.8% 760508 meminfo.SReclaimable
441392 -9.9% 397727 meminfo.SUnreclaim
28606 ± 3% -19.0% 23181 ± 3% meminfo.Shmem
1355948 -14.6% 1158236 meminfo.Slab
1683 ± 4% -8.9% 1532 ± 4% meminfo.Writeback
176261 -18.5% 143572 numa-meminfo.node0.Active
175123 -18.6% 142537 numa-meminfo.node0.Active(file)
214487 -21.0% 169509 ± 4% numa-meminfo.node0.Dirty
152258 -23.1% 117060 ± 4% numa-meminfo.node0.Inactive(file)
897715 -17.0% 744913 numa-meminfo.node0.KReclaimable
897715 -17.0% 744913 numa-meminfo.node0.SReclaimable
429290 -9.9% 386596 numa-meminfo.node0.SUnreclaim
28605 ± 3% -19.0% 23175 ± 3% numa-meminfo.node0.Shmem
1327006 -14.7% 1131510 numa-meminfo.node0.Slab
1752 ± 5% -11.4% 1552 numa-meminfo.node0.Writeback
43694 -18.5% 35593 numa-vmstat.node0.nr_active_file
916940 -14.6% 783445 numa-vmstat.node0.nr_dirtied
53663 -21.0% 42371 ± 3% numa-vmstat.node0.nr_dirty
38292 -23.2% 29417 ± 4% numa-vmstat.node0.nr_inactive_file
7144 ± 2% -19.0% 5789 ± 3% numa-vmstat.node0.nr_shmem
224840 -17.1% 186437 numa-vmstat.node0.nr_slab_reclaimable
107449 -10.0% 96654 numa-vmstat.node0.nr_slab_unreclaimable
449.50 ± 4% -14.6% 384.00 ± 4% numa-vmstat.node0.nr_writeback
862673 -14.1% 740618 numa-vmstat.node0.nr_written
43695 -18.5% 35593 numa-vmstat.node0.nr_zone_active_file
38291 -23.2% 29417 ± 4% numa-vmstat.node0.nr_zone_inactive_file
54266 -21.1% 42827 ± 3% numa-vmstat.node0.nr_zone_write_pending
642101 -18.1% 526175 slabinfo.btrfs_delayed_node.active_objs
12361 -18.0% 10131 slabinfo.btrfs_delayed_node.active_slabs
642824 -18.0% 526856 slabinfo.btrfs_delayed_node.num_objs
12361 -18.0% 10131 slabinfo.btrfs_delayed_node.num_slabs
641642 -18.1% 525750 slabinfo.btrfs_inode.active_objs
22930 -18.1% 18790 slabinfo.btrfs_inode.active_slabs
642066 -18.1% 526149 slabinfo.btrfs_inode.num_objs
22930 -18.1% 18790 slabinfo.btrfs_inode.num_slabs
614342 -15.8% 517367 slabinfo.dentry.active_objs
14843 -15.7% 12518 slabinfo.dentry.active_slabs
623419 -15.7% 525802 slabinfo.dentry.num_objs
14843 -15.7% 12518 slabinfo.dentry.num_slabs
26554 -33.0% 17795 slabinfo.kmalloc-192.active_objs
788.50 -26.9% 576.50 ± 2% slabinfo.kmalloc-192.active_slabs
33135 -26.8% 24238 ± 2% slabinfo.kmalloc-192.num_objs
788.50 -26.9% 576.50 ± 2% slabinfo.kmalloc-192.num_slabs
35021 -17.4% 28941 slabinfo.kmalloc-256.active_objs
39465 -13.8% 34015 slabinfo.kmalloc-256.num_objs
38834 -11.0% 34576 slabinfo.kmalloc-512.active_objs
3822 ± 4% -19.0% 3096 ± 9% slabinfo.mnt_cache.active_objs
4085 ± 4% -16.9% 3395 ± 8% slabinfo.mnt_cache.num_objs
22177 -98.4% 345.50 ± 2% slabinfo.numa_policy.active_objs
370.25 -98.2% 6.75 ± 6% slabinfo.numa_policy.active_slabs
22974 -98.2% 418.50 ± 6% slabinfo.numa_policy.num_objs
370.25 -98.2% 6.75 ± 6% slabinfo.numa_policy.num_slabs
30778 -11.4% 27258 slabinfo.radix_tree_node.active_objs
287.50 -10.2% 258.25 ± 2% proc-vmstat.nr_active_anon
44012 -18.2% 36011 proc-vmstat.nr_active_file
1641867 -14.6% 1402146 proc-vmstat.nr_dirtied
54266 -21.0% 42844 ± 3% proc-vmstat.nr_dirty
354597 -5.3% 335913 proc-vmstat.nr_file_pages
68032 -1.7% 66880 proc-vmstat.nr_inactive_anon
38874 -23.7% 29657 ± 4% proc-vmstat.nr_inactive_file
7185 ± 3% -19.0% 5817 ± 3% proc-vmstat.nr_shmem
231482 -16.9% 192443 proc-vmstat.nr_slab_reclaimable
110865 -10.0% 99807 proc-vmstat.nr_slab_unreclaimable
447.00 -11.0% 398.00 proc-vmstat.nr_writeback
1641866 -14.6% 1402143 proc-vmstat.nr_written
287.50 -10.2% 258.25 ± 2% proc-vmstat.nr_zone_active_anon
44012 -18.2% 36011 proc-vmstat.nr_zone_active_file
68032 -1.7% 66880 proc-vmstat.nr_zone_inactive_anon
38874 -23.7% 29657 ± 4% proc-vmstat.nr_zone_inactive_file
54890 -21.1% 43319 ± 3% proc-vmstat.nr_zone_write_pending
4487469 -10.8% 4001195 proc-vmstat.numa_hit
4487468 -10.8% 4001193 proc-vmstat.numa_local
645936 +8.8% 702580 proc-vmstat.pgactivate
9776528 -12.1% 8593562 proc-vmstat.pgalloc_normal
1323917 -14.8% 1127396 ± 3% proc-vmstat.pgdeactivate
1852505 -3.3% 1792262 proc-vmstat.pgfault
9778333 -12.1% 8596857 proc-vmstat.pgfree
13064555 -14.7% 11146070 proc-vmstat.pgpgout
1355099 -15.6% 1143590 ± 3% proc-vmstat.pgrotated
29192165 -12.6% 25511450 proc-vmstat.slabs_scanned
2682624 -2.3% 2622144 proc-vmstat.unevictable_pgs_scanned
11918547 ± 5% +7.8% 12843186 perf-stat.i.branch-instructions
0.08 ± 6% +0.0 0.09 ± 2% perf-stat.i.branch-miss-rate%
924883 ± 5% +8.7% 1004974 ± 3% perf-stat.i.branch-misses
0.16 ± 7% +0.0 0.17 perf-stat.i.cache-miss-rate%
194655 ± 8% +10.9% 215884 perf-stat.i.cache-misses
1307774 ± 8% +8.8% 1423352 perf-stat.i.cache-references
91889 +7.0% 98361 perf-stat.i.context-switches
0.09 ± 9% +8.5% 0.10 perf-stat.i.cpi
39978 +0.8% 40304 perf-stat.i.cpu-clock
4.778e+08 ± 7% +8.6% 5.187e+08 perf-stat.i.cpu-cycles
900.48 +26.4% 1138 perf-stat.i.cpu-migrations
630084 ± 7% +8.2% 681691 perf-stat.i.iTLB-load-misses
56704882 ± 5% +7.9% 61157553 perf-stat.i.iTLB-loads
56801859 ± 5% +7.9% 61262751 perf-stat.i.instructions
0.96 ± 5% +7.4% 1.03 perf-stat.i.instructions-per-iTLB-miss
0.00 ± 6% +7.0% 0.00 perf-stat.i.ipc
1.78 ± 2% +8.1% 1.93 ± 2% perf-stat.i.major-faults
0.00 ± 7% +8.5% 0.00 perf-stat.i.metric.GHz
0.33 +3.5% 0.34 perf-stat.i.metric.K/sec
0.25 ± 5% +7.7% 0.27 perf-stat.i.metric.M/sec
39978 +0.8% 40304 perf-stat.i.task-clock
12147873 ± 5% +8.2% 13143736 perf-stat.ps.branch-instructions
942554 ± 5% +9.1% 1028476 ± 3% perf-stat.ps.branch-misses
198339 ± 8% +11.4% 220964 perf-stat.ps.cache-misses
1332481 ± 8% +9.3% 1456578 perf-stat.ps.cache-references
91580 +6.7% 97730 perf-stat.ps.context-switches
4.868e+08 ± 7% +9.0% 5.308e+08 perf-stat.ps.cpu-cycles
896.06 +26.2% 1131 perf-stat.ps.cpu-migrations
641967 ± 7% +8.7% 697566 perf-stat.ps.iTLB-load-misses
57794217 ± 5% +8.3% 62587387 perf-stat.ps.iTLB-loads
57894864 ± 5% +8.3% 62696692 perf-stat.ps.instructions
1.78 ± 2% +8.4% 1.93 ± 2% perf-stat.ps.major-faults
76331 -14.9% 64991 sched_debug.cfs_rq:/.exec_clock.avg
131652 -45.5% 71764 sched_debug.cfs_rq:/.exec_clock.max
19235 -88.8% 2152 ± 10% sched_debug.cfs_rq:/.exec_clock.stddev
29025 ± 23% +115.9% 62654 ± 25% sched_debug.cfs_rq:/.load.min
228411 ± 11% -38.6% 140254 ± 44% sched_debug.cfs_rq:/.load.stddev
43.60 ± 29% +86.5% 81.33 ± 17% sched_debug.cfs_rq:/.load_avg.min
1523223 ± 3% -19.2% 1230395 ± 6% sched_debug.cfs_rq:/.min_vruntime.avg
2877263 ± 3% -54.9% 1297137 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
821701 ± 6% +33.4% 1096067 ± 8% sched_debug.cfs_rq:/.min_vruntime.min
639729 ± 3% -93.7% 40423 ± 26% sched_debug.cfs_rq:/.min_vruntime.stddev
0.43 ± 2% -26.0% 0.32 ± 13% sched_debug.cfs_rq:/.nr_running.avg
0.12 ± 34% +33.3% 0.17 ± 33% sched_debug.cfs_rq:/.nr_running.min
21.91 ± 7% +83.3% 40.17 ± 8% sched_debug.cfs_rq:/.nr_spread_over.avg
6.25 ± 33% +379.6% 29.97 ± 6% sched_debug.cfs_rq:/.nr_spread_over.min
224.08 ± 12% +74.5% 390.97 ± 12% sched_debug.cfs_rq:/.runnable_avg.min
222.26 ± 12% -22.6% 171.99 ± 13% sched_debug.cfs_rq:/.runnable_avg.stddev
-1308743 -104.8% 62240 ±123% sched_debug.cfs_rq:/.spread0.avg
-2010183 -96.4% -72085 sched_debug.cfs_rq:/.spread0.min
639697 ± 3% -93.7% 40422 ± 26% sched_debug.cfs_rq:/.spread0.stddev
408.48 ± 5% -13.0% 355.56 ± 3% sched_debug.cfs_rq:/.util_avg.avg
1055 ± 3% -10.6% 943.19 ± 3% sched_debug.cfs_rq:/.util_avg.max
162.68 ± 12% +53.0% 248.97 ± 15% sched_debug.cfs_rq:/.util_avg.min
205.13 ± 7% -25.5% 152.83 ± 13% sched_debug.cfs_rq:/.util_avg.stddev
244.43 ± 5% -66.9% 80.95 ± 39% sched_debug.cfs_rq:/.util_est_enqueued.avg
709.90 ± 13% -39.9% 426.36 ± 11% sched_debug.cfs_rq:/.util_est_enqueued.max
203.06 ± 15% -58.6% 84.00 ± 15% sched_debug.cfs_rq:/.util_est_enqueued.stddev
848045 ± 9% +34.7% 1142214 ± 10% sched_debug.cpu.avg_idle.avg
188030 ± 23% +140.0% 451192 ± 14% sched_debug.cpu.avg_idle.min
345409 ± 8% -25.4% 257601 ± 12% sched_debug.cpu.avg_idle.stddev
347803 -9.9% 313357 sched_debug.cpu.clock.avg
347858 -9.9% 313400 sched_debug.cpu.clock.max
347747 -9.9% 313310 sched_debug.cpu.clock.min
31.60 -18.3% 25.83 ± 3% sched_debug.cpu.clock.stddev
338985 -10.5% 303239 sched_debug.cpu.clock_task.avg
340608 -10.8% 303972 sched_debug.cpu.clock_task.max
329665 -9.8% 297262 sched_debug.cpu.clock_task.min
2108 ± 4% -39.3% 1280 ± 5% sched_debug.cpu.clock_task.stddev
939.85 ± 81% +165.3% 2493 ± 27% sched_debug.cpu.curr->pid.min
3549 ± 10% -29.0% 2520 ± 15% sched_debug.cpu.curr->pid.stddev
0.00 ± 97% -98.3% 0.00 ± 4% sched_debug.cpu.next_balance.stddev
0.12 ± 34% +144.4% 0.31 ± 15% sched_debug.cpu.nr_running.min
654832 +18.8% 777815 sched_debug.cpu.nr_switches.avg
904162 ± 2% -9.8% 815631 ± 2% sched_debug.cpu.nr_switches.max
574858 +31.5% 755656 sched_debug.cpu.nr_switches.min
82240 ± 3% -81.8% 14938 ± 13% sched_debug.cpu.nr_switches.stddev
263.38 ± 26% -67.9% 84.44 ± 16% sched_debug.cpu.nr_uninterruptible.max
-218.23 -78.1% -47.78 sched_debug.cpu.nr_uninterruptible.min
128.55 ± 24% -77.9% 28.46 ± 22% sched_debug.cpu.nr_uninterruptible.stddev
653092 +18.6% 774865 sched_debug.cpu.sched_count.avg
913676 ± 2% -11.5% 808187 ± 2% sched_debug.cpu.sched_count.max
573366 +31.5% 753744 sched_debug.cpu.sched_count.min
83073 ± 3% -83.4% 13777 ± 12% sched_debug.cpu.sched_count.stddev
285707 +18.5% 338620 sched_debug.cpu.sched_goidle.avg
397157 -11.6% 351276 ± 2% sched_debug.cpu.sched_goidle.max
243238 +36.3% 331500 sched_debug.cpu.sched_goidle.min
43632 ± 3% -88.0% 5215 ± 22% sched_debug.cpu.sched_goidle.stddev
276633 +183.4% 783865 ± 3% sched_debug.cpu.ttwu_count.avg
615804 ± 2% +249.5% 2152323 ± 6% sched_debug.cpu.ttwu_count.max
144528 ± 2% +324.1% 612922 ± 4% sched_debug.cpu.ttwu_count.stddev
7838 ± 4% +62.0% 12696 ± 6% sched_debug.cpu.ttwu_local.avg
23472 ± 2% -36.2% 14981 ± 4% sched_debug.cpu.ttwu_local.max
6517 ± 3% +77.0% 11535 ± 6% sched_debug.cpu.ttwu_local.min
1978 ± 8% -47.1% 1047 ± 21% sched_debug.cpu.ttwu_local.stddev
347742 -9.9% 313303 sched_debug.cpu_clk
347083 -9.9% 312643 sched_debug.ktime
348036 -9.9% 313599 sched_debug.sched_clk
116128 ± 2% -38.5% 71410 softirqs.CPU0.SCHED
91878 -27.9% 66231 softirqs.CPU1.SCHED
70366 ± 2% -21.3% 55400 softirqs.CPU10.SCHED
71066 ± 2% -22.0% 55446 softirqs.CPU11.SCHED
70819 ± 2% -21.8% 55357 softirqs.CPU12.SCHED
72824 ± 2% -22.9% 56169 ± 3% softirqs.CPU13.SCHED
72347 ± 2% -23.0% 55736 ± 3% softirqs.CPU14.SCHED
71040 -22.6% 54996 softirqs.CPU15.SCHED
73832 ± 2% -25.7% 54822 softirqs.CPU16.SCHED
73531 -24.7% 55359 softirqs.CPU17.SCHED
61336 -23.2% 47097 softirqs.CPU18.SCHED
61263 -23.3% 46993 softirqs.CPU19.SCHED
90645 ± 2% -24.7% 68276 softirqs.CPU2.SCHED
61313 ± 2% -23.3% 47013 softirqs.CPU20.SCHED
62197 -24.2% 47163 softirqs.CPU21.SCHED
64716 ± 2% -27.6% 46834 softirqs.CPU22.SCHED
63504 -26.3% 46803 softirqs.CPU23.SCHED
64003 ± 2% -26.5% 47071 softirqs.CPU24.SCHED
64911 -27.8% 46890 softirqs.CPU25.SCHED
55809 -27.1% 40707 softirqs.CPU26.SCHED
54152 -26.6% 39754 softirqs.CPU27.SCHED
54386 ± 2% -27.4% 39499 softirqs.CPU28.SCHED
54236 -27.4% 39401 softirqs.CPU29.SCHED
87173 -21.4% 68485 softirqs.CPU3.SCHED
55503 ± 2% -29.2% 39293 softirqs.CPU30.SCHED
56079 -29.9% 39303 softirqs.CPU31.SCHED
57664 -31.4% 39583 softirqs.CPU32.SCHED
55860 -29.6% 39347 softirqs.CPU33.SCHED
54670 -29.8% 38364 softirqs.CPU34.SCHED
57222 ± 2% -31.7% 39072 softirqs.CPU35.SCHED
45803 -30.9% 31647 softirqs.CPU36.SCHED
45558 ± 2% -30.5% 31645 softirqs.CPU37.SCHED
46190 ± 2% -32.1% 31368 softirqs.CPU38.SCHED
43936 ± 7% -28.9% 31240 softirqs.CPU39.SCHED
79246 -17.9% 65045 softirqs.CPU4.SCHED
46593 ± 2% -30.6% 32330 softirqs.CPU40.SCHED
46935 -31.7% 32080 softirqs.CPU41.SCHED
46761 -32.1% 31771 softirqs.CPU42.SCHED
44507 ± 2% -29.3% 31454 ± 2% softirqs.CPU43.SCHED
36826 ± 3% -32.9% 24725 softirqs.CPU44.SCHED
34982 ± 3% -30.7% 24230 softirqs.CPU45.SCHED
35262 ± 4% -30.9% 24363 softirqs.CPU46.SCHED
35716 -31.7% 24388 softirqs.CPU47.SCHED
34721 ± 3% -30.5% 24144 ± 2% softirqs.CPU48.SCHED
36083 ± 4% -32.6% 24324 softirqs.CPU49.SCHED
81693 ± 2% -20.4% 65005 softirqs.CPU5.SCHED
35183 -29.8% 24699 softirqs.CPU50.SCHED
35820 ± 2% -32.2% 24290 softirqs.CPU51.SCHED
33069 -33.1% 22117 ± 4% softirqs.CPU52.SCHED
34535 -29.6% 24296 softirqs.CPU53.SCHED
25349 -34.8% 16522 ± 6% softirqs.CPU54.SCHED
23599 ± 3% -27.7% 17051 ± 2% softirqs.CPU55.SCHED
24760 -30.9% 17107 ± 2% softirqs.CPU56.SCHED
24178 ± 4% -27.7% 17470 ± 3% softirqs.CPU57.SCHED
24307 ± 2% -30.2% 16973 softirqs.CPU58.SCHED
23460 ± 4% -27.2% 17090 softirqs.CPU59.SCHED
79700 -19.3% 64305 softirqs.CPU6.SCHED
24681 -33.2% 16485 ± 3% softirqs.CPU60.SCHED
24529 ± 3% -31.0% 16925 ± 2% softirqs.CPU61.SCHED
14304 ± 5% -29.3% 10109 softirqs.CPU62.SCHED
13347 ± 3% -29.5% 9408 ± 2% softirqs.CPU63.SCHED
10558 ± 2% -9.0% 9612 ± 3% softirqs.CPU64.RCU
14104 ± 4% -31.8% 9623 ± 2% softirqs.CPU64.SCHED
13844 ± 5% -28.8% 9862 softirqs.CPU65.SCHED
12902 ± 3% -25.5% 9610 softirqs.CPU66.SCHED
10891 ± 7% -9.6% 9842 ± 3% softirqs.CPU67.RCU
13872 ± 7% -31.3% 9526 ± 3% softirqs.CPU67.SCHED
13213 ± 8% -26.1% 9771 ± 2% softirqs.CPU68.SCHED
14232 ± 5% -32.4% 9625 ± 2% softirqs.CPU69.SCHED
81684 -21.4% 64189 softirqs.CPU7.SCHED
10183 ± 15% -25.0% 7633 ± 6% softirqs.CPU70.SCHED
13274 ± 6% -28.9% 9434 ± 3% softirqs.CPU71.SCHED
75679 -22.7% 58521 softirqs.CPU8.SCHED
68854 -20.6% 54638 softirqs.CPU9.SCHED
4101011 -22.9% 3160637 softirqs.SCHED
12836 -11.9% 11304 interrupts.180:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
4431149 -1.7% 4357713 interrupts.CAL:Function_call_interrupts
1266 ± 45% -46.8% 674.00 ± 19% interrupts.CPU0.NMI:Non-maskable_interrupts
1266 ± 45% -46.8% 674.00 ± 19% interrupts.CPU0.PMI:Performance_monitoring_interrupts
217128 ± 7% -30.5% 150952 ± 2% interrupts.CPU1.CAL:Function_call_interrupts
30529 ± 6% -38.9% 18640 ± 3% interrupts.CPU1.RES:Rescheduling_interrupts
3405 ± 8% -69.7% 1033 ± 7% interrupts.CPU10.RES:Rescheduling_interrupts
703.75 ± 21% -54.3% 321.75 ± 41% interrupts.CPU11.NMI:Non-maskable_interrupts
703.75 ± 21% -54.3% 321.75 ± 41% interrupts.CPU11.PMI:Performance_monitoring_interrupts
3372 ± 5% -67.9% 1081 ± 9% interrupts.CPU11.RES:Rescheduling_interrupts
3289 ± 10% -71.6% 934.50 ± 4% interrupts.CPU12.RES:Rescheduling_interrupts
102804 ± 6% +15.4% 118611 ± 8% interrupts.CPU13.CAL:Function_call_interrupts
3233 ± 8% -69.3% 994.25 ± 5% interrupts.CPU13.RES:Rescheduling_interrupts
1071 ± 57% -69.6% 325.25 ± 42% interrupts.CPU14.NMI:Non-maskable_interrupts
1071 ± 57% -69.6% 325.25 ± 42% interrupts.CPU14.PMI:Performance_monitoring_interrupts
3502 ± 8% -76.1% 838.50 ± 4% interrupts.CPU14.RES:Rescheduling_interrupts
820.25 ± 26% -58.6% 339.25 ± 36% interrupts.CPU15.NMI:Non-maskable_interrupts
820.25 ± 26% -58.6% 339.25 ± 36% interrupts.CPU15.PMI:Performance_monitoring_interrupts
3377 ± 8% -73.3% 901.25 ± 2% interrupts.CPU15.RES:Rescheduling_interrupts
9914 -57.3% 4237 ±100% interrupts.CPU16.180:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
850.25 ± 26% -56.5% 369.75 ± 55% interrupts.CPU16.NMI:Non-maskable_interrupts
850.25 ± 26% -56.5% 369.75 ± 55% interrupts.CPU16.PMI:Performance_monitoring_interrupts
3510 ± 7% -75.0% 876.00 ± 10% interrupts.CPU16.RES:Rescheduling_interrupts
1201 ± 40% -76.7% 280.00 ± 22% interrupts.CPU17.NMI:Non-maskable_interrupts
1201 ± 40% -76.7% 280.00 ± 22% interrupts.CPU17.PMI:Performance_monitoring_interrupts
3545 ± 9% -75.3% 875.50 ± 6% interrupts.CPU17.RES:Rescheduling_interrupts
683.00 ± 52% -49.2% 347.25 ± 29% interrupts.CPU18.NMI:Non-maskable_interrupts
683.00 ± 52% -49.2% 347.25 ± 29% interrupts.CPU18.PMI:Performance_monitoring_interrupts
2080 ± 12% -80.9% 398.00 ± 8% interrupts.CPU18.RES:Rescheduling_interrupts
488.75 ± 23% -47.6% 256.00 ± 12% interrupts.CPU19.NMI:Non-maskable_interrupts
488.75 ± 23% -47.6% 256.00 ± 12% interrupts.CPU19.PMI:Performance_monitoring_interrupts
2009 ± 7% -77.6% 450.50 ± 12% interrupts.CPU19.RES:Rescheduling_interrupts
191815 ± 8% -22.4% 148909 ± 4% interrupts.CPU2.CAL:Function_call_interrupts
611.25 ± 29% -58.6% 253.25 ± 26% interrupts.CPU2.NMI:Non-maskable_interrupts
611.25 ± 29% -58.6% 253.25 ± 26% interrupts.CPU2.PMI:Performance_monitoring_interrupts
774.75 ± 38% -67.3% 253.25 ± 32% interrupts.CPU20.NMI:Non-maskable_interrupts
774.75 ± 38% -67.3% 253.25 ± 32% interrupts.CPU20.PMI:Performance_monitoring_interrupts
2025 ± 3% -81.8% 369.50 ± 15% interrupts.CPU20.RES:Rescheduling_interrupts
2088 ± 3% -83.5% 345.00 ± 9% interrupts.CPU21.RES:Rescheduling_interrupts
2212 ± 11% -85.3% 325.25 ± 3% interrupts.CPU22.RES:Rescheduling_interrupts
892.50 ± 44% -60.7% 351.00 ± 59% interrupts.CPU23.NMI:Non-maskable_interrupts
892.50 ± 44% -60.7% 351.00 ± 59% interrupts.CPU23.PMI:Performance_monitoring_interrupts
2264 ± 10% -85.1% 337.25 ± 10% interrupts.CPU23.RES:Rescheduling_interrupts
65733 ± 5% +18.9% 78175 ± 5% interrupts.CPU24.CAL:Function_call_interrupts
1410 ± 26% -73.1% 379.50 ± 42% interrupts.CPU24.NMI:Non-maskable_interrupts
1410 ± 26% -73.1% 379.50 ± 42% interrupts.CPU24.PMI:Performance_monitoring_interrupts
2309 ± 2% -86.3% 315.25 ± 3% interrupts.CPU24.RES:Rescheduling_interrupts
63928 ± 2% +21.4% 77636 ± 5% interrupts.CPU25.CAL:Function_call_interrupts
870.50 ± 32% -58.4% 362.00 ± 31% interrupts.CPU25.NMI:Non-maskable_interrupts
870.50 ± 32% -58.4% 362.00 ± 31% interrupts.CPU25.PMI:Performance_monitoring_interrupts
2490 ± 10% -86.4% 338.50 ± 9% interrupts.CPU25.RES:Rescheduling_interrupts
808.75 ± 16% -44.3% 450.50 ± 63% interrupts.CPU26.NMI:Non-maskable_interrupts
808.75 ± 16% -44.3% 450.50 ± 63% interrupts.CPU26.PMI:Performance_monitoring_interrupts
1312 ± 9% -84.1% 208.25 ± 4% interrupts.CPU26.RES:Rescheduling_interrupts
1458 ± 44% -78.3% 316.75 ± 46% interrupts.CPU27.NMI:Non-maskable_interrupts
1458 ± 44% -78.3% 316.75 ± 46% interrupts.CPU27.PMI:Performance_monitoring_interrupts
1337 ± 5% -84.5% 207.25 ± 11% interrupts.CPU27.RES:Rescheduling_interrupts
1270 ± 4% -5.8% 1196 interrupts.CPU275.CAL:Function_call_interrupts
1118 ± 33% -73.0% 302.25 ± 17% interrupts.CPU28.NMI:Non-maskable_interrupts
1118 ± 33% -73.0% 302.25 ± 17% interrupts.CPU28.PMI:Performance_monitoring_interrupts
1195 ± 9% -82.0% 215.00 ± 4% interrupts.CPU28.RES:Rescheduling_interrupts
854.00 ± 19% -60.5% 337.25 ± 10% interrupts.CPU29.NMI:Non-maskable_interrupts
854.00 ± 19% -60.5% 337.25 ± 10% interrupts.CPU29.PMI:Performance_monitoring_interrupts
1190 ± 9% -80.2% 235.75 ± 14% interrupts.CPU29.RES:Rescheduling_interrupts
198710 ± 10% -22.2% 154622 ± 2% interrupts.CPU3.CAL:Function_call_interrupts
1122 ± 43% -51.0% 550.25 ± 23% interrupts.CPU30.NMI:Non-maskable_interrupts
1122 ± 43% -51.0% 550.25 ± 23% interrupts.CPU30.PMI:Performance_monitoring_interrupts
1369 ± 11% -85.7% 195.25 ± 12% interrupts.CPU30.RES:Rescheduling_interrupts
1307 ± 38% -54.7% 591.75 ± 65% interrupts.CPU31.NMI:Non-maskable_interrupts
1307 ± 38% -54.7% 591.75 ± 65% interrupts.CPU31.PMI:Performance_monitoring_interrupts
1316 ± 9% -84.4% 205.25 ± 10% interrupts.CPU31.RES:Rescheduling_interrupts
1303 ± 29% -79.1% 272.25 ± 20% interrupts.CPU32.NMI:Non-maskable_interrupts
1303 ± 29% -79.1% 272.25 ± 20% interrupts.CPU32.PMI:Performance_monitoring_interrupts
1361 ± 4% -84.5% 210.75 ± 8% interrupts.CPU32.RES:Rescheduling_interrupts
863.25 ± 16% -63.1% 318.25 ± 34% interrupts.CPU33.NMI:Non-maskable_interrupts
863.25 ± 16% -63.1% 318.25 ± 34% interrupts.CPU33.PMI:Performance_monitoring_interrupts
1424 ± 7% -81.6% 261.50 ± 35% interrupts.CPU33.RES:Rescheduling_interrupts
40521 ± 8% +34.8% 54611 ± 13% interrupts.CPU34.CAL:Function_call_interrupts
1060 ± 49% -50.9% 520.75 ± 6% interrupts.CPU34.NMI:Non-maskable_interrupts
1060 ± 49% -50.9% 520.75 ± 6% interrupts.CPU34.PMI:Performance_monitoring_interrupts
1460 ± 8% -83.4% 242.50 ± 26% interrupts.CPU34.RES:Rescheduling_interrupts
42549 ± 6% +27.2% 54118 ± 15% interrupts.CPU35.CAL:Function_call_interrupts
1108 ± 21% -45.5% 604.25 ± 38% interrupts.CPU35.NMI:Non-maskable_interrupts
1108 ± 21% -45.5% 604.25 ± 38% interrupts.CPU35.PMI:Performance_monitoring_interrupts
1490 ± 11% -84.3% 234.25 ± 21% interrupts.CPU35.RES:Rescheduling_interrupts
857.25 ± 17% -48.6% 441.00 ± 39% interrupts.CPU36.NMI:Non-maskable_interrupts
857.25 ± 17% -48.6% 441.00 ± 39% interrupts.CPU36.PMI:Performance_monitoring_interrupts
921.50 ± 2% -85.5% 133.75 ± 2% interrupts.CPU36.RES:Rescheduling_interrupts
1107 ± 44% -55.3% 494.75 ± 49% interrupts.CPU37.NMI:Non-maskable_interrupts
1107 ± 44% -55.3% 494.75 ± 49% interrupts.CPU37.PMI:Performance_monitoring_interrupts
918.50 ± 3% -83.8% 149.00 ± 15% interrupts.CPU37.RES:Rescheduling_interrupts
982.00 ± 3% -85.0% 147.25 ± 19% interrupts.CPU38.RES:Rescheduling_interrupts
990.50 ± 4% -86.0% 139.00 ± 17% interrupts.CPU39.RES:Rescheduling_interrupts
170783 ± 10% -14.4% 146206 ± 6% interrupts.CPU4.CAL:Function_call_interrupts
1628 ± 35% -79.3% 337.75 ± 47% interrupts.CPU40.NMI:Non-maskable_interrupts
1628 ± 35% -79.3% 337.75 ± 47% interrupts.CPU40.PMI:Performance_monitoring_interrupts
984.00 ± 6% -86.0% 138.25 ± 11% interrupts.CPU40.RES:Rescheduling_interrupts
1080 ± 36% -63.9% 389.75 ± 41% interrupts.CPU41.NMI:Non-maskable_interrupts
1080 ± 36% -63.9% 389.75 ± 41% interrupts.CPU41.PMI:Performance_monitoring_interrupts
952.75 ± 7% -84.2% 150.25 ± 8% interrupts.CPU41.RES:Rescheduling_interrupts
1045 ± 35% -56.9% 450.25 ± 44% interrupts.CPU42.NMI:Non-maskable_interrupts
1045 ± 35% -56.9% 450.25 ± 44% interrupts.CPU42.PMI:Performance_monitoring_interrupts
983.75 ± 2% -86.7% 131.00 ± 6% interrupts.CPU42.RES:Rescheduling_interrupts
939.50 ± 29% -53.6% 435.75 ± 44% interrupts.CPU43.NMI:Non-maskable_interrupts
939.50 ± 29% -53.6% 435.75 ± 44% interrupts.CPU43.PMI:Performance_monitoring_interrupts
963.50 ± 3% -86.8% 126.75 ± 15% interrupts.CPU43.RES:Rescheduling_interrupts
27228 -33.3% 18149 ± 6% interrupts.CPU44.CAL:Function_call_interrupts
1334 ± 32% -67.4% 434.25 ± 40% interrupts.CPU44.NMI:Non-maskable_interrupts
1334 ± 32% -67.4% 434.25 ± 40% interrupts.CPU44.PMI:Performance_monitoring_interrupts
668.25 ± 8% -87.2% 85.75 ± 6% interrupts.CPU44.RES:Rescheduling_interrupts
20504 ± 6% -25.8% 15212 ± 2% interrupts.CPU45.CAL:Function_call_interrupts
869.75 ± 41% -62.5% 326.25 ± 43% interrupts.CPU45.NMI:Non-maskable_interrupts
869.75 ± 41% -62.5% 326.25 ± 43% interrupts.CPU45.PMI:Performance_monitoring_interrupts
654.75 ± 2% -89.5% 68.75 ± 12% interrupts.CPU45.RES:Rescheduling_interrupts
803.25 ± 15% -50.6% 397.00 ± 54% interrupts.CPU46.NMI:Non-maskable_interrupts
803.25 ± 15% -50.6% 397.00 ± 54% interrupts.CPU46.PMI:Performance_monitoring_interrupts
687.25 ± 10% -87.3% 87.25 ± 16% interrupts.CPU46.RES:Rescheduling_interrupts
1364 ± 15% -72.6% 374.50 ± 48% interrupts.CPU47.NMI:Non-maskable_interrupts
1364 ± 15% -72.6% 374.50 ± 48% interrupts.CPU47.PMI:Performance_monitoring_interrupts
688.75 ± 4% -84.4% 107.25 ± 16% interrupts.CPU47.RES:Rescheduling_interrupts
1290 ± 40% -70.4% 382.25 ± 51% interrupts.CPU48.NMI:Non-maskable_interrupts
1290 ± 40% -70.4% 382.25 ± 51% interrupts.CPU48.PMI:Performance_monitoring_interrupts
701.75 ± 5% -88.3% 82.25 ± 18% interrupts.CPU48.RES:Rescheduling_interrupts
1163 ± 31% -76.9% 268.50 ± 31% interrupts.CPU49.NMI:Non-maskable_interrupts
1163 ± 31% -76.9% 268.50 ± 31% interrupts.CPU49.PMI:Performance_monitoring_interrupts
704.75 ± 5% -88.6% 80.25 ± 6% interrupts.CPU49.RES:Rescheduling_interrupts
166242 ± 6% -13.7% 143496 ± 6% interrupts.CPU5.CAL:Function_call_interrupts
7461 ± 8% +25.9% 9396 ± 11% interrupts.CPU5.RES:Rescheduling_interrupts
706.25 ± 2% -88.1% 83.75 ± 15% interrupts.CPU50.RES:Rescheduling_interrupts
21177 ± 4% +11.3% 23562 ± 4% interrupts.CPU51.CAL:Function_call_interrupts
1075 ± 32% -63.1% 397.25 ± 46% interrupts.CPU51.NMI:Non-maskable_interrupts
1075 ± 32% -63.1% 397.25 ± 46% interrupts.CPU51.PMI:Performance_monitoring_interrupts
718.00 ± 5% -86.4% 97.50 ± 20% interrupts.CPU51.RES:Rescheduling_interrupts
680.75 ± 4% -86.7% 90.75 ± 22% interrupts.CPU52.RES:Rescheduling_interrupts
702.00 ± 5% -88.2% 83.00 ± 10% interrupts.CPU53.RES:Rescheduling_interrupts
1638 ± 39% -72.0% 458.25 ± 55% interrupts.CPU54.NMI:Non-maskable_interrupts
1638 ± 39% -72.0% 458.25 ± 55% interrupts.CPU54.PMI:Performance_monitoring_interrupts
388.75 ± 7% -86.9% 50.75 ± 7% interrupts.CPU54.RES:Rescheduling_interrupts
993.50 ± 26% -47.5% 521.50 ± 24% interrupts.CPU55.NMI:Non-maskable_interrupts
993.50 ± 26% -47.5% 521.50 ± 24% interrupts.CPU55.PMI:Performance_monitoring_interrupts
417.50 ± 7% -86.9% 54.75 ± 22% interrupts.CPU55.RES:Rescheduling_interrupts
1356 ± 22% -70.2% 404.00 ± 52% interrupts.CPU56.NMI:Non-maskable_interrupts
1356 ± 22% -70.2% 404.00 ± 52% interrupts.CPU56.PMI:Performance_monitoring_interrupts
394.50 ± 9% -87.8% 48.25 ± 11% interrupts.CPU56.RES:Rescheduling_interrupts
1213 ± 40% -64.4% 432.00 ± 43% interrupts.CPU57.NMI:Non-maskable_interrupts
1213 ± 40% -64.4% 432.00 ± 43% interrupts.CPU57.PMI:Performance_monitoring_interrupts
399.25 -85.0% 59.75 ± 20% interrupts.CPU57.RES:Rescheduling_interrupts
1067 ± 65% -72.4% 295.00 ± 57% interrupts.CPU58.NMI:Non-maskable_interrupts
1067 ± 65% -72.4% 295.00 ± 57% interrupts.CPU58.PMI:Performance_monitoring_interrupts
479.25 ± 2% -87.6% 59.25 ± 17% interrupts.CPU58.RES:Rescheduling_interrupts
1127 ± 41% -69.8% 341.00 ± 42% interrupts.CPU59.NMI:Non-maskable_interrupts
1127 ± 41% -69.8% 341.00 ± 42% interrupts.CPU59.PMI:Performance_monitoring_interrupts
437.25 ± 2% -88.3% 51.25 ± 32% interrupts.CPU59.RES:Rescheduling_interrupts
165868 ± 4% -11.1% 147461 ± 3% interrupts.CPU6.CAL:Function_call_interrupts
1195 ± 23% -67.7% 386.50 ± 61% interrupts.CPU6.NMI:Non-maskable_interrupts
1195 ± 23% -67.7% 386.50 ± 61% interrupts.CPU6.PMI:Performance_monitoring_interrupts
7147 ± 7% -30.6% 4959 ± 15% interrupts.CPU6.RES:Rescheduling_interrupts
111.75 ± 4% -28.6% 79.75 ± 27% interrupts.CPU6.TLB:TLB_shootdowns
1426 ± 21% -58.7% 589.00 ± 44% interrupts.CPU60.NMI:Non-maskable_interrupts
1426 ± 21% -58.7% 589.00 ± 44% interrupts.CPU60.PMI:Performance_monitoring_interrupts
410.25 ± 9% -88.0% 49.25 ± 10% interrupts.CPU60.RES:Rescheduling_interrupts
1022 ± 29% -68.2% 325.50 ± 43% interrupts.CPU61.NMI:Non-maskable_interrupts
1022 ± 29% -68.2% 325.50 ± 43% interrupts.CPU61.PMI:Performance_monitoring_interrupts
385.75 ± 8% -88.0% 46.25 ± 8% interrupts.CPU61.RES:Rescheduling_interrupts
12518 ± 11% -40.4% 7455 ± 29% interrupts.CPU62.CAL:Function_call_interrupts
1018 ± 20% -71.2% 293.00 ± 46% interrupts.CPU62.NMI:Non-maskable_interrupts
1018 ± 20% -71.2% 293.00 ± 46% interrupts.CPU62.PMI:Performance_monitoring_interrupts
207.25 ± 6% -82.8% 35.75 ± 13% interrupts.CPU62.RES:Rescheduling_interrupts
186.75 ± 3% -77.6% 41.75 ± 49% interrupts.CPU63.RES:Rescheduling_interrupts
7639 ± 3% -16.4% 6387 ± 11% interrupts.CPU64.CAL:Function_call_interrupts
194.00 ± 11% -86.1% 27.00 ± 4% interrupts.CPU64.RES:Rescheduling_interrupts
7523 ± 2% -16.6% 6277 ± 8% interrupts.CPU65.CAL:Function_call_interrupts
198.25 ± 12% -89.7% 20.50 ± 24% interrupts.CPU65.RES:Rescheduling_interrupts
7440 ± 11% -39.0% 4540 ± 26% interrupts.CPU66.CAL:Function_call_interrupts
201.50 ± 12% -90.8% 18.50 ± 9% interrupts.CPU66.RES:Rescheduling_interrupts
8240 ± 17% -43.8% 4631 ± 26% interrupts.CPU67.CAL:Function_call_interrupts
197.25 ± 14% -85.4% 28.75 ± 18% interrupts.CPU67.RES:Rescheduling_interrupts
181.75 ± 17% -84.2% 28.75 ± 27% interrupts.CPU68.RES:Rescheduling_interrupts
168.25 ± 19% -83.1% 28.50 ± 14% interrupts.CPU69.RES:Rescheduling_interrupts
161290 ± 3% -8.3% 147840 ± 5% interrupts.CPU7.CAL:Function_call_interrupts
705.75 ± 16% -54.2% 323.25 ± 9% interrupts.CPU7.NMI:Non-maskable_interrupts
705.75 ± 16% -54.2% 323.25 ± 9% interrupts.CPU7.PMI:Performance_monitoring_interrupts
7207 ± 8% -37.0% 4541 ± 10% interrupts.CPU7.RES:Rescheduling_interrupts
155.25 ± 24% -78.4% 33.50 ± 20% interrupts.CPU70.RES:Rescheduling_interrupts
132.00 ± 20% -77.5% 29.75 ± 23% interrupts.CPU71.RES:Rescheduling_interrupts
18308 ± 4% +12.9% 20662 ± 5% interrupts.CPU73.LOC:Local_timer_interrupts
3295 ± 17% -59.6% 1332 ± 14% interrupts.CPU8.RES:Rescheduling_interrupts
27403 ± 11% -14.1% 23549 ± 3% interrupts.CPU80.LOC:Local_timer_interrupts
26422 ± 2% +11.4% 29433 ± 7% interrupts.CPU89.LOC:Local_timer_interrupts
3387 ± 16% -64.7% 1194 ± 12% interrupts.CPU9.RES:Rescheduling_interrupts
63112 ± 9% -57.3% 26977 ± 21% interrupts.NMI:Non-maskable_interrupts
63112 ± 9% -57.3% 26977 ± 21% interrupts.PMI:Performance_monitoring_interrupts
196263 -39.9% 117875 ± 2% interrupts.RES:Rescheduling_interrupts
58.40 ± 4% -29.2 29.23 ± 35% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
58.38 ± 4% -29.2 29.22 ± 35% perf-profile.calltrace.cycles-pp.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
58.04 ± 4% -29.0 29.02 ± 35% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
58.01 ± 4% -29.0 29.00 ± 35% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
59.31 ± 4% -29.0 30.33 ± 35% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
58.76 ± 4% -28.9 29.88 ± 34% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
54.36 ± 4% -28.1 26.21 ± 34% perf-profile.calltrace.cycles-pp.btrfs_create.path_openat.do_filp_open.do_sys_openat2.do_sys_open
39.06 ± 4% -23.6 15.48 ± 39% perf-profile.calltrace.cycles-pp.btrfs_insert_empty_items.btrfs_new_inode.btrfs_create.path_openat.do_filp_open
38.60 ± 4% -23.3 15.30 ± 39% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_insert_empty_items.btrfs_new_inode.btrfs_create.path_openat
40.56 ± 4% -22.0 18.57 ± 30% perf-profile.calltrace.cycles-pp.btrfs_new_inode.btrfs_create.path_openat.do_filp_open.do_sys_openat2
12.21 ± 4% -12.2 0.00 perf-profile.calltrace.cycles-pp.btrfs_try_tree_write_lock.btrfs_search_slot.btrfs_insert_empty_items.btrfs_new_inode.btrfs_create
12.14 ± 4% -12.1 0.00 perf-profile.calltrace.cycles-pp.queued_write_lock_slowpath.btrfs_try_tree_write_lock.btrfs_search_slot.btrfs_insert_empty_items.btrfs_new_inode
11.54 ± 4% -11.5 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_write_lock_slowpath.btrfs_try_tree_write_lock.btrfs_search_slot.btrfs_insert_empty_items
11.54 ± 8% -11.5 0.00 perf-profile.calltrace.cycles-pp.queued_write_lock_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_insert_empty_items.btrfs_new_inode
10.81 ± 8% -10.8 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_write_lock_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_insert_empty_items
12.88 ± 8% -9.9 3.00 ± 22% perf-profile.calltrace.cycles-pp.__btrfs_commit_inode_delayed_items.btrfs_async_run_delayed_root.btrfs_work_helper.process_one_work.worker_thread
23.95 ± 5% -9.8 14.11 ± 38% perf-profile.calltrace.cycles-pp.__btrfs_tree_lock.btrfs_search_slot.btrfs_insert_empty_items.btrfs_new_inode.btrfs_create
12.14 ± 8% -9.8 2.36 ± 25% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_inode.__btrfs_update_delayed_inode.__btrfs_commit_inode_delayed_items.btrfs_async_run_delayed_root
12.14 ± 8% -9.8 2.38 ± 25% perf-profile.calltrace.cycles-pp.btrfs_lookup_inode.__btrfs_update_delayed_inode.__btrfs_commit_inode_delayed_items.btrfs_async_run_delayed_root.btrfs_work_helper
12.55 ± 8% -9.6 2.91 ± 23% perf-profile.calltrace.cycles-pp.__btrfs_update_delayed_inode.__btrfs_commit_inode_delayed_items.btrfs_async_run_delayed_root.btrfs_work_helper.process_one_work
13.85 ± 8% -9.4 4.47 ± 25% perf-profile.calltrace.cycles-pp.btrfs_work_helper.process_one_work.worker_thread.kthread.ret_from_fork
13.84 ± 8% -9.4 4.47 ± 26% perf-profile.calltrace.cycles-pp.btrfs_async_run_delayed_root.btrfs_work_helper.process_one_work.worker_thread.kthread
14.60 ± 8% -8.7 5.94 ± 19% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
14.62 ± 8% -8.6 5.99 ± 19% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
10.52 ± 3% -7.9 2.60 ± 46% perf-profile.calltrace.cycles-pp.insert_with_overflow.btrfs_insert_dir_item.btrfs_add_link.btrfs_create.path_openat
10.49 ± 3% -7.9 2.58 ± 46% perf-profile.calltrace.cycles-pp.btrfs_insert_empty_items.insert_with_overflow.btrfs_insert_dir_item.btrfs_add_link.btrfs_create
9.61 ± 3% -7.7 1.92 ± 45% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_insert_empty_items.insert_with_overflow.btrfs_insert_dir_item.btrfs_add_link
7.57 ± 5% -7.6 0.00 perf-profile.calltrace.cycles-pp.prepare_to_wait_event.__btrfs_tree_lock.btrfs_search_slot.btrfs_insert_empty_items.btrfs_new_inode
11.34 ± 3% -7.2 4.13 ± 49% perf-profile.calltrace.cycles-pp.btrfs_insert_dir_item.btrfs_add_link.btrfs_create.path_openat.do_filp_open
7.18 ± 5% -7.2 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait_event.__btrfs_tree_lock.btrfs_search_slot.btrfs_insert_empty_items
7.11 ± 5% -7.1 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait_event.__btrfs_tree_lock.btrfs_search_slot
7.01 ± 2% -7.0 0.00 perf-profile.calltrace.cycles-pp.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_insert_empty_items.insert_with_overflow
7.01 ± 2% -7.0 0.00 perf-profile.calltrace.cycles-pp.btrfs_lock_root_node.btrfs_search_slot.btrfs_insert_empty_items.insert_with_overflow.btrfs_insert_dir_item
11.72 ± 3% -6.2 5.53 ± 55% perf-profile.calltrace.cycles-pp.btrfs_add_link.btrfs_create.path_openat.do_filp_open.do_sys_openat2
5.93 ± 9% -4.9 0.98 ± 70% perf-profile.calltrace.cycles-pp.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_inode.__btrfs_update_delayed_inode.__btrfs_commit_inode_delayed_items
17.35 ± 8% -3.5 13.82 ± 6% perf-profile.calltrace.cycles-pp.ret_from_fork
17.35 ± 8% -3.5 13.82 ± 6% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.27 ± 18% -0.8 0.42 ±101% perf-profile.calltrace.cycles-pp.__btrfs_read_lock_root_node.btrfs_search_slot.btrfs_insert_empty_items.insert_with_overflow.btrfs_insert_dir_item
1.23 ± 20% -0.8 0.40 ±101% perf-profile.calltrace.cycles-pp.__btrfs_tree_read_lock.__btrfs_read_lock_root_node.btrfs_search_slot.btrfs_insert_empty_items.insert_with_overflow
0.00 +0.6 0.62 ± 13% perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.rebalance_domains.__softirqentry_text_start.asm_call_sysvec_on_stack
0.00 +0.6 0.63 ± 16% perf-profile.calltrace.cycles-pp._raw_spin_trylock.shrink_lock_dentry.shrink_dentry_list.prune_dcache_sb.super_cache_scan
0.47 ± 58% +0.8 1.31 ± 44% perf-profile.calltrace.cycles-pp.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +0.9 0.85 ± 24% perf-profile.calltrace.cycles-pp.list_lru_add.inode_lru_list_add.iput.__dentry_kill.shrink_dentry_list
0.00 +0.9 0.87 ± 24% perf-profile.calltrace.cycles-pp.inode_lru_list_add.iput.__dentry_kill.shrink_dentry_list.prune_dcache_sb
0.00 +0.9 0.90 ± 13% perf-profile.calltrace.cycles-pp.load_balance.rebalance_domains.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack
0.00 +0.9 0.90 ± 16% perf-profile.calltrace.cycles-pp.dentry_lru_isolate.__list_lru_walk_one.list_lru_walk_one.prune_dcache_sb.super_cache_scan
0.28 ±100% +0.9 1.19 ± 12% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
0.00 +0.9 0.94 ± 17% perf-profile.calltrace.cycles-pp.__list_lru_walk_one.list_lru_walk_one.prune_dcache_sb.super_cache_scan.do_shrink_slab
0.00 +0.9 0.94 ± 17% perf-profile.calltrace.cycles-pp.list_lru_walk_one.prune_dcache_sb.super_cache_scan.do_shrink_slab.shrink_slab
0.00 +0.9 0.94 ± 19% perf-profile.calltrace.cycles-pp.___d_drop.__d_drop.__dentry_kill.shrink_dentry_list.prune_dcache_sb
0.00 +0.9 0.95 ± 19% perf-profile.calltrace.cycles-pp.shrink_lock_dentry.shrink_dentry_list.prune_dcache_sb.super_cache_scan.do_shrink_slab
0.14 ±173% +0.9 1.08 ± 13% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu
0.00 +0.9 0.95 ± 19% perf-profile.calltrace.cycles-pp.__d_drop.__dentry_kill.shrink_dentry_list.prune_dcache_sb.super_cache_scan
0.22 ±173% +1.0 1.20 ± 33% perf-profile.calltrace.cycles-pp.btrfs_get_32.check_leaf.btree_csum_one_bio.btrfs_submit_metadata_bio.submit_one_bio
0.00 +1.0 1.02 ± 23% perf-profile.calltrace.cycles-pp.btrfs_drop_extent_cache.btrfs_destroy_inode.destroy_inode.dispose_list.prune_icache_sb
0.80 ± 9% +1.1 1.91 ± 13% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.00 +1.1 1.14 ± 25% perf-profile.calltrace.cycles-pp.ktime_get.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack
0.26 ±173% +1.1 1.40 ± 33% perf-profile.calltrace.cycles-pp.check_dir_item.check_leaf.btree_csum_one_bio.btrfs_submit_metadata_bio.submit_one_bio
0.94 ± 9% +1.2 2.12 ± 13% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt
0.40 ± 58% +1.2 1.59 ± 25% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt
0.94 ± 9% +1.2 2.13 ± 13% perf-profile.calltrace.cycles-pp.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.95 ± 9% +1.2 2.17 ± 13% perf-profile.calltrace.cycles-pp.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.00 +1.3 1.26 ± 26% perf-profile.calltrace.cycles-pp.__btrfs_release_delayed_node.btrfs_evict_inode.evict.dispose_list.prune_icache_sb
0.00 +1.3 1.33 ± 26% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.28 ±100% +1.3 1.62 ± 23% perf-profile.calltrace.cycles-pp.kmem_cache_free.rcu_do_batch.rcu_core.__softirqentry_text_start.run_ksoftirqd
0.46 ± 58% +1.5 1.92 ± 21% perf-profile.calltrace.cycles-pp.iput.__dentry_kill.shrink_dentry_list.prune_dcache_sb.super_cache_scan
1.12 ± 9% +1.5 2.65 ± 16% perf-profile.calltrace.cycles-pp.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.56 ± 58% +1.7 2.23 ± 21% perf-profile.calltrace.cycles-pp.rcu_do_batch.rcu_core.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn
0.56 ± 58% +1.7 2.25 ± 21% perf-profile.calltrace.cycles-pp.rcu_core.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn.kthread
0.78 ± 21% +1.7 2.50 ± 20% perf-profile.calltrace.cycles-pp.smpboot_thread_fn.kthread.ret_from_fork
0.94 ± 5% +1.7 2.68 ± 24% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.62 ± 59% +1.8 2.44 ± 21% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
0.62 ± 58% +1.8 2.44 ± 21% perf-profile.calltrace.cycles-pp.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
1.02 ± 5% +1.9 2.92 ± 23% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
0.96 ± 26% +2.1 3.03 ± 25% perf-profile.calltrace.cycles-pp.btrfs_destroy_inode.destroy_inode.dispose_list.prune_icache_sb.super_cache_scan
0.81 ± 22% +2.1 2.93 ± 26% perf-profile.calltrace.cycles-pp.btrfs_evict_inode.evict.dispose_list.prune_icache_sb.super_cache_scan
1.81 ± 2% +2.3 4.13 ± 12% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
1.33 ± 9% +2.4 3.74 ± 25% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack
1.10 ± 25% +2.4 3.54 ± 26% perf-profile.calltrace.cycles-pp.destroy_inode.dispose_list.prune_icache_sb.super_cache_scan.do_shrink_slab
1.51 ± 63% +2.7 4.24 ± 32% perf-profile.calltrace.cycles-pp.submit_extent_page.write_one_eb.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range
1.35 ± 14% +2.7 4.10 ± 20% perf-profile.calltrace.cycles-pp.__dentry_kill.shrink_dentry_list.prune_dcache_sb.super_cache_scan.do_shrink_slab
1.25 ± 80% +2.8 4.01 ± 28% perf-profile.calltrace.cycles-pp.check_leaf.btree_csum_one_bio.btrfs_submit_metadata_bio.submit_one_bio.submit_extent_page
1.58 ± 63% +2.8 4.43 ± 32% perf-profile.calltrace.cycles-pp.write_one_eb.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents
1.30 ± 80% +2.9 4.16 ± 28% perf-profile.calltrace.cycles-pp.btree_csum_one_bio.btrfs_submit_metadata_bio.submit_one_bio.submit_extent_page.write_one_eb
1.15 ± 21% +2.9 4.04 ± 26% perf-profile.calltrace.cycles-pp.evict.dispose_list.prune_icache_sb.super_cache_scan.do_shrink_slab
1.64 ± 63% +3.0 4.60 ± 32% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.transaction_kthread
1.64 ± 63% +3.0 4.60 ± 32% perf-profile.calltrace.cycles-pp.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction
1.64 ± 63% +3.0 4.60 ± 32% perf-profile.calltrace.cycles-pp.btrfs_write_and_wait_transaction.btrfs_commit_transaction.transaction_kthread.kthread.ret_from_fork
1.64 ± 63% +3.0 4.60 ± 32% perf-profile.calltrace.cycles-pp.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.transaction_kthread.kthread
1.64 ± 63% +3.0 4.60 ± 32% perf-profile.calltrace.cycles-pp.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction
1.36 ± 80% +3.0 4.39 ± 27% perf-profile.calltrace.cycles-pp.btrfs_submit_metadata_bio.submit_one_bio.submit_extent_page.write_one_eb.btree_write_cache_pages
1.36 ± 80% +3.0 4.39 ± 27% perf-profile.calltrace.cycles-pp.submit_one_bio.submit_extent_page.write_one_eb.btree_write_cache_pages.do_writepages
1.90 ± 60% +3.3 5.20 ± 29% perf-profile.calltrace.cycles-pp.transaction_kthread.kthread.ret_from_fork
1.90 ± 60% +3.3 5.20 ± 29% perf-profile.calltrace.cycles-pp.btrfs_commit_transaction.transaction_kthread.kthread.ret_from_fork
2.09 ± 8% +3.8 5.86 ± 25% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt
1.88 ± 15% +3.8 5.71 ± 20% perf-profile.calltrace.cycles-pp.shrink_dentry_list.prune_dcache_sb.super_cache_scan.do_shrink_slab.shrink_slab
2.18 ± 15% +4.5 6.66 ± 19% perf-profile.calltrace.cycles-pp.prune_dcache_sb.super_cache_scan.do_shrink_slab.shrink_slab.drop_slab_node
2.97 ± 10% +5.5 8.45 ± 20% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
2.37 ± 23% +5.7 8.06 ± 26% perf-profile.calltrace.cycles-pp.dispose_list.prune_icache_sb.super_cache_scan.do_shrink_slab.shrink_slab
3.33 ± 10% +6.0 9.29 ± 25% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
2.56 ± 23% +6.1 8.66 ± 25% perf-profile.calltrace.cycles-pp.prune_icache_sb.super_cache_scan.do_shrink_slab.shrink_slab.drop_slab_node
3.50 ± 10% +6.2 9.73 ± 25% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
3.53 ± 10% +6.3 9.84 ± 25% perf-profile.calltrace.cycles-pp.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
5.45 ± 11% +9.1 14.58 ± 23% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
5.72 ± 11% +9.9 15.63 ± 24% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
4.74 ± 19% +10.6 15.33 ± 23% perf-profile.calltrace.cycles-pp.super_cache_scan.do_shrink_slab.shrink_slab.drop_slab_node.drop_slab
4.74 ± 19% +10.6 15.34 ± 23% perf-profile.calltrace.cycles-pp.drop_slab.drop_caches_sysctl_handler.proc_sys_call_handler.new_sync_write.vfs_write
4.74 ± 19% +10.6 15.34 ± 23% perf-profile.calltrace.cycles-pp.drop_slab_node.drop_slab.drop_caches_sysctl_handler.proc_sys_call_handler.new_sync_write
4.74 ± 19% +10.6 15.34 ± 23% perf-profile.calltrace.cycles-pp.shrink_slab.drop_slab_node.drop_slab.drop_caches_sysctl_handler.proc_sys_call_handler
4.74 ± 19% +10.6 15.34 ± 23% perf-profile.calltrace.cycles-pp.do_shrink_slab.shrink_slab.drop_slab_node.drop_slab.drop_caches_sysctl_handler
5.67 ± 21% +12.2 17.87 ± 24% perf-profile.calltrace.cycles-pp.proc_sys_call_handler.new_sync_write.vfs_write.ksys_write.do_syscall_64
5.67 ± 21% +12.2 17.87 ± 24% perf-profile.calltrace.cycles-pp.drop_caches_sysctl_handler.proc_sys_call_handler.new_sync_write.vfs_write.ksys_write
5.72 ± 21% +12.3 17.99 ± 24% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.72 ± 21% +12.3 18.00 ± 24% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
5.72 ± 21% +12.3 18.00 ± 24% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
5.72 ± 21% +12.3 18.00 ± 24% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
5.72 ± 21% +12.3 18.00 ± 24% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
5.73 ± 21% +12.3 18.02 ± 24% perf-profile.calltrace.cycles-pp.write
0.00 +13.3 13.27 ± 37% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot
0.00 +14.0 13.98 ± 38% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_insert_empty_items
0.00 +14.1 14.08 ± 38% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_insert_empty_items.btrfs_new_inode
9.40 ± 10% +15.7 25.15 ± 21% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
9.62 ± 10% +16.1 25.75 ± 21% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
15.73 ± 5% +18.0 33.75 ± 15% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
15.76 ± 5% +18.1 33.82 ± 15% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
15.76 ± 5% +18.1 33.82 ± 15% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
15.99 ± 5% +18.4 34.38 ± 15% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
53.31 ± 3% -46.2 7.08 ± 42% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
62.27 ± 4% -41.1 21.16 ± 36% perf-profile.children.cycles-pp.btrfs_search_slot
50.06 ± 4% -31.6 18.44 ± 39% perf-profile.children.cycles-pp.btrfs_insert_empty_items
58.46 ± 4% -29.1 29.38 ± 35% perf-profile.children.cycles-pp.do_sys_open
58.45 ± 4% -29.1 29.36 ± 35% perf-profile.children.cycles-pp.do_sys_openat2
58.10 ± 4% -28.9 29.18 ± 35% perf-profile.children.cycles-pp.do_filp_open
58.07 ± 4% -28.9 29.16 ± 35% perf-profile.children.cycles-pp.path_openat
54.36 ± 4% -28.1 26.22 ± 34% perf-profile.children.cycles-pp.btrfs_create
27.83 ± 3% -27.8 0.00 perf-profile.children.cycles-pp.queued_write_lock_slowpath
40.57 ± 4% -22.0 18.57 ± 30% perf-profile.children.cycles-pp.btrfs_new_inode
31.53 ± 4% -16.9 14.66 ± 37% perf-profile.children.cycles-pp.__btrfs_tree_lock
17.10 ± 5% -16.3 0.81 ± 13% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
65.59 ± 3% -15.9 49.73 ± 12% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
65.01 ± 2% -15.8 49.22 ± 11% perf-profile.children.cycles-pp.do_syscall_64
12.58 ± 4% -12.5 0.06 ± 34% perf-profile.children.cycles-pp.btrfs_try_tree_write_lock
11.61 ± 5% -11.6 0.00 perf-profile.children.cycles-pp.prepare_to_wait_event
13.37 ± 8% -10.0 3.40 ± 23% perf-profile.children.cycles-pp.__btrfs_commit_inode_delayed_items
12.15 ± 8% -9.8 2.38 ± 25% perf-profile.children.cycles-pp.btrfs_lookup_inode
9.67 ± 7% -9.7 0.00 perf-profile.children.cycles-pp.queued_read_lock_slowpath
12.56 ± 8% -9.6 2.92 ± 22% perf-profile.children.cycles-pp.__btrfs_update_delayed_inode
13.85 ± 8% -9.4 4.47 ± 26% perf-profile.children.cycles-pp.btrfs_async_run_delayed_root
13.85 ± 8% -9.4 4.47 ± 25% perf-profile.children.cycles-pp.btrfs_work_helper
14.60 ± 8% -8.7 5.94 ± 19% perf-profile.children.cycles-pp.process_one_work
14.62 ± 8% -8.6 5.99 ± 19% perf-profile.children.cycles-pp.worker_thread
10.52 ± 3% -7.9 2.60 ± 46% perf-profile.children.cycles-pp.insert_with_overflow
7.55 ± 2% -7.4 0.12 ± 47% perf-profile.children.cycles-pp.btrfs_lock_root_node
11.34 ± 3% -7.2 4.13 ± 49% perf-profile.children.cycles-pp.btrfs_insert_dir_item
11.72 ± 3% -6.2 5.53 ± 55% perf-profile.children.cycles-pp.btrfs_add_link
8.69 ± 7% -5.8 2.89 ± 38% perf-profile.children.cycles-pp.__btrfs_tree_read_lock
5.59 ± 6% -5.6 0.00 perf-profile.children.cycles-pp.finish_wait
17.35 ± 8% -3.5 13.82 ± 6% perf-profile.children.cycles-pp.kthread
17.36 ± 8% -3.5 13.83 ± 6% perf-profile.children.cycles-pp.ret_from_fork
0.80 ± 5% -0.7 0.12 ± 27% perf-profile.children.cycles-pp.__wake_up_common_lock
0.69 ± 5% -0.6 0.10 ± 33% perf-profile.children.cycles-pp.__wake_up_common
0.63 ± 6% -0.5 0.08 ± 40% perf-profile.children.cycles-pp.autoremove_wake_function
0.45 ± 13% -0.3 0.19 ± 73% perf-profile.children.cycles-pp.unwind_get_return_address
0.40 ± 13% -0.2 0.16 ± 72% perf-profile.children.cycles-pp.__kernel_text_address
0.37 ± 14% -0.2 0.14 ± 74% perf-profile.children.cycles-pp.kernel_text_address
1.13 ± 7% -0.2 0.93 ± 7% perf-profile.children.cycles-pp.pick_next_task_fair
0.69 ± 7% -0.2 0.49 ± 21% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.49 ± 12% -0.2 0.30 ± 52% perf-profile.children.cycles-pp.btrfs_get_token_32
0.38 ± 9% -0.2 0.21 ± 54% perf-profile.children.cycles-pp.__sysvec_call_function_single
0.25 ± 26% -0.1 0.11 ± 85% perf-profile.children.cycles-pp.__module_address
0.19 ± 20% -0.1 0.07 ±116% perf-profile.children.cycles-pp.__module_text_address
0.14 ± 7% -0.1 0.04 ±107% perf-profile.children.cycles-pp.btrfs_tree_read_unlock
0.24 ± 6% -0.1 0.17 ± 22% perf-profile.children.cycles-pp.__list_add_valid
0.13 ± 9% -0.1 0.07 ± 70% perf-profile.children.cycles-pp.inode_permission
0.10 ± 11% -0.1 0.04 ±101% perf-profile.children.cycles-pp.btrfs_alloc_delayed_item
0.15 ± 7% -0.1 0.10 ± 40% perf-profile.children.cycles-pp.push_leaf_left
0.10 ± 8% -0.1 0.04 ±100% perf-profile.children.cycles-pp.send_call_function_single_ipi
0.09 ± 13% -0.1 0.04 ±102% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.08 -0.1 0.03 ±100% perf-profile.children.cycles-pp.btrfs_init_acl
0.08 ± 5% -0.0 0.04 ±100% perf-profile.children.cycles-pp.btrfs_init_inode_security
0.11 ± 4% -0.0 0.06 ± 62% perf-profile.children.cycles-pp.copy_pages
0.06 ± 6% +0.0 0.09 ± 7% perf-profile.children.cycles-pp.rcu_all_qs
0.10 ± 8% +0.0 0.14 ± 11% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.04 ± 57% +0.0 0.07 ± 17% perf-profile.children.cycles-pp.select_task_rq_fair
0.06 ± 15% +0.0 0.09 ± 13% perf-profile.children.cycles-pp.__mod_memcg_state
0.04 ± 59% +0.0 0.08 ± 13% perf-profile.children.cycles-pp.note_gp_changes
0.07 ± 10% +0.0 0.11 ± 26% perf-profile.children.cycles-pp.update_dl_rq_load_avg
0.14 ± 11% +0.0 0.18 ± 16% perf-profile.children.cycles-pp.cpumask_next_and
0.06 ± 20% +0.0 0.11 ± 17% perf-profile.children.cycles-pp.rcu_needs_cpu
0.03 ±100% +0.1 0.08 ± 26% perf-profile.children.cycles-pp.pm_qos_read_value
0.06 ± 13% +0.1 0.12 ± 24% perf-profile.children.cycles-pp.update_ts_time_stats
0.06 +0.1 0.11 ± 17% perf-profile.children.cycles-pp.get_cpu_device
0.03 ±100% +0.1 0.08 ± 17% perf-profile.children.cycles-pp.kfree
0.09 ± 13% +0.1 0.15 ± 17% perf-profile.children.cycles-pp.rb_next
0.12 ± 8% +0.1 0.18 ± 9% perf-profile.children.cycles-pp._cond_resched
0.06 ± 20% +0.1 0.12 ± 15% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.03 ±100% +0.1 0.09 ± 16% perf-profile.children.cycles-pp.rcu_dynticks_eqs_enter
0.00 +0.1 0.06 ± 20% perf-profile.children.cycles-pp.blk_flush_plug_list
0.00 +0.1 0.06 ± 20% perf-profile.children.cycles-pp.blk_mq_flush_plug_list
0.00 +0.1 0.06 ± 20% perf-profile.children.cycles-pp.blk_mq_sched_insert_requests
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.perf_read
0.03 ±100% +0.1 0.09 ± 24% perf-profile.children.cycles-pp.x86_pmu_disable
0.00 +0.1 0.06 ± 16% perf-profile.children.cycles-pp.__x86_retpoline_rsi
0.01 ±173% +0.1 0.07 ± 26% perf-profile.children.cycles-pp.run_timer_softirq
0.06 ± 60% +0.1 0.12 ± 10% perf-profile.children.cycles-pp.update_rt_rq_load_avg
0.04 ± 58% +0.1 0.10 ± 19% perf-profile.children.cycles-pp.hrtimer_forward
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.pagevec_move_tail
0.06 ± 26% +0.1 0.13 ± 15% perf-profile.children.cycles-pp.__libc_read
0.00 +0.1 0.07 ± 17% perf-profile.children.cycles-pp.__free_slab
0.00 +0.1 0.07 ± 39% perf-profile.children.cycles-pp.propagate_protected_usage
0.00 +0.1 0.07 ± 31% perf-profile.children.cycles-pp.free_one_page
0.00 +0.1 0.07 ± 17% perf-profile.children.cycles-pp.sched_clock_idle_wakeup_event
0.00 +0.1 0.07 ± 25% perf-profile.children.cycles-pp.call_timer_fn
0.06 ± 14% +0.1 0.12 ± 20% perf-profile.children.cycles-pp.can_stop_idle_tick
0.01 ±173% +0.1 0.08 ± 43% perf-profile.children.cycles-pp.btrfs_get_8
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.dup_mmap
0.04 ± 58% +0.1 0.11 ± 45% perf-profile.children.cycles-pp.pipe_write
0.06 ± 7% +0.1 0.13 ± 20% perf-profile.children.cycles-pp.rcu_eqs_exit
0.04 ± 60% +0.1 0.11 ± 13% perf-profile.children.cycles-pp.seq_read_iter
0.01 ±173% +0.1 0.08 ± 17% perf-profile.children.cycles-pp.local_touch_nmi
0.15 ± 4% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.rb_insert_color
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.__lookup_extent_mapping
0.00 +0.1 0.07 ± 5% perf-profile.children.cycles-pp.dup_mm
0.00 +0.1 0.07 ± 17% perf-profile.children.cycles-pp.__dput_to_list
0.01 ±173% +0.1 0.09 ± 21% perf-profile.children.cycles-pp.rotate_reclaimable_page
0.00 +0.1 0.07 ± 20% perf-profile.children.cycles-pp.profile_tick
0.02 ±173% +0.1 0.10 ± 18% perf-profile.children.cycles-pp.perf_evsel__read
0.14 ± 5% +0.1 0.22 ± 15% perf-profile.children.cycles-pp.__might_sleep
0.00 +0.1 0.08 ± 15% perf-profile.children.cycles-pp.call_cpuidle
0.07 ± 22% +0.1 0.15 ± 14% perf-profile.children.cycles-pp.run_local_timers
0.00 +0.1 0.08 ± 26% perf-profile.children.cycles-pp.tlb_finish_mmu
0.00 +0.1 0.08 ± 27% perf-profile.children.cycles-pp.irq_work_needs_cpu
0.06 ± 14% +0.1 0.14 ± 12% perf-profile.children.cycles-pp.nr_iowait_cpu
0.06 ± 11% +0.1 0.14 ± 20% perf-profile.children.cycles-pp.rcu_nmi_enter
0.06 ± 14% +0.1 0.14 ± 31% perf-profile.children.cycles-pp.tick_check_broadcast_expired
0.03 ±100% +0.1 0.11 ± 28% perf-profile.children.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.03 ±100% +0.1 0.11 ± 12% perf-profile.children.cycles-pp.cpuidle_not_available
0.07 ± 11% +0.1 0.16 ± 22% perf-profile.children.cycles-pp.mmap_region
0.00 +0.1 0.08 ± 27% perf-profile.children.cycles-pp.up_read
0.01 ±173% +0.1 0.10 ± 28% perf-profile.children.cycles-pp.sched_setaffinity
0.01 ±173% +0.1 0.10 ± 28% perf-profile.children.cycles-pp.tick_nohz_idle_got_tick
0.06 ± 20% +0.1 0.14 ± 41% perf-profile.children.cycles-pp.setlocale
0.00 +0.1 0.08 ± 26% perf-profile.children.cycles-pp.__radix_tree_delete
0.05 +0.1 0.14 ± 33% perf-profile.children.cycles-pp.begin_new_exec
0.00 +0.1 0.09 ± 26% perf-profile.children.cycles-pp.tlb_flush_mmu
0.12 ± 5% +0.1 0.21 ± 5% perf-profile.children.cycles-pp.memset_erms
0.03 ±100% +0.1 0.11 ± 18% perf-profile.children.cycles-pp.tick_nohz_tick_stopped
0.03 ±100% +0.1 0.11 ± 25% perf-profile.children.cycles-pp.__etree_search
0.00 +0.1 0.09 ± 32% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.00 +0.1 0.09 ± 27% perf-profile.children.cycles-pp.free_extent_map
0.02 ±173% +0.1 0.11 ± 41% perf-profile.children.cycles-pp.test_clear_page_writeback
0.03 ±100% +0.1 0.11 ± 37% perf-profile.children.cycles-pp.intel_pmu_disable_all
0.00 +0.1 0.09 ± 23% perf-profile.children.cycles-pp.__do_munmap
0.01 ±173% +0.1 0.10 ± 18% perf-profile.children.cycles-pp.rcu_segcblist_enqueue
0.03 ±100% +0.1 0.12 ± 22% perf-profile.children.cycles-pp.rcu_gp_kthread
0.08 ± 8% +0.1 0.17 ± 22% perf-profile.children.cycles-pp.ksys_mmap_pgoff
0.03 ±100% +0.1 0.12 ± 14% perf-profile.children.cycles-pp._atomic_dec_and_lock
0.07 ± 11% +0.1 0.17 ± 23% perf-profile.children.cycles-pp.perf_event_task_tick
0.03 ±102% +0.1 0.13 ± 14% perf-profile.children.cycles-pp.rcu_nocb_try_bypass
0.03 ±100% +0.1 0.12 ± 26% perf-profile.children.cycles-pp.zap_pte_range
0.04 ± 58% +0.1 0.14 ± 18% perf-profile.children.cycles-pp.copy_process
0.07 ± 5% +0.1 0.17 ± 35% perf-profile.children.cycles-pp.do_fault
0.08 ± 6% +0.1 0.18 ± 21% perf-profile.children.cycles-pp.trigger_load_balance
0.09 ± 12% +0.1 0.19 ± 20% perf-profile.children.cycles-pp.do_mmap
0.03 ±100% +0.1 0.13 ± 20% perf-profile.children.cycles-pp.__free_pages_ok
0.11 ± 6% +0.1 0.21 ± 21% perf-profile.children.cycles-pp.rcu_eqs_enter
0.07 ± 16% +0.1 0.17 ± 18% perf-profile.children.cycles-pp.run_posix_cpu_timers
0.10 ± 21% +0.1 0.21 ± 18% perf-profile.children.cycles-pp.cmd_stat
0.10 ± 21% +0.1 0.21 ± 18% perf-profile.children.cycles-pp.__run_perf_stat
0.10 ± 21% +0.1 0.21 ± 18% perf-profile.children.cycles-pp.dispatch_events
0.10 ± 18% +0.1 0.21 ± 19% perf-profile.children.cycles-pp.process_interval
0.05 ± 62% +0.1 0.15 ± 18% perf-profile.children.cycles-pp.new_sync_read
0.07 ± 7% +0.1 0.17 ± 31% perf-profile.children.cycles-pp.__wake_up_bit
0.02 ±173% +0.1 0.12 ± 36% perf-profile.children.cycles-pp.__test_set_page_writeback
0.10 ± 22% +0.1 0.21 ± 19% perf-profile.children.cycles-pp.read_counters
0.03 ±100% +0.1 0.13 ± 20% perf-profile.children.cycles-pp.__memcg_kmem_uncharge
0.00 +0.1 0.11 ± 42% perf-profile.children.cycles-pp.rwsem_mark_wake
0.11 ± 22% +0.1 0.21 ± 19% perf-profile.children.cycles-pp.__libc_start_main
0.06 ± 11% +0.1 0.17 ± 17% perf-profile.children.cycles-pp.kernel_clone
0.00 +0.1 0.11 ± 30% perf-profile.children.cycles-pp.down_read
0.10 ± 9% +0.1 0.20 ± 22% perf-profile.children.cycles-pp.vm_mmap_pgoff
0.03 ±100% +0.1 0.14 ± 27% perf-profile.children.cycles-pp.unmap_page_range
0.00 +0.1 0.11 ± 26% perf-profile.children.cycles-pp.tick_sched_do_timer
0.10 ± 21% +0.1 0.21 ± 19% perf-profile.children.cycles-pp.main
0.10 ± 21% +0.1 0.21 ± 19% perf-profile.children.cycles-pp.run_builtin
0.22 ± 6% +0.1 0.33 ± 29% perf-profile.children.cycles-pp.idle_cpu
0.07 ± 10% +0.1 0.18 ± 10% perf-profile.children.cycles-pp.irqentry_exit
0.06 ± 7% +0.1 0.17 ± 17% perf-profile.children.cycles-pp.__do_sys_clone
0.06 ± 66% +0.1 0.17 ± 21% perf-profile.children.cycles-pp.release_pages
0.01 ±173% +0.1 0.12 ± 37% perf-profile.children.cycles-pp.clear_inode
0.04 ± 59% +0.1 0.16 ± 23% perf-profile.children.cycles-pp.truncate_inode_pages_final
0.01 ±173% +0.1 0.13 ± 15% perf-profile.children.cycles-pp.rcu_nmi_exit
0.02 ±173% +0.1 0.13 ± 18% perf-profile.children.cycles-pp.blk_mq_submit_bio
0.10 ± 7% +0.1 0.22 ± 16% perf-profile.children.cycles-pp.irqentry_enter
0.07 ± 60% +0.1 0.18 ± 21% perf-profile.children.cycles-pp.btrfs_lookup_first_ordered_extent
0.03 ±100% +0.1 0.15 ± 25% perf-profile.children.cycles-pp.unmap_vmas
0.04 ± 59% +0.1 0.17 ± 23% perf-profile.children.cycles-pp.page_counter_cancel
0.06 ± 16% +0.1 0.18 ± 20% perf-profile.children.cycles-pp.page_counter_uncharge
0.12 ± 12% +0.1 0.24 ± 22% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.11 ± 13% +0.1 0.23 ± 22% perf-profile.children.cycles-pp.native_apic_msr_eoi_write
0.05 ± 63% +0.1 0.18 ± 36% perf-profile.children.cycles-pp.deactivate_file_page
0.08 ± 10% +0.1 0.21 ± 21% perf-profile.children.cycles-pp.__libc_fork
0.02 ±173% +0.1 0.15 ± 21% perf-profile.children.cycles-pp.submit_bio_noacct
0.02 ±173% +0.1 0.15 ± 23% perf-profile.children.cycles-pp.submit_bio
0.10 ± 11% +0.1 0.23 ± 29% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.07 ± 26% +0.1 0.21 ± 18% perf-profile.children.cycles-pp.irqtime_account_process_tick
0.10 ± 28% +0.1 0.23 ± 9% perf-profile.children.cycles-pp.list_lru_isolate_move
0.00 +0.1 0.14 ± 24% perf-profile.children.cycles-pp.inode_wait_for_writeback
0.14 ± 8% +0.1 0.28 ± 12% perf-profile.children.cycles-pp.__x86_retpoline_rax
0.05 ± 59% +0.1 0.20 ± 48% perf-profile.children.cycles-pp.timekeeping_max_deferment
0.12 ± 15% +0.2 0.27 ± 20% perf-profile.children.cycles-pp.read
0.06 ± 80% +0.2 0.21 ± 27% perf-profile.children.cycles-pp.end_page_writeback
0.07 ± 72% +0.2 0.22 ± 27% perf-profile.children.cycles-pp.btrfs_get_16
0.07 ± 14% +0.2 0.23 ± 20% perf-profile.children.cycles-pp.drain_obj_stock
0.11 ± 4% +0.2 0.28 ± 7% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.12 ± 12% +0.2 0.28 ± 16% perf-profile.children.cycles-pp.rcu_idle_exit
0.10 ± 14% +0.2 0.27 ± 26% perf-profile.children.cycles-pp.load_elf_binary
0.17 ± 13% +0.2 0.33 ± 15% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.12 ± 5% +0.2 0.29 ± 18% perf-profile.children.cycles-pp.native_irq_return_iret
0.09 ± 13% +0.2 0.26 ± 28% perf-profile.children.cycles-pp.menu_reflect
0.01 ±173% +0.2 0.18 ± 25% perf-profile.children.cycles-pp.d_shrink_del
0.10 ± 17% +0.2 0.28 ± 25% perf-profile.children.cycles-pp.exec_binprm
0.06 ± 74% +0.2 0.23 ± 15% perf-profile.children.cycles-pp.btrfs_map_bio
0.04 ± 58% +0.2 0.22 ± 39% perf-profile.children.cycles-pp.fsnotify_grab_connector
0.14 ± 7% +0.2 0.32 ± 27% perf-profile.children.cycles-pp.__hrtimer_get_next_event
0.14 ± 19% +0.2 0.32 ± 11% perf-profile.children.cycles-pp.serial8250_console_putchar
0.15 ± 7% +0.2 0.33 ± 13% perf-profile.children.cycles-pp.timerqueue_add
0.07 ± 76% +0.2 0.25 ± 28% perf-profile.children.cycles-pp.end_bio_extent_buffer_writepage
0.10 ± 5% +0.2 0.28 ± 27% perf-profile.children.cycles-pp.__x64_sys_exit_group
0.10 ± 5% +0.2 0.28 ± 27% perf-profile.children.cycles-pp.do_group_exit
0.10 ± 5% +0.2 0.28 ± 27% perf-profile.children.cycles-pp.do_exit
0.10 ± 18% +0.2 0.28 ± 18% perf-profile.children.cycles-pp.radix_tree_delete_item
0.14 ± 19% +0.2 0.32 ± 11% perf-profile.children.cycles-pp.wait_for_xmitr
0.17 ± 9% +0.2 0.35 ± 27% perf-profile.children.cycles-pp.lapic_next_deadline
0.14 ± 18% +0.2 0.33 ± 11% perf-profile.children.cycles-pp.uart_console_write
0.12 ± 22% +0.2 0.31 ± 25% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.15 ± 16% +0.2 0.34 ± 19% perf-profile.children.cycles-pp.vfs_read
0.17 ± 16% +0.2 0.36 ± 19% perf-profile.children.cycles-pp.ksys_read
0.23 ± 10% +0.2 0.42 ± 13% perf-profile.children.cycles-pp.mutex_lock
0.14 ± 18% +0.2 0.34 ± 11% perf-profile.children.cycles-pp.serial8250_console_write
0.07 ± 76% +0.2 0.27 ± 27% perf-profile.children.cycles-pp.btrfs_end_bio
0.12 ± 15% +0.2 0.32 ± 26% perf-profile.children.cycles-pp.bprm_execve
0.23 ± 7% +0.2 0.43 ± 23% perf-profile.children.cycles-pp.update_irq_load_avg
0.14 ± 7% +0.2 0.34 ± 12% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.04 ±107% +0.2 0.24 ± 50% perf-profile.children.cycles-pp.calc_global_load_tick
0.06 ± 58% +0.2 0.26 ± 34% perf-profile.children.cycles-pp.fsnotify_destroy_marks
0.15 ± 12% +0.2 0.35 ± 19% perf-profile.children.cycles-pp.io_serial_in
0.11 ± 4% +0.2 0.32 ± 23% perf-profile.children.cycles-pp.exit_mmap
0.08 ± 71% +0.2 0.29 ± 28% perf-profile.children.cycles-pp.blk_update_request
0.13 ± 16% +0.2 0.34 ± 28% perf-profile.children.cycles-pp.dentry_unlink_inode
0.16 ± 4% +0.2 0.38 ± 34% perf-profile.children.cycles-pp.__handle_mm_fault
0.08 ± 75% +0.2 0.31 ± 29% perf-profile.children.cycles-pp.scsi_io_completion
0.08 ± 75% +0.2 0.31 ± 29% perf-profile.children.cycles-pp.scsi_end_request
0.11 ± 4% +0.2 0.33 ± 27% perf-profile.children.cycles-pp.mmput
0.09 ± 72% +0.2 0.32 ± 29% perf-profile.children.cycles-pp.blk_done_softirq
0.27 ± 5% +0.2 0.50 ± 9% perf-profile.children.cycles-pp.___might_sleep
0.20 ± 5% +0.2 0.42 ± 14% perf-profile.children.cycles-pp.enqueue_hrtimer
0.17 ± 4% +0.2 0.40 ± 32% perf-profile.children.cycles-pp.handle_mm_fault
0.11 ± 25% +0.2 0.34 ± 26% perf-profile.children.cycles-pp.__remove_inode_hash
0.35 ± 8% +0.2 0.59 ± 12% perf-profile.children.cycles-pp.update_blocked_averages
0.14 ± 16% +0.2 0.38 ± 26% perf-profile.children.cycles-pp.execve
0.17 ± 13% +0.2 0.42 ± 22% perf-profile.children.cycles-pp.timerqueue_del
0.18 ± 15% +0.2 0.43 ± 24% perf-profile.children.cycles-pp.memcpy_toio
0.18 ± 15% +0.2 0.43 ± 24% perf-profile.children.cycles-pp.drm_fb_memcpy_dstclip
0.15 ± 18% +0.2 0.40 ± 26% perf-profile.children.cycles-pp.__x64_sys_execve
0.20 ± 14% +0.2 0.45 ± 15% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.15 ± 18% +0.2 0.40 ± 27% perf-profile.children.cycles-pp.do_execveat_common
0.18 ± 15% +0.2 0.43 ± 24% perf-profile.children.cycles-pp.drm_atomic_helper_commit
0.18 ± 15% +0.2 0.43 ± 24% perf-profile.children.cycles-pp.commit_tail
0.18 ± 15% +0.2 0.43 ± 24% perf-profile.children.cycles-pp.drm_atomic_helper_commit_tail
0.18 ± 15% +0.2 0.43 ± 24% perf-profile.children.cycles-pp.drm_atomic_helper_commit_planes
0.18 ± 15% +0.2 0.43 ± 24% perf-profile.children.cycles-pp.mgag200_simple_display_pipe_update
0.18 ± 15% +0.2 0.43 ± 24% perf-profile.children.cycles-pp.mgag200_handle_damage
0.18 ± 15% +0.3 0.44 ± 24% perf-profile.children.cycles-pp.drm_atomic_helper_dirtyfb
0.20 ± 5% +0.3 0.46 ± 27% perf-profile.children.cycles-pp.asm_exc_page_fault
0.10 ± 22% +0.3 0.35 ± 16% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.19 ± 5% +0.3 0.45 ± 29% perf-profile.children.cycles-pp.exc_page_fault
0.20 ± 13% +0.3 0.46 ± 23% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.19 ± 4% +0.3 0.45 ± 29% perf-profile.children.cycles-pp.do_user_addr_fault
0.16 ± 17% +0.3 0.42 ± 22% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.34 ± 7% +0.3 0.61 ± 13% perf-profile.children.cycles-pp.run_rebalance_domains
0.16 ± 21% +0.3 0.43 ± 28% perf-profile.children.cycles-pp.__slab_free
0.21 ± 12% +0.3 0.49 ± 17% perf-profile.children.cycles-pp.console_unlock
0.22 ± 13% +0.3 0.50 ± 17% perf-profile.children.cycles-pp.asm_sysvec_irq_work
0.22 ± 13% +0.3 0.50 ± 17% perf-profile.children.cycles-pp.sysvec_irq_work
0.22 ± 13% +0.3 0.50 ± 17% perf-profile.children.cycles-pp.__sysvec_irq_work
0.22 ± 13% +0.3 0.50 ± 17% perf-profile.children.cycles-pp.irq_work_run
0.22 ± 13% +0.3 0.50 ± 17% perf-profile.children.cycles-pp.irq_work_single
0.22 ± 13% +0.3 0.50 ± 17% perf-profile.children.cycles-pp.printk
0.22 ± 13% +0.3 0.50 ± 17% perf-profile.children.cycles-pp.vprintk_emit
0.17 ± 37% +0.3 0.45 ± 13% perf-profile.children.cycles-pp.btrfs_run_delayed_refs_for_head
0.13 ± 31% +0.3 0.42 ± 24% perf-profile.children.cycles-pp.alloc_extent_state
0.22 ± 5% +0.3 0.51 ± 13% perf-profile.children.cycles-pp.read_tsc
0.89 ± 2% +0.3 1.19 ± 10% perf-profile.children.cycles-pp.update_sd_lb_stats
0.24 ± 12% +0.3 0.54 ± 16% perf-profile.children.cycles-pp.irq_work_run_list
0.11 ± 21% +0.3 0.41 ± 32% perf-profile.children.cycles-pp.__destroy_inode
0.20 ± 18% +0.3 0.51 ± 28% perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
0.19 ± 62% +0.3 0.51 ± 23% perf-profile.children.cycles-pp.check_inode_key
0.23 ± 11% +0.3 0.54 ± 24% perf-profile.children.cycles-pp.__remove_hrtimer
0.24 ± 8% +0.3 0.56 ± 16% perf-profile.children.cycles-pp.irqtime_account_irq
0.20 ± 15% +0.3 0.52 ± 19% perf-profile.children.cycles-pp.rb_erase
0.47 ± 5% +0.3 0.79 ± 15% perf-profile.children.cycles-pp.update_rq_clock
0.13 ± 21% +0.3 0.46 ± 16% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.92 ± 2% +0.3 1.25 ± 10% perf-profile.children.cycles-pp.find_busiest_group
0.23 ± 38% +0.3 0.56 ± 32% perf-profile.children.cycles-pp.start_kernel
0.00 +0.3 0.34 ± 45% perf-profile.children.cycles-pp.wake_up_q
0.26 ± 7% +0.3 0.59 ± 20% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.24 ± 36% +0.4 0.59 ± 14% perf-profile.children.cycles-pp.btrfs_run_delayed_refs
0.23 ± 35% +0.4 0.59 ± 14% perf-profile.children.cycles-pp.__btrfs_run_delayed_refs
0.18 ± 23% +0.4 0.55 ± 16% perf-profile.children.cycles-pp.rcu_cblist_dequeue
0.18 ± 21% +0.4 0.54 ± 16% perf-profile.children.cycles-pp.refill_obj_stock
0.32 ± 7% +0.4 0.69 ± 11% perf-profile.children.cycles-pp.native_sched_clock
0.12 ± 10% +0.4 0.49 ± 26% perf-profile.children.cycles-pp.mem_cgroup_from_obj
0.33 ± 7% +0.4 0.71 ± 11% perf-profile.children.cycles-pp.sched_clock
0.20 ± 31% +0.4 0.58 ± 25% perf-profile.children.cycles-pp.clear_record_extent_bits
0.17 ± 15% +0.4 0.55 ± 17% perf-profile.children.cycles-pp.inode_lru_isolate
0.18 ± 28% +0.4 0.58 ± 24% perf-profile.children.cycles-pp.alloc_extent_map
0.00 +0.4 0.41 ± 44% perf-profile.children.cycles-pp.rwsem_wake
0.16 ± 21% +0.4 0.57 ± 28% perf-profile.children.cycles-pp.clear_extent_bit
0.27 ± 14% +0.4 0.68 ± 16% perf-profile.children.cycles-pp.call_rcu
0.17 ± 20% +0.4 0.61 ± 27% perf-profile.children.cycles-pp.btrfs_inode_clear_file_extent_range
0.36 ± 5% +0.5 0.82 ± 18% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
1.14 ± 3% +0.5 1.60 ± 11% perf-profile.children.cycles-pp.load_balance
0.23 ± 29% +0.5 0.70 ± 27% perf-profile.children.cycles-pp.btrfs_qgroup_check_reserved_leak
0.41 ± 7% +0.5 0.88 ± 16% perf-profile.children.cycles-pp.sched_clock_cpu
0.21 ± 16% +0.5 0.70 ± 19% perf-profile.children.cycles-pp.btrfs_drop_inode
0.46 ± 9% +0.5 0.96 ± 8% perf-profile.children.cycles-pp.__list_del_entry_valid
1.12 ± 3% +0.5 1.62 ± 4% perf-profile.children.cycles-pp.kmem_cache_alloc
0.36 ± 16% +0.6 0.94 ± 19% perf-profile.children.cycles-pp.___d_drop
0.36 ± 17% +0.6 0.96 ± 19% perf-profile.children.cycles-pp.__d_drop
0.53 ± 13% +0.6 1.14 ± 12% perf-profile.children.cycles-pp.rebalance_domains
0.29 ± 14% +0.6 0.91 ± 16% perf-profile.children.cycles-pp.dentry_lru_isolate
0.33 ± 6% +0.6 0.97 ± 15% perf-profile.children.cycles-pp.list_lru_add
0.23 ± 14% +0.6 0.87 ± 24% perf-profile.children.cycles-pp.inode_lru_list_add
0.30 ± 19% +0.7 0.95 ± 19% perf-profile.children.cycles-pp.shrink_lock_dentry
0.52 ± 42% +0.7 1.20 ± 19% perf-profile.children.cycles-pp.read_extent_buffer
0.32 ± 28% +0.7 1.03 ± 24% perf-profile.children.cycles-pp.btrfs_drop_extent_cache
0.53 ± 8% +0.8 1.29 ± 13% perf-profile.children.cycles-pp.tick_nohz_next_event
0.34 ± 24% +0.8 1.11 ± 26% perf-profile.children.cycles-pp.__clear_extent_bit
0.62 ± 25% +0.8 1.42 ± 40% perf-profile.children.cycles-pp.irq_enter_rcu
0.38 ± 13% +0.8 1.23 ± 16% perf-profile.children.cycles-pp._raw_spin_trylock
0.00 +0.9 0.85 ± 50% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.83 ± 2% +0.9 1.77 ± 20% perf-profile.children.cycles-pp.scheduler_tick
0.71 ± 38% +1.0 1.67 ± 22% perf-profile.children.cycles-pp.btrfs_get_32
0.54 ± 57% +1.0 1.54 ± 28% perf-profile.children.cycles-pp.check_dir_item
0.48 ± 13% +1.1 1.53 ± 17% perf-profile.children.cycles-pp.__list_lru_walk_one
0.48 ± 13% +1.1 1.54 ± 17% perf-profile.children.cycles-pp.list_lru_walk_one
0.63 ± 3% +1.1 1.69 ± 24% perf-profile.children.cycles-pp.clockevents_program_event
0.81 ± 9% +1.1 1.96 ± 13% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.57 ± 14% +1.3 1.92 ± 21% perf-profile.children.cycles-pp.iput
1.56 ± 9% +1.5 3.10 ± 14% perf-profile.children.cycles-pp.do_softirq_own_stack
1.11 ± 18% +1.7 2.79 ± 31% perf-profile.children.cycles-pp.ktime_get
0.74 ± 22% +1.7 2.44 ± 21% perf-profile.children.cycles-pp.run_ksoftirqd
0.78 ± 21% +1.7 2.50 ± 20% perf-profile.children.cycles-pp.smpboot_thread_fn
1.57 ± 3% +1.8 3.38 ± 19% perf-profile.children.cycles-pp.update_process_times
1.81 ± 7% +1.9 3.68 ± 15% perf-profile.children.cycles-pp.irq_exit_rcu
1.68 ± 3% +2.0 3.65 ± 18% perf-profile.children.cycles-pp.tick_sched_handle
0.99 ± 17% +2.0 2.98 ± 20% perf-profile.children.cycles-pp.rcu_do_batch
1.13 ± 14% +2.0 3.14 ± 19% perf-profile.children.cycles-pp.rcu_core
0.96 ± 25% +2.1 3.04 ± 25% perf-profile.children.cycles-pp.btrfs_destroy_inode
1.11 ± 16% +2.1 3.21 ± 20% perf-profile.children.cycles-pp.kmem_cache_free
0.81 ± 22% +2.1 2.94 ± 26% perf-profile.children.cycles-pp.btrfs_evict_inode
1.85 ± 2% +2.4 4.25 ± 12% perf-profile.children.cycles-pp.menu_select
1.10 ± 25% +2.4 3.54 ± 25% perf-profile.children.cycles-pp.destroy_inode
2.08 ± 6% +2.5 4.58 ± 21% perf-profile.children.cycles-pp.tick_sched_timer
1.47 ± 57% +2.7 4.17 ± 25% perf-profile.children.cycles-pp.check_leaf
1.49 ± 57% +2.7 4.23 ± 25% perf-profile.children.cycles-pp.btree_csum_one_bio
1.37 ± 14% +2.8 4.12 ± 20% perf-profile.children.cycles-pp.__dentry_kill
0.00 +2.8 2.77 ± 39% perf-profile.children.cycles-pp.rwsem_down_read_slowpath
1.56 ± 57% +2.9 4.46 ± 25% perf-profile.children.cycles-pp.btrfs_submit_metadata_bio
1.56 ± 57% +2.9 4.47 ± 25% perf-profile.children.cycles-pp.submit_one_bio
1.15 ± 21% +2.9 4.06 ± 26% perf-profile.children.cycles-pp.evict
1.64 ± 63% +3.0 4.60 ± 32% perf-profile.children.cycles-pp.btrfs_write_and_wait_transaction
1.64 ± 63% +3.0 4.60 ± 32% perf-profile.children.cycles-pp.btrfs_write_marked_extents
1.64 ± 63% +3.0 4.60 ± 32% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
1.59 ± 57% +3.0 4.57 ± 24% perf-profile.children.cycles-pp.submit_extent_page
1.67 ± 57% +3.1 4.79 ± 24% perf-profile.children.cycles-pp.write_one_eb
2.28 ± 13% +3.2 5.49 ± 16% perf-profile.children.cycles-pp.__softirqentry_text_start
1.73 ± 56% +3.2 4.96 ± 24% perf-profile.children.cycles-pp.btree_write_cache_pages
1.73 ± 56% +3.2 4.96 ± 24% perf-profile.children.cycles-pp.do_writepages
1.90 ± 60% +3.3 5.20 ± 29% perf-profile.children.cycles-pp.transaction_kthread
1.90 ± 60% +3.3 5.20 ± 29% perf-profile.children.cycles-pp.btrfs_commit_transaction
1.19 ± 7% +3.6 4.75 ± 49% perf-profile.children.cycles-pp.__btrfs_release_delayed_node
1.88 ± 15% +3.8 5.72 ± 20% perf-profile.children.cycles-pp.shrink_dentry_list
3.18 ± 6% +3.9 7.07 ± 21% perf-profile.children.cycles-pp.__hrtimer_run_queues
2.18 ± 15% +4.5 6.66 ± 19% perf-profile.children.cycles-pp.prune_dcache_sb
2.99 ± 10% +5.5 8.51 ± 20% perf-profile.children.cycles-pp.intel_idle
2.37 ± 23% +5.7 8.07 ± 26% perf-profile.children.cycles-pp.dispose_list
4.75 ± 7% +6.1 10.82 ± 23% perf-profile.children.cycles-pp.hrtimer_interrupt
2.56 ± 23% +6.1 8.66 ± 25% perf-profile.children.cycles-pp.prune_icache_sb
4.96 ± 7% +6.3 11.31 ± 22% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
2.52 ± 11% +6.6 9.11 ± 31% perf-profile.children.cycles-pp._raw_spin_lock
7.09 ± 7% +8.0 15.13 ± 19% perf-profile.children.cycles-pp.asm_call_sysvec_on_stack
7.68 ± 8% +9.5 17.19 ± 21% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
8.17 ± 9% +10.2 18.41 ± 20% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
4.74 ± 19% +10.6 15.33 ± 23% perf-profile.children.cycles-pp.super_cache_scan
4.74 ± 19% +10.6 15.34 ± 23% perf-profile.children.cycles-pp.drop_slab
4.74 ± 19% +10.6 15.34 ± 23% perf-profile.children.cycles-pp.drop_slab_node
4.74 ± 19% +10.6 15.34 ± 23% perf-profile.children.cycles-pp.shrink_slab
4.74 ± 19% +10.6 15.34 ± 23% perf-profile.children.cycles-pp.do_shrink_slab
5.67 ± 21% +12.2 17.87 ± 24% perf-profile.children.cycles-pp.proc_sys_call_handler
5.67 ± 21% +12.2 17.87 ± 24% perf-profile.children.cycles-pp.drop_caches_sysctl_handler
5.72 ± 21% +12.3 17.99 ± 24% perf-profile.children.cycles-pp.new_sync_write
5.73 ± 21% +12.3 18.00 ± 24% perf-profile.children.cycles-pp.vfs_write
5.73 ± 21% +12.3 18.00 ± 24% perf-profile.children.cycles-pp.ksys_write
5.73 ± 21% +12.3 18.02 ± 24% perf-profile.children.cycles-pp.write
0.01 ±173% +14.6 14.59 ± 35% perf-profile.children.cycles-pp.osq_lock
0.00 +14.6 14.60 ± 37% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
0.00 +15.5 15.53 ± 36% perf-profile.children.cycles-pp.rwsem_optimistic_spin
9.72 ± 10% +16.4 26.11 ± 21% perf-profile.children.cycles-pp.cpuidle_enter_state
9.75 ± 10% +16.4 26.18 ± 21% perf-profile.children.cycles-pp.cpuidle_enter
15.76 ± 5% +18.1 33.82 ± 15% perf-profile.children.cycles-pp.start_secondary
15.99 ± 5% +18.4 34.38 ± 15% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
15.99 ± 5% +18.4 34.38 ± 15% perf-profile.children.cycles-pp.cpu_startup_entry
15.98 ± 5% +18.4 34.38 ± 15% perf-profile.children.cycles-pp.do_idle
52.28 ± 4% -45.4 6.86 ± 42% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.42 ± 12% -0.2 0.25 ± 50% perf-profile.self.cycles-pp.btrfs_get_token_32
0.25 ± 25% -0.1 0.11 ± 86% perf-profile.self.cycles-pp.__module_address
0.26 ± 15% -0.1 0.15 ± 56% perf-profile.self.cycles-pp.__account_scheduler_latency
0.14 ± 5% -0.1 0.03 ±105% perf-profile.self.cycles-pp.btrfs_tree_read_unlock
0.28 ± 4% -0.1 0.18 ± 47% perf-profile.self.cycles-pp.memmove
0.25 ± 7% -0.1 0.16 ± 18% perf-profile.self.cycles-pp.generic_bin_search
0.23 ± 6% -0.1 0.17 ± 24% perf-profile.self.cycles-pp.__list_add_valid
0.10 ± 8% -0.1 0.04 ±100% perf-profile.self.cycles-pp.send_call_function_single_ipi
0.07 ± 14% -0.0 0.03 ±100% perf-profile.self.cycles-pp.newidle_balance
0.08 ± 19% -0.0 0.03 ±105% perf-profile.self.cycles-pp.___perf_sw_event
0.12 ± 5% -0.0 0.08 ± 58% perf-profile.self.cycles-pp.__d_lookup_rcu
0.05 ± 8% +0.0 0.08 ± 14% perf-profile.self.cycles-pp.rcu_all_qs
0.07 ± 17% +0.0 0.11 ± 19% perf-profile.self.cycles-pp.__softirqentry_text_start
0.05 ± 8% +0.0 0.09 ± 13% perf-profile.self.cycles-pp.__mod_memcg_state
0.07 ± 10% +0.0 0.11 ± 26% perf-profile.self.cycles-pp.update_dl_rq_load_avg
0.05 ± 8% +0.0 0.10 ± 24% perf-profile.self.cycles-pp.cpuidle_governor_latency_req
0.06 ± 14% +0.0 0.10 ± 21% perf-profile.self.cycles-pp.rcu_needs_cpu
0.08 ± 13% +0.1 0.14 ± 20% perf-profile.self.cycles-pp.rb_next
0.06 ± 7% +0.1 0.11 ± 14% perf-profile.self.cycles-pp.get_cpu_device
0.04 ± 57% +0.1 0.10 ± 27% perf-profile.self.cycles-pp.enqueue_hrtimer
0.00 +0.1 0.06 ± 14% perf-profile.self.cycles-pp.__free_pages_ok
0.03 ±100% +0.1 0.08 ± 23% perf-profile.self.cycles-pp.x86_pmu_disable
0.07 ± 17% +0.1 0.12 ± 26% perf-profile.self.cycles-pp.rcu_eqs_enter
0.04 ± 58% +0.1 0.10 ± 18% perf-profile.self.cycles-pp.hrtimer_forward
0.03 ±100% +0.1 0.09 ± 16% perf-profile.self.cycles-pp.rcu_dynticks_eqs_enter
0.01 ±173% +0.1 0.07 ± 22% perf-profile.self.cycles-pp.pm_qos_read_value
0.08 ± 5% +0.1 0.14 ± 13% perf-profile.self.cycles-pp.update_blocked_averages
0.06 ± 60% +0.1 0.12 ± 10% perf-profile.self.cycles-pp.update_rt_rq_load_avg
0.00 +0.1 0.06 ± 17% perf-profile.self.cycles-pp.cpuidle_enter
0.00 +0.1 0.06 ± 17% perf-profile.self.cycles-pp.__dput_to_list
0.00 +0.1 0.06 ± 34% perf-profile.self.cycles-pp.propagate_protected_usage
0.01 ±173% +0.1 0.08 ± 10% perf-profile.self.cycles-pp.irqentry_enter
0.04 ± 57% +0.1 0.10 ± 8% perf-profile.self.cycles-pp.asm_call_sysvec_on_stack
0.00 +0.1 0.07 ± 21% perf-profile.self.cycles-pp.release_pages
0.00 +0.1 0.07 ± 19% perf-profile.self.cycles-pp.tick_nohz_irq_exit
0.14 ± 5% +0.1 0.21 ± 6% perf-profile.self.cycles-pp.rb_insert_color
0.13 ± 6% +0.1 0.20 ± 18% perf-profile.self.cycles-pp.__might_sleep
0.05 ± 9% +0.1 0.12 ± 20% perf-profile.self.cycles-pp.can_stop_idle_tick
0.00 +0.1 0.07 ± 17% perf-profile.self.cycles-pp.call_cpuidle
0.00 +0.1 0.07 ± 17% perf-profile.self.cycles-pp.hrtimer_next_event_without
0.00 +0.1 0.07 ± 11% perf-profile.self.cycles-pp.__lookup_extent_mapping
0.11 ± 6% +0.1 0.18 ± 8% perf-profile.self.cycles-pp.memset_erms
0.00 +0.1 0.07 ± 20% perf-profile.self.cycles-pp.profile_tick
0.06 ± 14% +0.1 0.13 ± 11% perf-profile.self.cycles-pp.nr_iowait_cpu
0.03 ±100% +0.1 0.10 ± 31% perf-profile.self.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.00 +0.1 0.08 ± 27% perf-profile.self.cycles-pp.arch_cpu_idle_enter
0.00 +0.1 0.08 ± 24% perf-profile.self.cycles-pp.local_touch_nmi
0.00 +0.1 0.08 ± 30% perf-profile.self.cycles-pp.up_read
0.00 +0.1 0.08 ± 34% perf-profile.self.cycles-pp.free_extent_map
0.01 ±173% +0.1 0.09 ± 32% perf-profile.self.cycles-pp.tick_nohz_idle_got_tick
0.00 +0.1 0.08 ± 19% perf-profile.self.cycles-pp.sched_clock
0.03 ±100% +0.1 0.10 ± 14% perf-profile.self.cycles-pp.cpuidle_not_available
0.05 ± 8% +0.1 0.13 ± 28% perf-profile.self.cycles-pp.trigger_load_balance
0.01 ±173% +0.1 0.09 ± 23% perf-profile.self.cycles-pp.btrfs_drop_extent_cache
0.04 ± 58% +0.1 0.12 ± 21% perf-profile.self.cycles-pp.rcu_nmi_enter
0.05 ± 8% +0.1 0.14 ± 13% perf-profile.self.cycles-pp.tick_irq_enter
0.01 ±173% +0.1 0.10 ± 15% perf-profile.self.cycles-pp.rcu_segcblist_enqueue
0.06 ± 11% +0.1 0.14 ± 14% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.08 ± 24% +0.1 0.16 ± 24% perf-profile.self.cycles-pp.irq_exit_rcu
0.04 ± 59% +0.1 0.13 ± 36% perf-profile.self.cycles-pp.tick_check_broadcast_expired
0.01 ±173% +0.1 0.10 ± 25% perf-profile.self.cycles-pp.__etree_search
0.07 ± 21% +0.1 0.15 ± 33% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.00 +0.1 0.09 ± 16% perf-profile.self.cycles-pp.tick_sched_handle
0.00 +0.1 0.09 ± 16% perf-profile.self.cycles-pp.rcu_nocb_try_bypass
0.00 +0.1 0.09 ± 32% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.08 ± 16% +0.1 0.17 ± 10% perf-profile.self.cycles-pp.timerqueue_add
0.06 ± 11% +0.1 0.15 ± 20% perf-profile.self.cycles-pp.rcu_idle_exit
0.03 ±100% +0.1 0.11 ± 21% perf-profile.self.cycles-pp.page_counter_cancel
0.10 ± 15% +0.1 0.19 ± 12% perf-profile.self.cycles-pp.rebalance_domains
0.07 ± 7% +0.1 0.15 ± 31% perf-profile.self.cycles-pp.scheduler_tick
0.07 ± 17% +0.1 0.15 ± 32% perf-profile.self.cycles-pp.timerqueue_del
0.00 +0.1 0.09 ± 15% perf-profile.self.cycles-pp.tick_nohz_tick_stopped
0.05 ± 62% +0.1 0.14 ± 36% perf-profile.self.cycles-pp.__remove_hrtimer
0.07 ± 15% +0.1 0.17 ± 30% perf-profile.self.cycles-pp.sched_clock_cpu
0.08 ± 6% +0.1 0.17 ± 19% perf-profile.self.cycles-pp.clockevents_program_event
0.07 ± 11% +0.1 0.17 ± 25% perf-profile.self.cycles-pp.perf_event_task_tick
0.00 +0.1 0.10 ± 15% perf-profile.self.cycles-pp.rcu_nmi_exit
0.06 ± 6% +0.1 0.16 ± 34% perf-profile.self.cycles-pp.__wake_up_bit
0.06 ± 16% +0.1 0.16 ± 15% perf-profile.self.cycles-pp.irq_enter_rcu
0.17 ± 6% +0.1 0.28 ± 12% perf-profile.self.cycles-pp.update_rq_clock
0.08 ± 19% +0.1 0.18 ± 22% perf-profile.self.cycles-pp.get_next_timer_interrupt
0.08 ± 16% +0.1 0.18 ± 32% perf-profile.self.cycles-pp.__hrtimer_get_next_event
0.00 +0.1 0.10 ± 25% perf-profile.self.cycles-pp.__dentry_kill
0.12 ± 11% +0.1 0.23 ± 14% perf-profile.self.cycles-pp.load_balance
0.07 ± 17% +0.1 0.17 ± 18% perf-profile.self.cycles-pp.run_posix_cpu_timers
0.06 ± 11% +0.1 0.17 ± 27% perf-profile.self.cycles-pp.menu_reflect
0.10 ± 10% +0.1 0.21 ± 19% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.10 ± 12% +0.1 0.21 ± 12% perf-profile.self.cycles-pp.irqtime_account_irq
0.04 ± 60% +0.1 0.15 ± 26% perf-profile.self.cycles-pp.truncate_inode_pages_final
0.08 ± 19% +0.1 0.19 ± 19% perf-profile.self.cycles-pp.sysvec_apic_timer_interrupt
0.08 ± 13% +0.1 0.20 ± 24% perf-profile.self.cycles-pp.iput
0.00 +0.1 0.12 ± 63% perf-profile.self.cycles-pp.rwsem_down_read_slowpath
0.09 ± 25% +0.1 0.20 ± 22% perf-profile.self.cycles-pp.tick_sched_timer
0.11 ± 13% +0.1 0.23 ± 20% perf-profile.self.cycles-pp.native_apic_msr_eoi_write
0.07 ± 21% +0.1 0.20 ± 17% perf-profile.self.cycles-pp.irqtime_account_process_tick
0.09 ± 4% +0.1 0.22 ± 13% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.09 ± 8% +0.1 0.22 ± 15% perf-profile.self.cycles-pp.__x86_retpoline_rax
0.04 ±106% +0.1 0.17 ± 22% perf-profile.self.cycles-pp.btrfs_get_16
0.00 +0.1 0.13 ± 31% perf-profile.self.cycles-pp.fsnotify_grab_connector
0.10 ± 15% +0.1 0.23 ± 10% perf-profile.self.cycles-pp.__sysvec_apic_timer_interrupt
0.05 ±124% +0.1 0.19 ± 24% perf-profile.self.cycles-pp.check_inode_key
0.13 ± 9% +0.1 0.27 ± 17% perf-profile.self.cycles-pp.mutex_lock
0.07 ± 20% +0.1 0.22 ± 21% perf-profile.self.cycles-pp.inode_lru_isolate
0.12 ± 22% +0.1 0.27 ± 25% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.06 ± 58% +0.1 0.21 ± 26% perf-profile.self.cycles-pp.btrfs_destroy_inode
0.12 ± 6% +0.2 0.28 ± 32% perf-profile.self.cycles-pp.update_process_times
0.07 ± 16% +0.2 0.23 ± 29% perf-profile.self.cycles-pp.evict
0.03 ±100% +0.2 0.19 ± 26% perf-profile.self.cycles-pp.__clear_extent_bit
0.12 ± 5% +0.2 0.29 ± 18% perf-profile.self.cycles-pp.native_irq_return_iret
0.09 ± 15% +0.2 0.27 ± 27% perf-profile.self.cycles-pp.__remove_inode_hash
0.14 ± 11% +0.2 0.32 ± 16% perf-profile.self.cycles-pp.list_lru_add
0.17 ± 9% +0.2 0.35 ± 27% perf-profile.self.cycles-pp.lapic_next_deadline
0.10 ± 27% +0.2 0.29 ± 16% perf-profile.self.cycles-pp.refill_obj_stock
0.23 ± 5% +0.2 0.42 ± 24% perf-profile.self.cycles-pp.update_irq_load_avg
0.11 ± 51% +0.2 0.30 ± 36% perf-profile.self.cycles-pp.check_dir_item
0.11 ± 15% +0.2 0.31 ± 29% perf-profile.self.cycles-pp.dentry_unlink_inode
0.04 ±107% +0.2 0.24 ± 50% perf-profile.self.cycles-pp.calc_global_load_tick
0.10 ± 27% +0.2 0.30 ± 25% perf-profile.self.cycles-pp.shrink_lock_dentry
0.15 ± 9% +0.2 0.36 ± 23% perf-profile.self.cycles-pp.hrtimer_interrupt
0.15 ± 12% +0.2 0.35 ± 19% perf-profile.self.cycles-pp.io_serial_in
0.18 ± 12% +0.2 0.39 ± 14% perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.25 ± 4% +0.2 0.46 ± 9% perf-profile.self.cycles-pp.___might_sleep
0.23 ± 5% +0.2 0.45 ± 21% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.16 ± 18% +0.2 0.40 ± 29% perf-profile.self.cycles-pp.memcpy_toio
0.59 ± 5% +0.2 0.83 ± 10% perf-profile.self.cycles-pp.update_sd_lb_stats
0.14 ± 20% +0.2 0.39 ± 28% perf-profile.self.cycles-pp.__slab_free
0.16 ± 15% +0.3 0.42 ± 21% perf-profile.self.cycles-pp.call_rcu
0.20 ± 9% +0.3 0.46 ± 25% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.40 ± 6% +0.3 0.67 ± 7% perf-profile.self.cycles-pp.kmem_cache_alloc
0.10 ± 13% +0.3 0.36 ± 18% perf-profile.self.cycles-pp.dentry_lru_isolate
0.21 ± 3% +0.3 0.48 ± 11% perf-profile.self.cycles-pp.read_tsc
0.11 ± 35% +0.3 0.39 ± 27% perf-profile.self.cycles-pp.btrfs_evict_inode
0.19 ± 11% +0.3 0.48 ± 15% perf-profile.self.cycles-pp.tick_nohz_next_event
0.18 ± 17% +0.3 0.47 ± 19% perf-profile.self.cycles-pp.rb_erase
0.23 ± 11% +0.3 0.52 ± 28% perf-profile.self.cycles-pp.do_idle
0.19 ± 4% +0.3 0.48 ± 17% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.18 ± 54% +0.3 0.51 ± 15% perf-profile.self.cycles-pp.check_leaf
0.30 ± 7% +0.3 0.63 ± 11% perf-profile.self.cycles-pp.native_sched_clock
0.11 ± 6% +0.3 0.45 ± 29% perf-profile.self.cycles-pp.mem_cgroup_from_obj
0.18 ± 23% +0.4 0.53 ± 16% perf-profile.self.cycles-pp.rcu_cblist_dequeue
0.26 ± 8% +0.4 0.69 ± 21% perf-profile.self.cycles-pp.__btrfs_release_delayed_node
0.18 ± 17% +0.5 0.64 ± 19% perf-profile.self.cycles-pp.btrfs_drop_inode
0.43 ± 8% +0.5 0.89 ± 8% perf-profile.self.cycles-pp.__list_del_entry_valid
0.37 ± 17% +0.5 0.92 ± 18% perf-profile.self.cycles-pp.cpuidle_enter_state
0.32 ± 18% +0.6 0.88 ± 18% perf-profile.self.cycles-pp.___d_drop
0.50 ± 42% +0.6 1.14 ± 18% perf-profile.self.cycles-pp.read_extent_buffer
0.54 ± 36% +0.7 1.25 ± 24% perf-profile.self.cycles-pp.btrfs_get_32
0.35 ± 13% +0.8 1.14 ± 17% perf-profile.self.cycles-pp._raw_spin_trylock
0.00 +0.8 0.82 ± 50% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.81 ± 8% +1.0 1.80 ± 16% perf-profile.self.cycles-pp.menu_select
1.25 ± 10% +1.1 2.38 ± 13% perf-profile.self.cycles-pp._raw_spin_lock
0.65 ± 14% +1.2 1.84 ± 19% perf-profile.self.cycles-pp.kmem_cache_free
0.93 ± 20% +1.4 2.32 ± 35% perf-profile.self.cycles-pp.ktime_get
2.99 ± 10% +5.5 8.51 ± 20% perf-profile.self.cycles-pp.intel_idle
0.01 ±173% +14.0 14.04 ± 36% perf-profile.self.cycles-pp.osq_lock
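The comparison rows above each show the base-commit share of self cycles, the absolute delta, the head-commit share, and the symbol. A minimal sketch of how such rows could be parsed and ranked by magnitude of change (the line layout is an assumption inferred from this report, not an official LKP schema):

```python
import re

# Hedged sketch: parse LKP perf-profile comparison lines of the form
# "0.18 ± 12%  +0.2  0.39 ± 14%  perf-profile.self.cycles-pp.<symbol>"
# The regex tolerates optional "± N%" stddev annotations on either value.
LINE_RE = re.compile(
    r"^\s*([\d.]+)\s*(?:±\s*\d+%)?\s+([+-][\d.]+)\s+([\d.]+)\s*(?:±\s*\d+%)?\s+"
    r"perf-profile\.self\.cycles-pp\.(\S+)"
)

def parse_deltas(text):
    """Return [(symbol, base, delta, head)] sorted by |delta|, largest first."""
    rows = []
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if m:
            base, delta, head, sym = m.groups()
            rows.append((sym, float(base), float(delta), float(head)))
    rows.sort(key=lambda r: abs(r[2]), reverse=True)
    return rows

# Two rows copied from the report above, used as sample input.
sample = """\
2.99 ± 10%      +5.5        8.51 ± 20%  perf-profile.self.cycles-pp.intel_idle
0.01 ±173%     +14.0       14.04 ± 36%  perf-profile.self.cycles-pp.osq_lock
"""

for sym, base, delta, head in parse_deltas(sample):
    print(f"{sym}: {base} -> {head} ({delta:+})")
```

Ranking by absolute delta surfaces osq_lock (+14.0) and intel_idle (+5.5) first, consistent with the report's suggestion that the regression is dominated by optimistic-spin lock contention and idle time.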
fxmark.hdd_btrfs_MWCL_72_bufferedio.sys_sec
1200 +--------------------------------------------------------------------+
| |
1000 |++ .+. .+.+. .+. .+.+. +.+.+. .+.+.+.+. .+. .+.|
| +.+ +.+.++ +.+ + +.+.+ + +.+.+.++ +.+ |
| |
800 |-+ |
| |
600 |-+ |
| O O O |
400 |-+ O O O O O O O O O O O OO O O O O O O O O O O O O |
| O O O O O |
| |
200 |-+ |
| |
0 +--------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_72_bufferedio.idle_sec
1800 +--------------------------------------------------------------------+
| O O O O O O |
1600 |-+ O O O O O O O O O O OO O O O O O O O O O O OO O O O |
1400 |-+ |
| |
1200 |-+ |
1000 |.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.|
| |
800 |-+ |
600 |-+ |
| |
400 |-+ |
200 |-+ |
| |
0 +--------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_72_bufferedio.sys_util
60 +----------------------------------------------------------------------+
| |
50 |.+ |
| +.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+. .+.+.+.+.|
| + |
40 |-+ |
| |
30 |-+ |
| |
20 |-+ O O O O O OO O O O O O O O O O O O O O O O |
| O O O O O O O O O O O |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_72_bufferedio.idle_util
80 +----------------------------------------------------------------------+
| O O O O O O O O O O OO O O O O O O O O O O O O O O O |
70 |-+ |
60 |-+ |
| |
50 |.+.+. .+.+.+. .+.+. .+. .+.+.+ .+. .+.+.+.+.+. .+.+. |
| +.+ +.+ + +.+ +.+.+ +.+.+.+ + +.|
40 |-+ |
| |
30 |-+ |
20 |-+ |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_63_bufferedio.sys_sec
1000 +--------------------------------------------------------------------+
900 |-+.+.+.+.+.++. .+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.|
| + |
800 |-+ |
700 |-+ |
| |
600 |-+ O O |
500 |-+ O O O O O O O O O O O O O O |
400 |-+ O O O |
| O O O O OO O O O O O O O O |
300 |-+ |
200 |-+ |
| |
100 |-+ |
0 +--------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_63_bufferedio.idle_sec
1400 +--------------------------------------------------------------------+
| O O O |
1200 |-+ O O O O O O O O O OO O O O |
| |
1000 |-+ |
| .+. .+. +. |
800 |.+.+.+.+.+.++ +.+.+.+.+.+ +.+.+ +.+.+.+.+.+.+.+.+.+.++.+.+.+.+.|
| |
600 |-+ |
| |
400 |-+ |
| |
200 |-+ |
| |
0 +--------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_63_bufferedio.sys_util
60 +----------------------------------------------------------------------+
| |
50 |.+.+.+.+.+.+.+. .+.+.+.+.+.+. .+.+.+ .+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.|
| + + + |
| |
40 |-+ |
| |
30 |-+ O O O O O O O O O O O O O O |
| O O |
20 |-O O O O O O O O O O O O O O O O O |
| |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_63_bufferedio.idle_util
80 +----------------------------------------------------------------------+
| O O O O O O O O O O O O O O O O O |
70 |-+ O O O O O O O O O O O O O O O O |
60 |-+ |
| |
50 |-+ |
|.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.|
40 |-+ |
| |
30 |-+ |
20 |-+ |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_54_bufferedio.sys_sec
1000 +--------------------------------------------------------------------+
900 |.+ |
| +. .+.+. .++. .+.+.+.+.+.+.+.+.+.++.+.+. .+.+.+.+.+. .+. .+.+.+.|
800 |-+ + + + + + ++.+ |
700 |-+ |
| |
600 |-+ |
500 |-+ O O O O O O O O O OO O O O |
400 |-+ O O |
| O O O O O OO O O O O O O O O O O |
300 |-+ |
200 |-+ |
| |
100 |-+ |
0 +--------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_54_bufferedio.idle_sec
1200 +--------------------------------------------------------------------+
| O O OO O O O O O |
1000 |-+ O O O O O O O O O O OO O O O |
| O |
| |
800 |-+ |
| .+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+. .+. .+. .++.+.+.+.+.|
600 |.+ + + + |
| |
400 |-+ |
| |
| |
200 |-+ |
| |
0 +--------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_54_bufferedio.sys_util
60 +----------------------------------------------------------------------+
|.+. .+. .+. .+. .+. .+.+.+. .+ .+. .+.+.+.+.+. .+. .+.|
50 |-+ + +.+.+ +.+ + + + + +.+ + +.+.+.+.+ |
| |
| |
40 |-+ |
| O |
30 |-+ O O O O O O O O O O O O O O O |
| |
20 |-O O O O O O O O O O O O O O O O O |
| |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_54_bufferedio.idle_util
80 +----------------------------------------------------------------------+
| O O O O O O O O O O O O O O O O O |
70 |-+ O |
60 |-+ O O O O O O O O O O O O O O O |
| |
50 |-+ |
| .+. .+. .+. .+. .+.+.+. |
40 |.+.+.+.+ +.+ +.+.+.+.+.+.+.+.+.++.+.+ +.+.+.+.+ + +.+.+.|
| |
30 |-+ |
20 |-+ |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_45_bufferedio.sys_sec
800 +---------------------------------------------------------------------+
| +.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+. .+.+.+.++.+.+.+.+.+.+.+.|
700 |-+ + |
600 |-+ |
| |
500 |-+ |
| O O O O O |
400 |-+ O O O O O O O OO O O |
| |
300 |-O O O O O O O O O O O O O O O O O |
200 |-+ |
| |
100 |-+ |
| |
0 +---------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_45_bufferedio.idle_sec
1000 +--------------------------------------------------------------------+
900 |-O O O O O O O O O |
| O O O O O O |
800 |-+ O O O O O OO O O O |
700 |-+ |
| |
600 |-+ |
500 |.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+. .++.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.|
400 |-+ + |
| |
300 |-+ |
200 |-+ |
| |
100 |-+ |
0 +--------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_45_bufferedio.sys_util
70 +----------------------------------------------------------------------+
| |
60 |.+ .+.+. .+. .+.+. .+. .+.+.+. .+.+. .+. .+. .+. .+. |
| +.+.+ + +.+ + + ++.+ +.+.+.+ + + + +.+.|
50 |-+ |
| |
40 |-+ |
| O O O O O O O O |
30 |-+ O O O O O O O O |
| |
20 |-O O O O O O O O O O O O O O O O O |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_45_bufferedio.idle_util
80 +----------------------------------------------------------------------+
| O O O O O O O O O O O O O O O O O |
70 |-+ |
60 |-+ O O O O O O O O O O O O O |
| O O O |
50 |-+ |
| |
40 |.+.+.+. .+. .+.+. .+. .+. .++.+. .+.+.+.+. .+. .+.+.+. .+.+.|
| +.+ + +.+ + +.+.+ +.+ + + + |
30 |-+ |
20 |-+ |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_36_bufferedio.sys_sec
700 +---------------------------------------------------------------------+
| +.+. .+. .+. .+.++.+. .+. .+. .+.+. .+. .+.++.+.+. .+.+.+.+.|
600 |-+ + + + + +.+.+ +.+ +.+ + + |
| |
500 |-+ |
| |
400 |-+ O O O O |
| O O O O O O O OO O O O |
300 |-+ |
| O O O O O O O O O O O O |
200 |-+ O O O O O |
| |
100 |-+ |
| |
0 +---------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_36_bufferedio.idle_sec
800 +---------------------------------------------------------------------+
| O O O O O O O O O O O O O O O O O |
700 |-+ |
600 |-+ O O O O O O O O O O |
| O OO O O O |
500 |-+ |
| |
400 |-+ |
|.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.|
300 |-+ |
200 |-+ |
| |
100 |-+ |
| |
0 +---------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_36_bufferedio.sys_util
70 +----------------------------------------------------------------------+
|. .+. |
60 |-+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+ +.+.|
| |
50 |-+ |
| |
40 |-+ |
| O O O O O O O O O O O O O O O O |
30 |-+ |
| O O O |
20 |-O O O O O O O O O O O O O O |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_36_bufferedio.idle_util
80 +----------------------------------------------------------------------+
| O O O O O O O O O O O O O O O O O |
70 |-+ |
60 |-+ O O |
| O O O O O O O O O O O O O O |
50 |-+ |
| |
40 |-+ |
|.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.|
30 |-+ |
20 |-+ |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_27_bufferedio.works
2e+06 +-----------------------------------------------------------------+
1.8e+06 |.+ |
| +.+.++.+.+.+.+.++.+.+.+.+.++.+.+.+.+.++.+.+.+.+.++.+.+.+.+.++.+.|
1.6e+06 |-+ |
1.4e+06 |-+ |
| O O OO O O O O OO O O O O OO |
1.2e+06 |-+ |
1e+06 |-O O OO O O O O O O O O O OO O O |
800000 |-+ |
| |
600000 |-+ |
400000 |-+ |
| |
200000 |-+ |
0 +-----------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_27_bufferedio.works_sec
70000 +-------------------------------------------------------------------+
| |
60000 |.+ .+. .+. |
| +.+.+.++.+.+.+.+.+.+.+.++.+ +.+ +.+.++.+.+.+.+.+.+.+.++.+.+.+.|
50000 |-+ |
| O O O O O O O OO O O |
40000 |-+ O O O O O |
| O O O OO O O O O O O OO O O O O |
30000 |-+ |
| |
20000 |-+ |
| |
10000 |-+ |
| |
0 +-------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_27_bufferedio.sys_sec
600 +---------------------------------------------------------------------+
| |
500 |++ .+. .+. .+. .+ .+. .+.|
| +.+.+ +.+.+.+.++.+.+.+.+.+ +.+.+.+ +.+.+.+.+ + +.+.+.+.+ |
| |
400 |-+ |
| |
300 |-+ O O O O O O O O OO O O O O O O |
| |
200 |-O O O O O O O O O O O O O O O O O |
| |
| |
100 |-+ |
| |
0 +---------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_27_bufferedio.idle_sec
600 +---------------------------------------------------------------------+
| |
500 |-O O O O O O O O O O O O O O O O O |
| |
| O O O O O O O O O O O O O O |
400 |-+ O O |
| |
300 |-+ |
|.+. .+.+.+. .+ .+. .+. .+. .+. .+. .+. +. .+. .+.+. .|
200 |-+ +.+.+ + + + +.+.+ + +.+ + +.+.+ + +.+ + |
| |
| |
100 |-+ |
| |
0 +---------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_27_bufferedio.sys_util
70 +----------------------------------------------------------------------+
|+ .+.+.+. .+. .+. .+. .+.+.+. .+ .+.+. .+. .+.+.+. .+. .+.+. .+.|
60 |-+ +.+.+ + + + + + + + + + +.+ |
| |
50 |-+ |
| |
40 |-+ O O O O O O O O O O O O |
| O O O O |
30 |-+ |
| O O O O O O O O O O O O O O O O O |
20 |-+ |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_27_bufferedio.idle_util
70 +----------------------------------------------------------------------+
| O O O |
60 |-+ |
| O O O O O O O O O O O O O O |
50 |-+ O O |
| |
40 |-+ |
| .+. |
30 |.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+ +.+.|
| |
20 |-+ |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_18_bufferedio.works
2e+06 +-----------------------------------------------------------------+
1.8e+06 |.+ |
| +.+.++.+.+.+.+.++.+.+.+.+.++.+.+.+.+.++.+.+.+.+.++.+.+.+.+.++.+.|
1.6e+06 |-+ |
1.4e+06 |-+ |
| O O |
1.2e+06 |-+ O O OO O O O OO O O O OO |
1e+06 |-O O OO O O O O O O O O O OO O O |
800000 |-+ |
| |
600000 |-+ |
400000 |-+ |
| |
200000 |-+ |
0 +-----------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_18_bufferedio.works_sec
70000 +-------------------------------------------------------------------+
| |
60000 |.+ |
| +.+.+.++.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.++.+.+.+.|
50000 |-+ |
| |
40000 |-+ O O O OO O O O O O O O OO O O |
| O O O O O |
30000 |-+ O O OO O O O OO O O O |
| |
20000 |-+ |
| |
10000 |-+ |
| |
0 +-------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_18_bufferedio.user_sec
14 +----------------------------------------------------------------------+
| |
12 |.+ .+. .+. +. |
| +.+.+ +.+.+.+.+.+.+.+.+.+.+.+ + +.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.|
10 |-+ |
| O O O O O O O O O O O O O O O O |
8 |-+ O O O O |
| O O O O O O O O O O O O O |
6 |-+ |
| |
4 |-+ |
| |
2 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_18_bufferedio.sys_sec
400 +---------------------------------------------------------------------+
|. |
350 |-+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.|
300 |-+ |
| |
250 |-+ |
| O O O O O O O O OO O O O O O O |
200 |-+ |
| O O O O O O O O O O O O O O O O O |
150 |-+ |
100 |-+ |
| |
50 |-+ |
| |
0 +---------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_18_bufferedio.idle_sec
350 +---------------------------------------------------------------------+
| |
300 |-O O O O O O O O O O O O O O O O O |
| |
250 |-+ O O O O O |
| O O O O O O O O O O O |
200 |-+ |
| |
150 |-+ |
|.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.|
100 |-+ |
| |
50 |-+ |
| |
0 +---------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_18_bufferedio.user_util
2.5 +---------------------------------------------------------------------+
|. .+. .+. .+. .+. .+. |
| +.+.+ +.+.+.+.++ + +.+.+ + +.+.+.+.+.+.+.++.+.+.+.+.+.+.+.|
2 |-+ |
| O O O O O O O |
| O O O O O O O O O O |
1.5 |-O O O O O O O O O O O O O O O O |
| |
1 |-+ |
| |
| |
0.5 |-+ |
| |
| |
0 +---------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_18_bufferedio.sys_util
80 +----------------------------------------------------------------------+
| |
70 |.+.+.+.+.+.+. .+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.|
60 |-+ + |
| |
50 |-+ |
| O O O O O O O O O O O O O O O O |
40 |-+ |
| O O O O O O O O O O O O O O O O O |
30 |-+ |
20 |-+ |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.hdd_btrfs_MWCL_18_bufferedio.idle_util
60 +----------------------------------------------------------------------+
| O O O O |
50 |-+ |
| O O O O O O O O O O O O O O O |
| O |
40 |-+ |
| |
30 |-+ |
|.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.|
20 |-+ |
| |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fxmark.time.system_time
400 +---------------------------------------------------------------------+
|. .+.|
350 |-+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+ |
300 |-+ |
| |
250 |-+ O O O O O O O O OO O O O O O O |
| O O O O O O O O O O O O O O O O O |
200 |-+ |
| |
150 |-+ |
100 |-+ |
| |
50 |-+ |
| |
0 +---------------------------------------------------------------------+
fxmark.time.percent_of_cpu_this_job_got
70 +----------------------------------------------------------------------+
|.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.|
60 |-+ |
| |
50 |-+ O O O O O O O O O O O O O O O O |
| O O O O O O O O O O O O O O O O O |
40 |-+ |
| |
30 |-+ |
| |
20 |-+ |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Oliver Sang