Re: [PATCH] x86/pat: Fix off-by-one bugs in interval tree search
by Ingo Molnar
* Mariusz Ceier <mceier(a)gmail.com> wrote:
> Your patch fixes performance issue on my system and afterwards
> /sys/kernel/debug/x86/pat_memtype_list contents are:
Great, thanks for testing it!
> PAT memtype list:
> uncached-minus @ 0xfed90000-0xfed91000
> write-combining @ 0x2000000000-0x2100000000
> write-combining @ 0x2000000000-0x2100000000
> uncached-minus @ 0x2100000000-0x2100001000
Note how the UC- region starts right after the WC region, which triggered
the bug on your system.
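For illustration, here is a minimal user-space sketch (not the actual pat code) of how an off-by-one at exactly such an adjacent boundary turns into a bogus overlap, assuming half-open [start, end) memtype ranges being checked against an inclusive-end interval lookup:

#include <stdbool.h>
#include <stdio.h>

/* Overlap test against a closed interval [lo, hi], both ends inclusive. */
static bool overlaps_closed(unsigned long long lo, unsigned long long hi,
			    unsigned long long start, unsigned long long end)
{
	return start <= hi && end >= lo;
}

int main(void)
{
	/* Existing WC region, half-open: [0x2000000000, 0x2100000000) */
	unsigned long long wc_start = 0x2000000000ULL, wc_end = 0x2100000000ULL;
	/* New UC- request, half-open: [0x2100000000, 0x2100001000) */
	unsigned long long uc_start = 0x2100000000ULL, uc_end = 0x2100001000ULL;

	/* Feeding the exclusive ends straight in reports a false overlap: */
	printf("exclusive ends: %d\n",
	       overlaps_closed(wc_start, wc_end, uc_start, uc_end));
	/* Converting both ends to inclusive form (end - 1) does not: */
	printf("end - 1:        %d\n",
	       overlaps_closed(wc_start, wc_end - 1, uc_start, uc_end - 1));
	return 0;
}

Converting both ends to the inclusive form the lookup expects keeps the adjacent WC and UC- regions from being reported as overlapping.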
> It's very similar to pat_memtype_list contents after reverting 4
> x86/mm/pat patches affecting performance:
>
> @@ -1,8 +1,8 @@
> PAT memtype list:
> write-back @ 0x55ba4000-0x55ba5000
> write-back @ 0x5e88c000-0x5e8b5000
> -write-back @ 0x5e8b4000-0x5e8b8000
> write-back @ 0x5e8b4000-0x5e8b5000
> +write-back @ 0x5e8b4000-0x5e8b8000
> write-back @ 0x5e8b7000-0x5e8bb000
> write-back @ 0x5e8ba000-0x5e8bc000
> write-back @ 0x5e8bb000-0x5e8be000
> @@ -21,8 +21,8 @@
> uncached-minus @ 0xec260000-0xec264000
> uncached-minus @ 0xec300000-0xec320000
> uncached-minus @ 0xec326000-0xec327000
> -uncached-minus @ 0xf0000000-0xf0001000
> uncached-minus @ 0xf0000000-0xf8000000
> +uncached-minus @ 0xf0000000-0xf0001000
Yes, the ordering of same-start regions is different. I believe the
difference comes from how the old rbtree logic inserted new nodes:
-	while (*node) {
-		struct memtype *data = rb_entry(*node, struct memtype, rb);
-
-		parent = *node;
-		if (data->subtree_max_end < newdata->end)
-			data->subtree_max_end = newdata->end;
-		if (newdata->start <= data->start)
-			node = &((*node)->rb_left);
-		else if (newdata->start > data->start)
-			node = &((*node)->rb_right);
-	}
-
-	newdata->subtree_max_end = newdata->end;
-	rb_link_node(&newdata->rb, parent, node);
-	rb_insert_augmented(&newdata->rb, root, &memtype_rb_augment_cb);
In the new interval-tree logic this is:
	while (*link) {						\
		rb_parent = *link;				\
		parent = rb_entry(rb_parent, ITSTRUCT, ITRB);	\
		if (parent->ITSUBTREE < last)			\
			parent->ITSUBTREE = last;		\
		if (start < ITSTART(parent))			\
			link = &parent->ITRB.rb_left;		\
		else {						\
			link = &parent->ITRB.rb_right;		\
			leftmost = false;			\
		}						\
	}							\
								\
	node->ITSUBTREE = last;					\
	rb_link_node(&node->ITRB, rb_parent, link);		\
	rb_insert_augmented_cached(&node->ITRB, root,		\
				   leftmost, &ITPREFIX ## _augment);
The old logic was a bit convoluted, but it can be written as:
	if (newdata->start <= data->start)
		node = &parent->rb_left;
	else
		node = &parent->rb_right;

The new logic is, in effect:

	if (start < data->start)
		link = &parent->rb_left;
	else
		link = &parent->rb_right;
Note the "<=" vs. '<' difference - this I believe changes the ordering
within the tree. It's still fine as long as this is used consistently,
but this changes the internal ordering of the nodes of the tree.
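To make that concrete, here is a small user-space toy (a plain, unbalanced
BST, not the kernel's augmented rbtree) showing how the two comparison rules
place a second node with an identical start address on opposite sides of the
first one, which flips the in-order traversal and hence the dump order:

#include <stdio.h>
#include <stdlib.h>

struct node {
	unsigned long long start;
	int id;				/* insertion order, for display */
	struct node *left, *right;
};

static void insert(struct node **root, struct node *n, int use_le)
{
	while (*root) {
		int go_left = use_le ? n->start <= (*root)->start
				     : n->start <  (*root)->start;

		root = go_left ? &(*root)->left : &(*root)->right;
	}
	*root = n;
}

static void inorder(struct node *n)
{
	if (!n)
		return;
	inorder(n->left);
	printf("  start=%#llx inserted=%d\n", n->start, n->id);
	inorder(n->right);
}

int main(void)
{
	for (int use_le = 1; use_le >= 0; use_le--) {
		struct node *root = NULL;
		int i;

		printf("%s\n", use_le ? "old '<=' rule:" : "new '<' rule:");
		for (i = 0; i < 2; i++) {
			struct node *n = calloc(1, sizeof(*n));

			n->start = 0xf0000000ULL;	/* same start address */
			n->id = i;
			insert(&root, n, use_le);
		}
		inorder(root);
	}
	return 0;
}

With "<=" the later insertion lands to the left of the earlier node and is
printed first; with "<" it lands to the right and is printed last - the same
reordering of equal-start entries visible in the pat_memtype_list diff above.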
Thanks,
Ingo
942437c97f ("y2038: allow disabling time32 system calls"): BUG: kernel hang in test stage
by kernel test robot
Greetings,
The 0day kernel testing robot got the dmesg below, and the first bad commit is:
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 942437c97fd9ff23a17c13118f50bd0490f6868c
Author: Arnd Bergmann <arnd(a)arndb.de>
AuthorDate: Mon Jul 15 11:46:10 2019 +0200
Commit: Arnd Bergmann <arnd(a)arndb.de>
CommitDate: Fri Nov 15 14:38:30 2019 +0100
y2038: allow disabling time32 system calls
At the moment, the compilation of the old time32 system calls depends
purely on the architecture. As systems with new libc based on 64-bit
time_t are getting deployed, even architectures that previously supported
these (notably x86-32 and arm32 but also many others) no longer depend on
them, and removing them from a kernel image results in a smaller kernel
binary, the same way we can leave out many other optional system calls.
More importantly, on an embedded system that needs to keep working
beyond year 2038, any user space program calling these system calls
is likely a bug, so removing them from the kernel image does provide
an extra debugging help for finding broken applications.
I've gone back and forth on hiding this option unless CONFIG_EXPERT
is set. This version leaves it visible based on the logic that
eventually it will be turned off indefinitely.
Acked-by: Christian Brauner <christian.brauner(a)ubuntu.com>
Signed-off-by: Arnd Bergmann <arnd(a)arndb.de>
bd40a17576 y2038: itimer: change implementation to timespec64
942437c97f y2038: allow disabling time32 system calls
+-----------------------------------------------------------+------------+------------+
| | bd40a17576 | 942437c97f |
+-----------------------------------------------------------+------------+------------+
| boot_successes | 38 | 0 |
| boot_failures | 0 | 19 |
| BUG:kernel_hang_in_test_stage | 0 | 15 |
| Assertion_failed | 0 | 4 |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode= | 0 | 4 |
+-----------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
Press the [1], [2], [3] or [4] key and hit [enter] to select the debug level
[ 3.739930] rm (184) used greatest stack depth: 6556 bytes left
[ 5.754828] mount_root: mounting /dev/root
[ 5.762906] urandom-seed: Seed file not found (/etc/urandom.seed)
[ 5.767017] sh (172) used greatest stack depth: 6308 bytes left
BUG: kernel hang in test stage
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 1875ff320f14afe21731a6e4c7b46dd33e45dfaa v5.4 --
git bisect good 9137b1791b9c92238389cdd1a23ca6d16da3145a # 18:36 G 11 0 0 0 Merge remote-tracking branch 'xtensa/xtensa-for-next'
git bisect good 4f718d6625e1302a61524688b87e9bdfaab0c7c0 # 19:40 G 11 0 0 0 Merge remote-tracking branch 'selinux/next'
git bisect good a7b0c9fecaeca5272254f581541864e898508d21 # 20:33 G 11 0 0 0 Merge remote-tracking branch 'scsi/for-next'
git bisect bad 2c98eb870fbbd8a18f7725a53decea87a303466a # 21:32 B 0 7 23 0 Merge remote-tracking branch 'kspp/for-next/kspp'
git bisect good 1f5b50bd3d843e2da5a388096de05b74c8acb4f3 # 22:47 G 11 0 2 2 Merge remote-tracking branch 'userns/for-next'
git bisect bad 128a34ad76c10bf861b6410b46e2226f83dda09e # 23:37 B 0 1 18 0 Merge remote-tracking branch 'livepatching/for-next'
git bisect good 6481881419b03d1af1229f32bcc5ecd1731ec2be # 00:21 G 11 0 0 0 Merge remote-tracking branch 'ktest/for-next'
git bisect bad d7eb0d1dbabe7415854fa69173b05c897d5bbed1 # 01:23 B 0 6 22 0 Merge remote-tracking branch 'y2038/y2038'
git bisect good 3e859adf3643c2da9765cd5088170738d7918567 # 05:05 G 10 0 0 0 compat_ioctl: unify copy-in of ppp filters
git bisect good 5e0fb1b57bea8d11fe77da2bc80f4c9a67e28318 # 05:50 G 11 0 0 0 y2038: time: avoid timespec usage in settimeofday()
git bisect bad b111df8447acdeb4b9220f99d5d4b28f83eb56ad # 06:54 B 1 2 1 3 y2038: alarm: fix half-second cut-off
git bisect good bd40a175769d411b2a37e1c087082ac7ee2c15bb # 08:26 G 11 0 1 1 y2038: itimer: change implementation to timespec64
git bisect bad 1c11ca7a0584ddede5b8c93057b40d31e8a96d3d # 09:32 B 0 2 20 2 y2038: fix typo in powerpc vdso "LOPART"
git bisect bad 942437c97fd9ff23a17c13118f50bd0490f6868c # 11:17 B 0 3 21 2 y2038: allow disabling time32 system calls
# first bad commit: [942437c97fd9ff23a17c13118f50bd0490f6868c] y2038: allow disabling time32 system calls
git bisect good bd40a175769d411b2a37e1c087082ac7ee2c15bb # 12:54 G 32 0 0 1 y2038: itimer: change implementation to timespec64
# extra tests with debug options
git bisect good 942437c97fd9ff23a17c13118f50bd0490f6868c # 13:51 G 11 0 0 0 y2038: allow disabling time32 system calls
# extra tests on revert first bad commit
git bisect good 66c1e1f1d2ea1a246dedf3908dae5806bde49691 # 15:19 G 11 0 0 0 Revert "y2038: allow disabling time32 system calls"
# good: [66c1e1f1d2ea1a246dedf3908dae5806bde49691] Revert "y2038: allow disabling time32 system calls"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org Intel Corporation
[locking/mutex] a0855d24fc: WARNING:at_kernel/locking/mutex.c:#mutex_trylock
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: a0855d24fc22d49cdc25664fb224caee16998683 ("locking/mutex: Complain upon mutex API misuse in IRQ contexts")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: boot
on test machine: qemu-system-i386 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------------------+------------+------------+
| | 751459043c | a0855d24fc |
+------------------------------------------------------+------------+------------+
| boot_successes | 1 | 1 |
| boot_failures | 35 | 35 |
| BUG:workqueue_lockup-pool | 10 | 9 |
| BUG:soft_lockup-CPU##stuck_for#s![dma-fence:#:#] | 24 | 25 |
| EIP:to_kthread | 3 | 2 |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 25 | 26 |
| EIP:thread_signal_callback | 6 | 6 |
| calltrace:do_softirq_own_stack | 7 | 12 |
| EIP:lock_is_held_type | 10 | 10 |
| EIP:debug_lockdep_rcu_enabled | 2 | 5 |
| EIP:arch_local_save_flags | 3 | |
| EIP:kthread_should_stop | 1 | 1 |
| BUG:soft_lockup-CPU##stuck_for#s![rcu_torture_fak:#] | 1 | |
| WARNING:at_kernel/locking/mutex.c:#mutex_trylock | 0 | 26 |
| EIP:mutex_trylock | 0 | 26 |
| WARNING:at_kernel/locking/mutex.c:#mutex_unlock | 0 | 26 |
| EIP:mutex_unlock | 0 | 26 |
| BUG:soft_lockup-CPU##stuck_for#s![swapper:#] | 0 | 1 |
| EIP:rs_modnn | 0 | 1 |
| EIP:rcu_read_lock_held | 0 | 1 |
+------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 576.348903] watchdog: BUG: soft lockup - CPU#0 stuck for 134s! [dma-fence:0:230]
[ 576.348903] Modules linked in:
[ 576.348903] irq event stamp: 280712
[ 576.348903] hardirqs last enabled at (280711): [<dda010fb>] trace_hardirqs_on_thunk+0xc/0x10
[ 576.348903] hardirqs last disabled at (280712): [<dda0110b>] trace_hardirqs_off_thunk+0xc/0x10
[ 576.348903] softirqs last enabled at (526): [<de4568db>] __do_softirq+0x283/0x2b1
[ 576.348903] softirqs last disabled at (519): [<dda05bb1>] do_softirq_own_stack+0x21/0x27
[ 576.348903] CPU: 0 PID: 230 Comm: dma-fence:0 Not tainted 5.4.0-rc2-00042-ga0855d24fc22d #1
[ 576.348903] EIP: to_kthread+0x0/0x1a
[ 576.348903] Code: 43 08 ff d2 83 c3 14 eb e9 5b 5e 5d c3 3e 8d 74 26 00 55 89 e5 ff 70 2c 68 b8 e7 a1 de 68 00 10 00 00 51 e8 d7 65 a0 00 c9 c3 <f6> 40 16 20 75 0d 55 89 e5 0f 0b 8b 80 28 03 00 00 5d c3 8b 80 28
[ 576.348903] EAX: e9c3b380 EBX: ee6abea8 ECX: dec3053c EDX: e9c3b85c
[ 576.348903] ESI: 00000000 EDI: 00000000 EBP: eb3bff4c ESP: eb3bff48
[ 576.348903] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068 EFLAGS: 00000246
[ 576.348903] CR0: 80050033 CR2: 00000000 CR3: 1eea9000 CR4: 000406b0
[ 576.348903] Call Trace:
[ 576.348903] ? kthread_should_stop+0x12/0x1b
[ 576.348903] thread_signal_callback+0x122/0x243
[ 576.348903] kthread+0xdc/0xde
[ 576.348903] ? test_signaling+0x7b/0x7b
[ 576.348903] ? kthread_create_worker_on_cpu+0x1c/0x1c
[ 576.348903] ret_from_fork+0x19/0x24
[ 576.348903] Kernel panic - not syncing: softlockup: hung tasks
[ 576.348903] CPU: 0 PID: 230 Comm: dma-fence:0 Tainted: G L 5.4.0-rc2-00042-ga0855d24fc22d #1
[ 576.348903] Call Trace:
[ 576.348903] <IRQ>
[ 576.348903] dump_stack+0x16/0x18
[ 576.348903] panic+0xaa/0x259
[ 576.348903] watchdog_timer_fn+0x1ec/0x211
[ 576.348903] ? __touch_watchdog+0x1a/0x1a
[ 576.348903] __hrtimer_run_queues+0x165/0x256
[ 576.348903] ? __touch_watchdog+0x1a/0x1a
[ 576.348903] hrtimer_run_queues+0xc4/0xd9
[ 576.348903] run_local_timers+0xd/0x26
[ 576.348903] update_process_times+0x1c/0x3c
[ 576.348903] tick_periodic+0x99/0x9b
[ 576.348903] tick_handle_periodic+0x18/0x54
[ 576.348903] timer_interrupt+0x12/0x19
[ 576.348903] __handle_irq_event_percpu+0xa7/0x1ac
[ 576.348903] handle_irq_event_percpu+0x1c/0x42
[ 576.348903] handle_irq_event+0x2e/0x47
[ 576.348903] ? irq_set_chained_handler_and_data+0x4f/0x4f
[ 576.348903] handle_level_irq+0x57/0x83
[ 576.348903] handle_irq+0x3a/0x45
[ 576.348903] </IRQ>
[ 576.348903] do_IRQ+0x6a/0x8f
[ 576.348903] common_interrupt+0xfd/0x104
[ 576.348903] EIP: to_kthread+0x0/0x1a
[ 576.348903] Code: 43 08 ff d2 83 c3 14 eb e9 5b 5e 5d c3 3e 8d 74 26 00 55 89 e5 ff 70 2c 68 b8 e7 a1 de 68 00 10 00 00 51 e8 d7 65 a0 00 c9 c3 <f6> 40 16 20 75 0d 55 89 e5 0f 0b 8b 80 28 03 00 00 5d c3 8b 80 28
[ 576.348903] EAX: e9c3b380 EBX: ee6abea8 ECX: dec3053c EDX: e9c3b85c
[ 576.348903] ESI: 00000000 EDI: 00000000 EBP: eb3bff4c ESP: eb3bff48
[ 576.348903] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068 EFLAGS: 00000246
[ 576.348903] ? __modver_version_show+0x1d/0x1d
[ 576.348903] ? kthread_should_stop+0x12/0x1b
[ 576.348903] thread_signal_callback+0x122/0x243
[ 576.348903] kthread+0xdc/0xde
[ 576.348903] ? test_signaling+0x7b/0x7b
[ 576.348903] ? kthread_create_worker_on_cpu+0x1c/0x1c
[ 576.348903] ret_from_fork+0x19/0x24
[ 576.348903] ------------[ cut here ]------------
[ 576.348903] WARNING: CPU: 0 PID: 230 at kernel/locking/mutex.c:1419 mutex_trylock+0x4b/0x81
[ 576.348903] Modules linked in:
[ 576.348903] CPU: 0 PID: 230 Comm: dma-fence:0 Tainted: G L 5.4.0-rc2-00042-ga0855d24fc22d #1
[ 576.348903] EIP: mutex_trylock+0x4b/0x81
[ 576.348903] Code: 74 1c 83 3d 08 5f d7 de 00 75 13 68 68 53 8f de 68 97 76 8e de e8 fe af 5d ff 0f 0b 58 5a a1 d4 af ad de a9 00 ff 1f 00 74 02 <0f> 0b 89 d8 e8 d0 28 60 ff 84 c0 89 c6 74 1b ff 75 04 8d 43 38 6a
[ 576.348903] EAX: 00010002 EBX: dec36e60 ECX: 00000000 EDX: e9c3b380
[ 576.348903] ESI: 00000000 EDI: ee659ec8 EBP: ee659e7c ESP: ee659e74
[ 576.348903] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068 EFLAGS: 00010006
[ 576.348903] CR0: 80050033 CR2: 00000000 CR3: 1eea9000 CR4: 000406b0
[ 576.348903] Call Trace:
[ 576.348903] <IRQ>
[ 576.348903] __crash_kexec+0x26/0xa8
[ 576.348903] panic+0xba/0x259
[ 576.348903] watchdog_timer_fn+0x1ec/0x211
[ 576.348903] ? __touch_watchdog+0x1a/0x1a
[ 576.348903] __hrtimer_run_queues+0x165/0x256
[ 576.348903] ? __touch_watchdog+0x1a/0x1a
[ 576.348903] hrtimer_run_queues+0xc4/0xd9
[ 576.348903] run_local_timers+0xd/0x26
[ 576.348903] update_process_times+0x1c/0x3c
[ 576.348903] tick_periodic+0x99/0x9b
[ 576.348903] tick_handle_periodic+0x18/0x54
[ 576.348903] timer_interrupt+0x12/0x19
[ 576.348903] __handle_irq_event_percpu+0xa7/0x1ac
[ 576.348903] handle_irq_event_percpu+0x1c/0x42
[ 576.348903] handle_irq_event+0x2e/0x47
[ 576.348903] ? irq_set_chained_handler_and_data+0x4f/0x4f
[ 576.348903] handle_level_irq+0x57/0x83
[ 576.348903] handle_irq+0x3a/0x45
[ 576.348903] </IRQ>
[ 576.348903] do_IRQ+0x6a/0x8f
[ 576.348903] common_interrupt+0xfd/0x104
[ 576.348903] EIP: to_kthread+0x0/0x1a
[ 576.348903] Code: 43 08 ff d2 83 c3 14 eb e9 5b 5e 5d c3 3e 8d 74 26 00 55 89 e5 ff 70 2c 68 b8 e7 a1 de 68 00 10 00 00 51 e8 d7 65 a0 00 c9 c3 <f6> 40 16 20 75 0d 55 89 e5 0f 0b 8b 80 28 03 00 00 5d c3 8b 80 28
[ 576.348903] EAX: e9c3b380 EBX: ee6abea8 ECX: dec3053c EDX: e9c3b85c
[ 576.348903] ESI: 00000000 EDI: 00000000 EBP: eb3bff4c ESP: eb3bff48
[ 576.348903] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068 EFLAGS: 00000246
[ 576.348903] ? __modver_version_show+0x1d/0x1d
[ 576.348903] ? kthread_should_stop+0x12/0x1b
[ 576.348903] thread_signal_callback+0x122/0x243
[ 576.348903] kthread+0xdc/0xde
[ 576.348903] ? test_signaling+0x7b/0x7b
[ 576.348903] ? kthread_create_worker_on_cpu+0x1c/0x1c
[ 576.348903] ret_from_fork+0x19/0x24
[ 576.348903] irq event stamp: 280712
[ 576.348903] hardirqs last enabled at (280711): [<dda010fb>] trace_hardirqs_on_thunk+0xc/0x10
[ 576.348903] hardirqs last disabled at (280712): [<dda0110b>] trace_hardirqs_off_thunk+0xc/0x10
[ 576.348903] softirqs last enabled at (526): [<de4568db>] __do_softirq+0x283/0x2b1
[ 576.348903] softirqs last disabled at (519): [<dda05bb1>] do_softirq_own_stack+0x21/0x27
[ 576.348903] ---[ end trace a5329ee86dc626c2 ]---
To reproduce:
# build kernel
cd linux
cp config-5.4.0-rc2-00042-ga0855d24fc22d .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[tracing] bf8e602186: WARNING:at_kernel/trace/trace.c:#create_trace_option_files
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: bf8e602186ec402ed937b2cbd6c39a34c0029757 ("tracing: Do not create tracefs files if tracefs lockdown is in effect")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: boot
on test machine: qemu-system-i386 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------------------------+------------+------------+
| | 17911ff38a | bf8e602186 |
+------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 8 | 8 |
| WARNING:at_fs/open.c:#do_dentry_open | 8 | 8 |
| EIP:do_dentry_open | 8 | 8 |
| WARNING:at_kernel/trace/trace.c:#create_trace_option_files | 0 | 8 |
| EIP:create_trace_option_files | 0 | 8 |
+------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 1.049546] WARNING: CPU: 0 PID: 1 at kernel/trace/trace.c:8107 create_trace_option_files+0x134/0x148
[ 1.051505] Modules linked in:
[ 1.051928] CPU: 0 PID: 1 Comm: swapper Not tainted 5.4.0-rc2-00007-gbf8e602186ec4 #2
[ 1.052316] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1.052316] EIP: create_trace_option_files+0x134/0x148
[ 1.052316] Code: fe ff ff 89 46 0c 59 58 83 7e 0c 00 75 20 80 3d e3 d3 c7 c1 00 75 17 ff 37 68 7b 54 a6 c1 c6 05 e3 d3 c7 c1 01 e8 46 77 f8 ff <0f> 0b 58 5a 83 c7 08 83 c6 10 eb 94 8d 65 f4 5b 5e 5f 5d c3 55 89
[ 1.052316] EAX: 0000002e EBX: c1bc1900 ECX: 00000000 EDX: 0000020c
[ 1.052316] ESI: f0e54f80 EDI: c1bc1d2c EBP: f0cb5f10 ESP: f0cb5ef4
[ 1.052316] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068 EFLAGS: 00010282
[ 1.052316] CR0: 80050033 CR2: 00000000 CR3: 01d98000 CR4: 000406d0
[ 1.052316] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 1.052316] DR6: fffe0ff0 DR7: 00000400
[ 1.052316] Call Trace:
[ 1.052316] __update_tracer_options+0x23/0x2c
[ 1.052316] tracer_init_tracefs+0x140/0x166
[ 1.052316] ? register_tracer+0x21d/0x21d
[ 1.052316] do_one_initcall+0x7c/0x184
[ 1.052316] ? set_debug_rodata+0x2d/0x2d
[ 1.052316] ? trace_initcall_level+0x4c/0x50
[ 1.052316] kernel_init_freeable+0x138/0x1fe
[ 1.052316] ? rest_init+0xdc/0xdc
[ 1.052316] kernel_init+0x8/0xd0
[ 1.052316] ret_from_fork+0x1e/0x28
[ 1.052316] ---[ end trace 42d703653fa60537 ]---
To reproduce:
# build kernel
cd linux
cp config-5.4.0-rc2-00007-gbf8e602186ec4 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
3e05ad861b: vm-scalability.median -86.3% regression
by kernel test robot
Greetings,
FYI, we noticed a -86.3% regression of vm-scalability.median due to commit:
commit: 3e05ad861b9b2b61a1cbfd0d98951579eb3c85e0 ("ZEN: Implement zen-tune v5.4")
https://github.com/zen-kernel/zen-kernel 5.4/zen-sauce
in testcase: vm-scalability
on test machine: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
with following parameters:
thp_enabled: never
thp_defrag: never
nr_task: 8
nr_pmem: 4
priority: 1
test: swap-w-seq
cpufreq_governor: performance
ucode: 0x16
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
In addition to that, the commit also has significant impact on the following tests:
+------------------+------------------------------------------------------------------------+
| testcase: change | fsmark: fsmark.files_per_sec 1609.1% improvement |
| test machine | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | disk=1SSD |
| | filesize=16MB |
| | fs=xfs |
| | iterations=1x |
| | nr_directories=16d |
| | nr_files_per_directory=256fpd |
| | nr_threads=32t |
| | sync_method=NoSync |
| | test_size=60G |
| | ucode=0x500002b |
+------------------+------------------------------------------------------------------------+
| testcase: change | fsmark: fsmark.files_per_sec 832.4% improvement |
| test machine | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | disk=1HDD |
| | filesize=16MB |
| | fs=f2fs |
| | iterations=1x |
| | nr_directories=16d |
| | nr_files_per_directory=256fpd |
| | nr_threads=32t |
| | sync_method=NoSync |
| | test_size=60G |
| | ucode=0x500002b |
+------------------+------------------------------------------------------------------------+
| testcase: change | fio-basic: fio.write_bw_MBps -15.4% regression |
| test machine | 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 64G memory |
| test parameters | bs=4k |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | fs=ext4 |
| | ioengine=libaio |
| | nr_task=100% |
| | runtime=300s |
| | rw=write |
| | test_size=128G |
| | ucode=0x42e |
+------------------+------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median -93.8% regression |
| test machine | 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory |
| test parameters | cpufreq_governor=performance |
| | nr_pmem=4 |
| | nr_task=64 |
| | priority=1 |
| | test=swap-w-seq |
| | thp_defrag=never |
| | thp_enabled=never |
| | ucode=0x16 |
+------------------+------------------------------------------------------------------------+
| testcase: change | fio-basic: fio.write_bw_MBps 228.8% improvement |
| test machine | 160 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 256G memory |
| test parameters | bs=4k |
| | cpufreq_governor=performance |
| | disk=1SSD |
| | fs=ext4 |
| | ioengine=sync |
| | nr_task=100% |
| | runtime=300s |
| | rw=write |
| | test_size=128G |
| | ucode=0xb000038 |
+------------------+------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_pmem/nr_task/priority/rootfs/tbox_group/test/testcase/thp_defrag/thp_enabled/ucode:
gcc-7/performance/x86_64-rhel-7.6/4/8/1/debian-x86_64-2019-11-14.cgz/lkp-hsw-4ex1/swap-w-seq/vm-scalability/never/never/0x16
commit:
cba81e70bf (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl3cSywACgkQpekojE+k")
3e05ad861b (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl3cSywACgkQpekojE+k")
cba81e70bf716d85 3e05ad861b9b2b61a1cbfd0d989
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 100% 4:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
4:4 -100% :4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
1:4 -25% :4 dmesg.WARNING:at_ip___perf_sw_event/0x
:4 25% 1:4 dmesg.page_allocation_failure:order:#,mode:#(GFP_KERNEL),nodemask=(null),cpuset=/,mems_allowed=
%stddev %change %stddev
\ | \
46.22 ± 8% -87.4% 5.83 ± 5% vm-scalability.free_time
231850 -86.3% 31877 vm-scalability.median
1827686 -86.1% 254237 vm-scalability.throughput
294.40 +4.8% 308.51 vm-scalability.time.elapsed_time
294.40 +4.8% 308.51 vm-scalability.time.elapsed_time.max
397527 ± 5% -82.7% 68834 ± 10% vm-scalability.time.involuntary_context_switches
4139804 ± 10% -31.7% 2825636 vm-scalability.time.maximum_resident_set_size
98485423 -93.1% 6825583 vm-scalability.time.minor_page_faults
760.00 -58.0% 319.33 ± 5% vm-scalability.time.percent_of_cpu_this_job_got
1665 ± 2% -42.8% 952.65 ± 5% vm-scalability.time.system_time
573.54 -94.0% 34.29 ± 2% vm-scalability.time.user_time
19251 ± 9% -45.4% 10505 ± 16% vm-scalability.time.voluntary_context_switches
4.432e+08 -82.7% 76514024 vm-scalability.workload
2756753 ± 37% +221.9% 8873554 ± 9% cpuidle.C1.time
8563 ± 35% +260.4% 30860 ± 13% cpuidle.POLL.usage
91.69 -3.2% 88.74 iostat.cpu.idle
6.81 +59.6% 10.88 ± 6% iostat.cpu.system
1.50 -74.3% 0.38 ± 7% iostat.cpu.user
0.11 ± 24% -0.1 0.02 ±104% mpstat.cpu.all.soft%
6.74 +4.6 11.37 ± 7% mpstat.cpu.all.sys%
1.50 -1.1 0.41 ± 11% mpstat.cpu.all.usr%
0.01 ± 57% +0.0 0.02 turbostat.C1%
2.452e+08 ± 6% -61.6% 94135899 turbostat.IRQ
0.49 ± 45% -71.4% 0.14 ± 5% turbostat.Pkg%pc2
69.17 -15.0% 58.83 turbostat.RAMWatt
3.485e+09 ± 2% -50.7% 1.717e+09 ± 7% perf-node.node-load-misses
1.439e+09 ± 3% -92.3% 1.111e+08 ± 10% perf-node.node-loads
28.50 -81.3% 5.33 ± 8% perf-node.node-local-load-ratio
30.75 ± 5% -38.2% 19.00 ± 4% perf-node.node-local-store-ratio
1.386e+09 ± 4% -48.5% 7.143e+08 perf-node.node-store-misses
6.308e+08 ± 9% -72.4% 1.743e+08 ± 4% perf-node.node-stores
91.00 -2.2% 89.00 vmstat.cpu.id
123.50 ± 19% +37.7% 170.00 ± 13% vmstat.memory.buff
1.779e+08 -96.4% 6328966 ± 7% vmstat.memory.swpd
11.00 +27.3% 14.00 ± 11% vmstat.procs.r
199.75 -59.1% 81.67 ± 46% vmstat.swap.si
1255814 -97.0% 37410 ± 5% vmstat.swap.so
6094 ± 4% +60.1% 9759 ± 8% vmstat.system.cs
5445 -38.4% 3352 ± 71% syscalls.sys_mmap.min
1.93e+08 ± 10% -9.1e+07 1.022e+08 ± 71% syscalls.sys_mmap.noise.100%
3.126e+08 ± 5% -1.3e+08 1.862e+08 ± 71% syscalls.sys_mmap.noise.2%
2.91e+08 ± 5% -1.2e+08 1.716e+08 ± 71% syscalls.sys_mmap.noise.25%
3.116e+08 ± 5% -1.3e+08 1.855e+08 ± 71% syscalls.sys_mmap.noise.5%
2.617e+08 ± 6% -1.1e+08 1.526e+08 ± 71% syscalls.sys_mmap.noise.50%
2.292e+08 ± 8% -1e+08 1.289e+08 ± 70% syscalls.sys_mmap.noise.75%
2.559e+09 +365.3% 1.191e+10 ± 54% syscalls.sys_read.max
10679 ± 7% -42.8% 6111 ± 51% syscalls.sys_read.med
1960 -20.8% 1553 ± 8% syscalls.sys_read.min
18303468 -36.8% 11568383 ± 3% meminfo.Active
18303386 -36.8% 11568239 ± 3% meminfo.Active(anon)
694.75 ± 22% +311.7% 2860 ± 56% meminfo.AnonHugePages
20960583 -39.3% 12713259 ± 4% meminfo.AnonPages
2658038 -27.8% 1919449 ± 10% meminfo.Inactive
2657850 -27.8% 1918713 ± 10% meminfo.Inactive(anon)
136970 -17.8% 112657 ± 2% meminfo.KReclaimable
20792 +20.8% 25125 ± 2% meminfo.Mapped
1011847 ± 5% +289.4% 3939966 ± 27% meminfo.MemAvailable
1186192 ± 5% +247.8% 4125935 ± 26% meminfo.MemFree
31594374 -9.3% 28654632 ± 3% meminfo.Memused
450849 -89.6% 46670 ± 5% meminfo.PageTables
136970 -17.8% 112657 ± 2% meminfo.SReclaimable
226454 +66.5% 376940 ± 4% meminfo.SUnreclaim
2315 ± 2% +3322.9% 79248 ± 51% meminfo.Shmem
363425 +34.7% 489598 ± 3% meminfo.Slab
102.00 +6.8e+05% 697034 ± 13% meminfo.SwapCached
3.248e+08 +52.4% 4.952e+08 meminfo.SwapFree
111099 +540.3% 711342 ± 29% meminfo.max_used_kB
20266653 ± 9% -89.4% 2143184 ± 12% numa-numastat.node0.local_node
17092726 ± 78% -93.6% 1092549 ± 71% numa-numastat.node0.numa_foreign
20289644 ± 9% -89.3% 2166495 ± 12% numa-numastat.node0.numa_hit
4370210 ± 39% -87.7% 537112 ± 52% numa-numastat.node0.numa_miss
4393242 ± 39% -87.2% 560464 ± 51% numa-numastat.node0.other_node
19284050 ± 30% -91.3% 1673025 ± 6% numa-numastat.node1.local_node
15320346 ± 37% -97.9% 323994 ± 55% numa-numastat.node1.numa_foreign
19308182 ± 30% -91.2% 1691098 ± 7% numa-numastat.node1.numa_hit
7140678 ± 77% -83.7% 1164553 ± 12% numa-numastat.node1.numa_miss
7164810 ± 77% -83.5% 1182625 ± 11% numa-numastat.node1.other_node
11814274 ± 16% -81.2% 2219724 ± 9% numa-numastat.node2.local_node
4407066 ± 28% -83.0% 750305 ±100% numa-numastat.node2.numa_foreign
11838498 ± 16% -81.0% 2247034 ± 8% numa-numastat.node2.numa_hit
14628324 ± 16% -96.2% 561499 ± 56% numa-numastat.node2.numa_miss
14652548 ± 16% -96.0% 588810 ± 55% numa-numastat.node2.other_node
13847898 ± 7% -85.0% 2071206 ± 18% numa-numastat.node3.local_node
2000384 ± 23% -60.3% 793185 ± 74% numa-numastat.node3.numa_foreign
13877156 ± 7% -84.9% 2091059 ± 18% numa-numastat.node3.numa_hit
12681311 ± 8% -94.5% 696869 ± 61% numa-numastat.node3.numa_miss
12710569 ± 8% -94.4% 716723 ± 58% numa-numastat.node3.other_node
77617 ± 3% +1659.8% 1365927 ± 7% slabinfo.Acpi-Parse.active_objs
1071 ± 3% +1721.7% 19510 ± 6% slabinfo.Acpi-Parse.active_slabs
78208 ± 3% +1721.2% 1424296 ± 6% slabinfo.Acpi-Parse.num_objs
1071 ± 3% +1721.7% 19510 ± 6% slabinfo.Acpi-Parse.num_slabs
308.50 ± 7% -32.5% 208.33 ± 11% slabinfo.biovec-128.active_objs
308.50 ± 7% -32.5% 208.33 ± 11% slabinfo.biovec-128.num_objs
1019 ± 6% +11.2% 1133 ± 4% slabinfo.mnt_cache.active_objs
1019 ± 6% +11.2% 1133 ± 4% slabinfo.mnt_cache.num_objs
227.75 ± 29% -66.2% 77.00 ± 15% slabinfo.nfs_read_data.active_objs
227.75 ± 29% -66.2% 77.00 ± 15% slabinfo.nfs_read_data.num_objs
122256 ± 2% -34.3% 80273 ± 6% slabinfo.radix_tree_node.active_objs
2631 ± 5% -41.4% 1541 ± 4% slabinfo.radix_tree_node.active_slabs
123650 -30.2% 86332 ± 4% slabinfo.radix_tree_node.num_objs
2631 ± 5% -41.4% 1541 ± 4% slabinfo.radix_tree_node.num_slabs
3920 +37.7% 5397 slabinfo.scsi_sense_cache.active_objs
3920 +37.7% 5397 slabinfo.scsi_sense_cache.num_objs
1278 ± 2% -30.0% 895.00 ± 6% slabinfo.skbuff_fclone_cache.active_objs
1278 ± 2% -30.0% 895.00 ± 6% slabinfo.skbuff_fclone_cache.num_objs
14232 ± 2% +8988.9% 1293549 ± 8% slabinfo.vmap_area.active_objs
233.75 ± 2% +8918.9% 21081 ± 6% slabinfo.vmap_area.active_slabs
15005 ± 2% +8892.0% 1349243 ± 6% slabinfo.vmap_area.num_objs
233.75 ± 2% +8918.9% 21081 ± 6% slabinfo.vmap_area.num_slabs
273.25 ± 11% -46.2% 147.00 ± 23% slabinfo.xfrm_state.active_objs
273.25 ± 11% -46.2% 147.00 ± 23% slabinfo.xfrm_state.num_objs
11339 ± 13% +28.9% 14616 ± 7% sched_debug.cfs_rq:/.exec_clock.avg
38562 ± 21% +218.2% 122703 ± 3% sched_debug.cfs_rq:/.exec_clock.max
8226 ± 28% +125.3% 18533 ± 3% sched_debug.cfs_rq:/.exec_clock.stddev
11965 ± 30% +123.9% 26793 ± 45% sched_debug.cfs_rq:/.min_vruntime.min
69332 ± 26% -43.1% 39465 ± 14% sched_debug.cfs_rq:/.min_vruntime.stddev
0.28 ± 17% +968.1% 2.94 ± 23% sched_debug.cfs_rq:/.nr_spread_over.avg
3.18 ± 13% +403.9% 16.00 ± 30% sched_debug.cfs_rq:/.nr_spread_over.max
0.66 ± 13% +466.5% 3.75 ± 31% sched_debug.cfs_rq:/.nr_spread_over.stddev
55.34 ± 10% -25.5% 41.23 ± 18% sched_debug.cfs_rq:/.runnable_load_avg.avg
864.71 ± 11% -15.7% 729.11 ± 7% sched_debug.cfs_rq:/.runnable_load_avg.max
184.52 ± 7% -19.5% 148.54 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.stddev
69340 ± 26% -43.1% 39488 ± 14% sched_debug.cfs_rq:/.spread0.stddev
128.04 ± 10% -17.9% 105.18 sched_debug.cfs_rq:/.util_est_enqueued.avg
291.97 ± 4% -15.9% 245.48 ± 2% sched_debug.cfs_rq:/.util_est_enqueued.stddev
940165 -46.9% 498861 sched_debug.cpu.avg_idle.avg
1079015 ± 7% -25.9% 799571 ± 12% sched_debug.cpu.avg_idle.max
162655 ± 9% -46.1% 87662 ± 6% sched_debug.cpu.avg_idle.stddev
183659 ± 6% +19.6% 219628 ± 6% sched_debug.cpu.clock.avg
183677 ± 6% +19.8% 219995 ± 5% sched_debug.cpu.clock.max
183639 ± 6% +19.4% 219208 ± 6% sched_debug.cpu.clock.min
10.78 ± 8% +2091.6% 236.33 ± 59% sched_debug.cpu.clock.stddev
183659 ± 6% +19.6% 219628 ± 6% sched_debug.cpu.clock_task.avg
183677 ± 6% +19.8% 219995 ± 5% sched_debug.cpu.clock_task.max
183639 ± 6% +19.4% 219208 ± 6% sched_debug.cpu.clock_task.min
10.78 ± 8% +2091.5% 236.34 ± 59% sched_debug.cpu.clock_task.stddev
6670 ± 6% -51.1% 3265 ± 5% sched_debug.cpu.curr->pid.max
930.74 ± 4% -14.2% 798.42 ± 2% sched_debug.cpu.curr->pid.stddev
500425 -46.3% 268486 sched_debug.cpu.max_idle_balance_cost.avg
543755 ± 8% -24.0% 413244 ± 14% sched_debug.cpu.max_idle_balance_cost.max
500000 -50.0% 250000 sched_debug.cpu.max_idle_balance_cost.min
4081 ±104% +474.1% 23435 ± 24% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 8% +871.7% 0.00 ± 57% sched_debug.cpu.next_balance.stddev
0.10 ± 7% +19.6% 0.12 ± 7% sched_debug.cpu.nr_running.avg
8136 ± 8% -28.3% 5830 ± 11% sched_debug.cpu.nr_switches.avg
4371 ± 17% -21.0% 3451 ± 9% sched_debug.cpu.nr_switches.stddev
0.01 ± 19% +436.2% 0.03 ± 56% sched_debug.cpu.nr_uninterruptible.avg
6180 ± 10% -37.0% 3891 ± 15% sched_debug.cpu.sched_count.avg
3922 ± 19% -23.7% 2992 ± 10% sched_debug.cpu.sched_count.stddev
982.40 ± 11% +55.2% 1524 ± 20% sched_debug.cpu.sched_goidle.avg
7520 ± 32% +66.4% 12515 ± 22% sched_debug.cpu.sched_goidle.max
1043 ± 19% +41.2% 1473 ± 10% sched_debug.cpu.sched_goidle.stddev
3068 ± 10% -36.5% 1946 ± 16% sched_debug.cpu.ttwu_count.avg
2168 ± 17% -32.1% 1472 ± 6% sched_debug.cpu.ttwu_count.stddev
1780 ± 15% -71.3% 510.64 ± 5% sched_debug.cpu.ttwu_local.avg
6397 ± 15% +68.1% 10753 ± 25% sched_debug.cpu.ttwu_local.max
204.80 ± 29% -62.2% 77.33 ± 16% sched_debug.cpu.ttwu_local.min
183640 ± 6% +19.4% 219189 ± 6% sched_debug.cpu_clk
179329 ± 7% +19.8% 214877 ± 6% sched_debug.ktime
188095 ± 6% +18.2% 222368 ± 5% sched_debug.sched_clk
24.00 -33.3% 16.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 -46.7% 1.60 sched_debug.sysctl_sched.sysctl_sched_min_granularity
4.00 -50.0% 2.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
4339533 ± 3% -39.4% 2629862 ± 3% numa-meminfo.node0.Active
4339511 ± 3% -39.4% 2629827 ± 3% numa-meminfo.node0.Active(anon)
238.50 ± 60% +1039.5% 2717 ± 76% numa-meminfo.node0.AnonHugePages
5006302 ± 2% -42.8% 2864276 ± 4% numa-meminfo.node0.AnonPages
255381 +76.2% 449876 ± 7% numa-meminfo.node0.FilePages
667538 -35.8% 428640 ± 4% numa-meminfo.node0.Inactive
667487 -35.8% 428507 ± 4% numa-meminfo.node0.Inactive(anon)
40002 ± 2% -18.1% 32761 ± 6% numa-meminfo.node0.KReclaimable
5528 ± 2% +95.1% 10784 ± 32% numa-meminfo.node0.Mapped
277493 ± 14% +264.0% 1010043 ± 26% numa-meminfo.node0.MemFree
7785521 -9.4% 7052972 ± 3% numa-meminfo.node0.MemUsed
107288 ± 2% -89.2% 11624 ± 6% numa-meminfo.node0.PageTables
40002 ± 2% -18.1% 32761 ± 6% numa-meminfo.node0.SReclaimable
76331 ± 16% +42.5% 108808 ± 11% numa-meminfo.node0.SUnreclaim
1224 ± 23% +1589.3% 20676 ± 45% numa-meminfo.node0.Shmem
4504812 ± 3% -35.3% 2914148 ± 5% numa-meminfo.node1.Active
4504759 ± 3% -35.3% 2914135 ± 5% numa-meminfo.node1.Active(anon)
5174175 ± 3% -37.3% 3245139 ± 5% numa-meminfo.node1.AnonPages
266325 ± 3% +59.8% 425548 ± 5% numa-meminfo.node1.FilePages
669572 -25.1% 501236 ± 6% numa-meminfo.node1.Inactive
669503 -25.2% 501121 ± 6% numa-meminfo.node1.Inactive(anon)
34365 ± 13% -27.9% 24794 ± 9% numa-meminfo.node1.KReclaimable
5348 ± 4% -13.1% 4649 ± 6% numa-meminfo.node1.KernelStack
5099 +68.7% 8604 ± 20% numa-meminfo.node1.Mapped
262692 ± 9% +360.1% 1208713 ± 22% numa-meminfo.node1.MemFree
7986874 -11.8% 7040853 ± 3% numa-meminfo.node1.MemUsed
115349 ± 2% -90.3% 11242 ± 8% numa-meminfo.node1.PageTables
34365 ± 13% -27.9% 24794 ± 9% numa-meminfo.node1.SReclaimable
58815 ± 18% +38.9% 81716 ± 10% numa-meminfo.node1.SUnreclaim
388.75 ± 45% +3941.8% 15712 ± 20% numa-meminfo.node1.Shmem
4747251 -37.5% 2967712 ± 5% numa-meminfo.node2.Active
4747245 -37.5% 2967679 ± 5% numa-meminfo.node2.Active(anon)
5411320 -39.6% 3270127 ± 6% numa-meminfo.node2.AnonPages
263555 ± 4% +68.9% 445205 numa-meminfo.node2.FilePages
663741 -27.9% 478645 ± 11% numa-meminfo.node2.Inactive
663692 -27.9% 478592 ± 11% numa-meminfo.node2.Inactive(anon)
5099 +37.1% 6992 ± 19% numa-meminfo.node2.Mapped
311551 ± 5% +243.1% 1068839 ± 19% numa-meminfo.node2.MemFree
7938015 -9.5% 7180727 ± 2% numa-meminfo.node2.MemUsed
113940 ± 2% -89.9% 11545 ± 5% numa-meminfo.node2.PageTables
46207 ± 11% +96.1% 90608 ± 14% numa-meminfo.node2.SUnreclaim
150.75 ± 58% +13355.6% 20284 ± 69% numa-meminfo.node2.Shmem
79484 ± 9% +49.3% 118674 ± 13% numa-meminfo.node2.Slab
4713363 -38.3% 2909141 ± 3% numa-meminfo.node3.Active
4713360 -38.3% 2909071 ± 3% numa-meminfo.node3.Active(anon)
5372503 -40.2% 3213815 ± 5% numa-meminfo.node3.AnonPages
263686 ± 4% +66.5% 439151 ± 4% numa-meminfo.node3.FilePages
659157 -28.0% 474492 ± 13% numa-meminfo.node3.Inactive
659137 -28.1% 474018 ± 13% numa-meminfo.node3.Inactive(anon)
20.00 ±173% +2268.3% 473.67 ± 92% numa-meminfo.node3.Inactive(file)
330423 ± 8% +223.9% 1070132 ± 15% numa-meminfo.node3.MemFree
7887992 -9.4% 7148283 ± 2% numa-meminfo.node3.MemUsed
114192 ± 3% -90.0% 11432 ± 3% numa-meminfo.node3.PageTables
45094 ± 16% +107.7% 93667 ± 6% numa-meminfo.node3.SUnreclaim
74729 ± 9% +61.0% 120342 ± 6% numa-meminfo.node3.Slab
14.81 ± 14% -18.0% 12.15 ± 16% perf-stat.i.MPKI
3.083e+09 -36.0% 1.974e+09 ± 6% perf-stat.i.branch-instructions
14.08 ± 13% -3.5 10.59 ± 11% perf-stat.i.cache-miss-rate%
23708784 ± 2% -58.9% 9743671 perf-stat.i.cache-misses
1.688e+08 ± 12% -44.9% 93081918 ± 12% perf-stat.i.cache-references
6087 ± 4% +86.7% 11363 perf-stat.i.context-switches
2.10 ± 8% +66.6% 3.50 ± 5% perf-stat.i.cpi
2.237e+10 ± 7% +22.5% 2.74e+10 ± 4% perf-stat.i.cpu-cycles
32.70 ± 36% +148.0% 81.10 ± 3% perf-stat.i.cpu-migrations
1059 ± 7% +173.1% 2892 ± 3% perf-stat.i.cycles-between-cache-misses
0.52 ± 10% -0.2 0.31 ± 25% perf-stat.i.dTLB-load-miss-rate%
18429072 ± 8% -67.2% 6035849 ± 20% perf-stat.i.dTLB-load-misses
3.46e+09 -38.9% 2.114e+09 ± 6% perf-stat.i.dTLB-loads
4537715 ± 10% -62.5% 1702789 ± 25% perf-stat.i.dTLB-store-misses
2.243e+09 ± 7% -61.9% 8.533e+08 ± 9% perf-stat.i.dTLB-stores
74.00 ± 10% -25.8 48.18 ± 30% perf-stat.i.iTLB-load-miss-rate%
3791998 ± 9% -67.7% 1224176 ± 32% perf-stat.i.iTLB-load-misses
1.277e+10 -34.9% 8.309e+09 ± 5% perf-stat.i.instructions
3458 ± 12% +129.5% 7935 ± 35% perf-stat.i.instructions-per-iTLB-miss
0.56 ± 7% -44.5% 0.31 ± 4% perf-stat.i.ipc
339728 -82.7% 58892 ± 5% perf-stat.i.minor-faults
72.00 +19.4 91.39 perf-stat.i.node-load-miss-rate%
11919754 ± 2% -48.9% 6091855 perf-stat.i.node-load-misses
5092184 ± 3% -90.8% 468188 perf-stat.i.node-loads
70.13 +5.6 75.71 perf-stat.i.node-store-miss-rate%
4781459 ± 5% -50.1% 2384157 perf-stat.i.node-store-misses
2215719 ± 7% -66.6% 740244 perf-stat.i.node-stores
339791 -82.6% 59017 ± 5% perf-stat.i.page-faults
14.27 ± 13% -4.3 9.96 ± 10% perf-stat.overall.cache-miss-rate%
1.75 ± 7% +86.6% 3.27 ± 5% perf-stat.overall.cpi
943.87 ± 7% +171.8% 2565 ± 6% perf-stat.overall.cycles-between-cache-misses
0.53 ± 9% -0.2 0.31 ± 30% perf-stat.overall.dTLB-load-miss-rate%
75.42 ± 8% -26.4 48.97 ± 31% perf-stat.overall.iTLB-load-miss-rate%
3396 ± 9% +106.8% 7023 ± 32% perf-stat.overall.instructions-per-iTLB-miss
0.57 ± 7% -46.5% 0.31 ± 5% perf-stat.overall.ipc
70.08 +23.3 93.42 perf-stat.overall.node-load-miss-rate%
68.36 +11.3 79.70 perf-stat.overall.node-store-miss-rate%
8424 ± 2% +237.1% 28402 ± 7% perf-stat.overall.path-length
3.071e+09 -48.0% 1.598e+09 ± 7% perf-stat.ps.branch-instructions
23613472 ± 2% -62.0% 8984355 ± 2% perf-stat.ps.cache-misses
1.681e+08 ± 12% -45.6% 91402164 ± 12% perf-stat.ps.cache-references
6061 ± 4% -9.3% 5498 ± 5% perf-stat.ps.context-switches
18350392 ± 8% -69.5% 5597280 ± 25% perf-stat.ps.dTLB-load-misses
3.446e+09 -47.9% 1.796e+09 ± 7% perf-stat.ps.dTLB-loads
4519736 ± 10% -68.1% 1439823 ± 29% perf-stat.ps.dTLB-store-misses
2.234e+09 ± 7% -61.1% 8.691e+08 ± 8% perf-stat.ps.dTLB-stores
3776596 ± 9% -70.0% 1133072 ± 35% perf-stat.ps.iTLB-load-misses
1.272e+10 -44.5% 7.063e+09 ± 7% perf-stat.ps.instructions
338377 -93.4% 22226 ± 3% perf-stat.ps.minor-faults
11871025 ± 2% -50.7% 5847074 perf-stat.ps.node-load-misses
5069993 ± 3% -91.9% 412375 ± 7% perf-stat.ps.node-loads
4764923 ± 5% -51.4% 2314661 ± 3% perf-stat.ps.node-store-misses
2206822 ± 7% -73.3% 589452 ± 3% perf-stat.ps.node-stores
338442 -93.4% 22286 ± 3% perf-stat.ps.page-faults
3.733e+12 ± 2% -41.8% 2.173e+12 ± 7% perf-stat.total.instructions
380861 ± 4% -96.9% 11776 ± 15% proc-vmstat.allocstall_movable
5280 ± 4% +45.0% 7655 ± 19% proc-vmstat.allocstall_normal
2834038 ± 55% -97.9% 60180 ± 33% proc-vmstat.compact_daemon_free_scanned
1346165 ± 47% -86.6% 180664 ±131% proc-vmstat.compact_daemon_migrate_scanned
3661 ± 95% -99.8% 8.33 ± 14% proc-vmstat.compact_daemon_wake
2929601 ± 52% -97.9% 60180 ± 33% proc-vmstat.compact_free_scanned
266752 ± 41% -95.2% 12762 ± 43% proc-vmstat.compact_isolated
1584266 ± 44% -88.6% 180664 ±131% proc-vmstat.compact_migrate_scanned
452.75 ± 82% -92.3% 34.67 ± 21% proc-vmstat.kswapd_high_wmark_hit_quickly
4277 ± 90% -99.1% 37.67 ± 6% proc-vmstat.kswapd_low_wmark_hit_quickly
4573083 -37.5% 2855944 ± 4% proc-vmstat.nr_active_anon
5236995 -40.0% 3142739 ± 5% proc-vmstat.nr_anon_pages
23889 ± 6% +743.7% 201559 ± 34% proc-vmstat.nr_dirty_background_threshold
47838 ± 6% +953.6% 504024 ± 34% proc-vmstat.nr_dirty_threshold
262238 +72.6% 452544 ± 3% proc-vmstat.nr_file_pages
299754 ± 5% +256.4% 1068440 ± 32% proc-vmstat.nr_free_pages
664056 -28.2% 476653 ± 11% proc-vmstat.nr_inactive_anon
46.75 ± 11% +305.7% 189.67 ± 34% proc-vmstat.nr_inactive_file
163.25 ± 3% +104.6% 334.00 ± 5% proc-vmstat.nr_isolated_anon
5315 +22.8% 6527 proc-vmstat.nr_mapped
112605 -89.7% 11591 ± 5% proc-vmstat.nr_page_table_pages
576.75 ± 2% +3306.5% 19647 ± 53% proc-vmstat.nr_shmem
34157 ± 2% -17.7% 28107 ± 2% proc-vmstat.nr_slab_reclaimable
56613 +66.4% 94220 ± 4% proc-vmstat.nr_slab_unreclaimable
53764745 -95.7% 2292926 ± 5% proc-vmstat.nr_vmscan_write
93294656 -95.2% 4496008 ± 4% proc-vmstat.nr_written
4571365 -37.6% 2854690 ± 4% proc-vmstat.nr_zone_active_anon
665319 -28.4% 476264 ± 12% proc-vmstat.nr_zone_inactive_anon
46.25 ± 10% +310.1% 189.67 ± 34% proc-vmstat.nr_zone_inactive_file
38820524 ± 18% -92.4% 2960034 ± 7% proc-vmstat.numa_foreign
1113 ± 47% +3310.4% 37975 ± 8% proc-vmstat.numa_hint_faults
202.75 ± 76% +6161.9% 12696 ± 36% proc-vmstat.numa_hint_faults_local
65344291 ± 11% -87.4% 8219622 ± 3% proc-vmstat.numa_hit
65243287 ± 11% -87.5% 8130191 ± 3% proc-vmstat.numa_local
38820524 ± 18% -92.4% 2960034 ± 7% proc-vmstat.numa_miss
38921529 ± 18% -92.2% 3049465 ± 7% proc-vmstat.numa_other
45302506 ± 12% -89.7% 4677268 ± 2% proc-vmstat.numa_pte_updates
4750 ± 89% -98.4% 77.33 ± 13% proc-vmstat.pageoutrun
1965706 ± 43% -99.7% 6400 ± 23% proc-vmstat.pgalloc_dma
10382445 ± 11% -95.1% 509165 ± 6% proc-vmstat.pgalloc_dma32
1.183e+08 -90.8% 10852761 ± 2% proc-vmstat.pgalloc_normal
94019717 -95.9% 3818207 ± 3% proc-vmstat.pgdeactivate
99516300 -92.9% 7100462 ± 2% proc-vmstat.pgfault
1.307e+08 -91.8% 10779080 ± 3% proc-vmstat.pgfree
12520 ± 10% -36.1% 7995 ± 54% proc-vmstat.pgmajfault
787.50 ±104% -98.4% 12.33 ± 78% proc-vmstat.pgmigrate_fail
129306 ± 41% -94.9% 6595 ± 38% proc-vmstat.pgmigrate_success
94620510 -96.0% 3821306 ± 3% proc-vmstat.pgrefill
42338990 ± 7% +425.9% 2.227e+08 ± 3% proc-vmstat.pgscan_direct
60752611 -97.0% 1796934 ± 12% proc-vmstat.pgscan_kswapd
32578411 -92.3% 2517344 ± 10% proc-vmstat.pgsteal_direct
60721385 -97.1% 1739303 ± 13% proc-vmstat.pgsteal_kswapd
12803 ± 10% -77.2% 2916 ± 49% proc-vmstat.pswpin
93298288 -98.2% 1690205 ± 7% proc-vmstat.pswpout
143.25 ± 8% +667.2% 1099 ± 96% proc-vmstat.swap_ra
52.00 +1790.4% 983.00 ±104% proc-vmstat.swap_ra_hit
1084114 ± 3% -42.0% 628577 numa-vmstat.node0.nr_active_anon
1250553 ± 2% -45.5% 681208 numa-vmstat.node0.nr_anon_pages
63849 +72.1% 109898 ± 8% numa-vmstat.node0.nr_file_pages
70516 ± 14% +342.3% 311920 ± 31% numa-vmstat.node0.nr_free_pages
166639 -40.7% 98872 ± 7% numa-vmstat.node0.nr_inactive_anon
1377 +93.3% 2662 ± 32% numa-vmstat.node0.nr_mapped
26837 ± 2% -89.5% 2817 ± 9% numa-vmstat.node0.nr_page_table_pages
310.00 ± 23% +1635.5% 5380 ± 42% numa-vmstat.node0.nr_shmem
9826 ± 2% -16.9% 8165 ± 6% numa-vmstat.node0.nr_slab_reclaimable
19082 ± 16% +39.8% 26668 ± 12% numa-vmstat.node0.nr_slab_unreclaimable
12469818 ± 2% -95.7% 530861 ± 8% numa-vmstat.node0.nr_vmscan_write
12469821 ± 2% -94.4% 701283 ± 11% numa-vmstat.node0.nr_written
1083727 ± 3% -42.0% 628283 numa-vmstat.node0.nr_zone_active_anon
166963 -40.8% 98803 ± 7% numa-vmstat.node0.nr_zone_inactive_anon
9322571 ± 84% -91.5% 788811 ± 78% numa-vmstat.node0.numa_foreign
12430629 ± 8% -83.7% 2025212 ± 16% numa-vmstat.node0.numa_hit
12406467 ± 8% -83.9% 2002001 ± 17% numa-vmstat.node0.numa_local
2617336 ± 39% -85.4% 382759 ± 54% numa-vmstat.node0.numa_miss
2641506 ± 39% -84.6% 405973 ± 51% numa-vmstat.node0.numa_other
1125698 ± 3% -38.7% 690330 ± 9% numa-vmstat.node1.nr_active_anon
1292897 ± 3% -40.8% 764966 ± 9% numa-vmstat.node1.nr_anon_pages
66581 ± 3% +56.0% 103898 ± 6% numa-vmstat.node1.nr_file_pages
66383 ± 10% +460.8% 372282 ± 29% numa-vmstat.node1.nr_free_pages
167219 -31.4% 114672 ± 11% numa-vmstat.node1.nr_inactive_anon
5345 ± 4% -13.0% 4650 ± 6% numa-vmstat.node1.nr_kernel_stack
1303 ± 3% +54.3% 2011 ± 3% numa-vmstat.node1.nr_mapped
28875 ± 2% -90.7% 2689 ± 10% numa-vmstat.node1.nr_page_table_pages
97.75 ± 46% +3982.9% 3991 ± 26% numa-vmstat.node1.nr_shmem
8425 ± 13% -28.1% 6057 ± 12% numa-vmstat.node1.nr_slab_reclaimable
13793388 -96.0% 558058 ± 8% numa-vmstat.node1.nr_vmscan_write
13793433 -94.9% 706238 ± 8% numa-vmstat.node1.nr_written
1125261 ± 3% -38.7% 690068 ± 9% numa-vmstat.node1.nr_zone_active_anon
167470 -31.6% 114561 ± 11% numa-vmstat.node1.nr_zone_inactive_anon
8969163 ± 41% -97.5% 220481 ± 63% numa-vmstat.node1.numa_foreign
12036607 ± 29% -88.3% 1410089 ± 4% numa-vmstat.node1.numa_hit
11938330 ± 29% -89.0% 1318629 ± 5% numa-vmstat.node1.numa_local
4043928 ± 82% -79.1% 845837 ± 22% numa-vmstat.node1.numa_miss
4142212 ± 80% -77.4% 937300 ± 18% numa-vmstat.node1.numa_other
1186166 -39.2% 720915 ± 7% numa-vmstat.node2.nr_active_anon
1352038 -41.8% 787352 ± 8% numa-vmstat.node2.nr_anon_pages
65889 ± 4% +64.5% 108363 ± 2% numa-vmstat.node2.nr_file_pages
78835 ± 5% +310.8% 323882 ± 20% numa-vmstat.node2.nr_free_pages
165769 -35.0% 107757 ± 21% numa-vmstat.node2.nr_inactive_anon
28517 ± 2% -90.2% 2802 ± 6% numa-vmstat.node2.nr_page_table_pages
37.50 ± 58% +13228.0% 4998 ± 69% numa-vmstat.node2.nr_shmem
8143 ± 7% -16.1% 6835 ± 11% numa-vmstat.node2.nr_slab_reclaimable
11551 ± 11% +90.3% 21980 ± 12% numa-vmstat.node2.nr_slab_unreclaimable
13669648 ± 2% -95.9% 557388 ± 10% numa-vmstat.node2.nr_vmscan_write
13669665 ± 2% -94.8% 708930 ± 8% numa-vmstat.node2.nr_written
1185741 -39.2% 720658 ± 7% numa-vmstat.node2.nr_zone_active_anon
166089 -35.2% 107681 ± 21% numa-vmstat.node2.nr_zone_inactive_anon
2623218 ± 27% -80.5% 511882 ±100% numa-vmstat.node2.numa_foreign
7670185 ± 14% -73.9% 2005004 ± 11% numa-vmstat.node2.numa_hit
7572087 ± 15% -74.8% 1904742 ± 12% numa-vmstat.node2.numa_local
8417300 ± 17% -95.5% 376219 ± 59% numa-vmstat.node2.numa_miss
8515407 ± 17% -94.4% 476484 ± 49% numa-vmstat.node2.numa_other
14.25 ± 80% -90.6% 1.33 ±141% numa-vmstat.node2.workingset_nodes
1177529 -40.0% 706088 ± 4% numa-vmstat.node3.nr_active_anon
1342172 -42.3% 774015 ± 6% numa-vmstat.node3.nr_anon_pages
65923 ± 4% +62.3% 107007 ± 4% numa-vmstat.node3.nr_file_pages
83660 ± 7% +295.0% 330457 ± 25% numa-vmstat.node3.nr_free_pages
164645 -34.7% 107507 ± 17% numa-vmstat.node3.nr_inactive_anon
28571 ± 3% -90.4% 2752 ± 6% numa-vmstat.node3.nr_page_table_pages
11273 ± 16% +101.0% 22656 ± 8% numa-vmstat.node3.nr_slab_unreclaimable
13895597 ± 2% -96.1% 543450 ± 10% numa-vmstat.node3.nr_vmscan_write
13895599 ± 2% -95.0% 691547 ± 10% numa-vmstat.node3.nr_written
1177056 -40.0% 705805 ± 4% numa-vmstat.node3.nr_zone_active_anon
165015 -34.9% 107434 ± 17% numa-vmstat.node3.nr_zone_inactive_anon
9066546 ± 6% -79.8% 1829027 ± 20% numa-vmstat.node3.numa_hit
8963338 ± 6% -80.6% 1736058 ± 21% numa-vmstat.node3.numa_local
7069916 ± 9% -93.1% 485153 ± 63% numa-vmstat.node3.numa_miss
7173131 ± 9% -91.9% 578125 ± 52% numa-vmstat.node3.numa_other
14.91 ± 8% -12.5 2.44 ± 32% perf-profile.calltrace.cycles-pp.do_access
12.01 ± 8% -9.6 2.44 ± 32% perf-profile.calltrace.cycles-pp.page_fault.do_access
11.57 ± 8% -9.1 2.44 ± 32% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault.do_access
11.52 ± 8% -9.1 2.44 ± 32% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault.do_access
10.96 ± 8% -8.5 2.44 ± 32% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.do_access
9.60 ± 8% -6.9 2.68 ± 21% perf-profile.calltrace.cycles-pp.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
9.52 ± 8% -6.8 2.68 ± 21% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault
9.12 ± 8% -6.4 2.68 ± 21% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault
10.84 ± 8% -4.7 6.10 ± 26% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
7.23 ± 9% -4.5 2.68 ± 21% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault
7.14 ± 9% -1.8 5.35 ± 26% perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma
1.02 ± 43% +0.6 1.59 ± 34% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
3.19 ± 9% +0.9 4.09 ± 9% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.12 ±173% +2.4 2.48 ± 60% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
3.90 ± 28% +2.4 6.26 ± 16% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +2.5 2.47 ± 61% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
7.84 ± 15% +2.5 10.33 ± 6% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
0.00 +2.5 2.51 ± 92% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.new_slab.___slab_alloc.__slab_alloc.kmem_cache_alloc_node
0.00 +2.7 2.67 ± 66% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_swap_page.__handle_mm_fault.handle_mm_fault
0.00 +2.7 2.67 ± 67% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.do_swap_page.__handle_mm_fault
0.00 +2.7 2.67 ± 67% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.do_swap_page
0.00 +2.7 2.68 ± 67% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_swap_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +2.8 2.76 ± 68% perf-profile.calltrace.cycles-pp.do_swap_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.00 +2.8 2.78 ± 79% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.new_slab.___slab_alloc.__slab_alloc
0.00 +2.8 2.78 ± 79% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.new_slab.___slab_alloc
0.00 +2.8 2.78 ± 79% perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.new_slab
8.70 ± 16% +3.1 11.79 ± 6% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
66.46 ± 3% +5.9 72.39 ± 5% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
66.99 ± 3% +5.9 72.93 ± 5% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
71.77 ± 3% +7.0 78.80 ± 3% perf-profile.calltrace.cycles-pp.secondary_startup_64
71.24 ± 3% +7.1 78.37 ± 3% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
71.29 ± 3% +7.2 78.44 ± 3% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
71.29 ± 3% +7.2 78.44 ± 3% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
3.36 ± 8% +13.6 17.00 ± 25% perf-profile.calltrace.cycles-pp.pageout.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
0.00 +15.7 15.65 ± 25% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.z3fold_alloc.zswap_frontswap_store.__frontswap_store
0.00 +16.4 16.44 ± 26% perf-profile.calltrace.cycles-pp._raw_spin_lock.z3fold_alloc.zswap_frontswap_store.__frontswap_store.swap_writepage
0.00 +16.5 16.54 ± 26% perf-profile.calltrace.cycles-pp.z3fold_alloc.zswap_frontswap_store.__frontswap_store.swap_writepage.pageout
0.00 +17.0 16.97 ± 25% perf-profile.calltrace.cycles-pp.zswap_frontswap_store.__frontswap_store.swap_writepage.pageout.shrink_page_list
0.00 +17.0 16.98 ± 25% perf-profile.calltrace.cycles-pp.__frontswap_store.swap_writepage.pageout.shrink_page_list.shrink_inactive_list
0.00 +17.0 16.99 ± 25% perf-profile.calltrace.cycles-pp.swap_writepage.pageout.shrink_page_list.shrink_inactive_list.shrink_node_memcg
15.46 ± 8% -13.0 2.44 ± 32% perf-profile.children.cycles-pp.do_access
12.15 ± 8% -5.3 6.89 ± 24% perf-profile.children.cycles-pp.page_fault
11.68 ± 8% -4.8 6.89 ± 24% perf-profile.children.cycles-pp.do_page_fault
11.64 ± 8% -4.8 6.89 ± 24% perf-profile.children.cycles-pp.__do_page_fault
11.07 ± 8% -4.2 6.90 ± 24% perf-profile.children.cycles-pp.handle_mm_fault
10.95 ± 8% -4.0 6.90 ± 24% perf-profile.children.cycles-pp.__handle_mm_fault
9.67 ± 8% -2.8 6.87 ± 26% perf-profile.children.cycles-pp.alloc_pages_vma
2.28 ± 9% -2.0 0.30 ± 48% perf-profile.children.cycles-pp.rmap_walk_anon
3.70 ± 19% -1.5 2.22 ± 37% perf-profile.children.cycles-pp.__softirqentry_text_start
1.54 ± 14% -1.5 0.08 ± 40% perf-profile.children.cycles-pp.rcu_core
1.27 ± 7% -0.9 0.33 ± 50% perf-profile.children.cycles-pp.page_referenced
0.67 ± 8% -0.5 0.12 ± 55% perf-profile.children.cycles-pp.page_vma_mapped_walk
0.67 ± 7% -0.5 0.18 ± 53% perf-profile.children.cycles-pp.page_referenced_one
0.83 ± 12% -0.3 0.50 ± 8% perf-profile.children.cycles-pp.native_irq_return_iret
0.55 ± 12% -0.3 0.23 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.36 ± 6% -0.2 0.12 ± 56% perf-profile.children.cycles-pp.isolate_lru_pages
0.29 ± 3% -0.2 0.09 ± 59% perf-profile.children.cycles-pp.inactive_list_is_low
0.41 ± 8% -0.1 0.26 ± 10% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.64 ± 5% -0.1 0.53 ± 7% perf-profile.children.cycles-pp.find_next_bit
0.21 ± 9% -0.1 0.14 ± 26% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.08 ± 19% +0.0 0.11 ± 11% perf-profile.children.cycles-pp.rcu_irq_exit
0.12 ± 7% +0.0 0.16 ± 10% perf-profile.children.cycles-pp.rcu_idle_exit
0.08 ± 20% +0.1 0.13 ± 23% perf-profile.children.cycles-pp.menu_reflect
0.21 ± 17% +0.1 0.29 ± 8% perf-profile.children.cycles-pp.native_sched_clock
0.22 ± 15% +0.1 0.31 ± 8% perf-profile.children.cycles-pp.sched_clock
0.28 ± 16% +0.1 0.39 ± 10% perf-profile.children.cycles-pp.sched_clock_cpu
0.29 ± 11% +0.1 0.40 ± 14% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.00 +0.1 0.15 ± 74% perf-profile.children.cycles-pp.dup_mm
0.63 ± 9% +0.2 0.78 ± 8% perf-profile.children.cycles-pp.__next_timer_interrupt
0.31 ± 21% +0.2 0.50 ± 20% perf-profile.children.cycles-pp.read_tsc
0.00 +0.2 0.21 ± 47% perf-profile.children.cycles-pp.mm_init
0.00 +0.2 0.21 ± 47% perf-profile.children.cycles-pp.pgd_alloc
0.35 ± 23% +0.2 0.57 ± 18% perf-profile.children.cycles-pp.native_write_msr
0.32 ± 29% +0.2 0.55 ± 18% perf-profile.children.cycles-pp.lapic_next_deadline
0.00 +0.3 0.28 ±100% perf-profile.children.cycles-pp.do_wp_page
0.00 +0.3 0.28 ±100% perf-profile.children.cycles-pp.wp_page_copy
0.00 +0.3 0.30 ± 50% perf-profile.children.cycles-pp.pipe_write
0.00 +0.3 0.34 ± 91% perf-profile.children.cycles-pp.__vmalloc_node
0.00 +0.3 0.34 ± 20% perf-profile.children.cycles-pp.lz4_compress_crypto
0.00 +0.3 0.34 ± 20% perf-profile.children.cycles-pp.LZ4_compress_default
0.00 +0.4 0.36 ± 87% perf-profile.children.cycles-pp.proc_reg_open
0.00 +0.4 0.36 ± 87% perf-profile.children.cycles-pp.single_open_size
0.00 +0.4 0.37 ± 84% perf-profile.children.cycles-pp.open64
0.00 +0.4 0.39 ± 80% perf-profile.children.cycles-pp.__vmalloc_node_range
0.00 +0.4 0.40 ± 74% perf-profile.children.cycles-pp.do_dentry_open
0.00 +0.4 0.41 ± 73% perf-profile.children.cycles-pp.do_sys_open
0.00 +0.4 0.41 ± 73% perf-profile.children.cycles-pp.do_filp_open
0.00 +0.4 0.41 ± 73% perf-profile.children.cycles-pp.path_openat
0.07 ± 17% +0.4 0.50 ± 70% perf-profile.children.cycles-pp.__libc_fork
0.03 ±105% +0.4 0.48 ± 71% perf-profile.children.cycles-pp.__x64_sys_clone
0.00 +0.5 0.46 ± 76% perf-profile.children.cycles-pp.__get_free_pages
0.00 +0.5 0.46 ± 48% perf-profile.children.cycles-pp.ksys_write
0.00 +0.5 0.46 ± 48% perf-profile.children.cycles-pp.vfs_write
0.00 +0.5 0.46 ± 48% perf-profile.children.cycles-pp.new_sync_write
0.00 +0.8 0.80 ± 56% perf-profile.children.cycles-pp.disk_check_events
0.00 +0.8 0.80 ± 56% perf-profile.children.cycles-pp.sr_block_check_events
0.00 +0.8 0.80 ± 56% perf-profile.children.cycles-pp.cdrom_check_events
0.00 +0.8 0.80 ± 56% perf-profile.children.cycles-pp.sr_check_events
0.00 +0.8 0.80 ± 56% perf-profile.children.cycles-pp.__scsi_execute
0.00 +0.8 0.80 ± 56% perf-profile.children.cycles-pp.blk_rq_map_kern
0.00 +0.8 0.80 ± 56% perf-profile.children.cycles-pp.bio_copy_kern
3.24 ± 9% +0.9 4.16 ± 10% perf-profile.children.cycles-pp.menu_select
0.00 +1.2 1.23 ± 79% perf-profile.children.cycles-pp.poll
0.00 +1.5 1.47 ± 67% perf-profile.children.cycles-pp.__x64_sys_poll
0.00 +1.5 1.47 ± 67% perf-profile.children.cycles-pp.do_sys_poll
0.03 ±105% +1.6 1.60 ± 59% perf-profile.children.cycles-pp._do_fork
0.03 ±102% +1.6 1.60 ± 59% perf-profile.children.cycles-pp.copy_process
0.42 ± 15% +2.1 2.48 ± 60% perf-profile.children.cycles-pp.worker_thread
0.39 ± 18% +2.1 2.47 ± 61% perf-profile.children.cycles-pp.process_one_work
8.43 ± 14% +2.1 10.57 ± 6% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
4.18 ± 26% +2.3 6.45 ± 16% perf-profile.children.cycles-pp.hrtimer_interrupt
9.03 ± 14% +2.3 11.37 ± 6% perf-profile.children.cycles-pp.apic_timer_interrupt
0.51 ± 10% +2.5 3.03 ± 67% perf-profile.children.cycles-pp.__slab_alloc
0.51 ± 10% +2.5 3.03 ± 67% perf-profile.children.cycles-pp.___slab_alloc
0.46 ± 11% +2.6 3.03 ± 67% perf-profile.children.cycles-pp.new_slab
0.00 +2.7 2.74 ± 79% perf-profile.children.cycles-pp.kmem_cache_alloc_node
0.00 +3.5 3.47 ± 49% perf-profile.children.cycles-pp.do_swap_page
0.23 ± 11% +4.1 4.31 ± 53% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
0.23 ± 11% +4.1 4.31 ± 53% perf-profile.children.cycles-pp.do_syscall_64
7.37 ± 8% +4.8 12.21 ± 17% perf-profile.children.cycles-pp.try_to_free_pages
7.28 ± 8% +4.9 12.21 ± 17% perf-profile.children.cycles-pp.do_try_to_free_pages
15.73 ± 8% +5.1 20.87 ± 14% perf-profile.children.cycles-pp.shrink_node
15.14 ± 8% +5.7 20.87 ± 14% perf-profile.children.cycles-pp.shrink_node_memcg
67.41 ± 3% +5.8 73.25 ± 5% perf-profile.children.cycles-pp.cpuidle_enter_state
67.43 ± 3% +5.8 73.27 ± 5% perf-profile.children.cycles-pp.cpuidle_enter
13.75 ± 8% +7.0 20.76 ± 14% perf-profile.children.cycles-pp.shrink_inactive_list
13.48 ± 8% +7.0 20.50 ± 14% perf-profile.children.cycles-pp.shrink_page_list
71.77 ± 3% +7.0 78.80 ± 3% perf-profile.children.cycles-pp.secondary_startup_64
71.77 ± 3% +7.0 78.80 ± 3% perf-profile.children.cycles-pp.cpu_startup_entry
71.78 ± 3% +7.0 78.83 ± 3% perf-profile.children.cycles-pp.do_idle
71.29 ± 3% +7.2 78.44 ± 3% perf-profile.children.cycles-pp.start_secondary
3.38 ± 8% +16.5 19.90 ± 13% perf-profile.children.cycles-pp.pageout
0.42 ± 25% +18.0 18.45 ± 13% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
1.06 ± 9% +18.5 19.57 ± 13% perf-profile.children.cycles-pp._raw_spin_lock
0.00 +19.4 19.39 ± 13% perf-profile.children.cycles-pp.z3fold_alloc
0.39 ± 8% +19.5 19.89 ± 13% perf-profile.children.cycles-pp.swap_writepage
0.27 ± 10% +19.6 19.88 ± 13% perf-profile.children.cycles-pp.__frontswap_store
0.10 ± 11% +19.8 19.87 ± 13% perf-profile.children.cycles-pp.zswap_frontswap_store
0.83 ± 12% -0.3 0.50 ± 8% perf-profile.self.cycles-pp.native_irq_return_iret
0.40 ± 10% -0.3 0.08 ± 53% perf-profile.self.cycles-pp.page_vma_mapped_walk
0.41 ± 10% -0.2 0.21 ± 3% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.20 ± 13% -0.1 0.10 ± 53% perf-profile.self.cycles-pp.shrink_page_list
0.54 ± 6% -0.1 0.45 ± 10% perf-profile.self.cycles-pp.find_next_bit
0.08 ± 19% +0.0 0.11 ± 11% perf-profile.self.cycles-pp.rcu_irq_exit
0.11 ± 8% +0.0 0.14 ± 17% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.04 ± 58% +0.1 0.09 ± 25% perf-profile.self.cycles-pp.menu_reflect
0.19 ± 17% +0.1 0.27 ± 6% perf-profile.self.cycles-pp.native_sched_clock
0.29 ± 21% +0.2 0.47 ± 21% perf-profile.self.cycles-pp.read_tsc
0.34 ± 24% +0.2 0.57 ± 18% perf-profile.self.cycles-pp.native_write_msr
0.00 +0.3 0.32 ± 19% perf-profile.self.cycles-pp.LZ4_compress_default
0.79 ± 9% +0.3 1.12 ± 23% perf-profile.self.cycles-pp._raw_spin_lock
1.05 ± 15% +0.5 1.59 ± 9% perf-profile.self.cycles-pp.menu_select
0.41 ± 25% +17.8 18.24 ± 13% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
122606 ± 39% -84.8% 18591 ± 7% softirqs.CPU0.RCU
203038 ± 29% -88.5% 23369 ± 14% softirqs.CPU1.RCU
150480 ± 40% -87.3% 19153 ± 12% softirqs.CPU10.RCU
98126 ± 26% -81.1% 18538 ± 13% softirqs.CPU100.RCU
93426 ± 14% -80.7% 18029 ± 4% softirqs.CPU101.RCU
77015 ± 9% -76.6% 18022 ± 9% softirqs.CPU102.RCU
99909 ± 11% -81.1% 18923 ± 15% softirqs.CPU103.RCU
93122 ± 13% -81.3% 17437 ± 13% softirqs.CPU104.RCU
78975 ± 7% -76.6% 18480 ± 8% softirqs.CPU105.RCU
87946 ± 13% -78.9% 18550 ± 6% softirqs.CPU106.RCU
33409 ± 11% +15.7% 38663 ± 3% softirqs.CPU106.SCHED
115027 ± 20% -82.9% 19648 ± 6% softirqs.CPU107.RCU
81078 ± 18% -66.9% 26850 ± 24% softirqs.CPU108.RCU
36747 ± 4% -11.9% 32386 ± 9% softirqs.CPU108.SCHED
69297 ± 15% -75.6% 16921 ± 4% softirqs.CPU109.RCU
165572 ± 34% -88.3% 19328 ± 12% softirqs.CPU11.RCU
69818 ± 21% -73.9% 18202 ± 9% softirqs.CPU110.RCU
80920 ± 28% -79.8% 16354 ± 8% softirqs.CPU111.RCU
79628 ± 10% -78.1% 17463 ± 7% softirqs.CPU112.RCU
72148 ± 6% -75.2% 17882 ± 6% softirqs.CPU113.RCU
75531 ± 22% -75.9% 18175 ± 9% softirqs.CPU114.RCU
72033 ± 17% -74.9% 18113 ± 6% softirqs.CPU115.RCU
96564 ± 20% -82.7% 16672 ± 4% softirqs.CPU116.RCU
71933 ± 6% -74.0% 18715 ± 11% softirqs.CPU117.RCU
75370 ± 18% -75.3% 18640 ± 15% softirqs.CPU118.RCU
87992 ± 20% -78.5% 18945 ± 10% softirqs.CPU119.RCU
128382 ± 23% -84.5% 19897 ± 14% softirqs.CPU12.RCU
69381 ± 25% -74.0% 18022 ± 6% softirqs.CPU120.RCU
80626 ± 22% -76.6% 18842 ± 8% softirqs.CPU121.RCU
70946 ± 19% -71.1% 20518 ± 12% softirqs.CPU122.RCU
62063 ± 12% -66.8% 20631 ± 4% softirqs.CPU123.RCU
75868 ± 20% -76.2% 18022 ± 4% softirqs.CPU124.RCU
81495 ± 13% -76.8% 18935 ± 4% softirqs.CPU125.RCU
70058 ± 6% -75.0% 17531 ± 15% softirqs.CPU126.RCU
81325 ± 26% -79.8% 16436 ± 15% softirqs.CPU127.RCU
86220 ± 33% -80.8% 16592 ± 14% softirqs.CPU128.RCU
68686 ± 20% -77.5% 15475 ± 7% softirqs.CPU129.RCU
132884 ± 33% -85.3% 19578 ± 11% softirqs.CPU13.RCU
74043 ± 25% -75.5% 18144 ± 13% softirqs.CPU130.RCU
59391 ± 14% -48.1% 30845 ± 59% softirqs.CPU131.RCU
81171 ± 22% -79.2% 16866 ± 12% softirqs.CPU132.RCU
93115 ± 24% -81.5% 17209 ± 5% softirqs.CPU133.RCU
87047 ± 12% -81.2% 16340 ± 2% softirqs.CPU134.RCU
69785 ± 19% -76.8% 16158 ± 2% softirqs.CPU135.RCU
95661 ± 40% -82.5% 16781 ± 4% softirqs.CPU136.RCU
70503 ± 9% -76.3% 16692 ± 7% softirqs.CPU137.RCU
75369 ± 14% -77.7% 16824 ± 5% softirqs.CPU138.RCU
79699 ± 21% -78.2% 17410 ± 5% softirqs.CPU139.RCU
136157 ± 39% -83.1% 23004 ± 22% softirqs.CPU14.RCU
76216 ± 26% -78.5% 16395 softirqs.CPU140.RCU
90114 ± 27% -78.3% 19521 ± 19% softirqs.CPU141.RCU
65557 ± 14% -70.4% 19378 ± 17% softirqs.CPU142.RCU
90425 ± 12% -81.1% 17051 ± 2% softirqs.CPU143.RCU
146804 ± 31% -85.6% 21205 ± 17% softirqs.CPU15.RCU
176665 ± 42% -89.5% 18487 ± 6% softirqs.CPU16.RCU
166802 ± 40% -89.8% 17010 ± 5% softirqs.CPU17.RCU
185563 ± 17% -87.3% 23609 ± 18% softirqs.CPU18.RCU
141768 ± 26% -83.3% 23614 ± 22% softirqs.CPU19.RCU
156993 ± 33% -87.8% 19088 ± 10% softirqs.CPU2.RCU
141610 ± 16% -85.6% 20417 ± 14% softirqs.CPU20.RCU
148436 ± 24% -87.4% 18663 ± 2% softirqs.CPU21.RCU
139031 ± 31% -85.9% 19632 ± 4% softirqs.CPU22.RCU
146175 ± 26% -85.5% 21181 ± 12% softirqs.CPU23.RCU
31417 ± 8% +15.9% 36417 ± 5% softirqs.CPU23.SCHED
127829 ± 20% -84.2% 20230 ± 12% softirqs.CPU24.RCU
116097 ± 28% -81.8% 21132 ± 9% softirqs.CPU25.RCU
139619 ± 3% -86.3% 19168 softirqs.CPU26.RCU
135700 ± 20% -85.4% 19847 ± 8% softirqs.CPU27.RCU
32348 ± 3% +12.2% 36305 ± 3% softirqs.CPU27.SCHED
121814 ± 20% -85.1% 18147 ± 3% softirqs.CPU28.RCU
126002 ± 17% -85.0% 18951 ± 5% softirqs.CPU29.RCU
160372 ± 27% -87.1% 20621 ± 9% softirqs.CPU3.RCU
118673 ± 18% -85.3% 17488 ± 5% softirqs.CPU30.RCU
144613 ± 17% -86.3% 19837 ± 11% softirqs.CPU31.RCU
32959 ± 2% +13.1% 37286 ± 3% softirqs.CPU31.SCHED
123185 ± 27% -84.9% 18635 ± 5% softirqs.CPU32.RCU
127766 ± 29% -85.6% 18430 ± 8% softirqs.CPU33.RCU
123713 ± 28% -85.5% 17984 ± 4% softirqs.CPU34.RCU
157712 ± 22% -87.8% 19239 softirqs.CPU35.RCU
33338 ± 5% +12.6% 37524 ± 3% softirqs.CPU35.SCHED
120549 ± 22% -82.0% 21718 ± 4% softirqs.CPU36.RCU
31740 ± 6% +12.3% 35630 ± 5% softirqs.CPU36.SCHED
116845 ± 15% -85.2% 17268 ± 5% softirqs.CPU37.RCU
95905 ± 25% -81.0% 18236 ± 14% softirqs.CPU38.RCU
101435 ± 17% -82.3% 17940 ± 8% softirqs.CPU39.RCU
144406 ± 35% -86.1% 20108 ± 13% softirqs.CPU4.RCU
136655 ± 16% -87.2% 17550 ± 8% softirqs.CPU40.RCU
33988 ± 5% +12.2% 38130 ± 2% softirqs.CPU40.SCHED
83889 ± 21% -77.5% 18878 softirqs.CPU41.RCU
101687 ± 26% -80.7% 19622 ± 6% softirqs.CPU42.RCU
111109 ± 28% -83.3% 18601 ± 3% softirqs.CPU43.RCU
103519 ± 25% -82.4% 18187 ± 7% softirqs.CPU44.RCU
103654 ± 26% -78.7% 22042 ± 10% softirqs.CPU45.RCU
98151 ± 41% -79.5% 20146 ± 13% softirqs.CPU46.RCU
107163 ± 28% -82.8% 18463 ± 2% softirqs.CPU47.RCU
110162 ± 14% -82.5% 19289 ± 6% softirqs.CPU48.RCU
96132 ± 22% -81.3% 18018 ± 6% softirqs.CPU49.RCU
144339 ± 42% -85.8% 20501 ± 16% softirqs.CPU5.RCU
106013 ± 28% -79.9% 21333 ± 15% softirqs.CPU50.RCU
89289 ± 19% -77.3% 20308 ± 9% softirqs.CPU51.RCU
95463 ± 24% -77.5% 21482 ± 5% softirqs.CPU52.RCU
105183 ± 21% -83.0% 17871 ± 2% softirqs.CPU53.RCU
108932 ± 24% -79.9% 21897 ± 14% softirqs.CPU54.RCU
32507 ± 5% +9.0% 35428 ± 3% softirqs.CPU54.SCHED
125463 ± 49% -84.5% 19488 ± 19% softirqs.CPU55.RCU
109702 ± 14% -83.0% 18691 ± 20% softirqs.CPU56.RCU
98004 ± 36% -82.8% 16903 ± 3% softirqs.CPU57.RCU
98802 ± 32% -81.6% 18168 ± 6% softirqs.CPU58.RCU
86125 ± 25% -76.6% 20188 ± 22% softirqs.CPU59.RCU
142191 ± 30% -84.7% 21779 ± 13% softirqs.CPU6.RCU
91832 ± 18% -80.8% 17587 ± 8% softirqs.CPU60.RCU
95250 ± 24% -80.9% 18193 ± 8% softirqs.CPU61.RCU
110906 ± 28% -83.0% 18890 ± 10% softirqs.CPU62.RCU
96808 ± 14% -82.4% 17038 ± 2% softirqs.CPU63.RCU
103911 ± 18% -85.0% 15608 ± 3% softirqs.CPU64.RCU
111434 ± 13% -76.0% 26761 ± 49% softirqs.CPU65.RCU
120328 ± 30% -86.4% 16376 ± 7% softirqs.CPU66.RCU
114254 ± 17% -85.4% 16657 ± 10% softirqs.CPU67.RCU
94384 ± 22% -82.0% 16982 ± 4% softirqs.CPU68.RCU
103606 ± 24% -70.9% 30127 ± 46% softirqs.CPU69.RCU
152746 ± 18% -87.5% 19092 ± 8% softirqs.CPU7.RCU
88543 ± 9% -77.8% 19671 ± 23% softirqs.CPU70.RCU
91807 ± 14% -76.0% 22015 ± 31% softirqs.CPU71.RCU
91668 ± 17% -75.1% 22825 ± 19% softirqs.CPU72.RCU
122358 ± 29% -84.3% 19219 ± 6% softirqs.CPU73.RCU
119056 ± 6% -84.1% 18892 ± 5% softirqs.CPU74.RCU
76091 ± 12% -73.3% 20309 ± 24% softirqs.CPU75.RCU
91371 ± 17% -78.1% 20011 ± 8% softirqs.CPU76.RCU
87961 ± 18% -79.7% 17850 ± 8% softirqs.CPU77.RCU
101265 ± 25% -77.7% 22620 ± 29% softirqs.CPU78.RCU
111370 ± 13% -83.0% 18961 ± 9% softirqs.CPU79.RCU
138698 ± 41% -87.1% 17837 ± 7% softirqs.CPU8.RCU
95723 ± 36% -82.6% 16658 ± 8% softirqs.CPU80.RCU
96671 ± 21% -82.1% 17290 ± 10% softirqs.CPU81.RCU
94733 ± 30% -81.2% 17825 ± 9% softirqs.CPU82.RCU
100387 ± 30% -80.6% 19431 ± 4% softirqs.CPU83.RCU
80467 ± 23% -77.2% 18371 ± 7% softirqs.CPU84.RCU
103293 ± 28% -82.2% 18342 ± 13% softirqs.CPU85.RCU
85568 ± 16% -79.2% 17784 ± 4% softirqs.CPU86.RCU
104826 ± 29% -81.5% 19385 ± 2% softirqs.CPU87.RCU
117826 ± 29% -80.6% 22900 ± 9% softirqs.CPU88.RCU
100578 ± 23% -81.5% 18600 ± 9% softirqs.CPU89.RCU
175695 ± 36% -89.3% 18713 ± 13% softirqs.CPU9.RCU
84614 ± 29% -77.4% 19127 ± 11% softirqs.CPU90.RCU
94919 ± 18% -79.9% 19054 ± 21% softirqs.CPU91.RCU
71979 ± 3% -71.7% 20342 ± 10% softirqs.CPU92.RCU
91785 ± 33% -82.3% 16216 ± 5% softirqs.CPU93.RCU
85976 ± 21% -79.1% 17984 ± 3% softirqs.CPU94.RCU
91839 ± 16% -81.7% 16781 ± 7% softirqs.CPU95.RCU
86493 ± 29% -78.4% 18653 ± 11% softirqs.CPU96.RCU
95683 ± 7% -79.4% 19730 ± 16% softirqs.CPU97.RCU
83164 ± 22% -79.4% 17127 ± 6% softirqs.CPU98.RCU
80882 ± 12% -77.3% 18394 ± 8% softirqs.CPU99.RCU
15122287 ± 11% -81.8% 2749702 ± 4% softirqs.RCU
603.75 ±105% -72.1% 168.33 ± 8% interrupts.54:PCI-MSI.1572873-edge.eth0-TxRx-9
523.00 ± 79% -68.1% 166.67 ± 9% interrupts.59:PCI-MSI.1572878-edge.eth0-TxRx-14
81871075 -98.0% 1625185 ± 2% interrupts.CAL:Function_call_interrupts
849480 ± 68% -98.5% 12359 ± 26% interrupts.CPU0.CAL:Function_call_interrupts
916640 ± 68% -98.8% 11136 ± 28% interrupts.CPU0.TLB:TLB_shootdowns
1673791 ± 63% -99.2% 14087 ± 34% interrupts.CPU1.CAL:Function_call_interrupts
4570 ± 33% -49.8% 2295 ± 32% interrupts.CPU1.RES:Rescheduling_interrupts
1783806 ± 63% -99.3% 12954 ± 38% interrupts.CPU1.TLB:TLB_shootdowns
1056163 ± 65% -98.2% 19126 ± 63% interrupts.CPU10.CAL:Function_call_interrupts
1122817 ± 66% -98.4% 18041 ± 68% interrupts.CPU10.TLB:TLB_shootdowns
396.00 ± 44% +2610.9% 10735 ± 63% interrupts.CPU100.NMI:Non-maskable_interrupts
396.00 ± 44% +2610.9% 10735 ± 63% interrupts.CPU100.PMI:Performance_monitoring_interrupts
451819 ± 72% -98.1% 8554 ± 13% interrupts.CPU101.CAL:Function_call_interrupts
480717 ± 73% -98.5% 7315 ± 15% interrupts.CPU101.TLB:TLB_shootdowns
265399 ± 58% -97.9% 5696 ± 29% interrupts.CPU102.CAL:Function_call_interrupts
289.75 ± 28% +2381.2% 7189 ± 70% interrupts.CPU102.NMI:Non-maskable_interrupts
289.75 ± 28% +2381.2% 7189 ± 70% interrupts.CPU102.PMI:Performance_monitoring_interrupts
280959 ± 58% -98.4% 4447 ± 37% interrupts.CPU102.TLB:TLB_shootdowns
395.00 ± 14% +1306.2% 5554 ± 66% interrupts.CPU103.NMI:Non-maskable_interrupts
395.00 ± 14% +1306.2% 5554 ± 66% interrupts.CPU103.PMI:Performance_monitoring_interrupts
261286 ± 48% -98.7% 3280 ± 19% interrupts.CPU105.CAL:Function_call_interrupts
278056 ± 49% -99.3% 1967 ± 25% interrupts.CPU105.TLB:TLB_shootdowns
470198 ± 74% -98.5% 7167 ± 39% interrupts.CPU106.CAL:Function_call_interrupts
339.50 ± 27% +2637.9% 9295 ± 67% interrupts.CPU106.NMI:Non-maskable_interrupts
339.50 ± 27% +2637.9% 9295 ± 67% interrupts.CPU106.PMI:Performance_monitoring_interrupts
506596 ± 74% -98.8% 5949 ± 49% interrupts.CPU106.TLB:TLB_shootdowns
915159 ± 61% -99.0% 9525 ± 30% interrupts.CPU107.CAL:Function_call_interrupts
973233 ± 61% -99.1% 8357 ± 34% interrupts.CPU107.TLB:TLB_shootdowns
337201 ± 97% -94.8% 17569 ± 20% interrupts.CPU108.CAL:Function_call_interrupts
488.00 ± 51% +4125.8% 20621 ± 76% interrupts.CPU108.NMI:Non-maskable_interrupts
488.00 ± 51% +4125.8% 20621 ± 76% interrupts.CPU108.PMI:Performance_monitoring_interrupts
357853 ± 98% -95.4% 16619 ± 21% interrupts.CPU108.TLB:TLB_shootdowns
769.00 ± 13% +102.0% 1553 ± 38% interrupts.CPU109.RES:Rescheduling_interrupts
1370570 ± 62% -99.0% 14377 ± 50% interrupts.CPU11.CAL:Function_call_interrupts
1023 ± 66% +261.4% 3696 ± 53% interrupts.CPU11.NMI:Non-maskable_interrupts
1023 ± 66% +261.4% 3696 ± 53% interrupts.CPU11.PMI:Performance_monitoring_interrupts
1452507 ± 62% -99.1% 13360 ± 55% interrupts.CPU11.TLB:TLB_shootdowns
202682 ± 64% -92.8% 14671 ± 47% interrupts.CPU110.CAL:Function_call_interrupts
214672 ± 66% -93.7% 13608 ± 51% interrupts.CPU110.TLB:TLB_shootdowns
220346 ± 67% -94.5% 12066 ± 67% interrupts.CPU111.CAL:Function_call_interrupts
231609 ± 67% -95.3% 10836 ± 75% interrupts.CPU111.TLB:TLB_shootdowns
360536 ± 7% -96.9% 11207 ± 50% interrupts.CPU113.CAL:Function_call_interrupts
388026 ± 8% -97.4% 10044 ± 59% interrupts.CPU113.TLB:TLB_shootdowns
226426 ± 94% -95.6% 10070 ± 20% interrupts.CPU114.CAL:Function_call_interrupts
241420 ± 96% -96.3% 8853 ± 23% interrupts.CPU114.TLB:TLB_shootdowns
191959 ± 74% -94.2% 11192 ± 45% interrupts.CPU115.CAL:Function_call_interrupts
719.00 ± 41% +48.5% 1068 ± 12% interrupts.CPU115.RES:Rescheduling_interrupts
201336 ± 76% -95.0% 10008 ± 52% interrupts.CPU115.TLB:TLB_shootdowns
667.50 ± 13% +62.6% 1085 ± 26% interrupts.CPU117.RES:Rescheduling_interrupts
133001 ± 76% -88.1% 15791 ± 53% interrupts.CPU118.CAL:Function_call_interrupts
139621 ± 78% -89.4% 14864 ± 60% interrupts.CPU118.TLB:TLB_shootdowns
225506 ± 83% -94.5% 12418 ± 44% interrupts.CPU119.CAL:Function_call_interrupts
237295 ± 84% -95.3% 11205 ± 50% interrupts.CPU119.TLB:TLB_shootdowns
795307 ± 33% -98.3% 13685 ± 61% interrupts.CPU12.CAL:Function_call_interrupts
839259 ± 34% -98.5% 12571 ± 66% interrupts.CPU12.TLB:TLB_shootdowns
121059 ± 51% -91.6% 10168 ± 75% interrupts.CPU120.CAL:Function_call_interrupts
126869 ± 52% -92.9% 8976 ± 87% interrupts.CPU120.TLB:TLB_shootdowns
63281 ± 91% -86.7% 8391 ± 58% interrupts.CPU121.CAL:Function_call_interrupts
65227 ± 94% -88.7% 7352 ± 72% interrupts.CPU121.TLB:TLB_shootdowns
139835 ± 49% -90.7% 12954 ± 40% interrupts.CPU122.CAL:Function_call_interrupts
772.75 ± 33% +1116.2% 9398 ± 64% interrupts.CPU122.NMI:Non-maskable_interrupts
772.75 ± 33% +1116.2% 9398 ± 64% interrupts.CPU122.PMI:Performance_monitoring_interrupts
718.50 ± 15% +73.9% 1249 ± 16% interrupts.CPU122.RES:Rescheduling_interrupts
147558 ± 50% -92.0% 11827 ± 44% interrupts.CPU122.TLB:TLB_shootdowns
497.75 ± 22% +122.3% 1106 ± 10% interrupts.CPU123.RES:Rescheduling_interrupts
149678 ± 82% -94.6% 8147 ± 46% interrupts.CPU124.CAL:Function_call_interrupts
410.50 ± 46% +1774.7% 7695 ± 66% interrupts.CPU124.NMI:Non-maskable_interrupts
410.50 ± 46% +1774.7% 7695 ± 66% interrupts.CPU124.PMI:Performance_monitoring_interrupts
158193 ± 83% -95.6% 6929 ± 57% interrupts.CPU124.TLB:TLB_shootdowns
379553 ± 39% -95.9% 15621 ± 38% interrupts.CPU125.CAL:Function_call_interrupts
402724 ± 39% -96.4% 14560 ± 42% interrupts.CPU125.TLB:TLB_shootdowns
286100 ± 53% -95.8% 12114 ± 10% interrupts.CPU126.CAL:Function_call_interrupts
303685 ± 54% -96.4% 10959 ± 13% interrupts.CPU126.TLB:TLB_shootdowns
199563 ± 51% -94.0% 11896 ± 27% interrupts.CPU127.CAL:Function_call_interrupts
532.75 ± 61% +1691.0% 9541 ± 89% interrupts.CPU127.NMI:Non-maskable_interrupts
532.75 ± 61% +1691.0% 9541 ± 89% interrupts.CPU127.PMI:Performance_monitoring_interrupts
209264 ± 52% -94.9% 10740 ± 31% interrupts.CPU127.TLB:TLB_shootdowns
317414 ± 95% -97.3% 8414 ± 54% interrupts.CPU128.CAL:Function_call_interrupts
351005 ± 98% -97.9% 7203 ± 64% interrupts.CPU128.TLB:TLB_shootdowns
99448 ± 59% -89.8% 10118 ± 49% interrupts.CPU129.CAL:Function_call_interrupts
102781 ± 60% -91.3% 8937 ± 58% interrupts.CPU129.TLB:TLB_shootdowns
772971 ± 64% -98.4% 12140 ± 58% interrupts.CPU13.CAL:Function_call_interrupts
819963 ± 65% -98.7% 11029 ± 64% interrupts.CPU13.TLB:TLB_shootdowns
64152 ± 71% -86.6% 8594 ± 43% interrupts.CPU130.CAL:Function_call_interrupts
508.00 ± 48% +97.4% 1003 ± 33% interrupts.CPU130.RES:Rescheduling_interrupts
65483 ± 74% -88.7% 7372 ± 50% interrupts.CPU130.TLB:TLB_shootdowns
72850 ± 54% -89.1% 7964 ± 43% interrupts.CPU131.CAL:Function_call_interrupts
550.25 ± 39% +55.0% 853.00 ± 11% interrupts.CPU131.RES:Rescheduling_interrupts
74903 ± 56% -90.9% 6779 ± 52% interrupts.CPU131.TLB:TLB_shootdowns
261747 ± 52% -96.2% 9923 ± 45% interrupts.CPU132.CAL:Function_call_interrupts
295.50 ± 25% +3663.9% 11122 ± 85% interrupts.CPU132.NMI:Non-maskable_interrupts
295.50 ± 25% +3663.9% 11122 ± 85% interrupts.CPU132.PMI:Performance_monitoring_interrupts
277227 ± 52% -96.9% 8699 ± 52% interrupts.CPU132.TLB:TLB_shootdowns
286476 ± 69% -96.7% 9388 ± 55% interrupts.CPU134.CAL:Function_call_interrupts
303867 ± 70% -97.3% 8160 ± 66% interrupts.CPU134.TLB:TLB_shootdowns
145156 ±108% -94.3% 8338 ± 53% interrupts.CPU135.CAL:Function_call_interrupts
354.25 ± 48% +1738.2% 6511 ± 74% interrupts.CPU135.NMI:Non-maskable_interrupts
354.25 ± 48% +1738.2% 6511 ± 74% interrupts.CPU135.PMI:Performance_monitoring_interrupts
151225 ±109% -95.3% 7148 ± 64% interrupts.CPU135.TLB:TLB_shootdowns
392.25 ± 44% +3897.0% 15678 ± 70% interrupts.CPU137.NMI:Non-maskable_interrupts
392.25 ± 44% +3897.0% 15678 ± 70% interrupts.CPU137.PMI:Performance_monitoring_interrupts
199456 ± 45% -97.1% 5883 ± 65% interrupts.CPU138.CAL:Function_call_interrupts
217728 ± 47% -97.9% 4647 ± 86% interrupts.CPU138.TLB:TLB_shootdowns
294075 ±110% -96.5% 10197 ± 81% interrupts.CPU139.CAL:Function_call_interrupts
310749 ±111% -97.1% 9084 ± 95% interrupts.CPU139.TLB:TLB_shootdowns
523.00 ± 79% -68.1% 166.67 ± 9% interrupts.CPU14.59:PCI-MSI.1572878-edge.eth0-TxRx-14
745929 ± 67% -98.4% 12023 ± 58% interrupts.CPU14.CAL:Function_call_interrupts
2028 ± 45% -43.5% 1146 ± 22% interrupts.CPU14.RES:Rescheduling_interrupts
790069 ± 68% -98.6% 10856 ± 65% interrupts.CPU14.TLB:TLB_shootdowns
248766 ±108% -96.7% 8211 ± 51% interrupts.CPU141.CAL:Function_call_interrupts
260228 ±108% -97.3% 6995 ± 62% interrupts.CPU141.TLB:TLB_shootdowns
476.00 ± 40% +104.3% 972.33 ± 13% interrupts.CPU142.RES:Rescheduling_interrupts
586208 ± 42% -97.8% 12830 ± 72% interrupts.CPU143.CAL:Function_call_interrupts
623556 ± 42% -98.1% 11740 ± 81% interrupts.CPU143.TLB:TLB_shootdowns
992404 ± 57% -98.7% 12737 ± 38% interrupts.CPU15.CAL:Function_call_interrupts
1052603 ± 58% -98.9% 11616 ± 42% interrupts.CPU15.TLB:TLB_shootdowns
1677250 ± 59% -99.1% 14619 ± 54% interrupts.CPU16.CAL:Function_call_interrupts
1778293 ± 59% -99.2% 13492 ± 59% interrupts.CPU16.TLB:TLB_shootdowns
1223311 ± 61% -98.8% 15177 ± 51% interrupts.CPU17.CAL:Function_call_interrupts
2879 ± 36% -47.7% 1505 ± 14% interrupts.CPU17.RES:Rescheduling_interrupts
1295129 ± 62% -98.9% 14135 ± 55% interrupts.CPU17.TLB:TLB_shootdowns
1564988 ± 28% -99.3% 10674 ± 18% interrupts.CPU18.CAL:Function_call_interrupts
562.50 ± 72% +1407.6% 8480 ± 54% interrupts.CPU18.NMI:Non-maskable_interrupts
562.50 ± 72% +1407.6% 8480 ± 54% interrupts.CPU18.PMI:Performance_monitoring_interrupts
3001 ± 21% -60.0% 1199 ± 23% interrupts.CPU18.RES:Rescheduling_interrupts
1674735 ± 28% -99.4% 9500 ± 19% interrupts.CPU18.TLB:TLB_shootdowns
1230199 ± 53% -99.0% 12270 ± 20% interrupts.CPU19.CAL:Function_call_interrupts
2384 ± 37% -58.6% 986.67 ± 16% interrupts.CPU19.RES:Rescheduling_interrupts
1315132 ± 53% -99.2% 11145 ± 21% interrupts.CPU19.TLB:TLB_shootdowns
1239868 ± 52% -98.7% 15566 ± 3% interrupts.CPU2.CAL:Function_call_interrupts
1307900 ± 52% -98.9% 14479 ± 3% interrupts.CPU2.TLB:TLB_shootdowns
1174374 ± 27% -98.9% 13198 ± 22% interrupts.CPU20.CAL:Function_call_interrupts
2291 ± 23% -55.6% 1017 ± 14% interrupts.CPU20.RES:Rescheduling_interrupts
1261995 ± 28% -99.0% 12040 ± 24% interrupts.CPU20.TLB:TLB_shootdowns
1163562 ± 72% -99.0% 12151 ± 17% interrupts.CPU21.CAL:Function_call_interrupts
1242814 ± 72% -99.1% 11040 ± 20% interrupts.CPU21.TLB:TLB_shootdowns
935698 ± 52% -98.7% 11798 ± 20% interrupts.CPU22.CAL:Function_call_interrupts
2077 ± 16% -39.0% 1268 ± 5% interrupts.CPU22.RES:Rescheduling_interrupts
997742 ± 53% -98.9% 10643 ± 23% interrupts.CPU22.TLB:TLB_shootdowns
962574 ± 41% -99.1% 9130 ± 28% interrupts.CPU23.CAL:Function_call_interrupts
2487 ± 25% -55.7% 1103 ± 26% interrupts.CPU23.RES:Rescheduling_interrupts
1031301 ± 41% -99.2% 7958 ± 35% interrupts.CPU23.TLB:TLB_shootdowns
983055 ± 34% -99.1% 9145 ± 17% interrupts.CPU24.CAL:Function_call_interrupts
2020 ± 8% -43.4% 1143 ± 14% interrupts.CPU24.RES:Rescheduling_interrupts
1055794 ± 34% -99.2% 7933 ± 20% interrupts.CPU24.TLB:TLB_shootdowns
712727 ± 59% -98.6% 9881 ± 30% interrupts.CPU25.CAL:Function_call_interrupts
762308 ± 59% -98.9% 8687 ± 35% interrupts.CPU25.TLB:TLB_shootdowns
1088593 ± 25% -99.3% 7636 ± 10% interrupts.CPU26.CAL:Function_call_interrupts
2177 ± 28% -54.9% 982.33 ± 15% interrupts.CPU26.RES:Rescheduling_interrupts
1160016 ± 25% -99.4% 6583 ± 16% interrupts.CPU26.TLB:TLB_shootdowns
1035163 ± 32% -99.0% 9871 ± 36% interrupts.CPU27.CAL:Function_call_interrupts
2313 ± 26% -57.4% 986.33 ± 28% interrupts.CPU27.RES:Rescheduling_interrupts
1107088 ± 31% -99.2% 8640 ± 43% interrupts.CPU27.TLB:TLB_shootdowns
897104 ± 51% -99.2% 6742 ± 21% interrupts.CPU28.CAL:Function_call_interrupts
907.00 ± 61% +309.7% 3716 ± 48% interrupts.CPU28.NMI:Non-maskable_interrupts
907.00 ± 61% +309.7% 3716 ± 48% interrupts.CPU28.PMI:Performance_monitoring_interrupts
959438 ± 51% -99.4% 5514 ± 27% interrupts.CPU28.TLB:TLB_shootdowns
900084 ± 27% -99.4% 5596 ± 24% interrupts.CPU29.CAL:Function_call_interrupts
1976 ± 24% -56.8% 853.33 ± 32% interrupts.CPU29.RES:Rescheduling_interrupts
959656 ± 26% -99.5% 4341 ± 35% interrupts.CPU29.TLB:TLB_shootdowns
1194729 ± 56% -98.5% 17430 ± 39% interrupts.CPU3.CAL:Function_call_interrupts
525.75 ± 85% +556.6% 3452 ± 47% interrupts.CPU3.NMI:Non-maskable_interrupts
525.75 ± 85% +556.6% 3452 ± 47% interrupts.CPU3.PMI:Performance_monitoring_interrupts
1261693 ± 57% -98.7% 16395 ± 42% interrupts.CPU3.TLB:TLB_shootdowns
787739 ± 33% -99.1% 6919 ± 20% interrupts.CPU30.CAL:Function_call_interrupts
849132 ± 33% -99.3% 5737 ± 24% interrupts.CPU30.TLB:TLB_shootdowns
1135046 ± 55% -99.4% 6855 ± 30% interrupts.CPU31.CAL:Function_call_interrupts
1211676 ± 55% -99.5% 5602 ± 39% interrupts.CPU31.TLB:TLB_shootdowns
972935 ± 43% -99.0% 9546 ± 27% interrupts.CPU32.CAL:Function_call_interrupts
1818 ± 23% -44.3% 1012 ± 15% interrupts.CPU32.RES:Rescheduling_interrupts
1051259 ± 44% -99.2% 8453 ± 33% interrupts.CPU32.TLB:TLB_shootdowns
923884 ± 71% -99.0% 9111 ± 54% interrupts.CPU33.CAL:Function_call_interrupts
986655 ± 71% -99.2% 7952 ± 63% interrupts.CPU33.TLB:TLB_shootdowns
829706 ± 59% -99.0% 8204 ± 30% interrupts.CPU34.CAL:Function_call_interrupts
883394 ± 59% -99.2% 7222 ± 34% interrupts.CPU34.TLB:TLB_shootdowns
1545345 ± 48% -99.3% 11243 ± 28% interrupts.CPU35.CAL:Function_call_interrupts
2611 ± 32% -41.4% 1530 ± 31% interrupts.CPU35.RES:Rescheduling_interrupts
1648616 ± 48% -99.4% 10099 ± 31% interrupts.CPU35.TLB:TLB_shootdowns
743486 ± 25% -96.9% 23407 ± 22% interrupts.CPU36.CAL:Function_call_interrupts
709.75 ± 79% +1988.3% 14822 ± 84% interrupts.CPU36.NMI:Non-maskable_interrupts
709.75 ± 79% +1988.3% 14822 ± 84% interrupts.CPU36.PMI:Performance_monitoring_interrupts
1670 ± 21% -24.5% 1262 ± 23% interrupts.CPU36.RES:Rescheduling_interrupts
793538 ± 25% -97.2% 22365 ± 23% interrupts.CPU36.TLB:TLB_shootdowns
512017 ± 43% -98.3% 8886 ± 40% interrupts.CPU37.CAL:Function_call_interrupts
694.00 ± 31% +618.8% 4988 ± 77% interrupts.CPU37.NMI:Non-maskable_interrupts
694.00 ± 31% +618.8% 4988 ± 77% interrupts.CPU37.PMI:Performance_monitoring_interrupts
1553 ± 19% -29.2% 1099 ± 28% interrupts.CPU37.RES:Rescheduling_interrupts
542917 ± 44% -98.6% 7663 ± 48% interrupts.CPU37.TLB:TLB_shootdowns
410157 ± 52% -96.4% 14951 ± 53% interrupts.CPU38.CAL:Function_call_interrupts
433249 ± 52% -96.8% 13784 ± 57% interrupts.CPU38.TLB:TLB_shootdowns
326913 ± 19% -96.2% 12392 ± 30% interrupts.CPU39.CAL:Function_call_interrupts
1361 ± 13% -31.8% 928.67 ± 20% interrupts.CPU39.RES:Rescheduling_interrupts
344502 ± 19% -96.7% 11332 ± 34% interrupts.CPU39.TLB:TLB_shootdowns
961062 ± 66% -98.2% 17399 ± 31% interrupts.CPU4.CAL:Function_call_interrupts
1018054 ± 66% -98.4% 16319 ± 34% interrupts.CPU4.TLB:TLB_shootdowns
811854 ± 11% -98.9% 9229 ± 24% interrupts.CPU40.CAL:Function_call_interrupts
2403 ± 25% -53.2% 1125 ± 27% interrupts.CPU40.RES:Rescheduling_interrupts
867263 ± 10% -99.1% 8044 ± 29% interrupts.CPU40.TLB:TLB_shootdowns
287216 ± 45% -96.1% 11286 ± 60% interrupts.CPU41.CAL:Function_call_interrupts
309210 ± 46% -96.7% 10138 ± 69% interrupts.CPU41.TLB:TLB_shootdowns
470893 ± 55% -96.3% 17210 ± 58% interrupts.CPU42.CAL:Function_call_interrupts
503992 ± 55% -96.8% 16080 ± 61% interrupts.CPU42.TLB:TLB_shootdowns
536532 ± 33% -97.5% 13371 ± 56% interrupts.CPU43.CAL:Function_call_interrupts
570162 ± 33% -97.8% 12291 ± 62% interrupts.CPU43.TLB:TLB_shootdowns
383745 ± 84% -97.3% 10222 ± 73% interrupts.CPU44.CAL:Function_call_interrupts
408337 ± 84% -97.7% 9202 ± 83% interrupts.CPU44.TLB:TLB_shootdowns
397985 ± 42% -96.8% 12896 ± 42% interrupts.CPU45.CAL:Function_call_interrupts
353.00 ± 31% +2152.7% 7952 ± 84% interrupts.CPU45.NMI:Non-maskable_interrupts
353.00 ± 31% +2152.7% 7952 ± 84% interrupts.CPU45.PMI:Performance_monitoring_interrupts
420434 ± 41% -97.2% 11900 ± 48% interrupts.CPU45.TLB:TLB_shootdowns
379477 ± 95% -95.4% 17413 ± 71% interrupts.CPU46.CAL:Function_call_interrupts
401864 ± 95% -95.9% 16430 ± 77% interrupts.CPU46.TLB:TLB_shootdowns
464073 ± 58% -98.0% 9210 ± 34% interrupts.CPU47.CAL:Function_call_interrupts
1484 ± 27% -36.9% 936.00 ± 16% interrupts.CPU47.RES:Rescheduling_interrupts
491939 ± 59% -98.4% 8033 ± 41% interrupts.CPU47.TLB:TLB_shootdowns
531744 ± 47% -97.0% 16180 ± 75% interrupts.CPU48.CAL:Function_call_interrupts
1563 ± 19% -43.5% 884.00 ± 18% interrupts.CPU48.RES:Rescheduling_interrupts
564429 ± 48% -97.3% 15078 ± 82% interrupts.CPU48.TLB:TLB_shootdowns
322097 ± 71% -95.7% 13797 ± 22% interrupts.CPU49.CAL:Function_call_interrupts
340460 ± 71% -96.3% 12681 ± 25% interrupts.CPU49.TLB:TLB_shootdowns
943457 ± 71% -98.2% 16513 ± 58% interrupts.CPU5.CAL:Function_call_interrupts
933.25 ± 69% +591.6% 6454 ± 51% interrupts.CPU5.NMI:Non-maskable_interrupts
933.25 ± 69% +591.6% 6454 ± 51% interrupts.CPU5.PMI:Performance_monitoring_interrupts
3127 ± 38% -53.3% 1461 ± 28% interrupts.CPU5.RES:Rescheduling_interrupts
1001711 ± 72% -98.5% 15424 ± 63% interrupts.CPU5.TLB:TLB_shootdowns
362917 ± 75% -96.1% 14110 ± 36% interrupts.CPU50.CAL:Function_call_interrupts
815.00 ± 71% +1328.5% 11642 ± 59% interrupts.CPU50.NMI:Non-maskable_interrupts
815.00 ± 71% +1328.5% 11642 ± 59% interrupts.CPU50.PMI:Performance_monitoring_interrupts
384929 ± 75% -96.5% 13325 ± 43% interrupts.CPU50.TLB:TLB_shootdowns
445928 ± 42% -97.0% 13267 ± 37% interrupts.CPU51.CAL:Function_call_interrupts
309.00 ± 39% +2831.8% 9059 ± 66% interrupts.CPU51.NMI:Non-maskable_interrupts
309.00 ± 39% +2831.8% 9059 ± 66% interrupts.CPU51.PMI:Performance_monitoring_interrupts
1836 ± 60% -43.7% 1033 ± 28% interrupts.CPU51.RES:Rescheduling_interrupts
474465 ± 42% -97.3% 12603 ± 42% interrupts.CPU51.TLB:TLB_shootdowns
399958 ± 46% -96.5% 14160 ± 49% interrupts.CPU52.CAL:Function_call_interrupts
422881 ± 46% -96.9% 13077 ± 55% interrupts.CPU52.TLB:TLB_shootdowns
689403 ± 46% -98.5% 10119 ± 47% interrupts.CPU53.CAL:Function_call_interrupts
735469 ± 47% -98.8% 8963 ± 55% interrupts.CPU53.TLB:TLB_shootdowns
617414 ± 38% -97.9% 12768 ± 28% interrupts.CPU54.CAL:Function_call_interrupts
667.25 ± 46% +1629.6% 11541 ± 62% interrupts.CPU54.NMI:Non-maskable_interrupts
667.25 ± 46% +1629.6% 11541 ± 62% interrupts.CPU54.PMI:Performance_monitoring_interrupts
658336 ± 39% -98.2% 11672 ± 32% interrupts.CPU54.TLB:TLB_shootdowns
546653 ± 89% -97.5% 13738 ± 13% interrupts.CPU55.CAL:Function_call_interrupts
774.75 ± 41% +721.2% 6362 ± 94% interrupts.CPU55.NMI:Non-maskable_interrupts
774.75 ± 41% +721.2% 6362 ± 94% interrupts.CPU55.PMI:Performance_monitoring_interrupts
580070 ± 90% -97.8% 12580 ± 15% interrupts.CPU55.TLB:TLB_shootdowns
409611 ± 49% -97.3% 11007 ± 49% interrupts.CPU56.CAL:Function_call_interrupts
1345 ± 20% -20.8% 1066 ± 26% interrupts.CPU56.RES:Rescheduling_interrupts
432727 ± 49% -97.7% 9815 ± 57% interrupts.CPU56.TLB:TLB_shootdowns
351492 ± 68% -97.6% 8567 ± 63% interrupts.CPU57.CAL:Function_call_interrupts
372086 ± 68% -98.0% 7404 ± 75% interrupts.CPU57.TLB:TLB_shootdowns
331546 ± 52% -97.3% 8874 ± 43% interrupts.CPU58.CAL:Function_call_interrupts
394.25 ± 47% +2800.7% 11436 ± 86% interrupts.CPU58.NMI:Non-maskable_interrupts
394.25 ± 47% +2800.7% 11436 ± 86% interrupts.CPU58.PMI:Performance_monitoring_interrupts
352515 ± 52% -97.8% 7710 ± 50% interrupts.CPU58.TLB:TLB_shootdowns
1017645 ± 49% -98.6% 14448 ± 45% interrupts.CPU6.CAL:Function_call_interrupts
1077506 ± 49% -98.8% 13383 ± 49% interrupts.CPU6.TLB:TLB_shootdowns
447523 ± 42% -97.6% 10874 ± 32% interrupts.CPU60.CAL:Function_call_interrupts
474946 ± 42% -98.0% 9696 ± 37% interrupts.CPU60.TLB:TLB_shootdowns
195026 ± 43% -95.2% 9316 ± 84% interrupts.CPU61.CAL:Function_call_interrupts
204326 ± 44% -96.0% 8100 ±100% interrupts.CPU61.TLB:TLB_shootdowns
506257 ± 37% -97.6% 12160 ± 52% interrupts.CPU62.CAL:Function_call_interrupts
536017 ± 38% -98.0% 10978 ± 60% interrupts.CPU62.TLB:TLB_shootdowns
284965 ± 31% -96.6% 9596 ± 58% interrupts.CPU63.CAL:Function_call_interrupts
302145 ± 32% -97.2% 8379 ± 68% interrupts.CPU63.TLB:TLB_shootdowns
479734 ± 54% -98.1% 9072 ± 56% interrupts.CPU64.CAL:Function_call_interrupts
508462 ± 54% -98.4% 7910 ± 67% interrupts.CPU64.TLB:TLB_shootdowns
579804 ± 47% -98.5% 8486 ± 16% interrupts.CPU65.CAL:Function_call_interrupts
386.25 ± 42% +3442.4% 13682 ±105% interrupts.CPU65.NMI:Non-maskable_interrupts
386.25 ± 42% +3442.4% 13682 ±105% interrupts.CPU65.PMI:Performance_monitoring_interrupts
1275 ± 21% -30.7% 884.00 ± 7% interrupts.CPU65.RES:Rescheduling_interrupts
618622 ± 48% -98.8% 7303 ± 22% interrupts.CPU65.TLB:TLB_shootdowns
591246 ± 51% -98.6% 8565 ± 51% interrupts.CPU66.CAL:Function_call_interrupts
629018 ± 52% -98.8% 7334 ± 62% interrupts.CPU66.TLB:TLB_shootdowns
662137 ± 52% -98.5% 10092 ± 45% interrupts.CPU67.CAL:Function_call_interrupts
1641 ± 7% -37.6% 1024 ± 27% interrupts.CPU67.RES:Rescheduling_interrupts
703434 ± 52% -98.7% 8912 ± 54% interrupts.CPU67.TLB:TLB_shootdowns
297911 ± 33% -96.5% 10478 ± 61% interrupts.CPU68.CAL:Function_call_interrupts
377.00 ± 62% +5955.2% 22828 ± 94% interrupts.CPU68.NMI:Non-maskable_interrupts
377.00 ± 62% +5955.2% 22828 ± 94% interrupts.CPU68.PMI:Performance_monitoring_interrupts
314898 ± 33% -97.0% 9293 ± 71% interrupts.CPU68.TLB:TLB_shootdowns
426139 ± 42% -97.7% 9808 ± 33% interrupts.CPU69.CAL:Function_call_interrupts
450916 ± 43% -98.1% 8712 ± 38% interrupts.CPU69.TLB:TLB_shootdowns
1227894 ± 28% -98.6% 16727 ± 33% interrupts.CPU7.CAL:Function_call_interrupts
2755 ± 22% -41.5% 1611 ± 21% interrupts.CPU7.RES:Rescheduling_interrupts
1301072 ± 29% -98.8% 15683 ± 35% interrupts.CPU7.TLB:TLB_shootdowns
363690 ± 39% -97.4% 9440 ± 48% interrupts.CPU70.CAL:Function_call_interrupts
386135 ± 40% -97.9% 8253 ± 57% interrupts.CPU70.TLB:TLB_shootdowns
406370 ± 30% -97.6% 9598 ± 52% interrupts.CPU71.CAL:Function_call_interrupts
1555 ± 16% +56.5% 2433 ± 22% interrupts.CPU71.RES:Rescheduling_interrupts
431220 ± 31% -98.0% 8419 ± 62% interrupts.CPU71.TLB:TLB_shootdowns
379521 ± 43% -96.1% 14799 ± 56% interrupts.CPU72.CAL:Function_call_interrupts
401269 ± 43% -96.6% 13719 ± 61% interrupts.CPU72.TLB:TLB_shootdowns
853406 ± 53% -98.5% 13118 ± 56% interrupts.CPU73.CAL:Function_call_interrupts
904450 ± 53% -98.7% 12014 ± 62% interrupts.CPU73.TLB:TLB_shootdowns
929431 ± 17% -98.4% 15108 ± 49% interrupts.CPU74.CAL:Function_call_interrupts
1953 ± 24% -38.2% 1208 ± 15% interrupts.CPU74.RES:Rescheduling_interrupts
980216 ± 17% -98.6% 14050 ± 54% interrupts.CPU74.TLB:TLB_shootdowns
378639 ± 55% -96.6% 12982 ± 41% interrupts.CPU75.CAL:Function_call_interrupts
761.50 ± 67% +555.7% 4993 ± 68% interrupts.CPU75.NMI:Non-maskable_interrupts
761.50 ± 67% +555.7% 4993 ± 68% interrupts.CPU75.PMI:Performance_monitoring_interrupts
396850 ± 55% -97.0% 11861 ± 45% interrupts.CPU75.TLB:TLB_shootdowns
458962 ± 58% -97.5% 11531 ± 32% interrupts.CPU76.CAL:Function_call_interrupts
485627 ± 59% -97.9% 10373 ± 35% interrupts.CPU76.TLB:TLB_shootdowns
281173 ± 82% -93.1% 19292 ± 37% interrupts.CPU77.CAL:Function_call_interrupts
861.25 ± 70% +625.0% 6244 ± 54% interrupts.CPU77.NMI:Non-maskable_interrupts
861.25 ± 70% +625.0% 6244 ± 54% interrupts.CPU77.PMI:Performance_monitoring_interrupts
297326 ± 83% -93.8% 18476 ± 39% interrupts.CPU77.TLB:TLB_shootdowns
640811 ± 43% -98.5% 9471 ± 30% interrupts.CPU78.CAL:Function_call_interrupts
678615 ± 43% -98.8% 8256 ± 34% interrupts.CPU78.TLB:TLB_shootdowns
688480 ± 26% -98.3% 11407 ± 36% interrupts.CPU79.CAL:Function_call_interrupts
1512 ± 15% -25.0% 1133 ± 19% interrupts.CPU79.RES:Rescheduling_interrupts
727783 ± 26% -98.6% 10309 ± 39% interrupts.CPU79.TLB:TLB_shootdowns
1027219 ± 72% -98.6% 14687 ± 56% interrupts.CPU8.CAL:Function_call_interrupts
545.50 ± 47% +763.6% 4711 ± 74% interrupts.CPU8.NMI:Non-maskable_interrupts
545.50 ± 47% +763.6% 4711 ± 74% interrupts.CPU8.PMI:Performance_monitoring_interrupts
1088136 ± 73% -98.8% 13533 ± 61% interrupts.CPU8.TLB:TLB_shootdowns
567067 ± 89% -97.9% 12111 ± 26% interrupts.CPU80.CAL:Function_call_interrupts
463.75 ± 76% +929.3% 4773 ± 71% interrupts.CPU80.NMI:Non-maskable_interrupts
463.75 ± 76% +929.3% 4773 ± 71% interrupts.CPU80.PMI:Performance_monitoring_interrupts
600387 ± 90% -98.2% 10968 ± 28% interrupts.CPU80.TLB:TLB_shootdowns
499954 ± 43% -97.2% 13756 ± 12% interrupts.CPU81.CAL:Function_call_interrupts
656.50 ± 34% +724.5% 5412 ± 76% interrupts.CPU81.NMI:Non-maskable_interrupts
656.50 ± 34% +724.5% 5412 ± 76% interrupts.CPU81.PMI:Performance_monitoring_interrupts
529479 ± 43% -97.6% 12630 ± 14% interrupts.CPU81.TLB:TLB_shootdowns
443999 ± 80% -97.8% 9792 ± 44% interrupts.CPU82.CAL:Function_call_interrupts
468595 ± 81% -98.2% 8582 ± 50% interrupts.CPU82.TLB:TLB_shootdowns
615661 ± 65% -98.0% 12037 ± 32% interrupts.CPU83.CAL:Function_call_interrupts
650630 ± 65% -98.3% 10861 ± 35% interrupts.CPU83.TLB:TLB_shootdowns
317778 ± 63% -96.7% 10627 ± 9% interrupts.CPU84.CAL:Function_call_interrupts
334372 ± 64% -97.2% 9524 ± 10% interrupts.CPU84.TLB:TLB_shootdowns
546581 ± 63% -98.3% 9310 ± 40% interrupts.CPU85.CAL:Function_call_interrupts
1311 ± 28% -39.0% 799.67 ± 27% interrupts.CPU85.RES:Rescheduling_interrupts
574583 ± 63% -98.6% 8083 ± 46% interrupts.CPU85.TLB:TLB_shootdowns
419213 ± 51% -98.0% 8407 ± 31% interrupts.CPU86.CAL:Function_call_interrupts
443869 ± 51% -98.4% 7177 ± 36% interrupts.CPU86.TLB:TLB_shootdowns
531049 ± 63% -98.2% 9730 ± 47% interrupts.CPU87.CAL:Function_call_interrupts
563628 ± 63% -98.5% 8564 ± 54% interrupts.CPU87.TLB:TLB_shootdowns
907469 ± 55% -98.9% 9870 ± 48% interrupts.CPU88.CAL:Function_call_interrupts
954396 ± 55% -99.1% 8694 ± 55% interrupts.CPU88.TLB:TLB_shootdowns
627701 ± 48% -98.3% 10603 ± 47% interrupts.CPU89.CAL:Function_call_interrupts
665632 ± 48% -98.6% 9438 ± 52% interrupts.CPU89.TLB:TLB_shootdowns
603.75 ±105% -72.1% 168.33 ± 8% interrupts.CPU9.54:PCI-MSI.1572873-edge.eth0-TxRx-9
1395838 ± 39% -98.7% 18681 ± 60% interrupts.CPU9.CAL:Function_call_interrupts
763.75 ± 33% +661.8% 5818 ± 93% interrupts.CPU9.NMI:Non-maskable_interrupts
763.75 ± 33% +661.8% 5818 ± 93% interrupts.CPU9.PMI:Performance_monitoring_interrupts
3455 ± 55% -53.1% 1619 ± 39% interrupts.CPU9.RES:Rescheduling_interrupts
1480077 ± 39% -98.8% 17602 ± 64% interrupts.CPU9.TLB:TLB_shootdowns
854.25 ± 25% +925.6% 8761 ± 88% interrupts.CPU90.NMI:Non-maskable_interrupts
854.25 ± 25% +925.6% 8761 ± 88% interrupts.CPU90.PMI:Performance_monitoring_interrupts
545942 ± 51% -97.9% 11661 ± 31% interrupts.CPU91.CAL:Function_call_interrupts
580139 ± 51% -98.2% 10555 ± 33% interrupts.CPU91.TLB:TLB_shootdowns
291246 ± 40% -96.7% 9728 ± 44% interrupts.CPU92.CAL:Function_call_interrupts
308723 ± 41% -97.2% 8636 ± 53% interrupts.CPU92.TLB:TLB_shootdowns
573160 ±115% -98.0% 11689 ± 21% interrupts.CPU93.CAL:Function_call_interrupts
608315 ±116% -98.3% 10620 ± 23% interrupts.CPU93.TLB:TLB_shootdowns
464446 ± 69% -97.2% 12863 ± 53% interrupts.CPU94.CAL:Function_call_interrupts
493711 ± 70% -97.6% 11665 ± 58% interrupts.CPU94.TLB:TLB_shootdowns
401393 ± 68% -98.5% 5850 ± 21% interrupts.CPU95.CAL:Function_call_interrupts
427084 ± 69% -98.9% 4632 ± 29% interrupts.CPU95.TLB:TLB_shootdowns
486077 ± 57% -98.4% 7564 ± 19% interrupts.CPU97.CAL:Function_call_interrupts
605.25 ± 68% +2436.9% 15354 ± 72% interrupts.CPU97.NMI:Non-maskable_interrupts
605.25 ± 68% +2436.9% 15354 ± 72% interrupts.CPU97.PMI:Performance_monitoring_interrupts
515849 ± 58% -98.8% 6337 ± 24% interrupts.CPU97.TLB:TLB_shootdowns
490328 ± 68% -98.1% 9077 ± 39% interrupts.CPU98.CAL:Function_call_interrupts
523353 ± 68% -98.5% 7952 ± 45% interrupts.CPU98.TLB:TLB_shootdowns
425492 ± 43% -98.0% 8654 ± 33% interrupts.CPU99.CAL:Function_call_interrupts
451818 ± 43% -98.4% 7411 ± 39% interrupts.CPU99.TLB:TLB_shootdowns
87005273 -98.3% 1460829 ± 3% interrupts.TLB:TLB_shootdowns
vm-scalability.time.user_time
700 +-+-------------------------------------------------------------------+
| |
600 +-+ .++.++ .+ |
| +.++.+.++.++.+ + .+.++. .+.++.++. +.++. +.++.++.+.++ +.|
500 +-+ ++ ++ +.+ +.+ |
| |
400 +-+ |
| |
300 +-+ |
| |
200 +-+ |
| |
100 +-+ |
O OO O O OO O OO O OO O OO OO O OO OO O O O |
0 +-+---O------O--O-----O----------------------O------------------------+
vm-scalability.time.system_time
1800 +-+------------------------------------------------------------------+
|.++.+ .++.+.++.++. +.+.++.++. +.+.++. +. +.++.+. +.++.+ .+.++.++.++.|
1600 +-+ + + + + + + + |
1400 +-+ |
| |
1200 +-+ O |
1000 +-O O O O O O O O |
| O OO O O O O O O |
800 O-+ O O O OO O O |
600 +-+ |
| |
400 +-+ |
200 +-+ |
| |
0 +-+---O------O--O-----O---------------------O------------------------+
vm-scalability.time.percent_of_cpu_this_job_got
800 +-+-------------------------------------------------------------------+
|.++.++.+.++.++.+.++.++.++.+.++.++.+.++.++.+.++.++.+.++.++.++.+.++.++.|
700 +-+ |
600 +-+ |
| |
500 +-+ |
| |
400 +-+ |
| O O O OO O O OO O |
300 +-+O OO O O OO O OO O OO O O |
200 O-+ |
| |
100 +-+ |
| |
0 +-+---O------O--O-----O----------------------O------------------------+
vm-scalability.time.maximum_resident_set_size
5e+06 +-+---------------------------------------------------------------+
4.5e+06 +-+ + |
| :: |
4e+06 +-+ +. .+ +.+ .+ +. +. : :.+|
3.5e+06 +-++.+ + .++.++.++. .++.++ +.++.+ +.++.++ +.++.+ + ++ + |
| + ++ |
3e+06 O-OO O OO O O OO O OO OO OO OO OO OO O OO |
2.5e+06 +-+ |
2e+06 +-+ |
| |
1.5e+06 +-+ |
1e+06 +-+ |
| |
500000 +-+ |
0 +-+---O-----O--O-----O--------------------O-----------------------+
vm-scalability.time.minor_page_faults
1e+08 +-+-----------------------------------------------------------------+
9e+07 +-+ |
| |
8e+07 +-+ |
7e+07 +-+ |
| |
6e+07 +-+ |
5e+07 +-+ |
4e+07 +-+ |
| |
3e+07 +-+ |
2e+07 +-+ |
| |
1e+07 O-OO O OO O O O OO O OO OO OO O OO OO O OO |
0 +-+---O-----O--O------O--------------------O------------------------+
vm-scalability.time.involuntary_context_switches
450000 +-+----------------------------------------------------------------+
| +. +.++. +.++. +. .+ .+ .+ .+ +. +. .+ .++ +. ++.|
400000 +-+ + + ++.+ ++ + +.+ + +.++.+ + ++ + + : + + |
350000 +-+ + + |
| |
300000 +-+ |
250000 +-+ |
| |
200000 +-+ |
150000 +-+ |
| |
100000 +-O O OO |
50000 O-+O O OO O O OO OO OO O OO OO OO O O |
| |
0 +-+---O-----O--O-----O---------------------O-----------------------+
perf-node.node-stores
9e+08 +-+-----------------------------------------------------------------+
| + |
8e+08 +-+ + : |
7e+08 +-+ :: :: .+ |
| + .+ :: : : + .+ +. .+ +. + +. + + + : |
6e+08 +-++ : : + .++.+.+ : + + : : + : + : + : + + : : +.+ : |
| :.+ + :+ :.+ :+ +.+ : + : : +.|
5e+08 +-+ + + + + + ++ |
| |
4e+08 +-+ |
3e+08 +-+ |
| |
2e+08 +-+ O O O O O O OO O O |
O OO O O OO OO OO O O O O O O OO |
1e+08 +-+------------O---------------------------O------------------------+
perf-node.node-store-misses
1.5e+09 +-+---------------------------------------------------------------+
1.4e+09 +-++ .+ +.+ ++ +.+ ++ ++ |
|+ + .+ + +. +. : : + + +. : : + + .++.+ + + |
1.3e+09 +-+ ++ : .+ + + +.++ ++.+ + +.++ ++ + +|
1.2e+09 +-+ +.++ |
| |
1.1e+09 +-+ |
1e+09 +-+ |
9e+08 +-+ |
| |
8e+08 +-+ O O O O O O |
7e+08 O-OO O OO OO O O O O OO O OO O O OO |
| |
6e+08 +-+ O O O |
5e+08 +-+---------------------------------------O-----------------------+
vm-scalability.throughput
2e+06 +-+---------------------------------------------------------------+
|.++.++.+ .++.++.++. +.++.++.++.++.++.++.++.++.++.++.++.++.++.+ .+|
1.8e+06 +-+ + + + |
1.6e+06 +-+ |
| |
1.4e+06 +-+ |
1.2e+06 +-+ |
| |
1e+06 +-+ |
800000 +-+ |
| |
600000 +-+ |
400000 +-+ |
| |
200000 O-OO-OO-OO-OO-OO-OO-OO-OO-OO-OO-OO-OO-OO-OO-OO--------------------+
vm-scalability.free_time
55 +-+--------------------------------------------------------------------+
50 +-+ .+ .+ |
| .+.++.+. + : +.+ + .+. + : |
45 +-+ .++ + :+ : .+.+ +.+ + +.+. +.+.++ + :.|
40 +-++ + +.++ +.+.+ +.++.+ ++.+ + |
35 +-+ |
30 +-+ |
| |
25 +-+ |
20 +-+ |
15 +-+ |
10 +-+ |
| O O OO O O |
5 O-+O OO O OO OO O OO OO O OO O OO OO O O O |
0 +-+--------------------------------------------------------------------+
vm-scalability.median
250000 +-+----------------------------------------------------------------+
|.++.++.++.++.++.++. +.++.++.++.+.++.++.++.++.++.++.++.++.++.++.++.|
| + |
200000 +-+ |
| |
| |
150000 +-+ |
| |
100000 +-+ |
| |
| |
50000 +-+ |
O OO OO OO OO OO OO OO OO OO OO O OO OO OO OO O |
| |
0 +-+----------------------------------------------------------------+
vm-scalability.workload
4.5e+08 +-+---------------------------------------------------------------+
| |
4e+08 +-+ |
3.5e+08 +-+ |
| |
3e+08 +-+ |
| |
2.5e+08 +-+ |
| |
2e+08 +-+ |
1.5e+08 +-+ |
| |
1e+08 +-+ |
O OO OO OO OO OO OO OO OO OO OO OO OO OO OO OO |
5e+07 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-csl-2sp7: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase/ucode:
gcc-7/performance/1SSD/16MB/xfs/1x/x86_64-rhel-7.6/16d/256fpd/32t/debian-x86_64-2019-09-23.cgz/NoSync/lkp-csl-2sp7/60G/fsmark/0x500002b
commit:
cba81e70bf (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl3cSywACgkQpekojE+k")
3e05ad861b (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl3cSywACgkQpekojE+k")
cba81e70bf716d85 3e05ad861b9b2b61a1cbfd0d989
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
213530 ± 6% -17.1% 176946 ± 3% fsmark.app_overhead
141.28 ± 25% +1609.1% 2414 fsmark.files_per_sec
2828 -2.0% 2772 fsmark.time.maximum_resident_set_size
48.75 ± 2% -17.3% 40.33 fsmark.time.percent_of_cpu_this_job_got
66.37 ± 2% -17.6% 54.70 fsmark.time.system_time
36378 -95.4% 1678 ± 4% fsmark.time.voluntary_context_switches
19.21 -5.1% 18.24 ± 6% boot-time.dhcp
8.17 -7.2 0.93 mpstat.cpu.all.iowait%
4911670 ± 5% -100.0% 0.00 numa-numastat.node0.numa_foreign
4911670 ± 5% -100.0% 0.00 numa-numastat.node1.numa_miss
4911702 ± 5% -99.9% 6293 ±140% numa-numastat.node1.other_node
4810521 ± 5% -100.0% 0.00 numa-vmstat.node0.numa_foreign
4810600 ± 5% -100.0% 0.00 numa-vmstat.node1.numa_miss
4980554 ± 5% -96.5% 175777 ± 4% numa-vmstat.node1.numa_other
91.00 +7.7% 98.00 vmstat.cpu.id
8.00 -100.0% 0.00 vmstat.procs.b
2552 -22.4% 1980 vmstat.system.cs
7659590 ± 67% -87.6% 951609 ± 20% cpuidle.C1.time
70426 ± 38% -51.2% 34365 ± 7% cpuidle.C1.usage
2823007 ± 73% -57.4% 1203288 ± 10% cpuidle.POLL.time
22890 ± 13% -23.2% 17580 ± 3% cpuidle.POLL.usage
1396 ± 6% -6.8% 1302 ± 6% slabinfo.kmalloc-rcl-96.active_objs
1396 ± 6% -6.8% 1302 ± 6% slabinfo.kmalloc-rcl-96.num_objs
344.75 ± 14% -35.4% 222.67 ± 13% slabinfo.xfrm_state.active_objs
344.75 ± 14% -35.4% 222.67 ± 13% slabinfo.xfrm_state.num_objs
68.25 ± 15% -17.9% 56.00 turbostat.Avg_MHz
4.71 ± 19% -1.0 3.72 turbostat.Busy%
66815 ± 38% -58.5% 27727 ± 7% turbostat.C1
93.61 ± 3% -4.9% 89.06 turbostat.PkgWatt
91.08 +7.8% 98.23 iostat.cpu.idle
8.05 -88.6% 0.92 iostat.cpu.iowait
15.79 ±173% -100.0% 0.00 iostat.sda.avgqu-sz.max
41.72 ±173% -100.0% 0.00 iostat.sda.await.max
17.62 ±173% -100.0% 0.00 iostat.sda.r_await.max
0.27 ±173% -100.0% 0.00 iostat.sda.svctm.max
41.72 ±173% -100.0% 0.00 iostat.sda.w_await.max
1.21 ± 57% +68.9% 2.05 iostat.sdb.wrqm/s
6902065 +8.6% 7498469 proc-vmstat.nr_dirty
6494293 +100.2% 13003066 proc-vmstat.nr_dirty_background_threshold
13004465 +150.0% 32515607 proc-vmstat.nr_dirty_threshold
14902299 +3.8% 15475261 proc-vmstat.nr_file_pages
50695632 -1.1% 50115309 proc-vmstat.nr_free_pages
14627724 +3.9% 15201051 proc-vmstat.nr_inactive_file
1196 -5.3% 1133 proc-vmstat.nr_page_table_pages
53073 +2.5% 54382 proc-vmstat.nr_slab_reclaimable
14627724 +3.9% 15201051 proc-vmstat.nr_zone_inactive_file
6924527 +8.6% 7520765 proc-vmstat.nr_zone_write_pending
4911670 ± 5% -100.0% 0.00 proc-vmstat.numa_foreign
11272999 ± 2% +43.5% 16181547 proc-vmstat.numa_hit
11241838 ± 2% +43.7% 16150474 proc-vmstat.numa_local
4911670 ± 5% -100.0% 0.00 proc-vmstat.numa_miss
4942831 ± 5% -99.4% 31073 proc-vmstat.numa_other
605288 ± 46% +62.4% 982801 ± 21% sched_debug.cfs_rq:/.load.max
110012 ± 20% +35.8% 149419 ± 15% sched_debug.cfs_rq:/.load.stddev
1.85 ± 73% +131.2% 4.28 ± 6% sched_debug.cfs_rq:/.removed.util_avg.avg
14.22 ± 59% +71.5% 24.40 ± 6% sched_debug.cfs_rq:/.removed.util_avg.stddev
604628 ± 46% +62.5% 982801 ± 21% sched_debug.cfs_rq:/.runnable_weight.max
109963 ± 20% +35.9% 149420 ± 15% sched_debug.cfs_rq:/.runnable_weight.stddev
182.26 ± 3% -5.8% 171.67 ± 5% sched_debug.cfs_rq:/.util_avg.stddev
859467 -47.5% 451140 sched_debug.cpu.avg_idle.avg
1050621 ± 6% -46.8% 559208 ± 8% sched_debug.cpu.avg_idle.max
179785 ± 8% -53.1% 84392 ± 9% sched_debug.cpu.avg_idle.stddev
502355 -50.1% 250635 sched_debug.cpu.max_idle_balance_cost.avg
595239 ± 16% -50.1% 297142 ± 12% sched_debug.cpu.max_idle_balance_cost.max
500000 -50.0% 250000 sched_debug.cpu.max_idle_balance_cost.min
3625 -14.7% 3093 sched_debug.cpu.nr_switches.avg
8.56 ± 13% -14.8% 7.29 ± 11% sched_debug.cpu.nr_uninterruptible.stddev
1812 -30.1% 1266 sched_debug.cpu.sched_count.avg
876.95 -31.5% 600.57 sched_debug.cpu.sched_goidle.avg
911.99 -29.9% 639.51 sched_debug.cpu.ttwu_count.avg
650.33 -39.6% 392.92 sched_debug.cpu.ttwu_local.avg
24.00 -33.3% 16.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 -46.7% 1.60 sched_debug.sysctl_sched.sysctl_sched_min_granularity
4.00 -50.0% 2.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
4.59e+08 ± 2% -4.3% 4.394e+08 ± 3% perf-stat.i.branch-instructions
1984351 ± 36% -49.3% 1005882 ± 11% perf-stat.i.cache-misses
30076098 ± 73% -74.8% 7591943 ± 7% perf-stat.i.cache-references
2477 -23.3% 1899 perf-stat.i.context-switches
4.704e+09 ± 22% -25.0% 3.529e+09 ± 3% perf-stat.i.cpu-cycles
2929 ± 23% +43.9% 4217 perf-stat.i.cycles-between-cache-misses
0.20 ± 96% -0.2 0.01 ± 15% perf-stat.i.dTLB-load-miss-rate%
677095 ± 96% -95.8% 28535 ± 31% perf-stat.i.dTLB-load-misses
5.269e+08 -5.1% 5e+08 ± 3% perf-stat.i.dTLB-loads
107155 ± 91% -91.0% 9662 ± 14% perf-stat.i.dTLB-store-misses
2.543e+08 ± 2% -5.8% 2.397e+08 ± 2% perf-stat.i.dTLB-stores
52.36 -1.5 50.83 perf-stat.i.iTLB-load-miss-rate%
2.234e+09 ± 2% -4.9% 2.125e+09 ± 3% perf-stat.i.instructions
78.56 ± 2% +4.4 82.97 ± 2% perf-stat.i.node-load-miss-rate%
52109 ± 8% -28.4% 37317 ± 11% perf-stat.i.node-loads
56.57 ± 22% +30.6 87.13 ± 3% perf-stat.i.node-store-miss-rate%
127934 ± 5% -60.3% 50785 ±112% perf-stat.i.node-stores
13.46 ± 73% -73.5% 3.57 ± 4% perf-stat.overall.MPKI
2510 ± 15% +41.0% 3540 ± 8% perf-stat.overall.cycles-between-cache-misses
53.32 -1.7 51.60 perf-stat.overall.iTLB-load-miss-rate%
4.554e+08 ± 2% -4.3% 4.359e+08 ± 3% perf-stat.ps.branch-instructions
1969672 ± 36% -49.3% 998526 ± 11% perf-stat.ps.cache-misses
29852128 ± 73% -74.8% 7534678 ± 7% perf-stat.ps.cache-references
2458 -23.3% 1885 perf-stat.ps.context-switches
4.668e+09 ± 22% -25.0% 3.502e+09 ± 3% perf-stat.ps.cpu-cycles
672093 ± 96% -95.8% 28338 ± 31% perf-stat.ps.dTLB-load-misses
5.228e+08 -5.1% 4.961e+08 ± 3% perf-stat.ps.dTLB-loads
106362 ± 91% -91.0% 9591 ± 14% perf-stat.ps.dTLB-store-misses
2.524e+08 ± 2% -5.8% 2.378e+08 ± 2% perf-stat.ps.dTLB-stores
2.217e+09 ± 2% -4.9% 2.108e+09 ± 3% perf-stat.ps.instructions
51702 ± 8% -28.4% 37035 ± 11% perf-stat.ps.node-loads
2.964e+11 ± 2% -4.4% 2.833e+11 ± 3% perf-stat.total.instructions
80.76 ± 3% -3.3 77.45 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
81.61 ± 2% -3.1 78.54 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
96.58 -1.3 95.29 perf-profile.calltrace.cycles-pp.secondary_startup_64
2.48 ± 9% -0.8 1.63 ± 12% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
2.16 ± 8% -0.8 1.36 ± 14% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.28 ± 16% -0.3 0.99 ± 18% perf-profile.calltrace.cycles-pp.ktime_get.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
1.59 ± 7% +0.4 1.98 ± 7% perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.33 ± 19% +0.4 1.74 perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.35 ±102% +0.5 0.84 ± 15% perf-profile.calltrace.cycles-pp.test_clear_page_writeback.end_page_writeback.xfs_destroy_ioend.xfs_end_ioend.xfs_end_io
0.00 +0.6 0.58 ± 5% perf-profile.calltrace.cycles-pp.native_write_msr.__intel_pmu_enable_all.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.6 0.64 ± 4% perf-profile.calltrace.cycles-pp.__intel_pmu_enable_all.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
3.72 ± 15% +0.8 4.49 ± 3% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
82.91 ± 2% -3.5 79.44 perf-profile.children.cycles-pp.cpuidle_enter_state
82.99 ± 2% -3.5 79.52 perf-profile.children.cycles-pp.cpuidle_enter
96.58 -1.3 95.29 perf-profile.children.cycles-pp.secondary_startup_64
96.58 -1.3 95.29 perf-profile.children.cycles-pp.cpu_startup_entry
96.69 -1.3 95.41 perf-profile.children.cycles-pp.do_idle
2.53 ± 9% -0.9 1.66 ± 10% perf-profile.children.cycles-pp.irq_enter
2.20 ± 8% -0.8 1.39 ± 13% perf-profile.children.cycles-pp.tick_irq_enter
0.53 ± 42% -0.4 0.10 ± 17% perf-profile.children.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.60 ± 39% -0.3 0.33 ± 6% perf-profile.children.cycles-pp.enqueue_hrtimer
0.26 ± 53% -0.2 0.09 ± 18% perf-profile.children.cycles-pp.hrtimer_forward
0.24 ± 33% -0.1 0.12 ± 29% perf-profile.children.cycles-pp.rb_insert_color
0.11 ± 41% -0.1 0.04 ± 71% perf-profile.children.cycles-pp.intel_pmu_disable_all
0.21 ± 6% +0.0 0.25 perf-profile.children.cycles-pp.rcu_eqs_exit
0.09 ± 26% +0.0 0.13 ± 14% perf-profile.children.cycles-pp.smp_call_function_single
0.10 ± 28% +0.0 0.13 ± 17% perf-profile.children.cycles-pp.perf_event_read
0.01 ±173% +0.0 0.06 ± 8% perf-profile.children.cycles-pp.fbcon_redraw
0.01 ±173% +0.0 0.06 ± 8% perf-profile.children.cycles-pp.con_scroll
0.01 ±173% +0.0 0.06 ± 8% perf-profile.children.cycles-pp.lf
0.01 ±173% +0.0 0.06 ± 8% perf-profile.children.cycles-pp.fbcon_scroll
0.01 ±173% +0.0 0.06 ± 8% perf-profile.children.cycles-pp.vt_console_print
0.06 ± 26% +0.0 0.11 ± 12% perf-profile.children.cycles-pp.schedule
0.01 ±173% +0.0 0.06 ± 13% perf-profile.children.cycles-pp.__wake_up_common
0.00 +0.1 0.05 perf-profile.children.cycles-pp.vm_mmap_pgoff
0.24 ± 7% +0.1 0.30 ± 12% perf-profile.children.cycles-pp.call_cpuidle
0.08 ± 24% +0.1 0.14 ± 20% perf-profile.children.cycles-pp.__perf_event_read_value
0.10 ± 20% +0.1 0.17 ± 12% perf-profile.children.cycles-pp.update_dl_rq_load_avg
0.07 ± 64% +0.1 0.15 ± 24% perf-profile.children.cycles-pp.__mod_node_page_state
0.31 ± 8% +0.1 0.40 ± 19% perf-profile.children.cycles-pp.rcu_eqs_enter
0.60 ± 7% +0.1 0.72 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
0.22 ± 42% +0.2 0.38 ± 5% perf-profile.children.cycles-pp.trigger_load_balance
0.41 ± 20% +0.2 0.65 ± 4% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.59 ± 29% +0.3 0.88 ± 14% perf-profile.children.cycles-pp.test_clear_page_writeback
1.64 ± 7% +0.4 2.01 ± 7% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
1.38 ± 18% +0.4 1.76 perf-profile.children.cycles-pp.scheduler_tick
0.92 ± 18% +0.4 1.37 ± 7% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.53 ± 42% -0.4 0.10 ± 17% perf-profile.self.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.32 ± 49% -0.2 0.15 ± 10% perf-profile.self.cycles-pp.timerqueue_add
0.26 ± 53% -0.2 0.08 ± 20% perf-profile.self.cycles-pp.hrtimer_forward
0.20 ± 36% -0.1 0.11 ± 26% perf-profile.self.cycles-pp.rb_insert_color
0.22 ± 13% -0.1 0.14 ± 20% perf-profile.self.cycles-pp.irq_exit
0.12 ± 13% -0.0 0.08 ± 10% perf-profile.self.cycles-pp.cpuidle_governor_latency_req
0.07 ± 11% -0.0 0.04 ± 73% perf-profile.self.cycles-pp.enqueue_hrtimer
0.10 ± 11% -0.0 0.07 perf-profile.self.cycles-pp.find_busiest_group
0.08 ± 23% +0.0 0.12 ± 10% perf-profile.self.cycles-pp.smp_call_function_single
0.20 ± 12% +0.1 0.25 ± 9% perf-profile.self.cycles-pp.load_balance
0.10 ± 21% +0.1 0.15 ± 11% perf-profile.self.cycles-pp.update_dl_rq_load_avg
0.07 ± 15% +0.1 0.13 ± 31% perf-profile.self.cycles-pp.page_mapping
0.24 ± 8% +0.1 0.30 ± 13% perf-profile.self.cycles-pp.call_cpuidle
0.07 ± 64% +0.1 0.15 ± 24% perf-profile.self.cycles-pp.__mod_node_page_state
0.30 ± 10% +0.1 0.39 ± 19% perf-profile.self.cycles-pp.rcu_eqs_enter
0.59 ± 7% +0.1 0.71 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
3.36 ± 14% +0.6 3.98 perf-profile.self.cycles-pp.cpuidle_enter_state
0.25 ±173% +65100.0% 163.00 ± 79% interrupts.86:PCI-MSI.31981619-edge.i40e-eth0-TxRx-50
290.50 ± 36% -44.5% 161.33 ± 23% interrupts.CPU0.NMI:Non-maskable_interrupts
290.50 ± 36% -44.5% 161.33 ± 23% interrupts.CPU0.PMI:Performance_monitoring_interrupts
5404 ±111% -91.4% 463.67 ± 94% interrupts.CPU1.RES:Rescheduling_interrupts
204.50 ± 24% -40.8% 121.00 ± 32% interrupts.CPU12.NMI:Non-maskable_interrupts
204.50 ± 24% -40.8% 121.00 ± 32% interrupts.CPU12.PMI:Performance_monitoring_interrupts
200.00 ± 16% -35.0% 130.00 ± 30% interrupts.CPU13.NMI:Non-maskable_interrupts
200.00 ± 16% -35.0% 130.00 ± 30% interrupts.CPU13.PMI:Performance_monitoring_interrupts
208.50 ± 22% -34.1% 137.33 ± 27% interrupts.CPU14.NMI:Non-maskable_interrupts
208.50 ± 22% -34.1% 137.33 ± 27% interrupts.CPU14.PMI:Performance_monitoring_interrupts
215.75 ± 23% -44.8% 119.00 ± 28% interrupts.CPU15.NMI:Non-maskable_interrupts
215.75 ± 23% -44.8% 119.00 ± 28% interrupts.CPU15.PMI:Performance_monitoring_interrupts
93.75 ±159% -97.2% 2.67 ± 17% interrupts.CPU18.RES:Rescheduling_interrupts
212.25 ± 24% -42.8% 121.33 ± 24% interrupts.CPU2.NMI:Non-maskable_interrupts
212.25 ± 24% -42.8% 121.33 ± 24% interrupts.CPU2.PMI:Performance_monitoring_interrupts
259.75 ± 37% -42.9% 148.33 ± 4% interrupts.CPU29.NMI:Non-maskable_interrupts
259.75 ± 37% -42.9% 148.33 ± 4% interrupts.CPU29.PMI:Performance_monitoring_interrupts
39.75 ±153% -97.5% 1.00 ± 81% interrupts.CPU36.RES:Rescheduling_interrupts
244.00 ± 39% -23.5% 186.67 ± 39% interrupts.CPU38.NMI:Non-maskable_interrupts
244.00 ± 39% -23.5% 186.67 ± 39% interrupts.CPU38.PMI:Performance_monitoring_interrupts
263.50 ± 42% -47.4% 138.67 ± 33% interrupts.CPU4.NMI:Non-maskable_interrupts
263.50 ± 42% -47.4% 138.67 ± 33% interrupts.CPU4.PMI:Performance_monitoring_interrupts
256.00 ± 42% -42.3% 147.67 ± 8% interrupts.CPU41.NMI:Non-maskable_interrupts
256.00 ± 42% -42.3% 147.67 ± 8% interrupts.CPU41.PMI:Performance_monitoring_interrupts
213.75 ± 30% -42.6% 122.67 ± 23% interrupts.CPU5.NMI:Non-maskable_interrupts
213.75 ± 30% -42.6% 122.67 ± 23% interrupts.CPU5.PMI:Performance_monitoring_interrupts
2077 ± 17% -20.0% 1663 interrupts.CPU51.CAL:Function_call_interrupts
2068 ± 18% -18.7% 1682 interrupts.CPU62.CAL:Function_call_interrupts
203.75 ± 18% -28.2% 146.33 ± 2% interrupts.CPU62.NMI:Non-maskable_interrupts
203.75 ± 18% -28.2% 146.33 ± 2% interrupts.CPU62.PMI:Performance_monitoring_interrupts
2068 ± 18% -18.7% 1682 interrupts.CPU63.CAL:Function_call_interrupts
2064 ± 18% -22.7% 1595 ± 5% interrupts.CPU65.CAL:Function_call_interrupts
208.25 ± 19% -26.2% 153.67 ± 5% interrupts.CPU66.NMI:Non-maskable_interrupts
208.25 ± 19% -26.2% 153.67 ± 5% interrupts.CPU66.PMI:Performance_monitoring_interrupts
2064 ± 18% -19.6% 1658 interrupts.CPU67.CAL:Function_call_interrupts
2065 ± 18% -20.8% 1636 interrupts.CPU68.CAL:Function_call_interrupts
200.25 ± 20% -35.9% 128.33 ± 28% interrupts.CPU7.NMI:Non-maskable_interrupts
200.25 ± 20% -35.9% 128.33 ± 28% interrupts.CPU7.PMI:Performance_monitoring_interrupts
297.50 ± 43% -47.0% 157.67 ± 8% interrupts.CPU70.NMI:Non-maskable_interrupts
297.50 ± 43% -47.0% 157.67 ± 8% interrupts.CPU70.PMI:Performance_monitoring_interrupts
267.50 ± 37% -44.2% 149.33 ± 3% interrupts.CPU72.NMI:Non-maskable_interrupts
267.50 ± 37% -44.2% 149.33 ± 3% interrupts.CPU72.PMI:Performance_monitoring_interrupts
255.75 ± 38% -47.3% 134.67 ± 4% interrupts.CPU76.NMI:Non-maskable_interrupts
255.75 ± 38% -47.3% 134.67 ± 4% interrupts.CPU76.PMI:Performance_monitoring_interrupts
261.50 ± 39% -40.9% 154.67 ± 5% interrupts.CPU77.NMI:Non-maskable_interrupts
261.50 ± 39% -40.9% 154.67 ± 5% interrupts.CPU77.PMI:Performance_monitoring_interrupts
263.00 ± 44% -52.1% 126.00 ± 38% interrupts.CPU79.NMI:Non-maskable_interrupts
263.00 ± 44% -52.1% 126.00 ± 38% interrupts.CPU79.PMI:Performance_monitoring_interrupts
259.50 ± 40% -50.8% 127.67 ± 34% interrupts.CPU84.NMI:Non-maskable_interrupts
259.50 ± 40% -50.8% 127.67 ± 34% interrupts.CPU84.PMI:Performance_monitoring_interrupts
253.75 ± 40% -41.1% 149.33 ± 48% interrupts.CPU86.NMI:Non-maskable_interrupts
253.75 ± 40% -41.1% 149.33 ± 48% interrupts.CPU86.PMI:Performance_monitoring_interrupts
250.75 ± 44% -49.6% 126.33 ± 31% interrupts.CPU88.NMI:Non-maskable_interrupts
250.75 ± 44% -49.6% 126.33 ± 31% interrupts.CPU88.PMI:Performance_monitoring_interrupts
252.25 ± 42% -51.9% 121.33 ± 33% interrupts.CPU89.NMI:Non-maskable_interrupts
252.25 ± 42% -51.9% 121.33 ± 33% interrupts.CPU89.PMI:Performance_monitoring_interrupts
245.75 ± 40% -50.8% 121.00 ± 34% interrupts.CPU90.NMI:Non-maskable_interrupts
245.75 ± 40% -50.8% 121.00 ± 34% interrupts.CPU90.PMI:Performance_monitoring_interrupts
261.00 ± 42% -46.4% 140.00 interrupts.CPU91.NMI:Non-maskable_interrupts
261.00 ± 42% -46.4% 140.00 interrupts.CPU91.PMI:Performance_monitoring_interrupts
256.75 ± 39% -42.7% 147.00 ± 6% interrupts.CPU92.NMI:Non-maskable_interrupts
256.75 ± 39% -42.7% 147.00 ± 6% interrupts.CPU92.PMI:Performance_monitoring_interrupts
255.50 ± 39% -40.4% 152.33 ± 7% interrupts.CPU93.NMI:Non-maskable_interrupts
255.50 ± 39% -40.4% 152.33 ± 7% interrupts.CPU93.PMI:Performance_monitoring_interrupts
270.75 ± 41% -45.0% 149.00 ± 4% interrupts.CPU94.NMI:Non-maskable_interrupts
270.75 ± 41% -45.0% 149.00 ± 4% interrupts.CPU94.PMI:Performance_monitoring_interrupts
14178 ± 5% -24.3% 10726 ± 13% softirqs.CPU10.RCU
14327 ± 5% -25.3% 10709 ± 13% softirqs.CPU11.RCU
13657 ± 6% -22.7% 10555 ± 13% softirqs.CPU12.RCU
13837 ± 5% -23.1% 10640 ± 15% softirqs.CPU13.RCU
14189 ± 5% -24.5% 10711 ± 15% softirqs.CPU14.RCU
14009 ± 3% -22.0% 10921 ± 15% softirqs.CPU15.RCU
13125 ± 4% -14.6% 11210 ± 14% softirqs.CPU17.RCU
12992 ± 5% -17.9% 10663 ± 12% softirqs.CPU19.RCU
13489 ± 2% -20.0% 10787 ± 14% softirqs.CPU2.RCU
13290 ± 4% -18.1% 10883 ± 11% softirqs.CPU20.RCU
12513 ± 4% -13.9% 10776 ± 12% softirqs.CPU21.RCU
13147 ± 6% -19.9% 10526 ± 19% softirqs.CPU23.RCU
13959 ± 7% -20.4% 11106 ± 15% softirqs.CPU24.RCU
19261 ± 3% -8.8% 17559 softirqs.CPU24.SCHED
12910 ± 9% -17.4% 10662 ± 5% softirqs.CPU27.RCU
63260 ± 13% -17.9% 51933 ± 7% softirqs.CPU27.TIMER
58283 ± 11% -11.8% 51424 ± 6% softirqs.CPU28.TIMER
58243 ± 12% -11.7% 51441 ± 6% softirqs.CPU29.TIMER
16519 ± 18% -36.8% 10444 ± 30% softirqs.CPU3.RCU
14396 ± 9% -29.2% 10196 ± 6% softirqs.CPU32.RCU
13800 ± 11% -23.8% 10513 ± 8% softirqs.CPU33.RCU
14106 ± 10% -25.4% 10522 ± 7% softirqs.CPU34.RCU
14153 ± 10% -22.6% 10956 ± 12% softirqs.CPU35.RCU
13465 ± 12% -15.1% 11433 ± 12% softirqs.CPU36.RCU
13993 ± 11% -25.1% 10477 ± 7% softirqs.CPU37.RCU
13908 ± 9% -25.1% 10413 ± 8% softirqs.CPU38.RCU
13697 ± 11% -21.5% 10749 ± 4% softirqs.CPU39.RCU
15844 ± 16% -31.9% 10786 ± 14% softirqs.CPU4.RCU
21525 ± 13% -14.8% 18350 ± 4% softirqs.CPU4.SCHED
13962 ± 11% -23.6% 10666 ± 5% softirqs.CPU40.RCU
14388 ± 11% -26.7% 10541 ± 8% softirqs.CPU41.RCU
13635 ± 13% -27.6% 9868 ± 8% softirqs.CPU42.RCU
14048 ± 11% -21.3% 11053 ± 11% softirqs.CPU43.RCU
14344 ± 11% -26.8% 10497 ± 8% softirqs.CPU44.RCU
58230 ± 11% -13.4% 50409 ± 6% softirqs.CPU44.TIMER
14276 ± 10% -25.1% 10690 ± 5% softirqs.CPU45.RCU
14280 ± 11% -25.9% 10581 ± 5% softirqs.CPU46.RCU
13720 ± 8% -23.1% 10556 ± 6% softirqs.CPU47.RCU
13096 ± 4% -16.5% 10936 ± 16% softirqs.CPU48.RCU
13617 ± 4% -22.6% 10535 ± 11% softirqs.CPU49.RCU
14186 ± 6% -20.2% 11317 ± 8% softirqs.CPU5.RCU
14099 ± 3% -24.2% 10691 ± 11% softirqs.CPU50.RCU
13393 ± 8% -20.4% 10667 ± 13% softirqs.CPU52.RCU
14321 ± 4% -22.5% 11099 ± 17% softirqs.CPU53.RCU
14290 ± 5% -20.4% 11377 ± 11% softirqs.CPU54.RCU
13816 ± 5% -22.3% 10738 ± 14% softirqs.CPU55.RCU
14216 ± 5% -25.7% 10562 ± 15% softirqs.CPU56.RCU
13828 ± 5% -21.0% 10921 ± 16% softirqs.CPU57.RCU
14420 ± 7% -25.4% 10759 ± 14% softirqs.CPU58.RCU
14181 ± 5% -24.5% 10704 ± 14% softirqs.CPU59.RCU
14292 ± 5% -25.0% 10718 ± 13% softirqs.CPU6.RCU
13769 ± 4% -19.8% 11045 ± 10% softirqs.CPU60.RCU
13739 ± 3% -20.9% 10866 ± 14% softirqs.CPU61.RCU
14338 ± 7% -24.9% 10767 ± 14% softirqs.CPU62.RCU
13912 ± 3% -24.3% 10537 ± 14% softirqs.CPU63.RCU
12922 ± 5% -18.2% 10573 ± 13% softirqs.CPU64.RCU
13176 ± 8% -18.6% 10722 ± 9% softirqs.CPU65.RCU
13092 ± 8% -20.6% 10390 ± 14% softirqs.CPU67.RCU
12796 ± 3% -32.8% 8594 ± 43% softirqs.CPU68.RCU
12979 ± 8% -20.8% 10274 ± 12% softirqs.CPU69.RCU
13999 ± 4% -22.7% 10823 ± 13% softirqs.CPU7.RCU
12760 ± 6% -18.1% 10446 ± 12% softirqs.CPU70.RCU
12198 ± 14% -26.6% 8948 ± 22% softirqs.CPU75.RCU
56460 ± 13% -13.0% 49127 ± 7% softirqs.CPU76.TIMER
56294 ± 13% -13.0% 48997 ± 6% softirqs.CPU77.TIMER
14222 ± 5% -26.4% 10472 ± 12% softirqs.CPU8.RCU
13604 ± 10% -14.6% 11615 ± 9% softirqs.CPU81.RCU
14079 ± 11% -27.0% 10277 ± 8% softirqs.CPU82.RCU
13936 ± 11% -26.5% 10244 ± 8% softirqs.CPU83.RCU
14220 ± 13% -28.1% 10230 ± 7% softirqs.CPU85.RCU
13573 ± 9% -25.4% 10128 ± 8% softirqs.CPU86.RCU
13399 ± 12% -22.7% 10356 ± 3% softirqs.CPU87.RCU
13987 ± 11% -26.8% 10245 ± 7% softirqs.CPU88.RCU
14106 ± 12% -23.7% 10761 softirqs.CPU89.RCU
13853 ± 4% -23.6% 10577 ± 13% softirqs.CPU9.RCU
13260 ± 11% -22.1% 10325 ± 8% softirqs.CPU90.RCU
15140 ± 16% -33.6% 10054 ± 8% softirqs.CPU91.RCU
17734 ± 5% -7.0% 16486 ± 4% softirqs.CPU91.SCHED
14199 ± 12% -28.5% 10153 ± 7% softirqs.CPU92.RCU
14249 ± 12% -25.3% 10641 ± 11% softirqs.CPU93.RCU
13965 ± 11% -22.7% 10789 ± 8% softirqs.CPU94.RCU
1301155 ± 6% -21.3% 1023388 ± 10% softirqs.RCU
***************************************************************************************************
lkp-csl-2sp7: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase/ucode:
gcc-7/performance/1HDD/16MB/f2fs/1x/x86_64-rhel-7.6/16d/256fpd/32t/debian-x86_64-2019-09-23.cgz/NoSync/lkp-csl-2sp7/60G/fsmark/0x500002b
commit:
cba81e70bf (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl3cSywACgkQpekojE+k")
3e05ad861b (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl3cSywACgkQpekojE+k")
cba81e70bf716d85 3e05ad861b9b2b61a1cbfd0d989
---------------- ---------------------------
%stddev %change %stddev
\ | \
207487 ± 2% -24.5% 156557 ± 3% fsmark.app_overhead
37.10 +832.4% 345.93 ± 3% fsmark.files_per_sec
83.25 ± 3% +22.5% 102.00 ± 3% fsmark.time.percent_of_cpu_this_job_got
289.97 ± 2% +23.3% 357.53 ± 3% fsmark.time.system_time
137357 -73.7% 36165 fsmark.time.voluntary_context_switches
28459340 +9.4% 31135240 meminfo.Dirty
1013333 ±149% -91.1% 90145 ± 7% turbostat.C1
9.46 ± 2% -8.5 0.97 ± 7% mpstat.cpu.all.iowait%
0.93 ± 3% +0.2 1.18 ± 3% mpstat.cpu.all.sys%
62658734 ±154% -95.9% 2589839 ± 15% cpuidle.C1.time
1017527 ±148% -90.7% 94277 ± 9% cpuidle.C1.usage
133577 ± 89% -62.8% 49722 ± 4% cpuidle.POLL.usage
89.25 +8.7% 97.00 vmstat.cpu.id
9.00 -97.2% 0.25 ±173% vmstat.procs.b
2202 -29.0% 1564 vmstat.system.cs
3.00 ±173% +2.2e+05% 6602 ±165% softirqs.CPU19.BLOCK
33584 ± 4% -15.5% 28392 ± 11% softirqs.CPU3.RCU
43095 ± 3% +5.9% 45643 ± 5% softirqs.CPU43.SCHED
30539 ± 5% -14.1% 26236 ± 8% softirqs.CPU53.RCU
3490367 ± 14% -100.0% 0.00 numa-numastat.node0.numa_foreign
94454 ±123% -86.9% 12407 ±106% numa-numastat.node0.other_node
3380548 ± 23% +211.7% 10536792 ± 29% numa-numastat.node1.local_node
3383810 ± 22% +211.9% 10555638 ± 29% numa-numastat.node1.numa_hit
3490367 ± 14% -100.0% 0.00 numa-numastat.node1.numa_miss
3493632 ± 14% -99.5% 18845 ± 69% numa-numastat.node1.other_node
89.52 +9.1% 97.71 iostat.cpu.idle
9.41 ± 2% -89.7% 0.97 ± 7% iostat.cpu.iowait
0.96 ± 2% +25.7% 1.21 ± 3% iostat.cpu.system
140.86 ±173% +634.2% 1034 ±101% iostat.sda.await.max
74.25 ±173% +448.5% 407.24 ±168% iostat.sda.r_await.max
140.86 ±173% +634.2% 1034 ±101% iostat.sda.w_await.max
1225 ± 57% -99.9% 0.85 ±173% iostat.sdb.r_await.max
85549 ± 8% -51.4% 41552 ± 59% numa-meminfo.node0.Active(file)
155085 ± 5% -39.2% 94346 ± 30% numa-meminfo.node0.KReclaimable
2939 ± 13% -22.9% 2267 ± 17% numa-meminfo.node0.PageTables
155085 ± 5% -39.2% 94346 ± 30% numa-meminfo.node0.SReclaimable
257098 ± 2% -26.5% 188849 ± 16% numa-meminfo.node0.Slab
20863 ± 34% +227.4% 68312 ± 35% numa-meminfo.node1.Active(file)
137672 ± 19% -29.8% 96587 ± 27% numa-meminfo.node1.AnonHugePages
59732 ± 13% +111.6% 126382 ± 22% numa-meminfo.node1.KReclaimable
59732 ± 13% +111.6% 126382 ± 22% numa-meminfo.node1.SReclaimable
132316 ± 4% +55.3% 205519 ± 15% numa-meminfo.node1.Slab
16.03 ±118% -72.2% 4.46 perf-stat.i.MPKI
2179 -30.0% 1526 perf-stat.i.context-switches
3588 ± 32% +60.5% 5760 ± 3% perf-stat.i.cycles-between-cache-misses
55.93 ± 8% -5.5 50.45 perf-stat.i.iTLB-load-miss-rate%
84.24 +3.7 87.98 perf-stat.i.node-load-miss-rate%
65.18 ± 12% +23.8 88.94 perf-stat.i.node-store-miss-rate%
144399 ± 4% +36.4% 196999 ± 5% perf-stat.i.node-stores
84.95 +1.8 86.80 perf-stat.overall.node-load-miss-rate%
60.52 -8.0 52.53 ± 3% perf-stat.overall.node-store-miss-rate%
2172 -29.9% 1521 perf-stat.ps.context-switches
144175 ± 4% +36.1% 196209 ± 5% perf-stat.ps.node-stores
21387 ± 8% -51.5% 10377 ± 58% numa-vmstat.node0.nr_active_file
734.25 ± 13% -23.0% 565.25 ± 17% numa-vmstat.node0.nr_page_table_pages
38748 ± 5% -39.2% 23576 ± 30% numa-vmstat.node0.nr_slab_reclaimable
21387 ± 8% -51.5% 10377 ± 58% numa-vmstat.node0.nr_zone_active_file
3408738 ± 14% -100.0% 0.00 numa-vmstat.node0.numa_foreign
5214 ± 34% +227.7% 17087 ± 36% numa-vmstat.node1.nr_active_file
14932 ± 13% +111.6% 31599 ± 22% numa-vmstat.node1.nr_slab_reclaimable
5214 ± 34% +227.7% 17087 ± 36% numa-vmstat.node1.nr_zone_active_file
3413078 ± 20% +214.0% 10715494 ± 29% numa-vmstat.node1.numa_hit
3239704 ± 21% +224.9% 10526475 ± 29% numa-vmstat.node1.numa_local
3408755 ± 14% -100.0% 0.00 numa-vmstat.node1.numa_miss
3582130 ± 13% -94.7% 189019 ± 6% numa-vmstat.node1.numa_other
26585 +3.4% 27484 proc-vmstat.nr_active_file
7108139 +9.6% 7787129 proc-vmstat.nr_dirty
6494961 +100.2% 13005676 proc-vmstat.nr_dirty_background_threshold
13005805 +150.1% 32522132 proc-vmstat.nr_dirty_threshold
14934015 +4.3% 15578971 proc-vmstat.nr_file_pages
50671481 -1.3% 50025580 proc-vmstat.nr_free_pages
14632035 +4.4% 15276391 proc-vmstat.nr_inactive_file
1197 -6.2% 1123 proc-vmstat.nr_page_table_pages
5959 ± 2% -2.8% 5795 proc-vmstat.nr_shmem
53692 +2.8% 55196 proc-vmstat.nr_slab_reclaimable
26585 +3.4% 27484 proc-vmstat.nr_zone_active_file
14632035 +4.4% 15276391 proc-vmstat.nr_zone_inactive_file
7130078 +9.5% 7808827 proc-vmstat.nr_zone_write_pending
3556800 ± 11% -100.0% 0.00 proc-vmstat.numa_foreign
13113098 ± 2% +27.1% 16668341 proc-vmstat.numa_hit
13081660 ± 3% +27.2% 16637084 proc-vmstat.numa_local
3556800 ± 11% -100.0% 0.00 proc-vmstat.numa_miss
3588237 ± 11% -99.1% 31257 proc-vmstat.numa_other
8353 ± 2% -4.4% 7982 proc-vmstat.pgactivate
807.96 ± 12% -25.0% 606.17 ± 6% sched_debug.cfs_rq:/.load_avg.max
161.13 ± 7% -15.2% 136.56 ± 8% sched_debug.cfs_rq:/.load_avg.stddev
262344 ± 4% +26.2% 331200 ± 3% sched_debug.cfs_rq:/.min_vruntime.max
87252 ± 4% +24.9% 109020 ± 7% sched_debug.cfs_rq:/.min_vruntime.stddev
590.83 ± 15% -28.5% 422.50 ± 10% sched_debug.cfs_rq:/.runnable_load_avg.max
104.27 ± 8% -23.3% 79.93 ± 18% sched_debug.cfs_rq:/.runnable_load_avg.stddev
87252 ± 4% +24.9% 109021 ± 7% sched_debug.cfs_rq:/.spread0.stddev
914115 -48.5% 470421 sched_debug.cpu.avg_idle.avg
1018444 ± 3% -44.0% 569875 ± 11% sched_debug.cpu.avg_idle.max
136783 ± 20% -50.0% 68366 ± 18% sched_debug.cpu.avg_idle.stddev
501309 -49.7% 251975 sched_debug.cpu.max_idle_balance_cost.avg
521214 ± 4% -42.5% 299871 ± 19% sched_debug.cpu.max_idle_balance_cost.max
500000 -50.0% 250000 sched_debug.cpu.max_idle_balance_cost.min
6323 -28.4% 4524 sched_debug.cpu.nr_switches.avg
6684 ± 5% -19.0% 5412 ± 10% sched_debug.cpu.nr_switches.stddev
0.08 ± 2% -71.2% 0.02 ± 17% sched_debug.cpu.nr_uninterruptible.avg
4672 -39.0% 2851 sched_debug.cpu.sched_count.avg
6408 ± 4% -20.0% 5125 ± 11% sched_debug.cpu.sched_count.stddev
2297 -39.7% 1384 sched_debug.cpu.sched_goidle.avg
3150 ± 4% -20.6% 2503 ± 11% sched_debug.cpu.sched_goidle.stddev
2327 -39.2% 1416 sched_debug.cpu.ttwu_count.avg
1122 -52.2% 536.60 sched_debug.cpu.ttwu_local.avg
24.00 -33.3% 16.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 -46.7% 1.60 sched_debug.sysctl_sched.sysctl_sched_min_granularity
4.00 -50.0% 2.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
74.50 ±168% -99.3% 0.50 ±173% interrupts.48:PCI-MSI.31981581-edge.i40e-eth0-TxRx-12
106.00 ± 84% -100.0% 0.00 interrupts.50:PCI-MSI.31981583-edge.i40e-eth0-TxRx-14
137.25 ± 99% -95.3% 6.50 ± 53% interrupts.CPU12.RES:Rescheduling_interrupts
227.75 ± 36% -39.8% 137.00 ± 16% interrupts.CPU29.NMI:Non-maskable_interrupts
227.75 ± 36% -39.8% 137.00 ± 16% interrupts.CPU29.PMI:Performance_monitoring_interrupts
81.75 ±147% -95.4% 3.75 ± 78% interrupts.CPU32.RES:Rescheduling_interrupts
231.00 ± 41% -40.4% 137.75 ± 17% interrupts.CPU34.NMI:Non-maskable_interrupts
231.00 ± 41% -40.4% 137.75 ± 17% interrupts.CPU34.PMI:Performance_monitoring_interrupts
237.50 ± 32% -38.9% 145.00 ± 25% interrupts.CPU39.NMI:Non-maskable_interrupts
237.50 ± 32% -38.9% 145.00 ± 25% interrupts.CPU39.PMI:Performance_monitoring_interrupts
226.75 ± 34% -47.5% 119.00 ± 35% interrupts.CPU4.NMI:Non-maskable_interrupts
226.75 ± 34% -47.5% 119.00 ± 35% interrupts.CPU4.PMI:Performance_monitoring_interrupts
237.00 ± 43% -51.6% 114.75 ± 36% interrupts.CPU48.NMI:Non-maskable_interrupts
237.00 ± 43% -51.6% 114.75 ± 36% interrupts.CPU48.PMI:Performance_monitoring_interrupts
229.25 ± 41% -51.4% 111.50 ± 31% interrupts.CPU49.NMI:Non-maskable_interrupts
229.25 ± 41% -51.4% 111.50 ± 31% interrupts.CPU49.PMI:Performance_monitoring_interrupts
224.25 ± 35% -58.1% 94.00 ± 31% interrupts.CPU50.NMI:Non-maskable_interrupts
224.25 ± 35% -58.1% 94.00 ± 31% interrupts.CPU50.PMI:Performance_monitoring_interrupts
233.00 ± 38% -57.8% 98.25 ± 22% interrupts.CPU51.NMI:Non-maskable_interrupts
233.00 ± 38% -57.8% 98.25 ± 22% interrupts.CPU51.PMI:Performance_monitoring_interrupts
221.25 ± 38% -54.9% 99.75 ± 31% interrupts.CPU52.NMI:Non-maskable_interrupts
221.25 ± 38% -54.9% 99.75 ± 31% interrupts.CPU52.PMI:Performance_monitoring_interrupts
205.75 ± 33% -48.4% 106.25 ± 31% interrupts.CPU54.NMI:Non-maskable_interrupts
205.75 ± 33% -48.4% 106.25 ± 31% interrupts.CPU54.PMI:Performance_monitoring_interrupts
218.25 ± 29% -49.3% 110.75 ± 33% interrupts.CPU59.NMI:Non-maskable_interrupts
218.25 ± 29% -49.3% 110.75 ± 33% interrupts.CPU59.PMI:Performance_monitoring_interrupts
217.25 ± 23% -52.7% 102.75 ± 51% interrupts.CPU62.NMI:Non-maskable_interrupts
217.25 ± 23% -52.7% 102.75 ± 51% interrupts.CPU62.PMI:Performance_monitoring_interrupts
242.00 ± 34% -49.8% 121.50 ± 34% interrupts.CPU68.NMI:Non-maskable_interrupts
242.00 ± 34% -49.8% 121.50 ± 34% interrupts.CPU68.PMI:Performance_monitoring_interrupts
208.25 ± 38% -42.0% 120.75 ± 31% interrupts.CPU7.NMI:Non-maskable_interrupts
208.25 ± 38% -42.0% 120.75 ± 31% interrupts.CPU7.PMI:Performance_monitoring_interrupts
275.50 ± 25% -57.3% 117.75 ± 35% interrupts.CPU71.NMI:Non-maskable_interrupts
275.50 ± 25% -57.3% 117.75 ± 35% interrupts.CPU71.PMI:Performance_monitoring_interrupts
248.50 ± 30% -43.3% 141.00 ± 23% interrupts.CPU72.NMI:Non-maskable_interrupts
248.50 ± 30% -43.3% 141.00 ± 23% interrupts.CPU72.PMI:Performance_monitoring_interrupts
1.75 ±173% +5600.0% 99.75 ±150% interrupts.CPU75.RES:Rescheduling_interrupts
1.00 ±100% +20250.0% 203.50 ±165% interrupts.CPU76.RES:Rescheduling_interrupts
0.00 +2.2e+104% 221.75 ±169% interrupts.CPU77.RES:Rescheduling_interrupts
21680 ± 36% -37.0% 13655 ± 9% interrupts.NMI:Non-maskable_interrupts
21680 ± 36% -37.0% 13655 ± 9% interrupts.PMI:Performance_monitoring_interrupts
0.39 ± 57% +0.3 0.67 ± 10% perf-profile.calltrace.cycles-pp.arch_cpu_idle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.52 ± 14% +0.4 1.94 ± 4% perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.59 ± 62% +0.5 1.05 ± 12% perf-profile.calltrace.cycles-pp.run_local_timers.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.13 ±173% +0.5 0.65 ± 10% perf-profile.calltrace.cycles-pp.tsc_verify_tsc_adjust.arch_cpu_idle_enter.do_idle.cpu_startup_entry.start_secondary
0.13 ±173% +0.6 0.72 ± 23% perf-profile.calltrace.cycles-pp.f2fs_outplace_write_data.f2fs_do_write_data_page.__write_data_page.f2fs_write_cache_pages.f2fs_write_data_pages
1.45 ± 59% +1.4 2.81 ± 2% perf-profile.calltrace.cycles-pp.timekeeping_max_deferment.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
5.50 ± 21% +1.7 7.18 ± 4% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
10.82 ± 13% +2.8 13.59 ± 5% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.34 ± 53% -0.3 0.03 ±100% perf-profile.children.cycles-pp.new_sync_write
0.34 ± 53% -0.3 0.03 ±100% perf-profile.children.cycles-pp.ksys_write
0.34 ± 53% -0.3 0.03 ±100% perf-profile.children.cycles-pp.vfs_write
0.35 ± 51% -0.3 0.03 ±100% perf-profile.children.cycles-pp.write
0.32 ± 38% -0.2 0.08 ± 10% perf-profile.children.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.15 ± 38% -0.1 0.03 ±100% perf-profile.children.cycles-pp.tick_check_broadcast_expired
0.84 ± 7% -0.1 0.71 ± 11% perf-profile.children.cycles-pp.console_unlock
0.75 ± 5% -0.1 0.64 ± 13% perf-profile.children.cycles-pp.wait_for_xmitr
0.75 ± 4% -0.1 0.65 ± 11% perf-profile.children.cycles-pp.uart_console_write
0.77 ± 5% -0.1 0.66 ± 12% perf-profile.children.cycles-pp.serial8250_console_write
0.73 ± 4% -0.1 0.62 ± 13% perf-profile.children.cycles-pp.serial8250_console_putchar
0.11 ± 65% -0.1 0.03 ±100% perf-profile.children.cycles-pp.irq_work_needs_cpu
0.21 ± 7% -0.1 0.14 ± 18% perf-profile.children.cycles-pp.memcpy_erms
0.21 ± 8% -0.1 0.14 ± 18% perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
0.07 ± 7% -0.0 0.03 ±100% perf-profile.children.cycles-pp.x86_pmu_disable
0.50 ± 7% +0.2 0.67 ± 10% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.51 ± 7% +0.2 0.69 ± 10% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.41 ± 22% +0.2 0.61 ± 13% perf-profile.children.cycles-pp.__intel_pmu_enable_all
1.56 ± 12% +0.4 1.97 ± 3% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
1.53 ± 47% +1.3 2.84 perf-profile.children.cycles-pp.timekeeping_max_deferment
6.81 ± 19% +1.6 8.40 ± 4% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
5.61 ± 20% +1.7 7.28 ± 3% perf-profile.children.cycles-pp.tick_nohz_next_event
11.00 ± 13% +2.8 13.76 ± 5% perf-profile.children.cycles-pp.menu_select
7.66 ± 40% +2.9 10.61 ± 2% perf-profile.children.cycles-pp.ktime_get
0.32 ± 38% -0.2 0.08 ± 10% perf-profile.self.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.21 ± 7% -0.1 0.14 ± 18% perf-profile.self.cycles-pp.memcpy_erms
0.06 ± 60% +0.0 0.10 ± 10% perf-profile.self.cycles-pp.cpuidle_enter
0.62 ± 24% +0.2 0.78 ± 4% perf-profile.self.cycles-pp.__softirqentry_text_start
0.45 ± 15% +0.2 0.65 ± 11% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.41 ± 25% +0.3 0.68 ± 11% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.77 ± 17% +0.4 1.14 ± 6% perf-profile.self.cycles-pp.tick_nohz_next_event
3.18 ± 20% +0.7 3.92 ± 2% perf-profile.self.cycles-pp.cpuidle_enter_state
1.52 ± 48% +1.3 2.84 perf-profile.self.cycles-pp.timekeeping_max_deferment
6.49 ± 43% +2.8 9.32 ± 3% perf-profile.self.cycles-pp.ktime_get
***************************************************************************************************
lkp-ivb-2ep1: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 64G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/ucode:
4k/gcc-7/performance/1HDD/ext4/libaio/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/300s/write/lkp-ivb-2ep1/128G/fio-basic/0x42e
commit:
cba81e70bf (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl3cSywACgkQpekojE+k")
3e05ad861b (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl3cSywACgkQpekojE+k")
cba81e70bf716d85 3e05ad861b9b2b61a1cbfd0d989
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
%stddev %change %stddev
\ | \
28.56 ± 14% -22.6 5.98 ± 12% fio.latency_100ms%
0.25 ± 8% -0.2 0.02 ± 47% fio.latency_20ms%
0.73 +0.4 1.15 ± 13% fio.latency_250ms%
0.01 ± 9% +9.6 9.57 ± 12% fio.latency_2ms%
0.01 +0.2 0.24 ± 20% fio.latency_4ms%
17.73 ± 8% +15.0 32.70 ± 12% fio.latency_750us%
0.27 ± 4% -0.1 0.14 ± 6% fio.latency_>=2000ms%
385.62 +8.7% 419.34 fio.time.elapsed_time
385.62 +8.7% 419.34 fio.time.elapsed_time.max
99365986 +6.9% 1.063e+08 fio.time.file_system_outputs
1945 ± 3% +69.2% 3291 ± 4% fio.time.involuntary_context_switches
34.75 +53.2% 53.25 ± 3% fio.time.percent_of_cpu_this_job_got
114.87 +74.3% 200.26 ± 4% fio.time.system_time
21.53 ± 2% +18.9% 25.60 ± 2% fio.time.user_time
263007 ± 5% -25.9% 194806 ± 8% fio.time.voluntary_context_switches
12420747 +6.9% 13282337 fio.workload
148.65 -15.4% 125.72 fio.write_bw_MBps
57147392 -17.4% 47185920 ± 2% fio.write_clat_90%_us
61341696 ± 2% -11.3% 54394880 ± 5% fio.write_clat_95%_us
1.277e+08 +27.7% 1.631e+08 ± 9% fio.write_clat_99%_us
35406116 -43.2% 20095509 fio.write_clat_mean_us
1.267e+08 +22.5% 1.552e+08 ± 2% fio.write_clat_stddev
38053 -15.4% 32184 fio.write_iops
1140007 -43.3% 646192 fio.write_slat_mean_us
22923702 +20.1% 27528884 ± 2% fio.write_slat_stddev
41.91 -29.4% 29.59 ± 2% iostat.cpu.idle
57.60 +20.4% 69.36 iostat.cpu.iowait
15846790 ± 15% +62.9% 25808727 ± 13% cpuidle.C1E.usage
13758474 ± 27% -59.3% 5598154 ± 65% cpuidle.C3.usage
5.555e+09 ± 11% -19.6% 4.468e+09 ± 7% cpuidle.C6.time
152936 ± 63% -78.2% 33344 ± 61% numa-numastat.node0.numa_miss
159740 ± 62% -73.2% 42830 ± 47% numa-numastat.node0.other_node
152936 ± 63% -78.2% 33344 ± 61% numa-numastat.node1.numa_foreign
41.62 -12.4 29.26 ± 2% mpstat.cpu.all.idle%
57.90 +11.8 69.69 mpstat.cpu.all.iowait%
0.38 +0.5 0.90 ± 4% mpstat.cpu.all.sys%
0.08 ± 2% +0.0 0.13 mpstat.cpu.all.usr%
41.50 -30.1% 29.00 ± 3% vmstat.cpu.id
56.75 +21.6% 69.00 vmstat.cpu.wa
128261 -1.6% 126247 vmstat.io.bo
32638075 +32.1% 43113546 ± 2% vmstat.memory.cache
32546493 -32.2% 22066885 ± 4% vmstat.memory.free
31.50 +22.2% 38.50 vmstat.procs.b
3411 ± 4% -17.2% 2825 ± 6% vmstat.system.cs
15846466 ± 15% +62.9% 25808277 ± 13% turbostat.C1E
13758155 ± 27% -59.3% 5597714 ± 65% turbostat.C3
29.88 ± 11% -7.8 22.10 ± 6% turbostat.C6%
13.68 ± 20% -60.8% 5.37 ± 50% turbostat.CPU%c6
74.95 ± 2% +4.1% 78.05 ± 2% turbostat.CorWatt
11.18 ± 25% -71.8% 3.16 ±102% turbostat.Pkg%pc2
0.10 ± 47% -85.4% 0.01 ±173% turbostat.Pkg%pc3
0.07 ± 48% -82.1% 0.01 ±173% turbostat.Pkg%pc6
103.68 ± 2% +3.0% 106.79 turbostat.PkgWatt
7.50 +1.1% 7.57 turbostat.RAMWatt
31797979 +32.0% 41987949 ± 2% meminfo.Cached
2246330 +12.6% 2530086 meminfo.Committed_AS
7112704 ± 6% +4.0% 7398400 ± 7% meminfo.DirectMap2M
9273585 +99.7% 18516564 meminfo.Dirty
30704128 +33.2% 40895517 ± 2% meminfo.Inactive
362019 +15.6% 418524 meminfo.Inactive(anon)
30342109 +33.4% 40476992 ± 2% meminfo.Inactive(file)
919250 +30.3% 1197987 meminfo.KReclaimable
372616 +15.3% 429563 meminfo.Mapped
32467447 -32.3% 21995088 ± 4% meminfo.MemFree
33378851 +31.4% 43851210 ± 2% meminfo.Memused
919250 +30.3% 1197987 meminfo.SReclaimable
367488 +15.5% 424291 meminfo.Shmem
1044343 +26.8% 1324225 meminfo.Slab
263713 +174.6% 724109 meminfo.Writeback
206.00 ± 2% +18.7% 244.50 ± 12% interrupts.42:PCI-MSI.2621448-edge.eth0-TxRx-7
481.50 ± 12% -29.5% 339.50 ± 29% interrupts.CPU11.NMI:Non-maskable_interrupts
481.50 ± 12% -29.5% 339.50 ± 29% interrupts.CPU11.PMI:Performance_monitoring_interrupts
431.75 ± 33% -42.4% 248.50 ± 24% interrupts.CPU12.NMI:Non-maskable_interrupts
431.75 ± 33% -42.4% 248.50 ± 24% interrupts.CPU12.PMI:Performance_monitoring_interrupts
4925 ± 11% +15.8% 5704 ± 7% interrupts.CPU13.CAL:Function_call_interrupts
546.50 ± 29% -62.4% 205.25 ± 59% interrupts.CPU13.RES:Rescheduling_interrupts
4560 ± 6% +31.1% 5978 ± 19% interrupts.CPU19.CAL:Function_call_interrupts
407.50 ± 60% -81.2% 76.50 ± 40% interrupts.CPU19.RES:Rescheduling_interrupts
288.50 ± 45% -77.2% 65.75 ± 22% interrupts.CPU21.RES:Rescheduling_interrupts
157.25 ± 43% -61.4% 60.75 ± 70% interrupts.CPU29.RES:Rescheduling_interrupts
206.00 ± 2% +18.7% 244.50 ± 12% interrupts.CPU31.42:PCI-MSI.2621448-edge.eth0-TxRx-7
4469 ± 7% +39.0% 6210 ± 22% interrupts.CPU36.CAL:Function_call_interrupts
494.75 ± 65% -72.0% 138.50 ± 40% interrupts.CPU36.RES:Rescheduling_interrupts
4316 ± 7% +22.2% 5274 ± 10% interrupts.CPU40.CAL:Function_call_interrupts
270.00 ± 44% -63.8% 97.75 ± 87% interrupts.CPU40.RES:Rescheduling_interrupts
495.50 ± 39% -79.3% 102.75 ± 61% interrupts.CPU45.RES:Rescheduling_interrupts
16584 ± 11% -30.8% 11482 ± 16% interrupts.RES:Rescheduling_interrupts
4618717 ± 3% +101.4% 9302547 numa-meminfo.node0.Dirty
16354663 ± 3% +28.5% 21008472 ± 2% numa-meminfo.node0.FilePages
15811819 ± 3% +29.5% 20470740 ± 2% numa-meminfo.node0.Inactive
15710516 ± 4% +29.6% 20360981 numa-meminfo.node0.Inactive(file)
469767 ± 3% +27.1% 596863 ± 2% numa-meminfo.node0.KReclaimable
15710137 ± 4% -30.5% 10924214 ± 5% numa-meminfo.node0.MemFree
17155509 ± 3% +27.9% 21941432 ± 2% numa-meminfo.node0.MemUsed
469767 ± 3% +27.1% 596863 ± 2% numa-meminfo.node0.SReclaimable
536435 ± 3% +23.4% 661977 ± 2% numa-meminfo.node0.Slab
127732 ± 2% +173.0% 348765 ± 4% numa-meminfo.node0.Writeback
4655423 ± 3% +98.3% 9231636 ± 3% numa-meminfo.node1.Dirty
15447018 ± 3% +36.1% 21022313 ± 2% numa-meminfo.node1.FilePages
14892867 ± 3% +37.4% 20464806 ± 3% numa-meminfo.node1.Inactive
14632190 ± 4% +37.7% 20155562 ± 3% numa-meminfo.node1.Inactive(file)
449292 ± 3% +34.0% 602274 ± 2% numa-meminfo.node1.KReclaimable
16756430 ± 3% -34.2% 11029554 ± 5% numa-meminfo.node1.MemFree
16224220 ± 3% +35.3% 21951097 ± 2% numa-meminfo.node1.MemUsed
449292 ± 3% +34.0% 602274 ± 2% numa-meminfo.node1.SReclaimable
507722 ± 3% +30.7% 663404 numa-meminfo.node1.Slab
135362 ± 5% +178.0% 376303 ± 3% numa-meminfo.node1.Writeback
4444421 ± 3% +25.2% 5562521 ± 3% numa-vmstat.node0.nr_dirtied
1155371 ± 3% +101.2% 2324533 numa-vmstat.node0.nr_dirty
4090377 ± 3% +28.4% 5250862 ± 2% numa-vmstat.node0.nr_file_pages
3925740 ± 4% -30.4% 2732344 ± 5% numa-vmstat.node0.nr_free_pages
3929385 ± 4% +29.5% 5088978 numa-vmstat.node0.nr_inactive_file
117498 ± 3% +26.9% 149157 ± 2% numa-vmstat.node0.nr_slab_reclaimable
31827 ± 2% +174.6% 87400 ± 4% numa-vmstat.node0.nr_writeback
3929385 ± 4% +29.5% 5088978 numa-vmstat.node0.nr_zone_inactive_file
1187493 ± 3% +103.1% 2412048 numa-vmstat.node0.nr_zone_write_pending
5087678 ± 5% +23.9% 6305813 ± 6% numa-vmstat.node0.numa_hit
5080612 ± 5% +23.9% 6295655 ± 6% numa-vmstat.node0.numa_local
146231 ± 62% -77.6% 32778 ± 61% numa-vmstat.node0.numa_miss
4434100 ± 3% +39.2% 6170063 ± 4% numa-vmstat.node1.nr_dirtied
1164610 ± 3% +98.1% 2307168 ± 3% numa-vmstat.node1.nr_dirty
3863740 ± 3% +36.0% 5254625 ± 2% numa-vmstat.node1.nr_file_pages
4187032 ± 3% -34.1% 2758350 ± 5% numa-vmstat.node1.nr_free_pages
3660023 ± 4% +37.6% 5037997 ± 3% numa-vmstat.node1.nr_inactive_file
112388 ± 3% +33.9% 150514 ± 2% numa-vmstat.node1.nr_slab_reclaimable
33693 ± 4% +179.1% 94036 ± 3% numa-vmstat.node1.nr_writeback
3660023 ± 4% +37.6% 5037997 ± 3% numa-vmstat.node1.nr_zone_inactive_file
1198596 ± 3% +100.3% 2401291 ± 3% numa-vmstat.node1.nr_zone_write_pending
146232 ± 62% -77.6% 32778 ± 61% numa-vmstat.node1.numa_foreign
5176677 ± 5% +37.1% 7096463 ± 6% numa-vmstat.node1.numa_hit
5011522 ± 6% +38.4% 6934030 ± 6% numa-vmstat.node1.numa_local
444.00 ± 3% +25.4% 556.75 ± 5% slabinfo.biovec-128.active_objs
444.00 ± 3% +25.4% 556.75 ± 5% slabinfo.biovec-128.num_objs
734.25 ± 4% +45.8% 1070 ± 3% slabinfo.biovec-64.active_objs
734.25 ± 4% +45.8% 1070 ± 3% slabinfo.biovec-64.num_objs
295.50 ± 18% +68.0% 496.50 ± 12% slabinfo.biovec-max.active_objs
319.25 ± 16% +63.0% 520.25 ± 13% slabinfo.biovec-max.num_objs
700.00 ± 8% +56.5% 1095 ± 2% slabinfo.blkdev_ioc.active_objs
700.00 ± 8% +56.5% 1095 ± 2% slabinfo.blkdev_ioc.num_objs
7592132 +33.4% 10126644 ± 2% slabinfo.buffer_head.active_objs
196264 +32.6% 260282 ± 2% slabinfo.buffer_head.active_slabs
7654320 +32.6% 10151024 ± 2% slabinfo.buffer_head.num_objs
196264 +32.6% 260282 ± 2% slabinfo.buffer_head.num_slabs
2158 ± 12% -14.3% 1849 ± 14% slabinfo.eventpoll_pwq.active_objs
2158 ± 12% -14.3% 1849 ± 14% slabinfo.eventpoll_pwq.num_objs
1211 ± 9% +20.1% 1454 ± 5% slabinfo.ext4_extent_status.active_objs
1211 ± 9% +20.1% 1454 ± 5% slabinfo.ext4_extent_status.num_objs
884.75 ± 8% +199.0% 2645 ± 17% slabinfo.ext4_io_end.active_objs
884.75 ± 8% +199.0% 2645 ± 17% slabinfo.ext4_io_end.num_objs
2314 ± 2% +14.8% 2655 ± 3% slabinfo.pool_workqueue.active_objs
2314 ± 2% +14.8% 2655 ± 2% slabinfo.pool_workqueue.num_objs
136721 +29.6% 177228 slabinfo.radix_tree_node.active_objs
2502 +27.6% 3193 slabinfo.radix_tree_node.active_slabs
140165 +27.6% 178868 slabinfo.radix_tree_node.num_objs
2502 +27.6% 3193 slabinfo.radix_tree_node.num_slabs
1024 +51.6% 1552 slabinfo.scsi_sense_cache.active_objs
1024 +51.6% 1552 slabinfo.scsi_sense_cache.num_objs
1761 ± 3% +29.5% 2280 ± 5% slabinfo.skbuff_fclone_cache.active_objs
1761 ± 3% +29.5% 2280 ± 5% slabinfo.skbuff_fclone_cache.num_objs
12421560 +6.9% 13283012 proc-vmstat.nr_dirtied
2318632 +99.7% 4631102 proc-vmstat.nr_dirty
1559352 +99.2% 3105629 proc-vmstat.nr_dirty_background_threshold
3122517 +148.7% 7765972 proc-vmstat.nr_dirty_threshold
7951217 +32.1% 10502150 ± 2% proc-vmstat.nr_file_pages
8115739 -32.3% 5494098 ± 4% proc-vmstat.nr_free_pages
90484 +15.6% 104626 proc-vmstat.nr_inactive_anon
7586540 +33.4% 10123741 ± 2% proc-vmstat.nr_inactive_file
11900 +2.3% 12177 proc-vmstat.nr_kernel_stack
93248 +15.3% 107512 proc-vmstat.nr_mapped
2085 +6.9% 2229 proc-vmstat.nr_page_table_pages
91851 +15.5% 106068 proc-vmstat.nr_shmem
229797 +30.4% 299628 ± 2% proc-vmstat.nr_slab_reclaimable
65830 +175.3% 181251 proc-vmstat.nr_writeback
12421560 +6.9% 13282996 proc-vmstat.nr_written
90484 +15.6% 104626 proc-vmstat.nr_zone_inactive_anon
7586540 +33.4% 10123741 ± 2% proc-vmstat.nr_zone_inactive_file
2385078 +101.8% 4812521 proc-vmstat.nr_zone_write_pending
226534 ± 44% -85.3% 33344 ± 61% proc-vmstat.numa_foreign
2658 ± 3% -32.5% 1794 ± 12% proc-vmstat.numa_hint_faults
1355 ± 4% -60.2% 539.25 ± 26% proc-vmstat.numa_hint_faults_local
13464638 +8.4% 14600391 proc-vmstat.numa_hit
13448439 +8.4% 14584391 proc-vmstat.numa_local
226534 ± 44% -85.3% 33344 ± 61% proc-vmstat.numa_miss
242733 ± 41% -79.7% 49344 ± 41% proc-vmstat.numa_other
1685 ± 20% -25.6% 1253 ± 7% proc-vmstat.numa_pages_migrated
4928 ± 57% -47.5% 2587 ± 7% proc-vmstat.numa_pte_updates
13778418 +6.9% 14722437 proc-vmstat.pgalloc_normal
1091426 +7.7% 1175859 proc-vmstat.pgfault
13552030 ± 2% +7.4% 14558129 proc-vmstat.pgfree
1685 ± 20% -25.6% 1253 ± 7% proc-vmstat.pgmigrate_success
49829011 +6.9% 53275846 proc-vmstat.pgpgout
204046 ± 14% +167.4% 545721 ± 7% proc-vmstat.pgrotated
34.16 ± 3% -30.4% 23.79 ± 16% perf-stat.i.MPKI
2.032e+08 +7.9% 2.193e+08 perf-stat.i.branch-instructions
4.66 ± 2% -1.0 3.67 ± 3% perf-stat.i.branch-miss-rate%
7209528 -22.0% 5624742 ± 2% perf-stat.i.branch-misses
5217888 -14.9% 4442724 ± 3% perf-stat.i.cache-misses
28886731 ± 2% -29.2% 20439061 ± 14% perf-stat.i.cache-references
3402 ± 4% -18.9% 2761 ± 5% perf-stat.i.context-switches
4.76 ± 2% -19.1% 3.85 ± 9% perf-stat.i.cpi
8.66 ± 7% -24.5% 6.54 ± 3% perf-stat.i.cpu-migrations
918.81 ± 2% +146.1% 2261 ± 7% perf-stat.i.cycles-between-cache-misses
0.90 ± 3% -0.3 0.63 ± 6% perf-stat.i.dTLB-load-miss-rate%
2588584 ± 3% -12.5% 2266173 ± 6% perf-stat.i.dTLB-load-misses
0.15 ± 10% -0.0 0.12 ± 5% perf-stat.i.dTLB-store-miss-rate%
454795 ± 6% -18.0% 373105 ± 2% perf-stat.i.dTLB-store-misses
2.861e+08 ± 3% -17.4% 2.364e+08 ± 7% perf-stat.i.dTLB-stores
9.973e+08 +6.3% 1.06e+09 perf-stat.i.instructions
1073 ± 3% -9.0% 977.08 ± 2% perf-stat.i.instructions-per-iTLB-miss
0.24 +31.4% 0.32 ± 13% perf-stat.i.ipc
310076 +8.4% 335970 ± 5% perf-stat.i.node-loads
13.69 ± 7% +5.3 18.96 ± 10% perf-stat.i.node-store-miss-rate%
287029 ± 5% +23.8% 355408 ± 5% perf-stat.i.node-store-misses
28.96 ± 2% -33.4% 19.28 ± 14% perf-stat.overall.MPKI
3.55 -1.0 2.56 ± 3% perf-stat.overall.branch-miss-rate%
4.22 ± 2% -11.5% 3.74 ± 7% perf-stat.overall.cpi
0.90 ± 3% -0.1 0.77 ± 6% perf-stat.overall.dTLB-load-miss-rate%
11.82 ± 4% +2.7 14.51 ± 4% perf-stat.overall.node-store-miss-rate%
30943 +8.0% 33430 perf-stat.overall.path-length
2.027e+08 +8.0% 2.189e+08 perf-stat.ps.branch-instructions
7191170 -22.0% 5612177 ± 2% perf-stat.ps.branch-misses
5204770 -14.8% 4433929 ± 3% perf-stat.ps.cache-misses
28812682 ± 2% -29.2% 20395260 ± 14% perf-stat.ps.cache-references
3393 ± 4% -18.8% 2754 ± 5% perf-stat.ps.context-switches
8.64 ± 7% -24.5% 6.52 ± 3% perf-stat.ps.cpu-migrations
2582032 ± 3% -12.4% 2261755 ± 6% perf-stat.ps.dTLB-load-misses
453645 ± 6% -17.9% 372354 ± 2% perf-stat.ps.dTLB-store-misses
2.854e+08 ± 3% -17.3% 2.359e+08 ± 7% perf-stat.ps.dTLB-stores
9.948e+08 +6.3% 1.058e+09 perf-stat.ps.instructions
309286 +8.4% 335263 ± 5% perf-stat.ps.node-loads
286345 ± 5% +23.9% 354808 ± 5% perf-stat.ps.node-store-misses
3.843e+11 +15.5% 4.44e+11 perf-stat.total.instructions
92834 +44.6% 134241 ± 10% softirqs.CPU0.RCU
96210 +41.3% 135902 ± 9% softirqs.CPU1.RCU
102417 ± 5% +33.3% 136522 ± 2% softirqs.CPU10.RCU
102855 ± 2% +31.0% 134702 ± 8% softirqs.CPU11.RCU
90216 +38.4% 124897 ± 8% softirqs.CPU12.RCU
48053 ± 3% +8.4% 52087 softirqs.CPU12.SCHED
93157 +35.3% 126033 ± 6% softirqs.CPU13.RCU
91743 +33.9% 122809 ± 4% softirqs.CPU14.RCU
90784 +35.5% 122992 ± 6% softirqs.CPU15.RCU
93452 ± 2% +38.8% 129716 ± 5% softirqs.CPU16.RCU
95886 ± 2% +39.3% 133576 ± 6% softirqs.CPU17.RCU
99936 ± 2% +30.6% 130521 ± 5% softirqs.CPU18.RCU
102974 +30.2% 134121 ± 3% softirqs.CPU19.RCU
95928 +40.8% 135098 ± 7% softirqs.CPU2.RCU
100739 ± 2% +25.3% 126223 ± 4% softirqs.CPU20.RCU
107688 +22.8% 132213 ± 4% softirqs.CPU21.RCU
105205 +24.1% 130520 ± 3% softirqs.CPU22.RCU
102666 ± 2% +29.7% 133203 ± 4% softirqs.CPU23.RCU
91275 +37.9% 125857 ± 8% softirqs.CPU24.RCU
96242 +28.1% 123267 ± 9% softirqs.CPU25.RCU
96994 +38.5% 134298 ± 8% softirqs.CPU26.RCU
96537 ± 4% +40.6% 135744 ± 7% softirqs.CPU27.RCU
97235 ± 2% +31.9% 128261 ± 5% softirqs.CPU28.RCU
99328 +32.5% 131564 ± 5% softirqs.CPU29.RCU
101675 ± 3% +34.7% 136996 ± 6% softirqs.CPU3.RCU
97250 +35.0% 131258 ± 5% softirqs.CPU30.RCU
95333 +39.4% 132933 ± 6% softirqs.CPU31.RCU
92225 ± 2% +36.6% 125995 ± 6% softirqs.CPU32.RCU
94777 +32.4% 125465 ± 7% softirqs.CPU33.RCU
98071 +31.1% 128524 ± 4% softirqs.CPU34.RCU
96363 ± 2% +34.5% 129585 ± 4% softirqs.CPU35.RCU
91442 +48.4% 135696 ± 9% softirqs.CPU36.RCU
47007 ± 3% +7.7% 50640 ± 2% softirqs.CPU36.SCHED
95833 +43.4% 137424 ± 8% softirqs.CPU37.RCU
96444 +40.0% 135045 ± 7% softirqs.CPU38.RCU
97483 +38.5% 134993 ± 7% softirqs.CPU39.RCU
96291 +31.0% 126101 ± 7% softirqs.CPU4.RCU
99450 +38.4% 137663 ± 6% softirqs.CPU40.RCU
98131 ± 2% +41.9% 139295 ± 8% softirqs.CPU41.RCU
102150 +35.6% 138513 ± 4% softirqs.CPU42.RCU
104087 +36.5% 142062 ± 4% softirqs.CPU43.RCU
100013 +31.4% 131430 ± 4% softirqs.CPU44.RCU
104323 ± 3% +30.3% 135896 ± 5% softirqs.CPU45.RCU
105295 +26.6% 133313 ± 4% softirqs.CPU46.RCU
100966 ± 2% +28.3% 129533 ± 10% softirqs.CPU47.RCU
99327 +30.3% 129414 ± 5% softirqs.CPU5.RCU
99025 +35.7% 134399 ± 4% softirqs.CPU6.RCU
99818 +35.1% 134895 ± 6% softirqs.CPU7.RCU
102828 +33.0% 136710 ± 5% softirqs.CPU8.RCU
103045 +36.6% 140801 ± 4% softirqs.CPU9.RCU
4713974 +34.6% 6346250 ± 5% softirqs.RCU
1898 ± 4% +110.6% 3999 ± 3% sched_debug.cfs_rq:/.exec_clock.avg
2456 ± 5% +96.3% 4821 ± 3% sched_debug.cfs_rq:/.exec_clock.max
1618 ± 5% +124.6% 3633 ± 4% sched_debug.cfs_rq:/.exec_clock.min
179.43 ± 8% +48.0% 265.47 ± 17% sched_debug.cfs_rq:/.exec_clock.stddev
11645 ± 6% +18.8% 13834 ± 4% sched_debug.cfs_rq:/.min_vruntime.avg
0.19 ± 43% +93.5% 0.37 ± 18% sched_debug.cfs_rq:/.nr_spread_over.avg
1.29 ± 42% +75.0% 2.25 ± 8% sched_debug.cfs_rq:/.nr_spread_over.max
0.30 ± 43% +62.2% 0.48 ± 10% sched_debug.cfs_rq:/.nr_spread_over.stddev
15.03 ± 29% -50.5% 7.44 ± 43% sched_debug.cfs_rq:/.removed.load_avg.avg
279.77 ± 40% -52.6% 132.57 ± 5% sched_debug.cfs_rq:/.removed.load_avg.max
58.84 ± 19% -49.7% 29.58 ± 17% sched_debug.cfs_rq:/.removed.load_avg.stddev
694.48 ± 29% -50.7% 342.42 ± 43% sched_debug.cfs_rq:/.removed.runnable_sum.avg
12952 ± 40% -52.8% 6108 ± 5% sched_debug.cfs_rq:/.removed.runnable_sum.max
2719 ± 19% -49.9% 1362 ± 17% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
5.45 ± 21% -55.3% 2.43 ± 57% sched_debug.cfs_rq:/.removed.util_avg.avg
124.68 ± 34% -59.9% 50.03 ± 18% sched_debug.cfs_rq:/.removed.util_avg.max
22.75 ± 22% -57.8% 9.60 ± 30% sched_debug.cfs_rq:/.removed.util_avg.stddev
111.20 ± 7% -17.7% 91.50 ± 4% sched_debug.cfs_rq:/.util_avg.avg
910639 -49.4% 460497 sched_debug.cpu.avg_idle.avg
1000000 -50.0% 500000 sched_debug.cpu.avg_idle.max
503719 ± 13% -68.0% 160964 ± 19% sched_debug.cpu.avg_idle.min
102546 ± 11% -35.8% 65864 ± 7% sched_debug.cpu.avg_idle.stddev
198884 ± 7% +22.4% 243507 ± 5% sched_debug.cpu.clock.avg
198886 ± 7% +22.4% 243509 ± 5% sched_debug.cpu.clock.max
198881 ± 7% +22.4% 243505 ± 5% sched_debug.cpu.clock.min
1.36 ± 4% -12.0% 1.20 ± 3% sched_debug.cpu.clock.stddev
198884 ± 7% +22.4% 243507 ± 5% sched_debug.cpu.clock_task.avg
198886 ± 7% +22.4% 243509 ± 5% sched_debug.cpu.clock_task.max
198881 ± 7% +22.4% 243505 ± 5% sched_debug.cpu.clock_task.min
1.36 ± 4% -12.0% 1.20 ± 3% sched_debug.cpu.clock_task.stddev
5811 ± 5% +19.5% 6946 ± 4% sched_debug.cpu.curr->pid.max
944.59 ± 3% +14.6% 1082 ± 4% sched_debug.cpu.curr->pid.stddev
500000 -50.0% 250000 sched_debug.cpu.max_idle_balance_cost.avg
500000 -50.0% 250000 sched_debug.cpu.max_idle_balance_cost.max
500000 -50.0% 250000 sched_debug.cpu.max_idle_balance_cost.min
0.00 ± 6% -14.1% 0.00 ± 10% sched_debug.cpu.next_balance.stddev
0.09 ± 4% -13.7% 0.08 ± 3% sched_debug.cpu.nr_running.avg
0.26 ± 3% -8.6% 0.24 sched_debug.cpu.nr_running.stddev
12984 ± 6% +8.4% 14071 ± 5% sched_debug.cpu.nr_switches.avg
11083 ± 7% +9.6% 12148 ± 5% sched_debug.cpu.sched_count.avg
5450 ± 9% -9.3% 4943 ± 2% sched_debug.cpu.sched_count.min
5478 ± 17% +25.6% 6880 ± 17% sched_debug.cpu.sched_count.stddev
2681 ± 9% -11.1% 2384 sched_debug.cpu.sched_goidle.min
2647 ± 18% +28.0% 3388 ± 16% sched_debug.cpu.sched_goidle.stddev
5499 ± 7% +10.1% 6054 ± 5% sched_debug.cpu.ttwu_count.avg
2298 ± 10% -18.6% 1869 ± 9% sched_debug.cpu.ttwu_local.min
198882 ± 7% +22.4% 243505 ± 5% sched_debug.cpu_clk
196302 ± 7% +22.7% 240926 ± 5% sched_debug.ktime
199298 ± 7% +22.4% 243922 ± 5% sched_debug.sched_clk
24.00 -33.3% 16.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 -46.7% 1.60 sched_debug.sysctl_sched.sysctl_sched_min_granularity
4.00 -50.0% 2.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
7.26 ± 6% -1.8 5.42 ± 19% perf-profile.calltrace.cycles-pp.io_submit
6.61 ± 5% -1.7 4.92 ± 20% perf-profile.calltrace.cycles-pp.__x64_sys_io_submit.do_syscall_64.entry_SYSCALL_64_after_hwframe.io_submit
6.48 ± 5% -1.7 4.81 ± 21% perf-profile.calltrace.cycles-pp.io_submit_one.__x64_sys_io_submit.do_syscall_64.entry_SYSCALL_64_after_hwframe.io_submit
1.21 ± 12% -0.4 0.80 ± 7% perf-profile.calltrace.cycles-pp.iov_iter_copy_from_user_atomic.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.aio_write
1.19 ± 13% -0.4 0.79 ± 6% perf-profile.calltrace.cycles-pp.copyin.iov_iter_copy_from_user_atomic.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
1.17 ± 13% -0.4 0.78 ± 6% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.iov_iter_copy_from_user_atomic.generic_perform_write.__generic_file_write_iter
0.64 ± 5% -0.4 0.27 ±100% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
1.18 ± 7% -0.3 0.90 ± 15% perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
1.16 ± 6% -0.3 0.89 ± 15% perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
0.97 ± 9% -0.1 0.83 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.62 ± 12% +0.2 0.83 ± 8% perf-profile.calltrace.cycles-pp.load_balance.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt
0.90 ± 4% +0.3 1.17 ± 9% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.66 ± 6% +0.4 1.05 ± 14% perf-profile.calltrace.cycles-pp.rcu_core.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
2.60 ± 3% +0.5 3.07 ± 11% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
1.63 ± 6% +0.5 2.10 ± 11% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.16 ± 2% +0.5 1.64 ± 6% perf-profile.calltrace.cycles-pp.memcpy_erms.drm_fb_helper_dirty_work.process_one_work.worker_thread.kthread
1.23 +0.5 1.75 ± 4% perf-profile.calltrace.cycles-pp.drm_fb_helper_dirty_work.process_one_work.worker_thread.kthread.ret_from_fork
1.66 ± 6% +0.6 2.25 ± 8% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
1.73 ± 6% +0.6 2.33 ± 7% perf-profile.calltrace.cycles-pp.ret_from_fork
1.73 ± 6% +0.6 2.33 ± 7% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.64 ± 6% +0.6 2.23 ± 7% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
3.33 ± 3% +0.6 3.94 ± 11% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.26 ±100% +0.7 0.93 ± 25% perf-profile.calltrace.cycles-pp.ktime_get.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.14 ±173% +0.7 0.89 ± 26% perf-profile.calltrace.cycles-pp.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
6.29 ± 3% +0.8 7.06 ± 7% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
3.51 ± 2% +1.1 4.62 ± 8% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
18.08 ± 3% +1.6 19.64 ± 2% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
15.80 ± 2% +1.6 17.41 ± 2% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
4.50 +1.6 6.11 ± 11% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
7.30 ± 6% -1.9 5.44 ± 19% perf-profile.children.cycles-pp.io_submit
8.53 ± 6% -1.8 6.72 ± 16% perf-profile.children.cycles-pp.do_syscall_64
8.55 ± 6% -1.8 6.76 ± 16% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
6.61 ± 5% -1.7 4.93 ± 20% perf-profile.children.cycles-pp.__x64_sys_io_submit
6.49 ± 5% -1.7 4.81 ± 21% perf-profile.children.cycles-pp.io_submit_one
1.21 ± 12% -0.4 0.81 ± 7% perf-profile.children.cycles-pp.iov_iter_copy_from_user_atomic
1.19 ± 13% -0.4 0.79 ± 6% perf-profile.children.cycles-pp.copyin
1.20 ± 13% -0.4 0.80 ± 7% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
1.18 ± 7% -0.3 0.90 ± 15% perf-profile.children.cycles-pp.grab_cache_page_write_begin
1.17 ± 7% -0.3 0.89 ± 14% perf-profile.children.cycles-pp.pagecache_get_page
0.64 ± 5% -0.2 0.47 ± 16% perf-profile.children.cycles-pp.add_to_page_cache_lru
0.47 ± 11% -0.2 0.31 ± 10% perf-profile.children.cycles-pp.ext4_block_write_begin
0.52 ± 10% -0.2 0.36 ± 12% perf-profile.children.cycles-pp.generic_write_end
0.46 ± 10% -0.1 0.33 ± 11% perf-profile.children.cycles-pp.block_write_end
0.45 ± 12% -0.1 0.32 ± 10% perf-profile.children.cycles-pp.__block_commit_write
0.31 ± 15% -0.1 0.19 ± 20% perf-profile.children.cycles-pp.mark_buffer_dirty
0.29 ± 4% -0.1 0.17 ± 6% perf-profile.children.cycles-pp.kmem_cache_alloc
0.37 ± 9% -0.1 0.26 ± 12% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.39 ± 8% -0.1 0.29 ± 25% perf-profile.children.cycles-pp.call_timer_fn
0.40 ± 8% -0.1 0.30 ± 8% perf-profile.children.cycles-pp.__add_to_page_cache_locked
0.26 ± 17% -0.1 0.17 ± 15% perf-profile.children.cycles-pp.create_empty_buffers
0.21 ± 13% -0.1 0.13 ± 19% perf-profile.children.cycles-pp.xas_load
0.35 ± 3% -0.1 0.27 ± 13% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.21 ± 21% -0.1 0.12 ± 14% perf-profile.children.cycles-pp.alloc_page_buffers
0.23 ± 20% -0.1 0.15 ± 21% perf-profile.children.cycles-pp.__set_page_dirty
0.42 ± 12% -0.1 0.35 ± 12% perf-profile.children.cycles-pp.do_io_getevents
0.18 ± 10% -0.1 0.11 ± 18% perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited
0.26 ± 10% -0.1 0.20 ± 13% perf-profile.children.cycles-pp.__schedule
0.16 ± 19% -0.1 0.10 ± 13% perf-profile.children.cycles-pp.alloc_buffer_head
0.14 ± 19% -0.1 0.08 ± 27% perf-profile.children.cycles-pp.schedule_timeout
0.19 ± 11% -0.1 0.14 ± 15% perf-profile.children.cycles-pp.rb_insert_color
0.15 ± 7% -0.1 0.10 ± 21% perf-profile.children.cycles-pp.fio_gettime
0.17 ± 6% -0.1 0.12 ± 8% perf-profile.children.cycles-pp.ext4_da_get_block_prep
0.13 ± 8% -0.1 0.08 ± 23% perf-profile.children.cycles-pp.find_get_entry
0.19 ± 15% -0.0 0.14 ± 5% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.18 ± 13% -0.0 0.14 ± 6% perf-profile.children.cycles-pp.arch_stack_walk
0.15 ± 10% -0.0 0.10 ± 17% perf-profile.children.cycles-pp.td_io_queue
0.08 ± 13% -0.0 0.05 ± 60% perf-profile.children.cycles-pp.ext4_es_lookup_extent
0.10 ± 14% -0.0 0.07 ± 10% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.09 ± 16% +0.0 0.12 ± 14% perf-profile.children.cycles-pp.cpumask_next_and
0.11 ± 10% +0.0 0.15 ± 10% perf-profile.children.cycles-pp.update_rt_rq_load_avg
0.06 ± 60% +0.0 0.10 ± 15% perf-profile.children.cycles-pp.clear_page_dirty_for_io
0.12 ± 16% +0.0 0.17 ± 16% perf-profile.children.cycles-pp.perf_event_task_tick
0.03 ±100% +0.1 0.08 ± 10% perf-profile.children.cycles-pp.cpuidle_select
0.16 ± 6% +0.1 0.21 ± 9% perf-profile.children.cycles-pp.tick_nohz_tick_stopped
0.16 ± 7% +0.1 0.24 ± 12% perf-profile.children.cycles-pp.rcu_dynticks_eqs_exit
0.79 ± 2% +0.1 0.88 ± 5% perf-profile.children.cycles-pp.find_next_bit
0.29 ± 10% +0.1 0.39 ± 12% perf-profile.children.cycles-pp.rcu_idle_exit
0.12 ± 8% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.rcu_eqs_exit
0.55 ± 5% +0.1 0.65 ± 13% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.78 ± 9% +0.1 0.89 ± 6% perf-profile.children.cycles-pp.io_serial_in
0.36 ± 18% +0.1 0.48 ± 9% perf-profile.children.cycles-pp.update_sd_lb_stats
0.41 ± 18% +0.2 0.56 ± 10% perf-profile.children.cycles-pp.find_busiest_group
0.49 ± 2% +0.2 0.65 ± 24% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
1.95 ± 2% +0.2 2.14 ± 10% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.65 ± 11% +0.2 0.87 ± 8% perf-profile.children.cycles-pp.load_balance
0.33 ± 9% +0.2 0.57 ± 12% perf-profile.children.cycles-pp.note_gp_changes
0.93 ± 4% +0.3 1.22 ± 10% perf-profile.children.cycles-pp.rebalance_domains
0.69 ± 6% +0.4 1.09 ± 14% perf-profile.children.cycles-pp.rcu_core
0.49 ± 18% +0.4 0.93 ± 26% perf-profile.children.cycles-pp.tick_nohz_irq_exit
1.67 ± 7% +0.5 2.16 ± 12% perf-profile.children.cycles-pp.clockevents_program_event
1.22 +0.5 1.74 ± 5% perf-profile.children.cycles-pp.memcpy_erms
1.23 +0.5 1.75 ± 4% perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
2.69 ± 2% +0.5 3.21 ± 10% perf-profile.children.cycles-pp.tick_nohz_next_event
1.74 ± 6% +0.6 2.33 ± 7% perf-profile.children.cycles-pp.ret_from_fork
1.66 ± 6% +0.6 2.25 ± 8% perf-profile.children.cycles-pp.worker_thread
1.73 ± 6% +0.6 2.33 ± 7% perf-profile.children.cycles-pp.kthread
1.64 ± 6% +0.6 2.23 ± 7% perf-profile.children.cycles-pp.process_one_work
1.91 ± 5% +0.7 2.59 ± 12% perf-profile.children.cycles-pp.ktime_get
3.40 ± 2% +0.7 4.08 ± 11% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
6.46 ± 2% +0.8 7.30 ± 6% perf-profile.children.cycles-pp.menu_select
3.82 ± 4% +1.3 5.12 ± 8% perf-profile.children.cycles-pp.__softirqentry_text_start
16.23 ± 2% +1.7 17.91 ± 2% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
17.52 ± 2% +1.7 19.26 ± 2% perf-profile.children.cycles-pp.apic_timer_interrupt
4.85 ± 2% +1.8 6.67 ± 10% perf-profile.children.cycles-pp.irq_exit
1.19 ± 14% -0.4 0.80 ± 7% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.37 ± 9% -0.1 0.26 ± 13% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.14 ± 13% -0.1 0.07 ± 61% perf-profile.self.cycles-pp.kmem_cache_alloc
0.33 ± 4% -0.1 0.26 ± 15% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.25 ± 5% -0.1 0.17 ± 16% perf-profile.self.cycles-pp.smp_apic_timer_interrupt
0.28 ± 8% -0.1 0.21 ± 11% perf-profile.self.cycles-pp.tick_sched_timer
0.16 ± 17% -0.1 0.09 ± 20% perf-profile.self.cycles-pp.xas_load
0.14 ± 8% -0.0 0.10 ± 17% perf-profile.self.cycles-pp.fio_gettime
0.13 ± 14% -0.0 0.09 ± 13% perf-profile.self.cycles-pp.io_submit_one
0.17 ± 10% -0.0 0.12 ± 16% perf-profile.self.cycles-pp.rb_insert_color
0.09 ± 8% +0.0 0.12 ± 6% perf-profile.self.cycles-pp.update_rt_rq_load_avg
0.34 +0.0 0.38 perf-profile.self.cycles-pp.__hrtimer_run_queues
0.03 ±100% +0.0 0.07 ± 12% perf-profile.self.cycles-pp.cpuidle_select
0.12 ± 17% +0.0 0.17 ± 16% perf-profile.self.cycles-pp.perf_event_task_tick
0.10 ± 17% +0.1 0.16 ± 13% perf-profile.self.cycles-pp.tick_nohz_tick_stopped
0.12 ± 5% +0.1 0.18 ± 11% perf-profile.self.cycles-pp.rebalance_domains
0.01 ±173% +0.1 0.08 ± 31% perf-profile.self.cycles-pp.rcu_eqs_exit
0.16 ± 6% +0.1 0.24 ± 12% perf-profile.self.cycles-pp.rcu_dynticks_eqs_exit
0.70 ± 5% +0.1 0.78 ± 6% perf-profile.self.cycles-pp.find_next_bit
0.22 ± 13% +0.1 0.32 ± 13% perf-profile.self.cycles-pp.update_sd_lb_stats
0.78 ± 9% +0.1 0.89 ± 6% perf-profile.self.cycles-pp.io_serial_in
0.43 ± 7% +0.1 0.56 ± 13% perf-profile.self.cycles-pp.tick_nohz_next_event
0.47 ± 11% +0.1 0.62 ± 2% perf-profile.self.cycles-pp.do_idle
0.18 ± 24% +0.2 0.33 ± 9% perf-profile.self.cycles-pp.note_gp_changes
1.13 ± 7% +0.5 1.63 ± 5% perf-profile.self.cycles-pp.memcpy_erms
***************************************************************************************************
lkp-hsw-4ex1: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_pmem/nr_task/priority/rootfs/tbox_group/test/testcase/thp_defrag/thp_enabled/ucode:
gcc-7/performance/x86_64-rhel-7.6/4/64/1/debian-x86_64-2019-11-14.cgz/lkp-hsw-4ex1/swap-w-seq/vm-scalability/never/never/0x16
commit:
cba81e70bf (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl3cSywACgkQpekojE+k")
3e05ad861b (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl3cSywACgkQpekojE+k")
cba81e70bf716d85 3e05ad861b9b2b61a1cbfd0d989
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
4:4 -100% :4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
3:4 -75% :4 dmesg.WARNING:at_ip__slab_free/0x
%stddev %change %stddev
\ | \
17.61 ± 16% -76.0% 4.22 ± 2% vm-scalability.free_time
62939 -93.8% 3918 ± 2% vm-scalability.median
12.50 ± 10% -69.4% 3.82 ± 13% vm-scalability.stddev
4156059 ± 2% -93.9% 255064 vm-scalability.throughput
150.71 ± 2% +103.9% 307.32 vm-scalability.time.elapsed_time
150.71 ± 2% +103.9% 307.32 vm-scalability.time.elapsed_time.max
1794938 ± 9% -82.1% 320896 ± 11% vm-scalability.time.involuntary_context_switches
1450485 ± 17% -66.4% 487715 ± 8% vm-scalability.time.maximum_resident_set_size
98488869 -93.0% 6896039 ± 2% vm-scalability.time.minor_page_faults
4923 ± 2% -44.5% 2732 ± 9% vm-scalability.time.percent_of_cpu_this_job_got
6767 ± 4% +23.5% 8355 ± 9% vm-scalability.time.system_time
655.28 -93.1% 44.97 vm-scalability.time.user_time
66324 ± 11% -70.9% 19284 ± 10% vm-scalability.time.voluntary_context_switches
4.432e+08 -82.7% 76779117 vm-scalability.workload
3.19 ± 3% -81.2% 0.60 ± 16% iostat.cpu.user
7.042e+09 ± 51% +203.6% 2.138e+10 ± 36% cpuidle.C3.time
16070353 ± 54% +204.0% 48850208 ± 25% cpuidle.C3.usage
0.00 ±142% +0.0 0.01 ± 71% mpstat.cpu.all.iowait%
0.51 ± 11% -0.5 0.00 ± 26% mpstat.cpu.all.soft%
36.56 +8.7 45.28 ± 10% mpstat.cpu.all.sys%
3.24 ± 3% -2.7 0.50 ± 26% mpstat.cpu.all.usr%
4.909e+09 ± 2% -63.3% 1.803e+09 ± 2% perf-node.node-load-misses
1.304e+09 ± 3% -91.2% 1.145e+08 ± 12% perf-node.node-loads
20.50 ± 4% -73.2% 5.50 ± 15% perf-node.node-local-load-ratio
49.75 -60.3% 19.75 ± 2% perf-node.node-local-store-ratio
1.767e+09 ± 2% -59.6% 7.135e+08 perf-node.node-store-misses
1.775e+09 ± 2% -89.8% 1.813e+08 ± 2% perf-node.node-stores
104.25 ± 14% -40.3% 62.25 ± 20% vmstat.memory.buff
1312560 -11.3% 1164717 vmstat.memory.cache
1.548e+08 ± 2% -95.1% 7649606 ± 21% vmstat.memory.swpd
912.25 ± 2% -91.4% 78.75 ± 40% vmstat.swap.si
2399897 -98.2% 42592 ± 31% vmstat.swap.so
28854 ± 7% -59.3% 11746 ± 33% vmstat.system.cs
511.50 -26.1% 378.25 ± 7% turbostat.Avg_MHz
42.73 -11.1 31.61 ± 7% turbostat.Busy%
16066383 ± 54% +204.2% 48875909 ± 25% turbostat.C3
1.991e+08 ± 2% -53.9% 91725606 turbostat.IRQ
0.36 ± 24% -51.7% 0.17 ± 16% turbostat.Pkg%pc2
189.06 -8.1% 173.81 turbostat.PkgWatt
83.86 -30.9% 57.93 turbostat.RAMWatt
17963146 -38.3% 11083518 ± 3% meminfo.Active
17963065 -38.3% 11083226 ± 3% meminfo.Active(anon)
588.00 ± 28% +392.4% 2895 ± 31% meminfo.AnonHugePages
20605635 -40.3% 12298605 ± 3% meminfo.AnonPages
2650666 -24.4% 2003821 ± 5% meminfo.Inactive
2650493 -24.4% 2003364 ± 5% meminfo.Inactive(anon)
260010 ± 4% -56.8% 112427 ± 4% meminfo.KReclaimable
22096 +16.9% 25838 meminfo.Mapped
1283021 ± 6% +203.6% 3895652 ± 14% meminfo.MemAvailable
1396082 ± 5% +192.4% 4081951 ± 14% meminfo.MemFree
31384484 -8.6% 28698616 ± 2% meminfo.Memused
386048 ± 2% -87.5% 48385 ± 2% meminfo.PageTables
260010 ± 4% -56.8% 112427 ± 4% meminfo.SReclaimable
226846 +71.7% 389488 ± 2% meminfo.SUnreclaim
3399 ± 11% +212.9% 10639 ± 42% meminfo.Shmem
240.25 ± 9% +3e+05% 721703 ± 8% meminfo.SwapCached
3.485e+08 +42.0% 4.948e+08 meminfo.SwapFree
222522 ± 2% +382.2% 1072922 ± 20% meminfo.max_used_kB
88227 +1491.9% 1404480 ± 4% slabinfo.Acpi-Parse.active_objs
1210 +1601.3% 20598 ± 3% slabinfo.Acpi-Parse.active_slabs
88412 +1600.8% 1503700 ± 3% slabinfo.Acpi-Parse.num_objs
1210 +1601.3% 20598 ± 3% slabinfo.Acpi-Parse.num_slabs
18609 ± 3% +9.8% 20440 ± 4% slabinfo.kmalloc-512.num_objs
19763 ± 6% -37.3% 12398 ± 3% slabinfo.proc_inode_cache.active_objs
574.25 ± 13% -50.9% 282.00 ± 3% slabinfo.proc_inode_cache.active_slabs
20306 ± 5% -37.9% 12617 ± 2% slabinfo.proc_inode_cache.num_objs
574.25 ± 13% -50.9% 282.00 ± 3% slabinfo.proc_inode_cache.num_slabs
314685 ± 6% -73.2% 84365 ± 5% slabinfo.radix_tree_node.active_objs
13466 ± 14% -88.5% 1550 ± 6% slabinfo.radix_tree_node.active_slabs
315881 ± 6% -72.5% 86829 ± 6% slabinfo.radix_tree_node.num_objs
13466 ± 14% -88.5% 1550 ± 6% slabinfo.radix_tree_node.num_slabs
3888 +39.9% 5440 slabinfo.scsi_sense_cache.active_objs
3888 +39.9% 5440 slabinfo.scsi_sense_cache.num_objs
13264 +10013.3% 1341508 ± 5% slabinfo.vmap_area.active_objs
209.50 +10620.0% 22458 ± 3% slabinfo.vmap_area.active_slabs
13453 +10583.9% 1437385 ± 3% slabinfo.vmap_area.num_objs
209.50 +10620.0% 22458 ± 3% slabinfo.vmap_area.num_slabs
17577672 ± 14% -85.5% 2555856 ± 2% numa-numastat.node0.local_node
18174681 ± 25% -92.3% 1393014 ± 43% numa-numastat.node0.numa_foreign
17606911 ± 14% -85.4% 2573908 ± 2% numa-numastat.node0.numa_hit
4897187 ± 17% -97.1% 139796 ± 37% numa-numastat.node0.numa_miss
4926429 ± 17% -96.8% 157870 ± 40% numa-numastat.node0.other_node
16881406 ± 20% -87.7% 2076300 ± 17% numa-numastat.node1.local_node
8681777 ± 66% -89.9% 877168 ± 55% numa-numastat.node1.numa_foreign
16903128 ± 20% -87.6% 2092922 ± 17% numa-numastat.node1.numa_hit
12011542 ± 16% -93.9% 727801 ± 53% numa-numastat.node1.numa_miss
12033265 ± 16% -93.8% 744454 ± 50% numa-numastat.node1.other_node
16971463 ± 19% -88.4% 1969502 ± 25% numa-numastat.node2.local_node
5917967 ± 46% -89.2% 636924 ± 67% numa-numastat.node2.numa_foreign
16995319 ± 19% -88.3% 1996383 ± 25% numa-numastat.node2.numa_hit
10651608 ± 34% -91.6% 892057 ± 55% numa-numastat.node2.numa_miss
10675466 ± 34% -91.4% 918981 ± 54% numa-numastat.node2.other_node
18415256 ± 24% -91.7% 1532521 ± 22% numa-numastat.node3.local_node
4966169 ± 66% -96.5% 173950 ± 42% numa-numastat.node3.numa_foreign
18435690 ± 24% -91.5% 1559777 ± 21% numa-numastat.node3.numa_hit
10180257 ± 41% -87.0% 1321403 ± 22% numa-numastat.node3.numa_miss
10200691 ± 41% -86.8% 1348671 ± 22% numa-numastat.node3.other_node
90104 ± 6% -93.7% 5696 ± 48% syscalls.sys_close.max
1398 ± 2% -24.6% 1054 ± 2% syscalls.sys_close.med
50733844 ± 6% -4.4e+07 6692932 ± 34% syscalls.sys_close.noise.100%
70188309 ± 4% -5.4e+07 16449674 ± 10% syscalls.sys_close.noise.2%
68032137 ± 4% -5.5e+07 12974683 ± 11% syscalls.sys_close.noise.25%
70087310 ± 4% -5.4e+07 16286916 ± 10% syscalls.sys_close.noise.5%
63199296 ± 5% -5.3e+07 9760150 ± 20% syscalls.sys_close.noise.50%
56784706 ± 5% -4.9e+07 7965791 ± 28% syscalls.sys_close.noise.75%
192606 ± 43% -100.0% 0.00 syscalls.sys_mmap.max
9408 ± 2% -100.0% 0.00 syscalls.sys_mmap.med
5526 ± 2% -100.0% 0.00 syscalls.sys_mmap.min
2.557e+08 ± 8% -2.6e+08 0.00 syscalls.sys_mmap.noise.100%
3.674e+08 ± 6% -3.7e+08 0.00 syscalls.sys_mmap.noise.2%
3.481e+08 ± 6% -3.5e+08 0.00 syscalls.sys_mmap.noise.25%
3.665e+08 ± 6% -3.7e+08 0.00 syscalls.sys_mmap.noise.5%
3.179e+08 ± 7% -3.2e+08 0.00 syscalls.sys_mmap.noise.50%
2.879e+08 ± 7% -2.9e+08 0.00 syscalls.sys_mmap.noise.75%
2.494e+10 -100.0% 107756 ± 13% syscalls.sys_open.max
12591 +88.5% 23735 ± 17% syscalls.sys_open.med
7793 +10.5% 8608 syscalls.sys_open.min
3.789e+12 ± 8% -3.8e+12 3.455e+08 ± 28% syscalls.sys_open.noise.100%
3.789e+12 ± 8% -3.8e+12 5.935e+08 ± 20% syscalls.sys_open.noise.2%
3.789e+12 ± 8% -3.8e+12 5.224e+08 ± 23% syscalls.sys_open.noise.25%
3.789e+12 ± 8% -3.8e+12 5.907e+08 ± 20% syscalls.sys_open.noise.5%
3.789e+12 ± 8% -3.8e+12 4.383e+08 ± 27% syscalls.sys_open.noise.50%
3.789e+12 ± 8% -3.8e+12 3.752e+08 ± 28% syscalls.sys_open.noise.75%
5.619e+09 ± 25% -100.0% 618791 ± 46% syscalls.sys_read.max
11896 ± 7% -82.9% 2039 ± 5% syscalls.sys_read.med
1.374e+12 ± 6% -1.4e+12 2.054e+09 ± 6% syscalls.sys_read.noise.100%
1.374e+12 ± 6% -1.4e+12 2.064e+09 ± 6% syscalls.sys_read.noise.2%
1.374e+12 ± 6% -1.4e+12 2.063e+09 ± 6% syscalls.sys_read.noise.25%
1.374e+12 ± 6% -1.4e+12 2.064e+09 ± 6% syscalls.sys_read.noise.5%
1.374e+12 ± 6% -1.4e+12 2.062e+09 ± 6% syscalls.sys_read.noise.50%
1.374e+12 ± 6% -1.4e+12 2.06e+09 ± 6% syscalls.sys_read.noise.75%
1.289e+08 ± 51% -100.0% 0.00 syscalls.sys_write.max
5989 ± 41% -100.0% 0.00 syscalls.sys_write.med
2097 -100.0% 0.00 syscalls.sys_write.min
1.333e+10 ± 32% -1.3e+10 0.00 syscalls.sys_write.noise.100%
1.338e+10 ± 32% -1.3e+10 0.00 syscalls.sys_write.noise.2%
1.338e+10 ± 32% -1.3e+10 0.00 syscalls.sys_write.noise.25%
1.338e+10 ± 32% -1.3e+10 0.00 syscalls.sys_write.noise.5%
1.334e+10 ± 32% -1.3e+10 0.00 syscalls.sys_write.noise.50%
1.333e+10 ± 32% -1.3e+10 0.00 syscalls.sys_write.noise.75%
51.75 ± 2% -38.3 13.48 ±173% perf-profile.calltrace.cycles-pp.do_access
48.42 ± 2% -35.0 13.46 ±173% perf-profile.calltrace.cycles-pp.page_fault.do_access
47.80 ± 3% -34.3 13.46 ±173% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault.do_access
47.71 ± 3% -34.3 13.46 ±173% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault.do_access
47.30 ± 3% -33.8 13.46 ±173% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.do_access
44.88 ± 3% -31.0 13.87 ±173% perf-profile.calltrace.cycles-pp.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
41.74 ± 5% -27.9 13.84 ±173% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault
40.70 ± 5% -26.9 13.84 ±173% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault
37.04 ± 2% -23.1 13.90 ±173% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault
36.71 ± 3% -22.6 14.11 ±173% perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma
36.52 ± 3% -21.8 14.75 ±173% perf-profile.calltrace.cycles-pp.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask
34.80 ± 3% -20.1 14.75 ±173% perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
31.06 ± 4% -16.3 14.74 ±173% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages
28.98 ± 4% -14.2 14.74 ±173% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
9.80 ± 13% -9.8 0.00 perf-profile.calltrace.cycles-pp.add_to_swap.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
8.68 ± 7% -8.7 0.00 perf-profile.calltrace.cycles-pp.try_to_unmap_flush_dirty.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
8.66 ± 7% -8.7 0.00 perf-profile.calltrace.cycles-pp.arch_tlbbatch_flush.try_to_unmap_flush_dirty.shrink_page_list.shrink_inactive_list.shrink_node_memcg
8.56 ± 7% -8.6 0.00 perf-profile.calltrace.cycles-pp.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush_dirty.shrink_page_list.shrink_inactive_list
8.14 ± 8% -8.1 0.00 perf-profile.calltrace.cycles-pp.on_each_cpu_mask.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush_dirty.shrink_page_list
7.99 ± 8% -8.0 0.00 perf-profile.calltrace.cycles-pp.smp_call_function_single.on_each_cpu_mask.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush_dirty
8.87 ± 3% -7.8 1.02 ±173% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
8.87 ± 3% -7.8 1.03 ±173% perf-profile.calltrace.cycles-pp.ret_from_fork
7.32 ± 19% -7.3 0.00 perf-profile.calltrace.cycles-pp.add_to_swap_cache.add_to_swap.shrink_page_list.shrink_inactive_list.shrink_node_memcg
5.74 ± 10% -5.7 0.00 perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
52.40 ± 2% -38.9 13.48 ±173% perf-profile.children.cycles-pp.do_access
45.11 ± 3% -30.8 14.36 ±173% perf-profile.children.cycles-pp.alloc_pages_vma
45.97 ± 2% -30.1 15.84 ±173% perf-profile.children.cycles-pp.__alloc_pages_nodemask
44.33 ± 2% -28.5 15.84 ±173% perf-profile.children.cycles-pp.__alloc_pages_slowpath
40.91 ± 3% -24.2 16.68 ±173% perf-profile.children.cycles-pp.shrink_node
39.17 ± 4% -22.5 16.67 ±173% perf-profile.children.cycles-pp.shrink_node_memcg
37.69 ± 3% -21.9 15.82 ±173% perf-profile.children.cycles-pp.try_to_free_pages
37.37 ± 3% -21.5 15.82 ±173% perf-profile.children.cycles-pp.do_try_to_free_pages
35.23 ± 4% -18.6 16.66 ±173% perf-profile.children.cycles-pp.shrink_inactive_list
32.95 ± 4% -16.3 16.66 ±173% perf-profile.children.cycles-pp.shrink_page_list
9.95 ± 13% -9.9 0.00 perf-profile.children.cycles-pp.add_to_swap
8.78 ± 7% -8.8 0.03 ±173% perf-profile.children.cycles-pp.try_to_unmap_flush_dirty
8.77 ± 7% -8.7 0.03 ±173% perf-profile.children.cycles-pp.arch_tlbbatch_flush
8.70 ± 7% -8.7 0.03 ±173% perf-profile.children.cycles-pp.on_each_cpu_cond_mask
8.27 ± 8% -8.2 0.02 ±173% perf-profile.children.cycles-pp.on_each_cpu_mask
8.21 ± 8% -8.2 0.03 ±173% perf-profile.children.cycles-pp.smp_call_function_single
8.87 ± 3% -7.8 1.02 ±173% perf-profile.children.cycles-pp.kthread
8.87 ± 4% -7.8 1.04 ±173% perf-profile.children.cycles-pp.ret_from_fork
7.67 ± 19% -7.7 0.00 perf-profile.children.cycles-pp.add_to_swap_cache
7.61 ± 12% -7.5 0.10 ±173% perf-profile.children.cycles-pp.get_page_from_freelist
6.04 ± 10% -6.0 0.01 ±173% perf-profile.children.cycles-pp.__remove_mapping
10.11 ± 12% +6.3 16.39 ±173% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
5.84 ± 11% +10.7 16.50 ±173% perf-profile.children.cycles-pp._raw_spin_lock
7.46 ± 9% -7.4 0.02 ±173% perf-profile.self.cycles-pp.smp_call_function_single
10.10 ± 12% +6.1 16.18 ±173% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
4329827 -46.0% 2337051 ± 10% numa-meminfo.node0.Active
4329808 -46.0% 2336843 ± 10% numa-meminfo.node0.Active(anon)
4992078 -48.7% 2562281 ± 12% numa-meminfo.node0.AnonPages
260335 ± 3% +68.8% 439568 ± 5% numa-meminfo.node0.FilePages
666055 -37.5% 416131 ± 11% numa-meminfo.node0.Inactive
666013 -37.5% 415958 ± 11% numa-meminfo.node0.Inactive(anon)
74249 ± 14% -57.2% 31763 ± 10% numa-meminfo.node0.KReclaimable
6334 ± 10% +15.6% 7323 ± 13% numa-meminfo.node0.KernelStack
5819 ± 3% +75.9% 10238 numa-meminfo.node0.Mapped
356516 ± 8% +235.6% 1196383 ± 20% numa-meminfo.node0.MemFree
7706498 -10.9% 6866632 ± 3% numa-meminfo.node0.MemUsed
80605 ± 13% -85.6% 11598 ± 9% numa-meminfo.node0.PageTables
74249 ± 14% -57.2% 31763 ± 10% numa-meminfo.node0.SReclaimable
68688 ± 6% +69.5% 116397 ± 11% numa-meminfo.node0.SUnreclaim
1417 ± 57% +403.3% 7131 ± 14% numa-meminfo.node0.Shmem
4589192 ± 3% -43.6% 2589650 ± 7% numa-meminfo.node1.Active
4589182 ± 3% -43.6% 2589589 ± 7% numa-meminfo.node1.Active(anon)
5250722 ± 2% -45.5% 2861734 ± 8% numa-meminfo.node1.AnonPages
260329 ± 2% +60.6% 418001 ± 2% numa-meminfo.node1.FilePages
663023 ± 3% -32.3% 449130 ± 14% numa-meminfo.node1.Inactive
662957 ± 3% -32.3% 448937 ± 14% numa-meminfo.node1.Inactive(anon)
61247 ± 17% -57.3% 26140 ± 6% numa-meminfo.node1.KReclaimable
344774 ± 15% +292.5% 1353279 ± 21% numa-meminfo.node1.MemFree
7904793 -12.8% 6896287 ± 4% numa-meminfo.node1.MemUsed
103272 ± 9% -88.7% 11650 ± 6% numa-meminfo.node1.PageTables
61247 ± 17% -57.3% 26140 ± 6% numa-meminfo.node1.SReclaimable
52333 ± 20% +84.2% 96422 ± 11% numa-meminfo.node1.SUnreclaim
4515529 ± 4% -39.4% 2735238 ± 7% numa-meminfo.node2.Active
4515494 ± 4% -39.4% 2735218 ± 7% numa-meminfo.node2.Active(anon)
5183552 ± 3% -41.5% 3030983 ± 7% numa-meminfo.node2.AnonPages
262131 ± 3% +61.1% 422260 ± 8% numa-meminfo.node2.FilePages
669419 -28.4% 479415 ± 10% numa-meminfo.node2.Inactive
669381 -28.4% 479276 ± 10% numa-meminfo.node2.Inactive(anon)
61005 ± 12% -51.6% 29551 ± 12% numa-meminfo.node2.KReclaimable
323022 ± 9% +331.1% 1392530 ± 23% numa-meminfo.node2.MemFree
7926544 -13.5% 6857036 ± 4% numa-meminfo.node2.MemUsed
98827 -88.6% 11310 ± 8% numa-meminfo.node2.PageTables
61005 ± 12% -51.6% 29551 ± 12% numa-meminfo.node2.SReclaimable
52987 ± 27% +59.3% 84423 ± 7% numa-meminfo.node2.SUnreclaim
4560396 ± 4% -39.2% 2774139 ± 3% numa-meminfo.node3.Active
4560379 ± 4% -39.2% 2774119 ± 3% numa-meminfo.node3.Active(anon)
5220719 ± 3% -40.3% 3115767 ± 4% numa-meminfo.node3.AnonPages
267193 ± 4% +51.2% 403915 ± 5% numa-meminfo.node3.FilePages
661103 ± 2% -26.1% 488255 ± 11% numa-meminfo.node3.Inactive
661080 ± 2% -26.1% 488229 ± 11% numa-meminfo.node3.Inactive(anon)
59354 ± 11% -59.2% 24210 ± 13% numa-meminfo.node3.KReclaimable
336489 ± 5% +300.8% 1348753 ± 20% numa-meminfo.node3.MemFree
7881925 -12.8% 6869661 ± 3% numa-meminfo.node3.MemUsed
103777 ± 12% -89.7% 10718 ± 3% numa-meminfo.node3.PageTables
59354 ± 11% -59.2% 24210 ± 13% numa-meminfo.node3.SReclaimable
52814 ± 23% +55.5% 82151 ± 4% numa-meminfo.node3.SUnreclaim
53595 ± 4% +108.5% 111746 ± 17% sched_debug.cfs_rq:/.exec_clock.max
44554 ± 14% +49.7% 66682 ± 19% sched_debug.cfs_rq:/.load.avg
1563170 ± 3% -33.1% 1045091 ± 38% sched_debug.cfs_rq:/.min_vruntime.avg
3177061 -39.6% 1918710 ± 40% sched_debug.cfs_rq:/.min_vruntime.max
957373 ± 6% -66.3% 323068 ± 46% sched_debug.cfs_rq:/.min_vruntime.stddev
4.83 ± 34% +10503.0% 512.62 ± 41% sched_debug.cfs_rq:/.nr_spread_over.avg
28.08 ± 31% +3211.6% 930.00 ± 43% sched_debug.cfs_rq:/.nr_spread_over.max
5.45 ± 37% +2701.1% 152.72 ± 51% sched_debug.cfs_rq:/.nr_spread_over.stddev
44503 ± 14% +49.8% 66656 ± 19% sched_debug.cfs_rq:/.runnable_weight.avg
-1150416 -96.9% -35376 sched_debug.cfs_rq:/.spread0.avg
-2466210 -69.4% -754726 sched_debug.cfs_rq:/.spread0.min
957970 ± 6% -64.3% 342227 ± 46% sched_debug.cfs_rq:/.spread0.stddev
878053 -42.5% 504620 sched_debug.cpu.avg_idle.avg
1381357 ± 38% -50.5% 684086 ± 2% sched_debug.cpu.avg_idle.max
215030 ± 16% -58.4% 89380 ± 6% sched_debug.cpu.avg_idle.stddev
115976 +62.2% 188124 ± 13% sched_debug.cpu.clock.avg
116043 +65.1% 191587 ± 13% sched_debug.cpu.clock.max
115899 +58.6% 183767 ± 13% sched_debug.cpu.clock.min
43.31 ± 17% +5110.4% 2256 ± 50% sched_debug.cpu.clock.stddev
115976 +62.2% 188124 ± 13% sched_debug.cpu.clock_task.avg
116043 +65.1% 191587 ± 13% sched_debug.cpu.clock_task.max
115899 +58.6% 183767 ± 13% sched_debug.cpu.clock_task.min
43.31 ± 17% +5110.4% 2256 ± 50% sched_debug.cpu.clock_task.stddev
4115 ± 6% -28.4% 2945 ± 3% sched_debug.cpu.curr->pid.max
1214 -14.8% 1034 ± 4% sched_debug.cpu.curr->pid.stddev
503244 -45.6% 273574 sched_debug.cpu.max_idle_balance_cost.avg
800322 ± 53% -53.7% 370523 sched_debug.cpu.max_idle_balance_cost.max
500000 -50.0% 250000 sched_debug.cpu.max_idle_balance_cost.min
0.00 ± 13% +4112.8% 0.00 ± 50% sched_debug.cpu.next_balance.stddev
16443 ± 8% -67.6% 5324 ± 20% sched_debug.cpu.nr_switches.avg
4039 ± 4% -40.2% 2416 ± 28% sched_debug.cpu.nr_switches.min
9116 ± 7% -65.3% 3160 ± 16% sched_debug.cpu.nr_switches.stddev
0.01 ± 19% +1266.7% 0.07 ± 63% sched_debug.cpu.nr_uninterruptible.avg
16.33 ± 6% -13.8% 14.07 ± 3% sched_debug.cpu.nr_uninterruptible.stddev
14501 ± 10% -76.4% 3415 ± 33% sched_debug.cpu.sched_count.avg
2846 ± 9% -50.1% 1419 ± 45% sched_debug.cpu.sched_count.min
8516 ± 7% -71.2% 2452 ± 25% sched_debug.cpu.sched_count.stddev
680.34 ± 3% +50.7% 1025 ± 28% sched_debug.cpu.sched_goidle.avg
4040 ± 11% +204.9% 12321 ± 37% sched_debug.cpu.sched_goidle.max
537.76 ± 11% +127.4% 1222 ± 27% sched_debug.cpu.sched_goidle.stddev
7325 ± 10% -77.1% 1677 ± 33% sched_debug.cpu.ttwu_count.avg
1661 ± 16% -64.8% 585.62 ± 50% sched_debug.cpu.ttwu_count.min
3741 ± 11% -67.4% 1220 ± 23% sched_debug.cpu.ttwu_count.stddev
2912 ± 9% -83.9% 468.94 ± 28% sched_debug.cpu.ttwu_local.avg
500.08 ± 9% -80.4% 98.04 ± 44% sched_debug.cpu.ttwu_local.min
1750 ± 8% -46.9% 929.33 ± 31% sched_debug.cpu.ttwu_local.stddev
115899 +58.6% 183767 ± 13% sched_debug.cpu_clk
111590 +60.8% 179456 ± 13% sched_debug.ktime
119393 +57.6% 188219 ± 12% sched_debug.sched_clk
24.00 -33.3% 16.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 -46.7% 1.60 sched_debug.sysctl_sched.sysctl_sched_min_granularity
4.00 -50.0% 2.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
9.49 ± 3% -61.5% 3.65 ± 54% perf-stat.i.MPKI
45240326 ± 4% -56.2% 19807119 ± 37% perf-stat.i.branch-misses
22.42 -5.3 17.15 ± 20% perf-stat.i.cache-miss-rate%
66478385 -78.2% 14465455 ± 47% perf-stat.i.cache-misses
2.954e+08 -63.8% 1.071e+08 ± 64% perf-stat.i.cache-references
29848 ± 8% -51.2% 14551 ± 16% perf-stat.i.context-switches
2.53 +46.1% 3.70 perf-stat.i.cpi
297.34 ± 6% +116.7% 644.32 ± 27% perf-stat.i.cpu-migrations
1337 ± 6% +508.0% 8129 ± 13% perf-stat.i.cycles-between-cache-misses
0.43 ± 11% -0.3 0.10 ± 39% perf-stat.i.dTLB-load-miss-rate%
38712590 ± 9% -80.9% 7390481 ± 62% perf-stat.i.dTLB-load-misses
0.18 -0.0 0.14 ± 14% perf-stat.i.dTLB-store-miss-rate%
7054225 -75.7% 1713274 ± 65% perf-stat.i.dTLB-store-misses
3.646e+09 -70.8% 1.063e+09 ± 69% perf-stat.i.dTLB-stores
74.52 ± 2% -45.2 29.29 ± 38% perf-stat.i.iTLB-load-miss-rate%
5296601 ± 2% -78.1% 1158896 ± 78% perf-stat.i.iTLB-load-misses
7242 ± 6% +837.7% 67911 ± 48% perf-stat.i.instructions-per-iTLB-miss
0.41 -33.3% 0.28 perf-stat.i.ipc
140.45 ± 14% -78.8% 29.77 ± 68% perf-stat.i.major-faults
666959 ± 2% -91.1% 59313 ± 5% perf-stat.i.minor-faults
79.72 +14.2 93.89 perf-stat.i.node-load-miss-rate%
32758157 ± 2% -72.0% 9162090 ± 47% perf-stat.i.node-load-misses
8807005 -93.5% 573675 ± 40% perf-stat.i.node-loads
54.33 +20.7 75.00 perf-stat.i.node-store-miss-rate%
11853895 ± 3% -71.2% 3414343 ± 49% perf-stat.i.node-store-misses
12055174 -91.1% 1070993 ± 37% perf-stat.i.node-stores
667419 ± 2% -91.0% 59826 ± 5% perf-stat.i.page-faults
9.40 -47.1% 4.97 ± 10% perf-stat.overall.MPKI
0.60 ± 5% -0.2 0.41 ± 30% perf-stat.overall.branch-miss-rate%
22.52 -9.8 12.69 ± 13% perf-stat.overall.cache-miss-rate%
2.35 +47.1% 3.46 perf-stat.overall.cpi
1112 ± 2% +403.4% 5599 ± 8% perf-stat.overall.cycles-between-cache-misses
0.47 ± 10% -0.3 0.14 ± 6% perf-stat.overall.dTLB-load-miss-rate%
0.19 -0.0 0.15 ± 12% perf-stat.overall.dTLB-store-miss-rate%
78.80 ± 2% -35.7 43.10 ± 18% perf-stat.overall.iTLB-load-miss-rate%
5952 +216.2% 18818 ± 14% perf-stat.overall.instructions-per-iTLB-miss
0.42 -32.1% 0.29 perf-stat.overall.ipc
78.82 +15.3 94.11 perf-stat.overall.node-load-miss-rate%
49.72 ± 2% +28.9 78.60 perf-stat.overall.node-store-miss-rate%
10467 ± 2% +491.5% 61917 ± 7% perf-stat.overall.path-length
7.485e+09 -50.5% 3.704e+09 ± 9% perf-stat.ps.branch-instructions
44621026 ± 4% -66.6% 14920712 ± 31% perf-stat.ps.branch-misses
65271966 -85.3% 9594206 perf-stat.ps.cache-misses
2.898e+08 -73.5% 76820933 ± 12% perf-stat.ps.cache-references
29229 ± 7% -79.0% 6150 ± 7% perf-stat.ps.context-switches
7.259e+10 -26.0% 5.372e+10 ± 8% perf-stat.ps.cpu-cycles
291.59 ± 5% -21.1% 230.19 ± 21% perf-stat.ps.cpu-migrations
37840178 ± 9% -86.1% 5274649 ± 11% perf-stat.ps.dTLB-load-misses
8.076e+09 -53.8% 3.728e+09 ± 8% perf-stat.ps.dTLB-loads
6908013 -83.3% 1152725 ± 15% perf-stat.ps.dTLB-store-misses
3.574e+09 -78.3% 7.744e+08 ± 8% perf-stat.ps.dTLB-stores
5181335 -83.7% 845503 ± 18% perf-stat.ps.iTLB-load-misses
1395935 ± 9% -20.1% 1115727 ± 13% perf-stat.ps.iTLB-loads
3.084e+10 -49.7% 1.551e+10 ± 8% perf-stat.ps.instructions
137.26 ± 14% -90.7% 12.74 ± 49% perf-stat.ps.major-faults
652076 ± 2% -96.8% 20654 ± 5% perf-stat.ps.minor-faults
32148538 ± 2% -81.3% 6004642 perf-stat.ps.node-load-misses
8631275 ± 2% -95.6% 376483 ± 10% perf-stat.ps.node-loads
11688310 ± 3% -80.1% 2329722 ± 4% perf-stat.ps.node-store-misses
11813097 -94.6% 634518 ± 5% perf-stat.ps.node-stores
652546 ± 2% -96.8% 20835 ± 5% perf-stat.ps.page-faults
853471 -97.6% 20576 ± 15% proc-vmstat.allocstall_movable
6909 ± 15% -38.6% 4244 ± 18% proc-vmstat.allocstall_normal
3133959 ± 35% -97.1% 89457 ± 56% proc-vmstat.compact_daemon_free_scanned
818614 ± 26% -84.0% 130740 ±168% proc-vmstat.compact_daemon_migrate_scanned
582.25 ±130% -97.9% 12.25 ± 27% proc-vmstat.compact_daemon_wake
2999 ± 38% -76.8% 694.50 ± 40% proc-vmstat.compact_fail
3305553 ± 34% -97.3% 89457 ± 56% proc-vmstat.compact_free_scanned
238885 ± 23% -96.5% 8428 ± 43% proc-vmstat.compact_isolated
1630990 ± 20% -92.0% 130740 ±168% proc-vmstat.compact_migrate_scanned
3016 ± 37% -77.0% 695.25 ± 40% proc-vmstat.compact_stall
167.25 ±106% -84.6% 25.75 ± 36% proc-vmstat.kswapd_high_wmark_hit_quickly
877.75 ±104% -96.1% 34.50 ± 47% proc-vmstat.kswapd_low_wmark_hit_quickly
4498214 -40.6% 2670391 proc-vmstat.nr_active_anon
19.75 ± 6% +283.5% 75.75 ±102% proc-vmstat.nr_active_file
5161207 -42.6% 2960450 proc-vmstat.nr_anon_pages
27874 ± 4% +716.9% 227716 ± 9% proc-vmstat.nr_dirty_background_threshold
55819 ± 4% +920.1% 569430 ± 9% proc-vmstat.nr_dirty_threshold
262480 +64.1% 430753 ± 3% proc-vmstat.nr_file_pages
339879 ± 3% +252.9% 1199399 ± 8% proc-vmstat.nr_free_pages
664876 -28.6% 475030 ± 5% proc-vmstat.nr_inactive_anon
42.75 ± 6% +193.6% 125.50 ± 24% proc-vmstat.nr_inactive_file
809.75 ± 2% +43.4% 1161 ± 13% proc-vmstat.nr_isolated_anon
5658 +19.8% 6780 ± 5% proc-vmstat.nr_mapped
96944 ± 3% -88.0% 11672 ± 2% proc-vmstat.nr_page_table_pages
845.75 ± 13% +273.3% 3157 ± 59% proc-vmstat.nr_shmem
64214 ± 3% -56.3% 28085 ± 4% proc-vmstat.nr_slab_reclaimable
56710 +69.3% 96026 ± 2% proc-vmstat.nr_slab_unreclaimable
58066797 -95.7% 2482316 ± 5% proc-vmstat.nr_vmscan_write
92058231 -95.0% 4637520 ± 6% proc-vmstat.nr_written
4492236 -40.6% 2667270 proc-vmstat.nr_zone_active_anon
19.25 ± 4% +296.1% 76.25 ±100% proc-vmstat.nr_zone_active_file
666758 -28.9% 474019 ± 5% proc-vmstat.nr_zone_inactive_anon
42.25 ± 5% +191.1% 123.00 ± 24% proc-vmstat.nr_zone_inactive_file
37740595 ± 7% -91.8% 3081058 ± 10% proc-vmstat.numa_foreign
4216 ± 61% +322.7% 17823 ± 32% proc-vmstat.numa_hint_faults
68.25 ± 45% +10235.2% 7053 ± 62% proc-vmstat.numa_hint_faults_local
69969181 ± 2% -88.2% 8255598 ± 3% proc-vmstat.numa_hit
69873924 ± 2% -88.3% 8165013 ± 3% proc-vmstat.numa_local
37740595 ± 7% -91.8% 3081058 ± 10% proc-vmstat.numa_miss
37835852 ± 7% -91.6% 3171643 ± 9% proc-vmstat.numa_other
39446530 ± 11% -87.1% 5073809 ± 6% proc-vmstat.numa_pte_updates
1050 ±104% -93.8% 65.00 ± 19% proc-vmstat.pageoutrun
66731 ± 7% -70.9% 19390 ± 23% proc-vmstat.pgactivate
749996 ± 17% -99.3% 5564 ± 28% proc-vmstat.pgalloc_dma
8660763 ± 7% -94.2% 506250 ± 4% proc-vmstat.pgalloc_dma32
1.195e+08 ± 2% -90.8% 10970916 ± 2% proc-vmstat.pgalloc_normal
93121825 -95.9% 3848016 ± 4% proc-vmstat.pgdeactivate
99039595 -92.9% 7036256 ± 3% proc-vmstat.pgfault
1.288e+08 -91.3% 11209163 ± 4% proc-vmstat.pgfree
22171 ± 10% -73.1% 5956 ± 26% proc-vmstat.pgmajfault
119215 ± 22% -95.5% 5412 ± 42% proc-vmstat.pgmigrate_success
93314793 -95.9% 3848330 ± 4% proc-vmstat.pgrefill
91553858 ± 3% +130.5% 2.11e+08 ± 5% proc-vmstat.pgscan_direct
14329232 ± 3% -91.6% 1199879 ± 25% proc-vmstat.pgscan_kswapd
192954 ±133% -100.0% 0.00 proc-vmstat.pgskip_normal
77752861 -95.8% 3242366 ± 17% proc-vmstat.pgsteal_direct
14314039 ± 3% -91.8% 1168153 ± 26% proc-vmstat.pgsteal_kswapd
30330 ± 6% -90.9% 2755 ± 59% proc-vmstat.pswpin
92062246 -98.1% 1752839 ± 9% proc-vmstat.pswpout
6003 ± 37% -99.2% 49.50 ± 8% proc-vmstat.swap_ra
3719 ± 38% -99.5% 18.50 ± 26% proc-vmstat.swap_ra_hit
1079479 -47.9% 562842 ± 10% numa-vmstat.node0.nr_active_anon
1244149 -50.9% 610979 ± 11% numa-vmstat.node0.nr_anon_pages
65081 ± 3% +65.9% 107940 ± 7% numa-vmstat.node0.nr_file_pages
92743 ± 9% +275.0% 347789 ± 18% numa-vmstat.node0.nr_free_pages
165606 -42.7% 94832 ± 12% numa-vmstat.node0.nr_inactive_anon
163.50 ± 10% +26.5% 206.75 ± 7% numa-vmstat.node0.nr_isolated_anon
6332 ± 10% +15.2% 7295 ± 13% numa-vmstat.node0.nr_kernel_stack
1475 ± 4% +73.2% 2555 numa-vmstat.node0.nr_mapped
19985 ± 13% -85.9% 2812 ± 11% numa-vmstat.node0.nr_page_table_pages
353.00 ± 56% +439.4% 1904 ± 12% numa-vmstat.node0.nr_shmem
18640 ± 15% -57.6% 7898 ± 10% numa-vmstat.node0.nr_slab_reclaimable
17170 ± 6% +66.0% 28498 ± 12% numa-vmstat.node0.nr_slab_unreclaimable
11339775 ± 8% -95.1% 557216 ± 7% numa-vmstat.node0.nr_vmscan_write
11339785 ± 8% -93.2% 772604 ± 10% numa-vmstat.node0.nr_written
1078097 -47.9% 562212 ± 10% numa-vmstat.node0.nr_zone_active_anon
165645 -42.9% 94563 ± 12% numa-vmstat.node0.nr_zone_inactive_anon
10833432 ± 20% -90.6% 1016984 ± 43% numa-vmstat.node0.numa_foreign
11368384 ± 11% -78.8% 2411490 ± 5% numa-vmstat.node0.numa_hit
11338851 ± 11% -78.9% 2396881 ± 5% numa-vmstat.node0.numa_local
3220466 ± 11% -97.3% 86801 ± 28% numa-vmstat.node0.numa_miss
3250022 ± 11% -96.9% 101418 ± 32% numa-vmstat.node0.numa_other
1144015 ± 3% -45.4% 624641 ± 8% numa-vmstat.node1.nr_active_anon
1308950 ± 2% -47.7% 684070 ± 10% numa-vmstat.node1.nr_anon_pages
65078 ± 2% +56.5% 101862 ± 3% numa-vmstat.node1.nr_file_pages
89819 ± 16% +339.7% 394892 ± 18% numa-vmstat.node1.nr_free_pages
165324 ± 3% -38.1% 102280 ± 20% numa-vmstat.node1.nr_inactive_anon
25582 ± 8% -89.0% 2824 ± 6% numa-vmstat.node1.nr_page_table_pages
61.00 ± 30% +254.5% 216.25 ±102% numa-vmstat.node1.nr_shmem
15272 ± 15% -57.9% 6427 ± 5% numa-vmstat.node1.nr_slab_reclaimable
13082 ± 20% +79.2% 23438 ± 11% numa-vmstat.node1.nr_slab_unreclaimable
15505760 ± 2% -96.1% 597228 ± 11% numa-vmstat.node1.nr_vmscan_write
15505764 ± 2% -95.0% 782744 ± 9% numa-vmstat.node1.nr_written
1142502 ± 3% -45.4% 624049 ± 8% numa-vmstat.node1.nr_zone_active_anon
165905 ± 3% -38.5% 101962 ± 20% numa-vmstat.node1.nr_zone_inactive_anon
5185520 ± 55% -87.2% 663100 ± 57% numa-vmstat.node1.numa_foreign
10989467 ± 16% -82.4% 1938286 ± 14% numa-vmstat.node1.numa_hit
10893913 ± 16% -83.0% 1848232 ± 15% numa-vmstat.node1.numa_local
7480409 ± 13% -93.0% 525453 ± 57% numa-vmstat.node1.numa_miss
7575999 ± 13% -91.9% 615517 ± 48% numa-vmstat.node1.numa_other
1126040 ± 4% -41.5% 658560 ± 8% numa-vmstat.node2.nr_active_anon
1292651 ± 3% -44.3% 719404 ± 8% numa-vmstat.node2.nr_anon_pages
65530 ± 3% +57.4% 103165 ± 9% numa-vmstat.node2.nr_file_pages
84064 ± 11% +383.7% 406632 ± 22% numa-vmstat.node2.nr_free_pages
166954 -34.8% 108855 ± 13% numa-vmstat.node2.nr_inactive_anon
24494 -88.8% 2746 ± 9% numa-vmstat.node2.nr_page_table_pages
15155 ± 11% -52.1% 7256 ± 12% numa-vmstat.node2.nr_slab_reclaimable
13245 ± 27% +55.3% 20576 ± 8% numa-vmstat.node2.nr_slab_unreclaimable
15002550 -95.9% 613047 ± 7% numa-vmstat.node2.nr_vmscan_write
15002556 -94.7% 788175 ± 7% numa-vmstat.node2.nr_written
1124427 ± 4% -41.5% 657862 ± 8% numa-vmstat.node2.nr_zone_active_anon
167651 -35.3% 108435 ± 13% numa-vmstat.node2.nr_zone_inactive_anon
3983276 ± 48% -88.2% 469967 ± 66% numa-vmstat.node2.numa_foreign
11396819 ± 20% -84.8% 1735877 ± 25% numa-vmstat.node2.numa_hit
11298966 ± 20% -85.5% 1642162 ± 26% numa-vmstat.node2.numa_local
6390588 ± 38% -89.6% 661903 ± 58% numa-vmstat.node2.numa_miss
6488475 ± 37% -88.4% 755628 ± 51% numa-vmstat.node2.numa_other
1136490 ± 4% -41.6% 663729 ± 4% numa-vmstat.node3.nr_active_anon
1301163 ± 3% -43.0% 741565 ± 4% numa-vmstat.node3.nr_anon_pages
66788 ± 4% +47.3% 98372 ± 6% numa-vmstat.node3.nr_file_pages
87998 ± 8% +352.3% 398056 ± 17% numa-vmstat.node3.nr_free_pages
164879 ± 2% -31.7% 112582 ± 13% numa-vmstat.node3.nr_inactive_anon
25741 ± 12% -89.9% 2587 ± 5% numa-vmstat.node3.nr_page_table_pages
14885 ± 12% -60.4% 5897 ± 14% numa-vmstat.node3.nr_slab_reclaimable
13200 ± 23% +51.3% 19973 ± 4% numa-vmstat.node3.nr_slab_unreclaimable
15613755 ± 4% -96.0% 616788 ± 7% numa-vmstat.node3.nr_vmscan_write
15613761 ± 4% -95.0% 781041 ± 8% numa-vmstat.node3.nr_written
1134916 ± 4% -41.6% 663173 ± 4% numa-vmstat.node3.nr_zone_active_anon
165543 ± 2% -32.3% 112086 ± 13% numa-vmstat.node3.nr_zone_inactive_anon
3251058 ± 62% -96.3% 120489 ± 48% numa-vmstat.node3.numa_foreign
12300437 ± 23% -88.7% 1395358 ± 20% numa-vmstat.node3.numa_hit
12206583 ± 23% -89.3% 1300144 ± 21% numa-vmstat.node3.numa_local
6166200 ± 40% -83.7% 1005747 ± 28% numa-vmstat.node3.numa_miss
6260087 ± 39% -82.4% 1100966 ± 25% numa-vmstat.node3.numa_other
148500 ± 19% -87.7% 18280 ± 10% softirqs.CPU0.RCU
17858 ± 16% +110.6% 37610 ± 5% softirqs.CPU0.SCHED
66390 ± 6% +74.0% 115536 ± 13% softirqs.CPU0.TIMER
143319 ± 13% -85.8% 20416 ± 21% softirqs.CPU1.RCU
12471 ± 31% +116.1% 26946 ± 13% softirqs.CPU1.SCHED
64688 ± 11% +74.1% 112626 ± 14% softirqs.CPU1.TIMER
158245 ± 15% -89.4% 16848 ± 14% softirqs.CPU10.RCU
7139 ± 34% +272.1% 26563 ± 12% softirqs.CPU10.SCHED
62529 ± 10% +80.2% 112653 ± 11% softirqs.CPU10.TIMER
51975 ± 71% -72.7% 14210 ± 4% softirqs.CPU100.RCU
16021 ± 20% +95.3% 31284 ± 6% softirqs.CPU100.SCHED
61521 ± 17% +82.3% 112124 ± 13% softirqs.CPU100.TIMER
55289 ± 64% -74.1% 14298 ± 9% softirqs.CPU101.RCU
14405 ± 24% +116.4% 31167 ± 5% softirqs.CPU101.SCHED
61371 ± 16% +81.0% 111096 ± 12% softirqs.CPU101.TIMER
64943 ± 66% -77.2% 14807 ± 8% softirqs.CPU102.RCU
14576 ± 25% +110.5% 30690 softirqs.CPU102.SCHED
61443 ± 18% +73.6% 106662 ± 10% softirqs.CPU102.TIMER
56913 ± 71% -74.4% 14548 ± 12% softirqs.CPU103.RCU
14999 ± 23% +107.7% 31152 ± 8% softirqs.CPU103.SCHED
60710 ± 17% +79.6% 109008 ± 10% softirqs.CPU103.TIMER
53357 ± 67% -72.2% 14815 ± 9% softirqs.CPU104.RCU
15438 ± 23% +104.7% 31595 ± 5% softirqs.CPU104.SCHED
60199 ± 17% +87.5% 112876 ± 11% softirqs.CPU104.TIMER
52763 ± 65% -71.3% 15162 ± 7% softirqs.CPU105.RCU
14867 ± 17% +111.7% 31469 ± 4% softirqs.CPU105.SCHED
60043 ± 17% +85.9% 111641 ± 13% softirqs.CPU105.TIMER
51265 ± 70% -71.6% 14549 ± 8% softirqs.CPU106.RCU
15449 ± 24% +103.7% 31472 ± 2% softirqs.CPU106.SCHED
61030 ± 16% +81.5% 110754 ± 11% softirqs.CPU106.TIMER
52653 ± 60% -73.4% 13989 ± 8% softirqs.CPU107.RCU
14764 ± 20% +117.8% 32153 ± 3% softirqs.CPU107.SCHED
61155 ± 16% +80.8% 110561 ± 11% softirqs.CPU107.TIMER
44960 ± 58% -67.3% 14687 ± 34% softirqs.CPU108.RCU
16242 ± 18% +87.1% 30382 ± 15% softirqs.CPU108.SCHED
68723 ± 12% +66.4% 114358 ± 12% softirqs.CPU108.TIMER
48128 ± 42% -74.2% 12434 ± 14% softirqs.CPU109.RCU
58701 ± 17% +84.1% 108099 ± 8% softirqs.CPU109.TIMER
161831 ± 13% -89.4% 17195 ± 15% softirqs.CPU11.RCU
8327 ± 40% +224.8% 27045 ± 20% softirqs.CPU11.SCHED
62461 ± 12% +78.1% 111265 ± 14% softirqs.CPU11.TIMER
44523 ± 52% -72.1% 12432 ± 10% softirqs.CPU110.RCU
15380 ± 18% +100.9% 30900 ± 7% softirqs.CPU110.SCHED
57841 ± 15% +88.5% 109008 ± 9% softirqs.CPU110.TIMER
39980 ± 66% -69.6% 12142 ± 9% softirqs.CPU111.RCU
17226 ± 11% +85.4% 31934 ± 7% softirqs.CPU111.SCHED
59163 ± 16% +87.8% 111088 ± 10% softirqs.CPU111.TIMER
44882 ± 58% -67.5% 14577 ± 13% softirqs.CPU112.RCU
17531 ± 12% +74.5% 30588 ± 9% softirqs.CPU112.SCHED
58977 ± 17% +89.5% 111769 ± 10% softirqs.CPU112.TIMER
42304 ± 35% -67.9% 13599 ± 3% softirqs.CPU113.RCU
16961 ± 14% +85.7% 31489 ± 8% softirqs.CPU113.SCHED
57316 ± 16% +90.6% 109253 ± 11% softirqs.CPU113.TIMER
40603 ± 48% -66.8% 13489 ± 7% softirqs.CPU114.RCU
17596 ± 8% +80.8% 31821 ± 9% softirqs.CPU114.SCHED
57479 ± 17% +90.1% 109273 ± 11% softirqs.CPU114.TIMER
38979 ± 52% -64.4% 13885 ± 13% softirqs.CPU115.RCU
17650 ± 9% +77.1% 31250 ± 8% softirqs.CPU115.SCHED
55459 ± 17% +114.8% 119143 ± 20% softirqs.CPU115.TIMER
38862 ± 45% -67.6% 12596 ± 5% softirqs.CPU116.RCU
17624 ± 9% +88.4% 33198 ± 6% softirqs.CPU116.SCHED
58562 ± 16% +94.3% 113798 ± 11% softirqs.CPU116.TIMER
36619 ± 27% -63.1% 13517 ± 8% softirqs.CPU117.RCU
17288 ± 9% +84.9% 31967 ± 8% softirqs.CPU117.SCHED
59297 ± 16% +89.8% 112547 ± 12% softirqs.CPU117.TIMER
42777 ± 57% -65.5% 14779 ± 8% softirqs.CPU118.RCU
15874 ± 12% +100.9% 31889 ± 4% softirqs.CPU118.SCHED
57772 ± 16% +94.2% 112179 ± 14% softirqs.CPU118.TIMER
37634 ± 43% -62.7% 14036 ± 9% softirqs.CPU119.RCU
16704 ± 18% +88.4% 31464 ± 11% softirqs.CPU119.SCHED
58553 ± 16% +84.7% 108138 ± 9% softirqs.CPU119.TIMER
159594 ± 14% -89.7% 16481 ± 18% softirqs.CPU12.RCU
8647 ± 42% +208.2% 26649 ± 19% softirqs.CPU12.SCHED
62630 ± 13% +75.2% 109705 ± 14% softirqs.CPU12.TIMER
36255 ± 44% -62.1% 13726 ± 9% softirqs.CPU120.RCU
17122 ± 8% +85.1% 31693 ± 9% softirqs.CPU120.SCHED
59082 ± 17% +87.9% 111002 ± 10% softirqs.CPU120.TIMER
39008 ± 42% -62.1% 14787 ± 11% softirqs.CPU121.RCU
18168 ± 10% +74.0% 31609 ± 11% softirqs.CPU121.SCHED
59598 ± 17% +90.9% 113755 ± 13% softirqs.CPU121.TIMER
38742 ± 36% -65.9% 13229 ± 14% softirqs.CPU122.RCU
15884 ± 13% +99.6% 31710 ± 14% softirqs.CPU122.SCHED
59139 ± 14% +90.8% 112826 ± 12% softirqs.CPU122.TIMER
38259 ± 49% -57.7% 16165 ± 17% softirqs.CPU123.RCU
17026 ± 16% +82.2% 31015 ± 12% softirqs.CPU123.SCHED
60160 ± 16% +86.1% 111963 ± 11% softirqs.CPU123.TIMER
38639 ± 33% -64.4% 13772 ± 10% softirqs.CPU124.RCU
17160 ± 15% +83.2% 31429 ± 9% softirqs.CPU124.SCHED
59669 ± 16% +85.4% 110651 ± 10% softirqs.CPU124.TIMER
45107 ± 60% -69.5% 13735 ± 12% softirqs.CPU125.RCU
17242 ± 10% +85.3% 31942 ± 10% softirqs.CPU125.SCHED
59485 ± 17% +87.8% 111686 ± 11% softirqs.CPU125.TIMER
58838 ± 86% -78.7% 12552 ± 5% softirqs.CPU126.RCU
15522 ± 27% +101.0% 31201 ± 8% softirqs.CPU126.SCHED
59876 ± 15% +129.3% 137299 ± 17% softirqs.CPU126.TIMER
57748 ± 88% -78.6% 12348 ± 15% softirqs.CPU127.RCU
16508 ± 20% +83.6% 30303 ± 7% softirqs.CPU127.SCHED
59775 ± 15% +80.4% 107848 ± 12% softirqs.CPU127.TIMER
60225 ± 69% -80.8% 11589 ± 9% softirqs.CPU128.RCU
16116 ± 16% +99.8% 32198 ± 5% softirqs.CPU128.SCHED
59586 ± 15% +80.4% 107499 ± 9% softirqs.CPU128.TIMER
59447 ± 72% -78.1% 12992 ± 9% softirqs.CPU129.RCU
15505 ± 26% +121.5% 34341 ± 4% softirqs.CPU129.SCHED
58958 ± 14% +91.2% 112740 ± 12% softirqs.CPU129.TIMER
154961 ± 18% -89.1% 16885 ± 20% softirqs.CPU13.RCU
9747 ± 52% +174.2% 26731 ± 18% softirqs.CPU13.SCHED
63566 ± 10% +75.1% 111291 ± 11% softirqs.CPU13.TIMER
52198 ± 78% -74.3% 13432 ± 11% softirqs.CPU130.RCU
16744 ± 17% +90.5% 31893 ± 10% softirqs.CPU130.SCHED
57112 ± 11% +87.3% 106987 ± 10% softirqs.CPU130.TIMER
58087 ± 63% -80.4% 11378 ± 9% softirqs.CPU131.RCU
16177 ± 19% +114.1% 34632 softirqs.CPU131.SCHED
60025 ± 13% +82.2% 109374 ± 10% softirqs.CPU131.TIMER
55006 ± 77% -78.6% 11767 ± 8% softirqs.CPU132.RCU
16494 ± 17% +99.6% 32918 ± 4% softirqs.CPU132.SCHED
59868 ± 13% +82.7% 109391 ± 10% softirqs.CPU132.TIMER
52985 ± 76% -75.2% 13118 ± 16% softirqs.CPU133.RCU
15975 ± 17% +96.3% 31354 ± 9% softirqs.CPU133.SCHED
59848 ± 12% +77.7% 106325 ± 8% softirqs.CPU133.TIMER
51423 ± 71% -76.8% 11913 ± 9% softirqs.CPU134.RCU
60239 ± 15% +86.1% 112088 ± 11% softirqs.CPU134.TIMER
58680 ± 69% -79.4% 12065 ± 17% softirqs.CPU135.RCU
15399 ± 27% +117.7% 33530 ± 7% softirqs.CPU135.SCHED
59646 ± 15% +91.8% 114419 ± 12% softirqs.CPU135.TIMER
59344 ± 57% -79.3% 12270 ± 13% softirqs.CPU136.RCU
16577 ± 15% +99.9% 33140 ± 6% softirqs.CPU136.SCHED
60564 ± 14% +81.2% 109717 ± 9% softirqs.CPU136.TIMER
53272 ± 69% -77.5% 12005 ± 9% softirqs.CPU137.RCU
15042 ± 28% +133.3% 35098 ± 2% softirqs.CPU137.SCHED
61160 ± 15% +86.6% 114147 ± 12% softirqs.CPU137.TIMER
50441 ± 71% -74.7% 12785 ± 9% softirqs.CPU138.RCU
15214 ± 24% +120.2% 33499 ± 6% softirqs.CPU138.SCHED
59597 ± 15% +88.8% 112539 ± 14% softirqs.CPU138.TIMER
51925 ± 61% -73.8% 13586 ± 14% softirqs.CPU139.RCU
16718 ± 14% +78.4% 29818 ± 12% softirqs.CPU139.SCHED
59680 ± 15% +87.2% 111693 ± 11% softirqs.CPU139.TIMER
158338 ± 15% -90.4% 15267 ± 14% softirqs.CPU14.RCU
6872 ± 36% +301.7% 27608 ± 15% softirqs.CPU14.SCHED
62444 ± 11% +78.7% 111617 ± 16% softirqs.CPU14.TIMER
51555 ± 58% -75.9% 12413 ± 9% softirqs.CPU140.RCU
16164 ± 11% +107.6% 33556 ± 2% softirqs.CPU140.SCHED
60149 ± 14% +82.5% 109751 ± 10% softirqs.CPU140.TIMER
57032 ± 81% -77.9% 12576 ± 9% softirqs.CPU141.RCU
17207 ± 13% +96.4% 33797 ± 7% softirqs.CPU141.SCHED
61584 ± 15% +84.2% 113415 ± 13% softirqs.CPU141.TIMER
48404 ± 66% -73.3% 12921 ± 15% softirqs.CPU142.RCU
14895 ± 6% +122.0% 33075 ± 6% softirqs.CPU142.SCHED
61970 ± 13% +78.3% 110491 ± 10% softirqs.CPU142.TIMER
53676 ± 59% -71.6% 15256 ± 20% softirqs.CPU143.RCU
15189 ± 20% +66.1% 25226 ± 17% softirqs.CPU143.SCHED
61406 ± 11% +82.3% 111972 ± 10% softirqs.CPU143.TIMER
159128 ± 15% -89.2% 17185 ± 13% softirqs.CPU15.RCU
8270 ± 41% +236.0% 27786 ± 8% softirqs.CPU15.SCHED
62582 ± 12% +79.5% 112330 ± 13% softirqs.CPU15.TIMER
156612 ± 14% -90.8% 14437 ± 14% softirqs.CPU16.RCU
8541 ± 46% +216.9% 27064 ± 11% softirqs.CPU16.SCHED
63236 ± 13% +76.0% 111303 ± 13% softirqs.CPU16.TIMER
158496 ± 16% -91.3% 13798 ± 14% softirqs.CPU17.RCU
7652 ± 51% +278.7% 28978 ± 9% softirqs.CPU17.SCHED
63016 ± 12% +81.7% 114498 ± 15% softirqs.CPU17.TIMER
111902 ± 19% -82.9% 19182 ± 16% softirqs.CPU18.RCU
12335 ± 18% +129.0% 28253 ± 10% softirqs.CPU18.SCHED
63057 ± 15% +74.6% 110102 ± 12% softirqs.CPU18.TIMER
110348 ± 20% -85.4% 16109 ± 2% softirqs.CPU19.RCU
11999 ± 23% +138.4% 28604 ± 4% softirqs.CPU19.SCHED
61933 ± 14% +75.8% 108906 ± 10% softirqs.CPU19.TIMER
155072 ± 17% -89.4% 16429 ± 13% softirqs.CPU2.RCU
7419 ± 28% +262.8% 26915 ± 11% softirqs.CPU2.SCHED
62650 ± 9% +75.2% 109746 ± 14% softirqs.CPU2.TIMER
107320 ± 24% -86.6% 14358 ± 7% softirqs.CPU20.RCU
12967 ± 17% +97.1% 25556 ± 23% softirqs.CPU20.SCHED
62015 ± 15% +76.5% 109477 ± 13% softirqs.CPU20.TIMER
111967 ± 18% -83.7% 18219 ± 18% softirqs.CPU21.RCU
12765 ± 21% +136.3% 30162 ± 14% softirqs.CPU21.SCHED
61643 ± 16% +88.1% 115927 ± 15% softirqs.CPU21.TIMER
104906 ± 30% -85.1% 15663 ± 8% softirqs.CPU22.RCU
13100 ± 21% +127.1% 29751 softirqs.CPU22.SCHED
61691 ± 16% +80.4% 111272 ± 12% softirqs.CPU22.TIMER
111456 ± 21% -81.0% 21201 ± 41% softirqs.CPU23.RCU
12023 ± 19% +112.6% 25565 ± 28% softirqs.CPU23.SCHED
60192 ± 16% +82.7% 109972 ± 12% softirqs.CPU23.TIMER
113756 ± 19% -86.3% 15589 ± 7% softirqs.CPU24.RCU
11746 ± 19% +153.1% 29732 ± 2% softirqs.CPU24.SCHED
60923 ± 16% +84.0% 112093 ± 12% softirqs.CPU24.TIMER
106907 ± 26% -85.4% 15622 ± 7% softirqs.CPU25.RCU
11714 ± 19% +146.9% 28920 softirqs.CPU25.SCHED
62148 ± 15% +76.8% 109902 ± 11% softirqs.CPU25.TIMER
109217 ± 26% -86.0% 15287 ± 13% softirqs.CPU26.RCU
12526 ± 20% +132.5% 29118 ± 7% softirqs.CPU26.SCHED
61152 ± 16% +80.4% 110309 ± 10% softirqs.CPU26.TIMER
106674 ± 22% -86.1% 14869 ± 7% softirqs.CPU27.RCU
12446 ± 25% +153.6% 31566 ± 2% softirqs.CPU27.SCHED
62325 ± 13% +84.3% 114859 ± 13% softirqs.CPU27.TIMER
118516 ± 21% -87.0% 15349 ± 13% softirqs.CPU28.RCU
12017 ± 23% +149.6% 29998 ± 10% softirqs.CPU28.SCHED
62263 ± 16% +80.3% 112233 ± 12% softirqs.CPU28.TIMER
105772 ± 28% -85.1% 15793 ± 9% softirqs.CPU29.RCU
12949 ± 19% +122.0% 28749 ± 7% softirqs.CPU29.SCHED
62928 ± 15% +75.1% 110167 ± 13% softirqs.CPU29.TIMER
152500 ± 18% -88.1% 18110 ± 16% softirqs.CPU3.RCU
8121 ± 31% +192.0% 23716 ± 20% softirqs.CPU3.SCHED
63602 ± 8% +68.7% 107290 ± 13% softirqs.CPU3.TIMER
104250 ± 30% -84.9% 15719 ± 10% softirqs.CPU30.RCU
13072 ± 17% +117.8% 28478 ± 7% softirqs.CPU30.SCHED
62284 ± 16% +71.4% 106727 ± 9% softirqs.CPU30.TIMER
113558 ± 22% -86.1% 15790 ± 13% softirqs.CPU31.RCU
12753 ± 14% +139.4% 30537 ± 3% softirqs.CPU31.SCHED
63368 ± 16% +74.7% 110692 ± 9% softirqs.CPU31.TIMER
111744 ± 24% -88.2% 13187 ± 5% softirqs.CPU32.RCU
12703 ± 21% +142.7% 30833 ± 6% softirqs.CPU32.SCHED
61578 ± 16% +80.0% 110841 ± 10% softirqs.CPU32.TIMER
110779 ± 25% -86.3% 15188 ± 8% softirqs.CPU33.RCU
13048 ± 20% +126.8% 29594 ± 8% softirqs.CPU33.SCHED
62259 ± 17% +78.0% 110803 ± 13% softirqs.CPU33.TIMER
108641 ± 27% -87.6% 13515 ± 4% softirqs.CPU34.RCU
11704 ± 17% +164.7% 30986 ± 2% softirqs.CPU34.SCHED
64380 ± 10% +72.8% 111244 ± 10% softirqs.CPU34.TIMER
108403 ± 23% -87.0% 14126 ± 4% softirqs.CPU35.RCU
12696 ± 22% +146.2% 31262 ± 2% softirqs.CPU35.SCHED
62249 ± 15% +79.0% 111411 ± 11% softirqs.CPU35.TIMER
98380 ± 38% -82.5% 17192 ± 15% softirqs.CPU36.RCU
12910 ± 20% +122.9% 28778 ± 13% softirqs.CPU36.SCHED
62142 ± 11% +83.5% 114020 ± 11% softirqs.CPU36.TIMER
100859 ± 33% -85.1% 15013 ± 13% softirqs.CPU37.RCU
13080 ± 18% +138.9% 31243 ± 11% softirqs.CPU37.SCHED
59497 ± 15% +83.3% 109036 ± 9% softirqs.CPU37.TIMER
97655 ± 40% -84.9% 14790 ± 8% softirqs.CPU38.RCU
13182 ± 25% +121.6% 29215 ± 10% softirqs.CPU38.SCHED
60659 ± 10% +80.6% 109527 ± 10% softirqs.CPU38.TIMER
104662 ± 29% -85.6% 15083 ± 9% softirqs.CPU39.RCU
12493 ± 25% +144.6% 30553 ± 11% softirqs.CPU39.SCHED
58747 ± 15% +88.8% 110929 ± 12% softirqs.CPU39.TIMER
156827 ± 17% -88.5% 18111 ± 8% softirqs.CPU4.RCU
7358 ± 44% +256.2% 26212 ± 10% softirqs.CPU4.SCHED
63026 ± 9% +77.3% 111773 ± 14% softirqs.CPU4.TIMER
108483 ± 29% -86.6% 14545 ± 9% softirqs.CPU40.RCU
12804 ± 26% +147.9% 31740 ± 6% softirqs.CPU40.SCHED
59092 ± 15% +91.0% 112873 ± 10% softirqs.CPU40.TIMER
100472 ± 36% -84.6% 15501 ± 4% softirqs.CPU41.RCU
13986 ± 17% +118.8% 30599 ± 9% softirqs.CPU41.SCHED
58005 ± 15% +90.2% 110346 ± 11% softirqs.CPU41.TIMER
105700 ± 32% -85.7% 15105 ± 7% softirqs.CPU42.RCU
13316 ± 22% +122.1% 29574 ± 9% softirqs.CPU42.SCHED
58757 ± 16% +85.1% 108781 ± 11% softirqs.CPU42.TIMER
109930 ± 28% -85.7% 15756 ± 12% softirqs.CPU43.RCU
13316 ± 12% +104.9% 27287 ± 11% softirqs.CPU43.SCHED
63407 ± 19% +62.6% 103118 ± 8% softirqs.CPU43.TIMER
103971 ± 30% -84.4% 16271 ± 12% softirqs.CPU44.RCU
13004 ± 30% +131.7% 30131 ± 7% softirqs.CPU44.SCHED
58980 ± 15% +93.9% 114384 ± 12% softirqs.CPU44.TIMER
110066 ± 32% -86.3% 15032 ± 5% softirqs.CPU45.RCU
12973 ± 17% +133.4% 30272 ± 7% softirqs.CPU45.SCHED
64564 ± 22% +74.1% 112411 ± 13% softirqs.CPU45.TIMER
108804 ± 22% -86.1% 15130 ± 4% softirqs.CPU46.RCU
13575 ± 17% +120.5% 29929 ± 8% softirqs.CPU46.SCHED
60320 ± 17% +86.7% 112609 ± 14% softirqs.CPU46.TIMER
101742 ± 35% -85.1% 15126 ± 10% softirqs.CPU47.RCU
14210 ± 17% +104.1% 29006 ± 10% softirqs.CPU47.SCHED
59629 ± 15% +79.8% 107203 ± 9% softirqs.CPU47.TIMER
99168 ± 39% -85.0% 14921 ± 17% softirqs.CPU48.RCU
14040 ± 22% +104.7% 28735 ± 14% softirqs.CPU48.SCHED
59847 ± 16% +85.8% 111210 ± 8% softirqs.CPU48.TIMER
97763 ± 35% -83.5% 16172 ± 19% softirqs.CPU49.RCU
13694 ± 15% +120.8% 30236 ± 15% softirqs.CPU49.SCHED
59453 ± 16% +93.8% 115218 ± 12% softirqs.CPU49.TIMER
141559 ± 18% -88.3% 16604 ± 15% softirqs.CPU5.RCU
7789 ± 40% +223.5% 25199 ± 24% softirqs.CPU5.SCHED
63586 ± 10% +73.4% 110254 ± 12% softirqs.CPU5.TIMER
94641 ± 39% -84.7% 14472 ± 25% softirqs.CPU50.RCU
59938 ± 16% +89.3% 113447 ± 10% softirqs.CPU50.TIMER
94452 ± 40% -77.7% 21094 ± 47% softirqs.CPU51.RCU
14050 ± 17% +96.7% 27639 ± 14% softirqs.CPU51.SCHED
59616 ± 16% +88.6% 112418 ± 11% softirqs.CPU51.TIMER
90924 ± 42% -83.7% 14791 ± 2% softirqs.CPU52.RCU
14066 ± 19% +109.8% 29510 ± 8% softirqs.CPU52.SCHED
59965 ± 15% +84.0% 110350 ± 11% softirqs.CPU52.TIMER
91145 ± 30% -84.8% 13823 ± 6% softirqs.CPU53.RCU
12915 ± 18% +136.1% 30490 ± 9% softirqs.CPU53.SCHED
60391 ± 14% +83.2% 110614 ± 10% softirqs.CPU53.TIMER
107181 ± 38% -81.1% 20290 ± 11% softirqs.CPU54.RCU
13172 ± 17% +106.1% 27148 ± 4% softirqs.CPU54.SCHED
61594 ± 13% +81.2% 111615 ± 6% softirqs.CPU54.TIMER
97873 ± 41% -78.8% 20702 ± 49% softirqs.CPU55.RCU
12950 ± 21% +113.8% 27688 ± 11% softirqs.CPU55.SCHED
60134 ± 13% +82.3% 109610 ± 11% softirqs.CPU55.TIMER
112575 ± 36% -87.1% 14513 ± 13% softirqs.CPU56.RCU
12484 ± 24% +132.9% 29080 ± 9% softirqs.CPU56.SCHED
60003 ± 12% +79.1% 107437 ± 9% softirqs.CPU56.TIMER
104995 ± 35% -86.1% 14636 ± 10% softirqs.CPU57.RCU
13488 ± 21% +121.9% 29924 ± 10% softirqs.CPU57.SCHED
60017 ± 13% +83.8% 110336 ± 14% softirqs.CPU57.TIMER
104935 ± 38% -86.3% 14364 ± 12% softirqs.CPU58.RCU
12724 ± 27% +145.4% 31227 ± 10% softirqs.CPU58.SCHED
66812 ± 18% +62.2% 108389 ± 8% softirqs.CPU58.TIMER
108792 ± 37% -86.6% 14543 ± 12% softirqs.CPU59.RCU
13357 ± 26% +126.5% 30251 ± 9% softirqs.CPU59.SCHED
60019 ± 13% +80.3% 108225 ± 10% softirqs.CPU59.TIMER
157899 ± 17% -89.4% 16667 ± 22% softirqs.CPU6.RCU
7072 ± 45% +273.2% 26396 ± 11% softirqs.CPU6.SCHED
61708 ± 12% +78.9% 110372 ± 11% softirqs.CPU6.TIMER
98202 ± 43% -85.9% 13841 ± 8% softirqs.CPU60.RCU
13142 ± 27% +143.1% 31954 ± 4% softirqs.CPU60.SCHED
60763 ± 9% +81.6% 110338 ± 11% softirqs.CPU60.TIMER
106760 ± 39% -86.1% 14830 ± 9% softirqs.CPU61.RCU
13280 ± 26% +124.7% 29834 ± 5% softirqs.CPU61.SCHED
59780 ± 13% +79.2% 107147 ± 9% softirqs.CPU61.TIMER
100012 ± 43% -85.5% 14533 ± 11% softirqs.CPU62.RCU
13730 ± 37% +127.5% 31238 ± 6% softirqs.CPU62.SCHED
60060 ± 14% +87.4% 112559 ± 11% softirqs.CPU62.TIMER
99644 ± 41% -83.1% 16814 ± 24% softirqs.CPU63.RCU
13920 ± 20% +113.0% 29653 ± 8% softirqs.CPU63.SCHED
61483 ± 11% +85.2% 113838 ± 12% softirqs.CPU63.TIMER
98673 ± 41% -86.4% 13371 ± 5% softirqs.CPU64.RCU
12874 ± 30% +146.2% 31698 ± 7% softirqs.CPU64.SCHED
59958 ± 14% +81.8% 109014 ± 10% softirqs.CPU64.TIMER
95038 ± 48% -87.0% 12392 ± 9% softirqs.CPU65.RCU
13766 ± 17% +150.0% 34411 softirqs.CPU65.SCHED
62256 ± 15% +83.8% 114419 ± 12% softirqs.CPU65.TIMER
98059 ± 41% -85.8% 13956 ± 9% softirqs.CPU66.RCU
13840 ± 27% +134.6% 32474 ± 6% softirqs.CPU66.SCHED
60586 ± 15% +87.0% 113313 ± 12% softirqs.CPU66.TIMER
98441 ± 43% -80.6% 19078 ± 36% softirqs.CPU67.RCU
13545 ± 22% +100.8% 27197 ± 19% softirqs.CPU67.SCHED
60113 ± 14% +83.7% 110443 ± 9% softirqs.CPU67.TIMER
99589 ± 42% -85.7% 14240 ± 8% softirqs.CPU68.RCU
13429 ± 29% +123.6% 30022 ± 8% softirqs.CPU68.SCHED
60597 ± 14% +80.8% 109558 ± 10% softirqs.CPU68.TIMER
92903 ± 50% -86.1% 12958 ± 6% softirqs.CPU69.RCU
12046 ± 29% +182.4% 34013 ± 5% softirqs.CPU69.SCHED
60955 ± 15% +86.7% 113828 ± 13% softirqs.CPU69.TIMER
164069 ± 14% -89.8% 16783 ± 14% softirqs.CPU7.RCU
7474 ± 40% +259.7% 26886 ± 10% softirqs.CPU7.SCHED
62306 ± 10% +76.9% 110239 ± 10% softirqs.CPU7.TIMER
93281 ± 44% -86.0% 13025 ± 8% softirqs.CPU70.RCU
13869 ± 32% +135.3% 32634 ± 5% softirqs.CPU70.SCHED
61791 ± 15% +78.1% 110050 ± 10% softirqs.CPU70.TIMER
90837 ± 45% -83.7% 14846 ± 14% softirqs.CPU71.RCU
13831 ± 18% +117.7% 30112 ± 9% softirqs.CPU71.SCHED
61682 ± 12% +79.3% 110607 ± 10% softirqs.CPU71.TIMER
123391 ± 36% -83.3% 20639 ± 13% softirqs.CPU72.RCU
10985 ± 41% +147.9% 27232 ± 21% softirqs.CPU72.SCHED
68480 ± 10% +71.9% 117714 ± 12% softirqs.CPU72.TIMER
113235 ± 37% -86.1% 15735 ± 12% softirqs.CPU73.RCU
10529 ± 54% +169.5% 28374 ± 16% softirqs.CPU73.SCHED
63871 ± 11% +75.0% 111795 ± 16% softirqs.CPU73.TIMER
112668 ± 39% -86.7% 14959 ± 15% softirqs.CPU74.RCU
11747 ± 42% +134.7% 27565 ± 15% softirqs.CPU74.SCHED
63357 ± 10% +72.7% 109401 ± 15% softirqs.CPU74.TIMER
105725 ± 36% -83.6% 17299 ± 28% softirqs.CPU75.RCU
9945 ± 51% +182.2% 28070 ± 7% softirqs.CPU75.SCHED
62141 ± 11% +78.4% 110853 ± 15% softirqs.CPU75.TIMER
105233 ± 37% -85.4% 15384 ± 10% softirqs.CPU76.RCU
11290 ± 49% +158.2% 29148 ± 3% softirqs.CPU76.SCHED
65298 ± 18% +74.6% 114041 ± 14% softirqs.CPU76.TIMER
104140 ± 43% -82.4% 18310 ± 31% softirqs.CPU77.RCU
64348 ± 11% +68.4% 108371 ± 10% softirqs.CPU77.TIMER
102544 ± 44% -82.4% 18029 ± 19% softirqs.CPU78.RCU
12037 ± 47% +107.5% 24978 ± 12% softirqs.CPU78.SCHED
64923 ± 14% +66.7% 108229 ± 12% softirqs.CPU78.TIMER
96336 ± 45% -75.5% 23635 ± 54% softirqs.CPU79.RCU
11778 ± 45% +104.1% 24034 ± 20% softirqs.CPU79.SCHED
62315 ± 10% +74.6% 108789 ± 11% softirqs.CPU79.TIMER
152851 ± 12% -88.6% 17363 ± 13% softirqs.CPU8.RCU
6669 ± 29% +282.5% 25509 ± 15% softirqs.CPU8.SCHED
62228 ± 10% +73.0% 107643 ± 10% softirqs.CPU8.TIMER
100159 ± 45% -83.3% 16770 ± 9% softirqs.CPU80.RCU
11133 ± 55% +147.7% 27581 ± 11% softirqs.CPU80.SCHED
62818 ± 10% +70.0% 106816 ± 12% softirqs.CPU80.TIMER
100214 ± 41% -84.1% 15970 ± 13% softirqs.CPU81.RCU
10962 ± 51% +167.1% 29280 ± 10% softirqs.CPU81.SCHED
62520 ± 10% +81.0% 113151 ± 13% softirqs.CPU81.TIMER
96685 ± 43% -81.3% 18033 ± 28% softirqs.CPU82.RCU
11223 ± 44% +138.2% 26738 ± 21% softirqs.CPU82.SCHED
61842 ± 11% +80.1% 111386 ± 11% softirqs.CPU82.TIMER
92424 ± 46% -82.4% 16229 ± 17% softirqs.CPU83.RCU
10849 ± 46% +158.7% 28069 ± 15% softirqs.CPU83.SCHED
62273 ± 10% +77.4% 110449 ± 14% softirqs.CPU83.TIMER
93512 ± 45% -83.5% 15439 ± 16% softirqs.CPU84.RCU
10190 ± 44% +139.0% 24354 ± 27% softirqs.CPU84.SCHED
62666 ± 9% +74.1% 109074 ± 13% softirqs.CPU84.TIMER
100596 ± 39% -83.8% 16338 ± 16% softirqs.CPU85.RCU
10554 ± 42% +157.4% 27166 ± 16% softirqs.CPU85.SCHED
61517 ± 12% +75.6% 108013 ± 12% softirqs.CPU85.TIMER
89535 ± 43% -82.5% 15707 ± 13% softirqs.CPU86.RCU
11981 ± 45% +141.5% 28934 ± 14% softirqs.CPU86.SCHED
62434 ± 11% +102.2% 126265 ± 19% softirqs.CPU86.TIMER
95204 ± 47% -82.8% 16413 ± 13% softirqs.CPU87.RCU
11032 ± 48% +156.0% 28241 ± 13% softirqs.CPU87.SCHED
61763 ± 11% +77.6% 109690 ± 13% softirqs.CPU87.TIMER
86945 ± 41% -81.0% 16553 ± 12% softirqs.CPU88.RCU
9268 ± 37% +211.4% 28864 ± 9% softirqs.CPU88.SCHED
62534 ± 11% +79.5% 112262 ± 14% softirqs.CPU88.TIMER
92979 ± 44% -80.7% 17961 ± 14% softirqs.CPU89.RCU
11846 ± 37% +131.6% 27433 ± 14% softirqs.CPU89.SCHED
63164 ± 15% +77.1% 111847 ± 16% softirqs.CPU89.TIMER
152947 ± 16% -89.5% 16089 ± 15% softirqs.CPU9.RCU
7789 ± 29% +252.5% 27457 ± 12% softirqs.CPU9.SCHED
62082 ± 10% +79.1% 111176 ± 13% softirqs.CPU9.TIMER
62677 ± 68% -75.2% 15542 ± 22% softirqs.CPU90.RCU
13906 ± 31% +110.2% 29227 ± 9% softirqs.CPU90.SCHED
62952 ± 17% +73.3% 109111 ± 10% softirqs.CPU90.TIMER
69697 ± 65% -79.6% 14199 ± 14% softirqs.CPU91.RCU
14063 ± 21% +109.0% 29398 ± 7% softirqs.CPU91.SCHED
61170 ± 15% +75.4% 107313 ± 10% softirqs.CPU91.TIMER
63874 ± 67% -71.7% 18095 ± 59% softirqs.CPU92.RCU
13638 ± 26% +117.2% 29618 ± 19% softirqs.CPU92.SCHED
61177 ± 15% +85.2% 113289 ± 13% softirqs.CPU92.TIMER
60204 ± 66% -77.2% 13740 ± 15% softirqs.CPU93.RCU
14278 ± 23% +121.5% 31632 ± 9% softirqs.CPU93.SCHED
60784 ± 17% +86.9% 113616 ± 14% softirqs.CPU93.TIMER
56970 ± 73% -76.2% 13564 ± 11% softirqs.CPU94.RCU
14801 ± 27% +108.0% 30781 ± 4% softirqs.CPU94.SCHED
60880 ± 16% +82.5% 111094 ± 12% softirqs.CPU94.TIMER
61290 ± 61% -78.6% 13124 ± 9% softirqs.CPU95.RCU
15518 ± 21% +87.7% 29121 ± 11% softirqs.CPU95.SCHED
60328 ± 16% +86.1% 112243 ± 12% softirqs.CPU95.TIMER
55646 ± 69% -74.8% 14041 ± 8% softirqs.CPU96.RCU
15853 ± 24% +92.7% 30545 ± 6% softirqs.CPU96.SCHED
61611 ± 17% +81.1% 111583 ± 11% softirqs.CPU96.TIMER
56069 ± 71% -74.2% 14442 ± 10% softirqs.CPU97.RCU
15054 ± 23% +99.1% 29967 ± 9% softirqs.CPU97.SCHED
62276 ± 14% +76.1% 109676 ± 11% softirqs.CPU97.TIMER
54992 ± 83% -70.9% 15990 ± 17% softirqs.CPU98.RCU
15081 ± 28% +95.1% 29425 ± 5% softirqs.CPU98.SCHED
60059 ± 16% +83.4% 110133 ± 11% softirqs.CPU98.TIMER
58340 ± 75% -75.3% 14398 ± 5% softirqs.CPU99.RCU
14366 ± 21% +116.4% 31088 ± 2% softirqs.CPU99.SCHED
61874 ± 13% +81.7% 112405 ± 12% softirqs.CPU99.TIMER
8716 ± 2% +44.1% 12556 ± 2% softirqs.NET_RX
12972555 ± 7% -83.1% 2188021 ± 5% softirqs.RCU
1930732 ± 5% +122.3% 4291149 ± 3% softirqs.SCHED
8819686 ± 13% +81.6% 16019120 ± 11% softirqs.TIMER
75.00 +156.0% 192.00 ± 36% interrupts.100:PCI-MSI.1572919-edge.eth0-TxRx-55
74.50 ± 2% +104.0% 152.00 interrupts.101:PCI-MSI.1572920-edge.eth0-TxRx-56
74.50 ± 2% +103.0% 151.25 interrupts.102:PCI-MSI.1572921-edge.eth0-TxRx-57
75.00 ± 3% +101.0% 150.75 interrupts.103:PCI-MSI.1572922-edge.eth0-TxRx-58
75.00 ± 3% +102.0% 151.50 interrupts.105:PCI-MSI.1572924-edge.eth0-TxRx-60
76.50 ± 3% +98.4% 151.75 interrupts.106:PCI-MSI.1572925-edge.eth0-TxRx-61
77.25 ± 5% +95.8% 151.25 interrupts.107:PCI-MSI.1572926-edge.eth0-TxRx-62
298.50 ± 25% +273.2% 1114 ±100% interrupts.45:PCI-MSI.1572864-edge.eth0-TxRx-0
132.50 ± 72% +100.4% 265.50 ± 42% interrupts.47:PCI-MSI.1572866-edge.eth0-TxRx-2
93.00 ± 31% +209.9% 288.25 ± 48% interrupts.48:PCI-MSI.1572867-edge.eth0-TxRx-3
80.75 ± 5% +203.7% 245.25 ± 53% interrupts.51:PCI-MSI.1572870-edge.eth0-TxRx-6
80.25 ± 3% +406.2% 406.25 ± 75% interrupts.52:PCI-MSI.1572871-edge.eth0-TxRx-7
87.50 ± 9% +175.1% 240.75 ± 52% interrupts.55:PCI-MSI.1572874-edge.eth0-TxRx-10
87.50 ± 17% +79.4% 157.00 ± 3% interrupts.56:PCI-MSI.1572875-edge.eth0-TxRx-11
122.25 ± 63% +949.5% 1283 ± 57% interrupts.60:PCI-MSI.1572879-edge.eth0-TxRx-15
75.25 +101.7% 151.75 interrupts.61:PCI-MSI.1572880-edge.eth0-TxRx-16
74.75 +105.0% 153.25 ± 2% interrupts.62:PCI-MSI.1572881-edge.eth0-TxRx-17
75.75 +99.3% 151.00 interrupts.63:PCI-MSI.1572882-edge.eth0-TxRx-18
80.25 ± 5% +91.3% 153.50 ± 3% interrupts.64:PCI-MSI.1572883-edge.eth0-TxRx-19
79.25 ± 5% +90.2% 150.75 interrupts.65:PCI-MSI.1572884-edge.eth0-TxRx-20
80.00 ± 7% +89.7% 151.75 interrupts.66:PCI-MSI.1572885-edge.eth0-TxRx-21
74.50 ± 2% +103.7% 151.75 interrupts.67:PCI-MSI.1572886-edge.eth0-TxRx-22
75.75 ± 3% +100.3% 151.75 interrupts.69:PCI-MSI.1572888-edge.eth0-TxRx-24
78.25 ± 8% +92.7% 150.75 interrupts.70:PCI-MSI.1572889-edge.eth0-TxRx-25
76.50 ± 3% +104.9% 156.75 ± 3% interrupts.71:PCI-MSI.1572890-edge.eth0-TxRx-26
77.75 ± 7% +93.9% 150.75 interrupts.72:PCI-MSI.1572891-edge.eth0-TxRx-27
84.00 ± 11% +80.4% 151.50 interrupts.73:PCI-MSI.1572892-edge.eth0-TxRx-28
76.75 +96.7% 151.00 interrupts.74:PCI-MSI.1572893-edge.eth0-TxRx-29
75.75 ± 2% +99.3% 151.00 interrupts.76:PCI-MSI.1572895-edge.eth0-TxRx-31
74.50 ± 2% +102.3% 150.75 interrupts.77:PCI-MSI.1572896-edge.eth0-TxRx-32
76.75 ± 3% +102.6% 155.50 ± 5% interrupts.78:PCI-MSI.1572897-edge.eth0-TxRx-33
75.25 +100.7% 151.00 interrupts.79:PCI-MSI.1572898-edge.eth0-TxRx-34
79.00 ± 8% +91.1% 151.00 interrupts.80:PCI-MSI.1572899-edge.eth0-TxRx-35
105.25 ± 30% +50.8% 158.75 ± 7% interrupts.81:PCI-MSI.1572900-edge.eth0-TxRx-36
75.25 ± 3% +101.0% 151.25 interrupts.82:PCI-MSI.1572901-edge.eth0-TxRx-37
75.00 +118.0% 163.50 ± 12% interrupts.83:PCI-MSI.1572902-edge.eth0-TxRx-38
75.25 ± 3% +100.3% 150.75 interrupts.84:PCI-MSI.1572903-edge.eth0-TxRx-39
75.25 ± 3% +101.0% 151.25 interrupts.85:PCI-MSI.1572904-edge.eth0-TxRx-40
74.50 ± 2% +102.3% 150.75 interrupts.86:PCI-MSI.1572905-edge.eth0-TxRx-41
75.00 ± 2% +101.7% 151.25 interrupts.87:PCI-MSI.1572906-edge.eth0-TxRx-42
75.25 ± 3% +128.9% 172.25 ± 21% interrupts.88:PCI-MSI.1572907-edge.eth0-TxRx-43
77.75 ± 4% +95.2% 151.75 interrupts.89:PCI-MSI.1572908-edge.eth0-TxRx-44
74.50 ± 2% +137.6% 177.00 ± 19% interrupts.90:PCI-MSI.1572909-edge.eth0-TxRx-45
75.25 ± 3% +100.7% 151.00 interrupts.91:PCI-MSI.1572910-edge.eth0-TxRx-46
74.50 ± 2% +103.0% 151.25 interrupts.92:PCI-MSI.1572911-edge.eth0-TxRx-47
74.75 ± 2% +102.7% 151.50 interrupts.93:PCI-MSI.1572912-edge.eth0-TxRx-48
77.50 ± 6% +96.8% 152.50 interrupts.94:PCI-MSI.1572913-edge.eth0-TxRx-49
81.50 ± 9% +131.6% 188.75 ± 34% interrupts.95:PCI-MSI.1572914-edge.eth0-TxRx-50
80.75 ± 5% +86.7% 150.75 interrupts.96:PCI-MSI.1572915-edge.eth0-TxRx-51
74.50 ± 2% +105.0% 152.75 interrupts.97:PCI-MSI.1572916-edge.eth0-TxRx-52
75.25 ± 3% +100.7% 151.00 interrupts.98:PCI-MSI.1572917-edge.eth0-TxRx-53
74.50 ± 2% +105.0% 152.75 interrupts.99:PCI-MSI.1572918-edge.eth0-TxRx-54
74682381 ± 2% -97.7% 1694822 ± 9% interrupts.CAL:Function_call_interrupts
298.50 ± 25% +273.2% 1114 ±100% interrupts.CPU0.45:PCI-MSI.1572864-edge.eth0-TxRx-0
914498 ± 12% -98.2% 16149 ± 20% interrupts.CPU0.CAL:Function_call_interrupts
280346 ± 4% +117.1% 608594 interrupts.CPU0.LOC:Local_timer_interrupts
17327 ± 22% -71.0% 5024 ± 19% interrupts.CPU0.RES:Rescheduling_interrupts
1021391 ± 11% -98.5% 15097 ± 21% interrupts.CPU0.TLB:TLB_shootdowns
909443 ± 7% -98.2% 16490 ± 21% interrupts.CPU1.CAL:Function_call_interrupts
282063 ± 3% +115.6% 608152 interrupts.CPU1.LOC:Local_timer_interrupts
19343 ± 24% -77.0% 4445 ± 22% interrupts.CPU1.RES:Rescheduling_interrupts
1008464 ± 6% -98.5% 15466 ± 23% interrupts.CPU1.TLB:TLB_shootdowns
87.50 ± 9% +175.1% 240.75 ± 52% interrupts.CPU10.55:PCI-MSI.1572874-edge.eth0-TxRx-10
967023 ± 8% -98.3% 16202 ± 23% interrupts.CPU10.CAL:Function_call_interrupts
283153 ± 4% +115.5% 610267 interrupts.CPU10.LOC:Local_timer_interrupts
17721 ± 20% -77.7% 3959 ± 21% interrupts.CPU10.RES:Rescheduling_interrupts
1073390 ± 7% -98.6% 15176 ± 25% interrupts.CPU10.TLB:TLB_shootdowns
269519 ± 91% -96.2% 10134 ± 16% interrupts.CPU100.CAL:Function_call_interrupts
277314 ± 15% +120.7% 612017 interrupts.CPU100.LOC:Local_timer_interrupts
291603 ± 91% -96.9% 9025 ± 19% interrupts.CPU100.TLB:TLB_shootdowns
291672 ± 87% -95.9% 12020 ± 17% interrupts.CPU101.CAL:Function_call_interrupts
263578 ± 15% +131.3% 609776 interrupts.CPU101.LOC:Local_timer_interrupts
322674 ± 87% -96.6% 10924 ± 18% interrupts.CPU101.TLB:TLB_shootdowns
282315 ± 96% -96.0% 11253 ± 21% interrupts.CPU102.CAL:Function_call_interrupts
266211 ± 15% +130.1% 612517 interrupts.CPU102.LOC:Local_timer_interrupts
307668 ± 96% -96.7% 10133 ± 24% interrupts.CPU102.TLB:TLB_shootdowns
286906 ± 78% -96.3% 10720 ± 17% interrupts.CPU103.CAL:Function_call_interrupts
268266 ± 14% +128.3% 612363 interrupts.CPU103.LOC:Local_timer_interrupts
6378 ± 87% -65.8% 2179 ± 18% interrupts.CPU103.RES:Rescheduling_interrupts
313542 ± 78% -96.9% 9609 ± 19% interrupts.CPU103.TLB:TLB_shootdowns
296983 ± 92% -96.6% 10188 ± 16% interrupts.CPU104.CAL:Function_call_interrupts
275192 ± 16% +123.2% 614344 interrupts.CPU104.LOC:Local_timer_interrupts
6515 ± 95% -65.1% 2275 ± 17% interrupts.CPU104.RES:Rescheduling_interrupts
320665 ± 92% -97.2% 9077 ± 19% interrupts.CPU104.TLB:TLB_shootdowns
296917 ± 88% -97.2% 8326 ± 39% interrupts.CPU105.CAL:Function_call_interrupts
267868 ± 15% +128.7% 612494 interrupts.CPU105.LOC:Local_timer_interrupts
1492 ± 65% -71.3% 428.25 ±173% interrupts.CPU105.NMI:Non-maskable_interrupts
1492 ± 65% -71.3% 428.25 ±173% interrupts.CPU105.PMI:Performance_monitoring_interrupts
326205 ± 87% -97.8% 7227 ± 47% interrupts.CPU105.TLB:TLB_shootdowns
302413 ± 88% -96.8% 9573 ± 30% interrupts.CPU106.CAL:Function_call_interrupts
277351 ± 13% +118.6% 606167 interrupts.CPU106.LOC:Local_timer_interrupts
333260 ± 89% -97.5% 8472 ± 35% interrupts.CPU106.TLB:TLB_shootdowns
269281 ± 60% -96.1% 10489 ± 15% interrupts.CPU107.CAL:Function_call_interrupts
274257 ± 15% +122.8% 610956 interrupts.CPU107.LOC:Local_timer_interrupts
5792 ± 79% -60.8% 2273 ± 17% interrupts.CPU107.RES:Rescheduling_interrupts
298181 ± 62% -96.9% 9384 ± 18% interrupts.CPU107.TLB:TLB_shootdowns
274771 ± 73% -96.2% 10543 ± 43% interrupts.CPU108.CAL:Function_call_interrupts
286626 ± 8% +113.1% 610692 interrupts.CPU108.LOC:Local_timer_interrupts
905.25 ± 20% -79.1% 189.00 ±173% interrupts.CPU108.NMI:Non-maskable_interrupts
905.25 ± 20% -79.1% 189.00 ±173% interrupts.CPU108.PMI:Performance_monitoring_interrupts
304852 ± 76% -96.9% 9447 ± 49% interrupts.CPU108.TLB:TLB_shootdowns
267182 ± 57% -96.6% 9172 ± 63% interrupts.CPU109.CAL:Function_call_interrupts
290439 ± 7% +110.9% 612547 interrupts.CPU109.LOC:Local_timer_interrupts
293957 ± 56% -97.3% 8021 ± 73% interrupts.CPU109.TLB:TLB_shootdowns
87.50 ± 17% +79.4% 157.00 ± 3% interrupts.CPU11.56:PCI-MSI.1572875-edge.eth0-TxRx-11
894660 ± 8% -98.0% 17586 ± 39% interrupts.CPU11.CAL:Function_call_interrupts
287488 ± 7% +111.8% 608811 interrupts.CPU11.LOC:Local_timer_interrupts
2155 ± 37% -75.1% 535.75 ±173% interrupts.CPU11.NMI:Non-maskable_interrupts
2155 ± 37% -75.1% 535.75 ±173% interrupts.CPU11.PMI:Performance_monitoring_interrupts
17329 ± 15% -77.3% 3931 ± 23% interrupts.CPU11.RES:Rescheduling_interrupts
984228 ± 7% -98.3% 16551 ± 42% interrupts.CPU11.TLB:TLB_shootdowns
276755 ± 58% -95.6% 12179 ± 36% interrupts.CPU110.CAL:Function_call_interrupts
279776 ± 9% +118.9% 612464 interrupts.CPU110.LOC:Local_timer_interrupts
1321 ± 34% -78.4% 285.75 ±173% interrupts.CPU110.NMI:Non-maskable_interrupts
1321 ± 34% -78.4% 285.75 ±173% interrupts.CPU110.PMI:Performance_monitoring_interrupts
313463 ± 58% -96.5% 11071 ± 40% interrupts.CPU110.TLB:TLB_shootdowns
221726 ± 64% -95.6% 9685 ± 46% interrupts.CPU111.CAL:Function_call_interrupts
292303 ± 5% +109.5% 612424 interrupts.CPU111.LOC:Local_timer_interrupts
241241 ± 64% -96.4% 8590 ± 52% interrupts.CPU111.TLB:TLB_shootdowns
210200 ± 62% -94.5% 11607 ± 31% interrupts.CPU112.CAL:Function_call_interrupts
292675 ± 5% +109.5% 613263 interrupts.CPU112.LOC:Local_timer_interrupts
1273 ± 46% -80.1% 253.50 ±173% interrupts.CPU112.NMI:Non-maskable_interrupts
1273 ± 46% -80.1% 253.50 ±173% interrupts.CPU112.PMI:Performance_monitoring_interrupts
228976 ± 62% -95.4% 10500 ± 35% interrupts.CPU112.TLB:TLB_shootdowns
223663 ± 63% -94.8% 11647 ± 28% interrupts.CPU113.CAL:Function_call_interrupts
285838 ± 10% +113.7% 610827 interrupts.CPU113.LOC:Local_timer_interrupts
248575 ± 66% -95.7% 10569 ± 32% interrupts.CPU113.TLB:TLB_shootdowns
186560 ± 47% -94.2% 10784 ± 40% interrupts.CPU114.CAL:Function_call_interrupts
291542 ± 6% +110.6% 614097 interrupts.CPU114.LOC:Local_timer_interrupts
1354 ± 42% -73.8% 354.75 ±173% interrupts.CPU114.NMI:Non-maskable_interrupts
1354 ± 42% -73.8% 354.75 ±173% interrupts.CPU114.PMI:Performance_monitoring_interrupts
203076 ± 48% -95.2% 9662 ± 45% interrupts.CPU114.TLB:TLB_shootdowns
221747 ± 56% -95.0% 11107 ± 38% interrupts.CPU115.CAL:Function_call_interrupts
286817 ± 9% +113.1% 611160 interrupts.CPU115.LOC:Local_timer_interrupts
1344 ± 42% -86.9% 175.75 ±173% interrupts.CPU115.NMI:Non-maskable_interrupts
1344 ± 42% -86.9% 175.75 ±173% interrupts.CPU115.PMI:Performance_monitoring_interrupts
247437 ± 59% -96.0% 10007 ± 43% interrupts.CPU115.TLB:TLB_shootdowns
207635 ± 57% -95.5% 9307 ± 47% interrupts.CPU116.CAL:Function_call_interrupts
294323 ± 4% +108.1% 612615 interrupts.CPU116.LOC:Local_timer_interrupts
994.75 ± 64% -79.9% 199.50 ±173% interrupts.CPU116.NMI:Non-maskable_interrupts
994.75 ± 64% -79.9% 199.50 ±173% interrupts.CPU116.PMI:Performance_monitoring_interrupts
4119 ± 51% -44.6% 2282 ± 38% interrupts.CPU116.RES:Rescheduling_interrupts
225292 ± 58% -96.4% 8177 ± 55% interrupts.CPU116.TLB:TLB_shootdowns
197047 ± 45% -94.4% 11115 ± 33% interrupts.CPU117.CAL:Function_call_interrupts
286330 ± 9% +114.0% 612849 interrupts.CPU117.LOC:Local_timer_interrupts
214135 ± 45% -95.3% 9996 ± 36% interrupts.CPU117.TLB:TLB_shootdowns
249591 ± 49% -96.6% 8376 ± 40% interrupts.CPU118.CAL:Function_call_interrupts
282487 ± 7% +116.2% 610709 interrupts.CPU118.LOC:Local_timer_interrupts
1467 ± 41% -89.9% 148.75 ±173% interrupts.CPU118.NMI:Non-maskable_interrupts
1467 ± 41% -89.9% 148.75 ±173% interrupts.CPU118.PMI:Performance_monitoring_interrupts
278627 ± 47% -97.4% 7259 ± 47% interrupts.CPU118.TLB:TLB_shootdowns
227464 ± 66% -96.4% 8104 ± 45% interrupts.CPU119.CAL:Function_call_interrupts
286525 ± 9% +114.0% 613265 interrupts.CPU119.LOC:Local_timer_interrupts
1100 ± 49% -89.9% 111.00 ±173% interrupts.CPU119.NMI:Non-maskable_interrupts
1100 ± 49% -89.9% 111.00 ±173% interrupts.CPU119.PMI:Performance_monitoring_interrupts
250689 ± 68% -97.2% 6988 ± 54% interrupts.CPU119.TLB:TLB_shootdowns
910218 ± 12% -97.9% 18993 ± 34% interrupts.CPU12.CAL:Function_call_interrupts
292193 ± 5% +108.4% 608902 interrupts.CPU12.LOC:Local_timer_interrupts
2446 ± 13% -89.0% 269.50 ±173% interrupts.CPU12.NMI:Non-maskable_interrupts
2446 ± 13% -89.0% 269.50 ±173% interrupts.CPU12.PMI:Performance_monitoring_interrupts
17337 ± 21% -77.8% 3856 ± 25% interrupts.CPU12.RES:Rescheduling_interrupts
1005979 ± 12% -98.2% 17975 ± 36% interrupts.CPU12.TLB:TLB_shootdowns
221222 ± 51% -96.1% 8640 ± 60% interrupts.CPU120.CAL:Function_call_interrupts
282436 ± 7% +116.7% 612115 interrupts.CPU120.LOC:Local_timer_interrupts
869.00 ± 49% -81.7% 158.75 ±173% interrupts.CPU120.NMI:Non-maskable_interrupts
869.00 ± 49% -81.7% 158.75 ±173% interrupts.CPU120.PMI:Performance_monitoring_interrupts
249558 ± 50% -97.0% 7539 ± 70% interrupts.CPU120.TLB:TLB_shootdowns
199507 ± 56% -94.7% 10491 ± 38% interrupts.CPU121.CAL:Function_call_interrupts
286909 ± 9% +113.1% 611343 interrupts.CPU121.LOC:Local_timer_interrupts
1434 ± 38% -89.2% 154.50 ±173% interrupts.CPU121.NMI:Non-maskable_interrupts
1434 ± 38% -89.2% 154.50 ±173% interrupts.CPU121.PMI:Performance_monitoring_interrupts
219815 ± 56% -95.7% 9391 ± 43% interrupts.CPU121.TLB:TLB_shootdowns
255129 ± 42% -96.4% 9212 ± 54% interrupts.CPU122.CAL:Function_call_interrupts
276573 ± 8% +121.5% 612542 interrupts.CPU122.LOC:Local_timer_interrupts
3699 ± 41% -38.6% 2270 ± 41% interrupts.CPU122.RES:Rescheduling_interrupts
292981 ± 44% -97.2% 8103 ± 62% interrupts.CPU122.TLB:TLB_shootdowns
217347 ± 66% -95.6% 9586 ± 54% interrupts.CPU123.CAL:Function_call_interrupts
284138 ± 10% +115.4% 611925 interrupts.CPU123.LOC:Local_timer_interrupts
965.25 ± 63% -84.7% 148.00 ±173% interrupts.CPU123.NMI:Non-maskable_interrupts
965.25 ± 63% -84.7% 148.00 ±173% interrupts.CPU123.PMI:Performance_monitoring_interrupts
243187 ± 68% -96.5% 8456 ± 62% interrupts.CPU123.TLB:TLB_shootdowns
223027 ± 58% -95.4% 10365 ± 50% interrupts.CPU124.CAL:Function_call_interrupts
285647 ± 10% +114.2% 611947 interrupts.CPU124.LOC:Local_timer_interrupts
1436 ± 60% -80.2% 285.00 ±173% interrupts.CPU124.NMI:Non-maskable_interrupts
1436 ± 60% -80.2% 285.00 ±173% interrupts.CPU124.PMI:Performance_monitoring_interrupts
249245 ± 61% -96.3% 9262 ± 57% interrupts.CPU124.TLB:TLB_shootdowns
195393 ± 41% -95.1% 9578 ± 69% interrupts.CPU125.CAL:Function_call_interrupts
286776 ± 9% +113.0% 610855 interrupts.CPU125.LOC:Local_timer_interrupts
4495 ± 47% -46.8% 2390 ± 37% interrupts.CPU125.RES:Rescheduling_interrupts
210269 ± 41% -96.0% 8486 ± 79% interrupts.CPU125.TLB:TLB_shootdowns
320707 ± 81% -97.3% 8610 ± 31% interrupts.CPU126.CAL:Function_call_interrupts
270205 ± 14% +126.5% 612144 interrupts.CPU126.LOC:Local_timer_interrupts
5986 ± 80% -67.8% 1929 ± 3% interrupts.CPU126.RES:Rescheduling_interrupts
350384 ± 81% -97.8% 7535 ± 37% interrupts.CPU126.TLB:TLB_shootdowns
293072 ± 79% -97.2% 8262 ± 16% interrupts.CPU127.CAL:Function_call_interrupts
281750 ± 12% +116.5% 610044 interrupts.CPU127.LOC:Local_timer_interrupts
1084 ± 30% -80.2% 214.75 ±173% interrupts.CPU127.NMI:Non-maskable_interrupts
1084 ± 30% -80.2% 214.75 ±173% interrupts.CPU127.PMI:Performance_monitoring_interrupts
5495 ± 88% -68.1% 1750 ± 13% interrupts.CPU127.RES:Rescheduling_interrupts
319049 ± 79% -97.8% 7149 ± 18% interrupts.CPU127.TLB:TLB_shootdowns
311232 ± 69% -97.7% 7248 ± 8% interrupts.CPU128.CAL:Function_call_interrupts
278575 ± 14% +119.0% 610071 interrupts.CPU128.LOC:Local_timer_interrupts
5886 ± 68% -67.6% 1908 ± 7% interrupts.CPU128.RES:Rescheduling_interrupts
338691 ± 69% -98.2% 6091 ± 8% interrupts.CPU128.TLB:TLB_shootdowns
314267 ± 72% -98.1% 5840 ± 29% interrupts.CPU129.CAL:Function_call_interrupts
270295 ± 14% +127.0% 613704 interrupts.CPU129.LOC:Local_timer_interrupts
5654 ± 72% -67.2% 1852 ± 10% interrupts.CPU129.RES:Rescheduling_interrupts
342768 ± 72% -98.6% 4732 ± 36% interrupts.CPU129.TLB:TLB_shootdowns
896870 ± 11% -98.1% 16952 ± 35% interrupts.CPU13.CAL:Function_call_interrupts
279444 ± 5% +117.6% 607983 interrupts.CPU13.LOC:Local_timer_interrupts
2058 ± 15% -86.7% 273.00 ±173% interrupts.CPU13.NMI:Non-maskable_interrupts
2058 ± 15% -86.7% 273.00 ±173% interrupts.CPU13.PMI:Performance_monitoring_interrupts
20397 ± 39% -81.1% 3858 ± 27% interrupts.CPU13.RES:Rescheduling_interrupts
998200 ± 11% -98.4% 15914 ± 38% interrupts.CPU13.TLB:TLB_shootdowns
283507 ± 70% -96.6% 9557 ± 69% interrupts.CPU130.CAL:Function_call_interrupts
276066 ± 15% +122.3% 613584 interrupts.CPU130.LOC:Local_timer_interrupts
1240 ± 17% -87.3% 157.25 ±173% interrupts.CPU130.NMI:Non-maskable_interrupts
1240 ± 17% -87.3% 157.25 ±173% interrupts.CPU130.PMI:Performance_monitoring_interrupts
5019 ± 80% -67.0% 1657 ± 11% interrupts.CPU130.RES:Rescheduling_interrupts
310421 ± 70% -97.3% 8484 ± 80% interrupts.CPU130.TLB:TLB_shootdowns
268465 ± 70% -97.7% 6061 ± 20% interrupts.CPU131.CAL:Function_call_interrupts
273953 ± 15% +122.6% 609689 interrupts.CPU131.LOC:Local_timer_interrupts
1714 ± 45% -88.5% 197.75 ±173% interrupts.CPU131.NMI:Non-maskable_interrupts
1714 ± 45% -88.5% 197.75 ±173% interrupts.CPU131.PMI:Performance_monitoring_interrupts
4887 ± 58% -67.1% 1607 ± 11% interrupts.CPU131.RES:Rescheduling_interrupts
298280 ± 73% -98.3% 4943 ± 23% interrupts.CPU131.TLB:TLB_shootdowns
286408 ± 78% -96.8% 9121 ± 29% interrupts.CPU132.CAL:Function_call_interrupts
274954 ± 16% +123.1% 613425 interrupts.CPU132.LOC:Local_timer_interrupts
988.75 ± 7% -78.7% 210.75 ±173% interrupts.CPU132.NMI:Non-maskable_interrupts
988.75 ± 7% -78.7% 210.75 ±173% interrupts.CPU132.PMI:Performance_monitoring_interrupts
5424 ± 77% -68.6% 1703 ± 19% interrupts.CPU132.RES:Rescheduling_interrupts
314842 ± 78% -97.5% 8013 ± 33% interrupts.CPU132.TLB:TLB_shootdowns
296934 ± 59% -96.9% 9119 ± 51% interrupts.CPU133.CAL:Function_call_interrupts
273477 ± 15% +123.9% 612314 interrupts.CPU133.LOC:Local_timer_interrupts
5032 ± 78% -65.9% 1714 ± 4% interrupts.CPU133.RES:Rescheduling_interrupts
332213 ± 56% -97.6% 8002 ± 57% interrupts.CPU133.TLB:TLB_shootdowns
269174 ± 62% -96.6% 9280 ± 34% interrupts.CPU134.CAL:Function_call_interrupts
275457 ± 16% +122.7% 613514 interrupts.CPU134.LOC:Local_timer_interrupts
23121 ±132% -92.8% 1667 ± 5% interrupts.CPU134.RES:Rescheduling_interrupts
297939 ± 60% -97.3% 8179 ± 39% interrupts.CPU134.TLB:TLB_shootdowns
304830 ± 73% -97.9% 6461 ± 32% interrupts.CPU135.CAL:Function_call_interrupts
266542 ± 15% +130.1% 613417 interrupts.CPU135.LOC:Local_timer_interrupts
1629 ± 34% -87.2% 208.75 ±173% interrupts.CPU135.NMI:Non-maskable_interrupts
1629 ± 34% -87.2% 208.75 ±173% interrupts.CPU135.PMI:Performance_monitoring_interrupts
5423 ± 65% -70.1% 1624 ± 5% interrupts.CPU135.RES:Rescheduling_interrupts
333755 ± 74% -98.4% 5348 ± 38% interrupts.CPU135.TLB:TLB_shootdowns
293065 ± 64% -97.2% 8101 ± 34% interrupts.CPU136.CAL:Function_call_interrupts
277110 ± 15% +120.3% 610447 interrupts.CPU136.LOC:Local_timer_interrupts
5399 ± 67% -67.1% 1776 interrupts.CPU136.RES:Rescheduling_interrupts
319744 ± 64% -97.8% 6994 ± 40% interrupts.CPU136.TLB:TLB_shootdowns
314300 ± 69% -97.6% 7414 ± 35% interrupts.CPU137.CAL:Function_call_interrupts
263007 ± 13% +132.4% 611317 interrupts.CPU137.LOC:Local_timer_interrupts
917.25 ± 23% -78.7% 195.25 ±173% interrupts.CPU137.NMI:Non-maskable_interrupts
917.25 ± 23% -78.7% 195.25 ±173% interrupts.CPU137.PMI:Performance_monitoring_interrupts
4262 ± 72% -60.5% 1684 ± 8% interrupts.CPU137.RES:Rescheduling_interrupts
362554 ± 70% -98.3% 6304 ± 41% interrupts.CPU137.TLB:TLB_shootdowns
296839 ± 64% -96.9% 9319 ± 50% interrupts.CPU138.CAL:Function_call_interrupts
264453 ± 17% +132.2% 614073 interrupts.CPU138.LOC:Local_timer_interrupts
4639 ± 78% -60.0% 1853 ± 8% interrupts.CPU138.RES:Rescheduling_interrupts
329188 ± 63% -97.5% 8184 ± 58% interrupts.CPU138.TLB:TLB_shootdowns
273595 ± 69% -97.6% 6579 ± 19% interrupts.CPU139.CAL:Function_call_interrupts
280505 ± 13% +117.5% 610220 interrupts.CPU139.LOC:Local_timer_interrupts
5066 ± 72% -66.1% 1716 ± 4% interrupts.CPU139.RES:Rescheduling_interrupts
299908 ± 68% -98.2% 5454 ± 24% interrupts.CPU139.TLB:TLB_shootdowns
949929 ± 2% -98.2% 17323 ± 22% interrupts.CPU14.CAL:Function_call_interrupts
276071 ± 2% +119.9% 607061 interrupts.CPU14.LOC:Local_timer_interrupts
1478 ± 27% -81.4% 274.50 ±173% interrupts.CPU14.NMI:Non-maskable_interrupts
1478 ± 27% -81.4% 274.50 ±173% interrupts.CPU14.PMI:Performance_monitoring_interrupts
16967 ± 23% -77.2% 3868 ± 24% interrupts.CPU14.RES:Rescheduling_interrupts
1060256 ± 3% -98.5% 16286 ± 23% interrupts.CPU14.TLB:TLB_shootdowns
268862 ± 64% -97.3% 7361 ± 17% interrupts.CPU140.CAL:Function_call_interrupts
272574 ± 14% +125.2% 613808 interrupts.CPU140.LOC:Local_timer_interrupts
1098 ± 40% -80.5% 214.25 ±173% interrupts.CPU140.NMI:Non-maskable_interrupts
1098 ± 40% -80.5% 214.25 ±173% interrupts.CPU140.PMI:Performance_monitoring_interrupts
4746 ± 75% -61.7% 1819 ± 3% interrupts.CPU140.RES:Rescheduling_interrupts
297616 ± 63% -97.9% 6238 ± 21% interrupts.CPU140.TLB:TLB_shootdowns
243590 ± 54% -95.8% 10219 ± 54% interrupts.CPU141.CAL:Function_call_interrupts
282721 ± 11% +116.5% 612168 interrupts.CPU141.LOC:Local_timer_interrupts
1112 ± 45% -84.4% 173.00 ±173% interrupts.CPU141.NMI:Non-maskable_interrupts
1112 ± 45% -84.4% 173.00 ±173% interrupts.CPU141.PMI:Performance_monitoring_interrupts
4972 ± 77% -65.1% 1735 ± 11% interrupts.CPU141.RES:Rescheduling_interrupts
264794 ± 54% -96.6% 9118 ± 62% interrupts.CPU141.TLB:TLB_shootdowns
316276 ± 44% -96.7% 10543 ± 32% interrupts.CPU142.CAL:Function_call_interrupts
265522 ± 16% +131.1% 613546 interrupts.CPU142.LOC:Local_timer_interrupts
366015 ± 41% -97.4% 9513 ± 36% interrupts.CPU142.TLB:TLB_shootdowns
285901 ± 51% -97.0% 8435 ± 34% interrupts.CPU143.CAL:Function_call_interrupts
265658 ± 14% +129.7% 610109 interrupts.CPU143.LOC:Local_timer_interrupts
1285 ± 64% -83.2% 215.50 ±173% interrupts.CPU143.NMI:Non-maskable_interrupts
1285 ± 64% -83.2% 215.50 ±173% interrupts.CPU143.PMI:Performance_monitoring_interrupts
4872 ± 70% -61.4% 1879 ± 13% interrupts.CPU143.RES:Rescheduling_interrupts
320459 ± 48% -97.7% 7335 ± 38% interrupts.CPU143.TLB:TLB_shootdowns
122.25 ± 63% +949.5% 1283 ± 57% interrupts.CPU15.60:PCI-MSI.1572879-edge.eth0-TxRx-15
930715 ± 8% -98.3% 15706 ± 22% interrupts.CPU15.CAL:Function_call_interrupts
288293 ± 5% +111.2% 608914 interrupts.CPU15.LOC:Local_timer_interrupts
17785 ± 19% -78.3% 3862 ± 22% interrupts.CPU15.RES:Rescheduling_interrupts
1032325 ± 7% -98.6% 14665 ± 24% interrupts.CPU15.TLB:TLB_shootdowns
75.25 +101.7% 151.75 interrupts.CPU16.61:PCI-MSI.1572880-edge.eth0-TxRx-16
846100 ± 14% -98.1% 15849 ± 23% interrupts.CPU16.CAL:Function_call_interrupts
290352 ± 5% +109.0% 606827 interrupts.CPU16.LOC:Local_timer_interrupts
16641 ± 26% -77.3% 3776 ± 24% interrupts.CPU16.RES:Rescheduling_interrupts
937358 ± 13% -98.4% 14790 ± 25% interrupts.CPU16.TLB:TLB_shootdowns
74.75 +105.0% 153.25 ± 2% interrupts.CPU17.62:PCI-MSI.1572881-edge.eth0-TxRx-17
949351 ± 13% -98.2% 17097 ± 17% interrupts.CPU17.CAL:Function_call_interrupts
285597 ± 7% +113.2% 609036 interrupts.CPU17.LOC:Local_timer_interrupts
16663 ± 20% -76.8% 3868 ± 21% interrupts.CPU17.RES:Rescheduling_interrupts
1060495 ± 14% -98.5% 16045 ± 19% interrupts.CPU17.TLB:TLB_shootdowns
75.75 +99.3% 151.00 interrupts.CPU18.63:PCI-MSI.1572882-edge.eth0-TxRx-18
645547 ± 17% -98.3% 10717 ± 29% interrupts.CPU18.CAL:Function_call_interrupts
278062 ± 13% +121.1% 614758 interrupts.CPU18.LOC:Local_timer_interrupts
880.25 ± 25% -73.2% 235.75 ±172% interrupts.CPU18.NMI:Non-maskable_interrupts
880.25 ± 25% -73.2% 235.75 ±172% interrupts.CPU18.PMI:Performance_monitoring_interrupts
10884 ± 43% -73.0% 2943 ± 16% interrupts.CPU18.RES:Rescheduling_interrupts
707614 ± 17% -98.6% 9629 ± 33% interrupts.CPU18.TLB:TLB_shootdowns
80.25 ± 5% +91.3% 153.50 ± 3% interrupts.CPU19.64:PCI-MSI.1572883-edge.eth0-TxRx-19
673179 ± 30% -98.1% 12592 ± 26% interrupts.CPU19.CAL:Function_call_interrupts
275442 ± 15% +122.8% 613740 interrupts.CPU19.LOC:Local_timer_interrupts
10948 ± 52% -73.9% 2852 ± 20% interrupts.CPU19.RES:Rescheduling_interrupts
746991 ± 30% -98.5% 11521 ± 29% interrupts.CPU19.TLB:TLB_shootdowns
132.50 ± 72% +100.4% 265.50 ± 42% interrupts.CPU2.47:PCI-MSI.1572866-edge.eth0-TxRx-2
933273 ± 11% -98.2% 17177 ± 26% interrupts.CPU2.CAL:Function_call_interrupts
271018 ± 5% +123.8% 606417 interrupts.CPU2.LOC:Local_timer_interrupts
2413 ± 26% -76.8% 559.50 ±173% interrupts.CPU2.NMI:Non-maskable_interrupts
2413 ± 26% -76.8% 559.50 ±173% interrupts.CPU2.PMI:Performance_monitoring_interrupts
16329 ± 19% -75.7% 3961 ± 24% interrupts.CPU2.RES:Rescheduling_interrupts
1038537 ± 11% -98.4% 16147 ± 27% interrupts.CPU2.TLB:TLB_shootdowns
79.25 ± 5% +90.2% 150.75 interrupts.CPU20.65:PCI-MSI.1572884-edge.eth0-TxRx-20
609982 ± 23% -98.0% 12313 ± 28% interrupts.CPU20.CAL:Function_call_interrupts
275379 ± 15% +121.6% 610307 interrupts.CPU20.LOC:Local_timer_interrupts
10528 ± 50% -64.6% 3727 ± 48% interrupts.CPU20.RES:Rescheduling_interrupts
670139 ± 23% -98.3% 11230 ± 32% interrupts.CPU20.TLB:TLB_shootdowns
80.00 ± 7% +89.7% 151.75 interrupts.CPU21.66:PCI-MSI.1572885-edge.eth0-TxRx-21
607863 ± 26% -98.1% 11742 ± 37% interrupts.CPU21.CAL:Function_call_interrupts
276045 ± 15% +120.9% 609811 interrupts.CPU21.LOC:Local_timer_interrupts
1186 ± 49% -83.1% 201.00 ±173% interrupts.CPU21.NMI:Non-maskable_interrupts
1186 ± 49% -83.1% 201.00 ±173% interrupts.CPU21.PMI:Performance_monitoring_interrupts
10980 ± 48% -73.8% 2874 ± 19% interrupts.CPU21.RES:Rescheduling_interrupts
664177 ± 25% -98.4% 10632 ± 41% interrupts.CPU21.TLB:TLB_shootdowns
74.50 ± 2% +103.7% 151.75 interrupts.CPU22.67:PCI-MSI.1572886-edge.eth0-TxRx-22
600537 ± 24% -98.3% 10181 ± 22% interrupts.CPU22.CAL:Function_call_interrupts
275077 ± 16% +121.2% 608414 interrupts.CPU22.LOC:Local_timer_interrupts
10789 ± 51% -75.6% 2628 ± 24% interrupts.CPU22.RES:Rescheduling_interrupts
655143 ± 24% -98.6% 9095 ± 26% interrupts.CPU22.TLB:TLB_shootdowns
630405 ± 23% -98.6% 8847 ± 32% interrupts.CPU23.CAL:Function_call_interrupts
279751 ± 12% +119.1% 612931 interrupts.CPU23.LOC:Local_timer_interrupts
1528 ± 17% -86.0% 214.25 ±172% interrupts.CPU23.NMI:Non-maskable_interrupts
1528 ± 17% -86.0% 214.25 ±172% interrupts.CPU23.PMI:Performance_monitoring_interrupts
11182 ± 48% -74.7% 2823 ± 20% interrupts.CPU23.RES:Rescheduling_interrupts
688294 ± 23% -98.9% 7748 ± 38% interrupts.CPU23.TLB:TLB_shootdowns
75.75 ± 3% +100.3% 151.75 interrupts.CPU24.69:PCI-MSI.1572888-edge.eth0-TxRx-24
652396 ± 18% -98.5% 9977 ± 40% interrupts.CPU24.CAL:Function_call_interrupts
270545 ± 14% +125.8% 610813 interrupts.CPU24.LOC:Local_timer_interrupts
11363 ± 46% -74.9% 2850 ± 15% interrupts.CPU24.RES:Rescheduling_interrupts
723683 ± 16% -98.8% 8872 ± 46% interrupts.CPU24.TLB:TLB_shootdowns
78.25 ± 8% +92.7% 150.75 interrupts.CPU25.70:PCI-MSI.1572889-edge.eth0-TxRx-25
647055 ± 17% -98.1% 12227 ± 15% interrupts.CPU25.CAL:Function_call_interrupts
270960 ± 16% +125.3% 610355 interrupts.CPU25.LOC:Local_timer_interrupts
1591 ± 23% -87.1% 205.75 ±173% interrupts.CPU25.NMI:Non-maskable_interrupts
1591 ± 23% -87.1% 205.75 ±173% interrupts.CPU25.PMI:Performance_monitoring_interrupts
10595 ± 43% -72.6% 2899 ± 14% interrupts.CPU25.RES:Rescheduling_interrupts
715159 ± 16% -98.4% 11161 ± 16% interrupts.CPU25.TLB:TLB_shootdowns
76.50 ± 3% +104.9% 156.75 ± 3% interrupts.CPU26.71:PCI-MSI.1572890-edge.eth0-TxRx-26
667160 ± 27% -98.2% 11849 ± 23% interrupts.CPU26.CAL:Function_call_interrupts
271990 ± 15% +124.3% 610136 interrupts.CPU26.LOC:Local_timer_interrupts
1457 ± 20% -84.3% 228.75 ±173% interrupts.CPU26.NMI:Non-maskable_interrupts
1457 ± 20% -84.3% 228.75 ±173% interrupts.CPU26.PMI:Performance_monitoring_interrupts
10385 ± 43% -73.9% 2711 ± 14% interrupts.CPU26.RES:Rescheduling_interrupts
747329 ± 29% -98.6% 10721 ± 26% interrupts.CPU26.TLB:TLB_shootdowns
77.75 ± 7% +93.9% 150.75 interrupts.CPU27.72:PCI-MSI.1572891-edge.eth0-TxRx-27
643390 ± 23% -98.3% 10665 ± 14% interrupts.CPU27.CAL:Function_call_interrupts
269670 ± 12% +127.0% 612112 interrupts.CPU27.LOC:Local_timer_interrupts
9744 ± 44% -69.0% 3016 ± 15% interrupts.CPU27.RES:Rescheduling_interrupts
716367 ± 23% -98.7% 9600 ± 16% interrupts.CPU27.TLB:TLB_shootdowns
84.00 ± 11% +80.4% 151.50 interrupts.CPU28.73:PCI-MSI.1572892-edge.eth0-TxRx-28
674461 ± 22% -98.5% 10449 ± 17% interrupts.CPU28.CAL:Function_call_interrupts
267114 ± 15% +129.9% 614200 interrupts.CPU28.LOC:Local_timer_interrupts
1122 ± 41% -80.0% 224.25 ±173% interrupts.CPU28.NMI:Non-maskable_interrupts
1122 ± 41% -80.0% 224.25 ±173% interrupts.CPU28.PMI:Performance_monitoring_interrupts
10858 ± 52% -73.1% 2919 ± 16% interrupts.CPU28.RES:Rescheduling_interrupts
742288 ± 21% -98.7% 9361 ± 19% interrupts.CPU28.TLB:TLB_shootdowns
76.75 +96.7% 151.00 interrupts.CPU29.74:PCI-MSI.1572893-edge.eth0-TxRx-29
618163 ± 31% -98.0% 12304 ± 17% interrupts.CPU29.CAL:Function_call_interrupts
273122 ± 17% +123.7% 610915 interrupts.CPU29.LOC:Local_timer_interrupts
1545 ± 30% -75.7% 375.00 ±173% interrupts.CPU29.NMI:Non-maskable_interrupts
1545 ± 30% -75.7% 375.00 ±173% interrupts.CPU29.PMI:Performance_monitoring_interrupts
10589 ± 54% -74.3% 2718 ± 21% interrupts.CPU29.RES:Rescheduling_interrupts
677771 ± 31% -98.3% 11207 ± 20% interrupts.CPU29.TLB:TLB_shootdowns
93.00 ± 31% +209.9% 288.25 ± 48% interrupts.CPU3.48:PCI-MSI.1572867-edge.eth0-TxRx-3
923756 ± 7% -98.0% 18900 ± 24% interrupts.CPU3.CAL:Function_call_interrupts
284849 ± 4% +114.3% 610562 interrupts.CPU3.LOC:Local_timer_interrupts
17053 ± 22% -75.9% 4113 ± 20% interrupts.CPU3.RES:Rescheduling_interrupts
1025520 ± 6% -98.3% 17871 ± 25% interrupts.CPU3.TLB:TLB_shootdowns
606421 ± 26% -97.5% 14896 ± 28% interrupts.CPU30.CAL:Function_call_interrupts
273717 ± 17% +123.6% 612130 interrupts.CPU30.LOC:Local_timer_interrupts
10123 ± 56% -69.9% 3044 ± 8% interrupts.CPU30.RES:Rescheduling_interrupts
678043 ± 25% -98.0% 13799 ± 30% interrupts.CPU30.TLB:TLB_shootdowns
75.75 ± 2% +99.3% 151.00 interrupts.CPU31.76:PCI-MSI.1572895-edge.eth0-TxRx-31
614682 ± 21% -98.1% 11811 ± 27% interrupts.CPU31.CAL:Function_call_interrupts
273565 ± 15% +124.0% 612786 interrupts.CPU31.LOC:Local_timer_interrupts
1863 ± 32% -77.6% 418.25 ±173% interrupts.CPU31.NMI:Non-maskable_interrupts
1863 ± 32% -77.6% 418.25 ±173% interrupts.CPU31.PMI:Performance_monitoring_interrupts
11072 ± 50% -74.9% 2780 ± 18% interrupts.CPU31.RES:Rescheduling_interrupts
673714 ± 20% -98.4% 10709 ± 30% interrupts.CPU31.TLB:TLB_shootdowns
74.50 ± 2% +102.3% 150.75 interrupts.CPU32.77:PCI-MSI.1572896-edge.eth0-TxRx-32
657006 ± 30% -98.2% 12098 ± 21% interrupts.CPU32.CAL:Function_call_interrupts
274908 ± 16% +121.9% 610112 interrupts.CPU32.LOC:Local_timer_interrupts
11320 ± 48% -76.3% 2680 ± 22% interrupts.CPU32.RES:Rescheduling_interrupts
719498 ± 30% -98.5% 10981 ± 24% interrupts.CPU32.TLB:TLB_shootdowns
76.75 ± 3% +102.6% 155.50 ± 5% interrupts.CPU33.78:PCI-MSI.1572897-edge.eth0-TxRx-33
663241 ± 24% -98.4% 10893 ± 22% interrupts.CPU33.CAL:Function_call_interrupts
278400 ± 13% +118.6% 608674 interrupts.CPU33.LOC:Local_timer_interrupts
11796 ± 47% -76.2% 2812 ± 15% interrupts.CPU33.RES:Rescheduling_interrupts
731432 ± 23% -98.7% 9798 ± 26% interrupts.CPU33.TLB:TLB_shootdowns
75.25 +100.7% 151.00 interrupts.CPU34.79:PCI-MSI.1572898-edge.eth0-TxRx-34
642251 ± 24% -98.4% 10101 ± 30% interrupts.CPU34.CAL:Function_call_interrupts
268454 ± 16% +128.7% 613994 interrupts.CPU34.LOC:Local_timer_interrupts
9906 ± 49% -73.7% 2609 ± 23% interrupts.CPU34.RES:Rescheduling_interrupts
718139 ± 24% -98.7% 8999 ± 35% interrupts.CPU34.TLB:TLB_shootdowns
79.00 ± 8% +91.1% 151.00 interrupts.CPU35.80:PCI-MSI.1572899-edge.eth0-TxRx-35
599308 ± 24% -98.1% 11108 ± 13% interrupts.CPU35.CAL:Function_call_interrupts
280035 ± 11% +119.0% 613246 interrupts.CPU35.LOC:Local_timer_interrupts
11089 ± 49% -75.9% 2677 ± 16% interrupts.CPU35.RES:Rescheduling_interrupts
653661 ± 23% -98.5% 10004 ± 15% interrupts.CPU35.TLB:TLB_shootdowns
105.25 ± 30% +50.8% 158.75 ± 7% interrupts.CPU36.81:PCI-MSI.1572900-edge.eth0-TxRx-36
576675 ± 34% -97.8% 12733 ± 38% interrupts.CPU36.CAL:Function_call_interrupts
280316 ± 7% +117.8% 610536 interrupts.CPU36.LOC:Local_timer_interrupts
8217 ± 46% -65.1% 2868 ± 31% interrupts.CPU36.RES:Rescheduling_interrupts
639697 ± 34% -98.2% 11682 ± 42% interrupts.CPU36.TLB:TLB_shootdowns
75.25 ± 3% +101.0% 151.25 interrupts.CPU37.82:PCI-MSI.1572901-edge.eth0-TxRx-37
585255 ± 34% -98.1% 11315 ± 48% interrupts.CPU37.CAL:Function_call_interrupts
282162 ± 7% +117.3% 613248 interrupts.CPU37.LOC:Local_timer_interrupts
1790 ± 41% -82.4% 314.75 ±173% interrupts.CPU37.NMI:Non-maskable_interrupts
1790 ± 41% -82.4% 314.75 ±173% interrupts.CPU37.PMI:Performance_monitoring_interrupts
8908 ± 46% -66.8% 2955 ± 28% interrupts.CPU37.RES:Rescheduling_interrupts
646131 ± 34% -98.4% 10235 ± 54% interrupts.CPU37.TLB:TLB_shootdowns
75.00 +118.0% 163.50 ± 12% interrupts.CPU38.83:PCI-MSI.1572902-edge.eth0-TxRx-38
571165 ± 35% -97.9% 11914 ± 43% interrupts.CPU38.CAL:Function_call_interrupts
284027 ± 7% +115.6% 612253 interrupts.CPU38.LOC:Local_timer_interrupts
869.75 ± 28% -83.4% 144.00 ±173% interrupts.CPU38.NMI:Non-maskable_interrupts
869.75 ± 28% -83.4% 144.00 ±173% interrupts.CPU38.PMI:Performance_monitoring_interrupts
8272 ± 46% -66.5% 2774 ± 29% interrupts.CPU38.RES:Rescheduling_interrupts
629458 ± 36% -98.3% 10836 ± 48% interrupts.CPU38.TLB:TLB_shootdowns
75.25 ± 3% +100.3% 150.75 interrupts.CPU39.84:PCI-MSI.1572903-edge.eth0-TxRx-39
598700 ± 42% -98.0% 11874 ± 46% interrupts.CPU39.CAL:Function_call_interrupts
279872 ± 9% +117.3% 608251 interrupts.CPU39.LOC:Local_timer_interrupts
9117 ± 38% -69.0% 2826 ± 26% interrupts.CPU39.RES:Rescheduling_interrupts
657735 ± 43% -98.4% 10793 ± 52% interrupts.CPU39.TLB:TLB_shootdowns
953724 ± 9% -98.1% 17800 ± 31% interrupts.CPU4.CAL:Function_call_interrupts
280572 ± 4% +116.5% 607494 interrupts.CPU4.LOC:Local_timer_interrupts
17128 ± 23% -77.3% 3896 ± 24% interrupts.CPU4.RES:Rescheduling_interrupts
1060132 ± 8% -98.4% 16760 ± 33% interrupts.CPU4.TLB:TLB_shootdowns
75.25 ± 3% +101.0% 151.25 interrupts.CPU40.85:PCI-MSI.1572904-edge.eth0-TxRx-40
603821 ± 36% -98.0% 11876 ± 45% interrupts.CPU40.CAL:Function_call_interrupts
282927 ± 8% +116.1% 611404 interrupts.CPU40.LOC:Local_timer_interrupts
1638 ± 60% -91.9% 132.75 ±173% interrupts.CPU40.NMI:Non-maskable_interrupts
1638 ± 60% -91.9% 132.75 ±173% interrupts.CPU40.PMI:Performance_monitoring_interrupts
8973 ± 45% -69.6% 2728 ± 31% interrupts.CPU40.RES:Rescheduling_interrupts
671599 ± 36% -98.4% 10786 ± 50% interrupts.CPU40.TLB:TLB_shootdowns
74.50 ± 2% +102.3% 150.75 interrupts.CPU41.86:PCI-MSI.1572905-edge.eth0-TxRx-41
578720 ± 35% -98.1% 10865 ± 48% interrupts.CPU41.CAL:Function_call_interrupts
287826 ± 8% +111.8% 609750 interrupts.CPU41.LOC:Local_timer_interrupts
1253 ± 20% -77.0% 288.00 ±173% interrupts.CPU41.NMI:Non-maskable_interrupts
1253 ± 20% -77.0% 288.00 ±173% interrupts.CPU41.PMI:Performance_monitoring_interrupts
8548 ± 37% -66.7% 2845 ± 31% interrupts.CPU41.RES:Rescheduling_interrupts
641704 ± 35% -98.5% 9786 ± 54% interrupts.CPU41.TLB:TLB_shootdowns
75.00 ± 2% +101.7% 151.25 interrupts.CPU42.87:PCI-MSI.1572906-edge.eth0-TxRx-42
540606 ± 34% -97.7% 12546 ± 32% interrupts.CPU42.CAL:Function_call_interrupts
282054 ± 11% +115.7% 608457 interrupts.CPU42.LOC:Local_timer_interrupts
8570 ± 35% -65.9% 2926 ± 28% interrupts.CPU42.RES:Rescheduling_interrupts
601634 ± 36% -98.1% 11456 ± 36% interrupts.CPU42.TLB:TLB_shootdowns
75.25 ± 3% +128.9% 172.25 ± 21% interrupts.CPU43.88:PCI-MSI.1572907-edge.eth0-TxRx-43
616654 ± 25% -97.6% 14630 ± 27% interrupts.CPU43.CAL:Function_call_interrupts
277903 ± 9% +119.7% 610610 interrupts.CPU43.LOC:Local_timer_interrupts
9259 ± 40% -67.3% 3031 ± 27% interrupts.CPU43.RES:Rescheduling_interrupts
680242 ± 25% -98.0% 13552 ± 29% interrupts.CPU43.TLB:TLB_shootdowns
77.75 ± 4% +95.2% 151.75 interrupts.CPU44.89:PCI-MSI.1572908-edge.eth0-TxRx-44
605519 ± 37% -98.1% 11416 ± 30% interrupts.CPU44.CAL:Function_call_interrupts
285149 ± 10% +113.7% 609312 interrupts.CPU44.LOC:Local_timer_interrupts
1236 ± 33% -83.4% 205.25 ±173% interrupts.CPU44.NMI:Non-maskable_interrupts
1236 ± 33% -83.4% 205.25 ±173% interrupts.CPU44.PMI:Performance_monitoring_interrupts
9232 ± 37% -69.7% 2796 ± 33% interrupts.CPU44.RES:Rescheduling_interrupts
669258 ± 38% -98.5% 10363 ± 34% interrupts.CPU44.TLB:TLB_shootdowns
74.50 ± 2% +137.6% 177.00 ± 19% interrupts.CPU45.90:PCI-MSI.1572909-edge.eth0-TxRx-45
611505 ± 42% -98.1% 11614 ± 25% interrupts.CPU45.CAL:Function_call_interrupts
278958 ± 9% +117.4% 606506 interrupts.CPU45.LOC:Local_timer_interrupts
1209 ± 54% -85.6% 174.00 ±173% interrupts.CPU45.NMI:Non-maskable_interrupts
1209 ± 54% -85.6% 174.00 ±173% interrupts.CPU45.PMI:Performance_monitoring_interrupts
9127 ± 40% -68.6% 2866 ± 25% interrupts.CPU45.RES:Rescheduling_interrupts
674116 ± 43% -98.4% 10521 ± 28% interrupts.CPU45.TLB:TLB_shootdowns
75.25 ± 3% +100.7% 151.00 interrupts.CPU46.91:PCI-MSI.1572910-edge.eth0-TxRx-46
604011 ± 33% -98.1% 11401 ± 34% interrupts.CPU46.CAL:Function_call_interrupts
289666 ± 7% +111.5% 612672 interrupts.CPU46.LOC:Local_timer_interrupts
9620 ± 35% -70.8% 2806 ± 30% interrupts.CPU46.RES:Rescheduling_interrupts
666771 ± 34% -98.5% 10313 ± 39% interrupts.CPU46.TLB:TLB_shootdowns
74.50 ± 2% +103.0% 151.25 interrupts.CPU47.92:PCI-MSI.1572911-edge.eth0-TxRx-47
592103 ± 38% -98.4% 9487 ± 39% interrupts.CPU47.CAL:Function_call_interrupts
292671 ± 5% +108.3% 609714 interrupts.CPU47.LOC:Local_timer_interrupts
1444 ± 25% -82.7% 250.25 ±173% interrupts.CPU47.NMI:Non-maskable_interrupts
1444 ± 25% -82.7% 250.25 ±173% interrupts.CPU47.PMI:Performance_monitoring_interrupts
9041 ± 40% -69.3% 2778 ± 31% interrupts.CPU47.RES:Rescheduling_interrupts
651013 ± 39% -98.7% 8392 ± 44% interrupts.CPU47.TLB:TLB_shootdowns
74.75 ± 2% +102.7% 151.50 interrupts.CPU48.93:PCI-MSI.1572912-edge.eth0-TxRx-48
554114 ± 41% -98.2% 10119 ± 45% interrupts.CPU48.CAL:Function_call_interrupts
284700 ± 9% +115.7% 614068 interrupts.CPU48.LOC:Local_timer_interrupts
1539 ± 61% -89.8% 157.25 ±173% interrupts.CPU48.NMI:Non-maskable_interrupts
1539 ± 61% -89.8% 157.25 ±173% interrupts.CPU48.PMI:Performance_monitoring_interrupts
8861 ± 48% -61.5% 3412 ± 23% interrupts.CPU48.RES:Rescheduling_interrupts
606269 ± 41% -98.5% 9011 ± 52% interrupts.CPU48.TLB:TLB_shootdowns
77.50 ± 6% +96.8% 152.50 interrupts.CPU49.94:PCI-MSI.1572913-edge.eth0-TxRx-49
577918 ± 32% -98.2% 10547 ± 41% interrupts.CPU49.CAL:Function_call_interrupts
278289 ± 10% +120.2% 612837 interrupts.CPU49.LOC:Local_timer_interrupts
1456 ± 71% -78.1% 319.75 ±173% interrupts.CPU49.NMI:Non-maskable_interrupts
1456 ± 71% -78.1% 319.75 ±173% interrupts.CPU49.PMI:Performance_monitoring_interrupts
8435 ± 48% -66.9% 2792 ± 32% interrupts.CPU49.RES:Rescheduling_interrupts
640512 ± 30% -98.5% 9472 ± 46% interrupts.CPU49.TLB:TLB_shootdowns
918859 ± 7% -98.1% 17263 ± 30% interrupts.CPU5.CAL:Function_call_interrupts
287835 ± 5% +112.1% 610589 interrupts.CPU5.LOC:Local_timer_interrupts
17179 ± 22% -77.0% 3950 ± 25% interrupts.CPU5.RES:Rescheduling_interrupts
1014576 ± 6% -98.4% 16226 ± 32% interrupts.CPU5.TLB:TLB_shootdowns
81.50 ± 9% +131.6% 188.75 ± 34% interrupts.CPU50.95:PCI-MSI.1572914-edge.eth0-TxRx-50
540125 ± 34% -98.1% 10216 ± 36% interrupts.CPU50.CAL:Function_call_interrupts
289163 ± 7% +111.3% 610942 interrupts.CPU50.LOC:Local_timer_interrupts
1808 ± 52% -91.0% 162.50 ±173% interrupts.CPU50.NMI:Non-maskable_interrupts
1808 ± 52% -91.0% 162.50 ±173% interrupts.CPU50.PMI:Performance_monitoring_interrupts
8609 ± 39% -65.0% 3010 ± 21% interrupts.CPU50.RES:Rescheduling_interrupts
591069 ± 35% -98.5% 9121 ± 41% interrupts.CPU50.TLB:TLB_shootdowns
80.75 ± 5% +86.7% 150.75 interrupts.CPU51.96:PCI-MSI.1572915-edge.eth0-TxRx-51
552961 ± 39% -98.1% 10256 ± 38% interrupts.CPU51.CAL:Function_call_interrupts
284759 ± 5% +115.1% 612655 interrupts.CPU51.LOC:Local_timer_interrupts
7988 ± 44% -65.4% 2764 ± 31% interrupts.CPU51.RES:Rescheduling_interrupts
616023 ± 40% -98.5% 9169 ± 43% interrupts.CPU51.TLB:TLB_shootdowns
74.50 ± 2% +105.0% 152.75 interrupts.CPU52.97:PCI-MSI.1572916-edge.eth0-TxRx-52
545175 ± 41% -97.5% 13766 ± 37% interrupts.CPU52.CAL:Function_call_interrupts
280809 ± 7% +117.6% 611036 interrupts.CPU52.LOC:Local_timer_interrupts
7862 ± 48% -65.9% 2681 ± 32% interrupts.CPU52.RES:Rescheduling_interrupts
603230 ± 41% -97.9% 12658 ± 40% interrupts.CPU52.TLB:TLB_shootdowns
75.25 ± 3% +100.7% 151.00 interrupts.CPU53.98:PCI-MSI.1572917-edge.eth0-TxRx-53
534168 ± 26% -98.1% 10303 ± 57% interrupts.CPU53.CAL:Function_call_interrupts
272007 ± 9% +124.3% 610206 interrupts.CPU53.LOC:Local_timer_interrupts
8567 ± 37% -65.8% 2933 ± 33% interrupts.CPU53.RES:Rescheduling_interrupts
589403 ± 26% -98.4% 9219 ± 64% interrupts.CPU53.TLB:TLB_shootdowns
74.50 ± 2% +105.0% 152.75 interrupts.CPU54.99:PCI-MSI.1572918-edge.eth0-TxRx-54
604818 ± 28% -98.2% 10869 ± 13% interrupts.CPU54.CAL:Function_call_interrupts
275029 ± 15% +122.9% 612975 interrupts.CPU54.LOC:Local_timer_interrupts
1375 ± 73% -84.9% 207.50 ±173% interrupts.CPU54.NMI:Non-maskable_interrupts
1375 ± 73% -84.9% 207.50 ±173% interrupts.CPU54.PMI:Performance_monitoring_interrupts
9396 ± 46% -73.1% 2525 ± 4% interrupts.CPU54.RES:Rescheduling_interrupts
671906 ± 28% -98.5% 9784 ± 15% interrupts.CPU54.TLB:TLB_shootdowns
75.00 +156.0% 192.00 ± 36% interrupts.CPU55.100:PCI-MSI.1572919-edge.eth0-TxRx-55
589116 ± 34% -98.4% 9579 ± 24% interrupts.CPU55.CAL:Function_call_interrupts
273467 ± 17% +123.2% 610320 interrupts.CPU55.LOC:Local_timer_interrupts
9561 ± 50% -76.8% 2219 ± 5% interrupts.CPU55.RES:Rescheduling_interrupts
650563 ± 33% -98.7% 8484 ± 27% interrupts.CPU55.TLB:TLB_shootdowns
74.50 ± 2% +104.0% 152.00 interrupts.CPU56.101:PCI-MSI.1572920-edge.eth0-TxRx-56
622630 ± 32% -98.1% 11790 ± 28% interrupts.CPU56.CAL:Function_call_interrupts
273117 ± 15% +124.1% 612148 interrupts.CPU56.LOC:Local_timer_interrupts
1066 ± 53% -79.4% 219.75 ±173% interrupts.CPU56.NMI:Non-maskable_interrupts
1066 ± 53% -79.4% 219.75 ±173% interrupts.CPU56.PMI:Performance_monitoring_interrupts
10161 ± 45% -77.4% 2295 ± 5% interrupts.CPU56.RES:Rescheduling_interrupts
684138 ± 31% -98.4% 10673 ± 31% interrupts.CPU56.TLB:TLB_shootdowns
74.50 ± 2% +103.0% 151.25 interrupts.CPU57.102:PCI-MSI.1572921-edge.eth0-TxRx-57
626725 ± 31% -98.7% 8124 ± 18% interrupts.CPU57.CAL:Function_call_interrupts
276783 ± 15% +121.8% 613820 interrupts.CPU57.LOC:Local_timer_interrupts
10089 ± 46% -78.1% 2213 ± 10% interrupts.CPU57.RES:Rescheduling_interrupts
685647 ± 31% -99.0% 7004 ± 21% interrupts.CPU57.TLB:TLB_shootdowns
75.00 ± 3% +101.0% 150.75 interrupts.CPU58.103:PCI-MSI.1572922-edge.eth0-TxRx-58
628212 ± 29% -98.6% 8874 ± 37% interrupts.CPU58.CAL:Function_call_interrupts
266509 ± 16% +129.6% 611829 interrupts.CPU58.LOC:Local_timer_interrupts
9521 ± 49% -77.0% 2188 ± 5% interrupts.CPU58.RES:Rescheduling_interrupts
698849 ± 28% -98.9% 7781 ± 42% interrupts.CPU58.TLB:TLB_shootdowns
595325 ± 25% -98.5% 8793 ± 36% interrupts.CPU59.CAL:Function_call_interrupts
272196 ± 15% +125.0% 612483 interrupts.CPU59.LOC:Local_timer_interrupts
9472 ± 47% -77.1% 2172 interrupts.CPU59.RES:Rescheduling_interrupts
653804 ± 26% -98.8% 7679 ± 40% interrupts.CPU59.TLB:TLB_shootdowns
80.75 ± 5% +203.7% 245.25 ± 53% interrupts.CPU6.51:PCI-MSI.1572870-edge.eth0-TxRx-6
904801 ± 12% -98.0% 18288 ± 41% interrupts.CPU6.CAL:Function_call_interrupts
276633 ± 6% +120.0% 608578 interrupts.CPU6.LOC:Local_timer_interrupts
16629 ± 23% -76.5% 3906 ± 22% interrupts.CPU6.RES:Rescheduling_interrupts
1003408 ± 12% -98.3% 17224 ± 43% interrupts.CPU6.TLB:TLB_shootdowns
75.00 ± 3% +102.0% 151.50 interrupts.CPU60.105:PCI-MSI.1572924-edge.eth0-TxRx-60
591936 ± 37% -98.3% 10246 ± 20% interrupts.CPU60.CAL:Function_call_interrupts
270957 ± 17% +126.6% 614004 interrupts.CPU60.LOC:Local_timer_interrupts
9243 ± 58% -75.4% 2274 ± 4% interrupts.CPU60.RES:Rescheduling_interrupts
652167 ± 37% -98.6% 9150 ± 22% interrupts.CPU60.TLB:TLB_shootdowns
76.50 ± 3% +98.4% 151.75 interrupts.CPU61.106:PCI-MSI.1572925-edge.eth0-TxRx-61
615054 ± 36% -98.2% 11117 ± 42% interrupts.CPU61.CAL:Function_call_interrupts
273847 ± 15% +123.3% 611369 interrupts.CPU61.LOC:Local_timer_interrupts
787.50 ± 45% -76.1% 188.00 ±173% interrupts.CPU61.NMI:Non-maskable_interrupts
787.50 ± 45% -76.1% 188.00 ±173% interrupts.CPU61.PMI:Performance_monitoring_interrupts
9607 ± 51% -76.6% 2245 ± 9% interrupts.CPU61.RES:Rescheduling_interrupts
674616 ± 36% -98.5% 10009 ± 46% interrupts.CPU61.TLB:TLB_shootdowns
77.25 ± 5% +95.8% 151.25 interrupts.CPU62.107:PCI-MSI.1572926-edge.eth0-TxRx-62
594390 ± 41% -98.4% 9601 ± 22% interrupts.CPU62.CAL:Function_call_interrupts
263541 ± 14% +132.9% 613769 interrupts.CPU62.LOC:Local_timer_interrupts
10900 ± 42% -80.7% 2100 ± 6% interrupts.CPU62.RES:Rescheduling_interrupts
659954 ± 41% -98.7% 8516 ± 26% interrupts.CPU62.TLB:TLB_shootdowns
576953 ± 40% -98.3% 9702 ± 35% interrupts.CPU63.CAL:Function_call_interrupts
272542 ± 16% +124.6% 612028 interrupts.CPU63.LOC:Local_timer_interrupts
9521 ± 47% -76.4% 2247 ± 5% interrupts.CPU63.RES:Rescheduling_interrupts
633974 ± 40% -98.6% 8589 ± 39% interrupts.CPU63.TLB:TLB_shootdowns
585690 ± 38% -98.2% 10319 ± 45% interrupts.CPU64.CAL:Function_call_interrupts
269339 ± 14% +127.4% 612393 interrupts.CPU64.LOC:Local_timer_interrupts
9503 ± 49% -77.1% 2179 ± 13% interrupts.CPU64.RES:Rescheduling_interrupts
641143 ± 38% -98.6% 9212 ± 51% interrupts.CPU64.TLB:TLB_shootdowns
534613 ± 32% -98.5% 8077 ± 29% interrupts.CPU65.CAL:Function_call_interrupts
270789 ± 18% +125.8% 611404 interrupts.CPU65.LOC:Local_timer_interrupts
1647 ± 21% -87.5% 206.50 ±173% interrupts.CPU65.NMI:Non-maskable_interrupts
1647 ± 21% -87.5% 206.50 ±173% interrupts.CPU65.PMI:Performance_monitoring_interrupts
8210 ± 49% -74.3% 2113 ± 7% interrupts.CPU65.RES:Rescheduling_interrupts
590235 ± 31% -98.8% 6981 ± 34% interrupts.CPU65.TLB:TLB_shootdowns
537019 ± 41% -97.9% 11099 ± 39% interrupts.CPU66.CAL:Function_call_interrupts
280077 ± 13% +119.2% 613899 interrupts.CPU66.LOC:Local_timer_interrupts
1361 ± 33% -85.0% 204.50 ±173% interrupts.CPU66.NMI:Non-maskable_interrupts
1361 ± 33% -85.0% 204.50 ±173% interrupts.CPU66.PMI:Performance_monitoring_interrupts
8435 ± 54% -74.1% 2184 ± 6% interrupts.CPU66.RES:Rescheduling_interrupts
587820 ± 41% -98.3% 10011 ± 43% interrupts.CPU66.TLB:TLB_shootdowns
563456 ± 41% -98.5% 8473 ± 29% interrupts.CPU67.CAL:Function_call_interrupts
273919 ± 17% +124.3% 614344 interrupts.CPU67.LOC:Local_timer_interrupts
1359 ± 37% -83.7% 221.00 ±173% interrupts.CPU67.NMI:Non-maskable_interrupts
1359 ± 37% -83.7% 221.00 ±173% interrupts.CPU67.PMI:Performance_monitoring_interrupts
9303 ± 52% -75.9% 2245 ± 10% interrupts.CPU67.RES:Rescheduling_interrupts
621279 ± 41% -98.8% 7372 ± 33% interrupts.CPU67.TLB:TLB_shootdowns
556297 ± 37% -98.3% 9711 ± 26% interrupts.CPU68.CAL:Function_call_interrupts
270481 ± 16% +125.8% 610830 interrupts.CPU68.LOC:Local_timer_interrupts
9370 ± 53% -75.6% 2286 ± 9% interrupts.CPU68.RES:Rescheduling_interrupts
608096 ± 37% -98.6% 8634 ± 30% interrupts.CPU68.TLB:TLB_shootdowns
547254 ± 35% -98.5% 8017 ± 28% interrupts.CPU69.CAL:Function_call_interrupts
264476 ± 17% +132.2% 614017 interrupts.CPU69.LOC:Local_timer_interrupts
8590 ± 54% -74.9% 2159 ± 15% interrupts.CPU69.RES:Rescheduling_interrupts
609526 ± 34% -98.9% 6926 ± 33% interrupts.CPU69.TLB:TLB_shootdowns
80.25 ± 3% +406.2% 406.25 ± 75% interrupts.CPU7.52:PCI-MSI.1572871-edge.eth0-TxRx-7
961374 ± 10% -98.2% 17601 ± 21% interrupts.CPU7.CAL:Function_call_interrupts
281703 ± 6% +115.9% 608138 interrupts.CPU7.LOC:Local_timer_interrupts
16700 ± 19% -75.3% 4130 ± 19% interrupts.CPU7.RES:Rescheduling_interrupts
1073837 ± 11% -98.5% 16549 ± 23% interrupts.CPU7.TLB:TLB_shootdowns
518881 ± 39% -97.7% 11814 ± 34% interrupts.CPU70.CAL:Function_call_interrupts
271884 ± 14% +124.8% 611226 interrupts.CPU70.LOC:Local_timer_interrupts
1667 ± 50% -75.8% 402.75 ±173% interrupts.CPU70.NMI:Non-maskable_interrupts
1667 ± 50% -75.8% 402.75 ±173% interrupts.CPU70.PMI:Performance_monitoring_interrupts
8469 ± 60% -74.0% 2205 ± 5% interrupts.CPU70.RES:Rescheduling_interrupts
569859 ± 38% -98.1% 10718 ± 37% interrupts.CPU70.TLB:TLB_shootdowns
527133 ± 34% -98.5% 7933 ± 35% interrupts.CPU71.CAL:Function_call_interrupts
277186 ± 14% +121.5% 613929 interrupts.CPU71.LOC:Local_timer_interrupts
1352 ± 72% -84.0% 216.00 ±173% interrupts.CPU71.NMI:Non-maskable_interrupts
1352 ± 72% -84.0% 216.00 ±173% interrupts.CPU71.PMI:Performance_monitoring_interrupts
9146 ± 53% -57.6% 3879 ± 8% interrupts.CPU71.RES:Rescheduling_interrupts
585937 ± 33% -98.8% 6859 ± 40% interrupts.CPU71.TLB:TLB_shootdowns
654632 ± 37% -97.3% 17562 ± 35% interrupts.CPU72.CAL:Function_call_interrupts
291651 ± 5% +109.0% 609650 interrupts.CPU72.LOC:Local_timer_interrupts
13458 ± 44% -72.9% 3650 ± 32% interrupts.CPU72.RES:Rescheduling_interrupts
722365 ± 36% -97.7% 16511 ± 37% interrupts.CPU72.TLB:TLB_shootdowns
667978 ± 38% -97.7% 15407 ± 40% interrupts.CPU73.CAL:Function_call_interrupts
281030 ± 7% +116.6% 608833 interrupts.CPU73.LOC:Local_timer_interrupts
11678 ± 45% -70.0% 3505 ± 29% interrupts.CPU73.RES:Rescheduling_interrupts
744888 ± 39% -98.1% 14379 ± 44% interrupts.CPU73.TLB:TLB_shootdowns
640668 ± 41% -97.4% 16536 ± 33% interrupts.CPU74.CAL:Function_call_interrupts
289715 ± 5% +110.3% 609245 interrupts.CPU74.LOC:Local_timer_interrupts
1871 ± 33% -86.0% 262.75 ±173% interrupts.CPU74.NMI:Non-maskable_interrupts
1871 ± 33% -86.0% 262.75 ±173% interrupts.CPU74.PMI:Performance_monitoring_interrupts
13712 ± 40% -74.6% 3478 ± 32% interrupts.CPU74.RES:Rescheduling_interrupts
705597 ± 41% -97.8% 15488 ± 36% interrupts.CPU74.TLB:TLB_shootdowns
696811 ± 39% -97.7% 16149 ± 20% interrupts.CPU75.CAL:Function_call_interrupts
275166 ± 8% +120.9% 607719 interrupts.CPU75.LOC:Local_timer_interrupts
2421 ± 13% -88.7% 274.00 ±173% interrupts.CPU75.NMI:Non-maskable_interrupts
2421 ± 13% -88.7% 274.00 ±173% interrupts.CPU75.PMI:Performance_monitoring_interrupts
12514 ± 41% -71.5% 3567 ± 28% interrupts.CPU75.RES:Rescheduling_interrupts
775982 ± 39% -98.1% 15095 ± 21% interrupts.CPU75.TLB:TLB_shootdowns
626250 ± 42% -97.5% 15467 ± 26% interrupts.CPU76.CAL:Function_call_interrupts
290528 ± 5% +110.3% 610896 interrupts.CPU76.LOC:Local_timer_interrupts
2829 ± 4% -89.9% 286.50 ±173% interrupts.CPU76.NMI:Non-maskable_interrupts
2829 ± 4% -89.9% 286.50 ±173% interrupts.CPU76.PMI:Performance_monitoring_interrupts
12229 ± 45% -71.3% 3508 ± 28% interrupts.CPU76.RES:Rescheduling_interrupts
690451 ± 42% -97.9% 14409 ± 28% interrupts.CPU76.TLB:TLB_shootdowns
638611 ± 43% -97.7% 14857 ± 22% interrupts.CPU77.CAL:Function_call_interrupts
287678 ± 6% +111.3% 607749 interrupts.CPU77.LOC:Local_timer_interrupts
12736 ± 42% -73.0% 3437 ± 29% interrupts.CPU77.RES:Rescheduling_interrupts
702382 ± 43% -98.0% 13828 ± 25% interrupts.CPU77.TLB:TLB_shootdowns
588064 ± 40% -96.9% 18002 ± 47% interrupts.CPU78.CAL:Function_call_interrupts
286610 ± 4% +113.5% 611772 interrupts.CPU78.LOC:Local_timer_interrupts
11932 ± 44% -71.0% 3463 ± 33% interrupts.CPU78.RES:Rescheduling_interrupts
649344 ± 40% -97.4% 16965 ± 49% interrupts.CPU78.TLB:TLB_shootdowns
604357 ± 44% -97.1% 17682 ± 27% interrupts.CPU79.CAL:Function_call_interrupts
289001 ± 6% +110.1% 607335 interrupts.CPU79.LOC:Local_timer_interrupts
11991 ± 46% -70.8% 3505 ± 30% interrupts.CPU79.RES:Rescheduling_interrupts
665022 ± 43% -97.5% 16642 ± 29% interrupts.CPU79.TLB:TLB_shootdowns
962075 ± 8% -97.9% 19819 ± 31% interrupts.CPU8.CAL:Function_call_interrupts
273781 ± 2% +121.6% 606707 interrupts.CPU8.LOC:Local_timer_interrupts
16561 ± 24% -76.8% 3848 ± 21% interrupts.CPU8.RES:Rescheduling_interrupts
1066718 ± 8% -98.2% 18793 ± 33% interrupts.CPU8.TLB:TLB_shootdowns
570144 ± 46% -97.0% 16951 ± 29% interrupts.CPU80.CAL:Function_call_interrupts
278401 ± 10% +120.0% 612455 interrupts.CPU80.LOC:Local_timer_interrupts
636618 ± 46% -97.5% 15911 ± 31% interrupts.CPU80.TLB:TLB_shootdowns
569041 ± 45% -97.3% 15328 ± 29% interrupts.CPU81.CAL:Function_call_interrupts
286046 ± 6% +113.2% 609890 interrupts.CPU81.LOC:Local_timer_interrupts
11659 ± 44% -70.3% 3457 ± 30% interrupts.CPU81.RES:Rescheduling_interrupts
625296 ± 45% -97.7% 14270 ± 31% interrupts.CPU81.TLB:TLB_shootdowns
558378 ± 46% -97.5% 13901 ± 35% interrupts.CPU82.CAL:Function_call_interrupts
284344 ± 5% +114.4% 609639 interrupts.CPU82.LOC:Local_timer_interrupts
2356 ± 22% -77.4% 531.75 ±173% interrupts.CPU82.NMI:Non-maskable_interrupts
2356 ± 22% -77.4% 531.75 ±173% interrupts.CPU82.PMI:Performance_monitoring_interrupts
11748 ± 48% -70.3% 3486 ± 36% interrupts.CPU82.RES:Rescheduling_interrupts
613989 ± 46% -97.9% 12842 ± 38% interrupts.CPU82.TLB:TLB_shootdowns
584944 ± 49% -97.5% 14462 ± 26% interrupts.CPU83.CAL:Function_call_interrupts
270881 ± 9% +124.8% 608906 interrupts.CPU83.LOC:Local_timer_interrupts
9747 ± 35% -64.1% 3495 ± 29% interrupts.CPU83.RES:Rescheduling_interrupts
662526 ± 50% -98.0% 13431 ± 29% interrupts.CPU83.TLB:TLB_shootdowns
570058 ± 44% -97.4% 14730 ± 39% interrupts.CPU84.CAL:Function_call_interrupts
270900 ± 3% +123.9% 606641 interrupts.CPU84.LOC:Local_timer_interrupts
1628 ± 41% -83.4% 271.00 ±173% interrupts.CPU84.NMI:Non-maskable_interrupts
1628 ± 41% -83.4% 271.00 ±173% interrupts.CPU84.PMI:Performance_monitoring_interrupts
639038 ± 44% -97.9% 13666 ± 42% interrupts.CPU84.TLB:TLB_shootdowns
583241 ± 32% -97.5% 14403 ± 29% interrupts.CPU85.CAL:Function_call_interrupts
285411 ± 7% +112.4% 606099 interrupts.CPU85.LOC:Local_timer_interrupts
2074 ± 33% -74.4% 530.00 ±173% interrupts.CPU85.NMI:Non-maskable_interrupts
2074 ± 33% -74.4% 530.00 ±173% interrupts.CPU85.PMI:Performance_monitoring_interrupts
11762 ± 45% -70.4% 3485 ± 30% interrupts.CPU85.RES:Rescheduling_interrupts
651834 ± 30% -98.0% 13348 ± 31% interrupts.CPU85.TLB:TLB_shootdowns
519447 ± 39% -97.1% 15180 ± 30% interrupts.CPU86.CAL:Function_call_interrupts
283913 ± 7% +114.1% 607963 interrupts.CPU86.LOC:Local_timer_interrupts
9967 ± 41% -65.9% 3397 ± 33% interrupts.CPU86.RES:Rescheduling_interrupts
577318 ± 39% -97.6% 14129 ± 33% interrupts.CPU86.TLB:TLB_shootdowns
605148 ± 49% -97.4% 15555 ± 29% interrupts.CPU87.CAL:Function_call_interrupts
278120 ± 6% +118.6% 608024 interrupts.CPU87.LOC:Local_timer_interrupts
1629 ± 23% -83.0% 277.75 ±173% interrupts.CPU87.NMI:Non-maskable_interrupts
1629 ± 23% -83.0% 277.75 ±173% interrupts.CPU87.PMI:Performance_monitoring_interrupts
11184 ± 42% -69.3% 3437 ± 30% interrupts.CPU87.RES:Rescheduling_interrupts
674556 ± 49% -97.8% 14513 ± 32% interrupts.CPU87.TLB:TLB_shootdowns
580830 ± 25% -97.5% 14593 ± 23% interrupts.CPU88.CAL:Function_call_interrupts
274738 +122.4% 611147 interrupts.CPU88.LOC:Local_timer_interrupts
1788 ± 38% -85.2% 264.50 ±173% interrupts.CPU88.NMI:Non-maskable_interrupts
1788 ± 38% -85.2% 264.50 ±173% interrupts.CPU88.PMI:Performance_monitoring_interrupts
658116 ± 22% -97.9% 13565 ± 25% interrupts.CPU88.TLB:TLB_shootdowns
570707 ± 42% -97.1% 16793 ± 23% interrupts.CPU89.CAL:Function_call_interrupts
289656 ± 3% +109.4% 606629 interrupts.CPU89.LOC:Local_timer_interrupts
2306 ± 17% -88.4% 267.50 ±173% interrupts.CPU89.NMI:Non-maskable_interrupts
2306 ± 17% -88.4% 267.50 ±173% interrupts.CPU89.PMI:Performance_monitoring_interrupts
10794 ± 50% -66.9% 3573 ± 29% interrupts.CPU89.RES:Rescheduling_interrupts
638975 ± 41% -97.5% 15767 ± 25% interrupts.CPU89.TLB:TLB_shootdowns
924319 ± 5% -98.1% 17790 ± 25% interrupts.CPU9.CAL:Function_call_interrupts
285670 ± 3% +112.3% 606480 interrupts.CPU9.LOC:Local_timer_interrupts
17214 ± 25% -77.4% 3889 ± 22% interrupts.CPU9.RES:Rescheduling_interrupts
1029542 ± 4% -98.4% 16728 ± 27% interrupts.CPU9.TLB:TLB_shootdowns
393870 ± 75% -97.3% 10668 ± 27% interrupts.CPU90.CAL:Function_call_interrupts
271294 ± 15% +124.6% 609395 interrupts.CPU90.LOC:Local_timer_interrupts
1789 ± 17% -75.2% 443.00 ±173% interrupts.CPU90.NMI:Non-maskable_interrupts
1789 ± 17% -75.2% 443.00 ±173% interrupts.CPU90.PMI:Performance_monitoring_interrupts
6046 ± 84% -60.9% 2364 ± 20% interrupts.CPU90.RES:Rescheduling_interrupts
447340 ± 75% -97.9% 9583 ± 31% interrupts.CPU90.TLB:TLB_shootdowns
339915 ± 74% -96.7% 11099 ± 20% interrupts.CPU91.CAL:Function_call_interrupts
273423 ± 11% +122.8% 609162 interrupts.CPU91.LOC:Local_timer_interrupts
7730 ± 83% -69.1% 2392 ± 21% interrupts.CPU91.RES:Rescheduling_interrupts
373043 ± 73% -97.3% 10039 ± 22% interrupts.CPU91.TLB:TLB_shootdowns
359581 ± 65% -97.5% 9131 ± 29% interrupts.CPU92.CAL:Function_call_interrupts
260859 ± 13% +133.9% 610075 interrupts.CPU92.LOC:Local_timer_interrupts
860.50 ± 27% -74.6% 218.25 ±173% interrupts.CPU92.NMI:Non-maskable_interrupts
860.50 ± 27% -74.6% 218.25 ±173% interrupts.CPU92.PMI:Performance_monitoring_interrupts
6781 ± 88% -65.7% 2328 ± 18% interrupts.CPU92.RES:Rescheduling_interrupts
401523 ± 63% -98.0% 8051 ± 34% interrupts.CPU92.TLB:TLB_shootdowns
349477 ± 81% -97.4% 9045 ± 36% interrupts.CPU93.CAL:Function_call_interrupts
268030 ± 14% +127.7% 610202 interrupts.CPU93.LOC:Local_timer_interrupts
7271 ± 93% -68.8% 2271 ± 24% interrupts.CPU93.RES:Rescheduling_interrupts
382395 ± 82% -97.9% 7906 ± 42% interrupts.CPU93.TLB:TLB_shootdowns
321246 ± 86% -96.9% 9875 ± 19% interrupts.CPU94.CAL:Function_call_interrupts
273328 ± 16% +124.4% 613382 interrupts.CPU94.LOC:Local_timer_interrupts
357093 ± 88% -97.5% 8765 ± 22% interrupts.CPU94.TLB:TLB_shootdowns
304313 ± 81% -97.3% 8200 ± 40% interrupts.CPU95.CAL:Function_call_interrupts
281165 ± 12% +117.7% 612236 interrupts.CPU95.LOC:Local_timer_interrupts
1173 ± 45% -82.0% 211.75 ±173% interrupts.CPU95.NMI:Non-maskable_interrupts
1173 ± 45% -82.0% 211.75 ±173% interrupts.CPU95.PMI:Performance_monitoring_interrupts
6966 ± 90% -67.7% 2247 ± 18% interrupts.CPU95.RES:Rescheduling_interrupts
330447 ± 81% -97.8% 7134 ± 46% interrupts.CPU95.TLB:TLB_shootdowns
295289 ± 83% -97.0% 8978 ± 34% interrupts.CPU96.CAL:Function_call_interrupts
276155 ± 14% +121.1% 610552 interrupts.CPU96.LOC:Local_timer_interrupts
7211 ± 84% -67.0% 2380 ± 11% interrupts.CPU96.RES:Rescheduling_interrupts
320702 ± 83% -97.5% 7915 ± 40% interrupts.CPU96.TLB:TLB_shootdowns
328715 ± 84% -96.8% 10481 ± 28% interrupts.CPU97.CAL:Function_call_interrupts
274690 ± 11% +122.1% 609974 interrupts.CPU97.LOC:Local_timer_interrupts
1407 ± 33% -85.6% 202.50 ±173% interrupts.CPU97.NMI:Non-maskable_interrupts
1407 ± 33% -85.6% 202.50 ±173% interrupts.CPU97.PMI:Performance_monitoring_interrupts
6028 ± 84% -62.2% 2278 ± 21% interrupts.CPU97.RES:Rescheduling_interrupts
368993 ± 86% -97.5% 9381 ± 32% interrupts.CPU97.TLB:TLB_shootdowns
310595 ± 90% -95.7% 13351 ± 31% interrupts.CPU98.CAL:Function_call_interrupts
275081 ± 16% +122.6% 612237 interrupts.CPU98.LOC:Local_timer_interrupts
339269 ± 88% -96.4% 12245 ± 34% interrupts.CPU98.TLB:TLB_shootdowns
328284 ± 76% -97.0% 9883 ± 19% interrupts.CPU99.CAL:Function_call_interrupts
269113 ± 17% +127.5% 612113 interrupts.CPU99.LOC:Local_timer_interrupts
359011 ± 74% -97.5% 8809 ± 23% interrupts.CPU99.TLB:TLB_shootdowns
40066695 ± 10% +119.6% 87976028 interrupts.LOC:Local_timer_interrupts
0.00 +1.4e+104% 144.00 interrupts.MCP:Machine_check_polls
1337223 ± 7% -70.5% 395127 ± 11% interrupts.RES:Rescheduling_interrupts
82621137 -98.1% 1538710 ± 10% interrupts.TLB:TLB_shootdowns
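
For reference, the interrupts.* rows above appear to be deltas of the per-CPU counters in /proc/interrupts accumulated over the run (the row names CAL, LOC, RES, TLB match that file). A minimal Python sketch of taking such a delta for the TLB-shootdown row; the 5-second interval and the row matching are illustrative assumptions, not the exact lkp collector:

  #!/usr/bin/env python3
  # Sample /proc/interrupts twice and print the per-CPU and total delta for
  # the "TLB" (TLB shootdowns) row. Interval and row selection are
  # assumptions made for illustration only.
  import time

  def tlb_counts(tag="TLB:"):
      """Return the per-CPU counts of the row whose first field is `tag`."""
      with open("/proc/interrupts") as f:
          for line in f:
              fields = line.split()
              if fields and fields[0] == tag:
                  counts = []
                  for field in fields[1:]:
                      if field.isdigit():
                          counts.append(int(field))
                      else:
                          break   # trailing text, e.g. "TLB shootdowns"
                  return counts
      return []

  before = tlb_counts()
  time.sleep(5)                   # assumed sampling interval
  after = tlb_counts()
  deltas = [b - a for a, b in zip(before, after)]
  for cpu, delta in enumerate(deltas):
      print("CPU%-3d TLB shootdowns: %d" % (cpu, delta))
  print("total:", sum(deltas))
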
***************************************************************************************************
lkp-bdw-ex2: 160 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 256G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/ucode:
4k/gcc-7/performance/1SSD/ext4/sync/x86_64-rhel-7.6/100%/debian-x86_64-2019-09-23.cgz/300s/write/lkp-bdw-ex2/128G/fio-basic/0xb000038
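
The parameter line above roughly corresponds to a standalone fio run like the sketch below; the mount point, job name, and the mapping of nr_task=100% onto numjobs=160 (one job per hardware thread) are assumptions, not the exact lkp job file:

  #!/usr/bin/env python3
  # Rough reconstruction of the fio workload described above: 4k sequential
  # writes through the sync ioengine on ext4, 300s runtime. Paths, job name
  # and the numjobs/size mapping are assumptions for illustration.
  import subprocess

  cmd = [
      "fio",
      "--name=write-4k",           # assumed job name
      "--directory=/fs/ssd1",      # assumed ext4 mount point of the test SSD
      "--rw=write",                # sequential write
      "--bs=4k",
      "--ioengine=sync",
      "--size=128G",               # test_size from the parameter line
      "--runtime=300",
      "--time_based",              # assumed, to pin the run to 300s
      "--numjobs=160",             # nr_task=100% of 160 threads (assumed)
      "--group_reporting",
  ]
  subprocess.run(cmd, check=True)
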
commit:
cba81e70bf
3e05ad861b
cba81e70bf716d85 3e05ad861b9b2b61a1cbfd0d989
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
%stddev %change %stddev
\ | \
0.04 ± 18% -0.0 0.00 ±173% fio.latency_100ms%
49.09 ± 2% +15.7 64.80 ± 5% fio.latency_100us%
13.09 ± 6% -10.2 2.85 ± 17% fio.latency_10us%
6.11 ± 14% -3.8 2.29 ± 21% fio.latency_20us%
0.03 ± 9% -0.0 0.00 fio.latency_250ms%
13.09 ± 2% +6.0 19.13 ± 8% fio.latency_250us%
7.12 ± 8% -6.6 0.56 ± 17% fio.latency_4us%
0.26 ± 5% -0.2 0.01 fio.latency_50ms%
162.33 +2.9% 167.02 fio.time.elapsed_time
162.33 +2.9% 167.02 fio.time.elapsed_time.max
1278 +27.0% 1624 ± 2% fio.time.percent_of_cpu_this_job_got
1965 +30.7% 2567 ± 2% fio.time.system_time
111.25 ± 6% +30.8% 145.51 ± 6% fio.time.user_time
118928 ± 4% -88.1% 14135 ± 51% fio.time.voluntary_context_switches
2115 +228.8% 6953 fio.write_bw_MBps
110336 +10.7% 122112 ± 2% fio.write_clat_90%_us
140800 +9.8% 154624 ± 3% fio.write_clat_95%_us
228350 ± 3% -65.3% 79185 ± 2% fio.write_clat_mean_us
3178158 ± 2% -97.2% 89863 ± 66% fio.write_clat_stddev
541463 +228.8% 1780153 fio.write_iops
76.62 +19.2% 91.31 iostat.cpu.idle
17.09 ± 5% -98.3% 0.30 ± 8% iostat.cpu.iowait
5.95 ± 4% +33.5% 7.94 ± 3% iostat.cpu.system
76.36 +14.8 91.20 mpstat.cpu.all.idle%
17.29 ± 5% -17.0 0.30 ± 8% mpstat.cpu.all.iowait%
6.01 ± 4% +2.0 8.02 ± 3% mpstat.cpu.all.sys%
0.35 ± 7% +0.1 0.46 ± 6% mpstat.cpu.all.usr%
229.00 +24.1% 284.25 ± 2% turbostat.Avg_MHz
10.30 +2.4 12.68 turbostat.Busy%
32.59 ± 20% +28.4% 41.86 ± 8% turbostat.Pkg%pc2
24.86 ± 5% -13.4% 21.53 turbostat.RAMWatt
76.25 +19.3% 91.00 vmstat.cpu.id
16.50 ± 6% -100.0% 0.00 vmstat.cpu.wa
33.25 ± 6% -100.0% 0.00 vmstat.procs.b
11.75 ± 3% +34.0% 15.75 ± 2% vmstat.procs.r
193615 ± 5% -12.1% 170221 ± 4% meminfo.AnonHugePages
2536834 ± 3% -52.0% 1217182 ± 3% meminfo.Committed_AS
48784112 ± 3% +17.9% 57521626 meminfo.Dirty
180597 ± 3% -58.1% 75643 ± 2% meminfo.Inactive(anon)
180087 ± 3% -58.4% 74836 ± 2% meminfo.Mapped
8963 ± 4% -31.7% 6119 ± 2% meminfo.PageTables
183911 ± 2% -57.1% 78917 meminfo.Shmem
7474229 ± 11% +25.1% 9352532 ± 5% numa-numastat.node0.local_node
1267102 ± 48% -100.0% 0.00 numa-numastat.node0.numa_foreign
7485535 ± 11% +25.4% 9383721 ± 5% numa-numastat.node0.numa_hit
2191430 ± 26% -85.2% 323322 ± 66% numa-numastat.node0.numa_miss
2202741 ± 26% -83.9% 354512 ± 60% numa-numastat.node0.other_node
2143882 ± 51% -100.0% 0.00 numa-numastat.node1.numa_miss
2175164 ± 50% -98.9% 23603 ± 56% numa-numastat.node1.other_node
5596306 ± 8% +52.3% 8520908 ± 3% numa-numastat.node3.local_node
3315969 ± 28% -90.2% 323322 ± 66% numa-numastat.node3.numa_foreign
5619817 ± 8% +51.8% 8528961 ± 3% numa-numastat.node3.numa_hit
61467 ± 77% -86.9% 8053 ±167% numa-numastat.node3.other_node
4.398e+10 ± 2% +22.5% 5.388e+10 perf-stat.i.cpu-cycles
8.91 ± 5% -9.4% 8.07 ± 6% perf-stat.i.cpu-migrations
90.82 +1.5 92.34 perf-stat.i.node-load-miss-rate%
51.25 ± 6% -8.9 42.31 ± 9% perf-stat.i.node-store-miss-rate%
847672 ± 3% +23.2% 1044081 ± 4% perf-stat.i.node-stores
10.21 ± 3% +18.7% 12.12 ± 2% perf-stat.overall.cpi
0.10 ± 3% -15.8% 0.08 ± 2% perf-stat.overall.ipc
59.95 ± 2% -5.2 54.72 perf-stat.overall.node-store-miss-rate%
4.458e+10 ± 2% +22.6% 5.464e+10 perf-stat.ps.cpu-cycles
9.04 ± 4% -9.1% 8.22 ± 6% perf-stat.ps.cpu-migrations
858746 ± 2% +23.8% 1062755 ± 4% perf-stat.ps.node-stores
0.50 ± 14% -0.2 0.32 ± 22% perf-profile.children.cycles-pp.start_kernel
0.26 ± 23% -0.1 0.12 ± 46% perf-profile.children.cycles-pp.pm_qos_read_value
0.24 ± 9% -0.1 0.13 ± 32% perf-profile.children.cycles-pp.pm_qos_request
0.15 ± 14% +0.1 0.22 ± 19% perf-profile.children.cycles-pp.ret_from_fork
0.15 ± 16% +0.1 0.22 ± 18% perf-profile.children.cycles-pp.kthread
0.09 ± 21% +0.1 0.16 ± 19% perf-profile.children.cycles-pp.worker_thread
0.09 ± 24% +0.1 0.15 ± 22% perf-profile.children.cycles-pp.process_one_work
0.03 ±173% +1.3 1.30 ±150% perf-profile.children.cycles-pp.poll_idle
0.25 ± 24% -0.1 0.12 ± 46% perf-profile.self.cycles-pp.pm_qos_read_value
0.23 ± 8% -0.1 0.13 ± 30% perf-profile.self.cycles-pp.pm_qos_request
0.03 ±100% +0.0 0.06 perf-profile.self.cycles-pp.jbd2_journal_try_to_free_buffers
0.03 ±173% +1.2 1.25 ±150% perf-profile.self.cycles-pp.poll_idle
13157591 ± 2% +30.5% 17174615 ± 17% numa-meminfo.node0.Dirty
33997072 ± 13% -24.5% 25661111 ± 14% numa-meminfo.node1.FilePages
33729212 ± 13% -24.7% 25398197 ± 15% numa-meminfo.node1.Inactive
33725344 ± 13% -24.7% 25394580 ± 15% numa-meminfo.node1.Inactive(file)
35091038 ± 13% -23.8% 26728332 ± 14% numa-meminfo.node1.MemUsed
7031982 ± 6% +75.5% 12338670 ± 18% numa-meminfo.node3.Dirty
17700511 ± 8% +64.4% 29100305 ± 3% numa-meminfo.node3.FilePages
17413058 ± 8% +65.5% 28824338 ± 3% numa-meminfo.node3.Inactive
17335896 ± 8% +66.2% 28819271 ± 3% numa-meminfo.node3.Inactive(file)
47105126 ± 3% -24.2% 35711671 ± 3% numa-meminfo.node3.MemFree
18913416 ± 8% +60.2% 30306871 ± 4% numa-meminfo.node3.MemUsed
4179 ± 64% -81.1% 788.75 ± 66% numa-meminfo.node3.PageTables
19897 ± 6% +54.8% 30810 ± 12% numa-meminfo.node3.Writeback
5275 ± 8% -38.5% 3244 ± 16% slabinfo.dquot.active_objs
5275 ± 8% -38.5% 3244 ± 16% slabinfo.dquot.num_objs
1157 ± 2% +11.7% 1292 ± 5% slabinfo.khugepaged_mm_slot.active_objs
1157 ± 2% +11.7% 1292 ± 5% slabinfo.khugepaged_mm_slot.num_objs
17827 ± 2% -8.9% 16246 ± 6% slabinfo.kmalloc-2k.active_objs
1122 ± 2% -8.8% 1024 ± 6% slabinfo.kmalloc-2k.active_slabs
17971 ± 2% -8.7% 16402 ± 6% slabinfo.kmalloc-2k.num_objs
1122 ± 2% -8.8% 1024 ± 6% slabinfo.kmalloc-2k.num_slabs
43034 ± 3% -26.6% 31588 ± 2% slabinfo.kmalloc-512.active_objs
729.75 ± 2% -27.7% 527.25 slabinfo.kmalloc-512.active_slabs
46743 ± 2% -27.8% 33769 slabinfo.kmalloc-512.num_objs
729.75 ± 2% -27.7% 527.25 slabinfo.kmalloc-512.num_slabs
30130 ± 5% -15.9% 25354 ± 6% slabinfo.kmalloc-96.active_objs
719.75 ± 5% -15.8% 606.25 ± 6% slabinfo.kmalloc-96.active_slabs
30255 ± 5% -15.7% 25491 ± 6% slabinfo.kmalloc-96.num_objs
719.75 ± 5% -15.8% 606.25 ± 6% slabinfo.kmalloc-96.num_slabs
3616 +21.7% 4400 slabinfo.scsi_sense_cache.active_objs
3616 +21.7% 4400 slabinfo.scsi_sense_cache.num_objs
1173 ± 10% -12.1% 1032 ± 6% slabinfo.skbuff_fclone_cache.active_objs
1173 ± 10% -12.1% 1032 ± 6% slabinfo.skbuff_fclone_cache.num_objs
4712 ± 8% -25.8% 3498 ± 11% slabinfo.task_group.active_objs
4712 ± 8% -25.8% 3498 ± 11% slabinfo.task_group.num_objs
68815 -3.0% 66722 proc-vmstat.nr_active_anon
68152 -3.2% 65945 proc-vmstat.nr_anon_pages
94.00 ± 5% -12.0% 82.75 ± 5% proc-vmstat.nr_anon_transparent_hugepages
12197181 ± 4% +17.9% 14378493 proc-vmstat.nr_dirty
12170201 +100.2% 24363760 proc-vmstat.nr_dirty_background_threshold
24370159 +150.0% 60924277 proc-vmstat.nr_dirty_threshold
27819578 ± 4% +6.8% 29722573 proc-vmstat.nr_file_pages
94790571 -2.1% 92828291 proc-vmstat.nr_free_pages
45205 ± 3% -58.1% 18931 ± 2% proc-vmstat.nr_inactive_anon
27497853 ± 4% +7.0% 29428032 proc-vmstat.nr_inactive_file
27694 -1.2% 27364 proc-vmstat.nr_kernel_stack
45215 ± 3% -58.3% 18869 ± 2% proc-vmstat.nr_mapped
2239 ± 3% -31.6% 1532 proc-vmstat.nr_page_table_pages
46034 ± 2% -57.1% 19745 proc-vmstat.nr_shmem
792760 ± 3% +6.8% 846419 proc-vmstat.nr_slab_reclaimable
75722 -5.0% 71926 proc-vmstat.nr_slab_unreclaimable
68815 -3.0% 66722 proc-vmstat.nr_zone_active_anon
45205 ± 3% -58.1% 18931 ± 2% proc-vmstat.nr_zone_inactive_anon
27497852 ± 4% +7.0% 29428032 proc-vmstat.nr_zone_inactive_file
12230867 ± 4% +17.8% 14411084 proc-vmstat.nr_zone_write_pending
4808083 ± 30% -93.3% 323322 ± 66% proc-vmstat.numa_foreign
30361240 ± 4% +14.8% 34857288 proc-vmstat.numa_hit
30266897 ± 4% +14.9% 34763199 proc-vmstat.numa_local
4808083 ± 30% -93.3% 323322 ± 66% proc-vmstat.numa_miss
4902425 ± 29% -91.5% 417410 ± 51% proc-vmstat.numa_other
3349 ± 4% +26.6% 4241 ± 9% sched_debug.cfs_rq:/.exec_clock.stddev
506.67 ± 17% -26.7% 371.50 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.max
138.66 ± 3% +13.8% 157.78 ± 4% sched_debug.cfs_rq:/.util_avg.stddev
945129 -46.8% 502972 sched_debug.cpu.avg_idle.avg
109022 ± 64% -75.6% 26619 ± 81% sched_debug.cpu.avg_idle.min
503386 -46.2% 270873 ± 2% sched_debug.cpu.max_idle_balance_cost.avg
500000 -50.0% 250000 sched_debug.cpu.max_idle_balance_cost.min
20130 ± 37% +186.1% 57597 ± 42% sched_debug.cpu.max_idle_balance_cost.stddev
1544 ± 8% -21.2% 1216 ± 14% sched_debug.cpu.nr_switches.min
6543 ± 15% +31.2% 8588 ± 6% sched_debug.cpu.nr_switches.stddev
0.08 ± 33% -82.7% 0.01 ± 14% sched_debug.cpu.nr_uninterruptible.avg
2659 ± 12% -24.1% 2017 ± 7% sched_debug.cpu.sched_count.avg
149.85 ± 29% -49.6% 75.50 ± 5% sched_debug.cpu.sched_count.min
6108 ± 16% +34.4% 8210 ± 5% sched_debug.cpu.sched_count.stddev
1245 ± 12% -25.6% 926.37 ± 8% sched_debug.cpu.sched_goidle.avg
29437 ± 26% +35.7% 39943 ± 7% sched_debug.cpu.sched_goidle.max
33.35 ± 20% -38.5% 20.50 ± 4% sched_debug.cpu.sched_goidle.min
2889 ± 16% +37.6% 3975 ± 6% sched_debug.cpu.sched_goidle.stddev
1337 ± 12% -24.3% 1012 ± 7% sched_debug.cpu.ttwu_count.avg
67654 ± 27% +45.6% 98484 ± 17% sched_debug.cpu.ttwu_count.max
60.54 ± 4% -36.3% 38.58 ± 2% sched_debug.cpu.ttwu_count.min
5284 ± 22% +40.6% 7432 ± 14% sched_debug.cpu.ttwu_count.stddev
691.05 ± 10% -71.7% 195.52 ± 17% sched_debug.cpu.ttwu_local.avg
24.00 -33.3% 16.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 -46.7% 1.60 sched_debug.sysctl_sched.sysctl_sched_min_granularity
4.00 -50.0% 2.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
3292949 ± 2% +30.5% 4296301 ± 17% numa-vmstat.node0.nr_dirty
3302011 ± 2% +30.4% 4305380 ± 17% numa-vmstat.node0.nr_zone_write_pending
1163801 ± 48% -100.0% 0.00 numa-vmstat.node0.numa_foreign
7192581 ± 12% +29.1% 9283819 ± 6% numa-vmstat.node0.numa_hit
7180573 ± 12% +28.8% 9251805 ± 7% numa-vmstat.node0.numa_local
1984427 ± 25% -85.6% 286712 ± 66% numa-vmstat.node0.numa_miss
1996442 ± 25% -84.0% 318738 ± 59% numa-vmstat.node0.numa_other
9281274 ± 9% -27.7% 6709488 ± 14% numa-vmstat.node1.nr_dirtied
8516765 ± 13% -24.6% 6422938 ± 14% numa-vmstat.node1.nr_file_pages
8448896 ± 13% -24.8% 6356318 ± 15% numa-vmstat.node1.nr_inactive_file
10161 ± 7% -32.8% 6831 ± 15% numa-vmstat.node1.nr_writeback
5292854 ± 11% -38.9% 3232303 ± 4% numa-vmstat.node1.nr_written
8448887 ± 13% -24.8% 6356310 ± 15% numa-vmstat.node1.nr_zone_inactive_file
1971155 ± 51% -100.0% 0.00 numa-vmstat.node1.numa_miss
2089389 ± 48% -94.7% 111231 ± 11% numa-vmstat.node1.numa_other
4971600 ± 6% +55.3% 7721225 ± 3% numa-vmstat.node3.nr_dirtied
1759273 ± 6% +75.4% 3085601 ± 18% numa-vmstat.node3.nr_dirty
4431518 ± 8% +64.4% 7286718 ± 3% numa-vmstat.node3.nr_file_pages
11769619 ± 3% -24.2% 8915950 ± 3% numa-vmstat.node3.nr_free_pages
4340312 ± 8% +66.3% 7216460 ± 3% numa-vmstat.node3.nr_inactive_file
1044 ± 64% -81.2% 196.75 ± 66% numa-vmstat.node3.nr_page_table_pages
5097 ± 17% +54.9% 7894 ± 5% numa-vmstat.node3.nr_writeback
3206942 ± 7% +44.3% 4627600 ± 12% numa-vmstat.node3.nr_written
4340309 ± 8% +66.3% 7216450 ± 3% numa-vmstat.node3.nr_zone_inactive_file
1764656 ± 6% +75.3% 3093612 ± 18% numa-vmstat.node3.nr_zone_write_pending
3016936 ± 27% -90.5% 286725 ± 66% numa-vmstat.node3.numa_foreign
5614717 ± 7% +47.6% 8285483 ± 4% numa-vmstat.node3.numa_hit
5504523 ± 7% +48.8% 8190007 ± 5% numa-vmstat.node3.numa_local
85.75 ± 6% +30.0% 111.50 ± 15% interrupts.57:PCI-MSI.1572870-edge.eth0-TxRx-6
86.25 ± 5% +784.1% 762.50 ± 87% interrupts.61:PCI-MSI.1572874-edge.eth0-TxRx-10
446.75 ± 49% -53.5% 207.75 ± 37% interrupts.CPU1.RES:Rescheduling_interrupts
86.25 ± 5% +784.1% 762.50 ± 87% interrupts.CPU10.61:PCI-MSI.1572874-edge.eth0-TxRx-10
76.50 ± 71% +140.2% 183.75 ± 32% interrupts.CPU105.TLB:TLB_shootdowns
68.50 ± 61% +156.9% 176.00 ± 39% interrupts.CPU106.TLB:TLB_shootdowns
77.50 ± 37% +107.4% 160.75 ± 45% interrupts.CPU108.TLB:TLB_shootdowns
79.50 ± 24% +128.9% 182.00 ± 35% interrupts.CPU109.TLB:TLB_shootdowns
92.75 ± 32% +101.1% 186.50 ± 38% interrupts.CPU110.TLB:TLB_shootdowns
100.50 ± 30% +80.6% 181.50 ± 35% interrupts.CPU111.TLB:TLB_shootdowns
82.25 ± 48% -90.0% 8.25 ± 60% interrupts.CPU128.RES:Rescheduling_interrupts
55.25 ± 80% -85.1% 8.25 ±125% interrupts.CPU130.RES:Rescheduling_interrupts
62.00 ± 77% -81.9% 11.25 ± 72% interrupts.CPU132.RES:Rescheduling_interrupts
50.00 ± 82% +160.5% 130.25 ± 80% interrupts.CPU14.TLB:TLB_shootdowns
76.25 ± 57% +80.3% 137.50 ± 22% interrupts.CPU159.RES:Rescheduling_interrupts
68.25 ± 63% +375.1% 324.25 ± 74% interrupts.CPU165.RES:Rescheduling_interrupts
75.50 ± 74% +468.2% 429.00 ± 91% interrupts.CPU167.RES:Rescheduling_interrupts
782.50 ± 11% -48.2% 405.50 ± 35% interrupts.CPU168.NMI:Non-maskable_interrupts
782.50 ± 11% -48.2% 405.50 ± 35% interrupts.CPU168.PMI:Performance_monitoring_interrupts
774.75 ± 15% -54.1% 355.25 ± 38% interrupts.CPU170.NMI:Non-maskable_interrupts
774.75 ± 15% -54.1% 355.25 ± 38% interrupts.CPU170.PMI:Performance_monitoring_interrupts
738.50 ± 9% -50.5% 365.75 ± 40% interrupts.CPU171.NMI:Non-maskable_interrupts
738.50 ± 9% -50.5% 365.75 ± 40% interrupts.CPU171.PMI:Performance_monitoring_interrupts
722.75 ± 15% -48.8% 370.25 ± 49% interrupts.CPU172.NMI:Non-maskable_interrupts
722.75 ± 15% -48.8% 370.25 ± 49% interrupts.CPU172.PMI:Performance_monitoring_interrupts
786.50 ± 17% -54.1% 361.00 ± 38% interrupts.CPU173.NMI:Non-maskable_interrupts
786.50 ± 17% -54.1% 361.00 ± 38% interrupts.CPU173.PMI:Performance_monitoring_interrupts
766.75 ± 15% -53.0% 360.75 ± 39% interrupts.CPU174.NMI:Non-maskable_interrupts
766.75 ± 15% -53.0% 360.75 ± 39% interrupts.CPU174.PMI:Performance_monitoring_interrupts
729.25 ± 11% -51.0% 357.00 ± 41% interrupts.CPU175.NMI:Non-maskable_interrupts
729.25 ± 11% -51.0% 357.00 ± 41% interrupts.CPU175.PMI:Performance_monitoring_interrupts
979.25 ± 54% -65.8% 334.50 ± 38% interrupts.CPU176.NMI:Non-maskable_interrupts
979.25 ± 54% -65.8% 334.50 ± 38% interrupts.CPU176.PMI:Performance_monitoring_interrupts
810.25 ± 23% -54.0% 372.75 ± 52% interrupts.CPU177.NMI:Non-maskable_interrupts
810.25 ± 23% -54.0% 372.75 ± 52% interrupts.CPU177.PMI:Performance_monitoring_interrupts
717.25 ± 15% -48.3% 370.75 ± 45% interrupts.CPU178.NMI:Non-maskable_interrupts
717.25 ± 15% -48.3% 370.75 ± 45% interrupts.CPU178.PMI:Performance_monitoring_interrupts
816.00 ± 23% -57.4% 347.75 ± 37% interrupts.CPU179.NMI:Non-maskable_interrupts
816.00 ± 23% -57.4% 347.75 ± 37% interrupts.CPU179.PMI:Performance_monitoring_interrupts
748.75 ± 10% -50.7% 369.00 ± 42% interrupts.CPU180.NMI:Non-maskable_interrupts
748.75 ± 10% -50.7% 369.00 ± 42% interrupts.CPU180.PMI:Performance_monitoring_interrupts
700.75 ± 14% -57.9% 295.00 ± 12% interrupts.CPU181.NMI:Non-maskable_interrupts
700.75 ± 14% -57.9% 295.00 ± 12% interrupts.CPU181.PMI:Performance_monitoring_interrupts
727.50 ± 15% -49.8% 365.50 ± 43% interrupts.CPU182.NMI:Non-maskable_interrupts
727.50 ± 15% -49.8% 365.50 ± 43% interrupts.CPU182.PMI:Performance_monitoring_interrupts
825.50 ± 13% -47.8% 430.75 ± 37% interrupts.CPU183.NMI:Non-maskable_interrupts
825.50 ± 13% -47.8% 430.75 ± 37% interrupts.CPU183.PMI:Performance_monitoring_interrupts
809.25 ± 15% -46.5% 433.25 ± 41% interrupts.CPU184.NMI:Non-maskable_interrupts
809.25 ± 15% -46.5% 433.25 ± 41% interrupts.CPU184.PMI:Performance_monitoring_interrupts
733.75 ± 10% -21.2% 578.25 ± 13% interrupts.CPU186.NMI:Non-maskable_interrupts
733.75 ± 10% -21.2% 578.25 ± 13% interrupts.CPU186.PMI:Performance_monitoring_interrupts
1346 ± 78% -58.2% 563.25 ± 7% interrupts.CPU187.NMI:Non-maskable_interrupts
1346 ± 78% -58.2% 563.25 ± 7% interrupts.CPU187.PMI:Performance_monitoring_interrupts
780.25 ± 12% -25.7% 580.00 ± 8% interrupts.CPU188.NMI:Non-maskable_interrupts
780.25 ± 12% -25.7% 580.00 ± 8% interrupts.CPU188.PMI:Performance_monitoring_interrupts
199.00 ± 53% -65.5% 68.75 ± 75% interrupts.CPU2.RES:Rescheduling_interrupts
114.50 ± 35% -77.5% 25.75 ± 47% interrupts.CPU24.RES:Rescheduling_interrupts
56.75 ± 94% +163.9% 149.75 ± 40% interrupts.CPU25.TLB:TLB_shootdowns
408.25 ± 31% +49.6% 610.75 ± 9% interrupts.CPU37.NMI:Non-maskable_interrupts
408.25 ± 31% +49.6% 610.75 ± 9% interrupts.CPU37.PMI:Performance_monitoring_interrupts
368.25 ± 42% +73.0% 637.25 ± 6% interrupts.CPU49.NMI:Non-maskable_interrupts
368.25 ± 42% +73.0% 637.25 ± 6% interrupts.CPU49.PMI:Performance_monitoring_interrupts
395.00 ± 35% +59.4% 629.50 ± 11% interrupts.CPU50.NMI:Non-maskable_interrupts
395.00 ± 35% +59.4% 629.50 ± 11% interrupts.CPU50.PMI:Performance_monitoring_interrupts
401.75 ± 40% +319.3% 1684 ±109% interrupts.CPU51.NMI:Non-maskable_interrupts
401.75 ± 40% +319.3% 1684 ±109% interrupts.CPU51.PMI:Performance_monitoring_interrupts
374.00 ± 37% +77.3% 663.00 ± 9% interrupts.CPU52.NMI:Non-maskable_interrupts
374.00 ± 37% +77.3% 663.00 ± 9% interrupts.CPU52.PMI:Performance_monitoring_interrupts
384.00 ± 34% +116.0% 829.50 ± 50% interrupts.CPU53.NMI:Non-maskable_interrupts
384.00 ± 34% +116.0% 829.50 ± 50% interrupts.CPU53.PMI:Performance_monitoring_interrupts
291.50 ± 9% +115.0% 626.75 ± 8% interrupts.CPU54.NMI:Non-maskable_interrupts
291.50 ± 9% +115.0% 626.75 ± 8% interrupts.CPU54.PMI:Performance_monitoring_interrupts
384.00 ± 46% +327.2% 1640 ± 57% interrupts.CPU57.NMI:Non-maskable_interrupts
384.00 ± 46% +327.2% 1640 ± 57% interrupts.CPU57.PMI:Performance_monitoring_interrupts
85.75 ± 6% +30.0% 111.50 ± 15% interrupts.CPU6.57:PCI-MSI.1572870-edge.eth0-TxRx-6
375.50 ± 38% +73.5% 651.50 ± 14% interrupts.CPU60.NMI:Non-maskable_interrupts
375.50 ± 38% +73.5% 651.50 ± 14% interrupts.CPU60.PMI:Performance_monitoring_interrupts
67.50 ± 57% +155.6% 172.50 ± 33% interrupts.CPU63.RES:Rescheduling_interrupts
72.25 ± 67% +154.7% 184.00 ± 35% interrupts.CPU64.RES:Rescheduling_interrupts
73.25 ± 84% +160.1% 190.50 ± 31% interrupts.CPU65.RES:Rescheduling_interrupts
57.75 ± 83% +297.8% 229.75 ± 63% interrupts.CPU69.RES:Rescheduling_interrupts
79.00 ± 55% +216.1% 249.75 ± 60% interrupts.CPU70.RES:Rescheduling_interrupts
53.00 ± 81% +423.1% 277.25 ± 53% interrupts.CPU71.RES:Rescheduling_interrupts
1192 ± 60% -48.8% 610.75 ± 14% interrupts.CPU89.NMI:Non-maskable_interrupts
1192 ± 60% -48.8% 610.75 ± 14% interrupts.CPU89.PMI:Performance_monitoring_interrupts
23200 ± 4% -31.1% 15987 ± 5% softirqs.CPU0.RCU
22518 ± 5% -15.8% 18969 ± 12% softirqs.CPU1.RCU
25629 ± 5% -36.1% 16373 ± 7% softirqs.CPU10.RCU
24471 ± 4% -34.0% 16153 ± 7% softirqs.CPU100.RCU
24082 ± 4% -35.6% 15518 ± 3% softirqs.CPU101.RCU
24381 ± 3% -35.3% 15787 ± 5% softirqs.CPU102.RCU
24282 ± 3% -32.0% 16517 ± 5% softirqs.CPU103.RCU
23824 -36.3% 15166 ± 3% softirqs.CPU104.RCU
23766 ± 9% -35.7% 15277 ± 16% softirqs.CPU105.RCU
24877 ± 6% -36.4% 15831 ± 6% softirqs.CPU106.RCU
24260 ± 5% -36.5% 15412 ± 2% softirqs.CPU107.RCU
23514 ± 4% -32.6% 15843 ± 4% softirqs.CPU108.RCU
23168 ± 11% -32.9% 15557 ± 4% softirqs.CPU109.RCU
24135 ± 3% -35.5% 15575 ± 4% softirqs.CPU11.RCU
23094 ± 11% -31.1% 15903 ± 6% softirqs.CPU110.RCU
24722 ± 7% -34.4% 16207 ± 6% softirqs.CPU111.RCU
22641 ± 7% -36.0% 14492 ± 6% softirqs.CPU112.RCU
22698 ± 6% -35.9% 14548 ± 5% softirqs.CPU113.RCU
20120 ± 3% -34.1% 13251 ± 2% softirqs.CPU114.RCU
20952 ± 3% -31.4% 14379 ± 6% softirqs.CPU115.RCU
20669 -32.4% 13982 ± 7% softirqs.CPU116.RCU
22652 ± 5% -34.8% 14758 ± 7% softirqs.CPU117.RCU
21951 ± 8% -32.7% 14770 ± 4% softirqs.CPU118.RCU
21279 ± 3% -34.5% 13946 ± 6% softirqs.CPU119.RCU
23698 ± 2% -31.7% 16175 ± 4% softirqs.CPU12.RCU
22161 ± 2% -35.0% 14413 ± 2% softirqs.CPU120.RCU
21560 ± 4% -38.3% 13307 ± 3% softirqs.CPU121.RCU
22145 ± 4% -39.0% 13518 ± 4% softirqs.CPU122.RCU
21705 ± 3% -38.1% 13425 ± 5% softirqs.CPU123.RCU
22025 ± 3% -39.1% 13416 ± 4% softirqs.CPU124.RCU
71707 ± 6% -12.6% 62663 ± 12% softirqs.CPU124.TIMER
21701 ± 3% -39.5% 13123 ± 3% softirqs.CPU125.RCU
21468 -37.3% 13469 ± 2% softirqs.CPU126.RCU
21540 ± 2% -38.6% 13228 ± 2% softirqs.CPU127.RCU
23737 ± 2% -35.6% 15282 ± 8% softirqs.CPU128.RCU
24378 ± 3% -44.5% 13528 ± 13% softirqs.CPU129.RCU
23915 ± 10% -33.2% 15987 ± 4% softirqs.CPU13.RCU
24830 ± 5% -40.7% 14724 ± 4% softirqs.CPU130.RCU
23809 ± 2% -39.3% 14455 ± 2% softirqs.CPU131.RCU
24304 ± 3% -38.7% 14892 ± 3% softirqs.CPU132.RCU
24225 -38.2% 14973 ± 4% softirqs.CPU133.RCU
24375 ± 3% -30.5% 16935 ± 13% softirqs.CPU134.RCU
26209 ± 4% -39.9% 15744 ± 7% softirqs.CPU135.RCU
26650 ± 4% -40.2% 15947 ± 6% softirqs.CPU137.RCU
23560 ± 2% -37.9% 14638 ± 4% softirqs.CPU138.RCU
23746 ± 9% -38.4% 14635 ± 13% softirqs.CPU139.RCU
23923 ± 8% -29.2% 16947 ± 4% softirqs.CPU14.RCU
22424 ± 17% -33.9% 14823 ± 6% softirqs.CPU140.RCU
26491 ± 3% -40.3% 15815 ± 7% softirqs.CPU141.RCU
26561 ± 4% -39.4% 16099 ± 5% softirqs.CPU142.RCU
24310 ± 6% -37.1% 15284 ± 5% softirqs.CPU143.RCU
24510 ± 5% -37.4% 15354 ± 3% softirqs.CPU144.RCU
23847 ± 5% -38.1% 14761 ± 3% softirqs.CPU145.RCU
25060 ± 6% -40.4% 14924 ± 3% softirqs.CPU146.RCU
23950 ± 11% -47.2% 12640 ± 14% softirqs.CPU147.RCU
24757 ± 7% -48.7% 12707 ± 18% softirqs.CPU148.RCU
23776 ± 2% -42.1% 13776 ± 11% softirqs.CPU149.RCU
25078 ± 7% -32.6% 16893 ± 6% softirqs.CPU15.RCU
23372 ± 14% -40.4% 13926 ± 16% softirqs.CPU150.RCU
25723 ± 8% -46.4% 13778 ± 16% softirqs.CPU151.RCU
24361 ± 6% -40.9% 14407 ± 10% softirqs.CPU152.RCU
24783 ± 3% -25.9% 18360 ± 22% softirqs.CPU153.RCU
20569 ± 3% -11.1% 18295 ± 7% softirqs.CPU153.SCHED
24343 ± 9% -47.7% 12731 ± 33% softirqs.CPU154.RCU
25566 ± 8% -41.9% 14842 ± 5% softirqs.CPU155.RCU
24887 ± 7% -39.1% 15167 ± 5% softirqs.CPU156.RCU
24441 ± 6% -37.2% 15340 ± 5% softirqs.CPU157.RCU
23774 ± 2% -36.0% 15226 ± 4% softirqs.CPU158.RCU
27239 ± 15% -44.0% 15263 ± 4% softirqs.CPU159.RCU
22945 ± 4% -35.0% 14923 ± 7% softirqs.CPU16.RCU
22114 -37.0% 13943 ± 6% softirqs.CPU160.RCU
23526 ± 7% -40.7% 13945 ± 2% softirqs.CPU161.RCU
21165 ± 6% -37.9% 13138 ± 3% softirqs.CPU162.RCU
22289 ± 9% -39.7% 13446 ± 5% softirqs.CPU163.RCU
21648 ± 9% -38.1% 13400 ± 4% softirqs.CPU164.RCU
23582 ± 8% -43.5% 13326 ± 11% softirqs.CPU165.RCU
24354 ± 8% -44.8% 13442 ± 5% softirqs.CPU166.RCU
21901 ± 9% -40.0% 13140 ± 4% softirqs.CPU167.RCU
22122 ± 11% -41.1% 13025 ± 3% softirqs.CPU168.RCU
20721 ± 16% -33.5% 13781 ± 9% softirqs.CPU169.RCU
21384 ± 4% -10.2% 19212 ± 4% softirqs.CPU169.SCHED
74084 ± 3% -22.6% 57373 ± 7% softirqs.CPU169.TIMER
23326 ± 3% -30.1% 16311 ± 15% softirqs.CPU17.RCU
22013 ± 9% -39.7% 13271 ± 3% softirqs.CPU170.RCU
69607 ± 9% -17.3% 57551 ± 6% softirqs.CPU170.TIMER
21648 ± 10% -40.4% 12903 ± 4% softirqs.CPU171.RCU
21544 ± 9% -42.0% 12491 ± 5% softirqs.CPU172.RCU
21436 ± 9% -40.3% 12803 ± 6% softirqs.CPU173.RCU
21782 ± 10% -39.3% 13224 ± 4% softirqs.CPU174.RCU
21609 ± 10% -40.8% 12798 ± 4% softirqs.CPU175.RCU
22224 ± 6% -36.9% 14026 ± 5% softirqs.CPU176.RCU
21640 ± 10% -36.0% 13851 ± 11% softirqs.CPU177.RCU
20431 ± 4% -8.2% 18755 ± 5% softirqs.CPU177.SCHED
22800 ± 4% -35.7% 14670 ± 5% softirqs.CPU178.RCU
22103 ± 7% -36.0% 14138 ± 4% softirqs.CPU179.RCU
21053 ± 5% -35.3% 13630 ± 2% softirqs.CPU18.RCU
22188 ± 7% -34.2% 14608 ± 4% softirqs.CPU180.RCU
22976 ± 5% -37.2% 14436 ± 4% softirqs.CPU181.RCU
22932 ± 9% -31.9% 15626 ± 11% softirqs.CPU182.RCU
24443 ± 8% -38.4% 15060 ± 5% softirqs.CPU183.RCU
25583 ± 8% -40.1% 15321 ± 4% softirqs.CPU184.RCU
70497 ± 6% -15.7% 59394 ± 5% softirqs.CPU184.TIMER
26175 ± 9% -41.6% 15275 ± 5% softirqs.CPU185.RCU
71005 ± 5% -17.8% 58334 ± 5% softirqs.CPU185.TIMER
23931 ± 3% -41.1% 14099 ± 4% softirqs.CPU186.RCU
26037 ± 12% -44.9% 14350 ± 8% softirqs.CPU187.RCU
23996 ± 5% -39.6% 14490 ± 4% softirqs.CPU188.RCU
27080 ± 11% -45.1% 14855 ± 10% softirqs.CPU189.RCU
20641 ± 5% -29.9% 14466 ± 6% softirqs.CPU19.RCU
25108 ± 7% -39.1% 15280 ± 5% softirqs.CPU190.RCU
70386 ± 8% -16.8% 58571 ± 4% softirqs.CPU190.TIMER
24651 ± 4% -41.6% 14404 ± 6% softirqs.CPU191.RCU
20449 ± 3% -7.5% 18912 ± 4% softirqs.CPU191.SCHED
69747 ± 8% -15.6% 58878 ± 5% softirqs.CPU191.TIMER
24459 ± 5% -32.2% 16575 ± 5% softirqs.CPU2.RCU
21287 ± 2% -29.9% 14911 ± 5% softirqs.CPU20.RCU
22989 ± 2% -31.2% 15815 ± 9% softirqs.CPU21.RCU
22529 ± 6% -32.1% 15308 ± 4% softirqs.CPU22.RCU
22249 ± 4% -34.1% 14666 ± 8% softirqs.CPU23.RCU
23833 ± 3% -34.1% 15702 ± 3% softirqs.CPU24.RCU
22340 ± 3% -37.3% 14004 ± 4% softirqs.CPU25.RCU
22821 ± 4% -37.6% 14234 ± 4% softirqs.CPU26.RCU
22365 ± 2% -37.9% 13888 ± 5% softirqs.CPU27.RCU
22753 ± 3% -48.4% 11736 ± 32% softirqs.CPU28.RCU
22303 ± 4% -39.2% 13560 ± 3% softirqs.CPU29.RCU
26947 ± 10% -38.6% 16549 ± 7% softirqs.CPU3.RCU
22344 ± 2% -37.4% 13996 ± 2% softirqs.CPU30.RCU
22285 ± 2% -45.5% 12154 ± 21% softirqs.CPU31.RCU
23877 ± 3% -37.2% 14996 ± 4% softirqs.CPU32.RCU
24827 ± 2% -43.3% 14078 ± 12% softirqs.CPU33.RCU
25217 ± 3% -38.8% 15427 ± 2% softirqs.CPU34.RCU
23942 ± 2% -38.2% 14798 ± 2% softirqs.CPU35.RCU
24326 ± 2% -37.0% 15331 ± 3% softirqs.CPU36.RCU
24743 ± 3% -36.9% 15610 ± 4% softirqs.CPU37.RCU
25003 -37.1% 15716 ± 7% softirqs.CPU38.RCU
26486 ± 3% -38.0% 16414 ± 7% softirqs.CPU39.RCU
25055 -34.4% 16424 ± 6% softirqs.CPU4.RCU
26355 ± 3% -37.4% 16504 ± 7% softirqs.CPU40.RCU
26796 ± 3% -39.5% 16219 ± 6% softirqs.CPU41.RCU
23944 -37.7% 14924 ± 4% softirqs.CPU42.RCU
24935 ± 2% -38.2% 15399 ± 9% softirqs.CPU43.RCU
25209 ± 4% -39.7% 15198 ± 7% softirqs.CPU44.RCU
26841 ± 3% -39.2% 16307 ± 7% softirqs.CPU45.RCU
26954 ± 5% -39.3% 16352 ± 6% softirqs.CPU46.RCU
72758 ± 6% -13.1% 63250 ± 12% softirqs.CPU46.TIMER
24713 ± 5% -36.4% 15719 ± 5% softirqs.CPU47.RCU
25005 ± 7% -35.4% 16153 ± 5% softirqs.CPU48.RCU
25086 ± 7% -36.6% 15905 ± 3% softirqs.CPU49.RCU
24352 ± 3% -35.1% 15802 ± 3% softirqs.CPU5.RCU
26029 ± 9% -39.9% 15634 ± 3% softirqs.CPU50.RCU
26066 ± 6% -42.7% 14933 ± 12% softirqs.CPU51.RCU
25792 ± 8% -47.6% 13517 ± 18% softirqs.CPU52.RCU
24668 ± 5% -41.3% 14477 ± 10% softirqs.CPU53.RCU
24755 ± 11% -40.6% 14711 ± 15% softirqs.CPU54.RCU
26696 ± 7% -43.4% 15108 ± 16% softirqs.CPU55.RCU
25310 ± 7% -39.1% 15405 ± 9% softirqs.CPU56.RCU
25491 ± 4% -40.8% 15103 ± 10% softirqs.CPU57.RCU
24493 ± 15% -36.5% 15564 ± 5% softirqs.CPU58.RCU
25247 ± 5% -38.9% 15414 ± 5% softirqs.CPU59.RCU
25244 ± 2% -35.4% 16309 ± 6% softirqs.CPU6.RCU
24964 ± 6% -37.1% 15694 ± 5% softirqs.CPU60.RCU
24997 ± 6% -37.0% 15758 ± 3% softirqs.CPU61.RCU
23131 ± 11% -31.2% 15923 ± 3% softirqs.CPU62.RCU
25362 ± 3% -37.9% 15751 ± 3% softirqs.CPU63.RCU
22262 ± 3% -35.3% 14396 ± 4% softirqs.CPU64.RCU
23891 ± 6% -40.2% 14285 softirqs.CPU65.RCU
21406 ± 7% -37.6% 13361 ± 3% softirqs.CPU66.RCU
22559 ± 8% -38.0% 13992 ± 5% softirqs.CPU67.RCU
21915 ± 9% -37.0% 13813 ± 3% softirqs.CPU68.RCU
24115 ± 7% -42.2% 13948 ± 10% softirqs.CPU69.RCU
26055 ± 9% -46.5% 13936 ± 34% softirqs.CPU7.RCU
23779 ± 8% -41.6% 13887 ± 8% softirqs.CPU70.RCU
22511 ± 9% -38.8% 13782 ± 4% softirqs.CPU71.RCU
23013 ± 11% -39.6% 13906 ± 3% softirqs.CPU72.RCU
21785 ± 13% -38.4% 13429 ± 6% softirqs.CPU73.RCU
75682 ± 4% -23.1% 58226 ± 6% softirqs.CPU73.TIMER
22508 ± 11% -39.5% 13627 ± 3% softirqs.CPU74.RCU
22691 ± 11% -38.5% 13960 ± 4% softirqs.CPU75.RCU
23127 ± 10% -41.7% 13480 ± 3% softirqs.CPU76.RCU
21909 ± 9% -38.0% 13580 ± 7% softirqs.CPU77.RCU
22584 ± 9% -40.4% 13460 ± 3% softirqs.CPU78.RCU
22352 ± 11% -40.0% 13420 ± 4% softirqs.CPU79.RCU
24521 -34.1% 16167 ± 4% softirqs.CPU8.RCU
23373 ± 11% -36.0% 14953 ± 4% softirqs.CPU80.RCU
23622 ± 4% -38.6% 14500 ± 9% softirqs.CPU81.RCU
20927 ± 4% -11.4% 18550 ± 2% softirqs.CPU81.SCHED
77620 ± 6% -27.0% 56642 ± 7% softirqs.CPU81.TIMER
24100 ± 4% -35.7% 15495 ± 6% softirqs.CPU82.RCU
23075 ± 8% -36.3% 14698 ± 4% softirqs.CPU83.RCU
23350 ± 9% -34.8% 15217 ± 3% softirqs.CPU84.RCU
23821 ± 6% -37.3% 14926 ± 2% softirqs.CPU85.RCU
23901 ± 9% -34.4% 15684 ± 4% softirqs.CPU86.RCU
25565 ± 7% -39.2% 15547 ± 4% softirqs.CPU87.RCU
26186 ± 8% -39.5% 15844 ± 3% softirqs.CPU88.RCU
71292 ± 6% -16.8% 59325 ± 5% softirqs.CPU88.TIMER
26995 ± 9% -42.1% 15632 ± 4% softirqs.CPU89.RCU
71906 ± 5% -18.9% 58342 ± 4% softirqs.CPU89.TIMER
24096 ± 9% -37.9% 14958 ± 14% softirqs.CPU9.RCU
24756 ± 4% -39.9% 14880 ± 3% softirqs.CPU90.RCU
27266 ± 18% -44.8% 15047 ± 4% softirqs.CPU91.RCU
24914 ± 7% -36.7% 15766 ± 8% softirqs.CPU92.RCU
26138 ± 7% -42.9% 14919 ± 10% softirqs.CPU93.RCU
26775 ± 6% -39.1% 16306 ± 5% softirqs.CPU94.RCU
71267 ± 8% -17.4% 58875 ± 4% softirqs.CPU94.TIMER
25074 ± 4% -40.2% 15004 ± 4% softirqs.CPU95.RCU
70923 ± 9% -17.2% 58728 ± 5% softirqs.CPU95.TIMER
24257 ± 12% -28.4% 17371 ± 16% softirqs.CPU96.RCU
22072 ± 6% -28.7% 15741 ± 5% softirqs.CPU97.RCU
23483 ± 5% -32.1% 15940 ± 6% softirqs.CPU98.RCU
23761 -32.8% 15969 ± 7% softirqs.CPU99.RCU
4572216 -37.7% 2847467 ± 3% softirqs.RCU
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[debugfs] 5496197f9b: WARNING:at_fs/open.c:#do_dentry_open
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 5496197f9b084f086cb410dd566648b0896fcc74 ("debugfs: Restrict debugfs when the kernel is locked down")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: boot
on test machine: qemu-system-i386 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+--------------------------------------+------------+------------+
| | 29d3c1c8df | 5496197f9b |
+--------------------------------------+------------+------------+
| boot_successes | 12 | 1 |
| boot_failures | 0 | 11 |
| WARNING:at_fs/open.c:#do_dentry_open | 0 | 11 |
| EIP:do_dentry_open | 0 | 11 |
+--------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 6.694953] WARNING: CPU: 0 PID: 298 at fs/open.c:805 do_dentry_open+0x1ef/0x24b
[ 6.696274] Modules linked in:
[ 6.696719] CPU: 0 PID: 298 Comm: run-lkp Not tainted 5.2.0-00026-g5496197f9b084 #2
[ 6.697784] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 6.698966] EIP: do_dentry_open+0x1ef/0x24b
[ 6.699554] Code: 40 8b 4d f0 74 78 8b 83 00 01 00 00 8b 80 84 00 00 00 85 c0 74 63 83 78 2c 00 b8 ea ff ff ff 0f 44 c8 eb 5a 89 c2 85 d2 7e 07 <0f> 0b ba ea ff ff ff f6 43 46 01 74 1b 89 55 f0 ff 8e 18 01 00 00
[ 6.702133] EAX: 00000001 EBX: f739e780 ECX: 00000001 EDX: 00000001
[ 6.703007] ESI: f0c47c70 EDI: c128e5ad EBP: f7283dec ESP: f7283ddc
[ 6.703888] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068 EFLAGS: 00010202
[ 6.704860] CR0: 80050033 CR2: 09407008 CR3: 37288000 CR4: 000406d0
[ 6.705756] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 6.706634] DR6: fffe0ff0 DR7: 00000400
[ 6.707175] Call Trace:
[ 6.707531] vfs_open+0x23/0x27
[ 6.707985] path_openat+0x8f5/0xa3d
[ 6.708502] do_filp_open+0x50/0xa8
[ 6.709005] ? _raw_spin_unlock+0x27/0x38
[ 6.709574] ? __alloc_fd+0x10f/0x119
[ 6.710102] do_sys_open+0x61/0xd3
[ 6.710591] sys_open+0x18/0x1a
[ 6.711041] do_fast_syscall_32+0x90/0xcf
[ 6.711612] entry_SYSENTER_32+0x79/0xde
[ 6.712168] EIP: 0xb7efa795
[ 6.712570] Code: cd ff ff 85 d2 89 c8 74 02 89 0a 5b 5d c3 8b 04 24 c3 8b 14 24 c3 8b 1c 24 c3 8b 3c 24 c3 90 90 90 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 8d 76 00 58 b8 77 00 00 00 cd 80 90 8d 76
[ 6.715182] EAX: ffffffda EBX: 09456168 ECX: 00008241 EDX: 000001b6
[ 6.716072] ESI: 00000000 EDI: b7ec8ff4 EBP: bf8c8378 ESP: bf8c8210
[ 6.716956] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000286
[ 6.717908] ---[ end trace d1d982a595e22ad7 ]---
To reproduce:
# build kernel
cd linux
cp config-5.2.0-00026-g5496197f9b084 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
86dc301f7b ("drivers/base/memory.c: cache blocks in radix tree .."): [ 1.341517] WARNING: suspicious RCU usage
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Scott-Cheloha/drivers-base-memor...
commit 86dc301f7b4815d90e3a7843ffed655d64efe445
Author: Scott Cheloha <cheloha(a)linux.vnet.ibm.com>
AuthorDate: Thu Nov 21 13:59:52 2019 -0600
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Sun Nov 24 10:45:59 2019 +0800
drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
Searching for a particular memory block by id is slow because each block
device is kept in an unsorted linked list on the subsystem bus.
Lookup is much faster if we cache the blocks in a radix tree. Memory
subsystem initialization and hotplug/hotunplug is at least a little faster
for any machine with more than ~100 blocks, and the speedup grows with
the block count.
Signed-off-by: Scott Cheloha <cheloha(a)linux.vnet.ibm.com>
Acked-by: David Hildenbrand <david(a)redhat.com>
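The idea behind the patch being bisected here is easy to state: replace the O(n) walk of the unsorted subsystem bus list with an id-keyed lookup. What follows is a minimal, self-contained userspace C sketch of that idea only, assuming a flat lookup array in place of the kernel's radix tree; the structure layout and function names are illustrative, not the code from the patch.
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for the kernel's struct memory_block. */
struct memory_block {
	unsigned long id;
	struct memory_block *next;		/* the unsorted bus list */
};

#define MAX_BLOCKS 1024UL

static struct memory_block *bus_list;			/* old path: linear scan */
static struct memory_block *block_cache[MAX_BLOCKS];	/* new path: keyed lookup */

/* O(n): walk the unsorted list, as the pre-patch lookup effectively did. */
static struct memory_block *find_block_slow(unsigned long id)
{
	struct memory_block *b;

	for (b = bus_list; b; b = b->next)
		if (b->id == id)
			return b;
	return NULL;
}

/* The cached lookup the patch adds; a radix tree in the kernel, a flat
 * array here, but either way no list walk. */
static struct memory_block *find_block_fast(unsigned long id)
{
	return id < MAX_BLOCKS ? block_cache[id] : NULL;
}

static void add_block(unsigned long id)
{
	struct memory_block *b = calloc(1, sizeof(*b));

	if (!b)
		exit(1);
	b->id = id;
	b->next = bus_list;
	bus_list = b;
	if (id < MAX_BLOCKS)
		block_cache[id] = b;
}

int main(void)
{
	unsigned long i;

	for (i = 0; i < 200; i++)
		add_block(i);
	printf("slow lookup: %lu, cached lookup: %lu\n",
	       find_block_slow(150)->id, find_block_fast(150)->id);
	return 0;
}
The suspicious-RCU-usage splat reported below is consistent with walk_memory_blocks() iterating the new cache without holding rcu_read_lock() or the lock that the radix tree's rcu_dereference_check() expects.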
0e4a459f56 tracing: Remove unnecessary DEBUG_FS dependency
86dc301f7b drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
+---------------------------------------------------------------------+------------+------------+
| | 0e4a459f56 | 86dc301f7b |
+---------------------------------------------------------------------+------------+------------+
| boot_successes | 17 | 0 |
| boot_failures | 40 | 11 |
| WARNING:possible_circular_locking_dependency_detected | 40 | 8 |
| WARNING:suspicious_RCU_usage | 0 | 11 |
| include/linux/radix-tree.h:#suspicious_rcu_dereference_check()usage | 0 | 11 |
+---------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 1.335279] random: get_random_bytes called from kcmp_cookies_init+0x29/0x4c with crng_init=0
[ 1.336883] ACPI: bus type PCI registered
[ 1.338295] PCI: Using configuration type 1 for base access
[ 1.340735]
[ 1.341049] =============================
[ 1.341517] WARNING: suspicious RCU usage
[ 1.342266] 5.4.0-rc5-00070-g86dc301f7b481 #1 Tainted: G T
[ 1.343494] -----------------------------
[ 1.344226] include/linux/radix-tree.h:167 suspicious rcu_dereference_check() usage!
[ 1.345516]
[ 1.345516] other info that might help us debug this:
[ 1.345516]
[ 1.346962]
[ 1.346962] rcu_scheduler_active = 2, debug_locks = 1
[ 1.348134] no locks held by swapper/0/1.
[ 1.348866]
[ 1.348866] stack backtrace:
[ 1.349525] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G T 5.4.0-rc5-00070-g86dc301f7b481 #1
[ 1.351230] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1.352720] Call Trace:
[ 1.353187] ? dump_stack+0x9a/0xde
[ 1.353507] ? node_access_release+0x19/0x19
[ 1.353507] ? walk_memory_blocks+0xe6/0x184
[ 1.353507] ? set_debug_rodata+0x20/0x20
[ 1.353507] ? link_mem_sections+0x39/0x3d
[ 1.353507] ? topology_init+0x74/0xc8
[ 1.353507] ? enable_cpu0_hotplug+0x15/0x15
[ 1.353507] ? do_one_initcall+0x13d/0x30a
[ 1.353507] ? kernel_init_freeable+0x18e/0x23b
[ 1.353507] ? rest_init+0x173/0x173
[ 1.353507] ? kernel_init+0x10/0x151
[ 1.353507] ? rest_init+0x173/0x173
[ 1.353507] ? ret_from_fork+0x3a/0x50
[ 1.410829] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[ 1.427389] cryptd: max_cpu_qlen set to 1000
[ 1.457792] ACPI: Added _OSI(Module Device)
[ 1.458615] ACPI: Added _OSI(Processor Device)
[ 1.459428] ACPI: Added _OSI(3.0 _SCP Extensions)
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 1d5f18079cb10420c4e4b67f571b998ec861a20c 219d54332a09e8d8741c1e1982f5eae56099de85 --
git bisect good 00e5729fc7309fd1da26659b7dce0fcc0b46ab7e # 12:06 G 10 0 5 14 Merge 'linux-review/Daniel-W-S-Almeida/media-dummy_dvb_fe-register-adapter-frontend/20191127-043813' into devel-hourly-2019113015
git bisect bad da795b1ad310518d9100105235c9ba3ff4ee2524 # 12:07 B 0 11 27 0 Merge 'linux-review/Krzysztof-Kozlowski/media-Fix-Kconfig-indentation/20191122-055026' into devel-hourly-2019113015
git bisect bad 53f2832a6bf7103957815cef4ad58be22255ea7a # 12:08 B 0 11 27 0 Merge 'linux-review/Luc-Van-Oostenryck/misc-xilinx_sdfec-add-missing-__user-annotation/20191124-052749' into devel-hourly-2019113015
git bisect good 03a04e32f16a2c4bd267b48e8ab4aaf71c507050 # 12:13 G 10 0 5 13 Merge 'linux-review/Helmut-Grohne/mdio_bus-revert-inadvertent-error-code-change/20191124-171049' into devel-hourly-2019113015
git bisect bad 03403c59a591b12556bd0db1243eb0503112a0df # 12:13 B 0 11 27 0 Merge 'linux-review/Navid-Emamdoost/EDAC-Fix-memory-leak-in-i5100_init_one/20191124-103621' into devel-hourly-2019113015
git bisect good fae5c4b30fda35cbcc8389f432dcaac20f9b3a12 # 12:17 G 10 0 4 11 Merge 'linux-review/David-Gow/kunit-Always-print-actual-pointer-values-in-asserts/20191124-131742' into devel-hourly-2019113015
git bisect bad 1e0c725d2bb21789330a2fe1a360c37ae753eb18 # 12:17 B 0 11 27 0 Merge 'linux-review/Scott-Cheloha/drivers-base-memory-c-cache-blocks-in-radix-tree-to-accelerate-lookup/20191124-104557' into devel-hourly-2019113015
git bisect good 0902b87758e830e857e11f1538e50b218743f4b0 # 12:21 G 10 0 6 13 Merge 'linux-review/Julio-Faracco/drivers-net-virtio_net-Implement-a-dev_watchdog-handler/20191124-135051' into devel-hourly-2019113015
git bisect good 2d0c894673b55b51915ea858eabcf7c97c9b8ccb # 12:25 G 10 0 5 11 Merge 'linux-review/Maciej-enczykowski/net-ipv6-IPV6_TRANSPARENT-check-NET_RAW-prior-to-NET_ADMIN/20191124-120121' into devel-hourly-2019113015
git bisect good 3c095c856fb84271e56e1869aface37af1b078f1 # 12:29 G 10 0 6 16 Merge 'linux-review/Navid-Emamdoost/Bluetooth-Fix-memory-leak-in-hci_connect_le_scan/20191124-111255' into devel-hourly-2019113015
git bisect bad 86dc301f7b4815d90e3a7843ffed655d64efe445 # 12:58 B 0 11 41 0 drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
# first bad commit: [86dc301f7b4815d90e3a7843ffed655d64efe445] drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
git bisect good 0e4a459f56c32d3e52ae69a4b447db2f48a65f44 # 13:24 G 30 0 15 41 tracing: Remove unnecessary DEBUG_FS dependency
# extra tests with debug options
git bisect good 86dc301f7b4815d90e3a7843ffed655d64efe445 # 13:25 G 11 0 0 0 drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
# extra tests on head commit of linux-review/Scott-Cheloha/drivers-base-memory-c-cache-blocks-in-radix-tree-to-accelerate-lookup/20191124-104557
git bisect bad 86dc301f7b4815d90e3a7843ffed655d64efe445 # 13:31 B 0 11 41 0 drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
# bad: [86dc301f7b4815d90e3a7843ffed655d64efe445] drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
# extra tests on revert first bad commit
git bisect good 65bba365eabe04a22e5d68012a45b92eee26860c # 13:38 G 10 0 6 16 Revert "drivers/base/memory.c: cache blocks in radix tree to accelerate lookup"
# good: [65bba365eabe04a22e5d68012a45b92eee26860c] Revert "drivers/base/memory.c: cache blocks in radix tree to accelerate lookup"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org Intel Corporation
9dc510db74 ("perf: Sharing PMU counters across compatible .."): [ 15.895385] WARNING: CPU: 0 PID: 648 at kernel/events/core.c:1755 perf_event_remove_dup
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Song-Liu/perf-Sharing-PMU-counte...
commit 9dc510db74b1b873853747098e1cd66b71e63210
Author: Song Liu <songliubraving(a)fb.com>
AuthorDate: Fri Nov 22 19:50:06 2019 +0000
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Mon Nov 25 02:13:06 2019 +0800
perf: Sharing PMU counters across compatible events
> On Nov 22, 2019, at 11:33 AM, Jiri Olsa <jolsa(a)redhat.com> wrote:
>
> On Fri, Nov 15, 2019 at 03:55:04PM -0800, Song Liu wrote:
>> This patch tries to enable PMU sharing. When multiple perf_events are
>> counting the same metric, they can share the hardware PMU counter. We
>> call these events as "compatible events".
>>
>> The PMU sharing are limited to events within the same perf_event_context
>> (ctx). When a event is installed or enabled, search the ctx for compatible
>> events. This is implemented in perf_event_setup_dup(). One of these
>> compatible events are picked as the master (stored in event->dup_master).
>> Similarly, when the event is removed or disabled, perf_event_remove_dup()
>> is used to clean up sharing.
>>
>> A new state PERF_EVENT_STATE_ENABLED is introduced for the master event.
>> This state is used when the slave event is ACTIVE, but the master event
>> is not.
>>
>> On the critical paths (add, del read), sharing PMU counters doesn't
>> increase the complexity. Helper functions event_pmu_[add|del|read]() are
>> introduced to cover these cases. All these functions have O(1) time
>> complexity.
>>
>> Cc: Peter Zijlstra <peterz(a)infradead.org>
>> Cc: Arnaldo Carvalho de Melo <acme(a)redhat.com>
>> Cc: Jiri Olsa <jolsa(a)kernel.org>
>> Cc: Alexey Budankov <alexey.budankov(a)linux.intel.com>
>> Cc: Namhyung Kim <namhyung(a)kernel.org>
>> Cc: Tejun Heo <tj(a)kernel.org>
>> Signed-off-by: Song Liu <songliubraving(a)fb.com>
>>
>> ---
>> Changes in v7:
>> Major rewrite to avoid allocating extra master event.
>
> hi,
> what is this based on? I can't apply it on tip/master:
>
> Applying: perf: Sharing PMU counters across compatible events
> error: patch failed: include/linux/perf_event.h:722
> error: include/linux/perf_event.h: patch does not apply
> Patch failed at 0001 perf: Sharing PMU counters across compatible events
> hint: Use 'git am --show-current-patch' to see the failed patch
> When you have resolved this problem, run "git am --continue".
> If you prefer to skip this patch, run "git am --skip" instead.
> To restore the original branch and stop patching, run "git am --abort".
I was using Linus's master branch. This one is specifically based on
commit 96b95eff4a591dbac582c2590d067e356a18aacb
Merge: 4e84608c7836 80591e61a0f7
Author: Linus Torvalds <torvalds(a)linux-foundation.org>
Date: 8 days ago
Merge tag 'kbuild-fixes-v5.4-3' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild fixes from Masahiro Yamada:
- fix build error when compiling SPARC VDSO with CONFIG_COMPAT=y
- pass correct --arch option to Sparse
* tag 'kbuild-fixes-v5.4-3' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
kbuild: tell sparse about the $ARCH
sparc: vdso: fix build error of vdso32
>
> also I'm getting this when trying to see/apply plain text patch:
>
> [jolsa@dell-r440-01 linux-perf]$ git am --show-current-patch | tail
> =09=09for_each_sibling_event(sibling, group_leader) {
> =09=09=09perf_remove_from_context(sibling, 0);
> =09=09=09put_ctx(gctx);
> +=09=09=09WARN_ON_ONCE(sibling->dup_master);
> =09=09}
> =20
> =09=09/*
> --=20
I also get these =09, =20 issues. I am not sure how to fix them. Attaching
the patch here to see whether it fixes it.
Thanks!
Song
From bb83b28deca52a2376d8b9d0b1d54e7fec797aa9 Mon Sep 17 00:00:00 2001
From: Song Liu <songliubraving(a)fb.com>
Date: Wed, 6 Jun 2018 23:24:10 -0700
Subject: [PATCH v7] perf: Sharing PMU counters across compatible events
This patch tries to enable PMU sharing. When multiple perf_events are
counting the same metric, they can share the hardware PMU counter. We
call these events "compatible events".
PMU sharing is limited to events within the same perf_event_context
(ctx). When an event is installed or enabled, the ctx is searched for
compatible events. This is implemented in perf_event_setup_dup(). One of
these compatible events is picked as the master (stored in
event->dup_master). Similarly, when the event is removed or disabled,
perf_event_remove_dup() is used to clean up the sharing.
A new state PERF_EVENT_STATE_ENABLED is introduced for the master event.
This state is used when the slave event is ACTIVE, but the master event
is not.
On the critical paths (add, del, read), sharing PMU counters doesn't
increase the complexity. Helper functions event_pmu_[add|del|read]() are
introduced to cover these cases. All these functions have O(1) time
complexity.
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Arnaldo Carvalho de Melo <acme(a)redhat.com>
Cc: Jiri Olsa <jolsa(a)kernel.org>
Cc: Alexey Budankov <alexey.budankov(a)linux.intel.com>
Cc: Namhyung Kim <namhyung(a)kernel.org>
Cc: Tejun Heo <tj(a)kernel.org>
Signed-off-by: Song Liu <songliubraving(a)fb.com>
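To make the sharing scheme concrete, here is a heavily simplified userspace C sketch of the bookkeeping described above: a newly added event either finds a compatible event in its context and records that event's master as its dup_master, or becomes its own master, and reads then go through the master's counter in O(1). The structure and function names are assumptions modelled on the changelog, not the kernel's perf internals; scheduling state (including the new PERF_EVENT_STATE_ENABLED) and the teardown done by perf_event_remove_dup() are left out.
#include <stdio.h>

/* Illustrative stand-in for struct perf_event, reduced to the fields the
 * sharing logic needs.  Names follow the changelog, not the kernel. */
struct event {
	unsigned long long config;	/* "compatible" == same config here */
	struct event *dup_master;	/* owner of the hardware counter */
	unsigned long long hw_count;	/* only the master advances this */
	unsigned long long base;	/* master count when this event joined */
};

struct ctx {
	struct event *events[8];
	int nr;
};

/* Sketch of perf_event_setup_dup(): reuse a compatible event's master,
 * or make the new event its own master. */
static void setup_dup(struct ctx *ctx, struct event *ev)
{
	int i;

	for (i = 0; i < ctx->nr; i++) {
		if (ctx->events[i]->config == ev->config) {
			ev->dup_master = ctx->events[i]->dup_master;
			ev->base = ev->dup_master->hw_count;
			ctx->events[ctx->nr++] = ev;
			return;
		}
	}
	ev->dup_master = ev;
	ev->base = 0;
	ctx->events[ctx->nr++] = ev;
}

/* Sketch of the O(1) read path: go through the master's counter. */
static unsigned long long event_read(struct event *ev)
{
	return ev->dup_master->hw_count - ev->base;
}

int main(void)
{
	struct ctx ctx = { .nr = 0 };
	struct event a = { .config = 0x3c };
	struct event b = { .config = 0x3c };

	setup_dup(&ctx, &a);
	a.hw_count = 1000;		/* the shared counter ticks */
	setup_dup(&ctx, &b);		/* b shares a's counter from here on */
	a.hw_count = 1500;
	printf("a=%llu b=%llu\n", event_read(&a), event_read(&b));
	return 0;
}
The WARN_ON_ONCE() reported below fires in perf_event_remove_dup() on the task-exit path, which suggests the master/dup bookkeeping gets out of sync when events are torn down.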
6b8a794678 Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
9dc510db74 perf: Sharing PMU counters across compatible events
+--------------------------------------------------------------------------------------+------------+------------+
| | 6b8a794678 | 9dc510db74 |
+--------------------------------------------------------------------------------------+------------+------------+
| boot_successes | 841 | 251 |
| boot_failures | 2 | 37 |
| BUG:kernel_hang_in_test_stage | 1 | |
| Mem-Info | 1 | 2 |
| WARNING:at_kernel/events/core.c:#event_function | 0 | 22 |
| EIP:event_function | 0 | 22 |
| WARNING:at_kernel/events/core.c:#event_sched_out | 0 | 21 |
| EIP:event_sched_out | 0 | 21 |
| WARNING:at_kernel/events/core.c:#perf_group_detach | 0 | 21 |
| EIP:perf_group_detach | 0 | 21 |
| WARNING:at_kernel/events/core.c:#list_del_event | 0 | 21 |
| EIP:list_del_event | 0 | 21 |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 10 |
| Oops:#[##] | 0 | 10 |
| EIP:ctx_sched_out | 0 | 10 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 10 |
| WARNING:at_kernel/events/core.c:#perf_event_remove_dup | 0 | 1 |
| EIP:perf_event_remove_dup | 0 | 1 |
| WARNING:possible_circular_locking_dependency_detected | 0 | 3 |
| BUG:kernel_hang_in_early-boot_stage,last_printk:Probing_EDD(edd=off_to_disable)...ok | 0 | 2 |
| invoked_oom-killer:gfp_mask=0x | 0 | 1 |
+--------------------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[child0:625] quotactl (131) returned ENOSYS, marking as inactive.
[child0:625] getresuid16 (165) returned ENOSYS, marking as inactive.
[child0:625] setresuid16 (164) returned ENOSYS, marking as inactive.
[ 15.893642] ------------[ cut here ]------------
[ 15.895385] WARNING: CPU: 0 PID: 648 at kernel/events/core.c:1755 perf_event_remove_dup+0x180/0x1b0
[ 15.898064] Modules linked in:
[ 15.898830] CPU: 0 PID: 648 Comm: trinity-subchil Not tainted 5.4.0-rc8-00109-g9dc510db74b1b #1
[ 15.900469] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 15.901575] EIP: perf_event_remove_dup+0x180/0x1b0
[ 15.902213] Code: db fe ff ff 3b b8 fc 03 00 00 75 ee 89 88 fc 03 00 00 eb e6 8d b6 00 00 00 00 0f 0b e9 42 ff ff ff 8d b4 26 00 00 00 00 66 90 <0f> 0b e9 a0 fe ff ff 8d b4 26 00 00 00 00 66 90 c7 83 fc 03 00 00
[ 15.904798] EAX: ffffffff EBX: f12a9008 ECX: ffffffff EDX: f113d8b8
[ 15.905764] ESI: f16d5a08 EDI: f12a9008 EBP: f11c3e4c ESP: f11c3e34
[ 15.906664] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00010082
[ 15.907645] CR0: 80050033 CR2: 08063c76 CR3: 17a98000 CR4: 00340690
[ 15.908548] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 15.909457] DR6: fffe0ff0 DR7: 00000600
[ 15.910030] Call Trace:
[ 15.910458] list_del_event+0x39/0x100
[ 15.911107] perf_event_exit_task+0x199/0x3d0
[ 15.911716] do_exit+0x2bb/0xb70
[ 15.912160] ? find_held_lock+0x27/0xa0
[ 15.912649] ? get_signal+0x100/0xb90
[ 15.913117] do_group_exit+0x35/0x90
[ 15.913539] get_signal+0x149/0xb90
[ 15.913954] do_signal+0x24/0x540
[ 15.914373] ? finish_task_switch+0x69/0x220
[ 15.914890] ? kvm_cpu_online+0x20/0x20
[ 15.915343] exit_to_usermode_loop+0x5d/0xa0
[ 15.915980] prepare_exit_to_usermode+0xd7/0xe0
[ 15.916514] resume_userspace+0xd/0x12
[ 15.916956] EIP: 0x8063c80
[ 15.917292] Code: Bad RIP value.
[ 15.917656] EAX: b7697000 EBX: 0813d000 ECX: bf904294 EDX: 00000003
[ 15.918469] ESI: b7697000 EDI: bf9042a5 EBP: 00000003 ESP: bf90423c
[ 15.919318] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00010282
[ 15.920252] irq event stamp: 664
[ 15.920671] hardirqs last enabled at (663): [<d6ae619b>] __call_rcu+0x14b/0x410
[ 15.921613] hardirqs last disabled at (664): [<d746c0a1>] _raw_spin_lock_irq+0x11/0x40
[ 15.922788] softirqs last enabled at (0): [<d6a598b8>] copy_process+0x758/0x1fe0
[ 15.923842] softirqs last disabled at (0): [<00000000>] 0x0
[ 15.924701] ---[ end trace 74f78136c8701f9c ]---
[ 15.925237]
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 43a130289bccadcd56d4d500da6bc73ba4e2f653 219d54332a09e8d8741c1e1982f5eae56099de85 --
git bisect good 460c303a1b37f3aa67e4b08816f038c33a44271c # 22:30 G 307 0 23 28 Merge 'linux-review/Vladimir-Oltean/Configure-the-MTU-on-DSA-switches/20191125-041645' into devel-hourly-2019113011
git bisect bad 81902474a18eb750d77579ffc736a8f26661ca15 # 23:02 B 275 4 42 42 Merge 'linux-review/Darrick-J-Wong/iomap-trace-iomap_appply-results/20191124-135146' into devel-hourly-2019113011
git bisect bad 055037050e99520ad07793fbdf04eb319142447a # 23:45 B 129 1 22 22 Merge 'linux-review/Christophe-Kerello/mtd-Use-mtd-device-name-instead-of-mtd-name-when-registering-nvmem-device/20191125-011852' into devel-hourly-2019113011
git bisect good 5b9c62240d60e33f18af9a35495dcfd79820f17b # 00:37 G 304 0 20 20 Merge 'linux-review/Olga-Kornievskaia/NFS-allow-deprecation-of-NFS-UDP-protocol/20191127-084736' into devel-hourly-2019113011
git bisect good d87824744f4d98da8ea7b862ab9871e0c872e75f # 01:18 G 306 0 18 18 Merge 'linux-review/Charles-Keepax/spi-cadence-Correct-handling-of-native-chipselect/20191127-063016' into devel-hourly-2019113011
git bisect bad cf3ff810e66ff6b5d3b96aecea83224fae6242f6 # 01:51 B 302 2 54 54 Merge 'linux-review/Nick-Desaulniers/arm-explicitly-place-fixup-in-text/20191125-015858' into devel-hourly-2019113011
git bisect good f05394a456722fe19a68315a36e773c4df9a4459 # 02:45 G 309 0 15 15 Merge 'linux-review/Marc-Gonzalez/clk-Convert-managed-get-functions-to-devm_add_action-API/20191127-045647' into devel-hourly-2019113011
git bisect good 2604a2741f43dcbe5f180a4242c9ec9c9a984633 # 04:41 G 310 0 23 23 Merge 'userns/for-next' into devel-hourly-2019113011
git bisect bad 86762966e08c6cc38c5bfdede56a6660ab315d33 # 05:17 B 95 1 16 17 Merge 'linux-review/Song-Liu/perf-Sharing-PMU-counters-across-compatible-events/20191125-021305' into devel-hourly-2019113011
git bisect bad 9dc510db74b1b873853747098e1cd66b71e63210 # 06:41 B 309 1 58 58 perf: Sharing PMU counters across compatible events
# first bad commit: [9dc510db74b1b873853747098e1cd66b71e63210] perf: Sharing PMU counters across compatible events
git bisect good 6b8a794678763130b7e7d049985008641dc494e8 # 09:12 G 910 0 69 69 Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
# extra tests with debug options
git bisect bad 9dc510db74b1b873853747098e1cd66b71e63210 # 11:18 B 17 1 2 2 perf: Sharing PMU counters across compatible events
# extra tests on head commit of linux-review/Song-Liu/perf-Sharing-PMU-counters-across-compatible-events/20191125-021305
git bisect bad 9dc510db74b1b873853747098e1cd66b71e63210 # 11:21 B 251 1 0 58 perf: Sharing PMU counters across compatible events
# bad: [9dc510db74b1b873853747098e1cd66b71e63210] perf: Sharing PMU counters across compatible events
# extra tests on revert first bad commit
git bisect good ff9b7027a44b53349b76d666b6e6f1d226433fa7 # 12:40 G 302 0 26 26 Revert "perf: Sharing PMU counters across compatible events"
# good: [ff9b7027a44b53349b76d666b6e6f1d226433fa7] Revert "perf: Sharing PMU counters across compatible events"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org Intel Corporation
86dc301f7b ("drivers/base/memory.c: cache blocks in radix tree .."): [ 1.341517] WARNING: suspicious RCU usage
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Scott-Cheloha/drivers-base-memor...
commit 86dc301f7b4815d90e3a7843ffed655d64efe445
Author: Scott Cheloha <cheloha(a)linux.vnet.ibm.com>
AuthorDate: Thu Nov 21 13:59:52 2019 -0600
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Sun Nov 24 10:45:59 2019 +0800
drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
Searching for a particular memory block by id is slow because each block
device is kept in an unsorted linked list on the subsystem bus.
Lookup is much faster if we cache the blocks in a radix tree. Memory
subsystem initialization and hotplug/hotunplug is at least a little faster
for any machine with more than ~100 blocks, and the speedup grows with
the block count.
Signed-off-by: Scott Cheloha <cheloha(a)linux.vnet.ibm.com>
Acked-by: David Hildenbrand <david(a)redhat.com>
0e4a459f56 tracing: Remove unnecessary DEBUG_FS dependency
86dc301f7b drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
+---------------------------------------------------------------------+------------+------------+
| | 0e4a459f56 | 86dc301f7b |
+---------------------------------------------------------------------+------------+------------+
| boot_successes | 8 | 0 |
| boot_failures | 25 | 11 |
| WARNING:possible_circular_locking_dependency_detected | 25 | 8 |
| WARNING:suspicious_RCU_usage | 0 | 11 |
| include/linux/radix-tree.h:#suspicious_rcu_dereference_check()usage | 0 | 11 |
+---------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 1.335279] random: get_random_bytes called from kcmp_cookies_init+0x29/0x4c with crng_init=0
[ 1.336883] ACPI: bus type PCI registered
[ 1.338295] PCI: Using configuration type 1 for base access
[ 1.340735]
[ 1.341049] =============================
[ 1.341517] WARNING: suspicious RCU usage
[ 1.342266] 5.4.0-rc5-00070-g86dc301f7b481 #1 Tainted: G T
[ 1.343494] -----------------------------
[ 1.344226] include/linux/radix-tree.h:167 suspicious rcu_dereference_check() usage!
[ 1.345516]
[ 1.345516] other info that might help us debug this:
[ 1.345516]
[ 1.346962]
[ 1.346962] rcu_scheduler_active = 2, debug_locks = 1
[ 1.348134] no locks held by swapper/0/1.
[ 1.348866]
[ 1.348866] stack backtrace:
[ 1.349525] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G T 5.4.0-rc5-00070-g86dc301f7b481 #1
[ 1.351230] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1.352720] Call Trace:
[ 1.353187] ? dump_stack+0x9a/0xde
[ 1.353507] ? node_access_release+0x19/0x19
[ 1.353507] ? walk_memory_blocks+0xe6/0x184
[ 1.353507] ? set_debug_rodata+0x20/0x20
[ 1.353507] ? link_mem_sections+0x39/0x3d
[ 1.353507] ? topology_init+0x74/0xc8
[ 1.353507] ? enable_cpu0_hotplug+0x15/0x15
[ 1.353507] ? do_one_initcall+0x13d/0x30a
[ 1.353507] ? kernel_init_freeable+0x18e/0x23b
[ 1.353507] ? rest_init+0x173/0x173
[ 1.353507] ? kernel_init+0x10/0x151
[ 1.353507] ? rest_init+0x173/0x173
[ 1.353507] ? ret_from_fork+0x3a/0x50
[ 1.410829] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[ 1.427389] cryptd: max_cpu_qlen set to 1000
[ 1.457792] ACPI: Added _OSI(Module Device)
[ 1.458615] ACPI: Added _OSI(Processor Device)
[ 1.459428] ACPI: Added _OSI(3.0 _SCP Extensions)
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 1d5f18079cb10420c4e4b67f571b998ec861a20c 219d54332a09e8d8741c1e1982f5eae56099de85 --
git bisect good 00e5729fc7309fd1da26659b7dce0fcc0b46ab7e # 23:10 G 11 0 9 9 Merge 'linux-review/Daniel-W-S-Almeida/media-dummy_dvb_fe-register-adapter-frontend/20191127-043813' into devel-hourly-2019113015
git bisect bad da795b1ad310518d9100105235c9ba3ff4ee2524 # 23:32 B 0 11 27 0 Merge 'linux-review/Krzysztof-Kozlowski/media-Fix-Kconfig-indentation/20191122-055026' into devel-hourly-2019113015
git bisect bad 53f2832a6bf7103957815cef4ad58be22255ea7a # 00:21 B 0 7 24 0 Merge 'linux-review/Luc-Van-Oostenryck/misc-xilinx_sdfec-add-missing-__user-annotation/20191124-052749' into devel-hourly-2019113015
git bisect good 03a04e32f16a2c4bd267b48e8ab4aaf71c507050 # 00:55 G 11 0 8 8 Merge 'linux-review/Helmut-Grohne/mdio_bus-revert-inadvertent-error-code-change/20191124-171049' into devel-hourly-2019113015
git bisect bad 03403c59a591b12556bd0db1243eb0503112a0df # 02:22 B 0 10 26 0 Merge 'linux-review/Navid-Emamdoost/EDAC-Fix-memory-leak-in-i5100_init_one/20191124-103621' into devel-hourly-2019113015
git bisect good fae5c4b30fda35cbcc8389f432dcaac20f9b3a12 # 03:02 G 11 0 7 7 Merge 'linux-review/David-Gow/kunit-Always-print-actual-pointer-values-in-asserts/20191124-131742' into devel-hourly-2019113015
git bisect bad 1e0c725d2bb21789330a2fe1a360c37ae753eb18 # 04:01 B 0 10 26 0 Merge 'linux-review/Scott-Cheloha/drivers-base-memory-c-cache-blocks-in-radix-tree-to-accelerate-lookup/20191124-104557' into devel-hourly-2019113015
git bisect good 0902b87758e830e857e11f1538e50b218743f4b0 # 04:29 G 10 0 7 7 Merge 'linux-review/Julio-Faracco/drivers-net-virtio_net-Implement-a-dev_watchdog-handler/20191124-135051' into devel-hourly-2019113015
git bisect good 2d0c894673b55b51915ea858eabcf7c97c9b8ccb # 05:33 G 10 0 6 6 Merge 'linux-review/Maciej-enczykowski/net-ipv6-IPV6_TRANSPARENT-check-NET_RAW-prior-to-NET_ADMIN/20191124-120121' into devel-hourly-2019113015
git bisect good 3c095c856fb84271e56e1869aface37af1b078f1 # 05:58 G 11 0 10 10 Merge 'linux-review/Navid-Emamdoost/Bluetooth-Fix-memory-leak-in-hci_connect_le_scan/20191124-111255' into devel-hourly-2019113015
git bisect bad 86dc301f7b4815d90e3a7843ffed655d64efe445 # 06:39 B 0 11 27 0 drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
# first bad commit: [86dc301f7b4815d90e3a7843ffed655d64efe445] drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
git bisect good 0e4a459f56c32d3e52ae69a4b447db2f48a65f44 # 07:25 G 33 0 25 25 tracing: Remove unnecessary DEBUG_FS dependency
# extra tests with debug options
git bisect good 86dc301f7b4815d90e3a7843ffed655d64efe445 # 09:13 G 11 0 0 0 drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
# extra tests on head commit of linux-review/Scott-Cheloha/drivers-base-memory-c-cache-blocks-in-radix-tree-to-accelerate-lookup/20191124-104557
git bisect bad 86dc301f7b4815d90e3a7843ffed655d64efe445 # 09:16 B 0 11 27 0 drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
# bad: [86dc301f7b4815d90e3a7843ffed655d64efe445] drivers/base/memory.c: cache blocks in radix tree to accelerate lookup
# extra tests on revert first bad commit
git bisect good 65bba365eabe04a22e5d68012a45b92eee26860c # 10:18 G 10 0 9 9 Revert "drivers/base/memory.c: cache blocks in radix tree to accelerate lookup"
# good: [65bba365eabe04a22e5d68012a45b92eee26860c] Revert "drivers/base/memory.c: cache blocks in radix tree to accelerate lookup"
---
0-DAY kernel test infrastructure              Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org        Intel Corporation
bf8e602186 ("tracing: Do not create tracefs files if tracefs .."): [ 2.789121] WARNING: CPU: 1 PID: 1 at kernel/trace/ftrace.c:989 ftrace_init_tracefs_toplevel
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit bf8e602186ec402ed937b2cbd6c39a34c0029757
Author: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
AuthorDate: Fri Oct 11 20:41:41 2019 -0400
Commit: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
CommitDate: Sat Oct 12 20:49:07 2019 -0400
tracing: Do not create tracefs files if tracefs lockdown is in effect
If on boot up, lockdown is activated for tracefs, don't even bother creating
the files. This can also prevent instances from being created if lockdown is
in effect.
Link: http://lkml.kernel.org/r/CAHk-=whC6Ji=fWnjh2+eS4b15TnbsS4VPVtvBOwCy1jjEG_...
Suggested-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
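For context, the behaviour the commit message describes amounts to checking the tracefs lockdown state before a control file is ever created, rather than only rejecting opens as 17911ff38a already does. The sketch below only illustrates that idea; the helper name trace_create_file_if_unlocked is made up for illustration and the real patch may place the check differently. security_locked_down() and LOCKDOWN_TRACEFS are the kernel's existing lockdown interfaces, and tracefs_create_file() is the existing creation helper:

    #include <linux/fs.h>
    #include <linux/security.h>
    #include <linux/tracefs.h>

    /*
     * Illustrative sketch only: skip creating a tracefs file entirely
     * when tracing is locked down.  A caller that gets NULL back has no
     * file to register, which is what produces the "Could not create
     * tracefs ... entry" / "Could not register function stat" lines in
     * the dmesg below.
     */
    static struct dentry *
    trace_create_file_if_unlocked(const char *name, umode_t mode,
                                  struct dentry *parent, void *data,
                                  const struct file_operations *fops)
    {
            /* security_locked_down() returns -EPERM when denied, 0 otherwise */
            if (security_locked_down(LOCKDOWN_TRACEFS))
                    return NULL;

            return tracefs_create_file(name, mode, parent, data, fops);
    }

With creation skipped at boot, callers in kernel/trace that still warn on a missing dentry would trip, which is presumably why the ftrace_init_tracefs_toplevel and create_trace_option_files warnings show up in the bisect table below.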
17911ff38a tracing: Add locked_down checks to the open calls of files created for tracefs
bf8e602186 tracing: Do not create tracefs files if tracefs lockdown is in effect
81b6b96475 Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux; tag 'dma-mapping-5.5' of git://git.infradead.org/users/hch/dma-mapping
419593dad8 Add linux-next specific files for 20191129
+----------------------------------------------------------------+------------+------------+------------+---------------+
| | 17911ff38a | bf8e602186 | 81b6b96475 | next-20191129 |
+----------------------------------------------------------------+------------+------------+------------+---------------+
| boot_successes | 31 | 0 | 0 | 0 |
| boot_failures | 0 | 11 | 11 | 11 |
| WARNING:at_kernel/trace/ftrace.c:#ftrace_init_tracefs_toplevel | 0 | 11 | 11 | 11 |
| EIP:ftrace_init_tracefs_toplevel | 0 | 11 | 11 | 11 |
| WARNING:at_kernel/trace/trace.c:#create_trace_option_files | 0 | 11 | 11 | 11 |
| EIP:create_trace_option_files | 0 | 11 | 11 | 11 |
| Mem-Info | 0 | 1 | | |
+----------------------------------------------------------------+------------+------------+------------+---------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 2.780405] Lockdown: swapper/0: use of tracefs is restricted; see man kernel_lockdown.7
[ 2.782605] Could not create tracefs 'set_graph_notrace' entry
[ 2.784297] Lockdown: swapper/0: use of tracefs is restricted; see man kernel_lockdown.7
[ 2.786368] ------------[ cut here ]------------
[ 2.787647] Could not register function stat for cpu 0
[ 2.789121] WARNING: CPU: 1 PID: 1 at kernel/trace/ftrace.c:989 ftrace_init_tracefs_toplevel+0x184/0x1d8
[ 2.792367] Modules linked in:
[ 2.793074] CPU: 1 PID: 1 Comm: swapper/0 Tainted: G T 5.4.0-rc2-00007-gbf8e602186ec4 #1
[ 2.793074] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 2.793074] EIP: ftrace_init_tracefs_toplevel+0x184/0x1d8
[ 2.793074] Code: fe 85 c0 0f 84 58 ff ff ff 31 c9 ba 01 00 00 00 8b 5d f0 b8 e0 8a c7 42 6a 01 e8 b8 23 42 fe 53 68 94 67 8f 42 e8 7d f5 2f fe <0f> 0b 31 c9 ba 01 00 00 00 6a 01 b8 c8 8a c7 42 e8 98 23 42 fe 8b
[ 2.793074] EAX: 0000002a EBX: 00000000 ECX: 00000000 EDX: 00000000
[ 2.793074] ESI: 42dde7fc EDI: f55eab2c EBP: 402cbf00 ESP: 402cbedc
[ 2.793074] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 EFLAGS: 00010202
[ 2.793074] CR0: 80050033 CR2: ffffffff CR3: 02fd9000 CR4: 00040690
[ 2.793074] Call Trace:
[ 2.793074] ? register_tracer+0x170/0x170
[ 2.793074] ? tracer_init_tracefs+0x91/0x1e7
[ 2.793074] ? do_one_initcall+0x75/0x3b8
[ 2.793074] ? trace_initcall_level+0xe1/0x101
[ 2.793074] ? kernel_init_freeable+0xf5/0x192
[ 2.793074] ? kernel_init_freeable+0x114/0x192
[ 2.793074] ? rest_init+0x140/0x140
[ 2.793074] ? kernel_init+0x10/0x110
[ 2.793074] ? schedule_tail_wrapper+0x9/0xc
[ 2.793074] ? ret_from_fork+0x33/0x40
[ 2.793074] _warn_unseeded_randomness: 123 callbacks suppressed
[ 2.793074] random: get_random_bytes called from print_oops_end_marker+0x57/0x70 with crng_init=0
[ 2.793074] ---[ end trace f0d030514b1a50e3 ]---
[ 2.827821] Lockdown: swapper/0: use of tracefs is restricted; see man kernel_lockdown.7
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start d196292990fce11fd7bb7585a782b3f4b34429e1 54ecb8f7028c5eb3d740bb82b0f1d90f2df63c5c --
git bisect bad 5d682fa3d8943c19a632ebeaf70e8b9e41c78a5b # 13:43 B 0 3 19 0 gpio: xgs-iproc: Fix section mismatch on device tree match table
git bisect good 698b8eeaed7287970fc2b6d322618850fd1b1e6c # 14:31 G 11 0 0 0 gpio/mpc8xxx: change irq handler from chained to normal
git bisect good 921d6c32b6f86c48e06667ce2f8c50ca45bfa212 # 15:03 G 10 0 0 0 MAINTAINERS: Add entry for RDA Micro GPIO driver and binding
git bisect good 6a41b6c5fc20abced88fa0eed42ae5e5cb70b280 # 15:31 G 10 0 0 0 gpio: Add xgs-iproc driver
git bisect bad c196924277ea82200d4c4fd9537c71390b96f247 # 16:04 B 0 1 18 1 Merge tag 'v5.4-rc6' into devel
git bisect bad 998d75510e373aab5644d777d3b058312d550159 # 16:49 B 0 3 19 0 Merge branch 'akpm' (patches from Andrew)
git bisect good c6ad7c3ce9800e91d6cc6d2f6f566339ebac5656 # 17:28 G 10 0 0 0 Merge tag '5.4-rc2-smb3' of git://git.samba.org/sfrench/cifs-2.6
git bisect good 71b1b5532b9c58f260911ee59c7b3007d6d673a5 # 17:48 G 11 0 0 0 Merge tag 'fixes-for-5.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux
git bisect bad ad32fd7426e192cdf5368eda23a6482ff83c2022 # 18:16 B 0 8 24 0 Merge tag 'xtensa-20191017' of git://github.com/jcmvbkbc/linux-xtensa
git bisect bad 8625732e7712882bd14e1fce962bdc3c315acd41 # 18:36 B 0 5 21 0 Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
git bisect bad 3c52b0af059e11a063970aed1ad143b9284a79c7 # 19:00 B 0 1 17 0 lib/generic-radix-tree.c: add kmemleak annotations
git bisect bad d303de1fcf344ff7c15ed64c3f48a991c9958775 # 19:23 B 0 1 17 0 tracing: Initialize iter->seq after zeroing in tracing_read_pipe()
git bisect good 8530dec63e7b486e3761cc3d74a22de301845ff5 # 20:15 G 11 0 0 0 tracing: Add tracing_check_open_get_tr()
git bisect bad 7f8557b88d6aa5bf31f25f6013d81355a1b1d48d # 21:01 B 0 11 27 0 recordmcount: Fix nop_mcount() function
git bisect bad bf8e602186ec402ed937b2cbd6c39a34c0029757 # 22:05 B 0 4 20 0 tracing: Do not create tracefs files if tracefs lockdown is in effect
git bisect good 17911ff38aa58d3c95c07589dbf5d3564c4cf3c5 # 22:36 G 11 0 0 0 tracing: Add locked_down checks to the open calls of files created for tracefs
# first bad commit: [bf8e602186ec402ed937b2cbd6c39a34c0029757] tracing: Do not create tracefs files if tracefs lockdown is in effect
git bisect good 17911ff38aa58d3c95c07589dbf5d3564c4cf3c5 # 22:41 G 31 0 0 0 tracing: Add locked_down checks to the open calls of files created for tracefs
# extra tests with debug options
git bisect good bf8e602186ec402ed937b2cbd6c39a34c0029757 # 23:07 G 11 0 0 0 tracing: Do not create tracefs files if tracefs lockdown is in effect
# extra tests on head commit of linus/master
git bisect bad 81b6b96475ac7a4ebfceae9f16fb3758327adbfe # 23:57 B 0 2 18 0 Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux; tag 'dma-mapping-5.5' of git://git.infradead.org/users/hch/dma-mapping
# bad: [81b6b96475ac7a4ebfceae9f16fb3758327adbfe] Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux; tag 'dma-mapping-5.5' of git://git.infradead.org/users/hch/dma-mapping
# extra tests on revert first bad commit
git bisect good 38cce32718f755a5a296c21f7ee4e20a981a8a82 # 00:41 G 10 0 0 0 Revert "tracing: Do not create tracefs files if tracefs lockdown is in effect"
# good: [38cce32718f755a5a296c21f7ee4e20a981a8a82] Revert "tracing: Do not create tracefs files if tracefs lockdown is in effect"
# extra tests on linus/master
# duplicated: [81b6b96475ac7a4ebfceae9f16fb3758327adbfe] Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux; tag 'dma-mapping-5.5' of git://git.infradead.org/users/hch/dma-mapping
# extra tests on linux-next/master
git bisect bad 419593dad8439007452bb6f267861863b572c520 # 00:46 B 0 11 27 0 Add linux-next specific files for 20191129
# bad: [419593dad8439007452bb6f267861863b572c520] Add linux-next specific files for 20191129
---
0-DAY kernel test infrastructure              Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org        Intel Corporation