[btrfs] 3b54a0a703: WARNING:at_fs/btrfs/inode.c:#btrfs_finish_ordered_io[btrfs]
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 3b54a0a703f17d2b1317d24beefcdcca587a7667 ("[PATCH v3 3/5] btrfs: Detect unbalanced tree with empty leaf before crashing btree operations")
url: https://github.com/0day-ci/linux/commits/Qu-Wenruo/btrfs-Enhanced-runtime...
base: https://git.kernel.org/cgit/linux/kernel/git/kdave/linux.git for-next
in testcase: fio-basic
with the following parameters (see the illustrative fio sketch below):
runtime: 300s
disk: 1SSD
fs: btrfs
nr_task: 100%
test_size: 128G
rw: write
bs: 4k
ioengine: sync
cpufreq_governor: performance
ucode: 0x400002c
fs2: nfsv4
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
on test machine: 96 threads Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz with 128G memory
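For orientation, the parameters above map roughly onto the fio invocation sketched below. This is only an illustrative approximation: the device path, mount point and per-job sizing are placeholders, and the authoritative configuration is the attached job.yaml as executed by lkp, which also layers the nfsv4 export (fs2) on top of the btrfs mount.
    # Illustrative sketch only -- device and mount point are placeholders;
    # the real setup comes from the attached job.yaml run through lkp.
    mkfs.btrfs -f /dev/nvme0n1                  # disk: 1SSD, fs: btrfs
    mount /dev/nvme0n1 /mnt/btrfs
    fio --name=write --directory=/mnt/btrfs \
        --rw=write --bs=4k --ioengine=sync \
        --size=128G --runtime=300 --time_based \
        --numjobs=$(nproc)                      # nr_task: 100% -> one fio job per CPU thread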
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+----------------------------------------------------------------------------+------------+------------+
| | 2703206ff5 | 3b54a0a703 |
+----------------------------------------------------------------------------+------------+------------+
| boot_successes | 9 | 0 |
| boot_failures | 4 | |
| Kernel_panic-not_syncing:VFS:Unable_to_mount_root_fs_on_unknown-block(#,#) | 4 | |
+----------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@intel.com>
[ 50.226906] WARNING: CPU: 71 PID: 500 at fs/btrfs/inode.c:2687 btrfs_finish_ordered_io+0x70a/0x820 [btrfs]
[ 50.236913] Modules linked in: rpcsec_gss_krb5 nfsv4 dns_resolver nfsd auth_rpcgss dm_mod dax_pmem_compat nd_pmem device_dax nd_btt dax_pmem_core btrfs sr_mod blake2b_generic xor cdrom sd_mod zstd_decompress sg zstd_compress raid6_pq libcrc32c intel_rapl_msr intel_rapl_common skx_edac x86_pkg_temp_thermal ipmi_ssif intel_powerclamp coretemp kvm_intel kvm irqbypass ast crct10dif_pclmul drm_vram_helper crc32_pclmul crc32c_intel acpi_ipmi drm_ttm_helper ghash_clmulni_intel ttm rapl drm_kms_helper intel_cstate syscopyarea sysfillrect nvme sysimgblt intel_uncore fb_sys_fops nvme_core ahci libahci t10_pi drm mei_me ioatdma libata mei ipmi_si joydev dca wmi ipmi_devintf ipmi_msghandler nfit libnvdimm ip_tables
[ 50.301669] CPU: 71 PID: 500 Comm: kworker/u193:5 Not tainted 5.8.0-rc7-00165-g3b54a0a703f17 #1
[ 50.310904] Workqueue: btrfs-endio-write btrfs_work_helper [btrfs]
[ 50.317626] RIP: 0010:btrfs_finish_ordered_io+0x70a/0x820 [btrfs]
[ 50.324255] Code: 48 0a 00 00 02 72 25 41 83 ff fb 0f 84 f2 00 00 00 41 83 ff e2 0f 84 e8 00 00 00 44 89 fe 48 c7 c7 70 1c 2b c1 e8 58 ae ed bf <0f> 0b 44 89 f9 ba 7f 0a 00 00 48 c7 c6 50 47 2a c1 48 89 df e8 15
[ 50.344116] RSP: 0018:ffffc90007a83d58 EFLAGS: 00010282
[ 50.349923] RAX: 0000000000000000 RBX: ffff888a93ca5ea0 RCX: 0000000000000000
[ 50.357656] RDX: ffff8890401e82a0 RSI: ffff8890401d7f60 RDI: ffff8890401d7f60
[ 50.365385] RBP: ffff8890300ab8a8 R08: 0000000000000bd4 R09: 0000000000000000
[ 50.373133] R10: 0000000000000001 R11: ffffffffc0714060 R12: 000000000f83e000
[ 50.381060] R13: 000000000f83ffff R14: ffff888fb6c39968 R15: 00000000ffffff8b
[ 50.388824] FS: 0000000000000000(0000) GS:ffff8890401c0000(0000) knlGS:0000000000000000
[ 50.397545] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 50.404300] CR2: 00007feacc500f98 CR3: 0000000f74422006 CR4: 00000000007606e0
[ 50.412477] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 50.420281] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 50.428082] PKRU: 55555554
[ 50.431451] Call Trace:
[ 50.434570] ? update_curr+0xc0/0x200
[ 50.438919] ? newidle_balance+0x232/0x3e0
[ 50.443700] ? __wake_up_common+0x80/0x180
[ 50.448491] btrfs_work_helper+0xc9/0x400 [btrfs]
[ 50.453880] ? __schedule+0x378/0x860
[ 50.458218] process_one_work+0x1b5/0x3a0
[ 50.462917] worker_thread+0x50/0x3c0
[ 50.467262] ? process_one_work+0x3a0/0x3a0
[ 50.472148] kthread+0x114/0x160
[ 50.476084] ? kthread_park+0xa0/0xa0
[ 50.480445] ret_from_fork+0x1f/0x30
[ 50.484731] ---[ end trace cc096c1a2068030e ]---
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Oliver Sang
6abbedef68 ("mtd: rawnand: Use the ECC framework OOB layouts"): BUG: kernel reboot-without-warning in test stage
by kernel test robot
Greetings,
0day kernel testing robot got the dmesg below, and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/jirislaby/linux.git devel
commit 6abbedef68c4abc71c9505e49f2f57d357068cc4
Author: Miquel Raynal <miquel.raynal@bootlin.com>
AuthorDate: Thu Aug 27 10:52:05 2020 +0200
Commit: Miquel Raynal <miquel.raynal@bootlin.com>
CommitDate: Fri Sep 11 18:55:07 2020 +0200
mtd: rawnand: Use the ECC framework OOB layouts
No need to have our own in the raw NAND core.
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20200827085208.16276-18-miquel.raynal@b...
23eb0a866a mtd: rawnand: Make use of the ECC framework
6abbedef68 mtd: rawnand: Use the ECC framework OOB layouts
64c52a71e7 8250_tegra: clean up tegra_uart_handle_break
+-----------------------------------------------------------+------------+------------+------------+
| | 23eb0a866a | 6abbedef68 | 64c52a71e7 |
+-----------------------------------------------------------+------------+------------+------------+
| boot_successes | 16 | 0 | 0 |
| boot_failures | 30 | 11 | 2 |
| WARNING:at_kernel/rcu/rcutorture.c:#rcutorture_oom_notify | 23 | | |
| EIP:rcutorture_oom_notify | 23 | | |
| BUG:kernel_NULL_pointer_dereference,address | 22 | 1 | |
| Oops:#[##] | 22 | 1 | |
| EIP:rcu_torture_fwd_cb_hist | 21 | | |
| Kernel_panic-not_syncing:Fatal_exception | 22 | | |
| invoked_oom-killer:gfp_mask=0x | 5 | | |
| Mem-Info | 6 | | |
| EIP:__get_user_4 | 1 | | |
| EIP:__put_user_4 | 3 | | |
| BUG:kernel_timeout_in_test_stage | 1 | | |
| INFO:trying_to_register_non-static_key | 1 | | |
| EIP:do_raw_spin_trylock | 1 | | |
| EIP:__copy_user_intel | 1 | | |
| UBSAN:shift-out-of-bounds_in_arch/x86/kernel/vm#_32.c | 1 | | |
| EIP:memcpy | 0 | 1 | |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 1 | |
| BUG:kernel_reboot-without-warning_in_test_stage | 0 | 10 | 2 |
+-----------------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp@intel.com>
[ 6.491023] random: mountall: uninitialized urandom read (12 bytes read)
[ 6.602848] udevd[195]: starting version 175
[ 6.615912] sysctl (190) used greatest stack depth: 6200 bytes left
[ 6.671749] init: plymouth-log main process (205) terminated with status 1
[ 6.787488] udevadm (201) used greatest stack depth: 6124 bytes left
BUG: kernel reboot-without-warning in test stage
Kboot worker: lkp-worker20
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start f965d3ec86fa89285db0fbb983da76ba9c398efa v5.8 --
git bisect good 326e311b849426a95cac0149406efb2bbd13fa65 # 10:47 G 10 0 6 8 Merge tag 'pm-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
git bisect good 7fea526ff18ce71afcbaf7378e177483f0c05400 # 12:38 G 10 0 4 6 Merge remote-tracking branch 'reset/reset/next' into master
git bisect bad 6c3ba3a61e4c7f840feec094dd82acbd5d94c6f2 # 17:27 B 0 1 14 4 Merge remote-tracking branch 'sound/for-next' into master
git bisect good 1c163b6e4bd957c486c3e1458e0daea9a2f7b47d # 23:48 G 10 0 4 4 Merge remote-tracking branch 'printk/for-next' into master
git bisect good 83a28da3a8a95f31b8b6a1bb0895a33ce6ae322f # 00:18 G 10 0 3 3 Merge remote-tracking branch 'dlm/next' into master
git bisect bad 33922456d8ca6f15ca352ceb303d1d2330df9234 # 00:31 B 0 2 12 1 Merge remote-tracking branch 'crypto/master' into master
git bisect good ca420c03055510aec61b7cc0a6ff1c8cdee94a76 # 01:26 G 10 0 3 3 Merge remote-tracking branch 'bpf-next/master' into master
git bisect good 121c7c75c7211f27c1d4b86e4206e5a41c619885 # 02:19 G 10 0 3 5 Merge remote-tracking branch 'gfs2/for-next' into master
git bisect good 3f3d8e6771c48ce428f6a2dab6e1248c79b8b729 # 02:53 G 10 0 5 5 Merge remote-tracking branch 'mtd/mtd/next' into master
git bisect bad a20c0c3db2c9a989c33ef103d110aac38a5b8a3f # 03:09 B 0 1 10 0 Merge remote-tracking branch 'nand/nand/next' into master
git bisect bad abfaaa267771ebecda3aedb8ec1d4c3682ab04a9 # 03:20 B 0 1 10 0 mtd: rawnand: atmel: Drop redundant nand_read_page_op()
git bisect good 8f27947f2ea51f14340c5743ed6c865edaaba372 # 04:20 G 10 0 6 7 mtd: nand: Create a helper to extract the ECC configuration
git bisect good 23eb0a866a1b001f0912f47caecf73187036acbe # 05:15 G 10 0 9 9 mtd: rawnand: Make use of the ECC framework
git bisect bad df0f8f3ef62db1c97a4b30379077141a50c1e175 # 05:27 B 0 1 10 0 mtd: rawnand: Use the ECC framework user input parsing bits
git bisect bad ee6e6cd2766eced50c3294a2eeb9753355e821e2 # 05:38 B 0 1 10 0 mtd: rawnand: Use the ECC framework nand_ecc_is_strong_enough() helper
git bisect bad 6abbedef68c4abc71c9505e49f2f57d357068cc4 # 05:58 B 1 1 1 1 mtd: rawnand: Use the ECC framework OOB layouts
# first bad commit: [6abbedef68c4abc71c9505e49f2f57d357068cc4] mtd: rawnand: Use the ECC framework OOB layouts
git bisect good 23eb0a866a1b001f0912f47caecf73187036acbe # 07:38 G 33 0 23 33 mtd: rawnand: Make use of the ECC framework
# extra tests with debug options
git bisect bad 6abbedef68c4abc71c9505e49f2f57d357068cc4 # 08:11 B 0 1 10 0 mtd: rawnand: Use the ECC framework OOB layouts
# extra tests on head commit of jirislaby/devel
git bisect bad 64c52a71e74ef6724aa77c6e20cd93d124b76409 # 12:01 B 0 1 11 1 8250_tegra: clean up tegra_uart_handle_break
# bad: [64c52a71e74ef6724aa77c6e20cd93d124b76409] 8250_tegra: clean up tegra_uart_handle_break
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org
9af03886e0 ("mm: convert find_get_entry to return the head page"): BUG: kernel NULL pointer dereference, address: 00000004
by kernel test robot
Greetings,
0day kernel testing robot got the dmesg below, and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 9af03886e0a270fe8fd044c9f1094d7586411293
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
AuthorDate: Mon Sep 14 14:57:17 2020 +1000
Commit: Stephen Rothwell <sfr@canb.auug.org.au>
CommitDate: Mon Sep 14 14:57:17 2020 +1000
mm: convert find_get_entry to return the head page
There are only four callers remaining of find_get_entry().
get_shadow_from_swap_cache() only wants to see shadow entries and doesn't
care about which page is returned. Push the find_subpage() call into
find_lock_entry(), find_get_incore_page() and pagecache_get_page().
Link: https://lkml.kernel.org/r/20200910183318.20139-7-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
44ff4aa43e i915: use find_lock_page instead of find_lock_entry
9af03886e0 mm: convert find_get_entry to return the head page
+---------------------------------------------+------------+------------+
| | 44ff4aa43e | 9af03886e0 |
+---------------------------------------------+------------+------------+
| boot_successes | 36 | 5 |
| boot_failures | 0 | 10 |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 10 |
| Oops:#[##] | 0 | 10 |
| EIP:find_get_incore_page | 0 | 10 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 10 |
+---------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp@intel.com>
[child2:497] semget (387) returned ENOSYS, marking as inactive.
[child2:494] clock_settime (398) returned ENOSYS, marking as inactive.
[child2:494] semget (387) returned ENOSYS, marking as inactive.
[child2:494] delete_module (129) returned ENOSYS, marking as inactive.
[child2:494] clock_nanosleep (401) returned ENOSYS, marking as inactive.
[ 20.211091] BUG: kernel NULL pointer dereference, address: 00000004
[ 20.211813] #PF: supervisor read access in kernel mode
[ 20.212428] #PF: error_code(0x0000) - not-present page
[ 20.213027] *pde = 00000000
[ 20.213408] Oops: 0000 [#1] SMP
[ 20.213812] CPU: 1 PID: 494 Comm: trinity-c2 Not tainted 5.9.0-rc5-00094-g9af03886e0a270 #1
[ 20.214764] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 20.215715] EIP: find_get_incore_page+0x12/0xc0
[ 20.216253] Code: ba 14 bf bf c1 89 d8 e8 5c f7 fd ff 0f 0b 8d b4 26 00 00 00 00 8d 76 00 55 89 e5 56 89 c6 53 e8 c4 c5 fb ff 89 c3 a8 01 75 1e <8b> 40 04 a8 01 0f 85 8b 00 00 00 8d 65 f8 89 d8 5b 5e 5d c3 8d b4
[ 20.218306] EAX: 00000000 EBX: 00000000 ECX: f3770694 EDX: f3770040
[ 20.219012] ESI: f3787768 EDI: f42df238 EBP: eb5d7e8c ESP: eb5d7e84
[ 20.219715] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068 EFLAGS: 00010246
[ 20.220474] CR0: 80050033 CR2: 00000004 CR3: 2becd000 CR4: 00000690
[ 20.221186] Call Trace:
[ 20.221525] mincore_page+0x9/0x80
[ 20.221952] __mincore_unmapped_range+0x50/0xa0
[ 20.222485] mincore_pte_range+0xad/0x120
[ 20.222971] ? mincore_unmapped_range+0x20/0x20
[ 20.223509] walk_pgd_range+0x1e7/0x3e0
[ 20.223977] __walk_page_range+0x3e/0x80
[ 20.224462] walk_page_range+0x9d/0x140
[ 20.224937] __ia32_sys_mincore+0xec/0x2e0
[ 20.225435] do_int80_syscall_32+0x2c/0x40
[ 20.225931] entry_INT80_32+0xf0/0xf0
[ 20.226382] EIP: 0x80a3392
[ 20.226743] Code: 89 c8 c3 90 8d 74 26 00 85 c0 c7 01 01 00 00 00 75 d8 a1 c8 a9 ac 08 eb d1 66 90 66 90 66 90 66 90 66 90 66 90 66 90 90 cd 80 <c3> 8d b6 00 00 00 00 8d bc 27 00 00 00 00 8b 10 a3 f0 a9 ac 08 85
[ 20.228750] EAX: ffffffda EBX: b734b000 ECX: 0002c000 EDX: b5bd5008
[ 20.229451] ESI: 59048f58 EDI: 3de75b7c EBP: 00200000 ESP: bf892da8
[ 20.230155] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000292
[ 20.230909] Modules linked in: mpls_router qrtr ns nfc caif_socket caif phonet can_bcm af_packet xfrm_user parport_pc parport input_leds i2c_piix4 i2c_core mac_hid nf_log_ipv6
[ 20.232623] CR2: 0000000000000004
[ 20.233043] ---[ end trace 103b8b1e2fc76655 ]---
[ 20.233593] EIP: find_get_incore_page+0x12/0xc0
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start f965d3ec86fa89285db0fbb983da76ba9c398efa v5.8 --
git bisect good 326e311b849426a95cac0149406efb2bbd13fa65 # 12:50 G 11 0 0 0 Merge tag 'pm-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
git bisect good 7fea526ff18ce71afcbaf7378e177483f0c05400 # 14:43 G 10 0 0 0 Merge remote-tracking branch 'reset/reset/next' into master
git bisect good 6c3ba3a61e4c7f840feec094dd82acbd5d94c6f2 # 17:52 G 10 0 0 1 Merge remote-tracking branch 'sound/for-next' into master
git bisect good 12fb7d30613dcec940e74aaecb25de2a9154c01f # 23:31 G 11 0 3 3 Merge remote-tracking branch 'char-misc/char-misc-next' into master
git bisect good bd5828528533d2ec4b9eeec6994e96ad5cb5327c # 00:10 G 10 0 2 2 Merge remote-tracking branch 'nvdimm/libnvdimm-for-next' into master
git bisect good b8e2f1f8d5b8c1d27704317e8fa0ffa208cdc99a # 00:38 G 10 0 4 4 Merge remote-tracking branch 'pidfd/for-next' into master
git bisect good c3a37469b7e40016593f10ee96168e6050657ea6 # 01:08 G 11 0 0 0 Merge remote-tracking branch 'notifications/notifications-pipe-core' into master
git bisect good 7cde19ed6abf8cfe4789f8f4c0af78533ce98521 # 01:36 G 10 0 0 0 media: disable the Virtual DVB Driver (vidtv) for now
git bisect bad e1ab7882d9b9253e7a8de4a2ed5652f1625bd48c # 02:00 B 0 1 11 0 Merge branch 'akpm-current/current' into master
git bisect bad fdf0dfd33cefec253016832b99c13acd616126b5 # 02:16 B 1 3 1 3 mm/hugetlb: remove VM_BUG_ON(!nrg) in get_file_region_entry_from_cache()
git bisect good 02cfca70291682ffc82e1c853e4c51776a5c6a2c # 03:09 G 11 0 0 0 mm/debug_vm_pgtable/savedwrite: enable savedwrite test with CONFIG_NUMA_BALANCING
git bisect bad 0c18c5bb93c093ad8764baac87acd47e3b7c2168 # 03:22 B 0 1 11 0 mm/memory.c: fix typo in __do_fault() comment
git bisect bad c6abaefb49c93d154444eb6fa1c4016a95c51453 # 03:27 B 0 1 10 0 mm/gup: don't permit users to call get_user_pages with FOLL_LONGTERM
git bisect good 3fcbe4eb49a0406e6202e8c8c3560f30965a8e79 # 04:30 G 11 0 1 1 mm: factor find_get_incore_page out of mincore_page
git bisect bad 9af03886e0a270fe8fd044c9f1094d7586411293 # 05:14 B 0 1 10 0 mm: convert find_get_entry to return the head page
git bisect good 2daf9240ef5500a0ff69c0fc37a1239d0750e78d # 06:02 G 11 0 0 0 mm: optimise madvise WILLNEED
git bisect good 44ff4aa43ee7c662be3faed9ba03e9d9cf2b7633 # 06:58 G 11 0 0 0 i915: use find_lock_page instead of find_lock_entry
# first bad commit: [9af03886e0a270fe8fd044c9f1094d7586411293] mm: convert find_get_entry to return the head page
git bisect good 44ff4aa43ee7c662be3faed9ba03e9d9cf2b7633 # 08:34 G 33 0 0 0 i915: use find_lock_page instead of find_lock_entry
# extra tests with debug options
git bisect bad 9af03886e0a270fe8fd044c9f1094d7586411293 # 08:44 B 0 1 10 0 mm: convert find_get_entry to return the head page
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org
[btrfs] ba157afd0d: fio.write_iops -69.4% regression
by kernel test robot
Greetings,
FYI, we noticed a -69.4% regression of fio.write_iops due to commit:
commit: ba157afd0da008a5328b6cccc63a9e95084358d5 ("btrfs: switch extent buffer tree lock to rw_semaphore")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: fio-basic
on test machine: 192 threads Cooper Lake with 128G memory
with the following parameters:
runtime: 300s
disk: 1SSD
fs: btrfs
nr_task: 100%
test_size: 128G
rw: randwrite
bs: 4k
ioengine: sync
cpufreq_governor: performance
ucode: 0x86000017
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
In addition to that, the commit also has a significant impact on the following tests:
+------------------+-------------------------------------------------------------+
| testcase: change | fio-basic: fio.write_iops 18.3% improvement |
| test machine | 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory |
| test parameters | bs=4k |
| | cpufreq_governor=performance |
| | disk=1SSD |
| | fs=btrfs |
| | ioengine=sync |
| | nr_task=8 |
| | runtime=300s |
| | rw=randwrite |
| | test_size=256g |
| | ucode=0x4002f01 |
+------------------+-------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
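The attached job.yaml is authoritative; for orientation only, the parameters quoted at the top of this report correspond to job entries roughly like the following hand-written sketch:
    # Sketch built from the parameter list above, not the attached file itself
    testcase: fio-basic
    runtime: 300s
    disk: 1SSD
    fs: btrfs
    nr_task: 100%
    test_size: 128G
    rw: randwrite
    bs: 4k
    ioengine: sync
    cpufreq_governor: performance
    ucode: '0x86000017'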
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/ucode:
4k/gcc-9/performance/1SSD/btrfs/sync/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/300s/randwrite/lkp-cpx-4s1/128G/fio-basic/0x86000017
commit:
6437634e75 ("btrfs: improve device scanning messages")
ba157afd0d ("btrfs: switch extent buffer tree lock to rw_semaphore")
6437634e758641eb ba157afd0da008a5328b6cccc63
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 18% 0:4 perf-profile.children.cycles-pp.error_entry
:4 16% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
0.01 +0.3 0.29 ± 26% fio.latency_100ms%
0.18 ± 19% +3.1 3.26 ± 37% fio.latency_20ms%
0.01 +0.1 0.10 ± 23% fio.latency_250ms%
40.67 ± 7% +35.5 76.15 ± 4% fio.latency_2ms%
51.80 ± 5% -44.4 7.39 ± 15% fio.latency_4ms%
1.34 ± 33% +3.2 4.54 ± 18% fio.latency_500us%
0.02 ± 30% +0.9 0.95 ± 25% fio.latency_50ms%
301.51 +9.2% 329.30 fio.time.elapsed_time
301.51 +9.2% 329.30 fio.time.elapsed_time.max
2.035e+08 -66.6% 67987426 ± 9% fio.time.file_system_outputs
4488 ± 9% +26.7% 5687 ± 17% fio.time.involuntary_context_switches
42577 ± 4% +18.4% 50403 ± 8% fio.time.minor_page_faults
432.75 ± 10% +45.1% 627.75 ± 19% fio.time.percent_of_cpu_this_job_got
1240 ± 11% +63.7% 2029 ± 20% fio.time.system_time
66.23 ± 4% -41.4% 38.83 ± 6% fio.time.user_time
24920311 -67.7% 8050705 ± 10% fio.time.voluntary_context_switches
25443735 -66.6% 8498428 ± 9% fio.workload
331.29 -69.4% 101.23 ± 9% fio.write_bw_MBps
4038656 +129.2% 9256960 ± 28% fio.write_clat_95%_us
6389760 ± 2% +263.1% 23199744 ± 11% fio.write_clat_99%_us
2263094 +230.8% 7485498 ± 10% fio.write_clat_mean_us
2025647 ± 27% +19088.1% 3.887e+08 ± 5% fio.write_clat_stddev
84810 -69.4% 25915 ± 9% fio.write_iops
83.58 +11.5% 93.23 iostat.cpu.idle
16.29 -63.4% 5.96 ± 14% iostat.cpu.system
2.119e+09 -89.4% 2.242e+08 ± 10% cpuidle.C1.time
76523366 -89.3% 8167882 ± 10% cpuidle.C1.usage
4.604e+10 +27.4% 5.867e+10 cpuidle.C1E.time
22842594 ± 2% -47.7% 11945154 ± 12% cpuidle.POLL.time
83.54 +9.7 93.20 mpstat.cpu.all.idle%
0.01 ± 17% +0.7 0.74 mpstat.cpu.all.iowait%
0.78 +0.6 1.35 ± 2% mpstat.cpu.all.irq%
0.09 +0.0 0.10 ± 5% mpstat.cpu.all.soft%
15.46 -10.9 4.54 ± 19% mpstat.cpu.all.sys%
0.12 ± 4% -0.0 0.07 ± 5% mpstat.cpu.all.usr%
83.00 +11.7% 92.75 vmstat.cpu.id
137.25 ± 59% -100.0% 0.00 vmstat.io.bi
385780 ± 2% -70.9% 112321 ± 9% vmstat.io.bo
45306604 -43.8% 25446047 ± 5% vmstat.memory.cache
15647859 ± 4% +121.3% 34622135 ± 4% vmstat.memory.free
0.00 +1e+102% 1.00 vmstat.procs.b
30.00 ± 2% -71.7% 8.50 ± 25% vmstat.procs.r
743594 -82.8% 127922 ± 9% vmstat.system.cs
471011 -16.6% 392963 vmstat.system.in
4618442 ± 3% -57.7% 1954792 ± 4% meminfo.Active
41949 ± 6% -83.4% 6954 ± 8% meminfo.Active(anon)
4576492 ± 4% -57.4% 1947836 ± 4% meminfo.Active(file)
44841237 -44.3% 24972944 ± 5% meminfo.Cached
97784 ± 3% +93.8% 189544 meminfo.CmaFree
2335279 ± 2% -64.5% 829844 ± 5% meminfo.Dirty
39555098 -43.5% 22353755 ± 6% meminfo.Inactive
38075046 -45.2% 20872076 ± 6% meminfo.Inactive(file)
15687211 ± 4% +121.0% 34667636 ± 4% meminfo.MemFree
49908692 -38.0% 30928266 ± 4% meminfo.Memused
3151878 +30.5% 4112034 meminfo.SUnreclaim
3578933 +26.9% 4542239 meminfo.Slab
32787 ± 7% +4856.9% 1625268 meminfo.Writeback
215624 -34.2% 141784 ± 7% meminfo.max_used_kB
5991661 ± 4% -51.7% 2896244 ± 7% numa-numastat.node0.local_node
1433212 ± 16% -100.0% 0.00 numa-numastat.node0.numa_foreign
6015022 ± 4% -51.3% 2927530 ± 7% numa-numastat.node0.numa_hit
787778 ± 21% -100.0% 0.00 numa-numastat.node0.numa_miss
811159 ± 21% -96.1% 31286 numa-numastat.node0.other_node
6821367 -50.4% 3384499 ± 6% numa-numastat.node1.local_node
1110342 ± 13% -100.0% 0.00 numa-numastat.node1.numa_foreign
6837285 -50.3% 3400509 ± 6% numa-numastat.node1.numa_hit
1279414 ± 16% -100.0% 0.00 numa-numastat.node1.numa_miss
1295363 ± 17% -98.8% 16024 ± 96% numa-numastat.node1.other_node
6391347 ± 5% -55.3% 2858503 ± 8% numa-numastat.node2.local_node
1173962 ± 12% -100.0% 0.00 numa-numastat.node2.numa_foreign
6414990 ± 4% -55.1% 2882232 ± 8% numa-numastat.node2.numa_hit
1232287 ± 18% -100.0% 0.00 numa-numastat.node2.numa_miss
1255950 ± 18% -98.1% 23738 ± 56% numa-numastat.node2.other_node
6503045 -54.0% 2990821 ± 14% numa-numastat.node3.local_node
789081 ± 19% -100.0% 0.00 numa-numastat.node3.numa_foreign
6534560 -53.9% 3014769 ± 14% numa-numastat.node3.numa_hit
1207118 ± 11% -100.0% 0.00 numa-numastat.node3.numa_miss
1238645 ± 10% -98.1% 23951 ± 56% numa-numastat.node3.other_node
22853 ± 8% -65.5% 7891 ± 11% sched_debug.cfs_rq:/.exec_clock.avg
39122 ± 8% -66.5% 13100 ± 11% sched_debug.cfs_rq:/.exec_clock.max
11708 ± 12% -46.5% 6262 ± 14% sched_debug.cfs_rq:/.exec_clock.min
5990 ± 10% -77.8% 1328 ± 5% sched_debug.cfs_rq:/.exec_clock.stddev
106343 ± 8% -22.1% 82835 ± 9% sched_debug.cfs_rq:/.min_vruntime.avg
130936 ± 8% -25.7% 97314 ± 10% sched_debug.cfs_rq:/.min_vruntime.max
78544 ± 9% -22.3% 61036 ± 5% sched_debug.cfs_rq:/.min_vruntime.min
8990 ± 6% -27.9% 6483 ± 19% sched_debug.cfs_rq:/.min_vruntime.stddev
8990 ± 6% -27.9% 6483 ± 19% sched_debug.cfs_rq:/.spread0.stddev
15203 ± 91% +396.2% 75443 ± 79% sched_debug.cpu.avg_idle.min
387465 ± 6% -36.0% 248073 ± 12% sched_debug.cpu.avg_idle.stddev
39546 ± 43% +44.4% 57091 ± 8% sched_debug.cpu.max_idle_balance_cost.stddev
549938 ± 9% -78.2% 119953 ± 9% sched_debug.cpu.nr_switches.avg
914436 ± 9% -40.1% 547603 ± 11% sched_debug.cpu.nr_switches.max
314312 ± 11% -82.3% 55769 ± 16% sched_debug.cpu.nr_switches.min
128774 ± 12% -45.8% 69752 ± 8% sched_debug.cpu.nr_switches.stddev
550469 ± 9% -78.5% 118367 ± 9% sched_debug.cpu.sched_count.avg
915509 ± 9% -40.6% 544029 ± 11% sched_debug.cpu.sched_count.max
313729 ± 11% -82.6% 54446 ± 17% sched_debug.cpu.sched_count.min
129356 ± 12% -46.2% 69579 ± 8% sched_debug.cpu.sched_count.stddev
273933 ± 9% -78.5% 58953 ± 9% sched_debug.cpu.sched_goidle.avg
455450 ± 9% -40.3% 271697 ± 11% sched_debug.cpu.sched_goidle.max
156287 ± 11% -82.7% 27054 ± 17% sched_debug.cpu.sched_goidle.min
64287 ± 12% -46.0% 34743 ± 8% sched_debug.cpu.sched_goidle.stddev
281537 ± 9% -79.0% 59234 ± 9% sched_debug.cpu.ttwu_count.avg
118533 ± 10% -85.7% 16912 ± 18% sched_debug.cpu.ttwu_count.min
97169 ± 9% -47.8% 50680 ± 13% sched_debug.cpu.ttwu_count.stddev
15738 ± 9% -94.6% 855.34 ± 10% sched_debug.cpu.ttwu_local.avg
40270 ± 9% -89.5% 4245 ± 7% sched_debug.cpu.ttwu_local.max
416.35 ± 7% -32.4% 281.62 ± 11% sched_debug.cpu.ttwu_local.min
12037 ± 9% -94.9% 611.50 ± 13% sched_debug.cpu.ttwu_local.stddev
1079875 ± 17% -45.9% 583722 ± 44% numa-meminfo.node0.Active
1076988 ± 17% -45.8% 583198 ± 44% numa-meminfo.node0.Active(file)
96292 ± 37% -54.7% 43661 ± 24% numa-meminfo.node0.AnonHugePages
149398 ± 22% -45.3% 81772 ± 23% numa-meminfo.node0.AnonPages
546562 ± 13% -59.2% 222939 ± 34% numa-meminfo.node0.Dirty
10965279 ± 3% -45.9% 5935299 ± 7% numa-meminfo.node0.FilePages
9772744 ± 3% -47.1% 5165274 ± 12% numa-meminfo.node0.Inactive
9336204 -48.6% 4794928 ± 18% numa-meminfo.node0.Inactive(file)
123026 ± 9% -15.2% 104384 numa-meminfo.node0.KReclaimable
3699958 ± 10% +127.1% 8402484 ± 5% numa-meminfo.node0.MemFree
12414513 ± 3% -37.9% 7711986 ± 5% numa-meminfo.node0.MemUsed
123026 ± 9% -15.2% 104384 numa-meminfo.node0.SReclaimable
8187 ± 6% +4555.6% 381175 ± 9% numa-meminfo.node0.Writeback
1125362 ± 30% -57.2% 481280 ± 72% numa-meminfo.node1.Active
1122131 ± 30% -57.3% 478845 ± 73% numa-meminfo.node1.Active(file)
587896 ± 9% -63.5% 214711 ± 43% numa-meminfo.node1.Dirty
11683305 ± 2% -41.3% 6863685 ± 14% numa-meminfo.node1.FilePages
10360420 ± 3% -40.4% 6175208 ± 20% numa-meminfo.node1.Inactive
9468948 ± 8% -41.2% 5572196 ± 16% numa-meminfo.node1.Inactive(file)
3659930 ± 6% +122.0% 8123791 ± 7% numa-meminfo.node1.MemFree
12844396 -34.8% 8380535 ± 7% numa-meminfo.node1.MemUsed
8657 ± 5% +4423.1% 391575 ± 10% numa-meminfo.node1.Writeback
1393431 ± 23% -82.2% 247648 ± 43% numa-meminfo.node2.Active
1387554 ± 23% -82.2% 246400 ± 44% numa-meminfo.node2.Active(file)
636121 ± 14% -83.0% 107953 ± 38% numa-meminfo.node2.Dirty
11108643 ± 2% -47.1% 5880556 ± 8% numa-meminfo.node2.FilePages
9600015 ± 6% -43.1% 5465311 ± 9% numa-meminfo.node2.Inactive
9460291 ± 6% -43.0% 5388691 ± 10% numa-meminfo.node2.Inactive(file)
4091758 ± 4% +128.2% 9338976 ± 6% numa-meminfo.node2.MemFree
12412572 -42.3% 7165354 ± 8% numa-meminfo.node2.MemUsed
8507 ± 5% +5053.0% 438406 ± 12% numa-meminfo.node2.Writeback
1014511 ± 10% -36.7% 642646 ± 54% numa-meminfo.node3.Active
29864 ± 20% -90.8% 2747 ± 34% numa-meminfo.node3.Active(anon)
984646 ± 10% -35.0% 639898 ± 54% numa-meminfo.node3.Active(file)
9674 ± 24% +1429.5% 147976 ± 51% numa-meminfo.node3.AnonPages
564208 ± 5% -49.5% 285127 ± 51% numa-meminfo.node3.Dirty
11073406 ± 2% -43.2% 6294102 ± 12% numa-meminfo.node3.FilePages
9816418 ± 3% -43.5% 5548123 ± 14% numa-meminfo.node3.Inactive
9801157 ± 3% -47.8% 5117233 ± 16% numa-meminfo.node3.Inactive(file)
4244767 ± 5% +107.3% 8800327 ± 6% numa-meminfo.node3.MemFree
12228003 -37.3% 7672443 ± 7% numa-meminfo.node3.MemUsed
356.25 ± 33% +349.9% 1602 ± 89% numa-meminfo.node3.PageTables
8688 ± 6% +4676.5% 414983 ± 12% numa-meminfo.node3.Writeback
1963614 ± 19% -100.0% 0.00 proc-vmstat.compact_daemon_free_scanned
1884603 ± 17% -100.0% 0.00 proc-vmstat.compact_daemon_migrate_scanned
2236 ± 6% -100.0% 0.00 proc-vmstat.compact_daemon_wake
4803358 ± 10% -100.0% 0.00 proc-vmstat.compact_free_scanned
2105245 ± 18% -100.0% 0.00 proc-vmstat.compact_isolated
2142023 ± 16% -100.0% 0.00 proc-vmstat.compact_migrate_scanned
1473 ± 5% -100.0% 0.00 proc-vmstat.kswapd_high_wmark_hit_quickly
2036 ± 7% -100.0% 0.00 proc-vmstat.kswapd_low_wmark_hit_quickly
10471 ± 6% -83.4% 1737 ± 8% proc-vmstat.nr_active_anon
1143484 ± 4% -57.4% 487222 ± 4% proc-vmstat.nr_active_file
29902459 ± 2% -66.5% 10031018 ± 9% proc-vmstat.nr_dirtied
583822 ± 2% -64.5% 207339 ± 5% proc-vmstat.nr_dirty
1448057 -1.5% 1426637 proc-vmstat.nr_dirty_background_threshold
2899656 -1.5% 2856764 proc-vmstat.nr_dirty_threshold
11209629 -44.3% 6244340 ± 5% proc-vmstat.nr_file_pages
24431 ± 3% +94.0% 47386 proc-vmstat.nr_free_cma
3922200 ± 4% +120.9% 8665387 ± 4% proc-vmstat.nr_free_pages
9518208 -45.2% 5219177 ± 6% proc-vmstat.nr_inactive_file
295213 -3.0% 286354 proc-vmstat.nr_shmem
787900 +30.5% 1028413 proc-vmstat.nr_slab_unreclaimable
14946 ± 20% -100.0% 0.00 proc-vmstat.nr_vmscan_immediate_reclaim
8529 ± 4% +4667.1% 406608 proc-vmstat.nr_writeback
29273932 ± 2% -68.3% 9267650 ± 9% proc-vmstat.nr_written
10472 ± 6% -83.4% 1737 ± 8% proc-vmstat.nr_zone_active_anon
1144105 ± 4% -57.4% 487222 ± 4% proc-vmstat.nr_zone_active_file
9521928 -45.2% 5219177 ± 6% proc-vmstat.nr_zone_inactive_file
594443 ± 2% +3.4% 614476 proc-vmstat.nr_zone_write_pending
4495238 ± 6% -100.0% 0.00 proc-vmstat.numa_foreign
16973 ± 16% +45.9% 24763 ± 18% proc-vmstat.numa_hint_faults
10082 ± 18% +62.3% 16366 ± 20% proc-vmstat.numa_hint_faults_local
25819549 -52.5% 12266655 ± 7% proc-vmstat.numa_hit
25725022 ± 2% -52.7% 12171567 ± 7% proc-vmstat.numa_local
4495238 ± 6% -100.0% 0.00 proc-vmstat.numa_miss
4589764 ± 6% -97.9% 95087 proc-vmstat.numa_other
3897 ± 2% -100.0% 0.00 proc-vmstat.pageoutrun
2255144 ± 4% -67.2% 739508 ± 14% proc-vmstat.pgactivate
761876 ± 8% -100.0% 0.00 proc-vmstat.pgalloc_dma32
30802102 ± 2% -42.5% 17699688 ± 8% proc-vmstat.pgalloc_normal
60548 ± 63% -100.0% 0.00 proc-vmstat.pgdeactivate
1767208 +5.1% 1856697 proc-vmstat.pgfault
17156575 ± 4% -59.3% 6989993 ± 11% proc-vmstat.pgfree
289815 ± 29% -100.0% 0.00 proc-vmstat.pgmigrate_fail
887578 ± 17% -98.8% 10362 ± 69% proc-vmstat.pgmigrate_success
41767 ± 59% -100.0% 0.00 proc-vmstat.pgpgin
1.171e+08 ± 2% -68.3% 37074926 ± 9% proc-vmstat.pgpgout
60548 ± 63% -100.0% 0.00 proc-vmstat.pgrefill
77328 +2.9% 79603 proc-vmstat.pgreuse
14914 ± 20% -100.0% 0.00 proc-vmstat.pgrotated
15313814 ± 10% -100.0% 0.00 proc-vmstat.pgscan_file
15160284 ± 9% -100.0% 0.00 proc-vmstat.pgscan_kswapd
13902870 ± 4% -100.0% 0.00 proc-vmstat.pgsteal_file
13749340 ± 2% -100.0% 0.00 proc-vmstat.pgsteal_kswapd
69128 -100.0% 0.00 proc-vmstat.slabs_scanned
3036 ± 95% -100.0% 0.00 proc-vmstat.workingset_activate_file
3434 ± 88% -100.0% 0.00 proc-vmstat.workingset_refault_file
5690186 -32.0% 3869151 ± 3% slabinfo.Acpi-State.active_objs
113651 -31.9% 77430 ± 3% slabinfo.Acpi-State.active_slabs
5796264 -31.9% 3948951 ± 3% slabinfo.Acpi-State.num_objs
113651 -31.9% 77430 ± 3% slabinfo.Acpi-State.num_slabs
2955 +13498.7% 401944 slabinfo.biovec-max.active_objs
383.75 +12997.5% 50261 slabinfo.biovec-max.active_slabs
3074 +12977.3% 402095 slabinfo.biovec-max.num_objs
383.75 +12997.5% 50261 slabinfo.biovec-max.num_slabs
69179 ± 12% -76.5% 16291 ± 6% slabinfo.blkdev_ioc.active_objs
1849 ± 12% -76.5% 434.00 ± 6% slabinfo.blkdev_ioc.active_slabs
72164 ± 12% -76.5% 16942 ± 6% slabinfo.blkdev_ioc.num_objs
1849 ± 12% -76.5% 434.00 ± 6% slabinfo.blkdev_ioc.num_slabs
120669 +279.8% 458353 slabinfo.btrfs_ordered_extent.active_objs
3203 ± 2% +260.3% 11542 slabinfo.btrfs_ordered_extent.active_slabs
128148 ± 2% +260.3% 461702 slabinfo.btrfs_ordered_extent.num_objs
3203 ± 2% +260.3% 11542 slabinfo.btrfs_ordered_extent.num_slabs
8087 -14.0% 6952 ± 7% slabinfo.dmaengine-unmap-16.active_objs
8087 -14.0% 6952 ± 7% slabinfo.dmaengine-unmap-16.num_objs
62386 ± 10% -44.0% 34917 ± 10% slabinfo.fsnotify_mark_connector.active_objs
494.25 ± 9% -43.9% 277.25 ± 10% slabinfo.fsnotify_mark_connector.active_slabs
63316 ± 9% -43.9% 35545 ± 10% slabinfo.fsnotify_mark_connector.num_objs
494.25 ± 9% -43.9% 277.25 ± 10% slabinfo.fsnotify_mark_connector.num_slabs
76888 +22.0% 93794 slabinfo.inode_cache.active_objs
1431 +21.8% 1743 slabinfo.inode_cache.active_slabs
77290 +21.8% 94145 slabinfo.inode_cache.num_objs
1431 +21.8% 1743 slabinfo.inode_cache.num_slabs
8130 +39.1% 11313 ± 4% slabinfo.khugepaged_mm_slot.active_objs
8130 +39.1% 11313 ± 4% slabinfo.khugepaged_mm_slot.num_objs
26900 ± 15% +201.7% 81150 ± 8% slabinfo.kmalloc-128.active_objs
422.75 ± 15% +200.8% 1271 ± 8% slabinfo.kmalloc-128.active_slabs
27076 ± 15% +200.7% 81422 ± 8% slabinfo.kmalloc-128.num_objs
422.75 ± 15% +200.8% 1271 ± 8% slabinfo.kmalloc-128.num_slabs
40806 ± 18% +1067.6% 476447 slabinfo.kmalloc-192.active_objs
975.50 ± 18% +1064.2% 11356 slabinfo.kmalloc-192.active_slabs
41004 ± 18% +1063.3% 477003 slabinfo.kmalloc-192.num_objs
975.50 ± 18% +1064.2% 11356 slabinfo.kmalloc-192.num_slabs
3706 +19.6% 4433 ± 2% slabinfo.kmalloc-4k.active_objs
3771 +19.5% 4506 ± 2% slabinfo.kmalloc-4k.num_objs
330516 +96.7% 650081 slabinfo.kmalloc-64.active_objs
5260 +94.5% 10229 slabinfo.kmalloc-64.active_slabs
336720 +94.4% 654737 slabinfo.kmalloc-64.num_objs
5260 +94.5% 10229 slabinfo.kmalloc-64.num_slabs
25975 ± 13% +1577.5% 435743 slabinfo.mnt_cache.active_objs
514.00 ± 13% +1564.0% 8553 slabinfo.mnt_cache.active_slabs
26250 ± 13% +1561.8% 436235 slabinfo.mnt_cache.num_objs
514.00 ± 13% +1564.0% 8553 slabinfo.mnt_cache.num_slabs
135922 ± 3% -99.8% 310.00 slabinfo.numa_policy.active_objs
2262 ± 3% -99.8% 5.00 slabinfo.numa_policy.active_slabs
140301 ± 3% -99.8% 310.00 slabinfo.numa_policy.num_objs
2262 ± 3% -99.8% 5.00 slabinfo.numa_policy.num_slabs
14007583 -36.1% 8951894 ± 5% slabinfo.pid_namespace.active_objs
264130 -39.5% 159858 ± 5% slabinfo.pid_namespace.active_slabs
14791334 -39.5% 8952113 ± 5% slabinfo.pid_namespace.num_objs
264130 -39.5% 159858 ± 5% slabinfo.pid_namespace.num_slabs
4179 +73.0% 7229 slabinfo.proc_dir_entry.active_objs
4179 +73.0% 7229 slabinfo.proc_dir_entry.num_objs
16249 ± 2% +30.7% 21233 ± 2% slabinfo.proc_inode_cache.active_objs
356.75 ± 2% +26.7% 452.00 ± 3% slabinfo.proc_inode_cache.active_slabs
17141 ± 2% +26.7% 21719 ± 3% slabinfo.proc_inode_cache.num_objs
356.75 ± 2% +26.7% 452.00 ± 3% slabinfo.proc_inode_cache.num_slabs
24219 +11.1% 26917 slabinfo.vmap_area.active_objs
24316 +10.9% 26964 slabinfo.vmap_area.num_objs
11.62 +23.0% 14.30 ± 2% perf-stat.i.MPKI
3.79e+09 -65.1% 1.325e+09 ± 9% perf-stat.i.branch-instructions
1.33 +0.1 1.39 perf-stat.i.branch-miss-rate%
49802409 -66.5% 16700981 ± 8% perf-stat.i.branch-misses
35.51 -14.3 21.20 perf-stat.i.cache-miss-rate%
75367452 -70.7% 22048053 ± 11% perf-stat.i.cache-misses
2.13e+08 -63.9% 76903888 ± 7% perf-stat.i.cache-references
746576 -82.9% 127637 ± 9% perf-stat.i.context-switches
7.23 ± 2% -30.5% 5.03 perf-stat.i.cpi
1.319e+11 -67.3% 4.315e+10 ± 15% perf-stat.i.cpu-cycles
0.19 ± 7% -0.1 0.04 ± 10% perf-stat.i.dTLB-load-miss-rate%
9221810 ± 7% -81.8% 1675363 ± 23% perf-stat.i.dTLB-load-misses
4.796e+09 -63.8% 1.735e+09 ± 8% perf-stat.i.dTLB-loads
0.01 ± 7% -0.0 0.01 ± 5% perf-stat.i.dTLB-store-miss-rate%
230987 ± 7% -65.1% 80728 ± 19% perf-stat.i.dTLB-store-misses
2.341e+09 -63.5% 8.549e+08 ± 7% perf-stat.i.dTLB-stores
26.75 +8.9 35.70 perf-stat.i.iTLB-load-miss-rate%
8137696 ± 2% -50.5% 4027641 perf-stat.i.iTLB-load-misses
22441673 -63.7% 8142598 ± 3% perf-stat.i.iTLB-loads
1.843e+10 -64.3% 6.582e+09 ± 9% perf-stat.i.instructions
2279 -44.8% 1259 ± 6% perf-stat.i.instructions-per-iTLB-miss
0.16 ± 2% +52.2% 0.24 ± 2% perf-stat.i.ipc
936.33 -8.8% 853.65 perf-stat.i.major-faults
0.69 -67.2% 0.23 ± 15% perf-stat.i.metric.GHz
0.31 ± 12% +175.0% 0.86 ± 6% perf-stat.i.metric.K/sec
58.38 -64.2% 20.90 ± 8% perf-stat.i.metric.M/sec
3870 -1.7% 3806 perf-stat.i.minor-faults
29615985 ± 2% -75.2% 7334711 ± 13% perf-stat.i.node-load-misses
3627603 ± 24% -54.4% 1655183 ± 18% perf-stat.i.node-loads
93.22 -2.4 90.85 perf-stat.i.node-store-miss-rate%
9526845 -80.9% 1816862 ± 10% perf-stat.i.node-store-misses
664588 ± 9% -42.6% 381478 ± 18% perf-stat.i.node-stores
4806 -3.1% 4659 perf-stat.i.page-faults
1.31 -0.1 1.26 ± 2% perf-stat.overall.branch-miss-rate%
35.37 -6.7 28.65 ± 3% perf-stat.overall.cache-miss-rate%
7.16 ± 2% -8.3% 6.57 ± 6% perf-stat.overall.cpi
1751 +12.2% 1964 ± 6% perf-stat.overall.cycles-between-cache-misses
0.19 ± 7% -0.1 0.10 ± 16% perf-stat.overall.dTLB-load-miss-rate%
26.62 +6.5 33.08 perf-stat.overall.iTLB-load-miss-rate%
2266 -27.5% 1643 ± 7% perf-stat.overall.instructions-per-iTLB-miss
89.10 ± 2% -7.4 81.71 perf-stat.overall.node-load-miss-rate%
93.47 -10.8 82.71 perf-stat.overall.node-store-miss-rate%
218486 +17.8% 257317 perf-stat.overall.path-length
3.778e+09 -64.7% 1.334e+09 ± 9% perf-stat.ps.branch-instructions
49664408 -66.2% 16801034 ± 8% perf-stat.ps.branch-misses
75102264 -70.5% 22191890 ± 11% perf-stat.ps.cache-misses
2.123e+08 -63.6% 77229955 ± 7% perf-stat.ps.cache-references
743898 -82.7% 128529 ± 9% perf-stat.ps.context-switches
1.315e+11 -66.7% 4.378e+10 ± 15% perf-stat.ps.cpu-cycles
9195922 ± 7% -81.6% 1689875 ± 23% perf-stat.ps.dTLB-load-misses
4.781e+09 -63.5% 1.745e+09 ± 8% perf-stat.ps.dTLB-loads
230265 ± 7% -64.6% 81407 ± 19% perf-stat.ps.dTLB-store-misses
2.333e+09 -63.2% 8.588e+08 ± 7% perf-stat.ps.dTLB-stores
8108784 ± 2% -50.4% 4025682 perf-stat.ps.iTLB-load-misses
22357242 -63.6% 8147450 ± 3% perf-stat.ps.iTLB-loads
1.837e+10 -63.9% 6.625e+09 ± 9% perf-stat.ps.instructions
932.38 -8.3% 855.04 perf-stat.ps.major-faults
3857 -1.6% 3796 perf-stat.ps.minor-faults
29508634 ± 2% -75.0% 7390664 ± 13% perf-stat.ps.node-load-misses
3615319 ± 24% -53.9% 1667795 ± 18% perf-stat.ps.node-loads
9490091 -80.7% 1831748 ± 10% perf-stat.ps.node-store-misses
663408 ± 9% -41.9% 385258 ± 18% perf-stat.ps.node-stores
4789 -2.9% 4651 perf-stat.ps.page-faults
5.558e+12 -60.7% 2.185e+12 ± 9% perf-stat.total.instructions
269453 ± 17% -45.9% 145678 ± 44% numa-vmstat.node0.nr_active_file
37344 ± 22% -45.3% 20432 ± 23% numa-vmstat.node0.nr_anon_pages
3464536 ± 4% -59.8% 1394045 ± 11% numa-vmstat.node0.nr_dirtied
136686 ± 13% -59.2% 55792 ± 33% numa-vmstat.node0.nr_dirty
2741797 ± 3% -45.9% 1483185 ± 7% numa-vmstat.node0.nr_file_pages
924535 ± 10% +127.3% 2101569 ± 5% numa-vmstat.node0.nr_free_pages
2334329 ± 2% -48.7% 1197999 ± 18% numa-vmstat.node0.nr_inactive_file
30756 ± 9% -15.2% 26093 numa-vmstat.node0.nr_slab_reclaimable
2048 ± 4% +4546.1% 95175 ± 9% numa-vmstat.node0.nr_writeback
3324897 ± 4% -62.6% 1242336 ± 13% numa-vmstat.node0.nr_written
269602 ± 17% -46.0% 145678 ± 44% numa-vmstat.node0.nr_zone_active_file
2335170 ± 2% -48.7% 1197998 ± 18% numa-vmstat.node0.nr_zone_inactive_file
351923 ± 20% -100.0% 0.00 numa-vmstat.node0.numa_foreign
4037540 ± 8% -40.5% 2404226 ± 9% numa-vmstat.node0.numa_hit
3993199 ± 9% -41.0% 2356285 ± 8% numa-vmstat.node0.numa_local
201697 ± 15% -100.0% 0.00 numa-vmstat.node0.numa_miss
246041 ± 23% -80.5% 47942 ± 55% numa-vmstat.node0.numa_other
280808 ± 30% -57.4% 119543 ± 73% numa-vmstat.node1.nr_active_file
3706856 ± 5% -58.2% 1550035 ± 7% numa-vmstat.node1.nr_dirtied
146942 ± 9% -63.4% 53835 ± 43% numa-vmstat.node1.nr_dirty
2921252 ± 2% -41.3% 1715318 ± 14% numa-vmstat.node1.nr_file_pages
914612 ± 6% +122.1% 2031709 ± 7% numa-vmstat.node1.nr_free_pages
2367376 ± 8% -41.2% 1392398 ± 16% numa-vmstat.node1.nr_inactive_file
1306 ± 53% -100.0% 0.00 numa-vmstat.node1.nr_vmscan_immediate_reclaim
2168 ± 3% +4408.5% 97744 ± 10% numa-vmstat.node1.nr_writeback
3556716 ± 5% -60.7% 1398305 ± 9% numa-vmstat.node1.nr_written
280977 ± 30% -57.5% 119543 ± 73% numa-vmstat.node1.nr_zone_active_file
2368374 ± 8% -41.2% 1392396 ± 16% numa-vmstat.node1.nr_zone_inactive_file
331364 ± 18% -100.0% 0.00 numa-vmstat.node1.numa_foreign
4383788 ± 5% -40.6% 2602900 ± 15% numa-vmstat.node1.numa_hit
4288278 ± 5% -41.5% 2507299 ± 16% numa-vmstat.node1.numa_local
292964 ± 20% -100.0% 0.00 numa-vmstat.node1.numa_miss
388476 ± 18% -75.4% 95602 ± 16% numa-vmstat.node1.numa_other
884.50 ± 67% -100.0% 0.00 numa-vmstat.node1.workingset_activate_file
1151 ± 77% -100.0% 0.00 numa-vmstat.node1.workingset_refault_file
347162 ± 23% -82.3% 61566 ± 44% numa-vmstat.node2.nr_active_file
3807988 ± 2% -62.3% 1436167 ± 7% numa-vmstat.node2.nr_dirtied
159121 ± 14% -82.9% 27205 ± 38% numa-vmstat.node2.nr_dirty
2777667 ± 2% -47.1% 1469407 ± 8% numa-vmstat.node2.nr_file_pages
1022487 ± 4% +128.4% 2335707 ± 6% numa-vmstat.node2.nr_free_pages
2365316 ± 6% -43.1% 1346466 ± 10% numa-vmstat.node2.nr_inactive_file
2126 ± 4% +5045.1% 109397 ± 12% numa-vmstat.node2.nr_writeback
3644330 ± 2% -64.4% 1298979 ± 7% numa-vmstat.node2.nr_written
347321 ± 23% -82.3% 61566 ± 44% numa-vmstat.node2.nr_zone_active_file
2366100 ± 6% -43.1% 1346465 ± 10% numa-vmstat.node2.nr_zone_inactive_file
278338 ± 11% -100.0% 0.00 numa-vmstat.node2.numa_foreign
4033609 ± 6% -46.0% 2176395 ± 6% numa-vmstat.node2.numa_hit
3928864 ± 7% -47.3% 2070897 ± 6% numa-vmstat.node2.numa_local
377124 ± 29% -100.0% 0.00 numa-vmstat.node2.numa_miss
481872 ± 23% -78.1% 105498 ± 12% numa-vmstat.node2.numa_other
7486 ± 20% -90.8% 686.50 ± 34% numa-vmstat.node3.nr_active_anon
246367 ± 10% -35.1% 159833 ± 54% numa-vmstat.node3.nr_active_file
2419 ± 24% +1429.4% 36995 ± 51% numa-vmstat.node3.nr_anon_pages
3674230 -59.5% 1488979 ± 13% numa-vmstat.node3.nr_dirtied
140998 ± 5% -49.5% 71202 ± 50% numa-vmstat.node3.nr_dirty
2768934 ± 2% -43.2% 1572520 ± 12% numa-vmstat.node3.nr_file_pages
24436 ± 3% +93.9% 47386 numa-vmstat.node3.nr_free_cma
1060605 ± 5% +107.5% 2201255 ± 6% numa-vmstat.node3.nr_free_pages
2450673 ± 3% -47.8% 1278449 ± 16% numa-vmstat.node3.nr_inactive_file
89.25 ± 32% +349.3% 401.00 ± 88% numa-vmstat.node3.nr_page_table_pages
2180 ± 5% +4655.3% 103678 ± 12% numa-vmstat.node3.nr_writeback
3529658 -62.8% 1313608 ± 14% numa-vmstat.node3.nr_written
7487 ± 20% -90.8% 686.50 ± 34% numa-vmstat.node3.nr_zone_active_anon
246487 ± 10% -35.2% 159833 ± 54% numa-vmstat.node3.nr_zone_active_file
2451628 ± 3% -47.9% 1278448 ± 16% numa-vmstat.node3.nr_zone_inactive_file
210062 ± 21% -100.0% 0.00 numa-vmstat.node3.numa_foreign
3892724 ± 3% -38.8% 2383134 ± 12% numa-vmstat.node3.numa_hit
3798660 ± 3% -39.6% 2293931 ± 12% numa-vmstat.node3.numa_local
299932 ± 13% -100.0% 0.00 numa-vmstat.node3.numa_miss
394001 ± 15% -77.4% 89205 ± 28% numa-vmstat.node3.numa_other
51694 ± 4% -14.2% 44354 ± 4% softirqs.CPU0.RCU
55208 ± 3% -16.6% 46062 softirqs.CPU0.SCHED
50435 ± 5% -15.8% 42462 ± 4% softirqs.CPU1.RCU
52428 -19.0% 42483 ± 6% softirqs.CPU1.SCHED
50697 ± 2% -15.0% 43087 ± 4% softirqs.CPU10.RCU
49543 -15.4% 41892 ± 4% softirqs.CPU100.RCU
49799 ± 5% -11.8% 43907 softirqs.CPU100.SCHED
48957 ± 2% -13.9% 42164 ± 4% softirqs.CPU101.RCU
51810 ± 3% -17.2% 42905 ± 6% softirqs.CPU101.SCHED
49361 -15.8% 41579 ± 5% softirqs.CPU102.RCU
51409 ± 2% -15.8% 43302 softirqs.CPU102.SCHED
49340 -16.3% 41274 ± 3% softirqs.CPU103.RCU
52097 ± 2% -15.8% 43856 softirqs.CPU103.SCHED
48746 -15.1% 41383 ± 5% softirqs.CPU104.RCU
52665 -16.3% 44095 softirqs.CPU104.SCHED
48878 -14.0% 42047 ± 3% softirqs.CPU105.RCU
53603 ± 3% -18.3% 43772 softirqs.CPU105.SCHED
49378 -15.6% 41662 ± 4% softirqs.CPU106.RCU
52956 ± 4% -16.9% 43987 softirqs.CPU106.SCHED
49468 -15.3% 41900 ± 4% softirqs.CPU107.RCU
54328 ± 4% -18.5% 44268 softirqs.CPU107.SCHED
47546 ± 4% -13.1% 41339 ± 4% softirqs.CPU108.RCU
48486 -16.4% 40553 ± 5% softirqs.CPU109.RCU
52007 ± 2% -16.7% 43325 softirqs.CPU109.SCHED
50665 -14.5% 43303 ± 5% softirqs.CPU11.RCU
48467 -15.1% 41125 ± 5% softirqs.CPU110.RCU
53031 -16.5% 44270 softirqs.CPU110.SCHED
46756 ± 5% -11.8% 41247 ± 4% softirqs.CPU111.RCU
54083 ± 2% -18.5% 44051 softirqs.CPU111.SCHED
50318 -21.7% 39383 ± 5% softirqs.CPU112.RCU
53025 -16.7% 44160 softirqs.CPU112.SCHED
50329 -21.5% 39508 ± 4% softirqs.CPU113.RCU
50608 ± 5% -12.7% 44205 softirqs.CPU113.SCHED
50209 -23.6% 38357 ± 4% softirqs.CPU114.RCU
52429 ± 2% -16.0% 44023 softirqs.CPU114.SCHED
50468 -23.6% 38538 ± 4% softirqs.CPU115.RCU
52861 ± 3% -16.8% 43968 softirqs.CPU115.SCHED
49899 -21.8% 39004 ± 4% softirqs.CPU116.RCU
52991 ± 2% -17.2% 43877 softirqs.CPU116.SCHED
50573 -23.2% 38848 ± 5% softirqs.CPU117.RCU
51860 -15.4% 43871 softirqs.CPU117.SCHED
50768 -23.1% 39052 ± 5% softirqs.CPU118.RCU
51989 ± 5% -16.5% 43416 ± 2% softirqs.CPU118.SCHED
50902 -24.6% 38395 ± 6% softirqs.CPU119.RCU
50747 ± 3% -13.5% 43903 softirqs.CPU119.SCHED
272.00 ± 7% +4661.7% 12951 ± 97% softirqs.CPU12.NET_RX
49049 -13.9% 42252 ± 4% softirqs.CPU12.RCU
52144 -24.9% 39144 ± 4% softirqs.CPU120.RCU
48487 ± 3% -9.1% 44057 softirqs.CPU120.SCHED
52111 -25.3% 38930 ± 4% softirqs.CPU121.RCU
48123 ± 4% -8.6% 43987 softirqs.CPU121.SCHED
51613 -23.5% 39484 ± 4% softirqs.CPU122.RCU
50016 ± 4% -11.8% 44101 softirqs.CPU122.SCHED
51746 ± 2% -24.4% 39115 ± 5% softirqs.CPU123.RCU
50032 ± 2% -12.2% 43949 softirqs.CPU123.SCHED
51945 -23.8% 39598 ± 4% softirqs.CPU124.RCU
50438 ± 4% -12.6% 44107 softirqs.CPU124.SCHED
51670 -22.2% 40191 ± 5% softirqs.CPU125.RCU
51327 -13.8% 44253 softirqs.CPU125.SCHED
51302 ± 2% -24.4% 38798 ± 4% softirqs.CPU126.RCU
50934 ± 3% -13.7% 43954 softirqs.CPU126.SCHED
51536 ± 2% -24.7% 38810 ± 4% softirqs.CPU127.RCU
51106 -14.6% 43654 softirqs.CPU127.SCHED
51751 ± 3% -22.0% 40362 ± 4% softirqs.CPU128.RCU
51289 ± 4% -13.9% 44160 softirqs.CPU128.SCHED
52012 ± 3% -21.7% 40710 ± 4% softirqs.CPU129.RCU
49793 ± 2% -11.6% 44031 softirqs.CPU129.SCHED
50113 ± 2% -15.7% 42245 ± 5% softirqs.CPU13.RCU
47343 ± 2% -9.6% 42818 softirqs.CPU13.SCHED
51557 ± 2% -21.0% 40718 ± 5% softirqs.CPU130.RCU
49738 ± 3% -11.5% 44034 softirqs.CPU130.SCHED
51644 ± 2% -25.9% 38269 ± 15% softirqs.CPU131.RCU
51849 ± 3% -13.5% 44833 ± 2% softirqs.CPU131.SCHED
51927 ± 2% -20.5% 41297 ± 5% softirqs.CPU132.RCU
53304 ± 2% -18.6% 43383 ± 3% softirqs.CPU132.SCHED
51583 ± 2% -21.0% 40772 ± 5% softirqs.CPU133.RCU
51528 ± 4% -15.4% 43610 softirqs.CPU133.SCHED
52235 ± 2% -21.6% 40950 ± 5% softirqs.CPU134.RCU
52277 ± 3% -15.9% 43989 softirqs.CPU134.SCHED
51511 ± 2% -21.0% 40698 ± 4% softirqs.CPU135.RCU
50759 ± 2% -18.4% 41396 ± 11% softirqs.CPU135.SCHED
51717 ± 2% -21.3% 40698 ± 4% softirqs.CPU136.RCU
50938 ± 2% -13.3% 44175 softirqs.CPU136.SCHED
51734 -20.9% 40916 ± 4% softirqs.CPU137.RCU
50986 -13.3% 44196 softirqs.CPU137.SCHED
52217 ± 2% -20.7% 41422 ± 4% softirqs.CPU138.RCU
50858 ± 2% -13.5% 43996 softirqs.CPU138.SCHED
52274 ± 3% -22.0% 40799 ± 4% softirqs.CPU139.RCU
50085 ± 2% -12.2% 43968 softirqs.CPU139.SCHED
50465 ± 2% -15.8% 42511 ± 4% softirqs.CPU14.RCU
52738 -21.9% 41172 ± 5% softirqs.CPU140.RCU
50598 ± 3% -12.6% 44200 softirqs.CPU140.SCHED
52417 ± 2% -21.9% 40963 ± 4% softirqs.CPU141.RCU
50143 ± 5% -11.8% 44209 softirqs.CPU141.SCHED
51870 ± 2% -19.7% 41649 softirqs.CPU142.RCU
50257 ± 4% -12.0% 44207 softirqs.CPU142.SCHED
51772 ± 2% -21.9% 40452 ± 5% softirqs.CPU143.RCU
49842 -11.4% 44138 softirqs.CPU143.SCHED
51052 -19.2% 41250 ± 3% softirqs.CPU144.RCU
50247 ± 2% -14.4% 43002 ± 3% softirqs.CPU144.SCHED
49604 -18.8% 40266 ± 2% softirqs.CPU145.RCU
50995 ± 7% -13.6% 44040 softirqs.CPU145.SCHED
50953 -22.6% 39418 ± 7% softirqs.CPU146.RCU
51025 ± 4% -13.3% 44248 softirqs.CPU146.SCHED
51672 -20.2% 41258 ± 2% softirqs.CPU147.RCU
50675 ± 3% -12.9% 44140 softirqs.CPU147.SCHED
51742 -19.8% 41516 softirqs.CPU148.RCU
51663 ± 3% -15.7% 43556 ± 2% softirqs.CPU148.SCHED
51530 -19.0% 41753 ± 2% softirqs.CPU149.RCU
52428 ± 4% -15.4% 44358 softirqs.CPU149.SCHED
49848 ± 2% -14.0% 42894 ± 5% softirqs.CPU15.RCU
50291 -19.5% 40485 softirqs.CPU150.RCU
53186 ± 7% -17.2% 44013 softirqs.CPU150.SCHED
50612 -18.9% 41071 ± 2% softirqs.CPU151.RCU
52452 ± 6% -15.8% 44152 softirqs.CPU151.SCHED
50499 -19.3% 40754 softirqs.CPU152.RCU
52247 ± 5% -15.5% 44135 softirqs.CPU152.SCHED
51147 -19.9% 40987 softirqs.CPU153.RCU
53320 ± 7% -17.2% 44162 softirqs.CPU153.SCHED
51134 -19.4% 41220 ± 2% softirqs.CPU154.RCU
53027 ± 6% -16.5% 44259 softirqs.CPU154.SCHED
51083 -18.9% 41427 ± 2% softirqs.CPU155.RCU
53152 ± 7% -17.3% 43940 softirqs.CPU155.SCHED
50622 -19.0% 40998 ± 2% softirqs.CPU156.RCU
52939 ± 5% -16.8% 44047 softirqs.CPU156.SCHED
50558 -20.6% 40156 ± 3% softirqs.CPU157.RCU
53525 ± 4% -18.2% 43788 softirqs.CPU157.SCHED
50461 -20.4% 40166 softirqs.CPU158.RCU
55048 ± 4% -19.9% 44074 softirqs.CPU158.SCHED
50274 -18.6% 40905 softirqs.CPU159.RCU
52995 ± 4% -18.6% 43163 ± 3% softirqs.CPU159.SCHED
50470 ± 2% -20.2% 40264 ± 4% softirqs.CPU16.RCU
48372 -10.6% 43243 softirqs.CPU16.SCHED
52170 -23.7% 39824 ± 2% softirqs.CPU160.RCU
54209 ± 5% -18.8% 43999 softirqs.CPU160.SCHED
52202 -22.9% 40251 ± 2% softirqs.CPU161.RCU
52754 ± 2% -16.6% 44018 softirqs.CPU161.SCHED
52829 -24.2% 40052 ± 3% softirqs.CPU162.RCU
51161 ± 4% -13.9% 44052 softirqs.CPU162.SCHED
52659 -24.2% 39893 ± 2% softirqs.CPU163.RCU
52453 ± 4% -15.7% 44218 softirqs.CPU163.SCHED
52728 -24.0% 40073 ± 2% softirqs.CPU164.RCU
53270 ± 5% -17.3% 44060 softirqs.CPU164.SCHED
52735 -24.2% 39951 ± 3% softirqs.CPU165.RCU
53306 ± 4% -17.2% 44163 softirqs.CPU165.SCHED
52900 -25.1% 39616 ± 3% softirqs.CPU166.RCU
52797 ± 7% -16.9% 43852 softirqs.CPU166.SCHED
53055 -25.1% 39749 ± 2% softirqs.CPU167.RCU
51261 ± 2% -13.8% 44167 softirqs.CPU167.SCHED
50149 ± 2% -24.4% 37895 softirqs.CPU168.RCU
51960 ± 4% -15.4% 43964 softirqs.CPU168.SCHED
49742 ± 2% -24.3% 37662 softirqs.CPU169.RCU
49894 ± 3% -12.0% 43882 softirqs.CPU169.SCHED
49061 ± 8% -18.3% 40088 ± 4% softirqs.CPU17.RCU
49745 ± 3% -13.3% 43133 softirqs.CPU17.SCHED
49895 ± 2% -23.9% 37977 softirqs.CPU170.RCU
51718 ± 4% -15.0% 43976 softirqs.CPU170.SCHED
49952 ± 2% -23.7% 38118 ± 2% softirqs.CPU171.RCU
50767 ± 4% -13.3% 44028 softirqs.CPU171.SCHED
49649 ± 3% -23.8% 37819 softirqs.CPU172.RCU
51288 ± 4% -15.8% 43181 ± 3% softirqs.CPU172.SCHED
49694 ± 2% -23.3% 38106 softirqs.CPU173.RCU
51970 ± 3% -15.0% 44168 softirqs.CPU173.SCHED
50068 ± 2% -24.1% 37987 softirqs.CPU174.RCU
49902 ± 4% -11.9% 43979 softirqs.CPU174.SCHED
50864 ± 3% -25.1% 38084 ± 2% softirqs.CPU175.RCU
50772 ± 5% -13.4% 43947 softirqs.CPU175.SCHED
48829 ± 3% -18.2% 39955 softirqs.CPU176.RCU
51050 ± 7% -14.1% 43861 softirqs.CPU176.SCHED
48791 ± 2% -18.1% 39936 softirqs.CPU177.RCU
51353 ± 4% -14.2% 44074 softirqs.CPU177.SCHED
48667 ± 2% -17.2% 40283 softirqs.CPU178.RCU
52217 ± 3% -15.8% 43975 softirqs.CPU178.SCHED
49300 ± 2% -17.7% 40576 softirqs.CPU179.RCU
52512 ± 4% -15.9% 44182 softirqs.CPU179.SCHED
51235 ± 2% -21.6% 40167 ± 4% softirqs.CPU18.RCU
48026 -10.4% 43008 softirqs.CPU18.SCHED
48915 ± 3% -19.1% 39594 softirqs.CPU180.RCU
53311 -18.1% 43682 softirqs.CPU180.SCHED
48718 ± 2% -17.3% 40275 softirqs.CPU181.RCU
54139 ± 4% -18.6% 44042 softirqs.CPU181.SCHED
48338 ± 3% -16.9% 40187 softirqs.CPU182.RCU
51719 ± 4% -15.1% 43888 softirqs.CPU182.SCHED
48501 ± 3% -17.1% 40190 ± 2% softirqs.CPU183.RCU
53378 ± 7% -18.0% 43761 softirqs.CPU183.SCHED
48952 ± 2% -16.7% 40801 softirqs.CPU184.RCU
52589 ± 2% -16.1% 44131 softirqs.CPU184.SCHED
49432 ± 3% -17.8% 40616 softirqs.CPU185.RCU
52470 ± 5% -16.4% 43872 softirqs.CPU185.SCHED
48724 ± 2% -17.1% 40391 softirqs.CPU186.RCU
53967 ± 4% -18.7% 43872 softirqs.CPU186.SCHED
48184 ± 2% -14.5% 41186 ± 5% softirqs.CPU187.RCU
52030 ± 3% -15.6% 43937 softirqs.CPU187.SCHED
49024 ± 3% -17.1% 40619 softirqs.CPU188.RCU
52624 ± 6% -17.1% 43629 ± 3% softirqs.CPU188.SCHED
48575 ± 2% -17.6% 40025 softirqs.CPU189.RCU
52917 ± 6% -16.7% 44072 softirqs.CPU189.SCHED
50821 -21.1% 40078 ± 4% softirqs.CPU19.RCU
48601 ± 3% -11.6% 42976 softirqs.CPU19.SCHED
49129 ± 2% -17.4% 40604 softirqs.CPU190.RCU
51318 ± 4% -15.6% 43288 ± 2% softirqs.CPU190.SCHED
48828 ± 3% -17.3% 40382 ± 2% softirqs.CPU191.RCU
50573 ± 6% -12.6% 44176 softirqs.CPU191.SCHED
49852 ± 2% -13.6% 43052 ± 6% softirqs.CPU2.RCU
50624 -26.2% 37373 ± 29% softirqs.CPU2.SCHED
51034 -21.1% 40250 ± 5% softirqs.CPU20.RCU
47865 ± 2% -9.9% 43114 softirqs.CPU20.SCHED
51171 -21.2% 40343 ± 4% softirqs.CPU21.RCU
47911 ± 3% -10.2% 43008 softirqs.CPU21.SCHED
50944 ± 2% -21.1% 40184 ± 5% softirqs.CPU22.RCU
48516 -10.9% 43223 softirqs.CPU22.SCHED
50701 -23.0% 39023 ± 10% softirqs.CPU23.RCU
49695 ± 3% -11.7% 43879 ± 3% softirqs.CPU23.SCHED
51809 ± 2% -19.9% 41511 ± 3% softirqs.CPU24.RCU
54247 ± 3% -20.5% 43120 softirqs.CPU24.SCHED
52235 -20.7% 41426 ± 3% softirqs.CPU25.RCU
50455 -14.5% 43131 softirqs.CPU25.SCHED
52252 -21.2% 41175 ± 2% softirqs.CPU26.RCU
49032 ± 2% -11.9% 43218 softirqs.CPU26.SCHED
51957 ± 2% -21.5% 40803 ± 4% softirqs.CPU27.RCU
48325 ± 2% -12.2% 42441 ± 2% softirqs.CPU27.SCHED
52370 -20.2% 41787 ± 4% softirqs.CPU28.RCU
52435 -21.4% 41235 ± 3% softirqs.CPU29.RCU
50287 ± 2% -22.0% 39242 ± 17% softirqs.CPU3.RCU
49377 ± 2% -10.6% 44154 ± 4% softirqs.CPU3.SCHED
52607 -22.3% 40900 ± 3% softirqs.CPU30.RCU
46975 ± 3% -8.0% 43219 softirqs.CPU30.SCHED
52465 ± 2% -21.6% 41109 ± 4% softirqs.CPU31.RCU
47297 ± 3% -9.1% 43010 softirqs.CPU31.SCHED
52865 ± 2% -17.9% 43394 ± 2% softirqs.CPU32.RCU
52482 ± 2% -17.7% 43194 ± 3% softirqs.CPU33.RCU
47271 -9.0% 43023 softirqs.CPU33.SCHED
53070 ± 2% -16.9% 44114 ± 2% softirqs.CPU34.RCU
53427 ± 2% -20.4% 42540 ± 8% softirqs.CPU35.RCU
53001 ± 2% -16.9% 44053 softirqs.CPU36.RCU
52752 ± 2% -17.5% 43539 softirqs.CPU37.RCU
52961 ± 2% -18.2% 43316 ± 2% softirqs.CPU38.RCU
52893 ± 2% -18.3% 43194 ± 3% softirqs.CPU39.RCU
50464 ± 2% -14.1% 43344 ± 6% softirqs.CPU4.RCU
48953 -11.0% 43575 softirqs.CPU4.SCHED
53472 ± 2% -18.1% 43769 ± 3% softirqs.CPU40.RCU
53093 ± 2% -16.3% 44415 softirqs.CPU41.RCU
53337 ± 2% -18.3% 43591 ± 2% softirqs.CPU42.RCU
47497 -9.0% 43235 softirqs.CPU42.SCHED
53370 ± 3% -18.1% 43683 ± 2% softirqs.CPU43.RCU
52931 ± 2% -17.9% 43442 ± 2% softirqs.CPU44.RCU
47871 ± 3% -9.8% 43199 softirqs.CPU44.SCHED
52634 ± 2% -17.6% 43396 ± 2% softirqs.CPU45.RCU
53346 ± 2% -18.0% 43768 ± 2% softirqs.CPU46.RCU
52932 ± 2% -17.5% 43653 ± 3% softirqs.CPU47.RCU
52101 ± 2% -15.0% 44289 ± 2% softirqs.CPU48.RCU
53885 ± 2% -19.5% 43371 softirqs.CPU48.SCHED
51696 -14.4% 44274 ± 2% softirqs.CPU49.RCU
50832 -15.1% 43177 softirqs.CPU49.SCHED
50213 ± 3% -14.7% 42847 ± 2% softirqs.CPU5.SCHED
52612 -22.1% 40975 ± 16% softirqs.CPU50.RCU
49912 -11.1% 44375 ± 2% softirqs.CPU50.SCHED
52464 -16.5% 43795 ± 2% softirqs.CPU51.RCU
49807 ± 2% -12.6% 43514 softirqs.CPU51.SCHED
52260 -16.3% 43719 softirqs.CPU52.RCU
50089 -13.2% 43474 softirqs.CPU52.SCHED
52870 -16.6% 44113 ± 2% softirqs.CPU53.RCU
48786 ± 4% -11.1% 43376 softirqs.CPU53.SCHED
52620 -16.6% 43907 softirqs.CPU54.RCU
47802 -9.9% 43093 softirqs.CPU54.SCHED
52963 -15.9% 44554 ± 2% softirqs.CPU55.RCU
48152 -11.1% 42786 softirqs.CPU55.SCHED
53066 -16.9% 44120 ± 2% softirqs.CPU56.RCU
53572 -17.8% 44018 ± 2% softirqs.CPU57.RCU
52724 -16.2% 44160 softirqs.CPU58.RCU
48191 -9.9% 43434 softirqs.CPU58.SCHED
52946 -16.9% 44015 ± 2% softirqs.CPU59.RCU
50656 -16.1% 42511 ± 5% softirqs.CPU6.RCU
47282 -18.0% 38756 ± 11% softirqs.CPU6.SCHED
53199 -16.2% 44597 ± 2% softirqs.CPU60.RCU
53129 -16.8% 44201 ± 3% softirqs.CPU61.RCU
47790 ± 4% -9.4% 43286 softirqs.CPU61.SCHED
53486 -18.0% 43857 ± 2% softirqs.CPU62.RCU
52999 ± 2% -17.4% 43754 ± 2% softirqs.CPU63.RCU
54499 -20.6% 43253 ± 2% softirqs.CPU64.RCU
54588 -20.4% 43467 ± 2% softirqs.CPU65.RCU
49545 ± 5% -12.7% 43235 softirqs.CPU65.SCHED
53640 -20.2% 42818 ± 2% softirqs.CPU66.RCU
49043 ± 4% -12.7% 42837 softirqs.CPU66.SCHED
53726 -19.7% 43145 ± 2% softirqs.CPU67.RCU
49221 ± 2% -11.4% 43597 softirqs.CPU67.SCHED
54383 -20.6% 43183 ± 2% softirqs.CPU68.RCU
48491 ± 3% -10.8% 43235 softirqs.CPU68.SCHED
54911 ± 2% -20.9% 43437 ± 2% softirqs.CPU69.RCU
50098 -15.7% 42213 ± 4% softirqs.CPU7.RCU
54416 ± 2% -20.1% 43457 ± 2% softirqs.CPU70.RCU
49392 -12.3% 43335 softirqs.CPU70.SCHED
54250 -19.4% 43731 softirqs.CPU71.RCU
50377 ± 5% -14.8% 42929 ± 2% softirqs.CPU71.SCHED
50431 ± 2% -18.7% 41021 softirqs.CPU72.RCU
50868 -14.4% 43535 softirqs.CPU72.SCHED
50277 ± 2% -18.8% 40805 softirqs.CPU73.RCU
50296 -15.1% 42711 ± 3% softirqs.CPU73.SCHED
50580 ± 2% -19.0% 40984 softirqs.CPU74.RCU
49604 ± 3% -12.5% 43402 softirqs.CPU74.SCHED
50779 ± 2% -19.5% 40873 softirqs.CPU75.RCU
49472 ± 4% -11.8% 43644 softirqs.CPU75.SCHED
51151 ± 2% -18.4% 41760 softirqs.CPU76.RCU
48842 ± 3% -11.3% 43335 softirqs.CPU76.SCHED
51170 ± 2% -19.4% 41239 softirqs.CPU77.RCU
49058 -11.1% 43624 softirqs.CPU77.SCHED
50713 ± 2% -18.5% 41326 softirqs.CPU78.RCU
48848 ± 3% -10.7% 43625 softirqs.CPU78.SCHED
50672 ± 2% -19.7% 40672 softirqs.CPU79.RCU
49242 ± 2% -11.6% 43507 softirqs.CPU79.SCHED
50770 ± 2% -15.5% 42906 ± 4% softirqs.CPU8.RCU
49825 ± 2% -13.2% 43235 softirqs.CPU80.RCU
48735 -12.9% 42446 ± 4% softirqs.CPU80.SCHED
49888 ± 2% -11.9% 43973 softirqs.CPU81.RCU
47723 ± 3% -8.3% 43763 softirqs.CPU81.SCHED
50062 ± 2% -12.6% 43775 softirqs.CPU82.RCU
47795 -9.0% 43484 softirqs.CPU82.SCHED
50138 ± 2% -11.7% 44283 softirqs.CPU83.RCU
48295 ± 2% -10.3% 43314 softirqs.CPU83.SCHED
50045 ± 3% -13.0% 43534 softirqs.CPU84.RCU
50200 ± 2% -12.7% 43832 softirqs.CPU85.RCU
50086 -14.0% 43091 softirqs.CPU86.RCU
48117 ± 3% -10.0% 43301 softirqs.CPU86.SCHED
50010 ± 3% -14.4% 42805 softirqs.CPU87.RCU
50362 ± 2% -12.8% 43917 softirqs.CPU88.RCU
50285 ± 2% -12.2% 44175 softirqs.CPU89.RCU
49130 ± 3% -11.2% 43616 softirqs.CPU89.SCHED
50389 ± 2% -15.6% 42520 ± 6% softirqs.CPU9.RCU
47130 -8.8% 42972 softirqs.CPU9.SCHED
50279 ± 2% -13.3% 43579 softirqs.CPU90.RCU
47438 ± 2% -8.5% 43415 softirqs.CPU90.SCHED
49558 ± 2% -12.8% 43198 softirqs.CPU91.RCU
48564 -10.7% 43380 softirqs.CPU91.SCHED
50144 ± 2% -13.2% 43533 softirqs.CPU92.RCU
49389 -12.2% 43340 softirqs.CPU92.SCHED
49523 ± 2% -12.2% 43484 ± 2% softirqs.CPU93.RCU
49902 ± 2% -13.3% 43253 softirqs.CPU93.SCHED
49869 ± 3% -12.2% 43804 softirqs.CPU94.RCU
49367 ± 2% -13.2% 42826 ± 3% softirqs.CPU94.SCHED
50024 ± 3% -10.8% 44613 softirqs.CPU95.RCU
49912 -17.4% 41213 ± 4% softirqs.CPU96.RCU
49498 ± 3% -13.3% 42909 ± 4% softirqs.CPU96.SCHED
49010 -15.8% 41244 ± 3% softirqs.CPU97.RCU
51132 ± 2% -14.4% 43748 softirqs.CPU97.SCHED
49481 -15.8% 41639 ± 4% softirqs.CPU98.RCU
50596 ± 3% -13.1% 43952 softirqs.CPU98.SCHED
49746 -19.3% 40146 ± 9% softirqs.CPU99.RCU
49702 ± 4% -11.7% 43889 softirqs.CPU99.SCHED
5399 ± 4% +244.4% 18594 ± 60% softirqs.NET_RX
9808652 -18.8% 7967906 softirqs.RCU
9605355 -13.1% 8348507 softirqs.SCHED
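
For reference, the comparison columns in these tables appear to follow the usual lkp conventions: for counter-style metrics (softirqs.*, interrupts.*) the middle column is the relative change between the means of the two commits, while for perf-profile.* lines it is the absolute delta in percentage points; the "± N%" figures look like the relative standard deviation across runs. A minimal sketch of that arithmetic, assuming those conventions hold (the helper names below are illustrative only, not part of the lkp tooling), is:

    # Minimal sketch of how the comparison columns can be derived,
    # assuming the usual lkp conventions: relative %change for counter
    # metrics, absolute percentage-point delta for perf-profile lines.
    # Helper names are illustrative, not lkp tool APIs.
    from statistics import mean, stdev

    def rel_stddev_pct(samples):
        # "± N%" column: standard deviation as a percentage of the mean
        return 100.0 * stdev(samples) / mean(samples)

    def pct_change(base_samples, new_samples):
        # counter metrics, e.g. softirqs.CPU24.SCHED: 54247 -> 43120 gives -20.5%
        return 100.0 * (mean(new_samples) - mean(base_samples)) / mean(base_samples)

    def pp_delta(base_samples, new_samples):
        # perf-profile values are already percentages of sampled cycles, so the
        # middle column is an absolute delta, e.g. 33.81 -> 41.80 gives +8.0
        return mean(new_samples) - mean(base_samples)
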
54.99 ± 5% -52.8 2.22 ±121% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
56.26 ± 5% -52.7 3.54 ±104% perf-profile.calltrace.cycles-pp.ret_from_fork
56.26 ± 5% -52.7 3.54 ±104% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
54.80 ± 5% -52.7 2.14 ±121% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
53.38 ± 4% -51.6 1.77 ±127% perf-profile.calltrace.cycles-pp.btrfs_work_helper.process_one_work.worker_thread.kthread.ret_from_fork
40.80 ± 5% -39.6 1.18 ±112% perf-profile.calltrace.cycles-pp.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work.worker_thread.kthread
37.79 ± 7% -37.2 0.59 ±113% perf-profile.calltrace.cycles-pp.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work.worker_thread
36.41 ± 7% -36.2 0.25 ±173% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work
18.72 ± 9% -18.7 0.00 perf-profile.calltrace.cycles-pp.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper
18.70 ± 9% -18.7 0.00 perf-profile.calltrace.cycles-pp.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
12.99 ± 7% -13.0 0.00 perf-profile.calltrace.cycles-pp.__btrfs_read_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper
12.89 ± 7% -12.9 0.00 perf-profile.calltrace.cycles-pp.__btrfs_tree_read_lock.__btrfs_read_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
12.55 ± 9% -12.1 0.40 ±173% perf-profile.calltrace.cycles-pp.writepage_delalloc.__extent_writepage.extent_write_cache_pages.extent_writepages.do_writepages
12.34 ± 9% -12.0 0.34 ±173% perf-profile.calltrace.cycles-pp.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage.extent_write_cache_pages.extent_writepages
12.34 ± 9% -12.0 0.34 ±173% perf-profile.calltrace.cycles-pp.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage.extent_write_cache_pages
11.94 ± 8% -11.5 0.47 ±173% perf-profile.calltrace.cycles-pp.extent_write_cache_pages.extent_writepages.do_writepages.__filemap_fdatawrite_range.btrfs_run_delalloc_work
11.94 ± 8% -11.5 0.48 ±173% perf-profile.calltrace.cycles-pp.do_writepages.__filemap_fdatawrite_range.btrfs_run_delalloc_work.btrfs_work_helper.process_one_work
11.94 ± 8% -11.5 0.48 ±173% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.btrfs_run_delalloc_work.btrfs_work_helper.process_one_work.worker_thread
11.94 ± 8% -11.5 0.48 ±173% perf-profile.calltrace.cycles-pp.extent_writepages.do_writepages.__filemap_fdatawrite_range.btrfs_run_delalloc_work.btrfs_work_helper
11.94 ± 8% -11.5 0.48 ±173% perf-profile.calltrace.cycles-pp.btrfs_run_delalloc_work.btrfs_work_helper.process_one_work.worker_thread.kthread
11.86 ± 8% -11.4 0.45 ±173% perf-profile.calltrace.cycles-pp.__extent_writepage.extent_write_cache_pages.extent_writepages.do_writepages.__filemap_fdatawrite_range
10.63 ± 7% -10.6 0.00 perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_file_extent.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc
10.63 ± 7% -10.6 0.00 perf-profile.calltrace.cycles-pp.btrfs_lookup_file_extent.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage
10.05 ± 7% -10.1 0.00 perf-profile.calltrace.cycles-pp.__btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_file_extent.run_delalloc_nocow.btrfs_run_delalloc_range
9.95 ± 7% -10.0 0.00 perf-profile.calltrace.cycles-pp.__btrfs_tree_read_lock.__btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_file_extent.run_delalloc_nocow
8.43 ± 7% -8.4 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait_event.__btrfs_tree_read_lock.__btrfs_read_lock_root_node.btrfs_search_slot
8.23 ± 6% -8.2 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait_event.__btrfs_tree_read_lock.__btrfs_read_lock_root_node
6.75 ± 5% -6.8 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.finish_wait.__btrfs_tree_read_lock.__btrfs_read_lock_root_node.btrfs_search_slot
6.72 ± 5% -6.7 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.finish_wait.__btrfs_tree_read_lock.__btrfs_read_lock_root_node
6.63 ± 17% -6.6 0.00 perf-profile.calltrace.cycles-pp.queued_write_lock_slowpath.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written
6.47 ± 9% -6.5 0.00 perf-profile.calltrace.cycles-pp.prepare_to_wait_event.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written
6.24 ± 9% -6.2 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait_event.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot
6.14 ± 9% -6.1 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait_event.__btrfs_tree_lock.btrfs_lock_root_node
6.04 ± 19% -6.0 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_write_lock_slowpath.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot
5.66 ± 17% -5.7 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_read_lock_slowpath.__btrfs_tree_read_lock.__btrfs_read_lock_root_node.btrfs_search_slot
5.07 ± 9% -5.1 0.00 perf-profile.calltrace.cycles-pp.finish_wait.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written
5.01 ± 5% -5.0 0.00 perf-profile.calltrace.cycles-pp.prepare_to_wait_event.__btrfs_tree_read_lock.__btrfs_read_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written
5.00 ± 9% -5.0 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.finish_wait.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot
4.96 ± 9% -5.0 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.finish_wait.__btrfs_tree_lock.btrfs_lock_root_node
4.96 ± 30% -4.5 0.43 ±173% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.95 ± 30% -4.5 0.43 ±173% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.93 ± 31% -4.5 0.42 ±173% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.92 ± 31% -4.5 0.42 ±173% perf-profile.calltrace.cycles-pp.btrfs_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
4.89 ± 31% -4.5 0.42 ±173% perf-profile.calltrace.cycles-pp.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write.ksys_write
4.99 ± 30% -4.4 0.61 ±121% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
4.98 ± 30% -4.4 0.61 ±121% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.59 ± 7% perf-profile.calltrace.cycles-pp.load_balance.rebalance_domains.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack
0.00 +0.6 0.64 ± 5% perf-profile.calltrace.cycles-pp.rcu_idle_exit.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.00 +0.7 0.68 ± 8% perf-profile.calltrace.cycles-pp.run_local_timers.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.00 +0.7 0.72 ± 13% perf-profile.calltrace.cycles-pp.calc_global_load_tick.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
0.00 +1.0 0.96 ± 11% perf-profile.calltrace.cycles-pp.lapic_next_deadline.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack
0.00 +1.1 1.14 ± 8% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt
0.00 +1.2 1.23 ± 7% perf-profile.calltrace.cycles-pp.ktime_get.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +1.4 1.38 ± 4% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu
0.00 +1.4 1.44 ± 10% perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack
0.00 +1.4 1.45 ± 6% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.00 +1.7 1.71 ± 7% perf-profile.calltrace.cycles-pp.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +1.8 1.75 ± 8% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.00 +2.1 2.12 ± 6% perf-profile.calltrace.cycles-pp.timekeeping_max_deferment.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
0.00 +2.2 2.16 ± 11% perf-profile.calltrace.cycles-pp.ktime_get.tick_nohz_irq_exit.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.00 +2.2 2.21 ± 10% perf-profile.calltrace.cycles-pp.tick_nohz_irq_exit.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +2.3 2.35 ± 4% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt
0.00 +2.4 2.35 ± 4% perf-profile.calltrace.cycles-pp.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +2.4 2.36 ± 4% perf-profile.calltrace.cycles-pp.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.00 +2.5 2.53 ± 7% perf-profile.calltrace.cycles-pp.ktime_get.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack
0.00 +2.8 2.77 ± 5% perf-profile.calltrace.cycles-pp.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +2.9 2.88 ± 6% perf-profile.calltrace.cycles-pp.wait_consider_task.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.00 +3.1 3.07 ± 5% perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.2 3.16 ± 5% perf-profile.calltrace.cycles-pp.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.__waitpid
0.00 +3.2 3.16 ± 5% perf-profile.calltrace.cycles-pp.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.__waitpid
0.00 +3.2 3.17 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__waitpid
0.00 +3.2 3.18 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__waitpid
0.00 +3.2 3.20 ± 5% perf-profile.calltrace.cycles-pp.__waitpid
0.00 +3.4 3.45 ± 7% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +3.5 3.48 ± 8% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
0.00 +3.6 3.60 ± 6% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
0.00 +3.6 3.61 ± 8% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt
0.00 +4.1 4.15 ± 4% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.00 +4.5 4.49 ± 9% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack
0.00 +6.9 6.87 ± 8% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt
33.81 ± 8% +8.0 41.80 ± 3% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.75 ± 14% +11.6 12.39 ± 8% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.76 ± 14% +11.8 12.58 ± 8% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.76 ± 14% +11.9 12.62 ± 8% perf-profile.calltrace.cycles-pp.asm_call_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.43 ± 57% +14.8 15.22 ± 10% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.25 ± 11% +18.4 19.60 ± 7% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
1.34 ± 12% +29.4 30.75 ± 8% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
35.79 ± 7% +34.2 70.00 ± 3% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
35.89 ± 7% +38.4 74.25 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
37.73 ± 6% +53.1 90.83 ± 4% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
37.74 ± 6% +53.1 90.87 ± 4% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
37.74 ± 6% +53.1 90.87 ± 4% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
37.93 ± 6% +53.4 91.36 ± 4% perf-profile.calltrace.cycles-pp.secondary_startup_64
54.99 ± 5% -52.7 2.33 ±111% perf-profile.children.cycles-pp.worker_thread
56.26 ± 5% -52.6 3.63 ± 99% perf-profile.children.cycles-pp.ret_from_fork
56.26 ± 5% -52.6 3.63 ± 99% perf-profile.children.cycles-pp.kthread
54.80 ± 5% -52.5 2.25 ±111% perf-profile.children.cycles-pp.process_one_work
53.38 ± 4% -51.6 1.77 ±127% perf-profile.children.cycles-pp.btrfs_work_helper
48.57 ± 6% -47.8 0.77 ±124% perf-profile.children.cycles-pp.btrfs_search_slot
45.93 ± 2% -45.6 0.34 ±132% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
40.80 ± 5% -39.6 1.18 ±112% perf-profile.children.cycles-pp.btrfs_finish_ordered_io
37.79 ± 7% -37.2 0.59 ±113% perf-profile.children.cycles-pp.btrfs_mark_extent_written
30.86 ± 6% -30.2 0.67 ± 9% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
23.16 ± 4% -23.0 0.17 ±126% perf-profile.children.cycles-pp.__btrfs_read_lock_root_node
22.88 ± 5% -22.7 0.19 ±127% perf-profile.children.cycles-pp.__btrfs_tree_read_lock
19.41 ± 8% -19.3 0.09 ±133% perf-profile.children.cycles-pp.btrfs_lock_root_node
19.41 ± 8% -19.2 0.24 ±146% perf-profile.children.cycles-pp.__btrfs_tree_lock
15.64 ± 7% -15.6 0.00 perf-profile.children.cycles-pp.prepare_to_wait_event
13.15 ± 10% -12.7 0.48 ±173% perf-profile.children.cycles-pp.extent_write_cache_pages
13.15 ± 10% -12.7 0.49 ±173% perf-profile.children.cycles-pp.extent_writepages
13.06 ± 10% -12.6 0.46 ±173% perf-profile.children.cycles-pp.__extent_writepage
12.45 ± 7% -12.4 0.00 perf-profile.children.cycles-pp.finish_wait
12.55 ± 9% -12.1 0.41 ±173% perf-profile.children.cycles-pp.writepage_delalloc
13.67 ± 9% -12.1 1.59 ±116% perf-profile.children.cycles-pp.do_writepages
12.34 ± 9% -12.0 0.35 ±173% perf-profile.children.cycles-pp.btrfs_run_delalloc_range
12.33 ± 9% -12.0 0.35 ±173% perf-profile.children.cycles-pp.run_delalloc_nocow
11.94 ± 8% -11.5 0.48 ±173% perf-profile.children.cycles-pp.btrfs_run_delalloc_work
12.45 ± 7% -10.9 1.53 ±117% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
10.63 ± 7% -10.6 0.07 ±173% perf-profile.children.cycles-pp.btrfs_lookup_file_extent
7.19 ± 16% -7.2 0.00 perf-profile.children.cycles-pp.queued_write_lock_slowpath
6.34 ± 15% -6.3 0.00 perf-profile.children.cycles-pp.queued_read_lock_slowpath
4.97 ± 30% -4.4 0.55 ±127% perf-profile.children.cycles-pp.ksys_write
4.97 ± 30% -4.4 0.55 ±127% perf-profile.children.cycles-pp.vfs_write
4.94 ± 30% -4.4 0.54 ±127% perf-profile.children.cycles-pp.new_sync_write
4.92 ± 31% -4.4 0.53 ±131% perf-profile.children.cycles-pp.btrfs_file_write_iter
4.89 ± 31% -4.4 0.52 ±130% perf-profile.children.cycles-pp.btrfs_buffered_write
4.66 ± 39% -3.8 0.86 ± 61% perf-profile.children.cycles-pp._raw_spin_lock
3.88 ± 7% -3.8 0.10 ±131% perf-profile.children.cycles-pp.__wake_up_common_lock
2.49 ± 45% -2.3 0.22 ±134% perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
2.46 ± 46% -2.2 0.21 ±135% perf-profile.children.cycles-pp.__reserve_bytes
2.42 ± 46% -2.2 0.21 ±135% perf-profile.children.cycles-pp.btrfs_reserve_metadata_bytes
1.95 ± 36% -1.8 0.18 ±138% perf-profile.children.cycles-pp.btrfs_block_rsv_release
1.93 ± 35% -1.8 0.17 ±135% perf-profile.children.cycles-pp.btrfs_inode_rsv_release
1.22 ± 37% -1.2 0.06 ±100% perf-profile.children.cycles-pp.wb_workfn
1.22 ± 37% -1.2 0.06 ±100% perf-profile.children.cycles-pp.wb_writeback
1.22 ± 37% -1.2 0.06 ±100% perf-profile.children.cycles-pp.__writeback_inodes_wb
1.22 ± 37% -1.2 0.06 ±100% perf-profile.children.cycles-pp.writeback_sb_inodes
1.22 ± 37% -1.2 0.06 ±100% perf-profile.children.cycles-pp.__writeback_single_inode
1.27 ± 13% -1.1 0.20 ± 95% perf-profile.children.cycles-pp.try_to_wake_up
1.16 ± 11% -1.1 0.09 ±130% perf-profile.children.cycles-pp.__wake_up_common
1.01 ± 10% -0.9 0.08 ±128% perf-profile.children.cycles-pp.autoremove_wake_function
1.08 ± 13% -0.9 0.17 ± 98% perf-profile.children.cycles-pp.ttwu_do_activate
1.08 ± 13% -0.9 0.17 ± 95% perf-profile.children.cycles-pp.enqueue_task_fair
1.03 ± 13% -0.9 0.17 ± 95% perf-profile.children.cycles-pp.enqueue_entity
0.99 ± 8% -0.8 0.16 ±135% perf-profile.children.cycles-pp.__lookup_extent_mapping
1.02 ± 14% -0.8 0.19 ±108% perf-profile.children.cycles-pp.btrfs_duplicate_item
0.90 ± 36% -0.8 0.10 ±126% perf-profile.children.cycles-pp.btrfs_remove_ordered_extent
0.87 ± 14% -0.8 0.10 ±134% perf-profile.children.cycles-pp.__account_scheduler_latency
1.10 ± 14% -0.8 0.34 ±111% perf-profile.children.cycles-pp.add_pending_csums
1.09 ± 14% -0.8 0.34 ±111% perf-profile.children.cycles-pp.btrfs_csum_file_blocks
0.83 ± 14% -0.7 0.12 ±108% perf-profile.children.cycles-pp.setup_leaf_for_split
0.76 ± 8% -0.6 0.13 ±136% perf-profile.children.cycles-pp.__etree_search
0.70 ± 11% -0.6 0.07 ±136% perf-profile.children.cycles-pp.sched_ttwu_pending
0.69 ± 14% -0.6 0.08 ±138% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.64 ± 14% -0.6 0.07 ±135% perf-profile.children.cycles-pp.arch_stack_walk
0.64 ± 13% -0.5 0.10 ±117% perf-profile.children.cycles-pp.__set_extent_bit
0.61 ± 12% -0.5 0.09 ±116% perf-profile.children.cycles-pp.lock_extent_bits
0.50 ± 15% -0.5 0.05 ±120% perf-profile.children.cycles-pp.__clear_extent_bit
0.54 ± 15% -0.4 0.12 ±132% perf-profile.children.cycles-pp.read_block_for_search
0.49 ± 13% -0.4 0.09 ±142% perf-profile.children.cycles-pp.btrfs_try_granting_tickets
0.55 ± 19% -0.4 0.16 ±107% perf-profile.children.cycles-pp.asm_common_interrupt
0.54 ± 20% -0.4 0.16 ±108% perf-profile.children.cycles-pp.common_interrupt
0.53 ± 19% -0.4 0.16 ±108% perf-profile.children.cycles-pp.handle_edge_irq
0.53 ± 20% -0.4 0.16 ±108% perf-profile.children.cycles-pp.handle_irq_event
0.69 ± 12% -0.4 0.32 ± 49% perf-profile.children.cycles-pp.__sched_text_start
0.44 ± 9% -0.4 0.07 ±122% perf-profile.children.cycles-pp.btrfs_dirty_pages
0.51 ± 20% -0.4 0.16 ±108% perf-profile.children.cycles-pp.handle_irq_event_percpu
0.51 ± 20% -0.3 0.16 ±108% perf-profile.children.cycles-pp.__handle_irq_event_percpu
0.50 ± 20% -0.3 0.16 ±108% perf-profile.children.cycles-pp.nvme_irq
0.45 ± 21% -0.3 0.15 ±109% perf-profile.children.cycles-pp.blk_mq_end_request
0.37 ± 21% -0.3 0.07 ±133% perf-profile.children.cycles-pp.queue_work_on
0.44 ± 21% -0.3 0.14 ±108% perf-profile.children.cycles-pp.blk_update_request
0.38 ± 13% -0.3 0.08 ±104% perf-profile.children.cycles-pp.btrfs_get_token_32
0.43 ± 21% -0.3 0.14 ±108% perf-profile.children.cycles-pp.btrfs_end_bio
0.36 ± 13% -0.3 0.07 ±137% perf-profile.children.cycles-pp.lock_and_cleanup_extent_if_need
0.34 ± 21% -0.3 0.06 ±131% perf-profile.children.cycles-pp.__queue_work
0.36 ± 21% -0.3 0.08 ±110% perf-profile.children.cycles-pp.end_bio_extent_writepage
0.40 ± 15% -0.2 0.15 ±106% perf-profile.children.cycles-pp.setup_items_for_insert
0.34 ± 16% -0.2 0.09 ±128% perf-profile.children.cycles-pp.find_extent_buffer
0.30 ± 8% -0.2 0.05 ±124% perf-profile.children.cycles-pp.btrfs_get_extent
0.29 ± 41% -0.2 0.05 ±103% perf-profile.children.cycles-pp.pagecache_get_page
0.30 ± 11% -0.2 0.08 ±139% perf-profile.children.cycles-pp.generic_bin_search
0.27 ± 10% -0.2 0.05 ±122% perf-profile.children.cycles-pp.unpin_extent_cache
0.26 ± 20% -0.2 0.05 ±120% perf-profile.children.cycles-pp.btrfs_map_bio
0.26 ± 14% -0.2 0.05 ±114% perf-profile.children.cycles-pp.schedule_idle
0.25 ± 22% -0.2 0.06 ±114% perf-profile.children.cycles-pp.end_extent_writepage
0.25 ± 22% -0.2 0.06 ±114% perf-profile.children.cycles-pp.btrfs_writepage_endio_finish_ordered
0.27 ± 39% -0.2 0.09 ±108% perf-profile.children.cycles-pp.btrfs_cow_block
0.23 ± 11% -0.2 0.04 ±104% perf-profile.children.cycles-pp.kmem_cache_alloc
0.27 ± 40% -0.2 0.09 ±108% perf-profile.children.cycles-pp.__btrfs_cow_block
0.21 ± 16% -0.2 0.04 ±100% perf-profile.children.cycles-pp.kmem_cache_free
0.20 ± 15% -0.1 0.06 ±101% perf-profile.children.cycles-pp.btrfs_set_token_32
0.15 ± 76% -0.1 0.03 ±100% perf-profile.children.cycles-pp.add_to_page_cache_lru
0.17 ± 13% -0.1 0.05 ±100% perf-profile.children.cycles-pp.select_task_rq_fair
0.15 ± 36% -0.1 0.04 ±113% perf-profile.children.cycles-pp.poll_idle
0.14 ± 13% -0.1 0.04 ±107% perf-profile.children.cycles-pp.xas_load
0.14 ± 18% -0.1 0.04 ±110% perf-profile.children.cycles-pp.submit_bio
0.14 ± 18% -0.1 0.04 ±110% perf-profile.children.cycles-pp.submit_bio_noacct
0.11 ± 19% -0.1 0.03 ±102% perf-profile.children.cycles-pp.__push_leaf_right
0.12 ± 9% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.rb_erase
0.06 ± 11% +0.1 0.11 ± 11% perf-profile.children.cycles-pp.update_ts_time_stats
0.06 ± 11% +0.1 0.11 ± 14% perf-profile.children.cycles-pp.nr_iowait_cpu
0.00 +0.1 0.06 ± 15% perf-profile.children.cycles-pp.update_rt_rq_load_avg
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.perf_event_task_tick
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.run_posix_cpu_timers
0.00 +0.1 0.06 ± 17% perf-profile.children.cycles-pp.path_openat
0.00 +0.1 0.06 ± 17% perf-profile.children.cycles-pp.do_fault
0.00 +0.1 0.07 ± 13% perf-profile.children.cycles-pp.do_filp_open
0.00 +0.1 0.07 ± 13% perf-profile.children.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.00 +0.1 0.07 ± 21% perf-profile.children.cycles-pp.do_sys_open
0.00 +0.1 0.07 ± 21% perf-profile.children.cycles-pp.do_sys_openat2
0.00 +0.1 0.07 ± 16% perf-profile.children.cycles-pp.native_apic_mem_write
0.00 +0.1 0.07 ± 34% perf-profile.children.cycles-pp.exec_binprm
0.00 +0.1 0.07 ± 34% perf-profile.children.cycles-pp.load_elf_binary
0.00 +0.1 0.08 ± 32% perf-profile.children.cycles-pp.rcu_gp_kthread
0.00 +0.1 0.08 ± 23% perf-profile.children.cycles-pp.menu_reflect
0.00 +0.1 0.08 ± 24% perf-profile.children.cycles-pp.delay_tsc
0.00 +0.1 0.08 ± 35% perf-profile.children.cycles-pp.irqtime_account_process_tick
0.00 +0.1 0.09 ± 17% perf-profile.children.cycles-pp.irqentry_exit
0.00 +0.1 0.10 ± 11% perf-profile.children.cycles-pp.__hrtimer_get_next_event
0.00 +0.1 0.10 ± 29% perf-profile.children.cycles-pp.bprm_execve
0.00 +0.1 0.10 ± 12% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.00 +0.1 0.10 ± 14% perf-profile.children.cycles-pp.irq_work_tick
0.00 +0.1 0.10 ± 17% perf-profile.children.cycles-pp.handle_mm_fault
0.00 +0.1 0.10 ± 17% perf-profile.children.cycles-pp.__handle_mm_fault
0.08 ± 10% +0.1 0.19 ± 32% perf-profile.children.cycles-pp.newidle_balance
0.00 +0.1 0.11 ± 36% perf-profile.children.cycles-pp.cmd_stat
0.00 +0.1 0.11 ± 36% perf-profile.children.cycles-pp.__run_perf_stat
0.00 +0.1 0.11 ± 36% perf-profile.children.cycles-pp.dispatch_events
0.00 +0.1 0.11 ± 36% perf-profile.children.cycles-pp.process_interval
0.00 +0.1 0.11 ± 36% perf-profile.children.cycles-pp.read_counters
0.00 +0.1 0.11 ± 35% perf-profile.children.cycles-pp.__libc_start_main
0.00 +0.1 0.11 ± 35% perf-profile.children.cycles-pp.main
0.00 +0.1 0.11 ± 35% perf-profile.children.cycles-pp.run_builtin
0.00 +0.1 0.11 ± 23% perf-profile.children.cycles-pp.execve
0.00 +0.1 0.11 ± 23% perf-profile.children.cycles-pp.__x64_sys_execve
0.00 +0.1 0.11 ± 23% perf-profile.children.cycles-pp.do_execveat_common
0.00 +0.1 0.12 ± 21% perf-profile.children.cycles-pp.rcu_dynticks_eqs_enter
0.00 +0.1 0.12 ± 15% perf-profile.children.cycles-pp.cpumask_next_and
0.00 +0.1 0.12 ± 14% perf-profile.children.cycles-pp._find_next_bit
0.00 +0.1 0.12 ± 26% perf-profile.children.cycles-pp.trigger_load_balance
0.00 +0.1 0.12 ± 12% perf-profile.children.cycles-pp.timerqueue_add
0.00 +0.1 0.13 ± 17% perf-profile.children.cycles-pp.rcu_dynticks_eqs_exit
0.00 +0.1 0.14 ± 33% perf-profile.children.cycles-pp.balance_fair
0.00 +0.1 0.14 ± 15% perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
0.01 ±173% +0.1 0.16 ± 17% perf-profile.children.cycles-pp.memcpy_erms
0.00 +0.2 0.15 ± 16% perf-profile.children.cycles-pp.seq_read
0.00 +0.2 0.16 ± 48% perf-profile.children.cycles-pp.exc_page_fault
0.00 +0.2 0.16 ± 48% perf-profile.children.cycles-pp.do_user_addr_fault
0.00 +0.2 0.16 ± 30% perf-profile.children.cycles-pp.smpboot_thread_fn
0.00 +0.2 0.16 ± 49% perf-profile.children.cycles-pp.asm_exc_page_fault
0.00 +0.2 0.17 ± 6% perf-profile.children.cycles-pp.enqueue_hrtimer
0.00 +0.2 0.17 ± 16% perf-profile.children.cycles-pp.perf_pmu_disable
0.00 +0.2 0.17 ± 14% perf-profile.children.cycles-pp.idle_cpu
0.00 +0.2 0.18 ± 15% perf-profile.children.cycles-pp.get_cpu_device
0.00 +0.2 0.18 ± 14% perf-profile.children.cycles-pp.call_cpuidle
0.00 +0.2 0.19 ± 11% perf-profile.children.cycles-pp.rcu_eqs_enter
0.00 +0.2 0.19 ± 9% perf-profile.children.cycles-pp.read
0.00 +0.2 0.22 ± 12% perf-profile.children.cycles-pp.vfs_read
0.00 +0.2 0.22 ± 15% perf-profile.children.cycles-pp.ksys_read
0.06 ± 11% +0.2 0.31 ± 13% perf-profile.children.cycles-pp.update_blocked_averages
0.00 +0.2 0.25 ± 16% perf-profile.children.cycles-pp.note_gp_changes
0.00 +0.3 0.26 ± 5% perf-profile.children.cycles-pp.update_irq_load_avg
0.28 ± 13% +0.3 0.55 ± 10% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.3 0.28 ± 16% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.06 ± 11% +0.3 0.34 ± 11% perf-profile.children.cycles-pp.run_rebalance_domains
0.00 +0.3 0.28 ± 14% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.00 +0.3 0.29 ± 20% perf-profile.children.cycles-pp.io_serial_in
0.00 +0.3 0.29 ± 11% perf-profile.children.cycles-pp.asm_sysvec_irq_work
0.00 +0.3 0.29 ± 11% perf-profile.children.cycles-pp.sysvec_irq_work
0.00 +0.3 0.29 ± 11% perf-profile.children.cycles-pp.__sysvec_irq_work
0.00 +0.3 0.29 ± 11% perf-profile.children.cycles-pp.irq_work_run
0.00 +0.3 0.29 ± 11% perf-profile.children.cycles-pp.irq_work_single
0.00 +0.3 0.29 ± 11% perf-profile.children.cycles-pp.printk
0.00 +0.3 0.30 ± 9% perf-profile.children.cycles-pp.rcu_eqs_exit
0.19 ± 42% +0.3 0.49 ± 7% perf-profile.children.cycles-pp.start_kernel
0.00 +0.3 0.30 ± 13% perf-profile.children.cycles-pp.rcu_core
0.12 ± 13% +0.3 0.43 ± 5% perf-profile.children.cycles-pp.update_rq_clock
0.00 +0.3 0.31 ± 15% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.00 +0.3 0.32 ± 10% perf-profile.children.cycles-pp.irq_work_run_list
0.00 +0.3 0.32 ± 7% perf-profile.children.cycles-pp.timerqueue_del
0.00 +0.4 0.36 ± 10% perf-profile.children.cycles-pp.arch_scale_freq_tick
0.00 +0.4 0.36 ± 20% perf-profile.children.cycles-pp.serial8250_console_putchar
0.00 +0.4 0.36 ± 20% perf-profile.children.cycles-pp.uart_console_write
0.00 +0.4 0.37 ± 21% perf-profile.children.cycles-pp.wait_for_xmitr
0.00 +0.4 0.38 ± 20% perf-profile.children.cycles-pp.serial8250_console_write
0.00 +0.4 0.38 ± 8% perf-profile.children.cycles-pp.__remove_hrtimer
0.00 +0.4 0.40 ± 22% perf-profile.children.cycles-pp.vprintk_emit
0.00 +0.4 0.40 ± 22% perf-profile.children.cycles-pp.console_unlock
0.01 ±173% +0.4 0.42 ± 29% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.00 +0.4 0.41 ± 7% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.01 ±173% +0.4 0.42 ± 30% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.00 +0.5 0.45 ± 13% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.08 ± 8% +0.5 0.59 ± 7% perf-profile.children.cycles-pp.update_sd_lb_stats
0.00 +0.5 0.52 ± 18% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.08 ± 10% +0.5 0.61 ± 9% perf-profile.children.cycles-pp._raw_spin_trylock
0.08 ± 13% +0.5 0.61 ± 6% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.5 0.54 ± 6% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.00 +0.6 0.56 ± 7% perf-profile.children.cycles-pp.irqtime_account_irq
0.08 ± 33% +0.6 0.68 ± 9% perf-profile.children.cycles-pp.run_local_timers
0.03 ±100% +0.6 0.65 ± 5% perf-profile.children.cycles-pp.rcu_idle_exit
0.07 ± 13% +0.7 0.72 ± 12% perf-profile.children.cycles-pp.calc_global_load_tick
0.07 ± 17% +0.7 0.73 ± 11% perf-profile.children.cycles-pp.native_sched_clock
0.10 ± 11% +0.7 0.78 ± 5% perf-profile.children.cycles-pp.load_balance
0.07 ± 17% +0.7 0.78 ± 10% perf-profile.children.cycles-pp.sched_clock
0.07 ± 10% +0.8 0.82 ± 7% perf-profile.children.cycles-pp.read_tsc
0.08 ± 14% +0.8 0.86 ± 10% perf-profile.children.cycles-pp.sched_clock_cpu
0.04 ± 57% +1.0 1.00 ± 11% perf-profile.children.cycles-pp.lapic_next_deadline
0.09 ± 9% +1.1 1.16 ± 8% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.07 ± 10% +1.2 1.23 ± 5% perf-profile.children.cycles-pp.native_irq_return_iret
0.12 ± 11% +1.3 1.39 ± 4% perf-profile.children.cycles-pp.rebalance_domains
0.12 ± 8% +1.3 1.47 ± 7% perf-profile.children.cycles-pp.tick_irq_enter
0.06 ± 6% +1.4 1.45 ± 10% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.14 ± 6% +1.6 1.73 ± 6% perf-profile.children.cycles-pp.irq_enter_rcu
0.15 ± 9% +1.6 1.79 ± 8% perf-profile.children.cycles-pp.scheduler_tick
0.18 ± 8% +1.9 2.12 ± 6% perf-profile.children.cycles-pp.timekeeping_max_deferment
0.13 ± 11% +2.1 2.23 ± 10% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.26 ± 7% +2.1 2.38 ± 4% perf-profile.children.cycles-pp.__softirqentry_text_start
0.26 ± 7% +2.1 2.39 ± 4% perf-profile.children.cycles-pp.do_softirq_own_stack
0.29 ± 7% +2.5 2.82 ± 5% perf-profile.children.cycles-pp.irq_exit_rcu
0.28 ± 18% +2.6 2.88 ± 6% perf-profile.children.cycles-pp.wait_consider_task
0.32 ± 18% +2.8 3.10 ± 5% perf-profile.children.cycles-pp.do_wait
0.33 ± 18% +2.8 3.16 ± 5% perf-profile.children.cycles-pp.kernel_wait4
0.33 ± 18% +2.8 3.17 ± 5% perf-profile.children.cycles-pp.__do_sys_wait4
0.33 ± 17% +2.9 3.21 ± 5% perf-profile.children.cycles-pp.__waitpid
0.30 ± 15% +3.2 3.50 ± 7% perf-profile.children.cycles-pp.update_process_times
0.30 ± 15% +3.2 3.51 ± 7% perf-profile.children.cycles-pp.tick_sched_handle
0.30 ± 7% +3.3 3.62 ± 6% perf-profile.children.cycles-pp.tick_nohz_next_event
0.30 ± 17% +3.3 3.65 ± 8% perf-profile.children.cycles-pp.clockevents_program_event
0.34 ± 6% +3.8 4.17 ± 4% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.38 ± 14% +4.2 4.55 ± 9% perf-profile.children.cycles-pp.tick_sched_timer
0.58 ± 14% +6.2 6.77 ± 7% perf-profile.children.cycles-pp.ktime_get
0.50 ± 11% +6.4 6.94 ± 8% perf-profile.children.cycles-pp.__hrtimer_run_queues
33.98 ± 8% +8.1 42.05 ± 3% perf-profile.children.cycles-pp.intel_idle
0.92 ± 12% +11.6 12.50 ± 8% perf-profile.children.cycles-pp.hrtimer_interrupt
0.93 ± 12% +11.7 12.67 ± 8% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
1.77 ± 13% +13.8 15.56 ± 6% perf-profile.children.cycles-pp.asm_call_on_stack
0.56 ± 6% +14.8 15.32 ± 10% perf-profile.children.cycles-pp.menu_select
1.49 ± 11% +18.3 19.77 ± 7% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
1.62 ± 10% +24.0 25.58 ± 8% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
36.07 ± 7% +38.6 74.65 ± 4% perf-profile.children.cycles-pp.cpuidle_enter_state
36.07 ± 7% +38.6 74.67 ± 4% perf-profile.children.cycles-pp.cpuidle_enter
37.74 ± 6% +53.1 90.87 ± 4% perf-profile.children.cycles-pp.start_secondary
37.93 ± 6% +53.4 91.36 ± 4% perf-profile.children.cycles-pp.secondary_startup_64
37.93 ± 6% +53.4 91.36 ± 4% perf-profile.children.cycles-pp.cpu_startup_entry
37.93 ± 6% +53.4 91.36 ± 4% perf-profile.children.cycles-pp.do_idle
45.84 ± 2% -45.5 0.34 ±132% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.98 ± 8% -0.8 0.15 ±134% perf-profile.self.cycles-pp.__lookup_extent_mapping
0.76 ± 7% -0.6 0.13 ±135% perf-profile.self.cycles-pp.__etree_search
0.96 ± 8% -0.3 0.64 ± 8% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.33 ± 13% -0.3 0.06 ±107% perf-profile.self.cycles-pp.btrfs_get_token_32
0.74 ± 15% -0.2 0.55 ± 21% perf-profile.self.cycles-pp._raw_spin_lock
0.14 ± 13% -0.1 0.04 ±103% perf-profile.self.cycles-pp.btrfs_set_token_32
0.14 ± 37% -0.1 0.04 ±113% perf-profile.self.cycles-pp.poll_idle
0.13 ± 11% -0.1 0.03 ±105% perf-profile.self.cycles-pp.xas_load
0.10 ± 14% -0.1 0.03 ±105% perf-profile.self.cycles-pp.kmem_cache_free
0.11 ± 7% -0.1 0.06 ± 17% perf-profile.self.cycles-pp.rb_erase
0.06 ± 11% +0.0 0.11 ± 12% perf-profile.self.cycles-pp.nr_iowait_cpu
0.00 +0.1 0.06 ± 15% perf-profile.self.cycles-pp.update_rt_rq_load_avg
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.perf_event_task_tick
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.run_posix_cpu_timers
0.08 ± 19% +0.1 0.15 ± 32% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.07 ± 21% perf-profile.self.cycles-pp.cpuidle_governor_latency_req
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.rcu_eqs_enter
0.00 +0.1 0.07 ± 22% perf-profile.self.cycles-pp.__remove_hrtimer
0.00 +0.1 0.07 ± 10% perf-profile.self.cycles-pp.sysvec_apic_timer_interrupt
0.00 +0.1 0.08 ± 20% perf-profile.self.cycles-pp.sched_clock_cpu
0.00 +0.1 0.08 ± 19% perf-profile.self.cycles-pp.timerqueue_add
0.00 +0.1 0.08 ± 10% perf-profile.self.cycles-pp.load_balance
0.00 +0.1 0.08 ± 24% perf-profile.self.cycles-pp.delay_tsc
0.00 +0.1 0.08 ± 35% perf-profile.self.cycles-pp.irqtime_account_process_tick
0.00 +0.1 0.09 ± 13% perf-profile.self.cycles-pp.get_next_timer_interrupt
0.00 +0.1 0.09 ± 24% perf-profile.self.cycles-pp.update_process_times
0.00 +0.1 0.09 ± 33% perf-profile.self.cycles-pp.update_blocked_averages
0.00 +0.1 0.10 ± 15% perf-profile.self.cycles-pp.irq_work_tick
0.00 +0.1 0.10 ± 25% perf-profile.self.cycles-pp.clockevents_program_event
0.00 +0.1 0.10 ± 14% perf-profile.self.cycles-pp.scheduler_tick
0.00 +0.1 0.10 ± 25% perf-profile.self.cycles-pp.trigger_load_balance
0.00 +0.1 0.11 ± 19% perf-profile.self.cycles-pp.__sysvec_apic_timer_interrupt
0.00 +0.1 0.11 ± 23% perf-profile.self.cycles-pp.rcu_dynticks_eqs_enter
0.00 +0.1 0.12 ± 13% perf-profile.self.cycles-pp._find_next_bit
0.00 +0.1 0.12 ± 18% perf-profile.self.cycles-pp.note_gp_changes
0.00 +0.1 0.13 ± 15% perf-profile.self.cycles-pp.rcu_dynticks_eqs_exit
0.01 ±173% +0.1 0.16 ± 15% perf-profile.self.cycles-pp.memcpy_erms
0.00 +0.2 0.15 ± 17% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.00 +0.2 0.16 ± 7% perf-profile.self.cycles-pp.rcu_eqs_exit
0.00 +0.2 0.17 ± 18% perf-profile.self.cycles-pp.perf_pmu_disable
0.00 +0.2 0.17 ± 16% perf-profile.self.cycles-pp.idle_cpu
0.00 +0.2 0.17 ± 15% perf-profile.self.cycles-pp.get_cpu_device
0.04 ± 58% +0.2 0.22 ± 13% perf-profile.self.cycles-pp.do_wait
0.00 +0.2 0.18 ± 7% perf-profile.self.cycles-pp.irqtime_account_irq
0.00 +0.2 0.18 ± 17% perf-profile.self.cycles-pp.call_cpuidle
0.00 +0.2 0.20 ± 7% perf-profile.self.cycles-pp.timerqueue_del
0.00 +0.2 0.20 ± 14% perf-profile.self.cycles-pp.hrtimer_interrupt
0.00 +0.2 0.23 ± 22% perf-profile.self.cycles-pp.__softirqentry_text_start
0.00 +0.2 0.24 ± 7% perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.00 +0.3 0.26 perf-profile.self.cycles-pp.rebalance_domains
0.00 +0.3 0.26 ± 5% perf-profile.self.cycles-pp.update_irq_load_avg
0.00 +0.3 0.28 ± 7% perf-profile.self.cycles-pp.tick_sched_timer
0.00 +0.3 0.29 ± 20% perf-profile.self.cycles-pp.io_serial_in
0.07 ± 17% +0.3 0.38 ± 9% perf-profile.self.cycles-pp.do_idle
0.00 +0.3 0.31 ± 11% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.00 +0.3 0.31 ± 4% perf-profile.self.cycles-pp.rcu_idle_exit
0.06 ± 9% +0.3 0.40 ± 4% perf-profile.self.cycles-pp.update_sd_lb_stats
0.00 +0.4 0.36 ± 10% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.01 ±173% +0.4 0.40 ± 30% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.00 +0.4 0.41 ± 7% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.00 +0.4 0.43 ± 14% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.11 ± 8% +0.4 0.54 ± 10% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.5 0.47 ± 7% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.08 ± 10% +0.5 0.61 ± 9% perf-profile.self.cycles-pp._raw_spin_trylock
0.07 ± 31% +0.6 0.67 ± 9% perf-profile.self.cycles-pp.run_local_timers
0.07 ± 17% +0.6 0.70 ± 11% perf-profile.self.cycles-pp.native_sched_clock
0.05 ± 57% +0.7 0.72 ± 13% perf-profile.self.cycles-pp.calc_global_load_tick
0.07 ± 12% +0.7 0.80 ± 8% perf-profile.self.cycles-pp.read_tsc
0.08 ± 11% +0.8 0.90 ± 21% perf-profile.self.cycles-pp.tick_nohz_next_event
0.08 ± 10% +0.9 0.99 ± 6% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.04 ± 57% +1.0 1.00 ± 11% perf-profile.self.cycles-pp.lapic_next_deadline
0.07 ± 10% +1.2 1.23 ± 5% perf-profile.self.cycles-pp.native_irq_return_iret
0.18 ± 8% +1.9 2.12 ± 6% perf-profile.self.cycles-pp.timekeeping_max_deferment
0.27 ± 18% +2.6 2.84 ± 6% perf-profile.self.cycles-pp.wait_consider_task
0.52 ± 15% +5.6 6.09 ± 8% perf-profile.self.cycles-pp.ktime_get
33.98 ± 8% +8.1 42.04 ± 3% perf-profile.self.cycles-pp.intel_idle
0.16 ± 16% +9.5 9.63 ± 13% perf-profile.self.cycles-pp.cpuidle_enter_state
0.18 ± 10% +10.6 10.79 ± 15% perf-profile.self.cycles-pp.menu_select
314.50 ± 11% +7937.6% 25278 ± 98% interrupts.34:PCI-MSI.524292-edge.eth0-TxRx-3
4396710 ± 3% -80.0% 879881 ± 2% interrupts.CAL:Function_call_interrupts
26715 ± 18% -78.5% 5757 ± 32% interrupts.CPU0.CAL:Function_call_interrupts
3299 ± 22% -95.8% 137.50 ± 4% interrupts.CPU0.NMI:Non-maskable_interrupts
3299 ± 22% -95.8% 137.50 ± 4% interrupts.CPU0.PMI:Performance_monitoring_interrupts
2636 ± 15% -84.3% 412.75 ± 19% interrupts.CPU0.RES:Rescheduling_interrupts
28313 ± 16% -78.1% 6187 ± 19% interrupts.CPU1.CAL:Function_call_interrupts
3766 ± 32% -96.6% 129.25 ± 5% interrupts.CPU1.NMI:Non-maskable_interrupts
3766 ± 32% -96.6% 129.25 ± 5% interrupts.CPU1.PMI:Performance_monitoring_interrupts
2737 ± 15% -83.2% 460.50 ± 23% interrupts.CPU1.RES:Rescheduling_interrupts
18372 ± 12% -74.6% 4671 ± 29% interrupts.CPU10.CAL:Function_call_interrupts
2998 ± 14% -95.2% 142.50 ± 4% interrupts.CPU10.NMI:Non-maskable_interrupts
2998 ± 14% -95.2% 142.50 ± 4% interrupts.CPU10.PMI:Performance_monitoring_interrupts
1770 ± 9% -80.0% 354.50 ± 22% interrupts.CPU10.RES:Rescheduling_interrupts
23724 ± 16% -69.5% 7234 ± 40% interrupts.CPU100.CAL:Function_call_interrupts
2557 ± 19% -95.0% 129.00 ± 26% interrupts.CPU100.NMI:Non-maskable_interrupts
2557 ± 19% -95.0% 129.00 ± 26% interrupts.CPU100.PMI:Performance_monitoring_interrupts
11688 ± 17% -86.0% 1637 ± 37% interrupts.CPU100.RES:Rescheduling_interrupts
66.50 ± 64% +177.1% 184.25 ± 5% interrupts.CPU100.TLB:TLB_shootdowns
28969 ± 19% -78.9% 6098 ± 34% interrupts.CPU101.CAL:Function_call_interrupts
3437 ± 27% -95.6% 150.50 ± 10% interrupts.CPU101.NMI:Non-maskable_interrupts
3437 ± 27% -95.6% 150.50 ± 10% interrupts.CPU101.PMI:Performance_monitoring_interrupts
13376 ± 21% -88.6% 1521 ± 55% interrupts.CPU101.RES:Rescheduling_interrupts
84.00 ± 39% +106.5% 173.50 ± 2% interrupts.CPU101.TLB:TLB_shootdowns
27485 ± 9% -80.1% 5472 ± 19% interrupts.CPU102.CAL:Function_call_interrupts
3527 ± 17% -96.1% 138.00 ± 8% interrupts.CPU102.NMI:Non-maskable_interrupts
3527 ± 17% -96.1% 138.00 ± 8% interrupts.CPU102.PMI:Performance_monitoring_interrupts
12358 ± 12% -89.7% 1271 ± 41% interrupts.CPU102.RES:Rescheduling_interrupts
66.00 ± 76% +147.3% 163.25 ± 9% interrupts.CPU102.TLB:TLB_shootdowns
28785 ± 14% -80.4% 5629 ± 20% interrupts.CPU103.CAL:Function_call_interrupts
3338 ± 32% -96.4% 119.00 ± 21% interrupts.CPU103.NMI:Non-maskable_interrupts
3338 ± 32% -96.4% 119.00 ± 21% interrupts.CPU103.PMI:Performance_monitoring_interrupts
13973 ± 14% -89.5% 1469 ± 37% interrupts.CPU103.RES:Rescheduling_interrupts
27769 ± 14% -79.6% 5659 ± 21% interrupts.CPU104.CAL:Function_call_interrupts
3095 ± 18% -96.0% 124.00 ± 24% interrupts.CPU104.NMI:Non-maskable_interrupts
3095 ± 18% -96.0% 124.00 ± 24% interrupts.CPU104.PMI:Performance_monitoring_interrupts
12739 ± 22% -87.9% 1539 ± 45% interrupts.CPU104.RES:Rescheduling_interrupts
82.75 ± 61% +98.8% 164.50 ± 18% interrupts.CPU104.TLB:TLB_shootdowns
30394 ± 17% -85.6% 4389 ± 19% interrupts.CPU105.CAL:Function_call_interrupts
4113 ± 7% -96.7% 135.75 ± 6% interrupts.CPU105.NMI:Non-maskable_interrupts
4113 ± 7% -96.7% 135.75 ± 6% interrupts.CPU105.PMI:Performance_monitoring_interrupts
14222 ± 17% -91.7% 1186 ± 16% interrupts.CPU105.RES:Rescheduling_interrupts
57.50 ± 79% +210.4% 178.50 ± 3% interrupts.CPU105.TLB:TLB_shootdowns
30423 ± 25% -76.2% 7244 ± 41% interrupts.CPU106.CAL:Function_call_interrupts
3926 ± 23% -96.2% 147.50 ± 9% interrupts.CPU106.NMI:Non-maskable_interrupts
3926 ± 23% -96.2% 147.50 ± 9% interrupts.CPU106.PMI:Performance_monitoring_interrupts
13535 ± 20% -85.2% 2004 ± 48% interrupts.CPU106.RES:Rescheduling_interrupts
65.50 ± 64% +171.8% 178.00 ± 5% interrupts.CPU106.TLB:TLB_shootdowns
30034 ± 17% -80.6% 5828 ± 36% interrupts.CPU107.CAL:Function_call_interrupts
3376 ± 35% -96.2% 129.25 ± 18% interrupts.CPU107.NMI:Non-maskable_interrupts
3376 ± 35% -96.2% 129.25 ± 18% interrupts.CPU107.PMI:Performance_monitoring_interrupts
14346 ± 17% -89.0% 1585 ± 48% interrupts.CPU107.RES:Rescheduling_interrupts
77.00 ± 46% +121.8% 170.75 ± 3% interrupts.CPU107.TLB:TLB_shootdowns
27934 ± 20% -79.2% 5815 ± 26% interrupts.CPU108.CAL:Function_call_interrupts
3192 ± 34% -92.5% 240.75 ± 93% interrupts.CPU108.NMI:Non-maskable_interrupts
3192 ± 34% -92.5% 240.75 ± 93% interrupts.CPU108.PMI:Performance_monitoring_interrupts
14953 ± 22% -90.8% 1369 ± 33% interrupts.CPU108.RES:Rescheduling_interrupts
53.25 ± 80% +184.0% 151.25 ± 22% interrupts.CPU108.TLB:TLB_shootdowns
29248 ± 13% -82.1% 5240 ± 26% interrupts.CPU109.CAL:Function_call_interrupts
3793 ± 10% -96.5% 131.00 ± 4% interrupts.CPU109.NMI:Non-maskable_interrupts
3793 ± 10% -96.5% 131.00 ± 4% interrupts.CPU109.PMI:Performance_monitoring_interrupts
14788 ± 17% -91.7% 1229 ± 25% interrupts.CPU109.RES:Rescheduling_interrupts
74.75 ± 65% +126.1% 169.00 ± 9% interrupts.CPU109.TLB:TLB_shootdowns
18032 ± 17% -66.3% 6084 ± 47% interrupts.CPU11.CAL:Function_call_interrupts
2951 ± 32% -95.7% 127.75 ± 25% interrupts.CPU11.NMI:Non-maskable_interrupts
2951 ± 32% -95.7% 127.75 ± 25% interrupts.CPU11.PMI:Performance_monitoring_interrupts
1907 ± 15% -72.5% 524.75 ± 44% interrupts.CPU11.RES:Rescheduling_interrupts
29749 ± 13% -79.5% 6112 ± 36% interrupts.CPU110.CAL:Function_call_interrupts
3599 ± 34% -95.9% 149.00 ± 9% interrupts.CPU110.NMI:Non-maskable_interrupts
3599 ± 34% -95.9% 149.00 ± 9% interrupts.CPU110.PMI:Performance_monitoring_interrupts
15350 ± 10% -89.9% 1548 ± 37% interrupts.CPU110.RES:Rescheduling_interrupts
72.50 ± 70% +152.1% 182.75 ± 7% interrupts.CPU110.TLB:TLB_shootdowns
30939 ± 11% -77.6% 6925 ± 39% interrupts.CPU111.CAL:Function_call_interrupts
3653 ± 31% -95.9% 148.75 ± 19% interrupts.CPU111.NMI:Non-maskable_interrupts
3653 ± 31% -95.9% 148.75 ± 19% interrupts.CPU111.PMI:Performance_monitoring_interrupts
15010 ± 15% -86.6% 2014 ± 56% interrupts.CPU111.RES:Rescheduling_interrupts
65.75 ± 64% +186.7% 188.50 ± 12% interrupts.CPU111.TLB:TLB_shootdowns
29007 ± 7% -81.4% 5400 ± 34% interrupts.CPU112.CAL:Function_call_interrupts
3733 ± 11% -96.8% 119.25 ± 26% interrupts.CPU112.NMI:Non-maskable_interrupts
3733 ± 11% -96.8% 119.25 ± 26% interrupts.CPU112.PMI:Performance_monitoring_interrupts
14076 ± 15% -91.1% 1258 ± 32% interrupts.CPU112.RES:Rescheduling_interrupts
65.75 ± 57% +187.5% 189.00 ± 11% interrupts.CPU112.TLB:TLB_shootdowns
29036 ± 12% -79.8% 5878 ± 27% interrupts.CPU113.CAL:Function_call_interrupts
4018 ± 17% -96.8% 128.25 ± 29% interrupts.CPU113.NMI:Non-maskable_interrupts
4018 ± 17% -96.8% 128.25 ± 29% interrupts.CPU113.PMI:Performance_monitoring_interrupts
13622 ± 12% -89.2% 1470 ± 36% interrupts.CPU113.RES:Rescheduling_interrupts
70.25 ± 60% +160.5% 183.00 ± 5% interrupts.CPU113.TLB:TLB_shootdowns
29303 ± 20% -76.7% 6829 ± 37% interrupts.CPU114.CAL:Function_call_interrupts
3313 ± 9% -96.3% 122.00 ± 28% interrupts.CPU114.NMI:Non-maskable_interrupts
3313 ± 9% -96.3% 122.00 ± 28% interrupts.CPU114.PMI:Performance_monitoring_interrupts
13276 ± 24% -90.3% 1293 ± 34% interrupts.CPU114.RES:Rescheduling_interrupts
56.00 ± 99% +225.9% 182.50 ± 9% interrupts.CPU114.TLB:TLB_shootdowns
28215 ± 16% -82.2% 5020 ± 42% interrupts.CPU115.CAL:Function_call_interrupts
3078 ± 21% -95.8% 130.00 ± 30% interrupts.CPU115.NMI:Non-maskable_interrupts
3078 ± 21% -95.8% 130.00 ± 30% interrupts.CPU115.PMI:Performance_monitoring_interrupts
12771 ± 16% -91.6% 1069 ± 46% interrupts.CPU115.RES:Rescheduling_interrupts
70.50 ± 55% +150.7% 176.75 ± 7% interrupts.CPU115.TLB:TLB_shootdowns
28111 ± 9% -79.0% 5912 ± 30% interrupts.CPU116.CAL:Function_call_interrupts
3274 ± 21% -95.8% 138.25 ± 5% interrupts.CPU116.NMI:Non-maskable_interrupts
3274 ± 21% -95.8% 138.25 ± 5% interrupts.CPU116.PMI:Performance_monitoring_interrupts
12311 ± 9% -89.2% 1327 ± 40% interrupts.CPU116.RES:Rescheduling_interrupts
78.50 ± 74% +145.2% 192.50 ± 8% interrupts.CPU116.TLB:TLB_shootdowns
23705 ± 8% -73.0% 6390 ± 47% interrupts.CPU117.CAL:Function_call_interrupts
3222 ± 14% -96.3% 120.00 ± 25% interrupts.CPU117.NMI:Non-maskable_interrupts
3222 ± 14% -96.3% 120.00 ± 25% interrupts.CPU117.PMI:Performance_monitoring_interrupts
9948 ± 13% -85.5% 1444 ± 41% interrupts.CPU117.RES:Rescheduling_interrupts
87.00 ± 50% +120.4% 191.75 ± 9% interrupts.CPU117.TLB:TLB_shootdowns
25827 ± 21% -79.4% 5315 ± 32% interrupts.CPU118.CAL:Function_call_interrupts
4445 ± 23% -96.9% 136.50 ± 34% interrupts.CPU118.NMI:Non-maskable_interrupts
4445 ± 23% -96.9% 136.50 ± 34% interrupts.CPU118.PMI:Performance_monitoring_interrupts
11512 ± 18% -89.0% 1272 ± 31% interrupts.CPU118.RES:Rescheduling_interrupts
58.00 ± 75% +214.2% 182.25 ± 6% interrupts.CPU118.TLB:TLB_shootdowns
26383 ± 20% -74.1% 6834 ± 28% interrupts.CPU119.CAL:Function_call_interrupts
4156 ± 13% -97.0% 126.00 ± 27% interrupts.CPU119.NMI:Non-maskable_interrupts
4156 ± 13% -97.0% 126.00 ± 27% interrupts.CPU119.PMI:Performance_monitoring_interrupts
12557 ± 25% -88.2% 1480 ± 42% interrupts.CPU119.RES:Rescheduling_interrupts
72.75 ± 70% +150.9% 182.50 ± 2% interrupts.CPU119.TLB:TLB_shootdowns
314.50 ± 11% +7937.6% 25278 ± 98% interrupts.CPU12.34:PCI-MSI.524292-edge.eth0-TxRx-3
19032 ± 16% -72.9% 5166 ± 32% interrupts.CPU12.CAL:Function_call_interrupts
2501 ± 40% -93.9% 152.25 ± 46% interrupts.CPU12.NMI:Non-maskable_interrupts
2501 ± 40% -93.9% 152.25 ± 46% interrupts.CPU12.PMI:Performance_monitoring_interrupts
1979 ± 11% -77.8% 440.25 ± 35% interrupts.CPU12.RES:Rescheduling_interrupts
19776 ± 21% -81.9% 3574 ± 32% interrupts.CPU120.CAL:Function_call_interrupts
2124 ± 30% -93.2% 144.00 ± 11% interrupts.CPU120.NMI:Non-maskable_interrupts
2124 ± 30% -93.2% 144.00 ± 11% interrupts.CPU120.PMI:Performance_monitoring_interrupts
9406 ± 18% -91.9% 760.00 ± 53% interrupts.CPU120.RES:Rescheduling_interrupts
18584 ± 28% -78.7% 3952 ± 36% interrupts.CPU121.CAL:Function_call_interrupts
2412 ± 60% -93.0% 168.00 ± 27% interrupts.CPU121.NMI:Non-maskable_interrupts
2412 ± 60% -93.0% 168.00 ± 27% interrupts.CPU121.PMI:Performance_monitoring_interrupts
8656 ± 28% -90.7% 801.75 ± 51% interrupts.CPU121.RES:Rescheduling_interrupts
22468 ± 20% -77.6% 5027 ± 68% interrupts.CPU122.CAL:Function_call_interrupts
1585 ± 29% -91.1% 140.50 ± 5% interrupts.CPU122.NMI:Non-maskable_interrupts
1585 ± 29% -91.1% 140.50 ± 5% interrupts.CPU122.PMI:Performance_monitoring_interrupts
9928 -90.4% 954.75 ± 66% interrupts.CPU122.RES:Rescheduling_interrupts
23767 ± 21% -75.4% 5853 ± 74% interrupts.CPU123.CAL:Function_call_interrupts
2500 ± 51% -94.2% 145.00 ± 11% interrupts.CPU123.NMI:Non-maskable_interrupts
2500 ± 51% -94.2% 145.00 ± 11% interrupts.CPU123.PMI:Performance_monitoring_interrupts
11760 ± 18% -91.8% 969.50 ± 66% interrupts.CPU123.RES:Rescheduling_interrupts
21644 ± 18% -79.3% 4473 ± 38% interrupts.CPU124.CAL:Function_call_interrupts
1769 ± 44% -92.0% 141.00 ± 6% interrupts.CPU124.NMI:Non-maskable_interrupts
1769 ± 44% -92.0% 141.00 ± 6% interrupts.CPU124.PMI:Performance_monitoring_interrupts
9617 ± 14% -90.4% 925.50 ± 44% interrupts.CPU124.RES:Rescheduling_interrupts
22859 ± 11% -80.8% 4387 ± 25% interrupts.CPU125.CAL:Function_call_interrupts
1123 ± 23% -87.0% 146.50 ± 12% interrupts.CPU125.NMI:Non-maskable_interrupts
1123 ± 23% -87.0% 146.50 ± 12% interrupts.CPU125.PMI:Performance_monitoring_interrupts
10609 ± 11% -92.1% 834.50 ± 30% interrupts.CPU125.RES:Rescheduling_interrupts
23240 ± 21% -76.6% 5434 ± 38% interrupts.CPU126.CAL:Function_call_interrupts
1803 ± 45% -92.1% 142.50 ± 10% interrupts.CPU126.NMI:Non-maskable_interrupts
1803 ± 45% -92.1% 142.50 ± 10% interrupts.CPU126.PMI:Performance_monitoring_interrupts
10451 ± 23% -91.0% 938.75 ± 53% interrupts.CPU126.RES:Rescheduling_interrupts
25717 ± 14% -77.7% 5745 ± 61% interrupts.CPU127.CAL:Function_call_interrupts
2530 ± 39% -93.7% 158.25 ± 22% interrupts.CPU127.NMI:Non-maskable_interrupts
2530 ± 39% -93.7% 158.25 ± 22% interrupts.CPU127.PMI:Performance_monitoring_interrupts
12812 ± 12% -91.1% 1134 ± 74% interrupts.CPU127.RES:Rescheduling_interrupts
25385 ± 23% -80.4% 4967 ± 43% interrupts.CPU128.CAL:Function_call_interrupts
2319 ± 58% -94.8% 121.50 ± 30% interrupts.CPU128.NMI:Non-maskable_interrupts
2319 ± 58% -94.8% 121.50 ± 30% interrupts.CPU128.PMI:Performance_monitoring_interrupts
11067 ± 25% -91.5% 938.75 ± 44% interrupts.CPU128.RES:Rescheduling_interrupts
21868 ± 22% -78.6% 4678 ± 41% interrupts.CPU129.CAL:Function_call_interrupts
1896 ± 59% -94.4% 106.75 ± 39% interrupts.CPU129.NMI:Non-maskable_interrupts
1896 ± 59% -94.4% 106.75 ± 39% interrupts.CPU129.PMI:Performance_monitoring_interrupts
10791 ± 19% -91.5% 920.75 ± 45% interrupts.CPU129.RES:Rescheduling_interrupts
19727 ± 13% -72.3% 5463 ± 28% interrupts.CPU13.CAL:Function_call_interrupts
2651 ± 18% -96.4% 96.00 ± 32% interrupts.CPU13.NMI:Non-maskable_interrupts
2651 ± 18% -96.4% 96.00 ± 32% interrupts.CPU13.PMI:Performance_monitoring_interrupts
1958 ± 12% -78.0% 431.25 ± 29% interrupts.CPU13.RES:Rescheduling_interrupts
22220 ± 19% -76.6% 5205 ± 62% interrupts.CPU130.CAL:Function_call_interrupts
1834 ± 69% -91.0% 165.75 ± 25% interrupts.CPU130.NMI:Non-maskable_interrupts
1834 ± 69% -91.0% 165.75 ± 25% interrupts.CPU130.PMI:Performance_monitoring_interrupts
10866 ± 15% -90.9% 988.00 ± 64% interrupts.CPU130.RES:Rescheduling_interrupts
23789 ± 19% -80.0% 4748 ± 58% interrupts.CPU131.CAL:Function_call_interrupts
1891 ± 55% -93.4% 124.75 ± 20% interrupts.CPU131.NMI:Non-maskable_interrupts
1891 ± 55% -93.4% 124.75 ± 20% interrupts.CPU131.PMI:Performance_monitoring_interrupts
11431 ± 24% -90.9% 1036 ± 69% interrupts.CPU131.RES:Rescheduling_interrupts
26240 ± 14% -79.4% 5399 ± 55% interrupts.CPU132.CAL:Function_call_interrupts
1772 ± 41% -93.1% 122.75 ± 22% interrupts.CPU132.NMI:Non-maskable_interrupts
1772 ± 41% -93.1% 122.75 ± 22% interrupts.CPU132.PMI:Performance_monitoring_interrupts
11021 ± 16% -89.4% 1168 ± 63% interrupts.CPU132.RES:Rescheduling_interrupts
26054 ± 20% -83.2% 4366 ± 42% interrupts.CPU133.CAL:Function_call_interrupts
1651 ± 72% -92.2% 129.50 ± 31% interrupts.CPU133.NMI:Non-maskable_interrupts
1651 ± 72% -92.2% 129.50 ± 31% interrupts.CPU133.PMI:Performance_monitoring_interrupts
13736 ± 25% -93.7% 860.75 ± 40% interrupts.CPU133.RES:Rescheduling_interrupts
23715 ± 19% -72.1% 6625 ± 75% interrupts.CPU134.CAL:Function_call_interrupts
1544 ± 36% -92.1% 122.00 ± 26% interrupts.CPU134.NMI:Non-maskable_interrupts
1544 ± 36% -92.1% 122.00 ± 26% interrupts.CPU134.PMI:Performance_monitoring_interrupts
11016 ± 19% -89.4% 1163 ± 73% interrupts.CPU134.RES:Rescheduling_interrupts
24793 ± 18% -75.3% 6114 ± 54% interrupts.CPU135.CAL:Function_call_interrupts
2177 ± 82% -91.8% 178.50 ± 39% interrupts.CPU135.NMI:Non-maskable_interrupts
2177 ± 82% -91.8% 178.50 ± 39% interrupts.CPU135.PMI:Performance_monitoring_interrupts
10859 ± 17% -89.2% 1170 ± 48% interrupts.CPU135.RES:Rescheduling_interrupts
24107 ± 22% -77.3% 5481 ± 67% interrupts.CPU136.CAL:Function_call_interrupts
2111 ± 62% -94.3% 121.25 ± 26% interrupts.CPU136.NMI:Non-maskable_interrupts
2111 ± 62% -94.3% 121.25 ± 26% interrupts.CPU136.PMI:Performance_monitoring_interrupts
10408 ± 20% -90.7% 967.00 ± 69% interrupts.CPU136.RES:Rescheduling_interrupts
22847 ± 14% -76.7% 5316 ± 44% interrupts.CPU137.CAL:Function_call_interrupts
1824 ± 33% -93.3% 122.25 ± 27% interrupts.CPU137.NMI:Non-maskable_interrupts
1824 ± 33% -93.3% 122.25 ± 27% interrupts.CPU137.PMI:Performance_monitoring_interrupts
10749 ± 14% -92.2% 842.25 ± 42% interrupts.CPU137.RES:Rescheduling_interrupts
22463 ± 17% -76.5% 5272 ± 42% interrupts.CPU138.CAL:Function_call_interrupts
1065 ± 53% -84.6% 164.50 ± 31% interrupts.CPU138.NMI:Non-maskable_interrupts
1065 ± 53% -84.6% 164.50 ± 31% interrupts.CPU138.PMI:Performance_monitoring_interrupts
10942 ± 29% -91.8% 894.50 ± 61% interrupts.CPU138.RES:Rescheduling_interrupts
20792 ± 12% -72.6% 5702 ± 62% interrupts.CPU139.CAL:Function_call_interrupts
1632 ± 35% -86.3% 223.00 ± 74% interrupts.CPU139.NMI:Non-maskable_interrupts
1632 ± 35% -86.3% 223.00 ± 74% interrupts.CPU139.PMI:Performance_monitoring_interrupts
10147 ± 16% -89.5% 1066 ± 72% interrupts.CPU139.RES:Rescheduling_interrupts
16992 ± 20% -66.9% 5631 ± 19% interrupts.CPU14.CAL:Function_call_interrupts
2439 ± 30% -95.0% 121.00 ± 19% interrupts.CPU14.NMI:Non-maskable_interrupts
2439 ± 30% -95.0% 121.00 ± 19% interrupts.CPU14.PMI:Performance_monitoring_interrupts
1745 ± 17% -70.1% 522.00 ± 16% interrupts.CPU14.RES:Rescheduling_interrupts
21133 ± 17% -78.7% 4500 ± 55% interrupts.CPU140.CAL:Function_call_interrupts
1191 ± 20% -89.5% 125.00 ± 28% interrupts.CPU140.NMI:Non-maskable_interrupts
1191 ± 20% -89.5% 125.00 ± 28% interrupts.CPU140.PMI:Performance_monitoring_interrupts
9876 ± 13% -92.2% 772.25 ± 51% interrupts.CPU140.RES:Rescheduling_interrupts
20180 ± 22% -75.3% 4989 ± 43% interrupts.CPU141.CAL:Function_call_interrupts
1703 ± 33% -92.2% 133.50 ± 28% interrupts.CPU141.NMI:Non-maskable_interrupts
1703 ± 33% -92.2% 133.50 ± 28% interrupts.CPU141.PMI:Performance_monitoring_interrupts
9060 ± 24% -91.1% 808.25 ± 41% interrupts.CPU141.RES:Rescheduling_interrupts
20177 ± 17% -68.7% 6307 ± 64% interrupts.CPU142.CAL:Function_call_interrupts
2176 ± 56% -94.4% 122.50 ± 26% interrupts.CPU142.NMI:Non-maskable_interrupts
2176 ± 56% -94.4% 122.50 ± 26% interrupts.CPU142.PMI:Performance_monitoring_interrupts
9478 ± 21% -88.9% 1056 ± 73% interrupts.CPU142.RES:Rescheduling_interrupts
20090 ± 10% -79.0% 4217 ± 39% interrupts.CPU143.CAL:Function_call_interrupts
2143 ± 47% -94.0% 128.00 ± 31% interrupts.CPU143.NMI:Non-maskable_interrupts
2143 ± 47% -94.0% 128.00 ± 31% interrupts.CPU143.PMI:Performance_monitoring_interrupts
10267 ± 22% -91.2% 899.50 ± 51% interrupts.CPU143.RES:Rescheduling_interrupts
23846 ± 16% -87.6% 2962 ± 65% interrupts.CPU144.CAL:Function_call_interrupts
3817 ± 6% -96.4% 137.25 ± 42% interrupts.CPU144.NMI:Non-maskable_interrupts
3817 ± 6% -96.4% 137.25 ± 42% interrupts.CPU144.PMI:Performance_monitoring_interrupts
10395 ± 18% -93.1% 717.50 ± 80% interrupts.CPU144.RES:Rescheduling_interrupts
24691 ± 25% -84.3% 3872 ± 80% interrupts.CPU145.CAL:Function_call_interrupts
3576 ± 31% -95.5% 161.25 ± 20% interrupts.CPU145.NMI:Non-maskable_interrupts
3576 ± 31% -95.5% 161.25 ± 20% interrupts.CPU145.PMI:Performance_monitoring_interrupts
9715 ± 24% -91.4% 835.00 ± 80% interrupts.CPU145.RES:Rescheduling_interrupts
25468 ± 26% -83.9% 4099 ± 79% interrupts.CPU146.CAL:Function_call_interrupts
3261 ± 6% -95.3% 153.00 ± 40% interrupts.CPU146.NMI:Non-maskable_interrupts
3261 ± 6% -95.3% 153.00 ± 40% interrupts.CPU146.PMI:Performance_monitoring_interrupts
10923 ± 28% -91.7% 902.75 ± 85% interrupts.CPU146.RES:Rescheduling_interrupts
24476 ± 21% -87.4% 3073 ± 61% interrupts.CPU147.CAL:Function_call_interrupts
3052 ± 17% -94.9% 154.50 ± 26% interrupts.CPU147.NMI:Non-maskable_interrupts
3052 ± 17% -94.9% 154.50 ± 26% interrupts.CPU147.PMI:Performance_monitoring_interrupts
10777 ± 25% -92.8% 773.75 ± 81% interrupts.CPU147.RES:Rescheduling_interrupts
26863 ± 24% -88.3% 3144 ± 38% interrupts.CPU148.CAL:Function_call_interrupts
3382 ± 25% -95.5% 152.50 ± 22% interrupts.CPU148.NMI:Non-maskable_interrupts
3382 ± 25% -95.5% 152.50 ± 22% interrupts.CPU148.PMI:Performance_monitoring_interrupts
11840 ± 31% -93.8% 732.00 ± 58% interrupts.CPU148.RES:Rescheduling_interrupts
26930 ± 22% -85.5% 3911 ± 71% interrupts.CPU149.CAL:Function_call_interrupts
3572 ± 20% -95.7% 152.00 ± 19% interrupts.CPU149.NMI:Non-maskable_interrupts
3572 ± 20% -95.7% 152.00 ± 19% interrupts.CPU149.PMI:Performance_monitoring_interrupts
11385 ± 20% -91.8% 936.50 ± 76% interrupts.CPU149.RES:Rescheduling_interrupts
17673 ± 24% -70.2% 5267 ± 36% interrupts.CPU15.CAL:Function_call_interrupts
2255 ± 30% -95.5% 100.75 ± 29% interrupts.CPU15.NMI:Non-maskable_interrupts
2255 ± 30% -95.5% 100.75 ± 29% interrupts.CPU15.PMI:Performance_monitoring_interrupts
1797 ± 19% -79.1% 375.00 ± 37% interrupts.CPU15.RES:Rescheduling_interrupts
29927 ± 25% -84.5% 4630 ± 92% interrupts.CPU150.CAL:Function_call_interrupts
4214 ± 18% -97.0% 125.00 ± 33% interrupts.CPU150.NMI:Non-maskable_interrupts
4214 ± 18% -97.0% 125.00 ± 33% interrupts.CPU150.PMI:Performance_monitoring_interrupts
12622 ± 25% -91.0% 1130 ± 95% interrupts.CPU150.RES:Rescheduling_interrupts
28638 ± 26% -88.3% 3346 ± 73% interrupts.CPU151.CAL:Function_call_interrupts
3664 ± 18% -96.2% 139.50 ± 6% interrupts.CPU151.NMI:Non-maskable_interrupts
3664 ± 18% -96.2% 139.50 ± 6% interrupts.CPU151.PMI:Performance_monitoring_interrupts
12689 ± 28% -93.7% 803.75 ± 91% interrupts.CPU151.RES:Rescheduling_interrupts
27914 ± 19% -87.5% 3502 ± 75% interrupts.CPU152.CAL:Function_call_interrupts
3931 ± 33% -96.9% 123.50 ± 28% interrupts.CPU152.NMI:Non-maskable_interrupts
3931 ± 33% -96.9% 123.50 ± 28% interrupts.CPU152.PMI:Performance_monitoring_interrupts
12274 ± 16% -93.0% 856.50 ± 92% interrupts.CPU152.RES:Rescheduling_interrupts
29383 ± 27% -85.0% 4395 ± 81% interrupts.CPU153.CAL:Function_call_interrupts
2511 ± 47% -94.5% 138.25 ± 8% interrupts.CPU153.NMI:Non-maskable_interrupts
2511 ± 47% -94.5% 138.25 ± 8% interrupts.CPU153.PMI:Performance_monitoring_interrupts
13453 ± 27% -91.4% 1159 ± 96% interrupts.CPU153.RES:Rescheduling_interrupts
29288 ± 21% -86.7% 3888 ± 56% interrupts.CPU154.CAL:Function_call_interrupts
5050 ± 9% -94.9% 257.00 ± 69% interrupts.CPU154.NMI:Non-maskable_interrupts
5050 ± 9% -94.9% 257.00 ± 69% interrupts.CPU154.PMI:Performance_monitoring_interrupts
12937 ± 25% -92.5% 969.75 ± 69% interrupts.CPU154.RES:Rescheduling_interrupts
30564 ± 24% -90.0% 3058 ± 45% interrupts.CPU155.CAL:Function_call_interrupts
4279 ± 8% -96.1% 165.75 ± 27% interrupts.CPU155.NMI:Non-maskable_interrupts
4279 ± 8% -96.1% 165.75 ± 27% interrupts.CPU155.PMI:Performance_monitoring_interrupts
13191 ± 19% -94.5% 722.75 ± 60% interrupts.CPU155.RES:Rescheduling_interrupts
27947 ± 22% -82.9% 4766 ± 72% interrupts.CPU156.CAL:Function_call_interrupts
3313 ± 33% -95.0% 165.50 ± 35% interrupts.CPU156.NMI:Non-maskable_interrupts
3313 ± 33% -95.0% 165.50 ± 35% interrupts.CPU156.PMI:Performance_monitoring_interrupts
13222 ± 24% -91.5% 1121 ± 81% interrupts.CPU156.RES:Rescheduling_interrupts
28760 ± 15% -88.7% 3253 ± 48% interrupts.CPU157.CAL:Function_call_interrupts
4247 ± 32% -97.2% 119.00 ± 29% interrupts.CPU157.NMI:Non-maskable_interrupts
4247 ± 32% -97.2% 119.00 ± 29% interrupts.CPU157.PMI:Performance_monitoring_interrupts
12855 ± 14% -94.1% 759.25 ± 65% interrupts.CPU157.RES:Rescheduling_interrupts
31494 ± 17% -86.7% 4174 ± 79% interrupts.CPU158.CAL:Function_call_interrupts
3967 ± 21% -97.1% 113.75 ± 48% interrupts.CPU158.NMI:Non-maskable_interrupts
3967 ± 21% -97.1% 113.75 ± 48% interrupts.CPU158.PMI:Performance_monitoring_interrupts
12487 ± 20% -92.4% 948.50 ± 83% interrupts.CPU158.RES:Rescheduling_interrupts
27850 ± 19% -84.5% 4312 ± 74% interrupts.CPU159.CAL:Function_call_interrupts
3118 ± 28% -96.2% 118.25 ± 24% interrupts.CPU159.NMI:Non-maskable_interrupts
3118 ± 28% -96.2% 118.25 ± 24% interrupts.CPU159.PMI:Performance_monitoring_interrupts
12696 ± 21% -92.0% 1010 ± 88% interrupts.CPU159.RES:Rescheduling_interrupts
19933 ± 23% -76.3% 4719 ± 35% interrupts.CPU16.CAL:Function_call_interrupts
3258 ± 27% -96.4% 117.25 ± 22% interrupts.CPU16.NMI:Non-maskable_interrupts
3258 ± 27% -96.4% 117.25 ± 22% interrupts.CPU16.PMI:Performance_monitoring_interrupts
9290 ± 24% -89.2% 1003 ± 62% interrupts.CPU16.RES:Rescheduling_interrupts
31823 ± 22% -81.8% 5796 ± 76% interrupts.CPU160.CAL:Function_call_interrupts
3490 ± 5% -96.6% 120.25 ± 23% interrupts.CPU160.NMI:Non-maskable_interrupts
3490 ± 5% -96.6% 120.25 ± 23% interrupts.CPU160.PMI:Performance_monitoring_interrupts
13598 ± 22% -93.2% 927.00 ± 92% interrupts.CPU160.RES:Rescheduling_interrupts
28607 ± 14% -89.2% 3098 ± 35% interrupts.CPU161.CAL:Function_call_interrupts
4516 ± 28% -96.6% 152.50 ± 22% interrupts.CPU161.NMI:Non-maskable_interrupts
4516 ± 28% -96.6% 152.50 ± 22% interrupts.CPU161.PMI:Performance_monitoring_interrupts
11811 ± 20% -94.6% 638.75 ± 43% interrupts.CPU161.RES:Rescheduling_interrupts
22968 ± 13% -79.1% 4807 ± 85% interrupts.CPU162.CAL:Function_call_interrupts
3799 ± 21% -96.6% 130.75 ± 25% interrupts.CPU162.NMI:Non-maskable_interrupts
3799 ± 21% -96.6% 130.75 ± 25% interrupts.CPU162.PMI:Performance_monitoring_interrupts
10132 ± 14% -93.1% 698.75 ± 47% interrupts.CPU162.RES:Rescheduling_interrupts
27694 ± 22% -84.4% 4315 ± 83% interrupts.CPU163.CAL:Function_call_interrupts
3015 ± 11% -95.8% 125.75 ± 31% interrupts.CPU163.NMI:Non-maskable_interrupts
3015 ± 11% -95.8% 125.75 ± 31% interrupts.CPU163.PMI:Performance_monitoring_interrupts
12774 ± 32% -93.3% 861.25 ± 88% interrupts.CPU163.RES:Rescheduling_interrupts
27539 ± 17% -83.7% 4499 ± 79% interrupts.CPU164.CAL:Function_call_interrupts
3128 ± 38% -96.7% 104.75 ± 26% interrupts.CPU164.NMI:Non-maskable_interrupts
3128 ± 38% -96.7% 104.75 ± 26% interrupts.CPU164.PMI:Performance_monitoring_interrupts
11343 ± 19% -92.5% 847.75 ± 65% interrupts.CPU164.RES:Rescheduling_interrupts
29212 ± 16% -86.5% 3932 ± 69% interrupts.CPU165.CAL:Function_call_interrupts
3052 ± 27% -95.4% 141.00 ± 42% interrupts.CPU165.NMI:Non-maskable_interrupts
3052 ± 27% -95.4% 141.00 ± 42% interrupts.CPU165.PMI:Performance_monitoring_interrupts
12086 ± 15% -93.0% 848.25 ± 79% interrupts.CPU165.RES:Rescheduling_interrupts
25626 ± 20% -88.1% 3042 ± 48% interrupts.CPU166.CAL:Function_call_interrupts
2904 ± 33% -95.6% 128.25 ± 33% interrupts.CPU166.NMI:Non-maskable_interrupts
2904 ± 33% -95.6% 128.25 ± 33% interrupts.CPU166.PMI:Performance_monitoring_interrupts
10750 ± 20% -94.5% 596.00 ± 45% interrupts.CPU166.RES:Rescheduling_interrupts
24352 ± 17% -77.1% 5584 ± 94% interrupts.CPU167.CAL:Function_call_interrupts
3097 ± 33% -95.8% 128.75 ± 33% interrupts.CPU167.NMI:Non-maskable_interrupts
3097 ± 33% -95.8% 128.75 ± 33% interrupts.CPU167.PMI:Performance_monitoring_interrupts
9828 ± 21% -87.9% 1191 ±101% interrupts.CPU167.RES:Rescheduling_interrupts
23437 ± 25% -81.6% 4312 ± 74% interrupts.CPU168.CAL:Function_call_interrupts
2476 ± 18% -95.4% 113.50 ± 48% interrupts.CPU168.NMI:Non-maskable_interrupts
2476 ± 18% -95.4% 113.50 ± 48% interrupts.CPU168.PMI:Performance_monitoring_interrupts
11244 ± 41% -93.1% 777.50 ± 86% interrupts.CPU168.RES:Rescheduling_interrupts
22172 ± 25% -80.4% 4336 ± 69% interrupts.CPU169.CAL:Function_call_interrupts
2433 ± 55% -94.1% 144.25 ± 13% interrupts.CPU169.NMI:Non-maskable_interrupts
2433 ± 55% -94.1% 144.25 ± 13% interrupts.CPU169.PMI:Performance_monitoring_interrupts
10323 ± 27% -88.6% 1178 ± 95% interrupts.CPU169.RES:Rescheduling_interrupts
20938 ± 15% -73.2% 5609 ± 33% interrupts.CPU17.CAL:Function_call_interrupts
3883 ± 38% -96.3% 144.75 ± 8% interrupts.CPU17.NMI:Non-maskable_interrupts
3883 ± 38% -96.3% 144.75 ± 8% interrupts.CPU17.PMI:Performance_monitoring_interrupts
9547 ± 12% -90.0% 955.75 ± 34% interrupts.CPU17.RES:Rescheduling_interrupts
26170 ± 24% -81.8% 4763 ± 72% interrupts.CPU170.CAL:Function_call_interrupts
2715 ± 43% -95.0% 135.50 ± 34% interrupts.CPU170.NMI:Non-maskable_interrupts
2715 ± 43% -95.0% 135.50 ± 34% interrupts.CPU170.PMI:Performance_monitoring_interrupts
12107 ± 36% -91.1% 1081 ± 83% interrupts.CPU170.RES:Rescheduling_interrupts
22559 ± 22% -77.4% 5109 ± 67% interrupts.CPU171.CAL:Function_call_interrupts
2218 ± 48% -93.8% 138.00 ± 28% interrupts.CPU171.NMI:Non-maskable_interrupts
2218 ± 48% -93.8% 138.00 ± 28% interrupts.CPU171.PMI:Performance_monitoring_interrupts
10541 ± 24% -86.7% 1405 ± 84% interrupts.CPU171.RES:Rescheduling_interrupts
24510 ± 22% -80.6% 4757 ± 69% interrupts.CPU172.CAL:Function_call_interrupts
2244 ± 24% -93.3% 151.00 ± 20% interrupts.CPU172.NMI:Non-maskable_interrupts
2244 ± 24% -93.3% 151.00 ± 20% interrupts.CPU172.PMI:Performance_monitoring_interrupts
11403 ± 29% -90.1% 1129 ± 84% interrupts.CPU172.RES:Rescheduling_interrupts
26704 ± 23% -80.3% 5260 ± 61% interrupts.CPU173.CAL:Function_call_interrupts
2241 ± 58% -93.3% 151.00 ± 15% interrupts.CPU173.NMI:Non-maskable_interrupts
2241 ± 58% -93.3% 151.00 ± 15% interrupts.CPU173.PMI:Performance_monitoring_interrupts
12441 ± 31% -90.3% 1206 ± 67% interrupts.CPU173.RES:Rescheduling_interrupts
25377 ± 23% -83.3% 4246 ± 55% interrupts.CPU174.CAL:Function_call_interrupts
2712 ± 43% -90.8% 250.75 ± 75% interrupts.CPU174.NMI:Non-maskable_interrupts
2712 ± 43% -90.8% 250.75 ± 75% interrupts.CPU174.PMI:Performance_monitoring_interrupts
11738 ± 24% -92.0% 933.75 ± 71% interrupts.CPU174.RES:Rescheduling_interrupts
72.25 ± 66% +175.1% 198.75 ± 36% interrupts.CPU174.TLB:TLB_shootdowns
26314 ± 27% -83.3% 4386 ± 58% interrupts.CPU175.CAL:Function_call_interrupts
1957 ± 41% -93.3% 132.00 ± 33% interrupts.CPU175.NMI:Non-maskable_interrupts
1957 ± 41% -93.3% 132.00 ± 33% interrupts.CPU175.PMI:Performance_monitoring_interrupts
12844 ± 26% -91.9% 1040 ± 70% interrupts.CPU175.RES:Rescheduling_interrupts
25795 ± 33% -82.1% 4607 ± 72% interrupts.CPU176.CAL:Function_call_interrupts
2864 ± 42% -95.0% 143.00 ± 12% interrupts.CPU176.NMI:Non-maskable_interrupts
2864 ± 42% -95.0% 143.00 ± 12% interrupts.CPU176.PMI:Performance_monitoring_interrupts
11757 ± 29% -90.5% 1120 ± 86% interrupts.CPU176.RES:Rescheduling_interrupts
96.50 ± 22% +106.0% 198.75 ± 34% interrupts.CPU176.TLB:TLB_shootdowns
25060 ± 18% -83.2% 4212 ± 62% interrupts.CPU177.CAL:Function_call_interrupts
2709 ± 23% -95.6% 118.50 ± 26% interrupts.CPU177.NMI:Non-maskable_interrupts
2709 ± 23% -95.6% 118.50 ± 26% interrupts.CPU177.PMI:Performance_monitoring_interrupts
12241 ± 24% -91.1% 1089 ± 81% interrupts.CPU177.RES:Rescheduling_interrupts
102.00 ± 6% +109.1% 213.25 ± 30% interrupts.CPU177.TLB:TLB_shootdowns
26822 ± 18% -84.8% 4083 ± 52% interrupts.CPU178.CAL:Function_call_interrupts
2503 ± 36% -94.6% 136.00 ± 4% interrupts.CPU178.NMI:Non-maskable_interrupts
2503 ± 36% -94.6% 136.00 ± 4% interrupts.CPU178.PMI:Performance_monitoring_interrupts
12933 ± 24% -91.6% 1084 ± 83% interrupts.CPU178.RES:Rescheduling_interrupts
27526 ± 24% -82.5% 4812 ± 74% interrupts.CPU179.CAL:Function_call_interrupts
2592 ± 13% -94.5% 143.75 ± 14% interrupts.CPU179.NMI:Non-maskable_interrupts
2592 ± 13% -94.5% 143.75 ± 14% interrupts.CPU179.PMI:Performance_monitoring_interrupts
12740 ± 29% -90.8% 1172 ± 94% interrupts.CPU179.RES:Rescheduling_interrupts
20259 ± 20% -75.8% 4899 ± 39% interrupts.CPU18.CAL:Function_call_interrupts
2807 ± 28% -95.8% 117.00 ± 22% interrupts.CPU18.NMI:Non-maskable_interrupts
2807 ± 28% -95.8% 117.00 ± 22% interrupts.CPU18.PMI:Performance_monitoring_interrupts
9119 ± 18% -89.5% 959.25 ± 46% interrupts.CPU18.RES:Rescheduling_interrupts
29968 ± 16% -85.5% 4339 ± 62% interrupts.CPU180.CAL:Function_call_interrupts
2548 ± 44% -94.7% 134.50 ± 11% interrupts.CPU180.NMI:Non-maskable_interrupts
2548 ± 44% -94.7% 134.50 ± 11% interrupts.CPU180.PMI:Performance_monitoring_interrupts
13959 ± 21% -91.4% 1205 ± 89% interrupts.CPU180.RES:Rescheduling_interrupts
81.75 ± 59% +156.9% 210.00 ± 42% interrupts.CPU180.TLB:TLB_shootdowns
27542 ± 23% -84.9% 4168 ± 60% interrupts.CPU181.CAL:Function_call_interrupts
2595 ± 27% -94.7% 138.25 ± 9% interrupts.CPU181.NMI:Non-maskable_interrupts
2595 ± 27% -94.7% 138.25 ± 9% interrupts.CPU181.PMI:Performance_monitoring_interrupts
12071 ± 23% -90.7% 1117 ± 75% interrupts.CPU181.RES:Rescheduling_interrupts
92.75 ± 40% +116.2% 200.50 ± 39% interrupts.CPU181.TLB:TLB_shootdowns
26922 ± 23% -82.1% 4819 ± 44% interrupts.CPU182.CAL:Function_call_interrupts
1940 ± 20% -92.4% 147.00 ± 15% interrupts.CPU182.NMI:Non-maskable_interrupts
1940 ± 20% -92.4% 147.00 ± 15% interrupts.CPU182.PMI:Performance_monitoring_interrupts
13613 ± 28% -91.9% 1100 ± 62% interrupts.CPU182.RES:Rescheduling_interrupts
77.25 ± 37% +172.5% 210.50 ± 30% interrupts.CPU182.TLB:TLB_shootdowns
29377 ± 35% -82.6% 5109 ± 57% interrupts.CPU183.CAL:Function_call_interrupts
2641 ± 56% -95.4% 121.50 ± 27% interrupts.CPU183.NMI:Non-maskable_interrupts
2641 ± 56% -95.4% 121.50 ± 27% interrupts.CPU183.PMI:Performance_monitoring_interrupts
12895 ± 31% -90.3% 1251 ± 75% interrupts.CPU183.RES:Rescheduling_interrupts
27229 ± 21% -81.2% 5117 ± 66% interrupts.CPU184.CAL:Function_call_interrupts
2289 ± 32% -93.6% 146.00 ± 8% interrupts.CPU184.NMI:Non-maskable_interrupts
2289 ± 32% -93.6% 146.00 ± 8% interrupts.CPU184.PMI:Performance_monitoring_interrupts
13328 ± 33% -93.8% 827.00 ± 66% interrupts.CPU184.RES:Rescheduling_interrupts
100.25 ± 36% +98.3% 198.75 ± 35% interrupts.CPU184.TLB:TLB_shootdowns
29198 ± 22% -80.6% 5653 ± 79% interrupts.CPU185.CAL:Function_call_interrupts
1951 ± 40% -92.5% 146.75 ± 9% interrupts.CPU185.NMI:Non-maskable_interrupts
1951 ± 40% -92.5% 146.75 ± 9% interrupts.CPU185.PMI:Performance_monitoring_interrupts
14149 ± 36% -92.3% 1096 ± 70% interrupts.CPU185.RES:Rescheduling_interrupts
87.25 ± 45% +134.7% 204.75 ± 35% interrupts.CPU185.TLB:TLB_shootdowns
30213 ± 18% -80.0% 6042 ± 77% interrupts.CPU186.CAL:Function_call_interrupts
2637 ± 59% -95.0% 131.00 ± 5% interrupts.CPU186.NMI:Non-maskable_interrupts
2637 ± 59% -95.0% 131.00 ± 5% interrupts.CPU186.PMI:Performance_monitoring_interrupts
12482 ± 20% -90.5% 1185 ± 90% interrupts.CPU186.RES:Rescheduling_interrupts
95.00 ± 40% +108.2% 197.75 ± 44% interrupts.CPU186.TLB:TLB_shootdowns
25767 ± 18% -81.7% 4719 ± 76% interrupts.CPU187.CAL:Function_call_interrupts
3130 ± 86% -88.8% 351.50 ±103% interrupts.CPU187.NMI:Non-maskable_interrupts
3130 ± 86% -88.8% 351.50 ±103% interrupts.CPU187.PMI:Performance_monitoring_interrupts
11333 ± 23% -89.9% 1142 ± 90% interrupts.CPU187.RES:Rescheduling_interrupts
110.00 ± 45% +94.5% 214.00 ± 29% interrupts.CPU187.TLB:TLB_shootdowns
23925 ± 18% -78.8% 5071 ± 83% interrupts.CPU188.CAL:Function_call_interrupts
1902 ± 31% -92.2% 149.00 ± 16% interrupts.CPU188.NMI:Non-maskable_interrupts
1902 ± 31% -92.2% 149.00 ± 16% interrupts.CPU188.PMI:Performance_monitoring_interrupts
10556 ± 19% -92.2% 819.25 ± 71% interrupts.CPU188.RES:Rescheduling_interrupts
76.25 ± 45% +169.8% 205.75 ± 34% interrupts.CPU188.TLB:TLB_shootdowns
26062 ± 24% -82.9% 4464 ± 66% interrupts.CPU189.CAL:Function_call_interrupts
2176 ± 55% -93.8% 136.00 ± 38% interrupts.CPU189.NMI:Non-maskable_interrupts
2176 ± 55% -93.8% 136.00 ± 38% interrupts.CPU189.PMI:Performance_monitoring_interrupts
11757 ± 39% -92.4% 892.00 ± 66% interrupts.CPU189.RES:Rescheduling_interrupts
78.75 ± 50% +129.8% 181.00 ± 53% interrupts.CPU189.TLB:TLB_shootdowns
19975 ± 21% -71.6% 5672 ± 32% interrupts.CPU19.CAL:Function_call_interrupts
2507 ± 53% -94.2% 146.25 ± 11% interrupts.CPU19.NMI:Non-maskable_interrupts
2507 ± 53% -94.2% 146.25 ± 11% interrupts.CPU19.PMI:Performance_monitoring_interrupts
9166 ± 24% -86.5% 1237 ± 37% interrupts.CPU19.RES:Rescheduling_interrupts
23782 ± 18% -81.1% 4499 ± 72% interrupts.CPU190.CAL:Function_call_interrupts
2188 ± 61% -86.3% 300.25 ±108% interrupts.CPU190.NMI:Non-maskable_interrupts
2188 ± 61% -86.3% 300.25 ±108% interrupts.CPU190.PMI:Performance_monitoring_interrupts
10087 ± 28% -91.0% 909.00 ± 77% interrupts.CPU190.RES:Rescheduling_interrupts
24521 ± 26% -81.8% 4467 ± 52% interrupts.CPU191.CAL:Function_call_interrupts
2620 ± 41% -94.9% 134.25 ± 33% interrupts.CPU191.NMI:Non-maskable_interrupts
2620 ± 41% -94.9% 134.25 ± 33% interrupts.CPU191.PMI:Performance_monitoring_interrupts
11539 ± 31% -92.5% 861.00 ± 61% interrupts.CPU191.RES:Rescheduling_interrupts
25912 ± 12% -76.4% 6126 ± 30% interrupts.CPU2.CAL:Function_call_interrupts
3021 ± 7% -95.6% 133.00 ± 5% interrupts.CPU2.NMI:Non-maskable_interrupts
3021 ± 7% -95.6% 133.00 ± 5% interrupts.CPU2.PMI:Performance_monitoring_interrupts
2502 ± 10% -81.8% 454.50 ± 28% interrupts.CPU2.RES:Rescheduling_interrupts
18460 ± 20% -75.0% 4619 ± 31% interrupts.CPU20.CAL:Function_call_interrupts
2362 ± 20% -95.0% 119.25 ± 20% interrupts.CPU20.NMI:Non-maskable_interrupts
2362 ± 20% -95.0% 119.25 ± 20% interrupts.CPU20.PMI:Performance_monitoring_interrupts
7567 ± 23% -85.8% 1071 ± 56% interrupts.CPU20.RES:Rescheduling_interrupts
67.75 ± 81% +79.0% 121.25 ± 38% interrupts.CPU20.TLB:TLB_shootdowns
18520 ± 20% -71.3% 5313 ± 40% interrupts.CPU21.CAL:Function_call_interrupts
3164 ± 30% -96.8% 102.25 ± 33% interrupts.CPU21.NMI:Non-maskable_interrupts
3164 ± 30% -96.8% 102.25 ± 33% interrupts.CPU21.PMI:Performance_monitoring_interrupts
8115 ± 15% -86.5% 1094 ± 43% interrupts.CPU21.RES:Rescheduling_interrupts
66.50 ± 61% +165.0% 176.25 ± 14% interrupts.CPU21.TLB:TLB_shootdowns
20379 ± 14% -81.0% 3869 ± 18% interrupts.CPU22.CAL:Function_call_interrupts
2888 ± 27% -94.8% 149.25 ± 45% interrupts.CPU22.NMI:Non-maskable_interrupts
2888 ± 27% -94.8% 149.25 ± 45% interrupts.CPU22.PMI:Performance_monitoring_interrupts
9067 ± 13% -89.0% 1000 ± 41% interrupts.CPU22.RES:Rescheduling_interrupts
46.75 ± 89% +212.8% 146.25 ± 23% interrupts.CPU22.TLB:TLB_shootdowns
20977 ± 20% -80.3% 4126 ± 38% interrupts.CPU23.CAL:Function_call_interrupts
3324 ± 12% -96.8% 107.00 ± 36% interrupts.CPU23.NMI:Non-maskable_interrupts
3324 ± 12% -96.8% 107.00 ± 36% interrupts.CPU23.PMI:Performance_monitoring_interrupts
8809 ± 23% -88.8% 987.00 ± 43% interrupts.CPU23.RES:Rescheduling_interrupts
61.00 ± 63% +159.4% 158.25 ± 14% interrupts.CPU23.TLB:TLB_shootdowns
23732 ± 17% -81.9% 4301 ± 70% interrupts.CPU24.CAL:Function_call_interrupts
3459 ± 22% -95.9% 140.25 ± 8% interrupts.CPU24.NMI:Non-maskable_interrupts
3459 ± 22% -95.9% 140.25 ± 8% interrupts.CPU24.PMI:Performance_monitoring_interrupts
2196 ± 14% -84.8% 333.00 ± 44% interrupts.CPU24.RES:Rescheduling_interrupts
21149 ± 7% -72.0% 5919 ± 75% interrupts.CPU25.CAL:Function_call_interrupts
2045 ± 41% -92.8% 147.50 ± 12% interrupts.CPU25.NMI:Non-maskable_interrupts
2045 ± 41% -92.8% 147.50 ± 12% interrupts.CPU25.PMI:Performance_monitoring_interrupts
2194 ± 8% -75.9% 528.75 ± 56% interrupts.CPU25.RES:Rescheduling_interrupts
19197 ± 9% -72.4% 5301 ± 85% interrupts.CPU26.CAL:Function_call_interrupts
1283 ± 30% -90.7% 119.50 ± 21% interrupts.CPU26.NMI:Non-maskable_interrupts
1283 ± 30% -90.7% 119.50 ± 21% interrupts.CPU26.PMI:Performance_monitoring_interrupts
1872 ± 10% -71.0% 542.75 ± 76% interrupts.CPU26.RES:Rescheduling_interrupts
19108 ± 25% -71.9% 5371 ± 78% interrupts.CPU27.CAL:Function_call_interrupts
1911 ± 34% -93.7% 120.50 ± 20% interrupts.CPU27.NMI:Non-maskable_interrupts
1911 ± 34% -93.7% 120.50 ± 20% interrupts.CPU27.PMI:Performance_monitoring_interrupts
1894 ± 19% -71.9% 533.00 ± 87% interrupts.CPU27.RES:Rescheduling_interrupts
16132 ± 10% -65.8% 5513 ± 67% interrupts.CPU28.CAL:Function_call_interrupts
1700 ± 38% -93.0% 119.50 ± 20% interrupts.CPU28.NMI:Non-maskable_interrupts
1700 ± 38% -93.0% 119.50 ± 20% interrupts.CPU28.PMI:Performance_monitoring_interrupts
1688 ± 11% -65.6% 580.00 ± 80% interrupts.CPU28.RES:Rescheduling_interrupts
16891 ± 18% -68.7% 5282 ± 47% interrupts.CPU29.CAL:Function_call_interrupts
1712 ± 46% -92.8% 123.25 ± 15% interrupts.CPU29.NMI:Non-maskable_interrupts
1712 ± 46% -92.8% 123.25 ± 15% interrupts.CPU29.PMI:Performance_monitoring_interrupts
1638 ± 13% -70.1% 489.75 ± 41% interrupts.CPU29.RES:Rescheduling_interrupts
25866 ± 15% -74.3% 6651 ± 38% interrupts.CPU3.CAL:Function_call_interrupts
3296 ± 10% -96.9% 102.50 ± 33% interrupts.CPU3.NMI:Non-maskable_interrupts
3296 ± 10% -96.9% 102.50 ± 33% interrupts.CPU3.PMI:Performance_monitoring_interrupts
2515 ± 16% -75.9% 605.25 ± 30% interrupts.CPU3.RES:Rescheduling_interrupts
16900 ± 21% -77.5% 3805 ± 51% interrupts.CPU30.CAL:Function_call_interrupts
1763 ± 48% -93.3% 118.50 ± 21% interrupts.CPU30.NMI:Non-maskable_interrupts
1763 ± 48% -93.3% 118.50 ± 21% interrupts.CPU30.PMI:Performance_monitoring_interrupts
1708 ± 20% -80.5% 333.00 ± 37% interrupts.CPU30.RES:Rescheduling_interrupts
16624 ± 18% -74.8% 4196 ± 61% interrupts.CPU31.CAL:Function_call_interrupts
1591 ± 46% -92.1% 125.00 ± 26% interrupts.CPU31.NMI:Non-maskable_interrupts
1591 ± 46% -92.1% 125.00 ± 26% interrupts.CPU31.PMI:Performance_monitoring_interrupts
1641 ± 17% -76.6% 383.50 ± 44% interrupts.CPU31.RES:Rescheduling_interrupts
18126 ± 17% -70.9% 5267 ± 39% interrupts.CPU32.CAL:Function_call_interrupts
2104 ± 54% -94.5% 115.50 ± 19% interrupts.CPU32.NMI:Non-maskable_interrupts
2104 ± 54% -94.5% 115.50 ± 19% interrupts.CPU32.PMI:Performance_monitoring_interrupts
1686 ± 12% -73.2% 451.25 ± 30% interrupts.CPU32.RES:Rescheduling_interrupts
15787 ± 12% -70.2% 4702 ± 56% interrupts.CPU33.CAL:Function_call_interrupts
1385 ± 61% -91.3% 120.50 ± 23% interrupts.CPU33.NMI:Non-maskable_interrupts
1385 ± 61% -91.3% 120.50 ± 23% interrupts.CPU33.PMI:Performance_monitoring_interrupts
1627 ± 7% -69.0% 504.50 ± 76% interrupts.CPU33.RES:Rescheduling_interrupts
15844 ± 20% -75.1% 3942 ± 50% interrupts.CPU34.CAL:Function_call_interrupts
1776 ± 52% -92.8% 128.25 ± 26% interrupts.CPU34.NMI:Non-maskable_interrupts
1776 ± 52% -92.8% 128.25 ± 26% interrupts.CPU34.PMI:Performance_monitoring_interrupts
1621 ± 15% -78.1% 354.50 ± 50% interrupts.CPU34.RES:Rescheduling_interrupts
13954 ± 27% -74.5% 3555 ± 19% interrupts.CPU35.CAL:Function_call_interrupts
1544 ± 48% -92.0% 123.25 ± 21% interrupts.CPU35.NMI:Non-maskable_interrupts
1544 ± 48% -92.0% 123.25 ± 21% interrupts.CPU35.PMI:Performance_monitoring_interrupts
1417 ± 19% -79.2% 294.25 ± 9% interrupts.CPU35.RES:Rescheduling_interrupts
14120 ± 26% -70.8% 4129 ± 39% interrupts.CPU36.CAL:Function_call_interrupts
1754 ± 40% -93.0% 122.50 ± 25% interrupts.CPU36.NMI:Non-maskable_interrupts
1754 ± 40% -93.0% 122.50 ± 25% interrupts.CPU36.PMI:Performance_monitoring_interrupts
1450 ± 21% -75.2% 360.25 ± 22% interrupts.CPU36.RES:Rescheduling_interrupts
14319 ± 22% -79.7% 2900 ± 25% interrupts.CPU37.CAL:Function_call_interrupts
1489 ± 51% -91.7% 123.25 ± 20% interrupts.CPU37.NMI:Non-maskable_interrupts
1489 ± 51% -91.7% 123.25 ± 20% interrupts.CPU37.PMI:Performance_monitoring_interrupts
1465 ± 17% -81.1% 276.50 ± 13% interrupts.CPU37.RES:Rescheduling_interrupts
15358 ± 30% -79.3% 3174 ± 38% interrupts.CPU38.CAL:Function_call_interrupts
1588 ± 34% -93.5% 103.50 ± 34% interrupts.CPU38.NMI:Non-maskable_interrupts
1588 ± 34% -93.5% 103.50 ± 34% interrupts.CPU38.PMI:Performance_monitoring_interrupts
1523 ± 21% -83.8% 246.50 ± 28% interrupts.CPU38.RES:Rescheduling_interrupts
14794 ± 32% -73.3% 3949 ± 53% interrupts.CPU39.CAL:Function_call_interrupts
1224 ± 35% -90.9% 112.00 ± 25% interrupts.CPU39.NMI:Non-maskable_interrupts
1224 ± 35% -90.9% 112.00 ± 25% interrupts.CPU39.PMI:Performance_monitoring_interrupts
1542 ± 27% -80.4% 303.00 ± 42% interrupts.CPU39.RES:Rescheduling_interrupts
22481 ± 13% -72.1% 6262 ± 29% interrupts.CPU4.CAL:Function_call_interrupts
3180 ± 21% -95.9% 129.00 ± 27% interrupts.CPU4.NMI:Non-maskable_interrupts
3180 ± 21% -95.9% 129.00 ± 27% interrupts.CPU4.PMI:Performance_monitoring_interrupts
2303 ± 10% -79.0% 484.75 ± 29% interrupts.CPU4.RES:Rescheduling_interrupts
14016 ± 27% -77.7% 3121 ± 54% interrupts.CPU40.CAL:Function_call_interrupts
2017 ± 47% -93.1% 140.00 ± 6% interrupts.CPU40.NMI:Non-maskable_interrupts
2017 ± 47% -93.1% 140.00 ± 6% interrupts.CPU40.PMI:Performance_monitoring_interrupts
6166 ± 22% -92.2% 480.25 ± 68% interrupts.CPU40.RES:Rescheduling_interrupts
15471 ± 16% -73.8% 4051 ± 54% interrupts.CPU41.CAL:Function_call_interrupts
1787 ± 42% -92.3% 138.00 ± 5% interrupts.CPU41.NMI:Non-maskable_interrupts
1787 ± 42% -92.3% 138.00 ± 5% interrupts.CPU41.PMI:Performance_monitoring_interrupts
7041 ± 14% -90.7% 657.00 ± 56% interrupts.CPU41.RES:Rescheduling_interrupts
15891 ± 22% -73.6% 4201 ± 48% interrupts.CPU42.CAL:Function_call_interrupts
1134 ± 20% -87.1% 146.50 ± 12% interrupts.CPU42.NMI:Non-maskable_interrupts
1134 ± 20% -87.1% 146.50 ± 12% interrupts.CPU42.PMI:Performance_monitoring_interrupts
7381 ± 33% -91.7% 616.25 ± 48% interrupts.CPU42.RES:Rescheduling_interrupts
15756 ± 22% -76.7% 3670 ± 27% interrupts.CPU43.CAL:Function_call_interrupts
1607 ± 48% -77.9% 355.00 ±102% interrupts.CPU43.NMI:Non-maskable_interrupts
1607 ± 48% -77.9% 355.00 ±102% interrupts.CPU43.PMI:Performance_monitoring_interrupts
6370 ± 18% -90.6% 596.00 ± 38% interrupts.CPU43.RES:Rescheduling_interrupts
15035 ± 25% -81.3% 2804 ± 16% interrupts.CPU44.CAL:Function_call_interrupts
1982 ± 16% -93.0% 139.25 ± 6% interrupts.CPU44.NMI:Non-maskable_interrupts
1982 ± 16% -93.0% 139.25 ± 6% interrupts.CPU44.PMI:Performance_monitoring_interrupts
6308 ± 20% -93.4% 414.25 ± 20% interrupts.CPU44.RES:Rescheduling_interrupts
14638 ± 14% -74.9% 3669 ± 28% interrupts.CPU45.CAL:Function_call_interrupts
1670 ± 57% -91.4% 143.75 ± 6% interrupts.CPU45.NMI:Non-maskable_interrupts
1670 ± 57% -91.4% 143.75 ± 6% interrupts.CPU45.PMI:Performance_monitoring_interrupts
7048 ± 30% -90.8% 651.75 ± 55% interrupts.CPU45.RES:Rescheduling_interrupts
14749 ± 21% -73.5% 3915 ± 43% interrupts.CPU46.CAL:Function_call_interrupts
1747 ± 43% -92.0% 139.00 ± 4% interrupts.CPU46.NMI:Non-maskable_interrupts
1747 ± 43% -92.0% 139.00 ± 4% interrupts.CPU46.PMI:Performance_monitoring_interrupts
6470 ± 25% -90.4% 621.75 ± 57% interrupts.CPU46.RES:Rescheduling_interrupts
15797 ± 25% -79.3% 3275 ± 36% interrupts.CPU47.CAL:Function_call_interrupts
1555 ± 59% -90.1% 154.50 ± 23% interrupts.CPU47.NMI:Non-maskable_interrupts
1555 ± 59% -90.1% 154.50 ± 23% interrupts.CPU47.PMI:Performance_monitoring_interrupts
6714 ± 25% -91.4% 575.00 ± 40% interrupts.CPU47.RES:Rescheduling_interrupts
27685 ± 13% -84.3% 4350 ± 77% interrupts.CPU48.CAL:Function_call_interrupts
2692 ± 31% -94.5% 147.75 ± 20% interrupts.CPU48.NMI:Non-maskable_interrupts
2692 ± 31% -94.5% 147.75 ± 20% interrupts.CPU48.PMI:Performance_monitoring_interrupts
2545 ± 14% -86.7% 337.75 ± 69% interrupts.CPU48.RES:Rescheduling_interrupts
25352 ± 15% -86.9% 3325 ± 46% interrupts.CPU49.CAL:Function_call_interrupts
3573 ± 10% -95.9% 148.00 ± 16% interrupts.CPU49.NMI:Non-maskable_interrupts
3573 ± 10% -95.9% 148.00 ± 16% interrupts.CPU49.PMI:Performance_monitoring_interrupts
2354 ± 16% -86.6% 315.00 ± 47% interrupts.CPU49.RES:Rescheduling_interrupts
21897 ± 12% -71.4% 6261 ± 59% interrupts.CPU5.CAL:Function_call_interrupts
2817 ± 39% -95.5% 125.50 ± 21% interrupts.CPU5.NMI:Non-maskable_interrupts
2817 ± 39% -95.5% 125.50 ± 21% interrupts.CPU5.PMI:Performance_monitoring_interrupts
2314 ± 24% -78.9% 488.75 ± 65% interrupts.CPU5.RES:Rescheduling_interrupts
25362 ± 15% -82.7% 4381 ± 78% interrupts.CPU50.CAL:Function_call_interrupts
3007 ± 31% -94.6% 161.00 ± 20% interrupts.CPU50.NMI:Non-maskable_interrupts
3007 ± 31% -94.6% 161.00 ± 20% interrupts.CPU50.PMI:Performance_monitoring_interrupts
2317 ± 13% -83.7% 377.25 ± 44% interrupts.CPU50.RES:Rescheduling_interrupts
25687 ± 18% -86.0% 3604 ± 61% interrupts.CPU51.CAL:Function_call_interrupts
3220 ± 26% -95.4% 147.75 ± 20% interrupts.CPU51.NMI:Non-maskable_interrupts
3220 ± 26% -95.4% 147.75 ± 20% interrupts.CPU51.PMI:Performance_monitoring_interrupts
2342 ± 17% -87.7% 288.25 ± 50% interrupts.CPU51.RES:Rescheduling_interrupts
24904 ± 19% -86.0% 3496 ± 64% interrupts.CPU52.CAL:Function_call_interrupts
3357 ± 10% -95.6% 146.25 ± 16% interrupts.CPU52.NMI:Non-maskable_interrupts
3357 ± 10% -95.6% 146.25 ± 16% interrupts.CPU52.PMI:Performance_monitoring_interrupts
2233 ± 15% -88.1% 265.50 ± 57% interrupts.CPU52.RES:Rescheduling_interrupts
21446 ± 22% -86.5% 2903 ± 46% interrupts.CPU53.CAL:Function_call_interrupts
3089 ± 23% -95.2% 147.50 ± 16% interrupts.CPU53.NMI:Non-maskable_interrupts
3089 ± 23% -95.2% 147.50 ± 16% interrupts.CPU53.PMI:Performance_monitoring_interrupts
1954 ± 16% -87.2% 249.25 ± 36% interrupts.CPU53.RES:Rescheduling_interrupts
22498 ± 14% -84.6% 3467 ± 68% interrupts.CPU54.CAL:Function_call_interrupts
2756 ± 25% -95.3% 128.75 ± 37% interrupts.CPU54.NMI:Non-maskable_interrupts
2756 ± 25% -95.3% 128.75 ± 37% interrupts.CPU54.PMI:Performance_monitoring_interrupts
2081 ± 11% -84.8% 316.75 ± 63% interrupts.CPU54.RES:Rescheduling_interrupts
23205 ± 18% -82.0% 4174 ± 83% interrupts.CPU55.CAL:Function_call_interrupts
3106 ± 27% -95.9% 126.00 ± 24% interrupts.CPU55.NMI:Non-maskable_interrupts
3106 ± 27% -95.9% 126.00 ± 24% interrupts.CPU55.PMI:Performance_monitoring_interrupts
2163 ± 15% -84.4% 337.75 ± 69% interrupts.CPU55.RES:Rescheduling_interrupts
19285 ± 20% -81.8% 3501 ± 63% interrupts.CPU56.CAL:Function_call_interrupts
2991 ± 35% -95.4% 138.75 ± 7% interrupts.CPU56.NMI:Non-maskable_interrupts
2991 ± 35% -95.4% 138.75 ± 7% interrupts.CPU56.PMI:Performance_monitoring_interrupts
1850 ± 16% -85.1% 275.00 ± 46% interrupts.CPU56.RES:Rescheduling_interrupts
18151 ± 9% -82.0% 3274 ± 64% interrupts.CPU57.CAL:Function_call_interrupts
2976 ± 19% -95.3% 138.75 ± 7% interrupts.CPU57.NMI:Non-maskable_interrupts
2976 ± 19% -95.3% 138.75 ± 7% interrupts.CPU57.PMI:Performance_monitoring_interrupts
1828 ± 6% -85.8% 258.75 ± 55% interrupts.CPU57.RES:Rescheduling_interrupts
21761 ± 13% -83.3% 3641 ± 81% interrupts.CPU58.CAL:Function_call_interrupts
3947 ± 26% -91.5% 336.00 ±110% interrupts.CPU58.NMI:Non-maskable_interrupts
3947 ± 26% -91.5% 336.00 ±110% interrupts.CPU58.PMI:Performance_monitoring_interrupts
1971 ± 8% -85.3% 289.50 ± 63% interrupts.CPU58.RES:Rescheduling_interrupts
19914 ± 23% -81.2% 3738 ± 84% interrupts.CPU59.CAL:Function_call_interrupts
3791 ± 28% -95.8% 159.75 ± 25% interrupts.CPU59.NMI:Non-maskable_interrupts
3791 ± 28% -95.8% 159.75 ± 25% interrupts.CPU59.PMI:Performance_monitoring_interrupts
1853 ± 18% -85.4% 269.75 ± 60% interrupts.CPU59.RES:Rescheduling_interrupts
20720 ± 13% -74.2% 5338 ± 34% interrupts.CPU6.CAL:Function_call_interrupts
2417 ± 40% -94.5% 133.50 ± 5% interrupts.CPU6.NMI:Non-maskable_interrupts
2417 ± 40% -94.5% 133.50 ± 5% interrupts.CPU6.PMI:Performance_monitoring_interrupts
2090 ± 12% -78.5% 448.75 ± 41% interrupts.CPU6.RES:Rescheduling_interrupts
19313 ± 18% -83.3% 3231 ± 51% interrupts.CPU60.CAL:Function_call_interrupts
3227 ± 27% -95.8% 134.00 ± 6% interrupts.CPU60.NMI:Non-maskable_interrupts
3227 ± 27% -95.8% 134.00 ± 6% interrupts.CPU60.PMI:Performance_monitoring_interrupts
1869 ± 14% -83.8% 302.75 ± 45% interrupts.CPU60.RES:Rescheduling_interrupts
21150 ± 25% -83.4% 3520 ± 59% interrupts.CPU61.CAL:Function_call_interrupts
3367 ± 27% -96.0% 135.75 ± 11% interrupts.CPU61.NMI:Non-maskable_interrupts
3367 ± 27% -96.0% 135.75 ± 11% interrupts.CPU61.PMI:Performance_monitoring_interrupts
1922 ± 23% -84.1% 306.25 ± 41% interrupts.CPU61.RES:Rescheduling_interrupts
18774 ± 19% -82.3% 3318 ± 58% interrupts.CPU62.CAL:Function_call_interrupts
3021 ± 24% -95.2% 145.25 ± 17% interrupts.CPU62.NMI:Non-maskable_interrupts
3021 ± 24% -95.2% 145.25 ± 17% interrupts.CPU62.PMI:Performance_monitoring_interrupts
1771 ± 10% -86.4% 241.50 ± 33% interrupts.CPU62.RES:Rescheduling_interrupts
18631 ± 15% -85.3% 2731 ± 44% interrupts.CPU63.CAL:Function_call_interrupts
2390 ± 45% -94.9% 121.25 ± 29% interrupts.CPU63.NMI:Non-maskable_interrupts
2390 ± 45% -94.9% 121.25 ± 29% interrupts.CPU63.PMI:Performance_monitoring_interrupts
1840 ± 13% -86.8% 242.25 ± 49% interrupts.CPU63.RES:Rescheduling_interrupts
18970 ± 26% -82.6% 3304 ± 59% interrupts.CPU64.CAL:Function_call_interrupts
2460 ± 33% -94.4% 138.00 ± 6% interrupts.CPU64.NMI:Non-maskable_interrupts
2460 ± 33% -94.4% 138.00 ± 6% interrupts.CPU64.PMI:Performance_monitoring_interrupts
6825 ± 23% -91.3% 593.75 ± 76% interrupts.CPU64.RES:Rescheduling_interrupts
20966 ± 25% -84.4% 3264 ± 58% interrupts.CPU65.CAL:Function_call_interrupts
3955 ± 31% -96.5% 136.50 ± 39% interrupts.CPU65.NMI:Non-maskable_interrupts
3955 ± 31% -96.5% 136.50 ± 39% interrupts.CPU65.PMI:Performance_monitoring_interrupts
8719 ± 23% -94.5% 480.75 ± 66% interrupts.CPU65.RES:Rescheduling_interrupts
21173 ± 24% -86.4% 2881 ± 32% interrupts.CPU66.CAL:Function_call_interrupts
3216 ± 13% -96.8% 102.50 ± 33% interrupts.CPU66.NMI:Non-maskable_interrupts
3216 ± 13% -96.8% 102.50 ± 33% interrupts.CPU66.PMI:Performance_monitoring_interrupts
8930 ± 19% -95.5% 397.75 ± 55% interrupts.CPU66.RES:Rescheduling_interrupts
18130 ± 11% -80.1% 3604 ± 58% interrupts.CPU67.CAL:Function_call_interrupts
3276 ± 15% -96.0% 130.75 ± 35% interrupts.CPU67.NMI:Non-maskable_interrupts
3276 ± 15% -96.0% 130.75 ± 35% interrupts.CPU67.PMI:Performance_monitoring_interrupts
7428 ± 11% -92.9% 526.75 ± 54% interrupts.CPU67.RES:Rescheduling_interrupts
19018 ± 17% -86.6% 2540 ± 32% interrupts.CPU68.CAL:Function_call_interrupts
3240 ± 44% -95.4% 149.75 ± 20% interrupts.CPU68.NMI:Non-maskable_interrupts
3240 ± 44% -95.4% 149.75 ± 20% interrupts.CPU68.PMI:Performance_monitoring_interrupts
8311 ± 21% -94.7% 444.25 ± 41% interrupts.CPU68.RES:Rescheduling_interrupts
19564 ± 24% -84.0% 3134 ± 51% interrupts.CPU69.CAL:Function_call_interrupts
2678 ± 47% -94.6% 145.25 ± 45% interrupts.CPU69.NMI:Non-maskable_interrupts
2678 ± 47% -94.6% 145.25 ± 45% interrupts.CPU69.PMI:Performance_monitoring_interrupts
8286 ± 24% -94.4% 468.00 ± 61% interrupts.CPU69.RES:Rescheduling_interrupts
20919 ± 32% -76.3% 4951 ± 22% interrupts.CPU7.CAL:Function_call_interrupts
3759 ± 16% -96.9% 117.50 ± 23% interrupts.CPU7.NMI:Non-maskable_interrupts
3759 ± 16% -96.9% 117.50 ± 23% interrupts.CPU7.PMI:Performance_monitoring_interrupts
2132 ± 30% -82.4% 374.50 ± 24% interrupts.CPU7.RES:Rescheduling_interrupts
21098 ± 15% -85.2% 3129 ± 63% interrupts.CPU70.CAL:Function_call_interrupts
3114 ± 27% -95.3% 145.50 ± 16% interrupts.CPU70.NMI:Non-maskable_interrupts
3114 ± 27% -95.3% 145.50 ± 16% interrupts.CPU70.PMI:Performance_monitoring_interrupts
8084 ± 14% -92.9% 573.00 ± 67% interrupts.CPU70.RES:Rescheduling_interrupts
21477 ± 29% -85.2% 3169 ± 54% interrupts.CPU71.CAL:Function_call_interrupts
2979 ± 30% -94.9% 152.25 ± 23% interrupts.CPU71.NMI:Non-maskable_interrupts
2979 ± 30% -94.9% 152.25 ± 23% interrupts.CPU71.PMI:Performance_monitoring_interrupts
8612 ± 32% -93.5% 558.50 ± 62% interrupts.CPU71.RES:Rescheduling_interrupts
23511 ± 16% -79.3% 4873 ± 79% interrupts.CPU72.CAL:Function_call_interrupts
3216 ± 17% -96.3% 117.50 ± 50% interrupts.CPU72.NMI:Non-maskable_interrupts
3216 ± 17% -96.3% 117.50 ± 50% interrupts.CPU72.PMI:Performance_monitoring_interrupts
2401 ± 12% -83.3% 400.75 ± 60% interrupts.CPU72.RES:Rescheduling_interrupts
24986 ± 18% -83.7% 4074 ± 65% interrupts.CPU73.CAL:Function_call_interrupts
1962 ± 28% -93.6% 126.25 ± 32% interrupts.CPU73.NMI:Non-maskable_interrupts
1962 ± 28% -93.6% 126.25 ± 32% interrupts.CPU73.PMI:Performance_monitoring_interrupts
2638 ± 17% -84.5% 408.75 ± 51% interrupts.CPU73.RES:Rescheduling_interrupts
74.25 ± 36% +135.0% 174.50 ± 49% interrupts.CPU73.TLB:TLB_shootdowns
23981 ± 20% -81.5% 4439 ± 65% interrupts.CPU74.CAL:Function_call_interrupts
2390 ± 48% -93.8% 148.75 ± 17% interrupts.CPU74.NMI:Non-maskable_interrupts
2390 ± 48% -93.8% 148.75 ± 17% interrupts.CPU74.PMI:Performance_monitoring_interrupts
2423 ± 19% -87.3% 308.50 ± 46% interrupts.CPU74.RES:Rescheduling_interrupts
24170 ± 19% -77.9% 5338 ± 82% interrupts.CPU75.CAL:Function_call_interrupts
2544 ± 60% -94.1% 149.50 ± 16% interrupts.CPU75.NMI:Non-maskable_interrupts
2544 ± 60% -94.1% 149.50 ± 16% interrupts.CPU75.PMI:Performance_monitoring_interrupts
2517 ± 25% -78.8% 534.25 ± 92% interrupts.CPU75.RES:Rescheduling_interrupts
21633 ± 15% -78.5% 4650 ± 62% interrupts.CPU76.CAL:Function_call_interrupts
1945 ± 26% -92.3% 150.50 ± 19% interrupts.CPU76.NMI:Non-maskable_interrupts
1945 ± 26% -92.3% 150.50 ± 19% interrupts.CPU76.PMI:Performance_monitoring_interrupts
2346 ± 13% -83.0% 398.25 ± 55% interrupts.CPU76.RES:Rescheduling_interrupts
21442 ± 9% -78.9% 4515 ± 66% interrupts.CPU77.CAL:Function_call_interrupts
1859 ± 59% -92.2% 145.50 ± 12% interrupts.CPU77.NMI:Non-maskable_interrupts
1859 ± 59% -92.2% 145.50 ± 12% interrupts.CPU77.PMI:Performance_monitoring_interrupts
2347 ± 12% -84.2% 371.00 ± 52% interrupts.CPU77.RES:Rescheduling_interrupts
23015 ± 20% -76.4% 5442 ± 87% interrupts.CPU78.CAL:Function_call_interrupts
2129 ± 51% -83.1% 360.25 ±120% interrupts.CPU78.NMI:Non-maskable_interrupts
2129 ± 51% -83.1% 360.25 ±120% interrupts.CPU78.PMI:Performance_monitoring_interrupts
2326 ± 18% -83.3% 388.50 ± 70% interrupts.CPU78.RES:Rescheduling_interrupts
43.25 ± 73% +247.4% 150.25 ± 72% interrupts.CPU78.TLB:TLB_shootdowns
20738 ± 8% -78.2% 4511 ± 83% interrupts.CPU79.CAL:Function_call_interrupts
1753 ± 36% -93.5% 114.00 ± 49% interrupts.CPU79.NMI:Non-maskable_interrupts
1753 ± 36% -93.5% 114.00 ± 49% interrupts.CPU79.PMI:Performance_monitoring_interrupts
2285 ± 13% -85.1% 339.50 ± 76% interrupts.CPU79.RES:Rescheduling_interrupts
17974 ± 24% -71.1% 5189 ± 35% interrupts.CPU8.CAL:Function_call_interrupts
3188 ± 7% -96.2% 121.75 ± 23% interrupts.CPU8.NMI:Non-maskable_interrupts
3188 ± 7% -96.2% 121.75 ± 23% interrupts.CPU8.PMI:Performance_monitoring_interrupts
1897 ± 19% -78.3% 411.50 ± 34% interrupts.CPU8.RES:Rescheduling_interrupts
21734 ± 18% -79.7% 4411 ± 79% interrupts.CPU80.CAL:Function_call_interrupts
2507 ± 39% -95.8% 106.50 ± 41% interrupts.CPU80.NMI:Non-maskable_interrupts
2507 ± 39% -95.8% 106.50 ± 41% interrupts.CPU80.PMI:Performance_monitoring_interrupts
2268 ± 17% -81.7% 415.00 ± 85% interrupts.CPU80.RES:Rescheduling_interrupts
50.25 ± 45% +217.4% 159.50 ± 54% interrupts.CPU80.TLB:TLB_shootdowns
20704 ± 16% -76.5% 4873 ± 81% interrupts.CPU81.CAL:Function_call_interrupts
2001 ± 32% -94.1% 117.25 ± 26% interrupts.CPU81.NMI:Non-maskable_interrupts
2001 ± 32% -94.1% 117.25 ± 26% interrupts.CPU81.PMI:Performance_monitoring_interrupts
2211 ± 14% -79.8% 447.50 ± 91% interrupts.CPU81.RES:Rescheduling_interrupts
19770 ± 15% -81.4% 3671 ± 71% interrupts.CPU82.CAL:Function_call_interrupts
2293 ± 36% -94.8% 119.25 ± 26% interrupts.CPU82.NMI:Non-maskable_interrupts
2293 ± 36% -94.8% 119.25 ± 26% interrupts.CPU82.PMI:Performance_monitoring_interrupts
2021 ± 16% -84.7% 309.00 ± 80% interrupts.CPU82.RES:Rescheduling_interrupts
20443 ± 8% -83.3% 3415 ± 59% interrupts.CPU83.CAL:Function_call_interrupts
2713 ± 25% -95.3% 127.25 ± 31% interrupts.CPU83.NMI:Non-maskable_interrupts
2713 ± 25% -95.3% 127.25 ± 31% interrupts.CPU83.PMI:Performance_monitoring_interrupts
2074 ± 13% -87.2% 264.50 ± 54% interrupts.CPU83.RES:Rescheduling_interrupts
19883 ± 26% -84.5% 3080 ± 50% interrupts.CPU84.CAL:Function_call_interrupts
2671 ± 35% -95.6% 117.25 ± 28% interrupts.CPU84.NMI:Non-maskable_interrupts
2671 ± 35% -95.6% 117.25 ± 28% interrupts.CPU84.PMI:Performance_monitoring_interrupts
2065 ± 27% -86.3% 282.50 ± 49% interrupts.CPU84.RES:Rescheduling_interrupts
39.75 ± 62% +298.7% 158.50 ± 62% interrupts.CPU84.TLB:TLB_shootdowns
17946 ± 11% -80.6% 3486 ± 59% interrupts.CPU85.CAL:Function_call_interrupts
2036 ± 25% -94.0% 123.00 ± 29% interrupts.CPU85.NMI:Non-maskable_interrupts
2036 ± 25% -94.0% 123.00 ± 29% interrupts.CPU85.PMI:Performance_monitoring_interrupts
1909 ± 11% -83.8% 310.00 ± 54% interrupts.CPU85.RES:Rescheduling_interrupts
19381 ± 26% -78.6% 4148 ± 63% interrupts.CPU86.CAL:Function_call_interrupts
1969 ± 63% -93.8% 123.00 ± 28% interrupts.CPU86.NMI:Non-maskable_interrupts
1969 ± 63% -93.8% 123.00 ± 28% interrupts.CPU86.PMI:Performance_monitoring_interrupts
1964 ± 23% -85.0% 294.50 ± 46% interrupts.CPU86.RES:Rescheduling_interrupts
43.00 ± 30% +285.5% 165.75 ± 59% interrupts.CPU86.TLB:TLB_shootdowns
17223 ± 22% -79.6% 3507 ± 81% interrupts.CPU87.CAL:Function_call_interrupts
2154 ± 77% -94.5% 118.75 ± 26% interrupts.CPU87.NMI:Non-maskable_interrupts
2154 ± 77% -94.5% 118.75 ± 26% interrupts.CPU87.PMI:Performance_monitoring_interrupts
1905 ± 20% -86.3% 260.75 ± 69% interrupts.CPU87.RES:Rescheduling_interrupts
52.50 ± 40% +156.2% 134.50 ± 87% interrupts.CPU87.TLB:TLB_shootdowns
18627 ± 15% -79.2% 3878 ± 75% interrupts.CPU88.CAL:Function_call_interrupts
2313 ± 43% -94.6% 124.50 ± 27% interrupts.CPU88.NMI:Non-maskable_interrupts
2313 ± 43% -94.6% 124.50 ± 27% interrupts.CPU88.PMI:Performance_monitoring_interrupts
8788 ± 34% -93.6% 566.75 ± 66% interrupts.CPU88.RES:Rescheduling_interrupts
20205 ± 18% -80.6% 3911 ± 77% interrupts.CPU89.CAL:Function_call_interrupts
2017 ± 40% -93.7% 126.75 ± 28% interrupts.CPU89.NMI:Non-maskable_interrupts
2017 ± 40% -93.7% 126.75 ± 28% interrupts.CPU89.PMI:Performance_monitoring_interrupts
9440 ± 31% -93.1% 649.00 ± 72% interrupts.CPU89.RES:Rescheduling_interrupts
27.75 ± 37% +523.4% 173.00 ± 53% interrupts.CPU89.TLB:TLB_shootdowns
19523 ± 15% -72.7% 5326 ± 36% interrupts.CPU9.CAL:Function_call_interrupts
3187 ± 23% -96.3% 116.50 ± 21% interrupts.CPU9.NMI:Non-maskable_interrupts
3187 ± 23% -96.3% 116.50 ± 21% interrupts.CPU9.PMI:Performance_monitoring_interrupts
2068 ± 15% -79.4% 425.25 ± 49% interrupts.CPU9.RES:Rescheduling_interrupts
18042 ± 23% -73.9% 4709 ± 57% interrupts.CPU90.CAL:Function_call_interrupts
2466 ± 57% -94.7% 129.50 ± 6% interrupts.CPU90.NMI:Non-maskable_interrupts
2466 ± 57% -94.7% 129.50 ± 6% interrupts.CPU90.PMI:Performance_monitoring_interrupts
8430 ± 36% -92.7% 613.25 ± 77% interrupts.CPU90.RES:Rescheduling_interrupts
17473 ± 14% -81.0% 3318 ± 58% interrupts.CPU91.CAL:Function_call_interrupts
1553 ± 31% -88.9% 172.25 ± 33% interrupts.CPU91.NMI:Non-maskable_interrupts
1553 ± 31% -88.9% 172.25 ± 33% interrupts.CPU91.PMI:Performance_monitoring_interrupts
7125 ± 20% -91.2% 626.25 ± 70% interrupts.CPU91.RES:Rescheduling_interrupts
20911 ± 23% -81.9% 3789 ± 65% interrupts.CPU92.CAL:Function_call_interrupts
2315 ± 50% -93.6% 147.50 ± 15% interrupts.CPU92.NMI:Non-maskable_interrupts
2315 ± 50% -93.6% 147.50 ± 15% interrupts.CPU92.PMI:Performance_monitoring_interrupts
8439 ± 31% -92.7% 617.75 ± 70% interrupts.CPU92.RES:Rescheduling_interrupts
42.75 ± 48% +369.0% 200.50 ± 45% interrupts.CPU92.TLB:TLB_shootdowns
19367 ± 15% -84.5% 3004 ± 48% interrupts.CPU93.CAL:Function_call_interrupts
2073 ± 59% -92.7% 150.75 ± 19% interrupts.CPU93.NMI:Non-maskable_interrupts
2073 ± 59% -92.7% 150.75 ± 19% interrupts.CPU93.PMI:Performance_monitoring_interrupts
8356 ± 25% -93.0% 581.75 ± 72% interrupts.CPU93.RES:Rescheduling_interrupts
36.00 ± 50% +311.1% 148.00 ± 68% interrupts.CPU93.TLB:TLB_shootdowns
18667 ± 21% -76.4% 4406 ± 69% interrupts.CPU94.CAL:Function_call_interrupts
2468 ± 55% -92.1% 195.25 ± 75% interrupts.CPU94.NMI:Non-maskable_interrupts
2468 ± 55% -92.1% 195.25 ± 75% interrupts.CPU94.PMI:Performance_monitoring_interrupts
8573 ± 31% -93.1% 592.50 ± 75% interrupts.CPU94.RES:Rescheduling_interrupts
19587 ± 18% -78.4% 4229 ± 77% interrupts.CPU95.CAL:Function_call_interrupts
2375 ± 51% -94.1% 139.00 ± 39% interrupts.CPU95.NMI:Non-maskable_interrupts
2375 ± 51% -94.1% 139.00 ± 39% interrupts.CPU95.PMI:Performance_monitoring_interrupts
8456 ± 23% -91.0% 761.00 ± 83% interrupts.CPU95.RES:Rescheduling_interrupts
24183 ± 17% -83.4% 4014 ± 22% interrupts.CPU96.CAL:Function_call_interrupts
2574 ± 24% -94.7% 136.50 ± 5% interrupts.CPU96.NMI:Non-maskable_interrupts
2574 ± 24% -94.7% 136.50 ± 5% interrupts.CPU96.PMI:Performance_monitoring_interrupts
11095 ± 20% -90.5% 1056 ± 37% interrupts.CPU96.RES:Rescheduling_interrupts
70.00 ± 67% +141.4% 169.00 ± 7% interrupts.CPU96.TLB:TLB_shootdowns
27056 ± 18% -81.5% 5000 ± 35% interrupts.CPU97.CAL:Function_call_interrupts
3684 ± 17% -96.5% 130.00 ± 5% interrupts.CPU97.NMI:Non-maskable_interrupts
3684 ± 17% -96.5% 130.00 ± 5% interrupts.CPU97.PMI:Performance_monitoring_interrupts
12281 ± 22% -87.8% 1497 ± 39% interrupts.CPU97.RES:Rescheduling_interrupts
59.25 ± 63% +166.7% 158.00 ± 19% interrupts.CPU97.TLB:TLB_shootdowns
23959 ± 12% -77.7% 5352 ± 32% interrupts.CPU98.CAL:Function_call_interrupts
3860 ± 30% -97.0% 116.50 ± 26% interrupts.CPU98.NMI:Non-maskable_interrupts
3860 ± 30% -97.0% 116.50 ± 26% interrupts.CPU98.PMI:Performance_monitoring_interrupts
10771 ± 20% -87.2% 1377 ± 33% interrupts.CPU98.RES:Rescheduling_interrupts
69.75 ± 69% +157.3% 179.50 ± 3% interrupts.CPU98.TLB:TLB_shootdowns
23752 ± 22% -80.8% 4554 ± 26% interrupts.CPU99.CAL:Function_call_interrupts
3486 ± 16% -96.0% 139.25 ± 7% interrupts.CPU99.NMI:Non-maskable_interrupts
3486 ± 16% -96.0% 139.25 ± 7% interrupts.CPU99.PMI:Performance_monitoring_interrupts
12430 ± 22% -89.7% 1281 ± 17% interrupts.CPU99.RES:Rescheduling_interrupts
60.75 ± 62% +179.0% 169.50 ± 9% interrupts.CPU99.TLB:TLB_shootdowns
516431 ± 8% -94.7% 27204 ± 10% interrupts.NMI:Non-maskable_interrupts
516431 ± 8% -94.7% 27204 ± 10% interrupts.PMI:Performance_monitoring_interrupts
1530961 -90.2% 149797 ± 7% interrupts.RES:Rescheduling_interrupts
19909 ± 24% +56.9% 31231 ± 15% interrupts.TLB:TLB_shootdowns
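
The rows above are the per-CPU RES (rescheduling IPI), NMI/PMI, CAL (function
call IPI) and TLB-shootdown counters, which on x86 correspond to the
RES/NMI/PMI/CAL/TLB rows of /proc/interrupts; the last four lines are the sums
over all CPUs. As a rough way to cross-check such a total outside the lkp
harness, the RES row can be summed directly from /proc/interrupts -- a minimal
sketch assuming only standard libc; the lkp monitors collect these counters
their own way:

/* res_total.c - sum the per-CPU RES (rescheduling IPI) column of
 * /proc/interrupts.  Minimal sketch for cross-checking the RES total
 * above; not how lkp itself gathers the numbers. */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* One line per interrupt source; sized generously for many CPUs. */
    char line[16384];
    unsigned long long total = 0;
    FILE *f = fopen("/proc/interrupts", "r");

    if (!f) {
        perror("/proc/interrupts");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        char *p = line;

        while (isspace((unsigned char)*p))
            p++;
        if (strncmp(p, "RES:", 4) != 0)
            continue;
        p += 4;
        /* Add up every numeric column; the trailing description
         * ("Rescheduling interrupts") ends the scan. */
        for (;;) {
            char *end;
            unsigned long long v = strtoull(p, &end, 10);

            if (end == p)
                break;
            total += v;
            p = end;
        }
        break;
    }
    fclose(f);
    printf("RES total: %llu\n", total);
    return 0;
}
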
fio.write_bw_MBps
350 +---------------------------------------------------------------------+
| + +++ ++ + + + +|
300 |-+ + :: : + ::::+ |
| :+ :+ : + + |
| : : : : : |
250 |-+ + + :+ + :+ + |
|++++++++++++++++++++++++++++++++++++++++ + + + + + |
200 |-+ |
| |
150 |-+ O |
| OO O O O O |
| OO OO OOOOO O OOO O O O O OO OO |
100 |O+ O O OOO OO |
| O O O |
50 +---------------------------------------------------------------------+
fio.write_iops
90000 +-------------------------------------------------------------------+
| + ++++ +++ + + +|
80000 |-+ + :: : + :::+ |
70000 |-+ :+ :+ : ++ |
| : : :: : |
60000 |-+ + + ++ + + + :+ + :+ + |
|++++++++++ ++++++++++ + +++ ++++++++++ + + + + + |
50000 |-+ |
| |
40000 |-+ O |
30000 |-+ OO OOOO OO O O O |
| OO O OO O OO OOO O O OOOOOO |
20000 |O+ O O OO O |
| O O |
10000 +-------------------------------------------------------------------+
fio.write_clat_mean_us
1.1e+07 +-----------------------------------------------------------------+
| O O |
1e+07 |-+ |
9e+06 |-+ O O O |
|O O O |
8e+06 |-+ O |
7e+06 |-+ O O O O O OOOOOO |
| OO O O O O O OO O |
6e+06 |-+ O OO O OO O O |
5e+06 |-+ O O |
| |
4e+06 |-+ |
3e+06 |+++++++++++++++++++++++++++++++++++++++++ ++++ ++++ |
| + :+ : + ++ ++ |
2e+06 +-----------------------------------------------------------------+
fio.write_clat_stddev
5e+08 +-----------------------------------------------------------------+
4.5e+08 |-+ |
|O O O OO O |
4e+08 |-+ O O O |
3.5e+08 |-OO OO OO O OOO O OO O OO OO |
| OO OOOO OO O O |
3e+08 |-+ O |
2.5e+08 |-+ |
2e+08 |-+ |
| |
1.5e+08 |-+ |
1e+08 |-+ + + + ++ |
| + :: : : :: |
5e+07 |+++++++++++++++++++++++++++++++++++++++++:++++: ++++ : :: :: : |
0 +-----------------------------------------------------------------+
fio.write_clat_99%_us
3.5e+07 +-----------------------------------------------------------------+
| |
3e+07 |-+ O O OO |
|O O O O O O |
2.5e+07 |-+ O OO OO OO O O O |
| OO O O O O O O O |
2e+07 |-O OO O O O O O O O |
| |
1.5e+07 |-+ |
| |
1e+07 |-+ |
| + + + + + ++++++|
5e+06 |+++ +++++++++++++++++++++++++++++++++++++++++++ +++++ +++++ |
| |
0 +-----------------------------------------------------------------+
fio.latency_4ms%
80 +----------------------------------------------------------------------+
| + + + + + + + |
70 |+++++++++ :++++++++++++++++++++++ + +++++ +: + +: ++: |
60 |-+ + + + + : : : : : |
| +: +: : + + + +|
50 |-+ + + +++ ++ ++ + + :|
| + + |
40 |-+ |
| |
30 |-+ |
20 |-+ |
| |
10 |O+ O O O O O O O |
| OOOO OOOOOOOOOOOOOOO O O O OOOO OOO OO |
0 +----------------------------------------------------------------------+
fio.latency_50ms%
1.6 +---------------------------------------------------------------------+
|O |
1.4 |-+ O O |
1.2 |-+O O O |
| O OOO O O O |
1 |-+ O O |
| OO O O O O |
0.8 |-+ O O OO O O O |
| O O O O O O O O O O |
0.6 |-+ O O |
0.4 |-+ |
| |
0.2 |-+ + |
| :: + + |
0 +---------------------------------------------------------------------+
fio.workload
3e+07 +-----------------------------------------------------------------+
| |
| ++ + + + |
2.5e+07 |-+ + + : ++ :+ :+ + +|
| + :+ : + : + |
| : :: : + |
2e+07 |-+ : : : : : |
|+++++++++++++++++++++++++++++++++++++++++ ++++ ++++ |
1.5e+07 |-+ |
| |
| O |
1e+07 |-O OOO OO OO O O O |
| O O OOOO O OOO OO O O OOOOOO |
|O O O OO O |
5e+06 +-----------------------------------------------------------------+
fio.time.user_time
75 +----------------------------------------------------------------------+
| |
70 |++ + + + + |
65 |:+ + + + + ++ + + : + + + + + + +++ + :+ :++ + :|
| : :+++ :+++ + + +++ :+ :+ +++ ++: ::++++ + + + +: ++ : :: : : :: +|
60 |-:+ + + + + + + : :: : : ::+ : + |
55 |-+ + + :+ : + |
| + + |
50 |-+ |
45 |-+ |
| O OO OO |
40 |-+ OOO OO OOO OOO O O O OOO OOO OOO |
35 |O+O OO OO O O |
| O O O O |
30 +----------------------------------------------------------------------+
fio.time.voluntary_context_switches
3e+07 +-----------------------------------------------------------------+
| |
| + |
2.5e+07 |-+ + ++++ ++ ++ + +|
| + :: : + :: + |
| + :+ : + |
2e+07 |-+ : : : : : |
|+++++++++++++++++++++++++++++++++++++++++ ++++ ++++ |
1.5e+07 |-+ |
| |
| O |
1e+07 |-+ OO |
| OO OO OOOOOOOOOOOO O OO OO OOOOOO |
|O O O O OO O |
5e+06 +-----------------------------------------------------------------+
fio.time.file_system_outputs
2.2e+08 +-----------------------------------------------------------------+
| + ++ + + ++ ++|
2e+08 |-+ + :: : + ::+ :+:+ |
1.8e+08 |-+ + :+ : + : + |
| : : : : + |
1.6e+08 |-+ : : : : : |
1.4e+08 |+++++++++++++++++++++++++++++++++++++++++ ++++ ++++ |
| |
1.2e+08 |-+ |
1e+08 |-+ |
| O O |
8e+07 |-O OO OOOO OO O OO OO OO OO OO |
6e+07 |-+O O O O O OO O O O |
|O O O O O |
4e+07 +-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
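
One reading note on the two throughput panels: with bs fixed at 4k,
fio.write_bw_MBps and fio.write_iops are the same measurement in different
units, since write_bw_MBps = write_iops * 4096 / 2^20 = write_iops / 256
(e.g. 50633 IOPS <-> 197.79 MBps and 59913 IOPS <-> 234.04 MBps in the
comparison further down), so the bandwidth and IOPS drops shown above are a
single regression, not two.
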
***************************************************************************************************
lkp-csl-2ap1: 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/ucode:
4k/gcc-9/performance/1SSD/btrfs/sync/x86_64-rhel-8.3/8/debian-10.4-x86_64-20200603.cgz/300s/randwrite/lkp-csl-2ap1/256g/fio-basic/0x4002f01
commit:
6437634e75 ("btrfs: improve device scanning messages")
ba157afd0d ("btrfs: switch extent buffer tree lock to rw_semaphore")
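
In the table below the left column is the parent commit 6437634e75 and the
right column the tested rw_semaphore commit ba157afd0d, matching the order of
the two ids printed underneath. For context on the change being tested: an
rw_semaphore allows any number of concurrent readers while writers are
exclusive; in the kernel this is struct rw_semaphore taken with
down_read()/up_read() and down_write()/up_write(). The sketch below is only a
userspace analogy of those reader/writer semantics using pthread_rwlock_t --
it is not the btrfs patch, and every name in it is made up for illustration:

/* rwsem_analogy.c - userspace illustration of reader/writer lock
 * semantics.  The kernel equivalent is struct rw_semaphore with
 * down_read()/up_read() and down_write()/up_write(); this is NOT the
 * btrfs extent buffer change, just the general idea: readers share the
 * lock, the writer excludes everyone. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t tree_lock = PTHREAD_RWLOCK_INITIALIZER;
static int shared_state;

static void *reader(void *arg)
{
    /* Analogous to down_read(): many readers may hold this at once. */
    pthread_rwlock_rdlock(&tree_lock);
    printf("reader %ld sees %d\n", (long)arg, shared_state);
    pthread_rwlock_unlock(&tree_lock);
    return NULL;
}

static void *writer(void *arg)
{
    /* Analogous to down_write(): exclusive against readers and writers. */
    (void)arg;
    pthread_rwlock_wrlock(&tree_lock);
    shared_state++;
    pthread_rwlock_unlock(&tree_lock);
    return NULL;
}

int main(void)
{
    pthread_t t[3];

    pthread_create(&t[0], NULL, reader, (void *)1L);
    pthread_create(&t[1], NULL, writer, NULL);
    pthread_create(&t[2], NULL, reader, (void *)2L);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Build with: gcc -pthread rwsem_analogy.c
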
6437634e758641eb ba157afd0da008a5328b6cccc63
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.07 ± 2% -0.0 0.04 ± 13% fio.latency_1000us%
14.17 ± 6% +4.7 18.89 ± 5% fio.latency_100us%
0.75 ± 4% -0.1 0.63 ± 3% fio.latency_10us%
78.59 -3.1 75.51 fio.latency_250us%
0.08 ± 8% -0.0 0.06 ± 12% fio.latency_2ms%
0.03 ± 10% -0.0 0.02 ± 3% fio.latency_4ms%
1.76 ± 7% -1.0 0.77 ± 3% fio.latency_500us%
0.21 ± 4% -0.1 0.10 ± 11% fio.latency_750us%
1.215e+08 +18.3% 1.438e+08 fio.time.file_system_outputs
1006 +14.9% 1157 fio.time.involuntary_context_switches
128.50 +17.9% 151.50 fio.time.percent_of_cpu_this_job_got
323.36 ± 2% +19.1% 385.24 ± 2% fio.time.system_time
65.53 ± 2% +11.1% 72.78 ± 5% fio.time.user_time
15070185 +18.4% 17844674 fio.time.voluntary_context_switches
15190095 +18.3% 17974123 fio.workload
197.79 +18.3% 234.04 fio.write_bw_MBps
193024 -17.2% 159744 fio.write_clat_90%_us
213504 -18.2% 174592 fio.write_clat_95%_us
325632 -23.1% 250368 fio.write_clat_99%_us
156502 -15.7% 132004 fio.write_clat_mean_us
4793126 ± 4% -22.3% 3725510 ± 2% fio.write_clat_stddev
50633 +18.3% 59913 fio.write_iops
3.20 +11.8% 3.58 ± 3% iostat.cpu.system
1.78 +0.3 2.07 mpstat.cpu.all.sys%
3827675 ± 54% +92.3% 7361611 ± 22% numa-numastat.node3.local_node
3851078 ± 53% +91.8% 7384998 ± 22% numa-numastat.node3.numa_hit
163.25 ± 38% +194.6% 481.00 ± 68% numa-vmstat.node0.nr_page_table_pages
38226 ± 89% +174.3% 104837 ± 9% numa-vmstat.node0.nr_slab_reclaimable
169894 ± 34% +50.4% 255474 ± 16% numa-vmstat.node1.nr_slab_unreclaimable
221.75 ± 44% +59.2% 353.00 ± 17% numa-vmstat.node3.nr_writeback
241769 +16.7% 282251 vmstat.io.bo
37480187 +19.5% 44800114 vmstat.memory.cache
430020 -18.0% 352720 vmstat.system.cs
441754 +1.8% 449817 vmstat.system.in
7.506e+08 -18.2% 6.144e+08 cpuidle.C1.time
47184037 -27.3% 34282960 cpuidle.C1.usage
3.623e+08 ± 96% +5647.3% 2.082e+10 ± 95% cpuidle.C6.time
564786 ± 75% +4656.2% 26862191 ± 95% cpuidle.C6.usage
21842036 ± 4% +12.3% 24528720 cpuidle.POLL.time
152900 ± 89% +174.3% 419348 ± 9% numa-meminfo.node0.KReclaimable
6389 ± 3% +4410.7% 288188 ±168% numa-meminfo.node0.Mapped
658.00 ± 37% +193.0% 1927 ± 68% numa-meminfo.node0.PageTables
152900 ± 89% +174.3% 419348 ± 9% numa-meminfo.node0.SReclaimable
833768 ± 24% +44.4% 1203734 ± 8% numa-meminfo.node0.Slab
679242 ± 34% +50.4% 1021808 ± 16% numa-meminfo.node1.SUnreclaim
925966 ± 12% -12.3% 812493 ± 7% numa-meminfo.node2.Slab
938.00 ± 43% +54.6% 1450 ± 15% numa-meminfo.node3.Writeback
3511583 ± 2% +28.4% 4508862 ± 3% meminfo.Active
3501864 ± 2% +28.5% 4499529 ± 3% meminfo.Active(file)
36621950 +20.1% 43966118 meminfo.Cached
1668180 +13.9% 1900527 meminfo.Dirty
32411297 +19.6% 38757085 meminfo.Inactive
30978115 +20.5% 37326534 meminfo.Inactive(file)
41635020 +18.7% 49411230 meminfo.Memused
2799905 +15.7% 3238870 meminfo.SUnreclaim
3499525 +12.9% 3950736 meminfo.Slab
255609 +17.9% 301389 meminfo.max_used_kB
6123752 +14.7% 7023134 slabinfo.Acpi-State.active_objs
120114 +14.7% 137752 slabinfo.Acpi-State.active_slabs
6125856 +14.7% 7025413 slabinfo.Acpi-State.num_objs
120114 +14.7% 137752 slabinfo.Acpi-State.num_slabs
3712 ± 2% -13.6% 3207 ± 7% slabinfo.kmalloc-rcl-512.active_objs
3712 ± 2% -13.6% 3207 ± 7% slabinfo.kmalloc-rcl-512.num_objs
157643 ± 2% -99.8% 248.00 slabinfo.numa_policy.active_objs
2584 -99.8% 4.00 slabinfo.numa_policy.active_slabs
160256 -99.8% 248.00 slabinfo.numa_policy.num_objs
2584 -99.8% 4.00 slabinfo.numa_policy.num_slabs
13573 ± 2% +11.0% 15066 ± 3% slabinfo.pde_opener.active_objs
13573 ± 2% +11.0% 15066 ± 3% slabinfo.pde_opener.num_objs
13640372 +18.1% 16107233 slabinfo.pid_namespace.active_objs
243599 +18.1% 287653 slabinfo.pid_namespace.active_slabs
13641605 +18.1% 16108626 slabinfo.pid_namespace.num_objs
243599 +18.1% 287653 slabinfo.pid_namespace.num_slabs
876611 ± 2% +28.4% 1125257 ± 3% proc-vmstat.nr_active_file
18925863 +16.7% 22080924 proc-vmstat.nr_dirtied
417077 +13.9% 474996 proc-vmstat.nr_dirty
9165022 +20.0% 10994784 proc-vmstat.nr_file_pages
39012743 -5.0% 37075368 proc-vmstat.nr_free_pages
7752908 +20.4% 9334290 proc-vmstat.nr_inactive_file
174922 +1.7% 177970 proc-vmstat.nr_slab_reclaimable
700608 +15.6% 809902 proc-vmstat.nr_slab_unreclaimable
18369702 +16.3% 21366714 proc-vmstat.nr_written
876611 ± 2% +28.4% 1125257 ± 3% proc-vmstat.nr_zone_active_file
7752908 +20.4% 9334290 proc-vmstat.nr_zone_inactive_file
418163 +13.9% 476146 proc-vmstat.nr_zone_write_pending
19096693 +18.0% 22530593 proc-vmstat.numa_hit
19003205 +18.1% 22437066 proc-vmstat.numa_local
1330762 +32.7% 1766444 ± 5% proc-vmstat.pgactivate
19950869 +17.8% 23496254 proc-vmstat.pgalloc_normal
1489976 ± 8% +6.7% 1590186 ± 10% proc-vmstat.pgfree
73481218 +16.3% 85472143 proc-vmstat.pgpgout
973.42 ± 61% -64.0% 350.19 ±103% sched_debug.cfs_rq:/.MIN_vruntime.avg
2680 ± 5% +37.6% 3688 ± 8% sched_debug.cfs_rq:/.exec_clock.avg
12916 ± 4% +154.2% 32831 ± 14% sched_debug.cfs_rq:/.exec_clock.max
168.55 ± 6% +55.0% 261.19 ± 25% sched_debug.cfs_rq:/.exec_clock.min
2084 ± 7% +104.5% 4261 ± 10% sched_debug.cfs_rq:/.exec_clock.stddev
61371 ± 6% -18.3% 50158 ± 13% sched_debug.cfs_rq:/.load.avg
51.14 ± 5% -14.9% 43.52 ± 7% sched_debug.cfs_rq:/.load_avg.avg
973.42 ± 61% -64.0% 350.19 ±103% sched_debug.cfs_rq:/.max_vruntime.avg
0.08 ± 7% -17.5% 0.06 ± 10% sched_debug.cfs_rq:/.nr_running.avg
33109 ± 28% -84.5% 5122 ±240% sched_debug.cfs_rq:/.spread0.avg
-654.23 +2198.8% -15039 sched_debug.cfs_rq:/.spread0.min
750.80 ± 2% +14.1% 856.68 ± 5% sched_debug.cfs_rq:/.util_est_enqueued.max
163716 +9.4% 179126 ± 8% sched_debug.cpu.clock.avg
163728 +9.4% 179138 ± 8% sched_debug.cpu.clock.max
163702 +9.4% 179113 ± 8% sched_debug.cpu.clock.min
161865 +9.3% 176911 ± 8% sched_debug.cpu.clock_task.avg
162264 +9.3% 177415 ± 8% sched_debug.cpu.clock_task.max
655942 ± 14% +17.0% 767671 ± 11% sched_debug.cpu.max_idle_balance_cost.max
1185969 ± 6% -27.1% 864504 ± 17% sched_debug.cpu.nr_switches.max
1182809 ± 6% -27.1% 862149 ± 17% sched_debug.cpu.sched_count.max
619.60 ± 16% +213.5% 1942 ± 69% sched_debug.cpu.sched_count.min
591254 ± 6% -27.1% 430828 ± 17% sched_debug.cpu.sched_goidle.max
246.80 ± 19% +264.7% 900.00 ± 75% sched_debug.cpu.sched_goidle.min
12823 ± 6% +22.6% 15719 ± 11% sched_debug.cpu.ttwu_local.avg
17417 ± 6% +16.3% 20248 ± 16% sched_debug.cpu.ttwu_local.stddev
163703 +9.4% 179113 ± 8% sched_debug.cpu_clk
162829 +9.5% 178240 ± 8% sched_debug.ktime
164054 +9.4% 179483 ± 8% sched_debug.sched_clk
2.26e+09 +8.1% 2.442e+09 perf-stat.i.branch-instructions
26190572 +15.9% 30361528 ± 11% perf-stat.i.branch-misses
26374740 +14.9% 30317213 perf-stat.i.cache-misses
1.189e+08 +13.5% 1.35e+08 ± 10% perf-stat.i.cache-references
432986 -17.8% 355815 perf-stat.i.context-switches
1.945e+10 +10.2% 2.143e+10 perf-stat.i.cpu-cycles
769.13 -4.7% 732.60 ± 2% perf-stat.i.cycles-between-cache-misses
2651798 ± 13% +39.1% 3687643 ± 14% perf-stat.i.dTLB-load-misses
2.97e+09 +8.2% 3.214e+09 perf-stat.i.dTLB-loads
1.609e+09 +6.3% 1.71e+09 perf-stat.i.dTLB-stores
8223763 -5.9% 7739545 ± 2% perf-stat.i.iTLB-load-misses
15798274 -5.3% 14958432 perf-stat.i.iTLB-loads
1.139e+10 +8.6% 1.237e+10 perf-stat.i.instructions
1405 +16.2% 1633 ± 3% perf-stat.i.instructions-per-iTLB-miss
0.10 +10.2% 0.11 perf-stat.i.metric.GHz
36.39 +7.8% 39.22 perf-stat.i.metric.M/sec
82.59 +1.8 84.42 perf-stat.i.node-load-miss-rate%
8298538 +21.1% 10047651 perf-stat.i.node-load-misses
1742898 ± 4% +12.0% 1951637 ± 5% perf-stat.i.node-store-misses
737.41 -4.1% 707.29 ± 2% perf-stat.overall.cycles-between-cache-misses
1384 +15.5% 1599 ± 2% perf-stat.overall.instructions-per-iTLB-miss
82.76 +1.9 84.62 perf-stat.overall.node-load-miss-rate%
226049 -8.3% 207327 perf-stat.overall.path-length
2.253e+09 +8.1% 2.434e+09 perf-stat.ps.branch-instructions
26104481 +15.9% 30266405 ± 11% perf-stat.ps.branch-misses
26284934 +14.9% 30212652 perf-stat.ps.cache-misses
1.185e+08 +13.5% 1.345e+08 ± 10% perf-stat.ps.cache-references
431492 -17.8% 354556 perf-stat.ps.context-switches
1.938e+10 +10.2% 2.136e+10 perf-stat.ps.cpu-cycles
2642556 ± 13% +39.1% 3675119 ± 14% perf-stat.ps.dTLB-load-misses
2.96e+09 +8.2% 3.204e+09 perf-stat.ps.dTLB-loads
1.603e+09 +6.3% 1.705e+09 perf-stat.ps.dTLB-stores
8195539 -5.9% 7712710 ± 2% perf-stat.ps.iTLB-load-misses
15743480 -5.3% 14905981 perf-stat.ps.iTLB-loads
1.135e+10 +8.6% 1.233e+10 perf-stat.ps.instructions
8269469 +21.1% 10011654 perf-stat.ps.node-load-misses
1736728 ± 4% +12.0% 1944526 ± 5% perf-stat.ps.node-store-misses
3.434e+12 +8.5% 3.726e+12 perf-stat.total.instructions
1432303 ± 2% -40.4% 853396 ± 2% interrupts.CAL:Function_call_interrupts
6660 ± 26% -67.3% 2178 ± 21% interrupts.CPU0.CAL:Function_call_interrupts
896.75 ± 32% -60.2% 356.50 ± 33% interrupts.CPU0.RES:Rescheduling_interrupts
16969 ± 16% -81.0% 3217 ± 12% interrupts.CPU1.CAL:Function_call_interrupts
3042 ± 33% -77.0% 701.00 ± 34% interrupts.CPU1.RES:Rescheduling_interrupts
14041 ± 32% -59.4% 5702 ± 20% interrupts.CPU101.CAL:Function_call_interrupts
9759 ± 22% -59.7% 3930 ± 42% interrupts.CPU102.CAL:Function_call_interrupts
11347 ± 32% -50.3% 5640 ± 32% interrupts.CPU105.CAL:Function_call_interrupts
12189 ± 34% -63.1% 4497 ± 44% interrupts.CPU107.CAL:Function_call_interrupts
16969 ± 53% -69.5% 5174 ± 29% interrupts.CPU111.CAL:Function_call_interrupts
9415 ± 19% -42.9% 5379 ± 45% interrupts.CPU115.CAL:Function_call_interrupts
388.25 ± 44% +129.0% 889.00 ± 29% interrupts.CPU123.NMI:Non-maskable_interrupts
388.25 ± 44% +129.0% 889.00 ± 29% interrupts.CPU123.PMI:Performance_monitoring_interrupts
12314 ± 80% +140.1% 29571 ± 31% interrupts.CPU124.RES:Rescheduling_interrupts
8385 ± 78% +232.3% 27862 ± 44% interrupts.CPU125.RES:Rescheduling_interrupts
10329 ±104% +220.7% 33126 ± 48% interrupts.CPU127.RES:Rescheduling_interrupts
7297 ±116% +239.0% 24741 ± 44% interrupts.CPU129.RES:Rescheduling_interrupts
10697 ±100% +165.8% 28434 ± 45% interrupts.CPU131.RES:Rescheduling_interrupts
11197 ±105% +112.6% 23802 ± 49% interrupts.CPU132.RES:Rescheduling_interrupts
9719 ± 97% +183.1% 27516 ± 40% interrupts.CPU137.RES:Rescheduling_interrupts
9978 ±107% +145.0% 24444 ± 36% interrupts.CPU142.RES:Rescheduling_interrupts
207.00 ± 24% -63.8% 75.00 ± 45% interrupts.CPU146.TLB:TLB_shootdowns
205.75 ± 27% -55.7% 91.25 ± 48% interrupts.CPU147.TLB:TLB_shootdowns
175.50 ± 38% -52.3% 83.75 ± 21% interrupts.CPU153.TLB:TLB_shootdowns
8658 ± 25% -51.1% 4232 ± 43% interrupts.CPU155.CAL:Function_call_interrupts
83.75 ± 27% +114.9% 180.00 ± 27% interrupts.CPU158.NMI:Non-maskable_interrupts
83.75 ± 27% +114.9% 180.00 ± 27% interrupts.CPU158.PMI:Performance_monitoring_interrupts
184.50 ± 83% +153.9% 468.50 ± 71% interrupts.CPU164.NMI:Non-maskable_interrupts
184.50 ± 83% +153.9% 468.50 ± 71% interrupts.CPU164.PMI:Performance_monitoring_interrupts
252.00 ± 45% -66.9% 83.50 ± 47% interrupts.CPU164.TLB:TLB_shootdowns
4973 ± 67% +222.8% 16053 ± 79% interrupts.CPU165.RES:Rescheduling_interrupts
9653 ± 68% -74.3% 2484 ± 75% interrupts.CPU168.CAL:Function_call_interrupts
16286 ± 88% -80.8% 3132 ±132% interrupts.CPU168.RES:Rescheduling_interrupts
13175 ± 43% -74.4% 3375 ± 67% interrupts.CPU170.CAL:Function_call_interrupts
20028 ± 41% -72.7% 5468 ±106% interrupts.CPU170.RES:Rescheduling_interrupts
13364 ± 49% -76.2% 3176 ± 46% interrupts.CPU173.CAL:Function_call_interrupts
73.75 ± 45% +341.4% 325.50 ± 61% interrupts.CPU173.NMI:Non-maskable_interrupts
73.75 ± 45% +341.4% 325.50 ± 61% interrupts.CPU173.PMI:Performance_monitoring_interrupts
28313 ± 53% -79.8% 5709 ± 65% interrupts.CPU173.RES:Rescheduling_interrupts
9336 ± 19% -55.6% 4147 ± 45% interrupts.CPU174.CAL:Function_call_interrupts
12410 ± 40% -61.5% 4772 ± 63% interrupts.CPU175.CAL:Function_call_interrupts
20255 ± 33% -50.9% 9938 ± 71% interrupts.CPU175.RES:Rescheduling_interrupts
9932 ± 29% -59.2% 4049 ± 49% interrupts.CPU176.CAL:Function_call_interrupts
11851 ± 40% -61.3% 4590 ± 35% interrupts.CPU177.CAL:Function_call_interrupts
198.50 ± 99% +210.6% 616.50 ± 74% interrupts.CPU182.NMI:Non-maskable_interrupts
198.50 ± 99% +210.6% 616.50 ± 74% interrupts.CPU182.PMI:Performance_monitoring_interrupts
11518 ± 39% -61.6% 4427 ± 43% interrupts.CPU183.CAL:Function_call_interrupts
21286 ± 27% -44.7% 11768 ± 50% interrupts.CPU183.RES:Rescheduling_interrupts
11689 ± 44% -57.2% 5008 ± 67% interrupts.CPU184.CAL:Function_call_interrupts
8324 ± 40% -58.0% 3492 ± 62% interrupts.CPU187.CAL:Function_call_interrupts
2146 ± 70% +257.7% 7679 ± 26% interrupts.CPU19.RES:Rescheduling_interrupts
123.00 ± 15% +408.9% 626.00 ± 34% interrupts.CPU190.NMI:Non-maskable_interrupts
123.00 ± 15% +408.9% 626.00 ± 34% interrupts.CPU190.PMI:Performance_monitoring_interrupts
8698 ± 29% -58.2% 3635 ± 24% interrupts.CPU2.CAL:Function_call_interrupts
2250 ± 63% +249.2% 7856 ± 26% interrupts.CPU22.RES:Rescheduling_interrupts
4704 ± 38% -46.7% 2508 ± 25% interrupts.CPU23.CAL:Function_call_interrupts
17114 ± 48% -70.2% 5107 ± 30% interrupts.CPU24.CAL:Function_call_interrupts
372.75 ± 57% +346.9% 1665 ± 61% interrupts.CPU24.NMI:Non-maskable_interrupts
372.75 ± 57% +346.9% 1665 ± 61% interrupts.CPU24.PMI:Performance_monitoring_interrupts
2785 ± 33% -59.8% 1118 ± 43% interrupts.CPU24.RES:Rescheduling_interrupts
365.75 ± 44% +65.2% 604.25 ± 18% interrupts.CPU27.NMI:Non-maskable_interrupts
365.75 ± 44% +65.2% 604.25 ± 18% interrupts.CPU27.PMI:Performance_monitoring_interrupts
9112 ± 12% -53.5% 4236 ± 48% interrupts.CPU3.CAL:Function_call_interrupts
5995 ± 36% -62.0% 2278 ± 27% interrupts.CPU32.CAL:Function_call_interrupts
7132 ± 65% -64.8% 2508 ± 23% interrupts.CPU34.CAL:Function_call_interrupts
155.75 ± 34% +216.1% 492.25 ± 43% interrupts.CPU34.NMI:Non-maskable_interrupts
155.75 ± 34% +216.1% 492.25 ± 43% interrupts.CPU34.PMI:Performance_monitoring_interrupts
1207 ± 47% -55.9% 532.25 ± 61% interrupts.CPU34.RES:Rescheduling_interrupts
4555 ± 40% -41.9% 2648 ± 26% interrupts.CPU35.CAL:Function_call_interrupts
21902 ± 44% -81.8% 3996 ± 38% interrupts.CPU48.CAL:Function_call_interrupts
3556 ± 42% -76.8% 825.75 ± 89% interrupts.CPU48.RES:Rescheduling_interrupts
11583 ± 44% -58.0% 4870 ± 26% interrupts.CPU49.CAL:Function_call_interrupts
12377 ± 25% -57.5% 5262 ± 25% interrupts.CPU50.CAL:Function_call_interrupts
140.00 ± 65% -64.5% 49.75 ± 63% interrupts.CPU55.TLB:TLB_shootdowns
832.00 ± 57% +83.8% 1529 ± 24% interrupts.CPU57.RES:Rescheduling_interrupts
157.00 ± 45% -61.6% 60.25 ± 14% interrupts.CPU57.TLB:TLB_shootdowns
197.50 ± 65% +198.9% 590.25 ± 34% interrupts.CPU61.NMI:Non-maskable_interrupts
197.50 ± 65% +198.9% 590.25 ± 34% interrupts.CPU61.PMI:Performance_monitoring_interrupts
98.25 ± 29% +140.7% 236.50 ± 31% interrupts.CPU62.NMI:Non-maskable_interrupts
98.25 ± 29% +140.7% 236.50 ± 31% interrupts.CPU62.PMI:Performance_monitoring_interrupts
155.00 ± 56% -67.7% 50.00 ± 76% interrupts.CPU63.TLB:TLB_shootdowns
158.00 ± 51% -58.1% 66.25 ± 54% interrupts.CPU68.TLB:TLB_shootdowns
6278 ± 18% -69.3% 1925 ± 31% interrupts.CPU71.CAL:Function_call_interrupts
9325 ± 44% -86.9% 1217 ±105% interrupts.CPU71.RES:Rescheduling_interrupts
28633 ± 23% -83.9% 4615 ± 16% interrupts.CPU72.CAL:Function_call_interrupts
4825 ± 29% -81.3% 903.50 ± 54% interrupts.CPU72.RES:Rescheduling_interrupts
14212 ± 17% -71.2% 4091 ± 25% interrupts.CPU73.CAL:Function_call_interrupts
2513 ± 20% -62.2% 951.00 ± 41% interrupts.CPU73.RES:Rescheduling_interrupts
16551 ± 47% -74.0% 4309 ± 22% interrupts.CPU74.CAL:Function_call_interrupts
156.25 ± 45% +161.6% 408.75 ± 62% interrupts.CPU74.NMI:Non-maskable_interrupts
156.25 ± 45% +161.6% 408.75 ± 62% interrupts.CPU74.PMI:Performance_monitoring_interrupts
10234 ± 36% -55.8% 4525 ± 46% interrupts.CPU77.CAL:Function_call_interrupts
114.50 ± 9% +194.8% 337.50 ± 44% interrupts.CPU77.NMI:Non-maskable_interrupts
114.50 ± 9% +194.8% 337.50 ± 44% interrupts.CPU77.PMI:Performance_monitoring_interrupts
1653 ± 34% -45.3% 904.00 ± 57% interrupts.CPU77.RES:Rescheduling_interrupts
8366 ± 24% -57.2% 3578 ± 11% interrupts.CPU78.CAL:Function_call_interrupts
238.25 ± 25% +97.0% 469.25 ± 42% interrupts.CPU79.NMI:Non-maskable_interrupts
238.25 ± 25% +97.0% 469.25 ± 42% interrupts.CPU79.PMI:Performance_monitoring_interrupts
113.25 ± 7% +112.4% 240.50 ± 37% interrupts.CPU80.NMI:Non-maskable_interrupts
113.25 ± 7% +112.4% 240.50 ± 37% interrupts.CPU80.PMI:Performance_monitoring_interrupts
220.25 ± 22% -40.9% 130.25 ± 57% interrupts.CPU92.TLB:TLB_shootdowns
9997 ± 57% -66.0% 3394 ± 40% interrupts.CPU93.CAL:Function_call_interrupts
7290 ± 23% -48.5% 3751 ± 23% interrupts.CPU94.CAL:Function_call_interrupts
137.75 ± 15% +282.0% 526.25 ± 33% interrupts.CPU94.NMI:Non-maskable_interrupts
137.75 ± 15% +282.0% 526.25 ± 33% interrupts.CPU94.PMI:Performance_monitoring_interrupts
9489 ± 37% -66.1% 3216 ± 56% interrupts.CPU98.CAL:Function_call_interrupts
1439781 ± 5% +16.6% 1678149 interrupts.RES:Rescheduling_interrupts
37980 ± 10% -33.9% 25097 ± 9% softirqs.CPU0.RCU
34015 ± 15% -36.1% 21727 ± 2% softirqs.CPU1.RCU
42993 ± 6% -20.1% 34342 ± 19% softirqs.CPU1.SCHED
31839 ± 19% -38.3% 19657 ± 11% softirqs.CPU10.RCU
34855 ± 9% -41.0% 20566 ± 5% softirqs.CPU100.RCU
35288 ± 9% -39.2% 21454 ± 7% softirqs.CPU101.RCU
34810 ± 8% -39.4% 21090 ± 6% softirqs.CPU102.RCU
35120 ± 9% -42.1% 20349 ± 18% softirqs.CPU103.RCU
34813 ± 8% -40.0% 20892 ± 7% softirqs.CPU104.RCU
33702 ± 8% -37.7% 21007 ± 10% softirqs.CPU105.RCU
34103 ± 12% -39.8% 20537 ± 11% softirqs.CPU106.RCU
35093 ± 9% -39.7% 21173 ± 9% softirqs.CPU107.RCU
34798 ± 8% -39.4% 21075 ± 7% softirqs.CPU108.RCU
34010 ± 7% -39.1% 20725 ± 8% softirqs.CPU109.RCU
34736 ± 9% -38.6% 21317 ± 6% softirqs.CPU11.RCU
34617 ± 8% -39.4% 20972 ± 8% softirqs.CPU110.RCU
34284 ± 6% -39.2% 20837 ± 8% softirqs.CPU111.RCU
31201 ± 7% -41.4% 18269 ± 5% softirqs.CPU112.RCU
31528 ± 6% -40.0% 18904 ± 3% softirqs.CPU113.RCU
31021 ± 6% -38.9% 18945 ± 2% softirqs.CPU114.RCU
30714 ± 6% -38.7% 18834 ± 5% softirqs.CPU115.RCU
31095 ± 6% -38.9% 18986 ± 2% softirqs.CPU116.RCU
30692 ± 5% -40.2% 18341 ± 3% softirqs.CPU117.RCU
30923 ± 6% -39.9% 18586 ± 5% softirqs.CPU118.RCU
31375 ± 5% -40.2% 18766 ± 5% softirqs.CPU119.RCU
34709 ± 8% -38.2% 21434 ± 6% softirqs.CPU12.RCU
31530 ± 4% -41.7% 18391 ± 7% softirqs.CPU120.RCU
31427 ± 3% -43.1% 17867 ± 9% softirqs.CPU121.RCU
31821 ± 4% -41.4% 18635 ± 5% softirqs.CPU122.RCU
31885 ± 2% -40.8% 18889 ± 5% softirqs.CPU123.RCU
31296 ± 5% -39.2% 19024 ± 7% softirqs.CPU124.RCU
31952 ± 5% -40.3% 19066 ± 5% softirqs.CPU125.RCU
31787 ± 3% -39.7% 19152 ± 7% softirqs.CPU126.RCU
31185 ± 2% -39.9% 18751 ± 5% softirqs.CPU127.RCU
34109 ± 5% -39.1% 20762 ± 7% softirqs.CPU128.RCU
31826 ± 6% -35.2% 20636 ± 10% softirqs.CPU129.RCU
34628 ± 9% -39.6% 20906 ± 4% softirqs.CPU13.RCU
33297 ± 3% -37.1% 20943 ± 9% softirqs.CPU130.RCU
34233 ± 3% -37.4% 21444 ± 10% softirqs.CPU131.RCU
33777 ± 4% -37.6% 21093 ± 10% softirqs.CPU132.RCU
32924 ± 5% -39.2% 20018 ± 7% softirqs.CPU133.RCU
32913 ± 2% -35.3% 21310 ± 8% softirqs.CPU134.RCU
32967 ± 2% -37.1% 20751 ± 10% softirqs.CPU135.RCU
33496 ± 3% -35.8% 21519 ± 9% softirqs.CPU136.RCU
34028 -37.0% 21447 ± 10% softirqs.CPU137.RCU
33777 ± 3% -36.7% 21369 ± 12% softirqs.CPU138.RCU
33623 ± 2% -37.0% 21181 ± 12% softirqs.CPU139.RCU
34995 ± 7% -38.7% 21466 ± 5% softirqs.CPU14.RCU
32978 ± 3% -38.4% 20311 ± 13% softirqs.CPU140.RCU
33243 ± 2% -38.0% 20597 ± 10% softirqs.CPU141.RCU
33338 ± 3% -37.2% 20935 ± 10% softirqs.CPU142.RCU
35084 ± 8% -40.6% 20823 ± 10% softirqs.CPU143.RCU
35168 ± 5% -36.0% 22507 ± 7% softirqs.CPU144.RCU
34430 ± 4% -36.7% 21782 ± 7% softirqs.CPU145.RCU
34076 ± 4% -37.7% 21222 ± 5% softirqs.CPU146.RCU
34339 ± 4% -34.7% 22409 ± 9% softirqs.CPU147.RCU
34541 ± 4% -35.5% 22290 ± 9% softirqs.CPU148.RCU
34319 ± 4% -35.9% 22005 ± 9% softirqs.CPU149.RCU
33656 ± 7% -37.2% 21147 ± 6% softirqs.CPU15.RCU
34906 ± 5% -34.8% 22762 ± 6% softirqs.CPU150.RCU
34484 ± 4% -35.7% 22185 ± 6% softirqs.CPU151.RCU
34216 ± 5% -35.0% 22233 ± 9% softirqs.CPU152.RCU
34245 ± 3% -35.3% 22139 ± 8% softirqs.CPU153.RCU
34198 ± 5% -35.6% 22034 ± 10% softirqs.CPU154.RCU
34480 ± 3% -36.9% 21752 ± 10% softirqs.CPU155.RCU
34377 ± 6% -35.9% 22042 ± 9% softirqs.CPU156.RCU
33895 ± 5% -39.2% 20618 ± 6% softirqs.CPU157.RCU
34391 ± 5% -36.0% 22024 ± 7% softirqs.CPU158.RCU
34497 ± 4% -36.8% 21790 ± 9% softirqs.CPU159.RCU
32289 ± 5% -42.2% 18655 ± 4% softirqs.CPU16.RCU
34064 ± 8% -43.5% 19254 ± 8% softirqs.CPU160.RCU
32863 ± 4% -38.5% 20217 ± 6% softirqs.CPU161.RCU
31802 ± 6% -36.3% 20249 ± 6% softirqs.CPU162.RCU
32305 ± 6% -38.2% 19963 ± 6% softirqs.CPU163.RCU
31674 ± 7% -36.6% 20077 ± 4% softirqs.CPU164.RCU
31777 ± 6% -37.2% 19952 ± 7% softirqs.CPU165.RCU
31907 ± 6% -37.8% 19833 ± 5% softirqs.CPU166.RCU
31470 ± 5% -36.5% 19993 ± 5% softirqs.CPU167.RCU
31077 ± 6% -39.0% 18947 ± 6% softirqs.CPU168.RCU
29623 ± 5% -37.4% 18533 ± 5% softirqs.CPU169.RCU
31711 ± 6% -39.3% 19251 ± 2% softirqs.CPU17.RCU
31096 ± 6% -39.5% 18823 ± 5% softirqs.CPU170.RCU
30458 ± 4% -37.9% 18925 ± 4% softirqs.CPU171.RCU
30208 ± 5% -37.0% 19038 ± 4% softirqs.CPU172.RCU
30882 ± 5% -39.8% 18588 ± 5% softirqs.CPU173.RCU
30629 ± 7% -38.3% 18884 ± 5% softirqs.CPU174.RCU
30494 ± 6% -38.8% 18655 ± 5% softirqs.CPU175.RCU
33200 ± 7% -37.0% 20927 ± 7% softirqs.CPU176.RCU
33526 ± 5% -36.8% 21179 ± 6% softirqs.CPU177.RCU
33740 ± 5% -37.3% 21142 ± 6% softirqs.CPU178.RCU
33646 ± 6% -36.4% 21403 ± 5% softirqs.CPU179.RCU
32191 ± 7% -37.6% 20090 ± 8% softirqs.CPU18.RCU
34035 ± 6% -37.2% 21360 ± 7% softirqs.CPU180.RCU
33369 ± 5% -36.2% 21300 ± 9% softirqs.CPU181.RCU
33054 ± 6% -37.5% 20657 ± 7% softirqs.CPU182.RCU
33513 ± 5% -38.4% 20633 ± 7% softirqs.CPU183.RCU
33520 ± 5% -37.3% 21009 ± 6% softirqs.CPU184.RCU
33259 ± 4% -32.4% 22467 ± 10% softirqs.CPU185.RCU
32663 ± 4% -37.1% 20545 ± 5% softirqs.CPU186.RCU
33030 ± 4% -37.3% 20724 ± 6% softirqs.CPU187.RCU
32681 ± 5% -36.3% 20816 ± 7% softirqs.CPU188.RCU
33300 ± 6% -37.1% 20948 ± 6% softirqs.CPU189.RCU
31351 ± 4% -38.3% 19357 ± 3% softirqs.CPU19.RCU
32920 ± 5% -36.7% 20848 ± 7% softirqs.CPU190.RCU
32554 ± 6% -36.2% 20772 ± 7% softirqs.CPU191.RCU
35600 ± 9% -37.6% 22204 ± 5% softirqs.CPU2.RCU
31770 ± 6% -39.0% 19386 ± 2% softirqs.CPU20.RCU
31381 ± 6% -39.1% 19123 ± 3% softirqs.CPU21.RCU
31740 ± 7% -39.0% 19376 ± 2% softirqs.CPU22.RCU
31971 ± 6% -40.4% 19064 ± 3% softirqs.CPU23.RCU
35073 ± 6% -39.8% 21130 ± 5% softirqs.CPU24.RCU
43577 ± 3% -16.8% 36259 ± 5% softirqs.CPU24.SCHED
33908 ± 4% -42.6% 19456 ± 7% softirqs.CPU25.RCU
34077 ± 7% -42.4% 19614 ± 7% softirqs.CPU26.RCU
33937 ± 6% -42.2% 19622 ± 7% softirqs.CPU27.RCU
33681 ± 5% -41.4% 19752 ± 8% softirqs.CPU28.RCU
34013 ± 5% -41.3% 19965 ± 9% softirqs.CPU29.RCU
35055 ± 8% -36.1% 22405 ± 9% softirqs.CPU3.RCU
34191 ± 6% -42.8% 19572 ± 7% softirqs.CPU30.RCU
33307 ± 5% -41.4% 19506 ± 4% softirqs.CPU31.RCU
36279 ± 6% -39.7% 21870 ± 9% softirqs.CPU32.RCU
34017 ± 16% -36.4% 21641 ± 9% softirqs.CPU33.RCU
35619 ± 4% -39.9% 21407 ± 8% softirqs.CPU34.RCU
36106 ± 6% -39.0% 22022 ± 9% softirqs.CPU35.RCU
36188 ± 6% -39.6% 21843 ± 9% softirqs.CPU36.RCU
35244 ± 5% -41.6% 20589 ± 7% softirqs.CPU37.RCU
34992 ± 4% -38.6% 21497 ± 8% softirqs.CPU38.RCU
36223 ± 4% -41.0% 21378 ± 9% softirqs.CPU39.RCU
35081 ± 7% -40.1% 21012 ± 5% softirqs.CPU4.RCU
35125 ± 6% -39.5% 21240 ± 6% softirqs.CPU40.RCU
35532 ± 5% -38.4% 21898 ± 10% softirqs.CPU41.RCU
36039 ± 6% -38.8% 22038 ± 10% softirqs.CPU42.RCU
35700 ± 4% -39.1% 21757 ± 9% softirqs.CPU43.RCU
35772 ± 5% -42.8% 20455 ± 20% softirqs.CPU44.RCU
34820 ± 3% -39.4% 21093 ± 8% softirqs.CPU45.RCU
35204 ± 3% -37.9% 21844 ± 10% softirqs.CPU46.RCU
35204 ± 3% -37.8% 21914 ± 11% softirqs.CPU47.RCU
38354 ± 5% -35.9% 24593 ± 3% softirqs.CPU48.RCU
46120 ± 8% -17.3% 38145 ± 4% softirqs.CPU48.SCHED
37154 ± 6% -36.2% 23719 ± 4% softirqs.CPU49.RCU
35023 ± 8% -36.3% 22296 ± 7% softirqs.CPU5.RCU
37027 ± 5% -38.2% 22876 ± 5% softirqs.CPU50.RCU
36587 ± 6% -34.7% 23896 ± 4% softirqs.CPU51.RCU
37370 ± 7% -35.7% 24036 ± 5% softirqs.CPU52.RCU
36682 ± 6% -35.2% 23762 ± 5% softirqs.CPU53.RCU
36598 ± 5% -34.8% 23879 ± 6% softirqs.CPU54.RCU
36407 ± 4% -35.6% 23438 ± 3% softirqs.CPU55.RCU
36244 ± 4% -33.4% 24129 ± 6% softirqs.CPU56.RCU
36019 ± 4% -33.3% 24028 ± 4% softirqs.CPU57.RCU
36198 ± 4% -35.4% 23389 ± 5% softirqs.CPU58.RCU
37507 ± 3% -37.1% 23581 ± 6% softirqs.CPU59.RCU
35023 ± 7% -35.9% 22462 ± 7% softirqs.CPU6.RCU
36006 ± 4% -35.5% 23230 ± 6% softirqs.CPU60.RCU
35707 ± 4% -38.5% 21956 ± 5% softirqs.CPU61.RCU
36362 ± 5% -35.9% 23316 ± 5% softirqs.CPU62.RCU
36095 ± 5% -35.6% 23232 ± 6% softirqs.CPU63.RCU
33350 ± 7% -39.5% 20181 ± 9% softirqs.CPU64.RCU
34128 ± 6% -37.4% 21370 ± 5% softirqs.CPU65.RCU
33742 ± 7% -37.0% 21273 ± 4% softirqs.CPU66.RCU
33375 ± 7% -35.5% 21521 ± 6% softirqs.CPU67.RCU
33258 ± 5% -36.1% 21263 ± 3% softirqs.CPU68.RCU
33441 ± 6% -36.8% 21121 ± 4% softirqs.CPU69.RCU
34999 ± 8% -38.2% 21633 ± 6% softirqs.CPU7.RCU
33253 ± 6% -35.6% 21408 ± 5% softirqs.CPU70.RCU
33316 ± 5% -36.4% 21177 ± 6% softirqs.CPU71.RCU
34444 ± 7% -38.1% 21313 ± 5% softirqs.CPU72.RCU
47715 ± 4% -19.7% 38298 ± 3% softirqs.CPU72.SCHED
31358 ± 13% -36.4% 19929 ± 6% softirqs.CPU73.RCU
43722 ± 3% -8.3% 40099 ± 2% softirqs.CPU73.SCHED
33448 ± 6% -39.1% 20361 ± 5% softirqs.CPU74.RCU
32555 ± 7% -36.7% 20610 ± 5% softirqs.CPU75.RCU
32068 ± 7% -35.5% 20684 ± 4% softirqs.CPU76.RCU
32025 ± 7% -36.5% 20334 ± 5% softirqs.CPU77.RCU
32558 ± 7% -37.1% 20468 ± 4% softirqs.CPU78.RCU
31790 ± 5% -35.8% 20410 ± 4% softirqs.CPU79.RCU
34653 ± 8% -36.5% 22003 ± 8% softirqs.CPU8.RCU
35441 ± 8% -36.3% 22592 ± 5% softirqs.CPU80.RCU
35652 ± 5% -37.1% 22419 ± 5% softirqs.CPU81.RCU
35077 ± 4% -35.5% 22633 ± 7% softirqs.CPU82.RCU
35176 ± 4% -36.1% 22486 ± 6% softirqs.CPU83.RCU
35345 ± 4% -35.5% 22799 ± 6% softirqs.CPU84.RCU
35564 ± 5% -37.5% 22225 ± 6% softirqs.CPU85.RCU
34920 ± 4% -37.1% 21980 ± 6% softirqs.CPU86.RCU
34389 ± 4% -35.7% 22097 ± 4% softirqs.CPU87.RCU
35240 ± 4% -36.2% 22492 ± 6% softirqs.CPU88.RCU
35208 ± 4% -36.4% 22393 ± 5% softirqs.CPU89.RCU
34018 ± 8% -38.4% 20967 ± 5% softirqs.CPU9.RCU
34901 ± 4% -37.1% 21962 ± 4% softirqs.CPU90.RCU
35089 ± 4% -37.1% 22063 ± 5% softirqs.CPU91.RCU
34841 ± 4% -36.1% 22265 ± 5% softirqs.CPU92.RCU
35289 ± 3% -36.5% 22424 ± 6% softirqs.CPU93.RCU
34974 ± 4% -36.1% 22346 ± 4% softirqs.CPU94.RCU
35184 ± 3% -37.0% 22179 ± 5% softirqs.CPU95.RCU
34925 ± 7% -42.6% 20046 ± 4% softirqs.CPU96.RCU
33606 ± 11% -39.0% 20490 ± 6% softirqs.CPU97.RCU
34241 ± 8% -39.9% 20578 ± 6% softirqs.CPU98.RCU
33622 ± 8% -37.3% 21075 ± 7% softirqs.CPU99.RCU
6501334 ± 4% -37.9% 4038961 ± 5% softirqs.RCU
8.37 ± 2% -1.8 6.60 ± 3% perf-profile.calltrace.cycles-pp.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work.worker_thread
61.82 -1.5 60.28 perf-profile.calltrace.cycles-pp.secondary_startup_64
4.44 ± 3% -1.4 3.02 ± 5% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work
61.52 -1.4 60.15 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
61.52 -1.4 60.15 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
61.48 -1.4 60.11 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
3.63 ± 5% -0.8 2.85 ± 14% perf-profile.calltrace.cycles-pp.transaction_kthread.kthread.ret_from_fork
3.63 ± 5% -0.8 2.85 ± 14% perf-profile.calltrace.cycles-pp.btrfs_commit_transaction.transaction_kthread.kthread.ret_from_fork
3.58 ± 5% -0.7 2.83 ± 14% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.transaction_kthread
3.58 ± 5% -0.7 2.83 ± 14% perf-profile.calltrace.cycles-pp.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction
3.58 ± 5% -0.7 2.83 ± 14% perf-profile.calltrace.cycles-pp.btrfs_write_and_wait_transaction.btrfs_commit_transaction.transaction_kthread.kthread.ret_from_fork
3.58 ± 5% -0.7 2.83 ± 14% perf-profile.calltrace.cycles-pp.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.transaction_kthread.kthread
3.58 ± 5% -0.7 2.83 ± 14% perf-profile.calltrace.cycles-pp.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction
3.46 ± 5% -0.7 2.74 ± 14% perf-profile.calltrace.cycles-pp.write_one_eb.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents
3.19 ± 5% -0.7 2.51 ± 14% perf-profile.calltrace.cycles-pp.submit_extent_page.write_one_eb.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range
3.13 ± 5% -0.7 2.46 ± 15% perf-profile.calltrace.cycles-pp.submit_one_bio.submit_extent_page.write_one_eb.btree_write_cache_pages.do_writepages
3.13 ± 5% -0.7 2.46 ± 15% perf-profile.calltrace.cycles-pp.btree_submit_bio_hook.submit_one_bio.submit_extent_page.write_one_eb.btree_write_cache_pages
3.10 ± 5% -0.7 2.44 ± 15% perf-profile.calltrace.cycles-pp.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio.submit_extent_page.write_one_eb
3.03 ± 5% -0.6 2.38 ± 15% perf-profile.calltrace.cycles-pp.check_leaf.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio.submit_extent_page
0.91 ± 8% -0.5 0.42 ± 57% perf-profile.calltrace.cycles-pp.__btrfs_read_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper
0.68 ± 6% -0.4 0.26 ±100% perf-profile.calltrace.cycles-pp.__sched_text_start.schedule_idle.do_idle.cpu_startup_entry.start_secondary
1.60 ± 5% -0.4 1.23 ± 15% perf-profile.calltrace.cycles-pp.check_extent_data_item.check_leaf.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio
0.89 ± 6% -0.4 0.52 ± 58% perf-profile.calltrace.cycles-pp.btrfs_get_64.check_extent_data_item.check_leaf.btree_csum_one_bio.btree_submit_bio_hook
1.72 -0.2 1.48 ± 4% perf-profile.calltrace.cycles-pp.setup_leaf_for_split.btrfs_duplicate_item.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper
0.67 ± 3% -0.2 0.45 ± 58% perf-profile.calltrace.cycles-pp.btrfs_get_32.check_leaf.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio
3.12 -0.2 2.90 ± 2% perf-profile.calltrace.cycles-pp.btrfs_duplicate_item.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work
0.71 ± 6% -0.2 0.53 ± 4% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.76 ± 2% -0.0 0.72 ± 3% perf-profile.calltrace.cycles-pp.btrfs_get_token_32.setup_items_for_insert.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums
0.56 ± 4% +0.1 0.63 ± 4% perf-profile.calltrace.cycles-pp.push_leaf_right.split_leaf.btrfs_search_slot.btrfs_insert_empty_items.btrfs_csum_file_blocks
1.54 +0.1 1.61 perf-profile.calltrace.cycles-pp.setup_items_for_insert.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io
0.66 ± 4% +0.1 0.74 ± 4% perf-profile.calltrace.cycles-pp.split_leaf.btrfs_search_slot.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums
1.01 ± 6% +0.1 1.09 ± 2% perf-profile.calltrace.cycles-pp.blk_update_request.blk_mq_end_request.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu
1.20 ± 2% +0.1 1.28 ± 2% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io
0.56 ± 8% +0.1 0.67 ± 3% perf-profile.calltrace.cycles-pp.__set_extent_bit.lock_extent_bits.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work
1.19 ± 5% +0.1 1.29 ± 2% perf-profile.calltrace.cycles-pp.handle_irq_event_percpu.handle_irq_event.handle_edge_irq.asm_call_on_stack.common_interrupt
0.56 ± 8% +0.1 0.67 ± 3% perf-profile.calltrace.cycles-pp.lock_extent_bits.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work.worker_thread
1.23 ± 5% +0.1 1.34 ± 2% perf-profile.calltrace.cycles-pp.handle_irq_event.handle_edge_irq.asm_call_on_stack.common_interrupt.asm_common_interrupt
1.25 ± 5% +0.1 1.37 ± 2% perf-profile.calltrace.cycles-pp.asm_call_on_stack.common_interrupt.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter
1.25 ± 5% +0.1 1.37 ± 2% perf-profile.calltrace.cycles-pp.handle_edge_irq.asm_call_on_stack.common_interrupt.asm_common_interrupt.cpuidle_enter_state
1.32 ± 5% +0.1 1.45 ± 2% perf-profile.calltrace.cycles-pp.common_interrupt.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
0.79 ± 11% +0.1 0.92 ± 4% perf-profile.calltrace.cycles-pp.__lookup_extent_mapping.btrfs_get_extent.btrfs_dirty_pages.btrfs_buffered_write.btrfs_file_write_iter
0.56 ± 7% +0.1 0.68 ± 3% perf-profile.calltrace.cycles-pp.unpin_extent_cache.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work.worker_thread
1.33 ± 5% +0.1 1.46 ± 2% perf-profile.calltrace.cycles-pp.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.76 ± 6% +0.2 0.92 ± 3% perf-profile.calltrace.cycles-pp.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_remove_ordered_extent.btrfs_finish_ordered_io.btrfs_work_helper
0.77 ± 6% +0.2 0.94 ± 3% perf-profile.calltrace.cycles-pp.btrfs_inode_rsv_release.btrfs_remove_ordered_extent.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work
2.80 +0.2 2.97 perf-profile.calltrace.cycles-pp.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io.btrfs_work_helper
0.96 ± 5% +0.2 1.18 ± 4% perf-profile.calltrace.cycles-pp.btrfs_remove_ordered_extent.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work.worker_thread
0.72 ± 20% +0.2 0.95 ± 6% perf-profile.calltrace.cycles-pp.find_lock_delalloc_range.writepage_delalloc.__extent_writepage.extent_write_cache_pages.extent_writepages
0.48 ± 60% +0.3 0.76 ± 3% perf-profile.calltrace.cycles-pp.btrfs_find_delalloc_range.find_lock_delalloc_range.writepage_delalloc.__extent_writepage.extent_write_cache_pages
1.04 ± 21% +0.3 1.36 ± 9% perf-profile.calltrace.cycles-pp.__extent_writepage_io.__extent_writepage.extent_write_cache_pages.extent_writepages.do_writepages
0.48 ± 61% +0.3 0.82 ± 6% perf-profile.calltrace.cycles-pp.__lookup_extent_mapping.btrfs_drop_extent_cache.create_io_em.run_delalloc_nocow.btrfs_run_delalloc_range
1.16 ± 20% +0.4 1.52 ± 7% perf-profile.calltrace.cycles-pp.btrfs_drop_extent_cache.create_io_em.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc
0.30 ±100% +0.4 0.67 ± 8% perf-profile.calltrace.cycles-pp.__queue_work.queue_work_on.btrfs_wq_submit_bio.btrfs_submit_bio_hook.submit_one_bio
1.28 ± 19% +0.4 1.66 ± 7% perf-profile.calltrace.cycles-pp.create_io_em.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage
1.08 ± 16% +0.4 1.48 ± 5% perf-profile.calltrace.cycles-pp.btrfs_lookup_file_extent.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage
1.08 ± 16% +0.4 1.48 ± 5% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_file_extent.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc
0.00 +0.6 0.59 ± 7% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_file_extent
0.00 +0.6 0.59 ± 5% perf-profile.calltrace.cycles-pp.btrfs_try_granting_tickets.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_remove_ordered_extent.btrfs_finish_ordered_io
0.00 +0.6 0.59 ± 7% perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_file_extent.run_delalloc_nocow
0.00 +0.6 0.59 ± 6% perf-profile.calltrace.cycles-pp.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_file_extent.run_delalloc_nocow.btrfs_run_delalloc_range
0.00 +0.7 0.69 ± 6% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_mark_extent_written
0.93 ± 5% +0.7 1.64 ± 4% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_csum.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io
0.00 +0.7 0.71 ± 6% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
0.97 ± 4% +0.7 1.68 ± 4% perf-profile.calltrace.cycles-pp.btrfs_lookup_csum.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io.btrfs_work_helper
0.00 +0.7 0.71 ± 6% perf-profile.calltrace.cycles-pp.__btrfs_tree_lock.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper
0.78 ± 24% +0.7 1.50 ± 2% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_csums_range.csum_exist_in_range.run_delalloc_nocow.btrfs_run_delalloc_range
0.89 ± 22% +0.7 1.62 ± 2% perf-profile.calltrace.cycles-pp.csum_exist_in_range.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage
0.88 ± 22% +0.7 1.62 ± 2% perf-profile.calltrace.cycles-pp.btrfs_lookup_csums_range.csum_exist_in_range.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc
4.70 +0.8 5.52 perf-profile.calltrace.cycles-pp.add_pending_csums.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work.worker_thread
4.69 +0.8 5.50 perf-profile.calltrace.cycles-pp.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work
0.00 +0.9 0.88 ± 4% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_csums_range
0.00 +0.9 0.89 ± 4% perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_csums_range.csum_exist_in_range
0.00 +0.9 0.89 ± 4% perf-profile.calltrace.cycles-pp.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_csums_range.csum_exist_in_range.run_delalloc_nocow
30.30 +1.2 31.46 perf-profile.calltrace.cycles-pp.ret_from_fork
30.30 +1.2 31.46 perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.00 +1.3 1.26 ± 5% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot
5.09 ± 6% +1.5 6.61 ± 9% perf-profile.calltrace.cycles-pp.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage.extent_write_cache_pages
5.11 ± 6% +1.5 6.63 ± 9% perf-profile.calltrace.cycles-pp.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage.extent_write_cache_pages.extent_writepages
5.92 ± 6% +1.7 7.62 ± 9% perf-profile.calltrace.cycles-pp.writepage_delalloc.__extent_writepage.extent_write_cache_pages.extent_writepages.do_writepages
25.98 +1.8 27.81 ± 2% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
26.61 +1.9 28.54 ± 2% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
23.88 ± 4% +2.2 26.07 ± 2% perf-profile.calltrace.cycles-pp.btrfs_work_helper.process_one_work.worker_thread.kthread.ret_from_fork
6.38 ± 18% +2.4 8.75 ± 6% perf-profile.calltrace.cycles-pp.__extent_writepage.extent_write_cache_pages.extent_writepages.do_writepages.__filemap_fdatawrite_range
6.61 ± 18% +2.4 9.03 ± 6% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.btrfs_run_delalloc_work.btrfs_work_helper.process_one_work.worker_thread
6.61 ± 18% +2.4 9.03 ± 6% perf-profile.calltrace.cycles-pp.do_writepages.__filemap_fdatawrite_range.btrfs_run_delalloc_work.btrfs_work_helper.process_one_work
6.61 ± 18% +2.4 9.03 ± 6% perf-profile.calltrace.cycles-pp.btrfs_run_delalloc_work.btrfs_work_helper.process_one_work.worker_thread.kthread
6.61 ± 18% +2.4 9.03 ± 6% perf-profile.calltrace.cycles-pp.extent_writepages.do_writepages.__filemap_fdatawrite_range.btrfs_run_delalloc_work.btrfs_work_helper
6.61 ± 18% +2.4 9.03 ± 6% perf-profile.calltrace.cycles-pp.extent_write_cache_pages.extent_writepages.do_writepages.__filemap_fdatawrite_range.btrfs_run_delalloc_work
4.18 ± 4% -2.2 1.97 ± 18% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
3.33 ± 3% -2.1 1.23 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
8.38 ± 2% -1.8 6.61 ± 3% perf-profile.children.cycles-pp.btrfs_mark_extent_written
2.75 ± 3% -1.7 1.09 ± 5% perf-profile.children.cycles-pp.__wake_up_common_lock
2.53 ± 2% -1.6 0.90 ± 6% perf-profile.children.cycles-pp.__wake_up_common
2.47 ± 3% -1.6 0.88 ± 6% perf-profile.children.cycles-pp.autoremove_wake_function
61.82 -1.5 60.28 perf-profile.children.cycles-pp.secondary_startup_64
61.82 -1.5 60.28 perf-profile.children.cycles-pp.cpu_startup_entry
61.80 -1.5 60.27 perf-profile.children.cycles-pp.do_idle
61.52 -1.4 60.15 perf-profile.children.cycles-pp.start_secondary
3.49 -1.3 2.21 ± 3% perf-profile.children.cycles-pp.try_to_wake_up
3.01 -1.2 1.82 ± 3% perf-profile.children.cycles-pp.ttwu_do_activate
2.98 ± 2% -1.2 1.81 ± 3% perf-profile.children.cycles-pp.enqueue_task_fair
2.89 -1.1 1.74 ± 3% perf-profile.children.cycles-pp.enqueue_entity
2.48 ± 2% -1.1 1.40 ± 4% perf-profile.children.cycles-pp.__account_scheduler_latency
1.58 ± 6% -1.1 0.53 ± 7% perf-profile.children.cycles-pp.btrfs_lock_root_node
2.09 -1.0 1.09 ± 3% perf-profile.children.cycles-pp.stack_trace_save_tsk
1.90 ± 2% -0.9 0.96 ± 4% perf-profile.children.cycles-pp.arch_stack_walk
3.63 ± 5% -0.8 2.85 ± 14% perf-profile.children.cycles-pp.transaction_kthread
3.63 ± 5% -0.8 2.85 ± 14% perf-profile.children.cycles-pp.btrfs_commit_transaction
3.58 ± 5% -0.7 2.83 ± 14% perf-profile.children.cycles-pp.btrfs_write_and_wait_transaction
3.58 ± 5% -0.7 2.83 ± 14% perf-profile.children.cycles-pp.btrfs_write_marked_extents
3.58 ± 5% -0.7 2.83 ± 14% perf-profile.children.cycles-pp.btree_write_cache_pages
3.46 ± 5% -0.7 2.74 ± 14% perf-profile.children.cycles-pp.write_one_eb
1.38 -0.7 0.67 ± 6% perf-profile.children.cycles-pp.unwind_next_frame
0.85 ± 8% -0.7 0.16 ± 7% perf-profile.children.cycles-pp.unlock_up
3.11 ± 5% -0.7 2.45 ± 15% perf-profile.children.cycles-pp.btree_csum_one_bio
3.14 ± 5% -0.7 2.48 ± 15% perf-profile.children.cycles-pp.btree_submit_bio_hook
3.06 ± 5% -0.6 2.42 ± 15% perf-profile.children.cycles-pp.check_leaf
1.02 ± 2% -0.5 0.48 ± 7% perf-profile.children.cycles-pp.btrfs_release_path
3.84 ± 5% -0.5 3.33 ± 10% perf-profile.children.cycles-pp.submit_one_bio
0.50 ± 7% -0.5 0.04 ± 57% perf-profile.children.cycles-pp.finish_wait
0.70 -0.4 0.30 ± 8% perf-profile.children.cycles-pp.btrfs_free_path
0.69 ± 5% -0.4 0.30 ± 13% perf-profile.children.cycles-pp.queued_write_lock_slowpath
1.72 ± 4% -0.4 1.33 perf-profile.children.cycles-pp.__sched_text_start
1.45 ± 8% -0.4 1.06 ± 7% perf-profile.children.cycles-pp.__btrfs_read_lock_root_node
1.64 ± 5% -0.4 1.25 ± 14% perf-profile.children.cycles-pp.check_extent_data_item
0.59 ± 5% -0.4 0.24 ± 10% perf-profile.children.cycles-pp.orc_find
0.53 ± 4% -0.3 0.22 ± 10% perf-profile.children.cycles-pp.__orc_find
0.37 ± 4% -0.3 0.07 ± 10% perf-profile.children.cycles-pp.btrfs_try_tree_write_lock
1.08 ± 5% -0.2 0.84 ± 12% perf-profile.children.cycles-pp.btrfs_get_64
1.72 -0.2 1.48 ± 4% perf-profile.children.cycles-pp.setup_leaf_for_split
1.06 ± 2% -0.2 0.84 ± 2% perf-profile.children.cycles-pp.schedule
3.12 -0.2 2.91 ± 2% perf-profile.children.cycles-pp.btrfs_duplicate_item
0.71 ± 7% -0.2 0.53 ± 3% perf-profile.children.cycles-pp.schedule_idle
0.26 ± 6% -0.2 0.10 ± 7% perf-profile.children.cycles-pp._raw_read_lock
0.29 ± 9% -0.1 0.15 ± 8% perf-profile.children.cycles-pp.unwind_get_return_address
0.21 ± 12% -0.1 0.08 ± 19% perf-profile.children.cycles-pp.__module_address
0.27 ± 8% -0.1 0.14 ± 11% perf-profile.children.cycles-pp.__kernel_text_address
0.23 ± 11% -0.1 0.12 ± 10% perf-profile.children.cycles-pp.kernel_text_address
0.21 ± 4% -0.1 0.10 ± 12% perf-profile.children.cycles-pp._raw_write_lock
0.42 ± 3% -0.1 0.33 ± 5% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.14 ± 16% -0.1 0.04 ± 59% perf-profile.children.cycles-pp.__module_text_address
0.15 ± 16% -0.1 0.07 ± 7% perf-profile.children.cycles-pp.is_module_text_address
0.37 ± 6% -0.1 0.29 ± 3% perf-profile.children.cycles-pp.dequeue_task_fair
0.39 ± 7% -0.1 0.31 ± 4% perf-profile.children.cycles-pp.pick_next_task_fair
0.24 ± 2% -0.1 0.17 ± 6% perf-profile.children.cycles-pp.__switch_to_asm
0.30 ± 8% -0.1 0.24 ± 8% perf-profile.children.cycles-pp.dequeue_entity
0.23 ± 12% -0.1 0.17 ± 6% perf-profile.children.cycles-pp.set_next_entity
0.23 ± 3% -0.1 0.17 ± 8% perf-profile.children.cycles-pp.__switch_to
0.43 ± 6% -0.0 0.38 ± 7% perf-profile.children.cycles-pp.update_rq_clock
0.14 ± 10% -0.0 0.10 ± 5% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.24 ± 3% -0.0 0.19 ± 12% perf-profile.children.cycles-pp._raw_spin_trylock
0.13 ± 8% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.check_preempt_curr
0.12 ± 8% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.stack_trace_consume_entry_nosched
0.12 ± 12% -0.0 0.09 ± 10% perf-profile.children.cycles-pp.__list_add_valid
0.18 ± 2% -0.0 0.15 ± 8% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.22 ± 5% -0.0 0.18 ± 6% perf-profile.children.cycles-pp.__list_del_entry_valid
0.07 ± 22% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.note_gp_changes
0.10 ± 15% -0.0 0.07 ± 17% perf-profile.children.cycles-pp.__update_load_avg_se
0.09 ± 7% -0.0 0.06 ± 11% perf-profile.children.cycles-pp.account_entity_enqueue
0.15 ± 10% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.10 ± 9% -0.0 0.07 ± 12% perf-profile.children.cycles-pp.tick_nohz_idle_enter
0.06 ± 11% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.insert_work
0.07 ± 15% +0.0 0.09 ± 11% perf-profile.children.cycles-pp.btrfs_queue_work
0.07 ± 13% +0.0 0.08 ± 15% perf-profile.children.cycles-pp.btrfs_inode_safe_disk_i_size_write
0.07 ± 12% +0.0 0.09 ± 9% perf-profile.children.cycles-pp.memset_erms
0.06 ± 14% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.submit_bio_checks
0.08 ± 10% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.verify_parent_transid
0.10 ± 18% +0.0 0.12 ± 6% perf-profile.children.cycles-pp.remove_ticket
0.10 ± 11% +0.0 0.12 ± 5% perf-profile.children.cycles-pp.btrfs_calculate_inode_block_rsv_size
0.07 ± 17% +0.0 0.09 ± 13% perf-profile.children.cycles-pp.llist_add_batch
0.07 ± 17% +0.0 0.09 ± 17% perf-profile.children.cycles-pp.__smp_call_single_queue
0.10 ± 14% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.rb_erase
0.11 ± 7% +0.0 0.14 ± 15% perf-profile.children.cycles-pp.release_extent_buffer
0.34 ± 2% +0.0 0.38 ± 3% perf-profile.children.cycles-pp.kmem_cache_free
0.31 ± 5% +0.0 0.35 perf-profile.children.cycles-pp.__radix_tree_lookup
0.14 ± 13% +0.0 0.18 ± 11% perf-profile.children.cycles-pp.leaf_space_used
0.25 ± 5% +0.0 0.30 ± 5% perf-profile.children.cycles-pp.memcpy_erms
0.23 ± 10% +0.1 0.29 ± 15% perf-profile.children.cycles-pp.btrfs_add_ordered_extent
0.23 ± 10% +0.1 0.29 ± 15% perf-profile.children.cycles-pp.__btrfs_add_ordered_extent
0.00 +0.1 0.06 ± 14% perf-profile.children.cycles-pp.down_write
0.41 ± 2% +0.1 0.48 ± 11% perf-profile.children.cycles-pp.btrfs_cross_ref_exist
0.30 ± 6% +0.1 0.36 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.down_write_trylock
0.22 ± 8% +0.1 0.30 ± 7% perf-profile.children.cycles-pp.alloc_extent_state
0.00 +0.1 0.08 ± 15% perf-profile.children.cycles-pp.up_write
0.49 ± 3% +0.1 0.57 ± 3% perf-profile.children.cycles-pp.memmove_extent_buffer
0.00 +0.1 0.08 ± 15% perf-profile.children.cycles-pp.down_read_trylock
0.36 ± 4% +0.1 0.45 ± 6% perf-profile.children.cycles-pp.clear_state_bit
0.00 +0.1 0.09 ± 13% perf-profile.children.cycles-pp.btrfs_try_tree_read_lock
0.65 ± 2% +0.1 0.74 ± 2% perf-profile.children.cycles-pp.end_extent_writepage
0.65 ± 2% +0.1 0.74 ± 2% perf-profile.children.cycles-pp.btrfs_writepage_endio_finish_ordered
0.55 ± 9% +0.1 0.65 ± 6% perf-profile.children.cycles-pp.kmem_cache_alloc
0.35 ± 6% +0.1 0.45 ± 9% perf-profile.children.cycles-pp.extent_clear_unlock_delalloc
0.00 +0.1 0.10 ± 12% perf-profile.children.cycles-pp.wake_up_q
0.44 +0.1 0.55 ± 2% perf-profile.children.cycles-pp.memcpy_extent_buffer
0.00 +0.1 0.11 ± 17% perf-profile.children.cycles-pp.rwsem_wake
1.23 ± 5% +0.1 1.34 ± 2% perf-profile.children.cycles-pp.blk_mq_end_request
1.18 ± 6% +0.1 1.30 ± 2% perf-profile.children.cycles-pp.blk_update_request
0.92 ± 5% +0.1 1.04 ± 2% perf-profile.children.cycles-pp.end_bio_extent_writepage
0.00 +0.1 0.12 ± 5% perf-profile.children.cycles-pp.up_read
1.14 ± 6% +0.1 1.26 ± 2% perf-profile.children.cycles-pp.btrfs_end_bio
0.89 +0.1 1.01 ± 6% perf-profile.children.cycles-pp.__push_leaf_right
1.17 ± 3% +0.1 1.29 ± 4% perf-profile.children.cycles-pp.check_setget_bounds
0.56 ± 7% +0.1 0.68 ± 3% perf-profile.children.cycles-pp.unpin_extent_cache
0.42 ± 4% +0.1 0.55 ± 9% perf-profile.children.cycles-pp.btrfs_extend_item
0.65 ± 12% +0.1 0.79 ± 8% perf-profile.children.cycles-pp.btrfs_find_delalloc_range
1.36 ± 5% +0.1 1.50 ± 2% perf-profile.children.cycles-pp.__handle_irq_event_percpu
0.42 ± 3% +0.1 0.55 ± 7% perf-profile.children.cycles-pp.clear_extent_bit
1.35 ± 5% +0.1 1.49 ± 2% perf-profile.children.cycles-pp.nvme_irq
0.65 ± 3% +0.1 0.78 ± 3% perf-profile.children.cycles-pp.memmove
0.97 ± 5% +0.1 1.11 perf-profile.children.cycles-pp.find_extent_buffer
3.02 +0.1 3.17 ± 2% perf-profile.children.cycles-pp.setup_items_for_insert
0.68 ± 12% +0.1 0.83 ± 11% perf-profile.children.cycles-pp.btrfs_wq_submit_bio
1.27 ± 4% +0.1 1.41 ± 3% perf-profile.children.cycles-pp.read_block_for_search
0.64 ± 5% +0.1 0.79 ± 6% perf-profile.children.cycles-pp.btrfs_comp_cpu_keys
1.39 ± 4% +0.2 1.54 ± 2% perf-profile.children.cycles-pp.handle_irq_event_percpu
1.39 ± 2% +0.2 1.54 ± 5% perf-profile.children.cycles-pp.push_leaf_right
1.44 ± 4% +0.2 1.60 ± 3% perf-profile.children.cycles-pp.handle_irq_event
1.46 ± 4% +0.2 1.63 ± 3% perf-profile.children.cycles-pp.handle_edge_irq
1.54 ± 4% +0.2 1.71 ± 3% perf-profile.children.cycles-pp.common_interrupt
0.82 ± 3% +0.2 1.00 ± 4% perf-profile.children.cycles-pp.generic_bin_search
1.21 +0.2 1.39 ± 2% perf-profile.children.cycles-pp.btrfs_set_token_32
0.80 ± 12% +0.2 0.98 ± 10% perf-profile.children.cycles-pp.find_lock_delalloc_range
1.55 ± 4% +0.2 1.74 ± 2% perf-profile.children.cycles-pp.asm_common_interrupt
1.76 ± 5% +0.2 1.95 ± 5% perf-profile.children.cycles-pp.__etree_search
1.70 ± 2% +0.2 1.89 ± 5% perf-profile.children.cycles-pp.split_leaf
1.85 ± 4% +0.2 2.05 ± 4% perf-profile.children.cycles-pp.__btrfs_tree_lock
1.84 ± 2% +0.2 2.04 ± 4% perf-profile.children.cycles-pp.__set_extent_bit
1.22 ± 5% +0.2 1.42 ± 3% perf-profile.children.cycles-pp.btrfs_try_granting_tickets
1.04 ± 7% +0.2 1.24 ± 4% perf-profile.children.cycles-pp.__queue_work
2.89 +0.2 3.10 ± 2% perf-profile.children.cycles-pp.btrfs_insert_empty_items
0.00 +0.2 0.22 ± 28% perf-profile.children.cycles-pp.down_read
1.09 ± 8% +0.2 1.31 ± 6% perf-profile.children.cycles-pp.queue_work_on
1.66 ± 2% +0.2 1.89 ± 4% perf-profile.children.cycles-pp.lock_extent_bits
0.96 ± 5% +0.2 1.18 ± 4% perf-profile.children.cycles-pp.btrfs_remove_ordered_extent
1.68 ± 4% +0.2 1.91 ± 4% perf-profile.children.cycles-pp.btrfs_inode_rsv_release
1.16 ± 11% +0.3 1.41 ± 12% perf-profile.children.cycles-pp.__extent_writepage_io
0.85 ± 2% +0.3 1.12 ± 8% perf-profile.children.cycles-pp.__clear_extent_bit
1.28 ± 9% +0.3 1.56 ± 10% perf-profile.children.cycles-pp.btrfs_drop_extent_cache
1.41 ± 8% +0.3 1.71 ± 11% perf-profile.children.cycles-pp.create_io_em
1.21 ± 6% +0.3 1.53 ± 8% perf-profile.children.cycles-pp.btrfs_lookup_file_extent
1.97 ± 5% +0.4 2.37 ± 6% perf-profile.children.cycles-pp.__lookup_extent_mapping
0.97 ± 13% +0.7 1.67 ± 7% perf-profile.children.cycles-pp.csum_exist_in_range
0.96 ± 13% +0.7 1.67 ± 7% perf-profile.children.cycles-pp.btrfs_lookup_csums_range
0.97 ± 4% +0.7 1.68 ± 4% perf-profile.children.cycles-pp.btrfs_lookup_csum
4.69 +0.8 5.50 perf-profile.children.cycles-pp.btrfs_csum_file_blocks
4.70 +0.8 5.52 perf-profile.children.cycles-pp.add_pending_csums
30.30 +1.2 31.46 perf-profile.children.cycles-pp.ret_from_fork
30.30 +1.2 31.46 perf-profile.children.cycles-pp.kthread
0.00 +1.2 1.19 ± 2% perf-profile.children.cycles-pp.osq_lock
10.97 ± 6% +1.2 12.20 ± 7% perf-profile.children.cycles-pp.do_writepages
1.43 ± 6% +1.3 2.70 ± 5% perf-profile.children.cycles-pp.__btrfs_tree_read_lock
5.09 ± 6% +1.5 6.62 ± 9% perf-profile.children.cycles-pp.run_delalloc_nocow
5.11 ± 6% +1.5 6.63 ± 9% perf-profile.children.cycles-pp.btrfs_run_delalloc_range
10.20 ± 13% +1.7 11.87 ± 3% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
5.92 ± 6% +1.7 7.62 ± 9% perf-profile.children.cycles-pp.writepage_delalloc
25.98 +1.8 27.82 ± 2% perf-profile.children.cycles-pp.process_one_work
26.62 +1.9 28.55 ± 2% perf-profile.children.cycles-pp.worker_thread
7.11 ± 7% +2.0 9.06 ± 10% perf-profile.children.cycles-pp.__extent_writepage
7.39 ± 7% +2.0 9.36 ± 10% perf-profile.children.cycles-pp.extent_writepages
7.39 ± 7% +2.0 9.36 ± 10% perf-profile.children.cycles-pp.extent_write_cache_pages
0.00 +2.0 2.01 ± 4% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
23.88 ± 4% +2.2 26.07 ± 2% perf-profile.children.cycles-pp.btrfs_work_helper
6.61 ± 18% +2.4 9.03 ± 6% perf-profile.children.cycles-pp.btrfs_run_delalloc_work
0.00 +2.5 2.52 ± 5% perf-profile.children.cycles-pp.rwsem_down_read_slowpath
0.00 +2.8 2.82 ± 4% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.00 +4.5 4.46 ± 2% perf-profile.children.cycles-pp.rwsem_optimistic_spin
4.15 ± 4% -2.2 1.96 ± 18% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.53 ± 4% -0.3 0.22 ± 10% perf-profile.self.cycles-pp.__orc_find
0.60 ± 3% -0.3 0.32 ± 6% perf-profile.self.cycles-pp.unwind_next_frame
0.54 -0.3 0.27 ± 13% perf-profile.self.cycles-pp.queued_write_lock_slowpath
0.85 ± 6% -0.2 0.65 ± 13% perf-profile.self.cycles-pp.btrfs_get_64
0.25 ± 5% -0.1 0.10 ± 8% perf-profile.self.cycles-pp._raw_read_lock
0.21 ± 10% -0.1 0.08 ± 19% perf-profile.self.cycles-pp.__module_address
1.29 ± 4% -0.1 1.18 ± 4% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.21 ± 3% -0.1 0.10 ± 12% perf-profile.self.cycles-pp._raw_write_lock
0.36 ± 5% -0.1 0.26 ± 2% perf-profile.self.cycles-pp.__sched_text_start
0.24 -0.1 0.17 ± 9% perf-profile.self.cycles-pp.__switch_to_asm
0.37 ± 5% -0.1 0.31 ± 4% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.17 ± 7% -0.1 0.11 ± 11% perf-profile.self.cycles-pp.stack_trace_save_tsk
0.29 ± 5% -0.1 0.23 ± 16% perf-profile.self.cycles-pp.check_leaf
0.23 ± 10% -0.1 0.18 ± 8% perf-profile.self.cycles-pp.update_rq_clock
0.14 ± 10% -0.1 0.09 ± 17% perf-profile.self.cycles-pp.__account_scheduler_latency
0.22 ± 4% -0.1 0.17 ± 9% perf-profile.self.cycles-pp.__switch_to
0.21 ± 9% -0.1 0.16 ± 13% perf-profile.self.cycles-pp.check_extent_data_item
0.16 ± 12% -0.0 0.11 ± 11% perf-profile.self.cycles-pp.set_next_entity
0.24 ± 3% -0.0 0.19 ± 12% perf-profile.self.cycles-pp._raw_spin_trylock
0.38 ± 2% -0.0 0.33 ± 8% perf-profile.self.cycles-pp.do_idle
0.22 ± 5% -0.0 0.18 ± 8% perf-profile.self.cycles-pp.__list_del_entry_valid
0.10 ± 15% -0.0 0.06 ± 13% perf-profile.self.cycles-pp.__update_load_avg_se
0.09 ± 7% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.account_entity_enqueue
0.10 ± 15% -0.0 0.07 ± 17% perf-profile.self.cycles-pp.enqueue_task_fair
0.15 ± 11% -0.0 0.12 ± 5% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.08 ± 10% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.orc_find
0.07 ± 17% +0.0 0.09 ± 13% perf-profile.self.cycles-pp.llist_add_batch
0.04 ± 58% +0.0 0.07 ± 12% perf-profile.self.cycles-pp.btrfs_block_rsv_release
0.23 ± 7% +0.0 0.27 ± 6% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.05 ± 59% +0.0 0.09 perf-profile.self.cycles-pp.nvme_irq
0.19 ± 11% +0.0 0.23 ± 5% perf-profile.self.cycles-pp.kmem_cache_free
0.31 ± 4% +0.0 0.35 ± 2% perf-profile.self.cycles-pp.__radix_tree_lookup
0.24 ± 7% +0.0 0.29 ± 5% perf-profile.self.cycles-pp.memcpy_erms
0.17 ± 8% +0.1 0.22 ± 8% perf-profile.self.cycles-pp.kmem_cache_alloc
0.00 +0.1 0.06 ± 22% perf-profile.self.cycles-pp.get_next_timer_interrupt
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.down_write_trylock
0.00 +0.1 0.08 ± 15% perf-profile.self.cycles-pp.up_write
0.00 +0.1 0.08 ± 15% perf-profile.self.cycles-pp.down_read_trylock
0.42 ± 3% +0.1 0.50 ± 4% perf-profile.self.cycles-pp.find_extent_buffer
0.00 +0.1 0.12 ± 5% perf-profile.self.cycles-pp.up_read
0.64 ± 3% +0.1 0.78 ± 3% perf-profile.self.cycles-pp.memmove
0.63 ± 5% +0.1 0.77 ± 7% perf-profile.self.cycles-pp.btrfs_comp_cpu_keys
0.84 ± 3% +0.2 1.02 ± 3% perf-profile.self.cycles-pp.btrfs_set_token_32
0.00 +0.2 0.18 ± 26% perf-profile.self.cycles-pp.down_read
1.72 ± 6% +0.2 1.92 ± 4% perf-profile.self.cycles-pp.__etree_search
1.93 +0.2 2.16 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
1.94 ± 6% +0.4 2.35 ± 6% perf-profile.self.cycles-pp.__lookup_extent_mapping
0.00 +0.4 0.42 ± 4% perf-profile.self.cycles-pp.rwsem_optimistic_spin
0.00 +1.2 1.18 perf-profile.self.cycles-pp.osq_lock
0.00 +2.8 2.77 ± 4% perf-profile.self.cycles-pp.rwsem_spin_on_owner
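A note on the locking symbols in the profile deltas above: rwsem_down_read_slowpath, rwsem_down_write_slowpath, rwsem_optimistic_spin, osq_lock and down_read/up_read only appear on the ba157afd0d side, which is expected since that commit switches the extent buffer tree lock to a kernel rw_semaphore. For readers unfamiliar with the primitive, below is a minimal, generic sketch of rw_semaphore usage; all names are hypothetical and this is not the actual btrfs extent buffer locking code:
/* Hypothetical sketch of guarding a structure with a kernel rw_semaphore,
 * the primitive whose slow paths (rwsem_down_*_slowpath, osq_lock,
 * rwsem_optimistic_spin) show up in the profile above. */
#include <linux/rwsem.h>
struct demo_buffer {
	struct rw_semaphore lock;	/* replaces a hand-rolled blocking lock */
	unsigned long data;
};
static void demo_buffer_init(struct demo_buffer *b)
{
	init_rwsem(&b->lock);
	b->data = 0;
}
/* Readers run concurrently; contended acquisitions fall into
 * rwsem_down_read_slowpath() and may spin in rwsem_optimistic_spin(). */
static unsigned long demo_buffer_read(struct demo_buffer *b)
{
	unsigned long v;
	down_read(&b->lock);
	v = b->data;
	up_read(&b->lock);
	return v;
}
/* Writers are exclusive; contended acquisitions go through
 * rwsem_down_write_slowpath(). */
static void demo_buffer_write(struct demo_buffer *b, unsigned long v)
{
	down_write(&b->lock);
	b->data = v;
	up_write(&b->lock);
}
Readers take the semaphore shared, writers take it exclusive; the optimistic-spin and OSQ entries in the profile are the generic contention handling of this primitive, not btrfs-specific code.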
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm] 2037ab69a5: BUG:KASAN:null-ptr-deref_in_t
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: 2037ab69a5cd8afe58347135010f6160ea368dd0 ("mm: Convert find_get_entry to return the head page")
url: https://github.com/0day-ci/linux/commits/Matthew-Wilcox-Oracle/Return-hea...
in testcase: trinity
version: trinity-x86_64-af355e9-1_2019-12-03
with following parameters:
runtime: 300s
test-description: Trinity is a linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+----------------------------------------------------------------------------+------------+------------+
| | a27ee9830b | 2037ab69a5 |
+----------------------------------------------------------------------------+------------+------------+
| boot_successes | 4 | 2 |
| boot_failures | 0 | 8 |
| Kernel_panic-not_syncing:VFS:Unable_to_mount_root_fs_on_unknown-block(#,#) | 0 | 2 |
| BUG:KASAN:null-ptr-deref_in_t | 0 | 6 |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 6 |
| Oops:#[##] | 0 | 6 |
| RIP:test_bit | 0 | 6 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 6 |
+----------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 162.744647] BUG: KASAN: null-ptr-deref in test_bit+0x23/0x2e
[ 162.745610] Read of size 8 at addr 0000000000000000 by task trinity-c1/1847
[ 162.746669]
[ 162.746984] CPU: 0 PID: 1847 Comm: trinity-c1 Not tainted 5.9.0-rc4-next-20200910-00006-g2037ab69a5cd8a #1
[ 162.748495] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 162.749850] Call Trace:
[ 162.750377] kasan_report+0x154/0x170
[ 162.751068] ? test_bit+0x23/0x2e
[ 162.751706] check_memory_region+0x13d/0x145
[ 162.752528] test_bit+0x23/0x2e
[ 162.753128] PageHuge+0x16/0x7c
[ 162.753748] find_get_incore_page+0x29/0xd3
[ 162.754631] __mincore_unmapped_range+0x169/0x210
[ 162.755548] mincore_unmapped_range+0x6d/0x9d
[ 162.756379] walk_pgd_range+0x736/0xa8b
[ 162.757156] __walk_page_range+0xd8/0x3f9
[ 162.757935] walk_page_range+0x178/0x205
[ 162.758710] ? __walk_page_range+0x3f9/0x3f9
[ 162.759569] ? hlock_class+0x3b/0xf2
[ 162.760303] __do_sys_mincore+0x3a5/0x459
[ 162.761161] do_syscall_64+0x2e/0x68
[ 162.761861] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 162.762903] RIP: 0033:0x7f923a75f1c9
[ 162.763621] Code: 01 00 48 81 c4 80 00 00 00 e9 f1 fe ff ff 0f 1f 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 97 dc 2c 00 f7 d8 64 89 01 48
[ 162.766849] RSP: 002b:00007ffc8082ed18 EFLAGS: 00000246 ORIG_RAX: 000000000000001b
[ 162.768139] RAX: ffffffffffffffda RBX: 000000000000001b RCX: 00007f923a75f1c9
[ 162.769213] RDX: 00007f92371ff010 RSI: 00000000000cc000 RDI: 00007f9238b47000
[ 162.770389] RBP: 00007f923ae44000 R08: 0000000000000041 R09: fffffffff8000000
[ 162.771458] R10: 00006407736b759e R11: 0000000000000246 R12: 00007f923ae44058
[ 162.772600] R13: 00007f923ae526b0 R14: 0000000000000000 R15: 00007f923ae44000
[ 162.773757] ==================================================================
[ 162.774941] Disabling lock debugging due to kernel taint
[ 162.775936] BUG: kernel NULL pointer dereference, address: 0000000000000000
[ 162.777007] #PF: supervisor read access in kernel mode
[ 162.777862] #PF: error_code(0x0000) - not-present page
[ 162.778635] PGD 1d6103067 P4D 1d6103067 PUD 1c687e067 PMD 0
[ 162.779570] Oops: 0000 [#1] KASAN
[ 162.780129] CPU: 0 PID: 1847 Comm: trinity-c1 Tainted: G B 5.9.0-rc4-next-20200910-00006-g2037ab69a5cd8a #1
[ 162.781812] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 162.783297] RIP: 0010:test_bit+0x23/0x2e
[ 162.784043] Code: 00 8b 43 34 5b 5d c3 48 89 f8 b9 40 00 00 00 55 48 89 f5 48 99 53 48 89 fb 48 f7 f9 48 8d 3c c6 be 08 00 00 00 e8 3d 6b 01 00 <48> 0f a3 5d 00 0f 92 c0 5b 5d c3 53 48 89 fe 48 89 fb bf 10 00 00
[ 162.787441] RSP: 0018:ffff88818d587bd0 EFLAGS: 00010286
[ 162.788425] RAX: 00000000b4610a00 RBX: 0000000000000010 RCX: ffffffff8c246fc2
[ 162.789733] RDX: fffffbfff1e79c96 RSI: 0000000000000000 RDI: ffffffff8d8a2f7b
[ 162.791090] RBP: 0000000000000000 R08: fffffbfff1e79c96 R09: 0000000000000000
[ 162.792423] R10: fffffbfff1e79c96 R11: ffffffff8f3ce4ab R12: 0000000000000000
[ 162.793745] R13: ffff8881f0c076b0 R14: ffff888180f88001 R15: 0000000000000000
[ 162.795078] FS: 00007f923ae52740(0000) GS:ffffffff8e8cd000(0000) knlGS:0000000000000000
[ 162.796303] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 162.797250] CR2: 0000000000000000 CR3: 000000018f354000 CR4: 00000000000406f0
[ 162.798357] DR0: 00007f923ad2e000 DR1: 00007f9238747000 DR2: 0000000000000000
[ 162.799515] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
[ 162.800595] Call Trace:
[ 162.801031] PageHuge+0x16/0x7c
[ 162.801562] find_get_incore_page+0x29/0xd3
[ 162.802205] __mincore_unmapped_range+0x169/0x210
[ 162.807040] mincore_unmapped_range+0x6d/0x9d
[ 162.807705] walk_pgd_range+0x736/0xa8b
[ 162.808293] __walk_page_range+0xd8/0x3f9
[ 162.808906] walk_page_range+0x178/0x205
[ 162.809496] ? __walk_page_range+0x3f9/0x3f9
[ 162.810161] ? hlock_class+0x3b/0xf2
[ 162.810760] __do_sys_mincore+0x3a5/0x459
[ 162.811460] do_syscall_64+0x2e/0x68
[ 162.812052] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 162.812881] RIP: 0033:0x7f923a75f1c9
[ 162.813503] Code: 01 00 48 81 c4 80 00 00 00 e9 f1 fe ff ff 0f 1f 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 97 dc 2c 00 f7 d8 64 89 01 48
[ 162.816544] RSP: 002b:00007ffc8082ed18 EFLAGS: 00000246 ORIG_RAX: 000000000000001b
[ 162.817626] RAX: ffffffffffffffda RBX: 000000000000001b RCX: 00007f923a75f1c9
[ 162.818676] RDX: 00007f92371ff010 RSI: 00000000000cc000 RDI: 00007f9238b47000
[ 162.819837] RBP: 00007f923ae44000 R08: 0000000000000041 R09: fffffffff8000000
[ 162.820990] R10: 00006407736b759e R11: 0000000000000246 R12: 00007f923ae44058
[ 162.822105] R13: 00007f923ae526b0 R14: 0000000000000000 R15: 00007f923ae44000
[ 162.823282] Modules linked in:
[ 162.823839] CR2: 0000000000000000
[ 162.824521] ---[ end trace 2d46de9c846249c1 ]---
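The oops above is the mincore(2) path: __do_sys_mincore() walks the page tables and, for ranges with no populated PTEs, __mincore_unmapped_range() looks the pages up in the page cache via find_get_incore_page(), which here ends up calling PageHuge()/test_bit() on a NULL page. For reference, a minimal userspace sketch of that same call path is below (an illustration only, not a guaranteed reproducer: trinity hit this by fuzzing, and triggering the NULL page may also require swap or shadow entries in the mapping; the file and mapping sizes are arbitrary).
/* Sketch: drive mincore() over an untouched file-backed mapping. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>
int main(void)
{
	size_t len = 64 << 20;			/* 64 MiB file-backed mapping */
	long page = sysconf(_SC_PAGESIZE);
	unsigned char *vec;
	void *map;
	FILE *f = tmpfile();
	if (!f || ftruncate(fileno(f), len))
		return 1;
	/*
	 * Map the file but never touch it, so no PTEs are populated and
	 * mincore() has to consult the page cache for residency.
	 */
	map = mmap(NULL, len, PROT_READ, MAP_SHARED, fileno(f), 0);
	if (map == MAP_FAILED)
		return 1;
	vec = malloc(len / page);
	if (!vec)
		return 1;
	/*
	 * This walks the whole range via walk_page_range() and reaches
	 * mincore_unmapped_range()/find_get_incore_page() for the untouched
	 * pages, i.e. the call chain in the oops above.
	 */
	if (mincore(map, len, vec))
		perror("mincore");
	else
		printf("first page is %sresident\n", (vec[0] & 1) ? "" : "not ");
	free(vec);
	munmap(map, len);
	return 0;
}
Compile with something like gcc -O2 -o mincore-sketch mincore-sketch.c (output name assumed); it only reports whether the first page of the mapping is resident.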
To reproduce:
# build kernel
cd linux
cp config-5.9.0-rc4-next-20200910-00006-g2037ab69a5cd8a .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[char_dev] 98c6e23616: dmesg.invoked_oom-killer:gfp_mask=0x
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: 98c6e236168b29be199f59d8b1d06522fd8b01f4 ("char_dev: replace cdev_map with an xarray")
git://git.infradead.org/users/hch/block.git gendisk-lookup
in testcase: xfstests
version: xfstests-x86_64-bbfab0d-1_20200821
with following parameters:
disk: 4HDD
fs: ext4
test: generic-group00
ucode: 0x28
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: 8 threads Intel(R) Core(TM) i7-4790 v3 @ 3.60GHz with 6G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 177.153697] run fstests generic/475 at 2020-08-31 05:28:54
[ 185.493470] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: acl,user_xattr
[ 185.513734] EXT4-fs error (device dm-0): ext4_read_inode_bitmap:199: comm fsstress: Cannot read inode bitmap - block_group = 0, inode_bitmap = 1079
[ 185.515318] Buffer I/O error on dev dm-0, logical block 78643184, async page read
[ 185.528294] Buffer I/O error on dev dm-0, logical block 0, lost sync page write
[ 185.544461] EXT4-fs (dm-0): I/O error while writing superblock
[ 185.551022] EXT4-fs error (device dm-0): ext4_check_bdev_write_error:215: comm fsstress: Error while async write back metadata
[ 185.563842] Buffer I/O error on dev dm-0, logical block 0, lost sync page write
[ 185.571860] EXT4-fs (dm-0): I/O error while writing superblock
[ 185.578394] EXT4-fs error (device dm-0): ext4_check_bdev_write_error:215: comm fsstress: Error while async write back metadata
[ 185.591169] Buffer I/O error on dev dm-0, logical block 0, lost sync page write
[ 185.599183] EXT4-fs (dm-0): I/O error while writing superblock
[ 185.605769] EXT4-fs error (device dm-0): ext4_check_bdev_write_error:215: comm fsstress: Error while async write back metadata
[ 185.618572] Buffer I/O error on dev dm-0, logical block 0, lost sync page write
[ 185.626614] EXT4-fs (dm-0): I/O error while writing superblock
[ 185.633200] EXT4-fs error (device dm-0): ext4_check_bdev_write_error:215: comm fsstress: Error while async write back metadata
[ 185.646060] Buffer I/O error on dev dm-0, logical block 0, lost sync page write
[ 185.654090] EXT4-fs (dm-0): I/O error while writing superblock
[ 185.660659] EXT4-fs error (device dm-0): ext4_check_bdev_write_error:215: comm fsstress: Error while async write back metadata
[ 185.673522] Buffer I/O error on dev dm-0, logical block 0, lost sync page write
[ 185.681555] EXT4-fs (dm-0): I/O error while writing superblock
[ 185.688129] EXT4-fs error (device dm-0): ext4_check_bdev_write_error:215: comm fsstress: Error while async write back metadata
[ 185.701033] Buffer I/O error on dev dm-0, logical block 0, lost sync page write
[ 185.709092] EXT4-fs (dm-0): I/O error while writing superblock
[ 185.715685] EXT4-fs error (device dm-0): ext4_check_bdev_write_error:215: comm fsstress: Error while async write back metadata
[ 185.728677] Buffer I/O error on dev dm-0, logical block 0, lost sync page write
[ 185.736797] EXT4-fs (dm-0): I/O error while writing superblock
[ 185.743413] EXT4-fs error (device dm-0): ext4_check_bdev_write_error:215: comm fsstress: Error while async write back metadata
[ 185.756386] Buffer I/O error on dev dm-0, logical block 0, lost sync page write
[ 185.764475] EXT4-fs (dm-0): I/O error while writing superblock
[ 185.771129] EXT4-fs error (device dm-0): ext4_check_bdev_write_error:215: comm fsstress: Error while async write back metadata
[ 185.784183] EXT4-fs (dm-0): I/O error while writing superblock
[ 185.795337] JBD2: Error -5 detected when updating journal superblock for dm-0-8.
[ 185.803526] Aborting journal on device dm-0-8.
[ 185.808804] JBD2: Error -5 detected when updating journal superblock for dm-0-8.
[ 185.893447] EXT4-fs (dm-0): recovery complete
[ 185.898646] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: acl,user_xattr
[ 186.927928] Aborting journal on device dm-0-8.
[ 186.933514] EXT4-fs error (device dm-0) in __ext4_new_inode:946: Journal has aborted
[ 186.933531] JBD2: Error -5 detected when updating journal superblock for dm-0-8.
[ 186.942139] EXT4-fs (dm-0): I/O error while writing superblock
[ 187.009454] EXT4-fs (dm-0): I/O error while writing superblock
[ 187.016110] EXT4-fs error (device dm-0): ext4_journal_check_start:83: Detected aborted journal
[ 187.025515] EXT4-fs (dm-0): Remounting filesystem read-only
[ 187.031933] EXT4-fs (dm-0): ext4_writepages: jbd2_start: 9223372036854775807 pages, ino 16777218; err -30
[ 187.234910] EXT4-fs (dm-0): 2 truncates cleaned up
[ 187.240541] EXT4-fs (dm-0): recovery complete
[ 187.306615] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: acl,user_xattr
[ 188.378793] Aborting journal on device dm-0-8.
[ 188.384169] JBD2: Error -5 detected when updating journal superblock for dm-0-8.
[ 191.575574] buffer_io_error: 10 callbacks suppressed
[ 191.575576] Buffer I/O error on dev dm-0, logical block 0, lost sync page write
[ 191.591358] EXT4-fs (dm-0): I/O error while writing superblock
[ 191.598420] EXT4-fs error (device dm-0): ext4_journal_check_start:83: Detected aborted journal
[ 191.608271] EXT4-fs (dm-0): Remounting filesystem read-only
[ 198.613314] killall invoked oom-killer: gfp_mask=0x40cd0(GFP_KERNEL|__GFP_COMP|__GFP_RECLAIMABLE), order=0, oom_score_adj=250
[ 198.625844] CPU: 7 PID: 8712 Comm: killall Not tainted 5.9.0-rc2-00097-g98c6e236168b2 #1
[ 198.635112] Hardware name: Dell Inc. OptiPlex 9020/03CPWF, BIOS A11 04/01/2015
[ 198.643507] Call Trace:
[ 198.647155] dump_stack+0x57/0x80
[ 198.651640] dump_header+0x4a/0x1fe
[ 198.656270] oom_kill_process.cold+0xb/0x10
[ 198.661568] out_of_memory+0x1d8/0x460
[ 198.667035] out_of_memory+0x67/0xe0
[ 198.671739] __alloc_pages_slowpath+0xcf2/0xde0
[ 198.678380] __alloc_pages_nodemask+0x2fb/0x360
[ 198.683948] allocate_slab+0x319/0x440
[ 198.688731] ___slab_alloc+0x380/0x5e0
[ 198.693498] ? proc_alloc_inode+0x16/0x80
[ 198.698511] ? __memcg_kmem_charge+0x76/0xc0
[ 198.703745] ? __mod_memcg_lruvec_state+0x1f/0xe0
[ 198.709455] ? proc_alloc_inode+0x16/0x80
[ 198.714483] __slab_alloc+0xe/0x20
[ 198.718890] kmem_cache_alloc+0x422/0x480
[ 198.723863] proc_alloc_inode+0x16/0x80
[ 198.728637] alloc_inode+0x1d/0xa0
[ 198.732981] new_inode_pseudo+0xd/0x60
[ 198.737658] new_inode+0x13/0x40
[ 198.741775] proc_pid_make_inode+0x1c/0x120
[ 198.746853] proc_pid_instantiate+0x1e/0xa0
[ 198.751935] proc_pid_lookup+0x83/0x120
[ 198.756611] proc_root_lookup+0x1d/0x40
[ 198.761283] __lookup_slow+0x84/0x160
[ 198.765759] walk_component+0x13b/0x1c0
[ 198.770406] link_path_walk+0x220/0x360
[ 198.775554] ? path_init+0x323/0x3c0
[ 198.779640] path_openat+0xca/0x10a0
[ 198.783978] ? seq_puts+0x39/0x60
[ 198.787941] do_filp_open+0x91/0x100
[ 198.792216] ? list_lru_add+0x13b/0x160
[ 198.796483] ? __check_object_size+0x136/0x147
[ 198.801348] do_sys_openat2+0x20d/0x2e0
[ 198.805621] do_sys_open+0x44/0x80
[ 198.809426] do_syscall_64+0x33/0x40
[ 198.813404] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 198.818880] RIP: 0033:0x7fc6191311ae
[ 198.822872] Code: 25 00 00 41 00 3d 00 00 41 00 74 48 48 8d 05 59 65 0d 00 8b 00 85 c0 75 69 89 f2 b8 01 01 00 00 48 89 fe bf 9c ff ff ff 0f 05 <48> 3d 00 f0 ff ff 0f 87 a6 00 00 00 48 8b 4c 24 28 64 48 33 0c 25
[ 198.842436] RSP: 002b:00007ffcc6eec1a0 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
[ 198.850408] RAX: ffffffffffffffda RBX: 000055994681c260 RCX: 00007fc6191311ae
[ 198.857969] RDX: 0000000000000000 RSI: 000055994681eaa0 RDI: 00000000ffffff9c
[ 198.865500] RBP: 0000000000000008 R08: 0000000000000008 R09: 0000000000000001
[ 198.873044] R10: 0000000000000000 R11: 0000000000000246 R12: 000055994542523a
[ 198.880565] R13: 000055994542523a R14: 0000000000000001 R15: 0000000000000000
[ 198.888092] Mem-Info:
[ 198.890774] active_anon:597 inactive_anon:110468 isolated_anon:49
[ 198.890774] active_file:144 inactive_file:0 isolated_file:50
[ 198.890774] unevictable:261148 dirty:4 writeback:0
[ 198.890774] slab_reclaimable:17116 slab_unreclaimable:1063669
[ 198.890774] mapped:5838 shmem:4299 pagetables:3025 bounce:0
[ 198.890774] free:23685 free_pcp:1181 free_cma:3668
To reproduce:
# build kernel
cd linux
cp config-5.9.0-rc3-00002-gb4ad8182a9a6ca .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=i386 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-9 CC=gcc-9 ARCH=i386 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
[char_dev] 98c6e23616: kmsg.tpm_tpm#:unable_to_cdev_device_add()tpm#,major#,minor#,err=
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: 98c6e236168b29be199f59d8b1d06522fd8b01f4 ("char_dev: replace cdev_map with an xarray")
git://git.infradead.org/users/hch/block.git gendisk-lookup
in testcase: suspend-stress
version:
with following parameters:
mode: mem
iterations: 10
on test machine: 4 threads Skylake with 8G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
kern :info : [ 4.132454] Non-volatile memory driver v1.3
kern :debug : [ 4.133092] initcall nvram_module_init+0x0/0x89 returned 0 after 669 usecs
kern :debug : [ 4.133818] calling hwrng_modinit+0x0/0x82 @ 1
kern :debug : [ 4.134515] initcall hwrng_modinit+0x0/0x82 returned 0 after 46 usecs
kern :debug : [ 4.135293] calling virtio_rng_driver_init+0x0/0x11 @ 1
kern :debug : [ 4.135945] initcall virtio_rng_driver_init+0x0/0x11 returned 0 after 2 usecs
kern :debug : [ 4.136731] calling init_tis+0x0/0xde @ 1
kern :debug : [ 4.137361] initcall init_tis+0x0/0xde returned 0 after 11 usecs
kern :debug : [ 4.138075] calling crb_acpi_driver_init+0x0/0x11 @ 1
kern :warn : [ 4.138802] tpm_crb MSFT0101:00: [Firmware Bug]: Bad ACPI memory layout
kern :err : [ 4.151499] tpm tpm0: unable to cdev_device_add() tpm0, major 10, minor 224, err=-16
kern :warn : [ 4.152297] tpm_crb: probe of MSFT0101:00 failed with error -16
kern :debug : [ 4.153061] initcall crb_acpi_driver_init+0x0/0x11 returned 0 after 13991 usecs
kern :debug : [ 4.153853] calling cn_proc_init+0x0/0x3a @ 1
kern :debug : [ 4.154512] initcall cn_proc_init+0x0/0x3a returned 0 after 0 usecs
kern :debug : [ 4.155255] calling _nvm_misc_init+0x0/0x11 @ 1
kern :debug : [ 4.155929] initcall _nvm_misc_init+0x0/0x11 returned 0 after 52 usecs
kern :debug : [ 4.156671] calling topology_sysfs_init+0x0/0x2c @ 1
kern :debug : [ 4.157370] initcall topology_sysfs_init+0x0/0x2c returned 0 after 18 usecs
kern :debug : [ 4.158161] calling cacheinfo_sysfs_init+0x0/0x2c @ 1
kern :debug : [ 4.159149] initcall cacheinfo_sysfs_init+0x0/0x2c returned 0 after 315 usecs
kern :debug : [ 4.159926] calling init+0x0/0x7b @ 1
kern :debug : [ 4.160563] initcall init+0x0/0x7b returned 0 after 15 usecs
kern :debug : [ 4.161321] calling pvpanic_mmio_init+0x0/0x28 @ 1
kern :debug : [ 4.162015] initcall pvpanic_mmio_init+0x0/0x28 returned 0 after 63 usecs
kern :debug : [ 4.162743] calling lpc_ich_driver_init+0x0/0x1a @ 1
kern :debug : [ 4.163444] initcall lpc_ich_driver_init+0x0/0x1a returned 0 after 14 usecs
kern :debug : [ 4.164224] calling intel_lpss_init+0x0/0x1d @ 1
kern :debug : [ 4.164852] initcall intel_lpss_init+0x0/0x1d returned 0 after 2 usecs
kern :debug : [ 4.165616] calling intel_lpss_pci_driver_init+0x0/0x1a @ 1
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Rong Chen
[x86/entry] 4facb95b7a: stress-ng.dup.ops_per_sec -8.7% regression
by kernel test robot
Greeting,
FYI, we noticed a -8.7% regression of stress-ng.dup.ops_per_sec due to commit:
commit: 4facb95b7adaf77e2da73aafb9ba60996fe42a12 ("x86/entry: Unbreak 32bit fast syscall")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: stress-ng
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
with following parameters:
nr_threads: 10%
disk: 1HDD
testtime: 30s
class: filesystem
cpufreq_governor: performance
ucode: 0x5002f01
fs: btrfs
In addition to that, the commit also has significant impact on the following tests:
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -4.1% regression |
| test machine | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | mode=process |
| | nr_task=50% |
| | test=getppid1 |
| | ucode=0x5002f01 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_thread_ops -3.5% regression |
| test machine | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | mode=thread |
| | nr_task=16 |
| | test=futex4 |
| | ucode=0x5002f01 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -3.1% regression |
| test machine | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | mode=process |
| | nr_task=16 |
| | test=poll1 |
| | ucode=0x5002f01 |
+------------------+---------------------------------------------------------------------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
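In the profiles below, the change concentrates in the syscall entry path (syscall_enter_from_user_mode, do_syscall_64 and the __x64_sys_* wrappers), so workloads dominated by very cheap syscalls see the largest relative hit. As a standalone illustration (this is not the stress-ng dup or will-it-scale source; the loop is only an assumption about what those microbenchmarks stress), a tight loop over a trivial syscall such as getppid(2) spends most of its time in exactly that entry/exit code:
/* Sketch: measure the per-call cost of a trivial syscall. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>
int main(void)
{
	const long iters = 10 * 1000 * 1000;
	struct timespec t0, t1;
	double secs;
	long i;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < iters; i++)
		syscall(SYS_getppid);	/* trivial syscall: cost is mostly entry/exit */
	clock_gettime(CLOCK_MONOTONIC, &t1);
	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%.1f ns per getppid() syscall\n", secs / iters * 1e9);
	return 0;
}
Built with gcc -O2, the reported ns-per-call figure is where a few extra cycles per syscall entry would show up as a percent-level throughput change.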
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
filesystem/gcc-9/performance/1HDD/btrfs/x86_64-rhel-8.3/10%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp5/stress-ng/30s/0x5002f01
commit:
d5c678aed5 ("x86/debug: Allow a single level of #DB recursion")
4facb95b7a ("x86/entry: Unbreak 32bit fast syscall")
d5c678aed5eddb94 4facb95b7adaf77e2da73aafb9b
---------------- ---------------------------
%stddev %change %stddev
\ | \
1.937e+08 -8.7% 1.768e+08 stress-ng.dup.ops
6455320 -8.7% 5891784 stress-ng.dup.ops_per_sec
1898586 -3.9% 1823791 stress-ng.time.involuntary_context_switches
3902753 -3.5% 3767258 ± 2% stress-ng.time.minor_page_faults
42502 ± 2% -11.2% 37751 ± 5% sched_debug.cpu.yld_count.min
37834 +1.8% 38531 proc-vmstat.nr_slab_reclaimable
7344206 -2.1% 7188882 proc-vmstat.pgfault
1.346e+08 +1.3% 1.363e+08 perf-stat.i.cache-references
200.92 -1.6% 197.63 perf-stat.i.cpu-migrations
9.69 +2.0% 9.88 perf-stat.overall.MPKI
1.344e+08 +1.3% 1.361e+08 perf-stat.ps.cache-references
200.79 -1.6% 197.56 perf-stat.ps.cpu-migrations
2.076e+09 -1.4% 2.046e+09 perf-stat.ps.dTLB-stores
1.696e+13 -1.2% 1.676e+13 perf-stat.total.instructions
149522 ± 7% -12.1% 131402 ± 5% softirqs.BLOCK
111270 ± 3% +10.1% 122480 ± 3% softirqs.CPU40.RCU
131299 ± 5% -16.1% 110136 ± 9% softirqs.CPU55.RCU
116708 ± 4% -9.5% 105607 ± 4% softirqs.CPU56.RCU
127637 ± 9% -14.8% 108805 ± 6% softirqs.CPU58.RCU
108133 ± 6% +10.8% 119806 ± 4% softirqs.CPU7.RCU
108681 ± 9% +17.6% 127829 ± 9% softirqs.CPU88.RCU
9521 ± 3% -9.6% 8607 ± 5% slabinfo.file_lock_cache.active_objs
9555 ± 3% -9.6% 8637 ± 5% slabinfo.file_lock_cache.num_objs
76202 ± 7% -12.4% 66736 ± 8% slabinfo.ftrace_event_field.active_objs
899.25 ± 7% -12.4% 788.00 ± 8% slabinfo.ftrace_event_field.active_slabs
76468 ± 7% -12.3% 67030 ± 8% slabinfo.ftrace_event_field.num_objs
899.25 ± 7% -12.4% 788.00 ± 8% slabinfo.ftrace_event_field.num_slabs
24041 ± 5% -10.3% 21566 ± 3% slabinfo.pid.active_objs
24141 ± 5% -10.5% 21610 ± 3% slabinfo.pid.num_objs
6.16 -0.4 5.73 ± 6% perf-profile.calltrace.cycles-pp.free_uid.put_cred_rcu.do_faccessat.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.09 -0.4 5.67 ± 6% perf-profile.calltrace.cycles-pp.refcount_dec_not_one.refcount_dec_and_lock_irqsave.free_uid.put_cred_rcu.do_faccessat
6.11 -0.4 5.69 ± 6% perf-profile.calltrace.cycles-pp.refcount_dec_and_lock_irqsave.free_uid.put_cred_rcu.do_faccessat.do_syscall_64
0.85 ± 3% -0.1 0.75 ± 12% perf-profile.calltrace.cycles-pp.btrfs_get_delayed_node.btrfs_get_or_create_delayed_node.btrfs_delayed_update_inode.btrfs_update_inode.btrfs_dirty_inode
6.10 -0.4 5.67 ± 6% perf-profile.children.cycles-pp.refcount_dec_not_one
6.16 -0.4 5.73 ± 6% perf-profile.children.cycles-pp.free_uid
6.12 -0.4 5.70 ± 6% perf-profile.children.cycles-pp.refcount_dec_and_lock_irqsave
0.85 ± 3% -0.1 0.75 ± 12% perf-profile.children.cycles-pp.btrfs_get_delayed_node
0.25 ± 4% -0.1 0.17 ± 6% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.12 ± 3% +0.0 0.13 perf-profile.children.cycles-pp.lapic_next_deadline
0.03 ±102% +0.0 0.08 ± 19% perf-profile.children.cycles-pp.update_blocked_averages
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.__x64_sys_access
0.00 +0.1 0.07 ± 13% perf-profile.children.cycles-pp.__x64_sys_faccessat
6.06 -0.4 5.64 ± 6% perf-profile.self.cycles-pp.refcount_dec_not_one
1.34 ± 4% -0.2 1.10 ± 11% perf-profile.self.cycles-pp.menu_select
1.28 ± 5% -0.2 1.11 ± 9% perf-profile.self.cycles-pp.cpuidle_enter_state
0.84 ± 3% -0.1 0.74 ± 12% perf-profile.self.cycles-pp.btrfs_get_delayed_node
0.24 ± 3% -0.1 0.17 ± 6% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.11 ± 4% +0.0 0.13 perf-profile.self.cycles-pp.lapic_next_deadline
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.__x64_sys_access
0.00 +0.1 0.06 ± 13% perf-profile.self.cycles-pp.__x64_sys_faccessat
0.00 +0.1 0.09 ± 14% perf-profile.self.cycles-pp.__x64_sys_fchmod
297777 ± 7% -58.2% 124580 ±100% interrupts.315:PCI-MSI.376832-edge.ahci[0000:00:17.0]
2.25 ± 79% +21266.7% 480.75 ± 98% interrupts.69:PCI-MSI.31981602-edge.i40e-eth0-TxRx-33
2.25 ±127% +3888.9% 89.75 ±134% interrupts.74:PCI-MSI.31981607-edge.i40e-eth0-TxRx-38
16244 ± 15% -19.2% 13128 ± 14% interrupts.CPU1.RES:Rescheduling_interrupts
91723 ± 9% +52.2% 139616 ± 46% interrupts.CPU10.CAL:Function_call_interrupts
10853 ± 6% -31.2% 7465 ± 15% interrupts.CPU10.RES:Rescheduling_interrupts
11509 ± 8% -29.8% 8080 ± 26% interrupts.CPU11.RES:Rescheduling_interrupts
123.00 ± 4% -19.7% 98.75 ± 25% interrupts.CPU14.NMI:Non-maskable_interrupts
123.00 ± 4% -19.7% 98.75 ± 25% interrupts.CPU14.PMI:Performance_monitoring_interrupts
8822 ± 13% -36.8% 5575 ± 19% interrupts.CPU14.RES:Rescheduling_interrupts
11485 ± 15% -45.3% 6283 ± 2% interrupts.CPU16.RES:Rescheduling_interrupts
125.00 ± 3% -19.8% 100.25 ± 24% interrupts.CPU17.NMI:Non-maskable_interrupts
125.00 ± 3% -19.8% 100.25 ± 24% interrupts.CPU17.PMI:Performance_monitoring_interrupts
11064 ± 18% -34.1% 7293 ± 23% interrupts.CPU17.RES:Rescheduling_interrupts
810.00 ±146% -87.7% 99.75 ± 24% interrupts.CPU18.NMI:Non-maskable_interrupts
810.00 ±146% -87.7% 99.75 ± 24% interrupts.CPU18.PMI:Performance_monitoring_interrupts
9896 ± 2% -31.6% 6769 ± 26% interrupts.CPU18.RES:Rescheduling_interrupts
137504 ± 14% -46.9% 72975 ± 44% interrupts.CPU18.TLB:TLB_shootdowns
10649 ± 17% -38.4% 6556 ± 28% interrupts.CPU21.RES:Rescheduling_interrupts
125.00 ± 3% -20.8% 99.00 ± 24% interrupts.CPU22.NMI:Non-maskable_interrupts
125.00 ± 3% -20.8% 99.00 ± 24% interrupts.CPU22.PMI:Performance_monitoring_interrupts
9267 ± 16% -24.6% 6990 ± 10% interrupts.CPU22.RES:Rescheduling_interrupts
19849 ± 21% -28.1% 14273 ± 20% interrupts.CPU24.RES:Rescheduling_interrupts
14181 ± 8% -17.4% 11709 ± 10% interrupts.CPU25.RES:Rescheduling_interrupts
185468 ± 20% -29.7% 130477 ± 17% interrupts.CPU27.CAL:Function_call_interrupts
159450 ± 5% -22.1% 124232 ± 18% interrupts.CPU31.CAL:Function_call_interrupts
122378 ± 10% -46.4% 65550 ± 30% interrupts.CPU31.TLB:TLB_shootdowns
6106 ± 15% +46.4% 8939 ± 28% interrupts.CPU32.RES:Rescheduling_interrupts
2.00 ± 79% +23887.5% 479.75 ± 98% interrupts.CPU33.69:PCI-MSI.31981602-edge.i40e-eth0-TxRx-33
5812 ± 15% +23.8% 7195 ± 6% interrupts.CPU36.RES:Rescheduling_interrupts
2.00 ±122% +4362.5% 89.25 ±135% interrupts.CPU38.74:PCI-MSI.31981607-edge.i40e-eth0-TxRx-38
132.75 ± 6% -31.6% 90.75 ± 34% interrupts.CPU44.NMI:Non-maskable_interrupts
132.75 ± 6% -31.6% 90.75 ± 34% interrupts.CPU44.PMI:Performance_monitoring_interrupts
157673 ± 44% -45.5% 85876 ± 44% interrupts.CPU45.CAL:Function_call_interrupts
8304 ± 6% -21.3% 6537 ± 17% interrupts.CPU48.RES:Rescheduling_interrupts
113272 ± 32% +66.1% 188101 ± 22% interrupts.CPU5.CAL:Function_call_interrupts
12178 ± 9% -48.7% 6249 ± 34% interrupts.CPU55.RES:Rescheduling_interrupts
574.50 ±132% -80.6% 111.25 ± 7% interrupts.CPU56.NMI:Non-maskable_interrupts
574.50 ±132% -80.6% 111.25 ± 7% interrupts.CPU56.PMI:Performance_monitoring_interrupts
10975 ± 21% -55.5% 4879 ± 34% interrupts.CPU56.RES:Rescheduling_interrupts
134.00 ± 9% -15.9% 112.75 ± 5% interrupts.CPU57.NMI:Non-maskable_interrupts
134.00 ± 9% -15.9% 112.75 ± 5% interrupts.CPU57.PMI:Performance_monitoring_interrupts
13877 ± 31% -48.6% 7138 ± 46% interrupts.CPU58.RES:Rescheduling_interrupts
10752 ± 25% -43.5% 6070 ± 48% interrupts.CPU59.RES:Rescheduling_interrupts
124.25 ± 3% -9.7% 112.25 ± 6% interrupts.CPU61.NMI:Non-maskable_interrupts
124.25 ± 3% -9.7% 112.25 ± 6% interrupts.CPU61.PMI:Performance_monitoring_interrupts
91014 ± 27% -49.4% 46018 ± 36% interrupts.CPU62.TLB:TLB_shootdowns
9574 ± 24% -49.4% 4841 ± 40% interrupts.CPU69.RES:Rescheduling_interrupts
8343 ± 17% -45.7% 4533 ± 34% interrupts.CPU70.RES:Rescheduling_interrupts
4009 ± 96% -80.5% 783.50 ±152% interrupts.CPU71.NMI:Non-maskable_interrupts
4009 ± 96% -80.5% 783.50 ±152% interrupts.CPU71.PMI:Performance_monitoring_interrupts
10207 ± 14% -24.9% 7670 ± 18% interrupts.CPU72.RES:Rescheduling_interrupts
127441 ± 20% -40.7% 75630 ± 18% interrupts.CPU74.CAL:Function_call_interrupts
106754 ± 23% -36.8% 67436 ± 20% interrupts.CPU74.TLB:TLB_shootdowns
10256 ± 20% -29.5% 7230 ± 22% interrupts.CPU8.RES:Rescheduling_interrupts
79980 ± 36% -49.4% 40484 ± 21% interrupts.CPU9.TLB:TLB_shootdowns
875684 ± 2% -12.4% 767074 ± 6% interrupts.RES:Rescheduling_interrupts
stress-ng.dup.ops
4e+08 +-----------------------------------------------------------------+
|+ ++.+ +.+ +.++ ++ ++.+ |
|: : : : : : : :: : : |
3.5e+08 |:+ : : : : : : :: : : |
| : : : : : : : : : : : |
| : : : : : : : : : : : |
3e+08 |-: : : : : : : : : : : |
| :: : : : : : :: : |
2.5e+08 |-:: : : : : : :: : |
| :: : : : : : :: : |
| : : : : : : : : |
2e+08 |-+: ++. .+ .+ +. : : +.+ : +.+ +.+ +. |
|O + O OO +++.++ O++ + +++.++ + + + + +++.+|
| OO O O OO |
1.5e+08 +-----------------------------------------------------------------+
stress-ng.dup.ops_per_sec
1.3e+07 +-----------------------------------------------------------------+
|+ ++.+ +.+ +.++ ++ ++.+ |
1.2e+07 |:+ : : : : : : :: : : |
1.1e+07 |:+ : : : : : : :: : : |
| : : : : : : : : : : : |
1e+07 |-: : : : : : : : : : : |
| : : : : :: : : : : : |
9e+06 |-:: : : : : : :: : |
| :: : : : : : :: : |
8e+06 |-+: : : : : : : : |
7e+06 |-+: : : : : : : : |
| + ++.+++.++.+++.+++.+++. + + +.+ + +.+++.+++.+ +.+|
6e+06 |O+OOO OOO OOO OO OO + + |
| |
5e+06 +-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-csl-2ap3: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/process/50%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2ap3/getppid1/will-it-scale/0x5002f01
commit:
d5c678aed5 ("x86/debug: Allow a single level of #DB recursion")
4facb95b7a ("x86/entry: Unbreak 32bit fast syscall")
d5c678aed5eddb94 4facb95b7adaf77e2da73aafb9b
---------------- ---------------------------
%stddev %change %stddev
\ | \
12038155 -4.1% 11545213 will-it-scale.per_process_ops
1.156e+09 -4.1% 1.108e+09 will-it-scale.workload
79181 ± 3% +10.8% 87715 ± 6% cpuidle.POLL.time
8.25 ± 6% -3.2 5.07 ± 9% perf-profile.calltrace.cycles-pp.syscall_enter_from_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.getppid
10.46 ± 6% +2.0 12.48 ± 9% perf-profile.calltrace.cycles-pp.__x64_sys_getppid.do_syscall_64.entry_SYSCALL_64_after_hwframe.getppid
8.26 ± 6% -2.9 5.34 ± 9% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
10.62 ± 6% +2.0 12.64 ± 9% perf-profile.children.cycles-pp.__x64_sys_getppid
8.06 ± 6% -2.9 5.18 ± 9% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
1.14 ± 6% +1.1 2.21 ± 10% perf-profile.self.cycles-pp.do_syscall_64
3.27 ± 6% +1.4 4.65 ± 9% perf-profile.self.cycles-pp.__x64_sys_getppid
5.086e+10 -4.1% 4.879e+10 perf-stat.i.branch-instructions
1.16 -0.1 1.06 perf-stat.i.branch-miss-rate%
5.856e+08 -12.2% 5.144e+08 perf-stat.i.branch-misses
1.18 +6.3% 1.25 perf-stat.i.cpi
8.872e+10 -6.6% 8.291e+10 perf-stat.i.dTLB-loads
0.00 ± 4% +0.0 0.00 ± 6% perf-stat.i.dTLB-store-miss-rate%
6.113e+10 -7.7% 5.643e+10 perf-stat.i.dTLB-stores
5.564e+08 -8.2% 5.106e+08 perf-stat.i.iTLB-load-misses
2.531e+11 -5.8% 2.384e+11 perf-stat.i.instructions
456.82 +2.5% 468.10 perf-stat.i.instructions-per-iTLB-miss
0.85 -5.9% 0.80 perf-stat.i.ipc
1045 -6.3% 979.65 perf-stat.i.metric.M/sec
0.04 +7.0% 0.05 perf-stat.overall.MPKI
1.15 -0.1 1.05 perf-stat.overall.branch-miss-rate%
1.18 +6.3% 1.25 perf-stat.overall.cpi
0.00 +0.0 0.00 perf-stat.overall.dTLB-store-miss-rate%
455.16 +2.6% 466.97 perf-stat.overall.instructions-per-iTLB-miss
0.85 -5.9% 0.80 perf-stat.overall.ipc
65906 -1.8% 64694 perf-stat.overall.path-length
5.068e+10 -4.1% 4.862e+10 perf-stat.ps.branch-instructions
5.834e+08 -12.2% 5.125e+08 perf-stat.ps.branch-misses
8.841e+10 -6.5% 8.262e+10 perf-stat.ps.dTLB-loads
6.092e+10 -7.7% 5.623e+10 perf-stat.ps.dTLB-stores
5.543e+08 -8.2% 5.088e+08 perf-stat.ps.iTLB-load-misses
2.522e+11 -5.8% 2.376e+11 perf-stat.ps.instructions
7.617e+13 -5.9% 7.17e+13 perf-stat.total.instructions
***************************************************************************************************
lkp-csl-2ap2: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/thread/16/debian-10.4-x86_64-20200603.cgz/lkp-csl-2ap2/futex4/will-it-scale/0x5002f01
commit:
d5c678aed5 ("x86/debug: Allow a single level of #DB recursion")
4facb95b7a ("x86/entry: Unbreak 32bit fast syscall")
d5c678aed5eddb94 4facb95b7adaf77e2da73aafb9b
---------------- ---------------------------
%stddev %change %stddev
\ | \
6661382 -3.5% 6425740 will-it-scale.per_thread_ops
1.066e+08 -3.5% 1.028e+08 will-it-scale.workload
673195 ± 12% -13.4% 582901 ± 5% sched_debug.cpu.max_idle_balance_cost.max
2863 ± 6% -8.7% 2615 ± 6% slabinfo.fsnotify_mark_connector.active_objs
2863 ± 6% -8.7% 2615 ± 6% slabinfo.fsnotify_mark_connector.num_objs
24713 ± 21% -30.5% 17177 ± 9% numa-meminfo.node2.KReclaimable
24713 ± 21% -30.5% 17177 ± 9% numa-meminfo.node2.SReclaimable
22491 ± 29% -31.4% 15432 ± 2% numa-meminfo.node3.Shmem
423370 ± 30% +49.6% 633375 ± 17% numa-vmstat.node0.numa_local
6178 ± 21% -30.5% 4293 ± 9% numa-vmstat.node2.nr_slab_reclaimable
504182 ± 22% -33.3% 336099 ± 15% numa-vmstat.node2.numa_local
63782 ± 57% +93.7% 123540 numa-vmstat.node2.numa_other
5616 ± 29% -31.4% 3851 ± 2% numa-vmstat.node3.nr_shmem
3728 ± 2% +428.9% 19720 ± 83% softirqs.CPU12.SCHED
11536 ± 8% +18.7% 13692 ± 9% softirqs.CPU141.RCU
39530 ± 3% +5.0% 41516 ± 4% softirqs.CPU58.SCHED
39627 ± 4% +5.1% 41636 ± 4% softirqs.CPU64.SCHED
11762 ± 6% +12.9% 13281 ± 3% softirqs.CPU94.RCU
2858 +103.4% 5813 ± 50% interrupts.CPU108.NMI:Non-maskable_interrupts
2858 +103.4% 5813 ± 50% interrupts.CPU108.PMI:Performance_monitoring_interrupts
65.00 ± 45% -88.5% 7.50 ± 55% interrupts.CPU112.RES:Rescheduling_interrupts
805.25 +67.5% 1348 ± 54% interrupts.CPU16.CAL:Function_call_interrupts
39.50 ±120% -85.4% 5.75 ± 39% interrupts.CPU2.RES:Rescheduling_interrupts
1404 ± 27% -42.9% 801.00 interrupts.CPU51.CAL:Function_call_interrupts
800.50 +39.6% 1117 ± 30% interrupts.CPU71.CAL:Function_call_interrupts
946.00 ± 25% +47.1% 1391 ± 44% interrupts.CPU73.CAL:Function_call_interrupts
1423 ± 3% -16.8% 1184 ± 19% interrupts.CPU8.CAL:Function_call_interrupts
895.25 ± 10% -10.8% 798.75 interrupts.CPU84.CAL:Function_call_interrupts
1004 ± 23% -20.5% 799.25 interrupts.CPU87.CAL:Function_call_interrupts
1024 ± 22% -21.9% 799.25 interrupts.CPU94.CAL:Function_call_interrupts
8.836e+09 -3.3% 8.541e+09 perf-stat.i.branch-instructions
0.95 +5.2% 1.00 perf-stat.i.cpi
1.571e+10 -4.8% 1.497e+10 perf-stat.i.dTLB-loads
1.192e+10 -5.2% 1.13e+10 perf-stat.i.dTLB-stores
56842974 -2.4% 55486396 perf-stat.i.iTLB-load-misses
5.901e+10 -4.1% 5.66e+10 perf-stat.i.instructions
1.06 -5.0% 1.00 perf-stat.i.ipc
190.07 -4.5% 181.55 perf-stat.i.metric.M/sec
40174 -14.6% 34309 ± 21% perf-stat.i.node-loads
0.95 +5.2% 1.00 perf-stat.overall.cpi
1.06 -5.0% 1.00 perf-stat.overall.ipc
8.806e+09 -3.3% 8.513e+09 perf-stat.ps.branch-instructions
1.566e+10 -4.8% 1.491e+10 perf-stat.ps.dTLB-loads
1.188e+10 -5.2% 1.126e+10 perf-stat.ps.dTLB-stores
56650485 -2.4% 55298787 perf-stat.ps.iTLB-load-misses
5.881e+10 -4.1% 5.641e+10 perf-stat.ps.instructions
40073 -14.5% 34257 ± 21% perf-stat.ps.node-loads
1.778e+13 -4.2% 1.704e+13 perf-stat.total.instructions
3.52 ± 9% -0.9 2.66 perf-profile.calltrace.cycles-pp.syscall_enter_from_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
1.11 ± 7% +0.2 1.35 ± 15% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
1.13 ± 7% +0.2 1.38 ± 16% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
1.14 ± 7% +0.2 1.39 ± 16% perf-profile.calltrace.cycles-pp.asm_call_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
1.75 ± 5% +0.5 2.22 ± 13% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
38.04 ± 8% +2.8 40.87 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
33.65 ± 8% +3.1 36.76 perf-profile.calltrace.cycles-pp.__x64_sys_futex.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
3.52 ± 9% -0.7 2.80 perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.15 ± 14% +0.0 0.18 ± 6% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.16 ± 27% +0.2 0.35 ± 42% perf-profile.children.cycles-pp.tick_irq_enter
0.71 ± 9% +0.2 0.92 ± 26% perf-profile.children.cycles-pp.__hrtimer_run_queues
1.29 ± 8% +0.3 1.59 ± 12% perf-profile.children.cycles-pp.hrtimer_interrupt
1.31 ± 7% +0.3 1.63 ± 12% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
1.53 ± 6% +0.4 1.88 ± 14% perf-profile.children.cycles-pp.asm_call_on_stack
1.95 ± 6% +0.5 2.50 ± 12% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
38.11 ± 8% +2.8 40.96 perf-profile.children.cycles-pp.do_syscall_64
33.80 ± 8% +3.1 36.89 perf-profile.children.cycles-pp.__x64_sys_futex
3.46 ± 9% -0.7 2.73 perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.00 +0.1 0.05 perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.11 ± 11% +0.1 0.18 ± 23% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
1.21 ± 9% +0.4 1.62 perf-profile.self.cycles-pp.do_futex
0.58 ± 11% +0.4 1.02 ± 3% perf-profile.self.cycles-pp.do_syscall_64
1.60 ± 9% +2.0 3.57 perf-profile.self.cycles-pp.__x64_sys_futex
***************************************************************************************************
lkp-csl-2ap2: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/process/16/debian-10.4-x86_64-20200603.cgz/lkp-csl-2ap2/poll1/will-it-scale/0x5002f01
commit:
d5c678aed5 ("x86/debug: Allow a single level of #DB recursion")
4facb95b7a ("x86/entry: Unbreak 32bit fast syscall")
d5c678aed5eddb94 4facb95b7adaf77e2da73aafb9b
---------------- ---------------------------
%stddev %change %stddev
\ | \
6604008 -3.1% 6400612 will-it-scale.per_process_ops
1.057e+08 -3.1% 1.024e+08 will-it-scale.workload
16484 -36.2% 10522 ± 28% numa-meminfo.node0.Mapped
180979 ± 30% -67.3% 59090 ± 80% numa-numastat.node3.local_node
196599 ± 21% -54.1% 90155 ± 53% numa-numastat.node3.numa_hit
545463 ± 19% -26.2% 402373 ± 7% numa-vmstat.node3.numa_hit
471870 ± 23% -39.0% 287855 ± 15% numa-vmstat.node3.numa_local
935.25 ± 7% -25.5% 696.38 ± 15% slabinfo.skbuff_fclone_cache.active_objs
935.25 ± 7% -25.5% 696.38 ± 15% slabinfo.skbuff_fclone_cache.num_objs
1055 ± 24% -23.3% 809.00 interrupts.CPU161.CAL:Function_call_interrupts
1549 ± 49% -39.3% 941.25 ± 37% interrupts.CPU5.CAL:Function_call_interrupts
1432 ± 55% -40.6% 850.12 ± 12% interrupts.CPU75.CAL:Function_call_interrupts
11812 ± 8% +13.9% 13457 ± 7% softirqs.CPU87.RCU
11812 ± 8% +11.7% 13190 ± 4% softirqs.CPU94.RCU
32469 ± 5% +7.1% 34781 ± 4% softirqs.TIMER
3.23 ± 9% -0.9 2.30 ± 9% perf-profile.calltrace.cycles-pp.syscall_enter_from_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__poll
0.75 ± 10% +0.1 0.89 ± 10% perf-profile.calltrace.cycles-pp.___might_sleep.__might_fault._copy_from_user.do_sys_poll.__x64_sys_poll
3.24 ± 9% -0.8 2.43 ± 9% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.81 ± 10% +0.1 0.95 ± 10% perf-profile.children.cycles-pp.___might_sleep
3.18 ± 9% -0.8 2.38 ± 9% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.80 ± 9% +0.1 0.93 ± 9% perf-profile.self.cycles-pp.___might_sleep
0.70 ± 10% +0.4 1.12 ± 9% perf-profile.self.cycles-pp.do_syscall_64
1.71 ± 10% +1.8 3.49 ± 9% perf-profile.self.cycles-pp.__x64_sys_poll
1.289e+10 -3.1% 1.249e+10 perf-stat.i.branch-instructions
28.40 ± 11% -8.9 19.50 ± 43% perf-stat.i.cache-miss-rate%
6416617 ± 17% -23.0% 4943945 ± 6% perf-stat.i.cache-misses
0.85 +3.2% 0.88 perf-stat.i.cpi
9079 ± 16% +26.4% 11476 ± 8% perf-stat.i.cycles-between-cache-misses
1.754e+10 -3.8% 1.688e+10 perf-stat.i.dTLB-loads
1.319e+10 -4.6% 1.258e+10 perf-stat.i.dTLB-stores
6.64e+10 -3.7% 6.391e+10 perf-stat.i.instructions
1229 -4.7% 1171 perf-stat.i.instructions-per-iTLB-miss
1.18 -3.1% 1.14 perf-stat.i.ipc
227.32 -3.8% 218.71 perf-stat.i.metric.M/sec
62934 ± 32% -44.6% 34876 ± 22% perf-stat.i.node-loads
28.25 ± 11% -8.8 19.43 ± 43% perf-stat.overall.cache-miss-rate%
0.85 +3.2% 0.88 perf-stat.overall.cpi
9029 ± 15% +26.1% 11383 ± 8% perf-stat.overall.cycles-between-cache-misses
1227 -4.8% 1168 perf-stat.overall.instructions-per-iTLB-miss
1.18 -3.1% 1.14 perf-stat.overall.ipc
1.285e+10 -3.1% 1.244e+10 perf-stat.ps.branch-instructions
6396063 ± 17% -23.0% 4927357 ± 6% perf-stat.ps.cache-misses
1.748e+10 -3.8% 1.682e+10 perf-stat.ps.dTLB-loads
1.315e+10 -4.6% 1.254e+10 perf-stat.ps.dTLB-stores
6.618e+10 -3.7% 6.37e+10 perf-stat.ps.instructions
62806 ± 32% -44.6% 34787 ± 22% perf-stat.ps.node-loads
1.997e+13 -3.8% 1.922e+13 perf-stat.total.instructions
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm] 09854ba94c: vm-scalability.throughput 31.4% improvement
by kernel test robot
Greeting,
FYI, we noticed a 31.4% improvement of vm-scalability.throughput due to commit:
commit: 09854ba94c6aad7886996bfbee2530b3d8a7f4f4 ("mm: do_wp_page() simplification")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: vm-scalability
on test machine: 104 threads Skylake with 192G memory
with following parameters:
runtime: 300s
size: 8T
test: anon-cow-seq
cpufreq_governor: performance
ucode: 0x2006906
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
In addition to that, the commit also has significant impact on the following tests:
+------------------+-------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 7.7% improvement |
| test machine | 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=512G |
| | test=anon-cow-rand |
| | ucode=0xd6 |
+------------------+-------------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
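The anon-cow-seq workload is anonymous copy-on-write: the parent populates anonymous memory, children then write through it, and every first write takes a COW fault, which is the do_wp_page()/wp_page_copy() path this commit simplifies (visible in the perf-profile entries further below). A minimal approximation of that pattern is sketched here (not the vm-scalability source; the sizes, single child and sequential stride are assumptions):
/* Sketch: parent populates private anonymous memory, child COW-writes it. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
int main(void)
{
	size_t len = 256 << 20;		/* 256 MiB of private anonymous memory */
	long page = sysconf(_SC_PAGESIZE);
	size_t off;
	char *buf;
	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 1, len);		/* parent populates the pages first */
	if (fork() == 0) {
		/*
		 * Child: each first write to a page shared with the parent is a
		 * copy-on-write fault; for 4 KiB PTE-mapped pages that is the
		 * do_wp_page() path changed by this commit.
		 */
		for (off = 0; off < len; off += page)
			buf[off] = 2;
		_exit(0);
	}
	wait(NULL);
	printf("child wrote to %zu pages\n", len / page);
	return 0;
}
MAP_PRIVATE plus the parent-side memset() is what makes the child's writes copy-on-write faults rather than plain anonymous first-touch faults.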
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/8T/lkp-skl-fpga01/anon-cow-seq/vm-scalability/0x2006906
commit:
v5.8
09854ba94c ("mm: do_wp_page() simplification")
v5.8 09854ba94c6aad7886996bfbee2
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
4:8 64% 9:16 perf-profile.calltrace.cycles-pp.sync_regs.error_entry.do_access
9:8 120% 19:16 perf-profile.calltrace.cycles-pp.error_entry.do_access
11:8 157% 23:16 perf-profile.children.cycles-pp.error_entry
5:8 79% 11:16 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
268551 +35.1% 362826 vm-scalability.median
28753307 +31.4% 37793854 vm-scalability.throughput
134481 ± 6% +23.5% 166124 ± 2% vm-scalability.time.involuntary_context_switches
1.403e+09 +26.2% 1.77e+09 vm-scalability.time.minor_page_faults
6444 +34.1% 8640 vm-scalability.time.percent_of_cpu_this_job_got
14353 +34.2% 19255 vm-scalability.time.system_time
5188 +32.9% 6893 ± 4% vm-scalability.time.user_time
1.555e+08 ± 4% -99.2% 1279562 ± 5% vm-scalability.time.voluntary_context_switches
6.302e+09 +26.2% 7.953e+09 vm-scalability.workload
6.874e+08 ± 3% +27.3% 8.749e+08 numa-numastat.node0.local_node
6.874e+08 ± 3% +27.3% 8.749e+08 numa-numastat.node0.numa_hit
7.174e+08 ± 3% +25.2% 8.978e+08 numa-numastat.node1.local_node
7.174e+08 ± 3% +25.2% 8.978e+08 numa-numastat.node1.numa_hit
1.096e+09 ± 5% -95.5% 49493536 ±159% cpuidle.C1.time
39622278 ± 4% -97.9% 827336 ± 80% cpuidle.C1.usage
31389280 ± 10% -75.1% 7813790 ± 22% cpuidle.C1E.usage
5.883e+08 ± 5% -98.9% 6340935 ± 8% cpuidle.POLL.time
88921823 ± 6% -99.3% 609604 ± 8% cpuidle.POLL.usage
32.68 -17.4 15.24 ± 3% mpstat.cpu.all.idle%
4.01 ± 7% -4.0 0.03 ± 16% mpstat.cpu.all.iowait%
1.54 -0.3 1.24 ± 3% mpstat.cpu.all.irq%
0.11 -0.1 0.04 ± 3% mpstat.cpu.all.soft%
45.27 +16.2 61.49 mpstat.cpu.all.sys%
16.40 +5.6 21.96 ± 4% mpstat.cpu.all.usr%
34719024 +29.1% 44835140 meminfo.Active
34718776 +29.1% 44834810 meminfo.Active(anon)
34487360 +28.5% 44314921 meminfo.AnonPages
78184290 +12.4% 87858835 ± 2% meminfo.Committed_AS
37213454 +27.4% 47409435 meminfo.Memused
158668 +12.0% 177734 meminfo.PageTables
252730 +28.5% 324817 meminfo.max_used_kB
32.62 -53.1% 15.31 ± 3% vmstat.cpu.id
46.00 +34.4% 61.81 vmstat.cpu.sy
16.00 +33.6% 21.38 ± 4% vmstat.cpu.us
5252099 ±129% -100.0% 0.00 vmstat.procs.b
65.38 +40.2% 91.62 vmstat.procs.r
1016936 ± 4% -98.9% 11412 ± 3% vmstat.system.cs
304498 -32.0% 207124 ± 3% vmstat.system.in
17153583 +29.0% 22136482 ± 4% numa-meminfo.node0.Active
17153413 +29.0% 22136310 ± 4% numa-meminfo.node0.Active(anon)
17047962 +28.6% 21924106 ± 4% numa-meminfo.node0.AnonPages
18352394 ± 2% +27.7% 23432911 ± 4% numa-meminfo.node0.MemUsed
17592839 +29.4% 22761355 ± 4% numa-meminfo.node1.Active
17592761 +29.4% 22761198 ± 4% numa-meminfo.node1.Active(anon)
17466514 +28.6% 22454452 ± 4% numa-meminfo.node1.AnonPages
18888900 ± 2% +27.3% 24039768 ± 4% numa-meminfo.node1.MemUsed
4279901 ± 2% +30.1% 5568775 ± 4% numa-vmstat.node0.nr_active_anon
4253727 +29.7% 5515159 ± 4% numa-vmstat.node0.nr_anon_pages
4279762 ± 2% +30.1% 5568563 ± 4% numa-vmstat.node0.nr_zone_active_anon
3.435e+08 ± 3% +27.5% 4.381e+08 numa-vmstat.node0.numa_hit
3.434e+08 ± 3% +27.6% 4.381e+08 numa-vmstat.node0.numa_local
4394975 +30.0% 5713855 ± 4% numa-vmstat.node1.nr_active_anon
4363511 +29.1% 5635227 ± 4% numa-vmstat.node1.nr_anon_pages
4394841 +30.0% 5713610 ± 4% numa-vmstat.node1.nr_zone_active_anon
3.605e+08 ± 3% +24.9% 4.503e+08 numa-vmstat.node1.numa_hit
3.604e+08 ± 3% +24.9% 4.502e+08 numa-vmstat.node1.numa_local
72302 ± 2% +16.1% 83948 ± 2% slabinfo.anon_vma.active_objs
1572 ± 2% +16.1% 1825 ± 2% slabinfo.anon_vma.active_slabs
72343 ± 2% +16.1% 84013 ± 2% slabinfo.anon_vma.num_objs
1572 ± 2% +16.1% 1825 ± 2% slabinfo.anon_vma.num_slabs
139153 ± 3% +10.3% 153501 slabinfo.anon_vma_chain.active_objs
2176 ± 3% +10.3% 2400 slabinfo.anon_vma_chain.active_slabs
139330 ± 3% +10.3% 153676 slabinfo.anon_vma_chain.num_objs
2176 ± 3% +10.3% 2400 slabinfo.anon_vma_chain.num_slabs
3574 ± 4% +14.9% 4107 ± 4% slabinfo.khugepaged_mm_slot.active_objs
3574 ± 4% +14.9% 4107 ± 4% slabinfo.khugepaged_mm_slot.num_objs
7595 ± 3% +11.8% 8490 ± 4% slabinfo.signal_cache.active_objs
7614 ± 3% +11.7% 8503 ± 3% slabinfo.signal_cache.num_objs
8676099 +29.4% 11226006 proc-vmstat.nr_active_anon
8618051 +28.8% 11096095 proc-vmstat.nr_anon_pages
186.00 +63.4% 303.94 ± 76% proc-vmstat.nr_dirtied
3960653 -6.5% 3703377 proc-vmstat.nr_dirty_background_threshold
7930992 -6.5% 7415811 proc-vmstat.nr_dirty_threshold
275794 +3.8% 286144 proc-vmstat.nr_file_pages
39882917 -6.4% 37312878 proc-vmstat.nr_free_pages
28737 +1.8% 29250 proc-vmstat.nr_inactive_anon
387.38 +5.8% 409.81 ± 3% proc-vmstat.nr_inactive_file
9494 +4.5% 9919 proc-vmstat.nr_mapped
39670 +12.8% 44752 ± 2% proc-vmstat.nr_page_table_pages
38106 +7.6% 41012 proc-vmstat.nr_shmem
237268 +3.1% 244670 proc-vmstat.nr_unevictable
173.00 +67.1% 289.12 ± 80% proc-vmstat.nr_written
8676099 +29.4% 11226005 proc-vmstat.nr_zone_active_anon
28737 +1.8% 29250 proc-vmstat.nr_zone_inactive_anon
387.38 +5.8% 409.81 ± 3% proc-vmstat.nr_zone_inactive_file
237268 +3.1% 244670 proc-vmstat.nr_zone_unevictable
1377947 +25.9% 1734779 proc-vmstat.numa_hint_faults
728235 +38.0% 1004832 proc-vmstat.numa_hint_faults_local
1.405e+09 +26.2% 1.773e+09 proc-vmstat.numa_hit
1331719 +26.6% 1686290 proc-vmstat.numa_huge_pte_updates
1.405e+09 +26.2% 1.773e+09 proc-vmstat.numa_local
1.002e+09 +4.9% 1.051e+09 ± 2% proc-vmstat.numa_pte_updates
14101 ± 3% +50.0% 21151 ± 7% proc-vmstat.pgactivate
1.659e+09 +25.8% 2.087e+09 proc-vmstat.pgalloc_normal
1.403e+09 +26.2% 1.771e+09 proc-vmstat.pgfault
1.658e+09 +25.6% 2.083e+09 proc-vmstat.pgfree
2.407e+08 ± 3% +23.7% 2.977e+08 ± 5% proc-vmstat.pgmigrate_fail
26254 +26.2% 33130 proc-vmstat.thp_fault_alloc
2729468 +26.2% 3444694 proc-vmstat.thp_split_pmd
4932939 ± 19% -69.3% 1513787 ± 96% sched_debug.cfs_rq:/.MIN_vruntime.max
84033 ± 11% +41.8% 119159 ± 9% sched_debug.cfs_rq:/.exec_clock.avg
71200 ± 10% +54.4% 109919 ± 9% sched_debug.cfs_rq:/.exec_clock.min
4932940 ± 19% -69.3% 1513787 ± 96% sched_debug.cfs_rq:/.max_vruntime.max
6123507 ± 11% +88.9% 11566046 ± 9% sched_debug.cfs_rq:/.min_vruntime.avg
6855360 ± 12% +76.2% 12079127 ± 9% sched_debug.cfs_rq:/.min_vruntime.max
5364188 ± 10% +99.8% 10715451 ± 9% sched_debug.cfs_rq:/.min_vruntime.min
0.35 ± 8% -49.7% 0.18 ± 21% sched_debug.cfs_rq:/.nr_running.stddev
54.01 ± 12% +52.0% 82.10 ± 9% sched_debug.cfs_rq:/.nr_spread_over.avg
191.68 ± 39% +118.7% 419.26 ± 14% sched_debug.cfs_rq:/.nr_spread_over.max
27.61 ± 22% +119.7% 60.66 ± 12% sched_debug.cfs_rq:/.nr_spread_over.min
19.60 ± 30% +85.2% 36.29 ± 14% sched_debug.cfs_rq:/.nr_spread_over.stddev
1331 ± 13% +23.3% 1641 ± 10% sched_debug.cfs_rq:/.runnable_avg.max
53.31 ± 73% +279.8% 202.45 ± 36% sched_debug.cfs_rq:/.runnable_avg.min
48.95 ± 81% +280.1% 186.09 ± 35% sched_debug.cfs_rq:/.util_avg.min
396.93 ± 11% +48.4% 589.24 ± 17% sched_debug.cfs_rq:/.util_est_enqueued.avg
0.02 ±264% +1.5e+05% 31.03 ±198% sched_debug.cfs_rq:/.util_est_enqueued.min
218292 ± 12% +32.3% 288869 ± 13% sched_debug.cpu.avg_idle.stddev
5646 ± 8% +38.6% 7826 ± 18% sched_debug.cpu.curr->pid.avg
2974 ± 10% -43.5% 1681 ± 28% sched_debug.cpu.curr->pid.stddev
1333544 ± 12% -98.7% 17020 ± 9% sched_debug.cpu.nr_switches.avg
1950550 ± 14% -98.0% 38090 ± 9% sched_debug.cpu.nr_switches.max
789316 ± 9% -99.0% 8155 ± 28% sched_debug.cpu.nr_switches.min
446932 ± 27% -98.5% 6869 ± 23% sched_debug.cpu.nr_switches.stddev
0.16 ± 48% -94.3% 0.01 ±157% sched_debug.cpu.nr_uninterruptible.avg
24.52 ± 9% -20.2% 19.57 ± 14% sched_debug.cpu.nr_uninterruptible.stddev
1332157 ± 12% -98.8% 15736 ± 10% sched_debug.cpu.sched_count.avg
1947528 ± 14% -98.3% 33519 ± 11% sched_debug.cpu.sched_count.max
787480 ± 9% -99.1% 7471 ± 29% sched_debug.cpu.sched_count.min
446663 ± 27% -98.6% 6318 ± 26% sched_debug.cpu.sched_count.stddev
664464 ± 12% -99.0% 6592 ± 10% sched_debug.cpu.sched_goidle.avg
972338 ± 14% -98.5% 15040 ± 12% sched_debug.cpu.sched_goidle.max
391974 ± 9% -99.3% 2756 ± 34% sched_debug.cpu.sched_goidle.min
223557 ± 27% -98.7% 2997 ± 29% sched_debug.cpu.sched_goidle.stddev
667116 ± 12% -98.9% 7572 ± 10% sched_debug.cpu.ttwu_count.avg
937077 ± 13% -98.2% 17318 ± 13% sched_debug.cpu.ttwu_count.max
428250 ± 9% -99.3% 3178 ± 29% sched_debug.cpu.ttwu_count.min
183076 ± 26% -98.1% 3391 ± 23% sched_debug.cpu.ttwu_count.stddev
1669 ± 9% +300.6% 6686 ± 19% sched_debug.cpu.ttwu_local.max
672.88 ± 12% -29.3% 475.56 ± 11% sched_debug.cpu.ttwu_local.min
184.37 ± 14% +363.6% 854.73 ± 20% sched_debug.cpu.ttwu_local.stddev
10.09 -5.9% 9.50 perf-stat.i.MPKI
3.452e+10 +13.1% 3.904e+10 perf-stat.i.branch-instructions
0.21 ± 2% -0.1 0.11 ± 2% perf-stat.i.branch-miss-rate%
59081166 ± 2% -44.2% 32984841 perf-stat.i.branch-misses
35.55 -12.2 23.31 ± 4% perf-stat.i.cache-miss-rate%
4.767e+08 -36.0% 3.052e+08 ± 4% perf-stat.i.cache-misses
1.411e+09 -7.7% 1.302e+09 perf-stat.i.cache-references
1115456 ± 4% -98.8% 13912 ± 4% perf-stat.i.context-switches
1.50 +41.0% 2.11 ± 2% perf-stat.i.cpi
105792 +10.7% 117110 perf-stat.i.cpu-clock
1.933e+11 +46.4% 2.83e+11 perf-stat.i.cpu-cycles
2384 ± 4% -81.2% 448.09 ± 6% perf-stat.i.cpu-migrations
431.76 +126.4% 977.49 ± 4% perf-stat.i.cycles-between-cache-misses
2.883e+10 +19.6% 3.449e+10 perf-stat.i.dTLB-loads
0.15 +0.0 0.18 perf-stat.i.dTLB-store-miss-rate%
15666838 +38.2% 21653648 perf-stat.i.dTLB-store-misses
9.651e+09 +24.2% 1.199e+10 perf-stat.i.dTLB-stores
14.00 +4.1 18.07 perf-stat.i.iTLB-load-miss-rate%
9001555 ± 3% +72.9% 15566626 ± 2% perf-stat.i.iTLB-load-misses
53605109 +30.2% 69780056 perf-stat.i.iTLB-loads
15977 -42.8% 9133 ± 2% perf-stat.i.instructions-per-iTLB-miss
0.71 -29.1% 0.50 perf-stat.i.ipc
1.77 +30.0% 2.30 perf-stat.i.metric.GHz
0.05 ± 24% +151.1% 0.12 ± 4% perf-stat.i.metric.K/sec
685.48 +8.4% 743.38 perf-stat.i.metric.M/sec
5007741 +39.6% 6991127 perf-stat.i.minor-faults
75936116 ± 2% -57.0% 32653994 ± 6% perf-stat.i.node-load-misses
36986631 ± 2% -61.5% 14242141 ± 20% perf-stat.i.node-loads
16.90 ± 3% -5.8 11.10 ± 8% perf-stat.i.node-store-miss-rate%
28049976 +33.8% 37519554 perf-stat.i.node-stores
5007740 +39.6% 6991122 perf-stat.i.page-faults
105792 +10.7% 117110 perf-stat.i.task-clock
10.44 -8.5% 9.56 perf-stat.overall.MPKI
0.17 ± 2% -0.1 0.09 perf-stat.overall.branch-miss-rate%
33.97 -10.6 23.34 ± 3% perf-stat.overall.cache-miss-rate%
1.47 +42.7% 2.09 perf-stat.overall.cpi
413.26 +127.3% 939.14 ± 3% perf-stat.overall.cycles-between-cache-misses
0.16 +0.0 0.18 perf-stat.overall.dTLB-store-miss-rate%
14.35 ± 2% +3.8 18.18 perf-stat.overall.iTLB-load-miss-rate%
15053 ± 2% -41.7% 8779 perf-stat.overall.instructions-per-iTLB-miss
0.68 -29.9% 0.48 perf-stat.overall.ipc
14.56 ± 2% -4.1 10.48 ± 6% perf-stat.overall.node-store-miss-rate%
6008 -28.1% 4321 perf-stat.overall.path-length
3.197e+10 +2.0% 3.26e+10 perf-stat.ps.branch-instructions
55126365 ± 2% -49.1% 28060044 perf-stat.ps.branch-misses
4.421e+08 -42.7% 2.534e+08 ± 3% perf-stat.ps.cache-misses
1.302e+09 -16.5% 1.086e+09 perf-stat.ps.cache-references
1022856 ± 4% -98.9% 11408 ± 3% perf-stat.ps.context-switches
1.827e+11 +30.1% 2.377e+11 perf-stat.ps.cpu-cycles
2222 ± 4% -81.9% 402.70 ± 5% perf-stat.ps.cpu-migrations
2.681e+10 +7.6% 2.884e+10 perf-stat.ps.dTLB-loads
14423754 +25.4% 18081863 perf-stat.ps.dTLB-store-misses
9.01e+09 +11.5% 1.005e+10 perf-stat.ps.dTLB-stores
8286270 ± 3% +56.3% 12953590 ± 2% perf-stat.ps.iTLB-load-misses
49469188 +17.9% 58313749 perf-stat.ps.iTLB-loads
1.246e+11 -8.8% 1.137e+11 perf-stat.ps.instructions
4608940 +26.5% 5828391 perf-stat.ps.minor-faults
70645873 -61.8% 27020155 ± 7% perf-stat.ps.node-load-misses
34484923 -65.9% 11769356 ± 17% perf-stat.ps.node-loads
4389253 ± 2% -14.4% 3759325 ± 6% perf-stat.ps.node-store-misses
25749167 +24.8% 32132643 perf-stat.ps.node-stores
4608940 +26.5% 5828391 perf-stat.ps.page-faults
3.787e+13 -9.2% 3.437e+13 perf-stat.total.instructions
56161 ± 8% -75.0% 14034 ± 24% softirqs.CPU0.SCHED
54147 ± 8% -77.4% 12259 ± 21% softirqs.CPU1.SCHED
55175 ± 9% -79.2% 11491 ± 22% softirqs.CPU10.SCHED
55228 ± 7% -77.9% 12193 ± 20% softirqs.CPU100.SCHED
55982 ± 10% -78.1% 12234 ± 19% softirqs.CPU101.SCHED
55442 ± 11% -77.9% 12241 ± 19% softirqs.CPU102.SCHED
55518 ± 8% -78.5% 11947 ± 19% softirqs.CPU103.SCHED
53922 ± 10% -78.7% 11489 ± 23% softirqs.CPU11.SCHED
55586 ± 8% -79.6% 11321 ± 24% softirqs.CPU12.SCHED
55173 ± 10% -79.4% 11392 ± 24% softirqs.CPU13.SCHED
55394 ± 8% -79.0% 11629 ± 24% softirqs.CPU14.SCHED
55491 ± 8% -79.1% 11593 ± 24% softirqs.CPU15.SCHED
55674 ± 9% -79.3% 11531 ± 23% softirqs.CPU16.SCHED
55917 ± 9% -79.3% 11563 ± 23% softirqs.CPU17.SCHED
54978 ± 8% -78.9% 11582 ± 24% softirqs.CPU18.SCHED
55355 ± 9% -79.1% 11566 ± 25% softirqs.CPU19.SCHED
54475 ± 9% -77.7% 12164 ± 21% softirqs.CPU2.SCHED
55032 ± 9% -79.0% 11582 ± 25% softirqs.CPU20.SCHED
55773 ± 8% -79.5% 11455 ± 24% softirqs.CPU21.SCHED
55528 ± 8% -78.9% 11695 ± 25% softirqs.CPU22.SCHED
54925 ± 8% -79.1% 11472 ± 24% softirqs.CPU23.SCHED
55020 ± 7% -79.2% 11451 ± 25% softirqs.CPU24.SCHED
56676 ± 8% -79.6% 11554 ± 27% softirqs.CPU25.SCHED
52001 ± 9% -77.5% 11677 ± 17% softirqs.CPU26.SCHED
11388 ± 9% +19.1% 13562 ± 10% softirqs.CPU27.RCU
52240 ± 13% -76.4% 12324 ± 21% softirqs.CPU27.SCHED
52709 ± 14% -76.9% 12173 ± 21% softirqs.CPU28.SCHED
53661 ± 15% -76.6% 12567 ± 20% softirqs.CPU29.SCHED
54832 ± 11% -79.2% 11383 ± 23% softirqs.CPU3.SCHED
53860 ± 13% -76.6% 12598 ± 19% softirqs.CPU30.SCHED
53010 ± 14% -76.1% 12651 ± 21% softirqs.CPU31.SCHED
54592 ± 11% -77.0% 12560 ± 19% softirqs.CPU32.SCHED
53792 ± 12% -77.1% 12344 ± 21% softirqs.CPU33.SCHED
53907 ± 10% -76.9% 12438 ± 20% softirqs.CPU34.SCHED
54309 ± 12% -77.1% 12447 ± 20% softirqs.CPU35.SCHED
54252 ± 11% -76.7% 12637 ± 20% softirqs.CPU36.SCHED
54257 ± 10% -77.0% 12462 ± 20% softirqs.CPU37.SCHED
54904 ± 10% -77.2% 12497 ± 20% softirqs.CPU38.SCHED
53990 ± 10% -77.0% 12405 ± 20% softirqs.CPU39.SCHED
54821 ± 10% -78.9% 11543 ± 24% softirqs.CPU4.SCHED
53301 ± 12% -76.4% 12555 ± 20% softirqs.CPU40.SCHED
54519 ± 11% -76.9% 12600 ± 18% softirqs.CPU41.SCHED
53100 ± 10% -76.4% 12529 ± 20% softirqs.CPU42.SCHED
54225 ± 10% -76.8% 12587 ± 20% softirqs.CPU43.SCHED
55777 ± 11% -77.4% 12591 ± 21% softirqs.CPU44.SCHED
55576 ± 11% -77.8% 12341 ± 21% softirqs.CPU45.SCHED
54704 ± 11% -77.2% 12462 ± 20% softirqs.CPU46.SCHED
54317 ± 11% -76.5% 12775 ± 20% softirqs.CPU47.SCHED
54594 ± 12% -77.2% 12470 ± 20% softirqs.CPU48.SCHED
11759 ± 8% +15.0% 13522 ± 9% softirqs.CPU49.RCU
54531 ± 10% -77.1% 12512 ± 20% softirqs.CPU49.SCHED
54946 ± 10% -79.3% 11365 ± 24% softirqs.CPU5.SCHED
55452 ± 10% -77.7% 12373 ± 21% softirqs.CPU50.SCHED
52610 ± 8% -76.5% 12358 ± 21% softirqs.CPU51.SCHED
54711 ± 7% -80.5% 10649 ± 20% softirqs.CPU52.SCHED
55017 ± 7% -79.5% 11276 ± 22% softirqs.CPU53.SCHED
56543 ± 8% -80.2% 11210 ± 20% softirqs.CPU54.SCHED
11937 ± 7% +15.0% 13731 ± 10% softirqs.CPU55.RCU
55301 ± 8% -80.1% 10992 ± 20% softirqs.CPU55.SCHED
55110 ± 6% -80.2% 10900 ± 22% softirqs.CPU56.SCHED
56739 ± 9% -80.5% 11072 ± 22% softirqs.CPU57.SCHED
56275 ± 7% -80.1% 11197 ± 21% softirqs.CPU58.SCHED
11751 ± 7% +15.7% 13597 ± 10% softirqs.CPU59.RCU
55990 ± 7% -80.0% 11195 ± 21% softirqs.CPU59.SCHED
54866 ± 9% -78.9% 11571 ± 22% softirqs.CPU6.SCHED
56393 ± 7% -80.3% 11129 ± 23% softirqs.CPU60.SCHED
55101 ± 8% -79.5% 11322 ± 21% softirqs.CPU61.SCHED
55478 ± 8% -79.6% 11336 ± 21% softirqs.CPU62.SCHED
55893 ± 7% -79.9% 11249 ± 21% softirqs.CPU63.SCHED
56486 ± 7% -80.2% 11180 ± 22% softirqs.CPU64.SCHED
56751 ± 7% -80.4% 11140 ± 23% softirqs.CPU65.SCHED
56496 ± 7% -80.0% 11281 ± 22% softirqs.CPU66.SCHED
55843 ± 10% -79.7% 11346 ± 22% softirqs.CPU67.SCHED
56365 ± 8% -79.8% 11384 ± 22% softirqs.CPU68.SCHED
56332 ± 8% -80.0% 11245 ± 22% softirqs.CPU69.SCHED
55551 ± 9% -79.8% 11246 ± 22% softirqs.CPU7.SCHED
56376 ± 8% -79.8% 11412 ± 20% softirqs.CPU70.SCHED
57543 ± 7% -80.5% 11221 ± 22% softirqs.CPU71.SCHED
56590 ± 8% -80.1% 11250 ± 22% softirqs.CPU72.SCHED
56141 ± 8% -80.0% 11223 ± 23% softirqs.CPU73.SCHED
57131 ± 8% -80.1% 11376 ± 22% softirqs.CPU74.SCHED
56374 ± 8% -79.9% 11306 ± 23% softirqs.CPU75.SCHED
57457 ± 6% -80.5% 11181 ± 23% softirqs.CPU76.SCHED
10833 ± 8% +16.9% 12667 ± 9% softirqs.CPU77.RCU
57177 ± 7% -80.3% 11243 ± 25% softirqs.CPU77.SCHED
54421 ± 8% -78.7% 11579 ± 18% softirqs.CPU78.SCHED
55021 ± 10% -78.0% 12098 ± 17% softirqs.CPU79.SCHED
54123 ± 10% -79.1% 11327 ± 26% softirqs.CPU8.SCHED
54622 ± 10% -77.3% 12389 ± 17% softirqs.CPU80.SCHED
55160 ± 10% -77.5% 12387 ± 17% softirqs.CPU81.SCHED
54067 ± 11% -77.1% 12372 ± 19% softirqs.CPU82.SCHED
54866 ± 10% -77.7% 12240 ± 19% softirqs.CPU83.SCHED
54955 ± 9% -77.6% 12283 ± 17% softirqs.CPU84.SCHED
54322 ± 10% -77.7% 12117 ± 20% softirqs.CPU85.SCHED
53794 ± 12% -77.0% 12374 ± 19% softirqs.CPU86.SCHED
54479 ± 11% -77.7% 12164 ± 18% softirqs.CPU87.SCHED
54409 ± 10% -77.4% 12321 ± 18% softirqs.CPU88.SCHED
55144 ± 11% -77.7% 12305 ± 18% softirqs.CPU89.SCHED
54151 ± 9% -78.5% 11620 ± 23% softirqs.CPU9.SCHED
55803 ± 8% -77.9% 12348 ± 20% softirqs.CPU90.SCHED
55109 ± 11% -77.9% 12202 ± 17% softirqs.CPU91.SCHED
55000 ± 9% -77.5% 12370 ± 19% softirqs.CPU92.SCHED
55289 ± 10% -77.9% 12245 ± 19% softirqs.CPU93.SCHED
54811 ± 10% -77.4% 12404 ± 19% softirqs.CPU94.SCHED
55395 ± 10% -77.7% 12370 ± 19% softirqs.CPU95.SCHED
55795 ± 11% -78.0% 12256 ± 19% softirqs.CPU96.SCHED
55877 ± 8% -78.2% 12162 ± 20% softirqs.CPU97.SCHED
55228 ± 10% -78.2% 12064 ± 19% softirqs.CPU98.SCHED
55294 ± 10% -77.6% 12376 ± 19% softirqs.CPU99.SCHED
5730596 -78.4% 1236237 ± 2% softirqs.SCHED
41.84 ± 2% -40.3 1.54 ± 24% perf-profile.calltrace.cycles-pp.do_wp_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
27.67 ± 4% -27.7 0.00 perf-profile.calltrace.cycles-pp.wp_page_copy.do_wp_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
15.43 ± 8% -13.7 1.76 ± 27% perf-profile.calltrace.cycles-pp.secondary_startup_64
15.28 ± 8% -13.6 1.73 ± 26% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
15.28 ± 8% -13.6 1.73 ± 26% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
15.27 ± 8% -13.5 1.73 ± 26% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
13.06 ± 9% -11.4 1.68 ± 27% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
13.04 ± 9% -11.4 1.67 ± 27% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
10.34 ± 12% -10.3 0.00 perf-profile.calltrace.cycles-pp.lru_cache_add.wp_page_copy.do_wp_page.__handle_mm_fault.handle_mm_fault
10.20 ± 13% -10.2 0.00 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_cache_add.wp_page_copy.do_wp_page.__handle_mm_fault
8.94 ± 14% -8.9 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_cache_add.wp_page_copy.do_wp_page
10.29 ± 12% -8.7 1.58 ± 28% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
8.65 ± 2% -8.6 0.00 perf-profile.calltrace.cycles-pp.reuse_swap_page.do_wp_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
7.08 ± 3% -7.1 0.00 perf-profile.calltrace.cycles-pp.copy_page.wp_page_copy.do_wp_page.__handle_mm_fault.handle_mm_fault
0.00 +0.7 0.70 ± 13% perf-profile.calltrace.cycles-pp.__mod_lruvec_state.page_add_new_anon_rmap.wp_page_copy.__handle_mm_fault.handle_mm_fault
0.00 +0.8 0.84 ± 13% perf-profile.calltrace.cycles-pp.try_charge.mem_cgroup_charge.wp_page_copy.__handle_mm_fault.handle_mm_fault
0.00 +0.9 0.94 ± 11% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy.__handle_mm_fault
0.00 +0.9 0.94 ± 11% perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.lru_cache_add.wp_page_copy.__handle_mm_fault
0.00 +1.1 1.12 ± 23% perf-profile.calltrace.cycles-pp.page_remove_rmap.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +1.2 1.21 ± 15% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy.__handle_mm_fault.handle_mm_fault
0.00 +1.2 1.25 ± 14% perf-profile.calltrace.cycles-pp.page_add_new_anon_rmap.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +1.3 1.31 ± 15% perf-profile.calltrace.cycles-pp.alloc_pages_vma.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +1.6 1.62 ± 16% perf-profile.calltrace.cycles-pp.get_mem_cgroup_from_mm.mem_cgroup_charge.wp_page_copy.__handle_mm_fault.handle_mm_fault
0.00 +2.4 2.38 ± 13% perf-profile.calltrace.cycles-pp.flush_tlb_func_common.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy.__handle_mm_fault
0.00 +2.8 2.78 ± 12% perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy.__handle_mm_fault.handle_mm_fault
0.00 +2.9 2.90 ± 12% perf-profile.calltrace.cycles-pp.ptep_clear_flush.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +4.4 4.40 ± 12% perf-profile.calltrace.cycles-pp.mem_cgroup_charge.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
8.90 ± 14% +6.0 14.92 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_cache_add.wp_page_copy
0.00 +6.1 6.07 ± 8% perf-profile.calltrace.cycles-pp.copy_page.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +15.0 14.97 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_cache_add.wp_page_copy.__handle_mm_fault
11.96 ± 12% +15.1 27.01 ± 14% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.zap_pte_range
11.97 ± 12% +15.1 27.03 ± 14% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.zap_pte_range.unmap_page_range
14.65 ± 11% +15.1 29.76 ± 14% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
14.65 ± 11% +15.1 29.76 ± 14% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
14.64 ± 11% +15.1 29.75 ± 14% perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap.mmput
14.75 ± 11% +15.2 29.91 ± 14% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
14.75 ± 11% +15.2 29.91 ± 14% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
14.75 ± 11% +15.2 29.91 ± 14% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
14.74 ± 11% +15.2 29.90 ± 14% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
14.74 ± 11% +15.2 29.90 ± 14% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
14.76 ± 11% +15.2 29.92 ± 14% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
14.76 ± 11% +15.2 29.92 ± 14% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
13.89 ± 11% +15.2 29.14 ± 14% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.zap_pte_range.unmap_page_range.unmap_vmas
13.98 ± 11% +15.3 29.23 ± 14% perf-profile.calltrace.cycles-pp.tlb_flush_mmu.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap
0.00 +16.4 16.39 ± 6% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_cache_add.wp_page_copy.__handle_mm_fault.handle_mm_fault
0.00 +16.6 16.56 ± 6% perf-profile.calltrace.cycles-pp.lru_cache_add.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +36.4 36.40 ± 7% perf-profile.calltrace.cycles-pp.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
41.85 ± 2% -40.2 1.68 ± 19% perf-profile.children.cycles-pp.do_wp_page
15.43 ± 8% -13.7 1.76 ± 27% perf-profile.children.cycles-pp.secondary_startup_64
15.43 ± 8% -13.7 1.76 ± 27% perf-profile.children.cycles-pp.cpu_startup_entry
15.42 ± 8% -13.7 1.76 ± 27% perf-profile.children.cycles-pp.do_idle
15.28 ± 8% -13.6 1.73 ± 26% perf-profile.children.cycles-pp.start_secondary
13.19 ± 9% -11.5 1.71 ± 27% perf-profile.children.cycles-pp.cpuidle_enter
13.19 ± 9% -11.5 1.71 ± 27% perf-profile.children.cycles-pp.cpuidle_enter_state
10.40 ± 11% -8.8 1.61 ± 29% perf-profile.children.cycles-pp.intel_idle
8.77 ± 2% -8.7 0.05 ± 27% perf-profile.children.cycles-pp.reuse_swap_page
7.15 ± 3% -1.0 6.14 ± 7% perf-profile.children.cycles-pp.copy_page
1.03 ± 3% -0.6 0.44 ± 6% perf-profile.children.cycles-pp.asm_call_on_stack
0.52 ± 10% -0.4 0.13 ± 16% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.37 ± 5% -0.3 0.06 ± 10% perf-profile.children.cycles-pp.update_load_avg
0.84 ± 3% -0.2 0.65 ± 7% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
1.02 ± 2% -0.2 0.85 ± 6% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.49 ± 4% -0.2 0.34 ± 5% perf-profile.children.cycles-pp.__list_del_entry_valid
0.20 ± 6% -0.1 0.05 ± 28% perf-profile.children.cycles-pp.ktime_get
0.19 ± 7% -0.1 0.07 ± 18% perf-profile.children.cycles-pp.irq_exit_rcu
0.13 ± 10% -0.1 0.07 ± 9% perf-profile.children.cycles-pp.clockevents_program_event
0.20 ± 4% -0.1 0.15 ± 9% perf-profile.children.cycles-pp._find_next_bit
0.15 ± 4% -0.0 0.11 ± 12% perf-profile.children.cycles-pp.up_read
0.10 ± 13% -0.0 0.07 ± 17% perf-profile.children.cycles-pp.update_cfs_group
0.19 ± 5% +0.1 0.24 ± 8% perf-profile.children.cycles-pp.scheduler_tick
0.09 ± 11% +0.1 0.15 ± 12% perf-profile.children.cycles-pp.tlb_finish_mmu
0.06 ± 10% +0.1 0.13 ± 11% perf-profile.children.cycles-pp.__unlock_page_memcg
0.12 ± 7% +0.1 0.19 ± 8% perf-profile.children.cycles-pp.task_tick_fair
0.00 +0.1 0.09 ± 23% perf-profile.children.cycles-pp.unlock_page_memcg
0.00 +0.2 0.20 ± 9% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
1.70 ± 2% +0.2 1.93 ± 7% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.22 ± 4% +0.5 0.75 ± 22% perf-profile.children.cycles-pp.lock_page_memcg
1.13 ± 9% +0.6 1.77 ± 10% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.82 ± 6% +0.8 1.60 ± 13% perf-profile.children.cycles-pp.page_remove_rmap
3.05 ± 7% +1.4 4.47 ± 10% perf-profile.children.cycles-pp.mem_cgroup_charge
10.54 ± 11% +6.1 16.61 ± 6% perf-profile.children.cycles-pp.pagevec_lru_move_fn
10.67 ± 11% +6.1 16.76 ± 6% perf-profile.children.cycles-pp.lru_cache_add
27.68 ± 4% +8.7 36.42 ± 7% perf-profile.children.cycles-pp.wp_page_copy
14.65 ± 11% +15.1 29.76 ± 14% perf-profile.children.cycles-pp.unmap_vmas
14.65 ± 11% +15.1 29.76 ± 14% perf-profile.children.cycles-pp.unmap_page_range
14.64 ± 11% +15.1 29.76 ± 14% perf-profile.children.cycles-pp.zap_pte_range
14.75 ± 11% +15.2 29.91 ± 14% perf-profile.children.cycles-pp.__x64_sys_exit_group
14.75 ± 11% +15.2 29.91 ± 14% perf-profile.children.cycles-pp.do_group_exit
14.75 ± 11% +15.2 29.91 ± 14% perf-profile.children.cycles-pp.do_exit
14.75 ± 11% +15.2 29.91 ± 14% perf-profile.children.cycles-pp.mmput
14.75 ± 11% +15.2 29.91 ± 14% perf-profile.children.cycles-pp.exit_mmap
15.15 ± 11% +15.2 30.35 ± 14% perf-profile.children.cycles-pp.do_syscall_64
15.15 ± 11% +15.2 30.35 ± 14% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
14.07 ± 11% +15.3 29.38 ± 14% perf-profile.children.cycles-pp.tlb_flush_mmu
14.09 ± 11% +15.3 29.41 ± 14% perf-profile.children.cycles-pp.release_pages
22.48 ± 10% +19.8 42.33 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
23.12 ± 10% +20.0 43.17 ± 8% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
10.39 ± 11% -8.8 1.61 ± 29% perf-profile.self.cycles-pp.intel_idle
8.69 ± 2% -8.6 0.05 ± 38% perf-profile.self.cycles-pp.reuse_swap_page
7.08 ± 3% -1.0 6.09 ± 7% perf-profile.self.cycles-pp.copy_page
0.45 ± 4% -0.4 0.07 ± 7% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.48 ± 4% -0.2 0.33 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.33 ± 4% -0.1 0.24 ± 9% perf-profile.self.cycles-pp._raw_spin_lock
0.17 ± 10% -0.1 0.11 ± 11% perf-profile.self.cycles-pp.zap_pte_range
0.20 ± 5% -0.1 0.15 ± 9% perf-profile.self.cycles-pp._find_next_bit
0.14 ± 5% -0.0 0.11 ± 11% perf-profile.self.cycles-pp.up_read
0.16 ± 7% -0.0 0.13 ± 9% perf-profile.self.cycles-pp.vmacache_find
0.14 ± 3% -0.0 0.12 ± 7% perf-profile.self.cycles-pp.___might_sleep
0.13 ± 7% +0.0 0.16 ± 8% perf-profile.self.cycles-pp.lru_cache_add
0.06 ± 9% +0.1 0.13 ± 10% perf-profile.self.cycles-pp.__unlock_page_memcg
0.00 +0.1 0.09 ± 21% perf-profile.self.cycles-pp.unlock_page_memcg
0.47 ± 3% +0.1 0.57 ± 8% perf-profile.self.cycles-pp.page_add_new_anon_rmap
0.36 ± 3% +0.3 0.65 ± 11% perf-profile.self.cycles-pp.page_remove_rmap
0.22 ± 5% +0.5 0.74 ± 22% perf-profile.self.cycles-pp.lock_page_memcg
0.44 ± 15% +0.6 1.04 ± 12% perf-profile.self.cycles-pp.mem_cgroup_charge
1.12 ± 9% +0.6 1.75 ± 10% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.84 ± 3% +0.8 1.65 ± 19% perf-profile.self.cycles-pp.do_wp_page
0.62 ± 2% +1.0 1.61 ± 20% perf-profile.self.cycles-pp.wp_page_copy
23.12 ± 10% +20.0 43.17 ± 8% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
28859127 -99.3% 196613 ± 3% interrupts.CAL:Function_call_interrupts
236575 ± 11% -99.1% 2094 ± 19% interrupts.CPU0.CAL:Function_call_interrupts
5873 ± 35% -81.9% 1062 ± 14% interrupts.CPU0.RES:Rescheduling_interrupts
254146 ± 9% -99.2% 2128 ± 22% interrupts.CPU1.CAL:Function_call_interrupts
5669 ± 38% -89.5% 595.38 ± 39% interrupts.CPU1.RES:Rescheduling_interrupts
281815 ± 14% -99.4% 1764 ± 10% interrupts.CPU10.CAL:Function_call_interrupts
6227 ± 41% -91.3% 538.88 ± 26% interrupts.CPU10.RES:Rescheduling_interrupts
345264 ± 45% -74.5% 88030 ± 45% interrupts.CPU10.TLB:TLB_shootdowns
257795 ± 11% -99.2% 1989 ± 10% interrupts.CPU100.CAL:Function_call_interrupts
4587 ± 24% +52.7% 7006 ± 5% interrupts.CPU100.NMI:Non-maskable_interrupts
4587 ± 24% +52.7% 7006 ± 5% interrupts.CPU100.PMI:Performance_monitoring_interrupts
6223 ± 42% -92.0% 495.31 ± 21% interrupts.CPU100.RES:Rescheduling_interrupts
254459 ± 13% -99.2% 1927 ± 7% interrupts.CPU101.CAL:Function_call_interrupts
6364 ± 44% -91.7% 526.31 ± 24% interrupts.CPU101.RES:Rescheduling_interrupts
262103 ± 16% -99.3% 1919 ± 5% interrupts.CPU102.CAL:Function_call_interrupts
6446 ± 43% -91.2% 565.69 ± 39% interrupts.CPU102.RES:Rescheduling_interrupts
207081 ± 37% -69.1% 64047 ± 70% interrupts.CPU102.TLB:TLB_shootdowns
247274 ± 13% -99.2% 1883 ± 8% interrupts.CPU103.CAL:Function_call_interrupts
6352 ± 42% -91.6% 536.12 ± 24% interrupts.CPU103.RES:Rescheduling_interrupts
212327 ± 45% -64.2% 76050 ± 59% interrupts.CPU103.TLB:TLB_shootdowns
272351 ± 16% -99.3% 1916 ± 31% interrupts.CPU11.CAL:Function_call_interrupts
5962 ± 40% -90.7% 555.50 ± 26% interrupts.CPU11.RES:Rescheduling_interrupts
377485 ± 35% -76.1% 90096 ± 43% interrupts.CPU11.TLB:TLB_shootdowns
274253 ± 12% -99.4% 1720 ± 11% interrupts.CPU12.CAL:Function_call_interrupts
3712 ± 37% +61.6% 6001 ± 23% interrupts.CPU12.NMI:Non-maskable_interrupts
3712 ± 37% +61.6% 6001 ± 23% interrupts.CPU12.PMI:Performance_monitoring_interrupts
5921 ± 36% -90.8% 546.50 ± 27% interrupts.CPU12.RES:Rescheduling_interrupts
403722 ± 49% -78.8% 85577 ± 44% interrupts.CPU12.TLB:TLB_shootdowns
270063 ± 18% -99.3% 1775 ± 11% interrupts.CPU13.CAL:Function_call_interrupts
5957 ± 41% -91.2% 524.12 ± 25% interrupts.CPU13.RES:Rescheduling_interrupts
363588 ± 40% -78.3% 78783 ± 44% interrupts.CPU13.TLB:TLB_shootdowns
279866 ± 15% -99.4% 1780 ± 10% interrupts.CPU14.CAL:Function_call_interrupts
6083 ± 39% -91.3% 530.19 ± 28% interrupts.CPU14.RES:Rescheduling_interrupts
354391 ± 53% -79.9% 71350 ± 60% interrupts.CPU14.TLB:TLB_shootdowns
278040 ± 14% -99.3% 1809 ± 12% interrupts.CPU15.CAL:Function_call_interrupts
6099 ± 38% -91.1% 544.56 ± 35% interrupts.CPU15.RES:Rescheduling_interrupts
272619 ± 17% -99.3% 1855 ± 25% interrupts.CPU16.CAL:Function_call_interrupts
6278 ± 39% -91.7% 518.44 ± 27% interrupts.CPU16.RES:Rescheduling_interrupts
276781 ± 15% -99.3% 1858 ± 25% interrupts.CPU17.CAL:Function_call_interrupts
5994 ± 40% -91.3% 520.69 ± 26% interrupts.CPU17.RES:Rescheduling_interrupts
354528 ± 48% -79.6% 72213 ± 59% interrupts.CPU17.TLB:TLB_shootdowns
273028 ± 18% -99.3% 1790 ± 13% interrupts.CPU18.CAL:Function_call_interrupts
6020 ± 37% -91.4% 516.12 ± 28% interrupts.CPU18.RES:Rescheduling_interrupts
270901 ± 17% -99.4% 1744 ± 11% interrupts.CPU19.CAL:Function_call_interrupts
6012 ± 37% -91.1% 537.19 ± 35% interrupts.CPU19.RES:Rescheduling_interrupts
258570 ± 12% -99.3% 1728 ± 14% interrupts.CPU2.CAL:Function_call_interrupts
5867 ± 39% -89.4% 622.50 ± 53% interrupts.CPU2.RES:Rescheduling_interrupts
274427 ± 16% -99.4% 1762 ± 14% interrupts.CPU20.CAL:Function_call_interrupts
5928 ± 38% -91.0% 530.75 ± 33% interrupts.CPU20.RES:Rescheduling_interrupts
270596 ± 16% -99.3% 1828 ± 13% interrupts.CPU21.CAL:Function_call_interrupts
6193 ± 37% -92.0% 494.88 ± 26% interrupts.CPU21.RES:Rescheduling_interrupts
266280 ± 16% -99.3% 1835 ± 13% interrupts.CPU22.CAL:Function_call_interrupts
6198 ± 38% -91.7% 512.44 ± 31% interrupts.CPU22.RES:Rescheduling_interrupts
344139 ± 45% -78.6% 73709 ± 44% interrupts.CPU22.TLB:TLB_shootdowns
262026 ± 13% -99.3% 1869 ± 24% interrupts.CPU23.CAL:Function_call_interrupts
6176 ± 37% -92.2% 483.81 ± 26% interrupts.CPU23.RES:Rescheduling_interrupts
258980 ± 15% -99.3% 1754 ± 12% interrupts.CPU24.CAL:Function_call_interrupts
6205 ± 39% -92.3% 480.81 ± 31% interrupts.CPU24.RES:Rescheduling_interrupts
333871 ± 37% -74.2% 85996 ± 45% interrupts.CPU24.TLB:TLB_shootdowns
283396 ± 15% -99.4% 1765 ± 13% interrupts.CPU25.CAL:Function_call_interrupts
6184 ± 37% -92.1% 489.75 ± 28% interrupts.CPU25.RES:Rescheduling_interrupts
247688 ± 13% -99.4% 1584 ± 6% interrupts.CPU26.CAL:Function_call_interrupts
5499 ± 37% -89.2% 594.88 ± 23% interrupts.CPU26.RES:Rescheduling_interrupts
500203 ± 53% -67.5% 162801 ± 96% interrupts.CPU26.TLB:TLB_shootdowns
259091 ± 19% -99.3% 1874 ± 9% interrupts.CPU27.CAL:Function_call_interrupts
5973 ± 43% -89.8% 606.62 ± 21% interrupts.CPU27.RES:Rescheduling_interrupts
444527 ± 70% -78.1% 97165 ± 78% interrupts.CPU27.TLB:TLB_shootdowns
269264 ± 21% -99.3% 2000 ± 20% interrupts.CPU28.CAL:Function_call_interrupts
5820 ± 45% -89.3% 622.50 ± 27% interrupts.CPU28.RES:Rescheduling_interrupts
406902 ± 48% -80.0% 81212 ± 72% interrupts.CPU28.TLB:TLB_shootdowns
268200 ± 20% -99.3% 1945 ± 8% interrupts.CPU29.CAL:Function_call_interrupts
5921 ± 46% -90.1% 588.50 ± 25% interrupts.CPU29.RES:Rescheduling_interrupts
373606 ± 59% -84.8% 56709 ± 81% interrupts.CPU29.TLB:TLB_shootdowns
269262 ± 14% -99.3% 1764 ± 11% interrupts.CPU3.CAL:Function_call_interrupts
5926 ± 37% -91.1% 527.19 ± 19% interrupts.CPU3.RES:Rescheduling_interrupts
271528 ± 13% -99.2% 2069 ± 21% interrupts.CPU30.CAL:Function_call_interrupts
5789 ± 43% -90.0% 578.81 ± 19% interrupts.CPU30.RES:Rescheduling_interrupts
406837 ± 63% -84.2% 64217 ± 77% interrupts.CPU30.TLB:TLB_shootdowns
261885 ± 19% -99.2% 1984 ± 9% interrupts.CPU31.CAL:Function_call_interrupts
5815 ± 45% -89.9% 587.12 ± 25% interrupts.CPU31.RES:Rescheduling_interrupts
449728 ± 54% -87.3% 57267 ± 78% interrupts.CPU31.TLB:TLB_shootdowns
281457 ± 18% -99.3% 2029 ± 10% interrupts.CPU32.CAL:Function_call_interrupts
6231 ± 45% -90.1% 615.69 ± 37% interrupts.CPU32.RES:Rescheduling_interrupts
414606 ± 57% -87.2% 53170 ± 65% interrupts.CPU32.TLB:TLB_shootdowns
276473 ± 18% -99.3% 1925 ± 9% interrupts.CPU33.CAL:Function_call_interrupts
5979 ± 44% -89.9% 606.62 ± 32% interrupts.CPU33.RES:Rescheduling_interrupts
340514 ± 52% -84.6% 52396 ± 92% interrupts.CPU33.TLB:TLB_shootdowns
277365 ± 16% -99.3% 1981 ± 9% interrupts.CPU34.CAL:Function_call_interrupts
6088 ± 43% -91.0% 545.31 ± 24% interrupts.CPU34.RES:Rescheduling_interrupts
339744 ± 61% -81.0% 64621 ± 63% interrupts.CPU34.TLB:TLB_shootdowns
280125 ± 17% -99.3% 1911 ± 10% interrupts.CPU35.CAL:Function_call_interrupts
6203 ± 42% -91.2% 546.25 ± 27% interrupts.CPU35.RES:Rescheduling_interrupts
369904 ± 59% -87.1% 47869 ± 47% interrupts.CPU35.TLB:TLB_shootdowns
281759 ± 16% -99.3% 2089 ± 10% interrupts.CPU36.CAL:Function_call_interrupts
6238 ± 42% -90.8% 573.38 ± 32% interrupts.CPU36.RES:Rescheduling_interrupts
376171 ± 57% -86.4% 51069 ± 63% interrupts.CPU36.TLB:TLB_shootdowns
289651 ± 16% -99.3% 1930 ± 8% interrupts.CPU37.CAL:Function_call_interrupts
6270 ± 42% -91.5% 532.56 ± 25% interrupts.CPU37.RES:Rescheduling_interrupts
341623 ± 45% -84.3% 53804 ± 60% interrupts.CPU37.TLB:TLB_shootdowns
287179 ± 16% -99.3% 2010 ± 23% interrupts.CPU38.CAL:Function_call_interrupts
5988 ± 42% -91.0% 537.69 ± 27% interrupts.CPU38.RES:Rescheduling_interrupts
379793 ± 54% -88.4% 44230 ± 60% interrupts.CPU38.TLB:TLB_shootdowns
270672 ± 15% -99.3% 1926 ± 7% interrupts.CPU39.CAL:Function_call_interrupts
5956 ± 41% -89.3% 636.00 ± 51% interrupts.CPU39.RES:Rescheduling_interrupts
272921 ± 14% -99.4% 1751 ± 13% interrupts.CPU4.CAL:Function_call_interrupts
5961 ± 39% -90.5% 564.62 ± 29% interrupts.CPU4.RES:Rescheduling_interrupts
270867 ± 16% -99.3% 1966 ± 8% interrupts.CPU40.CAL:Function_call_interrupts
5974 ± 42% -90.7% 553.56 ± 29% interrupts.CPU40.RES:Rescheduling_interrupts
363839 ± 55% -87.1% 47093 ± 59% interrupts.CPU40.TLB:TLB_shootdowns
291528 ± 17% -99.3% 2058 ± 11% interrupts.CPU41.CAL:Function_call_interrupts
6193 ± 41% -90.5% 588.38 ± 36% interrupts.CPU41.RES:Rescheduling_interrupts
350894 ± 55% -86.7% 46677 ± 66% interrupts.CPU41.TLB:TLB_shootdowns
267317 ± 14% -99.2% 2071 ± 12% interrupts.CPU42.CAL:Function_call_interrupts
5873 ± 43% -90.6% 553.19 ± 24% interrupts.CPU42.RES:Rescheduling_interrupts
317772 ± 45% -84.0% 50967 ± 52% interrupts.CPU42.TLB:TLB_shootdowns
294788 ± 16% -99.3% 2120 ± 33% interrupts.CPU43.CAL:Function_call_interrupts
6345 ± 45% -91.2% 555.75 ± 25% interrupts.CPU43.RES:Rescheduling_interrupts
324788 ± 41% -86.5% 43783 ± 55% interrupts.CPU43.TLB:TLB_shootdowns
295855 ± 15% -99.3% 2028 ± 13% interrupts.CPU44.CAL:Function_call_interrupts
6121 ± 43% -90.9% 559.75 ± 33% interrupts.CPU44.RES:Rescheduling_interrupts
325984 ± 56% -86.9% 42813 ± 62% interrupts.CPU44.TLB:TLB_shootdowns
293053 ± 16% -99.4% 1858 ± 10% interrupts.CPU45.CAL:Function_call_interrupts
6088 ± 42% -91.6% 510.94 ± 25% interrupts.CPU45.RES:Rescheduling_interrupts
305496 ± 49% -85.6% 43977 ± 73% interrupts.CPU45.TLB:TLB_shootdowns
283413 ± 16% -99.3% 1980 ± 17% interrupts.CPU46.CAL:Function_call_interrupts
6263 ± 41% -91.6% 527.81 ± 23% interrupts.CPU46.RES:Rescheduling_interrupts
339644 ± 46% -85.9% 48054 ± 62% interrupts.CPU46.TLB:TLB_shootdowns
283190 ± 16% -99.2% 2152 ± 42% interrupts.CPU47.CAL:Function_call_interrupts
6090 ± 42% -91.3% 530.81 ± 21% interrupts.CPU47.RES:Rescheduling_interrupts
305344 ± 31% -80.6% 59186 ± 62% interrupts.CPU47.TLB:TLB_shootdowns
285498 ± 17% -99.3% 2000 ± 9% interrupts.CPU48.CAL:Function_call_interrupts
4830 ± 24% +44.3% 6971 ± 4% interrupts.CPU48.NMI:Non-maskable_interrupts
4830 ± 24% +44.3% 6971 ± 4% interrupts.CPU48.PMI:Performance_monitoring_interrupts
6163 ± 47% -90.6% 579.38 ± 31% interrupts.CPU48.RES:Rescheduling_interrupts
374123 ± 40% -87.2% 47885 ± 74% interrupts.CPU48.TLB:TLB_shootdowns
283342 ± 12% -99.3% 1992 ± 8% interrupts.CPU49.CAL:Function_call_interrupts
6182 ± 42% -91.5% 528.00 ± 31% interrupts.CPU49.RES:Rescheduling_interrupts
296317 ± 56% -84.9% 44774 ± 80% interrupts.CPU49.TLB:TLB_shootdowns
269975 ± 15% -99.3% 1773 ± 16% interrupts.CPU5.CAL:Function_call_interrupts
6231 ± 40% -91.7% 520.00 ± 22% interrupts.CPU5.RES:Rescheduling_interrupts
279242 ± 14% -99.3% 1984 ± 8% interrupts.CPU50.CAL:Function_call_interrupts
6206 ± 43% -91.2% 543.81 ± 26% interrupts.CPU50.RES:Rescheduling_interrupts
287745 ± 53% -80.0% 57453 ± 95% interrupts.CPU50.TLB:TLB_shootdowns
284642 ± 10% -99.3% 1949 ± 7% interrupts.CPU51.CAL:Function_call_interrupts
6153 ± 41% -91.1% 546.06 ± 25% interrupts.CPU51.RES:Rescheduling_interrupts
350304 ± 47% -86.0% 49176 ± 61% interrupts.CPU51.TLB:TLB_shootdowns
269390 ± 11% -99.3% 1809 ± 30% interrupts.CPU52.CAL:Function_call_interrupts
5893 ± 39% -92.5% 443.12 ± 29% interrupts.CPU52.RES:Rescheduling_interrupts
278682 ± 14% -99.2% 2144 ± 71% interrupts.CPU53.CAL:Function_call_interrupts
6089 ± 40% -92.0% 484.62 ± 34% interrupts.CPU53.RES:Rescheduling_interrupts
282560 ± 13% -99.4% 1767 ± 14% interrupts.CPU54.CAL:Function_call_interrupts
6112 ± 40% -91.9% 495.56 ± 27% interrupts.CPU54.RES:Rescheduling_interrupts
278386 ± 13% -99.4% 1807 ± 11% interrupts.CPU55.CAL:Function_call_interrupts
5862 ± 37% -92.2% 458.06 ± 19% interrupts.CPU55.RES:Rescheduling_interrupts
278461 ± 8% -99.3% 1912 ± 33% interrupts.CPU56.CAL:Function_call_interrupts
6093 ± 37% -92.8% 439.75 ± 25% interrupts.CPU56.RES:Rescheduling_interrupts
205528 ± 20% -54.0% 94511 ± 48% interrupts.CPU56.TLB:TLB_shootdowns
296955 ± 15% -99.4% 1805 ± 19% interrupts.CPU57.CAL:Function_call_interrupts
6130 ± 42% -92.6% 453.56 ± 28% interrupts.CPU57.RES:Rescheduling_interrupts
282967 ± 12% -99.4% 1772 ± 15% interrupts.CPU58.CAL:Function_call_interrupts
6156 ± 38% -92.4% 469.56 ± 26% interrupts.CPU58.RES:Rescheduling_interrupts
278009 ± 11% -99.4% 1762 ± 14% interrupts.CPU59.CAL:Function_call_interrupts
6070 ± 36% -92.6% 448.19 ± 27% interrupts.CPU59.RES:Rescheduling_interrupts
219405 ± 50% -64.5% 77887 ± 50% interrupts.CPU59.TLB:TLB_shootdowns
270342 ± 14% -99.4% 1745 ± 16% interrupts.CPU6.CAL:Function_call_interrupts
5904 ± 38% -91.1% 528.25 ± 26% interrupts.CPU6.RES:Rescheduling_interrupts
283384 ± 12% -99.4% 1825 ± 13% interrupts.CPU60.CAL:Function_call_interrupts
6235 ± 37% -92.8% 450.44 ± 26% interrupts.CPU60.RES:Rescheduling_interrupts
245711 ± 42% -68.2% 78157 ± 57% interrupts.CPU60.TLB:TLB_shootdowns
275351 ± 11% -99.3% 1831 ± 9% interrupts.CPU61.CAL:Function_call_interrupts
6107 ± 39% -92.2% 479.00 ± 34% interrupts.CPU61.RES:Rescheduling_interrupts
280718 ± 13% -99.4% 1769 ± 11% interrupts.CPU62.CAL:Function_call_interrupts
6319 ± 41% -92.9% 450.06 ± 28% interrupts.CPU62.RES:Rescheduling_interrupts
282203 ± 12% -99.4% 1746 ± 13% interrupts.CPU63.CAL:Function_call_interrupts
6069 ± 36% -92.8% 439.25 ± 32% interrupts.CPU63.RES:Rescheduling_interrupts
298698 ± 13% -99.4% 1710 ± 11% interrupts.CPU64.CAL:Function_call_interrupts
6409 ± 37% -92.8% 459.75 ± 34% interrupts.CPU64.RES:Rescheduling_interrupts
278523 ± 13% -99.4% 1763 ± 15% interrupts.CPU65.CAL:Function_call_interrupts
6290 ± 38% -92.8% 451.38 ± 27% interrupts.CPU65.RES:Rescheduling_interrupts
266089 ± 31% -69.7% 80607 ± 52% interrupts.CPU65.TLB:TLB_shootdowns
287640 ± 11% -99.4% 1737 ± 12% interrupts.CPU66.CAL:Function_call_interrupts
6394 ± 39% -93.0% 446.25 ± 31% interrupts.CPU66.RES:Rescheduling_interrupts
292804 ± 20% -99.4% 1814 ± 18% interrupts.CPU67.CAL:Function_call_interrupts
6318 ± 41% -93.0% 441.12 ± 32% interrupts.CPU67.RES:Rescheduling_interrupts
277669 ± 16% -99.3% 1807 ± 14% interrupts.CPU68.CAL:Function_call_interrupts
6346 ± 38% -92.4% 485.12 ± 43% interrupts.CPU68.RES:Rescheduling_interrupts
221011 ± 37% -64.9% 77605 ± 51% interrupts.CPU68.TLB:TLB_shootdowns
283218 ± 16% -99.4% 1732 ± 13% interrupts.CPU69.CAL:Function_call_interrupts
6024 ± 38% -92.5% 449.38 ± 30% interrupts.CPU69.RES:Rescheduling_interrupts
273986 ± 15% -99.4% 1749 ± 11% interrupts.CPU7.CAL:Function_call_interrupts
5865 ± 41% -91.0% 527.56 ± 22% interrupts.CPU7.RES:Rescheduling_interrupts
383011 ± 40% -72.6% 104799 ± 46% interrupts.CPU7.TLB:TLB_shootdowns
284794 ± 16% -99.4% 1715 ± 11% interrupts.CPU70.CAL:Function_call_interrupts
6067 ± 41% -93.1% 420.88 ± 27% interrupts.CPU70.RES:Rescheduling_interrupts
301229 ± 14% -99.4% 1760 ± 13% interrupts.CPU71.CAL:Function_call_interrupts
6022 ± 37% -92.9% 427.94 ± 30% interrupts.CPU71.RES:Rescheduling_interrupts
206452 ± 46% -62.3% 77816 ± 39% interrupts.CPU71.TLB:TLB_shootdowns
285872 ± 17% -99.4% 1737 ± 11% interrupts.CPU72.CAL:Function_call_interrupts
6208 ± 39% -93.1% 431.00 ± 26% interrupts.CPU72.RES:Rescheduling_interrupts
225654 ± 44% -65.2% 78634 ± 49% interrupts.CPU72.TLB:TLB_shootdowns
267068 ± 15% -99.3% 1958 ± 40% interrupts.CPU73.CAL:Function_call_interrupts
6509 ± 41% -92.0% 518.50 ± 40% interrupts.CPU73.RES:Rescheduling_interrupts
255758 ± 12% -99.3% 1751 ± 9% interrupts.CPU74.CAL:Function_call_interrupts
6423 ± 40% -92.1% 509.50 ± 41% interrupts.CPU74.RES:Rescheduling_interrupts
214026 ± 51% -59.0% 87732 ± 45% interrupts.CPU74.TLB:TLB_shootdowns
261002 ± 13% -99.3% 1766 ± 9% interrupts.CPU75.CAL:Function_call_interrupts
6359 ± 39% -92.8% 458.00 ± 31% interrupts.CPU75.RES:Rescheduling_interrupts
255352 ± 10% -99.3% 1721 ± 16% interrupts.CPU76.CAL:Function_call_interrupts
6310 ± 38% -93.1% 437.00 ± 31% interrupts.CPU76.RES:Rescheduling_interrupts
262224 ± 12% -99.3% 1934 ± 36% interrupts.CPU77.CAL:Function_call_interrupts
6459 ± 41% -92.0% 515.56 ± 39% interrupts.CPU77.RES:Rescheduling_interrupts
282389 ± 12% -99.4% 1587 ± 14% interrupts.CPU78.CAL:Function_call_interrupts
5996 ± 43% -92.3% 459.44 ± 22% interrupts.CPU78.RES:Rescheduling_interrupts
199766 ± 58% -78.4% 43049 ± 60% interrupts.CPU78.TLB:TLB_shootdowns
291051 ± 17% -99.3% 1996 ± 9% interrupts.CPU79.CAL:Function_call_interrupts
6044 ± 42% -91.3% 525.19 ± 20% interrupts.CPU79.RES:Rescheduling_interrupts
180645 ± 51% -80.1% 35874 ± 49% interrupts.CPU79.TLB:TLB_shootdowns
258588 ± 14% -99.2% 1953 ± 32% interrupts.CPU8.CAL:Function_call_interrupts
5952 ± 39% -91.0% 534.56 ± 24% interrupts.CPU8.RES:Rescheduling_interrupts
286661 ± 15% -99.3% 1929 ± 7% interrupts.CPU80.CAL:Function_call_interrupts
6102 ± 45% -91.2% 538.94 ± 25% interrupts.CPU80.RES:Rescheduling_interrupts
229221 ± 54% -81.0% 43445 ± 72% interrupts.CPU80.TLB:TLB_shootdowns
294718 ± 14% -99.3% 1983 ± 7% interrupts.CPU81.CAL:Function_call_interrupts
6417 ± 42% -92.0% 510.94 ± 18% interrupts.CPU81.RES:Rescheduling_interrupts
228030 ± 57% -83.8% 36977 ± 54% interrupts.CPU81.TLB:TLB_shootdowns
282427 ± 16% -99.3% 2103 ± 20% interrupts.CPU82.CAL:Function_call_interrupts
6011 ± 45% -90.1% 595.38 ± 41% interrupts.CPU82.RES:Rescheduling_interrupts
247992 ± 34% -83.5% 40801 ± 67% interrupts.CPU82.TLB:TLB_shootdowns
285126 ± 16% -99.3% 1940 ± 9% interrupts.CPU83.CAL:Function_call_interrupts
4710 ± 23% +49.0% 7016 ± 5% interrupts.CPU83.NMI:Non-maskable_interrupts
4710 ± 23% +49.0% 7016 ± 5% interrupts.CPU83.PMI:Performance_monitoring_interrupts
6066 ± 44% -91.8% 494.50 ± 23% interrupts.CPU83.RES:Rescheduling_interrupts
212594 ± 45% -83.9% 34328 ± 87% interrupts.CPU83.TLB:TLB_shootdowns
294133 ± 13% -99.3% 1973 ± 12% interrupts.CPU84.CAL:Function_call_interrupts
6151 ± 43% -91.7% 511.25 ± 22% interrupts.CPU84.RES:Rescheduling_interrupts
170484 ± 75% -81.7% 31255 ± 72% interrupts.CPU84.TLB:TLB_shootdowns
296658 ± 20% -99.3% 1949 ± 7% interrupts.CPU85.CAL:Function_call_interrupts
6073 ± 45% -91.5% 513.88 ± 20% interrupts.CPU85.RES:Rescheduling_interrupts
209309 ± 51% -79.7% 42480 ± 60% interrupts.CPU85.TLB:TLB_shootdowns
283092 ± 18% -99.3% 1988 ± 10% interrupts.CPU86.CAL:Function_call_interrupts
6049 ± 46% -90.7% 562.12 ± 34% interrupts.CPU86.RES:Rescheduling_interrupts
233435 ± 47% -83.7% 38026 ± 66% interrupts.CPU86.TLB:TLB_shootdowns
274265 ± 16% -99.3% 1957 ± 8% interrupts.CPU87.CAL:Function_call_interrupts
6042 ± 43% -91.7% 500.75 ± 24% interrupts.CPU87.RES:Rescheduling_interrupts
194705 ± 53% -76.6% 45647 ± 69% interrupts.CPU87.TLB:TLB_shootdowns
284202 ± 13% -99.3% 1992 ± 10% interrupts.CPU88.CAL:Function_call_interrupts
4866 ± 25% +43.1% 6962 ± 5% interrupts.CPU88.NMI:Non-maskable_interrupts
4866 ± 25% +43.1% 6962 ± 5% interrupts.CPU88.PMI:Performance_monitoring_interrupts
6041 ± 41% -91.6% 505.06 ± 18% interrupts.CPU88.RES:Rescheduling_interrupts
228907 ± 34% -80.1% 45542 ± 49% interrupts.CPU88.TLB:TLB_shootdowns
292926 ± 14% -99.3% 1948 ± 8% interrupts.CPU89.CAL:Function_call_interrupts
5920 ± 42% -91.0% 531.94 ± 27% interrupts.CPU89.RES:Rescheduling_interrupts
177476 ± 37% -77.7% 39596 ± 77% interrupts.CPU89.TLB:TLB_shootdowns
260528 ± 14% -99.3% 1798 ± 13% interrupts.CPU9.CAL:Function_call_interrupts
6035 ± 38% -91.2% 531.56 ± 23% interrupts.CPU9.RES:Rescheduling_interrupts
299905 ± 15% -99.3% 2151 ± 48% interrupts.CPU90.CAL:Function_call_interrupts
6125 ± 46% -92.0% 487.00 ± 25% interrupts.CPU90.RES:Rescheduling_interrupts
216493 ± 45% -78.3% 46905 ± 59% interrupts.CPU90.TLB:TLB_shootdowns
280441 ± 17% -99.3% 1944 ± 7% interrupts.CPU91.CAL:Function_call_interrupts
6320 ± 46% -91.8% 518.31 ± 28% interrupts.CPU91.RES:Rescheduling_interrupts
210306 ± 43% -80.2% 41673 ± 78% interrupts.CPU91.TLB:TLB_shootdowns
284700 ± 16% -99.3% 2034 ± 8% interrupts.CPU92.CAL:Function_call_interrupts
6027 ± 42% -91.0% 542.06 ± 29% interrupts.CPU92.RES:Rescheduling_interrupts
157865 ± 45% -67.1% 51966 ± 70% interrupts.CPU92.TLB:TLB_shootdowns
289762 ± 18% -99.3% 1999 ± 9% interrupts.CPU93.CAL:Function_call_interrupts
6257 ± 43% -91.8% 512.62 ± 22% interrupts.CPU93.RES:Rescheduling_interrupts
193767 ± 52% -70.6% 56953 ± 58% interrupts.CPU93.TLB:TLB_shootdowns
291026 ± 14% -99.3% 2135 ± 17% interrupts.CPU94.CAL:Function_call_interrupts
6160 ± 46% -90.9% 559.62 ± 27% interrupts.CPU94.RES:Rescheduling_interrupts
192657 ± 34% -73.3% 51503 ± 69% interrupts.CPU94.TLB:TLB_shootdowns
295831 ± 16% -99.3% 1927 ± 7% interrupts.CPU95.CAL:Function_call_interrupts
6151 ± 43% -91.8% 504.56 ± 23% interrupts.CPU95.RES:Rescheduling_interrupts
191279 ± 37% -77.0% 43973 ± 55% interrupts.CPU95.TLB:TLB_shootdowns
291597 ± 20% -99.3% 1949 ± 9% interrupts.CPU96.CAL:Function_call_interrupts
6064 ± 44% -91.9% 490.88 ± 22% interrupts.CPU96.RES:Rescheduling_interrupts
189614 ± 48% -73.2% 50736 ± 78% interrupts.CPU96.TLB:TLB_shootdowns
298823 ± 14% -99.4% 1890 ± 11% interrupts.CPU97.CAL:Function_call_interrupts
6053 ± 41% -91.2% 532.81 ± 28% interrupts.CPU97.RES:Rescheduling_interrupts
261476 ± 38% -83.3% 43773 ± 61% interrupts.CPU97.TLB:TLB_shootdowns
290625 ± 17% -99.3% 1929 ± 7% interrupts.CPU98.CAL:Function_call_interrupts
6051 ± 44% -91.6% 510.56 ± 22% interrupts.CPU98.RES:Rescheduling_interrupts
220709 ± 48% -78.6% 47320 ± 71% interrupts.CPU98.TLB:TLB_shootdowns
282786 ± 14% -99.3% 1968 ± 9% interrupts.CPU99.CAL:Function_call_interrupts
6157 ± 42% -91.9% 497.19 ± 18% interrupts.CPU99.RES:Rescheduling_interrupts
509956 ± 3% +29.7% 661249 ± 4% interrupts.NMI:Non-maskable_interrupts
509956 ± 3% +29.7% 661249 ± 4% interrupts.PMI:Performance_monitoring_interrupts
635122 ± 3% -91.4% 54578 ± 3% interrupts.RES:Rescheduling_interrupts
30211373 ± 39% -74.6% 7677486 ± 39% interrupts.TLB:TLB_shootdowns
vm-scalability.time.minor_page_faults
1.9e+09 +-----------------------------------------------------------------+
| |
1.8e+09 |OOOOOOOOO OOOOOO OOOO O OOOOO O OOOOOO O O OO O OOOOOO |
| O O O O O O O O OO OOO O |
1.7e+09 |-+ |
| |
1.6e+09 |-+ |
| |
1.5e+09 |-+ |
| + + + + + |
1.4e+09 |-+ +++ + ++++++++ +++ + ++++++++|
| : |
1.3e+09 |-+ : |
|+ + ++ + + ++++++++++++ +++++++++ |
1.2e+09 +-----------------------------------------------------------------+
vm-scalability.time.voluntary_context_switches
1.8e+08 +-----------------------------------------------------------------+
| + + |
1.6e+08 |-+ :++ +++++++++++ ++ ++++ +++++|
1.4e+08 |-+ : + + + + |
| : |
1.2e+08 |-+ : |
1e+08 |-+ + : |
|++++++++++++++++++++++++ +++++++++++ |
8e+07 |-+ |
6e+07 |-+ |
| |
4e+07 |-+ |
2e+07 |-+ |
| |
0 +-----------------------------------------------------------------+
vm-scalability.throughput
4e+07 +-----------------------------------------------------------------+
|O O OOOOO O O OO O |
3.8e+07 |-O O O O O O OOO OOOOOOO OOOOOOOO OOOO OOOOO OOOOOO |
| O O O O O O |
3.6e+07 |-+ |
| |
3.4e+07 |-+ |
| |
3.2e+07 |-+ |
| |
3e+07 |-+ + + + |
| +++ ++++++++++ +++ ++++++++++|
2.8e+07 |-+ ++ + + + |
|++++++++++++++ ++++++ ++ +++++++++ |
2.6e+07 +-----------------------------------------------------------------+
vm-scalability.median
380000 +------------------------------------------------------------------+
| O OO OOOO O OO OOOO OOOOOO OOOOOOOOO OO O O OOOOOO |
360000 |-+ O O O OO O OOOO |
340000 |-+ |
| |
320000 |-+ |
| |
300000 |-+ |
| |
280000 |-+ ++++ + + + ++ + |
260000 |-+ +++ + + +++++ +++ + + +++++|
| : |
240000 |-++ + + ++ + + ++ + + +++ + |
|++ +++++++ + + + + ++ + + ++++ + |
220000 +------------------------------------------------------------------+
vm-scalability.workload
8.5e+09 +-----------------------------------------------------------------+
| |
8e+09 |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOO |
| O |
7.5e+09 |-+ |
| |
7e+09 |-+ |
| |
6.5e+09 |-+ + + + + + |
| +++ + ++++++++ +++ + ++++++++|
6e+09 |-+ : |
| : |
5.5e+09 |++++++++++++++++++++++++++++++++++++ |
| |
5e+09 +-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
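
The profile shift in the comparison above is concentrated in the copy-on-write write-fault path (do_wp_page() -> wp_page_copy()), which is exactly what vm-scalability's anon-cow cases exercise. As a rough, hypothetical illustration only (this program is not part of the LKP job files), a minimal C sketch that provokes the same kind of COW write faults after fork() might look like:

/* cow_fault_sketch.c - hypothetical stand-alone example, not an LKP script.
 * Populate private anonymous memory, fork(), then write from the child:
 * each child write hits a write-protected PTE shared with the parent and
 * is resolved in the kernel by do_wp_page()/wp_page_copy().
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	size_t len = 64UL << 20;		/* 64 MiB of anonymous memory */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0x5a, len);			/* fault the pages in before fork() */

	pid_t pid = fork();
	if (pid == 0) {
		/* Child: every 4 KiB store below takes a write-protect fault;
		 * the kernel copies the page before the store completes. */
		for (size_t off = 0; off < len; off += 4096)
			buf[off] = 0xa5;
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	munmap(buf, len);
	return 0;
}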
***************************************************************************************************
lkp-cfl-e1: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/512G/lkp-cfl-e1/anon-cow-rand/vm-scalability/0xd6
commit:
v5.8
09854ba94c ("mm: do_wp_page() simplification")
v5.8 09854ba94c6aad7886996bfbee2
---------------- ---------------------------
fail:runs  %reproduction  fail:runs
34:7 -488% :16 perf-profile.calltrace.cycles-pp.error_entry
35:7 -386% 8:16 perf-profile.children.cycles-pp.error_entry
%stddev  %change  %stddev
53578 +7.6% 57674 vm-scalability.median
857728 +7.7% 923526 vm-scalability.throughput
57967803 +6.7% 61831024 vm-scalability.time.minor_page_faults
716.06 -40.6% 425.06 vm-scalability.time.system_time
4173 +6.3% 4435 vm-scalability.time.user_time
1036150 -99.5% 4761 vm-scalability.time.voluntary_context_switches
2.606e+08 +6.7% 2.78e+08 vm-scalability.workload
14.46 -5.8 8.69 mpstat.cpu.all.sys%
22.66 +1.6% 23.03 boot-time.boot
305.13 +2.5% 312.76 boot-time.idle
1250 -11.9% 1101 meminfo.Inactive(file)
18156 -11.1% 16149 meminfo.VmallocUsed
82.00 +7.3% 88.00 vmstat.cpu.us
7411 -86.6% 992.31 ± 3% vmstat.system.cs
34030 -2.8% 33088 vmstat.system.in
159333 ± 2% -97.1% 4626 ±109% cpuidle.C1.usage
247724 ± 5% -53.1% 116196 ± 12% cpuidle.C1E.time
14950 ± 8% -87.5% 1865 ± 7% cpuidle.C1E.usage
2940981 -99.5% 15906 ± 20% cpuidle.POLL.time
662285 -99.4% 3870 ± 9% cpuidle.POLL.usage
104.14 ± 9% -100.0% 0.00 slabinfo.btrfs_inode.active_objs
104.14 ± 9% -100.0% 0.00 slabinfo.btrfs_inode.num_objs
150.43 ± 16% -74.1% 39.00 slabinfo.buffer_head.active_objs
150.43 ± 16% -74.1% 39.00 slabinfo.buffer_head.num_objs
88.86 ± 46% +200.2% 266.75 ± 9% slabinfo.xfs_buf.active_objs
88.86 ± 46% +200.2% 266.75 ± 9% slabinfo.xfs_buf.num_objs
20730 ± 4% -76.3% 4918 ± 14% softirqs.CPU0.SCHED
19150 -77.7% 4269 ± 12% softirqs.CPU1.SCHED
18436 -83.3% 3082 ± 8% softirqs.CPU10.SCHED
18360 -83.9% 2957 ± 4% softirqs.CPU11.SCHED
18413 -84.0% 2948 ± 8% softirqs.CPU12.SCHED
18439 ± 2% -84.2% 2911 ± 8% softirqs.CPU13.SCHED
18210 -84.7% 2786 ± 9% softirqs.CPU14.SCHED
18399 ± 2% -83.5% 3029 ± 9% softirqs.CPU15.SCHED
18361 -82.0% 3305 ± 6% softirqs.CPU2.SCHED
18560 ± 3% -83.0% 3153 ± 12% softirqs.CPU3.SCHED
18497 ± 2% -83.0% 3144 ± 16% softirqs.CPU4.SCHED
18260 -84.5% 2839 ± 10% softirqs.CPU5.SCHED
18194 -83.4% 3019 ± 9% softirqs.CPU6.SCHED
18213 -84.0% 2917 ± 9% softirqs.CPU7.SCHED
18503 ± 2% -82.4% 3247 ± 11% softirqs.CPU8.SCHED
18262 -83.2% 3061 ± 8% softirqs.CPU9.SCHED
296998 -82.6% 51595 softirqs.SCHED
284640 -98.3% 4863 ± 22% interrupts.CAL:Function_call_interrupts
17552 ± 4% -97.8% 388.25 ± 47% interrupts.CPU0.CAL:Function_call_interrupts
17492 ± 4% -96.8% 555.81 ± 64% interrupts.CPU1.CAL:Function_call_interrupts
17333 ± 4% -98.9% 188.88 ± 47% interrupts.CPU10.CAL:Function_call_interrupts
17308 ± 3% -98.9% 195.94 ± 47% interrupts.CPU11.CAL:Function_call_interrupts
18119 ± 5% -99.0% 189.06 ± 42% interrupts.CPU12.CAL:Function_call_interrupts
18200 ± 2% -98.9% 201.12 ± 57% interrupts.CPU13.CAL:Function_call_interrupts
18330 ± 2% -98.9% 200.19 ± 44% interrupts.CPU14.CAL:Function_call_interrupts
18143 ± 3% -99.1% 169.06 ± 51% interrupts.CPU15.CAL:Function_call_interrupts
17191 ± 4% -97.5% 426.38 ± 43% interrupts.CPU2.CAL:Function_call_interrupts
17100 ± 3% -97.8% 370.50 ± 31% interrupts.CPU3.CAL:Function_call_interrupts
17880 ± 4% -98.0% 358.12 ± 31% interrupts.CPU4.CAL:Function_call_interrupts
18097 ± 6% -97.9% 387.44 ± 41% interrupts.CPU5.CAL:Function_call_interrupts
18283 ± 3% -97.6% 437.06 ± 37% interrupts.CPU6.CAL:Function_call_interrupts
18517 ± 2% -97.8% 407.75 ± 33% interrupts.CPU7.CAL:Function_call_interrupts
17901 ± 2% -99.1% 169.56 ± 35% interrupts.CPU8.CAL:Function_call_interrupts
17189 ± 4% -98.7% 218.56 ± 45% interrupts.CPU9.CAL:Function_call_interrupts
44209 ± 5% -16.3% 36983 ± 4% interrupts.RES:Rescheduling_interrupts
173.86 ± 48% -73.4% 46.19 ± 16% interrupts.TLB:TLB_shootdowns
3810358 +3.1% 3929986 proc-vmstat.nr_active_anon
3805080 +3.2% 3924954 proc-vmstat.nr_anon_pages
401615 -3.1% 388972 proc-vmstat.nr_dirty_background_threshold
804213 -3.1% 778897 proc-vmstat.nr_dirty_threshold
246012 +3.0% 253497 proc-vmstat.nr_file_pages
4088131 -3.1% 3963245 proc-vmstat.nr_free_pages
312.29 -12.0% 274.88 proc-vmstat.nr_inactive_file
8771 +2.7% 9007 proc-vmstat.nr_shmem
19793 -6.6% 18483 proc-vmstat.nr_slab_unreclaimable
236921 +3.1% 244216 proc-vmstat.nr_unevictable
3810358 +3.1% 3929986 proc-vmstat.nr_zone_active_anon
312.29 -12.0% 274.88 proc-vmstat.nr_zone_inactive_file
236921 +3.1% 244216 proc-vmstat.nr_zone_unevictable
58418825 +6.6% 62295794 proc-vmstat.numa_hit
58418825 +6.6% 62295794 proc-vmstat.numa_local
4252 ± 3% +10.0% 4679 ± 4% proc-vmstat.pgactivate
62058733 +6.6% 66174824 proc-vmstat.pgalloc_normal
58391274 +6.6% 62255253 proc-vmstat.pgfault
60704133 +7.0% 64973361 ± 2% proc-vmstat.pgfree
7054 +6.7% 7526 proc-vmstat.thp_fault_alloc
112868 +6.7% 120421 proc-vmstat.thp_split_pmd
26275 ± 3% +13.5% 29811 ± 4% sched_debug.cfs_rq:/.min_vruntime.stddev
26260 ± 3% +13.5% 29798 ± 4% sched_debug.cfs_rq:/.spread0.stddev
820.18 -70.3% 243.76 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.avg
1019 ± 6% -38.4% 628.00 ± 2% sched_debug.cfs_rq:/.util_est_enqueued.max
767.76 -86.2% 105.58 ± 16% sched_debug.cfs_rq:/.util_est_enqueued.min
70.19 ± 20% +90.0% 133.34 ± 5% sched_debug.cfs_rq:/.util_est_enqueued.stddev
62894 ± 26% +250.8% 220614 ± 5% sched_debug.cpu.avg_idle.avg
7223 ± 45% +531.2% 45599 ± 18% sched_debug.cpu.avg_idle.min
76934 -82.0% 13864 ± 2% sched_debug.cpu.nr_switches.avg
87426 -68.6% 27489 ± 12% sched_debug.cpu.nr_switches.max
69676 -91.2% 6097 ± 12% sched_debug.cpu.nr_switches.min
73009 -86.3% 9994 ± 3% sched_debug.cpu.sched_count.avg
80302 -77.6% 18022 ± 8% sched_debug.cpu.sched_count.max
67053 ± 2% -93.7% 4253 ± 14% sched_debug.cpu.sched_count.min
32175 -98.6% 436.32 ± 6% sched_debug.cpu.sched_goidle.avg
33641 -95.5% 1509 ± 7% sched_debug.cpu.sched_goidle.max
30062 -99.3% 199.53 ± 9% sched_debug.cpu.sched_goidle.min
801.65 ± 13% -57.4% 341.57 ± 11% sched_debug.cpu.sched_goidle.stddev
36206 -87.8% 4433 ± 4% sched_debug.cpu.ttwu_count.avg
39349 -80.0% 7852 ± 9% sched_debug.cpu.ttwu_count.max
33384 -94.4% 1857 ± 22% sched_debug.cpu.ttwu_count.min
71.96 -7.2% 66.74 perf-stat.i.MPKI
2.388e+09 -1.6% 2.349e+09 perf-stat.i.branch-instructions
2607749 ± 3% -9.1% 2371691 ± 2% perf-stat.i.branch-misses
76.52 +6.2 82.72 perf-stat.i.cache-miss-rate%
7.81e+08 -13.2% 6.775e+08 perf-stat.i.cache-references
7401 -86.8% 975.06 ± 3% perf-stat.i.context-switches
6.42 +4.3% 6.69 perf-stat.i.cpi
2.748e+09 +3.1% 2.833e+09 perf-stat.i.dTLB-loads
8.35 +0.5 8.82 perf-stat.i.dTLB-store-miss-rate%
94899654 +7.5% 1.02e+08 perf-stat.i.dTLB-store-misses
1.008e+09 +6.3% 1.072e+09 perf-stat.i.dTLB-stores
76.89 +4.0 80.86 perf-stat.i.iTLB-load-miss-rate%
189171 -32.8% 127059 perf-stat.i.iTLB-load-misses
1018881 +5.7% 1077007 perf-stat.i.iTLB-loads
1.066e+10 -4.6% 1.017e+10 perf-stat.i.instructions
0.16 -5.4% 0.15 perf-stat.i.ipc
0.51 +8.6% 0.56 perf-stat.i.metric.K/sec
184985 +5.6% 195420 perf-stat.i.minor-faults
81588669 -10.4% 73110690 perf-stat.i.node-loads
2.024e+08 +7.4% 2.174e+08 perf-stat.i.node-stores
184985 +5.6% 195420 perf-stat.i.page-faults
73.30 -9.1% 66.62 perf-stat.overall.MPKI
0.11 ± 3% -0.0 0.10 ± 2% perf-stat.overall.branch-miss-rate%
71.94 +11.1 82.99 perf-stat.overall.cache-miss-rate%
6.33 +5.5% 6.68 perf-stat.overall.cpi
15.63 -5.2 10.42 perf-stat.overall.iTLB-load-miss-rate%
56015 +41.2% 79073 perf-stat.overall.instructions-per-iTLB-miss
0.16 -5.2% 0.15 perf-stat.overall.ipc
12800 -11.4% 11346 perf-stat.overall.path-length
2.379e+09 -1.7% 2.339e+09 perf-stat.ps.branch-instructions
2617135 ± 3% -8.8% 2385532 ± 2% perf-stat.ps.branch-misses
7.79e+08 -13.4% 6.749e+08 perf-stat.ps.cache-references
7434 -86.9% 974.26 ± 3% perf-stat.ps.context-switches
2.738e+09 +3.1% 2.822e+09 perf-stat.ps.dTLB-loads
94387865 +7.4% 1.014e+08 perf-stat.ps.dTLB-store-misses
1.005e+09 +6.4% 1.069e+09 perf-stat.ps.dTLB-stores
189717 -32.5% 128138 perf-stat.ps.iTLB-load-misses
1023815 +7.6% 1101383 perf-stat.ps.iTLB-loads
1.063e+10 -4.7% 1.013e+10 perf-stat.ps.instructions
185866 +7.5% 199819 perf-stat.ps.minor-faults
81382471 -10.4% 72935146 perf-stat.ps.node-loads
2.014e+08 +7.4% 2.164e+08 perf-stat.ps.node-stores
185866 +7.5% 199819 perf-stat.ps.page-faults
3.336e+12 -5.5% 3.154e+12 perf-stat.total.instructions
55.87 ± 4% -55.9 0.00 perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt
53.40 ± 4% -53.4 0.00 perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
41.92 ± 5% -41.9 0.00 perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
39.23 ± 5% -39.2 0.00 perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
32.69 ± 5% -32.7 0.00 perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
24.23 ± 9% -24.2 0.00 perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
21.98 ± 10% -22.0 0.00 perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
21.91 ± 10% -21.9 0.00 perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
13.44 ± 13% -13.4 0.00 perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
11.32 ± 5% -11.3 0.00 perf-profile.calltrace.cycles-pp.printk.irq_work_single.irq_work_run_list.irq_work_run.__sysvec_irq_work
11.32 ± 5% -11.3 0.00 perf-profile.calltrace.cycles-pp.vprintk_emit.printk.irq_work_single.irq_work_run_list.irq_work_run
11.17 ± 6% -11.2 0.00 perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.printk.irq_work_single.irq_work_run_list
10.23 ± 5% -10.2 0.00 perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.printk.irq_work_single
10.23 ± 17% -10.2 0.00 perf-profile.calltrace.cycles-pp.irq_work_run_list.irq_work_run.__sysvec_irq_work.sysvec_irq_work.asm_sysvec_irq_work
10.23 ± 17% -10.2 0.00 perf-profile.calltrace.cycles-pp.asm_sysvec_irq_work
10.23 ± 17% -10.2 0.00 perf-profile.calltrace.cycles-pp.sysvec_irq_work.asm_sysvec_irq_work
10.23 ± 17% -10.2 0.00 perf-profile.calltrace.cycles-pp.__sysvec_irq_work.sysvec_irq_work.asm_sysvec_irq_work
10.23 ± 17% -10.2 0.00 perf-profile.calltrace.cycles-pp.irq_work_run.__sysvec_irq_work.sysvec_irq_work.asm_sysvec_irq_work
10.23 ± 17% -10.2 0.00 perf-profile.calltrace.cycles-pp.irq_work_single.irq_work_run_list.irq_work_run.__sysvec_irq_work.sysvec_irq_work
9.99 ± 5% -10.0 0.00 perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.printk
9.40 ± 16% -9.4 0.00 perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
9.26 ± 12% -9.3 0.00 perf-profile.calltrace.cycles-pp.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
9.15 ± 8% -9.2 0.00 perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
9.15 ± 8% -9.2 0.00 perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
8.18 ± 9% -8.2 0.00 perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
7.25 ± 17% -7.3 0.00 perf-profile.calltrace.cycles-pp.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
7.08 ± 17% -7.1 0.00 perf-profile.calltrace.cycles-pp.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
6.73 ± 16% -6.7 0.00 perf-profile.calltrace.cycles-pp.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt
0.00 +0.8 0.77 ± 4% perf-profile.calltrace.cycles-pp.native_flush_tlb_one_user.flush_tlb_func_common.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy
0.00 +0.8 0.77 ± 4% perf-profile.calltrace.cycles-pp.rmqueue_bulk.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
0.00 +0.8 0.81 ± 4% perf-profile.calltrace.cycles-pp.flush_tlb_func_common.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy.__handle_mm_fault
0.00 +0.8 0.83 ± 5% perf-profile.calltrace.cycles-pp.try_charge.mem_cgroup_charge.wp_page_copy.__handle_mm_fault.handle_mm_fault
0.23 ±161% +0.9 1.14 ± 3% perf-profile.calltrace.cycles-pp.do_wp_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
0.00 +0.9 0.92 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
0.00 +1.0 0.98 ± 4% perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy.__handle_mm_fault.handle_mm_fault
0.00 +1.0 1.04 ± 4% perf-profile.calltrace.cycles-pp.ptep_clear_flush.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +1.1 1.06 ± 4% perf-profile.calltrace.cycles-pp.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy
0.00 +1.1 1.11 ± 4% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy.__handle_mm_fault
0.00 +1.2 1.21 ± 4% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy.__handle_mm_fault.handle_mm_fault
0.00 +1.2 1.23 ± 5% perf-profile.calltrace.cycles-pp.mem_cgroup_charge.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +1.3 1.28 ± 4% perf-profile.calltrace.cycles-pp.alloc_pages_vma.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +1.4 1.37 ± 3% perf-profile.calltrace.cycles-pp.nrand48_r
0.00 +1.8 1.76 ± 3% perf-profile.calltrace.cycles-pp.do_rw_once
0.00 +2.5 2.49 ± 13% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages
0.00 +2.5 2.49 ± 13% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu
0.00 +3.1 3.15 ± 9% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu.zap_pte_range
0.00 +3.3 3.26 ± 9% perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.tlb_flush_mmu.zap_pte_range.unmap_page_range
0.00 +4.3 4.28 ± 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.zap_pte_range
0.00 +4.3 4.29 ± 10% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.zap_pte_range.unmap_page_range
3.18 ± 19% +6.3 9.49 ± 7% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
3.17 ± 18% +6.3 9.49 ± 7% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.49 ± 20% +8.0 9.46 ± 7% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.49 ± 20% +8.0 9.46 ± 7% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.47 ± 21% +8.0 9.46 ± 7% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.11 ± 18% +8.3 9.45 ± 7% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
1.11 ± 18% +8.3 9.45 ± 7% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
0.00 +8.4 8.38 ± 8% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.zap_pte_range.unmap_page_range.unmap_vmas
0.00 +8.5 8.49 ± 8% perf-profile.calltrace.cycles-pp.tlb_flush_mmu.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap
0.62 ± 57% +8.5 9.13 ± 7% perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap.mmput
0.58 ± 44% +8.6 9.14 ± 7% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
0.57 ± 45% +8.6 9.13 ± 7% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
0.00 +11.4 11.42 ± 4% perf-profile.calltrace.cycles-pp.copy_page.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +16.2 16.22 ± 3% perf-profile.calltrace.cycles-pp.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
1.32 ± 25% +17.5 18.84 ± 3% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 +19.0 19.02 ± 3% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
0.00 +19.6 19.58 ± 3% perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
0.00 +19.8 19.83 ± 3% perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
0.00 +23.4 23.36 ± 3% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
0.00 +84.2 84.18 perf-profile.calltrace.cycles-pp.do_access
56.15 ± 4% -55.3 0.88 ± 5% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
53.60 ± 4% -52.9 0.74 ± 5% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
41.95 ± 5% -41.4 0.58 ± 5% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
39.34 ± 5% -38.8 0.54 ± 5% perf-profile.children.cycles-pp.hrtimer_interrupt
32.81 ± 5% -32.4 0.44 ± 6% perf-profile.children.cycles-pp.__hrtimer_run_queues
24.29 ± 9% -24.0 0.33 ± 8% perf-profile.children.cycles-pp.tick_sched_timer
22.04 ± 10% -21.7 0.30 ± 8% perf-profile.children.cycles-pp.tick_sched_handle
22.01 ± 10% -21.7 0.30 ± 8% perf-profile.children.cycles-pp.update_process_times
13.48 ± 13% -13.3 0.19 ± 11% perf-profile.children.cycles-pp.scheduler_tick
11.39 ± 5% -11.2 0.21 ± 13% perf-profile.children.cycles-pp.irq_work_run_list
11.32 ± 5% -11.1 0.21 ± 13% perf-profile.children.cycles-pp.asm_sysvec_irq_work
11.32 ± 5% -11.1 0.21 ± 13% perf-profile.children.cycles-pp.sysvec_irq_work
11.32 ± 5% -11.1 0.21 ± 13% perf-profile.children.cycles-pp.__sysvec_irq_work
11.32 ± 5% -11.1 0.21 ± 13% perf-profile.children.cycles-pp.irq_work_run
11.32 ± 5% -11.1 0.21 ± 13% perf-profile.children.cycles-pp.irq_work_single
11.32 ± 5% -11.1 0.21 ± 13% perf-profile.children.cycles-pp.printk
11.32 ± 5% -10.9 0.39 ± 20% perf-profile.children.cycles-pp.vprintk_emit
11.17 ± 6% -10.8 0.38 ± 20% perf-profile.children.cycles-pp.console_unlock
10.23 ± 5% -9.9 0.33 ± 20% perf-profile.children.cycles-pp.serial8250_console_write
9.99 ± 5% -9.7 0.32 ± 20% perf-profile.children.cycles-pp.uart_console_write
9.73 ± 15% -9.6 0.14 ± 19% perf-profile.children.cycles-pp.irq_exit_rcu
9.49 ± 16% -9.3 0.14 ± 13% perf-profile.children.cycles-pp.task_tick_fair
9.38 ± 8% -9.1 0.30 ± 19% perf-profile.children.cycles-pp.wait_for_xmitr
9.15 ± 8% -8.9 0.29 ± 19% perf-profile.children.cycles-pp.serial8250_console_putchar
9.04 ± 18% -8.8 0.27 ± 20% perf-profile.children.cycles-pp.io_serial_in
8.67 ± 37% -8.3 0.32 ± 14% perf-profile.children.cycles-pp.asm_call_on_stack
7.67 ± 21% -7.6 0.11 ± 22% perf-profile.children.cycles-pp.do_softirq_own_stack
7.17 ± 21% -7.1 0.11 ± 23% perf-profile.children.cycles-pp.__softirqentry_text_start
5.87 ± 23% -5.2 0.71 ± 5% perf-profile.children.cycles-pp.native_irq_return_iret
4.45 ± 19% -4.4 0.06 ± 13% perf-profile.children.cycles-pp.update_load_avg
4.59 ± 39% -4.3 0.27 ± 11% perf-profile.children.cycles-pp.ret_from_fork
4.76 ± 18% -4.3 0.45 ± 3% perf-profile.children.cycles-pp.sync_regs
4.41 ± 41% -4.1 0.27 ± 12% perf-profile.children.cycles-pp.kthread
3.84 ± 21% -3.8 0.03 ± 78% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
3.70 ± 48% -3.4 0.25 ± 12% perf-profile.children.cycles-pp.worker_thread
3.55 ± 49% -3.3 0.25 ± 12% perf-profile.children.cycles-pp.process_one_work
3.31 ± 52% -3.1 0.25 ± 12% perf-profile.children.cycles-pp.memcpy_erms
3.24 ± 54% -3.0 0.25 ± 12% perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
2.92 ± 22% -2.9 0.03 ± 77% perf-profile.children.cycles-pp.__x64_sys_execve
2.90 ± 23% -2.9 0.03 ± 77% perf-profile.children.cycles-pp.__do_execve_file
2.88 ± 22% -2.8 0.03 ± 77% perf-profile.children.cycles-pp.execve
1.40 ± 19% -1.2 0.21 ± 31% perf-profile.children.cycles-pp.ksys_write
1.39 ± 19% -1.2 0.21 ± 31% perf-profile.children.cycles-pp.vfs_write
1.30 ± 18% -1.1 0.21 ± 31% perf-profile.children.cycles-pp.new_sync_write
0.75 ± 44% -0.7 0.07 ± 29% perf-profile.children.cycles-pp.vm_mmap_pgoff
0.53 ± 55% -0.5 0.07 ± 29% perf-profile.children.cycles-pp.ksys_mmap_pgoff
0.63 ± 40% -0.4 0.19 ± 34% perf-profile.children.cycles-pp.write
0.30 ± 46% -0.2 0.05 ± 39% perf-profile.children.cycles-pp.clear_page_erms
0.26 ± 48% -0.2 0.06 ± 27% perf-profile.children.cycles-pp.__mmap
0.21 ± 33% -0.1 0.06 ± 27% perf-profile.children.cycles-pp.__get_user_pages
0.00 +0.1 0.07 ± 14% perf-profile.children.cycles-pp.__split_huge_pmd
0.00 +0.1 0.07 ± 14% perf-profile.children.cycles-pp.__split_huge_pmd_locked
0.02 ±158% +0.1 0.09 ± 16% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.00 +0.1 0.08 ± 9% perf-profile.children.cycles-pp.__mod_node_page_state
0.00 +0.1 0.08 ± 17% perf-profile.children.cycles-pp.cpumask_any_but
0.01 ±244% +0.1 0.11 ± 16% perf-profile.children.cycles-pp.__mod_memcg_state
0.01 ±244% +0.1 0.12 ± 19% perf-profile.children.cycles-pp.__count_memcg_events
0.00 +0.1 0.12 ± 11% perf-profile.children.cycles-pp.vmacache_find
0.00 +0.1 0.12 ± 9% perf-profile.children.cycles-pp.reuse_swap_page
0.00 +0.2 0.17 ± 4% perf-profile.children.cycles-pp.lrand48_r@plt
0.00 +0.2 0.18 ± 36% perf-profile.children.cycles-pp.devkmsg_write.cold
0.00 +0.2 0.18 ± 36% perf-profile.children.cycles-pp.devkmsg_emit
0.00 +0.2 0.20 ± 7% perf-profile.children.cycles-pp.do_huge_pmd_wp_page
0.02 ±158% +0.2 0.24 ± 8% perf-profile.children.cycles-pp.page_add_new_anon_rmap
0.01 ±244% +0.2 0.25 ± 9% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.16 ± 90% +0.2 0.41 ± 34% perf-profile.children.cycles-pp.intel_idle
0.02 ±158% +0.2 0.27 ± 7% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.16 ± 90% +0.3 0.42 ± 33% perf-profile.children.cycles-pp.cpuidle_enter
0.16 ± 90% +0.3 0.42 ± 33% perf-profile.children.cycles-pp.cpuidle_enter_state
0.11 ±120% +0.3 0.38 ± 35% perf-profile.children.cycles-pp.start_secondary
0.16 ± 90% +0.3 0.43 ± 32% perf-profile.children.cycles-pp.secondary_startup_64
0.16 ± 90% +0.3 0.43 ± 32% perf-profile.children.cycles-pp.cpu_startup_entry
0.16 ± 90% +0.3 0.43 ± 32% perf-profile.children.cycles-pp.do_idle
0.02 ±244% +0.3 0.32 ± 8% perf-profile.children.cycles-pp.__mod_lruvec_state
0.06 ±101% +0.3 0.39 ± 8% perf-profile.children.cycles-pp.__perf_sw_event
0.13 ± 54% +0.3 0.47 ± 7% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.00 +0.4 0.44 ± 6% perf-profile.children.cycles-pp.lrand48_r
0.05 ± 87% +0.5 0.53 ± 7% perf-profile.children.cycles-pp.lru_cache_add
0.58 ± 31% +0.6 1.23 ± 4% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.47 ± 40% +0.7 1.13 ± 4% perf-profile.children.cycles-pp.get_page_from_freelist
0.09 ± 72% +0.7 0.78 ± 4% perf-profile.children.cycles-pp.native_flush_tlb_one_user
0.12 ± 91% +0.7 0.83 ± 5% perf-profile.children.cycles-pp.try_charge
0.09 ± 72% +0.7 0.81 ± 4% perf-profile.children.cycles-pp.flush_tlb_func_common
0.03 ±158% +0.8 0.78 ± 3% perf-profile.children.cycles-pp.rmqueue_bulk
0.18 ± 62% +0.9 1.03 ± 6% perf-profile.children.cycles-pp.__list_del_entry_valid
0.12 ± 85% +0.9 0.98 ± 4% perf-profile.children.cycles-pp.flush_tlb_mm_range
0.18 ± 42% +0.9 1.08 ± 4% perf-profile.children.cycles-pp.rmqueue
0.03 ±115% +1.0 1.04 ± 4% perf-profile.children.cycles-pp.ptep_clear_flush
0.23 ± 73% +1.1 1.29 ± 4% perf-profile.children.cycles-pp.alloc_pages_vma
0.11 ± 74% +1.1 1.23 ± 5% perf-profile.children.cycles-pp.mem_cgroup_charge
0.00 +1.6 1.58 ± 2% perf-profile.children.cycles-pp.nrand48_r
0.00 +1.9 1.90 ± 3% perf-profile.children.cycles-pp.do_rw_once
0.82 ± 13% +3.0 3.82 ± 10% perf-profile.children.cycles-pp._raw_spin_lock
0.00 +3.4 3.40 ± 9% perf-profile.children.cycles-pp.free_pcppages_bulk
0.02 ±158% +3.5 3.53 ± 9% perf-profile.children.cycles-pp.free_unref_page_list
0.80 ± 44% +3.6 4.41 ± 10% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.29 ± 49% +6.9 7.22 ± 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
1.82 ± 20% +7.6 9.46 ± 7% perf-profile.children.cycles-pp.mmput
1.80 ± 20% +7.7 9.46 ± 7% perf-profile.children.cycles-pp.exit_mmap
1.65 ± 16% +7.8 9.46 ± 7% perf-profile.children.cycles-pp.__x64_sys_exit_group
1.65 ± 16% +7.8 9.46 ± 7% perf-profile.children.cycles-pp.do_group_exit
1.63 ± 18% +7.8 9.46 ± 7% perf-profile.children.cycles-pp.do_exit
0.95 ± 26% +8.2 9.14 ± 7% perf-profile.children.cycles-pp.unmap_vmas
0.88 ± 29% +8.3 9.14 ± 7% perf-profile.children.cycles-pp.unmap_page_range
0.85 ± 29% +8.3 9.14 ± 7% perf-profile.children.cycles-pp.zap_pte_range
0.26 ± 84% +8.5 8.80 ± 8% perf-profile.children.cycles-pp.tlb_flush_mmu
0.16 ± 72% +8.6 8.72 ± 8% perf-profile.children.cycles-pp.release_pages
0.49 ± 42% +11.0 11.48 ± 4% perf-profile.children.cycles-pp.copy_page
0.66 ± 31% +15.6 16.24 ± 3% perf-profile.children.cycles-pp.wp_page_copy
2.30 ± 24% +16.6 18.94 ± 3% perf-profile.children.cycles-pp.__handle_mm_fault
2.34 ± 23% +16.8 19.12 ± 3% perf-profile.children.cycles-pp.handle_mm_fault
2.41 ± 24% +17.2 19.63 ± 3% perf-profile.children.cycles-pp.do_user_addr_fault
2.42 ± 23% +17.5 19.88 ± 3% perf-profile.children.cycles-pp.exc_page_fault
2.61 ± 22% +19.1 21.69 ± 3% perf-profile.children.cycles-pp.asm_exc_page_fault
0.00 +85.3 85.35 perf-profile.children.cycles-pp.do_access
9.04 ± 18% -8.8 0.27 ± 20% perf-profile.self.cycles-pp.io_serial_in
5.87 ± 23% -5.2 0.71 ± 5% perf-profile.self.cycles-pp.native_irq_return_iret
4.75 ± 19% -4.3 0.45 ± 3% perf-profile.self.cycles-pp.sync_regs
2.22 ± 17% -2.0 0.24 ± 12% perf-profile.self.cycles-pp.memcpy_erms
0.30 ± 46% -0.2 0.05 ± 39% perf-profile.self.cycles-pp.clear_page_erms
0.00 +0.1 0.06 ± 15% perf-profile.self.cycles-pp.ptep_clear_flush
0.00 +0.1 0.06 ± 19% perf-profile.self.cycles-pp.__split_huge_pmd_locked
0.00 +0.1 0.08 ± 9% perf-profile.self.cycles-pp.__mod_node_page_state
0.02 ±158% +0.1 0.10 ± 17% perf-profile.self.cycles-pp.rmqueue
0.03 ±116% +0.1 0.11 ± 11% perf-profile.self.cycles-pp.handle_mm_fault
0.01 ±244% +0.1 0.10 ± 16% perf-profile.self.cycles-pp.__mod_memcg_state
0.01 ±244% +0.1 0.12 ± 20% perf-profile.self.cycles-pp.__count_memcg_events
0.00 +0.1 0.11 ± 11% perf-profile.self.cycles-pp.vmacache_find
0.00 +0.1 0.12 ± 9% perf-profile.self.cycles-pp.reuse_swap_page
0.01 ±244% +0.1 0.14 ± 8% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.01 ±244% +0.1 0.14 ± 11% perf-profile.self.cycles-pp.__mod_lruvec_state
0.00 +0.1 0.15 ± 6% perf-profile.self.cycles-pp.lrand48_r@plt
0.07 ± 78% +0.2 0.24 ± 8% perf-profile.self.cycles-pp.release_pages
0.01 ±244% +0.2 0.21 ± 8% perf-profile.self.cycles-pp.wp_page_copy
0.01 ±244% +0.2 0.25 ± 9% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.16 ± 90% +0.2 0.41 ± 34% perf-profile.self.cycles-pp.intel_idle
0.00 +0.3 0.29 ± 9% perf-profile.self.cycles-pp.lrand48_r
0.00 +0.3 0.30 ± 6% perf-profile.self.cycles-pp.free_pcppages_bulk
0.01 ±244% +0.6 0.58 ± 3% perf-profile.self.cycles-pp.rmqueue_bulk
0.09 ± 72% +0.7 0.78 ± 4% perf-profile.self.cycles-pp.native_flush_tlb_one_user
0.10 ±101% +0.7 0.79 ± 5% perf-profile.self.cycles-pp.try_charge
0.18 ± 62% +0.9 1.03 ± 6% perf-profile.self.cycles-pp.__list_del_entry_valid
0.06 ±128% +1.1 1.13 ± 3% perf-profile.self.cycles-pp.do_wp_page
0.00 +1.4 1.38 ± 3% perf-profile.self.cycles-pp.nrand48_r
0.00 +1.8 1.76 ± 3% perf-profile.self.cycles-pp.do_rw_once
0.29 ± 49% +6.9 7.22 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.49 ± 42% +10.9 11.39 ± 4% perf-profile.self.cycles-pp.copy_page
0.00 +63.9 63.86 ± 2% perf-profile.self.cycles-pp.do_access
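
Note on reading the rows above (standard LKP layout): the first value is for the base commit (2703206ff5), the middle column is the absolute change in cycles%, and the last value is for the tested commit (3b54a0a703), each with its %stddev. For reference, a comparable calltrace / children / self breakdown can be gathered with perf on a single kernel; the commands below are a minimal sketch under that assumption, not the robot's actual tooling or options:

    # Sketch only: approximate perf invocation for a similar profile
    # (the LKP harness' real options may differ).
    perf record -a -g -- sleep 300          # system-wide call-graph sampling for ~300s
    perf report --children --stdio          # inclusive ("children") cycles per call chain
    perf report --no-children --stdio       # exclusive ("self") cycles per function
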
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen