Re: [tracing] 06e0a548ba: WARNING:at_kernel/trace/ring_buffer.c:#ring_buffer_iter_peek
by Steven Rostedt
On Wed, 13 May 2020 18:15:57 +0200
Sven Schnelle <svens@linux.ibm.com> wrote:
> Thanks for looking into this. I've attached my /proc/config.gz to this mail.
> The x86 system is my laptop, a ThinkPad X280 with 4 hyper-threaded cores (so 8 CPUs
> in total). I've tried disabling preemption, but this didn't help.
>
> It's always this check that causes the loop:
>
> if (iter->head >= rb_page_size(iter->head_page)) {
> rb_inc_iter(iter);
> goto again;
> }
>
> On the first loop, iter->head is some value > 0 and rb_page_size() returns
> 0; afterwards it reaches this check twice with both values 0. The third
> time, the warning is triggered. Maybe that information helps.
Letting it run long enough, I was able to trigger it.
I think I know what's wrong with it. I'll put in some debugging to see if
my thoughts are accurate.
Thanks for bringing this back to my attention.
-- Steve
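The behavior Sven describes, an empty page advancing the iterator over and over until a retry guard fires, can be sketched as a minimal model. This is a simplification for illustration, not the kernel's ring-buffer code; the function name, page representation, and retry limit of 3 are all assumptions:

```python
# Toy model of the retry loop in the iterator peek path: if the current
# page is exhausted (head >= page size), advance to the next page and
# retry; a loop counter guards against spinning forever, which is the
# analogue of the WARN_ON seen in the report.

def iter_peek(pages, head, max_loops=3):
    """pages: list of page sizes. Returns (page_index, status); status is
    'ok' when a page with readable data is found, or a warning string once
    the retry guard trips (all pages seen were empty)."""
    page = 0
    nr_loops = 0
    while True:
        nr_loops += 1
        if nr_loops > max_loops:
            # analogue of RB_WARN_ON firing after too many retries
            return None, "WARNING: too many retries"
        if page >= len(pages):
            return None, "empty"
        if head >= pages[page]:   # page exhausted (its size may be 0)
            page += 1             # analogue of rb_inc_iter()
            head = 0
            continue              # analogue of 'goto again'
        return page, "ok"

# Sven's observation maps onto: first pass with head > 0 and page size 0,
# then two more passes with both values 0, and the guard trips.
print(iter_peek([0, 0, 0], head=5))
print(iter_peek([0, 10], head=5))
```

A run of empty (size 0) pages keeps taking the `goto again` branch, which is exactly the loop shape described in the thread.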
[software node] 7589238a8c: BUG:kernel_NULL_pointer_dereference,address
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-7):
commit: 7589238a8cf37331607c3222a64ac3140b29532d ("Revert "software node: Simplify software_node_release() function"")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: kernel-selftests
with following parameters:
group: kselftests-lib
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the below changes (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 81.294302] BUG: kernel NULL pointer dereference, address: 0000000000000000
[ 81.299499] #PF: supervisor read access in kernel mode
[ 81.301121] #PF: error_code(0x0000) - not-present page
[ 81.302088] PGD 0 P4D 0
[ 81.302571] Oops: 0000 [#1] SMP PTI
[ 81.303252] CPU: 1 PID: 1189 Comm: modprobe Not tainted 5.6.0-rc4-00001-g7589238a8cf37 #1
[ 81.304928] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 81.306473] RIP: 0010:ida_free+0x76/0x140
[ 81.307218] Code: 00 00 48 c7 44 24 28 00 00 00 00 0f 88 c0 00 00 00 89 f3 e8 8c 3e 02 00 48 89 e7 49 89 c5 e8 f1 f6 00 00 a8 01 48 89 c5 75 41 <4c> 0f a3 20 72 74 48 8b 3c 24 4c 89 ee e8 18 3c 02 00 89 de 48 c7
[ 81.311772] RSP: 0000:ffffb32b40ff7a98 EFLAGS: 00010046
[ 81.337521] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ 81.340410] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffb32b40ff7a98
[ 81.343254] RBP: 0000000000000000 R08: ffffb32b40ff7a98 R09: 0000000000000000
[ 81.346138] R10: ffffffffffffffff R11: 0000000000000000 R12: 0000000000000000
[ 81.348985] R13: 0000000000000246 R14: ffffffffc0632937 R15: ffffb32b40ff7e88
[ 81.353197] FS: 00007f1fe8a395c0(0000) GS:ffff9609f7d00000(0000) knlGS:0000000000000000
[ 81.356217] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 81.359250] CR2: 0000000000000000 CR3: 000000015e8f8000 CR4: 00000000000406e0
[ 81.362127] Call Trace:
[ 81.364185] software_node_release+0x26/0xb0
[ 81.366534] kobject_put+0xa6/0x1b0
[ 81.368685] kobject_del+0x45/0x60
[ 81.370784] kobject_put+0x8b/0x1b0
[ 81.372911] software_node_unregister_nodes+0x25/0x40
[ 81.375351] test_pointer+0xb8d/0xbb5 [test_printf]
[ 81.377815] ? test_pointer+0xbb5/0xbb5 [test_printf]
[ 81.380276] test_printf_init+0x368/0x1075 [test_printf]
[ 81.383175] do_one_initcall+0x46/0x220
[ 81.385499] ? free_unref_page_commit+0x9f/0x120
[ 81.387915] ? _cond_resched+0x19/0x30
[ 81.390128] ? kmem_cache_alloc_trace+0x3b/0x230
[ 81.392560] do_init_module+0x5b/0x21d
[ 81.394766] load_module+0x1bf1/0x2080
[ 81.396961] ? ima_post_read_file+0xe2/0x120
[ 81.399179] ? __do_sys_finit_module+0xe9/0x110
[ 81.401413] __do_sys_finit_module+0xe9/0x110
[ 81.403595] do_syscall_64+0x5b/0x1f0
[ 81.405648] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 81.408077] RIP: 0033:0x7f1fe896cf49
[ 81.410293] Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 17 3f 0c 00 f7 d8 64 89 01 48
[ 81.416838] RSP: 002b:00007ffcdd8d3898 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[ 81.419744] RAX: ffffffffffffffda RBX: 000055a5fce0e060 RCX: 00007f1fe896cf49
[ 81.422531] RDX: 0000000000000000 RSI: 000055a5fb151638 RDI: 0000000000000005
[ 81.425377] RBP: 000055a5fb151638 R08: 0000000000000000 R09: 000055a5fce0f950
[ 81.428220] R10: 0000000000000005 R11: 0000000000000246 R12: 0000000000000000
[ 81.431049] R13: 000055a5fce0e200 R14: 0000000000040000 R15: 0000000000000000
[ 81.433857] Modules linked in: test_printf(+) bochs_drm drm_vram_helper drm_ttm_helper ttm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops drm sr_mod intel_rapl_msr cdrom intel_rapl_common sg crct10dif_pclmul ppdev crc32_pclmul crc32c_intel ghash_clmulni_intel ata_generic pata_acpi snd_pcm aesni_intel snd_timer crypto_simd cryptd glue_helper joydev snd soundcore pcspkr serio_raw ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables
[ 81.453400] CR2: 0000000000000000
[ 81.455665] ---[ end trace 3ff065295a4eab6a ]---
To reproduce:
# build kernel
cd linux
cp config-5.6.0-rc4-00001-g7589238a8cf37 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
Re: [tracing] 06e0a548ba: WARNING:at_kernel/trace/ring_buffer.c:#ring_buffer_iter_peek
by Steven Rostedt
On Wed, 13 May 2020 11:19:06 +0200
Sven Schnelle <svens@linux.ibm.com> wrote:
> Did you have a chance to look into this? I can easily reproduce this both on x86
> and s390 by doing:
>
> cd /sys/kernel/tracing
> cat /dev/zero >/dev/null & # generate some load
> echo function >current_tracer
> # wait a few seconds to fill the buffer
> cat trace
>
> Usually it will print the warn after a few seconds.
>
> I haven't dug through all the ring buffer code yet, so I thought I might ask
> whether you have an idea of what's going on.
Can you send me the config for where you can reproduce it on x86?
The iterator now doesn't stop the ring buffer while it iterates, and is
doing so over a live buffer (but it should be able to handle that). It's
triggering something I thought wasn't supposed to happen (but which must be
happening).
Perhaps with your config I'd be able to reproduce it.
-- Steve
[ACPI] 5a91d41f89: BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/mutex.c
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-7):
commit: 5a91d41f89e8874053e12766fa8eb5eaa997d277 ("[PATCH v2] ACPI: Drop rcu usage for MMIO mappings")
url: https://github.com/0day-ci/linux/commits/Dan-Williams/ACPI-Drop-rcu-usage...
base: https://git.kernel.org/cgit/linux/kernel/git/rafael/linux-pm.git linux-next
in testcase: v4l2
with following parameters:
test: device
ucode: 0x43
on test machine: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory
caused the below changes (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 15.593161] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:935
[ 15.594078] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 1, name: swapper/0
[ 15.594078] 2 locks held by swapper/0/1:
[ 15.594078] #0: ffff88a08055b188 (&dev->mutex){....}-{3:3}, at: device_driver_attach+0x1d/0x60
[ 15.594078] #1: ffffffff82a1a658 (ghes_notify_lock_irq){....}-{2:2}, at: ghes_probe+0x1c7/0x470
[ 15.594078] irq event stamp: 11350142
[ 15.594078] hardirqs last enabled at (11350141): [<ffffffff8137d9bf>] kfree+0x18f/0x2f0
[ 15.594078] hardirqs last disabled at (11350142): [<ffffffff81cc0c00>] _raw_spin_lock_irqsave+0x20/0x60
[ 15.594078] softirqs last enabled at (11350110): [<ffffffff82000353>] __do_softirq+0x353/0x466
[ 15.594078] softirqs last disabled at (11350103): [<ffffffff8113c277>] irq_exit+0xe7/0xf0
[ 15.594078] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.7.0-rc4-00022-g5a91d41f89e88 #1
[ 15.594078] Hardware name: Intel Corporation S2600WTT/S2600WTT, BIOS SE5C610.86B.01.01.0008.021120151325 02/11/2015
[ 15.594078] Call Trace:
[ 15.594078] dump_stack+0x8f/0xcb
[ 15.594078] ___might_sleep+0x175/0x260
[ 15.594078] __mutex_lock+0x55/0x9f0
[ 15.594078] ? cpumask_next+0x17/0x20
[ 15.594078] ? validate_chain+0xdec/0x1240
[ 15.594078] ? rdinit_setup+0x2b/0x2b
[ 15.594078] ? acpi_os_rw_map+0x34/0xb0
[ 15.594078] acpi_os_rw_map+0x34/0xb0
[ 15.594078] acpi_os_read_memory+0x34/0xc0
[ 15.594078] ? lock_acquire+0xac/0x390
[ 15.594078] apei_read+0x97/0xb0
[ 15.594078] __ghes_peek_estatus+0x27/0xc0
[ 15.594078] ghes_proc+0x37/0x120
[ 15.594078] ghes_probe+0x1d2/0x470
[ 15.594078] platform_drv_probe+0x37/0x90
[ 15.594078] really_probe+0xef/0x430
[ 15.594078] driver_probe_device+0x110/0x120
[ 15.594078] device_driver_attach+0x4f/0x60
[ 15.594078] __driver_attach+0x9c/0x140
[ 15.594078] ? device_driver_attach+0x60/0x60
[ 15.594078] bus_for_each_dev+0x79/0xc0
[ 15.594078] bus_add_driver+0x147/0x220
[ 15.594078] ? bert_init+0x229/0x229
[ 15.594078] driver_register+0x5b/0xf0
[ 15.594078] ? bert_init+0x229/0x229
[ 15.594078] ghes_init+0x83/0xde
[ 15.594078] do_one_initcall+0x5d/0x2f0
[ 15.594078] ? rdinit_setup+0x2b/0x2b
[ 15.594078] ? rcu_read_lock_sched_held+0x52/0x80
[ 15.594078] kernel_init_freeable+0x260/0x2da
[ 15.594078] ? rest_init+0x250/0x250
[ 15.594078] kernel_init+0xa/0x110
[ 15.594078] ret_from_fork+0x3a/0x50
[ 15.594078]
[ 15.594078] =============================
[ 15.594078] [ BUG: Invalid wait context ]
[ 15.594078] 5.7.0-rc4-00022-g5a91d41f89e88 #1 Tainted: G W
[ 15.594078] -----------------------------
[ 15.594078] swapper/0/1 is trying to lock:
[ 15.594078] ffffffff82a0b5c8 (acpi_ioremap_lock){+.+.}-{3:3}, at: acpi_os_rw_map+0x34/0xb0
[ 15.594078] other info that might help us debug this:
[ 15.594078] context-{4:4}
[ 15.594078] 2 locks held by swapper/0/1:
[ 15.594078] #0: ffff88a08055b188 (&dev->mutex){....}-{3:3}, at: device_driver_attach+0x1d/0x60
[ 15.594078] #1: ffffffff82a1a658 (ghes_notify_lock_irq){....}-{2:2}, at: ghes_probe+0x1c7/0x470
[ 15.594078] stack backtrace:
[ 15.594078] CPU: 1 PID: 1 Comm: swapper/0 Tainted: G W 5.7.0-rc4-00022-g5a91d41f89e88 #1
[ 15.594078] Hardware name: Intel Corporation S2600WTT/S2600WTT, BIOS SE5C610.86B.01.01.0008.021120151325 02/11/2015
[ 15.594078] Call Trace:
[ 15.594078] dump_stack+0x8f/0xcb
[ 15.594078] __lock_acquire+0x61e/0xbc0
[ 15.594078] lock_acquire+0xac/0x390
[ 15.594078] ? acpi_os_rw_map+0x34/0xb0
[ 15.594078] ? acpi_os_rw_map+0x34/0xb0
[ 15.594078] ? acpi_os_rw_map+0x34/0xb0
[ 15.594078] __mutex_lock+0xa1/0x9f0
[ 15.594078] ? acpi_os_rw_map+0x34/0xb0
[ 15.594078] ? cpumask_next+0x17/0x20
[ 15.594078] ? rdinit_setup+0x2b/0x2b
[ 15.594078] ? acpi_os_rw_map+0x34/0xb0
[ 15.594078] acpi_os_rw_map+0x34/0xb0
[ 15.594078] acpi_os_read_memory+0x34/0xc0
[ 15.594078] ? lock_acquire+0xac/0x390
[ 15.594078] apei_read+0x97/0xb0
[ 15.594078] __ghes_peek_estatus+0x27/0xc0
[ 15.594078] ghes_proc+0x37/0x120
[ 15.594078] ghes_probe+0x1d2/0x470
[ 15.594078] platform_drv_probe+0x37/0x90
[ 15.594078] really_probe+0xef/0x430
[ 15.594078] driver_probe_device+0x110/0x120
[ 15.594078] device_driver_attach+0x4f/0x60
[ 15.594078] __driver_attach+0x9c/0x140
[ 15.594078] ? device_driver_attach+0x60/0x60
[ 15.594078] bus_for_each_dev+0x79/0xc0
[ 15.594078] bus_add_driver+0x147/0x220
[ 15.594078] ? bert_init+0x229/0x229
[ 15.594078] driver_register+0x5b/0xf0
[ 15.594078] ? bert_init+0x229/0x229
[ 15.594078] ghes_init+0x83/0xde
[ 15.594078] do_one_initcall+0x5d/0x2f0
[ 15.594078] ? rdinit_setup+0x2b/0x2b
[ 15.594078] ? rcu_read_lock_sched_held+0x52/0x80
[ 15.594078] kernel_init_freeable+0x260/0x2da
[ 15.594078] ? rest_init+0x250/0x250
[ 15.594078] kernel_init+0xa/0x110
[ 15.594078] ret_from_fork+0x3a/0x50
[ 16.109557] GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
[ 16.118199] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 16.125308] 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[ 16.133596] 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[ 16.142660] Non-volatile memory driver v1.3
[ 16.147391] Linux agpgart interface v0.103
[ 16.158812] lkdtm: No crash points registered, enable through debugfs
[ 16.166314] rdac: device handler registered
[ 16.171055] hp_sw: device handler registered
[ 16.175838] emc: device handler registered
[ 16.180592] alua: device handler registered
[ 16.185679] MACsec IEEE 802.1AE
[ 16.189408] libphy: Fixed MDIO Bus: probed
[ 16.194253] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
[ 16.202138] e1000: Copyright (c) 1999-2006 Intel Corporation.
[ 16.208641] e1000e: Intel(R) PRO/1000 Network Driver - 3.2.6-k
[ 16.215173] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[ 16.221901] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.6.0-k
[ 16.229691] igb: Copyright (c) 2007-2014 Intel Corporation.
[ 16.235985] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 5.1.0-k
[ 16.244549] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[ 16.251177] IOAPIC[9]: Set IRTE entry (P:1 FPD:0 Dst_Mode:1 Redir_hint:1 Trig_Mode:0 Dlvry_Mode:0 Avail:0 Vector:EF Dest:00000001 SID:002C SQ:0 SVT:1)
[ 16.266251] IOAPIC[1]: Set routing entry (9-13 -> 0xef -> IRQ 38 Mode:1 Active:1 Dest:1)
[ 16.548010] ixgbe 0000:03:00.0: Multiqueue Enabled: Rx Queue count = 63, Tx Queue count = 63 XDP Queue count = 0
[ 16.643801] ixgbe 0000:03:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 16.677322] ixgbe 0000:03:00.0: MAC: 3, PHY: 0, PBA No: 000000-000
[ 16.684240] ixgbe 0000:03:00.0: 00:1e:67:f7:44:b3
[ 16.838449] ixgbe 0000:03:00.0: Intel(R) 10 Gigabit Network Connection
[ 16.845921] libphy: ixgbe-mdio: probed
[ 16.850249] IOAPIC[9]: Set IRTE entry (P:1 FPD:0 Dst_Mode:1 Redir_hint:1 Trig_Mode:0 Dlvry_Mode:0 Avail:0 Vector:EF Dest:00000001 SID:002C SQ:0 SVT:1)
[ 16.865321] IOAPIC[1]: Set routing entry (9-10 -> 0xef -> IRQ 105 Mode:1 Active:1 Dest:1)
[ 17.147743] ixgbe 0000:03:00.1: Multiqueue Enabled: Rx Queue count = 63, Tx Queue count = 63 XDP Queue count = 0
[ 17.243466] ixgbe 0000:03:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 17.276992] ixgbe 0000:03:00.1: MAC: 3, PHY: 0, PBA No: 000000-000
[ 17.283910] ixgbe 0000:03:00.1: 00:1e:67:f7:44:b4
[ 17.437334] ixgbe 0000:03:00.1: Intel(R) 10 Gigabit Network Connection
[ 17.444791] libphy: ixgbe-mdio: probed
[ 17.449025] i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 2.8.20-k
[ 17.457890] i40e: Copyright (c) 2013 - 2019 Intel Corporation.
[ 17.465025] usbcore: registered new interface driver catc
[ 17.471076] usbcore: registered new interface driver kaweth
[ 17.477311] pegasus: v0.9.3 (2013/04/25), Pegasus/Pegasus II USB Ethernet driver
[ 17.485597] usbcore: registered new interface driver pegasus
[ 17.491949] usbcore: registered new interface driver rtl8150
[ 17.498295] usbcore: registered new interface driver asix
[ 17.504340] usbcore: registered new interface driver cdc_ether
[ 17.510870] usbcore: registered new interface driver cdc_eem
[ 17.517207] usbcore: registered new interface driver dm9601
[ 17.523455] usbcore: registered new interface driver smsc75xx
[ 17.529891] usbcore: registered new interface driver smsc95xx
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
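The underlying pattern in the backtraces above is a sleeping mutex (acpi_ioremap_lock, annotated {3:3}) acquired while an IRQ-disabled spinlock (ghes_notify_lock_irq, annotated {2:2}) is held. A toy checker can model the wait-context rule that lockdep enforces; this is a deliberate simplification, not lockdep itself, and the class names and numeric wait types are illustrative:

```python
# Simplified wait-context rule: each lock has a wait type, and a lock may
# only be acquired from a context whose effective wait type is >= the
# lock's own. Holding a spinlock (type 2) lowers the context below what a
# sleeping mutex (type 3) requires, reproducing the "invalid wait context"
# splat in spirit.

WAIT_SPIN, WAIT_SLEEP = 2, 3

class LockContext:
    def __init__(self):
        self.held = []  # stack of (name, wait_type)

    def acquire(self, name, wait_type):
        # Effective context = strictest (lowest) wait type among held locks;
        # with nothing held we are in normal sleeping context.
        ctx = min((t for _, t in self.held), default=WAIT_SLEEP)
        if wait_type > ctx:
            return f"BUG: invalid wait context: {name} ({wait_type}:{wait_type}) in context ({ctx}:{ctx})"
        self.held.append((name, wait_type))
        return "ok"

    def release(self):
        self.held.pop()

ctx = LockContext()
print(ctx.acquire("ghes_notify_lock_irq", WAIT_SPIN))  # ok
print(ctx.acquire("acpi_ioremap_lock", WAIT_SLEEP))    # BUG, mirroring the report
```

In the report, the mutex is taken inside acpi_os_rw_map() on the apei_read() path, which runs under the GHES spinlock during probe.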
Thanks,
Rong Chen
[firmware] 88a5883981: will-it-scale.per_thread_ops -66.8% regression
by kernel test robot
Greetings,
FYI, we noticed a -66.8% regression of will-it-scale.per_thread_ops due to commit:
commit: 88a588398167c06f3890949fe60e864fa75a6f21 ("[PATCH 1/1] firmware: arm_scmi/mailbox: ignore notification for tx done using irq")
url: https://github.com/0day-ci/linux/commits/joe_zhuchg-126-com/firmware-arm_...
in testcase: will-it-scale
on test machine: 104 threads Skylake with 192G memory
with following parameters:
nr_task: 100%
mode: thread
test: sched_yield
cpufreq_governor: performance
ucode: 0x2000065
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
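The sched_yield testcase measured here can be sketched in miniature: each thread spins calling sched_yield() for a fixed window and reports its operation count, and the regression above is a drop in that per-thread rate. This is a hypothetical sketch, not the actual will-it-scale harness (which is C and pins threads to CPUs); thread count and duration are illustrative:

```python
# Minimal per_thread_ops-style measurement: N threads each call
# sched_yield() in a tight loop for a fixed duration, then report the
# mean operations completed per thread.
import os
import threading
import time

def worker(duration_s, out, idx):
    ops = 0
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        os.sched_yield()  # yield the CPU back to the scheduler
        ops += 1
    out[idx] = ops

def per_thread_ops(nr_task=2, duration_s=0.2):
    results = [0] * nr_task
    threads = [threading.Thread(target=worker, args=(duration_s, results, i))
               for i in range(nr_task)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results) // nr_task  # mean ops per thread over the window

print(per_thread_ops())
```

The real benchmark runs 1 through n parallel copies and compares process- vs thread-based scaling; only the per-thread counting idea is shown here.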
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/thread/100%/debian-x86_64-20191114.cgz/lkp-skl-fpga01/sched_yield/will-it-scale/0x2000065
commit:
d5eeab8d7e (" SCSI fixes on 20200508")
88a5883981 ("firmware: arm_scmi/mailbox: ignore notification for tx done using irq")
d5eeab8d7e269e8c 88a588398167c06f3890949fe60
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:4 75% 3:4 kmsg.IP-Config:Failed_to_open_erspan0
:4 75% 3:4 kmsg.IP-Config:Failed_to_open_gretap0
1:4 -25% :4 kmsg.debugfs:Directory'#'with_parent'rpc_clnt'already_present
:4 75% 3:4 kmsg.kvm:already_loaded_the_other_module
%stddev %change %stddev
\ | \
1137135 -66.8% 378072 will-it-scale.per_thread_ops
8321066 ± 14% -64.6% 2942794 ± 4% will-it-scale.time.involuntary_context_switches
18362 -2.5% 17908 will-it-scale.time.minor_page_faults
17858 +47.7% 26372 will-it-scale.time.system_time
13434 -63.4% 4912 will-it-scale.time.user_time
1.183e+08 -66.8% 39319577 will-it-scale.workload
36.51 ± 3% +14.3% 41.71 boot-time.boot
20.75 -1.7% 20.39 boot-time.dhcp
3393 ± 3% +16.3% 3945 boot-time.idle
0.40 ± 14% +0.2 0.56 ± 3% mpstat.cpu.all.idle%
56.81 +27.0 83.79 mpstat.cpu.all.sys%
42.78 -27.1 15.66 mpstat.cpu.all.usr%
56.00 +47.0% 82.33 vmstat.cpu.sy
42.00 -64.3% 15.00 vmstat.cpu.us
28252 ± 14% -62.7% 10526 ± 4% vmstat.system.cs
1.219e+08 ± 52% +54.6% 1.884e+08 cpuidle.C1E.time
264518 ± 43% +52.3% 402973 cpuidle.C1E.usage
9904 ± 23% -47.7% 5175 ± 7% cpuidle.POLL.time
2764 ± 29% -51.4% 1344 ± 8% cpuidle.POLL.usage
744784 ± 2% -46.9% 395600 ± 10% meminfo.DirectMap4k
79337 +46.5% 116199 meminfo.KReclaimable
33152 +10.8% 36736 meminfo.Percpu
79337 +46.5% 116199 meminfo.SReclaimable
166934 +16.7% 194870 meminfo.SUnreclaim
77261 +14.2% 88257 meminfo.Shmem
246272 +26.3% 311071 meminfo.Slab
243167 +10.4% 268500 meminfo.VmallocUsed
9622 ± 13% +58.4% 15242 ± 8% numa-vmstat.node0.nr_slab_reclaimable
21164 ± 6% +21.0% 25615 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
715273 ± 3% -11.3% 634182 ± 5% numa-vmstat.node0.numa_hit
10210 ± 12% +35.2% 13807 ± 8% numa-vmstat.node1.nr_slab_reclaimable
20567 ± 7% +12.3% 23101 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
821312 ± 2% -23.1% 631298 ± 7% numa-vmstat.node1.numa_hit
780418 ± 2% -31.4% 535357 ± 3% numa-vmstat.node1.numa_local
40893 ± 17% +134.6% 95940 ± 68% numa-vmstat.node1.numa_other
38491 ± 13% +58.4% 60971 ± 8% numa-meminfo.node0.KReclaimable
1086867 ± 6% +10.4% 1199675 ± 5% numa-meminfo.node0.MemUsed
38491 ± 13% +58.4% 60971 ± 8% numa-meminfo.node0.SReclaimable
84661 ± 6% +21.0% 102464 ± 2% numa-meminfo.node0.SUnreclaim
123153 ± 6% +32.7% 163436 ± 4% numa-meminfo.node0.Slab
40846 ± 12% +35.2% 55232 ± 8% numa-meminfo.node1.KReclaimable
40846 ± 12% +35.2% 55232 ± 8% numa-meminfo.node1.SReclaimable
82270 ± 7% +12.3% 92405 ± 3% numa-meminfo.node1.SUnreclaim
123117 ± 7% +19.9% 147638 ± 4% numa-meminfo.node1.Slab
94315 +2.9% 97038 proc-vmstat.nr_active_anon
88.50 ± 19% +23.5% 109.33 proc-vmstat.nr_anon_transparent_hugepages
6979 +1.8% 7103 proc-vmstat.nr_inactive_anon
19302 +14.2% 22050 proc-vmstat.nr_shmem
19833 +46.5% 29049 proc-vmstat.nr_slab_reclaimable
41733 +16.7% 48716 proc-vmstat.nr_slab_unreclaimable
94315 +2.9% 97038 proc-vmstat.nr_zone_active_anon
6979 +1.8% 7103 proc-vmstat.nr_zone_inactive_anon
17834 ± 2% +22.1% 21778 ± 2% proc-vmstat.pgactivate
163.11 ± 7% +28.7% 209.95 ± 5% sched_debug.cfs_rq:/.exec_clock.stddev
2498 ± 2% -9.4% 2262 ± 2% sched_debug.cfs_rq:/.runnable_avg.max
241.18 ± 5% -13.1% 209.60 ± 5% sched_debug.cfs_rq:/.runnable_avg.stddev
339472 ± 6% +15.0% 390386 ± 7% sched_debug.cfs_rq:/.spread0.max
842.62 -13.1% 732.50 ± 8% sched_debug.cfs_rq:/.util_avg.min
0.00 ± 9% +15.7% 0.00 ± 3% sched_debug.cpu.next_balance.stddev
66265 ± 15% -63.1% 24479 ± 4% sched_debug.cpu.nr_switches.avg
2124461 ± 26% -67.6% 689147 ± 29% sched_debug.cpu.nr_switches.max
298705 ± 20% -69.5% 90983 ± 18% sched_debug.cpu.nr_switches.stddev
0.00 ± 24% -81.0% 0.00 ±141% sched_debug.cpu.nr_uninterruptible.avg
1.696e+08 -66.8% 56291966 sched_debug.cpu.sched_count.avg
1.755e+08 -66.6% 58556615 sched_debug.cpu.sched_count.max
1.612e+08 -67.0% 53263971 sched_debug.cpu.sched_count.min
4002018 ± 3% -56.1% 1757473 sched_debug.cpu.sched_count.stddev
51.49 ± 11% -22.2% 40.05 ± 5% sched_debug.cpu.sched_goidle.avg
1843 -12.1% 1620 ± 4% sched_debug.cpu.sched_goidle.max
234.65 ± 7% -28.7% 167.22 ± 4% sched_debug.cpu.sched_goidle.stddev
10020 ± 8% -20.0% 8015 ± 4% sched_debug.cpu.ttwu_count.max
1636 ± 2% -15.1% 1388 ± 2% sched_debug.cpu.ttwu_count.stddev
3877 ± 13% +23.7% 4796 ± 10% sched_debug.cpu.ttwu_local.max
1.696e+08 -66.8% 56290192 sched_debug.cpu.yld_count.avg
1.755e+08 -66.6% 58555996 sched_debug.cpu.yld_count.max
1.612e+08 -67.0% 53260956 sched_debug.cpu.yld_count.min
4002070 ± 3% -56.1% 1757614 sched_debug.cpu.yld_count.stddev
569.50 ± 34% -35.3% 368.67 ± 12% interrupts.38:PCI-MSI.67633153-edge.eth0-TxRx-0
265902 ± 16% -11.2% 236085 ± 2% interrupts.CAL:Function_call_interrupts
2372 ± 18% -17.6% 1954 ± 4% interrupts.CPU0.CAL:Function_call_interrupts
2798 ± 33% -37.6% 1745 ± 28% interrupts.CPU0.RES:Rescheduling_interrupts
479.25 ± 55% -46.4% 256.67 ± 2% interrupts.CPU100.RES:Rescheduling_interrupts
358.00 ± 19% -49.1% 182.33 ± 6% interrupts.CPU101.RES:Rescheduling_interrupts
398.50 ± 33% -73.3% 106.33 ± 11% interrupts.CPU102.RES:Rescheduling_interrupts
1038 ± 88% -87.8% 127.00 ± 19% interrupts.CPU103.RES:Rescheduling_interrupts
7346 ± 6% -33.4% 4890 ± 35% interrupts.CPU13.NMI:Non-maskable_interrupts
7346 ± 6% -33.4% 4890 ± 35% interrupts.CPU13.PMI:Performance_monitoring_interrupts
7346 ± 6% -33.5% 4887 ± 35% interrupts.CPU15.NMI:Non-maskable_interrupts
7346 ± 6% -33.5% 4887 ± 35% interrupts.CPU15.PMI:Performance_monitoring_interrupts
330.50 +37.7% 455.00 interrupts.CPU15.RES:Rescheduling_interrupts
7347 ± 6% -33.5% 4886 ± 35% interrupts.CPU16.NMI:Non-maskable_interrupts
7347 ± 6% -33.5% 4886 ± 35% interrupts.CPU16.PMI:Performance_monitoring_interrupts
354.75 ± 9% +25.2% 444.00 ± 3% interrupts.CPU17.RES:Rescheduling_interrupts
325.00 +88.0% 611.00 ± 31% interrupts.CPU18.RES:Rescheduling_interrupts
328.00 +27.1% 417.00 ± 9% interrupts.CPU19.RES:Rescheduling_interrupts
330.75 ± 2% +53.9% 509.00 ± 15% interrupts.CPU20.RES:Rescheduling_interrupts
341.00 ± 3% +31.9% 449.67 ± 6% interrupts.CPU21.RES:Rescheduling_interrupts
330.75 ± 3% +29.9% 429.67 ± 8% interrupts.CPU22.RES:Rescheduling_interrupts
331.75 ± 4% +23.8% 410.67 ± 10% interrupts.CPU23.RES:Rescheduling_interrupts
329.25 ± 2% +26.1% 415.33 ± 12% interrupts.CPU24.RES:Rescheduling_interrupts
2132 ± 20% -18.5% 1737 ± 2% interrupts.CPU24.TLB:TLB_shootdowns
327.25 +33.1% 435.67 ± 10% interrupts.CPU25.RES:Rescheduling_interrupts
569.50 ± 34% -35.3% 368.67 ± 12% interrupts.CPU30.38:PCI-MSI.67633153-edge.eth0-TxRx-0
327.00 +12.1% 366.67 ± 3% interrupts.CPU33.RES:Rescheduling_interrupts
320.75 ± 2% +45.2% 465.67 ± 5% interrupts.CPU45.RES:Rescheduling_interrupts
326.50 ± 3% +21.7% 397.33 ± 5% interrupts.CPU46.RES:Rescheduling_interrupts
321.00 ± 3% +16.9% 375.33 ± 4% interrupts.CPU47.RES:Rescheduling_interrupts
325.75 ± 2% +21.7% 396.33 ± 11% interrupts.CPU48.RES:Rescheduling_interrupts
320.00 +24.9% 399.67 ± 7% interrupts.CPU49.RES:Rescheduling_interrupts
9309 ± 18% -60.8% 3652 ± 35% interrupts.CPU53.RES:Rescheduling_interrupts
492.50 ± 24% +110.2% 1035 ± 53% interrupts.CPU58.RES:Rescheduling_interrupts
491.00 ± 27% +75.8% 863.00 ± 13% interrupts.CPU59.RES:Rescheduling_interrupts
346.50 ± 3% +87.7% 650.33 ± 59% interrupts.CPU64.RES:Rescheduling_interrupts
7354 ± 6% -33.5% 4890 ± 35% interrupts.CPU73.NMI:Non-maskable_interrupts
7354 ± 6% -33.5% 4890 ± 35% interrupts.CPU73.PMI:Performance_monitoring_interrupts
7352 ± 6% -33.5% 4891 ± 35% interrupts.CPU74.NMI:Non-maskable_interrupts
7352 ± 6% -33.5% 4891 ± 35% interrupts.CPU74.PMI:Performance_monitoring_interrupts
1689 ± 3% +21.3% 2050 ± 11% interrupts.CPU78.RES:Rescheduling_interrupts
10131 ± 21% -43.5% 5724 ± 26% interrupts.CPU79.RES:Rescheduling_interrupts
8613 ± 18% -25.7% 6401 ± 8% interrupts.CPU80.RES:Rescheduling_interrupts
396.50 ± 2% +24.0% 491.67 ± 9% interrupts.CPU83.RES:Rescheduling_interrupts
349.50 ± 6% +24.9% 436.67 ± 17% interrupts.CPU88.RES:Rescheduling_interrupts
342.75 ± 8% -9.2% 311.33 ± 3% interrupts.CPU94.RES:Rescheduling_interrupts
358.00 ± 19% -16.1% 300.33 ± 3% interrupts.CPU98.RES:Rescheduling_interrupts
323.00 ± 2% -11.2% 286.67 ± 2% interrupts.CPU99.RES:Rescheduling_interrupts
101929 ± 3% -13.7% 88014 interrupts.RES:Rescheduling_interrupts
0.07 ± 8% -48.2% 0.04 ± 5% perf-stat.i.MPKI
2.346e+10 +104.6% 4.801e+10 perf-stat.i.branch-instructions
1.04 -0.8 0.19 perf-stat.i.branch-miss-rate%
2.41e+08 -63.6% 87669840 perf-stat.i.branch-misses
26724 ± 14% -62.9% 9902 ± 4% perf-stat.i.context-switches
2.58 -54.9% 1.17 perf-stat.i.cpi
2.875e+11 -3.7% 2.769e+11 perf-stat.i.cpu-cycles
169.51 -7.7% 156.48 perf-stat.i.cpu-migrations
784875 ± 2% -16.3% 656903 ± 5% perf-stat.i.cycles-between-cache-misses
0.34 -0.3 0.05 perf-stat.i.dTLB-load-miss-rate%
1.179e+08 -66.7% 39207481 perf-stat.i.dTLB-load-misses
3.493e+10 +109.7% 7.325e+10 perf-stat.i.dTLB-loads
0.00 ± 40% -0.0 0.00 ± 3% perf-stat.i.dTLB-store-miss-rate%
2.073e+10 +102.5% 4.197e+10 perf-stat.i.dTLB-stores
1.305e+08 -69.9% 39224524 perf-stat.i.iTLB-load-misses
3119043 ± 2% -59.1% 1276303 ± 9% perf-stat.i.iTLB-loads
1.112e+11 +113.7% 2.376e+11 perf-stat.i.instructions
880.78 +619.0% 6332 perf-stat.i.instructions-per-iTLB-miss
0.39 +121.2% 0.86 perf-stat.i.ipc
2.76 -3.7% 2.66 perf-stat.i.metric.GHz
0.91 ± 2% -32.8% 0.61 ± 3% perf-stat.i.metric.K/sec
760.63 +106.3% 1568 perf-stat.i.metric.M/sec
93.00 -1.5 91.46 perf-stat.i.node-load-miss-rate%
15369 ± 6% +22.6% 18835 ± 7% perf-stat.i.node-loads
89.54 -8.1 81.48 perf-stat.i.node-store-miss-rate%
26669 ± 2% -55.1% 11971 ± 2% perf-stat.i.node-store-misses
0.06 ± 17% -46.0% 0.03 ± 6% perf-stat.overall.MPKI
1.03 -0.8 0.18 perf-stat.overall.branch-miss-rate%
2.59 -54.9% 1.17 perf-stat.overall.cpi
629815 ± 3% -11.3% 558913 ± 5% perf-stat.overall.cycles-between-cache-misses
0.34 -0.3 0.05 perf-stat.overall.dTLB-load-miss-rate%
0.00 ± 3% -0.0 0.00 ± 2% perf-stat.overall.dTLB-store-miss-rate%
851.98 +610.9% 6056 perf-stat.overall.instructions-per-iTLB-miss
0.39 +121.8% 0.86 perf-stat.overall.ipc
89.91 -2.3 87.60 perf-stat.overall.node-load-miss-rate%
86.68 -12.9 73.83 perf-stat.overall.node-store-miss-rate%
283543 +542.2% 1821041 perf-stat.overall.path-length
2.338e+10 +104.6% 4.784e+10 perf-stat.ps.branch-instructions
2.402e+08 -63.6% 87362417 perf-stat.ps.branch-misses
455461 ± 3% +8.8% 495461 ± 5% perf-stat.ps.cache-misses
28325 ± 14% -62.7% 10575 ± 4% perf-stat.ps.context-switches
2.865e+11 -3.7% 2.76e+11 perf-stat.ps.cpu-cycles
168.96 -7.6% 156.04 perf-stat.ps.cpu-migrations
1.175e+08 -66.7% 39078502 perf-stat.ps.dTLB-load-misses
3.481e+10 +109.7% 7.3e+10 perf-stat.ps.dTLB-loads
9316 ± 3% +6.0% 9873 ± 2% perf-stat.ps.dTLB-store-misses
2.066e+10 +102.5% 4.183e+10 perf-stat.ps.dTLB-stores
1.301e+08 -69.9% 39090616 perf-stat.ps.iTLB-load-misses
3110017 ± 2% -59.1% 1272313 ± 10% perf-stat.ps.iTLB-loads
1.108e+11 +113.7% 2.368e+11 perf-stat.ps.instructions
15708 ± 6% +25.6% 19734 ± 5% perf-stat.ps.node-loads
26557 ± 2% -55.1% 11924 ± 2% perf-stat.ps.node-store-misses
3.353e+13 +113.5% 7.16e+13 perf-stat.total.instructions
1314 ± 5% -55.6% 584.00 ± 17% slabinfo.Acpi-Parse.active_objs
1314 ± 5% -55.6% 584.00 ± 17% slabinfo.Acpi-Parse.num_objs
5111 +62.5% 8306 ± 3% slabinfo.Acpi-State.active_objs
5111 +62.5% 8306 ± 3% slabinfo.Acpi-State.num_objs
1738 ± 11% -41.8% 1012 slabinfo.UNIX.active_objs
1738 ± 11% -41.8% 1012 slabinfo.UNIX.num_objs
380.25 ± 4% -46.0% 205.33 ± 5% slabinfo.bdev_cache.active_objs
380.25 ± 4% -46.0% 205.33 ± 5% slabinfo.bdev_cache.num_objs
380.25 ± 4% -100.0% 0.00 slabinfo.blkdev_ioc.active_objs
380.25 ± 4% -100.0% 0.00 slabinfo.blkdev_ioc.num_objs
2282 ± 2% -24.2% 1729 slabinfo.dentry.active_slabs
2282 ± 2% -24.2% 1729 slabinfo.dentry.num_slabs
4754 ± 4% -7.9% 4380 ± 3% slabinfo.eventpoll_pwq.active_objs
4754 ± 4% -7.9% 4380 ± 3% slabinfo.eventpoll_pwq.num_objs
726.50 ± 17% +90.0% 1380 ± 9% slabinfo.file_lock_cache.active_objs
726.50 ± 17% +90.0% 1380 ± 9% slabinfo.file_lock_cache.num_objs
1920 ± 4% -70.1% 574.00 ± 15% slabinfo.fsnotify_mark_connector.active_objs
1920 ± 4% -70.1% 574.00 ± 15% slabinfo.fsnotify_mark_connector.num_objs
1200 +63.2% 1959 slabinfo.inode_cache.active_slabs
1200 +63.2% 1959 slabinfo.inode_cache.num_slabs
3013 -32.5% 2033 slabinfo.kernfs_node_cache.active_slabs
3013 -32.5% 2033 slabinfo.kernfs_node_cache.num_slabs
701.00 ± 8% -100.0% 0.00 slabinfo.khugepaged_mm_slot.active_objs
701.00 ± 8% -100.0% 0.00 slabinfo.khugepaged_mm_slot.num_objs
6080 -18.1% 4977 slabinfo.kmalloc-256.active_objs
6080 -18.1% 4981 slabinfo.kmalloc-256.num_objs
1801 ± 5% +44.2% 2598 slabinfo.kmalloc-4k.active_objs
229.00 ± 5% +43.1% 327.67 slabinfo.kmalloc-4k.active_slabs
1833 ± 5% +43.0% 2621 slabinfo.kmalloc-4k.num_objs
229.00 ± 5% +43.1% 327.67 slabinfo.kmalloc-4k.num_slabs
10915 ± 9% +15.2% 12571 slabinfo.kmalloc-512.num_objs
4621 ± 5% +23.4% 5700 ± 12% slabinfo.kmalloc-rcl-64.active_objs
4621 ± 5% +23.4% 5700 ± 12% slabinfo.kmalloc-rcl-64.num_objs
812.00 ± 8% -39.1% 494.33 ± 8% slabinfo.kmem_cache_node.active_objs
864.00 ± 8% -42.0% 501.33 ± 7% slabinfo.kmem_cache_node.num_objs
18442 -100.0% 0.00 slabinfo.lsm_file_cache.active_objs
107.50 -100.0% 0.00 slabinfo.lsm_file_cache.active_slabs
18442 -100.0% 0.00 slabinfo.lsm_file_cache.num_objs
107.50 -100.0% 0.00 slabinfo.lsm_file_cache.num_slabs
3125 -28.7% 2226 slabinfo.mm_struct.active_objs
3125 -28.7% 2226 slabinfo.mm_struct.num_objs
279.00 ± 11% +137.0% 661.33 ± 8% slabinfo.numa_policy.active_objs
279.00 ± 11% +137.0% 661.33 ± 8% slabinfo.numa_policy.num_objs
13149 ± 5% -19.6% 10574 slabinfo.proc_inode_cache.active_objs
13151 ± 5% -19.6% 10578 slabinfo.proc_inode_cache.num_objs
7198 ± 3% -22.0% 5616 slabinfo.shmem_inode_cache.active_objs
7198 ± 3% -22.0% 5616 slabinfo.shmem_inode_cache.num_objs
3689 -16.5% 3079 ± 2% slabinfo.signal_cache.active_objs
3689 -16.5% 3079 ± 2% slabinfo.signal_cache.num_objs
2944 ± 6% -23.7% 2247 ± 2% slabinfo.sock_inode_cache.active_objs
2944 ± 6% -23.7% 2247 ± 2% slabinfo.sock_inode_cache.num_objs
1003 ± 8% -22.1% 781.67 ± 12% slabinfo.task_group.active_objs
1003 ± 8% -22.1% 781.67 ± 12% slabinfo.task_group.num_objs
1335 -45.2% 731.33 slabinfo.task_struct.active_slabs
1335 -45.2% 731.33 slabinfo.task_struct.num_slabs
2527 ± 2% -23.5% 1932 slabinfo.trace_event_file.active_objs
2527 ± 2% -23.5% 1932 slabinfo.trace_event_file.num_objs
12089 -100.0% 0.00 slabinfo.vmap_area.active_objs
188.50 -100.0% 0.00 slabinfo.vmap_area.active_slabs
12090 -100.0% 0.00 slabinfo.vmap_area.num_objs
188.50 -100.0% 0.00 slabinfo.vmap_area.num_slabs
25577 ± 5% +24.4% 31809 ± 3% softirqs.CPU0.RCU
9552 ± 6% +35.4% 12933 ± 14% softirqs.CPU0.SCHED
113274 ± 2% +12.9% 127889 ± 3% softirqs.CPU0.TIMER
111542 +13.5% 126558 ± 2% softirqs.CPU1.TIMER
111395 ± 2% +11.9% 124617 softirqs.CPU10.TIMER
106110 ± 3% +11.8% 118654 ± 3% softirqs.CPU100.TIMER
105132 ± 3% +13.0% 118833 ± 3% softirqs.CPU101.TIMER
105250 ± 3% +12.5% 118435 ± 3% softirqs.CPU102.TIMER
104685 ± 3% +12.4% 117702 ± 3% softirqs.CPU103.TIMER
110664 +12.6% 124625 softirqs.CPU11.TIMER
112126 ± 2% +11.5% 125033 softirqs.CPU12.TIMER
25511 ± 10% +22.6% 31282 ± 3% softirqs.CPU13.RCU
113149 ± 3% +10.3% 124843 softirqs.CPU13.TIMER
27353 ± 3% +13.3% 30995 ± 4% softirqs.CPU14.RCU
111181 +12.6% 125176 softirqs.CPU14.TIMER
111832 +11.7% 124866 softirqs.CPU15.TIMER
110827 +13.2% 125419 softirqs.CPU16.TIMER
110657 +13.3% 125380 softirqs.CPU17.TIMER
111110 +12.7% 125209 softirqs.CPU18.TIMER
110876 +13.0% 125237 softirqs.CPU19.TIMER
112677 ± 3% +11.8% 125976 softirqs.CPU2.TIMER
111496 +13.0% 126043 ± 2% softirqs.CPU20.TIMER
111051 +12.6% 125095 softirqs.CPU21.TIMER
111011 +12.4% 124766 softirqs.CPU22.TIMER
28352 ± 13% +20.1% 34054 ± 3% softirqs.CPU23.RCU
112266 ± 2% +11.0% 124579 softirqs.CPU23.TIMER
110454 +25.7% 138890 ± 13% softirqs.CPU24.TIMER
110719 +13.7% 125896 softirqs.CPU25.TIMER
107538 ± 3% +10.9% 119288 ± 3% softirqs.CPU26.TIMER
105220 ± 3% +15.4% 121464 ± 2% softirqs.CPU28.TIMER
109117 ± 4% +13.2% 123514 ± 6% softirqs.CPU29.TIMER
111133 +12.8% 125355 softirqs.CPU3.TIMER
105844 ± 3% +12.5% 119041 ± 3% softirqs.CPU30.TIMER
105774 ± 3% +12.4% 118887 ± 3% softirqs.CPU31.TIMER
105721 ± 3% +13.2% 119728 ± 2% softirqs.CPU32.TIMER
107012 ± 2% +10.8% 118522 ± 3% softirqs.CPU33.TIMER
106671 ± 4% +11.6% 119031 ± 3% softirqs.CPU34.TIMER
106618 ± 4% +11.6% 119028 ± 3% softirqs.CPU35.TIMER
106613 ± 3% +11.8% 119183 ± 3% softirqs.CPU36.TIMER
106085 ± 3% +12.8% 119648 ± 3% softirqs.CPU37.TIMER
106108 ± 3% +12.4% 119307 ± 3% softirqs.CPU38.TIMER
105788 ± 3% +12.5% 118977 ± 3% softirqs.CPU39.TIMER
110955 +11.5% 123714 ± 2% softirqs.CPU4.TIMER
106046 ± 3% +14.4% 121337 ± 5% softirqs.CPU40.TIMER
105547 ± 4% +17.4% 123888 ± 7% softirqs.CPU41.TIMER
105073 ± 4% +14.2% 119991 ± 4% softirqs.CPU42.TIMER
104783 ± 3% +13.5% 118905 ± 3% softirqs.CPU43.TIMER
105176 ± 3% +13.5% 119338 ± 3% softirqs.CPU44.TIMER
105559 ± 3% +12.7% 118965 ± 3% softirqs.CPU45.TIMER
105576 ± 3% +12.4% 118691 ± 3% softirqs.CPU46.TIMER
106065 ± 3% +11.9% 118715 ± 3% softirqs.CPU47.TIMER
106823 ± 4% +10.9% 118518 ± 3% softirqs.CPU48.TIMER
105189 ± 3% +12.8% 118628 ± 3% softirqs.CPU49.TIMER
111167 +12.6% 125191 softirqs.CPU5.TIMER
28995 +10.9% 32168 ± 3% softirqs.CPU50.RCU
106355 ± 3% +11.5% 118547 ± 3% softirqs.CPU50.TIMER
106315 ± 3% +11.9% 119013 ± 3% softirqs.CPU51.TIMER
111270 +12.3% 124907 softirqs.CPU52.TIMER
110848 +13.0% 125266 softirqs.CPU53.TIMER
110544 +13.7% 125711 softirqs.CPU55.TIMER
110306 +14.2% 125992 softirqs.CPU56.TIMER
110421 +13.4% 125255 softirqs.CPU57.TIMER
110446 +12.7% 124425 softirqs.CPU58.TIMER
110651 +13.0% 125074 ± 2% softirqs.CPU59.TIMER
111882 ± 2% +11.9% 125214 softirqs.CPU6.TIMER
27503 ± 3% +13.4% 31190 ± 4% softirqs.CPU60.RCU
110507 +13.2% 125131 ± 2% softirqs.CPU60.TIMER
109902 +13.1% 124263 softirqs.CPU61.TIMER
111798 +10.9% 124003 softirqs.CPU62.TIMER
110192 +13.4% 124940 softirqs.CPU63.TIMER
110850 +12.3% 124485 softirqs.CPU64.TIMER
26747 ± 3% +15.6% 30907 ± 3% softirqs.CPU65.RCU
110956 +12.5% 124786 softirqs.CPU66.TIMER
110559 +12.3% 124114 softirqs.CPU67.TIMER
110382 +13.0% 124781 softirqs.CPU68.TIMER
110162 +13.5% 125057 softirqs.CPU69.TIMER
111219 +12.7% 125306 softirqs.CPU7.TIMER
109897 +15.1% 126512 softirqs.CPU70.TIMER
110228 +13.5% 125162 softirqs.CPU71.TIMER
111251 +11.7% 124230 softirqs.CPU72.TIMER
110499 +12.6% 124441 softirqs.CPU73.TIMER
110718 +12.4% 124432 softirqs.CPU74.TIMER
27214 ± 7% +18.0% 32103 ± 7% softirqs.CPU75.RCU
110636 +16.1% 128474 ± 3% softirqs.CPU76.TIMER
109979 +13.2% 124463 softirqs.CPU77.TIMER
105992 ± 3% +12.0% 118744 ± 3% softirqs.CPU78.TIMER
110791 +12.6% 124799 softirqs.CPU8.TIMER
107284 ± 3% +12.2% 120373 ± 4% softirqs.CPU80.TIMER
106572 ± 3% +23.6% 131773 ± 15% softirqs.CPU81.TIMER
105665 ± 3% +12.2% 118575 ± 3% softirqs.CPU82.TIMER
105879 ± 3% +12.6% 119170 ± 3% softirqs.CPU83.TIMER
105143 ± 3% +13.1% 118966 ± 3% softirqs.CPU84.TIMER
105474 ± 3% +12.3% 118442 ± 3% softirqs.CPU85.TIMER
105151 ± 3% +12.9% 118766 ± 3% softirqs.CPU86.TIMER
106321 ± 3% +11.4% 118480 ± 3% softirqs.CPU87.TIMER
105631 ± 3% +12.4% 118758 ± 3% softirqs.CPU88.TIMER
105196 ± 3% +13.1% 118992 ± 3% softirqs.CPU89.TIMER
110739 +12.8% 124917 softirqs.CPU9.TIMER
105698 ± 3% +12.9% 119329 ± 3% softirqs.CPU90.TIMER
105525 ± 3% +12.2% 118421 ± 3% softirqs.CPU91.TIMER
105484 ± 3% +11.9% 117993 ± 3% softirqs.CPU92.TIMER
105410 ± 3% +24.9% 131658 ± 15% softirqs.CPU93.TIMER
104847 ± 3% +13.0% 118518 ± 3% softirqs.CPU94.TIMER
104478 ± 3% +15.0% 120175 ± 4% softirqs.CPU95.TIMER
104997 ± 3% +13.2% 118839 ± 3% softirqs.CPU96.TIMER
104780 ± 3% +13.3% 118666 ± 3% softirqs.CPU97.TIMER
106171 ± 3% +11.7% 118626 ± 3% softirqs.CPU98.TIMER
105765 ± 3% +11.7% 118159 ± 3% softirqs.CPU99.TIMER
11325259 +12.5% 12744787 ± 2% softirqs.TIMER
25.14 -18.5 6.68 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.__sched_yield
17.14 -12.5 4.67 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__sched_yield
3.25 -2.5 0.79 perf-profile.calltrace.cycles-pp.__calc_delta.update_curr.pick_next_task_fair.__sched_text_start.schedule
2.69 ± 2% -2.2 0.51 ± 2% perf-profile.calltrace.cycles-pp.pick_next_entity.pick_next_task_fair.__sched_text_start.schedule.__x64_sys_sched_yield
2.10 -1.4 0.65 perf-profile.calltrace.cycles-pp.sched_clock_cpu.update_rq_clock.__sched_text_start.schedule.__x64_sys_sched_yield
1.88 -1.3 0.58 perf-profile.calltrace.cycles-pp.native_sched_clock.sched_clock.sched_clock_cpu.update_rq_clock.__sched_text_start
1.86 -1.3 0.57 perf-profile.calltrace.cycles-pp.sched_clock.sched_clock_cpu.update_rq_clock.__sched_text_start.schedule
2.84 -1.0 1.85 perf-profile.calltrace.cycles-pp.update_rq_clock.__sched_text_start.schedule.__x64_sys_sched_yield.do_syscall_64
1.46 -0.8 0.64 perf-profile.calltrace.cycles-pp.yield_task_fair.do_sched_yield.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
99.60 +0.1 99.72 perf-profile.calltrace.cycles-pp.__sched_yield
0.00 +0.5 0.52 perf-profile.calltrace.cycles-pp.trace_hardirqs_off.do_sched_yield.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.5 0.52 perf-profile.calltrace.cycles-pp.lockdep_hardirqs_on._raw_spin_unlock_irq.__sched_text_start.schedule.__x64_sys_sched_yield
0.00 +0.5 0.53 ± 2% perf-profile.calltrace.cycles-pp.lockdep_hardirqs_on.trace_hardirqs_on_thunk.entry_SYSCALL_64_after_hwframe.__sched_yield
0.00 +0.5 0.53 perf-profile.calltrace.cycles-pp.trace_hardirqs_off_caller.trace_hardirqs_off_thunk.entry_SYSCALL_64_after_hwframe.__sched_yield
0.00 +0.5 0.53 perf-profile.calltrace.cycles-pp.mark_lock.__lock_acquire.lock_acquire.__lock_text_start.__sched_text_start
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.mark_lock.__lock_acquire.lock_acquire.cpuacct_charge.update_curr
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.find_held_lock.lock_release.update_curr.pick_next_task_fair.__sched_text_start
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.rcu_read_lock_held_common.rcu_read_lock_held.update_curr.pick_next_task_fair.__sched_text_start
0.00 +0.6 0.58 perf-profile.calltrace.cycles-pp.rcu_read_lock_held_common.rcu_read_lock_held.cpuacct_charge.update_curr.pick_next_task_fair
0.00 +0.6 0.61 perf-profile.calltrace.cycles-pp.rcu_lockdep_current_cpu_online.rcu_read_lock_held_common.rcu_read_lock_sched_held.rcu_note_context_switch.__sched_text_start
0.00 +0.6 0.64 perf-profile.calltrace.cycles-pp.validate_chain.__lock_acquire.lock_acquire.__lock_text_start.__sched_text_start
0.00 +0.7 0.66 ± 2% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irq.__sched_text_start
0.00 +0.8 0.76 perf-profile.calltrace.cycles-pp.lock_is_held_type.update_rq_clock.__sched_text_start.schedule.__x64_sys_sched_yield
0.00 +0.8 0.78 perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irq.__sched_text_start.schedule
0.00 +0.8 0.79 perf-profile.calltrace.cycles-pp.rcu_read_lock_held_common.rcu_read_lock_sched_held.update_curr.pick_next_task_fair.__sched_text_start
0.00 +0.8 0.79 perf-profile.calltrace.cycles-pp.validate_chain.__lock_acquire.lock_acquire.__lock_text_start.do_sched_yield
0.00 +0.8 0.80 perf-profile.calltrace.cycles-pp.lockdep_hardirqs_on.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
0.00 +0.8 0.82 perf-profile.calltrace.cycles-pp.apic_timer_interrupt._raw_spin_unlock_irq.__sched_text_start.schedule.__x64_sys_sched_yield
0.00 +0.8 0.85 perf-profile.calltrace.cycles-pp.mark_lock.__lock_acquire.lock_acquire.update_curr.pick_next_task_fair
0.00 +0.9 0.94 perf-profile.calltrace.cycles-pp.lock_is_held_type.rcu_read_lock_held.update_curr.pick_next_task_fair.__sched_text_start
0.00 +1.0 0.96 perf-profile.calltrace.cycles-pp.lock_unpin_lock.do_sched_yield.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.0 0.96 perf-profile.calltrace.cycles-pp.lock_unpin_lock.__sched_text_start.schedule.__x64_sys_sched_yield.do_syscall_64
0.00 +1.0 0.97 perf-profile.calltrace.cycles-pp.lock_is_held_type.rcu_read_lock_sched_held.update_curr.pick_next_task_fair.__sched_text_start
0.00 +1.0 1.01 perf-profile.calltrace.cycles-pp.trace_hardirqs_off_thunk.entry_SYSCALL_64_after_hwframe.__sched_yield
0.00 +1.0 1.03 perf-profile.calltrace.cycles-pp.lock_is_held_type.rcu_read_lock_held.cpuacct_charge.update_curr.pick_next_task_fair
0.00 +1.1 1.07 ± 2% perf-profile.calltrace.cycles-pp.trace_hardirqs_on_thunk.entry_SYSCALL_64_after_hwframe.__sched_yield
0.00 +1.2 1.25 perf-profile.calltrace.cycles-pp.lock_pin_lock.__sched_text_start.schedule.__x64_sys_sched_yield.do_syscall_64
0.00 +1.3 1.28 perf-profile.calltrace.cycles-pp.lock_is_held_type.__sched_yield
0.00 +1.3 1.32 perf-profile.calltrace.cycles-pp.lock_pin_lock.do_sched_yield.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.4 1.38 perf-profile.calltrace.cycles-pp.rcu_read_lock_held_common.rcu_read_lock_sched_held.rcu_note_context_switch.__sched_text_start.schedule
0.00 +1.5 1.46 perf-profile.calltrace.cycles-pp.lock_release._raw_spin_unlock.do_sched_yield.__x64_sys_sched_yield.do_syscall_64
0.00 +1.5 1.46 perf-profile.calltrace.cycles-pp.lock_release._raw_spin_unlock_irq.__sched_text_start.schedule.__x64_sys_sched_yield
0.00 +1.5 1.54 perf-profile.calltrace.cycles-pp.lock_is_held_type.rcu_read_lock_sched_held.rcu_note_context_switch.__sched_text_start.schedule
0.00 +1.7 1.71 perf-profile.calltrace.cycles-pp._raw_spin_unlock.do_sched_yield.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.8 1.75 perf-profile.calltrace.cycles-pp.rcu_read_lock_held.update_curr.pick_next_task_fair.__sched_text_start.schedule
0.00 +1.8 1.84 perf-profile.calltrace.cycles-pp.rcu_read_lock_held.cpuacct_charge.update_curr.pick_next_task_fair.__sched_text_start
0.00 +2.1 2.05 perf-profile.calltrace.cycles-pp.rcu_read_lock_sched_held.update_curr.pick_next_task_fair.__sched_text_start.schedule
0.00 +2.2 2.19 perf-profile.calltrace.cycles-pp.lock_is_held_type.__sched_text_start.schedule.__x64_sys_sched_yield.do_syscall_64
0.00 +2.2 2.23 perf-profile.calltrace.cycles-pp.lock_is_held_type.cpuacct_charge.update_curr.pick_next_task_fair.__sched_text_start
0.00 +3.3 3.32 perf-profile.calltrace.cycles-pp.lock_release.update_curr.pick_next_task_fair.__sched_text_start.schedule
0.00 +3.4 3.39 perf-profile.calltrace.cycles-pp.rcu_read_lock_sched_held.rcu_note_context_switch.__sched_text_start.schedule.__x64_sys_sched_yield
0.00 +3.4 3.41 perf-profile.calltrace.cycles-pp._raw_spin_unlock_irq.__sched_text_start.schedule.__x64_sys_sched_yield.do_syscall_64
0.00 +4.0 4.02 perf-profile.calltrace.cycles-pp.__lock_acquire.lock_acquire.__lock_text_start.do_sched_yield.__x64_sys_sched_yield
0.00 +4.2 4.19 perf-profile.calltrace.cycles-pp.__lock_acquire.lock_acquire.__lock_text_start.__sched_text_start.schedule
0.00 +4.4 4.35 perf-profile.calltrace.cycles-pp.rcu_note_context_switch.__sched_text_start.schedule.__x64_sys_sched_yield.do_syscall_64
0.00 +4.6 4.60 perf-profile.calltrace.cycles-pp.__lock_acquire.lock_acquire.cpuacct_charge.update_curr.pick_next_task_fair
0.00 +5.3 5.28 perf-profile.calltrace.cycles-pp.lock_is_held_type.update_curr.pick_next_task_fair.__sched_text_start.schedule
0.00 +5.4 5.42 perf-profile.calltrace.cycles-pp.lock_acquire.__lock_text_start.__sched_text_start.schedule.__x64_sys_sched_yield
0.00 +5.5 5.50 perf-profile.calltrace.cycles-pp.lock_acquire.__lock_text_start.do_sched_yield.__x64_sys_sched_yield.do_syscall_64
0.00 +5.6 5.56 perf-profile.calltrace.cycles-pp.__lock_text_start.__sched_text_start.schedule.__x64_sys_sched_yield.do_syscall_64
0.00 +5.6 5.61 perf-profile.calltrace.cycles-pp.__lock_acquire.lock_acquire.update_curr.pick_next_task_fair.__sched_text_start
0.00 +5.7 5.69 perf-profile.calltrace.cycles-pp.__lock_text_start.do_sched_yield.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +5.9 5.93 perf-profile.calltrace.cycles-pp.lock_acquire.cpuacct_charge.update_curr.pick_next_task_fair.__sched_text_start
0.00 +7.0 6.96 perf-profile.calltrace.cycles-pp.lock_acquire.update_curr.pick_next_task_fair.__sched_text_start.schedule
4.04 +8.2 12.19 perf-profile.calltrace.cycles-pp.do_sched_yield.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
1.86 ± 4% +9.3 11.19 perf-profile.calltrace.cycles-pp.cpuacct_charge.update_curr.pick_next_task_fair.__sched_text_start.schedule
17.76 +18.9 36.62 perf-profile.calltrace.cycles-pp.pick_next_task_fair.__sched_text_start.schedule.__x64_sys_sched_yield.do_syscall_64
11.01 +23.7 34.70 perf-profile.calltrace.cycles-pp.update_curr.pick_next_task_fair.__sched_text_start.schedule.__x64_sys_sched_yield
54.91 +27.5 82.45 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
56.32 +28.9 85.23 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__sched_yield
26.02 +33.2 59.21 perf-profile.calltrace.cycles-pp.schedule.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
25.38 +33.5 58.83 perf-profile.calltrace.cycles-pp.__sched_text_start.schedule.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
31.12 +40.9 72.04 perf-profile.calltrace.cycles-pp.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
23.38 -17.3 6.04 perf-profile.children.cycles-pp.entry_SYSCALL_64
18.99 -13.7 5.30 perf-profile.children.cycles-pp.syscall_return_via_sysret
3.43 -2.6 0.88 perf-profile.children.cycles-pp.__calc_delta
3.05 ± 2% -2.4 0.61 ± 2% perf-profile.children.cycles-pp.pick_next_entity
2.07 -1.8 0.29 perf-profile.children.cycles-pp.update_min_vruntime
2.12 -1.5 0.67 perf-profile.children.cycles-pp.sched_clock_cpu
1.96 -1.3 0.62 perf-profile.children.cycles-pp.sched_clock
1.90 -1.3 0.59 perf-profile.children.cycles-pp.native_sched_clock
3.11 -1.2 1.89 perf-profile.children.cycles-pp.update_rq_clock
1.03 ± 3% -0.8 0.20 ± 2% perf-profile.children.cycles-pp.clear_buddies
1.47 -0.8 0.65 perf-profile.children.cycles-pp.yield_task_fair
0.43 ± 3% -0.2 0.18 ± 6% perf-profile.children.cycles-pp.__list_del_entry_valid
0.41 -0.2 0.16 ± 2% perf-profile.children.cycles-pp.__list_add_valid
0.36 -0.2 0.12 ± 4% perf-profile.children.cycles-pp.check_cfs_rq_runtime
0.41 ± 2% -0.2 0.22 perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.33 -0.2 0.18 ± 2% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
99.94 -0.1 99.88 perf-profile.children.cycles-pp.__sched_yield
0.09 ± 4% -0.0 0.06 perf-profile.children.cycles-pp.sched_yield@plt
0.10 ± 4% -0.0 0.09 ± 5% perf-profile.children.cycles-pp.clockevents_program_event
0.09 ± 4% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.ktime_get
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.run_timer_softirq
0.00 +0.1 0.06 perf-profile.children.cycles-pp.update_load_avg
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.13 ± 5% +0.1 0.20 ± 2% perf-profile.children.cycles-pp.rcu_qs
0.10 ± 24% +0.1 0.19 ± 5% perf-profile.children.cycles-pp.task_tick_fair
0.00 +0.1 0.09 perf-profile.children.cycles-pp.lockdep_sys_exit
0.00 +0.1 0.09 ± 5% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.1 0.12 ± 8% perf-profile.children.cycles-pp.__softirqentry_text_start
0.12 ± 20% +0.1 0.24 ± 3% perf-profile.children.cycles-pp.scheduler_tick
0.00 +0.1 0.13 ± 7% perf-profile.children.cycles-pp.irq_exit
0.00 +0.1 0.14 perf-profile.children.cycles-pp.lockdep_sys_exit_thunk
0.21 ± 13% +0.2 0.39 ± 4% perf-profile.children.cycles-pp.update_process_times
0.22 ± 12% +0.2 0.40 ± 5% perf-profile.children.cycles-pp.tick_sched_handle
0.24 ± 10% +0.2 0.46 ± 5% perf-profile.children.cycles-pp.tick_sched_timer
0.00 +0.3 0.32 perf-profile.children.cycles-pp.trace_hardirqs_on_caller
0.00 +0.3 0.32 perf-profile.children.cycles-pp.do_raw_spin_unlock
0.30 ± 8% +0.4 0.68 ± 3% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.4 0.38 perf-profile.children.cycles-pp.mark_held_locks
0.00 +0.4 0.42 perf-profile.children.cycles-pp.prandom_u32_state
0.46 ± 7% +0.4 0.89 ± 2% perf-profile.children.cycles-pp.hrtimer_interrupt
0.00 +0.5 0.49 perf-profile.children.cycles-pp.tracer_hardirqs_on
0.57 ± 6% +0.5 1.11 perf-profile.children.cycles-pp.apic_timer_interrupt
0.51 ± 6% +0.6 1.06 perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.00 +0.6 0.56 perf-profile.children.cycles-pp.trace_hardirqs_off_caller
0.00 +0.6 0.63 perf-profile.children.cycles-pp.trace_hardirqs_on
0.00 +0.7 0.73 perf-profile.children.cycles-pp.prandom_u32
0.00 +0.8 0.77 perf-profile.children.cycles-pp.tracer_hardirqs_off
0.00 +0.8 0.79 perf-profile.children.cycles-pp.do_raw_spin_lock
0.00 +1.0 0.95 perf-profile.children.cycles-pp.trace_hardirqs_off
0.00 +1.1 1.05 perf-profile.children.cycles-pp.trace_hardirqs_off_thunk
0.00 +1.1 1.10 perf-profile.children.cycles-pp.trace_hardirqs_on_thunk
0.00 +1.2 1.18 perf-profile.children.cycles-pp.lockdep_hardirqs_off
0.00 +1.2 1.20 perf-profile.children.cycles-pp.rcu_is_watching
0.00 +1.2 1.22 perf-profile.children.cycles-pp.find_held_lock
0.00 +1.6 1.56 perf-profile.children.cycles-pp.rcu_lockdep_current_cpu_online
0.00 +1.8 1.76 perf-profile.children.cycles-pp._raw_spin_unlock
0.00 +1.8 1.78 perf-profile.children.cycles-pp.match_held_lock
0.00 +2.1 2.08 perf-profile.children.cycles-pp.lockdep_hardirqs_on
0.00 +2.2 2.18 perf-profile.children.cycles-pp.lock_unpin_lock
0.00 +2.2 2.21 perf-profile.children.cycles-pp.validate_chain
0.00 +2.6 2.61 perf-profile.children.cycles-pp.mark_lock
0.00 +2.9 2.88 perf-profile.children.cycles-pp.debug_lockdep_rcu_enabled
0.00 +3.0 2.98 perf-profile.children.cycles-pp.lock_pin_lock
0.00 +3.5 3.51 perf-profile.children.cycles-pp._raw_spin_unlock_irq
0.00 +3.7 3.67 perf-profile.children.cycles-pp.rcu_read_lock_held
0.00 +3.7 3.74 perf-profile.children.cycles-pp.rcu_read_lock_held_common
0.33 ± 2% +4.1 4.42 perf-profile.children.cycles-pp.rcu_note_context_switch
0.00 +5.7 5.68 perf-profile.children.cycles-pp.rcu_read_lock_sched_held
0.00 +6.7 6.69 perf-profile.children.cycles-pp.lock_release
4.06 +8.2 12.28 perf-profile.children.cycles-pp.do_sched_yield
1.86 ± 4% +9.5 11.36 perf-profile.children.cycles-pp.cpuacct_charge
0.00 +11.3 11.34 perf-profile.children.cycles-pp.__lock_text_start
0.00 +17.1 17.06 perf-profile.children.cycles-pp.lock_is_held_type
18.16 +18.6 36.77 perf-profile.children.cycles-pp.pick_next_task_fair
0.00 +19.1 19.10 perf-profile.children.cycles-pp.__lock_acquire
11.50 +23.6 35.10 perf-profile.children.cycles-pp.update_curr
0.00 +24.6 24.57 perf-profile.children.cycles-pp.lock_acquire
55.01 +27.7 82.69 perf-profile.children.cycles-pp.do_syscall_64
56.40 +29.0 85.38 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
26.03 +33.2 59.22 perf-profile.children.cycles-pp.schedule
25.56 +33.5 59.03 perf-profile.children.cycles-pp.__sched_text_start
31.31 +40.8 72.09 perf-profile.children.cycles-pp.__x64_sys_sched_yield
21.56 -16.2 5.38 perf-profile.self.cycles-pp.entry_SYSCALL_64
23.18 -14.8 8.39 perf-profile.self.cycles-pp.do_syscall_64
18.97 -13.7 5.28 perf-profile.self.cycles-pp.syscall_return_via_sysret
3.20 -2.3 0.86 perf-profile.self.cycles-pp.__calc_delta
4.45 -2.2 2.25 ± 2% perf-profile.self.cycles-pp.update_curr
2.76 -1.9 0.85 perf-profile.self.cycles-pp.pick_next_task_fair
1.98 -1.7 0.26 perf-profile.self.cycles-pp.update_min_vruntime
2.58 -1.7 0.86 perf-profile.self.cycles-pp.__sched_yield
2.08 ± 2% -1.7 0.43 ± 3% perf-profile.self.cycles-pp.pick_next_entity
1.85 -1.3 0.56 perf-profile.self.cycles-pp.native_sched_clock
2.60 -1.1 1.55 perf-profile.self.cycles-pp.__sched_text_start
1.79 ± 4% -1.0 0.77 perf-profile.self.cycles-pp.cpuacct_charge
1.37 ± 2% -0.9 0.51 ± 3% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
1.35 -0.8 0.56 perf-profile.self.cycles-pp.yield_task_fair
1.38 -0.7 0.72 perf-profile.self.cycles-pp.do_sched_yield
0.77 ± 3% -0.7 0.12 perf-profile.self.cycles-pp.clear_buddies
1.23 -0.7 0.58 ± 2% perf-profile.self.cycles-pp.__x64_sys_sched_yield
0.99 -0.6 0.42 perf-profile.self.cycles-pp.update_rq_clock
0.47 -0.3 0.19 ± 2% perf-profile.self.cycles-pp.schedule
0.40 -0.2 0.16 ± 3% perf-profile.self.cycles-pp.__list_add_valid
0.42 ± 3% -0.2 0.18 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.39 -0.2 0.16 ± 2% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.28 ± 3% -0.2 0.06 perf-profile.self.cycles-pp.check_cfs_rq_runtime
0.33 -0.2 0.17 ± 2% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.12 ± 4% -0.1 0.05 ± 8% perf-profile.self.cycles-pp.sched_clock_cpu
0.11 ± 4% -0.1 0.05 perf-profile.self.cycles-pp.sched_clock
0.09 ± 4% -0.0 0.06 perf-profile.self.cycles-pp.sched_yield@plt
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.lockdep_sys_exit
0.00 +0.1 0.11 perf-profile.self.cycles-pp.lockdep_sys_exit_thunk
0.00 +0.1 0.12 perf-profile.self.cycles-pp._raw_spin_unlock
0.00 +0.1 0.14 ± 3% perf-profile.self.cycles-pp.trace_hardirqs_on_caller
0.00 +0.2 0.19 ± 4% perf-profile.self.cycles-pp.trace_hardirqs_on_thunk
0.00 +0.2 0.22 ± 2% perf-profile.self.cycles-pp._raw_spin_unlock_irq
0.00 +0.3 0.26 perf-profile.self.cycles-pp.mark_held_locks
0.00 +0.3 0.27 perf-profile.self.cycles-pp.trace_hardirqs_off_caller
0.00 +0.3 0.29 perf-profile.self.cycles-pp.__lock_text_start
0.00 +0.3 0.30 perf-profile.self.cycles-pp.rcu_read_lock_held
0.00 +0.3 0.30 perf-profile.self.cycles-pp.prandom_u32
0.00 +0.3 0.30 ± 2% perf-profile.self.cycles-pp.trace_hardirqs_off_thunk
0.00 +0.3 0.30 perf-profile.self.cycles-pp.do_raw_spin_unlock
0.00 +0.3 0.32 ± 2% perf-profile.self.cycles-pp.trace_hardirqs_on
0.20 ± 4% +0.3 0.55 perf-profile.self.cycles-pp.rcu_note_context_switch
0.00 +0.4 0.40 ± 2% perf-profile.self.cycles-pp.prandom_u32_state
0.00 +0.4 0.43 perf-profile.self.cycles-pp.tracer_hardirqs_on
0.00 +0.5 0.46 perf-profile.self.cycles-pp.trace_hardirqs_off
0.00 +0.6 0.58 perf-profile.self.cycles-pp.rcu_read_lock_sched_held
0.00 +0.7 0.71 perf-profile.self.cycles-pp.tracer_hardirqs_off
0.00 +0.8 0.76 perf-profile.self.cycles-pp.do_raw_spin_lock
0.00 +0.9 0.90 perf-profile.self.cycles-pp.find_held_lock
0.00 +1.1 1.05 perf-profile.self.cycles-pp.rcu_is_watching
0.00 +1.1 1.15 perf-profile.self.cycles-pp.lockdep_hardirqs_off
0.00 +1.2 1.18 perf-profile.self.cycles-pp.rcu_read_lock_held_common
0.00 +1.4 1.39 perf-profile.self.cycles-pp.match_held_lock
0.00 +1.5 1.48 perf-profile.self.cycles-pp.rcu_lockdep_current_cpu_online
0.00 +1.8 1.81 perf-profile.self.cycles-pp.lockdep_hardirqs_on
0.00 +2.1 2.06 perf-profile.self.cycles-pp.lock_unpin_lock
0.00 +2.1 2.08 perf-profile.self.cycles-pp.validate_chain
0.00 +2.2 2.16 perf-profile.self.cycles-pp.lock_pin_lock
0.00 +2.4 2.37 perf-profile.self.cycles-pp.mark_lock
0.00 +2.5 2.51 perf-profile.self.cycles-pp.debug_lockdep_rcu_enabled
0.00 +5.1 5.09 perf-profile.self.cycles-pp.lock_release
0.00 +5.1 5.12 perf-profile.self.cycles-pp.lock_acquire
0.00 +14.6 14.57 perf-profile.self.cycles-pp.__lock_acquire
0.00 +15.9 15.92 perf-profile.self.cycles-pp.lock_is_held_type
will-it-scale.per_thread_ops
1.2e+06 +-----------------------------------------------------------------+
|.+.+..+.+.+.+.+..+.+.+.+.+..+.+.+.+.+..+.+.+.+.+..+.+.+.+.+..+.+.|
1e+06 |-+ |
| |
| |
800000 |-+ |
| O O O O O O O |
600000 |-+ |
| |
400000 |-+ |
| O O O O O O O O O O O O |
| |
200000 |-+ |
| |
0 +-----------------------------------------------------------------+
will-it-scale.workload
1.2e+08 +-----------------------------------------------------------------+
| + |
1e+08 |-+ |
| |
| |
8e+07 |-+ |
| O O O O O O O |
6e+07 |-+ |
| |
4e+07 |-+ O O O O O O O O O O O O |
| |
| |
2e+07 |-+ |
| |
0 +-----------------------------------------------------------------+
will-it-scale.time.user_time
14000 +-------------------------------------------------------------------+
| +.+ +..+.+.+.+..+.+ +..+ +. +.+ +.+.+ + +.+. |
12000 |-+ |
| |
10000 |-+ |
| |
8000 |-+ O O O O O O O |
| |
6000 |-+ |
| O O O O O O O O O O O O |
4000 |-+ |
| |
2000 |-+ |
| |
0 +-------------------------------------------------------------------+
will-it-scale.time.system_time
30000 +-------------------------------------------------------------------+
| |
25000 |-+ O O O O O O O O O O O O |
| O O O O O O O |
| |
20000 |-+ .+.+. .+..+. |
|.+..+.+.+.+. + +.+.+..+.+.+..+.+.+.+..+.+.+.+..+.+.+.+..+.|
15000 |-+ |
| |
10000 |-+ |
| |
| |
5000 |-+ |
| |
0 +-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[xfs] f7c4c6cb55: xfstests.xfs.231.fail
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-7):
commit: f7c4c6cb557622a6808848b1a1c7d8b44b7fe819 ("xfs: remove the separate cowblocks worker")
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git repair-metadata-atomically
in testcase: xfstests
with following parameters:
disk: 4HDD
fs: xfs
test: xfs-reflink
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
2020-05-12 21:42:18 export TEST_DIR=/fs/vda
2020-05-12 21:42:18 export TEST_DEV=/dev/vda
2020-05-12 21:42:18 export FSTYP=xfs
2020-05-12 21:42:18 export SCRATCH_MNT=/fs/scratch
2020-05-12 21:42:18 mkdir /fs/scratch -p
2020-05-12 21:42:18 export SCRATCH_DEV=/dev/vdd
2020-05-12 21:42:18 export SCRATCH_LOGDEV=/dev/vdb
2020-05-12 21:42:18 export SCRATCH_XFS_LIST_METADATA_FIELDS=u3.sfdir3.hdr.parent.i4
2020-05-12 21:42:18 export SCRATCH_XFS_LIST_FUZZ_VERBS=random
2020-05-12 21:42:18 export MKFS_OPTIONS=-mreflink=1
2020-05-12 21:42:18 sed "s:^:xfs/:" //lkp/benchmarks/xfstests/tests/xfs-reflink | grep -F -f merged_ignored_files
2020-05-12 21:42:18 sed "s:^:xfs/:" //lkp/benchmarks/xfstests/tests/xfs-reflink | grep -v -F -f merged_ignored_files
2020-05-12 21:42:18 ./check xfs/127 xfs/128 xfs/129 xfs/130 xfs/139 xfs/140 xfs/169 xfs/179 xfs/180 xfs/182 xfs/184 xfs/192 xfs/193 xfs/198 xfs/200 xfs/204 xfs/207 xfs/208 xfs/209 xfs/210 xfs/211 xfs/212 xfs/213 xfs/214 xfs/215 xfs/218 xfs/219 xfs/221 xfs/223 xfs/224 xfs/225 xfs/226 xfs/228 xfs/230 xfs/231 xfs/232 xfs/237 xfs/239 xfs/240 xfs/241 xfs/243 xfs/245 xfs/247 xfs/248 xfs/249 xfs/251 xfs/254 xfs/255 xfs/256 xfs/257 xfs/258 xfs/265 xfs/280 xfs/307 xfs/308 xfs/309 xfs/312 xfs/313 xfs/315 xfs/316 xfs/319 xfs/320 xfs/321 xfs/323 xfs/324 xfs/325 xfs/326 xfs/327 xfs/328 xfs/330 xfs/344 xfs/345 xfs/346 xfs/347 xfs/372 xfs/373 xfs/410 xfs/411 xfs/420 xfs/421 xfs/435 xfs/440 xfs/441 xfs/442 xfs/464 xfs/483 xfs/507
FSTYP -- xfs (debug)
PLATFORM -- Linux/x86_64 vm-snb-110 5.7.0-rc3-00062-gf7c4c6cb55762 #1 SMP Tue May 12 21:03:37 CST 2020
MKFS_OPTIONS -- -f -mreflink=1 /dev/vdd
MOUNT_OPTIONS -- /dev/vdd /fs/scratch
xfs/127 46s
xfs/128 30s
xfs/129 10s
xfs/130 5s
xfs/139 45s
xfs/140 140s
xfs/169 78s
xfs/179 4s
xfs/180 5s
xfs/182 5s
xfs/184 4s
xfs/192 5s
xfs/193 4s
xfs/198 5s
xfs/200 5s
xfs/204 5s
xfs/207 4s
xfs/208 4s
xfs/209 3s
xfs/210 3s
xfs/211 393s
xfs/212 6s
xfs/213 5s
xfs/214 4s
xfs/215 5s
xfs/218 4s
xfs/219 5s
xfs/221 5s
xfs/223 5s
xfs/224 5s
xfs/225 4s
xfs/226 5s
xfs/228 5s
xfs/230 4s
xfs/231 - output mismatch (see /lkp/benchmarks/xfstests/results//xfs/231.out.bad)
--- tests/xfs/231.out 2020-05-11 11:07:16.000000000 +0800
+++ /lkp/benchmarks/xfstests/results//xfs/231.out.bad 2020-05-12 21:57:00.598055414 +0800
@@ -1,4 +1,5 @@
QA output created by 231
+cat: /proc/sys/fs/xfs/speculative_cow_prealloc_lifetime: No such file or directory
Format and mount
Create the original files
Compare files
@@ -6,11 +7,14 @@
bdbcf02ee0aa977795a79d25fcfdccb1 SCRATCH_MNT/test-231/file2
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/xfs/231.out /lkp/benchmarks/xfstests/results//xfs/231.out.bad' to see the entire diff)
xfs/232 - output mismatch (see /lkp/benchmarks/xfstests/results//xfs/232.out.bad)
--- tests/xfs/232.out 2020-05-11 11:07:16.000000000 +0800
+++ /lkp/benchmarks/xfstests/results//xfs/232.out.bad 2020-05-12 21:57:09.641055414 +0800
@@ -1,4 +1,5 @@
QA output created by 232
+cat: /proc/sys/fs/xfs/speculative_cow_prealloc_lifetime: No such file or directory
Format and mount
Create the original files
Compare files
@@ -6,11 +7,14 @@
bdbcf02ee0aa977795a79d25fcfdccb1 SCRATCH_MNT/test-232/file2
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/xfs/232.out /lkp/benchmarks/xfstests/results//xfs/232.out.bad' to see the entire diff)
xfs/237 7s
xfs/239 5s
xfs/240 5s
xfs/241 5s
xfs/243 5s
xfs/245 4s
xfs/247 4s
xfs/248 5s
xfs/249 5s
xfs/251 5s
xfs/254 4s
xfs/255 5s
xfs/256 5s
xfs/257 6s
xfs/258 5s
xfs/265 22s
xfs/280 4s
xfs/307 6s
xfs/308 6s
xfs/309 21s
xfs/312 6s
xfs/313 4s
xfs/315 5s
xfs/316 4s
xfs/319 4s
xfs/320 4s
xfs/321 5s
xfs/323 5s
xfs/324 5s
xfs/325 4s
xfs/326 5s
xfs/327 5s
xfs/328 42s
xfs/330 4s
xfs/344 6s
xfs/345 5s
xfs/346 11s
xfs/347 8s
xfs/372 [not run] xfs_scrub not found
xfs/373 [not run] xfs_scrub not found
xfs/410 [not run] xfs_scrub not found
xfs/411 [not run] xfs_scrub not found
xfs/420 5s
xfs/421 4s
xfs/435 4s
xfs/440 4s
xfs/441 4s
xfs/442 541s
xfs/464 [not run] xfs_scrub not found
xfs/483 [not run] xfs_scrub not found
xfs/507 [failed, exit status 1]- output mismatch (see /lkp/benchmarks/xfstests/results//xfs/507.out.bad)
--- tests/xfs/507.out 2020-05-11 11:07:16.000000000 +0800
+++ /lkp/benchmarks/xfstests/results//xfs/507.out.bad 2020-05-12 22:11:13.071055414 +0800
@@ -1,8 +1,29 @@
QA output created by 507
Format and mount
-Create crazy huge file
-Reflink crazy huge file
-COW crazy huge file
-Check crazy huge file
-st_blocks is in range
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/xfs/507.out /lkp/benchmarks/xfstests/results//xfs/507.out.bad' to see the entire diff)
Ran: xfs/127 xfs/128 xfs/129 xfs/130 xfs/139 xfs/140 xfs/169 xfs/179 xfs/180 xfs/182 xfs/184 xfs/192 xfs/193 xfs/198 xfs/200 xfs/204 xfs/207 xfs/208 xfs/209 xfs/210 xfs/211 xfs/212 xfs/213 xfs/214 xfs/215 xfs/218 xfs/219 xfs/221 xfs/223 xfs/224 xfs/225 xfs/226 xfs/228 xfs/230 xfs/231 xfs/232 xfs/237 xfs/239 xfs/240 xfs/241 xfs/243 xfs/245 xfs/247 xfs/248 xfs/249 xfs/251 xfs/254 xfs/255 xfs/256 xfs/257 xfs/258 xfs/265 xfs/280 xfs/307 xfs/308 xfs/309 xfs/312 xfs/313 xfs/315 xfs/316 xfs/319 xfs/320 xfs/321 xfs/323 xfs/324 xfs/325 xfs/326 xfs/327 xfs/328 xfs/330 xfs/344 xfs/345 xfs/346 xfs/347 xfs/372 xfs/373 xfs/410 xfs/411 xfs/420 xfs/421 xfs/435 xfs/440 xfs/441 xfs/442 xfs/464 xfs/483 xfs/507
Not run: xfs/372 xfs/373 xfs/410 xfs/411 xfs/464 xfs/483
Failures: xfs/231 xfs/232 xfs/507
Failed 3 of 87 tests
To reproduce:
# build kernel
cd linux
cp config-5.7.0-rc3-00062-gf7c4c6cb55762 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
fd21dfbb51: BUG:kernel_NULL_pointer_dereference,address
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-7):
commit: fd21dfbb51796e455d734a4b0ba412cfe48b678f ("nsproxy: attach to namespaces via pidfds")
https://git.kernel.org/cgit/linux/kernel/git/brauner/linux.git setns_pidfd
in testcase: trinity
with following parameters:
runtime: 300s
test-description: Trinity is a linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-i386 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 8.438919] BUG: kernel NULL pointer dereference, address: 000000c8
[ 8.439646] #PF: supervisor read access in kernel mode
[ 8.440248] #PF: error_code(0x0000) - not-present page
[ 8.440847] *pde = 00000000
[ 8.441256] Oops: 0000 [#1] SMP
[ 8.441686] CPU: 0 PID: 556 Comm: trinity-c0 Not tainted 5.7.0-rc2-00002-gfd21dfbb51796 #6
[ 8.442662] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 8.443642] EIP: cap_capable+0x16/0x80
[ 8.444124] Code: e8 ff e9 f5 fe ff ff e8 18 07 d3 ff 8d b4 26 00 00 00 00 90 55 89 e5 56 53 8b 58 6c 39 d3 74 22 8d 74 26 00 8b b3 c8 00 00 00 <39> b2 c8 00 00 00 7e 4a 8b b2 c4 00 00 00 39 f3 74 28 89 f2 39 d3
[ 8.446083] EAX: e9d3d200 EBX: cf396b80 ECX: 00000013 EDX: 00000000
[ 8.446778] ESI: 00000000 EDI: 00000000 EBP: ea307f08 ESP: ea307f00
[ 8.447464] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00010282
[ 8.448194] CR0: 80050033 CR2: 000000c8 CR3: 29dc8000 CR4: 000406d0
[ 8.448886] DR0: b724a000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 8.449585] DR6: fffe0ff0 DR7: 00000600
[ 8.450071] Call Trace:
[ 8.450460] ? cap_inode_getsecurity+0x1c0/0x1c0
[ 8.451014] security_capable+0x33/0x50
[ 8.451506] ptrace_has_cap+0x14/0x40
[ 8.451981] __ptrace_may_access+0x7c/0x130
[ 8.452503] ptrace_may_access+0x1d/0x40
[ 8.453002] __ia32_sys_setns+0x238/0x4c0
[ 8.453508] do_fast_syscall_32+0x75/0x250
[ 8.454019] entry_SYSENTER_32+0xa5/0xf8
[ 8.454519] EIP: 0xb7fd3edd
[ 8.454919] Code: 00 89 d3 5b 5e 5d c3 8d b6 00 00 00 00 b8 40 42 0f 00 eb c1 8b 04 24 c3 8b 1c 24 c3 8b 3c 24 c3 90 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 8d 76 00 58 b8 77 00 00 00 cd 80 90 8d 76
[ 8.456877] EAX: ffffffda EBX: 00000007 ECX: 4c000000 EDX: 12000009
[ 8.457562] ESI: 21160249 EDI: 00002000 EBP: 00000085 ESP: bfec765c
[ 8.458257] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000296
[ 8.458995] Modules linked in:
[ 8.459421] CR2: 00000000000000c8
[ 8.459867] ---[ end trace 7309713183e9afab ]---
To reproduce:
# build kernel
cd linux
cp config-5.7.0-rc2-00002-gfd21dfbb51796 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[mac80211] 6f0a20247e: WARNING:at_net/mac80211/ieee80211_i.h:#ieee80211_chandef_he_oper[mac80211]
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-7):
commit: 6f0a20247e0f849e204996df3cf03a0ddbf0303c ("[PATCH v2 10/11] mac80211: determine chantype from HE operation in 6 GHz")
url: https://github.com/0day-ci/linux/commits/Rajkumar-Manoharan/cfg80211-use-...
base: https://git.kernel.org/cgit/linux/kernel/git/jberg/mac80211.git master
in testcase: hwsim
with following parameters:
group: hwsim-22
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 272.408005] WARNING: CPU: 0 PID: 5137 at net/mac80211/ieee80211_i.h:1443 ieee80211_chandef_he_oper+0x1f0/0x270 [mac80211]
[ 272.411872] Modules linked in: mac80211_hwsim mac80211 cfg80211 rfkill libarc4 intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sr_mod cdrom sg ata_generic pata_acpi bochs_drm drm_vram_helper drm_ttm_helper aesni_intel ppdev crypto_simd ata_piix ttm cryptd glue_helper libata snd_pcm drm_kms_helper snd_timer syscopyarea sysfillrect joydev snd sysimgblt fb_sys_fops soundcore serio_raw pcspkr drm i2c_piix4 parport_pc parport floppy ip_tables
[ 272.423523] CPU: 0 PID: 5137 Comm: wpa_supplicant Not tainted 5.7.0-rc1-00236-g6f0a20247e0f8 #1
[ 272.425792] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 272.428161] RIP: 0010:ieee80211_chandef_he_oper+0x1f0/0x270 [mac80211]
[ 272.430160] Code: 24 08 48 89 02 48 8b 44 24 10 48 89 42 08 48 8b 44 24 18 48 89 42 10 48 8b 44 24 20 48 89 42 18 b8 01 00 00 00 e9 fb fe ff ff <0f> 0b 65 ff 0d 1f d3 88 3f e9 ed fe ff ff 85 ed 74 2b 44 29 e5 89
[ 272.434819] RSP: 0018:ffffb7b18051f780 EFLAGS: 00010246
[ 272.436991] RAX: 0000000000000000 RBX: ffffb7b18051f800 RCX: ffff9be361b30800
[ 272.439328] RDX: ffffb7b18051f8c0 RSI: ffff9be36147d1a6 RDI: ffff9be2ca1668c0
[ 272.441463] RBP: ffff9be2ca1668c0 R08: 000000000000145a R09: 0000000000000000
[ 272.443689] R10: ffff9be2ca167298 R11: ffffffffc060f4e1 R12: 0000000000000000
[ 272.446050] R13: ffff9be36147d150 R14: ffff9be361b31f40 R15: ffffb7b18051f8c0
[ 272.448272] FS: 00007f7e233cee40(0000) GS:ffff9be3ffc00000(0000) knlGS:0000000000000000
[ 272.450688] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 272.452952] CR2: 0000559f677408e0 CR3: 00000001a1936000 CR4: 00000000000406f0
[ 272.455389] Call Trace:
[ 272.457182] ieee80211_determine_chantype+0x2c4/0x490 [mac80211]
[ 272.459490] ieee80211_prep_connection+0x3c5/0x9e0 [mac80211]
[ 272.461943] ieee80211_mgd_auth+0x22c/0x360 [mac80211]
[ 272.464247] cfg80211_mlme_auth+0x125/0x250 [cfg80211]
[ 272.466508] nl80211_authenticate+0x298/0x2f0 [cfg80211]
[ 272.468965] genl_rcv_msg+0x1ed/0x430
[ 272.471103] ? pollwake+0x74/0x90
[ 272.473065] ? genl_family_rcv_msg_attrs_parse+0x100/0x100
[ 272.475242] netlink_rcv_skb+0x4a/0x110
[ 272.477298] genl_rcv+0x24/0x40
[ 272.479132] netlink_unicast+0x1b2/0x280
[ 272.481038] netlink_sendmsg+0x329/0x450
[ 272.482815] sock_sendmsg+0x5b/0x60
[ 272.484674] ____sys_sendmsg+0x200/0x280
[ 272.486413] ? copy_msghdr_from_user+0x5c/0x90
[ 272.488120] ? set_page_dirty+0xe/0xb0
[ 272.489862] ___sys_sendmsg+0x88/0xd0
[ 272.491416] ? __generic_file_write_iter+0x192/0x1c0
[ 272.493327] ? generic_file_write_iter+0x105/0x16b
[ 272.495138] ? new_sync_write+0x12d/0x1d0
[ 272.496721] ? __sys_sendmsg+0x5e/0xa0
[ 272.498436] __sys_sendmsg+0x5e/0xa0
[ 272.502179] do_syscall_64+0x5b/0x1f0
[ 272.503989] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 272.505790] RIP: 0033:0x7f7e215a0dc7
[ 272.507348] Code: d8 64 89 02 48 c7 c0 ff ff ff ff eb cd 66 0f 1f 44 00 00 8b 05 4a 49 2b 00 85 c0 75 2e 48 63 ff 48 63 d2 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 01 c3 48 8b 15 a1 f0 2a 00 f7 d8 64 89 02 48
[ 272.511990] RSP: 002b:00007ffd82f56a28 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
[ 272.514107] RAX: ffffffffffffffda RBX: 00005625de742310 RCX: 00007f7e215a0dc7
[ 272.516316] RDX: 0000000000000000 RSI: 00007ffd82f56ab0 RDI: 0000000000000007
[ 272.518263] RBP: 00005625de742220 R08: 0000000000000004 R09: 00000000000000f0
[ 272.520346] R10: 00007ffd82f56b9c R11: 0000000000000246 R12: 00005625de7571d0
[ 272.522391] R13: 00007ffd82f56ab0 R14: 0000000000000000 R15: 00007ffd82f56b9c
[ 272.524508] ---[ end trace f7a220193ae26eea ]---
To reproduce:
# build kernel
cd linux
cp config-5.7.0-rc1-00236-g6f0a20247e0f8 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[treewide] 482c9c1a17: stack_segment:#[##]
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-7):
commit: 482c9c1a1743ae666af92d732448ee52a4c836ab ("treewide: Replace one-element array with flexible-array")
https://git.kernel.org/cgit/linux/kernel/git/gustavoars/linux.git testing/fam1
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+----------------------------------------------+----------+------------+
| | v5.7-rc3 | 482c9c1a17 |
+----------------------------------------------+----------+------------+
| boot_successes | 446 | 0 |
| boot_failures | 16 | 12 |
| BUG:kernel_hang_in_boot_stage | 9 | 2 |
| BUG:kernel_hang_in_early-boot_stage | 1 | |
| INFO:rcu_sched_detected_stalls_on_CPUs/tasks | 3 | |
| INFO:rcu_sched_self-detected_stall_on_CPU | 4 | |
| RIP:simple_write_begin | 2 | |
| RIP:__memcpy | 2 | |
| BUG:unable_to_handle_page_fault_for_address | 1 | |
| Oops:#[##] | 1 | |
| RIP:__sock_release | 1 | |
| Kernel_panic-not_syncing:Fatal_exception | 1 | 10 |
| RIP:copy_page | 1 | |
| stack_segment:#[##] | 0 | 8 |
| RIP:__kmalloc | 0 | 1 |
| RIP:__kmalloc_track_caller | 0 | 7 |
| canonical_address#:#[##] | 0 | 2 |
| RIP:kmem_cache_alloc_trace | 0 | 2 |
+----------------------------------------------+----------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 0.600837] pci 0000:00:04.0: reg 0x10: [io 0xc040-0xc07f]
[ 0.602806] pci 0000:00:04.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
[ 0.607841] pci 0000:00:04.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
[ 0.609462] pci 0000:00:05.0: [8086:25ab] type 00 class 0x088000
[ 0.610608] pci 0000:00:05.0: reg 0x10: [mem 0xfebf2000-0xfebf200f]
[ 0.614609] stack segment: 0000 [#1] SMP PTI
[ 0.614966] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.7.0-rc3-00001-g482c9c1a1743a #1
[ 0.614966] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 0.614966] RIP: 0010:__kmalloc_track_caller+0x98/0x270
[ 0.614966] Code: 01 00 00 4d 8b 07 65 49 8b 50 08 65 4c 03 05 57 77 d3 4d 49 8b 28 48 85 ed 0f 84 a0 01 00 00 41 8b 47 20 49 8b 3f 40 f6 c7 0f <48> 8b 5c 05 00 0f 85 a6 01 00 00 48 8d 4a 01 48 89 e8 65 48 0f c7
[ 0.614966] RSP: 0000:ffffb95f00013bd8 EFLAGS: 00010246
[ 0.614966] RAX: 0000000000000008 RBX: 0000000000000cc0 RCX: 0000000000000000
[ 0.614966] RDX: 000000000000071d RSI: 0000000000000cc0 RDI: 0000000000030200
[ 0.614966] RBP: 0035304130504e50 R08: ffff982d3fc30200 R09: ffff982d2a403180
[ 0.614966] R10: 00000000000000f8 R11: ffff982d29dfe1fb R12: 0000000000000cc0
[ 0.614966] R13: 000000000000000b R14: ffff982d2a403a40 R15: ffff982d2a403a40
[ 0.614966] FS: 0000000000000000(0000) GS:ffff982d3fc00000(0000) knlGS:0000000000000000
[ 0.614966] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.614966] CR2: 0000000000000000 CR3: 000000001060a000 CR4: 00000000000406f0
[ 0.614966] Call Trace:
[ 0.614966] ? wakeup_source_create+0x35/0x80
[ 0.614966] kstrdup+0x2d/0x60
[ 0.614966] ? program_hpx_type0+0x130/0x130
[ 0.614966] wakeup_source_create+0x35/0x80
[ 0.614966] wakeup_source_register+0x12/0x50
[ 0.614966] acpi_add_pm_notifier+0x84/0xc0
[ 0.614966] acpi_pci_root_add+0x21e/0x5c0
[ 0.614966] ? acpi_evaluate_integer+0x52/0x90
[ 0.614966] ? rdinit_setup+0x2b/0x2b
[ 0.614966] acpi_bus_attach+0x15d/0x1f0
[ 0.614966] acpi_bus_attach+0x88/0x1f0
[ 0.614966] ? acpi_sleep_proc_init+0x24/0x24
[ 0.614966] acpi_bus_scan+0x43/0x90
[ 0.614966] acpi_scan_init+0xec/0x230
[ 0.614966] ? acpi_sleep_proc_init+0x24/0x24
[ 0.614966] acpi_init+0x2f4/0x354
[ 0.614966] do_one_initcall+0x46/0x220
[ 0.614966] kernel_init_freeable+0x1fa/0x288
[ 0.614966] ? rest_init+0xd0/0xd0
[ 0.614966] kernel_init+0xa/0x110
[ 0.614966] ret_from_fork+0x35/0x40
[ 0.614966] Modules linked in:
[ 0.614978] ---[ end trace ad36a3a45e77294d ]---
[ 0.615921] RIP: 0010:__kmalloc_track_caller+0x98/0x270
[ 0.615976] Code: 01 00 00 4d 8b 07 65 49 8b 50 08 65 4c 03 05 57 77 d3 4d 49 8b 28 48 85 ed 0f 84 a0 01 00 00 41 8b 47 20 49 8b 3f 40 f6 c7 0f <48> 8b 5c 05 00 0f 85 a6 01 00 00 48 8d 4a 01 48 89 e8 65 48 0f c7
[ 0.616975] RSP: 0000:ffffb95f00013bd8 EFLAGS: 00010246
[ 0.617955] RAX: 0000000000000008 RBX: 0000000000000cc0 RCX: 0000000000000000
[ 0.617972] RDX: 000000000000071d RSI: 0000000000000cc0 RDI: 0000000000030200
[ 0.618973] RBP: 0035304130504e50 R08: ffff982d3fc30200 R09: ffff982d2a403180
[ 0.619973] R10: 00000000000000f8 R11: ffff982d29dfe1fb R12: 0000000000000cc0
[ 0.620973] R13: 000000000000000b R14: ffff982d2a403a40 R15: ffff982d2a403a40
[ 0.621974] FS: 0000000000000000(0000) GS:ffff982d3fc00000(0000) knlGS:0000000000000000
[ 0.622973] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.623972] CR2: 0000000000000000 CR3: 000000001060a000 CR4: 00000000000406f0
[ 0.624977] Kernel panic - not syncing: Fatal exception
Elapsed time: 60
qemu-img create -f qcow2 disk-vm-snb-ssd-64-0 256G
qemu-img create -f qcow2 disk-vm-snb-ssd-64-1 256G
To reproduce:
# build kernel
cd linux
cp config-5.7.0-rc3-00001-g482c9c1a1743a .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[selftests/bpf] 77bb53cb09: kernel-selftests.bpf.make_fail
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-7):
commit: 77bb53cb094828a31cd3c5b402899810f63073c1 ("selftests/bpf: Fix perf_buffer test on systems w/ offline CPUs")
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable-rc.git linux-5.4.y
in testcase: kernel-selftests
with following parameters:
group: kselftests-bpf
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
KERNEL SELFTESTS: linux_headers_dir is /usr/src/linux-headers-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1
2020-05-09 22:23:18 sed -i s/default_timeout=45/default_timeout=300/ kselftest/runner.sh
2020-05-09 22:23:18 make -C ../../../tools/bpf/bpftool
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/bpf/bpftool'
Auto-detecting system features:
... libbfd: [ on ]
... disassembler-four-args: [ OFF ]
... zlib: [ on ]
CC map_perf_ring.o
CC xlated_dumper.o
CC btf.o
CC tracelog.o
CC perf.o
CC prog.o
CC btf_dumper.o
CC net.o
CC netlink_dumper.o
CC common.o
CC cgroup.o
CC main.o
CC json_writer.o
CC cfg.o
CC map.o
CC feature.o
CC jit_disasm.o
jit_disasm.c: In function ‘disasm_print_insn’:
jit_disasm.c:122:29: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
info.disassembler_options = disassembler_options;
^
CC disasm.o
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/lib/bpf'
Auto-detecting system features:
... libelf: [ on ]
... bpf: [ on ]
MKDIR staticobjs/
CC staticobjs/libbpf.o
CC staticobjs/bpf.o
CC staticobjs/nlattr.o
CC staticobjs/btf.o
CC staticobjs/libbpf_errno.o
CC staticobjs/str_error.o
CC staticobjs/netlink.o
CC staticobjs/bpf_prog_linfo.o
CC staticobjs/libbpf_probes.o
CC staticobjs/xsk.o
CC staticobjs/hashmap.o
CC staticobjs/btf_dump.o
LD staticobjs/libbpf-in.o
LINK libbpf.a
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/lib/bpf'
LINK bpftool
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/bpf/bpftool'
2020-05-09 22:23:32 make install -C ../../../tools/bpf/bpftool
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/bpf/bpftool'
Auto-detecting system features:
... libbfd: [ on ]
... disassembler-four-args: [ OFF ]
... zlib: [ on ]
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/lib/bpf'
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/lib/bpf'
INSTALL bpftool
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/bpf/bpftool'
ping6 is /bin/ping6
ignored_by_lkp bpf.test_lirc_mode2_user test
ignored_by_lkp bpf.test_tc_tunnel.sh test
ignored_by_lkp bpf.test_lwt_seg6local.sh test
2020-05-09 22:23:36 make run_tests -C bpf
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf'
gcc -I. -I/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf -g -Wall -O2 -I../../../include/uapi -I../../../lib -I../../../lib/bpf -I../../../../include/generated -I../../../include -Dbpf_prog_load=bpf_prog_test_load -Dbpf_load_program=bpf_test_load_program -I. -I/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf -Iverifier -c -o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_stub.o test_stub.c
make -C ../../../lib/bpf OUTPUT=/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/lib/bpf'
Auto-detecting system features:
... libelf: [ on ]
... bpf: [ on ]
HOSTCC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/fixdep.o
HOSTLD /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/fixdep-in.o
LINK /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/fixdep
MKDIR /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/libbpf.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/bpf.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/nlattr.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/btf.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/libbpf_errno.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/str_error.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/netlink.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/bpf_prog_linfo.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/libbpf_probes.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/xsk.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/hashmap.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/btf_dump.o
LD /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/staticobjs/libbpf-in.o
LINK /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/libbpf.a
MKDIR /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/libbpf.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/bpf.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/nlattr.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/btf.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/libbpf_errno.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/str_error.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/netlink.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/bpf_prog_linfo.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/libbpf_probes.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/xsk.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/hashmap.o
CC /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/btf_dump.o
LD /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/sharedobjs/libbpf-in.o
LINK /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/libbpf.so.0.0.5
GEN /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/libbpf.pc
LINK /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_libbpf
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/lib/bpf'
gcc -g -Wall -O2 -I../../../include/uapi -I../../../lib -I../../../lib/bpf -I../../../../include/generated -I../../../include -Dbpf_prog_load=bpf_prog_test_load -Dbpf_load_program=bpf_test_load_program -I. -I/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf -Iverifier test_verifier.c /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_stub.o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/libbpf.a -lcap -lelf -lrt -lpthread -o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_verifier
gcc -g -Wall -O2 -I../../../include/uapi -I../../../lib -I../../../lib/bpf -I../../../../include/generated -I../../../include -Dbpf_prog_load=bpf_prog_test_load -Dbpf_load_program=bpf_test_load_program test_tag.c /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_stub.o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/libbpf.a -lcap -lelf -lrt -lpthread -o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_tag
gcc -g -Wall -O2 -I../../../include/uapi -I../../../lib -I../../../lib/bpf -I../../../../include/generated -I../../../include -Dbpf_prog_load=bpf_prog_test_load -Dbpf_load_program=bpf_test_load_program -I. -I/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf test_maps.c /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_stub.o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/libbpf.a map_tests/sk_storage_map.c -lcap -lelf -lrt -lpthread -o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_maps
gcc -g -Wall -O2 -I../../../include/uapi -I../../../lib -I../../../lib/bpf -I../../../../include/generated -I../../../include -Dbpf_prog_load=bpf_prog_test_load -Dbpf_load_program=bpf_test_load_program test_lru_map.c /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_stub.o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/libbpf.a -lcap -lelf -lrt -lpthread -o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_lru_map
gcc -g -Wall -O2 -I../../../include/uapi -I../../../lib -I../../../lib/bpf -I../../../../include/generated -I../../../include -Dbpf_prog_load=bpf_prog_test_load -Dbpf_load_program=bpf_test_load_program test_lpm_map.c /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_stub.o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/libbpf.a -lcap -lelf -lrt -lpthread -o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_lpm_map
gcc -g -Wall -O2 -I../../../include/uapi -I../../../lib -I../../../lib/bpf -I../../../../include/generated -I../../../include -Dbpf_prog_load=bpf_prog_test_load -Dbpf_load_program=bpf_test_load_program -I. -I/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf test_progs.c /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_stub.o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/libbpf.a cgroup_helpers.c trace_helpers.c prog_tests/attach_probe.c prog_tests/stacktrace_map.c prog_tests/raw_tp_writable_test_run.c prog_tests/stacktrace_map_raw_tp.c prog_tests/raw_tp_writable_reject_nbd_invalid.c prog_tests/bpf_verif_scale.c prog_tests/xdp.c prog_tests/send_signal.c prog_tests/stacktrace_build_id.c prog_tests/reference_tracking.c prog_tests/prog_run_xattr.c prog_tests/sockopt_inherit.c prog_tests/task_fd_query_tp.c prog_tests/tp_attach_query.c prog_tests/get_stack_raw_tp.c prog_tests/sockopt_sk.c prog_tests/pkt_md_access.c prog_tests/xdp_adjust_tail.c prog_tests/stacktrace_build_id_nmi.c prog_tests/pkt_access.c prog_tests/spinlock.c prog_tests/sockopt.c prog_tests/flow_dissector_load_bytes.c prog_tests/perf_buffer.c prog_tests/tcp_rtt.c prog_tests/skb_ctx.c prog_tests/queue_stack_map.c prog_tests/task_fd_query_rawtp.c prog_tests/signal_pending.c prog_tests/sockopt_multi.c prog_tests/flow_dissector.c prog_tests/core_reloc.c prog_tests/l4lb_all.c prog_tests/tcp_estats.c prog_tests/obj_name.c prog_tests/map_lock.c prog_tests/xdp_noinline.c prog_tests/global_data.c prog_tests/bpf_obj_id.c -lcap -lelf -lrt -lpthread -o /usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_progs
prog_tests/perf_buffer.c: In function ‘test_perf_buffer’:
prog_tests/perf_buffer.c:39:8: warning: implicit declaration of function ‘parse_cpu_mask_file’ [-Wimplicit-function-declaration]
err = parse_cpu_mask_file("/sys/devices/system/cpu/online",
^~~~~~~~~~~~~~~~~~~
/tmp/lkp/ccttzc8t.o: In function `test_perf_buffer':
/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/prog_tests/perf_buffer.c:39: undefined reference to `parse_cpu_mask_file'
collect2: error: ld returned 1 exit status
../lib.mk:138: recipe for target '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_progs' failed
make: *** [/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf/test_progs] Error 1
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-77bb53cb094828a31cd3c5b402899810f63073c1/tools/testing/selftests/bpf'
To reproduce:
# build kernel
cd linux
cp config-5.4.18-00145-g77bb53cb09482 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen