Re: [IB/srpt] c804af2c1d: last_state.test.blktests.exit_code.143
by Bart Van Assche
On 2020-08-03 00:27, Sagi Grimberg wrote:
>
>>>> Greeting,
>>>>
>>>> FYI, we noticed the following commit (built with gcc-9):
>>>>
>>>> commit: c804af2c1d3152c0cf877eeb50d60c2d49ac0cf0 ("IB/srpt: use new shared CQ mechanism")
>>>> https://git.kernel.org/cgit/linux/kernel/git/rdma/rdma.git for-next
>>>>
>>>>
>>>> in testcase: blktests
>>>> with following parameters:
>>>>
>>>> test: srp-group1
>>>> ucode: 0x21
>>>>
>>>>
>>>>
>>>> on test machine: 4 threads Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz with 4G memory
>>>>
>>>> caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
>>>>
>>>>
>>>>
>>>>
>>>> If you fix the issue, kindly add following tag
>>>> Reported-by: kernel test robot <rong.a.chen(a)intel.com>
>>>>
>>>>
>>>> user :notice: [ 44.688140] 2020-08-01 16:10:22 ./check srp/001 srp/002 srp/003 srp/004 srp/005 srp/006 srp/007 srp/008 srp/009 srp/010 srp/011 srp/012 srp/013 srp/015
>>>> user :notice: [ 44.706657] srp/001 (Create and remove LUNs)
>>>> user :notice: [ 44.718405] srp/001 (Create and remove LUNs) [passed]
>>>> user :notice: [ 44.729902] runtime ... 1.972s
>>>> user :notice: [ 99.038748] IPMI BMC is not supported on this machine, skip bmc-watchdog setup!
>>>> user :notice: [ 3699.039790] Sat Aug 1 17:11:22 UTC 2020 detected soft_timeout
>>>> user :notice: [ 3699.060341] kill 960 /usr/bin/time -v -o /tmp/lkp/blktests.time /lkp/lkp/src/tests/blktests
>>> Yamin and Max, can you take a look at this? The SRP tests from the
>>> blktests repository pass reliably with kernel version v5.7 and before.
>>> With label next-20200731 from linux-next however that test triggers the
>>> following hang:
>>
>> I will look into it.
>
> FWIW, I ran into this as well with nvme-rdma, but it also reproduces
> when I revert the shared CQ patch from nvme-rdma. Another data point
> is that my tests pass with siw.
Hi Jason,
The patch below is sufficient to unbreak blktests. I think that the
deadlock while unloading rdma_rxe happens because the RDMA core waits for
all ib_dev references to be dropped before dealloc_driver is called.
The rdma_rxe dealloc_driver implementation drops an ib_dev reference. The
dealloc_driver callback was introduced by commit d0899892edd0
("RDMA/device: Provide APIs from the core code to help unregistration").
Do you agree that this regression has been introduced by commits
d0899892edd0 and c367074b6c37 ("RDMA/rxe: Use driver_unregister and new
unregistration API")?
Thanks,
Bart.
---
drivers/infiniband/core/device.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index dca2842a7872..5192f305b253 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -1287,13 +1287,8 @@ static void disable_device(struct ib_device *device)
/* Pairs with refcount_set in enable_device */
ib_device_put(device);
- wait_for_completion(&device->unreg_completion);
- /*
- * compat devices must be removed after device refcount drops to zero.
- * Otherwise init_net() may add more compatdevs after removing compat
- * devices and before device is disabled.
- */
+ /* To do: prevent init_net() from adding more compat_devs. */
remove_compat_devs(device);
}
[rcu/tree] 53c72b590b: stress-ng.sigq.ops_per_sec 18.8% improvement
by kernel test robot
Greeting,
FYI, we noticed an 18.8% improvement of stress-ng.sigq.ops_per_sec due to commit:
commit: 53c72b590b3a0afd6747d6f7957e6838003e90a4 ("rcu/tree: cache specified number of objects")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: stress-ng
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory
with following parameters:
nr_threads: 100%
disk: 1HDD
testtime: 30s
class: interrupt
cpufreq_governor: performance
ucode: 0x5002f01
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
interrupt/gcc-9/performance/1HDD/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp7/stress-ng/30s/0x5002f01
commit:
69f08d3999 ("rcu/tree: Use static initializer for krc.lock")
53c72b590b ("rcu/tree: cache specified number of objects")
69f08d3999dbef15 53c72b590b3a0afd6747d6f7957
---------------- ---------------------------
%stddev %change %stddev
\ | \
2.315e+08 +18.8% 2.749e+08 stress-ng.sigq.ops
7716291 +18.8% 9164111 stress-ng.sigq.ops_per_sec
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[PATCH v2] syscall/ptrace08: Simplify the test.
by Cyril Hrubis
The original test was attempting to crash the kernel by setting a
breakpoint on do_debug kernel function which, when triggered, caused an
infinite loop in the kernel. The problem with this approach is that
kernel internal function names are not stable at all and the name was
changed recently, which made the test fail for no good reason.
The original kernel fix, however, still made it possible to set a kernel
address as a breakpoint; instead of rejecting it, the kernel disabled the
breakpoint on userspace modification. The error checks were deferred to
the write to dr7 that enabled the breakpoint again.
So on newer kernels we do not allow setting the breakpoint to a kernel
address at all, which means that the POKEUSER to dr0 has to fail with an
address in the kernel range; we also read the breakpoint address back and
check that it wasn't set, just to be sure.
On older kernels we check that the POKEUSER to dr7 that enables the
breakpoint fails properly after dr0 has been set to an address in
the kernel range.
Signed-off-by: Cyril Hrubis <chrubis(a)suse.cz>
CC: Andy Lutomirski <luto(a)kernel.org>
CC: Peter Zijlstra <peterz(a)infradead.org>
CC: Thomas Gleixner <tglx(a)linutronix.de>
CC: Alexandre Chartre <alexandre.chartre(a)oracle.com>
---
testcases/kernel/syscalls/ptrace/ptrace08.c | 136 +++++++++++---------
1 file changed, 76 insertions(+), 60 deletions(-)
diff --git a/testcases/kernel/syscalls/ptrace/ptrace08.c b/testcases/kernel/syscalls/ptrace/ptrace08.c
index 591aa0dd2..1b84ce376 100644
--- a/testcases/kernel/syscalls/ptrace/ptrace08.c
+++ b/testcases/kernel/syscalls/ptrace/ptrace08.c
@@ -5,8 +5,17 @@
*
* CVE-2018-1000199
*
- * Test error handling when ptrace(POKEUSER) modifies debug registers.
- * Even if the call returns error, it may create breakpoint in kernel code.
+ * Test error handling when ptrace(POKEUSER) modified x86 debug registers even
+ * when the call returned error.
+ *
+ * When the bug was present we could create a breakpoint in the kernel code,
+ * which shouldn't be possible at all. The original CVE caused a kernel crash by
+ * setting a breakpoint on the do_debug kernel function which, when triggered,
+ * caused an infinite loop. However, we do not have to crash the kernel in
+ * order to check whether the kernel has been fixed or not. All we have to do
+ * is try to set a breakpoint on any kernel address, then read it back and
+ * check whether the value has been set or not.
+ *
* Kernel crash partially fixed in:
*
* commit f67b15037a7a50c57f72e69a6d59941ad90a0f0f
@@ -26,69 +35,54 @@
#include "tst_safe_stdio.h"
#if defined(__i386__) || defined(__x86_64__)
-#define SYMNAME_SIZE 256
-#define KERNEL_SYM "do_debug"
-static unsigned long break_addr;
static pid_t child_pid;
-static void setup(void)
-{
- int fcount;
- char endl, symname[256];
- FILE *fr = SAFE_FOPEN("/proc/kallsyms", "r");
-
- /* Find address of do_debug() in /proc/kallsyms */
- do {
- fcount = fscanf(fr, "%lx %*c %255s%c", &break_addr, symname,
- &endl);
-
- if (fcount <= 0 && feof(fr))
- break;
-
- if (fcount < 2) {
- fclose(fr);
- tst_brk(TBROK, "Unexpected data in /proc/kallsyms %d",
- fcount);
- }
-
- if (fcount >= 3 && endl != '\n')
- while (!feof(fr) && fgetc(fr) != '\n');
- } while (!feof(fr) && strcmp(symname, KERNEL_SYM));
-
- SAFE_FCLOSE(fr);
-
- if (strcmp(symname, KERNEL_SYM))
- tst_brk(TBROK, "Cannot find address of kernel symbol \"%s\"",
- KERNEL_SYM);
-
- if (!break_addr)
- tst_brk(TCONF, "Addresses in /proc/kallsyms are hidden");
+#if defined(__x86_64__)
+# define KERN_ADDR_MIN 0xffff800000000000
+# define KERN_ADDR_MAX 0xffffffffffffffff
+# define KERN_ADDR_BITS 64
+#elif defined(__i386__)
+# define KERN_ADDR_MIN 0xc0000000
+# define KERN_ADDR_MAX 0xffffffff
+# define KERN_ADDR_BITS 32
+#endif
- tst_res(TINFO, "Kernel symbol \"%s\" found at 0x%lx", KERNEL_SYM,
- break_addr);
-}
+static int deffered_check;
-static void debug_trap(void)
+static void setup(void)
{
- /* x86 instruction INT1 */
- asm volatile (".byte 0xf1");
+ /*
+ * When running in compat mode we can't pass a 64-bit address to ptrace so we
+ * have to skip the test.
+ */
+ if (tst_kernel_bits() != KERN_ADDR_BITS)
+ tst_brk(TCONF, "Cannot pass 64bit kernel address in compat mode");
+
+
+ /*
+ * The original kernel fix didn't reject the kernel address
+ * right away; when the breakpoint was modified from userspace it was
+ * disabled, and EINVAL was returned when dr7 was written to enable
+ * it again.
+ */
+ if (tst_kvercmp(4, 17, 0) < 0)
+ deffered_check = 1;
}
static void child_main(void)
{
raise(SIGSTOP);
- /* wait for SIGCONT from parent */
- debug_trap();
exit(0);
}
-static void run(void)
+static void ptrace_try_kern_addr(unsigned long kern_addr)
{
int status;
- pid_t child;
- child = child_pid = SAFE_FORK();
+ tst_res(TINFO, "Trying address 0x%lx", kern_addr);
+
+ child_pid = SAFE_FORK();
if (!child_pid)
child_main();
@@ -102,23 +96,46 @@ static void run(void)
SAFE_PTRACE(PTRACE_POKEUSER, child_pid,
(void *)offsetof(struct user, u_debugreg[7]), (void *)1);
- /* Return value intentionally ignored here */
- ptrace(PTRACE_POKEUSER, child_pid,
+ TEST(ptrace(PTRACE_POKEUSER, child_pid,
(void *)offsetof(struct user, u_debugreg[0]),
- (void *)break_addr);
+ (void *)kern_addr));
+
+ if (deffered_check) {
+ TEST(ptrace(PTRACE_POKEUSER, child_pid,
+ (void *)offsetof(struct user, u_debugreg[7]), (void *)1));
+ }
+
+ if (TST_RET != -1) {
+ tst_res(TFAIL, "ptrace() breakpoint with kernel addr succeeded");
+ } else {
+ if (TST_ERR == EINVAL) {
+ tst_res(TPASS | TTERRNO,
+ "ptrace() breakpoint with kernel addr failed");
+ } else {
+ tst_res(TFAIL | TTERRNO,
+ "ptrace() breakpoint on kernel addr should return EINVAL, got");
+ }
+ }
+
+ unsigned long addr;
+
+ addr = ptrace(PTRACE_PEEKUSER, child_pid,
+ (void*)offsetof(struct user, u_debugreg[0]), NULL);
+
+ if (!deffered_check && addr == kern_addr)
+ tst_res(TFAIL, "Was able to set breakpoint on kernel addr");
SAFE_PTRACE(PTRACE_DETACH, child_pid, NULL, NULL);
SAFE_KILL(child_pid, SIGCONT);
child_pid = 0;
+ tst_reap_children();
+}
- if (SAFE_WAITPID(child, &status, 0) != child)
- tst_brk(TBROK, "Received event from unexpected PID");
-
- if (!WIFSIGNALED(status))
- tst_brk(TBROK, "Received unexpected event from child");
-
- tst_res(TPASS, "Child killed by %s", tst_strsig(WTERMSIG(status)));
- tst_res(TPASS, "We're still here. Nothing bad happened, probably.");
+static void run(void)
+{
+ ptrace_try_kern_addr(KERN_ADDR_MIN);
+ ptrace_try_kern_addr(KERN_ADDR_MAX);
+ ptrace_try_kern_addr(KERN_ADDR_MIN + (KERN_ADDR_MAX - KERN_ADDR_MIN)/2);
}
static void cleanup(void)
@@ -133,7 +150,6 @@ static struct tst_test test = {
.setup = setup,
.cleanup = cleanup,
.forks_child = 1,
- .taint_check = TST_TAINT_W | TST_TAINT_D,
.tags = (const struct tst_tag[]) {
{"linux-git", "f67b15037a7a"},
{"CVE", "2018-1000199"},
--
2.26.2
[cfg80211] 7f96dd3657: hwsim.dpp_qr_code_auth_hostapd_mutual2.fail
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: 7f96dd365770550bef6f09757774788e87b5f92e ("cfg80211: avoid holding the RTNL when calling the driver")
https://git.kernel.org/cgit/linux/kernel/git/jberg/mac80211-next.git rtnl
in testcase: hwsim
version: hwsim-x86_64-6eb6cf0-1_20200619
with following parameters:
group: hwsim-16
ucode: 0x21
on test machine: 8 threads Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz with 16G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
2020-09-10 04:02:50 ./run-tests.py dpp_qr_code_auth_hostapd_mutual2
DEV: wlan0: 02:00:00:00:00:00
DEV: wlan1: 02:00:00:00:01:00
DEV: wlan2: 02:00:00:00:02:00
APDEV: wlan3
APDEV: wlan4
START dpp_qr_code_auth_hostapd_mutual2 1/1
Test: DPP QR Code and authentication exchange (hostapd mutual2)
Starting AP wlan3
AP displays QR Code
dev0 displays QR Code
dev0 scans QR Code and initiates DPP Authentication
AP scans QR Code
DPP authentication did not succeed (Responder)
Traceback (most recent call last):
File "./run-tests.py", line 533, in main
t(dev, apdev)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_dpp.py", line 585, in test_dpp_qr_code_auth_hostapd_mutual2
wait_auth_success(hapd, dev[0], stop_responder=True)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_dpp.py", line 3815, in wait_auth_success
raise Exception("DPP authentication did not succeed (Responder)")
Exception: DPP authentication did not succeed (Responder)
FAIL dpp_qr_code_auth_hostapd_mutual2 5.111452 2020-09-10 04:02:56.047366
passed 0 test case(s)
skipped 0 test case(s)
failed tests: dpp_qr_code_auth_hostapd_mutual2
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Rong Chen
Re: [mm/lru] f4ba6c0e1b: vm-scalability.throughput -27.1% regression
by kernel test robot
On Thu, Sep 10, 2020 at 04:05:44PM +0800, Alex Shi wrote:
>
>
> 在 2020/9/10 下午3:46, kernel test robot 写道:
> > Greeting,
> >
> > FYI, we noticed a -27.1% regression of vm-scalability.throughput due to commit:
> >
> >
> > commit: f4ba6c0e1b65b09bafd8efe3a414c6dc1f8ab647 ("mm/lru: replace pgdat lru_lock with lruvec lock")
> > https://github.com/alexshi/linux.git lru59
>
> Hi Rong,
>
> Thanks a lot!
>
> Just want to know how much lost on the whole patchset?
>
The regression is 27%:
v5.9-rc2 68ee199a9da2f311c818efc0e1 testcase/testparams/testbox
---------------- -------------------------- ---------------------------
%stddev change %stddev
\ | \
17195218 -27% 12485310 vm-scalability/performance-300s-lru-file-readtwice-ucode=0x2006906-monitor=e907e467/lkp-skl-fpga01
17195218 -27% 12485310 GEO-MEAN vm-scalability.throughput
Best Regards,
Rong Chen
[mm/debug_vm_pgtable/locks] c50eb1ed65: BUG:sleeping_function_called_from_invalid_context_at_mm/page_alloc.c
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: c50eb1ed654b59efad96884cc26895a0acd7a15a ("mm/debug_vm_pgtable/locks: move non page table modifying test together")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: boot
on test machine: qemu-system-i386 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+----------------------------------------------------------------------+------------+------------+
| | 5c65ca35e5 | c50eb1ed65 |
+----------------------------------------------------------------------+------------+------------+
| boot_successes | 18 | 0 |
| boot_failures | 0 | 10 |
| BUG:sleeping_function_called_from_invalid_context_at_mm/page_alloc.c | 0 | 10 |
+----------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 9.409233] BUG: sleeping function called from invalid context at mm/page_alloc.c:4822
[ 9.410557] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 1, name: swapper
[ 9.411932] no locks held by swapper/1.
[ 9.412595] CPU: 0 PID: 1 Comm: swapper Not tainted 5.9.0-rc3-00323-gc50eb1ed654b5 #2
[ 9.413824] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 9.415207] Call Trace:
[ 9.415651] ? ___might_sleep.cold+0xa7/0xcc
[ 9.416367] ? __alloc_pages_nodemask+0x14c/0x5b0
[ 9.417055] ? swap_migration_tests+0x50/0x293
[ 9.417704] ? debug_vm_pgtable+0x4bc/0x708
[ 9.418287] ? swap_migration_tests+0x293/0x293
[ 9.418911] ? do_one_initcall+0x82/0x3cb
[ 9.419465] ? parse_args+0x1bd/0x280
[ 9.419983] ? rcu_read_lock_sched_held+0x36/0x60
[ 9.420673] ? trace_initcall_level+0x1f/0xf3
[ 9.421279] ? trace_initcall_level+0xbd/0xf3
[ 9.421881] ? do_basic_setup+0x9d/0xdd
[ 9.422410] ? do_basic_setup+0xc3/0xdd
[ 9.422938] ? kernel_init_freeable+0x72/0xa3
[ 9.423539] ? rest_init+0x134/0x134
[ 9.424055] ? kernel_init+0x5/0x12c
[ 9.424574] ? ret_from_fork+0x19/0x30
[ 9.425310] Key type ._fscrypt registered
[ 9.426019] Key type .fscrypt registered
[ 9.426707] Key type fscrypt-provisioning registered
[ 9.427637] fs-verity: Initialized fs-verity
[ 9.840093] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
[ 9.853197] e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 9.877389] Sending DHCP requests ., OK
[ 9.879581] IP-Config: Got DHCP answer from 10.0.2.2, my address is 10.0.2.15
[ 9.880824] IP-Config: Complete:
[ 9.881354] device=eth0, hwaddr=52:54:00:12:34:56, ipaddr=10.0.2.15, mask=255.255.255.0, gw=10.0.2.2
[ 9.882905] host=vm-snb-i386-109, domain=, nis-domain=(none)
[ 9.883902] bootserver=10.0.2.2, rootserver=10.0.2.2, rootpath=
[ 9.883905] nameserver0=10.0.2.3
[ 9.894061] Freeing unused kernel image (initmem) memory: 656K
[ 9.895575] Write protecting kernel text and read-only data: 23520k
[ 9.898343] Run /init as init process
[ 9.899004] with arguments:
[ 9.899514] /init
[ 9.899927] with environment:
[ 9.900487] HOME=/
[ 9.900987] TERM=linux
[ 9.901465] user=lkp
[ 9.901908] job=/lkp/jobs/scheduled/vm-snb-i386-109/boot-1-yocto-i386-minimal-20190520.cgz-c50eb1ed654b59efad96884cc26895a0acd7a15a-20200910-4374-1mjkprf-3.yaml
[ 9.904329] ARCH=i386
[ 9.904786] kconfig=i386-randconfig-r026-20200909
[ 9.905564] branch=linux-next/master
[ 9.906202] commit=c50eb1ed654b59efad96884cc26895a0acd7a15a
[ 9.907202] BOOT_IMAGE=/pkg/linux/i386-randconfig-r026-20200909/gcc-9/c50eb1ed654b59efad96884cc26895a0acd7a15a/vmlinuz-5.9.0-rc3-00323-gc50eb1ed654b5
[ 9.909484] max_uptime=600
[ 9.910007] RESULT_ROOT=/result/boot/1/vm-snb-i386/yocto-i386-minimal-20190520.cgz/i386-randconfig-r026-20200909/gcc-9/c50eb1ed654b59efad96884cc26895a0acd7a15a/3
[ 9.912177] LKP_SERVER=inn
[ 9.912636] selinux=0
[ 9.913011] softlockup_panic=1
[ 9.913484] vga=normal
[ 9.931703] process 143 (init) attempted a POSIX timer syscall while CONFIG_POSIX_TIMERS is not set
Starting udev
[ 10.061529] pidof (167) used greatest stack depth: 6528 bytes left
[ 10.070496] udevd[168]: starting version 3.2.7
[ 10.072314] random: udevd: uninitialized urandom read (16 bytes read)
[ 10.073812] random: udevd: uninitialized urandom read (16 bytes read)
[ 10.075031] random: udevd: uninitialized urandom read (16 bytes read)
[ 10.079997] udevd[168]: specified group 'kvm' unknown
[ 10.087732] udevd[169]: starting eudev-3.2.7
[ 10.225841] udevd[169]: specified group 'kvm' unknown
[ 10.392367] parport_pc 00:04: reported by Plug and Play ACPI
[ 10.394027] parport0: PC-style at 0x378, irq 7 [PCSPP(,...)]
[ 10.428374] lp0: using parport0 (interrupt-driven).
[ 10.429380] lp0: console ready
[ 10.432762] NET3 PLIP version 2.4-parport gniibe(a)mri.co.jp
[ 10.433719] plip0: Parallel port at 0x378, using IRQ 7.
[ 10.453613] Linux agpgart interface v0.103
[ 10.975166] ppdev: user-space parallel port driver
[ 12.143279] rcu-torture: rcu_torture_read_exit: End of episode
[ 14.166934] udevd (173) used greatest stack depth: 6280 bytes left
[ 16.266823] uvesafb: Getting VBE info block failed (eax=0x4f00, err=1)
[ 16.267957] uvesafb: vbe_init() failed with -22
[ 16.268728] uvesafb: probe of uvesafb.0 failed with error -22
[ 16.785570] urandom_read: 3 callbacks suppressed
[ 16.785574] random: dd: uninitialized urandom read (512 bytes read)
[ 17.024261] bootlogd (222) used greatest stack depth: 6276 bytes left
LKP: HOSTNAME vm-snb-i386-109, MAC 52:54:00:12:34:56, kernel 5.9.0-rc3-00323-gc50eb1ed654b5 2, serial console /dev/ttyS0
Poky (Yocto Project Reference Distro) 2.7+snapshot vm-snb-i386-109 /dev/ttyS0
[ 18.137435] mount: mounting debug on /sys/kernel/debug failed: No such file or directory
[ 25.848294] rcu-torture: rcu_torture_read_exit: Start of episode
[ 25.855111] rcu-torture: rcu_torture_read_exit: End of episode
[ 26.072692] random: fast init done
[ 27.041201] sysrq: Emergency Sync
[ 27.042059] sysrq: Resetting
To reproduce:
# build kernel
cd linux
cp config-5.9.0-rc3-00323-gc50eb1ed654b5 .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[mm/lru] f4ba6c0e1b: vm-scalability.throughput -27.1% regression
by kernel test robot
Greeting,
FYI, we noticed a -27.1% regression of vm-scalability.throughput due to commit:
commit: f4ba6c0e1b65b09bafd8efe3a414c6dc1f8ab647 ("mm/lru: replace pgdat lru_lock with lruvec lock")
https://github.com/alexshi/linux.git lru59
in testcase: vm-scalability
on test machine: 104 threads Skylake with 192G memory
with following parameters:
runtime: 300s
test: lru-file-readtwice
cpufreq_governor: performance
ucode: 0x2006906
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
In addition to that, the commit also has significant impact on the following tests:
+------------------+----------------------------------------------------------------------+
| testcase: change | fio-basic: fio.read_iops -13.2% regression |
| test machine | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory |
| test parameters | bs=2M |
| | cpufreq_governor=performance |
| | disk=2pmem |
| | fs=ext4 |
| | ioengine=sync |
| | nr_task=50% |
| | runtime=200s |
| | rw=randread |
| | test_size=200G |
| | time_based=tb |
| | ucode=0x5002f01 |
+------------------+----------------------------------------------------------------------+
| testcase: change | fio-basic: fio.write_iops -11.5% regression |
| test machine | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory |
| test parameters | bs=2M |
| | cpufreq_governor=performance |
| | disk=2pmem |
| | fs=ext4 |
| | ioengine=mmap |
| | nr_task=50% |
| | runtime=200s |
| | rw=randrw |
| | test_size=200G |
| | time_based=tb |
| | ucode=0x5002f01 |
+------------------+----------------------------------------------------------------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-skl-fpga01/lru-file-readtwice/vm-scalability/0x2006906
commit:
c1139de73f ("mm/swap.c: serialize memcg changes in pagevec_lru_move_fn")
f4ba6c0e1b ("mm/lru: replace pgdat lru_lock with lruvec lock")
c1139de73fe749dc f4ba6c0e1b65b09bafd8efe3a41
---------------- ---------------------------
%stddev %change %stddev
\ | \
78238 ± 2% -26.0% 57874 ± 2% vm-scalability.median
16778421 -27.1% 12226460 ± 2% vm-scalability.throughput
2664544 ± 2% -14.8% 2268921 ± 2% vm-scalability.time.involuntary_context_switches
156047 -2.2% 152592 vm-scalability.time.minor_page_faults
28604 +2.5% 29309 vm-scalability.time.system_time
884.29 -29.1% 627.10 ± 2% vm-scalability.time.user_time
426101 ± 5% -38.5% 262034 ± 7% vm-scalability.time.voluntary_context_switches
5.034e+09 -27.1% 3.668e+09 ± 2% vm-scalability.workload
14823 ± 2% -24.7% 11159 ± 3% vmstat.system.cs
0.19 ± 7% -0.1 0.09 ± 18% mpstat.cpu.all.iowait%
0.04 ± 4% -0.0 0.03 ± 2% mpstat.cpu.all.soft%
2.55 ± 2% -0.7 1.80 ± 3% mpstat.cpu.all.usr%
12114633 ± 14% +54.9% 18760443 ± 9% meminfo.Inactive
11686607 ± 15% +56.9% 18334291 ± 9% meminfo.Inactive(file)
2088482 -9.9% 1881793 meminfo.KReclaimable
2088482 -9.9% 1881793 meminfo.SReclaimable
2325040 -9.0% 2114735 meminfo.Slab
5010 ± 7% -17.3% 4141 ± 11% slabinfo.eventpoll_pwq.active_objs
5010 ± 7% -17.3% 4141 ± 11% slabinfo.eventpoll_pwq.num_objs
3415382 -9.9% 3078648 slabinfo.radix_tree_node.active_objs
69378 -12.6% 60664 ± 2% slabinfo.radix_tree_node.active_slabs
3549339 -10.1% 3190972 slabinfo.radix_tree_node.num_objs
69378 -12.6% 60664 ± 2% slabinfo.radix_tree_node.num_slabs
2.822e+08 ± 2% -28.3% 2.023e+08 ± 7% numa-numastat.node0.local_node
51745467 ± 14% -41.5% 30268440 ± 23% numa-numastat.node0.numa_foreign
2.822e+08 ± 2% -28.3% 2.024e+08 ± 7% numa-numastat.node0.numa_hit
2.661e+08 ± 2% -22.1% 2.073e+08 ± 2% numa-numastat.node1.local_node
2.661e+08 ± 2% -22.1% 2.074e+08 ± 2% numa-numastat.node1.numa_hit
51745467 ± 14% -41.5% 30268440 ± 23% numa-numastat.node1.numa_miss
51764123 ± 14% -41.5% 30286505 ± 23% numa-numastat.node1.other_node
6235756 ± 12% +46.3% 9125374 ± 7% numa-meminfo.node0.Inactive
6030002 ± 11% +47.9% 8920631 ± 7% numa-meminfo.node0.Inactive(file)
1723400 -11.4% 1526767 numa-meminfo.node0.KReclaimable
1723400 -11.4% 1526767 numa-meminfo.node0.SReclaimable
1862550 -11.4% 1650092 numa-meminfo.node0.Slab
5871243 ± 32% +62.4% 9532341 ± 14% numa-meminfo.node1.Inactive
5648948 ± 33% +64.8% 9311342 ± 15% numa-meminfo.node1.Inactive(file)
8873204 ± 9% +21.3% 10758985 ± 6% numa-meminfo.node1.MemFree
97371 ± 8% +12.6% 109616 ± 5% numa-meminfo.node1.SUnreclaim
1496138 ± 11% +48.6% 2223320 ± 8% numa-vmstat.node0.nr_inactive_file
231.25 ± 12% -46.3% 124.25 ± 83% numa-vmstat.node0.nr_mlock
431567 -12.2% 379127 numa-vmstat.node0.nr_slab_reclaimable
1496130 ± 11% +48.6% 2223314 ± 8% numa-vmstat.node0.nr_zone_inactive_file
28296141 ± 19% -42.9% 16164783 ± 15% numa-vmstat.node0.numa_foreign
1.668e+08 ± 4% -27.4% 1.21e+08 ± 5% numa-vmstat.node0.numa_hit
1.667e+08 ± 4% -27.4% 1.21e+08 ± 5% numa-vmstat.node0.numa_local
2094310 ± 4% -58.6% 867710 ± 10% numa-vmstat.node0.workingset_nodereclaim
2120086 -14.8% 1806725 numa-vmstat.node0.workingset_nodes
2286467 ± 9% +23.3% 2819021 ± 5% numa-vmstat.node1.nr_free_pages
1390375 ± 32% +68.4% 2340803 ± 15% numa-vmstat.node1.nr_inactive_file
24337 ± 8% +12.6% 27395 ± 5% numa-vmstat.node1.nr_slab_unreclaimable
1390383 ± 32% +68.4% 2340806 ± 15% numa-vmstat.node1.nr_zone_inactive_file
1.582e+08 ± 2% -22.2% 1.231e+08 numa-vmstat.node1.numa_hit
1.581e+08 ± 2% -22.2% 1.23e+08 numa-vmstat.node1.numa_local
28306213 ± 19% -42.9% 16172227 ± 15% numa-vmstat.node1.numa_miss
28418902 ± 19% -42.5% 16340497 ± 15% numa-vmstat.node1.numa_other
702102 ± 12% -52.6% 332764 ± 14% proc-vmstat.allocstall_movable
1221 ± 4% -34.6% 798.75 ± 18% proc-vmstat.compact_fail
1407 ± 3% -29.9% 987.00 ± 19% proc-vmstat.compact_stall
40635877 -5.6% 38374214 proc-vmstat.nr_active_file
43833190 -1.4% 43201891 proc-vmstat.nr_file_pages
4490757 ± 7% +15.1% 5170115 ± 3% proc-vmstat.nr_free_pages
2897749 ± 15% +56.3% 4529243 ± 8% proc-vmstat.nr_inactive_file
522321 -9.7% 471528 proc-vmstat.nr_slab_reclaimable
59143 -1.5% 58264 proc-vmstat.nr_slab_unreclaimable
40635883 -5.6% 38374227 proc-vmstat.nr_zone_active_file
2897803 ± 15% +56.3% 4529273 ± 8% proc-vmstat.nr_zone_inactive_file
91891340 -31.8% 62710161 ± 3% proc-vmstat.numa_foreign
5.483e+08 -25.3% 4.097e+08 ± 2% proc-vmstat.numa_hit
5.483e+08 -25.3% 4.097e+08 ± 2% proc-vmstat.numa_local
91891340 -31.8% 62710161 ± 3% proc-vmstat.numa_miss
91928587 -31.7% 62746910 ± 3% proc-vmstat.numa_other
6.171e+08 -27.9% 4.449e+08 ± 2% proc-vmstat.pgactivate
4347183 -29.4% 3069587 ± 6% proc-vmstat.pgalloc_dma32
6.365e+08 -26.2% 4.7e+08 ± 2% proc-vmstat.pgalloc_normal
5.778e+08 -29.4% 4.079e+08 ± 2% proc-vmstat.pgdeactivate
6.44e+08 -25.9% 4.771e+08 ± 2% proc-vmstat.pgfree
5.778e+08 -29.4% 4.079e+08 ± 2% proc-vmstat.pgrefill
4.161e+08 ± 2% -31.2% 2.861e+08 ± 4% proc-vmstat.pgscan_direct
5.901e+08 -28.3% 4.229e+08 ± 2% proc-vmstat.pgscan_file
1.741e+08 ± 5% -21.4% 1.368e+08 ± 2% proc-vmstat.pgscan_kswapd
4.16e+08 ± 2% -31.2% 2.861e+08 ± 4% proc-vmstat.pgsteal_direct
5.901e+08 -28.3% 4.229e+08 ± 2% proc-vmstat.pgsteal_file
1.741e+08 ± 5% -21.4% 1.368e+08 ± 2% proc-vmstat.pgsteal_kswapd
7213180 ± 2% -54.9% 3254583 ± 6% proc-vmstat.slabs_scanned
2081477 ± 4% -58.5% 862994 ± 8% proc-vmstat.workingset_nodereclaim
2641633 -12.2% 2319297 proc-vmstat.workingset_nodes
2833 ± 19% +59.7% 4524 ± 8% sched_debug.cfs_rq:/.load.min
53.85 ± 2% -17.9% 44.20 ± 9% sched_debug.cfs_rq:/.load_avg.avg
2507591 ± 15% -34.0% 1654886 ± 21% sched_debug.cfs_rq:/.min_vruntime.stddev
0.48 ± 19% +60.3% 0.78 ± 8% sched_debug.cfs_rq:/.nr_running.min
12.77 ± 11% +18.6% 15.14 ± 6% sched_debug.cfs_rq:/.nr_spread_over.min
3956 ± 5% -13.1% 3439 ± 5% sched_debug.cfs_rq:/.runnable_avg.max
310.77 ± 28% +68.8% 524.45 ± 14% sched_debug.cfs_rq:/.runnable_avg.min
762.10 ± 5% -14.5% 651.40 ± 7% sched_debug.cfs_rq:/.runnable_avg.stddev
4083933 ± 17% -97.2% 116153 ±1553% sched_debug.cfs_rq:/.spread0.avg
8284486 ± 12% -46.4% 4442875 ± 46% sched_debug.cfs_rq:/.spread0.max
-2459228 +192.7% -7198991 sched_debug.cfs_rq:/.spread0.min
2504184 ± 15% -34.1% 1651381 ± 21% sched_debug.cfs_rq:/.spread0.stddev
169454 ± 4% -12.8% 147828 ± 4% sched_debug.cpu.avg_idle.stddev
375.39 ± 21% +56.1% 586.16 ± 7% sched_debug.cpu.curr->pid.min
0.00 ± 28% +71.9% 0.00 ± 38% sched_debug.cpu.next_balance.stddev
0.48 ± 19% +60.3% 0.78 ± 8% sched_debug.cpu.nr_running.min
0.81 ± 5% -12.8% 0.71 ± 7% sched_debug.cpu.nr_running.stddev
23392 ± 12% -27.0% 17068 ± 6% sched_debug.cpu.nr_switches.avg
19097 ± 14% -36.0% 12219 ± 6% sched_debug.cpu.nr_switches.min
3307 ± 13% -27.7% 2392 ± 6% sched_debug.cpu.nr_switches.stddev
192.53 ± 20% -60.0% 77.03 ± 5% sched_debug.cpu.nr_uninterruptible.max
-248.38 -46.0% -134.11 sched_debug.cpu.nr_uninterruptible.min
122.96 ± 20% -68.3% 38.97 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
22491 ± 13% -28.1% 16174 ± 7% sched_debug.cpu.sched_count.avg
30388 ± 14% -28.7% 21660 ± 7% sched_debug.cpu.sched_count.max
18448 ± 15% -44.9% 10163 ± 4% sched_debug.cpu.sched_count.min
2683 ± 17% -36.5% 1703 ± 8% sched_debug.cpu.sched_count.stddev
397.89 ± 17% -51.7% 192.11 ± 5% sched_debug.cpu.sched_goidle.avg
338.84 ± 13% -24.2% 256.96 ± 8% sched_debug.cpu.sched_goidle.stddev
8729 ± 16% -42.9% 4987 ± 6% sched_debug.cpu.ttwu_count.avg
12239 ± 17% -32.2% 8304 ± 11% sched_debug.cpu.ttwu_count.max
6456 ± 17% -54.6% 2934 ± 4% sched_debug.cpu.ttwu_count.min
4003 ± 14% -30.2% 2795 ± 8% sched_debug.cpu.ttwu_local.avg
2929 ± 17% -41.8% 1706 ± 6% sched_debug.cpu.ttwu_local.min
5.941e+09 ± 2% -19.9% 4.759e+09 perf-stat.i.branch-instructions
0.78 ± 7% -0.1 0.71 perf-stat.i.branch-miss-rate%
41746887 -26.0% 30894535 ± 2% perf-stat.i.branch-misses
1.76e+08 ± 3% -21.5% 1.382e+08 ± 3% perf-stat.i.cache-misses
4.213e+08 ± 2% -24.1% 3.196e+08 ± 2% perf-stat.i.cache-references
14694 ± 2% -24.8% 11045 ± 3% perf-stat.i.context-switches
8.12 ± 2% +26.2% 10.25 ± 2% perf-stat.i.cpi
898.55 ± 2% -30.5% 624.53 ± 2% perf-stat.i.cpu-migrations
1370 +26.0% 1726 perf-stat.i.cycles-between-cache-misses
0.20 ± 2% -0.0 0.16 ± 10% perf-stat.i.dTLB-load-miss-rate%
16593569 ± 4% -35.0% 10788430 ± 10% perf-stat.i.dTLB-load-misses
7.786e+09 ± 2% -21.6% 6.105e+09 ± 2% perf-stat.i.dTLB-loads
2534311 ± 5% -26.6% 1860291 ± 5% perf-stat.i.dTLB-store-misses
3.49e+09 ± 2% -27.0% 2.549e+09 ± 2% perf-stat.i.dTLB-stores
7620654 ± 2% -29.2% 5396804 ± 2% perf-stat.i.iTLB-load-misses
2.877e+10 ± 2% -21.2% 2.267e+10 perf-stat.i.instructions
170.34 ± 2% -22.1% 132.63 ± 2% perf-stat.i.metric.M/sec
3036 -2.9% 2947 perf-stat.i.minor-faults
36.19 ± 2% +5.1 41.26 ± 2% perf-stat.i.node-load-miss-rate%
24129704 ± 3% -8.3% 22121973 ± 5% perf-stat.i.node-load-misses
46356472 ± 3% -27.1% 33779162 ± 3% perf-stat.i.node-loads
8305402 -28.6% 5927794 ± 3% perf-stat.i.node-store-misses
5340827 ± 3% -27.2% 3887276 ± 3% perf-stat.i.node-stores
3036 -2.9% 2947 perf-stat.i.page-faults
14.65 -3.7% 14.11 perf-stat.overall.MPKI
0.70 -0.1 0.64 perf-stat.overall.branch-miss-rate%
41.75 +1.5 43.23 perf-stat.overall.cache-miss-rate%
8.84 +26.3% 11.17 perf-stat.overall.cpi
1446 +26.7% 1831 perf-stat.overall.cycles-between-cache-misses
0.21 ± 3% -0.0 0.18 ± 9% perf-stat.overall.dTLB-load-miss-rate%
3770 +11.2% 4193 perf-stat.overall.instructions-per-iTLB-miss
0.11 -20.8% 0.09 perf-stat.overall.ipc
34.27 +5.4 39.63 ± 3% perf-stat.overall.node-load-miss-rate%
1958 +8.8% 2131 perf-stat.overall.path-length
5.984e+09 ± 2% -19.9% 4.792e+09 perf-stat.ps.branch-instructions
41764376 -26.1% 30871855 ± 2% perf-stat.ps.branch-misses
1.773e+08 ± 3% -21.4% 1.393e+08 ± 2% perf-stat.ps.cache-misses
4.246e+08 ± 2% -24.1% 3.222e+08 ± 2% perf-stat.ps.cache-references
14865 ± 2% -24.9% 11166 ± 3% perf-stat.ps.context-switches
909.57 ± 2% -30.6% 631.57 ± 2% perf-stat.ps.cpu-migrations
16782889 ± 3% -35.0% 10914287 ± 10% perf-stat.ps.dTLB-load-misses
7.847e+09 ± 2% -21.6% 6.151e+09 perf-stat.ps.dTLB-loads
2416707 ± 5% -25.6% 1798684 ± 5% perf-stat.ps.dTLB-store-misses
3.514e+09 ± 2% -27.0% 2.564e+09 ± 2% perf-stat.ps.dTLB-stores
7685832 -29.2% 5445344 ± 2% perf-stat.ps.iTLB-load-misses
2.898e+10 ± 2% -21.2% 2.283e+10 perf-stat.ps.instructions
2987 -3.0% 2896 perf-stat.ps.minor-faults
24325464 ± 3% -8.2% 22335021 ± 5% perf-stat.ps.node-load-misses
46664275 ± 3% -27.1% 34005024 ± 3% perf-stat.ps.node-loads
8396100 -28.6% 5992119 ± 3% perf-stat.ps.node-store-misses
5376741 ± 3% -27.2% 3913520 ± 3% perf-stat.ps.node-stores
2987 -3.0% 2897 perf-stat.ps.page-faults
9.859e+12 -20.7% 7.818e+12 perf-stat.total.instructions
13858 ± 8% -29.1% 9831 ± 2% softirqs.CPU1.RCU
13050 ± 5% -24.2% 9896 ± 7% softirqs.CPU10.RCU
11508 ± 5% -31.0% 7935 ± 8% softirqs.CPU100.RCU
11447 ± 6% -32.5% 7728 ± 7% softirqs.CPU101.RCU
11557 ± 5% -29.9% 8104 ± 6% softirqs.CPU102.RCU
11814 -26.5% 8680 ± 10% softirqs.CPU103.RCU
14197 ± 7% -30.4% 9879 ± 6% softirqs.CPU11.RCU
13522 ± 6% -31.8% 9226 ± 6% softirqs.CPU12.RCU
13895 ± 8% -26.8% 10167 ± 10% softirqs.CPU13.RCU
15068 ± 14% -38.4% 9281 ± 4% softirqs.CPU14.RCU
13865 ± 9% -34.1% 9143 softirqs.CPU15.RCU
13810 ± 9% -30.5% 9594 ± 8% softirqs.CPU16.RCU
14278 ± 11% -34.7% 9322 ± 6% softirqs.CPU17.RCU
14065 ± 7% -29.2% 9956 ± 11% softirqs.CPU18.RCU
13789 ± 2% -33.6% 9152 ± 5% softirqs.CPU19.RCU
15056 ± 5% -36.6% 9541 ± 4% softirqs.CPU2.RCU
13446 ± 6% -33.6% 8933 ± 7% softirqs.CPU20.RCU
13280 ± 8% -31.3% 9127 ± 4% softirqs.CPU21.RCU
13293 ± 5% -25.5% 9900 ± 5% softirqs.CPU22.RCU
13400 ± 6% -22.9% 10332 ± 10% softirqs.CPU23.RCU
13405 ± 4% -31.9% 9134 ± 4% softirqs.CPU24.RCU
13386 ± 8% -34.2% 8811 ± 4% softirqs.CPU25.RCU
12832 ± 8% -27.4% 9319 ± 12% softirqs.CPU26.RCU
12339 ± 4% -32.2% 8362 ± 11% softirqs.CPU27.RCU
12482 ± 8% -26.4% 9183 ± 20% softirqs.CPU28.RCU
12792 ± 5% -33.1% 8561 ± 13% softirqs.CPU29.RCU
13615 ± 5% -31.0% 9396 ± 7% softirqs.CPU3.RCU
12989 ± 3% -23.1% 9982 ± 14% softirqs.CPU30.RCU
14332 ± 18% -35.6% 9237 ± 9% softirqs.CPU31.RCU
12972 ± 5% -25.6% 9652 ± 19% softirqs.CPU32.RCU
12599 ± 4% -29.2% 8916 ± 6% softirqs.CPU33.RCU
12938 ± 7% -32.2% 8769 ± 12% softirqs.CPU34.RCU
12834 ± 7% -24.4% 9696 ± 9% softirqs.CPU35.RCU
12522 ± 3% -26.5% 9206 ± 14% softirqs.CPU36.RCU
12077 ± 5% -25.5% 9000 ± 11% softirqs.CPU37.RCU
12340 ± 7% -27.6% 8938 ± 8% softirqs.CPU38.RCU
12308 ± 6% -26.5% 9043 ± 15% softirqs.CPU39.RCU
13606 ± 4% -29.2% 9638 ± 5% softirqs.CPU4.RCU
12553 ± 4% -26.6% 9212 ± 10% softirqs.CPU40.RCU
12762 ± 8% -32.2% 8648 ± 7% softirqs.CPU41.RCU
12571 ± 7% -28.3% 9007 ± 15% softirqs.CPU42.RCU
12177 ± 7% -25.7% 9052 ± 11% softirqs.CPU43.RCU
12424 ± 5% -26.6% 9124 ± 14% softirqs.CPU44.RCU
12558 ± 4% -26.7% 9208 ± 23% softirqs.CPU45.RCU
12062 ± 3% -26.4% 8874 ± 20% softirqs.CPU46.RCU
12180 ± 6% -32.1% 8264 ± 10% softirqs.CPU47.RCU
11966 ± 3% -24.6% 9026 ± 10% softirqs.CPU48.RCU
12218 ± 2% -28.9% 8684 ± 12% softirqs.CPU49.RCU
13701 ± 4% -33.5% 9113 ± 4% softirqs.CPU5.RCU
12104 ± 4% -25.9% 8965 ± 15% softirqs.CPU50.RCU
11950 ± 5% -28.6% 8533 ± 10% softirqs.CPU51.RCU
13024 ± 6% -31.9% 8867 ± 5% softirqs.CPU52.RCU
12808 ± 4% -35.0% 8321 ± 3% softirqs.CPU53.RCU
13732 ± 10% -35.4% 8872 ± 5% softirqs.CPU54.RCU
13267 ± 8% -36.1% 8474 ± 3% softirqs.CPU55.RCU
12765 ± 3% -31.3% 8767 ± 5% softirqs.CPU56.RCU
12860 ± 3% -32.8% 8639 ± 6% softirqs.CPU57.RCU
12612 ± 3% -33.5% 8390 ± 2% softirqs.CPU58.RCU
12861 ± 4% -29.6% 9056 ± 5% softirqs.CPU59.RCU
15221 ± 14% -39.2% 9257 ± 3% softirqs.CPU6.RCU
13037 ± 5% -33.3% 8698 ± 7% softirqs.CPU60.RCU
12789 ± 5% -31.1% 8811 softirqs.CPU61.RCU
13407 ± 5% -32.4% 9064 ± 5% softirqs.CPU62.RCU
13282 ± 6% -32.8% 8920 ± 8% softirqs.CPU63.RCU
12826 ± 4% -31.4% 8801 ± 2% softirqs.CPU64.RCU
12727 ± 5% -30.2% 8886 softirqs.CPU65.RCU
13009 ± 8% -34.0% 8589 ± 3% softirqs.CPU66.RCU
12986 ± 4% -32.6% 8754 ± 5% softirqs.CPU67.RCU
12601 ± 5% -27.3% 9157 ± 7% softirqs.CPU68.RCU
12975 ± 8% -33.0% 8698 ± 5% softirqs.CPU69.RCU
13906 ± 8% -29.2% 9839 ± 6% softirqs.CPU7.RCU
12809 ± 7% -32.6% 8631 ± 3% softirqs.CPU70.RCU
13304 ± 3% -35.4% 8596 ± 5% softirqs.CPU71.RCU
12886 ± 4% -33.8% 8528 ± 4% softirqs.CPU73.RCU
12840 ± 2% -32.6% 8659 ± 2% softirqs.CPU74.RCU
12621 ± 8% -34.2% 8309 ± 3% softirqs.CPU75.RCU
12237 ± 3% -30.2% 8543 ± 7% softirqs.CPU76.RCU
12494 ± 7% -33.9% 8258 ± 5% softirqs.CPU77.RCU
11897 ± 3% -29.7% 8367 ± 10% softirqs.CPU78.RCU
11796 ± 3% -32.4% 7975 ± 12% softirqs.CPU79.RCU
13300 ± 5% -28.5% 9505 ± 9% softirqs.CPU8.RCU
11818 ± 5% -27.9% 8520 ± 6% softirqs.CPU80.RCU
11852 ± 5% -22.5% 9185 ± 14% softirqs.CPU81.RCU
11806 ± 2% -28.3% 8469 ± 11% softirqs.CPU82.RCU
13762 ± 24% -40.6% 8178 ± 10% softirqs.CPU83.RCU
11775 ± 7% -31.4% 8082 ± 4% softirqs.CPU84.RCU
11858 ± 3% -34.0% 7822 ± 6% softirqs.CPU85.RCU
12253 ± 10% -31.3% 8415 ± 11% softirqs.CPU86.RCU
11607 ± 5% -30.7% 8045 ± 5% softirqs.CPU87.RCU
11638 ± 4% -32.9% 7808 ± 6% softirqs.CPU88.RCU
11495 ± 5% -30.5% 7985 ± 6% softirqs.CPU89.RCU
13934 ± 14% -26.0% 10311 ± 8% softirqs.CPU9.RCU
11533 ± 4% -31.4% 7906 ± 6% softirqs.CPU90.RCU
11370 ± 3% -30.7% 7879 ± 6% softirqs.CPU91.RCU
11433 ± 3% -28.2% 8213 ± 6% softirqs.CPU92.RCU
11842 ± 3% -33.4% 7884 ± 5% softirqs.CPU93.RCU
11938 ± 3% -34.3% 7844 ± 3% softirqs.CPU94.RCU
11653 ± 6% -34.0% 7689 ± 6% softirqs.CPU95.RCU
11736 ± 4% -33.5% 7803 ± 4% softirqs.CPU96.RCU
11230 ± 3% -27.7% 8119 ± 8% softirqs.CPU97.RCU
11865 ± 2% -34.8% 7739 ± 4% softirqs.CPU98.RCU
11723 ± 4% -34.3% 7704 ± 3% softirqs.CPU99.RCU
1329005 ± 4% -30.3% 926140 ± 4% softirqs.RCU
13.86 ± 8% -13.9 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.activate_page.mark_page_accessed.generic_file_buffered_read
13.83 ± 8% -13.8 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.activate_page.mark_page_accessed
13.43 ± 8% -13.4 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.page_cache_readahead_unbounded
13.41 ± 8% -13.4 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru
15.36 ± 8% -8.7 6.64 ± 10% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node
15.34 ± 8% -8.7 6.62 ± 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.lru_note_cost.shrink_inactive_list.shrink_lruvec
14.57 ± 8% -8.2 6.41 ± 9% perf-profile.calltrace.cycles-pp.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
27.21 ± 7% -5.5 21.75 ± 9% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages.try_to_free_pages
4.93 ± 6% -1.2 3.69 ± 16% perf-profile.calltrace.cycles-pp.ret_from_fork
4.93 ± 6% -1.2 3.69 ± 16% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
2.46 ± 13% -1.0 1.42 ± 18% perf-profile.calltrace.cycles-pp.drain_local_pages_wq.process_one_work.worker_thread.kthread.ret_from_fork
2.46 ± 13% -1.0 1.42 ± 18% perf-profile.calltrace.cycles-pp.drain_pages.drain_local_pages_wq.process_one_work.worker_thread.kthread
2.47 ± 13% -1.0 1.44 ± 18% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
2.46 ± 13% -1.0 1.42 ± 18% perf-profile.calltrace.cycles-pp.drain_pages_zone.drain_pages.drain_local_pages_wq.process_one_work.worker_thread
2.46 ± 13% -1.0 1.43 ± 18% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
2.45 ± 13% -1.0 1.42 ± 18% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.drain_pages_zone.drain_pages.drain_local_pages_wq.process_one_work
2.25 ± 13% -1.0 1.29 ± 18% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.drain_pages_zone.drain_pages
2.25 ± 13% -1.0 1.29 ± 18% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_pcppages_bulk.drain_pages_zone.drain_pages.drain_local_pages_wq
1.47 ± 2% -0.8 0.72 ± 37% perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.generic_file_buffered_read.xfs_file_buffered_aio_read.xfs_file_read_iter
1.46 ± 2% -0.7 0.71 ± 37% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.generic_file_buffered_read.xfs_file_buffered_aio_read
2.38 ± 2% -0.7 1.71 ± 5% perf-profile.calltrace.cycles-pp.iomap_readahead.read_pages.page_cache_readahead_unbounded.generic_file_buffered_read.xfs_file_buffered_aio_read
2.37 ± 2% -0.7 1.71 ± 4% perf-profile.calltrace.cycles-pp.iomap_apply.iomap_readahead.read_pages.page_cache_readahead_unbounded.generic_file_buffered_read
2.38 ± 2% -0.7 1.71 ± 5% perf-profile.calltrace.cycles-pp.read_pages.page_cache_readahead_unbounded.generic_file_buffered_read.xfs_file_buffered_aio_read.xfs_file_read_iter
2.35 ± 3% -0.7 1.70 ± 4% perf-profile.calltrace.cycles-pp.iomap_readahead_actor.iomap_apply.iomap_readahead.read_pages.page_cache_readahead_unbounded
2.22 ± 3% -0.6 1.60 ± 4% perf-profile.calltrace.cycles-pp.iomap_readpage_actor.iomap_readahead_actor.iomap_apply.iomap_readahead.read_pages
0.87 ± 6% -0.6 0.26 ±100% perf-profile.calltrace.cycles-pp.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node.balance_pgdat
1.58 ± 2% -0.6 1.02 ± 20% perf-profile.calltrace.cycles-pp.copy_page_to_iter.generic_file_buffered_read.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read
0.72 ± 5% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
1.23 ± 6% -0.4 0.87 ± 7% perf-profile.calltrace.cycles-pp.write
0.60 ± 7% +0.4 1.01 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node.balance_pgdat
90.11 +2.5 92.60 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
90.04 +2.5 92.56 perf-profile.calltrace.cycles-pp.xfs_file_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
89.91 +2.6 92.47 perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read.vfs_read.ksys_read
89.72 +2.6 92.34 perf-profile.calltrace.cycles-pp.generic_file_buffered_read.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read.vfs_read
10.09 ± 8% +3.5 13.56 ± 9% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
10.68 ± 8% +3.9 14.56 ± 9% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node
13.84 ± 8% +4.6 18.46 ± 9% perf-profile.calltrace.cycles-pp.lru_cache_add.add_to_page_cache_lru.page_cache_readahead_unbounded.generic_file_buffered_read.xfs_file_buffered_aio_read
13.82 ± 8% +4.6 18.44 ± 9% perf-profile.calltrace.cycles-pp.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.page_cache_readahead_unbounded.generic_file_buffered_read
15.38 ± 8% +4.9 20.32 ± 9% perf-profile.calltrace.cycles-pp.mark_page_accessed.generic_file_buffered_read.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read
14.49 ± 8% +5.2 19.70 ± 10% perf-profile.calltrace.cycles-pp.activate_page.mark_page_accessed.generic_file_buffered_read.xfs_file_buffered_aio_read.xfs_file_read_iter
14.46 ± 8% +5.2 19.68 ± 10% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.activate_page.mark_page_accessed.generic_file_buffered_read.xfs_file_buffered_aio_read
14.57 ± 8% +6.2 20.78 ± 11% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.page_cache_readahead_unbounded.generic_file_buffered_read.xfs_file_buffered_aio_read.xfs_file_read_iter
0.00 +18.0 18.00 ± 9% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add
0.00 +18.0 18.02 ± 9% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru
0.00 +18.0 18.02 ± 9% perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.page_cache_readahead_unbounded
0.00 +19.1 19.05 ± 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.activate_page
0.00 +19.1 19.07 ± 10% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.activate_page.mark_page_accessed
0.00 +19.1 19.07 ± 10% perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.pagevec_lru_move_fn.activate_page.mark_page_accessed.generic_file_buffered_read
15.72 ± 8% -8.4 7.31 ± 8% perf-profile.children.cycles-pp.lru_note_cost
29.42 ± 7% -5.2 24.27 ± 7% perf-profile.children.cycles-pp.shrink_inactive_list
2.90 ± 11% -1.3 1.64 ± 22% perf-profile.children.cycles-pp._raw_spin_lock
4.93 ± 6% -1.2 3.69 ± 16% perf-profile.children.cycles-pp.kthread
4.94 ± 6% -1.2 3.70 ± 16% perf-profile.children.cycles-pp.ret_from_fork
2.60 ± 10% -1.1 1.52 ± 17% perf-profile.children.cycles-pp.free_pcppages_bulk
2.46 ± 13% -1.0 1.42 ± 18% perf-profile.children.cycles-pp.drain_local_pages_wq
2.46 ± 13% -1.0 1.42 ± 18% perf-profile.children.cycles-pp.drain_pages
2.46 ± 13% -1.0 1.42 ± 18% perf-profile.children.cycles-pp.drain_pages_zone
2.47 ± 13% -1.0 1.44 ± 18% perf-profile.children.cycles-pp.worker_thread
2.46 ± 13% -1.0 1.43 ± 18% perf-profile.children.cycles-pp.process_one_work
2.10 ± 7% -0.9 1.16 ± 7% perf-profile.children.cycles-pp.shrink_page_list
1.57 ± 12% -0.8 0.79 ± 11% perf-profile.children.cycles-pp.__remove_mapping
2.38 ± 2% -0.7 1.71 ± 4% perf-profile.children.cycles-pp.iomap_readahead
2.38 ± 2% -0.7 1.72 ± 4% perf-profile.children.cycles-pp.read_pages
2.37 ± 2% -0.7 1.71 ± 4% perf-profile.children.cycles-pp.iomap_apply
2.35 ± 2% -0.7 1.70 ± 4% perf-profile.children.cycles-pp.iomap_readahead_actor
2.23 ± 3% -0.6 1.60 ± 4% perf-profile.children.cycles-pp.iomap_readpage_actor
1.60 -0.5 1.11 ± 5% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
1.15 ± 3% -0.4 0.71 ± 20% perf-profile.children.cycles-pp.get_page_from_freelist
1.59 ± 2% -0.4 1.15 ± 6% perf-profile.children.cycles-pp.copy_page_to_iter
1.48 ± 2% -0.4 1.07 ± 6% perf-profile.children.cycles-pp.copyout
1.47 ± 2% -0.4 1.07 ± 6% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
1.26 ± 6% -0.4 0.90 ± 7% perf-profile.children.cycles-pp.write
0.90 ± 4% -0.4 0.54 ± 24% perf-profile.children.cycles-pp.rmqueue
1.16 ± 3% -0.3 0.83 ± 4% perf-profile.children.cycles-pp.memset_erms
0.78 ± 6% -0.3 0.46 ± 29% perf-profile.children.cycles-pp.rmqueue_bulk
0.47 ± 28% -0.3 0.17 ± 13% perf-profile.children.cycles-pp.shrink_slab
0.87 ± 4% -0.3 0.59 ± 9% perf-profile.children.cycles-pp.workingset_age_nonresident
0.41 ± 31% -0.3 0.13 ± 14% perf-profile.children.cycles-pp.do_shrink_slab
0.99 ± 3% -0.3 0.72 ± 5% perf-profile.children.cycles-pp.iomap_set_range_uptodate
0.91 -0.3 0.65 ± 4% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.86 -0.2 0.61 ± 4% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.83 ± 3% -0.2 0.59 ± 9% perf-profile.children.cycles-pp.workingset_activation
0.66 ± 2% -0.2 0.46 ± 4% perf-profile.children.cycles-pp.__list_del_entry_valid
0.49 ± 5% -0.1 0.35 ± 4% perf-profile.children.cycles-pp.__delete_from_page_cache
0.45 ± 3% -0.1 0.31 ± 9% perf-profile.children.cycles-pp.pagecache_get_page
0.43 ± 3% -0.1 0.30 ± 9% perf-profile.children.cycles-pp.find_get_entry
0.47 ± 2% -0.1 0.35 ± 14% perf-profile.children.cycles-pp.ksys_write
0.42 ± 3% -0.1 0.30 ± 4% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.31 ± 2% -0.1 0.20 ± 6% perf-profile.children.cycles-pp.security_file_permission
0.23 ± 8% -0.1 0.12 ± 3% perf-profile.children.cycles-pp.workingset_eviction
0.35 ± 6% -0.1 0.24 ± 10% perf-profile.children.cycles-pp.xas_load
0.34 ± 3% -0.1 0.24 ± 2% perf-profile.children.cycles-pp.xas_store
0.14 ± 12% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.start_secondary
0.38 ± 3% -0.1 0.29 ± 17% perf-profile.children.cycles-pp.vfs_write
0.14 ± 12% -0.1 0.04 ± 60% perf-profile.children.cycles-pp.secondary_startup_64
0.14 ± 12% -0.1 0.04 ± 60% perf-profile.children.cycles-pp.cpu_startup_entry
0.14 ± 12% -0.1 0.04 ± 60% perf-profile.children.cycles-pp.do_idle
0.13 ± 12% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.cpuidle_enter
0.13 ± 12% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.cpuidle_enter_state
0.13 ± 14% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.intel_idle
0.21 ± 3% -0.1 0.14 ± 3% perf-profile.children.cycles-pp.mem_cgroup_charge
0.21 ± 2% -0.1 0.13 ± 5% perf-profile.children.cycles-pp.__mod_memcg_state
0.35 ± 2% -0.1 0.28 ± 7% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.23 ± 22% -0.1 0.16 ± 16% perf-profile.children.cycles-pp.free_unref_page_list
0.20 ± 2% -0.1 0.13 ± 9% perf-profile.children.cycles-pp.common_file_perm
0.16 ± 8% -0.1 0.10 ± 14% perf-profile.children.cycles-pp.count_shadow_nodes
0.30 -0.1 0.24 ± 5% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.15 ± 3% -0.1 0.10 ± 4% perf-profile.children.cycles-pp.__mod_lruvec_state
0.57 -0.1 0.52 ± 3% perf-profile.children.cycles-pp.isolate_lru_pages
0.29 ± 2% -0.1 0.24 ± 5% perf-profile.children.cycles-pp.asm_call_on_stack
0.21 ± 3% -0.1 0.16 ± 5% perf-profile.children.cycles-pp.xas_create
0.13 ± 5% -0.0 0.08 ± 8% perf-profile.children.cycles-pp.__mod_node_page_state
0.07 ± 5% -0.0 0.03 ±100% perf-profile.children.cycles-pp.xas_init_marks
0.15 ± 8% -0.0 0.11 ± 11% perf-profile.children.cycles-pp.unaccount_page_cache_page
0.13 ± 6% -0.0 0.09 ± 8% perf-profile.children.cycles-pp.xa_load
0.07 ± 7% -0.0 0.03 ±100% perf-profile.children.cycles-pp.apparmor_file_permission
0.12 ± 10% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.__fsnotify_parent
0.10 ± 5% -0.0 0.06 perf-profile.children.cycles-pp.__count_memcg_events
0.06 -0.0 0.03 ±100% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.07 -0.0 0.04 ± 57% perf-profile.children.cycles-pp.xfs_iunlock
0.07 -0.0 0.04 ± 57% perf-profile.children.cycles-pp.atime_needs_update
0.25 -0.0 0.22 ± 5% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.12 ± 3% -0.0 0.09 ± 9% perf-profile.children.cycles-pp.__fdget_pos
0.10 ± 7% -0.0 0.07 ± 10% perf-profile.children.cycles-pp.__fget_light
0.09 -0.0 0.06 perf-profile.children.cycles-pp.try_charge
0.08 ± 5% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.10 -0.0 0.07 ± 5% perf-profile.children.cycles-pp.xfs_ilock
0.09 -0.0 0.06 ± 6% perf-profile.children.cycles-pp.___might_sleep
0.24 ± 2% -0.0 0.21 ± 5% perf-profile.children.cycles-pp.hrtimer_interrupt
0.09 ± 5% -0.0 0.06 perf-profile.children.cycles-pp.down_read
0.08 ± 5% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.touch_atime
0.08 ± 5% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.mem_cgroup_uncharge_list
0.09 ± 5% -0.0 0.06 ± 6% perf-profile.children.cycles-pp._cond_resched
0.15 ± 3% -0.0 0.13 ± 3% perf-profile.children.cycles-pp.tick_sched_timer
0.14 ± 3% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.tick_sched_handle
0.14 ± 3% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.update_process_times
0.11 ± 4% -0.0 0.09 ± 4% perf-profile.children.cycles-pp.scheduler_tick
0.18 ± 2% -0.0 0.17 ± 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.13 ± 11% +0.0 0.17 ± 12% perf-profile.children.cycles-pp.handle_mm_fault
0.18 ± 3% +0.1 0.25 ± 4% perf-profile.children.cycles-pp.lru_add_drain_cpu
0.42 ± 5% +0.1 0.53 ± 8% perf-profile.children.cycles-pp.lru_add_drain
0.22 ± 3% +0.1 0.33 ± 6% perf-profile.children.cycles-pp.move_pages_to_lru
0.71 ± 3% +1.6 2.31 ± 61% perf-profile.children.cycles-pp.__add_to_page_cache_locked
0.11 ± 14% +1.8 1.88 ± 74% perf-profile.children.cycles-pp.xas_nomem
0.13 ± 16% +1.8 1.90 ± 73% perf-profile.children.cycles-pp.kmem_cache_alloc
0.11 ± 17% +1.8 1.90 ± 74% perf-profile.children.cycles-pp.__slab_alloc
0.11 ± 17% +1.8 1.90 ± 74% perf-profile.children.cycles-pp.___slab_alloc
0.11 ± 19% +1.8 1.90 ± 74% perf-profile.children.cycles-pp.allocate_slab
93.02 +1.8 94.84 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
91.34 +2.3 93.69 perf-profile.children.cycles-pp.do_syscall_64
90.50 +2.4 92.88 perf-profile.children.cycles-pp.ksys_read
90.42 +2.4 92.82 perf-profile.children.cycles-pp.vfs_read
90.11 +2.5 92.61 perf-profile.children.cycles-pp.new_sync_read
90.05 +2.5 92.56 perf-profile.children.cycles-pp.xfs_file_read_iter
89.92 +2.6 92.48 perf-profile.children.cycles-pp.xfs_file_buffered_aio_read
89.73 +2.6 92.35 perf-profile.children.cycles-pp.generic_file_buffered_read
83.54 +4.4 87.94 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
13.86 ± 8% +4.6 18.50 ± 9% perf-profile.children.cycles-pp.lru_cache_add
14.01 ± 8% +4.7 18.73 ± 9% perf-profile.children.cycles-pp.__pagevec_lru_add
15.38 ± 8% +4.9 20.33 ± 9% perf-profile.children.cycles-pp.mark_page_accessed
14.49 ± 8% +5.2 19.70 ± 10% perf-profile.children.cycles-pp.activate_page
14.70 ± 8% +5.3 19.96 ± 9% perf-profile.children.cycles-pp.pagevec_lru_move_fn
14.57 ± 8% +6.2 20.78 ± 11% perf-profile.children.cycles-pp.add_to_page_cache_lru
28.48 ± 7% +9.5 37.97 ± 9% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +37.7 37.69 ± 9% perf-profile.children.cycles-pp.lock_page_lruvec_irqsave
1.54 -0.5 1.07 ± 6% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
1.46 ± 2% -0.4 1.06 ± 6% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.16 ± 3% -0.3 0.83 ± 4% perf-profile.self.cycles-pp.memset_erms
0.86 ± 4% -0.3 0.58 ± 9% perf-profile.self.cycles-pp.workingset_age_nonresident
0.98 ± 2% -0.3 0.71 ± 5% perf-profile.self.cycles-pp.iomap_set_range_uptodate
0.91 -0.3 0.65 ± 4% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.80 -0.2 0.57 ± 4% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.66 ± 2% -0.2 0.46 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.13 ± 14% -0.1 0.04 ± 58% perf-profile.self.cycles-pp.intel_idle
0.26 ± 3% -0.1 0.17 ± 11% perf-profile.self.cycles-pp.find_get_entry
0.28 ± 5% -0.1 0.20 ± 9% perf-profile.self.cycles-pp.xas_load
0.21 ± 2% -0.1 0.13 ± 3% perf-profile.self.cycles-pp.__mod_memcg_state
0.21 ± 2% -0.1 0.13 ± 8% perf-profile.self.cycles-pp.get_page_from_freelist
0.10 ± 8% -0.1 0.04 ± 57% perf-profile.self.cycles-pp.workingset_eviction
0.17 ± 3% -0.1 0.11 ± 7% perf-profile.self.cycles-pp.common_file_perm
0.19 ± 4% -0.1 0.13 ± 11% perf-profile.self.cycles-pp.free_pcppages_bulk
0.12 ± 8% -0.1 0.07 ± 14% perf-profile.self.cycles-pp.count_shadow_nodes
0.19 ± 4% -0.0 0.14 ± 6% perf-profile.self.cycles-pp.xas_create
0.13 ± 5% -0.0 0.08 ± 8% perf-profile.self.cycles-pp.__mod_node_page_state
0.13 ± 3% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.xfs_file_read_iter
0.12 ± 9% -0.0 0.08 ± 8% perf-profile.self.cycles-pp.__fsnotify_parent
0.12 ± 7% -0.0 0.08 ± 10% perf-profile.self.cycles-pp.__add_to_page_cache_locked
0.12 ± 8% -0.0 0.09 ± 7% perf-profile.self.cycles-pp.shrink_page_list
0.10 ± 5% -0.0 0.06 perf-profile.self.cycles-pp.__count_memcg_events
0.13 ± 3% -0.0 0.10 ± 7% perf-profile.self.cycles-pp.generic_file_buffered_read
0.07 ± 10% -0.0 0.04 ± 58% perf-profile.self.cycles-pp.shrink_active_list
0.10 ± 5% -0.0 0.07 ± 12% perf-profile.self.cycles-pp.__fget_light
0.08 -0.0 0.05 ± 8% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.09 ± 4% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.___might_sleep
0.11 ± 7% -0.0 0.09 ± 13% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.07 -0.0 0.05 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.07 ± 6% -0.0 0.05 perf-profile.self.cycles-pp.vfs_read
0.07 ± 5% +0.0 0.10 ± 11% perf-profile.self.cycles-pp.__list_add_valid
0.13 +0.0 0.17 ± 3% perf-profile.self.cycles-pp.__activate_page
0.20 ± 2% +0.1 0.25 ± 2% perf-profile.self.cycles-pp.isolate_lru_pages
0.08 ± 5% +0.1 0.14 ± 8% perf-profile.self.cycles-pp.pagevec_lru_move_fn
0.14 ± 5% +0.1 0.22 ± 4% perf-profile.self.cycles-pp.__pagevec_lru_add
0.09 ± 4% +0.1 0.22 ± 6% perf-profile.self.cycles-pp.move_pages_to_lru
83.54 +4.4 87.94 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
516002 ± 7% -45.4% 281637 ± 12% interrupts.CAL:Function_call_interrupts
5761 ± 7% -39.4% 3490 ± 18% interrupts.CPU0.CAL:Function_call_interrupts
6073 ± 8% -44.2% 3387 ± 14% interrupts.CPU0.RES:Rescheduling_interrupts
5018 ± 7% -41.9% 2914 ± 9% interrupts.CPU1.CAL:Function_call_interrupts
6066 ± 7% -44.5% 3366 ± 14% interrupts.CPU1.RES:Rescheduling_interrupts
4799 ± 6% -44.0% 2685 ± 11% interrupts.CPU10.CAL:Function_call_interrupts
5965 ± 6% -46.4% 3195 ± 13% interrupts.CPU10.RES:Rescheduling_interrupts
5137 ± 8% -46.6% 2744 ± 14% interrupts.CPU100.CAL:Function_call_interrupts
4593 ± 6% -43.9% 2577 ± 11% interrupts.CPU100.RES:Rescheduling_interrupts
5105 ± 7% -47.1% 2698 ± 12% interrupts.CPU101.CAL:Function_call_interrupts
4582 ± 7% -44.1% 2563 ± 11% interrupts.CPU101.RES:Rescheduling_interrupts
5100 ± 8% -47.9% 2658 ± 13% interrupts.CPU102.CAL:Function_call_interrupts
4556 ± 7% -43.2% 2588 ± 12% interrupts.CPU102.RES:Rescheduling_interrupts
5181 ± 7% -46.0% 2798 ± 13% interrupts.CPU103.CAL:Function_call_interrupts
4679 ± 7% -42.0% 2714 ± 8% interrupts.CPU103.RES:Rescheduling_interrupts
4754 ± 7% -44.4% 2645 ± 10% interrupts.CPU11.CAL:Function_call_interrupts
5949 ± 8% -45.7% 3230 ± 11% interrupts.CPU11.RES:Rescheduling_interrupts
4732 ± 7% -43.8% 2658 ± 10% interrupts.CPU12.CAL:Function_call_interrupts
5862 ± 8% -44.8% 3233 ± 12% interrupts.CPU12.RES:Rescheduling_interrupts
4721 ± 6% -43.4% 2671 ± 12% interrupts.CPU13.CAL:Function_call_interrupts
5970 ± 7% -45.4% 3259 ± 12% interrupts.CPU13.RES:Rescheduling_interrupts
4772 ± 7% -44.6% 2643 ± 10% interrupts.CPU14.CAL:Function_call_interrupts
6055 ± 8% -45.9% 3278 ± 12% interrupts.CPU14.RES:Rescheduling_interrupts
4720 ± 6% -44.1% 2639 ± 10% interrupts.CPU15.CAL:Function_call_interrupts
5959 ± 8% -47.2% 3145 ± 11% interrupts.CPU15.RES:Rescheduling_interrupts
4689 ± 6% -44.0% 2623 ± 12% interrupts.CPU16.CAL:Function_call_interrupts
5887 ± 7% -45.4% 3213 ± 12% interrupts.CPU16.RES:Rescheduling_interrupts
4715 ± 7% -44.0% 2638 ± 11% interrupts.CPU17.CAL:Function_call_interrupts
5877 ± 8% -44.9% 3240 ± 14% interrupts.CPU17.RES:Rescheduling_interrupts
4680 ± 7% -43.4% 2649 ± 11% interrupts.CPU18.CAL:Function_call_interrupts
5899 ± 6% -45.6% 3209 ± 12% interrupts.CPU18.RES:Rescheduling_interrupts
4710 ± 7% -44.6% 2608 ± 11% interrupts.CPU19.CAL:Function_call_interrupts
5911 ± 7% -46.4% 3171 ± 13% interrupts.CPU19.RES:Rescheduling_interrupts
4998 ± 8% -42.9% 2856 ± 12% interrupts.CPU2.CAL:Function_call_interrupts
6106 ± 7% -47.1% 3228 ± 15% interrupts.CPU2.RES:Rescheduling_interrupts
4690 ± 7% -44.6% 2599 ± 10% interrupts.CPU20.CAL:Function_call_interrupts
5958 ± 7% -46.5% 3185 ± 11% interrupts.CPU20.RES:Rescheduling_interrupts
4686 ± 7% -44.8% 2585 ± 10% interrupts.CPU21.CAL:Function_call_interrupts
5879 ± 7% -45.9% 3178 ± 12% interrupts.CPU21.RES:Rescheduling_interrupts
4653 ± 6% -44.5% 2584 ± 11% interrupts.CPU22.CAL:Function_call_interrupts
5894 ± 8% -46.1% 3174 ± 13% interrupts.CPU22.RES:Rescheduling_interrupts
4692 ± 7% -43.9% 2634 ± 10% interrupts.CPU23.CAL:Function_call_interrupts
5978 ± 8% -46.1% 3225 ± 14% interrupts.CPU23.RES:Rescheduling_interrupts
4676 ± 6% -43.9% 2623 ± 11% interrupts.CPU24.CAL:Function_call_interrupts
5943 ± 6% -46.4% 3186 ± 14% interrupts.CPU24.RES:Rescheduling_interrupts
4660 ± 7% -44.8% 2573 ± 11% interrupts.CPU25.CAL:Function_call_interrupts
5872 ± 7% -45.2% 3219 ± 13% interrupts.CPU25.RES:Rescheduling_interrupts
5597 ± 6% -47.5% 2936 ± 13% interrupts.CPU26.CAL:Function_call_interrupts
5164 ± 4% -44.8% 2851 ± 12% interrupts.CPU26.RES:Rescheduling_interrupts
5388 ± 7% -46.3% 2890 ± 12% interrupts.CPU27.CAL:Function_call_interrupts
5106 ± 7% -42.9% 2913 ± 12% interrupts.CPU27.RES:Rescheduling_interrupts
5459 ± 8% -47.4% 2874 ± 10% interrupts.CPU28.CAL:Function_call_interrupts
5245 ± 7% -44.4% 2918 ± 10% interrupts.CPU28.RES:Rescheduling_interrupts
5416 ± 6% -47.4% 2848 ± 14% interrupts.CPU29.CAL:Function_call_interrupts
5260 ± 5% -42.3% 3035 ± 12% interrupts.CPU29.RES:Rescheduling_interrupts
4863 ± 6% -42.8% 2782 ± 13% interrupts.CPU3.CAL:Function_call_interrupts
610608 ± 8% -18.4% 498360 ± 18% interrupts.CPU3.LOC:Local_timer_interrupts
5948 ± 6% -44.9% 3276 ± 13% interrupts.CPU3.RES:Rescheduling_interrupts
5355 ± 7% -46.6% 2857 ± 12% interrupts.CPU30.CAL:Function_call_interrupts
5249 ± 6% -43.5% 2965 ± 9% interrupts.CPU30.RES:Rescheduling_interrupts
5359 ± 7% -46.8% 2852 ± 12% interrupts.CPU31.CAL:Function_call_interrupts
5247 ± 6% -42.8% 3002 ± 10% interrupts.CPU31.RES:Rescheduling_interrupts
5400 ± 7% -46.7% 2879 ± 12% interrupts.CPU32.CAL:Function_call_interrupts
5354 ± 2% -43.5% 3024 ± 15% interrupts.CPU32.RES:Rescheduling_interrupts
5323 ± 8% -46.0% 2873 ± 16% interrupts.CPU33.CAL:Function_call_interrupts
5275 ± 6% -46.0% 2849 ± 14% interrupts.CPU33.RES:Rescheduling_interrupts
5391 ± 7% -47.4% 2838 ± 13% interrupts.CPU34.CAL:Function_call_interrupts
5221 ± 6% -42.2% 3020 ± 12% interrupts.CPU34.RES:Rescheduling_interrupts
5309 ± 8% -46.5% 2840 ± 13% interrupts.CPU35.CAL:Function_call_interrupts
5202 ± 6% -43.3% 2950 ± 11% interrupts.CPU35.RES:Rescheduling_interrupts
5320 ± 8% -47.3% 2805 ± 13% interrupts.CPU36.CAL:Function_call_interrupts
5219 ± 5% -43.3% 2960 ± 10% interrupts.CPU36.RES:Rescheduling_interrupts
5375 ± 7% -47.9% 2802 ± 13% interrupts.CPU37.CAL:Function_call_interrupts
5215 ± 7% -42.6% 2993 ± 12% interrupts.CPU37.RES:Rescheduling_interrupts
46.25 ±154% -97.3% 1.25 ±131% interrupts.CPU37.TLB:TLB_shootdowns
5303 ± 8% -46.5% 2838 ± 14% interrupts.CPU38.CAL:Function_call_interrupts
5132 ± 5% -42.7% 2940 ± 10% interrupts.CPU38.RES:Rescheduling_interrupts
5314 ± 6% -47.0% 2815 ± 13% interrupts.CPU39.CAL:Function_call_interrupts
5205 ± 6% -44.4% 2895 ± 12% interrupts.CPU39.RES:Rescheduling_interrupts
4843 ± 6% -43.1% 2756 ± 11% interrupts.CPU4.CAL:Function_call_interrupts
6025 ± 5% -45.5% 3286 ± 14% interrupts.CPU4.RES:Rescheduling_interrupts
5277 ± 7% -46.8% 2807 ± 13% interrupts.CPU40.CAL:Function_call_interrupts
5183 ± 5% -44.1% 2898 ± 10% interrupts.CPU40.RES:Rescheduling_interrupts
5256 ± 8% -46.1% 2835 ± 14% interrupts.CPU41.CAL:Function_call_interrupts
5228 ± 5% -45.0% 2875 ± 10% interrupts.CPU41.RES:Rescheduling_interrupts
5304 ± 8% -47.3% 2793 ± 13% interrupts.CPU42.CAL:Function_call_interrupts
5219 ± 6% -43.0% 2976 ± 10% interrupts.CPU42.RES:Rescheduling_interrupts
5331 ± 7% -47.2% 2815 ± 14% interrupts.CPU43.CAL:Function_call_interrupts
5093 ± 7% -40.6% 3024 ± 11% interrupts.CPU43.RES:Rescheduling_interrupts
5221 ± 8% -47.1% 2761 ± 14% interrupts.CPU44.CAL:Function_call_interrupts
5190 ± 6% -44.2% 2895 ± 12% interrupts.CPU44.RES:Rescheduling_interrupts
5318 ± 7% -48.1% 2758 ± 13% interrupts.CPU45.CAL:Function_call_interrupts
5319 ± 3% -44.7% 2940 ± 9% interrupts.CPU45.RES:Rescheduling_interrupts
5250 ± 8% -46.3% 2821 ± 12% interrupts.CPU46.CAL:Function_call_interrupts
5147 ± 6% -42.7% 2948 ± 10% interrupts.CPU46.RES:Rescheduling_interrupts
5205 ± 8% -46.3% 2797 ± 12% interrupts.CPU47.CAL:Function_call_interrupts
5152 ± 6% -42.7% 2951 ± 10% interrupts.CPU47.RES:Rescheduling_interrupts
5266 ± 8% -48.1% 2736 ± 13% interrupts.CPU48.CAL:Function_call_interrupts
5164 ± 6% -43.4% 2923 ± 11% interrupts.CPU48.RES:Rescheduling_interrupts
5310 ± 8% -47.7% 2778 ± 14% interrupts.CPU49.CAL:Function_call_interrupts
5154 ± 6% -44.4% 2865 ± 10% interrupts.CPU49.RES:Rescheduling_interrupts
4773 ± 6% -41.3% 2800 ± 7% interrupts.CPU5.CAL:Function_call_interrupts
5990 ± 8% -46.8% 3186 ± 11% interrupts.CPU5.RES:Rescheduling_interrupts
5241 ± 8% -46.8% 2789 ± 14% interrupts.CPU50.CAL:Function_call_interrupts
5175 ± 6% -43.1% 2945 ± 11% interrupts.CPU50.RES:Rescheduling_interrupts
5225 ± 7% -46.7% 2783 ± 14% interrupts.CPU51.CAL:Function_call_interrupts
5154 ± 6% -44.1% 2883 ± 12% interrupts.CPU51.RES:Rescheduling_interrupts
4721 ± 9% -44.8% 2607 ± 10% interrupts.CPU52.CAL:Function_call_interrupts
5542 ± 6% -46.7% 2955 ± 12% interrupts.CPU52.RES:Rescheduling_interrupts
4602 ± 6% -41.9% 2675 ± 11% interrupts.CPU53.CAL:Function_call_interrupts
5555 ± 6% -46.9% 2951 ± 13% interrupts.CPU53.RES:Rescheduling_interrupts
4612 ± 7% -42.0% 2673 ± 12% interrupts.CPU54.CAL:Function_call_interrupts
5611 ± 8% -46.3% 3013 ± 12% interrupts.CPU54.RES:Rescheduling_interrupts
4589 ± 6% -44.0% 2570 ± 10% interrupts.CPU55.CAL:Function_call_interrupts
5591 ± 6% -47.0% 2963 ± 13% interrupts.CPU55.RES:Rescheduling_interrupts
4567 ± 6% -42.5% 2625 ± 11% interrupts.CPU56.CAL:Function_call_interrupts
5516 ± 8% -45.7% 2995 ± 13% interrupts.CPU56.RES:Rescheduling_interrupts
4570 ± 6% -44.1% 2553 ± 11% interrupts.CPU57.CAL:Function_call_interrupts
5463 ± 7% -43.6% 3081 ± 12% interrupts.CPU57.RES:Rescheduling_interrupts
4539 ± 6% -43.0% 2587 ± 11% interrupts.CPU58.CAL:Function_call_interrupts
5509 ± 6% -46.3% 2960 ± 14% interrupts.CPU58.RES:Rescheduling_interrupts
4577 ± 6% -44.0% 2562 ± 11% interrupts.CPU59.CAL:Function_call_interrupts
5471 ± 8% -46.2% 2944 ± 13% interrupts.CPU59.RES:Rescheduling_interrupts
4785 ± 7% -43.7% 2692 ± 11% interrupts.CPU6.CAL:Function_call_interrupts
5990 ± 7% -45.5% 3262 ± 13% interrupts.CPU6.RES:Rescheduling_interrupts
4568 ± 7% -43.4% 2587 ± 9% interrupts.CPU60.CAL:Function_call_interrupts
5473 ± 7% -46.1% 2948 ± 14% interrupts.CPU60.RES:Rescheduling_interrupts
4559 ± 7% -44.1% 2549 ± 10% interrupts.CPU61.CAL:Function_call_interrupts
5433 ± 7% -45.2% 2976 ± 14% interrupts.CPU61.RES:Rescheduling_interrupts
4557 ± 7% -41.9% 2649 ± 13% interrupts.CPU62.CAL:Function_call_interrupts
5437 ± 7% -46.4% 2914 ± 13% interrupts.CPU62.RES:Rescheduling_interrupts
4568 ± 7% -41.5% 2675 ± 8% interrupts.CPU63.CAL:Function_call_interrupts
5451 ± 6% -46.1% 2936 ± 13% interrupts.CPU63.RES:Rescheduling_interrupts
4534 ± 6% -44.0% 2541 ± 11% interrupts.CPU64.CAL:Function_call_interrupts
5427 ± 6% -46.2% 2920 ± 12% interrupts.CPU64.RES:Rescheduling_interrupts
4538 ± 7% -43.4% 2568 ± 10% interrupts.CPU65.CAL:Function_call_interrupts
5396 ± 7% -46.8% 2868 ± 12% interrupts.CPU65.RES:Rescheduling_interrupts
4583 ± 7% -44.2% 2558 ± 10% interrupts.CPU66.CAL:Function_call_interrupts
5413 ± 6% -46.8% 2878 ± 14% interrupts.CPU66.RES:Rescheduling_interrupts
4542 ± 7% -44.1% 2541 ± 10% interrupts.CPU67.CAL:Function_call_interrupts
5437 ± 9% -46.8% 2893 ± 14% interrupts.CPU67.RES:Rescheduling_interrupts
4532 ± 6% -44.4% 2519 ± 12% interrupts.CPU68.CAL:Function_call_interrupts
5360 ± 7% -46.3% 2879 ± 15% interrupts.CPU68.RES:Rescheduling_interrupts
4515 ± 6% -43.6% 2548 ± 10% interrupts.CPU69.CAL:Function_call_interrupts
5307 ± 7% -45.8% 2878 ± 15% interrupts.CPU69.RES:Rescheduling_interrupts
4815 ± 7% -44.5% 2670 ± 10% interrupts.CPU7.CAL:Function_call_interrupts
5989 ± 7% -47.4% 3153 ± 17% interrupts.CPU7.RES:Rescheduling_interrupts
4523 ± 6% -44.0% 2533 ± 9% interrupts.CPU70.CAL:Function_call_interrupts
5333 ± 6% -46.0% 2881 ± 11% interrupts.CPU70.RES:Rescheduling_interrupts
4630 ± 9% -45.1% 2541 ± 11% interrupts.CPU71.CAL:Function_call_interrupts
5410 ± 6% -47.5% 2839 ± 12% interrupts.CPU71.RES:Rescheduling_interrupts
4513 ± 7% -43.7% 2541 ± 10% interrupts.CPU72.CAL:Function_call_interrupts
5343 ± 8% -46.5% 2857 ± 14% interrupts.CPU72.RES:Rescheduling_interrupts
4518 ± 6% -44.2% 2523 ± 10% interrupts.CPU73.CAL:Function_call_interrupts
5379 ± 7% -46.8% 2860 ± 14% interrupts.CPU73.RES:Rescheduling_interrupts
4492 ± 6% -41.3% 2639 ± 5% interrupts.CPU74.CAL:Function_call_interrupts
5318 ± 8% -45.0% 2923 ± 10% interrupts.CPU74.RES:Rescheduling_interrupts
4505 ± 6% -40.4% 2685 ± 17% interrupts.CPU75.CAL:Function_call_interrupts
5266 ± 7% -45.8% 2854 ± 15% interrupts.CPU75.RES:Rescheduling_interrupts
4519 ± 7% -44.0% 2530 ± 11% interrupts.CPU76.CAL:Function_call_interrupts
5321 ± 7% -47.4% 2800 ± 13% interrupts.CPU76.RES:Rescheduling_interrupts
4490 ± 6% -44.3% 2501 ± 11% interrupts.CPU77.CAL:Function_call_interrupts
5242 ± 8% -45.4% 2864 ± 14% interrupts.CPU77.RES:Rescheduling_interrupts
5201 ± 8% -46.5% 2781 ± 15% interrupts.CPU78.CAL:Function_call_interrupts
4713 ± 5% -41.7% 2747 ± 19% interrupts.CPU78.RES:Rescheduling_interrupts
5156 ± 8% -47.3% 2719 ± 13% interrupts.CPU79.CAL:Function_call_interrupts
4703 ± 7% -44.1% 2631 ± 12% interrupts.CPU79.RES:Rescheduling_interrupts
4810 ± 6% -44.8% 2657 ± 11% interrupts.CPU8.CAL:Function_call_interrupts
5972 ± 7% -46.7% 3185 ± 12% interrupts.CPU8.RES:Rescheduling_interrupts
5190 ± 8% -47.4% 2729 ± 13% interrupts.CPU80.CAL:Function_call_interrupts
4615 ± 7% -43.4% 2612 ± 11% interrupts.CPU80.RES:Rescheduling_interrupts
5149 ± 7% -47.2% 2716 ± 13% interrupts.CPU81.CAL:Function_call_interrupts
4668 ± 7% -44.6% 2586 ± 14% interrupts.CPU81.RES:Rescheduling_interrupts
5108 ± 7% -47.1% 2703 ± 13% interrupts.CPU82.CAL:Function_call_interrupts
4707 ± 7% -44.9% 2595 ± 12% interrupts.CPU82.RES:Rescheduling_interrupts
5162 ± 7% -46.6% 2758 ± 13% interrupts.CPU83.CAL:Function_call_interrupts
4650 ± 8% -42.4% 2676 ± 12% interrupts.CPU83.RES:Rescheduling_interrupts
5143 ± 7% -47.1% 2720 ± 14% interrupts.CPU84.CAL:Function_call_interrupts
4660 ± 7% -43.2% 2646 ± 11% interrupts.CPU84.RES:Rescheduling_interrupts
5212 ± 7% -48.3% 2693 ± 15% interrupts.CPU85.CAL:Function_call_interrupts
4718 ± 7% -44.3% 2627 ± 11% interrupts.CPU85.RES:Rescheduling_interrupts
5197 ± 8% -47.6% 2724 ± 13% interrupts.CPU86.CAL:Function_call_interrupts
4594 ± 8% -41.0% 2712 ± 9% interrupts.CPU86.RES:Rescheduling_interrupts
5149 ± 7% -47.5% 2703 ± 13% interrupts.CPU87.CAL:Function_call_interrupts
4640 ± 5% -42.1% 2686 ± 11% interrupts.CPU87.RES:Rescheduling_interrupts
5225 ± 8% -49.0% 2663 ± 12% interrupts.CPU88.CAL:Function_call_interrupts
4657 ± 7% -43.4% 2634 ± 10% interrupts.CPU88.RES:Rescheduling_interrupts
5158 ± 7% -47.1% 2731 ± 14% interrupts.CPU89.CAL:Function_call_interrupts
4631 ± 6% -42.9% 2645 ± 12% interrupts.CPU89.RES:Rescheduling_interrupts
4772 ± 7% -40.3% 2846 ± 12% interrupts.CPU9.CAL:Function_call_interrupts
6039 ± 8% -47.5% 3168 ± 12% interrupts.CPU9.RES:Rescheduling_interrupts
5132 ± 8% -47.8% 2677 ± 9% interrupts.CPU90.CAL:Function_call_interrupts
4658 ± 7% -45.4% 2545 ± 14% interrupts.CPU90.RES:Rescheduling_interrupts
5138 ± 8% -48.3% 2657 ± 17% interrupts.CPU91.CAL:Function_call_interrupts
4630 ± 7% -44.0% 2593 ± 12% interrupts.CPU91.RES:Rescheduling_interrupts
5182 ± 8% -48.0% 2696 ± 13% interrupts.CPU92.CAL:Function_call_interrupts
4624 ± 6% -43.9% 2592 ± 11% interrupts.CPU92.RES:Rescheduling_interrupts
5120 ± 7% -46.4% 2743 ± 16% interrupts.CPU93.CAL:Function_call_interrupts
4619 ± 6% -44.2% 2578 ± 12% interrupts.CPU93.RES:Rescheduling_interrupts
5190 ± 9% -47.3% 2737 ± 13% interrupts.CPU94.CAL:Function_call_interrupts
4638 ± 6% -42.1% 2685 ± 8% interrupts.CPU94.RES:Rescheduling_interrupts
5131 ± 8% -45.9% 2775 ± 14% interrupts.CPU95.CAL:Function_call_interrupts
4636 ± 7% -44.2% 2587 ± 13% interrupts.CPU95.RES:Rescheduling_interrupts
5154 ± 7% -48.7% 2645 ± 15% interrupts.CPU96.CAL:Function_call_interrupts
4678 ± 7% -43.3% 2654 ± 7% interrupts.CPU96.RES:Rescheduling_interrupts
5216 ± 9% -47.9% 2716 ± 13% interrupts.CPU97.CAL:Function_call_interrupts
4613 ± 6% -44.6% 2555 ± 12% interrupts.CPU97.RES:Rescheduling_interrupts
5179 ± 9% -48.1% 2688 ± 13% interrupts.CPU98.CAL:Function_call_interrupts
4590 ± 7% -43.1% 2613 ± 11% interrupts.CPU98.RES:Rescheduling_interrupts
5187 ± 7% -47.9% 2703 ± 14% interrupts.CPU99.CAL:Function_call_interrupts
4589 ± 6% -43.0% 2616 ± 11% interrupts.CPU99.RES:Rescheduling_interrupts
552013 ± 6% -44.9% 304399 ± 12% interrupts.RES:Rescheduling_interrupts

vm-scalability.time.user_time

950 +---------------------------------------------------------------------+
| .+. |
900 |.+.. .+.+..+ +..+.+.+..+.+. .+.+.+..+. .+..+.+.+..+. .+..+.+.+ |
850 |-+ + +. + + |
| |
800 |-+ |
| |
750 |-+ |
| |
700 |-+ |
650 |-+ O O O O O O O O O O O O |
| O O O O O O O O O O O |
600 |-+ O O O O O O |
| |
550 +---------------------------------------------------------------------+

vm-scalability.throughput

1.8e+07 +-----------------------------------------------------------------+
|.+. .+.+.. .+. .+.+.. .+.+.. |
1.7e+07 |-+ +..+.+.+ + + +.+.+ +.+.+.+.+..+.+. .+.+..+ |
| + |
1.6e+07 |-+ |
| |
1.5e+07 |-+ |
| |
1.4e+07 |-+ |
| |
1.3e+07 |-+ |
| O O O O O O O O O O O O O O O O O O |
1.2e+07 |-+ O O O O O O O O O O |
| O |
1.1e+07 +-----------------------------------------------------------------+

vm-scalability.median

85000 +-------------------------------------------------------------------+
|. .+.+..+.+. .+..+.+. .+.+.. .+.. |
80000 |-+..+.+ + +.+..+ +.+.+ +.+.+. .+ |
| +..+ + .+ |
75000 |-+ + |
| |
70000 |-+ |
| |
65000 |-+ |
| |
60000 |-+ O O O O O O O O O O O |
| O O O O O O O O O O O O O O |
55000 |-+ O O O O |
| |
50000 +-------------------------------------------------------------------+

vm-scalability.workload

5.4e+09 +-----------------------------------------------------------------+
5.2e+09 |.+. .+.+.. .+. .+.+.. .+.+.. |
| +..+.+.+ + + +.+.+ +.+.+.+.+..+.+. .+. |
5e+09 |-+ + +..+ |
4.8e+09 |-+ |
| |
4.6e+09 |-+ |
4.4e+09 |-+ |
4.2e+09 |-+ |
| |
4e+09 |-+ |
3.8e+09 |-+ O O O O O O O O O O |
| O O O O O O O O O O O O O O O |
3.6e+09 |-+ O O O O |
3.4e+09 +-----------------------------------------------------------------+

[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************

lkp-csl-2sp6: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based/ucode:
2M/gcc-9/performance/2pmem/ext4/sync/x86_64-rhel-8.3/50%/debian-10.4-x86_64-20200603.cgz/200s/randread/lkp-csl-2sp6/200G/fio-basic/tb/0x5002f01
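For reference, the parameter string above maps onto a fio job roughly like the sketch below. This is a reconstruction from the listed parameters only (bs=2M, ioengine=sync, rw=randread, 200s time-based run, 200G test size, 50% of the 96 hardware threads split across the two pmem disks); the filenames and the numjobs split per device are assumptions, not the job file the robot actually generated.

```ini
; Approximate fio job matching the lkp parameter line above.
; Paths and per-device job split are hypothetical.
[global]
bs=2M
ioengine=sync
rw=randread
size=200G
time_based
runtime=200

[pmem0]
filename=/fs/pmem0/fio-test   ; assumed mount point
numjobs=24

[pmem1]
filename=/fs/pmem1/fio-test   ; assumed mount point
numjobs=24
```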
commit:
c1139de73f ("mm/swap.c: serialize memcg changes in pagevec_lru_move_fn")
f4ba6c0e1b ("mm/lru: replace pgdat lru_lock with lruvec lock")
c1139de73fe749dc f4ba6c0e1b65b09bafd8efe3a41
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
0.17 ± 25% +0.1 0.26 ± 16% fio.latency_1000us%
35.24 ± 8% +10.9 46.09 ± 5% fio.latency_10ms%
10.23 ± 2% -3.7 6.57 ± 9% fio.latency_20ms%
45.90 ± 6% -10.6 35.29 ± 9% fio.latency_4ms%
3.40 ± 7% +3.8 7.23 ± 11% fio.latency_50ms%
15551 -13.2% 13500 ± 2% fio.read_bw_MBps
15859712 +14.5% 18153472 ± 2% fio.read_clat_90%_us
18743296 +17.5% 22020096 ± 3% fio.read_clat_95%_us
23658496 +21.3% 28704768 ± 4% fio.read_clat_99%_us
5870365 +14.8% 6737472 ± 2% fio.read_clat_mean_us
5200151 +19.6% 6219315 ± 3% fio.read_clat_stddev
7775 -13.2% 6750 ± 2% fio.read_iops
6.375e+09 -13.2% 5.531e+09 ± 2% fio.time.file_system_inputs
361909 -10.0% 325788 fio.time.involuntary_context_switches
24.60 ± 4% -20.1% 19.64 ± 7% fio.time.user_time
1556320 -13.2% 1350346 ± 2% fio.workload
0.97 ± 11% -5.6% 0.92 ± 10% boot-time.smp_boot
18567 ± 13% +38.1% 25641 ± 24% cpuidle.POLL.usage
6.33 ± 11% -50.2% 3.15 ± 19% sched_debug.cpu.clock.stddev
15624236 -13.3% 13540128 ± 2% vmstat.io.bi
5596 -6.6% 5227 vmstat.system.cs
1.30 -0.2 1.13 ± 3% mpstat.cpu.all.irq%
0.14 ± 3% -0.0 0.11 ± 3% mpstat.cpu.all.soft%
0.15 ± 5% -0.0 0.12 ± 8% mpstat.cpu.all.usr%
675809 ± 5% -13.4% 585046 ± 9% numa-meminfo.node0.MemFree
540517 ± 32% -50.0% 270086 ± 78% numa-meminfo.node1.Inactive(anon)
3187 ± 59% -64.4% 1135 ± 72% numa-meminfo.node1.PageTables
3.459e+08 -13.0% 3.008e+08 ± 3% numa-numastat.node0.local_node
3.46e+08 -13.0% 3.008e+08 ± 3% numa-numastat.node0.numa_hit
43097053 ± 13% -29.9% 30221603 ± 7% numa-numastat.node0.numa_miss
43111179 ± 13% -29.8% 30243464 ± 7% numa-numastat.node0.other_node
3.649e+08 -10.9% 3.252e+08 numa-numastat.node1.local_node
43097737 ± 13% -29.9% 30221603 ± 7% numa-numastat.node1.numa_foreign
3.649e+08 -10.9% 3.252e+08 numa-numastat.node1.numa_hit
170490 ± 5% -12.5% 149095 ± 8% numa-vmstat.node0.nr_free_pages
23074722 ± 22% -36.1% 14747148 ± 7% numa-vmstat.node0.numa_miss
23124228 ± 22% -35.9% 14830988 ± 7% numa-vmstat.node0.numa_other
134951 ± 32% -50.0% 67528 ± 78% numa-vmstat.node1.nr_inactive_anon
796.50 ± 59% -64.4% 283.25 ± 73% numa-vmstat.node1.nr_page_table_pages
134983 ± 32% -49.9% 67575 ± 78% numa-vmstat.node1.nr_zone_inactive_anon
23076171 ± 22% -36.1% 14748260 ± 7% numa-vmstat.node1.numa_foreign
4119 ± 9% -17.0% 3417 ± 11% slabinfo.fsnotify_mark_connector.active_objs
4119 ± 9% -17.0% 3417 ± 11% slabinfo.fsnotify_mark_connector.num_objs
760.00 ± 14% -24.2% 576.00 ± 8% slabinfo.kmalloc-rcl-128.active_objs
760.00 ± 14% -24.2% 576.00 ± 8% slabinfo.kmalloc-rcl-128.num_objs
4722 ± 4% -10.9% 4209 ± 2% slabinfo.kmalloc-rcl-64.active_objs
4722 ± 4% -10.9% 4209 ± 2% slabinfo.kmalloc-rcl-64.num_objs
2467 ± 6% -18.1% 2019 ± 4% slabinfo.kmalloc-rcl-96.active_objs
2467 ± 6% -18.1% 2019 ± 4% slabinfo.kmalloc-rcl-96.num_objs
20208 ± 18% +54.3% 31185 ± 36% softirqs.CPU1.SCHED
11808 ± 7% +33.2% 15728 ± 16% softirqs.CPU18.SCHED
13791 ± 9% -12.1% 12123 ± 12% softirqs.CPU36.SCHED
10287 ± 11% +16.0% 11930 ± 5% softirqs.CPU39.SCHED
11622 ± 6% +19.5% 13885 ± 7% softirqs.CPU41.SCHED
11790 ± 13% +23.8% 14591 ± 2% softirqs.CPU43.SCHED
6324 ± 6% +34.1% 8481 ± 19% softirqs.CPU49.RCU
18820 ± 37% -40.7% 11157 ± 20% softirqs.CPU53.SCHED
15365 ± 17% -30.1% 10734 ± 15% softirqs.CPU56.SCHED
16710 ± 19% -31.1% 11519 ± 22% softirqs.CPU65.SCHED
16438 ± 18% -35.5% 10597 ± 23% softirqs.CPU66.SCHED
16673 ± 14% -22.6% 12907 ± 18% softirqs.CPU69.SCHED
7810 ± 30% -25.6% 5813 ± 4% softirqs.CPU74.RCU
15946 ± 12% +55.1% 24725 ± 37% softirqs.CPU83.SCHED
15977 ± 17% +20.4% 19232 ± 5% softirqs.CPU95.SCHED
9341 ± 83% +134.4% 21898 ± 19% softirqs.NET_RX
57302 ± 4% +19.8% 68635 ± 3% softirqs.TIMER
211997 -14.2% 181930 ± 2% proc-vmstat.allocstall_movable
1850451 ± 43% +178.0% 5144154 ± 64% proc-vmstat.compact_daemon_free_scanned
1738912 ± 2% -10.0% 1565838 ± 11% proc-vmstat.compact_daemon_migrate_scanned
2730 ± 13% -21.5% 2144 ± 11% proc-vmstat.compact_daemon_wake
2866470 ± 32% +124.7% 6441347 ± 48% proc-vmstat.compact_free_scanned
9137 -12.6% 7985 ± 2% proc-vmstat.kswapd_low_wmark_hit_quickly
5228 +1.8% 5323 proc-vmstat.nr_active_file
15870 -1.1% 15689 proc-vmstat.nr_kernel_stack
5228 +1.8% 5323 proc-vmstat.nr_zone_active_file
86433130 ± 3% -23.6% 66013870 ± 3% proc-vmstat.numa_foreign
7.106e+08 -11.9% 6.26e+08 ± 2% proc-vmstat.numa_hit
7.106e+08 -11.9% 6.259e+08 ± 2% proc-vmstat.numa_local
86433130 ± 3% -23.6% 66013870 ± 3% proc-vmstat.numa_miss
86464679 ± 3% -23.6% 66045686 ± 3% proc-vmstat.numa_other
9166 -12.7% 8006 ± 2% proc-vmstat.pageoutrun
27465172 -12.1% 24132079 ± 3% proc-vmstat.pgalloc_dma32
7.705e+08 -13.3% 6.683e+08 ± 2% proc-vmstat.pgalloc_normal
7.882e+08 -13.4% 6.823e+08 ± 2% proc-vmstat.pgfree
3.187e+09 -13.2% 2.766e+09 ± 2% proc-vmstat.pgpgin
5.46e+08 -14.1% 4.687e+08 ± 2% proc-vmstat.pgscan_direct
6.165e+08 -13.1% 5.356e+08 ± 2% proc-vmstat.pgscan_file
5.46e+08 -14.1% 4.687e+08 ± 2% proc-vmstat.pgsteal_direct
6.165e+08 -13.1% 5.356e+08 ± 2% proc-vmstat.pgsteal_file
27.48 -5.1% 26.07 perf-stat.i.MPKI
7.843e+09 -7.7% 7.241e+09 perf-stat.i.branch-instructions
0.27 -0.0 0.25 perf-stat.i.branch-miss-rate%
20236482 -14.8% 17239646 ± 2% perf-stat.i.branch-misses
7.992e+08 -13.6% 6.906e+08 ± 2% perf-stat.i.cache-misses
1.101e+09 -13.5% 9.523e+08 ± 2% perf-stat.i.cache-references
5609 -6.8% 5226 perf-stat.i.context-switches
3.44 +9.5% 3.77 perf-stat.i.cpi
129.95 -3.8% 124.96 perf-stat.i.cpu-migrations
179.57 +14.7% 206.03 perf-stat.i.cycles-between-cache-misses
0.02 ± 2% -0.0 0.01 ± 8% perf-stat.i.dTLB-load-miss-rate%
1502340 -25.8% 1114358 ± 12% perf-stat.i.dTLB-load-misses
9.095e+09 -8.3% 8.337e+09 ± 2% perf-stat.i.dTLB-loads
1525537 ± 12% -32.0% 1037333 ± 13% perf-stat.i.dTLB-store-misses
5.173e+09 -13.0% 4.502e+09 ± 2% perf-stat.i.dTLB-stores
62.37 -1.1 61.24 perf-stat.i.iTLB-load-miss-rate%
2972168 -7.5% 2748717 perf-stat.i.iTLB-load-misses
4.005e+10 -8.8% 3.651e+10 ± 2% perf-stat.i.instructions
0.29 -8.7% 0.27 perf-stat.i.ipc
244.77 -9.4% 221.67 ± 2% perf-stat.i.metric.M/sec
27.43 ± 6% +3.1 30.52 ± 3% perf-stat.i.node-load-miss-rate%
79947653 ± 3% -17.0% 66322018 ± 2% perf-stat.i.node-loads
1.419e+08 -14.5% 1.214e+08 ± 4% perf-stat.i.node-stores
27.48 -5.1% 26.08 perf-stat.overall.MPKI
0.26 -0.0 0.24 perf-stat.overall.branch-miss-rate%
3.42 +9.5% 3.75 perf-stat.overall.cpi
171.55 +15.5% 198.23 ± 2% perf-stat.overall.cycles-between-cache-misses
0.02 ± 2% -0.0 0.01 ± 11% perf-stat.overall.dTLB-load-miss-rate%
62.37 -1.1 61.23 perf-stat.overall.iTLB-load-miss-rate%
0.29 -8.7% 0.27 perf-stat.overall.ipc
27.21 ± 6% +3.1 30.28 ± 3% perf-stat.overall.node-load-miss-rate%
5167013 +5.1% 5432376 perf-stat.overall.path-length
7.804e+09 -7.7% 7.205e+09 perf-stat.ps.branch-instructions
20133217 -14.8% 17153476 ± 2% perf-stat.ps.branch-misses
7.952e+08 -13.6% 6.872e+08 ± 2% perf-stat.ps.cache-misses
1.095e+09 -13.5% 9.476e+08 ± 2% perf-stat.ps.cache-references
5579 -6.8% 5199 perf-stat.ps.context-switches
129.43 -3.8% 124.53 perf-stat.ps.cpu-migrations
1494774 -25.8% 1108725 ± 12% perf-stat.ps.dTLB-load-misses
9.05e+09 -8.3% 8.295e+09 ± 2% perf-stat.ps.dTLB-loads
1518024 ± 12% -32.0% 1032263 ± 13% perf-stat.ps.dTLB-store-misses
5.148e+09 -13.0% 4.48e+09 ± 2% perf-stat.ps.dTLB-stores
2957143 -7.5% 2735036 perf-stat.ps.iTLB-load-misses
3.985e+10 -8.8% 3.633e+10 ± 2% perf-stat.ps.instructions
79555111 ± 3% -17.0% 65998632 ± 2% perf-stat.ps.node-loads
1.412e+08 -14.5% 1.208e+08 ± 4% perf-stat.ps.node-stores
8.042e+12 -8.8% 7.335e+12 perf-stat.total.instructions
737.25 ±162% -99.5% 4.00 ±173% interrupts.89:PCI-MSI.31981622-edge.i40e-eth0-TxRx-53
60428 ± 2% -3.3% 58420 interrupts.CAL:Function_call_interrupts
4445 ± 22% -42.9% 2538 ± 27% interrupts.CPU1.NMI:Non-maskable_interrupts
4445 ± 22% -42.9% 2538 ± 27% interrupts.CPU1.PMI:Performance_monitoring_interrupts
250.50 ± 18% -33.9% 165.50 ± 21% interrupts.CPU1.RES:Rescheduling_interrupts
3820 ± 21% +49.5% 5712 ± 12% interrupts.CPU10.NMI:Non-maskable_interrupts
3820 ± 21% +49.5% 5712 ± 12% interrupts.CPU10.PMI:Performance_monitoring_interrupts
556.25 ± 8% +16.4% 647.25 ± 10% interrupts.CPU11.CAL:Function_call_interrupts
3543 ± 21% +94.4% 6889 ± 8% interrupts.CPU16.NMI:Non-maskable_interrupts
3543 ± 21% +94.4% 6889 ± 8% interrupts.CPU16.PMI:Performance_monitoring_interrupts
308.50 ± 10% -30.1% 215.75 ± 14% interrupts.CPU18.RES:Rescheduling_interrupts
10.75 ± 40% +900.0% 107.50 ± 94% interrupts.CPU19.TLB:TLB_shootdowns
297.50 ± 5% -15.7% 250.75 ± 6% interrupts.CPU21.RES:Rescheduling_interrupts
6631 ± 17% -37.5% 4145 ± 48% interrupts.CPU27.NMI:Non-maskable_interrupts
6631 ± 17% -37.5% 4145 ± 48% interrupts.CPU27.PMI:Performance_monitoring_interrupts
660.50 ± 10% -13.1% 574.00 ± 5% interrupts.CPU29.CAL:Function_call_interrupts
547.00 ± 2% +12.4% 614.75 ± 7% interrupts.CPU3.CAL:Function_call_interrupts
700.50 ± 11% -17.5% 578.00 ± 4% interrupts.CPU30.CAL:Function_call_interrupts
7002 ± 8% -33.1% 4685 ± 38% interrupts.CPU32.NMI:Non-maskable_interrupts
7002 ± 8% -33.1% 4685 ± 38% interrupts.CPU32.PMI:Performance_monitoring_interrupts
6654 ± 13% -40.8% 3942 ± 19% interrupts.CPU34.NMI:Non-maskable_interrupts
6654 ± 13% -40.8% 3942 ± 19% interrupts.CPU34.PMI:Performance_monitoring_interrupts
659.75 ± 4% -9.7% 595.75 ± 7% interrupts.CPU38.CAL:Function_call_interrupts
723.75 ± 10% -15.0% 615.25 ± 4% interrupts.CPU39.CAL:Function_call_interrupts
309.50 ± 5% -17.6% 255.00 ± 4% interrupts.CPU39.RES:Rescheduling_interrupts
707.25 ± 10% -18.7% 575.00 ± 4% interrupts.CPU40.CAL:Function_call_interrupts
137.25 ± 41% -63.6% 50.00 ± 76% interrupts.CPU40.TLB:TLB_shootdowns
290.00 ± 5% -14.7% 247.25 ± 9% interrupts.CPU41.RES:Rescheduling_interrupts
295.00 ± 8% -21.4% 232.00 ± 8% interrupts.CPU43.RES:Rescheduling_interrupts
279.25 ± 9% -19.3% 225.25 ± 14% interrupts.CPU44.RES:Rescheduling_interrupts
146.75 ± 55% -68.8% 45.75 ± 58% interrupts.CPU45.TLB:TLB_shootdowns
89.00 ± 11% -48.9% 45.50 ± 60% interrupts.CPU47.TLB:TLB_shootdowns
285.25 ± 10% -25.2% 213.50 ± 23% interrupts.CPU5.RES:Rescheduling_interrupts
579.50 ± 2% +9.9% 637.00 ± 4% interrupts.CPU52.CAL:Function_call_interrupts
737.25 ±162% -99.5% 3.75 ±173% interrupts.CPU53.89:PCI-MSI.31981622-edge.i40e-eth0-TxRx-53
28.25 ± 41% +195.6% 83.50 ± 48% interrupts.CPU54.TLB:TLB_shootdowns
216.75 ± 21% +25.3% 271.50 ± 8% interrupts.CPU56.RES:Rescheduling_interrupts
7319 ± 7% -55.0% 3294 ± 63% interrupts.CPU61.NMI:Non-maskable_interrupts
7319 ± 7% -55.0% 3294 ± 63% interrupts.CPU61.PMI:Performance_monitoring_interrupts
1957 ± 24% +160.6% 5101 ± 46% interrupts.CPU62.NMI:Non-maskable_interrupts
1957 ± 24% +160.6% 5101 ± 46% interrupts.CPU62.PMI:Performance_monitoring_interrupts
6299 ± 16% -45.5% 3435 ± 35% interrupts.CPU64.NMI:Non-maskable_interrupts
6299 ± 16% -45.5% 3435 ± 35% interrupts.CPU64.PMI:Performance_monitoring_interrupts
33.75 ± 35% +243.7% 116.00 ± 83% interrupts.CPU66.TLB:TLB_shootdowns
213.75 ± 18% -21.4% 168.00 ± 14% interrupts.CPU73.RES:Rescheduling_interrupts
4693 ± 12% -28.8% 3343 ± 16% interrupts.CPU77.NMI:Non-maskable_interrupts
4693 ± 12% -28.8% 3343 ± 16% interrupts.CPU77.PMI:Performance_monitoring_interrupts
206.00 ± 10% -20.9% 163.00 ± 6% interrupts.CPU79.RES:Rescheduling_interrupts
569.25 ± 5% +8.4% 617.00 ± 6% interrupts.CPU8.CAL:Function_call_interrupts
5814 ± 28% -61.4% 2244 ± 44% interrupts.CPU81.NMI:Non-maskable_interrupts
5814 ± 28% -61.4% 2244 ± 44% interrupts.CPU81.PMI:Performance_monitoring_interrupts
159.25 ± 41% -63.9% 57.50 ± 69% interrupts.CPU82.TLB:TLB_shootdowns
6081 ± 16% -49.4% 3079 ± 34% interrupts.CPU83.NMI:Non-maskable_interrupts
6081 ± 16% -49.4% 3079 ± 34% interrupts.CPU83.PMI:Performance_monitoring_interrupts
616.50 ± 6% -11.5% 545.50 ± 6% interrupts.CPU84.CAL:Function_call_interrupts
216.00 ± 17% -32.1% 146.75 ± 14% interrupts.CPU84.RES:Rescheduling_interrupts
201.50 ± 12% -22.3% 156.50 ± 17% interrupts.CPU86.RES:Rescheduling_interrupts
132.50 ± 26% -53.0% 62.25 ± 70% interrupts.CPU90.TLB:TLB_shootdowns
109.50 ± 37% -45.7% 59.50 ± 59% interrupts.CPU92.TLB:TLB_shootdowns
195.00 ± 25% -27.4% 141.50 ± 13% interrupts.CPU93.RES:Rescheduling_interrupts
5992 ± 27% -36.2% 3821 ± 22% interrupts.CPU95.NMI:Non-maskable_interrupts
5992 ± 27% -36.2% 3821 ± 22% interrupts.CPU95.PMI:Performance_monitoring_interrupts
239.00 ± 29% -36.7% 151.25 ± 12% interrupts.CPU95.RES:Rescheduling_interrupts
6.94 ± 4% -6.9 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.page_cache_readahead_unbounded
6.88 ± 3% -6.9 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru
33.00 -5.9 27.05 ± 2% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.page_cache_readahead_unbounded.generic_file_buffered_read.new_sync_read.vfs_read
7.44 -4.3 3.17 ± 8% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.page_cache_readahead_unbounded.generic_file_buffered_read.new_sync_read
7.11 ± 2% -4.2 2.95 ± 9% perf-profile.calltrace.cycles-pp.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.page_cache_readahead_unbounded.generic_file_buffered_read
6.60 ± 2% -4.0 2.60 ± 10% perf-profile.calltrace.cycles-pp.rmqueue_bulk.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.page_cache_readahead_unbounded
5.56 ± 2% -3.7 1.81 ± 13% perf-profile.calltrace.cycles-pp._raw_spin_lock.rmqueue_bulk.rmqueue.get_page_from_freelist.__alloc_pages_nodemask
5.55 ± 2% -3.7 1.81 ± 13% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.rmqueue_bulk.rmqueue.get_page_from_freelist
8.04 ± 3% -3.2 4.85 ± 3% perf-profile.calltrace.cycles-pp.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
7.98 ± 3% -3.2 4.81 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node
7.95 ± 3% -3.2 4.77 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.lru_note_cost.shrink_inactive_list.shrink_lruvec
9.73 ± 6% -3.2 6.57 ± 4% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
8.38 ± 9% -2.9 5.46 ± 6% perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
14.90 ± 2% -2.4 12.52 perf-profile.calltrace.cycles-pp.ext4_mpage_readpages.read_pages.page_cache_readahead_unbounded.generic_file_buffered_read.new_sync_read
14.91 ± 2% -2.4 12.54 perf-profile.calltrace.cycles-pp.read_pages.page_cache_readahead_unbounded.generic_file_buffered_read.new_sync_read.vfs_read
14.28 ± 2% -2.3 11.99 perf-profile.calltrace.cycles-pp.submit_bio.ext4_mpage_readpages.read_pages.page_cache_readahead_unbounded.generic_file_buffered_read
14.28 ± 2% -2.3 11.99 perf-profile.calltrace.cycles-pp.submit_bio_noacct.submit_bio.ext4_mpage_readpages.read_pages.page_cache_readahead_unbounded
14.25 ± 2% -2.3 11.97 perf-profile.calltrace.cycles-pp.pmem_submit_bio.submit_bio_noacct.submit_bio.ext4_mpage_readpages.read_pages
6.17 ± 8% -2.2 3.99 ± 9% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec
13.59 ± 2% -2.2 11.44 perf-profile.calltrace.cycles-pp.pmem_do_read.pmem_submit_bio.submit_bio_noacct.submit_bio.ext4_mpage_readpages
13.47 ± 2% -2.1 11.33 perf-profile.calltrace.cycles-pp.__memcpy_mcsafe.pmem_do_read.pmem_submit_bio.submit_bio_noacct.submit_bio
5.86 ± 9% -2.1 3.78 ± 9% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list
25.43 ± 2% -1.7 23.77 perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.page_cache_readahead_unbounded.generic_file_buffered_read.new_sync_read
24.73 ± 2% -1.3 23.45 perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.page_cache_readahead_unbounded.generic_file_buffered_read
24.72 ± 2% -1.3 23.43 perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.page_cache_readahead_unbounded
24.71 ± 2% -1.3 23.43 perf-profile.calltrace.cycles-pp.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask
70.29 -1.2 69.05 perf-profile.calltrace.cycles-pp.__libc_read
70.27 -1.2 69.03 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_read
70.26 -1.2 69.03 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
70.26 -1.2 69.03 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
70.26 -1.2 69.03 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
70.25 -1.2 69.02 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
70.24 -1.2 69.01 perf-profile.calltrace.cycles-pp.generic_file_buffered_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
24.46 ± 2% -1.2 23.28 perf-profile.calltrace.cycles-pp.shrink_lruvec.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
24.44 ± 2% -1.2 23.26 perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages.try_to_free_pages
8.68 ± 3% -1.2 7.50 ± 5% perf-profile.calltrace.cycles-pp.copy_page_to_iter.generic_file_buffered_read.new_sync_read.vfs_read.ksys_read
8.56 ± 3% -1.2 7.40 ± 5% perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.generic_file_buffered_read.new_sync_read.vfs_read
8.51 ± 3% -1.2 7.35 ± 5% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.generic_file_buffered_read.new_sync_read
0.77 ± 11% -0.5 0.29 ±101% perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.__pagevec_release.invalidate_mapping_pages.generic_fadvise
2.24 ± 2% -0.5 1.75 ± 4% perf-profile.calltrace.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.page_cache_readahead_unbounded.generic_file_buffered_read.new_sync_read
0.92 -0.2 0.69 perf-profile.calltrace.cycles-pp.__list_del_entry_valid.rmqueue_bulk.rmqueue.get_page_from_freelist.__alloc_pages_nodemask
0.90 ± 2% -0.2 0.69 ± 4% perf-profile.calltrace.cycles-pp.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec
1.02 ± 4% -0.2 0.82 ± 10% perf-profile.calltrace.cycles-pp.mem_cgroup_charge.__add_to_page_cache_locked.add_to_page_cache_lru.page_cache_readahead_unbounded.generic_file_buffered_read
2.23 ± 11% +1.0 3.22 ± 24% perf-profile.calltrace.cycles-pp.release_pages.__pagevec_release.invalidate_mapping_pages.generic_fadvise.ksys_fadvise64_64
2.23 ± 11% +1.0 3.22 ± 24% perf-profile.calltrace.cycles-pp.__pagevec_release.invalidate_mapping_pages.generic_fadvise.ksys_fadvise64_64.__x64_sys_fadvise64
0.00 +2.4 2.38 ± 26% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.release_pages.__pagevec_release
0.00 +2.4 2.40 ± 26% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.release_pages.__pagevec_release.invalidate_mapping_pages
0.00 +2.4 2.40 ± 26% perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.release_pages.__pagevec_release.invalidate_mapping_pages.generic_fadvise
5.92 ± 2% +5.2 11.07 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
5.88 ± 2% +5.5 11.35 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node
10.30 ± 2% +8.8 19.08 ± 2% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.page_cache_readahead_unbounded.generic_file_buffered_read.new_sync_read.vfs_read
8.02 ± 3% +9.3 17.30 ± 2% perf-profile.calltrace.cycles-pp.lru_cache_add.add_to_page_cache_lru.page_cache_readahead_unbounded.generic_file_buffered_read.new_sync_read
7.93 ± 3% +9.3 17.23 ± 2% perf-profile.calltrace.cycles-pp.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.page_cache_readahead_unbounded.generic_file_buffered_read
0.00 +15.9 15.92 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add
0.00 +16.0 15.98 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru
0.00 +16.0 15.98 ± 2% perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.page_cache_readahead_unbounded
33.16 -6.0 27.18 ± 2% perf-profile.children.cycles-pp.__alloc_pages_nodemask
8.10 -4.6 3.49 ± 8% perf-profile.children.cycles-pp.get_page_from_freelist
6.83 ± 3% -4.5 2.33 ± 11% perf-profile.children.cycles-pp._raw_spin_lock
7.74 -4.5 3.25 ± 8% perf-profile.children.cycles-pp.rmqueue
7.12 ± 2% -4.3 2.82 ± 9% perf-profile.children.cycles-pp.rmqueue_bulk
10.59 ± 5% -3.3 7.26 ± 3% perf-profile.children.cycles-pp.shrink_page_list
8.42 ± 2% -3.3 5.13 ± 4% perf-profile.children.cycles-pp.lru_note_cost
9.04 ± 6% -2.9 6.14 ± 6% perf-profile.children.cycles-pp.__remove_mapping
14.91 ± 2% -2.4 12.54 perf-profile.children.cycles-pp.read_pages
14.90 ± 2% -2.4 12.53 perf-profile.children.cycles-pp.ext4_mpage_readpages
14.28 ± 2% -2.3 11.99 perf-profile.children.cycles-pp.submit_bio
14.28 ± 2% -2.3 11.99 perf-profile.children.cycles-pp.submit_bio_noacct
14.25 ± 2% -2.3 11.97 perf-profile.children.cycles-pp.pmem_submit_bio
13.60 ± 2% -2.2 11.45 perf-profile.children.cycles-pp.pmem_do_read
13.49 ± 2% -2.1 11.36 perf-profile.children.cycles-pp.__memcpy_mcsafe
25.56 ± 2% -1.7 23.88 perf-profile.children.cycles-pp.__alloc_pages_slowpath
24.85 ± 2% -1.3 23.54 perf-profile.children.cycles-pp.try_to_free_pages
24.84 ± 2% -1.3 23.53 perf-profile.children.cycles-pp.do_try_to_free_pages
70.29 -1.2 69.05 perf-profile.children.cycles-pp.__libc_read
70.27 -1.2 69.04 perf-profile.children.cycles-pp.ksys_read
70.27 -1.2 69.04 perf-profile.children.cycles-pp.vfs_read
70.25 -1.2 69.02 perf-profile.children.cycles-pp.new_sync_read
70.24 -1.2 69.01 perf-profile.children.cycles-pp.generic_file_buffered_read
26.36 -1.2 25.15 ± 2% perf-profile.children.cycles-pp.shrink_node
8.69 ± 3% -1.2 7.51 ± 5% perf-profile.children.cycles-pp.copy_page_to_iter
8.56 ± 3% -1.2 7.40 ± 5% perf-profile.children.cycles-pp.copyout
8.55 ± 3% -1.2 7.40 ± 5% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
1.59 ± 6% -0.6 1.01 ± 3% perf-profile.children.cycles-pp.free_pcppages_bulk
1.71 ± 5% -0.6 1.15 ± 4% perf-profile.children.cycles-pp.free_unref_page_list
2.07 -0.5 1.52 ± 2% perf-profile.children.cycles-pp.__list_del_entry_valid
2.25 ± 2% -0.5 1.77 ± 4% perf-profile.children.cycles-pp.__add_to_page_cache_locked
1.23 ± 2% -0.2 1.01 ± 2% perf-profile.children.cycles-pp.__delete_from_page_cache
0.69 ± 3% -0.2 0.47 ± 5% perf-profile.children.cycles-pp.workingset_eviction
1.02 ± 3% -0.2 0.83 ± 10% perf-profile.children.cycles-pp.mem_cgroup_charge
0.95 ± 4% -0.1 0.81 ± 3% perf-profile.children.cycles-pp.xas_store
0.57 -0.1 0.45 perf-profile.children.cycles-pp.__read_end_io
0.53 ± 4% -0.1 0.42 ± 6% perf-profile.children.cycles-pp.xas_create
0.58 ± 3% -0.1 0.48 ± 3% perf-profile.children.cycles-pp.pagecache_get_page
0.57 ± 2% -0.1 0.46 ± 2% perf-profile.children.cycles-pp.find_get_entry
0.82 ± 2% -0.1 0.72 ± 3% perf-profile.children.cycles-pp.xas_load
0.38 -0.1 0.30 ± 3% perf-profile.children.cycles-pp.unlock_page
0.30 ± 2% -0.1 0.22 ± 9% perf-profile.children.cycles-pp.workingset_age_nonresident
0.36 ± 7% -0.1 0.29 ± 9% perf-profile.children.cycles-pp.__count_memcg_events
0.42 ± 2% -0.1 0.35 ± 5% perf-profile.children.cycles-pp.unaccount_page_cache_page
0.54 ± 4% -0.1 0.47 ± 5% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.39 ± 2% -0.1 0.32 ± 7% perf-profile.children.cycles-pp.try_charge
0.41 ± 2% -0.1 0.34 perf-profile.children.cycles-pp.xa_load
0.45 ± 5% -0.1 0.40 ± 6% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.20 ± 10% -0.1 0.15 ± 13% perf-profile.children.cycles-pp.kmem_cache_alloc
0.38 ± 5% -0.1 0.33 ± 6% perf-profile.children.cycles-pp.hrtimer_interrupt
0.39 ± 5% -0.1 0.33 ± 6% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.46 ± 4% -0.0 0.41 ± 5% perf-profile.children.cycles-pp.asm_call_on_stack
0.26 ± 6% -0.0 0.21 ± 8% perf-profile.children.cycles-pp.xas_init_marks
0.21 ± 2% -0.0 0.17 ± 4% perf-profile.children.cycles-pp.__mod_lruvec_state
0.18 ± 3% -0.0 0.14 ± 3% perf-profile.children.cycles-pp.__mod_node_page_state
0.29 ± 6% -0.0 0.24 ± 8% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.17 ± 5% -0.0 0.12 ± 12% perf-profile.children.cycles-pp.shrink_slab
0.11 ± 17% -0.0 0.07 ± 26% perf-profile.children.cycles-pp.__slab_alloc
0.11 ± 17% -0.0 0.07 ± 26% perf-profile.children.cycles-pp.___slab_alloc
0.11 ± 17% -0.0 0.07 ± 21% perf-profile.children.cycles-pp.allocate_slab
0.34 ± 2% -0.0 0.31 ± 5% perf-profile.children.cycles-pp.mem_cgroup_uncharge_list
0.12 ± 8% -0.0 0.09 ± 5% perf-profile.children.cycles-pp.xas_alloc
0.16 ± 5% -0.0 0.12 ± 16% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.28 -0.0 0.25 ± 6% perf-profile.children.cycles-pp.uncharge_batch
0.10 -0.0 0.08 ± 6% perf-profile.children.cycles-pp.do_shrink_slab
0.15 ± 3% -0.0 0.12 ± 8% perf-profile.children.cycles-pp.scheduler_tick
0.11 ± 7% -0.0 0.09 ± 7% perf-profile.children.cycles-pp.__mod_zone_page_state
0.11 ± 4% -0.0 0.09 ± 7% perf-profile.children.cycles-pp.xas_start
0.10 ± 9% -0.0 0.08 perf-profile.children.cycles-pp.mark_page_accessed
0.07 ± 7% -0.0 0.05 perf-profile.children.cycles-pp.count_shadow_nodes
0.07 -0.0 0.06 perf-profile.children.cycles-pp.bio_add_page
0.09 +0.0 0.14 ± 3% perf-profile.children.cycles-pp.__list_add_valid
2.34 ± 10% +1.0 3.30 ± 23% perf-profile.children.cycles-pp.release_pages
2.23 ± 11% +1.0 3.22 ± 24% perf-profile.children.cycles-pp.__pagevec_release
14.79 ± 2% +2.1 16.92 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irq
35.61 +5.8 41.39 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
14.65 ± 2% +8.0 22.69 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
10.30 ± 2% +8.8 19.08 ± 2% perf-profile.children.cycles-pp.add_to_page_cache_lru
8.03 ± 3% +9.3 17.31 ± 2% perf-profile.children.cycles-pp.lru_cache_add
7.98 ± 3% +9.3 17.29 ± 2% perf-profile.children.cycles-pp.__pagevec_lru_add
0.00 +18.4 18.44 ± 4% perf-profile.children.cycles-pp.lock_page_lruvec_irqsave
13.37 ± 2% -2.1 11.26 perf-profile.self.cycles-pp.__memcpy_mcsafe
8.46 ± 3% -1.1 7.32 ± 5% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
2.07 -0.5 1.52 ± 2% perf-profile.self.cycles-pp.__list_del_entry_valid
2.17 ± 2% -0.3 1.86 ± 2% perf-profile.self.cycles-pp.generic_file_buffered_read
0.51 ± 2% -0.2 0.34 perf-profile.self.cycles-pp.__remove_mapping
0.39 ± 5% -0.1 0.25 ± 3% perf-profile.self.cycles-pp.workingset_eviction
0.44 -0.1 0.33 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.50 ± 2% -0.1 0.40 ± 2% perf-profile.self.cycles-pp.__read_end_io
0.32 ± 3% -0.1 0.21 ± 8% perf-profile.self.cycles-pp.get_page_from_freelist
0.32 ± 3% -0.1 0.22 ± 8% perf-profile.self.cycles-pp.__add_to_page_cache_locked
0.72 ± 2% -0.1 0.63 ± 3% perf-profile.self.cycles-pp.xas_load
0.38 ± 2% -0.1 0.29 ± 5% perf-profile.self.cycles-pp.unlock_page
0.30 ± 2% -0.1 0.22 ± 8% perf-profile.self.cycles-pp.workingset_age_nonresident
0.41 ± 4% -0.1 0.33 ± 6% perf-profile.self.cycles-pp.xas_create
0.36 ± 7% -0.1 0.29 ± 9% perf-profile.self.cycles-pp.__count_memcg_events
0.54 ± 4% -0.1 0.47 ± 3% perf-profile.self.cycles-pp.free_pcppages_bulk
0.34 ± 5% -0.1 0.27 ± 3% perf-profile.self.cycles-pp.find_get_entry
0.39 ± 3% -0.1 0.32 ± 4% perf-profile.self.cycles-pp.ext4_mpage_readpages
0.29 -0.0 0.25 ± 5% perf-profile.self.cycles-pp.shrink_page_list
0.18 ± 4% -0.0 0.14 ± 3% perf-profile.self.cycles-pp.__mod_node_page_state
0.16 ± 6% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.rmqueue
0.15 ± 3% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.unaccount_page_cache_page
0.15 ± 3% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.rmqueue_bulk
0.16 ± 5% -0.0 0.12 ± 16% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.17 ± 2% -0.0 0.15 ± 3% perf-profile.self.cycles-pp.try_charge
0.16 ± 2% -0.0 0.13 ± 7% perf-profile.self.cycles-pp.mem_cgroup_charge
0.33 ± 5% -0.0 0.30 ± 2% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.17 ± 6% -0.0 0.15 ± 7% perf-profile.self.cycles-pp.xas_clear_mark
0.13 ± 3% -0.0 0.10 ± 8% perf-profile.self.cycles-pp.release_pages
0.10 ± 4% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.pmem_do_read
0.09 ± 5% -0.0 0.07 ± 5% perf-profile.self.cycles-pp.__alloc_pages_nodemask
0.07 -0.0 0.06 perf-profile.self.cycles-pp.pmem_submit_bio
0.09 ± 4% +0.0 0.13 ± 3% perf-profile.self.cycles-pp.__list_add_valid
0.22 +0.2 0.43 ± 6% perf-profile.self.cycles-pp.isolate_lru_pages
0.31 +0.3 0.60 ± 5% perf-profile.self.cycles-pp.__pagevec_lru_add
35.61 +5.8 41.39 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
***************************************************************************************************
lkp-csl-2sp6: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based/ucode:
2M/gcc-9/performance/2pmem/ext4/mmap/x86_64-rhel-8.3/50%/debian-10.4-x86_64-20200603.cgz/200s/randrw/lkp-csl-2sp6/200G/fio-basic/tb/0x5002f01
commit:
c1139de73f ("mm/swap.c: serialize memcg changes in pagevec_lru_move_fn")
f4ba6c0e1b ("mm/lru: replace pgdat lru_lock with lruvec lock")
c1139de73fe749dc f4ba6c0e1b65b09bafd8efe3a41
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:16 -6% :16 kmsg.BTRFS_critical(device_sdb4):corrupt_leaf:root=#block=#slot=#ino=#,invalid_nlink:has#expect_no_more_than#for_dir
1:16 -6% :16 kmsg.BTRFS_error(device_sdb4):block=#read_time_tree_block_corruption_detected
11:16 -19% 8:16 perf-profile.children.cycles-pp.error_entry
2:16 -3% 1:16 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
0.59 ± 12% +1.0 1.64 ± 5% fio.latency_100ms%
8.80 ± 4% +1.4 10.19 ± 5% fio.latency_10ms%
3.69 ± 8% +1.7 5.41 ± 10% fio.latency_20ms%
0.15 ± 17% +0.3 0.45 ± 13% fio.latency_250ms%
50.63 -4.2 46.44 fio.latency_4ms%
3148 ± 2% -11.5% 2785 fio.read_bw_MBps
7323648 ± 5% +53.8% 11264000 ± 6% fio.read_clat_90%_us
10698752 ± 3% +86.2% 19922944 ± 5% fio.read_clat_95%_us
26542080 ± 5% +106.3% 54755328 ± 7% fio.read_clat_99%_us
4219212 ± 2% +43.4% 6049909 ± 2% fio.read_clat_mean_us
9601170 ± 8% +99.7% 19177399 ± 12% fio.read_clat_stddev
1574 ± 2% -11.5% 1392 fio.read_iops
2.541e+09 ± 2% -11.5% 2.248e+09 fio.time.file_system_inputs
1.294e+09 ± 2% -11.5% 1.145e+09 fio.time.file_system_outputs
3.204e+08 ± 2% -11.5% 2.834e+08 fio.time.major_page_faults
1399 ± 3% +28.2% 1793 ± 2% fio.time.percent_of_cpu_this_job_got
2599 ± 4% +31.5% 3420 ± 3% fio.time.system_time
212.59 ± 2% -13.0% 184.97 fio.time.user_time
630799 ± 2% -11.5% 558005 fio.workload
3158 ± 2% -11.5% 2793 fio.write_bw_MBps
38731776 ± 2% +4.2% 40370176 fio.write_clat_90%_us
40304640 +10.9% 44695552 fio.write_clat_95%_us
55017472 ± 4% +50.3% 82706432 ± 4% fio.write_clat_99%_us
19725262 ± 5% +48.0% 29191649 ± 10% fio.write_clat_stddev
1579 ± 2% -11.5% 1396 fio.write_iops
207579 ± 94% +56.2% 324213 ± 59% numa-meminfo.node1.Shmem
1.266e+08 ± 4% -14.7% 1.08e+08 ± 4% numa-numastat.node1.local_node
1.266e+08 ± 4% -14.7% 1.08e+08 ± 4% numa-numastat.node1.numa_hit
23044 ± 3% -18.7% 18736 ± 26% softirqs.CPU49.SCHED
162332 -9.9% 146250 ± 3% softirqs.TIMER
33.36 -11.8% 29.43 ± 2% iostat.cpu.iowait
17.96 ± 3% +23.3% 22.15 ± 2% iostat.cpu.system
1.07 ± 2% -11.8% 0.94 ± 2% iostat.cpu.user
33.69 -4.0 29.74 ± 2% mpstat.cpu.all.iowait%
17.13 ± 3% +4.3 21.41 ± 2% mpstat.cpu.all.sys%
1.08 ± 2% -0.1 0.95 ± 2% mpstat.cpu.all.usr%
68945965 ± 36% -15.1% 58564519 ± 42% numa-vmstat.node0.nr_dirtied
68106928 ± 37% -15.2% 57732456 ± 43% numa-vmstat.node0.nr_written
58.81 ± 9% +42.9% 84.06 ± 10% numa-vmstat.node1.nr_isolated_file
9050886 ± 5% -16.3% 7578162 ± 4% numa-vmstat.node1.workingset_refault_file
32.94 -12.1% 28.94 ± 2% vmstat.cpu.wa
6226764 ± 2% -11.8% 5494508 vmstat.io.bi
3168417 ± 2% -11.8% 2794885 vmstat.io.bo
2048 ± 6% +10.1% 2256 ± 3% vmstat.memory.buff
32.31 ± 2% -11.2% 28.69 ± 2% vmstat.procs.b
17.88 ± 4% +21.3% 21.69 ± 4% vmstat.procs.r
1.617e+08 ± 2% -11.5% 1.43e+08 proc-vmstat.nr_dirtied
108.50 ± 5% +36.6% 148.19 ± 5% proc-vmstat.nr_isolated_file
1.605e+08 ± 2% -11.6% 1.419e+08 proc-vmstat.nr_written
2.54e+08 -12.4% 2.226e+08 proc-vmstat.numa_hit
2.54e+08 -12.4% 2.225e+08 proc-vmstat.numa_local
3.067e+08 ± 2% -11.5% 2.715e+08 proc-vmstat.pgalloc_normal
6.42e+08 ± 2% -11.5% 5.681e+08 proc-vmstat.pgfault
3.101e+08 ± 2% -11.8% 2.736e+08 proc-vmstat.pgfree
3.176e+08 ± 2% -11.5% 2.81e+08 proc-vmstat.pgmajfault
1.27e+09 ± 2% -11.5% 1.124e+09 proc-vmstat.pgpgin
6.419e+08 ± 2% -11.6% 5.676e+08 proc-vmstat.pgpgout
6.252e+08 ± 2% -11.8% 5.513e+08 proc-vmstat.pgscan_file
3.815e+08 -16.1% 3.202e+08 proc-vmstat.pgscan_kswapd
3.081e+08 ± 2% -11.9% 2.715e+08 proc-vmstat.pgsteal_file
2.112e+08 -16.3% 1.767e+08 ± 2% proc-vmstat.pgsteal_kswapd
209967 ± 6% -14.5% 179600 ± 4% proc-vmstat.workingset_activate_file
16968721 ± 2% -16.6% 14154681 ± 2% proc-vmstat.workingset_refault_file
7.236e+09 ± 2% -6.9% 6.734e+09 perf-stat.i.branch-instructions
0.43 -0.0 0.42 perf-stat.i.branch-miss-rate%
30490921 ± 2% -9.5% 27601649 ± 2% perf-stat.i.branch-misses
70.22 -2.0 68.19 perf-stat.i.cache-miss-rate%
3.706e+08 ± 2% -10.6% 3.315e+08 perf-stat.i.cache-misses
5.256e+08 ± 2% -8.6% 4.804e+08 ± 2% perf-stat.i.cache-references
1.44 +37.1% 1.97 ± 2% perf-stat.i.cpi
5.337e+10 ± 2% +20.4% 6.425e+10 ± 2% perf-stat.i.cpu-cycles
163.10 ± 2% +67.8% 273.68 ± 4% perf-stat.i.cycles-between-cache-misses
0.17 -0.0 0.16 ± 2% perf-stat.i.dTLB-load-miss-rate%
16720500 ± 2% -13.9% 14398069 ± 2% perf-stat.i.dTLB-load-misses
9.74e+09 ± 2% -8.1% 8.95e+09 perf-stat.i.dTLB-loads
14187160 ± 4% -13.1% 12325962 ± 3% perf-stat.i.dTLB-store-misses
5.485e+09 ± 2% -11.4% 4.861e+09 perf-stat.i.dTLB-stores
3092901 -3.9% 2970941 perf-stat.i.iTLB-loads
3.729e+10 ± 2% -7.9% 3.433e+10 perf-stat.i.instructions
0.76 -16.3% 0.64 perf-stat.i.ipc
1589556 ± 2% -11.6% 1405412 perf-stat.i.major-faults
0.56 ± 2% +20.3% 0.67 ± 2% perf-stat.i.metric.GHz
240.88 ± 2% -8.5% 220.30 perf-stat.i.metric.M/sec
28673458 ± 4% -10.8% 25564966 ± 5% perf-stat.i.node-load-misses
36835476 ± 2% -10.5% 32950546 ± 3% perf-stat.i.node-store-misses
32037616 ± 3% -11.5% 28344766 ± 2% perf-stat.i.node-stores
1595967 ± 2% -11.5% 1412099 perf-stat.i.page-faults
0.42 -0.0 0.41 perf-stat.overall.branch-miss-rate%
70.52 -1.5 68.98 perf-stat.overall.cache-miss-rate%
1.43 +31.1% 1.88 perf-stat.overall.cpi
144.19 +35.1% 194.85 perf-stat.overall.cycles-between-cache-misses
0.17 -0.0 0.16 perf-stat.overall.dTLB-load-miss-rate%
0.70 -23.7% 0.53 perf-stat.overall.ipc
11879210 +4.3% 12390040 perf-stat.overall.path-length
7.199e+09 ± 2% -7.0% 6.696e+09 perf-stat.ps.branch-instructions
30332659 ± 2% -9.6% 27425581 ± 2% perf-stat.ps.branch-misses
3.686e+08 ± 2% -10.7% 3.292e+08 perf-stat.ps.cache-misses
5.229e+08 ± 2% -8.7% 4.773e+08 ± 2% perf-stat.ps.cache-references
5.316e+10 ± 2% +20.6% 6.414e+10 ± 2% perf-stat.ps.cpu-cycles
16632482 ± 2% -14.0% 14301613 ± 2% perf-stat.ps.dTLB-load-misses
9.69e+09 ± 2% -8.2% 8.898e+09 perf-stat.ps.dTLB-loads
14112500 ± 4% -13.3% 12241129 ± 3% perf-stat.ps.dTLB-store-misses
5.455e+09 ± 2% -11.5% 4.828e+09 perf-stat.ps.dTLB-stores
3076923 -4.0% 2953660 perf-stat.ps.iTLB-loads
3.709e+10 ± 2% -8.0% 3.413e+10 perf-stat.ps.instructions
1580918 ± 2% -11.7% 1395493 perf-stat.ps.major-faults
28519430 ± 4% -10.9% 25404577 ± 5% perf-stat.ps.node-load-misses
36642184 ± 2% -10.6% 32741800 ± 3% perf-stat.ps.node-store-misses
31861305 ± 3% -11.7% 28136943 ± 2% perf-stat.ps.node-stores
1587297 ± 2% -11.7% 1402147 perf-stat.ps.page-faults
7.494e+12 ± 2% -7.7% 6.914e+12 perf-stat.total.instructions
8.82 ± 12% -2.6 6.20 ± 23% perf-profile.calltrace.cycles-pp.ext4_mpage_readpages.filemap_fault.ext4_filemap_fault.__do_fault.do_fault
7.62 ± 13% -2.3 5.34 ± 23% perf-profile.calltrace.cycles-pp.submit_bio.ext4_mpage_readpages.filemap_fault.ext4_filemap_fault.__do_fault
7.58 ± 13% -2.3 5.31 ± 23% perf-profile.calltrace.cycles-pp.submit_bio_noacct.submit_bio.ext4_mpage_readpages.filemap_fault.ext4_filemap_fault
6.90 ± 13% -2.1 4.80 ± 23% perf-profile.calltrace.cycles-pp.pmem_submit_bio.submit_bio_noacct.submit_bio.ext4_mpage_readpages.filemap_fault
6.07 ± 12% -1.9 4.21 ± 23% perf-profile.calltrace.cycles-pp.pmem_do_read.pmem_submit_bio.submit_bio_noacct.submit_bio.ext4_mpage_readpages
5.96 ± 12% -1.8 4.15 ± 23% perf-profile.calltrace.cycles-pp.__memcpy_mcsafe.pmem_do_read.pmem_submit_bio.submit_bio_noacct.submit_bio
5.83 ± 12% -1.7 4.09 ± 27% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node.balance_pgdat
4.02 ± 18% -1.5 2.51 ± 26% perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
2.01 ± 12% -0.9 1.14 ± 61% perf-profile.calltrace.cycles-pp.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
2.23 ± 9% -0.8 1.47 ± 32% perf-profile.calltrace.cycles-pp.page_referenced.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
1.73 ± 29% -0.7 1.05 ± 32% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec
1.96 ± 9% -0.7 1.30 ± 32% perf-profile.calltrace.cycles-pp.rmap_walk_file.page_referenced.shrink_page_list.shrink_inactive_list.shrink_lruvec
1.27 ± 26% -0.6 0.66 ± 73% perf-profile.calltrace.cycles-pp.try_to_unmap_one.rmap_walk_file.try_to_unmap.shrink_page_list.shrink_inactive_list
0.84 ± 23% -0.5 0.35 ± 78% perf-profile.calltrace.cycles-pp.jbd2_journal_try_to_free_buffers.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
0.90 ± 14% -0.4 0.49 ± 60% perf-profile.calltrace.cycles-pp.mem_cgroup_charge.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page.filemap_fault
0.95 ± 13% -0.4 0.59 ± 51% perf-profile.calltrace.cycles-pp.__block_commit_write.block_page_mkwrite.ext4_page_mkwrite.do_page_mkwrite.do_fault
0.85 ± 12% -0.4 0.49 ± 59% perf-profile.calltrace.cycles-pp.mark_buffer_dirty.__block_commit_write.block_page_mkwrite.ext4_page_mkwrite.do_page_mkwrite
0.31 ±116% +0.7 1.05 ± 17% perf-profile.calltrace.cycles-pp.smp_call_function_many_cond.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush.shrink_page_list
0.52 ± 72% +0.8 1.35 ± 25% perf-profile.calltrace.cycles-pp.try_to_unmap_flush.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
0.52 ± 72% +0.8 1.35 ± 25% perf-profile.calltrace.cycles-pp.arch_tlbbatch_flush.try_to_unmap_flush.shrink_page_list.shrink_inactive_list.shrink_lruvec
0.52 ± 72% +0.8 1.35 ± 25% perf-profile.calltrace.cycles-pp.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush.shrink_page_list.shrink_inactive_list
2.25 ± 14% +3.1 5.31 ± 18% perf-profile.calltrace.cycles-pp.lru_cache_add.add_to_page_cache_lru.pagecache_get_page.filemap_fault.ext4_filemap_fault
2.13 ± 15% +3.1 5.23 ± 19% perf-profile.calltrace.cycles-pp.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.pagecache_get_page.filemap_fault
0.00 +4.3 4.29 ± 20% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add
0.00 +4.3 4.33 ± 20% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru
0.00 +4.3 4.33 ± 20% perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.pagecache_get_page
5.12 ± 48% +8.5 13.62 ± 39% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
5.09 ± 48% +9.2 14.31 ± 37% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node
8.83 ± 12% -2.6 6.21 ± 23% perf-profile.children.cycles-pp.ext4_mpage_readpages
6.07 ± 12% -1.9 4.22 ± 23% perf-profile.children.cycles-pp.pmem_do_read
6.00 ± 12% -1.8 4.17 ± 23% perf-profile.children.cycles-pp.__memcpy_mcsafe
4.24 ± 18% -1.5 2.77 ± 23% perf-profile.children.cycles-pp.__remove_mapping
2.38 ± 10% -0.7 1.70 ± 23% perf-profile.children.cycles-pp.page_referenced
2.10 ± 12% -0.6 1.48 ± 27% perf-profile.children.cycles-pp.try_to_unmap
1.58 ± 12% -0.5 1.08 ± 24% perf-profile.children.cycles-pp.get_page_from_freelist
1.64 ± 8% -0.5 1.15 ± 21% perf-profile.children.cycles-pp._raw_spin_lock
1.52 ± 9% -0.5 1.07 ± 23% perf-profile.children.cycles-pp.page_vma_mapped_walk
1.39 ± 10% -0.4 1.00 ± 24% perf-profile.children.cycles-pp.page_referenced_one
1.19 ± 10% -0.4 0.82 ± 26% perf-profile.children.cycles-pp.jbd2_journal_try_to_free_buffers
1.39 ± 10% -0.4 1.03 ± 21% perf-profile.children.cycles-pp.__list_del_entry_valid
1.13 ± 13% -0.4 0.78 ± 26% perf-profile.children.cycles-pp.__delete_from_page_cache
1.06 ± 12% -0.3 0.75 ± 21% perf-profile.children.cycles-pp.native_irq_return_iret
0.91 ± 14% -0.3 0.59 ± 23% perf-profile.children.cycles-pp.mem_cgroup_charge
1.14 ± 11% -0.3 0.83 ± 22% perf-profile.children.cycles-pp.down_read
1.01 ± 14% -0.3 0.71 ± 25% perf-profile.children.cycles-pp.unlock_page
0.98 ± 13% -0.3 0.68 ± 23% perf-profile.children.cycles-pp.__block_commit_write
0.88 ± 13% -0.3 0.61 ± 23% perf-profile.children.cycles-pp.mark_buffer_dirty
0.89 ± 10% -0.3 0.62 ± 26% perf-profile.children.cycles-pp.try_to_free_buffers
0.84 ± 12% -0.3 0.59 ± 25% perf-profile.children.cycles-pp.kmem_cache_free
0.77 ± 10% -0.3 0.52 ± 22% perf-profile.children.cycles-pp.__mod_memcg_state
0.76 ± 12% -0.2 0.53 ± 25% perf-profile.children.cycles-pp.up_read
0.66 ± 13% -0.2 0.42 ± 23% perf-profile.children.cycles-pp.__count_memcg_events
0.68 ± 12% -0.2 0.48 ± 23% perf-profile.children.cycles-pp.__set_page_dirty
0.68 ± 10% -0.2 0.48 ± 26% perf-profile.children.cycles-pp.free_buffer_head
0.64 ± 12% -0.2 0.45 ± 23% perf-profile.children.cycles-pp.get_io_u
0.46 ± 14% -0.2 0.30 ± 30% perf-profile.children.cycles-pp.workingset_eviction
0.48 ± 10% -0.2 0.33 ± 23% perf-profile.children.cycles-pp.__mod_lruvec_state
0.38 ± 11% -0.1 0.26 ± 23% perf-profile.children.cycles-pp.__mod_node_page_state
0.41 ± 15% -0.1 0.29 ± 24% perf-profile.children.cycles-pp.ext4_map_blocks
0.27 ± 18% -0.1 0.16 ± 24% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.35 ± 13% -0.1 0.25 ± 27% perf-profile.children.cycles-pp.__read_end_io
0.34 ± 24% -0.1 0.23 ± 19% perf-profile.children.cycles-pp.update_process_times
0.11 ± 18% -0.1 0.03 ± 88% perf-profile.children.cycles-pp.wake_all_kswapds
0.22 ± 12% -0.1 0.14 ± 23% perf-profile.children.cycles-pp.lock_page_memcg
0.29 ± 11% -0.1 0.21 ± 23% perf-profile.children.cycles-pp._cond_resched
0.14 ± 15% -0.0 0.09 ± 23% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.14 ± 22% -0.0 0.09 ± 24% perf-profile.children.cycles-pp.__check_block_validity
0.12 ± 17% -0.0 0.07 ± 33% perf-profile.children.cycles-pp.get_mem_cgroup_from_page
0.14 ± 14% -0.0 0.10 ± 23% perf-profile.children.cycles-pp._raw_read_lock
0.11 ± 11% -0.0 0.07 ± 23% perf-profile.children.cycles-pp.check_pte
0.10 ± 14% -0.0 0.07 ± 41% perf-profile.children.cycles-pp.__mark_inode_dirty
0.42 ± 11% +0.2 0.61 ± 12% perf-profile.children.cycles-pp.move_pages_to_lru
0.88 ± 25% +0.7 1.61 ± 12% perf-profile.children.cycles-pp.smp_call_function_many_cond
0.95 ± 25% +0.9 1.85 ± 16% perf-profile.children.cycles-pp.try_to_unmap_flush
0.95 ± 25% +0.9 1.85 ± 16% perf-profile.children.cycles-pp.arch_tlbbatch_flush
2.46 ± 16% +1.3 3.78 ± 14% perf-profile.children.cycles-pp.on_each_cpu_cond_mask
2.27 ± 14% +3.1 5.38 ± 18% perf-profile.children.cycles-pp.lru_cache_add
2.18 ± 15% +3.2 5.36 ± 18% perf-profile.children.cycles-pp.__pagevec_lru_add
0.00 +6.2 6.16 ± 27% perf-profile.children.cycles-pp.lock_page_lruvec_irqsave
16.67 ± 35% +12.9 29.53 ± 27% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
5.90 ± 12% -1.8 4.10 ± 23% perf-profile.self.cycles-pp.__memcpy_mcsafe
1.38 ± 10% -0.4 1.02 ± 21% perf-profile.self.cycles-pp.__list_del_entry_valid
1.06 ± 12% -0.3 0.74 ± 21% perf-profile.self.cycles-pp.native_irq_return_iret
0.97 ± 14% -0.3 0.69 ± 25% perf-profile.self.cycles-pp.unlock_page
0.95 ± 10% -0.3 0.67 ± 23% perf-profile.self.cycles-pp._raw_spin_lock
0.85 ± 11% -0.3 0.60 ± 22% perf-profile.self.cycles-pp.down_read
0.75 ± 10% -0.2 0.51 ± 23% perf-profile.self.cycles-pp.__mod_memcg_state
0.65 ± 13% -0.2 0.42 ± 23% perf-profile.self.cycles-pp.__count_memcg_events
0.73 ± 12% -0.2 0.50 ± 23% perf-profile.self.cycles-pp.up_read
0.70 ± 10% -0.2 0.51 ± 24% perf-profile.self.cycles-pp.shrink_page_list
0.62 ± 12% -0.2 0.44 ± 23% perf-profile.self.cycles-pp.get_io_u
0.48 ± 15% -0.2 0.32 ± 26% perf-profile.self.cycles-pp.get_page_from_freelist
0.48 ± 12% -0.1 0.33 ± 24% perf-profile.self.cycles-pp.kmem_cache_free
0.38 ± 16% -0.1 0.24 ± 29% perf-profile.self.cycles-pp.__remove_mapping
0.37 ± 11% -0.1 0.25 ± 24% perf-profile.self.cycles-pp.__mod_node_page_state
0.27 ± 18% -0.1 0.16 ± 24% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.26 ± 13% -0.1 0.16 ± 30% perf-profile.self.cycles-pp.workingset_eviction
0.21 ± 12% -0.1 0.14 ± 24% perf-profile.self.cycles-pp.lock_page_memcg
0.19 ± 18% -0.1 0.12 ± 21% perf-profile.self.cycles-pp.mem_cgroup_charge
0.19 ± 10% -0.1 0.13 ± 24% perf-profile.self.cycles-pp.rmap_walk_file
0.11 ± 16% -0.0 0.07 ± 33% perf-profile.self.cycles-pp.get_mem_cgroup_from_page
0.11 ± 12% -0.0 0.07 ± 33% perf-profile.self.cycles-pp.check_pte
0.13 ± 12% -0.0 0.10 ± 24% perf-profile.self.cycles-pp._cond_resched
0.23 ± 12% +0.2 0.39 ± 13% perf-profile.self.cycles-pp.move_pages_to_lru
0.76 ± 28% +0.7 1.51 ± 12% perf-profile.self.cycles-pp.smp_call_function_many_cond
16.66 ± 35% +12.9 29.53 ± 27% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mptcp] db71a2f198: WARNING:inconsistent_lock_state
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: db71a2f198fef53a9f710ad5ac475bbdb6aba840 ("[MPTCP][PATCH v2 net 1/2] mptcp: fix subflow's local_id issues")
url: https://github.com/0day-ci/linux/commits/Geliang-Tang/mptcp-fix-subflow-s...
base: https://git.kernel.org/cgit/linux/kernel/git/davem/net.git e1f469cd5866499ac40bfdca87411e1c525a10c7
in testcase: kernel-selftests
version: kernel-selftests-x86_64-e8e8f16e-1_20200807
with the following parameters:
group: kselftests-mptcp
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+----------------------------------------------------------------------------+------------+------------+
| | e1f469cd58 | db71a2f198 |
+----------------------------------------------------------------------------+------------+------------+
| boot_successes | 15 | 8 |
| boot_failures | 2 | 9 |
| Kernel_panic-not_syncing:VFS:Unable_to_mount_root_fs_on_unknown-block(#,#) | 2 | |
| WARNING:inconsistent_lock_state | 0 | 9 |
| inconsistent{SOFTIRQ-ON-W}->{IN-SOFTIRQ-W}usage | 0 | 9 |
| calltrace:asm_call_on_stack | 0 | 9 |
| BUG:sleeping_function_called_from_invalid_context_at_mm/slab.h | 0 | 9 |
+----------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 257.607162] WARNING: inconsistent lock state
[ 257.609399] 5.9.0-rc3-00371-gdb71a2f198fef #1 Not tainted
[ 257.611927] --------------------------------
[ 257.614273] inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
[ 257.617486] kworker/1:2/101 [HC0[0]:SC1[3]:HE1:SE0] takes:
[ 257.620140] ffffffffae1aaa40 (fs_reclaim){+.?.}-{0:0}, at: fs_reclaim_acquire+0x5/0x40
[ 257.623680] {SOFTIRQ-ON-W} state was registered at:
[ 257.626250] lock_acquire+0xaf/0x380
[ 257.628516] fs_reclaim_acquire+0x25/0x40
[ 257.631071] __kmalloc_node+0x60/0x560
[ 257.633350] alloc_cpumask_var_node+0x1b/0x40
[ 257.635850] native_smp_prepare_cpus+0xad/0x292
[ 257.638255] kernel_init_freeable+0x15a/0x2dd
[ 257.640847] kernel_init+0xa/0x122
[ 257.643277] ret_from_fork+0x22/0x30
[ 257.645510] irq event stamp: 89762
[ 257.647888] hardirqs last enabled at (89762): [<ffffffffacf08ef0>] process_backlog+0x1b0/0x260
[ 257.651614] hardirqs last disabled at (89761): [<ffffffffacf08f75>] process_backlog+0x235/0x260
[ 257.655368] softirqs last enabled at (89756): [<ffffffffacfb8598>] ip_finish_output2+0x258/0xa20
[ 257.659186] softirqs last disabled at (89757): [<ffffffffad2010d2>] asm_call_on_stack+0x12/0x20
[ 257.663053]
[ 257.663053] other info that might help us debug this:
[ 257.667675] Possible unsafe locking scenario:
[ 257.667675]
[ 257.672233] CPU0
[ 257.674229] ----
[ 257.676375] lock(fs_reclaim);
[ 257.678563] <Interrupt>
[ 257.680618] lock(fs_reclaim);
[ 257.682673]
[ 257.682673] *** DEADLOCK ***
[ 257.682673]
[ 257.687974] 8 locks held by kworker/1:2/101:
[ 257.690177] #0: ffffa060c7c56938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1bc/0x5a0
[ 257.693771] #1: ffffc2fac0197e58 ((work_completion)(&msk->work)){+.+.}-{0:0}, at: process_one_work+0x1bc/0x5a0
[ 257.697437] #2: ffffa060dd8258e0 (sk_lock-AF_INET){+.+.}-{0:0}, at: mptcp_worker+0x5f/0xac0
[ 257.700972] #3: ffffa061247e2e20 (k-sk_lock-AF_INET){+.+.}-{0:0}, at: inet_stream_connect+0x23/0x60
[ 257.704558] #4: ffffffffae0c9a40 (rcu_read_lock){....}-{1:2}, at: __ip_queue_xmit+0x5/0x600
[ 257.707957] #5: ffffffffae0c9a40 (rcu_read_lock){....}-{1:2}, at: process_backlog+0x75/0x260
[ 257.711431] #6: ffffffffae0c9a40 (rcu_read_lock){....}-{1:2}, at: ip_local_deliver_finish+0x2c/0x120
[ 257.714689] #7: ffffffffae0c9a40 (rcu_read_lock){....}-{1:2}, at: tcp_rcv_state_process+0x17f/0x981
[ 257.718220]
[ 257.718220] stack backtrace:
[ 257.722396] CPU: 1 PID: 101 Comm: kworker/1:2 Not tainted 5.9.0-rc3-00371-gdb71a2f198fef #1
[ 257.726013] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 257.729513] Workqueue: events mptcp_worker
[ 257.732037] Call Trace:
[ 257.734092] <IRQ>
[ 257.736155] dump_stack+0x8d/0xc0
[ 257.738400] mark_lock+0x633/0x7c0
[ 257.740652] ? print_shortest_lock_dependencies+0x40/0x40
[ 257.743499] __lock_acquire+0x954/0xaa0
[ 257.745907] lock_acquire+0xaf/0x380
[ 257.748288] ? fs_reclaim_acquire+0x5/0x40
[ 257.750889] ? mptcp_pm_nl_get_local_id+0x232/0x400
[ 257.753356] fs_reclaim_acquire+0x25/0x40
[ 257.755986] ? fs_reclaim_acquire+0x5/0x40
[ 257.758562] kmem_cache_alloc_trace+0x40/0x460
[ 257.761092] mptcp_pm_nl_get_local_id+0x232/0x400
[ 257.763793] subflow_init_req+0x1c2/0x3a0
[ 257.766127] ? inet_reqsk_alloc+0x21/0x140
[ 257.768560] ? rcu_read_lock_sched_held+0x52/0xa0
[ 257.771236] ? kmem_cache_alloc+0x3b8/0x460
[ 257.773656] tcp_conn_request+0x341/0xe60
[ 257.776117] ? lock_acquire+0xaf/0x380
[ 257.778486] ? tcp_rcv_state_process+0x17f/0x981
[ 257.781114] ? tcp_rcv_state_process+0x1e2/0x981
[ 257.783833] tcp_rcv_state_process+0x1e2/0x981
[ 257.786417] ? tcp_v4_inbound_md5_hash+0x4c/0x160
[ 257.789117] tcp_v4_do_rcv+0xb8/0x200
[ 257.791562] tcp_v4_rcv+0xf94/0x1080
[ 257.793835] ip_protocol_deliver_rcu+0x2d/0x2a0
[ 257.796463] ip_local_deliver_finish+0x8c/0x120
[ 257.799035] ip_local_deliver+0x71/0x220
[ 257.801471] ? rcu_read_lock_held+0x52/0x60
[ 257.803973] ip_rcv+0x57/0x200
[ 257.806218] ? process_backlog+0x75/0x260
[ 257.808714] __netif_receive_skb_one_core+0x87/0xa0
[ 257.811476] process_backlog+0xe7/0x260
[ 257.814050] net_rx_action+0x166/0x480
[ 257.816877] __do_softirq+0xea/0x4eb
[ 257.819171] asm_call_on_stack+0x12/0x20
[ 257.821573] </IRQ>
[ 257.823513] ? ip_finish_output2+0x258/0xa20
[ 257.825923] do_softirq_own_stack+0x78/0xa0
[ 257.828215] do_softirq+0x52/0xa0
[ 257.830335] __local_bh_enable_ip+0xde/0x100
[ 257.832834] ip_finish_output2+0x27c/0xa20
[ 257.835155] ? rcu_read_lock_held+0x52/0x60
[ 257.837334] ? ip_output+0x7f/0x280
[ 257.839546] ip_output+0x7f/0x280
[ 257.841650] __ip_queue_xmit+0x1df/0x600
[ 257.844052] __tcp_transmit_skb+0xa17/0xc80
[ 257.846277] tcp_connect+0x4fe/0x600
[ 257.848421] tcp_v4_connect+0x44e/0x560
[ 257.850615] __inet_stream_connect+0xc5/0x360
[ 257.853019] ? __local_bh_enable_ip+0x81/0x100
[ 257.855302] inet_stream_connect+0x37/0x60
[ 257.857569] __mptcp_subflow_connect+0x195/0x228
[ 257.860107] mptcp_pm_create_subflow_or_signal_addr+0x27d/0x5a0
[ 257.862786] mptcp_worker+0x5e4/0xac0
To reproduce:
# build kernel
cd linux
cp config-5.9.0-rc3-00371-gdb71a2f198fef .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[PCI] 3233e41d3e: WARNING:at_drivers/pci/pci.c:#pci_reset_hotplug_slot
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 3233e41d3e8ebcd44e92da47ffed97fd49b84278 ("[PATCH] PCI: pciehp: Fix AB-BA deadlock between reset_lock and device_lock")
url: https://github.com/0day-ci/linux/commits/Lukas-Wunner/PCI-pciehp-Fix-AB-B...
base: https://git.kernel.org/cgit/linux/kernel/git/helgaas/pci.git next
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 8 -m 16G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+------------------------------------------------------+------------+------------+
| | 8a445afd71 | 3233e41d3e |
+------------------------------------------------------+------------+------------+
| boot_successes | 4 | 0 |
| boot_failures | 0 | 4 |
| WARNING:at_drivers/pci/pci.c:#pci_reset_hotplug_slot | 0 | 4 |
| RIP:pci_reset_hotplug_slot | 0 | 4 |
+------------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 0.971752] WARNING: CPU: 0 PID: 1 at drivers/pci/pci.c:4905 pci_reset_hotplug_slot+0x70/0x80
[ 0.971753] Modules linked in:
[ 0.971755] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.8.0-rc1-00053-g3233e41d3e8eb #1
[ 0.971756] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 0.971757] RIP: 0010:pci_reset_hotplug_slot+0x70/0x80
[ 0.971759] Code: 41 89 c4 48 8b 7b 20 e8 4e 3a c1 ff 44 89 e0 5b 5d 41 5c c3 48 8b 43 18 31 f6 48 8d b8 90 00 00 00 e8 e4 80 70 00 85 c0 75 ba <0f> 0b eb b6 41 bc e7 ff ff ff eb d6 0f 1f 40 00 66 66 66 66 90 48
[ 0.971759] RSP: 0000:ffffbd73c0013ab0 EFLAGS: 00010246
[ 0.971761] RAX: 0000000000000000 RBX: ffff9700475024c0 RCX: ffff9701048c9900
[ 0.971762] RDX: ffff970047e40000 RSI: ffff9701048c9990 RDI: 0000000000000246
[ 0.971763] RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
[ 0.971763] R10: 0000000000000001 R11: 0000000000000000 R12: ffff9700477320b0
[ 0.971764] R13: ffff970047732000 R14: ffff97004769c238 R15: 0000000000000000
[ 0.971765] FS: 0000000000000000(0000) GS:ffff97036fc00000(0000) knlGS:0000000000000000
[ 0.971766] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.971767] CR2: 0000000000000000 CR3: 00000001c2c10000 CR4: 00000000000006f0
[ 0.971767] Call Trace:
[ 0.971768] pci_probe_reset_function+0xc4/0xe0
[ 0.971769] pci_device_add+0x13f/0x2a0
[ 0.971769] pci_scan_single_device+0xa4/0xc0
[ 0.971770] pci_scan_slot+0x52/0x110
[ 0.971771] pci_scan_child_bus_extend+0x3a/0x2a0
[ 0.971771] acpi_pci_root_create+0x1f7/0x250
[ 0.971772] pci_acpi_scan_root+0x182/0x1b0
[ 0.971773] acpi_pci_root_add.cold+0x59/0x1b0
[ 0.971773] ? acpi_device_always_present+0x20/0x90
[ 0.971774] acpi_bus_attach+0xf6/0x200
[ 0.971775] acpi_bus_attach+0x6b/0x200
[ 0.971776] acpi_bus_scan+0x43/0x90
[ 0.971776] ? acpi_sleep_proc_init+0x24/0x24
[ 0.971777] acpi_scan_init+0x102/0x24b
[ 0.971778] acpi_init+0x2c7/0x329
[ 0.971778] do_one_initcall+0x5d/0x330
[ 0.971779] ? rcu_read_lock_sched_held+0x52/0x90
[ 0.971780] kernel_init_freeable+0x248/0x2c9
[ 0.971780] ? rest_init+0x23e/0x23e
[ 0.971781] kernel_init+0xa/0x112
[ 0.971782] ret_from_fork+0x22/0x30
[ 0.971782] irq event stamp: 109115
[ 0.971783] hardirqs last enabled at (109115): [<ffffffffaa71a514>] _raw_spin_unlock_irqrestore+0x54/0x70
[ 0.971784] hardirqs last disabled at (109114): [<ffffffffaa71ab01>] _raw_spin_lock_irqsave+0x21/0x60
[ 0.971785] softirqs last enabled at (109060): [<ffffffffaaa003aa>] __do_softirq+0x3aa/0x4af
[ 0.971786] softirqs last disabled at (109053): [<ffffffffaa8010b2>] asm_call_on_stack+0x12/0x20
[ 0.971787] ---[ end trace c3e4ce92dee5df5f ]---
To reproduce:
# build kernel
cd linux
cp config-5.8.0-rc1-00053-g3233e41d3e8eb .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
625608edb6 ("mtd: rawnand: Use the ECC framework OOB layouts"): BUG: unable to handle page fault for address: f71fe000
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/miquelraynal/linux-0day.git nand/next
commit 625608edb698b9e62a9620398ff660659714f92b
Author: Miquel Raynal <miquel.raynal(a)bootlin.com>
AuthorDate: Thu Aug 27 10:52:05 2020 +0200
Commit: Miquel Raynal <miquel.raynal(a)bootlin.com>
CommitDate: Wed Sep 2 09:28:20 2020 +0200
mtd: rawnand: Use the ECC framework OOB layouts
No need to have our own in the raw NAND core.
Signed-off-by: Miquel Raynal <miquel.raynal(a)bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20200827085208.16276-18-miquel.raynal@b...
9b08a7d863 mtd: rawnand: Make use of the ECC framework
625608edb6 mtd: rawnand: Use the ECC framework OOB layouts
2296830e3d mtd: onenand: omap2: Allow for compile-testing on !ARM
+-------------------------------------------------------+------------+------------+------------+
| | 9b08a7d863 | 625608edb6 | 2296830e3d |
+-------------------------------------------------------+------------+------------+------------+
| boot_successes | 40 | 0 | 0 |
| boot_failures | 3 | 23 | 1 |
| invoked_oom-killer:gfp_mask=0x | 1 | | |
| Mem-Info | 2 | | |
| EIP:__put_user_4 | 1 | | |
| Initiating_system_reboot | 1 | | |
| BUG:unable_to_handle_page_fault_for_address | 0 | 10 | 1 |
| Oops:#[##] | 0 | 13 | 1 |
| EIP:memcpy | 0 | 14 | 1 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 9 | |
| INFO:trying_to_register_non-static_key | 0 | 1 | |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 4 | |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 4 | 1 |
| BUG:kernel_reboot-without-warning_in_test_stage | 0 | 8 | |
| EIP:run_timer_softirq | 0 | 2 | |
| BUG:spinlock_bad_magic_on_CPU | 0 | 1 | |
| EIP:lookup_object | 0 | 0 | 1 |
+-------------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 13.371873] udevd[339]: starting version 175
[ 13.526613] _warn_unseeded_randomness: 601 callbacks suppressed
[ 13.526623] random: get_random_u32 called from copy_process+0x240/0x13ab with crng_init=0
[ 13.526836] random: get_random_u32 called from allocate_slab+0x10e/0x374 with crng_init=0
[ 13.529371] random: get_random_u32 called from copy_process+0x240/0x13ab with crng_init=0
[ 14.148526] BUG: unable to handle page fault for address: f71fe000
[ 14.151301] #PF: supervisor read access in kernel mode
[ 14.152114] #PF: error_code(0x0000) - not-present page
[ 14.152893] *pdpt = 0000000002ba7001 *pde = 0000000003749067 *pte = 0000000000000000
[ 14.154106] Oops: 0000 [#1] SMP
[ 14.154629] CPU: 1 PID: 436 Comm: mtd_probe Not tainted 5.9.0-rc2-00017-g625608edb698b9 #1
[ 14.155908] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 14.157241] EIP: memcpy+0xf/0x1f
[ 14.157766] Code: 88 43 52 5b 5d c3 e8 14 fd ff ff 8b 43 5c 2b 43 54 88 43 52 5b 5d c3 c3 cc cc cc 55 89 e5 57 89 c7 56 89 d6 53 89 cb c1 e9 02 <f3> a5 89 d9 83 e1 03 74 02 f3 a4 5b 5e 5f 5d c3 55 89 e5 57 89 c7
[ 14.160625] EAX: bf98ea44 EBX: fffffffc ECX: 3db28781 EDX: ede9fe06
[ 14.161622] ESI: f71fdffe EDI: c8cecc3c EBP: beffbd2c ESP: beffbd20
[ 14.162603] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00010a06
[ 14.163556] CR0: 80050033 CR2: f71fe000 CR3: 7ef24400 CR4: 000006f0
[ 14.164487] Call Trace:
[ 14.164857] mtd_ooblayout_get_bytes+0x70/0x8f
[ 14.165539] ? get_mtd_device_nm+0x9f/0x9f
[ 14.166183] mtd_ooblayout_get_eccbytes+0x13/0x15
[ 14.166919] ? get_mtd_device_nm+0x9f/0x9f
[ 14.167565] nand_read_page_swecc+0x90/0x115
[ 14.168250] nand_do_read_ops+0x17e/0x4e5
[ 14.168891] nand_read_oob+0x44/0x24f
[ 14.169472] ? find_held_lock+0x24/0x6b
[ 14.170081] ? create_object+0x1c8/0x2e3
[ 14.170711] ? nand_do_read_ops+0x4e5/0x4e5
[ 14.171378] mtd_read_oob_std+0x60/0x89
[ 14.171955] mtd_read_oob+0x75/0x10d
[ 14.172521] mtd_read+0x49/0x67
[ 14.173019] mtdchar_read+0xf2/0x212
[ 14.173581] ? mtdchar_write+0x211/0x211
[ 14.174194] vfs_read+0x92/0x185
[ 14.174725] ksys_read+0x5c/0xc9
[ 14.175247] __ia32_sys_read+0x10/0x12
[ 14.175845] do_int80_syscall_32+0x27/0x34
[ 14.176494] entry_INT80_32+0x111/0x111
[ 14.177089] EIP: 0x37efd1b2
[ 14.177519] Code: 89 c2 31 c0 89 d7 f3 aa 8b 44 24 1c 89 30 c6 40 04 00 83 c4 2c 89 f0 5b 5e 5f 5d c3 90 90 90 90 90 90 90 90 90 90 90 90 cd 80 <c3> 8d b6 00 00 00 00 8d bc 27 00 00 00 00 8b 1c 24 c3 8d b6 00 00
[ 14.180127] EAX: ffffffda EBX: 00000003 ECX: 004a0008 EDX: 00000200
[ 14.181107] ESI: 004a0008 EDI: 00000000 EBP: 3fc28698 ESP: 3fc28654
[ 14.182067] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000246
[ 14.183131] Modules linked in:
[ 14.183612] CR2: 00000000f71fe000
[ 14.184136] ---[ end trace 5c769db94eac64db ]---
[ 14.184867] EIP: memcpy+0xf/0x1f
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start cb0e49097ccffd20d4cd33a6346740070bab1974 f4d51dffc6c01a9e94650d95ce0104964f8ae822 --
git bisect bad 4577ad883f283c25dd53ef1ab1002eee93e75755 # 22:37 B 2 1 2 4 Merge 'mptcp/export' into devel-catchup-202009072059
git bisect bad 812048bdb9b22d5e0a24588e863eea1d3872ed45 # 22:53 B 6 5 6 8 Merge 'linux-review/Ville-Syrjala/drm-atomic-helper-Extract-drm_atomic_helper_calc_timestamping_constants/20200907-200154' into devel-catchup-202009072059
git bisect good 3265bbfadc6dd6ce90052fe2da9e546b713be32e # 23:24 G 17 0 11 11 Merge 'asoc/for-next' into devel-catchup-202009072059
git bisect bad dff8253a1df95da4dc2f443e9df7f891f4fb3134 # 23:31 B 0 5 14 0 Merge 'ceph-client/testing' into devel-catchup-202009072059
git bisect good da7471d4956496c48046ed6ea6d9594b9351146e # 00:01 G 17 0 10 10 Merge 'renesas-devel/master' into devel-catchup-202009072059
git bisect bad 301a66c61759541231917e71fbc477689595481c # 00:17 B 1 2 1 1 Merge 'miquelraynal/nand/next' into devel-catchup-202009072059
git bisect bad 09f4b18a9b0d0316371391d6c887287a911d9a6e # 00:29 B 5 3 5 5 mtd: rawnand: atmel: Drop redundant nand_read_page_op()
git bisect good 8f27947f2ea51f14340c5743ed6c865edaaba372 # 00:56 G 23 0 15 15 mtd: nand: Create a helper to extract the ECC configuration
git bisect good 9b08a7d863cc16304a8988b4784487f3e88c310f # 01:21 G 22 0 14 15 mtd: rawnand: Make use of the ECC framework
git bisect bad a965a86a8b29381a8ca4b9733f25011b9714380e # 01:36 B 0 1 10 0 mtd: rawnand: Use the ECC framework user input parsing bits
git bisect bad 127e7d2f5ac9261ecbfca11d264d6651a60e16f6 # 05:01 B 3 1 3 13 mtd: rawnand: Use the ECC framework nand_ecc_is_strong_enough() helper
git bisect bad 625608edb698b9e62a9620398ff660659714f92b # 05:20 B 0 1 10 0 mtd: rawnand: Use the ECC framework OOB layouts
# first bad commit: [625608edb698b9e62a9620398ff660659714f92b] mtd: rawnand: Use the ECC framework OOB layouts
git bisect good 9b08a7d863cc16304a8988b4784487f3e88c310f # 05:32 G 63 0 32 47 mtd: rawnand: Make use of the ECC framework
# extra tests with debug options
git bisect bad 625608edb698b9e62a9620398ff660659714f92b # 06:07 B 1 1 1 1 mtd: rawnand: Use the ECC framework OOB layouts
# extra tests on head commit of miquelraynal/nand/next
git bisect bad 2296830e3dae6dcace9e435b6c508a8319d2eaa0 # 07:31 B 0 1 10 0 mtd: onenand: omap2: Allow for compile-testing on !ARM
# bad: [2296830e3dae6dcace9e435b6c508a8319d2eaa0] mtd: onenand: omap2: Allow for compile-testing on !ARM
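The bisect log above is machine-generated by git bisect run. For readers unfamiliar with the mechanics, here is a minimal, self-contained sketch on a throwaway repository (the repo layout, commit count, and the 'broken' marker file are all invented for illustration; this is not the kernel bisect above):

```shell
# Toy git-bisect demo in a temporary repository.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo
cd repo
git config user.email demo@example.com
git config user.name demo
# Make 10 commits; the "regression" (file 'broken') appears at commit 7.
for i in $(seq 1 10); do
  [ "$i" -ge 7 ] && touch broken
  git add -A
  git commit -q --allow-empty -m "commit $i"
done
git bisect start HEAD HEAD~9 >/dev/null   # bad tip, known-good ancestor
# 'git bisect run' marks each checkout good or bad from the command's exit
# code, producing automated GOOD/BAD decisions like the log lines above.
git bisect run sh -c '! test -e broken' | grep "first bad commit"
```

In the real report the test command is a full kernel build plus a QEMU boot test rather than a file-existence check, but the good/bad bookkeeping is identical.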
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org