[selftests] 0c3355cc8e: kernel-selftests.net.fib_tests.sh.fail
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-7):
commit: 0c3355cc8e1973cc4c22c1622f211fcbab793608 ("selftests: fib_tests: add more tests for metric update")
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable-rc.git linux-4.19.y
in testcase: kernel-selftests
with the following parameters:
group: kselftests-net
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
selftests: net: fib_tests.sh
========================================
Single path route test
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
Nexthop device deleted
TEST: IPv4 fibmatch - no route [ OK ]
TEST: IPv6 fibmatch - no route [ OK ]
Multipath route test
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
One nexthop device deleted
TEST: IPv4 - multipath route removed on delete [ OK ]
TEST: IPv6 - multipath down to single path [ OK ]
Second nexthop device deleted
TEST: IPv6 - no route [ OK ]
Single path, admin down
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
Route deleted on down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
Admin down multipath
Verify start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
One device down, one up
TEST: IPv4 fibmatch on down device [ OK ]
TEST: IPv6 fibmatch on down device [ OK ]
TEST: IPv4 fibmatch on up device [ OK ]
TEST: IPv6 fibmatch on up device [ OK ]
TEST: IPv4 flags on down device [ OK ]
TEST: IPv6 flags on down device [ OK ]
TEST: IPv4 flags on up device [ OK ]
TEST: IPv6 flags on up device [ OK ]
Other device down and up
TEST: IPv4 fibmatch on down device [ OK ]
TEST: IPv6 fibmatch on down device [ OK ]
TEST: IPv4 fibmatch on up device [ OK ]
TEST: IPv6 fibmatch on up device [ OK ]
TEST: IPv4 flags on down device [ OK ]
TEST: IPv6 flags on down device [ OK ]
TEST: IPv4 flags on up device [ OK ]
TEST: IPv6 flags on up device [ OK ]
Both devices down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
Local carrier tests - single path
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 - no linkdown flag [ OK ]
TEST: IPv6 - no linkdown flag [ OK ]
Carrier off on nexthop
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 - linkdown flag set [ OK ]
TEST: IPv6 - linkdown flag set [ OK ]
Route to local address with carrier down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 linkdown flag set [ OK ]
TEST: IPv6 linkdown flag set [ OK ]
Single path route carrier test
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 no linkdown flag [ OK ]
TEST: IPv6 no linkdown flag [ OK ]
Carrier down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 linkdown flag set [ OK ]
TEST: IPv6 linkdown flag set [ OK ]
Second address added with carrier down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 linkdown flag set [ OK ]
TEST: IPv6 linkdown flag set [ OK ]
IPv4 nexthop tests
TEST: Directly connected nexthop, unicast address [ OK ]
TEST: Directly connected nexthop, unicast address with device [ OK ]
TEST: Gateway can not be local unicast address [ OK ]
TEST: Gateway can not be local unicast address, with device [ OK ]
TEST: Gateway can be local address in a VRF [ OK ]
TEST: Gateway can be local address in a VRF, with device [ OK ]
TEST: Redirect to VRF lookup [ OK ]
TEST: VRF route, gateway can be local address in default VRF [ OK ]
TEST: VRF route, gateway can not be a local address [ OK ]
TEST: VRF route, gateway can not be a local addr with device [ OK ]
IPv6 nexthop tests
TEST: Directly connected nexthop, unicast address [ OK ]
TEST: Directly connected nexthop, unicast address with device [ OK ]
TEST: Gateway is linklocal address [ OK ]
TEST: Gateway is linklocal address, no device [ OK ]
TEST: Gateway can not be local unicast address [ OK ]
TEST: Gateway can not be local unicast address, with device [ OK ]
TEST: Gateway can not be a local linklocal address [ OK ]
TEST: Gateway can be local address in a VRF [ OK ]
TEST: Gateway can be local address in a VRF, with device [ OK ]
TEST: Gateway can be local linklocal address in a VRF [ OK ]
TEST: Redirect to VRF lookup [ OK ]
TEST: VRF route, gateway can be local address in default VRF [ OK ]
TEST: VRF route, gateway can not be a local address [ OK ]
TEST: VRF route, gateway can not be a local addr with device [ OK ]
IPv6 route add / append tests
TEST: Attempt to add duplicate route - gw [ OK ]
TEST: Attempt to add duplicate route - dev only [ OK ]
TEST: Attempt to add duplicate route - reject route [ OK ]
TEST: Append nexthop to existing route - gw [ OK ]
TEST: Add multipath route [ OK ]
TEST: Attempt to add duplicate multipath route [ OK ]
TEST: Route add with different metrics [ OK ]
TEST: Route delete with metric [ OK ]
IPv6 route replace tests
TEST: Single path with single path [ OK ]
TEST: Single path with multipath [ OK ]
TEST: Single path with single path via multipath attribute [ OK ]
TEST: Invalid nexthop [ OK ]
TEST: Single path - replace of non-existent route [ OK ]
TEST: Multipath with multipath [ OK ]
TEST: Multipath with single path [ OK ]
TEST: Multipath with single path via multipath attribute [ OK ]
TEST: Multipath - invalid first nexthop [ OK ]
TEST: Multipath - invalid second nexthop [ OK ]
TEST: Multipath - replace of non-existent route [ OK ]
IPv4 route add / append tests
TEST: Attempt to add duplicate route - gw [ OK ]
TEST: Attempt to add duplicate route - dev only [ OK ]
TEST: Attempt to add duplicate route - reject route [ OK ]
TEST: Add new nexthop for existing prefix [ OK ]
TEST: Append nexthop to existing route - gw [ OK ]
TEST: Append nexthop to existing route - dev only [ OK ]
TEST: Append nexthop to existing route - reject route [ OK ]
TEST: Append nexthop to existing reject route - gw [ OK ]
TEST: Append nexthop to existing reject route - dev only [ OK ]
TEST: add multipath route [ OK ]
TEST: Attempt to add duplicate multipath route [ OK ]
TEST: Route add with different metrics [ OK ]
TEST: Route delete with metric [ OK ]
IPv4 route replace tests
TEST: Single path with single path [ OK ]
TEST: Single path with multipath [ OK ]
TEST: Single path with reject route [ OK ]
TEST: Single path with single path via multipath attribute [ OK ]
TEST: Invalid nexthop [ OK ]
TEST: Single path - replace of non-existent route [ OK ]
TEST: Multipath with multipath [ OK ]
TEST: Multipath with single path [ OK ]
TEST: Multipath with single path via multipath attribute [ OK ]
TEST: Multipath with reject route [ OK ]
TEST: Multipath - invalid first nexthop [ OK ]
TEST: Multipath - invalid second nexthop [ OK ]
TEST: Multipath - replace of non-existent route [ OK ]
IPv6 prefix route tests
TEST: Default metric [ OK ]
TEST: User specified metric on first device [ OK ]
TEST: User specified metric on second device [ OK ]
TEST: Delete of address on first device [ OK ]
TEST: Modify metric of address [ OK ]
TEST: Prefix route removed on link down [ OK ]
TEST: Prefix route with metric on link up [ OK ]
IPv4 prefix route tests
TEST: Default metric [ OK ]
TEST: User specified metric on first device [ OK ]
TEST: User specified metric on second device [ OK ]
TEST: Delete of address on first device [ OK ]
TEST: Modify metric of address [ OK ]
TEST: Prefix route removed on link down [ OK ]
TEST: Prefix route with metric on link up [ OK ]
TEST: Modify metric of .0/24 address [ OK ]
TEST: Modify metric of address with peer route [FAIL]
Tests passed: 131
Tests failed: 1
not ok 1..12 selftests: net: fib_tests.sh [FAIL]
To reproduce:
# build kernel
cd linux
cp config-4.19.82-00060-g0c3355cc8e197 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[tcp] 43876b1ce4: kernel-selftests.net.tls.fail
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-7):
commit: 43876b1ce42be40179068aeff2a5aac0b21b16fe ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable-rc.git linux-4.19.y
in testcase: kernel-selftests
with the following parameters:
group: kselftests-net
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
selftests: net: tls
========================================
tls.c:128:tls.send_then_sendfile:Expected recv(self->cfd, buf, st.st_size, 0) (65536) == st.st_size (165960)
tls.send_then_sendfile: Test failed at step #8
[==========] Running 28 tests from 2 test cases.
[ RUN ] tls.sendfile
[ OK ] tls.sendfile
[ RUN ] tls.send_then_sendfile
[ FAIL ] tls.send_then_sendfile
[ RUN ] tls.recv_max
[ OK ] tls.recv_max
[ RUN ] tls.recv_small
[ OK ] tls.recv_small
[ RUN ] tls.msg_more
[ OK ] tls.msg_more
[ RUN ] tls.sendmsg_single
[ OK ] tls.sendmsg_single
[ RUN ] tls.sendmsg_large
[ OK ] tls.sendmsg_large
[ RUN ] tls.sendmsg_multiple
[ OK ] tls.sendmsg_multiple
[ RUN ] tls.sendmsg_multiple_stress
[ OK ] tls.sendmsg_multiple_stress
[ RUN ] tls.splice_from_pipe
[ OK ] tls.splice_from_pipe
[ RUN ] tls.splice_from_pipe2
[ OK ] tls.splice_from_pipe2
[ RUN ] tls.send_and_splice
[ OK ] tls.send_and_splice
[ RUN ] tls.splice_to_pipe
[ OK ] tls.splice_to_pipe
[ RUN ] tls.recvmsg_single
[ OK ] tls.recvmsg_single
[ RUN ] tls.recvmsg_single_max
[ OK ] tls.recvmsg_single_max
[ RUN ] tls.recvmsg_multiple
[ OK ] tls.recvmsg_multiple
[ RUN ] tls.single_send_multiple_recv
[ OK ] tls.single_send_multiple_recv
[ RUN ] tls.multiple_send_single_recv
[ OK ] tls.multiple_send_single_recv
[ RUN ] tls.recv_partial
[ OK ] tls.recv_partial
[ RUN ] tls.recv_nonblock
[ OK ] tls.recv_nonblock
[ RUN ] tls.recv_peek
[ OK ] tls.recv_peek
[ RUN ] tls.recv_peek_multiple
[ OK ] tls.recv_peek_multiple
[ RUN ] tls.recv_peek_multiple_records
[ OK ] tls.recv_peek_multiple_records
[ RUN ] tls.pollin
[ OK ] tls.pollin
[ RUN ] tls.poll_wait
[ OK ] tls.poll_wait
[ RUN ] tls.blocking
[ OK ] tls.blocking
[ RUN ] tls.nonblocking
[ OK ] tls.nonblocking
[ RUN ] tls.control_msg
[ OK ] tls.control_msg
[==========] 27 / 28 tests passed.
[ FAILED ]
not ok 1..6 selftests: net: tls [FAIL]
To reproduce:
# build kernel
cd linux
cp config-4.19.85-00024-g43876b1ce42be .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
1e76b8ad20 ("nsproxy: attach to namespaces via pidfds"): [ 35.793949] WARNING: CPU: 0 PID: 978 at kernel/locking/lockdep.c:183 hlock_class
by kernel test robot
Greetings,
0day kernel testing robot got the dmesg below, and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 1e76b8ad203ac0cd1fd01e1c5c824a74085117c8
Author: Christian Brauner <christian.brauner(a)ubuntu.com>
AuthorDate: Tue May 5 16:04:31 2020 +0200
Commit: Christian Brauner <christian.brauner(a)ubuntu.com>
CommitDate: Sat May 9 13:57:13 2020 +0200
nsproxy: attach to namespaces via pidfds
For quite a while we have been thinking about using pidfds to attach to
namespaces. This patchset has existed for about a year already but we've
wanted to wait to see how the general api would be received and adopted.
Now that more and more programs in userspace have started using pidfds
for process management it's time to send this one out.
This patch makes it possible to use pidfds to attach to the namespaces
of another process, i.e. they can be passed as the first argument to the
setns() syscall. When only a single namespace type is specified the
semantics are equivalent to passing an nsfd. That means
setns(nsfd, CLONE_NEWNET) equals setns(pidfd, CLONE_NEWNET). However,
when a pidfd is passed, multiple namespace flags can be specified in the
second setns() argument and setns() will attach the caller to all the
specified namespaces all at once or to none of them. Specifying 0 is not
valid together with a pidfd.
Here are just two obvious examples:
setns(pidfd, CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWNET);
setns(pidfd, CLONE_NEWUSER);
Allowing callers to attach subsets of namespaces also supports various
use-cases where callers setns() to a subset of namespaces to retain
privilege, perform an action, and then re-attach another subset of
namespaces.
If the need arises, as Eric suggested, we can extend this patchset to
assume even more context than just attaching all namespaces. His suggestion
specifically was about assuming the process' root directory when
setns(pidfd, 0) or setns(pidfd, SETNS_PIDFD) is specified. For now, just
keep it flexible in terms of supporting subsets of namespaces but let's
wait until we have users asking for even more context to be assumed. At
that point we can add an extension.
The obvious example where this is useful is a standard container
manager interacting with a running container: pushing and pulling files
or directories, injecting mounts, attaching/execing any kind of process,
managing network devices: all these operations require attaching to all
or at least multiple namespaces at the same time. Given that nowadays
most containers are spawned with all namespaces enabled we're currently
looking at at least 14 syscalls, 7 to open the /proc/<pid>/ns/<ns>
nsfds, another 7 to actually perform the namespace switch. With time
namespaces we're looking at about 16 syscalls.
(We could amortize the first 7 or 8 syscalls for opening the nsfds by
stashing them in each container's monitor process but that would mean
we need to send around those file descriptors through unix sockets
every time we want to interact with the container or keep on-disk
state. Even in scenarios where a caller wants to join a particular
namespace in a particular order callers still profit from batching
other namespaces. That mostly applies to the user namespace but
all container runtimes I found join the user namespace first no matter
if it privileges or deprivileges the container similar to how unshare
behaves.)
With pidfds this becomes a single syscall no matter how many namespaces
are supposed to be attached to.
A decently designed, large-scale container manager usually isn't the
parent of any of the containers it spawns so the containers don't die
when it crashes or needs to update or reinitialize. This means that
interacting with containers through pids is inherently racy for the
manager, especially on systems where the maximum pid number is not
significantly bumped. This is even more problematic since we often spawn
and manage thousands or tens of thousands of containers. Interacting with a
container through a pid thus can become risky quite quickly. Especially
since we allow for an administrator to enable advanced features such as
syscall interception where we're performing syscalls in lieu of the
container. In all of those cases we use pidfds if they are available and
we pass them around as stable references. Using them to setns() to the
target process' namespaces is as reliable as using nsfds. Either the
target process is already dead and we get ESRCH or we manage to attach
to its namespaces, but we can't accidentally attach to another process'
namespaces. So pidfds lend themselves to be used with this api.
The other main advantage is that with this change the pidfd becomes the
only relevant token for most container interactions and it's the only
token we need to create and send around.
Apart from significantly reducing the number of syscalls from double
digits to single digits, which is a decent reason post-spectre/meltdown,
this also allows switching to a set of namespaces atomically, i.e.
either attaching to all the specified namespaces succeeds or we fail. If
we fail we haven't changed a single namespace. There are currently three
namespaces that can fail (other than for ENOMEM which really is not
very interesting since we then have other problems anyway) for
non-trivial reasons, user, mount, and pid namespaces. We can fail to
attach to a pid namespace if it is not our current active pid namespace
or a descendant of it. We can fail to attach to a user namespace because
we are multi-threaded or because our current mount namespace shares
filesystem state with other tasks, or because we're trying to setns()
to the same user namespace, i.e. the target task has the same user
namespace as we do. We can fail to attach to a mount namespace because
it shares filesystem state with other tasks or because we fail to lookup
the new root for the new mount namespace. In most non-pathological
scenarios these issues can be somewhat mitigated. But there are cases where
we're half-attached to some namespace and failing to attach to another one.
I've talked about some of these problems during the hallway track (something
only the pre-COVID-19 generation will remember) of Plumbers in Los Angeles
in 2018(?). Even if all these issues could be avoided with super careful
userspace coding it would be nicer to have this done in-kernel. Pidfds seem
to lend themselves nicely for this.
The other neat thing about this is that setns() becomes an actual
counterpart to the namespace bits of unshare().
Signed-off-by: Christian Brauner <christian.brauner(a)ubuntu.com>
Reviewed-by: Serge Hallyn <serge(a)hallyn.com>
Cc: Eric W. Biederman <ebiederm(a)xmission.com>
Cc: Serge Hallyn <serge(a)hallyn.com>
Cc: Jann Horn <jannh(a)google.com>
Cc: Michael Kerrisk <mtk.manpages(a)gmail.com>
Cc: Aleksa Sarai <cyphar(a)cyphar.com>
Link: https://lore.kernel.org/r/20200505140432.181565-3-christian.brauner@ubunt...
f2a8d52e0a nsproxy: add struct nsset
1e76b8ad20 nsproxy: attach to namespaces via pidfds
+--------------------------------------------------+------------+------------+
| | f2a8d52e0a | 1e76b8ad20 |
+--------------------------------------------------+------------+------------+
| boot_successes | 69 | 7 |
| boot_failures | 2 | 21 |
| invoked_oom-killer:gfp_mask=0x | 1 | |
| Mem-Info | 1 | |
| kernel_BUG_at_mm/vmalloc.c | 1 | |
| invalid_opcode:#[##] | 1 | |
| EIP:alloc_vmap_area | 1 | |
| Kernel_panic-not_syncing:Fatal_exception | 1 | 20 |
| INFO:trying_to_register_non-static_key | 0 | 8 |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 11 |
| Oops:#[##] | 0 | 20 |
| EIP:cap_capable | 0 | 4 |
| BUG:unable_to_handle_page_fault_for_address | 0 | 9 |
| EIP:__lock_acquire | 0 | 9 |
| WARNING:at_kernel/locking/lockdep.c:#hlock_class | 0 | 9 |
| EIP:hlock_class | 0 | 9 |
| EIP:__ptrace_may_access | 0 | 7 |
+--------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[child3:976] fchown16 (95) returned ENOSYS, marking as inactive.
[child1:978] setgid16 (46) returned ENOSYS, marking as inactive.
[child1:978] mbind (274) returned ENOSYS, marking as inactive.
[ 35.793447] ------------[ cut here ]------------
[ 35.793900] DEBUG_LOCKS_WARN_ON(1)
[ 35.793949] WARNING: CPU: 0 PID: 978 at kernel/locking/lockdep.c:183 hlock_class+0xb5/0xe3
[ 35.795152] Modules linked in:
[ 35.795468] CPU: 0 PID: 978 Comm: trinity-c1 Not tainted 5.7.0-rc4-00002-g1e76b8ad203ac #1
[ 35.796204] EIP: hlock_class+0xb5/0xe3
[ 35.796699] Code: 05 90 8c df d1 01 68 df 01 57 d1 68 18 3f 4d d1 83 15 94 8c df d1 00 e8 51 18 f8 ff 83 05 98 8c df d1 01 83 15 9c 8c df d1 00 <0f> 0b 83 05 a0 8c df d1 01 58 83 15 a4 8c df d1 00 5a 31 c0 eb 16
[ 35.799290] EAX: 00000016 EBX: 00000000 ECX: 00000265 EDX: 00000000
[ 35.799860] ESI: eab2a000 EDI: eab2a560 EBP: eab2de70 ESP: eab2de68
[ 35.800424] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068 EFLAGS: 00010046
[ 35.801028] CR0: 80050033 CR2: b780be00 CR3: 2ab4e000 CR4: 001406b0
[ 35.801594] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 35.802165] DR6: fffe0ff0 DR7: 00000400
[ 35.802558] Call Trace:
[ 35.802854] __lock_acquire+0x2d8/0x1cd2
[ 35.803256] ? perf_trace_buf_update+0x67/0x67
[ 35.803688] ? ftrace_ops_test+0x80/0xc0
[ 35.804079] ? rcu_is_watching+0x2c/0xb1
[ 35.804457] ? rcu_get_gp_kthreads_prio+0x1d/0x1d
[ 35.804975] lock_acquire+0x3a6/0x3f5
[ 35.805365] ? ptrace_may_access+0x2b/0x72
[ 35.805795] ? preempt_count_add+0x2f/0x3f
[ 35.806202] ? preempt_latency_start+0x34/0x6a
[ 35.806636] ? trace_preempt_off+0x23b/0x251
[ 35.807100] _raw_spin_lock+0x70/0xa9
[ 35.807486] ? ptrace_may_access+0x2b/0x72
[ 35.807892] ptrace_may_access+0x2b/0x72
[ 35.808301] validate_nsset+0x123/0x6c1
[ 35.808829] __ia32_sys_setns+0x34e/0x516
[ 35.809409] do_fast_syscall_32+0x178/0x254
[ 35.809859] entry_SYSENTER_32+0xaa/0x101
[ 35.810374] EIP: 0xb7f5bc25
[ 35.810660] Code: da 8b 5d 08 e8 18 00 00 00 89 d3 5b 5e 5f 5d c3 8b 04 24 c3 8b 14 24 c3 8b 1c 24 c3 8b 34 24 c3 90 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 8d 76 00 58 b8 77 00 00 00 cd 80 90 8d 76
[ 35.812278] EAX: ffffffda EBX: 0000001d ECX: 08000000 EDX: 000000cd
[ 35.812841] ESI: 0020402a EDI: 00000ff0 EBP: 9e3b7c48 ESP: bfa0fcfc
[ 35.813404] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000296
[ 35.814164] irq event stamp: 30841
[ 35.814499] hardirqs last enabled at (30841): [<d09f47fa>] _raw_spin_unlock_irqrestore+0xa4/0x16d
[ 35.815481] hardirqs last disabled at (30840): [<d09f40cd>] _raw_spin_lock_irqsave+0x42/0xd3
[ 35.816780] softirqs last enabled at (30720): [<d09fa4ac>] __do_softirq+0x7bc/0x876
[ 35.817905] softirqs last disabled at (30713): [<cf817c6a>] call_on_stack+0x20/0x34
[ 35.819088] ---[ end trace aa36361ac734d817 ]---
[ 35.819643] BUG: kernel NULL pointer dereference, address: 0000005c
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 5f458e572071a54841b93f41e25fbe8ded82df79 v5.6 --
git bisect good 6e7f2eacf09811d092c1b41263108ac7fe0d089d # 02:08 G 16 0 0 0 Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
git bisect good 2b8766c343dbac93d3a429693050a4394f9ad529 # 02:28 G 17 0 0 0 Merge remote-tracking branch 'tip/auto-latest'
git bisect bad 68846c160b980c05150d975273d33b395d692e8f # 02:42 B 0 2 18 0 arm: add loglvl to unwind_backtrace()
git bisect good a4ad8bbc013f4483f46d5069b6d5ba372772c7a3 # 03:28 G 17 0 1 3 Merge remote-tracking branch 'ntb/ntb-next'
git bisect bad 36d17448fd690a49fa85f312c07b3a1758ddc910 # 03:43 B 0 1 18 0 kernel/sysctl: support setting sysctl parameters from kernel command line
git bisect bad 76cc1b4ec14974293c5064af2196a7dedbfec357 # 03:54 B 0 1 18 0 Merge remote-tracking branch 'fpga/for-next'
git bisect good 0089c822956fde8a7c1cdeab491414b6aeedf720 # 04:17 G 16 0 0 0 Merge remote-tracking branch 'hyperv/hyperv-next'
git bisect bad 9226d1137bbe8061dd18a71e4e77c7e78d3379b3 # 04:28 B 3 1 3 4 Merge remote-tracking branch 'pidfd/for-next'
git bisect good 376498c8ffb226c5a98b9939c60e7a34d7afb884 # 05:02 G 20 0 3 5 Merge remote-tracking branch 'kgdb/kgdb/for-next'
git bisect bad 1e76b8ad203ac0cd1fd01e1c5c824a74085117c8 # 05:17 B 3 1 3 3 nsproxy: attach to namespaces via pidfds
git bisect good f2a8d52e0a4db968c346c4332630a71cba377567 # 05:49 G 20 0 0 0 nsproxy: add struct nsset
# first bad commit: [1e76b8ad203ac0cd1fd01e1c5c824a74085117c8] nsproxy: attach to namespaces via pidfds
git bisect good f2a8d52e0a4db968c346c4332630a71cba377567 # 06:11 G 60 0 2 2 nsproxy: add struct nsset
# extra tests with debug options
git bisect bad 1e76b8ad203ac0cd1fd01e1c5c824a74085117c8 # 06:24 B 0 1 17 0 nsproxy: attach to namespaces via pidfds
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org
379706875d ("x86/mm: simplify init_trampoline() and .."): BUG: kernel reboot-without-warning in boot stage
by kernel test robot
Greetings,
0day kernel testing robot got the dmesg below, and the first bad commit is
https://github.com/0day-ci/linux/commits/Mike-Rapoport/mm-consolidate-def...
commit 379706875d28bf7fc90b067355981de242b7bff1
Author: Mike Rapoport <rppt(a)linux.ibm.com>
AuthorDate: Tue May 12 21:44:17 2020 +0300
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Wed May 13 02:55:59 2020 +0800
x86/mm: simplify init_trampoline() and surrounding logic
There are three cases for the trampoline initialization:
* 32-bit does nothing
* 64-bit with kaslr disabled simply copies a PGD entry from the direct map
to the trampoline PGD
* 64-bit with kaslr enabled maps the real mode trampoline at PUD level
These cases are currently differentiated by a bunch of ifdefs inside
asm/include/pgtable.h, and the 64-bit-with-kaslr case uses the
pgd_index() helper.
Replacing the ifdefs with a static function in arch/x86/mm/init.c gives
clearer code and allows moving pgd_index() to the generic implementation
in include/linux/pgtable.h.
Signed-off-by: Mike Rapoport <rppt(a)linux.ibm.com>
7cc33e59db m68k/mm: move {cache,nocahe}_page() definitions close to their user
379706875d x86/mm: simplify init_trampoline() and surrounding logic
6498f3f0af mm: consolidate pgd_index() and pgd_offset{_k}() definitions
+----------------------------------------------------------------------------+------------+------------+------------+
| | 7cc33e59db | 379706875d | 6498f3f0af |
+----------------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 0 | 0 | 0 |
| boot_failures | 50 | 17 | 21 |
| Kernel_panic-not_syncing:VFS:Unable_to_mount_root_fs_on_unknown-block(#,#) | 2 | | |
| WARNING:suspicious_RCU_usage | 6 | | |
| kernel/kprobes.c:#RCU-list_traversed_in_non-reader_section | 6 | | |
| BUG:unable_to_handle_page_fault_for_address | 27 | | |
| Oops:#[##] | 40 | | |
| RIP:test_bit | 1 | | |
| Kernel_panic-not_syncing:Fatal_exception | 40 | | |
| RIP:__lock_acquire | 35 | | |
| RIP:perf_trace_lock_acquire | 1 | | |
| WARNING:at_kernel/locking/lockdep.c:#hlock_class | 4 | | |
| RIP:hlock_class | 4 | | |
| BUG:kernel_NULL_pointer_dereference,address | 13 | | |
| INFO:trying_to_register_non-static_key | 5 | | |
| INFO:rcu_preempt_detected_stalls_on_CPUs/tasks | 2 | | |
| BUG:kernel_hang_in_test_stage | 6 | | |
| RIP:cap_capable | 2 | | |
| RIP:__ptrace_may_access | 1 | | |
| BUG:kernel_reboot-without-warning_in_boot_stage | 0 | 17 | 21 |
+----------------------------------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 0.243102] smpboot: CPU0: Intel Core Processor (Haswell) (family: 0x6, model: 0x3c, stepping: 0x4)
[ 0.244628] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[ 0.244993] rcu: Hierarchical SRCU implementation.
[ 0.245617] smp: Bringing up secondary CPUs ...
[ 0.247388] x86: Booting SMP configuration:
BUG: kernel reboot-without-warning in boot stage
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start a0d389c0f696161685908dc1c5d4237ccdccb282 e098d7762d602be640c53565ceca342f81e55ad2 --
git bisect good cb805e4e4348a4aac0ecde5fa87da273447cbb72 # 20:18 G 10 0 10 27 Merge 'linux-review/ChenTao/rtl8187-Remove-unused-variable-rtl8225z2_tx_power_ofdm/20200513-131943' into devel-catchup-202005131336
git bisect bad 244b6c0835cc28183e262962a44ae6df21ccf52b # 20:18 B 0 17 33 0 Merge 'linux-review/Swathi-Dhanavanthri/drm-i915-ehl-Restrict-w-a-1607087056-for-EHL-JSL/20200513-020736' into devel-catchup-202005131336
git bisect bad 66b11ec04cd7d22a559f2f3192906cf15fcd8fba # 20:19 B 0 17 33 0 Merge 'linux-review/Mike-Rapoport/mm-consolidate-definitions-of-page-table-accessors/20200513-025551' into devel-catchup-202005131336
git bisect good f087d1943e1d245646f7eed5a4f6ad9534ed36a5 # 20:52 G 10 0 9 25 Merge 'mptcp/export' into devel-catchup-202005131336
git bisect good 027be301b42b8cbaaf2b71e239cef5c6e1e79f0c # 20:57 G 10 0 10 27 Merge 'linux-review/UPDATE-20200513-005237/Chris-Wilson/drm-i915-gt-Transfer-old-virtual-breadcrumbs-to-irq_worker/20200512-212708' into devel-catchup-202005131336
git bisect good 8fe1fbbc73d66eed82f5f14e5b4ff43910bcffdd # 21:36 G 11 0 10 16 Merge 'arm64/for-kernelci' into devel-catchup-202005131336
git bisect good 7cc33e59db8254165ff394d25a9c093b5d582f22 # 22:16 G 11 0 11 17 m68k/mm: move {cache,nocahe}_page() definitions close to their user
git bisect bad 2ac3e0aed9fbf7cb31dc854a7d5e7f6291931373 # 22:29 B 0 11 33 6 mm: consolidate pte_index() and pte_offset_*() definitions
git bisect bad ab03a2ed4f1f4b56c0f4e2db30823310952ff981 # 22:41 B 0 11 33 6 mm: pgtable: add shortcuts for accessing kernel PMD and PTE
git bisect bad 379706875d28bf7fc90b067355981de242b7bff1 # 22:53 B 0 11 33 6 x86/mm: simplify init_trampoline() and surrounding logic
# first bad commit: [379706875d28bf7fc90b067355981de242b7bff1] x86/mm: simplify init_trampoline() and surrounding logic
git bisect good 7cc33e59db8254165ff394d25a9c093b5d582f22 # 23:36 G 30 0 30 47 m68k/mm: move {cache,nocahe}_page() definitions close to their user
# extra tests with debug options
git bisect bad 379706875d28bf7fc90b067355981de242b7bff1 # 23:49 B 0 11 33 6 x86/mm: simplify init_trampoline() and surrounding logic
# extra tests on head commit of linux-review/Mike-Rapoport/mm-consolidate-definitions-of-page-table-accessors/20200513-025551
git bisect bad 6498f3f0afabb9f53ee5df2fca35692c9e047b6c # 01:26 B 0 11 37 10 mm: consolidate pgd_index() and pgd_offset{_k}() definitions
# bad: [6498f3f0afabb9f53ee5df2fca35692c9e047b6c] mm: consolidate pgd_index() and pgd_offset{_k}() definitions
# extra tests on revert first bad commit
# 119: [a37851d44be411ea5ceedfb5c86a9049b3187e7b] Revert "x86/mm: simplify init_trampoline() and surrounding logic"
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org
494344d6e6 ("vxlan: support for nexthop notifiers"): [ 6.235374] BUG: spinlock trylock failure on UP on CPU#0, swapper/1
by kernel test robot
Greetings,
0day kernel testing robot got the dmesg below, and the first bad commit is
https://github.com/0day-ci/linux/commits/Roopa-Prabhu/Support-for-fdb-ECM...
commit 494344d6e6e348b1866d75998efe873203a5f2e6
Author: Roopa Prabhu <roopa(a)cumulusnetworks.com>
AuthorDate: Wed May 20 11:23:10 2020 -0700
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Thu May 21 05:42:59 2020 +0800
vxlan: support for nexthop notifiers
The vxlan driver registers nexthop add/del notifiers to clean up
fdb entries pointing to such nexthops.
Signed-off-by: Roopa Prabhu <roopa(a)cumulusnetworks.com>
6751c12243 nexthop: add support for notifiers
494344d6e6 vxlan: support for nexthop notifiers
fccc09644f selftests: net: add fdb nexthop tests
+-------------------------------------------+------------+------------+------------+
| | 6751c12243 | 494344d6e6 | fccc09644f |
+-------------------------------------------+------------+------------+------------+
| boot_successes | 0 | 0 | 0 |
| boot_failures | 44 | 11 | 11 |
| BUG:kernel_timeout_in_test_stage | 41 | | |
| BUG:kernel_timeout_in_torture_test_stage | 3 | | |
| BUG:spinlock_trylock_failure_on_UP_on_CPU | 0 | 11 | 11 |
+-------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 6.231483] ok 33 - list_test_list_for_each_safe
[ 6.232192] ok 34 - list_test_list_for_each_prev_safe
[ 6.232748] ok 35 - list_test_list_for_each_entry
[ 6.233412] ok 36 - list_test_list_for_each_entry_reverse
[ 6.233859] ok 3 - list-kunit-test
[ 6.235374] BUG: spinlock trylock failure on UP on CPU#0, swapper/1
[ 6.235925] lock: init_net+0x204/0xe20, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
[ 6.236600] CPU: 0 PID: 1 Comm: swapper Not tainted 5.7.0-rc5 #1
[ 6.237085] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 6.237770] Call Trace:
[ 6.237984] dump_stack+0x32/0x4a
[ 6.238261] spin_dump+0xae/0x130
[ 6.238540] do_raw_spin_trylock+0xa1/0xc0
[ 6.238881] _raw_spin_lock_irqsave+0xbc/0x160
[ 6.239247] ? atomic_notifier_chain_register+0x24/0x70
[ 6.239675] atomic_notifier_chain_register+0x24/0x70
[ 6.240089] register_nexthop_notifier+0x20/0x30
[ 6.240469] vxlan_init_net+0x202/0x350
[ 6.240790] ops_init+0x65/0x390
[ 6.241064] register_pernet_operations+0x13a/0x2d0
[ 6.241464] ? virtio_net_driver_init+0x10b/0x10b
[ 6.241861] register_pernet_subsys+0x3d/0x70
[ 6.242217] vxlan_init_module+0x3e/0xf4
[ 6.242542] do_one_initcall+0xef/0x6a0
[ 6.242892] ? rcu_read_lock_sched_held+0x5d/0xd0
[ 6.243279] ? trace_initcall_level+0x132/0x1cc
[ 6.243657] do_initcalls+0x16c/0x1ad
[ 6.243964] kernel_init_freeable+0x1af/0x264
[ 6.244322] ? rest_init+0x4a0/0x4a0
[ 6.244617] kernel_init+0x1b/0x280
[ 6.244906] ret_from_fork+0x2e/0x40
[ 6.260656] debug: unmapping init [mem 0xc43c2000-0xc4566fff]
[ 6.266355] Write protecting kernel text and read-only data: 46568k
[ 6.266886] NX-protecting the kernel data: 23720k
[ 6.267832] Run /init as init process
[ 6.271961] with arguments:
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 1123c0a2ffa14f44554106ffc92488afdb0fa578 b9bbe6ed63b2b9f2c9ee5cbd0f2c946a2723f4ce --
git bisect bad a68db01313720f76c06c324c98f10de7af8fe564 # 16:10 B 0 1 17 0 Merge 'linux-review/Huang-Qijun/vlan-fix-the-bug-that-cannot-create-vlan4095/20200518-133055' into devel-hourly-2020052109
git bisect good 7f0cf9df0894951e2850ac25731c30772b7622ce # 17:10 G 11 0 10 10 Merge 'linux-review/Jernej-Skrabec/media-cedrus-Add-support-for-additional-output-formats/20200521-051138' into devel-hourly-2020052109
git bisect good 557f799ce541fc3798225dded1060ecf6a51cd98 # 18:09 G 11 0 11 11 Merge 'linux-review/Jiri-Olsa/perf-stat-Fix-duration_time-value-for-higher-intervals/20200518-212252' into devel-hourly-2020052109
git bisect bad 200723f8cb3e8a4caabf7b771dd901751c2cb972 # 18:25 B 0 3 19 0 Merge 'linux-review/Leonardo-Bras/powerpc-crash-Use-NMI-context-for-printk-when-starting-to-crash/20200519-184308' into devel-hourly-2020052109
git bisect good f722da7b4b3d0024055d3ea8e1abba2a08086b0a # 19:33 G 10 0 10 10 Merge 'linux-review/Emil-Velikov/fbdev-annotate-rivafb-nvidiafb-as-obsolete/20200518-060957' into devel-hourly-2020052109
git bisect bad 4a9a268f023a4eb6afa9e29113c330232b9a9977 # 19:55 B 0 1 17 0 Merge 'linux-review/UPDATE-20200518-212138/Fredrik-Strupe/arm-ptrace-Fix-mask-for-thumb-breakpoint-hook/20200414-045104' into devel-hourly-2020052109
git bisect bad 595b1f68f964c3864bf9762d3bb453fd12397703 # 20:15 B 0 1 17 0 Merge 'linux-review/Roopa-Prabhu/Support-for-fdb-ECMP-nexthop-groups/20200521-054241' into devel-hourly-2020052109
git bisect good 08c1033517c3daf5424362a6768b52d78be91bfb # 21:09 G 10 0 10 10 Merge 'linux-review/David-Ahern/nexthop-Fix-attribute-checking-for-groups/20200518-012917' into devel-hourly-2020052109
git bisect good 6751c122439854abfa90ad54b60c9550434d71a3 # 22:08 G 10 0 10 10 nexthop: add support for notifiers
git bisect bad fccc09644f7667e01bbfe9f31773264d1c28b394 # 22:26 B 0 11 27 0 selftests: net: add fdb nexthop tests
git bisect bad 494344d6e6e348b1866d75998efe873203a5f2e6 # 22:34 B 0 1 17 0 vxlan: support for nexthop notifiers
# first bad commit: [494344d6e6e348b1866d75998efe873203a5f2e6] vxlan: support for nexthop notifiers
git bisect good 6751c122439854abfa90ad54b60c9550434d71a3 # 23:27 G 30 0 30 41 nexthop: add support for notifiers
# extra tests with debug options
git bisect bad 494344d6e6e348b1866d75998efe873203a5f2e6 # 23:40 B 0 3 19 0 vxlan: support for nexthop notifiers
# extra tests on head commit of linux-review/Roopa-Prabhu/Support-for-fdb-ECMP-nexthop-groups/20200521-054241
git bisect bad fccc09644f7667e01bbfe9f31773264d1c28b394 # 23:52 B 0 11 27 0 selftests: net: add fdb nexthop tests
# bad: [fccc09644f7667e01bbfe9f31773264d1c28b394] selftests: net: add fdb nexthop tests
# extra tests on revert first bad commit
git bisect good a25378bb8a65fb83c85eeab8dd814b9e5f34ee0d # 01:04 G 10 0 10 10 Revert "vxlan: support for nexthop notifiers"
# good: [a25378bb8a65fb83c85eeab8dd814b9e5f34ee0d] Revert "vxlan: support for nexthop notifiers"
f410328e93 ("Default enable RCU list lockdep debugging with .."): WARNING: suspicious RCU usage
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev.2020.05.14b
commit f410328e93834f1d9c7e2f707ac05fd9e6417c27
Author: Madhuparna Bhowmik <madhuparnabhowmik10(a)gmail.com>
AuthorDate: Fri Feb 28 14:54:51 2020 +0530
Commit: Paul E. McKenney <paulmck(a)kernel.org>
CommitDate: Thu May 14 10:11:39 2020 -0700
Default enable RCU list lockdep debugging with PROVE_RCU
This patch default enables CONFIG_PROVE_RCU_LIST option with
CONFIG_PROVE_RCU for RCU list lockdep debugging.
With this change, RCU list lockdep debugging will be default
enabled in CONFIG_PROVE_RCU=y kernels.
Most of the RCU users (in core kernel/, drivers/, and net/
subsystem) have already been modified to include lockdep
expressions hence RCU list debugging can be enabled by
default.
However, there is still a chance of encountering
false-positive lockdep splats because not everything has been
converted: RCU list primitives may be used outside an RCU
read-side critical section but under the protection of a lock.
A few false positives are acceptable as long as real bugs are
identified, since this patch only affects debugging kernels.
Co-developed-by: Amol Grover <frextrite(a)gmail.com>
Signed-off-by: Amol Grover <frextrite(a)gmail.com>
Signed-off-by: Madhuparna Bhowmik <madhuparnabhowmik10(a)gmail.com>
Acked-by: Joel Fernandes (Google) <joel(a)joelfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck(a)kernel.org>
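For anyone wanting to reproduce these splats on their own tree once this commit is applied, a debug config fragment along the following lines should be enough (option names are taken from the commit and lib/Kconfig.debug; treat the exact dependency set as an assumption):

```
# PROVE_LOCKING selects PROVE_RCU:
CONFIG_DEBUG_KERNEL=y
CONFIG_PROVE_LOCKING=y
# With this commit, PROVE_RCU_LIST defaults to y whenever PROVE_RCU=y;
# it still depends on RCU_EXPERT being set:
CONFIG_RCU_EXPERT=y
CONFIG_PROVE_RCU_LIST=y
```

With that fragment, any list_for_each_entry_rcu() caller that cannot prove it holds either rcu_read_lock() or the lock named in its lockdep condition will produce a "RCU-list traversed in non-reader section" warning like the smack ones above.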
df2e4807c8 torture: Add --allcpus argument to the kvm.sh script
f410328e93 Default enable RCU list lockdep debugging with PROVE_RCU
c1628f71b9 ubsan, kcsan: Don't combine sanitizer with kcov on clang
+-------------------------------------------------------------------------+------------+------------+------------+
| | df2e4807c8 | f410328e93 | c1628f71b9 |
+-------------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 2 | 0 | 0 |
| boot_failures | 37 | 17 | 23 |
| Assertion_failed | 33 | 13 | 13 |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode= | 35 | 15 | 21 |
| BUG:kernel_hang_in_test_stage | 2 | 1 | |
| WARNING:suspicious_RCU_usage | 0 | 17 | 23 |
| security/smack/smack_lsm.c:#RCU-list_traversed_in_non-reader_section | 0 | 17 | 23 |
| security/smack/smack_access.c:#RCU-list_traversed_in_non-reader_section | 0 | 17 | 23 |
+-------------------------------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 0.347631] ..... CPU clock speed is 2893.0023 MHz.
[ 0.347631] ..... host bus clock speed is 1000.0027 MHz.
[ 0.347677] smpboot: CPU0: Intel Common KVM processor (family: 0xf, model: 0x6, stepping: 0x1)
[ 0.348602]
[ 0.348635] =============================
[ 0.348962] WARNING: suspicious RCU usage
[ 0.349295] 5.7.0-rc2-00236-gf410328e93834 #1 Not tainted
[ 0.349634] -----------------------------
[ 0.349962] security/smack/smack_lsm.c:354 RCU-list traversed in non-reader section!!
[ 0.350634]
[ 0.350634] other info that might help us debug this:
[ 0.350634]
[ 0.351278]
[ 0.351278] rcu_scheduler_active = 1, debug_locks = 1
[ 0.351675] no locks held by kthreadd/2.
[ 0.351997]
[ 0.351997] stack backtrace:
[ 0.352636] CPU: 0 PID: 2 Comm: kthreadd Not tainted 5.7.0-rc2-00236-gf410328e93834 #1
[ 0.353267] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 0.353631] Call Trace:
[ 0.353631] ? dump_stack+0x6b/0x9b
[ 0.353631] ? smack_cred_prepare+0x1c8/0x1e0
[ 0.353631] ? smack_sk_alloc_security+0xa0/0xa0
[ 0.353631] ? security_prepare_creds+0x3f/0x90
[ 0.353631] ? prepare_creds+0x13c/0x260
[ 0.353631] ? copy_creds+0x2c/0x1d0
[ 0.353631] ? copy_process+0x366/0x1760
[ 0.353631] ? lock_acquire+0x72/0x370
[ 0.353631] ? _do_fork+0x71/0x680
[ 0.353631] ? lock_acquire+0x72/0x370
[ 0.353631] ? kthread_park+0xa0/0xa0
[ 0.353631] ? kthreadd+0x50/0x140
[ 0.353631] ? kernel_thread+0x4e/0x60
[ 0.353631] ? kthread_park+0xa0/0xa0
[ 0.353631] ? kthreadd+0xf4/0x140
[ 0.353631] ? kthread_create_on_cpu+0x80/0x80
[ 0.353631] ? ret_from_fork+0x2e/0x38
[ 0.354096] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[ 0.354910] rcu: Hierarchical SRCU implementation.
[ 0.356850] NMI watchdog: Perf NMI watchdog permanently disabled
[ 0.357864] smp: Bringing up secondary CPUs ...
[ 0.359261] x86: Booting SMP configuration:
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 144ce0ceed88071d2cffb323089fcb710f2e4350 0e698dfa282211e414076f9dc7e83c1c288314fd --
git bisect good 0f5233d56ed5563c895eb957131074e71b821ffd # 23:25 G 11 0 11 11 Merge 'regulator/for-next' into devel-catchup-202005150615
git bisect good a9f1f0805c1a2d8b1a53afbae38c3516b5194179 # 23:31 G 11 0 11 17 Merge 's390/fixes' into devel-catchup-202005150615
git bisect bad 61973078db675fd3ee2e26fc0bd9708b9d79b2df # 23:41 B 0 8 29 5 Merge 'rcu/dev.2020.05.14b' into devel-catchup-202005150615
git bisect good 74e078cc85670493864c48208e04229d4fac48fa # 23:50 G 11 0 11 17 Merge 'linux-review/Grygorii-Strashko/soc-ti-add-k3-platforms-chipid-module-driver/20200514-153644' into devel-catchup-202005150615
git bisect good 4b0c4c07546dfca786eaa447d9f9f95b25c7b399 # 23:58 G 10 0 10 22 Merge 'jpirko-mlxsw/combined_queue' into devel-catchup-202005150615
git bisect good 8b458949f2e7896ee126899994a23735a9be2d2d # 00:07 G 11 0 11 17 Merge 'block/for-5.8/block' into devel-catchup-202005150615
git bisect bad e317828f7d9693a7cd5a718e947bcf3824f6146a # 00:16 B 0 7 25 2 rcu: Expedited grace-period sleeps to idle priority
git bisect bad 9ddfee9e62685e77b23c25a0fff7eaef56092130 # 00:23 B 0 11 32 5 rcu: Grace-period-kthread related sleeps to idle priority
git bisect bad f410328e93834f1d9c7e2f707ac05fd9e6417c27 # 00:33 B 0 1 21 4 Default enable RCU list lockdep debugging with PROVE_RCU
# first bad commit: [f410328e93834f1d9c7e2f707ac05fd9e6417c27] Default enable RCU list lockdep debugging with PROVE_RCU
git bisect good df2e4807c87c32ff01e0fe25b0fdf1352ab986bd # 00:53 G 33 0 33 35 torture: Add --allcpus argument to the kvm.sh script
# extra tests with debug options
git bisect bad f410328e93834f1d9c7e2f707ac05fd9e6417c27 # 01:01 B 0 10 26 0 Default enable RCU list lockdep debugging with PROVE_RCU
# extra tests on head commit of rcu/dev.2020.05.14b
git bisect bad c1628f71b9ac81a2349f02cdebaaefe35a3fe4ba # 01:22 B 0 8 36 12 ubsan, kcsan: Don't combine sanitizer with kcov on clang
# bad: [c1628f71b9ac81a2349f02cdebaaefe35a3fe4ba] ubsan, kcsan: Don't combine sanitizer with kcov on clang
# extra tests on revert first bad commit
git bisect good 183564b7c798c2d106ad64972d202433a536ccca # 01:44 G 11 0 11 11 Revert "Default enable RCU list lockdep debugging with PROVE_RCU"
# good: [183564b7c798c2d106ad64972d202433a536ccca] Revert "Default enable RCU list lockdep debugging with PROVE_RCU"
[PCI] e5f40f65d8: BUG:kernel_reboot-without-warning_in_early-boot_stage,last_printk:early_console_in_setup_code
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-5):
commit: e5f40f65d862637d5cec86196473b6effb972c14 ("PCI: Add ACPI StorageD3Enable _DSD support")
git://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git revert-e9d96b32f05a316030f07bcd6086f9ba2096efbe-e5f40f65d862637d5cec86196473b6effb972c14
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+-----------------------------------------------------------------------------------------------+------------+------------+
| | bb7ba30bf9 | e5f40f65d8 |
+-----------------------------------------------------------------------------------------------+------------+------------+
| boot_successes | 30 | 2 |
| boot_failures | 5 | 13 |
| BUG:kernel_timeout_in_torture_test_stage | 5 | |
| BUG:kernel_reboot-without-warning_in_early-boot_stage,last_printk:early_console_in_setup_code | 0 | 13 |
+-----------------------------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
early console in setup code
BUG: kernel reboot-without-warning in early-boot stage, last printk: early console in setup code
Linux version 5.7.0-rc6-00065-ge5f40f65d8626 #1
Command line: ip=::::vm-snb-54::dhcp root=/dev/ram0 user=lkp job=/lkp/jobs/scheduled/vm-snb-54/boot-1-openwrt-i386-generic-20190428.cgz-e5f40f65d862637d5cec86196473b6effb972c14-20200522-5190-eunjwz-2.yaml ARCH=x86_64 kconfig=x86_64-randconfig-a013-20200520 branch=internal-devel/devel-hourly-2020052109 commit=e5f40f65d862637d5cec86196473b6effb972c14 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-a013-20200520/gcc-5/e5f40f65d862637d5cec86196473b6effb972c14/vmlinuz-5.7.0-rc6-00065-ge5f40f65d8626 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-snb/openwrt-i386-generic-20190428.cgz/x86_64-randconfig-a013-20200520/gcc-5/e5f40f65d862637d5cec86196473b6effb972c14/3 LKP_SERVER=inn selinux=0 debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 net.ifnames=0 printk.devkmsg=on panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 drbd.minor_count=8 systemd.log_level=err ignore_loglevel console=tty0 earlyprintk=ttyS0,115200 console=ttyS0,115200 vga=normal rw rcuperf.shutdown=0 watchdog_thresh=60
Elapsed time: 60
To reproduce:
# build kernel
cd linux
cp config-5.7.0-rc6-00065-ge5f40f65d8626 .config
make HOSTCC=gcc-5 CC=gcc-5 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
6fe12cdbcf ("i2c: core: support bus regulator controlling in .."): BUG: sleeping function called from invalid context at kernel/locking/mutex.c:935
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 6fe12cdbcfe35ad4726a619a9546822d34fc934c
Author: Bibby Hsieh <bibby.hsieh(a)mediatek.com>
AuthorDate: Tue May 19 15:27:29 2020 +0800
Commit: Wolfram Sang <wsa(a)kernel.org>
CommitDate: Wed May 20 15:25:55 2020 +0200
i2c: core: support bus regulator controlling in adapter
Although on most platforms the i2c bus power is
always on, some platforms disable i2c bus power
in order to meet low-power requirements.
Get and enable a bulk regulator in the i2c adapter device.
Signed-off-by: Bibby Hsieh <bibby.hsieh(a)mediatek.com>
Reviewed-by: Tomasz Figa <tfiga(a)chromium.org>
Signed-off-by: Wolfram Sang <wsa(a)kernel.org>
6aab46bc52 dt-binding: i2c: add bus-supply property
6fe12cdbcf i2c: core: support bus regulator controlling in adapter
+-----------------------------------------------------------------------------+------------+------------+
| | 6aab46bc52 | 6fe12cdbcf |
+-----------------------------------------------------------------------------+------------+------------+
| boot_successes | 33 | 0 |
| boot_failures | 2 | 16 |
| WARNING:at_lib/kobject_uevent.c:#add_uevent_var | 1 | |
| EIP:add_uevent_var | 1 | |
| BUG:kernel_hang_in_boot_stage | 1 | |
| BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/mutex.c | 0 | 16 |
+-----------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 105.730065] ### dt-test ### EXPECT / : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/i2c-test-bus/test-unittest13/status
[ 105.730071] ### dt-test ### EXPECT \ : i2c i2c-1: Added multiplexed i2c bus 3
[ 105.747587] i2c i2c-3: supply bus not found, using dummy regulator
[ 105.754529] i2c i2c-1: Added multiplexed i2c bus 3
[ 105.756092] ### dt-test ### EXPECT / : i2c i2c-1: Added multiplexed i2c bus 3
[ 105.773831] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:935
[ 105.777468] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 189, name: kworker/0:2
[ 105.778995] 2 locks held by kworker/0:2/189:
[ 105.779775] #0: f48249a4 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x3ac/0x1050
[ 105.781743] #1: f4145f34 ((work_completion)(&sdp->work)){+.+.}-{0:0}, at: process_one_work+0x3ac/0x1050
[ 105.783485] Preemption disabled at:
[ 105.783509] [<cdbb653d>] srcu_invoke_callbacks+0x14d/0x280
[ 105.785397] CPU: 0 PID: 189 Comm: kworker/0:2 Not tainted 5.7.0-rc1-00058-g6fe12cdbcfe35 #1
[ 105.786961] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 105.788608] Workqueue: rcu_gp srcu_invoke_callbacks
[ 105.789777] Call Trace:
[ 105.790308] dump_stack+0x32/0x4a
[ 105.790914] ___might_sleep+0x3dc/0x4b0
[ 105.791659] ? srcu_invoke_callbacks+0x14d/0x280
[ 105.792574] ? srcu_invoke_callbacks+0x14d/0x280
[ 105.793677] ? srcu_invoke_callbacks+0x14d/0x280
[ 105.794480] __might_sleep+0x10e/0x210
[ 105.795186] __mutex_lock+0x34/0x12f0
[ 105.795838] ? mark_held_locks+0xb3/0x100
[ 105.796561] ? _raw_spin_unlock_irqrestore+0x13b/0x190
[ 105.797694] ? _raw_spin_unlock_irqrestore+0x13b/0x190
[ 105.798653] ? lockdep_hardirqs_on+0x1bb/0x420
[ 105.799462] mutex_lock_nested+0x41/0x60
[ 105.800155] ? regulator_put+0x25/0x70
[ 105.800839] regulator_put+0x25/0x70
[ 105.801641] devm_regulator_release+0x1d/0x30
[ 105.802415] release_nodes+0x326/0x500
[ 105.803126] devres_release_all+0xb9/0x130
[ 105.803873] device_release+0x25/0x1b0
[ 105.804548] ? srcu_invoke_callbacks+0xfa/0x280
[ 105.805591] kobject_put+0x33e/0x800
[ 105.806244] ? refcount_dec_not_one+0x107/0x2f0
[ 105.807050] ? srcu_invoke_callbacks+0x14d/0x280
[ 105.807918] __device_link_free_srcu+0x79/0xe0
[ 105.808685] srcu_invoke_callbacks+0x160/0x280
[ 105.809729] process_one_work+0x594/0x1050
[ 105.810535] ? process_one_work+0x3ac/0x1050
[ 105.811314] ? _raw_spin_lock_irq+0x56/0xe0
[ 105.812113] worker_thread+0x4a3/0xc00
[ 105.812823] kthread+0x31f/0x3b0
[ 105.813537] ? rescuer_thread+0x720/0x720
[ 105.814227] ? kthread_bind+0x30/0x30
[ 105.814920] ret_from_fork+0x19/0x24
[ 105.866663] ### dt-test ### EXPECT \ : GPIO line <<int>> (line-B-input) hogged as input
[ 105.868134] ### dt-test ### EXPECT \ : GPIO line <<int>> (line-A-input) hogged as input
[ 105.870372] GPIO line 509 (line-B-input) hogged as input
[ 105.872524] GPIO line 503 (line-A-input) hogged as input
[ 105.874446] ### dt-test ### EXPECT / : GPIO line <<int>> (line-A-input) hogged as input
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start d0edf98c01ebe0790295cf888a2976d2d04377b1 b9bbe6ed63b2b9f2c9ee5cbd0f2c946a2723f4ce --
git bisect bad 4fde3f7c56e653e1d62102a4a77534f4bfec9689 # 07:04 B 0 17 33 0 Merge 'linux-review/Anson-Huang/ARM-dts-imx-Make-tempmon-node-as-child-of-anatop-node/20200520-230948' into devel-hourly-2020052107
git bisect bad e8344c6f4e6b7a7318fa46dd6fd5418f3e36d010 # 07:04 B 0 17 33 0 Merge 'linux-review/Gabriel-Krisman-Bertazi/iscsi-Fix-deadlock-on-recovery-path-during-GFP_IO-reclaim/20200520-220836' into devel-hourly-2020052107
git bisect good 6332b5a106b02909b3ac673832d1e36a901d81fa # 07:04 G 11 0 0 0 Merge 'stericsson/ux500-dts' into devel-hourly-2020052107
git bisect bad 05c599fa08c02ea883075d05cc544f5e133dad15 # 07:04 B 0 11 27 0 Merge 'ras/edac-misc' into devel-hourly-2020052107
git bisect good 25ef63dabd6b9567bd3d7870c5e8595b6b87c678 # 07:05 G 11 0 0 0 Merge 'linux-review/Maulik-Shah/arm64-dts-qcom-sc7180-Correct-the-pdc-interrupt-ranges/20200518-202257' into devel-hourly-2020052107
git bisect good 0db2ae8dfea89b3541eb5ffb6e0aed13d921b277 # 07:05 G 11 0 0 0 Merge 'linux-review/Geert-Uytterhoeven/ARM-dts-r9a06g032-Correct-GIC-compatible-value-order/20200519-204708' into devel-hourly-2020052107
git bisect good aeee233deb1c06fcf7630402c1f15208aa3d723a # 07:05 G 11 0 0 0 Merge 'linux-review/Lubomir-Rintel/media-marvell-ccic-Add-support-for-runtime-PM/20200521-053250' into devel-hourly-2020052107
git bisect bad 6872daa8abed0e8bd8447d56cfca80c4d29d6243 # 07:05 B 0 10 27 1 Merge 'linux-review/Dinghao-Liu/i2c-imx-lpi2c-fix-runtime-pm-imbalance-on-error/20200521-033738' into devel-hourly-2020052107
git bisect good f23da43a58d09dc6ea58837a45374e15de36537e # 07:05 G 10 0 0 1 Merge branch 'i2c/for-current' into i2c/for-next
git bisect good fadb47fca1f132c123395bca26c69007e816988f # 07:05 G 11 0 0 0 Merge branch 'i2c/for-5.8' into i2c/for-next
git bisect bad 4f118a7e4686062bd4df4a37e24c22cd71495b5f # 07:05 B 0 11 27 0 Merge tag 'for-5.8-i2c' of git://git.kernel.org/pub/scm/linux/kernel/git/tegra/linux into i2c/for-5.8
git bisect good c73178b93754edd8449dccd3faf05baafd4d3f0e # 07:05 G 11 0 0 0 i2c: tegra: Add support for the VI I2C on Tegra210
git bisect good 6aab46bc52a8f579879d491c9d8062e03caa5c61 # 07:05 G 30 0 0 3 dt-binding: i2c: add bus-supply property
git bisect bad f89c326dcaa0cb8c3af7764e75eeed4e3f3c879a # 07:05 B 0 15 31 0 Merge branch 'i2c/for-current-fixed' into i2c/for-5.8
git bisect bad 6fe12cdbcfe35ad4726a619a9546822d34fc934c # 07:05 B 0 15 45 0 i2c: core: support bus regulator controlling in adapter
# first bad commit: [6fe12cdbcfe35ad4726a619a9546822d34fc934c] i2c: core: support bus regulator controlling in adapter
git bisect good 6aab46bc52a8f579879d491c9d8062e03caa5c61 # 07:06 G 30 0 0 3 dt-binding: i2c: add bus-supply property
# extra tests with debug options
git bisect bad 6fe12cdbcfe35ad4726a619a9546822d34fc934c # 07:06 B 0 14 30 0 i2c: core: support bus regulator controlling in adapter
# extra tests on revert first bad commit
git bisect good 3788bb973ab542fcf623106a1cef5d8b1e237f7c # 07:56 G 10 0 0 0 Revert "i2c: core: support bus regulator controlling in adapter"
# good: [3788bb973ab542fcf623106a1cef5d8b1e237f7c] Revert "i2c: core: support bus regulator controlling in adapter"
[vxlan] dcf6d31edc: BUG:spinlock_bad_magic_on_CPU
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-7):
commit: dcf6d31edcdfcc2f6a71a4588ee6916ba8d98abe ("[PATCH net-next v2 4/5] vxlan: support for nexthop notifiers")
url: https://github.com/0day-ci/linux/commits/Roopa-Prabhu/Support-for-fdb-ECM...
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 22.242861] BUG: spinlock bad magic on CPU#0, swapper/0/1
[ 22.243939] lock: init_net+0x380/0x1c40, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
[ 22.245630] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.7.0-rc5-01820-gdcf6d31edcdfc #1
[ 22.247289] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 22.248922] Call Trace:
[ 22.249548] dump_stack+0x8f/0xcb
[ 22.250296] do_raw_spin_lock+0x71/0xc0
[ 22.251109] _raw_spin_lock_irqsave+0x4e/0x60
[ 22.251996] ? atomic_notifier_chain_register+0x14/0x70
[ 22.253022] atomic_notifier_chain_register+0x14/0x70
[ 22.254042] ops_init+0x40/0x170
[ 22.254661] ? rdinit_setup+0x2b/0x2b
[ 22.255384] register_pernet_operations+0x11b/0x240
[ 22.256288] ? veth_init+0x11/0x11
[ 22.257004] register_pernet_subsys+0x24/0x40
[ 22.257935] vxlan_init_module+0x23/0x87
[ 22.258759] do_one_initcall+0x5d/0x320
[ 22.259535] ? rdinit_setup+0x2b/0x2b
[ 22.260285] ? rcu_read_lock_sched_held+0x52/0x80
[ 22.261177] kernel_init_freeable+0x260/0x2da
[ 22.262086] ? rest_init+0x250/0x250
[ 22.262846] kernel_init+0xa/0x110
[ 22.263564] ret_from_fork+0x3a/0x50
[ 22.269764] 8021q: adding VLAN 0 to HW filter on device eth0
[ 22.271101] IP-Config: Failed to open gretap0
[ 22.271862] IP-Config: Failed to open erspan0
[ 22.284305] Sending DHCP requests .
[ 24.278845] e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 24.283720] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 24.780278] ., OK
[ 24.781519] IP-Config: Got DHCP answer from 10.0.2.2, my address is 10.0.2.15
[ 24.782786] IP-Config: Complete:
[ 24.783513] device=eth0, hwaddr=52:54:00:12:34:56, ipaddr=10.0.2.15, mask=255.255.255.0, gw=10.0.2.2
[ 24.785342] host=vm-snb-ssd-68, domain=, nis-domain=(none)
[ 24.786511] bootserver=10.0.2.2, rootserver=10.0.2.2, rootpath=
[ 24.786513] nameserver0=10.0.2.3
[ 24.811488] Freeing unused decrypted memory: 2040K
[ 24.813873] Freeing unused kernel image (initmem) memory: 2492K
[ 24.818291] Write protecting the kernel read-only data: 28672k
[ 24.821352] Freeing unused kernel image (text/rodata gap) memory: 2040K
[ 24.823377] Freeing unused kernel image (rodata/data gap) memory: 1060K
[ 24.824693] rodata_test: all tests were successful
[ 24.825746] Run /init as init process
[ 24.826612] with arguments:
[ 24.827378] /init
[ 24.828001] with environment:
[ 24.828766] HOME=/
[ 24.829415] TERM=linux
[ 24.830060] user=lkp
[ 24.830743] job=/lkp/jobs/scheduled/vm-snb-ssd-68/boot-1-yocto-x86_64-minimal-20190520.cgz-dcf6d31edcdfcc2f6a71a4588ee6916ba8d98abe-20200521-9044-sbd554-2.yaml
[ 24.833493] ARCH=x86_64
[ 24.834158] kconfig=x86_64-rhel-7.6-kselftests
[ 24.835146] branch=linux-devel/devel-catchup-202005210110
[ 24.836298] commit=dcf6d31edcdfcc2f6a71a4588ee6916ba8d98abe
[ 24.837463] BOOT_IMAGE=/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-7/dcf6d31edcdfcc2f6a71a4588ee6916ba8d98abe/vmlinuz-5.7.0-rc5-01820-gdcf6d31edcdfc
[ 24.839809] max_uptime=600
[ 24.840536] RESULT_ROOT=/result/boot/1/vm-snb-ssd/yocto-x86_64-minimal-20190520.cgz/x86_64-rhel-7.6-kselftests/gcc-7/dcf6d31edcdfcc2f6a71a4588ee6916ba8d98abe/3
[ 24.843303] LKP_SERVER=inn
[ 24.844014] prompt_ramdisk=0
[ 24.844775] vga=normal
Starting udev
[ 24.930760] udevd[202]: starting version 3.2.7
[ 24.932154] random: udevd: uninitialized urandom read (16 bytes read)
[ 24.933434] random: udevd: uninitialized urandom read (16 bytes read)
[ 24.934660] random: udevd: uninitialized urandom read (16 bytes read)
[ 24.937351] udevd[202]: specified group 'kvm' unknown
[ 24.940415] udevd[203]: starting eudev-3.2.7
[ 24.966274] udevd[203]: specified group 'kvm' unknown
[ 25.028617] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
[ 25.082190] input: PC Speaker as /devices/platform/pcspkr/input/input5
[ 25.085091] Floppy drive(s): fd0 is 2.88M AMI BIOS
[ 25.100857] FDC 0 is a S82078B
[ 25.112734] scsi host0: Virtio SCSI HBA
[ 25.114776] scsi 0:0:1:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 25.116769] scsi 0:0:1:10: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 25.121143] scsi 0:0:1:9: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 25.123826] parport_pc 00:04: reported by Plug and Play ACPI
[ 25.125347] scsi 0:0:1:8: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 25.127387] scsi 0:0:1:7: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 25.134463] scsi 0:0:1:6: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 25.136315] scsi 0:0:1:5: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 25.138160] scsi 0:0:1:4: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 25.139900] scsi 0:0:1:3: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 25.141513] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
[ 25.142995] scsi 0:0:1:2: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 25.144867] scsi 0:0:1:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 25.175988] libata version 3.00 loaded.
[ 25.183494] ata_piix 0000:00:01.1: version 2.13
[ 25.194549] random: fast init done
[ 25.197091] bochs-drm 0000:00:02.0: vgaarb: deactivate vga console
[ 25.204962] Console: switching to colour dummy device 80x25
[ 25.206691] [drm] Found bochs VGA, ID 0xb0c0.
[ 25.207289] [drm] Framebuffer size 16384 kB @ 0xfd000000, mmio @ 0xfebf0000.
[ 25.208765] scsi host1: ata_piix
[ 25.210258] [TTM] Zone kernel: Available graphics memory: 4065162 KiB
[ 25.212273] scsi host2: ata_piix
[ 25.212865] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc080 irq 14
[ 25.213868] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc088 irq 15
[ 25.214810] [TTM] Zone dma32: Available graphics memory: 2097152 KiB
[ 25.215581] [TTM] Initializing pool allocator
[ 25.216166] [TTM] Initializing DMA pool allocator
To reproduce:
# build kernel
cd linux
cp config-5.7.0-rc5-01820-gdcf6d31edcdfc .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[net] a6211caa63: netperf.Throughput_total_tps 160.7% improvement
by kernel test robot
Greeting,
FYI, we noticed a 160.7% improvement of netperf.Throughput_total_tps due to commit:
commit: a6211caa634da39d861a47437ffcda8b38ef421b ("net: revert "net: get rid of an signed integer overflow in ip_idents_reserve()"")
https://git.kernel.org/cgit/linux/kernel/git/davem/net.git master
in testcase: netperf
on test machine: 104 threads Skylake with 192G memory
with following parameters:
ip: ipv4
runtime: 300s
nr_threads: 200%
cluster: cs-localhost
test: UDP_RR
cpufreq_governor: performance
ucode: 0x2000065
test-description: Netperf is a benchmark that can be used to measure various aspects of networking performance.
test-url: http://www.netperf.org/netperf/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase/ucode:
cs-localhost/gcc-7/performance/ipv4/x86_64-rhel-7.6/200%/debian-x86_64-20191114.cgz/300s/lkp-skl-fpga01/UDP_RR/netperf/0x2000065
commit:
61d0301e6c ("dt-bindings: net: dsa: b53: Add missing size and address cells to example")
a6211caa63 ("net: revert "net: get rid of an signed integer overflow in ip_idents_reserve()"")
61d0301e6c05db55 a6211caa634da39d861a47437ff
---------------- ---------------------------
%stddev %change %stddev
\ | \
2382389 ± 3% +160.7% 6210158 netperf.Throughput_total_tps
11453 ± 3% +160.7% 29856 netperf.Throughput_tps
2.834e+08 ± 26% +94.4% 5.509e+08 ± 8% netperf.time.involuntary_context_switches
14146 -6.7% 13198 netperf.time.system_time
1458 +63.3% 2382 netperf.time.user_time
4.302e+08 ± 19% +203.9% 1.308e+09 ± 3% netperf.time.voluntary_context_switches
7.147e+08 ± 3% +160.7% 1.863e+09 netperf.workload
28604 ± 30% -70.3% 8507 ±170% numa-numastat.node1.other_node
765.00 ± 4% +13.3% 867.00 ± 4% slabinfo.mnt_cache.active_objs
765.00 ± 4% +13.3% 867.00 ± 4% slabinfo.mnt_cache.num_objs
13.15 ± 2% +9.1 22.29 mpstat.cpu.all.soft%
76.18 -15.1 61.06 mpstat.cpu.all.sys%
9.10 +5.9 14.97 mpstat.cpu.all.usr%
35582343 ± 3% -24.5% 26879340 ± 10% cpuidle.C1.time
6354413 ± 3% -18.3% 5193270 ± 10% cpuidle.C1.usage
6640796 ± 11% +29.1% 8574021 ± 3% cpuidle.POLL.time
2047691 ± 14% +44.1% 2949986 ± 2% cpuidle.POLL.usage
88.25 -6.8% 82.25 vmstat.cpu.sy
8.25 ± 5% +69.7% 14.00 vmstat.cpu.us
4677547 ± 3% +159.8% 12150848 vmstat.system.cs
213476 +1.5% 216585 vmstat.system.in
16116 ± 17% -49.6% 8120 ± 25% numa-meminfo.node0.Inactive
15901 ± 17% -49.8% 7986 ± 26% numa-meminfo.node0.Inactive(anon)
19329 ± 11% -25.0% 14497 ± 14% numa-meminfo.node0.Mapped
17899 ± 14% -48.3% 9256 ± 31% numa-meminfo.node0.Shmem
14756 ± 19% +47.4% 21749 ± 9% numa-meminfo.node1.Inactive
14595 ± 19% +47.2% 21486 ± 9% numa-meminfo.node1.Inactive(anon)
3975 ± 17% -49.8% 1996 ± 26% numa-vmstat.node0.nr_inactive_anon
4864 ± 11% -24.6% 3669 ± 14% numa-vmstat.node0.nr_mapped
4474 ± 14% -48.3% 2314 ± 31% numa-vmstat.node0.nr_shmem
3975 ± 17% -49.8% 1996 ± 26% numa-vmstat.node0.nr_zone_inactive_anon
3668 ± 18% +45.4% 5334 ± 10% numa-vmstat.node1.nr_inactive_anon
3668 ± 18% +45.4% 5334 ± 10% numa-vmstat.node1.nr_zone_inactive_anon
7597 -3.0% 7370 proc-vmstat.nr_inactive_anon
8899 -2.8% 8651 proc-vmstat.nr_mapped
26511 ± 3% -3.9% 25486 ± 2% proc-vmstat.nr_shmem
7597 -3.0% 7370 proc-vmstat.nr_zone_inactive_anon
781674 +1.5% 793716 proc-vmstat.numa_hit
747985 +1.6% 760013 proc-vmstat.numa_local
37078 ± 4% -17.0% 30783 ± 7% proc-vmstat.pgactivate
851230 +1.8% 866712 proc-vmstat.pgfault
3.21 ± 9% -29.9% 2.25 ± 12% sched_debug.cfs_rq:/.load_avg.min
-1551214 -150.7% 786316 ±223% sched_debug.cfs_rq:/.spread0.avg
584.71 ± 7% -10.6% 522.75 ± 4% sched_debug.cfs_rq:/.util_avg.min
135.54 ± 3% +8.1% 146.50 ± 4% sched_debug.cfs_rq:/.util_est_enqueued.stddev
711376 ± 41% +41.7% 1008145 ± 6% sched_debug.cpu.avg_idle.max
4410 ± 4% -16.7% 3674 ± 7% sched_debug.cpu.avg_idle.min
6783672 ± 3% +161.8% 17758928 sched_debug.cpu.nr_switches.avg
8197064 ± 4% +130.4% 18889627 sched_debug.cpu.nr_switches.max
4787733 ± 10% +236.1% 16093322 ± 2% sched_debug.cpu.nr_switches.min
992865 ± 18% -48.2% 514302 ± 17% sched_debug.cpu.nr_switches.stddev
70.67 ± 8% -26.8% 51.75 ± 17% sched_debug.cpu.nr_uninterruptible.max
1.114e+10 ± 3% +171.2% 3.021e+10 perf-stat.i.branch-instructions
1.66 -0.1 1.60 perf-stat.i.branch-miss-rate%
1.834e+08 ± 3% +160.7% 4.782e+08 perf-stat.i.branch-misses
41518884 ± 19% +119.3% 91068934 ± 21% perf-stat.i.cache-misses
3.629e+08 +189.7% 1.051e+09 ± 4% perf-stat.i.cache-references
4716783 ± 3% +160.3% 12278919 perf-stat.i.context-switches
4.99 ± 3% -64.7% 1.76 perf-stat.i.cpi
2.855e+11 -4.3% 2.733e+11 perf-stat.i.cpu-cycles
7245 ± 21% -53.6% 3361 ± 28% perf-stat.i.cycles-between-cache-misses
92817683 ± 4% +201.7% 2.8e+08 ± 7% perf-stat.i.dTLB-load-misses
1.665e+10 ± 3% +172.1% 4.53e+10 perf-stat.i.dTLB-loads
9731355 ± 7% +202.8% 29466191 ± 4% perf-stat.i.dTLB-store-misses
9.977e+09 ± 3% +172.2% 2.715e+10 perf-stat.i.dTLB-stores
53.61 ± 2% +4.4 58.02 ± 2% perf-stat.i.iTLB-load-miss-rate%
48591600 ± 5% +189.8% 1.408e+08 ± 4% perf-stat.i.iTLB-load-misses
41279422 ± 3% +140.2% 99161913 perf-stat.i.iTLB-loads
5.709e+10 ± 3% +172.7% 1.557e+11 perf-stat.i.instructions
0.20 ± 2% +178.6% 0.57 perf-stat.i.ipc
2.75 -4.5% 2.63 perf-stat.i.metric.GHz
0.57 ± 39% -75.5% 0.14 ± 30% perf-stat.i.metric.K/sec
368.55 ± 3% +171.3% 999.80 perf-stat.i.metric.M/sec
2707 +1.8% 2755 perf-stat.i.minor-faults
5928215 ± 27% +184.2% 16848721 ± 23% perf-stat.i.node-load-misses
101013 ± 26% +85.7% 187548 ± 14% perf-stat.i.node-loads
2780627 ± 8% +96.4% 5460264 ± 8% perf-stat.i.node-store-misses
2707 +1.8% 2755 perf-stat.i.page-faults
1.65 -0.1 1.58 perf-stat.overall.branch-miss-rate%
5.01 ± 3% -64.9% 1.76 perf-stat.overall.cpi
7162 ± 21% -55.8% 3165 ± 24% perf-stat.overall.cycles-between-cache-misses
54.05 ± 2% +4.6 58.65 ± 2% perf-stat.overall.iTLB-load-miss-rate%
0.20 ± 3% +184.9% 0.57 perf-stat.overall.ipc
24334 +4.6% 25454 perf-stat.overall.path-length
1.11e+10 ± 3% +171.1% 3.011e+10 perf-stat.ps.branch-instructions
1.828e+08 ± 3% +160.6% 4.764e+08 perf-stat.ps.branch-misses
41392364 ± 19% +119.2% 90737530 ± 21% perf-stat.ps.cache-misses
3.618e+08 +189.6% 1.048e+09 ± 4% perf-stat.ps.cache-references
4701274 ± 3% +160.2% 12234892 perf-stat.ps.context-switches
2.845e+11 -4.3% 2.723e+11 perf-stat.ps.cpu-cycles
92531227 ± 4% +201.5% 2.79e+08 ± 7% perf-stat.ps.dTLB-load-misses
1.659e+10 ± 3% +172.0% 4.513e+10 perf-stat.ps.dTLB-loads
9700889 ± 7% +202.7% 29361270 ± 4% perf-stat.ps.dTLB-store-misses
9.944e+09 ± 3% +172.1% 2.705e+10 perf-stat.ps.dTLB-stores
48437762 ± 5% +189.7% 1.403e+08 ± 4% perf-stat.ps.iTLB-load-misses
41133588 ± 3% +140.2% 98803267 perf-stat.ps.iTLB-loads
5.69e+10 ± 3% +172.6% 1.551e+11 perf-stat.ps.instructions
5909971 ± 27% +184.0% 16786827 ± 23% perf-stat.ps.node-load-misses
101876 ± 25% +83.8% 187272 ± 14% perf-stat.ps.node-loads
2771558 ± 8% +96.3% 5440535 ± 8% perf-stat.ps.node-store-misses
1.739e+13 ± 3% +172.7% 4.742e+13 perf-stat.total.instructions
4608 ± 18% +186.0% 13177 ± 62% interrupts.CPU0.RES:Rescheduling_interrupts
4349 ± 98% +215.2% 13708 ± 32% interrupts.CPU1.RES:Rescheduling_interrupts
3614 ± 31% +101.4% 7279 ± 39% interrupts.CPU10.RES:Rescheduling_interrupts
4465 ± 66% +209.0% 13796 ± 42% interrupts.CPU101.RES:Rescheduling_interrupts
6524 ± 45% +201.9% 19696 ± 41% interrupts.CPU102.RES:Rescheduling_interrupts
5961 ± 31% +290.9% 23304 ± 60% interrupts.CPU103.RES:Rescheduling_interrupts
3330 ± 44% +346.9% 14882 ± 51% interrupts.CPU15.RES:Rescheduling_interrupts
2516 ± 59% +1044.5% 28798 ±120% interrupts.CPU18.RES:Rescheduling_interrupts
3744 ± 42% +364.1% 17377 ± 49% interrupts.CPU2.RES:Rescheduling_interrupts
4907 ± 25% +206.7% 15051 ± 11% interrupts.CPU21.RES:Rescheduling_interrupts
3520 ± 55% +411.1% 17994 ± 52% interrupts.CPU23.RES:Rescheduling_interrupts
2886 ± 47% +327.9% 12351 ± 43% interrupts.CPU27.RES:Rescheduling_interrupts
2588 ± 81% +399.6% 12930 ± 49% interrupts.CPU29.RES:Rescheduling_interrupts
3549 ± 5% +386.1% 17255 ± 97% interrupts.CPU3.RES:Rescheduling_interrupts
2984 ± 44% +286.1% 11523 ± 80% interrupts.CPU30.RES:Rescheduling_interrupts
3237 ± 25% +443.6% 17599 ± 71% interrupts.CPU31.RES:Rescheduling_interrupts
4551 ± 75% +168.3% 12211 ± 61% interrupts.CPU33.RES:Rescheduling_interrupts
3952 ± 83% +413.6% 20300 ± 45% interrupts.CPU35.RES:Rescheduling_interrupts
2983 ± 86% +632.2% 21846 ± 73% interrupts.CPU36.RES:Rescheduling_interrupts
4917 ± 58% +292.2% 19282 ± 81% interrupts.CPU40.RES:Rescheduling_interrupts
2589 ± 70% +614.5% 18499 ± 92% interrupts.CPU44.RES:Rescheduling_interrupts
4009 ±108% +302.0% 16116 ± 58% interrupts.CPU45.RES:Rescheduling_interrupts
5099 ± 29% +125.3% 11486 ± 41% interrupts.CPU46.RES:Rescheduling_interrupts
4820 ± 39% +176.2% 13312 ± 55% interrupts.CPU47.RES:Rescheduling_interrupts
6377 ± 90% +201.0% 19197 ± 54% interrupts.CPU5.RES:Rescheduling_interrupts
6547 ± 45% +120.8% 14457 ± 32% interrupts.CPU50.RES:Rescheduling_interrupts
4026 ± 48% +330.7% 17342 ± 61% interrupts.CPU54.RES:Rescheduling_interrupts
4342 ± 32% +248.0% 15110 ± 7% interrupts.CPU58.RES:Rescheduling_interrupts
4162 ± 41% +141.8% 10067 ± 23% interrupts.CPU59.RES:Rescheduling_interrupts
4439 ± 30% +231.9% 14734 ± 31% interrupts.CPU6.RES:Rescheduling_interrupts
4160 ± 48% +226.3% 13575 ± 33% interrupts.CPU61.RES:Rescheduling_interrupts
3257 ± 34% +128.9% 7458 ± 23% interrupts.CPU62.RES:Rescheduling_interrupts
2763 ± 17% +264.6% 10077 ± 24% interrupts.CPU63.RES:Rescheduling_interrupts
4463 ± 70% +211.2% 13889 ± 73% interrupts.CPU64.RES:Rescheduling_interrupts
3378 ± 47% +568.4% 22585 ±129% interrupts.CPU66.RES:Rescheduling_interrupts
3033 ± 36% +285.2% 11684 ± 24% interrupts.CPU67.RES:Rescheduling_interrupts
4746 ± 43% +174.6% 13035 ± 20% interrupts.CPU68.RES:Rescheduling_interrupts
3248 ± 23% +222.9% 10491 ± 40% interrupts.CPU7.RES:Rescheduling_interrupts
2247 ± 34% +348.9% 10086 ± 37% interrupts.CPU70.RES:Rescheduling_interrupts
3328 ± 46% +466.6% 18858 ± 64% interrupts.CPU72.RES:Rescheduling_interrupts
4881 ± 27% +217.3% 15486 ± 25% interrupts.CPU73.RES:Rescheduling_interrupts
2956 ± 60% +384.6% 14326 ± 35% interrupts.CPU75.RES:Rescheduling_interrupts
3863 ± 61% +343.6% 17140 ± 67% interrupts.CPU79.RES:Rescheduling_interrupts
2850 ± 68% +367.1% 13316 ± 28% interrupts.CPU81.RES:Rescheduling_interrupts
3384 ± 40% +330.3% 14565 ± 52% interrupts.CPU83.RES:Rescheduling_interrupts
4285 ± 42% +123.6% 9583 ± 26% interrupts.CPU84.RES:Rescheduling_interrupts
5129 ± 64% +133.5% 11976 ± 53% interrupts.CPU85.RES:Rescheduling_interrupts
3496 ± 87% +517.6% 21592 ± 58% interrupts.CPU87.RES:Rescheduling_interrupts
2920 ±100% +523.9% 18218 ± 70% interrupts.CPU88.RES:Rescheduling_interrupts
5249 ± 56% +209.0% 16220 ± 56% interrupts.CPU92.RES:Rescheduling_interrupts
4098 ± 78% +351.4% 18499 ± 75% interrupts.CPU93.RES:Rescheduling_interrupts
2070 ± 46% +640.1% 15323 ± 66% interrupts.CPU96.RES:Rescheduling_interrupts
4232 ±109% +249.3% 14782 ± 58% interrupts.CPU97.RES:Rescheduling_interrupts
4886 ± 24% +119.6% 10731 ± 39% interrupts.CPU98.RES:Rescheduling_interrupts
466502 ± 32% +231.9% 1548094 ± 30% interrupts.RES:Rescheduling_interrupts
7611209 ± 14% +129.9% 17497928 ± 2% softirqs.CPU0.NET_RX
117644 ± 3% -10.1% 105808 softirqs.CPU0.TIMER
7754285 ± 6% +125.0% 17446076 softirqs.CPU1.NET_RX
7661563 ± 4% +131.7% 17752614 ± 2% softirqs.CPU10.NET_RX
6265597 ± 6% +196.8% 18597014 softirqs.CPU100.NET_RX
111790 -12.1% 98285 softirqs.CPU100.TIMER
6003920 ± 5% +206.5% 18403172 softirqs.CPU101.NET_RX
112387 -12.7% 98082 softirqs.CPU101.TIMER
5914242 ± 6% +214.9% 18622144 softirqs.CPU102.NET_RX
113186 -13.6% 97820 softirqs.CPU102.TIMER
6036155 ± 6% +203.2% 18302912 softirqs.CPU103.NET_RX
111454 -12.0% 98105 softirqs.CPU103.TIMER
7865285 ± 6% +125.9% 17764453 softirqs.CPU11.NET_RX
7785952 ± 4% +127.2% 17687980 softirqs.CPU12.NET_RX
7679186 ± 13% +131.1% 17745182 softirqs.CPU13.NET_RX
7489829 ± 7% +135.1% 17612130 softirqs.CPU14.NET_RX
7986983 ± 5% +115.9% 17240928 ± 4% softirqs.CPU15.NET_RX
7685814 ± 2% +131.1% 17758139 softirqs.CPU16.NET_RX
7763445 ± 2% +127.8% 17682866 softirqs.CPU17.NET_RX
7775718 ± 4% +126.1% 17579079 softirqs.CPU18.NET_RX
7692242 ± 4% +127.7% 17513046 softirqs.CPU19.NET_RX
7318854 ± 9% +140.9% 17631491 softirqs.CPU2.NET_RX
7432944 ± 16% +139.5% 17801499 softirqs.CPU20.NET_RX
7798319 ± 5% +129.9% 17927670 softirqs.CPU21.NET_RX
8016516 ± 3% +121.0% 17715449 softirqs.CPU22.NET_RX
7816079 ± 4% +119.7% 17168660 ± 4% softirqs.CPU23.NET_RX
7813370 ± 2% +127.0% 17735135 softirqs.CPU24.NET_RX
7868286 ± 4% +121.9% 17456994 softirqs.CPU25.NET_RX
5716747 ± 10% +211.2% 17791004 softirqs.CPU26.NET_RX
115007 ± 2% -12.8% 100280 softirqs.CPU26.TIMER
6070414 ± 4% +201.9% 18327734 softirqs.CPU27.NET_RX
114508 ± 2% -13.8% 98735 softirqs.CPU27.TIMER
5619088 ± 14% +225.1% 18267351 softirqs.CPU28.NET_RX
116359 ± 2% -13.6% 100482 softirqs.CPU28.TIMER
6213794 ± 5% +205.7% 18998636 ± 2% softirqs.CPU29.NET_RX
113060 -12.8% 98573 softirqs.CPU29.TIMER
8030911 ± 4% +120.5% 17710712 softirqs.CPU3.NET_RX
5963093 ± 6% +205.6% 18220250 softirqs.CPU30.NET_RX
115032 -11.6% 101730 ± 4% softirqs.CPU30.TIMER
5971100 ± 9% +201.7% 18015213 softirqs.CPU31.NET_RX
113379 -11.2% 100672 ± 2% softirqs.CPU31.TIMER
6047995 ± 6% +197.5% 17992368 softirqs.CPU32.NET_RX
113267 -12.2% 99398 softirqs.CPU32.TIMER
6068931 ± 7% +203.2% 18400418 softirqs.CPU33.NET_RX
126026 ± 16% -21.7% 98700 softirqs.CPU33.TIMER
6144173 ± 8% +198.9% 18365776 softirqs.CPU34.NET_RX
113227 -12.9% 98564 softirqs.CPU34.TIMER
6345890 ± 6% +191.0% 18468925 softirqs.CPU35.NET_RX
112662 -12.1% 99063 softirqs.CPU35.TIMER
5791449 ± 4% +218.9% 18471601 softirqs.CPU36.NET_RX
113723 -13.0% 98943 softirqs.CPU36.TIMER
5811184 ± 8% +211.4% 18093802 ± 3% softirqs.CPU37.NET_RX
113198 -11.5% 100188 softirqs.CPU37.TIMER
5878689 ± 5% +214.0% 18457807 softirqs.CPU38.NET_RX
113697 -13.0% 98900 softirqs.CPU38.TIMER
6036575 ± 8% +202.9% 18281922 softirqs.CPU39.NET_RX
113087 -12.7% 98699 softirqs.CPU39.TIMER
7715041 ± 4% +129.6% 17712443 ± 2% softirqs.CPU4.NET_RX
5753853 ± 10% +220.0% 18412975 softirqs.CPU40.NET_RX
113355 ± 2% -13.1% 98467 softirqs.CPU40.TIMER
6192776 ± 3% +198.0% 18452898 softirqs.CPU41.NET_RX
112945 -12.9% 98428 softirqs.CPU41.TIMER
5803388 ± 5% +217.7% 18435506 softirqs.CPU42.NET_RX
5918098 ± 10% +208.9% 18283279 softirqs.CPU43.NET_RX
112855 -12.8% 98415 softirqs.CPU43.TIMER
6051506 ± 8% +201.3% 18230382 softirqs.CPU44.NET_RX
112687 -12.1% 99002 softirqs.CPU44.TIMER
5974980 ± 6% +206.4% 18307269 softirqs.CPU45.NET_RX
112919 -12.8% 98430 softirqs.CPU45.TIMER
5725365 ± 12% +220.8% 18365705 softirqs.CPU46.NET_RX
113460 -13.0% 98665 softirqs.CPU46.TIMER
6084344 ± 5% +202.7% 18417111 softirqs.CPU47.NET_RX
112834 -13.0% 98164 softirqs.CPU47.TIMER
6256784 ± 5% +198.2% 18654929 softirqs.CPU48.NET_RX
112693 ± 2% -12.4% 98713 softirqs.CPU48.TIMER
5994904 ± 7% +205.1% 18293336 softirqs.CPU49.NET_RX
112767 -12.6% 98562 softirqs.CPU49.TIMER
7728398 ± 5% +129.3% 17724544 softirqs.CPU5.NET_RX
6003663 ± 8% +207.4% 18454182 softirqs.CPU50.NET_RX
112681 -13.0% 98072 softirqs.CPU50.TIMER
6143237 ± 6% +200.3% 18445995 softirqs.CPU51.NET_RX
112722 -12.3% 98887 softirqs.CPU51.TIMER
7663998 ± 14% +127.7% 17447812 softirqs.CPU52.NET_RX
7743623 ± 5% +126.0% 17502781 softirqs.CPU53.NET_RX
7342228 ± 9% +140.9% 17688208 softirqs.CPU54.NET_RX
8032766 ± 3% +122.7% 17885003 softirqs.CPU55.NET_RX
7736344 ± 4% +128.7% 17693498 ± 3% softirqs.CPU56.NET_RX
7878109 ± 2% +121.7% 17468805 softirqs.CPU57.NET_RX
7822540 ± 5% +123.3% 17470237 softirqs.CPU58.NET_RX
7807645 ± 10% +123.7% 17462378 softirqs.CPU59.NET_RX
7836732 ± 4% +122.9% 17466230 softirqs.CPU6.NET_RX
7818018 ± 6% +125.8% 17653948 softirqs.CPU60.NET_RX
8147307 ± 4% +116.6% 17649112 softirqs.CPU61.NET_RX
7675584 ± 4% +133.8% 17944850 ± 2% softirqs.CPU62.NET_RX
7819166 ± 6% +126.2% 17683764 softirqs.CPU63.NET_RX
7758109 ± 4% +126.0% 17529854 softirqs.CPU64.NET_RX
7663761 ± 12% +130.9% 17697627 softirqs.CPU65.NET_RX
7524645 ± 7% +134.3% 17633068 softirqs.CPU66.NET_RX
8050561 ± 4% +121.7% 17851477 softirqs.CPU67.NET_RX
7680779 ± 2% +130.0% 17663190 softirqs.CPU68.NET_RX
7706030 ± 2% +128.0% 17569195 softirqs.CPU69.NET_RX
7854752 ± 10% +123.0% 17518515 softirqs.CPU7.NET_RX
7749293 ± 2% +128.2% 17686713 ± 2% softirqs.CPU70.NET_RX
7762287 ± 4% +125.9% 17536102 softirqs.CPU71.NET_RX
7430632 ± 16% +138.0% 17683431 softirqs.CPU72.NET_RX
7822935 ± 5% +127.2% 17774751 softirqs.CPU73.NET_RX
8097380 ± 4% +118.8% 17719244 softirqs.CPU74.NET_RX
7834736 ± 4% +128.0% 17859974 softirqs.CPU75.NET_RX
7834363 ± 3% +122.2% 17407472 ± 3% softirqs.CPU76.NET_RX
7882104 ± 4% +125.3% 17757705 softirqs.CPU77.NET_RX
5808530 ± 10% +205.1% 17724062 softirqs.CPU78.NET_RX
113054 -12.2% 99231 softirqs.CPU78.TIMER
5981769 ± 4% +206.8% 18352732 softirqs.CPU79.NET_RX
113644 ± 3% -13.3% 98539 softirqs.CPU79.TIMER
8021831 ± 7% +122.5% 17851420 softirqs.CPU8.NET_RX
5632680 ± 14% +224.6% 18282437 softirqs.CPU80.NET_RX
113396 ± 3% -12.4% 99333 ± 3% softirqs.CPU80.TIMER
6179103 ± 6% +192.7% 18085354 ± 4% softirqs.CPU81.NET_RX
112293 -10.1% 100988 ± 2% softirqs.CPU81.TIMER
5945340 ± 5% +205.6% 18171651 softirqs.CPU82.NET_RX
113059 -12.9% 98492 softirqs.CPU82.TIMER
5935515 ± 8% +203.7% 18025789 softirqs.CPU83.NET_RX
112751 -11.4% 99886 ± 2% softirqs.CPU83.TIMER
6033617 ± 6% +200.8% 18150908 softirqs.CPU84.NET_RX
112657 -12.5% 98631 softirqs.CPU84.TIMER
6082408 ± 6% +202.2% 18383125 softirqs.CPU85.NET_RX
112975 -13.1% 98127 softirqs.CPU85.TIMER
6314946 ± 6% +191.2% 18390473 softirqs.CPU86.NET_RX
111921 -12.3% 98111 softirqs.CPU86.TIMER
6376108 ± 6% +186.2% 18251399 softirqs.CPU87.NET_RX
111983 -12.2% 98357 softirqs.CPU87.TIMER
5802245 ± 5% +219.4% 18531405 softirqs.CPU88.NET_RX
114244 -13.5% 98785 softirqs.CPU88.TIMER
5839247 ± 9% +219.9% 18680117 softirqs.CPU89.NET_RX
113444 -13.1% 98639 ± 2% softirqs.CPU89.TIMER
8177186 ± 4% +116.6% 17708425 softirqs.CPU9.NET_RX
5901819 ± 5% +210.9% 18350770 softirqs.CPU90.NET_RX
112855 -13.2% 97988 softirqs.CPU90.TIMER
6034808 ± 8% +203.4% 18312456 softirqs.CPU91.NET_RX
112511 -12.7% 98172 softirqs.CPU91.TIMER
5745445 ± 9% +220.9% 18438900 softirqs.CPU92.NET_RX
113273 ± 2% -13.6% 97826 softirqs.CPU92.TIMER
6129480 ± 3% +201.8% 18501681 softirqs.CPU93.NET_RX
112494 -13.2% 97681 softirqs.CPU93.TIMER
5852642 ± 6% +214.9% 18428832 softirqs.CPU94.NET_RX
112812 -13.5% 97562 softirqs.CPU94.TIMER
5890746 ± 8% +211.0% 18321360 softirqs.CPU95.NET_RX
113647 -13.7% 98072 softirqs.CPU95.TIMER
6039907 ± 9% +202.8% 18289418 softirqs.CPU96.NET_RX
112247 -12.7% 97967 softirqs.CPU96.TIMER
5983287 ± 6% +205.3% 18269666 softirqs.CPU97.NET_RX
113011 ± 2% -13.1% 98260 softirqs.CPU97.TIMER
5694254 ± 12% +220.7% 18264029 softirqs.CPU98.NET_RX
114122 -14.1% 98038 softirqs.CPU98.TIMER
6067754 ± 4% +202.5% 18353600 softirqs.CPU99.NET_RX
112309 -12.8% 97877 softirqs.CPU99.TIMER
7.155e+08 ± 3% +161.4% 1.871e+09 softirqs.NET_RX
11540641 -8.8% 10528497 softirqs.TIMER
46.32 -41.2 5.16 ± 6% perf-profile.calltrace.cycles-pp.ip_idents_reserve.__ip_select_ident.__ip_make_skb.ip_make_skb.udp_sendmsg
46.62 -40.8 5.77 ± 6% perf-profile.calltrace.cycles-pp.__ip_select_ident.__ip_make_skb.ip_make_skb.udp_sendmsg.sock_sendmsg
46.99 -40.4 6.63 ± 5% perf-profile.calltrace.cycles-pp.__ip_make_skb.ip_make_skb.udp_sendmsg.sock_sendmsg.__sys_sendto
51.10 -37.1 14.04 ± 2% perf-profile.calltrace.cycles-pp.ip_make_skb.udp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto
74.29 -24.2 50.13 perf-profile.calltrace.cycles-pp.udp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto.do_syscall_64
74.72 -23.3 51.38 perf-profile.calltrace.cycles-pp.sock_sendmsg.__sys_sendto.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
75.42 -22.1 53.33 perf-profile.calltrace.cycles-pp.__sys_sendto.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
75.54 -21.9 53.66 perf-profile.calltrace.cycles-pp.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
95.86 -6.2 89.64 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
96.09 -5.9 90.19 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
3.87 -2.6 1.25 ± 2% perf-profile.calltrace.cycles-pp.sock_wfree.loopback_xmit.dev_hard_start_xmit.__dev_queue_xmit.ip_finish_output2
5.40 -1.8 3.60 perf-profile.calltrace.cycles-pp.loopback_xmit.dev_hard_start_xmit.__dev_queue_xmit.ip_finish_output2.ip_output
5.54 -1.7 3.84 perf-profile.calltrace.cycles-pp.dev_hard_start_xmit.__dev_queue_xmit.ip_finish_output2.ip_output.ip_send_skb
5.97 -0.8 5.15 perf-profile.calltrace.cycles-pp.__dev_queue_xmit.ip_finish_output2.ip_output.ip_send_skb.udp_send_skb
1.22 -0.7 0.53 ± 3% perf-profile.calltrace.cycles-pp.sock_def_write_space.sock_wfree.loopback_xmit.dev_hard_start_xmit.__dev_queue_xmit
0.59 ± 2% +0.2 0.82 ± 9% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__sched_text_start.schedule.exit_to_usermode_loop.do_syscall_64
0.53 +0.3 0.81 perf-profile.calltrace.cycles-pp.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
0.80 ± 5% +0.3 1.13 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_lock_bh.__skb_recv_udp.udp_recvmsg.inet_recvmsg.__sys_recvfrom
1.17 ± 3% +0.3 1.51 perf-profile.calltrace.cycles-pp.netif_rx_internal.netif_rx.loopback_xmit.dev_hard_start_xmit.__dev_queue_xmit
1.18 ± 3% +0.4 1.53 perf-profile.calltrace.cycles-pp.netif_rx.loopback_xmit.dev_hard_start_xmit.__dev_queue_xmit.ip_finish_output2
0.61 +0.4 1.00 perf-profile.calltrace.cycles-pp.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.__ip_append_data
0.00 +0.5 0.50 perf-profile.calltrace.cycles-pp.sockfd_lookup_light.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.5 0.53 perf-profile.calltrace.cycles-pp.sockfd_lookup_light.__sys_sendto.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.5 0.54 ± 2% perf-profile.calltrace.cycles-pp.__calc_delta.update_curr.reweight_entity.enqueue_task_fair.activate_task
0.00 +0.5 0.54 perf-profile.calltrace.cycles-pp.ktime_get_with_offset.netif_rx_internal.netif_rx.loopback_xmit.dev_hard_start_xmit
0.00 +0.5 0.55 perf-profile.calltrace.cycles-pp._copy_from_user.move_addr_to_kernel.__sys_sendto.__x64_sys_sendto.do_syscall_64
0.00 +0.6 0.56 ± 4% perf-profile.calltrace.cycles-pp.cpumask_next_wrap.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.siphash_3u32.__ip_select_ident.__ip_make_skb.ip_make_skb.udp_sendmsg
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.copyout._copy_to_iter.udp_recvmsg.inet_recvmsg.__sys_recvfrom
0.00 +0.6 0.59 perf-profile.calltrace.cycles-pp.__get_user_4.move_addr_to_user.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64
0.00 +0.6 0.59 ± 3% perf-profile.calltrace.cycles-pp.update_load_avg.dequeue_task_fair.__sched_text_start.schedule.schedule_timeout
0.81 ± 3% +0.6 1.40 ± 3% perf-profile.calltrace.cycles-pp.available_idle_cpu.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.00 +0.6 0.60 ± 2% perf-profile.calltrace.cycles-pp.aa_sk_perm.security_socket_recvmsg.sock_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
0.00 +0.6 0.60 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.00 +0.6 0.61 ± 4% perf-profile.calltrace.cycles-pp.update_curr.dequeue_entity.dequeue_task_fair.__sched_text_start.schedule
0.00 +0.6 0.62 ± 2% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.dequeue_task_fair.__sched_text_start.schedule
0.00 +0.6 0.63 perf-profile.calltrace.cycles-pp.native_write_msr
0.00 +0.7 0.66 ± 5% perf-profile.calltrace.cycles-pp.update_load_avg.dequeue_entity.dequeue_task_fair.__sched_text_start.schedule
0.00 +0.7 0.67 ± 4% perf-profile.calltrace.cycles-pp.validate_xmit_skb.__dev_queue_xmit.ip_finish_output2.ip_output.ip_send_skb
0.00 +0.7 0.68 perf-profile.calltrace.cycles-pp.__check_object_size.ip_generic_getfrag.__ip_append_data.ip_make_skb.udp_sendmsg
0.00 +0.7 0.69 ± 2% perf-profile.calltrace.cycles-pp.check_preempt_wakeup.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function
0.00 +0.7 0.70 perf-profile.calltrace.cycles-pp.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.__ip_append_data
0.00 +0.7 0.70 ± 2% perf-profile.calltrace.cycles-pp.security_socket_recvmsg.sock_recvmsg.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64
0.00 +0.7 0.71 perf-profile.calltrace.cycles-pp.move_addr_to_kernel.__sys_sendto.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.7 0.74 perf-profile.calltrace.cycles-pp.__netif_receive_skb_core.__netif_receive_skb_one_core.process_backlog.net_rx_action.__softirqentry_text_start
0.00 +0.7 0.74 ± 7% perf-profile.calltrace.cycles-pp.update_load_avg.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.00 +0.8 0.76 perf-profile.calltrace.cycles-pp.sock_recvmsg.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.8 0.77 ± 2% perf-profile.calltrace.cycles-pp.__switch_to_asm
0.81 ± 2% +0.8 1.60 ± 2% perf-profile.calltrace.cycles-pp.switch_fpu_return.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.8 0.80 perf-profile.calltrace.cycles-pp.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common
1.19 ± 3% +0.8 2.00 ± 2% perf-profile.calltrace.cycles-pp.ip_route_output_key_hash_rcu.ip_route_output_key_hash.ip_route_output_flow.udp_sendmsg.sock_sendmsg
0.00 +0.8 0.81 ± 2% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.58 ± 6% +0.8 1.40 ± 5% perf-profile.calltrace.cycles-pp.__udp4_lib_lookup.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver
0.00 +0.8 0.83 perf-profile.calltrace.cycles-pp.__switch_to
0.00 +0.9 0.85 perf-profile.calltrace.cycles-pp._copy_from_iter_full.ip_generic_getfrag.__ip_append_data.ip_make_skb.udp_sendmsg
0.00 +0.9 0.90 ± 2% perf-profile.calltrace.cycles-pp.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.26 ±100% +0.9 1.16 ± 6% perf-profile.calltrace.cycles-pp.udp4_lib_lookup2.__udp4_lib_lookup.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish
0.00 +0.9 0.90 ± 2% perf-profile.calltrace.cycles-pp.security_socket_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto.do_syscall_64
0.00 +0.9 0.92 perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__sched_text_start.schedule.schedule_timeout
0.80 ± 4% +1.0 1.75 ± 4% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__sched_text_start.schedule.schedule_timeout
0.94 ± 3% +1.0 1.90 perf-profile.calltrace.cycles-pp.ip_generic_getfrag.__ip_append_data.ip_make_skb.udp_sendmsg.sock_sendmsg
0.00 +1.0 0.96 perf-profile.calltrace.cycles-pp._copy_to_iter.udp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
0.00 +1.0 1.00 ± 3% perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__sched_text_start.schedule.schedule_timeout
1.29 ± 3% +1.0 2.31 ± 2% perf-profile.calltrace.cycles-pp.ip_route_output_key_hash.ip_route_output_flow.udp_sendmsg.sock_sendmsg.__sys_sendto
1.33 ± 4% +1.0 2.35 ± 2% perf-profile.calltrace.cycles-pp.ip_route_output_flow.udp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto
0.99 ± 3% +1.1 2.08 perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__sched_text_start.schedule.schedule_timeout.__skb_wait_for_more_packets
1.50 ± 3% +1.1 2.63 perf-profile.calltrace.cycles-pp.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.__ip_append_data.ip_make_skb
0.25 ±100% +1.1 1.39 ± 2% perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.00 +1.1 1.14 perf-profile.calltrace.cycles-pp.__check_object_size.udp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
0.00 +1.2 1.19 ± 5% perf-profile.calltrace.cycles-pp.fib_table_lookup.ip_route_output_key_hash_rcu.ip_route_output_key_hash.ip_route_output_flow.udp_sendmsg
0.64 ± 3% +1.2 1.83 perf-profile.calltrace.cycles-pp.move_addr_to_user.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.2 1.23 ± 2% perf-profile.calltrace.cycles-pp.load_new_mm_cr3.switch_mm_irqs_off.__sched_text_start.schedule.schedule_timeout
1.59 ± 3% +1.2 2.83 perf-profile.calltrace.cycles-pp.alloc_skb_with_frags.sock_alloc_send_pskb.__ip_append_data.ip_make_skb.udp_sendmsg
0.76 ± 5% +1.3 2.08 perf-profile.calltrace.cycles-pp.pick_next_task_fair.__sched_text_start.schedule.schedule_timeout.__skb_wait_for_more_packets
2.28 ± 3% +1.4 3.66 perf-profile.calltrace.cycles-pp.sock_alloc_send_pskb.__ip_append_data.ip_make_skb.udp_sendmsg.sock_sendmsg
0.77 ± 3% +1.4 2.17 ± 4% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
1.24 +1.8 2.99 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64
1.14 ± 3% +2.1 3.26 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
1.88 ± 2% +2.2 4.04 ± 2% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
1.91 ± 2% +2.8 4.71 ± 2% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__sched_text_start.schedule.schedule_timeout.__skb_wait_for_more_packets
3.80 ± 2% +3.0 6.79 perf-profile.calltrace.cycles-pp.__ip_append_data.ip_make_skb.udp_sendmsg.sock_sendmsg.__sys_sendto
1.86 ± 3% +3.5 5.34 ± 3% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
1.89 ± 3% +3.5 5.42 ± 3% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
1.91 ± 3% +3.6 5.46 ± 3% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
8.02 ± 2% +6.6 14.63 ± 2% perf-profile.calltrace.cycles-pp.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb.udp_unicast_rcv_skb.__udp4_lib_rcv.ip_protocol_deliver_rcu
9.29 ± 2% +6.7 15.95 perf-profile.calltrace.cycles-pp.udp_queue_rcv_one_skb.udp_unicast_rcv_skb.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish
9.40 +6.8 16.21 perf-profile.calltrace.cycles-pp.udp_unicast_rcv_skb.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver
5.19 ± 3% +7.0 12.18 perf-profile.calltrace.cycles-pp.__sched_text_start.schedule.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp
5.31 ± 2% +7.0 12.36 ± 2% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable
5.27 ± 3% +7.2 12.48 perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
5.40 ± 2% +7.3 12.67 ± 2% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable.__udp_enqueue_schedule_skb
6.52 ± 2% +7.3 13.87 ± 2% perf-profile.calltrace.cycles-pp.sock_def_readable.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb.udp_unicast_rcv_skb.__udp4_lib_rcv
5.32 ± 3% +7.4 12.68 perf-profile.calltrace.cycles-pp.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg.inet_recvmsg
5.50 ± 2% +7.5 12.97 ± 2% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_readable.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb
5.81 ± 2% +7.6 13.40 ± 2% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_readable.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb.udp_unicast_rcv_skb
10.24 ± 2% +7.8 18.03 perf-profile.calltrace.cycles-pp.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv
6.03 ± 3% +7.9 13.94 perf-profile.calltrace.cycles-pp.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg.inet_recvmsg.__sys_recvfrom
10.35 ± 2% +8.1 18.42 perf-profile.calltrace.cycles-pp.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core
10.43 ± 2% +8.2 18.64 perf-profile.calltrace.cycles-pp.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog
10.57 ± 2% +8.3 18.91 perf-profile.calltrace.cycles-pp.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog.net_rx_action
8.13 ± 2% +8.6 16.71 perf-profile.calltrace.cycles-pp.__skb_recv_udp.udp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
10.97 ± 2% +8.9 19.82 perf-profile.calltrace.cycles-pp.ip_rcv.__netif_receive_skb_one_core.process_backlog.net_rx_action.__softirqentry_text_start
12.58 ± 2% +9.1 21.67 perf-profile.calltrace.cycles-pp.process_backlog.net_rx_action.__softirqentry_text_start.do_softirq_own_stack.do_softirq
11.44 ± 2% +9.3 20.69 perf-profile.calltrace.cycles-pp.__netif_receive_skb_one_core.process_backlog.net_rx_action.__softirqentry_text_start.do_softirq_own_stack
13.03 ± 2% +9.5 22.48 perf-profile.calltrace.cycles-pp.net_rx_action.__softirqentry_text_start.do_softirq_own_stack.do_softirq.__local_bh_enable_ip
13.30 ± 2% +9.7 23.05 perf-profile.calltrace.cycles-pp.__softirqentry_text_start.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.ip_finish_output2
20.09 +9.8 29.85 perf-profile.calltrace.cycles-pp.ip_finish_output2.ip_output.ip_send_skb.udp_send_skb.udp_sendmsg
13.41 ± 2% +9.9 23.33 perf-profile.calltrace.cycles-pp.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output
13.59 ± 2% +10.2 23.77 perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output.ip_send_skb
13.64 ± 2% +10.3 23.89 perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.ip_finish_output2.ip_output.ip_send_skb.udp_send_skb
20.86 +10.3 31.20 perf-profile.calltrace.cycles-pp.ip_output.ip_send_skb.udp_send_skb.udp_sendmsg.sock_sendmsg
21.05 +10.6 31.66 perf-profile.calltrace.cycles-pp.ip_send_skb.udp_send_skb.udp_sendmsg.sock_sendmsg.__sys_sendto
21.35 +10.9 32.23 perf-profile.calltrace.cycles-pp.udp_send_skb.udp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto
9.86 ± 2% +11.2 21.07 perf-profile.calltrace.cycles-pp.udp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64
10.09 ± 2% +11.5 21.62 perf-profile.calltrace.cycles-pp.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.46 ± 2% +13.9 25.36 perf-profile.calltrace.cycles-pp.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.59 ± 2% +14.1 25.71 perf-profile.calltrace.cycles-pp.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
46.36 -41.2 5.17 ± 6% perf-profile.children.cycles-pp.ip_idents_reserve
46.62 -40.8 5.79 ± 6% perf-profile.children.cycles-pp.__ip_select_ident
47.00 -40.4 6.64 ± 5% perf-profile.children.cycles-pp.__ip_make_skb
51.11 -37.0 14.08 ± 2% perf-profile.children.cycles-pp.ip_make_skb
74.31 -24.1 50.16 perf-profile.children.cycles-pp.udp_sendmsg
74.73 -23.3 51.39 perf-profile.children.cycles-pp.sock_sendmsg
75.43 -22.1 53.35 perf-profile.children.cycles-pp.__sys_sendto
75.55 -21.9 53.69 perf-profile.children.cycles-pp.__x64_sys_sendto
95.94 -6.2 89.72 perf-profile.children.cycles-pp.do_syscall_64
96.15 -5.9 90.25 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
3.87 -2.6 1.25 ± 2% perf-profile.children.cycles-pp.sock_wfree
5.40 -1.8 3.62 perf-profile.children.cycles-pp.loopback_xmit
5.54 -1.7 3.85 perf-profile.children.cycles-pp.dev_hard_start_xmit
3.13 -1.0 2.17 perf-profile.children.cycles-pp._raw_spin_lock
5.98 -0.8 5.18 perf-profile.children.cycles-pp.__dev_queue_xmit
1.22 -0.7 0.53 ± 3% perf-profile.children.cycles-pp.sock_def_write_space
0.52 ± 4% -0.2 0.29 perf-profile.children.cycles-pp.dst_release
0.67 ± 3% -0.2 0.49 ± 2% perf-profile.children.cycles-pp.ipv4_pktinfo_prepare
0.18 ± 6% -0.1 0.07 ± 12% perf-profile.children.cycles-pp.resched_curr
0.45 ± 2% -0.1 0.39 ± 2% perf-profile.children.cycles-pp.udp_rmem_release
0.46 ± 3% -0.0 0.43 perf-profile.children.cycles-pp.skb_set_owner_w
0.06 ± 7% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.skb_pull_rcsum
0.14 ± 13% +0.0 0.18 ± 5% perf-profile.children.cycles-pp.update_process_times
0.17 ± 11% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.tick_sched_timer
0.14 ± 13% +0.0 0.18 ± 4% perf-profile.children.cycles-pp.tick_sched_handle
0.13 ± 9% +0.0 0.17 ± 2% perf-profile.children.cycles-pp.apparmor_ipv4_postroute
0.05 +0.0 0.10 ± 5% perf-profile.children.cycles-pp.udp4_hwcsum
0.06 ± 9% +0.0 0.10 perf-profile.children.cycles-pp.__skb_try_recv_from_queue
0.08 ± 5% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.rb_next
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__usecs_to_jiffies
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__bitmap_and
0.00 +0.1 0.05 perf-profile.children.cycles-pp.rcu_note_context_switch
0.21 ± 5% +0.1 0.26 ± 3% perf-profile.children.cycles-pp.security_sock_rcv_skb
0.08 ± 5% +0.1 0.14 ± 3% perf-profile.children.cycles-pp.skb_release_data
0.01 ±173% +0.1 0.07 ± 7% perf-profile.children.cycles-pp.apparmor_socket_recvmsg
0.11 ± 9% +0.1 0.17 ± 4% perf-profile.children.cycles-pp.apparmor_socket_sock_rcv_skb
0.23 ± 9% +0.1 0.29 ± 5% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.54 ± 8% +0.1 0.60 ± 2% perf-profile.children.cycles-pp.apic_timer_interrupt
0.00 +0.1 0.06 perf-profile.children.cycles-pp.__fdget
0.06 ± 15% +0.1 0.12 ± 3% perf-profile.children.cycles-pp.check_cfs_rq_runtime
0.21 ± 2% +0.1 0.27 perf-profile.children.cycles-pp.ip_rcv_finish_core
0.11 ± 4% +0.1 0.17 ± 4% perf-profile.children.cycles-pp.ktime_get
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.kfree_skbmem
0.08 ± 5% +0.1 0.15 ± 5% perf-profile.children.cycles-pp.__kfree_skb_flush
0.06 ± 7% +0.1 0.12 ± 6% perf-profile.children.cycles-pp.kmalloc_slab
0.01 ±173% +0.1 0.08 ± 8% perf-profile.children.cycles-pp.udp_queue_rcv_skb
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.skb_csum_hwoffload_help
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.is_cpu_allowed
0.11 ± 3% +0.1 0.18 ± 7% perf-profile.children.cycles-pp.clockevents_program_event
0.00 +0.1 0.07 perf-profile.children.cycles-pp.rb_insert_color
0.00 +0.1 0.07 perf-profile.children.cycles-pp.skb_clone_tx_timestamp
0.08 ± 6% +0.1 0.15 ± 14% perf-profile.children.cycles-pp.skb_consume_udp
0.00 +0.1 0.07 ± 5% perf-profile.children.cycles-pp.apparmor_ip_postroute
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.eth_type_trans
0.07 ± 7% +0.1 0.14 ± 12% perf-profile.children.cycles-pp.xfrm_lookup_with_ifid
0.01 ±173% +0.1 0.09 ± 4% perf-profile.children.cycles-pp.default_wake_function
0.08 ± 8% +0.1 0.16 ± 2% perf-profile.children.cycles-pp.rb_erase
0.04 ± 57% +0.1 0.12 ± 3% perf-profile.children.cycles-pp.rcu_all_qs
0.06 ± 6% +0.1 0.14 ± 3% perf-profile.children.cycles-pp.iov_iter_advance
0.47 ± 7% +0.1 0.55 ± 2% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.13 +0.1 0.21 ± 2% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.07 ± 6% +0.1 0.15 ± 2% perf-profile.children.cycles-pp.ip_send_check
0.07 ± 5% +0.1 0.17 ± 9% perf-profile.children.cycles-pp.xfrm_lookup_route
0.42 ± 8% +0.1 0.52 ± 3% perf-profile.children.cycles-pp.hrtimer_interrupt
0.06 +0.1 0.15 ± 3% perf-profile.children.cycles-pp.__enqueue_entity
0.00 +0.1 0.10 ± 15% perf-profile.children.cycles-pp.deactivate_task
0.00 +0.1 0.10 ± 8% perf-profile.children.cycles-pp.skb_network_protocol
0.00 +0.1 0.10 ± 4% perf-profile.children.cycles-pp.switch_ldt
0.00 +0.1 0.10 ± 8% perf-profile.children.cycles-pp.netdev_core_pick_tx
0.00 +0.1 0.10 ± 7% perf-profile.children.cycles-pp.__raw_spin_unlock
0.27 ± 3% +0.1 0.38 ± 4% perf-profile.children.cycles-pp.put_prev_entity
0.08 ± 5% +0.1 0.20 ± 4% perf-profile.children.cycles-pp.wakeup_preempt_entity
0.11 ± 4% +0.1 0.22 ± 3% perf-profile.children.cycles-pp.__list_del_entry_valid
0.03 ±100% +0.1 0.14 ± 5% perf-profile.children.cycles-pp.__raise_softirq_irqoff
0.07 ± 6% +0.1 0.18 ± 2% perf-profile.children.cycles-pp.apparmor_socket_sendmsg
0.27 ± 5% +0.1 0.38 perf-profile.children.cycles-pp.nf_hook_slow
0.06 ± 6% +0.1 0.18 ± 3% perf-profile.children.cycles-pp.native_load_tls
0.03 ±100% +0.1 0.14 ± 3% perf-profile.children.cycles-pp.should_failslab
0.09 ± 4% +0.1 0.22 perf-profile.children.cycles-pp.check_stack_object
0.01 ±173% +0.1 0.14 ± 3% perf-profile.children.cycles-pp.__list_add_valid
0.00 +0.1 0.13 ± 5% perf-profile.children.cycles-pp.receiver_wake_function
0.08 ± 5% +0.1 0.22 ± 3% perf-profile.children.cycles-pp.ksoftirqd_running
0.15 +0.1 0.29 perf-profile.children.cycles-pp.kmem_cache_free
0.09 ± 7% +0.1 0.23 perf-profile.children.cycles-pp._cond_resched
0.00 +0.1 0.14 ± 7% perf-profile.children.cycles-pp.inet_send_prepare
0.00 +0.1 0.14 ± 3% perf-profile.children.cycles-pp.raw_local_deliver
0.07 ± 26% +0.1 0.22 ± 3% perf-profile.children.cycles-pp.cpumask_next
0.24 +0.1 0.39 perf-profile.children.cycles-pp.ip_rcv_finish
0.10 ± 4% +0.2 0.25 ± 10% perf-profile.children.cycles-pp.cpuacct_charge
0.16 ± 2% +0.2 0.32 ± 2% perf-profile.children.cycles-pp.set_next_buddy
0.11 ± 4% +0.2 0.28 perf-profile.children.cycles-pp.__ip_local_out
0.10 ± 7% +0.2 0.26 ± 3% perf-profile.children.cycles-pp.import_single_range
0.27 ± 4% +0.2 0.44 ± 3% perf-profile.children.cycles-pp.sk_filter_trim_cap
0.08 ± 5% +0.2 0.26 ± 4% perf-profile.children.cycles-pp.inet_sendmsg
0.12 ± 6% +0.2 0.30 ± 2% perf-profile.children.cycles-pp.__ksize
0.11 ± 6% +0.2 0.29 ± 3% perf-profile.children.cycles-pp.clear_buddies
0.16 ± 2% +0.2 0.36 perf-profile.children.cycles-pp.read_tsc
0.15 ± 5% +0.2 0.36 perf-profile.children.cycles-pp.kfree
0.26 ± 8% +0.2 0.47 perf-profile.children.cycles-pp.__ip_finish_output
0.17 ± 4% +0.2 0.39 perf-profile.children.cycles-pp.copyin
0.11 ± 4% +0.2 0.34 ± 8% perf-profile.children.cycles-pp.netif_skb_features
0.15 ± 3% +0.2 0.38 perf-profile.children.cycles-pp.ip_local_out
0.10 ± 7% +0.2 0.34 ± 2% perf-profile.children.cycles-pp.account_entity_enqueue
0.18 ± 7% +0.3 0.44 perf-profile.children.cycles-pp.ip_setup_cork
0.28 ± 5% +0.3 0.54 perf-profile.children.cycles-pp.ktime_get_with_offset
0.24 ± 6% +0.3 0.51 perf-profile.children.cycles-pp.__check_heap_object
0.18 ± 4% +0.3 0.45 ± 4% perf-profile.children.cycles-pp.update_min_vruntime
0.15 ± 4% +0.3 0.42 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.11 ± 4% +0.3 0.38 perf-profile.children.cycles-pp.ip_rcv_core
0.23 ± 4% +0.3 0.51 perf-profile.children.cycles-pp.__consume_stateless_skb
0.20 ± 2% +0.3 0.48 perf-profile.children.cycles-pp.finish_task_switch
0.15 ± 14% +0.3 0.45 ± 3% perf-profile.children.cycles-pp._find_next_bit
0.18 ± 5% +0.3 0.50 perf-profile.children.cycles-pp._copy_to_user
0.55 +0.3 0.87 perf-profile.children.cycles-pp.__kmalloc_node_track_caller
0.39 ± 2% +0.3 0.72 ± 2% perf-profile.children.cycles-pp.check_preempt_wakeup
0.81 ± 5% +0.3 1.14 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_bh
0.41 ± 3% +0.3 0.75 perf-profile.children.cycles-pp.__netif_receive_skb_core
0.15 ± 4% +0.3 0.49 perf-profile.children.cycles-pp.native_sched_clock
1.18 ± 3% +0.3 1.52 perf-profile.children.cycles-pp.netif_rx_internal
0.24 ± 3% +0.3 0.58 perf-profile.children.cycles-pp.siphash_3u32
0.40 ± 6% +0.4 0.76 perf-profile.children.cycles-pp.kmem_cache_alloc_node
0.20 ± 4% +0.4 0.55 ± 2% perf-profile.children.cycles-pp.__might_sleep
0.15 ± 2% +0.4 0.51 perf-profile.children.cycles-pp.sched_clock
0.20 ± 6% +0.4 0.56 perf-profile.children.cycles-pp._copy_from_user
1.18 ± 3% +0.4 1.54 perf-profile.children.cycles-pp.netif_rx
0.35 +0.4 0.72 perf-profile.children.cycles-pp.native_write_msr
0.20 ± 4% +0.4 0.57 ± 2% perf-profile.children.cycles-pp.___perf_sw_event
0.21 ± 4% +0.4 0.58 perf-profile.children.cycles-pp.copyout
0.20 ± 3% +0.4 0.57 ± 3% perf-profile.children.cycles-pp.cpumask_next_wrap
0.16 ± 4% +0.4 0.54 ± 2% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.33 ± 4% +0.4 0.71 perf-profile.children.cycles-pp.__virt_addr_valid
0.17 ± 4% +0.4 0.56 perf-profile.children.cycles-pp.sched_clock_cpu
0.42 +0.4 0.81 perf-profile.children.cycles-pp.check_preempt_curr
0.62 +0.4 1.02 perf-profile.children.cycles-pp.__kmalloc_reserve
0.30 ± 3% +0.4 0.71 ± 2% perf-profile.children.cycles-pp.security_socket_recvmsg
0.32 ± 2% +0.4 0.74 ± 2% perf-profile.children.cycles-pp.pick_next_entity
0.20 ± 3% +0.4 0.63 perf-profile.children.cycles-pp.__get_user_4
0.47 ± 2% +0.4 0.91 ± 2% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.32 ± 3% +0.4 0.77 ± 2% perf-profile.children.cycles-pp.sock_recvmsg
0.27 ± 3% +0.4 0.72 perf-profile.children.cycles-pp.move_addr_to_kernel
0.76 +0.5 1.22 perf-profile.children.cycles-pp.__switch_to
0.23 ± 2% +0.5 0.69 ± 4% perf-profile.children.cycles-pp.validate_xmit_skb
0.34 +0.5 0.81 ± 2% perf-profile.children.cycles-pp.__update_load_avg_se
0.42 ± 3% +0.5 0.89 perf-profile.children.cycles-pp._copy_from_iter_full
0.32 ± 5% +0.5 0.80 ± 3% perf-profile.children.cycles-pp.account_entity_dequeue
0.31 ± 4% +0.5 0.79 ± 2% perf-profile.children.cycles-pp.__switch_to_asm
0.28 ± 2% +0.5 0.76 perf-profile.children.cycles-pp.ipv4_mtu
0.29 ± 4% +0.5 0.78 perf-profile.children.cycles-pp.___might_sleep
0.28 ± 3% +0.5 0.78 perf-profile.children.cycles-pp.update_rq_clock
0.39 +0.6 0.97 perf-profile.children.cycles-pp._copy_to_iter
0.32 ± 3% +0.6 0.90 perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.31 ± 7% +0.6 0.90 ± 2% perf-profile.children.cycles-pp.security_socket_sendmsg
0.28 ± 4% +0.6 0.87 perf-profile.children.cycles-pp.__fget_light
0.81 ± 3% +0.6 1.40 ± 3% perf-profile.children.cycles-pp.available_idle_cpu
0.37 ± 5% +0.6 0.98 ± 2% perf-profile.children.cycles-pp.__calc_delta
0.35 ± 5% +0.6 0.96 ± 3% perf-profile.children.cycles-pp.update_cfs_group
0.38 ± 4% +0.6 1.00 perf-profile.children.cycles-pp.__might_fault
0.40 ± 3% +0.6 1.03 ± 6% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.36 ± 3% +0.7 1.01 perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.52 ± 4% +0.7 1.18 perf-profile.children.cycles-pp.set_next_entity
0.48 ± 6% +0.7 1.16 ± 6% perf-profile.children.cycles-pp.udp4_lib_lookup2
0.43 ± 4% +0.7 1.13 perf-profile.children.cycles-pp.aa_sk_perm
0.34 ± 4% +0.7 1.06 perf-profile.children.cycles-pp.sockfd_lookup_light
0.44 ± 2% +0.8 1.19 ± 5% perf-profile.children.cycles-pp.fib_table_lookup
0.81 ± 2% +0.8 1.60 ± 2% perf-profile.children.cycles-pp.switch_fpu_return
1.20 ± 4% +0.8 2.02 ± 2% perf-profile.children.cycles-pp.ip_route_output_key_hash_rcu
0.59 ± 6% +0.8 1.42 ± 5% perf-profile.children.cycles-pp.__udp4_lib_lookup
0.94 ± 3% +1.0 1.90 perf-profile.children.cycles-pp.ip_generic_getfrag
0.81 ± 4% +1.0 1.79 ± 4% perf-profile.children.cycles-pp.dequeue_entity
1.65 ± 2% +1.0 2.65 perf-profile.children.cycles-pp.switch_mm_irqs_off
1.30 ± 3% +1.0 2.32 ± 2% perf-profile.children.cycles-pp.ip_route_output_key_hash
1.33 ± 4% +1.0 2.36 ± 2% perf-profile.children.cycles-pp.ip_route_output_flow
0.92 ± 3% +1.1 2.04 perf-profile.children.cycles-pp.__check_object_size
1.51 ± 3% +1.1 2.65 perf-profile.children.cycles-pp.__alloc_skb
0.67 +1.2 1.86 perf-profile.children.cycles-pp.load_new_mm_cr3
0.65 ± 3% +1.2 1.85 perf-profile.children.cycles-pp.move_addr_to_user
1.60 ± 3% +1.2 2.85 perf-profile.children.cycles-pp.alloc_skb_with_frags
2.28 ± 3% +1.4 3.66 perf-profile.children.cycles-pp.sock_alloc_send_pskb
0.78 ± 3% +1.4 2.20 ± 4% perf-profile.children.cycles-pp.enqueue_entity
0.88 ± 2% +1.5 2.41 ± 2% perf-profile.children.cycles-pp.reweight_entity
1.41 ± 2% +1.6 3.03 perf-profile.children.cycles-pp.pick_next_task_fair
1.13 ± 2% +1.7 2.83 perf-profile.children.cycles-pp.update_curr
1.24 +1.8 3.00 perf-profile.children.cycles-pp.entry_SYSCALL_64
1.33 +1.9 3.24 ± 3% perf-profile.children.cycles-pp.update_load_avg
1.90 ± 2% +2.2 4.09 ± 2% perf-profile.children.cycles-pp.select_task_rq_fair
1.29 ± 3% +2.4 3.67 perf-profile.children.cycles-pp.syscall_return_via_sysret
1.92 ± 2% +2.8 4.73 ± 2% perf-profile.children.cycles-pp.dequeue_task_fair
3.80 ± 2% +3.0 6.82 perf-profile.children.cycles-pp.__ip_append_data
1.87 ± 3% +3.5 5.35 ± 3% perf-profile.children.cycles-pp.enqueue_task_fair
1.90 ± 3% +3.5 5.43 ± 3% perf-profile.children.cycles-pp.activate_task
1.91 ± 3% +3.6 5.46 ± 3% perf-profile.children.cycles-pp.ttwu_do_activate
8.02 ± 2% +6.6 14.64 ± 2% perf-profile.children.cycles-pp.__udp_enqueue_schedule_skb
9.30 ± 2% +6.7 16.00 perf-profile.children.cycles-pp.udp_queue_rcv_one_skb
9.40 +6.8 16.22 perf-profile.children.cycles-pp.udp_unicast_rcv_skb
5.32 ± 2% +7.1 12.39 ± 2% perf-profile.children.cycles-pp.try_to_wake_up
7.25 +7.2 14.42 perf-profile.children.cycles-pp.__sched_text_start
5.40 ± 2% +7.3 12.68 ± 2% perf-profile.children.cycles-pp.autoremove_wake_function
7.37 +7.3 14.70 perf-profile.children.cycles-pp.schedule
6.52 ± 2% +7.4 13.88 ± 2% perf-profile.children.cycles-pp.sock_def_readable
5.33 ± 3% +7.4 12.69 perf-profile.children.cycles-pp.schedule_timeout
5.50 ± 2% +7.5 12.97 ± 2% perf-profile.children.cycles-pp.__wake_up_common
5.82 ± 2% +7.6 13.42 ± 2% perf-profile.children.cycles-pp.__wake_up_common_lock
10.25 ± 2% +7.8 18.04 perf-profile.children.cycles-pp.__udp4_lib_rcv
6.03 ± 3% +7.9 13.96 perf-profile.children.cycles-pp.__skb_wait_for_more_packets
10.36 ± 2% +8.1 18.45 perf-profile.children.cycles-pp.ip_protocol_deliver_rcu
10.43 ± 2% +8.2 18.65 perf-profile.children.cycles-pp.ip_local_deliver_finish
10.59 ± 2% +8.4 18.95 perf-profile.children.cycles-pp.ip_local_deliver
8.14 ± 2% +8.6 16.76 perf-profile.children.cycles-pp.__skb_recv_udp
10.97 ± 2% +8.9 19.85 perf-profile.children.cycles-pp.ip_rcv
12.59 ± 2% +9.1 21.68 perf-profile.children.cycles-pp.process_backlog
11.44 ± 2% +9.3 20.70 perf-profile.children.cycles-pp.__netif_receive_skb_one_core
13.05 ± 2% +9.5 22.51 perf-profile.children.cycles-pp.net_rx_action
13.34 ± 2% +9.8 23.09 perf-profile.children.cycles-pp.__softirqentry_text_start
20.11 +9.8 29.91 perf-profile.children.cycles-pp.ip_finish_output2
13.42 ± 2% +10.0 23.37 perf-profile.children.cycles-pp.do_softirq_own_stack
13.61 ± 2% +10.2 23.82 perf-profile.children.cycles-pp.do_softirq
13.70 ± 2% +10.3 24.05 perf-profile.children.cycles-pp.__local_bh_enable_ip
20.86 +10.4 31.22 perf-profile.children.cycles-pp.ip_output
21.06 +10.6 31.67 perf-profile.children.cycles-pp.ip_send_skb
21.35 +10.9 32.25 perf-profile.children.cycles-pp.udp_send_skb
9.86 ± 2% +11.2 21.08 perf-profile.children.cycles-pp.udp_recvmsg
10.09 ± 2% +11.5 21.63 perf-profile.children.cycles-pp.inet_recvmsg
11.46 ± 2% +13.9 25.39 perf-profile.children.cycles-pp.__sys_recvfrom
11.60 ± 2% +14.1 25.75 perf-profile.children.cycles-pp.__x64_sys_recvfrom
46.11 -41.0 5.12 ± 6% perf-profile.self.cycles-pp.ip_idents_reserve
2.64 -1.9 0.69 ± 4% perf-profile.self.cycles-pp.sock_wfree
3.12 -1.0 2.12 perf-profile.self.cycles-pp._raw_spin_lock
1.20 -0.7 0.51 ± 3% perf-profile.self.cycles-pp.sock_def_write_space
0.90 ± 2% -0.3 0.55 ± 4% perf-profile.self.cycles-pp.__udp_enqueue_schedule_skb
0.70 ± 3% -0.3 0.45 perf-profile.self.cycles-pp.sock_def_readable
0.50 ± 4% -0.2 0.27 perf-profile.self.cycles-pp.dst_release
0.99 ± 4% -0.2 0.80 perf-profile.self.cycles-pp.switch_mm_irqs_off
0.18 ± 4% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.resched_curr
0.44 ± 2% -0.1 0.38 ± 2% perf-profile.self.cycles-pp.udp_rmem_release
0.70 ± 3% -0.0 0.66 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.46 ± 2% -0.0 0.43 perf-profile.self.cycles-pp.skb_set_owner_w
0.07 ± 5% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.copyin
0.12 ± 6% +0.0 0.15 ± 2% perf-profile.self.cycles-pp.apparmor_ipv4_postroute
0.06 ± 9% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.skb_pull_rcsum
0.11 ± 4% +0.0 0.15 ± 2% perf-profile.self.cycles-pp.nf_hook_slow
0.11 ± 6% +0.0 0.15 perf-profile.self.cycles-pp.put_prev_entity
0.05 ± 8% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.__skb_try_recv_from_queue
0.08 ± 10% +0.0 0.12 ± 5% perf-profile.self.cycles-pp.rb_next
0.11 ± 7% +0.0 0.15 ± 2% perf-profile.self.cycles-pp.apparmor_socket_sock_rcv_skb
0.13 ± 6% +0.0 0.18 ± 3% perf-profile.self.cycles-pp.ktime_get_with_offset
0.05 +0.0 0.10 ± 8% perf-profile.self.cycles-pp.netif_rx_internal
0.12 ± 3% +0.0 0.17 ± 4% perf-profile.self.cycles-pp._copy_from_iter_full
0.05 ± 8% +0.1 0.10 ± 4% perf-profile.self.cycles-pp.ttwu_do_wakeup
0.00 +0.1 0.05 perf-profile.self.cycles-pp.__usecs_to_jiffies
0.00 +0.1 0.05 perf-profile.self.cycles-pp.__bitmap_and
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.sock_recvmsg
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.apparmor_ip_postroute
0.08 ± 5% +0.1 0.13 ± 3% perf-profile.self.cycles-pp.skb_release_data
0.20 ± 3% +0.1 0.26 perf-profile.self.cycles-pp.ip_rcv_finish_core
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.skb_csum_hwoffload_help
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.__ip_select_ident
0.06 ± 9% +0.1 0.11 ± 4% perf-profile.self.cycles-pp.__netif_receive_skb_one_core
0.06 ± 9% +0.1 0.11 ± 7% perf-profile.self.cycles-pp.kmalloc_slab
0.00 +0.1 0.06 perf-profile.self.cycles-pp.skb_clone_tx_timestamp
0.00 +0.1 0.06 perf-profile.self.cycles-pp.apparmor_socket_recvmsg
0.17 ± 2% +0.1 0.24 ± 3% perf-profile.self.cycles-pp.ipv4_pktinfo_prepare
0.17 ± 12% +0.1 0.23 ± 6% perf-profile.self.cycles-pp.__ip_finish_output
0.08 ± 10% +0.1 0.14 ± 3% perf-profile.self.cycles-pp.rb_erase
0.08 ± 11% +0.1 0.14 perf-profile.self.cycles-pp.ip_setup_cork
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.is_cpu_allowed
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.kfree_skbmem
0.09 ± 4% +0.1 0.16 ± 6% perf-profile.self.cycles-pp.ktime_get
0.01 ±173% +0.1 0.08 perf-profile.self.cycles-pp.default_wake_function
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.activate_task
0.08 ± 6% +0.1 0.15 ± 15% perf-profile.self.cycles-pp.skb_consume_udp
0.06 ± 6% +0.1 0.13 ± 9% perf-profile.self.cycles-pp.xfrm_lookup_with_ifid
0.06 ± 6% +0.1 0.13 ± 3% perf-profile.self.cycles-pp.iov_iter_advance
0.00 +0.1 0.07 ± 10% perf-profile.self.cycles-pp.sock_sendmsg
0.00 +0.1 0.07 perf-profile.self.cycles-pp.__wake_up_common_lock
0.00 +0.1 0.07 perf-profile.self.cycles-pp.rb_insert_color
0.00 +0.1 0.07 ± 10% perf-profile.self.cycles-pp.eth_type_trans
0.14 ± 3% +0.1 0.21 ± 2% perf-profile.self.cycles-pp.ip_make_skb
0.01 ±173% +0.1 0.09 perf-profile.self.cycles-pp.check_cfs_rq_runtime
0.13 +0.1 0.21 ± 2% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.00 +0.1 0.08 ± 6% perf-profile.self.cycles-pp.skb_network_protocol
0.00 +0.1 0.08 ± 6% perf-profile.self.cycles-pp.udp_queue_rcv_skb
0.07 ± 12% +0.1 0.14 ± 5% perf-profile.self.cycles-pp.udp_unicast_rcv_skb
0.04 ± 57% +0.1 0.11 ± 4% perf-profile.self.cycles-pp.__ip_local_out
0.00 +0.1 0.08 perf-profile.self.cycles-pp._copy_from_user
0.00 +0.1 0.08 perf-profile.self.cycles-pp._copy_to_user
0.00 +0.1 0.08 perf-profile.self.cycles-pp.should_failslab
0.07 ± 6% +0.1 0.15 perf-profile.self.cycles-pp.ip_send_check
0.00 +0.1 0.08 ± 5% perf-profile.self.cycles-pp.move_addr_to_kernel
0.06 ± 6% +0.1 0.15 ± 3% perf-profile.self.cycles-pp.__kfree_skb_flush
0.00 +0.1 0.08 ± 5% perf-profile.self.cycles-pp.netdev_core_pick_tx
0.00 +0.1 0.09 ± 4% perf-profile.self.cycles-pp.check_preempt_curr
0.00 +0.1 0.09 ± 4% perf-profile.self.cycles-pp.__raw_spin_unlock
0.06 ± 6% +0.1 0.15 ± 4% perf-profile.self.cycles-pp.__kmalloc_reserve
0.06 +0.1 0.15 ± 2% perf-profile.self.cycles-pp.__enqueue_entity
0.03 ±100% +0.1 0.11 ± 4% perf-profile.self.cycles-pp.inet_sendmsg
0.00 +0.1 0.09 perf-profile.self.cycles-pp.rcu_all_qs
0.00 +0.1 0.09 ± 7% perf-profile.self.cycles-pp.udp4_hwcsum
0.00 +0.1 0.09 ± 13% perf-profile.self.cycles-pp.deactivate_task
0.29 +0.1 0.38 perf-profile.self.cycles-pp.udp_queue_rcv_one_skb
0.14 ± 6% +0.1 0.23 perf-profile.self.cycles-pp.dev_hard_start_xmit
0.41 ± 2% +0.1 0.51 ± 3% perf-profile.self.cycles-pp.__kmalloc_node_track_caller
0.00 +0.1 0.10 ± 4% perf-profile.self.cycles-pp.switch_ldt
0.10 ± 4% +0.1 0.21 ± 2% perf-profile.self.cycles-pp.__list_del_entry_valid
0.04 ± 57% +0.1 0.14 perf-profile.self.cycles-pp.sockfd_lookup_light
0.00 +0.1 0.10 ± 4% perf-profile.self.cycles-pp.ip_local_out
0.11 ± 4% +0.1 0.21 ± 3% perf-profile.self.cycles-pp._copy_to_iter
0.08 ± 6% +0.1 0.18 perf-profile.self.cycles-pp.check_stack_object
0.08 ± 8% +0.1 0.18 ± 4% perf-profile.self.cycles-pp.wakeup_preempt_entity
0.07 ± 7% +0.1 0.17 ± 4% perf-profile.self.cycles-pp.apparmor_socket_sendmsg
0.06 +0.1 0.17 ± 2% perf-profile.self.cycles-pp.ip_local_deliver_finish
0.06 ± 11% +0.1 0.17 ± 4% perf-profile.self.cycles-pp.sk_filter_trim_cap
0.00 +0.1 0.11 ± 4% perf-profile.self.cycles-pp._cond_resched
0.05 ± 8% +0.1 0.17 ± 4% perf-profile.self.cycles-pp.__wake_up_common
0.06 ± 6% +0.1 0.18 ± 3% perf-profile.self.cycles-pp.native_load_tls
0.00 +0.1 0.12 ± 3% perf-profile.self.cycles-pp.ip_rcv_finish
0.01 ±173% +0.1 0.13 ± 5% perf-profile.self.cycles-pp.__raise_softirq_irqoff
0.08 ± 10% +0.1 0.20 ± 3% perf-profile.self.cycles-pp.alloc_skb_with_frags
0.03 ±100% +0.1 0.14 ± 3% perf-profile.self.cycles-pp.ip_rcv
0.00 +0.1 0.12 ± 6% perf-profile.self.cycles-pp.receiver_wake_function
0.07 ± 6% +0.1 0.19 ± 2% perf-profile.self.cycles-pp.dequeue_entity
0.00 +0.1 0.12 ± 4% perf-profile.self.cycles-pp.raw_local_deliver
0.07 ± 5% +0.1 0.20 ± 3% perf-profile.self.cycles-pp.ksoftirqd_running
0.15 ± 2% +0.1 0.28 perf-profile.self.cycles-pp.kmem_cache_free
0.11 ± 4% +0.1 0.25 perf-profile.self.cycles-pp.do_softirq
0.40 ± 3% +0.1 0.54 ± 3% perf-profile.self.cycles-pp.process_backlog
0.10 ± 4% +0.1 0.23 ± 3% perf-profile.self.cycles-pp.__local_bh_enable_ip
0.00 +0.1 0.14 ± 3% perf-profile.self.cycles-pp.__list_add_valid
0.06 ± 6% +0.1 0.20 ± 6% perf-profile.self.cycles-pp.security_socket_sendmsg
0.00 +0.1 0.14 ± 7% perf-profile.self.cycles-pp.inet_send_prepare
0.06 ± 11% +0.1 0.21 ± 2% perf-profile.self.cycles-pp.schedule_timeout
0.00 +0.1 0.15 ± 3% perf-profile.self.cycles-pp.autoremove_wake_function
0.30 ± 7% +0.1 0.45 perf-profile.self.cycles-pp.kmem_cache_alloc_node
0.16 ± 2% +0.1 0.31 perf-profile.self.cycles-pp.ip_local_deliver
0.16 ± 2% +0.1 0.31 ± 2% perf-profile.self.cycles-pp.set_next_buddy
0.09 ± 8% +0.2 0.24 ± 4% perf-profile.self.cycles-pp.clear_buddies
0.25 ± 3% +0.2 0.41 perf-profile.self.cycles-pp.__udp4_lib_rcv
0.10 ± 5% +0.2 0.25 ± 10% perf-profile.self.cycles-pp.cpuacct_charge
0.10 ± 4% +0.2 0.26 ± 3% perf-profile.self.cycles-pp.__udp4_lib_lookup
0.16 ± 6% +0.2 0.33 ± 3% perf-profile.self.cycles-pp.set_next_entity
0.23 ± 6% +0.2 0.39 perf-profile.self.cycles-pp.sock_alloc_send_pskb
0.12 ± 9% +0.2 0.28 ± 2% perf-profile.self.cycles-pp.schedule
0.10 ± 4% +0.2 0.26 ± 3% perf-profile.self.cycles-pp.import_single_range
0.10 ± 5% +0.2 0.26 ± 3% perf-profile.self.cycles-pp.cpumask_next_wrap
0.10 ± 7% +0.2 0.28 perf-profile.self.cycles-pp.ip_generic_getfrag
0.10 ± 5% +0.2 0.27 ± 2% perf-profile.self.cycles-pp.__might_fault
0.14 ± 5% +0.2 0.32 ± 2% perf-profile.self.cycles-pp.finish_task_switch
0.12 ± 3% +0.2 0.30 ± 3% perf-profile.self.cycles-pp.__ksize
0.08 +0.2 0.27 ± 9% perf-profile.self.cycles-pp.netif_skb_features
0.09 ± 4% +0.2 0.28 ± 3% perf-profile.self.cycles-pp.validate_xmit_skb
0.15 ± 3% +0.2 0.35 ± 2% perf-profile.self.cycles-pp.read_tsc
0.10 +0.2 0.30 perf-profile.self.cycles-pp.ip_route_output_key_hash
0.07 +0.2 0.27 ± 2% perf-profile.self.cycles-pp.ip_protocol_deliver_rcu
0.12 ± 3% +0.2 0.32 ± 2% perf-profile.self.cycles-pp.__x64_sys_sendto
0.32 ± 5% +0.2 0.52 ± 2% perf-profile.self.cycles-pp.__alloc_skb
0.15 ± 3% +0.2 0.36 ± 2% perf-profile.self.cycles-pp.pick_next_entity
0.15 ± 7% +0.2 0.36 ± 2% perf-profile.self.cycles-pp.kfree
0.12 ± 5% +0.2 0.33 ± 2% perf-profile.self.cycles-pp.do_softirq_own_stack
0.16 ± 6% +0.2 0.38 ± 3% perf-profile.self.cycles-pp.update_rq_clock
0.14 ± 5% +0.2 0.36 perf-profile.self.cycles-pp.__x64_sys_recvfrom
0.10 ± 5% +0.2 0.32 ± 2% perf-profile.self.cycles-pp.account_entity_enqueue
0.14 ± 7% +0.2 0.36 ± 3% perf-profile.self.cycles-pp.try_to_wake_up
0.22 ± 5% +0.2 0.45 perf-profile.self.cycles-pp.ip_output
0.25 ± 7% +0.2 0.49 ± 7% perf-profile.self.cycles-pp.udp_send_skb
0.25 ± 6% +0.2 0.49 perf-profile.self.cycles-pp.enqueue_to_backlog
0.30 ± 3% +0.2 0.55 perf-profile.self.cycles-pp.net_rx_action
0.18 ± 4% +0.3 0.44 ± 5% perf-profile.self.cycles-pp.update_min_vruntime
0.24 ± 5% +0.3 0.50 perf-profile.self.cycles-pp.__check_heap_object
0.14 ± 8% +0.3 0.40 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.10 ± 4% +0.3 0.37 perf-profile.self.cycles-pp.ip_rcv_core
0.24 ± 3% +0.3 0.51 ± 2% perf-profile.self.cycles-pp.__softirqentry_text_start
0.13 ± 6% +0.3 0.41 ± 3% perf-profile.self.cycles-pp.check_preempt_wakeup
0.16 ± 2% +0.3 0.45 perf-profile.self.cycles-pp.move_addr_to_user
0.14 ± 3% +0.3 0.42 ± 2% perf-profile.self.cycles-pp.reweight_entity
0.17 ± 3% +0.3 0.46 ± 5% perf-profile.self.cycles-pp.__sys_recvfrom
0.15 ± 14% +0.3 0.45 ± 3% perf-profile.self.cycles-pp._find_next_bit
0.20 ± 4% +0.3 0.49 perf-profile.self.cycles-pp.pick_next_task_fair
0.26 ± 3% +0.3 0.56 ± 4% perf-profile.self.cycles-pp.__ip_make_skb
0.23 ± 6% +0.3 0.54 ± 10% perf-profile.self.cycles-pp.inet_recvmsg
0.18 ± 4% +0.3 0.49 ± 2% perf-profile.self.cycles-pp.__might_sleep
0.21 ± 2% +0.3 0.53 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.80 ± 5% +0.3 1.12 ± 8% perf-profile.self.cycles-pp._raw_spin_lock_bh
0.13 ± 6% +0.3 0.45 ± 2% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.15 ± 5% +0.3 0.48 perf-profile.self.cycles-pp.native_sched_clock
0.41 ± 5% +0.3 0.74 perf-profile.self.cycles-pp.__netif_receive_skb_core
0.17 ± 5% +0.3 0.51 perf-profile.self.cycles-pp.___perf_sw_event
0.49 ± 4% +0.3 0.83 ± 3% perf-profile.self.cycles-pp.ip_finish_output2
0.32 ± 3% +0.3 0.66 perf-profile.self.cycles-pp.__virt_addr_valid
0.23 ± 4% +0.3 0.57 perf-profile.self.cycles-pp.siphash_3u32
0.17 ± 4% +0.4 0.52 ± 10% perf-profile.self.cycles-pp.__dev_queue_xmit
0.35 +0.4 0.71 perf-profile.self.cycles-pp.native_write_msr
0.18 ± 8% +0.4 0.54 ± 9% perf-profile.self.cycles-pp.enqueue_entity
0.23 ± 4% +0.4 0.60 ± 2% perf-profile.self.cycles-pp.__sys_sendto
0.29 ± 5% +0.4 0.67 ± 2% perf-profile.self.cycles-pp.loopback_xmit
0.28 ± 4% +0.4 0.68 ± 2% perf-profile.self.cycles-pp.__check_object_size
0.19 ± 7% +0.4 0.59 ± 2% perf-profile.self.cycles-pp.dequeue_task_fair
0.34 ± 4% +0.4 0.75 ± 2% perf-profile.self.cycles-pp.__skb_recv_udp
0.73 +0.4 1.15 perf-profile.self.cycles-pp.__switch_to
0.20 ± 4% +0.4 0.62 perf-profile.self.cycles-pp.__get_user_4
0.25 ± 2% +0.4 0.69 perf-profile.self.cycles-pp.ipv4_mtu
0.30 ± 4% +0.5 0.75 ± 3% perf-profile.self.cycles-pp.account_entity_dequeue
0.33 ± 2% +0.5 0.79 ± 2% perf-profile.self.cycles-pp.__update_load_avg_se
0.17 ± 8% +0.5 0.64 ± 2% perf-profile.self.cycles-pp.__skb_wait_for_more_packets
0.28 ± 4% +0.5 0.74 perf-profile.self.cycles-pp.___might_sleep
0.30 ± 2% +0.5 0.78 ± 2% perf-profile.self.cycles-pp.__switch_to_asm
5.62 +0.5 6.12 perf-profile.self.cycles-pp.do_syscall_64
0.26 ± 2% +0.5 0.77 ± 5% perf-profile.self.cycles-pp.enqueue_task_fair
0.30 ± 6% +0.5 0.82 ± 2% perf-profile.self.cycles-pp.aa_sk_perm
0.29 ± 4% +0.5 0.83 ± 2% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.79 ± 3% +0.6 1.36 ± 3% perf-profile.self.cycles-pp.available_idle_cpu
0.27 ± 3% +0.6 0.86 perf-profile.self.cycles-pp.__fget_light
0.36 ± 5% +0.6 0.96 ± 2% perf-profile.self.cycles-pp.__calc_delta
0.34 ± 4% +0.6 0.95 ± 3% perf-profile.self.cycles-pp.update_cfs_group
0.39 ± 3% +0.6 1.01 ± 6% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.34 ± 3% +0.6 0.96 perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.56 ± 2% +0.6 1.19 perf-profile.self.cycles-pp.__ip_append_data
0.48 ± 6% +0.7 1.14 ± 6% perf-profile.self.cycles-pp.udp4_lib_lookup2
0.46 ± 2% +0.7 1.19 ± 3% perf-profile.self.cycles-pp.update_curr
0.44 ± 3% +0.7 1.18 ± 5% perf-profile.self.cycles-pp.fib_table_lookup
0.87 +0.8 1.62 ± 2% perf-profile.self.cycles-pp.__sched_text_start
0.40 ± 4% +0.8 1.18 ± 3% perf-profile.self.cycles-pp.udp_recvmsg
0.80 ± 2% +0.8 1.59 ± 2% perf-profile.self.cycles-pp.switch_fpu_return
0.60 +0.8 1.40 ± 3% perf-profile.self.cycles-pp.update_load_avg
0.41 ± 6% +0.9 1.27 ± 2% perf-profile.self.cycles-pp.udp_sendmsg
0.77 ± 2% +1.0 1.72 ± 2% perf-profile.self.cycles-pp.select_task_rq_fair
0.66 +1.2 1.85 perf-profile.self.cycles-pp.load_new_mm_cr3
1.24 +1.8 3.00 perf-profile.self.cycles-pp.entry_SYSCALL_64
1.29 ± 3% +2.4 3.65 perf-profile.self.cycles-pp.syscall_return_via_sysret
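As a rough sanity check on the perf-profile.self table above: the dominant entry, ip_idents_reserve, falls from 46.11% to 5.12% of self cycles, and the reported delta column should simply be the difference of those two shares. A minimal sketch of that arithmetic (values taken from the table; the -41.0 shown there is the same number at one decimal place):

```python
# Sanity-check the delta column of the perf-profile.self table:
# reported change = new share - old share, in percentage points.
old_share = 46.11  # % self cycles in ip_idents_reserve, parent commit
new_share = 5.12   # % self cycles with the commit under test
delta = round(new_share - old_share, 2)
print(delta)  # -40.99, printed as -41.0 in the table
```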
netperf.Throughput_tps
32000 +-------------------------------------------------------------------+
30000 |-+O O O O O O O O O O O O O O O O O O O O O O |
| |
28000 |-+ |
26000 |-+ |
24000 |-+ |
22000 |-+ |
| |
20000 |-+ |
18000 |-+ |
16000 |-+ |
14000 |-+ |
|.. .+.. .+.. .+.. |
12000 |-++. + +..+..+..+..+.+..+..+..+..+..+. .+. +..+..+.+..+..|
10000 +-------------------------------------------------------------------+
netperf.Throughput_total_tps
6.5e+06 +-----------------------------------------------------------------+
| O O O O O O O O O O O O O O O O O O O O O O |
6e+06 |-+ |
5.5e+06 |-+ |
| |
5e+06 |-+ |
4.5e+06 |-+ |
| |
4e+06 |-+ |
3.5e+06 |-+ |
| |
3e+06 |-+ |
2.5e+06 |..+..+. .+..+.. .+.. .+.. .+..+..+..+. .+. .|
| +. + +. + +..+..+. +..+..+..+.+. |
2e+06 +-----------------------------------------------------------------+
netperf.workload
2e+09 +-----------------------------------------------------------------+
| O O O O O O O O O O O O O O O O O O O O O O |
1.8e+09 |-+ |
| |
1.6e+09 |-+ |
| |
1.4e+09 |-+ |
| |
1.2e+09 |-+ |
| |
1e+09 |-+ |
| |
8e+08 |.. .+. .+..+.. .+.. .+.. .+. .+. .|
| +. +. + +..+..+.+. +. +..+..+. +..+..+..+.+. |
6e+08 +-----------------------------------------------------------------+
netperf.time.user_time
2600 +--------------------------------------------------------------------+
| O O |
2400 |-+O O O O O O O O O O O O |
| O O O O O O O O |
| |
2200 |-+ |
| |
2000 |-+ |
| |
1800 |-+ |
| |
| |
1600 |-.+.. |
|. +..+..+. .+..+..+..+..+..+..+.+..+..+.. .+..+..+..+. .+..+..|
1400 +--------------------------------------------------------------------+
netperf.time.system_time
14400 +-------------------------------------------------------------------+
| |
14200 |-+ +.. .+..+.. .+..+.. .+. |
|..+..+..+. .. +. +.+..+..+..+..+..+ +..+..+. +..+..|
14000 |-+ + |
| |
13800 |-+ |
| |
13600 |-+ |
| |
13400 |-+ |
| O |
13200 |-+O O O O O O O O O O O O O O O O O O |
| O O O |
13000 +-------------------------------------------------------------------+
netperf.time.voluntary_context_switches
1.6e+09 +-----------------------------------------------------------------+
| O O O |
1.4e+09 |-+ O O O O O O O |
1.2e+09 |-+O O O O O O O O O O O |
| O |
1e+09 |-+ |
| |
8e+08 |-+ |
| |
6e+08 |.. .+.+..+ .+.. .+..|
4e+08 |-++. : +. .+..+.+..+..+..+ +.. .+.+..+.. .+ |
| : + +..+. +. +. |
2e+08 |-+ : + |
| + |
0 +-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
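For a rough sense of scale, one can read approximate plateau values off the netperf.Throughput_tps chart above. The endpoints below are eyeballed from the ASCII plot, not exact figures from the report:

```python
# Rough throughput comparison from the netperf.Throughput_tps chart.
# Both values are approximate, read off the ASCII plot by eye.
baseline_tps = 12000  # approximate parent-commit plateau ([*] samples)
patched_tps = 30000   # approximate plateau with the commit ([O] samples)
gain = patched_tps / baseline_tps
print(f"~{gain:.1f}x throughput")  # roughly a 2.5x improvement
```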
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen