docker exit reports some errors
by Zhi Yong Wu
Hi,
Has anyone hit this issue? The container also takes a long time to exit.
[root@192 ~]# docker run --runtime=cor -it --rm --net=mynet busybox /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc fq qlen 1000
link/ether be:e3:ef:ff:0e:1d brd ff:ff:ff:ff:ff:ff
inet 172.19.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::bce3:efff:feff:e1d/64 scope link tentative
valid_lft forever preferred_lft forever
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1
link/sit 0.0.0.0 brd 0.0.0.0
/ # ping 172.19.0.2
PING 172.19.0.2 (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: seq=0 ttl=64 time=1.708 ms
64 bytes from 172.19.0.2: seq=1 ttl=64 time=0.532 ms
64 bytes from 172.19.0.2: seq=2 ttl=64 time=0.529 ms
^C
--- 172.19.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.529/0.923/1.708 ms
/ # exit
^C
Error response from daemon: Driver devicemapper failed to remove root filesystem 94f3f52c76ae25471086398a7191b414545a72a64917800cd29c3fff91b5eb14: Device is Busy
[root@192 ~]#
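The "Device is Busy" failure usually means some process, often outside the container, still holds the devicemapper mount in its mount namespace. A minimal sketch of how to hunt for the holder, using the id from the error message above; the live grep over /proc/*/mountinfo needs root, so it is commented out and the same match is demonstrated against a sample mountinfo line instead:

```shell
# Container rootfs id taken from the "failed to remove root filesystem" error.
id=94f3f52c76ae25471086398a7191b414545a72a64917800cd29c3fff91b5eb14

# Live check (run as root on the affected host):
# grep -l "$id" /proc/[0-9]*/mountinfo

# Demo of the same match against a sample mountinfo line (illustrative only):
sample="605 39 253:4 / /var/lib/docker/devicemapper/mnt/$id rw,relatime - ext4 /dev/mapper/docker-thinpool rw"
matches=$(echo "$sample" | grep -c "$id")
echo "$matches"   # a non-zero count means this mount namespace still references the device
```

Any PID whose mountinfo matches is keeping the device busy; killing or restarting it typically lets the removal complete.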
Regards,
Zhi Yong Wu
3 years, 9 months
are key resources configurable for cc 3.0?
by Zhi Yong Wu
Hi,
-m 256M,slots=2,maxmem=512M -smp 24,cores=24,threads=1,sockets=1
Can the number of cores or the amount of memory be configured? 24 cores seems too large.
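For reference, the values the runtime actually chose can be read back from the QEMU command line of a running container (as in the `ps -ef | grep qemu` output posted elsewhere on this list). A small sketch that pulls the two fields out of such a command line; the `cmdline` value here is copied from the flags quoted above:

```shell
# Extract the memory (-m) and vCPU topology (-smp) flags from a QEMU command line.
cmdline='-m 256M,slots=2,maxmem=512M -smp 24,cores=24,threads=1,sockets=1'

mem=$(echo "$cmdline" | grep -o '\-m [^ ]*' | head -n 1)
smp=$(echo "$cmdline" | grep -o '\-smp [^ ]*')

echo "$mem"   # -m 256M,slots=2,maxmem=512M
echo "$smp"   # -smp 24,cores=24,threads=1,sockets=1
```

On a live host, replace the hard-coded `cmdline` with the output of `ps -ef | grep [q]emu-system`.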
Regards,
Zhi Yong Wu
3 years, 9 months
cc 3.0 issue
by Zhi Yong Wu
Hi,
While playing with cc 3.0, I hit the following issue:
[root@192 ovsdpdk]# sudo docker run -itd busybox /bin/sh
ba02daa5a2db867be8381493da83381906ba287c6176f50e4ca642dd62b2e93e
[root@192 ovsdpdk]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba02daa5a2db busybox "/bin/sh" 3 seconds ago Up 1 seconds lonely_kirch
[root@192 ovsdpdk]# ps -ef | grep qemu
root 15450 15427 7 00:14 ? 00:00:00 /usr/local/bin/qemu-system-x86_64 -name pod-ba02daa5a2db867be8381493da83381906ba287c6176f50e4ca642dd62b2e93e -uuid 62613032-6461-6135-6132-646238363762 -machine pc-lite,accel=kvm,kernel_irqchip,nvdimm -cpu host -qmp unix:/run/virtcontainers/pods/ba02daa5a2db867be8381493da83381906ba287c6176f50e4ca642dd62b2e93e/monitor.sock,server,nowait -qmp unix:/run/virtcontainers/pods/ba02daa5a2db867be8381493da83381906ba287c6176f50e4ca642dd62b2e93e/ctrl.sock,server,nowait -m 2G,slots=2,maxmem=3G -smp 24,cores=24,threads=1,sockets=1 -device virtio-9p-pci,fsdev=ctr-9p-0,mount_tag=ctr-rootfs-0 -fsdev local,id=ctr-9p-0,path=/var/lib/docker/devicemapper/mnt/791b4471711b3590ecc4b09b8201aa87d255f9ccd1cc70b491844f6d9bc7c17e/rootfs,security_model=none -device virtio-serial-pci,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/virtcontainers/pods/ba02daa5a2db867be8381493da83381906ba287c6176f50e4ca642dd62b2e93e/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/usr/share/clear-containers/clear-containers.img,size=235929600 -device virtserialport,chardev=charch0,id=channel0,name=sh.hyper.channel.0 -chardev socket,id=charch0,path=/tmp/hyper-pod-ba02daa5a2db867be8381493da83381906ba287c6176f50e4ca642dd62b2e93e.sock,server,nowait -device virtserialport,chardev=charch1,id=channel1,name=sh.hyper.channel.1 -chardev socket,id=charch1,path=/tmp/tty-podba02daa5a2db867be8381493da83381906ba287c6176f50e4ca642dd62b2e93e.sock,server,nowait -device virtio-9p-pci,fsdev=extra-9p-hyperShared,mount_tag=hyperShared -fsdev local,id=extra-9p-hyperShared,path=/tmp/hyper/shared/pods/ba02daa5a2db867be8381493da83381906ba287c6176f50e4ca642dd62b2e93e,security_model=none -device virtio-net-pci,netdev=network-0,mac=02:42:ac:11:00:02 -netdev tap,id=network-0,ifname=tap0,downscript=no,script=no -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -vga none 
-no-user-config -nodefaults -nographic -daemonize -kernel /usr/share/clear-containers/vmlinux.container -append root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro rw rootfstype=ext4 tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k panic=1 console=hvc0 console=hvc1 initcall_debug init=/usr/lib/systemd/systemd systemd.unit=cc-agent.target iommu=off systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket cryptomgr.notests net.ifnames=0 quiet systemd.show_status=false
root 15500 13490 0 00:14 pts/1 00:00:00 grep --color=auto qemu
[root@192 ovsdpdk]# docker exec -it ba02daa5a2db /bin/sh
rpc error: code = 2 desc = oci runtime error: json: cannot unmarshal array into Go struct field Process.capabilities of type specs.LinuxCapabilities
[root@192 ovsdpdk]#
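The unmarshal error looks like a spec-version mismatch: OCI runtime-spec v1.0.0-rc5 changed `Process.capabilities` from a plain JSON array to a structured object (`specs.LinuxCapabilities`), so a runtime built against the newer spec rejects the array form that an older Docker sends. A sketch that distinguishes the two shapes; the JSON fragments are illustrative samples, not taken from a real bundle:

```shell
# Classify the shape of the `capabilities` field in an OCI config fragment.
shape() {
  case "$1" in
    *'"capabilities": ['*) echo array ;;   # pre-rc5 form (what Docker sent here)
    *'"capabilities": {'*) echo object ;;  # rc5+ form (what the runtime expects)
    *) echo missing ;;
  esac
}

old='"capabilities": ["CAP_CHOWN", "CAP_NET_RAW"]'
new='"capabilities": {"bounding": ["CAP_CHOWN"], "effective": ["CAP_CHOWN"]}'

shape "$old"   # array
shape "$new"   # object
```

The usual fix is to pair the runtime with the Docker version the Clear Containers 3.0 installation docs list for it.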
Regards,
Zhi Yong Wu
3 years, 9 months
Unable to bring up Kubernetes master
by Li, Horace
Hi, All,
I was following the wiki guidance, https://github.com/01org/cc-oci-runtime/wiki/Clear-Containers-and-Kubernetes, to set up Clear Containers 2.1.4 with Kubernetes 1.6.3, and got stuck at the step of bringing up the Kubernetes master. When executing `sudo ~/k8s-bring-up.sh`, it stops at "waiting for the control plane to become ready".
What might be the cause of the issue?
The full log is below:
horaceli@horaceli-workstation:~$ sudo ~/k8s-bring-up.sh
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.3
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [horaceli-workstation kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.239.159.165]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
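When kubeadm hangs at this step, the kubelet is usually failing to start the control-plane static pods. A hedged sketch of first checks to run as root on the master; these are generic kubelet/Docker diagnostics, not steps from the wiki page, and the live-only commands are commented so the sketch runs anywhere:

```shell
# Count the static pod manifests kubeadm wrote. Zero means kubeadm never got
# that far; non-zero points the blame at the kubelet or container runtime.
n=$(ls /etc/kubernetes/manifests 2>/dev/null | wc -l)
echo "$n static pod manifests found"

# Live-only checks:
# journalctl -u kubelet --no-pager -n 50    # kubelet startup errors
# docker ps -a | grep -E 'apiserver|etcd'   # did the control-plane containers start?
```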
Thanks,
Horace
3 years, 9 months
any guide for CC 2.1 + OVS?
by Zhi Yong Wu
Hi,
Does anyone have a guide for cc 2.1 + OVS (plain OVS, not OVS-DPDK)? If so, it would be appreciated, thanks. When trying cc 2.1 + OVS, I find that the network is not available between two cc containers.
Regards,
Zhi Yong Wu
3 years, 9 months
Clear Containers 3.0 owners and tech lead
by Samuel Ortiz
All,
I sent a few PRs to make the new OWNERS-based maintenance model a
little more formal. As described in the CONTRIBUTING file,
maintainership for each of our 3.0 repos [1] is split between reviewers and
approvers. The latter group is a subset of the former and has the final
word on merging a PR. A PR must be LGTM'ed by one member of each group
before being merged.
On the technical leadership front, Damien Lespiau (@dlespiau) will be the
project technical lead, at least until we reach our first 3.0.0 release [2].
Please let us know if you have any questions/concerns.
Cheers,
Samuel.
[1] https://github.com/clearcontainers
[2] https://github.com/orgs/clearcontainers/projects/1
3 years, 10 months