TOS startup error
2022-06-27T20:23:43.019 [Master] ========== Task 291 start to run. ==========
2022-06-27T20:23:43.021 [Master] Starting task local part ...
2022-06-27T20:23:43.117 [Master] Start handle role task...
2022-06-27T20:23:44.860 [Master] execute command: DirectiveDetail.SystemctlOp(action=EnableStart, service=kubelet, sleepSec=2)
2022-06-27T20:23:44.860 [Master] execute command: DirectiveDetail.SystemctlOp(action=EnableStart, service=haproxy, sleepSec=2)
2022-06-27T20:23:44.860 [Master] execute command: DirectiveDetail.RenderFileOp(templateType=FreeMarker, templatePath=tos-etcd.manifest, targetPath=/opt/kubernetes/manifests-multi/tos-etcd.manifest, mode=755, owner=null, group=null, opsTpl=false)
2022-06-27T20:23:44.860 [Master] rendering content of: /opt/kubernetes/manifests-multi/tos-etcd.manifest
2022-06-27T20:23:44.924 [Master] content of host tdhc-node01 file /opt/kubernetes/manifests-multi/tos-etcd.manifest generated
2022-06-27T20:23:44.924 [Master] Start executing [chmod 755 /var/lib/transwarp-manager/master/content/resources/nodes/tdhc-node01/@opt@kubernetes@manifests-multi@tos-etcd.manifest]
2022-06-27T20:23:45.183 [Master] Execute success.
2022-06-27T20:23:45.184 [Master] Start copy from [localhost] /var/lib/transwarp-manager/master/content/resources/nodes/tdhc-node01/@opt@kubernetes@manifests-multi@tos-etcd.manifest to [tdhc-node01] /var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-etcd.manifest
2022-06-27T20:23:45.387 [Master] Copy success.
2022-06-27T20:23:45.387 [Master] execute command: DirectiveDetail.RenderFileOp(templateType=FreeMarker, templatePath=tos-apiserver.manifest, targetPath=/opt/kubernetes/manifests-multi/tos-apiserver.manifest, mode=755, owner=null, group=null, opsTpl=false)
2022-06-27T20:23:45.387 [Master] rendering content of: /opt/kubernetes/manifests-multi/tos-apiserver.manifest
2022-06-27T20:23:45.412 [Master] content of host tdhc-node01 file /opt/kubernetes/manifests-multi/tos-apiserver.manifest generated
2022-06-27T20:23:45.412 [Master] Start executing [chmod 755 /var/lib/transwarp-manager/master/content/resources/nodes/tdhc-node01/@opt@kubernetes@manifests-multi@tos-apiserver.manifest]
2022-06-27T20:23:45.622 [Master] Execute success.
2022-06-27T20:23:45.622 [Master] Start copy from [localhost] /var/lib/transwarp-manager/master/content/resources/nodes/tdhc-node01/@opt@kubernetes@manifests-multi@tos-apiserver.manifest to [tdhc-node01] /var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-apiserver.manifest
2022-06-27T20:23:45.679 [Master] Copy success.
2022-06-27T20:23:45.681 [Master] execute command: DirectiveDetail.RenderFileOp(templateType=FreeMarker, templatePath=tos-controller.manifest, targetPath=/opt/kubernetes/manifests-multi/tos-controller.manifest, mode=755, owner=null, group=null, opsTpl=false)
2022-06-27T20:23:45.682 [Master] rendering content of: /opt/kubernetes/manifests-multi/tos-controller.manifest
2022-06-27T20:23:45.687 [Master] content of host tdhc-node01 file /opt/kubernetes/manifests-multi/tos-controller.manifest generated
2022-06-27T20:23:45.687 [Master] Start executing [chmod 755 /var/lib/transwarp-manager/master/content/resources/nodes/tdhc-node01/@opt@kubernetes@manifests-multi@tos-controller.manifest]
2022-06-27T20:23:45.951 [Master] Execute success.
2022-06-27T20:23:45.951 [Master] Start copy from [localhost] /var/lib/transwarp-manager/master/content/resources/nodes/tdhc-node01/@opt@kubernetes@manifests-multi@tos-controller.manifest to [tdhc-node01] /var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-controller.manifest
2022-06-27T20:23:45.991 [Master] Copy success.
2022-06-27T20:23:45.992 [Master] execute command: DirectiveDetail.RenderFileOp(templateType=FreeMarker, templatePath=tos-scheduler.manifest, targetPath=/opt/kubernetes/manifests-multi/tos-scheduler.manifest, mode=755, owner=null, group=null, opsTpl=false)
2022-06-27T20:23:45.992 [Master] rendering content of: /opt/kubernetes/manifests-multi/tos-scheduler.manifest
2022-06-27T20:23:46.093 [Master] content of host tdhc-node01 file /opt/kubernetes/manifests-multi/tos-scheduler.manifest generated
2022-06-27T20:23:46.094 [Master] Start executing [chmod 755 /var/lib/transwarp-manager/master/content/resources/nodes/tdhc-node01/@opt@kubernetes@manifests-multi@tos-scheduler.manifest]
2022-06-27T20:23:46.545 [Master] Execute success.
2022-06-27T20:23:46.545 [Master] Start copy from [localhost] /var/lib/transwarp-manager/master/content/resources/nodes/tdhc-node01/@opt@kubernetes@manifests-multi@tos-scheduler.manifest to [tdhc-node01] /var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-scheduler.manifest
2022-06-27T20:23:46.913 [Master] Copy success.
2022-06-27T20:23:46.913 [Master] Task local part ended.
2022-06-27T20:23:46.913 [Master] Starting task remote part ...
2022-06-27T20:23:47.181 [Agent] Execute command: systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
sleep 2
systemctl status kubelet
2022-06-27T20:23:55.838 [Agent] command output:
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2022-06-27 20:23:53 CST; 2s ago
Main PID: 173943 (kubelet)
CGroup: /system.slice/kubelet.service
└─173943 /opt/kubernetes/bin/kubelet --v=2 --hostname-override=tdhc-node01 --log-dir=/var/log/kubernetes --node-labels=master=true,worker=true --node-ip=192.168.111.133 --pod-infra-container-image=transwarp/pause:tos-2.1 --network-plugin=cni --eviction-hard= --bootstrap-kubeconfig=/srv/kubernetes/bootstrap.kubeconfig --feature-gates=SupportPodPidsLimit=false,SupportNodePidsLimit=false --kubeconfig=/srv/kubernetes/kubeconfig --config=/opt/kubernetes/kubelet-config.yaml
Jun 27 20:23:55 tdhc-node01 kubelet[173943]: I0627 20:23:55.036983 173943 flags.go:33] FLAG: --system-reserved-cgroup=""
Jun 27 20:23:55 tdhc-node01 kubelet[173943]: I0627 20:23:55.037001 173943 flags.go:33] FLAG: --tls-cert-file=""
Jun 27 20:23:55 tdhc-node01 kubelet[173943]: I0627 20:23:55.037018 173943 flags.go:33] FLAG: --tls-cipher-suites="[]"
Jun 27 20:23:55 tdhc-node01 kubelet[173943]: I0627 20:23:55.037050 173943 flags.go:33] FLAG: --tls-min-version=""
Jun 27 20:23:55 tdhc-node01 kubelet[173943]: I0627 20:23:55.037066 173943 flags.go:33] FLAG: --tls-private-key-file=""
Jun 27 20:23:55 tdhc-node01 kubelet[173943]: I0627 20:23:55.037083 173943 flags.go:33] FLAG: --topology-manager-policy="none"
Jun 27 20:23:55 tdhc-node01 kubelet[173943]: I0627 20:23:55.037101 173943 flags.go:33] FLAG: --v="2"
Jun 27 20:23:55 tdhc-node01 kubelet[173943]: I0627 20:23:55.037125 173943 flags.go:33] FLAG: --version="false"
Jun 27 20:23:55 tdhc-node01 kubelet[173943]: I0627 20:23:55.037173 173943 flags.go:33] FLAG: --vmodule=""
Jun 27 20:23:55 tdhc-node01 kubelet[173943]: I0627 20:23:55.037193 173943 flags.go:33] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Jun 27 20:23:55 tdhc-node01 kubelet[173943]: I0627 20:23:55.037217 173943 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
2022-06-27T20:23:55.853 [Agent] Execute command: systemctl daemon-reload
systemctl enable haproxy
systemctl restart haproxy
sleep 2
systemctl status haproxy
2022-06-27T20:24:00.252 [Agent] command output:
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2022-06-27 20:23:58 CST; 2s ago
Main PID: 174179 (haproxy-systemd)
CGroup: /system.slice/haproxy.service
├─174179 /usr/sbin/haproxy-systemd-wrapper -f /etc/tos/conf/haproxy.cfg -p /run/haproxy.pid
├─174184 /usr/sbin/haproxy -f /etc/tos/conf/haproxy.cfg -p /run/haproxy.pid -Ds
└─174187 /usr/sbin/haproxy -f /etc/tos/conf/haproxy.cfg -p /run/haproxy.pid -Ds
Jun 27 20:23:58 tdhc-node01 systemd[1]: Started HAProxy Load Balancer.
Jun 27 20:23:58 tdhc-node01 haproxy-systemd-wrapper[174179]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/tos/conf/haproxy.cfg -p /run/haproxy.pid -Ds
2022-06-27T20:24:00.261 [Agent] Execute command: umask 0022 && mkdir -p $(dirname "/opt/kubernetes/manifests-multi/tos-etcd.manifest") && mv -f "/var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-etcd.manifest" "/opt/kubernetes/manifests-multi/tos-etcd.manifest" && chmod 755 "/opt/kubernetes/manifests-multi/tos-etcd.manifest"
2022-06-27T20:24:00.933 [Agent] Execute command: umask 0022 && mkdir -p $(dirname "/opt/kubernetes/manifests-multi/tos-apiserver.manifest") && mv -f "/var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-apiserver.manifest" "/opt/kubernetes/manifests-multi/tos-apiserver.manifest" && chmod 755 "/opt/kubernetes/manifests-multi/tos-apiserver.manifest"
2022-06-27T20:24:01.001 [Agent] Execute command: umask 0022 && mkdir -p $(dirname "/opt/kubernetes/manifests-multi/tos-controller.manifest") && mv -f "/var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-controller.manifest" "/opt/kubernetes/manifests-multi/tos-controller.manifest" && chmod 755 "/opt/kubernetes/manifests-multi/tos-controller.manifest"
2022-06-27T20:24:01.063 [Agent] Execute command: umask 0022 && mkdir -p $(dirname "/opt/kubernetes/manifests-multi/tos-scheduler.manifest") && mv -f "/var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-scheduler.manifest" "/opt/kubernetes/manifests-multi/tos-scheduler.manifest" && chmod 755 "/opt/kubernetes/manifests-multi/tos-scheduler.manifest"
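The four agent commands above all follow the same staged-deploy pattern: the master renders the manifest, copies it to the agent's resource-tmp staging area under an @-escaped name that encodes the destination path, and the agent then moves it into place and sets the requested mode. A minimal sketch of that pattern, with hypothetical temp directories standing in for the real /var/lib/transwarp-manager and /opt/kubernetes/manifests-multi paths:

```shell
set -e

# Stand-ins for the real directories in the log (hypothetical temp paths).
STAGE=$(mktemp -d)                       # agent's resource-tmp staging area
TARGET_DIR=$(mktemp -d)/manifests-multi  # static-pod manifest directory

# 1. The master ships the rendered manifest to the staging area; the
#    @-escaped filename encodes where it will finally live.
printf 'apiVersion: v1\nkind: Pod\n' \
  > "$STAGE/@opt@kubernetes@manifests-multi@tos-etcd.manifest"

# 2. The agent creates the destination, moves the file into place, and
#    applies the mode requested by the RenderFileOp directive (755).
umask 0022
mkdir -p "$TARGET_DIR"
mv -f "$STAGE/@opt@kubernetes@manifests-multi@tos-etcd.manifest" \
      "$TARGET_DIR/tos-etcd.manifest"
chmod 755 "$TARGET_DIR/tos-etcd.manifest"
```

Staging first and then moving means a watcher of the manifest directory never sees a half-written file, provided the staging area and the target are on the same filesystem so that mv is a rename.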
2022-06-27T20:24:01.209 [Master] Waiting TOS Master (TOS,tdhc-node01) to become Healthy within 600 s
2022-06-27T20:31:50.164 [Master] Latest health check result of roles:
2022-06-27T20:31:50.193 [Master] VITAL_SIGN_CHECK DOWN at 2022-06-27T20:31:31.367
Connection check result of TOS Master tdhc-node01 was Down
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (35) Encountered end of file
2022-06-27T20:31:50.193 [Master] DAEMON_CHECK DOWN at 2022-06-27T20:31:20.862
TOS_MASTER is not running on tdhc-node01
etcd --data-d is running as 4dae2b0a1c0386215f1d1052fbaa6206fd4d5f13185a49a0e696369948025b74
kube-apiserver has no running container
kube-controller is running as 56be688532091da0ea3dbfb6391786bad2bbfa272fc0441e42315b70eccdfba2
kube-scheduler is running as 08ee46d215539f4b03f4c9a85d29339532943e9e835b7ab3d99376080eabdd1d
2022-06-27T20:31:50.194 [Master] Fail to run task remote part: java.lang.IllegalStateException: TOS Master (TOS,tdhc-node01) didn't become healthy within 180 s
at io.transwarp.manager.master.operation.execution.localrunner.AbstractTaskLocalRunner.waitRolesHealthy(AbstractTaskLocalRunner.java:577)
at io.transwarp.manager.master.operation.execution.localrunner.RoleTaskLocalRunner.postRemote(RoleTaskLocalRunner.java:253)
at io.transwarp.manager.master.operation.execution.TaskDriver$TaskRunHelper.runTask(TaskDriver.java:234)
at io.transwarp.manager.master.operation.execution.TaskDriver$TaskRunHelper$$FastClassBySpringCGLIB$$e353057f.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:783)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
at org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.awaitility.core.ConditionTimeoutException: still DOWN within 180 s
at io.transwarp.manager.master.operation.execution.localrunner.AbstractTaskLocalRunner.lambda$waitRolesHealthy$7(AbstractTaskLocalRunner.java:563)
at org.awaitility.core.CallableCondition$ConditionEvaluationWrapper.eval(CallableCondition.java:99)
at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:232)
at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:219)
... 4 more
2022-06-27T20:31:50.461 [Master] The Task 291 run failed: java.lang.RuntimeException: java.lang.IllegalStateException: TOS Master (TOS,tdhc-node01) didn't become healthy within 180 s
at io.transwarp.manager.master.operation.execution.TaskDriver$TaskRunHelper.runTask(TaskDriver.java:239)
at io.transwarp.manager.master.operation.execution.TaskDriver$TaskRunHelper$$FastClassBySpringCGLIB$$e353057f.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:783)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
at org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: TOS Master (TOS,tdhc-node01) didn't become healthy within 180 s
at io.transwarp.manager.master.operation.execution.localrunner.AbstractTaskLocalRunner.waitRolesHealthy(AbstractTaskLocalRunner.java:577)
at io.transwarp.manager.master.operation.execution.localrunner.RoleTaskLocalRunner.postRemote(RoleTaskLocalRunner.java:253)
at io.transwarp.manager.master.operation.execution.TaskDriver$TaskRunHelper.runTask(TaskDriver.java:234)
... 10 more
Caused by: org.awaitility.core.ConditionTimeoutException: still DOWN within 180 s
at io.transwarp.manager.master.operation.execution.localrunner.AbstractTaskLocalRunner.lambda$waitRolesHealthy$7(AbstractTaskLocalRunner.java:563)
at org.awaitility.core.CallableCondition$ConditionEvaluationWrapper.eval(CallableCondition.java:99)
at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:232)
at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:219)
... 4 more