While installing the community edition of TDH, the install fails at the "start TOS Master" step. I found the post "TOS etcd-ca 相关证书脚本化续签 - Knowledge Base (transwarp.cn)" and followed its steps, but restarting the TOS Master still fails, and I can't tell where the problem is.
The error log is pasted below. Any help would be much appreciated, thanks!
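For context, after running the scripted renewal from the KB article I wanted to confirm the certificates are actually valid now. This is the kind of check I used, sketched here with a throwaway self-signed cert so it is self-contained; on the real node you would point `openssl x509` at the actual etcd/apiserver cert files (their paths depend on the TOS install and are not shown here):

```shell
# Generate a throwaway cert valid for 365 days, just so this sketch runs anywhere.
# On a real TOS node, skip this step and use the actual cert paths instead.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 365 -subj "/CN=demo" 2>/dev/null

# Print the validity window (notBefore / notAfter).
openssl x509 -in /tmp/demo.crt -noout -dates

# Exit 0 only if the cert is still valid 30 days (2592000 s) from now.
openssl x509 -in /tmp/demo.crt -noout -checkend 2592000 && echo "cert ok"
```

In my case the renewed certs looked valid by this check, yet the Master still fails to become healthy, which is why I suspect something else is wrong.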
2023-05-15T17:06:05.140 [Master] ========== Task 43 start to run. ==========
2023-05-15T17:06:05.141 [Master] Starting task local part ...
2023-05-15T17:06:05.145 [Master] Start handle role task...
2023-05-15T17:06:05.201 [Master] execute command: DirectiveDetail.SystemctlOp(action=EnableStart, service=kubelet, sleepSec=2)
2023-05-15T17:06:05.201 [Master] execute command: DirectiveDetail.SystemctlOp(action=EnableStart, service=haproxy, sleepSec=2)
2023-05-15T17:06:05.201 [Master] execute command: DirectiveDetail.RenderFileOp(templateType=FreeMarker, templatePath=tos-etcd.manifest, targetPath=/opt/kubernetes/manifests-multi/tos-etcd.manifest, mode=755, owner=null, group=null, opsTpl=false)
2023-05-15T17:06:05.201 [Master] rendering content of: /opt/kubernetes/manifests-multi/tos-etcd.manifest
2023-05-15T17:06:05.205 [Master] content of host tdh-03 file /opt/kubernetes/manifests-multi/tos-etcd.manifest generated
2023-05-15T17:06:05.205 [Master] Start executing [chmod 755 /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-etcd.manifest]
2023-05-15T17:06:05.210 [Master] Execute success.
2023-05-15T17:06:05.210 [Master] Start copy from [localhost] /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-etcd.manifest to [tdh-03] /var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-etcd.manifest
2023-05-15T17:06:05.216 [Master] Copy success.
2023-05-15T17:06:05.216 [Master] execute command: DirectiveDetail.RenderFileOp(templateType=FreeMarker, templatePath=tos-apiserver.manifest, targetPath=/opt/kubernetes/manifests-multi/tos-apiserver.manifest, mode=755, owner=null, group=null, opsTpl=false)
2023-05-15T17:06:05.216 [Master] rendering content of: /opt/kubernetes/manifests-multi/tos-apiserver.manifest
2023-05-15T17:06:05.221 [Master] content of host tdh-03 file /opt/kubernetes/manifests-multi/tos-apiserver.manifest generated
2023-05-15T17:06:05.221 [Master] Start executing [chmod 755 /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-apiserver.manifest]
2023-05-15T17:06:05.227 [Master] Execute success.
2023-05-15T17:06:05.227 [Master] Start copy from [localhost] /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-apiserver.manifest to [tdh-03] /var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-apiserver.manifest
2023-05-15T17:06:05.234 [Master] Copy success.
2023-05-15T17:06:05.234 [Master] execute command: DirectiveDetail.RenderFileOp(templateType=FreeMarker, templatePath=tos-controller.manifest, targetPath=/opt/kubernetes/manifests-multi/tos-controller.manifest, mode=755, owner=null, group=null, opsTpl=false)
2023-05-15T17:06:05.234 [Master] rendering content of: /opt/kubernetes/manifests-multi/tos-controller.manifest
2023-05-15T17:06:05.238 [Master] content of host tdh-03 file /opt/kubernetes/manifests-multi/tos-controller.manifest generated
2023-05-15T17:06:05.238 [Master] Start executing [chmod 755 /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-controller.manifest]
2023-05-15T17:06:05.243 [Master] Execute success.
2023-05-15T17:06:05.243 [Master] Start copy from [localhost] /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-controller.manifest to [tdh-03] /var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-controller.manifest
2023-05-15T17:06:05.248 [Master] Copy success.
2023-05-15T17:06:05.248 [Master] execute command: DirectiveDetail.RenderFileOp(templateType=FreeMarker, templatePath=tos-scheduler.manifest, targetPath=/opt/kubernetes/manifests-multi/tos-scheduler.manifest, mode=755, owner=null, group=null, opsTpl=false)
2023-05-15T17:06:05.248 [Master] rendering content of: /opt/kubernetes/manifests-multi/tos-scheduler.manifest
2023-05-15T17:06:05.250 [Master] content of host tdh-03 file /opt/kubernetes/manifests-multi/tos-scheduler.manifest generated
2023-05-15T17:06:05.250 [Master] Start executing [chmod 755 /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-scheduler.manifest]
2023-05-15T17:06:05.256 [Master] Execute success.
2023-05-15T17:06:05.256 [Master] Start copy from [localhost] /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-scheduler.manifest to [tdh-03] /var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-scheduler.manifest
2023-05-15T17:06:05.266 [Master] Copy success.
2023-05-15T17:06:05.266 [Master] Task local part ended.
2023-05-15T17:06:05.266 [Master] Starting task remote part ...
2023-05-15T17:06:05.272 [Agent] Execute command: systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
sleep 2
systemctl status kubelet
2023-05-15T17:06:07.479 [Agent] command output:
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2023-05-15 17:06:05 CST; 2s ago
Main PID: 5029 (kubelet)
CGroup: /system.slice/kubelet.service
├─5029 /opt/kubernetes/bin/kubelet --v=2 --hostname-override=tdh-03 --log-dir=/var/log/kubernetes --node-labels=master=true,worker=true --node-ip=192.168.86.204 --pod-infra-container-image=transwarp/pause:tos-2.1 --network-plugin=cni --eviction-hard= --bootstrap-kubeconfig=/srv/kubernetes/bootstrap.kubeconfig --feature-gates=SupportPodPidsLimit=false,SupportNodePidsLimit=false --kubeconfig=/srv/kubernetes/kubeconfig --config=/opt/kubernetes/kubelet-config.yaml
└─5067 /opt/cni/bin/tos
May 15 17:06:07 tdh-03 kubelet[5029]: I0515 17:06:07.423993 5029 kubelet.go:290] Adding pod path: /opt/kubernetes/manifests-multi
May 15 17:06:07 tdh-03 kubelet[5029]: I0515 17:06:07.424175 5029 file.go:68] Watching path "/opt/kubernetes/manifests-multi"
May 15 17:06:07 tdh-03 kubelet[5029]: I0515 17:06:07.424253 5029 kubelet.go:315] Watching apiserver
May 15 17:06:07 tdh-03 kubelet[5029]: E0515 17:06:07.429380 5029 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dtdh-03&limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused
May 15 17:06:07 tdh-03 kubelet[5029]: E0515 17:06:07.430387 5029 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Service: Get https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused
May 15 17:06:07 tdh-03 kubelet[5029]: E0515 17:06:07.430902 5029 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:462: Failed to list *v1.Node: Get https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dtdh-03&limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused
May 15 17:06:07 tdh-03 kubelet[5029]: I0515 17:06:07.435407 5029 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 15 17:06:07 tdh-03 kubelet[5029]: I0515 17:06:07.435500 5029 client.go:104] Start docker client with request timeout=2m0s
May 15 17:06:07 tdh-03 kubelet[5029]: W0515 17:06:07.443777 5029 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 15 17:06:07 tdh-03 kubelet[5029]: I0515 17:06:07.444012 5029 docker_service.go:240] Hairpin mode set to "hairpin-veth"
2023-05-15T17:06:07.481 [Agent] Execute command: systemctl daemon-reload
systemctl enable haproxy
systemctl restart haproxy
sleep 2
systemctl status haproxy
2023-05-15T17:06:09.835 [Agent] command output:
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2023-05-15 17:06:07 CST; 2s ago
Main PID: 5159 (haproxy-systemd)
CGroup: /system.slice/haproxy.service
├─5159 /usr/sbin/haproxy-systemd-wrapper -f /etc/tos/conf/haproxy.cfg -p /run/haproxy.pid
├─5161 /usr/sbin/haproxy -f /etc/tos/conf/haproxy.cfg -p /run/haproxy.pid -Ds
└─5162 /usr/sbin/haproxy -f /etc/tos/conf/haproxy.cfg -p /run/haproxy.pid -Ds
May 15 17:06:07 tdh-03 systemd[1]: Started HAProxy Load Balancer.
May 15 17:06:07 tdh-03 haproxy-systemd-wrapper[5159]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/tos/conf/haproxy.cfg -p /run/haproxy.pid -Ds
2023-05-15T17:06:09.836 [Agent] Execute command: umask 0022 && mkdir -p $(dirname "/opt/kubernetes/manifests-multi/tos-etcd.manifest") && mv -f "/var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-etcd.manifest" "/opt/kubernetes/manifests-multi/tos-etcd.manifest" && chmod 755 "/opt/kubernetes/manifests-multi/tos-etcd.manifest"
2023-05-15T17:06:09.861 [Agent] Execute command: umask 0022 && mkdir -p $(dirname "/opt/kubernetes/manifests-multi/tos-apiserver.manifest") && mv -f "/var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-apiserver.manifest" "/opt/kubernetes/manifests-multi/tos-apiserver.manifest" && chmod 755 "/opt/kubernetes/manifests-multi/tos-apiserver.manifest"
2023-05-15T17:06:09.886 [Agent] Execute command: umask 0022 && mkdir -p $(dirname "/opt/kubernetes/manifests-multi/tos-controller.manifest") && mv -f "/var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-controller.manifest" "/opt/kubernetes/manifests-multi/tos-controller.manifest" && chmod 755 "/opt/kubernetes/manifests-multi/tos-controller.manifest"
2023-05-15T17:06:09.911 [Agent] Execute command: umask 0022 && mkdir -p $(dirname "/opt/kubernetes/manifests-multi/tos-scheduler.manifest") && mv -f "/var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-scheduler.manifest" "/opt/kubernetes/manifests-multi/tos-scheduler.manifest" && chmod 755 "/opt/kubernetes/manifests-multi/tos-scheduler.manifest"
2023-05-15T17:06:09.945 [Master] Waiting TOS Master (TOS,tdh-03) to become Healthy within 600 s
2023-05-15T17:08:05.141 [Master] Task 43 timed out after 120000ms.
2023-05-15T17:08:05.143 [Master] The Task 43 run failed: java.util.concurrent.CancellationException
at java.util.concurrent.FutureTask.report(FutureTask.java:121)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.springframework.util.concurrent.ListenableFutureTask.done(ListenableFutureTask.java:83)
at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:384)
at java.util.concurrent.FutureTask.cancel(FutureTask.java:180)
at io.transwarp.manager.master.operation.execution.TaskDriver.lambda$submitTask$0(TaskDriver.java:75)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2023-05-15T18:00:00.558 [Master] ========== Task 43 start to run. ==========
2023-05-15T18:00:00.564 [Master] Starting task local part ...
2023-05-15T18:00:00.570 [Master] Start handle role task...
2023-05-15T18:00:00.693 [Master] execute command: DirectiveDetail.SystemctlOp(action=EnableStart, service=kubelet, sleepSec=2)
2023-05-15T18:00:00.693 [Master] execute command: DirectiveDetail.SystemctlOp(action=EnableStart, service=haproxy, sleepSec=2)
2023-05-15T18:00:00.694 [Master] execute command: DirectiveDetail.RenderFileOp(templateType=FreeMarker, templatePath=tos-etcd.manifest, targetPath=/opt/kubernetes/manifests-multi/tos-etcd.manifest, mode=755, owner=null, group=null, opsTpl=false)
2023-05-15T18:00:00.694 [Master] rendering content of: /opt/kubernetes/manifests-multi/tos-etcd.manifest
2023-05-15T18:00:00.707 [Master] content of host tdh-03 file /opt/kubernetes/manifests-multi/tos-etcd.manifest generated
2023-05-15T18:00:00.707 [Master] Start executing [chmod 755 /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-etcd.manifest]
2023-05-15T18:00:00.757 [Master] Execute success.
2023-05-15T18:00:00.757 [Master] Start copy from [localhost] /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-etcd.manifest to [tdh-03] /var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-etcd.manifest
2023-05-15T18:00:00.798 [Master] Copy success.
2023-05-15T18:00:00.798 [Master] execute command: DirectiveDetail.RenderFileOp(templateType=FreeMarker, templatePath=tos-apiserver.manifest, targetPath=/opt/kubernetes/manifests-multi/tos-apiserver.manifest, mode=755, owner=null, group=null, opsTpl=false)
2023-05-15T18:00:00.798 [Master] rendering content of: /opt/kubernetes/manifests-multi/tos-apiserver.manifest
2023-05-15T18:00:00.855 [Master] content of host tdh-03 file /opt/kubernetes/manifests-multi/tos-apiserver.manifest generated
2023-05-15T18:00:00.855 [Master] Start executing [chmod 755 /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-apiserver.manifest]
2023-05-15T18:00:00.869 [Master] Execute success.
2023-05-15T18:00:00.869 [Master] Start copy from [localhost] /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-apiserver.manifest to [tdh-03] /var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-apiserver.manifest
2023-05-15T18:00:00.880 [Master] Copy success.
2023-05-15T18:00:00.880 [Master] execute command: DirectiveDetail.RenderFileOp(templateType=FreeMarker, templatePath=tos-controller.manifest, targetPath=/opt/kubernetes/manifests-multi/tos-controller.manifest, mode=755, owner=null, group=null, opsTpl=false)
2023-05-15T18:00:00.880 [Master] rendering content of: /opt/kubernetes/manifests-multi/tos-controller.manifest
2023-05-15T18:00:00.885 [Master] content of host tdh-03 file /opt/kubernetes/manifests-multi/tos-controller.manifest generated
2023-05-15T18:00:00.885 [Master] Start executing [chmod 755 /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-controller.manifest]
2023-05-15T18:00:00.897 [Master] Execute success.
2023-05-15T18:00:00.897 [Master] Start copy from [localhost] /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-controller.manifest to [tdh-03] /var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-controller.manifest
2023-05-15T18:00:00.919 [Master] Copy success.
2023-05-15T18:00:00.919 [Master] execute command: DirectiveDetail.RenderFileOp(templateType=FreeMarker, templatePath=tos-scheduler.manifest, targetPath=/opt/kubernetes/manifests-multi/tos-scheduler.manifest, mode=755, owner=null, group=null, opsTpl=false)
2023-05-15T18:00:00.919 [Master] rendering content of: /opt/kubernetes/manifests-multi/tos-scheduler.manifest
2023-05-15T18:00:00.928 [Master] content of host tdh-03 file /opt/kubernetes/manifests-multi/tos-scheduler.manifest generated
2023-05-15T18:00:00.928 [Master] Start executing [chmod 755 /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-scheduler.manifest]
2023-05-15T18:00:00.941 [Master] Execute success.
2023-05-15T18:00:00.941 [Master] Start copy from [localhost] /var/lib/transwarp-manager/master/content/resources/nodes/tdh-03/@opt@kubernetes@manifests-multi@tos-scheduler.manifest to [tdh-03] /var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-scheduler.manifest
2023-05-15T18:00:00.948 [Master] Copy success.
2023-05-15T18:00:00.948 [Master] Task local part ended.
2023-05-15T18:00:00.948 [Master] Starting task remote part ...
2023-05-15T18:00:00.956 [Agent] Execute command: systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
sleep 2
systemctl status kubelet
2023-05-15T18:00:03.398 [Agent] command output:
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2023-05-15 18:00:01 CST; 2s ago
Main PID: 57590 (kubelet)
CGroup: /system.slice/kubelet.service
└─57590 /opt/kubernetes/bin/kubelet --v=2 --hostname-override=tdh-03 --log-dir=/var/log/kubernetes --node-labels=master=true,worker=true --node-ip=192.168.86.204 --pod-infra-container-image=transwarp/pause:tos-2.1 --network-plugin=cni --eviction-hard= --bootstrap-kubeconfig=/srv/kubernetes/bootstrap.kubeconfig --feature-gates=SupportPodPidsLimit=false,SupportNodePidsLimit=false --kubeconfig=/srv/kubernetes/kubeconfig --config=/opt/kubernetes/kubelet-config.yaml
May 15 18:00:02 tdh-03 kubelet[57590]: I0515 18:00:02.233866 57590 remote_image.go:50] scheme "" not registered, fallback to default scheme
May 15 18:00:02 tdh-03 kubelet[57590]: I0515 18:00:02.233958 57590 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 <nil>}] <nil>}
May 15 18:00:02 tdh-03 kubelet[57590]: I0515 18:00:02.233991 57590 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 18:00:02 tdh-03 kubelet[57590]: I0515 18:00:02.234380 57590 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000890de0, CONNECTING
May 15 18:00:02 tdh-03 kubelet[57590]: I0515 18:00:02.235216 57590 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000a24010, CONNECTING
May 15 18:00:02 tdh-03 kubelet[57590]: I0515 18:00:02.235339 57590 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000890de0, READY
May 15 18:00:02 tdh-03 kubelet[57590]: I0515 18:00:02.236106 57590 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000a24010, READY
May 15 18:00:03 tdh-03 kubelet[57590]: E0515 18:00:03.013368 57590 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Service: Get https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0: EOF
May 15 18:00:03 tdh-03 kubelet[57590]: E0515 18:00:03.014717 57590 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:462: Failed to list *v1.Node: Get https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dtdh-03&limit=500&resourceVersion=0: EOF
May 15 18:00:03 tdh-03 kubelet[57590]: E0515 18:00:03.018008 57590 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dtdh-03&limit=500&resourceVersion=0: EOF
2023-05-15T18:00:03.399 [Agent] Execute command: systemctl daemon-reload
systemctl enable haproxy
systemctl restart haproxy
sleep 2
systemctl status haproxy
2023-05-15T18:00:05.802 [Agent] command output:
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2023-05-15 18:00:03 CST; 2s ago
Main PID: 57751 (haproxy-systemd)
CGroup: /system.slice/haproxy.service
├─57751 /usr/sbin/haproxy-systemd-wrapper -f /etc/tos/conf/haproxy.cfg -p /run/haproxy.pid
├─57753 /usr/sbin/haproxy -f /etc/tos/conf/haproxy.cfg -p /run/haproxy.pid -Ds
└─57754 /usr/sbin/haproxy -f /etc/tos/conf/haproxy.cfg -p /run/haproxy.pid -Ds
May 15 18:00:03 tdh-03 systemd[1]: Started HAProxy Load Balancer.
May 15 18:00:03 tdh-03 haproxy-systemd-wrapper[57751]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/tos/conf/haproxy.cfg -p /run/haproxy.pid -Ds
2023-05-15T18:00:05.803 [Agent] Execute command: umask 0022 && mkdir -p $(dirname "/opt/kubernetes/manifests-multi/tos-etcd.manifest") && mv -f "/var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-etcd.manifest" "/opt/kubernetes/manifests-multi/tos-etcd.manifest" && chmod 755 "/opt/kubernetes/manifests-multi/tos-etcd.manifest"
2023-05-15T18:00:05.837 [Agent] Execute command: umask 0022 && mkdir -p $(dirname "/opt/kubernetes/manifests-multi/tos-apiserver.manifest") && mv -f "/var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-apiserver.manifest" "/opt/kubernetes/manifests-multi/tos-apiserver.manifest" && chmod 755 "/opt/kubernetes/manifests-multi/tos-apiserver.manifest"
2023-05-15T18:00:05.869 [Agent] Execute command: umask 0022 && mkdir -p $(dirname "/opt/kubernetes/manifests-multi/tos-controller.manifest") && mv -f "/var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-controller.manifest" "/opt/kubernetes/manifests-multi/tos-controller.manifest" && chmod 755 "/opt/kubernetes/manifests-multi/tos-controller.manifest"
2023-05-15T18:00:05.898 [Agent] Execute command: umask 0022 && mkdir -p $(dirname "/opt/kubernetes/manifests-multi/tos-scheduler.manifest") && mv -f "/var/lib/transwarp-manager/agent/resource-tmp/@opt@kubernetes@manifests-multi@tos-scheduler.manifest" "/opt/kubernetes/manifests-multi/tos-scheduler.manifest" && chmod 755 "/opt/kubernetes/manifests-multi/tos-scheduler.manifest"
2023-05-15T18:00:05.931 [Master] Waiting TOS Master (TOS,tdh-03) to become Healthy within 600 s
2023-05-15T18:02:00.558 [Master] Task 43 timed out after 120000ms.
2023-05-15T18:02:00.561 [Master] The Task 43 run failed: java.util.concurrent.CancellationException
at java.util.concurrent.FutureTask.report(FutureTask.java:121)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.springframework.util.concurrent.ListenableFutureTask.done(ListenableFutureTask.java:83)
at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:384)
at java.util.concurrent.FutureTask.cancel(FutureTask.java:180)
at io.transwarp.manager.master.operation.execution.TaskDriver.lambda$submitTask$0(TaskDriver.java:75)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)