When starting the zookeeper1 service, starting the server through TOS reports an error
2022-07-21 20:54:17
Wed Aug 01 22:51:37 PDT 2018 [Manager] Starting task local part …
target yaml path on [Manager] is /var/lib/transwarp-manager/master/content/resources/services/zookeeper1/zookeeper-server.yaml
start to generate zookeeper-server.yaml on [Manager]…
start handle local part, dataModel is
Map(transwarpRepo -> http://192.168.11.186:8180/pub/transwarp
, current.user -> root, service -> Map(keytab -> /etc/zookeeper1/conf/zookeeper.keytab, syncLimit -> 5, zoo_cfg -> Map(maxClientCnxns -> 0, tickTime -> 9000, initLimit -> 10, syncLimit -> 5), plugins -> List(), xinghuan-1 -> Map(zookeeper.leader.elect.port -> 3888, zookeeper.peer.communicate.port -> 2888), auth -> simple, zookeeper.jmxremote.port -> 9911, domain -> dc=tdh, zookeeper.container.requests.memory -> -1, zookeeper.client.port -> 2181, kdc -> Map(hostname -> xinghuan-3, port -> 1088), zookeeper.container.limits.cpu -> -1, id -> 3, maxClientCnxns -> 0, xinghuan-2 -> Map(zookeeper.leader.elect.port -> 3888, zookeeper.peer.communicate.port -> 2888), roles -> Map(ZOOKEEPER -> List(Map(id -> 12, hostname -> xinghuan-1, ip -> 192.168.11.186, toDecommission -> false), Map(id -> 13, hostname -> xinghuan-2, ip -> 192.168.11.187, toDecommission -> false), Map(id -> 14, hostname -> xinghuan-3, ip -> 192.168.11.188, toDecommission -> false))), masterPrincipal -> zookeeper, zookeeper.server.memory -> 1024, zookeeper.container.limits.memory -> -1, zookeeper.container.requests.cpu -> -1, zookeeper.memory.ratio -> -1, realm -> TDH, initLimit -> 10, tickTime -> 9000, xinghuan-3 -> Map(zookeeper.leader.elect.port -> 3888, zookeeper.peer.communicate.port -> 2888), sid -> zookeeper1), dependencies -> Map(TOS -> Map(keytab -> /etc/tos/conf/tos.keytab, tos.master.apiserver.secure.port -> 8553, tos.master.dashboard.username -> dashboard, plugins -> List(), tos.master.controller.port -> 10252, xinghuan-1 -> Map(tos.master.etcd.initial.cluster.state -> new), auth -> simple, domain -> dc=tdh, tos.slave.kubelet.port -> 10250, tos.master.etcd.heartbeat.interval -> 250, tos.master.apiserver.port -> 8080, kdc -> Map(hostname -> xinghuan-3, port -> 1088), id -> 1, tos.registry.ui.port -> 8081, xinghuan-2 -> Map(tos.master.etcd.initial.cluster.state -> new), roles -> Map(TOS_REGISTRY -> List(Map(id -> 5, hostname -> xinghuan-1, ip -> 192.168.11.186, toDecommission -> false)), TOS_MASTER -> List(Map(id -> 6, hostname -> xinghuan-1, ip -> 192.168.11.186, toDecommission -> false), Map(id -> 7, hostname -> xinghuan-2, ip -> 192.168.11.187, toDecommission -> false), Map(id -> 8, hostname -> xinghuan-3, ip -> 192.168.11.188, toDecommission -> false)), TOS_SLAVE -> List(Map(id -> 1, hostname -> xinghuan-1, ip -> 192.168.11.186, toDecommission -> false), Map(id -> 2, hostname -> xinghuan-2, ip -> 192.168.11.187, toDecommission -> false), Map(id -> 3, hostname -> xinghuan-3, ip -> 192.168.11.188, toDecommission -> false), Map(id -> 4, hostname -> xinghuan-4, ip -> 192.168.11.189, toDecommission -> false))), masterPrincipal -> , tos.master.dashboard.password -> password, realm -> TDH, tos.master.etcd.port -> 4001, tos.registry.port -> 5000, tos.slave.kubelet.healthzport -> 10248, tos.master.etcd.election.timeout -> 1250, xinghuan-3 -> Map(tos.master.etcd.initial.cluster.state -> new), tos.master.leader.elect.port -> 4002, sid -> tos, tos.master.scheduler.port -> 10251), LICENSE_SERVICE -> Map(keytab -> /etc/transwarp_license_cluster/conf/license_service.keytab, syncLimit -> 5, zoo_cfg -> Map(maxClientCnxns -> 0, tickTime -> 9000, initLimit -> 10, syncLimit -> 5), plugins -> List(), xinghuan-1 -> Map(zookeeper.client.port -> 2291, zookeeper.leader.elect.port -> 3988, zookeeper.peer.communicate.port -> 2988), auth -> simple, zookeeper.jmxremote.port -> 9922, domain -> dc=tdh, zookeeper.container.requests.memory -> -1, kdc -> Map(hostname -> xinghuan-3, port -> 1088), zookeeper.container.limits.cpu -> -1, id -> 2, 
maxClientCnxns -> 0, xinghuan-2 -> Map(zookeeper.client.port -> 2291, zookeeper.leader.elect.port -> 3988, zookeeper.peer.communicate.port -> 2988), roles -> Map(LICENSE_NODE -> List(Map(id -> 9, hostname -> xinghuan-1, ip -> 192.168.11.186, toDecommission -> false), Map(id -> 10, hostname -> xinghuan-2, ip -> 192.168.11.187, toDecommission -> false), Map(id -> 11, hostname -> xinghuan-3, ip -> 192.168.11.188, toDecommission -> false))), masterPrincipal -> , zookeeper.server.memory -> 256, zookeeper.container.limits.memory -> -1, zookeeper.container.requests.cpu -> -1, zookeeper.memory.ratio -> -1, realm -> TDH, initLimit -> 10, tickTime -> 9000, xinghuan-3 -> Map(zookeeper.client.port -> 2291, zookeeper.leader.elect.port -> 3988, zookeeper.peer.communicate.port -> 2988), sid -> transwarp_license_cluster), GUARDIAN -> Map(keytab -> /etc/guardian/conf/guardian.keytab, guardian.ds.root.password -> admin, guardian.server.kerberos.password -> YluyBqXsC6YN7IYqM8Uk, plugins -> List(), guardian.apacheds.ldap.port -> 10389, auth -> simple, domain -> dc=tdh, guardian.ds.realm -> TDH, guardian.server.port -> 8380, guardian.server.audit.level -> ADD,UPDATE,DELETE,LOGIN, guardian.apacheds.kdc.port -> 1088, kdc -> Map(hostname -> xinghuan-3, port -> 1088), guardian.apacheds.data.dir -> /guardian/data/, guardian.ds.ldap.tls.enabled -> false, id -> 23, guardian.client.cache.enabled -> true, roles -> Map(GUARDIAN_APACHEDS -> List(Map(id -> 114, hostname -> xinghuan-3, ip -> 192.168.11.188, toDecommission -> false), Map(id -> 115, hostname -> xinghuan-4, ip -> 192.168.11.189, toDecommission -> false)), GUARDIAN_SERVER -> List(Map(id -> 116, hostname -> xinghuan-3, ip -> 192.168.11.188, toDecommission -> false), Map(id -> 117, hostname -> xinghuan-4, ip -> 192.168.11.189, toDecommission -> false))), masterPrincipal -> guardian/guardian, guardian.admin.password -> admin, guardian.cache.repli.bind.port -> 7800, realm -> TDH, guardian.server.tls.enabled -> true, guardian.ds.domain -> dc=tdh, guardian.admin.username -> admin, guardian.server.audit.enabled -> true, sid -> guardian)))
tos registry hostname in Service(3,Some(1),ZOOKEEPER,None,transwarp-5.1.2-final,INSTALLED,zookeeper1,ZooKeeper1,KUBERNETES,false,true,true) 's dependencies is xinghuan-1
generated zookeeper-server.yaml on [Manager]
start to create role(s) on [Manager] using kubectl --server=https://127.0.0.1:6443 --certificate-authority=/srv/kubernetes/ca.crt --client-certificate=/srv/kubernetes/kubecfg.crt --client-key=/srv/kubernetes/kubecfg.key scale --replicas=3 -f /var/lib/transwarp-manager/master/content/resources/services/zookeeper1/zookeeper-server.yaml…
role(s) successfully created on [Manager]
Wed Aug 01 22:51:39 PDT 2018 [Manager] Task local part ended
Waiting ZOOKEEPERs in ZooKeeper1 to become Healthy within 600 seconds …
Latest health check result of roles:
DAEMON_CHECK HEALTHY at Wed Aug 01 23:01:36 PDT 2018
ZOOKEEPER on xinghuan-1 has Pod zookeeper-server-zookeeper1-2269979424-2f1sb with status Running
VITAL_SIGN_CHECK DOWN at Wed Aug 01 23:01:27 PDT 2018
Connection check result of ZooKeeper Server xinghuan-1 was Down: org.I0Itec.zkclient.exception.ZkInterruptedException: java.lang.InterruptedException
DAEMON_CHECK HEALTHY at Wed Aug 01 23:01:36 PDT 2018
ZOOKEEPER on xinghuan-2 has Pod zookeeper-server-zookeeper1-2269979424-m1v6w with status Running
VITAL_SIGN_CHECK DOWN at Wed Aug 01 23:01:27 PDT 2018
Connection check result of ZooKeeper Server xinghuan-2 was Down: java.lang.IllegalMonitorStateException
DAEMON_CHECK HEALTHY at Wed Aug 01 23:01:36 PDT 2018
ZOOKEEPER on xinghuan-3 has Pod zookeeper-server-zookeeper1-2269979424-mdc51 with status Running
VITAL_SIGN_CHECK HEALTHY at Wed Aug 01 23:01:27 PDT 2018
Connection check result of ZooKeeper Server xinghuan-3 was Healthy
io.transwarp.manager.master.manager.operation.TaskLocalRunner$DownException: ZOOKEEPERs in ZooKeeper1 didn't become Healthy within 600 seconds
Wed Aug 01 23:05:39 PDT 2018 [Manager] Starting task local part …
target yaml path on [Manager] is /var/lib/transwarp-manager/master/content/resources/services/zookeeper1/zookeeper-server.yaml
start to generate zookeeper-server.yaml on [Manager]…
start handle local part, dataModel is
Map(transwarpRepo -> http://192.168.11.186:8180/pub/transwarp
, current.user -> root, service -> Map(keytab -> /etc/zookeeper1/conf/zookeeper.keytab, syncLimit -> 5, zoo_cfg -> Map(maxClientCnxns -> 0, tickTime -> 9000, initLimit -> 10, syncLimit -> 5), plugins -> List(), xinghuan-1 -> Map(zookeeper.leader.elect.port -> 3888, zookeeper.peer.communicate.port -> 2888), auth -> simple, zookeeper.jmxremote.port -> 9911, domain -> dc=tdh, zookeeper.container.requests.memory -> -1, zookeeper.client.port -> 2181, kdc -> Map(hostname -> xinghuan-3, port -> 1088), zookeeper.container.limits.cpu -> -1, id -> 3, maxClientCnxns -> 0, xinghuan-2 -> Map(zookeeper.leader.elect.port -> 3888, zookeeper.peer.communicate.port -> 2888), roles -> Map(ZOOKEEPER -> List(Map(id -> 12, hostname -> xinghuan-1, ip -> 192.168.11.186, toDecommission -> false), Map(id -> 13, hostname -> xinghuan-2, ip -> 192.168.11.187, toDecommission -> false), Map(id -> 14, hostname -> xinghuan-3, ip -> 192.168.11.188, toDecommission -> false))), masterPrincipal -> zookeeper, zookeeper.server.memory -> 1024, zookeeper.container.limits.memory -> -1, zookeeper.container.requests.cpu -> -1, zookeeper.memory.ratio -> -1, realm -> TDH, initLimit -> 10, tickTime -> 9000, xinghuan-3 -> Map(zookeeper.leader.elect.port -> 3888, zookeeper.peer.communicate.port -> 2888), sid -> zookeeper1), dependencies -> Map(TOS -> Map(keytab -> /etc/tos/conf/tos.keytab, tos.master.apiserver.secure.port -> 8553, tos.master.dashboard.username -> dashboard, plugins -> List(), tos.master.controller.port -> 10252, xinghuan-1 -> Map(tos.master.etcd.initial.cluster.state -> new), auth -> simple, domain -> dc=tdh, tos.slave.kubelet.port -> 10250, tos.master.etcd.heartbeat.interval -> 250, tos.master.apiserver.port -> 8080, kdc -> Map(hostname -> xinghuan-3, port -> 1088), id -> 1, tos.registry.ui.port -> 8081, xinghuan-2 -> Map(tos.master.etcd.initial.cluster.state -> new), roles -> Map(TOS_REGISTRY -> List(Map(id -> 5, hostname -> xinghuan-1, ip -> 192.168.11.186, toDecommission -> false)), TOS_MASTER -> List(Map(id -> 6, hostname -> xinghuan-1, ip -> 192.168.11.186, toDecommission -> false), Map(id -> 7, hostname -> xinghuan-2, ip -> 192.168.11.187, toDecommission -> false), Map(id -> 8, hostname -> xinghuan-3, ip -> 192.168.11.188, toDecommission -> false)), TOS_SLAVE -> List(Map(id -> 1, hostname -> xinghuan-1, ip -> 192.168.11.186, toDecommission -> false), Map(id -> 2, hostname -> xinghuan-2, ip -> 192.168.11.187, toDecommission -> false), Map(id -> 3, hostname -> xinghuan-3, ip -> 192.168.11.188, toDecommission -> false), Map(id -> 4, hostname -> xinghuan-4, ip -> 192.168.11.189, toDecommission -> false))), masterPrincipal -> , tos.master.dashboard.password -> password, realm -> TDH, tos.master.etcd.port -> 4001, tos.registry.port -> 5000, tos.slave.kubelet.healthzport -> 10248, tos.master.etcd.election.timeout -> 1250, xinghuan-3 -> Map(tos.master.etcd.initial.cluster.state -> new), tos.master.leader.elect.port -> 4002, sid -> tos, tos.master.scheduler.port -> 10251), LICENSE_SERVICE -> Map(keytab -> /etc/transwarp_license_cluster/conf/license_service.keytab, syncLimit -> 5, zoo_cfg -> Map(maxClientCnxns -> 0, tickTime -> 9000, initLimit -> 10, syncLimit -> 5), plugins -> List(), xinghuan-1 -> Map(zookeeper.client.port -> 2291, zookeeper.leader.elect.port -> 3988, zookeeper.peer.communicate.port -> 2988), auth -> simple, zookeeper.jmxremote.port -> 9922, domain -> dc=tdh, zookeeper.container.requests.memory -> -1, kdc -> Map(hostname -> xinghuan-3, port -> 1088), zookeeper.container.limits.cpu -> -1, id -> 2, 
maxClientCnxns -> 0, xinghuan-2 -> Map(zookeeper.client.port -> 2291, zookeeper.leader.elect.port -> 3988, zookeeper.peer.communicate.port -> 2988), roles -> Map(LICENSE_NODE -> List(Map(id -> 9, hostname -> xinghuan-1, ip -> 192.168.11.186, toDecommission -> false), Map(id -> 10, hostname -> xinghuan-2, ip -> 192.168.11.187, toDecommission -> false), Map(id -> 11, hostname -> xinghuan-3, ip -> 192.168.11.188, toDecommission -> false))), masterPrincipal -> , zookeeper.server.memory -> 256, zookeeper.container.limits.memory -> -1, zookeeper.container.requests.cpu -> -1, zookeeper.memory.ratio -> -1, realm -> TDH, initLimit -> 10, tickTime -> 9000, xinghuan-3 -> Map(zookeeper.client.port -> 2291, zookeeper.leader.elect.port -> 3988, zookeeper.peer.communicate.port -> 2988), sid -> transwarp_license_cluster), GUARDIAN -> Map(keytab -> /etc/guardian/conf/guardian.keytab, guardian.ds.root.password -> admin, guardian.server.kerberos.password -> YluyBqXsC6YN7IYqM8Uk, plugins -> List(), guardian.apacheds.ldap.port -> 10389, auth -> simple, domain -> dc=tdh, guardian.ds.realm -> TDH, guardian.server.port -> 8380, guardian.server.audit.level -> ADD,UPDATE,DELETE,LOGIN, guardian.apacheds.kdc.port -> 1088, kdc -> Map(hostname -> xinghuan-3, port -> 1088), guardian.apacheds.data.dir -> /guardian/data/, guardian.ds.ldap.tls.enabled -> false, id -> 23, guardian.client.cache.enabled -> true, roles -> Map(GUARDIAN_APACHEDS -> List(Map(id -> 114, hostname -> xinghuan-3, ip -> 192.168.11.188, toDecommission -> false), Map(id -> 115, hostname -> xinghuan-4, ip -> 192.168.11.189, toDecommission -> false)), GUARDIAN_SERVER -> List(Map(id -> 116, hostname -> xinghuan-3, ip -> 192.168.11.188, toDecommission -> false), Map(id -> 117, hostname -> xinghuan-4, ip -> 192.168.11.189, toDecommission -> false))), masterPrincipal -> guardian/guardian, guardian.admin.password -> admin, guardian.cache.repli.bind.port -> 7800, realm -> TDH, guardian.server.tls.enabled -> true, guardian.ds.domain -> dc=tdh, guardian.admin.username -> admin, guardian.server.audit.enabled -> true, sid -> guardian)))
tos registry hostname in Service(3,Some(1),ZOOKEEPER,None,transwarp-5.1.2-final,INSTALLED,zookeeper1,ZooKeeper1,KUBERNETES,false,true,true) 's dependencies is xinghuan-1
generated zookeeper-server.yaml on [Manager]
start to create role(s) on [Manager] using kubectl --server=https://127.0.0.1:6443 --certificate-authority=/srv/kubernetes/ca.crt --client-certificate=/srv/kubernetes/kubecfg.crt --client-key=/srv/kubernetes/kubecfg.key scale --replicas=3 -f /var/lib/transwarp-manager/master/content/resources/services/zookeeper1/zookeeper-server.yaml…
role(s) successfully created on [Manager]
Wed Aug 01 23:05:41 PDT 2018 [Manager] Task local part ended
Waiting ZOOKEEPERs in ZooKeeper1 to become Healthy within 600 seconds …
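
The log ends while the Manager is still waiting for the health check to pass; on the previous attempt the DAEMON_CHECK was Healthy (all three Pods Running) while the VITAL_SIGN_CHECK connection check failed for xinghuan-1 and xinghuan-2. Below is a minimal sketch for checking the servers by hand, not part of the reported procedure. It assumes shell access on the Manager/TOS master node, that the client port is 2181 as shown in the dataModel, and that ZooKeeper's four-letter-word commands are enabled (the default in the 3.4.x line); the pod name is copied from the log above, and nc timeout options vary between distributions.

# Reuse the kubectl flags the Manager used in the log above.
k() {
  kubectl --server=https://127.0.0.1:6443 \
    --certificate-authority=/srv/kubernetes/ca.crt \
    --client-certificate=/srv/kubernetes/kubecfg.crt \
    --client-key=/srv/kubernetes/kubecfg.key "$@"
}

# 1. Confirm the ZooKeeper pods are still Running and inspect one pod's recent output.
k get pods | grep zookeeper-server-zookeeper1
k logs zookeeper-server-zookeeper1-2269979424-2f1sb --tail=100

# 2. Probe each server on the client port with ZooKeeper's four-letter words.
#    "imok" plus an "srvr" report means the server answers; no reply points at the
#    zookeeper process or quorum formation rather than at the Manager's check.
for h in xinghuan-1 xinghuan-2 xinghuan-3; do
  echo "== $h =="
  echo ruok | nc -w 3 "$h" 2181
  echo srvr | nc -w 3 "$h" 2181
done

If the servers respond here, the problem is more likely on the Manager's health-check side (the ZkInterruptedException / IllegalMonitorStateException shown above) than in the ZooKeeper processes themselves.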
Asked by: 好奇号