
Setting up Kubernetes on cloud hosts: building a Kubernetes of your own, step by step


    I. Background

    kubeadm is the official Kubernetes tool for quickly installing and initializing a Kubernetes cluster. It is still in incubation and is updated alongside each Kubernetes release, so it is not yet suitable for production. Still, with every Kubernetes version upgrade kubeadm adjusts some of its cluster-configuration practices, and experimenting with it is a good way to learn the upstream project's latest best practices for cluster setup.

    Overview

    This article mainly covers installing Kubernetes 1.7.5 with kubeadm on CentOS 7.3. During the installation, Aliyun's yum and Docker mirrors are used to reach resources that are otherwise blocked.

    We recently started using k8s to build an internal container cloud and hit a few problems while setting up the cluster. There are plenty of setup guides online, but a k8s cluster only counts as ready once the network connectivity requirements listed below are all satisfied.

    1 Kubernetes


    The following introduction is taken from Wikipedia:
    Kubernetes (commonly called K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. It was designed by Google and donated to the Cloud Native Computing Foundation (now part of the Linux Foundation). It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts" and supports a range of container tools, including Docker.

    This article focuses on using kubeadm to quickly stand up a K8s cluster environment so you can learn Kubernetes hands-on.

    • K8s-concepts
    • Install-Kubeadm
    • Kubeadm-create-cluster
    • troubleshooting-kubeadm

    For more, see the official site K8s.io.


    Cloud hosts


    Kubernetes 1.8 has been released, so to keep up with upstream, let's also try out kubeadm in Kubernetes 1.8.

    1. Prerequisites

    Besides the host information below, it is best to have a cloud host or proxy that can reach Google resources. This tutorial performs all installation steps as root.
    Host information

    k8s-master    10.23.118.35     2core,2G,20G    CentOS Linux release 7.3.1611 (Core)
    k8s-node01    10.23.118.36    2core,2G,20G    CentOS Linux release 7.3.1611 (Core)
    k8s-node02    10.23.118.37    2core,2G,20G    CentOS Linux release 7.3.1611 (Core)
    

    Hostname setup

    hostnamectl --static set-hostname k8s-master
    hostnamectl --static set-hostname k8s-node01
    hostnamectl --static set-hostname k8s-node02
    

    Write /etc/hosts

    cat >> /etc/hosts << EOF
    10.23.118.35 k8s-master
    10.23.118.36 k8s-node01
    10.23.118.37 k8s-node02
    EOF
    

    Disable the firewall and SELinux

    systemctl disable firewalld.service
    systemctl stop firewalld.service
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    
    node <-> pod              # host and pod IPs can ping each other
    pod  <-> pod              # pods can ping each other on the same host and across hosts
    pod  -> svc cluster ip    # pods can reach a Service's cluster IP
    node -> svc cluster ip    # nodes can reach a Service's cluster IP
    
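    Once the cluster is up, a minimal smoke test for those four paths might look like the sketch below. The pod IP, Service IP, and test image are placeholders, not values from this article; substitute output from kubectl get pod -o wide and kubectl get svc.

    # connectivity smoke test (sketch); run on any node
    POD_IP=10.244.1.2      # placeholder: any pod IP from `kubectl get pod -o wide`
    SVC_IP=10.96.0.10      # placeholder: a Service cluster IP (kube-dns shown here)
    ping -c 2 $POD_IP                              # node -> pod
    nc -z -w 3 $SVC_IP 53 && echo svc reachable    # node -> svc (Services do not answer ICMP)
    # pod -> pod and pod -> svc: repeat the same checks from inside a pod, e.g.
    # kubectl run nettest --rm -it --image=busybox -- sh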

    2 Test environment

    Download the packages

    Download all of the packages to the /data directory.

    # 链接:https://pan.baidu.com/s/13DlR1akNBCjib5VFaIjGTQ 密码:1l69
    # 链接:https://pan.baidu.com/s/1V6Uuj6a08mEq-mRLaI1xgw 密码:6gap
    

    1. Preparation

    2. Install and configure Kubernetes

    Run the following on all three hosts.
    Install Docker

    yum install docker -y
    systemctl enable docker   && systemctl start docker
    

    After the install finishes, run "docker version" to check; the version should be 1.12.6.
    Kubernetes Aliyun yum repository

    cat >> /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=0
    EOF
    

    Check the currently available versions of each component

    yum list kubeadm --showduplicates
    yum list kubernetes-cni --showduplicates
    yum list kubelet --showduplicates
    yum list kubectl --showduplicates
    

    Hardware

    Host machine: Windows 10 + VirtualBox
    Virtual machines: 2 cores, 4 GB RAM, bridged NIC

    Hostname    OS    IP
    master.k8s CentOS 7.4 x86_64 192.168.1.100
    node1.k8s CentOS 7.4 x86_64 192.168.1.101
    node2.k8s CentOS 7.4 x86_64 192.168.1.102

    Set up passwordless SSH from the master to the nodes

    ssh-keygen
    ssh-copy-id root@192.168.1.237
    ssh-copy-id root@192.168.1.100
    ssh-copy-id root@192.168.1.188
    

    1.1 System configuration

    Installation

    yum install kubeadm-1.7.5-0.x86_64

    Installing kubeadm automatically pulls in the other Kubernetes components, with the following versions:
    kubeadm 1.7.5-0, kubectl 1.7.5-0, kubelet 1.7.5-0, kubernetes-cni 0.5.1-0

    kubelet configuration
    Base pause image setting

    cat > /etc/systemd/system/kubelet.service.d/20-pod-infra-image.conf  << EOF
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.cn-qingdao.aliyuncs.com/haitao/pause-amd64:3.0"
    EOF
    

    Docker 1.12.6 and related versions require cgroup-driver=cgroupfs:
    sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    Explanation: see https://github.com/kubernetes/kubernetes/issues/43805
    Start the service:
    systemctl enable kubelet && systemctl start kubelet

    Download the Kubernetes Docker images
    Check the required image versions at: https://kubernetes.io/docs/admin/kubeadm/
    Because Google's resources cannot be reached from inside China, the images needed during installation have been pushed to Aliyun:

    registry.cn-qingdao.aliyuncs.com/haitao/etcd-amd64
    registry.cn-qingdao.aliyuncs.com/haitao/kube-apiserver-amd64
    registry.cn-qingdao.aliyuncs.com/haitao/kube-controller-manager-amd64
    registry.cn-qingdao.aliyuncs.com/haitao/kube-proxy-amd64
    registry.cn-qingdao.aliyuncs.com/haitao/kube-scheduler-amd64
    registry.cn-qingdao.aliyuncs.com/haitao/pause-amd64
    registry.cn-qingdao.aliyuncs.com/haitao/k8s-dns-sidecar-amd64
    registry.cn-qingdao.aliyuncs.com/haitao/k8s-dns-kube-dns-amd64
    registry.cn-qingdao.aliyuncs.com/haitao/k8s-dns-dnsmasq-nanny-amd64
    

    Script to download and re-upload the images [run it on a host that can reach Google resources]:

    #!/bin/bash
    KUBE_VERSION=v1.7.5
    KUBE_PAUSE_VERSION=3.0
    ETCD_VERSION=3.0.17
    DNS_VERSION=1.14.4
    GCR_URL=gcr.io/google_containers
    ALIYUN_URL=registry.cn-qingdao.aliyuncs.com/haitao
    images=(kube-proxy-amd64:${KUBE_VERSION}
    kube-scheduler-amd64:${KUBE_VERSION}
    kube-controller-manager-amd64:${KUBE_VERSION}
    kube-apiserver-amd64:${KUBE_VERSION}
    pause-amd64:${KUBE_PAUSE_VERSION}
    etcd-amd64:${ETCD_VERSION}
    k8s-dns-sidecar-amd64:${DNS_VERSION}
    k8s-dns-kube-dns-amd64:${DNS_VERSION}
    k8s-dns-dnsmasq-nanny-amd64:${DNS_VERSION})
    for imageName in ${images[@]} ; do
        docker pull $GCR_URL/$imageName
        docker tag $GCR_URL/$imageName $ALIYUN_URL/$imageName
        docker push $ALIYUN_URL/$imageName
        docker rmi $ALIYUN_URL/$imageName
    done
    
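    On the cluster hosts themselves, which cannot reach gcr.io, a mirror-image sketch of the script above pulls the images back from the Aliyun namespace and re-tags them with the gcr.io names that kubeadm expects. It assumes the same image list and versions as the upload script.

    #!/bin/bash
    # pull the mirrored images from Aliyun and re-tag them as gcr.io/google_containers/*
    KUBE_VERSION=v1.7.5
    KUBE_PAUSE_VERSION=3.0
    ETCD_VERSION=3.0.17
    DNS_VERSION=1.14.4
    GCR_URL=gcr.io/google_containers
    ALIYUN_URL=registry.cn-qingdao.aliyuncs.com/haitao
    images=(kube-proxy-amd64:${KUBE_VERSION}
    kube-scheduler-amd64:${KUBE_VERSION}
    kube-controller-manager-amd64:${KUBE_VERSION}
    kube-apiserver-amd64:${KUBE_VERSION}
    pause-amd64:${KUBE_PAUSE_VERSION}
    etcd-amd64:${ETCD_VERSION}
    k8s-dns-sidecar-amd64:${DNS_VERSION}
    k8s-dns-kube-dns-amd64:${DNS_VERSION}
    k8s-dns-dnsmasq-nanny-amd64:${DNS_VERSION})
    for imageName in ${images[@]} ; do
        docker pull $ALIYUN_URL/$imageName
        docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
        docker rmi $ALIYUN_URL/$imageName    # keep only the gcr.io-named copy
    done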

    Kubernetes cluster topology diagram

    Software

    Package    Version
    kubeadm v1.8.4
    kubelet v1.8.4
    kubectl v1.8.4
    kubernetes-cni 0.5.1
    docker 1.12.6

    The packages above have been uploaded to Baidu Cloud; download link: https://pan.baidu.com/s/1c2NJADy password: dfgq

    Set the hostnames and the hosts file

    # set the hostname on the master and on each node
    hostnamectl set-hostname master
    exec bash
    
    # sync the hosts file to all hosts
    vim /etc/hosts
    192.168.1.78 master localhost
    192.168.1.237 node1
    192.168.1.100 node2
    192.168.1.188 node3
    

    Before installing, some preparation is needed. The two CentOS 7.3 hosts are as follows:

    3. Create the cluster

    First run the init step on the "k8s-master" host.
    api-advertise-addresses is the IP of "k8s-master"; the pod-network-cidr range must match what is configured in kube-flannel.yml (kube-flannel.yml is used below when installing flannel).

    export KUBE_REPO_PREFIX="registry.cn-qingdao.aliyuncs.com/haitao"
    export KUBE_ETCD_IMAGE="registry.cn-qingdao.aliyuncs.com/haitao/etcd-amd64:3.0.17"
    kubeadm init --apiserver-advertise-address=10.23.118.35 --kubernetes-version=v1.7.5 --pod-network-cidr=10.244.0.0/16
    

    假如一切顺遂, 能够达到规定的规范如下提醒:

    Your Kubernetes master has initialized successfully!
    To start using your cluster, you need to run (as a regular user):
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/
    You can now join any number of machines by running the following on each node
    as root:
    kubeadm join --token c071b2.d57d76cd7d69a79d 10.23.118.35:6443
    

    Configure kubeconfig for kubectl

    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config
    

    Install flannel

    wget https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml
    sed -i 's#quay.io/coreos/flannel:v0.9.0-amd64#registry.cn-qingdao.aliyuncs.com/haitao/flannel:v0.9.0-amd64#g' ./kube-flannel.yml
    kubectl apply -f ./kube-flannel.yml
    

    Verify the installation on the master node

    kubectl get cs
    NAME                STATUS    MESSAGE              ERROR
    controller-manager  Healthy  ok
    scheduler            Healthy  ok
    etcd-0              Healthy  {"health": "true"}
    


    Images

    kubeadm init pulls quite a few images from gcr.io, which is blocked inside China, so you have to find another way. I synced the images to Docker Hub, pull them from there, and then re-tag them locally.

    Image                                  Registry                     Note
    kube-proxy-amd64:v1.8.4                gcr.io/google_containers/    proxy required
    kube-scheduler-amd64:v1.8.4            gcr.io/google_containers/    proxy required
    kube-controller-manager-amd64:v1.8.4   gcr.io/google_containers/    proxy required
    kube-apiserver-amd64:v1.8.4            gcr.io/google_containers/    proxy required
    etcd-amd64:3.0.17                      gcr.io/google_containers/    proxy required
    k8s-dns-sidecar-amd64:1.14.5           gcr.io/google_containers/    proxy required
    pause-amd64:3.0                        gcr.io/google_containers/    proxy required
    k8s-dns-kube-dns-amd64:1.14.5          gcr.io/google_containers/    proxy required
    k8s-dns-dnsmasq-nanny-amd64:1.14.5     gcr.io/google_containers/    proxy required
    flannel:v0.9.1-amd64                   quay.io/coreos               directly reachable

    Fix DNS resolution of localhost

    On this cloud host, DNS resolves localhost to some bogus address, which is a big trap because kubeadm init uses localhost. If your host already resolves localhost to its own IP you can skip this step; otherwise you need to set up your own DNS so that localhost resolves to the host itself.

    # 1. check
    [root@node2 ~]# nslookup localhost
    Server:     118.118.118.9
    Address:    118.118.118.9#53
    
    Non-authoritative answer:
    Name:   localhost.openstacklocal
    Address: 183.136.168.91
    
    # 2. set up a local DNS with dnsmasq
    yum -y install dnsmasq
    cp /etc/resolv.conf{,.bak}
    rm -rf /etc/resolv.conf
    echo -e "nameserver 127.0.0.1\nnameserver $(hostname -i)" >> /etc/resolv.conf
    chmod 444 /etc/resolv.conf
    chattr +i /etc/resolv.conf
    echo -e "server=8.8.8.8\nserver=8.8.4.4" > /etc/dnsmasq.conf
    echo -e "$(hostname -i)\tlocalhost.$(hostname -d)" >> /etc/hosts
    service dnsmasq restart
    
    # 3. check again
    [root@master ~]# nslookup localhost
    Server:     127.0.0.1
    Address:    127.0.0.1#53
    
    Name:   localhost
    Address: 192.168.1.78
    
    # 4. add a custom DNS record (example)
    vim /etc/dnsmasq.conf
    address=/www.baidu.com/123.123.123.123
    

    cat /etc/hosts
    192.168.61.11 node1
    192.168.61.12 node2

    Install on the worker nodes and join the cluster

    Run the following commands:

    export KUBE_REPO_PREFIX="registry.cn-qingdao.aliyuncs.com/haitao"
    export KUBE_ETCD_IMAGE="registry.cn-qingdao.aliyuncs.com/haitao/etcd-amd64:3.0.17"
    kubeadm join --token c071b2.d57d76cd7d69a79d 10.23.118.35:6443
    

    Verify on the nodes

    kubectl get nodes
    NAME        STATUS    AGE      VERSION
    k8s-master  Ready    1d        v1.7.5
    k8s-node01  Ready    1d        v1.7.5
    k8s-node02  Ready    1d        v1.7.5
    


    3 All nodes

    Run the following on all nodes as root.

    Sync the system time

    ntpdate 0.centos.pool.ntp.org
    

    If a firewall is enabled on the hosts, you need to open the ports required by the Kubernetes components; see the "Check required ports" section of Installing kubeadm. For simplicity, the firewall is disabled on all nodes here:

    4. References

    Installing Kubernetes 1.7 with kubeadm: http://blog.csdn.net/zhuchuangang/article/details/76572157


    3.1 Update the system

    yum makecache fast
    yum -y update
    

    Disable the firewall

    iptables -F
    systemctl stop firewalld
    systemctl disable firewalld
    

    systemctl stop firewalld

    Versions and machine information are listed below.

    3.2 Disable the swap partition

    Temporarily (does not survive a reboot):
    # swapoff -a

    Permanently (a hedged sketch follows the list below):

    1. Remove the swap partition
    2. Edit /etc/default/grub, find GRUB_CMDLINE_LINUX and delete the swap entry
    3. Back up /etc/grub2.cfg
    4. Regenerate /etc/grub2.cfg
    
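    A hedged sketch of those four steps on CentOS 7 (it assumes a BIOS install where /etc/grub2.cfg points at /boot/grub2/grub.cfg):

    swapoff -a                                  # stop using swap immediately
    sed -i '/ swap / s/^/#/' /etc/fstab         # comment out the swap mount
    cp /etc/default/grub /etc/default/grub.bak  # then remove the swap/resume entry from GRUB_CMDLINE_LINUX by hand
    cp /etc/grub2.cfg /etc/grub2.cfg.bak
    grub2-mkconfig -o /boot/grub2/grub.cfg      # regenerate the grub config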

    Disable SELinux & disable swap

    swapoff -a 
    sed -i 's/.*swap.*/#&/' /etc/fstab
    setenforce 0
    

    systemctl disable firewalld

    • kubernetes 1.7.2
    • docker 1.12
    • calico 2.3.0
    • CentOS 7 x86_64, three nodes

    10.12.0.18 -> k8s master
    10.12.0.19 -> k8s node1
    10.12.0.22 -> k8s node2, etcd node

    3.3 Disable SELinux

    Temporarily (does not survive a reboot):
    # setenforce 0

    Permanently:
    Edit /etc/selinux/config, then reboot. For example:
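    One way to do that with sed, matching the style used elsewhere in this article:

    sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # then reboot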

    Confirm the time zone

    timedatectl set-timezone Asia/Shanghai 
    systemctl restart chronyd.service 
    

    Create /etc/sysctl.d/k8s.conf with the following content:


    3.4 Set kernel parameters

    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system
    

    Modify system parameters

    cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system
    

    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1

    Node initialization

    • Update CentOS-Base.repo to the Aliyun yum mirror
    mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bk; 
    curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    

    Configure bridge parameters

    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-arptables = 1
    EOF
    sudo sysctl --system
    
    • Disable selinux (do not rely on setenforce 0 alone)
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
    
    • Disable the firewall
    sudo systemctl disable firewalld.service
    sudo systemctl stop firewalld.service
    
    • Disable iptables
    sudo yum install -y iptables-services; iptables -F   # optional
    sudo systemctl disable iptables.service
    sudo systemctl stop iptables.service
    
    • Install related packages
    sudo yum install -y vim wget curl screen git etcd ebtables flannel
    sudo yum install -y socat net-tools.x86_64 iperf bridge-utils.x86_64
    
    • Install docker (the default version is currently 1.12)
    sudo yum install -y yum-utils device-mapper-persistent-data lvm2
    sudo yum install -y libdevmapper* docker
    
    • Install kubernetes
    ## point kubernetes.repo at the Aliyun mirror (works inside China)
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=0
    EOF
    
    ## point kubernetes.repo at the upstream Google repo (for networks that can reach Google)
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
    
    ## install k8s 1.7.2 (kubernetes-cni is pulled in as a dependency; no version pinned here)
    export K8SVERSION=1.7.2
    sudo yum install -y "kubectl-${K8SVERSION}-0.x86_64" "kubelet-${K8SVERSION}-0.x86_64" "kubeadm-${K8SVERSION}-0.x86_64"
    
    • Upgrade the kernel to the latest (4.12.5, optional)
    uname -sr
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
    yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
    yum --enablerepo=elrepo-kernel install -y kernel-ml
    
    awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
    grub2-set-default 0
    
    • Reboot the machine (this step is required)
    reboot
    

    重启机器后实施如下步骤

    • 配置docker daemon并启动docker
    cat <<EOF >/etc/sysconfig/docker
    OPTIONS="-H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375 --storage-driver=overlay --exec-opt native.cgroupdriver=cgroupfs --graph=/localdisk/docker/graph --insecure-registry=gcr.io --insecure-registry=quay.io  --insecure-registry=registry.cn-hangzhou.aliyuncs.com --registry-mirror=http://138f94c6.m.daocloud.io"
    EOF
    
    systemctl start docker
    systemctl status docker -l
    
    • Pull the images required by k8s 1.7.2
    quay.io/calico/node:v1.3.0
    quay.io/calico/cni:v1.9.1
    quay.io/calico/kube-policy-controller:v0.6.0
    
    gcr.io/google_containers/pause-amd64:3.0
    gcr.io/google_containers/kube-proxy-amd64:v1.7.2
    gcr.io/google_containers/kube-apiserver-amd64:v1.7.2
    gcr.io/google_containers/kube-controller-manager-amd64:v1.7.2
    gcr.io/google_containers/kube-scheduler-amd64:v1.7.2
    gcr.io/google_containers/etcd-amd64:3.0.17
    
    gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
    gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
    gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
    
    • Run etcd on the non-master node 10.12.0.22 (for simplicity; you could also build a proper etcd cluster)
    screen etcd -name="EtcdServer" -initial-advertise-peer-urls=http://10.12.0.22:2380 -listen-peer-urls=http://0.0.0.0:2380 -listen-client-urls=http://10.12.0.22:2379 -advertise-client-urls http://10.12.0.22:2379 -data-dir /var/lib/etcd/default.etcd
    
    • On every node, check that etcd is reachable; it must be, otherwise check whether the firewall is still running
    etcdctl --endpoint=http://10.12.0.22:2379 member list
    etcdctl --endpoint=http://10.12.0.22:2379 cluster-health
    
    • Bootstrap the cluster with kubeadm on the k8s master node,
      with the pod IP range set to 10.68.0.0/16 and the cluster IP range left at the default 10.96.0.0/16.
      Run the following commands on the master node:
    cat << EOF >kubeadm_config.yaml
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    api:
      advertiseAddress: 10.12.0.18
      bindPort: 6443
    etcd:
      endpoints:
      - http://10.12.0.22:2379
    networking:
     dnsDomain: cluster.local
     serviceSubnet: 10.96.0.0/16
     podSubnet: 10.68.0.0/16
    kubernetesVersion: v1.7.2
    #token: <string>
    #tokenTTL: 0
    EOF
    
    ##
    kubeadm init --config kubeadm_config.yaml
    
    • A few dozen seconds after running kubeadm init, the api-server, scheduler, and controller-manager containers are all up on the master; check the master with the commands below,
      run on the master node:
    rm -rf $HOME/.kube
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    kubectl get cs -o wide --show-labels
    kubectl get nodes -o wide --show-labels
    
    • Join the nodes; this needs the token printed by kubeadm init. Run the following on each node:
    systemctl start docker
    systemctl start kubelet
    kubeadm join --token *{6}.*{16} 10.12.0.18:6443 --skip-preflight-checks
    
    • Watch the nodes joining from the master. Because the network has not been created yet, all master and node entries are NotReady and kube-dns stays Pending.
    kubectl get nodes -o wide
    watch kubectl get all --all-namespaces -o wide
    
    • calico.yaml was modified:
      the etcd creation part was removed so the external etcd is used,
      CALICO_IPV4POOL_CIDR was changed to 10.68.0.0/16.
      calico.yaml is as follows
    # Calico Version v2.3.0
    # http://docs.projectcalico.org/v2.3/releases#v2.3.0
    # This manifest includes the following component versions:
    #   calico/node:v1.3.0
    #   calico/cni:v1.9.1
    #   calico/kube-policy-controller:v0.6.0
    
    # This ConfigMap is used to configure a self-hosted Calico installation.
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: calico-config
      namespace: kube-system
    data:
      # The location of your etcd cluster.  This uses the Service clusterIP defined below.
      etcd_endpoints: "http://10.12.0.22:2379"
      # Configure the Calico backend to use.
      calico_backend: "bird"
    
      # The CNI network configuration to install on each node.
      cni_network_config: |-
        {
            "name": "k8s-pod-network",
            "cniVersion": "0.1.0",
            "type": "calico",
            "etcd_endpoints": "__ETCD_ENDPOINTS__",
            "log_level": "info",
            "ipam": {
                "type": "calico-ipam"
            },
            "policy": {
                "type": "k8s",
                 "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
                 "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
            },
            "kubernetes": {
                "kubeconfig": "/etc/cni/net.d/__KUBECONFIG_FILENAME__"
            }
        }
    ---
    # This manifest installs the calico/node container, as well
    # as the Calico CNI plugins and network config on
    # each master and worker node in a Kubernetes cluster.
    kind: DaemonSet
    apiVersion: extensions/v1beta1
    metadata:
      name: calico-node
      namespace: kube-system
      labels:
        k8s-app: calico-node
    spec:
      selector:
        matchLabels:
          k8s-app: calico-node
      template:
        metadata:
          labels:
            k8s-app: calico-node
          annotations:
            # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
            # reserves resources for critical add-on pods so that they can be rescheduled after
            # a failure.  This annotation works in tandem with the toleration below.
            scheduler.alpha.kubernetes.io/critical-pod: ''
        spec:
          hostNetwork: true
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
          # This, along with the annotation above marks this pod as a critical add-on.
          - key: CriticalAddonsOnly
            operator: Exists
          serviceAccountName: calico-cni-plugin
          containers:
            # Runs calico/node container on each Kubernetes node.  This
            # container programs network policy and routes on each
            # host.
            - name: calico-node
              image: quay.io/calico/node:v1.3.0
              env:
                # The location of the Calico etcd cluster.
                - name: ETCD_ENDPOINTS
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: etcd_endpoints
                # Enable BGP.  Disable to enforce policy only.
                - name: CALICO_NETWORKING_BACKEND
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: calico_backend
                # Disable file logging so `kubectl logs` works.
                - name: CALICO_DISABLE_FILE_LOGGING
                  value: "true"
                # Set Felix endpoint to host default action to ACCEPT.
                - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
                  value: "ACCEPT"
                # Configure the IP Pool from which Pod IPs will be chosen.
                - name: CALICO_IPV4POOL_CIDR
                  value: "10.68.0.0/16"
                - name: CALICO_IPV4POOL_IPIP
                  value: "always"
                # Disable IPv6 on Kubernetes.
                - name: FELIX_IPV6SUPPORT
                  value: "false"
                # Set Felix logging to "info"
                - name: FELIX_LOGSEVERITYSCREEN
                  value: "info"
                # Auto-detect the BGP IP address.
                - name: IP
                  value: ""
              securityContext:
                privileged: true
              resources:
                requests:
                  cpu: 250m
              volumeMounts:
                - mountPath: /lib/modules
                  name: lib-modules
                  readOnly: true
                - mountPath: /var/run/calico
                  name: var-run-calico
                  readOnly: false
            # This container installs the Calico CNI binaries
            # and CNI network config file on each node.
            - name: install-cni
              image: quay.io/calico/cni:v1.9.1
              command: ["/install-cni.sh"]
              env:
                # The location of the Calico etcd cluster.
                - name: ETCD_ENDPOINTS
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: etcd_endpoints
                # The CNI network config to install on each node.
                - name: CNI_NETWORK_CONFIG
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: cni_network_config
              volumeMounts:
                - mountPath: /host/opt/cni/bin
                  name: cni-bin-dir
                - mountPath: /host/etc/cni/net.d
                  name: cni-net-dir
          volumes:
            # Used by calico/node.
            - name: lib-modules
              hostPath:
                path: /lib/modules
            - name: var-run-calico
              hostPath:
                path: /var/run/calico
            # Used to install CNI.
            - name: cni-bin-dir
              hostPath:
                path: /opt/cni/bin
            - name: cni-net-dir
              hostPath:
                path: /etc/cni/net.d
    
    ---
    
    # This manifest deploys the Calico policy controller on Kubernetes.
    # See https://github.com/projectcalico/k8s-policy
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy
    spec:
      # The policy controller can only have a single active instance.
      replicas: 1
      strategy:
        type: Recreate
      template:
        metadata:
          name: calico-policy-controller
          namespace: kube-system
          labels:
            k8s-app: calico-policy-controller
          annotations:
            # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
            # reserves resources for critical add-on pods so that they can be rescheduled after
            # a failure.  This annotation works in tandem with the toleration below.
            scheduler.alpha.kubernetes.io/critical-pod: ''
        spec:
          # The policy controller must run in the host network namespace so that
          # it isn't governed by policy that would prevent it from working.
          hostNetwork: true
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
          # This, along with the annotation above marks this pod as a critical add-on.
          - key: CriticalAddonsOnly
            operator: Exists
          serviceAccountName: calico-policy-controller
          containers:
            - name: calico-policy-controller
              image: quay.io/calico/kube-policy-controller:v0.6.0
              env:
                # The location of the Calico etcd cluster.
                - name: ETCD_ENDPOINTS
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: etcd_endpoints
                # The location of the Kubernetes API.  Use the default Kubernetes
                # service for API access.
                - name: K8S_API
                  value: "https://kubernetes.default:443"
                # Since we're running in the host namespace and might not have KubeDNS
                # access, configure the container's /etc/hosts to resolve
                # kubernetes.default to the correct service clusterIP.
                - name: CONFIGURE_ETC_HOSTS
                  value: "true"
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: calico-cni-plugin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: calico-cni-plugin
    subjects:
    - kind: ServiceAccount
      name: calico-cni-plugin
      namespace: kube-system
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: calico-cni-plugin
      namespace: kube-system
    rules:
      - apiGroups: [""]
        resources:
          - pods
          - nodes
        verbs:
          - get
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: calico-cni-plugin
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: calico-policy-controller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: calico-policy-controller
    subjects:
    - kind: ServiceAccount
      name: calico-policy-controller
      namespace: kube-system
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: calico-policy-controller
      namespace: kube-system
    rules:
      - apiGroups:
        - ""
        - extensions
        resources:
          - pods
          - namespaces
          - networkpolicies
        verbs:
          - watch
          - list
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: calico-policy-controller
      namespace: kube-system
    
    • Create the calico cross-host network; run the following on the master node
    kubectl apply -f calico.yaml
    
    • Watch pods named calico-node-**** come up on each node; calico-policy-controller and kube-dns also come up. All of these pods live in the kube-system namespace.
    >kubectl get all --all-namespaces
    
    NAMESPACE     NAME                                                 READY     STATUS    RESTARTS   AGE
    kube-system   po/calico-node-2gqf2                                 2/2       Running   0          19h
    kube-system   po/calico-node-fg8gh                                 2/2       Running   0          19h
    kube-system   po/calico-node-ksmrn                                 2/2       Running   0          19h
    kube-system   po/calico-policy-controller-1727037546-zp4lp         1/1       Running   0          19h
    kube-system   po/etcd-izuf6fb3vrfqnwbct6ivgwz                      1/1       Running   0          19h
    kube-system   po/kube-apiserver-izuf6fb3vrfqnwbct6ivgwz            1/1       Running   0          19h
    kube-system   po/kube-controller-manager-izuf6fb3vrfqnwbct6ivgwz   1/1       Running   0          19h
    kube-system   po/kube-dns-2425271678-3t4g6                         3/3       Running   0          19h
    kube-system   po/kube-proxy-6fg1l                                  1/1       Running   0          19h
    kube-system   po/kube-proxy-fdbt2                                  1/1       Running   0          19h
    kube-system   po/kube-proxy-lgf3z                                  1/1       Running   0          19h
    kube-system   po/kube-scheduler-izuf6fb3vrfqnwbct6ivgwz            1/1       Running   0          19h
    
    NAMESPACE     NAME                       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
    default       svc/kubernetes             10.96.0.1       <none>        443/TCP         19h
    kube-system   svc/kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   19h
    
    
    NAMESPACE     NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    kube-system   deploy/calico-policy-controller   1         1         1            1           19h
    kube-system   deploy/kube-dns                   1         1         1            1           19h
    
    
    NAMESPACE     NAME                                     DESIRED   CURRENT   READY     AGE
    kube-system   rs/calico-policy-controller-1727037546   1         1         1         19h
    kube-system   rs/kube-dns-2425271678                   1         1         1         19h
    
    • Deploy the dashboard
    wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
    kubectl create -f kubernetes-dashboard.yaml
    
    • Deploy heapster
    wget https://github.com/kubernetes/heapster/archive/v1.4.0.tar.gz
    tar -zxvf v1.4.0.tar.gz
    cd heapster-1.4.0/deploy/kube-config/influxdb
    kubectl create -f ./
    

    3.5 Install kubeadm and related packages

    Upload the downloaded packages to a directory on the system, here /opt/soft/.

    Run the install command: yum localinstall -y /opt/soft/*.rpm

    Install docker

    tar -xvf docker-packages.tar
    cd docker-packages
    yum -y localinstall *.rpm
    systemctl start docker && systemctl enable docker
    

    Run sysctl -p /etc/sysctl.d/k8s.conf to apply the changes.

    Other useful commands

    • Force-delete a pod
    kubectl delete pod <podname> --namespace=<namspacer>  --grace-period=0 --force
    
    • Reset a node
    kubeadm reset 
    systemctl stop kubelet;
    docker ps -aq | xargs docker rm -fv
    find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
    rm -rf /var/lib/kubelet /etc/kubernetes/ /var/lib/etcd 
    systemctl start kubelet;
    
    • Access the dashboard (run on the master node)
    kubectl proxy --address=0.0.0.0 --port=8001 --accept-hosts='^.*'
    or
    kubectl proxy --port=8011 --address=192.168.61.100 --accept-hosts='^192.168.61.*'
    
    access to http://0.0.0.0:8001/ui
    
    • Access to API with authentication token
    APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
    TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d 't')
    curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
    
    • Let the master node schedule workloads; by default the master does not take part in scheduling
    kubectl taint nodes --all node-role.kubernetes.io/master-
    or
    kubectl taint nodes --all dedicated-
    
    • kubernetes master annotations before the taint is removed
    Name:           izuf6fb3vrfqnwbct6ivgwz
    Role:
    Labels:         beta.kubernetes.io/arch=amd64
                beta.kubernetes.io/os=linux
                kubernetes.io/hostname=izuf6fb3vrfqnwbct6ivgwz
                node-role.kubernetes.io/master=
    Annotations:        node.alpha.kubernetes.io/ttl=0
                volumes.kubernetes.io/controller-managed-attach-detach=true
    
    • kubernetes master annotations after the taint is removed
    Name:           izuf6fb3vrfqnwbct6ivgwz
    Role:
    Labels:         beta.kubernetes.io/arch=amd64
                beta.kubernetes.io/os=linux
                kubernetes.io/hostname=izuf6fb3vrfqnwbct6ivgwz
                node-role.kubernetes.io/master=
    Annotations:        node.alpha.kubernetes.io/ttl=0
                volumes.kubernetes.io/controller-managed-attach-detach=true
    Taints:         <none>
    





    One other rather annoying thing: the same steps build a k8s cluster without problems on Aliyun and UCloud, but on Azure cross-host pod traffic over the calico network does not get through, and I still don't know where the problem is...

    In a follow-up I'll share how to build a k8s cluster purely from the command line, and how to make k8s highly available.


    Some reference links

    • Install Kubernetes 1.6 with kubeadm
      http://blog.frognew.com/2017/04/kubeadm-install-kubernetes-1.6.html

    • Build a Kubernetes 1.4 cluster with kubeadm
      https://mritd.me/2016/10/29/set-up-kubernetes-cluster-by-kubeadm/

    • Quickly deploy K8s in an Aliyun VPC environment
      https://yq.aliyun.com/articles/66474

    • Install Kubernetes 1.5 with kubeadm
      http://www.openskill.cn/article/511

    • http://tonybai.com/2016/12/30/install-kubernetes-on-ubuntu-with-kubeadm/

    3.6 Enable firewalld

    systemctl restart firewalld
    systemctl enable firewalld
    

    Configure a registry mirror (accelerator)

    vim /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://lw9sjwma.mirror.aliyuncs.com"]
    }
    
    systemctl daemon-reload 
    systemctl restart docker
    

    Disable SELinux:

    3.7 Speed up docker pull

    Because most sources are hosted overseas, downloads from inside China are very slow, so a Docker registry accelerator is needed.

    Commonly used ones:

    1. DaoCloud
    2. Aliyun

    DaoCloud is used here; a mirror-configuration sketch follows.
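    A minimal sketch of wiring in an accelerator by hand; the mirror URL below is a placeholder for the endpoint your DaoCloud or Aliyun account gives you.

    mkdir -p /etc/docker
    cat <<EOF > /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://your-mirror-id.mirror.example.com"]
    }
    EOF
    systemctl daemon-reload
    systemctl restart docker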

    Configure the k8s yum repo

    vim /etc/yum.repos.d/k8s.repo
    [k8s]
    name=k8s
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    gpgcheck=0
    

    setenforce 0

    3.8 Adjust the kubelet startup parameters

    This step is important. Before adjusting the kubelet startup parameters, after initializing the K8s cluster I kept seeing the following errors in /var/log/messages:

    Nov 28 09:29:03 k8s kubelet: E1128 09:29:03.679613    6485 summary.go:92] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
    Nov 28 09:29:03 k8s kubelet: E1128 09:29:03.679651    6485 summary.go:92] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
    Nov 28 09:29:03 k8s kubelet: W1128 09:29:03.679695    6485 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
    

    I later found the fix for the same problem on Stack Overflow: adjust the startup parameters.

    Edit the config file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

    Add: Environment="KUBELET_MY_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
    
    Modify ExecStart: append $KUBELET_MY_ARGS at the end
    

    Problem thread: https://stackoverflow.com/questions/46726216/kubelet-fails-to-get-cgroup-stats-for-docker-and-kubelet-services
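    Put together, a sketch of the change looks like this; KUBELET_MY_ARGS is the variable name introduced above, and appending to the drop-in assumes the file ends inside its [Service] section (as the kubeadm-generated file does).

    cat <<'EOF' >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    Environment="KUBELET_MY_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
    EOF
    # edit ExecStart in the same file so it ends with ... $KUBELET_MY_ARGS, then:
    systemctl daemon-reload
    systemctl restart kubelet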

    Get the kube packages

    cd kube-packages-1.10.1                 # the packages are downloaded from the network drive
    tar -xvf kube-packages-1.10.1.tar
    cd kube-packages-1.10.1
    yum -y localinstall *.rpm 
    systemctl start kubelet && systemctl enable kubelet
    

    vi /etc/selinux/config

    3.9 Download the images

    Pull the required images from Docker Hub and re-tag them:

    images=(kube-proxy-amd64:v1.8.4 kube-scheduler-amd64:v1.8.4 kube-controller-manager-amd64:v1.8.4 kube-apiserver-amd64:v1.8.4 etcd-amd64:3.0.17 k8s-dns-sidecar-amd64:1.14.5 pause-amd64:3.0 k8s-dns-kube-dns-amd64:1.14.5 k8s-dns-dnsmasq-nanny-amd64:1.14.5) 
    
    for imageName in ${images[@]} ; do
      docker pull yotoobo/$imageName
      docker tag  yotoobo/$imageName gcr.io/google_containers/$imageName
      docker rmi  yotoobo/$imageName
    done
    

    Unify the cgroup driver between k8s and docker

    # 1. check docker's cgroup driver
    docker info | grep "Cgroup Driver"
    Cgroup Driver: cgroupfs
    
    # 2. change the k8s config file to match docker
    sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    
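    After changing the drop-in, reload systemd and restart kubelet so the new driver setting takes effect, then double-check that the two sides agree:

    systemctl daemon-reload && systemctl restart kubelet
    docker info 2>/dev/null | grep "Cgroup Driver"
    grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf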

    SELINUX=disabled

    3.10 Edit /etc/hosts

    Since there is no internal DNS service, hosts files are used here.

    Add the following to /etc/hosts

    192.168.1.100 master.k8s
    192.168.1.101 node1.k8s
    192.168.1.102 node2.k8s
    

    Import the base images

    cd /data
    docker load -i k8s-images-1.10.tar.gz 
    

    Starting with Kubernetes 1.8, swap must be disabled; with the default configuration kubelet will not start otherwise. You can lift this restriction with the kubelet flag --fail-swap-on=false (a sketch follows). Here we simply turn swap off:
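    If you really must keep swap enabled, here is a hedged sketch of passing that flag through a systemd drop-in; the file name is arbitrary, and kubeadm's own preflight checks may still warn about swap.

    cat <<EOF > /etc/systemd/system/kubelet.service.d/90-fail-swap-on.conf
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
    EOF
    systemctl daemon-reload && systemctl restart kubelet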

    4 Master node

    Run the following on the master node as root.

    II. Initialize the master node

    # initialize the master; the version specified must match the kubeadm version
    # kubeadm is given only the minimum options here; cluster name etc. are left at defaults
    [root@master ~]# kubeadm init --kubernetes-version=v1.10.1 --pod-network-cidr=10.244.0.0/16
    
    # after initialization completes you get the following output
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 192.168.1.78:6443 --token qabol0.c2gq0uyfxvpqr8bu --discovery-token-ca-cert-hash sha256:2237ec7b8efd5a8f68adcb04900a0b17b9df2a78675a7d62b4aef644a7f62c05
    # kubeadm join is the command nodes use to join the cluster; note the token's validity period
    

    swapoff -a

    4.1 Open the required ports

    firewall-cmd --permanent --add-port=6443/tcp
    firewall-cmd --permanent --add-port=2379-2380/tcp
    firewall-cmd --permanent --add-port=10250/tcp
    firewall-cmd --permanent --add-port=10251/tcp
    firewall-cmd --permanent --add-port=10252/tcp
    firewall-cmd --permanent --add-port=10255/tcp
    firewall-cmd --reload
    

    Port usage:

    Port    Purpose
    6443 kube-apiserver
    2379-2380 etcd server client API
    10250 kubelet api
    10251 kube-scheduler
    10252 kube-controller-manager
    10255 Read-only Kubelet API

    If you will later operate k8s as a regular (non-root) user, switch to that user before running this; otherwise run it directly as root:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    Edit /etc/fstab and comment out the swap mount, then confirm with free -m that swap is off.

    4.2 Start the services

    systemctl enable kubelet && systemctl restart kubelet
    systemctl enable docker && systemctl restart docker
    

    Basic commands

    # list pods
    kubectl get pods
    
    
    # list system pods 
    [root@master ~]# kubectl get pods -n kube-system
    NAME                             READY     STATUS     RESTARTS   AGE
    etcd-master                      0/1       Pending    0          1s
    kube-apiserver-master            0/1       Pending    0          1s
    kube-controller-manager-master   0/1       Pending    0          1s
    kube-dns-86f4d74b45-d42zm        0/3       Pending    0          8h
    kube-proxy-884h6                 1/1       NodeLost   0          8h
    kube-scheduler-master            0/1       Pending    0          1s
    
    # check the status of each cluster component
    [root@master ~]# kubectl get componentstatuses
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok                   
    controller-manager   Healthy   ok                   
    etcd-0               Healthy   {"health": "true"}   
    You have new mail in /var/spool/mail/root
    

    Adjust the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:

    4.3 kubeadm init

    kubeadm init --kubernetes-version=v1.8.4 --token-ttl 0 --pod-network-cidr=10.244.0.0/16
    

    --kubernetes-version=v1.8.4: if not specified, kubeadm tries to fetch version information from Google, so you can guess how that goes.
    --token-ttl 0: the token never expires; without this it expires after 24h by default.
    --pod-network-cidr=10.244.0.0/16: keep this value if you want Flannel to work properly.

    Then wait for the initialization to finish.

    (screenshot: kubeadm-init)

    If you see prompt 1, initialization succeeded; congratulations, you are 90% of the way there.

    Follow prompt 2 and run the corresponding commands.

    Prompt 3 is especially important: keep it somewhere safe, because you will need it later to add nodes to the K8s cluster.

    III. Join nodes to the cluster

    # make sure the node's cgroup driver matches the master's
    sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    
    # this command comes from the output shown after cluster initialization
    kubeadm join 192.168.1.78:6443 --token v0866r.u7kvg5js1ah2u1bi --discovery-token-ca-cert-hash sha256:7b36794f4fa5121f6a5e309d0e312ded72997a88236a93ec7da3520e5aaccf0e
    
    # check node info from the master node
    [root@master data]# kubectl get nodes
    NAME      STATUS     ROLES     AGE       VERSION
    master    NotReady      master    57m       v1.10.1
    node1     NotReady      <none>    27m       v1.10.1
    node2     NotReady      <none>    11s       v1.10.1
    node3     NotReady   <none>    4s        v1.10.1
    You have new mail in /var/spool/mail/root
    

    vm.swappiness=0

    4.4 Install Flannel

    K8s has many Pod network options; here we use CoreOS's Flannel.

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
    

    IV. Deploy the network

    Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.

    4.5 Check the master status

    (screenshot: get-info)


    Deployment

    flannel's official site
    Downloading flannel does not require a proxy; the flannel yml pulls its image from quay.io automatically.

    # 1.1 use the flannel from the package, and specify which host NIC the pod traffic binds to
    vim kube-flannel.yml
    command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr","-iface=eth0" ]
    # create these in order: rbac first. Without rbac the pods were still created, but pings between them failed
    kubectl apply -f kube-flannel-rbac.yml
    kubectl apply -f kube-flannel.yml
    # afterwards the nodes become Ready
    [root@master1 kubernetes1.10]# kubectl get node
    NAME      STATUS    ROLES     AGE       VERSION
    master    Ready      master    57m       v1.10.1
    node1     Ready      <none>    27m       v1.10.1
    node2     Ready      <none>    11s       v1.10.1
    node3     Ready   <none>    4s        v1.10.1
    
    # 2. or download the latest flannel from upstream; for k8s 1.7+ just run:
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    

    1.2 Install Docker

    5 Worker nodes

    Run the following on the worker nodes as root.

    Modify the flannel configuration

    kube-flannel.yml specifies the network to use:
    "Network": "10.244.0.0/16"
    
    From this /16, flannel allocates a smaller per-node subnet (a /24 by default) to each node.
    
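    To see the per-node subnet flannel actually hands out from that /16, you can read each node's podCIDR (a quick check, not part of the original steps):

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'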

    yum install -y yum-utils device-mapper-persistent-data lvm2

    5.1 Open the required ports

    firewall-cmd --permanent --add-port=10250/tcp
    firewall-cmd --permanent --add-port=10255/tcp
    firewall-cmd --permanent --add-port=30000-32767/tcp
    firewall-cmd --reload
    

    Ports 30000-32767 are the default NodePort range for Services.
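    As a quick illustration of that range, exposing a test deployment through a NodePort Service lands on a port between 30000 and 32767; the deployment name and image below are just an example.

    kubectl run nginx-test --image=nginx --port=80            # on these kubectl versions this creates a Deployment
    kubectl expose deployment nginx-test --type=NodePort --port=80
    kubectl get svc nginx-test                                # PORT(S) shows 80:3xxxx/TCP, i.e. the assigned NodePort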

    V. Deploy the dashboard

    kubectl apply -f kubernetes-dashboard-http.yaml
    kubectl apply -f admin-role.yaml
    kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
    

    yum-config-manager --add-repo

    5.2 Start the services

    systemctl enable kubelet && systemctl restart kubelet
    systemctl enable docker && systemctl restart docker
    

    Common command-line operations

    # list pods; by default only the default namespace is shown
    [root@master ~]# kubectl get pods
    No resources found.
    
    # list pods in a specific namespace
    [root@master ~]# kubectl get pods -n kube-system
    NAME                                    READY     STATUS    RESTARTS   AGE
    etcd-master                             1/1       Running   0          3h
    kube-apiserver-master                   1/1       Running   0          3h
    kube-controller-manager-master          1/1       Running   0          3h
    kube-dns-86f4d74b45-bzbvc               3/3       Running   0          3h
    kube-flannel-ds-5ghhj                   1/1       Running   0          2h
    kube-flannel-ds-ht4xd                   1/1       Running   0          3h
    kube-flannel-ds-kbm5g                   1/1       Running   0          3h
    kube-flannel-ds-mlj4r                   1/1       Running   0          2h
    kube-proxy-9xxnd                        1/1       Running   0          3h
    kube-proxy-n9w5x                        1/1       Running   0          3h
    kube-proxy-nkn8c                        1/1       Running   0          2h
    kube-proxy-shd6l                        1/1       Running   0          2h
    kube-scheduler-master                   1/1       Running   0          3h
    kubernetes-dashboard-5c469b58b8-rjfx6   1/1       Running   0          1h
    
    
    # show more detail; at this point every node runs a kube-proxy and a flannel container
    # -o wide adds the node IP and hostname to the output
    [root@master ~]# kubectl get pods -n kube-system -o wide
    NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE
    etcd-master                             1/1       Running   0          3h        192.168.1.78    master
    kube-apiserver-master                   1/1       Running   0          3h        192.168.1.78    master
    kube-controller-manager-master          1/1       Running   0          3h        192.168.1.78    master
    kube-dns-86f4d74b45-bzbvc               3/3       Running   0          3h        10.244.0.2      master
    kube-flannel-ds-5ghhj                   1/1       Running   0          2h        192.168.1.188   node3
    kube-flannel-ds-ht4xd                   1/1       Running   0          3h        192.168.1.78    master
    kube-flannel-ds-kbm5g                   1/1       Running   0          3h        192.168.1.237   node1
    kube-flannel-ds-mlj4r                   1/1       Running   0          2h        192.168.1.100   node2
    kube-proxy-9xxnd                        1/1       Running   0          3h        192.168.1.237   node1
    kube-proxy-n9w5x                        1/1       Running   0          3h        192.168.1.78    master
    kube-proxy-nkn8c                        1/1       Running   0          2h        192.168.1.100   node2
    kube-proxy-shd6l                        1/1       Running   0          2h        192.168.1.188   node3
    kube-scheduler-master                   1/1       Running   0          3h        192.168.1.78    master
    kubernetes-dashboard-5c469b58b8-rjfx6   1/1       Running   0          1h        10.244.0.3      master
    

    5.3 kubeadm join

    Use prompt 3 from step 4.3 to add the worker nodes to the K8s cluster.

    (screenshot: kubeadm-join)


    VI. Reset the kubeadm configuration

    # reset kubeadm
    kubeadm reset
    
    # clean up the network interfaces
    ip link del cni0
    ip link del flannel.1
    

    Check the available Docker versions:

    Final words

    At this point we have used kubeadm to build a 3-node cluster. Keep in mind, however, that kubeadm is still a beta tool and is not recommended for production, because the master node, etcd, kube-apiserver, and so on are all still single instances.

    Now, back on the master machine, verify the K8s environment again:

    (screenshot: get-info2)

    Then create a simple Pod:

    cat >> /opt/k8s/myapp-pod.yml << EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: busybox
        command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
    EOF
    
    kubectl apply -f /opt/k8s/myapp-pod.yml
    

    Verification:

    (screenshot: pod-info)
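    A minimal command-line check of the same thing:

    kubectl get pod myapp-pod -o wide             # STATUS should become Running
    kubectl logs myapp-pod                        # should print "Hello Kubernetes!"
    kubectl describe pod myapp-pod | tail -n 20   # events, useful if the image pull is slow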

    K8s has many more features and capabilities; the deeper you go, the more you will see how powerful and appealing it is.

    Run, K8s!!!

    This article is also published on my personal blog; feel free to drop by.

    VII. Pitfalls I ran into

    • Make sure DNS on both master and nodes resolves localhost to the host's own IP
    • When a node joins the master, make sure the token has not expired
    • Make sure kubelet on the node is running normally and enabled at boot
    • For the flannel network, create kube-flannel-rbac.yml first, then kube-flannel.yml

    yum list docker-ce.x86_64 --showduplicates | sort -r

    VIII. What to do when the token expires

    # 1. list the existing tokens
    kubeadm token list
    
    # 2. create a new token
    kubeadm token create
    
    # 3. get the sha256 hash of the CA certificate
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
    
    # 4. join the node to the cluster with the new token
    kubeadm join --token abc123 --discovery-token-ca-cert-hash sha256:efg456  172.16.6.79:6443 --skip-preflight-checks
        # abc123    the newly created token
        # efg456    the certificate's sha256 hash
        # IP:Port   the master's IP and port
    
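    On newer kubeadm releases (roughly 1.9 and later, so treat this as an assumption for the versions used in this article), steps 2 and 3 can usually be collapsed into a single command that prints a ready-to-use join line:

    kubeadm token create --print-join-command   # verify the flag exists with `kubeadm token create --help`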

    docker-ce.x86_64    17.09.0.ce-1.el7.centos    docker-ce-stable

    Thanks

    • 无痴迷,不成功
    • discsthnew

    docker-ce.x86_64    17.06.2.ce-1.el7.centos    docker-ce-stable

    docker-ce.x86_64    17.06.1.ce-1.el7.centos    docker-ce-stable

    docker-ce.x86_64    17.06.0.ce-1.el7.centos    docker-ce-stable

    docker-ce.x86_64    17.03.2.ce-1.el7.centos    docker-ce-stable

    docker-ce.x86_64    17.03.1.ce-1.el7.centos    docker-ce-stable

    docker-ce.x86_64    17.03.0.ce-1.el7.centos    docker-ce-stable

    Kubernetes 1.8 has been validated against Docker 1.11.2, 1.12.6, 1.13.1, and 17.03.2, so here we install Docker 17.03.2 on each node.

    yum makecache fast

    yum install -y --setopt=obsoletes=0 \
      docker-ce-17.03.2.ce-1.el7.centos \
      docker-ce-selinux-17.03.2.ce-1.el7.centos

    systemctl start docker

    systemctl enable docker

    Starting with version 1.13, Docker changed its default firewall rules and disables the FORWARD chain of the iptables filter table (policy DROP). This breaks cross-node Pod communication in a Kubernetes cluster, so run the following on every Docker node:

    iptables -P FORWARD ACCEPT

    You can also add the command above to Docker's systemd unit file as an ExecStartPost:

    ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT

    systemctl daemon-reload

    systemctl restart docker

    2. Install kubeadm and kubelet

    Next, install kubeadm and kubelet on each node:

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=
    EOF

    Test that the repo address is reachable:

    curl

    yum makecache fast

    yum install -y kubelet kubeadm kubectl
    ...
    Installed:
      kubeadm.x86_64 0:1.8.0-0    kubectl.x86_64 0:1.8.0-0    kubelet.x86_64 0:1.8.0-0
    Dependency Installed:
      kubernetes-cni.x86_64 0:0.5.1-0    socat.x86_64 0:1.7.3.2-2.el7

    The install output shows that two dependencies, kubernetes-cni and socat, were also installed:

    Official Kubernetes 1.8 still depends on cni 0.5.1

    socat is a dependency of kubelet

    In the earlier Kubernetes 1.6 HA cluster deployment we installed these two dependencies by hand.

    From the Kubernetes documentation, the kubelet startup parameter:

    --cgroup-driver string   Driver that the kubelet uses to manipulate cgroups on the host. Possible values: 'cgroupfs', 'systemd' (default "cgroupfs")

    The default is cgroupfs, but note that the 10-kubeadm.conf generated when installing kubelet and kubeadm via yum changes this parameter to systemd.

    Look at kubelet's /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, which contains:

    Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

    Print the Docker information with docker info:

    docker info
    ......
    Server Version: 17.03.2-ce
    ......
    Cgroup Driver: cgroupfs

    Docker 17.03 is using the cgroupfs cgroup driver.

    So change Docker's cgroup driver on each node to match kubelet, i.e. create or edit /etc/docker/daemon.json and add the following:

    {"exec-opts":["native.cgroupdriver=systemd"]}

    Restart docker:

    systemctl restart docker

    systemctl status docker

    Enable the kubelet service on each node so that it starts at boot:

    systemctl enable kubelet.service

    3. Initialize the cluster with kubeadm init

    Next, initialize the cluster with kubeadm. node1 is chosen as the master node; run the following command on node1:

    kubeadm init --kubernetes-version=v1.8.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.61.11

    Because we use flannel as the Pod network plugin, the command above specifies --pod-network-cidr=10.244.0.0/16.

    kubeadm init \
    > --kubernetes-version=v1.8.0 \
    > --pod-network-cidr=10.244.0.0/16 \
    > --apiserver-advertise-address=192.168.61.11
    [kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
    [init] Using Kubernetes version: v1.8.0
    [init] Using Authorization modes: [Node RBAC]
    [preflight] Running pre-flight checks
    [preflight] WARNING: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    [preflight] Starting the kubelet service
    [kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.61.11]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated sa key and public key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
    [init] This often takes around a minute; or longer if the control plane images have to be pulled.
    [apiclient] All control plane components are healthy after 28.505733 seconds
    [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [markmaster] Will mark node node1 as master by adding a label and a taint
    [markmaster] Master node1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
    [bootstraptoken] Using token: 9e68dd.7117f03c900f9234
    [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy

    Your Kubernetes master has initialized successfully!

    To start using your cluster, you need to run (as a regular user):

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      http://kubernetes.io/docs/admin/addons/

    You can now join any number of machines by running the following on each node
    as root:

      kubeadm join --token 9e68dd.7117f03c900f9234 192.168.61.11:6443 --discovery-token-ca-cert-hash sha256:82a08ef9c830f240e588a26a8ff0a311e6fe3127c1ee4c5fc019f1369007c0b7

    The above is the output of a successful initialization.

    Key points in it:

    kubeadm 1.8 is still in beta and cannot be used for production. As of now the etcd and apiserver it installs are single instances, which alone rules out production use.

    RBAC is stable in Kubernetes 1.8, and kubeadm 1.8 enables it by default.

    Next it generates the certificates and the related kubeconfig files; we did the same thing in the Kubernetes 1.6 HA cluster deployment, so nothing new here.

    Write down the generated token; you will need it later when adding nodes to the cluster with kubeadm join.

    Also note the warning kubeadm printed: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0).

    The following commands configure how a regular user uses kubectl to access the cluster:

    mkdir -p $HOME/.kube

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Finally it prints the command for joining nodes to the cluster: kubeadm join --token 9e68dd.7117f03c900f9234 192.168.61.11:6443 --discovery-token-ca-cert-hash sha256:82a08ef9c830f240e588a26a8ff0a311e6fe3127c1ee4c5fc019f1369007c0b7

    Check the cluster status:

    kubectl get cs

    NAME                 STATUS    MESSAGE              ERROR

    scheduler            Healthy   ok

    controller-manager   Healthy   ok

    etcd-0               Healthy   {"health": "true"}

    Confirm that all components are in the Healthy state.

    If cluster initialization runs into problems, you can clean up with the following commands:

    kubeadm reset

    ifconfig cni0 down

    ip link delete cni0

    ifconfig flannel.1 down

    ip link delete flannel.1
    rm -rf /var/lib/cni/

    4. Install the Pod network

    Next, install the flannel network add-on:

    mkdir -p ~/k8s/
    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f kube-flannel.yml

    clusterrole "flannel" created

    clusterrolebinding "flannel" created

    serviceaccount "flannel" created

    configmap "kube-flannel-cfg" created

    daemonset "kube-flannel-ds" created

    Note that kube-flannel.yml now already contains the ServiceAccount, ClusterRole, and ClusterRoleBinding that used to live in a separate kube-flannel-rbac.yml. The flannel image referenced in kube-flannel.yml is 0.9.0: quay.io/coreos/flannel:v0.9.0-amd64

    If a node has more than one NIC, then per flannel issue 39701 you currently need to use the --iface argument in kube-flannel.yml to name the cluster host's internal NIC, otherwise DNS may fail to resolve. Download kube-flannel.yml locally and add --iface= to the flanneld startup arguments:

    ......
    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds
    ......
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.9.0-amd64
            command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth1" ]
    ......

    Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state.

    kubectl get pod --all-namespaces -o wide

    5. Let the master node run workloads

    In a cluster initialized by kubeadm, Pods are not scheduled onto the master node for safety reasons; in other words the master does not take part in the workload.

    Since this is a test environment, you can use the following command to let the master node accept workloads:

    kubectl taint nodes node1 node-role.kubernetes.io/master-
    node "node1" untainted

    6. Test DNS

    kubectl run curl --image=radial/busyboxplus:curl -i --tty
    If you don't see a command prompt, try pressing enter.

    [ root@curl-2716574283-xr8zd:/ ]$

    Once inside, run nslookup kubernetes.default to confirm that resolution works:

    nslookup kubernetes.default
    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
    Name:      kubernetes.default
    Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

    7. Add nodes to the Kubernetes cluster

    Below we add the host k8s-node2 to the cluster. On k8s-node2 run:

    kubeadm join --token 9e68dd.7117f03c900f9234 192.168.61.11:6443 --discovery-token-ca-cert-hash sha256:82a08ef9c830f240e588a26a8ff0a311e6fe3127c1ee4c5fc019f1369007c0b7
    [kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
    [preflight] Running pre-flight checks
    [discovery] Trying to connect to API Server "192.168.61.11:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://192.168.61.11:6443" to validate TLS against the pinned public key
    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.61.11:6443"
    [discovery] Successfully established connection with API Server "192.168.61.11:6443"
    [bootstrap] Detected server version: v1.8.0
    [bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

    Node join complete:
    * Certificate signing request sent to master and response received.
    * Kubelet informed of new secure connection details.

    Run 'kubectl get nodes' on the master to see this machine join.

    This went very smoothly. Now run the following on the master node to list the nodes in the cluster:

    kubectl get nodes

    NAME      STATUS    ROLES     AGE       VERSION
    node1     Ready     master    25m       v1.8.0
    node2     Ready               10m       v1.8.0

    How to remove a node from the cluster

    If you need to remove node2 from the cluster, run the following commands.

    On the master node:

    kubectl drain node2 --delete-local-data --force --ignore-daemonsets
    kubectl delete node node2

    On node2:

    kubeadm reset

    ifconfig cni0 down

    ip link delete cni0

    ifconfig flannel.1 down

    ip link delete flannel.1
    rm -rf /var/lib/cni/

    8. Deploy the dashboard add-on

    Note that the dashboard is now at version 1.7.1. The 1.7.x dashboard tightens security: by default it must be accessed over https, it adds a login page, and it adds an init container, gcr.io/google_containers/kubernetes-dashboard-init-amd64.

    Also note that the dashboard has reorganized the directory layout of its deployment files:

    mkdir -p ~/k8s/
    wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
    kubectl create -f kubernetes-dashboard.yaml

    The ServiceAccount kubernetes-dashboard defined in kubernetes-dashboard.yaml has only fairly limited permissions, so we create a kubernetes-dashboard-admin ServiceAccount and grant it cluster-admin rights. Create kubernetes-dashboard-admin.rbac.yaml:

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-admin
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard-admin
      labels:
        k8s-app: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard-admin
      namespace: kube-system

    kubectl create -f kubernetes-dashboard-admin.rbac.yaml

    serviceaccount "kubernetes-dashboard-admin" created

    clusterrolebinding "kubernetes-dashboard-admin" created

    Get the token of kubernetes-dashboard-admin:

    kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
    kubernetes-dashboard-admin-token-pfss5   kubernetes.io/service-account-token   3   14s

    kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-pfss5
    Name:        kubernetes-dashboard-admin-token-pfss5
    Namespace:   kube-system
    Labels:
    Annotations: kubernetes.io/service-account.name=kubernetes-dashboard-admin
                 kubernetes.io/service-account.uid=1029250a-ad76-11e7-9a1d-08002778b8a1
    Type:        kubernetes.io/service-account-token

    Data
    ====
    ca.crt:     1025 bytes
    namespace:  11 bytes

    token:eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1wZnNzNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjEwMjkyNTBhLWFkNzYtMTFlNy05YTFkLTA4MDAyNzc4YjhhMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.Bs6h65aFCFkEKBO_h4muoIK3XdTcfik-pNM351VogBJD_pk5grM1PEWdsCXpR45r8zUOTpGM-h8kDwgOXwy2i8a5RjbUTzD3OQbPJXqa1wBk0ABkmqTuw-3PWMRg_Du8zuFEPdKDFQyWxiYhUi_v638G-R5RdZD_xeJAXmKyPkB3VsqWVegoIVTaNboYkw6cgvMa-4b7IjoN9T1fFlWCTZI8BFXbM8ICOoYMsOIJr3tVFf7d6oVNGYqaCk42QL_2TfB6xMKLYER9XDh753-_FDVE5ENtY5YagD3T_s44o0Ewara4P9C3hYRKdJNLxv7qDbwPl3bVFH3HXbsSxxF3TQ

    Use the token above to log in on the dashboard's login page.

    9. Deploy the heapster add-on

    Next install Heapster to add usage statistics and monitoring to the cluster and charts to the Dashboard, with InfluxDB as Heapster's backend storage. Start the deployment:

    mkdir -p ~/k8s/heapster
    cd ~/k8s/heapster
    wget https://github.com/kubernetes/heapster/archive/v1.4.0.tar.gz
    tar -zxvf v1.4.0.tar.gz
    cd heapster-1.4.0/deploy/kube-config/influxdb
    kubectl create -f ./

    Finally, confirm that all pods are in the Running state, then open the Dashboard: the cluster's usage statistics are shown as charts.

    Docker images involved in this installation:

    gcr.io/google_containers/kube-apiserver-amd64:v1.8.0
    gcr.io/google_containers/kube-controller-manager-amd64:v1.8.0
    gcr.io/google_containers/kube-scheduler-amd64:v1.8.0
    gcr.io/google_containers/kube-proxy-amd64:v1.8.0
    gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
    gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
    gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
    quay.io/coreos/flannel:v0.9.0-amd64
    gcr.io/google_containers/etcd-amd64:3.0.17
    gcr.io/google_containers/pause-amd64:3.0
    gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.1
    gcr.io/google_containers/kubernetes-dashboard-init-amd64:v1.0.0
    gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
    gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
    gcr.io/google_containers/heapster-amd64:v1.4.0
