Kubernetes is an open source container management system that allows the deployment, orchestration, and scaling of containerized applications and microservices across multiple hosts. This tutorial describes the installation and configuration of a multi-node Kubernetes cluster on CentOS 7.
A single master host will manage the cluster and run several core Kubernetes services.
API Server - The REST API endpoint for managing most aspects of the Kubernetes cluster.
Replication Controller - Ensures the specified number of pod replicas is always running by starting or shutting down pods as needed.
Scheduler - Finds a suitable host where new pods will reside.
etcd - A distributed key-value store where Kubernetes stores information about itself, pods, services, etc.
Flannel - A network overlay that will allow containers to communicate across multiple hosts.
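How these services find etcd and each other is controlled through the files under /etc/kubernetes. As a rough sketch (assuming the stock kubernetes-master package layout on CentOS 7; variable and flag names can differ between package versions), the master's /etc/kubernetes/apiserver might look like:

KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS=""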
The minion hosts will run the following services to manage containers and their network.
Kubelet - Host level pod management; determines the state of pod containers based on the pod manifest received from the Kubernetes master.
Proxy - Manages the container network (IP addresses and ports) based on the network service manifests received from the Kubernetes master.
Docker - An API and framework built around Linux Containers (LXC) that allows for the easy management of containers and their images.
Flannel - A network overlay that will allow containers to communicate across multiple hosts.
Note: Flannel, or another network overlay service, is required on the minions when there is more than one minion host. This allows the containers, which are typically on their own internal subnet, to communicate across multiple hosts. Since the Kubernetes master does not typically run containers, the Flannel service is not required on the master.
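On the hosts that run flanneld, the daemon needs to know where etcd lives and which etcd key holds the network configuration. A minimal sketch of /etc/sysconfig/flanneld under that assumption (older CentOS 7 flannel packages use FLANNEL_ETCD and FLANNEL_ETCD_KEY; newer builds rename these to FLANNEL_ETCD_ENDPOINTS and FLANNEL_ETCD_PREFIX):

FLANNEL_ETCD="http://192.168.56.101:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"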
Using hostname resolution will help clarify the relationship between all the hosts. Add the following mapping to the /etc/hosts file on every host to allow proper name resolution across the cluster.
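For example, with the master at 192.168.56.101 and a single minion at 192.168.56.103 (the addresses that appear later in this tutorial), the entries could look like the following; the minion hostname is purely illustrative:

192.168.56.101   kube-master
192.168.56.103   kube-minion1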
On the master, restart and enable the core Kubernetes services:

# for SERVICES in kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
On each minion, reload the systemd units, then restart and enable the node services:

# systemctl daemon-reload
# for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
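The loop above assumes each minion's kubelet has already been pointed at the master. A rough sketch of /etc/kubernetes/kubelet under that assumption (variable and flag names may differ slightly between package versions; the hostname override matches the minion used later in this tutorial):

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.56.103"
KUBELET_API_SERVER="--api_servers=http://kube-master:8080"
KUBELET_ARGS=""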
Configure Flannel
Set the subnet that Flannel will use for container communication by writing the network configuration into etcd on the master:
# etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
# etcdctl get /coreos.com/network/config
{"Network":"172.17.0.0/16"}
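Once flanneld restarts on a minion, it leases a per-host subnet out of this range and records the lease under /coreos.com/network/subnets in etcd; it also writes the leased subnet and MTU to /run/flannel/subnet.env on the host. Both can be used to confirm that the overlay came up (these paths assume a stock flannel install with the default subnet file location):

# etcdctl ls /coreos.com/network/subnets
# cat /run/flannel/subnet.env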
On the master, verify that the minion has registered and is Ready:

# kubectl get nodes
NAME             LABELS                                   STATUS
192.168.56.103   kubernetes.io/hostname=192.168.56.103    Ready
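With the node reporting Ready, a simple pod can be used to confirm that scheduling and container startup work end to end. A minimal sketch, where the pod name and image are only illustrative; save it as nginx-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

Then create the pod and watch it come up:

# kubectl create -f nginx-pod.yaml
# kubectl get pods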
etcd cluster
Verify the etcd member list from the master:

# etcdctl --peers 192.168.56.101:2379 member list
ce2a822cea30bfca: name=default peerURLs=http://localhost:2380,http://localhost:7001 clientURLs=http://kube-master:2379
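The member list above shows a single default member, which is what a stock etcd install on the master looks like before any additional members are added. For a quick health summary of the same cluster, the cluster-health subcommand of the etcdctl v2 tooling can be used:

# etcdctl --peers 192.168.56.101:2379 cluster-health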