1: Installing a Kubernetes cluster
1.1 Kubernetes architecture
We normally drive Kubernetes through kubectl, which talks to the APIServer; the APIServer in turn calls the other components to deploy to and control the Nodes.
The APIServer's core job is create/read/update/delete on the core objects (Pod, Service, RC, etc.); it is also the data-exchange hub between the cluster's components.
etcd sits behind the APIServer and stores all resource state. Next comes the Controller Manager: if Kubernetes is a self-driving system, it needs a set of rules that keep that system on track.
The Controller Manager is that manager, or controller. It contains 8 controllers, covering replicas, nodes, resources, namespaces, services and so on.
The Scheduler then places Pods onto Nodes; once scheduled, the kubelet takes over management on each Node.
The kubelet carries out the tasks the Master hands down to the Node (i.e. the Scheduler's placement decisions) and manages the Pods and the containers inside them.
After scheduling completes, the kubelet also registers the Node with the APIServer, reports Node status to the Master periodically, and monitors container and node resources through cAdvisor.
Because microservices are deployed in a distributed fashion, so are their Pods and containers. To make those Pods and containers easy to reach, the Service (kube-proxy) process is introduced; it implements reverse proxying and load balancing.
Reference:
https://www.cnblogs.com/Tao9/p/12026232.html
Besides the core components, there are some recommended add-ons:
| Add-on | Purpose |
|---|---|
| kube-dns | Provides DNS for the whole cluster |
| Ingress Controller | Provides an external entry point for services |
| Heapster | Provides resource monitoring |
| Dashboard | Provides a GUI |
| Federation | Provides clusters spanning availability zones |
| Fluentd-elasticsearch | Provides cluster log collection, storage and query |
1.2: Set the IP addresses, hostnames and hosts entries
10.0.0.11 k8s-master
10.0.0.12 k8s-node-1
10.0.0.13 k8s-node-2
scp /etc/hosts 10.0.0.13:/etc/hosts
Every node needs these hosts entries.
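A small sketch for pushing the same hosts file from the master to every node (assumes root SSH access to the two node IPs above):
# run on k8s-master
for ip in 10.0.0.12 10.0.0.13; do
    scp /etc/hosts root@$ip:/etc/hosts
done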
1.3: Install etcd on the master node
Upstream positions etcd as a reliable distributed key-value store: it keeps the critical data for the whole distributed cluster and helps the cluster run correctly.
https://www.cnblogs.com/linuxws/p/11194403.html  (etcd parameter reference)
rm -fr /etc/yum.repos.d/local.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-matser /etc/yum.repos.d] # yum install etcd -y
[root@k8s-matser /k8s_yaml]# vim /etc/etcd/etcd.conf
line 3:  ETCD_DATA_DIR="/var/lib/etcd/default.etcd"        # data directory (like a database's data dir)
line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"       # address etcd listens on for client traffic
line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"  # client URL advertised to the outside; in a cluster every node advertises its own address and name, they must not clash
[root@k8s-matser /k8s_yaml] # systemctl start etcd.service
[root@k8s-matser /k8s_yaml] # systemctl enable etcd.service
-- port 2379 serves clients; port 2380 is used for peer-to-peer cluster synchronization
-- quick key/value demo: set a key, then get it
[root@k8s-matser /k8s_yaml] # etcdctl set testdir/testkey0 0
[root@k8s-matser /k8s_yaml] # etcdctl get testdir/testkey0
-- check cluster health
[root@k8s-matser /k8s_yaml] # etcdctl -C http://10.0.0.11:2379 cluster-health
etcd natively supports clustering.
Exercise 1: deploy an etcd cluster with three nodes.
1.4: Install Kubernetes on the master node
[root@k8s-matser /k8s_yaml] # yum install kubernetes-master.x86_64 -y
[root@k8s-matser /k8s_yaml] # vim /etc/kubernetes/apiserver
line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"   # bind address
line 11: KUBE_API_PORT="--port=8080"   # API port
line 14: KUBELET_PORT="--kubelet-port=10250"   # port the kubelets listen on
line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"   # etcd cluster endpoints, comma-separated http URLs
line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# admission control list; ServiceAccount has been removed from it
[root@k8s-matser /etc/yum.repos.d] # vim /etc/kubernetes/config
line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"   # apiserver address
systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service
Check that the services are running correctly:
[root@k8s-master ~]# kubectl get componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
1.5: Install Kubernetes on the node nodes
[root@k8s-node-1 ~] # yum install kubernetes-node.x86_64 -y
[root@k8s-node-1 ~] # vim /etc/kubernetes/config
line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"   # points at the apiserver
[root@k8s-node-1 ~] # vim /etc/kubernetes/kubelet
line 5:  KUBELET_ADDRESS="--address=0.0.0.0"   # listen address
line 8:  KUBELET_PORT="--port=10250"   # must match the port configured on the master
line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.12"   # must be unique per node
line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"   # apiserver address
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.11:5000/pod-infrastructure:latest"   # where to pull the pod infra container image from
[root@k8s-node-1 ~] # systemctl enable kubelet.service
[root@k8s-node-1 ~] # systemctl restart kubelet.service
[root@k8s-node-1 ~] # systemctl enable kube-proxy.service
[root@k8s-node-1 ~] # systemctl restart kube-proxy.service
Check from the master node:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS AGE
10.0.0.12 Ready 6m
10.0.0.13 Ready 3s
1.6: Configure the flannel network on all nodes
flannel is a network plugin that gives containers cross-host connectivity.
Install the plugin on all nodes.
flannel stores its network state in etcd.
(Docker's built-in overlay driver uses consul instead.)
[all nodes]# yum install flannel -y
[all nodes]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
## master node:
-- create the network key in etcd (required); if flanneld will not start, watch the log: tailf /var/log/messages
[root@k8s-matser /k8s_yaml] # etcdctl mk /atomic.io/network/config '{ "Network": "172.18.0.0/16" }'
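The key can be read back from etcd to confirm it was written (same client address as above):
etcdctl -C http://10.0.0.11:2379 get /atomic.io/network/config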
[root@k8s-matser /etc/yum.repos.d] # systemctl restart flanneld.service
[root@k8s-matser /] # yum install docker -y
[root@k8s-matser /] # systemctl enable flanneld.service
[root@k8s-matser /] # systemctl restart flanneld.service
[root@k8s-matser /] # cat /usr/lib/systemd/system/docker.service.d/flannel.conf
[root@k8s-node-2 /] # ifconfig
[root@k8s-matser /] # systemctl restart docker
[root@k8s-matser /] # systemctl enable docker
[root@k8s-matser /] # systemctl restart kube-apiserver.service
[root@k8s-matser /] # systemctl restart kube-controller-manager.service
[root@k8s-matser /] # systemctl restart kube-scheduler.service
---- test cross-host container connectivity
## node nodes:
systemctl enable flanneld.service
systemctl restart flanneld.service
systemctl restart docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service
-- Docker rewrites the iptables FORWARD policy when it starts, which breaks cross-host pod traffic; fix it on all nodes:
[root@k8s-node-2 /] # vim /usr/lib/systemd/system/docker.service
# add one line under the [Service] section
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
---------
Reload and restart:
systemctl daemon-reload
systemctl restart docker
Start a container on each node and ping across hosts to verify the network.
1.7: Configure the master as an image registry
# on all nodes (or scp the file around)
[root@k8s-matser /] # vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"insecure-registries": ["10.0.0.11:5000"]
}
---
trust the insecure registry and use the registry mirror for acceleration
---
[root@k8s-node-2 /] # systemctl restart docker
#master节点
[root@k8s-matser /] # sz registry.tar.gz
[root@k8s-matser /] # docker load -i registry.tar.gz
[root@k8s-node-2 /] # docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
Test: tag an image and push it to the registry.
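A quick smoke test of the registry, as a sketch; it assumes an nginx:1.13 image has already been loaded locally with docker load (adjust the source tag to whatever docker images shows):
docker tag docker.io/nginx:1.13 10.0.0.11:5000/nginx:1.13
docker push 10.0.0.11:5000/nginx:1.13
# ask the registry what it now holds
curl http://10.0.0.11:5000/v2/_catalog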
2: What is k8s, and what does it do?
k8s is a management tool for a cluster of Docker hosts.
k8s is a container orchestration tool.
2.1 Core features of k8s
Self-healing: restarts failed containers, replaces and reschedules containers when a node becomes unavailable, kills containers that fail the user-defined health check, and does not advertise a container to clients until it is ready to serve.
Elastic scaling: by monitoring container CPU load, the number of containers is increased when the average rises above 80% and decreased when it drops below 10%.
Service discovery and load balancing: no need to modify the application to use an unfamiliar discovery mechanism; Kubernetes gives containers their own IP addresses and a single DNS name for a group of containers, and load-balances across them.
Rolling upgrades and one-click rollback: Kubernetes rolls out changes to the application or its configuration gradually while watching application health, so it never terminates all instances at once. If something goes wrong, Kubernetes rolls the change back for you, building on the growing ecosystem of deployment solutions.
Secret and configuration management: e.g. keeping database credentials (test DB passwords) out of the web container image.
2.2 A short history of k8s
2014: the project starts as a Docker container orchestration tool.
July 2015: Kubernetes 1.0 is released and joins the CNCF as an incubating project.
2016: Kubernetes beats its two rivals, Docker Swarm and Mesos Marathon (around v1.2).
2017: versions 1.5 - 1.9.
2018: k8s graduates from the CNCF; versions 1.10, 1.11, 1.12.
2019: versions 1.13, 1.14, 1.15, 1.16, 1.17.
CNCF: Cloud Native Computing Foundation (the incubator).
kubernetes (k8s): Greek for helmsman or pilot; the leader in container orchestration.
Google drew on 15 years of container experience with its Borg platform and rewrote Borg in Go to produce Kubernetes.
2.3 Ways to install k8s
yum install: version 1.5, the easiest to get working and the best for learning.
Build from source: the hardest, but can install the latest version.
Binary install: tedious, can install the latest version; usually scripted with shell, ansible or saltstack.
kubeadm: the easiest, needs internet access, can install the latest version.
minikube: good for developers who just want to try k8s, needs internet access.
2.4 Where k8s fits
qcr.io
# https://oldqiang.com/archives/624.html  (kubeadm guide)
k8s is best suited to running microservice projects.
Application architectures:
MVC (monolith):
- [Seckill](https://www.jd.com/miaosha)
- [Coupons](https://www.jd.com/a)
- [PLUS membership](https://www.jd.com/plus)
- [Brand flash sale](https://www.jd.com/red)
- [Auctions](https://www.jd.com/paimai)
- [JD appliances](https://www.jd.com/jiadian)
- [JD supermarket](https://www.jd.com/chaoshi)
One site implements every feature and runs as one stack (~10 GB).
Microservices:
- [Seckill](https://miaosha.jd.com/) -- its own stack, ~200 MB
- [Coupons](https://a.jd.com/) -- its own stack, ~200 MB
- [PLUS membership](https://plus.jd.com) -- its own stack
- [Brand flash sale](https://red.jd.com/) -- its own stack
- [Auctions](https://paimai.jd.com/) -- its own stack
- [JD appliances](https://jiadian.jd.com/) -- its own stack
- [JD supermarket](https://chaoshi.jd.com/) -- its own stack
- [Homepage](https://www.jd.com/) -- navigation only
One feature per site, one domain per feature.
Benefits of microservices:
- handle more concurrent users
- better stability per business function
- faster code updates and releases
Impact of microservices on ops:
- much more work: ansible, automated releases, ELK for log handling
k8s is best suited to running microservices.
On physical machines you install the OS, install the runtime, deploy the code and maintain it, and that workload is multiplied by four environments:
- development: one full set of microservices
- testing: one full set of microservices
- staging (pre-production): one full set of microservices
- production: one full set of microservices
3: Common k8s resources
3.1 Creating a Pod resource
The Pod is the smallest resource unit in k8s.
Any k8s resource can be defined by a YAML manifest.
Main parts of a k8s YAML manifest:
apiVersion: v1      # API version
kind: Pod           # resource type
metadata:           # attributes (name, labels, ...)
spec:               # the detailed specification
k8s_pod.yaml:
[root@k8s-node-2 /] # docker load -i docker_nginx1.13.tar.gz
[root@k8s-matser /] # mkdir k8s_yaml
[root@k8s-matser /] # cd k8s_yaml/
[root@k8s-matser /k8s_yaml] # mkdir pod
apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
app: web
spec:
containers:
- name: nginx
image: 10.0.0.11:5000/nginx:1.13
ports:
- containerPort: 80
[root@k8s-matser /k8s_yaml/pod] # vi k8s_pod.yml
[root@k8s-matser /k8s_yaml/pod] # cat k8s_pod.yml
[root@k8s-matser /k8s_yaml/pod] # kubectl create -f k8s_pod.yml
[root@k8s-matser /k8s_yaml/pod] # kubectl get pod
[root@k8s-matser /k8s_yaml/pod] # kubectl describe pod nginx
51m 51m 1 {default-scheduler } Normal Scheduled Successfully assigned nginx to 10.0.0.13
51m 45m 6 {kubelet 10.0.0.13} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
51m 43m 33 {kubelet 10.0.0.13} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
# Cause: kubelet defaults to pulling the pod-infrastructure image from registry.access.redhat.com, which is unreachable here. Upload pod-infrastructure-latest.tar.gz to the private registry and point kubelet at it instead.
On all node nodes:
vim /etc/kubernetes/kubelet
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.11:5000/pod-infrastructure:latest"
systemctl restart kubelet.service
[root@k8s-matser /k8s_yaml/pod] # curl -I 172.18.2.2
A pod resource consists of at least two containers: the pod infrastructure container plus the business container(s) (at most 1 + 4).
Why does creating one pod start at least two containers?
- the pod infrastructure (pause) container provides the k8s plumbing (shared network and other namespaces)
- the nginx container provides the actual business function
Within one pod, the business containers' ports must not conflict.
Every pod that is created gets one pod infrastructure container plus the business containers defined in the pod.
curl -I 172.18.78.2 -- hitting the pod IP reaches the nginx inside the pod; both the nginx container and pod-infrastructure sit behind it.
So whose IP is 172.18.78.2?
172.18.78.2 is the pod infrastructure container's IP.
The nginx container and the pod infrastructure container share the same IP address.
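This can be checked on the node that runs the pod; a sketch, assuming the pod was scheduled to 10.0.0.13 (the container names will differ):
# on the node hosting the pod
docker ps | grep nginx
# expect two containers belonging to the pod: one from 10.0.0.11:5000/nginx:1.13 and one from
# 10.0.0.11:5000/pod-infrastructure:latest; only the infrastructure (pause) container owns the
# network namespace, and therefore the pod IP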
Pod manifest 2 (two business containers in one pod):
apiVersion: v1
kind: Pod
metadata:
name: test
labels:
app: web
spec:
containers:
- name: nginx
image: 10.0.0.11:5000/nginx:1.13
ports:
- containerPort: 80
- name: alpine
image: 10.0.0.11:5000/alpine:latest
command: ["sleep","1000"]
The pod is the smallest resource unit in k8s.
3.2 The ReplicationController resource
RC: keeps the specified number of pods alive at all times; the RC associates with its pods through a label selector.
Common operations on k8s resources:
1 -- create a resource
[root@k8s-matser /k8s_yaml/rc] # kubectl create -f xxx.yaml
2 -- list a resource type
[root@k8s-matser /k8s_yaml/rc] #kubectl get pod|rc
3 -- show the details of one resource
[root@k8s-matser /k8s_yaml/rc] # kubectl describe pod nginx
kubectl delete pod nginx   or   kubectl delete -f xxx.yaml
[root@k8s-matser /k8s_yaml/rc] # kubectl get pod -o wide --show-labels
4 -- modify labels (or any other field) in place
[root@k8s-matser /k8s_yaml/rc] # kubectl edit pod nginx
5 -- delete a resource
[root@k8s-matser /k8s_yaml] # kubectl delete pod nginx
[root@k8s-matser /k8s_yaml/rc] # kubectl delete node 10.0.0.12
[root@k8s-matser /k8s_yaml/deployment] # kubectl delete -f .
6 -- edit the live configuration of a specific resource
[root@k8s-matser /k8s_yaml] # kubectl edit pod nginx
7 -- list the available apiVersions
[root@k8s-matser /k8s_yaml/rc] # kubectl api-versions
8 -- show a resource's schema and apiVersion
[root@k8s-matser /k8s_yaml/rc] # kubectl explain pod
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl explain pod.spec.volumes.nfs
9 -- scale a resource online
[root@k8s-matser /k8s_yaml/svc] # kubectl scale rc nginx2 --replicas=2
10 -- exec into a container
[root@k8s-matser /k8s_yaml/svc] # kubectl exec -it nginx2-9xxzc /bin/bash
11 -- view logs
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl logs mysql-1pjwk
12 -- adjust the replica count
[root@k8s-matser /k8s_yaml/svc] # kubectl scale rc nginx2 --replicas=3
13 -- view a service's endpoints (automatic discovery)
[root@k8s-matser /k8s_yaml/svc] # kubectl describe svc
Create an RC:
[root@k8s-matser /k8s_yaml/rc] # mkdir rc
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx
spec:
replicas: 5 # 5 replicas
selector:
app: myweb
template: # pod template
metadata:
labels:
app: myweb
spec:
containers:
- name: myweb
image: 10.0.0.11:5000/nginx:1.13
ports:
- containerPort: 80
If a node dies, the pods are automatically recreated elsewhere.
RC rolling upgrade: write a new manifest, e.g. nginx-rc1.15.yaml
upgrade:  kubectl rolling-update nginx -f nginx-rc1.15.yaml --update-period=10s
rollback: kubectl rolling-update nginx2 -f nginx-rc.yaml --update-period=1s
During an RC rolling update the labels change, so access is briefly broken mid-way; use a Deployment instead.
3.3 The Service resource
ClusterIP Service: the default type; exposes an internal cluster IP, used for service-to-service traffic inside the cluster.
NodePort Service: exposes a fixed port on every node of the cluster, so external clients can reach it via <NodeIP>:<NodePort>.
LoadBalancer: exposes the service through a cloud provider's load balancer.
ExternalName: maps the service to an external DNS name (e.g. foo.bar.example.com).


A Service exposes a pod's ports.
Create a Service:
[root@k8s-matser /k8s_yaml/svc] # mkdir svc
apiVersion: v1
kind: Service # short name: svc
metadata:
name: myweb
spec:
type: NodePort # default is ClusterIP
ports:
- port: 80 # ClusterIP (VIP) port
nodePort: 30000 # port opened on each node (host)
targetPort: 80 # container (pod) port
selector:
app: myweb2
[root@k8s-matser /k8s_yaml/svc] # kubectl get svc -o wide
[root@k8s-matser /k8s_yaml/svc] # kubectl scale rc nginx2 --replicas=3
[root@k8s-matser /k8s_yaml/svc] # kubectl exec -it nginx2-4pw6m /bin/bash   # enter the pod's container
--- test the load balancing
[root@k8s-matser /k8s_yaml/svc] # curl 10.0.0.13:30000
web01
[root@k8s-matser /k8s_yaml/svc] # curl 10.0.0.13:30000
web02
-- automatic endpoint discovery
[root@k8s-matser /k8s_yaml/svc] # kubectl describe svc
Endpoints are removed and re-added automatically as pods are deleted and recreated.
Change the allowed nodePort range:
[root@k8s-matser /k8s_yaml/svc] # vim /etc/kubernetes/apiserver
KUBE_API_ARGS="--service-node-port-range=3000-50000"
Create a Service resource from the command line
-- create a svc for the RC:
[root@k8s-matser] # kubectl expose rc nginx2 --port=80 --target-port=80 --type=NodePort
(nginx2 is the RC name; --port is the VIP port, --target-port the backend container port)
Creating a Service non-interactively on the command line cannot set a custom nodePort (host port).
A Service uses iptables for load balancing by default; since k8s 1.8, LVS/IPVS (layer-4 load balancing, TCP/UDP) is the recommended mode.
3.4 The Deployment resource
Because an RC rolling upgrade interrupts access to the service, k8s introduced the Deployment resource.
Create a Deployment:
[root@k8s-matser /k8s_yaml/deployment] # kubectl create -f deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 3
strategy:
rollingUpdate:
maxSurge: 1 # extra pods that may be started during the update
maxUnavailable: 1 # max pods that may be unavailable during the update
type: RollingUpdate
minReadySeconds: 30
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: 10.0.0.11:5000/nginx:1.13
ports:
- containerPort: 80
resources:
limits:
cpu: 100m
requests:
cpu: 100m
Create a svc to access it; the deployment can also be tweaked live:
[root@k8s-matser /k8s_yaml/deployment] # kubectl edit deployment nginx
Deployment upgrade and rollback from the command line
1 -- create a deployment from the command line
kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
(nginx is both the resource name and the container name; 3 replicas; --record keeps the history for upgrade/rollback)
2 -- upgrade the image from the command line
kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.15
3 -- list all historical revisions of the deployment
kubectl rollout history deployment nginx
4 -- roll the deployment back to the previous revision
kubectl rollout undo deployment nginx
5 -- roll the deployment back to a specific revision
kubectl rollout undo deployment nginx --to-revision=2
Advantages of a Deployment over an RC:
1: rolling upgrades do not interrupt access
2: rolling upgrades do not depend on a second manifest file
3: editing a Deployment's configuration takes effect immediately
4: creating, upgrading and rolling back a Deployment can all be done on the command line
Deployment rolling-update strategy:
strategy:
rollingUpdate:
maxSurge: 1 # number of extra pods started during the update
maxUnavailable: 1 # max number of pods unavailable during the update
type: RollingUpdate
minReadySeconds: 30 # wait between update batches
replicas: 3
3.5 Exercise: tomcat + mysql
Inside k8s, containers reach each other through the Service VIP!
rz -E    # upload the image tarballs
ls
[root@k8s-node-2 ~] # docker load -i tomcat-app-v2.tar.gz
[root@k8s-node-2 ~] # docker load -i docker-mysql-5.7.tar.gz
[root@k8s-node-2 ~] # docker tag docker.io/kubeguide/tomcat-app:v2 10.0.0.11:5000/tomcat-app:v2
[root@k8s-node-2 ~] # docker tag docker.io/mysql:5.7 10.0.0.11:5000/mysql:5.7
[root@k8s-node-2 ~] # docker push 10.0.0.11:5000/mysql:5.7
[root@k8s-node-2 ~] # docker push 10.0.0.11:5000/tomcat-app:v2
kodexplorer:
rc+svc
deploy+svc


http://10.0.0.12:30008/demo
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl exec -it mysql-mxctm /bin/bash
root@mysql-mxctm:/# mysql -uroot -p123456
mysql> select * from T_USERS
Start everything up.
Note the mysql service (VIP) address; the tomcat RC must point at it:
rename tomcat wordpress *
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl create -f mysql-rc.yml
[root@k8s-matser /k8s_yaml/tomcat_demo] # cat mysql-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
name: mysql
spec:
replicas: 1
selector:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: 10.0.0.11:5000/mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: '123456'
------
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl create -f mysql-svc.yml
[root@k8s-matser /k8s_yaml/tomcat_demo] # cat mysql-svc.yml
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
targetPort: 3306
selector:
app: mysql
-----
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl create -f tomcat-rc.yml
[root@k8s-matser /k8s_yaml/tomcat_demo] # cat tomcat-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
name: myweb
spec:
replicas: 1
selector:
app: myweb
template:
metadata:
labels:
app: myweb
spec:
containers:
- name: myweb
image: 10.0.0.11:5000/tomcat-app:v2
ports:
- containerPort: 8080
env:
- name: MYSQL_SERVICE_HOST
value: '10.254.9.254'
- name: MYSQL_SERVICE_PORT
value: '3306'
-----
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl create -f tomcat-svc.yml
[root@k8s-matser /k8s_yaml/tomcat_demo] # cat tomcat-svc.yml
apiVersion: v1
kind: Service
metadata:
name: myweb
spec:
type: NodePort
ports:
- port: 8080
nodePort: 30008
selector:
app: myweb
3.5 WordPress (adapt the docker-compose file)
version: "3"
services:
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: 123456
MYSQL_DATABASE: wordpress
MYSQL_USER: tom
MYSQL_PASSWORD: 123456
wordpress:
image: 10.30.12.55/docker/wordpress
ports:
- "8000:80"
restart: always
environment:
WORDPRESS_DB_NAME: wordpress
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: tom
WORDPRESS_DB_PASSWORD: 123456
----
Upload the images
[root@k8s-matser /k8s_yaml/wordpress] # kubectl create -f mysql-rc.yml
[root@k8s-matser /k8s_yaml/wordpress] # cat mysql-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
name: wp-mysql
spec:
replicas: 1
selector:
app: wp-mysql
template:
metadata:
labels:
app: wp-mysql
spec:
containers:
- name: mysql
image: 10.0.0.11:5000/mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: '123456'
- name: MYSQL_DATABASE
value: 'wordpress'
- name: MYSQL_USER
value: 'tom'
- name: MYSQL_PASSWORD
value: '123456'
----
[root@k8s-matser /k8s_yaml/wordpress] # kubectl create -f mysql-svc.yml
[root@k8s-matser /k8s_yaml/wordpress] # cat mysql-svc.yml
apiVersion: v1
kind: Service
metadata:
name: wp-mysql
spec:
ports:
- port: 3306
targetPort: 3306
selector:
app: wp-mysql
-----------
[root@k8s-matser /k8s_yaml/wordpress] # kubectl create -f wordpress-rc.yml
[root@k8s-matser /k8s_yaml/wordpress] # cat wordpress-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
name: wordpress
spec:
replicas: 1
selector:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
containers:
- name: wordpress
image: 10.0.0.11:5000/wordpress:v1
ports:
- containerPort: 80
env:
- name: WORDPRESS_DB_NAME
value: 'wordpress'
- name: WORDPRESS_DB_HOST
value: '10.254.14.33'
- name: WORDPRESS_DB_USER
value: 'tom'
- name: WORDPRESS_DB_PASSWORD
value: '123456'
-----
[root@k8s-matser /k8s_yaml/wordpress] # kubectl create -f wordpress-svc.yml
[root@k8s-matser /k8s_yaml/wordpress] # cat wordpress-svc.yml
apiVersion: v1
kind: Service
metadata:
name: wordpress
spec:
type: NodePort
ports:
- port: 80
nodePort: 30009
selector:
app: wordpress
http://10.0.0.12:30009/wp-admin/install.php
3.5 Zabbix (from docker-compose)
[root@k8s-matser /k8s_yaml/zabbix/zabbix_yam] # cat zabbix-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
name: mysql-server
spec:
replicas: 1
selector:
app: mysql-server
template:
metadata:
labels:
app: mysql-server
spec:
nodeName: 10.0.0.13
containers:
- name: mysql
image: 10.0.0.11:5000/mysql:5.7
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3306
args: ["--character-set-server=utf8","--collation-server=utf8_bin"]
env:
- name: MYSQL_ROOT_PASSWORD
value: 'root_pwd'
- name: MYSQL_DATABASE
value: 'zabbix'
- name: MYSQL_USER
value: 'zabbix'
- name: MYSQL_PASSWORD
value: 'zabbix_pwd'
---
apiVersion: v1
kind: ReplicationController
metadata:
name: zabbix-java-gateway
spec:
replicas: 1
selector:
app: zabbix-java-gateway
template:
metadata:
labels:
app: zabbix-java-gateway
spec:
nodeName: 10.0.0.13
containers:
- name: zabbix-java-gateway
imagePullPolicy: IfNotPresent
image: 10.0.0.11:5000/zabbix-java-gateway:latest
---
apiVersion: v1
kind: ReplicationController
metadata:
name: zabbix-server
spec:
replicas: 1
selector:
app: zabbix-server
template:
metadata:
labels:
app: zabbix-server
spec:
nodeName: 10.0.0.13
containers:
- name: zabbix-server
imagePullPolicy: IfNotPresent
image: 10.0.0.11:5000/zabbix-server-mysql:latest
ports:
- containerPort: 10051
env:
- name: DB_SERVER_HOST
value: 'mysql-server'
- name: MYSQL_DATABASE
value: 'zabbix'
- name: MYSQL_USER
value: 'zabbix'
- name: MYSQL_PASSWORD
value: 'zabbix_pwd'
- name: MYSQL_ROOT_PASSWORD
value: 'root_pwd'
- name: ZBX_JAVAGATEWAY
value: 'zabbix-java-gateway'
---
apiVersion: v1
kind: ReplicationController
metadata:
name: zabbix-web-nginx-mysql
spec:
replicas: 1
selector:
app: zabbix-web-nginx-mysql
template:
metadata:
labels:
app: zabbix-web-nginx-mysql
spec:
nodeName: 10.0.0.13
containers:
- name: zabbix-web-nginx-mysql
imagePullPolicy: IfNotPresent
image: 10.0.0.11:5000/zabbix-web-nginx-mysql:latest
ports:
- containerPort: 80
env:
- name: DB_SERVER_HOST
value: 'mysql-server'
- name: MYSQL_DATABASE
value: 'zabbix'
- name: MYSQL_USER
value: 'zabbix'
- name: MYSQL_PASSWORD
value: 'zabbix_pwd'
- name: MYSQL_ROOT_PASSWORD
value: 'root_pwd'
----
[root@k8s-matser /k8s_yaml/zabbix/zabbix_yam] # cat zabbix-svc.yml
apiVersion: v1
kind: Service
metadata:
name: mysql-server
spec:
ports:
- port: 3306
targetPort: 3306
selector:
app: mysql-server
---
apiVersion: v1
kind: Service
metadata:
name: zabbix-java-gateway
spec:
ports:
- port: 10052
targetPort: 10052
selector:
app: zabbix-java-gateway
---
apiVersion: v1
kind: Service
metadata:
name: zabbix-server
spec:
type: NodePort
ports:
- port: 10051
nodePort: 10051
targetPort: 10051
selector:
app: zabbix-server
---
apiVersion: v1
kind: Service
metadata:
name: zabbix-web-nginx-mysql
spec:
type: NodePort
ports:
- port: 80
nodePort: 30007
targetPort: 80
selector:
app: zabbix-web-nginx-mysql
Note: the default Service type is ClusterIP; to expose a service externally set:
type: NodePort
ports:
- port: 80
nodePort: 30007
targetPort: 80
selector:
---
imagePullPolicy: IfNotPresent   # do not pull if the image already exists locally
imagePullPolicy: Always         # always pull the image from the registry
imagePullPolicy: Never          # never pull; only use a local image
Migrating a docker-compose stack into k8s:
mysql-server
Use the docker-compose service names as the k8s Service names (so DB_SERVER_HOST etc. keep working).
k8s: a container's startup command
command: # maps to the Dockerfile ENTRYPOINT (replaces it)
args:    # maps to the Dockerfile CMD (easily overridden)
args: ["--character-set-server=utf8","--collation-server=utf8_bin"]
args:
- --character-set-server=utf8
- --collation-server=utf8_bin
Dockerfile: a container's startup command
entrypoint:
cmd:
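A small sketch of how command/args map onto ENTRYPOINT/CMD; the pod name is made up, and the image simply reuses the private registry above:
apiVersion: v1
kind: Pod
metadata:
  name: cmd-demo
spec:
  containers:
  - name: demo
    image: 10.0.0.11:5000/alpine:latest
    command: ["/bin/sh"]           # replaces the image's ENTRYPOINT
    args: ["-c", "sleep 3600"]     # replaces the image's CMD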
4: k8s add-on components
The DNS service in a k8s cluster resolves Service names to their VIP addresses.
4.1 The DNS service

Install the DNS service
1: download the dns_docker image bundle (on node2, 10.0.0.13)
wget http://192.168.37.200/191127/docker_k8s_dns.tar.gz
2: import the dns_docker image bundle (on node2, 10.0.0.13)
3: create the DNS service
vi skydns-rc.yaml
...
spec:
50 nodeName: 10.0.0.13
51 containers:
52 - name: kubedns
53 image: gcr.io/google_containers/kubedns-amd64:1.9
(nodeName pins the pod to a fixed node; it cannot be rescheduled elsewhere)
kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml
4: check
kubectl get all --namespace=kube-system
[root@k8s-matser /k8s_yaml/dns] # kubectl get all -n kube-system
5: modify the kubelet configuration on every node node
vim /etc/kubernetes/kubelet
KUBELET_ARGS="--cluster_dns=10.254.230.254 --cluster_domain=cluster.local"
systemctl restart kubelet
6: modify tomcat-rc.yml
env:
- name: MYSQL_SERVICE_HOST
value: 'mysql' # before this change the value was the service VIP
kubectl delete -f .
kubectl create -f .
7: verify
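A quick way to confirm resolution, as a sketch; the pod name is whatever kubectl get pod shows for the tomcat pod, and the available tools depend on the image:
kubectl exec -it myweb-xxxxx /bin/bash
cat /etc/resolv.conf      # nameserver should be 10.254.230.254, search domain cluster.local
ping mysql                # should resolve to the mysql service VIP via kube-dns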

How to convert an RC into a Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 3
strategy:
rollingUpdate:
maxSurge: 1 # extra pods started during the update
maxUnavailable: 1 # max pods unavailable during the update
type: RollingUpdate
minReadySeconds: 30
template:
metadata:
labels:
app: nginx
--------------------
apiVersion: v1 # change this
kind: ReplicationController # change this
metadata:
name: mysql
spec:
replicas: 1
selector: # delete the label selector
app: mysql # delete the label selector
template:
##
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mysql
spec:
replicas: 1
template:
k8s namespaces: resource isolation
one business application per namespace (e.g. mysql)
an RC and its svc must live in the same namespace
a Deployment and its svc must live in the same namespace
initialDelaySeconds: 25 # when the first health check starts
(a mysql pod needs ~10 seconds to start; tomcat needs longer)
A container can be in state Running while the program inside it has already hung;
that is what the readinessProbe catches: a pod only counts as Ready when the check passes.

4.2 Namespaces
Namespaces provide resource isolation.
1 -- list namespaces
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl get namespace
2 -- create a namespace
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl create namespace wordpress
3 -- start a resource in a specific namespace
[root@k8s-matser] # kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record --namespace=xiaohe
deployment "nginx" created
4 -- delete all resources in a namespace
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl delete -n xiaohe pod --all
5 -- delete a namespace by name
[root@k8s-matser /k8s_yaml/wordpress] # kubectl delete namespace xoapming
6 -- declare the namespace in the resource manifest
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mysql
namespace: tomcat # declare which namespace this resource belongs to
spec:
replicas: 1
template:
metadata:
labels:
app: mysql
--- Option 2: pass the namespace at creation time
kubectl create -f . -n web
4.3 Health checks and readiness checks
4.3.1 Types of probes
livenessProbe: health check; periodically checks whether the service is alive, and restarts the container if the check fails.
readinessProbe: readiness check; periodically checks whether the service can serve traffic, and removes the pod from the Service's endpoints if it cannot.
4.3.2 Probe methods
- exec: run a command; exit code 0 means healthy, non-zero means unhealthy
- httpGet: check the status code of an HTTP request; 2xx/3xx is healthy, 4xx/5xx is an error
- tcpSocket: test whether a TCP port accepts connections
4.3.3 Using a liveness exec probe
[root@k8s-matser /k8s_yaml/check] # mkdir check
---
liveness (state) check
---
vi nginx_pod_exec.yaml
apiVersion: v1
kind: Pod
metadata:
name: exec
spec:
containers:
- name: nginx
image: 10.0.0.11:5000/nginx:1.13
ports:
- containerPort: 80
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5 # start checking 5s after startup, giving the container time to come up
periodSeconds: 5 # check every 5s
timeoutSeconds: 5 # probe timeout
successThreshold: 1 # one success counts as healthy
failureThreshold: 1 # one failure counts as failed
--- the file exists for the first 30 seconds and is then removed, so the check starts failing after ~30s
--- the initial delay gives the container time before the first check
[root@k8s-matser /k8s_yaml/check] # kubectl describe pod exec
4.3.4 Using a liveness httpGet probe
--- issues an HTTP request
--- on failure the pod is killed and restarted
--
vi nginx_pod_httpGet.yaml
apiVersion: v1
kind: Pod
metadata:
name: httpget
spec:
containers:
- name: nginx
image: 10.0.0.11:5000/nginx:1.13
ports:
- containerPort: 80
livenessProbe:
httpGet:
path: /index.html # checks the home page via 127.0.0.1 inside the pod
port: 80
initialDelaySeconds: 3
periodSeconds: 3
---
Delete the home page inside the container to trigger the probe:
[root@k8s-matser /k8s_yaml/check] # kubectl exec -it httpget /bin/bash
4.3.5 Using a liveness tcpSocket probe
--- tests whether the port accepts connections (like telnet)
--- useful when the image does not even ship telnet
vi nginx_pod_tcpSocket.yaml
apiVersion: v1
kind: Pod
metadata:
name: tcpsocket
spec:
containers:
- name: nginx
image: 10.0.0.11:5000/nginx:1.13
ports:
- containerPort: 80
args:
- /bin/sh
- -c
- tail -f /etc/hosts
livenessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 10
periodSeconds: 3
4.3.6 Using a readiness httpGet probe
vi nginx-rc-httpGet.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: readiness
spec:
replicas: 2
selector:
app: readiness
template:
metadata:
labels:
app: readiness
spec:
containers:
- name: readiness
image: 10.0.0.11:5000/nginx:1.13
ports:
- containerPort: 80
readinessProbe:
httpGet:
path: /xiaoming.html # the page to check (it does not exist yet)
port: 80
initialDelaySeconds: 3 # initial delay 3s
periodSeconds: 3 # check every 3s
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl expose rc readiness --port=80 --target-port=80 --type=NodePort
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl get svc
[root@k8s-matser /k8s_yaml/tomcat_demo] # curl -i 10.0.0.12:29094
[root@k8s-matser /k8s_yaml/check] # kubectl exec -it readiness-wx8q9 /bin/bash
Create xiaoming.html inside the container so the check passes.
--- once the check passes, the pod's address appears behind the load balancer
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl describe svc readiness
readiness-wx8q9 0/1 Running 0 14s
--- the readiness check is failing for this pod, so it receives no traffic
4.4 The dashboard service
1: upload the images, import them and tag them
[root@k8s-node-2 /] # docker images |grep 10.0.0.11:5000/heapster
10.0.0.11:5000/heapster canary 0a56f7040da5 3 years ago 971 MB
10.0.0.11:5000/heapster_grafana v2.6.0 4fe73eb13e50 4 years ago 267 MB
10.0.0.11:5000/heapster_influxdb v0.5 a47993810aac 4 years ago 251 MB
[root@k8s-node-2 /] #
2: create the dashboard Deployment and Service
[root@k8s-matser /k8s_yaml/dashbord] # cat dashboard.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# Keep the name in sync with image version and
# gce/coreos/kube-manifests/addons/dashboard counterparts
name: kubernetes-dashboard-latest
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
version: latest
kubernetes.io/cluster-service: "true"
spec:
nodeName: 10.0.0.13
containers:
- name: kubernetes-dashboard
image: 10.0.0.11:5000/kubernetes-dashboard-amd64:v1.4.1
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 9090
args:
- --apiserver-host=http://10.0.0.11:8080
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
----
[root@k8s-matser /k8s_yaml/dashbord] # cat dashboard-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: kubernetes-dashboard
ports:
- port: 80
targetPort: 9090


DaemonSet: typically used for node-level monitoring/agents. No replica count is given; one pod is created per node, so if a node dies, that node's pod simply disappears with it.
deploy, rc, daemonset: "cattle" workloads -- pods get random names and hold no data (e.g. web sites).
pet sets: "pet" workloads -- pods get fixed names (01, 02, ...) and hold data (e.g. redis, mysql).
StatefulSet: the controller for stateful, data-carrying applications.
Databases in containers are awkward to tune: containers share the host kernel, and mysql tuning often means kernel parameter tuning.
4.5 Accessing a Service through the apiserver reverse proxy
Option 1: NodePort type
type: NodePort
ports:
- port: 80
targetPort: 80
nodePort: 30008
Option 2: ClusterIP type
type: ClusterIP
ports:
- port: 80
targetPort: 80
http://10.0.0.11:8080/api/v1/proxy/namespaces/<namespace>/services/<service-name>/
# example:
http://10.0.0.11:8080/api/v1/proxy/namespaces/qiangge/services/wordpress
(job: runs one-off tasks, e.g. a weekly backup)
The dashboard svc has no NodePort mapping, yet it is still reachable through the proxy:
http://10.0.0.11:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/
http://10.0.0.12:30008/
# anchor / locator
Every svc can be reached through the api-server reverse proxy:
http://10.0.0.11:8080/api/v1/proxy/namespaces/tomcat/services/myweb
5: k8s elastic scaling
Elastic scaling in k8s needs the heapster monitoring add-on.
5.1 Install heapster monitoring

1: upload the images, import them and tag them
ls *.tar.gz
for n in `ls *.tar.gz`;do docker load -i $n ;done
docker tag docker.io/kubernetes/heapster_grafana:v2.6.0 10.0.0.11:5000/heapster_grafana:v2.6.0
docker tag docker.io/kubernetes/heapster_influxdb:v0.5 10.0.0.11:5000/heapster_influxdb:v0.5
docker tag docker.io/kubernetes/heapster:canary 10.0.0.11:5000/heapster:canary
2: upload the manifests and run kubectl create -f .
[root@k8s-matser /k8s_yaml/heapster/heapster-influxdb] # ls -l
total 20
-rw-r--r-- 1 root root 414 Sep 14 2016 grafana-service.yaml
-rw-r--r-- 1 root root 648 Jun 3 16:05 heapster-controller.yaml
-rw-r--r-- 1 root root 249 Sep 14 2016 heapster-service.yaml
-rw-r--r-- 1 root root 1499 Jun 3 16:06 influxdb-grafana-controller.yaml
-rw-r--r-- 1 root root 259 Sep 14 2016 influxdb-service.yaml
Modify the manifests:
#heapster-controller.yaml
spec:
nodeName: 10.0.0.13
containers:
- name: heapster
image: 10.0.0.11:5000/heapster:canary
imagePullPolicy: IfNotPresent
#influxdb-grafana-controller.yaml
spec:
nodeName: 10.0.0.13
containers:
3: open the dashboard to verify

5.2 Elastic scaling
1: adjust the deployment/rc manifest (add resource requests and limits)
Elastic scaling means dynamically adjusting the number of pods:
- under load (average CPU usage high), add pods
- when idle (average CPU usage low), remove pods
It depends on monitoring of CPU usage.
A scaling rule defines the maximum replicas, the minimum replicas, and the trigger condition (e.g. cpu=10%).
[root@k8s-matser /k8s_yaml/tomcat_demo] # docker stats
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
8db9aa946ca9 -- -- / -- -- -- -- --
[root@k8s-matser /k8s_yaml/tanxing] # vim k8s_tanxin.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: 10.0.0.11:5000/nginx:1.13
ports:
- containerPort: 80
resources:
limits:
cpu: 100m
memory: 10Mi
requests:
cpu: 100m
memory: 5Mi
---
[root@k8s-matser /k8s_yaml/tanxing] # kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
2: create the autoscaling rule
kubectl autoscale deployment nginx-deployment --max=8 --min=1 --cpu-percent=5
hpa
[root@k8s-matser /k8s_yaml/tanxing] # kubectl autoscale deployment nginx --max=8 --min=1 --cpu-percent=5
-- the rule: maximum 8 replicas, minimum 1, scale up when CPU reaches 5%
-- kubectl autoscale creates an HPA (Horizontal Pod Autoscaler) object
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
hpa/nginx Deployment/nginx 5% 0% 1 8 2m
3: load test
ab -n 1000000 -c 40 http://10.0.0.12:33218/index.html
yum install httpd-tools -y
Scale-up (screenshot):

Scale-down (screenshot):

6: Persistent storage
Types of data persistence:
6.1 emptyDir:
emptyDir: an empty directory created with the pod and deleted with it. With 2 nginx pods you get 2 empty directories; scale down to 1 pod and that pod's directory (and data) is gone.
hostPath: several pods can share one directory, but only if they all run on the same node; otherwise the data is not shared.
emptyDir cannot survive pod deletion -- removing the pod removes the directory and the data, so on its own it is of little use:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mysql
namespace: tomcat
spec:
replicas: 1
template:
metadata:
labels:
app: mysql
spec:
volumes:
- name: mysql
emptyDir: {}
containers:
- name: mysql
image: 10.0.0.11:5000/mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: '123456'
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql
- name: alpine
image: 10.0.0.11:5000/alpine:latest
volumeMounts:
- mountPath: /tmp
name: tmp
command: ["sleep","1000"]
spec:
nodeName: 10.0.0.13
volumes:
- name: mysql
emptyDir: {} # several volumes can be declared this way
containers:
- name: wp-mysql
image: 10.0.0.11:5000/mysql:5.7
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3306
volumeMounts: # where to mount the volume
- mountPath: /var/lib/mysql # path inside the container
name: mysql # which declared volume to mount
# several directories can be persisted, and by several containers
6.2 HostPath:
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl explain pod.spec.volumes.hostPath
FIELDS:
path <string> -required-
Path of the directory on the host. More info:
http://kubernetes.io/docs/user-guide/volumes#hostpath
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mysql
namespace: tomcat
spec:
replicas: 1
template:
metadata:
labels:
app: mysql
spec:
volumes:
- name: mysql
hostPath:
path: /data/mysql
containers:
- name: mysql
image: 10.0.0.11:5000/mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: '123456'
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql
spec:
nodeName: 10.0.0.12
volumes:
- name: mysql
hostPath:
path: /data/wp_mysql
containers:
- name: wp-mysql
image: 10.0.0.11:5000/mysql:5.7
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3306
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql
---- drawback: if the pod is rescheduled to another node, that node has no copy of the directory, so the data is recreated from scratch
hostPath: several pods share one directory only when they run on the same node; otherwise data is not shared
wordpress-svc.yml
6.3 NFS:
[root@k8s-matser /k8s_yaml/tomcat_demo] # mkdir /data/tomcat-mysql
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mysql
namespace: tomcat
spec:
replicas: 1
template:
metadata:
labels:
app: mysql
spec:
nodeName: 10.0.0.13
volumes:
- name: mysql
nfs:
path: /data/tomcat-mysql
server: 10.0.0.11
containers:
- name: mysql
image: 10.0.0.11:5000/mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: '123456'
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql

volumes:
- name: mysql
nfs:
path: /data/wp_mysql
server: 10.0.0.11
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl explain pod.spec.volumes.nfs
6.4 PV and PVC:

pv: PersistentVolume -- a cluster-wide (global) resource
pvc: PersistentVolumeClaim -- a namespaced (local) resource belonging to one namespace
6.4.1: install the NFS server (10.0.0.11)
yum install nfs-utils.x86_64 -y
mkdir /data
vim /etc/exports
/data 10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
systemctl start rpcbind
systemctl start nfs
6.4.2: install the NFS client on the node nodes
yum install nfs-utils.x86_64 -y
showmount -e 10.0.0.11
6.4.3: create the PV and PVC
[root@k8s-matser /k8s_yaml/tomcat_demo] # vim mysql_pv2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv2
labels:
type: nfs002
spec:
capacity:
storage: 20Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Recycle # reclaim policy
nfs:
path: "/data/pv2"
server: 10.0.0.11
readOnly: false
----
[root@k8s-matser /k8s_yaml/tomcat_demo] # cat mysql_pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: tomcat-mysql
namespace: tomcat
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 15Gi # requested capacity
[root@k8s-matser /k8s_yaml/tomcat_demo] # vim mysql-rc.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: tomcat
name: mysql
spec:
replicas: 1
template:
metadata:
labels:
app: mysql
spec:
nodeName: 10.0.0.13
volumes:
- name: mysql
persistentVolumeClaim:
claimName: tomcat-mysql
containers:
- name: mysql
image: 10.0.0.11:5000/mysql:5.7
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: '123456'
Upload the YAML manifests and create the PV and PVC:
[root@k8s-matser /k8s_yaml/tomcat_demo] # kubectl get pv
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: xxx
namespace: xxxx
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
When a PVC picks a PV it matches 1) on storage capacity and 2) on the label selector.
When a Recycle PV is reclaimed, the recycler pod's image address must be changed from
gcr.io/google_containers/busybox
to a reachable mirror such as registry.aliyuncs.com/google_containers/busybox
6.4.4: create the mysql rc/deployment and use the volume in the pod template
volumes:
- name: mysql
persistentVolumeClaim:
claimName: tomcat-mysql
6.4.5: verify the persistence
Method 1: delete the mysql pod; the database data must survive
kubectl delete pod mysql-gt054
Method 2: check on the NFS server whether the mysql data files are there
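A fuller walk-through of method 1, as a sketch; the pod names are examples (use whatever kubectl get pod shows), and add -n tomcat if the resources live in that namespace:
kubectl exec -it mysql-gt054 -- mysql -uroot -p123456 -e "create database persist_test;"
kubectl delete pod mysql-gt054        # the controller recreates the pod
kubectl get pod                       # note the new pod name
kubectl exec -it mysql-xxxxx -- mysql -uroot -p123456 -e "show databases;"   # persist_test should still be listed
ls /data/tomcat-mysql/                # on the NFS server: the mysql data files should be here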
6.5: Distributed storage with GlusterFS
a: what is GlusterFS

GlusterFS is an open-source distributed file system with strong scale-out capability: it can support petabytes of storage and thousands of clients, joining nodes over the network into one parallel network file system. It is scalable, fast and highly available.
b: installing GlusterFS
With a plain distributed volume, many nodes serve the data together, so losing one node leaves the data incomplete.
Use a distributed replicated volume to add redundancy; the replica count is how many copies are kept (a production layout for this kind of project might use 9 machines), at the price of extra storage.
(Compare a Synology NAS: configure it in a web UI, enable NFS, then mount -t nfs; or a wireless router configured through its web UI.)
Capacity grows by adding disks or adding nodes to the cluster.
We install GlusterFS 6.
All nodes:
yum install centos-release-gluster6.noarch -y
yum install glusterfs-server -y
systemctl start glusterd.service
systemctl enable glusterd.service
# add storage units (bricks) to the gluster cluster
[root@k8s-matser /k8s_yaml/tomcat_demo] # fdisk -l
echo '- - -' >/sys/class/scsi_host/host0/scan
echo '- - -' >/sys/class/scsi_host/host1/scan
echo '- - -' >/sys/class/scsi_host/host2/scan
Format the new disks:
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
mkfs.xfs /dev/sdd
[root@k8s-matser /k8s_yaml/tomcat_demo] # gluster pool list
Prepare the brick directories and mount the disks:
mkdir -p /gfs/test1
mkdir -p /gfs/test2
mkdir -p /gfs/test3
mount /dev/sdb /gfs/test1
mount /dev/sdc /gfs/test2
mount /dev/sdd /gfs/test3
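To make the brick mounts survive a reboot they can go into /etc/fstab; a sketch using the same device names as above (they may differ per machine):
cat >> /etc/fstab <<'EOF'
/dev/sdb /gfs/test1 xfs defaults 0 0
/dev/sdc /gfs/test2 xfs defaults 0 0
/dev/sdd /gfs/test3 xfs defaults 0 0
EOF
mount -a    # errors here mean a bad fstab entry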
c: add nodes to the storage pool
on the master node:
[root@k8s-matser ~] # gluster pool list
[root@k8s-matser ~] # gluster peer probe 10.0.0.12    # k8s-node-1
[root@k8s-matser ~] # gluster peer probe 10.0.0.13    # k8s-node-2
[root@k8s-matser ~] # gluster pool list
UUID Hostname State
494c6f9e-51d2-4ad1-a651-eac071a0e9e4 10.0.0.12 Connected
0ee48fa1-6f28-4f63-b21a-7ded3725c4ad 10.0.0.13 Connected
The pool is formed; now create a volume.
d: GlusterFS volume management
Reference:
https://www.cnblogs.com/huangyanqi/p/8406534.html#创建分布式复制卷
[root@k8s-matser ~] # gluster volume create qiangge replica 2 k8s-master:/gfs/test1 k8s-node-1:/gfs/test1 k8s-master:/gfs/test2 k8s-node-1:/gfs/test2 force
[root@k8s-matser ~] # gluster volume create qiangge replica 2 10.0.0.11:/gfs/test1 10.0.0.12:/gfs/test1 10.0.0.11:/gfs/test2 10.0.0.12:/gfs/test2 force
-- # with replica 2 the brick count must be a multiple of 2 (4 bricks here); force overrides the warnings
# start the volume
[root@k8s-matser ~] # gluster volume start qiangge
# inspect the volume
[root@k8s-matser ~] # gluster volume info qiangge
[root@k8s-matser ~] # gluster volume info oldboy
Volume Name: oldboy
Type: Distributed-Replicate
Volume ID: ad39171f-0fb0-454e-8535-c95eef290cc6
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.0.0.11:/gfs/test1
Brick2: 10.0.0.12:/gfs/test1
Brick3: 10.0.0.11:/gfs/test2
Brick4: 10.0.0.12:/gfs/test2
Brick5: 10.0.0.13:/gfs/test1
Brick6: 10.0.0.13:/gfs/test2
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
[root@k8s-matser ~] #
# mount the volume
[root@k8s-matser ~] # mount -t glusterfs 10.0.0.11:/qiangge /mnt
e: how a distributed replicated volume works

f: expanding a distributed replicated volume
# capacity before the expansion:
df -h
# expansion command:
[root@k8s-matser ~] #gluster volume add-brick qiangge 10.0.0.13:/gfs/test1 10.0.0.13:/gfs/test2 force
# capacity after the expansion:
df -h
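Adding bricks does not move existing data onto them; a rebalance does that (standard gluster commands, run from any pool member):
gluster volume rebalance qiangge start
gluster volume rebalance qiangge status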
6.6 Using GlusterFS storage from k8s

a: create an Endpoints object
k8s cannot attach to GlusterFS directly; it first needs an Endpoints object (plus a Service) pointing at the gluster nodes.
[root@k8s-matser /k8s_yaml/glusterfs] # ll
total 40
-rw-r--r-- 1 root root 198 Jun 4 21:07 glusterfs-ep.yaml
-rw-r--r-- 1 root root 287 Jun 4 21:10 mysql_pv2.yaml
-rw-r--r-- 1 root root 287 Jun 4 21:10 mysql_pv3.yaml
-rw-r--r-- 1 root root 182 Jun 4 21:10 mysql_pvc.yaml
-rw-r--r-- 1 root root 287 Jun 4 21:10 mysql_pv.yaml
-rw-r--r-- 1 root root 635 Jun 4 21:10 mysql-rc.yml
-rw-r--r-- 1 root root 166 Jun 4 21:10 mysql-svc.yml
-rw-r--r-- 1 root root 2132 Jun 4 21:10 tomcat_demo.zip
-rw-r--r-- 1 root root 481 Jun 4 21:10 tomcat-rc.yml
-rw-r--r-- 1 root root 183 Jun 4 21:10 tomcat-svc.yml
[root@k8s-matser /k8s_yaml/glusterfs] # kubectl create -f .
vi glusterfs-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
name: glusterfs
namespace: tomcat
subsets:
- addresses:
- ip: 10.0.0.11
- ip: 10.0.0.12
- ip: 10.0.0.13
ports:
- port: 49152
protocol: TCP
b: create the Service (a svc used to map an external service; it needs no label selector)
vi glusterfs-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: glusterfs
namespace: tomcat
spec:
ports:
- port: 49152
protocol: TCP
targetPort: 49152
type: ClusterIP
Option 2: mount the gluster volume directly in the pod spec:
[root@k8s-matser /k8s_yaml/glusterfs] # vim mysql-rc.yml
spec:
nodeName: 10.0.0.13
volumes:
- name: mysql
glusterfs:
endpoints: glusterfs # name of the Endpoints object
path: oldboy # name of the gluster volume
containers:
- name: mysql
-----
labels:
app: mysql
spec:
nodeName: 10.0.0.13
volumes:
- name: mysql
glusterfs:
endpoints: glusterfs
path: oldboy
containers:

c: create a gluster-type PV
apiVersion: v1
kind: PersistentVolume
metadata:
name: gluster
labels:
type: glusterfs
spec:
capacity:
storage: 50Gi
accessModes:
- ReadWriteMany
glusterfs:
endpoints: "glusterfs"
path: "qiangge"
readOnly: false
d: create the PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: gluster
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 200Gi
e: use the gluster PVC in a pod
vi nginx_pod.yaml
……
volumeMounts:
- name: nfs-vol2
mountPath: /usr/share/nginx/html
volumes:
- name: nfs-vol2
persistentVolumeClaim:
claimName: gluster
Connecting containers to an external database
[root@k8s-matser /k8s_yaml/glusterfs] # kubectl delete namespace tomcat
[root@k8s-node-2 /mnt] # yum -y install mariadb-server
[root@k8s-node-2 /mnt] # systemctl start mariadb.service
[root@k8s-node-1 ~] # mysql_secure_installation
MariaDB [(none)]> grant all on *.* to root@'%' identified by '123456' with grant option;
[root@k8s-node-1 ~] # mysql -uroot -p123456 -h 10.0.0.12
MariaDB [(none)]> select user,host from mysql.user;
MariaDB [(none)]> drop user root@'127.0.0.1';
MariaDB [(none)]> drop user root@'localhost';
MariaDB [(none)]> drop user root@'::1';
(The same idea as a Synology NAS or a router: configure it once through the web UI, then consume the exported service.)
k8s can load-balance to an external service by creating an Endpoints object plus a Service -- a mapping.
Here the in-cluster tomcat connects to an external MySQL:
vi mysql-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
name: mysql
namespace: tomcat
subsets:
- addresses:
- ip: 10.0.0.13
ports:
- port: 3306
protocol: TCP
vi mysql-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
namespace: tomcat
spec:
ports:
- port: 3306
protocol: TCP
targetPort: 3306
type: ClusterIP
[root@k8s-matser /k8s_yaml/duiwai_mysql] # kubectl get all -n tomcat
http://10.0.0.12:30008/demo/

7: Continuous delivery to k8s with Jenkins
| ip address | service | memory |
|---|---|---|
| 10.0.0.11 | kube-apiserver 8080 | 1G |
| 10.0.0.12 | kube-apiserver 8080 | 1G |
| 10.0.0.13 | jenkins (tomcat + jdk) 8080 | 3G |
The code repository is hosted on Gitee.

7.1: Install GitLab and push the code

(on a separate server)
# a: install
wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm
yum localinstall gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm -y
# b: configure
vim /etc/gitlab/gitlab.rb
external_url 'http://10.0.0.13'
prometheus_monitoring['enable'] = false
# c: apply the config and start the service
gitlab-ctl reconfigure
# browse to http://10.0.0.13, set the root password, and create a project
# push the code to the git repository
cd /srv/
rz -E
unzip xiaoniaofeifei.zip
rm -fr xiaoniaofeifei.zip
git config --global user.name "Administrator"
git config --global user.email "admin@example.com"
git init
git remote add origin http://10.0.0.13/root/xiaoniao.git
git add .
git commit -m "Initial commit"
git push -u origin master
Using a Gitee repository instead:



[root@k8s-node-1 ~] # git config --global user.name "浅笑痕"
git config --global user.email "7468467+luhong233@user.noreply.gitee.com"
Create a new git repository:
mkdir sadas
cd sadas
git init
touch README.md
git add README.md
git commit -m "first commit"
git remote add origin https://gitee.com/luhong233/sadas.git
git push -u origin master
Already have a repository?
cd existing_git_repo
git remote add origin https://gitee.com/luhong233/sadas.git
git push -u origin master
7.2 Install Jenkins and build docker images automatically
1: install Jenkins
Upload the packages to 10.0.0.13:
cd /opt/
wget http://192.168.12.201/191216/apache-tomcat-8.0.27.tar.gz
wget http://192.168.12.201/191216/jdk-8u102-linux-x64.rpm
wget http://192.168.12.201/191216/jenkin-data.tar.gz
wget http://192.168.12.201/191216/jenkins.war
4 16:52 apache-tomcat-8.0.27.tar.gz
4 16:53 jdk-8u102-linux-x64.rpm
4 16:53 jenkin-data.tar.gz
ROOT.war
[root@k8s-node-2 /opt] # rpm -ivh jdk-8u102-linux-x64.rpm
[root@k8s-node-2 /opt] #mkdir /app -p
[root@k8s-node-2 /opt] #tar xf apache-tomcat-8.0.27.tar.gz -C /app
[root@k8s-node-2 /opt] #rm -fr /app/apache-tomcat-8.0.27/webapps/*
[root@k8s-node-2 /opt] #mv jenkins.war /app/apache-tomcat-8.0.27/webapps/ROOT.war
[root@k8s-node-2 /opt] #tar xf jenkin-data.tar.gz -C /root
drwxr-xr-x 13 root root 4096 Jun 4 20:02 .jenkins
[root@k8s-node-2 /opt] # /app/apache-tomcat-8.0.27/bin/startup.sh
[root@k8s-node-2 /opt] #netstat -lntup
2: open Jenkins
Browse to http://10.0.0.12:8080/ ; the default account/password is admin:123456
3: configure the Jenkins credentials used to pull the code
How to build the image automatically after pulling the code:
Method 1: have the build script generate the Dockerfile
Method 2: commit the Dockerfile to the code repository (used here)
Create a Dockerfile in the repository root:
FROM 10.0.0.11:5000/nginx:1.13
ADD . /usr/share/nginx/html


Enter the Gitee account credentials in Jenkins.

[root@k8s-node-2 ~/.jenkins/workspace/yiliao] #
[root@k8s-node-2 ~/.jenkins] # docker images yiliao
-- parameterized build (string parameter: version)
docker build -t yiliao:$version .
docker tag yiliao:$version 10.0.0.11:5000/yiliao:$version
docker push 10.0.0.11:5000/yiliao:$version
Deploy it on k8s:
[root@k8s-matser /k8s_yaml] # kubectl create namespace yiliao
[root@k8s-matser /k8s_yaml] # kubectl run -n yiliao yiliao --image=10.0.0.11:5000/yiliao:v3 --replicas=2 --record
deployment "yiliao" created
Create the svc:
[root@k8s-matser /k8s_yaml] # kubectl expose -n yiliao deployment yiliao --port=80 --target-port=80 --type=NodePort
[root@k8s-matser /k8s_yaml] # kubectl get all -n yiliao
Upgrade:
[root@k8s-matser /k8s_yaml] # kubectl set image -n yiliao deployment yiliao yiliao=10.0.0.11:5000/yiliao:v4
[root@k8s-node-2 ~/.jenkins] # kubectl -s 10.0.0.11:8080 get pod
Publish the update from the Jenkins node:
kubectl -s 10.0.0.11:8080 set image -n yiliao deployment yiliao yiliao=10.0.0.11:5000/yiliao:$version

a: generate a key pair on the Jenkins host
ssh-keygen -t rsa
b: paste the public key into GitLab

c: create a global credential in Jenkins

4: pull the code as a test

5: write the Dockerfile and test it
# vim dockerfile
FROM 10.0.0.11:5000/nginx:1.13
ADD . /usr/share/nginx/html
# files that docker build should NOT add go into .dockerignore
vim .dockerignore
dockerfile
docker build -t xiaoniao:v1 .
docker run -d -p 88:80 xiaoniao:v1
Open a browser and check the xiaoniaofeifei project.
6: push the dockerfile and .dockerignore to the repository
git add dockerfile .dockerignore
git commit -m "first commit"
git push -u origin master
7: click "Build Now" in Jenkins; it builds the docker image and pushes it to the private registry automatically.
Update the Jenkins job configuration (build step):

docker build -t 10.0.0.11:5000/test:v$BUILD_ID .
docker push 10.0.0.11:5000/test:v$BUILD_ID
7.3 Deploy to k8s automatically from Jenkins
kubectl -s 10.0.0.11:8080 get nodes
if [ -f /tmp/xiaoniao.lock ];then
docker build -t 10.0.0.11:5000/xiaoniao:v$BUILD_ID .
docker push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
kubectl -s 10.0.0.11:8080 set image -n xiaoniao deploy xiaoniao xiaoniao=10.0.0.11:5000/xiaoniao:v$BUILD_ID
port=`kubectl -s 10.0.0.11:8080 get svc -n xiaoniao|grep -oP '(?<=80:)\d+'`
echo "你的项目地址访问是http://10.0.0.13:$port"
echo "更新成功"
else
docker build -t 10.0.0.11:5000/xiaoniao:v$BUILD_ID .
docker push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
kubectl -s 10.0.0.11:8080 create namespace xiaoniao
kubectl -s 10.0.0.11:8080 run xiaoniao -n xiaoniao --image=10.0.0.11:5000/xiaoniao:v$BUILD_ID --replicas=3 --record
kubectl -s 10.0.0.11:8080 expose -n xiaoniao deployment xiaoniao --port=80 --type=NodePort
port=`kubectl -s 10.0.0.11:8080 get svc -n xiaoniao|grep -oP '(?<=80:)\d+'`
echo "你的项目地址访问是http://10.0.0.13:$port"
echo "发布成功"
touch /tmp/xiaoniao.lock
chattr +i /tmp/xiaoniao.lock
fi
Jenkins one-click rollback

Create a separate rollback job:
1 -- create a new job
2 -- Configure -> "This project is parameterized" -> string parameter: version
3 -- Build -> Execute shell:
------
kubectl -s 10.0.0.11:8080 set image -n yiliao deployment yiliao yiliao=10.0.0.11:5000/yiliao:$version
-------
Method 1 (undo to the previous revision):
kubectl -s 10.0.0.11:8080 rollout undo -n yiliao deployment yiliao
Method 2 (set a specific image version):
kubectl -s 10.0.0.11:8080 set image -n yiliao deployment yiliao yiliao=10.0.0.11:5000/yiliao:v2
kubectl -s 10.0.0.11:8080 rollout undo -n xiaoniao deployment xiaoniao


8: k8s high availability
8.1: Install and configure an HA etcd cluster

etcd HA layout:
| host | ip | role |
|---|---|---|
| k8s-master | 10.0.0.11 | master |
| k8s-master2 | 10.0.0.12 | master |
| k8s-node | 10.0.0.13 | node |
We convert the existing environment rather than redeploying from scratch, so clean it up first.
On the master:
[root@k8s-matser ~] # systemctl stop kube-apiserver.service
[root@k8s-matser ~] # systemctl stop kube-controller-manager.service
[root@k8s-matser ~] # systemctl stop kube-scheduler.service
[root@k8s-matser ~] # systemctl stop etcd.service
cat /etc/etcd/etcd.conf |grep -v '^$|#'
cat /etc/etcd/etcd.conf |grep -Ev '^$|#'
ls /var/lib/etcd/
ls /var/lib/etcd/default.etcd/
ls /var/lib/etcd/default.etcd/member/
tree /var/lib/etcd/default.etcd/member/
rm -fr /var/lib/etcd/default.etcd
On the node:
[root@k8s-node-2 ~] # systemctl stop kubelet.service
[root@k8s-node-2 ~] # systemctl stop kube-proxy.service
[root@k8s-node-2 ~] # docker rm -f `docker ps -a -q`
yum install etcd -y
# install etcd on all nodes
yum install etcd -y
-------
Configure the etcd cluster
======================
vim /etc/etcd/etcd.conf
3:ETCD_DATA_DIR="/var/lib/etcd/"
5:ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
6:ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
9:ETCD_NAME="node1" # node name, unique per node
20:ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.11:2380" # peer (data-sync) address advertised to the cluster
# when joining the cluster initially, change this to the local node's IP
21:ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379" # client address this node advertises
# the address this node uses inside the cluster
26:ETCD_INITIAL_CLUSTER="node1=http://10.0.0.11:2380,node2=http://10.0.0.12:2380,node3=http://10.0.0.13:2380"
# all members of the cluster
27:ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" # cluster token
28:ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-matser ~] # scp /etc/etcd/etcd.conf 10.0.0.13:/etc/etcd/etcd.conf
[root@k8s-matser ~] # scp /etc/etcd/etcd.conf 10.0.0.12:/etc/etcd/etcd.con
On 10.0.0.12 and 10.0.0.13, change the node name and addresses to the local values:
line 9  (ETCD_NAME)
line 20 (ETCD_INITIAL_ADVERTISE_PEER_URLS)
line 21 (ETCD_ADVERTISE_CLIENT_URLS)
======================
All nodes (start etcd on all of them at roughly the same time):
systemctl enable etcd
systemctl restart etcd
--------------------------
Check:
[root@k8s-master tomcat_demo]# etcdctl cluster-health
member 9e80988e833ccb43 is healthy: got healthy result from http://10.0.0.11:2379
member a10d8f7920cc71c7 is healthy: got healthy result from http://10.0.0.13:2379
member abdc532bc0516b2d is healthy: got healthy result from http://10.0.0.12:2379
cluster is healthy
# point flannel at the etcd cluster
[root@k8s-matser ~] # vim /etc/sysconfig/flanneld
line 4: FLANNEL_ETCD_ENDPOINTS="http://10.0.0.11:2379,http://10.0.0.12:2379,http://10.0.0.13:2379"
[root@k8s-matser ~] # scp /etc/sysconfig/flanneld 10.0.0.13:/etc/sysconfig/flanneld
[root@k8s-matser ~] # scp /etc/sysconfig/flanneld 10.0.0.12:/etc/sysconfig/flanneld
Distribute the file to the other nodes.
-- on any one node, recreate the network range in etcd:
etcdctl mk /atomic.io/network/config '{ "Network": "172.18.0.0/16" }'
-- on all nodes:
systemctl restart flanneld
systemctl restart docker
[root@k8s-matser ~] # ifconfig
8.2 Configure api-server, controller-manager and scheduler on master01 (pointing at 127.0.0.1:8080)
[root@k8s-matser ~] # vim /etc/kubernetes/apiserver    # switch the apiserver to the full etcd cluster
17 KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379,http://10.0.0.12:2379,http://10.0.0.13:2379"
[root@k8s-matser ~] # vim /etc/kubernetes/config    # KUBE_MASTER points at the apiserver; it could also point at the VIP
22 KUBE_MASTER="--master=http://127.0.0.1:8080"
[root@k8s-matser ~] # systemctl restart kube-apiserver.service
[root@k8s-matser ~] # systemctl restart kube-controller-manager.service kube-scheduler.service
8.3 Configure api-server, controller-manager and scheduler on master02 (pointing at 127.0.0.1:8080)
[root@k8s-node-1 ~] # yum install kubernetes-master.x86_64 -y
scp -rp 10.0.0.11:/etc/kubernetes/apiserver /etc/kubernetes/apiserver
scp -rp 10.0.0.11:/etc/kubernetes/config /etc/kubernetes/config
systemctl stop kubelet.service
systemctl disable kubelet.service
systemctl stop kube-proxy.service
systemctl disable kube-proxy.service
systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service
[root@k8s-node-1 ~] # kubectl get componentstatus    # both masters must report healthy
NAME STATUS MESSAGE ERROR
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
8.4 Install and configure Keepalived on master01 and master02
Use non-preemptive mode (nopreempt).
yum install keepalived.x86_64 -y
[root@k8s-matser ~] # vim /etc/keepalived/keepalived.conf
# master01 configuration:
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL_11
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.0.10
}
}
# master02 configuration:
[root@k8s-node-1 ~] # vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL_12
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.0.10
}
}
systemctl enable keepalived
systemctl start keepalived
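The configuration above only fails over when a whole node dies. An optional sketch, not part of the original setup, adds a health check so the VIP also moves when kube-apiserver itself stops answering; the script line and weight are assumptions:
vrrp_script check_apiserver {
    script "/usr/bin/curl -sf http://127.0.0.1:8080/healthz"
    interval 3
    weight -30
}
# and inside vrrp_instance VI_1 { ... } add:
track_script {
    check_apiserver
}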
8.5: Point kubelet and kube-proxy on all node nodes at the apiserver VIP
[root@k8s-node-2 ~] # vim /etc/kubernetes/kubelet
14 KUBELET_API_SERVER="--api-servers=http://10.0.0.10:8080"    # the VIP address
[root@k8s-node-2 ~] # vim /etc/kubernetes/config
22 KUBE_MASTER="--master=http://10.0.0.10:8080"
[root@k8s-node-2 ~] # systemctl restart kubelet.service kube-proxy.service
Resource controllers
What is a controller?
Kubernetes ships with many built-in controllers; each acts like a state machine that drives Pods toward their desired state and behaviour.
Controller types:
ReplicationController and ReplicaSet
Deployment
DaemonSet
StatefulSet
Job/CronJob
Horizontal Pod Autoscaling
ReplicationController and ReplicaSet
A ReplicationController (RC) keeps the number of replicas of a containerised application at the user-defined count: if a container exits abnormally a new Pod is created to replace it, and any excess Pods are reclaimed.
In newer Kubernetes versions ReplicaSet is recommended in place of ReplicationController. A ReplicaSet is essentially the same thing under a different name, except that it supports set-based selectors.
Deployment
A Deployment provides a declarative way to define Pods and ReplicaSets, replacing the old ReplicationController for everyday application management. Typical uses:
- define a Deployment to create Pods and a ReplicaSet
- rolling upgrades and rollbacks
- scaling out and in
- pausing and resuming a Deployment
DaemonSet
A DaemonSet ensures that all (or some) Nodes run one copy of a Pod. When a Node joins the cluster a Pod is added for it; when a Node leaves, that Pod is reclaimed. Deleting the DaemonSet deletes all the Pods it created.
Typical DaemonSet uses:
- run a cluster storage daemon on every Node, e.g. glusterd or ceph
- run a log-collection daemon on every Node, e.g. fluentd or logstash
- run a monitoring daemon on every Node, e.g. Prometheus Node Exporter, collectd, a Datadog or New Relic agent, or Ganglia gmond
Job
A Job handles batch work: a task that runs to completion once; it guarantees that one or more Pods of the batch task finish successfully.
CronJob
A CronJob manages time-based Jobs, i.e.:
- run once at a given point in time
- run periodically at given points in time
Prerequisite: the Kubernetes cluster is version >= 1.8 (for CronJob). For older clusters (< 1.8), the batch/v2alpha1 API can be enabled by starting the API server with --runtime-config=batch/v2alpha1=true.
Typical uses (see the sketch below):
- schedule a Job to run at a given time
- create periodically running Jobs, e.g. database backups or sending email
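A minimal CronJob sketch for the periodic-backup case; the name, image and command are placeholders, and batch/v1beta1 is the apiVersion valid for the 1.15/1.18 clusters installed in section 9:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 2 * * *"              # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: 10.0.0.11:5000/mysql:5.7
            command: ["sh", "-c", "echo run mysqldump here"]   # placeholder for the real backup command
          restartPolicy: OnFailure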
StatefulSet
A StatefulSet is the controller that gives each Pod a stable identity and guarantees ordering for deployment and scaling.
StatefulSets exist to solve the problem of stateful services (Deployments and ReplicaSets are designed for stateless services). They provide:
- stable persistent storage: after a Pod is rescheduled it still sees the same persisted data, implemented with PVCs
- stable network identity: after rescheduling the PodName and HostName stay the same, implemented with a Headless Service (a Service with no ClusterIP)
- ordered deployment and scaling: Pods are ordered, and deployment or scale-out proceeds in the defined order (from 0 to N-1, the next Pod starting only once all previous Pods are Running and Ready), implemented with init containers
- ordered shrinking and deletion (from N-1 down to 0)
Horizontal Pod Autoscaling
Resource usage usually has peaks and troughs. How do we smooth the peaks, raise overall cluster utilisation, and let the number of Pods behind a Service adjust automatically? That is what Horizontal Pod Autoscaling does: it scales Pods horizontally.
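The YAML form of the kubectl autoscale rule used in section 5.2 looks roughly like this (a sketch; the target deployment name nginx and the 1/8/5% numbers come from that example):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 8
  targetCPUUtilizationPercentage: 5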
9: Installing k8s with kubeadm (newer versions: 1.15 / 1.18)
https://oldqiang.com/archives/624.html
Environment requirements:
| hostname | ip address | cpu / memory |
|---|---|---|
| kubernetes-master | 10.0.0.11 | 2 CPU, 2 GB (swap disabled) |
| kubernetes-node1 | 10.0.0.12 | 2 CPU, 2 GB (swap disabled) |
[root@kubernetes-master ~] # cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11 kubernetes-master
10.0.0.12 kubernetes-node1
Add these hosts entries on both machines.
### 1 - install a specific docker version
# install docker-ce
(the docker-ce repo below is rewritten to the Tsinghua mirror)
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum install docker-ce -y
systemctl enable docker
systemctl start docker
2: install kubeadm
# on all nodes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install kubelet-1.15.5-0 kubeadm-1.15.5-0 kubectl-1.15.5-0 -y
systemctl enable kubelet && systemctl start kubelet
-------
For the latest version instead:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
https://developer.aliyun.com/mirror/
3: initialise the k8s cluster with kubeadm
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
# on all nodes: tune the kernel
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
-- disable swap
swapoff -a
vim /etc/fstab
# on the control-plane node
kubeadm init --kubernetes-version=v1.15.0 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.254.0.0/16
--kubernetes-version=v1.15.0 pins the version; remove it to install the latest version
--image-repository sets the image registry (Aliyun mirror here)
-------------------
[root@kubernetes-master ~] # kubeadm init --kubernetes-version=v1.18.0 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.254.0.0/16
-------------------
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@kubernetes-master ~] # kubectl get componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
4: join the node nodes to the cluster
# on the node, run the join command copied from the master's kubeadm init output:
kubeadm join 10.0.0.11:6443 --token 47hq6d.uvtn5ymfah6egl53 \
--discovery-token-ca-cert-hash sha256:ff283c3350b5dfa0ac8c093383416c535485ec18d5cdd6b82273e0d198157605
----
[root@kubernetes-node1 ~] # kubeadm join 10.0.0.11:6443 --token 743fl1.exoyrjn8kcr5ufwz \
> --discovery-token-ca-cert-hash sha256:55b61e6c9fda7657d82b8dc7ae47b9492dfd2ed5a2fe2b06ec4c707df1054a5f
[root@kubernetes-master ~] # kubectl get node
NAME STATUS ROLES AGE VERSION
kubernetes-master NotReady master 3m14s v1.18.3
kubernetes-node1 NotReady <none> 2m39s v1.18.3
5: install a network plugin for the cluster
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
(or upload the file manually)
line 128:  "Network": "10.244.0.0/16",
# this range must match the --pod-network-cidr=10.244.0.0/16 used at kubeadm init
[root@kubernetes-master ~] # kubectl create -f kube-flannel.yml
[root@kubernetes-master ~] # kubectl get all -n kube-system
-- wait until everything shows READY 1/1
[root@kubernetes-master ~] # kubectl get node
NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 13m v1.18.3
kubernetes-node1 NotReady <none> 12m v1.18.3
docker pull quay.io/coreos/flannel:v0.12.0-amd64
[root@kubernetes-master ~] # docker load -i docker_k8s_flannel.tar.gz
Load this image on both nodes.
[root@kubernetes-master ~] # kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 79m v1.18.3
kubernetes-node1 Ready <none> 78m v1.18.3
Test with a Deployment (if you cannot remember the schema, look at how the system's existing objects are written):
[root@kubernetes-master ~] # vim nginx_k8s_1.80.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
replicas: 3
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
minReadySeconds: 30
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.13
ports:
- containerPort: 80
resources:
limits:
cpu: 100m
requests:
cpu: 100m
kubectl command completion:
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
More references:
https://oldqiang.com/archives/624.html