Accessing Cluster Services via Kubernetes Ingress
Goal: access services inside the cluster through an Ingress
Overview
After an application has been deployed to a Kubernetes cluster, its services need to be published to the outside world. Kubernetes currently supports three ways to expose a service:
(1) NodePort Service
Publishes the service by mapping a port on every cluster node; external users obtain a node IP and access the service as http://NodeIP:port.
(2) LoadBalancer
Exposes the service through the address of an external (public cloud) load balancer.
(3) Ingress
Exposes in-cluster services through a load balancer such as Nginx or HAProxy. It consists of two parts: the Ingress Controller and the Ingress resource.
The Ingress Controller talks to the Kubernetes API to dynamically detect changes to the Ingress rules in the cluster and reads them. Each rule simply states which domain name maps to which Service. From these rules the controller generates an Nginx configuration and writes it into the nginx-ingress-controller Pod: this Pod runs an Nginx server, the controller writes the generated configuration to /etc/nginx.conf and then reloads Nginx so that it takes effect. This is how domain-based routing and dynamic updates are achieved.
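As a rough illustration only (a hypothetical, heavily simplified snippet; the template actually rendered by the controller is far more elaborate), the configuration generated for a single host rule might look something like this:
server {
    listen 80;
    # host taken from the Ingress rule
    server_name test.jupyter.com;
    location / {
        # proxied to an upstream built from the Service's endpoints
        # (upstream name here is a placeholder)
        proxy_pass http://upstream-for-jupyter-service;
    }
}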
This article uses an Ingress backed by Nginx as the load balancer.
Deploying the Kubernetes Ingress controller
Since the test Kubernetes environment was deployed with kubeadm, the official configuration file from the GitHub repository is used: nginx-ingress-controller.yaml from the kubeadm directory.
(1) nginx-ingress-controller.yaml
The original file has to be modified to add a serviceAccount setting; otherwise the controller cannot access the cluster API.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissable as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
The manifest references gcr.io images, so either pull the mirrored images from a registry reachable in China and re-tag them, or pull them on an overseas cloud host, export them, and import them onto the local servers.
Run on the node(s):
docker pull anjia0532/google-containers.defaultbackend:1.0
docker pull anjia0532/google-containers.nginx-ingress-controller:0.9.0-beta.11
docker tag anjia0532/google-containers.defaultbackend:1.0 gcr.io/google_containers/defaultbackend:1.0
docker tag anjia0532/google-containers.nginx-ingress-controller:0.9.0-beta.11 gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
Or:
# docker pull gcr.io/google_containers/defaultbackend:1.0
# docker pull gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
# docker save -o defaultbackend:1.0.tar gcr.io/google_containers/defaultbackend:1.0
# docker save -o nginx-ingress-controller:0.9.0-beta.11.tar gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
# wget http://119.28.85.235/defaultbackend:1.0.tar
# wget http://119.28.85.235/nginx-ingress-controller:0.9.0-beta.11.tar
# docker load < defaultbackend:1.0.tar
# docker load < nginx-ingress-controller:0.9.0-beta.11.tar
(2) rbac.yaml
This file creates the service account and grants the nginx-controller access to the Kubernetes API.
Original file on GitHub: https://github.com/kubernetes/ingress-nginx/blob/master/deploy/rbac.yaml
Change the namespace in it to kube-system (a possible one-liner for this is sketched after the manifest below).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - create
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system
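If you start from the upstream file, the namespace change can be applied with a one-line substitution. This is only a sketch and assumes the upstream manifest uses the ingress-nginx namespace, so check the file before running it:
# assumption: the upstream rbac.yaml uses "namespace: ingress-nginx"
sed -i 's/namespace: ingress-nginx/namespace: kube-system/' rbac.yaml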
(3) Deployment
Run the kubectl create -f *.yaml commands:
kubectl create -f rbac.yaml
kubectl create -f nginx-ingress-controller.yaml
Check the default-backend and controller pods respectively:
# kubectl get pods --namespace=kube-system -l k8s-app=default-http-backend -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default-http-backend-78bd8558fd-kmb47 1/1 Running 1 31m 172.16.1.22 k8s-node01 <none> <none>
# kubectl get pods --namespace=kube-system -l k8s-app=nginx-ingress-controller -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-5d7c6b8d5f-d4m4f 0/1 Running 10 31m 192.168.0.10 k8s-node01 <none> <none>
Writing the Jupyter deployment YAML file
Based on a Jupyter image, write a YAML file that provides three kinds of resources: a Deployment, a Service, and an Ingress.
jupyter-deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jupyter-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jupyter
    spec:
      containers:
      - name: jupyter-test
        image: jupyter/tensorflow-notebook
        ports:
        - containerPort: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-service
  labels:
    app: jupyter
spec:
  selector:
    app: jupyter
  ports:
  - port: 8888
    targetPort: 8888
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jupyter-ingress
spec:
  rules:
  - host: test.jupyter.com
    http:
      paths:
      - backend:
          serviceName: jupyter-service
          servicePort: 8888
The Ingress sets the domain name used to reach the backend to test.jupyter.com.
kubectl create -f jupyter-deploy.yaml
Check the Ingress deployment information:
# kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
jupyter-ingress test.jupyter.com 192.168.0.10 80 36s
Running the test
On the local client host, add an entry to the hosts file so that the domain name resolves to the node IP:
nodeIP test.jupyter.com
Then access test.jupyter.com.
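For example (a sketch on a Linux/macOS client, assuming the node IP 192.168.0.10 reported by kubectl get ing above):
# append the resolution to the client's hosts file
echo "192.168.0.10 test.jupyter.com" | sudo tee -a /etc/hosts
# the request should be routed by the Ingress to the jupyter-service backend
curl -I http://test.jupyter.com/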
This completes the test.
Copyright notice:
Author: Joe.Ye
Link: https://www.appblog.cn/index.php/2023/03/23/kubernetes-ingress-method-for-accessing-cluster-services/
Source: APP Full-Stack Technology Sharing
The copyright of this article belongs to the author. Please do not reproduce it without permission.