Kubernetes Networking: Ingress in Practice


Prerequisite: a k8s cluster set up with kubeadm

Networking in k8s

To expose applications running inside Pods to the outside world, k8s offers two load-balancing mechanisms:

1. Service, which provides Layer 4 (TCP) load balancing

A Service mainly handles communication inside the cluster, plus Layer-4 (port-based) traffic between the cluster and the outside.

2. Ingress, which provides Layer 7 (HTTP) load balancing

An Ingress mainly handles Layer-7 (URL-based) traffic between the cluster and the outside.

An Ingress by itself is just a collection of routing rules; it only takes effect with the help of an ingress controller.

The ingress controller is not managed by the controller-manager; it runs directly on the k8s cluster as an add-on.

The ingress controller itself also runs as a Pod, on the same network as the Pods it proxies.

Unlike a plain Service, to use an Ingress you must first create the ingress-controller Pod and a Service (or host port, as used here) in front of it.

For a small number of applications, NodePort may well be enough. But as the number of applications grows, managing all those NodePorts becomes painful; Ingress is far more convenient at that point because it avoids juggling a large set of ports. A minimal NodePort Service is sketched below for comparison.
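The sketch below is illustrative only (the name demo-nodeport, the app label, and port 30080 are hypothetical, not part of this walkthrough): every application exposed this way consumes its own cluster-wide node port, which is exactly the bookkeeping Ingress lets you avoid.

apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport          # illustrative name only
spec:
  type: NodePort
  selector:
    app: demo-app              # hypothetical Pod label
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080            # must be unique per Service, normally in 30000-32767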

Ingress in Practice

Goal: use an Ingress to expose access to Tomcat.

Official Ingress docs: https://kubernetes.io/docs/concepts/services-networking/ingress/

Ingress Nginx on GitHub: https://github.com/kubernetes/ingress-nginx

Nginx Ingress Controller: https://kubernetes.github.io/ingress-nginx/

(1) Run the Ingress Nginx Controller as a Pod via a Deployment. To make it reachable from outside the cluster, you can use either a NodePort Service or HostPort; here we choose HostPort and pin the Pod to worker01. The relevant fields of the Deployment are shown below.
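Concretely, the Pod template in the mandatory.yaml Deployment (the full file is reproduced at the end of this article) carries the two settings that make the HostPort approach work:

    spec:
      # bind the controller to the node's own network namespace (ports 80/443 on w1)
      hostNetwork: true
      # schedule only onto the node labeled name=ingress in the next step
      nodeSelector:
        name: ingress
        kubernetes.io/os: linux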

Deploy the ingress-controller Pod and related resources [mandatory.yaml]

# Make sure nginx-ingress-controller runs on w1: give k8s-worker01 a label that the
# controller's nodeSelector will match
kubectl label node k8s-worker01 name=ingress
# check the node labels
kubectl get node --show-labels

# To run with HostPort, add this setting to the Deployment in mandatory.yaml
hostNetwork: true

# Search for nodeSelector in mandatory.yaml, and make sure ports 80 and 443 on w1 are free;
# pulling the image can take quite a while, so be patient
kubectl apply -f mandatory.yaml

# check the ingress-controller pod
kubectl get pods -n ingress-nginx

# check ports 80 and 443 on w1
lsof -i tcp:80
lsof -i tcp:443
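Once the controller Pod is Running, a quick sanity check (an extra step, not in the original walkthrough) is to hit port 80 on w1 directly; with no Ingress rules defined yet, the controller's default backend normally answers with an HTTP 404, which is enough to prove nginx is listening.

# run on w1; expect a 404 from the nginx default backend while no rules exist
curl -I http://127.0.0.1/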


Check the node labels [kubectl get node --show-labels]

Check which images mandatory.yaml will pull:

cat mandatory.yaml | grep image   # list the images that need to be pulled

Because quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1 is hosted overseas, pulling it directly is likely to fail. Work around this by pulling the image from the Aliyun registry and re-tagging it.

# The ingress-controller runs on the w1 node, so these commands only need to be run on w1
# pull the image from the Aliyun registry
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1
# tag the pulled image with the name the Deployment expects
docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
# remove the original tag
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1
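To confirm the re-tag worked (a quick check, not part of the original steps), list the local images on w1 and make sure the quay.io tag is present:

docker images | grep nginx-ingress-controller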

Use kubectl get pods -n ingress-nginx to check how the Pods in the ingress-nginx namespace are doing.

Use kubectl describe pod nginx-ingress-controller-7c66dcdd6c-655qd -n ingress-nginx to see the Pod's detailed description.
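If the Pod is not coming up cleanly, its logs are usually more telling than the describe output; the name suffix (7c66dcdd6c-655qd) is specific to this cluster, so substitute your own:

kubectl logs nginx-ingress-controller-7c66dcdd6c-655qd -n ingress-nginx --tail=50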

Create the Tomcat Pod and Service [both are defined in one YAML file, tomcat.yaml]

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
spec:
  ports:
  - port: 80   
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat

Deploy the Tomcat Pod and Service [kubectl apply -f tomcat.yaml]

Check the Pod and Service [kubectl get svc, kubectl get pods]
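Before wiring up the Ingress, it is worth confirming that the Service itself answers (an extra check, not in the original walkthrough). The ClusterIP is only reachable from inside the cluster, so run this on one of the nodes; any HTTP response (200 or 404) proves the Service and the Tomcat Pod are reachable:

# grab the ClusterIP of tomcat-service and hit it on port 80
CLUSTER_IP=$(kubectl get svc tomcat-service -o jsonpath='{.spec.clusterIP}')
curl -I http://$CLUSTER_IP/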

Create the Ingress and define the forwarding rule

# Ingress: requests for / on this host are forwarded to tomcat-service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: tomcat.jack.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-service
          servicePort: 80
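The extensions/v1beta1 API above matches the 0.26.1 controller used in this walkthrough, but that API was removed in Kubernetes 1.22. On a newer cluster with a newer ingress-nginx release, the equivalent manifest would look roughly like this (the ingressClassName value is an assumption; it depends on how the controller was installed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx        # assumed IngressClass name; check with: kubectl get ingressclass
  rules:
  - host: tomcat.jack.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 80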

Deploy the Ingress [kubectl apply -f nginx-ingress.yaml]

Check it: kubectl get ingress

View the details: kubectl describe ingress nginx-ingress

On the Windows machine, edit the hosts file to add a DNS entry:

192.168.0.22 tomcat.jack.com   # use the IP of the node the ingress controller runs on (w1 in this setup) [if it is a VM, you may also need port forwarding]

Open a browser and visit tomcat.jack.com
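If you prefer not to touch the hosts file, you can exercise the same rule from any machine by sending the Host header explicitly (the IP below is the one from the hosts entry above, i.e. the ingress controller's node):

curl -H "Host: tomcat.jack.com" http://192.168.0.22/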

Summary: from now on, to expose something through the Ingress network you only need to define the Ingress, the Service, and the Pods, provided the nginx ingress controller is already in place.

mandatory.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        name: ingress
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---
