Kubernetes (13): Service Management


Network Proxy Modes

Service Proxying

Service Discovery

Publishing Services

Network Proxy Modes

https://blog.csdn.net/sinat_35930259/article/details/80080778

Create a service.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443

The selector field specifies which Pods (matched by their app label) the Service load-balances across. The ports field defines the exposed ports: each - name entry describes one port, and targetPort is the port on the target container.
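
A minimal sketch of creating this Service and checking what it selects (my-service comes from the manifest above; the assigned ClusterIP will vary per cluster):

[root@k8s-master yaml]# kubectl create -f service.yaml
[root@k8s-master yaml]# kubectl get svc my-service
[root@k8s-master yaml]# kubectl describe svc my-service   # the Endpoints line lists the Pods matched by app=MyApp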

Service – Service Discovery

https://www.cnblogs.com/ilinuxer/p/6188804.html

Service discovery supports two modes: Service environment variables and DNS. Production environments should use DNS.

Environment Variables

When a Pod runs on a Node, the kubelet adds a set of environment variables to each of its containers. A program inside the Pod's containers can use these environment variables to discover Services.

The environment variable names have the following format:

{SVCNAME}_SERVICE_HOST

{SVCNAME}_SERVICE_PORT

The service name (and port name) is converted to uppercase, and hyphens are converted to underscores.
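
For example, the my-service manifest above would yield variables like these (a sketch of the naming rule, not captured output):

MY_SERVICE_SERVICE_HOST=<ClusterIP of my-service>
MY_SERVICE_SERVICE_PORT=80          # the first port in the list
MY_SERVICE_SERVICE_PORT_HTTP=80     # each named port also gets its own variable
MY_SERVICE_SERVICE_PORT_HTTPS=443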

Limitations:

1) The creation order of Pods and Services matters: a Service must be created before the Pod, otherwise the environment variables are not set in the Pod.

2) A Pod can only obtain environment variables for Services in the same Namespace.

DNS

The DNS service watches the Kubernetes API and creates a DNS record for each Service so its name can be resolved. Pods can then obtain a Service's address through its DNS name.
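
A Service's in-cluster DNS name follows a fixed pattern (cluster.local is the default cluster domain and may differ in your cluster):

<service-name>.<namespace>.svc.cluster.local

For example, my-service.default.svc.cluster.local resolves to the Service's ClusterIP, and Pods in the same namespace can simply use the short name my-service.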

Environment Variable Mode

Enter a container and you can see the environment variables for the Service's IP and port. Note the limitations described above: the Service must be created before the Pod, and the variables do not cross Namespace boundaries.

[root@k8s-master yaml]# kubectl exec -it nginx-deployment-67dccb759c-d8g6x bash
root@nginx-deployment-67dccb759c-d8g6x:/# env
root@nginx-deployment-67dccb759c-d8g6x:/# echo $NGINX_SERVICE_SERVICE_HOST  
10.10.10.86
root@nginx-deployment-67dccb759c-d8g6x:/# echo $NGINX_SERVICE_SERVICE_PORT
88
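
To see every variable the kubelet injected for this Service at once, you could filter the env output (a sketch; NGINX_SERVICE is the uppercased form of the nginx-service name):

root@nginx-deployment-67dccb759c-d8g6x:/# env | grep NGINX_SERVICE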

DNS Mode

You can refer to the YAML example on GitHub:

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/kube-dns/kube-dns.yaml.sed


Configure kube-dns via kube-dns.yaml:

[root@k8s-master yaml]# vim kube-dns.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.10.10.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-kube-dns-amd64:1.14.7
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-sidecar-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

The YAML file assigns kube-dns a fixed clusterIP, which must be consistent with the DNS address configured on the nodes.
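
For reference, a sketch of the matching node-side kubelet settings (the exact flag location depends on how the kubelet was installed; 10.10.10.2 matches the clusterIP above):

--cluster-dns=10.10.10.2
--cluster-domain=cluster.local

With these flags the kubelet writes 10.10.10.2 as the nameserver into each Pod's /etc/resolv.conf, so Pods query kube-dns automatically.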

[root@k8s-master yaml]# kubectl create -f kube-dns.yaml

Specify the namespace and list the resources; you can see kube-dns, whose Pod reports 3/3 ready, one for each of the three containers defined above (kubedns, dnsmasq, sidecar):

[root@k8s-master yaml]# kubectl get all -n kube-system

Testing kube-dns

First, create a busybox Pod for testing:

[root@k8s-master yaml]# vim busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

Create it and test name resolution:

[root@k8s-master yaml]# kubectl create -f busybox.yaml
[root@k8s-master yaml]# kubectl exec -ti busybox -- nslookup kubernetes.default

This way, later applications only need to access other programs through the Service name rather than hard-coding IPs, which is much more flexible.

The busybox nslookup did not produce results in my test (nslookup in newer busybox images is known to be unreliable); alternatively, use an nginx Pod and ping the Service name, and you can see the name resolve to an IP:

root@nginx-deployment-67dccb759c-hgrmp:/# ping kubernetes
PING kubernetes.default.svc.cluster.local (10.10.10.1): 56 data bytes
root@nginx-deployment-67dccb759c-hgrmp:/# ping nginx-service
PING nginx-service.default.svc.cluster.local (10.10.10.86): 56 data bytes
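
Since the name resolves, an application can address the Service by name instead of by IP. A minimal sketch, assuming the nginx-service from the environment-variable demo above (port 88) and that curl is available in the image:

root@nginx-deployment-67dccb759c-hgrmp:/# curl -s http://nginx-service:88/

The URL keeps working even if the Service is recreated with a different ClusterIP, which is the point of DNS-based discovery.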

Publishing Services

Service Types

ClusterIP:

Allocates an internal cluster IP address; the Service is only reachable from inside the cluster (from Pods in any Namespace). This is the default ServiceType.

NodePort:

Allocates an internal cluster IP address, and additionally opens a port on every node to expose the Service, making it reachable from outside the cluster (see the sketch at the end of this section).

Access address: <NodeIP>:<NodePort>

LoadBalancer:

Allocates an internal cluster IP address and opens a port on every node to expose the Service.

In addition, Kubernetes requests a load balancer from the underlying cloud platform and adds each node ([NodeIP]:[NodePort]) to it as a backend.
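
As a sketch, here is the earlier my-service republished as a NodePort; the nodePort value 30080 is an illustrative choice from the default 30000-32767 range:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80         # ClusterIP port inside the cluster
    targetPort: 80   # container port
    nodePort: 30080  # opened on every node; omit to let Kubernetes pick one

After kubectl create -f, the application is reachable at <NodeIP>:30080 from outside the cluster.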
