
1. Module Overview

In this module, we will deploy a microservices application called Online Boutique and use it to try out different Istio features.

Online Boutique is a cloud-native microservices demo application composed of 10 microservices. It is a web-based e-commerce application where users can browse items, add them to a shopping cart, and purchase them.

2. System Environment

- Server OS: CentOS Linux release 7.4.1708 (Core)
- Docker: Docker version 20.10.12
- Kubernetes (k8s) cluster: v1.21.9
- Istio: 1.14
- CPU architecture: x86_64

3. Creating the Kubernetes (k8s) Cluster

3.1 Creating the Kubernetes (k8s) Cluster

We need a working Kubernetes cluster. For how to install and deploy one, see the blog post 《Centos7 安装部署Kubernetes(k8s)集群》: https://www.cnblogs.com/renshengdezheli/p/16686769.html

3.2 Kubernetes Cluster Environment

Kubernetes cluster architecture: k8scloude1 is the master node; k8scloude2 and k8scloude3 are worker nodes.

The nodes are as follows:

- k8scloude1 (192.168.110.130): CentOS Linux release 7.4.1708 (Core), x86_64; runs docker, kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, coredns, calico; k8s master node
- k8scloude2 (192.168.110.129): CentOS Linux release 7.4.1708 (Core), x86_64; runs docker, kubelet, kube-proxy, calico; k8s worker node
- k8scloude3 (192.168.110.128): CentOS Linux release 7.4.1708 (Core), x86_64; runs docker, kubelet, kube-proxy, calico; k8s worker node

4. Installing Istio

4.1 Installing Istio

The latest Istio release is 1.15, but since our Kubernetes cluster is v1.21.9, we will install Istio 1.14.

[root@k8scloude1 ~]# kubectl get node
NAME         STATUS   ROLES                  AGE    VERSION
k8scloude1   Ready    control-plane,master   288d   v1.21.9
k8scloude2   Ready    <none>                 288d   v1.21.9
k8scloude3   Ready    <none>                 288d   v1.21.9

We will install Istio with the demo configuration profile, since it contains all the core components and enables tracing and logging, which makes it convenient for learning the different Istio features.
For detailed Istio installation and deployment steps, see the blog post 《Istio(二):在Kubernetes(k8s)集群上安装部署istio1.14》: https://www.cnblogs.com/renshengdezheli/p/16836404.html

Alternatively, you can install Istio in the Kubernetes cluster with the GetMesh CLI, as follows.

Download the GetMesh CLI:

 curl -sL https://istio.tetratelabs.io/getmesh/install.sh | bash

Install Istio:

 getmesh istioctl install --set profile=demo

Once Istio is installed, create a namespace called online-boutique, where the new project will be deployed, and label it with istio-injection=enabled to enable automatic sidecar injection.

# Create the online-boutique namespace
[root@k8scloude1 ~]# kubectl create ns online-boutique
namespace/online-boutique created

# Switch to the namespace
[root@k8scloude1 ~]# kubens online-boutique
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "online-boutique".

# Enable automatic sidecar injection for the online-boutique namespace
[root@k8scloude1 ~]# kubectl label ns online-boutique istio-injection=enabled
namespace/online-boutique labeled

[root@k8scloude1 ~]# kubectl get ns -l istio-injection --show-labels 
NAME              STATUS   AGE   LABELS
online-boutique   Active   16m   istio-injection=enabled,kubernetes.io/metadata.name=online-boutique

5. Deploying the Online Boutique Application

5.1 Deploying the Online Boutique Application

With the cluster and Istio ready, we can clone the Online Boutique application repository. The istio and k8s cluster versions are as follows:

[root@k8scloude1 ~]# istioctl version
client version: 1.14.3
control plane version: 1.14.3
data plane version: 1.14.3 (1 proxies)

[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
k8scloude1   Ready    control-plane,master   283d   v1.21.9
k8scloude2   Ready    <none>                 283d   v1.21.9
k8scloude3   Ready    <none>                 283d   v1.21.9

Clone the code repository with git:

# Install git
[root@k8scloude1 ~]# yum -y install git

# Check the git version
[root@k8scloude1 ~]# git version
git version 1.8.3.1

# Create the online-boutique directory to hold the project
[root@k8scloude1 ~]# mkdir online-boutique

[root@k8scloude1 ~]# cd online-boutique/

[root@k8scloude1 online-boutique]# pwd
/root/online-boutique

# Clone the code
[root@k8scloude1 online-boutique]# git clone https://github.com/GoogleCloudPlatform/microservices-demo.git
Cloning into 'microservices-demo'...
remote: Enumerating objects: 8195, done.
remote: Counting objects: 100% (332/332), done.
remote: Compressing objects: 100% (167/167), done.
remote: Total 8195 (delta 226), reused 241 (delta 161), pack-reused 7863
Receiving objects: 100% (8195/8195), 30.55 MiB | 154.00 KiB/s, done.
Resolving deltas: 100% (5823/5823), done.

[root@k8scloude1 online-boutique]# ls
microservices-demo

Go into the microservices-demo directory; release/istio-manifests.yaml and release/kubernetes-manifests.yaml are the main installation files:

[root@k8scloude1 online-boutique]# cd microservices-demo/

[root@k8scloude1 microservices-demo]# ls
cloudbuild.yaml     CODEOWNERS       docs  istio-manifests       kustomize  pb         release        SECURITY.md    src
CODE_OF_CONDUCT.md  CONTRIBUTING.md  hack  kubernetes-manifests  LICENSE    README.md  renovate.json  skaffold.yaml  terraform

[root@k8scloude1 microservices-demo]# cd release/

[root@k8scloude1 release]# ls
istio-manifests.yaml  kubernetes-manifests.yaml

Check the required images; you can pull them in advance on the k8s cluster's worker nodes.

For how to download gcr.io images, see the blog post 《轻松下载k8s.gcr.io,gcr.io,quay.io镜像》: https://www.cnblogs.com/renshengdezheli/p/16814395.html

[root@k8scloude1 release]# ls
istio-manifests.yaml  kubernetes-manifests.yaml

[root@k8scloude1 release]# vim kubernetes-manifests.yaml 

# As you can see, this project needs 13 images; the gcr.io prefix means they come from Google's container registry
[root@k8scloude1 release]# grep image kubernetes-manifests.yaml 
        image: gcr.io/google-samples/microservices-demo/emailservice:v0.4.0
          image: gcr.io/google-samples/microservices-demo/checkoutservice:v0.4.0
        image: gcr.io/google-samples/microservices-demo/recommendationservice:v0.4.0
          image: gcr.io/google-samples/microservices-demo/frontend:v0.4.0
        image: gcr.io/google-samples/microservices-demo/paymentservice:v0.4.0
        image: gcr.io/google-samples/microservices-demo/productcatalogservice:v0.4.0
        image: gcr.io/google-samples/microservices-demo/cartservice:v0.4.0
        image: busybox:latest
        image: gcr.io/google-samples/microservices-demo/loadgenerator:v0.4.0
        image: gcr.io/google-samples/microservices-demo/currencyservice:v0.4.0
        image: gcr.io/google-samples/microservices-demo/shippingservice:v0.4.0
        image: redis:alpine
        image: gcr.io/google-samples/microservices-demo/adservice:v0.4.0

[root@k8scloude1 release]# grep image kubernetes-manifests.yaml | uniq | wc -l
13

# Pull the images in advance on the k8s worker nodes, taking k8scloude2 as an example
# Replace gcr.io with gcr.lank8s.cn, e.g. gcr.io/google-samples/microservices-demo/emailservice:v0.4.0 becomes gcr.lank8s.cn/google-samples/microservices-demo/emailservice:v0.4.0
[root@k8scloude2 ~]# docker pull gcr.lank8s.cn/google-samples/microservices-demo/emailservice:v0.4.0
......
Download the remaining images the same way......
......
[root@k8scloude2 ~]# docker pull gcr.lank8s.cn/google-samples/microservices-demo/adservice:v0.4.0

# After the images are downloaded, use sed to change gcr.io to gcr.lank8s.cn in kubernetes-manifests.yaml
[root@k8scloude1 release]# sed -i 's/gcr.io/gcr.lank8s.cn/' kubernetes-manifests.yaml

# Now all the images in kubernetes-manifests.yaml have been updated
[root@k8scloude1 release]# grep image kubernetes-manifests.yaml
        image: gcr.lank8s.cn/google-samples/microservices-demo/emailservice:v0.4.0
          image: gcr.lank8s.cn/google-samples/microservices-demo/checkoutservice:v0.4.0
        image: gcr.lank8s.cn/google-samples/microservices-demo/recommendationservice:v0.4.0
          image: gcr.lank8s.cn/google-samples/microservices-demo/frontend:v0.4.0
        image: gcr.lank8s.cn/google-samples/microservices-demo/paymentservice:v0.4.0
        image: gcr.lank8s.cn/google-samples/microservices-demo/productcatalogservice:v0.4.0
        image: gcr.lank8s.cn/google-samples/microservices-demo/cartservice:v0.4.0
        image: busybox:latest
        image: gcr.lank8s.cn/google-samples/microservices-demo/loadgenerator:v0.4.0
        image: gcr.lank8s.cn/google-samples/microservices-demo/currencyservice:v0.4.0
        image: gcr.lank8s.cn/google-samples/microservices-demo/shippingservice:v0.4.0
        image: redis:alpine
        image: gcr.lank8s.cn/google-samples/microservices-demo/adservice:v0.4.0

# The istio-manifests.yaml file contains no images
[root@k8scloude1 release]# vim istio-manifests.yaml 
[root@k8scloude1 release]# grep image istio-manifests.yaml 

Create the Kubernetes resources:

[root@k8scloude1 release]# pwd
/root/online-boutique/microservices-demo/release

[root@k8scloude1 release]# ls
istio-manifests.yaml  kubernetes-manifests.yaml

# Create the k8s resources in the online-boutique namespace
[root@k8scloude1 release]# kubectl apply -f /root/online-boutique/microservices-demo/release/kubernetes-manifests.yaml -n online-boutique

Check that all Pods are running:

[root@k8scloude1 release]# kubectl get pod -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
adservice-9c6d67f96-txrsb                2/2     Running   0          85s   10.244.112.151   k8scloude2   <none>           <none>
cartservice-6d7544dc98-86p9c             2/2     Running   0          86s   10.244.251.228   k8scloude3   <none>           <none>
checkoutservice-5ff49769d4-5p2cn         2/2     Running   0          86s   10.244.112.148   k8scloude2   <none>           <none>
currencyservice-5f56dd7456-lxjnz         2/2     Running   0          85s   10.244.251.241   k8scloude3   <none>           <none>
emailservice-677bbb77d8-8ndsp            2/2     Running   0          86s   10.244.112.156   k8scloude2   <none>           <none>
frontend-7d65884948-hnmh6                2/2     Running   0          86s   10.244.112.154   k8scloude2   <none>           <none>
loadgenerator-77ffcbd84d-hhh2w           2/2     Running   0          85s   10.244.112.147   k8scloude2   <none>           <none>
paymentservice-88f465d9d-nfxnc           2/2     Running   0          86s   10.244.112.149   k8scloude2   <none>           <none>
productcatalogservice-8496676498-6zpfk   2/2     Running   0          86s   10.244.112.143   k8scloude2   <none>           <none>
recommendationservice-555cdc5c84-j5w8f   2/2     Running   0          86s   10.244.251.227   k8scloude3   <none>           <none>
redis-cart-6f65887b5d-42b8m              2/2     Running   0          85s   10.244.251.236   k8scloude3   <none>           <none>
shippingservice-6ff94bd6-tm6d2           2/2     Running   0          85s   10.244.251.242   k8scloude3   <none>           <none>

Create the Istio resources:

[root@k8scloude1 microservices-demo]# pwd
/root/online-boutique/microservices-demo

[root@k8scloude1 microservices-demo]# ls istio-manifests/
allow-egress-googleapis.yaml  frontend-gateway.yaml  frontend.yaml

[root@k8scloude1 microservices-demo]# kubectl apply -f ./istio-manifests
serviceentry.networking.istio.io/allow-egress-googleapis created
serviceentry.networking.istio.io/allow-egress-google-metadata created
gateway.networking.istio.io/frontend-gateway created
virtualservice.networking.istio.io/frontend-ingress created
virtualservice.networking.istio.io/frontend created

With everything deployed, we can get the IP address of the ingress gateway and open the frontend service:

[root@k8scloude1 microservices-demo]# INGRESS_HOST="$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

[root@k8scloude1 microservices-demo]# echo "$INGRESS_HOST"
192.168.110.190

[root@k8scloude1 microservices-demo]# kubectl get service -n istio-system istio-ingressgateway -o wide
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                                                                      AGE   SELECTOR
istio-ingressgateway   LoadBalancer   10.107.131.65   192.168.110.190   15021:30093/TCP,80:32126/TCP,443:30293/TCP,31400:30628/TCP,15443:30966/TCP   27d   app=istio-ingressgateway,istio=ingressgateway 

Open INGRESS_HOST in a browser and you will see the frontend service. Visiting http://192.168.110.190/ looks like this:

[Screenshot: Online Boutique frontend page]
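The same check can be done from the shell (a minimal smoke test, assuming the ingress gateway answers on port 80; it should print 200 once traffic reaches the frontend):

 # print only the HTTP status code returned by the frontend through the ingress gateway
 curl -s -o /dev/null -w '%{http_code}\n' http://192.168.110.190/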

The last thing we need to do is delete the frontend-external service. frontend-external is a LoadBalancer service that exposes the frontend; since we are using Istio's ingress gateway, we no longer need it.

To delete the frontend-external service, run:

[root@k8scloude1 ~]# kubectl get svc | grep frontend-external
frontend-external       LoadBalancer   10.102.0.207     192.168.110.191   80:30173/TCP   4d15h

[root@k8scloude1 ~]# kubectl delete svc frontend-external
service "frontend-external" deleted

[root@k8scloude1 ~]# kubectl get svc | grep frontend-external

The Online Boutique application manifests also include a load generator that issues requests to all the services, which lets us simulate traffic to the site.
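To confirm that the simulated traffic is flowing, we can tail the load generator's logs (a quick check; the Deployment name loadgenerator comes from kubernetes-manifests.yaml):

 # show the most recent output of the load generator
 kubectl logs deploy/loadgenerator -n online-boutique --tail=20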

6. Deploying Observability Tools

6.1 Deploying Observability Tools

Next, we will deploy tools for observability, distributed tracing, and data visualization. Use either of the two methods below.

For more detailed installation steps for Prometheus, Grafana, Kiali, and Zipkin, see the blog post 《Istio(三):服务网格istio可观察性:Prometheus,Grafana,Zipkin,Kiali》: https://www.cnblogs.com/renshengdezheli/p/16836943.html

# Method 1:
[root@k8scloude1 ~]# kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.14/samples/addons/prometheus.yaml

[root@k8scloude1 ~]# kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.14/samples/addons/grafana.yaml

[root@k8scloude1 ~]# kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.14/samples/addons/kiali.yaml

[root@k8scloude1 ~]# kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.14/samples/addons/extras/zipkin.yaml 

# Method 2: install the analysis tools from the Istio release archive istio-1.14.3-linux-amd64.tar.gz
[root@k8scloude1 ~]# ls istio* -d
istio-1.14.3  istio-1.14.3-linux-amd64.tar.gz  

[root@k8scloude1 ~]# cd istio-1.14.3/
[root@k8scloude1 istio-1.14.3]# cd samples/addons/

[root@k8scloude1 addons]# pwd
/root/istio-1.14.3/samples/addons

[root@k8scloude1 addons]# ls
extras  grafana.yaml  jaeger.yaml  kiali.yaml  prometheus.yaml  README.md

[root@k8scloude1 addons]# kubectl apply -f prometheus.yaml  

[root@k8scloude1 addons]# kubectl apply -f grafana.yaml  

[root@k8scloude1 addons]# kubectl apply -f kiali.yaml 

[root@k8scloude1 addons]# ls extras/
prometheus-operator.yaml  prometheus_vm_tls.yaml  prometheus_vm.yaml  zipkin.yaml
[root@k8scloude1 addons]# kubectl apply -f extras/zipkin.yaml  

If you hit the error No matches for kind "MonitoringDashboard" in version "monitoring.kiali.io/v1alpha1" while installing Kiali, re-run the command above.

Prometheus, Grafana, Kiali, and Zipkin are installed in the istio-system namespace. We could open the Kiali UI with getmesh istioctl dashboard kiali.

Instead, we will open the Kiali UI another way:

# You can see that Prometheus, Grafana, Kiali, and Zipkin are installed in the istio-system namespace
[root@k8scloude1 addons]# kubectl get pod -n istio-system 
NAME                                    READY   STATUS    RESTARTS   AGE
grafana-6c5dc6df7c-cnc9w                1/1     Running   2          27h
istio-egressgateway-58949b7c84-k7v6f    1/1     Running   8          10d
istio-ingressgateway-75bc568988-69k8j   1/1     Running   6          3d21h
istiod-84d979766b-kz5sd                 1/1     Running   14         10d
kiali-5db6985fb5-8t77v                  1/1     Running   0          3m25s
prometheus-699b7cc575-dx6rp             2/2     Running   8          2d21h
zipkin-6cd5d58bcc-hxngj                 1/1     Running   1          17h

# The kiali service is of type ClusterIP, so it cannot be reached from outside the cluster
[root@k8scloude1 addons]# kubectl get service -n istio-system 
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                                                                      AGE
grafana                NodePort       10.100.151.232   <none>            3000:31092/TCP                                                               27h
istio-egressgateway    ClusterIP      10.102.56.241    <none>            80/TCP,443/TCP                                                               10d
istio-ingressgateway   LoadBalancer   10.107.131.65    192.168.110.190   15021:30093/TCP,80:32126/TCP,443:30293/TCP,31400:30628/TCP,15443:30966/TCP   10d
istiod                 ClusterIP      10.103.37.59     <none>            15010/TCP,15012/TCP,443/TCP,15014/TCP                                        10d
kiali                  ClusterIP      10.109.42.120    <none>            20001/TCP,9090/TCP                                                           7m42s
prometheus             NodePort       10.101.141.187   <none>            9090:31755/TCP                                                               2d21h
tracing                ClusterIP      10.101.30.10     <none>            80/TCP                                                                       17h
zipkin                 NodePort       10.104.85.78     <none>            9411:30350/TCP                                                               17h
# Change the kiali service type to NodePort so that it can be reached from outside
# Just change type: ClusterIP to type: NodePort
[root@k8scloude1 addons]# kubectl edit service kiali -n istio-system 
service/kiali edited
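The same change can also be made non-interactively with kubectl patch (an equivalent sketch of the edit above; it only switches the service type):

 kubectl patch service kiali -n istio-system -p '{"spec": {"type": "NodePort"}}'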

# The kiali service is now of type NodePort; enter <node ip>:30754 in a browser to reach the Kiali web UI
[root@k8scloude1 addons]# kubectl get service -n istio-system 
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                                                                      AGE
grafana                NodePort       10.100.151.232   <none>            3000:31092/TCP                                                               27h
istio-egressgateway    ClusterIP      10.102.56.241    <none>            80/TCP,443/TCP                                                               10d
istio-ingressgateway   LoadBalancer   10.107.131.65    192.168.110.190   15021:30093/TCP,80:32126/TCP,443:30293/TCP,31400:30628/TCP,15443:30966/TCP   10d
istiod                 ClusterIP      10.103.37.59     <none>            15010/TCP,15012/TCP,443/TCP,15014/TCP                                        10d
kiali                  NodePort       10.109.42.120    <none>            20001:30754/TCP,9090:31573/TCP                                               8m42s
prometheus             NodePort       10.101.141.187   <none>            9090:31755/TCP                                                               2d21h
tracing                ClusterIP      10.101.30.10     <none>            80/TCP                                                                       17h
zipkin                 NodePort       10.104.85.78     <none>            9411:30350/TCP                                                               17h

The address of the k8scloude1 machine is 192.168.110.130, so we can open http://192.168.110.130:30754 in a browser to reach Kiali. The Kiali home page looks like this:

[Screenshot: Kiali home page]

In the online-boutique namespace, click Graph to view the topology of the services:

[Screenshot: Kiali Graph view of the online-boutique namespace]

Here is what the Boutique graph looks like in Kiali:

The graph shows us the topology of the services and visualizes how they communicate. It also shows inbound and outbound metrics, as well as traces, by connecting to Jaeger and Grafana (if installed). The colors in the graph represent the health of the service mesh: red or orange nodes may need attention. The color of an edge between components represents the health of the requests between those components, and the node shape indicates the type of component, such as a service, workload, or application.

[Screenshot: Boutique service topology in Kiali]

7. Traffic Routing

7.1 Traffic Routing

We have built a new Docker image that uses a different header than the currently running frontend service. Let's see how to deploy the required resources and route a percentage of the traffic to the new frontend version.

Before we create any resources, let's delete the existing frontend deployment (kubectl delete deploy frontend):

[root@k8scloude1 ~]# kubectl get deploy | grep frontend
frontend                1/1     1            1           4d21h

[root@k8scloude1 ~]# kubectl delete deploy frontend
deployment.apps "frontend" deleted

[root@k8scloude1 ~]# kubectl get deploy | grep frontend

Recreate a frontend Deployment, still named frontend, but this time with a version label set to original. The YAML file is as follows:

[root@k8scloude1 ~]# vim frontend-original.yaml

[root@k8scloude1 ~]# cat frontend-original.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
      version: original
  template:
    metadata:
      labels:
        app: frontend
        version: original
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
    spec:
      containers:
        - name: server
          image: gcr.lank8s.cn/google-samples/microservices-demo/frontend:v0.2.1
          ports:
          - containerPort: 8080
          readinessProbe:
            initialDelaySeconds: 10
            httpGet:
              path: "/_healthz"
              port: 8080
              httpHeaders:
              - name: "Cookie"
                value: "shop_session-id=x-readiness-probe"
          livenessProbe:
            initialDelaySeconds: 10
            httpGet:
              path: "/_healthz"
              port: 8080
              httpHeaders:
              - name: "Cookie"
                value: "shop_session-id=x-liveness-probe"
          env:
          - name: PORT
            value: "8080"
          - name: PRODUCT_CATALOG_SERVICE_ADDR
            value: "productcatalogservice:3550"
          - name: CURRENCY_SERVICE_ADDR
            value: "currencyservice:7000"
          - name: CART_SERVICE_ADDR
            value: "cartservice:7070"
          - name: RECOMMENDATION_SERVICE_ADDR
            value: "recommendationservice:8080"
          - name: SHIPPING_SERVICE_ADDR
            value: "shippingservice:50051"
          - name: CHECKOUT_SERVICE_ADDR
            value: "checkoutservice:5050"
          - name: AD_SERVICE_ADDR
            value: "adservice:9555"
          - name: ENV_PLATFORM
            value: "gcp"
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 200m
              memory: 128Mi

Create the Deployment:

[root@k8scloude1 ~]# kubectl apply -f frontend-original.yaml 
deployment.apps/frontend created

# The Deployment was created successfully
[root@k8scloude1 ~]# kubectl get deploy | grep frontend
frontend                1/1     1            1           43s

# The Pod is also running normally
[root@k8scloude1 ~]# kubectl get pod | grep frontend
frontend-ff47c5568-qnzpt                 2/2     Running   0          105s

Now we are ready to create a DestinationRule that defines the two versions of the frontend: the existing one (original) and the new one (v1).

[root@k8scloude1 ~]# vim frontend-dr.yaml

[root@k8scloude1 ~]# cat frontend-dr.yaml 
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend
spec:
  host: frontend.online-boutique.svc.cluster.local
  subsets:
    - name: original
      labels:
        version: original
    - name: v1
      labels:
        version: 1.0.0

Create the DestinationRule:

[root@k8scloude1 ~]# kubectl apply -f frontend-dr.yaml 
destinationrule.networking.istio.io/frontend created

[root@k8scloude1 ~]# kubectl get destinationrule
NAME       HOST                                         AGE
frontend   frontend.online-boutique.svc.cluster.local   12s

Next, we will update the VirtualService and route all traffic to a subset. In this case, we will route all traffic to the original version of the frontend.

[root@k8scloude1 ~]# vim frontend-vs.yaml

[root@k8scloude1 ~]# cat frontend-vs.yaml 
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-ingress
spec:
  hosts:
    - '*'
  gateways:
    - frontend-gateway
  http:
  - route:
    - destination:
        host: frontend.online-boutique.svc.cluster.local
        port:
          number: 80
        subset: original

Update the VirtualService resource:

[root@k8scloude1 ~]# kubectl apply -f frontend-vs.yaml 
virtualservice.networking.istio.io/frontend-ingress created

[root@k8scloude1 ~]# kubectl get virtualservice
NAME               GATEWAYS               HOSTS                                    AGE
frontend                                  ["frontend.default.svc.cluster.local"]   5d14h
frontend-ingress   ["frontend-gateway"]   ["*"]                                    14s

# Change the hosts of the frontend VirtualService to frontend.online-boutique.svc.cluster.local
[root@k8scloude1 ~]# kubectl edit virtualservice frontend
virtualservice.networking.istio.io/frontend edited

[root@k8scloude1 ~]# kubectl get virtualservice
NAME               GATEWAYS               HOSTS                                            AGE
frontend                                  ["frontend.online-boutique.svc.cluster.local"]   5d14h
frontend-ingress   ["frontend-gateway"]   ["*"]                                            3m24s
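Instead of editing interactively, the same change can be expressed as a JSON patch (a sketch; it assumes hosts is a single-element list, as shown in the output above):

 kubectl patch virtualservice frontend -n online-boutique --type=json \
   -p='[{"op": "replace", "path": "/spec/hosts/0", "value": "frontend.online-boutique.svc.cluster.local"}]'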

Now that the VirtualService is configured to route all incoming traffic to the original subset, we can safely create the new frontend deployment.

[root@k8scloude1 ~]# vim frontend-v1.yaml

[root@k8scloude1 ~]# cat frontend-v1.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-v1
spec:
  selector:
    matchLabels:
      app: frontend
      version: 1.0.0
  template:
    metadata:
      labels:
        app: frontend
        version: 1.0.0
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
    spec:
      containers:
        - name: server
          image: gcr.lank8s.cn/tetratelabs/boutique-frontend:1.0.0
          ports:
          - containerPort: 8080
          readinessProbe:
            initialDelaySeconds: 10
            httpGet:
              path: "/_healthz"
              port: 8080
              httpHeaders:
              - name: "Cookie"
                value: "shop_session-id=x-readiness-probe"
          livenessProbe:
            initialDelaySeconds: 10
            httpGet:
              path: "/_healthz"
              port: 8080
              httpHeaders:
              - name: "Cookie"
                value: "shop_session-id=x-liveness-probe"
          env:
          - name: PORT
            value: "8080"
          - name: PRODUCT_CATALOG_SERVICE_ADDR
            value: "productcatalogservice:3550"
          - name: CURRENCY_SERVICE_ADDR
            value: "currencyservice:7000"
          - name: CART_SERVICE_ADDR
            value: "cartservice:7070"
          - name: RECOMMENDATION_SERVICE_ADDR
            value: "recommendationservice:8080"
          - name: SHIPPING_SERVICE_ADDR
            value: "shippingservice:50051"
          - name: CHECKOUT_SERVICE_ADDR
            value: "checkoutservice:5050"
          - name: AD_SERVICE_ADDR
            value: "adservice:9555"
          - name: ENV_PLATFORM
            value: "gcp"
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 200m
              memory: 128Mi

Create the frontend-v1 Deployment:

[root@k8scloude1 ~]# kubectl apply -f frontend-v1.yaml 
deployment.apps/frontend-v1 created

# The Deployment is running normally
[root@k8scloude1 ~]# kubectl get deploy | grep frontend-v1
frontend-v1             1/1     1            1           54s

# The Pod is running normally
[root@k8scloude1 ~]# kubectl get pod | grep frontend-v1
frontend-v1-6457cb648d-fgmkk             2/2     Running   0          70s

If we open INGRESS_HOST in a browser, we will still see the original version of the frontend. Opening http://192.168.110.190/ shows the following frontend:

[Screenshot: original frontend version]

Let's update the weights in the VirtualService and start routing 30% of the traffic to the v1 subset.

[root@k8scloude1 ~]# vim frontend-30.yaml 

[root@k8scloude1 ~]# cat frontend-30.yaml 
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-ingress
spec:
  hosts:
    - '*'
  gateways:
    - frontend-gateway
  http:
  - route:
    - destination:
        host: frontend.online-boutique.svc.cluster.local
        port:
          number: 80
        subset: original
      weight: 70
    - destination:
        host: frontend.online-boutique.svc.cluster.local
        port:
          number: 80
        subset: v1
      weight: 30

Update the VirtualService:

[root@k8scloude1 ~]# kubectl apply -f frontend-30.yaml 
virtualservice.networking.istio.io/frontend-ingress configured

[root@k8scloude1 ~]# kubectl get virtualservices
NAME               GATEWAYS               HOSTS                                            AGE
frontend                                  ["frontend.online-boutique.svc.cluster.local"]   5d14h
frontend-ingress   ["frontend-gateway"]   ["*"]                                            20m

Visit http://192.168.110.190/ and look at the frontend. If we refresh the page a few times, we will notice the updated header from frontend v1, which typically shows $75, as below:

[Screenshot: frontend v1 header showing $75]

Refresh the page a few more times and it shows $30, as below:

[Screenshot: frontend header showing $30]

We can open http://192.168.110.130:30754 in a browser, go into the Kiali UI to view the topology of the services, select the online-boutique namespace, and look at the Graph:

[Screenshot: selecting the online-boutique namespace in the Kiali Graph view]

The topology of the services now looks like this; note that two versions of the frontend are running:

[Screenshot: topology showing both frontend versions]

8. Fault Injection

8.1 Fault Injection

We will introduce a 5-second delay into the recommendation service. Envoy will inject the delay for 50% of the requests.

[root@k8scloude1 ~]# vim recommendation-delay.yaml

[root@k8scloude1 ~]# cat recommendation-delay.yaml 
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendationservice
spec:
  hosts:
  - recommendationservice.online-boutique.svc.cluster.local
  http:
  - route:
      - destination:
          host: recommendationservice.online-boutique.svc.cluster.local
    fault:
      delay:
        percentage:
          value: 50
        fixedDelay: 5s

Save the above YAML as recommendation-delay.yaml, then create the VirtualService with kubectl apply -f recommendation-delay.yaml:

[root@k8scloude1 ~]# kubectl apply -f recommendation-delay.yaml 
virtualservice.networking.istio.io/recommendationservice created

[root@k8scloude1 ~]# kubectl get virtualservice
NAME                    GATEWAYS               HOSTS                                                         AGE
frontend                                       ["frontend.online-boutique.svc.cluster.local"]                6d13h
frontend-ingress        ["frontend-gateway"]   ["*"]                                                         23h
recommendationservice                          ["recommendationservice.online-boutique.svc.cluster.local"]   7s

We can open INGRESS_HOST (http://192.168.110.190/) in a browser and click on one of the products. The results from the recommendation service are shown in the "Other Products You Might Like" section at the bottom of the page. If we refresh the page a few times, we will notice that the page either loads right away or there is a delay before it loads. That delay comes from the 5 seconds we injected.
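The injected delay can also be observed from the command line (a rough check; OLJCESPC7Z is assumed here to be one of the demo catalog's product ids, and any product page triggers the call to the recommendation service):

 # roughly half of these requests should take about 5 seconds longer
 for i in $(seq 1 6); do
   curl -s -o /dev/null -w '%{time_total}s\n' http://192.168.110.190/product/OLJCESPC7Z
 done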

We can open Grafana (getmesh istioctl dash grafana) and the Istio service dashboard, or open the Grafana UI as follows:

# Check the Grafana port number
[root@k8scloude1 ~]# kubectl get svc -n istio-system | grep grafana
grafana                NodePort       10.100.151.232   <none>            3000:31092/TCP                                                               24d    

Open the Grafana UI at http://192.168.110.130:31092/ and click istio-service-dashboard to enter the Istio service dashboard:

[Screenshot: Grafana Istio service dashboard]

Make sure you select recommendationservice from the service list, pick source in the Reporter dropdown, and look at the Client Request Duration panel, which shows the delay, as below:

[Screenshot: Client Request Duration panel showing the injected delay]

Click View to enlarge the Client Request Duration chart:

[Screenshot: enlarged Client Request Duration chart]

Similarly, we can inject an abort. In the following example, we inject an HTTP 500 for 50% of the requests sent to the product catalog service.

[root@k8scloude1 ~]# vim productcatalogservice-abort.yaml 

[root@k8scloude1 ~]# cat productcatalogservice-abort.yaml 
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productcatalogservice
spec:
  hosts:
  - productcatalogservice.online-boutique.svc.cluster.local
  http:
  - route:
      - destination:
          host: productcatalogservice.online-boutique.svc.cluster.local
    fault:
      abort:
        percentage:
          value: 50
        httpStatus: 500 

Create the VirtualService:

[root@k8scloude1 ~]# kubectl apply -f productcatalogservice-abort.yaml
virtualservice.networking.istio.io/productcatalogservice created

[root@k8scloude1 ~]# kubectl get virtualservice
NAME                    GATEWAYS               HOSTS                                                         AGE
frontend                                       ["frontend.online-boutique.svc.cluster.local"]                6d13h
frontend-ingress        ["frontend-gateway"]   ["*"]                                                         23h
productcatalogservice                          ["productcatalogservice.online-boutique.svc.cluster.local"]   8s
recommendationservice                          ["recommendationservice.online-boutique.svc.cluster.local"]   36m

If we refresh the product page a few times, we should get an error message like the one below.

[Screenshot: product page error message]

Note that the error message says the failure was caused by the fault filter abort. If we open Grafana (getmesh istioctl dash grafana), we will also notice the errors reported in the graphs.
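The injected aborts can also be seen as status codes from the command line (same assumed product page as in the delay check above; roughly half of the requests should fail while the fault is active):

 # expect a mix of 200 and 500 responses
 for i in $(seq 1 6); do
   curl -s -o /dev/null -w '%{http_code}\n' http://192.168.110.190/product/OLJCESPC7Z
 done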

Delete the productcatalogservice VirtualService:

[root@k8scloude1 ~]# kubectl delete virtualservice productcatalogservice 
virtualservice.networking.istio.io "productcatalogservice" deleted
 
[root@k8scloude1 ~]# kubectl get virtualservice
NAME                    GATEWAYS               HOSTS                                                         AGE
frontend                                       ["frontend.online-boutique.svc.cluster.local"]                6d14h
frontend-ingress        ["frontend-gateway"]   ["*"]                                                         23h
recommendationservice                          ["recommendationservice.online-boutique.svc.cluster.local"]   44m

9. Resilience

9.1 Resilience

To demonstrate the resilience features, we will add an environment variable called EXTRA_LATENCY to the product catalog service deployment. This variable injects an extra sleep on every call to the service.

Edit the product catalog service deployment by running kubectl edit deploy productcatalogservice.

[root@k8scloude1 ~]# kubectl get deploy
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
adservice               1/1     1            1           6d14h
cartservice             1/1     1            1           6d14h
checkoutservice         1/1     1            1           6d14h
currencyservice         1/1     1            1           6d14h
emailservice            1/1     1            1           6d14h
frontend                1/1     1            1           24h
frontend-v1             1/1     1            1           28h
loadgenerator           1/1     1            1           6d14h
paymentservice          1/1     1            1           6d14h
productcatalogservice   1/1     1            1           6d14h
recommendationservice   1/1     1            1           6d14h
redis-cart              1/1     1            1           6d14h
shippingservice         1/1     1            1           6d14h

[root@k8scloude1 ~]# kubectl edit deploy productcatalogservice
deployment.apps/productcatalogservice edited

This opens an editor. Scroll to the section with the environment variables and add the EXTRA_LATENCY variable:

 ...
     spec:
       containers:
       - env:
         - name: EXTRA_LATENCY
           value: 6s
 ...

[Screenshot: productcatalogservice deployment with the EXTRA_LATENCY variable]

Save and exit the editor.
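The same variable can also be set without opening an editor, using kubectl set env (an equivalent sketch of the edit above):

 kubectl set env deployment/productcatalogservice EXTRA_LATENCY=6s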

If we refresh the http://192.168.110.190/ page, we will find it takes 6 seconds to load (that's the delay we injected).

Let's add a 2-second timeout to the product catalog service:

[root@k8scloude1 ~]# vim productcatalogservice-timeout.yaml

[root@k8scloude1 ~]# cat productcatalogservice-timeout.yaml 
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productcatalogservice
spec:
  hosts:
  - productcatalogservice.online-boutique.svc.cluster.local
  http:
  - route:
    - destination:
        host: productcatalogservice.online-boutique.svc.cluster.local
    timeout: 2s

Create the VirtualService:

[root@k8scloude1 ~]# kubectl apply -f productcatalogservice-timeout.yaml 
virtualservice.networking.istio.io/productcatalogservice created
 
[root@k8scloude1 ~]# kubectl get virtualservice
NAME                    GATEWAYS               HOSTS                                                         AGE
frontend                                       ["frontend.online-boutique.svc.cluster.local"]                6d14h
frontend-ingress        ["frontend-gateway"]   ["*"]                                                         24h
productcatalogservice                          ["productcatalogservice.online-boutique.svc.cluster.local"]   10s
recommendationservice                          ["recommendationservice.online-boutique.svc.cluster.local"]   76m

If we refresh the page http://192.168.110.190/, we will notice an error message appear:

[Screenshot: frontend error page after the timeout]

 rpc error: code = Unavailable desc = upstream request timeout
 could not retrieve products

The error indicates that the request to the product catalog service timed out. The reason: we modified the service to add a 6-second delay while setting the timeout to 2 seconds.

Let's define a retry policy with three attempts and a per-try timeout of 1 second:

[root@k8scloude1 ~]# vim productcatalogservice-retry.yaml

[root@k8scloude1 ~]# cat productcatalogservice-retry.yaml 
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productcatalogservice
spec:
  hosts:
  - productcatalogservice.online-boutique.svc.cluster.local
  http:
  - route:
    - destination:
        host: productcatalogservice.online-boutique.svc.cluster.local
    retries:
      attempts: 3
      perTryTimeout: 1s

[root@k8scloude1 ~]# kubectl apply -f productcatalogservice-retry.yaml 
virtualservice.networking.istio.io/productcatalogservice configured

[root@k8scloude1 ~]# kubectl get virtualservice
NAME                    GATEWAYS               HOSTS                                                         AGE
frontend                                       ["frontend.online-boutique.svc.cluster.local"]                6d14h
frontend-ingress        ["frontend-gateway"]   ["*"]                                                         24h
productcatalogservice                          ["productcatalogservice.online-boutique.svc.cluster.local"]   10m
recommendationservice                          ["recommendationservice.online-boutique.svc.cluster.local"]   86m

Since we left the extra latency in the product catalog service deployment, we will still see errors.

[Screenshot: error page while the retries keep failing]

Let's open the traces in Zipkin and see the retry policy in action. Use getmesh istioctl dash zipkin to open the Zipkin dashboard, or open the Zipkin UI as follows:

# Check the Zipkin port; it is 30350
[root@k8scloude1 ~]# kubectl get svc -n istio-system | grep zipkin
zipkin                 NodePort       10.104.85.78     <none>            9411:30350/TCP                                                               23d

Enter http://192.168.110.130:30350/ in a browser to open the Zipkin UI.

[Screenshot: Zipkin UI]

Click the + button and select serviceName, then frontend.online-boutique. To only get responses that took at least one second (that's our perTryTimeout), select minDuration and enter 1s in the text box. Click the RUN QUERY button to show all matching traces.

[Screenshot: Zipkin query results]

Click the Filter button and select productCatalogService.online-boutique from the dropdown. You should see traces that took 1 second; they correspond to the perTryTimeout we defined earlier.

[Screenshot: filtered traces taking 1 second]

Click SHOW:

[Screenshot: expanded trace]

The details look like this:

[Screenshot: trace details]

Run kubectl delete vs productcatalogservice to delete the VirtualService:

[root@k8scloude1 ~]# kubectl get virtualservice
NAME                    GATEWAYS               HOSTS                                                         AGE
frontend                                       ["frontend.online-boutique.svc.cluster.local"]                6d15h
frontend-ingress        ["frontend-gateway"]   ["*"]                                                         24h
productcatalogservice                          ["productcatalogservice.online-boutique.svc.cluster.local"]   37m
recommendationservice                          ["recommendationservice.online-boutique.svc.cluster.local"]   113m

[root@k8scloude1 ~]# kubectl delete virtualservice productcatalogservice
virtualservice.networking.istio.io "productcatalogservice" deleted

[root@k8scloude1 ~]# kubectl get virtualservice
NAME                    GATEWAYS               HOSTS                                                         AGE
frontend                                       ["frontend.online-boutique.svc.cluster.local"]                6d15h
frontend-ingress        ["frontend-gateway"]   ["*"]                                                         24h
recommendationservice                          ["recommendationservice.online-boutique.svc.cluster.local"]   114m
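Since the extra latency in the product catalog service was only injected for this demonstration, we can remove it the same way it was added (with kubectl set env, a trailing dash unsets the variable):

 kubectl set env deployment/productcatalogservice EXTRA_LATENCY-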
