2023-09-14
linuxea: canary release in Istio by propagating a cookie and a header across the full call chain
Let's test cookie- and header-based canary (gray) release across the full call chain in Istio (the same is possible in higress; here we test Istio). According to the Istio release notes, Istio 1.19.0 is officially supported on Kubernetes 1.25 through 1.28. Since my local cluster runs 1.25.11, 1.19 is within scope.

## Download and install

Install istioctl:

```bash
wget https://github.com/istio/istio/releases/download/1.19.0/istioctl-1.19.0-linux-amd64.tar.gz
tar xf istioctl-1.19.0-linux-amd64.tar.gz
mv istioctl /usr/local/sbin/
```

```
[root@master-01 ~/istio]# istioctl version
no ready Istio pods in "istio-system"
1.19.0
```

Generate the installation manifest:

```bash
istioctl manifest generate --set profile=default > istio.yaml
```

We replace the two important images,

```
image: docker.io/istio/proxyv2:1.19.0
image: docker.io/istio/pilot:1.19.0
```

with

```
uhub.service.ucloud.cn/marksugar-k8s/proxyv2:1.19.0
uhub.service.ucloud.cn/marksugar-k8s/pilot:1.19.0
```

```bash
sed -i 's@docker.io/istio/pilot:1.19.0@uhub.service.ucloud.cn/marksugar-k8s/pilot:1.19.0@g' istio.yaml
sed -i 's@docker.io/istio/proxyv2:1.19.0@uhub.service.ucloud.cn/marksugar-k8s/proxyv2:1.19.0@g' istio.yaml
```

Start the installation:

```bash
kubectl create ns istio-system
kubectl apply -f istio.yaml
```

Installation complete:

```
[root@master-01 ~/istio]# kubectl -n istio-system get all
NAME                                        READY   STATUS    RESTARTS   AGE
pod/istio-ingressgateway-65cff96b76-nzdk9   1/1     Running   0          3m30s
pod/istiod-ffc9db9cc-7g554                  1/1     Running   0          3m30s

NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                      AGE
service/istio-ingressgateway   LoadBalancer   10.68.208.80   <pending>     15021:31635/TCP,80:30598/TCP,443:31349/TCP   2m28s
service/istiod                 ClusterIP      10.68.9.174    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP        2m28s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istio-ingressgateway   1/1     1            1           21m
deployment.apps/istiod                 1/1     1            1           21m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/istio-ingressgateway-65cff96b76   1         1         1       3m30s
replicaset.apps/istiod-ffc9db9cc                  1         1         1       3m30s

NAME                                                       REFERENCE                         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/istio-ingressgateway   Deployment/istio-ingressgateway   2%/80%    1         5         1          21m
horizontalpodautoscaler.autoscaling/istiod                 Deployment/istiod                 0%/80%    1         5         1          21m
```

Next, configure a VIP to serve as the load balancer address:

```bash
ip addr add 172.16.100.210/24 dev eth0
```

Then run `kubectl -n istio-system edit svc istio-ingressgateway` and add the `externalIPs` field:

```yaml
  clusterIP: 10.68.113.92
  externalIPs:
  - 172.16.100.210
  clusterIPs:
  - 10.68.113.92
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
```

The service status is now normal:

```
[root@master-01 ~/istio]# kubectl -n istio-system get svc
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.68.208.80   172.16.100.210   15021:31635/TCP,80:30598/TCP,443:31349/TCP   127m
istiod                 ClusterIP      10.68.9.174    <none>           15010/TCP,15012/TCP,443/TCP,15014/TCP        127m
```

Next, label the test1 namespace to place it in Istio's injection scope, so that every pod in test1 gets a sidecar injected:

```
[root@master-01 ~/istio]# kubectl create ns test1
namespace/test1 created
[root@master-01 ~/istio]# kubectl label namespace test1 istio-injection=enabled
namespace/test1 labeled
```

## Test code

The cookie or header has to be assigned somewhere and then carried along the whole code chain, under an agreed (constrained) name. In this test:

- cookie name: `cannary`
- header name: `test`

The code is as follows (one fix over the original: the error returned by `http.NewRequest` is now checked before `req` is used):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"os"

	"github.com/gin-gonic/gin"
)

// global variables, overridable via environment
var (
	PATH_URL = getEnv("PATH_URL", "go-test2")
	METHODS  = getEnv("METHODS", "GET")
	QNAME    = getEnv("QNAME", "name")
)

func getEnv(key, defaultVal string) string {
	if value, ok := os.LookupEnv(key); ok {
		return value
	}
	return defaultVal
}

func main() {
	r := gin.Default()
	r.POST("/post", postJson)
	r.GET("/get", getJson)
	r.Run(":9999")
}

func getJson(c *gin.Context) {
	// read the canary cookie
	cookie, err := c.Cookie("cannary")
	if err != nil {
		cookie = "NotSet"
		c.SetCookie("gin_cookie", "test", 3600, "getJson", "localhost", false, true)
	}
	fmt.Println("c.Cookie:", cookie)
	// read the query parameter
	query := c.Query(QNAME)
	fmt.Println(query)
	// read the "test" header
	headers := c.Request.Header
	customHeader := headers.Get("test")
	// propagate header and cookie downstream
	sed2sort(customHeader, cookie)
	// print the headers
	for k, v := range c.Request.Header {
		fmt.Println("c.Request.Header:", k, v)
		if k == "Test" {
			fmt.Println("c.Request.Header:", k, v)
		}
	}
	c.JSON(200, gin.H{"status": "ok"})
}

func postJson(c *gin.Context) {
	// read the canary cookie
	cookie, err := c.Cookie("cannary")
	if err != nil {
		cookie = "NotSet"
		c.SetCookie("gin_cookie", "test", 3600, "getJson", "localhost", false, true)
	}
	fmt.Println("c.Cookie:", cookie)
	// read the query parameter
	query := c.Query(QNAME)
	fmt.Println("c.Request.Query:", query)
	body := c.Request.Body
	x, err := ioutil.ReadAll(body)
	if err != nil {
		c.JSON(400, gin.H{"error": err.Error()})
		return
	}
	fmt.Println(query)
	// read the "test" header
	headers := c.Request.Header
	customHeader := headers.Get("test")
	sed2sort(customHeader, cookie)
	// print the headers
	for k, v := range c.Request.Header {
		fmt.Println("c.Request.Header:", k, v)
		if k == "Test" {
			fmt.Println("Test:", k, v)
		}
	}
	log.Println(string(x))
	c.JSON(200, gin.H{"status": "ok"})
}

// call the downstream service, forwarding the canary header and cookie
func sed2sort(headerValue, icookie string) {
	fmt.Println("sed2sort:", METHODS, PATH_URL)
	client := &http.Client{}
	req, err := http.NewRequest(METHODS, PATH_URL, nil)
	if err != nil { // check the error before touching req
		fmt.Println(err)
		return
	}
	// forward the header
	req.Header.Add("test", headerValue)
	// forward the cookie
	req.AddCookie(&http.Cookie{Name: "cannary", Value: icookie})
	res, err := client.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer res.Body.Close()
	body, err := ioutil.ReadAll(res.Body)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(body))
}
```

## YAML

Correspondingly, we create several services and VirtualServices to test the header and the cookie separately. All requests go from server1 to server2, and the mesh routing is done on server2.

### server1

server1 calls server2's `/get` endpoint; the target is passed in via environment variables:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: server1
  namespace: test1
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9999
  selector:
    app: server1
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server1
  namespace: test1
spec:
  replicas:
  selector:
    matchLabels:
      app: server1
      version: v0.2
  template:
    metadata:
      labels:
        app: server1
        version: v0.2
    spec:
      containers:
      - name: server1
        # imagePullPolicy: Always
        image: uhub.service.ucloud.cn/marksugar-k8s/go-test:v3.1
        #image: uhub.service.ucloud.cn/marksugar-k8s/cookie:v1
        ports:
        - name: http
          containerPort: 9999
        env:
        - name: PATH_URL
          value: https://server2/get
        - name: METHODS
          value: GET
```

### server2

server2 provides a standalone service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: server2
  namespace: test1
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9999
  selector:
    app: server2
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server2
  namespace: test1
spec:
  replicas:
  selector:
    matchLabels:
      app: server2
      version: v0.2
  template:
    metadata:
      labels:
        app: server2
        version: v0.2
    spec:
      containers:
      - name: server2
        # imagePullPolicy: Always
        image: uhub.service.ucloud.cn/marksugar-k8s/go-test:v3.1
        ports:
        - name: http
          containerPort: 9999
```

### server2-1

server2-1 also provides a standalone service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: server2-1
  namespace: test1
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9999
  selector:
    app: server2-1
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server2-1
  namespace: test1
spec:
  replicas:
  selector:
    matchLabels:
      app: server2-1
      version: v0.2
  template:
    metadata:
      labels:
        app: server2-1
        version: v0.2
    spec:
      containers:
      - name: server2-1
        # imagePullPolicy: Always
        image: uhub.service.ucloud.cn/marksugar-k8s/go-test:v3.1
        ports:
        - name: http
          containerPort: 9999
```

### server2-cooike

```yaml
apiVersion: v1
kind: Service
metadata:
  name: server2-cooike
  namespace: test1
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9999
  selector:
    app: server2-cooike
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server2-cooike
  namespace: test1
spec:
  replicas:
  selector:
    matchLabels:
      app: server2-cooike
      version: v0.2
  template:
    metadata:
      labels:
        app: server2-cooike
        version: v0.2
    spec:
      containers:
      - name: server2-cooike
        # imagePullPolicy: Always
        image: uhub.service.ucloud.cn/marksugar-k8s/go-test:v3.1
        ports:
        - name: http
          containerPort: 9999
```

After creation, the services and pods are up:

```
[root@master-01 ~/higress/ops/server]# kubectl -n test1 get pod
NAME                            READY   STATUS    RESTARTS   AGE
server1-79fd8456ff-8fj9v        2/2     Running   0          25m
server2-1-74bfdd776c-5zs7z      2/2     Running   0          24m
server2-5bc69c4f75-wcbcq        2/2     Running   0          25m
server2-cooike-94ffb459-bdgk4   2/2     Running   0          21m
[root@master-01 ~/higress/ops/server]# kubectl -n test1 get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
server1          ClusterIP   10.68.142.192   <none>        80/TCP    3h13m
server2          ClusterIP   10.68.27.255    <none>        80/TCP    3h13m
server2-1        ClusterIP   10.68.196.212   <none>        80/TCP    3h
server2-cooike   ClusterIP   10.68.165.157   <none>        80/TCP    21m
```

## Publishing server1

To publish in Istio, we need a Gateway, a DestinationRule and a VirtualService, as follows:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cookie-gateway
  namespace: istio-system # must be the namespace the ingress gateway pod lives in
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "cookie.linuxea.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: cookie
  namespace: test1
spec:
  host: "cookie.linuxea.com"
  trafficPolicy:
    tls:
      mode: DISABLE
---
# apiVersion: networking.istio.io/v1beta3
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cookie
  namespace: test1
spec:
  hosts:
  - "cookie.linuxea.com"
  gateways:
  - istio-system/cookie-gateway
  - mesh
  http:
  - name: server1
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: server1
```

## Test

We send requests from postman. In all cases, requests to the cookie.linuxea.com domain are sent to server1.

server1:

```
[root@master-01 ~]# kubectl -n test1 logs -f server1-79fd8456ff-8fj9v
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:  export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /post --> main.postJson (3 handlers)
[GIN-debug] GET  /get  --> main.getJson (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :9999
sed2sort: GET https://server2/get
{"status":"ok"}
c.Request.Header: X-B3-Parentspanid [c782871d39b17cab]
c.Request.Header: X-Forwarded-For [192.20.1.0]
c.Request.Header: X-B3-Traceid [42855963a60a52bcc782871d39b17cab]
c.Request.Header: Postman-Token [e65b66ff-296f-4033-ab83-e6ccf904c043]
c.Request.Header: X-Forwarded-Proto [http]
c.Request.Header: X-Envoy-Attempt-Count [1]
c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=b5b1bdfa157a62e0c7d88009119f39a681271410387e440353ee23e8db6bedf8;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account]
c.Request.Header: X-B3-Spanid [9d5a0f1e0f9ecb58]
c.Request.Header: User-Agent [PostmanRuntime/7.28.2]
c.Request.Header: Accept [*/*]
c.Request.Header: X-B3-Sampled [0]
c.Request.Header: Accept-Encoding [gzip, deflate, br]
c.Request.Header: X-Envoy-External-Address [192.20.1.0]
c.Request.Header: X-Request-Id [e2ec1977-62d9-4cba-90bc-7476e4037b47]
[GIN] 2023/09/14 - 09:30:18 | 200 | 1.970681ms | 192.20.1.0 | GET "/get"
```

server2:

```
[root@master-01 ~]# kubectl -n test1 logs -f server2-5bc69c4f75-wcbcq
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:  export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /post --> main.postJson (3 handlers)
[GIN-debug] GET  /get  --> main.getJson (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :9999
sed2sort: GET go-test2
Get "go-test2": unsupported protocol scheme ""
c.Request.Header: X-B3-Traceid [bedabe3d26de18e00f3029cb0970a46f]
c.Request.Header: X-B3-Parentspanid [0f3029cb0970a46f]
c.Request.Header: User-Agent [Go-http-client/1.1]
c.Request.Header: Test []
c.Request.Header: Test []
c.Request.Header: X-Forwarded-Proto [http]
c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=102cfb7487ec810d309276831d9a41169dedce98bc5efcd81343e1d58d49bdd7;Subject="";URI=spiffe://cluster.local/ns/test1/sa/default]
c.Request.Header: X-Envoy-Attempt-Count [1]
c.Request.Header: X-B3-Spanid [904970b36297796d]
c.Request.Header: X-B3-Sampled [0]
c.Request.Header: Cookie [cannary=NotSet]
c.Request.Header: Accept-Encoding [gzip]
c.Request.Header: X-Request-Id [faa7c8ad-1175-43f3-9635-277747543a85]
[GIN] 2023/09/14 - 09:30:18 | 200 | 573.45µs | 127.0.0.6 | GET "/get"
```

## Header-based routing

The header name must stay within the agreed contract: the code propagates a header named `test`, so we match on `test` being exactly `"true"`. If a request sent from postman carries the header `test: true`, it is routed to server2-1:

```yaml
  http:
  - name: server2-1
    match:
    - headers:
        test:
          exact: "true"
    route:
    - destination:
        host: server2-1
```

The full VirtualService:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: server2
  namespace: test1
spec:
  hosts:
  - "server2"
  http:
  - name: server2-1
    match:
    - headers:
        test:
          exact: "true"
    route:
    - destination:
        host: server2-1
      headers:
        request:
          set:
            User-Agent: Mozilla
        response:
          add:
            x-canary: "marksugar"
  - name: server2
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: server2
```

Fire a test request: the request is now routed to server2-1.

server1:

```
[root@master-01 ~]# kubectl -n test1 logs -f server1-79fd8456ff-8fj9v
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:  export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /post --> main.postJson (3 handlers)
[GIN-debug] GET  /get  --> main.getJson (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :9999
sed2sort: GET https://server2/get
{"status":"ok"}
c.Request.Header: Postman-Token [0a2c1682-fb2a-428f-a9d1-8232bf8372db]
c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=b5b1bdfa157a62e0c7d88009119f39a681271410387e440353ee23e8db6bedf8;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account]
c.Request.Header: X-B3-Traceid [034f8708b77c398ea651d5f18dae87f3]
c.Request.Header: Accept-Encoding [gzip, deflate, br]
c.Request.Header: X-Forwarded-Proto [http]
c.Request.Header: X-Request-Id [39b021a2-b596-4899-8eec-bc58210bae4f]
c.Request.Header: X-Envoy-Attempt-Count [1]
c.Request.Header: X-Envoy-External-Address [192.20.1.0]
c.Request.Header: Test [true]
c.Request.Header: Test [true]
c.Request.Header: User-Agent [PostmanRuntime/7.28.2]
c.Request.Header: Accept [*/*]
c.Request.Header: X-Forwarded-For [192.20.1.0]
c.Request.Header: X-B3-Spanid [6178bc93cf359eb7]
c.Request.Header: X-B3-Parentspanid [a651d5f18dae87f3]
c.Request.Header: X-B3-Sampled [0]
[GIN] 2023/09/14 - 09:37:17 | 200 | 4.640183ms | 192.20.1.0 | GET "/get"
```

server2-1:

```
[root@master-01 ~]# kubectl -n test1 logs -f server2-1-74bfdd776c-5zs7z
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:  export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /post --> main.postJson (3 handlers)
[GIN-debug] GET  /get  --> main.getJson (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :9999
sed2sort: GET go-test2
Get "go-test2": unsupported protocol scheme ""
c.Request.Header: X-Request-Id [8bd6ca98-2b11-4867-85b6-6c0ec629de49]
c.Request.Header: User-Agent [Mozilla]
c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=102cfb7487ec810d309276831d9a41169dedce98bc5efcd81343e1d58d49bdd7;Subject="";URI=spiffe://cluster.local/ns/test1/sa/default]
c.Request.Header: X-B3-Traceid [d3693e3f3f1bc32a0220533906f33a9e]
c.Request.Header: X-B3-Parentspanid [0220533906f33a9e]
c.Request.Header: Test [true]
c.Request.Header: Test [true]
c.Request.Header: Accept-Encoding [gzip]
c.Request.Header: X-Forwarded-Proto [http]
c.Request.Header: X-Envoy-Attempt-Count [1]
c.Request.Header: X-B3-Spanid [db50496d52cfae29]
c.Request.Header: X-B3-Sampled [0]
c.Request.Header: Cookie [cannary=NotSet]
[GIN] 2023/09/14 - 09:37:18 | 200 | 230.31µs | 127.0.0.6 | GET "/get"
```

## Cookie-based routing

For the cookie we need a regex match against the Cookie header value, e.g. `cannary=marksugar`; cookies are separated by semicolons, so after regexing the pattern becomes `"^(.*;.)?(cannary=marksugar)(;.*)?$"`. The code still needs to propagate the agreed cookie name. We then modify the server2 VirtualService: if the cookie contains `cannary=marksugar`, route to server2-cooike. Add:

```yaml
  http:
  - name: server2-cookie
    match:
    - headers:
        cookie:
          regex: "^(.*;.)?(cannary=marksugar)(;.*)?$"
    route:
    - destination:
        host: server2-cooike
```

The full VirtualService:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: server2
  namespace: test1
spec:
  hosts:
  - "server2"
  http:
  - name: server2-cookie
    match:
    - headers:
        cookie:
          regex: "^(.*;.)?(cannary=marksugar)(;.*)?$"
    route:
    - destination:
        host: server2-cooike
  - name: server2-1
    match:
    - headers:
        test:
          exact: "true"
    route:
    - destination:
        host: server2-1
      headers:
        request:
          set:
            User-Agent: Mozilla
        response:
          add:
            x-canary: "marksugar"
  - name: server2
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: server2
```

Next, add the cookie in postman: in the middle of the left pane add the domain cookie.linuxea.com, then click Add Cookie and add `cannary=marksugar`. When a request carrying `cannary=marksugar` flows toward server2 and that cookie is detected, the request is routed to the server2-cooike pod.

server1:

```
[root@master-01 ~]# kubectl -n test1 logs -f server1-79fd8456ff-8fj9v
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:  export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /post --> main.postJson (3 handlers)
[GIN-debug] GET  /get  --> main.getJson (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :9999
c.Cookie: marksugar
sed2sort: GET https://server2/get
{"status":"ok"}
c.Request.Header: X-Forwarded-For [192.20.1.0]
c.Request.Header: X-Envoy-External-Address [192.20.1.0]
c.Request.Header: X-B3-Parentspanid [6b9b59fdbe1b967c]
c.Request.Header: X-B3-Sampled [0]
c.Request.Header: Postman-Token [bd6ef33a-279b-4b13-853c-c0785eb6161d]
c.Request.Header: Cookie [cannary=marksugar]
c.Request.Header: X-Envoy-Attempt-Count [1]
c.Request.Header: X-B3-Spanid [b53f23943a757ec7]
c.Request.Header: Accept-Encoding [gzip, deflate, br]
c.Request.Header: X-Request-Id [0cf8ff34-3f41-4197-9e22-0f8d02371942]
c.Request.Header: X-Forwarded-Proto [http]
c.Request.Header: User-Agent [PostmanRuntime/7.28.2]
c.Request.Header: Accept [*/*]
c.Request.Header: X-B3-Traceid [3211f68dd35e189b6b9b59fdbe1b967c]
c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=b5b1bdfa157a62e0c7d88009119f39a681271410387e440353ee23e8db6bedf8;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account]
[GIN] 2023/09/14 - 09:26:37 | 200 | 2.427771ms | 192.20.1.0 | GET "/get"
```

server2-cooike:

```
[root@master-01 ~]# kubectl -n test1 logs -f server2-cooike-94ffb459-bdgk4
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:  export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /post --> main.postJson (3 handlers)
[GIN-debug] GET  /get  --> main.getJson (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :9999
c.Cookie: marksugar
sed2sort: GET go-test2
Get "go-test2": unsupported protocol scheme ""
c.Request.Header: User-Agent [Go-http-client/1.1]
c.Request.Header: Cookie [cannary=marksugar]
c.Request.Header: X-Request-Id [1bb67f53-5a26-4803-b934-6ed5b0af0c1d]
c.Request.Header: X-B3-Spanid [1da69ab234ba1e23]
c.Request.Header: X-B3-Parentspanid [1ca523b46de45366]
c.Request.Header: Test []
c.Request.Header: Test []
c.Request.Header: Accept-Encoding [gzip]
c.Request.Header: X-Forwarded-Proto [http]
c.Request.Header: X-Envoy-Attempt-Count [1]
c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=102cfb7487ec810d309276831d9a41169dedce98bc5efcd81343e1d58d49bdd7;Subject="";URI=spiffe://cluster.local/ns/test1/sa/default]
c.Request.Header: X-B3-Traceid [ba730ac5aa266e2b1ca523b46de45366]
c.Request.Header: X-B3-Sampled [0]
[GIN] 2023/09/14 - 09:26:39 | 200 | 71.76µs | 127.0.0.6 | GET "/get"
```

## References

- https://regex101.com/r/CPv2kU/3
- https://istio.io/latest/docs/reference/config/networking/destination-rule/
- https://istio.io/latest/zh/docs/tasks/traffic-management/request-routing/
2023-02-04
linuxea: Istio Bookinfo configuration demo (11)
The Istio samples include a Bookinfo example. The app mimics one category of an online book store, displaying information about a book: a description, book details (ISBN, page count, etc.), and some reviews. Bookinfo is split into four separate microservices:

- productpage: calls the details and reviews microservices to render the page.
- details: contains the book information.
- reviews: contains the book reviews; it also calls the ratings microservice.
- ratings: contains the rating information derived from book reviews.

The reviews microservice has 3 versions:

- v1 does not call the ratings service.
- v2 calls ratings and renders the score as 1 to 5 black stars.
- v3 calls ratings and renders the score as 1 to 5 red stars.

(The topology diagram appears in the original post.) The Bookinfo microservices are written in different languages. They have no dependency on Istio, but together they form a representative service-mesh example: multiple services in multiple languages (behind a uniform API), with the reviews service available in multiple versions.

## Installation

After unpacking Istio, the Bookinfo files live under samples/bookinfo; see the getting-started section of the official docs:

```
[root@linuxea_48 /usr/local/istio-1.14.1]# ls samples/bookinfo/ -ll
total 20
-rwxr-xr-x 1 root root 3869 Jun  8 10:11 build_push_update_images.sh
drwxr-xr-x 2 root root 4096 Jun  8 10:11 networking
drwxr-xr-x 3 root root   18 Jun  8 10:11 platform
drwxr-xr-x 2 root root   46 Jun  8 10:11 policy
-rw-r--r-- 1 root root 3539 Jun  8 10:11 README.md
drwxr-xr-x 8 root root  123 Jun  8 10:11 src
-rw-r--r-- 1 root root 6329 Jun  8 10:11 swagger.yaml
```

Then apply the platform/kube/bookinfo.yaml file:

```
[root@linuxea_48 /usr/local/istio-1.14.1]# kubectl -n java-demo apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
```

## Error handling

One container crashed on startup with a JVM bus error:

```
Unhandled exception
Type=Bus error vmState=0x00000000
J9Generic_Signal_Number=00000028 Signal_Number=00000007 Error_Value=00000000 Signal_Code=00000002
Handler1=00007F368FD0AD30 Handler2=00007F368F5F72F0 InaccessibleAddress=00002AAAAAC00000
RDI=00007F369017F7D0 RSI=0000000000000008 RAX=00007F369018CBB0 RBX=00007F369017F7D0
RCX=00007F369003A9D0 RDX=0000000000000000 R8=0000000000000000 R9=0000000000000000
R10=00007F36900008D0 R11=0000000000000000 R12=00007F369017F7D0 R13=00007F3679C00000
R14=0000000000000001 R15=0000000000000080
RIP=00007F368DA7395B GS=0000 FS=0000 RSP=00007F3694D1E4A0
EFlags=0000000000010202 CS=0033 RBP=00002AAAAAC00000 ERR=0000000000000006
TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=00002AAAAAC00000
xmm0 0000003000000020 (f: 32.000000, d: 1.018558e-312)
xmm1 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm2 ffffffff00000002 (f: 2.000000, d: -nan)
xmm3 40a9000000000000 (f: 0.000000, d: 3.200000e+03)
xmm4 dddddddd000a313d (f: 667965.000000, d: -1.456815e+144)
xmm5 0000000000000994 (f: 2452.000000, d: 1.211449e-320)
xmm6 00007f369451ac40 (f: 2488380416.000000, d: 6.910614e-310)
xmm7 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm8 dd006b6f6f68396a (f: 1869101440.000000, d: -9.776703e+139)
xmm9 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm10 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm11 0000000049d70a38 (f: 1238829568.000000, d: 6.120632e-315)
xmm12 000000004689a022 (f: 1183424512.000000, d: 5.846894e-315)
xmm13 0000000047ac082f (f: 1202456576.000000, d: 5.940925e-315)
xmm14 0000000048650dc0 (f: 1214582272.000000, d: 6.000833e-315)
xmm15 0000000046b73e38 (f: 1186414080.000000, d: 5.861665e-315)
Module=/opt/ibm/java/jre/lib/amd64/compressedrefs/libj9jit29.so
Module_base_address=00007F368D812000
Target=2_90_20200901_454898 (Linux 3.10.0-693.el7.x86_64)
CPU=amd64 (32 logical CPUs) (0x1f703dd000 RAM)
----------- Stack Backtrace -----------
(0x00007F368DA7395B [libj9jit29.so+0x26195b])
(0x00007F368DA7429B [libj9jit29.so+0x26229b])
(0x00007F368D967C57 [libj9jit29.so+0x155c57])
J9VMDllMain+0xb44 (0x00007F368D955C34 [libj9jit29.so+0x143c34])
(0x00007F368FD1D041 [libj9vm29.so+0xa7041])
(0x00007F368FDB4070 [libj9vm29.so+0x13e070])
(0x00007F368FC87E94 [libj9vm29.so+0x11e94])
(0x00007F368FD2581F [libj9vm29.so+0xaf81f])
(0x00007F368F5F8053 [libj9prt29.so+0x1d053])
(0x00007F368FD1F9ED [libj9vm29.so+0xa99ed])
J9_CreateJavaVM+0x75 (0x00007F368FD15B75 [libj9vm29.so+0x9fb75])
(0x00007F36942F4305 [libjvm.so+0x12305])
JNI_CreateJavaVM+0xa82 (0x00007F36950C9B02 [libjvm.so+0xab02])
(0x00007F3695ADDA94 [libjli.so+0xfa94])
(0x00007F3695CF76DB [libpthread.so.0+0x76db])
clone+0x3f (0x00007F36955FAA3F [libc.so.6+0x121a3f])
---------------------------------------
JVMDUMP039I Processing dump event "gpf", detail "" at 2022/07/20 08:59:38 - please wait.
JVMDUMP032I JVM requested System dump using '/opt/ibm/wlp/output/defaultServer/core.20220720.085938.1.0001.dmp' in response to an event
JVMDUMP010I System dump written to /opt/ibm/wlp/output/defaultServer/core.20220720.085938.1.0001.dmp
JVMDUMP032I JVM requested Java dump using '/opt/ibm/wlp/output/defaultServer/javacore.20220720.085938.1.0002.txt' in response to an event
JVMDUMP012E Error in Java dump: /opt/ibm/wlp/output/defaultServer/javacore.20220720.085938.1.0002.txt
JVMDUMP032I JVM requested Snap dump using '/opt/ibm/wlp/output/defaultServer/Snap.20220720.085938.1.0003.trc' in response to an event
JVMDUMP010I Snap dump written to /opt/ibm/wlp/output/defaultServer/Snap.20220720.085938.1.0003.trc
JVMDUMP032I JVM requested JIT dump using '/opt/ibm/wlp/output/defaultServer/jitdump.20220720.085938.1.0004.dmp' in response to an event
JVMDUMP013I Processed dump event "gpf", detail "".
```

The fix is as follows (see 34510, 13389):

```bash
echo 0 > /proc/sys/vm/nr_hugepages
```

With that configured, the pods come up:

```
(base) [root@k8s-01 bookinfo]# kubectl -n java-demo get pod
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-6d89cf9847-46c4z      2/2     Running   0          27m
productpage-v1-f44fc594c-fmrf4   2/2     Running   0          27m
ratings-v1-6c77b94555-twmls      2/2     Running   0          27m
reviews-v1-765697d479-tbprw      2/2     Running   0          6m30s
reviews-v2-86855c588b-sm6w2      2/2     Running   0          6m2s
reviews-v3-6ff967c97f-g6x8b      2/2     Running   0          5m55s
sleep-557747455f-46jf5           2/2     Running   0          5d
```

## 1. Gateway

`hosts` is configured as `*`, the default, which matches everything; this means the app can be accessed by IP address. The access entry in the VirtualService is:

```yaml
  http:
  - match:
    - uri:
        exact: /productpage
```

As long as the pods start normally, north-south traffic is led into the mesh and the page is reachable at ip/productpage. The yaml:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
```

Apply:

```
(base) [root@k8s-01 bookinfo]# kubectl -n java-demo apply -f networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
```

The page can now be opened in a browser. Meanwhile, the displayed reviews version keeps changing on refresh: reviews-v1, reviews-v3, reviews-v2. Kiali now shows a simple topology: a request enters through the ingress-gateway, reaches productpage v1, then is dispatched to details v1, while reviews traffic is split in equal proportions across v1, v2 and v3; v2 and v3 additionally call the ratings service.

## Mesh tests

With installation done, we run some tests: request routing, fault injection, and so on.

### 1. Request routing

To begin, define the subset rules in destination rules, as follows:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v2-mysql
    labels:
      version: v2-mysql
  - name: v2-mysql-vm
    labels:
      version: v2-mysql-vm
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: details
spec:
  host: details
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
```

Apply:

```
> kubectl -n java-demo apply -f samples/bookinfo/networking/destination-rule-all.yaml
destinationrule.networking.istio.io/productpage created
destinationrule.networking.istio.io/reviews created
destinationrule.networking.istio.io/ratings created
destinationrule.networking.istio.io/details created
```

Then, for non-logged-in users, send all traffic to the v1 versions. The expanded yaml:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - productpage
  http:
  - route:
    - destination:
        host: productpage
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: details
spec:
  hosts:
  - details
  http:
  - route:
    - destination:
        host: details
        subset: v1
---
```

Create it with:

```
> kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
virtualservice.networking.istio.io/productpage created
virtualservice.networking.istio.io/reviews created
virtualservice.networking.istio.io/ratings created
virtualservice.networking.istio.io/details created
```

Now all traffic goes to v1. This rests on the three reviews subsets defined earlier:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```

together with the VirtualService that pins reviews to v1:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
```

Without this reviews VirtualService, requests would keep switching between the three versions.

### 2. Routing by user identity

Now we want a particular logged-in user to be sent to a particular version: if the end-user equals jason, route to v2:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
```

On the Bookinfo /productpage, log in as user jason. After login, the kiali graph changes accordingly.

### 3. Fault injection

To understand fault injection, it helps to know about chaos engineering. On a cloud-native platform, you want some resilience against partial local failures, e.g. allowing client retries and timeouts to absorb localized problems. Istio natively supports injecting two kinds of faults to simulate chaos-engineering effects: delays and aborts. Building on the two previous configs:

```
$ kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
$ kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
```

With this configuration, the request flow is:

- productpage → reviews:v2 → ratings (user jason only)
- productpage → reviews:v1 (everyone else)

Injecting a delay fault: to test the resilience of the Bookinfo microservices, inject a 7 s delay between reviews:v2 and ratings for user jason. The test will uncover a bug that was intentionally introduced into the Bookinfo app. If the user is jason, inject a 7-second delay on 100% of the traffic and route to v1; everyone else is also routed to v1. The only difference is that jason's requests are delayed. The yaml:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      delay:
        percentage:
          value: 100.0
        fixedDelay: 7s
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1
```

Apply:

```
> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml
virtualservice.networking.istio.io/ratings configured
```

Open a browser to test; the page shows "Sorry, product reviews are currently unavailable for this book."

Injecting an abort fault: another way to test microservice resilience is to introduce an HTTP abort fault. In this task, we introduce an HTTP abort on the ratings microservice for test user jason. This time we expect the page to load immediately and display the "Ratings service is currently unavailable" message.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      abort:
        percentage:
          value: 100.0
        httpStatus: 500
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1
```

Apply:

```
> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml
virtualservice.networking.istio.io/ratings configured
```

The kiali graph reflects the abort.

### 4. Traffic shifting

This shows how to shift traffic from one version of a microservice to another. A common use case is gradually migrating traffic from an old version to a new one. In Istio, this is achieved with a series of routing rules that redirect a percentage of traffic from one destination to another. In this task we send 50% of the traffic to reviews:v1 and 50% to reviews:v3, then complete the migration by sending 100% to reviews:v3.

First, route all traffic to v1 of every microservice by applying virtual-service-all-v1.yaml again (the same manifest shown in the request-routing step):

```
> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
virtualservice.networking.istio.io/productpage unchanged
virtualservice.networking.istio.io/reviews configured
virtualservice.networking.istio.io/ratings configured
virtualservice.networking.istio.io/details unchanged
```

Now, no matter how often you refresh, the reviews section of the page never shows rating stars: Istio routes all reviews traffic to reviews:v1, and that version of the service does not call the star-rating service.

Shift 50% of the traffic to reviews:v3 with the following manifest:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50
```

Apply:

```
> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
virtualservice.networking.istio.io/reviews configured
```

Kiali shows the split.

### 5. Request timeouts

Use the `timeout` field of a route rule to specify an HTTP request timeout. Request timeouts are disabled by default, but in this task we override the reviews service timeout to 1 second. To see the effect, we also artificially introduce a 2
秒延迟(即在 reviews 调用 ratings 服务时人为引入 2 秒延迟)。

在开始之前,将所有的请求指派到 v1:

kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

而后将 reviews 调度到 v2:

kubectl -n java-demo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
EOF

在 ratings 的 v1 上注入一个 2 秒钟的延迟(reviews 会访问到 ratings):

kubectl -n java-demo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
EOF

如下图,当应用程序调用到 ratings 的时候,会产生 2 秒的延迟。一旦应用程序的上游响应缓慢,势必影响服务体验。于是我们调整配置:如果上游服务响应超过 0.5 秒,就不再继续等待:

kubectl -n java-demo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s
EOF

此时刷新页面会提示 Sorry, product reviews are currently unavailable for this book.,因为服务响应超过了 0.5 秒,请求被超时中止。如果此时我们认为 3 秒的延迟是可以接受的,把 timeout 改成 3s,服务就可以访问到 ratings 了。
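上文提到把超时改成 3 秒即可恢复访问,对应的 VirtualService 片段大致如下(示意配置,沿用本文 reviews 的子集命名,要点是 timeout 大于 ratings 上注入的 2 秒延迟):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 3s   # 大于 ratings 上注入的 2s 延迟,请求可在超时前完成
```

应用后再刷新页面,评论区会在约 2 秒后正常显示评分。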
2023年02月04日
1,056 阅读
0 评论
0 点赞
2023-02-03
linuxea:istio 故障注入/重试和容错/流量镜像(10)
6.故障注入

istio支持两种故障注入,分别是延迟故障和中断故障:

- 延迟故障(delay):注入延迟,用于模拟超时、响应缓慢等场景
- 中断故障(abort):注入中断,用于模拟上游返回错误,验证重试等容错能力

故障注入仍然在http层进行定义。

中断故障:

  fault:
    abort:              # 中断故障
      percentage:
        value: 20       # 在多大的比例流量上注入
      httpStatus: 567   # 故障响应码

延迟故障:

  fault:
    delay:
      percentage:
        value: 20       # 在百分之20的流量上注入
      fixedDelay: 6s    # 注入6秒的延迟

yaml如下:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"          # 对应于gateways/proxy-gateway
  - "dpment"
  gateways:
  - istio-system/dpment-gateway   # 相关定义仅应用于Ingress Gateway上
  - mesh
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
    fault:
      abort:
        percentage:
          value: 20
        httpStatus: 567
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
    fault:
      delay:
        percentage:
          value: 20
        fixedDelay: 6s

此时,当我们用curl访问 dpment.linuxea.com 的时候,有20%的流量会被注入6秒的延迟:

(base) [root@master1 7]# while true;do date;curl dpment.linuxea.com; date;sleep 0.$RANDOM;done
2022年 08月 07日 星期日 18:10:40 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:40 CST
2022年 08月 07日 星期日 18:10:41 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:41 CST
2022年 08月 07日 星期日 18:10:41 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:41 CST
2022年 08月 07日 星期日 18:10:41 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:47 CST
2022年 08月 07日 星期日 18:10:47 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:53 CST
2022年 08月 07日 星期日 18:10:54 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:54 CST
2022年 08月 07日 星期日 18:10:55 CST

如果我们访问dpment.linuxea.com/version/的时候,有20%的流量返回的状态码是567:

(base) [root@master1 7]# while true;do echo -e
"===============";curl dpment.linuxea.com/version/ -I ; sleep 0.$RANDOM;done =============== HTTP/1.1 567 Unknown content-length: 18 content-type: text/plain date: Sun, 07 Aug 2022 10:16:40 GMT server: istio-envoy =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:31 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 2 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:32 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 567 Unknown content-length: 18 content-type: text/plain date: Sun, 07 Aug 2022 10:16:42 GMT server: istio-envoy =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:33 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 3 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:33 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:33 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:33 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 3 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:34 GMT content-type: text/html content-length: 93 last-modified: 
Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:34 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:35 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:35 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 2 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:36 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 2 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:36 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 2 =============== HTTP/1.1 567 Unknown content-length: 18 content-type: text/plain date: Sun, 07 Aug 2022 10:16:46 GMT server: istio-envoy =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:37 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1如果使用curl命令直接访问会看到fault filter abort(base) [root@master1 7]# while true;do echo -e "\n";curl dpment.linuxea.com/version/ ; sleep 0.$RANDOM;done linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 fault filter abort linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 fault filter abort fault filter abort linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 fault filter abort linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 fault filter abort linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0回到kiali6.1. 重试和容错请求重试条件:5xx:上游主机返回5xx响应码,或者根本未响应(端口,重置,读取超时)gateway-error: 网关错误,类似于5xx策略,但仅为502,503,504应用进行重试connection-failure:在tcp级别与上游服务建立连接失败时进行重试retriable-4xx:上游服务器返回可重复的4xx响应码时进行重试refused-stream:上游服务器使用REFUSED-STREAM错误码重置时进行重试retrable-status-codes:上游服务器的响应码与重试策略或者x-envoy-retriable-status-codes标头值中定义的响应码匹配时进行重试reset:上游主机完全不响应(disconnect/reset/read超时),envoy将进行重试retriable-headers:如果上游服务器响应报文匹配重试策略或x-envoy-retriable-header-names标头中包含的任何标头,则envoy将尝试重试envoy-rateliited:标头中存在x-envoy-ratelimited时重试重试条件2(同x-envoy-grpc-on标头):cancelled: grpc应答标头中的状态码是"cancelled"时进行重试deadline-exceeded: grpc应答标头中的状态码是"deadline-exceeded"时进行重试internal: grpc应答标头中的状态码是“internal”时进行重试resource-exhausted:grpc应答标头中的状态码是"resource-exhausted"时进行重试unavailable:grpc应答标头中的状态码是“unavailable”时进行重试默认情况下,envoy不会进行任何类型的重试操作,除非明确定义我们假设现在有多个服务,A->B->C,A向后代理,或者访问其中的B出现了响应延迟,在A上配置容错机制,如下apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: java-demo spec: hosts: - "dpment.linuxea.com" # 对应于gateways/proxy-gateway - "dpment" gateways: - istio-system/dpment-gateway # 相关定义仅应用于Ingress Gateway上 http: - name: default route: - destination: host: A timeout: 1s # 如果上游超过1秒响应,就返回超时结果 retries: # 重试 attempts: 5 # 重试次数 perTryTimeout: 1s # 重试时间 retryOn: 5xx,connect-failure,refused-stream # 对那些条件进行重试如果上游服务超过1秒未响应就进行重试,对于5开头的响应码,tcp链接失败的,或者是GRPC的Refused-stream的建立链接也拒绝了,就重试五次,每次重试1秒。这个重试的 5次过程中,如果在1s内,有成功的则会成功 。7.流量镜像流量镜像,也叫影子流量(Traffic shadowing),是一种通过复制生产环境的流量到其他环境进行测试开发的工作模式。在traffic-mirror中,我们可以直接使用mirror来指定给一个版本 - name: default route: - destination: host: dpment subset: v11 mirror: host: dpment subset: v12于是,我们在此前的配置上修改apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: 
java-demo spec: hosts: - "dpment.linuxea.com" # 对应于gateways/proxy-gateway - "dpment" gateways: - istio-system/dpment-gateway # 相关定义仅应用于Ingress Gateway上 - mesh http: - name: version match: - uri: prefix: /version/ rewrite: uri: / route: - destination: host: dpment subset: v10 - name: default route: - destination: host: dpment subset: v11 mirror: host: dpment subset: v12我们发起curl请求 while ("true"){ curl https://dpment.linuxea.com/ ;sleep 1}而后在v12中查看日志以获取是否流量被镜像进来(base) [root@master1 10]# kubectl -n java-demo exec -it dpment-linuxea-c-568b9fcb5c-ltdcg -- /bin/bash bash-5.0# curl 127.0.0.1 linuxea-dpment-linuxea-c-568b9fcb5c-ltdcg.com-127.0.0.1/8 130.130.1.125/24 version number 3.0 bash-5.0# tail -f /data/logs/access.log 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:27:59 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:00 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:01 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:02 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:03 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:04 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:05 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:06 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:07 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:08 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:11 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:12 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:13 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:14 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:15 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:16 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:17 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:18 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:19 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:20 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:21 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" 
"curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:23 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:24 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:25 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
2023年02月03日
1,052 阅读
0 评论
0 点赞
2023-01-17
linuxea:istio 基于headers请求头路由(9)
5.请求首部条件路由正常情况下从客户端请求的流量会发送到sidecar-proxy(eneoy),请求发送到上游的pod,而后响应到envoy,并从envoy响应给客户端。在这个过程中,客户端到envoy是不能够被修改的,只有从envoy到上游serrvice中才是可以被操作的,因为这段报文是由envoy发起的。而从serrvice响应到envoy的也是不能够被修改的。并且从envoy响应到客户端的报文由envoy发起响应也是可以被修改的。总共可配置的只有发送到上游service的请求和发给下游客户端的响应。而另外两端并不是envoy生成的,因此没用办法去操作标头的。request: 发送给上游请求serviceresponse: 响应给下游客户端1.如果请求的首部有x-for-canary等于true则路由到v10,如果浏览器是Mozilla就路由给v11,并且修改发送给上游的请求标头的 User-Agent: Mozilla,其次,在响应给客户端的标头添加一个 x-canary: "true" - name: canary match: - headers: x-for-canary: exact: "true" route: - destination: host: dpment subset: v11 headers: request: # 修改发送给上游的请求标头的 User-Agent: Mozilla set: User-Agent: Mozilla response: # 响应给客户端的标头添加一个 x-canary: "true" add: x-canary: "true"没有匹配到这些规则,就给默认规则匹配,就路由给v10,并且添加一个下游的响应报文: 全局标头X-Envoy: linuxea - name: default headers: response: add: X-Envoy: linuxea route: - destination: host: dpment subset: v10yaml如下apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: java-demo spec: hosts: - dpment http: - name: canary match: - headers: x-for-canary: exact: "true" route: - destination: host: dpment subset: v11 headers: request: # 修改发送给上游的请求标头的 User-Agent: Mozilla set: User-Agent: Mozilla response: # 响应给客户端的标头添加一个 x-canary: "true" add: x-canary: "marksugar" - name: default headers: response: add: X-Envoy: linuxea route: - destination: host: dpment subset: v10我们添加上gateway部分apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: java-demo spec: hosts: - "dpment.linuxea.com" # 对应于gateways/proxy-gateway - "dpment" gateways: - istio-system/dpment-gateway # 相关定义仅应用于Ingress Gateway上 - mesh http: - name: canary match: - headers: x-for-canary: exact: "true" route: - destination: host: dpment subset: v11 headers: request: set: User-Agent: Mozilla response: add: x-canary: "marksugar" - name: default headers: response: add: X-Envoy: linuxea route: - destination: host: dpment subset: v105.1 测试此时可以使用curl来模拟访问请求# curl dpment.linuxea.com 
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0添加头部, -H "x-for-canary: true"# curl -H "x-for-canary: true" dpment.linuxea.com linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0而后查看日志的user-agent ,这里是Mozilla,默认是curl130.130.0.0, 127.0.0.6 - [07/Aug/2022:08:18:33 +0000] "GET / HTTP/1.1" dpment.linuxea.com94 "-" "Mozilla" - -0.000 [200] [-] [-] "-"我们使用curl发起请求,模拟的是Mozilla此时使用-I,查看标头的添加信息x-canary: marksugarPS C:\Users\Administrator> curl -H "x-for-canary: true" dpment.linuxea.com -I HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 09:47:45 GMT content-type: text/html content-length: 94 last-modified: Wed, 03 Aug 2022 07:58:30 GMT etag: "62ea2aa6-5e" accept-ranges: bytes x-envoy-upstream-service-time: 4 x-canary: marksugar如果不加头部-H "x-for-canary: true"则响应报文的是x-envoy: linuxeaPS C:\Users\Administrator> curl dpment.linuxea.com -I HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 09:51:53 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 7 x-envoy: linuxea
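除了 exact,headers 匹配还支持 prefix 和 regex。例如按 User-Agent 做模糊匹配,把浏览器客户端路由到 v11(示意片段,regex 表达式为假设的示例):

```yaml
http:
- name: browser
  match:
  - headers:
      user-agent:
        regex: ".*Mozilla.*"
  route:
  - destination:
      host: dpment
      subset: v11
```

注意 headers 下的标头名需使用小写;未命中该 match 的请求会继续匹配后续的路由条目(如本文的 default 路由)。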
2023年01月17日
1,251 阅读
0 评论
0 点赞
2023-01-15
linuxea:istio 基于权重路由(8)
紧接前面,这篇我们希望访问dpment服务的请求在百分之90的流量在原来的v10的pod,而百分之10的在新的v11的pod,因此我们配置weight来实现基于权重比例的流量切割首先部署dpment-a和dpment-b仍然需要配置service关联到后端的pod标签apiVersion: v1 kind: Service metadata: name: dpment namespace: java-demo spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: linuxea_app type: ClusterIP配置一个DestinationRule,并且配置上subsets根据标签匹配,通过标签 匹配到两个service上,将子集定义完成此前的篇幅中我门配置过deployment,并且标签配置完成,此时的子集直接引用--- apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: dpment namespace: java-demo spec: host: dpment subsets: - name: v11 labels: version: v1.1 - name: v10 labels: version: v1.0在VirtualService中添加一项weight分别指定两个subsets.name的权重--- apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: java-demo spec: hosts: - dpment http: - name: weight-based-routing route: - destination: host: dpment subset: v10 weight: 90 - destination: host: dpment subset: v11 weight: 10yaml如下apiVersion: v1 kind: Service metadata: name: dpment namespace: java-demo spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: linuxea_app type: ClusterIP --- apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: dpment namespace: java-demo spec: host: dpment subsets: - name: v11 labels: version: v1.1 - name: v10 labels: version: v1.0 --- apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: java-demo spec: hosts: - dpment http: - name: weight-based-routing route: - destination: host: dpment subset: v10 weight: 90 - destination: host: dpment subset: v11 weight: 10而后我们在cli的容器,或者其他容器都可以进行访问测试/ $ while true;do curl dpment;sleep 0.3;done linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0 
linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-b-55694cb7f5-695mr.com-127.0.0.1/8 130.130.1.108/24 version number 2.0 linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0 linuxea-dpment-linuxea-b-55694cb7f5-695mr.com-127.0.0.1/8 130.130.1.108/24 version number 2.0 linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0此时在kiali界面会看到流量走向现在我们希望增加流量权重比例,修改即可,比如百分之60到v10,百分之40到v11--- apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: java-demo spec: hosts: - dpment http: - name: weight-based-routing route: - destination: host: dpment subset: v10 weight: 60 - destination: host: dpment subset: v11 weight: 40此时的kiali可以看到流量的比例情况如下如果希望流量全部切换到一方,修改为100即可--- apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: java-demo spec: hosts: - dpment http: - name: weight-based-routing route: - destination: host: dpment subset: v10 weight: 0 - destination: host: dpment subset: v11 weight: 100一旦流量全部倒向100,另外为0的权重的service将不接收流量,并且从kiali界面上移除4.1 gateway那如果是前端页面就需要添加一些其他配置我们假设,有一个服务是dpment,现在的版本是1.1,现在要按照权重比例升级到1.2但是此前我们只配置了v1.0和v1.1的子集,所以现在我们添加一个v1.2因此,我们配置v1.3的pod组,和DestinationRule主要配置DestinationRule是通过标签来关联的,因此pod的标签需要进行修改 matchLabels: app: linuxea_app version: v1.2在此前的DestinationRule之上,我们添加 - name: v12 labels: version: v1.2 都关联到一个dpment的host下yaml如下apiVersion: v1 kind: Service metadata: name: dpment namespace: java-demo spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: linuxea_app type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: dpment-linuxea-c namespace: java-demo spec: replicas: 3 selector: matchLabels: app: linuxea_app 
version: v1.2 template: metadata: labels: app: linuxea_app version: v1.2 spec: containers: - name: dpment-linuxea-c # imagePullPolicy: Always image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v3.0 ports: - name: http containerPort: 80 --- apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: dpment namespace: java-demo spec: host: dpment # 与service保持一致 subsets: # 逻辑组 - name: v11 # 定义v11并根据标签,筛选v1.1到v11子集 labels: version: v1.1 - name: v10 # 定义v10并根据标签,筛选v1.0到v10子集 labels: version: v1.0 - name: v12 labels: version: v1.2 而后在vs调整比例 - destination: host: dpment subset: v10 weight: 90 - destination: host: dpment subset: v12 weight: 10yaml如下apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: java-demo spec: hosts: - "dpment.linuxea.com" # 对应于gateways/proxy-gateway - "dpment" gateways: - istio-system/dpment-gateway # 相关定义仅应用于Ingress Gateway上 - mesh http: - name: weight-based-routing route: - destination: host: dpment subset: v11 weight: 90 - destination: host: dpment subset: v12 weight: 10curlPS C:\Users\usert> while ("true"){ curl https://dpment.linuxea.com/ ;sleep 1} linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-c-568b9fcb5c-ltdcg.com-127.0.0.1/8 130.130.1.125/24 version number 3.0 linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0 linuxea-dpment-linuxea-c-568b9fcb5c-phn77.com-127.0.0.1/8 130.130.0.24/24 version number 3.0 linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-c-568b9fcb5c-phn77.com-127.0.0.1/8 130.130.0.24/24 version number 3.0 linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0由于配的host是 hosts: - "dpment.linuxea.com" - "dpment"因此在pod里面也可以进行访问要在网格内和ingress同时访问, - mesh配置至关重要创建一个clikubectl -n java-demo run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bashcurl开始模拟访问bash-4.4# while true;do curl dpment;sleep 0.4;done 
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-c-568b9fcb5c-ltdcg.com-127.0.0.1/8 130.130.1.125/24 version number 3.0
linuxea-dpment-linuxea-c-568b9fcb5c-phn77.com-127.0.0.1/8 130.130.0.24/24 version number 3.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-c-568b9fcb5c-phn77.com-127.0.0.1/8 130.130.0.24/24 version number 3.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-c-568b9fcb5c-ltdcg.com-127.0.0.1/8 130.130.1.125/24 version number 3.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
At this point the Kiali diagram changes accordingly.
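The 90/10 split seen in the curl output above can be sketched in a few lines. This is an illustrative model of weighted route selection, not Envoy's actual implementation: each request draws a random point on a line segment whose sections are sized by the `weight:` values from the VirtualService.

```python
import random

def pick_subset(weighted_subsets, rnd=random):
    """Pick a subset name according to its relative weight (illustrative sketch)."""
    total = sum(w for _, w in weighted_subsets)
    point = rnd.uniform(0, total)
    for name, weight in weighted_subsets:
        if point < weight:
            return name
        point -= weight
    return weighted_subsets[-1][0]  # guard against floating-point edge cases

# The v11/v12 split from the VirtualService above.
routes = [("v11", 90), ("v12", 10)]
counts = {"v11": 0, "v12": 0}
for _ in range(10000):
    counts[pick_subset(routes)] += 1
print(counts)  # roughly 9 requests to v11 for every 1 to v12
```

Over 10,000 simulated requests the observed ratio converges to the configured weights, which is why short curl loops like the one above only show an approximate 90/10 mix.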
2023-01-15
1,091 reads
0 comments
0 likes
2023-01-03
linuxea: istio exposing a web service outside the cluster (7)
3. Exposing the service to the outside
To reach these two pods from outside the cluster by domain name, we need to define a Gateway and a VirtualService; the VirtualService is bound to the gateway.
- The Gateway opens the listeners. It must be created in the namespace where the mesh's ingress gateway is deployed, otherwise it may fail to take effect.
- The VirtualService defines the routing rules. The VirtualService defined earlier did not specify a gateway; without one, it only applies to the sidecars inside the mesh.
- Bound only to the gateway, the routes are not usable from inside the mesh; to also serve in-mesh traffic, add "- mesh".
- Inside the cluster, services are normally accessed by Service name.
Configure a Gateway to accept ingress traffic for these hosts:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: dpment-gateway
  namespace: istio-system   # must be the namespace of the ingress gateway pod
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "dpment.linuxea.com"
    - "dpment1.linuxea.com"
Configure a VirtualService and bind it to istio-system/dpment-gateway; its hosts correspond to the Gateway's hosts above:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"   # matches gateways/proxy-gateway
  - "dpment1.linuxea.com"
  gateways:
  - istio-system/dpment-gateway   # these rules apply only on the Ingress Gateway
  #- mesh
  http:
  - name: dpment
    route:
    - destination:
        host: dpment
---
Configure a Service:
apiVersion: v1
kind: Service
metadata:
  name: dpment
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
  type: ClusterIP
---
The site can now be opened in a browser. But this simply round-robins traffic across every pod labeled app: linuxea_app, so we add a URL path rule, reachable from outside: requests without /version/ go to v11; requests with /version/ are rewritten to / and sent to v10.
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
We modify the VirtualService to use subsets, which requires an additional DestinationRule:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
#  hosts:
#  - dpment
  hosts:
  - "dpment.linuxea.com"   # matches gateways/proxy-gateway
  gateways:
  - istio-system/dpment-gateway   # these rules apply only on the Ingress Gateway
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment
  namespace: java-demo
spec:
  host: dpment
  subsets:
  - name: v11
    labels:
      version: v1.1
  - name: v10
    labels:
      version: v1.0
Add the domain to the local hosts file and test:
PS C:\Users\usert> while ("true"){ curl https://dpment.linuxea.com/ https://dpment.linuxea.com/version/ ;sleep 1}
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
And in Kiali, the rendered graph has changed accordingly.
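The match/rewrite rule above can be sketched as a small function. This is an illustrative model, not Envoy's implementation: a path starting with the /version/ prefix has that prefix replaced by the rewrite value and is routed to subset v10; everything else falls through to the default route, subset v11.

```python
def route(path, prefix="/version/", rewrite="/"):
    """Illustrative sketch of Istio prefix match + rewrite routing."""
    if path.startswith(prefix):
        # Rewrite: the matched prefix is replaced, the remainder is kept.
        return "v10", rewrite + path[len(prefix):]
    return "v11", path  # default route, path unchanged

print(route("/version/"))      # ('v10', '/')
print(route("/version/api"))   # ('v10', '/api')
print(route("/index.html"))    # ('v11', '/index.html')
```

This mirrors the curl test above: https://dpment.linuxea.com/version/ lands on the v1.0 pods as /, while plain https://dpment.linuxea.com/ lands on the v1.1 pods.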
2023-01-03
1,081 reads
0 comments
0 likes
2022-12-27
linuxea: istio defining subsets (6)
Defining subsets
We group the two versions under a single host and distinguish them by labels. With multiple versions behind the same host, each release is identified by its version label, and the VirtualService then routes by subset.
- Subsets are configured on the DestinationRule (cluster) level.
- The DestinationRule is applied to the cluster, and traffic is scheduled through routes.
- Subsets in this example are keyed on the version label, similar to:
  selector:
    app: linuxea_app
    version: v0.2
Service
First, as before, create a Service matched by label:
apiVersion: v1
kind: Service
metadata:
  name: dpment
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
  type: ClusterIP
Defining the DestinationRule cluster
Then create a DestinationRule. Within one host, subsets use version labels to associate the two groups of pods, each running a different version:
- host must match the Service name
- subsets defines v11, selecting version v1.1 pods into subset v11
- and defines v10, selecting version v1.0 pods into subset v10
We adjust the pod labels. dpment-b is as follows:
---
apiVersion: v1
kind: Service
metadata:
  name: dpment-b
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
    version: v1.1
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea-b
  namespace: java-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: linuxea_app
      version: v1.1
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v1.1
    spec:
      containers:
      - name: nginx-b
        # imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0
        ports:
        - name: http
          containerPort: 80
dpment-a is as follows:
---
apiVersion: v1
kind: Service
metadata:
  name: dpment-a
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
    version: v1.0
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea-a
  namespace: java-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linuxea_app
      version: v1.0
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v1.0
    spec:
      containers:
      - name: nginx-a
        # imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0
        ports:
        - name: http
          containerPort: 80
Once created:
(base) [root@master1 2]# kubectl -n java-demo get svc,pod
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/dpment     ClusterIP   10.96.155.138   <none>        80/TCP    22h
service/dpment-a   ClusterIP   10.99.74.80      <none>        80/TCP    12s
service/dpment-b   ClusterIP   10.101.155.240   <none>        80/TCP    33s

NAME                                    READY   STATUS    RESTARTS   AGE
pod/cli                                 2/2     Running   0          22h
pod/dpment-linuxea-a-777847fd74-fsnsv   2/2     Running   0          12s
pod/dpment-linuxea-b-55694cb7f5-576qs   2/2     Running   0          32s
pod/dpment-linuxea-b-55694cb7f5-lhkrb   2/2     Running   0          32s
DestinationRule
If there are more versions, the subsets list simply gains more label selectors, one per version of the service. The DestinationRule is as follows:
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment
  namespace: java-demo
spec:
  host: dpment   # must match the Service name
  subsets:       # logical groups
  - name: v11    # subset v11 selects pods labeled version: v1.1
    labels:
      version: v1.1
  - name: v10    # subset v10 selects pods labeled version: v1.0
    labels:
      version: v1.0
---
Once the DestinationRule is created, the related information appears in the clusters:
IMP=$(kubectl -n java-demo get pod -l app=linuxea_app -o jsonpath={.items[0].metadata.name})
Use istioctl proxy-config cluster $IMP.java-demo to view the defined clusters:
(base) [root@master1 2]# istioctl proxy-config cluster $IMP.java-demo
...
dpment-a.java-demo.svc.cluster.local   80   -     outbound   EDS
dpment-b.java-demo.svc.cluster.local   80   -     outbound   EDS
dpment.java-demo.svc.cluster.local     80   -     outbound   EDS   dpment.java-demo
dpment.java-demo.svc.cluster.local     80   v10   outbound   EDS   dpment.java-demo
dpment.java-demo.svc.cluster.local     80   v11   outbound   EDS   dpment.java-demo
...
As shown, every Service is a cluster, and each can be reached directly:
bash-4.4# curl dpment
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
bash-4.4# curl dpment-a
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
bash-4.4# curl dpment-b
linuxea-dpment-linuxea-b-59b448f49c-nfkfh.com-127.0.0.1/8 130.130.0.13/24 version number 2.0
So we delete the now-redundant dpment-a and dpment-b. dpment is already the Service backing our subsets, and both the DestinationRule and the VirtualService use dpment, so removing the other two does not affect it. Once dpment-a and dpment-b are deleted, their listeners, clusters, and routes are removed as well.
(base) [root@master1 2]# kubectl -n java-demo delete svc dpment-a dpment-b
service "dpment-a" deleted
service "dpment-b" deleted
VirtualService
We still need a VirtualService for URL path routing. The routing rules are unchanged, but the destination host is now always dpment; only the subset differs:
spec:
  hosts:
  - dpment   # service
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment   # service
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment   # service
        subset: v11
Requests to a /version/ URL are rewritten to / and routed to subset v10 (the v1.0 pods); everything else is routed to v11 (the v1.1 pods). The full YAML is as follows:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
Once created, the configuration can be viewed in Kiali; the services are configured in both the VirtualService and the DestinationRule. Test from the command line:
bash-4.4# while true;do curl dpment; curl dpment/version/;sleep 0.$RANDOM;done
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-j7gk9.com-127.0.0.1/8 130.130.1.121/24 version number 2.0
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-j7gk9.com-127.0.0.1/8 130.130.1.121/24 version number 2.0
linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-j7gk9.com-127.0.0.1/8 130.130.1.121/24 version number 2.0
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-nfkfh.com-127.0.0.1/8 130.130.0.13/24 version number 2.0
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-nfkfh.com-127.0.0.1/8 130.130.0.13/24 version number 2.0
The same behavior as before.
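How a DestinationRule subset narrows a host's endpoints by label can be sketched as a simple filter. The pod names and labels below come from the manifests in this article; the selection logic itself is an illustrative assumption, not Istio's source.

```python
# Endpoints behind the dpment Service, labeled as in the Deployments above.
endpoints = [
    {"pod": "dpment-linuxea-a-777847fd74-fsnsv",
     "labels": {"app": "linuxea_app", "version": "v1.0"}},
    {"pod": "dpment-linuxea-b-55694cb7f5-576qs",
     "labels": {"app": "linuxea_app", "version": "v1.1"}},
    {"pod": "dpment-linuxea-b-55694cb7f5-lhkrb",
     "labels": {"app": "linuxea_app", "version": "v1.1"}},
]

# Subset name -> label selector, as in the DestinationRule.
subsets = {"v10": {"version": "v1.0"}, "v11": {"version": "v1.1"}}

def subset_endpoints(name):
    """Return the pods whose labels satisfy the subset's selector."""
    selector = subsets[name]
    return [e["pod"] for e in endpoints
            if all(e["labels"].get(k) == v for k, v in selector.items())]

print(subset_endpoints("v10"))  # the single v1.0 pod
print(subset_endpoints("v11"))  # the two v1.1 pods
```

This is why the istioctl proxy-config output lists v10 and v11 as separate clusters for the same host: each subset is the same Service filtered down to a different label set.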
2022-12-27
1,142 reads
0 comments
0 likes