2023-09-14
linuxea: full-link cookie and header canary release with istio
Let's test full-link canary release based on cookies and headers in istio (this also works in higress). According to the istio release notes, Istio 1.19.0 is officially supported on Kubernetes 1.25 through 1.28. Since my local cluster runs 1.25.11, 1.19 is within scope.

Download and install istioctl:

```bash
wget https://github.com/istio/istio/releases/download/1.19.0/istioctl-1.19.0-linux-amd64.tar.gz
tar xf istioctl-1.19.0-linux-amd64.tar.gz
mv istioctl /usr/local/sbin/
```

```
[root@master-01 ~/istio]# istioctl version
no ready Istio pods in "istio-system"
1.19.0
```

Generate the installation manifest:

```bash
istioctl manifest generate --set profile=default > istio.yaml
```

We replace the two important images,

    image: docker.io/istio/proxyv2:1.19.0
    image: docker.io/istio/pilot:1.19.0

with

    uhub.service.ucloud.cn/marksugar-k8s/proxyv2:1.19.0
    uhub.service.ucloud.cn/marksugar-k8s/pilot:1.19.0

```bash
sed -i 's@docker.io/istio/pilot:1.19.0@uhub.service.ucloud.cn/marksugar-k8s/pilot:1.19.0@g' istio.yaml
sed -i 's@docker.io/istio/proxyv2:1.19.0@uhub.service.ucloud.cn/marksugar-k8s/proxyv2:1.19.0@g' istio.yaml
```

Install:

```bash
kubectl create ns istio-system
kubectl apply -f istio.yaml
```

Installation complete:

```
[root@master-01 ~/istio]# kubectl -n istio-system get all
NAME                                        READY   STATUS    RESTARTS   AGE
pod/istio-ingressgateway-65cff96b76-nzdk9   1/1     Running   0          3m30s
pod/istiod-ffc9db9cc-7g554                  1/1     Running   0          3m30s

NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                      AGE
service/istio-ingressgateway   LoadBalancer   10.68.208.80   <pending>     15021:31635/TCP,80:30598/TCP,443:31349/TCP   2m28s
service/istiod                 ClusterIP      10.68.9.174    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP        2m28s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istio-ingressgateway   1/1     1            1           21m
deployment.apps/istiod                 1/1     1            1           21m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/istio-ingressgateway-65cff96b76   1         1         1       3m30s
replicaset.apps/istiod-ffc9db9cc                  1         1         1       3m30s

NAME                                                       REFERENCE                         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/istio-ingressgateway   Deployment/istio-ingressgateway   2%/80%    1         5         1          21m
horizontalpodautoscaler.autoscaling/istiod                 Deployment/istiod                 0%/80%    1         5         1          21m
```

Next we configure a VIP to act as the loadbalancer address:

```bash
ip addr add 172.16.100.210/24 dev eth0
```

Then edit the service with `kubectl -n istio-system edit svc istio-ingressgateway`:

```yaml
  clusterIP: 10.68.113.92
  externalIPs:
  - 172.16.100.210
  clusterIPs:
  - 10.68.113.92
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
```

The status is now normal:

```
[root@master-01 ~/istio]# kubectl -n istio-system get svc
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.68.208.80   172.16.100.210   15021:31635/TCP,80:30598/TCP,443:31349/TCP   127m
istiod                 ClusterIP      10.68.9.174    <none>           15010/TCP,15012/TCP,443/TCP,15014/TCP        127m
```

Next, label the test1 namespace to put it in istio's scope; every pod in test1 will get a sidecar injected:

```
[root@master-01 ~/istio]# kubectl create ns test1
namespace/test1 created
[root@master-01 ~/istio]# kubectl label namespace test1 istio-injection=enabled
namespace/test1 labeled
```

Test code

The cookie or header must be assigned somewhere and then carried through the whole call chain, under an agreed name. In this test:

- cookie name: cannary
- Header name: test

The code is as follows:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"os"

	"github.com/gin-gonic/gin"
)

// global variables
var (
	PATH_URL = getEnv("PATH_URL", "go-test2")
	METHODS  = getEnv("METHODS", "GET")
	QNAME    = getEnv("QNAME", "name")
)

func getEnv(key, defaultVal string) string {
	if value, ok := os.LookupEnv(key); ok {
		return value
	}
	return defaultVal
}

func main() {
	r := gin.Default()
	r.POST("/post", postJson)
	r.GET("/get", getJson)
	r.Run(":9999")
}

func getJson(c *gin.Context) {
	// read the cookie
	cookie, err := c.Cookie("cannary")
	if err != nil {
		cookie = "NotSet"
		c.SetCookie("gin_cookie", "test", 3600, "getJson", "localhost", false, true)
	}
	fmt.Println("c.Cookie:", cookie)
	// read the query parameter
	query := c.Query(QNAME)
	fmt.Println(query)
	// read the test header
	headers := c.Request.Header
	customHeader := headers.Get("test")
	// pass the header and cookie downstream
	sed2sort(customHeader, cookie)
	// print the headers
	for k, v := range c.Request.Header {
		fmt.Println("c.Request.Header:", k, v)
		if k == "Test" {
			fmt.Println("c.Request.Header:", k, v)
		}
	}
	c.JSON(200, gin.H{"status": "ok"})
}

func postJson(c *gin.Context) {
	// read the cookie
	cookie, err := c.Cookie("cannary")
	if err != nil {
		cookie = "NotSet"
		c.SetCookie("gin_cookie", "test", 3600, "getJson", "localhost", false, true)
	}
	fmt.Println("c.Cookie:", cookie)
	// read the query parameter
	query := c.Query(QNAME)
	fmt.Println("c.Request.Query:", query)
	body := c.Request.Body
	x, err := ioutil.ReadAll(body)
	if err != nil {
		c.JSON(400, gin.H{"error": err.Error()})
		return
	}
	fmt.Println(query)
	// read the test header
	headers := c.Request.Header
	customHeader := headers.Get("test")
	sed2sort(customHeader, cookie)
	// print the headers
	for k, v := range c.Request.Header {
		fmt.Println("c.Request.Header:", k, v)
		if k == "Test" {
			fmt.Println("Test:", k, v)
		}
	}
	log.Println(string(x))
	c.JSON(200, gin.H{"status": "ok"})
}

// call the downstream service
func sed2sort(headerValue, icookie string) {
	fmt.Println("sed2sort:", METHODS, PATH_URL)
	client := &http.Client{}
	req, err := http.NewRequest(METHODS, PATH_URL, nil)
	if err != nil {
		fmt.Println(err)
		return
	}
	// add the header
	req.Header.Add("test", headerValue)
	// add the cookie
	cookies := []*http.Cookie{
		{Name: "cannary", Value: icookie},
	}
	for _, cookie := range cookies {
		req.AddCookie(cookie)
	}
	res, err := client.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer res.Body.Close()
	body, err := ioutil.ReadAll(res.Body)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(body))
}
```
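Note that this app only forwards the test header and the cannary cookie; the tracing headers are not propagated, so each hop starts a new trace. A minimal sketch (not part of the original code) of also forwarding the B3 trace context, with the header list below being an assumption:

```go
package main

import (
	"fmt"
	"net/http"
)

// headers assumed to be worth carrying across hops (B3 convention)
var propagate = []string{
	"X-Request-Id",
	"X-B3-Traceid", "X-B3-Spanid", "X-B3-Parentspanid", "X-B3-Sampled",
	"Test", // the canary header used in this post
}

// copyTraceHeaders copies whitelisted inbound headers onto the outbound request.
func copyTraceHeaders(in, out *http.Request) {
	for _, h := range propagate {
		if v := in.Header.Get(h); v != "" {
			out.Header.Set(h, v)
		}
	}
}

func main() {
	in, _ := http.NewRequest("GET", "http://server1/get", nil)
	in.Header.Set("X-B3-Traceid", "42855963a60a52bc")
	in.Header.Set("Test", "true")

	out, _ := http.NewRequest("GET", "http://server2/get", nil)
	copyTraceHeaders(in, out)
	fmt.Println(out.Header) // the outbound request now carries the trace context
}
```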
yaml

Correspondingly, we need several groups of services and VirtualServices to test the Header and the cookie separately. Traffic always flows from server1 to server2, and the mesh rules are applied on server2.

server1

server1 calls server2's get interface; the target is passed in through environment variables:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: server1
  namespace: test1
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9999
  selector:
    app: server1
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server1
  namespace: test1
spec:
  replicas:
  selector:
    matchLabels:
      app: server1
      version: v0.2
  template:
    metadata:
      labels:
        app: server1
        version: v0.2
    spec:
      containers:
      - name: server1
        # imagePullPolicy: Always
        image: uhub.service.ucloud.cn/marksugar-k8s/go-test:v3.1
        #image: uhub.service.ucloud.cn/marksugar-k8s/cookie:v1
        ports:
        - name: http
          containerPort: 9999
        env:
        - name: PATH_URL
          value: https://server2/get
        - name: METHODS
          value: GET
```

server2

server2 provides a standalone service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: server2
  namespace: test1
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9999
  selector:
    app: server2
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server2
  namespace: test1
spec:
  replicas:
  selector:
    matchLabels:
      app: server2
      version: v0.2
  template:
    metadata:
      labels:
        app: server2
        version: v0.2
    spec:
      containers:
      - name: server2
        # imagePullPolicy: Always
        image: uhub.service.ucloud.cn/marksugar-k8s/go-test:v3.1
        ports:
        - name: http
          containerPort: 9999
```

server2-1

server2-1 also provides a standalone service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: server2-1
  namespace: test1
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9999
  selector:
    app: server2-1
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server2-1
  namespace: test1
spec:
  replicas:
  selector:
    matchLabels:
      app: server2-1
      version: v0.2
  template:
    metadata:
      labels:
        app: server2-1
        version: v0.2
    spec:
      containers:
      - name: server2-1
        # imagePullPolicy: Always
        image: uhub.service.ucloud.cn/marksugar-k8s/go-test:v3.1
        ports:
        - name: http
          containerPort: 9999
```

server2-cooike

```yaml
apiVersion: v1
kind: Service
metadata:
  name: server2-cooike
  namespace: test1
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9999
  selector:
    app: server2-cooike
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server2-cooike
  namespace: test1
spec:
  replicas:
  selector:
    matchLabels:
      app: server2-cooike
      version: v0.2
  template:
    metadata:
      labels:
        app: server2-cooike
        version: v0.2
    spec:
      containers:
      - name: server2-cooike
        # imagePullPolicy: Always
        image: uhub.service.ucloud.cn/marksugar-k8s/go-test:v3.1
        ports:
        - name: http
          containerPort: 9999
```

Once created, the svc and pods are healthy:

```
[root@master-01 ~/higress/ops/server]# kubectl -n test1 get pod
NAME                            READY   STATUS    RESTARTS   AGE
server1-79fd8456ff-8fj9v        2/2     Running   0          25m
server2-1-74bfdd776c-5zs7z      2/2     Running   0          24m
server2-5bc69c4f75-wcbcq        2/2     Running   0          25m
server2-cooike-94ffb459-bdgk4   2/2     Running   0          21m
[root@master-01 ~/higress/ops/server]# kubectl -n test1 get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
server1          ClusterIP   10.68.142.192   <none>        80/TCP    3h13m
server2          ClusterIP   10.68.27.255    <none>        80/TCP    3h13m
server2-1        ClusterIP   10.68.196.212   <none>        80/TCP    3h
server2-cooike   ClusterIP   10.68.165.157   <none>        80/TCP    21m
```

Publishing server1

To publish in istio we need to configure a Gateway, a DestinationRule and a VirtualService, as follows:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cookie-gateway
  namespace: istio-system # must be the namespace where the ingress gateway pod lives
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "cookie.linuxea.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: cookie
  namespace: test1
spec:
  host: "cookie.linuxea.com"
  trafficPolicy:
    tls:
      mode: DISABLE
---
# apiVersion: networking.istio.io/v1beta3
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cookie
  namespace: test1
spec:
  hosts:
  - "cookie.linuxea.com"
  gateways:
  - istio-system/cookie-gateway
  - mesh
  http:
  - name: server1
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: server1
```

Testing

We send requests with postman: no matter what, any request to the cookie.linuxea.com domain goes to server1.
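For a quick check without postman, a minimal Go client can set the Host header directly against the gateway VIP from this setup (the address is an assumption; adjust for your environment):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// gateway externalIP from this setup; adjust to your environment
	req, err := http.NewRequest("GET", "http://172.16.100.210/get", nil)
	if err != nil {
		panic(err)
	}
	// hit the VirtualService host without touching /etc/hosts
	req.Host = "cookie.linuxea.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// X-Envoy: linuxea is added by the VirtualService route above
	fmt.Println(resp.Status, resp.Header.Get("X-Envoy"), string(body))
}
```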
server1:

```
[root@master-01 ~]# kubectl -n test1 logs -f server1-79fd8456ff-8fj9v
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST   /post  --> main.postJson (3 handlers)
[GIN-debug] GET    /get   --> main.getJson (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :9999
sed2sort: GET https://server2/get
{"status":"ok"}
c.Request.Header: X-B3-Parentspanid [c782871d39b17cab]
c.Request.Header: X-Forwarded-For [192.20.1.0]
c.Request.Header: X-B3-Traceid [42855963a60a52bcc782871d39b17cab]
c.Request.Header: Postman-Token [e65b66ff-296f-4033-ab83-e6ccf904c043]
c.Request.Header: X-Forwarded-Proto [http]
c.Request.Header: X-Envoy-Attempt-Count [1]
c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=b5b1bdfa157a62e0c7d88009119f39a681271410387e440353ee23e8db6bedf8;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account]
c.Request.Header: X-B3-Spanid [9d5a0f1e0f9ecb58]
c.Request.Header: User-Agent [PostmanRuntime/7.28.2]
c.Request.Header: Accept [*/*]
c.Request.Header: X-B3-Sampled [0]
c.Request.Header: Accept-Encoding [gzip, deflate, br]
c.Request.Header: X-Envoy-External-Address [192.20.1.0]
c.Request.Header: X-Request-Id [e2ec1977-62d9-4cba-90bc-7476e4037b47]
[GIN] 2023/09/14 - 09:30:18 | 200 | 1.970681ms | 192.20.1.0 | GET "/get"
```

server2:

```
[root@master-01 ~]# kubectl -n test1 logs -f server2-5bc69c4f75-wcbcq
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST   /post  --> main.postJson (3 handlers)
[GIN-debug] GET    /get   --> main.getJson (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :9999
sed2sort: GET go-test2
Get "go-test2": unsupported protocol scheme ""
c.Request.Header: X-B3-Traceid [bedabe3d26de18e00f3029cb0970a46f]
c.Request.Header: X-B3-Parentspanid [0f3029cb0970a46f]
c.Request.Header: User-Agent [Go-http-client/1.1]
c.Request.Header: Test []
c.Request.Header: Test []
c.Request.Header: X-Forwarded-Proto [http]
c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=102cfb7487ec810d309276831d9a41169dedce98bc5efcd81343e1d58d49bdd7;Subject="";URI=spiffe://cluster.local/ns/test1/sa/default]
c.Request.Header: X-Envoy-Attempt-Count [1]
c.Request.Header: X-B3-Spanid [904970b36297796d]
c.Request.Header: X-B3-Sampled [0]
c.Request.Header: Cookie [cannary=NotSet]
c.Request.Header: Accept-Encoding [gzip]
c.Request.Header: X-Request-Id [faa7c8ad-1175-43f3-9635-277747543a85]
[GIN] 2023/09/14 - 09:30:18 | 200 | 573.45µs | 127.0.0.6 | GET "/get"
```

Routing on the header

The header name is part of the agreed contract. The header propagated through the code is named test, so we add a match on test being "true": if a request sent from postman carries the header test=true, it is routed to server2-1:

```yaml
  http:
  - name: server2-1
    match:
    - headers:
        test:
          exact: "true"
    route:
    - destination:
        host: server2-1
```

The full VirtualService:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: server2
  namespace: test1
spec:
  hosts:
  - "server2"
  http:
  - name: server2-1
    match:
    - headers:
        test:
          exact: "true"
    route:
    - destination:
        host: server2-1
      headers:
        request:
          set:
            User-Agent: Mozilla
        response:
          add:
            x-canary: "marksugar"
  - name: server2
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: server2
```

Send a test request: this time it is routed to server2-1.
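As a quick alternative to postman, a small Go client that sets the header and checks the x-canary response header added by the server2-1 route (gateway address assumed, as before):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, _ := http.NewRequest("GET", "http://172.16.100.210/get", nil)
	req.Host = "cookie.linuxea.com"
	req.Header.Set("test", "true") // matches the exact "true" rule above
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// the server2-1 route adds x-canary: marksugar to the response
	fmt.Println(resp.Status, "x-canary:", resp.Header.Get("x-canary"))
}
```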
server1:

```
[root@master-01 ~]# kubectl -n test1 logs -f server1-79fd8456ff-8fj9v
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST   /post  --> main.postJson (3 handlers)
[GIN-debug] GET    /get   --> main.getJson (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :9999
sed2sort: GET https://server2/get
{"status":"ok"}
c.Request.Header: Postman-Token [0a2c1682-fb2a-428f-a9d1-8232bf8372db]
c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=b5b1bdfa157a62e0c7d88009119f39a681271410387e440353ee23e8db6bedf8;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account]
c.Request.Header: X-B3-Traceid [034f8708b77c398ea651d5f18dae87f3]
c.Request.Header: Accept-Encoding [gzip, deflate, br]
c.Request.Header: X-Forwarded-Proto [http]
c.Request.Header: X-Request-Id [39b021a2-b596-4899-8eec-bc58210bae4f]
c.Request.Header: X-Envoy-Attempt-Count [1]
c.Request.Header: X-Envoy-External-Address [192.20.1.0]
c.Request.Header: Test [true]
c.Request.Header: Test [true]
c.Request.Header: User-Agent [PostmanRuntime/7.28.2]
c.Request.Header: Accept [*/*]
c.Request.Header: X-Forwarded-For [192.20.1.0]
c.Request.Header: X-B3-Spanid [6178bc93cf359eb7]
c.Request.Header: X-B3-Parentspanid [a651d5f18dae87f3]
c.Request.Header: X-B3-Sampled [0]
[GIN] 2023/09/14 - 09:37:17 | 200 | 4.640183ms | 192.20.1.0 | GET "/get"
```

server2-1:

```
[root@master-01 ~]# kubectl -n test1 logs -f server2-1-74bfdd776c-5zs7z
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST   /post  --> main.postJson (3 handlers)
[GIN-debug] GET    /get   --> main.getJson (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :9999
sed2sort: GET go-test2
Get "go-test2": unsupported protocol scheme ""
c.Request.Header: X-Request-Id [8bd6ca98-2b11-4867-85b6-6c0ec629de49]
c.Request.Header: User-Agent [Mozilla]
c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=102cfb7487ec810d309276831d9a41169dedce98bc5efcd81343e1d58d49bdd7;Subject="";URI=spiffe://cluster.local/ns/test1/sa/default]
c.Request.Header: X-B3-Traceid [d3693e3f3f1bc32a0220533906f33a9e]
c.Request.Header: X-B3-Parentspanid [0220533906f33a9e]
c.Request.Header: Test [true]
c.Request.Header: Test [true]
c.Request.Header: Accept-Encoding [gzip]
c.Request.Header: X-Forwarded-Proto [http]
c.Request.Header: X-Envoy-Attempt-Count [1]
c.Request.Header: X-B3-Spanid [db50496d52cfae29]
c.Request.Header: X-B3-Sampled [0]
c.Request.Header: Cookie [cannary=NotSet]
[GIN] 2023/09/14 - 09:37:18 | 200 | 230.31µs | 127.0.0.6 | GET "/get"
```

Routing on the cookie

We need to regex-match the cookie value inside the header, e.g. cannary=marksugar; cookie pairs are separated by semicolons, so the regex becomes "^(.*;.)?(cannary=marksugar)(;.*)?$". The code must still propagate the agreed name. We then extend server2's VirtualService: if the cookie contains cannary=marksugar, route to server2-cooike. Add the following:

```yaml
  http:
  - name: server2-cookie
    match:
    - headers:
        cookie:
          regex: "^(.*;.)?(cannary=marksugar)(;.*)?$"
    route:
    - destination:
        host: server2-cooike
```

The full VirtualService:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: server2
  namespace: test1
spec:
  hosts:
  - "server2"
  http:
  - name: server2-cookie
    match:
    - headers:
        cookie:
          regex: "^(.*;.)?(cannary=marksugar)(;.*)?$"
    route:
    - destination:
        host: server2-cooike
  - name: server2-1
    match:
    - headers:
        test:
          exact: "true"
    route:
    - destination:
        host: server2-1
      headers:
        request:
          set:
            User-Agent: Mozilla
        response:
          add:
            x-canary: "marksugar"
  - name: server2
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: server2
```

Then add the cookie in postman: in the middle of the left panel add the domain cookie.linuxea.com, click Add Cookie and add cannary=marksugar. When a request carrying cannary=marksugar flows toward server2 and the cookie cannary=marksugar is detected, the request is routed to the server2-cooike pod. From now on, a request carrying cannary=marksugar ends up on server2-cooike.
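The match regex can be checked offline; Go's regexp package uses RE2 syntax like Envoy's matcher, so a minimal sketch against a few cookie strings:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// the same pattern used in the VirtualService cookie match above
	re := regexp.MustCompile(`^(.*;.)?(cannary=marksugar)(;.*)?$`)
	for _, c := range []string{
		"cannary=marksugar",                  // bare cookie: match
		"foo=bar; cannary=marksugar; baz=1",  // embedded among other pairs: match
		"cannary=other",                      // different value: no match
	} {
		fmt.Printf("%-40q %v\n", c, re.MatchString(c))
	}
}
```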
server1:

```
[root@master-01 ~]# kubectl -n test1 logs -f server1-79fd8456ff-8fj9v
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST   /post  --> main.postJson (3 handlers)
[GIN-debug] GET    /get   --> main.getJson (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :9999
c.Cookie: marksugar
sed2sort: GET https://server2/get
{"status":"ok"}
c.Request.Header: X-Forwarded-For [192.20.1.0]
c.Request.Header: X-Envoy-External-Address [192.20.1.0]
c.Request.Header: X-B3-Parentspanid [6b9b59fdbe1b967c]
c.Request.Header: X-B3-Sampled [0]
c.Request.Header: Postman-Token [bd6ef33a-279b-4b13-853c-c0785eb6161d]
c.Request.Header: Cookie [cannary=marksugar]
c.Request.Header: X-Envoy-Attempt-Count [1]
c.Request.Header: X-B3-Spanid [b53f23943a757ec7]
c.Request.Header: Accept-Encoding [gzip, deflate, br]
c.Request.Header: X-Request-Id [0cf8ff34-3f41-4197-9e22-0f8d02371942]
c.Request.Header: X-Forwarded-Proto [http]
c.Request.Header: User-Agent [PostmanRuntime/7.28.2]
c.Request.Header: Accept [*/*]
c.Request.Header: X-B3-Traceid [3211f68dd35e189b6b9b59fdbe1b967c]
c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=b5b1bdfa157a62e0c7d88009119f39a681271410387e440353ee23e8db6bedf8;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account]
[GIN] 2023/09/14 - 09:26:37 | 200 | 2.427771ms | 192.20.1.0 | GET "/get"
```

server2-cooike:

```
[root@master-01 ~]# kubectl -n test1 logs -f server2-cooike-94ffb459-bdgk4
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST   /post  --> main.postJson (3 handlers)
[GIN-debug] GET    /get   --> main.getJson (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :9999
c.Cookie: marksugar
sed2sort: GET go-test2
Get "go-test2": unsupported protocol scheme ""
c.Request.Header: User-Agent [Go-http-client/1.1]
c.Request.Header: Cookie [cannary=marksugar]
c.Request.Header: X-Request-Id [1bb67f53-5a26-4803-b934-6ed5b0af0c1d]
c.Request.Header: X-B3-Spanid [1da69ab234ba1e23]
c.Request.Header: X-B3-Parentspanid [1ca523b46de45366]
c.Request.Header: Test []
c.Request.Header: Test []
c.Request.Header: Accept-Encoding [gzip]
c.Request.Header: X-Forwarded-Proto [http]
c.Request.Header: X-Envoy-Attempt-Count [1]
c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=102cfb7487ec810d309276831d9a41169dedce98bc5efcd81343e1d58d49bdd7;Subject="";URI=spiffe://cluster.local/ns/test1/sa/default]
c.Request.Header: X-B3-Traceid [ba730ac5aa266e2b1ca523b46de45366]
c.Request.Header: X-B3-Sampled [0]
[GIN] 2023/09/14 - 09:26:39 | 200 | 71.76µs | 127.0.0.6 | GET "/get"
```

References:

- https://regex101.com/r/CPv2kU/3
- https://istio.io/latest/docs/reference/config/networking/destination-rule/
- https://istio.io/latest/zh/docs/tasks/traffic-management/request-routing/
2023-09-13
linuxea: header-based traffic splitting with higress
higress is a cloud-native gateway in the same category as apisix and kong, open-sourced by Alibaba; like many of Alibaba's open-source products it has a corresponding Alibaba Cloud commercial offering. Compared with apisix, which is friendlier to developers, higress provides fairly complete declarative configuration, so YAML is easier to manage and orchestrate. higress also ships plugins, for example:

- WAF protection: request blocking based on the OWASP ModSecurity Core Rule Set (CRS)
- blocking HTTP requests by URL, request header and other attributes, useful for keeping parts of a site from being exposed externally, and for crawler blocking
- custom responses: custom HTTP status code, response headers and body; usable for mock responses, or for returning a custom answer on a specific status code, e.g. when a gateway rate-limit policy triggers
- key-based rate limiting, and more

higress offers two ways to deploy:

1. on K8s: by default the official setup uses the cluster's own etcd
2. with docker-compose: detached from K8s, orchestrated by docker-compose

higress can also integrate with nacos, and together with kruise Rollout it can do simple header- and cookie-based canary releases.

Installing higress

```bash
helm repo add higress.io https://higress.io/helm-charts --force-update
helm install higress -n higress-system higress.io/higress \
  --create-namespace --render-subchart-notes --set higress-console.domain=linuxea.higress.local
```

We configure a VIP, 172.16.100.210/24, to emulate a LoadBalancer:

```
[root@linuxea-11 ~]# ip addr add 172.16.100.210/24 dev eth0
[root@linuxea-11 ~]# ip a | grep 172.16.100.210
    inet 172.16.100.210/24 scope global secondary eth0
[root@master-01 ~/higress/helm/core]# ping 172.16.100.210 -c 1
PING 172.16.100.210 (172.16.100.210) 56(84) bytes of data.
64 bytes from 172.16.100.210: icmp_seq=1 ttl=64 time=0.034 ms

--- 172.16.100.210 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms
```

Then edit the service with `kubectl -n higress-system edit svc higress-gateway`:

```yaml
...
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.68.167.66
  clusterIPs:
  - 10.68.167.66
  externalIPs:
  - 172.16.100.210
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
...
```

Write the IP into the local hosts file:

```bash
echo "172.16.100.210 linuxea.higress.local" >> /etc/hosts
```

Fetch the console credentials:

```bash
export ADMIN_USERNAME=$(kubectl get secret --namespace higress-system higress-console -o jsonpath="{.data.adminUsername}" | base64 -d)
export ADMIN_PASSWORD=$(kubectl get secret --namespace higress-system higress-console -o jsonpath="{.data.adminPassword}" | base64 -d)
echo -e "Username: ${ADMIN_USERNAME}\nPassword: ${ADMIN_PASSWORD}"
```

Login succeeds.

Header-based configuration

higress supports both web-console configuration and CRD manifests. First map the domains in the local hosts file:

    172.16.100.210 linuxea.higress.local test.abc.com test1.abc.com

Declarative

Create the ingress for test.abc.com:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: nginx-demo
spec:
  ingressClassName: higress
  rules:
  - host: test.abc.com
    http:
      paths:
      - backend:
          service:
            name: test
            port:
              number: 80
        path: /
        pathType: Exact
---
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: nginx-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
    version: v0.1
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: nginx-demo
spec:
  replicas:
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.1
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.1
    spec:
      containers:
      - name: nginx-a
        # imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0
        ports:
        - name: http
          containerPort: 80
```
Then create test-v1-canary with the same domain and the image bumped to v2.0:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    higress.io/canary: "true"
    higress.io/canary-by-header: "nginx-demo"
    higress.io/canary-by-header-value: "v1"
    higress.io/request-header-control-add: "app v1"
  name: test-v1-canary
  namespace: nginx-demo
spec:
  ingressClassName: higress
  rules:
  - host: test.abc.com
    http:
      paths:
      - backend:
          service:
            name: test-v1-canary
            port:
              number: 80
        path: /
        pathType: Exact
---
apiVersion: v1
kind: Service
metadata:
  name: test-v1-canary
  namespace: nginx-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-v1-canary
  namespace: nginx-demo
spec:
  replicas:
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.2
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.2
    spec:
      containers:
      - name: nginx-a
        # imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0
        ports:
        - name: http
          containerPort: 80
```

Check:

```
[root@master-01 ~/canary]# kubectl -n nginx-demo get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
test             ClusterIP   10.68.106.114   <none>        80/TCP    29m
test-v1-canary   ClusterIP   10.68.91.109    <none>        80/TCP    15s
[root@master-01 ~/canary]# kubectl -n nginx-demo get pod
NAME                              READY   STATUS    RESTARTS   AGE
test-7f58c59f89-ddrgr             1/1     Running   0          29m
test-v1-canary-79dd55496c-zwhth   1/1     Running   0          17s
```

Testing

If we add the request header -H "nginx-demo: v1" we are pointed at the v2 version; otherwise we get v1:

```
[root@master-01 ~]# curl -H "nginx-demo: v1" https://test.abc.com
linuxea-test-v1-canary-79dd55496c-zwhth.com-127.0.0.1/8 ::1/128 192.20.4.249/24 fe80::b0fb:bcff:fef8:710e/64
version number 2.0
[root@master-01 ~]# curl https://test.abc.com
linuxea-test-7f58c59f89-ddrgr.com-127.0.0.1/8 ::1/128 192.20.4.248/24 fe80::cfd:44ff:fe63:d93c/64
version number 1.0
```

Web console

The same feature works in the web console. First create the domain, then create a route: select the domain you added, the request header, and the target service. Then create a second route group without the header requirement, and test:

```
[root@master-01 ~]# curl -H "app:v1" https://test1.abc.com
linuxea-test-v1-canary-79dd55496c-zwhth.com-127.0.0.1/8 ::1/128 192.20.4.249/24 fe80::b0fb:bcff:fef8:710e/64
version number 2.0
[root@master-01 ~]# curl https://test1.abc.com
linuxea-test-7f58c59f89-ddrgr.com-127.0.0.1/8 ::1/128 192.20.4.248/24 fe80::cfd:44ff:fe63:d93c/64
version number 1.0
```

You can see that higress does A/B testing in the simplest possible way. If you additionally configure a request parameter, the request must carry it, e.g. name=test1, otherwise it will not reach the canary:

```
[root@master-01 ~/higress/ops]# curl -H "app:v1" https://test1.abc.com
linuxea-test-7f58c59f89-ddrgr.com-127.0.0.1/8 ::1/128 192.20.4.248/24 fe80::cfd:44ff:fe63:d93c/64
version number 1.0
[root@master-01 ~/higress/ops]# curl -H "app:v1" https://test1.abc.com/?name=test1
linuxea-test-v1-canary-79dd55496c-zwhth.com-127.0.0.1/8 ::1/128 192.20.4.249/24 fe80::b0fb:bcff:fef8:710e/64
version number 2.0
```
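To sanity-check the split programmatically rather than by eyeballing curl output, a minimal Go sketch (the VIP and the "version number" markers from this setup are assumptions) that tallies responses with and without the canary header:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// hit sends one request through the gateway VIP and classifies the backend version.
func hit(withHeader bool) string {
	req, _ := http.NewRequest("GET", "http://172.16.100.210/", nil)
	req.Host = "test.abc.com"
	if withHeader {
		req.Header.Set("nginx-demo", "v1") // the canary-by-header rule
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "error"
	}
	defer resp.Body.Close()
	b, _ := io.ReadAll(resp.Body)
	if strings.Contains(string(b), "version number 2.0") {
		return "v2"
	}
	return "v1"
}

func main() {
	counts := map[string]int{}
	for i := 0; i < 10; i++ {
		counts["header:"+hit(true)]++
		counts["plain:"+hit(false)]++
	}
	fmt.Println(counts) // expect header:v2=10 and plain:v1=10
}
```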
2023-02-06
linuxea: collecting pod crash and OOM logs with robusta events
robusta can do far more than what this post covers: it monitors Kubernetes, provides observability, integrates with prometheus for secondary alert processing and automatic remediation, and offers an event timeline. Previously I used Alibaba's kube-eventer, which is only a forwarder, so it only solves event-triggered notification. If robusta stopped there too, there would be little reason to adopt it. It offers another very useful capability: event alerting. When robusta detects a preset condition, it sends the configured pod status together with the most recent chunk of logs to slack. That is the main reason for this article.

Prerequisites

The python version must be 3.7 or later, so we upgrade python:

```bash
wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tar.xz
yum install gcc zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel libffi-devel -y
yum install libffi-devel -y
yum install zlib* -y
tar xf Python-3.9.16.tar.xz
cd Python-3.9.16
./configure --with-ssl --prefix=/usr/local/python3
make
make install
rm -rf /usr/bin/python3 /usr/bin/pip3
ln -s /usr/local/python3/bin/python3 /usr/bin/python3
ln -s /usr/local/python3/bin/pip3 /usr/bin/pip3
```

Configure a domestic pip mirror:

```bash
mkdir -p ~/.pip/
cat > ~/.pip/pip.conf << EOF
[global]
trusted-host = mirrors.aliyun.com
index-url = https://mirrors.aliyun.com/pypi/simple
EOF
```

robusta.dev

Following the official docs, the installation is:

```bash
pip3 install -U robusta-cli --no-cache
robusta gen-config
```

Because of network issues, I run the configuration through docker instead:

```bash
curl -fsSL -o robusta https://docs.robusta.dev/master/_static/robusta
chmod +x robusta
./robusta gen-config
```

Before starting, pull the image I mirrored and tag it back to the upstream name:

```bash
docker pull registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/robusta-cli:latest
docker tag registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/robusta-cli:latest us-central1-docker.pkg.dev/genuine-flight-317411/devel/robusta-cli:latest
```

```
[root@master1 opt]# ./robusta gen-config
Robusta reports its findings to external destinations (we call them "sinks").
We'll define some of them now.

Configure Slack integration? This is HIGHLY recommended. [Y/n]: y    # configuring slack is strongly recommended
If your browser does not automatically launch, open the below url:   # open it in a browser
https://api.robusta.dev/integrations/slack?id=64a3ee7c-5691-466f-80da-85e8ece80359
======================================================================
Error getting slack token
('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
======================================================================
======================================================================
Error getting slack token
HTTPSConnectionPool(host='api.robusta.dev', port=443): Max retries exceeded with url: /integrations/slack/get-token?id=64a3ee7c-5691-466f-80da-85e8ece80359 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f50b1f18cd0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
======================================================================
You've just connected Robusta to the Slack of: crow as a cock
Which slack channel should I send notifications to?
```

Open the URL from the prompt in a browser and tick Allow; slack now shows the robusta app. Continue, and after choosing a channel at the prompt (devops here), that channel receives a message. A full run looks like this:

```
[root@master1 opt]# ./robusta gen-config
Robusta reports its findings to external destinations (we call them "sinks").
We'll define some of them now.

Configure Slack integration? This is HIGHLY recommended. [Y/n]: y
If your browser does not automatically launch, open the below url:
https://api.robusta.dev/integrations/slack?id=d1fcbb13-5174-4027-a176-a3dcab10c27a
======================================================================
Error getting slack token
HTTPSConnectionPool(host='api.robusta.dev', port=443): Max retries exceeded with url: /integrations/slack/get-token?id=d1fcbb13-5174-4027-a176-a3dcab10c27a (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f0ec508eee0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
======================================================================
You've just connected Robusta to the Slack of: crow as a cock
Which slack channel should I send notifications to? # devops
Configure MsTeams integration? [y/N]: n
Configure Robusta UI sink? This is HIGHLY recommended. [Y/n]: y
Enter your Gmail/Google address. This will be used to login: user@gmail.com
Choose your account name (e.g your organization name): marksugar
Successfully registered.
Robusta can use Prometheus as an alert source.
If you haven't installed it yet, Robusta can install a pre-configured Prometheus.
Would you like to do so? [y/N]: y
Please read and approve our End User License Agreement: https://api.robusta.dev/eula.html
Do you accept our End User License Agreement? [y/N]: y
Last question! Would you like to help us improve Robusta by sending exception reports? [y/N]: n
Saved configuration to ./generated_values.yaml - save this file for future use!
Finish installing with Helm (see the Robusta docs). Then login to Robusta UI at https://platform.robusta.dev
By the way, we'll send you some messages later to get feedback. (We don't store your
API key, so we scheduled future messages using Slack's API)
```

When this finishes, a generated_values.yaml has been created:

```yaml
globalConfig:
  signing_key: 92a8195-a3fa879b3f88
  account_id: 79efaf9c433294
sinksConfig:
- slack_sink:
    name: main_slack_sink
    slack_channel: devops
    api_key: xoxb-4715825756487-4749501ZZylPy1f
- robusta_sink:
    name: robusta_ui_sink
    token: eyJhY2NvjIn0=
enablePrometheusStack: true
enablePlatformPlaybooks: true
runner:
  sendAdditionalTelemetry: false
rsa:
  private: LS0tLS1CRUdJTiBRCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
  public: LS0tLS1CRUdJTiBQTElDIEtFWS0tLS0tCg==
```

helm

Next we install with the yaml we just generated, with some adjustments. There are many kinds of triggers; see example-triggers, java-troubleshooting, event-enrichment, miscellaneous and kubernetes-triggers in the docs. We can filter on a specific group of pods or a namespace to watch for specific information. We pick a few for testing and add them to generated_values.yaml, as follows:

```yaml
globalConfig:
  signing_key: 92a8195-a3fa879b3f88
  account_id: 79efaf9c433294
sinksConfig:
- slack_sink:
    name: main_slack_sink
    slack_channel: devops
    api_key: xoxb-4715825756487-4749501ZZylPy1f
- robusta_sink:
    name: robusta_ui_sink
    token: eyJhY2NvjIn0=
enablePrometheusStack: false
enablePlatformPlaybooks: true
runner:
  sendAdditionalTelemetry: false
rsa:
  private: LS0tLS1CRUdJTiBRCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
  public: LS0tLS1CRUdJTiBQTElDIEtFWS0tLS0tCg==
customPlaybooks:
- triggers:
  - on_deployment_update: {}
  actions:
  - resource_babysitter:
      omitted_fields: []
      fields_to_monitor: ["spec.replicas"]
- triggers:
  - on_pod_crash_loop:
      restart_reason: "CrashLoopBackOff"
      restart_count: 1
      rate_limit: 3600
  actions:
  - report_crash_loop: {}
- triggers:
  - on_pod_oom_killed:
      rate_limit: 900
      exclude:
      - name: "oomkilled-pod"
        namespace: "default"
  actions:
  - pod_graph_enricher:
      resource_type: Memory
      display_limits: true
- triggers:
  - on_container_oom_killed:
      rate_limit: 900
      exclude:
      - name: "oomkilled-container"
        namespace: "default"
  actions:
  - oomkilled_container_graph_enricher:
      resource_type: Memory
- triggers:
  - on_job_failure:
      namespace_prefix: robusta
  actions:
  - create_finding:
      title: "Job $name on namespace $namespace failed"
      aggregation_key: "Job Failure"
  - job_events_enricher: { }
runner:
  image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/robusta-runner:0.10.10
  imagePullPolicy: IfNotPresent
kubewatch:
  image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/kubewatch:v2.0
  imagePullPolicy: IfNotPresent
```

Now install with helm:

```bash
helm repo add robusta https://robusta-charts.storage.googleapis.com && helm repo update
helm upgrade --install robusta --namespace robusta --create-namespace robusta/robusta -f ./generated_values.yaml \
  --set clusterName=test

# the same command can also be debugged with a dry run:
helm upgrade --install robusta --namespace robusta robusta/robusta -f ./generated_values.yaml --set clusterName=test --dry-run
```

Output:

```
[root@master1 opt]# helm upgrade --install robusta --namespace robusta --create-namespace robusta/robusta -f ./generated_values.yaml \
> --set clusterName=test
Release "robusta" does not exist. Installing it now.
NAME: robusta
LAST DEPLOYED: Thu Feb  2 15:58:32 2023
NAMESPACE: robusta
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Robusta 0.10.10

As an open source project, we collect general usage statistics.
This data is extremely limited and contains only general metadata to help us understand usage patterns.
If you are willing to share additional data, please do so! It really help us improve Robusta.

You can set sendAdditionalTelemetry: true as a Helm value to send exception reports and additional data.
This is disabled by default.

To opt-out of telemetry entirely, set a ENABLE_TELEMETRY=false environment variable on the robusta-runner deployment.

Visit the web UI at: https://platform.robusta.dev/
```

Wait for the pods to become ready:

```
[root@master1 opt]# kubectl -n robusta get pod -w
NAME                                 READY   STATUS              RESTARTS   AGE
robusta-forwarder-78964b4455-vnt77   1/1     Running             0          2m55s
robusta-runner-758cf9c986-87l4x      0/1     ContainerCreating   0          2m55s
robusta-runner-758cf9c986-87l4x      1/1     Running             0          7m6s
```

Now, if a pod on the cluster is in an abnormal state and crashes, its logs are sent to slack before the pod is deleted; slack receives the log message. Click to expand inline to view the details.
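To exercise the on_pod_oom_killed / on_container_oom_killed playbooks end to end, a throwaway pod that OOMs on purpose helps. A minimal Go sketch, assuming you run it in a pod with a small memory limit such as resources.limits.memory: 64Mi:

```go
// Allocate memory until the container limit is exceeded, so the kernel
// OOM-kills the process and robusta's OOM triggers fire.
package main

import "time"

func main() {
	var hog [][]byte
	for {
		chunk := make([]byte, 10<<20) // grab 10 MiB per tick
		for i := range chunk {
			chunk[i] = 1 // touch every page so it is really resident
		}
		hog = append(hog, chunk)
		time.Sleep(200 * time.Millisecond) // _ = hog keeps the slices reachable
	}
}
```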
2023-02-04
linuxea: istio bookinfo configuration demo (11)
bookinfo

The istio package ships a bookinfo example, an application that mimics a category of an online bookstore and displays information about a book: a description, book details (ISBN, page count and so on), and a few reviews.

The Bookinfo application is split into four separate microservices:

- productpage: calls the details and reviews microservices to render the page
- details: contains the book information
- reviews: contains the book reviews; it also calls the ratings microservice
- ratings: contains the rating information derived from book reviews

The reviews microservice has 3 versions:

- v1 does not call the ratings service
- v2 calls ratings and renders the score as 1 to 5 black stars
- v3 calls ratings and renders the score as 1 to 5 red stars

The topology follows from the above. The Bookinfo microservices are written in different languages. They have no dependency on Istio, but they form a representative service-mesh example: multiple services in multiple languages (with a uniform API), with the reviews service in multiple versions.

Installation

After unpacking istio, the bookinfo material lives under samples/bookinfo; see getting-started on the official site:

```
[root@linuxea_48 /usr/local/istio-1.14.1]# ls samples/bookinfo/ -ll
total 20
-rwxr-xr-x 1 root root 3869 Jun  8 10:11 build_push_update_images.sh
drwxr-xr-x 2 root root 4096 Jun  8 10:11 networking
drwxr-xr-x 3 root root   18 Jun  8 10:11 platform
drwxr-xr-x 2 root root   46 Jun  8 10:11 policy
-rw-r--r-- 1 root root 3539 Jun  8 10:11 README.md
drwxr-xr-x 8 root root  123 Jun  8 10:11 src
-rw-r--r-- 1 root root 6329 Jun  8 10:11 swagger.yaml
```

Then apply the platform/kube/bookinfo.yaml file:

```
[root@linuxea_48 /usr/local/istio-1.14.1]# kubectl -n java-demo apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
```

Troubleshooting

One pod crashed with the following JVM dump:

```
Unhandled exception
Type=Bus error vmState=0x00000000
J9Generic_Signal_Number=00000028 Signal_Number=00000007 Error_Value=00000000 Signal_Code=00000002
Handler1=00007F368FD0AD30 Handler2=00007F368F5F72F0 InaccessibleAddress=00002AAAAAC00000
RDI=00007F369017F7D0 RSI=0000000000000008 RAX=00007F369018CBB0 RBX=00007F369017F7D0
RCX=00007F369003A9D0 RDX=0000000000000000 R8=0000000000000000 R9=0000000000000000
R10=00007F36900008D0 R11=0000000000000000 R12=00007F369017F7D0 R13=00007F3679C00000
R14=0000000000000001 R15=0000000000000080
RIP=00007F368DA7395B GS=0000 FS=0000 RSP=00007F3694D1E4A0
EFlags=0000000000010202 CS=0033 RBP=00002AAAAAC00000 ERR=0000000000000006
TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=00002AAAAAC00000
xmm0 0000003000000020 (f: 32.000000, d: 1.018558e-312)
xmm1 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm2 ffffffff00000002 (f: 2.000000, d: -nan)
xmm3 40a9000000000000 (f: 0.000000, d: 3.200000e+03)
xmm4 dddddddd000a313d (f: 667965.000000, d: -1.456815e+144)
xmm5 0000000000000994 (f: 2452.000000, d: 1.211449e-320)
xmm6 00007f369451ac40 (f: 2488380416.000000, d: 6.910614e-310)
xmm7 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm8 dd006b6f6f68396a (f: 1869101440.000000, d: -9.776703e+139)
xmm9 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm10 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm11 0000000049d70a38 (f: 1238829568.000000, d: 6.120632e-315)
xmm12 000000004689a022 (f: 1183424512.000000, d: 5.846894e-315)
xmm13 0000000047ac082f (f: 1202456576.000000, d: 5.940925e-315)
xmm14 0000000048650dc0 (f: 1214582272.000000, d: 6.000833e-315)
xmm15 0000000046b73e38 (f: 1186414080.000000, d: 5.861665e-315)
Module=/opt/ibm/java/jre/lib/amd64/compressedrefs/libj9jit29.so
Module_base_address=00007F368D812000
Target=2_90_20200901_454898 (Linux 3.10.0-693.el7.x86_64)
CPU=amd64 (32 logical CPUs) (0x1f703dd000 RAM)
----------- Stack Backtrace -----------
(0x00007F368DA7395B [libj9jit29.so+0x26195b])
(0x00007F368DA7429B [libj9jit29.so+0x26229b])
(0x00007F368D967C57 [libj9jit29.so+0x155c57])
J9VMDllMain+0xb44 (0x00007F368D955C34 [libj9jit29.so+0x143c34])
(0x00007F368FD1D041 [libj9vm29.so+0xa7041])
(0x00007F368FDB4070 [libj9vm29.so+0x13e070])
(0x00007F368FC87E94 [libj9vm29.so+0x11e94])
(0x00007F368FD2581F [libj9vm29.so+0xaf81f])
(0x00007F368F5F8053 [libj9prt29.so+0x1d053])
(0x00007F368FD1F9ED [libj9vm29.so+0xa99ed])
J9_CreateJavaVM+0x75 (0x00007F368FD15B75 [libj9vm29.so+0x9fb75])
(0x00007F36942F4305 [libjvm.so+0x12305])
JNI_CreateJavaVM+0xa82 (0x00007F36950C9B02 [libjvm.so+0xab02])
(0x00007F3695ADDA94 [libjli.so+0xfa94])
(0x00007F3695CF76DB [libpthread.so.0+0x76db])
clone+0x3f (0x00007F36955FAA3F [libc.so.6+0x121a3f])
---------------------------------------
JVMDUMP039I Processing dump event "gpf", detail "" at 2022/07/20 08:59:38 - please wait.
JVMDUMP032I JVM requested System dump using '/opt/ibm/wlp/output/defaultServer/core.20220720.085938.1.0001.dmp' in response to an event
JVMDUMP010I System dump written to /opt/ibm/wlp/output/defaultServer/core.20220720.085938.1.0001.dmp
JVMDUMP032I JVM requested Java dump using '/opt/ibm/wlp/output/defaultServer/javacore.20220720.085938.1.0002.txt' in response to an event
JVMDUMP012E Error in Java dump: /opt/ibm/wlp/output/defaultServer/javacore.20220720.085938.1.0002.txt
JVMDUMP032I JVM requested Snap dump using '/opt/ibm/wlp/output/defaultServer/Snap.20220720.085938.1.0003.trc' in response to an event
JVMDUMP010I Snap dump written to /opt/ibm/wlp/output/defaultServer/Snap.20220720.085938.1.0003.trc
JVMDUMP032I JVM requested JIT dump using '/opt/ibm/wlp/output/defaultServer/jitdump.20220720.085938.1.0004.dmp' in response to an event
JVMDUMP013I Processed dump event "gpf", detail "".
```

The fix is as follows (see 34510, 13389):

```bash
echo 0 > /proc/sys/vm/nr_hugepages
```

With the configuration done, the pods are ready:

```
(base) [root@k8s-01 bookinfo]# kubectl -n java-demo get pod
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-6d89cf9847-46c4z      2/2     Running   0          27m
productpage-v1-f44fc594c-fmrf4   2/2     Running   0          27m
ratings-v1-6c77b94555-twmls      2/2     Running   0          27m
reviews-v1-765697d479-tbprw      2/2     Running   0          6m30s
reviews-v2-86855c588b-sm6w2      2/2     Running   0          6m2s
reviews-v3-6ff967c97f-g6x8b      2/2     Running   0          5m55s
sleep-557747455f-46jf5           2/2     Running   0          5d
```

1. gateway

hosts is configured as "*", the default, matching everything, which means the app can be reached by IP address. The entry point in the VirtualService is:

```yaml
  http:
  - match:
    - uri:
        exact: /productpage
```

As soon as the pods start, north-south traffic is pulled into the mesh and the page is reachable at ip/productpage. The yaml is as follows:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
```

Apply:

```
(base) [root@k8s-01 bookinfo]# kubectl -n java-demo apply -f networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
```

The page can then be opened in a browser, and the reviews version keeps changing across refreshes: reviews-v1, reviews-v3, reviews-v2.

kiali now shows a simple topology: requests enter through the ingress-gateway, reach productpage v1, are dispatched to details v1, and the reviews traffic is split in equal proportions across v1, v2 and v3, where v2 and v3 additionally call the ratings service.
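As a quick reachability check outside the browser, a minimal Go probe against the ingress gateway (its address is an assumption for your cluster; since hosts is "*", no Host header is needed):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// ingress gateway address assumed; adjust to your environment
	resp, err := http.Get("http://172.16.100.210/productpage")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // expect 200 OK once all pods are ready
}
```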
Mesh tests

With the installation complete, we run some tests: request routing, fault injection, and so on.

1. Request routing

To start, we need the subset rules configured in destination rules, as follows:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v2-mysql
    labels:
      version: v2-mysql
  - name: v2-mysql-vm
    labels:
      version: v2-mysql-vm
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: details
spec:
  host: details
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
```

Apply:

```
> kubectl -n java-demo apply -f samples/bookinfo/networking/destination-rule-all.yaml
destinationrule.networking.istio.io/productpage created
destinationrule.networking.istio.io/reviews created
destinationrule.networking.istio.io/ratings created
destinationrule.networking.istio.io/details created
```

Then, for non-logged-in users, send all traffic to the v1 versions. The expanded yaml:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - productpage
  http:
  - route:
    - destination:
        host: productpage
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: details
spec:
  hosts:
  - details
  http:
  - route:
    - destination:
        host: details
        subset: v1
---
```

Create it with:

```
> kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
virtualservice.networking.istio.io/productpage created
virtualservice.networking.istio.io/reviews created
virtualservice.networking.istio.io/ratings created
virtualservice.networking.istio.io/details created
```

Now all traffic lands on v1. This works because three reviews subsets were defined:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```

and reviews was then routed to v1:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
```

Without this reviews VirtualService, responses would keep switching among the three versions.

2. Routing on user identity

Now we want a particular logged-in user to be sent to a particular version: if end-user equals jason, forward to v2:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
```

On the Bookinfo /productpage, log in as user jason. After login, the kiali graph changes accordingly.

3. Fault injection

To understand fault injection, it helps to know about chaos engineering: in cloud-native systems we want some resilience against partial failures, for example allowing client retries and timeouts to absorb local problems. istio natively supports two kinds of fault injection to simulate chaos-engineering effects: injected timeouts (delays) and injected failures (aborts).

Building on the previous two steps:

```bash
kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
```

With this configuration, the request flow is:

- productpage → reviews:v2 → ratings (user jason only)
- productpage → reviews:v1 (everyone else)

Injecting a delay fault

To test the resilience of the Bookinfo microservices, inject a 7s delay between the reviews:v2 and ratings microservices for user jason. This test will uncover a bug that was intentionally introduced into the Bookinfo application.

For jason, inject a 7-second delay on 100% of the traffic and route to ratings v1; everyone else also routes to v1, the only difference being that jason's access is delayed:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      delay:
        percentage:
          value: 100.0
        fixedDelay: 7s
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1
```

Apply:

```
> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml
virtualservice.networking.istio.io/ratings configured
```

Opening the page in a browser, we see: Sorry, product reviews are currently unavailable for this book.

Injecting an abort fault

Another way to test microservice resilience is to introduce an HTTP abort fault. In this task, we introduce an HTTP abort on the ratings microservice for the test user jason. Here we expect the page to load immediately and display the "Ratings service is currently unavailable" message.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      abort:
        percentage:
          value: 100.0
        httpStatus: 500
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1
```

Apply:

```
> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml
virtualservice.networking.istio.io/ratings configured
```

kiali reflects the change.

4. Traffic shifting

This shows how to shift traffic from one microservice version to another. A common use case is to migrate traffic gradually from an old version to a new one. In Istio, this is achieved with a series of routing rules that redirect a percentage of traffic from one destination to another. In this task, we send 50% of the traffic to reviews:v1 and 50% to reviews:v3, and would then complete the migration by sending 100% to reviews:v3.

First, route all traffic to the v1 version of every microservice:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - productpage
  http:
  - route:
    - destination:
        host: productpage
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: details
spec:
  hosts:
  - details
  http:
  - route:
    - destination:
        host: details
        subset: v1
---
```

Apply:

```
> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
virtualservice.networking.istio.io/productpage unchanged
virtualservice.networking.istio.io/reviews configured
virtualservice.networking.istio.io/ratings configured
virtualservice.networking.istio.io/details unchanged
```

Now, no matter how often you refresh, the review section never shows star ratings: Istio routes all reviews traffic to reviews:v1, and that version does not call the ratings service.

Shift 50% of the traffic to reviews:v3 with the following manifest:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50
```

Apply:

```
> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
virtualservice.networking.istio.io/reviews configured
```

kiali shows the split.

5. Request timeouts

Use the timeout field of a routing rule to set an HTTP request timeout. By default, request timeouts are disabled; in this task, we override the reviews service timeout, and to observe the effect we also artificially introduce a 2-second delay into calls to the ratings service.

Before starting, route all requests to v1:

```bash
kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
```

Then route reviews to v2:

```bash
kubectl -n java-demo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
EOF
```

Inject a 2-second delay on ratings v1 (reviews calls ratings):

```bash
kubectl -n java-demo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
EOF
```

Now whenever the application calls ratings, it hits the 2-second delay. Once an upstream responds slowly, the user experience inevitably suffers, so we adjust the rule: if the upstream takes longer than 0.5s, stop waiting for it:

```bash
kubectl -n java-demo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s
EOF
```

Refreshing now shows "Sorry, product reviews are currently unavailable for this book." because the reviews response exceeds 0.5 seconds and the request is abandoned. If we decide 3 seconds is acceptable, change the timeout to 3 and the page can reach ratings again.
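To see the injected delay and the timeout from the client side, a small Go probe that times /productpage can help. This is a sketch: the gateway address is an assumption, and the 7s ratings delay only applies when logged in as jason, so plain requests mainly show the 2s delay / 0.5s timeout behavior:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	for i := 0; i < 3; i++ {
		start := time.Now()
		resp, err := http.Get("http://172.16.100.210/productpage")
		if err != nil {
			fmt.Println(err)
			continue
		}
		resp.Body.Close()
		// ~2s extra with the ratings delay fault; back to ~0.5s once the
		// reviews timeout kicks in (the page then shows the error banner)
		fmt.Printf("request %d: %s in %v\n", i, resp.Status, time.Since(start))
	}
}
```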
2023-02-03
linuxea: istio fault injection / retries and fault tolerance / traffic mirroring (10)
6.故障注入istiO支持两种故障注入,分别是延迟故障和中断故障延迟故障:超时,重新发送请求abort中断故障:重试故障注入仍然在http层进行定义中断故障 fault: abort: # 中断故障 percentage: value: 20 # 在多大的比例流量上注入 httpStatus: 567 # 故障响应码延迟故障 fault: delay: percentage: value: 20 # 在百分之20的流量上注入 fixedDelay: 6s # 注入三秒的延迟yaml如下apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: java-demo spec: hosts: - "dpment.linuxea.com" # 对应于gateways/proxy-gateway - "dpment" gateways: - istio-system/dpment-gateway # 相关定义仅应用于Ingress Gateway上 - mesh http: - name: version match: - uri: prefix: /version/ rewrite: uri: / route: - destination: host: dpment subset: v10 fault: abort: percentage: value: 20 httpStatus: 567 - name: default route: - destination: host: dpment subset: v11 fault: delay: percentage: value: 20 fixedDelay: 6s此时,当我们用curl访问 dpment.linuxea.com的时候,有20的流量会被中断6秒(base) [root@master1 7]# while true;do date;curl dpment.linuxea.com; date;sleep 0.$RANDOM;done 2022年 08月 07日 星期日 18:10:40 CST linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 2022年 08月 07日 星期日 18:10:40 CST 2022年 08月 07日 星期日 18:10:41 CST linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 2022年 08月 07日 星期日 18:10:41 CST 2022年 08月 07日 星期日 18:10:41 CST linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 2022年 08月 07日 星期日 18:10:41 CST 2022年 08月 07日 星期日 18:10:41 CST linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 2022年 08月 07日 星期日 18:10:47 CST 2022年 08月 07日 星期日 18:10:47 CST linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 2022年 08月 07日 星期日 18:10:53 CST 2022年 08月 07日 星期日 18:10:54 CST linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 2022年 08月 07日 星期日 18:10:54 CST 2022年 08月 07日 星期日 18:10:55 CST如果我们访问dpment.linuxea.com/version/的时候,有20%的流量返回的状态码是567(base) [root@master1 7]# while true;do echo -e "===============";curl dpment.linuxea.com/version/ -I ; sleep 0.$RANDOM;done =============== HTTP/1.1 567 Unknown content-length: 18 content-type: text/plain date: Sun, 07 Aug 2022 10:16:40 GMT server: istio-envoy =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:31 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 2 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:32 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 567 Unknown content-length: 18 content-type: text/plain date: Sun, 07 Aug 2022 10:16:42 GMT server: istio-envoy =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:33 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 3 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:33 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:33 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: 
bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:33 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 3 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:34 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:34 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:35 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:35 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 2 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:36 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 2 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:36 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 2 =============== HTTP/1.1 567 Unknown content-length: 18 content-type: text/plain date: Sun, 07 Aug 2022 10:16:46 GMT server: istio-envoy =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:37 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1如果使用curl命令直接访问会看到fault filter abort(base) [root@master1 7]# while true;do echo -e "\n";curl dpment.linuxea.com/version/ ; sleep 0.$RANDOM;done linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 fault filter abort linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 fault 
Back to Kiali, where the effect of the injected faults is also visible.

6.1 Retries and fault tolerance

Retry conditions (values for retryOn, matching Envoy's x-envoy-retry-on header):

- 5xx: the upstream returned a 5xx response code, or did not respond at all (disconnect/reset/read timeout)
- gateway-error: like the 5xx policy, but retries only on 502, 503 and 504
- connect-failure: retry when the TCP connection to the upstream could not be established
- retriable-4xx: retry when the upstream returns a retriable 4xx response code
- refused-stream: retry when the upstream resets the stream with a REFUSED_STREAM error code
- retriable-status-codes: retry when the upstream's response code matches one defined in the retry policy or in the x-envoy-retriable-status-codes header
- reset: retry when the upstream does not respond at all (disconnect/reset/read timeout)
- retriable-headers: retry when the upstream response matches any header listed in the retry policy or in the x-envoy-retriable-header-names header
- envoy-ratelimited: retry when the x-envoy-ratelimited header is present

Retry conditions, part 2 (values for the x-envoy-retry-grpc-on header):

- cancelled: retry when the status code in the gRPC response header is "cancelled"
- deadline-exceeded: retry when the status code in the gRPC response header is "deadline-exceeded"
- internal: retry when the status code in the gRPC response header is "internal"
- resource-exhausted: retry when the status code in the gRPC response header is "resource-exhausted"
- unavailable: retry when the status code in the gRPC response header is "unavailable"

By default Envoy performs no retries of any kind unless they are explicitly configured.

Suppose we have several services, A -> B -> C, and B, which A proxies to, starts responding slowly. We configure fault tolerance on A, as follows:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"              # matches gateways/proxy-gateway
  - "dpment"
  gateways:
  - istio-system/dpment-gateway       # these definitions apply only on the Ingress Gateway
  http:
  - name: default
    route:
    - destination:
        host: A
    timeout: 1s                       # return a timeout if the upstream takes longer than 1s
    retries:                          # retry policy
      attempts: 5                     # number of retries
      perTryTimeout: 1s               # timeout per attempt
      retryOn: 5xx,connect-failure,refused-stream   # the conditions that trigger a retry

If the upstream does not respond within one second, the request is retried: on 5xx responses, failed TCP connections, or gRPC streams rejected with REFUSED_STREAM, up to five attempts are made, each with a one-second budget. If any of those attempts succeeds within its second, the request succeeds.
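One detail worth keeping in mind: the route-level timeout is the overall budget for the request, covering all retry attempts, so perTryTimeout multiplied by the number of attempts should fit inside it or later attempts get cut off. Here is a minimal sketch against the dpment service used throughout this series; the values are illustrative, not a recommendation:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
    timeout: 6s              # overall budget: the first attempt plus all retries
    retries:
      attempts: 3            # up to 3 retries after the initial attempt
      perTryTimeout: 2s      # each individual attempt gets at most 2s
      retryOn: 5xx,connect-failure,refused-stream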
7. Traffic mirroring

Traffic mirroring, also called traffic shadowing, is a working pattern in which production traffic is copied to another environment for testing and development. To mirror traffic, we simply point mirror at another subset:

  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
    mirror:
      host: dpment
      subset: v12

So we modify the earlier configuration:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"              # matches gateways/proxy-gateway
  - "dpment"
  gateways:
  - istio-system/dpment-gateway       # these definitions apply only on the Ingress Gateway
  - mesh
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
    mirror:
      host: dpment
      subset: v12

We start sending curl requests:

while ("true"){ curl https://dpment.linuxea.com/ ;sleep 1}

and then watch the logs in a v12 pod to confirm that the traffic is being mirrored in:

(base) [root@master1 10]# kubectl -n java-demo exec -it dpment-linuxea-c-568b9fcb5c-ltdcg -- /bin/bash
bash-5.0# curl 127.0.0.1
linuxea-dpment-linuxea-c-568b9fcb5c-ltdcg.com-127.0.0.1/8 130.130.1.125/24 version number 3.0
bash-5.0# tail -f /data/logs/access.log
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:27:59 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:00 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:01 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:02 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:03 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:04 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:05 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:06 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
...

Note the Host value in the mirrored requests: Istio appends -shadow to it (dpment.linuxea.com-shadow), which is how shadow traffic can be told apart in the logs.
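By default, once mirror is set, every request matched by the route is mirrored. If the shadow environment should only receive a fraction of production traffic, the mirrorPercentage field can scale it down. A minimal sketch, assuming the same dpment subsets as above; the 10% figure is arbitrary:

  http:
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
    mirror:
      host: dpment
      subset: v12
    mirrorPercentage:
      value: 10.0        # mirror only ~10% of matched requests

Mirrored requests are fire-and-forget: the sidecar discards the shadow responses, so the percentage only controls load on the shadow service and changes nothing the client sees.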
2023-02-03
1,052 reads
0 comments
0 likes
2023-01-17
linuxea: istio header-based request routing (9)
5. Conditional routing on request headers

Normally, traffic from a client is sent to the sidecar proxy (Envoy), which forwards the request to the upstream pod; the response then returns to Envoy and from there to the client. In this flow, the request from the client to Envoy cannot be modified; only the request Envoy sends on to the upstream service can be manipulated, because that message is generated by Envoy itself. Likewise, the response from the service back to Envoy cannot be modified, while the response Envoy sends to the client, also generated by Envoy, can be. In total, only two legs are configurable: the request sent to the upstream service and the response sent to the downstream client. The other two legs are not produced by Envoy, so there is no way to operate on their headers.

- request: the request sent to the upstream service
- response: the response sent to the downstream client

1. If a request carries the header x-for-canary with the value "true", route it to v11, setting the User-Agent header of the request sent upstream to Mozilla, and adding an x-canary header to the response returned to the client:

  - name: canary
    match:
    - headers:
        x-for-canary:
          exact: "true"
    route:
    - destination:
        host: dpment
        subset: v11
    headers:
      request:                 # set User-Agent: Mozilla on the request sent upstream
        set:
          User-Agent: Mozilla
      response:                # add an x-canary header to the response returned to the client
        add:
          x-canary: "marksugar"

Requests that match none of these rules fall through to the default rule and are routed to v10, with a global X-Envoy: linuxea header added to the downstream response:

  - name: default
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: dpment
        subset: v10

The YAML is as follows:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: canary
    match:
    - headers:
        x-for-canary:
          exact: "true"
    route:
    - destination:
        host: dpment
        subset: v11
    headers:
      request:
        set:
          User-Agent: Mozilla
      response:
        add:
          x-canary: "marksugar"
  - name: default
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: dpment
        subset: v10

We add the gateway part:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"              # matches gateways/proxy-gateway
  - "dpment"
  gateways:
  - istio-system/dpment-gateway       # these definitions apply only on the Ingress Gateway
  - mesh
  http:
  - name: canary
    match:
    - headers:
        x-for-canary:
          exact: "true"
    route:
    - destination:
        host: dpment
        subset: v11
    headers:
      request:
        set:
          User-Agent: Mozilla
      response:
        add:
          x-canary: "marksugar"
  - name: default
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: dpment
        subset: v10

5.1 Testing

We can now simulate requests with curl:

# curl dpment.linuxea.com
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0

Add the header with -H "x-for-canary: true":

# curl -H "x-for-canary: true" dpment.linuxea.com
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0

Then check the user-agent in the access log. Here it is Mozilla, even though the request was made with curl (whose default agent is curl):

130.130.0.0, 127.0.0.6 - [07/Aug/2022:08:18:33 +0000] "GET / HTTP/1.1" dpment.linuxea.com94 "-" "Mozilla" - -0.000 [200] [-] [-] "-"

With -I we can see the added response header x-canary: marksugar:

PS C:\Users\Administrator> curl -H "x-for-canary: true" dpment.linuxea.com -I
HTTP/1.1 200 OK
server: istio-envoy
date: Sun, 07 Aug 2022 09:47:45 GMT
content-type: text/html
content-length: 94
last-modified: Wed, 03 Aug 2022 07:58:30 GMT
etag: "62ea2aa6-5e"
accept-ranges: bytes
x-envoy-upstream-service-time: 4
x-canary: marksugar

Without -H "x-for-canary: true", the response carries x-envoy: linuxea instead:

PS C:\Users\Administrator> curl dpment.linuxea.com -I
HTTP/1.1 200 OK
server: istio-envoy
date: Sun, 07 Aug 2022 09:51:53 GMT
content-type: text/html
content-length: 93
last-modified: Wed, 03 Aug 2022 07:59:37 GMT
etag: "62ea2ae9-5d"
accept-ranges: bytes
x-envoy-upstream-service-time: 7
x-envoy: linuxea
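Since cookies travel in the Cookie request header, the same header-matching machinery also covers cookie-based canaries, which is how a cookie like the cannary one from the beginning of this series can drive routing. Istio has no dedicated cookie matcher, so a regex over the Cookie header is used instead. A minimal sketch, assuming the dpment subsets defined above:

  http:
  - name: canary
    match:
    - headers:
        cookie:
          regex: "^(.*?;)?(cannary=true)(;.*)?$"   # the Cookie header contains cannary=true
    route:
    - destination:
        host: dpment
        subset: v11
  - name: default
    route:
    - destination:
        host: dpment
        subset: v10

A client can then opt into the canary with: curl -b "cannary=true" dpment.linuxea.com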
2023-01-17
1,251 reads
0 comments
0 likes
2023-01-15
linuxea: istio weight-based routing (8)
Following on from the previous posts, this time we want 90% of the requests to the dpment service to land on the original v10 pods and 10% on the new v11 pods, so we configure weight to split traffic by ratio.

First deploy dpment-a and dpment-b. A Service is still needed to select the backend pod labels:

apiVersion: v1
kind: Service
metadata:
  name: dpment
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
  type: ClusterIP

Then configure a DestinationRule whose subsets match on labels, carving the service's endpoints into two subsets. The deployments and their labels were configured in earlier posts, so the subsets simply reference them here:

---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment
  namespace: java-demo
spec:
  host: dpment
  subsets:
  - name: v11
    labels:
      version: v1.1
  - name: v10
    labels:
      version: v1.0

In the VirtualService, add a weight to each of the two named subsets:

---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: weight-based-routing
    route:
    - destination:
        host: dpment
        subset: v10
      weight: 90
    - destination:
        host: dpment
        subset: v11
      weight: 10

The combined YAML is simply these three resources (Service, DestinationRule, VirtualService) concatenated into one file.

We can then run an access test from the cli container, or any other container:

/ $ while true;do curl dpment;sleep 0.3;done
linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-695mr.com-127.0.0.1/8 130.130.1.108/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-695mr.com-127.0.0.1/8 130.130.1.108/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0
...

Roughly one request in ten lands on version 2.0, and the traffic split is visible in the Kiali graph.

If we want to shift the ratio, we just edit the weights, for example 60% to v10 and 40% to v11:

---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: weight-based-routing
    route:
    - destination:
        host: dpment
        subset: v10
      weight: 60
    - destination:
        host: dpment
        subset: v11
      weight: 40

Kiali then shows the new traffic ratio. To switch all traffic to one side, set its weight to 100:

---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: weight-based-routing
    route:
    - destination:
        host: dpment
        subset: v10
      weight: 0
    - destination:
        host: dpment
        subset: v11
      weight: 100

Once all traffic flows to the 100% side, the subset whose weight is 0 receives no traffic and disappears from the Kiali graph.

4.1 Gateway

For a front-end page some additional configuration is needed. Suppose the dpment service currently runs version 1.1 and we want to roll version 1.2 out by weight. So far only the v1.0 and v1.1 subsets exist, so we add a v1.2 pod group and a matching DestinationRule entry. The DestinationRule associates subsets by label, so the new pods must carry the matching labels:

  matchLabels:
    app: linuxea_app
    version: v1.2

On top of the earlier DestinationRule we add:

  - name: v12
    labels:
      version: v1.2

all associated under the same dpment host. The YAML is as follows:

apiVersion: v1
kind: Service
metadata:
  name: dpment
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea-c
  namespace: java-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: linuxea_app
      version: v1.2
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v1.2
    spec:
      containers:
      - name: dpment-linuxea-c
        # imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v3.0
        ports:
        - name: http
          containerPort: 80
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment
  namespace: java-demo
spec:
  host: dpment            # must match the Service
  subsets:                # logical groups
  - name: v11             # select pods labeled v1.1 into subset v11
    labels:
      version: v1.1
  - name: v10             # select pods labeled v1.0 into subset v10
    labels:
      version: v1.0
  - name: v12
    labels:
      version: v1.2

Then adjust the ratio in the VirtualService, 90% to v11 and 10% to the new v12:

  - destination:
      host: dpment
      subset: v11
    weight: 90
  - destination:
      host: dpment
      subset: v12
    weight: 10

The YAML is as follows:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"              # matches gateways/proxy-gateway
  - "dpment"
  gateways:
  - istio-system/dpment-gateway       # these definitions apply only on the Ingress Gateway
  - mesh
  http:
  - name: weight-based-routing
    route:
    - destination:
        host: dpment
        subset: v11
      weight: 90
    - destination:
        host: dpment
        subset: v12
      weight: 10

curl:

PS C:\Users\usert> while ("true"){ curl https://dpment.linuxea.com/ ;sleep 1}
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-c-568b9fcb5c-ltdcg.com-127.0.0.1/8 130.130.1.125/24 version number 3.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-c-568b9fcb5c-phn77.com-127.0.0.1/8 130.130.0.24/24 version number 3.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-c-568b9fcb5c-phn77.com-127.0.0.1/8 130.130.0.24/24 version number 3.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
...

Because the configured hosts are

  hosts:
  - "dpment.linuxea.com"
  - "dpment"

the service can also be reached from inside a pod. For the same rules to apply both inside the mesh and through the ingress, the - mesh entry under gateways is essential.

Create a cli pod:

kubectl -n java-demo run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash

and start simulating requests with curl:

bash-4.4# while true;do curl dpment;sleep 0.4;done
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-c-568b9fcb5c-ltdcg.com-127.0.0.1/8 130.130.1.125/24 version number 3.0
linuxea-dpment-linuxea-c-568b9fcb5c-phn77.com-127.0.0.1/8 130.130.0.24/24 version number 3.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-c-568b9fcb5c-phn77.com-127.0.0.1/8 130.130.0.24/24 version number 3.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
...

At this point the Kiali graph changes to show the new three-way split.
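Weight-based splitting also combines naturally with the header-based routing from the previous post: testers who send a marker header always hit the new version, while everyone else is split by weight. A minimal sketch of that combination, reusing the dpment subsets above (the x-for-canary header follows the earlier posts; the route names are illustrative):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"
  - "dpment"
  gateways:
  - istio-system/dpment-gateway
  - mesh
  http:
  - name: force-canary
    match:
    - headers:
        x-for-canary:
          exact: "true"
    route:
    - destination:
        host: dpment
        subset: v12        # testers always get the new version
  - name: weighted-default
    route:
    - destination:
        host: dpment
        subset: v11
      weight: 90
    - destination:
        host: dpment
        subset: v12
      weight: 10           # everyone else: 10% canary

Match rules are evaluated in order, so the force-canary route must come before the weighted default.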
2023-01-15
1,091 reads
0 comments
0 likes