2023-01-03
linuxea: istio — exposing a web service outside the cluster (7)
3. Exposing the service to the outside

To reach these two pods from outside the cluster by domain name, we need to define a Gateway and a VirtualService; the VirtualService attaches to the listener opened by the Gateway.

- The Gateway must be created in the namespace where the ingress gateway is deployed, otherwise the configuration may not be applied correctly.
- The VirtualService defines the routing rules.
- The VirtualService defined earlier did not specify a gateway; without one, it is only used by the sidecars inside the mesh.
- If it is only bound to the gateway, it cannot be used inside the mesh; to also apply it inside the mesh, add `- mesh` to the gateways list.
- Inside the cluster, services are normally reached by their Service name.

Configure a Gateway to accept ingress traffic for the listed hosts:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: dpment-gateway
  namespace: istio-system   # must be the namespace of the ingress gateway pod
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "dpment.linuxea.com"
    - "dpment1.linuxea.com"
```

Configure a VirtualService and bind it to istio-system/dpment-gateway; its hosts must match the hosts of the Gateway above:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"    # corresponds to gateways/proxy-gateway
  - "dpment1.linuxea.com"
  gateways:
  - istio-system/dpment-gateway  # this definition only applies on the ingress gateway
  #- mesh
  http:
  - name: dpment
    route:
    - destination:
        host: dpment
---
```

Configure a Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dpment
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
  type: ClusterIP
---
```

At this point the site can be reached from a browser, but traffic is simply round-robined across all pods labeled app: linuxea_app. So we add a URL path rule: requests without /version/ go to subset v11, and requests with /version/ are rewritten to / and sent to subset v10.

```yaml
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
```

We modify the VirtualService to use subsets, which requires an additional DestinationRule:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
#  hosts:
#  - dpment
  hosts:
  - "dpment.linuxea.com"    # corresponds to gateways/proxy-gateway
  gateways:
  - istio-system/dpment-gateway  # only applied on the ingress gateway
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment
  namespace: java-demo
spec:
  host: dpment
  subsets:
  - name: v11
    labels:
      version: v1.1
  - name: v10
    labels:
      version: v1.0
```

Add the hostnames to the local hosts file and test:

```
PS C:\Users\usert> while ("true"){ curl http://dpment.linuxea.com/ http://dpment.linuxea.com/version/ ;sleep 1}
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
...
```

In kiali, the rendered traffic graph has changed accordingly.
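If you prefer not to edit the local hosts file, the same check can be done by pinning the hostname to the ingress gateway address with curl. This is a small sketch that is not part of the original post; it assumes the istio-ingressgateway is reachable at 172.16.100.110 on port 80 (substitute your own external IP or NodePort).

```bash
# Resolve dpment.linuxea.com to the ingress gateway address for this request only
curl --resolve dpment.linuxea.com:80:172.16.100.110 http://dpment.linuxea.com/
# The /version/ prefix should be rewritten to / and served by the v10 subset
curl --resolve dpment.linuxea.com:80:172.16.100.110 http://dpment.linuxea.com/version/
```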
2022-12-27
linuxea: istio — defining subsets (6)
Defining subsets

We fold the two versions onto a single set of pods behind one host and tell them apart by labels: for multiple versions under the same host, each version is marked with a label, and the VirtualService then routes to those groups.

- Subsets are configured on the DestinationRule (the cluster side); the VirtualService routes traffic to them.
- In this example the subsets are keyed on the version label, for example:

```yaml
  selector:
    app: linuxea_app
    version: v0.2
```

service

As before, create a Service that selects the pods by label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dpment
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
  type: ClusterIP
```

Defining the DestinationRule

Then create a DestinationRule that, under one host, uses subsets to group by version label the pods that were previously behind two separate Services with different versions:

- host matches the Service name
- subset v11 selects pods labeled version: v1.1
- subset v10 selects pods labeled version: v1.0

We first adjust the pod labels. dpment-b:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: dpment-b
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea-b
  namespace: java-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: linuxea_app
      version: v1.1
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v1.1
    spec:
      containers:
      - name: nginx-b
#        imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0
        ports:
        - name: http
          containerPort: 80
```

dpment-a:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: dpment-a
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
    version: v1.0
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea-a
  namespace: java-demo
spec:
  replicas:
  selector:
    matchLabels:
      app: linuxea_app
      version: v1.0
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v1.0
    spec:
      containers:
      - name: nginx-a
#        imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0
        ports:
        - name: http
          containerPort: 80
```

After creation:

```
(base) [root@master1 2]# kubectl -n java-demo get svc,pod
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/dpment     ClusterIP   10.96.155.138    <none>        80/TCP    22h
service/dpment-a   ClusterIP   10.99.74.80      <none>        80/TCP    12s
service/dpment-b   ClusterIP   10.101.155.240   <none>        80/TCP    33s

NAME                                    READY   STATUS    RESTARTS   AGE
pod/cli                                 2/2     Running   0          22h
pod/dpment-linuxea-a-777847fd74-fsnsv   2/2     Running   0          12s
pod/dpment-linuxea-b-55694cb7f5-576qs   2/2     Running   0          32s
pod/dpment-linuxea-b-55694cb7f5-lhkrb   2/2     Running   0          32s
```

DestinationRule

If there are more versions, the subsets list simply grows: one label selector per version of the service. The DestinationRule looks like this:

```yaml
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment
  namespace: java-demo
spec:
  host: dpment        # same as the Service name
  subsets:            # logical groups
  - name: v11         # define v11 and select pods labeled version: v1.1 into it
    labels:
      version: v1.1
  - name: v10         # define v10 and select pods labeled version: v1.0 into it
    labels:
      version: v1.0
---
```

Once the DestinationRule is created, the subsets show up in the cluster configuration:

```
IMP=$(kubectl -n java-demo get pod -l app=linuxea_app -o jsonpath={.items[0].metadata.name})
(base) [root@master1 2]# istioctl proxy-config cluster $IMP.java-demo
...
dpment-a.java-demo.svc.cluster.local   80   -     outbound   EDS
dpment-b.java-demo.svc.cluster.local   80   -     outbound   EDS
dpment.java-demo.svc.cluster.local     80   -     outbound   EDS   dpment.java-demo
dpment.java-demo.svc.cluster.local     80   v10   outbound   EDS   dpment.java-demo
dpment.java-demo.svc.cluster.local     80   v11   outbound   EDS   dpment.java-demo
...
```

Each Service is a cluster of its own, and each can be reached:

```
bash-4.4# curl dpment
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
bash-4.4# curl dpment-a
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
bash-4.4# curl dpment-b
linuxea-dpment-linuxea-b-59b448f49c-nfkfh.com-127.0.0.1/8 130.130.0.13/24 version number 2.0
```

dpment is now the Service backing our subsets, and both the DestinationRule and the VirtualService reference dpment, so the redundant dpment-a and dpment-b Services can be deleted without affecting dpment. Once they are removed, their listeners, clusters and routes disappear as well:

```
(base) [root@master1 2]# kubectl -n java-demo delete svc dpment-a dpment-b
service "dpment-a" deleted
service "dpment-b" deleted
```

VirtualService

We still need a VirtualService for the URL-path routing. The rules stay the same, but the destination host is now always dpment; only the subset differs:

```yaml
spec:
  hosts:
  - dpment               # service
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment     # service
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment     # service
        subset: v11
```

Requests to /version/ are rewritten to / and routed to subset v10; everything else goes to the default route and subset v11. The full manifest:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
```

After creation, the configuration can be inspected in kiali: the service shows up with both the VS and the DR in place. Test from the command line:

```
bash-4.4# while true;do curl dpment; curl dpment/version/;sleep 0.$RANDOM;done
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-j7gk9.com-127.0.0.1/8 130.130.1.121/24 version number 2.0
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-j7gk9.com-127.0.0.1/8 130.130.1.121/24 version number 2.0
linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-nfkfh.com-127.0.0.1/8 130.130.0.13/24 version number 2.0
...
```

The behaviour is the same as before.
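The posts also note that the scheduling algorithm of a cluster is defined on the DestinationRule. As an illustration only (not from the original post), a trafficPolicy can be set for the whole host and overridden per subset; the sketch below assumes the same dpment host and v10/v11 subsets used above.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment
  namespace: java-demo
spec:
  host: dpment
  trafficPolicy:            # default policy for the whole host
    loadBalancer:
      simple: ROUND_ROBIN
  subsets:
  - name: v11
    labels:
      version: v1.1
  - name: v10
    labels:
      version: v1.0
    trafficPolicy:          # per-subset override
      loadBalancer:
        simple: RANDOM
```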
2022-12-03
linuxea: istio — URL path based routing (5)
Traffic management

Since Istio 1.5, istiod acts as the control plane and distributes configuration to all sidecar proxies and gateways, giving the applications in the mesh an intelligent load-balancing mechanism. Envoy, lightly extended as istio-proxy, runs alongside the application as a sidecar inside the pod; Envoy also powers the ingress and egress gateways, handling inbound and outbound traffic respectively. In Kubernetes, once Istio is installed and a namespace is labeled, the sidecar is injected into pods automatically, with no manual configuration; whether manual or automatic, injection is done by the Istio controller. The ingress gateway, as the unified entry point, is required; the egress gateway is optional.

When the application in a pod sends traffic out, the request is handled by its own sidecar; when the sidecar is on the receiving side, it simply forwards the request to the local application. Traffic from outside the cluster first enters through the ingress gateway and is then forwarded to the backend.

Istio's service registry discovers all services: each service name becomes a virtual host and its pods become an upstream cluster, and by default traffic for that host is forwarded, without any routing conditions, to all of the pods behind the name. Every sidecar receives the full configuration, and each service is converted into a virtual-host configuration whose host header is the service name; all traffic matching that host header is forwarded to the backing pods. The Kubernetes Service only discovers the pods; it does not sit in the forwarding path.

In Envoy terms, a listener accepts traffic and an upstream cluster receives it. To decide where traffic goes, multiple vhosts are defined: the host header selects a virtual host, and routing rules (URL match and so on) select the upstream cluster. The cluster plays much the same role as an nginx upstream: scheduling, sessions, and so on; traffic splitting by percentage is configured here. The scheduling algorithm is defined by the DestinationRule, the virtual hosts by the VirtualService's hosts, and external traffic entering through the ingress gateway requires a Gateway CRD.

Configuring Envoy directly would be far too complex, so for traffic management we need to understand a few CRDs:

- Gateway: brings external traffic into the mesh. The configuration is not pushed to the whole mesh, only to the ingress-gateway pod, which itself has no sidecar.
- ServiceEntry: defines outbound destinations in one place; it is also converted into native Envoy configuration and pushed only to the egress gateway to control outbound traffic.
- VirtualService: once the mesh exists, istiod automatically discovers every Service on the control plane and data plane (istio-system plus the labeled namespaces), converts them into Envoy configuration, and pushes it to every sidecar proxy. This push covers everything, so services can already reach each other: each Service becomes egress listeners in the pods' Envoys, with routes and clusters; Istio creates a listener for every service port and groups the matching endpoints into a cluster. A VirtualService supplements this default in-mesh configuration for the path from one service to another cluster: routing rules and subsets, URL matching, weights, and other advanced behaviour. In Envoy terms, a VirtualService is the virtual hosts and route config attached to the listeners.
- DestinationRule: points the configured routes at a backend cluster and defines the traffic-distribution behaviour on that cluster, such as the balancing mechanism and outlier detection. Once applied, it is pushed to every sidecar in the mesh, mostly on the outbound side.

DestinationRule and VirtualService are extensions of the default configuration, so they are not always needed; they are only required when the defaults are not enough, for example when advanced features are wanted. To configure traffic management we need a VirtualService and a DestinationRule. In practice, we at least need the cluster to be reachable from outside, then configure the ingress-gateway and the virtual hosts via VirtualService and DestinationRule.

- External inbound traffic reaches the cluster through the ingress gateway (north-south traffic). The ingress vhosts defined by the Gateway include the target "host" and the port the virtual host listens on.
- Traffic inside the cluster only flows between sidecars (east-west traffic).
- The VirtualService defines the listeners for the sidecar Envoys (routing mechanisms and so on); the DestinationRule defines the clusters (including endpoint discovery).

Whether for releases, tests or anything else, traffic inside the mesh is shaped on the outbound (forward-proxy) Envoy of the side that initiates the request, and all of this happens on the data plane; the control plane only defines and pushes the policy. To apply such configuration on egress or ingress, it is defined through a VirtualService.
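A quick way to see how these pieces land in a sidecar is istioctl proxy-config, which the rest of this post uses heavily. The cheat sheet below is not part of the original post; it assumes a pod name stored in $POD in the java-demo namespace.

```bash
# Listeners the sidecar holds (what traffic it intercepts)
istioctl -n java-demo proxy-config listeners $POD
# Route configuration attached to those listeners (virtual hosts, match rules)
istioctl -n java-demo proxy-config routes $POD --name 80
# Upstream clusters known to the sidecar (one per service port, plus subsets)
istioctl -n java-demo proxy-config cluster $POD
# Endpoints (pod IPs) behind a given cluster
istioctl -n java-demo proxy-config endpoints $POD \
  --cluster "outbound|80||dpment.java-demo.svc.cluster.local"
```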
1. URL path based routing

Between the new version and the old one, we want 1% of the traffic on the new version and the other 99% on the old one. We prepare two pods; opening the root path shows the version:

- nginx:v1.0 → linuxea-dpment-linuxea-x-xxxxx version number 1.0
- nginx:v1.1 → linuxea-dpment-linuxea-x-xxxxx version number 1.1

/version/ shows the same information. Both versions serve the root path and /version/; only the version number differs. The two images used for the test are:

```
registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0
registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.1
```

1.1 dpment-a

To be discovered by Istio we must create a Service, and then a dpment-a Deployment. The manifest:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: dpment-a
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
    version: v0.1
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea-a
  namespace: java-demo
spec:
  replicas:
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.1
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.1
    spec:
      containers:
      - name: nginx-a
#        imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0
        ports:
        - name: http
          containerPort: 80
```

Now we can see how this pod is represented in Istio. Get the pod name:

```
(base) [root@linuxea.com test]# INGS=$(kubectl -n java-demo get pod -l app=linuxea_app -o jsonpath={.items[0].metadata.name})
(base) [root@linuxea.com test]# echo $INGS
dpment-linuxea-a-68dc49d5d-c9pcb
```

Check proxy-status:

```
(base) [root@linuxea.com test]# istioctl proxy-status
NAME                                                  CLUSTER      CDS      LDS      EDS      RDS        ECDS       ISTIOD                     VERSION
dpment-linuxea-a-68dc49d5d-c9pcb.java-demo            Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED     NOT SENT   istiod-8689fcd796-mqd8n    1.14.1
dpment-linuxea-a-68dc49d5d-h6v6v.java-demo            Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED     NOT SENT   istiod-8689fcd796-mqd8n    1.14.1
dpment-linuxea-a-68dc49d5d-svl52.java-demo            Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED     NOT SENT   istiod-8689fcd796-mqd8n    1.14.1
istio-egressgateway-65b46d7874-xdjkr.istio-system     Kubernetes   SYNCED   SYNCED   SYNCED   NOT SENT   NOT SENT   istiod-8689fcd796-mqd8n    1.14.1
istio-ingressgateway-559d4ffc58-7rgft.istio-system    Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED     NOT SENT   istiod-8689fcd796-mqd8n    1.14.1
sleep-557747455f-46jf5.java-demo                      Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED     NOT SENT   istiod-8689fcd796-mqd8n    1.14.1
```

Check the port-80 routes; dpment-a has been created:

```
(base) [root@linuxea.com test]# istioctl proxy-config routes $INGS.java-demo --name 80
NAME     DOMAINS                                             MATCH     VIRTUAL SERVICE
80       argocd-server.argocd, 10.98.127.60                  /*
80       dpment-a, dpment-a.java-demo + 1 more...            /*
80       ingress-nginx.ingress-nginx, 10.99.195.253          /*
80       istio-egressgateway.istio-system, 10.97.213.128     /*
80       istio-ingressgateway.istio-system, 10.97.154.56     /*
80       kuboard.kube-system, 10.97.104.136                  /*
80       skywalking-ui.skywalking, 10.104.119.238            /*
80       sleep, sleep.java-demo + 1 more...                  /*
80       tracing.istio-system, 10.104.76.74                  /*
80       web-nginx.test, 10.104.18.194                       /*
```

The cluster has been discovered as well:

```
(base) [root@linuxea.com test]# istioctl proxy-config cluster $INGS.java-demo | grep dpment-a
dpment-a.java-demo.svc.cluster.local    80    -    outbound    EDS
```

And the endpoints show the backend IPs:

```
(base) [root@linuxea.com test]# istioctl proxy-config endpoints $INGS.java-demo | grep dpment-a
130.130.0.3:80      HEALTHY    OK    outbound|80||dpment-a.java-demo.svc.cluster.local
130.130.0.4:80      HEALTHY    OK    outbound|80||dpment-a.java-demo.svc.cluster.local
130.130.1.119:80    HEALTHY    OK    outbound|80||dpment-a.java-demo.svc.cluster.local
```

Or filter by cluster:

```
(base) [root@linuxea.com test]# istioctl proxy-config endpoints $INGS.java-demo --cluster "outbound|80||dpment-a.java-demo.svc.cluster.local"
ENDPOINT            STATUS     OUTLIER CHECK    CLUSTER
130.130.0.3:80      HEALTHY    OK               outbound|80||dpment-a.java-demo.svc.cluster.local
130.130.0.4:80      HEALTHY    OK               outbound|80||dpment-a.java-demo.svc.cluster.local
130.130.1.119:80    HEALTHY    OK               outbound|80||dpment-a.java-demo.svc.cluster.local
```

These are the pod IPs:

```
(base) [root@linuxea.com test]# kubectl -n java-demo get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE      NOMINATED NODE   READINESS GATES
dpment-linuxea-a-68dc49d5d-c9pcb   2/2     Running   0          30m   130.130.0.4     master2   <none>           <none>
dpment-linuxea-a-68dc49d5d-h6v6v   2/2     Running   0          31m   130.130.0.3     master2   <none>           <none>
dpment-linuxea-a-68dc49d5d-svl52   2/2     Running   0          30m   130.130.1.119   k8s-03    <none>           <none>
```

a. Inspecting from a test pod

Next we run a pod; it too is added to the mesh:

```
kubectl run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash
```

As follows:

```
(base) [root@linuxea.com test]# kubectl -n java-demo run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash
If you don't see a command prompt, try pressing enter.
bash-4.4#
```

Access the service by name:

```
bash-4.4# curl dpment-a
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
bash-4.4# curl dpment-a
linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0
bash-4.4# curl dpment-a
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
```

The sidecar listener ports are present inside this pod as well:

```
bash-4.4# ss -tlnp
State   Recv-Q   Send-Q   Local Address:Port   Peer Address:Port
LISTEN  0        128      0.0.0.0:15021        0.0.0.0:*
LISTEN  0        128      0.0.0.0:15090        0.0.0.0:*
LISTEN  0        128      127.0.0.1:15000      0.0.0.0:*
LISTEN  0        128      0.0.0.0:15001        0.0.0.0:*
LISTEN  0        128      127.0.0.1:15004      0.0.0.0:*
LISTEN  0        128      0.0.0.0:15006        0.0.0.0:*
LISTEN  0        128      *:15020              *:*
```

We can query the Envoy admin interface for the port-80 listeners:

```
bash-4.4# curl -s 127.0.0.1:15000/listeners | grep 80
10.102.80.102_10257::10.102.80.102:10257
0.0.0.0_8080::0.0.0.0:8080
0.0.0.0_80::0.0.0.0:80
10.104.119.238_80::10.104.119.238:80
10.109.18.63_8083::10.109.18.63:8083
0.0.0.0_11800::0.0.0.0:11800
10.104.18.194_80::10.104.18.194:80
10.96.124.32_12800::10.96.124.32:12800
0.0.0.0_8060::0.0.0.0:8060
0.0.0.0_8085::0.0.0.0:8085
...
```

And the cluster statistics:

```
bash-4.4# curl -s 127.0.0.1:15000/clusters | grep dpment-a
outbound|80||dpment-a.java-demo.svc.cluster.local::observability_name::outbound|80||dpment-a.java-demo.svc.cluster.local
outbound|80||dpment-a.java-demo.svc.cluster.local::default_priority::max_connections::4294967295
outbound|80||dpment-a.java-demo.svc.cluster.local::default_priority::max_pending_requests::4294967295
outbound|80||dpment-a.java-demo.svc.cluster.local::default_priority::max_requests::4294967295
outbound|80||dpment-a.java-demo.svc.cluster.local::default_priority::max_retries::4294967295
outbound|80||dpment-a.java-demo.svc.cluster.local::added_via_api::true
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::cx_active::2
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::rq_success::2
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::health_flags::healthy
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::rq_success::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::health_flags::healthy
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::rq_success::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::health_flags::healthy
...
```
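Besides /listeners and /clusters, the Envoy admin interface on 127.0.0.1:15000 exposes a few other read-only endpoints that are useful when debugging a sidecar. This is a general Envoy note rather than something from the original post.

```bash
# Full Envoy configuration as this sidecar sees it (a large JSON document)
curl -s 127.0.0.1:15000/config_dump | head -n 20
# Raw statistics, filtered for one upstream cluster
curl -s 127.0.0.1:15000/stats | grep "dpment-a.java-demo"
# Envoy build and server information
curl -s 127.0.0.1:15000/server_info
```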
From this test pod we now request dpment-a in a loop. dpment-a itself is implemented by a Kubernetes Service, but with Istio in place the forwarding is delegated to Istio:

```
(base) [root@linuxea.com ~]# kubectl -n java-demo run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash
If you don't see a command prompt, try pressing enter.
bash-4.4# while true;do curl dpment-a;sleep 0.5;done
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
...
```

kiali

The cli pod requests dpment-a; the request first happens on the cli pod's own sidecar (its egress listener). The outbound traffic matches the dpment-a service on the egress listener, and the request for that host is dispatched by the egress listener's cluster to a backend pod, which responds.

b. ingress-gw

To make this reachable from outside the cluster, an ingress gateway configuration is needed:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: dpment-gateway
  namespace: istio-system   # must be the namespace of the ingress gateway pod
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "dpment.linuxea.com"
    - "dpment1.linuxea.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"    # corresponds to gateways/proxy-gateway
  - "dpment1.linuxea.com"
  gateways:
  - istio-system/dpment-gateway   # only applied on the ingress gateway
  #- mesh
  http:
  - name: dpment-a
    route:
    - destination:
        host: dpment-a
---
```

After applying, resolve the domain locally and the service can be reached.

1.2 dpment-b

Now we create a dpment-b Service and Deployment:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: dpment-b
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea-b
  namespace: java-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.2
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.2
    spec:
      containers:
      - name: nginx-b
#        imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0
        ports:
        - name: http
          containerPort: 80
```

After creation:

```
(base) [root@linuxea.com test]# kubectl -n java-demo get pod,svc
NAME                                    READY   STATUS    RESTARTS   AGE
pod/cli                                 2/2     Running   0          5h59m
pod/dpment-linuxea-a-68dc49d5d-c9pcb    2/2     Running   0          23h
pod/dpment-linuxea-a-68dc49d5d-h6v6v    2/2     Running   0          23h
pod/dpment-linuxea-a-68dc49d5d-svl52    2/2     Running   0          23h
pod/dpment-linuxea-b-59b448f49c-j7gk9   2/2     Running   0          29m
pod/dpment-linuxea-b-59b448f49c-nfkfh   2/2     Running   0          29m

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/dpment-a   ClusterIP   10.107.148.63    <none>        80/TCP    23h
service/dpment-b   ClusterIP   10.109.153.119   <none>        80/TCP    29m
```

With -o wide:

```
(base) [root@linuxea.com test]# kubectl -n java-demo get pod,svc -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP              NODE
pod/cli                                 2/2     Running   0          5h59m   130.130.0.9     master2
pod/dpment-linuxea-a-68dc49d5d-c9pcb    2/2     Running   0          23h     130.130.0.4     master2
pod/dpment-linuxea-a-68dc49d5d-h6v6v    2/2     Running   0          23h     130.130.0.3     master2
pod/dpment-linuxea-a-68dc49d5d-svl52    2/2     Running   0          23h     130.130.1.119   k8s-03
pod/dpment-linuxea-b-59b448f49c-j7gk9   2/2     Running   0          29m     130.130.1.121   k8s-03
pod/dpment-linuxea-b-59b448f49c-nfkfh   2/2     Running   0          29m     130.130.0.13    master2

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/dpment-a   ClusterIP   10.107.148.63    <none>        80/TCP    23h   app=linuxea_app,version=v0.1
service/dpment-b   ClusterIP   10.109.153.119   <none>        80/TCP    29m   app=linuxea_app,version=v0.2
```

1.3 dpment

dpment-a and dpment-b now exist and their listeners, clusters, routes and endpoints have been generated. Next we create a dpment Service plus a dpment VirtualService that does URL forwarding inside the mesh: requests to /version/ are rewritten to / and forwarded to dpment-b, everything else goes to dpment-a. The configuration:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: dpment
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: dpment
  type: ClusterIP
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment-b
  - name: default
    route:
    - destination:
        host: dpment-a
```

The key parts, annotated:

```yaml
spec:
  hosts:
  - dpment                  # same name as the Service
  http:                     # layer-7 routing
  - name: version
    match:
    - uri:                  # URL of the request
        prefix: /version/   # prefix match on /version/
    rewrite:                # rewrite
      uri: /                # a /version/ prefix is rewritten to /
    route:
    - destination:
        host: dpment-b      # rewritten requests are sent to the dpment-b host
  - name: default           # anything not matching /version/ goes to default
    route:
    - destination:
        host: dpment-a      # and is routed to dpment-a
```

We have defined one routing rule: requests to /version/ are rewritten to / and routed to dpment-b, everything else is routed to dpment-a. After creating dpment there is one more svc:

```
(base) [root@linuxea.com test]# kubectl -n java-demo get svc
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
dpment     ClusterIP   10.96.155.138    <none>        80/TCP    6s
dpment-a   ClusterIP   10.107.148.63    <none>        80/TCP    23h
dpment-b   ClusterIP   10.109.153.119   <none>        80/TCP    51m
```

And a vs:

```
(base) [root@master2 ~]# kubectl -n java-demo get vs
NAME     GATEWAYS   HOSTS        AGE
dpment              ["dpment"]   19s
```

The routes now look like this:

```
(base) [root@linuxea.com ~]# istioctl proxy-config routes $IMP.java-demo | grep 80
web-nginx.test.svc.cluster.local:80    *                                              /*
8060     webhook-dingtalk.monitoring, 10.107.177.232                                  /*
8080     argocd-applicationset-controller.argocd, 10.96.132.151                       /*
...
80       argocd-server.argocd, 10.98.127.60                                           /*
80       dpment-a, dpment-a.java-demo + 1 more...                                     /*
80       dpment-b, dpment-b.java-demo + 1 more...                                     /*
80       dpment, dpment.java-demo + 1 more...        /version/*                       dpment.java-demo
80       dpment, dpment.java-demo + 1 more...        /*                               dpment.java-demo
80       ingress-nginx.ingress-nginx, 10.99.195.253                                   /*
...
```

We test again from a pod in java-demo:

```
kubectl -n java-demo run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash
```

Plain requests are sent to v0.1, and requests carrying the /version/ URL are sent to v0.2:

```
bash-4.4# while true;do curl dpment ; sleep 0.$RANDOM;done
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
...
```

After cycling requests for a while, the different paths show up in the kiali page:

```
while true;do curl dpment; curl dpment/version/;sleep 0.$RANDOM;done
```

Open the web UI to observe.
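The post opens by saying the goal is to send 1% of traffic to the new version and 99% to the old one, but the manifests above only route by URL path. For reference, weight-based splitting is expressed on the VirtualService route; the sketch below is not from the original post and assumes the dpment-a and dpment-b services defined above.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: weighted
    route:
    - destination:
        host: dpment-a   # old version
      weight: 99
    - destination:
        host: dpment-b   # new version
      weight: 1
```

The weights on a route should add up to 100; the same split can also be expressed with subsets of a single host, as in the later posts.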
2022-11-18
linuxea: istio 1.14.1 — configuring kiali (4)
4. Configuring kiali

The ingress and egress gateways handle inbound and outbound traffic respectively, and most of the related behaviour has to be defined through resources. When a pod talks to the outside, its traffic reaches the sidecar, and the sidecar already holds the corresponding configuration: how many services exist in the cluster, which pods back each service, and what share of traffic each should receive.

The VirtualService exists to define exactly that, much like an nginx virtual host. It defines a virtual host (or a path) for each service; once user traffic reaches the virtual host, the request is dispatched to the "upstream servers". On top of Istio this builds on the Kubernetes Service: the virtual host name is the Service name, and the members behind it are the pods discovered through that Service. Each pod is a Destination in Istio; a service is matched by its host name, traffic whose host header matches is sent to that destination, and how many pods the destination has depends on the Kubernetes Service. In Envoy these pods are called a cluster.

So the VirtualService defines how many virtual hosts there are and which rules match the traffic sent to them; the Destination describes the backend pods; and if those pods need to be grouped, the DestinationRule defines the subsets. For example, a service A can match all root-path traffic and send it to all of A's pods, load-balanced according to Istio/Envoy configuration. Those pods can also be split into v1 and v2, with 99% of the traffic to v1 and 1% to v2, and the 1% can additionally get timeouts, retries, fault injection and so on.

To configure traffic management we need a VirtualService and a DestinationRule. In practice, we at least need the cluster to be reachable from outside, then configure the ingress-gateway, and then the VirtualService and DestinationRule.

4.1 A test pod

There is already a java-demo pod; now create another one that carries the app and version labels:

```yaml
  app: linuxea_app
  version: v1.0
```

As follows:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: dpment
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
    version: v1.0
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.1
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.1
    spec:
      containers:
      - name: nginx-a
        image: marksugar/nginx:1.14.a
        ports:
        - name: http
          containerPort: 80
```

Apply it:

```
> kubectl.exe -n java-demo apply -f .\linuxeav1.yaml
service/dpment created
deployment.apps/dpment-linuxea created
```

istioctl ps shows the sidecar-injected pods and the gateways in the mesh:

```
> kubectl.exe -n java-demo get svc
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
dpment      ClusterIP   10.68.212.243   <none>        80/TCP           20h
java-demo   NodePort    10.68.4.28      <none>        8080:31181/TCP   6d21h
> kubectl.exe -n java-demo get pod
NAME                              READY   STATUS    RESTARTS      AGE
dpment-linuxea-54b8b64c75-b6mqj   2/2     Running   2 (43m ago)   20h
java-demo-79485b6d57-rd6bm        2/2     Running   2 (43m ago)   42h

[root@linuxea-48 ~]# istioctl ps
NAME                                                  CLUSTER      CDS      LDS      EDS      RDS        ECDS       ISTIOD                   VERSION
dpment-linuxea-54b8b64c75-b6mqj.java-demo             Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED     NOT SENT   istiod-56d9c5557-tffdv   1.14.1
istio-egressgateway-7fcb98978c-8t685.istio-system     Kubernetes   SYNCED   SYNCED   SYNCED   NOT SENT   NOT SENT   istiod-56d9c5557-tffdv   1.14.1
istio-ingressgateway-55b6cffcbc-9rn99.istio-system    Kubernetes   SYNCED   SYNCED   SYNCED   NOT SENT   NOT SENT   istiod-56d9c5557-tffdv   1.14.1
java-demo-79485b6d57-rd6bm.java-demo                  Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED     NOT SENT   istiod-56d9c5557-tffdv   1.14.1
```

4.2 Configuring a LoadBalancer

We add the VIP 172.16.100.110/24 to simulate a LoadBalancer:

```
[root@linuxea-11 ~]# ip addr add 172.16.100.110/24 dev eth0
[root@linuxea-11 ~]# ip a | grep 172.16.100.110
    inet 172.16.100.110/24 scope global secondary eth0
[root@linuxea-11 ~]# ping 172.16.100.110
PING 172.16.100.110 (172.16.100.110) 56(84) bytes of data.
64 bytes from 172.16.100.110: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 172.16.100.110: icmp_seq=2 ttl=64 time=0.017 ms
...
```

Then edit the service with kubectl -n istio-system edit svc istio-ingressgateway:

```yaml
  clusterIP: 10.68.113.92
  externalIPs:
  - 172.16.100.110
  clusterIPs:
  - 10.68.113.92
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
```

Once modified, verify:

```
[root@linuxea-48 /usr/local/istio/samples/addons]# kubectl -n istio-system get svc
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                                                      AGE
grafana                ClusterIP      10.68.57.153   <none>           3000/TCP                                                                     45h
istio-egressgateway    ClusterIP      10.68.66.165   <none>           80/TCP,443/TCP                                                               2d16h
istio-ingressgateway   LoadBalancer   10.68.113.92   172.16.100.110   15021:31787/TCP,80:32368/TCP,443:30603/TCP,31400:30435/TCP,15443:32099/TCP   2d16h
istiod                 ClusterIP      10.68.7.43     <none>           15010/TCP,15012/TCP,443/TCP,15014/TCP                                        2d16h
...
```

4.3.1 NodePort

If istio-ingressgateway is switched to NodePort, access has to use ip:port. NodePort is an enhanced ClusterIP, but in a cloud environment a LoadBalancer is usually wanted; a real LoadBalancer is a highly available IP rather than the hand-configured address in the example above.

To switch, edit kubectl -n istio-system edit svc istio-ingressgateway and set type: NodePort:

```yaml
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
```

A random port is assigned:

```
PS C:\Users\usert> kubectl.exe -n istio-system get svc
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE
grafana                ClusterIP   10.110.43.38    <none>        3000/TCP                                                                     14d
istio-egressgateway    ClusterIP   10.97.213.128   <none>        80/TCP,443/TCP                                                               14d
istio-ingressgateway   NodePort    10.97.154.56    <none>        15021:32514/TCP,80:30142/TCP,443:31060/TCP,31400:30785/TCP,15443:32082/TCP   14d
...
```

4.3.2 Exposing kiali

To open the kiali UI to the outside, we need to define and create kiali VirtualService, Gateway and DestinationRule objects. The installation already ships a set of CRDs:

```
(base) [root@linuxea-master1 ~]# kubectl -n istio-system get crds | grep istio
authorizationpolicies.security.istio.io    2022-07-14T02:28:00Z
destinationrules.networking.istio.io       2022-07-14T02:28:00Z
envoyfilters.networking.istio.io           2022-07-14T02:28:00Z
gateways.networking.istio.io               2022-07-14T02:28:00Z
virtualservices.networking.istio.io        2022-07-14T02:28:00Z
...
```

The API resources can be filtered with --api-group=networking.istio.io:

```
(base) [root@linuxea-master1 ~]# kubectl -n istio-system api-resources --api-group=networking.istio.io
NAME               SHORTNAMES   APIGROUP              NAMESPACED   KIND
destinationrules   dr           networking.istio.io   true         DestinationRule
envoyfilters                    networking.istio.io   true         EnvoyFilter
gateways           gw           networking.istio.io   true         Gateway
proxyconfigs                    networking.istio.io   true         ProxyConfig
serviceentries     se           networking.istio.io   true         ServiceEntry
sidecars                        networking.istio.io   true         Sidecar
virtualservices    vs           networking.istio.io   true         VirtualService
workloadentries    we           networking.istio.io   true         WorkloadEntry
workloadgroups     wg           networking.istio.io   true         WorkloadGroup
```

How each is defined can be explored with the built-in help, for example kubectl explain gw.spec.servers for a Gateway.

Defining the Gateway

The Gateway selects the istio-ingressgateway by label:

```yaml
  selector:
    app: istio-ingressgateway
```

and is created against the ingress gateway, as follows:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kiali-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 20001
      name: http-kiali
      protocol: HTTP
    hosts:
    - "kiali.linuxea.com"
---
```

After creation it can be checked with istioctl proxy-status. We grab the ingress gateway pod name by label and store it in a variable:

```
(base) [root@linuxea-master1 ~]# INGS=$(kubectl -n istio-system get pod -l app=istio-ingressgateway -o jsonpath={.items[0].metadata.name})
(base) [root@linuxea-master1 ~]# echo $INGS
istio-ingressgateway-559d4ffc58-7rgft
```

Then look at the listeners that have been defined:

```
(base) [root@linuxea-master1 ~]# istioctl -n istio-system proxy-config listeners $INGS
ADDRESS   PORT    MATCH   DESTINATION
0.0.0.0   8080    ALL     Route: http.8080
0.0.0.0   15021   ALL     Inline Route: /healthz/ready*
0.0.0.0   15090   ALL     Inline Route: /stats/prometheus*
0.0.0.0   20001   ALL     Route: http.20001
```

The listener 0.0.0.0:20001 → Route: http.20001 has been added, but a Gateway does not create routes on the ingress by itself, so the VIRTUAL SERVICE column for that route is still 404:

```
(base) [root@linuxea-master1 ~]# istioctl -n istio-system proxy-config routes $INGS
NAME          DOMAINS   MATCH                  VIRTUAL SERVICE
http.8080     *         /productpage           bookinfo.java-demo
http.8080     *         /static*               bookinfo.java-demo
http.8080     *         /login                 bookinfo.java-demo
http.8080     *         /logout                bookinfo.java-demo
http.8080     *         /api/v1/products*      bookinfo.java-demo
http.20001    *         /*                     404
              *         /stats/prometheus*
              *         /healthz/ready*
```

The gateway now exists in the namespace:

```
(base) [root@linuxea-master1 package]# kubectl -n istio-system get gw
NAME            AGE
kiali-gateway   3m
```

VirtualService

The Gateway declared hosts, so the VirtualService must declare the same hosts and state where the rule applies: the ingress gateway named kiali-gateway, on the gateway's port 20001 (the kiali default). Traffic is then routed to the kiali Service on port 20001, which in turn discovers the backend pod. A VirtualService is either attached to an ingress gateway to accept incoming traffic, or applied inside the cluster to handle internal traffic.

- hosts must match the Gateway
- gateways references the Gateway by name

```
(base) [root@master2 ~]# kubectl -n istio-system get gw
NAME            AGE
kiali-gateway   1m18s
```

The route's host points at the upstream cluster, whose name is the same as the Service name. Note that the traffic is not sent through the Service itself: the Service only does discovery, and the traffic goes to the Istio cluster, similar to how ingress-nginx discovers backends:

```
(base) [root@master2 ~]# kubectl -n istio-system get svc kiali
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
kiali   ClusterIP   10.111.90.166   <none>        20001/TCP,9090/TCP   14m
```

As follows:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kiali-virtualservice
  namespace: istio-system
spec:
  hosts:
  - "kiali.linuxea.com"
  gateways:
  - kiali-gateway
  http:
  - match:
    - port: 20001
    route:
    - destination:
        host: kiali
        port:
          number: 20001
---
```

Now the route http.20001 kiali.linuxea.com /* kiali-virtualservice.istio-system shows up:

```
(base) [root@linuxea-master1 ~]# istioctl -n istio-system proxy-config routes $INGS
NAME          DOMAINS             MATCH                  VIRTUAL SERVICE
http.8080     *                   /productpage           bookinfo.java-demo
...
http.20001    kiali.linuxea.com   /*                     kiali-virtualservice.istio-system
              *                   /stats/prometheus*
              *                   /healthz/ready*
```

The vs is created in the namespace:

```
(base) [root@linuxea-master1 package]# kubectl -n istio-system get vs
NAME                   GATEWAYS            HOSTS                   AGE
kiali-virtualservice   ["kiali-gateway"]   ["kiali.linuxea.com"]   26m
```

As is the cluster:

```
(base) [root@linuxea-master1 ~]# istioctl -n istio-system proxy-config cluster $INGS | grep kiali
kiali.istio-system.svc.cluster.local    9090     -   outbound   EDS
kiali.istio-system.svc.cluster.local    20001    -   outbound   EDS
```

To reach the UI from a browser, the istio-ingressgateway Service must also expose port 20001, since that is what we configured. Edit it:

```
(base) [root@linuxea-master1 ~]# kubectl -n istio-system edit svc istio-ingressgateway
....
  - name: http-kiali
    nodePort: 32653
    port: 20001
    protocol: TCP
    targetPort: 20001
...
```

The service now maps 20001 to nodePort 32653, and both ports work:

```
(base) [root@linuxea-master1 ~]# kubectl -n istio-system get svc
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                                                                      AGE
grafana                ClusterIP      10.110.43.38    <none>          3000/TCP                                                                                     14d
istio-egressgateway    ClusterIP      10.97.213.128   <none>          80/TCP,443/TCP                                                                               15d
istio-ingressgateway   LoadBalancer   10.97.154.56    172.16.15.111   15021:32514/TCP,80:30142/TCP,20001:32653/TCP,443:31060/TCP,31400:30785/TCP,15443:32082/TCP   15d
```

This is also reflected in the proxy-config routes shown above.

DestinationRule

A default DestinationRule is generated automatically, so defining one explicitly is only needed when extra functionality is wanted. The kiali Service schedules traffic to the backend pods; whether the traffic between the ingress gateway and the kiali pods is TLS-encrypted, and which scheduling algorithm is used, is decided on the cluster, and that is what the DestinationRule defines. For example, tls: mode: DISABLE turns link encryption off:

```yaml
  trafficPolicy:
    tls:
      mode: DISABLE
```

host: kiali is the key setting and must match the Service name:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: kiali
  namespace: istio-system
spec:
  host: kiali
  trafficPolicy:
    tls:
      mode: DISABLE
```

Applying it creates the DestinationRule:

```
(base) [root@linuxea-master1 ~]# kubectl -n istio-system get dr
NAME    HOST    AGE
kiali   kiali   88s
```

And the cluster now references it:

```
(base) [root@linuxea-master1 ~]# istioctl proxy-config cluster $INGS.istio-system
...
kiali.istio-system.svc.cluster.local    9090     -   outbound   EDS   kiali.istio-system
kiali.istio-system.svc.cluster.local    20001    -   outbound   EDS   kiali.istio-system
...
```

A Gateway does not take effect inside the mesh. istio-system is the namespace of the Istio components, i.e. the control plane:

```
(base) [root@linuxea-master1 ~]# istioctl proxy-config listeners $INGS.istio-system
ADDRESS   PORT    MATCH   DESTINATION
0.0.0.0   8080    ALL     Route: http.8080
0.0.0.0   15021   ALL     Inline Route: /healthz/ready*
0.0.0.0   15090   ALL     Inline Route: /stats/prometheus*
0.0.0.0   20001   ALL     Route: http.20001
```

java-demo is a data-plane namespace; there, port 20001 only appears as an outbound PassthroughCluster, which is generated automatically from the Service. The gateway configuration was created on the gateway, not inside the mesh:

```
(base) [root@linuxea-master1 ~]# istioctl -n java-demo proxy-config listeners marksugar --port 20001
ADDRESS   PORT    MATCH                                  DESTINATION
0.0.0.0   20001   Trans: raw_buffer; App: http/1.1,h2c   Route: 20001
0.0.0.0   20001   ALL                                    PassthroughCluster
```

The port-80 variant of the manifests looks like this:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kiali-virtualservice
  namespace: istio-system
spec:
  hosts:
  - "kiali.linuxea.com"
  gateways:
  - kiali-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: kiali
        port:
          number: 20001
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kiali-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http-kiali
      protocol: HTTP
    hosts:
    - "kiali.linuxea.com"
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: kiali
  namespace: istio-system
spec:
  host: kiali
  trafficPolicy:
    tls:
      mode: DISABLE
---
```

Apply:

```
> kubectl.exe apply -f .\kiali.linuxea.com.yaml
virtualservice.networking.istio.io/kiali-virtualservice created
gateway.networking.istio.io/kiali-gateway created
destinationrule.networking.istio.io/kiali created
```

Now inspect everything with istioctl proxy-config:

```
[root@linuxea-48 ~]# istioctl -n istio-system proxy-config all istio-ingressgateway-55b6cffcbc-4vc94
SERVICE FQDN                                  PORT    SUBSET   DIRECTION   TYPE         DESTINATION RULE
BlackHoleCluster                              -       -        -           STATIC
agent                                         -       -        -           STATIC
...
skywalking-ui.skywalking.svc.cluster.local    80      -        outbound    EDS
tracing.istio-system.svc.cluster.local        80      -        outbound    EDS
tracing.istio-system.svc.cluster.local        16685   -        outbound    EDS
xds-grpc                                      -       -        -           STATIC
zipkin                                        -       -        -           STRICT_DNS
zipkin.istio-system.svc.cluster.local         9411    -        outbound    EDS

ADDRESS   PORT    MATCH   DESTINATION
0.0.0.0   8080    ALL     Route: http.8080
0.0.0.0   15021   ALL     Inline Route: /healthz/ready*
0.0.0.0   15090   ALL     Inline Route: /stats/prometheus*

NAME         DOMAINS             MATCH                  VIRTUAL SERVICE
http.8080    kiali.linuxea.com   /*                     kiali-virtualservice.istio-system
             *                   /stats/prometheus*
             *                   /healthz/ready*

RESOURCE NAME   TYPE         STATUS   VALID CERT   SERIAL NUMBER                               NOT AFTER              NOT BEFORE
default         Cert Chain   ACTIVE   true         244178067234775886684219941410566024258    2022-07-18T06:25:04Z   2022-07-17T06:23:04Z
ROOTCA          CA           ACTIVE   true         102889470196612755194280100451505524786    2032-07-10T16:33:20Z   2022-07-13T16:33:20Z
```

The routes can also be viewed directly:

```
[root@linuxea-48 /usr/local]# istioctl -n java-demo pc route dpment-linuxea-54b8b64c75-b6mqj
NAME     DOMAINS                                  MATCH                 VIRTUAL SERVICE
80       argocd-server.argocd, 10.68.36.89        /*
80       dpment, dpment.java-demo + 1 more...     /*
...
         inbound|80||                             /*
15014    istiod.istio-system, 10.68.7.43          /*
16685    tracing.istio-system, 10.68.193.43       /*
20001    kiali.istio-system, 10.68.203.141        /*
         *                                        /stats/prometheus*
```

By default these are plain clusters, so the VIRTUAL SERVICE column is empty. EDS is Envoy's endpoint discovery service: the backend pods are discovered dynamically and assembled into a cluster. Filtering the endpoints shows we currently have a single pod:

```
[root@linuxea-48 /usr/local]# istioctl -n java-demo pc endpoint dpment-linuxea-54b8b64c75-b6mqj | grep dpment
172.20.1.12:80     HEALTHY    OK    outbound|80||dpment.java-demo.svc.cluster.local
```

If we scale the deployment, the list changes accordingly:

```
[root@linuxea-11 /usr/local]# istioctl -n java-demo pc endpoint dpment-linuxea-54b8b64c75-b6mqj | grep dpment
172.20.1.12:80     HEALTHY    OK    outbound|80||dpment.java-demo.svc.cluster.local
172.20.2.168:80    HEALTHY    OK    outbound|80||dpment.java-demo.svc.cluster.local
```

Now add the hosts entry locally and open kiali.linuxea.com:

```
172.16.100.110 kiali.linuxea.com
```

4.3.3 grafana

For kiali we used port 20001 and had to add that port to the gateway Service before the UI was reachable, and later added an 80-port variant. Here we expose grafana on port 80 directly. Once port 80 is used, hosts can no longer be *. Key points:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80             # port-80 listener
      name: http
      protocol: HTTP
    hosts:
    - "grafana.linuxea.com"  # one or more hostnames; cannot be *
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: grafana-virtualservice
  namespace: istio-system
spec:
  hosts:
  - "grafana.linuxea.com"    # must match the Gateway hosts
  gateways:
  - grafana-gateway          # Gateway name; prefix with its namespace if it differs
  http:
  - match:                   # port 80 no longer identifies the app, so match on the URI
    - uri:
        prefix: /            # any path requested against grafana.linuxea.com
    route:
    - destination:
        host: grafana
        port:
          number: 3000
---
# The DestinationRule is optional here
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: grafana
  namespace: istio-system
spec:
  host: grafana
  trafficPolicy:
    tls:
      mode: DISABLE
---
```

So, first create the Gateway:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "grafana.linuxea.com"
```

Apply:

```
(base) [root@linuxea-master1 ~]# kubectl -n istio-system get gw
NAME              AGE
grafana-gateway   52s
kiali-gateway     24h
```

After the Gateway is created, the listener port shown is 8080: port 80 is used for traffic interception and is translated to 8080 internally, while clients still request port 80:

```
(base) [root@linuxea-master1 ~]# istioctl proxy-config listeners $INGS.istio-system
ADDRESS   PORT    MATCH   DESTINATION
0.0.0.0   8080    ALL     Route: http.8080
0.0.0.0   15021   ALL     Inline Route: /healthz/ready*
0.0.0.0   15090   ALL     Inline Route: /stats/prometheus*
0.0.0.0   20001   ALL     Route: http.20001
```

Define the VirtualService:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: grafana-virtualservice
  namespace: istio-system
spec:
  hosts:
  - "grafana.linuxea.com"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: grafana
        port:
          number: 3000
```

Apply:

```
(base) [root@linuxea-master1 ~]# kubectl -n istio-system get vs
NAME                     GATEWAYS              HOSTS                     AGE
grafana-virtualservice   ["grafana-gateway"]   ["grafana.linuxea.com"]   82s
kiali-virtualservice     ["kiali-gateway"]     ["kiali.linuxea.com"]     24h
```

The routes now carry the DOMAINS and VIRTUAL SERVICE entries:

```
(base) [root@linuxea-master1 opt]# istioctl -n istio-system proxy-config routes $INGS
NAME          DOMAINS               MATCH                  VIRTUAL SERVICE
http.8080     *                     /productpage           bookinfo.java-demo
http.8080     *                     /static*               bookinfo.java-demo
http.8080     *                     /login                 bookinfo.java-demo
http.8080     *                     /logout                bookinfo.java-demo
http.8080     *                     /api/v1/products*      bookinfo.java-demo
http.8080     grafana.linuxea.com   /*                     grafana-virtualservice.istio-system
http.20001    kiali.linuxea.com     /*                     kiali-virtualservice.istio-system
```

And the outbound cluster for grafana exists:

```
(base) [root@linuxea-master1 opt]# istioctl -n istio-system proxy-config cluster $INGS | grep grafana
grafana.istio-system.svc.cluster.local    3000    -   outbound   EDS
grafana.monitoring.svc.cluster.local      3000    -   outbound   EDS
```

After configuring the local hosts file, grafana opens with its default dashboards already in place.

4.4 A quick test

From the java-demo pod, request dpment in a loop:

```
[root@linuxea-48 ~]# kubectl -n java-demo exec -it java-demo-79485b6d57-rd6bm -- /bin/bash
Defaulting container name to java-demo.
Use 'kubectl describe pod/java-demo-79485b6d57-rd6bm -n java-demo' to see all of the containers in this pod.
bash-5.1$ while true;do curl dpment; sleep 0.2;done
linuxea-dpment-linuxea-54b8b64c75-b6mqj.com-127.0.0.1/8 172.20.1.254/24
```

The access to dpment does not go through the Service; it goes through the mesh sidecar. Back in kiali, the traffic shows up under the corresponding namespace.

4.5 Exposing dpment externally

Above we opened the kiali UI to external access; now we do the same for dpment. Besides the Deployment YAML we therefore need the extra Istio objects. The earlier deployment.yaml:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: dpment
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
    version: v0.1
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.1
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.1
    spec:
      containers:
      - name: nginx-a
        image: marksugar/nginx:1.14.a
        ports:
        - name: http
          containerPort: 80
```

The Istio objects for external access:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment-virtualservice
  namespace: java-demo
spec:
  hosts:
  - "kiali.linuxea.com"
  gateways:
  - kiali-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: dpment
        port:
          number: 80
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: dpment-gateway
  namespace: java-demo
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: dpment
      protocol: HTTP
    hosts:
    - "kiali.linuxea.com"
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment-destinationrule
  namespace: java-demo
spec:
  host: kiali
  trafficPolicy:
    tls:
      mode: DISABLE
---
```

After that, it can be reached from a browser.

Key concept: listeners. The Service already discovers the IPs and ports of the backend pods; the listeners are built on top of what the Service discovers.
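To verify the kiali route from outside without touching DNS, the Host header can be supplied explicitly. This is a small sketch that is not in the original post; it assumes a cluster node reachable at a placeholder <node-ip> and the NodePort 30142 that the istio-ingressgateway service mapped to port 80 above (substitute your own values).

```bash
# Request the kiali UI through the ingress gateway NodePort, supplying the Host header
curl -H "Host: kiali.linuxea.com" http://<node-ip>:30142/
# Or through the LoadBalancer/external IP on port 80
curl -H "Host: kiali.linuxea.com" http://172.16.100.110/
```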
November 18, 2022
1,311 reads
0 comments
0 likes
2022-10-18
linuxea: a first look at the istio service mesh (3)
A mesh contains many services, so each sidecar ends up with many egress listeners. They work as a forward proxy, making sure that whenever the pod reaches out to other services the traffic goes through the sidecar. Ingress listeners accept traffic coming into the pod, which is comparatively little: the port of the Service that this pod backs is turned into an ingress listener on the pod's sidecar, and inbound access is completed through that reverse proxy.

For example, the Services in istio-system and in every labelled namespace are discovered and turned into envoy configuration inside the mesh, and traffic crossing the mesh is forwarded by the sidecars. In other words, istio uses Services mainly for service discovery.

Sidecars are driven through VirtualServices. Traffic reaching a sidecar is intercepted and redirected to a single port, and all outbound traffic from the pod is likewise intercepted by iptables and redirected, to the virtual ports 15001 (outbound) and 15006 (inbound). This produces a set of iptables rules; the mechanism is called traffic interception, and the intercepted traffic is handed to envoy acting as a forward or reverse proxy.

The proxy-status command used earlier shows the status of configuration distribution:

PS C:\Users\usert> istioctl.exe proxy-status
NAME                                                   CLUSTER        CDS        LDS        EDS        RDS          ECDS         ISTIOD                      VERSION
details-v1-6d89cf9847-46c4z.java-demo                  Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
istio-egressgateway-65b46d7874-xdjkr.istio-system      Kubernetes     SYNCED     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
istio-ingressgateway-559d4ffc58-7rgft.istio-system     Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
marksugar.java-demo                                    Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
productpage-v1-f44fc594c-fmrf4.java-demo               Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
ratings-v1-6c77b94555-twmls.java-demo                  Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
reviews-v1-765697d479-tbprw.java-demo                  Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
reviews-v2-86855c588b-sm6w2.java-demo                  Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
reviews-v3-6ff967c97f-g6x8b.java-demo                  Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
sleep-557747455f-46jf5.java-demo                       Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1

proxy-config shows the configuration carried by a pod's sidecar, which is much easier to read than a raw config_dump.
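proxy-status also works for a single workload; passing a pod name makes istioctl compare the configuration istiod holds with what that sidecar has actually received. A small sketch using the pod from this environment (substitute your own pod name):

istioctl -n java-demo proxy-status marksugar
# a clean result means the sidecar is fully in sync with istiod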
3.1 Inspecting listeners

Look at the listeners on marksugar:

istioctl -n java-demo proxy-config listeners marksugar

By default the output is the formatted table:

PS C:\Users\usert> istioctl.exe proxy-config listeners marksugar.java-demo
ADDRESS          PORT     MATCH                                   DESTINATION
10.96.0.10       53       ALL                                     Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
0.0.0.0          80       Trans: raw_buffer; App: http/1.1,h2c    Route: 80
0.0.0.0          80       ALL                                     PassthroughCluster
10.104.119.238   80       Trans: raw_buffer; App: http/1.1,h2c    Route: skywalking-ui.skywalking.svc.cluster.local:80
10.104.119.238   80       ALL                                     Cluster: outbound|80||skywalking-ui.skywalking.svc.cluster.local
10.104.18.194    80       Trans: raw_buffer; App: http/1.1,h2c    Route: web-nginx.test.svc.cluster.local:80
10.104.18.194    80       ALL                                     Cluster: outbound|80||web-nginx.test.svc.cluster.local
10.107.112.228   80       Trans: raw_buffer; App: http/1.1,h2c    Route: marksugar.java-demo.svc.cluster.local:80
10.107.112.228   80       ALL                                     Cluster: outbound|80||marksugar.java-demo.svc.cluster.local
10.102.45.140    443      ALL                                     Cluster: outbound|443||ingress-nginx-controller-admission.ingress-nginx.svc.cluster.local
10.107.160.181   443      ALL                                     Cluster: outbound|443||metrics-server.kube-system.svc.cluster.local
10.109.235.93    443      ALL                                     Cluster: outbound|443||prometheus-adapter.monitoring.svc.cluster.local
10.96.0.1        443      ALL                                     Cluster: outbound|443||kubernetes.default.svc.cluster.local
........

Every service in the mesh gets two listeners: one outbound, and one inbound Route.

We can narrow the output with --port:

istioctl -n java-demo proxy-config listeners marksugar --port 80

PS C:\Users\usert> istioctl.exe -n java-demo proxy-config listeners marksugar --port 80
ADDRESS          PORT     MATCH                                   DESTINATION
0.0.0.0          80       Trans: raw_buffer; App: http/1.1,h2c    Route: 80
0.0.0.0          80       ALL                                     PassthroughCluster
10.104.119.238   80       Trans: raw_buffer; App: http/1.1,h2c    Route: skywalking-ui.skywalking.svc.cluster.local:80
10.104.119.238   80       ALL                                     Cluster: outbound|80||skywalking-ui.skywalking.svc.cluster.local
10.104.18.194    80       Trans: raw_buffer; App: http/1.1,h2c    Route: web-nginx.test.svc.cluster.local:80
10.104.18.194    80       ALL                                     Cluster: outbound|80||web-nginx.test.svc.cluster.local
10.107.112.228   80       Trans: raw_buffer; App: http/1.1,h2c    Route: marksugar.java-demo.svc.cluster.local:80
10.107.112.228   80       ALL                                     Cluster: outbound|80||marksugar.java-demo.svc.cluster.local

or additionally filter by IP with --address:

istioctl -n java-demo proxy-config listeners marksugar --port 80 --address 10.104.119.238

PS C:\Users\usert> istioctl.exe -n java-demo proxy-config listeners marksugar --port 80 --address 10.104.119.238
ADDRESS          PORT     MATCH                                   DESTINATION
10.104.119.238   80       Trans: raw_buffer; App: http/1.1,h2c    Route: skywalking-ui.skywalking.svc.cluster.local:80
10.104.119.238   80       ALL                                     Cluster: outbound|80||skywalking-ui.skywalking.svc.cluster.local

For the full detail append -o yaml; see --help for the other options.
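The same filters also accept -o json, which is convenient for piping into jq when you only want a single field; a sketch, assuming jq is installed:

istioctl -n java-demo proxy-config listeners marksugar --port 80 -o json | jq '.[].name'
# prints just the names of the listeners behind the filtered rows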
3.2 Inspecting routes

Once traffic enters a listener, route matching first picks a virtual host and then matches the traffic against the route conditions (MATCH) inside that virtual host.

istioctl -n java-demo proxy-config routes marksugar

Filter on 80:

istioctl -n java-demo proxy-config routes marksugar --name 80

DOMAINS is the matched domain, MATCH the match condition and VIRTUAL SERVICE the route target; rows without a VIRTUAL SERVICE are routed straight from the DOMAINS to the backend endpoints.

PS C:\Users\usert> istioctl.exe -n java-demo proxy-config routes marksugar --name 80
NAME     DOMAINS                                             MATCH     VIRTUAL SERVICE
80       argocd-server.argocd, 10.98.127.60                  /*
80       details.java-demo.svc.cluster.local                 /*        details.java-demo
80       dpment-a, dpment-a.java-demo + 1 more...            /*
80       dpment-b, dpment-b.java-demo + 1 more...            /*
80       dpment, dpment.java-demo + 1 more...                /*        dpment.java-demo
80       dpment, dpment.java-demo + 1 more...                /*        dpment.java-demo
80       istio-egressgateway.istio-system, 10.97.213.128     /*
80       istio-ingressgateway.istio-system, 10.97.154.56     /*
80       kuboard.kube-system, 10.97.104.136                  /*
80       marksugar, marksugar.java-demo + 1 more...          /*
80       productpage.java-demo.svc.cluster.local             /*        productpage.java-demo
80       ratings.java-demo.svc.cluster.local                 /*        ratings.java-demo
80       reviews.java-demo.svc.cluster.local                 /*        reviews.java-demo
80       skywalking-ui.skywalking, 10.104.119.238            /*
80       sleep, sleep.java-demo + 1 more...                  /*
80       tracing.istio-system, 10.104.76.74                  /*
80       web-nginx.test, 10.104.18.194                       /*

See --help for the other options.

3.3 Inspecting clusters

View them with:

istioctl -n java-demo proxy-config cluster marksugar

Filter by port:

istioctl -n java-demo proxy-config cluster marksugar --port 80

inbound marks the inbound listener side, outbound the outbound side:

PS C:\Users\usert> istioctl.exe -n java-demo proxy-config cluster marksugar --port 80
SERVICE FQDN                                            PORT     SUBSET     DIRECTION     TYPE             DESTINATION RULE
                                                        80       -          inbound       ORIGINAL_DST
argocd-server.argocd.svc.cluster.local                  80       -          outbound      EDS
dpment-a.java-demo.svc.cluster.local                    80       -          outbound      EDS
dpment-b.java-demo.svc.cluster.local                    80       -          outbound      EDS
dpment.java-demo.svc.cluster.local                      80       -          outbound      EDS
istio-egressgateway.istio-system.svc.cluster.local      80       -          outbound      EDS
istio-ingressgateway.istio-system.svc.cluster.local     80       -          outbound      EDS
kuboard.kube-system.svc.cluster.local                   80       -          outbound      EDS
marksugar.java-demo.svc.cluster.local                   80       -          outbound      EDS
skywalking-ui.skywalking.svc.cluster.local              80       -          outbound      EDS
sleep.java-demo.svc.cluster.local                       80       -          outbound      EDS
tracing.istio-system.svc.cluster.local                  80       -          outbound      EDS
web-nginx.test.svc.cluster.local                        80       -          outbound      EDS

--direction limits the output to one direction; see --help for the rest:

istioctl.exe -n java-demo proxy-config cluster marksugar --port 80 --direction inbound

3.4 Inspecting endpoints

Clusters also have endpoints; use --port 80 to filter on port 80:

PS C:\Users\usert> istioctl.exe -n java-demo proxy-config endpoints marksugar --port 80
ENDPOINT              STATUS      OUTLIER CHECK     CLUSTER
130.130.0.106:80      HEALTHY     OK                outbound|80||marksugar.java-demo.svc.cluster.local
130.130.0.12:80       HEALTHY     OK                outbound|80||kuboard.kube-system.svc.cluster.local
130.130.0.16:80       HEALTHY     OK                outbound|80||web-nginx.test.svc.cluster.local
130.130.0.17:80       HEALTHY     OK                outbound|80||web-nginx.test.svc.cluster.local
130.130.0.18:80       HEALTHY     OK                outbound|80||web-nginx.test.svc.cluster.local
130.130.1.103:80      HEALTHY     OK                outbound|80||sleep.java-demo.svc.cluster.local
130.130.1.60:80       HEALTHY     OK                outbound|80||web-nginx.test.svc.cluster.local
130.130.1.61:80       HEALTHY     OK                outbound|80||web-nginx.test.svc.cluster.local

To see everything at once, use all:

istioctl -n java-demo proxy-config all marksugar --port 80

PS C:\Users\usert> istioctl.exe -n java-demo proxy-config all marksugar --port 80
SERVICE FQDN                                 PORT     SUBSET     DIRECTION     TYPE             DESTINATION RULE
                                             80       -          inbound       ORIGINAL_DST
argocd-server.argocd.svc.cluster.local       80       -          outbound      EDS
dpment-a.java-demo.svc.cluster.local         80       -          outbound      EDS
dpment-b.java-demo.svc.cluster.local         80       -          outbound      EDS
dpment.java-demo.svc.cluster.local           80       -          outbound      EDS
nginx.test.svc.cluster.local                 80       -          outbound      EDS
...........
10.107.112.228   80    Trans: raw_buffer; App: http/1.1,h2c    Route: marksugar.java-demo.svc.cluster.local:80
10.107.112.228   80    ALL                                     Cluster: outbound|80||marksugar.java-
..............
RESOURCE NAME     TYPE           STATUS     VALID CERT     SERIAL NUMBER                                NOT AFTER                NOT BEFORE
default           Cert Chain     ACTIVE     true           102059676829591632788425320896870277908     2022-07-27T21:03:04Z     2022-07-26T21:01:04Z
ROOTCA            CA             ACTIVE     true           301822650017575269000203210584654904630     2032-07-11T02:27:37Z     2022-07-14T02:27:37Z

3.5 Inspecting the bootstrap

Much of the configuration is loaded later; the bootstrap is the base configuration the proxy starts from:

istioctl -n java-demo proxy-config bootstrap marksugar
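The bootstrap (and any other proxy-config section) can be saved to a file for offline inspection or for diffing two sidecars; a quick sketch:

istioctl -n java-demo proxy-config bootstrap marksugar -o json > marksugar-bootstrap.json
istioctl -n java-demo proxy-config all marksugar -o json > marksugar-all.json
# the files can then be searched or diffed locally, e.g. grep xds-grpc marksugar-bootstrap.json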
October 18, 2022
1,165 reads
0 comments
0 likes
2022-10-17
linuxea: installing and testing istio 1.14.1 (2)
To deploy istio we need the Kubernetes components. The control plane is deployed in the istio-system namespace by default and contains istiod, the ingress-gateway, the egress-gateway and the addons (kiali, prometheus, grafana, jaeger). The data plane is only injected automatically once applications are deployed: it runs as a sidecar in the application's namespace, and the namespace must carry a label before injection is enabled. The egress-gateway is optional, and the addons (kiali, prometheus and so on) are configured manually as needed.

There are three ways to deploy:

istioctl
istio's dedicated tool. Its command-line options fully support the IstioOperator API, including generating the CRDs from CRs, and it ships with built-in profiles to choose from:
default: enables the relevant components according to the IstioOperator API defaults; suitable for production
demo: deploys more components to demonstrate istio's features
minimal: like default, but deploys only the control plane
remote: used for multi-cluster setups that share a control plane
empty: deploys nothing; usually the base for a custom profile
preview: contains preview features, for exploring new functionality with no guarantee of stability, security or performance

istio Operator
A dedicated controller for istio's CRs that runs as a pod in Kubernetes and automatically maintains the resources defined by the CR. Define the CR configuration as needed and submit it to the Kubernetes API, and the operator performs the corresponding operations. This still builds on the IstioOperator API; unlike istioctl, you must supply your own configuration to deploy.

helm
Installs and configures istio from the official charts; currently in alpha.

The resources belong to the install.istio.io/v1alpha1 group; defining resource objects of that type is all that is required.

istioctl is the most common approach, so that is what we use here.

1. Download the package

istio is deployed much like other software, except that it comes with the istioctl management tool; helm and the istio operator are also supported. Follow "Download Istio" in the official docs to get istioctl, or download it from GitHub:

$ curl -L https://istio.io/downloadIstio | sh -

I download straight from GitHub:

wget https://github.com/istio/istio/releases/download/1.14.1/istioctl-1.14.1-linux-amd64.tar.gz
wget https://github.com/istio/istio/releases/download/1.14.1/istio-1.14.1-linux-arm64.tar.gz
tar xf istio-1.14.1-linux-arm64.tar.gz -C /usr/local
cd /usr/local
ln -s istio-1.14.1 istio
cd ~/istio
tar xf istioctl-1.14.1-linux-amd64.tar.gz -C /usr/local/istio/bin

(base) [root@master1 istio]# istioctl version
no running Istio pods in "istio-system"
1.14.1

The directory layout looks like this:

(base) [root@master1 istio]# ll
total 28
drwxr-x---  2 root root    22 Jul 13 17:13 bin
-rw-r--r--  1 root root 11348 Jun  8 10:11 LICENSE
drwxr-xr-x  5 root root    52 Jun  8 10:11 manifests
-rw-r-----  1 root root   796 Jun  8 10:11 manifest.yaml
-rw-r--r--  1 root root  6016 Jun  8 10:11 README.md
drwxr-xr-x 23 root root  4096 Jun  8 10:11 samples   # bookinfo directory
drwxr-xr-x  3 root root    57 Jun  8 10:11 tools

Windows users download the Windows package instead.

2. Installation and configuration

After installation, traffic management relies on several out-of-the-box CRDs in the networking group: VirtualService, DestinationRule, Gateway, ServiceEntry, EnvoyFilter and so on.

Once istio is deployed, make sure that every service can reach all the other services in the mesh. istio does not know in advance which pods talk to which, so:
1. the configuration pushed to a sidecar must let it find every other service registered in the mesh;
2. istio takes over the outbound traffic.

In Kubernetes a Service is normally rendered by the kernel into iptables or ipvs rules; it provides registration and discovery for every service in the cluster. When A accesses B, the request usually reaches B's Service first and then B's pod. From what point is A's request recognised as being addressed to B? kube-proxy on every node reads the Service definitions and converts them into ipvs or iptables rules on that node; every node runs kube-proxy and carries its own copy of the definitions. So when A accesses B, the traffic is already recognised as a request for B's service in the kernel of the node A runs on, and is then scheduled to a B pod on some node. A Service effectively configures every node as a load balancer for that service, but each node only balances the client traffic that enters through it, as a layer-4 proxy. With ingress, traffic that passes the ingress gateway is proxied to the pods without going through the Service at all; the Service is only used to discover the pods.

On istio, the Services defined in Kubernetes are turned into istio services. istio gives every pod its own sidecar as a proxy, a bit like giving each pod its own ingress gateway, while how many pods back an external service is still discovered through the Kubernetes Service; every service is in effect configured as an ingress-gateway-like resource. That is the outbound side; inbound traffic needs no complicated handling. istiod reads the service configuration of every node through Kubernetes and pushes it to every sidecar, not just to a subset: if the cluster holds the configuration for A, B, C and D but the business only needs A to reach D, by default istiod cannot push only the A-to-D part.

2.1 install

install (or apply) applies the default profile unless told otherwise. istioctl profile list shows the profiles:

(base) [root@master1 istio]# istioctl profile list
Istio configuration profiles:
    default    # production
    demo       # demonstration
    empty      # testing
    external
    minimal
    openshift
    preview
    remote

istioctl install --set profile=demo -y installs the demo profile. istioctl profile dump demo prints the demo profile as YAML (use default to see the default profile).

profile: a built-in configuration for the target environment, used as the base resource configuration.

We can pull the images in advance:

docker pull docker.io/istio/pilot:1.14.1
docker pull docker.io/istio/proxyv2:1.14.1

Options:
-y: short for --skip-confirmation
--set: enables or disables individual settings; everything that can be set is visible in istioctl profile dump demo. When there are too many settings for --set, write them as a YAML file and pass it with -f. To add new settings after installation just run install again; it behaves like apply.
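As a sketch of the -f form, the same intent can be written as an IstioOperator resource; the file name and the meshConfig/components values below are only illustrations of options you might set, not something this installation requires:

# demo-istio.yaml (hypothetical file name)
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  meshConfig:
    accessLogFile: /dev/stdout   # example option: send envoy access logs to stdout
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true              # keep the optional egress gateway

istioctl install -f demo-istio.yaml -y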
We go ahead and install:

[root@linuxea-48 ~]# istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Egress gateways installed
✔ Installation complete
Making this installation the default for injection and validation.
Thank you for installing Istio 1.14.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/yEtCbt45FZ3VoDT5A

When it finishes, istioctl x precheck checks that the installation is healthy:

PS C:\Users\usert> istioctl.exe x precheck
✔ No issues found when checking the cluster. Istio is safe to install or upgrade!
  To get started, check out https://istio.io/latest/docs/setup/getting-started/

istioctl verify-install should report every item as successful. The following objects are created in the istio-system namespace:

[root@linuxea-11 ~]# kubectl -n istio-system get all
NAME                                    READY   STATUS    RESTARTS   AGE
istio-egressgateway-65b46d7874-xdjkr    1/1     Running   0          61s
istio-ingressgateway-559d4ffc58-7rgft   1/1     Running   0          61s
istiod-8689fcd796-mqd8n                 1/1     Running   0          87s

(base) [root@master1 local]# kubectl get all -n istio-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/istio-egressgateway-65b46d7874-xdjkr    1/1     Running   0          78s
pod/istio-ingressgateway-559d4ffc58-7rgft   1/1     Running   0          78s
pod/istiod-8689fcd796-mqd8n                 1/1     Running   0          104s

NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE
service/istio-egressgateway    ClusterIP      10.97.213.128   <none>        80/TCP,443/TCP                                                               78s
service/istio-ingressgateway   LoadBalancer   10.97.154.56    <pending>     15021:32514/TCP,80:30142/TCP,443:31060/TCP,31400:30785/TCP,15443:32082/TCP   78s
service/istiod                 ClusterIP      10.98.150.70    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                        104s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istio-egressgateway    1/1     1            1           78s
deployment.apps/istio-ingressgateway   1/1     1            1           78s
deployment.apps/istiod                 1/1     1            1           104s

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/istio-egressgateway-65b46d7874    1         1         1       78s
replicaset.apps/istio-ingressgateway-559d4ffc58   1         1         1       78s
replicaset.apps/istiod-8689fcd796                 1         1         1       104s

Then install the addons from samples/addons:

[root@linuxea-48 /usr/local/istio/samples/addons]# kubectl apply -f ./
serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created

The result:

[root@linuxea-48 /usr/local/istio/samples/addons]# kubectl -n istio-system get pod
NAME                                    READY   STATUS    RESTARTS      AGE
grafana-67f5ccd9d7-psrsn                1/1     Running   0             9m12s
istio-egressgateway-7fcb98978c-8t685    1/1     Running   1 (30m ago)   19h
istio-ingressgateway-55b6cffcbc-9rn99   1/1     Running   1 (30m ago)   19h
istiod-56d9c5557-tffdv                  1/1     Running   1 (30m ago)   19h
jaeger-78cb4f7d4b-btn7h                 1/1     Running   0             9m11s
kiali-6b455fd9f9-5cqjx                  1/1     Running   0             9m11s   # the UI client
prometheus-7cc96d969f-l2rkt             2/2     Running   0             9m11s
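With the addons running, istioctl can also open a temporary local tunnel to the UIs instead of exposing them through a gateway; a quick sketch:

istioctl dashboard kiali     # port-forwards to the kiali service and opens the UI
istioctl dashboard grafana   # the same for grafana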
2.2 Generating the configuration

Apart from install, the configuration can also be generated as Kubernetes YAML manifests:

istioctl manifest generate --set profile=demo

Saving that output and applying it is equivalent to using istioctl install:

istioctl manifest generate --set profile=demo | kubectl apply -f -

2.3 Labelling the namespace

Next, give every namespace that should use istio the label istio-injection=enabled. Once a namespace carries this label, pods created in it automatically get a sidecar injected. Every service also needs a Service object, plus the labels that later configuration relies on:

app:
version:

For example, label the java-demo namespace with istio-injection=enabled:

[root@linuxea-48 /usr/local/istio/samples/addons]# kubectl label namespace java-demo istio-injection=enabled
namespace/java-demo labeled

As shown below:

[root@linuxea-48 /usr/local/istio/samples/addons]# kubectl get ns --show-labels
NAME              STATUS   AGE     LABELS
argocd            Active   20d     kubernetes.io/metadata.name=argocd
default           Active   32d     kubernetes.io/metadata.name=default
ingress-nginx     Active   22d     app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,kubernetes.io/metadata.name=ingress-nginx
istio-system      Active   19h     kubernetes.io/metadata.name=istio-system
java-demo         Active   6d21h   istio-injection=enabled,kubernetes.io/metadata.name=java-demo
kube-node-lease   Active   32d     kubernetes.io/metadata.name=kube-node-lease
kube-public       Active   32d     kubernetes.io/metadata.name=kube-public
kube-system       Active   32d     kubernetes.io/metadata.name=kube-system
marksugar         Active   21d     kubernetes.io/metadata.name=marksugar
monitoring        Active   31d     kubernetes.io/metadata.name=monitoring
nacos             Active   18d     kubernetes.io/metadata.name=nacos
skywalking        Active   17d     kubernetes.io/metadata.name=skywalking

Afterwards, when the pods start (at least one Service is required), java-demo looks like this:

(base) [root@master1 sleep]# kubectl -n java-demo get pod
NAME                        READY   STATUS    RESTARTS   AGE
java-demo-76b97fc95-fkmjs   2/2     Running   0          14m
java-demo-76b97fc95-gw9r6   2/2     Running   0          14m
java-demo-76b97fc95-ngkb9   2/2     Running   0          14m
java-demo-76b97fc95-pt2t5   2/2     Running   0          14m
java-demo-76b97fc95-znqrm   2/2     Running   0          14m

Each pod now holds two containers; the extra one is istio-proxy. A pod can also be created directly on the command line:

> kubectl.exe -n java-demo run marksugar --image=registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 --restart=Never
pod/marksugar created
> kubectl.exe -n java-demo get pod
NAME        READY   STATUS    RESTARTS   AGE
marksugar   2/2     Running   0          9s
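Note that pods created before the label was added keep running without a sidecar; the label only affects pods created afterwards. A minimal sketch to pick injection up on an existing deployment (the deployment name is the one from this environment, substitute your own):

kubectl -n java-demo rollout restart deployment java-demo
kubectl -n java-demo get pod    # the recreated pods should now show 2/2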
The injected pieces can be inspected on the pod:

# kubectl.exe -n java-demo get pod marksugar -o yaml
.......
  initContainers:
  - args:
    - istio-iptables
    - -p
    - "15001"
    - -z
    - "15006"
    - -u
    - "1337"
    - -m
    - REDIRECT
    - -i
    - '*'
    - -x
    - ""
    - -b
    - '*'
    - -d
    - 15090,15021,15020
    image: docker.io/istio/proxyv2:1.14.1
    imagePullPolicy: IfNotPresent
    name: istio-init
    resources:
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: 10m
        memory: 40Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: false
.......

A curl -I against the pod IP shows the envoy headers:

x-envoy-upstream-service-time: 0
x-envoy-decorator-operation: :0/*

as in:

# curl -I 130.130.0.106
HTTP/1.1 200 OK
server: istio-envoy
date: Tue, 26 Jul 2022 09:24:01 GMT
content-type: text/html
content-length: 70
last-modified: Tue, 26 Jul 2022 09:04:16 GMT
etag: "62dfae10-46"
accept-ranges: bytes
x-envoy-upstream-service-time: 0
x-envoy-decorator-operation: :0/*

The listening sockets can then be queried with curl localhost:15000/listeners. Every Service that has been discovered is converted into a listener here and used for egress:

> kubectl.exe -n java-demo exec -it marksugar -- curl localhost:15000/listeners
0d333f68-f03b-44e1-b38d-6ed612e71f6c::0.0.0.0:15090
28f19ea0-a4d5-4935-a887-a906f0ea410b::0.0.0.0:15021
130.130.1.37_9094::130.130.1.37:9094
10.109.235.93_443::10.109.235.93:443
130.130.1.41_8443::130.130.1.41:8443
10.98.127.60_443::10.98.127.60:443
10.97.213.128_443::10.97.213.128:443
130.130.1.38_9094::130.130.1.38:9094
10.107.145.213_8443::10.107.145.213:8443
10.98.150.70_443::10.98.150.70:443
10.97.154.56_31400::10.97.154.56:31400
172.16.15.138_9100::172.16.15.138:9100
10.97.154.56_15443::10.97.154.56:15443
172.16.15.137_9100::172.16.15.137:9100
10.96.0.1_443::10.96.0.1:443
10.107.160.181_443::10.107.160.181:443
172.16.15.137_10250::172.16.15.137:10250
130.130.1.36_9094::130.130.1.36:9094
130.130.1.35_8443::130.130.1.35:8443
10.97.154.56_443::10.97.154.56:443
130.130.1.41_9443::130.130.1.41:9443
10.102.45.140_443::10.102.45.140:443
10.104.213.128_10259::10.104.213.128:10259
10.100.230.94_6379::10.100.230.94:6379
10.98.150.70_15012::10.98.150.70:15012
10.96.0.10_53::10.96.0.10:53
172.16.15.138_10250::172.16.15.138:10250
10.102.80.102_10257::10.102.80.102:10257
10.104.18.194_80::10.104.18.194:80
130.130.1.37_9093::130.130.1.37:9093
10.96.132.151_8080::10.96.132.151:8080
10.110.43.38_3000::10.110.43.38:3000
0.0.0.0_10255::0.0.0.0:10255
130.130.1.38_9093::130.130.1.38:9093
10.104.119.238_80::10.104.119.238:80
0.0.0.0_9090::0.0.0.0:9090
0.0.0.0_5557::0.0.0.0:5557
10.109.18.63_8083::10.109.18.63:8083
10.99.185.170_8080::10.99.185.170:8080
130.130.1.42_9090::130.130.1.42:9090
0.0.0.0_15014::0.0.0.0:15014
0.0.0.0_15010::0.0.0.0:15010
10.96.171.119_8080::10.96.171.119:8080
172.16.15.138_4194::172.16.15.138:4194
10.103.151.226_9090::10.103.151.226:9090
10.105.132.58_8082::10.105.132.58:8082
10.111.33.218_14250::10.111.33.218:14250
10.107.145.213_8302::10.107.145.213:8302
0.0.0.0_9080::0.0.0.0:9080
0.0.0.0_8085::0.0.0.0:8085
10.96.59.20_8084::10.96.59.20:8084
0.0.0.0_11800::0.0.0.0:11800
10.107.145.213_8600::10.107.145.213:8600
10.96.0.10_9153::10.96.0.10:9153
10.106.152.2_8080::10.106.152.2:8080
10.96.59.20_8081::10.96.59.20:8081
0.0.0.0_8500::0.0.0.0:8500
10.107.145.213_8400::10.107.145.213:8400
0.0.0.0_80::0.0.0.0:80
0.0.0.0_9411::0.0.0.0:9411
10.99.155.134_5558::10.99.155.134:5558
0.0.0.0_3000::0.0.0.0:3000
0.0.0.0_5556::0.0.0.0:5556
10.96.124.32_12800::10.96.124.32:12800
10.97.154.56_15021::10.97.154.56:15021
10.103.47.163_8080::10.103.47.163:8080
130.130.1.36_9093::130.130.1.36:9093
10.107.145.213_8301::10.107.145.213:8301
0.0.0.0_8060::0.0.0.0:8060
0.0.0.0_16685::0.0.0.0:16685
10.96.132.151_7000::10.96.132.151:7000
10.96.140.72_9001::10.96.140.72:9001
0.0.0.0_20001::0.0.0.0:20001
10.107.145.213_8300::10.107.145.213:8300
130.130.1.44_9090::130.130.1.44:9090
10.111.33.218_14268::10.111.33.218:14268
10.97.225.212_9093::10.97.225.212:9093
172.16.15.137_4194::172.16.15.137:4194
10.103.187.135_9088::10.103.187.135:9088
virtualOutbound::0.0.0.0:15001
virtualInbound::0.0.0.0:15006
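The 15000 admin interface serves more than /listeners; a couple of other read-only endpoints that help with a quick check (a sketch, using the same exec form as above):

kubectl -n java-demo exec -it marksugar -- curl -s localhost:15000/server_info | head    # envoy version and state
kubectl -n java-demo exec -it marksugar -- curl -s localhost:15000/listeners | grep _80  # only the port-80 sockets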
The clusters can be viewed the same way:

kubectl -n java-demo exec -it marksugar -- curl localhost:15000/clusters

There are many outbound entries; istiod automatically discovers every Service in the Kubernetes cluster, converts them into envoy configuration and pushes it down:

......
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::health_flags::healthy
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::weight::1
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::region::
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::zone::
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::sub_zone::
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::canary::false
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::priority::0
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::success_rate::-1
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::local_origin_success_rate::-1
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.1.112:9080::cx_active::0
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.1.112:9080::cx_connect_fail::0
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.1.112:9080::cx_total::0
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.1.112:9080::rq_active::0
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.1.112:9080::rq_error::0
.....

The push status of this configuration is shown by istioctl proxy-status, e.g. for marksugar.java-demo:

# istioctl proxy-status
NAME                                                   CLUSTER        CDS        LDS        EDS        RDS          ECDS         ISTIOD                      VERSION
details-v1-6d89cf9847-46c4z.java-demo                  Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
istio-egressgateway-65b46d7874-xdjkr.istio-system      Kubernetes     SYNCED     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
istio-ingressgateway-559d4ffc58-7rgft.istio-system     Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
marksugar.java-demo                                    Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1

The status can also be checked with istioctl ps (install socat on all nodes first):

(base) [root@master2 kube]# istioctl ps
NAME                                                  CLUSTER        CDS        LDS        EDS        RDS          ECDS         ISTIOD                      VERSION
istio-egressgateway-65b46d7874-xdjkr.istio-system     Kubernetes     SYNCED     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
istio-ingressgateway-559d4ffc58-7rgft.istio-system    Kubernetes     SYNCED     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
java-demo-76b97fc95-fkmjs.java-demo                   Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
java-demo-76b97fc95-gw9r6.java-demo                   Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
java-demo-76b97fc95-ngkb9.java-demo                   Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
java-demo-76b97fc95-pt2t5.java-demo                   Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
java-demo-76b97fc95-znqrm.java-demo                   Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
2.4 Testing

To make the demonstration clearer we create a Service for the pod above: give marksugar a label and tie the Service to it.

kubectl -n java-demo label pods marksugar app=marksugar
kubectl -n java-demo create service clusterip marksugar --tcp=80:80

(base) [root@master1 ~]# kubectl -n java-demo label pods marksugar app=marksugar
pod/marksugar labeled
(base) [root@master1 ~]# kubectl -n java-demo create service clusterip marksugar --tcp=80:80
service/marksugar created

As shown:

(base) [root@master1 ~]# kubectl -n java-demo get svc
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
marksugar   ClusterIP   10.107.112.228   <none>        80/TCP    58s

and it is bound to the pod:

(base) [root@master1 ~]# kubectl -n java-demo describe svc marksugar
Name:              marksugar
Namespace:         java-demo
Labels:            app=marksugar
Annotations:       <none>
Selector:          app=marksugar
Type:              ClusterIP
IP:                10.107.112.228
Port:              80-80  80/TCP
TargetPort:        80/TCP
Endpoints:         130.130.0.106:80
Session Affinity:  None
Events:            <none>

As soon as this Service is created it is discovered by istiod, which automatically generates the configuration and pushes it to every sidecar.

listeners

The listeners gain one more entry, 10.107.112.228_80::10.107.112.228:80:

(base) [root@master1 ~]# kubectl -n java-demo exec -it marksugar -- curl localhost:15000/listeners
......
10.103.187.135_9088::10.103.187.135:9088
virtualOutbound::0.0.0.0:15001
virtualInbound::0.0.0.0:15006
10.107.112.228_80::10.107.112.228:80

cluster

And in the clusters:

(base) [root@master1 ~]# kubectl -n java-demo exec -it marksugar -- /bin/sh
/ # curl localhost:15000/clusters|less
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::health_flags::healthy
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::weight::1
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::region::
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::zone::
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::sub_zone::
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::canary::false
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::priority::0
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::success_rate::-1
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::local_origin_success_rate::-1
outbound|80||kuboard.kube-system.svc.cluster.local::observability_name::outbound|80||kuboard.kube-system.svc.cluster.local
.......

Other routing configuration can be viewed through / # curl localhost:15000/config_dump.
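config_dump is verbose; istioctl can extract just the piece generated for the new Service. A sketch that shows only the cluster built for marksugar:

istioctl -n java-demo proxy-config cluster marksugar --fqdn marksugar.java-demo.svc.cluster.local -o yaml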
October 17, 2022
1,139 reads
0 comments
0 likes
2022-10-15
linuxea: microservice governance and istio (1)
We know that once an application grows past a certain size it inevitably goes distributed, and microservices are the most extreme expression of a distributed design. Talking about microservices means talking about microservice governance, so let us first look at how governance models for distributed architectures have evolved. Early on, governance was embedded in the business code itself, with technologies such as SOA and ESB: developers wrote service discovery, load balancing, circuit breaking, fault tolerance and dynamic routing into the program, which meant every developer had to maintain those features inside their own module. In the SDK era the typical technologies became Spring Cloud and Dubbo: the SDK supplies the framework, and the program only has to pull in the library and implement the business logic. But once an SDK is used, every service is tied to a language the framework supports, and a vulnerability in the framework means recompiling and redeploying every program. Today service governance has moved to the service mesh. To remove the coupling introduced by SDKs, the SDK is split out into a separate program that runs as a proxy; services talk to each other over a common protocol such as gRPC, HTTP or a RESTful API, and the proxy handles the rest. Developers only write their business logic and plug into the distributed framework. These three stages are called: embedded in the application, SDK, and proxy.

Foreword

istio is exactly such a component. Applications simply run on top of istio, istio adds a proxy that performs the service-proxy duties, and developers no longer need to care about the governance relationships. That is the value of a service mesh. Distributed programs face a network reality that requires retransmission, retries, timeouts and reconnection; if the program cannot handle these itself, building a distributed application becomes error-prone. The network-management functions for service-to-service cooperation are therefore factored out; the solutions in the various microservice frameworks are usually paired with infrastructure components such as zookeeper or eureka. Kubernetes is the mechanism that manages the application lifecycle, and on top of it the service mesh is provided to applications as infrastructure. This decouples the concern from development work, and the application layer only has to care about the application itself. In Kubernetes, istio is delivered as an extension: it acts as a set of CRD controllers, works through resource types, and only requires writing the corresponding objects with the declarative API and applying them to the cluster. Knowing that much is enough to manage istio-governed microservices with the Kubernetes system and tools you already have. For service meshes, the CNCF defined the UDPA and SMI standards for the data plane and control plane. Before istio appeared, Linkerd showed up in September 2016 and later joined the CNCF; Istio released 0.1 in May 2017, Envoy then joined the CNCF, istio 1.0 came in July 2018, 1.1 in 2019 and 1.5 in 2020, and the istio components changed along the way.

Service mesh

The term service mesh comes from the article "WHAT'S A SERVICE MESH? AND WHY DO I NEED ONE?" by Buoyant's CEO William Morgan. It refers to infrastructure dedicated to handling inter-service communication, responsible for reliably delivering requests across the complex topology of a modern cloud-native application. Besides the business logic, every microservice has to implement the basic networking features the old monolithic model used for communication, and even the extra networking features expected between distributed applications: circuit breaking, rate limiting, tracing, metrics collection, service discovery, load balancing and so on. The implementation model went through three generations: embedded in the application, SDK, and proxy (sidecar). istio is a service mesh, and a service mesh splits into two parts, the data plane and the control plane.

Control plane: the control plane generates and deploys the configuration that drives the data plane's behaviour. It usually includes an API, a command-line interface and a graphical UI for managing the application. Building a service proxy for every microservice instance means, in Kubernetes, two containers in the same pod: the proxy is called the sidecar, and the business container is the main container. The control plane holds the information that can be pushed to each proxy, lets users modify it, and pushes the configuration into the mesh after a change. When a microservice is updated it is deployed through the underlying Kubernetes, but the traffic during that process, and the encrypted communication between services, require the control plane to push configuration down to the proxy, i.e. the sidecar. This proxy is a customised envoy.

Data plane: the part of the mesh application that manages network traffic between instances is the data plane, and what backs it is the sidecar envoy. In the past a deployment rolled from image A to image B. Now istio watches how many pods are deployed and automatically injects a sidecar into each of them; we keep using deployments exactly as before. Once injection is done, pod-to-pod communication goes through the sidecars: whatever the program inside the pod listens on, the sidecar intercepts the traffic in and out of the pod.

The istio architecture

istio's functionality falls into four areas: traffic management, security, observability and the mesh itself. Its main components were pilot, citadel and mixer, but this differs between 1.0, 1.1 and 1.5. In 1.0 mixer supported telemetry through plugin extensions and third parties; in 1.1 this changed to adapters doing telemetry over network calls, and galley was added to adapt to other platforms. The price was performance: the proxy had to report metrics to mixer every time, which burned a lot of resources. After istio 1.5 the distributed mixer was dropped, telemetry was handed to envoy itself, and pilot, citadel, galley and the sidecar injector were merged into the single binary istiod. istiod acts as the control plane and distributes configuration to the sidecar proxies and gateways; it provides intelligent load balancing for mesh-enabled applications, and that traffic bypasses kube-proxy.

A Service can be rendered into ipvs or iptables rules, turning every node into a load balancer, because kube-proxy watches Service changes on the apiserver and reflects them on the node in real time. A Service gives different instances of the same application one fixed access point. Every Service in the cluster has pods underneath it, and these are precisely the microservices in istio, because istiod reads every Service from the apiserver to determine how many microservices exist in the mesh. Beyond that, it converts a Service's pods into sidecar listeners in the mesh, converts the client side into clusters, and generates default routes (every URL that matches the listener is routed to that cluster). All of this is converted automatically once the mesh is deployed. pilot was a standalone component up to 1.4 and was merged afterwards; it discovers Services from the apiserver, configures the sidecars, and provides the enhancement features such as A/B testing, failover and canary releases. There are also other certificate-management tools, such as citadel and vault.

Extending envoy used to require recompiling it; that approach keeps performance but makes envoy complicated, so envoy now offers a wasm-based interface and can load such modules dynamically.

ingress/egress gateway

Most of the time the bulk of the traffic is service-to-service, but there is always an outer layer that takes in external traffic. Traffic from outside the cluster is called north-south traffic, and traffic inside the mesh east-west traffic. The ingress gateway manages inbound, i.e. north-south, traffic: an external client reaching an edge service inside the cluster enters the mesh through the ingress gateway. A client inside the mesh reaching out of the cluster goes through the egress gateway, the outbound gateway; it is only needed if outbound traffic should be governed in one place, so it is optional, whereas the ingress gateway is required. As the diagram shows, this ingress gateway is not an ingress controller and cannot be configured like a Kubernetes ingress resource; it is a gateway dedicated to the mesh. Which application the traffic arriving at an entry point is sent to is configured on the ingress gateway. It runs standalone, traffic inside the mesh does not pass through it, and its configuration therefore differs from the per-sidecar configuration in the mesh; inbound and outbound traffic are defined through dedicated CRs. Traffic leaving the mesh goes through egress only when you choose to configure it; it is not mandatory, and its configuration is different again, defined through a Service
Entry (ServiceEntry). Inbound traffic, in-mesh traffic and outbound traffic are each governed separately. In most cases the ingress and egress configuration has nothing to do with the in-mesh configuration and is only written when needed; ingress and egress have always been independent, and their configuration policies and interfaces differ.

Once istio is deployed, the istio sidecar injector injects sidecars automatically, relying on an admission controller; injection can also be done manually with the istioctl client.

Automatic injection: the istio sidecar injector completes the injection (via a mutating admission webhook). The Kubernetes webhook makes injection happen at pod creation, in the mutation phase of the admission controllers: based on the injection configuration, when kube-apiserver intercepts a pod creation request it calls the injection service, and the sidecar container description generated by istio-sidecar-injector is inserted into the pod manifest.

Manual injection: performed with the istioctl tool.

The data plane is what actually carries out north-south and east-west traffic governance; the control plane never touches client traffic. It only injects a sidecar proxy next to the business code and acts as the control centre that pushes down traffic-management configuration.

kiali

istio has the companion add-on kiali for the web UI. kiali itself only draws fairly simple graphs and relies on prometheus, skywalking, grafana and jaeger to present the rest of the data; these have to be deployed separately. It provides:

dynamic drawing of the service topology
distributed tracing
metrics collection
configuration validation
health checks and hints

kiali is written in Go and consists of the kiali front-end and back-end components, providing the UI and the backend application respectively. It also depends on services and components outside Kubernetes, such as prometheus, the cluster API, jaeger and grafana, and brings them together into one unified view.

CR

To make all of this easy to use, the functionality is abstracted into CRs. All istio rules and control policies are implemented as Kubernetes CRDs, and the policy definitions are stored in etcd. The CRs provided are:

networking: traffic management, 8 CRs including VirtualService, DestinationRule, Gateway, ServiceEntry, Sidecar, EnvoyFilter, WorkloadEntry and WorkloadGroup
security: network security, 3 CRs: AuthorizationPolicy, PeerAuthentication and RequestAuthentication
telemetry: telemetry, currently only Telemetry
extensions: the extension mechanism, currently only WasmPlugin
istioOperator: currently only IstioOperator

This is not everything, though; many envoy features still have to be defined by the user.
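As a taste of what these CRs look like, here is a minimal sketch of one of the security CRs, a mesh-wide PeerAuthentication that enforces mutual TLS; it is only an illustration and nothing later in this series depends on it:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying it in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT            # only accept mTLS traffic between sidecars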
October 15, 2022
1,237 reads
0 comments
0 likes