2022-04-20
linuxea: URL rewrite and canary with ingress-nginx
The ingress-nginx docs cover a good deal more configuration than the basics, including URL rewriting and canary releases; even though the canary support is not perfect, ingress-nginx remains one of the most popular ingress controllers. In the previous post I covered the two common ways of deploying ingress-nginx, using the latest release, while some very old versions are still in use out there. Given that, I decided to work through ingress-nginx again from the start, which is how this post came about; more may follow.

An Ingress object by itself is only a declaration of where traffic comes from and where it should go; it does not forward anything. The real leverage is in the annotations and the ingress class, which let you do more, for example switching to Traefik or plugging in a custom controller.

To create a Deployment whose pods the Ingress can reach, we need at least matching labels:

```yaml
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.1.32
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.1.32
```

The Service then selects the same labels:

```yaml
  selector:
    app: linuxea_app
    version: v0.1.32
```

Finally we configure an Ingress. `name: myapp` must be the name of that Service, and both must be in the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  namespace: default
spec:
  tls:
  - hosts:
    - linuxea.test.com
    secretName: nginx-ingress-secret
  ingressClassName: nginx
  rules:
  - host: linuxea.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: linuxea_app
    version: v0.1.32
  ports:
  - name: http
    targetPort: 80
    port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea
  namespace: default
spec:
  replicas: 7
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.1.32
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.1.32
    spec:
      containers:
      - name: nginx-a
        image: marksugar/nginx:1.14.b
        ports:
        - name: http
          containerPort: 80
```

That completes the simplest possible ingress-nginx hostname setup; you could also add a custom 404 page and the like.

The overall flow is roughly this: a request reaches the LB and is dispatched to the ingress controller. The controller continuously watches Ingress objects; when a request matches a rule, it forwards it to one of the pods, while the Service handles discovery and tracking of those pods. In other words, ingress-nginx does not pass traffic through the Service: it only uses the Service to learn the pod endpoints, the Service keeps watching the backend pods' state, and requests travel from the ingress directly to a pod.

URL rewriting

Beyond the above we can tune behaviour with annotations; the most commonly used is the rewrite feature. In practice the URLs being visited may look like:

```
linuxea.test.com/qsv1
linuxea.test.com/qsv2
linuxea.test.com/qsv3
```

To serve paths like these, either the frontend code has to support them, or a rewrite must forward them, using the annotations below:

| Name | Description | Values |
| --- | --- | --- |
| nginx.ingress.kubernetes.io/rewrite-target | Target URI where the traffic must be redirected | string |
| nginx.ingress.kubernetes.io/ssl-redirect | Indicates if the location section is only accessible via SSL (defaults to True when Ingress contains a Certificate) | bool |
| nginx.ingress.kubernetes.io/force-ssl-redirect | Forces the redirection to HTTPS even if the Ingress is not TLS Enabled | bool |
| nginx.ingress.kubernetes.io/app-root | Defines the Application Root that the Controller must redirect if it's in / context | string |
| nginx.ingress.kubernetes.io/use-regex | Indicates if the paths defined on an Ingress use regular expressions | bool |

Captured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.

nginx.ingress.kubernetes.io/rewrite-target forwards the request to a target. For example, to route requests such as /app/modiy, we can use the regular expression /app(/|$)(.*) and set rewrite-target to the $2 placeholder, so the /app prefix is stripped before the request reaches the backend:

```yaml
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
...
      paths:
      - path: /app(/|$)(.*)
```
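Pieced together, a complete rewrite Ingress could look like the sketch below. The host and the myapp backend reuse the earlier example, the object name rewrite-demo is a made-up placeholder, and pathType: ImplementationSpecific follows the upstream rewrite example:

```yaml
# A sketch assembling the fragments above; "rewrite-demo" is a
# hypothetical name, host and backend reuse the myapp example.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-demo
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"   # make the regex path explicit
    nginx.ingress.kubernetes.io/rewrite-target: /$2 # $2 = everything after /app/
spec:
  ingressClassName: nginx
  rules:
  - host: linuxea.test.com
    http:
      paths:
      - path: /app(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: myapp
            port:
              number: 80
```

With this in place, a request for /app/modiy should be rewritten to /modiy before it hits the pods: $1 captures the slash, $2 the remainder.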
This approach still has one problem: your JS is quite likely referenced by absolute paths, so it will not load and will 404. Either change the references to relative paths, or configure redirects. Suppose the stylesheets live under /style, with images under /image and the JS under /javascripts, plus other create/update/delete pages; after the rewrite those paths now 404, and configuration-snippet can rewrite them.

configuration-snippet

```yaml
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/app-root: /linuxea.html
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/style/(.*)$ /app/style/$1 redirect;
      rewrite ^/image/(.*)$ /app/image/$1 redirect;
      rewrite ^/javascripts/(.*)$ /app/javascripts/$1 redirect;
      rewrite ^/modiy/(.*)$ /app/modiy/$1 redirect;
      rewrite ^/create/(.*)$ /app/create/$1 redirect;
      rewrite ^/delete/(.*)$ /app/delete/$1 redirect;
```

The | starts a multi-line block; the first rewrite then redirects everything under /style/ to /app/style/, putting the /app prefix back in front of the style paths, and the remaining lines do the same for the other paths.

app-root

If we want the root of the site to land somewhere other than the default, app-root does the redirect, here to linuxea.html; for a directory, use a path such as /app/:

```yaml
  annotations:
    nginx.ingress.kubernetes.io/app-root: /linuxea.html
```

```
[root@Node-172_16_100_50 ~/ingress]# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/test configured
```

The root now redirects automatically.

basic auth

nginx can do basic auth with a very small amount of configuration, and so can ingress-nginx. Install httpd with yum, or find an online htpasswd generator, to create the user mark with password linuxea.com:

```
yum install httpd -y
# htpasswd -c auth mark
New password:
Re-type new password:
Adding password for user mark
```

An online generator gives the same kind of result:

```
# cat auth1
mark:$apr1$69ocxsQr$omgzB53m59LeCVxlOAsTr/
```

Create a secret from that file:

```
# kubectl create secret generic bauth --from-file=auth1
secret/bauth created
# kubectl get secret bauth
NAME    TYPE     DATA   AGE
bauth   Opaque   1      21s
# kubectl describe secret bauth
Name:         bauth
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
auth1:  43 bytes
```

Then add it to the ingress-nginx annotations. auth-secret pulls in the bauth secret we just created, and auth-type sets the scheme:

```yaml
  annotations:
    ...
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: bauth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication failed, please try again'
```

The full manifest:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/app-root: /linuxea.html
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: bauth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication failed, please try again'
spec:
  tls:
  - hosts:
    - linuxea.test.com
    secretName: nginx-ingress-secret
  ingressClassName: nginx
  rules:
  - host: linuxea.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
```

Apply it:

```
# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/test configured
```
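A quick way to confirm the auth gate works is to request the site with and without credentials; a sketch, assuming linuxea.test.com resolves to the ingress as configured above:

```
# Without credentials: ingress-nginx should answer 401
curl -k -I https://linuxea.test.com/
# With the mark/linuxea.com user created above: normal response
curl -k -u mark:linuxea.com https://linuxea.test.com/
```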
Canary releases

The strategies we use most are rolling updates, blue-green, and canary; ingress-nginx drives these through annotations and can cover canary, blue-green, and A/B testing.

Earlier we configured one pod group and one Service; a canary needs a second group, after which annotations on the Ingress do the rest.

testv1, on nginx:1.14.a:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testv1
  labels:
    app: testv1
spec:
  replicas: 5
  selector:
    matchLabels:
      app: testv1
  template:
    metadata:
      labels:
        app: testv1
    spec:
      containers:
      - name: testv1
        image: marksugar/nginx:1.14.a
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: testv1-service
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: testv1
```

testv2:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testv2
  labels:
    app: testv2
spec:
  replicas: 5
  selector:
    matchLabels:
      app: testv2
  template:
    metadata:
      labels:
        app: testv2
    spec:
      containers:
      - name: testv2
        image: marksugar/nginx:1.14.b
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: testv2-service
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: testv2
```

Apply both:

```
# kubectl apply -f testv1.yaml
# kubectl apply -f testv2.yaml
```

Both groups are now running:

```
# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
testv1-9c974bd5d-c46dh    1/1     Running   0          19s
testv1-9c974bd5d-j7fzn    1/1     Running   0          19s
testv1-9c974bd5d-qp4tv    1/1     Running   0          19s
testv1-9c974bd5d-thx4r    1/1     Running   0          19s
testv1-9c974bd5d-x9rpf    1/1     Running   0          19s
testv2-5767685995-f8z5s   1/1     Running   0          6s
testv2-5767685995-htm74   1/1     Running   0          6s
testv2-5767685995-k8sdv   1/1     Running   0          6s
testv2-5767685995-mjd6c   1/1     Running   0          6s
testv2-5767685995-prhld   1/1     Running   0          6s
```

Give testv1 an ingress, ingress-v1.yaml:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testv1
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: test.mark.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testv1-service
            port:
              number: 80
```

```
# kubectl apply -f ingress-v1.yaml
```

Then check which version answers:

```
# for i in $(seq 1 10);do curl -s test.mark.com/linuxea.html ;done
linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0
linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0
linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0
linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0
linuxea-testv1-9c974bd5d-x9rpf.com ▍ 9cbb453617c73 ▍version number 1.0
linuxea-testv1-9c974bd5d-c46dh.com ▍ 4c0e80c7d9a34 ▍version number 1.0
linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0
linuxea-testv1-9c974bd5d-x9rpf.com ▍ 9cbb453617c73 ▍version number 1.0
linuxea-testv1-9c974bd5d-thx4r.com ▍ b9e074d68c3c7 ▍version number 1.0
linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0
```

canary

Now turn on the canary annotations (note the correct prefix is nginx.ingress.kubernetes.io):

```yaml
nginx.ingress.kubernetes.io/canary: "true"        # enable the canary machinery first
nginx.ingress.kubernetes.io/canary-weight: "30"   # send 30% of the traffic to the canary version
```

Give testv2 its own ingress-v2.yaml with the canary weight:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testv2
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "30"
spec:
  ingressClassName: nginx
  rules:
  - host: test.mark.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testv2-service
            port:
              number: 80
```

Depending on your controller version, at this point you may hit a problem:

```
Error from server (BadRequest): error when creating "ingress-1.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "linuxea.test.com" and path "/" is already defined in ingress default/test
```

The root cause is that the validating webhook did not ignore ingresses with a different ingress class. It can be worked around with controller.admissionWebhooks.enabled=false, and it is fixed in 1.1.2, so we install ingress-nginx 1.1.2:

```
docker pull k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
```

There is a mirror on aliyun:

```
docker pull registry.cn-shanghai.aliyuncs.com/wanfei/ingress-nginx-controller:v1.1.2
docker pull registry.cn-shanghai.aliyuncs.com/wanfei/kube-webhook-certgen:v1.1.1
docker pull registry.cn-shanghai.aliyuncs.com/wanfei/defaultbackend-amd64:1.5
```

Update the images in ingress-nginx's deployment.yaml, then apply the declarative files:

```
# kubectl apply -f ingress-v2.yaml
# kubectl get ingress
NAME      CLASS   HOSTS              ADDRESS         PORTS   AGE
linuxea   nginx   linuxea.test.com   172.16.100.50   80      22h
testv1    nginx   test.mark.com      172.16.100.50   80      3m23s
testv2    nginx   test.mark.com      172.16.100.50   80      70s
```

```
# for i in $(seq 1 10);do curl -s test.mark.com/linuxea.html ;done
linuxea-testv1-9c974bd5d-thx4r.com ▍ b9e074d68c3c7 ▍version number 1.0
linuxea-testv1-9c974bd5d-c46dh.com ▍ 4c0e80c7d9a34 ▍version number 1.0
linuxea-testv2-5767685995-mjd6c.com ▍ 1fa571f0e1e0e ▍version number 2.0
linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0
linuxea-testv1-9c974bd5d-thx4r.com ▍ b9e074d68c3c7 ▍version number 1.0
linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0
linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0
linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0
linuxea-testv1-9c974bd5d-x9rpf.com ▍ 9cbb453617c73 ▍version number 1.0
linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0
```

The split is a rough proportion, not an exact fixed ratio.
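Weight is not the only trigger: ingress-nginx can also route to the canary by request header, which lets testers opt in deterministically instead of probabilistically. A sketch; the header name X-Canary here is an arbitrary choice:

```yaml
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    # "X-Canary: always" goes to the canary, "X-Canary: never" stays
    # on stable; any other value falls back to the weight rule below.
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
    nginx.ingress.kubernetes.io/canary-weight: "30"
```

With this, `curl -s -H "X-Canary: always" test.mark.com/linuxea.html` should always answer with version 2.0.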
To back out of the release, set the weight to 0:

```yaml
nginx.ingress.kubernetes.io/canary-weight: "0"
```

Or set it to 100 to complete the rollout:

```yaml
nginx.ingress.kubernetes.io/canary-weight: "100"
```

References:
- validating webhook should ignore ingresses with a different ingressclass
- slack discussion: Error: admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "xyz" and path "/" is already defined in ingress xxx #821
- kubernetes Ingress Controller (15)
- kubernetes Ingress nginx http and layer-7 https configuration (17)
- kubernetes Ingress nginx configuration (16)
2022-04-18
linuxea: monitoring ingress-nginx with kube-prometheus on k8s
This assumes an ingress-nginx is already set up, either your own or the one on ACK. Monitoring the state and traffic of ingress-nginx is worth doing; the metrics add useful detail. To monitor it with the kube-prometheus project, expose port 10254 in the nginx-ingress-controller YAML, add a Service for it, and finally hook it into a ServiceMonitor.

start

With helm, the change looks like this:

```yaml
..
controller:
  metrics:
    enabled: true
  service:
    annotations:
      prometheus.io/port: "10254"
      prometheus.io/scrape: "true"
..
```

Without helm, the manifests have to be edited by hand. In the Service manifest:

```yaml
- name: prometheus
  port: 10254
  targetPort: prometheus
```

The prometheus port is then referenced by the Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
..
spec:
  ports:
  - name: prometheus
    port: 10254
    targetPort: prometheus
..
```

And in the Deployment, the container's port list:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
..
spec:
  ports:
  - name: prometheus
    containerPort: 10254
..
```

Test that the /metrics URL answers on 10254:

```
bash-5.1$ curl 127.0.0.1:10254/metrics
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.9802e-05
go_gc_duration_seconds{quantile="0.25"} 3.015e-05
go_gc_duration_seconds{quantile="0.5"} 4.2054e-05
go_gc_duration_seconds{quantile="0.75"} 9.636e-05
go_gc_duration_seconds{quantile="1"} 0.000383868
go_gc_duration_seconds_sum 0.000972498
go_gc_duration_seconds_count 11
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 92
# HELP go_info Information about the Go environment.
```

Service And ServiceMonitor

We also need a ServiceMonitor; the exact shape depends on the kube-prometheus distribution. The spec fields:

```yaml
spec:
  endpoints:
  - interval: 15s    # scrape every 15s
    port: metrics    # the port name
    path: /metrics   # the URL path
  namespaceSelector:
    matchNames:
    - kube-system    # the namespace ingress-nginx runs in
  selector:
    matchLabels:
      app: ingress-nginx   # ingress-nginx's labels
```

The final configuration: the Service is created in the namespace ingress-nginx runs in (kube-system here), while the ServiceMonitor goes in kube-prometheus's monitoring namespace; endpoints names the port, namespaceSelector.matchNames pins the namespace of the ingress pods, and selector.matchLabels matches their labels.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-metrics
  namespace: kube-system
  labels:
    app: ingress-nginx
  annotations:
    prometheus.io/port: "10254"
    prometheus.io/scrape: "true"
spec:
  type: ClusterIP
  ports:
  - name: metrics
    port: 10254
    targetPort: 10254
    protocol: TCP
  selector:
    app: ingress-nginx
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ingress-nginx-metrics
  namespace: monitoring
spec:
  endpoints:
  - interval: 15s
    port: metrics
    path: /metrics
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app: ingress-nginx
```
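Once the target is up, a few PromQL queries confirm that data is flowing. A sketch: nginx_ingress_controller_requests is one of the controller's standard counters, and the job label value assumes kube-prometheus derives it from the Service name above, as it normally does:

```
# Controller scrape health for the ServiceMonitor defined above
up{job="ingress-nginx-metrics"}

# Request rate seen by the controller, split by HTTP status
sum(rate(nginx_ingress_controller_requests[5m])) by (status)

# Request rate per Ingress object
sum(rate(nginx_ingress_controller_requests[5m])) by (ingress)
```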
grafana

Search the grafana dashboards site for ingress-nginx; the result is the same template as on the project's GitHub: https://grafana.com/grafana/dashboards/9614?pg=dashboards&plcmt=featured-dashboard-4 — or use a similar template. The new targets then show up under Prometheus's targets.

References:
- ingress-nginx monitoring
- prometheus and grafana install
2022-03-14
linuxea: two common ways to deploy ingress-nginx
Here is a scenario: in front sits a China Mobile cloud server whose network is wired up with SNAT and DNAT. That is uncommon on public clouds and usually appears in self-built machine rooms; normally, DNAT and SNAT happen at the firewall or on the outer router. One IP address with different ports is mapped to some port on a backend node, which necessarily means layer-4 forwarding, and that raises a problem: HTTPS cannot be terminated there. Either you put a proxy layer in front, or you add HTTPS on the backend, and the backend is a Kubernetes cluster.

Adding HTTPS on the backend requires a layer that can route by hostname, which is where something like nginx-ingress comes in.

Whichever of the two you choose, there remain the problems of load balancing and of draining failed backend nodes. If a backend or frontend node dies, you have both a single point of failure and the need to take the dead backend out of rotation quickly, so the setup most likely ends up like this: the DNAT side needs no thought, since it is out of our hands, and a proxy layer only handles removing failed backend servers or bringing them back online — most likely a layer-4 nginx. A single point of failure still remains at that proxy.

Taking ingress-nginx as the example, there are basically two ways to set it up:

1. forward through the default NodePort;
2. forward through the host's network namespace.

Both are commonly used. The second is considered more efficient: the pod does not get an isolated network namespace, so one layer of namespace overhead disappears, which means one fewer transition between kernel space and user space; from that angle it is faster than NodePort.

Installing ingress-nginx

Download ingress-nginx 1.1.1:

```
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/baremetal/deploy.yaml
```

I found images someone has already mirrored:

```
docker pull liangjw/kube-webhook-certgen:v1.1.1
docker pull liangjw/ingress-nginx-controller:v1.1.1
```

Retag them:

```
docker tag liangjw/kube-webhook-certgen:v1.1.1 k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
docker tag liangjw/ingress-nginx-controller:v1.1.1 k8s.gcr.io/ingress-nginx/controller:v1.1.1
```

If the other nodes have no internet access, save the images locally and copy them over:

```
docker save -o controller.tar k8s.gcr.io/ingress-nginx/controller:v1.1.1
docker save -o kube-webhook-certgen.tar k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
for i in 21 30 ;do scp controller.tar 172.16.1.$i:~/;done
for i in 21 30 ;do scp kube-webhook-certgen.tar 172.16.1.$i:~/;done
```

You might also run it as a DaemonSet, or pin it with nodeName:

```
sed -i 's/kind: Deployment/kind: DaemonSet/g' deploy.yaml
```

Apply the declarative file:

```
[root@node1 ~/ingress]# kubectl apply -f deployment.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
```

If all is well, the controller is ready:

```
[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get pod -o wide
NAME                                        READY   STATUS      RESTARTS   AGE    IP              NODE            NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-m7hph        0/1     Completed   0          114s   192.20.137.84   172.16.100.50   <none>           <none>
ingress-nginx-admission-patch-bmx2r         0/1     Completed   0          114s   192.20.180.14   172.16.100.51   <none>           <none>
ingress-nginx-controller-78c57d6886-m7mtc   1/1     Running     0          114s   192.20.180.16   172.16.100.51   <none>           <none>
```

We changed nothing, so the NodePort ports are random; they can be pinned to fixed ports later:

```
[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get svc
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.68.108.97    <none>        80:31837/TCP,443:31930/TCP   2m28s
ingress-nginx-controller-admission   ClusterIP   10.68.102.110   <none>        443/TCP                      2m28s
```
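Before creating any Ingress objects, a quick sanity check against the random NodePort (31837 above) should get an answer from the controller; since no rule matches yet, the expected reply is the default backend's 404. A sketch:

```
# Either node IP works; expect ingress-nginx's default 404.
curl -i http://172.16.100.51:31837/
```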
Test configuration

Configure an nginx app with a Service named myapp:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: linuxea_app
    version: v0.1.32
  ports:
  - name: http
    targetPort: 80
    port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea
  namespace: default
spec:
  replicas: 7
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.1.32
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.1.32
    spec:
      containers:
      - name: nginx-a
        image: marksugar/nginx:1.14.b
        ports:
        - name: http
          containerPort: 80
```

Make sure it is reachable on the cluster IP:

```
[root@linuxea-50 ~/ingress]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.68.0.1       <none>        443/TCP    273d
myapp        ClusterIP   10.68.211.186   <none>        80/TCP     9m42s
mysql        ClusterIP   None            <none>        3306/TCP   2d5h
[root@linuxea-50 ~/ingress]# curl 10.68.211.186
linuxea-dpment-linuxea-6bdfbd7b77-tlh8k.com-127.0.0.1/8 192.20.137.98/32
```

Configuring the Ingress

ingress.yaml:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: linuxea.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
```

Apply it:

```
[root@linuxea-50 ~/ingress]# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/test configured
[root@linuxea-50 ~/ingress]# kubectl get ingress
NAME   CLASS   HOSTS              ADDRESS         PORTS   AGE
test   nginx   linuxea.test.com   172.16.100.51   80      30s
```

For access testing, first add a local hosts entry as the DNS record:

```
# C:\Windows\System32\drivers\etc\hosts
... add:
172.16.100.51 linuxea.test.com
```

The default configuration is now reachable.

Load balancing, take 1

1. First, bind the controller to specific nodes. Only a subset of nodes should receive traffic, so add a label selector:

```
kubectl label node 172.16.100.50 beta.kubernetes.io/zone=ingress
kubectl label node 172.16.100.51 beta.kubernetes.io/zone=ingress
```

Remove the old nodeSelector:

```yaml
      nodeSelector:
        kubernetes.io/os: linux
```

and change it to:

```yaml
      nodeSelector:
        beta.kubernetes.io/zone: ingress
```

Also turn the Deployment into a DaemonSet, then apply:

```
sed -i 's/kind: Deployment/kind: DaemonSet/g' deployment.yaml2
[root@linuxea-50 ~/ingress]# kubectl apply -f deployment.yaml2
```

2. With the labels in place:

```
[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get node --show-labels|grep "ingress"
172.16.100.50   Ready   master   276d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/os=linux,beta.kubernetes.io/zone=ingress,kubernetes.io/arch=amd64,kubernetes.io/hostname=172.16.100.50,kubernetes.io/os=linux,kubernetes.io/role=master
172.16.100.51   Ready   master   276d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/os=linux,beta.kubernetes.io/zone=ingress,kubernetes.io/arch=amd64,kubernetes.io/hostname=172.16.100.51,kubernetes.io/os=linux,kubernetes.io/role=master
```

The pods land on both labeled nodes:

```
[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get pod -o wide
NAME                                   READY   STATUS      RESTARTS   AGE   IP               NODE
ingress-nginx-admission-create-kfj7v   0/1     Completed   0          22m   192.20.137.99    172.16.100.50
ingress-nginx-admission-patch-5dwvf    0/1     Completed   1          22m   192.20.137.110   172.16.100.50
ingress-nginx-controller-4q9qb         1/1     Running     0          12m   192.20.180.27    172.16.100.51
ingress-nginx-controller-n9qkl         1/1     Running     0          12m   192.20.137.92    172.16.100.50
```

Traffic now enters through 172.16.100.51 and 172.16.100.50, so requests have to hit the NodePort ports on those two nodes; we therefore pin the NodePorts:
```yaml
apiVersion: v1
kind: Service
....
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ipFamilyPolicy: SingleStack
  ipFamilies:
  - IPv4
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    appProtocol: http
    nodePort: 31080
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
    nodePort: 31443
    appProtocol: https
  selector:
....
```

2. Configure nginx. We can proxy at layer 4 or layer 7, depending on what is needed: layer 4 only drains failed backend nodes, while layer 7 can also handle hostnames and so on.

```
yum install nginx-mod-stream
```

nginx loads the layer-4 module automatically once it is installed; include it from nginx.conf:

```
stream {
    include stream/*.conf;
}
```

Create the directory:

```
mkdir stream
```

and a config file like this:

```
upstream test-server {
    server 172.16.100.50:31080 max_fails=3 fail_timeout=1s weight=1;
    server 172.16.100.51:31080 max_fails=3 fail_timeout=1s weight=1;
}
log_format proxy '$remote_addr $remote_port - [$time_local] $status $protocol '
                 '"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"' ;
access_log /data/logs/nginx/web-server.log proxy;
server {
    listen 31080;
    proxy_connect_timeout 3s;
    proxy_timeout 3s;
    proxy_pass test-server;
}
```

A quick access test:

```
[root@linuxea-49 /etc/nginx]# tail -f /data/logs/nginx/web-server.log
172.16.100.3 4803 - [14/Mar/2022:00:38:27 +0800] 200 TCP "172.16.100.51:31080" "19763" "0.000"
172.16.100.3 4811 - [14/Mar/2022:00:38:29 +0800] 200 TCP "172.16.100.50:31080" "43999" "0.000"
172.16.100.3 4812 - [14/Mar/2022:00:38:30 +0800] 200 TCP "172.16.100.51:31080" "44105" "0.000"
172.16.100.3 4813 - [14/Mar/2022:00:38:31 +0800] 200 TCP "172.16.100.50:31080" "43944" "0.000"
172.16.100.3 4816 - [14/Mar/2022:00:38:34 +0800] 200 TCP "172.16.100.51:31080" "3464" "0.000"
172.16.100.3 4819 - [14/Mar/2022:00:38:43 +0800] 200 TCP "172.16.100.50:31080" "44105" "0.001"
172.16.100.3 4820 - [14/Mar/2022:00:38:44 +0800] 200 TCP "172.16.100.51:31080" "44105" "0.000"
172.16.100.3 4821 - [14/Mar/2022:00:38:47 +0800] 200 TCP "172.16.100.50:31080" "8660" "0.000"
172.16.100.3 4825 - [14/Mar/2022:00:39:06 +0800] 200 TCP "172.16.100.51:31080" "42747" "0.000"
172.16.100.3 4827 - [14/Mar/2022:00:39:09 +0800] 200 TCP "172.16.100.50:31080" "32058" "0.000"
```
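To confirm the whole chain — client → stream proxy → NodePort → controller → pod — hit the proxy's listen port with the right Host header. A sketch; 172.16.100.49 stands in for the proxy host seen in the log snippets above:

```
# Validate and reload nginx after adding the stream config
nginx -t && nginx -s reload
# Through the layer-4 proxy to the NodePort and on to a pod
curl -H 'Host: linuxea.test.com' http://172.16.100.49:31080/
```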
Load balancing, take 2

This time we change the ingress-nginx manifest to use hostNetwork: true: the pod no longer gets an isolated network namespace and uses the host network directly, so the Service can be used as-is instead of going through NodePort.

1. Switch the network mode, and change dnsPolicy as well: once hostNetwork: true is set, dnsPolicy can no longer be ClusterFirst; it must be ClusterFirstWithHostNet, which is the only way names resolve both inside the cluster and on the host:

```yaml
spec:
  selector:
...
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: controller
        image: k8s.gcr.io/ingress-nginx/controller:v1.1.1
....
```

Apply:

```
[root@linuxea-50 ~/ingress]# kubectl apply -f deployment.yaml2
[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get pod -o wide
NAME                                   READY   STATUS      RESTARTS   AGE   IP               NODE
ingress-nginx-admission-create-kfj7v   0/1     Completed   0          23h   192.20.137.99    172.16.100.50
ingress-nginx-admission-patch-5dwvf    0/1     Completed   1          23h   192.20.137.110   172.16.100.50
ingress-nginx-controller-5nd59         1/1     Running     0          46s   172.16.100.51    172.16.100.51
ingress-nginx-controller-zzrsz         1/1     Running     0          85s   172.16.100.50    172.16.100.50
```

Once the change is applied, you can see the pod using the host's NICs — the network namespace is not isolated, which is exactly why this is faster than NodePort:

```
[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx exec -it ingress-nginx-controller-zzrsz -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether a2:5a:55:54:d1:1d brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.50/24 brd 172.16.100.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::b68f:449c:af0f:d91f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:c8:d3:08:a9 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: calib6c2ec954f8@docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
6: califa7cddb93a8@docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
....
```
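With hostNetwork on, the controller's ports are bound directly on each labeled node, which is easy to verify from the node itself. A sketch:

```
# On 172.16.100.50 or .51: the controller should listen on the
# host's 80/443 directly, with no NodePort in between.
ss -lntp | grep -E ':(80|443)\s'
curl -H 'Host: linuxea.test.com' http://172.16.100.50/
```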
2. Configure nginx as a layer-7 proxy. Now nginx can be pointed straight at the nodes without worrying about the Service, and the certificate can live on this layer-7 proxy. Create a self-signed certificate:

```
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout linuxea.key -out linuxea.crt -subj /C=CH/ST=ShangHai/L=Xian/O=Devops/CN=linuxea.test.com
[root@linuxea-49 /etc/nginx]# ll ssl
total 12
-rw-r--r-- 1 root root  424 Mar 14 22:23 dhparam.pem
-rw-r--r-- 1 root root 1285 Mar 14 22:31 linuxea.crt
-rw-r--r-- 1 root root 1704 Mar 14 22:31 linuxea.key
```

The log format:

```
log_format upstream2 '$proxy_add_x_forwarded_for $remote_user [$time_local] "$request" $http_host'
                     '$body_bytes_sent "$http_referer" "$http_user_agent" $ssl_protocol $ssl_cipher'
                     '$request_time [$status] [$upstream_status] [$upstream_response_time] "$upstream_addr"';
```

Then drop a k8s.conf straight into conf.d:

```
upstream web {
    server 172.16.100.50:80 max_fails=3 fail_timeout=1s weight=1;
    server 172.16.100.51:80 max_fails=3 fail_timeout=1s weight=1;
}
#server {
#    listen 80;
#    server_name http://linuxea.test.com/;
#    if ($scheme = 'http' ) { rewrite ^(.*)$ https://$host$1 permanent; }
#    index index.html index.htm index.php default.html default.htm default.php;
#}
server {
    listen 80;
    server_name linuxea.test.com;
#    if ($scheme = 'http' ) { rewrite ^(.*)$ https://$host$1 permanent; }
    index index.html index.htm index.php default.html default.htm default.php;
    #limit_conn conn_one 20;
    #limit_conn perserver 20;
    #limit_rate 100k;
    #limit_req zone=anti_spider burst=10 nodelay;
    #limit_req zone=req_one burst=5 nodelay;
    access_log /data/logs/nginx/web-server.log upstream2;
    location / {
        proxy_pass http://web;
        include proxy.conf;
    }
}
server {
    listen 443 ssl;
    server_name linuxea.test.com;
    #include fangzhuru.conf;
    ssl_certificate ssl/linuxea.crt;
    ssl_certificate_key ssl/linuxea.key;
    access_log /data/logs/nginx/web-server.log upstream2;
#    include ssl-params.conf;
    location / {
        proxy_pass http://web;
        include proxy.conf;
    }
}
```

proxy.conf looks like this:

```
proxy_connect_timeout 1000s;
proxy_send_timeout 2000;
proxy_read_timeout 2000;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
proxy_redirect off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header REMOTE-HOST $remote_addr;
proxy_hide_header Vary;
proxy_set_header Accept-Encoding '';
proxy_set_header Host $host;
proxy_set_header Referer $http_referer;
proxy_set_header Cookie $http_cookie;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
```

3. Test access.

4. Configure HTTPS on ingress-nginx itself:

```
[root@linuxea-50 ~/ingress]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout linuxea.key -out linuxea.crt -subj /C=CH/ST=ShangHai/L=Xian/O=Devops/CN=linuxea.test.com
Generating a 2048 bit RSA private key
......................+++
............................................+++
writing new private key to 'linuxea.key'
-----
```

Create the secret:

```
[root@linuxea-50 ~/ingress]# kubectl create secret tls nginx-ingress-secret --cert=linuxea.crt --key=linuxea.key
secret/nginx-ingress-secret created
```

Inspect it:

```
[root@linuxea-50 ~/ingress]# kubectl get secret nginx-ingress-secret
NAME                   TYPE                DATA   AGE
nginx-ingress-secret   kubernetes.io/tls   2      24s
[root@linuxea-50 ~/ingress]# kubectl describe secret nginx-ingress-secret
Name:         nginx-ingress-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1285 bytes
tls.key:  1700 bytes
```

Then add the tls block to the Ingress:

```yaml
spec:
  tls:
  - hosts:
    - linuxea.test.com
    secretName: nginx-ingress-secret
```

The full file:

```yaml
[root@linuxea-50 ~/ingress]# cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  namespace: default
spec:
  tls:
  - hosts:
    - linuxea.test.com
    secretName: nginx-ingress-secret
  ingressClassName: nginx
  rules:
  - host: linuxea.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
```

Add 443 to the layer-4 proxy:

```
upstream test-server443 {
    server 172.16.100.50:443 max_fails=3 fail_timeout=1s weight=1;
    server 172.16.100.51:443 max_fails=3 fail_timeout=1s weight=1;
}
server {
    listen 443;
    proxy_connect_timeout 3s;
    proxy_timeout 3s;
    proxy_pass test-server443;
}
upstream test-server {
    server 172.16.100.50:80 max_fails=3 fail_timeout=1s weight=1;
    server 172.16.100.51:80 max_fails=3 fail_timeout=1s weight=1;
}
log_format proxy '$remote_addr $remote_port - [$time_local] $status $protocol '
                 '"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"' ;
access_log /data/logs/nginx/web-server.log proxy;
server {
    listen 80;
    proxy_connect_timeout 3s;
    proxy_timeout 3s;
    proxy_pass test-server;
}
```
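Since the proxy passes 443 through at layer 4, TLS is still terminated by ingress-nginx behind it; checking the certificate through the proxy verifies that. A sketch, assuming linuxea.test.com resolves to the proxy host:

```
# Expect the self-signed CN=linuxea.test.com certificate created above,
# served by ingress-nginx through the layer-4 passthrough.
openssl s_client -connect linuxea.test.com:443 -servername linuxea.test.com </dev/null | head
curl -k https://linuxea.test.com/
```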
"1492" "0.000" 172.16.100.3 13234 - [14/Mar/2022:23:05:58 +0800] 200 TCP "172.16.100.50:443" "547" "0.001" 172.16.100.3 13234 - [14/Mar/2022:23:05:58 +0800] 200 TCP "172.16.100.50:443" "547" "0.001" 172.16.100.3 13235 - [14/Mar/2022:23:06:01 +0800] 200 TCP "172.16.100.51:443" "1227" "0.000" 172.16.100.3 13235 - [14/Mar/2022:23:06:01 +0800] 200 TCP "172.16.100.51:443" "1227" "0.000"参考kubernetes Ingress nginx http以及7层https配置 (17)