2022-05-08
linuxea:kubernetes检测pod部署状态简单实现
通常,无状态的应用多数情况下以deployment控制器下运行,在deployment更新中,当配置清单发生变化后,应用这些新的配置。我们假设一些都ok,也成功拉取镜像,并且以默认的25%进行滚动更新,直到更新完成。kubectl apply -f test1.yaml然而这一切按照预期进行,没有任何问题。kubectl只是将配置推送到k8s后,只要配置清单没有语法或者冲突问题,返回的是0,状态就是成功的而整个过程有很多不确定性,比如,不存在的镜像,没有足够的资源调度,配置错误导致的一系列问题,而捕捉这种问题也是比较关键的事情之一。这并不单纯的简单观测问题,pod并不是拉取镜像就被running起,一旦runing就意味着接收到流量,而程序准备需要时间,如果此时程序没有准备好,流量就接入,势必会出现错误。为了解决这个问题,就需要配置就绪检测或者Startup检测pod在被真正的处于ready起来之前,通常会做就绪检测,或者启动检测。在之前的几篇中,我记录了就绪检测和健康检测的重要性,而在整个就绪检测中,是有一个初始化时间的问题。如果此时,配置清单发送变化,调度器开始执行清单任务。假设此时的初始化准备时间是100秒,有30个pod,每次至少保证有75%是正常运行的,默认按照25%滚动更新Updating a Deployment。此时的准备时间(秒)至少是30 / 25% * (100)readiness probe time 如果pod越多,意味着等待所有pod就绪完成的总时间就越长,如果放在cd管道中去运行,势必会让反馈时间越久。当一个重量级的集群中,每一条全局遍历都非常消耗资源,因此操作非常昂贵。整个集群有可能因此产生大的延迟,在集群外部调用API间隔去获取远比实时获取消耗资源要少。如果pod并不多,这个问题不值得去考量。使用rollout足以解决。获取清单被推送到节点的方式,有如下:事件监控pod在更新的时候,如果有问题会触发事件watch状态以及其他第三方的编码来达到这个需求,比如由额外的程序来间隔时间差去检测状态,而不是一直watch通常,使用kubectl rollout或者helm的--wait,亦或者argocd的平面控制来观测rollout在kubernetes的文档中,rollout的页面中提到的检查状态rollout能够管理资源类型如:部署,守护进程,状态阅读rolling-back-a-deployment中的status watch,得到以下配置kubectl -n NAMESPACE rollout status deployment NAME --watch --timeout=Xmrollout下的其他参数history列出deployment/testv1的历史kubectl -n default rollout history deployment/testv1查看历史记录的版本1信息kubectl -n default rollout history deployment/testv1 --revision=1pause停止,一旦停止,更新将不会生效kubectl rollout pause deployment/testv1需要恢复,或者重启resume恢复,恢复后延续此前暂停的部署kubectl rollout resume deployment/testv1status此时可以配置status查看更新过程的状态kubectl rollout status deployment/testv1status提供了一下参数,比如常用的超时比如,--timeout=10m,最长等待10m,超过10分钟就超时kubectl rollout status deployment/testv1 --watch --timeout=10m 其他参数如下NameShorthandDefaultUsagefilenamef[]Filename, directory, or URL to files identifying the resource to get from a server.kustomizek Process the kustomization directory. This flag can't be used together with -f or -R.recursiveRfalseProcess the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.revision 0Pin to a specific revision for showing its status. Defaults to 0 (last revision).timeout 0sThe length of time to wait before ending watch, zero means never. Any other values should contain a corresponding time unit (e.g. 1s, 2m, 3h).watchwtrueWatch the status of the rollout until it's done.undo回滚回滚到上一个版本kubectl rollout undo deployment/testv1 回滚到指定的版本1,查看已有版本# kubectl -n default rollout history deployment/testv1 deployment.apps/testv1 REVISION CHANGE-CAUSE 1 <none> 2 <none> 3 <none>查看版本信息# kubectl rollout history deployment/testv1 deployment.apps/testv1 REVISION CHANGE-CAUSE 2 <none> 7 <none> 8 <none>2,回滚到2# kubectl rollout undo deployment/testv1 --to-revision=2 deployment.apps/testv1 rolled backhelm--waitif set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. 
It will wait for as long as --timeout只有在控制器的pod出于就绪状态才会结束,默认时间似乎是600秒·看起来像是这样helm upgrade --install --namespace NAMESPACE --create-namespace --wait APP FILEAPI上面两种方式能够完成大部分场景,但是watch是非常占用资源,如果希望通过一个脚本自己的逻辑去处理,可以使用clent-go的包手动for循环查看状态clent-go手动去for循环package main import ( "context" "flag" "fmt" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" typev1 "k8s.io/client-go/kubernetes/typed/apps/v1" "k8s.io/client-go/rest" "k8s.io/client-go/util/retry" "os" "time" ) type args struct { namespace string image string deployment string } const ( numberOfPoll = 200 pollInterval = 3 ) func parseArgs() *args { namespace := flag.String("n", "", "namespace") deployment := flag.String("deploy", "", "deployment name") image := flag.String("image", "", "image for update") flag.Parse() var _args args if *namespace == "" { fmt.Fprintln(os.Stderr, "namespace must be specified") os.Exit(1) } _args.namespace = *namespace if *deployment == "" { fmt.Fprintln(os.Stderr, "deployment must be specified") os.Exit(1) } _args.deployment = *deployment if *image == "" { fmt.Fprintln(os.Stderr, "image must be specified") os.Exit(1) } _args.image = *image return &_args } func main() { _args := parseArgs() // creates the in-cluster config config, err := rest.InClusterConfig() if err != nil { panic(err.Error()) } // creates the clientset clientset, err := kubernetes.NewForConfig(config) if err != nil { panic(err.Error()) } deploymentsClient := clientset.AppsV1().Deployments(_args.namespace) ctx := context.Background() retryErr := retry.RetryOnConflict(retry.DefaultRetry, func() error { // Retrieve the latest version of Deployment before attempting update // RetryOnConflict uses exponential backoff to avoid exhausting the apiserver result, getErr := deploymentsClient.Get(ctx, _args.deployment, metav1.GetOptions{}) if getErr != nil { fmt.Fprintf(os.Stderr, "Failed to get latest version of Deployment %s: %v", _args.deployment, getErr) os.Exit(1) } result.Spec.Template.Spec.Containers[0].Image = _args.image _, updateErr := deploymentsClient.Update(ctx, result, metav1.UpdateOptions{}) return updateErr }) if retryErr != nil { fmt.Fprintf(os.Stderr, "Failed to update image version of %s/%s to %s: %v", _args.namespace, _args.deployment, _args.image, retryErr) os.Exit(1) } _args.pollDeploy(deploymentsClient) fmt.Println("Updated deployment") } // watch 太浪费资源了,而且时间太长,还是轮询吧 func (p *args) pollDeploy(deploymentsClient typev1.DeploymentInterface) { ctx := context.Background() for i := 0; i <= numberOfPoll; i++ { time.Sleep(pollInterval * time.Second) result, getErr := deploymentsClient.Get(ctx, p.deployment, metav1.GetOptions{}) if getErr != nil { fmt.Fprintf(os.Stderr, "Failed to get latest version of Deployment %s: %v", p.deployment, getErr) os.Exit(1) } resourceStatus := result.Status fmt.Printf("%s -> replicas: %d, ReadyReplicas: %d, AvailableReplicas: %d, UpdatedReplicas: %d, UnavailableReplicas: %d\n", result.Name, resourceStatus.Replicas, resourceStatus.ReadyReplicas, resourceStatus.AvailableReplicas, resourceStatus.UpdatedReplicas, resourceStatus.UnavailableReplicas) if resourceStatus.Replicas == resourceStatus.ReadyReplicas && resourceStatus.ReadyReplicas == resourceStatus.AvailableReplicas && resourceStatus.AvailableReplicas == resourceStatus.UpdatedReplicas { return } } fmt.Fprintf(os.Stderr, "应用在 %d 秒内没有启动成功,视作启动失败,请查看日志。\n", numberOfPoll*pollInterval) os.Exit(1) }其他参考kubernetes-deployment-status-in-jenkinsKubernetes探针补充Kubernetes Liveness 和 Readiness 探测避免给自己挖坑续集重新审视kubernetes活跃探针和就绪探针 
如何避免给自己挖坑(2)、Kubernetes Startup Probes:避免给自己挖坑(3)
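结合上文的 rollout status 与回滚用法,下面是一个放在 CI/CD 管道里的最小化 shell 草稿,仅作思路演示:其中名称空间、deployment 名称与清单文件名都是占位假设,需要按实际环境替换。
#!/usr/bin/env bash
# 假设性示例:应用清单后等待滚动更新完成,超时即回滚并让管道失败
set -euo pipefail
NAMESPACE=default            # 占位:实际名称空间
DEPLOYMENT=testv1            # 占位:实际 deployment 名称
MANIFEST=test1.yaml          # 占位:实际清单文件
kubectl -n "$NAMESPACE" apply -f "$MANIFEST"
# --timeout 可按 "副本数 / 25% * 就绪探测时间" 粗略估算;超时时 rollout status 返回非零
if ! kubectl -n "$NAMESPACE" rollout status deployment "$DEPLOYMENT" --watch --timeout=10m; then
  echo "部署未在限定时间内就绪,执行回滚" >&2
  kubectl -n "$NAMESPACE" rollout undo deployment "$DEPLOYMENT"
  exit 1
fi
这样管道的成败就不再只取决于 kubectl apply 的返回值,而是取决于滚动更新是否真正完成。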
2022-04-22
linuxea:helm3的简单使用(1)
无论是debian还是redhat,亦或者其他linux发行版,都有一个包管理用来解决依赖问题,而在kubernetes中,helm是用来管理kubernetes应用程序,其中charts是可以定义一个可以进行安装升级的应用程序,同时也容易创建起来,并且进行版本管理。而在越复杂的应用程序来讲,helm可以作为一个开箱即用的,单单从使用角度来看,类似于yum或者apt的,使用起来,会更加流行。比如:我们创建一个应用程序,控制器使用deployment,同时需要一个service和关联其他的对象,并且还需配置一个ingress配置域名等作为入口,还可能需要部署一个有状态的,类似mysql的后端数据存储等等。这些如果需要一个个去安装维护将会麻烦很多,特别对于一个使用者来讲,更多时候,无需关注里面发生了什么,而更多的时候只想拿来即用的,helm就是用来打包这些程序。一个流行kubernetes生态的组件库中,你会发现必然会提供一个helm的方式。这正是因为helm的特色,得益于这种的易用性使得helm愈发的普及。作为一个charts提供的参数,对其中的内容进行渲染,从而生成yaml文件,安装到kubernetes中。helm就是解决这些事情的除此之外,我们还有一个kustomize也可以进行配置清单管理,kustomize解决的是另外一个问题,有机会在写这个kustomize。而helm2和helm3是有些不同的。安装helm是读取kubeconfig文件来访问集群的,因此,你至少能够使用kubectl访问集群才能使用helm在使用版本上v3版本比v2更好用一些,简化了集群内的一个服务换城了kubernetes CRD, 在v2中需要大量的权限控制,这样也会带来一个安全问题,而在v3中变成了一个客户端, 因此,我们使用v3稳定版本即可如果需要了解更多的概念,可以参考helm2的时候的一些文章对于helm2,可以查看如下kubernetes helm概述(49)kubernetes helm简单使用(50)kubernetes 了解chart(51)kubernetes helm安装efk(52)在helm的github下载对应系统的版本,比如:3.8.1的amd版本wget https://get.helm.sh/helm-v3.8.1-linux-amd64.tar.gz tar xf helm-v3.8.1-linux-amd64.tar.gz cp helm /usr/local/sbin/查看版本信息这里温馨的提示说我们的配置文件的权限太高# helm version WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config version.BuildInfo{Version:"v3.8.1", GitCommit:"5cb9af4b1b271d11d7a97a71df3ac337dd94ad37", GitTreeState:"clean", GoVersion:"go1.17.5"}常用的命令- helm search: 搜索以恶个 charts - helm pull: 下载 chart - helm install: 安装到 Kubernetes - helm list: 查看 chartshelm installhelm install可以通过多个源进行安卓,大致如下chart仓库本地chart压缩包本地解开的压缩包的目录中的路径在线url总之要能够访问的到,首先通过在线安装1.添加chart仓库源我们需要安装一个chart源来使用,这类似于yum的源一样,我们使用azure的仓库helm repo add stable https://mirror.azure.cn/kubernetes/charts/ helm repo list[root@linuxea.com ~]# helm repo add stable "stable" has been added to your repositories [root@linuxea.com ~]# helm repo list NAME URL stable https://mirror.azure.cn/kubernetes/charts/我们可以使用 helm search repo stable查看当前的包于此同时,使用helm repo update更新到最新的状态[root@linuxea.com ~]# helm repo update WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "stable" chart repository Update Complete. ⎈Happy Helming!⎈ [root@linuxea.com ~]# helm search repo stable WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config NAME CHART VERSION APP VERSION DESCRIPTION stable/acs-engine-autoscaler 2.2.2 2.1.1 DEPRECATED Scales worker nodes within agent pools stable/aerospike 0.3.5 v4.5.0.5 DEPRECATED A Helm chart for Aerospike in Kubern... stable/airflow 7.13.3 1.10.12 DEPRECATED - please use: https://github.com/air... stable/ambassador 5.3.2 0.86.1 DEPRECATED A Helm chart for Datawire Ambassador stable/anchore-engine 1.7.0 0.7.3 Anchore container analysis and policy evaluatio... stable/apm-server 2.1.7 7.0.0 DEPRECATED The server receives data from the El... stable/ark 4.2.2 0.10.2 DEPRECATED A Helm chart for ark stable/artifactory 7.3.2 6.1.0 DEPRECATED Universal Repository Manager support... stable/artifactory-ha 0.4.2 6.2.0 DEPRECATED Universal Repository Manager support... stable/atlantis 3.12.4 v0.14.0 DEPRECATED A Helm chart for Atlantis https://ww... 
......2.安装 chart安装一个mysql,在安装之前我们可以show一下 helm show chart stable/mysql查看它的版本号等信息更详细的信息可以通过helm show all stable/mysql ,all来查看[root@linuxea.com ~]# helm show chart stable/mysql WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config apiVersion: v1 appVersion: 5.7.30 deprecated: true description: DEPRECATED - Fast, reliable, scalable, and easy to use open-source relational database system. home: https://www.mysql.com/ icon: https://www.mysql.com/common/logos/logo-mysql-170x115.png keywords: - mysql - database - sql name: mysql sources: - https://github.com/kubernetes/charts - https://github.com/docker-library/mysql version: 1.6.9安装--generate-name生成一个名称# helm install stable/mysql --generate-name WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config WARNING: This chart is deprecated NAME: mysql-1649580933 LAST DEPLOYED: Sun Apr 10 04:55:35 2022 NAMESPACE: default STATUS: deployed REVISION: 1 NOTES: MySQL can be accessed via port 3306 on the following DNS name from within your cluster: mysql-1649580933.default.svc.cluster.local To get your root password run: MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1649580933 -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo) To connect to your database: 1. Run an Ubuntu pod that you can use as a client: kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il 2. Install the mysql client: $ apt-get update && apt-get install mysql-client -y 3. Connect using the mysql cli, then provide your password: $ mysql -h mysql-1649580933 -p To connect to your database directly from outside the K8s cluster: MYSQL_HOST=127.0.0.1 MYSQL_PORT=3306 # Execute the following command to route the connection: kubectl port-forward svc/mysql-1649580933 3306 mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}而后我们可以观察到pod的状态[root@linuxea.com ~]# kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ... mysql-1649580933-8466b76578-gphkp 0/1 Pending 0 106s <none> <none> <none> <none> ...和svc[root@linuxea.com ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ... mysql-1649580933 ClusterIP 10.68.106.229 <none> 3306/TCP 2m23s ...以及一个pvc[root@linuxea.com ~]# kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql-1649580933 Pending 4m33s一旦安装完成可以通过ls查看她的版本[root@linuxea.com ~]# helm ls WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION mysql-1649580933 default 1 2022-04-10 04:55:35.427415297 -0400 EDT deployed mysql-1.6.9 5.7.30 当我们能看到这个name的时候,就可以使用uninstall删除uninstalll 会删除这个包下的所有相关的这个包的资源。同时,可以使用--keep-history参数保留release的记录而使用了--keep-history的时候就可以使用helm ls -a查看被卸载掉的记录[root@linuxea.com ~]# helm uninstall mysql-1649580933 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. 
Location: /root/.kube/config release "mysql-1649580933" uninstalled参数配置如果直接install是默认的配置,但是更多时候,我们需要调整一下配置的参数,比如类似于端口等其他的选项参数,当然,这些参数必须是可以配置的的,一旦配置后,就会覆盖掉默认的值,通过helm show values来查看这些参数[root@linuxea.com ~]# helm show values stable/mysql比如配置密码,初始化,参数,调度,容忍,是否持久化等等。既然,我们要修改这些参数,那就需要一个覆盖的文件来进行操作,于是,创建一个文件,比如mvalule.yaml,在文件中配置想要修改的值, 如下指定用户和密码,并创建一个linuea的库,并且不进行数据持久化mysqlUser: linuxea mysqlPassword: linuxea.com mysqlDatabase: linuxea persistence: enabled: false而后只需要指定这个配置文件即可当你不使用 --generate-name的时候,只需要指定名称即可helm install mysql -f mvalule.yaml stable/mysql[root@linuxea.com /data/helm]# helm install -f mvalule.yaml stable/mysql --generate-name WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config WARNING: This chart is deprecated NAME: mysql-1649582722 LAST DEPLOYED: Sun Apr 10 05:25:23 2022 NAMESPACE: default STATUS: deployed REVISION: 1 NOTES: MySQL can be accessed via port 3306 on the following DNS name from within your cluster: mysql-1649582722.default.svc.cluster.local To get your root password run: MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1649582722 -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo) To connect to your database: 1. Run an Ubuntu pod that you can use as a client: kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il .....此时,可以通过kubectl describe 来查看传入的变量[root@linuxea.com ~]# kubectl describe pod mysql-1649582722-dbcdcb895-tjvsr Name: mysql-1649582722-dbcdcb895-tjvsr Namespace: default .... Environment: MYSQL_ROOT_PASSWORD: <set to the key 'mysql-root-password' in secret 'mysql-1649582722'> Optional: false MYSQL_PASSWORD: <set to the key 'mysql-password' in secret 'mysql-1649582722'> Optional: false MYSQL_USER: linuxea MYSQL_DATABASE: linuxea ...pod启动完成,我们通过上面的提示进入到mysql[root@linuxea.com /data/helm]# MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1649582722 -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo) [root@linuxea.com /data/helm]# echo $MYSQL_ROOT_PASSWORD 8FFSmw66je [root@linuxea.com /data/helm]# kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il If you don't see a command prompt, try pressing enter. root@ubuntu:/# root@ubuntu:/# apt-get update && apt-get install mysql-client -y ... Setting up mysql-client-5.7 (5.7.33-0ubuntu0.16.04.1) ... Setting up mysql-client (5.7.33-0ubuntu0.16.04.1) ... Processing triggers for libc-bin (2.23-0ubuntu11.3) ... ... root@ubuntu:/# mysql -h mysql-1649582722 -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 12 Server version: 5.7.30 MySQL Community Server (GPL) Copyright (c) 2000, 2021, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 
mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | linuxea | | mysql | | performance_schema | | sys | +--------------------+ 5 rows in set (0.00 sec) mysql> 环境变量基本上使用两种方式来传递配置信息:value那么除了使用value 或者-f指定yaml文件来覆盖values的值外,还可以指定多个值set直接在命令行指定需要覆盖的配置,但是对于深度嵌套的不建议使用--set通常--set优先于-f,-f将值持久化在configmap中如果我们使用--value配置文件已经配置了enabled: false,同时有配置了--set persistence.enabled: true, 而此时的enabled是等于true的,--set优先于--value对于value可以通过get value查看[root@linuxea.com /data/helm]# helm get values mysql-1649582722 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config USER-SUPPLIED VALUES: mysqlDatabase: linuxea mysqlPassword: linuxea.com mysqlUser: linuxea persistence: enabled: false对于一个已经运行的chart而言,使用helm upgrade来更新,或者使用--reset来删除--set--set可以接受0个或者多个键值对,最直接的常用的修改镜像,就是--set image:11,而对于多个,使用逗号隔开即可--set name:linuxea,image:11,如果在yaml文件中就换行,如下name: linuxea image: 11如果参数配置中所示,假如我们要修改的是mysql的参数[root@linuxea.com /data/helm]# cat mvalule.yaml mysqlUser: linuxea mysqlPassword: linuxea.com mysqlDatabase: linuxea persistence: enabled: false两种方式sethelm install mysql -f mvalule.yaml stable/mysql --set mysqlUser:linuxea,mysqlPassword:linuxea.com,mysqlDatabase:linuxea,persistence.enabled:false对于有换行的空格,使用.来拼接, persistence.enabled:false对应如下persistence: enabled: false其他1,如果有更多的参数,比如:--set args={run,/bin/start,--devel}args: - run - /bin/start - --devel2,除此之外,我们可以借用索引的方式,如下metadata: name: etcd-k8s namespace: monitoring这样的话,就变成了metadata[0].name=etcd-k8s,metadata[0].namespace=monitoring3,对于特殊字符可以使用反斜杠和双引号来做name: "a,b"这样的set明天就变成了--set name=a\,b4,其他包含反斜杠的nodeSelector: kubernetes.io/role: master这时候的--set就需要转义:--set nodeSelector."kubernetes\.io/role"=master本地安装通过fetch可以将chart放到本地[root@linuxea.com /data/helm]# helm fetch stable/mysql WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config [root@linuxea.com /data/helm]# ls mysql-1.6.9.tgz -ll -rw-r--r-- 1 root root 11589 Apr 10 06:14 mysql-1.6.9.tgz而后就可以直接使用helm安装[root@linuxea.com /data/helm]# helm install mysql mysql-1.6.9.tgz 或者解压[root@linuxea.com /data/helm]# tar xf mysql-1.6.9.tgz [root@linuxea.com /data/helm]# ls mysql Chart.yaml README.md templates values.yaml [root@linuxea.com /data/helm]# tree mysql mysql ├── Chart.yaml ├── README.md ├── templates │ ├── configurationFiles-configmap.yaml │ ├── deployment.yaml │ ├── _helpers.tpl │ ├── initializationFiles-configmap.yaml │ ├── NOTES.txt │ ├── pvc.yaml │ ├── secrets.yaml │ ├── serviceaccount.yaml │ ├── servicemonitor.yaml │ ├── svc.yaml │ └── tests │ ├── test-configmap.yaml │ └── test.yaml └── values.yaml 2 directories, 15 files安装[root@linuxea.com /data/helm]# helm install mysql ./mysql升级与回滚helm的upgrade命令会更新你提供的信息,并且只会更新上一个版本,这种较小的更新更快捷每,进行一次upgrade都会生成新的配置版本,比如secret,默认似乎有15个版本,这将会是一个问题。添加一个mysqlRootPassword: www.linuxea.com 进行upgrademysqlUser: linuxea mysqlPassword: linuxea.com mysqlDatabase: linuxea mysqlRootPassword: www.linuxea.com persistence: enabled: falseupgradehelm upgrade mysql-1649582722 stable/mysql -f mvalule.yaml如下[root@linuxea.com /data/helm]# helm upgrade mysql-1649582722 stable/mysql -f mvalule.yaml WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. 
This is insecure. Location: /root/.kube/config WARNING: This chart is deprecated Release "mysql-1649582722" has been upgraded. Happy Helming! NAME: mysql-1649582722 LAST DEPLOYED: Sun Apr 10 06:29:00 2022 NAMESPACE: default STATUS: deployed REVISION: 2 NOTES: MySQL can be accessed via port 3306 on the following DNS name from within your cluster: mysql-1649582722.default.svc.cluster.local ...更新完成后REVISION已经变成2了通过helm ls查看[root@linuxea.com /data/helm]# helm ls WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION mysql-1649582722 default 2 2022-04-10 06:29:00.717252842 -0400 EDT deployed mysql-1.6.9 5.7.30 而后可以通过helm get values mysql-1649582722 查看[root@linuxea.com /data/helm]# helm get values mysql-1649582722 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config USER-SUPPLIED VALUES: mysqlDatabase: linuxea mysqlPassword: linuxea.com mysqlRootPassword: www.linuxea.com mysqlUser: linuxea persistence: enabled: false此时的mysql的新密码已经更新到secret,但是并没有在mysql生效的 ,我们就进行回滚下[root@linuxea.com /data/helm]# kubectl get secret --namespace default mysql-1649582722 -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo www.linuxea.comrollbackls查看helm的名称[root@linuxea.com /data/helm]# helm ls WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION mysql-1649582722 default 2 2022-04-10 06:29:00.717252842 -0400 EDT deployed mysql-1.6.9 5.7.30 查看mysql-1649582722历史版本[root@linuxea.com /data/helm]# helm history mysql-1649582722 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION 1 Sun Apr 10 05:25:23 2022 superseded mysql-1.6.9 5.7.30 Install complete 2 Sun Apr 10 06:29:00 2022 deployed mysql-1.6.9 5.7.30 Upgrade complete进行rollback[root@linuxea.com /data/helm]# helm rollback mysql-1649582722 1 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config Rollback was a success! Happy Helming!在来查看values[root@linuxea.com /data/helm]# helm get values mysql-1649582722 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config USER-SUPPLIED VALUES: mysqlDatabase: linuxea mysqlPassword: linuxea.com mysqlUser: linuxea persistence: enabled: false在查看密码[root@linuxea.com /data/helm]# kubectl get secret --namespace default mysql-1649582722 -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo 8FFSmw66je而现在的history就是三个版本了[root@linuxea.com /data/helm]# helm history mysql-1649582722 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. 
This is insecure. Location: /root/.kube/config REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION 1 Sun Apr 10 05:25:23 2022 superseded mysql-1.6.9 5.7.30 Install complete 2 Sun Apr 10 06:29:00 2022 superseded mysql-1.6.9 5.7.30 Upgrade complete 3 Sun Apr 10 06:39:44 2022 deployed mysql-1.6.9 5.7.30 Rollback to 1 这是因为版本是一直在新增,而3的版本就是回滚到1了,Rollback to 1 其他参数在整个helm中有一些非常有意思且重要的参数,比如常见的install和upgrade,当我们不确定一个程序是否被安装的时候,我们就需要安装,否则就是更新,于是可以使用upgrade --install,一般而言,我们可能还需要一个名称空间,那么就有了另外一个参数--create-namesapce,如果不存在就创建helm upgrade --install --create-namespace --namespace linuxea hmysql ./mysql如果名称空间不存在就创建,如果mysql没有install就install,否则就upgrade同时,当helm执行完成后, list列表中的状态已经为deployed,但是并不能说明pod已经装备好了,这两者之间并没有直接关系的,此时需要一些配置参数辅助--wait等待所有pod就绪,包含共享存储的pvc,就绪状态准备情况,以及svc,如果超过五分钟,这个版本就会标记失败-- timeout等待kubernetes命令完成,默认五分钟--no-hooks跳过命令的运行的hooks--recreate-pods仅适用于upgrade和rollback,在helm3中这个标志将导致重新创建所有的pod参考kubernetes helm概述(49)kubernetes helm简单使用(50)kubernetes 了解chart(51)kubernetes helm安装efk(52)
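把上文提到的 upgrade --install、--create-namespace、-f、--set 与 --wait 组合起来,可以得到一条幂等的发布命令。下面是一个假设性的草稿:release 名称 hmysql、名称空间 linuxea 与超时时间都只是演示用途,values 文件沿用前文的 mvalule.yaml。
helm repo add stable https://mirror.azure.cn/kubernetes/charts/
helm repo update
# 不存在则安装,存在则升级;--wait 等待 pod、pvc、svc 就绪,超过 --timeout 则标记失败
helm upgrade --install hmysql stable/mysql \
  --namespace linuxea --create-namespace \
  -f mvalule.yaml \
  --set persistence.enabled=false \
  --wait --timeout 10m
# 发布后可用 helm -n linuxea history hmysql 查看版本,必要时 helm -n linuxea rollback hmysql <REVISION>
注意 --set 的键值之间用等号连接,多个键值用逗号分隔;这条命令可以反复执行,适合直接放进 CI 脚本。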
2022-04-20
linuxea:ingress-nginx的rewrite与canary
ingress-nginx的官网提供了更多的一些配置信息,包括url重写,金丝雀,尽管金丝雀支持并不完美,ingress-nginx仍然是最受欢迎的ingress之一。在上一篇中,我介绍了ingress-nginx应用常见的两种方式,并且采用的最新的版本,早期有非常陈旧的版本在使用。鉴于此,随后打算将ingress-nginx重新理一遍,于是就有了这篇,后续可能还会有ingress-nginx本身只是做一个声明,从哪里来到哪里去而已,并不会做一些流量转发,而核心是annotations的class是可以借助作一些操作的,比如修改城Traefik或者自己定制创建一个deployment的pod,我们至少需要指定标签如下 selector: matchLabels: app: linuxea_app version: v0.1.32 template: metadata: labels: app: linuxea_app version: v0.1.32而后在service关联 selector: app: linuxea_app version: v0.1.32并配置一个ingress,name: myapp必须是这个service的name , 且必须在同一个名称空间,如下apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test namespace: default spec: tls: - hosts: - linuxea.test.com secretName: nginx-ingress-secret ingressClassName: nginx rules: - host: linuxea.test.com http: paths: - path: / pathType: Prefix backend: service: name: myapp port: number: 80 --- apiVersion: v1 kind: Service metadata: name: myapp namespace: default spec: selector: app: linuxea_app version: v0.1.32 ports: - name: http targetPort: 80 port: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: dpment-linuxea namespace: default spec: replicas: 7 selector: matchLabels: app: linuxea_app version: v0.1.32 template: metadata: labels: app: linuxea_app version: v0.1.32 spec: containers: - name: nginx-a image: marksugar/nginx:1.14.b ports: - name: http containerPort: 80这样就完成了一个最简单不过的ingress-nginx的域名配置,当然也可以配置一个404的页面之类的而整个过程大致如下请求到达LB后被分发到ingress-controller,controller会一直关注ingress对象,匹配到对应的信息后将请求转发到其中的某一个pod之上,而service对后端的pod做发现和关联。意思就是ingress-nginx不是直接到service,而是只从service中获取pod的信息,service仍然负责发现后端的pod状态,而请求是通过ingress通过ed到pod的url重写ingress-nginx大致如下,除此之外,我们可以对annotation做一些配置,较为常见的rewrite功能在实际中,访问的url可能如下linuxea.test.com/qsv1 linuxea.test.com/qsv2 linuxea.test.com/qsv3诸如此类,而要进行这种跳转,需要前端代码支持,或者配置rewrite进行转发,如下NameDescriptionValuesnginx.ingress.kubernetes.io/rewrite-targetTarget URI where the traffic must be redirectedstringnginx.ingress.kubernetes.io/ssl-redirectIndicates if the location section is only accessible via SSL (defaults to True when Ingress contains a Certificate)boolnginx.ingress.kubernetes.io/force-ssl-redirectForces the redirection to HTTPS even if the Ingress is not TLS Enabledboolnginx.ingress.kubernetes.io/app-rootDefines the Application Root that the Controller must redirect if it's in / contextstringnginx.ingress.kubernetes.io/use-regexIndicates if the paths defined on an Ingress use regular expressionsboolCaptured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.nginx.ingress.kubernetes.io/rewrite-target 将请求转发到目标比如现在要将/app转发到/app/modiy,那么就可以如下正则表达/app(/|$)(.*)并且rewrite-target,的值是一个$2占位符 annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 ... 
paths: - path: /app(/|$)(.*)而这种方式i还有一个问题,就是你的js代码很有可能是绝对路径的,因此你不能够打开js,js会404 。 要么修改为相对路径,要么就需要重新配置一个重定向假设你的jss样式在/style下,还可能有图片是在image下以及js的路径,和其他增删改查的页面,现在的跳转后央视404,可以使用configuration-snippet重写configuration-snippet annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/app-root: /linuxea.html nginx.ingress.kubernetes.io/configuration-snippet: | rewrite ^/style/(.*)$ /app/style/$1 redirect; rewrite ^/image/(.*)$ /app/image/$1 redirect; rewrite ^/javascripts/(.*)$ /app/javascripts/$1 redirect; rewrite ^/modiy/(.*)$ /app/modiy/$1 redirect; rewrite ^/create/(.*)$ /app/create/$1 redirect; rewrite ^/delete/(.*)$ /app/delete/$1 redirect; |表示换行,而后rewrite以/style/路径下的所有跳转到/app/style/下,完成对style添加前缀/appapp-root如果此时我们希望访问的根目录不是默认的,可以使用app-root来进行跳转,比如跳转到linuxea.html如果是一个目录,就可以写一个路径,比如/app/ annotations: nginx.ingress.kubernetes.io/app-root: /linuxea.html[root@Node-172_16_100_50 ~/ingress]# kubectl apply -f ingress.yaml ingress.networking.k8s.io/test configured现在就完成了自动跳转basic auth认证在nginx里面是可以配置basic auth认证的,非常简单的一个配置,在ingress-nginx中也是可以的我们可以进行yum安装一个httpd的应用,或者在搜索引擎搜索一个在线 htpasswd 生成器来生成一个用户mark,密码linuxea.comyum install httpd -y# htpasswd -c auth mark New password: Re-type new password: Adding password for user mark在线生成即可# cat auth1 mark:$apr1$69ocxsQr$omgzB53m59LeCVxlOAsTr/创建一个secret,将这个文件配置即可kubectl create secret generic bauth --from-file=auth1# kubectl create secret generic bauth --from-file=auth1 secret/bauth created # kubectl get secret bauth NAME TYPE DATA AGE bauth Opaque 1 21s # kubectl describe secret bauth Name: bauth Namespace: default Labels: <none> Annotations: <none> Type: Opaque Data ==== auth1: 43 bytes而后添加到ingress-nginx中,如下 annotations: ... nginx.ingress.kubernetes.io/auth-type: basic nginx.ingress.kubernetes.io/auth-secret: auth-1 nginx.ingress.kubernetes.io/auth-realm: 'Authentication failed, please try again'auth-secret是引入刚创建的bauth,而auth-type指定了类型,如下apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test namespace: default annotations: nginx.ingress.kubernetes.io/app-root: /linuxea.html nginx.ingress.kubernetes.io/auth-type: basic nginx.ingress.kubernetes.io/auth-secret: auth-1 nginx.ingress.kubernetes.io/auth-realm: 'Authentication failed, please try again' spec: tls: - hosts: - linuxea.test.com secretName: nginx-ingress-secret ingressClassName: nginx rules: - host: linuxea.test.com http: paths: - path: / pathType: Prefix backend: service: name: myapp port: number: 80执行一下# kubectl apply -f ingress.yaml ingress.networking.k8s.io/test configured如下灰度我们通常使用最多的滚动更新,蓝绿,灰度,而ingress-ngiinx是通过annotations配置来实现的,能满足金丝雀,蓝绿、ab测试缺少描述 部分此前我们配置了一个pod和一个service,要配置金丝雀那就需要在配置一组,而后我们在ingress中使用annotations来进行调用其他的一些class来完成一些操作配置nginx:v1.14.aapiVersion: apps/v1 kind: Deployment metadata: name: testv1 labels: app: testv1 spec: replicas: 5 selector: matchLabels: app: testv1 template: metadata: labels: app: testv1 spec: containers: - name: testv1 image: marksugar/nginx:1.14.a ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: testv1-service spec: ports: - port: 80 targetPort: 80 selector: app: testv1testv2apiVersion: apps/v1 kind: Deployment metadata: name: testv2 labels: app: testv2 spec: replicas: 5 selector: matchLabels: app: testv2 template: metadata: labels: app: testv2 spec: containers: - name: testv2 image: marksugar/nginx:1.14.b ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: testv2-service spec: ports: - port: 80 targetPort: 80 selector: app: testv2apply# kubectl apply -f testv1.yaml # kubectl 
apply -f testv2.yaml可以看到现在已经有两组# kubectl get pod NAME READY STATUS RESTARTS AGE testv1-9c974bd5d-c46dh 1/1 Running 0 19s testv1-9c974bd5d-j7fzn 1/1 Running 0 19s testv1-9c974bd5d-qp4tv 1/1 Running 0 19s testv1-9c974bd5d-thx4r 1/1 Running 0 19s testv1-9c974bd5d-x9rpf 1/1 Running 0 19s testv2-5767685995-f8z5s 1/1 Running 0 6s testv2-5767685995-htm74 1/1 Running 0 6s testv2-5767685995-k8sdv 1/1 Running 0 6s testv2-5767685995-mjd6c 1/1 Running 0 6s testv2-5767685995-prhld 1/1 Running 0 6s给testv1配置一个 ingress-v1.yamlapiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: testv1 namespace: default spec: ingressClassName: nginx rules: - host: test.mark.com http: paths: - path: / pathType: Prefix backend: service: name: testv1-service port: number: 80# kubectl apply -f ingress-v1.yaml 而后我们查看的版本信息# for i in $(seq 1 10);do curl -s test.mark.com/linuxea.html ;done linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0 linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0 linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0 linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0 linuxea-testv1-9c974bd5d-x9rpf.com ▍ 9cbb453617c73 ▍version number 1.0 linuxea-testv1-9c974bd5d-c46dh.com ▍ 4c0e80c7d9a34 ▍version number 1.0 linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0 linuxea-testv1-9c974bd5d-x9rpf.com ▍ 9cbb453617c73 ▍version number 1.0 linuxea-testv1-9c974bd5d-thx4r.com ▍ b9e074d68c3c7 ▍version number 1.0 linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0canary而后我们配置canary nginx.ingress.kuberentes.io/canary: "true" # 开启灰度发布机制,首先启用canary nginx.ingress.kuberentes.io/canary-weight: "30" # 分配30%的流量到当前的canary版本如下给testv2配置一个 ingress-v2.yaml 并配置canary权重apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: testv2 namespace: default annotations: nginx.ingress.kubernetes.io/canary: "true" nginx.ingress.kubernetes.io/canary-weight: "30" spec: ingressClassName: nginx rules: - host: test.mark.com http: paths: - path: / pathType: Prefix backend: service: name: testv2-service port: number: 80此时由于版本问题你或许会发现 有一个问题Error from server (BadRequest): error when creating "ingress-1.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "linuxea.test.com" and path "/" is already defined in ingress default/test而这个问题的根源在于没有验证webhook忽略具有不同的ingressclass的入口controller.admissionWebhooks.enabled=false并且在1.1.2修复我们安装1.1.2的ingress-nginxdocker pull k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c找到一个aliyun的docker pull registry.cn-shanghai.aliyuncs.com/wanfei/ingress-nginx-controller:v1.1.2 docker pull registry.cn-shanghai.aliyuncs.com/wanfei/kube-webhook-certgen:v1.1.1 docker pull registry.cn-shanghai.aliyuncs.com/wanfei/defaultbackend-amd64:1.5修改ingress-nginx的deployment.yaml而后在配置下应用声明式的文件# kubectl apply -f ingress-v2.yaml # kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE linuxea nginx linuxea.test.com 172.16.100.50 80 22h testv1 nginx test.mark.com 172.16.100.50 80 3m23s testv2 nginx test.mark.com 172.16.100.50 80 70sfor i in $(seq 1 10);do curl -s linuxea.test.com ;done# for i in $(seq 1 10);do curl -s test.mark.com/linuxea.html ;done linuxea-testv1-9c974bd5d-thx4r.com ▍ b9e074d68c3c7 ▍version number 1.0 linuxea-testv1-9c974bd5d-c46dh.com ▍ 4c0e80c7d9a34 ▍version number 1.0 linuxea-testv2-5767685995-mjd6c.com ▍ 1fa571f0e1e0e ▍version number 2.0 linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 
1.0 linuxea-testv1-9c974bd5d-thx4r.com ▍ b9e074d68c3c7 ▍version number 1.0 linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0 linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0 linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0 linuxea-testv1-9c974bd5d-x9rpf.com ▍ 9cbb453617c73 ▍version number 1.0 linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0这里的比例是大致的一个算法,而并不是固定的此时可以将weight配置成0撤销更新 nginx.ingress.kubernetes.io/canary-weight: "0"或者将weight配置成100完成更新 nginx.ingress.kubernetes.io/canary-weight: "100"参考:validating webhook should ignore ingresses with a different ingressclassvalidating webhook should ignore ingresses with a different ingressclassslack讨论Error: admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "xyz" and path "/" is already defined in ingress xxx #821kubernetes Ingress Controller (15)kubernetes Ingress nginx http以及7层https配置 (17)kubernetes Ingress nginx配置 (16)
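上文的 rewrite-target 与正则 path 是分散给出的,这里把它们拼成一个完整的 Ingress 清单草稿,域名与 service 名称沿用前文示例,仅作演示;带捕获组的 path 按 ingress-nginx 文档使用 ImplementationSpecific 类型。
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-rewrite
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: linuxea.test.com
    http:
      paths:
      - path: /app(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: myapp
            port:
              number: 80
访问 linuxea.test.com/app/modiy 时,$2 捕获到 modiy,请求被改写为后端的 /modiy,这正是前文 rewrite-target 想要达到的效果。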
2022-04-18
linuxea:k8s下kube-prometheus监控ingress-nginx
首先需要已经配置好了一个ingress-nginx亦或者使用ACK上的ingress-nginx鉴于对ingress-nginx的状态,或者流量的监控是有一定的必要性,配置监控的指标有助于了解更多细节通过使用kube-prometheus的项目来监控ingress-nginx,首先需要在nginx-ingress-controller的yaml中配置10254的端口,并且配置一个service,最后加入到ServiceMonitor即可。start如果是helm,则需要如下修改helm.. controller: metrics: enabled: true service: annotations: prometheus.io/port: "10254" prometheus.io/scrape: "true" ..如果不是 helm,则必须像这样编辑清单:服务清单: - name: prometheus port: 10254 targetPort: prometheusprometheus将会在service中被调用apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: "true" prometheus.io/port: "10254" .. spec: ports: - name: prometheus port: 10254 targetPort: prometheus ..deploymentapiVersion: v1 kind: Deployment metadata: annotations: prometheus.io/scrape: "true" prometheus.io/port: "10254" .. spec: ports: - name: prometheus containerPort: 10254 ..测试10254的/metrics的url能够被访问到bash-5.1$ curl 127.0.0.1:10254/metrics # HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles. # TYPE go_gc_duration_seconds summary go_gc_duration_seconds{quantile="0"} 1.9802e-05 go_gc_duration_seconds{quantile="0.25"} 3.015e-05 go_gc_duration_seconds{quantile="0.5"} 4.2054e-05 go_gc_duration_seconds{quantile="0.75"} 9.636e-05 go_gc_duration_seconds{quantile="1"} 0.000383868 go_gc_duration_seconds_sum 0.000972498 go_gc_duration_seconds_count 11 # HELP go_goroutines Number of goroutines that currently exist. # TYPE go_goroutines gauge go_goroutines 92 # HELP go_info Information about the Go environment.Service And ServiceMonitor另外需要配置一个ServiceMonitor, 这取决于kube-promentheus的发行版spec部分字段如下spec: endpoints: - interval: 15s # 15s频率 port: metrics # port的名称 path: /metrics # url路径 namespaceSelector: matchNames: - kube-system # ingress-nginx所在的名称空间 selector: matchLabels: app: ingress-nginx # ingress-nginx的标签最终配置如下:service在ingress-nginx的名称空间下配置,而ServiceMonitor在kube-prometheus的monitoring名称空间下,使用endpoints定义port名称,使用namespaceSelector.matchNames指定了ingress pod的名称空间,selector.matchLabels和标签apiVersion: v1 kind: Service metadata: name: ingress-nginx-metrics namespace: kube-system labels: app: ingress-nginx annotations: prometheus.io/port: "10254" prometheus.io/scrape: "true" spec: type: ClusterIP ports: - name: metrics port: 10254 targetPort: 10254 protocol: TCP selector: app: ingress-nginx --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: ingress-nginx-metrics namespace: monitoring spec: endpoints: - interval: 15s port: metrics path: /prometheus namespaceSelector: matchNames: - kube-system selector: matchLabels: app: ingress-nginxgrafana在grafana的dashboards中搜索ingress-nginx,得到的和github的官网的模板一样https://grafana.com/grafana/dashboards/9614?pg=dashboards&plcmt=featured-dashboard-4或者下面这个模板这些在prometheus的targets中被发现参考ingress-nginx monitoringprometheus and grafana install
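指标能够被抓取之后,通常还会补一条告警规则。下面是一个假设性的 PrometheusRule 草稿,依赖 kube-prometheus 的 Operator 与 ingress-nginx 暴露的 nginx_ingress_controller_requests 指标;标签需要与 Prometheus CR 的 ruleSelector 匹配,阈值 5% 只是演示,按实际情况调整。
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ingress-nginx-rules
  namespace: monitoring
  labels:
    prometheus: k8s        # 需与发行版中 Prometheus CR 的 ruleSelector 匹配
    role: alert-rules
spec:
  groups:
  - name: ingress-nginx.rules
    rules:
    - alert: IngressHigh5xxRate
      expr: |
        sum(rate(nginx_ingress_controller_requests{status=~"5.."}[5m]))
          / sum(rate(nginx_ingress_controller_requests[5m])) > 0.05
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "ingress-nginx 5xx 比例超过 5%"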
2022-04-16
linuxea:windows远程调试k8s环境
有些朋友问怎么在windows上调试自己的环境,刚好最近也在自己的虚拟环境调试,就整理下了文档以kubectl和helm以及kustomize为例下载对应的包你要正常使用当你包,必须是与你kubernetes版本匹配的,这些信息在他们的readme.md中都有介绍假如你的k8s 是1.20的,那你就不能使用与此版本差距太大的版本以免出现未知的问题而其他的大版本的包使用方式一直在发送变化https://dl.k8s.io/release/v1.20.11/bin/windows/amd64/kubectl.exe https://get.helm.sh/helm-v3.8.2-windows-amd64.zip https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv3.10.0将exe放置在一个位置,比如:C:\k8sbinPS C:\k8sbin> dir 目录: C:\k8sbin Mode LastWriteTime Length Name ---- ------------- ------ ---- -a---- 2022-04-14 1:47 46256128 helm.exe -a---- 2022-04-16 12:59 41438208 kubectl.exe -a---- 2021-02-10 8:03 15297536 kustomize.exe以win10为例,在左下角的搜索栏中,或者有一个放大镜,输入"环境变量"重新打开一个窗口PS C:\WINDOWS\system32> kubectl.exe version Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"windows/amd64"} Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it. PS C:\WINDOWS\system32> kustomize.exe version {Version:kustomize/v3.10.0 GitCommit:602ad8aa98e2e17f6c9119e027a09757e63c8bec BuildDate:2021-02-10T00:00:50Z GoOs:windows GoArch:amd64} PS C:\WINDOWS\system32> helm version version.BuildInfo{Version:"v3.8.2", GitCommit:"6e3701edea09e5d55a8ca2aae03a68917630e91b", GitTreeState:"clean", GoVersion:"go1.17.5"} PS C:\WINDOWS\system32>将kubernetes的config文件拿到本地cat /etc/kubernetes/kubelet.kubeconfig 在windwos上当前用户的加目录创建.kube,并将kubelet.kubeconfig 内容复制到一个config的文件中C:\Users\Administrator\.kube\configgetPS C:\Users\Administrator\.kube> kubectl.exe get pod NAME READY STATUS RESTARTS AGE dpment-linuxea-6bdfbd7b77-fr4pn 1/1 Running 9 10d dpment-linuxea-a-5b98f7fb86-9ff2f 1/1 Running 17 23d hello-run-96whr-pod 0/1 Completed 0 10d hello-run-pod 0/1 Completed 0 10d mysql-1649582722-dbcdcb895-tjvsr 1/1 Running 6 5d20h nfs-client-provisioner-597f7dd4b-h2nsg 1/1 Running 71 248d testv1-9c974bd5d-gl52m 1/1 Running 9 10d testv2-5767685995-mjd6c 1/1 Running 16 22d traefik-6866c896d5-dqlv6 1/1 Running 9 10d ubuntu 0/1 Error 0 5d19h whoami-7d666f84d8-8wmk4 1/1 Running 15 20d whoami-7d666f84d8-vlgb9 1/1 Running 9 10d whoamitcp-744cc4b47-24prx 1/1 Running 9 10d whoamitcp-744cc4b47-xrgqp 1/1 Running 9 10d whoamiudp-58f6cf7b8-b6njt 1/1 Running 9 10d whoamiudp-58f6cf7b8-jnq6c 1/1 Running 15 20d PS C:\Users\Administrator\.kube>如下图挂在windows共享目录仅限于内网共享使用如果是传统的共享,你需要创建用户,需要共享文件,权限指定,而后使用netstat -aon来过滤139,145,138端口权限是否开启添加用户权限到共享文件夹查看是否打开共享yum install cifs-utils -y挂载mount -t cifs -o username=share,password=share //172.16.100.3/helm /data/helm[root@liinuxea.com /data]# mkdir helm [root@liinuxea.com /data]# mount -t cifs -o username=share,password=share //172.16.100.3/helm /data/helm [root@liinuxea.com /data]# df -h|grep helm //172.16.100.3/helm 282G 275G 6.5G 98% /data/helm
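上面的 mount 命令重启后会失效,密码也直接出现在命令行里。下面是一个假设性的做法:把凭据放进单独的文件并写入 /etc/fstab,实现开机自动挂载;IP、共享名与挂载点沿用前文示例,按实际环境替换。
# 凭据文件权限收紧到 600,避免密码出现在 fstab 与命令历史中
cat > /root/.smbcredentials <<'EOF'
username=share
password=share
EOF
chmod 600 /root/.smbcredentials
# 写入 fstab 后用 mount -a 验证
echo '//172.16.100.3/helm /data/helm cifs credentials=/root/.smbcredentials,defaults 0 0' >> /etc/fstab
mount -a
df -h | grep helm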
2022-03-16
linuxea:kubernetes Events事件处理
应用程序并不总是在稳定状态下运行。会根据日期、时间或特定事件而波动。CPU 需求各不相同,运行时内存和网络需求也各不相同。而应用程序不断变化的需求,由Kubernetes来调度程序分配计算或内存资源,这些可能会导致Kubernetes上的node或者pod处于不可用状态的原因之一,而这些pod最终是关联到应用程序上,只有将这些资源被分配给其他应用程序或暂停,直到一切结束。这种重新分配可能有意或意外地发生。突然的性能波动可能导致集群内的节点故障——或其他由pod 驱逐、内核恐慌,内存溢出,或 VM 意外停机引起的中断。OOMKilledOOMKilled是为了保证vm的健康状态,在后台,OOM Killer 为每个正在运行的进程分配一个分数。分数越高,进程被杀死的可能性就越大, Kubernetes 会利用该分数来帮助决定要杀死哪些 pod。VM 上运行的 kubelet 监控内存消耗。如果 VM 上的资源变得稀缺,kubelet 将开始杀死 pod。从本质上讲,这个想法是为了保持 VM 的健康状况,以便在其上运行的所有 pod 都不会失败。多数的需求超过了少数的需求,少数被干掉。两种方式OOMKilled:限制过度使用OOMKilled:已达到容器限制限制过度使用当pod 限制的总和大于节点上的可用内存时,可能会发生OOMKilled: Limit Overcommit错误。例如,如果有一个具有 8 GB 可用内存的节点,可能会得到 8 个 pod,每个 pod 都需要 1 gig 内存。但是,即使其中一个 Pod 配置了 1.5 gigs 的限制,也会面临内存不足的风险。只需要一个 pod 出现流量峰值或未知的内存泄漏,Kubernetes 将被迫开始杀死 pod。达到容器限制虽然Limit Overcommit错误与节点上的内存总量有关,但Container Limit Reached通常归结为单个 pod。当 Kubernetes 检测到一个 pod 使用的内存超过了设置的限制时,它会杀死该 pod,并显示错误OOMKilled—Container Limit Reached。发生这种情况时,检查应用程序日志以尝试了解 pod 使用的内存超过设置限制的原因。可能有多种原因,例如流量激增或长时间运行的Kubernetes 作业导致它使用比平时更多的内存。如果在期间发现应用程序按预期运行并且它只需要更多内存来运行,可能会考虑增加 request 和 limit 的值。有效应对这些事件至关重要。但是,了解特定应用程序如何以及为何表现出此类行为同样重要。K8s 事件对象可以帮助提供一些上下文,但是默认的手段能到的东西有限事件事件在Kubernetes框架中是一个对象,它会自动生成以响应其他资源(如节点、Pod 或容器)的变化状态变化是最关键的部分。例如,pod 生命周期中的各个阶段——比如从pending到running的转换,或者成功或失败,或重启等状态可能会触发 K8s 事件。正如之前提到的,重新分配和调度也是如此。事件实际上是 K8s 领域的。它们可以深入了解基础架构的运行方式,同时为任何令人不安的行为提供信息。但是Kubernetes 中的事件日志记录并不完美。由于事件对象不被视为常规日志,因此默认情况下它们不包含在Kubernetes 日志中。此外,事件可能不会按预期显示以响应某些行为,例如由错误镜像引起的容器启动失败(例如[ImagePullBackoff)。我们可能希望某个事件在某些资源(ReplicaSet、StatefulSet等)上可见,但它们在 pod 本身上却是模糊可见的。因此,管理员需要执行调试时手动提取事件分类Kubernetes 部署中可能有多种类型的事件。当 Kubernetes 执行核心操作时,可能会出现许多状态转换Failed Events容器创建失败非常常见,因此创建失败或者更新的失败会因为很多原因导致,如镜像名称错误,权限等都会导致失败此外,节点本身可能会失败。当这些故障发生时,应用程序应该回退到正常的剩余节点Evicted Events驱逐事件相当普遍,因为 Kubernetes 可以在工作节点终止其各种 pod。某些 pod 可能会占用计算和内存资源,或者相对于它们各自的运行时消耗不成比例的数量。Kubernetes 通过驱逐 pod 并在别处分配磁盘、内存或 CPU 空间来解决这个问题。这并不总是一个问题,如果未能对他们的 pod 设置适当的限制,因此未能在容器级别限制资源消耗。如果没有定义的请求或限制参数,资源很可能会失控。被驱逐的事件可以揭示这些次优配置,当数百个容器同时运行时,这些配置很容易在一次新的洗牌中丢失。驱逐事件还可以告诉我们pod 何时被分配到新节点。Kubernetes Scheduling Events称为FailedScheduling事件,当 Kubernetes 调度程序无法找到合适的节点时发生。这个事件的介绍非常具有描述性。记录的消息准确地解释了为什么新节点没有获得必要的资源。内存或 CPU 不足也会触发此事件。Node-Specific Events节点上的事件可能指向系统内某处的不稳定或不健康行为。首先,NodeNotReady事件表示没有准备好进行 pod 调度的节点。相反,健康节点将被描述为“就绪”,而“unknown”节点无法轮询状态和响应。同时,Rebooted事件的含义很简单。由于特定操作或不可预见的崩溃,可能必须重新启动节点。最后,当集群无法访问时会触发HostPortConflict事件。这可能意味着选择的 nodePort 不正确。此外,DaemonSets可能与 hostPorts 冲突并导致此问题。过滤和监控事件Kubernetes 事件并非所有事件都是关键任务。就像确认消息一样,其中许多可以简单地应用于按设计或预期运行的系统。K8s 将这些表示为Normal。因此,所有事件都被分配了一个类型——Normal, Information, Warning。对于典型的集群,正常事件是无聊且不那么有用的;在故障排除过程中,它们不会透露任何内在价值。还为事件分配了一个简短的原因(触发此事件的原因)、一个Age、一个From(或起源)和一个提供更多上下文的Message 。这些本质上是与所有事件相关的“属性”或字段。它们还提供了一种从更大的集合中过滤某些事件的方法。过滤非常简单。在kubectl中,可以输入以下内容以过滤掉正常事件,--field-selector type!=Normal # kubectl -n preh5 get events --field-selector type!=Normal LAST SEEN TYPE REASON OBJECT MESSAGE 14m Warning Failed pod/gv-service-857fc678f8-wwtpv Error: ImagePullBackOff也可以根据最新的时间选项来匹配--sort-by='.lastTimestamp' # kubectl -n preh5 get events --field-selector type!=Normal --sort-by='.lastTimestamp' LAST SEEN TYPE REASON OBJECT MESSAGE 8m33s Warning Failed pod/gv-service-857fc678f8-wwtpv Error: ImagePullBackOff使用-o json查看json格式的信息# kubectl -n pre-veh5 get events --field-selector type!=Normal --sort-by='.lastTimestamp' -o json { "apiVersion": "v1", "items": [ { "apiVersion": "v1", "count": 1150, "eventTime": null, "firstTimestamp": "2022-03-16T03:18:45Z", "involvedObject": { "apiVersion": "v1", "fieldPath": "spec.containers{gis-vehicle-service}", ... 
"resourceVersion": "", "selfLink": "" } }同时可以使用kubectl get events --watch开持续观察日志由于使用的是kubectl,因此可以从 Kubernetes 组件日志或集成的第三方日志工具中提取这些事件。后者很常见,因为 K8s 事件通常不存储在默认的 Kubernetes 日志中可观测SloopSloop 监控 Kubernetes,记录事件和资源状态变化的历史,并提供可视化来帮助调试过去的事件。主要特征:允许查找和检查不再存在的资源(例如:发现之前部署中的 pod 正在使用的主机)。提供时间线显示,显示部署、ReplicaSet 和 StatefulSet 更新中相关资源。帮助调试瞬态和间歇性错误。允许查看 Kubernetes 应用程序中随时间的变化。是一个独立的服务,不依赖于分布式存储。我们安装sloop,通过docker run起来docker run --rm -it -p 28080:8080 -v ~/.kube/:/kube/ -v /etc/localtime:/etc/localtime -e KUBECONFIG=/kube/config sloopimage/sloop:latestdocker-composeversion: '2.2' services: sloopimage: image: sloopimage/sloop:latest container_name: sloopimage restart: always hostname: "sloopimage" #network_mode: "host" environment: - KUBECONFIG=/kube/config volumes: - /root/.kube/:/kube/ - /etc/localtime:/etc/localtime - /data:/some_path_on_host mem_limit: 2048m ports: - 28080:8080kube-eventerkube-eventer是阿里云的开源项目,他可以捕捉一些事件,配合钉钉发送报警, 但是他的频率非常高类似这样TestPlan Level:Warning Kind:Pod Namespace:test-test Name:vecial-96b6765cf-6zst5.16d65799bf506486 Reason:BackOff Timestamp:2022-03-16 15:42:14 Message:Back-off restarting failed container参考https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/在 Linux OOM Killer 中幸存
2022-03-14
linuxea:ingress-nginx应用常见的两种方式
有这么一个场景,前面是一个中国移动的云服务器,他们的网络环境是通过SNAT和DNAT的方式进行做的网络转换。这种情况在云环境中并不多见,一般发生在自建物理机房中,通常而言,DNAT和SNAT发生在防火墙或者外部路由阶段。一个IP地址和不同的端口来进行映射到后端的一个节点上的某个端口上,这种情况下走的必然是4层转发,这就有出现了一个问题,你的https是无法在这里进行卸载的,要么在前端架设代理层,要么在后端上添加https,而后端使用的kubernetes集群,如下后端上添加https,就需要有一个能够做域名解析一个层,这个时候就需要使用类似于nginx-ingress的东西来处理这个这个时候无论上面的两种情况如何解决,都会有一个负载和卸载后端故障节点的问题,如果后端或者前端某一个节点挂掉,这时候存在了单点故障和后端快速卸载的问题,那大概率变成了这个样子这样仍然有单点故障的问题,首先DNAT是不需要考虑的,因为脱离了我们的掌控,proxy层只做后端的服务器故障剔除或者上线,此时你大概率使用的一个4层的nginx以ingress-nginx为例,ingress-nginx配置方式基本上大概有两种1。使用默认的nodeport进行转发2。使用宿主机的网络名称空间进行转发通常这两种方式都被采用,第二种方式被认为更高效,原因是pod不进行隔离宿主机的网络名称空间,因此少了一层网络名称空间的消耗,这意味着从内核空间到用户空间少了一次转换,从这个角度来,他比nodeport快安装ingress-nginx我们下载1.1.1的ingress-nginxwget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/baremetal/deploy.yaml我找了一个人家已经搬好的镜像docker pull liangjw/kube-webhook-certgen:v1.1.1 docker pull liangjw/ingress-nginx-controller:v1.1.1--改名称docker tag liangjw/kube-webhook-certgen:v1.1.1 k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 docker tag liangjw/ingress-nginx-controller:v1.1.1 k8s.gcr.io/ingress-nginx/controller:v1.1.1如果此时没有外网可以保存到本地传递给其他节点docker save -o controller.tar k8s.gcr.io/ingress-nginx/controller:v1.1.1 docker save -o kube-webhook-certgen.tar k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 for i in 21 30 ;do scp controller.tar 172.16.1.$i:~/;done for i in 21 30 ;do scp kube-webhook-certgen.tar 172.16.1.$i:~/;done或许你可以配置daemonset或者配置nodeName来调度sed -i 's/kind: Deployment/kind: DaemonSet/g' deploy.yaml声明式运行[root@node1 ~/ingress]# kubectl apply -f deployment.yaml namespace/ingress-nginx created serviceaccount/ingress-nginx created configmap/ingress-nginx-controller created clusterrole.rbac.authorization.k8s.io/ingress-nginx created clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created role.rbac.authorization.k8s.io/ingress-nginx created rolebinding.rbac.authorization.k8s.io/ingress-nginx created service/ingress-nginx-controller-admission created service/ingress-nginx-controller created deployment.apps/ingress-nginx-controller created ingressclass.networking.k8s.io/nginx created validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created serviceaccount/ingress-nginx-admission created clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created role.rbac.authorization.k8s.io/ingress-nginx-admission created rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created job.batch/ingress-nginx-admission-create created job.batch/ingress-nginx-admission-patch created正常情况下,此时ingress-nginx-controller已经准备妥当[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ingress-nginx-admission-create-m7hph 0/1 Completed 0 114s 192.20.137.84 172.16.100.50 <none> <none> ingress-nginx-admission-patch-bmx2r 0/1 Completed 0 114s 192.20.180.14 172.16.100.51 <none> <none> ingress-nginx-controller-78c57d6886-m7mtc 1/1 Running 0 114s 192.20.180.16 172.16.100.51 <none> <none>我们什么都没有修改,因此她的nodeport的端口也是随机的,这里可以修改城固定的端口[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller NodePort 10.68.108.97 <none> 80:31837/TCP,443:31930/TCP 2m28s ingress-nginx-controller-admission ClusterIP 10.68.102.110 <none> 443/TCP 2m28s配置测试配置名为myapp一个nginxapiVersion: v1 kind: Service metadata: name: myapp namespace: default spec: selector: app: linuxea_app 
version: v0.1.32 ports: - name: http targetPort: 80 port: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: dpment-linuxea namespace: default spec: replicas: 7 selector: matchLabels: app: linuxea_app version: v0.1.32 template: metadata: labels: app: linuxea_app version: v0.1.32 spec: containers: - name: nginx-a image: marksugar/nginx:1.14.b ports: - name: http containerPort: 80确保可以通过custer-ip进行访问[root@linuxea-50 ~/ingress]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.68.0.1 <none> 443/TCP 273d myapp ClusterIP 10.68.211.186 <none> 80/TCP 9m42s mysql ClusterIP None <none> 3306/TCP 2d5h[root@linuxea-50 ~/ingress]# curl 10.68.211.186 linuxea-dpment-linuxea-6bdfbd7b77-tlh8k.com-127.0.0.1/8 192.20.137.98/32配置ingressingress.yamlapiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test namespace: default spec: ingressClassName: nginx rules: - host: linuxea.test.com http: paths: - path: / pathType: Prefix backend: service: name: myapp port: number: 80而后执行即可[root@linuxea-50 ~/ingress]# kubectl apply -f ingress.yaml ingress.networking.k8s.io/test configured [root@linuxea-50 ~/ingress]# kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE test nginx linuxea.test.com 172.16.100.51 80 30s我们进行访问测试首先先修改下本地的hosts文件作为域名解析# C:\Windows\System32\drivers\etc\hosts ...添加如下 172.16.100.51 linuxea.test.com那么现在我们已经可以通过默认的配置进行访问了。配置负载1.我们先将nginx绑定到节点上而此时,我们需要配置的是,只允许一部分node可以访问即可,于是我们添加一个标签选择器kubectl label node 172.16.100.50 beta.kubernetes.io/zone=ingress kubectl label node 172.16.100.51 beta.kubernetes.io/zone=ingress删掉之前的nodeSelector nodeSelector: kubernetes.io/os: linux改成 nodeSelector: beta.kubernetes.io/zone: ingress并且将deployment改成DaemonSetsed -i 's/kind: Deployment/kind: DaemonSet/g' [root@linuxea-50 ~/ingress]# kubectl apply -f deployment.yaml2 标签打完之后如下[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get node --show-labels|grep "ingress" 172.16.100.50 Ready master 276d v1.20.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/os=linux,beta.kubernetes.io/zone=ingress,kubernetes.io/arch=amd64,kubernetes.io/hostname=172.16.100.50,kubernetes.io/os=linux,kubernetes.io/role=master 172.16.100.51 Ready master 276d v1.20.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/os=linux,beta.kubernetes.io/zone=ingress,kubernetes.io/arch=amd64,kubernetes.io/hostname=172.16.100.51,kubernetes.io/os=linux,kubernetes.io/role=master如下[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE ingress-nginx-admission-create-kfj7v 0/1 Completed 0 22m 192.20.137.99 172.16.100.50 ingress-nginx-admission-patch-5dwvf 0/1 Completed 1 22m 192.20.137.110 172.16.100.50 ingress-nginx-controller-4q9qb 1/1 Running 0 12m 192.20.180.27 172.16.100.51 ingress-nginx-controller-n9qkl 1/1 Running 0 12m 192.20.137.92 172.16.100.50 此时,我们是通过172.16.100.51和172.16.100.50进行访问的,那么你的请求就需要打到这两个节点的nodeport端口上,因此我们将nodeport端口固定apiVersion: v1 kind: Service .... 
name: ingress-nginx-controller namespace: ingress-nginx spec: type: NodePort ipFamilyPolicy: SingleStack ipFamilies: - IPv4 ports: - name: http port: 80 protocol: TCP targetPort: http appProtocol: http nodePort: 31080 - name: https port: 443 protocol: TCP targetPort: https nodePort: 31443 appProtocol: https selector: ....2.配置nginx我们可以配置4层或者7层的nginx,这取决于我们需要做什么。如果是4层,只做一个对后端节点的卸载,如果是7层,那可以配置域名等yum install nginx-mod-stream nginx安装了4层模块后会自动加载,我们在nginx.conf中配置即可stream { include stream/*.conf; }并创建目录mkdir stream创建配置文件如下upstream test-server { server 172.16.100.50:31080 max_fails=3 fail_timeout=1s weight=1; server 172.16.100.51:31080 max_fails=3 fail_timeout=1s weight=1; } log_format proxy '$remote_addr $remote_port - [$time_local] $status $protocol ' '"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"' ; access_log /data/logs/nginx/web-server.log proxy; server { listen 31080; proxy_connect_timeout 3s; proxy_timeout 3s; proxy_pass test-server; }我们访问测试下[root@linuxea-49 /etc/nginx]# tail -f /data/logs/nginx/web-server.log 172.16.100.3 4803 - [14/Mar/2022:00:38:27 +0800] 200 TCP "172.16.100.51:31080" "19763" "0.000" 172.16.100.3 4811 - [14/Mar/2022:00:38:29 +0800] 200 TCP "172.16.100.50:31080" "43999" "0.000" 172.16.100.3 4812 - [14/Mar/2022:00:38:30 +0800] 200 TCP "172.16.100.51:31080" "44105" "0.000" 172.16.100.3 4813 - [14/Mar/2022:00:38:31 +0800] 200 TCP "172.16.100.50:31080" "43944" "0.000" 172.16.100.3 4816 - [14/Mar/2022:00:38:34 +0800] 200 TCP "172.16.100.51:31080" "3464" "0.000" 172.16.100.3 4819 - [14/Mar/2022:00:38:43 +0800] 200 TCP "172.16.100.50:31080" "44105" "0.001" 172.16.100.3 4820 - [14/Mar/2022:00:38:44 +0800] 200 TCP "172.16.100.51:31080" "44105" "0.000" 172.16.100.3 4821 - [14/Mar/2022:00:38:47 +0800] 200 TCP "172.16.100.50:31080" "8660" "0.000" 172.16.100.3 4825 - [14/Mar/2022:00:39:06 +0800] 200 TCP "172.16.100.51:31080" "42747" "0.000" 172.16.100.3 4827 - [14/Mar/2022:00:39:09 +0800] 200 TCP "172.16.100.50:31080" "32058" "0.000"如下配置负载2我们修改ingress-nginx配置文件,采用hostNetwork: true,ingress-nginx的pod将不会隔离网络名称空间,采用宿主机的网络,这样就可以直接使用service,而不是nodeport1.修改网络模式并修改dnsPolicy ,一旦hostnetwork: true,dnsPolicy就不能在是ClusterFirst,而应该是ClusterFirstWithHostNet,只有这样才能在集群和宿主机上都能进行解析spec: selector: ... 
spec: hostNetwork: true dnsPolicy: ClusterFirstWithHostNet containers: - name: controller image: k8s.gcr.io/ingress-nginx/controller:v1.1.1 ....如下[root@linuxea-50 ~/ingress]# kubectl apply -f deployment.yaml2[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE ingress-nginx-admission-create-kfj7v 0/1 Completed 0 23h 192.20.137.99 172.16.100.50 ingress-nginx-admission-patch-5dwvf 0/1 Completed 1 23h 192.20.137.110 172.16.100.50 ingress-nginx-controller-5nd59 1/1 Running 0 46s 172.16.100.51 172.16.100.51 ingress-nginx-controller-zzrsz 1/1 Running 0 85s 172.16.100.50 172.16.100.50 当你修改完成后,你会发现他用的宿主机的网卡,这就是没有隔离网络名称空间,这样好处相对nodeport,是要快[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx exec -it ingress-nginx-controller-zzrsz -- ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether a2:5a:55:54:d1:1d brd ff:ff:ff:ff:ff:ff inet 172.16.100.50/24 brd 172.16.100.255 scope global noprefixroute eth0 valid_lft forever preferred_lft forever inet6 fe80::b68f:449c:af0f:d91f/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN link/ether 02:42:c8:d3:08:a9 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever 4: calib6c2ec954f8@docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff inet6 fe80::ecee:eeff:feee:eeee/64 scope link valid_lft forever preferred_lft forever 6: califa7cddb93a8@docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP ....2.配置nginx 7层代理而后我们可以直接配置nginx,不用关系svc的事情那么现在,我们可以把证书配置在7层的这个代理上面我们直接进行配置一个ssl的自签证书openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout linuxea.key -out linuxea.crt -subj /C=CH/ST=ShangHai/L=Xian/O=Devops/CN=linuxea.test.com[root@linuxea-49 /etc/nginx]# ll ssl total 12 -rw-r--r-- 1 root root 424 Mar 14 22:23 dhparam.pem -rw-r--r-- 1 root root 1285 Mar 14 22:31 linuxea.crt -rw-r--r-- 1 root root 1704 Mar 14 22:31 linuxea.key日志格式 log_format upstream2 '$proxy_add_x_forwarded_for $remote_user [$time_local] "$request" $http_host' '$body_bytes_sent "$http_referer" "$http_user_agent" $ssl_protocol $ssl_cipher' '$request_time [$status] [$upstream_status] [$upstream_response_time] "$upstream_addr"'而后直接在conf.d下引入k8s.confupstream web { server 172.16.100.50:80 max_fails=3 fail_timeout=1s weight=1; server 172.16.100.51:80 max_fails=3 fail_timeout=1s weight=1; } #server { # listen 80; # server_name https://linuxea.test.com/; # if ($scheme = 'http' ) { rewrite ^(.*)$ https://$host$1 permanent; } # index index.html index.htm index.php default.html default.htm default.php; #} server { listen 80; server_name linuxea.test.com; # if ($scheme = 'http' ) { rewrite ^(.*)$ https://$host$1 permanent; } index index.html index.htm index.php default.html default.htm default.php; #limit_conn conn_one 20; #limit_conn perserver 20; #limit_rate 100k; #limit_req zone=anti_spider burst=10 nodelay; #limit_req zone=req_one burst=5 nodelay; access_log /data/logs/nginx/web-server.log upstream2; location / { proxy_pass https://web; include proxy.conf; } } server { listen 443 ssl; 
2. Configure the nginx layer-7 proxy

With hostNetwork in place we can point nginx straight at the nodes and no longer have to care about the Service. That also means the TLS certificate can live on this layer-7 proxy. Generate a self-signed certificate:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout linuxea.key -out linuxea.crt -subj /C=CH/ST=ShangHai/L=Xian/O=Devops/CN=linuxea.test.com

[root@linuxea-49 /etc/nginx]# ll ssl
total 12
-rw-r--r-- 1 root root  424 Mar 14 22:23 dhparam.pem
-rw-r--r-- 1 root root 1285 Mar 14 22:31 linuxea.crt
-rw-r--r-- 1 root root 1704 Mar 14 22:31 linuxea.key

Log format:

log_format upstream2 '$proxy_add_x_forwarded_for $remote_user [$time_local] "$request" $http_host'
                     '$body_bytes_sent "$http_referer" "$http_user_agent" $ssl_protocol $ssl_cipher'
                     '$request_time [$status] [$upstream_status] [$upstream_response_time] "$upstream_addr"';

Then include a k8s.conf under conf.d:

upstream web {
    server 172.16.100.50:80 max_fails=3 fail_timeout=1s weight=1;
    server 172.16.100.51:80 max_fails=3 fail_timeout=1s weight=1;
}

#server {
#    listen 80;
#    server_name https://linuxea.test.com/;
#    if ($scheme = 'http' ) { rewrite ^(.*)$ https://$host$1 permanent; }
#    index index.html index.htm index.php default.html default.htm default.php;
#}

server {
    listen 80;
    server_name linuxea.test.com;
    # if ($scheme = 'http' ) { rewrite ^(.*)$ https://$host$1 permanent; }
    index index.html index.htm index.php default.html default.htm default.php;
    #limit_conn conn_one 20;
    #limit_conn perserver 20;
    #limit_rate 100k;
    #limit_req zone=anti_spider burst=10 nodelay;
    #limit_req zone=req_one burst=5 nodelay;
    access_log /data/logs/nginx/web-server.log upstream2;

    location / {
        proxy_pass https://web;
        include proxy.conf;
    }
}

server {
    listen 443 ssl;
    server_name linuxea.test.com;
    #include fangzhuru.conf;
    ssl_certificate ssl/linuxea.crt;
    ssl_certificate_key ssl/linuxea.key;
    access_log /data/logs/nginx/web-server.log upstream2;
    # include ssl-params.conf;

    location / {
        proxy_pass https://web;
        include proxy.conf;
    }
}

proxy.conf looks like this:

proxy_connect_timeout 1000s;
proxy_send_timeout 2000;
proxy_read_timeout 2000;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
proxy_redirect off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header REMOTE-HOST $remote_addr;
proxy_hide_header Vary;
proxy_set_header Accept-Encoding '';
proxy_set_header Host $host;
proxy_set_header Referer $http_referer;
proxy_set_header Cookie $http_cookie;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

3. Access test

4. Configure HTTPS on ingress-nginx

Instead of terminating TLS on the external nginx, we can also configure HTTPS on ingress-nginx itself.

[root@linuxea-50 ~/ingress]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout linuxea.key -out linuxea.crt -subj /C=CH/ST=ShangHai/L=Xian/O=Devops/CN=linuxea.test.com
Generating a 2048 bit RSA private key
......................+++
............................................+++
writing new private key to 'linuxea.key'
-----

Create the secret:

[root@linuxea-50 ~/ingress]# kubectl create secret tls nginx-ingress-secret --cert=linuxea.crt --key=linuxea.key
secret/nginx-ingress-secret created

Check the secret:

[root@linuxea-50 ~/ingress]# kubectl get secret nginx-ingress-secret
NAME                   TYPE                DATA   AGE
nginx-ingress-secret   kubernetes.io/tls   2      24s

[root@linuxea-50 ~/ingress]# kubectl describe secret nginx-ingress-secret
Name:         nginx-ingress-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1285 bytes
tls.key:  1700 bytes

Then add the tls field to the Ingress:

spec:
  tls:
  - hosts:
    - linuxea.test.com
    secretName: nginx-ingress-secret

The full manifest:

[root@linuxea-50 ~/ingress]# cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  namespace: default
spec:
  tls:
  - hosts:
    - linuxea.test.com
    secretName: nginx-ingress-secret
  ingressClassName: nginx
  rules:
  - host: linuxea.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80

Finally, add port 443 to the layer-4 proxy:

upstream test-server443 {
    server 172.16.100.50:443 max_fails=3 fail_timeout=1s weight=1;
    server 172.16.100.51:443 max_fails=3 fail_timeout=1s weight=1;
}

access_log /data/logs/nginx/web-server.log proxy;

server {
    listen 443;
    proxy_connect_timeout 3s;
    proxy_timeout 3s;
    proxy_pass test-server443;
}

upstream test-server {
    server 172.16.100.50:80 max_fails=3 fail_timeout=1s weight=1;
    server 172.16.100.51:80 max_fails=3 fail_timeout=1s weight=1;
}

log_format proxy '$remote_addr $remote_port - [$time_local] $status $protocol '
                 '"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"';

access_log /data/logs/nginx/web-server.log proxy;

server {
    listen 80;
    proxy_connect_timeout 3s;
    proxy_timeout 3s;
    proxy_pass test-server;
}
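Before relying on the access log below, the 443 path can be probed by hand. A minimal sketch, assuming the stream proxy address is the same placeholder as before and that the certificate served is the self-signed one created for linuxea.test.com:

# placeholder: the nginx stream proxy host; replace with the real address
PROXY_HOST=172.16.100.49

# -k because the certificate is self-signed; --resolve avoids editing /etc/hosts
curl -kv --resolve "linuxea.test.com:443:${PROXY_HOST}" https://linuxea.test.com/ -o /dev/null

# confirm the certificate presented by ingress-nginx is the one stored in the tls secret
echo | openssl s_client -connect "${PROXY_HOST}:443" -servername linuxea.test.com 2>/dev/null \
  | openssl x509 -noout -subject -dates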
"1492" "0.000" 172.16.100.3 13234 - [14/Mar/2022:23:05:58 +0800] 200 TCP "172.16.100.50:443" "547" "0.001" 172.16.100.3 13234 - [14/Mar/2022:23:05:58 +0800] 200 TCP "172.16.100.50:443" "547" "0.001" 172.16.100.3 13235 - [14/Mar/2022:23:06:01 +0800] 200 TCP "172.16.100.51:443" "1227" "0.000" 172.16.100.3 13235 - [14/Mar/2022:23:06:01 +0800] 200 TCP "172.16.100.51:443" "1227" "0.000"参考kubernetes Ingress nginx http以及7层https配置 (17)
2022-03-14