2022-07-01
linuxea: automatic and manual build triggers with GitLab and Jenkins (6)
In the previous chapters we set up the basic components: Nexus 3 around the Java build, SkyWalking, and the ability to package code and build images. Now we need to string these together into a pipeline, bundle the SkyWalking agent into the image, and configure the necessary parameters. On the Jenkins side we use a deliberately simple approach, a pipeline plus a few Groovy functions, to cover at least the following scenario.

Scenario 1: party A wants a code push or a tag to trigger a Jenkins build; before the build, SonarQube scans the code against a simple quality gate, and the result is folded into the pipeline above. Strictly speaking SonarQube configuration has its subtleties, but for the overall picture it is only one stage of the pipeline here, so this article does not design scan policies and does not push scan results back to GitLab; it only surfaces the report file in Jenkins. Those refinements can come later if time allows. Throughout, I use a plain pipeline, not a shared library.

After reading this article you will know a simple implementation of each item in this list:

- triggering Jenkins from GitLab (this chapter)
- using Jenkins credentials (this chapter)
- JUnit configuration
- a simple SonarQube scan
- building Docker images inside Docker
- packaging the Java-based SkyWalking agent (this chapter)
- managing Kustomize k8s manifests in GitLab
- deploying with kubectl
- tracking kubectl deployment status
- pushing build status to DingTalk

The topology is shown in the figure below.

## 1. Add the SkyWalking agent

In "nexus3-based code build and harbor image packaging (3)" we already produced a java-hello-world package that serves port 8086, with the Dockerfile and related files prepared; we continue from that state. If you do not have it, clone it from that page.

1. Download the SkyWalking Java agent (8.11.0). For the Dockerfile below to work, the package must either be downloaded onto the Jenkins server or baked into the image:

   https://www.apache.org/dyn/closer.cgi/skywalking/java-agent/8.11.0/apache-skywalking-java-agent-8.11.0.tgz

2. On GitLab, create a `java` group and a `java-demo` project, and push the code, including its Dockerfile, to the repository.

3. Add a COPY for the agent in the Dockerfile and reference it in the start command. The docker-compose volume mapping is `/data/jenkins-latest/package:/usr/local/package`, so we keep the SkyWalking package under `/data/jenkins-latest/package` on the host and use the `/usr/local/package` path in the Dockerfile:

```
COPY /usr/local/package/skywalking-agent /skywalking-agent
```

then pull it into the start command with `-javaagent:/skywalking-agent/skywalking-agent.jar`:

```
CMD java ${JAVA_OPTS} -javaagent:/skywalking-agent/skywalking-agent.jar -jar *.jar
```

The Dockerfile:

```dockerfile
FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202
MAINTAINER www.linuxea.com by mark
ENV JAVA_OPTS="\
    -server \
    -Xms2048m \
    -Xmx2048m \
    -Xmn512m \
    -Xss256k \
    -XX:+UseConcMarkSweepGC \
    -XX:+UseCMSInitiatingOccupancyOnly \
    -XX:CMSInitiatingOccupancyFraction=70 \
    -XX:+HeapDumpOnOutOfMemoryError \
    -XX:HeapDumpPath=/data/logs" \
    MY_USER=linuxea \
    MY_USER_ID=316
RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} \
    && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} \
    && mkdir /data/logs -p
COPY target/*.jar /data/
COPY /usr/local/package/skywalking-agent /skywalking-agent
WORKDIR /data
USER linuxea
CMD java ${JAVA_OPTS} -javaagent:/skywalking-agent/skywalking-agent.jar -jar *.jar
```

Note: we need the trace-ignore-plugin to filter out URLs the tracing system should ignore, so we configure it per its documentation. This is well worth doing. Matching follows the Ant Path style, e.g. `/path/*`, `/path/**`, `/path/?`. Copy `apm-trace-ignore-plugin-x.jar` into `agent/plugins` and restart the agent to activate the plugin. So we copy the plugin into `plugins/`:

```bash
tar xf apache-skywalking-java-agent-8.11.0.tar.gz
cd skywalking-agent/
cp optional-plugins/apm-trace-ignore-plugin-8.11.0.jar plugins/
```

There are two ways to configure the ignore patterns (in this setup they are configured in the k8s YAML); the system property takes the higher priority:

1. set the system property `skywalking.trace.ignore_path` to the paths to ignore, separated by commas;
2. copy `/agent/optional-plugins/apm-trace-ignore-plugin/apm-trace-ignore-plugin.config` into `/agent/config/` and add the filter rules, for example:

```
trace.ignore_path=/your/path/1/**,/your/path/2/**
```

4. Clone the GitLab `java-demo` project locally, move the java-hello-world project files (directory `java-helo-word`) into it together with the Dockerfile:

```
[root@Node-172_16_100_48 /data]# git clone git@172.16.100.47:java/java-demo.git
Cloning into 'java-demo'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Receiving objects: 100% (3/3), done.
[root@Node-172_16_100_48 /data]# mv java-helo-word/* java-demo/
[root@Node-172_16_100_48 /data]# tree java-demo/
java-demo/
├── bbin.png
├── cn-site-service.iml
├── Dockerfile
..........
23 directories, 26 files
```

After the move, the Dockerfile in the repository is exactly the one shown above; the SkyWalking changes to the Dockerfile are complete. Commit and push:

```
[root@linuxea-48 /data/java-demo]# git add . && git commit -m "first commit" && git push -u origin main
[main 2f6d866] first commit
 25 files changed, 545 insertions(+)
 create mode 100644 Dockerfile
 create mode 100644 bbin.png
 create mode 100644 cn-site-service.iml
 create mode 100644 fenghuang.png
 create mode 100644 index.html
 create mode 100644 pom.xml
 create mode 100644 src/main/java/com/dt/info/InfoSiteServiceApplication.java
 create mode 100644 src/main/java/com/dt/info/controller/HelloController.java
 create mode 100644 src/main/resources/account.properties
 create mode 100644 src/main/resources/application.yml
 create mode 100644 src/main/resources/log4j.properties
...........
remote:
remote: To create a merge request for main, visit:
remote:   http://172.16.100.47/java/java-demo/-/merge_requests/new?merge_request%5Bsource_branch%5D=main
remote:
To git@172.16.100.47:java/java-demo.git
 * [new branch]      main -> main
Branch main set up to track remote branch main from origin.
```

With the code in GitLab, we can configure Jenkins.

### B. A fresh jar

You can also generate an empty Java package to test with. Prepare a jar, either an existing Java program or a freshly downloaded empty one: on https://start.spring.io/ keep the defaults, pick Java 8, click GENERATE, download the demo package and unzip it. Then push it to GitLab by cloning the project and adding the demo package:

```
Administrator@DESKTOP-RD8S1SJ MINGW64 /h/k8s-1.20.2/gitops
$ git clone git@172.16.100.47:pipeline-ops/2022-test.git
Cloning into '2022-test'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (3/3), done.

$ unzip demo.zip -d 2022-test
$ git add . && git commit -m "first commit" && git push
```

## 2. Connect Jenkins and GitLab

To let GitLab trigger builds automatically we need a webhook on the GitLab side and a plugin on the Jenkins side: once an event fires in GitLab, the webhook calls Jenkins and the pipeline job starts. Inside the pipeline we will then work out whether the build was triggered by GitLab or by something else.

### 2.1 Jenkins

Install the Generic Webhook Trigger plugin, then New Item → enter a name → choose Pipeline. For example, create a project called linuxea-2022, tick Generic Webhook Trigger, and in the token field below enter marksugar.

Test pipeline:

```groovy
pipeline {
    // node this pipeline runs on
    agent any
    // pipeline options
    options {
        skipStagesAfterUnstable()
    }
    // pipeline stages
    stages {
        // stage 1: fetch the code
        stage("CheckOut") {
            steps {
                script {
                    println("fetching code")
                }
            }
        }
        stage("Build") {
            steps {
                script {
                    println("running the build")
                }
            }
        }
    }
    post {
        always  { script { println("always runs when the pipeline ends") } }
        success { script { println("runs when the pipeline succeeds") } }
        failure { script { println("runs when the pipeline fails") } }
        aborted { script { println("runs when the pipeline is aborted") } }
    }
}
```

Trigger it manually:

```
curl --location --request GET 'http://172.16.100.48:58080/generic-webhook-trigger/invoke/?token=marksugar'
```

Run it once:

```
[root@linuxea-47 ~]# curl --location --request GET 'http://172.16.100.48:58080/generic-webhook-trigger/invoke/?token=marksugar'
{"jobs":{"linuxea-2022":{"regexpFilterExpression":"","triggered":true,"resolvedVariables":{},"regexpFilterText":"","id":4,"url":"queue/item/4/"}},"message":"Triggered jobs."}
```

### 2.2 Configure the GitLab webhook

1. (Optional) In Preferences, under Localization at the bottom, pick 简体中文 and save.
2. Admin Area → Settings → Network → expand Outbound requests and tick "Allow requests to the local network from web hooks and services".

Back in GitLab, enter the project and go to Settings → Webhooks → URL. Enter

```
http://172.16.100.48:58080/generic-webhook-trigger/invoke/?token=marksugar
```

where the value after `token=` is the token configured in Jenkins. Tick Push events and Tag push events, and at the bottom under SSL verification tick Enable SSL verification, then click Add webhook.

To test: at the bottom, open the Test dropdown, pick one of the events we enabled, and click it to simulate a push. Under Edit → View details at the bottom you can inspect the Request body; that payload is exactly what the pipeline can capture and parse.

Back in Jenkins, the build has started.

## 3. Distinguishing automatic from manual triggers

### 3.1 Correlating the two

Automatic triggering is now configured, but that is not enough: we want Jenkins to show which builds were automatic and which were started by hand. We therefore add a try/catch, and add two environment variables to the existing stages to handle manual triggers.

### 3.2 Manual parameters

- branch: the branch
- BASEURL: the git URL

### 3.3 The try block

We need the data of the incoming request, so we configure the trigger to capture the whole JSON of the request. For an automatic trigger, the resolved variables can be printed and the useful values stitched together like this:

```groovy
println("Trigger User: ${info_user_username}")
println("Trigger Branch: ${info_ref}")
println("Trigger event: ${info_event_name}")
println("Trigger application: ${info_project_name}")
println("Trigger version number: ${info_checkout_sha}")
println("Trigger commit message: ${info_commits_0_message}")
println("Trigger commit time: ${info_commits_0_timestamp}")
```

We only need part of this, so it becomes:

```groovy
try {
    println("Trigger Branch: ${info_ref}")
    RefName = "${info_ref.split("/")[-1]}"
    // custom display name
    currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}"
    // custom description
    currentBuild.description = "Trigger by user ${info_user_username} gitlab auto trigger \n branch: ${RefName} \n commit message: ${info_commits_0_message}"
    BUILD_TRIGGER_BY = "${info_user_username}"
    BASEURL = "${info_project_git_http_url}"
} catch (e) {
    BUILD_TRIGGER_BY = "${currentBuild.getBuildCauses()[0].userId}"
    currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} manual trigger \n branch: ${branch} \n git url: ${BASEURL}"
}

pipeline {
    // node this pipeline runs on
    agent any
    // pipeline options
    options {
        skipDefaultCheckout true
        skipStagesAfterUnstable()
        buildDiscarder(logRotator(numToKeepStr: '2'))
    }
    // pipeline stages
    stages {
        stage("env") {
            steps {
                script {
                    println("${BASEURL}")
                }
            }
        }
    }
}
```

Once triggered, it looks like the figure below.

### 3.4 Identifying GitLab

This is still not finished: we can now recognize an automatic trigger, but not where the automatic trigger came from. It is enough, though, to tell requests that came from GitLab apart from those that did not. In short, we need one parameter that identifies the source of the trigger, so we add a request parameter for the check: under Request parameters, enter onerun (the parameter used for the decision), then add it to the GitLab webhook URL:

```
http://172.16.100.48:58080/generic-webhook-trigger/invoke/?onerun=gitlabs&token=marksugar
```

Here onerun=gitlabs, and we test it in the try block:

```groovy
try {
    if ("${onerun}" == "gitlabs") {
        println("build from a request carrying gitlabs")
    }
} catch (e) {
    println("build from a request without gitlabs")
}
```

For this article the configuration becomes:

```groovy
try {
    if ("${onerun}" == "gitlabs") {
        println("Trigger Branch: ${info_ref}")
        RefName = "${info_ref.split("/")[-1]}"
        // custom display name
        currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}"
        // custom description
        currentBuild.description = "Trigger by user ${info_user_username} auto trigger \n branch: ${RefName} \n commit message: ${info_commits_0_message}"
        BUILD_TRIGGER_BY = "${info_user_username}"
        BASEURL = "${info_project_git_http_url}"
    }
} catch (e) {
    BUILD_TRIGGER_BY = "${currentBuild.getBuildCauses()[0].userId}"
    currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} manual trigger \n branch: ${branch} \ngit: ${BASEURL}"
}

pipeline {
    agent any
    options {
        skipDefaultCheckout true
        skipStagesAfterUnstable()
        buildDiscarder(logRotator(numToKeepStr: '2'))
    }
    stages {
        stage("env") {
            steps {
                script {
                    println("${BASEURL}")
                }
            }
        }
    }
}
```

Build once manually, and once from the command line:

```
[root@linuxea-48 ~]# curl --location --request GET 'http://172.16.100.48:58080/generic-webhook-trigger/invoke/?onerun=gitlabs&token=marksugar' && echo
{"jobs":{"linuxea-2022":{"regexpFilterExpression":"","triggered":true,"resolvedVariables":{"info":"","onerun":"gitlabs","onerun_0":"gitlabs"},"regexpFilterText":"","id":14,"url":"queue/item/14/"}},"message":"Triggered jobs."}
```

With this configuration we can at least read the trigger type of every build off Jenkins.
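The try/catch above boils down to a small decision: if the request carried `onerun=gitlabs` and a GitLab payload, the build is automatic and the branch is the last segment of `refs/heads/<branch>`; otherwise it is manual. A minimal sketch of that logic in Python, outside Jenkins (the dictionary keys mirror the `info_*` resolved variables above; `classify_trigger` is a hypothetical helper, not part of the plugin):

```python
def classify_trigger(params):
    """Mimic the pipeline's try/catch: a build counts as automatic when the
    webhook request carried onerun=gitlabs together with a GitLab payload."""
    if params.get("onerun") == "gitlabs" and "info_ref" in params:
        # the branch name is the last path segment of refs/heads/<branch>
        ref_name = params["info_ref"].split("/")[-1]
        return ("automatic", ref_name)
    # manual builds fall back to the job's own 'branch' parameter
    return ("manual", params.get("branch", "unknown"))

print(classify_trigger({"onerun": "gitlabs", "info_ref": "refs/heads/main"}))  # → ('automatic', 'main')
print(classify_trigger({"branch": "dev"}))                                     # → ('manual', 'dev')
```

The same split-on-`/` trick is what `RefName = "${info_ref.split("/")[-1]}"` does in the Groovy above.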
2022-06-30
linuxea: SkyWalking 9.1 dynamic alarm configuration with Nacos, part 2 (5)
接着上一篇skywalking9.1基于nacos的动态告警配置一(4),这本是在一起的,由于字数问题,分为两篇来写。那接着skywalking的ES安装完成后,继续配置skywalking和nacos的动态配置部分2.skywalking与nacos准备了这么多,其实就是先把nacos配置i起来,而后在skywalking中来调用,那么现在,我们配置nacos到skywalking中。但是在此之前,我们要配置一个configmpa清单来来列出配置而后传入到skywalking中。2.1 准备configmap文件skywalking在早在8版本已经支持了动态配置,这些在/skywalking/config下的application.yml文件中,默认为none,需要修改这个环境变量来应用不同的配置中心,比如为nacos阅读官网了解更多动态配置内容configuration: selector: ${SW_CONFIGURATION:none}我们修改为nacos使用,提供的变量如下 nacos: # Nacos Server Host serverAddr: ${SW_CONFIG_NACOS_SERVER_ADDR:127.0.0.1} # Nacos Server Port port: ${SW_CONFIG_NACOS_SERVER_PORT:8848} # Nacos Configuration Group group: ${SW_CONFIG_NACOS_SERVER_GROUP:skywalking} # Nacos Configuration namespace namespace: ${SW_CONFIG_NACOS_SERVER_NAMESPACE:} # Unit seconds, sync period. Default fetch every 60 seconds. period: ${SW_CONFIG_NACOS_PERIOD:60} # Nacos auth username username: ${SW_CONFIG_NACOS_USERNAME:""} password: ${SW_CONFIG_NACOS_PASSWORD:""} # Nacos auth accessKey accessKey: ${SW_CONFIG_NACOS_ACCESSKEY:""} secretKey: ${SW_CONFIG_NACOS_SECRETKEY:""}因此,我们需要将配置注入到pod中在k8s里面,使用configmap中的值传入到skwalking-oap中即可创建configmap清单,调用nacos,如下:如果你是外置的nacos或者es就需要修改配置的链接地址。我这里是集群apiVersion: v1 kind: ConfigMap metadata: name: skywalking-to-nacos namespace: skywalking data: nacos.name: "nacos" nacos.addr: "nacos-0.nacos-headless.nacos.svc.cluster.local,nacos-1.nacos-headless.nacos.svc.cluster.local,nacos-2.nacos-headless.nacos.svc.cluster.local" nacos.port: "8848" nacos.group: "skywalking" nacos.namespace: "skywalking" nacos.synctime: "60" nacos.username: "nacos" nacos.password: "nacos"这些配配置在skywalk-oap中需要传入 - name: SW_CONFIGURATION valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.name - name: SW_CONFIG_NACOS_SERVER_ADDR valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.addr - name: SW_CONFIG_NACOS_SERVER_PORT valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.port - name: SW_CONFIG_NACOS_SERVER_GROUP valueFrom: configMapKeyRef: name: 
skywalking-to-nacos key: nacos.group - name: SW_CONFIG_NACOS_SERVER_NAMESPACE valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.namespace - name: SW_CONFIG_NACOS_PERIOD valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.synctime - name: SW_CONFIG_NACOS_USERNAME valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.username - name: SW_CONFIG_NACOS_PASSWORD valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.password2.2 安装skywalking当准备完成上面的配置后,yaml如下#ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: labels: app: skywalking name: skywalking-oap namespace: skywalking --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: skywalking namespace: skywalking labels: app: skywalking rules: - apiGroups: [""] resources: ["pods", "endpoints", "services", "nodes"] verbs: ["get", "watch", "list"] - apiGroups: ["extensions"] resources: ["deployments", "replicasets"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: skywalking namespace: skywalking labels: app: skywalking roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: skywalking subjects: - kind: ServiceAccount name: skywalking-oap namespace: skywalking --- # oap apiVersion: v1 kind: Service metadata: name: skywalking-oap namespace: skywalking labels: app: skywalking-oap spec: type: ClusterIP ports: - port: 11800 name: grpc - port: 12800 name: rest selector: app: skywalking-oap chart: skywalking-4.2.0 --- # connent to nacos apiVersion: v1 kind: ConfigMap metadata: name: skywalking-to-nacos namespace: skywalking data: nacos.name: "nacos" # nacos.addr: "nacos-0.nacos-headless.skywalking.svc.cluster.local:8848,nacos-1.nacos-headless.skywalking.svc.cluster.local:8848,nacos-2.nacos-headless.skywalking.svc.cluster.local:8848" nacos.addr: 
"nacos-0.nacos-headless.nacos.svc.cluster.local,nacos-1.nacos-headless.nacos.svc.cluster.local,nacos-2.nacos-headless.nacos.svc.cluster.local" nacos.port: "8848" nacos.group: "skywalking" nacos.namespace: "skywalking" nacos.synctime: "60" nacos.username: "nacos" nacos.password: "nacos" --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: skywalking-oap name: skywalking-oap namespace: skywalking spec: replicas: 1 selector: matchLabels: app: skywalking-oap template: metadata: labels: app: skywalking-oap chart: skywalking-4.2.0 spec: serviceAccount: skywalking-oap serviceAccountName: skywalking-oap affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 podAffinityTerm: topologyKey: kubernetes.io/hostname labelSelector: matchLabels: app: "skywalking" release: "skywalking" component: "oap" initContainers: - name: wait-for-elasticsearch image: registry.cn-hangzhou.aliyuncs.com/marksugar/busybox:1.30 imagePullPolicy: IfNotPresent command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 elasticsearch 9200 && exit 0 || sleep 5; done; exit 1'] containers: - name: oap image: registry.cn-hangzhou.aliyuncs.com/marksugar/skywalking-oap-server:9.1.0 # docker pull apache/skywalking-oap-server:8.8.1 imagePullPolicy: IfNotPresent livenessProbe: tcpSocket: port: 12800 initialDelaySeconds: 15 periodSeconds: 20 readinessProbe: tcpSocket: port: 12800 initialDelaySeconds: 15 periodSeconds: 20 ports: - containerPort: 11800 name: grpc - containerPort: 12800 name: rest env: - name: JAVA_OPTS value: "-Dmode=no-init -Xmx2g -Xms2g" - name: SW_CLUSTER value: kubernetes - name: SW_CLUSTER_K8S_NAMESPACE value: "default" - name: SW_CLUSTER_K8S_LABEL value: "app=skywalking,release=skywalking,component=oap" # 记录数据。 - name: SW_CORE_RECORD_DATA_TTL value: "2" # Metrics数据 - name: SW_CORE_METRICS_DATA_TTL value: "2" - name: SKYWALKING_COLLECTOR_UID valueFrom: fieldRef: fieldPath: metadata.uid - name: SW_STORAGE value: elasticsearch - name: 
SW_STORAGE_ES_CLUSTER_NODES value: "elasticsearch:9200" # nacos - name: SW_CONFIGURATION valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.name - name: SW_CONFIG_NACOS_SERVER_ADDR valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.addr - name: SW_CONFIG_NACOS_SERVER_PORT valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.port - name: SW_CONFIG_NACOS_SERVER_GROUP valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.group - name: SW_CONFIG_NACOS_SERVER_NAMESPACE valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.namespace - name: SW_CONFIG_NACOS_PERIOD valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.synctime - name: SW_CONFIG_NACOS_USERNAME valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.username - name: SW_CONFIG_NACOS_PASSWORD valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.password volumeMounts: - name: alarm-settings mountPath: /skywalking/config/alarm-settings.yml subPath: alarm-settings.yml volumes: - configMap: name: alarm-configmap name: alarm-settings --- # ui apiVersion: v1 kind: Service metadata: labels: app: skywalking-ui name: skywalking-ui namespace: skywalking spec: type: ClusterIP ports: - port: 80 targetPort: 8080 protocol: TCP selector: app: skywalking-ui --- apiVersion: apps/v1 kind: Deployment metadata: name: skywalking-ui namespace: skywalking labels: app: skywalking-ui spec: replicas: 1 selector: matchLabels: app: skywalking-ui template: metadata: labels: app: skywalking-ui spec: affinity: containers: - name: ui image: registry.cn-hangzhou.aliyuncs.com/marksugar/skywalking-ui:9.1.0 # docker pull apache/skywalking-ui:9.0.0 imagePullPolicy: IfNotPresent ports: - containerPort: 8080 name: page env: - name: SW_OAP_ADDRESS value: http://skywalking-oap:12800 --- # job apiVersion: batch/v1 kind: Job metadata: name: "skywalking-es-init" namespace: skywalking labels: app: skywalking-job spec: template: metadata: name: 
"skywalking-es-init" labels: app: skywalking-job spec: serviceAccount: skywalking-oap serviceAccountName: skywalking-oap restartPolicy: Never initContainers: - name: wait-for-elasticsearch image: registry.cn-hangzhou.aliyuncs.com/marksugar/busybox:1.30 imagePullPolicy: IfNotPresent command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 elasticsearch 9200 && exit 0 || sleep 5; done; exit 1'] containers: - name: oap image: registry.cn-hangzhou.aliyuncs.com/marksugar/skywalking-oap-server:9.1.0 imagePullPolicy: IfNotPresent env: - name: JAVA_OPTS value: "-Xmx2g -Xms2g -Dmode=init" - name: SW_STORAGE value: elasticsearch - name: SW_STORAGE_ES_CLUSTER_NODES value: "elasticsearch:9200"而后应用配置PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe apply -f .\9.1.yaml serviceaccount/skywalking-oap created clusterrole.rbac.authorization.k8s.io/skywalking created clusterrolebinding.rbac.authorization.k8s.io/skywalking created service/skywalking-oap created configmap/skywalking-to-nacos created deployment.apps/skywalking-oap created service/skywalking-ui created deployment.apps/skywalking-ui created job.batch/skywalking-es-init created启动完成后如下PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe -n skywalking get pod -w NAME READY STATUS RESTARTS AGE elasticsearch-64c9d98794-ndktz 1/1 Running 0 15m skywalking-es-init-p8j4z 0/1 Completed 0 6m13s skywalking-oap-64b87cf44c-cgb8w 1/1 Running 0 2m26s skywalking-ui-6c6f789f9f-qxxqd 1/1 Running 0 6m13s2.2.B 本地nacos配置#ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: labels: app: skywalking name: skywalking-oap namespace: skywalking --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: skywalking namespace: skywalking labels: app: skywalking rules: - apiGroups: [""] resources: ["pods", "endpoints", "services", "nodes"] verbs: ["get", "watch", "list"] - apiGroups: ["extensions"] resources: ["deployments", "replicasets"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 
kind: ClusterRoleBinding metadata: name: skywalking namespace: skywalking labels: app: skywalking roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: skywalking subjects: - kind: ServiceAccount name: skywalking-oap namespace: skywalking --- # oap apiVersion: v1 kind: Service metadata: name: skywalking-oap namespace: skywalking labels: app: skywalking-oap spec: type: ClusterIP ports: - port: 11800 name: grpc - port: 12800 name: rest selector: app: skywalking-oap chart: skywalking-4.2.0 --- # connent to nacos apiVersion: v1 kind: ConfigMap metadata: name: skywalking-to-nacos namespace: skywalking data: nacos.name: "nacos" nacos.addr: "172.16.15.136" nacos.port: "8848" nacos.group: "skywalking" nacos.namespace: "skywalking" nacos.synctime: "60" nacos.username: "nacos" nacos.password: "nacos" --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: skywalking-oap name: skywalking-oap namespace: skywalking spec: replicas: 1 selector: matchLabels: app: skywalking-oap template: metadata: labels: app: skywalking-oap chart: skywalking-4.2.0 spec: serviceAccount: skywalking-oap serviceAccountName: skywalking-oap affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 podAffinityTerm: topologyKey: kubernetes.io/hostname labelSelector: matchLabels: app: "skywalking" release: "skywalking" component: "oap" initContainers: - name: wait-for-elasticsearch image: registry.cn-hangzhou.aliyuncs.com/marksugar/busybox:1.30 imagePullPolicy: IfNotPresent command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 elasticsearch 9200 && exit 0 || sleep 5; done; exit 1'] containers: - name: oap image: registry.cn-hangzhou.aliyuncs.com/marksugar/skywalking-oap-server:9.1.0 # docker pull apache/skywalking-oap-server:8.8.1 imagePullPolicy: IfNotPresent livenessProbe: tcpSocket: port: 12800 initialDelaySeconds: 15 periodSeconds: 20 readinessProbe: tcpSocket: port: 12800 initialDelaySeconds: 15 periodSeconds: 20 ports: - containerPort: 11800 
name: grpc - containerPort: 12800 name: rest env: - name: JAVA_OPTS value: "-Dmode=no-init -Xmx2g -Xms2g" - name: SW_CLUSTER value: kubernetes - name: SW_CLUSTER_K8S_NAMESPACE value: "default" - name: SW_CLUSTER_K8S_LABEL value: "app=skywalking,release=skywalking,component=oap" # 记录数据。 - name: SW_CORE_RECORD_DATA_TTL value: "2" # Metrics数据 - name: SW_CORE_METRICS_DATA_TTL value: "2" - name: SKYWALKING_COLLECTOR_UID valueFrom: fieldRef: fieldPath: metadata.uid - name: SW_STORAGE value: elasticsearch - name: SW_STORAGE_ES_CLUSTER_NODES value: "elasticsearch:9200" # nacos - name: SW_CONFIGURATION valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.name - name: SW_CONFIG_NACOS_SERVER_ADDR valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.addr - name: SW_CONFIG_NACOS_SERVER_PORT valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.port - name: SW_CONFIG_NACOS_SERVER_GROUP valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.group - name: SW_CONFIG_NACOS_SERVER_NAMESPACE valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.namespace - name: SW_CONFIG_NACOS_PERIOD valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.synctime - name: SW_CONFIG_NACOS_USERNAME valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.username - name: SW_CONFIG_NACOS_PASSWORD valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.password volumeMounts: - name: alarm-settings mountPath: /skywalking/config/alarm-settings.yml subPath: alarm-settings.yml volumes: - configMap: name: alarm-configmap name: alarm-settings --- # ui apiVersion: v1 kind: Service metadata: labels: app: skywalking-ui name: skywalking-ui namespace: skywalking spec: type: ClusterIP ports: - port: 80 targetPort: 8080 protocol: TCP selector: app: skywalking-ui --- apiVersion: apps/v1 kind: Deployment metadata: name: skywalking-ui namespace: skywalking labels: app: skywalking-ui spec: replicas: 1 selector: matchLabels: app: 
skywalking-ui template: metadata: labels: app: skywalking-ui spec: affinity: containers: - name: ui image: registry.cn-hangzhou.aliyuncs.com/marksugar/skywalking-ui:9.1.0 # docker pull apache/skywalking-ui:9.0.0 imagePullPolicy: IfNotPresent ports: - containerPort: 8080 name: page env: - name: SW_OAP_ADDRESS value: http://skywalking-oap:12800 --- # job apiVersion: batch/v1 kind: Job metadata: name: "skywalking-es-init" namespace: skywalking labels: app: skywalking-job spec: template: metadata: name: "skywalking-es-init" labels: app: skywalking-job spec: serviceAccount: skywalking-oap serviceAccountName: skywalking-oap restartPolicy: Never initContainers: - name: wait-for-elasticsearch image: registry.cn-hangzhou.aliyuncs.com/marksugar/busybox:1.30 imagePullPolicy: IfNotPresent command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 172.16.15.136 9200 && exit 0 || sleep 5; done; exit 1'] containers: - name: oap image: registry.cn-hangzhou.aliyuncs.com/marksugar/skywalking-oap-server:9.1.0 imagePullPolicy: IfNotPresent env: - name: JAVA_OPTS value: "-Xmx2g -Xms2g -Dmode=init" - name: SW_STORAGE value: 172.16.15.136 - name: SW_STORAGE_ES_CLUSTER_NODES value: "172.16.15.136:9200"2.3 配置ingressskywalking的ui没有用户名密码认证,于是我们通过Ingress来配置一个简单的nginx auth认证即可用户名: linuxea密码: OpSOQKs,qDJ1dSvzsapiVersion: v1 data: auth: bGludXhlYTokYXByMSRidG1naTc0cyRKRUtJcThkVEUzT0k4bzVhMXFRdnEwCg== kind: Secret metadata: name: basic-auth namespace: skywalking type: Opaqueingress配置如下--- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: skywalking-ui namespace: skywalking annotations: nginx.ingress.kubernetes.io/auth-type: basic nginx.ingress.kubernetes.io/auth-secret: basic-auth nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - input: Trump " spec: ingressClassName: nginx rules: - host: skywalking.linuxea.com http: paths: - path: / pathType: Prefix backend: service: name: skywalking-ui port: number: 80分别创建他们PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe apply 
-f .\secret.yaml secret/basic-auth created PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe apply -f .\ingress.yaml ingress.networking.k8s.io/skywalking-ui created查看创建情况PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe -n skywalking get ingress NAME CLASS HOSTS ADDRESS PORTS AGE skywalking-ui nginx skywalking.linuxea.com 172.16.100.11,172.16.100.43 80 22s修改本地hosts文件解析后,进行访问172.16.100.11 nacos.linuxea.com skywalking.linuxea.com输入用户名: linuxea输入密码: OpSOQKs,qDJ1dSvzs2.3 日志信息查看ops的日志信息,如果应用后nacos没有配置,在日志中将会看到变动参阅官方文档获取更多信息key:core.default.log4j-xml module:core provider:default value(current):null key:agent-analyzer.default.uninstrumentedGateways module:agent-analyzer provider:default value(current):null key:configuration-discovery.default.agentConfigurations module:configuration-discovery provider:default value(current):null key:agent-analyzer.default.traceSamplingPolicy module:agent-analyzer provider:default value(current):null key:core.default.endpoint-name-grouping module:core provider:default value(current):SkyWalking endpoint rule key:core.default.apdexThreshold module:core provider:default value(current):null key:agent-analyzer.default.slowDBAccessThreshold module:agent-analyzer provider:default value(current):null key:alarm.default.alarm-settings module:alarm provider:default value(current):null 没有配置全是nullFollowing dynamic config items are available. 
--------------------------------------------- key:core.default.log4j-xml module:core provider:default value(current):null key:agent-analyzer.default.uninstrumentedGateways module:agent-analyzer provider:default value(current):null key:configuration-discovery.default.agentConfigurations module:configuration-discovery provider:default value(current):null key:agent-analyzer.default.traceSamplingPolicy module:agent-analyzer provider:default value(current):null key:core.default.endpoint-name-grouping module:core provider:default value(current):SkyWalking endpoint rule key:core.default.apdexThreshold module:core provider:default value(current):null key:agent-analyzer.default.slowDBAccessThreshold module:agent-analyzer provider:default value(current):null key:alarm.default.alarm-settings module:alarm provider:default value(current):null 2022-06-26 16:18:27,691 com.linecorp.armeria.common.util.SystemInfo 237 [main] INFO [] - hostname: skywalking-oap-64b87cf44c-d5l6n (from /proc/sys/kernel/hostname) 2022-06-26 16:18:27,692 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: core.default.log4j-xml: null 2022-06-26 16:18:27,694 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: agent-analyzer.default.uninstrumentedGateways: null 2022-06-26 16:18:27,695 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: configuration-discovery.default.agentConfigurations: null 2022-06-26 16:18:27,699 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: agent-analyzer.default.traceSamplingPolicy: null 2022-06-26 16:18:27,701 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: 
core.default.endpoint-name-grouping: null
2022-06-26 16:18:27,703 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: core.default.apdexThreshold: null
2022-06-26 16:18:27,704 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: agent-analyzer.default.slowDBAccessThreshold: null
2022-06-26 16:18:27,705 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: alarm.default.alarm-settings: null
2022-06-26 16:18:28,006 com.linecorp.armeria.server.Server 755 [armeria-boss-http-*:12800] INFO [] - Serving HTTP at /0.0.0.0:12800 - http://127.0.0.1:12800/
2022-06-26 16:18:28,007 org.apache.skywalking.oap.server.core.storage.PersistenceTimer 58 [main] INFO [] - persistence timer start
2022-06-26 16:18:28,008 org.apache.skywalking.oap.server.core.cache.CacheUpdateTimer 46 [main] INFO [] - Cache updateServiceInventory timer start
2022-06-26 16:18:28,369 org.apache.skywalking.oap.server.starter.OAPServerBootstrap 53 [main] INFO [] - Version of OAP: 9.1.0-f1f519c2

4. Nacos configuration management
Let's configure an example — alarm settings.
In the Nacos UI: Namespaces -> Create Namespace -> create a namespace named skywalking.
In the Nacos UI: Configurations -> select skywalking -> "+". In other words, the configuration is created inside the skywalking namespace.
In the Nacos UI: Configurations -> "+" -> enter the Data ID and group.
Back in the log output, configurations appear as key:value pairs.
key: alarm.default.alarm-settings
The configuration content is as follows:

rules:
  # Rule unique name, must be ended with `_rule`.
  service_resp_time_rule:              # rule name ending with _rule, must be unique
    metrics-name: service_resp_time    # metric name
    op: ">"                            # operator
    threshold: 1000                    # threshold
    period: 5                          # period
    count: 3                           # count
    silence-period: 15                 # silence period
    message: Response time of service {name} is more than 1000ms in 3 minutes of last 10 minutes.
  service_sla_rule:
    # Metrics value need to be long, double or int
    metrics-name: service_sla
    op: "<"
    threshold: 8000
    # The length of time to evaluate the metrics
    period: 10
    # How many times after the metrics match the condition, will trigger alarm
    count: 2
    # How many times of checks, the alarm keeps silence after alarm triggered, default as same as period.
    silence-period: 3
    message: Successful rate of service {name} is lower than 80% in 2 minutes of last 10 minutes
  service_resp_time_percentile_rule:
    # Metrics value need to be long, double or int
    metrics-name: service_percentile
    op: ">"
    threshold: 1000,1000,1000,1000,1000
    period: 10
    count: 3
    silence-period: 5
    message: Percentile response time of service {name} alarm in 3 minutes of last 10 minutes, due to more than one condition of p50 > 1000, p75 > 1000, p90 > 1000, p95 > 1000, p99 > 1000
  service_instance_resp_time_rule:
    metrics-name: service_instance_resp_time
    op: ">"
    threshold: 1000
    period: 10
    count: 2
    silence-period: 5
    message: Response time of service instance {name} is more than 1000ms in 2 minutes of last 10 minutes
  database_access_resp_time_rule:
    metrics-name: database_access_resp_time
    threshold: 1000
    op: ">"
    period: 10
    count: 2
    message: Response time of database access {name} is more than 1000ms in 2 minutes of last 10 minutes
  endpoint_relation_resp_time_rule:
    metrics-name: endpoint_relation_resp_time
    threshold: 1000
    op: ">"
    period: 10
    count: 2
    message: Response time of endpoint relation {name} is more than 1000ms in 2 minutes of last 10 minutes
dingtalkHooks:
  textTemplate: |-
    {
      "msgtype": "text",
      "text": {
        "content": "Apache SkyWalking Alarm: \n %s."
} } webhooks: - url: https://oapi.dingtalk.com/robot/send?access_token=322a01560303e2e96e8e1261d491ffc918c686bbfd2f7e846 secret: SEC71126603dfcf6594e96ffad825ac3e32d2a3fde0e643bbd80a7a473208fc5706填写到nacos中点击发布一旦创建完成,nacos中配置管理的skywalking名称空间下就有了alarm.default.alarm-settings配置信息而后回到oap中查看日志信息kubectl -n skywalking logs -f skywalking-oap-74b59b897c-2slvn2022-06-26 16:41:20,965 com.linecorp.armeria.common.util.SystemInfo 237 [main] INFO [] - hostname: skywalking-oap-6f58cbc9d8-hmfw9 (from /proc/sys/kernel/hostname) 2022-06-26 16:41:20,978 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: core.default.log4j-xml: null 2022-06-26 16:41:20,987 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: agent-analyzer.default.uninstrumentedGateways: null 2022-06-26 16:41:20,995 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: configuration-discovery.default.agentConfigurations: null 2022-06-26 16:41:20,998 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: agent-analyzer.default.traceSamplingPolicy: null 2022-06-26 16:41:21,005 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: core.default.endpoint-name-grouping: null 2022-06-26 16:41:21,008 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: core.default.apdexThreshold: null 2022-06-26 16:41:21,011 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: agent-analyzer.default.slowDBAccessThreshold: null 2022-06-26 16:41:21,034 
org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: alarm.default.alarm-settings: rules: # Rule unique name, must be ended with `_rule`. service_resp_time_rule: # 以_rule结尾的规则名称,必须唯一 metrics-name: service_resp_time # 名称 op: ">" # 运算符 threshold: 1000 # 阈值 period: 5 #周期 count: 3 # 次数 silence-period: 15 # 沉默时间 message: Response time of service {name} is more than 1000ms in 3 minutes of last 10 minutes. service_sla_rule: # Metrics value need to be long, double or int metrics-name: service_sla op: "<" threshold: 8000 # The length of time to evaluate the metrics period: 10 # How many times after the metrics match the condition, will trigger alarm count: 2 # How many times of checks, the alarm keeps silence after alarm triggered, default as same as period. silence-period: 3 message: Successful rate of service {name} is lower than 80% in 2 minutes of last 10 minutes service_resp_time_percentile_rule: # Metrics value need to be long, double or int metrics-name: service_percentile op: ">" threshold: 1000,1000,1000,1000,1000 period: 10 count: 3 silence-period: 5 message: Percentile response time of service {name} alarm in 3 minutes of last 10 minutes, due to more than one condition of p50 > 1000, p75 > 1000, p90 > 1000, p95 > 1000, p99 > 1000 service_instance_resp_time_rule: metrics-name: service_instance_resp_time op: ">" threshold: 1000 period: 10 count: 2 silence-period: 5 message: Response time of service instance {name} is more than 1000ms in 2 minutes of last 10 minutes database_access_resp_time_rule: metrics-name: database_access_resp_time threshold: 1000 op: ">" period: 10 count: 2 message: Response time of database access {name} is more than 1000ms in 2 minutes of last 10 minutes endpoint_relation_resp_time_rule: metrics-name: endpoint_relation_resp_time threshold: 1000 op: ">" period: 10 count: 2 message: Response time of endpoint relation {name} is more than 1000ms in 2 minutes of last 
10 minutes
dingtalkHooks:
  textTemplate: |-
    {
      "msgtype": "text",
      "text": {
        "content": "Apache SkyWalking Alarm: \n %s."
      }
    }
  webhooks:
    - url: https://oapi.dingtalk.com/robot/send?access_token=322a01560303e2ed491ffc918c686bbfd2f7e8420de36301a5d9536
      secret: SEC71126603dfcf6594e96ffad825ac3e32d2a3fde0e643b208fc5706

Whenever the configuration changes, the progress shows up in the log.

To verify, use a rule that fires constantly — anything above 1 millisecond triggers it:

rules:
  # Rule unique name, must be ended with `_rule`.
  service_resp_time_rule:
    metrics-name: service_resp_time
    op: ">"
    threshold: 1
    period: 1
    count: 1
    silence-period: 2
    #message: Service {name}\n Metric: response time\n Detail: exceeded 1ms at least once (within the last 2 minutes)
    message: Response time of service {name} exceeded 1ms at least once (within the last 2 minutes)
dingtalkHooks:
  textTemplate: |-
    {
      "msgtype": "text",
      "text": {
        "content": "Apache SkyWalking Alarm: \n%s."
      }
    }
  webhooks:
    - url: https://oapi.dingtalk.com/robot/send?access_token=18d15996fa24e3eabe8
      secret: SEC65d0a92e985fa0dc5df11ec88d89763a178102

The Nacos and SkyWalking configuration is now complete.

As for which OAL to configure: look at the files under /skywalking/config/oal — these are the oal-runner scripts. At runtime, pay attention to the aggregation functions (aggregation-function). OAL references:
https://github.com/apache/skywalking/blob/master/docs/en/concepts-and-designs/oal.md
https://github.com/apache/skywalking/blob/master/docs/en/guides/backend-oal-scripts.md
https://skywalking.apache.org/docs/skywalking-java/latest/en/setup/service-agent/java-agent/java-plugin-development-guide/#extension-logic-endpoint-tag-key-x-le
https://www.cnblogs.com/heihaozi/p/14958368.html
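Besides clicking through the console UI, the same alarm-settings content can be pushed with the Nacos v1 Open API (POST /nacos/v1/cs/configs). A minimal Python sketch — the host, group and namespace id below are assumptions taken from this setup, not values SkyWalking mandates; make them match your OAP nacos section and the namespace id shown in the Nacos UI:

```python
# Sketch: publish a configuration to Nacos programmatically instead of via the UI.
from urllib import parse, request

NACOS = "http://nacos.linuxea.com/nacos"  # assumed host from this setup

def build_publish_request(data_id, group, content, tenant=""):
    """Return (url, urlencoded-body) for Nacos' POST /v1/cs/configs."""
    body = parse.urlencode({
        "dataId": data_id,
        "group": group,
        "tenant": tenant,   # namespace *id* from the Nacos UI; empty means "public"
        "content": content,
    })
    return f"{NACOS}/v1/cs/configs", body

def publish(data_id, group, content, tenant=""):
    """POST the config; Nacos answers the literal string "true" on success."""
    url, body = build_publish_request(data_id, group, content, tenant)
    with request.urlopen(url, data=body.encode()) as resp:  # network call
        return resp.read() == b"true"

# Usage (not run here): push the alarm rules into the skywalking namespace
# publish("alarm.default.alarm-settings", "skywalking",
#         open("alarm-settings.yml").read(), tenant="<skywalking-namespace-id>")
```

Once published, the OAP log should show the same "Nacos config changed: alarm.default.alarm-settings: ..." line as when the config is saved through the console.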
2022年06月30日
2022-06-30
linuxea:skywalking9.1基于nacos的动态告警配置一(4)
In the previous article we built the necessary components and packaged the Docker image. But that is not enough — we also want some tracing observability, i.e. SkyWalking. I previously deployed SkyWalking 9.0; this time we will install and configure the latest 9.1.0. What is different this time is that SkyWalking supports dynamic configuration, so we will use it to manage alarms; see the official documentation (#dynamic-configuration). Dynamic configuration brings us to Nacos, a widely adopted platform for dynamic service discovery, configuration management and service management.

nacos
To use Nacos we need a backing database. It can run inside k8s or on a VM; I will deploy MySQL on a VM.

1. Prepare an external MySQL
First install docker and docker-compose, since I will use docker-compose for orchestration.

Prepare the yaml file:

version: '3.3'
services:
  nacos-mysql:
    container_name: nacos-mysql
    image: registry.cn-hangzhou.aliyuncs.com/marksugar/mysql:8.0.29-debian
    # docker pull mysql:8.0.29-debian
    # docker pull nacos/nacos-mysql:5.7
    # network_mode: host
    restart: always
    # docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql
    # docker exec -i some-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /some/path/on/your/host/all-databases.sql
    environment:
      - MYSQL_ROOT_PASSWORD=PASSWORDABCD
      - MYSQL_DATABASE=nacos_fat
      - MYSQL_USER=nacos
      - MYSQL_PASSWORD=PASSWORDABCD
      #- MYSQL_INITDB_SKIP_TZINFO=
    volumes:
      - /etc/localtime:/etc/localtime:ro # timezone
      - /data/mysql/nacos/data:/var/lib/mysql
      - /data/mysql/nacos/file:/var/lib/mysql-files
      - ./my.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
    ports:
      - 3306:3307

(The original listed MYSQL_ROOT_PASSWORD twice — first root, then PASSWORDABCD; only the last one takes effect, so the duplicate is dropped here.)

Prepare my.cnf:

# nacos sql init
# /docker-entrypoint-initdb.d
# /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
port=3307
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
#log-error = /var/log/mysql/error.log
# By default we only accept connections from localhost
#bind-address = 127.0.0.1
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

Start it:

docker-compose -f docker-compose.yaml up -d

Then import nacos.sql:

/*
 * Copyright 1999-2018 Alibaba Group Holding Ltd.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_info */ /******************************************/ CREATE TABLE `config_info` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(255) DEFAULT NULL, `content` longtext NOT NULL COMMENT 'content', `md5` varchar(32) DEFAULT NULL COMMENT 'md5', `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', `src_user` text COMMENT 'source user', `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip', `app_name` varchar(128) DEFAULT NULL, `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段', `c_desc` varchar(256) DEFAULT NULL, `c_use` varchar(64) DEFAULT NULL, `effect` varchar(64) DEFAULT NULL, `type` varchar(64) DEFAULT NULL, `c_schema` text, `encrypted_data_key` text NOT NULL COMMENT '秘钥', PRIMARY KEY (`id`), UNIQUE KEY `uk_configinfo_datagrouptenant` (`data_id`,`group_id`,`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_info_aggr */ /******************************************/ CREATE TABLE `config_info_aggr` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(255) NOT NULL COMMENT 'group_id', `datum_id` varchar(255) NOT NULL COMMENT 'datum_id', `content` longtext NOT NULL COMMENT '内容', `gmt_modified` datetime NOT NULL 
COMMENT '修改时间', `app_name` varchar(128) DEFAULT NULL, `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段', PRIMARY KEY (`id`), UNIQUE KEY `uk_configinfoaggr_datagrouptenantdatum` (`data_id`,`group_id`,`tenant_id`,`datum_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='增加租户字段'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_info_beta */ /******************************************/ CREATE TABLE `config_info_beta` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(128) NOT NULL COMMENT 'group_id', `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name', `content` longtext NOT NULL COMMENT 'content', `beta_ips` varchar(1024) DEFAULT NULL COMMENT 'betaIps', `md5` varchar(32) DEFAULT NULL COMMENT 'md5', `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', `src_user` text COMMENT 'source user', `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip', `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段', `encrypted_data_key` text NOT NULL COMMENT '秘钥', PRIMARY KEY (`id`), UNIQUE KEY `uk_configinfobeta_datagrouptenant` (`data_id`,`group_id`,`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_beta'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_info_tag */ /******************************************/ CREATE TABLE `config_info_tag` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(128) NOT NULL COMMENT 'group_id', `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id', `tag_id` varchar(128) NOT NULL COMMENT 'tag_id', `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name', `content` longtext NOT NULL COMMENT 'content', `md5` varchar(32) DEFAULT NULL COMMENT 'md5', `gmt_create` datetime NOT NULL DEFAULT 
CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', `src_user` text COMMENT 'source user', `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip', PRIMARY KEY (`id`), UNIQUE KEY `uk_configinfotag_datagrouptenanttag` (`data_id`,`group_id`,`tenant_id`,`tag_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_tag'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_tags_relation */ /******************************************/ CREATE TABLE `config_tags_relation` ( `id` bigint(20) NOT NULL COMMENT 'id', `tag_name` varchar(128) NOT NULL COMMENT 'tag_name', `tag_type` varchar(64) DEFAULT NULL COMMENT 'tag_type', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(128) NOT NULL COMMENT 'group_id', `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id', `nid` bigint(20) NOT NULL AUTO_INCREMENT, PRIMARY KEY (`nid`), UNIQUE KEY `uk_configtagrelation_configidtag` (`id`,`tag_name`,`tag_type`), KEY `idx_tenant_id` (`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_tag_relation'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = group_capacity */ /******************************************/ CREATE TABLE `group_capacity` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID', `group_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Group ID,空字符表示整个集群', `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值', `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量', `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值', `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数,,0表示使用默认值', `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值', `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量', `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP 
COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', PRIMARY KEY (`id`), UNIQUE KEY `uk_group_id` (`group_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='集群、各Group容量信息表'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = his_config_info */ /******************************************/ CREATE TABLE `his_config_info` ( `id` bigint(64) unsigned NOT NULL, `nid` bigint(20) unsigned NOT NULL AUTO_INCREMENT, `data_id` varchar(255) NOT NULL, `group_id` varchar(128) NOT NULL, `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name', `content` longtext NOT NULL, `md5` varchar(32) DEFAULT NULL, `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP, `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP, `src_user` text, `src_ip` varchar(50) DEFAULT NULL, `op_type` char(10) DEFAULT NULL, `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段', `encrypted_data_key` text NOT NULL COMMENT '秘钥', PRIMARY KEY (`nid`), KEY `idx_gmt_create` (`gmt_create`), KEY `idx_gmt_modified` (`gmt_modified`), KEY `idx_did` (`data_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='多租户改造'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = tenant_capacity */ /******************************************/ CREATE TABLE `tenant_capacity` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID', `tenant_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID', `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值', `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量', `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值', `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数', `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值', `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量', `gmt_create` datetime NOT NULL DEFAULT 
CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', PRIMARY KEY (`id`), UNIQUE KEY `uk_tenant_id` (`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='租户容量信息表'; CREATE TABLE `tenant_info` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `kp` varchar(128) NOT NULL COMMENT 'kp', `tenant_id` varchar(128) default '' COMMENT 'tenant_id', `tenant_name` varchar(128) default '' COMMENT 'tenant_name', `tenant_desc` varchar(256) DEFAULT NULL COMMENT 'tenant_desc', `create_source` varchar(32) DEFAULT NULL COMMENT 'create_source', `gmt_create` bigint(20) NOT NULL COMMENT '创建时间', `gmt_modified` bigint(20) NOT NULL COMMENT '修改时间', PRIMARY KEY (`id`), UNIQUE KEY `uk_tenant_info_kptenantid` (`kp`,`tenant_id`), KEY `idx_tenant_id` (`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant_info'; CREATE TABLE `users` ( `username` varchar(50) NOT NULL PRIMARY KEY, `password` varchar(500) NOT NULL, `enabled` boolean NOT NULL ); CREATE TABLE `roles` ( `username` varchar(50) NOT NULL, `role` varchar(50) NOT NULL, UNIQUE INDEX `idx_user_role` (`username` ASC, `role` ASC) USING BTREE ); CREATE TABLE `permissions` ( `role` varchar(50) NOT NULL, `resource` varchar(255) NOT NULL, `action` varchar(8) NOT NULL, UNIQUE INDEX `uk_role_permission` (`role`,`resource`,`action`) USING BTREE ); INSERT INTO users (username, password, enabled) VALUES ('nacos', '$2a$10$EuWPZHzz32dJN7jexM34MOeYirDdFAZm2kuWj7VEOJhhZkDrxfvUu', TRUE); INSERT INTO roles (username, role) VALUES ('nacos', 'ROLE_ADMIN');而后使用如下命令导入docker exec -i nacos-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" nacos_fat' < ./nacos.sql 如下[root@Node-172_16_100_54 /data/mysql]# docker exec -i nacos-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" nacos_fat' < ./nacos.sql mysql: [Warning] Using a password on the command line interface can be insecure.2. 
Prepare a PVC for Nacos
We already created MySQL and imported the SQL, but we also want to keep some logs around, so we create a PVC.
Before that, we create a dedicated namespace for Nacos:

apiVersion: v1
kind: Namespace
metadata:
  name: nacos

As follows:

PS J:\k8s-1.23.1-latest\nacos> kubectl.exe apply -f .\namespace.yaml
namespace/nacos created
PS J:\k8s-1.23.1-latest\nacos> kubectl.exe get ns
NAME              STATUS   AGE
argocd            Active   2d20h
default           Active   14d
ingress-nginx     Active   4d19h
kube-node-lease   Active   14d
kube-public       Active   14d
kube-system       Active   14d
marksugar         Active   3d19h
monitoring        Active   13d
nacos             Active   12s

The PVC configuration is below. Parameters of the provisioner:

- onDelete: if present with the value delete, the directory is deleted; with the value retain, it is kept. Default: the directory is archived on the share as archived-<volume.Name>.
- archiveOnDelete: if present with the value false, the directory is deleted. When onDelete is present, archiveOnDelete is ignored. Default: archived on the share as archived-<volume.Name>.
- pathPattern: a template for building the directory path from PVC metadata such as labels, annotations, name or namespace, referenced as ${.PVC.<metadata>}. Example: for folders named <pvc-namespace>-<pvc-name>, use ${.PVC.namespace}-${.PVC.name}. Default: n/a.

The yaml is as follows; we create it:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-latest
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
  onDelete: delete
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nacos-nfs
  namespace: nacos
spec:
  storageClassName: nfs-latest
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50G

Create it:

PS J:\k8s-1.23.1-latest\nacos> kubectl.exe apply -f .\nfs-pvc.yaml
storageclass.storage.k8s.io/nfs-latest created
persistentvolumeclaim/nacos-nfs created

Note: with pathPattern referencing ${.PVC.namespace}/${.PVC.name}, the NFS directory layout becomes the following:

[root@Node-172_16_100_49 /data/nfs-k8s/1.21.1]# ll
total 0
drwxrwxrwx 2 root root 21 Jun 13 00:02 default-test-claim-pvc-d64f6d7d-3be8-407e-bb3f-59efcd481e3d
drwxr-xr-x 3 root root 23 Jun 26 18:09 nacos
[root@Node-172_16_100_49 /data/nfs-k8s/1.21.1]# ls nacos/nacos-nfs/
data  logs  peer-finder

With the structure laid out as nacos/nacos-nfs/, things are easier to locate at times.

Check:

PS J:\k8s-1.23.1-latest\nacos> kubectl.exe -n nacos get pvc
NAME        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nacos-nfs   Bound
pvc-2e3c7f7b-648a-4b22-b2af-ca0ab26b8e8a 50G RWX nfs-latest 4m9s PS J:\k8s-1.23.1-latest\nacos> kubectl.exe -n nacos get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 13d nfs-latest k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 4m44s我们进入数据库验证是否成功创建[root@linuxea-54 /data/mysql]# docker exec -ti nacos-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"; ' mysql: [Warning] Using a password on the command line interface can be insecure. Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 14 Server version: 8.0.29 MySQL Community Server - GPL Copyright (c) 2000, 2022, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | mysql | | nacos_fat | | performance_schema | | sys | +--------------------+ 5 rows in set (0.00 sec) mysql> use nacos_fat Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Database changed mysql> show tables; +----------------------+ | Tables_in_nacos_fat | +----------------------+ | config_info | | config_info_aggr | | config_info_beta | | config_info_tag | | config_tags_relation | | group_capacity | | his_config_info | | permissions | | roles | | tenant_capacity | | tenant_info | | users | +----------------------+ 12 rows in set (0.00 sec) mysql> 授权远程 连接GRANT ALL PRIVILEGES ON *.* TO 'nacos'@'%' WITH GRANT OPTION;确保可以连通如果不通。大概率是网络问题,可以修改network模式,如下version: '3.3' services: nacos-mysql: container_name: nacos-mysql image: registry.cn-hangzhou.aliyuncs.com/marksugar/mysql:8.0.29-debian # docker pull mysql:8.0.29-debian 
# docker pull nacos/nacos-mysql:5.7 network_mode: host restart: always # docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql # docker exec -i some-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /some/path/on/your/host/all-databases.sql environment: - MYSQL_ROOT_PASSWORD=root - MYSQL_ROOT_PASSWORD=PASSWORDABCD - MYSQL_DATABASE=nacos_fat - MYSQL_USER=nacos - MYSQL_PASSWORD=PASSWORDABCD #- MYSQL_INITDB_SKIP_TZINFO= volumes: - /etc/localtime:/etc/localtime:ro # 时区2 - /data/mysql/nacos/data:/var/lib/mysql - /data/mysql/nacos/file:/var/lib/mysql-files - ./my.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf logging: driver: "json-file" options: max-size: "50M" #ports: #- 3307:33073. 安装naocsnacos yaml清单如下,我们需要修改--- apiVersion: v1 kind: Service metadata: name: nacos-headless namespace: skywalking labels: app: nacos # annotations: # service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" spec: ports: - port: 8848 name: server targetPort: 8848 - port: 9848 name: client-rpc targetPort: 9848 - port: 9849 name: raft-rpc targetPort: 9849 ## 兼容1.4.x版本的选举端口 - port: 7848 name: old-raft-rpc targetPort: 7848 clusterIP: None type: ClusterIP selector: app: nacos --- apiVersion: v1 kind: ConfigMap metadata: name: nacos-cm namespace: skywalking data: mysql.host: "172.16.0.158" mysql.db.name: "nacos_fat" mysql.port: "3307" mysql.user: "nacos" mysql.password: "PASSWORDABCD" --- apiVersion: apps/v1 kind: StatefulSet metadata: name: nacos namespace: skywalking spec: serviceName: nacos-headless replicas: 3 template: metadata: labels: app: nacos annotations: pod.alpha.kubernetes.io/initialized: "true" spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: "app" operator: In values: - nacos topologyKey: "kubernetes.io/hostname" # serviceAccountName: nfs-client-provisioner initContainers: - name: peer-finder-plugin-install image: 
nacos/nacos-peer-finder-plugin:1.1 imagePullPolicy: IfNotPresent volumeMounts: - mountPath: /home/nacos/plugins/peer-finder name: data subPath: peer-finder containers: - name: nacos imagePullPolicy: IfNotPresent image: nacos/nacos-server:v2.1.0 resources: requests: memory: "2048Mi" cpu: "500m" ports: - containerPort: 8848 name: client-port - containerPort: 9848 name: client-rpc - containerPort: 9849 name: raft-rpc - containerPort: 7848 name: old-raft-rpc env: - name: MYSQL_SERVICE_DB_PARAM value: characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=Asia/Shanghai - name: NACOS_REPLICAS value: "3" - name: SERVICE_NAME value: "nacos-headless" - name: DOMAIN_NAME value: "cluster.local" - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: MYSQL_SERVICE_DB_NAME valueFrom: configMapKeyRef: name: nacos-cm key: mysql.db.name - name: MYSQL_SERVICE_PORT valueFrom: configMapKeyRef: name: nacos-cm key: mysql.port - name: MYSQL_SERVICE_HOST valueFrom: configMapKeyRef: name: nacos-cm key: mysql.host - name: MYSQL_SERVICE_USER valueFrom: configMapKeyRef: name: nacos-cm key: mysql.user - name: MYSQL_SERVICE_PASSWORD valueFrom: configMapKeyRef: name: nacos-cm key: mysql.password - name: MODE value: "cluster" - name: NACOS_SERVER_PORT value: "8848" - name: PREFER_HOST_MODE value: "hostname" - name: NACOS_APPLICATION_PORT value: "8848" - name: NACOS_SERVERS value: "nacos-0.nacos-headless.skywalking.svc.cluster.local:8848 nacos-1.nacos-headless.skywalking.svc.cluster.local:8848 nacos-2.nacos-headless.skywalking.svc.cluster.local:8848" volumeMounts: - name: data mountPath: /home/nacos/plugins/peer-finder subPath: peer-finder - name: data mountPath: /home/nacos/data subPath: data - name: data mountPath: /home/nacos/logs subPath: logs - mountPath: /etc/localtime name: nacostime volumes: - name: nacostime hostPath: path: /etc/localtime - name: data persistentVolumeClaim: 
claimName: nacos-nfs selector: matchLabels: app: nacos需要修改如下configmap1.账号密码apiVersion: v1 kind: ConfigMap metadata: name: nacos-cm namespace: skywalking data: mysql.host: "172.16.0.158" mysql.db.name: "nacos_fat" mysql.port: "3307" mysql.user: "nacos" mysql.password: "PASSWORDABCD"2.nacos变量 - name: NACOS_SERVERS value: "nacos-0.nacos-headless.skywalking.svc.cluster.local:8848 nacos-1.nacos-headless.skywalking.svc.cluster.local:8848 nacos-2.nacos-headless.skywalking.svc.cluster.local:8848"3.pvc配置 volumes: - name: nacostime hostPath: path: /etc/localtime - name: data persistentVolumeClaim: claimName: nas-nacos最终如下--- apiVersion: v1 kind: Service metadata: name: nacos-headless namespace: nacos labels: app: nacos # annotations: # service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" spec: ports: - port: 8848 name: server targetPort: 8848 - port: 9848 name: client-rpc targetPort: 9848 - port: 9849 name: raft-rpc targetPort: 9849 ## 兼容1.4.x版本的选举端口 - port: 7848 name: old-raft-rpc targetPort: 7848 clusterIP: None type: ClusterIP selector: app: nacos --- apiVersion: v1 kind: ConfigMap metadata: name: nacos-cm namespace: nacos data: mysql.host: "172.16.100.54" mysql.db.name: "nacos_fat" mysql.port: "3306" mysql.user: "nacos" mysql.password: "PASSWORDABCD" --- apiVersion: apps/v1 kind: StatefulSet metadata: name: nacos namespace: nacos spec: serviceName: nacos-headless replicas: 3 template: metadata: labels: app: nacos annotations: pod.alpha.kubernetes.io/initialized: "true" spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: "app" operator: In values: - nacos topologyKey: "kubernetes.io/hostname" initContainers: - name: peer-finder-plugin-install image: nacos/nacos-peer-finder-plugin:1.1 imagePullPolicy: IfNotPresent volumeMounts: - mountPath: /home/nacos/plugins/peer-finder name: data subPath: peer-finder containers: - name: nacos imagePullPolicy: IfNotPresent image: nacos/nacos-server:v2.1.0 
resources: requests: memory: "2048Mi" cpu: "500m" ports: - containerPort: 8848 name: client-port - containerPort: 9848 name: client-rpc - containerPort: 9849 name: raft-rpc - containerPort: 7848 name: old-raft-rpc env: - name: MYSQL_SERVICE_DB_PARAM value: characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=Asia/Shanghai - name: NACOS_REPLICAS value: "3" - name: SERVICE_NAME value: "nacos-headless" - name: DOMAIN_NAME value: "cluster.local" - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: MYSQL_SERVICE_DB_NAME valueFrom: configMapKeyRef: name: nacos-cm key: mysql.db.name - name: MYSQL_SERVICE_PORT valueFrom: configMapKeyRef: name: nacos-cm key: mysql.port - name: MYSQL_SERVICE_HOST valueFrom: configMapKeyRef: name: nacos-cm key: mysql.host - name: MYSQL_SERVICE_USER valueFrom: configMapKeyRef: name: nacos-cm key: mysql.user - name: MYSQL_SERVICE_PASSWORD valueFrom: configMapKeyRef: name: nacos-cm key: mysql.password - name: MODE value: "cluster" - name: NACOS_SERVER_PORT value: "8848" - name: PREFER_HOST_MODE value: "hostname" - name: NACOS_APPLICATION_PORT value: "8848" - name: NACOS_SERVERS value: "nacos-0.nacos-headless.nacos.svc.cluster.local:8848 nacos-1.nacos-headless.nacos.svc.cluster.local:8848 nacos-2.nacos-headless.nacos.svc.cluster.local:8848" volumeMounts: - name: data mountPath: /home/nacos/plugins/peer-finder subPath: peer-finder - name: data mountPath: /home/nacos/data subPath: data - name: data mountPath: /home/nacos/logs subPath: logs - mountPath: /etc/localtime name: nacostime volumes: - name: nacostime hostPath: path: /etc/localtime - name: data persistentVolumeClaim: claimName: nacos-nfs selector: matchLabels: app: nacos应用PS J:\k8s-1.23.1-latest\nacos> kubectl.exe apply -f .\nacos-nfs.yaml service/nacos-headless created configmap/nacos-cm created statefulset.apps/nacos created直到所有的pod runingPS J:\k8s-1.23.1-latest\nacos> 
kubectl.exe -n nacos get pod
NAME      READY   STATUS    RESTARTS   AGE
nacos-0   1/1     Running   0          5m59s
nacos-1   1/1     Running   0          4m37s
nacos-2   1/1     Running   0          3m31s

4. Configure the ingress
As follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nacos-ui
  labels:
    app: nacos
  namespace: nacos
spec:
  ingressClassName: nginx
  rules:
    - host: nacos.linuxea.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nacos-headless
                port:
                  number: 8848

Create it:

PS J:\k8s-1.23.1-latest\nacos> kubectl.exe -n nacos get ingress
NAME       CLASS   HOSTS               ADDRESS                       PORTS   AGE
nacos-ui   nginx   nacos.linuxea.com   172.16.100.11,172.16.100.43   80      53s

Configure the local hosts file:

172.16.100.11 nacos.linuxea.com

The access URL is http://nacos.linuxea.com/nacos/#/login, credentials: nacos/nacos.

Reference: https://github.com/nacos-group/nacos-k8s/tree/master/deploy/nacos

skywalking
SkyWalking needs a backend to store its data — either MySQL or ES; I will use ES here. We again need a PVC to hold the ES data; unlike Nacos, I will run ES itself inside k8s.

1. Install ES
Before installing, we need to prepare a PVC.

1.1 Prepare the PVC for ES
Copy the Nacos configuration and adapt it — only the names change:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-skywalking-es
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
  onDelete: delete
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: es-data
  namespace: skywalking
spec:
  storageClassName: nfs-skywalking-es
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50G

We create a skywalking namespace, then the PVC:

PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe apply -f .\ns.yaml
namespace/skywalking created
PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe apply -f .\nfs-to-es.yaml
storageclass.storage.k8s.io/nfs-skywalking-es created
persistentvolumeclaim/es-data created

As follows:

PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe -n skywalking get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
es-data   Bound    pvc-5baafefd-a2da-48f4-b9d4-815c6f3c2fe3   50G        RWX            nfs-skywalking-es   29s

1.2 Install ES
We still need to adjust a few settings — the value of claimName: is the name of the PVC we just created, i.e. es-data:

  volumes:
    - name: oms-skywalking-to-elasticsearch-data
          persistentVolumeClaim:
            claimName: es-data

最终的yaml如下:

# Source: skywalking/charts/elasticsearch/templates/statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: skywalking
  labels:
    app: elasticsearch
spec:
  type: ClusterIP
  ports:
    - name: elasticsearch
      port: 9200
      protocol: TCP
  selector:
    app: elasticsearch
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: skywalking
  labels:
    app: elasticsearch
spec:
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 1
  template:
    metadata:
      name: elasticsearch
      labels:
        app: elasticsearch
    spec:
      initContainers:
        - name: configure-sysctl
          securityContext:
            runAsUser: 0
            privileged: true
          image: registry.cn-hangzhou.aliyuncs.com/marksugar/elasticsearch:6.8.6
          imagePullPolicy: "IfNotPresent"
          command: ["/bin/sh"]
          # 注意:DefaultLimitNOFILE/MEMLOCK/NPROC是systemd的配置项而非内核sysctl参数,
          # 用sysctl -w设置它们会报错;init容器真正需要的只有vm.max_map_count
          args: ["-c", "sysctl -w vm.max_map_count=262144"]
          resources: {}
      containers:
        - name: "elasticsearch"
          securityContext:
            capabilities:
              drop:
                - ALL
            runAsNonRoot: true
            runAsUser: 1000
          image: registry.cn-hangzhou.aliyuncs.com/marksugar/elasticsearch:6.8.6
          imagePullPolicy: "IfNotPresent"
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 2
            successThreshold: 1
            tcpSocket:
              port: 9300
            timeoutSeconds: 2
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 2
            successThreshold: 2
            tcpSocket:
              port: 9300
            timeoutSeconds: 2
          ports:
            - name: http
              containerPort: 9200
            - name: transport
              containerPort: 9300
          resources:
            limits:
              cpu: 1000m
              memory: 2Gi
            requests:
              cpu: 100m
              memory: 2Gi
          env:
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: cluster.name
              value: "elasticsearch"
            - name: network.host
              value: "0.0.0.0"
            - name: ES_JAVA_OPTS
              value: "-Xmx1g -Xms1g -Duser.timezone=Asia/Shanghai"
            - name: discovery.type
              value: single-node
          volumeMounts:
            - mountPath: /usr/share/elasticsearch/data
              name: oms-skywalking-to-elasticsearch-data
      restartPolicy: Always
      volumes:
        - name: oms-skywalking-to-elasticsearch-data
          persistentVolumeClaim:
            claimName: es-data

应用清单:

PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe apply -f .\es.yaml
service/elasticsearch created
deployment.apps/elasticsearch created

使用-w来观察状态,直到running:

PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe -n skywalking get pod -w
NAME                             READY   STATUS            RESTARTS   AGE
elasticsearch-64c9d98794-ndktz   0/1     Init:0/1          0          14s
elasticsearch-64c9d98794-ndktz   0/1     PodInitializing   0          55s
elasticsearch-64c9d98794-ndktz   0/1     Running           0          56s
elasticsearch-64c9d98794-ndktz   1/1     Running           0          88s

nfs上已经创建好了es的数据目录:

[root@linuxea-49 /data/nfs-k8s/1.21.1]# ls skywalking/es-data/
nodes

es安装完成。

1.3 本地es

除此之外,我们也可以在vm虚拟机上用docker-compose安装es:

version: '3.3'
services:
  elasticsearch:
    image: registry.cn-hangzhou.aliyuncs.com/marksugar/elasticsearch:6.8.6
    container_name: elasticsearch
    sysctls:
      net.core.somaxconn: 10240
      #DefaultLimitNOFILE: 65536
      #DefaultLimitMEMLOCK: infinity
      #DefaultLimitNPROC: 32000
      #vm.max_map_count: 262144
    ulimits:
      memlock:
        soft: -1
        hard: -1
    #network_mode: host
    hostname: elasticsearch
    restart: always
    environment:
      - cluster.name="elasticsearch"
      # - network.host="0.0.0.0"
      - discovery.type=single-node
      # - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2048m -Xmx4096m -XX:-UseConcMarkSweepGC -XX:-UseCMSInitiatingOccupancyOnly -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=75 -Duser.timezone=Asia/Shanghai"
    user: root
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /data/elasticsearch/data:/usr/share/elasticsearch/data
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
    deploy:
      resources:
        limits:
          memory: 6144m
        reservations:
          memory: 6144m

而后docker-compose up -d即可。

未完,因篇幅字数问题,见下一章
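补充一点:上面compose里把-Xmx设为4096m、容器内存限制设为6144m。堆上限与容器限额之间要留出堆外内存的余量,下面用一小段shell做个自查示意。其中75%只是常见的经验阈值(此处为假设,并非ES官方硬性要求),数值可按自己的环境替换:

```shell
#!/bin/sh
# 假设:XMX_MB对应ES_JAVA_OPTS里的-Xmx,LIMIT_MB对应compose里的memory limit
XMX_MB=4096
LIMIT_MB=6144
# 经验法则(仅作参考):堆上限不超过容器内存限制的约75%,其余留给堆外内存与page cache
MAX_HEAP_MB=$(( LIMIT_MB * 75 / 100 ))
if [ "$XMX_MB" -le "$MAX_HEAP_MB" ]; then
  echo "heap ok: ${XMX_MB}m <= ${MAX_HEAP_MB}m"
else
  echo "heap too large: ${XMX_MB}m > ${MAX_HEAP_MB}m"
fi
```

按上面的数值计算,4096m小于等于4608m(6144m的75%),堆设置是留有余量的。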
2022年06月30日
15 阅读
0 评论
0 点赞
2022-06-29
linuxea:基于nexus3代码构建和harbor镜像打包(3)
在前两篇中,主要叙述了即将做什么以及基础环境搭建。此时我们需要一个通用的java程序来验证这些东西,但这其中又需要配置不少组件,比如nexus3等。因此,本章将围绕nexus3配置,而后通过nexus3配置maven打包,构建出jar包;接着配置harbor,并将打包的镜像推送到harbor镜像仓库。如下图红色阴影部分内容。

阅读此篇,你将了解如下信息:

nexus3配置
java编译打包与nexus3
harbor安装配置和使用
基于alpine构建jdk
编写Dockerfile技巧和构建和推送镜像

我们仅仅使用非Https的harbor仓库,如果要配置https以及helm仓库,阅读habor2.5的helm仓库和镜像仓库使用(5)进行配置即可。

配置java和node环境变量:

[root@linuxea-01 local]# tar xf apache-maven-3.8.6-bin.tar.gz -C /usr/local/
[root@linuxea-01 local]# tar xf node-v16.15.1-linux-x64.tar.xz -C /usr/local/
[root@linuxea-01 local]# MAVEN_PATH=/usr/local/apache-maven-3.8.6
[root@linuxea-01 local]# NODE_PATH=/usr/local/node-v16.15.1-linux-x64
[root@linuxea-01 local]# PATH=$PATH:$NODE_PATH/bin:$PATH:$MAVEN_PATH/bin

1.修改为阿里源

settings.xml修改为阿里云源:

<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <pluginGroups>
  </pluginGroups>
  <proxies>
  </proxies>
  <servers>
    <server>
      <id>maven-releases</id>
      <username>admin</username>
      <password>admin</password>
    </server>
    <server>
      <id>maven-snapshots</id>
      <username>admin</username>
      <password>admin</password>
    </server>
  </servers>
  <mirrors>
    <!-- <mirror>
      <id>nexus</id>
      <mirrorOf>local</mirrorOf>
      <name>nexus</name>
      <url>http://172.16.15.136:8081/repository/maven-public/</url>
    </mirror>-->
    <mirror>
      <id>alimaven</id>
      <name>aliyun maven</name>
      <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
  <profiles>
  </profiles>
</settings>

构建时可以使用-s指定settings.xml。你也可以在pom.xml的</parent>之下指定仓库:

  <repositories>
    <repository>
      <id>alimaven</id>
      <name>aliyun maven</name>
      <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
    </repository>
  </repositories>

2 配置nexus3

2.1 创建Blob Stores

如果剩余10G就报警。

2.2 创建proxy

创建repositories->选择maven2(proxy),代理地址为http://maven.aliyun.com/nexus/content/groups/public,我们着重修改代理地址和存储桶。如法炮制,继续将下面这些都创建为maven2-proxy:

1.
aliyun                 http://maven.aliyun.com/nexus/content/groups/public
2.  apache_snapshot    https://repository.apache.org/content/repositories/snapshots/
3.  apache_release     https://repository.apache.org/content/repositories/releases/
4.  atlassian          https://maven.atlassian.com/content/repositories/atlassian-public/
5.  central.maven.org  http://central.maven.org/maven2/
6.  datanucleus        http://www.datanucleus.org/downloads/maven2
7.  maven-central (安装后自带,仅需设置Cache有效期即可)  https://repo1.maven.org/maven2/
8.  nexus.axiomalaska.com  http://nexus.axiomalaska.com/nexus/content/repositories/public
9.  oss.sonatype.org   https://oss.sonatype.org/content/repositories/snapshots
10. pentaho            https://public.nexus.pentaho.org/content/groups/omni/
11. central            http://maven.aliyun.com/nexus/content/repositories/central

2.3 创建local

再创建一个maven2-local。

2.4 创建group

创建group,将上面所有创建的拉入到当前group。

2.5 配置xml文件

配置settings.xml,修改nexus3地址,如下所示:

[root@linuxea-01 linuxea]# cat ~/.m2/settings.xml
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <pluginGroups>
  </pluginGroups>
  <proxies>
  </proxies>
  <servers>
    <server>
      <id>maven-releases</id>
      <username>admin</username>
      <password>admin</password>
    </server>
    <server>
      <id>maven-snapshots</id>
      <username>admin</username>
      <password>admin</password>
    </server>
    <server>
      <id>alimaven</id>
      <username>admin</username>
      <password>admin</password>
    </server>
  </servers>
  <mirrors>
    <!-- <mirror>
      <id>nexus</id>
      <mirrorOf>local</mirrorOf>
      <name>nexus</name>
      <url>http://172.16.15.136:8081/repository/maven-public/</url>
    </mirror>-->
    <mirror>
      <id>alimaven</id>
      <name>aliyun maven</name>
      <url>http://172.16.15.136:8081/repository/maven2-group/</url>
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
  <profiles>
  </profiles>
</settings>

打包测试:

mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s ~/.m2/settings.xml

输出节选如下:

Downloaded from alimaven: http://172.16.15.136:8081/repository/maven2-group/commons-codec/commons-codec/1.6/commons-codec-1.6.jar (233 kB at 991 kB/s)
[INFO] Installing /data/java-helo-word/linuxea/target/hello-world-0.0.6.jar to /root/.m2/repository/com/dt/hello-world/0.0.6/hello-world-0.0.6.jar
[INFO] Installing /data/java-helo-word/linuxea/pom.xml to /root/.m2/repository/com/dt/hello-world/0.0.6/hello-world-0.0.6.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:04 min
[INFO] Finished at: 2022-06-24T17:07:10+08:00
[INFO] ------------------------------------------------------------------------
[root@linuxea-01 linuxea]#

3. 配置harbor

登录harbor后创建一个项目,而后可以看到推送命令:

docker tag SOURCE_IMAGE[:TAG] 172.16.100.54/linuxea/REPOSITORY[:TAG]
docker push 172.16.100.54/linuxea/REPOSITORY[:TAG]

我们直接把镜像打成172.16.100.54/linuxea/java-demo:TAG即可,而不需要再去修改tag,直接上传即可。

4. 打包和构建

使用alpine的最大好处就是可以最大程度地缩减镜像体积,这也是alpine流行的最大因素。由于一直使用的都是jdk8,因此仍然使用jdk8版本,基础镜像仍然使用alpine:3.15。我参考了dockerhub上一个朋友的镜像,重新构建了jdk8u202,整个镜像大小大概在453M左右。可以通过如下地址进行获取:

docker pull registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202

4.1 构建基础镜像

jdk的基础镜像已经构建完成;在本地,仍然按照这里的dockerfile进行构建。而后我们创建一个base仓库来存放,登录并推送:

[root@linuxea-48 ~]# docker login harbor.marksugar.com
Authenticating with existing credentials...
Stored credentials invalid or expired
Username (admin): admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@linuxea-48 ~]# docker push harbor.marksugar.com/base/jdk:8u202
The push refers to repository [harbor.marksugar.com/base/jdk]
788766eb7d3e: Pushed
8d3ac3489996: Pushed
8u202: digest: sha256:516cd5bd65041d4b00587127417c1a9a3aea970fa533d330f60b07395aa5e5ca size: 741

4.2 打包java镜像

此前我找了一个java的hello world的包,现在在我的github上可以找到。将它拉到本地构建,进行测试:

[root@linuxea-48 /data]# git clone https://ghproxy.futils.com/https://github.com/marksugar/java-helo-word.git
Cloning into 'java-helo-word'...
remote: Enumerating objects: 110, done.
remote: Total 110 (delta 0), reused 0 (delta 0), pack-reused 110
Receiving objects: 100% (110/110), 28.09 KiB | 0 bytes/s, done.

开始打包。jar包构建频繁出错时,多半需要解决依赖包问题,可能需要添加nexus3仓库的代理,这些可以通过搜索引擎解决。一旦构建完成,在target目录下就会有一个jar包:

[root@linuxea-48 /data/java-helo-word/linuxea]# ll target/hello-world-0.0.6.jar
-rw-r--r-- 1 root root 17300624 Jun 26 01:00 target/hello-world-0.0.6.jar

而后这个jar是可以直接启动的,并监听8086端口:

[root@linuxea-48 /data/java-helo-word/linuxea]# java -jar target/hello-world-0.0.6.jar
.
____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v1.5.10.RELEASE) 2022-06-26 01:05:52.217 INFO 38183 --- [ main] com.dt.info.InfoSiteServiceApplication : Starting InfoSiteServiceApplication v0.0.6 on Node172_16_100_48.marksugar.me with PID 38183 (/data/java-helo-word/linuxea/target/hello-world-0.0.6.jar started by root in /data/java-helo-word/linuxea) 2022-06-26 01:05:52.219 INFO 38183 --- [ main] com.dt.info.InfoSiteServiceApplication : No active profile set, falling back to default profiles: default 2022-06-26 01:05:52.265 INFO 38183 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48533e64: startup date [Sun Jun 26 01:05:52 CST 2022]; root of context hierarchy 2022-06-26 01:05:53.118 INFO 38183 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8086 (http) 2022-06-26 01:05:53.126 INFO 38183 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2022-06-26 01:05:53.129 INFO 38183 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.27 2022-06-26 01:05:53.180 INFO 38183 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2022-06-26 01:05:53.180 INFO 38183 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 916 ms 2022-06-26 01:05:53.256 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/] 2022-06-26 01:05:53.257 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*] 2022-06-26 01:05:53.258 INFO 38183 --- [ost-startStop-1] 
o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*] 2022-06-26 01:05:53.258 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*] 2022-06-26 01:05:53.258 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*] 2022-06-26 01:05:53.283 INFO 38183 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 2022-06-26 01:05:53.287 INFO 38183 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 'getThreadPoolTaskScheduler' 2022-06-26 01:05:53.459 INFO 38183 --- [ main] s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48533e64: startup date [Sun Jun 26 01:05:52 CST 2022]; root of context hierarchy 2022-06-26 01:05:53.502 INFO 38183 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/index]}" onto public java.lang.String com.dt.info.controller.HelloController.hello() 2022-06-26 01:05:53.505 INFO 38183 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse) 2022-06-26 01:05:53.505 INFO 38183 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest) 2022-06-26 01:05:53.529 INFO 38183 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] 
2022-06-26 01:05:53.529  INFO 38183 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2022-06-26 01:05:53.551  INFO 38183 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2022-06-26 01:05:53.568  INFO 38183 --- [           main] oConfiguration$WelcomePageHandlerMapping : Adding welcome page: class path resource [static/index.html]
2022-06-26 01:05:53.639  INFO 38183 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2022-06-26 01:05:53.680  INFO 38183 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8086 (http)
2022-06-26 01:05:53.682  INFO 38183 --- [           main] com.dt.info.InfoSiteServiceApplication   : Started InfoSiteServiceApplication in 2.392 seconds (JVM running for 2.647)

访问正常,现在jar包和nexus3都准备好了。

4.3 编写Dockerfile

当这一切准备妥当,开始编写Dockerfile,我们需要注意以下几点:

配置内存限制资源
配置普通用户,并以普通用户启动pod应用程序

如下:

FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202
MAINTAINER www.linuxea.com by mark

ENV JAVA_OPTS="\
    -server \
    -Xms2048m \
    -Xmx2048m \
    -Xmn512m \
    -Xss256k \
    -XX:+UseConcMarkSweepGC \
    -XX:+UseCMSInitiatingOccupancyOnly \
    -XX:CMSInitiatingOccupancyFraction=70 \
    -XX:+HeapDumpOnOutOfMemoryError \
    -XX:HeapDumpPath=/data/logs" \
    MY_USER=linuxea \
    MY_USER_ID=316

RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} \
    && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} \
    && mkdir /data/logs -p

COPY target/*.jar /data/
WORKDIR /data
USER linuxea
CMD java ${JAVA_OPTS} -jar *.jar

开始构建。我们指定Dockerfile位置进行构建:

docker build -t hello-java -f ./Dockerfile .

如下:

[root@linuxea-48 /data/java-helo-word/linuxea]# docker build -t hello-java -f ./Dockerfile .
Sending build context to Docker daemon 17.5MB Step 1/7 : FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202 ---> 5919494d49c0 Step 2/7 : MAINTAINER www.linuxea.com by mark ---> Running in 51ea254cd0c3 Removing intermediate container 51ea254cd0c3 ---> 109317878a94 Step 3/7 : ENV JAVA_OPTS=" -server -Xms2048m -Xmx2048m -Xmn512m -Xss256k -XX:+UseConcMarkSweepGC -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs" MY_USER=linuxea MY_USER_ID=316 ---> Running in 5745dbc7928b Removing intermediate container 5745dbc7928b ---> a7d40e22389a Step 4/7 : RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} && mkdir /data/logs -p ---> Running in 2e4c34e11b62 Removing intermediate container 2e4c34e11b62 ---> d2fdac4de2fa Step 5/7 : COPY target/*.jar /data/ ---> 5538b183318b Step 6/7 : WORKDIR /data ---> Running in 7d0ac5b1dcc2 Removing intermediate container 7d0ac5b1dcc2 ---> e03a5699e97c Step 7/7 : CMD java ${JAVA_OPTS} jar *.jar ---> Running in 58ff0459e4d7 Removing intermediate container 58ff0459e4d7 ---> d1689a9a179f Successfully built d1689a9a179f Successfully tagged hello-java:latest接着我们run起来[root@linuxea-48 /data/java-helo-word/linuxea]# docker run --rm hello-java . 
____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v1.5.10.RELEASE) 2022-06-25 17:26:22.052 INFO 1 --- [ main] com.dt.info.InfoSiteServiceApplication : Starting InfoSiteServiceApplication v0.0.6 on f18e65565a19 with PID 1 (/data/hello-world-0.0.6.jar started by linuxea in /data) 2022-06-25 17:26:22.054 INFO 1 --- [ main] com.dt.info.InfoSiteServiceApplication : No active profile set, falling back to default profiles: default 2022-06-25 17:26:22.121 INFO 1 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48533e64: startup date [Sat Jun 25 17:26:22 GMT 2022]; root of context hierarchy 2022-06-25 17:26:23.079 INFO 1 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8086 (http) 2022-06-25 17:26:23.087 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2022-06-25 17:26:23.089 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.27 2022-06-25 17:26:23.148 INFO 1 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2022-06-25 17:26:23.149 INFO 1 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1028 ms 2022-06-25 17:26:23.236 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/] 2022-06-25 17:26:23.240 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*] 2022-06-25 17:26:23.240 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*] 2022-06-25 17:26:23.240 INFO 1 
--- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*] 2022-06-25 17:26:23.240 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*] 2022-06-25 17:26:23.273 INFO 1 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 2022-06-25 17:26:23.279 INFO 1 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 'getThreadPoolTaskScheduler' 2022-06-25 17:26:23.459 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48533e64: startup date [Sat Jun 25 17:26:22 GMT 2022]; root of context hierarchy 2022-06-25 17:26:23.508 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/index]}" onto public java.lang.String com.dt.info.controller.HelloController.hello() 2022-06-25 17:26:23.511 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse) 2022-06-25 17:26:23.511 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest) 2022-06-25 17:26:23.534 INFO 1 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] 2022-06-25 17:26:23.534 INFO 1 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class 
org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2022-06-25 17:26:23.559  INFO 1 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2022-06-25 17:26:23.654  INFO 1 --- [           main] oConfiguration$WelcomePageHandlerMapping : Adding welcome page: class path resource [static/index.html]
2022-06-25 17:26:23.786  INFO 1 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2022-06-25 17:26:23.841  INFO 1 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8086 (http)
2022-06-25 17:26:23.845  INFO 1 --- [           main] com.dt.info.InfoSiteServiceApplication   : Started InfoSiteServiceApplication in 2.076 seconds (JVM running for 2.329)

进入容器可以看到当前使用的是linuxea用户:

bash-5.1$ ps aux
PID   USER     TIME  COMMAND
    1 linuxea   0:00 bash
   15 linuxea   0:00 ps aux

4.4 推送仓库

将构建的镜像推送到仓库以备使用。于是,我们登录harbor创建一个项目存放,修改镜像名称并push:

docker tag hello-java harbor.marksugar.com/linuxea/hello-world:latest
docker push harbor.marksugar.com/linuxea/hello-world:latest

如下:

[root@linuxea-48 /data/java-helo-word/linuxea]# docker tag hello-java harbor.marksugar.com/linuxea/hello-world:latest
[root@linuxea-48 /data/java-helo-word/linuxea]# docker push harbor.marksugar.com/linuxea/hello-world:latest
The push refers to repository [harbor.marksugar.com/linuxea/hello-world]
9435dbe70451: Pushed
8c3c8b0adf90: Pushed
788766eb7d3e: Mounted from base/jdk
8d3ac3489996: Mounted from base/jdk
latest: digest: sha256:2248bf99e35cf864d521441d8d2efc9aedbed56c24625e4f60e93df5e8fc65c3 size: 1161

此时harbor仓库已经有了打包完成的镜像,也就是所谓的一个制品。
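上面的tag与push流程也可以用一小段shell串起来,统一镜像命名规则(示意;仓库地址、项目名与TAG规则均为假设,可按需替换;固定用latest不利于回滚,这里示意"版本-构建号"形式的TAG):

```shell
#!/bin/sh
# 假设的仓库、项目与应用名;TAG采用"版本-构建号"形式以保证可追溯
REGISTRY=harbor.marksugar.com
PROJECT=linuxea
APP=hello-world
VERSION=0.0.6
BUILD_ID=1
IMAGE="${REGISTRY}/${PROJECT}/${APP}:${VERSION}-${BUILD_ID}"
echo "$IMAGE"
# 实际使用时(需先docker login):
# docker tag hello-java "$IMAGE"
# docker push "$IMAGE"
```

上面会输出harbor.marksugar.com/linuxea/hello-world:0.0.6-1这样的完整镜像名,后续在jenkins里把BUILD_ID换成流水线的构建号即可。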
2022年06月29日
37 阅读
0 评论
0 点赞
2022-06-28
linuxea:habor2.5的helm仓库和镜像仓库使用(5)
在早先,容器仓库中以Portus和harbor最为瞩目,harbor是vmware的产品,而前者则由suse团队维护,但在2019年Portus提供了最后一个版本。随着时间的推移,harbor提供了更多与时俱进的功能服务,这也使harbor愈发受到关注和使用。并且harbor是由vmware中国区成员参与的,众所周知,一款提供中文界面且优秀的产品,往往是更受欢迎的。harbor不仅可以存放容器镜像,还可以提供helm的仓库,简单列出它具备的功能:

云原生注册表:支持容器镜像和Helm图表,Harbor充当云原生环境(如容器运行时和编排平台)的注册表。
基于角色的访问控制:用户通过"项目"访问不同的存储库,并且用户可以对项目下的镜像或Helm图表具有不同的权限。
基于策略的复制:可以使用过滤器(存储库、标签和标记)基于策略在多个注册表实例之间复制(同步)镜像和图表。如果遇到任何错误,Harbor会自动重试复制。这可用于辅助负载均衡、实现高可用性,以及促进混合和多云场景中的多数据中心部署。
漏洞扫描:Harbor定期扫描镜像以查找漏洞,并进行策略检查以防止部署易受攻击的镜像。
LDAP/AD支持:Harbor与现有的企业LDAP/AD集成以进行用户身份验证和管理,并支持将LDAP组导入Harbor,然后可以授予特定项目的权限。
OIDC支持:Harbor利用OpenID Connect (OIDC)来验证由外部授权服务器或身份提供者认证的用户的身份,可以启用单点登录来登录Harbor门户。
镜像删除和垃圾收集:系统管理员可以运行垃圾收集作业,以便删除镜像(悬空清单和未引用的blob)并定期释放它们占用的空间。
Notary:支持使用Docker Content Trust(利用Notary)对容器镜像进行签名,以保证真实性和出处。此外,还可以启用防止部署未签名镜像的策略。
图形用户门户:用户可以轻松浏览、搜索存储库和管理项目。
审计:通过日志跟踪对存储库的所有操作。
RESTful API:提供RESTful API以方便管理操作,并且易于与外部系统集成。嵌入式Swagger UI可用于探索和测试API。
易于部署:Harbor可以通过Docker compose以及Helm Chart进行部署,最近还添加了一个Harbor Operator。

以上功能特性从github获取。阅读本章,你将了解如何利用harbor配置基本的docker容器仓库和helm仓库的使用。

harbor

如果你只是要给nginx签发一个自签名证书,则可以直接使用如下命令创建:

openssl req -x509 -nodes -days 36500 -newkey rsa:2048 -keyout linuxea.key -out linuxea.crt -subj /C=CH/ST=ShangHai/L=Xian/O=Devops/CN=linuxea.test.com

1.准备证书

我们按照官方文档创建ssl证书,与之不同的是我们尽可能将证书有效期配置得久一些。因为无论在什么环境,证书最大的问题就是会过期,后期替换的时候会波及正常使用。

域名:harbor.local.com
创建的证书目录:/data/cert-`date +%F`

复制下面的命令到sh脚本中,修改${YOU_DOMAIN}和${CERT_PATH}后执行即可:

CERT_PATH=/data/cert-`date +%F`/
YOU_DOMAIN=harbor.local.com
mkdir -p ${CERT_PATH}

openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -sha512 -days 365000 \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=${YOU_DOMAIN}" \
  -key ca.key \
  -out ca.crt

openssl genrsa -out ${YOU_DOMAIN}.key 4096
openssl req -sha512 -new \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=${YOU_DOMAIN}" \
  -key ${YOU_DOMAIN}.key \
  -out ${YOU_DOMAIN}.csr

cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage =
digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth subjectAltName = @alt_names [alt_names] DNS.1=${YOU_DOMAIN} DNS.2=yourdomain DNS.3=hostname EOF openssl x509 -req -sha512 -days 365000 \ -extfile v3.ext \ -CA ca.crt -CAkey ca.key -CAcreateserial \ -in ${YOU_DOMAIN}.csr \ -out ${YOU_DOMAIN}.crt cp ${YOU_DOMAIN}.crt ${YOU_DOMAIN}.key ${CERT_PATH} 2.harbor和docker接着将证书复制到harbor和docker的目录,使其生效1.复制crt和key 到harbor的证书目录cp ${YOU_DOMAIN}.crt ${YOU_DOMAIN}.key ${CERT_PATH} 2.转换yourdomain.com.crt为yourdomain.com.cert, 供 Docker 使用openssl x509 -inform PEM -in harbor.local.com.crt -out harbor.local.com.cert本机mkdir -p /etc/docker/certs.d/harbor.local.com/ cp harbor.local.com.cert /etc/docker/certs.d/harbor.local.com/ cp harbor.local.com.key /etc/docker/certs.d/harbor.local.com/ cp ca.crt /etc/docker/certs.d/harbor.local.com/ systemctl reload docker其他节点scp harbor.local.com.cert harbor.local.com.key ca.crt 172.16.15.136:/etc/docker/certs.d/harbor.local.com/ systemctl reload docker如果不是80端口请创建文件夹/etc/docker/certs.d/yourdomain.com:port或/etc/docker/certs.d/harbor_IP:port.3.harbor.yml经过过滤得到如下配置[root@b.linuxea.com harbor]# egrep -v "^$|^#|^ #|^ #" harbor.yml hostname: reg.mydomain.com http: port: 80 https: port: 443 certificate: /your/certificate/path private_key: /your/private/key/path harbor_admin_password: Harbor12345 database: password: root123 max_idle_conns: 100 max_open_conns: 900 data_volume: /data trivy: ignore_unfixed: false skip_update: false offline_scan: false insecure: false jobservice: max_job_workers: 10 notification: webhook_job_max_retry: 10 chart: absolute_url: disabled log: level: info local: rotate_count: 50 rotate_size: 200M location: /var/log/harbor _version: 2.5.0 proxy: http_proxy: https_proxy: no_proxy: components: - core - jobservice - trivy upload_purging: enabled: true age: 168h interval: 24h dryrun: false3.1.将服务器证书和密钥复制到 Harbor 主机上的 certficates 文件夹中。创建一个存储目录:mkdir 
/data/harbor/data

配置文件进行sed替换即可,主要修改如下:

hostname:域名
certificate: crt文件路径
private_key: key文件路径
harbor_admin_password: 登录密码
data_volume: docker-compose的所有挂载目录

sed -i 's@hostname: reg.mydomain.com@hostname: harbor.local.com@g' harbor.yml
sed -i 's@certificate: /your/certificate/path@certificate: /data/cert-2022-06-28/harbor.local.com.crt@g' harbor.yml
sed -i 's@private_key: /your/private/key/path@private_key: /data/cert-2022-06-28/harbor.local.com.key@g' harbor.yml
sed -i 's@harbor_admin_password: Harbor12345@harbor_admin_password: admin@g' harbor.yml
sed -i 's@data_volume: /data@data_volume: /data/harbor/data@g' harbor.yml

4.安装harbor

一切准备妥当,执行./install.sh脚本自动安装:

[root@b.linuxea.com harbor]# ./install.sh
[Step 0]: checking if docker is installed ...
Note: docker version: 19.03.6
[Step 1]: checking docker-compose is installed ...
.......
[Step 5]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-db ... done
Creating registry ... done
Creating redis ... done
Creating harbor-portal ... done
Creating registryctl ... done
Creating harbor-core ... done
Creating harbor-jobservice ... done
Creating nginx ... done
✔ ----Harbor has been installed and started successfully.----

5.测试容器仓库

回到其他节点测试docker应有的配置,目录结构如下:

[root@a.linuxea.com ~]# tree /etc/docker/
/etc/docker/
├── certs.d
│   └── harbor.local.com
│       ├── ca.crt
│       ├── harbor.local.com.cert
│       └── harbor.local.com.key
├── daemon.json
└── key.json

2 directories, 5 files
(base) [root@a.linuxea.com ~]# cat /etc/docker/daemon.json
{"insecure-registries":["172.16.100.150:8443","harbor.local.com"]}

登录测试:

[root@a.linuxea.com ~]# docker login harbor.local.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded上传镜像测试创建一个仓库harbor提供的命令如下docker push harbor.local.com/base/REPOSITORY[:TAG]测试[root@a.linuxea.com ~]# docker tag mysql:8.0.16 harbor.local.com/base/mysql:8.0.16 [root@a.linuxea.com ~]# docker push harbor.local.com/base/mysql:8.0.16 The push refers to repository [harbor.local.com/base/mysql] 605d208195c7: Pushed 9d87c3455758: Pushed 80f1020054a4: Pushed b0425df45fae: Pushed 680666c6bf72: Pushed 7e7fffcdabb3: Pushed 77737de99484: Pushed 2f1b41b24201: Pushed 007a7f930352: Pushed c6926fcee191: Pushed b78ec9586b34: Pushed d56055da3352: Pushed 8.0.16: digest: sha256:036b8908469edac85afba3b672eb7cbc58d6d6b90c70df0bb3fe2ab4fd939b22 size: 2828docker没有问题后配置helm仓库helm31.helm3安装在官网下载一个helm,解压后并将可执行文件放置sbin下wget https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz tar xf helm-v3.8.2-linux-amd64.tar.gz cp linux-amd64/helm /usr/local/sbin安装完成[root@a.linuxea.com ~]# helm version version.BuildInfo{Version:"v3.8.2", GitCommit:"5cb9af4b1b271d11d7a97a71df3ac337dd94ad37", GitTreeState:"clean", GoVersion:"go1.17.5"}2.添加helm源添加一个azure源,并将其更新helm repo add stable http://mirror.azure.cn/kubernetes/charts/ helm repo list helm repo update helm search repo stable3.登录helm[root@a.linuxea.com ~]# helm registry login harbor.local.com Username: admin Password: Login Succeeded我们还需要让系统信任这个ca,于是我们将 /etc/docker/certs.d/harbor.local.com/ca.crt的内容追加到如/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem文件中,并复制ca.crt到/etc/pki/ca-trust/source/anchors/cp /etc/docker/certs.d/harbor.local.com/ca.crt /etc/pki/ca-trust/source/anchors/3.中转charts我们一个共有的mirros的包中转到私有的仓库上。在中转之前,需要添加一个源来提供现有的charts下载redishelm fetch stable/redis[root@a.linuxea.com ~]# ls redis-10.5.7.tgz redis-10.5.7.tgz推送到harbor.local.com推送 chart 到当前项目helm push redis-10.5.7.tgz oci://harbor.local.com/redis登录harbor查看点进这个Artifacts内,就能看到更多的信息4.上传本地charts现在在本地创建一个charts用作测试,上传到harbor中[root@a.linuxea.com data]# helm create test Creating test [root@a.linuxea.com 
data]# ls test/ charts Chart.yaml templates values.yaml打包推送[root@a.linuxea.com data]# helm package test Successfully packaged chart and saved it to: /data/test-0.1.0.tgz推送到helm服务器[root@a.linuxea.com data]# helm push test-0.1.0.tgz oci://harbor.local.com/redis Pushed: harbor.local.com/redis/test:0.1.0 Digest: sha256:1a86bc2ae87a8760398099a9c0966ce41141eacc7270673d03dfc4005bc349db5.使用私有charts回到仓库里面,鼠标放在拉取按钮上将会显示拉取的命令如下helm pull oci://harbor.local.com/redis/redis --version 10.5.7拉到本地[root@a.linuxea.com opt]# helm pull oci://harbor.local.com/redis/redis --version 10.5.7 Pulled: harbor.local.com/redis/redis:10.5.7 Digest: sha256:41643fa64d23797d0a874a2b264c9fc1f7323b08b9a02fb3010d72805b54bc3a [root@a.linuxea.com opt]# ls redis-10.5.7.tgz解压后使用template可以看到模板的配置清单信息[root@a.linuxea.com opt]# tar xf redis-10.5.7.tgz [root@a.linuxea.com redis]# helm template test ./安装测试helm upgrade --install -f values.yaml test-redis stable/redis --namespace redis --create-namespace--create-namespace: 如果名称空间不存在就创建upgrade: 如果存在就更新,不存在就创建[root@a.linuxea.com redis]# helm install -f values.yaml test-redis stable/redis --namespace redis --create-namespace WARNING: This chart is deprecated NAME: test-redis LAST DEPLOYED: Tue Jun 28 14:50:56 2022 NAMESPACE: redis STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: This Helm chart is deprecated Given the `stable` deprecation timeline (https://github.com/helm/charts#deprecation-timeline), the Bitnami maintained Redis Helm chart is now located at bitnami/charts (https://github.com/bitnami/charts/). The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc that we've been keepinghere these years. 
Installation instructions are very similar, just adding the _bitnami_ repo and using it during the installation (`bitnami/<chart>` instead of `stable/<chart>`) $ helm repo add bitnami https://charts.bitnami.com/bitnami$ helm install my-release bitnami/<chart> # Helm 3$ helm install --name my-release bitnami/<chart> # Helm 2 To update an exisiting _stable_ deployment with a chart hosted in the bitnami repository you can executerepo add bitnami https://charts.bitnami.com/bitnami $ helm upgrade my-release bitnami/<chart> Issues and PRs related to the chart itself will be redirected to `bitnami/charts` GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue (https://github.com/helm/charts/issues/20969) created as a common place for discussion. ** Please be patient while the chart is being deployed ** Redis can be accessed via port 6379 on the following DNS name from within your cluster: test-redis-master.redis.svc.cluster.local To get your password run: export REDIS_PASSWORD=$(kubectl get secret --namespace redis test-redis -o jsonpath="{.data.redis-password}" | base64 --decode) To connect to your Redis server: 1. Run a Redis pod that you can use as a client: kubectl run --namespace redis test-redis-client --rm --tty -i --restart='Never' \ --env REDIS_PASSWORD=$REDIS_PASSWORD \ --image docker.io/bitnami/redis:5.0.7-debian-10-r32 -- bash 2. 
Connect using the Redis CLI: redis-cli -h test-redis-master -a $REDIS_PASSWORD To connect to your database from outside the cluster execute the following commands: kubectl port-forward --namespace redis svc/test-redis-master 6379:6379 & redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD查看密码[root@a.linuxea.com redis]# kubectl get secret --namespace redis test-redis -o jsonpath="{.data.redis-password}" | base64 --decode VeiervwDUG查看运行状态由于一些配置没有准备,此时redis是pending的,但是helm安装是成功的。我们的目的达到了[root@k8s-02 ~]# kubectl -n redis get all NAME READY STATUS RESTARTS AGE pod/test-redis-master-0 0/1 Pending 0 2m53s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/test-redis-headless ClusterIP None <none> 6379/TCP 2m53s service/test-redis-master ClusterIP 10.101.161.177 <none> 6379/TCP 2m53s NAME READY AGE statefulset.apps/test-redis-master 0/1 2m53s已经测试完成,现在卸载掉即可([root@a.linuxea.com redis]# helm -n redis ls NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION test-redis redis 1 2022-06-28 14:50:56.56116374 +0800 CST deployed redis-10.5.7 5.0.7 [root@a.linuxea.com redis]# helm -n redis uninstall test-redis release "test-redis" uninstalled [root@a.linuxea.com redis]# helm -n redis ls NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION基于ip的helm在很多场景中,我们需要把一个http的改成https并且还要他支持https,并且还是ip,这在harbor官网已经有说明,老话重提的。1.准备证书与此前不同的是,我们需要将subjectAltName = @alt_names的值也改成ip地址subjectAltName = IP:IPADDRESS如下CERT_PATH=/data/cert-`date +%F`/ YOU_DOMAIN=harbor.local.com mkdir -p ${CERT_PATH} openssl genrsa -out ca.key 4096 openssl req -x509 -new -nodes -sha512 -days 365000 \ -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=${YOU_DOMAIN}" \ -key ca.key \ -out ca.crt openssl genrsa -out ${YOU_DOMAIN}.key 4096 openssl req -sha512 -new \ -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=${YOU_DOMAIN}" \ -key ${YOU_DOMAIN}.key \ -out ${YOU_DOMAIN}.csr cat > v3.ext <<-EOF authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE keyUsage = digitalSignature, nonRepudiation, 
keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth subjectAltName = @alt_names [alt_names] DNS.1=${YOU_DOMAIN} DNS.2=yourdomain DNS.3=hostname EOF openssl x509 -req -sha512 -days 365000 \ -extfile v3.ext \ -CA ca.crt -CAkey ca.key -CAcreateserial \ -in ${YOU_DOMAIN}.csr \ -out ${YOU_DOMAIN}.crt cp ${YOU_DOMAIN}.crt ${YOU_DOMAIN}.key ${CERT_PATH} 以172.16.100.150为例CERT_PATH=/data/cert-`date +%F`/ YOU_DOMAIN=172.16.100.150:8443 mkdir -p ${CERT_PATH} openssl genrsa -out ca.key 4096 openssl req -x509 -new -nodes -sha512 -days 365000 \ -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=${YOU_DOMAIN}" \ -key ca.key \ -out ca.crt openssl genrsa -out ${YOU_DOMAIN}.key 4096 openssl req -sha512 -new \ -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=${YOU_DOMAIN}" \ -key ${YOU_DOMAIN}.key \ -out ${YOU_DOMAIN}.csr cat > v3.ext <<-EOF authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth subjectAltName = IP:172.16.100.150 [alt_names] DNS.1=${YOU_DOMAIN} DNS.2=yourdomain DNS.3=hostname EOF openssl x509 -req -sha512 -days 365000 \ -extfile v3.ext \ -CA ca.crt -CAkey ca.key -CAcreateserial \ -in ${YOU_DOMAIN}.csr \ -out ${YOU_DOMAIN}.crt cp ${YOU_DOMAIN}.crt ${YOU_DOMAIN}.key ${CERT_PATH}2.harbor.yaml配置harbor.yaml部分配置hostname: 172.16.100.150 http: port: 8080 https: port: 8443 certificate: /etc/ssl/certs/172.16.100.150:8443.crt private_key: /etc/ssl/certs/172.16.100.150:8443.key配置完成后需要执行./prepare并且重启3.docker和ca仍然进行证书拷贝首先拷贝当前节点的 cp 172.16.100.150\:8443.* /etc/docker/certs.d/172.16.100.150\:8443/在将当前的证书打包拷贝到其他需要使用helm上传下载的节点 tar -zcf 8443.tar 172.16.100.150\:8443 scp 8443.tar 172.16.15.136:/etc/docker/certs.d/目录结构如下[root@docker-156 certs.d]# tree /etc/docker/certs.d/ /etc/docker/certs.d/ ├── 172.16.100.150:8443 │ ├── 172.16.100.150:8443.cert │ ├── 172.16.100.150:8443.crt │ ├── 172.16.100.150:8443.csr │ ├── 172.16.100.150:8443.key │ └── 
ca.crt仍然需要让系统信任这个ca,于是我们将 ca.crt的内容追加到如/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem文件中,并复制ca.crt到/etc/pki/ca-trust/source/anchors/cp /etc/docker/certs.d/harbor.local.com/ca.crt /etc/pki/ca-trust/source/anchors/而后就可以正常推送和下载了
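上面手工追加 bundle 的办法可以用,但在 CentOS/RHEL 上更稳妥的做法是走 update-ca-trust,它会自动重建 tls-ca-bundle.pem。下面是一个可供参考的小脚本(证书路径沿用上文的假设值,harbor v2 提供 /api/v2.0/health 健康接口可用来验证):

```shell
# 假设:证书仍在 /etc/docker/certs.d/172.16.100.150:8443/ 下,与上文一致
CERT_DIR='/etc/docker/certs.d/172.16.100.150:8443'

# 1. 确认证书 SAN 中确实有 IP 条目,否则 docker/helm 的校验仍会失败
openssl x509 -noout -text -in "${CERT_DIR}/172.16.100.150:8443.crt" \
  | grep -A1 'Subject Alternative Name'

# 2. 让系统信任该 CA:放入 anchors 后执行 update-ca-trust,无需手工追加 bundle
cp "${CERT_DIR}/ca.crt" /etc/pki/ca-trust/source/anchors/harbor-ca.crt
update-ca-trust extract

# 3. 验证 https 是否可用
curl -s "https://172.16.100.150:8443/api/v2.0/health" ; echo
```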
2022年06月28日
44 阅读
0 评论
0 点赞
2022-06-27
linuxea:gitops持续集成组件快速搭建
我想我多少有些浮夸,因为我把这几句破碎的文字所描述的一个持续集成的雏形称作“gitops”。不禁有些害臊,这充其量只是一个持续集成的组件整合,远远算不上gitops,更别说什么devops,那又是个什么东西呢。不知道从什么时候开始,我逐渐厌烦有人枉谈devops,随意地描述devops,更可恶的是有些人做了一条流水线管道就妄言从事了devops的工作,我不想与他们为伍。我肤浅地认为只有无知才会大言不惭。为此,为了能和这些所谓的devops划清界限,并离得远些,我利用业余时间将一些小项目的实施交付文档加以修改,改为所谓的基于git的持续集成和持续发布。很明显,这里面引入了gitlab:gitlab用来管理jenkins的共享库和k8s的yaml配置清单。当然,这只是一个雏形。并且,如果我的描述和形容使你感到不适,那当我什么都没说。
好的,那么我们正式开始。在一些场合中,我们希望快速构建一个项目,项目里有一套持续集成的流水线,至少需要一些必要的组件,如:jenkins,gitlab,sonarqube,harbor,nexus3,k8s集群等。我们的目的是交付一套持续集成和持续交付的雏形,来应对日益变化的构建和发布。拓扑如下。为此,这篇文章简单介绍如何快速使用docker来部署这些必要的组件。
首要条件:安装docker和docker-compose
离线安装docker。如果你准备了离线包,就可以使用本地的包进行安装,centos7.9:cd docker/docker-rpm yum localinstall * -y
离线安装docker-compose。我们至少下载一个较新的版本来满足harbor的配置要求,一般来说都够用:cd docker/docker-compose cp docker-compose-Linux-x86_64 /usr/local/sbin/docker-compose chmod +x /usr/local/sbin/docker-compose
验证:docker version docker-compose -v
在线安装:yum install epel* -y yum install docker-ce docker-compose -y
jenkins:如果本地有旧的包,解压即可。tar xf jenkins.tar.gz -C /data/ chown -R 1000:1000 /data/jenkins cd /data/jenkins docker-compose -f jenkins.yaml up -d
安装新的:version: '3.5' services: jenkins: image: registry.cn-hangzhou.aliyuncs.com/marksugar/jenkins:2.332-3-alpine-ansible-maven3-nodev16.15-latest container_name: jenkins restart: always network_mode: host environment: - JAVA_OPTS=-Duser.timezone=Asia/Shanghai # 时区1 volumes: - /etc/localtime:/etc/localtime:ro # 时区2 - /data/jenkins-latest/jenkins_home:/var/jenkins_home #chown 1000:1000 -R jenkins_home - /data/jenkins-latest/ansiblefile:/etc/ansible - /data/jenkins-latest/local_repo:/data/jenkins-latest/local_repo - /data/jenkins-latest/package:/usr/local/package #- /data/jenkins-latest/package/node-v14.17.6-linux-x64/bin/node:/sbin/node #- /data/jenkins-latest/package/node-v14.17.6-linux-x64/bin/npm:/sbin/npm #- /data/jenkins-latest/latest_war_package/jenkins.war:/usr/share/jenkins/jenkins.war # jenkins war新包挂载 # ports: # - 58080:58080 user: root logging: driver: "json-file" options: max-size: "1G" deploy: resources:
limits: memory: 30720m reservations: memory: 30720m 查看密钥[root@linuxea.com data]# cat /data/jenkins-latest/jenkins_home/secrets/initialAdminPassword c3e5dd22ea5e4adab28d001a560302bc第一次卡住,修改# cat /data/jenkins-latest/jenkins_home/hudson.model.UpdateCenter.xml <?xml version='1.1' encoding='UTF-8'?> <sites> <site> <id>default</id> <url>https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json</url> </site> </sites>跳过,不安装任何插件选择none如果没有修改上面的插件源,我们就在Manage Jenkins->Plugin Manager->Advanced->最下方的Update Site修改https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json必要安装的jenkins插件1.Credentials: 凭据 localization: 中文插件 localization: chinase(simplified) 2.AnsiColor: 颜色插件 "echo -en \\033[1;32m" 3.Rebuilder: 重复上次构建插件 4.build user vars:变量 变量分为如下几种: Full name :全名 BUILD_USER_FIRST_NAME :名字 BUILD_USER_LAST_NAME :姓 BUILD_USER_ID :Jenkins用户ID BUILD_USER_EMAIL :用户邮箱 5.Workspace Cleanup: 清理workspace 6.Role-based Authorization Strategy 用户角色 7.Git Plugin 8.Gogs 9.GitLab 10.Generic Webhook TriggerVersion 11.Pipeline 12.Pipeline: Groovy 13.JUnit Attachments 14.Performance 15.Html Publisher 16.Gitlab Authentication 17.JIRA 18.LDAP 19.Parameterized Triggersonarqubesonarqube:8.9.2-community docker pull sonarqube:8.9.8-communityversion: '3.3' services: sonarqube: container_name: sonarqube image: registry.cn-hangzhou.aliyuncs.com/marksugar/sonarqube:8.9.8-community restart: always hostname: 172.16.100.47 environment: - stop-timeout: 3600 - "ES_JAVA_OPTS=-Xms16384m -Xmx16384m" ulimits: memlock: soft: -1 hard: -1 logging: driver: "json-file" options: max-size: "50M" deploy: resources: limits: memory: 16384m reservations: memory: 16384m ports: - '9000:9000' volumes: - /etc/localtime:/etc/localtime - /data/sonarqube/conf:/opt/sonarqube/conf - /data/sonarqube/extensions:/opt/sonarqube/extensions - /data/sonarqube/logs:/opt/sonarqube/logs - /data/sonarqube/data:/opt/sonarqube/dataharbortar xf harbor-offline-installer-v2.5.1.tgz cd harbor cp harbor.yml.tmpl harbor.yml 
NodeIp=`ip a s ${NETWORK_DEVICE:-eth0}|awk '/inet/{print $2}'|sed -r 's/\/[0-9]{1,}//'`
sed -i "s/hostname: reg.mydomain.com/hostname: ${NodeIp}/g" harbor.yml
sed -i "s@https:@#https:@g" harbor.yml
sed -i "s@port: 443@#port: 443@g" harbor.yml
sed -i "s@certificate: /your/certificate/path@#certificate: /your/certificate/path@g" harbor.yml
sed -i "s@private_key: /your/private/key/path@#private_key: /your/private/key/path@g" harbor.yml
bash install.sh
默认密码:Harbor12345
nexus
mkdir /data/nexus/data -p && chown -R 200.200 /data/nexus/data
yaml
version: '3.3' services: nexus3: image: sonatype/nexus3:3.39.0 container_name: nexus3 network_mode: host restart: always environment: - INSTALL4J_ADD_VM_PARAMS=-Xms8192m -Xmx8192m -XX:MaxDirectMemorySize=8192m -Djava.util.prefs.userRoot=/nexus-data # - NEXUS_CONTEXT=/ # ports: # - 8081:8081 volumes: - /etc/localtime:/etc/localtime:ro - /data/nexus/data:/nexus-data logging: driver: "json-file" options: max-size: "50M" deploy: resources: limits: memory: 8192m reservations: memory: 8192m
gitlab
version: '3' services: gitlab-ce: container_name: gitlab-ce image: gitlab/gitlab-ce:15.0.3-ce.0 restart: always # network_mode: host hostname: 192.168.100.22 environment: TZ: 'Asia/Shanghai' GITLAB_OMNIBUS_CONFIG: | external_url 'http://192.168.100.22' gitlab_rails['time_zone'] = 'Asia/Shanghai' gitlab_rails['gitlab_shell_ssh_port'] = 23857 # unicorn['port'] = 8888 # nginx['listen_port'] = 80 ports: - '80:80' - '443:443' - '23857:22' volumes: - /etc/localtime:/etc/localtime - /data/gitlab/config:/etc/gitlab - /data/gitlab/logs:/var/log/gitlab - /data/gitlab/data:/var/opt/gitlab logging: driver: "json-file" options: max-size: "50M" deploy: resources: limits: memory: 13312m reservations: memory: 13312m
gitlab-ce启动完成后使用如下命令查看登录密码:docker exec -it gitlab-ce grep 'Password:' /etc/gitlab/initial_root_password
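各组件起来之后,可以用一个小脚本粗略检查端口上是否已有 HTTP 响应(端口取自上文各 compose 的假设值,jenkins 按默认 8080 计,如有改动请同步调整):

```shell
#!/bin/bash
# 形如 "名称:端口" 的列表,逐个用 curl 探测;返回码 000 视为未就绪
SERVICES="jenkins:8080 sonarqube:9000 nexus3:8081 gitlab:80 harbor:80"
HOST=${1:-127.0.0.1}

check() {                      # check <host> <port>,返回 0 表示端口有 HTTP 响应
  local code
  code=$(curl -s -o /dev/null -m 3 -w '%{http_code}' "http://$1:$2" 2>/dev/null)
  [ -n "$code" ] && [ "$code" != "000" ]
}

for s in $SERVICES; do
  name=${s%%:*}; port=${s##*:}
  if check "$HOST" "$port"; then
    echo "$name ($port) up"
  else
    echo "$name ($port) down"
  fi
done
```

用法:`bash check.sh` 检查本机,或 `bash check.sh 192.168.100.22` 检查远端节点。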
2022年06月27日
50 阅读
0 评论
0 点赞
2022-06-22
linuxea:了解gitops
前言
理论是实现一个技术的基础。任何一个新的技术,背后都会有一个理念的支撑,了解这些有助于更好地理解一个新的技术到底解决了哪些问题,以及它为什么需要存在。在早期的实践中,有devops,chatops,dataops,aiops,而gitops是weaveworks提出的一种持续交付方式,它的核心思想是将应用系统的声明式配置文件存放在git版本库。这与我们此前流行、众所周知的“配置即代码”“基础设施即代码”有些类似。git作为交付流水线的核心,每个项目人员都可以拉取并使用git来简化kubernetes的程序部署和运维。一旦采用gitops的方式,开发人员就可以参与进来,整个部署的流程变得简单,开发人员无需关注底层的运维任务,而去关注业务功能。当git提交的配置代码变更,这些变更会被同步到一个控制器上,并最终按照代码中声明式的描述被应用到应用程序的实际运行环境上。还可以将应用程序的实际运行状态与基础架构进行比较,并且告知集群哪些代码与实际环境不匹配。并且这些应用程序代码都来自一个安全的来源,比如gitlab,github等版本控制器,这样的方式在提高开发和部署速度的同时,也提高了应用系统最终的可靠性。
优势
1,更快的部署时间和恢复时间
2,稳定,可回滚(代码在git,通过git恢复,回滚)
3,监控,可视化
然而并没有一个工具能完成流水线的所有工作,因此可以自由地选择,无论是开源还是闭源,最终将它们整合在一起,基于git或者其他的工具组成gitops。不管是从技术角度还是团队文化来说,这都是不错的开始。
关于不可变架构以及为什么需要不可变架构
不管是开发,测试,生产环境,最终都是被部署到不同的节点。通常而言我们采用了批量自动化的部署方式,如ansible,puppet,saltstack等来确保所有机器处于相同的状态,并进行初始化,升级,更新,修改等操作。尽管这一开始就需要更多的人力和时间,但随着时间的推移,这些操作越来越容易出错,因此就出现了更多的平台化,简化整个操作的流程来尽可能地规避人为的错误。但这并不能从根本上改变整个环境,这种方式被称作可变架构。而随着容器的普及,整个环境被打包成一个不可变的单元,而不只是一个应用,这个单元包含了所有的修改,升级,最终通过kubernetes来分发部署,这就解决了上面的这个问题。
不可变基础设施并不是一个新鲜的词,也不一定必须是容器技术,但是当下容器技术是最容易理解和实现的,尤其是在以微服务为代表的分布式系统的部署,运维方面具备很好的可靠性。而在实际中如何应用不可变基础设施是一个问题,gitops正是在kubernetes的实践中出现的,gitops需要依托于不可变基础设施才能发挥作用。在一定程度上说,不可变基础设施为gitops的出现创造了必要的条件,而gitops利用kubernetes的容器编排能力,能够迅速使用镜像搭建出所需的组件。
容器编排声明式的配置信息可以作为代码的形式存在,这意味着你看到的是一组描述信息,而不是一条条命令,而这些信息放置在git进行版本控制,并借助git特性进行更新回滚,这使得配置变得有迹可循。kubernetes能够把容器实例,网络,存储,CPU,内存等配置描述出来,这些描述被应用到kubernetes中 apiVersion: apps/v1 kind: Deployment metadata: name: testv1 labels: app: testv1 spec: replicas: 5 selector: matchLabels: app: testv1 template: metadata: labels: app: testv1 spec: containers: - name: testv1 image: marksugar/nginx:1.14.a ports: - containerPort: 80
gitops充分利用了不可变基础设施和声明式容器编排,通过gitops的方式可以管理多个应用部署,无论是有意,无意或者偶然的配置差异问题,gitops都可以提供有效的方式进行纠正,降低这种不可控性。在这个可重复且可靠的部署中,就算系统宕机或者损坏,都可以提供快速恢复的条件。
原则
将能够描述的信息都存储在git库中:声明式的配置和应用程序部署的描述代码,甚至监控都需要存储在版本库中,正常情况下这些都会被应用,否则也可以进行回滚重来。
不使用kubectl:kubectl会带来不可预测的风险,不可能都记住昨天发生了什么事情。
遵循控制器模式:集群的状态和git库的配置要保持一致。
非gitops推送
具有集群外读/写权限的典型推送管道:从代码CI开始,一系列的打包构建,最终打包镜像,通过kubectl替换yaml配置文件等几种方式,将变更信息推送到集群。通常这种情况下,集群外部必然持有凭据,这存在一定的安全隐患。开发推送代码到仓库,进行系统集成,打包镜像,推送镜像仓库,kubectl配置镜像进行拉取,如下:
拉取
而在gitops中,是进行pull的流水线。git仓库存储应用的程序和配置文件,开发人员推送代码到git仓库,ci工具获取更改并构建成镜像。gitops检测到有新镜像,进行拉取并把配置更新到yaml;gitops检测到集群状态与仓库不一致,更新清单,将镜像部署到集群中。
gitops流水线
代码发生变更,通过ci系统打包构建,打包镜像,推送到仓库,config updater检测到镜像变更,将变更的配置信息更改到git config仓库中,而集群中的deploy operator和git仓库实时同步并部署到环境中。图片来自example gitops pipeline官网的页面,如下图。deploy operator的方式比kubectl相对而言更安全。一旦git上的配置文件变更,gitops就会对比线上环境,按需同步到集群中去。
config updater和deploy operator
config updater和deploy operator是可以根据需要进行开发的,也是实现gitops的关键。其中最重要的是自动同步,确保存储仓库的变更能够自动部署到环境中。目前市面上能够实现gitops中这两个最重要概念(config updater和deploy operator)的工具有:flux,argocd,jenkins X。查看FluxCD, ArgoCD or Jenkins X: Which Is the Right GitOps Tool for You?了解三个工具的对比。其中jenkins x是最复杂的,其次是argocd以及FluxCD,FluxCD相对更轻量一些
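上文所说的“遵循控制器模式”,可以用一段极简的 shell 轮询来示意其原理(仓库地址与 overlay 路径均为假设值;这只是示意,生产中请使用 flux 或 argocd,而不是这样的脚本):

```shell
# 极简的"控制器模式"示意:集群内定期对比 git 仓库与集群状态,有差异才 apply
REPO='http://192.168.100.22/java/k8s-manifests.git'   # 假设的配置仓库
DIR='/tmp/k8s-manifests'

reconcile() {
  [ -d "$DIR/.git" ] || git clone --quiet "$REPO" "$DIR"
  git -C "$DIR" pull --quiet
  # kubectl diff 返回非 0 表示集群与仓库声明不一致,此时才执行 apply
  if ! kubectl diff -k "$DIR/overlays/dev" >/dev/null 2>&1; then
    kubectl apply -k "$DIR/overlays/dev"
  fi
}

# 实际使用时放入循环或 cronjob,例如:
# while true; do reconcile; sleep 30; done
```

凭据只存在于集群内的这个“控制器”上,集群外部不再需要 kubectl 的写权限,这正是 pull 模式比 push 模式安全的原因。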
2022年06月22日
69 阅读
0 评论
0 点赞
2022-06-15
linuxea:curl常用命令
curl常用命令
time_connect:建立到服务器的 TCP 连接所用的时间
time_starttransfer:在发出请求之后,Web 服务器返回数据的第一个字节所用的时间
time_total:完成请求所用的时间
在发出请求之后,Web 服务器处理请求并开始发回数据所用的时间是 (time_starttransfer)1.044 - (time_connect)0.244 = 0.8 秒;客户机从服务器下载数据所用的时间是 (time_total)2.672 - (time_starttransfer)1.044 = 1.628 秒
指定特定主机IP地址访问网站 curl -x 61.135.169.105:80 http://www.baidu.com curl -x 61.135.169.125:80 http://www.baidu.com
网页响应时间 curl -o /dev/null -s -w "time_connect: %{time_connect}\ntime_starttransfer: %{time_starttransfer}\ntime_total: %{time_total}\n" "http://www.linuxea.com" time_connect: 0.009 time_starttransfer: 0.357 time_total: 0.358
状态返回码 curl -s -w %{http_code} "http://www.baidu.com"
完成请求所用的时间 curl -o /dev/null -s -w '%{time_total}' http://www.linuxea.com 0.456
或者如下 curl -o /dev/null -s -w "%{http_code}\n%{time_connect}\n%{time_starttransfer}\n%{time_total}" http://www.baidu.com 200 0.038 0.071 0.071
文件:创建一个新文件 curl-format.txt,然后粘贴: time_namelookup: %{time_namelookup}s\n time_connect: %{time_connect}s\n time_appconnect: %{time_appconnect}s\n time_pretransfer: %{time_pretransfer}s\n time_redirect: %{time_redirect}s\n time_starttransfer: %{time_starttransfer}s\n ----------\n time_total: %{time_total}s\n
发出请求:curl -w "@curl-format.txt" -o /dev/null -s "http://wordpress.com/"
或者在 Windows 上:curl -w "@curl-format.txt" -o NUL -s "http://wordpress.com/"
这是做什么的:
-w "@curl-format.txt" 告诉 cURL 使用我们的格式文件
-o /dev/null 将请求的输出重定向到 /dev/null
-s 告诉 cURL 不要显示进度表
"http://wordpress.com/" 是我们请求的 URL。如果您的 URL 带有“&”查询字符串参数,请务必用引号括起来
输出 time_namelookup: 0.001s time_connect: 0.037s time_appconnect: 0.000s time_pretransfer: 0.037s time_redirect: 0.000s time_starttransfer: 0.092s ---------- time_total: 0.164s
制作 Linux/Mac 快捷方式(别名) alias curltime="curl -w \"@$HOME/.curl-format.txt\" -o /dev/null -s "
然后你可以简单地调用:curltime wordpress.org
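除了 -x 走代理方式,还可以用 --resolve 把域名强制解析到某个 IP,再按正常方式发请求,这样 Host 头和证书校验都保持原样(下面的 IP 仅为示例值):

```shell
# 将 www.baidu.com:80 强制解析到指定 IP,测试该源站的响应时间
curl --resolve www.baidu.com:80:61.135.169.105 -o /dev/null -s \
     -w 'code: %{http_code}  time_total: %{time_total}s\n' http://www.baidu.com/

# https 同理:证书校验仍按域名进行,适合绕过 DNS 直接测某一台后端
curl --resolve www.baidu.com:443:61.135.169.105 -sI https://www.baidu.com/ | head -1
```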
2022年06月15日
67 阅读
0 评论
0 点赞
2022-06-13
linuxea: kustomize变量传入
kustomize一直是备受欢迎的yaml配置管理之一,在过去kustomize一直在解决 “提供一种操作配置数据的方法,而不会使原始配置无法被 Kubernetes 使用。”,我们不用去和其他工具做比较,因为这就是kustomize的魅力所在。但是kustomize也有自己的缺点,它无法像helm那样灵活多变,比如,在配置多个一个Ingress的域名的时候,这在helm中将会非常简单,但是在kustomize中,几乎无法通过kustomize本身来解决,但是官方提供了var,而后var被诟病,于是出现了valueAdd,但很可惜,valueAdd并不是为了解决这个问题。valueAdd是从vars演变而来,但是valueAdd并不是最好的方式这么多的方式,均在解决一个核心的问题,环境变量env和字符自定义删除vars是计划的一部分,很显然,目前并没有更好的方式来解决更多的问题为了应对这个情况,使用最原始的envsubst成了一个选项。如果你并不希望你的配置清单是原始的,而是一些环境变量,大量的模板语法,你可以尝试helm。但用helm来管理大量零散的清单配置,在我看来是条不归路。因为事情在演变的过程中会不断的超过预期,变得复杂。而一旦复杂只会增加额外的成本。像往常一样去配置一个kustomize的目录[root@linuxea.com ~/kustomize]# tree ./ ./ └── env-path ├── base │ ├── deployment.yaml │ └── kustomization.yaml ├── kustomize.exe ├── overlays │ ├── dev │ │ ├── env.file │ │ ├── kustomization.yaml │ │ └── replacement.yaml │ ├── pre-prod │ │ ├── kustomization.yaml │ │ └── patch-shared-env.yaml │ └── prod │ ├── kustomization.yaml │ ├── patch-env-from.yaml │ ├── prod-1-env.yaml │ └── prod-2-env.yaml └── transformers └── setProject ├── kustomization.yaml └── setProject.yaml 8 directories, 14 filesdeployment.yamlapiVersion: apps/v1 kind: Deployment metadata: name: test spec: replicas: 3 selector: matchLabels: template: metadata: labels: spec: containers: - name: test image: alpine:3.12.12 imagePullPolicy: Always command: [ "/bin/sh"] args: ["-c","echo $(ALT_GREETING) $(ENABLE_RISKY) $SW_AGENT_TRACE_IGNORE_PATH; sleep 36000" ] ports: - containerPort: 8080 env: - name: ALT_GREETING valueFrom: configMapKeyRef: name: envinpod key: ALT_GREETING - name: ENABLE_RISKY valueFrom: configMapKeyRef: name: envinpod key: ENABLE_RISKY - name: SW_AGENT_TRACE_IGNORE_PATH valueFrom: configMapKeyRef: name: envinpod key: SW_AGENT_TRACE_IGNORE_PATH 我们分别传入了三个环境变量$(ALT_GREETING) $(ENABLE_RISKY) $SW_AGENT_TRACE_IGNORE_PATH首先通过env.valueFrom.configMapKeyRef来进行传递。因为这是k8s configmapkeyref的常用方式 env: - name: ALT_GREETING valueFrom: configMapKeyRef: name: envinpod key: ALT_GREETING - name: ENABLE_RISKY valueFrom: configMapKeyRef: name: envinpod key: 
ENABLE_RISKY - name: SW_AGENT_TRACE_IGNORE_PATH valueFrom: configMapKeyRef: name: envinpod key: SW_AGENT_TRACE_IGNORE_PATH kustomization导入[root@linuxea.com ~/kustomize/env-path]# cat base/kustomization.yaml resources: - deployment.yaml而在overlays下的dev中,引用了这些环境变量[root@linuxea.com ~/kustomize/env-path/overlays]# cat dev/env.file ALT_GREETING=Hiya ENABLE_RISKY=false TEST-NAME=marksugar SW_AGENT_TRACE_IGNORE_PATH=GET:/health,GET:/aggreg/health,/eureka/**,xxl-job/**在kustomization中的配置如下apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization # 名称空间 # namespace: test1 # 前缀 namePrefix: fat- bases: - ../../base # configmap变量 configMapGenerator: - name: envinpod env: env.file # 副本数 replicas: - name: test count: 5 # 标签 commonLabels: app.kubernetes.io/name: nginx app: mark # 镜像 images: - name: alpine newTag: 3.12.12其中关键的在于configMapGenerator: - name: envinpod env: env.file我们渲染下看[root@linuxea.com ~/kustomize/env-path]# kustomize build overlays/dev/ apiVersion: v1 data: ALT_GREETING: Hiya ENABLE_RISKY: "false" SW_AGENT_TRACE_IGNORE_PATH: GET:/health,GET:/aggreg/health,/eureka/**,xxl-job/** TEST-NAME: marksugar kind: ConfigMap metadata: labels: app: mark app.kubernetes.io/name: nginx name: fat-envinpod-8hbm9d86m9 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: mark app.kubernetes.io/name: nginx name: fat-test spec: replicas: 5 selector: matchLabels: app: mark app.kubernetes.io/name: nginx template: metadata: labels: app: mark app.kubernetes.io/name: nginx spec: containers: - args: - -c - echo $(ALT_GREETING) $(ENABLE_RISKY) $SW_AGENT_TRACE_IGNORE_PATH; sleep 36000 command: - /bin/sh env: - name: ALT_GREETING valueFrom: configMapKeyRef: key: ALT_GREETING name: fat-envinpod-8hbm9d86m9 - name: ENABLE_RISKY valueFrom: configMapKeyRef: key: ENABLE_RISKY name: fat-envinpod-8hbm9d86m9 - name: SW_AGENT_TRACE_IGNORE_PATH valueFrom: configMapKeyRef: key: SW_AGENT_TRACE_IGNORE_PATH name: fat-envinpod-8hbm9d86m9 image: alpine:3.12.12 imagePullPolicy: Always name: test 
ports: - containerPort: 8080而后run起来查看镜像的内容你可以使用kubectl -k 或者如我这样使用[root@linuxea.com ~/kustomize/env-path]# kustomize build overlays/dev/ | kubectl --kubeconfig /root/.kube/marksugar-dev-1 apply -f - configmap/fat-envinpod-8hbm9d86m9 created deployment.apps/fat-test createdrun起来后直接查看日志,环境变量有没有被传入[root@linuxea.com ~/kustomize/env-path]# kubectl --kubeconfig /root/.kube/marksugar-dev-1 logs -f fat-test-f9d967c4-vsc28 Hiya false GET:/health,GET:/aggreg/health,/eureka/**,xxl-job/**除此之外使用envsubst 可以参考变量实值与文件变量替换配置清单在应用之前必须先通过kustomize进行重新转换成k8s的原始格式文件并且可以通过环境变量传递给kubectl,如下env is=true kubectl apply -k 推荐阅读Kustomize command to add environment variables to containers in a kustomizationDocument the environment variable substitution feature of kustomize configMapGeneratorCreate environment variables from env file with KustomizeUse ConfigMap-defined environment variables in Pod commands combineConfigskustomize-with-multiple-envsGenerating ResourcesKustomize Vars exampleIntroduce a ReplacementTransformer to replace the vars feature.#3492kustomize vars - enhance or replace?#2052Replacement poc#1631airshipctlkv.gohttps://github.com/kubernetes-sigs/kustomize/pull/3737/files#diff-c3d1278453f2a6fb229ec8998df0f109d8605b5e46ba2a84d067083f5a543761R194Using Kustomize for per-environment deployment of cert-manager resourcesHow To Manage Your Kubernetes Configurations with KustomizeIntroduce a ReplacementTransformer to replace the vars feature.#3492Kustomize PluginsChanging 'imagePullPolicy' of all containers in all deployments#1493valueAdd.mdkustomize-with-multiple-envs使用 Kustomize 对 Kubernetes 对象进行声明式管理Using system environment variables with KustomizeDemo: combining config data from devops and developershttps://github.com/kubernetes-sigs/kustomize/issues/2052https://github.com/kubernetes-sigs/kustomize/blob/master/examples/valueAdd.md
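上文提到的 envsubst 方案,一个最小可用的示意如下(模板文件与 INGRESS_HOST 变量名均为假设):

```shell
# 在 kustomize 渲染结果之外,再用 envsubst 做一道变量替换
export INGRESS_HOST=dev.linuxea.com

cat > /tmp/ingress.tpl.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: ${INGRESS_HOST}
EOF

# 只替换显式列出的变量,避免误伤 yaml 里其它的 $ 字符
rendered=$(envsubst '${INGRESS_HOST}' < /tmp/ingress.tpl.yaml)
echo "$rendered"

# 实际部署时与 kustomize/kubectl 串起来:
# kustomize build overlays/dev | envsubst '${INGRESS_HOST}' | kubectl apply -f -
```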
2022年06月13日
69 阅读
0 评论
0 点赞
2022-06-12
linuxea: flannel udp不同节点的pod通信
如下图所示我们先查看下pod的调度情况,分别是10.11.1.45和10.11.0.10在不同的两个节点,并且是不同的网段不同的网段,那么久需要查找路由表,看路由信息[root@master1 ~]# kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE marksugar-deployment-578cdd567b-968gg 1/1 Running 4 8d 10.11.1.45 node1 marksugar-deployment-578cdd567b-fw5rh 1/1 Running 4 8d 10.11.1.48 node1 marksugar-deployment-578cdd567b-nfhtt 1/1 Running 8 12d 10.11.0.10 master1里面分别有两个网卡,分别是eth0和lopod尾968gg开头的mac地址是9e:66:19:aa:f6:c7,ip是10.11.1.45[root@master1 ~]# kubectl exec -it marksugar-deployment-578cdd567b-968gg -- ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 3: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default link/ether 9e:66:19:aa:f6:c7 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.11.1.45/24 brd 10.11.1.255 scope global eth0 valid_lft forever preferred_lft foreverpod尾nfhtt开头的mac地址是ee:a2:33:a2:b6:69,ip是10.11.0.10[root@master1 ~]# kubectl exec -it marksugar-deployment-578cdd567b-nfhtt -- ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 3: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default link/ether ee:a2:33:a2:b6:69 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.11.0.10/24 brd 10.11.0.255 scope global eth0 valid_lft forever preferred_lft forever路由匹配原则我们在上面知道,不同的网段通讯,需要查找路由表,看路由信息,我们就可以查询下路由信息[root@master1 ~]# kubectl exec -it marksugar-deployment-578cdd567b-968gg -- route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 10.11.1.1 0.0.0.0 UG 0 0 0 eth0 10.11.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.244.0.0 10.11.1.1 255.255.0.0 UG 0 0 0 
eth0
路由匹配是最长匹配原则,大致意思是,越精确的信息越优先:如果能匹配到具体的ip就优先匹配,而后再到0.0.0.0的默认路由,如果有就走,如果没有就丢掉。当我们ping的时候,就会走默认路由,将信息发送到网关,也就是10.11.1.1。要ping通,就需要源ip,目标ip,源mac,目标mac。
路由转发前提:路由转发中ip是不变的,mac地址每经过一跳都会发生变化。也就是说源ip和目标ip是不发生改变的,但是源mac和目的mac是一直在变的。如上,10.11.1.45的路由表信息的下一跳是10.11.1.1,它们属于同一个网段,因此只需要解析到二层即可。解析的mac地址在宿主机上是ee:e9:19:55:93:d1,如下 7: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default qlen 1000 link/ether ee:e9:19:55:93:d1 brd ff:ff:ff:ff:ff:ff inet 10.11.1.1/24 brd 10.11.1.255 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::ece9:19ff:fe55:93d1/64 scope link valid_lft forever preferred_lft forever
开始抓包
我们进行抓包来查看两个pod的mac地址是不是和我们上面说的那样:不同节点的pod网络通讯ip不变,而mac地址一直在发生变化。我们在master节点抓nfhtt的包,nfhtt的ip是10.11.0.10,mac地址是ee:a2:33:a2:b6:69 [root@master1 ~]# kubectl exec -it marksugar-deployment-578cdd567b-nfhtt -- tcpdump -n -e -i eth0 并且开始ping,ping的是968gg的pod,ip地址是10.11.1.45,mac地址是9e:66:19:aa:f6:c7 [root@master1 ~]# kubectl exec -it marksugar-deployment-578cdd567b-nfhtt -- ping 10.11.1.45 PING 10.11.1.45 (10.11.1.45): 56 data bytes 64 bytes from 10.11.1.45: seq=0 ttl=60 time=0.613 ms 64 bytes from 10.11.1.45: seq=1 ttl=60 time=0.257 ms 64 bytes from 10.11.1.45: seq=2 ttl=60 time=0.206 ms 64 bytes from 10.11.1.45: seq=3 ttl=60 time=0.235 ms ^C --- 10.11.1.45 ping statistics --- 13 packets transmitted, 13 packets received, 0% packet loss round-trip min/avg/max = 0.191/0.254/0.613 ms
查看抓包的结果 [root@master1 ~]# kubectl exec -it marksugar-deployment-578cdd567b-nfhtt -- tcpdump -n -e -i eth0 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes 06:56:41.779184 ee:a2:33:a2:b6:69 > a2:a2:7f:a0:0d:91, ethertype IPv4 (0x0800), length 98: 10.11.0.10 > 64 06:56:41.779377 a2:a2:7f:a0:0d:91 > ee:a2:33:a2:b6:69, ethertype IPv4 (0x0800), length 98: 10.11.1.45 > 4 06:56:42.779249 ee:a2:33:a2:b6:69 > a2:a2:7f:a0:0d:91, ethertype IPv4 (0x0800),
length 98: 10.11.0.10 > 64 06:56:42.779448 a2:a2:7f:a0:0d:91 > ee:a2:33:a2:b6:69, ethertype IPv4 (0x0800), length 98: 10.11.1.45 > 4 06:56:43.779308 ee:a2:33:a2:b6:69 > a2:a2:7f:a0:0d:91, ethertype IPv4 (0x0800), length 98: 10.11.0.10 > 64 06:56:43.779500 a2:a2:7f:a0:0d:91 > ee:a2:33:a2:b6:69, ethertype IPv4 (0x0800), length 98: 10.11.1.45 > 4 06:56:44.042429 ee:a2:33:a2:b6:69 > a2:a2:7f:a0:0d:91, ethertype ARP (0x0806), length 42: Request who-ha 06:56:44.042437 a2:a2:7f:a0:0d:91 > ee:a2:33:a2:b6:69, ethertype ARP (0x0806), length 42: Request who-ha 06:56:44.042440 ee:a2:33:a2:b6:69 > a2:a2:7f:a0:0d:91, ethertype ARP (0x0806), length 42: Reply 10.11.0. 06:56:44.042463 a2:a2:7f:a0:0d:91 > ee:a2:33:a2:b6:69, ethertype ARP (0x0806), length 42: Reply 10.11.0. ^C 10 packets captured 10 packets received by filter 0 packets dropped by kernel查看抓包的结果我们现在在看下cni0的网关的ip和mac地址如下cni0在这里表示的是pod网络的网关10.11.1.1网段7: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default qlen 1000 link/ether ee:e9:19:55:93:d1 brd ff:ff:ff:ff:ff:ff inet 10.11.1.1/24 brd 10.11.1.255 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::ece9:19ff:fe55:93d1/64 scope link valid_lft forever preferred_lft forever和10.11.0.1网段7: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default qlen 1000 link/ether a2:a2:7f:a0:0d:91 brd ff:ff:ff:ff:ff:ff inet 10.11.0.1/24 brd 10.11.0.255 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::a0a2:7fff:fea0:d91/64 scope link valid_lft forever preferred_lft forever如上,我们看到抓包结果中两个mac地址是ee:a2:33:a2:b6:69 > a2:a2:7f:a0:0d:91pod尾968gg开头的mac地址是9e:66:19:aa:f6:c7,ip是10.11.1.45pod尾nfhtt开头的mac地址是ee:a2:33:a2:b6:69,ip是10.11.0.10我们从 10.11.0.10 ping 10.11.1.45, 10.11.0.10 的mac地址是ee:a2:33:a2:b6:69 ,10.11.1.45的pod的mac地址是9e:66:19:aa:f6:c7,而抓包走的则是10.11.0.10 的mac地址是ee:a2:33:a2:b6:69和10.11.0.1的cni0网卡的mac地址a2:a2:7f:a0:0d:91,返回的也是a2:a2:7f:a0:0d:91和ee:a2:33:a2:b6:69我们通过pod nfhtt(ee:a2:33:a2:b6:69 
10.11.0.10)去 ping pod 968gg(9e:66:19:aa:f6:c7 10.11.1.45)的ip,而抓包显示,返回报文的mac地址是10.11.0.1的mac地址,而10.11.0.1正是cni0网关。这篇延续上面几篇基础
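补充一点:pod 里网卡名 eth0@if8 中的 if8,指的是它 veth 对端在宿主机上的接口序号,可以这样相互对应起来(pod 名沿用上文):

```shell
# 1. 在 pod 内取出 eth0 对端(宿主机侧 veth)的接口序号
IDX=$(kubectl exec marksugar-deployment-578cdd567b-968gg -- cat /sys/class/net/eth0/iflink)

# 2. 到 pod 所在节点(node1)上按序号找到对应的 veth 网卡,它就桥接在 cni0 上
ip -o link | awk -F': ' -v idx="$IDX" '$1 == idx {print $2}'

# 3. 可进一步确认该 veth 属于 cni0 网桥
# ip link show master cni0
```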
2022年06月12日
77 阅读
0 评论
1 点赞