2022-07-11
linuxea:jenkins基于钉钉的构建通知(11)
In the previous posts I walked through the base environment, the skywalking + nacos setup, nexus3, the sonarqube configuration, and image building. This post configures the build notification.

After reading it, you will know the simplest way to implement each item in this list:

- jenkins and gitlab triggering (done)
- jenkins credentials (done)
- junit configuration (done)
- basic sonarqube scanning (done)
- sonarqube coverage (done)
- packaging the Java-based skywalking agent (previous post)
- linking sonarqube to gitlab (previous post)
- building docker inside docker (previous post)
- mvn packaging (previous post)
- basic sonarqube branch scanning (previous post)
- managing kustomize k8s manifests in gitlab (previous post)
- kubectl deployment (previous post)
- kubectl deployment status tracking (previous post)
- DingTalk push of the build status (this post)

Bit by bit we have assembled the simplest possible CI flow, and in the CD stage we paired kustomize with argocd to cover the GitOps side. Now we add a DingTalk build notification.

First create a DingTalk robot whose keyword is `DEVOPS`, then write a function that sends a markdown-formatted message. It passes a few parameters to DingTalk:

- `mdTitle`: the title, i.e. the robot keyword `DEVOPS`
- `mdText`: the detailed message body
- `atUser`: who to @
- `atAll`: whether to @everyone
- `SedContent`: the notification heading

The function body:

```groovy
def DingTalk(mdTitle, mdText, atAll, atUser = '' ,SedContent){
  webhook = "https://oapi.dingtalk.com/robot/send?access_token=55d35d6f09f05388c1a8f7d73955cd9b7eaf4a0dd38"
  sh """
  curl --location --request POST ${webhook} \
       --header 'Content-Type: application/json' \
       --data '{
          "msgtype": "markdown",
          "markdown": {
              "title": "${mdTitle}",
              "text": "${SedContent}\n ${mdText}"
          },
          "at": {
              "atMobiles": [ "${atUser}" ],
              "isAtAll": "${atAll}"
          }
       }'
  """
}
```

Then add a `post` section to the pipeline. (The four-argument call works because `atUser` has a default value, so Groovy resolves `DingTalk("DEVOPS", mdText, true, SedContent)` against the generated overload that omits `atUser`.)

```groovy
post {
  success{
    script{
      // ItmesName="${JOB_NAME.split('/')[-1]}"
      env.SedContent="构建通知"
      mdText = "### ✅ \n ### 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By: 忙碌的马克 \n"
      DingTalk("DEVOPS", mdText, true, SedContent)
    }
  }
  failure{
    script{
      env.SedContent="构建通知"
      mdText = "### ❌ \n 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By: 忙碌的马克 \n"
      DingTalk("DEVOPS", mdText, true, SedContent)
    }
  }
}
```

As you can see, the message body references several variables that we still have to populate. In any stage's `script` block, declare them with `env.` so they become global environment variables:

- `GIT_COMMIT_DESCRIBE`: the commit message
- `GIT_COMMIT_TAGSHA`: the commit SHA
- `TIMENOW_CN`: a human-readable timestamp

```groovy
env.GIT_COMMIT_DESCRIBE = "${sh(script:'git log --oneline --no-merges|head -1', returnStdout: true)}"
env.GIT_COMMIT_TAGSHA = sh(script: """cut -b -40 .git/refs/remotes/origin/master""", returnStdout: true).trim()
env.TIMENOW_CN = sh(script: """date +%Y年%m月%d日%H时%M分%S秒""", returnStdout: true).trim()
```

Run a build; when it finishes, a message is pushed to DingTalk and the final pipeline view appears as expected. The complete pipeline code follows (the stage name typo "coed sonar" is fixed to "code sonar", the stray `}` after the `rm -rf` test is removed, and `--heade` is corrected to `--header`):

```groovy
try {
  if ( "${onerun}" == "gitlabs"){
    println("Trigger Branch: ${info_ref}")
    RefName="${info_ref.split("/")[-1]}"
    // custom display name
    currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}"
    // custom description
    currentBuild.description = "Trigger by user ${info_user_username} 自动触发 \n branch: ${RefName} \n commit message: ${info_commits_0_message}"
    BUILD_TRIGGER_BY="${info_user_username}"
    BASEURL="${info_project_git_http_url}"
  }
}catch(e){
  BUILD_TRIGGER_BY="${currentBuild.getBuildCauses()[0].userId}"
  currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} 非自动触发 \n branch: ${branch} \ngit: ${BASEURL}"
}
pipeline{
  // node that runs this pipeline
  agent any
  environment {
    def tag_time = new Date().format("yyyyMMddHHmm")
    def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}"
    def kustomize_Git="git@172.16.100.47:devops/k8s-yaml.git"
    def JOB_NAMES=sh (script: """echo ${kustomize_Git.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim()
    def Projects_Area="dev"
    def apps_name="java-demo"
    def projectGroup="java-demo"
    def PACK_PATH="/usr/local/package"
  }
  // pipeline run options
  options {
    skipDefaultCheckout true
    skipStagesAfterUnstable()
    buildDiscarder(logRotator(numToKeepStr: '2'))
  }
  // pipeline stages
  stages{
    // stage 1: fetch the code
    stage("CheckOut"){
      steps {
        script {
          println("checkout --> branch: ${env.branch}")
          checkout(
            [$class: 'GitSCM',
             branches: [[name: "${branch}"]],
             extensions: [],
             userRemoteConfigs: [[
               credentialsId: 'gitlab-mark',
               url: "${BASEURL}"]]])
        }
      }
    }
    stage("unit Test"){
      steps{
        script{
          env.GIT_COMMIT_DESCRIBE = "${sh(script:'git log --oneline --no-merges|head -1', returnStdout: true)}"
          env.TIMENOW_CN=sh(returnStdout: true, script: 'date +%Y年%m月%d日%H时%M分%S秒')
          env.GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim()
          sh """
          cd linuxea && mvn test -s /var/jenkins_home/.m2/settings.xml2
          """
        }
      }
      post {
        success {
          script {
            junit 'linuxea/target/surefire-reports/*.xml'
          }
        }
      }
    }
    stage("code sonar"){
      environment {
        def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim()
        def Projects_GitId=sh (script: """curl --silent --header "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "http://gitlab.marksugar.com/api/v4/projects?simple=true" | /usr/local/package/jq-1.6/jq -rc '.[]|select(.path_with_namespace == "java/java-demo")' | /usr/local/package/jq-1.6/jq .id""",returnStdout: true).trim()
        def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK"
        def GitLab_Address="http://172.16.100.47"
      }
      steps{
        script {
          withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) {
            sh """
            cd linuxea && \
            /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \
              -Dsonar.host.url=${GitLab_Address}:9000 \
              -Dsonar.projectKey=${JOB_NAME} \
              -Dsonar.projectName=${JOB_NAME} \
              -Dsonar.projectVersion=${BUILD_NUMBER} \
              -Dsonar.login=${SONAR_TOKEN} \
              -Dsonar.ws.timeout=30 \
              -Dsonar.projectDescription="my first project!" \
              -Dsonar.links.homepage=${env.BASEURL} \
              -Dsonar.links.ci=${BUILD_URL} \
              -Dsonar.sources=src \
              -Dsonar.sourceEncoding=UTF-8 \
              -Dsonar.java.binaries=target/classes \
              -Dsonar.java.test.binaries=target/test-classes \
              -Dsonar.java.surefire.report=target/surefire-reports \
              -Dsonar.core.codeCoveragePlugin=jacoco \
              -Dsonar.jacoco.reportPaths=target/jacoco.exec \
              -Dsonar.branch.name=${branch} \
              -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \
              -Dsonar.gitlab.ref_name=${branch} \
              -Dsonar.gitlab.project_id=${Projects_GitId} \
              -Dsonar.dynamicAnalysis=reuseReports \
              -Dsonar.gitlab.failure_notification_mode=commit-status \
              -Dsonar.gitlab.url=${GitLab_Address} \
              -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \
              -Dsonar.gitlab.api_version=v4
            """
          }
        }
      }
    }
    stage("mvn build"){
      steps {
        script {
          sh """
          cd linuxea
          mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml2
          """
        }
      }
    }
    stage("docker build"){
      steps{
        script{
          sh """
          cd linuxea
          docker ps -a
          cp -r /usr/local/package/skywalking-agent ./
          docker build -f ./Dockerfile -t $IPATH .
          docker push $IPATH
          docker rmi -f $IPATH
          """
        }
      }
    }
    stage('Deploy') {
      steps {
        sh '''
        [ ! -d ${JOB_NAMES} ] || rm -rf ${JOB_NAMES}
        git clone ${kustomize_Git} && cd ${JOB_NAMES} && git checkout ${apps_name}
        echo "push latest images: $IPATH"
        echo "`date +%F-%T` imageTag: $IPATH buildId: ${BUILD_NUMBER} " >> ./buildhistory-$Projects_Area-${apps_name}.log
        cd overlays/$Projects_Area
        ${PACK_PATH}/kustomize edit set image $IPATH
        cd ../..
        git add .
        git config --global push.default matching
        git config user.name zhengchao.tang
        git config user.email usertzc@163.com
        git commit -m "image tag $IPATH-> ${imageUrlPath}"
        git push -u origin ${apps_name}
        ${PACK_PATH}/argocd app sync ${apps_name} --retry-backoff-duration=10s -l marksugar/app=${apps_name}
        '''
        // ${PACK_PATH}/argocd app sync ${apps_name} --retry-backoff-duration=10s -l marksugar/app=${apps_name}
      }
      // ${PACK_PATH}/kustomize build overlays/$Projects_Area/ | ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev apply -f -
    }
    stage('status watch') {
      steps {
        sh '''
        ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev -n ${projectGroup} rollout status deployment ${apps_name} --watch --timeout=10m
        '''
      }
    }
  }
  post {
    success{
      script{
        // ItmesName="${JOB_NAME.split('/')[-1]}"
        env.SedContent="构建通知"
        mdText = "### ✅ \n ### 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By: 忙碌的马克 \n"
        DingTalk("DEVOPS", mdText, true, SedContent)
      }
    }
    failure{
      script{
        env.SedContent="构建通知"
        mdText = "### ❌ \n 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By: 忙碌的马克 \n"
        DingTalk("DEVOPS", mdText, true, SedContent)
      }
    }
  }
}
def DingTalk(mdTitle, mdText, atAll, atUser = '' ,SedContent){
  webhook = "https://oapi.dingtalk.com/robot/send?access_token=55d35d6f09f05388c1a8f7d73955cd9b7eaf4a0dd3803abdd1452e83d5b607ab"
  sh """
  curl --location --request POST ${webhook} \
       --header 'Content-Type: application/json' \
       --data '{
          "msgtype": "markdown",
          "markdown": {
              "title": "${mdTitle}",
              "text": "${SedContent}\n ${mdText}"
          },
          "at": {
              "atMobiles": [ "${atUser}" ],
              "isAtAll": "${atAll}"
          }
       }'
  """
}
```

With that, the simplest gitops demo project is complete.

Reference: gitops
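To inspect what the `DingTalk` function actually sends, the payload construction can be pulled out into a standalone shell sketch. `build_dingtalk_payload` is a hypothetical helper that mirrors the Groovy function's parameters and prints the JSON instead of POSTing it to a real webhook:

```shell
# Hypothetical helper mirroring the pipeline's DingTalk() function:
# builds the same markdown payload, but only prints it (nothing is sent).
build_dingtalk_payload() {
  local mdTitle="$1" mdText="$2" atAll="$3" atUser="$4" SedContent="$5"
  cat <<EOF
{
  "msgtype": "markdown",
  "markdown": { "title": "${mdTitle}", "text": "${SedContent}\n ${mdText}" },
  "at": { "atMobiles": [ "${atUser}" ], "isAtAll": "${atAll}" }
}
EOF
}

build_dingtalk_payload "DEVOPS" "### build ok" "true" "" "构建通知"
```

In the real pipeline this JSON is what `curl --data` carries; printing it first is a convenient way to debug keyword mismatches (the robot silently drops messages whose title/text lack the configured keyword).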
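The three `env.` values the post exports can be exercised outside Jenkins. A minimal sketch, assuming only `git` and `date` are available: it builds a throwaway repository, and uses `git rev-parse HEAD` as a local stand-in for the pipeline's read of `.git/refs/remotes/origin/master`:

```shell
# Reproduce GIT_COMMIT_DESCRIBE / GIT_COMMIT_TAGSHA / TIMENOW_CN outside
# Jenkins, in a throwaway repo created just for the demonstration.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "demo: first commit"

GIT_COMMIT_DESCRIBE=$(git log --oneline --no-merges | head -1)   # commit subject
GIT_COMMIT_TAGSHA=$(git rev-parse HEAD | cut -b -40)             # 40-char SHA
TIMENOW_CN=$(date +%Y年%m月%d日%H时%M分%S秒)                      # readable CN timestamp

echo "$GIT_COMMIT_DESCRIBE"
echo "$GIT_COMMIT_TAGSHA"
echo "$TIMENOW_CN"
```

The pipeline's `cut -b -40` trims a possible trailing newline artifact from the ref file; `git rev-parse` already emits a clean 40-character SHA, so the `cut` is kept here only to mirror the original command.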
2022-07-10
linuxea:基于kustomize的argocd发布实现(10)
在此前我们配置了Kustomize清单,并且通过kubectl将清单应用到k8s中,之后又做另一个状态跟踪,但这还不够。我们希望通过一个cd工具来配置管理,并且提供一个可视化界面。我们选择argocd我不会在这篇章节中去介绍uI界面到底怎么操作,因为那些显而易见。我只会介绍argocd的二进制程序客户端的操作使用,但是也仅限于完成一个app的创建,集群的添加,项目的添加。仅此而已。argocd是一个成熟的部署工具,如果有时间,我将会在后面的时间里更新其他的必要功能。阅读此篇,你将了解argocd客户端最简单的操作,和一些此前的流水线实现方式列表如下:jenkins和gitlab触发(已实现)jenkins凭据使用(已实现)juit配置(已实现)sonarqube简单扫描(已实现)sonarqube覆盖率(已实现)打包基于java的skywalking agent(已实现)sonarqube与gitlab关联 (已实现)配置docker中构建docker (已实现)mvn打包(已实现)sonarqube简单分支扫描(已实现)基于gitlab来管理kustomize的k8s配置清单(已实现)kubectl部署(已实现)kubeclt deployment的状态跟踪(已实现)kustomize和argocd(本章实现)钉钉消息的构建状态推送1.1 安装2.4.2我们在gitlab上获取此配置文件,并修改镜像此前我拉取了2.4.0和2.4.2的镜像,如下2.4.0 image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:dex-v2.30.2 image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:haproxy-2.0.25-alpine image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:v2.4.0 image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:redis-7.0.0-alpine2.4.2 image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:dex-v2.30.2 image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:haproxy-2.0.25-alpine image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:v2.4.2 image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:redis-7.0.0-alpine分别替换所有镜像地址,如果是install.yaml就替换,如果是ha-install.yaml也替换sed -i 's@redis:7.0.0-alpine@registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:redis-7.0.0-alpine@g' sed -i 's@ghcr.io/dexidp/dex:v2.30.2@registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:dex-v2.30.2@g' sed -i 's@quay.io/argoproj/argocd:v2.4.0@registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:v2.4.0@g' sed -i 's@haproxy:2.0.25-alpine@registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:haproxy-2.0.25-alpine@g'创建名称空间并applykubectl create namespace argocd kubectl apply -n argocd -f argocd.yaml更新删除不掉的时候的解决办法kubectl patch crd/appprojects.argoproj.io -p '{"metadata":{"finalizers":[]}}' --type=merge等待,到argocd组件准备完成[root@linuxea-11 ~/argocd]# kubectl -n argocd get pod NAME READY STATUS RESTARTS AGE 
argocd-application-controller-0 1/1 Running 0 7m33s argocd-applicationset-controller-7bbcd5c9bd-rqn84 1/1 Running 0 7m33s argocd-dex-server-75c668865-s9x5d 1/1 Running 0 7m33s argocd-notifications-controller-bc5954bd7-gg4ks 1/1 Running 0 7m33s argocd-redis-ha-haproxy-8658c76475-hdzkv 1/1 Running 0 7m33s argocd-redis-ha-haproxy-8658c76475-jrrtl 1/1 Running 0 7m33s argocd-redis-ha-haproxy-8658c76475-rk868 1/1 Running 0 7m33s argocd-redis-ha-server-0 2/2 Running 0 7m33s argocd-redis-ha-server-1 2/2 Running 0 5m3s argocd-redis-ha-server-2 2/2 Running 0 4m3s argocd-repo-server-567dd6c487-6k89z 1/1 Running 0 7m33s argocd-repo-server-567dd6c487-rt4vq 1/1 Running 0 7m33s argocd-server-677d79497b-k72h2 1/1 Running 0 7m33s argocd-server-677d79497b-pb5gt 1/1 Running 0 7m33s配置域名访问apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: argocd-server-ingress namespace: argocd annotations: cert-manager.io/cluster-issuer: letsencrypt-prod kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: "true" nginx.ingress.kubernetes.io/ssl-passthrough: "true" nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" spec: rules: - host: argocd.linuxea.com http: paths: - path: / pathType: Prefix backend: service: name: argocd-server port: name: https创建[root@linuxea-11 ~/argocd]# kubectl apply -f argocd-ingress.yaml ingress.networking.k8s.io/argocd-server-ingress created [root@linuxea-11 ~/argocd]# kubectl -n argocd get ingress NAME CLASS HOSTS ADDRESS PORTS AGE argocd-server-ingress nginx argocd.linuxea.com 80 11s配置nodeport我们直接使用nodeport来配置apiVersion: v1 kind: Service metadata: labels: app.kubernetes.io/component: server app.kubernetes.io/name: argocd-server app.kubernetes.io/part-of: argocd name: argocd-server namespace: argocd spec: ports: - name: http port: 80 nodePort: 31080 protocol: TCP targetPort: 8080 - name: https port: 443 nodePort: 31443 protocol: TCP targetPort: 8080 selector: app.kubernetes.io/name: argocd-server type: NodePort用户名admin, 获取密码[root@linuxea-11 ~/argocd]# 
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo QOOMW76CV8bEczKO1.2 客户端登录安装完成后,我们通过一个二进制的客户端来操作整个流程,于是我们需要下载一个Linux客户端注意: 和此前的其他包一样,如果是docker运行的jenkins,要将二进制包放到容器内,因此我提供了两种方式wget https://github.com/argoproj/argo-cd/releases/download/v2.4.2/argocd-linux-amd64如果你用私有域名的话,你本地hosts解析需要配置[root@linuxea-48 ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.16.100.11 argocd.linuxea.com下载二进制文件后进行登录即可, 我使用的是nodeportargocd login 172.16.100.11:31080 --grpc-web[root@linuxea-48 ~/.kube]# argocd login 172.16.100.11:31080 --grpc-web WARNING: server certificate had error: x509: cannot validate certificate for 172.16.100.11 because it doesn't contain any IP SANs. Proceed insecurely (y/n)? y Username: admin Password: 'admin:login' logged in successfully Context '172.16.100.11:31080' updated登录会在一段时间后失效,于是我门需要些一个脚本过一段时间登录一次argocd login 172.16.100.11:31080 --grpc-web # 登录 argocd login 172.16.15.137:31080 --grpc-web最好写在脚本里面登录即可容器外脚本# cat /login.sh KCONFIG=/root/.kube/config-1.23.1-dev argocd login 172.16.100.11:31080 --username admin --password $(kubectl --kubeconfig=$KCONFIG -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d ) --insecure --grpc-web容器内下载argocd二进制文件存放到已经映射的目录内,并添加执行权限[root@linuxea-48 /data/jenkins-latest/jenkins_home]# cp /usr/local/sbin/argocd /data/jenkins-latest/package/ [root@linuxea-48 /data/jenkins-latest/jenkins_home]# ll /data/jenkins-latest/package/ total 251084 drwxr-xr-x 6 root root 99 Sep 5 2021 apache-maven-3.8.2 -rw-r--r-- 1 root root 131352410 Jul 9 17:24 argocd drwxr-xr-x 6 root root 105 Sep 6 2021 gradle-6.9.1 drwxr-xr-x 2 root root 16 Oct 18 2021 jq-1.6 -rwxr-xr-x 1 root root 40230912 Jul 9 15:08 kubectl -rwxr-xr-x 1 root root 11976704 Jul 9 15:08 kustomize drwxr-xr-x 6 1001 1001 108 Aug 31 2021 node-v14.17.6-linux-x64 drwxrwxr-x 10 1001 
1002 221 Jun 18 11:37 skywalking-agent -rw-r--r-- 1 root root 30443381 Jun 29 23:46 skywalking-java-8.11.0.tar.gz drwxr-xr-x 6 root root 51 May 7 2021 sonar-scanner-4.6.2.2472-linux -rw-r--r-- 1 root root 43099390 Sep 11 2021 sonar-scanner-cli-4.6.2.2472-linux.zip [root@linuxea-48 /data/jenkins-latest/jenkins_home]# chmod +x /data/jenkins-latest/package/argocd 还需要k8s的config配置文件,如果你阅读了上一篇基于jenkins的kustomize配置发布(9),那这里当然是轻车熟路了我的二进制文件存放在/usr/local/package - /data/jenkins-latest/package:/usr/local/package由于我门在容器里面,我门复制config文件到一个位置而后指定即可[root@linuxea-48 ~]# cp -r ~/.kube /data/jenkins-latest/jenkins_home/ [root@linuxea-48 ~]# ls /data/jenkins-latest/jenkins_home/.kube/ cache config config-1.20.2-test config-1.22.1-prod config-1.22.1-test config-1.23.1-dev config2 marksugar-dev-1 marksugar-prod-1容器内登录KUBE_PATH=/usr/local/package KCONFIG=/var/jenkins_home/.kube/config-1.23.1-dev ${KUBE_PATH}/argocd login 172.16.100.11:31080 --username admin --password $(${KUBE_PATH}/kubectl --kubeconfig=$KCONFIG -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d ) --insecure --grpc-web如下bash-5.1# KUBE_PATH=/usr/local/package bash-5.1# KCONFIG=/var/jenkins_home/.kube/config-1.23.1-dev bash-5.1# ${KUBE_PATH}/argocd login 172.16.100.11:31080 --username admin --password $(${KUBE_PATH}/kubectl --kubeconfig=$KCONFIG -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d ) --insecure --grpc-web 'admin:login' logged in successfully Context '172.16.100.11:31080' updated在上面我们说过,一旦登录了只会,登录的凭据是会失效的,因此我们需要在计划任务里面,5个小时登录一次。而后使用计划任务进行登录即可0 5 * * * /bin/bash /login.sh查看版本信息[root@linuxea-48 ~]# argocd version --grpc-web argocd: v2.4.2+c6d0c8b BuildDate: 2022-06-21T21:03:41Z GitCommit: c6d0c8baaa291cd68465acd7ad6bef58b2b6f942 GitTreeState: clean GoVersion: go1.18.3 Compiler: gc Platform: linux/amd64 argocd-server: v2.4.2+c6d0c8b BuildDate: 2022-06-21T20:42:05Z GitCommit: c6d0c8baaa291cd68465acd7ad6bef58b2b6f942 GitTreeState: 
clean GoVersion: go1.18.3 Compiler: gc Platform: linux/amd64 Kustomize Version: v4.4.1 2021-11-11T23:36:27Z Helm Version: v3.8.1+g5cb9af4 Kubectl Version: v0.23.1 Jsonnet Version: v0.18.01.2.1. 集群凭据管理通常可能存在多个集群,因此,我们使用配置参数指定即可如果只有一个,无需指定,默认config[root@linuxea-48 ~]# ll ~/.kube/ total 56 drwxr-x--- 4 root root 35 Jun 22 00:09 cache -rw-r--r-- 1 root root 6254 Jun 21 23:58 config-1.20.2-test -rw-r--r-- 1 root root 6277 Jun 22 00:07 config-1.22.1-prod -rw-r--r-- 1 root root 6277 Jun 22 00:06 config-1.22.1-test -rw-r--r-- 1 root root 6193 Jun 22 00:09 config-1.23.1-dev -rw-r--r-- 1 root root 6246 Mar 4 23:55 config2 -rw-r--r-- 1 root root 6277 Aug 22 2021 marksugar-dev-1 -rw-r--r-- 1 root root 6277 Aug 22 2021 marksugar-prod-1 如果有多个,需要指定配置文件[root@linuxea-48 ~/.kube]# kubectl --kubeconfig /root/.kube/config-1.23.1-dev -n argocd get pod NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 1 (12m ago) 23h argocd-applicationset-controller-7bbcd5c9bd-rqn84 1/1 Running 1 (12m ago) 23h argocd-dex-server-75c668865-s9x5d 1/1 Running 1 (12m ago) 23h argocd-notifications-controller-bc5954bd7-gg4ks 1/1 Running 1 (12m ago) 23h argocd-redis-ha-haproxy-8658c76475-hdzkv 1/1 Running 1 (12m ago) 23h argocd-redis-ha-haproxy-8658c76475-jrrtl 1/1 Running 1 (12m ago) 23h argocd-redis-ha-haproxy-8658c76475-rk868 1/1 Running 1 (12m ago) 23h argocd-redis-ha-server-0 2/2 Running 2 (12m ago) 23h argocd-redis-ha-server-1 2/2 Running 2 (12m ago) 23h argocd-redis-ha-server-2 2/2 Running 2 (12m ago) 23h argocd-repo-server-567dd6c487-6k89z 1/1 Running 1 (12m ago) 23h argocd-repo-server-567dd6c487-rt4vq 1/1 Running 1 (12m ago) 23h argocd-server-677d79497b-k72h2 1/1 Running 1 (12m ago) 23h argocd-server-677d79497b-pb5gt 1/1 Running 1 (12m ago) 23h\1.2.2 将集群加入argocd仍然需要重申下环境变量的配置export KUBECONFIG=$HOME/.kube/config-1.23.1-dev而后在查看当前的集群[root@linuxea-48 ~/.kube]# kubectl config get-contexts -o name context-cluster1将此集群加入到argocd[root@linuxea-48 ~/.kube]# argocd cluster add 
context-cluster1 WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `context-cluster1` with full cluster level privileges. Do you want to continue [y/N]? y INFO[0008] ServiceAccount "argocd-manager" created in namespace "kube-system" INFO[0008] ClusterRole "argocd-manager-role" created INFO[0008] ClusterRoleBinding "argocd-manager-role-binding" created Cluster 'https://172.16.100.11:6443' added这里添加完成后,在settings->Clusters 中也将会看到容器内首先将config文件复制到映射的目录内,比如/var/jenkins_home/# 配置kubeconfig位置 bash-5.1# export KUBECONFIG=/var/jenkins_home/.kube/config-1.23.1-dev # 复制二进制文件到sbin,仅仅是方便操作 bash-5.1# cp /usr/local/package/argocd /usr/sbin/ bash-5.1# cp /usr/local/package/kubectl /usr/sbin/ # 测试 bash-5.1# kubectl get pod NAME READY STATUS RESTARTS AGE nfs-client-provisioner-59bd97ddb-qcrpj 1/1 Running 18 (7h51m ago) 26d # 查看当前contexts名称 bash-5.1# kubectl config get-contexts -o name context-cluster1 # 添加到argocd bash-5.1# argocd cluster add context-cluster WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `kubernetes-admin@kubernetes` with full cluster level privileges. Do you want to continue [y/N]? WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `kubernetes-admin@kubernetes` with full cluster level privileges. Do you want to continue [y/N]? 
y INFO[0003] ServiceAccount "argocd-manager" created in namespace "kube-system" INFO[0003] ClusterRole "argocd-manager-role" created INFO[0003] ClusterRoleBinding "argocd-manager-role-binding" created Cluster 'https://172.16.100.11:6443' added添加完成1.3 定义repo存储库定于存储库有两种方式分别是ssh和http,都可以使用,参考官方文档1.3.1 密钥如果已经有现成的密钥,则不需要创建,如果没有,可以使用ssh-keygen -t ed25519 生成密钥, 并且添加到gitlab中# ssh-keygen -t ed25519 -f /home/jenkins_home/.ssh/ # ls /home/jenkins_home/.ssh/ -ll 总用量 8 -rw------- 1 root root 399 7月 8 16:44 id_rsa -rw-r--r-- 1 root root 93 7月 8 16:44 id_rsa.pubargocd添加git,指定~/.ssh/id_rsa,并使用--insecure-ignore-host-key选项[root@linuxea-48 ~/.kube]# argocd repo add git@172.16.100.47:pipeline-ops/marksugar-ui.git --ssh-private-key-path ~/.ssh/id_rsa --insecure-ignore-host-key Repository 'git@172.16.100.47:pipeline-ops/marksugar-ui.git' added这里添加完成在settings->repositories界面将会看到一个存储库容器内和上面一样,如果已经有现成的密钥,则不需要创建,如果没有,可以使用ssh-keygen -t ed25519 生成密钥, 并且将id_rsa.pub添加到gitlab中下面是docker-compose的密钥 volumes: .... - /home/jenkins_home/.ssh/:/root/.ssh我们在上面已经添加了marksugar-ui, 如果有多个项目,多次添加即可我们开始添加 java-demogit@172.16.100.47:devops/k8s-yaml.git是kustmoize配置清单的地址argocd repo add git@172.16.100.47:devops/k8s-yaml.git --ssh-private-key-path ~/.ssh/id_rsa --insecure-ignore-host-keybash-5.1# argocd repo add git@172.16.100.47:devops/k8s-yaml.git --ssh-private-key-path ~/.ssh/id_rsa --insecure-ignore-host-key Repository 'git@172.16.100.47:devops/k8s-yaml.git' added1.3.2 http我门仍然可以考虑使用http来使用,官方的示例如下argocd repo add https://github.com/argoproj/argocd-example-apps --username <username> --password <password>我的环境如下配置:argocd repo add http://172.16.15.136:180/devops/k8s-yaml --username root --password gitlab.com # 添加repo root@ca060212e6f6:/var/jenkins_home# argocd repo add http://172.16.15.136:180/devops/k8s-yaml.git --username root --password gitlab.com Repository 'http://172.16.15.136:180/devops/k8s-yaml.git' added1.4 定义项目AppProject CRD 是代表应用程序逻辑分组的 Kubernetes 
资源对象。它由以下关键信息定义:sourceRepos引用项目中的应用程序可以从中提取清单的存储库。destinations引用项目中的应用程序可以部署到的集群和命名空间(不要使用该name字段,仅server匹配该字段)。roles定义了他们对项目内资源的访问权限的实体列表。一个示例规范如下:在创建之前,我们先在集群内创建一个名称空间:marksugarkubectl create ns marksugar声明式配置如下,指定name,指定marksugar部署的名称空间,其他默认 destinations: - namespace: marksugar server: 'https://172.16.100.11:6443'更多时候我们限制项目内使用的范围,比如我们只配置使用的如:deployment,service,configmap,这些配置取决于控制器apiVersion: v1 kind: ConfigMap ... --- apiVersion: v1 kind: Service ...and DeploymentapiVersion: apps/v1 kind: Deployment如果此时有ingress,那么配置就如下 - group: 'networking.k8s.io' kind: 'Ingress'以此推论。最终我的配置如下: namespaceResourceWhitelist: - group: 'apps' kind: 'Deployment' - group: '' kind: 'Service' - group: '' kind: 'ConfigMap'一个完整的配置如下:apiVersion: argoproj.io/v1alpha1 kind: AppProject metadata: name: my-linuxea # 名称 # name: marksugar namespace: argocd # finalizers: # - resources-finalizer.argocd.argoproj.io spec: description: Example Project(测试) # 更详细的内容 sourceRepos: - '*' destinations: - namespace: marksugar # 名称空间 server: 'https://172.16.100.11:6443' # k8s api地址 # clusterResourceWhitelist: # - group: '' # kind: Namespace # namespaceResourceBlacklist: # - group: '' # kind: ResourceQuota # - group: '' # kind: LimitRange # - group: '' # kind: NetworkPolicy namespaceResourceWhitelist: - group: 'apps' kind: 'Deployment' # 名称空间的内允许让argocd当前app使用的的kind - group: '' kind: 'Service' # 名称空间的内允许让argocd当前app使用的的kind - group: '' kind: 'ConfigMap' # 名称空间的内允许让argocd当前app使用的的kind # kind: Deployment # - group: 'apps' # kind: StatefulSet # roles: # - name: read-only # description: Read-only privileges to my-project # policies: # - p, proj:my-project:read-only, applications, get, my-project/*, allow # groups: # - test-env # - name: ci-role # description: Sync privileges for guestbook-dev # policies: # - p, proj:my-project:ci-role, applications, sync, my-project/guestbook-dev, allow # jwtTokens: # - iat: 1535390316上面的这个有太多注释,精简一下,并进行成我门实际的参数,最终如下:apiVersion: argoproj.io/v1alpha1 kind: AppProject metadata: name: 
my-linuxea-java-demo namespace: argocd spec: description: Example Project(测试) sourceRepos: - '*' destinations: - namespace: java-demo server: 'https://172.16.100.11:6443' namespaceResourceWhitelist: - group: 'apps' kind: 'Deployment' - group: '' kind: 'Service' - group: '' kind: 'ConfigMap'执行PS E:\ops\k8s-1.23.1-latest\gitops\argocd> kubectl.exe apply -f .\project-new.yaml appproject.argoproj.io/my-linuxea-java-demo created执行完成后,将会创建一个projects,在settings->projects查看1.5 定义应用Application CRD 是 Kubernetes 资源对象,表示环境中已部署的应用程序实例。它由两个关键信息定义:source对 Git 中所需状态的引用(存储库、修订版、路径、环境)destination对目标集群和命名空间的引用。对于集群,可以使用 server 或 name 之一,但不能同时使用两者(这将导致错误)。当服务器丢失时,它会根据名称进行计算并用于任何操作。一个最小的应用程序规范如下:apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: marksugar-ui namespace: argocd labels: marksugar/marksugar-ui: prod # 标签 spec: project: my-linuxea # 定义的项目名 source: repoURL: git@172.16.100.47:pipeline-ops/marksugar-ui.git # git地址 targetRevision: master # git分支 path: overlays/marksugar-ui/prod/ # git路径对应到目录下的配置 destination: server: https://172.16.100.11:6443 # k8s api namespace: marksugar # 名称空间有关其他字段,请参阅application.yaml。只要您完成了入门的第一步,您就可以应用它kubectl apply -n argocd -f application.yaml,Argo CD 将开始部署留言簿应用程序。或者使用下面客户端命令进行配置,比如我此前配置去的marksugar-ui就是命令行配置的,如下:argocd app create marksugar-ui --repo git@172.16.100.47:pipeline-ops/marksugar-ui.git --revision master --path overlays/marksugar-ui/prod/ --dest-server https://172.16.100.11:6443 --dest-namespace marksugar --project=my-linuxea --label=marksugar/marksugar-ui=prod我门仍然进行修改成我门希望的配置样子,yaml如下我这里使用的是httpapiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: java-demo namespace: argocd labels: marksugar/app: java-demo spec: project: my-linuxea-java-demo source: repoURL: git@172.16.100.47:devops/k8s-yaml.git targetRevision: java-demo path: overlays/dev/ destination: server: https://172.16.100.11:6443 namespace: java-demo此时创建了一个appPS E:\ops\k8s-1.23.1-latest\gitops\argocd\java-demo> kubectl.exe apply -f .\app.yaml 
application.argoproj.io/java-demo created如下只有同步正常,healthy才会变绿如果有多个名称空间,不想混合显示,我们在页面中在做左侧,选择cluster的名称空间后,才能看到名称空间下的app,也就是应用如果你配置的是http的git地址就会是下面这个样子配置apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: java-demo namespace: argocd labels: marksugar/app: java-demo spec: project: my-linuxea-java-demo source: repoURL: http://172.16.15.136:180/devops/k8s-yaml.git targetRevision: java-demo path: overlays/dev/ destination: server: https://172.16.15.137:6443 namespace: java-demo视图1.6 手动同步我门可以点击web页面的上面的sync来进行同步,也可以用命令行手动同步使其生效我门通过argocd app list查看当前的已经有的项目示例:密钥root@9c0cad5ebce8:/# argocd app list NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET java-demo https://172.16.15.137:6443 java-demo my-linuxea-java-demo Unknown Healthy <none> ComparisonError git@172.16.15.136:23857/devops/k8s-yaml.git overlays/dev/ java-demohttproot@ca060212e6f6:/var/jenkins_home# argocd app list NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET java-demo https://172.16.15.137:6443 java-demo my-linuxea-java-demo OutOfSync Missing <none> <none> http://172.16.15.136:180/devops/k8s-yaml.git overlays/dev/ java-demo而我们现在的是这样的bash-5.1# argocd app list NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET java-demo https://172.16.100.11:6443 java-demo my-linuxea-java-demo OutOfSync Missing <none> <none> git@172.16.100.47:devops/k8s-yaml.git overlays/dev/ java-demo marksugar-ui https://172.16.100.11:6443 marksugar my-linuxea Synced Healthy <none> <none> git@172.16.100.47:pipeline-ops/marksugar-ui.git overlays/marksugar-ui/prod/ master而后进行同步即可argocd app sync java-demo --retry-backoff-duration=10s -l marksugar/app=java-demo如下bash-5.1# argocd app sync java-demo --retry-backoff-duration=10s -l marksugar/app=java-demo TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE 2022-07-09T19:20:26+08:00 ConfigMap java-demo envinpod-74t9b8htb6 Synced 2022-07-09T19:20:26+08:00 Service java-demo 
java-demo  OutOfSync  Missing
2022-07-09T19:20:26+08:00  apps  Deployment  java-demo  java-demo            Synced     Healthy
2022-07-09T19:20:27+08:00        Service     java-demo  java-demo            OutOfSync  Healthy
2022-07-09T19:20:27+08:00        ConfigMap   java-demo  envinpod-74t9b8htb6  Synced              configmap/envinpod-74t9b8htb6 unchanged
2022-07-09T19:20:27+08:00        Service     java-demo  java-demo            OutOfSync  Healthy  service/java-demo created
2022-07-09T19:20:27+08:00  apps  Deployment  java-demo  java-demo            Synced     Healthy  deployment.apps/java-demo configured

Name:               java-demo
Project:            my-linuxea-java-demo
Server:             https://172.16.100.11:6443
Namespace:          java-demo
URL:                https://172.16.100.11:31080/applications/java-demo
Repo:               git@172.16.100.47:devops/k8s-yaml.git
Target:             java-demo
Path:               overlays/dev/
SyncWindow:         Sync Allowed
Sync Policy:        <none>
Sync Status:        Synced to java-demo (fd1286f)
Health Status:      Healthy

Operation:          Sync
Sync Revision:      fd1286f64d1edac2def43d4a37bcc13a9f0286d0
Phase:              Succeeded
Start:              2022-07-09 19:20:26 +0800 CST
Finished:           2022-07-09 19:20:27 +0800 CST
Duration:           1s
Message:            successfully synced (all tasks run)

GROUP  KIND        NAMESPACE  NAME                 STATUS  HEALTH   HOOK  MESSAGE
       ConfigMap   java-demo  envinpod-74t9b8htb6  Synced                 configmap/envinpod-74t9b8htb6 unchanged
       Service     java-demo  java-demo            Synced  Healthy        service/java-demo created
apps   Deployment  java-demo  java-demo            Synced  Healthy        deployment.apps/java-demo configured

TIMESTAMP                  GROUP  KIND        NAMESPACE  NAME                 STATUS  HEALTH   HOOK  MESSAGE
2022-07-09T19:20:28+08:00  apps   Deployment  java-demo  java-demo            Synced  Healthy
2022-07-09T19:20:28+08:00         ConfigMap   java-demo  envinpod-74t9b8htb6  Synced
2022-07-09T19:20:28+08:00         Service     java-demo  java-demo            Synced  Healthy
2022-07-09T19:20:28+08:00  apps   Deployment  java-demo  java-demo            Synced  Healthy  deployment.apps/java-demo configured
2022-07-09T19:20:28+08:00         ConfigMap   java-demo  envinpod-74t9b8htb6  Synced           configmap/envinpod-74t9b8htb6 unchanged
2022-07-09T19:20:28+08:00         Service     java-demo  java-demo            Synced  Healthy  service/java-demo unchanged

Name:               java-demo
Project:            my-linuxea-java-demo
Server:             https://172.16.100.11:6443
Namespace:          java-demo
URL:                https://172.16.100.11:31080/applications/java-demo
Repo:               git@172.16.100.47:devops/k8s-yaml.git
Target:             java-demo
Path:               overlays/dev/
SyncWindow:         Sync Allowed
Sync Policy:        <none>
Sync Status:        Synced to java-demo (fd1286f)
Health Status:      Healthy

Operation:          Sync
Sync Revision:      fd1286f64d1edac2def43d4a37bcc13a9f0286d0
Phase:              Succeeded
Start:              2022-07-09 19:20:27 +0800 CST
Finished:           2022-07-09 19:20:28 +0800 CST
Duration:           1s
Message:            successfully synced (all tasks run)

GROUP  KIND        NAMESPACE  NAME                 STATUS  HEALTH   HOOK  MESSAGE
       ConfigMap   java-demo  envinpod-74t9b8htb6  Synced                 configmap/envinpod-74t9b8htb6 unchanged
       Service     java-demo  java-demo            Synced  Healthy        service/java-demo unchanged
apps   Deployment  java-demo  java-demo            Synced  Healthy        deployment.apps/java-demo configured

同步完成后状态就会发生改变。命令行查看:

bash-5.1# argocd app list
NAME          CLUSTER                     NAMESPACE  PROJECT               STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                             PATH                         TARGET
java-demo     https://172.16.100.11:6443  java-demo  my-linuxea-java-demo  Synced  Healthy  <none>      <none>      git@172.16.100.47:devops/k8s-yaml.git            overlays/dev/                java-demo
marksugar-ui  https://172.16.100.11:6443  marksugar  my-linuxea            Synced  Healthy  <none>      <none>      git@172.16.100.47:pipeline-ops/marksugar-ui.git  overlays/marksugar-ui/prod/  master

打开页面查看,如果是http的,这里会显示http。此时正在拉取镜像,状态是 Progressing,我们等待拉取完成,而后选中并点击进入详情页面,项目内的仪表盘功能如下图。一旦镜像完成拉取,并且running起来,则显示健康,仪表盘功能如下图。

回到k8s查看:

[root@linuxea-01 .ssh]# kubectl get all -n java-demo
NAME                             READY   STATUS    RESTARTS   AGE
pod/java-demo-6474cb8fc8-6zwlt   1/1     Running   0          7m45s
pod/java-demo-6474cb8fc8-92sw7   1/1     Running   0          7m45s
pod/java-demo-6474cb8fc8-k8985   1/1     Running   0          7m45s
pod/java-demo-6474cb8fc8-ndzpl   1/1     Running   0          7m45s
pod/java-demo-6474cb8fc8-rxg2k   1/1     Running   0          7m45s

NAME                TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/java-demo   NodePort   10.111.26.148   <none>        8080:31180/TCP   24h

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/java-demo   5/5     5            5           7m45s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/java-demo-6474cb8fc8   5         5         5       7m45s

1.7 加入流水线

阅读过上一篇基于jenkins的kustomize配置发布(9),你大概就知道整个简单的流程是怎么走的,我们复制过来修改一下。当前流水线阶段的步骤大致如下:

1.判断本地是否有git的目录,如果有就删除
2.拉取git,并切换到分支
3.追加当前的镜像版本到一个buildhistory的文件中
4.cd到目录中修改镜像
5.修改完成后上传修改内容
6.argocd同步

与之前不同的,就是将kustomize和kubectl换成了argocd,代码块如下:

    stage('Deploy') {
      steps {
        sh '''
        [ ! -d ${JOB_NAMES} ] || rm -rf ${JOB_NAMES}
        git clone ${kustomize_Git} && cd ${JOB_NAMES} && git checkout ${apps_name}
        echo "push latest images: $IPATH"
        echo "`date +%F-%T` imageTag: $IPATH buildId: ${BUILD_NUMBER} " >> ./buildhistory-$Projects_Area-${apps_name}.log
        cd overlays/$Projects_Area
        ${PACK_PATH}/kustomize edit set image $IPATH
        cd ../..
        git add .
        git config --global push.default matching
        git config user.name zhengchao.tang
        git config user.email usertzc@163.com
        git commit -m "image tag $IPATH-> ${imageUrlPath}"
        git push -u origin ${apps_name}
        ${PACK_PATH}/argocd app sync ${apps_name} --retry-backoff-duration=10s -l marksugar/app=${apps_name}
        '''
      }
    }

仅此而已(在上一篇中忘了截图)。与此同时,gitlab上已经有了一个版本的历史记录,argocd最简单的示例到此告一段落。

参考gitops
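如果还想在流水线中跟踪 argocd 同步之后的健康状态,可以在 sync 后追加一个等待步骤。下面是一个最小的 shell 封装示例(函数名 sync_and_wait 与超时默认值都是假设的,argocd app sync / argocd app wait 是 argocd CLI 自带的子命令,仅作思路参考):

```shell
#!/bin/sh
# 最小示例:同步应用并等待其达到 Synced/Healthy,超时则返回非零退出码
# 假设 argocd CLI 已经登录;应用名通过第一个参数传入
sync_and_wait() {
  app="$1"
  timeout="${2:-300}"   # 默认最多等待 300 秒,可由第二个参数覆盖
  argocd app sync "$app" --retry-backoff-duration=10s || return 1
  # --sync 与 --health 同时等待:同步完成且健康检查通过才算成功
  argocd app wait "$app" --sync --health --timeout "$timeout"
}
```

在 Jenkinsfile 的 sh 步骤里调用 sync_and_wait ${apps_name},即可把同步失败直接反馈为构建失败。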
2022年07月10日
2022-07-09
linuxea:基于jenkins的kustomize配置发布(9)
在之前的几篇中,我分别介绍了基础环境的配置,skywaling+nacos的配置,nexus3的配置,围绕sonarqube的配置和构建镜像的配置。这一篇中,基于构建的镜像进行清单编排。我们需要一种工具来管理配置清单。阅读此篇,你将了解如下列表中简单的实现方式:jenkins和gitlab触发(已实现)jenkins凭据使用(已实现)juit配置(已实现)sonarqube简单扫描(已实现)sonarqube覆盖率(已实现)打包基于java的skywalking agent(上一章已实现)sonarqube与gitlab关联 (上一章已实现)配置docker中构建docker (上一章已实现)mvn打包(上一章已实现)sonarqube简单分支扫描(上一章已实现)基于gitlab来管理kustomize的k8s配置清单(本章实现)kubectl部署(本章实现)kubeclt deployment的状态跟踪(本章实现)钉钉消息的构建状态推送没错,我移情别恋了,在Helm和kustomize中,我选择后者。最大的原因是因为kustomize简单,易于维护。无论从那个角度,我都找不到不用kustomize的理由。这倒不是因为kustomize是多么优秀,仅仅是因为kustomize的方式让一切变得都简单。Helm和kustomizehelm几乎可以完成所有的操作,但是helm的问题是学习有难度,对于小白不友好,配置一旦过多调试将会更复杂。也是因为这种限制,那么使用helm的范围就被缩小了,不管在什么条件下,它都不在是优选。kustomize更直白,无论是开发,还是运维新手,都可以快速上手进行修改添加等基础配置。kustomizekustomize用法在官网的github上已经有所说明了,并且这里温馨的提供了中文示例。讨论如何学习kustomize不在本章的重点遵循kustmoize的版本,在https://github.com/kubernetes-sigs/kustomize/releases找到一个版本,通过http://toolwa.com/github/加速下载Kubectl 版本自定义版本< v1.14不适用v1.14-v1.20v2.0.3v1.21v4.0.5v1.22v4.2.0[root@k8s-01 linuxea]# kustomize version {Version:kustomize/v4.5.5 GitCommit:daa3e5e2c2d3a4b8c94021a7384bfb06734bcd26 BuildDate:2022-05-20T20:25:40Z GoOs:linux GoArch:amd64}创建必要的目录结构阅读示例中的示例:devops和开发配合管理配置数据有助于理解kustomize配置方法场景:在生产环境中有一个基于 Java 由多个内部团队对于业务拆分了不通的组并且有不同的项目的应用程序。这些服务在不同的环境中运行:development、 testing、 staging 和 production,有些配置需要频繁修改的。如果只是维护一个大的配置文件是非常麻烦且困难的 ,而这些配置文件也是需要专业运维人员或者devops工程师来进行操作的,这里面包含了一些片面且偏向运维的工作是开发人员不必知道的。例如:生产环境的敏感数据关键的登录凭据等这些在kustomize中被分成了不通的类因此,kustomize提供了混合管理办法基于相同的 base 创建 n 个 overlays 来创建 n 个集群环境的方法我们将使用 n==2,例如,只使用 development 和 production ,这里也可以使用相同的方法来增加更多的环境。运行 kustomize build 基于 overlay 的 target 来创建集群环境。为了让这一切开始运行,准备如下创建kustomize目录结构创建并配置kustomize配置文件最好创建gitlab项目,将配置存放在gitlab开始此前我写了一篇kustomize变量传入有过一些介绍,我们在简单补充一下。kustomize在1.14版本中已经是Kubectl内置的命令,并且支持kubernetes的原生可复用声明式配置的插件。它引入了一种无需模板的方式来自定义应用程序配置,从而简化了现成应用程序的使用。Kustomize 遍历 Kubernetes 清单以添加、删除或更新配置选项。它既可以作为独立的二进制文件使用,也可以作为kubectl来使用更多的背景可参考它的白皮书,这些在github的Declarative application management in 
Kubernetes存放。因为总的来说,这篇不是让你如何去理解背后的故事,而是一个最简单的示例常见操作在项目中为所有 Kubernetes 对象设置贯穿性字段是一种常见操作。 贯穿性字段的一些使用场景如下:为所有资源设置相同的名字空间为所有对象添加相同的前缀或后缀为对象添加相同的标签集合为对象添加相同的注解集合为对象添加相同的资源限制以及以及副本数这些通过在overlays目录下不同的配置来区分不通的环境所用的清单信息安装遵循github版本对应规则Kubectl versionKustomize version< v1.14n/av1.14-v1.20v2.0.3v1.21v4.0.5v1.22v4.2.0我的集群是1.23.1,因此我下载4.5.4PS E:\ops\k8s-1.23.1-latest\gitops> kustomize version {Version:kustomize/v4.5.4 GitCommit:cf3a452ddd6f83945d39d582243b8592ec627ae3 BuildDate:2022-03-28T23:12:45Z GoOs:windows GoArch:amd64}java-demo我这里已经配置了一个已经配置好的环境,我将会在这里简单介绍使用方法和配置,我不会详细说明deployment控制器的配置清单,也不会说明和kustomize基本使用无关的配置信息,我只会尽可能的在这个简单的示例中说明整个kustomize的在本示例中的用法。简述:kustomize需要base和Overlays目录,base可以是多个,overlays也可以是多个,overlays下的文件最终会覆盖到base的配置之上,只要配置是合理的,base的配置应该将有共性的配置最终通过overlays来进行配置,以此应对多个环境的配置。java-demo是一个无状态的java应用,使用的是Deployment控制器进行配置,并且创建一个service,于此同时传入skywalking的环境变量信息。1. 目录结构目录结构如下:# tree ./ ./ ├── base │ ├── deployment.yaml │ ├── kustomization.yaml │ └── service.yaml ├── overlays │ ├── dev │ │ ├── env.file │ │ ├── kustomization.yaml │ │ └── resources.yaml │ └── prod │ ├── kustomization.yaml │ ├── replicas.yaml │ └── resources.yaml └── README.md 4 directories, 11 files其中两目录如下:./ ├── base ├── overlays └── README.mdbase: 目录作为基础配置目录,真实的配置文件在这个文件下overlays: 目录作为场景目录,描述与 base 应用配置的差异部分来实现资源复用而在overlays目录下,又有两个目录,分别是dev和prod,分别对应俩个环境的配置,这里可以任意起名来区分,因为在这两个目录下面存放的是各自不通的配置./ ├── base ├── overlays │ ├── dev │ └── prod └── README.md1.1 imagePullSecrets除此之外,我们需要一个拉取镜像的信息使用cat ~/.docker/config.json |base64获取到base64字符串编码,而后复制到.dockerconfigjson: >-下即可apiVersion: v1 data: .dockerconfigjson: >- ewoJImkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTIgKGxpbnV4KSIKCX0KfQ== kind: Secret metadata: name: 156pull namespace: java-demo type: kubernetes.io/dockerconfigjson2. 
base目录base目录下分别有三个文件,分别如下├── base │ ├── deployment.yaml │ ├── kustomization.yaml │ └── service.yaml在deployment.yaml中定义必要的属性不定义场景的指标,如标签,名称空间,副本数量和资源限制定义名称,镜像地址,环境变量名这些不定义的属性通过即将配置的overlays中的配置进行贯穿覆盖到这个基础配置之上必须定义的属性表明了贯穿的属性和基础的配置是一份这里的环境变量用的是configmap的方式,值是通过后面传递过来的。如下deployment.yamlapiVersion: apps/v1 kind: Deployment metadata: name: java-demo spec: selector: matchLabels: template: metadata: labels: spec: containers: - image: harbor.marksugar.com/java/linuxea-2022 imagePullPolicy: IfNotPresent name: java-demo ports: - containerPort: 8080 env: - name: SW_AGENT_NAME valueFrom: configMapKeyRef: name: envinpod key: SW_AGENT_NAME - name: SW_AGENT_TRACE_IGNORE_PATH valueFrom: configMapKeyRef: name: envinpod key: SW_AGENT_TRACE_IGNORE_PATH - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES valueFrom: configMapKeyRef: name: envinpod key: SW_AGENT_COLLECTOR_BACKEND_SERVICES imagePullSecrets: - name: 156pull restartPolicy: Alwaysservice.yamlapiVersion: v1 kind: Service metadata: name: java-demo spec: type: NodePort ports: - port: 8080 targetPort: 8080 nodePort: 31180kustomization.yamlkustomization.yaml引入这两个配置文件resources: - deployment.yaml - service.yaml执行 kustomize build /base ,得到的结果如下,这就是当前的原始清单apiVersion: v1 kind: Service metadata: name: java-demo spec: ports: - nodePort: 31180 port: 8080 targetPort: 8080 type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: name: java-demo spec: selector: matchLabels: null template: metadata: labels: null spec: containers: - env: - name: SW_AGENT_NAME valueFrom: configMapKeyRef: key: SW_AGENT_NAME name: envinpod - name: SW_AGENT_TRACE_IGNORE_PATH valueFrom: configMapKeyRef: key: SW_AGENT_TRACE_IGNORE_PATH name: envinpod - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES valueFrom: configMapKeyRef: key: SW_AGENT_COLLECTOR_BACKEND_SERVICES name: envinpod image: harbor.marksugar.com/java/linuxea-2022:202207091551 imagePullPolicy: IfNotPresent name: java-demo ports: - containerPort: 8080 imagePullSecrets: - name: 156pull restartPolicy: Always3. 
overlays目录首先,在overlays目录下是有dev和prod目录的,我们先看在dev目录下的kustomization.yamlkustomization.yaml中的内容,包含一组资源和相关的自定义信息,如下更多用法参考官方文档或者github社区kustomization.yamlapiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization patchesStrategicMerge: - resources.yaml # 当如当前的文件 namespace: java-demo # 名称空间 images: - name: harbor.marksugar.com/java/linuxea-2022 # 镜像url必须保持和base中一致 newTag: '202207072119' # 镜像tag bases: - ../../base # 引入bases基础文件 # configmap变量 configMapGenerator: - name: envinpod # 环境变量名称 env: env.file # 环境变量位置 # 副本数 replicas: - name: java-demo # 名称必须保持一致 count: 5 # namePrefix: dev- # pod前缀 # nameSuffix: "-001" # pod后缀 commonLabels: app: java-demo # 标签 # logging: isOk # commonAnnotations: # oncallPager: 897-001删掉那些注释后如下apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization patchesStrategicMerge: - resources.yaml namespace: java-demo images: - name: harbor.marksugar.com/java/linuxea-2022 newTag: '202207071059' bases: - ../../base configMapGenerator: - name: envinpod env: env.file replicas: - name: java-demo count: 5 commonLabels: app: java-demoresources.yaml resources.yaml 中的name必须保持一致apiVersion: apps/v1 kind: Deployment metadata: name: java-demo spec: template: spec: containers: - name: java-demo resources: limits: cpu: "1" memory: 2048Mi requests: cpu: "1" memory: 2048Mienv.fileenv.file定义的变量是对应在base中的,这些是skwayling中的必要信息,参考kubernetes中skywalking9.0部署使用,env的用法参考kustomize变量引入SW_AGENT_NAME=test::java-demo SW_AGENT_TRACE_IGNORE_PATH=GET:/health,GET:/aggreg/health,/eureka/**,xxl-job/** SW_AGENT_COLLECTOR_BACKEND_SERVICES=skywalking-oap.skywalking:11800查看 kustomize build overlays/dev/后的配置清单。如下所示:apiVersion: v1 data: SW_AGENT_COLLECTOR_BACKEND_SERVICES: skywalking-oap.skywalking:11800 SW_AGENT_NAME: test::java-demo SW_AGENT_TRACE_IGNORE_PATH: GET:/health,GET:/aggreg/health,/eureka/**,xxl-job/** kind: ConfigMap metadata: labels: app: java-demo name: envinpod-74t9b8htb6 namespace: java-demo --- apiVersion: v1 kind: Service metadata: labels: app: java-demo name: 
java-demo namespace: java-demo spec: ports: - nodePort: 31180 port: 8080 targetPort: 8080 selector: app: java-demo type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: java-demo name: java-demo namespace: java-demo spec: replicas: 5 selector: matchLabels: app: java-demo template: metadata: labels: app: java-demo spec: containers: - env: - name: SW_AGENT_NAME valueFrom: configMapKeyRef: key: SW_AGENT_NAME name: envinpod-74t9b8htb6 - name: SW_AGENT_TRACE_IGNORE_PATH valueFrom: configMapKeyRef: key: SW_AGENT_TRACE_IGNORE_PATH name: envinpod-74t9b8htb6 - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES valueFrom: configMapKeyRef: key: SW_AGENT_COLLECTOR_BACKEND_SERVICES name: envinpod-74t9b8htb6 image: harbor.marksugar.com/java/linuxea-2022:202207071059 imagePullPolicy: IfNotPresent name: java-demo ports: - containerPort: 8080 resources: limits: cpu: "1" memory: 2048Mi requests: cpu: "1" memory: 2048Mi imagePullSecrets: - name: 156pull restartPolicy: Alwaysbase作为基础配置,Overlays作为覆盖来区分。base是包含 kustomization.yaml 文件的一个目录,其中包含一组资源及其相关的定制。 base可以是本地目录或者来自远程仓库的目录,只要其中存在 kustomization.yaml 文件即可。 Overlays 也是一个目录,其中包含将其他 kustomization 目录当做 bases 来引用的 kustomization.yaml 文件。 base不了解Overlays的存在,且可被多个Overlays所使用。 Overlays则可以有多个base,且可针对所有base中的资源执行操作,还可以在其上执行定制。通过sed替换Overlays下的文件内容或者kustomize edit set,如:在Overlays下执行kustomize edit set image harbor.marksugar.com/java/linuxea-2022:202207091551:202207071059:1.14.b替换镜像文件。一切符合预期后,使用kustomize.exe build .\overlays\dev\ | kubectl apply -f -使其生效。4. 
部署到k8s命令部署两种方式kustomizekustomize build overlays/dev/ | kubectl apply -f -kubectlkubectl apply -k overlays/dev/使用kubectl apply -k生效,如下PS E:\ops\k8s-1.23.1-latest\gitops> kubectl.exe apply -k .\overlays\dev\ configmap/envinpod-74t9b8htb6 unchanged service/java-demo created deployment.apps/java-demo created如果使用的域名是私有的,需要在本地hosts填写本地解析172.16.100.54 harbor.marksugar.com并且需要修改/etc/docker/daemon.json{ "data-root": "/var/lib/docker", "exec-opts": ["native.cgroupdriver=systemd"], "insecure-registries": ["harbor.marksugar.com"], "max-concurrent-downloads": 10, "live-restore": true, "log-driver": "json-file", "log-level": "warn", "log-opts": { "max-size": "50m", "max-file": "1" }, "storage-driver": "overlay2" }查看部署情况PS E:\ops\k8s-1.23.1-latest\gitops\kustomize-k8s-yaml> kubectl.exe -n java-demo get all NAME READY STATUS RESTARTS AGE pod/java-demo-6474cb8fc8-6xs8t 1/1 Running 0 41s pod/java-demo-6474cb8fc8-9z9sd 1/1 Running 0 41s pod/java-demo-6474cb8fc8-jfqv6 1/1 Running 0 41s pod/java-demo-6474cb8fc8-p5ztd 1/1 Running 0 41s pod/java-demo-6474cb8fc8-sqt7b 1/1 Running 0 41s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/java-demo NodePort 10.111.26.148 <none> 8080:31180/TCP 41s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/java-demo 5/5 5 5 41s NAME DESIRED CURRENT READY AGE replicaset.apps/java-demo-6474cb8fc8 5 5 5 42s与此同时,skywalking也加入成功创建git项目在gitlab创建了一个组,在组织里面创建了一个项目,名称以项目命名,在项目内每个应用对应一个分支如: devops组内内新建一个k8s-yaml的项目,项目内创建一个java-demo分支,java-demo分支中存放java-demo的配置文件现在创建key,将密钥加入到项目中ssh-keygen -t ed25519将文件推送到git上$ git clone git@172.16.100.47:devops/k8s-yaml.git Cloning into 'k8s-yaml'... remote: Enumerating objects: 3, done. remote: Counting objects: 100% (3/3), done. remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 Receiving objects: 100% (3/3), done. 
$ cd k8s-yaml/
$ git checkout -b java-demo
Switched to a new branch 'java-demo'
$ ls -ll
total 1024
-rw-r--r-- 1 Administrator 197121  12 Jul  7 21:09 README.MD
drwxr-xr-x 1 Administrator 197121   0 Jun 28 20:15 base/
-rw-r--r-- 1 Administrator 197121 774 Jul  6 18:05 imagepullsecrt.yaml
drwxr-xr-x 1 Administrator 197121   0 Jun 28 20:15 overlays/
$ git add .
$ git commit -m "first commit"
[java-demo a9701f7] first commit
 11 files changed, 185 insertions(+)
 create mode 100644 base/deployment.yaml
 create mode 100644 base/kustomization.yaml
 create mode 100644 base/service.yaml
 create mode 100644 imagepullsecrt.yaml
 create mode 100644 overlays/dev/env.file
 create mode 100644 overlays/dev/kustomization.yaml
 create mode 100644 overlays/dev/resources.yaml
 create mode 100644 overlays/prod/kustomization.yaml
 create mode 100644 overlays/prod/replicas.yaml
 create mode 100644 overlays/prod/resources.yaml
$ git push -u origin java-demo
Enumerating objects: 19, done.
Counting objects: 100% (19/19), done.
Delta compression using up to 8 threads
Compressing objects: 100% (15/15), done.
Writing objects: 100% (17/17), 2.90 KiB | 329.00 KiB/s, done.
Total 17 (delta 2), reused 0 (delta 0), pack-reused 0
remote:
remote: To create a merge request for java-demo, visit:
remote:   http://172.16.100.47/devops/k8s-yaml/-/merge_requests/new?merge_request%5Bsource_branch%5D=java-demo
remote:
To 172.16.100.47:devops/k8s-yaml.git
   bb67227..a9701f7  java-demo -> java-demo
Branch 'java-demo' set up to track remote branch 'java-demo' from 'origin'.

添加到流水线

首先,kustomize的配置文件是存放在gitlab上的,因此这个git仓库需要我们拉取下来,修改镜像名称并应用kustomize的配置后,再push到gitlab上。在这里kustomize仅仅用来管理yaml清单文件,部署在后面将使用argocd来做。

我们在流水线里面配置一个环境变量,指向kustomize配置文件的git地址,并截取git拉取后的目录名称。尽可能让gitlab和jenkins上的项目名称保持一致,流水线取值或者切出值的时候才方便。

def kustomize_Git="git@172.16.100.47:devops/k8s-yaml.git"
def JOB_NAMES=sh (script: """echo ${kustomize_Git.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim()

但是kustomize是不能直接访问集群的,因此还必须用kubectl,这也就意味着需要config文件,我们使用命令指定配置文件位置:

kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev

另外,如果你的jenkins的docker镜像里没有kustomize或者kubectl,需要挂载进去,因此我的环境变量就变成了:

    environment {
      def tag_time = new Date().format("yyyyMMddHHmm")
      def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}"
      def kustomize_Git="git@172.16.100.47:devops/k8s-yaml.git"
      def JOB_NAMES=sh (script: """echo ${kustomize_Git.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim()
      def Projects_Area="dev"
      def apps_name="java-demo"
      def projectGroup="java-demo"
      def PACK_PATH="/usr/local/package"
    }

并且在容器内生成一个密钥,而后加到gitlab中,以供git拉取和上传:

bash-5.1# ssh-keygen -t rsa

而后再复制到/var/jenkins_home下,并且挂载到容器内:

- /data/jenkins-latest/jenkins_home/.ssh:/root/.ssh

第一次拉取需要输入yes,我们规避它:

echo '
Host *
   StrictHostKeyChecking no
   UserKnownHostsFile=/dev/null' >>/root/.ssh/config

如果你使用的是宿主机运行的Jenkins,这一步可省略。因为资源不足的问题,我们手动修改副本数为1。

流水线阶段的步骤大致如下:

1.判断本地是否有git的目录,如果有就删除
2.拉取git,并切换到分支
3.追加当前的镜像版本到一个buildhistory的文件中
4.cd到目录中修改镜像
5.修改完成后上传修改内容
6.kustomize和kubectl应用配置清单

代码块如下:

    stage('Deploy') {
      steps {
        sh '''
        [ ! -d ${JOB_NAMES} ] || rm -rf ${JOB_NAMES}
        git clone ${kustomize_Git} && cd ${JOB_NAMES} && git checkout ${apps_name}
        echo "push latest images: $IPATH"
        echo "`date +%F-%T` imageTag: $IPATH buildId: ${BUILD_NUMBER} " >> ./buildhistory-$Projects_Area-${apps_name}.log
        cd overlays/$Projects_Area
        ${PACK_PATH}/kustomize edit set image $IPATH
        cd ../..
        git add .
        git config --global push.default matching
        git config user.name zhengchao.tang
        git config user.email usertzc@163.com
        git commit -m "image tag $IPATH-> ${imageUrlPath}"
        git push -u origin ${apps_name}
        ${PACK_PATH}/kustomize build overlays/$Projects_Area/ | ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev apply -f -
        '''
      }
    }

观测状态

配置清单被生效后,不一定符合预期,此时有很多种情况出现,特别是在使用原生的这些命令和脚本更新的时候,我们需要追踪更新后的状态,以便于随时做出正确的动作。我此前写过一篇关于kubernetes检测pod部署状态简单实现,如果感兴趣可以查看。仍然使用此前的方式,如下:

    stage('status watch') {
      steps {
        sh '''
        ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev -n ${projectGroup} rollout status deployment ${apps_name} --watch --timeout=10m
        '''
      }
    }

构建一次,到服务器上查看:

[root@linuxea-11 ~]# kubectl -n java-demo get pod
NAME                         READY   STATUS    RESTARTS   AGE
java-demo-66b98564f6-xsc6z   1/1     Running   0          9m24s

其他参考:kubernetes中skywalking9.0部署使用,kustomize变量引入
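顺带一提,上面 JOB_NAMES 的取值逻辑(从 git 地址里截取项目名)可以用纯 shell 单独验证一下,下面是一个等价的小例子(仓库地址取文中的示例值):

```shell
#!/bin/sh
# 从 git 仓库地址中截取项目名:先取最后一个 / 之后的部分,再以 . 分割取第一段
# 等价于流水线里的 echo ${kustomize_Git.split("/")[-1]} | cut -d . -f 1
kustomize_Git="git@172.16.100.47:devops/k8s-yaml.git"
JOB_NAMES=$(echo "${kustomize_Git##*/}" | cut -d . -f 1)
echo "$JOB_NAMES"   # 输出 k8s-yaml
```

只要gitlab项目名和jenkins任务名保持一致,这个值就可以直接作为本地克隆目录名使用。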
2022年07月09日
2022-07-07
linuxea:jenkins流水线集成sonar分支扫描/关联gitlab/docker和mvn打包配置二(8)
在前面的jenkins流水线集成juit/sonarqube/覆盖率扫描配置一中介绍了juilt,覆盖率以及soanrqube的一些配置实现。接着上一篇中,我们继续。阅读此篇,你将了解如下列表中简单的实现方式:jenkins和gitlab触发(上一章已实现)jenkins凭据使用(上一章已实现)juit配置(上一章已实现)sonarqube简单扫描(上一章已实现)sonarqube覆盖率(上一章已实现)打包基于java的skywalking agent(上一章已实现)sonarqube与gitlab关联 (本章实现)配置docker中构建docker (本章实现)mvn打包 (本章实现)sonarqube简单分支扫描(本章实现)基于gitlab来管理kustomize的k8s配置清单kubectl部署kubeclt deployment的状态跟踪钉钉消息的构建状态推送4.6 分支扫描我们可能更希望扫描某一个分支,于是我们需要sonarqube-community-branch-plugin插件我们在https://github.com/mc1arke/sonarqube-community-branch-plugin/releases中,留意支持的版本Note: This version supports Sonarqube 8.9 and above. Sonarqube 8.8 and below or 9.0 and above are not supported in this release使用下表查找每个 SonarQube 版本的正确插件版本SonarQube 版本插件版本9.1+1.12.09.01.9.08.91.8.28.7 - 8.81.7.08.5 - 8.61.6.08.2 - 8.41.5.08.11.4.07.8 - 8.01.3.27.4 - 7.71.0.2于是,我们在nexus3上下载1.8.1版本https://github.com/mc1arke/sonarqube-community-branch-plugin/releases/download/1.8.0/sonarqube-community-branch-plugin-1.8.0.jar 或者 https://github.91chifun.workers.dev//https://github.com/mc1arke/sonarqube-community-branch-plugin/releases/download/1.8.0/sonarqube-community-branch-plugin-1.8.0.jar根据安装提示https://github.com/mc1arke/sonarqube-community-branch-plugin#manual-install而后直接将 jar包下载在/data/sonarqube/extensions/plugins/下即可wget http://172.16.100.48/jenkins/sonar-plugins/sonarqube-community-branch-plugin-1.8.0.jar -o /data/sonarqube/extensions/plugins/sonarqube-community-branch-plugin-1.8.0.jar实际上/data/sonarqube/extensions/目录被挂载到nexus的容器内的/opt/sonarqube/extensions下而容器内的位置是不变的,因此挂载映射关系如下: volumes: - /etc/localtime:/etc/localtime - /data/sonarqube/conf:/opt/sonarqube/conf - /data/sonarqube/extensions:/opt/sonarqube/extensions - /data/sonarqube/logs:/opt/sonarqube/logs - /data/sonarqube/data:/opt/sonarqube/data[root@linuxea-47 /data/sonarqube/extensions]# ll plugins/ total 17552 -rwx------ 1 1000 1000 10280677 Oct 10 2021 sonar-gitlab-plugin-4.1.0-SNAPSHOT.jar -rwx------ 1 1000 1000 61903 Sep 11 2021 sonar-l10n-zh-plugin-8.9.jar -rwx------ 1 1000 1000 
7623167 Oct 10 2021 sonarqube-community-branch-plugin-1.8.0.jar而后,我们在本地是/data/sonarqube/conf下的创建一个配置文件sonar.properties,内容如下sonar.web.javaAdditionalOpts=-javaagent:./extensions/plugins/sonarqube-community-branch-plugin-1.8.0.jar=web sonar.ce.javaAdditionalOpts=-javaagent:./extensions/plugins/sonarqube-community-branch-plugin-1.8.0.jar=ce这个配置文件被映射到容器内的/opt/sonarqube/conf进入容器查看[root@linuxea-47 /data/sonarqube]# ls extensions/plugins/ -ll total 17552 -rwx------ 1 1000 1000 61903 Sep 11 2021 sonar-l10n-zh-plugin-8.9.jar -rwx------ 1 1000 1000 7623167 Oct 10 2021 sonarqube-community-branch-plugin-1.8.0.jar分支扫描参数增加 –Dsonar.branch.name=-Dsonar.branch.name=master那现在的projetctkey就不需要加分支名字了 -Dsonar.projectKey=${JOB_NAME}_${branch} \ -Dsonar.projectName=${JOB_NAME}_${branch} \直接在一个项目中就可以看到多个分支的扫描结果了 stage("coed sonar"){ steps{ script { withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) { sh """ cd linuxea && \ /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=http://172.16.100.47:9000 \ -Dsonar.projectKey=${JOB_NAME} \ -Dsonar.projectName=${JOB_NAME} \ -Dsonar.projectVersion=${BUILD_NUMBER} \ -Dsonar.login=${SONAR_TOKEN} \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" 
\ -Dsonar.links.homepage=${env.BASEURL} \ -Dsonar.links.ci=${BUILD_URL} \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=${branch} """ } } } }此时我们分别构建master和web后,在sonarqube的UI中就会有两个分支的扫描结果注意事项如果你使用的是不同的版本,而不同的版本配置是不一样的。见github的每个分支,比如:1.5.04.7 关联gitlab在https://github.com/gabrie-allaigre/sonar-gitlab-plugin下载插件,参阅用法中版本对应,我们下载4.1.0https://github.com/gabrie-allaigre/sonar-gitlab-plugin/releases/download/4.1.0/sonar-gitlab-plugin-4.1.0-SNAPSHOT.jar而后仍然存放到sonarqube的plugin目录下[root@linuxea-47 ~]# ls /data/sonarqube/extensions/plugins/ -ll total 17552 -rwx------ 1 1000 1000 10280677 Oct 10 2021 sonar-gitlab-plugin-4.1.0-SNAPSHOT.jar -rwx------ 1 1000 1000 61903 Sep 11 2021 sonar-l10n-zh-plugin-8.9.jar -rwx------ 1 1000 1000 7623167 Oct 10 2021 sonarqube-community-branch-plugin-1.8.0.jar这在启动的时候,实际上可以看到日志加载根据文档,要完成扫描必须提供如下必要参数-Dsonar.gitlab.commit_sha=1632c729e8f78f913cbf0925baa2a8c893e4473b \ 版本sha -Dsonar.gitlab.ref_name=master \ 分支 -Dsonar.gitlab.project_id=16 \ 项目id -Dsonar.dynamicAnalysis=reuseReports \ 扫描方式 -Dsonar.gitlab.failure_notification_mode=commit-status \ 更改提交状态 -Dsonar.gitlab.url=http://192.168.1.200 \ gitlab地址 -Dsonar.gitlab.user_token=k8xLe6dYTzdtoewSysmy \ gitlab token -Dsonar.gitlab.api_version=v41.配置一个全局token至少需要如下权限令牌如下K8DtxxxifxU1gQeDgvDK其他信息根据现有的项目输入即可-Dsonar.gitlab.commit_sha=4a5bb3db1c845cddc86290d137ef694b3b076d0e \ 版本sha -Dsonar.gitlab.ref_name=master \ 分支 -Dsonar.gitlab.project_id=19 \ 项目id -Dsonar.dynamicAnalysis=reuseReports \ 扫描方式 -Dsonar.gitlab.failure_notification_mode=commit-status \ 更改提交状态 -Dsonar.gitlab.url=http://172.16.100.47 \ gitlab地址 -Dsonar.gitlab.user_token=K8DtxxxifxU1gQeDgvDK \ gitlab token 
-Dsonar.gitlab.api_version=v42.将上述命令添加到sonarqube的流水线中/var/jenkins_home/package/sonar-scanner/bin/sonar-scanner \ -Dsonar.host.url=http://172.16.15.136:9000 \ -Dsonar.projectKey=java-demo \ -Dsonar.projectName=java-demo \ -Dsonar.projectVersion=120 \ -Dsonar.login=636558affea60cc5f264247de36e7c27c817530b \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" \ -Dsonar.links.homepage=http://172.16.15.136:180/devops/java-demo.git \ -Dsonar.links.ci=http://172.16.15.136:8088/job/java-demo/120/ \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.branch.name=main \ -Dsonar.gitlab.commit_sha=9353e89a7b42e0d93ddf95520408ecfde9a5144a \ -Dsonar.gitlab.ref_name=main \ -Dsonar.gitlab.project_id=2 \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=http://172.16.15.136:180 \ -Dsonar.gitlab.user_token=9mszu2KXx7nHXiwJveBs \ -Dsonar.gitlab.api_version=v4运行测试正常是什么样的呢,换一个环境配置下/usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=http://172.16.100.47:9000 \ -Dsonar.projectKey=java-demo \ -Dsonar.projectName=java-demo \ -Dsonar.projectVersion=20 \ -Dsonar.login=bc826f124d691127c351388274667d7deb1cc9b2 \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" 
\ -Dsonar.links.homepage=www.baidu.com \ -Dsonar.links.ci=20 \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=master \ -Dsonar.gitlab.commit_sha=4a5bb3db1c845cddc86290d137ef694b3b076d0e \ -Dsonar.gitlab.ref_name=master \ -Dsonar.gitlab.project_id=19 \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=http://172.16.100.47 \ -Dsonar.gitlab.user_token=K8DtxxxifxU1gQeDgvDK \ -Dsonar.gitlab.api_version=v4 执行之后INFO: SCM Publisher SCM provider for this project is: git INFO: SCM Publisher 2 source files to be analyzed INFO: SCM Publisher 2/2 source files have been analyzed (done) | time=704ms INFO: CPD Executor 2 files had no CPD blocks INFO: CPD Executor Calculating CPD for 0 files INFO: CPD Executor CPD calculation finished (done) | time=0ms INFO: Analysis report generated in 42ms, dir size=74 KB INFO: Analysis report compressed in 14ms, zip size=13 KB INFO: Analysis report uploaded in 468ms INFO: ANALYSIS SUCCESSFUL, you can browse http://172.16.100.47:9000/dashboard?id=java-demo&branch=master INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report INFO: More about the report processing at http://172.16.100.47:9000/api/ce/task?id=AYHOP018DZyaRsN1subY INFO: Executing post-job 'GitLab Commit Issue Publisher' INFO: Waiting quality gate to complete... 
INFO: Quality gate status: OK INFO: Duplicated Lines : 0 INFO: Lines of Code : 18 INFO: Report status=success, desc=SonarQube reported QualityGate is ok, with 2 ok, no issues INFO: Analysis total time: 7.130 s INFO: ------------------------------------------------------------------------ INFO: EXECUTION SUCCESS INFO: ------------------------------------------------------------------------ INFO: Total time: 7.949s INFO: Final Memory: 17M/60M INFO: ------------------------------------------------------------------------流水线已通过3.获取参数现在的问题是,手动输入gitlab的这些值不可能在jenkins中输入,我们需要自动获取这些。分支的环境变量通过传递来,用变量获取即可commit_sha通过读取当前代码中的文件实现gitlab token放到密钥管理当中于是,我们通过jq来获取格式化gitlab api返回值获取缺省的项目id需要下载一个jq程序在jenkins节点上。于是我们在https://stedolan.github.io/jq/download/页面下载一个 binaries二进制的即可https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64获取项目id curl --silent --header "PRIVATE-TOKEN: K8DtxxxifxU1gQeDgvDK" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| jq -rc '.[]|select(.name == "java-demo")'|jq .id示例1:如果项目名称在所有组内是唯一的,就可以使用jq -rc '.[]|select(.name == "java-demo")',如下.name == "java-demo": 项目名curl --silent --header "PRIVATE-TOKEN: K8DtxxxifxU1gQeDgvDK" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| jq -rc '.[]|select(.name == "java-demo")' | jq .id示例2:如果项目名称在所有组内不是唯一,且有多个的,用jq -rc '.[]|select(.path_with_namespace == "java/java-demo")',如下.path_with_namespace == java/java-demo : 组名/项目名curl --silent --header "PRIVATE-TOKEN: K8DtxxxifxU1gQeDgvDK" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| jq -rc '.[]|select(.path_with_namespace == "java/java-demo")'|jq .id获取当前的sha版本号获取办版本号只需要在当前项目目录内读取文件或者命令即可,it log --pretty=oneline|head -1| cut -b 1-40,如下[root@linuxea-48 /data/jenkins-latest/jenkins_home/workspace/linuxea-2022]# git log --pretty=oneline|head -1| cut -b 1-40 4a5bb3db1c845cddc86290d137ef694b3b076d0e除此之外使用cut -b -40 .git/refs/remotes/origin/master 能获得一样的效果[root@linuxea-48 /data/jenkins-latest/jenkins_home/workspace/linuxea-2022]# cut -b -40 
.git/refs/remotes/origin/master 4a5bb3db1c845cddc86290d137ef694b3b076d0e项目名称项目名称,我们可以使用Jenkins的项目名字。但是,这个名字有时候未必和git的项目名称一样,于是,我们直接截取项目的地址名称JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() 那么现在已经具备上面的几个关键参数,现在分别命名GIT_COMMIT_TAGSHA和Projects_GitId,JOB_NAMESenvironment { def GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_GitId=sh (script: """curl --silent --header "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| ${buildMap["jq"]} -rc '.[]|select(.path_with_namespace == "java/java-demo")'| ${buildMap["jq"]} .id""",returnStdout: true).trim() }那么现在的环境变量就是 environment { def tag_time = new Date().format("yyyyMMddHHmm") def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}" def GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . 
-f 1""",returnStdout: true).trim() def Projects_GitId=sh (script: """curl --silent --header "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| ${buildMap["jq"]} -rc '.[]|select(.path_with_namespace == "java/java-demo")'| ${buildMap["jq"]} .id""",returnStdout: true).trim() def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK" def GitLab_Address="http://172.16.100.47" } 而新增的调用的命令如下 -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \ -Dsonar.gitlab.ref_name=${branch} \ -Dsonar.gitlab.project_id=${Projects_GitId} \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=${GitLab_Address} \ -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \ -Dsonar.gitlab.api_version=v4 构建一次能够看到已经获取到的值,构建成功的完整的阶段代码如下: stage("coed sonar"){ environment { def GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_GitId=sh (script: """curl --silent --heade "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| /usr/local/package/jq-1.6/jq -rc '.[]|select(.path_with_namespace == "java/java-demo")'| /usr/local/package/jq-1.6/jq .id""",returnStdout: true).trim() def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK" def GitLab_Address="http://172.16.100.47" } steps{ script { withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) { sh """ cd linuxea && \ /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=${GitLab_Address}:9000 \ -Dsonar.projectKey=${JOB_NAME} \ -Dsonar.projectName=${JOB_NAME} \ -Dsonar.projectVersion=${BUILD_NUMBER} \ -Dsonar.login=${SONAR_TOKEN} \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" 
\ -Dsonar.links.homepage=${env.BASEURL} \ -Dsonar.links.ci=${BUILD_URL} \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=${branch} \ -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \ -Dsonar.gitlab.ref_name=${branch} \ -Dsonar.gitlab.project_id=${Projects_GitId} \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=${GitLab_Address} \ -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \ -Dsonar.gitlab.api_version=v4 """ } } } }4.8 mvn 打包我们是哟个一条命令直接进行打包-Dmaven.test.skip=true,不执行测试用例,也不编译测试用例类-Dmaven.test.failure.ignore=true ,忽略单元测试失败-s ~/.m2/settings.xml,指定mvn构建的配置文件位置mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml阶段如下 stage("mvn build"){ steps { script { sh """ cd linuxea mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml """ } } }4.9 推送镜像我们先需要将docker配置好,首先容器内需要安装docker,而后挂载socket如果你的系统是和容器系统的库文件一样,你可以将本地的docker二进制文件挂载到容器内,但是我使用的是alpine,因此我在容器内安装了docker,此时只需要挂载目录和sock即可也可以将docker挂载到容器内即可 - /usr/bin/docker:/usr/bin/docker - /etc/docker:/etc/docker - /var/run/docker.sock:/var/run/docker.sock并在容器内登录docker容器内登录,或者在流水线阶段中登录也可以[root@linuxea-48 /data/jenkins-latest/jenkins_home]# docker exec -it jenkins bash bash-5.1# cat ~/.docker/config.json { "auths": { "harbor.marksugar.com": { "auth": "YWRtaW46SGFyYm9yMTIzNDU=" } } }将配置复制到主机并挂载到容器内,或者在主机登录挂载到容器都可以- /data/jenkins-latest/.docker:/root/.docker能够在容器内查看docker命令bash-5.1# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 536cb1dbeb3f 
registry.cn-hangzhou.aliyuncs.com/marksugar/jenkins:2.332-3-alpine-ansible-maven3-nodev16.15-latest "/sbin/tini -- /usr/…" About an hour ago Up About an hour jenkins而后配置docker推送阶段开始之前要配置环境变量,用于获取镜像的时间tag_time随机时间 agent any environment { def tag_time = new Date().format("yyyyMMddHHmm") def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}" }docker阶段请注意:此时在COPY skywalking-agent的时候,需要将包拷贝到当前目录才能COPY到容器内 stage("docker build"){ steps{ script{ sh """ cd linuxea docker ps -a cp -r /usr/local/package/skywalking-agent ./ docker build -f ./Dockerfile -t $IPATH . docker push $IPATH docker rmi -f $IPATH """ } } }与此同时需要修改Dockerfile中的COPY 目录而后创建harbor仓库开始构建一旦构建完成,镜像将会推送到harbor仓库此时的pipeline流水线i清单如下try { if ( "${onerun}" == "gitlabs"){ println("Trigger Branch: ${info_ref}") RefName="${info_ref.split("/")[-1]}" //自定义显示名称 currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}" //自定义描述 currentBuild.description = "Trigger by user ${info_user_username} 自动触发 \n branch: ${RefName} \n commit message: ${info_commits_0_message}" BUILD_TRIGGER_BY="${info_user_username}" BASEURL="${info_project_git_http_url}" } }catch(e){ BUILD_TRIGGER_BY="${currentBuild.getBuildCauses()[0].userId}" currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} 非自动触发 \n branch: ${branch} \ngit: ${BASEURL}" } pipeline{ //指定运行此流水线的节点 agent any environment { def tag_time = new Date().format("yyyyMMddHHmm") def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}" } //管道运行选项 options { skipDefaultCheckout true skipStagesAfterUnstable() buildDiscarder(logRotator(numToKeepStr: '2')) } //流水线的阶段 stages{ //阶段1 获取代码 stage("CheckOut"){ steps { script { println("下载代码 --> 分支: ${env.branch}") checkout( [$class: 'GitSCM', branches: [[name: "${branch}"]], extensions: [], userRemoteConfigs: [[ credentialsId: 'gitlab-mark', url: "${BASEURL}"]]]) } } } stage("unit Test"){ steps{ script{ sh """ cd linuxea && mvn test -s /var/jenkins_home/.m2/settings.xml2 """ } } post { success { script { 
junit 'linuxea/target/surefire-reports/*.xml' } } } } stage("coed sonar"){ environment { def GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_GitId=sh (script: """curl --silent --header "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| /usr/local/package/jq-1.6/jq -rc '.[]|select(.path_with_namespace == "java/java-demo")'| /usr/local/package/jq-1.6/jq .id""",returnStdout: true).trim() def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK" def GitLab_Address="http://172.16.100.47" } steps{ script { withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) { sh """ cd linuxea && \ /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=${GitLab_Address}:9000 \ -Dsonar.projectKey=${JOB_NAME} \ -Dsonar.projectName=${JOB_NAME} \ -Dsonar.projectVersion=${BUILD_NUMBER} \ -Dsonar.login=${SONAR_TOKEN} \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!"
\ -Dsonar.links.homepage=${env.BASEURL} \ -Dsonar.links.ci=${BUILD_URL} \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=${branch} \ -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \ -Dsonar.gitlab.ref_name=${branch} \ -Dsonar.gitlab.project_id=${Projects_GitId} \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=${GitLab_Address} \ -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \ -Dsonar.gitlab.api_version=v4 """ } } } } stage("mvn build"){ steps { script { sh """ cd linuxea mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml2 """ } } } stage("docker build"){ steps{ script{ sh """ cd linuxea docker ps -a cp -r /usr/local/package/skywalking-agent ./ docker build -f ./Dockerfile -t $IPATH . docker push $IPATH docker rmi -f $IPATH """ } } } } }
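上面 post 阶段中的 DingTalk 函数，本质上是向机器人 webhook POST 一段 markdown 格式的 JSON。下面用一小段 Python 示意这个消息体的拼装逻辑，便于核对字段（仅为示意：函数名 build_dingtalk_markdown 与示例中的项目、标签取值均为假设，实际发送仍由流水线中的 curl 完成）：

```python
import json

def build_dingtalk_markdown(md_title, md_text, at_all, at_user="", sed_content=""):
    """拼装钉钉机器人 markdown 消息体, 与流水线中 curl --data 的 JSON 结构一致。
    注意: title 中需包含机器人安全设置里的关键字(本文为 DEVOPS), 否则消息会被拒收。"""
    return {
        "msgtype": "markdown",
        "markdown": {
            "title": md_title,
            # 与 Jenkinsfile 中一致: 先是通知标题 SedContent, 换行后接正文 mdText
            "text": "{}\n {}".format(sed_content, md_text),
        },
        "at": {
            "atMobiles": [at_user],  # 需要 @ 的手机号
            "isAtAll": at_all,       # 是否 @ 所有人
        },
    }

# 示例取值均为假设, 仅用于展示最终发出的 JSON 长什么样
payload = build_dingtalk_markdown(
    "DEVOPS",
    "### ✅ \n ### 项目: linuxea-2022 \n ### 标签: harbor.marksugar.com/java/linuxea-2022:202207071200",
    True,
    sed_content="构建通知",
)
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

把 print 出来的 JSON 直接作为 curl --data 的内容，即可复现流水线中发出的请求。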
2022年07月07日
2022-07-05
linuxea:jenkins流水线集成juit/sonarqube/覆盖率扫描配置一(7)
到了当前的流水线阶段，我们需要配置mvn去打包，sonarqube，以及容器的镜像构建，这些东西都会被安排到流水线上，直到docker镜像构建完成。此阶段，我将一个docker镜像构建完成视作一个制品的完成。这些流程被放置在一起，因此称作持续集成。当然，这在这里只是一个非常简单的模型。阅读此篇，你将了解如下列表中简单的实现方式：jenkins和gitlab触发(上一章实现)jenkins凭据使用(上一章实现)juit配置(本章实现)sonarqube简单扫描(本章实现)sonarqube简单分支扫描(本章实现)sonarqube覆盖率(本章实现)sonarqube与gitlab关联配置docker中构建dockermvn打包打包基于java的skywalking agent(上一章实现)基于gitlab来管理kustomize的k8s配置清单kubectl部署kubectl deployment的状态跟踪钉钉消息的构建状态推送
开始下载一个java程序包
另外，为了演示这个效果，我们需要一个合适的jar包来测试，而不是在前面提供的那个hello-world的包。于是在https://start.spring.io/页面默认选择，选择java 8，而后点击GENERATE下载demo包，解压这个包将代码推送到gitlab；而后选择ADD，最后点击GENERATE，下载一个java-demo的包
错误修改
如果构建失败，mvn直接使用aliyun的镜像源构建即可<mirrors> .... <mirror> <id>nexus-aliyun</id> <mirrorOf>*</mirrorOf> <name>Nexus aliyun</name> <url>http://maven.aliyun.com/nexus/content/groups/public</url> </mirror> </mirrors>
而后，我们创建一个新分支来存放cd /tmp git clone git@172.16.100.47:java/java-demo.git cd java-demo git checkout -b web mkdir linuxea
而后将java-demo.zip解压复制进去unzip java-demo.zip cp java-demo/* /tmp/java-demo/linuxea/
上传到git仓库git add .
git commit -m "first commit" git push -u origin web另外还需要将Dockerfile放入4.1 配置凭据你可以使用密钥或者账号密码来进行配置,如果是账号密码需要配置一个Credentials后来进行引用。因此,需要安装Credentials插件install plugins "Credentials"在Manage Jenkins -> Manage Credentials -> 选择全局,而后添加凭据而后复制创建完成的id,将会在拉取代码时候用到,如下 //阶段1 获取代码 stage("CheckOut"){ steps { script { println("下载代码 --> 分支: ${env.branch}") checkout( [$class: 'GitSCM', branches: [[name: "${branch}"]], extensions: [], userRemoteConfigs: [[ credentialsId: 'gitlab-mark', url: "${BASEURL}"]]]) } } }总的如下try { if ( "${onerun}" == "gitlabs"){ println("Trigger Branch: ${info_ref}") RefName="${info_ref.split("/")[-1]}" //自定义显示名称 currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}" //自定义描述 currentBuild.description = "Trigger by user ${info_user_username} 自动触发 \n branch: ${RefName} \n commit message: ${info_commits_0_message}" BUILD_TRIGGER_BY="${info_user_username}" BASEURL="${info_project_git_http_url}" } }catch(e){ BUILD_TRIGGER_BY="${currentBuild.getBuildCauses()[0].userId}" currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} 非自动触发 \n branch: ${branch} \ngit: ${BASEURL}" } pipeline{ //指定运行此流水线的节点 agent any //管道运行选项 options { skipDefaultCheckout true skipStagesAfterUnstable() buildDiscarder(logRotator(numToKeepStr: '2')) } //流水线的阶段 stages{ //阶段1 获取代码 stage("CheckOut"){ steps { script { println("下载代码 --> 分支: ${env.branch}") checkout( [$class: 'GitSCM', branches: [[name: "${branch}"]], extensions: [], userRemoteConfigs: [[ credentialsId: 'gitlab-root', url: "${BASEURL}"]]]) } } } } }但是这里面涉及到两个变量,分别是:$branch: 分支这里我们修改为: web$BASEURL: git地址这个就是这个项目地地址这两个参数化构建过程中需要在选项参数中进行添加4.2 配置mvn源地址如果没有配置mvn的,参考如下.m2下的settings.xml文件不好用了,或者反正就不好用了,我们直接配置绝对路径找到mirrors字段,字段内全部删除,复制下面的粘贴进去即可<mirrors> <mirror> <id>alimaven</id> <name>aliyun maven</name> <url>http://maven.aliyun.com/nexus/content/groups/public/</url> <mirrorOf>central</mirrorOf> </mirror> </mirrors>像这样,而后存放在一个位置<?xml version="1.0" encoding="UTF-8"?> <settings 
xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <pluginGroups> </pluginGroups> <proxies> </proxies> <servers> <server> <id>maven-releases</id> <username>admin</username> <password>admin</password> </server> <server> <id>maven-snapshots</id> <username>admin</username> <password>admin</password> </server> </servers> <mirrors> <!-- <mirror> <id>nexus</id> <mirrorOf>local</mirrorOf> <name>nexus</name> <url>http://172.16.15.136:8081/repository/maven-public/</url> </mirror>--> <mirror> <id>alimaven</id> <name>aliyun maven</name> <url>http://172.16.15.136:8081/repository/maven2-group/</url> <mirrorOf>central</mirrorOf> </mirror> </mirrors> <profiles> </profiles> </settings>4.3 juit测试pom文件增加依赖 <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.12</version> <scope>test</scope> </dependency> 我们的目录结构中,首先是一个Linuxea的目录,于是我们每次都需要cd进去 stage("unit Test"){ steps{ script{ sh """ cd linuxea && mvn test -s /var/jenkins_home/.m2/settings.xml """ } } post { success { script { junit 'linuxea/target/surefire-reports/*.xml' } } } }正常情况下会在surefire-reports下创建一个xml文件,jenkins读取此文件[root@linuxea-48 /data/jenkins-latest/jenkins_home/workspace/linuxea-2022/linuxea/target]# ll total 0 drwxr-xr-x 3 root root 47 Jul 3 18:31 classes drwxr-xr-x 3 root root 25 Jul 3 01:30 generated-sources drwxr-xr-x 3 root root 30 Jul 3 01:45 generated-test-sources drwxr-xr-x 3 root root 35 Jul 3 01:40 maven-status drwxr-xr-x 2 root root 257 Jul 3 18:32 surefire-reports drwxr-xr-x 3 root root 17 Jul 3 18:31 test-classes如果代码里没有单元测试,这里是没有报告产生的,并且有可能因为版本问题会报错现在我们将web合并到master,开始构建。最终如下4.4 sonarqube简单使用,安装插件即可,SonarQube Scanner,或者下载SonarQube Scanner安装挂载到容器内1.首先我们配置token信息和Jenkins互联复制token01c2f60766bd896fe82a378bb105e1a73d9161c32.回到jenkins添加凭据插件配置如果此时用的插件就可以这么配置,在Manage Jenkins->Configure System流水线阶段如下 stage('SonarQube analysis') { // 
sonarqube installation name steps{ withSonarQubeEnv( installationName: 'sonarqube-servers') { sh """ cd demo && mvn clean verify sonar:sonar -s /var/jenkins_home/.m2/settings.xml """ } } }
不使用插件(现在就是)
不用插件就需要下载sonar-scanner-cli包使用，如下wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.6.2.2472-linux.zip unzip sonar-scanner-cli-4.6.2.2472-linux.zip -d /usr/local/ ln -s /usr/local/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner /usr/local/sbin/ sed -i 's/use_embedded_jre=true/use_embedded_jre=false/g' /usr/local/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner
这个包最终是被挂载到容器的/usr/local/package/下。如果不用插件，就要用命令组合变量，因此需要先生成一下命令的格式：在流水线语法中选中"withCredentials: Bind credentials to variables"，而后绑定到"sonarqube-token"变量中，将生成的流水线脚本复制，并进行修改 stage("coed sonar"){ steps{ script { withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) { sh """ cd linuxea && \ /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=http://172.16.100.47:9000 \ -Dsonar.projectKey=${JOB_NAME}_${branch} \ -Dsonar.projectName=${JOB_NAME}_${branch} \ -Dsonar.projectVersion=${BUILD_NUMBER} \ -Dsonar.login=${SONAR_TOKEN} \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!"
\ -Dsonar.links.homepage=${env.BASEURL} \ -Dsonar.links.ci=${BUILD_URL} \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports """ } } } }此阶段运行后如下现在在sonarqube生成的就是master分支的4.5 代码覆盖率在8.9.2的版本里面jacoco默认是有的,因此我们不需要额外插件但是仍然需要配置pom.xml中的dependency,如下 <dependency> <groupId>org.jacoco</groupId> <artifactId>jacoco-maven-plugin</artifactId> <version>0.8.3</version> <scope>test</scope> </dependency>还有一个plugins <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> </plugin> <plugin> <groupId>org.jacoco</groupId> <artifactId>jacoco-maven-plugin</artifactId> <version>0.8.2</version> <configuration> <destFile>target/jacoco.exec</destFile> <detaFile>target/jacoco.exec</detaFile> </configuration> <executions> <execution> <id>jacoco-initialize</id> <goals> <goal>prepare-agent</goal> </goals> </execution> <execution> <id>jacoco-site</id> <phase>test</phase> <goals> <goal>report</goal> </goals> </execution> </executions> </plugin>修改完成的pom.xml如下<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.7.1</version> <relativePath/> <!-- lookup parent from repository --> </parent> <groupId>com.linuxea</groupId> <artifactId>java-demo</artifactId> <version>0.0.1-SNAPSHOT</version> <name>java-demo</name> <description>Demo project for Spring Boot</description> <properties> <java.version>1.8</java.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> 
<groupId>org.jacoco</groupId> <artifactId>jacoco-maven-plugin</artifactId> <version>0.8.3</version> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.12</version> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> </plugin> <plugin> <groupId>org.jacoco</groupId> <artifactId>jacoco-maven-plugin</artifactId> <version>0.8.2</version> <configuration> <destFile>target/jacoco.exec</destFile> <detaFile>target/jacoco.exec</detaFile> </configuration> <executions> <execution> <id>jacoco-initialize</id> <goals> <goal>prepare-agent</goal> </goals> </execution> <execution> <id>jacoco-site</id> <phase>test</phase> <goals> <goal>report</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project>而后在构建的命令中添加如下-Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec添加到流水线 stage("coed sonar"){ steps{ script { withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) { sh """ cd linuxea && \ /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=http://172.16.100.47:9000 \ -Dsonar.projectKey=${JOB_NAME}_${branch} \ -Dsonar.projectName=${JOB_NAME}_${branch} \ -Dsonar.projectVersion=${BUILD_NUMBER} \ -Dsonar.login=${SONAR_TOKEN} \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" 
\ -Dsonar.links.homepage=${env.BASEURL} \ -Dsonar.links.ci=${BUILD_URL} \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec """ } } } }构建后sonarqube的覆盖率将会出现值如果没有配置测试用例,这里是0
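上面 junit 'linuxea/target/surefire-reports/*.xml' 这一步读取的是 maven-surefire-plugin 生成的 XML 报告。下面用 Python 解析一份手写的最小样例，示意 jenkins 从报告中统计用例总数与失败数的方式（样例 XML 的内容是假设数据，但属性名与 surefire 实际产物一致）：

```python
import xml.etree.ElementTree as ET

# 手写的最小 surefire 报告样例, 属性名(tests/failures/errors/skipped)与
# maven-surefire-plugin 生成的 XML 一致, 具体取值为假设
SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="com.linuxea.DemoApplicationTests" tests="2" failures="1" errors="0" skipped="0" time="0.35">
  <testcase name="contextLoads" classname="com.linuxea.DemoApplicationTests" time="0.30"/>
  <testcase name="helloReturnsOk" classname="com.linuxea.DemoApplicationTests" time="0.05">
    <failure message="expected ok">AssertionError</failure>
  </testcase>
</testsuite>
"""

def summarize(report_xml):
    """统计一份 surefire 报告中的用例总数与失败数, 即 junit 步骤展示的核心指标"""
    suite = ET.fromstring(report_xml)
    total = int(suite.get("tests", 0))
    failed = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    return total, failed

total, failed = summarize(SAMPLE)
print(total, failed)  # 2 1
```

若代码里没有单元测试，surefire 不会生成报告、tests 也就是 0，junit 步骤自然没有内容可展示，这与上文"如果代码里没有单元测试，这里是没有报告产生的"以及覆盖率为 0 的现象一致。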
2022年07月05日
2022-07-01
linuxea:gitlab和jenkins自动和手动触发构建(6)
在前面几章里面,我们配置了基本的组件,围绕java构建配置了nexus3,配置了skywalking,能够打包和构建镜像,但是我们需要将这些串联起来,组成一个流水线,并且需要将skywalking的agent打包在镜像内,并配置必要的参数。与此同时,我们使用一个简单的实现方式用作在jenkins上,那就是pipeline和部分groovy语法的函数,至少来完成一下场景。场景1: A方希望提交代码或者打TAG来触发jenkins构建,在构建之前使用sonarqube进行代码扫描,并且配置简单的阈值。而后去上述的流水线整合。按道理,sonarqube的配置是有一些讲究的,处于整体考虑sonarqube只用作流水线管道的一部分,本次不去考虑sonarqube的代码扫描策略,也不会将扫描结果关联到gitlab,只是仅仅将文件反馈到Jenkins。这些在后面如果有时间在进行配置在本次中我只仅仅使用pipeline,并不是共享库。阅读此篇,你将了解如下列表中简单的实现方式:jenkins和gitlab触发(本章实现)jenkins凭据使用(本章实现)juit配置sonarqube简单扫描配置docker中构建docker打包基于java的skywalking agent(本章实现)基于gitlab来管理kustomize的k8s配置清单kubectl部署kubeclt deployment的状态跟踪钉钉消息的构建状态推送拓扑如下图:1.添加skywalking agent此前在基于nexus3代码构建和harbor镜像打包(3)一骗你中,我们已经有了一个java-hello-world的包,提供了一个8086的端口,并且我们将Dockerfile之类的都已准备妥当,此时延续此前的流程继续走。如果没有,在此页面克隆。1.现在我们下载一个skywaling的agent(8.11.0)端来到Dockerfile中,要实现,需要下载包到jenkins服务器上,或者打在镜像内。https://www.apache.org/dyn/closer.cgi/skywalking/java-agent/8.11.0/apache-skywalking-java-agent-8.11.0.tgz2.gitlab上创建一个java组,创建一个java-demo的项目,将代码和代码中的Dockerfile推送到gitlab仓库中3.在Dockerfile中添加COPY agent,并在启动的时候添加到启动命令中,如下docker-compose中映射关系中,/data/jenkins-latest/package:/usr/local/package。于是我们将skywalking包存放在/data/jenkins-latest/package下,而后在Dockerfile中/usr/local/package的路径即可COPY /usr/local/package/skywalking-agent /skywalking-agent而后启动的时候引入到启动命令中 -javaagent:/skywalking-agent/skywalking-agent.jarCMD java ${JAVA_OPTS} -javaagent:/skywalking-agent/skywalking-agent.jar -jar *.jarDockerfile如下我们需要修改下目录 结构,提前创建/skywalking-agent/logs并且授权并且,skywalking-agent目录需要提前在流水线中复制到当前目录中来FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202 MAINTAINER by mark ENV JAVA_OPTS="\ -server \ -Xms2048m \ -Xmx2048m \ -Xmn512m \ -Xss256k \ -XX:+UseConcMarkSweepGC \ -XX:+UseCMSInitiatingOccupancyOnly \ -XX:CMSInitiatingOccupancyFraction=70 \ -XX:+HeapDumpOnOutOfMemoryError \ -XX:HeapDumpPath=/data/logs" \ MY_USER=linuxea \ MY_USER_ID=316 RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} \ && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} \ 
&& mkdir /data/logs /skywalking-agent -p COPY target/*.jar /data/ COPY skywalking-agent /skywalking-agent/ RUN chown -R 316.316 /skywalking-agent WORKDIR /data USER linuxea CMD java ${JAVA_OPTS} -javaagent:/skywalking-agent/skywalking-agent.jar -jar *.jar 注意事项我们需要通过trace-ignore-plugin来过滤跟踪系统需要忽略的url,因此我们需要根据trace-ignore-plugin进行配置。这很有必要匹配规则遵循 Ant Path 匹配风格,如 /path/、/path/*、/path/?。将apm-trace-ignore-plugin-x.jar复制到agent/plugins,重启agent即可生效插件。于是,我们将这个插件复制到plugins下tar xf apache-skywalking-java-agent-8.11.0.tar.gz cd skywalking-agent/ cp optional-plugins/apm-trace-ignore-plugin-8.11.0.jar plugins/忽略参数(忽略参数在k8syaml中进行配置)有两种方法可以配置忽略模式。通过系统环境设置具有更高的优先级。1.系统环境变量设置,需要在系统变量中添加skywalking.trace.ignore_path,值为需要忽略的路径,多条路径之间用,分隔2.将/agent/optional-plugins/apm-trace-ignore-plugin/apm-trace-ignore-plugin.config 复制到/agent/config/ 目录下,并添加过滤跟踪的规则trace.ignore_path=/your/path/1/,/your/path/2/4.将gitlab的java-demo项目拉到本地后,将java-helo-word项目文件移动到私有gitlab,并且将Dockerfile放入[root@Node-172_16_100_48 /data]# git clone git@172.16.100.47:java/java-demo.git Cloning into 'java-demo'... remote: Enumerating objects: 3, done. remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3 Receiving objects: 100% (3/3), done. [root@Node-172_16_100_48 /data]# mv java-helo-word/* java-demo/ [root@Node-172_16_100_48 /data]# tree java-demo/linuxea/ java-demo/ ├── bbin.png ├── cn-site-service.iml ├── Dockerfile .......... 
23 directories, 26 files放入完成后的Dockerfile的内容如下FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202 MAINTAINER by mark ENV JAVA_OPTS="\ -server \ -Xms2048m \ -Xmx2048m \ -Xmn512m \ -Xss256k \ -XX:+UseConcMarkSweepGC \ -XX:+UseCMSInitiatingOccupancyOnly \ -XX:CMSInitiatingOccupancyFraction=70 \ -XX:+HeapDumpOnOutOfMemoryError \ -XX:HeapDumpPath=/data/logs" \ MY_USER=linuxea \ MY_USER_ID=316 RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} \ && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} \ && mkdir /data/logs /skywalking-agent -p COPY target/*.jar /data/ COPY skywalking-agent /skywalking-agent/ RUN chown -R 316.316 /skywalking-agent WORKDIR /data USER linuxea CMD java ${JAVA_OPTS} -javaagent:/skywalking-agent/skywalking-agent.jar -jar *.jar Dockerfile添加skywalking到此完成。[root@linuxea-48 /data/java-demo]# git add . && git commit -m "first commit" && git push -u origin main [main 2f6d866] first commit 25 files changed, 545 insertions(+) create mode 100644 Dockerfile create mode 100644 bbin.png create mode 100644 cn-site-service.iml create mode 100644 fenghuang.png create mode 100644 index.html create mode 100644 pom.xml create mode 100644 src/main/java/com/dt/info/InfoSiteServiceApplication.java create mode 100644 src/main/java/com/dt/info/controller/HelloController.java create mode 100644 src/main/resources/account.properties create mode 100644 src/main/resources/application.yml create mode 100644 src/main/resources/log4j.properties ........... 
remote: remote: To create a merge request for main, visit: remote: http://172.16.100.47/java/java-demo/-/merge_requests/new?merge_request%5Bsource_branch%5D=main remote: To git@172.16.100.47:java/java-demo.git * [new branch] main -> main Branch main set up to track remote branch main from origin.代码上传到gitlab后开始配置jenkinsB.new jar你也可以生成一个空的java包来测试准备一个jar包,可以是一个java已有的程序或者下载一个空的,如下在https://start.spring.io/页面默认选择,选择java 8,而后点击CENERATE下载demo包,解压这个包将代码推送到gitlab将项目拉到本地后在上传demo包Administrator@DESKTOP-RD8S1SJ MINGW64 /h/k8s-1.20.2/gitops $ git clone git@172.16.100.47:pipeline-ops/2022-test.git Cloning into '2022-test'... remote: Enumerating objects: 3, done. remote: Counting objects: 100% (3/3), done. remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 Receiving objects: 100% (3/3), done. Administrator@DESKTOP-RD8S1SJ MINGW64 /h/k8s-1.20.2/gitops $ unzip demo.zip -d 2022-test $ git add . && git commit -m "first commit" && git push2.关联jenkins和gitlab为了能够实现gitlab自动触发,我们需要配置一个webhook,并且jenkins安装插件来完成。首先我们需要插件,并且在gitlab配置一个webhook,一旦gitlab发生事件后就会触发到jenkins,jenkins启动流水线作业。我将会在流水线作业来判断作业被触发是通过gitlab还是其他的。2.1 jenkins插件安装Generic Webhook Trigger插件,而后点击新建imtes->输入一个名称—>选择pipeline例如,创建了一个Linuxea-2022的项目,勾选了Generic Webhook Trigger,并且在下方的token,输入了一个marksugar测试pipelinepipeline{ //指定运行此流水线的节点 agent any //管道运行选项 options { skipStagesAfterUnstable() } //流水线的阶段 stages{ //阶段1 获取代码 stage("CheckOut"){ steps{ script{ println("获取代码") } } } stage("Build"){ steps{ script{ println("运行构建") } } } } post { always{ script{ println("流水线结束后,经常做的事情") } } success{ script{ println("流水线成功后,要做的事情") } } failure{ script{ println("流水线失败后,要做的事情") } } aborted{ script{ println("流水线取消后,要做的事情") } } } }手动触发测试curl --location --request GET 'http://172.16.100.48:58080/generic-webhook-trigger/invoke/?token=marksugar'运行一次[root@linuxea-47 ~]# curl --location --request GET 'http://172.16.100.48:58080/generic-webhook-trigger/invoke/?token=marksugar' 
{"jobs":{"linuxea-2022":{"regexpFilterExpression":"","triggered":true,"resolvedVariables":{},"regexpFilterText":"","id":4,"url":"queue/item/4/"}},"message":"Triggered jobs."}You have new mail in /var/spool/mail/root2.2 配置gitlab webhook1.在右上角的preferences中的最下方Localization选择简体中文保存2.管理元->设置->网络->下拉菜单中的“出战请求”勾选 允许来自 web hooks 和服务对本地网络的请求回到gitlab首先进入项目后选择settings->webhooks->urlurl输入http://172.16.100.48:58080/generic-webhook-trigger/invoke/?token=marksugar?marksugar, 问号后面为设置的token而后选中push events和tag push events和最下面的ssl verfication : Enable SSL verification兵点击Add webhook测试在最下方 -> test下拉菜单中选择一个被我们选中过的事件,而后点击。模拟一次push在edit中 -> 最下方 View details 查看Request body,Request body就是发送的内容,这些内容可以被获取到并且解析回到jenkins,查看已经开始构建3.1 自动与手动关联在上面已经配置了自动触发jenkins构建,但是这还不够,我们想在jenkins上体现出来,那一次是自动构建,那一次是手动点击,于是我们添加try和catch因此,我们配置try,并且在现有的阶段添加两个必要的环境变量来应对手动触发3.2 添加手动参数branch: 分支BASEURL:git地址3.2 配置识别try语法我们需要获取请求过来的数据,因此我们获取所有的json请求,配置如下自动触发而后获取的变量方式解析后,我将必要的值进行拼接后如下println("Trigger User: ${info_user_username}") println("Trigger Branch: ${info_ref}" ) println("Trigger event: ${info_event_name}") println("Trigger application: ${info_project_name}") println("Trigger version number: ${info_checkout_sha}") println("Trigger commit message: ${info_commits_0_message}") println("Trigger commit time: ${info_commits_0_timestamp}")而我们只需要部分,因此就变成了如下这般try { println("Trigger Branch: ${info_ref}") RefName="${info_ref.split("/")[-1]}" //自定义显示名称 currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}" //自定义描述 currentBuild.description = "Trigger by user ${info_user_username} 非gitlab自动触发 \n branch: ${RefName} \n commit message: ${info_commits_0_message}" BUILD_TRIGGER_BY="${info_user_username}" BASEURL="${info_project_git_http_url}" }catch(e){ BUILD_TRIGGER_BY="${currentBuild.getBuildCauses()[0].userId}" currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} 手动触发 \n branch: ${branch} \n git url: ${BASEURL}" } pipeline{ //指定运行此流水线的节点 agent any //管道运行选项 options { skipDefaultCheckout true 
skipStagesAfterUnstable() buildDiscarder(logRotator(numToKeepStr: '2')) } //流水线的阶段 stages{ //阶段1 获取代码 stage("env"){ steps{ script{ println("${BASEURL}") } } } } }一旦被触发后,如下图3.3 判断gitlab但这还没玩,尽管此时已经能够识别自动触发了,但是我们无法识别到底是从哪里来的自动触发。但是,我们只需要知道那些是gitlab来的请求和不是gitlab来的请求即可。简单来说,就是需要一个参数来判断触发这次构建的来源。于是,我们配置请求参数来识别判断在Request parameters-> 输入onerunonerun: 用来判断的参数而后在gitlab的url中添加上传递参数http://172.16.100.48:58080/generic-webhook-trigger/invoke/?onerun=gitlabs&token=marksugar这里的onerun=gitlabs,如下我们在try中进行判断即可onerun=gitlabstry { if ( "${onerun}" == "gitlabs"){ println("从带有gitlabs请求来的构建") } }catch(e){ println("从没有带有gitlabs请求来的构建") }在本次中,配置如下try { if ( "${onerun}" == "gitlabs"){ println("Trigger Branch: ${info_ref}") RefName="${info_ref.split("/")[-1]}" //自定义显示名称 currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}" //自定义描述 currentBuild.description = "Trigger by user ${info_user_username} 自动触发 \n branch: ${RefName} \n commit message: ${info_commits_0_message}" BUILD_TRIGGER_BY="${info_user_username}" BASEURL="${info_project_git_http_url}" } }catch(e){ BUILD_TRIGGER_BY="${currentBuild.getBuildCauses()[0].userId}" currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} 非自动触发 \n branch: ${branch} \ngit: ${BASEURL}" } pipeline{ //指定运行此流水线的节点 agent any //管道运行选项 options { skipDefaultCheckout true skipStagesAfterUnstable() buildDiscarder(logRotator(numToKeepStr: '2')) } //流水线的阶段 stages{ //阶段1 获取代码 stage("env"){ steps{ script{ println("${BASEURL}") } } } } }手动构建一次通过命令构建[root@linuxea-48 ~]# curl --location --request GET 'http://172.16.100.48:58080/generic-webhook-trigger/invoke/?onerun=gitlabs&token=marksugar' && echo {"jobs":{"linuxea-2022":{"regexpFilterExpression":"","triggered":true,"resolvedVariables":{"info":"","onerun":"gitlabs","onerun_0":"gitlabs"},"regexpFilterText":"","id":14,"url":"queue/item/14/"}},"message":"Triggered jobs."}如下通过这样的配置,我们至少能从jenkins上看到构建的触发类型而已。
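上文的关键在于从 gitlab push 事件的 JSON 中取出 ref、user_username 等字段，并用 info_ref.split("/")[-1] 得到分支名，再拼出 currentBuild.displayName。下面用 Python 复现这一解析过程（payload 为手写的最小样例，字段名与 gitlab 实际发送的 Request body 一致，取值是假设的）：

```python
# 模拟 Generic Webhook Trigger 从 gitlab push 事件 Request body 中解析出的字段
# (对应流水线里的 info_ref / info_user_username / info_checkout_sha 等变量)
payload = {
    "event_name": "push",
    "ref": "refs/heads/web",
    "checkout_sha": "2f6d866...",
    "user_username": "marksugar",
    "project": {"git_http_url": "http://172.16.100.47/java/java-demo.git"},
    "commits": [{"message": "first commit"}],
}

def build_display_name(p):
    """复现流水线中 currentBuild.displayName 的拼装:
    #<事件名>-<分支名>-<提交SHA>, 分支名取 ref 以 / 分隔的最后一段"""
    ref_name = p["ref"].split("/")[-1]
    return "#{}-{}-{}".format(p["event_name"], ref_name, p["checkout_sha"])

print(build_display_name(payload))  # #push-web-2f6d866...
```

这也解释了为什么手动点击构建时这些 info_* 变量不存在、Groovy 会抛异常进入 catch 分支，从而走"非自动触发"的描述逻辑。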
2022年07月01日
2022-06-30
linuxea:skywalking9.1基于nacos的动态告警配置二(5)
接着上一篇skywalking9.1基于nacos的动态告警配置一(4),这本是在一起的,由于字数问题,分为两篇来写。那接着skywalking的ES安装完成后,继续配置skywalking和nacos的动态配置部分2.skywalking与nacos准备了这么多,其实就是先把nacos配置i起来,而后在skywalking中来调用,那么现在,我们配置nacos到skywalking中。但是在此之前,我们要配置一个configmpa清单来来列出配置而后传入到skywalking中。2.1 准备configmap文件skywalking在早在8版本已经支持了动态配置,这些在/skywalking/config下的application.yml文件中,默认为none,需要修改这个环境变量来应用不同的配置中心,比如为nacos阅读官网了解更多动态配置内容configuration: selector: ${SW_CONFIGURATION:none}我们修改为nacos使用,提供的变量如下 nacos: # Nacos Server Host serverAddr: ${SW_CONFIG_NACOS_SERVER_ADDR:127.0.0.1} # Nacos Server Port port: ${SW_CONFIG_NACOS_SERVER_PORT:8848} # Nacos Configuration Group group: ${SW_CONFIG_NACOS_SERVER_GROUP:skywalking} # Nacos Configuration namespace namespace: ${SW_CONFIG_NACOS_SERVER_NAMESPACE:} # Unit seconds, sync period. Default fetch every 60 seconds. period: ${SW_CONFIG_NACOS_PERIOD:60} # Nacos auth username username: ${SW_CONFIG_NACOS_USERNAME:""} password: ${SW_CONFIG_NACOS_PASSWORD:""} # Nacos auth accessKey accessKey: ${SW_CONFIG_NACOS_ACCESSKEY:""} secretKey: ${SW_CONFIG_NACOS_SECRETKEY:""}因此,我们需要将配置注入到pod中在k8s里面,使用configmap中的值传入到skwalking-oap中即可创建configmap清单,调用nacos,如下:如果你是外置的nacos或者es就需要修改配置的链接地址。我这里是集群apiVersion: v1 kind: ConfigMap metadata: name: skywalking-to-nacos namespace: skywalking data: nacos.name: "nacos" nacos.addr: "nacos-0.nacos-headless.nacos.svc.cluster.local,nacos-1.nacos-headless.nacos.svc.cluster.local,nacos-2.nacos-headless.nacos.svc.cluster.local" nacos.port: "8848" nacos.group: "skywalking" nacos.namespace: "skywalking" nacos.synctime: "60" nacos.username: "nacos" nacos.password: "nacos"这些配配置在skywalk-oap中需要传入 - name: SW_CONFIGURATION valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.name - name: SW_CONFIG_NACOS_SERVER_ADDR valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.addr - name: SW_CONFIG_NACOS_SERVER_PORT valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.port - name: SW_CONFIG_NACOS_SERVER_GROUP valueFrom: configMapKeyRef: name: 
skywalking-to-nacos key: nacos.group - name: SW_CONFIG_NACOS_SERVER_NAMESPACE valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.namespace - name: SW_CONFIG_NACOS_PERIOD valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.synctime - name: SW_CONFIG_NACOS_USERNAME valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.username - name: SW_CONFIG_NACOS_PASSWORD valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.password2.2 安装skywalking当准备完成上面的配置后,yaml如下#ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: labels: app: skywalking name: skywalking-oap namespace: skywalking --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: skywalking namespace: skywalking labels: app: skywalking rules: - apiGroups: [""] resources: ["pods", "endpoints", "services", "nodes"] verbs: ["get", "watch", "list"] - apiGroups: ["extensions"] resources: ["deployments", "replicasets"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: skywalking namespace: skywalking labels: app: skywalking roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: skywalking subjects: - kind: ServiceAccount name: skywalking-oap namespace: skywalking --- # oap apiVersion: v1 kind: Service metadata: name: skywalking-oap namespace: skywalking labels: app: skywalking-oap spec: type: ClusterIP ports: - port: 11800 name: grpc - port: 12800 name: rest selector: app: skywalking-oap chart: skywalking-4.2.0 --- # connent to nacos apiVersion: v1 kind: ConfigMap metadata: name: skywalking-to-nacos namespace: skywalking data: nacos.name: "nacos" # nacos.addr: "nacos-0.nacos-headless.skywalking.svc.cluster.local:8848,nacos-1.nacos-headless.skywalking.svc.cluster.local:8848,nacos-2.nacos-headless.skywalking.svc.cluster.local:8848" nacos.addr: 
"nacos-0.nacos-headless.nacos.svc.cluster.local,nacos-1.nacos-headless.nacos.svc.cluster.local,nacos-2.nacos-headless.nacos.svc.cluster.local" nacos.port: "8848" nacos.group: "skywalking" nacos.namespace: "skywalking" nacos.synctime: "60" nacos.username: "nacos" nacos.password: "nacos" --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: skywalking-oap name: skywalking-oap namespace: skywalking spec: replicas: 1 selector: matchLabels: app: skywalking-oap template: metadata: labels: app: skywalking-oap chart: skywalking-4.2.0 spec: serviceAccount: skywalking-oap serviceAccountName: skywalking-oap affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 podAffinityTerm: topologyKey: kubernetes.io/hostname labelSelector: matchLabels: app: "skywalking" release: "skywalking" component: "oap" initContainers: - name: wait-for-elasticsearch image: registry.cn-hangzhou.aliyuncs.com/marksugar/busybox:1.30 imagePullPolicy: IfNotPresent command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 elasticsearch 9200 && exit 0 || sleep 5; done; exit 1'] containers: - name: oap image: registry.cn-hangzhou.aliyuncs.com/marksugar/skywalking-oap-server:9.1.0 # docker pull apache/skywalking-oap-server:8.8.1 imagePullPolicy: IfNotPresent livenessProbe: tcpSocket: port: 12800 initialDelaySeconds: 15 periodSeconds: 20 readinessProbe: tcpSocket: port: 12800 initialDelaySeconds: 15 periodSeconds: 20 ports: - containerPort: 11800 name: grpc - containerPort: 12800 name: rest env: - name: JAVA_OPTS value: "-Dmode=no-init -Xmx2g -Xms2g" - name: SW_CLUSTER value: kubernetes - name: SW_CLUSTER_K8S_NAMESPACE value: "default" - name: SW_CLUSTER_K8S_LABEL value: "app=skywalking,release=skywalking,component=oap" # 记录数据。 - name: SW_CORE_RECORD_DATA_TTL value: "2" # Metrics数据 - name: SW_CORE_METRICS_DATA_TTL value: "2" - name: SKYWALKING_COLLECTOR_UID valueFrom: fieldRef: fieldPath: metadata.uid - name: SW_STORAGE value: elasticsearch - name: 
SW_STORAGE_ES_CLUSTER_NODES value: "elasticsearch:9200" # nacos - name: SW_CONFIGURATION valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.name - name: SW_CONFIG_NACOS_SERVER_ADDR valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.addr - name: SW_CONFIG_NACOS_SERVER_PORT valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.port - name: SW_CONFIG_NACOS_SERVER_GROUP valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.group - name: SW_CONFIG_NACOS_SERVER_NAMESPACE valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.namespace - name: SW_CONFIG_NACOS_PERIOD valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.synctime - name: SW_CONFIG_NACOS_USERNAME valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.username - name: SW_CONFIG_NACOS_PASSWORD valueFrom: configMapKeyRef: name: skywalking-to-nacos key: nacos.password volumeMounts: - name: alarm-settings mountPath: /skywalking/config/alarm-settings.yml subPath: alarm-settings.yml volumes: - configMap: name: alarm-configmap name: alarm-settings --- # ui apiVersion: v1 kind: Service metadata: labels: app: skywalking-ui name: skywalking-ui namespace: skywalking spec: type: ClusterIP ports: - port: 80 targetPort: 8080 protocol: TCP selector: app: skywalking-ui --- apiVersion: apps/v1 kind: Deployment metadata: name: skywalking-ui namespace: skywalking labels: app: skywalking-ui spec: replicas: 1 selector: matchLabels: app: skywalking-ui template: metadata: labels: app: skywalking-ui spec: affinity: containers: - name: ui image: registry.cn-hangzhou.aliyuncs.com/marksugar/skywalking-ui:9.1.0 # docker pull apache/skywalking-ui:9.0.0 imagePullPolicy: IfNotPresent ports: - containerPort: 8080 name: page env: - name: SW_OAP_ADDRESS value: http://skywalking-oap:12800 --- # job apiVersion: batch/v1 kind: Job metadata: name: "skywalking-es-init" namespace: skywalking labels: app: skywalking-job spec: template: metadata: name: 
"skywalking-es-init" labels: app: skywalking-job spec: serviceAccount: skywalking-oap serviceAccountName: skywalking-oap restartPolicy: Never initContainers: - name: wait-for-elasticsearch image: registry.cn-hangzhou.aliyuncs.com/marksugar/busybox:1.30 imagePullPolicy: IfNotPresent command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 elasticsearch 9200 && exit 0 || sleep 5; done; exit 1'] containers: - name: oap image: registry.cn-hangzhou.aliyuncs.com/marksugar/skywalking-oap-server:9.1.0 imagePullPolicy: IfNotPresent env: - name: JAVA_OPTS value: "-Xmx2g -Xms2g -Dmode=init" - name: SW_STORAGE value: elasticsearch - name: SW_STORAGE_ES_CLUSTER_NODES value: "elasticsearch:9200"而后应用配置PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe apply -f .\9.1.yaml serviceaccount/skywalking-oap created clusterrole.rbac.authorization.k8s.io/skywalking created clusterrolebinding.rbac.authorization.k8s.io/skywalking created service/skywalking-oap created configmap/skywalking-to-nacos created deployment.apps/skywalking-oap created service/skywalking-ui created deployment.apps/skywalking-ui created job.batch/skywalking-es-init created启动完成后如下PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe -n skywalking get pod -w NAME READY STATUS RESTARTS AGE elasticsearch-64c9d98794-ndktz 1/1 Running 0 15m skywalking-es-init-p8j4z 0/1 Completed 0 6m13s skywalking-oap-64b87cf44c-cgb8w 1/1 Running 0 2m26s skywalking-ui-6c6f789f9f-qxxqd 1/1 Running 0 6m13s2.2.B 本地nacos配置#ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: labels: app: skywalking name: skywalking-oap namespace: skywalking --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: skywalking namespace: skywalking labels: app: skywalking rules: - apiGroups: [""] resources: ["pods", "endpoints", "services", "nodes"] verbs: ["get", "watch", "list"] - apiGroups: ["extensions"] resources: ["deployments", "replicasets"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 
kind: ClusterRoleBinding
metadata:
  name: skywalking
  namespace: skywalking
  labels:
    app: skywalking
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: skywalking
subjects:
- kind: ServiceAccount
  name: skywalking-oap
  namespace: skywalking
---
# oap
apiVersion: v1
kind: Service
metadata:
  name: skywalking-oap
  namespace: skywalking
  labels:
    app: skywalking-oap
spec:
  type: ClusterIP
  ports:
  - port: 11800
    name: grpc
  - port: 12800
    name: rest
  selector:
    app: skywalking-oap
    chart: skywalking-4.2.0
---
# connect to nacos
apiVersion: v1
kind: ConfigMap
metadata:
  name: skywalking-to-nacos
  namespace: skywalking
data:
  nacos.name: "nacos"
  nacos.addr: "172.16.15.136"
  nacos.port: "8848"
  nacos.group: "skywalking"
  nacos.namespace: "skywalking"
  nacos.synctime: "60"
  nacos.username: "nacos"
  nacos.password: "nacos"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: skywalking-oap
  name: skywalking-oap
  namespace: skywalking
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skywalking-oap
  template:
    metadata:
      labels:
        app: skywalking-oap
        chart: skywalking-4.2.0
    spec:
      serviceAccount: skywalking-oap
      serviceAccountName: skywalking-oap
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: "skywalking"
                  release: "skywalking"
                  component: "oap"
      initContainers:
      - name: wait-for-elasticsearch
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/busybox:1.30
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 elasticsearch 9200 && exit 0 || sleep 5; done; exit 1']
      containers:
      - name: oap
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/skywalking-oap-server:9.1.0
        # docker pull apache/skywalking-oap-server:8.8.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          tcpSocket:
            port: 12800
          initialDelaySeconds: 15
          periodSeconds: 20
        readinessProbe:
          tcpSocket:
            port: 12800
          initialDelaySeconds: 15
          periodSeconds: 20
        ports:
        - containerPort: 11800
          name: grpc
        - containerPort: 12800
          name: rest
        env:
        - name: JAVA_OPTS
          value: "-Dmode=no-init -Xmx2g -Xms2g"
        - name: SW_CLUSTER
          value: kubernetes
        - name: SW_CLUSTER_K8S_NAMESPACE
          value: "default"
        - name: SW_CLUSTER_K8S_LABEL
          value: "app=skywalking,release=skywalking,component=oap"
        # record data TTL
        - name: SW_CORE_RECORD_DATA_TTL
          value: "2"
        # metrics data TTL
        - name: SW_CORE_METRICS_DATA_TTL
          value: "2"
        - name: SKYWALKING_COLLECTOR_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
        - name: SW_STORAGE
          value: elasticsearch
        - name: SW_STORAGE_ES_CLUSTER_NODES
          value: "elasticsearch:9200"
        # nacos
        - name: SW_CONFIGURATION
          valueFrom:
            configMapKeyRef:
              name: skywalking-to-nacos
              key: nacos.name
        - name: SW_CONFIG_NACOS_SERVER_ADDR
          valueFrom:
            configMapKeyRef:
              name: skywalking-to-nacos
              key: nacos.addr
        - name: SW_CONFIG_NACOS_SERVER_PORT
          valueFrom:
            configMapKeyRef:
              name: skywalking-to-nacos
              key: nacos.port
        - name: SW_CONFIG_NACOS_SERVER_GROUP
          valueFrom:
            configMapKeyRef:
              name: skywalking-to-nacos
              key: nacos.group
        - name: SW_CONFIG_NACOS_SERVER_NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: skywalking-to-nacos
              key: nacos.namespace
        - name: SW_CONFIG_NACOS_PERIOD
          valueFrom:
            configMapKeyRef:
              name: skywalking-to-nacos
              key: nacos.synctime
        - name: SW_CONFIG_NACOS_USERNAME
          valueFrom:
            configMapKeyRef:
              name: skywalking-to-nacos
              key: nacos.username
        - name: SW_CONFIG_NACOS_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: skywalking-to-nacos
              key: nacos.password
        volumeMounts:
        - name: alarm-settings
          mountPath: /skywalking/config/alarm-settings.yml
          subPath: alarm-settings.yml
      volumes:
      - configMap:
          name: alarm-configmap
        name: alarm-settings
---
# ui
apiVersion: v1
kind: Service
metadata:
  labels:
    app: skywalking-ui
  name: skywalking-ui
  namespace: skywalking
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: skywalking-ui
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skywalking-ui
  namespace: skywalking
  labels:
    app: skywalking-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skywalking-ui
  template:
    metadata:
      labels:
        app: skywalking-ui
    spec:
      affinity:
      containers:
      - name: ui
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/skywalking-ui:9.1.0
        # docker pull apache/skywalking-ui:9.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: page
        env:
        - name: SW_OAP_ADDRESS
          value: http://skywalking-oap:12800
---
# job
apiVersion: batch/v1
kind: Job
metadata:
  name: "skywalking-es-init"
  namespace: skywalking
  labels:
    app: skywalking-job
spec:
  template:
    metadata:
      name: "skywalking-es-init"
      labels:
        app: skywalking-job
    spec:
      serviceAccount: skywalking-oap
      serviceAccountName: skywalking-oap
      restartPolicy: Never
      initContainers:
      - name: wait-for-elasticsearch
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/busybox:1.30
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 172.16.15.136 9200 && exit 0 || sleep 5; done; exit 1']
      containers:
      - name: oap
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/skywalking-oap-server:9.1.0
        imagePullPolicy: IfNotPresent
        env:
        - name: JAVA_OPTS
          value: "-Xmx2g -Xms2g -Dmode=init"
        - name: SW_STORAGE
          value: elasticsearch  # storage *type*; the external ES address goes in SW_STORAGE_ES_CLUSTER_NODES
        - name: SW_STORAGE_ES_CLUSTER_NODES
          value: "172.16.15.136:9200"

2.3 Configure an Ingress

The SkyWalking UI ships with no username/password authentication, so we add a simple nginx basic-auth check through the Ingress.

Username: linuxea
Password: OpSOQKs,qDJ1dSvzs

apiVersion: v1
data:
  auth: bGludXhlYTokYXByMSRidG1naTc0cyRKRUtJcThkVEUzT0k4bzVhMXFRdnEwCg==
kind: Secret
metadata:
  name: basic-auth
  namespace: skywalking
type: Opaque

The Ingress looks like this:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: skywalking-ui
  namespace: skywalking
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - input: Trump "
spec:
  ingressClassName: nginx
  rules:
  - host: skywalking.linuxea.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: skywalking-ui
            port:
              number: 80

Create both of them:

PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe apply -f .\secret.yaml
secret/basic-auth created
PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe apply -f .\ingress.yaml
ingress.networking.k8s.io/skywalking-ui created

Check that they were created:

PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe -n skywalking get ingress
NAME            CLASS   HOSTS                    ADDRESS                       PORTS   AGE
skywalking-ui   nginx   skywalking.linuxea.com   172.16.100.11,172.16.100.43   80      22s

Add an entry to the local hosts file, then open the site:

172.16.100.11 nacos.linuxea.com skywalking.linuxea.com

Enter the username: linuxea
Enter the password: OpSOQKs,qDJ1dSvzs

2.3 Log output

Check the OAP logs. If nothing has been configured in Nacos yet after applying, the log lists every dynamic config item; see the official documentation for more details:

key:core.default.log4j-xml  module:core  provider:default  value(current):null
key:agent-analyzer.default.uninstrumentedGateways  module:agent-analyzer  provider:default  value(current):null
key:configuration-discovery.default.agentConfigurations  module:configuration-discovery  provider:default  value(current):null
key:agent-analyzer.default.traceSamplingPolicy  module:agent-analyzer  provider:default  value(current):null
key:core.default.endpoint-name-grouping  module:core  provider:default  value(current):SkyWalking endpoint rule
key:core.default.apdexThreshold  module:core  provider:default  value(current):null
key:agent-analyzer.default.slowDBAccessThreshold  module:agent-analyzer  provider:default  value(current):null
key:alarm.default.alarm-settings  module:alarm  provider:default  value(current):null

Nothing is configured, so everything is null:

Following dynamic config items are available.
---------------------------------------------
key:core.default.log4j-xml  module:core  provider:default  value(current):null
key:agent-analyzer.default.uninstrumentedGateways  module:agent-analyzer  provider:default  value(current):null
key:configuration-discovery.default.agentConfigurations  module:configuration-discovery  provider:default  value(current):null
key:agent-analyzer.default.traceSamplingPolicy  module:agent-analyzer  provider:default  value(current):null
key:core.default.endpoint-name-grouping  module:core  provider:default  value(current):SkyWalking endpoint rule
key:core.default.apdexThreshold  module:core  provider:default  value(current):null
key:agent-analyzer.default.slowDBAccessThreshold  module:agent-analyzer  provider:default  value(current):null
key:alarm.default.alarm-settings  module:alarm  provider:default  value(current):null

2022-06-26 16:18:27,691 com.linecorp.armeria.common.util.SystemInfo 237 [main] INFO [] - hostname: skywalking-oap-64b87cf44c-d5l6n (from /proc/sys/kernel/hostname)
2022-06-26 16:18:27,692 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: core.default.log4j-xml: null
2022-06-26 16:18:27,694 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: agent-analyzer.default.uninstrumentedGateways: null
2022-06-26 16:18:27,695 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: configuration-discovery.default.agentConfigurations: null
2022-06-26 16:18:27,699 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: agent-analyzer.default.traceSamplingPolicy: null
2022-06-26 16:18:27,701 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: core.default.endpoint-name-grouping: null
2022-06-26 16:18:27,703 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: core.default.apdexThreshold: null
2022-06-26 16:18:27,704 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: agent-analyzer.default.slowDBAccessThreshold: null
2022-06-26 16:18:27,705 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: alarm.default.alarm-settings: null
2022-06-26 16:18:28,006 com.linecorp.armeria.server.Server 755 [armeria-boss-http-*:12800] INFO [] - Serving HTTP at /0.0.0.0:12800 - http://127.0.0.1:12800/
2022-06-26 16:18:28,007 org.apache.skywalking.oap.server.core.storage.PersistenceTimer 58 [main] INFO [] - persistence timer start
2022-06-26 16:18:28,008 org.apache.skywalking.oap.server.core.cache.CacheUpdateTimer 46 [main] INFO [] - Cache updateServiceInventory timer start
2022-06-26 16:18:28,369 org.apache.skywalking.oap.server.starter.OAPServerBootstrap 53 [main] INFO [] - Version of OAP: 9.1.0-f1f519c

2.4 Managing configuration in Nacos

Let's configure an example: alarms.

In the Nacos UI: Namespaces -> Create namespace -> create a namespace named skywalking.
In the Nacos UI: Configurations -> select the skywalking namespace -> "+". In other words, the configuration is created inside the skywalking namespace.
In the Nacos UI: Configurations -> "+" -> fill in the Data ID and Group.

Back in the log output above, configuration is addressed as key:value pairs. The key here is:

key:alarm.default.alarm-settings

The configuration content is as follows:

rules:
  # Rule unique name, must be ended with `_rule`.
  service_resp_time_rule:              # rule name ending in _rule, must be unique
    metrics-name: service_resp_time    # metric name
    op: ">"                            # operator
    threshold: 1000                    # threshold
    period: 5                          # period
    count: 3                           # count
    silence-period: 15                 # silence period
    message: Response time of service {name} is more than 1000ms in 3 minutes of last 10 minutes.
  service_sla_rule:
    # Metrics value need to be long, double or int
    metrics-name: service_sla
    op: "<"
    threshold: 8000
    # The length of time to evaluate the metrics
    period: 10
    # How many times after the metrics match the condition, will trigger alarm
    count: 2
    # How many times of checks, the alarm keeps silence after alarm triggered, default as same as period.
    silence-period: 3
    message: Successful rate of service {name} is lower than 80% in 2 minutes of last 10 minutes
  service_resp_time_percentile_rule:
    # Metrics value need to be long, double or int
    metrics-name: service_percentile
    op: ">"
    threshold: 1000,1000,1000,1000,1000
    period: 10
    count: 3
    silence-period: 5
    message: Percentile response time of service {name} alarm in 3 minutes of last 10 minutes, due to more than one condition of p50 > 1000, p75 > 1000, p90 > 1000, p95 > 1000, p99 > 1000
  service_instance_resp_time_rule:
    metrics-name: service_instance_resp_time
    op: ">"
    threshold: 1000
    period: 10
    count: 2
    silence-period: 5
    message: Response time of service instance {name} is more than 1000ms in 2 minutes of last 10 minutes
  database_access_resp_time_rule:
    metrics-name: database_access_resp_time
    threshold: 1000
    op: ">"
    period: 10
    count: 2
    message: Response time of database access {name} is more than 1000ms in 2 minutes of last 10 minutes
  endpoint_relation_resp_time_rule:
    metrics-name: endpoint_relation_resp_time
    threshold: 1000
    op: ">"
    period: 10
    count: 2
    message: Response time of endpoint relation {name} is more than 1000ms in 2 minutes of last 10 minutes
dingtalkHooks:
  textTemplate: |-
    {
      "msgtype": "text",
      "text": {
        "content": "Apache SkyWalking Alarm: \n %s."
      }
    }
  webhooks:
  - url: https://oapi.dingtalk.com/robot/send?access_token=322a01560303e2e96e8e1261d491ffc918c686bbfd2f7e846
    secret: SEC71126603dfcf6594e96ffad825ac3e32d2a3fde0e643bbd80a7a473208fc5706

Paste this into Nacos and click Publish.

Once it is created, the alarm.default.alarm-settings configuration shows up in Nacos' configuration list under the skywalking namespace.

Then go back to the OAP pod and check the logs:

kubectl -n skywalking logs -f skywalking-oap-74b59b897c-2slvn

2022-06-26 16:41:20,965 com.linecorp.armeria.common.util.SystemInfo 237 [main] INFO [] - hostname: skywalking-oap-6f58cbc9d8-hmfw9 (from /proc/sys/kernel/hostname)
2022-06-26 16:41:20,978 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: core.default.log4j-xml: null
2022-06-26 16:41:20,987 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: agent-analyzer.default.uninstrumentedGateways: null
2022-06-26 16:41:20,995 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: configuration-discovery.default.agentConfigurations: null
2022-06-26 16:41:20,998 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: agent-analyzer.default.traceSamplingPolicy: null
2022-06-26 16:41:21,005 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: core.default.endpoint-name-grouping: null
2022-06-26 16:41:21,008 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: core.default.apdexThreshold: null
2022-06-26 16:41:21,011 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: agent-analyzer.default.slowDBAccessThreshold: null
2022-06-26 16:41:21,034 org.apache.skywalking.oap.server.configuration.nacos.NacosConfigWatcherRegister 168 [pool-7-thread-1] INFO [] - Nacos config changed: alarm.default.alarm-settings: rules: ... (the full alarm-settings YAML shown above, echoed back)

Whenever the configuration changes, you will see the change reflected in the logs.

To test, use a threshold that keeps triggering: anything over 1 millisecond fires an alarm.

rules:
  # Rule unique name, must be ended with `_rule`.
  service_resp_time_rule:
    metrics-name: service_resp_time
    op: ">"
    threshold: 1
    period: 1
    count: 1
    silence-period: 2
    #message: 服务:{name}\n 指标:响应时间\n 详情:至少1次超过1毫秒(最近2分钟内)
    message: 服务:{name}的响应时间至少1次超过1毫秒(最近2分钟内)
dingtalkHooks:
  textTemplate: |-
    {
      "msgtype": "text",
      "text": {
        "content": "Apache SkyWalking Alarm: \n%s."
      }
    }
  webhooks:
  - url: https://oapi.dingtalk.com/robot/send?access_token=18d15996fa24e3eabe8
    secret: SEC65d0a92e985fa0dc5df11ec88d89763a178102

Nacos and SkyWalking are now configured.

References: the observation-statement reference, reference 2, reference 3.

To figure out exactly which OAL to configure, look at the files under /skywalking/config/oal — these are what the OAL runner loads at runtime. Pay particular attention to the aggregation functions (aggregation-function).

OAL references:
https://github.com/apache/skywalking/blob/master/docs/en/concepts-and-designs/oal.md
https://github.com/apache/skywalking/blob/master/docs/en/guides/backend-oal-scripts.md
https://skywalking.apache.org/docs/skywalking-java/latest/en/setup/service-agent/java-agent/java-plugin-development-guide/#extension-logic-endpoint-tag-key-x-le
https://www.cnblogs.com/heihaozi/p/14958368.html
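A supplementary note on the alarm rules above: threshold, period, count and silence-period interact as a sliding window over per-minute metric samples. The toy model below sketches those documented semantics — it is an illustrative simplification, not SkyWalking's actual alarm-core implementation:

```python
# Toy model of SkyWalking alarm-rule semantics (period / count / silence-period).
# Illustrative only -- NOT the real org.apache.skywalking alarm core.
import operator

OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge, "<=": operator.le}

def evaluate(values, op=">", threshold=1000, period=5, count=3, silence_period=15):
    """values: one metric sample per minute; returns the minutes at which an alarm fires."""
    match = OPS[op]
    alarms, silent_until = [], -1
    for minute in range(len(values)):
        # look back over the last `period` minutes (the evaluation window)
        window = values[max(0, minute - period + 1): minute + 1]
        hits = sum(1 for v in window if match(v, threshold))
        # fire when the window holds >= `count` matches, unless still silenced
        if hits >= count and minute > silent_until:
            alarms.append(minute)
            silent_until = minute + silence_period  # suppress repeats after firing
    return alarms
```

With the first rule's settings (op ">", threshold 1000, period 5, count 3, silence-period 15), a service that is consistently slow fires once at minute 2 and then stays silent; with the test rule's settings (period 1, count 1, silence-period 2) it fires every third minute.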
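One more note on the `secret: SEC...` field in the webhooks above: it corresponds to DingTalk's "sign" (加签) security setting for custom robots. The sender appends `&timestamp=...&sign=...` to the webhook URL, where the signature is HMAC-SHA256 over `"{timestamp}\n{secret}"`, base64-encoded and then URL-encoded. The sketch below follows DingTalk's documented algorithm; it is a standalone illustration, not SkyWalking's hook source:

```python
# Sketch of DingTalk's documented webhook signing (the "secret"/加签 setting).
import base64
import hashlib
import hmac
import time
import urllib.parse

def dingtalk_sign(secret: str, timestamp_ms: int) -> str:
    # sign = urlencode(base64(HMAC-SHA256(secret, "{timestamp}\n{secret}")))
    string_to_sign = f"{timestamp_ms}\n{secret}"
    digest = hmac.new(secret.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return urllib.parse.quote_plus(base64.b64encode(digest))

def signed_url(webhook: str, secret: str) -> str:
    ts = int(time.time() * 1000)  # millisecond timestamp, per DingTalk docs
    return f"{webhook}&timestamp={ts}&sign={dingtalk_sign(secret, ts)}"
```

DingTalk rejects requests whose timestamp is too old or whose signature does not match, which is why a leaked access_token alone is not enough to spam a signed robot.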
2022年06月30日
2022-06-30
linuxea:skywalking9.1基于nacos的动态告警配置一(4)
In the previous post we built the necessary components and packaged the Docker images. That is not enough, though — we also want distributed tracing, i.e. SkyWalking. I deployed SkyWalking 9.0 before; this time we will install and configure the latest 9.1.0.

What is different this time: SkyWalking supports dynamic configuration, so we will use it to manage alarms. See the official documentation, #dynamic-configuration. Dynamic configuration inevitably brings up Nacos, which is widely adopted as a platform for dynamic service discovery, configuration management and service management.

nacos

To run Nacos we need a backing database. It can live inside Kubernetes or on a VM; I will deploy MySQL on a VM.

1. Prepare the external MySQL

First install docker and docker-compose, since we will orchestrate with docker-compose.

Prepare the yaml file:

version: '3.3'
services:
  nacos-mysql:
    container_name: nacos-mysql
    image: registry.cn-hangzhou.aliyuncs.com/marksugar/mysql:8.0.29-debian
    # docker pull mysql:8.0.29-debian
    # docker pull nacos/nacos-mysql:5.7
    # network_mode: host
    restart: always
    # backup:  docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql
    # restore: docker exec -i some-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /some/path/on/your/host/all-databases.sql
    environment:
      - MYSQL_ROOT_PASSWORD=PASSWORDABCD
      - MYSQL_DATABASE=nacos_fat
      - MYSQL_USER=nacos
      - MYSQL_PASSWORD=PASSWORDABCD
      #- MYSQL_INITDB_SKIP_TZINFO=
    volumes:
      - /etc/localtime:/etc/localtime:ro   # timezone
      - /data/mysql/nacos/data:/var/lib/mysql
      - /data/mysql/nacos/file:/var/lib/mysql-files
      - ./my.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
    ports:
      - 3306:3307

Prepare my.cnf:

# nacos sql init
# /docker-entrypoint-initdb.d
# /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
port=3307
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
datadir         = /var/lib/mysql
#log-error      = /var/log/mysql/error.log
# By default we only accept connections from localhost
#bind-address   = 127.0.0.1
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

Start it:

docker-compose -f docker-compose.yaml up -d

Then import nacos.sql:

/*
 * Copyright 1999-2018 Alibaba Group Holding Ltd.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_info   */
/******************************************/
CREATE TABLE `config_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(255) DEFAULT NULL,
  `content` longtext NOT NULL COMMENT 'content',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  `app_name` varchar(128) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  `c_desc` varchar(256) DEFAULT NULL,
  `c_use` varchar(64) DEFAULT NULL,
  `effect` varchar(64) DEFAULT NULL,
  `type` varchar(64) DEFAULT NULL,
  `c_schema` text,
  `encrypted_data_key` text NOT NULL COMMENT '秘钥',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfo_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_info_aggr   */
/******************************************/
CREATE TABLE `config_info_aggr` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(255) NOT NULL COMMENT 'group_id',
  `datum_id` varchar(255) NOT NULL COMMENT 'datum_id',
  `content` longtext NOT NULL COMMENT '内容',
  `gmt_modified` datetime NOT NULL COMMENT '修改时间',
  `app_name` varchar(128) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfoaggr_datagrouptenantdatum` (`data_id`,`group_id`,`tenant_id`,`datum_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='增加租户字段';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_info_beta   */
/******************************************/
CREATE TABLE `config_info_beta` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL COMMENT 'content',
  `beta_ips` varchar(1024) DEFAULT NULL COMMENT 'betaIps',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  `encrypted_data_key` text NOT NULL COMMENT '秘钥',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfobeta_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_beta';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_info_tag   */
/******************************************/
CREATE TABLE `config_info_tag` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
  `tag_id` varchar(128) NOT NULL COMMENT 'tag_id',
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL COMMENT 'content',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfotag_datagrouptenanttag` (`data_id`,`group_id`,`tenant_id`,`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_tag';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_tags_relation   */
/******************************************/
CREATE TABLE `config_tags_relation` (
  `id` bigint(20) NOT NULL COMMENT 'id',
  `tag_name` varchar(128) NOT NULL COMMENT 'tag_name',
  `tag_type` varchar(64) DEFAULT NULL COMMENT 'tag_type',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
  `nid` bigint(20) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`nid`),
  UNIQUE KEY `uk_configtagrelation_configidtag` (`id`,`tag_name`,`tag_type`),
  KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_tag_relation';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = group_capacity   */
/******************************************/
CREATE TABLE `group_capacity` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID',
  `group_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Group ID,空字符表示整个集群',
  `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
  `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
  `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
  `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数,,0表示使用默认值',
  `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
  `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_group_id` (`group_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='集群、各Group容量信息表';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = his_config_info   */
/******************************************/
CREATE TABLE `his_config_info` (
  `id` bigint(64) unsigned NOT NULL,
  `nid` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `data_id` varchar(255) NOT NULL,
  `group_id` varchar(128) NOT NULL,
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL,
  `md5` varchar(32) DEFAULT NULL,
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `src_user` text,
  `src_ip` varchar(50) DEFAULT NULL,
  `op_type` char(10) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  `encrypted_data_key` text NOT NULL COMMENT '秘钥',
  PRIMARY KEY (`nid`),
  KEY `idx_gmt_create` (`gmt_create`),
  KEY `idx_gmt_modified` (`gmt_modified`),
  KEY `idx_did` (`data_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='多租户改造';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = tenant_capacity   */
/******************************************/
CREATE TABLE `tenant_capacity` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID',
  `tenant_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID',
  `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
  `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
  `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
  `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数',
  `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
  `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='租户容量信息表';

CREATE TABLE `tenant_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `kp` varchar(128) NOT NULL COMMENT 'kp',
  `tenant_id` varchar(128) default '' COMMENT 'tenant_id',
  `tenant_name` varchar(128) default '' COMMENT 'tenant_name',
  `tenant_desc` varchar(256) DEFAULT NULL COMMENT 'tenant_desc',
  `create_source` varchar(32) DEFAULT NULL COMMENT 'create_source',
  `gmt_create` bigint(20) NOT NULL COMMENT '创建时间',
  `gmt_modified` bigint(20) NOT NULL COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_tenant_info_kptenantid` (`kp`,`tenant_id`),
  KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant_info';

CREATE TABLE `users` (
  `username` varchar(50) NOT NULL PRIMARY KEY,
  `password` varchar(500) NOT NULL,
  `enabled` boolean NOT NULL
);

CREATE TABLE `roles` (
  `username` varchar(50) NOT NULL,
  `role` varchar(50) NOT NULL,
  UNIQUE INDEX `idx_user_role` (`username` ASC, `role` ASC) USING BTREE
);

CREATE TABLE `permissions` (
  `role` varchar(50) NOT NULL,
  `resource` varchar(255) NOT NULL,
  `action` varchar(8) NOT NULL,
  UNIQUE INDEX `uk_role_permission` (`role`,`resource`,`action`) USING BTREE
);

INSERT INTO users (username, password, enabled) VALUES ('nacos', '$2a$10$EuWPZHzz32dJN7jexM34MOeYirDdFAZm2kuWj7VEOJhhZkDrxfvUu', TRUE);

INSERT INTO roles (username, role) VALUES ('nacos', 'ROLE_ADMIN');

Then import it with the following command:

docker exec -i nacos-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" nacos_fat' < ./nacos.sql

Like so:

[root@Node-172_16_100_54 /data/mysql]# docker exec -i nacos-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" nacos_fat' < ./nacos.sql
mysql: [Warning] Using a password on the command line interface can be insecure.

2. Prepare a PVC for Nacos

We have created MySQL and imported the SQL, but we also want to keep some logs around, so we create a PVC.

Before that, create a dedicated namespace for Nacos:

apiVersion: v1
kind: Namespace
metadata:
  name: nacos

Like so:

PS J:\k8s-1.23.1-latest\nacos> kubectl.exe apply -f .\namespace.yaml
namespace/nacos created
PS J:\k8s-1.23.1-latest\nacos> kubectl.exe get ns
NAME              STATUS   AGE
argocd            Active   2d20h
default           Active   14d
ingress-nginx     Active   4d19h
kube-node-lease   Active   14d
kube-public       Active   14d
kube-system       Active   14d
marksugar         Active   3d19h
monitoring        Active   13d
nacos             Active   12s

The PVC configuration is below; first, the storage-class parameters (from the nfs-subdir-external-provisioner):

Name             | Description                                                                                                        | Default
onDelete         | If present with the value delete, the directory is deleted; with the value retain, it is kept.                     | archived with the share name: archived-<volume.Name>
archiveOnDelete  | If present with the value false, the directory is deleted. Ignored when onDelete is present.                       | archived with the share name: archived-<volume.Name>
pathPattern      | Template for building the directory path from PVC metadata (labels, annotations, name or namespace) via ${.PVC.<metadata>}. Example: for a folder named <pvc-namespace>-<pvc-name>, use ${.PVC.namespace}-${.PVC.name}. | n/a

The yaml is as follows:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-latest
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
  onDelete: delete
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nacos-nfs
  namespace: nacos
spec:
  storageClassName: nfs-latest
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50G

Create it:

PS J:\k8s-1.23.1-latest\nacos> kubectl.exe apply -f .\nfs-pvc.yaml
storageclass.storage.k8s.io/nfs-latest created
persistentvolumeclaim/nacos-nfs created

Note: because pathPattern references ${.PVC.namespace}/${.PVC.name}, the NFS directory layout becomes:

[root@Node-172_16_100_49 /data/nfs-k8s/1.21.1]# ll
total 0
drwxrwxrwx 2 root root 21 Jun 13 00:02 default-test-claim-pvc-d64f6d7d-3be8-407e-bb3f-59efcd481e3d
drwxr-xr-x 3 root root 23 Jun 26 18:09 nacos
[root@Node-172_16_100_49 /data/nfs-k8s/1.21.1]# ls nacos/nacos-nfs/
data  logs  peer-finder

The nacos/nacos-nfs/ layout is much easier to navigate.

Check:

PS J:\k8s-1.23.1-latest\nacos> kubectl.exe -n nacos get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nacos-nfs   Bound    pvc-2e3c7f7b-648a-4b22-b2af-ca0ab26b8e8a   50G        RWX            nfs-latest     4m9s
PS J:\k8s-1.23.1-latest\nacos> kubectl.exe -n nacos get sc
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  13d
nfs-latest   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  4m44s

Go into the database to verify that everything was created:

[root@linuxea-54 /data/mysql]# docker exec -ti nacos-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"'
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.29 MySQL Community Server - GPL

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| nacos_fat          |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

mysql> use nacos_fat
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+----------------------+
| Tables_in_nacos_fat  |
+----------------------+
| config_info          |
| config_info_aggr     |
| config_info_beta     |
| config_info_tag      |
| config_tags_relation |
| group_capacity       |
| his_config_info      |
| permissions          |
| roles                |
| tenant_capacity      |
| tenant_info          |
| users                |
+----------------------+
12 rows in set (0.00 sec)

mysql>

Grant remote access:

GRANT ALL PRIVILEGES ON *.* TO 'nacos'@'%' WITH GRANT OPTION;

Make sure the database is reachable from the cluster. If it is not, it is most likely a network problem; you can switch the compose file to host network mode, like this:

version: '3.3'
services:
  nacos-mysql:
    container_name: nacos-mysql
    image: registry.cn-hangzhou.aliyuncs.com/marksugar/mysql:8.0.29-debian
    # docker pull mysql:8.0.29-debian
# docker pull nacos/nacos-mysql:5.7 network_mode: host restart: always # docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql # docker exec -i some-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /some/path/on/your/host/all-databases.sql environment: - MYSQL_ROOT_PASSWORD=root - MYSQL_ROOT_PASSWORD=PASSWORDABCD - MYSQL_DATABASE=nacos_fat - MYSQL_USER=nacos - MYSQL_PASSWORD=PASSWORDABCD #- MYSQL_INITDB_SKIP_TZINFO= volumes: - /etc/localtime:/etc/localtime:ro # 时区2 - /data/mysql/nacos/data:/var/lib/mysql - /data/mysql/nacos/file:/var/lib/mysql-files - ./my.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf logging: driver: "json-file" options: max-size: "50M" #ports: #- 3307:33073. 安装naocsnacos yaml清单如下,我们需要修改--- apiVersion: v1 kind: Service metadata: name: nacos-headless namespace: skywalking labels: app: nacos # annotations: # service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" spec: ports: - port: 8848 name: server targetPort: 8848 - port: 9848 name: client-rpc targetPort: 9848 - port: 9849 name: raft-rpc targetPort: 9849 ## 兼容1.4.x版本的选举端口 - port: 7848 name: old-raft-rpc targetPort: 7848 clusterIP: None type: ClusterIP selector: app: nacos --- apiVersion: v1 kind: ConfigMap metadata: name: nacos-cm namespace: skywalking data: mysql.host: "172.16.0.158" mysql.db.name: "nacos_fat" mysql.port: "3307" mysql.user: "nacos" mysql.password: "PASSWORDABCD" --- apiVersion: apps/v1 kind: StatefulSet metadata: name: nacos namespace: skywalking spec: serviceName: nacos-headless replicas: 3 template: metadata: labels: app: nacos annotations: pod.alpha.kubernetes.io/initialized: "true" spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: "app" operator: In values: - nacos topologyKey: "kubernetes.io/hostname" # serviceAccountName: nfs-client-provisioner initContainers: - name: peer-finder-plugin-install image: 
nacos/nacos-peer-finder-plugin:1.1 imagePullPolicy: IfNotPresent volumeMounts: - mountPath: /home/nacos/plugins/peer-finder name: data subPath: peer-finder containers: - name: nacos imagePullPolicy: IfNotPresent image: nacos/nacos-server:v2.1.0 resources: requests: memory: "2048Mi" cpu: "500m" ports: - containerPort: 8848 name: client-port - containerPort: 9848 name: client-rpc - containerPort: 9849 name: raft-rpc - containerPort: 7848 name: old-raft-rpc env: - name: MYSQL_SERVICE_DB_PARAM value: characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=Asia/Shanghai - name: NACOS_REPLICAS value: "3" - name: SERVICE_NAME value: "nacos-headless" - name: DOMAIN_NAME value: "cluster.local" - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: MYSQL_SERVICE_DB_NAME valueFrom: configMapKeyRef: name: nacos-cm key: mysql.db.name - name: MYSQL_SERVICE_PORT valueFrom: configMapKeyRef: name: nacos-cm key: mysql.port - name: MYSQL_SERVICE_HOST valueFrom: configMapKeyRef: name: nacos-cm key: mysql.host - name: MYSQL_SERVICE_USER valueFrom: configMapKeyRef: name: nacos-cm key: mysql.user - name: MYSQL_SERVICE_PASSWORD valueFrom: configMapKeyRef: name: nacos-cm key: mysql.password - name: MODE value: "cluster" - name: NACOS_SERVER_PORT value: "8848" - name: PREFER_HOST_MODE value: "hostname" - name: NACOS_APPLICATION_PORT value: "8848" - name: NACOS_SERVERS value: "nacos-0.nacos-headless.skywalking.svc.cluster.local:8848 nacos-1.nacos-headless.skywalking.svc.cluster.local:8848 nacos-2.nacos-headless.skywalking.svc.cluster.local:8848" volumeMounts: - name: data mountPath: /home/nacos/plugins/peer-finder subPath: peer-finder - name: data mountPath: /home/nacos/data subPath: data - name: data mountPath: /home/nacos/logs subPath: logs - mountPath: /etc/localtime name: nacostime volumes: - name: nacostime hostPath: path: /etc/localtime - name: data persistentVolumeClaim: 
claimName: nacos-nfs selector: matchLabels: app: nacos需要修改如下configmap1.账号密码apiVersion: v1 kind: ConfigMap metadata: name: nacos-cm namespace: skywalking data: mysql.host: "172.16.0.158" mysql.db.name: "nacos_fat" mysql.port: "3307" mysql.user: "nacos" mysql.password: "PASSWORDABCD"2.nacos变量 - name: NACOS_SERVERS value: "nacos-0.nacos-headless.skywalking.svc.cluster.local:8848 nacos-1.nacos-headless.skywalking.svc.cluster.local:8848 nacos-2.nacos-headless.skywalking.svc.cluster.local:8848"3.pvc配置 volumes: - name: nacostime hostPath: path: /etc/localtime - name: data persistentVolumeClaim: claimName: nas-nacos最终如下--- apiVersion: v1 kind: Service metadata: name: nacos-headless namespace: nacos labels: app: nacos # annotations: # service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" spec: ports: - port: 8848 name: server targetPort: 8848 - port: 9848 name: client-rpc targetPort: 9848 - port: 9849 name: raft-rpc targetPort: 9849 ## 兼容1.4.x版本的选举端口 - port: 7848 name: old-raft-rpc targetPort: 7848 clusterIP: None type: ClusterIP selector: app: nacos --- apiVersion: v1 kind: ConfigMap metadata: name: nacos-cm namespace: nacos data: mysql.host: "172.16.100.54" mysql.db.name: "nacos_fat" mysql.port: "3306" mysql.user: "nacos" mysql.password: "PASSWORDABCD" --- apiVersion: apps/v1 kind: StatefulSet metadata: name: nacos namespace: nacos spec: serviceName: nacos-headless replicas: 3 template: metadata: labels: app: nacos annotations: pod.alpha.kubernetes.io/initialized: "true" spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: "app" operator: In values: - nacos topologyKey: "kubernetes.io/hostname" initContainers: - name: peer-finder-plugin-install image: nacos/nacos-peer-finder-plugin:1.1 imagePullPolicy: IfNotPresent volumeMounts: - mountPath: /home/nacos/plugins/peer-finder name: data subPath: peer-finder containers: - name: nacos imagePullPolicy: IfNotPresent image: nacos/nacos-server:v2.1.0 
resources: requests: memory: "2048Mi" cpu: "500m" ports: - containerPort: 8848 name: client-port - containerPort: 9848 name: client-rpc - containerPort: 9849 name: raft-rpc - containerPort: 7848 name: old-raft-rpc env: - name: MYSQL_SERVICE_DB_PARAM value: characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=Asia/Shanghai - name: NACOS_REPLICAS value: "3" - name: SERVICE_NAME value: "nacos-headless" - name: DOMAIN_NAME value: "cluster.local" - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: MYSQL_SERVICE_DB_NAME valueFrom: configMapKeyRef: name: nacos-cm key: mysql.db.name - name: MYSQL_SERVICE_PORT valueFrom: configMapKeyRef: name: nacos-cm key: mysql.port - name: MYSQL_SERVICE_HOST valueFrom: configMapKeyRef: name: nacos-cm key: mysql.host - name: MYSQL_SERVICE_USER valueFrom: configMapKeyRef: name: nacos-cm key: mysql.user - name: MYSQL_SERVICE_PASSWORD valueFrom: configMapKeyRef: name: nacos-cm key: mysql.password - name: MODE value: "cluster" - name: NACOS_SERVER_PORT value: "8848" - name: PREFER_HOST_MODE value: "hostname" - name: NACOS_APPLICATION_PORT value: "8848" - name: NACOS_SERVERS value: "nacos-0.nacos-headless.nacos.svc.cluster.local:8848 nacos-1.nacos-headless.nacos.svc.cluster.local:8848 nacos-2.nacos-headless.nacos.svc.cluster.local:8848" volumeMounts: - name: data mountPath: /home/nacos/plugins/peer-finder subPath: peer-finder - name: data mountPath: /home/nacos/data subPath: data - name: data mountPath: /home/nacos/logs subPath: logs - mountPath: /etc/localtime name: nacostime volumes: - name: nacostime hostPath: path: /etc/localtime - name: data persistentVolumeClaim: claimName: nacos-nfs selector: matchLabels: app: nacos应用PS J:\k8s-1.23.1-latest\nacos> kubectl.exe apply -f .\nacos-nfs.yaml service/nacos-headless created configmap/nacos-cm created statefulset.apps/nacos created直到所有的pod runingPS J:\k8s-1.23.1-latest\nacos> 
kubectl.exe -n nacos get pod NAME READY STATUS RESTARTS AGE nacos-0 1/1 Running 0 5m59s nacos-1 1/1 Running 0 4m37s nacos-2 1/1 Running 0 3m31s4. 配置ingress如下apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: nacos-ui labels: app: nacos namespace: nacos spec: ingressClassName: nginx rules: - host: nacos.linuxea.com http: paths: - path: / pathType: Prefix backend: service: name: nacos-headless port: number: 8848创建PS J:\k8s-1.23.1-latest\nacos> kubectl.exe -n nacos get ingress NAME CLASS HOSTS ADDRESS PORTS AGE nacos-ui nginx nacos.linuxea.com 172.16.100.11,172.16.100.43 80 53s配置本地Hosts172.16.100.11 nacos.linuxea.com访问的域名如下http://nacos.linuxea.com/nacos/#/login账号密码:nacos/nacos参考: https://github.com/nacos-group/nacos-k8s/tree/master/deploy/nacosskywalkingskywalking需要一个后端来存储数据,或者MySQL,或者ES,我将在这里使用ES我们仍然需要一个PVC来存储ES的数据,与nacos不同的是,我这里用k8s来运行ES1.安装ES在安装之前,我们需要准备一个PVC1.1 准备ES的PVC复制nacos的配置,如法炮制一个,修改下名称即可apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-skywalking-es provisioner: k8s-sigs.io/nfs-subdir-external-provisioner parameters: archiveOnDelete: "false" pathPattern: "${.PVC.namespace}/${.PVC.name}" onDelete: delete --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: es-data namespace: skywalking spec: storageClassName: nfs-skywalking-es accessModes: - ReadWriteMany resources: requests: storage: 50G我们创建一个名称空间skwayling,而后创建pvcPS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe apply -f .\ns.yaml namespace/skywalking created PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe apply -f .\nfs-to-es.yaml storageclass.storage.k8s.io/nfs-skywalking-es created persistentvolumeclaim/es-data created如下PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe -n skywalking get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE es-data Bound pvc-5baafefd-a2da-48f4-b9d4-815c6f3c2fe3 50G RWX nfs-skywalking-es 29s1.2 安装ES我们仍然要修改一些配置,claimName:的值是我们创建的pvc名称,也就是es-data volumes: - name: oms-skywalking-to-elasticsearch-data 
persistentVolumeClaim: claimName: es-data最终的yaml如下# Source: skywalking/charts/elasticsearch/templates/statefulset.yaml apiVersion: v1 kind: Service metadata: name: elasticsearch namespace: skywalking labels: app: elasticsearch spec: type: ClusterIP ports: - name: elasticsearch port: 9200 protocol: TCP selector: app: elasticsearch --- apiVersion: apps/v1 kind: Deployment metadata: name: elasticsearch namespace: skywalking labels: app: elasticsearch spec: selector: matchLabels: app: elasticsearch replicas: 1 template: metadata: name: elasticsearch labels: app: elasticsearch spec: initContainers: - name: configure-sysctl securityContext: runAsUser: 0 privileged: true image: registry.cn-hangzhou.aliyuncs.com/marksugar/elasticsearch:6.8.6 imagePullPolicy: "IfNotPresent" command: ["/bin/sh"] args: ["-c", "sysctl -w DefaultLimitNOFILE=65536; sysctl -w DefaultLimitMEMLOCK=infinity; sysctl -w DefaultLimitNPROC=32000; sysctl -w vm.max_map_count=262144"] resources: {} containers: - name: "elasticsearch" securityContext: capabilities: drop: - ALL runAsNonRoot: true runAsUser: 1000 image: registry.cn-hangzhou.aliyuncs.com/marksugar/elasticsearch:6.8.6 imagePullPolicy: "IfNotPresent" livenessProbe: failureThreshold: 3 initialDelaySeconds: 30 periodSeconds: 2 successThreshold: 1 tcpSocket: port: 9300 timeoutSeconds: 2 readinessProbe: failureThreshold: 3 initialDelaySeconds: 30 periodSeconds: 2 successThreshold: 2 tcpSocket: port: 9300 timeoutSeconds: 2 ports: - name: http containerPort: 9200 - name: transport containerPort: 9300 resources: limits: cpu: 1000m memory: 2Gi requests: cpu: 100m memory: 2Gi env: - name: node.name valueFrom: fieldRef: fieldPath: metadata.name - name: cluster.name value: "elasticsearch" - name: network.host value: "0.0.0.0" - name: ES_JAVA_OPTS value: "-Xmx1g -Xms1g -Duser.timezone=Asia/Shanghai" - name: discovery.type value: single-node volumeMounts: - mountPath: /usr/share/elasticsearch/data name: oms-skywalking-to-elasticsearch-data restartPolicy: 
Always volumes: - name: oms-skywalking-to-elasticsearch-data persistentVolumeClaim: claimName: es-data应用清单PS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe apply -f .\es.yaml service/elasticsearch created deployment.apps/elasticsearch created使用-w来观察状态,直到runningPS J:\k8s-1.23.1-latest\nacos\skywalking> kubectl.exe -n skywalking get pod -w NAME READY STATUS RESTARTS AGE elasticsearch-64c9d98794-ndktz 0/1 Init:0/1 0 14s elasticsearch-64c9d98794-ndktz 0/1 PodInitializing 0 55s elasticsearch-64c9d98794-ndktz 0/1 Running 0 56s elasticsearch-64c9d98794-ndktz 1/1 Running 0 88snfs上面已经创建的es的数据文件[root@linuxea-49 /data/nfs-k8s/1.21.1]# ls skywalking/es-data/ nodeses安装完成1.3 本地es除此之外,我们可以在vm虚拟机上安装esversion: '3.3' services: elasticsearch: image: registry.cn-hangzhou.aliyuncs.com/marksugar/elasticsearch:6.8.6 container_name: elasticsearch sysctls: net.core.somaxconn: 10240 #DefaultLimitNOFILE: 65536 #DefaultLimitMEMLOCK: infinity #DefaultLimitNPROC: 32000 #vm.max_map_count: 262144 ulimits: memlock: soft: -1 hard: -1 #network_mode: host hostname: elasticsearch restart: always environment: - cluster.name="elasticsearch" # - network.host="0.0.0.0" - discovery.type=single-node # - bootstrap.memory_lock=true - "ES_JAVA_OPTS=-Xms2048m -Xmx4096m -XX:-UseConcMarkSweepGC -XX:-UseCMSInitiatingOccupancyOnly -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=75 -Duser.timezone=Asia/Shanghai" user: root ports: - 9200:9200 - 9300:9300 volumes: - /etc/localtime:/etc/localtime:ro - /data/elasticsearch/data:/usr/share/elasticsearch/data logging: driver: "json-file" options: max-size: "50M" deploy: resources: limits: memory: 6144m reservations: memory: 6144m而后docker-compose up -d即可未完,因篇幅字数问题,见下一章
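上面 StatefulSet 里 `NACOS_SERVERS` 的三个地址并不是随意写的：StatefulSet 的每个 Pod 配合 headless service 会获得稳定的 DNS 名，格式为 `<pod名>.<service名>.<名称空间>.svc.cluster.local`。下面用一个最小的 shell 示意（仅做字符串拼接演示，不依赖集群）说明这三个地址是如何由副本数推导出来的：

```shell
# 演示 NACOS_SERVERS 中各节点地址的构成
# StatefulSet 的 Pod 名为 <statefulset名>-<序号>，序号从 0 开始
SERVICE=nacos-headless
NAMESPACE=nacos
REPLICAS=3
for i in $(seq 0 $((REPLICAS - 1))); do
  echo "nacos-${i}.${SERVICE}.${NAMESPACE}.svc.cluster.local:8848"
done
```

输出的三行正是清单中 `NACOS_SERVERS` 的值。因此如果调整 `replicas` 或名称空间，`NACOS_SERVERS` 必须同步修改，否则节点间无法完成选举。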
2022年06月30日

2022-06-29
linuxea:基于nexus3代码构建和harbor镜像打包(3)
在前两篇中,主要叙述了即将做什么以及基础环境搭建。此时我们需要一个通用的java程序来配置这些东西,但是这其中又需要配置不少的东西,比如nexus3等。因此,本章将围绕nexus3配置,而后通过nexus3配置maven打包。将jar包构建,接着配置harbor,并且将打包的镜像推送到harbor镜像仓库。如下图红色阴影部分内容:阅读此篇,你将了解如下信息:nexus3配置java编译打包与nexus3harbor安装配置和使用基于alpine构建jdk编写Dockerfile技巧和构建和推送镜像我们仅仅使用非Https的harbor仓库,如果要配置https已经helm仓库,阅读habor2.5的helm仓库和镜像仓库使用(5)进行配置即可配置java和node环境变量[root@linuxea-01 local]# tar xf apache-maven-3.8.6-bin.tar.gz -C /usr/local/ [root@linuxea-01 local]# tar xf node-v16.15.1-linux-x64.tar.xz -C /usr/local/ [root@linuxea-01 local]# MAVEN_PATH=/usr/local/apache-maven-3.8.6 [root@linuxea-01 local]# NODE_PATH=/usr/local/node-v16.15.1-linux-x64 [root@linuxea-01 local]# PATH=$PATH:$NODE_PATH/bin:$PATH:$MAVEN_PATH/bin1.修改为阿里源settings.xml修改阿里云源<?xml version="1.0" encoding="UTF-8"?> <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <pluginGroups> </pluginGroups> <proxies> </proxies> <servers> <server> <id>maven-releases</id> <username>admin</username> <password>admin</password> </server> <server> <id>maven-snapshots</id> <username>admin</username> <password>admin</password> </server> </servers> <mirrors> <!-- <mirror> <id>nexus</id> <mirrorOf>local</mirrorOf> <name>nexus</name> <url>http://172.16.15.136:8081/repository/maven-public/</url> </mirror>--> <mirror> <id>alimaven</id> <name>aliyun maven</name> <url>http://maven.aliyun.com/nexus/content/groups/public/</url> <mirrorOf>central</mirrorOf> </mirror> </mirrors> <profiles> </profiles> </settings>可以使用-s指定settings.yaml你也可以指定pom.xml </parent> 之下 <repositories> <repository> <id>alimaven</id> <name>aliyun maven</name> <url>http://maven.aliyun.com/nexus/content/groups/public/</url> </repository> </repositories>2 配置nexus32.1 创建Blob Stores如果剩余10G就报警2.2 创建proxy创建repositories->选择maven2(proxy)http://maven.aliyun.com/nexus/content/groups/public我们着重修改和存储桶如法炮制,继续将下面的都创建maven2-proxy1. 
aliyun http://maven.aliyun.com/nexus/content/groups/public 2. apache_snapshot https://repository.apache.org/content/repositories/snapshots/ 3. apache_release https://repository.apache.org/content/repositories/releases/ 4. atlassian https://maven.atlassian.com/content/repositories/atlassian-public/ 5. central.maven.org http://central.maven.org/maven2/ 6. datanucleus http://www.datanucleus.org/downloads/maven2 7. maven-central (安装后自带,仅需设置Cache有效期即可) https://repo1.maven.org/maven2/ 8. nexus.axiomalaska.com http://nexus.axiomalaska.com/nexus/content/repositories/public 9. oss.sonatype.org https://oss.sonatype.org/content/repositories/snapshots 10.pentaho https://public.nexus.pentaho.org/content/groups/omni/ 11.central http://maven.aliyun.com/nexus/content/repositories/central2.3 创建local在创建一个maven2-local2.4 创建group创建group,将上面所有创建的拉入到当前group2.5 配置xml文件配置settings.xml,修改nexus3地址,如下所示[root@linuxea-01 linuxea]# cat ~/.m2/settings.xml <?xml version="1.0" encoding="UTF-8"?> <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <pluginGroups> </pluginGroups> <proxies> </proxies> <servers> <server> <id>maven-releases</id> <username>admin</username> <password>admin</password> </server> <server> <id>maven-snapshots</id> <username>admin</username> <password>admin</password> </server> <server> <id>alimaven</id> <username>admin</username> <password>admin</password> </server> </servers> <mirrors> <!-- <mirror> <id>nexus</id> <mirrorOf>local</mirrorOf> <name>nexus</name> <url>http://172.16.15.136:8081/repository/maven-public/</url> </mirror>--> <mirror> <id>alimaven</id> <name>aliyun maven</name> <url>http://172.16.15.136:8081/repository/maven2-group/</url> <mirrorOf>central</mirrorOf> </mirror> </mirrors> <profiles> </profiles> </settings>打包测试mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false 
-Dmaven.test.failure.ignore=true -s ~/.m2/settings.xml截图如下s) Downloaded from alimaven: http://172.16.15.136:8081/repository/maven2-group/commons-codec/commons-codec/1.6/commons-codec-1.6.jar (233 kB at 991 kB/s) [INFO] Installing /data/java-helo-word/linuxea/target/hello-world-0.0.6.jar to /root/.m2/repository/com/dt/hello-world/0.0.6/hello-world-0.0.6.jar [INFO] Installing /data/java-helo-word/linuxea/pom.xml to /root/.m2/repository/com/dt/hello-world/0.0.6/hello-world-0.0.6.pom [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 01:04 min [INFO] Finished at: 2022-06-24T17:07:10+08:00 [INFO] ------------------------------------------------------------------------ [root@linuxea-01 linuxea]#3. 配置harbor登录harbor项目后创建一个项目而后可以看到推送命令docker tag SOURCE_IMAGE[:TAG] 172.16.100.54/linuxea/REPOSITORY[:TAG] docker push 172.16.100.54/linuxea/REPOSITORY[:TAG]我们直接把镜像打成172.16.100.54/linuxea/java-demo:TAG即可,而不是要去修改tag,而后直接上传即可4. 打包和构建使用alpine的最大好处就是可以适量的最小化缩减镜像体积。这也是alpine流行的最大因素。由于一直使用的都是jdk8,因此仍然使用jdk8版本,基础镜像仍然使用alpine:3.15,我参考了dockerhub上一个朋友的镜像,重新构建了jdk8u202,整个镜像大小大概在453M左右。可以通过如下地址进行获取docker pull registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u2024.1 构建基础镜像jdl的基础镜像已经构建完成,在本地,仍然按照这里的dockerfile进行构建而后我们创建一个base仓库来存放登录并推送[root@linuxea-48 ~]# docker login harbor.marksugar.com Authenticating with existing credentials... Stored credentials invalid or expired Username (admin): admin Password: WARNING! Your password will be stored unencrypted in /root/.docker/config.json. Configure a credential helper to remove this warning. 
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded [root@linuxea-48 ~]# docker push harbor.marksugar.com/base/jdk:8u202 The push refers to repository [harbor.marksugar.com/base/jdk] 788766eb7d3e: Pushed 8d3ac3489996: Pushed 8u202: digest: sha256:516cd5bd65041d4b00587127417c1a9a3aea970fa533d330f60b07395aa5e5ca size: 7414.2 打包java镜像此前我找了一个java的hello world的包,现在在我的github上可以找到将它拉到本地构建,进行测试[root@linuxea-48 /data]# git clone https://ghproxy.futils.com/https://github.com/marksugar/java-helo-word.git Cloning into 'java-helo-word'... remote: Enumerating objects: 110, done. remote: Total 110 (delta 0), reused 0 (delta 0), pack-reused 110 Receiving objects: 100% (110/110), 28.09 KiB | 0 bytes/s, done.开始打包jar包构建频繁出错,需要解决的是依赖包,可能需要添加nexus3仓库的代理,这些通过搜索引擎解决。一旦构建完成,在target目录下就会有一个jar包[root@linuxea-48 /data/java-helo-word/linuxea]# ll target/hello-world-0.0.6.jar -rw-r--r-- 1 root root 17300624 Jun 26 01:00 target/hello-world-0.0.6.jar而后这个jar可以进行启动的,并监听了一个8086的端口号[root@linuxea-48 /data/java-helo-word/linuxea]# java -jar target/hello-world-0.0.6.jar . 
____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v1.5.10.RELEASE) 2022-06-26 01:05:52.217 INFO 38183 --- [ main] com.dt.info.InfoSiteServiceApplication : Starting InfoSiteServiceApplication v0.0.6 on Node172_16_100_48.marksugar.me with PID 38183 (/data/java-helo-word/linuxea/target/hello-world-0.0.6.jar started by root in /data/java-helo-word/linuxea) 2022-06-26 01:05:52.219 INFO 38183 --- [ main] com.dt.info.InfoSiteServiceApplication : No active profile set, falling back to default profiles: default 2022-06-26 01:05:52.265 INFO 38183 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48533e64: startup date [Sun Jun 26 01:05:52 CST 2022]; root of context hierarchy 2022-06-26 01:05:53.118 INFO 38183 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8086 (http) 2022-06-26 01:05:53.126 INFO 38183 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2022-06-26 01:05:53.129 INFO 38183 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.27 2022-06-26 01:05:53.180 INFO 38183 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2022-06-26 01:05:53.180 INFO 38183 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 916 ms 2022-06-26 01:05:53.256 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/] 2022-06-26 01:05:53.257 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*] 2022-06-26 01:05:53.258 INFO 38183 --- [ost-startStop-1] 
o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*] 2022-06-26 01:05:53.258 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*] 2022-06-26 01:05:53.258 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*] 2022-06-26 01:05:53.283 INFO 38183 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 2022-06-26 01:05:53.287 INFO 38183 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 'getThreadPoolTaskScheduler' 2022-06-26 01:05:53.459 INFO 38183 --- [ main] s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48533e64: startup date [Sun Jun 26 01:05:52 CST 2022]; root of context hierarchy 2022-06-26 01:05:53.502 INFO 38183 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/index]}" onto public java.lang.String com.dt.info.controller.HelloController.hello() 2022-06-26 01:05:53.505 INFO 38183 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse) 2022-06-26 01:05:53.505 INFO 38183 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest) 2022-06-26 01:05:53.529 INFO 38183 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] 
2022-06-26 01:05:53.529 INFO 38183 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] 2022-06-26 01:05:53.551 INFO 38183 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] 2022-06-26 01:05:53.568 INFO 38183 --- [ main] oConfiguration$WelcomePageHandlerMapping : Adding welcome page: class path resource [static/index.html] 2022-06-26 01:05:53.639 INFO 38183 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup 2022-06-26 01:05:53.680 INFO 38183 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8086 (http) 2022-06-26 01:05:53.682 INFO 38183 --- [ main] com.dt.info.InfoSiteServiceApplication : Started InfoSiteServiceApplication in 2.392 seconds (JVM running for 2.647)访问现在jar包和nexus3准备好了4.3 编写Dockerfile当这一切准备妥当,开始编写Dockerfile,我们需要注意以下其他配置配置内存限制资源配置普通用户,并已普通用户启动pod应用程序如下FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202 MAINTAINER www.linuxea.com by mark ENV JAVA_OPTS="\ -server \ -Xms2048m \ -Xmx2048m \ -Xmn512m \ -Xss256k \ -XX:+UseConcMarkSweepGC \ -XX:+UseCMSInitiatingOccupancyOnly \ -XX:CMSInitiatingOccupancyFraction=70 \ -XX:+HeapDumpOnOutOfMemoryError \ -XX:HeapDumpPath=/data/logs" \ MY_USER=linuxea \ MY_USER_ID=316 RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} \ && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} \ && mkdir /data/logs -p COPY target/*.jar /data/ WORKDIR /data USER linuxea CMD java ${JAVA_OPTS} -jar *.jar开始构建我们指定配置文件位置进行创建docker build -t hello-java -f ./Dockerfile .如下[root@linuxea-48 /data/java-helo-word/linuxea]# docker build -t hello-java -f ./Dockerfile . 
Sending build context to Docker daemon 17.5MB Step 1/7 : FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202 ---> 5919494d49c0 Step 2/7 : MAINTAINER www.linuxea.com by mark ---> Running in 51ea254cd0c3 Removing intermediate container 51ea254cd0c3 ---> 109317878a94 Step 3/7 : ENV JAVA_OPTS=" -server -Xms2048m -Xmx2048m -Xmn512m -Xss256k -XX:+UseConcMarkSweepGC -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs" MY_USER=linuxea MY_USER_ID=316 ---> Running in 5745dbc7928b Removing intermediate container 5745dbc7928b ---> a7d40e22389a Step 4/7 : RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} && mkdir /data/logs -p ---> Running in 2e4c34e11b62 Removing intermediate container 2e4c34e11b62 ---> d2fdac4de2fa Step 5/7 : COPY target/*.jar /data/ ---> 5538b183318b Step 6/7 : WORKDIR /data ---> Running in 7d0ac5b1dcc2 Removing intermediate container 7d0ac5b1dcc2 ---> e03a5699e97c Step 7/7 : CMD java ${JAVA_OPTS} jar *.jar ---> Running in 58ff0459e4d7 Removing intermediate container 58ff0459e4d7 ---> d1689a9a179f Successfully built d1689a9a179f Successfully tagged hello-java:latest接着我们run起来[root@linuxea-48 /data/java-helo-word/linuxea]# docker run --rm hello-java . 
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::       (v1.5.10.RELEASE)

2022-06-25 17:26:22.052  INFO 1 --- [ main] com.dt.info.InfoSiteServiceApplication   : Starting InfoSiteServiceApplication v0.0.6 on f18e65565a19 with PID 1 (/data/hello-world-0.0.6.jar started by linuxea in /data)
2022-06-25 17:26:22.054  INFO 1 --- [ main] com.dt.info.InfoSiteServiceApplication   : No active profile set, falling back to default profiles: default
2022-06-25 17:26:22.121  INFO 1 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48533e64: startup date [Sat Jun 25 17:26:22 GMT 2022]; root of context hierarchy
2022-06-25 17:26:23.079  INFO 1 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8086 (http)
2022-06-25 17:26:23.087  INFO 1 --- [ main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2022-06-25 17:26:23.089  INFO 1 --- [ main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/8.5.27
2022-06-25 17:26:23.148  INFO 1 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2022-06-25 17:26:23.149  INFO 1 --- [ost-startStop-1] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 1028 ms
2022-06-25 17:26:23.236  INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean  : Mapping servlet: 'dispatcherServlet' to [/]
2022-06-25 17:26:23.240  INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'characterEncodingFilter' to: [/*]
2022-06-25 17:26:23.240  INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2022-06-25 17:26:23.240  INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2022-06-25 17:26:23.240  INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'requestContextFilter' to: [/*]
2022-06-25 17:26:23.273  INFO 1 --- [ main] o.s.s.c.ThreadPoolTaskScheduler          : Initializing ExecutorService
2022-06-25 17:26:23.279  INFO 1 --- [ main] o.s.s.c.ThreadPoolTaskScheduler          : Initializing ExecutorService 'getThreadPoolTaskScheduler'
2022-06-25 17:26:23.459  INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48533e64: startup date [Sat Jun 25 17:26:22 GMT 2022]; root of context hierarchy
2022-06-25 17:26:23.508  INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/index]}" onto public java.lang.String com.dt.info.controller.HelloController.hello()
2022-06-25 17:26:23.511  INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)
2022-06-25 17:26:23.511  INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
2022-06-25 17:26:23.534  INFO 1 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2022-06-25 17:26:23.534  INFO 1 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2022-06-25 17:26:23.559  INFO 1 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2022-06-25 17:26:23.654  INFO 1 --- [ main] oConfiguration$WelcomePageHandlerMapping : Adding welcome page: class path resource [static/index.html]
2022-06-25 17:26:23.786  INFO 1 --- [ main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2022-06-25 17:26:23.841  INFO 1 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8086 (http)
2022-06-25 17:26:23.845  INFO 1 --- [ main] com.dt.info.InfoSiteServiceApplication   : Started InfoSiteServiceApplication in 2.076 seconds (JVM running for 2.329)

Entering the container, you can see that the current user is linuxea:

bash-5.1$ ps aux
PID   USER     TIME  COMMAND
    1 linuxea   0:00 bash
   15 linuxea   0:00 ps aux

4.4 Push to the registry

Push the built image to the registry for later use. Log in to Harbor and create a project to hold it.

Retag the image and push:

docker tag hello-java harbor.marksugar.com/linuxea/hello-world:latest
docker push harbor.marksugar.com/linuxea/hello-world:latest

Output:

[root@linuxea-48 /data/java-helo-word/linuxea]# docker tag hello-java harbor.marksugar.com/linuxea/hello-world:latest
[root@linuxea-48 /data/java-helo-word/linuxea]# docker push harbor.marksugar.com/linuxea/hello-world:latest
The push refers to repository [harbor.marksugar.com/linuxea/hello-world]
9435dbe70451: Pushed
8c3c8b0adf90: Pushed
788766eb7d3e: Mounted from base/jdk
8d3ac3489996: Mounted from base/jdk
latest: digest: sha256:2248bf99e35cf864d521441d8d2efc9aedbed56c24625e4f60e93df5e8fc65c3 size: 1161

At this point the Harbor registry holds the finished image — in other words, an artifact.
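The retag-and-push pair above lends itself to a tiny helper once several images follow the same `registry/project/app:tag` layout. A minimal sketch — `image_ref` and its parameters are illustrative names, not part of the article's pipeline:

```shell
#!/bin/sh
# Sketch: build a full registry reference from its parts, then emit the
# docker tag/push commands. All names here are illustrative.
image_ref() {
    # $1 registry, $2 project, $3 app, $4 tag (defaults to "latest")
    printf '%s/%s/%s:%s\n' "$1" "$2" "$3" "${4:-latest}"
}

ref=$(image_ref harbor.marksugar.com linuxea hello-world latest)
echo "docker tag hello-java ${ref}"
echo "docker push ${ref}"
# → docker tag hello-java harbor.marksugar.com/linuxea/hello-world:latest
# → docker push harbor.marksugar.com/linuxea/hello-world:latest
```

Echoing instead of executing makes the helper safe to dry-run; drop the `echo`s once the reference looks right.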
June 29, 2022
2022-06-27
linuxea: quickly setting up the gitops continuous-integration components
I suppose I am exaggerating somewhat, because I call the embryonic continuous-integration setup described by these few ragged paragraphs "gitops". I can't help feeling embarrassed: at best this is an integration of CI components, far from gitops, let alone devops — whatever that even is. At some point I grew tired of people who talk idly about devops and describe it carelessly; worse still are those who build a single pipeline and then claim to be doing devops work. I don't want to be counted among them; shallow as it may be, I believe only ignorance boasts so freely. So, to draw a clear line between myself and this so-called devops, and to keep my distance from it, I spent some spare time reworking the implementation and delivery documents of a few small projects into what might be called git-based continuous integration and continuous delivery. Obviously GitLab comes into play: it manages the Jenkins shared library and the Kubernetes YAML manifests. This is, of course, only an embryo. And if my descriptions make you uncomfortable, pretend I said nothing. Good — let's begin.

In some situations we want to stand up a project quickly, with a complete CI pipeline inside it. At minimum we need components such as Jenkins, GitLab, SonarQube, Harbor, Nexus3, and a Kubernetes cluster. The goal is to deliver an embryonic CI/CD setup that can cope with ever-changing builds and releases. The topology is as follows.

This article briefly shows how to deploy these required components quickly with Docker.

Prerequisites: install docker and docker-compose.

Offline docker install. If you have prepared an offline package, you can install from the local RPMs (CentOS 7.9):

cd docker/docker-rpm
yum localinstall * -y

Offline docker-compose install. Download at least a reasonably recent release to meet Harbor's requirements; generally any recent one is enough:

cd docker/docker-compose
cp docker-compose-Linux-x86_64 /usr/local/sbin/docker-compose
chmod +x /usr/local/sbin/docker-compose

Verify:

docker version
docker-compose -v

Online install:

yum install epel* -y
yum install docker-ce docker-compose -y

jenkins

If you have an old local archive, just extract it:

tar xf jenkins.tar.gz -C /data/
chown -R 1000:1000 /data/jenkins
cd /data/jenkins
docker-compose -f jenkins.yaml up -d

Or install a fresh one:

version: '3.5'
services:
  jenkins:
    image: registry.cn-hangzhou.aliyuncs.com/marksugar/jenkins:2.332-3-alpine-ansible-maven3-nodev16.15-latest
    container_name: jenkins
    restart: always
    network_mode: host
    environment:
      - JAVA_OPTS=-Duser.timezone=Asia/Shanghai # timezone (1)
    volumes:
      - /etc/localtime:/etc/localtime:ro # timezone (2)
      - /data/jenkins-latest/jenkins_home:/var/jenkins_home # chown 1000:1000 -R jenkins_home
      - /data/jenkins-latest/ansiblefile:/etc/ansible
      - /data/jenkins-latest/local_repo:/data/jenkins-latest/local_repo
      - /data/jenkins-latest/package:/usr/local/package
      #- /data/jenkins-latest/package/node-v14.17.6-linux-x64/bin/node:/sbin/node
      #- /data/jenkins-latest/package/node-v14.17.6-linux-x64/bin/npm:/sbin/npm
      #- /data/jenkins-latest/latest_war_package/jenkins.war:/usr/share/jenkins/jenkins.war # mount a newer jenkins.war
    # ports:
    #   - 58080:58080
    user: root
    logging:
      driver: "json-file"
      options:
        max-size: "1G"
    deploy:
      resources:
        limits:
          memory: 30720m
        reservations:
          memory: 30720m

Check the initial admin password:

[root@linuxea.com data]# cat /data/jenkins-latest/jenkins_home/secrets/initialAdminPassword
c3e5dd22ea5e4adab28d001a560302bc

If the first start hangs on the update center, change the mirror:

# cat /data/jenkins-latest/jenkins_home/hudson.model.UpdateCenter.xml
<?xml version='1.1' encoding='UTF-8'?>
<sites>
  <site>
    <id>default</id>
    <url>https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json</url>
  </site>
</sites>

Skip the setup wizard and install no plugins (choose "none"). If you did not change the plugin source above, go to Manage Jenkins -> Plugin Manager -> Advanced and set the Update Site at the bottom to:

https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json

Jenkins plugins that must be installed:

1. Credentials: credentials management
   Localization: Chinese (Simplified) — Chinese language pack
2. AnsiColor: colored console output ("echo -en \\033[1;32m")
3. Rebuilder: repeat the last build
4. build user vars: build-user variables, including:
   Full name: full name
   BUILD_USER_FIRST_NAME: first name
   BUILD_USER_LAST_NAME: last name
   BUILD_USER_ID: Jenkins user ID
   BUILD_USER_EMAIL: user email
5. Workspace Cleanup: clean up the workspace
6. Role-based Authorization Strategy: user roles
7. Git Plugin
8. Gogs
9. GitLab
10. Generic Webhook Trigger
11. Pipeline
12. Pipeline: Groovy
13. JUnit Attachments
14. Performance
15. Html Publisher
16. Gitlab Authentication
17. JIRA
18. LDAP
19. Parameterized Trigger

sonarqube

docker pull sonarqube:8.9.8-community

version: '3.3'
services:
  sonarqube:
    container_name: sonarqube
    image: registry.cn-hangzhou.aliyuncs.com/marksugar/sonarqube:8.9.8-community
    restart: always
    hostname: 172.16.100.47
    environment:
      - stop-timeout: 3600
      - "ES_JAVA_OPTS=-Xms16384m -Xmx16384m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
    deploy:
      resources:
        limits:
          memory: 16384m
        reservations:
          memory: 16384m
    ports:
      - '9000:9000'
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/sonarqube/conf:/opt/sonarqube/conf
      - /data/sonarqube/extensions:/opt/sonarqube/extensions
      - /data/sonarqube/logs:/opt/sonarqube/logs
      - /data/sonarqube/data:/opt/sonarqube/data

harbor

tar xf harbor-offline-installer-v2.5.1.tgz
cd harbor
cp harbor.yml.tmpl harbor.yml
NodeIp=`ip a s ${NETWORK_DEVIDE:-eth0}|awk '/inet/{print $2}'|sed -r 's/\/[0-9]{1,}//'`
sed -i "s/hostname: reg.mydomain.com/hostname: ${NodeIp}/g" harbor.yml
sed -i "s@https:@#https:@g" harbor.yml
sed -i "s@port: 443@#port: 443@g" harbor.yml
sed -i "s@certificate: /your/certificate/path@#certificate: /your/certificate/path@g" harbor.yml
sed -i "s@private_key: /your/private/key/path@#private_key: /your/private/key/path@g" harbor.yml
bash install.sh

Default password: Harbor12345

nexus

mkdir -p /data/nexus/data && chown -R 200:200 /data/nexus/data

yaml:

version: '3.3'
services:
  nexus3:
    image: sonatype/nexus3:3.39.0
    container_name: nexus3
    network_mode: host
    restart: always
    environment:
      - INSTALL4J_ADD_VM_PARAMS=-Xms8192m -Xmx8192m -XX:MaxDirectMemorySize=8192m -Djava.util.prefs.userRoot=/nexus-data
    # - NEXUS_CONTEXT=/
    # ports:
    #   - 8081:8081
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /data/nexus/data:/nexus-data
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
    deploy:
      resources:
        limits:
          memory: 8192m
        reservations:
          memory: 8192m

gitlab

version: '3'
services:
  gitlab-ce:
    container_name: gitlab-ce
    image: gitlab/gitlab-ce:15.0.3-ce.0
    restart: always
    # network_mode: host
    hostname: 192.168.100.22
    environment:
      TZ: 'Asia/Shanghai'
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://192.168.100.22'
        gitlab_rails['time_zone'] = 'Asia/Shanghai'
        gitlab_rails['gitlab_shell_ssh_port'] = 23857
        # unicorn['port'] = 8888
        # nginx['listen_port'] = 80
    ports:
      - '80:80'
      - '443:443'
      - '23857:22'
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/gitlab/config:/etc/gitlab
      - /data/gitlab/logs:/var/log/gitlab
      - /data/gitlab/data:/var/opt/gitlab
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
    deploy:
      resources:
        limits:
          memory: 13312m
        reservations:
          memory: 13312m

After gitlab-ce has finished starting, retrieve the initial login password with:

docker exec -it gitlab-ce grep 'Password:' /etc/gitlab/initial_root_password
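The sed edits in the Harbor section above can be dry-run against a stub harbor.yml before touching the real template. A minimal sketch — the stub reproduces only the handful of template lines those seds target, and the IP is a hard-coded placeholder rather than being derived from `ip a`:

```shell
#!/bin/sh
# Sketch: apply the Harbor section's sed edits to a stub harbor.yml and show
# the result. The stub holds only the lines the seds touch; the real template
# contains far more keys.
cat > /tmp/harbor.yml <<'EOF'
hostname: reg.mydomain.com
https:
  port: 443
  certificate: /your/certificate/path
  private_key: /your/private/key/path
EOF

NodeIp=192.168.100.22   # placeholder; the article derives this from the NIC
sed -i "s/hostname: reg.mydomain.com/hostname: ${NodeIp}/g" /tmp/harbor.yml
sed -i "s@https:@#https:@g" /tmp/harbor.yml
sed -i "s@port: 443@#port: 443@g" /tmp/harbor.yml
sed -i "s@certificate: /your/certificate/path@#certificate: /your/certificate/path@g" /tmp/harbor.yml
sed -i "s@private_key: /your/private/key/path@#private_key: /your/private/key/path@g" /tmp/harbor.yml

cat /tmp/harbor.yml
```

Running it shows the hostname rewritten to the node IP and every HTTPS-related key commented out — the plain-HTTP state that install.sh is then run against.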
June 27, 2022