首页
About Me
推荐
weibo
github
Search
1
linuxea:gitlab-ci之docker镜像质量品质报告
49,451 阅读
2
linuxea:如何复现查看docker run参数命令
23,044 阅读
3
Graylog收集文件日志实例
18,580 阅读
4
linuxea:jenkins+pipeline+gitlab+ansible快速安装配置(1)
18,275 阅读
5
git+jenkins发布和回滚示例
18,181 阅读
ops
Openvpn
Sys Basics
rsync
Mail
NFS
Other
Network
HeartBeat
server 08
Code
Awk
Shell
Python
Golang
virtualization
KVM
Docker
openstack
Xen
kubernetes
kubernetes-cni
Service Mesh
Data
Mariadb
PostgreSQL
MongoDB
Redis
MQ
Ceph
TimescaleDB
kafka
surveillance system
zabbix
ELK Stack/logs
Open-Falcon
Prometheus
victoriaMetrics
Web
apache
Tomcat
Nginx
自动化
Puppet
Ansible
saltstack
Proxy
HAproxy
Lvs
varnish
更多
互联咨询
最后的净土
软件交付
持续集成
gitops
devops
登录
Search
标签搜索
kubernetes
docker
zabbix
Golang
mariadb
持续集成工具
白话容器
elk
linux基础
nginx
dockerfile
Gitlab-ci/cd
最后的净土
基础命令
gitops
jenkins
docker-compose
Istio
haproxy
saltstack
marksugar
累计撰写
690
篇文章
累计收到
139
条评论
首页
栏目
ops
Openvpn
Sys Basics
rsync
Mail
NFS
Other
Network
HeartBeat
server 08
Code
Awk
Shell
Python
Golang
virtualization
KVM
Docker
openstack
Xen
kubernetes
kubernetes-cni
Service Mesh
Data
Mariadb
PostgreSQL
MongoDB
Redis
MQ
Ceph
TimescaleDB
kafka
surveillance system
zabbix
ELK Stack/logs
Open-Falcon
Prometheus
victoriaMetrics
Web
apache
Tomcat
Nginx
自动化
Puppet
Ansible
saltstack
Proxy
HAproxy
Lvs
varnish
更多
互联咨询
最后的净土
软件交付
持续集成
gitops
devops
页面
About Me
推荐
weibo
github
搜索到
68
篇与
的结果
2023-09-21
linuxea:skywalking最新版本9.6.0链路和日志关联
我们使用一个前后端分离的权限管理开源系统来模拟skywalking的日志和链路关联行为。为什么我不在去讨论OpenTelemetry了?因为skywalking更加简单的实现我需要的功能。观测性实际需要付出的东西是很多的,怎么样用最简单的办法来实现这部分的功能的便利,是我考虑的事情。而OpenTelemetry目前只有signoz能够做到所谓的,目前最容易做到的日志,链路,指标的关联。但是signoz并非像skywalking这样简单。值得一提的是,仅全链路观测角度来看,nodejs方面,无论是OpenTelemetry还是单独的skywalking提供的解决方案,都是不太容易使用的,更别提需要手动埋点写入一大堆只有业务开发才能明白的链路请求生产的上下文span。而自动埋点是永远都需要的,恰巧OpenTelemetry和skywalking都支持 。因此,这些内容都基于自动埋点。当然,暂且不会说什么nodejs相关的,这是个头疼的事情。我们需要让子弹多飞一会。写这篇的目的在于,我又倾向skywalking了。在9.6.0中界面UI简洁了,这也是我为什么又选择使用skywalking重要的原因 之一。开始我使用的是https://gitee.com/y_project/RuoYi-Vue,由java SpringBoot和Vue组成。我将演示如何进行日志和链路关联。其他的语言支持也同样简单 ,这可能更多的来着社区wusheng的支持,感谢他。也感谢罗总提醒我使用这个项目来测试。在skywalking.apache.org/downloads/下载最新的9.0.0 java-agent jar包。这里的版本号和后续pom文件中的版本号相关联而日志如何修改和配置取决于你使用了什么组件,skywalking支持log4j,log4j2和logback,以RuoYi-Vue项目为例,他使用的是logback,那么就参考logback如果没有其他说明,下面所有内容出现的配置都是修改RuoYi-Vue项目我们快速拉起一套skywalking的环境,使用9.6.0最新的版本。docker-compose如下:version: '2' services: skywalking_oap: image: uhub.service.ucloud.cn/marksugar-k8s/skywalking-oap-server:9.6.0 #dapache/skywalking-oap-server:9.6.0 container_name: skywalking_oap ports: - "11800:11800" - "12800:12800" depends_on: - skywalking_elasticsearch environment: SW_STORAGE: elasticsearch SW_STORAGE_ES_CLUSTER_NODES: skywalking_elasticsearch:9200 SW_HEALTH_CHECKER: default SW_TELEMETRY: prometheus JAVA_OPTS: "-Xms2048m -Xmx2048m" skywalking_ui: image: uhub.service.ucloud.cn/marksugar-k8s/skywalking-ui:9.6.0 #apache/skywalking-ui:9.6 container_name: skywalking_ui depends_on: - skywalking_oap ports: - "8080:8080" environment: SW_OAP_ADDRESS: http://skywalking_oap:12800 SW_ZIPKIN_ADDRESS: http://skywalking_oap:9412 skywalking_elasticsearch: # image: registry.cn-hangzhou.aliyuncs.com/marksugar/elasticsearch:6.8.6 image: uhub.service.ucloud.cn/marksugar-k8s/elasticsearch:7.17.13 container_name: skywalking_elasticsearch ulimits: memlock: soft: -1 hard: -1 #network_mode: host hostname: elasticsearch restart: always environment: - cluster.name="elasticsearch" # - network.host="0.0.0.0" - discovery.type=single-node # - bootstrap.memory_lock=true - "ES_JAVA_OPTS=-Xms2048m -Xmx4096m -XX:-UseConcMarkSweepGC -XX:-UseCMSInitiatingOccupancyOnly -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=75 -Duser.timezone=Asia/Shanghai" user: root ports: - 9200:9200 - 9300:9300 # docker-compose 3.x # healthcheck: # test: [ "CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1" ] # interval: 30s # timeout: 10s # retries: 3 # start_period: 10s volumes: - /etc/localtime:/etc/localtime:ro - /data/skywalking_elasticsearch/data:/usr/share/elasticsearch/data # mkdir -p /data/elasticsearch/data #chown -R 1000.1001 /data/elasticsearch/data logging: driver: "json-file" options: max-size: "50M" mem_limit: 6144m skywalking_kabana: # image: uhub.service.ucloud.cn/marksugar-k8s/kibana:6.8.6 image: uhub.service.ucloud.cn/marksugar-k8s/kibana:7.17.13 container_name: skywalking_kabana ulimits: memlock: soft: -1 hard: -1 #network_mode: host hostname: kibana restart: always environment: - ELASTICSEARCH_URL=http://skywalking_elasticsearch:9200 user: root ports: - 5601:5601 volumes: - /etc/localtime:/etc/localtime:ro logging: driver: "json-file" options: max-size: "50M" mem_limit: 2048mlogback要使用它,我们修改pom.xml,添加两个apm的标签,如下 <dependency> <groupId>org.apache.skywalking</groupId> <artifactId>apm-toolkit-logback-1.x</artifactId> <version>{project.release.version}</version> </dependency> <dependency> <groupId>org.apache.skywalking</groupId> <artifactId>apm-toolkit-trace</artifactId> <version>{project.release.version}</version> </dependency>{project.release.version}填写当前skywalking agent的版本,我这里使用的是9.0.0,参考apm-toolkit-logback-1.x查看最新版本而后我们需要创建一个文件ruoyi-admin/src/main/resources/logback-spring.xml,并添加如下注意:1,这里使用的是ch.qos.logback.core.ConsoleAppender,那么可能在k8s中,日志打印或许就不能使用STDOUT收集日志了,STDOUT只能传递给skywalking2,skywalking已经收集了日志和链路,那为什么还要在收集日志呢。这取决于ES存储维护性考虑。你至少需要将日志收集到另外一个日志管理系统。skywalking的日志和链路用于近期查看。而日志收集用作与日志搜索。综上所述,如果不能将日志打印到STDOUT的同时还发送到skywalking,那么在K8S中就需要写入到容器内。这可不是一个好主意。<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout"> <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%tid] [%thread] %-5level %logger{36} -%msg%n</Pattern> </layout> </encoder> </appender> <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%thread] %-5level %logger{36} -%msg%n</Pattern> </layout> </encoder> </appender>而后我们修改ruoyi-admin中的logback.xml,片段如下 <!-- 日志输出格式 --> <property name="APM_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%thread] %-5level %logger{36} -%msg%n" /> <!-- skyWalking日志采集 --> <appender name="APM_LOG" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 控制台输出 --> <appender name="console" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> 其他如下<?xml version="1.0" encoding="UTF-8"?> <configuration scan="true" scanPeriod="5 seconds"> <!-- 日志输出格式 --> <property name="APM_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%thread] %-5level %logger{36} -%msg%n" /> <!-- skyWalking日志采集 --> <appender name="APM_LOG" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 控制台输出 --> <appender name="console" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 系统日志输出 --> <appender name="file_info" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <appender name="file_error" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 用户访问日志输出 --> <appender name="sys-user" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 系统模块日志级别控制 --> <logger name="com.ruoyi" level="info" /> <!-- Spring日志级别控制 --> <logger name="org.springframework" level="warn" /> <root level="info"> <appender-ref ref="console" /> </root> <!--系统操作日志--> <root level="info"> <appender-ref ref="file_info" /> <appender-ref ref="file_error" /> </root> <!--系统用户操作日志--> <logger name="sys-user" level="info"> <appender-ref ref="sys-user"/> </logger> </configuration> 如上所言,我们还需要一种方式提供日志系统的收集,因此同时将日志写入到本地文件。片段如下: <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> 完整xml如下<?xml version="1.0" encoding="UTF-8"?> <configuration scan="true" scanPeriod="5 seconds"> <!-- 日志输出格式 --> <property name="APM_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%thread] %-5level %logger{36} -%msg%n" /> <!-- skyWalking日志采集 --> <appender name="APM_LOG" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 控制台输出 --> <appender name="console" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 系统日志输出 --> <appender name="file_info" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <appender name="file_error" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 用户访问日志输出 --> <appender name="sys-user" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 写入文件 --> <!-- 日志存放路径 --> <property name="log.path" value="/data/ruoyi/logs" /> <!-- 控制台输出 --> <appender name="console_file" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${log.path}/sys-console.log</file> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 系统日志输出 --> <appender name="file_info_file" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${log.path}/sys-info.log</file> <!-- 循环政策:基于时间创建日志文件 --> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <!-- 日志文件名格式 --> <fileNamePattern>${log.path}/sys-info.%d{yyyy-MM-dd}.log</fileNamePattern> <!-- 日志最大的历史 60天 --> <maxHistory>1</maxHistory> </rollingPolicy> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> <filter class="ch.qos.logback.classic.filter.LevelFilter"> <!-- 过滤的级别 --> <level>INFO</level> <!-- 匹配时的操作:接收(记录) --> <onMatch>ACCEPT</onMatch> <!-- 不匹配时的操作:拒绝(不记录) --> <onMismatch>DENY</onMismatch> </filter> </appender> <appender name="file_error_file" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${log.path}/sys-err.log</file> <!-- 循环政策:基于时间创建日志文件 --> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <!-- 日志文件名格式 --> <fileNamePattern>${log.path}/sys-err.%d{yyyy-MM-dd}.log</fileNamePattern> <!-- 日志最大的历史 60天 --> <maxHistory>1</maxHistory> </rollingPolicy> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> <filter class="ch.qos.logback.classic.filter.LevelFilter"> <!-- 过滤的级别 --> <level>ERROR</level> <!-- 匹配时的操作:接收(记录) --> <onMatch>ACCEPT</onMatch> <!-- 不匹配时的操作:拒绝(不记录) --> <onMismatch>DENY</onMismatch> </filter> </appender> <!-- 用户访问日志输出 --> <appender name="sys-user_file" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${log.path}/sys-user.log</file> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <!-- 按天回滚 daily --> <fileNamePattern>${log.path}/sys-user.%d{yyyy-MM-dd}.log</fileNamePattern> <!-- 日志最大的历史 60天 --> <maxHistory>1</maxHistory> </rollingPolicy> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 系统模块日志级别控制 --> <logger name="com.ruoyi" level="info" /> <!-- Spring日志级别控制 --> <logger name="org.springframework" level="warn" /> <root level="info"> <appender-ref ref="console" /> </root> <!--系统操作日志--> <root level="info"> <appender-ref ref="file_info" /> <appender-ref ref="file_error" /> </root> <!--系统用户操作日志--> <logger name="sys-user" level="info"> <appender-ref ref="sys-user"/> </logger> <root level="info"> <appender-ref ref="console_file" /> <appender-ref ref="file_info_file" /> <appender-ref ref="file_error_file" /> <appender-ref ref="sys-user_file"/> </root> </configuration> 现在日志会发送到skywalking,本地也会保留一份,并且,我们可以通过日志的TID和skywalking中的id关联进行查询。接着开始打包mvn clean package -Dmaven.test.skip=true构建镜像构建完成后,我们定制一个Dockerfile来构建一个容器的镜像FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202 MAINTAINER by mark ENV JAVA_OPTS="\ -server \ -Xms2048m \ -Xmx2048m \ -Xmn512m \ -Xss256k \ -XX:+UseConcMarkSweepGC \ -XX:+UseCMSInitiatingOccupancyOnly \ -XX:CMSInitiatingOccupancyFraction=70 \ -XX:+HeapDumpOnOutOfMemoryError \ -XX:HeapDumpPath=/data/ruoyi/logs" \ MY_USER=linuxea \ MY_USER_ID=316 COPY ruoyi-admin/target/*.jar /data/ COPY skywalking-agent /data/ruoyi # copy文件到/data下后需要chonw普通用户权限 RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} \ && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} \ && mkdir /data/ruoyi/logs -p \ && echo "Asia/Shanghai" > /etc/timezone \ && chown -R linuxea.linuxea /data/ruoyi WORKDIR /data # 使用linuxea运行 java程序 USER linuxea CMD java ${JAVA_OPTS} -javaagent:/data/ruoyi/skywalking-agent.jar -jar *.jar进行构建docker build -t uhub.service.ucloud.cn/marksugar-k8s/ruoyi-admin:v1 .在k8s中,我们需要传递如下 环境变量 - name: SW_AGENT_NAME value: mark::test1 - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES value: skywalking-oap.skywalking:11800 docker 使用-e即可docker run --rm -e SW_AGENT_NAME=linuxea:ruoyi-admin -e SW_AGENT_COLLECTOR_BACKEND_SERVICES=172.16.100.151:11800 --net=host uhub.service.ucloud.cn/marksugar-k8s/ruoyi-admin:v1接着,我们进入当前创建的容器中在日志里面随意拿取一个TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001[root@Node-172_16_100_151 /data/ruoyi/logs]# docker exec -it distracted_leakey bash bash-5.1$ cd /data/ruoyi/logs/ bash-5.1$ tail -f sys-user.log 2023-09-21 22:14:17.281 [TID:e5bebb3ccecc4f0d88512263c809653b.77.16953056571440001] [http-nio-8080-exec-6] DEBUG c.r.s.m.S.selectUserList_COUNT -<== Total: 1 2023-09-21 22:14:17.283 [TID:e5bebb3ccecc4f0d88512263c809653b.77.16953056571440001] [http-nio-8080-exec-6] DEBUG c.r.s.m.SysUserMapper.selectUserList -==> Preparing: select u.user_id, u.dept_id, u.nick_name, u.user_name, u.email, u.avatar, u.phonenumber, u.sex, u.status, u.del_flag, u.login_ip, u.login_date, u.create_by, u.create_time, u.remark, d.dept_name, d.leader from sys_user u left join sys_dept d on u.dept_id = d.dept_id where u.del_flag = '0' LIMIT ? 2023-09-21 22:14:17.285 [TID:e5bebb3ccecc4f0d88512263c809653b.77.16953056571440001] [http-nio-8080-exec-6] DEBUG c.r.s.m.SysUserMapper.selectUserList -==> Parameters: 10(Integer) 2023-09-21 22:14:17.290 [TID:e5bebb3ccecc4f0d88512263c809653b.77.16953056571440001] [http-nio-8080-exec-6] DEBUG c.r.s.m.SysUserMapper.selectUserList -<== Total: 7 2023-09-21 22:14:18.367 [TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001] [http-nio-8080-exec-9] DEBUG c.r.s.m.S.selectUserList_COUNT -==> Preparing: SELECT count(0) FROM sys_user u LEFT JOIN sys_dept d ON u.dept_id = d.dept_id WHERE u.del_flag = '0' AND (u.dept_id = ? OR u.dept_id IN (SELECT t.dept_id FROM sys_dept t WHERE find_in_set(?, ancestors))) 2023-09-21 22:14:18.371 [TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001] [http-nio-8080-exec-9] DEBUG c.r.s.m.S.selectUserList_COUNT -==> Parameters: 103(Long), 103(Long) 2023-09-21 22:14:18.373 [TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001] [http-nio-8080-exec-9] DEBUG c.r.s.m.S.selectUserList_COUNT -<== Total: 1 2023-09-21 22:14:18.373 [TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001] [http-nio-8080-exec-9] DEBUG c.r.s.m.SysUserMapper.selectUserList -==> Preparing: select u.user_id, u.dept_id, u.nick_name, u.user_name, u.email, u.avatar, u.phonenumber, u.sex, u.status, u.del_flag, u.login_ip, u.login_date, u.create_by, u.create_time, u.remark, d.dept_name, d.leader from sys_user u left join sys_dept d on u.dept_id = d.dept_id where u.del_flag = '0' AND (u.dept_id = ? OR u.dept_id IN ( SELECT t.dept_id FROM sys_dept t WHERE find_in_set(?, ancestors) )) LIMIT ? 2023-09-21 22:14:18.374 [TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001] [http-nio-8080-exec-9] DEBUG c.r.s.m.SysUserMapper.selectUserList -==> Parameters: 103(Long), 103(Long), 10(Integer) 2023-09-21 22:14:18.377 [TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001] [http-nio-8080-exec-9] DEBUG c.r.s.m.SysUserMapper.selectUserList -<== Total: 1回到skywalking查询traceID为 e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001而后点击查看日志点击标记就可以看到日志信息了我特意将UI界面点开。简洁性不言而喻参考:https://skywalking.apache.org/docs/skywalking-java/next/en/setup/service-agent/java-agent/application-toolkit-logback-1.x/#logstash-logback-pluginhttps://www.apache.org/dyn/closer.cgi/skywalking/java-agent/9.0.0/apache-skywalking-java-agent-9.0.0.tgzhttps://skywalking.apache.org/downloads/https://skywalking.apache.org/docs/skywalking-java/next/en/setup/service-agent/java-agent/application-toolkit-log4j-1.x/https://skywalking.apache.org/docs/skywalking-java/next/en/setup/service-agent/java-agent/application-toolkit-logback-1.x/
2023年09月21日
11 阅读
0 评论
0 点赞
2023-09-09
linuxea: tekton ci Sidecar(3)
相比较此前在k8s上运行一个Docker Outside of Docker, DooD的方式,由于这个镜像一直运行,因此缓存存在在容器内,可以加速构建和复用已经拉取到资源。而Sidecar就不需要提前部署,Tekton 会将 Sidecar 注入属于 TaskRun 的 Pod,一旦 Task 中的所有 Steps 完成执行,Pod 内运行的每一个 Sidecar 就会终止掉,如果 Sidecar 成功退出,kubectl get pods 命令会将 Pod 的状态返回为 Completed,如果 Sidecar 退出时出现了错误,则返回 Error,而忽略实际执行 Pod 内部 Steps 的容器镜像的退出码值。在每个容器里面都是一个独立的 Docker Daemon,因此也支持并行运行,互不影响,DinD显然更加安全、更加干净。我门仍然需要准备相关的配置1.凭据和SAapiVersion: v1 kind: Secret metadata: name: ucloud-auth annotations: tekton.dev/docker-0: http://uhub.service.ucloud.cn type: kubernetes.io/basic-auth stringData: username: username password: password --- apiVersion: v1 kind: ServiceAccount metadata: name: build-sa secrets: - name: ucloud-auth --- apiVersion: tekton.dev/v1alpha1 kind: PipelineResource metadata: name: git-res namespace: default spec: params: - name: url value: https://gitee.com/marksugar/argocd-example - name: revision value: master type: git --- apiVersion: tekton.dev/v1alpha1 kind: PipelineResource metadata: name: ucloud-image-go spec: type: image params: - name: url value: uhub.service.ucloud.cn/linuxea/golang #构建完的镜像名称 接着配置一个task的测试用例2.测试taskapiVersion: tekton.dev/v1beta1 kind: Task metadata: name: test spec: resources: inputs: - name: repo type: git steps: - name: run-test image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 workingDir: /workspace/repo script: | #!/usr/bin/env sh cd tekton/go && go test3.构建task# task-docker-build.yaml apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: docker-build-push spec: resources: inputs: # 定义输入资源 - name: source # 源代码仓库 type: git params: - name: image description: Reference of the image docker will produce. - name: builder_image description: The location of the docker builder image. default: docker:stable - name: dockerfile description: Path to the Dockerfile to build. default: ./Dockerfile - name: context description: Path to the directory to use as context. default: . - name: build_extra_args description: Extra parameters passed for the build command when building images. default: "" - name: push_extra_args description: Extra parameters passed for the push command when pushing images. default: "" - name: insecure_registry description: Allows the user to push to an insecure registry that has been specified default: "" - name: registry_mirror description: Specific the docker registry mirror default: "" - name: registry_url description: private docker images registry url steps: - name: docker-build # 构建步骤 image: $(params.builder_image) env: - name: DOCKER_HOST # 用 TLS 形式通过 TCP 链接 sidecar value: tcp://localhost:2376 - name: DOCKER_TLS_VERIFY # 校验 TLS value: "1" - name: DOCKER_CERT_PATH # 使用 sidecar 守护进程生成的证书 value: /certs/client workingDir: $(resources.inputs.source.path) script: | # docker 构建命令 docker login $(params.registry_url) cd tekton/go docker build \ $(params.build_extra_args) \ --no-cache \ -f $(params.dockerfile) -t $(params.image) $(params.context) volumeMounts: # 声明挂载证书目录 - mountPath: /certs/client name: dind-certs - name: docker-push # image: $(params.builder_image) env: - name: DOCKER_HOST value: tcp://localhost:2376 - name: DOCKER_TLS_VERIFY value: "1" - name: DOCKER_CERT_PATH value: /certs/client workingDir: $(resources.inputs.source.path) script: | # 推送 docker 镜像 docker login $(params.registry_url) echo $(params.image) docker push $(params.push_extra_args) $(params.image) volumeMounts: - mountPath: /certs/client name: dind-certs sidecars: # sidecar 模式,提供 docker daemon服务,实现真正的 DinD 模式 - image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/docker:20.10.16-dind name: server args: - --storage-driver=vfs - --userland-proxy=false - --debug - --insecure-registry=$(params.insecure_registry) - --registry-mirror=$(params.registry_mirror) securityContext: privileged: true env: - name: DOCKER_TLS_CERTDIR # 将生成的证书写入与客户端共享的路径 value: /certs volumeMounts: - mountPath: /certs/client name: dind-certs readinessProbe: # 等待 dind daemon 生成它与客户端共享的证书 periodSeconds: 1 exec: command: ["ls", "/certs/client/ca.pem"] volumes: # 使用 emptyDir 的形式即可 - name: dind-certs emptyDir: {}上面的 Task 使用了一个 registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/docker:20.10.16-dind 镜像来提供 docker 服务端,在 sidecar 模式容器是共享 network namespace 的,通过 tcp://localhost:2376 和 docker 服务端进行通信,由于还使用的是 TLS 证书模式,所以需要将证书目录进行声明挂载。在dockerfile的细节部分需要进入dockerfile目录进行构建4.pipeline我门需要指定params参数最终给每个task传递使用apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: test-sidecar-pipeline spec: resources: # 为 Tasks 提供输入和输出资源声明 - name: git-res type: git params: - name: image type: string - name: image-tag type: string default: "v0.4.0" - name: registry_url type: string default: "uhub.service.ucloud.cn" - name: registry_mirror type: string default: "https://ot2k4d59.mirror.aliyuncs.com/" - name: insecure_registry type: string default: "uhub.service.ucloud.cn" tasks: # 添加task到流水线中 # 运行应用测试 - name: test taskRef: name: test resources: inputs: - name: repo # Task 输入名称 resource: git-res # Pipeline 资源名称 - name: get-build-id taskRef: name: generate-build-id params: - name: base-version value: $(params.image-tag) # 构建并推送 Docker 镜像 - name: build-and-push taskRef: name: docker-build-push # 使用上面定义的镜像构建任务 runAfter: - test # 测试任务执行之后 resources: inputs: - name: source # 指定输入的git仓库资源 resource: git-res params: - name: image value: "$(params.image):$(tasks.get-build-id.results.build-id)" - name: registry_url value: $(params.registry_url) - name: insecure_registry value: $(params.insecure_registry) - name: registry_mirror value: $(params.registry_mirror)5.pipelinerun这些参数其中包括tag就会通过pipelinerun来进行传递,或者使用默认值apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: test-sidecar-pipelinerun spec: serviceAccountName: build-sa pipelineRef: name: test-sidecar-pipeline resources: - name: git-res # 指定输入的git仓库资源 resourceRef: name: git-res params: - name: image value: uhub.service.ucloud.cn/linuxea/golang - name: image-tag # 传入版本号 value: "v0.4.1"apply[root@master1 Sidecar]# kubectl apply -f ./ task.tekton.dev/generate-build-id created task.tekton.dev/docker-build-push created task.tekton.dev/test created pipeline.tekton.dev/test-sidecar-pipeline created pipelinerun.tekton.dev/test-sidecar-pipelinerun created我门仍然通过命令查看构建pipelinerun[root@master1 Sidecar]# tkn pipelinerun describe test-sidecar-pipelinerun Name: test-sidecar-pipelinerun Namespace: default Pipeline Ref: test-sidecar-pipeline Service Account: build-sa Labels: tekton.dev/pipeline=test-sidecar-pipeline Status STARTED DURATION STATUS 12 seconds ago --- Running Timeouts Pipeline: 1h0m0s Params NAME VALUE ∙ image uhub.service.ucloud.cn/linuxea/golang ∙ image-tag v0.4.1查看taskrun[root@master1 Sidecar]# tkn taskrun list NAME STARTED DURATION STATUS test-sidecar-pipelinerun-build-and-push 4 minutes ago 2m1s Succeeded test-sidecar-pipelinerun-get-build-id 4 minutes ago 12s Succeeded test-sidecar-pipelinerun-test 4 minutes ago 10s Succeeded test-pipelinerun-build-and-push-test 22 hours ago 11s Succeeded test-pipelinerun-get-build-id 22 hours ago 9s Succeeded test-pipelinerun-test 22 hours ago 9s Succeeded build-and-push 1 day ago 12m13s Succeeded testrun 4 days ago 2m45s Succeeded查看taskrun日志[root@master1 Sidecar]# tkn taskrun logs test-sidecar-pipelinerun-get-build-id [get-timestamp] fetch https://mirrors.ustc.edu.cn/alpine/v3.18/main/x86_64/APKINDEX.tar.gz [get-timestamp] fetch https://mirrors.ustc.edu.cn/alpine/v3.18/community/x86_64/APKINDEX.tar.gz [get-timestamp] (1/1) Installing tzdata (2023c-r1) [get-timestamp] OK: 11 MiB in 19 packages [get-timestamp] Current Timestamp: 20230613-151739 [get-timestamp] 20230613-151739 [get-buildid] v0.4.1-20230613-151739 [root@master1 Sidecar]# tkn taskrun logs test-sidecar-pipelinerun-test [git-source-repo-lgfhp] {"level":"info","ts":1686640658.3913343,"caller":"git/git.go:178","msg":"Successfully cloned https://gitee.com/marksugar/argocd-example @ d270fd8931fb059485622edb2ef1aa1209b7d42c (grafted, HEAD, origin/master) in path /workspace/repo"} [git-source-repo-lgfhp] {"level":"info","ts":1686640658.4037006,"caller":"git/git.go:217","msg":"Successfully initialized and updated submodules in path /workspace/repo"} [run-test] PASS [run-test] ok test 0.002s[root@master1 Sidecar]# tkn taskrun logs test-sidecar-pipelinerun-build-and-push -f [git-source-source-vx28b] {"level":"info","ts":1686640670.9329953,"caller":"git/git.go:178","msg":"Successfully cloned https://gitee.com/marksugar/argocd-example @ d270fd8931fb059485622edb2ef1aa1209b7d42c (grafted, HEAD, origin/master) in path /workspace/source"} [git-source-source-vx28b] {"level":"info","ts":1686640670.948619,"caller":"git/git.go:217","msg":"Successfully initialized and updated submodules in path /workspace/source"} [docker-build] Authenticating with existing credentials... [docker-build] WARNING! Your password will be stored unencrypted in /root/.docker/config.json. [docker-build] Configure a credential helper to remove this warning. See [docker-build] https://docs.docker.com/engine/reference/commandline/login/#credentials-store [docker-build] [docker-build] Login Succeeded [docker-build] Sending build context to Docker daemon 5.12kB [docker-build] Step 1/5 : FROM registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 [docker-build] 1.18.10-alpine3.17: Pulling from marksugar-k8s/golang [docker-build] 8921db27df28: Pulling fs layer [docker-build] a2f8637abd91: Pulling fs layer [docker-build] 4ba80a8cd2c7: Pulling fs layer [docker-build] dbc2308a4587: Pulling fs layer [docker-build] dbc2308a4587: Waiting [docker-build] a2f8637abd91: Verifying Checksum [docker-build] a2f8637abd91: Download complete [docker-build] dbc2308a4587: Verifying Checksum [docker-build] dbc2308a4587: Download complete [docker-build] 8921db27df28: Download complete [docker-build] 8921db27df28: Pull complete [docker-build] a2f8637abd91: Pull complete [docker-build] 4ba80a8cd2c7: Verifying Checksum [docker-build] 4ba80a8cd2c7: Download complete [docker-build] 4ba80a8cd2c7: Pull complete [docker-build] dbc2308a4587: Pull complete [docker-build] Digest: sha256:ab5685692564e027aa84e2980855775b2e48f8fc82c1590c0e1e8cbc2e716542 [docker-build] Status: Downloaded newer image for registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 [docker-build] ---> a77f45e5f987 [docker-build] Step 2/5 : RUN mkdir /test -p [docker-build] ---> Running in 7924d227b939 [docker-build] Removing intermediate container 7924d227b939 [docker-build] ---> 12e24160d708 [docker-build] Step 3/5 : WORKDIR /test [docker-build] ---> Running in 14f7ab09a508 [docker-build] Removing intermediate container 14f7ab09a508 [docker-build] ---> 0faa68db85f7 [docker-build] Step 4/5 : COPY . . [docker-build] ---> 9411b3cf535a [docker-build] Step 5/5 : CMD ["go test"] [docker-build] ---> Running in 8356b0c1c4ba [docker-build] Removing intermediate container 8356b0c1c4ba [docker-build] ---> 921c8de056ef [docker-build] Successfully built 921c8de056ef [docker-build] Successfully tagged uhub.service.ucloud.cn/linuxea/golang:v0.4.1-20230613-151739 [docker-push] Authenticating with existing credentials... [docker-push] WARNING! Your password will be stored unencrypted in /root/.docker/config.json. [docker-push] Configure a credential helper to remove this warning. See [docker-push] https://docs.docker.com/engine/reference/commandline/login/#credentials-store [docker-push] [docker-push] Login Succeeded [docker-push] uhub.service.ucloud.cn/linuxea/golang:v0.4.1-20230613-151739 [docker-push] The push refers to repository [uhub.service.ucloud.cn/linuxea/golang] [docker-push] a13db5615e44: Preparing [docker-push] 55da38d80f35: Preparing [docker-push] 4ad1c2ef216c: Preparing [docker-push] c23db623ee98: Preparing [docker-push] c1bfd5512d71: Preparing [docker-push] 8e012198eea1: Preparing [docker-push] 8e012198eea1: Waiting [docker-push] c23db623ee98: Layer already exists [docker-push] 4ad1c2ef216c: Layer already exists [docker-push] c1bfd5512d71: Layer already exists [docker-push] 8e012198eea1: Layer already exists [docker-push] 55da38d80f35: Pushed [docker-push] a13db5615e44: Pushed [docker-push] v0.4.1-20230613-151739: digest: sha256:a6ad200509fe8776b5ae2aaaf2ddf8387b0af30ae0667bfb67883bffefe7a962 size: 1571 [sidecar-server] 2023/06/13 07:19:44 Exiting...回到界面查看镜像仓库此时已经push了最新的
2023年09月09日
134 阅读
0 评论
0 点赞
2023-09-02
linuxea: 使用tekton构建镜像(2)
现在我门将这些组合起来形成一个流水线,使用的 CRD 对象是 Pipeline 。假如是一个新的环境,我门仍然需要配置此前的凭据,比如git认证,镜像仓库认证,必要的配置和版本号的传递等首先配置一个使用的凭据在清单中1.配置凭据和SAapiVersion: v1 kind: Secret metadata: name: ucloud-auth annotations: tekton.dev/docker-0: http://uhub.service.ucloud.cn type: kubernetes.io/basic-auth stringData: username: username password: password --- apiVersion: v1 kind: ServiceAccount metadata: name: build-sa secrets: - name: ucloud-auth --- apiVersion: tekton.dev/v1alpha1 kind: PipelineResource metadata: name: git-res namespace: default spec: params: - name: url value: https://gitee.com/marksugar/argocd-example - name: revision value: master type: git --- apiVersion: tekton.dev/v1alpha1 kind: PipelineResource metadata: name: ucloud-image-go spec: type: image params: - name: url value: uhub.service.ucloud.cn/linuxea/golang #构建完的镜像名称 PipelineResource的ucloud-image-go中并没有写入版本号,这是因为在task会进行生成2.配置task1task1作为一个假定的测试apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: test spec: resources: inputs: - name: repo type: git steps: - name: run-test image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 workingDir: /workspace/repo script: | #!/usr/bin/env sh cd tekton/go && go test3.生成版本号我门需要提供一个版本号,使用task生成一个由环境变量的传入-年月日时间的版本号apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: generate-build-id spec: description: >- Given a base version, this task generates a unique build id by appending the base-version to the current timestamp. params: - name: base-version description: Base product version type: string default: "1.0" results: - name: timestamp description: Current timestamp - name: build-id description: ID of the current build steps: - name: get-timestamp image: uhub.service.ucloud.cn/marksugar-k8s/bash:5.0.18 script: | #!/usr/bin/env bash sed -i 's/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g' /etc/apk/repositories apk add -U tzdata cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime ts=`date "+%Y%m%d-%H%M%S"` echo "Current Timestamp: ${ts}" echo ${ts} | tr -d "\n" | tee $(results.timestamp.path) - name: get-buildid image: uhub.service.ucloud.cn/marksugar-k8s/bash:5.0.18 script: | #!/usr/bin/env bash ts=`cat $(results.timestamp.path)` buildId=$(inputs.params.base-version)-${ts} echo ${buildId} | tr -d "\n" | tee $(results.build-id.path)3.配置镜像构建我门需要修改构建时候的命令,以便于添加传入的版本号# task-build-push.yaml apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: build-and-push-test spec: resources: inputs: # 定义输入资源 - name: repo #输入资源,就是gitee的那个仓库 type: git outputs: # 定义输出资源 - name: linuxea # 输出镜像名字 type: image params: # 定义参数 - name: DockerfileURL #指明 dockerfile 在仓库中的哪个位置 type: string default: $(resources.inputs.repo.path)/tekton/go/Dockerfile # repo资源的路径 description: The path to the dockerfile to build - name: pathToContext #指明构建上下文的路径 type: string default: $(resources.inputs.repo.path) # repo资源的路径 description: the build context used by docker daemon - name: imageTag type: string default: "v0.2.0" description: the docker image tag steps: - name: build-and-push image: docker:stable script: | #!/usr/bin/env sh docker login uhub.service.ucloud.cn cd /workspace/repo/tekton/go docker build -t $(resources.outputs.linuxea.url):$(params.imageTag) . docker push $(resources.outputs.linuxea.url):$(params.imageTag) # 这边的参数都是在 input 和 output 中定义的 env: - name: DOCKER_HOST value: tcp://docker-dind.tekton-pipelines:23754.配置pipeline管道中,我门需要注意每个task运行的顺序是由runAfter来决定的,在build-and-push-test阶段,传入了环境变量imageTagapiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: test-pipeline spec: resources: # 为 Tasks 提供输入和输出资源声明 - name: git-res type: git - name: ucloud-image-go type: image params: - name: image-tag type: string tasks: # 添加task到流水线中 - name: get-build-id taskRef: name: generate-build-id # 引入generate-build-id task params: - name: base-version value: $(params.image-tag) # 运行应用测试 - name: test taskRef: name: test resources: inputs: - name: repo # Task 输入名称 resource: git-res # Pipeline 资源名称 # 构建并推送 Docker 镜像 - name: build-and-push-test taskRef: name: build-and-push-test runAfter: - test # 测试任务执行之后 - get-build-id resources: inputs: - name: repo # 指定输入的git仓库资源 resource: git-res outputs: # 指定输出的镜像资源 - name: linuxea resource: ucloud-image-go params: - name: imageTag value: "$(tasks.get-build-id.results.build-id)"5.pipelinerun接着在spec中引入build-sa的sa,和各个资源和管道apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: test-pipelinerun spec: serviceAccountName: build-sa # 关联带有认证信息的 ServiceAccount pipelineRef: name: test-pipeline resources: - name: git-res # 指定输入的git仓库资源 resourceRef: name: git-res - name: ucloud-image-go # 指定输出的镜像资源 resourceRef: name: ucloud-image-go params: - name: image-tag # 传入版本号 value: "v0.3.0"注意,这里的image-tag的变量是传递给管道中名称为 get-build-id的task, 而这个task指向的是generate-build-id这个实际的task定义,而generate-build-id这个task在接收到image-tag的参数后返回了一个 buildId的环境变量,这个变量被获取后赋值给imageTag,而在构建镜像之前这个变量已经获取完成并且被传入到build-and-push-test的构建步骤中。6.构建查看创建完成后,通过命令查看[root@master1 pipeline]# tkn pipelinerun describe test-pipelinerun Name: test-pipelinerun Namespace: default Pipeline Ref: test-pipeline Service Account: build-sa Labels: tekton.dev/pipeline=test-pipeline Status STARTED DURATION STATUS 6 seconds ago --- Running ⏱ Timeouts Pipeline: 1h0m0s查看当前的taskrun[root@master1 pipeline]# tkn taskrun list NAME STARTED DURATION STATUS test-pipelinerun-build-and-push-test 15 minutes ago 11s Succeeded test-pipelinerun-get-build-id 15 minutes ago 9s Succeeded test-pipelinerun-test 15 minutes ago 9s Succeeded build-and-push 7 hours ago 12m13s Succeeded testrun 3 days ago 2m45s Succeeded根据name查看日志[root@master1 pipeline]# tkn taskrun logs test-pipelinerun-test [git-source-repo-gknf2] {"level":"info","ts":1686559991.0322871,"caller":"git/git.go:178","msg":"Successfully cloned https://gitee.com/marksugar/argocd-example @ d270fd8931fb059485622edb2ef1aa1209b7d42c (grafted, HEAD, origin/master) in path /workspace/repo"} [git-source-repo-gknf2] {"level":"info","ts":1686559991.0433397,"caller":"git/git.go:217","msg":"Successfully initialized and updated submodules in path /workspace/repo"} [run-test] PASS [run-test] ok test 0.002s [root@master1 pipeline]# tkn taskrun logs test-pipelinerun-get-build-id [get-timestamp] fetch https://mirrors.ustc.edu.cn/alpine/v3.18/main/x86_64/APKINDEX.tar.gz [get-timestamp] fetch https://mirrors.ustc.edu.cn/alpine/v3.18/community/x86_64/APKINDEX.tar.gz [get-timestamp] (1/1) Installing tzdata (2023c-r1) [get-timestamp] OK: 11 MiB in 19 packages [get-timestamp] Current Timestamp: 20230612-165311 [get-timestamp] 20230612-165311 [get-buildid] v0.3.0-20230612-165311 [root@master1 pipeline]# tkn taskrun logs test-pipelinerun-build-and-push-test [create-dir-linuxea-q6w6n] 2023/06/12 08:53:16 warning: unsuccessful cred copy: ".docker" from "/tekton/creds" to "/home/nonroot": unable to create destination directory: mkdir /home/nonroot: permission denied [git-source-repo-w5dcj] {"level":"info","ts":1686559998.4194562,"caller":"git/git.go:178","msg":"Successfully cloned https://gitee.com/marksugar/argocd-example @ d270fd8931fb059485622edb2ef1aa1209b7d42c (grafted, HEAD, origin/master) in path /workspace/repo"} [git-source-repo-w5dcj] {"level":"info","ts":1686559998.4337826,"caller":"git/git.go:217","msg":"Successfully initialized and updated submodules in path /workspace/repo"} [build-and-push] Authenticating with existing credentials... [build-and-push] WARNING! Your password will be stored unencrypted in /root/.docker/config.json. [build-and-push] Configure a credential helper to remove this warning. See [build-and-push] https://docs.docker.com/engine/reference/commandline/login/#credentials-store [build-and-push] [build-and-push] Login Succeeded [build-and-push] Sending build context to Docker daemon 5.12kB [build-and-push] Step 1/5 : FROM registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 [build-and-push] ---> a77f45e5f987 [build-and-push] Step 2/5 : RUN mkdir /test -p [build-and-push] ---> Using cache [build-and-push] ---> 48d724b29eff [build-and-push] Step 3/5 : WORKDIR /test [build-and-push] ---> Using cache [build-and-push] ---> 9b6eda13d6c1 [build-and-push] Step 4/5 : COPY . . [build-and-push] ---> Using cache [build-and-push] ---> a5c71d579512 [build-and-push] Step 5/5 : CMD ["go test"] [build-and-push] ---> Using cache [build-and-push] ---> 2f377c99476e [build-and-push] Successfully built 2f377c99476e [build-and-push] Successfully tagged uhub.service.ucloud.cn/linuxea/golang:v0.3.0-20230612-165311 [build-and-push] The push refers to repository [uhub.service.ucloud.cn/linuxea/golang] [build-and-push] 6a7300559c98: Preparing [build-and-push] 17600296503b: Preparing [build-and-push] 4ad1c2ef216c: Preparing [build-and-push] c23db623ee98: Preparing [build-and-push] c1bfd5512d71: Preparing [build-and-push] 8e012198eea1: Preparing [build-and-push] 8e012198eea1: Waiting [build-and-push] 17600296503b: Layer already exists [build-and-push] 6a7300559c98: Layer already exists [build-and-push] 4ad1c2ef216c: Layer already exists [build-and-push] c1bfd5512d71: Layer already exists [build-and-push] c23db623ee98: Layer already exists [build-and-push] 8e012198eea1: Layer already exists [build-and-push] v0.3.0-20230612-165311: digest: sha256:b649a5f42e753e36562b574e26018301e8e839f129f92e3f87fd29f4b30734fc size: 1572 [image-digest-exporter-htxn8] {"severity":"INFO","timestamp":"2023-06-12T08:53:22.548887491Z","caller":"logging/config.go:116","message":"Successfully created the logger."} [image-digest-exporter-htxn8] {"severity":"INFO","timestamp":"2023-06-12T08:53:22.548992849Z","caller":"logging/config.go:117","message":"Logging level set to: info"} [image-digest-exporter-htxn8] {"severity":"INFO","timestamp":"2023-06-12T08:53:22.549269268Z","caller":"imagedigestexporter/main.go:59","message":"No index.json found for: linuxea","commit":"68f2a66"}此时pod构建结束[root@master1 build]# kubectl get pods -w NAME READY STATUS RESTARTS AGE build-and-push-pod 0/4 Completed 0 7h26m test-pipelinerun-build-and-push-test-pod 0/4 Completed 0 8m18s test-pipelinerun-get-build-id-pod 0/2 Completed 0 8m33s test-pipelinerun-test-pod 0/2 Completed 0 8m33s testrun-pod 0/2 Completed 0 3d7h test-pipelinerun-get-build-id-pod 0/2 Terminating 0 8m37s test-pipelinerun-test-pod 0/2 Terminating 0 8m37s test-pipelinerun-build-and-push-test-pod 0/4 Terminating 0 8m22s test-pipelinerun-get-build-id-pod 0/2 Terminating 0 8m37s test-pipelinerun-test-pod 0/2 Terminating 0 8m37s test-pipelinerun-build-and-push-test-pod 0/4 Terminating 0 8m22s test-pipelinerun-get-build-id-pod 0/2 Pending 0 0s test-pipelinerun-test-pod 0/2 Pending 0 0s test-pipelinerun-get-build-id-pod 0/2 Pending 0 0s test-pipelinerun-test-pod 0/2 Pending 0 0s test-pipelinerun-get-build-id-pod 0/2 Init:0/3 0 0s test-pipelinerun-test-pod 0/2 Init:0/4 0 0s test-pipelinerun-test-pod 0/2 Init:1/4 0 1s test-pipelinerun-get-build-id-pod 0/2 Init:1/3 0 1s test-pipelinerun-test-pod 0/2 Init:2/4 0 2s test-pipelinerun-get-build-id-pod 0/2 Init:2/3 0 2s test-pipelinerun-test-pod 0/2 Init:3/4 0 3s test-pipelinerun-get-build-id-pod 0/2 PodInitializing 0 3s test-pipelinerun-get-build-id-pod 2/2 Running 0 4s test-pipelinerun-test-pod 0/2 PodInitializing 0 4s test-pipelinerun-get-build-id-pod 2/2 Running 0 4s test-pipelinerun-test-pod 2/2 Running 0 5s test-pipelinerun-test-pod 2/2 Running 0 5s test-pipelinerun-test-pod 1/2 NotReady 0 8s test-pipelinerun-get-build-id-pod 1/2 NotReady 0 8s test-pipelinerun-test-pod 0/2 Completed 0 9s test-pipelinerun-get-build-id-pod 0/2 Completed 0 9s test-pipelinerun-build-and-push-test-pod 0/4 Pending 0 0s test-pipelinerun-build-and-push-test-pod 0/4 Pending 0 0s test-pipelinerun-build-and-push-test-pod 0/4 Init:0/3 0 0s test-pipelinerun-build-and-push-test-pod 0/4 Init:1/3 0 0s test-pipelinerun-build-and-push-test-pod 0/4 Init:2/3 0 2s test-pipelinerun-build-and-push-test-pod 0/4 PodInitializing 0 3s test-pipelinerun-build-and-push-test-pod 4/4 Running 0 4s test-pipelinerun-build-and-push-test-pod 4/4 Running 0 4s test-pipelinerun-build-and-push-test-pod 3/4 NotReady 0 5s test-pipelinerun-build-and-push-test-pod 2/4 NotReady 0 6s test-pipelinerun-build-and-push-test-pod 1/4 NotReady 0 9s test-pipelinerun-build-and-push-test-pod 0/4 Completed 0 10s任务成功[root@master1 pipeline]# tkn pipelinerun describe test-pipelinerun Name: test-pipelinerun Namespace: default Pipeline Ref: test-pipeline Service Account: build-sa Labels: tekton.dev/pipeline=test-pipeline STARTED DURATION STATUS 1 minute ago 20s Succeeded Timeouts Pipeline: 1h0m0s Params NAME VALUE ∙ image-tag v0.3.0镜像也推送完成在tekton中看到详细的细节,其中test的task1做了go test,build-and-push进行了镜像打包和推送。
2023年09月02日
172 阅读
0 评论
0 点赞
2023-08-27
linuxea: Tekton在k8s上的简单应用(1)
前言: jenkins作为老派构建打包工具,无论在云原生之前,还是容器出现后,jenkins的脚本构建,pipeline,共享库等不断在丰富jenkins的能力。而在这期间,我们非常有必要提一下KodeRover公司的zadig。jenkins并不是为了kubernetes而生的,但是zadig是为基于Kubernetes下协同而生的。 除此之外,cd中spinnaker,argocd等更专注于持续部署的能力。而tekton被捐赠后,重新为基于kubernetes中的ci/cd工具又重新做了定义。tektonTekton 的前身是 Knative 项目的 build-pipeline 项目,这个项目是为了给 build 模块增加 pipeline 的功能,但是随着不同的功能加入到 Knative build 模块中,build 模块越来越变得像一个通用的 CI/CD 系统,于是,索性将 build-pipeline 剥离出 Knative,就变成了现在的 Tekton,而 Tekton 也从此致力于提供全功能、标准化的云原生 CI/CD 解决方案。Tekton 为 CI/CD 系统提供了诸多好处:可定制:Tekton 是完全可定制的,具有高度的灵活性,我们可以定义非常详细的构建块目录,供开发人员在各种场景中使用。可重复使用:Tekton 是完全可移植的,任何人都可以使用给定的流水线并重用其构建块,可以使得开发人员无需"造轮子"就可以快速构建复杂的流水线。可扩展:Tekton Catalog 是社区驱动的 Tekton 构建块存储库,我们可以使用 Tekton Catalog 中定义的组件快速创建新的流水线并扩展现有管道。标准化:Tekton 在你的 Kubernetes 集群上作为扩展安装和运行,并使用完善的 Kubernetes 资源模型,Tekton 工作负载在 Kubernetes Pod 内执行。伸缩性:要增加工作负载容量,只需添加新的节点到集群即可,Tekton 可随集群扩展,无需重新定义资源分配或对管道进行任何其他修改。Tekton 由一些列组件组成:Tekton Pipelines 是 Tekton 的基础,它定义了一组 Kubernetes CRD 作为构建块,我们可以使用这些对象来组装 CI/CD 流水线。Tekton Triggers 允许我们根据事件来实例化流水线,例如,可以我们在每次将 PR 合并到 GitHub 仓库的时候触发流水线实例和构建工作。Tekton CLI 提供了一个名为 tkn 的命令行界面,它构建在 Kubernetes CLI 之上,运行和 Tekton 进行交互。Tekton Dashboard 是 Tekton Pipelines 的基于 Web 的一个图形界面,可以线上有关流水线执行的相关信息。Tekton Catalog 是一个由社区贡献的高质量 Tekton 构建块(任务、流水线等)存储库,可以直接在我们自己的流水线中使用这些构建块。Tekton Hub 是一个用于访问 Tekton Catalog 的 Web 图形界面工具。Tekton Operator 是一个 Kubernetes Operator,可以让我们在 Kubernetes 集群上安装、更新、删除 Tekton 项目。而每个task作为最小单元,每个task中steps中定义不同的阶段,使用者可以灵活组合抽象使用。而TaskRun可以使task运行,Pipeline 将task组合起来,PipelineRun将Pipeline组合起来,环境变量层层进行传递。安装我门仍然需要遵循安装的版本要求,tekton作为一个CRD在kubernetes中。Required Kubernetes VersionStarting from the v0.24.x release of Tekton: Kubernetes version 1.18 or laterStarting from the v0.27.x release of Tekton: Kubernetes version 1.19 or laterStarting from the v0.30.x release of Tekton: Kubernetes version 1.20 or laterStarting from the v0.33.x release of Tekton: Kubernetes version 1.21 or laterStarting from the v0.39.x release of Tekton: Kubernetes version 1.22 or laterStarting from the v0.41.x release of Tekton: Kubernetes version 1.23 or laterStarting from the v0.45.x release of Tekton: Kubernetes version 1.24 or later并且我们可以在releases中查看最新的版本和稳定版的支持时间我的集群是1.21,因此我安装v0.33.2-0.350.33.2wget https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.33.2/release.yaml -O 0.33.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.33.2@sha256:ce01f1f89751bc2d2465d9f09f1918dcd4302551193475bdf0d23f12d5795ce1#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/controller:v0.33.2#g' 0.33.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook:v0.33.2@sha256:558b4c734c1dc7be8b2f3681a105bd23cc704fbf7525f0a5e7673beed40a7ca6#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/webhook:v0.33.2#g' 0.33.yaml kubectl apply -f 0.33.yaml另外我搬运了最新的版本镜像v.47下载release.yaml 文件进行安装,如下所示的命令:kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.47.1/release.yaml或者我门修改一下镜像地址后进行安装wget https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.47.1/release.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.47.1@sha256:9336443dc0b28585a8f75bb9d56082b69fcc61b0e92e968f8cd2ac4dd1f781c5#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/controller:v0.47.1#g' release.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/resolvers:v0.47.1@sha256:e68ab3f4efa096a4aa96fec0bc8fd91ee2d7a4bcf671ae0c90b2345cd0cb89c7#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/resolvers:v0.47.1#g' release.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook:v0.47.1@sha256:0f045f5e9a9bc025ab1e66909476616a7c2d69d0b0fcf2fbbeefdc8c99d8fd5b#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/webhook:v0.47.1#g' release.yaml kubectl apply -f release.yaml0.47.3wget https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.47.3/release.yaml -O 0.47.3.yaml sed -i 's@gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.47.3@sha256:cfbca9c19a8e7fe4f68b80499c9d921a03240ae2185d6f7d536c33b1177138ca@uhub.service.ucloud.cn/marksugar-k8s/controller:v0.47.3@g' 0.47.3.yaml sed -i 's@gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/resolvers:v0.47.3@sha256:ea46db5fd1c6c1774762fee57cb49aef6a9a6ba862c85232c8a89f1ab67b43fd@uhub.service.ucloud.cn/marksugar-k8s/resolvers:v0.47.3@g' 0.47.3.yaml sed -i 's@gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook:v0.47.3@sha256:20fe883b019e80fecddbb97a86d6773925c7b6727cf5e8e7007c47416bd9ebf7@uhub.service.ucloud.cn/marksugar-k8s/webhook:v0.47.3@g' 0.47.3.yaml0.35.0wget https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.35.0/release.yaml -O 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.35.0@sha256:1ba151e081bea043d82c684b4b63042a00886901bcb91a83db06325857b85e9c#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/controller:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook:v0.35.0@sha256:7244bf5e496347e2ed40347b152c0298b7ab87a16a24149691acea6f59bfde76#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/webhook:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/kubeconfigwriter:v0.35.0@sha256:9cb21a57f5f51813e54321f1cf20ae11e573d622ab8064f39cdfdff702905c1e#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/kubeconfigwriter:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.35.0@sha256:f98b666818a529e1fe6e7bb94cae77402ba0cabbf3a3fe00e4711d75b107472b#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/git-init:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint:v0.35.0@sha256:5730032a7daf7526fae6b586badf849ffe4539e16fd8be927ec7e320564486be#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/entrypoint:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/nop:v0.35.0@sha256:1d65a20cd5fbc79dc10e48ce9d2f7251736dac13b302b49a1c9a8717c5f2b5c5#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/nop:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter:v0.35.0@sha256:c2e028e35a3c3a38e584bec51cb21483411eb8e0dd02c22c2f910c29df5892af#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/imagedigestexporter:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/pullrequest-init:v0.35.0@sha256:155f059340b19364ce2b6bd40ac565070885db8922dc6e9a52e0a7181747476a#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/pullrequest-init:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/workingdirinit:v0.35.0@sha256:f4dc5477599754b42261ce367ab5590ca7c7866f64e5381e894d11e43231c268#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/workingdirinit:v0.35.0#g' 0.35.yaml kubectl apply -f 0.35.yaml安装后,自动创建了名称空间tekton-pipelines[root@master1 tekton]# kubectl -n tekton-pipelines get pod NAME READY STATUS RESTARTS AGE tekton-pipelines-controller-7b59f8bc-l2hmk 1/1 Running 0 2m17s tekton-pipelines-webhook-7f6889f9b7-gwcvb 1/1 Running 0 2m17s而后下载一个cli包wget https://github.com/tektoncd/cli/releases/download/v0.31.0/tkn_0.31.0_Linux_x86_64.tar.gz tar xf tkn_0.31.0_Linux_x86_64.tar.gz mv tkn /usr/local/sbin/安装dashboardwget https://storage.googleapis.com/tekton-releases/dashboard/previous/v0.35.0/release.yaml -O 0.35-dashboard.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/dashboard/cmd/dashboard:v0.35.0@sha256:057471aa317c900e30178d64d83fc9d32cf2fcd718632243f7b902403b64981b#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/dashboard:v0.35.0#g' 0.35-dashboard.yaml kubectl apply -f 0.35-dashboard.yaml安装完成后,配置一个ingress-nginx的域名[root@master1 tekton]# kubectl -n tekton-pipelines get pod -w NAME READY STATUS RESTARTS AGE tekton-dashboard-7787bd585d-6glr9 1/1 Running 0 6m6s tekton-pipelines-controller-7489bd899d-w7k55 1/1 Running 0 34m tekton-pipelines-webhook-5cb648d57f-scd5g 1/1 Running 0 34m配置ingress-nginx的域名apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: tekton-dashboard namespace: tekton-pipelines spec: ingressClassName: nginx rules: - host: tekton.linuxea.local http: paths: - path: / pathType: Prefix backend: service: name: tekton-dashboard port: number: 9097而后即可打开页面了如果你想拥有更高的权限,需要将安装对应的版本ModeCurrentv0.31.0 and earlierread-onlyrelease.yamltekton-dashboard-release-readonly.yamlread/writerelease-full.yamltekton-dashboard-release.yaml比如我们安装最新的wget https://github.com/tektoncd/dashboard/releases/download/v0.37.0/release-full.yaml -O 0.37.0-full.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/dashboard/cmd/dashboard:v0.37.0@sha256:2f38f99b6eafc18e67d013da84265ab61b5525a1a6c37005aaf86152b586427b#uhub.service.ucloud.cn/marksugar-k8s/dashboard:v0.37.0#g' 0.37.0-full.yaml kubectl apply -f 0.37.0-full.yaml定义Task我门需要了解最基本的几个概念,如下:Task:表示执行命令的一系列有序的步骤,task 里可以定义一系列的 steps,例如编译代码、构建镜像、推送镜像等,每个 task 实际由一个 Pod 执行。TaskRun:Task 只是定义了一个模版,TaskRun 才真正代表了一次实际的运行,当然你也可以自己手动创建一个 TaskRun,TaskRun 创建出来之后,就会自动触发 Task 描述的构建任务。Pipeline:一组有序的 Task,Pipeline 中的 Task 可以使用之前执行过的 Task 的输出作为它的输入。表示一个或多个 Task、PipelineResource 以及各种定义参数的集合。PipelineRun:类似 Task 和 TaskRun 的关系,PipelineRun 也表示某一次实际运行的 pipeline,下发一个 PipelineRun CRD 实例到 Kubernetes 后,同样也会触发一次 pipeline 的构建。ClusterTask:覆盖整个集群的任务,而不是单一的某一个命名空间,这是和 Task 最大的区别,其他基本上一致的。PipelineResource(Deprecated):定义由 Tasks 中的步骤摄取的输入和产生的输出的位置,比如 github 上的源码,或者 pipeline 输出资源,例如一个容器镜像或者构建生成的 jar 包等。Run(alpha):实例化自定义任务以在特定输入时执行。每个任务都在自己的 Kubernetes Pod 中执行,因此,默认情况下,管道内的任务不共享数据。要在 Tasks 之间共享数据,你必须明确配置每个 Task 以使其输出可用于下一个 Task 并获取先前执行的 Task 的输出作为其输入。Tekton 的 CI/CD 工作流中的每个操作都变成了一个 Step,使用指定的容器镜像来执行。Steps 然后组织在 Tasks 中,它在集群中作为 Kubernetes Pod 运行,还可以进一步组织 Tasks 变成成 Pipelines,还可以控制几个 Tasks 的执行顺序。1.创建task要创建一个 Task 任务,就需要使用到 Kubernetes 中定义的 Task 这个 CRD 对象,创建一个如下所示的资源文件,内容如下所示:apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: test spec: resources: inputs: - name: repo type: git steps: - name: run-test image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 workingDir: /workspace/repo script: | #!/usr/bin/env sh cd tekton/go && go test #command: ["go"] #args: ["test"]其中 resources 定义了我们的任务中定义的 Step 所需的输入内容,我们的步骤需要 Clone 一个 Git 仓库作为 go test 命令的输入,目前支持 git、pullRequest、image、cluster、storage、cloudevent 等资源。Tekton 内置的 git 资源类型,它会自动将代码仓库 Clone 到 /workspace/$input_name 目录中,由于我们这里输入被命名成 repo,所以代码会被 Clone 到 /workspace/repo 目录下面。然后下面的 steps 就是来定义执行运行测试命令的步骤,这里我们直接在代码的根目录中运行 go test 命令即可,需要注意的是命令和参数需要分别定义。创建该任务:[root@master1 tekton]# kubectl apply -f test.yaml task.tekton.dev/test created [root@master1 tekton]# kubectl get task NAME AGE test 7s新建的 Task 任务并不会立即执行, 必须创建一个 TaskRun 引用它并提供所有必需输入的数据才行。当然我们也可以直接使用 tkn 命令来启动这个 Task 任务,如下所示的命令来获取启动 Task 所需的资源对象:[root@master1 tekton]# tkn task start test --dry-run apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: creationTimestamp: null generateName: test-run- namespace: default spec: serviceAccountName: "" taskRef: name: test status: podName: ""定义git 代码仓库作为输入,所以需要一个 PipelineResource 对象来定义输入信息如下所示:apiVersion: tekton.dev/v1alpha1 kind: PipelineResource metadata: name: git-res namespace: default spec: params: - name: url value: https://gitee.com/marksugar/argocd-example - name: revision value: master type: git而后创建一个taskrun的模块对应上门的inputs的值apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: testrun spec: resources: inputs: - name: repo resourceRef: name: git-res taskRef: name: test创建上门的配置完成之后,我门查看下详情[root@master1 pipeline]# tkn taskrun logs --last -f [git-source-repo-jwtsb] {"level":"info","ts":1686274161.6897743,"caller":"git/git.go:178","msg":"Successfully cloned https://gitee.com/marksugar/argocd-example @ d270fd8931fb059485622edb2ef1aa1209b7d42c (grafted, HEAD, origin/master) in path /workspace/repo"} [git-source-repo-jwtsb] {"level":"info","ts":1686274161.7028632,"caller":"git/git.go:217","msg":"Successfully initialized and updated submodules in path /workspace/repo"} [run-test] PASS [run-test] ok test 0.002s [root@master1 pipeline]# kubectl get taskrun NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME testrun True Succeeded 3m10s 25s如果有状态异常,可以通过describe查看kubectl describe pod testrun-pod测试通过后在dashboard也可以看到2.镜像仓库认证要完成镜像打包,需要指定一个harbor的镜像仓库,将镜像推送到harbor中,因此我门需要准备一个harbor的仓库,或者其他仓库,而后创建一个Secret 资源对象apiVersion: v1 kind: Secret metadata: name: ucloud-auth annotations: tekton.dev/docker-0: http://uhub.service.ucloud.cn type: kubernetes.io/basic-auth stringData: username: username password: passwordtekton.dev/docker-0 的 annotation注解信息是用来告诉 Tekton 这些认证信息所属的 Docker 镜像仓库。并且创建一个 ServiceAccount 对象来使用上面的 ucloud-auth 这个 Secret 对象,内容如下所示:apiVersion: v1 kind: ServiceAccount metadata: name: build-sa secrets: - name: ucloud-auth创建完成后就可以使用这个进行认证3.构建和推送我门需要配置一个docker-dind来应对非docker环境的情况1,创建pvcapiVersion: v1 kind: PersistentVolume metadata: name: tekton-docker-dind namespace: tekton-pipelines spec: capacity: storage: 200Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain nfs: server: 172.168.204.36 path: /data/nfs-share/docker-dind # mkdir -p /data/nfs-share/docker-dind --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: tekton-docker-dind namespace: tekton-pipelines spec: accessModes: - ReadWriteMany resources: requests: storage: 200Gi2,创建dind如果你有共享存储请修改,如果没有使用hostpath请配置nodename,或者删除nodenameapiVersion: apps/v1 kind: Deployment metadata: name: docker-dind namespace: tekton-pipelines labels: app: docker-dind spec: selector: matchLabels: app: docker-dind template: metadata: labels: app: docker-dind spec: nodename: 172.168.204.39 containers: #- image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/docker:dind-20230303 - image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/docker:20.10.16-dind name: docker-dind args: - --insecure-registry=172.168.204.42 # 私有镜像地址 - --registry-mirror=https://ot2k4d59.mirror.aliyuncs.com/ # 指定一个镜像加速器地址 env: # - name: DOCKER_DRIVER # value: overlay2 - name: DOCKER_HOST value: tcp://0.0.0.0:2375 - name: DOCKER_TLS_CERTDIR # 禁用 TLS(最好别禁用) value: "" volumeMounts: - name: docker-dind-data-vol # 持久化docker根目录 mountPath: /var/lib/docker/ ports: - name: daemon-port containerPort: 2375 securityContext: privileged: true # 需要设置成特权模式 volumes: - hostPath: path: /data/docker-dind # mkdir -p /data/docker-dind name: docker-dind-data-vol #- name: docker-dind-data-vol # persistentVolumeClaim: # claimName: tekton-docker-dind --- apiVersion: v1 kind: Service metadata: name: docker-dind namespace: tekton-pipelines labels: app: docker-dind spec: ports: - port: 2375 targetPort: 2375 selector: app: docker-dind创建安装完成[root@master1 pipeline]# kubectl -n tekton-pipelines get pod -w NAME READY STATUS RESTARTS AGE docker-dind-5b7888c577-x86k6 1/1 Running 0 12m ... [root@master1 pipeline]# kubectl -n tekton-pipelines get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE docker-dind ClusterIP 10.68.143.200 <none> 2375/TCP 38m ...现在,我门创建一个 Task 任务来构建并推送 Docker 镜像,而在gitee中已经包含了一个 Dockerfile 文件了,直接 Clone 代码就可以获得:FROM registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 RUN mkdir /test -p WORKDIR /test COPY . . CMD ["go test"]创建一个名为 task-build-push.yaml 的文件,如下所示:定义了 DockerfileURL 与 pathToContext 参数,用来指定 Dockerfile 和构建上下文的路径,此外还定义了一个名为 linuxea 的镜像输出资源,用来定义 Docker 镜像的相关参数。然后定义了一个名为 build-and-push 的步骤来解决不同运行时的问题,我们在上面独立运行了一个 Docker Daemon 的服务,现在可以直接通过 DOCKER_HOST 环境变量来远程连接到该 Daemon 进行构建镜像。apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: build-and-push spec: resources: inputs: # 定义输入资源 - name: repo #输入资源,就是gitee的那个仓库 type: git outputs: # 定义输出资源 - name: linuxea # 输出镜像名字 type: image params: # 定义参数 - name: DockerfileURL #指明 dockerfile 在仓库中的哪个位置 type: string default: $(resources.inputs.repo.path)/tekton/go/Dockerfile # repo资源的路径 description: The path to the dockerfile to build - name: pathToContext #指明构建上下文的路径 type: string default: $(resources.inputs.repo.path) # repo资源的路径 description: the build context used by docker daemon steps: - name: build-and-push image: docker:stable script: | #!/usr/bin/env sh docker login uhub.service.ucloud.cn cd /workspace/repo/tekton/go docker build -t $(resources.outputs.linuxea.url) . # echo $(resources.outputs.linuxea.url) # echo $(params.DockerfileURL) # echo $(params.pathToContext) # uhub.service.ucloud.cn/linuxea/golang:v0.1.0 # /workspace/repo/tekton/go/Dockerfile # /workspace/repo # docker build -t $(resources.outputs.linuxea.url) -f $(params.DockerfileURL) $(params.pathToContext) docker push $(resources.outputs.linuxea.url) # 这边的参数都是在 input 和 output 中定义的 env: - name: DOCKER_HOST value: tcp://docker-dind.tekton-pipelines:2375此时,我门再继续创建一个taskrun来触发,并且指定此前创建的sa。现在我门先创建PipelineResource 资源和TaskRunapiVersion: tekton.dev/v1alpha1 kind: PipelineResource metadata: name: ucloud-image spec: type: image params: - name: url value: uhub.service.ucloud.cn/linuxea/golang:v0.1.0 #构建完的镜像名称 --- # taskrun-build-push.yaml apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: build-and-push spec: serviceAccountName: build-sa # 关联具有harbor认证信息的serviceaccount taskRef: name: build-and-push # 关联定义好的task resources: inputs: - name: repo # 指定输入的仓库资源 resourceRef: name: git-res outputs: # 指定输出的镜像资源 - name: linuxea resourceRef: name: ucloud-image params: - name: DockerfileURL #指明 dockerfile 在仓库中的哪个位置 value: $(resources.inputs.repo.path)/tekton/go/Dockerfile # repo资源的路径 - name: pathToContext # 指定构建上下文 value: $(resources.inputs.repo.path) # repo资源的路径创建[root@master1 pipeline]# kubectl get pods NAME READY STATUS RESTARTS AGE build-and-push-pod 0/4 PodInitializing 0 7s testrun-pod 0/2 Completed 0 7h12m [root@master1 pipeline]# kubectl get taskrun NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME build-and-push Unknown Pending 13s我门通过如下命令查看细节,或者通过describe taskrun查看kubectl -n default logs build-and-push-pod -c step-build-and-push或者如下tkn taskrun logs build-and-push taskrun logs build-and-push -f[root@master1 pipeline]# tkn taskrun logs build-and-push taskrun logs build-and-push -f [create-dir-linuxea-5vvfb] 2023/06/12 01:27:00 warning: unsuccessful cred copy: ".docker" from "/tekton/creds" to "/home/nonroot": unable to create destination directory: mkdir /home/nonroot: permission denied [git-source-repo-jfwsv] {"level":"info","ts":1686533225.9578078,"caller":"git/git.go:178","msg":"Successfully cloned https://gitee.com/marksugar/argocd-example @ d270fd8931fb059485622edb2ef1aa1209b7d42c (grafted, HEAD, origin/master) in path /workspace/repo"} [git-source-repo-jfwsv] {"level":"info","ts":1686533225.9820316,"caller":"git/git.go:217","msg":"Successfully initialized and updated submodules in path /workspace/repo"} [build-and-push] Authenticating with existing credentials... [build-and-push] Login Succeeded [build-and-push] WARNING! Your password will be stored unencrypted in /root/.docker/config.json. [build-and-push] Configure a credential helper to remove this warning. See [build-and-push] https://docs.docker.com/engine/reference/commandline/login/#credentials-store [build-and-push] [build-and-push] Sending build context to Docker daemon 5.12kB [build-and-push] Step 1/5 : FROM registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 [build-and-push] 1.18.10-alpine3.17: Pulling from marksugar-k8s/golang [build-and-push] 8921db27df28: Pulling fs layer [build-and-push] a2f8637abd91: Pulling fs layer [build-and-push] 4ba80a8cd2c7: Pulling fs layer [build-and-push] dbc2308a4587: Pulling fs layer [build-and-push] dbc2308a4587: Waiting [build-and-push] a2f8637abd91: Verifying Checksum [build-and-push] a2f8637abd91: Download complete [build-and-push] dbc2308a4587: Verifying Checksum [build-and-push] dbc2308a4587: Download complete [build-and-push] 8921db27df28: Verifying Checksum [build-and-push] 8921db27df28: Download complete [build-and-push] 8921db27df28: Pull complete [build-and-push] a2f8637abd91: Pull complete [build-and-push] 4ba80a8cd2c7: Verifying Checksum [build-and-push] 4ba80a8cd2c7: Download complete [build-and-push] 4ba80a8cd2c7: Pull complete [build-and-push] dbc2308a4587: Pull complete [build-and-push] Digest: sha256:ab5685692564e027aa84e2980855775b2e48f8fc82c1590c0e1e8cbc2e716542 [build-and-push] Status: Downloaded newer image for registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 [build-and-push] ---> a77f45e5f987 [build-and-push] Step 2/5 : RUN mkdir /test -p [build-and-push] ---> Running in ed14023d5ee7 [build-and-push] Removing intermediate container ed14023d5ee7 [build-and-push] ---> 48d724b29eff [build-and-push] Step 3/5 : WORKDIR /test [build-and-push] ---> Running in 3bb3e619663a [build-and-push] Removing intermediate container 3bb3e619663a [build-and-push] ---> 9b6eda13d6c1 [build-and-push] Step 4/5 : COPY . . [build-and-push] ---> a5c71d579512 [build-and-push] Step 5/5 : CMD ["go test"] [build-and-push] ---> Running in 35c3ce8a0468 [build-and-push] Removing intermediate container 35c3ce8a0468 [build-and-push] ---> 2f377c99476e [build-and-push] Successfully built 2f377c99476e [build-and-push] Successfully tagged uhub.service.ucloud.cn/linuxea/golang:v0.1.0 [build-and-push] The push refers to repository [uhub.service.ucloud.cn/linuxea/golang] [build-and-push] 6a7300559c98: Preparing [build-and-push] 17600296503b: Preparing [build-and-push] 4ad1c2ef216c: Preparing [build-and-push] c23db623ee98: Preparing [build-and-push] c1bfd5512d71: Preparing [build-and-push] 8e012198eea1: Preparing [build-and-push] 8e012198eea1: Waiting [build-and-push] c23db623ee98: Layer already exists [build-and-push] c1bfd5512d71: Layer already exists [build-and-push] 4ad1c2ef216c: Layer already exists [build-and-push] 8e012198eea1: Layer already exists [build-and-push] 17600296503b: Pushed [build-and-push] 6a7300559c98: Pushed [build-and-push] v0.1.0: digest: sha256:b649a5f42e753e36562b574e26018301e8e839f129f92e3f87fd29f4b30734fc size: 1572 [image-digest-exporter-7bd59] {"severity":"INFO","timestamp":"2023-06-12T01:39:09.159373327Z","caller":"logging/config.go:116","message":"Successfully created the logger."} [image-digest-exporter-7bd59] {"severity":"INFO","timestamp":"2023-06-12T01:39:09.159447186Z","caller":"logging/config.go:117","message":"Logging level set to: info"} [image-digest-exporter-7bd59] {"severity":"INFO","timestamp":"2023-06-12T01:39:09.159608529Z","caller":"imagedigestexporter/main.go:59","message":"No index.json found for: linuxea","commit":"68f2a66"}在tekton中能够观察到构建的细节展示此时镜像就被推送到UHub了
2023年08月27日
190 阅读
0 评论
0 点赞
2023-07-30
linuxea: argocd2.7.5简单的配置/使用/通知和监控
Argo CD 在 GitOps 模式中被常用,使用 Git 仓库作为定义所需应用程序状态的真实来源,同时可指定的目标环境中自动部署所需的应用程序状态,应用程序部署可以在 Git 提交时跟踪对分支、标签的更新,或固定到清单的指定版本。Argo CD在 Kubernetes中支持多种方式:kustomizehelm chartsksonnet applicationsjsonnet filesPlain directory of YAML/json manifestsAny custom config management tool configured as a config management plugin在k8s中通过一个控制器持续watch正在运行的程序的状态与git仓库的中的状态进行对比,如果有差异就返回一个outofsync,这些差异被argocd获取。而后手动,或者自动的进行同步。在基于GRPC/REST的api中,提供web ui,cli和CI/CD的接口。分别如下:应用程序管理和状态报告执行应用程序操作(例如同步、回滚、用户定义的操作)存储仓库和集群凭据管理(存储为 K8S Secrets 对象)认证和授权给外部身份提供者RBACGit webhook 事件的侦听器/转发器除此之外,通过一个仓库服务来生成K8s的清单存储 URLrevision 版本(commit、tag、branch)应用路径模板配置:参数、ksonnet 环境、helm values.yaml 等它可以提供的功能有很多, 包括但不限于本章记录的这些。安装我想肯定有人会疑问,为什么总在安装较为新的版本呢,答案是新版本一般解决了旧的问题。但是也不要高兴太早,尽管解决了旧的问题,迎之而来的就会是新的问题。如此而已。我门安装2.7.5最新单机版,如果是生产,那么建议使用ha版本 wget https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.5/manifests/install.yaml sed -i 's@ghcr.io/dexidp/dex:v2.36.0@registry.cn-zhangjiakou.aliyuncs.com/marksugar/dex:v2.36.0@g' install.yaml sed -i 's@redis:7.0.11-alpine@registry.cn-zhangjiakou.aliyuncs.com/marksugar/redis:7.0.11-alpine@g' install.yaml kubectl -n argocd apply -f install.yaml如果你需要ha,则可以试试2.7.6或者我门修改下镜像版本: wget https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.6/manifests/ha/install.yaml -O v2.7.6.yaml kubectl create namespace argocd sed -i 's@ghcr.io/dexidp/dex:v2.36.0@uhub.service.ucloud.cn/marksugar-k8s/dex:v2.36.0@g' v2.7.6.yaml sed -i 's@haproxy:2.6.14-alpine@uhub.service.ucloud.cn/marksugar-k8s/haproxy:2.6.14-alpine@g' v2.7.6.yaml sed -i 's@quay.io/argoproj/argocd:v2.7.6@uhub.service.ucloud.cn/marksugar-k8s/argocd:v2.7.6@g' v2.7.6.yaml sed -i 's@redis:7.0.11-alpine@uhub.service.ucloud.cn/marksugar-k8s/redis:7.0.11-alpine@g' v2.7.6.yaml kubectl apply -n argocd -f v2.7.6.yaml而后,我门需要删除如下字段,来应付msg="gpg --no-permission-warning --logger-fd 1 --batch --gen-key /tmp/gpg-key-recipe3098385539 failed exit status 2"字段的报错,我门需要在名称为argocd-repo-server的deployment中删除如下字段。参考9809和11647 seccompProfile: type: RuntimeDefault而后重新apply 即可。而后下载一个客户端VERSION=$(curl --silent "https://api.github.com/repos/argoproj/argo-cd/releases/latest" | grep '"tag_name"' | sed -E 's/.*"([^"]+)".*/\1/') curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/$VERSION/argocd-linux-amd64 chmod +x /usr/local/bin/argocd我们可以通过配置 Ingress 的方式来对外暴露服务,其他 Ingress 控制器的配置可以参考官方文档 https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/ 进行配置。Argo CD 在同一端口 (443) 上提供多个协议 (gRPC/HTTPS),所以当我们为 argocd 服务定义单个 nginx ingress 对象和规则的时候有点麻烦,因为 nginx.ingress.kubernetes.io/backend -protocol 这个 annotation 只能接受一个后端协议(例如 HTTP、HTTPS、GRPC、GRPCS)。为了使用单个 ingress 规则和主机名来暴露 Argo CD APIServer,必须使用 nginx.ingress.kubernetes.io/ssl-passthrough 这个 annotation 来传递 TLS 连接并校验 Argo CD APIServer 上的 TLS。apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: argocd-server-ingress namespace: argocd annotations: nginx.ingress.kubernetes.io/force-ssl-redirect: "true" nginx.ingress.kubernetes.io/ssl-passthrough: "true" spec: ingressClassName: nginx rules: - host: argocd.k8s.local http: paths: - path: / pathType: Prefix backend: service: name: argocd-server port: name: https而后,我门在deploymen中给argocd-server的args中添加--insecure, 如下:... containers: - args: - /usr/local/bin/argocd-server - --insecure env: ...你需要安装ingress-nginxhelm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update helm upgrade --install ingress-nginx --namespace ingress-nginx --create-namespace -f .\latest.yaml ingress-nginx/ingress-nginx或者简单地在 argocd-cmd-params-cm ConfigMap 中设置 server.insecure: "true" 即可。接着配置本地Hosts就可以打开了使用如下命令获取密码,用户名:admin[root@master1 argocd]# kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo X5sqkyi-ANK5LrGu除此之外,由于 ingress-nginx 的每个 Ingress 对象仅支持一个协议,因此另一种方法是定义两个 Ingress 对象。一个用于 HTTP/HTTPS,另一个用于 gRPCapiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: argocd-server-http-ingress namespace: argocd annotations: nginx.ingress.kubernetes.io/force-ssl-redirect: "true" nginx.ingress.kubernetes.io/backend-protocol: "HTTP" spec: ingressClassName: nginx rules: - http: paths: - path: / pathType: Prefix backend: service: name: argocd-server port: name: http host: argocd.k8s.local tls: - hosts: - argocd.k8s.local secretName: argocd-secret # do not change, this is provided by Argo CD --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: argocd-server-grpc-ingress namespace: argocd annotations: nginx.ingress.kubernetes.io/backend-protocol: "GRPC" spec: ingressClassName: nginx rules: - http: paths: - path: / pathType: Prefix backend: service: name: argocd-server port: name: https host: grpc.argocd.k8s.local tls: - hosts: - grpc.argocd.k8s.local secretName: argocd-secret # do not change, this is provided by Argo CD我门创建grpc的ingress,而后使用下载完成后的cli使用argocd登录grpc.argocd.k8s.local[root@master1 argocd]# argocd login grpc.argocd.k8s.local WARNING: server certificate had error: x509: certificate is valid for ingress.local, not grpc.argocd.k8s.local. Proceed insecurely (y/n)? y Username: admin Password: 'admin:login' logged in successfully Context 'grpc.argocd.k8s.local' updated而后我门修改密码1, 生成一个密码# argocd account bcrypt --password www.linuxea.com && echo $2a$10$64Ywt88aWJD.LYeAzA0UfelaUSENF.paiYSyw9QehsawqW8Nokc/.2, 打补丁的方式进行修改kubectl -n argocd patch secret argocd-secret \ -p '{"stringData": { "admin.password": "$2a$10$64Ywt88aWJD.LYeAzA0UfelaUSENF.paiYSyw9QehsawqW8Nokc/.", "admin.passwordMtime": "'$(date +%FT%T%Z)'" }}'重新登录[root@master1 argocd]# argocd login grpc.argocd.k8s.local WARNING: server certificate had error: x509: certificate is valid for ingress.local, not grpc.argocd.k8s.local. Proceed insecurely (y/n)? y Username: admin Password: 'admin:login' logged in successfully Context 'grpc.argocd.k8s.local' updated [root@master1 argocd]# argocd version argocd: v2.7.2+cbee7e6 BuildDate: 2023-05-12T14:06:49Z GitCommit: cbee7e6011407ed2d1066c482db74e97e0cc6bdb GitTreeState: clean GoVersion: go1.19.9 Compiler: gc Platform: linux/amd64 argocd-server: v2.7.2+cbee7e6.dirty BuildDate: 2023-05-12T13:43:26Z GitCommit: cbee7e6011407ed2d1066c482db74e97e0cc6bdb GitTreeState: dirty GoVersion: go1.19.6 Compiler: gc Platform: linux/amd64 Kustomize Version: v5.0.1 2023-03-14T01:32:48Z Helm Version: v3.11.2+g912ebc1 Kubectl Version: v0.24.2 Jsonnet Version: v0.19.1api添加swagger-ui即可: https://argocd.k8s.local/swagger-ui创建应用argocd支持配置在集群外,然后在将集群加入进来。但是我这里在集群内安装的因此不需要。我门直接创建应用CLI 创建应用argocd app create marksugar-cli \ --repo https://gitee.com/marksugar/argocd-example.git \ --path marksugar \ --dest-server https://kubernetes.default.svc \ --dest-namespace default而后就会在界面中显示出刚创建的项目Argo CD 默认情况下每 3 分钟会检测 Git 仓库一次,用于判断应用实际状态是否和 Git 中声明的期望状态一致,如果不一致,状态就转换为 OutOfSync。默认情况下并不会触发更新,除非通过 syncPolicy 配置了自动同步。CRD创建除了这些,还可以通过CRD创建apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: marksugar-ui namespace: argocd labels: marksugar/marksugar-ui: prod # 标签 spec: project: my-linuxea # 定义的项目名 source: repoURL: https://gitee.com/marksugar/argocd-example.git targetRevision: master path: marksugar destination: server: https://kubernetes.default.svc namespace: default 或者配置一个AppProject类型的,指定他的AppProject权限apiVersion: v1 kind: Namespace metadata: name: marksugar --- apiVersion: argoproj.io/v1alpha1 kind: AppProject metadata: name: my-linuxea namespace: argocd spec: description: Example Project(测试) sourceRepos: - '*' destinations: - namespace: marksugar server: 'https://kubernetes.default.svc' namespaceResourceWhitelist: - group: 'apps' kind: 'Deployment' - group: '' kind: 'Service' - group: '' kind: 'ConfigMap' --- apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: marksugar-ui-crd namespace: argocd labels: marksugar/marksugar-ui: prod # 标签 spec: project: my-linuxea # 定义的项目名 source: repoURL: https://gitee.com/marksugar/argocd-example.git targetRevision: master path: marksugar destination: server: https://kubernetes.default.svc namespace: marksugar 创建完成app部署应用自动更新开关 syncPolicy: # 打开自动更新 automated: prune: false selfHeal: false 手动 syncPolicy: automated: null配置为自动后,开始自动同步apiVersion: v1 kind: Namespace metadata: name: marksugar --- apiVersion: argoproj.io/v1alpha1 kind: AppProject metadata: name: my-linuxea-auto namespace: argocd spec: description: Example Project(测试) sourceRepos: - '*' destinations: - namespace: marksugar server: 'https://kubernetes.default.svc' namespaceResourceWhitelist: - group: 'apps' kind: 'Deployment' - group: '' kind: 'Service' - group: '' kind: 'ConfigMap' --- apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: marksugar-ui-auto namespace: argocd labels: marksugar/marksugar-ui: prod # 标签 spec: project: my-linuxea # 定义的项目名 source: repoURL: https://gitee.com/marksugar/argocd-example.git targetRevision: master path: marksugar destination: server: https://kubernetes.default.svc namespace: marksugar syncPolicy: # 打开自动更新 automated: prune: false selfHeal: false创建完成后就会开始[root@master1 argocd]# kubectl -n marksugar get pod NAME READY STATUS RESTARTS AGE marksugar-nginx-69ccfd5bb4-jvxsg 1/1 Running 0 21scli手动配置我门仍然可以通过argo的二进制cli手动部署1,列出[root@master1 ~]# argocd app list NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET argocd/marksugar-cli https://kubernetes.default.svc default default OutOfSync Missing <none> <none> https://gitee.com/marksugar/argocd-example.git marksugar argocd/marksugar-ui-auto https://kubernetes.default.svc marksugar my-linuxea Synced Healthy Auto <none> https://gitee.com/marksugar/argocd-example.git marksugar master argocd/marksugar-ui-crd https://kubernetes.default.svc marksugar my-linuxea OutOfSync Healthy <none> SharedResourceWarning(2) https://gitee.com/marksugar/argocd-example.git marksugar master2, 查看状态[root@master1 ~]# argocd app get marksugar-ui-auto Name: argocd/marksugar-ui-auto Project: my-linuxea Server: https://kubernetes.default.svc Namespace: marksugar URL: https://grpc.argocd.k8s.local/applications/marksugar-ui-auto Repo: https://gitee.com/marksugar/argocd-example.git Target: master Path: marksugar SyncWindow: Sync Allowed Sync Policy: Automated Sync Status: Synced to master (df898c3) Health Status: Healthy GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE Service marksugar marksugar-ui Synced Healthy service/marksugar-ui configured apps Deployment marksugar marksugar-nginx Synced Healthy deployment.apps/marksugar-nginx unchanged3, 手动同步状态[root@master1 ~]# argocd app sync marksugar-ui-auto TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE 2023-06-08T10:35:35+08:00 Service marksugar marksugar-ui Synced Healthy 2023-06-08T10:35:35+08:00 apps Deployment marksugar marksugar-nginx Synced Healthy 2023-06-08T10:35:36+08:00 Service marksugar marksugar-ui Synced Healthy service/marksugar-ui unchanged 2023-06-08T10:35:36+08:00 apps Deployment marksugar marksugar-nginx Synced Healthy deployment.apps/marksugar-nginx unchanged Name: argocd/marksugar-ui-auto Project: my-linuxea Server: https://kubernetes.default.svc Namespace: marksugar URL: https://grpc.argocd.k8s.local/applications/marksugar-ui-auto Repo: https://gitee.com/marksugar/argocd-example.git Target: master Path: marksugar SyncWindow: Sync Allowed Sync Policy: Automated Sync Status: Synced to master (df898c3) Health Status: Healthy Operation: Sync Sync Revision: df898c3db89f1a156d7c9889bb44f0d0d56f2937 Phase: Succeeded Start: 2023-06-08 10:35:35 +0800 CST Finished: 2023-06-08 10:35:36 +0800 CST Duration: 1s Message: successfully synced (all tasks run) GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE Service marksugar marksugar-ui Synced Healthy service/marksugar-ui unchanged apps Deployment marksugar marksugar-nginx Synced Healthy deployment.apps/marksugar-nginx unchanged顶顶通知argocd 组件中ArgoCD Notifications 支持消息通知,并且在安装时候已经被安装,于是我门修改argocd-notifications-cm的configmap即可,在这之前,我门创建一个钉钉的机器人1,创建机器人2,修改配置文件apiVersion: v1 kind: ConfigMap metadata: name: argocd-notifications-cm data: service.webhook.dingtalk: | url: https://oapi.dingtalk.com/robot/send?access_token=8923fbb89cc6adc7a07163be headers: - name: Content-Type value: application/json context: | argocdUrl: http://argocd.k8s.local template.app-sync-change: | webhook: dingtalk: method: POST body: | { "msgtype": "markdown", "markdown": { "title":"ArgoCD同步状态", "text": "### ArgoCD同步状态\n> - app名称: {{.app.metadata.name}}\n> - app同步状态: {{ .app.status.operationState.phase}}\n> - 时间:{{.app.status.operationState.startedAt}}\n> - URL: [点击跳转ArgoCD]({{.context.argocdUrl}}/applications/{{.app.metadata.name}}?operation=true) \n" } } trigger.on-deployed: | - description: Application is synced and healthy. Triggered once per commit. oncePer: app.status.sync.revision send: [app-sync-change] # template names # trigger condition when: app.status.operationState.phase in ['Succeeded'] and app.status.health.status == 'Healthy' trigger.on-health-degraded: | - description: Application has degraded send: [app-sync-change] when: app.status.health.status == 'Degraded' trigger.on-sync-failed: | - description: Application syncing has failed send: [app-sync-change] # template names when: app.status.operationState.phase in ['Error', 'Failed'] trigger.on-sync-running: | - description: Application is being synced send: [app-sync-change] # template names when: app.status.operationState.phase in ['Running'] trigger.on-sync-status-unknown: | - description: Application status is 'Unknown' send: [app-sync-change] # template names when: app.status.sync.status == 'Unknown' trigger.on-sync-succeeded: | - description: Application syncing has succeeded send: [app-sync-change] # template names when: app.status.operationState.phase in ['Succeeded'] subscriptions: | - recipients: [dingtalk] triggers: [on-sync-running, on-deployed, on-sync-failed, on-sync-succeeded]而后创建到argocd名称空间下[root@master1 argocd]# kubectl -n argocd apply -f argocd-message.yaml configmap/argocd-notifications-cm configured而后就可以通过命令查看kubectl -n argocd get configmap argocd-notifications-cm -o json3,添加本地Hosts到podyaml如下apiVersion: apps/v1 kind: Deployment metadata: labels: app.kubernetes.io/component: notifications-controller app.kubernetes.io/name: argocd-notifications-controller app.kubernetes.io/part-of: argocd name: argocd-notifications-controller spec: selector: matchLabels: app.kubernetes.io/name: argocd-notifications-controller strategy: type: Recreate template: metadata: labels: app.kubernetes.io/name: argocd-notifications-controller spec: hostAliases: - ip: 172.168.204.36 hostnames: - "argocd.k8s.local" containers: - args: - /usr/local/bin/argocd-notifications image: registry.cn-zhangjiakou.aliyuncs.com/marksugar/argocd:v2.7.2 imagePullPolicy: Always livenessProbe: tcpSocket: port: 9001 name: argocd-notifications-controller securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true volumeMounts: - mountPath: /app/config/tls name: tls-certs - mountPath: /app/config/reposerver/tls name: argocd-repo-server-tls workingDir: /app securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault serviceAccountName: argocd-notifications-controller volumes: - configMap: name: argocd-tls-certs-cm name: tls-certs - name: argocd-repo-server-tls secret: items: - key: tls.crt path: tls.crt - key: tls.key path: tls.key - key: ca.crt path: ca.crt optional: true secretName: argocd-repo-server-tls创建[root@master1 argocd]# kubectl -n argocd apply -f notifications.yaml deployment.apps/argocd-notifications-controller configured手动同步一次,就可以收到消息了监控Argo CD 本身暴露了两组 Prometheus 指标,默认情况下 Metrics 指标通过端点 argocd-metrics:8082/metrics 获取指标,包括:应用健康状态指标应用同步状态指标应用同步历史记录Argo CD 的 API 服务的 API 请求和响应相关的指标(请求数、响应码值等等...)通过端点 argocd-server-metrics:8083/metrics 获取如果开启了 endpoints 这种类型的服务自动发现,那么我们可以在几个指标的 Service 上添加 prometheus.io/scrape: "true" 这样的 annotation:# kubectl edit svc argocd-metrics -n argocd apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: "true" # kubectl edit svc argocd-server-metrics -n argocd apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: "true" prometheus.io/port: "8083" # 指定8083端口为指标端口 kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/name":"argocd-server-metrics","app.kubernetes.io/part-of":"argocd"},"name":"argocd-server-metrics","namespace":"argocd"},"spec":{"ports":[{"name":"metrics","port":8083,"protocol":"TCP","targetPort":8083}],"selector":{"app.kubernetes.io/name":"argocd-server"}}} creationTimestamp: "2023-06-06T09:18:46Z" # kubectl edit svc argocd-repo-server -n argocd apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: "true" prometheus.io/port: "8084" # 指定8084端口为指标端口 kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"repo-server","app.kubernetes.io/name":"argocd-repo-server","app.kubernetes.io/part-of":"argocd"},"name":"argocd-repo-server","namespace":"argocd"},"spec":{"ports":[{"name":"server","port":8081,"protocol":"TCP","targetPort":8081},{"name":"metrics","port":8084,"protocol":"TCP","targetPort":8084}],"selector":{"app.kubernetes.io/name":"argocd-repo-server"}}} creationTimestamp: "2023-06-06T09:18:46Z"而后就可以发现这几个指标或者手动创建ServiceMonitor 对象来创建指标对象。 而后在Grafana 中导入 Argo CD 的 Dashboard
2023年07月30日
365 阅读
0 评论
0 点赞
2023-07-13
linuxea:再一次谈论devops的现况
我们知道,devops的最早出向要追溯到很久之前,甚至从丰田制造中吸取了一些工业经验,而在后来的凤凰项目的故事,到前几年的持续交付2.0,在后来的AI阶段,这中间发展了太多追逐的,且不确定的东西。别忘了,敏捷在当时是很火的事情,聘请高级敏捷专家来重组也是有发生过的。而如今谈论ai介入整个流程又变成先锋,但是为什么总时会回到原来的地方呢?值得一说的是,今天的云编排环境提供了一体化,规模化,简化的平台部署方式回到开始,devops是一个团队在做一件事,目标是快速的发布,更新。最终 -> 更快的交付,缩短交付周期。而在这个过程中开发人员或者说测试人员需要具备测试用例的编写,具备持续测试的能力,需要在早期就发现一些能够发现的问题,而在现实中这其中,开发->测试->运维的角色分工与以往已经不是那么的一样。回过头来,在看现状:1,招聘了某一个员工通过一个局部的岗位来主持Devops2,团队对devops一无所知,更无从谈起测试用例等实际问题3,运维或者devops人员主持了devops工具,完成了一个表象的"devops"说起来,是非常有趣的。devops这个东西变成一个岗位,是有原因的。如下:1,团队不需要快速的迭代,也不需要快速的交付2,团队人员技术、运营和软技能差距明显,不具备devops能力3,业务代码和业务工作开发工作繁忙,无暇其他综上所述:在 devops方面,技术技能差距通常是最明显的。这些可能包括缺乏编码、系统管理、云计算和其他技术领域的熟练程度。这些差距可能会对 devops团队的绩效产生重大影响。它们可能会减慢软件交付速度,导致更多错误和错误,并使新技术或实践的实施变得更加困难。此外,它们还会在团队成员之间造成分歧,一些人感到不知所措或能力不足,而另一些人则感到负担过重。另外,如果参与devops团队缺乏对软件开发生命周期、项目管理或有效监控和排除系统故障的能力的了解。可能会导致流程效率低下、错过最后期限和产品质量低劣。还可能在团队内部造成紧张,因为成员可能很难理解自己的角色或他们的工作如何为项目的总体目标做出贡献。哪还有,devops理念的核心,成员处理问题的方式,以及情商等。面对如上的问题,我们又做了那些呢?思考1:如果团队不需要那么迫切的交付,那意味着我们不需要快速构建和发布。我们知道,通常发布一个程序大致分为两种情况:A。窗口期进行冷发布顾名思义,在一个安全的时间窗口内进行最小影响的发布。B。具备一定能力的优雅发布,一天发布至少一次需要开发,运维,测试一起合作,联合起来构建一套能够随时随地发布的系统。假设我们谈论我们在做devops,但是我们使用的仍然是窗口期发布,也不具备随时发布更新的能力。我认为将这种方式称为devops的场景是荒谬的。因为在这期间有大量的时间在等待一个窗口,累计修改和增加的代码都在等待一个窗口后进行最终的测试,这种方式并不高效。但这就一定是错的吗,也不完全是。每个团队的能力层次不齐,每个团队的产品规模和类型各不相同。能够推出一款超前的,但是又欠缺的东西,这是因地制宜造成的结果。我们在回顾devops的开始:我想,如果在从遥远的产品经理为了快速交付的故事开始有点扯蛋了。大概在5年前,gitlab cicd公开问世,这一事件让当时只有大厂具备的持续集成能力被公开到每个技术人员可以实现的范畴,并且我第一时间进行了测试。而后jenkins pipeline的广泛传播被应用。这背后是什么在驱动这项古老的技术被快速的应用?没错,是容器,是应用打包的问题被解决,是kubernetes的广泛应用给持续集成提供了一个成熟且便捷的底座。回到openstack时代,应用需要进行扩容,开发人员需要编写基础设施代码去自动构建vm节点,而后通过自动化手段初始化,并部署应用,或者直接镜像导入,无论如何,面对不同的底层系统需要配置不同的兼容属性。但是容器的到来,让我们不在关注底层这些逻辑。当然,底层逻辑仍然存在。在上述简短的描述中,我想已经很清楚了,在k8s生态的每一个新的技术的实现都是需要开发人员深度的构建的。如gitlab ,早期更多是由开发人员进行编排gitlab-ci.yml的,因为这种设计本身就是为了便利开发人员,而不是其他。让本来需要多个阶段的工作合成一条流水线,减少了测试,运维的频繁沟通,这本身就是通往快速迭代的路径之一。而随着devops被普及后,特别的是被广泛应用到互联网小公司后,它变成了一个因地制宜,切合实际的技术。为什么这么说呢,因为这和所谓的devops,开始发生了变化。具体有两种问题成了完成devops的难题,如下:A,虽然,devops开发人员配置了持续集成和持续部署。但是持续集成没有解决应用测试的问题,持续部署没有解决快速迭代,快速交付的问题。1,连续测试和自动化测试需要经验更丰富的,具备一定开发的测试人员进行构建。B,开发团队没有宽裕的时间和团队能力,也不具备符合自己的代码规范,团队欠缺完善的观测平台,如:OpenTelemetry1,开发团队面对累加的主业务代码,无暇其他细节,对待繁琐的代码扫描不进行关注。没有快速定位,和观测业务的平台如上两个问题外,那devops的这种因地制宜的方式真的一点好处都没有吗?其实并不是,别忘了,容器的问世是解决了应用打包的问题,而这种虽然声称自己做了devops,但仍然延续老旧办法的开发环境,假的持续集成和持续交付能力的平台仍然解决了应用打包到发布的环节。至少从应用到打包发不提供了便利的方式,这种方式对团队更友善。我想,我表达的很清楚了。如果你的团队没有做到快速交付,随时发布的能力,我认为这不是devops,这也是我为什么一直主张的gitops的原因之一。既然开发人员不能够关注代码质量,安全范畴,或者说其他,例如:sonarqube的质量,整个连续测试,开发团队也不具备业务的观测性。而构建一套的devops工具链,仅仅只是解决了运维层次的一些问题,仍然采用传统模式的那一套。我个人认为使用gitops,更加说的过去,因为你没有使用devops来提升更快的效率,以达到更快的产出。毕竟devops是围绕是价值为导向的,一旦付出和回报不成比例,那么无论什么样的技术实现,都将毫无意义。至于为什么采用gitops,这是改进后的方式:配置既代码。这种基础设施既代码,配置既代码的方式会变得更加普遍。随波逐流,不如尽可能的扩大可掌握的空间,能够随时随地的掌握应用的状态。认为这个问题似乎又变成了一个重要的问题了。2023-07-12
2023年07月13日
445 阅读
0 评论
1 点赞
2022-07-11
linuxea:jenkins基于钉钉的构建通知(11)
在之前的几篇中,我分别介绍了基础环境的配置,skywaling+nacos的配置,nexus3的配置,围绕sonarqube的配置和构建镜像的配置。这一篇中,主要配置消息通知阅读此篇,你将了解如下列表中简单的实现方式:jenkins和gitlab触发(已实现)jenkins凭据使用(已实现)juit配置(已实现)sonarqube简单扫描(已实现)sonarqube覆盖率(已实现)打包基于java的skywalking agent(上一章已实现)sonarqube与gitlab关联 (上一章已实现)配置docker中构建docker (上一章已实现)mvn打包(上一章已实现)sonarqube简单分支扫描(上一章已实现)基于gitlab来管理kustomize的k8s配置清单 (上一章已实现)kubectl部署 (上一章已实现)kubeclt deployment的状态跟踪 (上一章已实现)钉钉消息的构建状态推送(本章实现)前面我们断断续续的将最简单的持续集成做好,在cd阶段,使用了kustomize和argocd,并且搭配了kustomize和argocd做了gitops的部分事宜,现在们在添加一个基于钉钉的构建通知我们创建一个钉钉机器人,关键字是DEVOPS我们创建一个函数,其中采用markdown语法,如下:分别需要向DingTalk传递几个行参,分别是:mdTitle 标签,这里的标签也就是我们创建的关键字: DEVOPSmdText 详细文本atUser 需要@谁atAll @所有人SedContent 通知标题函数体如下:def DingTalk(mdTitle, mdText, atAll, atUser = '' ,SedContent){ webhook = "https://oapi.dingtalk.com/robot/send?access_token=55d35d6f09f05388c1a8f7d73955cd9b7eaf4a0dd38" sh """ curl --location --request POST ${webhook} \ --header 'Content-Type: application/json' \ --data '{ "msgtype": "markdown", "markdown": { "title": "${mdTitle}", "text": "${SedContent}\n ${mdText}" }, "at": { "atMobiles": [ "${atUser}" ], "isAtAll": "${atAll}" } }' """ }而在流水线阶段添加post,如下 post { success{ script{ // ItmesName="${JOB_NAME.split('/')[-1]}" env.SedContent="构建通知" mdText = "### ✅ \n ### 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By:  \n" DingTalk("DEVOPS", mdText, true, SedContent) } } failure{ script{ env.SedContent="构建通知" mdText = "### ❌ \n 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By:  \n" DingTalk("DEVOPS", mdText, true, SedContent) } } }当然,现在你看到了上面的函数传递中有很多变量,这些需要我们去获取我们在任意一个阶段中的script中,并用env.声明到全局环境变量,添加如下GIT_COMMIT_DESCRIBE: 提交信息GIT_COMMIT_TAGSHA:提交的SHA值TIMENOW_CN:可阅读的时间格式 env.GIT_COMMIT_DESCRIBE = "${sh(script:'git log --oneline --no-merges|head -1', returnStdout: true)}" env.GIT_COMMIT_TAGSHA=sh(script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() env.TIMENOW_CN=sh(script: """date +%Y年%m月%d日%H时%M分%S秒""",returnStdout: true).trim()进行构建,一旦构建完成,将会发送一段消息到钉钉如下而最终的管道流水线试图如下:完整的流水线管道代码如下try { if ( "${onerun}" == "gitlabs"){ println("Trigger Branch: ${info_ref}") RefName="${info_ref.split("/")[-1]}" //自定义显示名称 currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}" //自定义描述 currentBuild.description = "Trigger by user ${info_user_username} 自动触发 \n branch: ${RefName} \n commit message: ${info_commits_0_message}" BUILD_TRIGGER_BY="${info_user_username}" BASEURL="${info_project_git_http_url}" } }catch(e){ BUILD_TRIGGER_BY="${currentBuild.getBuildCauses()[0].userId}" currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} 非自动触发 \n branch: ${branch} \ngit: ${BASEURL}" } pipeline{ //指定运行此流水线的节点 agent any environment { def tag_time = new Date().format("yyyyMMddHHmm") def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}" def kustomize_Git="git@172.16.100.47:devops/k8s-yaml.git" def JOB_NAMES=sh (script: """echo ${kustomize_Git.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_Area="dev" def apps_name="java-demo" def projectGroup="java-demo" def PACK_PATH="/usr/local/package" } //管道运行选项 options { skipDefaultCheckout true skipStagesAfterUnstable() buildDiscarder(logRotator(numToKeepStr: '2')) } //流水线的阶段 stages{ //阶段1 获取代码 stage("CheckOut"){ steps { script { println("下载代码 --> 分支: ${env.branch}") checkout( [$class: 'GitSCM', branches: [[name: "${branch}"]], extensions: [], userRemoteConfigs: [[ credentialsId: 'gitlab-mark', url: "${BASEURL}"]]]) } } } stage("unit Test"){ steps{ script{ env.GIT_COMMIT_DESCRIBE = "${sh(script:'git log --oneline --no-merges|head -1', returnStdout: true)}" env.TIMENOW_CN=sh(returnStdout: true, script: 'date +%Y年%m月%d日%H时%M分%S秒') env.GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() sh """ cd linuxea && mvn test -s /var/jenkins_home/.m2/settings.xml2 """ } } post { success { script { junit 'linuxea/target/surefire-reports/*.xml' } } } } stage("coed sonar"){ environment { def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_GitId=sh (script: """curl --silent --heade "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| /usr/local/package/jq-1.6/jq -rc '.[]|select(.path_with_namespace == "java/java-demo")'| /usr/local/package/jq-1.6/jq .id""",returnStdout: true).trim() def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK" def GitLab_Address="http://172.16.100.47" } steps{ script { withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) { sh """ cd linuxea && \ /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=${GitLab_Address}:9000 \ -Dsonar.projectKey=${JOB_NAME} \ -Dsonar.projectName=${JOB_NAME} \ -Dsonar.projectVersion=${BUILD_NUMBER} \ -Dsonar.login=${SONAR_TOKEN} \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" \ -Dsonar.links.homepage=${env.BASEURL} \ -Dsonar.links.ci=${BUILD_URL} \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=${branch} \ -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \ -Dsonar.gitlab.ref_name=${branch} \ -Dsonar.gitlab.project_id=${Projects_GitId} \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=${GitLab_Address} \ -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \ -Dsonar.gitlab.api_version=v4 """ } } } } stage("mvn build"){ steps { script { sh """ cd linuxea mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml2 """ } } } stage("docker build"){ steps{ script{ sh """ cd linuxea docker ps -a cp -r /usr/local/package/skywalking-agent ./ docker build -f ./Dockerfile -t $IPATH . docker push $IPATH docker rmi -f $IPATH """ } } } stage('Deploy') { steps { sh ''' [ ! -d ${JOB_NAMES} ] || rm -rf ${JOB_NAMES} } git clone ${kustomize_Git} && cd ${JOB_NAMES} && git checkout ${apps_name} echo "push latest images: $IPATH" echo "`date +%F-%T` imageTag: $IPATH buildId: ${BUILD_NUMBER} " >> ./buildhistory-$Projects_Area-${apps_name}.log cd overlays/$Projects_Area ${PACK_PATH}/kustomize edit set image $IPATH cd ../.. git add . git config --global push.default matching git config user.name zhengchao.tang git config user.email usertzc@163.com git commit -m "image tag $IPATH-> ${imageUrlPath}" git push -u origin ${apps_name} ${PACK_PATH}/argocd app sync ${apps_name} --retry-backoff-duration=10s -l marksugar/app=${apps_name} ''' // ${PACK_PATH}/argocd app sync ${apps_name} --retry-backoff-duration=10s -l marksugar/app=${apps_name} } // ${PACK_PATH}/kustomize build overlays/$Projects_Area/ | ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev apply -f - } stage('status watch') { steps { sh ''' ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev -n ${projectGroup} rollout status deployment ${apps_name} --watch --timeout=10m ''' } } } post { success{ script{ // ItmesName="${JOB_NAME.split('/')[-1]}" env.SedContent="构建通知" mdText = "### ✅ \n ### 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By:  \n" DingTalk("DEVOPS", mdText, true, SedContent) } } failure{ script{ env.SedContent="构建通知" mdText = "### ❌ \n 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By:  \n" DingTalk("DEVOPS", mdText, true, SedContent) } } } } def DingTalk(mdTitle, mdText, atAll, atUser = '' ,SedContent){ webhook = "https://oapi.dingtalk.com/robot/send?access_token=55d35d6f09f05388c1a8f7d73955cd9b7eaf4a0dd3803abdd1452e83d5b607ab" sh """ curl --location --request POST ${webhook} \ --header 'Content-Type: application/json' \ --data '{ "msgtype": "markdown", "markdown": { "title": "${mdTitle}", "text": "${SedContent}\n ${mdText}" }, "at": { "atMobiles": [ "${atUser}" ], "isAtAll": "${atAll}" } }' """ }现在,一个最简单的gitops的demo项目搭建完成参考gitops
2022年07月11日
1,785 阅读
0 评论
0 点赞
1
2
...
10