2023-09-21
linuxea:skywalking最新版本9.6.0链路和日志关联
我们使用一个前后端分离的权限管理开源系统来模拟skywalking的日志和链路关联行为。为什么我不在去讨论OpenTelemetry了?因为skywalking更加简单的实现我需要的功能。观测性实际需要付出的东西是很多的,怎么样用最简单的办法来实现这部分的功能的便利,是我考虑的事情。而OpenTelemetry目前只有signoz能够做到所谓的,目前最容易做到的日志,链路,指标的关联。但是signoz并非像skywalking这样简单。值得一提的是,仅全链路观测角度来看,nodejs方面,无论是OpenTelemetry还是单独的skywalking提供的解决方案,都是不太容易使用的,更别提需要手动埋点写入一大堆只有业务开发才能明白的链路请求生产的上下文span。而自动埋点是永远都需要的,恰巧OpenTelemetry和skywalking都支持 。因此,这些内容都基于自动埋点。当然,暂且不会说什么nodejs相关的,这是个头疼的事情。我们需要让子弹多飞一会。写这篇的目的在于,我又倾向skywalking了。在9.6.0中界面UI简洁了,这也是我为什么又选择使用skywalking重要的原因 之一。开始我使用的是https://gitee.com/y_project/RuoYi-Vue,由java SpringBoot和Vue组成。我将演示如何进行日志和链路关联。其他的语言支持也同样简单 ,这可能更多的来着社区wusheng的支持,感谢他。也感谢罗总提醒我使用这个项目来测试。在skywalking.apache.org/downloads/下载最新的9.0.0 java-agent jar包。这里的版本号和后续pom文件中的版本号相关联而日志如何修改和配置取决于你使用了什么组件,skywalking支持log4j,log4j2和logback,以RuoYi-Vue项目为例,他使用的是logback,那么就参考logback如果没有其他说明,下面所有内容出现的配置都是修改RuoYi-Vue项目我们快速拉起一套skywalking的环境,使用9.6.0最新的版本。docker-compose如下:version: '2' services: skywalking_oap: image: uhub.service.ucloud.cn/marksugar-k8s/skywalking-oap-server:9.6.0 #dapache/skywalking-oap-server:9.6.0 container_name: skywalking_oap ports: - "11800:11800" - "12800:12800" depends_on: - skywalking_elasticsearch environment: SW_STORAGE: elasticsearch SW_STORAGE_ES_CLUSTER_NODES: skywalking_elasticsearch:9200 SW_HEALTH_CHECKER: default SW_TELEMETRY: prometheus JAVA_OPTS: "-Xms2048m -Xmx2048m" skywalking_ui: image: uhub.service.ucloud.cn/marksugar-k8s/skywalking-ui:9.6.0 #apache/skywalking-ui:9.6 container_name: skywalking_ui depends_on: - skywalking_oap ports: - "8080:8080" environment: SW_OAP_ADDRESS: http://skywalking_oap:12800 SW_ZIPKIN_ADDRESS: http://skywalking_oap:9412 skywalking_elasticsearch: # image: registry.cn-hangzhou.aliyuncs.com/marksugar/elasticsearch:6.8.6 image: uhub.service.ucloud.cn/marksugar-k8s/elasticsearch:7.17.13 container_name: skywalking_elasticsearch ulimits: memlock: soft: -1 hard: -1 #network_mode: host hostname: elasticsearch restart: always environment: - cluster.name="elasticsearch" # - network.host="0.0.0.0" - discovery.type=single-node # - bootstrap.memory_lock=true - "ES_JAVA_OPTS=-Xms2048m -Xmx4096m -XX:-UseConcMarkSweepGC -XX:-UseCMSInitiatingOccupancyOnly -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=75 -Duser.timezone=Asia/Shanghai" user: root ports: - 9200:9200 - 9300:9300 # docker-compose 3.x # healthcheck: # test: [ "CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1" ] # interval: 30s # timeout: 10s # retries: 3 # start_period: 10s volumes: - /etc/localtime:/etc/localtime:ro - /data/skywalking_elasticsearch/data:/usr/share/elasticsearch/data # mkdir -p /data/elasticsearch/data #chown -R 1000.1001 /data/elasticsearch/data logging: driver: "json-file" options: max-size: "50M" mem_limit: 6144m skywalking_kabana: # image: uhub.service.ucloud.cn/marksugar-k8s/kibana:6.8.6 image: uhub.service.ucloud.cn/marksugar-k8s/kibana:7.17.13 container_name: skywalking_kabana ulimits: memlock: soft: -1 hard: -1 #network_mode: host hostname: kibana restart: always environment: - ELASTICSEARCH_URL=http://skywalking_elasticsearch:9200 user: root ports: - 5601:5601 volumes: - /etc/localtime:/etc/localtime:ro logging: driver: "json-file" options: max-size: "50M" mem_limit: 2048mlogback要使用它,我们修改pom.xml,添加两个apm的标签,如下 <dependency> <groupId>org.apache.skywalking</groupId> <artifactId>apm-toolkit-logback-1.x</artifactId> <version>{project.release.version}</version> </dependency> <dependency> <groupId>org.apache.skywalking</groupId> <artifactId>apm-toolkit-trace</artifactId> 
<version>{project.release.version}</version> </dependency>{project.release.version}填写当前skywalking agent的版本,我这里使用的是9.0.0,参考apm-toolkit-logback-1.x查看最新版本而后我们需要创建一个文件ruoyi-admin/src/main/resources/logback-spring.xml,并添加如下注意:1,这里使用的是ch.qos.logback.core.ConsoleAppender,那么可能在k8s中,日志打印或许就不能使用STDOUT收集日志了,STDOUT只能传递给skywalking2,skywalking已经收集了日志和链路,那为什么还要在收集日志呢。这取决于ES存储维护性考虑。你至少需要将日志收集到另外一个日志管理系统。skywalking的日志和链路用于近期查看。而日志收集用作与日志搜索。综上所述,如果不能将日志打印到STDOUT的同时还发送到skywalking,那么在K8S中就需要写入到容器内。这可不是一个好主意。<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout"> <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%tid] [%thread] %-5level %logger{36} -%msg%n</Pattern> </layout> </encoder> </appender> <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%thread] %-5level %logger{36} -%msg%n</Pattern> </layout> </encoder> </appender>而后我们修改ruoyi-admin中的logback.xml,片段如下 <!-- 日志输出格式 --> <property name="APM_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%thread] %-5level %logger{36} -%msg%n" /> <!-- skyWalking日志采集 --> <appender name="APM_LOG" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 控制台输出 --> <appender name="console" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> 其他如下<?xml version="1.0" encoding="UTF-8"?> <configuration scan="true" scanPeriod="5 seconds"> <!-- 日志输出格式 --> <property name="APM_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%thread] %-5level %logger{36} -%msg%n" /> <!-- skyWalking日志采集 --> <appender name="APM_LOG" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 控制台输出 --> <appender name="console" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 系统日志输出 --> <appender name="file_info" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <appender name="file_error" 
class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 用户访问日志输出 --> <appender name="sys-user" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 系统模块日志级别控制 --> <logger name="com.ruoyi" level="info" /> <!-- Spring日志级别控制 --> <logger name="org.springframework" level="warn" /> <root level="info"> <appender-ref ref="console" /> </root> <!--系统操作日志--> <root level="info"> <appender-ref ref="file_info" /> <appender-ref ref="file_error" /> </root> <!--系统用户操作日志--> <logger name="sys-user" level="info"> <appender-ref ref="sys-user"/> </logger> </configuration> 如上所言,我们还需要一种方式提供日志系统的收集,因此同时将日志写入到本地文件。片段如下: <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> 完整xml如下<?xml version="1.0" encoding="UTF-8"?> <configuration scan="true" scanPeriod="5 seconds"> <!-- 日志输出格式 --> <property name="APM_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%thread] %-5level %logger{36} -%msg%n" /> <!-- skyWalking日志采集 --> <appender name="APM_LOG" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 控制台输出 --> <appender name="console" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 系统日志输出 --> <appender name="file_info" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <appender name="file_error" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 用户访问日志输出 --> <appender name="sys-user" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender"> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 写入文件 --> <!-- 日志存放路径 --> <property name="log.path" value="/data/ruoyi/logs" /> <!-- 控制台输出 --> <appender 
name="console_file" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${log.path}/sys-console.log</file> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 系统日志输出 --> <appender name="file_info_file" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${log.path}/sys-info.log</file> <!-- 循环政策:基于时间创建日志文件 --> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <!-- 日志文件名格式 --> <fileNamePattern>${log.path}/sys-info.%d{yyyy-MM-dd}.log</fileNamePattern> <!-- 日志最大的历史 60天 --> <maxHistory>1</maxHistory> </rollingPolicy> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> <filter class="ch.qos.logback.classic.filter.LevelFilter"> <!-- 过滤的级别 --> <level>INFO</level> <!-- 匹配时的操作:接收(记录) --> <onMatch>ACCEPT</onMatch> <!-- 不匹配时的操作:拒绝(不记录) --> <onMismatch>DENY</onMismatch> </filter> </appender> <appender name="file_error_file" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${log.path}/sys-err.log</file> <!-- 循环政策:基于时间创建日志文件 --> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <!-- 日志文件名格式 --> <fileNamePattern>${log.path}/sys-err.%d{yyyy-MM-dd}.log</fileNamePattern> <!-- 日志最大的历史 60天 --> <maxHistory>1</maxHistory> </rollingPolicy> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> <filter class="ch.qos.logback.classic.filter.LevelFilter"> <!-- 过滤的级别 --> <level>ERROR</level> <!-- 匹配时的操作:接收(记录) --> <onMatch>ACCEPT</onMatch> <!-- 不匹配时的操作:拒绝(不记录) --> <onMismatch>DENY</onMismatch> </filter> </appender> <!-- 用户访问日志输出 --> <appender name="sys-user_file" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${log.path}/sys-user.log</file> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <!-- 按天回滚 daily --> <fileNamePattern>${log.path}/sys-user.%d{yyyy-MM-dd}.log</fileNamePattern> <!-- 日志最大的历史 60天 --> <maxHistory>1</maxHistory> </rollingPolicy> <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout"> <Pattern>${APM_PATTERN}</Pattern> </layout> </encoder> </appender> <!-- 系统模块日志级别控制 --> <logger name="com.ruoyi" level="info" /> <!-- Spring日志级别控制 --> <logger name="org.springframework" level="warn" /> <root level="info"> <appender-ref ref="console" /> </root> <!--系统操作日志--> <root level="info"> <appender-ref ref="file_info" /> <appender-ref ref="file_error" /> </root> <!--系统用户操作日志--> <logger name="sys-user" level="info"> <appender-ref ref="sys-user"/> </logger> <root level="info"> <appender-ref ref="console_file" /> <appender-ref ref="file_info_file" /> <appender-ref ref="file_error_file" /> <appender-ref ref="sys-user_file"/> </root> </configuration> 现在日志会发送到skywalking,本地也会保留一份,并且,我们可以通过日志的TID和skywalking中的id关联进行查询。接着开始打包mvn clean package -Dmaven.test.skip=true构建镜像构建完成后,我们定制一个Dockerfile来构建一个容器的镜像FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202 MAINTAINER by mark ENV JAVA_OPTS="\ -server \ -Xms2048m \ -Xmx2048m \ -Xmn512m \ -Xss256k \ 
-XX:+UseConcMarkSweepGC \ -XX:+UseCMSInitiatingOccupancyOnly \ -XX:CMSInitiatingOccupancyFraction=70 \ -XX:+HeapDumpOnOutOfMemoryError \ -XX:HeapDumpPath=/data/ruoyi/logs" \ MY_USER=linuxea \ MY_USER_ID=316 COPY ruoyi-admin/target/*.jar /data/ COPY skywalking-agent /data/ruoyi # copy文件到/data下后需要chonw普通用户权限 RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} \ && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} \ && mkdir /data/ruoyi/logs -p \ && echo "Asia/Shanghai" > /etc/timezone \ && chown -R linuxea.linuxea /data/ruoyi WORKDIR /data # 使用linuxea运行 java程序 USER linuxea CMD java ${JAVA_OPTS} -javaagent:/data/ruoyi/skywalking-agent.jar -jar *.jar进行构建docker build -t uhub.service.ucloud.cn/marksugar-k8s/ruoyi-admin:v1 .在k8s中,我们需要传递如下 环境变量 - name: SW_AGENT_NAME value: mark::test1 - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES value: skywalking-oap.skywalking:11800 docker 使用-e即可docker run --rm -e SW_AGENT_NAME=linuxea:ruoyi-admin -e SW_AGENT_COLLECTOR_BACKEND_SERVICES=172.16.100.151:11800 --net=host uhub.service.ucloud.cn/marksugar-k8s/ruoyi-admin:v1接着,我们进入当前创建的容器中在日志里面随意拿取一个TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001[root@Node-172_16_100_151 /data/ruoyi/logs]# docker exec -it distracted_leakey bash bash-5.1$ cd /data/ruoyi/logs/ bash-5.1$ tail -f sys-user.log 2023-09-21 22:14:17.281 [TID:e5bebb3ccecc4f0d88512263c809653b.77.16953056571440001] [http-nio-8080-exec-6] DEBUG c.r.s.m.S.selectUserList_COUNT -<== Total: 1 2023-09-21 22:14:17.283 [TID:e5bebb3ccecc4f0d88512263c809653b.77.16953056571440001] [http-nio-8080-exec-6] DEBUG c.r.s.m.SysUserMapper.selectUserList -==> Preparing: select u.user_id, u.dept_id, u.nick_name, u.user_name, u.email, u.avatar, u.phonenumber, u.sex, u.status, u.del_flag, u.login_ip, u.login_date, u.create_by, u.create_time, u.remark, d.dept_name, d.leader from sys_user u left join sys_dept d on u.dept_id = d.dept_id where u.del_flag = '0' LIMIT ? 2023-09-21 22:14:17.285 [TID:e5bebb3ccecc4f0d88512263c809653b.77.16953056571440001] [http-nio-8080-exec-6] DEBUG c.r.s.m.SysUserMapper.selectUserList -==> Parameters: 10(Integer) 2023-09-21 22:14:17.290 [TID:e5bebb3ccecc4f0d88512263c809653b.77.16953056571440001] [http-nio-8080-exec-6] DEBUG c.r.s.m.SysUserMapper.selectUserList -<== Total: 7 2023-09-21 22:14:18.367 [TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001] [http-nio-8080-exec-9] DEBUG c.r.s.m.S.selectUserList_COUNT -==> Preparing: SELECT count(0) FROM sys_user u LEFT JOIN sys_dept d ON u.dept_id = d.dept_id WHERE u.del_flag = '0' AND (u.dept_id = ? OR u.dept_id IN (SELECT t.dept_id FROM sys_dept t WHERE find_in_set(?, ancestors))) 2023-09-21 22:14:18.371 [TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001] [http-nio-8080-exec-9] DEBUG c.r.s.m.S.selectUserList_COUNT -==> Parameters: 103(Long), 103(Long) 2023-09-21 22:14:18.373 [TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001] [http-nio-8080-exec-9] DEBUG c.r.s.m.S.selectUserList_COUNT -<== Total: 1 2023-09-21 22:14:18.373 [TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001] [http-nio-8080-exec-9] DEBUG c.r.s.m.SysUserMapper.selectUserList -==> Preparing: select u.user_id, u.dept_id, u.nick_name, u.user_name, u.email, u.avatar, u.phonenumber, u.sex, u.status, u.del_flag, u.login_ip, u.login_date, u.create_by, u.create_time, u.remark, d.dept_name, d.leader from sys_user u left join sys_dept d on u.dept_id = d.dept_id where u.del_flag = '0' AND (u.dept_id = ? OR u.dept_id IN ( SELECT t.dept_id FROM sys_dept t WHERE find_in_set(?, ancestors) )) LIMIT ? 
2023-09-21 22:14:18.374 [TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001] [http-nio-8080-exec-9] DEBUG c.r.s.m.SysUserMapper.selectUserList -==> Parameters: 103(Long), 103(Long), 10(Integer)
2023-09-21 22:14:18.377 [TID:e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001] [http-nio-8080-exec-9] DEBUG c.r.s.m.SysUserMapper.selectUserList -<== Total: 1

回到skywalking,查询traceID为 e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001,而后点击查看日志,点击标记就可以看到日志信息了。我特意将UI界面点开,简洁性不言而喻。

参考:
https://skywalking.apache.org/docs/skywalking-java/next/en/setup/service-agent/java-agent/application-toolkit-logback-1.x/#logstash-logback-plugin
https://www.apache.org/dyn/closer.cgi/skywalking/java-agent/9.0.0/apache-skywalking-java-agent-9.0.0.tgz
https://skywalking.apache.org/downloads/
https://skywalking.apache.org/docs/skywalking-java/next/en/setup/service-agent/java-agent/application-toolkit-log4j-1.x/
https://skywalking.apache.org/docs/skywalking-java/next/en/setup/service-agent/java-agent/application-toolkit-logback-1.x/
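补充一个小示例(非原文内容):在本地按 TID 过滤日志,便于和 skywalking 界面中的 traceID 互相印证。下面是一个最小的 Go 示意,其中日志路径 /data/ruoyi/logs/sys-user.log 与 TID 取自上文示例,实际使用时请替换为自己的值。

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// 按 TID 过滤本地日志文件,打印命中的行,方便与 skywalking 中同一 traceID 的链路对照。
// 路径与 TID 仅为示例值(对应上文 /data/ruoyi/logs 与示例 TID)。
func main() {
	const logFile = "/data/ruoyi/logs/sys-user.log"
	const tid = "e5bebb3ccecc4f0d88512263c809653b.80.16953056583460001"

	f, err := os.Open(logFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // 单行日志可能较长,放大缓冲区
	for scanner.Scan() {
		line := scanner.Text()
		if strings.Contains(line, "[TID:"+tid+"]") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}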
2023-09-14
linuxea:istio全链路传递cookie和header灰度
测试一下在istio中的全链路中基于cookie和header灰度发布,这些在higress中也可以的。istio在进行测试。根据istio版本信息中的提示,在1.19中支持的是1.25 到 1.28Istio 1.19.0 已得到 Kubernetes 1.25 到 1.28 的官方正式支持。鉴于 我本地使用的是1.25.11,因此1.19在我考虑范围内。下载安装组件istioctlwget https://github.com/istio/istio/releases/download/1.19.0/istioctl-1.19.0-linux-amd64.tar.gz tar xf istioctl-1.19.0-linux-amd64.tar.gz mv istioctl /usr/local/sbin/[root@master-01 ~/istio]# istioctl version no ready Istio pods in "istio-system" 1.19.0生成安装配置文件istioctl manifest generate --set profile=default > istio.yaml我们替换其中两个重要的镜像 image: docker.io/istio/proxyv2:1.19.0 image: docker.io/istio/pilot:1.19.0修改为 uhub.service.ucloud.cn/marksugar-k8s/proxyv2:1.19.0 uhub.service.ucloud.cn/marksugar-k8s/pilot:1.19.0 sed -i 's@docker.io/istio/pilot:1.19.0@uhub.service.ucloud.cn/marksugar-k8s/pilot:1.19.0@g' istio.yaml sed -i 's@docker.io/istio/proxyv2:1.19.0@uhub.service.ucloud.cn/marksugar-k8s/proxyv2:1.19.0@g' istio.yaml开始安装kubectl create ns istio-system kubectl apply -f istio.yaml安装完成[root@master-01 ~/istio]# kubectl -n istio-system get all NAME READY STATUS RESTARTS AGE pod/istio-ingressgateway-65cff96b76-nzdk9 1/1 Running 0 3m30s pod/istiod-ffc9db9cc-7g554 1/1 Running 0 3m30s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/istio-ingressgateway LoadBalancer 10.68.208.80 <pending> 15021:31635/TCP,80:30598/TCP,443:31349/TCP 2m28s service/istiod ClusterIP 10.68.9.174 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2m28s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/istio-ingressgateway 1/1 1 1 21m deployment.apps/istiod 1/1 1 1 21m NAME DESIRED CURRENT READY AGE replicaset.apps/istio-ingressgateway-65cff96b76 1 1 1 3m30s replicaset.apps/istiod-ffc9db9cc 1 1 1 3m30s NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE horizontalpodautoscaler.autoscaling/istio-ingressgateway Deployment/istio-ingressgateway 2%/80% 1 5 1 21m horizontalpodautoscaler.autoscaling/istiod Deployment/istiod 0%/80% 1 5 1 21m接着我们配置一个vip做为loadbalancerip addr add 172.16.100.210/24 dev eth0而后使用kubectl -n istio-system edit svc istio-ingressgateway编辑 27 clusterIP: 10.68.113.92 28 externalIPs: 29 - 172.16.100.210 30 clusterIPs: 31 - 10.68.113.92 32 externalTrafficPolicy: Cluster 33 internalTrafficPolicy: Cluster 34 ipFamilies: 35 - IPv4现在状态就正常了[root@master-01 ~/istio]# kubectl -n istio-system get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE istio-ingressgateway LoadBalancer 10.68.208.80 172.16.100.210 15021:31635/TCP,80:30598/TCP,443:31349/TCP 127m istiod ClusterIP 10.68.9.174 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 127m接着给test1名称空间打标签,表示test1作为istio的配置范围,test1名称空间内的pod都会注入一个边车[root@master-01 ~/istio]# kubectl create ns test1 namespace/test1 created [root@master-01 ~/istio]# kubectl label namespace test1 istio-injection=enabled namespace/test1 labeled测试代码我必须保持让cookie或者Header以某种方式被赋值后在代码链路中传递,而且应该有一个约束范围的名称。在测试中:cookie名称是:cannaryHeader名称是:test代码如下:package main import ( "fmt" "io/ioutil" "log" "net/http" "os" "github.com/gin-gonic/gin" ) // 全局变量 var ( PATH_URL = getEnv("PATH_URL", "go-test2") METHODS = getEnv("METHODS", "GET") QNAME = getEnv("QNAME", "name") ) func getEnv(key, defaultVal string) string { if value, ok := os.LookupEnv(key); ok { return value } return defaultVal } func main() { r := gin.Default() r.POST("/post", postJson) r.GET("/get", getJson) r.Run(":9999") } func getJson(c *gin.Context) { // 获取cookie cookie, err := c.Cookie("cannary") if err != nil { cookie = "NotSet" c.SetCookie("gin_cookie", "test", 3600, "getJson", "localhost", false, true) } fmt.Println("c.Cookie:", cookie) // 获取传入参数 query := 
c.Query(QNAME) fmt.Println(query) // 获取test Header headers := c.Request.Header customHeader := headers.Get("test") // 传递header和cookie sed2sort(customHeader, cookie) // 打印Header for k, v := range c.Request.Header { fmt.Println("c.Request.Header:", k, v) if k == "Test" { fmt.Println("c.Request.Header:", k, v) } } c.JSON(200, gin.H{"status": "ok"}) } func postJson(c *gin.Context) { // 获取cookie cookie, err := c.Cookie("cannary") if err != nil { cookie = "NotSet" c.SetCookie("gin_cookie", "test", 3600, "getJson", "localhost", false, true) } fmt.Println("c.Cookie:", cookie) // 获取传入参数 query := c.Query(QNAME) fmt.Println("c.Request.Query:", query) body := c.Request.Body x, err := ioutil.ReadAll(body) if err != nil { c.JSON(400, gin.H{"error": err.Error()}) return } fmt.Println(query) // 获取test Header headers := c.Request.Header customHeader := headers.Get("test") sed2sort(customHeader, cookie) // 打印Header for k, v := range c.Request.Header { fmt.Println("c.Request.Header:", k, v) if k == "Test" { fmt.Println("Test:", k, v) } } log.Println(string(x)) c.JSON(200, gin.H{"status": "ok"}) } // 调用下游 func sed2sort(headerValue, icookie string) { fmt.Println("sed2sort:", METHODS, PATH_URL) client := &http.Client{} req, err := http.NewRequest(METHODS, PATH_URL, nil) // 添加Header req.Header.Add("test", headerValue) // 添加Cookie cookies := []*http.Cookie{ &http.Cookie{Name: "cannary", Value: icookie}, } for _, cookie := range cookies { req.AddCookie(cookie) } if err != nil { fmt.Println(err) return } res, err := client.Do(req) if err != nil { fmt.Println(err) return } defer res.Body.Close() body, err := ioutil.ReadAll(res.Body) if err != nil { fmt.Println(err) return } fmt.Println(string(body)) }yaml相对的,需要创建几组服务和vs,分别测试Header,cookie,服务均从server1访问server2,在server2中进行meshserver1在server1中会去调用server2的get接口,通过环境变量传入apiVersion: v1 kind: Service metadata: name: server1 namespace: test1 spec: ports: - name: http port: 80 protocol: TCP targetPort: 9999 selector: app: server1 version: v0.2 type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: server1 namespace: test1 spec: replicas: selector: matchLabels: app: server1 version: v0.2 template: metadata: labels: app: server1 version: v0.2 spec: containers: - name: server1 # imagePullPolicy: Always image: uhub.service.ucloud.cn/marksugar-k8s/go-test:v3.1 #image: uhub.service.ucloud.cn/marksugar-k8s/cookie:v1 ports: - name: http containerPort: 9999 env: - name: PATH_URL value: http://server2/get - name: METHODS value: GETserver2server2提供一个单独的服务apiVersion: v1 kind: Service metadata: name: server2 namespace: test1 spec: ports: - name: http port: 80 protocol: TCP targetPort: 9999 selector: app: server2 version: v0.2 type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: server2 namespace: test1 spec: replicas: selector: matchLabels: app: server2 version: v0.2 template: metadata: labels: app: server2 version: v0.2 spec: containers: - name: server2 # imagePullPolicy: Always image: uhub.service.ucloud.cn/marksugar-k8s/go-test:v3.1 ports: - name: http containerPort: 9999server2-1server3也提供一个单独的服务apiVersion: v1 kind: Service metadata: name: server2-1 namespace: test1 spec: ports: - name: http port: 80 protocol: TCP targetPort: 9999 selector: app: server2-1 version: v0.2 type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: server2-1 namespace: test1 spec: replicas: selector: matchLabels: app: server2-1 version: v0.2 template: metadata: labels: app: server2-1 version: v0.2 spec: containers: - name: server2-1 # imagePullPolicy: 
Always image: uhub.service.ucloud.cn/marksugar-k8s/go-test:v3.1 ports: - name: http containerPort: 9999 server2-cooikeapiVersion: v1 kind: Service metadata: name: server2-cooike namespace: test1 spec: ports: - name: http port: 80 protocol: TCP targetPort: 9999 selector: app: server2-cooike version: v0.2 type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: server2-cooike namespace: test1 spec: replicas: selector: matchLabels: app: server2-cooike version: v0.2 template: metadata: labels: app: server2-cooike version: v0.2 spec: containers: - name: server2-cooike # imagePullPolicy: Always image: uhub.service.ucloud.cn/marksugar-k8s/go-test:v3.1 ports: - name: http containerPort: 9999创建完成后相对的svc和pod正常[root@master-01 ~/higress/ops/server]# kubectl -n test1 get pod NAME READY STATUS RESTARTS AGE server1-79fd8456ff-8fj9v 2/2 Running 0 25m server2-1-74bfdd776c-5zs7z 2/2 Running 0 24m server2-5bc69c4f75-wcbcq 2/2 Running 0 25m server2-cooike-94ffb459-bdgk4 2/2 Running 0 21m [root@master-01 ~/higress/ops/server]# kubectl -n test1 get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE server1 ClusterIP 10.68.142.192 <none> 80/TCP 3h13m server2 ClusterIP 10.68.27.255 <none> 80/TCP 3h13m server2-1 ClusterIP 10.68.196.212 <none> 80/TCP 3h server2-cooike ClusterIP 10.68.165.157 <none> 80/TCP 21m开始将server1发布发布在istio中,我们需要配置Gateway,destinationRule和VirtualService,如下apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: cookie-gateway namespace: istio-system # 要指定为ingress gateway pod所在名称空间 spec: selector: app: istio-ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - "cookie.linuxea.com" --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: cookie namespace: test1 spec: host: "cookie.linuxea.com" trafficPolicy: tls: mode: DISABLE --- # apiVersion: networking.istio.io/v1beta3 apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: cookie namespace: test1 spec: hosts: - "cookie.linuxea.com" gateways: - istio-system/cookie-gateway - mesh http: - name: server1 headers: response: add: X-Envoy: linuxea route: - destination: host: server1 测试我们通过postman发送请求,无论什么情况,访问cookie.linuxea.com域名都会将请求发往server1server1[root@master-01 ~]# kubectl -n test1 logs -f server1-79fd8456ff-8fj9v [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached. [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production. - using env: export GIN_MODE=release - using code: gin.SetMode(gin.ReleaseMode) [GIN-debug] POST /post --> main.postJson (3 handlers) [GIN-debug] GET /get --> main.getJson (3 handlers) [GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value. Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details. 
[GIN-debug] Listening and serving HTTP on :9999 sed2sort: GET http://server2/get {"status":"ok"} c.Request.Header: X-B3-Parentspanid [c782871d39b17cab] c.Request.Header: X-Forwarded-For [192.20.1.0] c.Request.Header: X-B3-Traceid [42855963a60a52bcc782871d39b17cab] c.Request.Header: Postman-Token [e65b66ff-296f-4033-ab83-e6ccf904c043] c.Request.Header: X-Forwarded-Proto [http] c.Request.Header: X-Envoy-Attempt-Count [1] c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=b5b1bdfa157a62e0c7d88009119f39a681271410387e440353ee23e8db6bedf8;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account] c.Request.Header: X-B3-Spanid [9d5a0f1e0f9ecb58] c.Request.Header: User-Agent [PostmanRuntime/7.28.2] c.Request.Header: Accept [*/*] c.Request.Header: X-B3-Sampled [0] c.Request.Header: Accept-Encoding [gzip, deflate, br] c.Request.Header: X-Envoy-External-Address [192.20.1.0] c.Request.Header: X-Request-Id [e2ec1977-62d9-4cba-90bc-7476e4037b47] [GIN] 2023/09/14 - 09:30:18 | 200 | 1.970681ms | 192.20.1.0 | GET "/get"server2[root@master-01 ~]# kubectl -n test1 logs -f server2-5bc69c4f75-wcbcq [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached. [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production. - using env: export GIN_MODE=release - using code: gin.SetMode(gin.ReleaseMode) [GIN-debug] POST /post --> main.postJson (3 handlers) [GIN-debug] GET /get --> main.getJson (3 handlers) [GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value. Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details. [GIN-debug] Listening and serving HTTP on :9999 sed2sort: GET go-test2 Get "go-test2": unsupported protocol scheme "" c.Request.Header: X-B3-Traceid [bedabe3d26de18e00f3029cb0970a46f] c.Request.Header: X-B3-Parentspanid [0f3029cb0970a46f] c.Request.Header: User-Agent [Go-http-client/1.1] c.Request.Header: Test [] c.Request.Header: Test [] c.Request.Header: X-Forwarded-Proto [http] c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=102cfb7487ec810d309276831d9a41169dedce98bc5efcd81343e1d58d49bdd7;Subject="";URI=spiffe://cluster.local/ns/test1/sa/default] c.Request.Header: X-Envoy-Attempt-Count [1] c.Request.Header: X-B3-Spanid [904970b36297796d] c.Request.Header: X-B3-Sampled [0] c.Request.Header: Cookie [cannary=NotSet] c.Request.Header: Accept-Encoding [gzip] c.Request.Header: X-Request-Id [faa7c8ad-1175-43f3-9635-277747543a85] [GIN] 2023/09/14 - 09:30:18 | 200 | 573.45µs | 127.0.0.6 | GET "/get"基于headerheader的name在约束内假设代码内传递的header名称就是test,因此我们添加test为true,如果通过postman发送的请求头中包含了header等于true就路由到server2-1 http: - name: server2-1 match: - headers: test: exact: "true" route: - destination: host: server2-1如下apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: server2 namespace: test1 spec: hosts: - "server2" http: - name: server2-1 match: - headers: test: exact: "true" route: - destination: host: server2-1 headers: request: set: User-Agent: Mozilla response: add: x-canary: "marksugar" - name: server2 headers: response: add: X-Envoy: linuxea route: - destination: host: server2发起一次测试此时请求就被路由到server2-1上了server1[root@master-01 ~]# kubectl -n test1 logs -f server1-79fd8456ff-8fj9v [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached. 
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production. - using env: export GIN_MODE=release - using code: gin.SetMode(gin.ReleaseMode) [GIN-debug] POST /post --> main.postJson (3 handlers) [GIN-debug] GET /get --> main.getJson (3 handlers) [GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value. Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details. [GIN-debug] Listening and serving HTTP on :9999 sed2sort: GET http://server2/get {"status":"ok"} c.Request.Header: Postman-Token [0a2c1682-fb2a-428f-a9d1-8232bf8372db] c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=b5b1bdfa157a62e0c7d88009119f39a681271410387e440353ee23e8db6bedf8;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account] c.Request.Header: X-B3-Traceid [034f8708b77c398ea651d5f18dae87f3] c.Request.Header: Accept-Encoding [gzip, deflate, br] c.Request.Header: X-Forwarded-Proto [http] c.Request.Header: X-Request-Id [39b021a2-b596-4899-8eec-bc58210bae4f] c.Request.Header: X-Envoy-Attempt-Count [1] c.Request.Header: X-Envoy-External-Address [192.20.1.0] c.Request.Header: Test [true] c.Request.Header: Test [true] c.Request.Header: User-Agent [PostmanRuntime/7.28.2] c.Request.Header: Accept [*/*] c.Request.Header: X-Forwarded-For [192.20.1.0] c.Request.Header: X-B3-Spanid [6178bc93cf359eb7] c.Request.Header: X-B3-Parentspanid [a651d5f18dae87f3] c.Request.Header: X-B3-Sampled [0] [GIN] 2023/09/14 - 09:37:17 | 200 | 4.640183ms | 192.20.1.0 | GET "/get"server2-1[root@master-01 ~]# kubectl -n test1 logs -f server2-1-74bfdd776c-5zs7z [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached. [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production. - using env: export GIN_MODE=release - using code: gin.SetMode(gin.ReleaseMode) [GIN-debug] POST /post --> main.postJson (3 handlers) [GIN-debug] GET /get --> main.getJson (3 handlers) [GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value. Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details. 
[GIN-debug] Listening and serving HTTP on :9999 sed2sort: GET go-test2 Get "go-test2": unsupported protocol scheme "" c.Request.Header: X-Request-Id [8bd6ca98-2b11-4867-85b6-6c0ec629de49] c.Request.Header: User-Agent [Mozilla] c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=102cfb7487ec810d309276831d9a41169dedce98bc5efcd81343e1d58d49bdd7;Subject="";URI=spiffe://cluster.local/ns/test1/sa/default] c.Request.Header: X-B3-Traceid [d3693e3f3f1bc32a0220533906f33a9e] c.Request.Header: X-B3-Parentspanid [0220533906f33a9e] c.Request.Header: Test [true] c.Request.Header: Test [true] c.Request.Header: Accept-Encoding [gzip] c.Request.Header: X-Forwarded-Proto [http] c.Request.Header: X-Envoy-Attempt-Count [1] c.Request.Header: X-B3-Spanid [db50496d52cfae29] c.Request.Header: X-B3-Sampled [0] c.Request.Header: Cookie [cannary=NotSet] [GIN] 2023/09/14 - 09:37:18 | 200 | 230.31µs | 127.0.0.6 | GET "/get"基于cookie我们需要regexheader中cookie的值,如:cannary=marksugar;由冒号分隔,在regex后就变成了"^(.*;.)?(cannary=marksugar)(;.*)?$"代码中仍然需要约束传递的名称,而后,我们修改server2的VirtualService配置:如果cookie包含cannary=marksugar就路由到server2-cooike,添加如下 http: - name: server2-cookie match: - headers: cookie: regex: "^(.*;.)?(cannary=marksugar)(;.*)?$" route: - destination: host: server2-cooike如下apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: server2 namespace: test1 spec: hosts: - "server2" http: - name: server2-cookie match: - headers: cookie: regex: "^(.*;.)?(cannary=marksugar)(;.*)?$" route: - destination: host: server2-cooike - name: server2-1 match: - headers: test: exact: "true" route: - destination: host: server2-1 headers: request: set: User-Agent: Mozilla response: add: x-canary: "marksugar" - name: server2 headers: response: add: X-Envoy: linuxea route: - destination: host: server2接着在postman中添加cooike,左侧中部添加域名:cookie.linuxea.com,而后点击Add Cookie添加cannary=marksugar!当携带cannary=marksugar的请求在流向server2的时候。检测到cookie为cannary=marksugar的时候就会将请求路由到server2-cooike的pod此时请求中携带cannary=marksugar就发送server2-cooike中了server1[root@master-01 ~]# kubectl -n test1 logs -f server1-79fd8456ff-8fj9v [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached. [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production. - using env: export GIN_MODE=release - using code: gin.SetMode(gin.ReleaseMode) [GIN-debug] POST /post --> main.postJson (3 handlers) [GIN-debug] GET /get --> main.getJson (3 handlers) [GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value. Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details. 
[GIN-debug] Listening and serving HTTP on :9999 c.Cookie: marksugar sed2sort: GET http://server2/get {"status":"ok"} c.Request.Header: X-Forwarded-For [192.20.1.0] c.Request.Header: X-Envoy-External-Address [192.20.1.0] c.Request.Header: X-B3-Parentspanid [6b9b59fdbe1b967c] c.Request.Header: X-B3-Sampled [0] c.Request.Header: Postman-Token [bd6ef33a-279b-4b13-853c-c0785eb6161d] c.Request.Header: Cookie [cannary=marksugar] c.Request.Header: X-Envoy-Attempt-Count [1] c.Request.Header: X-B3-Spanid [b53f23943a757ec7] c.Request.Header: Accept-Encoding [gzip, deflate, br] c.Request.Header: X-Request-Id [0cf8ff34-3f41-4197-9e22-0f8d02371942] c.Request.Header: X-Forwarded-Proto [http] c.Request.Header: User-Agent [PostmanRuntime/7.28.2] c.Request.Header: Accept [*/*] c.Request.Header: X-B3-Traceid [3211f68dd35e189b6b9b59fdbe1b967c] c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=b5b1bdfa157a62e0c7d88009119f39a681271410387e440353ee23e8db6bedf8;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account] [GIN] 2023/09/14 - 09:26:37 | 200 | 2.427771ms | 192.20.1.0 | GET "/get"server2-cooike[root@master-01 ~]# kubectl -n test1 logs -f server2-cooike-94ffb459-bdgk4 [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached. [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production. - using env: export GIN_MODE=release - using code: gin.SetMode(gin.ReleaseMode) [GIN-debug] POST /post --> main.postJson (3 handlers) [GIN-debug] GET /get --> main.getJson (3 handlers) [GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value. Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details. [GIN-debug] Listening and serving HTTP on :9999 c.Cookie: marksugar sed2sort: GET go-test2 Get "go-test2": unsupported protocol scheme "" c.Request.Header: User-Agent [Go-http-client/1.1] c.Request.Header: Cookie [cannary=marksugar] c.Request.Header: X-Request-Id [1bb67f53-5a26-4803-b934-6ed5b0af0c1d] c.Request.Header: X-B3-Spanid [1da69ab234ba1e23] c.Request.Header: X-B3-Parentspanid [1ca523b46de45366] c.Request.Header: Test [] c.Request.Header: Test [] c.Request.Header: Accept-Encoding [gzip] c.Request.Header: X-Forwarded-Proto [http] c.Request.Header: X-Envoy-Attempt-Count [1] c.Request.Header: X-Forwarded-Client-Cert [By=spiffe://cluster.local/ns/test1/sa/default;Hash=102cfb7487ec810d309276831d9a41169dedce98bc5efcd81343e1d58d49bdd7;Subject="";URI=spiffe://cluster.local/ns/test1/sa/default] c.Request.Header: X-B3-Traceid [ba730ac5aa266e2b1ca523b46de45366] c.Request.Header: X-B3-Sampled [0] [GIN] 2023/09/14 - 09:26:39 | 200 | 71.76µs | 127.0.0.6 | GET "/get"参考https://regex101.com/r/CPv2kU/3https://istio.io/latest/docs/reference/config/networking/destination-rule/https://istio.io/latest/zh/docs/tasks/traffic-management/request-routing/
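补充一个小示例(非原文内容):上面 VirtualService 里用于 cookie 匹配的正则 "^(.*;.)?(cannary=marksugar)(;.*)?$" 可以先在本地验证一下匹配范围。下面是一个最小的 Go 示意,其中 foo=bar 等样例 Cookie 值是假设的,仅用于演示匹配效果。

package main

import (
	"fmt"
	"regexp"
)

// 验证 VirtualService 中 cookie 匹配所用的正则(取自上文配置),
// 打印若干样例 Cookie 头的匹配结果。
func main() {
	re := regexp.MustCompile(`^(.*;.)?(cannary=marksugar)(;.*)?$`)

	samples := []string{
		"cannary=marksugar",               // 应命中:单独携带灰度 cookie
		"foo=bar; cannary=marksugar; a=b", // 应命中:混在多个 cookie 中(foo=bar 为假设值)
		"cannary=NotSet",                  // 不命中:值不同,走默认路由
	}
	for _, s := range samples {
		fmt.Printf("%-40q => %v\n", s, re.MatchString(s))
	}
}

前两条输出 true、最后一条输出 false,与上文"携带 cannary=marksugar 时路由到 server2-cooike"的预期一致。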
2023-09-13
linuxea:higress基于Header的流量切分
higress是和apisix,kong归为一类的云原生网关,由阿里开源,与众多阿里的开源产品一样,都有相对应的阿里云的商业产品。相比较apisix面向开发人员友善,higress提供了相对完善的声明式配置,你可以更方便的管理编排YAML。并且Higress的插件,如:WAF防护,支持基于 OWASP ModSecurity Core Rule Set (CRS) 的 WAF 规则配置请求屏蔽,基于 URL、请求头等特征屏蔽 HTTP 请求,可以用于防护部分站点资源不对外部暴露爬虫拦截自定义应答,自定义 HTTP 应答状态码、HTTP 应答头,以及 HTTP 应答 Body。可以用于 Mock 响应,也可以用于判断特定状态码后给出自定义应答,例如在触发网关限流策略时实现自定义响应。基于Key限流等higress提供了两种配置方式:1,基于K8s基于k8s情况下,目前官方默认使用的是K8s本身的etcd2,基于docker-compose脱离K8s,基于docker-compose编排部署同时higress可以与 nacos结合, 也可以kruise Rollout做简单的基于请求头和cookie的灰度发布等。安装 higresshelm repo add higress.io https://higress.io/helm-charts --force-update helm install higress -n higress-system higress.io/higress \ --create-namespace --render-subchart-notes --set higress-console.domain=linuxea.higress.local我们配置一个VIP172.16.100.110/24来模拟LoadBalancer[root@linuxea-11 ~]# ip addr add 172.16.100.210/24 dev eth0 [root@linuxea-11 ~]# ip a | grep 172.16.100.210 inet 172.16.100.110/24 scope global secondary eth0 [root@master-01 ~/higress/helm/core]# ping 172.16.100.210 -c 1 PING 172.16.100.210 (172.16.100.210) 56(84) bytes of data. 64 bytes from 172.16.100.210: icmp_seq=1 ttl=64 time=0.034 ms --- 172.16.100.210 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms而后使用kubectl -n higress-system edit svc higress-gateway编辑... spec: allocateLoadBalancerNodePorts: true clusterIP: 10.68.167.66 clusterIPs: - 10.68.167.66 externalIPs: - 172.16.100.210 externalTrafficPolicy: Cluster internalTrafficPolicy: Cluster ipFamilies: ....将ip写入到本地hosts: echo "172.16.100.210 linuxea.higress.local" >> /etc/hosts获取密码export ADMIN_USERNAME=$(kubectl get secret --namespace higress-system higress-console -o jsonpath="{.data.adminUsername}" | base64 -d) export ADMIN_PASSWORD=$(kubectl get secret --namespace higress-system higress-console -o jsonpath="{.data.adminPassword}" | base64 -d) echo -e "Username: ${ADMIN_USERNAME}\nPassword: ${ADMIN_PASSWORD}"登录成功基于请求头的配置higress提供了web界面配置和crd清单配置我们先将 域名配置在本地hosts172.16.100.210 linuxea.higress.local test.abc.com test1.abc.com声明式接着创建ingress的test.abc.comapiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: demo namespace: nginx-demo spec: ingressClassName: higress rules: - host: test.abc.com http: paths: - backend: service: name: test port: number: 80 path: / pathType: Exact --- apiVersion: v1 kind: Service metadata: name: test namespace: nginx-demo spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: linuxea_app version: v0.1 type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: test namespace: nginx-demo spec: replicas: selector: matchLabels: app: linuxea_app version: v0.1 template: metadata: labels: app: linuxea_app version: v0.1 spec: containers: - name: nginx-a # imagePullPolicy: Always image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 ports: - name: http containerPort: 80而后创建一个test-v1-test-v1-canary,域名保持一致,镜像修改为v2.0apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: higress.io/canary: "true" higress.io/canary-by-header: "nginx-demo" higress.io/canary-by-header-value: "v1" higress.io/request-header-control-add: "app v1" name: test-v1-canary namespace: nginx-demo spec: ingressClassName: higress rules: - host: test.abc.com http: paths: - backend: service: name: test-v1-canary port: number: 80 path: / pathType: Exact --- apiVersion: v1 kind: Service metadata: name: test-v1-canary namespace: nginx-demo spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: linuxea_app version: 
v0.2 type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: test-v1-canary namespace: nginx-demo spec: replicas: selector: matchLabels: app: linuxea_app version: v0.2 template: metadata: labels: app: linuxea_app version: v0.2 spec: containers: - name: nginx-a # imagePullPolicy: Always image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 ports: - name: http containerPort: 80查看[root@master-01 ~/canary]# kubectl -n nginx-demo get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE test ClusterIP 10.68.106.114 <none> 80/TCP 29m test-v1-canary ClusterIP 10.68.91.109 <none> 80/TCP 15s [root@master-01 ~/canary]# kubectl -n nginx-demo get pod NAME READY STATUS RESTARTS AGE test-7f58c59f89-ddrgr 1/1 Running 0 29m test-v1-canary-79dd55496c-zwhth 1/1 Running 0 17s测试此时我们如果加上请求头-H "nginx-demo: v1" 就会被指向到v2的版本,否则就v1[root@master-01 ~]# curl -H "nginx-demo: v1" http://test.abc.com linuxea-test-v1-canary-79dd55496c-zwhth.com-127.0.0.1/8 ::1/128 192.20.4.249/24 fe80::b0fb:bcff:fef8:710e/64 version number 2.0 [root@master-01 ~]# curl http://test.abc.com linuxea-test-7f58c59f89-ddrgr.com-127.0.0.1/8 ::1/128 192.20.4.248/24 fe80::cfd:44ff:fe63:d93c/64 version number 1.0而这个功能在web界面仍然一样web首先我们创建域名接着选择路由进行创建选中已经添加域名和请求头 ,以及目标服务接着在创建一组不需要请求头的路由而后测试[root@master-01 ~]# curl -H "app:v1" http://test1.abc.com linuxea-test-v1-canary-79dd55496c-zwhth.com-127.0.0.1/8 ::1/128 192.20.4.249/24 fe80::b0fb:bcff:fef8:710e/64 version number 2.0 [root@master-01 ~]# curl http://test1.abc.com linuxea-test-7f58c59f89-ddrgr.com-127.0.0.1/8 ::1/128 192.20.4.248/24 fe80::cfd:44ff:fe63:d93c/64 version number 1.0你会发现,higress用最简单的方式做了A/B测试如果此时你配置了请求参数那你的请求比如携带name=test1,否则无法访问[root@master-01 ~/higress/ops]# curl -H "app:v1" http://test1.abc.com linuxea-test-7f58c59f89-ddrgr.com-127.0.0.1/8 ::1/128 192.20.4.248/24 fe80::cfd:44ff:fe63:d93c/64 version number 1.0 [root@master-01 ~/higress/ops]# curl -H "app:v1" http://test1.abc.com/?name=test1 linuxea-test-v1-canary-79dd55496c-zwhth.com-127.0.0.1/8 ::1/128 192.20.4.249/24 fe80::b0fb:bcff:fef8:710e/64 version number 2.0
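补充一个小示例(非原文内容):与上面的 curl 测试等价,用一段 Go 程序分别带上与不带 nginx-demo: v1 请求头访问 test.abc.com,对比返回的版本号。前提是本机 hosts 已按上文方式把 test.abc.com 解析到 172.16.100.210;域名与请求头名取自上文配置。

package main

import (
	"fmt"
	"io"
	"net/http"
)

// 分别以带/不带灰度请求头的方式访问同一域名,观察 higress 的路由结果。
func main() {
	cases := []struct {
		name   string
		header map[string]string
	}{
		{"不带请求头(预期 version number 1.0)", nil},
		{"带 nginx-demo: v1(预期 version number 2.0)", map[string]string{"nginx-demo": "v1"}},
	}

	client := &http.Client{}
	for _, c := range cases {
		req, err := http.NewRequest(http.MethodGet, "http://test.abc.com/", nil)
		if err != nil {
			fmt.Println(err)
			continue
		}
		for k, v := range c.header {
			req.Header.Set(k, v)
		}
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println(c.name, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s:\n%s\n", c.name, body)
	}
}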
2023-09-09
linuxea: tekton ci Sidecar(3)
相比较此前在k8s上运行一个Docker Outside of Docker, DooD的方式,由于这个镜像一直运行,因此缓存存在在容器内,可以加速构建和复用已经拉取到资源。而Sidecar就不需要提前部署,Tekton 会将 Sidecar 注入属于 TaskRun 的 Pod,一旦 Task 中的所有 Steps 完成执行,Pod 内运行的每一个 Sidecar 就会终止掉,如果 Sidecar 成功退出,kubectl get pods 命令会将 Pod 的状态返回为 Completed,如果 Sidecar 退出时出现了错误,则返回 Error,而忽略实际执行 Pod 内部 Steps 的容器镜像的退出码值。在每个容器里面都是一个独立的 Docker Daemon,因此也支持并行运行,互不影响,DinD显然更加安全、更加干净。我门仍然需要准备相关的配置1.凭据和SAapiVersion: v1 kind: Secret metadata: name: ucloud-auth annotations: tekton.dev/docker-0: http://uhub.service.ucloud.cn type: kubernetes.io/basic-auth stringData: username: username password: password --- apiVersion: v1 kind: ServiceAccount metadata: name: build-sa secrets: - name: ucloud-auth --- apiVersion: tekton.dev/v1alpha1 kind: PipelineResource metadata: name: git-res namespace: default spec: params: - name: url value: https://gitee.com/marksugar/argocd-example - name: revision value: master type: git --- apiVersion: tekton.dev/v1alpha1 kind: PipelineResource metadata: name: ucloud-image-go spec: type: image params: - name: url value: uhub.service.ucloud.cn/linuxea/golang #构建完的镜像名称 接着配置一个task的测试用例2.测试taskapiVersion: tekton.dev/v1beta1 kind: Task metadata: name: test spec: resources: inputs: - name: repo type: git steps: - name: run-test image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 workingDir: /workspace/repo script: | #!/usr/bin/env sh cd tekton/go && go test3.构建task# task-docker-build.yaml apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: docker-build-push spec: resources: inputs: # 定义输入资源 - name: source # 源代码仓库 type: git params: - name: image description: Reference of the image docker will produce. - name: builder_image description: The location of the docker builder image. default: docker:stable - name: dockerfile description: Path to the Dockerfile to build. default: ./Dockerfile - name: context description: Path to the directory to use as context. default: . - name: build_extra_args description: Extra parameters passed for the build command when building images. default: "" - name: push_extra_args description: Extra parameters passed for the push command when pushing images. 
default: "" - name: insecure_registry description: Allows the user to push to an insecure registry that has been specified default: "" - name: registry_mirror description: Specific the docker registry mirror default: "" - name: registry_url description: private docker images registry url steps: - name: docker-build # 构建步骤 image: $(params.builder_image) env: - name: DOCKER_HOST # 用 TLS 形式通过 TCP 链接 sidecar value: tcp://localhost:2376 - name: DOCKER_TLS_VERIFY # 校验 TLS value: "1" - name: DOCKER_CERT_PATH # 使用 sidecar 守护进程生成的证书 value: /certs/client workingDir: $(resources.inputs.source.path) script: | # docker 构建命令 docker login $(params.registry_url) cd tekton/go docker build \ $(params.build_extra_args) \ --no-cache \ -f $(params.dockerfile) -t $(params.image) $(params.context) volumeMounts: # 声明挂载证书目录 - mountPath: /certs/client name: dind-certs - name: docker-push # image: $(params.builder_image) env: - name: DOCKER_HOST value: tcp://localhost:2376 - name: DOCKER_TLS_VERIFY value: "1" - name: DOCKER_CERT_PATH value: /certs/client workingDir: $(resources.inputs.source.path) script: | # 推送 docker 镜像 docker login $(params.registry_url) echo $(params.image) docker push $(params.push_extra_args) $(params.image) volumeMounts: - mountPath: /certs/client name: dind-certs sidecars: # sidecar 模式,提供 docker daemon服务,实现真正的 DinD 模式 - image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/docker:20.10.16-dind name: server args: - --storage-driver=vfs - --userland-proxy=false - --debug - --insecure-registry=$(params.insecure_registry) - --registry-mirror=$(params.registry_mirror) securityContext: privileged: true env: - name: DOCKER_TLS_CERTDIR # 将生成的证书写入与客户端共享的路径 value: /certs volumeMounts: - mountPath: /certs/client name: dind-certs readinessProbe: # 等待 dind daemon 生成它与客户端共享的证书 periodSeconds: 1 exec: command: ["ls", "/certs/client/ca.pem"] volumes: # 使用 emptyDir 的形式即可 - name: dind-certs emptyDir: {}上面的 Task 使用了一个 registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/docker:20.10.16-dind 镜像来提供 docker 服务端,在 sidecar 模式容器是共享 network namespace 的,通过 tcp://localhost:2376 和 docker 服务端进行通信,由于还使用的是 TLS 证书模式,所以需要将证书目录进行声明挂载。在dockerfile的细节部分需要进入dockerfile目录进行构建4.pipeline我门需要指定params参数最终给每个task传递使用apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: test-sidecar-pipeline spec: resources: # 为 Tasks 提供输入和输出资源声明 - name: git-res type: git params: - name: image type: string - name: image-tag type: string default: "v0.4.0" - name: registry_url type: string default: "uhub.service.ucloud.cn" - name: registry_mirror type: string default: "https://ot2k4d59.mirror.aliyuncs.com/" - name: insecure_registry type: string default: "uhub.service.ucloud.cn" tasks: # 添加task到流水线中 # 运行应用测试 - name: test taskRef: name: test resources: inputs: - name: repo # Task 输入名称 resource: git-res # Pipeline 资源名称 - name: get-build-id taskRef: name: generate-build-id params: - name: base-version value: $(params.image-tag) # 构建并推送 Docker 镜像 - name: build-and-push taskRef: name: docker-build-push # 使用上面定义的镜像构建任务 runAfter: - test # 测试任务执行之后 resources: inputs: - name: source # 指定输入的git仓库资源 resource: git-res params: - name: image value: "$(params.image):$(tasks.get-build-id.results.build-id)" - name: registry_url value: $(params.registry_url) - name: insecure_registry value: $(params.insecure_registry) - name: registry_mirror value: $(params.registry_mirror)5.pipelinerun这些参数其中包括tag就会通过pipelinerun来进行传递,或者使用默认值apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: test-sidecar-pipelinerun spec: serviceAccountName: build-sa pipelineRef: name: 
test-sidecar-pipeline resources: - name: git-res # 指定输入的git仓库资源 resourceRef: name: git-res params: - name: image value: uhub.service.ucloud.cn/linuxea/golang - name: image-tag # 传入版本号 value: "v0.4.1"apply[root@master1 Sidecar]# kubectl apply -f ./ task.tekton.dev/generate-build-id created task.tekton.dev/docker-build-push created task.tekton.dev/test created pipeline.tekton.dev/test-sidecar-pipeline created pipelinerun.tekton.dev/test-sidecar-pipelinerun created我门仍然通过命令查看构建pipelinerun[root@master1 Sidecar]# tkn pipelinerun describe test-sidecar-pipelinerun Name: test-sidecar-pipelinerun Namespace: default Pipeline Ref: test-sidecar-pipeline Service Account: build-sa Labels: tekton.dev/pipeline=test-sidecar-pipeline Status STARTED DURATION STATUS 12 seconds ago --- Running Timeouts Pipeline: 1h0m0s Params NAME VALUE ∙ image uhub.service.ucloud.cn/linuxea/golang ∙ image-tag v0.4.1查看taskrun[root@master1 Sidecar]# tkn taskrun list NAME STARTED DURATION STATUS test-sidecar-pipelinerun-build-and-push 4 minutes ago 2m1s Succeeded test-sidecar-pipelinerun-get-build-id 4 minutes ago 12s Succeeded test-sidecar-pipelinerun-test 4 minutes ago 10s Succeeded test-pipelinerun-build-and-push-test 22 hours ago 11s Succeeded test-pipelinerun-get-build-id 22 hours ago 9s Succeeded test-pipelinerun-test 22 hours ago 9s Succeeded build-and-push 1 day ago 12m13s Succeeded testrun 4 days ago 2m45s Succeeded查看taskrun日志[root@master1 Sidecar]# tkn taskrun logs test-sidecar-pipelinerun-get-build-id [get-timestamp] fetch https://mirrors.ustc.edu.cn/alpine/v3.18/main/x86_64/APKINDEX.tar.gz [get-timestamp] fetch https://mirrors.ustc.edu.cn/alpine/v3.18/community/x86_64/APKINDEX.tar.gz [get-timestamp] (1/1) Installing tzdata (2023c-r1) [get-timestamp] OK: 11 MiB in 19 packages [get-timestamp] Current Timestamp: 20230613-151739 [get-timestamp] 20230613-151739 [get-buildid] v0.4.1-20230613-151739 [root@master1 Sidecar]# tkn taskrun logs test-sidecar-pipelinerun-test [git-source-repo-lgfhp] {"level":"info","ts":1686640658.3913343,"caller":"git/git.go:178","msg":"Successfully cloned https://gitee.com/marksugar/argocd-example @ d270fd8931fb059485622edb2ef1aa1209b7d42c (grafted, HEAD, origin/master) in path /workspace/repo"} [git-source-repo-lgfhp] {"level":"info","ts":1686640658.4037006,"caller":"git/git.go:217","msg":"Successfully initialized and updated submodules in path /workspace/repo"} [run-test] PASS [run-test] ok test 0.002s[root@master1 Sidecar]# tkn taskrun logs test-sidecar-pipelinerun-build-and-push -f [git-source-source-vx28b] {"level":"info","ts":1686640670.9329953,"caller":"git/git.go:178","msg":"Successfully cloned https://gitee.com/marksugar/argocd-example @ d270fd8931fb059485622edb2ef1aa1209b7d42c (grafted, HEAD, origin/master) in path /workspace/source"} [git-source-source-vx28b] {"level":"info","ts":1686640670.948619,"caller":"git/git.go:217","msg":"Successfully initialized and updated submodules in path /workspace/source"} [docker-build] Authenticating with existing credentials... [docker-build] WARNING! Your password will be stored unencrypted in /root/.docker/config.json. [docker-build] Configure a credential helper to remove this warning. 
See [docker-build] https://docs.docker.com/engine/reference/commandline/login/#credentials-store [docker-build] [docker-build] Login Succeeded [docker-build] Sending build context to Docker daemon 5.12kB [docker-build] Step 1/5 : FROM registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 [docker-build] 1.18.10-alpine3.17: Pulling from marksugar-k8s/golang [docker-build] 8921db27df28: Pulling fs layer [docker-build] a2f8637abd91: Pulling fs layer [docker-build] 4ba80a8cd2c7: Pulling fs layer [docker-build] dbc2308a4587: Pulling fs layer [docker-build] dbc2308a4587: Waiting [docker-build] a2f8637abd91: Verifying Checksum [docker-build] a2f8637abd91: Download complete [docker-build] dbc2308a4587: Verifying Checksum [docker-build] dbc2308a4587: Download complete [docker-build] 8921db27df28: Download complete [docker-build] 8921db27df28: Pull complete [docker-build] a2f8637abd91: Pull complete [docker-build] 4ba80a8cd2c7: Verifying Checksum [docker-build] 4ba80a8cd2c7: Download complete [docker-build] 4ba80a8cd2c7: Pull complete [docker-build] dbc2308a4587: Pull complete [docker-build] Digest: sha256:ab5685692564e027aa84e2980855775b2e48f8fc82c1590c0e1e8cbc2e716542 [docker-build] Status: Downloaded newer image for registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 [docker-build] ---> a77f45e5f987 [docker-build] Step 2/5 : RUN mkdir /test -p [docker-build] ---> Running in 7924d227b939 [docker-build] Removing intermediate container 7924d227b939 [docker-build] ---> 12e24160d708 [docker-build] Step 3/5 : WORKDIR /test [docker-build] ---> Running in 14f7ab09a508 [docker-build] Removing intermediate container 14f7ab09a508 [docker-build] ---> 0faa68db85f7 [docker-build] Step 4/5 : COPY . . [docker-build] ---> 9411b3cf535a [docker-build] Step 5/5 : CMD ["go test"] [docker-build] ---> Running in 8356b0c1c4ba [docker-build] Removing intermediate container 8356b0c1c4ba [docker-build] ---> 921c8de056ef [docker-build] Successfully built 921c8de056ef [docker-build] Successfully tagged uhub.service.ucloud.cn/linuxea/golang:v0.4.1-20230613-151739 [docker-push] Authenticating with existing credentials... [docker-push] WARNING! Your password will be stored unencrypted in /root/.docker/config.json. [docker-push] Configure a credential helper to remove this warning. See [docker-push] https://docs.docker.com/engine/reference/commandline/login/#credentials-store [docker-push] [docker-push] Login Succeeded [docker-push] uhub.service.ucloud.cn/linuxea/golang:v0.4.1-20230613-151739 [docker-push] The push refers to repository [uhub.service.ucloud.cn/linuxea/golang] [docker-push] a13db5615e44: Preparing [docker-push] 55da38d80f35: Preparing [docker-push] 4ad1c2ef216c: Preparing [docker-push] c23db623ee98: Preparing [docker-push] c1bfd5512d71: Preparing [docker-push] 8e012198eea1: Preparing [docker-push] 8e012198eea1: Waiting [docker-push] c23db623ee98: Layer already exists [docker-push] 4ad1c2ef216c: Layer already exists [docker-push] c1bfd5512d71: Layer already exists [docker-push] 8e012198eea1: Layer already exists [docker-push] 55da38d80f35: Pushed [docker-push] a13db5615e44: Pushed [docker-push] v0.4.1-20230613-151739: digest: sha256:a6ad200509fe8776b5ae2aaaf2ddf8387b0af30ae0667bfb67883bffefe7a962 size: 1571 [sidecar-server] 2023/06/13 07:19:44 Exiting...回到界面查看镜像仓库此时已经push了最新的
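补充一个小示例(非原文内容):流水线里的 test 任务只是执行 cd tekton/go && go test,下面给出一个最小的 Go 测试文件示意,说明该步骤会运行什么形态的用例。仓库中真实的测试代码并未在文中展示,这里的包名、Add 函数与断言均为假设。

// app_test.go —— 仅为示意 test 任务所执行的测试文件形态
package app

import "testing"

// Add 是一个假设的业务函数,仅用于演示 go test 的运行方式。
func Add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
	if got := Add(1, 2); got != 3 {
		t.Fatalf("Add(1, 2) = %d, want 3", got)
	}
}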
2023-09-02
linuxea: 使用tekton构建镜像(2)
现在我门将这些组合起来形成一个流水线,使用的 CRD 对象是 Pipeline 。假如是一个新的环境,我门仍然需要配置此前的凭据,比如git认证,镜像仓库认证,必要的配置和版本号的传递等首先配置一个使用的凭据在清单中1.配置凭据和SAapiVersion: v1 kind: Secret metadata: name: ucloud-auth annotations: tekton.dev/docker-0: http://uhub.service.ucloud.cn type: kubernetes.io/basic-auth stringData: username: username password: password --- apiVersion: v1 kind: ServiceAccount metadata: name: build-sa secrets: - name: ucloud-auth --- apiVersion: tekton.dev/v1alpha1 kind: PipelineResource metadata: name: git-res namespace: default spec: params: - name: url value: https://gitee.com/marksugar/argocd-example - name: revision value: master type: git --- apiVersion: tekton.dev/v1alpha1 kind: PipelineResource metadata: name: ucloud-image-go spec: type: image params: - name: url value: uhub.service.ucloud.cn/linuxea/golang #构建完的镜像名称 PipelineResource的ucloud-image-go中并没有写入版本号,这是因为在task会进行生成2.配置task1task1作为一个假定的测试apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: test spec: resources: inputs: - name: repo type: git steps: - name: run-test image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 workingDir: /workspace/repo script: | #!/usr/bin/env sh cd tekton/go && go test3.生成版本号我门需要提供一个版本号,使用task生成一个由环境变量的传入-年月日时间的版本号apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: generate-build-id spec: description: >- Given a base version, this task generates a unique build id by appending the base-version to the current timestamp. params: - name: base-version description: Base product version type: string default: "1.0" results: - name: timestamp description: Current timestamp - name: build-id description: ID of the current build steps: - name: get-timestamp image: uhub.service.ucloud.cn/marksugar-k8s/bash:5.0.18 script: | #!/usr/bin/env bash sed -i 's/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g' /etc/apk/repositories apk add -U tzdata cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime ts=`date "+%Y%m%d-%H%M%S"` echo "Current Timestamp: ${ts}" echo ${ts} | tr -d "\n" | tee $(results.timestamp.path) - name: get-buildid image: uhub.service.ucloud.cn/marksugar-k8s/bash:5.0.18 script: | #!/usr/bin/env bash ts=`cat $(results.timestamp.path)` buildId=$(inputs.params.base-version)-${ts} echo ${buildId} | tr -d "\n" | tee $(results.build-id.path)3.配置镜像构建我门需要修改构建时候的命令,以便于添加传入的版本号# task-build-push.yaml apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: build-and-push-test spec: resources: inputs: # 定义输入资源 - name: repo #输入资源,就是gitee的那个仓库 type: git outputs: # 定义输出资源 - name: linuxea # 输出镜像名字 type: image params: # 定义参数 - name: DockerfileURL #指明 dockerfile 在仓库中的哪个位置 type: string default: $(resources.inputs.repo.path)/tekton/go/Dockerfile # repo资源的路径 description: The path to the dockerfile to build - name: pathToContext #指明构建上下文的路径 type: string default: $(resources.inputs.repo.path) # repo资源的路径 description: the build context used by docker daemon - name: imageTag type: string default: "v0.2.0" description: the docker image tag steps: - name: build-and-push image: docker:stable script: | #!/usr/bin/env sh docker login uhub.service.ucloud.cn cd /workspace/repo/tekton/go docker build -t $(resources.outputs.linuxea.url):$(params.imageTag) . 
docker push $(resources.outputs.linuxea.url):$(params.imageTag) # 这边的参数都是在 input 和 output 中定义的 env: - name: DOCKER_HOST value: tcp://docker-dind.tekton-pipelines:23754.配置pipeline管道中,我门需要注意每个task运行的顺序是由runAfter来决定的,在build-and-push-test阶段,传入了环境变量imageTagapiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: test-pipeline spec: resources: # 为 Tasks 提供输入和输出资源声明 - name: git-res type: git - name: ucloud-image-go type: image params: - name: image-tag type: string tasks: # 添加task到流水线中 - name: get-build-id taskRef: name: generate-build-id # 引入generate-build-id task params: - name: base-version value: $(params.image-tag) # 运行应用测试 - name: test taskRef: name: test resources: inputs: - name: repo # Task 输入名称 resource: git-res # Pipeline 资源名称 # 构建并推送 Docker 镜像 - name: build-and-push-test taskRef: name: build-and-push-test runAfter: - test # 测试任务执行之后 - get-build-id resources: inputs: - name: repo # 指定输入的git仓库资源 resource: git-res outputs: # 指定输出的镜像资源 - name: linuxea resource: ucloud-image-go params: - name: imageTag value: "$(tasks.get-build-id.results.build-id)"5.pipelinerun接着在spec中引入build-sa的sa,和各个资源和管道apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: test-pipelinerun spec: serviceAccountName: build-sa # 关联带有认证信息的 ServiceAccount pipelineRef: name: test-pipeline resources: - name: git-res # 指定输入的git仓库资源 resourceRef: name: git-res - name: ucloud-image-go # 指定输出的镜像资源 resourceRef: name: ucloud-image-go params: - name: image-tag # 传入版本号 value: "v0.3.0"注意,这里的image-tag的变量是传递给管道中名称为 get-build-id的task, 而这个task指向的是generate-build-id这个实际的task定义,而generate-build-id这个task在接收到image-tag的参数后返回了一个 buildId的环境变量,这个变量被获取后赋值给imageTag,而在构建镜像之前这个变量已经获取完成并且被传入到build-and-push-test的构建步骤中。6.构建查看创建完成后,通过命令查看[root@master1 pipeline]# tkn pipelinerun describe test-pipelinerun Name: test-pipelinerun Namespace: default Pipeline Ref: test-pipeline Service Account: build-sa Labels: tekton.dev/pipeline=test-pipeline Status STARTED DURATION STATUS 6 seconds ago --- Running ⏱ Timeouts Pipeline: 1h0m0s查看当前的taskrun[root@master1 pipeline]# tkn taskrun list NAME STARTED DURATION STATUS test-pipelinerun-build-and-push-test 15 minutes ago 11s Succeeded test-pipelinerun-get-build-id 15 minutes ago 9s Succeeded test-pipelinerun-test 15 minutes ago 9s Succeeded build-and-push 7 hours ago 12m13s Succeeded testrun 3 days ago 2m45s Succeeded根据name查看日志[root@master1 pipeline]# tkn taskrun logs test-pipelinerun-test [git-source-repo-gknf2] {"level":"info","ts":1686559991.0322871,"caller":"git/git.go:178","msg":"Successfully cloned https://gitee.com/marksugar/argocd-example @ d270fd8931fb059485622edb2ef1aa1209b7d42c (grafted, HEAD, origin/master) in path /workspace/repo"} [git-source-repo-gknf2] {"level":"info","ts":1686559991.0433397,"caller":"git/git.go:217","msg":"Successfully initialized and updated submodules in path /workspace/repo"} [run-test] PASS [run-test] ok test 0.002s [root@master1 pipeline]# tkn taskrun logs test-pipelinerun-get-build-id [get-timestamp] fetch https://mirrors.ustc.edu.cn/alpine/v3.18/main/x86_64/APKINDEX.tar.gz [get-timestamp] fetch https://mirrors.ustc.edu.cn/alpine/v3.18/community/x86_64/APKINDEX.tar.gz [get-timestamp] (1/1) Installing tzdata (2023c-r1) [get-timestamp] OK: 11 MiB in 19 packages [get-timestamp] Current Timestamp: 20230612-165311 [get-timestamp] 20230612-165311 [get-buildid] v0.3.0-20230612-165311 [root@master1 pipeline]# tkn taskrun logs test-pipelinerun-build-and-push-test [create-dir-linuxea-q6w6n] 2023/06/12 08:53:16 warning: unsuccessful cred copy: ".docker" from "/tekton/creds" to 
"/home/nonroot": unable to create destination directory: mkdir /home/nonroot: permission denied [git-source-repo-w5dcj] {"level":"info","ts":1686559998.4194562,"caller":"git/git.go:178","msg":"Successfully cloned https://gitee.com/marksugar/argocd-example @ d270fd8931fb059485622edb2ef1aa1209b7d42c (grafted, HEAD, origin/master) in path /workspace/repo"} [git-source-repo-w5dcj] {"level":"info","ts":1686559998.4337826,"caller":"git/git.go:217","msg":"Successfully initialized and updated submodules in path /workspace/repo"} [build-and-push] Authenticating with existing credentials... [build-and-push] WARNING! Your password will be stored unencrypted in /root/.docker/config.json. [build-and-push] Configure a credential helper to remove this warning. See [build-and-push] https://docs.docker.com/engine/reference/commandline/login/#credentials-store [build-and-push] [build-and-push] Login Succeeded [build-and-push] Sending build context to Docker daemon 5.12kB [build-and-push] Step 1/5 : FROM registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 [build-and-push] ---> a77f45e5f987 [build-and-push] Step 2/5 : RUN mkdir /test -p [build-and-push] ---> Using cache [build-and-push] ---> 48d724b29eff [build-and-push] Step 3/5 : WORKDIR /test [build-and-push] ---> Using cache [build-and-push] ---> 9b6eda13d6c1 [build-and-push] Step 4/5 : COPY . . [build-and-push] ---> Using cache [build-and-push] ---> a5c71d579512 [build-and-push] Step 5/5 : CMD ["go test"] [build-and-push] ---> Using cache [build-and-push] ---> 2f377c99476e [build-and-push] Successfully built 2f377c99476e [build-and-push] Successfully tagged uhub.service.ucloud.cn/linuxea/golang:v0.3.0-20230612-165311 [build-and-push] The push refers to repository [uhub.service.ucloud.cn/linuxea/golang] [build-and-push] 6a7300559c98: Preparing [build-and-push] 17600296503b: Preparing [build-and-push] 4ad1c2ef216c: Preparing [build-and-push] c23db623ee98: Preparing [build-and-push] c1bfd5512d71: Preparing [build-and-push] 8e012198eea1: Preparing [build-and-push] 8e012198eea1: Waiting [build-and-push] 17600296503b: Layer already exists [build-and-push] 6a7300559c98: Layer already exists [build-and-push] 4ad1c2ef216c: Layer already exists [build-and-push] c1bfd5512d71: Layer already exists [build-and-push] c23db623ee98: Layer already exists [build-and-push] 8e012198eea1: Layer already exists [build-and-push] v0.3.0-20230612-165311: digest: sha256:b649a5f42e753e36562b574e26018301e8e839f129f92e3f87fd29f4b30734fc size: 1572 [image-digest-exporter-htxn8] {"severity":"INFO","timestamp":"2023-06-12T08:53:22.548887491Z","caller":"logging/config.go:116","message":"Successfully created the logger."} [image-digest-exporter-htxn8] {"severity":"INFO","timestamp":"2023-06-12T08:53:22.548992849Z","caller":"logging/config.go:117","message":"Logging level set to: info"} [image-digest-exporter-htxn8] {"severity":"INFO","timestamp":"2023-06-12T08:53:22.549269268Z","caller":"imagedigestexporter/main.go:59","message":"No index.json found for: linuxea","commit":"68f2a66"}此时pod构建结束[root@master1 build]# kubectl get pods -w NAME READY STATUS RESTARTS AGE build-and-push-pod 0/4 Completed 0 7h26m test-pipelinerun-build-and-push-test-pod 0/4 Completed 0 8m18s test-pipelinerun-get-build-id-pod 0/2 Completed 0 8m33s test-pipelinerun-test-pod 0/2 Completed 0 8m33s testrun-pod 0/2 Completed 0 3d7h test-pipelinerun-get-build-id-pod 0/2 Terminating 0 8m37s test-pipelinerun-test-pod 0/2 Terminating 0 8m37s test-pipelinerun-build-and-push-test-pod 0/4 Terminating 0 
8m22s test-pipelinerun-get-build-id-pod 0/2 Terminating 0 8m37s test-pipelinerun-test-pod 0/2 Terminating 0 8m37s test-pipelinerun-build-and-push-test-pod 0/4 Terminating 0 8m22s test-pipelinerun-get-build-id-pod 0/2 Pending 0 0s test-pipelinerun-test-pod 0/2 Pending 0 0s test-pipelinerun-get-build-id-pod 0/2 Pending 0 0s test-pipelinerun-test-pod 0/2 Pending 0 0s test-pipelinerun-get-build-id-pod 0/2 Init:0/3 0 0s test-pipelinerun-test-pod 0/2 Init:0/4 0 0s test-pipelinerun-test-pod 0/2 Init:1/4 0 1s test-pipelinerun-get-build-id-pod 0/2 Init:1/3 0 1s test-pipelinerun-test-pod 0/2 Init:2/4 0 2s test-pipelinerun-get-build-id-pod 0/2 Init:2/3 0 2s test-pipelinerun-test-pod 0/2 Init:3/4 0 3s test-pipelinerun-get-build-id-pod 0/2 PodInitializing 0 3s test-pipelinerun-get-build-id-pod 2/2 Running 0 4s test-pipelinerun-test-pod 0/2 PodInitializing 0 4s test-pipelinerun-get-build-id-pod 2/2 Running 0 4s test-pipelinerun-test-pod 2/2 Running 0 5s test-pipelinerun-test-pod 2/2 Running 0 5s test-pipelinerun-test-pod 1/2 NotReady 0 8s test-pipelinerun-get-build-id-pod 1/2 NotReady 0 8s test-pipelinerun-test-pod 0/2 Completed 0 9s test-pipelinerun-get-build-id-pod 0/2 Completed 0 9s test-pipelinerun-build-and-push-test-pod 0/4 Pending 0 0s test-pipelinerun-build-and-push-test-pod 0/4 Pending 0 0s test-pipelinerun-build-and-push-test-pod 0/4 Init:0/3 0 0s test-pipelinerun-build-and-push-test-pod 0/4 Init:1/3 0 0s test-pipelinerun-build-and-push-test-pod 0/4 Init:2/3 0 2s test-pipelinerun-build-and-push-test-pod 0/4 PodInitializing 0 3s test-pipelinerun-build-and-push-test-pod 4/4 Running 0 4s test-pipelinerun-build-and-push-test-pod 4/4 Running 0 4s test-pipelinerun-build-and-push-test-pod 3/4 NotReady 0 5s test-pipelinerun-build-and-push-test-pod 2/4 NotReady 0 6s test-pipelinerun-build-and-push-test-pod 1/4 NotReady 0 9s test-pipelinerun-build-and-push-test-pod 0/4 Completed 0 10s任务成功[root@master1 pipeline]# tkn pipelinerun describe test-pipelinerun Name: test-pipelinerun Namespace: default Pipeline Ref: test-pipeline Service Account: build-sa Labels: tekton.dev/pipeline=test-pipeline STARTED DURATION STATUS 1 minute ago 20s Succeeded Timeouts Pipeline: 1h0m0s Params NAME VALUE ∙ image-tag v0.3.0镜像也推送完成在tekton中看到详细的细节,其中test的task1做了go test,build-and-push进行了镜像打包和推送。
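另外一个小细节：PipelineRun 的 metadata.name 是固定的，同名对象已经存在时不能再次创建，想重新跑一次要么删掉旧的，要么换个名字。一个常见的做法（下面只是示意，v0.3.1 为假设的版本号）是改用 generateName，每次提交都会自动生成一个新的 PipelineRun：
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: test-pipelinerun-    # 自动追加随机后缀
spec:
  serviceAccountName: build-sa       # 复用带镜像仓库认证的 ServiceAccount
  pipelineRef:
    name: test-pipeline
  resources:
    - name: git-res
      resourceRef:
        name: git-res
    - name: ucloud-image-go
      resourceRef:
        name: ucloud-image-go
  params:
    - name: image-tag
      value: "v0.3.1"                # 假设的新版本号
注意 generateName 只能配合 kubectl create -f 提交，kubectl apply 需要明确的 name，会直接报错。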
2023-08-27
linuxea: Tekton在k8s上的简单应用(1)
前言: jenkins作为老派构建打包工具,无论在云原生之前,还是容器出现后,jenkins的脚本构建,pipeline,共享库等不断在丰富jenkins的能力。而在这期间,我们非常有必要提一下KodeRover公司的zadig。jenkins并不是为了kubernetes而生的,但是zadig是为基于Kubernetes下协同而生的。 除此之外,cd中spinnaker,argocd等更专注于持续部署的能力。而tekton被捐赠后,重新为基于kubernetes中的ci/cd工具又重新做了定义。tektonTekton 的前身是 Knative 项目的 build-pipeline 项目,这个项目是为了给 build 模块增加 pipeline 的功能,但是随着不同的功能加入到 Knative build 模块中,build 模块越来越变得像一个通用的 CI/CD 系统,于是,索性将 build-pipeline 剥离出 Knative,就变成了现在的 Tekton,而 Tekton 也从此致力于提供全功能、标准化的云原生 CI/CD 解决方案。Tekton 为 CI/CD 系统提供了诸多好处:可定制:Tekton 是完全可定制的,具有高度的灵活性,我们可以定义非常详细的构建块目录,供开发人员在各种场景中使用。可重复使用:Tekton 是完全可移植的,任何人都可以使用给定的流水线并重用其构建块,可以使得开发人员无需"造轮子"就可以快速构建复杂的流水线。可扩展:Tekton Catalog 是社区驱动的 Tekton 构建块存储库,我们可以使用 Tekton Catalog 中定义的组件快速创建新的流水线并扩展现有管道。标准化:Tekton 在你的 Kubernetes 集群上作为扩展安装和运行,并使用完善的 Kubernetes 资源模型,Tekton 工作负载在 Kubernetes Pod 内执行。伸缩性:要增加工作负载容量,只需添加新的节点到集群即可,Tekton 可随集群扩展,无需重新定义资源分配或对管道进行任何其他修改。Tekton 由一些列组件组成:Tekton Pipelines 是 Tekton 的基础,它定义了一组 Kubernetes CRD 作为构建块,我们可以使用这些对象来组装 CI/CD 流水线。Tekton Triggers 允许我们根据事件来实例化流水线,例如,可以我们在每次将 PR 合并到 GitHub 仓库的时候触发流水线实例和构建工作。Tekton CLI 提供了一个名为 tkn 的命令行界面,它构建在 Kubernetes CLI 之上,运行和 Tekton 进行交互。Tekton Dashboard 是 Tekton Pipelines 的基于 Web 的一个图形界面,可以线上有关流水线执行的相关信息。Tekton Catalog 是一个由社区贡献的高质量 Tekton 构建块(任务、流水线等)存储库,可以直接在我们自己的流水线中使用这些构建块。Tekton Hub 是一个用于访问 Tekton Catalog 的 Web 图形界面工具。Tekton Operator 是一个 Kubernetes Operator,可以让我们在 Kubernetes 集群上安装、更新、删除 Tekton 项目。而每个task作为最小单元,每个task中steps中定义不同的阶段,使用者可以灵活组合抽象使用。而TaskRun可以使task运行,Pipeline 将task组合起来,PipelineRun将Pipeline组合起来,环境变量层层进行传递。安装我门仍然需要遵循安装的版本要求,tekton作为一个CRD在kubernetes中。Required Kubernetes VersionStarting from the v0.24.x release of Tekton: Kubernetes version 1.18 or laterStarting from the v0.27.x release of Tekton: Kubernetes version 1.19 or laterStarting from the v0.30.x release of Tekton: Kubernetes version 1.20 or laterStarting from the v0.33.x release of Tekton: Kubernetes version 1.21 or laterStarting from the v0.39.x release of Tekton: Kubernetes version 1.22 or laterStarting from the v0.41.x release of Tekton: Kubernetes version 1.23 or laterStarting from the v0.45.x release of Tekton: Kubernetes version 1.24 or later并且我们可以在releases中查看最新的版本和稳定版的支持时间我的集群是1.21,因此我安装v0.33.2-0.350.33.2wget https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.33.2/release.yaml -O 0.33.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.33.2@sha256:ce01f1f89751bc2d2465d9f09f1918dcd4302551193475bdf0d23f12d5795ce1#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/controller:v0.33.2#g' 0.33.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook:v0.33.2@sha256:558b4c734c1dc7be8b2f3681a105bd23cc704fbf7525f0a5e7673beed40a7ca6#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/webhook:v0.33.2#g' 0.33.yaml kubectl apply -f 0.33.yaml另外我搬运了最新的版本镜像v.47下载release.yaml 文件进行安装,如下所示的命令:kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.47.1/release.yaml或者我门修改一下镜像地址后进行安装wget https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.47.1/release.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.47.1@sha256:9336443dc0b28585a8f75bb9d56082b69fcc61b0e92e968f8cd2ac4dd1f781c5#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/controller:v0.47.1#g' release.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/resolvers:v0.47.1@sha256:e68ab3f4efa096a4aa96fec0bc8fd91ee2d7a4bcf671ae0c90b2345cd0cb89c7#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/resolvers:v0.47.1#g' release.yaml sed -i 
's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook:v0.47.1@sha256:0f045f5e9a9bc025ab1e66909476616a7c2d69d0b0fcf2fbbeefdc8c99d8fd5b#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/webhook:v0.47.1#g' release.yaml kubectl apply -f release.yaml0.47.3wget https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.47.3/release.yaml -O 0.47.3.yaml sed -i 's@gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.47.3@sha256:cfbca9c19a8e7fe4f68b80499c9d921a03240ae2185d6f7d536c33b1177138ca@uhub.service.ucloud.cn/marksugar-k8s/controller:v0.47.3@g' 0.47.3.yaml sed -i 's@gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/resolvers:v0.47.3@sha256:ea46db5fd1c6c1774762fee57cb49aef6a9a6ba862c85232c8a89f1ab67b43fd@uhub.service.ucloud.cn/marksugar-k8s/resolvers:v0.47.3@g' 0.47.3.yaml sed -i 's@gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook:v0.47.3@sha256:20fe883b019e80fecddbb97a86d6773925c7b6727cf5e8e7007c47416bd9ebf7@uhub.service.ucloud.cn/marksugar-k8s/webhook:v0.47.3@g' 0.47.3.yaml0.35.0wget https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.35.0/release.yaml -O 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.35.0@sha256:1ba151e081bea043d82c684b4b63042a00886901bcb91a83db06325857b85e9c#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/controller:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook:v0.35.0@sha256:7244bf5e496347e2ed40347b152c0298b7ab87a16a24149691acea6f59bfde76#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/webhook:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/kubeconfigwriter:v0.35.0@sha256:9cb21a57f5f51813e54321f1cf20ae11e573d622ab8064f39cdfdff702905c1e#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/kubeconfigwriter:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.35.0@sha256:f98b666818a529e1fe6e7bb94cae77402ba0cabbf3a3fe00e4711d75b107472b#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/git-init:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint:v0.35.0@sha256:5730032a7daf7526fae6b586badf849ffe4539e16fd8be927ec7e320564486be#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/entrypoint:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/nop:v0.35.0@sha256:1d65a20cd5fbc79dc10e48ce9d2f7251736dac13b302b49a1c9a8717c5f2b5c5#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/nop:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter:v0.35.0@sha256:c2e028e35a3c3a38e584bec51cb21483411eb8e0dd02c22c2f910c29df5892af#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/imagedigestexporter:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/pullrequest-init:v0.35.0@sha256:155f059340b19364ce2b6bd40ac565070885db8922dc6e9a52e0a7181747476a#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/pullrequest-init:v0.35.0#g' 0.35.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/workingdirinit:v0.35.0@sha256:f4dc5477599754b42261ce367ab5590ca7c7866f64e5381e894d11e43231c268#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/workingdirinit:v0.35.0#g' 0.35.yaml kubectl apply -f 0.35.yaml安装后,自动创建了名称空间tekton-pipelines[root@master1 tekton]# kubectl -n tekton-pipelines get pod NAME READY STATUS RESTARTS AGE tekton-pipelines-controller-7b59f8bc-l2hmk 1/1 Running 0 2m17s 
tekton-pipelines-webhook-7f6889f9b7-gwcvb 1/1 Running 0 2m17s而后下载一个cli包wget https://github.com/tektoncd/cli/releases/download/v0.31.0/tkn_0.31.0_Linux_x86_64.tar.gz tar xf tkn_0.31.0_Linux_x86_64.tar.gz mv tkn /usr/local/sbin/安装dashboardwget https://storage.googleapis.com/tekton-releases/dashboard/previous/v0.35.0/release.yaml -O 0.35-dashboard.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/dashboard/cmd/dashboard:v0.35.0@sha256:057471aa317c900e30178d64d83fc9d32cf2fcd718632243f7b902403b64981b#registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/dashboard:v0.35.0#g' 0.35-dashboard.yaml kubectl apply -f 0.35-dashboard.yaml安装完成后,配置一个ingress-nginx的域名[root@master1 tekton]# kubectl -n tekton-pipelines get pod -w NAME READY STATUS RESTARTS AGE tekton-dashboard-7787bd585d-6glr9 1/1 Running 0 6m6s tekton-pipelines-controller-7489bd899d-w7k55 1/1 Running 0 34m tekton-pipelines-webhook-5cb648d57f-scd5g 1/1 Running 0 34m配置ingress-nginx的域名apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: tekton-dashboard namespace: tekton-pipelines spec: ingressClassName: nginx rules: - host: tekton.linuxea.local http: paths: - path: / pathType: Prefix backend: service: name: tekton-dashboard port: number: 9097而后即可打开页面了如果你想拥有更高的权限,需要将安装对应的版本ModeCurrentv0.31.0 and earlierread-onlyrelease.yamltekton-dashboard-release-readonly.yamlread/writerelease-full.yamltekton-dashboard-release.yaml比如我们安装最新的wget https://github.com/tektoncd/dashboard/releases/download/v0.37.0/release-full.yaml -O 0.37.0-full.yaml sed -i 's#gcr.io/tekton-releases/github.com/tektoncd/dashboard/cmd/dashboard:v0.37.0@sha256:2f38f99b6eafc18e67d013da84265ab61b5525a1a6c37005aaf86152b586427b#uhub.service.ucloud.cn/marksugar-k8s/dashboard:v0.37.0#g' 0.37.0-full.yaml kubectl apply -f 0.37.0-full.yaml定义Task我门需要了解最基本的几个概念,如下:Task:表示执行命令的一系列有序的步骤,task 里可以定义一系列的 steps,例如编译代码、构建镜像、推送镜像等,每个 task 实际由一个 Pod 执行。TaskRun:Task 只是定义了一个模版,TaskRun 才真正代表了一次实际的运行,当然你也可以自己手动创建一个 TaskRun,TaskRun 创建出来之后,就会自动触发 Task 描述的构建任务。Pipeline:一组有序的 Task,Pipeline 中的 Task 可以使用之前执行过的 Task 的输出作为它的输入。表示一个或多个 Task、PipelineResource 以及各种定义参数的集合。PipelineRun:类似 Task 和 TaskRun 的关系,PipelineRun 也表示某一次实际运行的 pipeline,下发一个 PipelineRun CRD 实例到 Kubernetes 后,同样也会触发一次 pipeline 的构建。ClusterTask:覆盖整个集群的任务,而不是单一的某一个命名空间,这是和 Task 最大的区别,其他基本上一致的。PipelineResource(Deprecated):定义由 Tasks 中的步骤摄取的输入和产生的输出的位置,比如 github 上的源码,或者 pipeline 输出资源,例如一个容器镜像或者构建生成的 jar 包等。Run(alpha):实例化自定义任务以在特定输入时执行。每个任务都在自己的 Kubernetes Pod 中执行,因此,默认情况下,管道内的任务不共享数据。要在 Tasks 之间共享数据,你必须明确配置每个 Task 以使其输出可用于下一个 Task 并获取先前执行的 Task 的输出作为其输入。Tekton 的 CI/CD 工作流中的每个操作都变成了一个 Step,使用指定的容器镜像来执行。Steps 然后组织在 Tasks 中,它在集群中作为 Kubernetes Pod 运行,还可以进一步组织 Tasks 变成成 Pipelines,还可以控制几个 Tasks 的执行顺序。1.创建task要创建一个 Task 任务,就需要使用到 Kubernetes 中定义的 Task 这个 CRD 对象,创建一个如下所示的资源文件,内容如下所示:apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: test spec: resources: inputs: - name: repo type: git steps: - name: run-test image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 workingDir: /workspace/repo script: | #!/usr/bin/env sh cd tekton/go && go test #command: ["go"] #args: ["test"]其中 resources 定义了我们的任务中定义的 Step 所需的输入内容,我们的步骤需要 Clone 一个 Git 仓库作为 go test 命令的输入,目前支持 git、pullRequest、image、cluster、storage、cloudevent 等资源。Tekton 内置的 git 资源类型,它会自动将代码仓库 Clone 到 /workspace/$input_name 目录中,由于我们这里输入被命名成 repo,所以代码会被 Clone 到 /workspace/repo 目录下面。然后下面的 steps 就是来定义执行运行测试命令的步骤,这里我们直接在代码的根目录中运行 go test 命令即可,需要注意的是命令和参数需要分别定义。创建该任务:[root@master1 tekton]# kubectl apply -f test.yaml task.tekton.dev/test created [root@master1 tekton]# kubectl get 
task NAME AGE test 7s新建的 Task 任务并不会立即执行, 必须创建一个 TaskRun 引用它并提供所有必需输入的数据才行。当然我们也可以直接使用 tkn 命令来启动这个 Task 任务,如下所示的命令来获取启动 Task 所需的资源对象:[root@master1 tekton]# tkn task start test --dry-run apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: creationTimestamp: null generateName: test-run- namespace: default spec: serviceAccountName: "" taskRef: name: test status: podName: ""定义git 代码仓库作为输入,所以需要一个 PipelineResource 对象来定义输入信息如下所示:apiVersion: tekton.dev/v1alpha1 kind: PipelineResource metadata: name: git-res namespace: default spec: params: - name: url value: https://gitee.com/marksugar/argocd-example - name: revision value: master type: git而后创建一个taskrun的模块对应上门的inputs的值apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: testrun spec: resources: inputs: - name: repo resourceRef: name: git-res taskRef: name: test创建上门的配置完成之后,我门查看下详情[root@master1 pipeline]# tkn taskrun logs --last -f [git-source-repo-jwtsb] {"level":"info","ts":1686274161.6897743,"caller":"git/git.go:178","msg":"Successfully cloned https://gitee.com/marksugar/argocd-example @ d270fd8931fb059485622edb2ef1aa1209b7d42c (grafted, HEAD, origin/master) in path /workspace/repo"} [git-source-repo-jwtsb] {"level":"info","ts":1686274161.7028632,"caller":"git/git.go:217","msg":"Successfully initialized and updated submodules in path /workspace/repo"} [run-test] PASS [run-test] ok test 0.002s [root@master1 pipeline]# kubectl get taskrun NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME testrun True Succeeded 3m10s 25s如果有状态异常,可以通过describe查看kubectl describe pod testrun-pod测试通过后在dashboard也可以看到2.镜像仓库认证要完成镜像打包,需要指定一个harbor的镜像仓库,将镜像推送到harbor中,因此我门需要准备一个harbor的仓库,或者其他仓库,而后创建一个Secret 资源对象apiVersion: v1 kind: Secret metadata: name: ucloud-auth annotations: tekton.dev/docker-0: http://uhub.service.ucloud.cn type: kubernetes.io/basic-auth stringData: username: username password: passwordtekton.dev/docker-0 的 annotation注解信息是用来告诉 Tekton 这些认证信息所属的 Docker 镜像仓库。并且创建一个 ServiceAccount 对象来使用上面的 ucloud-auth 这个 Secret 对象,内容如下所示:apiVersion: v1 kind: ServiceAccount metadata: name: build-sa secrets: - name: ucloud-auth创建完成后就可以使用这个进行认证3.构建和推送我门需要配置一个docker-dind来应对非docker环境的情况1,创建pvcapiVersion: v1 kind: PersistentVolume metadata: name: tekton-docker-dind namespace: tekton-pipelines spec: capacity: storage: 200Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain nfs: server: 172.168.204.36 path: /data/nfs-share/docker-dind # mkdir -p /data/nfs-share/docker-dind --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: tekton-docker-dind namespace: tekton-pipelines spec: accessModes: - ReadWriteMany resources: requests: storage: 200Gi2,创建dind如果你有共享存储请修改,如果没有使用hostpath请配置nodename,或者删除nodenameapiVersion: apps/v1 kind: Deployment metadata: name: docker-dind namespace: tekton-pipelines labels: app: docker-dind spec: selector: matchLabels: app: docker-dind template: metadata: labels: app: docker-dind spec: nodename: 172.168.204.39 containers: #- image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/docker:dind-20230303 - image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/docker:20.10.16-dind name: docker-dind args: - --insecure-registry=172.168.204.42 # 私有镜像地址 - --registry-mirror=https://ot2k4d59.mirror.aliyuncs.com/ # 指定一个镜像加速器地址 env: # - name: DOCKER_DRIVER # value: overlay2 - name: DOCKER_HOST value: tcp://0.0.0.0:2375 - name: DOCKER_TLS_CERTDIR # 禁用 TLS(最好别禁用) value: "" volumeMounts: - name: docker-dind-data-vol # 持久化docker根目录 mountPath: /var/lib/docker/ ports: - name: daemon-port containerPort: 2375 securityContext: privileged: true # 
需要设置成特权模式 volumes: - hostPath: path: /data/docker-dind # mkdir -p /data/docker-dind name: docker-dind-data-vol #- name: docker-dind-data-vol # persistentVolumeClaim: # claimName: tekton-docker-dind --- apiVersion: v1 kind: Service metadata: name: docker-dind namespace: tekton-pipelines labels: app: docker-dind spec: ports: - port: 2375 targetPort: 2375 selector: app: docker-dind创建安装完成[root@master1 pipeline]# kubectl -n tekton-pipelines get pod -w NAME READY STATUS RESTARTS AGE docker-dind-5b7888c577-x86k6 1/1 Running 0 12m ... [root@master1 pipeline]# kubectl -n tekton-pipelines get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE docker-dind ClusterIP 10.68.143.200 <none> 2375/TCP 38m ...现在,我门创建一个 Task 任务来构建并推送 Docker 镜像,而在gitee中已经包含了一个 Dockerfile 文件了,直接 Clone 代码就可以获得:FROM registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 RUN mkdir /test -p WORKDIR /test COPY . . CMD ["go test"]创建一个名为 task-build-push.yaml 的文件,如下所示:定义了 DockerfileURL 与 pathToContext 参数,用来指定 Dockerfile 和构建上下文的路径,此外还定义了一个名为 linuxea 的镜像输出资源,用来定义 Docker 镜像的相关参数。然后定义了一个名为 build-and-push 的步骤来解决不同运行时的问题,我们在上面独立运行了一个 Docker Daemon 的服务,现在可以直接通过 DOCKER_HOST 环境变量来远程连接到该 Daemon 进行构建镜像。apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: build-and-push spec: resources: inputs: # 定义输入资源 - name: repo #输入资源,就是gitee的那个仓库 type: git outputs: # 定义输出资源 - name: linuxea # 输出镜像名字 type: image params: # 定义参数 - name: DockerfileURL #指明 dockerfile 在仓库中的哪个位置 type: string default: $(resources.inputs.repo.path)/tekton/go/Dockerfile # repo资源的路径 description: The path to the dockerfile to build - name: pathToContext #指明构建上下文的路径 type: string default: $(resources.inputs.repo.path) # repo资源的路径 description: the build context used by docker daemon steps: - name: build-and-push image: docker:stable script: | #!/usr/bin/env sh docker login uhub.service.ucloud.cn cd /workspace/repo/tekton/go docker build -t $(resources.outputs.linuxea.url) . 
# echo $(resources.outputs.linuxea.url) # echo $(params.DockerfileURL) # echo $(params.pathToContext) # uhub.service.ucloud.cn/linuxea/golang:v0.1.0 # /workspace/repo/tekton/go/Dockerfile # /workspace/repo # docker build -t $(resources.outputs.linuxea.url) -f $(params.DockerfileURL) $(params.pathToContext) docker push $(resources.outputs.linuxea.url) # 这边的参数都是在 input 和 output 中定义的 env: - name: DOCKER_HOST value: tcp://docker-dind.tekton-pipelines:2375此时,我门再继续创建一个taskrun来触发,并且指定此前创建的sa。现在我门先创建PipelineResource 资源和TaskRunapiVersion: tekton.dev/v1alpha1 kind: PipelineResource metadata: name: ucloud-image spec: type: image params: - name: url value: uhub.service.ucloud.cn/linuxea/golang:v0.1.0 #构建完的镜像名称 --- # taskrun-build-push.yaml apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: build-and-push spec: serviceAccountName: build-sa # 关联具有harbor认证信息的serviceaccount taskRef: name: build-and-push # 关联定义好的task resources: inputs: - name: repo # 指定输入的仓库资源 resourceRef: name: git-res outputs: # 指定输出的镜像资源 - name: linuxea resourceRef: name: ucloud-image params: - name: DockerfileURL #指明 dockerfile 在仓库中的哪个位置 value: $(resources.inputs.repo.path)/tekton/go/Dockerfile # repo资源的路径 - name: pathToContext # 指定构建上下文 value: $(resources.inputs.repo.path) # repo资源的路径创建[root@master1 pipeline]# kubectl get pods NAME READY STATUS RESTARTS AGE build-and-push-pod 0/4 PodInitializing 0 7s testrun-pod 0/2 Completed 0 7h12m [root@master1 pipeline]# kubectl get taskrun NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME build-and-push Unknown Pending 13s我门通过如下命令查看细节,或者通过describe taskrun查看kubectl -n default logs build-and-push-pod -c step-build-and-push或者如下tkn taskrun logs build-and-push taskrun logs build-and-push -f[root@master1 pipeline]# tkn taskrun logs build-and-push taskrun logs build-and-push -f [create-dir-linuxea-5vvfb] 2023/06/12 01:27:00 warning: unsuccessful cred copy: ".docker" from "/tekton/creds" to "/home/nonroot": unable to create destination directory: mkdir /home/nonroot: permission denied [git-source-repo-jfwsv] {"level":"info","ts":1686533225.9578078,"caller":"git/git.go:178","msg":"Successfully cloned https://gitee.com/marksugar/argocd-example @ d270fd8931fb059485622edb2ef1aa1209b7d42c (grafted, HEAD, origin/master) in path /workspace/repo"} [git-source-repo-jfwsv] {"level":"info","ts":1686533225.9820316,"caller":"git/git.go:217","msg":"Successfully initialized and updated submodules in path /workspace/repo"} [build-and-push] Authenticating with existing credentials... [build-and-push] Login Succeeded [build-and-push] WARNING! Your password will be stored unencrypted in /root/.docker/config.json. [build-and-push] Configure a credential helper to remove this warning. 
See [build-and-push] https://docs.docker.com/engine/reference/commandline/login/#credentials-store [build-and-push] [build-and-push] Sending build context to Docker daemon 5.12kB [build-and-push] Step 1/5 : FROM registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 [build-and-push] 1.18.10-alpine3.17: Pulling from marksugar-k8s/golang [build-and-push] 8921db27df28: Pulling fs layer [build-and-push] a2f8637abd91: Pulling fs layer [build-and-push] 4ba80a8cd2c7: Pulling fs layer [build-and-push] dbc2308a4587: Pulling fs layer [build-and-push] dbc2308a4587: Waiting [build-and-push] a2f8637abd91: Verifying Checksum [build-and-push] a2f8637abd91: Download complete [build-and-push] dbc2308a4587: Verifying Checksum [build-and-push] dbc2308a4587: Download complete [build-and-push] 8921db27df28: Verifying Checksum [build-and-push] 8921db27df28: Download complete [build-and-push] 8921db27df28: Pull complete [build-and-push] a2f8637abd91: Pull complete [build-and-push] 4ba80a8cd2c7: Verifying Checksum [build-and-push] 4ba80a8cd2c7: Download complete [build-and-push] 4ba80a8cd2c7: Pull complete [build-and-push] dbc2308a4587: Pull complete [build-and-push] Digest: sha256:ab5685692564e027aa84e2980855775b2e48f8fc82c1590c0e1e8cbc2e716542 [build-and-push] Status: Downloaded newer image for registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/golang:1.18.10-alpine3.17 [build-and-push] ---> a77f45e5f987 [build-and-push] Step 2/5 : RUN mkdir /test -p [build-and-push] ---> Running in ed14023d5ee7 [build-and-push] Removing intermediate container ed14023d5ee7 [build-and-push] ---> 48d724b29eff [build-and-push] Step 3/5 : WORKDIR /test [build-and-push] ---> Running in 3bb3e619663a [build-and-push] Removing intermediate container 3bb3e619663a [build-and-push] ---> 9b6eda13d6c1 [build-and-push] Step 4/5 : COPY . . [build-and-push] ---> a5c71d579512 [build-and-push] Step 5/5 : CMD ["go test"] [build-and-push] ---> Running in 35c3ce8a0468 [build-and-push] Removing intermediate container 35c3ce8a0468 [build-and-push] ---> 2f377c99476e [build-and-push] Successfully built 2f377c99476e [build-and-push] Successfully tagged uhub.service.ucloud.cn/linuxea/golang:v0.1.0 [build-and-push] The push refers to repository [uhub.service.ucloud.cn/linuxea/golang] [build-and-push] 6a7300559c98: Preparing [build-and-push] 17600296503b: Preparing [build-and-push] 4ad1c2ef216c: Preparing [build-and-push] c23db623ee98: Preparing [build-and-push] c1bfd5512d71: Preparing [build-and-push] 8e012198eea1: Preparing [build-and-push] 8e012198eea1: Waiting [build-and-push] c23db623ee98: Layer already exists [build-and-push] c1bfd5512d71: Layer already exists [build-and-push] 4ad1c2ef216c: Layer already exists [build-and-push] 8e012198eea1: Layer already exists [build-and-push] 17600296503b: Pushed [build-and-push] 6a7300559c98: Pushed [build-and-push] v0.1.0: digest: sha256:b649a5f42e753e36562b574e26018301e8e839f129f92e3f87fd29f4b30734fc size: 1572 [image-digest-exporter-7bd59] {"severity":"INFO","timestamp":"2023-06-12T01:39:09.159373327Z","caller":"logging/config.go:116","message":"Successfully created the logger."} [image-digest-exporter-7bd59] {"severity":"INFO","timestamp":"2023-06-12T01:39:09.159447186Z","caller":"logging/config.go:117","message":"Logging level set to: info"} [image-digest-exporter-7bd59] {"severity":"INFO","timestamp":"2023-06-12T01:39:09.159608529Z","caller":"imagedigestexporter/main.go:59","message":"No index.json found for: linuxea","commit":"68f2a66"}在tekton中能够观察到构建的细节展示此时镜像就被推送到UHub了
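build-and-push 这个 Task 里 DOCKER_HOST 是写死在 env 里的，如果 dind 的 Service 名称或端口调整就得改 Task 本身。下面是一个假设的小 Task（名称 docker-check 为虚构），把 dind 地址抽成参数，默认值沿用上文的 docker-dind Service，可以用来在正式构建前确认 dind 可达，也顺便演示参数化的写法：
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: docker-check                 # 假设的名称，仅作示意
spec:
  params:
    - name: docker-host
      type: string
      default: tcp://docker-dind.tekton-pipelines:2375
  steps:
    - name: check
      image: docker:stable
      env:
        - name: DOCKER_HOST
          value: $(params.docker-host)   # 在 TaskRun 中可按需覆盖
      script: |
        #!/usr/bin/env sh
        docker info
创建后可以用 tkn task start docker-check --showlog 触发（参数直接回车使用默认值），如果这里不通，后面的构建任务同样会失败。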
2023-08-25
linuxea: openobserve HA本地单集群模式
ha默认就不支持本地存储了,集群模式下openobseve会运行多个节点,每个节点都是无状态的,数据存储在对象存储中,元数据在etcd中,因此理论上openobseve可以随时进行水平扩容组件如下:router:处理数据写入和页面查询,作为路由etcd: 存储用户信息,函数,规则,元数据等s3: 数据本身querier: 数据查询ingester: 数据没有在被写入到s3中之前,数据会进行临时通过预写来确保数据不会丢失,这类似于prometheus的walcompactor: 合并小文件到大文件,以及数据保留时间要配置集群模式,我们需要一个 对象存储,awk的s3,阿里的oss,或者本地的minio,还需要部署一个etcd作为元数据的存储,并且为ingester数据提供一个pvc,因为openobseve是运行在k8s上etcd我们将etcd运行在外部k8s之外的外部节点version: '2' services: oo_etcd: container_name: oo_etcd #image: 'docker.io/bitnami/etcd/3.5.8-debian-11-r4' image: uhub.service.ucloud.cn/marksugar-k8s/etcd:3.5.8-debian-11-r4 #network_mode: host restart: always environment: - ALLOW_NONE_AUTHENTICATION=yes - ETCD_ADVERTISE_CLIENT_URLS=http://0.0.0.0:2379 #- ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379 #- ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380 - ETCD_DATA_DIR=/bitnami/etcd/data volumes: - /etc/localtime:/etc/localtime:ro # 时区2 - /data/etcd/date:/bitnami/etcd # chown -R 777 /data/etcd/date/ ports: - 2379:2379 - 2380:2380 logging: driver: "json-file" options: max-size: "50M" mem_limit: 2048mpvc需要一个安装好的storageClass,我这里使用的是nfs-subdir-external-provisioner创建的nfs-latestminio部署一个单机版本的minio进行测试即可version: '2' services: oo_minio: container_name: oo_minio image: "uhub.service.ucloud.cn/marksugar-k8s/minio:RELEASE.2023-02-10T18-48-39Z" volumes: - /etc/localtime:/etc/localtime:ro # 时区2 - /docker/minio/data:/data command: server --console-address ':9001' /data environment: - MINIO_ACCESS_KEY=admin #管理后台用户名 - MINIO_SECRET_KEY=admin1234 #管理后台密码,最小8个字符 ports: - 9000:9000 # api 端口 - 9001:9001 # 控制台端口 logging: driver: "json-file" options: max-size: "50M" mem_limit: 2048m healthcheck: test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"] interval: 30s timeout: 20s retries: 3启动后创建一个名为openobserve的桶安装openObserve我们仍然使用helm进行安装helm repo add openobserve https://charts.openobserve.ai helm repo update kubectl create ns openobserve对values.yaml定制的内容如下,latest.yaml:image: repository: uhub.service.ucloud.cn/marksugar-k8s/openobserve pullPolicy: IfNotPresent # Overrides the image tag whose default is the chart appVersion. 
tag: "latest" # 副本数 replicaCount: ingester: 1 querier: 1 router: 1 alertmanager: 1 compactor: 1 ingester: persistence: enabled: true size: 10Gi storageClass: "nfs-latest" # NFS的storageClass accessModes: - ReadWriteOnce # Credentials for authentication # 账号密码 auth: ZO_ROOT_USER_EMAIL: "root@example.com" ZO_ROOT_USER_PASSWORD: "abc123" # s3地址 ZO_S3_ACCESS_KEY: "admin" ZO_S3_SECRET_KEY: "admin1234" etcd: enabled: false # if true then etcd will be deployed as part of openobserve externalUrl: "172.16.100.47:2379" config: # ZO_ETCD_ADDR: "172.16.100.47:2379" # etcd地址 # ZO_HTTP_ADDR: "172.16.100.47:2379" ZO_DATA_DIR: "./data/" #数据目录 # 开启minio ZO_LOCAL_MODE_STORAGE: s3 ZO_S3_SERVER_URL: http://172.16.100.47:9000 ZO_S3_REGION_NAME: local ZO_S3_ACCESS_KEY: admin ZO_S3_SECRET_KEY: admin1234 ZO_S3_BUCKET_NAME: openobserve ZO_S3_BUCKET_PREFIX: openobserve ZO_S3_PROVIDER: minio ZO_TELEMETRY: "false" # 禁用匿名 ZO_WAL_MEMORY_MODE_ENABLED: "false" # 内存模式 ZO_WAL_LINE_MODE_ENABLED: "true" # wal写入模式 #ZO_S3_FEATURE_FORCE_PATH_STYLE: "true" # 数据没有在被写入到s3中之前,数据会进行临时通过预写来确保数据不会丢失,这类似于prometheus的wal resources: ingester: {} querier: {} compactor: {} router: {} alertmanager: {} autoscaling: ingester: enabled: false minReplicas: 1 maxReplicas: 100 targetCPUUtilizationPercentage: 80 # targetMemoryUtilizationPercentage: 80 querier: enabled: false minReplicas: 1 maxReplicas: 100 targetCPUUtilizationPercentage: 80 # targetMemoryUtilizationPercentage: 80 router: enabled: false minReplicas: 1 maxReplicas: 100 targetCPUUtilizationPercentage: 80 # targetMemoryUtilizationPercentage: 80 compactor: enabled: false minReplicas: 1 maxReplicas: 100 targetCPUUtilizationPercentage: 80 # targetMemoryUtilizationPercentage: 80指定本地minio,桶名称,认证信息等;指定etcd地址;为ingester指定sc; 而后安装 helm upgrade --install openobserve -f latest.yaml --namespace openobserve openobserve/openobserve如下[root@master-01 ~/openObserve]# helm upgrade --install openobserve -f latest.yaml --namespace openobserve openobserve/openobserve Release "openobserve" does not exist. Installing it now. NAME: openobserve LAST DEPLOYED: Sun Aug 20 18:04:31 2023 NAMESPACE: openobserve STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: 1. 
Get the application URL by running these commands: kubectl --namespace openobserve port-forward svc/openobserve-openobserve-router 5080:5080 [root@master-01 ~/openObserve]# kubectl -n openobserve get pod NAME READY STATUS RESTARTS AGE openobserve-alertmanager-6f486d5df5-krtxm 1/1 Running 0 53s openobserve-compactor-98ccf664c-v9mkb 1/1 Running 0 53s openobserve-ingester-0 1/1 Running 0 53s openobserve-querier-695cf4fcc9-854z8 1/1 Running 0 53s openobserve-router-65b68b4899-j9hs7 1/1 Running 0 53s [root@master-01 ~/openObserve]# kubectl -n openobserve get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-openobserve-ingester-0 Bound pvc-5d86b642-4464-4b3e-950a-d5e0b4461c27 10Gi RWO nfs-latest 2m47s而后配置一个Ingress指向openobserve-routerapiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: openobserve-ui namespace: openobserve labels: app: openobserve annotations: # kubernetes.io/ingress.class: nginx cert-manager.io/issuer: letsencrypt kubernetes.io/tls-acme: "true" nginx.ingress.kubernetes.io/enable-cors: "true" nginx.ingress.kubernetes.io/connection-proxy-header: keep-alive nginx.ingress.kubernetes.io/proxy-connect-timeout: '600' nginx.ingress.kubernetes.io/proxy-send-timeout: '600' nginx.ingress.kubernetes.io/proxy-read-timeout: '600' nginx.ingress.kubernetes.io/proxy-body-size: 32m spec: ingressClassName: nginx rules: - host: openobserve.test.com http: paths: - path: / pathType: ImplementationSpecific backend: service: name: openobserve-router port: number: 5080添加本地hosts后打开此时是没有任何数据的测试我们手动写入测试数据[root@master-01 ~/openObserve]# curl http://openobserve.test.com/api/linuxea/0820/_json -i -u 'root@example.com:abc123' -d '[{"author":"marksugar","name":"www.linuxea.com"}]' HTTP/1.1 200 OK Date: Sun, 20 Aug 2023 11:02:08 GMT Content-Type: application/json Content-Length: 65 Connection: keep-alive Vary: Accept-Encoding vary: accept-encoding Access-Control-Allow-Origin: * Access-Control-Allow-Credentials: true Access-Control-Allow-Methods: GET, PUT, POST, DELETE, PATCH, OPTIONS Access-Control-Allow-Headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization Access-Control-Max-Age: 1728000 {"code":200,"status":[{"name":"0820","successful":1,"failed":0}]}数据插入同时,在NFS的本地磁盘也会写入[root@Node-172_16_100_49 ~]# cat /data/nfs-share/openobserve/data-openobserve-ingester-0/wal/files/linuxea/logs/0820/0_2023_08_20_13_2c624affe8540b70_7099015230658842624DKMpVA.json {"_timestamp":1692537124314778,"author":"marksugar","name":"www.linuxea.com"}在minio内的数据也进行写入minio中存储的数据无法查看,因为元数据在etcd中。
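集群模式的意义就在于可以按组件水平扩容。上面 latest.yaml 里 autoscaling 默认全部关闭，如果希望 ingester/querier 随 CPU 负载自动扩缩，可以打开对应开关（下面只是示意，副本数与阈值按需调整，并且集群里需要已有 metrics-server 提供指标）：
# latest.yaml 片段（示意）
autoscaling:
  ingester:
    enabled: true
    minReplicas: 2
    maxReplicas: 6
    targetCPUUtilizationPercentage: 80
  querier:
    enabled: true
    minReplicas: 2
    maxReplicas: 4
    targetCPUUtilizationPercentage: 80
修改后重新执行 helm upgrade --install openobserve -f latest.yaml --namespace openobserve openobserve/openobserve 即可生效。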