Found 4 articles tagged "MQ"
2018-12-01
linuxea: RabbitMQ 3.7.9 cluster configuration notes
Previously I wrote a short piece on running RabbitMQ as a single node; this post records a simple RabbitMQ cluster setup. Related reading: the activemq+zookeeper cluster configuration notes.

Environment:
CentOS Linux release 7.5.1804 (Core)
erlang-21.1.4-1.el7.centos.x86_64
rabbitmq-server-3.7.9-1.el7
HA-Proxy version 1.6.9 (docker)

The overall layout is roughly as follows.

Prerequisites

Edit the hosts file:
[root@LinuxEA-172_25_12_16 ~]# cat /etc/hosts
172.25.12.16 rabbitmq-1
172.25.12.17 rabbitmq-2
172.25.12.18 rabbitmq-3

Then restart the network:
systemctl restart network

If necessary, set the hostnames as well; otherwise the node names may not work properly after login:
hostnamectl set-hostname rabbitmq-1
hostnamectl set-hostname rabbitmq-2
hostnamectl set-hostname rabbitmq-3

Installation

On the page https://packagecloud.io/rabbitmq/erlang/packages/el/7/erlang-21.1.4-1.el7.centos.x86_64.rpm, follow the instructions and install:
curl -s https://packagecloud.io/install/repositories/rabbitmq/erlang/script.rpm.sh | sudo bash
sudo yum install erlang-21.1.4-1.el7.centos.x86_64

On the page https://packagecloud.io/rabbitmq/rabbitmq-server/packages/el/7/rabbitmq-server-3.7.9-1.el7.noarch.rpm, follow the instructions and install:
curl -s https://packagecloud.io/install/repositories/rabbitmq/rabbitmq-server/script.rpm.sh | sudo bash
sudo yum install rabbitmq-server-3.7.9-1.el7.noarch

Repository for other versions: http://www.rabbitmq.com/releases/rabbitmq-server/

SSH trust

SSH key trust is set up only to copy files between the nodes; it has nothing to do with the cluster itself:
[root@LinuxEA-172_25_12_16 ~]# ssh-keygen -t rsa
[root@LinuxEA-172_25_12_16 ~]# ssh-copy-id 172.25.12.17
[root@LinuxEA-172_25_12_16 ~]# ssh-copy-id 172.25.12.18

Firewall

-A INPUT -s 172.25.12.16/32 -p tcp -m tcp -m state --state NEW -m multiport --dports 4369,5671:5673,15672:15673,25672 -m comment --comment "RabbitMQ_cluster-1" -j ACCEPT
-A INPUT -s 172.25.12.17/32 -p tcp -m tcp -m state --state NEW -m multiport --dports 4369,5671:5673,15672:15673,25672 -m comment --comment "RabbitMQ_cluster-2" -j ACCEPT
-A INPUT -s 172.25.12.18/32 -p tcp -m tcp -m state --state NEW -m multiport --dports 4369,5671:5673,15672:15673,25672 -m comment --comment "RabbitMQ_cluster-3" -j ACCEPT

[root@LinuxEA--172_25_12_16 ~]# iptables -L -nv|grep RabbitMQ
2158  110K ACCEPT  tcp  --  *  *  0.0.0.0/0     0.0.0.0/0  tcp state NEW multiport dports 4369,5671:5673,15672:15673,25672 /* RabbitMQ_cluster-1 */
   0     0 ACCEPT  tcp  --  *  *  172.25.12.16  0.0.0.0/0  tcp state NEW multiport dports 4369,5671:5673,15672:15673,25672 /* RabbitMQ_cluster-1 */
   0     0 ACCEPT  tcp  --  *  *  172.25.12.17  0.0.0.0/0  tcp state NEW multiport dports 4369,5671:5673,15672:15673,25672 /* RabbitMQ_cluster-2 */
   0     0 ACCEPT  tcp  --  *  *  172.25.12.18  0.0.0.0/0  tcp state NEW multiport dports 4369,5671:5673,15672:15673,25672 /* RabbitMQ_cluster-3 */

I assume SELinux is already disabled; if not, disable it.

Sharing the Erlang cookie

Copy the local /var/lib/rabbitmq/.erlang.cookie to the same directory on the other two nodes:
[root@LinuxEA-172_25_12_16 ~]# scp -P22992 /var/lib/rabbitmq/.erlang.cookie root@172.25.12.17:/var/lib/rabbitmq/
.erlang.cookie                              100%   20    16.3KB/s   00:00
[root@LinuxEA-172_25_12_16 ~]# scp -P22992 /var/lib/rabbitmq/.erlang.cookie root@172.25.12.18:/var/lib/rabbitmq/
.erlang.cookie                              100%   20    16.5KB/s   00:00

If necessary, fix the ownership and permissions:
[root@LinuxEA-172_25_12_16 ~]# ssh -p22992 root@172.25.12.17 'chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie'
[root@LinuxEA-172_25_12_16 ~]# ssh -p22992 root@172.25.12.18 'chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie'
[root@LinuxEA-172_25_12_16 ~]# ssh -p22992 root@172.25.12.17 'chmod 400 /var/lib/rabbitmq/.erlang.cookie'
[root@LinuxEA-172_25_12_16 ~]# ssh -p22992 root@172.25.12.18 'chmod 400 /var/lib/rabbitmq/.erlang.cookie'

Then stop everything and start all three RabbitMQ nodes again:
[root@LinuxEA-172_25_12_16 ~]# rabbitmq-server -detached
Warning: PID file not written; -detached was passed.
[root@LinuxEA-172_25_12_17 ~]# rabbitmq-server -detached
Warning: PID file not written; -detached was passed.
[root@LinuxEA-172_25_12_18 ~]# rabbitmq-server -detached
Warning: PID file not written; -detached was passed.
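Before joining any node, it is worth confirming the cookie really is identical on all three machines, since a mismatched cookie is the most common cause of join_cluster authentication failures. A minimal sketch, not part of the original post, assuming the same SSH port (22992) and node IPs used above:

#!/usr/bin/env bash
# Compare the .erlang.cookie checksum across the three nodes; all three must match.
# Assumption: root SSH access on port 22992 to the IPs used in this post.
set -euo pipefail

for ip in 172.25.12.16 172.25.12.17 172.25.12.18; do
  sum=$(ssh -p 22992 "root@${ip}" 'md5sum /var/lib/rabbitmq/.erlang.cookie' | awk '{print $1}')
  echo "${ip}: ${sum}"
done

If the checksums differ, re-copy the cookie and restart the affected node before running join_cluster.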
Joining the cluster

Join 172.25.12.17 to the cluster on 172.25.12.16:
[root@LinuxEA-172_25_12_17 ~]# rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq-2 ...
[root@LinuxEA-172_25_12_17 ~]# rabbitmqctl join_cluster --ram rabbit@rabbitmq-1
Clustering node rabbit@rabbitmq-2 with rabbit@rabbitmq-1
[root@LinuxEA-172_25_12_17 ~]# rabbitmqctl start_app
Starting node rabbit@rabbitmq-2 ...
 completed with 4 plugins.
[root@LinuxEA-172_25_12_17 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq-2 ...
[{nodes,[{disc,['rabbit@rabbitmq-1']},{ram,['rabbit@rabbitmq-2']}]},
 {running_nodes,['rabbit@rabbitmq-1','rabbit@rabbitmq-2']},
 {cluster_name,<<"rabbit@DS-VM-Node172_25_12_16">>},
 {partitions,[]},
 {alarms,[{'rabbit@rabbitmq-1',[]},{'rabbit@rabbitmq-2',[]}]}]

Join 172.25.12.18 to the cluster on 172.25.12.16:
[root@LinuxEA-172_25_12_18 ~]# rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq-3 ...
[root@LinuxEA-172_25_12_18 ~]# rabbitmqctl join_cluster --ram rabbit@rabbitmq-1
Clustering node rabbit@rabbitmq-3 with rabbit@rabbitmq-1
[root@LinuxEA-172_25_12_18 ~]# rabbitmqctl start_app
Starting node rabbit@rabbitmq-3 ...
 completed with 4 plugins.
[root@LinuxEA-172_25_12_18 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq-3 ...
[{nodes,[{disc,['rabbit@rabbitmq-1']},
         {ram,['rabbit@rabbitmq-3','rabbit@rabbitmq-2']}]},
 {running_nodes,['rabbit@rabbitmq-2','rabbit@rabbitmq-1','rabbit@rabbitmq-3']},
 {cluster_name,<<"rabbit@DS-VM-Node172_25_12_16">>},
 {partitions,[]},
 {alarms,[{'rabbit@rabbitmq-2',[]},
          {'rabbit@rabbitmq-1',[]},
          {'rabbit@rabbitmq-3',[]}]}]

Cluster status as seen from node 1:
[root@LinuxEA-172_25_12_16 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq-1 ...
[{nodes,[{disc,['rabbit@rabbitmq-1']},
         {ram,['rabbit@rabbitmq-3','rabbit@rabbitmq-2']}]},
 {running_nodes,['rabbit@rabbitmq-3','rabbit@rabbitmq-2','rabbit@rabbitmq-1']},
 {cluster_name,<<"rabbit@DS-VM-Node172_25_12_16">>},
 {partitions,[]},
 {alarms,[{'rabbit@rabbitmq-3',[]},
          {'rabbit@rabbitmq-2',[]},
          {'rabbit@rabbitmq-1',[]}]}]

Mirrored queues for high availability

Leaving the single-node setup aside, there are essentially two cluster flavors: the default cluster mode and mirrored mode. Mirrored mode fixes the weakness of the ordinary cluster: every message is actively replicated to the mirror nodes instead of being fetched on demand. That costs some performance, but the extra reliability is worth it.
[root@LinuxEA-172_25_12_16 ~]# rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
Setting policy "ha-all" for pattern "^" to "{"ha-mode":"all"}" with priority "0" for vhost "/" ...

User configuration

Add a user:
[root@LinuxEA-172_25_12_16 ~]# rabbitmqctl add_user admin admin

Tag it as an administrator:
[root@LinuxEA-172_25_12_16 ~]# rabbitmqctl set_user_tags admin administrator

Grant it full configure/write/read permissions:
[root@LinuxEA-172_25_12_16 ~]# rabbitmqctl set_permissions -p "/" admin ".*" ".*" ".*"
Setting permissions for user "admin" in vhost "/" ...

Web management interface

Enable the plugin:
[root@LinuxEA-172_25_12_16 ~]# rabbitmq-plugins enable rabbitmq_management

Create a vhost:
[root@LinuxEA-172_25_12_17 ~]# rabbitmqctl add_vhost test
Adding vhost "test" ...

Mark every queue in the test vhost as a mirrored queue, i.e. queues are replicated to every node and the nodes stay consistent:
[root@LinuxEA-172_25_12_17 ~]# rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}' -p test
Setting policy "ha-all" for pattern "^" to "{"ha-mode":"all"}" with priority "0" for vhost "test" ...
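To double-check that the policy is actually applied, one option is to list the policies for the vhost and then, once a queue exists, look at which nodes hold its mirrors. A small sketch, not part of the original post, assuming it is run on any cluster node after the steps above:

#!/usr/bin/env bash
# Sanity-check mirroring on the "test" vhost created above.
set -euo pipefail

# Show the ha-all policy configured for the test vhost
rabbitmqctl list_policies -p test

# For every queue in the vhost, print its policy, the master pid and the mirror pids;
# with ha-mode=all each queue should have mirrors on the other two nodes.
rabbitmqctl list_queues -p test name policy pid slave_pids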
haproxy

global
    log 127.0.0.1 local3
    log 127.0.0.1 local1 notice
    maxconn 5000
    pidfile /run/haproxy.pid

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    retries 3
    option redispatch            # with cookie-based persistence, redirect the session to another server once a backend goes down; required
    maxconn 4000                 # maximum connections per server
    timeout http-request 10s     # close the client connection if it connects but never sends a request
    timeout queue 1m             # maximum time a request may wait in the queue
    timeout connect 10s          # how long haproxy waits when connecting to a backend server
    timeout client 1m            # client inactivity timeout
    timeout server 1m            # how long to wait for the server once the connection is established
    timeout http-keep-alive 10s  # keep-alive timeout
    timeout check 10s            # health-check timeout; too short gives false positives, too long wastes resources

listen stats
    bind :9000
    mode http
    stats enable
    stats uri /haproxy_stats
    stats realm HAProxy\ Statistics    # text shown in the stats login prompt, default HAProxy\ Statistics
    stats auth admin:ZTVmNTlhYzczOGM0  # stats page username and password
    stats admin if TRUE                # allow managing backend servers once authenticated
    stats hide-version                 # hide the HAProxy version on the stats page
    stats refresh 30s                  # stats page auto-refresh interval

frontend rabbitmq
    bind *:5672
    mode tcp
    log global
    default_backend rabbitmq

backend rabbitmq
    balance roundrobin
    server rabbitmq1-slave 172.25.12.18:5672 check port 5672 rise 1 fall 2 maxconn 4000 weight 1   # ram node
    server rabbitmq2-slave 172.25.12.17:5672 check port 5672 rise 1 fall 2 maxconn 4000 weight 1   # ram node
    #server rabbitmq3-master 172.25.12.16:5672 check port 5672 rise 1 fall 2 maxconn 4000 weight 1 # disc node

172.25.12.16 shows up as the disc node in rabbitmqctl cluster_status, so it is left out of reads and writes and only acts as the persistent store, while the two ram nodes handle the traffic:
[root@DT_Node-172_25_12_16 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq-1 ...
[{nodes,[{disc,['rabbit@rabbitmq-1']},
         {ram,['rabbit@rabbitmq-3','rabbit@rabbitmq-2']}]},
 {running_nodes,['rabbit@rabbitmq-3','rabbit@rabbitmq-2','rabbit@rabbitmq-1']},
 {cluster_name,<<"rabbit@DS-VM-Node172_25_12_16">>},
 {partitions,[]},
 {alarms,[{'rabbit@rabbitmq-3',[]},
          {'rabbit@rabbitmq-2',[]},
          {'rabbit@rabbitmq-1',[]}]}]
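Once haproxy is up, a quick way to confirm it is forwarding is to hit the stats page and probe the AMQP frontend. A minimal sketch, not from the original post, assuming haproxy runs on the local host with the config above and the stats credentials shown there:

#!/usr/bin/env bash
# Check the haproxy stats endpoint and the RabbitMQ TCP frontend.
set -euo pipefail

HAPROXY_HOST=127.0.0.1   # adjust if haproxy runs on another host

# Stats page in CSV form, using the credentials from the "listen stats" section
curl -s -u admin:ZTVmNTlhYzczOGM0 "http://${HAPROXY_HOST}:9000/haproxy_stats;csv" | head -n 5

# AMQP frontend: just verify the TCP port is reachable
nc -vz "${HAPROXY_HOST}" 5672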
2017-08-16
linuxea: ActiveMQ + ZooKeeper cluster configuration notes
The ActiveMQ site describes three clustering approaches: Shared File System Master Slave, JDBC Master Slave, and Replicated LevelDB Store. This post uses the last one, http://activemq.apache.org/masterslave.html . Note that the project no longer recommends LevelDB and suggests KahaDB instead.

How this mode works: Apache ZooKeeper coordinates which node is the master. Once the master starts and begins accepting connections, the other nodes go into a waiting state and synchronize the persistent state, without accepting client connections; every persistent operation is replicated to the waiting nodes. If the master goes down, the node with the most recent state is promoted to master, and when a failed node recovers it automatically rejoins as a slave. A proxy layer can of course be put in front as well.

Download the packages:
wget http://mirror.rise.ph/apache/activemq/5.14.5/apache-activemq-5.14.5-bin.tar.gz
wget http://mirror.rise.ph/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
[root@DS-VM-Node49-linuxea /usr/local]# tar xf apache-activemq-5.14.5-bin.tar.gz
[root@DS-VM-Node49-linuxea /usr/local]# tar xf zookeeper-3.4.10.tar.gz
[root@DS-VM-Node49-linuxea /usr/local]# mkdir /data/{activemq,zookeeper,logs} -p
[root@DS-VM-Node49-linuxea /usr/local]# ln -s zookeeper-3.4.10 zookeeper
[root@DS-VM-Node49-linuxea /usr/local]# ln -s apache-activemq-5.14.5 activemq

Download the JDK from http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html and install it:
axel -n 30 https://mirrors.dtops.cc/java/8/8u111-b14/jdk-8u111-linux-x64.rpm
yum install jdk-8u111-linux-x64.rpm -y

Firewall rule:
iptables -I INPUT -p tcp -m tcp -m state --state NEW -m multiport --dports 2181,2888,3888,1883,61613:61617,62621,5672,8161 -m comment --comment "mq_zook" -j ACCEPT

zookeeper - node1:
[root@DS-VM-Node49-linuxea ~]# vim /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/logs
clientPort=2181
maxClientCnxns=0
autopurge.snapRetainCount=3
autopurge.purgeInterval=5
server.1=10.0.1.49:2888:3888
server.2=10.0.1.61:2889:3889
server.3=10.10.240.113:2890:3890
[root@DS-VM-Node49-linuxea /data/zookeeper]# echo 1 > myid
[root@DS-VM-Node49-linuxea /data/zookeeper]# ls
myid
[root@DS-VM-Node49-linuxea /data/zookeeper]# cat myid
1

zookeeper - node2 (each myid must match the server.N entry for that host's IP in zoo.cfg; 10.10.240.113 is server.3):
[root@DS-VM-Node113-linuxea /usr/local]# cat /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/logs
clientPort=2181
maxClientCnxns=0
autopurge.snapRetainCount=3
autopurge.purgeInterval=5
server.1=10.0.1.49:2888:3888
server.2=10.0.1.61:2889:3889
server.3=10.10.240.113:2890:3890
[root@DS-VM-Node113-linuxea /usr/local]# echo 3 > /data/zookeeper/myid
[root@DS-VM-Node113-linuxea /usr/local]# cat /data/zookeeper/myid
3

zookeeper - node3 (10.0.1.61 is server.2):
[root@DS-VM-Node61-linuxea /usr/local/zookeeper]# cat /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/logs
clientPort=2181
maxClientCnxns=0
autopurge.snapRetainCount=3
autopurge.purgeInterval=5
server.1=10.0.1.49:2888:3888
server.2=10.0.1.61:2889:3889
server.3=10.10.240.113:2890:3890
[root@DS-VM-Node61-linuxea /usr/local/zookeeper]# echo 2 > /data/zookeeper/myid
[root@DS-VM-Node61-linuxea /usr/local/zookeeper]# cat /data/zookeeper/myid
2

After starting all of them, check with zkServer.sh status; the nodes report follower and leader respectively.
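A small sketch, not part of the original post, to check the role of every ensemble member in one go; it assumes root SSH access to the three hosts and the /usr/local/zookeeper path used above:

#!/usr/bin/env bash
# Print the role (leader/follower) that each ZooKeeper node reports.
set -u

for host in 10.0.1.49 10.0.1.61 10.10.240.113; do
  echo "== ${host} =="
  ssh "root@${host}" '/usr/local/zookeeper/bin/zkServer.sh status' 2>&1 | grep 'Mode:' || echo "no mode reported (node down?)"
done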
An excerpt from the web on Zookeeper's communication architecture: the whole Zookeeper system has three service roles: client, Follower, and leader. The client issues application requests; the Follower accepts requests from clients and takes part both in transaction confirmation and in leader election after the leader crashes; the leader mainly coordinates transactions, and it can also accept client requests. For convenience the description covers client-to-Follower communication; if Zookeeper is configured so that the leader accepts client requests, client-to-leader communication works exactly the same way. The Follower and leader roles can swap at any moment: a Follower may be elected leader by the quorum after the leader crashes, while a crashed leader that rejoins the quorum comes back as a Follower. Apart from the leader election itself, a quorum always has exactly one leader and multiple Followers. More at http://zoutm.iteye.com/blog/708447

[root@DS-VM-Node61-linuxea /usr/local/zookeeper]# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader

MQ:

Back up the config file first, since a few things need to change:
[root@DS-VM-Node61-linuxea /usr/local/activemq/conf]# cp activemq.xml activemq.xml.bak

First change, brokerName, as follows:
 <broker xmlns="http://activemq.apache.org/schema/core" brokerName="linuxea" dataDirectory="${activemq.data}">

Second change, the persistence adapter; LevelDB is deprecated, but it still works:
 <persistenceAdapter>
     <replicatedLevelDB
       directory="${activemq.data}/leveldb"
       replicas="3"
       bind="tcp://0.0.0.0:51621"
       zkAddress="10.0.1.49:2181,10.0.1.61:2181,10.10.240.113:2181"
       hostname="10.0.1.49"
       zkPath="/activemq/leveldb-stores"
       />
 </persistenceAdapter>

node1-10.0.1.49:
 <broker xmlns="http://activemq.apache.org/schema/core" brokerName="linuxea" dataDirectory="${activemq.data}">
 <persistenceAdapter>
     <replicatedLevelDB
       directory="${activemq.data}/leveldb"
       replicas="3"
       bind="tcp://0.0.0.0:51621"
       zkAddress="10.0.1.49:2181,10.0.1.61:2181,10.10.240.113:2181"
       hostname="10.0.1.49"
       zkPath="/activemq/leveldb-stores"
       />
 </persistenceAdapter>

node2-10.0.1.61:
 <broker xmlns="http://activemq.apache.org/schema/core" brokerName="linuxea" dataDirectory="${activemq.data}">
 <persistenceAdapter>
     <replicatedLevelDB
       directory="${activemq.data}/leveldb"
       replicas="3"
       bind="tcp://0.0.0.0:51621"
       zkAddress="10.0.1.49:2181,10.0.1.61:2181,10.10.240.113:2181"
       hostname="10.0.1.61"
       zkPath="/activemq/leveldb-stores"
       />
 </persistenceAdapter>

node3-10.10.240.113:
 <broker xmlns="http://activemq.apache.org/schema/core" brokerName="linuxea" dataDirectory="${activemq.data}">
 <persistenceAdapter>
     <replicatedLevelDB
       directory="${activemq.data}/leveldb"
       replicas="3"
       bind="tcp://0.0.0.0:51621"
       zkAddress="10.0.1.49:2181,10.0.1.61:2181,10.10.240.113:2181"
       hostname="10.10.240.113"
       zkPath="/activemq/leveldb-stores"
       />
 </persistenceAdapter>

Once the configuration is done, simply start each node.
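After starting all three brokers, only the elected master should be accepting client connections, so a crude way to see which node won the election is to probe the transport port on each host. A minimal sketch, not from the original post, assuming the default OpenWire port 61616 (already opened in the firewall rule above) and the node IPs used in this post:

#!/usr/bin/env bash
# Probe the OpenWire port on each broker; in a replicated LevelDB master/slave setup
# only the current master should bind the transport connector.
set -u

for host in 10.0.1.49 10.0.1.61 10.10.240.113; do
  if nc -z -w 2 "$host" 61616; then
    echo "${host}: accepting connections (likely the master)"
  else
    echo "${host}: not accepting connections (slave or down)"
  fi
done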