We have covered service high availability in several earlier articles, and there are many ways to implement it on Linux. Here we focus on one: using CentOS 7 + LVS + Keepalived to make an Nginx service highly available. Details below.
Environment
hostname: Nginx01
IP: 192.168.6.10
Role: Nginx Server
hostname: Nginx02
IP: 192.168.6.11
Role: Nginx Server
hostname: LVS01
IP: 192.168.6.12
Role: LVS + Keepalived
hostname: LVS02
IP: 192.168.6.13
Role: LVS + Keepalived
VIP: 192.168.6.15
First we prepare to install and configure Nginx. On Nginx01, stop and disable the firewall and set the hostname:
systemctl stop firewalld
systemctl disable firewalld
hostnamectl set-hostname Nginx01
Disable SELinux:
vim /etc/selinux/config
SELINUX=disabled
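If you want the change without opening an editor, and want it to take effect immediately as well, a quick sketch (setenforce only switches to permissive mode for the current boot; the config edit covers subsequent boots):
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # same edit, non-interactively
setenforce 0   # go permissive right away; full "disabled" applies after reboot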
yum install wget
yum install nginx
Installing nginx this way fails because it is not in the default CentOS repositories, so we first need to install the nginx repo (see http://nginx.org/en/linux_packages.html):
yum install http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
yum install -y nginx
Installation complete. Verify the package and locate its files:
rpm -qa | grep nginx
find / -name nginx
Once installed, we can try to access the Nginx service. Before accessing it, we need to start nginx:
systemctl start nginx
Next, we access it from a web browser.
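You can also check from the command line instead of a browser; a quick sketch, assuming the client can reach 192.168.6.10:
curl -I http://192.168.6.10   # should return HTTP/1.1 200 OK with an nginx Server header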
Let's first review the nginx.conf configuration file:
vim /etc/nginx/nginx.conf
Then look at the default site configuration, which controls what the default page serves:
/etc/nginx/conf.d/default.conf
To make the demo clearer, we will customize what the page displays. The default page lives at:
/usr/share/nginx/html/index.html
vim index.html
Edit the content so the page identifies this server.
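A minimal sketch of such a page (the exact content is up to you; here it simply names the host so we can tell the backends apart later):
cat > /usr/share/nginx/html/index.html <<'EOF'
<h1>Nginx01 - 192.168.6.10</h1>
EOF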
Next, restart the service:
systemctl restart nginx
We likewise need to install the nginx repo and nginx itself on Nginx02:
yum install http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
yum install nginx
After installation, copy the modified index.html from Nginx01 over to Nginx02 (alternatively, just edit Nginx02's index.html directly with vim):
scp /usr/share/nginx/html/index.html 192.168.6.11:/usr/share/nginx/html/index.html
Next, on Nginx02, modify the index.html file so its page is distinguishable:
vim /usr/share/nginx/html/index.html
Save and exit, then restart the service and try accessing it from a web browser:
systemctl restart nginx
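Before moving on to LVS, it is worth confirming that the two backends serve distinguishable pages; a quick sketch:
curl http://192.168.6.10
curl http://192.168.6.11
# the two responses should differ, so we can later tell which real server answered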
Next comes installing and configuring LVS + Keepalived. First, install ipvsadm:
yum install -y ipvsadm
Installation complete.
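To confirm the tool and the IPVS kernel support are in place, a quick sketch:
ipvsadm --version    # prints the ipvsadm version and the IPVS protocol version
lsmod | grep ip_vs   # empty output just means the ip_vs module has not been loaded yet; first use of ipvsadm loads it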
Next, install Keepalived. First, the prerequisites:
yum install -y gcc openssl openssl-devel
Then install keepalived itself:
yum install keepalived
Installation complete.
Next, back up keepalived.conf; doing this before editing is recommended practice:
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
Let's first look at the contents of the default keepalived.conf:
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
        192.168.200.17
        192.168.200.18
    }
}

virtual_server 192.168.200.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 192.168.201.100 443 {
        weight 1
        SSL_GET {
            url {
              path /
              digest ff20ad2481f97b1754ef3e12ecd3a9cc
            }
            url {
              path /mrtg/
              digest 9b3a0c85a887a256d6939da88aabd8cd
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.2 1358 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    sorry_server 192.168.200.200 1358

    real_server 192.168.200.2 1358 {
        weight 1
        HTTP_GET {
            url {
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.3 1358 {
        weight 1
        HTTP_GET {
            url {
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            url {
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.3 1358 {
    delay_loop 3
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 192.168.200.4 1358 {
        weight 1
        HTTP_GET {
            url {
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.5 1358 {
        weight 1
        HTTP_GET {
            url {
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
Clear out the keepalived.conf contents:
echo >/etc/keepalived/keepalived.conf
Then paste in the following content:
! Configuration File for keepalived

global_defs {
   router_id lvs_clu_1
}

vrrp_sync_group Prox {
    group {
        mail
    }
}

vrrp_instance mail {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.6.15
    }
}

virtual_server 192.168.6.15 80 {
    delay_loop 6
    lb_algo wlc
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.6.10 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.6.11 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
Enable IP forwarding (check the current value, then set it):
cat /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/ip_forward
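Note that writing to /proc does not survive a reboot; to make forwarding persistent, a sketch using sysctl:
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf   # persist across reboots
sysctl -p                                            # apply the setting now
With forwarding enabled, start keepalived: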
systemctl start keepalived
Then check the service:
ipvsadm
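For watching the table while testing, the numeric listing is handier; a quick sketch:
ipvsadm -Ln             # -L lists the virtual/real servers, -n skips name resolution
watch -n1 ipvsadm -Ln   # refresh the table every second during failover tests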
Next, we install ipvsadm + Keepalived on LVS02:
yum install -y ipvsadm
yum install -y gcc openssl openssl-devel
yum install keepalived
Because we have two LVS servers, we use the following command to copy the configuration file (keepalived.conf) from LVS01 to LVS02 (192.168.6.13), overwriting the default file there:
scp /etc/keepalived/keepalived.conf 192.168.6.13:/etc/keepalived/keepalived.conf
Enable IP forwarding on LVS02 as well:
cat /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/ip_forward
Then modify the relevant keepalived settings on LVS02 (the VRRP state and priority):
! Configuration File for keepalived

global_defs {
   router_id lvs_clu_1
}

vrrp_sync_group Prox {
    group {
        mail
    }
}

vrrp_instance mail {
    state BACKUP
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 50
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.6.15
    }
}

virtual_server 192.168.6.15 80 {
    delay_loop 6
    lb_algo wlc
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.6.10 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.6.11 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
Then start the keepalived service here as well:
systemctl start keepalived
Finally, check keepalived's running status:
systemctl status keepalived
We find an error. Check the logs and the network interfaces:
tail -f /var/log/messages
ip a show
The interface name in the configuration does not match the actual NIC. Edit keepalived.conf and change interface eth0 to the real NIC name (here, eth016777984):
vim /etc/keepalived/keepalived.conf
Then restart keepalived:
systemctl restart keepalived
systemctl status keepalived
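Once keepalived starts cleanly, the MASTER node should be holding the VIP; a quick check (on the BACKUP node this only shows output after a failover):
ip addr show | grep 192.168.6.15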
Finally, we still need to set up the real-server script on both Nginx hosts:
vim realserver
#!/bin/bash
# chkconfig: 2345 85 35
# Description: Start real server with host boot
VIP=192.168.6.15

function start() {
    # Bind the VIP to a loopback alias so this real server accepts DR traffic
    ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
    # Suppress ARP for the VIP so only the LVS director answers ARP requests
    echo 1 >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "Real Server $(uname -n) started"
}

function stop() {
    # Remove the loopback alias and restore default ARP behavior
    ifconfig lo:0 down
    echo 0 >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "Real Server $(uname -n) stopped"
}

case $1 in
start)
    start
    ;;
stop)
    stop
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
Make the script executable:
chmod a+x realserver
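Since the script carries a chkconfig header, you can optionally register it as an init script so it runs at boot; a sketch, assuming you keep the file name realserver:
cp realserver /etc/init.d/realserver
chmod +x /etc/init.d/realserver
chkconfig --add realserver
chkconfig realserver on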
Run the script on Nginx01:
./realserver start
Copy the script to Nginx02, then verify and run it there as well:
scp realserver 192.168.6.11:/root/realserver
ls -l
./realserver start
Check the keepalived status on LVS02:
ipvsadm
systemctl status keepalived
Then list the virtual server table:
ipvsadm -l
Next, we try accessing the service through the VIP:
192.168.6.15
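From a client you can also hit the VIP repeatedly; a quick sketch (note that with persistence_timeout 50 in the config, requests from the same client stick to one real server for 50 seconds, so alternation is only visible across different clients or after the timeout):
for i in $(seq 1 5); do curl -s http://192.168.6.15; done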
Next, we stop the nginx service on Nginx02 and try accessing through the VIP again:
systemctl stop nginx
Then check the ipvsadm and keepalived status (the failed real server should have been removed from the table):
ipvsadm -l
systemctl status keepalived
Then we continue accessing through the VIP; the site remains reachable, now served by Nginx01 alone.
Finally, we start the nginx service on Nginx02 again, then check the ipvsadm and keepalived status:
systemctl start nginx
ipvsadm -l
systemctl status keepalived
Finally, we try accessing through the VIP once more.
This article is from the "高文龙" (Gao Wenlong) blog; reproduction without permission is declined.
Original: http://gaowenlong.blog.51cto.com/451336/1713280