
SaltStack Installation, Configuration and Usage in Detail — Automated Operations with SaltStack


    Purpose: avoid logging into servers one at a time to install salt-minion by hand and repeat the same configuration on each of them.

    SaltStack is written in Python and can manage thousands of servers.

                               Automated Operations with SaltStack in Practice

    1.1 Environment

    linux-node1 (master, server side) 192.168.0.15
    linux-node2 (minion, client side) 192.168.0.16

    1.2 SaltStack's three operating modes

    Local — local mode
    Master/Minion — the traditional mode (server side plus agent side)
    Salt SSH — agentless, over SSH

    1.3 SaltStack's three main functions

    ●Remote execution

    ●Configuration management

    ●Cloud management

    1.4 Base environment preparation for the SaltStack install

    [root@linux-node1 ~]# cat /etc/redhat-release  ##check the OS release

    CentOS release 6.7 (Final)

    [root@linux-node1 ~]# uname -r ##check the kernel version

    2.6.32-573.el6.x86_64

    [root@linux-node1 ~]# getenforce ##check SELinux status

    Enforcing

    [root@linux-node1 ~]# setenforce 0 ##put SELinux into permissive mode

    [root@linux-node1 ~]# getenforce  

    Permissive

    [root@linux-node1 ~]# /etc/init.d/iptables stop

    [root@linux-node1 ~]# ifconfig eth0|awk -F '[: ]+' 'NR==2{print $4}' ##extract the IP address

    192.168.0.15

    [root@linux-node1 ~]# hostname ##check the hostname

    linux-node1.zhurui.com

    [root@linux-node1 yum.repos.d]# wget -O /etc/yum.repos.d/epel.repo   ##installing Salt requires the EPEL repository

    1.5 Installing Salt

    Server side:

    [root@linux-node1 yum.repos.d]# yum install -y salt-master salt-minion ##install the salt-master and salt-minion packages

    [root@linux-node1 yum.repos.d]# chkconfig salt-master on  ##enable at boot

    [root@linux-node1 yum.repos.d]# chkconfig salt-minion on  ##enable at boot

    [root@linux-node1 yum.repos.d]# /etc/init.d/salt-master start   ##start salt-master

    Starting salt-master daemon:                                   [  OK  ]

    At this point the minion configuration file must be edited before the salt-minion service can be started:

    [root@linux-node1 yum.repos.d]# grep '^[a-z]' /etc/salt/minion   

    master: 192.168.0.15  ##point the minion at the master

    [root@linux-node1 yum.repos.d]# cat /etc/hosts

    192.168.0.15 linux-node1.zhurui.com linux-node1  ##confirm the hostnames resolve

    192.168.0.16 linux-node2.zhurui.com linux-node2

    Resolution check:

    [root@linux-node1 yum.repos.d]# ping linux-node1.zhurui.com
    PING linux-node1.zhurui.com (192.168.0.15) 56(84) bytes of data.
    64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=1 ttl=64 time=0.087 ms
    64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=2 ttl=64 time=0.060 ms
    64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=3 ttl=64 time=0.053 ms
    64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=4 ttl=64 time=0.060 ms
    64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=5 ttl=64 time=0.053 ms
    64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=6 ttl=64 time=0.052 ms
    64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=7 ttl=64 time=0.214 ms
    64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=8 ttl=64 time=0.061 ms

    [root@linux-node1 yum.repos.d]# /etc/init.d/salt-minion start  ##start the minion

    Starting salt-minion daemon:                               [  OK  ]

    [root@linux-node1 yum.repos.d]#

    Client side:

    [root@linux-node2 ~]# yum install -y salt-minion  ##install the salt-minion package, i.e. the agent

    [root@linux-node2 ~]# chkconfig salt-minion on  ##enable at boot

    [root@linux-node2 ~]# grep '^[a-z]' /etc/salt/minion   ##point the client at the master

    master: 192.168.0.15

    [root@linux-node2 ~]# /etc/init.d/salt-minion start  ##then start the minion

    Starting salt-minion daemon:                               [  OK  ]

    1.6 Salt key authentication

    1.6.1 Before the salt-key -a linux* command is run, the /etc/salt/pki/master directory is structured as follows (screenshots omitted), with the pending keys under minions_pre.

    1.6.2 Run salt-key -a linux* to accept the keys; the files under minions_pre are then moved into the minions directory:

    [root@linux-node1 minion]# salt-key -a linux*
    The following keys are going to be accepted:
    Unaccepted Keys:
    linux-node1.zhurui.com
    linux-node2.zhurui.com
    Proceed? [n/Y] Y
    Key for minion linux-node1.zhurui.com accepted.
    Key for minion linux-node2.zhurui.com accepted.
    [root@linux-node1 minion]# salt-key
    Accepted Keys:
    linux-node1.zhurui.com
    linux-node2.zhurui.com
    Denied Keys:
    Unaccepted Keys:
    Rejected Keys:

    1.6.3 The directory structure then changes accordingly (screenshot omitted).

    1.6.4 At the same time, the master's public key appears on the client under /etc/salt/pki/minion/ (screenshot omitted).

    1.7 Salt remote execution in detail

    1.7.1 The salt '*' test.ping command

    [root@linux-node1 master]# salt '*' test.ping  ##in test.ping, test is a module and ping is a function within that module

    linux-node2.zhurui.com:

        True

    linux-node1.zhurui.com:

        True

    [root@linux-node1 master]# 

    1.7.2 The salt '*' cmd.run 'uptime' command

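    The original illustrated this with a screenshot; a representative run (output illustrative, not taken from the original) looks like:

    [root@linux-node1 master]# salt '*' cmd.run 'uptime'
    linux-node2.zhurui.com:
         20:15:01 up 2 days,  1:02,  2 users,  load average: 0.00, 0.00, 0.00
    linux-node1.zhurui.com:
         20:15:01 up 1 day,  3:40,  3 users,  load average: 0.01, 0.02, 0.00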

    1.8 SaltStack configuration management

    1.8.1 Edit /etc/salt/master and uncomment the file_roots block (screenshot omitted).

    1.8.2 Then run the following commands to set up the state tree:

    [root@linux-node1 master]# ls /srv/

    [root@linux-node1 master]# mkdir /srv/salt

    [root@linux-node1 master]# /etc/init.d/salt-master restart

    Stopping salt-master daemon:                               [  OK  ]

    Starting salt-master daemon:                                 [  OK  ]

    [root@linux-node1 salt]# cat apache.sls   ##create this file in the /srv/salt/ directory (initial contents shown in a screenshot, omitted here)

    [root@linux-node1 salt]# salt '*' state.sls apache  ##then apply the state

    This first attempt produced an error (screenshot omitted):

    apache.sls was then extended (the original showed a screenshot):
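    Reconstructed from the identical apache.sls reproduced later in this article, the full file is:

    apache-install:
      pkg.installed:
        - names:
          - httpd
          - httpd-devel

    apache-service:
      service.running:
        - name: httpd
        - enable: True
        - reload: True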

    The run then succeeds:

    [root@linux-node1 salt]# salt '*' state.sls apache
    linux-node2.zhurui.com:
    ----------
        ID: apache-install
        Function: pkg.installed
        Name: httpd
        Result: True
        Comment: Package httpd is already installed.
        Started: 22:38:52.954973
        Duration: 1102.909 ms
        Changes:
    ----------
        ID: apache-install
        Function: pkg.installed
        Name: httpd-devel
        Result: True
        Comment: Package httpd-devel is already installed.
        Started: 22:38:54.058190
        Duration: 0.629 ms
        Changes:
    ----------
        ID: apache-service
        Function: service.running
        Name: httpd
        Result: True
        Comment: Service httpd has been enabled, and is running
        Started: 22:38:54.059569
        Duration: 1630.938 ms
        Changes:
    ----------
            httpd:
                True

    Summary
    ------------
    Succeeded: 3 (changed=1)
    Failed:    0
    ------------
    Total states run: 3
    linux-node1.zhurui.com:
    ----------
        ID: apache-install
        Function: pkg.installed
        Name: httpd
        Result: True
        Comment: Package httpd is already installed.
        Started: 05:01:17.491217
        Duration: 1305.282 ms
        Changes:
    ----------
        ID: apache-install
        Function: pkg.installed
        Name: httpd-devel
        Result: True
        Comment: Package httpd-devel is already installed.
        Started: 05:01:18.796746
        Duration: 0.64 ms
        Changes:
    ----------
        ID: apache-service
        Function: service.running
        Name: httpd
        Result: True
        Comment: Service httpd has been enabled, and is running
        Started: 05:01:18.798131
        Duration: 1719.618 ms
        Changes:
    ----------
            httpd:
                True

    Summary
    ------------
    Succeeded: 3 (changed=1)
    Failed:    0
    ------------
    Total states run: 3
    [root@linux-node1 salt]#

    1.8.3 Verify that installing httpd via saltstack succeeded

    linux-node1:

    [root@linux-node1 salt]# lsof -i:80  ##it is up and running

    COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME

    httpd   7397   root    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

    httpd   7399 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

    httpd   7400 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

    httpd   7401 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

    httpd   7403 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

    httpd   7404 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

    httpd   7405 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

    httpd   7406 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

    httpd   7407 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

    linux-node2:

    [root@linux-node2 pki]# lsof -i:80

    COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME

    httpd   12895   root    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

    httpd   12897 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

    httpd   12898 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

    httpd   12899 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

    httpd   12901 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

    httpd   12902 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

    httpd   12906 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

    httpd   12908 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

    httpd   12909 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

    [root@linux-node2 pki]# 

    1.8.4 SaltStack state management with the highstate (top.sls shown in a screenshot, omitted here):

    [root@linux-node1 salt]# salt '*' state.highstate
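    The screenshot showed the top file; a minimal top.sls that applies the apache state to every minion (a sketch consistent with the top.sls examples later in this article) would be:

    base:
      '*':
        - apache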

    2.1 SaltStack data systems: Grains

    ●Grains

    ●Pillar

    2.1.1 Use salt to list the available grains:

    [root@linux-node1 salt]# salt 'linux-node1*' grains.ls
    linux-node1.zhurui.com:
    - SSDs
    - biosreleasedate
    - biosversion
    - cpu_flags
    - cpu_model
    - cpuarch
    - domain
    - fqdn
    - fqdn_ip4
    - fqdn_ip6
    - gpus
    - host
    - hwaddr_interfaces
    - id
    - init
    - ip4_interfaces
    - ip6_interfaces
    - ip_interfaces
    - ipv4
    - ipv6
    - kernel
    - kernelrelease
    - locale_info
    - localhost
    - lsb_distrib_codename
    - lsb_distrib_id
    - lsb_distrib_release
    - machine_id
    - manufacturer
    - master
    - mdadm
    - mem_total
    - nodename
    - num_cpus
    - num_gpus
    - os
    - os_family
    - osarch
    - oscodename
    - osfinger
    - osfullname
    - osmajorrelease
    - osrelease
    - osrelease_info
    - path
    - productname
    - ps
    - pythonexecutable
    - pythonpath
    - pythonversion
    - saltpath
    - saltversion
    - saltversioninfo
    - selinux
    - serialnumber
    - server_id
    - shell
    - virtual
    - zmqversion
    [root@linux-node1 salt]#

    2.1.2 Full grain data for the system:

    [root@linux-node1 salt]# salt 'linux-node1*' grains.items
    linux-node1.zhurui.com:
    ----------
    SSDs:
    biosreleasedate:
    07/31/2013
    biosversion:
    6.00
    cpu_flags:
    - fpu
    - vme
    - de
    - pse
    - tsc
    - msr
    - pae
    - mce
    - cx8
    - apic
    - sep
    - mtrr
    - pge
    - mca
    - cmov
    - pat
    - pse36
    - clflush
    - dts
    - mmx
    - fxsr
    - sse
    - sse2
    - ss
    - syscall
    - nx
    - rdtscp
    - lm
    - constant_tsc
    - up
    - arch_perfmon
    - pebs
    - bts
    - xtopology
    - tsc_reliable
    - nonstop_tsc
    - aperfmperf
    - unfair_spinlock
    - pni
    - ssse3
    - cx16
    - sse4_1
    - sse4_2
    - x2apic
    - popcnt
    - hypervisor
    - lahf_lm
    - arat
    - dts
    cpu_model:
    Intel(R) Core(TM) i3 CPU M 380 @ 2.53GHz
    cpuarch:
    x86_64
    domain:
    zhurui.com
    fqdn:
    linux-node1.zhurui.com
    fqdn_ip4:
    - 192.168.0.15
    fqdn_ip6:
    gpus:
    |_
    ----------
    model:
    SVGA II Adapter
    vendor:
    unknown
    host:
    linux-node1
    hwaddr_interfaces:
    ----------
    eth0:
    00:0c:29:fc:ba:90
    lo:
    00:00:00:00:00:00
    id:
    linux-node1.zhurui.com
    init:
    upstart
    ip4_interfaces:
    ----------
    eth0:
    - 192.168.0.15
    lo:
    - 127.0.0.1
    ip6_interfaces:
    ----------
    eth0:
    - fe80::20c:29ff:fefc:ba90
    lo:
    - ::1
    ip_interfaces:
    ----------
    eth0:
    - 192.168.0.15
    - fe80::20c:29ff:fefc:ba90
    lo:
    - 127.0.0.1
    - ::1
    ipv4:
    - 127.0.0.1
    - 192.168.0.15
    ipv6:
    - ::1
    - fe80::20c:29ff:fefc:ba90
    kernel:
    Linux
    kernelrelease:
    2.6.32-573.el6.x86_64
    locale_info:
    ----------
    defaultencoding:
    UTF8
    defaultlanguage:
    en_US
    detectedencoding:
    UTF-8
    localhost:
    linux-node1.zhurui.com
    lsb_distrib_codename:
    Final
    lsb_distrib_id:
    CentOS
    lsb_distrib_release:
    6.7
    machine_id:
    da5383e82ce4b8d8a76b5a3e00000010
    manufacturer:
    VMware, Inc.
    master:
    192.168.0.15
    mdadm:
    mem_total:
    556
    nodename:
    linux-node1.zhurui.com
    num_cpus:
    1
    num_gpus:
    1
    os:
    CentOS
    os_family:
    RedHat
    osarch:
    x86_64
    oscodename:
    Final
    osfinger:
    CentOS-6
    osfullname:
    CentOS
    osmajorrelease:
    6
    osrelease:
    6.7
    osrelease_info:
    - 6
    - 7
    path:
    /sbin:/usr/sbin:/bin:/usr/bin
    productname:
    VMware Virtual Platform
    ps:
    ps -efH
    pythonexecutable:
    /usr/bin/python2.6
    pythonpath:
    - /usr/bin
    - /usr/lib64/python26.zip
    - /usr/lib64/python2.6
    - /usr/lib64/python2.6/plat-linux2
    - /usr/lib64/python2.6/lib-tk
    - /usr/lib64/python2.6/lib-old
    - /usr/lib64/python2.6/lib-dynload
    - /usr/lib64/python2.6/site-packages
    - /usr/lib64/python2.6/site-packages/gtk-2.0
    - /usr/lib/python2.6/site-packages
    pythonversion:
    - 2
    - 6
    - 6
    - final
    - 0
    saltpath:
    /usr/lib/python2.6/site-packages/salt
    saltversion:
    2015.5.10
    saltversioninfo:
    - 2015
    - 5
    - 10
    - 0
    selinux:
    ----------
    enabled:
    True
    enforced:
    Permissive
    serialnumber:
    VMware-564d8f43912d3a99-eb c4 3b a9 34 fc ba 90
    server_id:
    295577080
    shell:
    /bin/bash
    virtual:
    VMware
    zmqversion:
    3.2.5

    2.1.3 Related version information (screenshot omitted).

    2.1.4 View all of node1's IP addresses:

    [root@linux-node1 salt]# salt 'linux-node1*' grains.get ip_interfaces:eth0 ##grains are used for information gathering

    linux-node1.zhurui.com:

        - 192.168.0.15

        - fe80::20c:29ff:fefc:ba90

    2.1.5 Use grains to collect system information:

    [root@linux-node1 salt]# salt 'linux-node1*' grains.get os

    linux-node1.zhurui.com:

        CentOS

    [root@linux-node1 salt]# salt -G os:CentOS cmd.run 'w'  ##-G: match on grains; run w to see login sessions

    linux-node2.zhurui.com:

         20:29:40 up 2 days, 16:09,  2 users,  load average: 0.00, 0.00, 0.00

        USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT

        root     tty1     -                Sun14   29:07m  0.32s  0.32s -bash

        root     pts/0    192.168.0.101    Sun20   21:41m  0.46s  0.46s -bash

    linux-node1.zhurui.com:

         02:52:01 up 1 day, 22:31,  3 users,  load average: 4.00, 4.01, 4.00

        USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT

        root     tty1     -                Sat20   24:31m  0.19s  0.19s -bash

        root     pts/0    192.168.0.101    Sun02    1.00s  1.33s  0.68s /usr/bin/python

        root     pts/1    192.168.0.101    Sun04   21:36m  0.13s  0.13s -bash

    [root@linux-node1 salt]# 


    2.1.6 Use a grains role to match the hosts tagged memcache and run a command on them

    [root@linux-node1 salt]# vim /etc/salt/minion ##edit the minion config and uncomment the following lines

    grains:
      roles:
        - webserver
        - memcache

    [root@linux-node1 salt]# /etc/init.d/salt-minion restart   ##restart for the grains to take effect

    Stopping salt-minion daemon:                               [  OK  ]

    Starting salt-minion daemon:                               [  OK  ]

    [root@linux-node1 salt]# 

    [root@linux-node1 salt]# salt -G 'roles:memcache' cmd.run 'echo zhurui'  ##target clients whose grains role matches memcache, then run the command

    linux-node1.zhurui.com:

        zhurui

    [root@linux-node1 salt]#

    2.1.7 Grains can also be defined in a new configuration file, /etc/salt/grains:

    [root@linux-node1 salt]# cat /etc/salt/grains 

    web: nginx

    [root@linux-node1 salt]# /etc/init.d/salt-minion restart  ##the service must be restarted after the config change

    Stopping salt-minion daemon:                               [  OK  ]

    Starting salt-minion daemon:                               [  OK  ]

    [root@linux-node1 salt]# 

    [root@linux-node1 salt]# salt -G web:nginx cmd.run 'w'  ##run w on hosts matching the grain web:nginx

    linux-node1.zhurui.com:

         03:31:07 up 1 day, 23:11,  3 users,  load average: 4.11, 4.03, 4.01

        USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT

        root     tty1     -                Sat20   25:10m  0.19s  0.19s -bash

        root     pts/0    192.168.0.101    Sun02    0.00s  1.41s  0.63s /usr/bin/python

        root     pts/1    192.168.0.101    Sun04   22:15m  0.13s  0.13s -bash

     

    Uses of grains:

    1. Collecting low-level system information
    2. Matching minions in remote execution
    3. Matching minions in top.sls

    2.1.8 Minions can also be matched from the /srv/salt/top.sls file

     

    [root@linux-node1 salt]# cat /srv/salt/top.sls 

    base:

      'web:nginx':

        - match: grain

        - apache

    [root@linux-node1 salt]# 


    2.2 SaltStack data systems: Pillar

    2.2.1 First enable the pillar switch around line 552 of the master config

     

    [root@linux-node1 salt]# grep '^[a-z]' /etc/salt/master 

    file_roots:

    pillar_opts: True

    [root@linux-node1 salt]# /etc/init.d/salt-master restart   ##restart the master

    Stopping salt-master daemon:                               [  OK  ]

    Starting salt-master daemon:                                 [  OK  ]

    [root@linux-node1 salt]# salt '*' pillar.items  ##verify with this command (output shown in a screenshot, omitted)

    [root@linux-node1 salt]# grep '^[a-z]' /etc/salt/master

    pillar_roots:  ##uncomment these lines (around line 529)
      base:
        - /srv/pillar

    [root@linux-node1 salt]# mkdir /srv/pillar

    [root@linux-node1 salt]# /etc/init.d/salt-master restart  ##restart the master

    Stopping salt-master daemon:                               [  OK  ]

    Starting salt-master daemon:                                 [  OK  ]

    [root@linux-node1 salt]# vim /srv/pillar/apache.sls

    [root@linux-node1 salt]# cat /srv/pillar/apache.sls

    {% if grains['os'] == 'CentOS' %}

    apache: httpd

    {% elif grains['os'] == 'Debian' %}

    apache: apache2

    {% endif %}

    [root@linux-node1 salt]# 
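    A pillar value defined this way can be consumed from a state through the pillar dictionary; a minimal hypothetical sketch (file name and state ID are illustrative, not from the original):

    # /srv/salt/apache-from-pillar.sls
    apache-from-pillar:
      pkg.installed:
        - name: {{ pillar['apache'] }}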


    进而钦命哪个minion能够见见:

    [root@linux-node1 salt]# cat /srv/pillar/top.sls 

    base:

      '*':

        - apache


     

    [root@linux-node1 salt]# salt '*' pillar.items ##after the change, verify with this command

    linux-node1.zhurui.com:

        ----------

        apache:

            httpd

    linux-node2.zhurui.com:

        ----------

        apache:

            httpd


    2.2.2 Targeting hosts with pillar (screenshot omitted)

    Error handling:

    [root@linux-node1 salt]# salt '*' saltutil.refresh_pillar  ##a refresh command must be run first

    linux-node2.zhurui.com:

        True

    linux-node1.zhurui.com:

        True

    [root@linux-node1 salt]# 


    [root@linux-node1 salt]# salt -I 'apache:httpd' test.ping

    linux-node1.zhurui.com:

        True

    linux-node2.zhurui.com:

        True

    [root@linux-node1 salt]# 


     

    2.3 Differences between the SaltStack data systems

    Grains — stored on the minion; static data; collected when the minion starts, refreshable with saltutil.sync_grains; holds basic minion data, e.g. for matching minions, and can feed asset management.
    Pillar — stored on the master; dynamic data; defined on the master and assigned to particular minions, refreshable with saltutil.refresh_pillar; holds master-assigned data visible only to the targeted minions, suited to sensitive values.
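    The two refresh commands named above are run from the master; refresh_pillar is used later in this article, and the grains variant follows the same pattern:

    salt '*' saltutil.sync_grains
    salt '*' saltutil.refresh_pillar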

    1. Environment preparation

    Prepare two virtual machines:

    hostname       ip          role
    linux-node1    10.0.0.7    master
    linux-node2    10.0.0.8    minion

     

    Install the master and the minion on node 1

    [root@linux-node1 ~]yum install salt-master salt-minion -y

     

    Install the minion on node 2

    [root@linux-node2 ~]yum install  salt-minion -y

     

    Enable each service to start at boot

    [root@linux-node1 ~]chkconfig  salt-master on

    [root@linux-node1 ~]chkconfig  --add salt-master

    [root@linux-node1 ~]chkconfig  salt-minion on

    [root@linux-node1 ~]chkconfig  --add salt-minion

    [root@linux-node2 ~]chkconfig  salt-minion on

    [root@linux-node2 ~]chkconfig  --add salt-minion

     

    Point the minions at the master

    vim /etc/salt/minion

    master: 10.0.0.7

     

    Accept the keys of node 1 and node 2

    salt-key -a 'linux*'

     

    I. Environment

    Ops is repetitive work: OS installation, environment setup, adding monitoring, code releases (with secondary development on git or svn), project migration, scheduled tasks.

    2. Testing

    Ping node 1 and node 2:

    salt '*' test.ping

     

    Use cmd.run to run a bash command and check load:

    salt '*' cmd.run 'uptime'

     

    Set the sls file paths

    [root@linux-node1 ~]mkdir -p /srv/salt/base

    [root@linux-node1 ~]mkdir -p /srv/salt/test

    [root@linux-node1 ~]mkdir -p /srv/salt/prod

     

    vim /etc/salt/master

    file_roots:

      base:

        - /srv/salt/base

      test:

        - /srv/salt/test

      prod:

        - /srv/salt/prod

     

    Restart the master

    /etc/init.d/salt-master restart

     

    Write YAML to install Apache and manage its service

    cd /srv/salt

    vim apache.sls

    apache-install:

      pkg.installed:

        - names:

          - httpd

          - httpd-devel

     

    apache-service:

      service.running:

        - name: httpd

        - enable: True

        - reload: True

     

    Apply the state file

    salt '*' state.sls apache

     

    Write the highstate top file

    vim top.sls

    base:

      'linux-node2':

      - apache

     

    salt '*' state.highstate   #run the highstate, i.e. top.sls

     

    About Salt:

    Salt is a new infrastructure management tool. It takes only minutes to get running, scales well enough to manage tens of thousands of servers, and completes data transfers in seconds.

    3. Data systems: Grains

    salt 'linux-node1' grains.items  #query all key/value pairs

    salt 'linux-node1' grains.get fqdn #query a single value

     

    Show all of node 1's eth0 IPs

    [root@linux-node1 ~]# salt 'linux-node1' grains.get ip_interfaces:eth0

    linux-node1:

        - 10.0.0.7

        - fe80::20c:29ff:fe9d:57e8

     

    #match on the OS name with grains and run cmd.run

    [root@linux-node1 ~]# salt -G os:CentOS cmd.run 'w'  #-G means match on grains

    linux-node2:

         03:47:49 up  9:58,  2 users,  load average: 0.00, 0.00, 0.00

        USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT

        root     pts/1    10.0.0.1         17:50    1:31m  0.14s  0.14s -bash

        root     pts/0    10.0.0.1         03:37    5:40   0.00s  0.00s -bash

    linux-node1:

         03:47:49 up  1:35,  2 users,  load average: 0.00, 0.00, 0.00

        USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT

        root     pts/0    10.0.0.1         02:13    1:01m  0.08s  0.01s vim top.sls

        root     pts/1    10.0.0.1         03:37    0.00s  0.52s  0.34s /usr/bin/python

     

    vim /etc/salt/grains

    web: nginx

    salt -G web:nginx cmd.run 'w'

     

    #cat /etc/redhat-release
    CentOS Linux release 7.4.1708 (Core)

    Salt can do configuration management, remote commands, and package management.

    4. Data systems: Pillar

    Set the pillar file path

    vim /etc/salt/master

    pillar_roots:

      base:

        - /srv/pillar

     

    mkdir /srv/pillar #create the default pillar directory

     

    /etc/init.d/salt-master restart

    vim /srv/pillar/apache.sls  #uses the jinja template language

    {% if grains['os'] == 'CentOS' %}

    apache: httpd

    {% elif grains['os'] == 'Debian' %}

    apache: apache2

    {% endif %}

     

    vim /srv/pillar/top.sls

    base:

      '*':

        - apache

     

    [root@linux-node1 ~]# salt '*' pillar.items

    linux-node2:

        ----------

        apache:

            httpd

    linux-node1:

        ----------

        apache:

            httpd

     

    After configuring pillar, refresh for it to take effect

    [root@linux-node1 ~]salt '*' saltutil.refresh_pillar

    [root@linux-node1 ~]#  salt -I 'apache:httpd' test.ping

    linux-node2:

        True

    linux-node1:

        True

     

    http://docs.saltstack.cn/topics/index.html    #saltstack documentation site in Chinese

    saltstack remote execution consists of:

    targeting

    modules

    returners

     

    Access control at the module level

    [root@linux-node1 ~]vim /etc/salt/master

    client_acl:

      oldboy:                      #the oldboy user may only use test.ping and all network.* functions

        - test.ping

        - network.*

      user01:                    

        - linux-node1*:

          - test.ping

     

    Permission setup

    chmod 755 /var/cache/salt /var/cache/salt/master /var/cache/salt/master/jobs /var/run/salt /var/run/salt/master

     

     

    [root@linux-node1 ~]/etc/init.d/salt-master restart

    [root@linux-node1 ~]# su - oldboy

    [oldboy@linux-node1 ~]$ salt '*' cmd.run 'df -h'

    [WARNING ] Failed to open log file, do you have permission to write to /var/log/salt/master?

    Failed to authenticate! This is most likely because this user is not permitted to execute commands, but there is a small possibility that a disk error occurred (check disk/inode usage).
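    The failure above is expected: cmd.run is not on oldboy's whitelist. Whitelisted calls should go through, e.g. (illustrative, not from the original):

    [oldboy@linux-node1 ~]$ salt '*' test.ping
    [oldboy@linux-node1 ~]$ salt '*' network.ip_addrs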

     

    Create the schema, 3 tables:

    CREATE DATABASE `salt`

    DEFAULT CHARACTER SET utf8

    DEFAULT COLLATE utf8_general_ci;

    USE `salt`;

     

    CREATE TABLE `jids` (

    `jid` varchar(255) NOT NULL,

    `load` mediumtext NOT NULL,

    UNIQUE KEY `jid` (`jid`)

    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    CREATE INDEX jid ON jids(jid) USING BTREE;

     

    CREATE TABLE `salt_returns` (

    `fun` varchar(50) NOT NULL,

    `jid` varchar(255) NOT NULL,

    `return` mediumtext NOT NULL,

    `id` varchar(255) NOT NULL,

    `success` varchar(10) NOT NULL,

    `full_ret` mediumtext NOT NULL,

    `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    KEY `id` (`id`),

    KEY `jid` (`jid`),

    KEY `fun` (`fun`)

    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

     

    CREATE TABLE `salt_events` (

    `id` BIGINT NOT NULL AUTO_INCREMENT,

    `tag` varchar(255) NOT NULL,

    `data` mediumtext NOT NULL,

    `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    `master_id` varchar(255) NOT NULL,

    PRIMARY KEY (`id`),

    KEY `tag` (`tag`)

    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

     

    Grant privileges to the salt user

    grant all on salt.* to salt@'10.0.0.0/255.255.255.0' identified by 'salt';

     

    yum install -y MySQL-python     #writing results to MySQL depends on the MySQL-python package

    vim /etc/salt/master

    Add at the bottom of the file:

    master_job_cache: mysql   #with this line, executed commands are saved to the database automatically, without appending --return mysql

    mysql.host: '10.0.0.7'

    mysql.user: 'salt'

    mysql.pass: 'salt'

    mysql.db: 'salt'

    mysql.port: 3306

    /etc/init.d/salt-master restart

     

    Test whether command results are written to the database

    [root@linux-node1 ~]# salt '*' cmd.run 'ls' --return mysql
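    To confirm that rows landed, query the salt_returns table created above (query illustrative):

    mysql> SELECT fun, jid, success FROM salt.salt_returns ORDER BY alter_time DESC LIMIT 5;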

     

    Install the packages needed for compiling from source

    yum install gcc gcc-c++ glibc autoconf make openssl openssl-devel

     

    #python -V

    Python 2.7.5

    5. Automated deployment of a web cluster architecture

    5.1 Installing haproxy

    cd /usr/local/src && tar zxf haproxy-1.7.9.tar.gz && cd haproxy-1.7.9 && make TARGET=linux26 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy

    cd /usr/local/src/haproxy-1.7.9/examples/

    vim haproxy.init

    BIN=/usr/local/haproxy/sbin/$BASENAME  # change the default path in the init script

    cp haproxy.init /srv/salt/prod/haproxy/files/

     

    Write the YAML states

    mkdir /srv/salt/prod/pkg            #sls for the source-build dependency packages

    mkdir /srv/salt/prod/haproxy        #sls for the haproxy install

    mkdir /srv/salt/prod/haproxy/files    #holds the haproxy source tarball

     

    Automated compile-and-install of haproxy.

    cd /srv/salt/prod/pkg

     

    Automate installation of the build dependencies

    vim pkg-init.sls

    pkg-init:

      pkg.installed:                 #the pkg module's installed state

        - names:

          - gcc

          - gcc-c++

          - glibc

          - make

          - autoconf

          - openssl

          - openssl-devel

      

    cd /srv/salt/prod/haproxy

    vim install.sls   #YAML for the automated haproxy build

    include:

      - pkg.pkg-init

     

    haproxy-install:

      file.managed:

        - name: /usr/local/src/haproxy-1.7.9.tar.gz

        - source: salt://haproxy/files/haproxy-1.7.9.tar.gz #salt:// maps to /srv/salt/prod here

        - user: root

        - group: root

        - mode: 755

      cmd.run:

        - name: cd /usr/local/src && tar zxf haproxy-1.7.9.tar.gz && cd haproxy-1.7.9 && make TARGET=linux26 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy

        - unless: test -d /usr/local/haproxy

        - require:

          - pkg: pkg-init

          - file: haproxy-install

     

    haproxy-init:

      file.managed:

        - name: /etc/init.d/haproxy   # install the init script as /etc/init.d/haproxy

        - source: salt://haproxy/files/haproxy.init

        - user: root

        - group: root

        - mode: 755

        - require:

          - cmd: haproxy-install

      cmd.run:

        - name: chkconfig --add haproxy

        - unless: chkconfig --list | grep haproxy # the opposite of onlyif: run only if this fails, i.e. skip when haproxy is already registered

        - require:

          - file: haproxy-init

    net.ipv4.ip_nonlocal_bind:   #/proc/sys/net/ipv4/ip_nonlocal_bind defaults to 0; set it to 1 so non-local IPs can be bound

      sysctl.present:             #the state function for setting kernel parameters

        - value: 1

     

    haproxy-config-dir:

      file.directory:   #the state function for creating a directory

        - name: /etc/haproxy  #create the /etc/haproxy directory

        - user: root

        - group: root

        - mode: 755

     

    Manually apply the haproxy install state to node 1:

    salt 'linux-node1' state.sls haproxy.install env=prod #env selects the prod file_roots directory

     

    Create the cluster directories

    mkdir /srv/salt/prod/cluster

    mkdir /srv/salt/prod/cluster/files

    cd /srv/salt/prod/cluster/files

    vim haproxy-outside.cfg

    global

    maxconn 100000

    chroot /usr/local/haproxy

    uid 99

    gid 99

    daemon

    nbproc 1

    pidfile /usr/local/haproxy/logs/haproxy.pid

    log 127.0.0.1 local3 info

     

    defaults

    option http-keep-alive

    maxconn 100000

    mode http

    timeout connect 5000ms

    timeout client  50000ms

    timeout server  50000ms

     

    listen stats

    mode http

    bind 0.0.0.0:8888

    stats enable

    stats uri       /haproxy-status

    stats auth      haproxy:saltstack

    frontend frontend_www_example_com

    bind    10.0.0.11:80

    mode    http

    option  httplog

    log global

            default_backend backend_www_example_com

     

    backend backend_www_example_com

    option forwardfor header X-REAL-IP

    option httpchk HEAD / HTTP/1.0

    balance source

    server web-node1        10.0.0.7:8080 check inter 2000 rise 30 fall 15

    server web-node2        10.0.0.8:8080 check inter 2000 rise 30 fall 15

     

    cd ..

    vim haproxy-outside.sls

    include:

      - haproxy.install

     

    haproxy-service:

      file.managed:

        - name: /etc/haproxy/haproxy.cfg

        - source: salt://cluster/files/haproxy-outside.cfg

        - user: root

        - group: root

        - mode: 644

      service.running:

        - name: haproxy

        - enable: True

        - reload: True

        - require:

          - cmd: haproxy-init

        - watch:

          - file: haproxy-service

    Edit top.sls

    cd /srv/salt/base/

    vim top.sls

    base:

      '*':

        - init.env_init

     

    prod:

      'linux-node1':

        - cluster.haproxy-outside

      'linux-node2':

        - cluster.haproxy-outside

    Change the httpd listen port on node 1 and node 2

    vim /etc/httpd/conf/httpd.conf # change port 80 to 8080

    Listen 8080

    Then restart: /etc/init.d/httpd restart
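    With salt, the same edit can be pushed to both nodes at once; a sketch assuming the stock httpd.conf layout:

    salt '*' cmd.run "sed -i 's/^Listen 80$/Listen 8080/' /etc/httpd/conf/httpd.conf"
    salt '*' cmd.run '/etc/init.d/httpd restart'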

     

    vim /var/www/html/index.html

    linux-node1  #on node 2, write linux-node2

     

    Open 10.0.0.7:8888/haproxy-status in a browser to reach the status/health-check page

    Credentials: haproxy/saltstack

     

    [root@linux-node1 html]# cd /srv/salt/prod/

    [root@linux-node1 prod]# tree

    .

    |-- cluster

    |   |-- files

    |   |   `-- haproxy-outside.cfg

    |   `-- haproxy-outside.sls

    |-- haproxy

    |   |-- files

    |   |   |-- haproxy-1.7.9.tar.gz

    |   |   `-- haproxy.init

    |   `-- install.sls

    `-- pkg

        `-- pkg-init.sls

     

    Node environment description:

    Salt configuration

    Prepare three virtual machines and set their hostnames per convention: test-c2c-console01, test-c2c-php01, test-c2c-php02.

    [root@test-c2c-console01 ~]# cat /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=test-c2c-console01.bj

    [root@test-c2c-console01 ~]# cat /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 oldboylinux
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 oldboylinux

    192.168.31.138 test-c2c-php01
    192.168.31.137 test-c2c-php02
    192.168.31.128 test-c2c-console01.bj

    Configure the yum repos

    [root@test-c2c-console01 ~]# cd /etc/yum.repos.d/
    [root@test-c2c-console01 yum.repos.d]# ls
    CentOS-Base.repo CentOS-Debuginfo.repo CentOS-Media.repo
    CentOS-Base.repo.20161216.oldboy CentOS-fasttrack.repo CentOS-Vault.repo

    rpm -ivh

    wget

    [root@test-c2c-console01 yum.repos.d]# ls
    CentOS6-Base-163.repo CentOS-Debuginfo.repo CentOS-Vault.repo
    CentOS-Base.repo CentOS-fasttrack.repo epel.repo
    CentOS-Base.repo.20161216.oldboy CentOS-Media.repo epel-testing.repo

     

    Server side

    yum install salt-master -y

    /etc/init.d/salt-master start

    chkconfig salt-master on

    Client side

    yum install salt-minion -y

     

    vim /etc/salt/minion

    master: 192.168.31.128 #master address

    cachedir: /etc/salt/modules #module cache directory

    log_file: /var/log/salt/minion.log #log path

    log_level: warning #log level

     

    /etc/init.d/salt-minion start

    chkconfig salt-minion on

    5.2 Installing keepalived

    wget && tar zxf keepalived-1.2.19.tar.gz && cd keepalived-1.2.19 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install

    /usr/local/src/keepalived-1.2.19/keepalived/etc/init.d/keepalived.init #init script

    /usr/local/src/keepalived-1.2.19/keepalived/etc/keepalived/keepalived.conf #template config file

    [root@linux-node1 etc]# mkdir /srv/salt/prod/keepalived

    [root@linux-node1 etc]# mkdir /srv/salt/prod/keepalived/files

    [root@linux-node1 etc]# cp init.d/keepalived.init /srv/salt/prod/keepalived/files/

    [root@linux-node1 etc]# cp keepalived/keepalived.conf /srv/salt/prod/keepalived/files/

    [root@linux-node1 keepalived]# cd /usr/local/keepalived/etc/sysconfig/

    [root@linux-node1 sysconfig]# cp keepalived /srv/salt/prod/keepalived/files/keepalived.sysconfig

    [root@linux-node1 etc]# cd /srv//salt/prod/keepalived/files/

    [root@linux-node1 files]# vim keepalived.init

    daemon /usr/local/keepalived/sbin/keepalived ${KEEPALIVED_OPTIONS} # change the binary path loaded at startup

    [root@linux-node1 files] cp /usr/local/src/keepalived-1.2.19.tar.gz .

    [root@linux-node1 files]# cd ..    

    [root@linux-node1 keepalived]# vim install.sls

    include:

      - pkg.pkg-init

     

    keepalived-install:

      file.managed:

        - name: /usr/local/src/keepalived-1.2.19.tar.gz

        - source: salt://keepalived/files/keepalived-1.2.19.tar.gz

        - user: root

        - group: root

        - mode: 755

      cmd.run:

        - name: cd /usr/local/src && tar zxf keepalived-1.2.19.tar.gz && cd keepalived-1.2.19 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install

        - unless: test -d /usr/local/keepalived

        - require:

          - pkg: pkg-init

          - file: keepalived-install

     

    keepalived-init:

      file.managed:

        - name: /etc/init.d/keepalived

        - source: salt://keepalived/files/keepalived.init

        - user: root

        - group: root

        - mode: 755

      cmd.run:

        - name: chkconfig --add keepalived

        - unless: chkconfig --list | grep keepalived

        - require:

          - file: keepalived-init

     

    /etc/sysconfig/keepalived:

      file.managed:

        - source: salt://keepalived/files/keepalived.sysconfig

        - user: root

        - group: root

        - mode: 644

     

    /etc/keepalived:

      file.directory:

        - user: root

        - group: root

        - mode: 755

     

    [root@linux-node1 ~]# cd /srv/salt/prod/cluster/files/

    [root@linux-node1 files]# vim haproxy-outside-keepalived.conf

    ! Configuration File for keepalived

    global_defs {

       notification_email {

         saltstack@example.com

       }

       notification_email_from keepalived@example.com

       smtp_server 127.0.0.1

       smtp_connect_timeout 30

       router_id {{ROUTEID}}

    }

    vrrp_instance haproxy_ha {

    state {{STATEID}}

    interface eth0

        virtual_router_id 36

    priority {{PRIORITYID}}

        advert_int 1

    authentication {

    auth_type PASS

            auth_pass 1111

        }

        virtual_ipaddress {

           10.0.0.11

        }
    }

     

    [root@linux-node1 cluster]# vim haproxy-outside-keepalived.sls

    include:

      - keepalived.install

     

    keepalived-service:

      file.managed:

        - name: /etc/keepalived/keepalived.conf

        - source: salt://cluster/files/haproxy-outside-keepalived.conf

        - user: root

        - group: root

        - mode: 644

        - template: jinja

        {% if grains['fqdn'] == 'linux-node1' %}

        - ROUTEID: haproxy_ha

        - STATEID: MASTER

        - PRIORITYID: 150

        {% elif grains['fqdn'] == 'linux-node2' %}

        - ROUTEID: haproxy_ha

        - STATEID: BACKUP

        - PRIORITYID: 100

        {% endif %}

      service.running:

        - name: keepalived

        - enable: True

        - watch:

          - file: keepalived-service

     

    [root@linux-node1 cluster]salt '*' state.sls cluster.haproxy-outside-keepalived env=prod

    [root@linux-node1 base]# cd /srv/salt/base/

    [root@linux-node1 base]# vim top.sls

    base:

      '*':

        - init.env_init

     

    prod:

      'linux-node1':

        - cluster.haproxy-outside

        - cluster.haproxy-outside-keepalived

      'linux-node2':

        - cluster.haproxy-outside

        - cluster.haproxy-outside-keepalived

    Verify keepalived

    [root@linux-node1 prod]# ip ad li

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

        inet 127.0.0.1/8 scope host lo

        inet6 ::1/128 scope host

           valid_lft forever preferred_lft forever

    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

        link/ether 00:0c:29:9d:57:e8 brd ff:ff:ff:ff:ff:ff

        inet 10.0.0.7/24 brd 10.0.0.255 scope global eth0

        inet 10.0.0.11/32 scope global eth0

        inet6 fe80::20c:29ff:fe9d:57e8/64 scope link

           valid_lft forever preferred_lft forever

     

    [root@linux-node2 html]# ip ad li

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

        inet 127.0.0.1/8 scope host lo

        inet6 ::1/128 scope host

           valid_lft forever preferred_lft forever

    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

        link/ether 00:0c:29:ca:41:95 brd ff:ff:ff:ff:ff:ff

        inet 10.0.0.8/24 brd 10.0.0.255 scope global eth0

        inet6 fe80::20c:29ff:feca:4195/64 scope link

           valid_lft forever preferred_lft forever    

    [root@linux-node1 prod]# /etc/init.d/keepalived stop

    Stopping keepalived:                                       [  OK  ]

    [root@linux-node2 html]# ip ad li

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

        inet 127.0.0.1/8 scope host lo

        inet6 ::1/128 scope host

           valid_lft forever preferred_lft forever

    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

        link/ether 00:0c:29:ca:41:95 brd ff:ff:ff:ff:ff:ff

        inet 10.0.0.8/24 brd 10.0.0.255 scope global eth0

        inet 10.0.0.11/32 scope global eth0

        inet6 fe80::20c:29ff:feca:4195/64 scope link

           valid_lft forever preferred_lft forever

    [root@linux-node1 prod]# /etc/init.d/keepalived start

    Starting keepalived:                                       [  OK  ]

    [root@linux-node2 html]# ip ad li

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

        inet 127.0.0.1/8 scope host lo

        inet6 ::1/128 scope host

           valid_lft forever preferred_lft forever

    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

        link/ether 00:0c:29:ca:41:95 brd ff:ff:ff:ff:ff:ff

        inet 10.0.0.8/24 brd 10.0.0.255 scope global eth0

        inet6 fe80::20c:29ff:feca:4195/64 scope link

           valid_lft forever preferred_lft forever

    [root@linux-node1 prod]# vim /srv/salt/prod/cluster/files/haproxy-outside.cfg    

    balance roundrobin   #roundrobin = round-robin; source = pin each client to a fixed backend

     


    Key management

    [root@test-c2c-console01 ~]# salt-key -L
    Accepted Keys: #accepted
    Denied Keys: #denied
    Unaccepted Keys: #pending
    test-c2c-php01
    test-c2c-php02
    Rejected Keys: #revoked

    [root@test-c2c-console01 ~]# salt-key -A
    The following keys are going to be accepted:
    Unaccepted Keys:
    test-c2c-php01
    test-c2c-php02
    Proceed? [n/Y] y
    Key for minion test-c2c-php01 accepted.
    Key for minion test-c2c-php02 accepted.

    [root@test-c2c-console01 ~]# salt-key -L
    Accepted Keys:
    test-c2c-php01
    test-c2c-php02
    Denied Keys:
    Unaccepted Keys:
    Rejected Keys:

    [root@test-c2c-console01 ~]# salt '*' test.ping
    test-c2c-php02:
        True
    test-c2c-php01:
        True

    Common salt-key options:

    -L: list key states
    -A: accept all
    -D: delete all
    -a: accept the specified key
    -d: delete the specified key
    -r: reject the specified key (a key still in the unaccepted state)

    5.3 Installing zabbix-agent

    [root@linux-node1 prod]# cd /srv/salt/base/init

    [root@linux-node1 init]# vim zabbix_agent.sls

    zabbix-agent-install:

      pkg.installed:

        - name: zabbix-agent

     

      file.managed:

        - name: /etc/zabbix/zabbix_agentd.conf

        - source: salt://init/files/zabbix_agent.conf

        - template: jinja

        - defaults:

          Server: {{ pillar['zabbix-agent']['Zabbix_Server'] }}

        - require:

          - pkg: zabbix-agent-install

     

      service.running:

        - name: zabbix-agent

        - enable: True

        - watch:

          - pkg: zabbix-agent-install

          - file: zabbix-agent-install

    [root@linux-node1 init]# vim /etc/salt/master

    pillar_roots:

      base:

        - /srv/pillar/base

    [root@linux-node1 init]# mkdir /srv/pillar/base

    [root@linux-node1 init]# /etc/init.d/salt-master restart

    [root@linux-node1 init]# cd /srv/pillar/base/

    [root@linux-node1 base]# vim top.sls

    base:

      '*':

        - zabbix

    [root@linux-node1 base]# vim zabbix.sls

    zabbix-agent:

      Zabbix_Server: 10.0.0.7

    [root@linux-node1 base]# cd /srv/salt/base/init/files

    [root@linux-node1 files]# cp /etc/zabbix/zabbix_agentd.conf ./zabbix_agent.conf

    [root@linux-node1 files]# vim zabbix_agent.conf  #use a jinja variable reference in the template

    Server={{ Server }}

     

    [root@linux-node1 init]# vim env_init.sls

    include:

      - init.dns

      - init.history

      - init.audit

      - init.sysctl

      - init.zabbix_agent

    [root@linux-node1 ~]# salt '*' state.highstate

     

    Installing nginx, php, and memcache

    https://github.com/a7260488/slat-test

    percona-zabbix-templates  #software for monitoring mysql from zabbix

     

    Management

    Grouping

    [root@test-c2c-console01 salt]# pwd

    /etc/salt

    [root@test-c2c-console01 salt]# vim master

    nodegroups:

    #dev: 'L@ops-dev01.bj,ops-dev02.bj' #list match

    dev: 'E@ops-dev0[1-9].bj' #regex match
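    The php group pinged below does not appear in this snippet; it would presumably be defined the same way, e.g. (hypothetical):

    php: 'L@test-c2c-php01,test-c2c-php02'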

    [root@test-c2c-console01 salt]# salt -N 'php' test.ping #ping the machines in the php group
    test-c2c-php02:
        True
    test-c2c-php01:
        True
    [root@test-c2c-console01 salt]# salt -N 'php' cmd.run 'uptime' #check the load on the php group machines
    test-c2c-php01:
         11:45:01 up 1:45, 2 users, load average: 0.00, 0.00, 0.00
    test-c2c-php02:
         11:44:20 up 1:46, 2 users, load average: 0.00, 0.00, 0.00

    Environment configuration

    file_roots:
      base: #testing environment
        - /srv/salt
      dev: #development environment
        - /srv/salt/dev/services
        - /srv/salt/dev/states
      prod: #production environment
        - /srv/salt/prod/services
        - /srv/salt/prod/states

    Ad-hoc management

    salt -N 'dev' test.ping #match the grouped hosts and ping them immediately
    salt -N 'dev' cmd.run 'uptime' #run a command
    salt -N 'ops-dev(02|03)' test.ping #regex host match, immediate ping
    salt '*' cmd.run "ab -n 10 -c 2 " #run a load test against all machines
    salt -N 'dev' sys.doc cmd #view module documentation
    salt -N 'dev' saltutil.sync_all #sync custom modules to the dev group
    salt -N 'dev' sys.doc mi #view help for the custom mi module
    salt -N 'dev' mi.sshkey #run that module
    salt -N 'dev' state.sls yum -v test=true #test-apply the specified state
    salt -N 'dev' state.highstate -v test=true #test-apply the full highstate

    5.4 Configuring master-syndic

    Its role is somewhat similar to zabbix-proxy

    [root@linux-node2 ~]# yum install salt-master salt-syndic -y

    [root@linux-node2 ~]# vim /etc/salt/master

    syndic_master: 10.0.0.7

    [root@linux-node2 ~]# vim /etc/salt/master

    [root@linux-node2 ~]# /etc/init.d/salt-master start

    Starting salt-master daemon:                               [  OK  ]

    [root@linux-node2 ~]# /etc/init.d/salt-syndic start

    Starting salt-syndic daemon:                               [  OK  ]

    [root@linux-node1 ~]# vim /etc/salt/master

    order_masters: True

    [root@linux-node1 ~]# /etc/init.d/salt-master restart

    [root@linux-node1 ~]# /etc/init.d/salt-minion stop

    Stopping salt-minion daemon:                               [  OK  ]

    [root@linux-node2 ~]# /etc/init.d/salt-minion stop

    Stopping salt-minion daemon:                               [  OK  ]

    [root@linux-node2 ~]# salt-key -D

    [root@linux-node1 ~]# cd /etc/salt/pki/minion/

    [root@linux-node1 minion]# rm -fr *

    [root@linux-node1 ~]# cd  /etc/salt/pki/minion

    [root@linux-node2 minion]# rm -fr *

    [root@linux-node1 salt]# vim /etc/salt/minion

    master: 10.0.0.8

    [root@linux-node2 salt]# vim /etc/salt/minion

    master: 10.0.0.8

    [root@linux-node1 salt]# /etc/init.d/salt-minion start

    Starting salt-minion daemon:                               [  OK  ]

    [root@linux-node2 salt]# /etc/init.d/salt-minion start

    Starting salt-minion daemon:                               [  OK  ]

    [root@linux-node1 minion]# salt-key -A

    The following keys are going to be accepted:

    Unaccepted Keys:

    linux-node2

    Proceed? [n/Y] y

    Key for minion linux-node2 accepted.

    [root@linux-node1 minion]# salt-key

    Accepted Keys:

    linux-node2

    Denied Keys:

    Unaccepted Keys:

    Rejected Keys:

    [root@linux-node2 salt]# salt-key

    Accepted Keys:

    Denied Keys:

    Unaccepted Keys:

    linux-node1

    linux-node2

    Rejected Keys:

    [root@linux-node2 salt]# salt-key -A

    The following keys are going to be accepted:

    Unaccepted Keys:

    linux-node1

    linux-node2

    Proceed? [n/Y] y

    Key for minion linux-node1 accepted.

    Key for minion linux-node2 accepted.

     

    II. hosts file resolution

    5.5 saltstack auto-scaling

    zabbix monitoring ---> Action ----> create a VM/Docker container ----> deploy the service ----> deploy the code ----> test status -----> join the cluster ---> add to monitoring ---> notify

    Download etcd

    rz etcd-v2.2.1-linux-amd64.tar.gz (binary tarball)

    [root@linux-node1 src]# cd etcd-v2.0.5-linux-amd64

    [root@linux-node1 etcd-v2.0.5-linux-amd64]# cp etcd etcdctl  /usr/local/bin/

    [root@linux-node1 etcd-v2.0.5-linux-amd64] ./etcd &

    Or start it like this:

    nohup etcd --name auto_scale --data-dir /data/etcd/

    --listen-peer-urls ''

    --listen-client-urls ''

    --advertise-client-urls '' &

    Set a key's value

    [root@linux-node1 wal]# curl -s -XPUT -d value="Hello world" | python -m json.tool      

    {

        "action": "set",

        "node": {

            "createdIndex": 8,

            "key": "/message",

            "modifiedIndex": 8,

            "value": "Hello world"

        },

        "prevNode": {

            "createdIndex": 7,

            "key": "/message",

            "modifiedIndex": 7,

            "value": "Hello world"

        }

    }

    Get the key's value

    [root@linux-node1 wal]# curl -s |python -m json.tool
    {

        "action": "get",

        "node": {

            "createdIndex": 8,

            "key": "/message",

            "modifiedIndex": 8,

            "value": "Hello world"

        }

    }

    Delete the key

    [root@linux-node1 wal]# curl -s -XDELETE |python -m json.tool      

    {

        "action": "delete",

        "node": {

            "createdIndex": 8,

            "key": "/message",

            "modifiedIndex": 9

        },

        "prevNode": {

            "createdIndex": 8,

            "key": "/message",

            "modifiedIndex": 8,

            "value": "Hello world"

        }

    }

    After deleting the key, fetching it again returns key not found

    [root@linux-node1 wal]# curl -s |python -m json.tool
    {

        "cause": "/message",

        "errorCode": 100,

        "index": 9,

        "message": "Key not found"

    }

    Set a key with a TTL of 5 seconds; after 5 seconds it expires and returns "message": "Key not found"

    [root@linux-node1 wal]# curl -s -XPUT -d value="Hello world 1" -d ttl=5 |python -m json.tool

    {

        "action": "set",

        "node": {

            "createdIndex": 10,

            "expiration": "2017-11-17T12:59:41.572099187Z",

            "key": "/ttl_use",

            "modifiedIndex": 10,

            "ttl": 5,

            "value": ""

        }

    }
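    The curl target URLs were stripped from the original. Against the etcd v2 keys API on the client port configured below (4001), they would typically look like this (illustrative):

    curl -s -XPUT http://10.0.0.7:4001/v2/keys/message -d value="Hello world" | python -m json.tool
    curl -s http://10.0.0.7:4001/v2/keys/message | python -m json.tool
    curl -s -XDELETE http://10.0.0.7:4001/v2/keys/message | python -m json.tool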

     

    [root@linux-node1 ~]# vim /etc/salt/master  #append at the end of the file

    etcd_pillar_config:

      etcd.host: 10.0.0.7

      etcd.port: 4001

     

    ext_pillar:

      - etcd: etcd_pillar_config root=/salt/haproxy/

     

    [root@linux-node1 ~]# /etc/init.d/salt-master restart

    [root@linux-node1 ~]# curl -s -XPUT -d value="10.0.0.7:8080" | python -m json.tool       

    {

        "action": "set",

        "node": {

            "createdIndex": 10,

            "key": "/salt/haproxy/backend_www_oldboyedu_com/web-node1", #加多贰个web-node1的节点

            "modifiedIndex": 10,

            "value": "10.0.0.7:8080"

        }

    }

    [root@linux-node1 ~]#pip install python-etcd

    [root@linux-node1 etcd-v2.2.1-linux-amd64]# salt '*' pillar.items

    linux-node2:

        ----------

        backend_www_oldboyedu_com:

            ----------

            web-node1:

                10.0.0.7:8080

        zabbix-agent:

            ----------

            Zabbix_Server:

                10.0.0.7

    linux-node1:

        ----------

        backend_www_oldboyedu_com:

            ----------

            web-node1:

                10.0.0.7:8080

        zabbix-agent:

            ----------

            Zabbix_Server:

                10.0.0.7

     

    [root@linux-node1 ~]# vi /srv/salt/prod/cluster/files/haproxy-outside.cfg  #append at the end

    {% for web,web_ip in pillar.backend_www_oldboyedu_com.iteritems() -%}

    server {{ web }} {{ web_ip }} check inter 2000 rise 30 fall 15

    {% endfor %}
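    Given the pillar data shown above, the loop renders to a line like (illustrative):

    server web-node1 10.0.0.7:8080 check inter 2000 rise 30 fall 15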

    vim /srv/salt/prod/cluster/haproxy-outside.sls

    - template: jinja

    Restart the master

    Apply the highstate: salt '*' state.highstate

    #vim /etc/hosts

    192.168.1.101 salt.node1.com
    192.168.1.200 salt.node2.com
    192.168.1.201 salt.node3.com

    III. Installing salt-ssh

    a. Add the yum repo:

    *See the SaltStack official site:

    # vim /etc/yum.repos.d/salt-stack.repo
    [saltstack-repo]
    name=SaltStack repo for Red Hat Enterprise Linux $releasever
    baseurl=
    enabled=1
    gpgcheck=1
    gpgkey=

    b. Install salt-ssh

    #yum install salt-ssh -y

    c. Configure the roster file

    *A passwd entry can be set under user; if it is not set, you will be prompted for the password when running salt-ssh '*' test.ping -i

    # vim /etc/salt/roster

    node1:
      host: 192.168.1.200
      user: root
      port: 22
    node2:
      host: 192.168.1.201
      user: root
      port: 22
    IV. Write the state.sls file and copy the related files into the deployment directory

    a. Create the directories

    # mkdir -p /srv/salt/minions

    # mkdir -p /srv/salt/minions/conf

    # mkdir -p /srv/salt/minions/yum.repos.d

    b. Write install.sls, the sls file that installs the minions

    # cd /srv/salt/minions/

    # vim install.sls

    #salt_minion_install
    minion_yum:             #copy the local minions/yum.repos.d files to /etc/yum.repos.d on the target
      file.recurse:
        - name: /etc/yum.repos.d
        - source: salt://minions/yum.repos.d
        - user: root
        - group: root
        - file_mode: 644
        - dir_mode: 755
        - include_empty: True
    minion_install:         #install salt-minion
      pkg.installed:
        - pkgs:
          - salt-minion
        - require:
          - file: minion_yum
        - unless: rpm -qa | grep salt-minion
    minion_conf:           #copy the prepared minion config to /etc/salt/minion on the target
      file.managed:
        - name: /etc/salt/minion
        - source: salt://minions/conf/minion
        - user: root
        - group: root
        - mode: 640
        - template: jinja
        - defaults:
          minion_id: {{ grains['fqdn_ip4'][0] }}
        - require:
          - pkg: minion_install
    minion_service:       #start at boot
      service.running:
        - name: salt-minion
        - enable: True
        - require:
          - file: minion_conf

    c. Write the minion configuration file

    #vim  minion

    # resolved, then the minion will fail to start.
    master: 192.168.1.101                     #only the master address needs to be changed

    d. Copy the salt and epel repos into the target directory

    #cp /etc/yum.repos.d/salt-stack.repo /srv/salt/minions/yum.repos.d/

    # cp /etc/yum.repos.d/epel.repo /srv/salt/minions/yum.repos.d/

    e. Finally, review the directory layout:

    # pwd
    /srv/salt/minions

    # tree
    .
    ├── conf
    │   └── minion
    ├── install.sls
    └── yum.repos.d
        ├── epel.repo
        └── salt-stack.repo

    V. Run salt-ssh to install salt-minion

    #salt-ssh -i '*' state.sls minions.install

    VI. Verify the installation

    *Note: at the very end I also installed salt-master on the salt-ssh host (yum install -y salt-master); otherwise the command below will not work

    # salt-key
    Accepted Keys:
    Denied Keys:
    Unaccepted Keys:
    centos7
    node1
    node2
    Rejected Keys:

