http://www.iyunv.com/thread-41970-1-1.html
File synchronization:
In day-to-day OpenStack operations, keeping files in sync is actually quite tedious: a single configuration item, or a one-line change in a source file, has to be pushed out to many hosts. So let's introduce SaltStack's file synchronization features.
Environment: operating system rhel6.5 x64
1. Configure the sync root directory on the master
Before starting with SaltStack configuration management, you must first specify the root directory for all of SaltStack's state files. On the master, do the following:
## First edit the master config file and set the root directories. Note: indent with two spaces, never Tab (the YAML config does not allow tabs).
## Make sure the specified directories exist; create them by hand if they do not.
[iyunv@controller1 ~]# vim /etc/salt/master
file_roots:
  base:
    - /srv/salt
  dev:
    - /srv/salt/dev/
[iyunv@controller1 ~]# mkdir -p /srv/salt/dev
[iyunv@controller1 ~]# ls -ld /srv/salt/dev
drwxr-xr-x 2 root root 4096 Feb 3 21:49 /srv/salt/dev
Restart the master service:
[iyunv@controller1 ~]# service salt-master restart
Stopping salt-master daemon:                               [  OK  ]
Starting salt-master daemon:                               [  OK  ]
2. cp.get_file
First up is cp.get_file, which downloads a file from the master to a given path on the minion, as follows.
## Create a test file on the master
[iyunv@controller1 ~]# echo 'This is test file with saltstack module to cp.get_file' >/opt/getfile.txt
[iyunv@controller1 ~]# cat /opt/getfile.txt
This is test file with saltstack module to cp.get_file
Copy the file into the master's sync root directory:
[iyunv@controller1 ~]# cp /opt/getfile.txt /srv/salt/
Distribute the file from the master:
[iyunv@controller1 ~]# salt 'computer3' cp.get_file salt://getfile.txt /tmp/getfile.txt
computer3:
    /tmp/getfile.txt
Log in to computer3 and check the result:
[iyunv@computer3 ~]# cat /tmp/getfile.txt
This is test file with saltstack module to cp.get_file
Some options for file distribution:
(1) Compression: gzip
Compress the transfer with gzip. The higher the number, the higher the compression ratio; 9 is the maximum:
[iyunv@controller1 ~]# salt 'computer8' cp.get_file salt://getfile.txt /tmp/getfile.txt gzip=9
computer8:
    /tmp/getfile.txt
(2) Directory creation: makedirs (when the destination directory does not exist on the target host, create it automatically)
[iyunv@controller1 ~]# salt 'computer8' cp.get_file salt://getfile.txt /tmp/srv/getfile.txt makedirs=True
computer8:
    /tmp/srv/getfile.txt

[iyunv@computer8 opt]# ll /tmp/srv/getfile.txt
-rw-r--r-- 1 root root 56 Feb 3 22:14 /tmp/srv/getfile.txt
3. grains
First, an introduction to grains. This interface is invoked when the minion service starts on the minion host; it collects information about the minion, and that data can then be used directly by salt's other modules. Note that the interface is called only once, when the minion service starts, so the collected data is static: it will not change unless you restart the minion service.
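The mechanics grains relies on (collect host facts once at startup, then serve them as static data) can be sketched with the Python standard library. This is only an analogy of the behavior, not salt's actual implementation; the grain names mirror a few from the real listing below:

```python
import platform
import socket

def collect_grains():
    """Collect static host facts once, the way a minion does at startup."""
    return {
        "kernel": platform.system(),           # e.g. "Linux"
        "kernelrelease": platform.release(),
        "cpuarch": platform.machine(),         # e.g. "x86_64"
        "host": socket.gethostname(),
        "pythonversion": platform.python_version(),
    }

# Collected once; later lookups reuse the cached dict, so the data
# stays static until the process (the "minion") restarts.
GRAINS = collect_grains()

print(sorted(GRAINS))       # analogous to: salt '*' grains.ls
print(GRAINS["cpuarch"])    # analogous to: salt '*' grains.item cpuarch
```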
Basic usage of grains:
[iyunv@controller1 ~]# salt 'computer3' grains.ls
computer3:
    - biosreleasedate
    - biosversion
    - cpu_flags
    - cpu_model
    - cpuarch
    - defaultencoding
    - defaultlanguage
    - domain
    - fqdn
    - fqdn_ip4
    - fqdn_ip6
    - gpus
    - host
    - hwaddr_interfaces
    - id
    - ip_interfaces
    - ipv4
    - ipv6
    - kernel
    - kernelrelease
    - localhost
    - manufacturer
    - master
    - mem_total
    - nodename
    - num_cpus
    - num_gpus
    - os
    - os_family
    - osarch
    - oscodename
    - osfinger
    - osfullname
    - osmajorrelease
    - osrelease
    - path
    - productname
    - ps
    - pythonpath
    - pythonversion
    - saltpath
    - saltversion
    - saltversioninfo
    - serialnumber
    - server_id
    - shell
    - virtual
    - zmqversion
Use the grains.items module to list all available grains together with their values:
[iyunv@controller1 ~]# salt 'computer3' grains.items
computer3:
  biosreleasedate: 08/28/2013
  biosversion: 2.10.0
  cpu_flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt aes lahf_lm arat dts tpr_shadow vnmi flexpriority ept vpid
  cpu_model: Intel(R) Xeon(R) CPU E7- 4820 @ 2.00GHz
  cpuarch: x86_64
  defaultencoding: UTF8
  defaultlanguage: en_US
  domain:
  fqdn: computer3
  fqdn_ip4: 192.168.100.23
  fqdn_ip6:
  gpus: {'model': 'MGA G200eW WPCM450', 'vendor': 'unknown'}
  host: computer3
  hwaddr_interfaces: {'lo': '00:00:00:00:00:00', 'tap002cf093-0c': 'fe:16:3e:cf:43:28', 'em4': 'f0:1f:af:90:38:65', 'eth1.2': 'f0:1f:af:90:37:fd', 'em3': 'f0:1f:af:90:38:63', 'brq8f15ee7f-54': 'f0:1f:af:90:37:fd', 'brqadf94242-74': 'f0:1f:af:90:37:fd', 'eth1.400': 'f0:1f:af:90:37:fd', 'eth1': 'f0:1f:af:90:37:fd', 'eth0': 'f0:1f:af:90:37:fb'}
  id: computer3
  ip_interfaces: {'lo': ['127.0.0.1'], 'tap002cf093-0c': [], 'em4': [], 'eth1.2': [], 'em3': [], 'brq8f15ee7f-54': [], 'brqadf94242-74': [], 'eth1.400': [], 'eth1': [], 'eth0': ['192.168.100.23']}
  ipv4:
    127.0.0.1
    192.168.100.23
  ipv6:
    ::1
    fe80::60f7:96ff:feab:3d44
    fe80::f21f:afff:fe90:37fb
    fe80::f21f:afff:fe90:37fd
    fe80::f8e7:cdff:fe54:7d02
    fe80::fc16:3eff:fecf:4328
  kernel: Linux
  kernelrelease: 2.6.32-431.el6.x86_64
  localhost: computer3
  manufacturer: Dell Inc.
  master: 192.168.100.200
  mem_total: 225995
  nodename: computer3
  num_cpus: 64
  num_gpus: 1
  os: RedHat
  os_family: RedHat
  osarch: x86_64
  oscodename: Santiago
  osfinger: Red Hat Enterprise Linux Server-6
  osfullname: Red Hat Enterprise Linux Server
  osmajorrelease:
    6
    5
  osrelease: 6.5
  path: /sbin:/usr/sbin:/bin:/usr/bin
  productname: PowerEdge M910
  ps: ps -efH
  pythonpath:
    /usr/bin
    /usr/lib64/python26.zip
    /usr/lib64/python2.6
    /usr/lib64/python2.6/plat-linux2
    /usr/lib64/python2.6/lib-tk
    /usr/lib64/python2.6/lib-old
    /usr/lib64/python2.6/lib-dynload
    /usr/lib64/python2.6/site-packages
    /usr/lib64/python2.6/site-packages/gtk-2.0
    /usr/lib/python2.6/site-packages
    /usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info
  pythonversion: 2.6.6.final.0
  saltpath: /usr/lib/python2.6/site-packages/salt
  saltversion: 2014.1.10
  saltversioninfo:
    2014
    1
    10
    0
  serialnumber: XXXXXX
  server_id: 111111111
  shell: /bin/bash
  virtual: physical
  zmqversion: 4.0.5
Ping-test connectivity to the hosts whose os grain is RedHat:
[iyunv@controller1 ~]# salt -G 'os:RedHat' test.ping
computer5:
    True
computer8:
    True
computer6:
    True
computer7:
    True
computer4:
    True
computer3:
    True
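Conceptually, `-G 'os:RedHat'` filters minions by a grain key/value pair. A minimal sketch of that matching (the minion grain data here is made up for illustration; real salt also supports globs and compound matchers):

```python
def match_grain(minions, expr):
    """Return minion ids whose grains match a 'key:value' expression."""
    key, _, value = expr.partition(":")
    return sorted(mid for mid, grains in minions.items()
                  if str(grains.get(key)) == value)

# Hypothetical grains as reported by three minions:
minions = {
    "computer3": {"os": "RedHat", "osrelease": "6.5"},
    "computer4": {"os": "RedHat", "osrelease": "6.5"},
    "web1":      {"os": "Debian", "osrelease": "7"},
}

print(match_grain(minions, "os:RedHat"))   # ['computer3', 'computer4']
```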
Query each minion's IP address. Careful: here it is grains.item, not grains.items:
[iyunv@controller1 ~]# salt '*' grains.item ipv4
computer5:
  ipv4:
    127.0.0.1
    192.168.100.25
computer7:
  ipv4:
    127.0.0.1
    192.168.100.27
computer4:
  ipv4:
    127.0.0.1
    192.168.100.24
computer3:
  ipv4:
    127.0.0.1
    192.168.100.23
computer8:
  ipv4:
    127.0.0.1
    192.168.100.28
computer6:
  ipv4:
    127.0.0.1
    192.168.100.26
Now that the grains interface has been introduced, let's see how to make simple use of grains data in the cp module.
First confirm what the os grain is:
[iyunv@controller1 RedHat]# salt 'computer4' grains.item os
computer4:
  os: RedHat
[iyunv@controller1 ~]# mkdir /srv/salt/RedHat/
[iyunv@controller1 ~]# mv /srv/salt/getfile.txt /srv/salt/RedHat/
[iyunv@controller1 RedHat]# salt 'computer4' cp.get_file "salt://{{grains.os}}/getfile.txt" /opt/getfile.txt template=jinja
computer4:
    /opt/getfile.txt
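With `template=jinja`, the source path is rendered before the fetch, so `{{grains.os}}` expands to each minion's own os grain. A rough, stdlib-only illustration of that substitution (real salt uses the full Jinja2 engine, not this regex):

```python
import re

def render_source(path, grains):
    """Expand {{grains.<name>}} placeholders in a salt:// path."""
    def repl(match):
        # match.group(1) is the grain name, e.g. "os"
        return str(grains[match.group(1)])
    return re.sub(r"\{\{\s*grains\.(\w+)\s*\}\}", repl, path)

grains = {"os": "RedHat"}  # what computer4 reported above
print(render_source("salt://{{grains.os}}/getfile.txt", grains))
# salt://RedHat/getfile.txt
```

Because rendering happens per minion, the same command can pull different files for minions with different grain values.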
4. Directory synchronization
Next, cp.get_dir. get_dir is used much like get_file, and distributes an entire directory to the minions.
Create test files:
[iyunv@controller1 ~]# mkdir /srv/salt/test_dir
[iyunv@controller1 ~]# echo 'hello word !!' >>/srv/salt/test_dir/hello1.txt
[iyunv@controller1 ~]# echo 'hello2 word !!' >>/srv/salt/test_dir/hello2.txt
[iyunv@controller1 ~]# ll /srv/salt/test_dir/
total 8
-rw-r--r-- 1 root root 14 Feb 4 14:49 hello1.txt
-rw-r--r-- 1 root root 15 Feb 4 14:49 hello2.txt
Test distribution: distribute the directory, with compressed transfer:
[iyunv@controller1 ~]# salt 'computer4' cp.get_dir salt://test_dir /tmp gzip=9
computer4:
    - /tmp/test_dir/hello1.txt
    - /tmp/test_dir/hello2.txt

Log in to the target node to check the result:

[iyunv@computer4 ~]# ll /tmp/test_dir/
total 8
-rw-r--r-- 1 root root 14 Feb 4 14:52 hello1.txt
-rw-r--r-- 1 root root 15 Feb 4 14:52 hello2.txt
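Note that get_dir places the source directory itself under the destination: the files land in /tmp/test_dir, not directly in /tmp. A local analogue of that semantic using Python's shutil (throwaway temp directories stand in for /srv/salt and /tmp):

```python
import os
import shutil
import tempfile

def get_dir(src, dest):
    """Copy directory `src` *under* `dest`, like cp.get_dir does."""
    target = os.path.join(dest, os.path.basename(src.rstrip("/")))
    shutil.copytree(src, target)
    # Return the copied file paths, like salt's output list
    return sorted(
        os.path.join(root, name)
        for root, _, files in os.walk(target) for name in files
    )

src = tempfile.mkdtemp(suffix="_test_dir")
dest = tempfile.mkdtemp()
with open(os.path.join(src, "hello1.txt"), "w") as f:
    f.write("hello word !!\n")

copied = get_dir(src, dest)
print(copied)  # one file, under dest/<basename of src>
```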
5. Flexibly changing data
Before going further, let's introduce salt's pillar interface. Pillar is a fairly important salt component. It resembles grains, but is more flexible: pillar is dynamic, and its data can be updated whenever you like, whereas grains are collected only once, at minion startup. The official description of pillar follows; roughly translated, it says that pillar is one of the most important components of a salt deployment, able to generate very flexible data for minions, data that can then be used by salt's other components.
    The pillar interface inside of Salt is one of the most important components of a Salt deployment. Pillar is the interface used to generate arbitrary data for specific minions. The data generated in pillar is made available to almost every component of Salt.
Basic usage of pillar:
(1) Configure the master
[iyunv@controller1 ~]# vim /etc/salt/master
pillar_roots:
  base:
    - /srv/pillar
[iyunv@controller1 ~]# service salt-master restart
Stopping salt-master daemon:                               [  OK  ]
Starting salt-master daemon:                               [  OK  ]
[iyunv@controller1 ~]# mkdir /srv/pillar
[iyunv@controller1 ~]# mkdir /srv/pillar/user    ## create a test directory named user
(2) Create the entry file
First, the /srv/pillar directory needs an entry file, top.sls:
[iyunv@controller1 pillar]# cat top.sls
base:
  'computer3':
    - date          ## assigns computer3 a piece of data from date.sls, in the same directory as top.sls
  'computer4':
    - webserver     ## assigns computer4 a piece of data from webserver.sls
  '*':
    - user          ## assigns all nodes data from /srv/pillar/user/init.sls
## When an entry names a directory, salt automatically looks for the state file inside it,
## so here it finds the init.sls file in the user directory.
## When testing, do not put any '#' comments in the actual files.

Write the other two data files:

[iyunv@controller1 pillar]# cat date.sls
date: some date
[iyunv@controller1 pillar]# cat webserver.sls
webserver: test_dir
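The top file maps minion-id patterns to SLS files; the effective pillar for a minion is the merge of every entry whose pattern matches its id. A sketch of that matching logic (glob patterns only; real salt also supports grain, pillar, and compound matchers in the top file):

```python
from fnmatch import fnmatch

# The same assignments as in top.sls above:
top = {
    "computer3": ["date"],
    "computer4": ["webserver"],
    "*": ["user"],
}

def sls_for(minion_id):
    """Collect every SLS whose target pattern matches this minion id."""
    matched = []
    for pattern, sls_list in top.items():
        if fnmatch(minion_id, pattern):
            matched.extend(sls_list)
    return sorted(matched)

print(sls_for("computer3"))  # ['date', 'user']
print(sls_for("computer5"))  # ['user']
```

This is why every minion below ends up with the user data, while only computer3 gets date and only computer4 gets webserver.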
Test:
[iyunv@controller1 ~]# salt '*' pillar.items
computer3:
    ----------
    date:
        some date
    master:
        ----------
        auth_mode:
            1
        auto_accept:
    ... N lines omitted ...
    user:
        ----------
        foway:
            1200
        kadefor:
            1000
        kora:
            1000
computer4:
    ----------
    master:
        ----------
        auth_mode:
            1
        auto_accept:
            False
        cachedir:
            /var/cache/salt/master
    ... N lines omitted ...
    user:
        ----------
        foway:
            1200
        kadefor:
            1000
        kora:
            1000
    webserver:
        test_dir
computer5:
    ----------
    master:
        ----------
        auth_mode:
            1
        auto_accept:
            False
        cachedir:
            /var/cache/salt/master
    ... N lines omitted ...
    user:
        ----------
        foway:
            1200
        kadefor:
            1000
        kora:
            1000
...............
Fetch the properties we just defined, remotely from the master:
[iyunv@controller1 pillar]# salt 'computer3' pillar.items
computer3:
    ----------
    date:
        some date
    master:
        ----------
        auth_mode:
            1
        auto_accept:
            False
        cachedir:
    ... N lines omitted ...
    user:
        ----------
        foway:
            1200
        kadefor:
            1000
        kora:
            1000
[iyunv@controller1 pillar]# salt 'computer4' pillar.items
computer4:
    ----------
    master:
        ----------
        auth_mode:
            1
        auto_accept:
            False
    ... N lines omitted ...
    user:
        ----------
        foway:
            1200
        kadefor:
            1000
        kora:
            1000
    webserver:
        test_dir
As you can see, the properties defined for the different minions have been synced to each of them. This test shows that with pillar we can define different properties for different minions, or for different groups of minions, which is extremely flexible.

Now that the pillar interface has been introduced, let's see how to make simple use of pillar data in the cp module. We can use the properties defined earlier to match different minions. First, refresh the pillar data on every node:
[iyunv@controller1 ~]# salt '*' saltutil.refresh_pillar
computer8:
    None
computer4:
    None
computer5:
    None
computer6:
    None
computer3:
    None
computer7:
    None
Test the matching:
[iyunv@controller1 ~]# salt -I -v 'date:some date' test.ping
Executing job with jid 20150204164730224160
-------------------------------------------
computer3:
    True
[iyunv@controller1 ~]# salt -I -v 'webserver:test_dir' test.ping
Executing job with jid 20150204165017170702
-------------------------------------------
computer4:
    True
[iyunv@controller1 ~]# salt -I -v 'user:foway:1200' test.ping
Executing job with jid 20150204165053938046
-------------------------------------------
computer5:
    True
computer6:
    True
computer7:
    True
computer3:
    True
computer8:
    True
computer4:
    True
Match on webserver:test_dir again, and distribute a directory from the master to the matched minion (computer4):
[iyunv@controller1 salt]# salt -I -v 'webserver:test_dir' cp.get_dir "salt://{{pillar.webserver}}" /opt/ gzip=9 template=jinja
Executing job with jid 20150204165518149257
-------------------------------------------
computer4:
    - /opt//test_dir/hello1.txt
    - /opt//test_dir/hello2.txt
SaltStack's features really are powerful. Has it amazed you yet? Keep on learning...
posted on 2016-12-21 14:39 by 思月行云, category: 服务器\Ops