Ansible Handbook

Getting module information

List all modules (there are well over 100):

  • ansible-doc -l

Get detailed information about a specific module

  • ansible-doc
    example: ansible-doc ping

    PING

    A trivial test module, this module always returns 'pong' on successful contact. It does not make sense in playbooks, but it is useful from /usr/bin/ansible

    EXAMPLES:
    Test 'webservers' status

    ansible webservers -m ping

Nesting task includes across roles

- name: create jdk home
  file: path={{ remote_jdk_home }} state=directory mode=0755

- name: xxxxxxxxx
  include: ../../init/tasks/main.yml

Defining variables in defaults

 Note: 1) wrap the value in double quotes; 2) leave a space between the variable name's colon and the value:
 diamond_db_key: "{{ diamond_db_ip }}_{{ diamond_db_name }}_dbkey"
 manager_user1: "{{ manager_user_name }}"
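
Ansible resolves these defaults with Jinja2 templating. Below is a minimal Python sketch of the substitution (plain regex replacement, not ansible's real Jinja2 engine; the IP and database name are made-up example values):

```python
import re

def render(template, variables):
    """Replace each {{ name }} placeholder with its value from `variables`."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(variables[m.group(1)]), template)

# hypothetical values standing in for the role's defaults
defaults = {"diamond_db_ip": "10.0.0.5", "diamond_db_name": "diamond"}
print(render("{{ diamond_db_ip }}_{{ diamond_db_name }}_dbkey", defaults))
# -> 10.0.0.5_diamond_dbkey
```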

tags

To run the same set of tasks selectively in different environments, mark them with tags, as shown below:

  usage:
    udp-playbook setup.yml -v -kK -i hosts.ini --tags "ta"

- name: 1
  authorized_key: user={{ ansible_ssh_user }}  key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"  state=present
  tags: ta

- name: 2
  group: name={{ remote_user }}
  tags: always

- name: 3
  file: path={{ remote_home }} owner={{ remote_user }} group={{ remote_user }} state=directory recurse=yes mode=0755
  tags: tb
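
The selection rule behind the three tasks above can be sketched in Python (a simplified model of ansible's behavior, not its implementation): a task runs when it carries one of the requested tags, or the special tag `always`.

```python
# Simplified model of --tags selection (assumption: ignores --skip-tags etc.)
def should_run(task_tags, requested):
    # tasks tagged "always" run regardless of which --tags were requested
    return "always" in task_tags or bool(set(task_tags) & set(requested))

tasks = {"1": ["ta"], "2": ["always"], "3": ["tb"]}
selected = [name for name, tags in tasks.items() if should_run(tags, ["ta"])]
print(selected)  # tasks 1 and 2 run; task 3 (tagged tb) is skipped
```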

Common errors

The `scp: ambiguous target` error in ansible is caused by passing the -t flag to ssh; scp does not support -t.

disable python warning

To control the discovery behavior:

  • for individual hosts and groups, use the ansible_python_interpreter inventory variable
  • globally, use the interpreter_python key in the [defaults] section of ansible.cfg
[defaults]
interpreter_python=auto_silent
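
To sanity-check such a config entry, the [defaults] section can be parsed with Python's stdlib configparser (a local sketch; the file content is inlined here instead of read from a real ansible.cfg):

```python
import configparser

cfg_text = """
[defaults]
interpreter_python=auto_silent
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)
print(cfg["defaults"]["interpreter_python"])  # -> auto_silent
```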

Ansible FAQ

Other common errors

| Problem | Solution |
| --- | --- |
| Performance: ansible's concurrency seems insufficient; bulk transfers of large files take a long time | Use synchronize and raise forks from its default of 5 |
| sudoers: plain ansible commands like `ls` work, but `sudo ls` fails with an error like "sudo need tty". Various English docs blamed the version or the script, to no avail | The cause is `Defaults requiretty` in /etc/sudoers; change it to `#Defaults requiretty` on all machines. For {"msg": "ssh connection closed waiting for a privilege escalation password prompt"} or "sudo: no tty present and no askpass program specified", add -t/-tt to ssh to force allocation of a tty |
| failed to transfer file to xxx | The remote disk is full; check df -h, especially /tmp |
| requires a json module, none found | When installing ansible, install python-simplejson on the target machines: yum install python-simplejson -y |
| Cannot log in after an openssh upgrade | The ssh rpm upgrade rewrites /etc/pam.d/sshd; back the file up before upgrading and restore it afterwards |
| Problems installing EagleEye | 1. hadoop namenode -format requires a Y/N answer; 2. ssh keys were not set up; 3. steps that used to be a for loop could only be added one at a time in Guqian's script |
| lineinfile content cannot contain ": " (colon + space), which conflicts with ansible's underlying separator | Ask users not to include ": " in the content |
| https: "SSL validation is not available in your version of python" | You can use validate_certs=no, however this is unsafe and not recommended; alternatively install python-ssl from EPEL |
| You need a C++ compiler for C++ support | yum install -y gcc gcc-c++ |
| 1. udp permission problems, authentication sometimes fails; 2. how to run local commands with udp; 3. is there a convenient way to install udp online | Problem 1: method one, try dropping sudo (if access to /opt/aliUDP/logs/udp.log fails, back it up and recreate udp.log with 777 permissions); method two, specify --private-key=PRIVATE_KEY_FILE (first check that plain ssh to a target machine works). Problem 2: udp can run commands directly on target machines, e.g. udp server -i ~/ali/udp-roles/roles/udp-install/udp-hosts.ini -m shell -a " uptime ; df -lh " -u admin |
| Deploying different projects on the same IP causes variable conflicts, e.g. ip1 deploys both mysql and diamond and both define project_name; the earlier definition wins and the later one is clobbered | See http://gitlab.alibaba-inc.com/middleware-udp/udp-doc/wikis/Different_Hosts_With_Different_Variables — define the variables separately in ./roles/mysql/defaults/main.yml and ./roles/diamond/defaults/main.yml, or use different variable names |
| udp-playbook reports that a key cannot be found | Run ssh-keygen on the udp machine to generate a key |
| ssh prompts for a manual yes/no | Add -o StrictHostKeyChecking=no to skip the prompt |
| Firewall: accessible locally but not remotely | Confirm with a packet capture/telnet; temporarily stop iptables, then either disable it permanently in its config or whitelist all the other nodes |
| Important! hostname -i must resolve to the machine's real LAN IP (not 127.0.0.1) | Bind the hostname to its real IP in /etc/hosts |
| How to use different variables for different machines and roles in a UDP playbook | http://gitlab.alibaba-inc.com/middleware-udp/udp-doc/wikis/Different_Hosts_With_Different_Variables |
| Dauth deployment issues | http://gitlab.alibaba-inc.com/middleware-udp/udp-doc/wikis/Dauth-UDP-deployment-issues |
| Device or resource busy | Usually happens when modifying /etc/hosts inside Docker: ansible tries to rm the file, but it is bind-mounted (-v) into the container; work around it with a script patch |


Ansible Command-Line Manual

What is the command channel?

For simple tasks there is no need to write a complex playbook; most of the time you can drive a batch of target machines directly from the ansible command line.

Whenever you need to inspect a group of machines in bulk, run a command on all of them, or modify a file everywhere, the command channel lets you do it concurrently from a single machine.

The command channel is simply an execution channel that sends your command to multiple target machines and brings the results back to you.

Use cases

  • See the load of dozens of machines with a single command
  • Run shell scripts that already exist on the remote servers, in bulk
  • Check whether the last 10000 lines of log on all web servers contain ERROR
  • Check memory usage on all DB servers
  • Change a port on all Diamond servers from 7000 to 9000 in one go

Getting started

If you don't want to type the ssh password on every run, copy your local public key (~/.ssh/id_rsa.pub; run ssh-keygen to generate a pair if you don't have one) into ~/.ssh/authorized_keys on the target machines beforehand; otherwise every command will prompt for a password.

Write a hosts.ini configuration file like this:

[server]
10.125.0.169 ansible_ssh_port=9999 # only this machine uses port 9999 for ssh; machines without this setting use the default port 22
10.125.3.33
120.26.116.193

[worker]
10.125.12.174
10.125.14.238

[target]
10.125.192.40
10.125.7.151
192.168.2.[101:107]

server/worker/target split the 7 machines into three groups. You can run the same command on all 7 machines, or only on one of the server/worker/target groups; all refers to all 7 machines.
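
The 192.168.2.[101:107] line above is a numeric host range. Here is a small Python sketch of how such a pattern expands (a simplified model of ansible's inventory range syntax, inclusive on both ends):

```python
import re

def expand(pattern):
    """Expand a simple [lo:hi] numeric range in a host pattern."""
    m = re.search(r"\[(\d+):(\d+)\]", pattern)
    if not m:
        return [pattern]
    lo, hi = int(m.group(1)), int(m.group(2))
    return [pattern[:m.start()] + str(n) + pattern[m.end():]
            for n in range(lo, hi + 1)]

hosts = expand("192.168.2.[101:107]")
print(len(hosts), hosts[0], hosts[-1])  # 7 hosts, .101 through .107
```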

Running commands

Check the uptime of every server in hosts.ini

	$ ansible -i hosts.ini all -m raw -a " uptime  " -u admin
/usr/bin/ansible -i hosts.ini all -m raw -a uptime -u admin

success => 10.125.12.174 => rc=0 =>
11:10:50 up 27 days, 15:40, 1 user, load average: 0.05, 0.03, 0.05
success => 120.26.116.193 => rc=0 =>
11:10:50 up 13 days, 21:07, 1 user, load average: 0.00, 0.00, 0.00

Command argument notes

> __all:__ run the following command on every server in hosts.ini

> __-i:__ path to the hosts.ini file

> __-m raw -a:__ specifies the command to run

> __" uptime "__ put the command to run inside the double quotes

> __-u admin__ run the command as user admin (if passwordless login is not set up, add the -k flag to be prompted for the SSH password)


### Check the file layout under the home directory on the server group in hosts.ini
$ ansible -i hosts.ini server -m raw -a " ls -lh ~/ " -u admin

/usr/bin/ansible -i hosts.ini server -m raw -a ls -lh ~/ -u admin

success => 10.125.0.169 => rc=0 =>
total 12K
drwxr-xr-x 2 root root 4.0K Nov 13 12:34 files
drwxr-xr-x 11 admin admin 4.0K Oct 20 10:49 tomcat
drwxr-xr-x 3 test games 4.0K Nov 18 15:40 ansible-engine
success => 10.125.3.33 => rc=0 =>
total 20K
-rw------- 1 admin admin 1.4K Nov 12 13:39 authorized_keys
drwxr-xr-x 2 root root 4.0K Nov 12 16:24 engine
drwxr-xr-x 2 root root 4.0K Nov 13 12:22 files
drwxr-xr-x 11 admin admin 4.0K Nov 18 15:43 tomcat
drwxr-xr-x 3 test games 4.0K Nov 18 15:40 ansible-engine

### Check the hostname of a subset of machines

ansible -i ccb_test.ini 192.168.2.10* -m shell -a 'hostname'

[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
192.168.2.100 | CHANGED | rc=0 >>
az2-drds-100
192.168.2.106 | CHANGED | rc=0 >>
az2-manager-106
192.168.2.101 | CHANGED | rc=0 >>
az2-alisql-101
192.168.2.102 | CHANGED | rc=0 >>
az2-alisql-102
192.168.2.105 | CHANGED | rc=0 >>
az2-alisql-105
192.168.2.104 | CHANGED | rc=0 >>
az2-alisql-104
192.168.2.103 | CHANGED | rc=0 >>
az2-alisql-103
192.168.2.107 | CHANGED | rc=0 >>
az2-manager-107


### Using environment variables

#config /etc/hosts
ansible -i $1 all -m shell -a " sed -i '/registry/d' /etc/hosts "
ansible -i $1 all -m shell -a " echo ' registry' >/etc/hosts "
ansible -i $1 all -m shell -a " echo ' hostname' >>/etc/hosts "
ansible -i $1 diamond -m shell -a " echo ' jmenv.tbsite.net' >> /etc/hosts " -u root
# change the machine hostname
ansible -i $1 all -m shell -a " hostnamectl set-hostname 'drds-' " -u root
# make hostname -i resolve correctly
ansible -i $1 all -m shell -a " echo ' drds-' >> /etc/hosts " -u root

# change the machine name with the hostname module

ansible -i ccb_test.ini 192.168.2.101 -m hostname -a " name=az2-alisql-101 "

[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
192.168.2.101 | CHANGED => {
    "ansible_facts": {
        "ansible_domain": "",
        "ansible_fqdn": "iZ2ze9aj0re2ggbqa4dgxkZ",
        "ansible_hostname": "az2-alisql-101",
        "ansible_nodename": "az2-alisql-101",
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "name": "az2-alisql-101"
}


### Managing system services

Restart the docker daemon and enable it at boot:

ansible -i ccb_test.ini 192.168.2.101 -m service -a " name=docker enabled=yes state=restarted "

[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
192.168.2.101 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "enabled": true,
    "name": "docker",
    "state": "started",
    "status": {
        "ActiveEnterTimestamp": "二 2020-05-12 19:03:57 CST",
        "ActiveEnterTimestampMonotonic": "1553024093129",
        "ActiveExitTimestamp": "二 2020-05-12 19:01:24 CST",
        "ActiveExitTimestampMonotonic": "1552870910912",
        "ActiveState": "active",
        "After": "systemd-journald.socket system.slice docker.socket firewalld.service containerd.service network-online.target basic.target",
        "AllowIsolate": "no",
        "AmbientCapabilities": "0",
        "AssertResult": "yes",
        "AssertTimestamp": "二 2020-05-12 19:03:57 CST",
        "AssertTimestampMonotonic": "1553023902297",
        "Before": "multi-user.target shutdown.target",
        "BindsTo": "containerd.service",
        "BlockIOAccounting": "no",
        "BlockIOWeight": "18446744073709551615",
        "CPUAccounting": "no",
        "CPUQuotaPerSecUSec": "infinity",
        "CPUSchedulingPolicy": "0",
        "CPUSchedulingPriority": "0",
        "CPUSchedulingResetOnFork": "no",
        "CPUShares": "18446744073709551615",
        "CanIsolate": "no",
        "CanReload": "yes",
        "CanStart": "yes",
        "CanStop": "yes",
        "CapabilityBoundingSet": "18446744073709551615",
        "ConditionResult": "yes",
        "ConditionTimestamp": "二 2020-05-12 19:03:57 CST",
        "ConditionTimestampMonotonic": "1553023902297",
        "Conflicts": "shutdown.target",
        "ConsistsOf": "docker.socket",
        "ControlGroup": "/system.slice/docker.service",
        "ControlPID": "0",
        "DefaultDependencies": "yes",
        "Delegate": "yes",
        "Description": "Docker Application Container Engine",
        "DevicePolicy": "auto",
        "Documentation": "https://docs.docker.com",
        "ExecMainCode": "0",
        "ExecMainExitTimestampMonotonic": "0",
        "ExecMainPID": "16213",
        "ExecMainStartTimestamp": "二 2020-05-12 19:03:57 CST",
        "ExecMainStartTimestampMonotonic": "1553023907468",
        "ExecMainStatus": "0",
        "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
        "ExecStart": "{ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376 --data-root=/var/lib/docker --log-opt max-size=50m --log-opt max-file=3 --registry-mirror=https://oqpc6eum.mirror.aliyuncs.com --containerd=/run/containerd/containerd.sock ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
        "FailureAction": "none",
        "FileDescriptorStoreMax": "0",
        "FragmentPath": "/usr/lib/systemd/system/docker.service",
        "GuessMainPID": "yes",
        "IOScheduling": "0",
        "Id": "docker.service",
        "IgnoreOnIsolate": "no",
        "IgnoreOnSnapshot": "no",
        "IgnoreSIGPIPE": "yes",
        "InactiveEnterTimestamp": "二 2020-05-12 19:03:43 CST",
        "InactiveEnterTimestampMonotonic": "1553009791884",
        "InactiveExitTimestamp": "二 2020-05-12 19:03:57 CST",
        "InactiveExitTimestampMonotonic": "1553023907496",
        "JobTimeoutAction": "none",
        "JobTimeoutUSec": "0",
        "KillMode": "process",
        "KillSignal": "15",
        "LimitAS": "18446744073709551615",
        "LimitCORE": "18446744073709551615",
        "LimitCPU": "18446744073709551615",
        "LimitDATA": "18446744073709551615",
        "LimitFSIZE": "18446744073709551615",
        "LimitLOCKS": "18446744073709551615",
        "LimitMEMLOCK": "65536",
        "LimitMSGQUEUE": "819200",
        "LimitNICE": "0",
        "LimitNOFILE": "18446744073709551615",
        "LimitNPROC": "18446744073709551615",
        "LimitRSS": "18446744073709551615",
        "LimitRTPRIO": "0",
        "LimitRTTIME": "18446744073709551615",
        "LimitSIGPENDING": "379870",
        "LimitSTACK": "18446744073709551615",
        "LoadState": "loaded",
        "MainPID": "16213",
        "MemoryAccounting": "no",
        "MemoryCurrent": "58327040",
        "MemoryLimit": "18446744073709551615",
        "MountFlags": "0",
        "Names": "docker.service",
        "NeedDaemonReload": "no",
        "Nice": "0",
        "NoNewPrivileges": "no",
        "NonBlocking": "no",
        "NotifyAccess": "main",
        "OOMScoreAdjust": "0",
        "OnFailureJobMode": "replace",
        "PermissionsStartOnly": "no",
        "PrivateDevices": "no",
        "PrivateNetwork": "no",
        "PrivateTmp": "no",
        "ProtectHome": "no",
        "ProtectSystem": "no",
        "RefuseManualStart": "no",
        "RefuseManualStop": "no",
        "RemainAfterExit": "no",
        "Requires": "docker.socket basic.target",
        "Restart": "always",
        "RestartUSec": "2s",
        "Result": "success",
        "RootDirectoryStartOnly": "no",
        "RuntimeDirectoryMode": "0755",
        "SameProcessGroup": "no",
        "SecureBits": "0",
        "SendSIGHUP": "no",
        "SendSIGKILL": "yes",
        "Slice": "system.slice",
        "StandardError": "inherit",
        "StandardInput": "null",
        "StandardOutput": "journal",
        "StartLimitAction": "none",
        "StartLimitBurst": "3",
        "StartLimitInterval": "60000000",
        "StartupBlockIOWeight": "18446744073709551615",
        "StartupCPUShares": "18446744073709551615",
        "StatusErrno": "0",
        "StopWhenUnneeded": "no",
        "SubState": "running",
        "SyslogLevelPrefix": "yes",
        "SyslogPriority": "30",
        "SystemCallErrorNumber": "0",
        "TTYReset": "no",
        "TTYVHangup": "no",
        "TTYVTDisallocate": "no",
        "TasksAccounting": "no",
        "TasksCurrent": "58",
        "TasksMax": "18446744073709551615",
        "TimeoutStartUSec": "0",
        "TimeoutStopUSec": "0",
        "TimerSlackNSec": "50000",
        "Transient": "no",
        "TriggeredBy": "docker.socket",
        "Type": "notify",
        "UMask": "0022",
        "UnitFilePreset": "disabled",
        "UnitFileState": "enabled",
        "WantedBy": "multi-user.target",
        "Wants": "network-online.target system.slice",
        "WatchdogTimestamp": "二 2020-05-12 19:03:57 CST",
        "WatchdogTimestampMonotonic": "1553024093096",
        "WatchdogUSec": "0"
    }
}




### Run multiple commands at once

$ ansible -i hosts.ini server -m raw -a " which nc ; find /opt/aliUDP/logs/ " -u admin

/usr/bin/ansible -i hosts.ini server -m raw -a which nc ; find /opt/aliUDP/logs/ -u admin

FAILED => 120.26.116.193 => rc=1 =>
which: no nc in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin)
find: /opt/aliUDP: No such file or directory

success => 10.125.3.33 => rc=0 =>
/usr/bin/nc
/opt/aliUDP/logs/
/opt/aliUDP/logs/ansible.log.bak
/opt/aliUDP/logs/ansible.log

success => 10.125.0.169 => rc=0 =>
/usr/bin/nc
/opt/aliUDP/logs/
/opt/aliUDP/logs/ansible.log.bak
/opt/aliUDP/logs/ansible.log


Result notes

> 120.26.116.193 has neither the nc command nor the /opt/aliUDP directory, so it failed, while the other two machines returned results normally

### Copy a local file to the servers (the previous examples all ran commands remotely)

$ ansible -i hosts.ini server -m copy -a " src='~/.ssh/id_rsa.pub' dest='/tmp/' owner=admin " -u admin

SUCCESS => 120.26.116.193 => {
    "changed": true,
    "checksum": "b12ccf236ab788bbaebd7159c563e97411389c9e",
    "dest": "/tmp/id_rsa.pub",
    "gid": 0,
    "group": "root",
    "md5sum": "b6ba28284ab95aaa0f47602bdab49f46",
    "mode": "0644",
    "owner": "root",
    "size": 392,
    "src": "/root/.ansible/ansible-tmp-1449109886.94-70134064194486/source",
    "state": "file",
    "uid": 0
}

SUCCESS => 10.125.0.169 => {
    "changed": true,
    "checksum": "b12ccf236ab788bbaebd7159c563e97411389c9e",
    "dest": "/tmp/id_rsa.pub",
    "gid": 500,
    "group": "admin",
    "md5sum": "b6ba28284ab95aaa0f47602bdab49f46",
    "mode": "0664",
    "owner": "admin",
    "size": 392,
    "src": "/home/admin/.ansible/ansible-tmp-1449109886.78-98797505042348/source",
    "state": "file",
    "uid": 500
}

SUCCESS => 10.125.3.33 => {
    "changed": true,
    "checksum": "b12ccf236ab788bbaebd7159c563e97411389c9e",
    "dest": "/tmp/id_rsa.pub",
    "gid": 500,
    "group": "admin",
    "md5sum": "b6ba28284ab95aaa0f47602bdab49f46",
    "mode": "0664",
    "owner": "admin",
    "size": 392,
    "src": "/home/admin/.ansible/ansible-tmp-1449109886.81-269249309502640/source",
    "state": "file",
    "uid": 500
}

Parameter notes

> __-m copy -a:__ specifies the **copy** module
>
> __" src='~/.ssh/id_rsa.pub' dest='/tmp/' "__ src is the local file, dest the target location on the remote machine

### Verify the MD5 of the file we just copied

$ ansible -i hosts.ini server -m command -a " md5sum /tmp/id_rsa.pub " -u admin

success => 10.125.0.169 => rc=0 =>
b6ba28284ab95aaa0f47602bdab49f46 /tmp/id_rsa.pub

success => 10.125.3.33 => rc=0 =>
b6ba28284ab95aaa0f47602bdab49f46 /tmp/id_rsa.pub

success => 120.26.116.193 => rc=0 =>
b6ba28284ab95aaa0f47602bdab49f46 /tmp/id_rsa.pub

Result notes

> All three md5 values are b6ba28284ab95aaa0f47602bdab49f46, matching the local file, so the copy to the target machines succeeded
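
The same check can be done locally before copying: compute the file's MD5 with Python's hashlib and compare it against the remote md5sum output (a sketch; the key content below is a throwaway placeholder written to a temp file):

```python
import hashlib
import os
import tempfile

def md5_of(path):
    """Compute the MD5 hex digest of a file, reading in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# demo on a throwaway file standing in for ~/.ssh/id_rsa.pub
fd, path = tempfile.mkstemp()
os.write(fd, b"ssh-rsa AAAA... user@laptop\n")
os.close(fd)
print(md5_of(path))
os.remove(path)
```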

### Run an existing shell script on the remote servers

$ cat test.sh
#!/bin/sh

ifconfig | grep 'inet addr'
echo "-------------"
uptime
echo "-------------"
date

df -lh

Output:

```shell
$ ansible -i hosts.ini server -m command -a " sh /tmp/test.sh " -u admin

/usr/bin/ansible -i hosts.ini server -m command -a sh /tmp/test.sh -u admin

success => 10.125.3.33 => rc=0 =>
inet addr:10.125.3.33 Bcast:10.125.15.255 Mask:255.255.240.0
inet addr:127.0.0.1 Mask:255.0.0.0

```

Copy your personal laptop's public key to the servers so that future logins from the laptop no longer require a password

$ ansible -i ansible-hosts.ini all -m authorized_key -a " user=admin key=\"{{ lookup('file', '/tmp/id_rsa.pub') }} \"  " -u admin -k

Copying files between different folders on the same remote machine

You can also copy files between the various locations on the remote servers. You have to set the remote_src parameter to yes.

The following example copies the hello6 file in the /tmp directory of the remote server and pastes it in the /etc/ directory.

- hosts: blocks
  tasks:
    - name: Ansible copy files remote to remote
      copy:
        src: /tmp/hello6
        dest: /etc
        remote_src: yes

or:

ansible blocks -m copy -a "src=/tmp/hello6 dest=/tmp/hello7etc remote_src=yes" -s -i inventory.ini

A more efficient copy: synchronize

ansible -i xty_172.ini all -m synchronize -a " src=/home/ren/docker.service dest=/usr/lib/systemd/system/docker.socket " -u root

find_file

- hosts: all

  tasks:
    - name: find_file
      find:
        paths: /home/admin/.ssh/
        patterns: "*.rsa"
        recurse: no
      register: file_name

    - name: copy_file
      fetch:
        src: "{{ item.path }}"
        dest: /tmp/sshbak/
        flat: no
      with_items: "{{ file_name.files }}"

test

ansible-playbook -i 127.0.0.1,  ./find_file.yaml

Pass the target IP list on the command line instead of using a hosts.ini file

$ ansible -i 10.125.0.169,10.125.192.40 all -e "ansible_ssh_port=22" -a "uptime" -u admin

success => 10.125.192.40 => rc=0 =>
12:31:50 up 48 days, 17:01, 0 users, load average: 0.13, 0.06, 0.05

success => 10.125.0.169 => rc=0 =>
12:31:50 up 49 days, 2:25, 0 users, load average: 0.00, 0.01, 0.05

Notes

-i takes an IP list; note that each IP must be followed by a "," separator, and the all keyword is also required

ansible_ssh_port=22 in -e means ssh uses port 22 (the default); if ssh listens on port 9999, change 22 to 9999 here
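
The comma is what makes ansible treat the -i value as an inline host list rather than an inventory file path. A Python sketch of that dispatch rule (a simplified model, not ansible's actual inventory parser):

```python
def parse_inventory_arg(value):
    """Inline host list if the value contains a comma, else a file path."""
    if "," in value:
        return [h for h in value.split(",") if h]  # drop the empty trailing item
    return value  # treated as an inventory file path

print(parse_inventory_arg("10.125.0.169,"))  # a one-host inline list
print(parse_inventory_arg("hosts.ini"))      # a file path
```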

Run commands with root sudo privileges

ansible -i 10.125.6.93, all -m  shell -a " ls -lh /home/admin/"    -u admin --become-user=root --ask-become-pass --become-method=sudo --become -k

Grant admin passwordless login to the servers (without even knowing admin's password)

Using admin (password known) with root privileges, copy the local pub key into /home/admin on the server; after that, logging in to the server as admin no longer needs a password:
ansible -i 10.125.6.93, all -m authorized_key -a " user=admin key=\"{{ lookup('file', '/home/ren/.ssh/id_rsa.pub') }} \" " -u admin --become-user=root --ask-become-pass --become-method=sudo --become -k

Now commands run without a password:
ansible -i 10.125.6.93, all -m shell -a " ls -lha /home/admin/ " -u admin

fetch: pull the public keys from the remote servers to the local machine

ansible -i kfc.ini hadoop -m fetch -a " src=/home/admin/.ssh/id_rsa.pub dest=./test/  "  -u admin

find test/ -type f | xargs cat > ./authorized_keys

#push all the public keys to the server
ansible -i ~/ali/ansible-edas/kfc.ini hadoop -m copy -a " src=./authorized_keys dest=/home/admin/.ssh/authorized_keys mode=600 " -u admin
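
The find | xargs cat aggregation step above can be sketched in Python as well (host directories and key contents are made-up placeholders recreated in a temp directory):

```python
import os
import tempfile

# recreate the layout fetch produces: ./test/<host>/id_rsa.pub
base = tempfile.mkdtemp()
for name, key in [("h1/id_rsa.pub", "key-one\n"), ("h2/id_rsa.pub", "key-two\n")]:
    path = os.path.join(base, name)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(key)

# concatenate every fetched key into one authorized_keys file
keys = []
for root, _dirs, files in os.walk(base):
    for fn in files:
        with open(os.path.join(root, fn)) as f:
            keys.append(f.read())

with open(os.path.join(base, "authorized_keys"), "w") as f:
    f.write("".join(sorted(keys)))
print(open(os.path.join(base, "authorized_keys")).read())
```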

Or fetch in a loop:

$ cat fetch.yaml
- hosts: all
  tasks:
    - name: list the files in the folder
      #command: ls /u01/nmon/tpcc/
      shell: (cd /u01/nmon/tpcc/; find . -maxdepth 1 -type f) | cut -d'/' -f2
      register: dir_out

    - name: do the action
      fetch: src=/u01/nmon/tpcc/{{item}} dest=/home/aliyun/nmon_tpcc/ flat=no
      with_items: "{{dir_out.stdout_lines}}"

Output:

$ansible-playbook -i /home/aliyun/all.ini  fetch.yaml -u admin

PLAY [all] *******************************************************************************************

TASK [Gathering Facts] *******************************************************************************
ok: [10.88.88.18]
ok: [10.88.88.16]
ok: [10.88.88.15]
ok: [10.88.88.19]
ok: [10.88.88.17]
ok: [10.88.88.20]

TASK [list the files in the folder] ******************************************************************
changed: [10.88.88.15]
changed: [10.88.88.16]
changed: [10.88.88.17]
changed: [10.88.88.18]
changed: [10.88.88.19]
changed: [10.88.88.20]

TASK [do the action] *********************************************************************************
changed: [10.88.88.15] => (item=uos15_200729_1108.nmon)
changed: [10.88.88.18] => (item=uos18_200729_1107.nmon)
changed: [10.88.88.16] => (item=uos16_200729_1106.nmon)
changed: [10.88.88.19] => (item=adbpg2-PC_200729_1108.nmon)
changed: [10.88.88.17] => (item=uos17_200729_1107.nmon)
changed: [10.88.88.19] => (item=adbpg2-PC_200729_1936.nmon)
changed: [10.88.88.20] => (item=adbpg-PC_200729_1110.nmon)

PLAY RECAP *******************************************************************************************
10.88.88.15 : ok=3 changed=2 unreachable=0 failed=0
10.88.88.16 : ok=3 changed=2 unreachable=0 failed=0
10.88.88.17 : ok=3 changed=2 unreachable=0 failed=0
10.88.88.18 : ok=3 changed=2 unreachable=0 failed=0
10.88.88.19 : ok=3 changed=2 unreachable=0 failed=0
10.88.88.20 : ok=3 changed=2 unreachable=0 failed=0

setup: gather machine configuration and facts

# ansible -i 192.168.1.91, all -m setup -u admin
192.168.1.91 | SUCCESS => {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"172.17.0.1",
"192.168.0.91",
"192.168.1.91"
],
"ansible_all_ipv6_addresses": [],
"ansible_apparmor": {
"status": "disabled"
},
"ansible_architecture": "x86_64",
"ansible_bios_date": "04/01/2014",
"ansible_bios_version": "8c24b4c",
"ansible_cmdline": {
"BOOT_IMAGE": "/boot/vmlinuz-3.10.0-957.21.3.el7.x86_64",
"LANG": "en_US.UTF-8",
"biosdevname": "0",
"console": "ttyS0,115200n8",
"crashkernel": "auto",
"idle": "halt",
"net.ifnames": "0",
"noibrs": true,
"quiet": true,
"rhgb": true,
"ro": true,
"root": "UUID=1114fe9e-2309-4580-b183-d778e6d97397"
},
"ansible_date_time": {
"date": "2020-07-15",
"day": "15",
"epoch": "1594796084",
"hour": "14",
"iso8601": "2020-07-15T06:54:44Z",
"iso8601_basic": "20200715T145444643628",
"iso8601_basic_short": "20200715T145444",
"iso8601_micro": "2020-07-15T06:54:44.643725Z",
"minute": "54",
"month": "07",
"second": "44",
"time": "14:54:44",
"tz": "CST",
"tz_offset": "+0800",
"weekday": "星期三",
"weekday_number": "3",
"weeknumber": "28",
"year": "2020"
},
"ansible_default_ipv4": {
"address": "192.168.0.91",
"alias": "eth0",
"broadcast": "192.168.0.255",
"gateway": "192.168.0.253",
"interface": "eth0",
"macaddress": "00:16:3e:30:d9:a4",
"mtu": 1500,
"netmask": "255.255.255.0",
"network": "192.168.0.0",
"type": "ether"
},
"ansible_default_ipv6": {},
"ansible_device_links": {
"ids": {},
"labels": {
"loop2": [
"CDROM"
]
},
"masters": {},
"uuids": {
"loop0": [
"2020-07-12-14-26-47-00"
],
"loop1": [
"2020-07-12-20-25-18-00"
],
"loop2": [
"2020-07-13-09-57-36-00"
],
"vda1": [
"1114fe9e-2309-4580-b183-d778e6d97397"
]
}
},
"ansible_devices": {
"loop0": {
"holders": [],
"host": "",
"links": {
"ids": [],
"labels": [],
"masters": [],
"uuids": [
"2020-07-12-14-26-47-00"
]
},
"model": null,
"partitions": {},
"removable": "0",
"rotational": "1",
"sas_address": null,
"sas_device_handle": null,
"scheduler_mode": "",
"sectors": "327924",
"sectorsize": "512",
"size": "160.12 MB",
"support_discard": "4096",
"vendor": null,
"virtual": 1
},
"loop1": {
"holders": [],
"host": "",
"links": {
"ids": [],
"labels": [],
"masters": [],
"uuids": [
"2020-07-12-20-25-18-00"
]
},
"model": null,
"partitions": {},
"removable": "0",
"rotational": "1",
"sas_address": null,
"sas_device_handle": null,
"scheduler_mode": "",
"sectors": "359172",
"sectorsize": "512",
"size": "175.38 MB",
"support_discard": "4096",
"vendor": null,
"virtual": 1
},
"loop2": {
"holders": [],
"host": "",
"links": {
"ids": [],
"labels": [
"CDROM"
],
"masters": [],
"uuids": [
"2020-07-13-09-57-36-00"
]
},
"model": null,
"partitions": {},
"removable": "0",
"rotational": "1",
"sas_address": null,
"sas_device_handle": null,
"scheduler_mode": "",
"sectors": "128696",
"sectorsize": "512",
"size": "62.84 MB",
"support_discard": "4096",
"vendor": null,
"virtual": 1
},
"vda": {
"holders": [],
"host": "SCSI storage controller: Red Hat, Inc. Virtio block device",
"links": {
"ids": [],
"labels": [],
"masters": [],
"uuids": []
},
"model": null,
"partitions": {
"vda1": {
"holders": [],
"links": {
"ids": [],
"labels": [],
"masters": [],
"uuids": [
"1114fe9e-2309-4580-b183-d778e6d97397"
]
},
"sectors": "838847992",
"sectorsize": 512,
"size": "399.99 GB",
"start": "2048",
"uuid": "1114fe9e-2309-4580-b183-d778e6d97397"
}
},
"removable": "0",
"rotational": "1",
"sas_address": null,
"sas_device_handle": null,
"scheduler_mode": "mq-deadline",
"sectors": "838860800",
"sectorsize": "512",
"size": "400.00 GB",
"support_discard": "0",
"vendor": "0x1af4",
"virtual": 1
}
},
"ansible_distribution": "CentOS",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/redhat-release",
"ansible_distribution_file_variety": "RedHat",
"ansible_distribution_major_version": "7",
"ansible_distribution_release": "Core",
"ansible_distribution_version": "7.8",
"ansible_dns": {
"nameservers": [
"100.100.2.136",
"100.100.2.138"
],
"options": {
"attempts": "3",
"rotate": true,
"single-request-reopen": true,
"timeout": "2"
}
},
"ansible_docker0": {
"active": false,
"device": "docker0",
"features": {
"busy_poll": "off [fixed]",
"fcoe_mtu": "off [fixed]",
"generic_receive_offload": "on",
"generic_segmentation_offload": "on",
"highdma": "on",
"hw_tc_offload": "off [fixed]",
"l2_fwd_offload": "off [fixed]",
"large_receive_offload": "off [fixed]",
"loopback": "off [fixed]",
"netns_local": "on [fixed]",
"ntuple_filters": "off [fixed]",
"receive_hashing": "off [fixed]",
"rx_all": "off [fixed]",
"rx_checksumming": "off [fixed]",
"rx_fcs": "off [fixed]",
"rx_gro_hw": "off [fixed]",
"rx_udp_tunnel_port_offload": "off [fixed]",
"rx_vlan_filter": "off [fixed]",
"rx_vlan_offload": "off [fixed]",
"rx_vlan_stag_filter": "off [fixed]",
"rx_vlan_stag_hw_parse": "off [fixed]",
"scatter_gather": "on",
"tcp_segmentation_offload": "on",
"tx_checksum_fcoe_crc": "off [fixed]",
"tx_checksum_ip_generic": "on",
"tx_checksum_ipv4": "off [fixed]",
"tx_checksum_ipv6": "off [fixed]",
"tx_checksum_sctp": "off [fixed]",
"tx_checksumming": "on",
"tx_fcoe_segmentation": "on",
"tx_gre_csum_segmentation": "on",
"tx_gre_segmentation": "on",
"tx_gso_partial": "on",
"tx_gso_robust": "on",
"tx_ipip_segmentation": "on",
"tx_lockless": "on [fixed]",
"tx_nocache_copy": "off",
"tx_scatter_gather": "on",
"tx_scatter_gather_fraglist": "on",
"tx_sctp_segmentation": "on",
"tx_sit_segmentation": "on",
"tx_tcp6_segmentation": "on",
"tx_tcp_ecn_segmentation": "on",
"tx_tcp_mangleid_segmentation": "on",
"tx_tcp_segmentation": "on",
"tx_udp_tnl_csum_segmentation": "on",
"tx_udp_tnl_segmentation": "on",
"tx_vlan_offload": "on",
"tx_vlan_stag_hw_insert": "on",
"udp_fragmentation_offload": "on",
"vlan_challenged": "off [fixed]"
},
"hw_timestamp_filters": [],
"id": "8000.0242e441b693",
"interfaces": [],
"ipv4": {
"address": "172.17.0.1",
"broadcast": "172.17.255.255",
"netmask": "255.255.0.0",
"network": "172.17.0.0"
},
"macaddress": "02:42:e4:41:b6:93",
"mtu": 1500,
"promisc": false,
"stp": false,
"timestamping": [
"rx_software",
"software"
],
"type": "bridge"
},
"ansible_domain": "",
"ansible_effective_group_id": 1000,
"ansible_effective_user_id": 1000,
"ansible_env": {
"HISTCONTROL": "erasedups",
"HISTFILESIZE": "30000",
"HISTIGNORE": "pwd:ls:cd:ll:",
"HISTSIZE": "30000",
"HISTTIMEFORMAT": "%d/%m/%y %T ",
"HOME": "/home/admin",
"JAVA_HOME": "/opt/taobao/java",
"LANG": "C",
"LC_ADDRESS": "zh_CN.UTF-8",
"LC_ALL": "C",
"LC_IDENTIFICATION": "zh_CN.UTF-8",
"LC_MEASUREMENT": "zh_CN.UTF-8",
"LC_MONETARY": "zh_CN.UTF-8",
"LC_NAME": "zh_CN.UTF-8",
"LC_NUMERIC": "C",
"LC_PAPER": "zh_CN.UTF-8",
"LC_TELEPHONE": "zh_CN.UTF-8",
"LC_TIME": "zh_CN.UTF-8",
"LESSOPEN": "||/usr/bin/lesspipe.sh %s",
"LOGNAME": "admin",
"MAIL": "/var/mail/admin",
"PATH": "/usr/local/bin:/usr/bin:/opt/taobao/java8/bin:/home/admin/tools",
"PROMPT_COMMAND": "history -a",
"PS4": "+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }",
"PWD": "/home/admin",
"SHELL": "/bin/bash",
"SHLVL": "2",
"SSH_CLIENT": "192.168.1.79 51412 22",
"SSH_CONNECTION": "192.168.1.79 51412 192.168.1.91 22",
"USER": "admin",
"XDG_RUNTIME_DIR": "/run/user/1000",
"XDG_SESSION_ID": "40120",
"_": "/usr/bin/python"
},
"ansible_eth0": {
"active": true,
"device": "eth0",
"features": {
"busy_poll": "off [fixed]",
"fcoe_mtu": "off [fixed]",
"generic_receive_offload": "on",
"generic_segmentation_offload": "on",
"highdma": "on [fixed]",
"hw_tc_offload": "off [fixed]",
"l2_fwd_offload": "off [fixed]",
"large_receive_offload": "off [fixed]",
"loopback": "off [fixed]",
"netns_local": "off [fixed]",
"ntuple_filters": "off [fixed]",
"receive_hashing": "off [fixed]",
"rx_all": "off [fixed]",
"rx_checksumming": "on [fixed]",
"rx_fcs": "off [fixed]",
"rx_gro_hw": "off [fixed]",
"rx_udp_tunnel_port_offload": "off [fixed]",
"rx_vlan_filter": "off [fixed]",
"rx_vlan_offload": "off [fixed]",
"rx_vlan_stag_filter": "off [fixed]",
"rx_vlan_stag_hw_parse": "off [fixed]",
"scatter_gather": "on",
"tcp_segmentation_offload": "on",
"tx_checksum_fcoe_crc": "off [fixed]",
"tx_checksum_ip_generic": "on",
"tx_checksum_ipv4": "off [fixed]",
"tx_checksum_ipv6": "off [fixed]",
"tx_checksum_sctp": "off [fixed]",
"tx_checksumming": "on",
"tx_fcoe_segmentation": "off [fixed]",
"tx_gre_csum_segmentation": "off [fixed]",
"tx_gre_segmentation": "off [fixed]",
"tx_gso_partial": "off [fixed]",
"tx_gso_robust": "off [fixed]",
"tx_ipip_segmentation": "off [fixed]",
"tx_lockless": "off [fixed]",
"tx_nocache_copy": "off",
"tx_scatter_gather": "on",
"tx_scatter_gather_fraglist": "off [fixed]",
"tx_sctp_segmentation": "off [fixed]",
"tx_sit_segmentation": "off [fixed]",
"tx_tcp6_segmentation": "on",
"tx_tcp_ecn_segmentation": "on",
"tx_tcp_mangleid_segmentation": "off",
"tx_tcp_segmentation": "on",
"tx_udp_tnl_csum_segmentation": "off [fixed]",
"tx_udp_tnl_segmentation": "off [fixed]",
"tx_vlan_offload": "off [fixed]",
"tx_vlan_stag_hw_insert": "off [fixed]",
"udp_fragmentation_offload": "on",
"vlan_challenged": "off [fixed]"
},
"hw_timestamp_filters": [],
"ipv4": {
"address": "192.168.0.91",
"broadcast": "192.168.0.255",
"netmask": "255.255.255.0",
"network": "192.168.0.0"
},
"macaddress": "00:16:3e:30:d9:a4",
"module": "virtio_net",
"mtu": 1500,
"pciid": "virtio2",
"promisc": false,
"timestamping": [
"rx_software",
"software"
],
"type": "ether"
},
"ansible_eth1": {
"active": true,
"device": "eth1",
"features": {
"busy_poll": "off [fixed]",
"fcoe_mtu": "off [fixed]",
"generic_receive_offload": "on",
"generic_segmentation_offload": "on",
"highdma": "on [fixed]",
"hw_tc_offload": "off [fixed]",
"l2_fwd_offload": "off [fixed]",
"large_receive_offload": "off [fixed]",
"loopback": "off [fixed]",
"netns_local": "off [fixed]",
"ntuple_filters": "off [fixed]",
"receive_hashing": "off [fixed]",
"rx_all": "off [fixed]",
"rx_checksumming": "on [fixed]",
"rx_fcs": "off [fixed]",
"rx_gro_hw": "off [fixed]",
"rx_udp_tunnel_port_offload": "off [fixed]",
"rx_vlan_filter": "off [fixed]",
"rx_vlan_offload": "off [fixed]",
"rx_vlan_stag_filter": "off [fixed]",
"rx_vlan_stag_hw_parse": "off [fixed]",
"scatter_gather": "on",
"tcp_segmentation_offload": "on",
"tx_checksum_fcoe_crc": "off [fixed]",
"tx_checksum_ip_generic": "on",
"tx_checksum_ipv4": "off [fixed]",
"tx_checksum_ipv6": "off [fixed]",
"tx_checksum_sctp": "off [fixed]",
"tx_checksumming": "on",
"tx_fcoe_segmentation": "off [fixed]",
"tx_gre_csum_segmentation": "off [fixed]",
"tx_gre_segmentation": "off [fixed]",
"tx_gso_partial": "off [fixed]",
"tx_gso_robust": "off [fixed]",
"tx_ipip_segmentation": "off [fixed]",
"tx_lockless": "off [fixed]",
"tx_nocache_copy": "off",
"tx_scatter_gather": "on",
"tx_scatter_gather_fraglist": "off [fixed]",
"tx_sctp_segmentation": "off [fixed]",
"tx_sit_segmentation": "off [fixed]",
"tx_tcp6_segmentation": "on",
"tx_tcp_ecn_segmentation": "on",
"tx_tcp_mangleid_segmentation": "off",
"tx_tcp_segmentation": "on",
"tx_udp_tnl_csum_segmentation": "off [fixed]",
"tx_udp_tnl_segmentation": "off [fixed]",
"tx_vlan_offload": "off [fixed]",
"tx_vlan_stag_hw_insert": "off [fixed]",
"udp_fragmentation_offload": "on",
"vlan_challenged": "off [fixed]"
},
"hw_timestamp_filters": [],
"ipv4": {
"address": "192.168.1.91",
"broadcast": "192.168.1.255",
"netmask": "255.255.255.0",
"network": "192.168.1.0"
},
"macaddress": "00:16:3e:2c:a2:c2",
"module": "virtio_net",
"mtu": 1500,
"pciid": "virtio4",
"promisc": false,
"timestamping": [
"rx_software",
"software"
],
"type": "ether"
},
"ansible_fibre_channel_wwn": [],
"ansible_fips": false,
"ansible_form_factor": "Other",
"ansible_fqdn": "jtdb001",
"ansible_hostname": "jtdb001",
"ansible_hostnqn": "",
"ansible_interfaces": [
"lo",
"docker0",
"eth1",
"eth0"
],
"ansible_is_chroot": false,
"ansible_iscsi_iqn": "",
"ansible_kernel": "3.10.0-957.21.3.el7.x86_64",
"ansible_kernel_version": "#1 SMP Tue Jun 18 16:35:19 UTC 2019",
"ansible_lo": {
"active": true,
"device": "lo",
"features": {
"busy_poll": "off [fixed]",
"fcoe_mtu": "off [fixed]",
"generic_receive_offload": "on",
"generic_segmentation_offload": "on",
"highdma": "on [fixed]",
"hw_tc_offload": "off [fixed]",
"l2_fwd_offload": "off [fixed]",
"large_receive_offload": "off [fixed]",
"loopback": "on [fixed]",
"netns_local": "on [fixed]",
"ntuple_filters": "off [fixed]",
"receive_hashing": "off [fixed]",
"rx_all": "off [fixed]",
"rx_checksumming": "on [fixed]",
"rx_fcs": "off [fixed]",
"rx_gro_hw": "off [fixed]",
"rx_udp_tunnel_port_offload": "off [fixed]",
"rx_vlan_filter": "off [fixed]",
"rx_vlan_offload": "off [fixed]",
"rx_vlan_stag_filter": "off [fixed]",
"rx_vlan_stag_hw_parse": "off [fixed]",
"scatter_gather": "on",
"tcp_segmentation_offload": "on",
"tx_checksum_fcoe_crc": "off [fixed]",
"tx_checksum_ip_generic": "on [fixed]",
"tx_checksum_ipv4": "off [fixed]",
"tx_checksum_ipv6": "off [fixed]",
"tx_checksum_sctp": "on [fixed]",
"tx_checksumming": "on",
"tx_fcoe_segmentation": "off [fixed]",
"tx_gre_csum_segmentation": "off [fixed]",
"tx_gre_segmentation": "off [fixed]",
"tx_gso_partial": "off [fixed]",
"tx_gso_robust": "off [fixed]",
"tx_ipip_segmentation": "off [fixed]",
"tx_lockless": "on [fixed]",
"tx_nocache_copy": "off [fixed]",
"tx_scatter_gather": "on [fixed]",
"tx_scatter_gather_fraglist": "on [fixed]",
"tx_sctp_segmentation": "on",
"tx_sit_segmentation": "off [fixed]",
"tx_tcp6_segmentation": "on",
"tx_tcp_ecn_segmentation": "on",
"tx_tcp_mangleid_segmentation": "on",
"tx_tcp_segmentation": "on",
"tx_udp_tnl_csum_segmentation": "off [fixed]",
"tx_udp_tnl_segmentation": "off [fixed]",
"tx_vlan_offload": "off [fixed]",
"tx_vlan_stag_hw_insert": "off [fixed]",
"udp_fragmentation_offload": "on",
"vlan_challenged": "on [fixed]"
},
"hw_timestamp_filters": [],
"ipv4": {
"address": "127.0.0.1",
"broadcast": "host",
"netmask": "255.0.0.0",
"network": "127.0.0.0"
},
"mtu": 65536,
"promisc": false,
"timestamping": [
"rx_software",
"software"
],
"type": "loopback"
},
"ansible_local": {},
"ansible_lsb": {},
"ansible_machine": "x86_64",
"ansible_machine_id": "20190711105006363114529432776998",
"ansible_memfree_mb": 33368,
"ansible_memory_mb": {
"nocache": {
"free": 41285,
"used": 6079
},
"real": {
"free": 33368,
"total": 47364,
"used": 13996
},
"swap": {
"cached": 0,
"free": 0,
"total": 0,
"used": 0
}
},
"ansible_memtotal_mb": 47364,
"ansible_mounts": [
{
"block_available": 0,
"block_size": 2048,
"block_total": 32174,
"block_used": 32174,
"device": "/dev/loop2",
"fstype": "iso9660",
"inode_available": 0,
"inode_total": 0,
"inode_used": 0,
"mount": "/mnt/yum",
"options": "ro,relatime",
"size_available": 0,
"size_total": 65892352,
"uuid": "2020-07-13-09-57-36-00"
},
{
"block_available": 0,
"block_size": 2048,
"block_total": 81981,
"block_used": 81981,
"device": "/dev/loop0",
"fstype": "iso9660",
"inode_available": 0,
"inode_total": 0,
"inode_used": 0,
"mount": "/mnt/iso",
"options": "ro,relatime",
"size_available": 0,
"size_total": 167897088,
"uuid": "2020-07-12-14-26-47-00"
},
{
"block_available": 0,
"block_size": 2048,
"block_total": 89793,
"block_used": 89793,
"device": "/dev/loop1",
"fstype": "iso9660",
"inode_available": 0,
"inode_total": 0,
"inode_used": 0,
"mount": "/mnt/drds",
"options": "ro,relatime",
"size_available": 0,
"size_total": 183896064,
"uuid": "2020-07-12-20-25-18-00"
},
{
"block_available": 96685158,
"block_size": 4096,
"block_total": 103177963,
"block_used": 6492805,
"device": "/dev/vda1",
"fstype": "ext4",
"inode_available": 26110896,
"inode_total": 26214400,
"inode_used": 103504,
"mount": "/",
"options": "rw,relatime,data=ordered",
"size_available": 396022407168,
"size_total": 422616936448,
"uuid": "1114fe9e-2309-4580-b183-d778e6d97397"
}
],
"ansible_nodename": "jtdb001",
"ansible_os_family": "RedHat",
"ansible_pkg_mgr": "yum",
"ansible_proc_cmdline": {
"BOOT_IMAGE": "/boot/vmlinuz-3.10.0-957.21.3.el7.x86_64",
"LANG": "en_US.UTF-8",
"biosdevname": "0",
"console": [
"tty0",
"ttyS0,115200n8"
],
"crashkernel": "auto",
"idle": "halt",
"net.ifnames": "0",
"noibrs": true,
"quiet": true,
"rhgb": true,
"ro": true,
"root": "UUID=1114fe9e-2309-4580-b183-d778e6d97397"
},
"ansible_processor": [
"0",
"GenuineIntel",
"Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz",
"1",
"GenuineIntel",
"Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz",
"2",
"GenuineIntel",
"Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz",
"3",
"GenuineIntel",
"Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz",
"4",
"GenuineIntel",
"Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz",
"5",
"GenuineIntel",
"Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz",
"6",
"GenuineIntel",
"Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz",
"7",
"GenuineIntel",
"Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz",
"8",
"GenuineIntel",
"Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz",
"9",
"GenuineIntel",
"Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz",
"10",
"GenuineIntel",
"Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz",
"11",
"GenuineIntel",
"Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz"
],
"ansible_processor_cores": 6,
"ansible_processor_count": 1,
"ansible_processor_threads_per_core": 2,
"ansible_processor_vcpus": 12,
"ansible_product_name": "Alibaba Cloud ECS",
"ansible_product_serial": "NA",
"ansible_product_uuid": "NA",
"ansible_product_version": "pc-i440fx-2.1",
"ansible_python": {
"executable": "/usr/bin/python",
"has_sslcontext": true,
"type": "CPython",
"version": {
"major": 2,
"micro": 5,
"minor": 7,
"releaselevel": "final",
"serial": 0
},
"version_info": [
2,
7,
5,
"final",
0
]
},
"ansible_python_version": "2.7.5",
"ansible_real_group_id": 1000,
"ansible_real_user_id": 1000,
"ansible_selinux": {
"status": "disabled"
},
"ansible_selinux_python_present": true,
"ansible_service_mgr": "systemd",
"ansible_ssh_host_key_dsa_public": "AAAAB3NzaC1kc3MAAACBAIjMSdXjIBwLTRwqzzLzJzw52IikcmHpmM65Idw9Q/CCH23SJdmmYzl9LIWFTEf2ZP4dHYibvgWtqfc6AHLFVgM1lz3wwdJJSyBD1TyFet+MPZEA1A9jw2Ke2K9C942dWATCpi3B0nk0KJDp49+V0QjUUjZmzt7I66wDmPLpW7mNAAAAFQDXmbLv48zsFHUgPiixhcKsk29ZPQAAAIAHHM+jfcL3V/X6EovQGj/2OytDN7k5hb4KRNTzBwh9JU5V44+S3r5ZViJDthKBolVT1CLX8jAivBu6d70ImYcZLa75AImOnlSp9D4xGP4TNfdAYrA7CkYpzn8ky15xjFDjkL0BjVmeEg6In+04tZOp/kIi/Ft9/ld63W4xopspwwAAAIAhBCIAMW37rknrsmv3sXmhgt+FeUQA/o8moZKcX+xI5sv27NEavQGGKOvZM4+nhCggRvjWaxC9N1DnO2g52trhGrUhNF0qwn/4iar/yknZWwRyZXzB3YtOdJXxCoJphuuGeqJRsLPb7OEIAF7c3lFJcfMUrwcjWrRtFMUM6mE+gQ==",
"ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMffg6EX26f+10IIgg/U7+PsCUDs8Ep0MUttUyVh3+bJ7/K7ROMhuc8BTieA4PRj3MOaKMbUuZTqPTmrK/4srqg=",
"ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAINIKYkm+FKDTvx6VgENoAnXwOJQ+xZjk3rkvUqZ/4F3i",
"ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC1xlLrDTri/jRfph6Uqx6CoY1/+uAE34rR9sR4FtE+2OMM8kUN0+N+hWLL+8r/pzM40RJOUmELYTlibfnjkYDsmYcpxD8kOxonvlYQbpvram8Hx7X8W1thYs//Zdhltmz1ijTiEatCL/yxJnwrpxN1XOtbMtALKgykbOzF+LNevFUG05MxxQR5WVjijXwK/Auf0ce/ei3NISQZLiW+d+IVYPkAQDpbUpH5W/qGDN0W8wT2OGE0bOvrPfDPRhSxeYrcS4mgS7nGvB26sFyeAimgadnxmWaxAveargYKt33jJQhVaA/23kw+/lygQcSN1QJ2mpeHb3ugay0Gv1i/Wd7P",
"ansible_swapfree_mb": 0,
"ansible_swaptotal_mb": 0,
"ansible_system": "Linux",
"ansible_system_capabilities": [
""
],
"ansible_system_capabilities_enforced": "True",
"ansible_system_vendor": "Alibaba Cloud",
"ansible_uptime_seconds": 11384976,
"ansible_user_dir": "/home/admin",
"ansible_user_gecos": "",
"ansible_user_gid": 1000,
"ansible_user_id": "admin",
"ansible_user_shell": "/bin/bash",
"ansible_user_uid": 1000,
"ansible_userspace_architecture": "x86_64",
"ansible_userspace_bits": "64",
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "kvm",
"discovered_interpreter_python": "/usr/bin/python",
"gather_subset": [
"all"
],
"module_setup": true
},
"changed": false
}
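The fact tree above is plain JSON; templates and scripts address it as nested keys (e.g. `ansible_eth0.ipv4.address`). A minimal Python sketch, using a trimmed-down hypothetical subset of the facts, that collects each interface's IPv4 address:

```python
# The facts dict is a trimmed, hypothetical subset of `ansible -m setup` output.
facts = {
    "ansible_interfaces": ["lo", "eth0", "eth1"],
    "ansible_lo": {"ipv4": {"address": "127.0.0.1"}, "type": "loopback"},
    "ansible_eth0": {"ipv4": {"address": "192.168.0.91"}, "type": "ether"},
    "ansible_eth1": {"ipv4": {"address": "192.168.1.91"}, "type": "ether"},
}

def iface_addresses(facts):
    """Map interface name -> IPv4 address, skipping interfaces without one."""
    out = {}
    for name in facts.get("ansible_interfaces", []):
        iface = facts.get("ansible_" + name, {})
        addr = iface.get("ipv4", {}).get("address")
        if addr:
            out[name] = addr
    return out

print(iface_addresses(facts))
```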

ansible + xargs placeholder

# run docker exec in batch across all hosts
ansible -i host.ini all -m shell -a "docker ps -a | grep pxd-tpcc | grep dn | cut -d ' ' -f 1 | xargs -I{} docker exec {} bash -c \"myc -e 'shutdown'\""
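The shell pipeline above (two greps, a cut, then `xargs -I{}` placeholder substitution) can be reasoned about in isolation. A Python sketch that reproduces the filtering and substitution on made-up `docker ps -a` output (container IDs and names are hypothetical):

```python
# Simulate: docker ps -a | grep pxd-tpcc | grep dn | cut -d ' ' -f 1 | xargs -I{} ...
sample = """\
a1b2c3d4e5f6 image1 pxd-tpcc-dn-0
0f9e8d7c6b5a image2 pxd-tpcc-cn-0
123456789abc image3 other-container
deadbeef0000 image4 pxd-tpcc-dn-1"""

def build_commands(ps_output):
    cmds = []
    for line in ps_output.splitlines():
        if "pxd-tpcc" in line and "dn" in line:      # the two grep filters
            container_id = line.split(" ")[0]        # cut -d ' ' -f 1
            # xargs -I{} substitutes each id into the exec command
            cmds.append(f"docker exec {container_id} bash -c \"myc -e 'shutdown'\"")
    return cmds

for c in build_commands(sample):
    print(c)
```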

Run a playbook against specific IPs

ansible-playbook -i "10.168.101.179," test.yml

Or:

ansible -i phy.ini 11.167.60.150 -m shell -a 'docker run -it -d --net=host -e diamond_server_list="" -e diamond_db0="" -e diamond_db1="" -e diamond_db2="" -e HOST_IP="" -p 8080:8080 -p 9090:9090 --name diamond ' -vvv

The second form also reuses all the variable definitions in phy.ini.

Create a user and set up SSH access

$ cat create_user.yml
# create user ren with passwd test and sudo privileges.
# ansible-playbook -i docker.ini create_user.yml
- hosts: all
  user: root
  vars:
    # created with:
    # python -c 'import crypt; print crypt.crypt("password", "$1$SomeSalt$")'
    password: $1$SomeSalt$OrX9ouxOCP0ZOpVG9SwnR/

  tasks:
    - name: create a new user
      user:
        name: '{{ user }}'
        password: '{{ password }}'
        home: /home/{{ user }}
        state: present
        shell: /bin/bash

    - name: Add user to the sudoers
      copy:
        dest: "/etc/sudoers.d/{{ user }}"
        content: "{{ user }} ALL=(ALL) NOPASSWD: ALL"

    - name: Deploy SSH Key
      authorized_key: user={{ user }}
                      key="{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
                      state=present

Then run: ansible-playbook -i all.ini create_user.yml -e "user=admin"

Or:

$ ansible -i 192.168.2.101, all -m user -a "name=user02 system=yes uid=503 group=root groups=root shell=/etc/nologin home=/home/user02 password=pwd@123"
192.168.2.101 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "comment": "",
    "create_home": true,
    "group": 0,
    "groups": "root",
    "home": "/home/user02",
    "name": "user02",
    "password": "NOT_LOGGING_PASSWORD",
    "shell": "/etc/nologin",
    "state": "present",
    "system": true,
    "uid": 503
}

Note that the user module's password= expects an already-hashed value; a plain string like pwd@123 is written to /etc/shadow verbatim and will not match at login (see the password_hash examples below).

Playbook task formatting rules

Do not mix tabs and spaces when aligning/indenting YAML.
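The indentation rule above can be checked mechanically; a small sketch that flags lines whose leading whitespace contains a tab (YAML forbids tabs for indentation):

```python
def find_tab_indented_lines(text):
    """Return 1-based line numbers whose leading whitespace contains a tab."""
    bad = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        indent = line[: len(line) - len(line.lstrip(" \t"))]
        if "\t" in indent:
            bad.append(lineno)
    return bad

yaml_text = "- name: demo\n  file: path=/tmp state=directory\n\ttags: docker\n"
print(find_tab_indented_lines(yaml_text))  # the tab-indented third line is flagged
```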

Change a password

Create the following playbook, changepw.yml:

- name: user module demo
  hosts: all
  become: true
  vars:
    user: 'admin'
    mypassword: "PolarDB-X"
    #mypassword: "$1$SomeSalt$PB9C3LT9wCjmaMYdBWsRS1"

  tasks:
    - name: change password
      ansible.builtin.user:
        name: "{{ user }}"
        state: present
        password: "{{ mypassword | password_hash('sha512') }}"

Usage:

ansible-playbook -i 1.2.3.4, changepw.yml -e "user=root" -e "mypassword=123"

This changes the root account's password to 123.

Or:

ansible -i 1.2.3.4, all -e "newpassword=1234" -m user -a "name=admin update_password=always password={{ newpassword|password_hash('sha512') }}"

Create a user and set its password

ansible -i 1.2.3.4, all  -e "newpassword=1234" -m user -a "name=ren state=present shell=/bin/sh update_password=always password={{ newpassword|password_hash('sha512') }}"
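`password_hash('sha512')` renders a crypt(3)-style string of the form `$6$<salt>$<digest>`, which the user module stores in /etc/shadow verbatim. A sketch that splits such a hash into its fields (the example hash below is made up, not a real digest):

```python
def parse_crypt_hash(h):
    """Split a crypt(3)-style '$<id>$<salt>$<digest>' string; id '6' means sha512-crypt."""
    _, scheme, salt, digest = h.split("$")
    return {"scheme": scheme, "salt": salt, "digest": digest}

# A hypothetical sha512-crypt hash, shaped like password_hash('sha512') output:
example = "$6$SomeSalt$" + "x" * 86
fields = parse_crypt_hash(example)
print(fields["scheme"], fields["salt"])
```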

Playbook to deploy the docker daemon

Run: ansible-playbook site.yml -v -i test.ini -u admin -e "project=docker" -p

$ cat roles/docker/tasks/main.yml
# filename: main.yml
---
#"****************************************************************************""
- name: copy docker execute file to remote
  copy: src=docker/ dest=/usr/bin/ mode=0755 force=yes
  tags: copytar

- name: create storage dir
  file: path={{ storage_dir }} state=directory
  ignore_errors: true
  tags: docker

- name: create the systemd dir
  file: path=/etc/systemd/system/ state=directory
  ignore_errors: true
  tags: docker

- name: template docker.service to server
  template: src=docker.service dest=/etc/systemd/system/docker.service
  tags: docker

- name: template docker.socket to server
  template: src=docker.socket dest=/usr/lib/systemd/system/docker.socket
  tags: docker

- name: create /etc/docker dir on server
  file: path=/etc/docker state=directory
  ignore_errors: true
  tags: docker

- name: copy daemon.json to server
  template: src={{ inventory_hostname }}/daemon.json dest=/etc/docker/daemon.json
  ignore_errors: true
  tags: docker

- name: copy the load ovs modules script to server
  copy: src=openvswitch.modules dest=/etc/sysconfig/modules/openvswitch.modules mode=0755 force=yes
  tags: docker

- name: kill docker daemon
  shell: "kill -9 $(cat /var/run/docker.pid)"
  ignore_errors: true
  tags: test

- name: reload systemd units
  shell: "systemctl daemon-reload"
  tags: docker

- name: enable the docker service
  shell: "systemctl enable docker.service"
  ignore_errors: true
  tags: docker

- name: start docker service
  shell: "systemctl start docker.service"

- name: remove all containers
  shell: sudo docker ps -a | awk '{print $1}' | xargs sudo docker rm -f -v
  ignore_errors: true

- name: template /etc/hosts to server
  template: src=hosts dest=/etc/hosts owner=root group=root mode=0644 force=yes
  tags: restorehosts

- name: mkdir /tmp/etc/
  shell: "mkdir /tmp/etc/"
  ignore_errors: true
  tags: hosts

- name: copy remote /etc/hosts to /tmp
  shell: "cp /etc/hosts /tmp/etc/"
  tags: hosts

- name: copy /etc/hosts fragment to server
  template: src=etc.host dest=/tmp/etc/ owner={{ remote_user }} group={{ remote_user }} mode=0700 force=yes
  tags: hosts

- name: merge /etc/hosts
  assemble: src=/tmp/etc dest=/etc/hosts owner=root group=root mode=0644 force=yes
  tags: hosts

- name: copy docker_rc.sh to server
  template: src=docker_rc.sh dest={{ docker_rc_dir }}/docker_rc.sh owner=root group=root mode=0755 force=yes
  when: use_vxlan != "true"
  tags: docker_rc

- name: copy docker_rc.sh to server
  template: src=docker_rc_vm.sh dest={{ docker_rc_dir }}/docker_rc.sh owner=root group=root mode=0755 force=yes
  when: use_vxlan == "true"
  tags: docker_rc

- name: clean docker_rc in rc.local
  command: su - root -c " sed -i '/docker_rc.sh/d' /etc/rc.d/rc.local "
  ignore_errors: true
  sudo: yes
  tags: docker_rc

- name: start docker when the system reboots
  command: su - root -c " echo 'su - root -c \"{{ docker_rc_dir }}/docker_rc.sh\" ' >> /etc/rc.d/rc.local "
  ignore_errors: true
  sudo: yes
  tags: docker_rc

- name: chmod /etc/rc.d/rc.local
  shell: "chmod +x /etc/rc.d/rc.local"
  ignore_errors: true
  sudo: yes
  tags: docker_rc

- name: clean previous space occupier
  file: path={{ storage_dir }}/ark.disk{{ item }}.tmp state=absent
  with_items:
    - 1
    - 2
  ignore_errors: true
  tags: docker

- name: occupy space for docker
  shell: "dd if=/dev/zero of={{ storage_dir }}/ark.disk{{ item }}.tmp bs=1M count=1024"
  sudo: yes
  with_items:
    - 1
    - 2
  tags: docker
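The two rc.local tasks above form an idempotent pair: the sed deletes any previous docker_rc.sh line, then the echo appends a fresh one, so repeated playbook runs never duplicate the entry. The same logic in a Python sketch (the /opt path is hypothetical):

```python
def reinstall_rc_line(rc_content, marker, new_line):
    """Drop every line containing `marker`, then append `new_line` (the sed + echo pattern)."""
    kept = [l for l in rc_content.splitlines() if marker not in l]
    kept.append(new_line)
    return "\n".join(kept) + "\n"

rc = 'touch /var/lock/subsys/local\nsu - root -c "/opt/docker_rc.sh"\n'
line = 'su - root -c "/opt/docker_rc.sh"'
once = reinstall_rc_line(rc, "docker_rc.sh", line)
twice = reinstall_rc_line(once, "docker_rc.sh", line)
print(once == twice)  # applying it again changes nothing
```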

Deploy ZooKeeper

$ cat roles/zookeeper/tasks/main.yml
# filename: main.yml
---
#"****************************************************************************""
- name: extract zookeeper tgz
  unarchive: src={{ packages_dir }}/lib/{{ zk_package_name }} dest=/opt
  sudo: yes

- name: create zk data and log dirs
  file: path={{ item }} state=directory mode=0755
  with_items:
    - "{{ zk_data_dir }}"
    - "{{ zk_logs_dir }}"

- name: set the myid
  template: src=myid dest={{ zk_myid_file }} mode=0644

- name: template zoo.cfg
  template: src=zoo.cfg dest={{ zk_install_dir }}/conf/ mode=0644

- name: copy log4j to remote
  template: src=log4j.properties dest={{ zk_install_dir }}/conf/log4j.properties

- name: determine zk process
  command: su - root -c "ps aux | grep java | grep -v grep | grep {{ zk_install_dir }}"
  register: result
  ignore_errors: true

- name: stop zk server
  command: su - root -c "sh {{ zk_install_dir }}/bin/zkServer.sh stop"
  ignore_errors: true
  when: "result.rc == 0"

- name: start zk server
  command: su - root -c "sh {{ zk_install_dir }}/bin/zkServer.sh start"

- name: get process info
  command: su - root -c "ps aux | grep java | grep -v grep | grep {{ zk_install_dir }}"
  register: result

- name: clean zk entry in rc.local
  command: su - root -c " sed -i '/{{ zk_dir_name }}/d' /etc/rc.d/rc.local "
  ignore_errors: true
  sudo: yes

- name: start the zk service when the system reboots
  command: su - root -c " echo 'su - root -c \"{{ zk_install_dir }}/bin/zkServer.sh start\" ' >> /etc/rc.d/rc.local "
  ignore_errors: true
  sudo: yes

- name: make rc.local executable
  shell: "chmod +x /etc/rc.d/rc.local"
  ignore_errors: true
  sudo: yes
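The myid and zoo.cfg templates above pair each ensemble host with a 1-based numeric id. A sketch of that templating logic (the host list is hypothetical; 2888/3888 are ZooKeeper's conventional leader and election ports):

```python
def zoo_server_lines(hosts, leader_port=2888, election_port=3888):
    """Render the server.N=host:leader:election lines zoo.cfg expects; N matches each host's myid."""
    return [
        f"server.{i}={h}:{leader_port}:{election_port}"
        for i, h in enumerate(hosts, start=1)
    ]

lines = zoo_server_lines(["192.168.1.91", "192.168.1.92", "192.168.1.93"])
print("\n".join(lines))
```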

References

How to Copy Files and Directories in Ansible Using Copy and Fetch Modules

How to use different variables for different machines and roles in an Ansible playbook

How to use different variables for different machines and roles in an Ansible playbook

Problem scenario 1

While installing the Edas Agent script, we found that machines in different regions [Shenzhen, Hangzhou, Beijing] sit on different network types [VPC, classic], and machines in each region/network combination need a different download URL.

Problem scenario 2

MySQL and Diamond are installed on the same machine and each needs its own Project_Name. Defined in hosts.ini, one would inevitably overwrite the other, since a host is a single variable scope [a function would not let you define two variables with the same name either!].

Solution to scenario 1

Define the machines and their variables per group in hosts.ini:

[sz_vpc]
10.125.0.169 
10.125.192.40

[sz_normal]
10.125.12.174 

[sz:children]
sz_vpc
sz_normal

[hz_vpc]
10.125.3.33  
[hz_normal]
10.125.14.238

[hz:children]
hz_vpc
hz_normal

############variables
[sz_vpc:vars]
script_url="sz_vpc"

[sz_normal:vars]
script_url="sz_normal"

[hz_vpc:vars]
script_url="hz_vpc"

[hz_normal:vars]
script_url="hz_normal"
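The inventory above resolves a variable per host through group membership (the :children groups merely aggregate hosts; the vars live on the leaf groups). A sketch modelling that resolution with the same groups, child groups already flattened:

```python
# Model of the hosts.ini above: leaf group -> hosts, leaf group -> group vars.
groups = {
    "sz_vpc": ["10.125.0.169", "10.125.192.40"],
    "sz_normal": ["10.125.12.174"],
    "hz_vpc": ["10.125.3.33"],
    "hz_normal": ["10.125.14.238"],
}
group_vars = {g: {"script_url": g} for g in groups}

def resolve(host, var):
    """Find `var` for `host` via the vars of the group it belongs to."""
    for g, hosts in groups.items():
        if host in hosts and var in group_vars.get(g, {}):
            return group_vars[g][var]
    return None

print(resolve("10.125.3.33", "script_url"))
```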

Verification task

- name: test variables
  debug: msg={{ script_url }}  # print each host's url to confirm the definitions took effect
  tags: test

Result

$udp-playbook -i udp-hosts.ini site.yml -b -u admin -t test    

UDP-PLAY-START: [apply common configuration to all nodes] ********************* 

UDP-TASK: [test variables] **************************************************** 
ok => 10.125.3.33 => {
    "msg": "hz_vpc"
}
ok => 10.125.0.169 => {
    "msg": "sz_vpc"
}
ok => 10.125.192.40 => {
    "msg": "sz_vpc"
}
ok => 10.125.14.238 => {
    "msg": "hz_normal"
}
ok => 10.125.12.174 => {
    "msg": "sz_normal"
}

Solution to scenario 2

Do not put the variable in hosts.ini. Instead, add a defaults file to each of the MySQL and Diamond roles and define their Project_Name there; the two definitions are then scoped to their roles and cannot overwrite each other.

Directory layout

$ find roles
roles/
roles/mysql
roles/mysql/tasks
roles/mysql/tasks/main.yml
roles/mysql/defaults
roles/mysql/defaults/main.yml
roles/diamond
roles/diamond/tasks
roles/diamond/tasks/main.yml
roles/diamond/defaults
roles/diamond/defaults/main.yml

Variable definitions

$ cat roles/mysql/defaults/main.yml

project: {
  "project_name": mysql,
  "version": 5.6.0
}

$ cat roles/diamond/defaults/main.yml

project: {
  "project_name": diamond,
  "version": 3.5.0
}
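Because each role evaluates its tasks against its own defaults/main.yml, the two `project` mappings never collide; a host-level definition would be one shared value, clobbered by whichever load comes last. Conceptually:

```python
# Each role's defaults form a private scope, modelled here as separate dicts.
mysql_defaults = {"project": {"project_name": "mysql", "version": "5.6.0"}}
diamond_defaults = {"project": {"project_name": "diamond", "version": "3.5.0"}}

def render_task(role_defaults):
    """What the debug task below prints, given one role's scope."""
    return "print the tar file name: " + role_defaults["project"]["project_name"]

print(render_task(mysql_defaults))
print(render_task(diamond_defaults))
```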

Using the variable

- name: print the tar file name
  debug: msg="{{ project.project_name }}"
  tags: test

role and playbook usage

Meaning of the directories in a role

  • tasks: the task list. For the role to take effect this directory must contain a main task file, main.yml; main.yml can pull in other files from the same directory via include.
  • handlers: handlers for the role; to take effect the file must be named main.yml.
  • files: when a task runs the copy or script module with a relative path, the file is looked up here.
  • templates: when a task runs the template module with a relative path, the template file is looked up here.
  • vars: variables private to this role; if a vars file exists, it must be named main.yml.
  • defaults: default variables for the role. Role defaults have the lowest precedence and are overridden by a same-named variable at any other level. If a file exists, it must be named main.yml.
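The layout described above can be scaffolded in a few lines (this mimics what `ansible-galaxy init` does; the role name "demo" is arbitrary):

```python
import os
import tempfile

def scaffold_role(base, name):
    """Create the standard role skeleton, with an empty main.yml where one is required."""
    for sub in ("tasks", "handlers", "files", "templates", "vars", "defaults"):
        d = os.path.join(base, "roles", name, sub)
        os.makedirs(d, exist_ok=True)
        if sub not in ("files", "templates"):  # those two hold payload files, not main.yml
            open(os.path.join(d, "main.yml"), "w").close()

base = tempfile.mkdtemp()
scaffold_role(base, "demo")
print(sorted(os.listdir(os.path.join(base, "roles", "demo"))))
```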
List the tasks a playbook would run, without executing it:

ansible-playbook 11.harbor.yml --list-tasks