How to use the open source Ansible Tower (AWX)

Install AWX

AWX is the open source, web-based GUI for Ansible (the upstream project of Ansible Tower). In this article we will deploy AWX on a Kubernetes cluster. We will also use GitLab-CE to store playbooks and manage versioning.

The first step is to clone AWX:

git clone https://github.com/ansible/awx.git

Then you need to complete the inventory file. AWX itself is deployed with Ansible, so you also need Ansible, Helm, and kubectl installed.

The installation is explained here: https://github.com/ansible/awx/blob/devel/INSTALL.md

You need to modify some variables to describe which K8s cluster you will use:

  • c01-m1 is the Kubernetes master node
  • kubernetes-admin@kubernetes is the context name
c01-m1 ansible_python_interpreter="/usr/bin/env python3"
[..]
kubernetes_context=kubernetes-admin@kubernetes
kubernetes_namespace=awx
kubernetes_web_svc_type=LoadBalancer

Now you can deploy AWX on your cluster with Ansible:

ansible-playbook -i inventory install.yml

Check your deployment:

kubectl get svc,pod,pvc -n awx

The default credentials are admin / password.

GitLab-CE

Now you need a Git repository with one playbook.

You can easily deploy GitLab-CE as a container inside Kubernetes.

Example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-ce
  labels:
    app: gitlab-ce
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab-ce
  template:
    metadata:
      labels:
        app: gitlab-ce
    spec:
      containers:
      - name: gitlab-ce
        image: gitlab/gitlab-ce
        ports:
        - containerPort: 22
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-gitlab-ce
  labels:
    app: gitlab-ce
spec:
  ports:
  # make the service available on this port
  - port: 22
    targetPort: 22
    protocol: TCP
    name: ssh
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    # apply this service to the pods with the label app: gitlab-ce
    app: gitlab-ce
  type: LoadBalancer
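
You can then apply this manifest with kubectl (the file name gitlab-ce.yaml is an assumption):

kubectl apply -f gitlab-ce.yaml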

Now we create a project containing one playbook named query_tenant.yml.

This is a very basic playbook with three tasks:

  • The first task simply displays the parameters pushed by AWX, so we can verify them
  • The second sends a query to get the tenant list from the ACI fabric
  • The last displays the result.

This playbook can be written by a developer and pushed to GitLab once it works. It will then be used from AWX.

The inventory file is only used by the developer for local testing; in AWX, the credentials and inventory are configured separately.
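
The playbook itself is not reproduced here; based on the task names in the run below, a minimal sketch could look like the following (the aci_rest module, the fvTenant query path, and the credential variable names, matching the parameters described in the credential section later, are assumptions):

---
- name: Play Query
  hosts: all
  gather_facts: true

  tasks:
    - name: debug - Display Credential used by AWX
      debug:
        msg: "{{ ansible_host }} - {{ ansible_username }} - {{ ansible_password }}"

    - name: Query tenant
      # query the APIC REST API for all fvTenant objects
      aci_rest:
        host: "{{ ansible_host }}"
        username: "{{ ansible_username }}"
        password: "{{ ansible_password }}"
        validate_certs: false
        method: get
        path: /api/node/class/fvTenant.json
      delegate_to: localhost
      register: tenants

    - name: Display Tenant
      debug:
        msg: "Tenant Name: {{ item.fvTenant.attributes.name }}"
      loop: "{{ tenants.imdata }}"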

$ ansible-playbook -i inventory query_tenant.yml
PLAY [Play Query] ****
TASK [Gathering Facts] *
ok: [10.202.0.1]
TASK [debug - Display Credential used by AWX] ****
ok: [10.202.0.1] => {
"msg": "10.202.0.1 - admin - cisco1234"
}
TASK [Query tenant] **
ok: [10.202.0.1]
TASK [Display Tenant]
ok: [10.202.0.1] => (item={u'fvTenant': {u'attributes': {u'dn': u'uni/tn-infra', u'status': u'', u'ownerKey': u'', u'uid': u'0', u'descr': u'', u'extMngdBy': u'', u'annotation': u'', u'lcOwn': u'local', u'monPolDn': u'uni/tn-common/monepg-default', u'modTs': u'2019-11-13T18:44:22.536+00:00', u'ownerTag': u'', u'childAction': u'', u'nameAlias': u'', u'name': u'infra'}}}) => {
"msg": "Tenant Name: infra"
}
ok: [10.202.0.1] => (item={u'fvTenant': {u'attributes': {u'dn': u'uni/tn-mgmt', u'status': u'', u'ownerKey': u'', u'uid': u'0', u'descr': u'test', u'extMngdBy': u'', u'annotation': u'', u'lcOwn': u'local', u'monPolDn': u'uni/tn-common/monepg-default', u'modTs': u'2020-07-09T16:15:03.963+00:00', u'ownerTag': u'', u'childAction': u'', u'nameAlias': u'', u'name': u'mgmt'}}}) => {
"msg": "Tenant Name: mgmt"
}
ok: [10.202.0.1] => (item={u'fvTenant': {u'attributes': {u'dn': u'uni/tn-common', u'status': u'', u'ownerKey': u'', u'uid': u'0', u'descr': u'', u'extMngdBy': u'msc', u'annotation': u'orchestrator:msc', u'lcOwn': u'local', u'monPolDn': u'uni/tn-common/monepg-default', u'modTs': u'2019-11-13T19:45:10.901+00:00', u'ownerTag': u'', u'childAction': u'', u'nameAlias': u'', u'name': u'common'}}}) => {
"msg": "Tenant Name: common"
}
[..]
PLAY RECAP ***
10.202.0.1 : ok=4 changed=0 unreachable=0 failed=0

Configure AWX

Login to AWX

By default, it’s admin / password

Create a credential

Here we will create a new credential. With Cisco ACI, we will use the credential type: Machine.

We will be able to use it with the following parameters in the playbook:

ansible_username
ansible_password

Create a project

Now we create a project. The project points to a GitLab repository. We also selected the option "Update Revision on launch", which synchronizes with GitLab and pulls the latest revision each time a job is launched.

Create an inventory

In this step we will create an inventory, which contains a group APIC with one host: 10.202.0.1 (the ACI controller).
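
For reference, the flat-file equivalent of this AWX inventory would be:

[APIC]
10.202.0.1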

Create a job

Now we need to create a job template that runs the playbook from GitLab with the inventory and credentials defined in AWX.

You need to enter:

  • A name
  • Job type: Run
  • Inventory, where you select the previous inventory
  • Project, where you select the project created earlier
  • Playbook, where you select the playbook from Git. If the list is empty, the project sync probably failed or the playbook has a syntax error.
  • Credentials, where you select your credential

If everything is good, you can save your Template and launch it.

Play template

After clicking the Launch button, you will see the job output window.

How to Optimize Your Cisco Day-0 with POAP and Ansible

In this article, we will see how to save time on Day-0 provisioning of a Cisco Nexus 9K using POAP (PowerOn Auto Provisioning) and Ansible.

Topology Overview

In this infrastructure, we have:

  • one Cisco Nexus 9000v (N9Kv)
  • one Cisco router separating the management Out-Of-Band (OOB) network used by the Nexus (interface mgmt0) from the tools network
  • one Linux server running Ubuntu.

The Ubuntu server will run the following services:

  • a DHCP server to assign IP addresses and return the path to the TFTP server
  • a TFTP server to host and deliver the POAP Python script
  • an SCP and/or HTTP server to host and deliver the Nexus images and configuration files

We will use two different networks where the router will be the default gateway for each one:

  • 10.0.1.0/24 – GW .254, used for the OOB network
  • 10.0.2.0/24 – GW .254, used to host administration tools

The server will have the IP address 10.0.2.1/24 and we will use 10.0.1.100/24 for the Nexus 9Kv.

The network services

DHCP server

For the DHCP server, we will use isc-dhcp-server.

You can install this package with the following command:

apt-get install isc-dhcp-server

Then you need to configure the server. In this topology the router acts as a DHCP relay, which means requests from the N9Kv arrive at the server as unicast, sourced from the router's interface IP (10.0.1.254).

Router configuration:

interface Ethernet0/1
ip address 10.0.1.254 255.255.255.0
ip helper-address 10.0.2.1
end 

For the server configuration, we need to declare at least the subnet the server itself belongs to (10.0.2.0/24). If the subnet the server listens on is not declared, isc-dhcp-server will not start.

You also need to create a range for the N9Kv subnet (10.0.1.0/24). For this range, we will provide:

  • The default gateway (option routers)
  • The TFTP server (option tftp-server-name)
  • The file to fetch from the TFTP server (option bootfile-name)

In this file we will also create one entry for our N9Kv, reserving an IP address based on its serial number. The serial number must be prefixed with \000 (this is how the switch builds its DHCP client identifier, option 61).

Configuration file:

root@ubuntu:/srv/tftp/poap# cat /etc/dhcp/dhcpd.conf
ddns-update-style none;

option domain-name "lab";
default-lease-time 600;
max-lease-time 7200;

authoritative;

log-facility local7;

subnet 10.0.1.0 netmask 255.255.255.0 {
  range 10.0.1.1 10.0.1.180;
  option routers 10.0.1.254;
  option tftp-server-name "10.0.2.1";
  option bootfile-name "/poap/poap.py";
  ping-check = 1;
}

subnet 10.0.2.0 netmask 255.255.255.0 {
}

host N9K-POAP {
  option dhcp-client-identifier "\00090IFLAUVL3T";
  fixed-address 10.0.1.100;
  option tftp-server-name "10.0.2.1";
  option bootfile-name "/poap/poap.py";
}

Finally, you can start your server:

root@ubuntu:/srv/tftp/poap# service isc-dhcp-server start
root@ubuntu:/srv/tftp/poap# service isc-dhcp-server status
* isc-dhcp-server.service - ISC DHCP IPv4 server
   Loaded: loaded (/lib/systemd/system/isc-dhcp-server.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-08-23 21:12:52 EEST; 16h ago
     Docs: man:dhcpd(8)
 Main PID: 14027 (dhcpd)
    Tasks: 1
   Memory: 8.5M
      CPU: 563ms
   CGroup: /system.slice/isc-dhcp-server.service
           `-14027 dhcpd -user dhcpd -group dhcpd -f -4 -pf /run/dhcp-server/dhcpd.pid -cf /etc/dhcp/dhcpd.conf

Aug 24 13:22:29 ubuntu dhcpd[14027]: DHCPREQUEST for 10.0.2.1 from 00:50:00:00:03:00 via ens3: unknown lease 10.0.2.1.
Aug 24 13:22:30 ubuntu dhcpd[14027]: DHCPREQUEST for 10.0.2.1 from 00:50:00:00:03:00 via ens3: unknown lease 10.0.2.1.
Aug 24 13:22:39 ubuntu dhcpd[14027]: DHCPREQUEST for 10.0.2.1 from 00:50:00:00:03:00 via ens3: unknown lease 10.0.2.1.
Aug 24 13:22:40 ubuntu dhcpd[14027]: DHCPREQUEST for 10.0.2.1 from 00:50:00:00:03:00 via ens3: unknown lease 10.0.2.1.
Aug 24 13:22:42 ubuntu dhcpd[14027]: DHCPREQUEST for 10.0.2.1 from 00:50:00:00:03:00 via ens3: unknown lease 10.0.2.1.
Aug 24 13:22:58 ubuntu dhcpd[14027]: DHCPREQUEST for 10.0.2.1 from 00:50:00:00:03:00 via ens3: unknown lease 10.0.2.1.
Aug 24 13:22:59 ubuntu dhcpd[14027]: DHCPREQUEST for 10.0.2.1 from 00:50:00:00:03:00 via ens3: unknown lease 10.0.2.1.
Aug 24 13:23:00 ubuntu dhcpd[14027]: DHCPREQUEST for 10.0.2.1 from 00:50:00:00:03:00 via ens3: unknown lease 10.0.2.1.
Aug 24 13:23:07 ubuntu systemd[1]: Started ISC DHCP IPv4 server.
Aug 24 13:23:07 ubuntu dhcpd[14027]: DHCPREQUEST for 10.0.2.1 from 00:50:00:00:03:00 via ens3: unknown lease 10.0.2.1.

TFTP server

For the TFTP server, we will use atftpd. To install it, enter the following command:

root@ubuntu:/srv/tftp/poap# apt install atftpd

By default the configuration file is located at /etc/default/atftpd. There you can set the directory where your files are located.

USE_INETD=false
OPTIONS="--tftpd-timeout 300 --logfile /var/log/atftpd.log --retry-timeout 5 --mcast-port 1758 --mcast-addr 239.239.239.0-255 --mcast-ttl 1 --maxthread 100 --verbose=7 /srv/tftp"
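
After editing this file, restart the service so the new options take effect (same service-style commands as used for the DHCP server):

service atftpd restart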

In our case, we will create the /srv/tftp directory and a few subdirectories:

root@ubuntu:/srv/tftp/poap# tree /srv
/srv
|-- ftp
|   `-- welcome.msg
`-- tftp
    `-- poap
        |-- conf
        |   |-- conf.90IFLAUVL3T
        |   |-- conf.90IFLAUVL3T.md5
        |-- nxos.9.3.2.bin
        |-- nxos.9.3.2.bin.md5
        |-- poap.py
        `-- poap.py.md5
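
The .md5 companion files for the image and the configuration can be generated with md5sum (a sketch; poap.py reads the checksum from the first field of these files). poap.py.md5 itself is produced as a side effect of the one-liner shown later, which also updates the #md5sum line embedded in the script:

cd /srv/tftp/poap
md5sum nxos.9.3.2.bin > nxos.9.3.2.bin.md5
md5sum conf/conf.90IFLAUVL3T > conf/conf.90IFLAUVL3T.md5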

You can download the poap.py file from the Cisco datacenter GitHub repository:

https://github.com/datacenter/nexus9000/blob/master/nx-os/poap/poap.py

Several versions exist; we will describe the relevant settings below.

SCP server

You also need a server to deliver the Nexus images and the configuration files. Prefer a secure protocol such as SCP or HTTPS.

Here we will simply use openssh-server as the SCP server and create one service account: poap.

In /etc/passwd we can see the poap user, whose home directory is /srv/tftp/:

poap:x:1001:1001::/srv/tftp/:
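
A possible way to create this service account (a sketch; the shell and password choice are up to you):

useradd -d /srv/tftp -s /bin/bash poap
passwd poap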

poap.py

Now that our services are ready, we need to prepare the poap.py script.

In this script, we will provide some information:

  • the target NX-OS version
  • the path where the images are located
  • the path where the configurations are located
  • the credentials for SCP
  • the mode used to obtain the config file (serial_number in our case)

Extract of the poap.py file:

# **** Here are all variables that parametrize this script ****
# These parameters should be updated with the real values used
# in your automation environment

# system and kickstart images, configuration: location on server (src) and target (dst)
n9k_image_version       = "9.3.2" # this must match your code version
image_dir_src           = "/srv/tftp/poap/"  # Sample - /Users/bob/poap
ftp_image_dir_src_root  = image_dir_src
tftp_image_dir_src_root = image_dir_src
n9k_system_image_src    = "nxos.%s.bin" % n9k_image_version
config_file_src         = "/srv/tftp/poap/conf/conf" # Sample - /Users/bob/poap/conf
image_dir_dst           = "bootflash:" # directory where n9k image will be stored
system_image_dst        = n9k_system_image_src
config_file_dst         = "volatile:poap.cfg"
md5sum_ext_src          = "md5"
# Required space on /bootflash (for config and system images)
required_space          = 350000

# copy protocol to download images and config
# options are: scp/http/tftp/ftp/sftp
protocol                = "scp" # protocol to use to download images/config

# Host name and user credentials
username                = "poap" # server account
ftp_username            = "anonymous" # server account
password                = "cisco1234" # password
hostname                = "10.0.2.1" # ip address of ftp/scp/http/sftp server

# vrf info
vrf = "management"
if os.environ.has_key('POAP_VRF'):
    vrf=os.environ['POAP_VRF']

# Timeout info (from biggest to smallest image, should be f(image-size, protocol))
system_timeout          = 2100
config_timeout          = 120
md5sum_timeout          = 120

# POAP can use 3 modes to obtain the config file.
# - 'static' - filename is static
# - 'serial_number' - switch serial number is part of the filename
# - 'location' - CDP neighbor of interface on which DHCPDISCOVER arrived
#                is part of filename
# if serial-number is abc, then filename is $config_file_src.abc
# if cdp neighbor's device_id=abc and port_id=111, then filename is config_file_src.abc.111
# Note: the next line can be overwritten by command-line arg processing later
config_file_type        = "serial_number"

After changing anything in this file, you need to regenerate its MD5 checksum with the one-liner documented inside the script:

f=poap.py ; cat $f | sed '/^#md5sum/d' > $f.md5 ; sed -i "s/^#md5sum=.*/#md5sum=\"$(md5sum $f.md5 | sed 's/ .*//')\"/" $f

You also need to prepare the configuration file. At a minimum it should provide the following (see the sketch after this list):

  • the admin credentials
  • the IP address of the management interface
  • the default gateway for the management VRF.
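
A minimal conf.<serial-number> file could look like this (a sketch: the password is a placeholder, the hostname is illustrative, and the addresses follow our topology):

hostname N9K1
username admin password <your-password> role network-admin
interface mgmt0
  ip address 10.0.1.100/24
  no shutdown
vrf context management
  ip route 0.0.0.0/0 10.0.1.254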

Another recommendation is to add a second account that can be used to push the post-configuration. In my case, I added an ansible account with an SSH key.

If you want to do the same, create a new user dedicated to Ansible, generate an RSA key pair, and put the public key in your configuration file.

Example:

#adduser ansible
#su - ansible
#ssh-keygen

ansible@ubuntu:/srv/tftp/poap# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/ansible/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ansible/.ssh/id_rsa.
Your public key has been saved in /ansible/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:9YHPgARV1M1o0OhO/XLPiRqFBnqP+aIY3AXdWxwZqKQ root@ubuntu

Then retrieve the public key from the following file:

ansible@ubuntu:~/ansible$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPcNuuldpLiBuII9obYQXxRJKyqDoIsJAEWDo8w7Wk0bF8Y1A/Yba4Ld2a61k9NYUy8BbwF7ra3sRM1sd/lzW4KEsFx0lMq5SFXYBQCeYVSWSstRnuRuspfyQzcGYnPziyolBcDKTpMRekZk3cGD7lWSq32uIKEIW4k5UCywxqXP0RsjlGedtRg2in5tDPn4+qaTGpPRqYN/Cicoivm4SaX4iFtPhTyGdingss9aMMahdSKK4G1EixQnAfTcotY0A409013a1xuiMetBq+wXgCC19mepwwvovWm825q5CH8xTu9JxzvfolXHKNKIeUxFoQo55MNRgte7RNTC0EtYx9 ansible@ubuntu

and add it to your Nexus configuration file. You will then be able to connect to the Nexus 9K over SSH without a password.

[..]
username ansible password 5 ! role network-operator
username ansible role network-admin
username ansible sshkey ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPcNuuldpLiBuII9obYQXxRJKyqDoIsJAEWDo8w7Wk0bF8Y1A/Yba4Ld2a61k9NYUy8BbwF7ra3sRM1sd/lzW4KEsFx0lMq5SFXYBQCeYVSWSstRnuRuspfyQzcGYnPziyolBcDKTpMRekZk3cGD7lWSq32uIKEIW4k5UCywxqXP0RsjlGedtRg2in5tDPn4+qaTGpPRqYN/Cicoivm4SaX4iFtPhTyGdingss9aMMahdSKK4G1EixQnAfTcotY0A409013a1xuiMetBq+wXgCC19mepwwvovWm825q5CH8xTu9JxzvfolXHKNKIeUxFoQo55MNRgte7RNTC0EtYx9 ansible@ubuntu
username ansible passphrase lifetime 99999 warntime 14 gracetime 3
[..]

POAP process

At this point your Cisco Nexus device is probably up and looping in the POAP process.

..2020 Aug 24 08:04:39 %$ VDC-1 %$ %CARDCLIENT-2-FPGA_BOOT_GOLDEN: IOFPGA booted from Golden
2020 Aug 24 08:04:39 %$ VDC-1 %$ %CARDCLIENT-2-FPGA_BOOT_STATUS: Unable to retrieve MIFPGA boot status
..System is coming up … Please wait …
…Starting Auto Provisioning …
2020 Aug 24 08:05:13 %$ VDC-1 %$ %VDC_MGR-2-VDC_ONLINE: vdc 1 has come online
Done
Abort Power On Auto Provisioning yes/skip/no yes - continue with normal setup, skip - bypass password and basic configuration, no - continue with Power On Auto Provisioning [no]:
Abort Power On Auto Provisioning yes - continue with normal setup, skip - bypass password and basic configuration, no - continue with Power On Auto Provisioning[no]: 2020 Aug 24 08:05:23 switch %$ VDC-1 %$ %POAP-2-POAP_INITED: [90IFLAUVL3T-50:00:00:01:00:07] - POAP process initialized
2020 Aug 24 08:05:39 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - USB Initializing Success
2020 Aug 24 08:05:39 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - USB disk not detected

Now the switch starts the process. First it requests an IP address, and the server answers with additional parameters, essentially a TFTP server address and a filename.

During the DHCP Discover phase, you can see the serial number inside option 61 (client identifier). This option is used to reserve an IP address.

The server will offer the IP address, etc.

On the DHCP server side, we can see the source is 10.0.1.254, the default gateway of the mgmt subnet, where we configured the DHCP relay.

Aug 24 12:22:54 ubuntu dhcpd[14027]: from the dynamic address pool for 10.0.1.0/24
Aug 24 12:22:54 ubuntu dhcpd[14027]: uid lease 10.0.1.1 for client 50:00:00:01:00:00 is duplicate on 10.0.1.0/24
Aug 24 12:22:54 ubuntu dhcpd[14027]: DHCPREQUEST for 10.0.1.100 (10.0.2.1) from 50:00:00:01:00:00 via 10.0.1.254
Aug 24 12:22:54 ubuntu dhcpd[14027]: DHCPACK on 10.0.1.100 to 50:00:00:01:00:00 via 10.0.1.254

On the Nexus console we can observe the following logs:

2020 Aug 24 08:05:40 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: Recieved DHCP offer from server ip - 10.0.2.1
2020 Aug 24 08:05:48 switch %$ VDC-1 %$ last message repeated 1 time
2020 Aug 24 08:05:48 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - Using DHCP, valid information received over mgmt0 from 10.0.2.1
2020 Aug 24 08:05:48 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - Assigned IP address: 10.0.1.100
2020 Aug 24 08:05:48 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - Netmask: 255.255.255.0
2020 Aug 24 08:05:48 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - DNS Server: 10.0.100.1
2020 Aug 24 08:05:48 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - Default Gateway: 10.0.1.254
2020 Aug 24 08:05:48 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - Script Server: 10.0.2.1
2020 Aug 24 08:05:48 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - Script Name: /poap/poap.py
2020 Aug 24 08:06:00 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - The POAP Script download has started
2020 Aug 24 08:06:00 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - The POAP Script is being downloaded from [copy tftp://10.0.2.1//poap/poap.py bootflash:scripts/script.sh vrf management ]
2020 Aug 24 08:06:05 switch %$ VDC-1 %$ %USER-1-SYSTEM_MSG: SWINIT failed. devid:241 inst:0 - t2usd
2020 Aug 24 08:06:10 switch %$ VDC-1 %$ %POAP-2-POAP_SCRIPT_DOWNLOADED: [90IFLAUVL3T-50:00:00:01:00:07] - Successfully downloaded POAP script file
2020 Aug 24 08:06:10 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - Script file size 20390, MD5 checksum 89f5b64624dcd4c2350dbece6aaf3bab
2020 Aug 24 08:06:10 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: [90IFLAUVL3T-50:00:00:01:00:07] - MD5 checksum received from the script file is 89f5b64624dcd4c2350dbece6aaf3bab
2020 Aug 24 08:06:10 switch %$ VDC-1 %$ %POAP-2-POAP_SCRIPT_STARTED_MD5_VALIDATED: [90IFLAUVL3T-50:00:00:01:00:07] - POAP script execution started(MD5 validated)
2020 Aug 24 08:07:25 switch %$ VDC-1 %$ %ASCII-CFG-2-CONF_CONTROL: System ready

If everything goes well, the switch obtains an IP address and downloads the poap.py script over TFTP. The script then checks whether the switch runs the target image and downloads the configuration over SCP.

Finally, the device reloads.

2020 Aug 24 09:23:20 switch %$ VDC-1 %$ %USER-1-SYSTEM_MSG: SWINIT failed. devid:241 inst:0  - t2usd
2020 Aug 24 09:24:53 switch %$ VDC-1 %$ %ASCII-CFG-2-CONF_CONTROL: System ready
2020 Aug 24 09:25:22 switch %$ VDC-1 %$ %VMAN-2-ACTIVATION_STATE: Successfully activated virtual service 'guestshell+'
2020 Aug 24 09:25:22 switch %$ VDC-1 %$ %VMAN-2-GUESTSHELL_ENABLED: The guest shell has been enabled. The command 'guestshell' may be used to access it, 'guestshell destroy' to remove it.
2020 Aug 24 09:27:35 switch %$ VDC-1 %$ %POAP-2-POAP_SCRIPT_EXEC_SUCCESS: [90IFLAUVL3T-50:00:00:01:00:07] - POAP script execution success
2020 Aug 24 09:27:41 switch %$ VDC-1 %$ %POAP-2-POAP_RELOAD_DEVICE: [90IFLAUVL3T-50:00:00:01:00:07] - Reload device
2020 Aug 24 09:27:51 switch %$ VDC-1 %$ %VMAN-2-ACTIVATION_STATE: Successfully deactivated virtual service 'guestshell+'
2020 Aug 24 09:27:53 switch %$ VDC-1 %$ %PLATFORM-2-PFM_SYSTEM_RESET: Manual system restart from Command Line Interface
[  731.127762] sysrq: SysRq : Resetting
Sysconf checksum failed. Using default values
WARNING:  No BIOS Info found
Sysconf checksum failed. Using default values
Sysconf checksum failed. Using default values
Sysconf checksum failed. Using default values
ATE0Q1&D2&C1S0=1
Standalone chassis
check_bootmode: grub2pxe: grub failed, launch ipxe
Trying to load ipxe
Loading Application:
/Vendor(429bdb26-48a6-47bd-664c-801204061400)/UnknownMedia(6)/EndEntire
cannot load imageFailed to launch ipxe
Came back to grub, now load efi shell
Trying to load efishell
Loading Application:
/Vendor(429bdb26-48a6-47bd-664c-801204061400)/UnknownMedia(6)/EndEntire
cannot load imageFailed to launch shell
Trying to read config file /boot/grub/menu.lst.local from (hd0,4)
 Filesystem type is ext2fs, partition type 0x83

Booting bootflash:/nxos.9.3.2.bin ...
Booting bootflash:/nxos.9.3.2.bin
Trying diskboot
 Filesystem type is ext2fs, partition type 0x83
Image valid
[..]
Installing local RPMS
Patch Repository Setup completed successfully
Bootstrapping via POAP overriding existing startup-config
Creating /dev/mcelog
Starting mcelog daemon
INIT: Entering runlevel: 3
Running S93thirdparty-script...

Populating conf files for hybrid sysmgr ...
Starting hybrid sysmgr ...
done
Netbroker support IS present in the kernel.
done
Executing Prune clis.
Aug 24 09:31:14 %FW_APP-2-FIRMWARE_IMAGE_LOAD_SUCCESS No Firmware needed for Non SR card.
2020 Aug 24 09:31:28  %$ VDC-1 %$  %USER-2-SYSTEM_MSG: <<%USBHSD-2-MOUNT>> logflash: online  - usbhsd
2020 Aug 24 09:31:35  %$ VDC-1 %$  %DAEMON-2-SYSTEM_MSG: <<%ASCII-CFG-2-CONF_CONTROL>> Poap replay /bootflash/poap_replay01.cfg - ascii-cfg[31425]
2020 Aug 24 09:31:53  %$ VDC-1 %$ netstack: Registration with cli server complete
System is coming up ... Please wait ...
....System is coming up ... Please wait ...
2020 Aug 24 09:32:46  %$ VDC-1 %$ %USER-2-SYSTEM_MSG: ssnmgr_app_init called on ssnmgr up - aclmgr
....2020 Aug 24 09:33:06  %$ VDC-1 %$ %USER-0-SYSTEM_MSG: end of default policer - copp
2020 Aug 24 09:33:06  %$ VDC-1 %$ %COPP-2-COPP_NO_POLICY: Control-plane is unprotected.
System is coming up ... Please wait ...
2020 Aug 24 09:33:15  %$ VDC-1 %$ %CARDCLIENT-2-FPGA_BOOT_GOLDEN: IOFPGA booted from Golden
2020 Aug 24 09:33:15  %$ VDC-1 %$ %CARDCLIENT-2-FPGA_BOOT_STATUS: Unable to retrieve MIFPGA boot status
....System is coming up ... Please wait ...
.2020 Aug 24 09:33:47  %$ VDC-1 %$ %ASCII-CFG-2-CONFIG_REPLAY_STATUS: Bootstrap Replay Started.
.2020 Aug 24 09:33:51  %$ VDC-1 %$ %VDC_MGR-2-VDC_ONLINE: vdc 1 has come online
Waiting for box online to replay poap config
2020 Aug 24 09:34:09 switch %$ VDC-1 %$ %ASCII-CFG-2-CONFIG_REPLAY_STATUS: Bootstrap Replay Done.
2020 Aug 24 09:34:31 switch %$ VDC-1 %$ %USER-1-SYSTEM_MSG: SWINIT failed. devid:241 inst:0  - t2usd
2020 Aug 24 09:35:46 switch %$ VDC-1 %$ %ASCII-CFG-2-CONFIG_REPLAY_STATUS: Ascii Replay Started.
2020 Aug 24 09:36:19 switch %$ VDC-1 %$ %ASCII-CFG-2-CONFIG_REPLAY_STATUS: Ascii Replay Done.
2020 Aug 24 09:36:21 switch %$ VDC-1 %$ %ASCII-CFG-2-CONF_CONTROL: System ready
[########################################] 100%
2020 Aug 24 09:36:52 switch %$ VDC-1 %$ %VMAN-2-ACTIVATION_STATE: Successfully activated virtual service 'guestshell+'
2020 Aug 24 09:36:52 switch %$ VDC-1 %$ %VMAN-2-GUESTSHELL_ENABLED: The guest shell has been enabled. The command 'guestshell' may be used to access it, 'guestshell destroy' to remove it.
Copy complete, now saving to disk (please wait)...
Copy complete.
Auto provisioning complete



User Access Verification
switch login:

ANSIBLE

Your switch is up with the target image and your configuration. Now you can continue the setup with Ansible.

For Ansible, we installed the latest version with python-pip:

pip install ansible
root@ubuntu:/srv/tftp/poap# pip list
[..]

ansible 2.9.12
cffi 1.14.2
cryptography 3.0
ecdsa 0.13
enum34 1.1.10
httplib2 0.9.1
ipaddress 1.0.23
Jinja2 2.8
MarkupSafe 0.23
netaddr 0.7.18
paramiko 1.16.0
pip 20.2.2
pycparser 2.20
pycrypto 2.6.1
PyYAML 3.11
setuptools 20.7.0
six 1.10.0
wheel 0.29.0

For this lab, we have two files:

  • inventory, which contains the variables for your switch
  • play1.yml, which is your simple playbook
ansible@ubuntu:~/ansible$ tree
.
|-- inventory
`-- play1.yml
0 directories, 2 files
ansible@ubuntu:~/ansible$ cat inventory
[N9K]
N9K1 ansible_host=10.0.1.100  ansible_port=22

[N9K:vars]
ansible_user=ansible
ansible_connection=network_cli
ansible_network_os=nxos
ansible_python_interpreter="/usr/bin/env python"
ansible@ubuntu:~/ansible$ cat play1.yml
---
- name: Setup Nexus Devices

  hosts: all
  connection: local
  gather_facts: False

  tasks:
    - name: configure hostname
      nxos_config:
        lines: hostname {{ inventory_hostname }}
        save_when: modified

This playbook sets the hostname, replacing the {{ inventory_hostname }} variable with the value from the inventory: N9K1.

We run the playbook (ansible-playbook -i inventory play1.yml) and the task reports one changed.

The hostname has been changed, as the login prompt confirms:

switch login:
User Access Verification
switch login:
User Access Verification
N9K1 login: 2020 Aug 24 09:48:06 N9K1 %$ VDC-1 %$ %COPP-2-COPP_NO_POLICY: Control-plane is unprotected.

Troubleshooting

If you have issues during POAP, the best option is probably to skip the process and check the log files created on bootflash.

An example where the configuration file name is wrong (note the doubled dot in conf..90IFLAUVL3T):

root@ubuntu:/srv/tftp/poap# cat conf/poap.log.7_26_15
INFO: Selected config filename (serial_number) : /srv/tftp/poap/conf/conf..90IFLAUVL3T
INFO: free space is 2523696 kB
CLI : terminal dont-ask ; terminal password cisco1234 ; copy scp://poap@10.0.2.1/srv/tftp/poap//nxos.9.3.2.bin.md5 volatile:nxos.9.3.2.bin.md5.poap_md5 vrf management
CLI : show file volatile:nxos.9.3.2.bin.md5.poap_md5 | grep -v '^#' | head lines 1 | sed 's/ .*$//'
INFO: md5sum 76b01ff1d7243ce035c25becd2634d27 (.md5 file)
CLI : show file bootflash:/nxos.9.3.2.bin md5sum
INFO: md5sum 76b01ff1d7243ce035c25becd2634d27 (recalculated)
INFO: Same source and destination images
INFO: Verification passed. (system : 11/4/2019)
INFO: Verification passed.  (system : 11/4/2019)
CLI : terminal dont-ask ; terminal password cisco1234 ; copy scp://poap@10.0.2.1/srv/tftp/poap/conf/conf..90IFLAUVL3T volatile:poap.cfg vrf management
WARN: Copy Failed: "\r\n\nError: no such file
[..]
ERR : aborting
INFO: cleaning up

The following shows when it works properly:

root@ubuntu:/srv/tftp/poap# cat poap.log.8_6_17
INFO: Selected config filename (serial_number) : /srv/tftp/poap/conf/conf.90IFLAUVL3T
INFO: free space is 2523664 kB
CLI : terminal dont-ask ; terminal password cisco1234 ; copy scp://poap@10.0.2.1/srv/tftp/poap//nxos.9.3.2.bin.md5 volatile:nxos.9.3.2.bin.md5.poap_md5 vrf management
CLI : show file volatile:nxos.9.3.2.bin.md5.poap_md5 | grep -v '^#' | head lines 1 | sed 's/ .*$//'
INFO: md5sum 76b01ff1d7243ce035c25becd2634d27 (.md5 file)
CLI : show file bootflash:/nxos.9.3.2.bin md5sum
INFO: md5sum 76b01ff1d7243ce035c25becd2634d27 (recalculated)
INFO: Same source and destination images
INFO: Verification passed. (system : 11/4/2019)
INFO: Verification passed.  (system : 11/4/2019)
CLI : terminal dont-ask ; terminal password cisco1234 ; copy scp://poap@10.0.2.1/srv/tftp/poap/conf/conf.90IFLAUVL3T volatile:poap.cfg vrf management
INFO: Completed Copy of Config File
CLI : terminal dont-ask ; terminal password cisco1234 ; copy scp://poap@10.0.2.1/srv/tftp/poap/conf/conf.90IFLAUVL3T.md5 volatile:conf.90IFLAUVL3T.md5.poap_md5 vrf management
CLI : show file volatile:conf.90IFLAUVL3T.md5.poap_md5 | grep -v '^#' | head lines 1 | sed 's/ .*$//'
INFO: md5sum 97a6fd0ffad10c1986a1c89b0e433ae8 (.md5 file)
CLI : show file volatile:poap.cfg md5sum
INFO: md5sum 97a6fd0ffad10c1986a1c89b0e433ae8 (recalculated)
CLI : show system internal platform internal info | grep box_online | sed 's/[^0-9]*//g'
INFO: Setting the boot variables
CLI : config terminal ; boot nxos bootflash:/nxos.9.3.2.bin
CLI : copy running-config startup-config
CLI : copy volatile:poap.cfg scheduled-config
INFO: Configuration successful

Start a new DevOps journey with Docker and automate ACI

If you want to test the automation with Cisco ACI, you can use the following container.

docker pull zednetwork/aci-dev:latest

root@docker1:~/aci-dev# docker images zednetwork/aci-dev
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
zednetwork/aci-dev   latest              b1c09a7c66f0        About an hour ago   1.31GB

I preinstalled Ansible 2.9.11, the ACI SDK (COBRA) in version 4.2(4) and ARYA.

You can run the container directly with:

docker run -it <Image ID> bash

or start it with the docker-compose command:

docker-compose up -d 

Docker-compose file:

---
version: "3"

services:
  aci-dev:
    image: zednetwork/aci-dev:latest
    stdin_open: true

and connect to the container:

root@docker1:~/aci-dev# docker-compose ps
      Name          Command   State   Ports
-------------------------------------------
aci-dev_aci-dev_1   bash      Up

root@docker1:~/aci-dev# docker exec -it  aci-dev_aci-dev_1 bash

Package versions

root@3a93719ed29b:~# ansible --version
/usr/local/lib/python2.7/dist-packages/cryptography/__init__.py:39: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
  CryptographyDeprecationWarning,
ansible 2.9.11
  config file = None
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 2.7.16 (default, Oct 10 2019, 22:02:15) [GCC 8.3.0]
root@3a93719ed29b:~# pip list
Package      Version
------------ ---------
acicobra     4.2-4i
acimodel     4.2-4i
ansible      2.9.11
arya         1.1.5
certifi      2020.6.20
cffi         1.14.1
chardet      3.0.4
cryptography 3.0
enum34       1.1.10
et-xmlfile   1.0.1
future       0.18.2
idna         2.10
ipaddress    1.0.23
jdcal        1.4.1
Jinja2       2.11.2
MarkupSafe   1.1.1
openpyxl     2.6.4
pip          18.1
ply          3.11
prettytable  0.7.2
pyaml        20.4.0
pycparser    2.20
PyYAML       5.3.1
requests     2.24.0
setuptools   44.1.1
six          1.15.0
urllib3      1.25.10

Then you can copy and paste a script and run it.

Example with the script (https://zed-network.fr/?p=511):

root@40a189762958:~/cobra# pwd
/root/cobra

root@40a189762958:~/cobra# ls
getEP.py

root@40a189762958:~/cobra# python getEP.py
MAC: FA:16:3E:9A:35:B7 | IP: 10.0.0.1 | Encaps: vlan-489
MAC: FA:16:3E:BB:B4:BE | IP: 42.0.0.1 | Encaps: vlan-488
[..]

How to automate Cisco UCS with Ansible playbook

To use Ansible with Cisco UCS Manager, you need to install the ucsmsdk SDK on your Ansible server.

root@c86d023821ff:~# pip install ucsmsdk
Collecting ucsmsdk
Downloading https://files.pythonhosted.org/packages/f7/f9/280d7cea9e37ed183d694f5d2e8505b5eacfdbad709eba68fc32a5ed2bcf/ucsmsdk-0.9.10.tar.gz (4.2MB)
100% |################################| 4.2MB 306kB/s
Collecting pyparsing (from ucsmsdk)
Downloading https://files.pythonhosted.org/packages/8a/bb/488841f56197b13700afd5658fc279a2025a39e22449b7cf29864669b15d/pyparsing-2.4.7-py2.py3-none-any.whl (67kB)
100% |################################| 71kB 5.2MB/s
Requirement already satisfied: setuptools in /usr/lib/python2.7/dist-packages (from ucsmsdk) (40.8.0)
Requirement already satisfied: six in /usr/lib/python2.7/dist-packages (from ucsmsdk) (1.12.0)
Building wheels for collected packages: ucsmsdk
Running setup.py bdist_wheel for ucsmsdk … done
Stored in directory: /root/.cache/pip/wheels/ac/a2/a9/5c39875aca61b780d8d94690f22b54237452d9fc290756781f
Successfully built ucsmsdk
Installing collected packages: pyparsing, ucsmsdk
Successfully installed pyparsing-2.4.7 ucsmsdk-0.9.10

root@c86d023821ff:~# pip list | grep ucs
ucsmsdk 0.9.10

Then you can create a simple inventory file with the UCS Manager IP.

Example:

root@c86d023821ff:~# cat inv_ucs
[ucs]
10.0.100.162

Then you can create a simple playbook to add one VLAN:

root@c86d023821ff:~# cat ucs_vlan.yml
---
- name: ENSURE APPLICATION CONFIGURATION EXISTS
  hosts: ucs
  connection: local
  gather_facts: False

  tasks:

  - name: Configure VLAN
    ucs_vlans:
      hostname: 10.0.100.162
      username: ucspe
      password: ucspe
      name: TheVlan11
      id: '11'
      native: 'no'

And finally you can run it:

root@c86d023821ff:~# ansible-playbook ucs_vlan.yml -i inv_ucs
PLAY [ENSURE APPLICATION CONFIGURATION EXISTS] *
TASK [Configure VLAN] **
[WARNING]: Platform linux on host 10.0.100.162 is using the discovered Python interpreter at /usr/bin/python, but
future installation of another Python interpreter could change this. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
changed: [10.0.100.162]
PLAY RECAP *
10.0.100.162 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Now VLAN ID 11 is available on the UCS Fabric Interconnect.

You can use the ucs_vlan_find Ansible module to get all VLANs:

root@c86d023821ff:~# cat ucs_vlan.yml
---
- name: ENSURE APPLICATION CONFIGURATION EXISTS
  hosts: ucs
  connection: local
  gather_facts: False

  tasks:

  - name: Get All Vlans
    ucs_vlan_find:
      hostname: 10.0.100.162
      username: ucspe
      password: ucspe
      pattern: '.'
    register: vlans
    tags:
    - showvlan

  - name: Display vlans
    debug:
      var: vlans
    tags:
    - showvlan

Result:

root@c86d023821ff:~# ansible-playbook ucs_vlan.yml -i inv_ucs --tags showvlan
PLAY [ENSURE APPLICATION CONFIGURATION EXISTS]
TASK [Get All Vlans]
[WARNING]: Platform linux on host 10.0.100.162 is using the discovered Python interpreter at /usr/bin/python, but future installation of
another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for
more information.
ok: [10.0.100.162]
TASK [Display vlans]
ok: [10.0.100.162] => {
"vlans": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"failed": false,
"vlan_list": [
{
"id": "1",
"name": "default"
},
{
"id": "5",
"name": "human-resource"
},
{
"id": "1",
"name": "default"
},
{
"id": "3",
"name": "finance"
},
{
"id": "5",
"name": "human-resource"
},
{
"id": "1",
"name": "default"
},
{
"id": "3",
"name": "finance"
},
{
"id": "42",
"name": "NewVlan42"
},
{
"id": "10",
"name": "vlan10"
},
{
"id": "11",
"name": "TheVlan11"
}
],
"warnings": [
"Platform linux on host 10.0.100.162 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information."
]
}
}
PLAY RECAP **
10.0.100.162 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

How to automate your Cisco legacy network with Ansible

In the previous article, we introduced Ansible with NX-OS devices. We can also use Ansible with Catalyst, NX-OS, NX-OS in ACI mode, etc.

Ansible can be very useful to search for something or to back up your configurations.

Example to save your configuration:

---

  - name: Configure IOS
    hosts: routers
    connection: local
    gather_facts: False
    any_errors_fatal: true

    tasks:

      - name: show running
        ios_command:
          commands:
            - 'show run'
        register: running_config
        tags:
        - backup

      - name: save output
        copy: content="{{running_config.stdout[0]}}" dest="./output/{{inventory_hostname}}-show_run.txt"
        tags:
        - backup

In the last part, I save the output with register: running_config, then use the copy module to create a new file with the content saved in running_config.

You first need to create the directory, named output here. Ansible will then create one file per device, using the device name as prefix and appending -show_run.txt:

copy: content="{{running_config.stdout[0]}}" dest="./output/{{inventory_hostname}}-show_run.txt"

root@09cf326cc275:/ansible/NXOS#  tree output/
output/
|-- R7-show_run.txt
`-- R7.txt

Inside the file you will have your running configuration.

Playbooks are not mandatory; you can also use ad hoc commands to search for something on your devices.

Example with show ip arp or show version:

ansible R7 -i inventory-home -m ios_command -a "commands='show ip arp'"                            
 R7 | SUCCESS => {
     "changed": false, 
     "stdout": [
         "Protocol  Address          Age (min)  Hardware Addr   Type   Interface\nInternet  10.0.100.1              0   000c.2935.812f  ARPA   Ethernet0/0\nInternet  10.0.100.67             -   aabb.cc00.7000  ARPA   Ethernet0/0\nInternet  10.0.100.150            1   a483.e7bf.9979  ARPA   Ethernet0/0"
     ], 
     "stdout_lines": [
         [
             "Protocol  Address          Age (min)  Hardware Addr   Type   Interface", 
             "Internet  10.0.100.1              0   000c.2935.812f  ARPA   Ethernet0/0", 
             "Internet  10.0.100.67             -   aabb.cc00.7000  ARPA   Ethernet0/0", 
             "Internet  10.0.100.150            1   a483.e7bf.9979  ARPA   Ethernet0/0"
         ]
     ]
 }

root@09cf326cc275:/ansible/NXOS# ansible R7 -i inventory-home -m ios_command -a "commands='show version'"

So far we have only used show commands, but you can also configure your Catalyst devices. The following task enables OSPF on all interfaces. I added a tag named OSPF to be able to run only the OSPF tasks within my playbook.

---

  - name: Configure IOS
    hosts: routers
    connection: local
    gather_facts: False
    any_errors_fatal: true

    tasks:
      - name: Enable ospf
        ios_config:
          lines:
            - network 0.0.0.0 255.255.255.255 ar 0
          parents: router ospf 1
        register: ospf
        tags:
        - OSPF

      - debug: var=ospf
        tags:
        - OSPF
root@09cf326cc275:/ansible/NXOS# ansible-playbook -i inventory-home playbook-ios.yaml --tags OSPF
 PLAY [Configure IOS] *
 TASK [Enable ospf] ***
 changed: [R7]
 TASK [debug] *
 ok: [R7] => {
     "ospf": {
         "banners": {}, 
         "changed": true, 
         "commands": [
             "router ospf 1", 
             "network 0.0.0.0 255.255.255.255 ar 0"
         ], 
         "failed": false, 
         "updates": [
             "router ospf 1", 
             "network 0.0.0.0 255.255.255.255 ar 0"
         ]
     }
 }
 PLAY RECAP ***
 R7                         : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Another example: searching for an endpoint on your network with an ad hoc command. Here I want to find a server with the MAC address ecbd.1d44.b6c1.

root@09cf326cc275:/ansible/NXOS# ansible SW1 -i inventory-home -m ios_command -a "commands='show mac address'"  | egrep -v "\n"       
SW1 | SUCCESS => {
    "stdout": [
    ], 
        [
            "Mac Address Table", 
            "-------------------------------------------", 
            "", 
            "----    -----------       --------    -----", 
            "   1    0050.2935.812f    DYNAMIC     Gi0/0", 
            "   1    0050.8824.7718    DYNAMIC     Gi0/0", 
            "   1    0050.bdf0.b6ad    DYNAMIC     Gi0/0", 
            "   1    0050.1878.2797    DYNAMIC     Gi0/0", 
            "   1    0050.9110.af2c    DYNAMIC     Gi0/0", 
            "   1    0050.e7bf.9979    DYNAMIC     Gi0/0", 
            "   1    0050.cc00.2011    DYNAMIC     Gi0/0", 
            "   1    0050.cc00.7000    DYNAMIC     Gi0/0", 
            "   1    0050.eba6.c667    DYNAMIC     Gi0/0", 
            "   1    0050.8b57.d81b    DYNAMIC     Gi0/0", 
            "   1    0050.817a.ce2e    DYNAMIC     Gi0/0", 
            "   1    ecbd.1d44.b6c1    DYNAMIC     Gi0/0", 
            "   1    0050.754a.a8ee    DYNAMIC     Gi0/0", 
        ]
    ]
}

root@09cf326cc275:/ansible/NXOS# ansible catalyst -i inventory-home -m ios_command -a "commands='show mac address'"  | egrep -v "\n" | grep "SUCCESS \|ecbd.1d44.b6c1"
SW1 | SUCCESS => {
            "   1    ecbd.1d44.b6c1    DYNAMIC     Gi0/0",

Here I found the server on switch SW1, port Gi0/0, which is an uplink port. If I add other switches to my group named catalyst, I will be able to find on which switches this MAC address is learned.

In your inventory file, use groups to organize your network properly. This makes it easy to run a command against only one part of your network, or against all of it.

In the following example we have one data center named DC1 with two rooms, each containing two switches. You can then run a command against only the switches in Room1, only Room2, or everything inside DC1 (see the example after the inventory).

[DC1:children]
Room1
Room2

[Room1]
DC1-SW1
DC1-SW2

[Room2]
DC1-SW10
DC1-SW11
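
For example, to target only the switches in Room1 with the same ad hoc pattern as earlier (the inventory file name is illustrative):

ansible Room1 -i inventory -m ios_command -a "commands='show version'"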