How to Automate Cisco NX-OS Infrastructure with Ansible

You manage a lot of network devices, but you are alone or short on time. Ansible can help you roll out a change across your whole network very quickly, based on your own templates. In this article we will use Cisco Nexus 9K switches.

You have a new DNS server, a new syslog server, and so on, and you need to modify hundreds of switches. No worries: with Ansible it can be very simple.

First you should create at least two files. The first one is your inventory and contains your switches. The second is your playbook.

The first thing is to create a service account for Ansible on your switches. This account can be centralized or local. In the following I’ll provide my password in cleartext. Of course, this is not recommended and you should prefer SSH keys.
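For reference, a local account on NX-OS can be created in a couple of lines; a minimal sketch (the password and SSH key are placeholders, not the values used in this lab):

```
configure terminal
 username ansible password Str0ngP@ssw0rd role network-admin
 ! or, preferably, key-based authentication:
 username ansible sshkey ssh-rsa AAAAB3... ansible@controller
```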

On my virtual Nexus 9K, I only configured my account and my management IP address.

My topology contains:

  • Nexus-1: IP 10.0.100.99, name: AGR1
  • Nexus-2: IP 10.0.100.100, name: ACC1
  • Nexus-3: IP 10.0.100.101, name: ACC2
switch(config-if)# sh run 

!Command: show running-config
!Running configuration last done at: Sat Mar 21 18:28:03 2020
!Time: Sat Mar 21 18:29:45 2020

version 9.3(2) Bios:version  
[..]
username ansible password 5 $5$.FhD0kmO$4PJV/HKJN5ul9aK7160ii.1WQ3s9pjh2QCRL7x7lEU/  role network-admin
username ansible passphrase  lifetime 99999 warntime 14 gracetime 3
ip domain-lookup

[..]
interface mgmt0
  vrf member management
  ip address 10.0.100.100/24
line console
line vty

The inventory file is the following. Ansible supports two formats, YAML and INI; this one uses the INI format. The file contains a group named N9K with three switches.

[N9K]
AGR1 ansible_host=10.0.100.99  ansible_port=22
ACC1  ansible_host=10.0.100.100 ansible_port=22
ACC2  ansible_host=10.0.100.101 ansible_port=22

[N9K:vars]
ansible_user=ansible
ansible_password=@ns1b!E.
ansible_connection=network_cli
ansible_network_os=nxos
ansible_python_interpreter="/usr/bin/env python"
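If you prefer YAML, the same inventory could be written like this (a sketch of the equivalent file; only the format changes):

```yaml
all:
  children:
    N9K:
      hosts:
        AGR1:
          ansible_host: 10.0.100.99
          ansible_port: 22
        ACC1:
          ansible_host: 10.0.100.100
          ansible_port: 22
        ACC2:
          ansible_host: 10.0.100.101
          ansible_port: 22
      vars:
        ansible_user: ansible
        ansible_password: "@ns1b!E."
        ansible_connection: network_cli
        ansible_network_os: nxos
```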

The playbook uses the YAML format. This first playbook is very simple and contains one task to configure the switch hostname.

---
- name: Setup Nexus Devices

  hosts: all
  connection: local
  gather_facts: False


  tasks:

    - name: configure hostname
      nxos_config:
        lines: hostname {{ inventory_hostname }}
        save_when: modified

Now I’ll verify my playbook before applying the changes. This command uses the option -i to specify which file should be used as the inventory, and --check to simulate the changes.

root@09cf326cc275:/ansible/NXOS# ansible-playbook -i inventory-home playbook-home.yaml --check

PLAY [Setup Nexus Devices] ***********************************************************************************************************************

TASK [configure hostname] ************************************************************************************************************************
[WARNING]: Skipping command `copy running-config startup-config` due to check_mode.  Configuration not copied to non-volatile storage
changed: [ACC1]
changed: [AGR1]
changed: [ACC2]

PLAY RECAP ***************************************************************************************************************************************
ACC1                       : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ACC2                       : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
AGR1                       : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 

Now I’ll do the same without the option --check, and my Nexus devices should be configured. You can see the check-mode warning about copy running-config is gone.

root@09cf326cc275:/ansible/NXOS# ansible-playbook -i inventory-home playbook-home.yaml        

PLAY [Setup Nexus Devices] ***********************************************************************************************************************

TASK [configure hostname] ************************************************************************************************************************

changed: [ACC1]
changed: [AGR1]
changed: [ACC2]

PLAY RECAP ***************************************************************************************************************************************
ACC1                       : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ACC2                       : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
AGR1                       : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Fantastic, my Nexus switches have been configured! With the command show accounting log, you can verify the commands injected by Ansible. In my playbook, I added the line save_when: modified to save the configuration after the changes.

AGR1# show accounting log | last 10
Sat Mar 21 18:45:42 2020:type=stop:id=10.0.100.150@pts/2:user=ansible:cmd=shell terminated because the ssh session closed
Sat Mar 21 18:49:11 2020:type=start:id=10.0.100.150@pts/2:user=ansible:cmd=
Sat Mar 21 18:49:12 2020:type=update:id=10.0.100.150@pts/2:user=ansible:cmd=terminal length 0 (SUCCESS)
Sat Mar 21 18:49:12 2020:type=update:id=10.0.100.150@pts/2:user=ansible:cmd=terminal width 511 (SUCCESS)
Sat Mar 21 18:49:20 2020:type=update:id=10.0.100.150@pts/2:user=ansible:cmd=configure terminal ; hostname AGR1 (SUCCESS)
Sat Mar 21 18:49:26 2020:type=update:id=10.0.100.150@pts/2:user=ansible:cmd=Performing configuration copy.
Sat Mar 21 18:49:36 2020:type=start:id=vsh.bin.13650:user=admin:cmd=
Sat Mar 21 18:49:52 2020:type=update:id=10.0.100.150@pts/2:user=ansible:cmd=copy running-config startup-config (SUCCESS)
Sat Mar 21 18:49:53 2020:type=stop:id=10.0.100.150@pts/2:user=ansible:cmd=shell terminated because the ssh session closed
Sat Mar 21 18:52:35 2020:type=update:id=console0:user=admin:cmd=terminal width 511 (SUCCESS)

Now you can imagine the next step. For example, you can add your syslog server.

    - name: configure syslog server
      nxos_config:
        lines:
          - logging server 10.0.100.42 4 use-vrf management facility local7
          - logging timestamp milliseconds
        save_when: modified
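To verify the result from Ansible itself rather than on the console, you could add a read-only task; a sketch using the nxos_command module (the task names are my own):

```yaml
    - name: collect syslog configuration
      nxos_command:
        commands:
          - show logging server
      register: logging_output

    - name: display syslog configuration
      debug:
        var: logging_output.stdout_lines
```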

Before the change:

AGR1(config)# logging timestamp milliseconds ^C
AGR1(config)# sh logging 

Logging console:                enabled (Severity: critical)
Logging monitor:                enabled (Severity: notifications)
Logging linecard:               enabled (Severity: notifications)
Logging timestamp:              Seconds
Logging source-interface :      disabled
Logging rate-limit:             enabled
Logging server:                 disabled
Logging origin_id :             disabled
Logging RFC :                   disabled
Logging logflash:               enabled (Severity: notifications)
Logging logfile:                enabled
        Name - messages: Severity - notifications Size - 4194304

[..]

After the change:

AGR1(config)# 2020 Mar 21 18:58:48 AGR1 %$ VDC-1 %$  %SYSLOG-2-SYSTEM_MSG: Attempt to configure logging server with: hostname/IP 10.0.100.42,severity 4,port 514,facility local7 - syslogd
AGR1(config)# sh logging 

Logging console:                enabled (Severity: critical)
Logging monitor:                enabled (Severity: notifications)
Logging linecard:               enabled (Severity: notifications)
Logging timestamp:              Milliseconds
Logging source-interface :      disabled
Logging rate-limit:             enabled
Logging server:                 enabled
{10.0.100.42}
        This server is temporarily unreachable
        server severity:        warnings
        server facility:        local7
        server VRF:             management
        server port:            514
Logging origin_id :             disabled
Logging RFC :                   disabled
Logging logflash:               enabled (Severity: notifications)
Logging logfile:                enabled
        Name - messages: Severity - notifications Size - 4194304
[..]
root@09cf326cc275:/ansible/NXOS# ansible-playbook -i inventory-home playbook-home.yaml

PLAY [Setup Nexus Devices] ***********************************************************************************************************************

TASK [configure hostname] ************************************************************************************************************************

ok: [ACC1]
ok: [AGR1]
ok: [ACC2]

TASK [configure syslog server] *******************************************************************************************************************
changed: [ACC1]
changed: [ACC2]
changed: [AGR1]

PLAY RECAP ***************************************************************************************************************************************
ACC1                       : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ACC2                       : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
AGR1                       : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

It can also be useful to manage your access lists. Imagine you install a new monitoring server and you need to update one entry. This time we will use another module, named nxos_acl.

    - name: configure SNMP-ACCESS-LIST
      nxos_acl:
        name: ACL_SNMP-ReadOnly
        seq: "10"
        action: permit
        proto: udp
        src: 10.0.100.42/32
        dest: any
        state: present

Now we have the ACL configured on all switches. When a dedicated module exists, prefer it over the generic nxos_config.

root@09cf326cc275:/ansible/NXOS# ansible-playbook -i inventory-home playbook-home.yaml

PLAY [Setup Nexus Devices] ***********************************************************************************************************************

TASK [configure hostname] ************************************************************************************************************************

ok: [ACC1]
ok: [AGR1]
ok: [ACC2]

TASK [configure syslog server] *******************************************************************************************************************
changed: [ACC1]
changed: [ACC2]
changed: [AGR1]

TASK [configure SNMP-ACCESS-LIST] ****************************************************************************************************************
changed: [ACC1]
changed: [ACC2]
changed: [AGR1]

PLAY RECAP ***************************************************************************************************************************************
ACC1                       : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ACC2                       : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
AGR1                       : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 
AGR1(config)# sh ip access-lists ACL_SNMP-ReadOnly

IP access list ACL_SNMP-ReadOnly
        10 permit udp 10.0.100.42/32 any
--
ACC1(config)# sh ip access-lists ACL_SNMP-ReadOnly

IP access list ACL_SNMP-ReadOnly
	10 permit udp 10.0.100.42/32 any
--
ACC2# sh ip access-lists ACL_SNMP-ReadOnly

IP access list ACL_SNMP-ReadOnly
        10 permit udp 10.0.100.42/32 any 

This module is idempotent. Now we will update the ACL with a second entry (see the nxos_acl module documentation for all options).

    - name: configure SNMP-ACCESS-LIST
      nxos_acl:
        name: ACL_SNMP-ReadOnly
        seq: "10"
        action: permit
        proto: udp
        src: 10.0.100.42/32
        dest: any
        state: present

    - name: configure SNMP-ACCESS-LIST
      nxos_acl:
        name: ACL_SNMP-ReadOnly
        seq: "20"
        action: permit
        proto: udp
        src: 10.0.100.43/32
        dest: any
        state: present
root@09cf326cc275:/ansible/NXOS# ansible-playbook -i inventory-home playbook-home.yaml

PLAY [Setup Nexus Devices] ***********************************************************************************************************************

TASK [configure hostname] ************************************************************************************************************************
changed: [ACC1]
changed: [AGR1]
changed: [ACC2]

TASK [configure syslog server] *******************************************************************************************************************
changed: [ACC1]
changed: [ACC2]
changed: [AGR1]

TASK [configure SNMP-ACCESS-LIST] ****************************************************************************************************************
ok: [ACC1]
ok: [AGR1]
ok: [ACC2]

TASK [configure SNMP-ACCESS-LIST] ****************************************************************************************************************
changed: [ACC1]
changed: [AGR1]
changed: [ACC2]

PLAY RECAP ***************************************************************************************************************************************
ACC1                       : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ACC2                       : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
AGR1                       : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
AGR1(config)# sh ip access-lists ACL_SNMP-ReadOnly

IP access list ACL_SNMP-ReadOnly
        10 permit udp 10.0.100.42/32 any 
        20 permit udp 10.0.100.43/32 any

Then we update the last entry with a new IP address.

    - name: configure SNMP-ACCESS-LIST
      nxos_acl:
        name: ACL_SNMP-ReadOnly
        seq: "10"
        action: permit
        proto: udp
        src: 10.0.100.42/32
        dest: any
        state: present

    - name: configure SNMP-ACCESS-LIST
      nxos_acl:
        name: ACL_SNMP-ReadOnly
        seq: "20"
        action: permit
        proto: udp
        src: 10.0.100.44/32
        dest: any
        state: present
root@09cf326cc275:/ansible/NXOS# ansible-playbook -i inventory-home playbook-home.yaml

PLAY [Setup Nexus Devices] ***********************************************************************************************************************

TASK [configure hostname] ************************************************************************************************************************
changed: [ACC1]
changed: [ACC2]
changed: [AGR1]

TASK [configure syslog server] *******************************************************************************************************************
changed: [ACC1]
changed: [AGR1]
changed: [ACC2]

TASK [configure SNMP-ACCESS-LIST] ****************************************************************************************************************
ok: [ACC1]
ok: [AGR1]
ok: [ACC2]

TASK [configure SNMP-ACCESS-LIST] ****************************************************************************************************************
changed: [ACC1]
changed: [AGR1]
changed: [ACC2]

PLAY RECAP ***************************************************************************************************************************************
ACC1                       : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ACC2                       : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
AGR1                       : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 
AGR1(config)# sh ip access-lists ACL_SNMP-ReadOnly

IP access list ACL_SNMP-ReadOnly
        10 permit udp 10.0.100.42/32 any 
        20 permit udp 10.0.100.44/32 any
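Rather than duplicating one task per entry, the ACL could be described as data and applied with a loop; a sketch of the same two entries (the item keys are my own naming):

```yaml
    - name: configure SNMP-ACCESS-LIST
      nxos_acl:
        name: ACL_SNMP-ReadOnly
        seq: "{{ item.seq }}"
        action: permit
        proto: udp
        src: "{{ item.src }}"
        dest: any
        state: present
      loop:
        - { seq: "10", src: "10.0.100.42/32" }
        - { seq: "20", src: "10.0.100.44/32" }
```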

You can imagine a lot of scenarios now, and apply your changes very quickly.

How to Use Ansible: An Ansible Cheat Sheet

Check hosts in the inventory:

ansible@Deb-Master:~/base$ ansible all --list-hosts
   hosts (1):
     client1

Get the list of modules:

ansible@Deb-Master:~/base$ /usr/local/bin/ansible-doc -l | egrep "^(aci|mso)"
 [WARNING]: win_template parsing did not produce documentation.
 [WARNING]: template parsing did not produce documentation.
 aci_l3out                                                     Manage Layer 3 Outside (L3Out) objects (l3ext:Out)
 aci_interface_policy_cdp                                      Manage CDP interface policies (cdp:IfPol)
 aci_maintenance_group_node                                    Manage maintenance group nodes
 mso_site                                                      Manage sites
 aci_intf_policy_fc                                            Manage Fibre Channel interface policies (fc:IfPol)
 aci_filter_entry                                              Manage filter entries (vz:Entry)
 mso_schema_site_vrf                                           Manage site-local VRFs in schema template
 mso_schema_site_anp_epg_staticleaf                            Manage site-local EPG static leafs in schema template
 aci_intf_policy_port_channel                                  Manage port channel interface policies (lacp:LagPol)
 mso_schema_template_filter_entry                              Manage filter entries in schema templates
 aci_aaa_user_certificate                                      Manage AAA user certificates (aaa:UserCert)
 aci_switch_policy_leaf_profile                                Manage switch policy leaf profiles (infra:NodeP)
 aci_interface_policy_lldp                                     Manage LLDP interface policies (lldp:IfPol)
 mso_schema_template_externalepg                               Manage external EPGs in schema templates
 aci_tenant_span_src_group                                     Manage SPAN source groups (span:SrcGrp)
 aci_access_port_block_to_access_port                          Manage port blocks of Fabric interface policy leaf profile interface selectors (infra:HPortS, infra:PortBlk)
 aci_epg_to_contract                                           Bind EPGs to Contracts (fv:RsCons, fv:RsProv)
 aci_access_port_to_interface_policy_leaf_profile              Manage Fabric interface policy leaf profile interface selectors (infra:HPortS, infra:RsAccBaseGrp, infra:PortBlk)
 aci_firmware_source                                           Manage firmware image sources (firmware:OSource)
 aci_tenant_action_rule_profile                                Manage action rule profiles (rtctrl:AttrP)
[..]

The documentation for a specific module:

ansible@Deb-Master:~/base$ /usr/local/bin/ansible-doc  aci_tenant
   ACI_TENANT (/usr/local/lib/python2.7/dist-packages/ansible/modules/network/aci/aci_tenant.py) 
    Manage tenants on Cisco ACI fabrics. 
 This module is maintained by an Ansible Partner
 OPTIONS (= is mandatory):
 certificate_name
     The X.509 certificate name attached to the APIC AAA user used for signature-based authentication.
     If a `private_key' filename was provided, this defaults to the `private_key' basename, without extension.
     If PEM-formatted content was provided for `private_key', this defaults to the `username' value.
     (Aliases: cert_name)[Default: (null)]
     type: str
 description
     Description for the tenant.
     (Aliases: descr)[Default: (null)]
     type: str 
 = host
         IP Address or hostname of APIC resolvable by Ansible control host.
         (Aliases: hostname)
         type: str
 output_level
     Influence the output of this ACI module.
      `normal' means the standard output, incl. `current' dict
      `info' adds informational output, incl. `previous', `proposed' and `sent' dicts
      `debug' adds debugging output, incl. `filter_string', `method', `response', `status' and `url' information
     (Choices: debug, info, normal)[Default: normal]
     type: str 
[..]

Convert your code easily with APIC REST Python Adapter (arya)

Arya is a tool that translates XML or JSON into Python. Arya converts your input into code that uses the Cisco Cobra SDK.

Generate the code with arya:

arya -f tenant.xml

#!/usr/bin/env python
'''
Autogenerated code using arya
Original Object Document Input:
'''
raise RuntimeError('Please review the auto generated code before ' +
                   'executing the output. Some placeholders will ' +
                   'need to be changed')
# list of packages that should be imported for this code to work
import cobra.mit.access
import cobra.mit.naming
import cobra.mit.request
import cobra.mit.session
import cobra.model.fv
import cobra.model.vns
from cobra.internal.codec.xmlcodec import toXMLStr
# log into an APIC and create a directory object
ls = cobra.mit.session.LoginSession('https://1.1.1.1', 'admin', 'password')
md = cobra.mit.access.MoDirectory(ls)
md.login()
# the top level object on which operations will be made
# Confirm the dn below is for your top dn
topDn = cobra.mit.naming.Dn.fromString('uni/tn-aaaaaaaa-tn')
topParentDn = topDn.getParent()
topMo = md.lookupByDn(topParentDn)
# build the request using cobra syntax
fvTenant = cobra.model.fv.Tenant(topMo, ownerKey='', name='aaaaaaaa-tn', descr='', nameAlias='', ownerTag='')
vnsSvcCont = cobra.model.vns.SvcCont(fvTenant)
fvRsTenantMonPol = cobra.model.fv.RsTenantMonPol(fvTenant, tnMonEPGPolName='')
# commit the generated code to APIC
print toXMLStr(topMo)
c = cobra.mit.request.ConfigRequest()
c.addMo(topMo)
md.commit(c)

How to program Cisco ACI with Ansible and Docker

Ansible guide : https://docs.ansible.com/ansible/devel/scenario_guides/guide_aci.html

I created a Docker container with Ansible, Python, and the demo from GitHub.

git clone https://github.com/CiscoDevNet/aci-learning-labs-code-samples 
cd aci-learning-labs-code-samples 

Docker image with Ansible and Python:

docker pull zednetwork/aci-ansible2-4

A newer version with Ansible 2.8.2, based on Debian 10:

docker pull zednetwork/aci-ansible.2-8-2

Docker Compose example:

version: "3" 
services:
  ansible:
    image: zednetwork/aci-ansible2-4
    tty: true
    stdin_open: true

Start the container and connect to it:

docker-compose up -d 
Creating network "aci-ansible_default" with the default driver

Pulling ansible (zednetwork/aci-ansible2-4:)…

latest: Pulling from zednetwork/aci-ansible2-4

22dbe790f715: Pull complete
3bf88405a685: Pull complete
Creating aci-ansible_ansible_1 … done

Check the container:

# docker images
 REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
 zednetwork/aci-ansible2-4   latest              ff17ed37f691        34 minutes ago      659MB

# docker ps
 CONTAINER ID        IMAGE                       COMMAND             CREATED              STATUS              PORTS               NAMES
 53993071ffa9        zednetwork/aci-ansible2-4   "bash"              About a minute ago   Up About a minute                       aci-ansible_ansible_1

Connect to the container, using the container ID above.

# docker exec -it 53993071ffa9 /bin/bash
root@53993071ffa9:/#

This container already contains an example from devnet.cisco.com ( https://developer.cisco.com/docs/aci/#ansible). This example uses a public ACI Fabric.

We can use the first playbook to create a tenant on the ACI fabric. The fabric credentials are in the inventory file.

root@53993071ffa9:~/aci_ansible_learning_labs_code_samples/intro_module# cat inventory
 [apic:vars]
 username=admin
 password=ciscopsdt
 ansible_python_interpreter="/usr/bin/env python"
 [apic]
 sandboxapicdc.cisco.com
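The playbook itself comes from the cloned repository; a minimal equivalent using the aci_tenant module might look like the following sketch (the vars_prompt variable name is my own assumption):

```yaml
---
- name: ENSURE APPLICATION CONFIGURATION EXISTS
  hosts: apic
  connection: local
  gather_facts: False

  vars_prompt:
    - name: tenant
      prompt: "What would you like to name your Tenant?"
      private: no

  tasks:
    - name: ENSURE APPLICATIONS TENANT EXISTS
      aci_tenant:
        host: "{{ ansible_host }}"
        username: "{{ username }}"
        password: "{{ password }}"
        validate_certs: False
        tenant: "{{ tenant }}"
        state: present
```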

You can connect directly to the fabric at https://sandboxapicdc.cisco.com/ and verify that your tenant is present.

root@53993071ffa9:~/aci_ansible_learning_labs_code_samples/intro_module# ansible-playbook -i inventory 01_aci_tenant_pb.yml
 What would you like to name your Tenant?: MyFirstTenant-tn
 PLAY [ENSURE APPLICATION CONFIGURATION EXISTS] 
 TASK [ENSURE APPLICATIONS TENANT EXISTS] 
 changed: [sandboxapicdc.cisco.com]
 PLAY RECAP 
 sandboxapicdc.cisco.com    : ok=1    changed=1    unreachable=0    failed=0

Go to ACI > Tenants

You can delete your tenant with another playbook:

root@53993071ffa9:~/aci_ansible_learning_labs_code_samples/intro_module# ansible-playbook -i inventory 01-1_aci_tenant_pb.yml
 What would you like to name your Tenant?: MyFirstTenant-tn
 PLAY [ENSURE APPLICATION CONFIGURATION EXISTS] 
 TASK [ENSURE APPLICATIONS TENANT EXISTS] 
 changed: [sandboxapicdc.cisco.com]
 PLAY RECAP 
 sandboxapicdc.cisco.com    : ok=1    changed=1    unreachable=0    failed=0
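Under the hood, deletion boils down to the same aci_tenant task with state: absent; a hedged sketch (variable names are examples):

```yaml
    - name: ENSURE APPLICATIONS TENANT DOES NOT EXIST
      aci_tenant:
        host: "{{ ansible_host }}"
        username: "{{ username }}"
        password: "{{ password }}"
        validate_certs: False
        tenant: "{{ tenant }}"
        state: absent
```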

Another example to list all tenants:

# cat listTenants.yml
---
- name: ENSURE APPLICATION CONFIGURATION EXISTS
  hosts: apic
  connection: local
  gather_facts: False
  
  tasks:

    - name: List all tenants
      aci_tenant:
        host: "{{ ansible_host }}"
        username: "{{ username }}"
        password: "{{ password }}"
        state: "query"
        validate_certs: False

# ansible-playbook -i inventory listTenants.yml -vvv
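Instead of digging through the -vvv output, the query result can be registered and printed; a sketch (assuming the module returns the usual `current` list of fvTenant objects):

```yaml
    - name: List all tenants
      aci_tenant:
        host: "{{ ansible_host }}"
        username: "{{ username }}"
        password: "{{ password }}"
        validate_certs: False
        state: query
      register: all_tenants

    - name: Show tenant names
      debug:
        msg: "{{ all_tenants.current | map(attribute='fvTenant.attributes.name') | list }}"
```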

How to test your network services with Docker

This container has been tested with IOS / NXOS and ACI.

Test syslog

You can verify that you receive logs with syslog-ng. This service listens on the default port udp/514.

The configuration in the file /etc/syslog-ng/syslog-ng.conf redirects external logs to the file /var/log/remote-syslog.log.

# Extract of syslog-ng.conf

source s_net {
    tcp(ip(0.0.0.0) port(514));
    udp(ip(0.0.0.0) port(514));
};

destination d_net { file("/var/log/remote-syslog.log"); };
log { source(s_net); destination(d_net); };

Logs can be viewed with the following command:

root@89944db0da60:~# tailf /var/log/remote-syslog.log
Apr 15 06:50:51 10.0.100.46 2019 Apr 15 06:50:48 UTC: %ETHPORT-5-IF_DOWN_CFG_CHANGE: Interface Ethernet1/1 is down(Config change)
Apr 15 06:50:52 10.0.100.46 2019 Apr 15 06:50:49 UTC: %ETHPORT-5-IF_DOWN_ADMIN_DOWN: Interface Ethernet1/1 is down (Administratively down)
Apr 15 06:50:55 10.0.100.46 2019 Apr 15 06:50:52 UTC: last message repeated 1 time
Apr 15 11:57:59 10.255.0.2 %LOG_LOCAL7-4-SYSTEM_MSG [F1186][raised][config-failure][warning][sys/phys-[eth1/35]/fault-F1186] Port configuration failure.                                   Reason: 2                                   Failed Config: l1:PhysIfspeed_failed_flag
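To generate a test message toward this container, you do not even need a network device; a minimal sketch in Python (host, port, and message are examples), building the RFC 3164 priority value facility * 8 + severity:

```python
import socket

def syslog_pri(facility, severity):
    # RFC 3164 PRI value: facility * 8 + severity
    return facility * 8 + severity

def send_syslog(message, host="127.0.0.1", port=514, facility=23, severity=4):
    # facility 23 = local7, severity 4 = warning, matching the Nexus config in this article
    packet = "<{}>{}".format(syslog_pri(facility, severity), message).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(packet, (host, port))
    return packet

if __name__ == "__main__":
    send_syslog("test message from the ansible host")
```

The message should then appear in /var/log/remote-syslog.log inside the container.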

Test snmptrap

snmptrapd is used to receive SNMP traps. The logs are redirected to the file /var/log/snmptrapd.log.

The configuration files are /etc/snmp/snmptrapd.conf and /etc/default/snmptrapd.

The configured community is "public". You can change it in /etc/snmp/snmptrapd.conf, or disable authorization checks with "disableAuthorization yes".

Example:


Agent Address: 0.0.0.0
Agent Hostname: nxos – UDP: [10.0.100.46]:59353->[172.21.0.2]:162
Date: 6:50:57 15-4
Enterprise OID: .
EngineID:
Trap Type: Cold Start
Trap Sub-Type: 0
Community/Infosec Context: TRAP2, SNMP v2c, community nxos
Uptime: 0
Description: Cold Start
PDU Attribute/Value Pair Array:
iso.3.6.1.2.1.1.3.0 = Timeticks: (16384794) 1 day, 21:30:47.94
iso.3.6.1.6.3.1.1.4.1.0 = OID: iso.3.6.1.2.1.17.0.2
iso.3.6.1.4.1.9.9.46.1.3.1.1.1.1.1 = INTEGER: 1
iso.3.6.1.2.1.31.1.1.1.1.436207616 = STRING: "Ethernet1/1"


Agent Address: 0.0.0.0
Agent Hostname: nxos – UDP: [10.0.100.46]:59353->[172.21.0.2]:162
Date: 6:51:6 15-4
Enterprise OID: .
EngineID:
Trap Type: Cold Start
Trap Sub-Type: 0
Community/Infosec Context: TRAP2, SNMP v2c, community nxos
Uptime: 0
Description: Cold Start
PDU Attribute/Value Pair Array:
iso.3.6.1.2.1.1.3.0 = Timeticks: (16385696) 1 day, 21:30:56.96
iso.3.6.1.6.3.1.1.4.1.0 = OID: iso.3.6.1.4.1.9.9.43.2.0.2
iso.3.6.1.4.1.9.9.43.1.1.1.0 = Timeticks: (16384764) 1 day, 21:30:47.64
iso.3.6.1.4.1.9.9.43.1.1.6.1.6.7117 = INTEGER: 3


Test tacacs+

tacacs+ is used to verify Authentication, Authorization and Accounting (AAA). The configuration is in the file /etc/tacacs/tac_plus.conf.

We use the following package: http://www.shrubbery.net/tac_plus/

The current configuration is the following:

  • TACACS key: cisco1234
  • User: user1 / cisco1234
  • Rights: admin
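A tac_plus.conf matching those values might look like the following sketch (the exact syntax should be double-checked against the shrubbery.net documentation):

```
key = cisco1234

user = user1 {
    login = cleartext cisco1234
    service = exec {
        priv-lvl = 15
    }
}
```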

The log files are the following:

  • For accounting: /var/log/tacacs/tac_plus.acct
  • For authentication: /var/log/tac_plus.log

Test radius

We use freeradius with the following files:

  • radiusd.conf
  • clients.conf
  • users

The logs are in the following directory /var/log/freeradius/.

Example for IOS/NXOS and ACI :

user1 Cleartext-Password := "cisco1234"
    Service-Type = NAS-Prompt-User,
    Cisco-AVPair = "shell:priv-lvl=15",
    Cisco-AVPair += "shell:domains=all/admin/"
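The device itself is declared in clients.conf with its source network and shared secret; a minimal sketch (the client name, IP range, and secret are examples):

```
client nexus-lab {
    ipaddr = 10.0.100.0/24
    secret = cisco1234
}
```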

Synchronize ntp

This container can be used to verify that your device can synchronize with an NTP server. It runs an NTP server at stratum 5.

server 127.127.1.0
fudge 127.127.1.0 stratum 5
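On the NX-OS side, pointing a switch at this container could be done with something like the following (the server IP is an example, assuming the management VRF is used):

```
ntp server 10.0.100.42 use-vrf management
show ntp peer-status
```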

SSH / scp server

You can use this container to upload files via scp if needed. The daemon is stopped by default and you need to create your own user.

root@9371dba394dc:~# adduser cisco
 Adding user `cisco' ...
 Adding new group `cisco' (1001) ...
 Adding new user `cisco' (1001) with group `cisco' ...
 Creating home directory `/home/cisco' ...
 Copying files from `/etc/skel' ...
 New password:
 Retype new password:
 passwd: password updated successfully
 Changing the user information for cisco
 Enter the new value, or press ENTER for the default
         Full Name []:
         Room Number []:
         Work Phone []:
         Home Phone []:
         Other []:
 Is the information correct? [Y/n] y

root@9371dba394dc:~# /etc/init.d/ssh start
 [ ok ] Starting OpenBSD Secure Shell server: sshd.

SSH is exposed on port 30022 in the docker-compose.yml file. You can change this port.

Docker-compose file

docker-compose.yml

version: "3"
 services:
   network-test:
     build: .
     image: zednetwork/network-test
     ports:
      - "30022:22/tcp"
      - "123:123/udp"
      - "49:49/tcp"
      - "162:162/udp"
      - "514:514/udp"
      - "1812:1812/udp"
      - "1813:1813/udp"
     tty: true
     stdin_open: true

To download the container:

docker pull zednetwork/network-test:latest

To enter the container:

docker exec -it <container_ID> /bin/bash