Backing up Firepower FDM to the Cloud via Ansible

My criticism of Cisco's Firepower product line is fairly well documented. To their credit, though, Cisco finally seems to be steering the product in the right direction with the continued development of Firepower Device Manager (FDM), their on-box management alternative that lets customers abandon the dismal FMC.

With the breakaway from a centralized controller, though, users now need to go back to managing individual resources on their appliance(s), such as firewall policies and configuration backups. This guide shows one elegant way to maintain configuration backups for a fleet of FDM appliances using a simple, flexible Ansible playbook.

A year or so ago I wouldn't have even considered using FDM. Though it had introduced a nicely modernized UI framework, it still lacked seriously critical features: support for the 4100 and 9300 chassis, simple active/standby high availability, route-based VPN options, LACP link aggregation, and so on.

As of FTD version 6.6, however, dare I actually say that I'm finally pleased enough with FDM's development to put it into a production network (I know, it feels weird even typing this). Granted, I haven't written a ton of tooling for it yet (rule automation is a backlog item), but for what I've done so far, FDM's API has been reliable enough.

One thing you have to consider when moving back to on-box management of a fleet of appliances is configuration backups, and your trusty RANCID- and Oxidized-type tools won't help you here. That's because FDM-managed Firepower isn't like the ASA, where a flat text configuration file is all you need. Instead, you'll need to periodically download a tarball backup in order to perform a proper restore.

Luckily, downloading backups is something that can be accomplished via the nicely documented RESTful API. Since I wasn't able to find this anywhere else on the internet, I wrote my own Ansible playbook to do just that.

A few notes up front:

  • This playbook runs from AWX (the open-source upstream of Ansible Tower).
  • My AWX deployment runs in GCP. If you use a different cloud provider such as AWS, it shouldn't be difficult to modify this to make it work. Because we interact with the APIs through Ansible's uri module, the playbook is super portable.
  • I happen to use HashiCorp's Vault for secret storage, which is great, but you can replace this with whatever you use to pull secrets.
  • File lifecycle is defined on the bucket. If you only want to keep backups for the last 30 days, so as not to accumulate a decade's worth of backup files, simply define this in your provider's bucket lifecycle policy.
  • My FDM devices were already configured for daily backups. (Now that I think about it, I should probably add a step to this playbook to enable the backup schedule if it isn't configured already.)
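As an example of the bucket-side lifecycle mentioned above, a GCS lifecycle rule that deletes objects after 30 days looks like the following (the bucket name is yours; apply it with something like `gsutil lifecycle set lifecycle.json gs://your-bucket`):

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "Delete"},
        "condition": {"age": 30}
      }
    ]
  }
}
```

With this in place, the playbook never has to worry about pruning old backups itself.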

---
# Python requirements: pip install hvac google-auth

- name: Download Latest FDM Backup File and Upload to GCS Bucket for Cold Storage
  hosts: all
  connection: local
  gather_facts: yes
  vars:
    awx_run: true
    vault_addr: "vault-server-hostname"
    svc_acct_user: "{{ lookup('hashi_vault', 'secret=/secret/svc_acct:user url=https://' + vault_addr) }}"
    svc_acct_pass: "{{ lookup('hashi_vault', 'secret=/secret/svc_acct:pass url=https://' + vault_addr) }}"
    gcs_bucket_name: "my-google-cloud-storage-backup-bucket"

  tasks:
      - name: Retrieve Vault Access Token
      # I use Hashicorp Vault to store/retrieve user credentials. This is a simple script which sets a local
      # environment variable on the Ansible VM enabling hashi_vault lookups
        when: awx_run|bool
        local_action: command ../bin/gce_vault_token.sh
        run_once: true

      - name: Retrieve GCS Access Token
      # Note that AWX runs from a Google Cloud VM with a service account that has pre-defined IAM read/write
      # permissions to the Google Cloud bucket. This play won't work if run outside of a GCE environment.
        uri:
          url: http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
          validate_certs: false
          method: GET
          headers:
            Metadata-Flavor: Google
          return_content: yes
        run_once: true
        register: gcs_token_resp

      - name: Retrieve FDM Access Token
      # We authenticate w/ a regular AAA account on FTD to retrieve an API access token for subsequent calls
      # Cisco documentation: https://bit.ly/3luVE27
        uri:
          url: https://{{ ansible_host }}/api/fdm/latest/fdm/token
          validate_certs: false
          method: POST
          body: >
            {"grant_type":"password","username":"{{ svc_acct_user }}","password":"{{ svc_acct_pass }}"}
          body_format: json
          return_content: yes
        register: fdm_token_resp

      - name: Determine Firewall HA State and End Play for All Standby Units
      # Currently FDM prevents scheduling of backup jobs on standby firewalls, so we drop those hosts off here
        uri:
          url: https://{{ ansible_host }}/api/fdm/latest/devices/default/operational/ha/status/default
          validate_certs: false
          method: GET
          return_content: yes
          headers:
            Authorization: Bearer {{ fdm_token_resp.json.access_token }}
        register: ha_resp
      - name: End Play for Standby Units
        meta: end_host
        when: ha_resp.json.nodeState == 'HA_STANDBY_NODE'

      - name: Retrieve List of Backup Filenames from FDM
      # Retrieves list of latest backup files from the FDM
        uri:
          url: https://{{ ansible_host }}/api/fdm/latest/managedentity/archivedbackups
          validate_certs: false
          method: GET
          return_content: yes
          headers:
            Authorization: Bearer {{ fdm_token_resp.json.access_token }}
        register: file_resp

      - name: Find and Download Latest Backup(s)
      # Parses the list of files to download only the latest one to a temporary Ansible VM local directory
        get_url:
          url: https://{{ ansible_host }}/api/fdm/latest/action/downloadbackup/{{ file_resp | json_query(q) | sort(reverse=True) | first }}
          timeout: 720
          dest: /tmp/{{ ansible_host }}-{{ ansible_date_time.date }}-backup.tar
          validate_certs: false
          headers:
            Authorization: Bearer {{ fdm_token_resp.json.access_token }}
        vars:
          q: "json.items[?type=='archivedbackup'].archiveName"

      - name: Upload Latest Backup(s) to GCS Bucket
      # Uploads the file downloaded in the previous step to Google Cloud Storage
        uri:
          url: https://storage.googleapis.com/upload/storage/v1/b/{{ gcs_bucket_name }}/o?uploadType=media&name={{ ansible_date_time.date }}-{{ inventory_hostname }}-backup.tar
          validate_certs: false
          timeout: 720
          method: POST
          src: /tmp/{{ ansible_host }}-{{ ansible_date_time.date }}-backup.tar
          remote_src: yes
          return_content: yes
          headers:
            Authorization: Bearer {{ gcs_token_resp.json.access_token }}
            Content-Type: application/x-tar
        ignore_errors: yes

      - name: Remove Local Backup
      # Deletes the backup file from the AWX host once it's been uploaded
        file:
          path: /tmp/{{ ansible_host }}-{{ ansible_date_time.date }}-backup.tar
          state: absent
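For clarity, here's what the json_query/sort/first filter chain in the download task actually does, sketched in plain Python. The response payload and archive filenames below are made up for illustration; the real shape comes from FDM's archivedbackups endpoint (a list of items with a type and an archiveName):

```python
# Hypothetical FDM archivedbackups response, registered as file_resp in the playbook.
# The filenames are invented; real FDM archive names embed a sortable timestamp.
file_resp = {
    "json": {
        "items": [
            {"type": "archivedbackup", "archiveName": "backup-2020-11-01.tar"},
            {"type": "archivedbackup", "archiveName": "backup-2020-11-03.tar"},
            {"type": "somethingelse", "archiveName": "ignore-me.tar"},
        ]
    }
}

# Equivalent of json_query("json.items[?type=='archivedbackup'].archiveName"):
# keep only the archive names of items whose type is 'archivedbackup'
names = [item["archiveName"]
         for item in file_resp["json"]["items"]
         if item["type"] == "archivedbackup"]

# Equivalent of sort(reverse=True) | first: because the names embed a date,
# a reverse lexical sort puts the newest archive first
latest = sorted(names, reverse=True)[0]
print(latest)  # backup-2020-11-03.tar
```

If your appliance retains several generations of backups, this is the one line to change if you'd rather download all of them instead of just the newest.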

As mentioned, this stateless playbook should be straightforward to modify to run on AWS EC2 and upload to S3, since it interacts with the APIs using the uri module rather than vendor-specific ones. In fact, I originally tried using some Ansible modules directly from Cisco and Google but ended up abandoning them for...reasons.
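For instance, the GCS upload task could be swapped for an S3 upload along these lines. This is an untested sketch that assumes the amazon.aws collection is installed and that the instance has an IAM role granting write access to the bucket (the bucket name is hypothetical):

```yaml
- name: Upload Latest Backup(s) to S3 Bucket
  amazon.aws.s3_object:
    bucket: my-s3-backup-bucket    # hypothetical bucket name
    object: "{{ ansible_date_time.date }}-{{ inventory_hostname }}-backup.tar"
    src: /tmp/{{ ansible_host }}-{{ ansible_date_time.date }}-backup.tar
    mode: put
```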
