31.10.2017 | Juergen Vigna | Log Management, Log-SIEM, NetEye

Sending Cisco Syslogs to Elasticsearch: A simple guide

Do you use Cisco’s network infrastructure? Would you like to view the logs it sends via the syslog protocol in an Elasticsearch database? Find out below about the filters and grok patterns needed for the Logstash setup.

As you probably already know, you need a Logstash instance in order to index data into an Elasticsearch database. Cisco is a well-known network device provider, so it is crucial to have a workable solution for indexing the logs that can be retrieved from these devices.
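Everything below assumes that such a Logstash instance is already receiving the syslog stream from the devices. Purely as orientation, a minimal sketch of the input side could look like this; the port is an assumption taken from the example device configuration shown further down and must match your own environment:

input {
  # Listen for syslog messages from the Cisco devices.
  # The port is illustrative; it must match the "logging host ... port" line on the device.
  syslog {
    port => 8514
    type => "syslog"
  }
}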

This article mainly deals with two types of Cisco logs from three types of devices:

  • Network devices such as switches or routers
  • Cisco wireless LAN controllers (WLCs)
  • Cisco RADIUS appliances (ISE)

I have divided these into two filters, since the RADIUS logs are completely different and have nothing in common with “normal” Cisco logs. First, I defined the following custom grok patterns:

CISCOTIMESTAMPTZ %{CISCOTIMESTAMP}( %{TZ})?
NEXUSTIMESTAMP %{YEAR} %{MONTH} %{MONTHDAY} %{TIME}( %{TZ})?
ISETIMESTAMP %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})? %{ISO8601_TIMEZONE}?
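For grok to resolve these custom patterns, they must be saved in a file inside the patterns directory that the filters below reference via patterns_dir. A minimal sketch, where only the file name is my assumption:

# /var/lib/neteye/logstash/etc/pattern.d/cisco-custom   (file name is an assumption)
# One pattern definition per line: NAME <space> regular expression
CISCOTIMESTAMPTZ %{CISCOTIMESTAMP}( %{TZ})?
NEXUSTIMESTAMP %{YEAR} %{MONTH} %{MONTHDAY} %{TIME}( %{TZ})?
ISETIMESTAMP %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})? %{ISO8601_TIMEZONE}?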

And then I wrote the following filtering rules:

#
# FILTER - Try to parse the cisco log format
#
# Configuration:
#   clock timezone Europe +1
#   no clock summer-time
#   ntp server 0.0.0.0 prefer
#   ntp server 129.6.15.28
#   ntp server 131.107.13.100
#   service timestamps log datetime msec show-timezone
#   service timestamps debug datetime msec show-timezone
#   logging source-interface Loopback0
#   ! Two logging servers for redundancy
#   logging host 0.0.0.0 transport tcp port 8514
#   logging host 0.0.0.0 transport tcp port 8514
#   logging trap 6

filter {
  # NOTE: The frontend logstash servers set the type of incoming messages.
  if [type] == "syslog" and [host_group] == "Netzwerk" {
    # Parse the log entry into sections.  Cisco doesn't use a consistent log format, unfortunately.
    grok {
      patterns_dir => "/var/lib/neteye/logstash/etc/pattern.d"
      match => [
        # IOS
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} ((%{NUMBER:log_sequence#})?:( %{NUMBER}:)? )?%{CISCOTIMESTAMPTZ:log_date}: %%{CISCO_REASON:facility}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:message}",
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} ((%{NUMBER:log_sequence#})?:( %{NUMBER}:)? )?%{CISCOTIMESTAMPTZ:log_date}: %%{CISCO_REASON:facility}-%{CISCO_REASON:facility_sub}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:message}",

        # Nexus
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} ((%{NUMBER:log_sequence#})?: )?%{NEXUSTIMESTAMP:log_date}: %%{CISCO_REASON:facility}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:message}",
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} ((%{NUMBER:log_sequence#})?: )?%{NEXUSTIMESTAMP:log_date}: %%{CISCO_REASON:facility}-%{CISCO_REASON:facility_sub}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:message}",

	# WLC
	"message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} %{SYSLOGHOST:wlc_host}: %{DATA:wlc_action}: %{CISCOTIMESTAMP:log_date}: %{DATA:wlc_mnemonic}: %{DATA:wlc_mnemonic_message} %{GREEDYDATA:message}"
      ]

      overwrite => [ "message" ]

      add_tag => [ "cisco" ]
    }
  }

  # If we made it here, the grok was successful
  if "cisco" in [tags] {
    date {
      match => [
        "log_date",

        # IOS
        "MMM dd HH:mm:ss.SSS ZZZ",
        "MMM dd HH:mm:ss ZZZ",
        "MMM dd HH:mm:ss.SSS",
        
        # Nexus
        "YYYY MMM dd HH:mm:ss.SSS ZZZ",
        "YYYY MMM dd HH:mm:ss ZZZ",
        "YYYY MMM dd HH:mm:ss.SSS",
        
        # Hail Mary
        "ISO8601"
      ]
    }

    # Add the log level's name instead of just a number.
    mutate {
      gsub => [
        "severity_level", "0", "0 - Emergency",
        "severity_level", "1", "1 - Alert",
        "severity_level", "2", "2 - Critical",
        "severity_level", "3", "3 - Error",
        "severity_level", "4", "4 - Warning",
        "severity_level", "5", "5 - Notification",
        "severity_level", "6", "6 - Informational"
      ]
    }

  } # if
} # filter
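The conditionals above rely on the type and host_group fields already being present on each event; in this setup the frontend Logstash servers set them, as the NOTE in the filter indicates for type. Purely as an illustration of how such a field could be attached on the input side (the subnets and the grouping logic here are assumptions, not the mechanism actually used):

filter {
  # Illustrative only: derive host_group from the sender's address.
  # The real frontend Logstash servers may use a completely different mechanism.
  if [host] =~ /^10\.0\.1\./ {
    mutate { add_field => { "host_group" => "Netzwerk" } }
  } else if [host] =~ /^10\.0\.2\./ {
    mutate { add_field => { "host_group" => "ISE" } }
  }
}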

For the Cisco RADIUS (ISE) logs I used the following filter instead:

#
# FILTER - Try to parse the ise logfiles (Radius)
#

filter {
  if [type] == "syslog" and [host_group] == "ISE" {
    grok {
      patterns_dir => "/var/lib/neteye/logstash/etc/pattern.d"
      match => [
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} %{DATA:ise_log_type} %{NUMBER:ise_log_sequence} %{INT:ise_log_lines_split} %{INT:ise_log_line_sequence} %{ISETIMESTAMP} %{NUMBER:ise_log_number} %{NUMBER:ise_log_id} %{DATA:ise_log_facility} %{DATA:ise_log_id_description},.* Device IP Address=%{IP:ise_device_ip},.* UserName=%{DATA:ise_username},.* NetworkDeviceName=%{DATA:ise_network_device_name},.* User-Name=%{DATA:ise_user_name},.* (NAS-Port-Id=%{DATA:ise_nas_port_id},.* )?(cisco-av-pair=%{DATA:ise_cisco_av_pair},.* )?AuthenticationMethod=%{DATA:ise_authentication_method},.* (AuthenticationStatus=%{DATA:ise_authentication_status}, .*)?(EndPointMACAddress=%{DATA:ise_endpoint_mac_address},.*)? Location=%{DATA:ise_location},%{GREEDYDATA:message}"
      ]
      add_tag => [ "ISE" ]
      remove_tag => [ "_grokparsefailure" ]
      tag_on_failure => [ "_iseparsefailure" ]
    }
  }
} # filter
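To actually get the parsed events into Elasticsearch, an output section along these lines completes the pipeline; the host and index name are assumptions and will differ in a NetEye installation:

output {
  # Ship the parsed and tagged events to Elasticsearch.
  # Host and index name are illustrative.
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-cisco-%{+YYYY.MM.dd}"
  }
}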

The entire solution now provides well-formatted Cisco logs in Elasticsearch, on which you can perform structured analyses and build views. You can download the files mentioned above from the following link (Cisco-Logstash – Elasticsearch).

Juergen Vigna

NetEye Solution Architect at Würth Phoenix
I have over 20 years of experience in the IT sector. After early experience in software development for public transport companies, I decided to join the young and growing team of Würth Phoenix. Initially, I was responsible for the internal Linux/Unix infrastructure and the management of CVS software. Afterwards, my main challenge was to establish the now well-known IT System Management solution WÜRTHPHOENIX NetEye. As a Product Manager I started building NetEye from scratch, analyzing existing open source models, extending them and finally joining them into one single powerful solution. After that, my job turned into a passion: constant development, customer installations and support became a personal commitment. Today I use this knowledge as a NetEye Senior Consultant and NetEye Solution Architect at Würth Phoenix.
