Secure your infrastructure

According to the Australian Signals Directorate (part of the Australian Department of Defence), whose mandate covers cyber security support to the Australian Government and to Australian industry as a whole, 55% of the cybersecurity incidents reported to it involve a compromised asset, network, or infrastructure.

That dwarfs the #2 on the list, Denial of Service and Distributed Denial of Service, at 21% of incidents.

Securing infrastructure is critical. While this includes physical security, it's dominated by virtual access to assets: compromised credentials, flawed firmware with known hard-coded credentials, and other attack vectors.

While network restrictions are useful, strong logging and alerting are also critical, as is actually reading those alerts, triaging them and prioritising them.

Every piece of infrastructure in your environment should have some form of remote logging available. Local logging, on the device itself, is not sufficient. These logs should be treated with the same security care as your PCI payment credentials or medical information, if not more.

Step 0: Authentication

If your device only permits local username and password authentication, then it should be a unique combination for each device. That could be a large list, so you'll need some sort of password management in place.

Never use default passwords, and change usernames where possible. If I had a dollar for every time I saw "admin/admin" as the default… please use "${mycompany}admin/device-unique-password" or something unusual.
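
If you need to generate that list, here is a minimal sketch in Python; the device names are hypothetical, and the output belongs in your password manager, not a file on disk:

# A minimal sketch: generate a unique, random password per device.
# The device names are placeholders; store the results in your password manager.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits
DEVICES = ["core-switch-01", "edge-router-01", "ups-rack-3"]  # hypothetical inventory

def generate_password(length: int = 24) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for device in DEVICES:
    print(device, generate_password())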

If the device supports MFA, then (with Step 2: Time configured) you should enable that.

If the device supports RADIUS or other network authentication and single sign-on, then consider using that (though more considerations apply). Even so, a fallback to local credentials may still exist.

Step 1: Restricted network access

The devices on your network probably don't need a whole lot of inbound access, nor outbound for that matter. Let's talk about both.

The admin interface to your device is the most sensitive. It should not face the open internet if possible, and if it does, it should have some level of address range restriction as a rudimentary first step of protection.

IP address restrictions should be on a permit basis: e.g., permit only the trusted ranges you expect to administer the device from, including backup networks for emergencies, and reject everything else. The Internet is full of bots and scripts that scan juicy-looking admin ports, testing for zero-day exploits, known bad configurations, and hard-coded defaults or back doors. Even if you have patched and remediated what you know of, there could be more, as yet undiscovered by you or the vendor, so why take the risk?

If I have to have public-facing interfaces, then the restrictions I like to use include reasonably large ranges from the corporate ISP providers I use, and the well-known ranges for cell/mobile phone providers, so that I can tether in an emergency. You may also wish to include your home ISP range, so that in an emergency you can WFH to fix things.

This isn't considered trusted, it's just more trusted than the open Internet. And even if you have a large internal network that all staff, including admins, work from, it's worth rearranging your networks to keep those admins in one subnet and restricting internally as well, particularly if you have a wide area network and possibly have publicly accessible ethernet ports that can be reached by untrusted devices. Yes, 802.1x port authentication is a step up here, but why have that exposure in the first place?
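
To make the permit-basis idea concrete, here is a minimal sketch using Python's standard ipaddress module; the CIDR ranges are made-up examples, and the real enforcement belongs in your firewall or security group rules rather than in application code:

# Sketch of a permit-basis check. The CIDR ranges are made-up examples
# standing in for "corporate ISP", "mobile carrier" and "admin subnet" ranges.
import ipaddress

PERMITTED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # example corporate ISP range
    ipaddress.ip_network("198.51.100.0/24"),  # example mobile carrier range
    ipaddress.ip_network("10.20.30.0/24"),    # example internal admin subnet
]

def is_permitted(source_ip: str) -> bool:
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in PERMITTED_RANGES)

print(is_permitted("203.0.113.45"))  # True: inside a permitted range
print(is_permitted("192.0.2.10"))    # False: reject everything else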

Then think about what egress is needed from the device itself. Probably a remote logging destination (Step 3), which may be over HTTPS (TCP), for example. Your device may also need to access internal DNS (UDP and TCP), but probably only to a small, possibly internal, set of ranges. And lastly, NTP over UDP (for the next step, Time). Note that UDP traffic typically needs an ALLOW rule for network traffic in both directions.
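
If your devices happen to sit behind an AWS security group, that egress policy might be sketched with boto3 along these lines; the group ID and destination ranges are placeholders, and security groups are stateful (they handle the UDP return path for you, unlike stateless firewalls and NACLs):

# Sketch of a locked-down egress rule set for an AWS security group.
# The group ID and destination CIDRs are placeholders.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_egress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[
        # HTTPS to the remote logging endpoint range
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "10.0.2.0/24", "Description": "remote logging"}]},
        # DNS to internal resolvers, both UDP and TCP
        {"IpProtocol": "udp", "FromPort": 53, "ToPort": 53,
         "IpRanges": [{"CidrIp": "10.0.0.0/28", "Description": "internal DNS"}]},
        {"IpProtocol": "tcp", "FromPort": 53, "ToPort": 53,
         "IpRanges": [{"CidrIp": "10.0.0.0/28", "Description": "internal DNS"}]},
        # NTP to internal time servers
        {"IpProtocol": "udp", "FromPort": 123, "ToPort": 123,
         "IpRanges": [{"CidrIp": "10.0.0.0/28", "Description": "internal NTP"}]},
    ],
)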

Step 2: Time

Let's start with the basics: the time. Every device in your infrastructure should have the correct time. They should all be synced to a very high accuracy, using NTP or similar protocols. It's imperative for timestamps between systems to line up so that logs can be correlated. You don't need to run out and buy a stratum 1 atomic clock, but configure NTP sensibly for your network.

Your Cloud provider may have a scalable, reliable time source that you can synchronise virtual machine clocks with. For your colo or private networks, you may want to configure a set of NTP servers that the rest of your environment can depend upon.

And when I say depend upon, I mean you should monitor the time difference between your NTP servers to detect any drift, and detect if any of your NTP servers are offline. Start by having every device use a private DNS resolver on your network, and publish internal DNS entries that list your set of NTP servers:

ntp.internal IN A 10.0.0.6
ntp.internal IN A 10.0.0.7
ntp.internal IN A 10.0.1.6

Your internal DNS has suddenly become a critical vector for compromise, so ensure that it is also in scope for this advice!
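
As for monitoring that drift, here is a minimal sketch assuming the third-party ntplib package (any NTP client library would do); the threshold is arbitrary and should be tuned for your environment:

# Sketch: query each internal NTP server and flag drift or outages.
# Assumes the third-party "ntplib" package is installed (pip install ntplib).
import ntplib

NTP_SERVERS = ["10.0.0.6", "10.0.0.7", "10.0.1.6"]  # as published in ntp.internal
MAX_OFFSET_SECONDS = 0.5  # arbitrary threshold; tune for your environment

client = ntplib.NTPClient()
for server in NTP_SERVERS:
    try:
        response = client.request(server, version=3, timeout=5)
    except Exception as exc:
        print(f"ALERT: {server} unreachable: {exc}")
        continue
    if abs(response.offset) > MAX_OFFSET_SECONDS:
        print(f"ALERT: {server} offset {response.offset:.3f}s exceeds threshold")
    else:
        print(f"OK: {server} offset {response.offset:.3f}s")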

In AWS, check out the Amazon Time Sync Service.

Step 3: Logging

Do not log locally. Always send logs across the network to a logging endpoint.

Your logging endpoint should be scalable, so it doesn't get overwhelmed or limited in how many logs it can ingest.

It must be encrypted in flight for both privacy and integrity, and it must be authenticated to ensure the right device is sending the right logs.

Logs should contain the timestamp of when they are received, as well as when devices sent them; and there should be minimal difference between these times.

And lastly, logs should be verbose enough that you do NOT need to go back to the original device to get more information. Get everything off the device, and you (or someone else) should never need to access the device itself directly. This handles the case where the device is compromised, no longer accessible, or has been bricked, deleted or otherwise removed.

Now that logs are in a uniform place, there are two things to do:

  1. Provision authenticated, encrypted access to those logs for the people who need to search them (and log their access to these logs!)
  2. Set up some automated alerts

In AWS, definitely use CloudWatch Logs. And remember, you can use CloudWatch Logs from your on-premises networks, over HTTPS, with authentication.
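
As a rough illustration of that on-premises case, shipping a log line with boto3 might look like this; it assumes AWS credentials are already configured and that the (placeholder) log group and stream already exist:

# Sketch: ship a log line to CloudWatch Logs over HTTPS with boto3.
# Assumes AWS credentials are configured and the log group/stream already exist;
# the region, group and stream names here are placeholders.
import time
import boto3

logs = boto3.client("logs", region_name="ap-southeast-2")  # placeholder region
logs.put_log_events(
    logGroupName="/infra/edge-router-01",      # placeholder
    logStreamName="syslog",                    # placeholder
    logEvents=[{
        "timestamp": int(time.time() * 1000),  # milliseconds since epoch
        "message": "auth failure: user=admin src=192.0.2.10",
    }],
)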

Step 4: Alerts

This is where the fun happens. How many things can you think of that would be an indicator of compromise (IOC)? Let's start with the simple: any access that fails to authenticate to the device should raise an alert. Your endpoint should not have unfettered public exposure, so the authentication attempts it sees should all be legitimate (a sketch of wiring this one up follows the list of alert types below).

Auth Failure: this could be a bot, even on your internal network, probing for access. Or perhaps it's just you before a coffee and you mistyped a password. Good to know where these come from as early as possible.

Auth Success: so you know the alerting is working, and so you have a record of what you are doing, it's nice to get confirmation showing it's you on the device. Or it could be compromised credentials being used. An auth success alert at 3am your local time could be a sign you're working late, or… something else.

Timestamp mismatch: the log receive time and the log time from the device could be out by a meaningful amount. This could be an indication that the submission of logs was delayed for some reason.

Device reboot: why should devices be unstable? Did they just flash a new firmware? Were they replaced or cloned by compromised devices?

Lack of regular log submission: a reliable heartbeat is very useful, so watch out for no logs when you expect at least something.

Config change: for critical components like routers, or other devices that should have a reasonably stable configuration, alerting on this is a nice confirmation of changes you (or someone else) have made.

Local device password change: if you can't use centralised access control and single sign-on, then you should alert on this. And you should probably also alert on this NOT having happened after a year.

Log access: this is becoming a little meta, but someone inspecting the logging system itself, to view the logs, may well be a reason for a notification.
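
To wire up the first of these, if your logs land in CloudWatch Logs, one approach is a metric filter plus an alarm. This is only a sketch: the log group, filter pattern and SNS topic are placeholders, and the pattern depends entirely on your device's log format.

# Sketch: turn auth failures in CloudWatch Logs into an alarm.
# The log group name, filter pattern and SNS topic ARN are placeholders.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

logs.put_metric_filter(
    logGroupName="/infra/edge-router-01",
    filterName="auth-failures",
    filterPattern='"auth failure"',  # depends on your device's log format
    metricTransformations=[{
        "metricName": "AuthFailures",
        "metricNamespace": "Infra/Security",
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="edge-router-01-auth-failure",
    Namespace="Infra/Security",
    MetricName="AuthFailures",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-southeast-2:123456789012:security-alerts"],  # placeholder
)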

Step 5: Alert Destinations and Escalations

Email is a terrible alert destination, but the easiest to set up. Then again, it's also the easiest to set up a rule to then ignore. Some people use Slack or other instant messenger interfaces.

One thing you will want is a way to determine all the alerts that have been triggered historically, filtered by device or device type (all switches), time span (last 7 days, last week), alert type (auth failure & auth success), etc.

Creating a dashboard to show these alerts will help you understand what’s happening.

A single auth failure is an interesting event, but a repeated auth failure, over a relatively small time window (an hour, a day) may be a brute force attack. A repeated reboot may be a device failing.
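
If those alerts (or the raw logs) are in CloudWatch Logs, one way to ask "how many auth failures per hour over the last day" is a Logs Insights query; a sketch, again with a placeholder log group and a pattern that depends on your log format:

# Sketch: count auth failures per hour over the last 24 hours with
# CloudWatch Logs Insights. The log group and query pattern are placeholders.
import time
import boto3

logs = boto3.client("logs")

query = logs.start_query(
    logGroupName="/infra/edge-router-01",  # placeholder
    startTime=int(time.time()) - 24 * 3600,
    endTime=int(time.time()),
    queryString="filter @message like /auth failure/"
                " | stats count(*) as failures by bin(1h)",
)

# Poll until the query completes, then inspect the hourly buckets.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in results.get("results", []):
    print({field["field"]: field["value"] for field in row})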

When a device (re-)boots, if it gives a firmware revision in its logging, how do you check that against the previously known firmware revision? (Hint: it's in your logs from the previous boot.) Is that the currently recommended firmware? Is there some form of automatic firmware update in place? Is it lower than the previous revision, which could be a forced downgrade to a known buggy firmware?
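
A sketch of that comparison, assuming you can already extract (timestamp, firmware) pairs from your boot logs; the records and the recommended version below are placeholder data:

# Sketch: compare the firmware version reported at this boot against the
# previous boot's version from your logs. The records and the recommended
# version are placeholder data; the parsing depends on your log format.
RECOMMENDED_FIRMWARE = (2, 4, 1)  # placeholder

def parse_version(text: str) -> tuple:
    return tuple(int(part) for part in text.split("."))

# (timestamp, firmware) pairs as extracted from boot log lines, oldest first.
boot_events = [
    ("2024-05-01T02:11:09Z", "2.4.1"),
    ("2024-06-02T03:00:14Z", "2.3.9"),  # current boot
]

previous, current = boot_events[-2], boot_events[-1]
prev_ver, curr_ver = parse_version(previous[1]), parse_version(current[1])

if curr_ver < prev_ver:
    print(f"ALERT: firmware downgraded from {previous[1]} to {current[1]}")
if curr_ver != RECOMMENDED_FIRMWARE:
    print(f"ALERT: firmware {current[1]} is not the recommended "
          f"{'.'.join(map(str, RECOMMENDED_FIRMWARE))}")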

Summary

Pretty quickly you start to see the complexity, depth and urgency of having strong logging and alerting in place. Without a trusted base to work from, the workloads in your environment cannot be trusted either.
