UniFi: Should I wait for the next Dream Machine Pro?

I switched to a 1 Gb/s NBN connection a few months back, but it soon became apparent that the original UniFi Security Gateway (USG) is no match for a 1 Gb/s link.

While I love the WiFi access points, the management interface, and the rapid firmware updates, the throughput limitations of the USG only became noticeable when the link speed went up. Ubiquiti Networks, the manufacturer of the UniFi range, has released faster products – throughput being the major selling point. And of course, the pricing goes up accordingly.

But the current top-of-the-line device, the Dream Machine Pro, gives me some pause for consideration.

The device has two “WAN links”: one is an RJ45 gigabit Ethernet port, the other is a 10 Gb/s SFP+ slot for a fibre module. I’d love to have a fail-over Internet connection, but the fibre connection isn’t an advantage to me.

Ubiquiti are not selling their LTE fail-over device in Australia. I’d have to drop the 10 Gb/s SFP+ port back to a vanilla 1 Gb/s RJ45 copper port to plug in an alternate LTE device. But then again, carrier plans for this pattern are expensive.

However, it could be that I have two RJ45 Internet connections; my NBN fibre-to-the-premises connection affords me up to four ISP connections. Now, the upstream link from my CPE to the Point of Interconnect (PoI) may be limited to 1 Gb/s, but having the ability to fail over to another ISP may be useful. Or I may want to route traffic by port or service to a different link (e.g., VPN traffic over link #2, web traffic over link #2, or perhaps just streaming video from some specific providers over the alternate link).

The Dream Machine Pro also has a built-in 8-port switch (RJ45), but none of the ports support Power over Ethernet (PoE). After all, the majority of links going into this device are going to be WiFi access points, and having to hang a separate 8-port PoE switch off it seems a waste. A long tail of customers would find that PoE here fills their needs without having additional switches to worry about.

I would also have expected more ports here, given the cost of the device: say 16 ports, even if only half of them were PoE.

The inclusion of Protect for video cameras is a neat idea, but having two local disks to RAID together would be nice. I have shied away from on-premises storage, but for large volumes of video, I still like having the highest-bandwidth version not traversing the WAN. So it’s great we have a one-disk option available, but it could be so much more awesome with some local resilience.

Of course, if Japan has residential 2 Gb/s Internet connections, would this device still be usable? I’m guessing Australia will max out at 1 Gb/s for a while…

So, I’m trying to decide if I dive in for the current Dream Machine Pro, or wait until it’s tweaked…

Integrating McAfee ESM SIEM and CloudTrail

McAfee Enterprise Security Manager (ESM) is a Security Information and Event Management (SIEM) product that helps collect and then correlate events from various data sources (McAfee info).

Gartner rates it in their Magic Quadrant for SIEM, shown here from 2019:

Gartner Magic Quadrant for SIEM, 2019

I was working with a customer recently who wanted to ingest their AWS CloudTrail logs into this product.

McAfee has implemented support for this, using an IAM access and secret key, reading from an SQS queue, and fetching the referenced files from S3.

AWS CloudTrail itself supports sending a notification to the Simple Notification Service for each log file delivered; it is left as an exercise to the customer to plumb the rest together. It’s not hard, and can be put into a CloudFormation template. Here it is: Download the CloudFormation Template.

Let’s pull apart the simple architecture that is wrapped up in the template linked above.

Parameters

The parameters to a template are the only items you likely need to customise for your environment:

  1. The S3 bucket name that you have CloudTrail delivering logs into (from the Trail config)
  2. The SNS topic name that you have CloudTrail sending file delivery notifications to (from the Trail config)
  3. The name of the new SQS queue to be created, which will catch the notifications and buffer them for McAfee to read. You can probably leave this as the default.
  4. The name of the IAM user to be created to give an off-cloud SIEM API credentials. You can probably leave this as McAfeeSIEM.
  5. The public IPv4 address range that your ESM’s outbound API calls to AWS will originate from.
  6. Your AWS Organizations “organization ID”: a string starting with “o-” that appears in the prefix of the CloudTrail files in S3.
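
If you prefer the SDK to the console for deployment, here is a minimal boto3 sketch of launching the stack. The parameter keys and file name shown are illustrative stand-ins; the real ParameterKey values come from the downloadable template:

```python
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical file and parameter names, for illustration only;
# check the actual template for the real ParameterKey values.
with open("mcafee-siem-cloudtrail.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="mcafee-siem-cloudtrail",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # the template creates an IAM user
    Parameters=[
        {"ParameterKey": "CloudTrailBucketName", "ParameterValue": "my-cloudtrail-bucket"},
        {"ParameterKey": "CloudTrailTopicName", "ParameterValue": "my-cloudtrail-topic"},
        {"ParameterKey": "QueueName", "ParameterValue": "McAfeeSIEMQueue"},
        {"ParameterKey": "SIEMUserName", "ParameterValue": "McAfeeSIEM"},
        {"ParameterKey": "SIEMSourceCidr", "ParameterValue": "203.0.113.0/24"},
        {"ParameterKey": "OrganizationId", "ParameterValue": "o-example123"},
    ],
)
```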

Inside the template

The resources section of the template is the guts of what gets created on our behalf.

The SQS queue is configured with long polling enabled. This reduces the number of polls being made should McAfee try a tight continual loop, but it also means that when the queue is already drained, McAfee remains connected for up to the polling duration, so it immediately receives the next message telling it to fetch a file.
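
To make that concrete, here is a sketch of what a long-polling consumer looks like with boto3, assuming the default queue name from the template. With WaitTimeSeconds set, the call holds the connection open for up to 20 seconds (the SQS maximum) instead of returning empty immediately:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="McAfeeSIEMQueue")["QueueUrl"]

# Long poll: blocks up to 20 seconds, returns as soon as a message arrives.
resp = sqs.receive_message(
    QueueUrl=queue_url,
    WaitTimeSeconds=20,
    MaxNumberOfMessages=10,
)
for msg in resp.get("Messages", []):
    print(msg["Body"])  # an SNS-wrapped CloudTrail file-delivery notification
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```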

An SQS policy is added to the queue to permit the selected SNS topic to publish messages to it, and then a subscription is defined to hook the two together.

Lastly, an IAM user is created, with a policy that permits access to process messages from the queue (read and delete messages, plus a few other APIs that McAfee documented), as well as access to list the contents of the target bucket and get the objects within the defined prefix.
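
As a rough illustration (not the template’s exact statement list – McAfee’s documentation lists the precise actions), the user’s policy is shaped something like this, with placeholder ARNs:

```python
import json

# Placeholder account ID, region, queue, bucket, and organization ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
                "sqs:GetQueueAttributes",
                "sqs:GetQueueUrl",
            ],
            "Resource": "arn:aws:sqs:ap-southeast-2:111122223333:McAfeeSIEMQueue",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-cloudtrail-bucket",
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-cloudtrail-bucket/AWSLogs/o-example123/*",
        },
    ],
}
print(json.dumps(policy, indent=2))
```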

Admin actions after deploying the template

With this CloudFormation stack deployed, go to the IAM console, find the new IAM user (McAfeeSIEM was the default), go to the Security credentials tab, and issue an access key pair. Take care to record the secret key – this is the only time you’ll see it; if you lose it, start a key rotation to get a fresh access/secret key pair.
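
The same can be done from the SDK; a minimal sketch with boto3, assuming the default user name:

```python
import boto3

iam = boto3.client("iam")

# The SecretAccessKey is only ever returned by this one call;
# record it securely before the response goes out of scope.
resp = iam.create_access_key(UserName="McAfeeSIEM")
print("AccessKeyId:", resp["AccessKey"]["AccessKeyId"])
print("SecretAccessKey:", resp["AccessKey"]["SecretAccessKey"])
```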

On the McAfee SIEM, insert the access key and secret key into the AWS CloudTrail config. If your SIEM has outbound Internet access (possibly via a proxy), then it should start to fetch messages from the SQS queue and process files.

You can look at the number of messages in the SQS queue as a debugging aid: if the queue depth is non-zero (and growing), then your SIEM is probably not fetching and clearing its queue messages.
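
A quick way to watch that queue depth, again sketched with boto3:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="McAfeeSIEMQueue")["QueueUrl"]

# A persistently growing count means messages are being delivered
# but not consumed (or consumed but not deleted).
attrs = sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=["ApproximateNumberOfMessages"],
)
print("Queue depth:", attrs["Attributes"]["ApproximateNumberOfMessages"])
```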

Evolution of Compute: Physical to Serverless

Unless you’ve been under a rock, you’ve seen the impact that Hyperscale Public Cloud has made on the IT industry. It wasn’t invented to be a fixed thing, but to be a continually evolving, improving thing.

And while many organisations will use SaaS platforms, those platforms themselves often run atop the IaaS and PaaS platforms of a hyperscale cloud platform.

One person’s SaaS is another person’s IaaS.

Me, James Bromberger

But it’s worth checking on the evolution of IT service delivery at a low level, for not everyone in the IT industry has seen what that looks like at this time.

Evolution of service delivery from Physical servers to Serverless.

Change is hard. Humans are bad at it. I’ve seen many who evolved from column 1 to column 2, and have felt they are “done”. They aren’t on board for the next wave of the evolution.

I suffer from this too. But there is a shortcut that I can offer: try to jump from where you are now to as far to the right as you can in one step.

Every one of these phases is a monumental shift in the way that services are delivered, requiring training and experience. There is a baggage of knowledge that engineers take with them, trying to work out what functions the same as before, and what is different. This is taxing, stressful, and unpleasant.

So rather than repeat this process in sequence, over years for each change, my recommendation is to see how far to the right you can jump. Some limitations will crop up that prevent you from leap-frogging all the way to Serverless, but that’s OK. Other services will not be thus constrained.

Well Architected, meet Well Maintained

In 2012, the Well Architected concept was born inside AWS. It is a set of principles that helps lead to success in the Cloud; at that time, that was the AWS EC2 environment. It’s well worth a read if you have not seen it. It has since been adopted by Microsoft for the Azure environment as well.

However, I want to move your attention from architecture time to operations time.

If you look at the traditional total life-cycle activities, there’s a lot of time and effort spent learning, adjusting, and implementing supporting technologies that are starting to become invisible in the Serverless world.

Let’s look at the operational activities done in a physical environment, and compare that to Serverless. I’ll skip the middle phases of the evolution shown above:

| Activity | Physical | Serverless |
| --- | --- | --- |
| Physical security | Required | Managed |
| Physical installation | Required | Managed |
| Capacity planning | Required | Managed |
| Network switching | Required | Managed |
| Hardware power planning | Required | Managed |
| Physical cooling | Required | Managed |
| Hardware procurement | Required | Managed |
| Hardware firmware updates | Required | Managed |
| OS installation | Required | Managed |
| OS patching | Required | Managed |
| OS upgrade | Required | Managed |
| OS licensing | Often required | Managed |
| Runtime selection | Required | Required |
| Runtime minor patching | Required | Managed |
| Runtime major version upgrade | Required | Required |
| App server selection | Required | Managed |
| App server minor patching | Required | Managed |
| App server major version upgrade | Required | Managed |
| Code base maintenance | Required | Required |
| Code base 3rd party library updates (SDKs) | Required | Required |
| Network encryption protocol and cipher upgrades (TLS, etc.) | Required | Required |

As you can see, there are a large number of activities that should be done regularly to ensure operational excellence. However, I am yet to see a traditional physical environment, or a virtualised on-premises environment, that actively does all of the above well.

It’s an easy test: wander into any Java environment, and ask what version of the Java runtime is deployed in production. The typical response is “we updated to Java 8 two years ago”. What that means is “we haven’t touched the exact deployed version of Java for two years”.

Likewise, ask what version of Windows Server is deployed. Anything older than 2016 shows a lack of agility and maintenance (and even 2016 is generous, given that 2019 has been out for nearly two years at this time).

I challenge those in IT operations to think through the above table and check the last time their service updated each row – post project launch. If it’s a poor show, the chances are you’re in “support mode”, and not “DevOps operations”.
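
Some of these rows are cheap to check automatically. For the TLS row, for example, here is a minimal Python sketch that reports the protocol version and cipher a service actually negotiates (example.com is a placeholder for one of your own endpoints):

```python
import socket
import ssl

def probe_tls(host: str, port: int = 443) -> None:
    """Print the TLS version and cipher suite negotiated with a server."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            version = tls.version()         # e.g. 'TLSv1.3'
            cipher, _, bits = tls.cipher()  # e.g. ('TLS_AES_256_GCM_SHA384', ...)
            print(f"{host}: {version}, {cipher} ({bits} bits)")

probe_tls("example.com")
```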

So what can be done to help do this maintenance?

Take it away. Stop it. While this work can be argued to be important, and interesting, you’re possibly better off spending that effort on the smaller list that remains in a Serverless environment.

Evolution Continues

We can’t see where this evolution will go next. We do see that identity, authentication, authorisation, and in-flight encryption remain key elements to be aware of.

What comes next, I can’t predict. I know many ideas will be thrown about, new or recycled, and some will work, while others will wither and disappear again.

The only constant in life is change.

Heraclitus, b. c. 535 BC

Buying a house: Electronic Settlement with PEXA

I have spent many years working with Landgate, the state government department of land administration. It’s a well-known AWS Case Study, and a platform that is available for other land jurisdictions of the world if they wish to move to it.

One of the integrations implemented is to and from the Electronic Lodgement Network Operators (ELNOs), to facilitate electronic settlement of property transactions. Only one ELNO is currently active in Australia: Property Exchange Australia, otherwise known as PEXA.

Using PEXA saves settlement agents and banks from having to send representatives to a specific location at a specific time with the assortment of cheques, paperwork, and other administration where, should one thing be out of order, settlement is delayed (a costly exercise). Many transaction types have now been mandated to be done via electronic interfaces, one of the first of which in Western Australia was the Discharge of Mortgage.

More than 80% of transactions on the land registry are a property being sold while under mortgage, to someone else who has also taken out a mortgage. This is called DTM: Discharge, Transfer, Mortgage. It’s one of the first transactions for which the Advara platform automated the validation of submitted data, saving huge amounts of manual effort.

For a transaction submitted via PEXA, the general turnaround time on data validation and transaction approval has now dropped to around 10.8 seconds, down from historical highs of ~30 days.

My Transaction

I was recently purchasing a new property (my home study is occupied by a rather adorable 5-year-old girl, IMHO), and armed with knowledge of the workings of the land titling system, I figured I’d actively watch my settlement transaction.

PEXA has created a user application called PEXA Key, for Android and iPhone, that permits sellers and purchasers to be invited to their property settlement transaction.

All a settlement agent needs to do is collect a mobile phone number and email address from the seller or purchaser, and enter them into the PEXA workspace.

I enquired about this with the real estate agent selling the property, and then in turn my settlement agent, and neither of them had heard of this, much less actually done it. So I pushed on, and lo, managed to have them submit my details.

This post shows what happens next.

A Text Message

I received a text message almost immediately – with variables shown where real names were used:

Hi JAMES,

${SETTLEMENT_AGENT} has invited you to download the PEXA Key app to track your settlement. Check your email for more details. Get the app free here key.pexa.com.au/download or exclusively on Google Play or Apple App Store.

Text message I received after my settlement agent registered me in the workspace.

I quickly complied, and was then sent a security activation code.

The app then told me when my settlement had been scheduled for, and any pending tasks that I was responsible for (as it happened, I had already done everything, so it was fine).

This immediately gave me peace of mind, knowing the transaction workspace was set up and pending.

The morning of settlement came around, and I was greeted with this:

It’s settlement day

I nervously checked the application every few minutes to see what would happen next.

And so it begins

It turned out that the process was initiated around 10:25am or so, after which the PEXA Key application showed:

Settlement started

OK, strap in, the wheels are in motion.

It took around 40 minutes until all was done and dusted, and the final result came through:

Property settled

A few hours later, a new set of house keys was in my hand.

This has to be the most expensive testing I have ever personally done! 😉

The inclusion of the end customer in this process, with just simple visibility, is something that I think should be offered to all parties, to bring confidence and clarity about the progress of (or inhibitors to) the transaction.

Key to this (pun intended, in two ways) is the swift and efficient recognition of land property transactions. My colleagues and I have worked hard to uplift the validation and security of the land registry system for years, and continue to do so. And as a customer of this system, it worked smoothly.

Some coverage of PEXA Key is here in Cyber Security Magazine (saying this stops an avenue of attack).

I recommend anyone buying or selling property to ask their agent to invite them into the settlement on PEXA using PEXA Key. As many in the real estate industry are unaware of this, you may need to explain it (send them this article’s URL), but it’s worth it.


Disclosure: I do not work for PEXA, nor have I been asked by them (or anyone else) to write this. I share the above to assist anyone else who would like to see their property transactions being processed. While PEXA is a national (Australian) electronic settlement platform, the turnaround time for each separate land jurisdiction to validate and register the transaction will vary. Indeed, I’d challenge any of them to beat 10.8 seconds for full validation!

Why you still have a VPN in 2020

Many organisations are today able to access their email, corporate video conferencing and other services while mobile and without being connected to their company VPN endpoint.

Universal access to these services over the Internet – on IPv4 and IPv6 – works seamlessly wherever you are. It’s liberating, and no one is jumping up and down asking about the firewall or VPN.

Key amongst the platforms being used to deliver this is Microsoft Office 365 and its various services.

So why do you still have a corporate VPN? Why do your existing corporate IT services require you to jump through hoops to access them?

Let me be direct: your corporate strategy on security is based around lowest cost, lowest effort. This budget approach also means the least amount of work for the technology staff who operate these services for your organisation.

Office 365, Salesforce, and a slew of other universally-just-works-over-the-Internet solutions have something that the bespoke solutions you have in-house do not: funding to operate as such.

The main premise when you make services available over the Internet is a commitment to do several things from an operational perspective:

  1. Support newer encryption protocols (TLS versions) over time, and remove older ones over time
  2. Add new encryption ciphers over time, and remove older ciphers over time
  3. Use federated sign-on (single sign-on)
  4. Maintain (update) the single sign-on service over time, with continual uplift (e.g., introduce MFA)
  5. Examine logs for anomalies in access, automatically lock out affected users, and iterate improvements back into the application

Your organisation probably does not do this. Your company’s IT operations team probably “keep the lights on”, ensuring the currently deployed application is responsive, poking it with a stick to ensure it moves. They probably didn’t uplift to TLS 1.3 in the last two years, and they probably haven’t removed TLS 1.1 and below.
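
For contrast, here is what part of that uplift looks like in code: a sketch of a Python server-side TLS context that refuses TLS 1.1 and below (the certificate paths are hypothetical):

```python
import ssl

# Server-side context that only negotiates TLS 1.2 or newer (Python 3.7+).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Hypothetical certificate and key paths.
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
```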

And while they collect application logs, any review is probably pretty basic.

Why?

Doing so requires time, training, effort, experience, and knowledge. Until you have a 24×7 DevOps team able to turn on a dime, a CISO who represents the security risk and operational response to the board, and a few other tell-tale signs, your organisation is not ready.

All of the above requires a strong vision, strong senior leadership from the top, and a strong funding model that prioritises the digital security of the company.

A traditional VPN means there is a controlled ingress point (in theory) as a single point to protect. Here you need to focus on encryption and authentication, but quite often organisations just deploy firmware on a device, install an initial config, and leave the device alone for years.

I’ve seen some MSPs deploy minor version updates on their security endpoints, but never adopt the major version updates they are entitled to, despite the customer paying support for the major upgrades. And even when the major version upgrades were installed, the config was not adjusted to enable newer capabilities or disable outdated options.

So, next time you have to VPN in to the company, ask yourself: why? Why are you spending money on expensive bottlenecks that slow you down, instead of on mature operations? The value proposition isn’t there. Budget. Focus. Leadership.