Evolution of Compute: Physical to Serverless

Unless you’ve been under a rock, you’ve seen the impact that Hyperscale Public Cloud has made on the IT industry. It was never invented to be a finished thing, but to be a continually evolving, improving one.

And while many organisations will use SaaS platforms, those platforms themselves often run atop the IaaS and PaaS offerings of a hyperscale cloud provider.

One person’s SaaS is another person’s IaaS.

Me, James Bromberger

But it’s worth checking on the evolution of IT service delivery at a low level, for not everyone in the IT industry has seen what that looks like at this point in time.

Evolution of service delivery from Physical servers to Serverless.

Change is hard. Humans are bad at it. I’ve seen many who evolved from column 1 to column 2, and have felt they are “done”. They aren’t on board for the next wave of the evolution.

I suffer from this too. But there is a shortcut that I can offer: try to jump from where you are now to as far to the right as you can in one step.

Every one of these phases is a monumental shift in the way that services are delivered, requiring training and experience. There is an overhead of knowledge baggage that engineers carry with them, trying to work out what functions the same as before, and what is different. This is taxing, stressful, and unpleasant.

So rather than repeat this process in sequence, over years for each change, my recommendation is to see how far to the right you can jump. Some limitations will crop up that prevent you from leap-frogging all the way to Serverless, but that’s OK. Other services will not be thus constrained.

Well Architected, meet Well Maintained

In 2012, the Well Architected concept was born inside AWS. It is a set of principles that helps lead to success in the Cloud; at that time, that meant the AWS EC2 environment. It’s well worth a read if you have not seen it. It has since also been adopted by Microsoft for the Azure environment.

However, I want to move your attention from architecture time to operations time.

If you look at the traditional total life-cycle activities, there’s a lot of time and effort spent learning, adjusting, and implementing supporting technologies that are starting to become invisible in the Serverless world.

Let’s look at the operational activities done in a physical environment, and compare that to Serverless. I’ll skip the middle phases of evolution shown above:

Activity | Physical | Serverless
Physical security | Required | Managed
Physical installation | Required | Managed
Capacity Planning | Required | Managed
Network switching | Required | Managed
Hardware power planning | Required | Managed
Physical cooling | Required | Managed
Hardware procurement | Required | Managed
Hardware firmware updates | Required | Managed
OS installation | Required | Managed
OS patching | Required | Managed
OS upgrade | Required | Managed
OS licensing | Often Required | Managed
Runtime selection | Required | Required
Runtime minor patching | Required | Managed
Runtime major version upgrade | Required | Required
App server selection | Required | Managed
App server minor patching | Required | Managed
App server major version upgrade | Required | Managed
Code base maintenance | Required | Required
Code base 3rd party library updates (SDKs) | Required | Required
Network encryption protocol and cipher upgrades (TLS, etc) | Required | Required

As you can see, there is a large number of activities that should be done regularly to ensure operational excellence. However, I am yet to see a traditional physical environment, or a virtualised on-prem environment, that actively does all of the above well.

It’s an easy test: wander into any Java environment, and ask what version of the Java runtime is deployed in production. The typical response is “we updated to Java 8 two years ago“. What that means is “we haven’t touched the exact deployed version of Java for two years“.

Likewise, ask what version of Windows Server is deployed? Anything older than 2016 (and even that is generous, given 2019 has been out for nearly two years at this time) shows a lack of agility and maintenance.

I challenge those in IT operations to think through the above table and check the last time their service updated each row – post project launch. If it’s a poor show, the chances are you’re in “support mode”, and not “DevOps Operations”.

So what can be done to help do this maintenance?

Take it away. Stop it. While it can be argued to be important and interesting, you’re possibly better off spending that effort on the smaller list that remains in a Serverless environment.
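To make that smaller list concrete, here is a minimal sketch in Python (using boto3; the function name and IAM role ARN are hypothetical) of deploying a function to AWS Lambda. Notice which rows of the table above still land on you – runtime selection and the code base – while the OS, hardware and capacity rows beneath them are managed:

    import io
    import zipfile
    import boto3

    lambda_client = boto3.client("lambda")

    # Package a trivial handler in memory: code base maintenance stays with you.
    source = b"def handler(event, context):\n    return {'ok': True}\n"
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("app.py", source)

    lambda_client.create_function(
        FunctionName="evolution-demo",                      # hypothetical name
        Runtime="python3.12",                               # runtime selection: still yours
        Role="arn:aws:iam::123456789012:role/lambda-exec",  # hypothetical execution role
        Handler="app.handler",
        Code={"ZipFile": buf.getvalue()},
    )
    # A runtime *major* version upgrade also remains your job: change the Runtime
    # value and redeploy. The OS beneath it, its patching, the hardware, power and
    # capacity planning are the provider's problem.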

Evolution Continues

We can’t see where this evolution will go next. We do see that identity, authentication, authorisation, and in-flight encryption remain key elements to be aware of.

What comes next, I can’t predict. I know many ideas will be thrown about, new or recycled, and some will work, while others will wither and disappear again.

The only constant in life is change.

Heraclitus, b. 565 BC

Buying a house: Electronic Settlement with PEXA

I have spent many years working with Landgate, the Western Australian state government department for land administration. It’s a well-known AWS Case Study, and a platform that is available to other land jurisdictions of the world if they wish to move to it.

One of the integrations that was implemented is to and from the Electronic Lodgement Network Operators (ELNOs), to facilitate electronic settlement of property transactions; only one ELNO is currently active in Australia, Property Exchange Australia, otherwise known as PEXA.

Using PEXA saves settlement agencies and banks from having to send representatives to a specific location at a specific time with the assortment of cheques, paperwork, and other administration that, should one thing be out of order, causes settlement to be delayed (a costly exercise). Many transaction types have now been mandated to be done via electronic interfaces, one of the first of which in Western Australia was the Discharge of Mortgage.

More than 80% of transactions on the land registry are a property being sold while under mortgage, to someone else who has also taken out a mortgage. This is called DTM: Discharge, Transfer, Mortgage. It was one of the first transactions for which the Advara platform automated the validation of the submitted data, saving huge amounts of manual effort.

For a transaction submitted via PEXA, the general turnaround time on data validation and transaction approval has now dropped to around 10.8 seconds, down from historical highs of ~30 days.

My Transaction

I was recently purchasing a new property (my home study is occupied by a rather adorable 5-year-old girl, IMHO) and, armed with a knowledge of the workings of the land titling system, I figured I’d actively watch my settlement transaction.

PEXA has created a user application called PEXA Key, for Android and iPhone, that permits sellers and purchasers to be invited to their property settlement transaction.

All a settlement agent needs to do is collect a mobile phone number and email address from the seller or purchaser, and enter them into the PEXA workspace.

I enquired about this with the real estate agent selling the property, and then in turn my settlement agent, and none of them had heard of this, much less actually done it. So I pushed on, and lo, managed to have them submit my details.

This post shows what happens next.

A Text Message

I received a text message almost immediately – with variables shown where real names were used:

Hi JAMES,

${SETTLEMENT_AGENT} has invited you to download the PEXA Key app to track your settlement. Check your email for more details. Get the app free here key.pexa.com.au/download or exclusively on Google Play or Apple App Store.

Text message I received after my settlement agent registered me in the workspace.

I quickly complied, and was then sent a security activation code.

The app then told me when my settlement had been scheduled for, and any pending tasks that I was responsible for (as it happened, I had already done everything, so it was fine).

This immediately gave me peace of mind, knowing the transaction workspace was set up and pending.

The morning of settlement came around, and I was greeted with this:

It’s settlement day

I nervously checked the application every few minutes to see what would happen next.

And so it begins

It turned out that the process was initiated around 10:25am or so, after which the PEXA Key application showed:

Settlement started

OK, strap in, the wheels are in motion.

It took around 40 minutes until all was done and dusted, and the final result came through:

Property settled

A few hours later, a new set of house keys was in my hand.

This has to be the most expensive testing I have ever personally done! 😉

The inclusion of the end customer in this process, with just simple visibility, is something that I think should be offered to all parties in the transaction, to bring confidence and clarity about the progress of, or inhibitors to, the transaction.

Key to this (pun intended, in two ways) is the swift and efficient recognition of land property transactions. My colleagues and I have worked hard to uplift the validation and security of the land registry system for years, and continue to do so. And as a customer of this system, I found it worked smoothly.

Some coverage of PEXA Key is here in Cyber Security Magazine (saying this stops an avenue of attack).

I recommend anyone buying or selling property to ask their agent to invite them into the settlement on PEXA using PEXA Key. As many in the real estate industry I have spoken to are unaware of this, you may need to explain it (send them this article’s URL), but it’s worth it.


Disclosure: I do not work for PEXA, nor have I been asked by them (or anyone else) to write this. I share the above to assist anyone else who would like to see their property transactions being processed. While PEXA is a national (Australian) electronic settlement platform, the turnaround time for each separate land jurisdiction to validate and register the transaction will vary. Indeed, I’d challenge any of them to beat 10.8 seconds for full validation!

Why you still have a VPN in 2020

Staff at many organisations are today able to access their email, corporate video conferencing and other services while mobile, and without being connected to their company VPN endpoint.

Universal access to these services over the Internet – on IPv4 and IPv6 – just works, seamlessly, wherever you are. It’s liberating, and no one is jumping up and down asking about the firewall or the VPN.

Key amongst the platforms being used to deliver this is Microsoft Office 365 and its various services.

So why do you still have a corporate VPN? Why do your existing corporate IT services require you to jump through hoops to access them?

Let me be direct: your corporate strategy on security is based around lowest cost, lowest effort. This budget approach also means the least amount of work for the technology staff who operate these services for your organisation.

Office365, Salesforce, and a slew of other universally-just-works over the Internet solutions have something that the bespoke solutions you have in-house do not: funding to operate as such.

The main premise when you make services available over the internet is a commitment to do several things from an operational perspective:

  1. Support newer encryption protocols (TLS) over time, and remove older encryption protocols (TLS) over time
  2. Add new encryption ciphers over time, and remove older encryption ciphers over time
  3. Use federated sign-on (single sign-on)
  4. Maintain (update) the single sign on service over time, with continual uplift (eg, introduce MFA)
  5. Examine logs and look for anomalies in access, and then automatically lock out a user, and iterate improvements into the application

Your organisation probably does not do this. Your company’s IT operations team probably “keep the lights on”, ensuring the currently deployed application is responsive, poking it with a stick to check it still moves. They probably didn’t uplift to TLS 1.3 in the last two years, and they probably haven’t removed TLS 1.1 and below.
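If you want to check where one of your own endpoints sits, here is a minimal sketch using only the Python standard library (www.example.com is a placeholder; substitute your own service) that attempts a handshake pinned to each TLS version in turn:

    import socket
    import ssl

    HOST = "www.example.com"   # placeholder: test your own endpoint instead
    PORT = 443

    def negotiates(version: ssl.TLSVersion) -> bool:
        """Attempt a handshake pinned to exactly one TLS version."""
        ctx = ssl.create_default_context()
        ctx.minimum_version = version
        ctx.maximum_version = version
        try:
            with socket.create_connection((HOST, PORT), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=HOST):
                    return True
        except (ssl.SSLError, OSError):
            # Caveat: a modern local OpenSSL may itself refuse TLS 1.0/1.1, so a
            # failure here can be a client-side refusal rather than the server's.
            return False

    for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                    ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
        print(f"{version.name}: {'accepted' if negotiates(version) else 'rejected'}")

A well-maintained endpoint should accept TLS 1.2 and 1.3, and reject everything older.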

And while they collect application logs, any review is probably pretty basic.

Why?

Doing so requires time, training, effort, experience and knowledge. Until you have a 24×7 DevOps team able to turn on a dime, a CISO who represents the security risk and operational response to the board, and a few other tell-tale signs, your organisation is not ready.

All of the above requires a strong vision, strong senior leadership from the top, and a strong funding model that prioritises the digital security of the company.

A traditional VPN means there is a controlled ingress point (in theory) as a single point to protect. Here you need to have a focus on encryption and authentication, but quite often organisations just deploy firmware to a device, install an initial config, and leave the device untouched for years.

I’ve seen some MSPs deploy minor version updates on their security endpoints, but never adopt the major version updates they are entitled to, despite the customer paying support for the major upgrades. And even when the major version upgrades were installed, the config was not adjusted to enable newer capabilities, or to disable outdated options.

So, next time you have to VPN in to the company, ask yourself: why? Why are you spending money on expensive bottlenecks that slow you down, instead of on mature operations? The value proposition isn’t there. Budget. Focus. Leadership.

AWS Certification trends (on LinkedIn)

I am always trying to find great talent; it’s part of being a Practice Lead in a large consulting organisation to find and develop talent. I work with a team of recruiters who are constantly finding and screening people for the many roles we have.

I’ve been a big proponent of the AWS Certifications for a number of reasons; amongst which are the value and confidence to the holder, the value to the partner, and the value to the customer. I helped contribute questions to the AWS Solution Architect Professional certification in 2014 whilst passing through Herndon, near Washington DC, as an AWS employee, and again in February 2020 in San Francisco as an industry Subject Matter Expert, just before COVID-19 started closing down travel.

Today I took to LinkedIn, and did a search for the various AWS Certifications, and found a tally that looked interesting. These numbers are by no means authoritative, and could just be a reflection of the network of connections that I have.

AWS Certification | Tally | Launch Year | #/Year (to 2020)
Solution Architect Associate* | 311,000 | 2013 | 44,428
Developer Associate* | 189,000 | 2014 | 31,500
Cloud Practitioner* | 103,000 | 2017 | 34,333
Solution Architect Professional* | 94,000 | 2014 | 15,667
DevOps Engineer Professional* | 57,000 | 2014 | 9,500
SysOps Associate* | 29,000 | 2017 | 9,667
Security Specialty* | 12,000 | 2018 | 6,000
Networking Specialty* | 7,800 | 2018 | 3,900
Database Specialty* | 7,200 | 2019 | 7,200
Data Analytics Specialty | 6,300 | 2019 | 6,300
Big Data Specialty (retired/renamed to Data Analytics) | 81,000 | 2014 – 2019 | 16,200 #
Machine Learning Specialty | 5,300 | 2019 | 5,300
Alexa Skill Builder Specialty | 546 | 2019 | 549

AWS Certifications as found on LinkedIn, 18/9/2020. * Denotes certifications I hold. # Only calculated over the five years this was active.
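For clarity, the #/Year column is simply the tally divided by the number of years between launch and 2020 (some rows are rounded). A small Python sketch, using three rows from the table that divide evenly:

    # How the "#/Year (to 2020)" column above is derived.
    certs = {
        # name: (tally from the LinkedIn search, launch year)
        "DevOps Engineer Professional": (57_000, 2014),
        "Security Specialty": (12_000, 2018),
        "Database Specialty": (7_200, 2019),
    }

    for name, (tally, launched) in certs.items():
        per_year = tally // (2020 - launched)
        print(f"{name}: {per_year:,} per year")
    # DevOps Engineer Professional: 9,500 per year
    # Security Specialty: 6,000 per year
    # Database Specialty: 7,200 per year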

With such a low number for the Alexa certification, I expect the source numbers are not complete. Many people in certain industries (eg, intelligence services) will not put their profile online.

But regardless, let’s review what we see…

The clear winner is the venerable Solution Architect Associate, with the largest number per annum and the largest number in total. It’s seen as the initial certification amongst the technical certs, and is regularly reported as one of the most valuable in the industry with respect to salary expectations. It’s also the cert I have held the longest, having been part of the very first cohort to pass it in January 2013.

While the Developer Associate certification is in second place by total number, it is just eclipsed on a yearly basis by the number of people who have taken the Cloud Practitioner Foundational certification. The Cloud Prac is billed as an entry-level, non-technical certification, so its appeal is to an even wider audience – the technical team can obtain it relatively easily, and the non-technical roles involved in total service delivery can achieve it as well.

At the Professional level, it seems the demand for certified Architects outweighs the DevOps Engineers almost 2:1; I suspect this is as a natural progression from that initial SA Associate.

The Data Analytics certification replaced the original Big Data cert last year; this gives us an insight into the change in demand. Over its active lifetime, Big Data drove 16,200 per year – its replacement sits at just over a third of the prior demand. Perhaps the data analytics hype is stabilising?

The total number of certifications reported above is 903,146; just shy of a million certifications in 7 years (and probably more, given the limitations of the data), excluding re-certifications (required every 3 years, now).

Let’s see what this looks like a year from now. New AWS certifications will likely launch, continuing to help validate and differentiate experienced Cloud engineers.

Goodbye, iiNet.

I first found my way online around 1991, calling into BBSes in Australia (such as Dialix). When I arrived at the University of Western Australia as an undergraduate in 1994, the ISPs were starting to be born, and I subscribed to the Omen Internet ISP, run by Mark Dignam and his brother.

In the early years of market acquisition, Omen was consumed by iiNet, and thus I became an iiNet subscriber on ADSL. iiNet was itself founded and staffed by friends across the industry in Perth, and during this period there was plenty of innovation happening in the organisations across ADSL and DSLAMs, connectivity, routing, and speed. They were one of the first to offer naked ADSL – not requiring a telephone number subscription.

I pushed off to the UK in 2003, and upon my 2010 return, I subscribed again to iiNet. They were (or had been) Perth-based and started by friends, and plenty of friends had worked there in engineering and other roles.

As I had experimented with IPv6 tunnels back in 1999 from UWA, I looked for this in iiNet, and found only 6RD – a tunnelled/encapsulated IPv6 offering. The downside of this approach was that packets were tunnelled from the customer site to iiNet in Sydney. As I was in Perth, my IPv4 traffic would hit local cache endpoints, but IPv6 would traverse 50ms of Australia before getting the chance to peer with cache services. It was… sub-optimal.

However, iiNet started an IPv6 blog, and showed promise that technical engineering was continuing.

Sadly, that last saw a post in 2013, and since then, crickets.

But that was not all.

It seemed that every year, iiNet would silently introduce a new service plan. This plan would be nearly identical to the one customers were already on, but with a slightly larger included data allowance, or slightly faster, or slightly cheaper – in any case, always better. Customers would have to notice this newer plan, and then request to move to it (later, this was self-service in the iiNet customer Toolbox). But it always took an action by the customer to ensure they continued to get value from iiNet. This meant that customers couldn’t trust iiNet to always be giving them the best option. I recently discovered I was paying $10/month more than other customers, for the same speed internet access, and still unlimited downloads.

I don’t think this is overly customer focused. They are looking after their own interests, rather than long-term customer satisfaction and retention.

So coupled with the decline in engineering, and faced with an impressive price/performance offer from a competitor, I have finally churned away from iiNet.

I contracted with Aussie Broadband on Monday 1st June at 9am. By 11am the same day I had a 1Gb/sec internet connection with native IPv6 enabled (FTTP). I am in the process of porting over the home phone number I have had with iiNet for a decade.

Am I paying more? Yes.

Is it better? Yes.

I went from a 50/20 NBN unlimited plan with a VoIP service with all Australian land-line and mobile calls included; it was AU$89/month, while iiNet’s new offering was $79. I ended up on a 1000/50 NBN unlimited plan, with a VoIP service with all land-line and mobile calls included, for $169/month. 2x the price, 20x the speed.

Does that make it 10 times better? Hmm….

But more importantly, as they introduced this plan, Aussie indicated that their existing customers on the legacy, more expensive yet slower plans would be migrated to it without them having to lift a finger. Proactively better for their existing customers.

This breeds customer trust.

So with two knock-out blows — innovation in engineering, and customer focus — I finally pulled the pin, giving up on the hope that the iiNet of old would engineer its way towards a modern ISP with a strong customer focus.

I have had a number of friends and colleagues move to Aussie Broadband in the last few months, and thus far I haven’t seen anyone have any issues that haven’t been resolved quickly and capably. What held me back was my included phone number via iiNet, but I have now ported that phone number across to Aussie as well.