Software License Depreciation in a Cloud World

Much effort is spent on preserving and optimising software licences when organisations shift their workloads to a cloud provider. The licence is seen as a “sunk cost”, something that needs to be taken whole into the new world, without question.

However, some vendors don’t like their customers using certain cloud providers, and are making things progressively more difficult for those organisations that want (or are required) to keep their software stack well maintained.

Case in point: one software vendor that operates its own cloud platform has made significant changes to its licensing, progressively removing customers’ rights to run their acquired licences in a competitor’s cloud.

I say progressively because customers can continue to run the (now older) versions of the software released before the point in time the licensing was modified.

The Security Focus

Security in IT is a moving target. There are always better ways of doing something, and previous ways that once were the best but are now deemed obsolete.

Let me give you a clear example: network encryption in flight. The dominant protocol used to negotiate this is called Transport Layer Security (TLS), and it’s something I’ve written about many times. There are different versions (and if you dig back far enough, it even had a different name – SSL, or Secure Sockets Layer).

Older TLS versions have been found to be weak, and newer versions have been implemented to replace them.

But certain industry regulators have mandated only the latest versions be used.

Support for TLS is embedded in both your computer’s operating system and certain applications that you run. This permits an application to make outbound connections using TLS, as well as to listen for and receive connections protected with TLS.

Take a database server: it’s listening for connections. Unless you’ve been living under a rock, the standard approach these days is to insist on using encryption in flight in each segment of your application. Application servers may access your database, but only if the connection is encrypted – despite them sitting in the same data centre, possibly in the same rack or on the same physical host! It’s an added layer of security, and the optimisations done mean it’s rarely a significant overhead compared to the eavesdropping protection it grants you.
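To make that concrete, here is a minimal sketch – in Python, using psycopg2 against a PostgreSQL endpoint, with the hostname, credentials and certificate path as placeholders – of an application client that refuses to connect unless the link is encrypted and the server certificate verifies:

```python
# Minimal sketch: a client that insists on encryption in flight to the database.
# Hostname, database, credentials and certificate path are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="db.internal.example.com",
    dbname="appdb",
    user="app_user",
    password="not-in-source-control",
    sslmode="verify-full",                            # require TLS *and* verify the cert + hostname
    sslrootcert="/etc/ssl/certs/rds-ca-bundle.pem",   # CA bundle the server certificate must chain to
)

with conn.cursor() as cur:
    # pg_stat_ssl reports whether this backend connection is actually using TLS, and which version.
    cur.execute("SELECT ssl, version FROM pg_stat_ssl WHERE pid = pg_backend_pid()")
    print(cur.fetchone())
```

The same idea applies whatever the driver: push the “must be encrypted” decision into the client configuration, rather than hoping the server enforces it for you.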

Your operating system from, say, 2019 or before may not support the latest TLS 1.3 – some vendors were pretty slow to implement support for it, and only did so when you installed a new version of the entire operating system. And some application providers didn’t integrate the increased capability (or a control to permit or limit the TLS version) into those older versions of their software from 2019 or earlier.

But in newer versions they have fixed this.

Right now, most compliance programs require only TLS 1.2 or newer, but it is foreseeable that, in future, organisations will be required to “raise the bar” (or drawbridge) and use only TLS 1.3 (or newer), at which point all that older software becomes unusable.
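For workloads fronted by a load balancer, “raising the bar” can be as small as selecting a stricter security policy on the listener. A hedged sketch in Python with boto3 – the listener ARN is a placeholder, and the policy name should be checked against the current ELB documentation before use:

```python
# Illustrative sketch: tighten an existing HTTPS listener to a policy that only
# negotiates TLS 1.2/1.3. The ARN is a placeholder; verify the policy name against
# the current AWS documentation (a TLS 1.3-only policy also exists).
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:ap-southeast-2:123456789012:listener/app/my-alb/abc/def",
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
)
```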

Those licences become worthless.

Of course, the vendor would love you to take a new licence, but only if you don’t use other cloud providers.

Vendor Stickiness

At this point, you may be thinking that this is not a great customer relationship. You have an asset that will, over time, become useless, and newer terms restrict where you can use your licence.

The question then turns to “why do we use this vendor?”. Often it is for historical reasons: “we’ve always used XYZ database”, or “we already have a site licence for their products, so we use it for everything”. Turns out, that’s a trap. Trying to squeeze out cost savings by forcing technology decisions based on what you already own may preclude you from keeping flexibility in your favour.

For some in the industry, the short-term goal is the only objective; they sign a purchase order to reach an immediate objective, without taking the longer-term view of where that is leading the organisation – even if it is backing them into a corner. They celebrate the short-term win, get a few games of golf out of it, and then go hunting for their next role elsewhere, using the impressive short-term saving as their report card.

A former colleague of mine once wrote that senior executive bonuses shouldn’t be paid out in the same calendar year, but delayed (perhaps 3 years) to ensure that the longer term success was the right outcome.

Those with more appetite for change have, over the last decade, been embracing Open Source solutions for more of their software stack. The lack of licence restrictions – and licence cost – makes it palatable.

The challenge is having a team who can not only implement potential software changes, but also support a new component in your technology stack. For incumbent operations and support teams, this can be an upskilling challenge; some won’t want to learn something new, and will churn up large amounts of Fear, Uncertainty and Doubt (FUD). Ultimately, they argue it is better to just keep doing what we’ve always done, and pay the financial cost, rather than spend the effort to do something better.

Because better is change, and change is hard.

An Example

Several years ago, my colleagues helped rewrite a Java-based application and change the database from Oracle to PostgreSQL. It took a few months from start to finish, with significant testing. Both the Oracle and PostgreSQL databases ran happily on the AWS Relational Database Service (RDS). The database was simple table storage, but the original application developers already had a site licence for Oracle, and since that’s what they had, that’s what they used.

At the end of the project, the cost savings were significant. The return on investment for the project services to implement the change was around 3 months, and now, years later, the client is so much better off financially. It changed the trajectory of the total cost of ownership (TCO) spend.

The coming software apocalypse

So all these licences that are starting to hold back innovation are becoming progressively more problematic. The next time security requirements tighten, you’re going to hear a lot of very large, legacy software licence agreements disintegrate.

Meanwhile, some cloud providers can bundle the software licence into the hourly compute usage fee. If you use it, you pay for it; when you don’t use it, you don’t pay for it. If you want a newer version, you have the flexibility to move to it – or perhaps even to stop using it altogether.

Time to minimise public IPv4 usage in the AWS Cloud

It was always going to happen. We’ve been watching the exhaustion of the 32-bit address space of IPv4 for more than 20 years, and we’ve had the solution available for even longer: IPv6.

I’ve written many times about IPv6 adoption and migration on this blog. I’ve spoken many times with colleagues about it. I’ve presented at AWS User Groups about using IPv6 in AWS. And when I worked at AWS 10 years ago, I championed, as a competitive advantage, adding IPv6 to all the things where IPv4 was in use.

The adoption has been slow. Outside of the Cloud, ISP support has been mixed, depending on whether or not they have the engineering capability to uplift legacy networks. Let’s be clear – those ISPs who removed their engineers and minimised innovation are about to have a lot of work to do, or face tough conversations with customers.

For those that have already done the work, this week’s AWS announcement about charging for public IPv4 address space from 2024 is a non-issue. For others, it’s going to start to mean some action.


Let’s start with the basics; go have a read of the AWS Announcement: New – AWS Public IPv4 Address Charge + Public IP, posted 28 July 2023.

You’re back? OK, so at the time of blogging, charges start in 2024. Currently, the first public IPv4 address assigned to an instance is not charged for, but soon it will be half a US cent per hour – on a 744-hour month, that’s US$3.72. Not much, unless you have hundreds of them.
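If you want a rough sense of your exposure, a quick sketch along these lines (Python with boto3; the region is a placeholder, and the rate and hours are just the figures above) counts the public IPv4 addresses attached to network interfaces in an account and estimates the monthly charge:

```python
# Rough sketch: count public IPv4 addresses on network interfaces in one region and
# estimate the monthly cost using the announced US$0.005/hour rate and a 744-hour month.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

public_ips = set()
for page in ec2.get_paginator("describe_network_interfaces").paginate():
    for eni in page["NetworkInterfaces"]:
        association = eni.get("Association", {})
        if "PublicIp" in association:
            public_ips.add(association["PublicIp"])

hourly_rate = 0.005       # USD per public IPv4 address per hour
hours_per_month = 744
monthly = len(public_ips) * hourly_rate * hours_per_month
print(f"{len(public_ips)} public IPv4 addresses ~ US${monthly:.2f} per month")
```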

Selling an IPv4 netblock

In the last few years I helped a government agency “sell” an unused /16 IPv4 netblock for several million dollars. They had two of them, and had only ever used a few /24 ranges from their first block; the second block was not even announced anywhere. There was no sound plan for keeping them.

The market price for a large contiguous block of addresses keeps going up – four years ago it was around US$22 per IPv4 address (and a /16 is 65,536 of them, so just over US$1.4M). Over time, large contiguous address blocks have become more valuable. Only one event would stop this: when no one needs them any more. And that event is the tipping point into widespread (default) usage of IPv6, at which time they drop towards worthless.

The tipping point just got closer.

Bringing it back to now

So with this announcement, what do we see? Well, this kind of sums it up:

Congratulations, your IPv6 migration plan just got a business case, AWS is now charging for v4 addresses. v6 is free, and the sky has finally fallen:

Nick Matthews @nickpowpow

There have been many IPv6 improvements over the years, but few deployments are ready to ditch IPv4 altogether. Anything with an external deployment that only supports IPv4 is going to be a bit of a pain.

Luckily, AWS has made NAT64 and DNS64 available, which lets IPv6-only hosts contact IPv4-only hosts.
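As a hedged illustration of what that looks like in practice (Python with boto3; all resource IDs are placeholders), enabling DNS64 on an IPv6-only subnet and routing the well-known NAT64 prefix through an existing NAT gateway is roughly:

```python
# Hedged sketch: let IPv6-only hosts in a subnet reach IPv4-only destinations.
# All resource IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

# 1. Ask the subnet's resolver to synthesise AAAA records for IPv4-only names (DNS64).
ec2.modify_subnet_attribute(
    SubnetId="subnet-0123456789abcdef0",
    EnableDns64={"Value": True},
)

# 2. Route traffic for the well-known NAT64 prefix to a NAT gateway for translation.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationIpv6CidrBlock="64:ff9b::/96",
    NatGatewayId="nat-0123456789abcdef0",
)
```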

The time has come to look at the business partners you work with – those you have API interfaces with – and have the IPv6 conversation. It’s going to be a journey, but at this stage it’s one that some in the industry have been on since the last millennium (I used Hurricane Electric’s TunnelBroker IPv6 tunnelling service from UWA in the late 1990s).

Looking at your personal ISP and Mobile/Cell provider

It’s also time to start reconsidering your home ISP and cell phone provider if they aren’t already providing you with real IPv6 addresses. I personally swapped home Internet providers in Australia several years ago, tired of the hollow promises of native IPv6 from one of Australia’s largest and oldest ISPs – one started by an industry friend of mine in Perth many years ago (who has not been associated with it for several years). When the ISP was bought out, many of the talented engineers left (one way or another), and it was clear they weren’t going to implement new and modern transport protocols any time soon.

Looking at your corporate IT Dept

Your office network is going to need to step up, eventually. This is likely to be difficult, as corporate IT departments are often understaffed when it comes to these kinds of changes. They often outsource to managed service providers, many of whom don’t look to the future to anticipate what their customers will need, but instead minimise the present cost of “keeping the lights on”. This is because customers often buy on cost, not on quality or value – in which case, the smart engineers are elsewhere.

Your best hope is to find the few technically minded people in your organisation who have already done this, or are talking about it, and get them involved.

Looking at your internet-facing services

There’s only one thing to do, ASAP: dual-stack everything that is [public] Internet facing. Monitor your integration partners for traffic that uses IPv4, and talk to them about your IPv6 migration plans.

It’s worth watching for when organisations make this switch. There are many ways to do it.

For web sites and HTTP/HTTPS APIs, consider using a CDN that can sit in front of your origin server and, as the front door to your service, be dual-stack for you. Amazon CloudFront has been a very flexible way to do this for years, but you must remember both steps:

  1. Tick “Enable IPv6” on the CloudFront distribution.
  2. Add a record to your DNS for the desired hostname as an AAAA record, alongside the existing A record (see the sketch below).
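Step 1 is a checkbox (the IsIPV6Enabled flag in the distribution configuration). Step 2 looks roughly like this hedged Python/boto3 sketch, where the hosted zone ID, hostname and distribution domain name are placeholders (Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront alias targets):

```python
# Sketch of step 2: add an AAAA alias record in Route 53 pointing at the CloudFront
# distribution, alongside the existing A record. Zone, name and distribution are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",        # your public hosted zone
    ChangeBatch={
        "Comment": "Dual-stack: add AAAA alongside the existing A record",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "AAAA",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",             # CloudFront's alias zone ID
                    "DNSName": "d111111abcdef8.cloudfront.net.",  # your distribution's domain
                    "EvaluateTargetHealth": False,
                },
            },
        }],
    },
)
```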

The Long Term Future

IPv4 will go away, one day.

It may be another 20 years, or it may now be sooner given economic pressures starting to appear. Eventually the world will move on past Vint Cerf’s experimental range that, from the 1970s, has outlasted all expectations. IPv4 was never supposed to scale to all of humanity. But its replacement, IPv6, is likely to outlast all of us alive today.


EDIT: Cross link to Greg Cockburn’s recent AWS IPv6 post, and Corey Quinn’s post on the topic.

More TLS 1.3 on AWS

Earlier this week, AWS posted about their expanded support for TLS 1.3, clearly jumping on the reduced handshake as a speed improvement in their blog post entitled: Faster AWS cloud connections with TLS 1.3.

Back in 2017 (yes, six years ago), we started raising Product Feature Requests for AWS products to enable this support and, at the same time, provide customer controls to limit the acceptable TLS versions. This makes perfect sense in customer applications (the data plane). Not only do we not want our applications supporting every possible historic version of cryptography, various compliance programs require us to disable them.

Most notable in this was PCI DSS 3.1, the Payment Card (credit card) Industry’s Data Security Standard, which drove the nail into the coffin of TLS 1.1 and everything before it.

Over time, TLS versions (and SSL before it) have fallen from grace. Indeed, SSL 1.0 was so bad it never saw the light of day outside of Netscape.

And it stands to reason that, in future, newer versions of TLS will come to life and older versions will, eventually, have to be retired; and between those two is another transition. However, that transition requires deep upgrades to cryptography libraries, and sometimes to client code, to support the lower-level library’s new capability.

On the server side, we often see a more proactive approach to which TLS versions are currently permitted. Great services like SSLLabs.com, Hardenize.com and testssl.sh have guided many people to what today’s state of “acceptable” and “good” generally looks like. And the key feature of those services is their continual uplift as the state of “acceptable” and “good” changes over time.

On the client side, it’s not always been as useful. I may have a process that establishes outbound connections to a server, but as a client I may want to specify some minimum version for my compliance, and not just rely upon the remote party to do this for me. Not many software packages do this – the closest control you get is an integration possibly using HTTPS (or TLS), and not the next level down of “yeah, so which versions are OK to use when I connect outbound?”. Of course, having specified HTTPS (or TLS) and validated the server certificate we were given back during the handshake against our local trust store, we have a degree of confidence that it’s probably the right provider, given that one of my 500 trusted CAs signed that certificate.
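Where the software does expose the underlying TLS context, setting that floor yourself is straightforward. A minimal sketch in Python (the hostname is a placeholder):

```python
# Minimal sketch: a client that enforces its own TLS floor rather than trusting the server's
# configuration. Validates the certificate against the local trust store, refuses anything
# older than TLS 1.2, and reports what was actually negotiated.
import socket
import ssl

ctx = ssl.create_default_context()            # uses the OS trust store, verifies the hostname
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # raise to TLSv1_3 when compliance demands it

with socket.create_connection(("api.example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="api.example.com") as tls:
        print("Negotiated:", tls.version())   # e.g. 'TLSv1.3'
```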

This sunrise/sunset is even more important to understand in the case of managed services from hyperscaler cloud providers. AWS speaks of the deprecation of TLS 1.1 and prior in this article (June 2022).

If you have solutions that use AWS APIs, such as applications talking to DynamoDB, then this is part of the technical debt you should be actively and regularly addressing. If you haven’t been including updated AWS SDKs in your application, updating your installed SSL libraries, and updating your OS, then you may not be prepared for this. Sure, it may be “working” fine right now.
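A quick way to gauge exposure is to check what your runtime actually ships – something along these lines (Python; assumes boto3 is installed in the environment you deploy):

```python
# Quick environment check: which SDK and TLS stack am I actually shipping?
# If the local OpenSSL is ancient, a raised TLS minimum on the AWS APIs is your problem,
# regardless of what the service itself supports.
import ssl

import boto3
import botocore

print("boto3:   ", boto3.__version__)
print("botocore:", botocore.__version__)
print("OpenSSL: ", ssl.OPENSSL_VERSION)
print("TLS 1.3 supported locally:", ssl.HAS_TLSv1_3)
```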

One option you have is to look at your application connection logs, and see if the TLS version for connections is being logged. If not, you probably want to get that level of visibility. Sure, you could Wireshark (packet dump) a few sample connections, but it would probably be better not to have to resort to that. Having the right data logged is all part of Observability.

June 28 is the (current) deadline for AWS to raise the minimum supported TLS version. That’s a month away from today. Let’s see who hasn’t been listening…

Cloud Optimisation all the rage in 2023

I have been tinkering with AWS since 2008, and delivering AWS Cloud solutions since 2010, and in this time, I’ve seen many cloud trends come along, and messaging subtly change from all of the hyperscale IaaS and PaaS providers.

This year, we’re seeing more talk about Optimisation. In the 2023 earnings calls, we heard:

  • Microsoft: “Customers continued to exercise some caution as optimization … trends … continued”
  • Google: “slower growth of consumption as customers optimized GCP costs reflecting the macro backdrop”
  • AWS: “Customers continue to evaluate ways to optimize their cloud spending in response to these tough economic conditions.”

We have also seen the messaging around Migration to cloud evolve to “Migrate & Modernise”. This is putting pressure on the laziest, simplest, and least effective of the “Seven R’s of cloud migration”, namely “Rehost” and “Relocate”.

Why is this?

Rehost takes the existing spaghetti of installed software and runs it in exactly the same way on a hyperscale cloud provider’s concept of a virtual machine. And in a true least-effort approach, if you previously had a virtual machine with 64 GB of RAM (even if only 20% utilised), then you would select the closest match in a simple rehost/reinstall.

If you had 10 application servers on 24×7, then you still get 10 cloud virtual machines, 24×7, even if you only need that peak capacity for one day of the year.
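Spotting those over-provisioned hosts doesn’t require anything exotic. As an illustrative sketch (Python with boto3; the 14-day window and 20% threshold are arbitrary assumptions), pull the average CPU for each running instance and flag the quiet ones:

```python
# Illustrative sketch: pull 14 days of average CPU per running instance to spot the
# "64 GB, 20% utilised" candidates for right-sizing. Window and threshold are arbitrary.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for res in reservations:
    for inst in res["Instances"]:
        datapoints = cw.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
            StartTime=start,
            EndTime=end,
            Period=86400,                 # one datapoint per day
            Statistics=["Average"],
        )["Datapoints"]
        if datapoints:
            avg = sum(dp["Average"] for dp in datapoints) / len(datapoints)
            if avg < 20:
                print(f"{inst['InstanceId']} ({inst['InstanceType']}): {avg:.1f}% average CPU")
```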

And even more interesting is paying licensing fees for a virtualisation layer you don’t need to be paying for (and getting a different experience when you do).

Not very efficient. Not very smart. But least effort.

So why do organisations do this?

That is easy: It’s complicated to do well. It takes time, experience, and wisdom.

This technology industry is full of individuals who think their job is to make sure nothing changes. Historically, many IT service frameworks were designed around slowing the rate of change – diametrically opposed to DevOps. And there are organisations that see only the cost of technology operations, not the value of it.

Cost has been driven down so much that the talented individuals have left, and only those who remain (who perhaps couldn’t get a job elsewhere) are keeping the lights on. They don’t have the experience, knowledge or wisdom to know what good looks like; they just know what has, for the last 30 years, been “stable”.

And while some engineers are keen to learn new things, and use that, many senior stakeholders have in their mind “what’s the least we can do right now”.

So when it comes to a cloud migration, they see “least change” as “least risky”, not “more costly”.

In 2023, at the AWS Partner Summit, AWS representatives stated that “only some 15% of workloads that will move to the cloud, have now done so”. They added that that 15% now requires some level of modernisation.

Again, 10 years ago, many organisations moved large workloads lift-and-shift style to the same number of instances, on the instance types of the day – perhaps the m3.xlarge in AWS, or similar elsewhere. And then the people who knew what Cloud was departed the scene, leaving an under-experienced set of individuals to keep the lights on and change nothing.

I once did a review of a 3rd-party analytics package running on four m3.xlarge instances; they had acquired a 3-year Reservation and kept the instances running 24×7. They didn’t use them all the time. They had never made any changes (not even OS patching). They were running a RedHat 7.x OS.

Two cycles (yes, two) of newer instance families had been released in that period: the m4, and then the m5. Because of the Linux kernel in that version of RedHat, they were prevented from moving these virtual machines to a newer AWS EC2 instance family due to lack of kernel support.

In reality, they were fortunate, as they had run a version of RedHat that finally (after 20+ years) supported in-place upgrades. The easiest path ahead was to snapshot (or make an AMI of) each individual host, in-place upgrade to the latest RedHat release, and then, during a shutdown (stop) of the instance, adjust the instance family to the m5 equivalent. This alone would save them something like 20% of cost. They could then take out a Reservation (my recommendation was for one year, as things change…) and they would have ended up at over 80% cost reduction – and faster performance.
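The mechanical part of that path is small. A hedged sketch in Python with boto3 (the instance ID and target type are placeholders; the in-place RedHat upgrade itself happens inside the OS beforehand):

```python
# Hedged sketch: image the host as a safety net, stop it, switch the instance type to the
# m5 equivalent, and start it again. Instance ID and target type are placeholders.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"

# Safety net first: an AMI (which snapshots the attached EBS volumes).
ec2.create_image(InstanceId=instance_id, Name=f"pre-upgrade-{instance_id}", NoReboot=True)

# Stop, change the instance family, start again.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(InstanceId=instance_id, InstanceType={"Value": "m5.xlarge"})

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```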

Of course, far cleaner is to understand the installation of the 3rd-party analytics software and the way it clusters, and replace each node with a clean OS, instead of keeping the cruft from bygone installations.

And beyond that would have been the option to not Reserve any instances, but just turn them off when not in use.

But they hadn’t.

Poor practices in basic maintenance and continual patching and upgrades meant they had a lot of work to do now, rather than already be up to date.

I’ve seen a number of Partners in the Cloud ecosystem get recognised and rewarded for the sheer volume of migrations they have done. And while these naive lift-and-shift migrations look good for the first six months, the bill shock then starts to set in. And over time, the lack of maintenance becomes an issue.

Even the “Re-platform” pattern of migration (adopting some cloud-managed services in your tech stack, perhaps something like a managed database service) can have its catches. Selecting a managed version of, say, PostgreSQL 9.5 five years ago would have put you in trouble as 9.6, 10, 11, 12, 13 and 14 came out, because as newer versions were released, older versions were deprecated and eventually unavailable. If you weren’t adequately addressing technical debt and maintenance tasks as part of your standard operations, then you’re looking at possible operational trouble.
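A simple early-warning check helps here. As a hedged sketch (Python with boto3), list each RDS instance’s current engine version alongside the upgrade targets AWS currently offers for it:

```python
# Hedged sketch: for each RDS instance, show the engine versions it could be upgraded to,
# as a simple early warning for engine version deprecation.
import boto3

rds = boto3.client("rds")

for db in rds.describe_db_instances()["DBInstances"]:
    versions = rds.describe_db_engine_versions(
        Engine=db["Engine"], EngineVersion=db["EngineVersion"]
    )["DBEngineVersions"]
    targets = versions[0].get("ValidUpgradeTarget", []) if versions else []
    print(db["DBInstanceIdentifier"], db["Engine"], db["EngineVersion"],
          "->", [t["EngineVersion"] for t in targets])
```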

One clever tactic to address this is to bring in engineers who are familiar with more contemporary configurations – experts in the field (whom you can spot by the 6+ concurrent AWS Certifications they hold) – to perform a Well-Architected Framework Review. This is typically a short, one-week engagement for a professional consulting company, borrowing experienced talent into your business to give you clear, addressable mitigations covering many facets, including cost.

If you previously did a Lift & Shift and found it expensive, perhaps your organisation’s ability to bring expertise to bear on the work is missing. If you don’t know where to start, then start asking for expertise.

So why are all the cloud providers talking about Optimisation? Because they know their customers can do better on the same cloud provider. And they would rather have a customer optimise their spend and stay, than migrate & modernise to a competitor and start the training and upskilling process again.

AWS Sydney Summit 2023

Last week I was fortunate enough to attend the AWS Sydney Summit 2023, along with my colleague and friend, Elliot Segler, on behalf of Akkodis.

A Partner Day was arranged that ran on Monday 3rd April – I’ll avoid details on that here at this time to focus on the main Customer Summit – the Big Event.

This was really the first major in-country Summit since the “end” of the Covid-19 pandemic. The 2022 event was only announced as a small in-person gathering a few weeks before it was held, and so drew only a small crowd; this year, attendance tickets were capped at 5,000.

In 2019, some 19,000 people attended over two days.

I’ve been fortunate to never miss an AWS Summit in Australia, and it gives me insight into the state of the Cloud when comparing year-on-year.

The floorplan of the AWS Sydney Summit 2023
The exhibitor floor at the AWS Sydney Summit 2023
AWS Managing Director A/NZ, Rianne van Veldhuizen, opens the Summit
Cameron Adams, founder of Canva, shared some of their AWS Cloud architecture.

Cameron also spoke about the move to sustainable computing, Canva signing The Climate Pledge, and Canva racing to integrate AI models into their service offering, such as magic erase.

We also saw Nicole Sheffield from Wesfarmers “One Digital” service who spoke about the implementation of their digital platform that offers customer-facing digital services across the Wesfarmers portfolio. This is significant, as Wesfarmers had, in 2014, been quite alarmed at Amazon entering Australia.

The exhibitors were split between Consulting Services partners, and ISV partners. Those ISVs had a distinct developer tooling flavour to them, but we also saw Canva talking to end customers about their product, and both Bespoke and Lumnify (formerly DDLS) training providers discussing their educational schedules and offerings.

Early in the morning I took a photo of the Builder lab, set up for individuals to undertake self-paced digital training – which was full for the rest of the day:

The Builder Lab (taken first thing in the morning before it opened)

In Financial Services, Australian bank (and one of the Big Four) Westpac provided their take on some of the cloud security approaches. This is notable to me as one of their staff in 2013 said they would “never use Public Cloud”; they seem to be doing well putting Public Cloud to good use.

Suncorp (finance, insurance and banking) also spoke about their exit strategy from their existing data centres, using AWS as a target platform.

This year there were no major service announcements or releases at the Summit, but then these days the major announcements happen almost every few days anyway! In previous years, the Summit has always had a sense of either “revolution” (when major new concepts or services were released, such as AI services) or “evolution” (incremental updates announced for existing services). This year the theme was more “steady as she goes” and stable.

What’s clear is that commentary around Lift & Shift migrations is now evolving to Migrate & Modernise (which is what I have always focused on – deeper expertise and a better short- and long-term outcome). This isn’t surprising, as naive Lift & Shift has often left customers with workloads costing more and unable to take advantage of key cloud attributes, such as scalability or cloud-platform-managed services, to reduce TCO.

Of course, those who implemented cloud-native solutions, and paid close attention to Cloud Operating Models (and tech team org structures) with an eye to the Well-Architected Framework have enjoyed optimised and reliable operation.

AWS claimed that perhaps only 15% of all workloads that will eventually run in the cloud, is now doing so. Of course, that 15% requires the maintenance, care and attention to ensure it remains operational, and optimal.

So where are we in the Cloud evolution timeline? I suspect we’re in the middle of furious catch-up by software providers who now are focusing on adopting IaaS and PaaS to take their legacy solutions, and reimplement them as cloud-native SaaS. More vertical-specific SaaS products are coming to market.

The individual services within the cloud are maturing. ARM-based Graviton chips continue to uphold Moore’s Law (RIP Gordon Moore, 2023). IPv6 is progressing, but as noted by one of my fellow Partner Ambassadors and long-term friend, Greg Cockburn, the rate of change in modern IPv6 networking appears to be slowing (with a notable exception: VPC Network Firewall now supporting IPv6-only subnets). I suspect the major requests are now satisfied; workloads that want to be dual-stacked to the outside world are fully supported. Of course, many ISPs, telcos and carriers are continuing to slowly adopt IPv6 for their consumers; some advanced end-user providers in Australia have been Telstra, Internode (ironically part of iiNet, who dropped the IPv6 ball in 2013, and thus part of Vodafone) and Aussie Broadband.

From speaking to people, it did appear that the 5,000-person cap had prevented many people from attending, particularly those who live in the same Australian state of New South Wales and may have left booking a ticket too late, and missed out. Perhaps in 2024 we’ll see another increase (who knows).

Meanwhile, outside around Darling Harbour, much construction happens, watched over by warships and tall ships.

Warships and Tall Ships in Darling Harbour, 2023

And in case you’re wondering, it’s a long way from Perth (where I live) to Sydney, and the great circle route directly between the two cities takes us far over the Southern Ocean:

3,300 kms from Sydney to Perth