Time to minimise public IPv4 usage in the AWS Cloud

It was always going to happen. We've been watching the exhaustion of the 32-bit address space of IPv4 for more than 20 years, and we've had the solution available for even longer: IPv6.

I've written many times about IPv6 adoption and migration on this blog. I've spoken many times with colleagues about it. I've presented at AWS User Groups about using IPv6 in AWS. And when I worked at AWS 10 years ago, I championed IPv6 for all the things where IPv4 was in use, as a competitive advantage.

The adoption has been slow. Outside of the Cloud, ISP support has been mixed, depending on whether they have the engineering capability to uplift legacy networks. Let's be clear – those ISPs who shed their engineers and minimised innovation are about to have a lot of work to do, or face tough conversations with customers.

For those that have already done the work, this week's AWS announcement about the start of charging for public IPv4 address space from 2024 is a non-issue. For others, it's going to mean some action.


Let's start with the basics; go have a read of the AWS Announcement: New – AWS Public IPv4 Address Charge + Public IP, posted 28 July 2023.

You're back? OK, so at the time of blogging, charges start in 2024. Currently, the first IPv4 address assigned to an instance is not charged for, but soon it will cost half a US cent per hour, or US$3.72 over a 744-hour month. Not much, unless you have hundreds of them.

Selling an IPv4 netblock

In the last few years I helped a government agency “sell” an unused /16 IPv4 netblock for several million dollars. They had two of them, and had only ever used a few /24 ranges from their first block; the second block was not even announced anywhere. There was no sound plan for keeping them.

The market price to sell a large contiguous block of addresses keeps going up – 4 years ago it was around $22 per IPv4 address (and a /16 is 65,536 of them, so just over US$1.4M). Over time, large contiguous address blocks have become more valuable. Only one event would stop this from happening: no one needing them any more. And that event is the tipping point into widespread (default) usage of IPv6, at which point they drop towards worthless.

The tipping point just got closer.

Bringing it back to now

So with this announcement, what do we see? Well, this kind of sums it up:

Congratulations, your IPv6 migration plan just got a business case, AWS is now charging for v4 addresses. v6 is free, and the sky has finally fallen:

Nick Matthews @nickpowpow

There have been many IPv6 improvements over the years, but few deployments are ready to ditch IPv4 altogether. Any external dependency that only supports IPv4 is going to be a bit of a pain.

Luckily, AWS has made NAT64 and DNS64 available, which let IPv6-only hosts contact IPv4-only hosts.
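As a rough sketch of what enabling that looks like with boto3 (the subnet, route table and NAT gateway IDs below are placeholders, assuming an existing IPv6-only subnet and a NAT gateway in a public subnet):

#!/usr/bin/python3
# Sketch: turn on DNS64 for a subnet and route the NAT64 well-known prefix
# (64:ff9b::/96) to a NAT gateway. All resource IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

# DNS64: the Amazon resolver synthesises AAAA records for IPv4-only names.
ec2.modify_subnet_attribute(
    SubnetId="subnet-0123456789abcdef0",
    EnableDns64={"Value": True},
)

# NAT64: send traffic for the synthesised prefix to a NAT gateway for translation.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationIpv6CidrBlock="64:ff9b::/96",
    NatGatewayId="nat-0123456789abcdef0",
)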

The time has come to look at the business partners you work with – those you have API interfaces to – and have the IPv6 conversation. It's going to be a journey, but at this stage it's one that some in the industry have been on since the last millennium (I used Hurricane Electric's TunnelBroker IPv6 tunnelling service from UWA in the late 1990s).

Looking at your personal ISP and Mobile/Cell provider

It's also time to reconsider your home ISP and mobile/cell provider if they aren't already providing you with real IPv6 addresses. I swapped home Internet provider in Australia several years ago, tired of the hollow promises of native IPv6 from one of Australia's largest and oldest ISPs, started by an industry friend of mine in Perth many years ago (who has not been associated with it for several years). When the ISP was bought out, many of the talented engineers left (one way or another), and it was clear they weren't going to implement new and modern transport protocols any time soon.

Looking at your corporate IT Dept

Your office network is going to need to step up, eventually. This is likely to be difficult, as corporate IT departments are often understaffed for these kinds of changes. They often outsource to managed service providers, many of whom don't look ahead to what their customers will need, but instead minimise the present cost of "keeping the lights on". That's because customers often buy on cost, not on quality or value – in which case, the smart engineers are elsewhere.

Your best hope is to find the few technically minded people in your organisation who have already done this, or are talking about it, and get them involved.

Looking at your internet-facing services

There's only one thing to do, ASAP: dual-stack everything that is [public] Internet-facing. Monitor your integration partners for traffic that uses IPv4, and talk to them about your IPv6 migration plans.
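If your access logs capture the client address (CloudFront and load balancer logs do), a few lines of Python are enough to see how much of that traffic still arrives over IPv4. A minimal sketch, assuming a file with one client address per line (for example, the c-ip column pulled out of your logs):

#!/usr/bin/python3
# Count IPv4 vs IPv6 client addresses from a file of one address per line.
import ipaddress
import sys
from collections import Counter

counts = Counter()
with open(sys.argv[1]) as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        addr = ipaddress.ip_address(line)
        counts["IPv{}".format(addr.version)] += 1

print(counts)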

It's worth watching for when organisations make this switch. There are many ways to do it.

For web sites and HTTP/HTTPS APIs, consider using a CDN that can sit in front of your origin server and, as the front door to your service, be dual-stack for you. Amazon CloudFront has been a very flexible way to do this for years, but you must remember both steps in doing this (a scripted sketch follows the list):

  1. Tick Enable IPv6 on the CloudFront distribution.
  2. Add an AAAA record to your DNS for the desired hostname, alongside the existing A record.
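Both steps can be scripted. A rough sketch with boto3, where the distribution ID, hosted zone ID and hostname are placeholders (Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront aliases – check the current documentation):

#!/usr/bin/python3
# Sketch: enable IPv6 on a CloudFront distribution and publish a matching
# AAAA alias record in Route 53. All IDs and names are placeholders.
import boto3

cloudfront = boto3.client("cloudfront")
route53 = boto3.client("route53")

# Step 1: read-modify-write the distribution config to set IsIPV6Enabled.
dist_id = "E1234567890ABC"
current = cloudfront.get_distribution_config(Id=dist_id)
config = current["DistributionConfig"]
config["IsIPV6Enabled"] = True
cloudfront.update_distribution(
    Id=dist_id, DistributionConfig=config, IfMatch=current["ETag"])

# Step 2: add an AAAA alias record alongside the existing A record.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "AAAA",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront's alias zone
                "DNSName": "d111111abcdef8.cloudfront.net.",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)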

The Long Term Future

IPv4 will go away, one day.

It may be another 20 years, or it may be sooner, given the economic pressures starting to appear. Eventually the world will move on past Vint Cerf's experimental range that, from the 1970s, has outlasted all expectations. IPv4 was never supposed to scale to all of humanity. But its replacement, IPv6, is likely to outlast all of us alive today.


EDIT: Cross link to Greg Cockburn’s recent AWS IPv6 post, and Corey Quinn’s post on the topic.

More TLS 1.3 on AWS

Earlier this week, AWS posted about their expanded support for TLS 1.3, clearly highlighting the reduced handshake as a speed improvement, in a blog post entitled: Faster AWS cloud connections with TLS 1.3.

Back in 2017 (yes, six years ago) we started raising Product Feature Requests for AWS products to enable this support, and at the same time to give customers control to limit the acceptable TLS versions. This makes perfect sense in customer applications (the data plane): not only do we not want our applications supporting every possible historic version of cryptography, but various compliance programs require us to disable the old ones.

Most notable in this was PCI DSS 3.1, the Payment Card Industry (credit card) Data Security Standard, which drove the nail into the coffin of TLS 1.1 and everything before it.

Over time, TLS versions (and SSL before it) have fallen from grace. Indeed, SSL 1.0 was so bad it never saw the light of day outside of Netscape.

And it stands to reason that, in future, newer versions of TLS will come to life and older versions will eventually have to be retired; between those two is another transition. That transition requires deep upgrades to cryptography libraries, and sometimes to client code, to support the lower-level library's new capability.

On the server side, we often see more proactive control of which TLS versions are permitted. Great services like SSLLabs.com, Hardenize.com, and testssl.sh have guided many people to what today's state of "acceptable" and "good" generally looks like. The key feature of those services is their continual uplift as the definitions of "acceptable" and "good" change over time.

On the client side, it's not always been as useful. I may have a process that establishes outbound connections to a server, but as a client I may want to specify some minimum version for my own compliance, and not just rely upon the remote party to do this for me. Not many software packages expose this – the closest control you get is an integration that possibly uses HTTPS (or TLS), without the next level down of "so which versions are OK to use when I connect outbound?". Of course, having specified HTTPS (or TLS) and validated the server certificate we were given back during the handshake against our local trust store, we then have a degree of confidence that it's probably the right provider, given that one of my 500 trusted CAs signed that certificate.
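Where you do control the client code, most TLS stacks let you set your own floor rather than relying on the server. A minimal Python sketch (the URL is just an example):

#!/usr/bin/python3
# Sketch: an HTTPS client that refuses to negotiate anything below TLS 1.2,
# while still validating the server certificate against the local trust store.
import ssl
import urllib.request

ctx = ssl.create_default_context()            # certificate validation stays on
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.1 and older

with urllib.request.urlopen("https://example.com/", context=ctx) as resp:
    print(resp.status)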

This sunrise/sunset is even more important to understand in the case of managed services from hyperscaler cloud providers. AWS speaks of the deprecation of TLS 1.1 and prior in this article (June 2022).

If you have solutions that use AWS APIs, such as applications talking to DynamoDB, then this is part of the technical debt you should be actively and regularly addressing. If you haven't been including updated AWS SDKs in your application, updating your installed SSL libraries, and updating your OS, then you may not be prepared for this. Sure, it may be "working" fine right now.

One option you have is to look at your application connection logs, and see if the TLS version for connections is being logged. If not, you probably want to get that level of visibility. Sure, you could Wireshark (packet dump) a few sample connections, but it would probably be better not to have to resort to that. Having the right data logged is all part of Observability.
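If the logs aren't there yet, a quick probe can at least show what a given client stack negotiates against an endpoint today. A small sketch, with the hostname passed on the command line:

#!/usr/bin/python3
# Sketch: report the negotiated TLS version and cipher for an endpoint,
# e.g. ./tlscheck.py dynamodb.us-east-1.amazonaws.com
import socket
import ssl
import sys

host = sys.argv[1]
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(host, tls.version(), tls.cipher())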

June 28 is the (current) deadline for AWS to raise the minimum supported TLS version. That’s a month away from today. Let’s see who hasn’t been listening…

IoT Trackers and the AWS Cloud

I continued my IoT project over the recent end-of-year/Christmas break period, picking up from where I was 6 months ago.

Since then, a new firmware version had become available for the RAK Wireless RAK10700 GNSS (Global Navigation Satellite System) solar-powered device. These devices shipped without a battery (due to postal limitations), and came with firmware 1.0.4.

Last time, I completely failed to get these to associate with my local IoT gateway (a RAK7246, basically a Raspberry Pi in a box that bridges LoRaWAN and WiFi).

Since then, a new firmware 1.0.6 has been released.

Documentation for the RAK10700 was OK, until I got to the page that says the latest firmware is version 1.0.1; given this device had already shipped with 1.0.4, I dug in deeper. The link to the firmware is https://downloads.rakwireless.com/LoRa/WisBlock/Solutions/LPWAN-Tracker-Latest.zip, and the contents of this zip file, at the time of writing, are three files:

  • Manifest
  • WisBlock_SENS_V1.0.6_2022.04.07.13.36.27.bin
  • WisBlock_SENS_V1.0.6_2022.04.07.13.36.27.dat

Caveat Elit (developer beware): this appears to be firmware version 1.0.6.

Flashing this was interesting: the device, connected via its USB cable to my laptop, had to be reset into DFU mode, which required double-tapping the reset pin in quick succession (it's located next to the USB port). Once done, the device presented as USB storage, and the adafruit-nrfutil tool could update it (check the COM port in Device Manager).

adafruit-nrfutil dfu serial --package LPWAN-Tracker-Latest.zip -p COM19 -b 115200

When in DFU mode, the device turned up as a different COM port compared to its normal mode. It took me two attempts for this to be successful, followed by a press of the reset button to return the device to normal mode.

Next came the interface to AWS IoT Core for LoRaWAN. I'd previously been using the LGT-92 (now unavailable), but had to abandon these as no amount of protection in waterproof bags had made them durable enough to last the distance for my use case: tracking a small sail boat.

The configuration that eventually worked for me was to define a profile with MAC version 1.0.3, Regional Parameters RP002-1.01, Max EIRP 15, for AU915 frequencies (I am in Australia):

AWS IoT Core for LPWAN: Profile Configuration for RAK10700

Now with the profile defined, I can add the two Devices in, using the AppKey, DevKey, etc:

With data coming through, it was now time to decode the payload. These devices use a format called CayenneLPP to stuff as much data into as small a payload as possible. One of the first things you'll want to do is decode the data to check it looks legitimate. Using a small Python script, I can unpack it – after doing a pip install cayennelpp:

#!/usr/bin/python3
# Decode a base64-encoded CayenneLPP payload passed as the first argument.
import base64
import sys
from cayennelpp import LppFrame

# The payload arrives base64 encoded (as in the PayloadData field shown below).
d = base64.standard_b64decode(sys.argv[1])
f = LppFrame().from_bytes(d)

# Each entry in the frame is one channel: print its channel number, type and value.
for i in f.data:
  if len(i.value) == 1:
    print("Ch {}, {}: {}".format(i.channel, i.type, i.value[0]))
  else:
    print("Ch {}, {}: {}".format(i.channel, i.type, i.value))

By routing the incoming IoT messages to a Lambda function, I can now pick out the PayloadData from the event and see the string being sent. Here's what I see in CloudWatch Logs when I just print(event):

{'WirelessDeviceId': 'b15xxxx-xxxx-47df-8a5d-f57800c170b5', 'PayloadData': ' AXQBqwZoRwdnARQIcydWCQIG6Q==', 'WirelessMetadata': {'LoRaWAN': {'ADR': False, 'Bandwidth': 125, 'ClassB': False, 'CodeRate': '4/5', 'DataRate': '3', 'DevAddr': 'xxxxx', 'DevEui': 'xxxxx', 'FCnt': 73, 'FOptLen': 0, 'FPort': 2, 'Frequency': '917200000', 'Gateways': [{'GatewayEui': 'xxxxx930e93', 'Rssi': -31, 'Snr': 9.5}], 'MIC': 'xxxxx95b', 'MType': 'UnconfirmedDataUp', 'Major': 'LoRaWANR1', 'Modulation': 'LORA', 'PolarizationInversion': False, 'SpreadingFactor': 9, 'Timestamp': '2023-01-04T13:46:42Z'}}}

While inside, with no satellite lock, that PayloadData translates out to:

Ch 1, Voltage: 4.27
Ch 6, Humidity: 35.5
Ch 7, Temperature: 27.6
Ch 8, Barometer: 1007.0
Ch 9, Analog Input: 17.69
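Wiring that decoding into the Lambda function itself is a small step from the script above. A minimal handler sketch, assuming the cayennelpp package is bundled with the function, and using the event fields shown earlier:

#!/usr/bin/python3
# Sketch Lambda handler: decode the CayenneLPP PayloadData from an
# AWS IoT Core for LoRaWAN event and log each channel.
import base64
from cayennelpp import LppFrame

def lambda_handler(event, context):
    raw = base64.standard_b64decode(event["PayloadData"])
    frame = LppFrame().from_bytes(raw)
    for item in frame.data:
        print("Device {} Ch {}, {}: {}".format(
            event["WirelessDeviceId"], item.channel, item.type, item.value))
    return {"status": "ok"}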

Now I have the two sensors, it's time to get them outside, with a bit of soldering of the LiPo battery onto the right connector…

The Sydney to Hobart Yacht Race 2022

This year's Sydney to Hobart was a stunning race. Broadcast TV coverage in Australia started well with Seven West Media's 7+ service, including a view of the four simultaneous start lines for the different classes:

Four start lines of the 2022 Sydney to Hobart Yacht Race, taken from @CYCATV on YouTube

Sadly, the broadcast TV coverage ended just after the start. With 7+ on the sail of one of the boats, I was expecting a bit more coverage.

Luckily the CYC had an intermittent live stream on YouTube, with Peter Shipway (nominative determinism at work there), Gordon Bray and Peter Gee hosting.

The primary website for the race this year was https://www.rolexsydneyhobart.com/, and this year this appeared to be running via AWS CloudFront.

Time for a quick health check, with SSL Labs:

After noting this is CloudFront, I notice that it resolves as IPv4 only. A shame, as IPv6 is just two steps away: tick a box in the CloudFront config, and publish an AAAA record in DNS. It's also interesting that some of the sub-resources being loaded on the page from alternate origins are available over IPv6 (as well as good old IPv4).

Talking of DNS, a quick nslookup shows Route53 in use.

Back to the output: a B. Here are a few items observed in the SSL Labs report:

  • TLS 1.1 is enabled – it’s well past time to retire this. Luckily, TLS 1.2 and 1.3 are both enabled.
  • Even with TLS 1.2, there are some weak ciphers, but (luckily?) the first one in the list is reasonably strong.
  • HTTP/2 is not enabled (falling back to HTTP/1.1).
  • HTTP/3 is not enabled; it offers even more performance than HTTP/2.
  • Amazon Certificate Manager (ACM) is in use for the TLS certificate on CloudFront

It also says that there is no DNS CAA record – a simple way to prevent any other CA provider from being duped into mis-issuing a certificate for your domain. A low risk, but a (free) way to prevent it.
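If Route 53 hosts the zone, publishing a CAA record is a one-off change. A sketch with boto3, where the zone ID and domain are placeholders, and the "amazon.com" issuer value suits certificates issued by ACM (check your own CA's documented CAA values):

#!/usr/bin/python3
# Sketch: publish a CAA record via Route 53, restricting certificate
# issuance for the domain to Amazon's CA. IDs and names are placeholders.
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com.",
            "Type": "CAA",
            "TTL": 300,
            "ResourceRecords": [{"Value": '0 issue "amazon.com"'}],
        },
    }]},
)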

Turning to SecurityHeaders.com, we get this:

SecurityHeaders.com output for rolexsydneyhobart.com, December 2022

Unfortunately, it looks like no security-related headers are sent.

Strict Transport Security (HSTS) is a no-brainer these days. We (as a world) have all gone TLS for online security, and we’re not heading back to unencrypted HTTP.
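On a CloudFront-fronted site, one way to add HSTS without touching the origin is a response headers policy, attached to the distribution's cache behaviour. A sketch with boto3 (the policy name is a placeholder; verify the field names against the current API documentation):

#!/usr/bin/python3
# Sketch: create a CloudFront response headers policy that adds a
# Strict-Transport-Security header; attach its Id to the distribution's
# cache behaviour as ResponseHeadersPolicyId afterwards.
import boto3

cloudfront = boto3.client("cloudfront")
cloudfront.create_response_headers_policy(
    ResponseHeadersPolicyConfig={
        "Name": "hsts-only",
        "SecurityHeadersConfig": {
            "StrictTransportSecurity": {
                "Override": True,
                "IncludeSubdomains": True,
                "Preload": False,
                "AccessControlMaxAgeSec": 31536000,  # one year
            },
        },
    },
)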

The service stayed up and responsive: well done to the team who put this together, and good luck with looking through the event and finding improvements (like above) for next year.

CloudFormation and CloudFront Origin Access Control

I recently wrote about the change of Amazon CloudFront’s support for accessing content from S3 privately.

It's bad practice to leave an origin server open to the world; if an attacker can overwhelm your origin server, then your CDN can't insulate you from that, and the CDN cannot serve any legitimate traffic. There are tricks for this, such as having a secret header value injected into origin requests and having the origin check it, but that's kind of a static credential. Origin Access Identity was the first approach to move this authentication into the AWS domain, and Origin Access Control is the newer way, supporting the v4 Signature algorithm (at this time).

(If you like web security, read up on the v4 Signature, look at why we don’t use v1/2/3, and think about a time if/when this gets bumped – we’ve already seen v4a)

CloudFormation Support

When Origin Access Control launched last month, it was announced with CloudFormation support! Unfortunately, that CloudFormation support was “in documentation only” by the time I saw & tried it, and thus didn’t actually work for a while (the resource type was not recognised). CloudFormation OAC documentation was rolled back, and has now been published again, along with the actual implementation in the CloudFormation service.

It’s interesting to note that the original documentation for AWS::CloudFront::OriginAccessControl had some changes between the two releases: DisplayName became Name, for example.

Why use CloudFormation for these changes?

CloudFormation is an Infrastructure as Code (IaC) way of deploying resources in the cloud. It's not the only IaC approach – others include Terraform and the AWS CDK. All of these approaches give the operator an artefact (document/code) that can itself be checked in to revision control, giving us the ability to easily track differences over time and compare the current deployment to what is in revision control.

Using IaC also gives us the ability to deploy to multiple environments (Dev, Test, … Prod) with repeatability, consistency, and as little manual effort as possible.

IaC itself can also be automated, further reducing the human effort. With CloudFormation as our IaC, we also have the concept of Drift Detection within the deployed Stack resources as part of the CloudFormation service, so we can validate if any local (e.g., console) changes have been introduced as a deviation from the prescribed template configuration.

Migrating from Origin ID to OAC with CloudFormation

In using CloudFormation to migrate between the old and the new ways of securely accessing content in S3, you need a few steps to implement the change and then tidy up.

1. Create the new Origin Access Control:

  OriginAccessControl:
    Type: AWS::CloudFront::OriginAccessControl
    Properties:
      OriginAccessControlConfig:
        Name: !Ref OriginAccessControlName
        Description: "Access to S3"
        OriginAccessControlOriginType: s3
        SigningBehavior: always
        SigningProtocol: sigv4

If you had a template that created the old Origin Access Identity, then you could put this new resource alongside it (and later come back and remove the OID resource).

2. Update your S3 Bucket to trust both the old Origin Access ID, and the new Origin Access Control.

 PolicyDocument:
    Statement:
      -
        Action:
          - s3:GetObject
        Effect: Allow
        Resource: 
          - !Sub arn:aws:s3:::${S3Bucket}/*
        Principal:
          "AWS": !Sub 'arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity ${OriginAccessIdentity}'
          "Service": "cloudfront.amazonaws.com"

If you wish, you can split that new Principal (cloudfront.amazonaws.com) into a separate statement, and be more specific as to which CloudFront distribution Id is permitted to this S3 bucket/prefix.

In my case, I am using one Origin Access Control for all my distributions to access different prefixes in the same S3 bucket, but if I wanted to raise the bar I’d split that with one OAC per distribution, and a unique mapping of Distribution Id to S3 bucket/prefix.

3. Update the Distribution to use OAC, per Origin:

    Origins:
      - Id: S3WebBucket
        OriginAccessControlId: !Ref OriginAccessControl
        ConnectionAttempts: 2
        ConnectionTimeout: 5
        DomainName: !Join
          - ""
          - - !Ref ContentBucket
            - ".s3.amazonaws.com"
        S3OriginConfig:
          OriginAccessIdentity: ""
        OriginPath: !Ref OriginPath

You'll note above we still have the S3OriginConfig defined, with an OriginAccessIdentity that is empty. It took a few hours to figure out that empty string; without it, the S3OriginConfig element is invalid, and a CustomOriginConfig is not for accessing S3. At least at this time.

If you’re adopting this, be sure to also look at your CloudFront distributions’ HttpVersion setting; you may want to adopt http2and3 to turn on HTTP3.

4. Remove the existing S3 Bucket Policy line that permitted the old OID

“AWS”: !Sub ‘arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity ${OriginAccessIdentity}’ is no longer needed:

 PolicyDocument:
    Statement:
      -
        Action:
          - s3:GetObject
        Effect: Allow
        Resource: 
          - !Sub arn:aws:s3:::${S3Bucket}/*
        Principal:
          "Service": "cloudfront.amazonaws.com"

5. Delete the now unused OID from CloudFront

Back in step 1, where you created the new OriginAccessControl, remove the OriginAccessIdentity resource and update your stack to delete it.

Summary

Of course, run this in your development environment first, and roll steps out to higher environments in an orderly fashion.