Web security and the 2022 Australian Federal Election

One thing is sure: these days every political party has a website to publish their message, and right now it's one of their key places to disseminate their content from – often flowing from the website out to the wider broadcast media.

As a source of truth for each party, how well are they implementing modern web security that’s free to implement and use?

I've used a number of tools in the past, but for this straw poll I chose just two to examine the sites.

The first is Scott Helme's SecurityHeaders.com. A simple rating from A through F gives a general overview of how the curators of the various sites have activated browser support to help ensure their content – and the visitors to their sites – are as protected as possible.

Scott does a fantastic job of adjusting the ratings over time, as new capabilities are established as commonplace amongst the major web browser platforms. It’s a free service to check any site, publicly available, and you can check your favourite site (such as your employer, or band) right now!

The second service is Qualys' SSLLabs.com (originally by Ivan Ristic, who now operates hardenize.com – worth a look too). Instead of looking at the simple text headers, SSLLabs looks at the encryption used over the untrusted Internet, and a few other attributes, and again gives an A through F report, so it is easy to understand who does a good job, and who is not quite there yet.

The Australian Labor Party

The ALP lives at https://www.alp.org.au/. Let’s start with the simple security headers rating:

D rating for alp.org.au on 14 May 2022

That's a pretty poor outing. The first header activated is X-Frame-Options, a legacy security header used to instruct browsers about rendering the content within iframe and frame HTML elements – these days accomplished via a Content Security Policy. Secondly, via Strict-Transport-Security, they have indicated to browsers that their site is an HTTPS site and should only ever be contacted using encrypted communications (TLS, or HTTPS), and never over plain-text unencrypted HTTP.
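
For reference, that pair of headers typically looks something like the following – the values here are illustrative, not the ALP's actual configuration:

  X-Frame-Options: SAMEORIGIN
  Strict-Transport-Security: max-age=31536000; includeSubDomains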

So what?

Content Security Policies (CSPs) are about to become a mandated part of the Payment Card Industry (PCI) Data Security Standard: under the draft currently in review, any payment page on the Internet (you know, the one you use every day when you buy something online and enter card holder details) will be required to have a CSP to help protect the security of the web page. A CSP doesn't cost anything; it's just a response header letting your browser know the boundaries from where it can fetch additional content to render the page. And if it's good enough for a payment page, then it's good enough for anything on which you're trying to build a strong security reputation.
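
As a minimal sketch only (not a policy I'd suggest copying verbatim – a real one needs tuning against the actual page content), a CSP response header can be as simple as:

  Content-Security-Policy: default-src 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests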

OK, let’s move to the TLS (formerly called SSL) strength, with SSLLabs:

ALP.org.au on SSLLabs on 14 May 2022

Well, they've left the older TLS 1.1 protocol enabled. That's been deprecated since around 2016, so only 6 years out of date. It's nice to see the newest TLS 1.3 is enabled here, and the encryption ciphers are ordered with stronger crypto before weaker ciphers (though why are those older ones still enabled, when they are unlikely ever legitimately used?). The test shows that the more efficient HTTP/2 has not been enabled, and the simple Certificate Authority Authorization (CAA) record in DNS has not been set – which helps declare which Certificate Authorities are permitted to issue the TLS certificates for alp.org.au.
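
For the curious, a CAA record is a one-line DNS entry; something like the following, where the CA named is purely an example, not a statement of who the ALP actually uses:

  alp.org.au.  IN  CAA  0 issue "letsencrypt.org"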

We notice that there is just one IPv4 address returned when doing this check which raises a few questions:

  1. there is no apparent Content Delivery Network in place
  2. dual-stack support for IPv6 has not been enabled
  3. there’s possibly only one site for this service to run from?

A traceroute for this appears to disappear into MSN.net in Melbourne.

Liberal Party of Australia

Moving to the Liberals, who are at https://www.liberal.org.au/. Cranking up Security Headers shows:

liberal.org.au on Security Headers.com on 14 May 2022

This is just marginally better than the Labor Party: they have enabled one extra header, the Permissions-Policy. This tells the browser what capabilities it's allowed to use when rendering the content.

It's a good start, but the policy contains just "interest-cohort=()". This is opting out of Google's FLoC interest-based cohort tracking, and it's only supported in the Chrome browser. They've missed the chance to disable geolocation and other browser capabilities to protect their viewers.
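
A slightly more useful policy – purely as an illustrative sketch – could also lock those capabilities down:

  Permissions-Policy: interest-cohort=(), geolocation=(), camera=(), microphone=()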

The response headers the admin has left enabled also declare they run a Varnish cache, and Apache/2.4.29. I'd recommend turning off as much of this identification as possible (hey admin, look up: ServerTokens Prod).
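
For an Apache 2.4 server that's a two-line change in the main configuration, which trims the Server header back to just "Apache" (it won't hide the Varnish header, which needs its own change):

  ServerTokens Prod
  ServerSignature Off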

OK, on to the SSLLabs analysis – but this time we get a different initial screen compared to our first review:

liberal.org.au on SSLLabs on 14 May 2022

This time, we've detected two distinct site locations that this content has been served from. Again we're only talking IPv4, but the reverse DNS shown gives away where: the AWS Sydney Region (which I helped launch as an AWS staff member in 2013).

This is possibly an AWS managed load balancer, configured across two Availability Zones (for those coming here for the first time, an Availability Zone, or AZ, is a cluster of data centres, so each AZ can be thought of as a site – say central, north and south Sydney). Indeed, the Sydney AWS Region has three AZs available as at May 2022, and not using the third AZ when it's just sitting there is possibly a missed opportunity for higher fault tolerance.

Of course, both site locations are configured identically, so we already know they both rate as an A, and we can inspect either of the two in detail:

liberal.org.au on SSLLabs on 14 May 2022

That’s pretty satisfying to start with.

We still note a missing DNS Certificate Authority Authorization (CAA) entry, as per the Labor Party. But here ONLY TLS 1.2 is enabled, and not the current best-in-show, TLS 1.3 (which is slightly optimised in connection establishment).

What is unusual is the ordering of the encryption ciphers here; some weaker ones are prioritised over stronger ones:

liberal.org.au on SSL labs on 14 May 2022

Normally you would want your strongest encryption ciphers first before the ones that are known to be weak are selected (or better yet, don’t even support the weak ones).

We also note that only HTTP/1.0 is supported – neither HTTP/1.1 nor HTTP/2.

The Australian Greens

Start with the headers:

greens.org.au on Securityheaders.com on 14 May 2022

This is looking marginally more polished. It's a Drupal 9 site (the headers show this – it would be good not to advertise it). This time one additional legacy security header is set: x-content-type-options. This tells browsers to trust the MIME content-type that is sent with objects, and not try to second-guess it (in case the website admin got it wrong). For example, if we try to download an image, and the response has a content-type of image/jpeg but the payload is JavaScript, then treat it as a broken image! Don't keep guessing as browsers have in the past, as that guessing may trick the browser into executing some code that the admin had not intended.
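
The header itself is about as simple as they come – there's only one meaningful value:

  X-Content-Type-Options: nosniff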

OK, move to the crypto on SSLLabs – and this time we have three sites serving this content:

greens.org.au on SSL Labs on 14 May 2022

Nice – the Greens are also using AWS in Sydney, and are spread across all three Availability Zones. (Shout out to an old friend: Grahame Bowland, are you doing this? 😉 ). It's still only IPv4, sadly. But already we see a stellar A+ rating:

greens.org.au on SSL Labs on 14 May 2022

The same DNS CAA record is missing, but we see HTTP/2 is enabled, as well as just TLS 1.2 and 1.3. Moreover, the cipher suite is super strong, with nothing weak supported:

greens.org.au on SSL Labs on 14 May 2022

This is what a site that doesn't accept weak encryption is supposed to look like!

The Climate200 Collective

There are a lot of candidates under this umbrella, and instead of reviewing them all independently, I'll just pop over to https://www.climate200.com.au/. Let's roll with security headers:

climate200.com.au on SecurityHeaders.com on 14 May 2022

But this is stronger than it looks, because we finally have a Content Security Policy. However, the extent of the policy is to limit frames and iframes, with "frame-ancestors 'self'". So much more has been missed, like enforcing that everything the browser loads comes from the same domain, over HTTPS.
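
As a hedged sketch of the direction they could take (any real policy needs tuning against the actual page content), something like this covers far more ground than frame-ancestors alone:

  Content-Security-Policy: default-src 'self'; img-src 'self' https:; script-src 'self'; frame-ancestors 'self'; upgrade-insecure-requests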

Now, the headers indicate this is an OpenResty server running in containers (with Kubernetes management) in AWS's us-west-2 Region – also known as the AWS Oregon Region. AWS often speaks of this Region as running on a lot of green energy, which may be the reason for this choice.

OK, let's scoot to the network transport encryption report from SSL Labs, and again we have three sites presented to choose from:

climate200.com.au on SSLLabs on 14 May 2022

As with SecurityHeaders.com, this confirms the use of AWS, this time the us-west-2 (Oregon) Region. All sites rate an A, but again only over IPv4.

The Australian Electoral Commission

Now let's look at who is running the election, the AEC. A hat tip to their social media team, who have been having a right ripper time with some good humour in the lead-up to Democracy Sausage day (many polling booths in Australia will have a local, volunteer, non-partisan community group running a barbecue (BBQ) with a sausage in a roll, possibly with grilled onion – oh, I can smell it now!).

Right, AEC, how are your security headers:

aec.gov.au on SecurityHeaders.com on 14 May 2022

Oh. Erm.

Let's move to your crypto and see if we can recover this:

aec.gov.au on SSLLabs on 14 May 2022

What a save! They are using a Content Delivery Network (CDN) to front their origin web service. That's the fourth time in this article it's been an AWS-based service as well, but again it's IPv4 only. Let's lean in to the first site:

aec.gov.au on SSLLabs.com on 14 May 2022

So we have TLS 1.3 enabled, with TLS 1.2 as a fallback, but none of the older risky protocols. Nice. But the ciphers for TLS 1.2 are a little confused:

aec.gov.au on SSLLabs.com on 14 May 2022

That CBC use of AES in yellow should be either below the other green ones, or removed. However, custom configuration is very limited with Amazon CloudFront; AWS does permit you to choose some good TLS options (I've worked for years with them to ensure these choices are available to customers).

Moving down the details shown, we see HTTP/1.0, 1.1 and 2 are all available, which is also good.

An Overview

Let's put the ratings for the above organisations, and add a few more for good order, into a table:

Party                           | Sec Headers | SSLLabs | Hosting                     | Multi-site                     | IPv6
Labor Party                     | D           | B       | Melb?                       | No                             | No
Liberal Party                   | C           | A       | AWS Sydney                  | 2 AZ                           | No
Australian Greens               | B           | A+      | AWS Sydney                  | 3 AZ                           | No
Climate 200                     | D           | A       | AWS Oregon                  | 3 AZ                           | No
Australian Electoral Commission | F           | A       | AWS CloudFront              | Many (4 sites in DNS response) | No
United Australia Party          | F           | B       | CloudFlare                  | Many (6 sites in DNS response) | Yes
One Nation                      | C           | B       | CloudFlare                  | Many (4 sites in DNS response) | Yes
Liberal Democrats               | D           | A       | CloudFlare                  | Many (4 sites in DNS response) | Yes
Australian Christians           | F           | A       | Host Universal in Melbourne | No                             | No

All assessments as at 14 May 2022

So what can we deduce?

  1. None of them have populated a DNS CAA record to help ensure only their authorised Certificate Authority is issuing certificates in their name.
  2. Minor parties are using CloudFlare and permitting IPv6; none of the major parties have discovered IPv6!
  3. None of them have strong Content Security Policies.
  4. Most major parties and the AEC are AWS Customers.
  5. I didn't observe any of them implementing Network Error Logging (NEL). Now there's a nice feedback loop to help detect web security incidents as they happen (an illustrative example is just below)…
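
For the curious, NEL is just another pair of response headers; an illustrative (not prescriptive) example, with a placeholder reporting endpoint:

  Report-To: {"group":"default","max_age":31536000,"endpoints":[{"url":"https://reports.example.com/nel"}]}
  NEL: {"report_to":"default","max_age":31536000}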

So who would I choose as my winner here? It would be… the Greens, with the stronger ratings they have. There's still room for improvement (like dual-stack IPv6, using CloudFront, a proper CSP), but they are ahead of the rest, leading in both of these basic assessments.

And the loser? Well, let’s not punch down too much; the explanations here are plain enough for any tech to follow the bouncing ball and enable better security, availability, and speed (at no additional cost!).

Does this make any difference to policies, fairness, or the environment (well, Climate 200 is using the AWS Oregon Region)? No, not really right now. I doubt any future minister for telecommunications is going to understand whether the simple security adjustments shown here could help in any cyber attack. I just find this interesting…

As always, my thanks to Scott Helme for Security Headers, Ivan Ristic for SSLLabs, and the people who contribute to web and browser security improvements.

Unifi Protect Viewport

Like many, I ditched my out-of-date, ISP-provided home gateway a few years back, and about a year ago put in a Unifi Dream Machine Pro as a home gateway, plus a pair of Unifi Access Points implementing WiFi 5 (802.11ac), able to take better advantage of the 1 Gbps/50 Mbps NBN connection I have.

Now, I find that WiFi 5 maxes out at around 400 Mbit/sec, so I've been waiting for the newer WiFi 6 APs to launch – in particular the In-Wall access point. However, then along comes WiFi 6E, using the newly available 6 GHz spectrum, as well as falling back to the 5 GHz and 2.4 GHz spectrum.

Then I went one further, and fitted a 16 TB HDD into the Unifi Dream Machine Pro along with a single G4 Pro camera. This gives me around 3 months of continuous recording, and has helped pinpoint the exact time a neighbour's car got lifted, as well as showing us the two times before that the perps drove past – all from the end of a 50-metre driveway on the other side of a closed vehicle gate.

I wanted to have an easy way my family can bring up the video feed on the TV, large enough to see detail from each camera.

But then the pandemic hit, and the global supply chain came to a standstill. Unifi, and their Australian distributors and retailers, have been sorely out of stock for a long time. Only one WiFi 6E product has launched from Ubiquiti thus far, and like most of their products, it immediately sold out on their US store and hasn't made it to Australia yet. Even the UDM Pro Special Edition hasn't surfaced, either in stock in the US or from the Australian distributors.

So it was with some glee that I found just 5 of the Unifi Viewport devices had made it to Australia last week, the first time I'd seen stock in a year (I could have missed it). I pounced on it, and today I unboxed it.

Unifi Protect Viewport
Contents and box of the Unifi Protect Viewport

The device shipped with an HDMI cable, some screws and a wall mount, and a small slip of instructions.

At one end of the device is a standard Ethernet port; the other end has an Ethernet-out port, and an HDMI-out port. That's handy if you already have a device that's on Ethernet, like your TV itself, without running another patch back to your switch.

The Viewport itself was larger than I had expected, as shown here in my hand:

Unifi Protect Viewport

I plugged it into a patch lead to a PoE port on my Switch-8, and immediately it powered up, took a DHCP lease, and was shown as pending adoption into my network.

The adoption took a moment, then a firmware update and reboot, and then it automatically connected and started showing the default layout of cameras from Unifi Protect.

There were no visible lights to indicate the unit was powered on. Meanwhile, the device showed up in the console, with the following settings:

Unifi Protect settings

As you can see from the above, the "Select a Live View" option comes from the Protect web app. I created a second Live View configured for four cameras, dragged the one camera I do have to one of the quadrants, and then could update the Viewport to instantly show the alternate 4-quadrant view.

The end result, on an 80″ TV looks like this:

Viewport displaying on a TV

I left the unit streaming to the TV for several hours, and it didn’t miss a beat. I could feel a little warmth from the Viewport, but not enough that I would be alarmed.

If I were running a larger security setup, I could imagine having several large TVs each with their own Viewport, but showing different Live Views (with one showing just the primary camera of interest).

There's no administrative control that I've seen on the Viewport itself. You can't change or select cameras, and you can't shuttle/jog the stream forwards or backwards. It seems to do one thing – stream current feeds – and do it reliably (thus far).

The video image was crisp and clear (the above image was when it had changed to night mode). The time stamp in the top left corner appeared to roll forward smoothly. I couldn't measure the frame rate, but it seemed pretty good – perhaps 20 fps, maybe 25 fps.

AWS RDS Goes Dual-stack: IPv4 and IPv6

I’ve spoken of the IPv6 transition for many, many years. Last month I gave a presentation at the AWS User Group (Perth) on this, and included a role play on packets through the network.

Earlier in 2022 we saw AWS VPC support IPv6-only subnets, a great way to scale out vast numbers of instances with 18 billion billion addresses per subnet. Today sees one of the most commonly used services with virtual machines – managed databases via the Relational Database Service – finally get its first bit of IPv6 support!

When creating a database, you now have a new option as shown here:

AWS Console Wizard for starting an RDS instance

It's worth noting that the DB subnet group defined in RDS can (at this point in time) select subnets that are either IPv4-only, or dual-stack IPv4 and IPv6. To put this more clearly, RDS is not (yet?) supporting IPv6-only deployment.

But that's a small limitation. Application servers scaled out across those vast subnets can now natively talk to a dual-stack deployed RDS instance using IPv6 as the transport protocol. No proxies, adaptors or work-arounds required.
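
As a sketch of what this looks like from the CLI at the time of writing, assuming a DB subnet group whose subnets are already dual-stack (every identifier and credential below is a placeholder):

  aws rds create-db-instance \
      --db-instance-identifier example-db \
      --db-instance-class db.t4g.micro \
      --engine postgres \
      --master-username dbadmin \
      --master-user-password 'change-me' \
      --allocated-storage 20 \
      --db-subnet-group-name dual-stack-subnets \
      --network-type DUAL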

Of course, there are more managed AWS services yet to even get this far – ElastiCache, for example, or IPv6 as a first-class option elsewhere (e.g. CloudFront origin fetch).

This is incremental improvement.

Frustrating IoT Devices!

I’ve been continuing my IoT journey, finding that IoT devices are a little fickle.

My first LGT92 GPS Tracker device failed back in 2021, and I tried contacting both the retailer (IoT Store Perth) and the manufacturer. I was instructed by the manufacturer to open the clamshell case and take numerous photos to send to them. They suggested a fault, and that they would organise a replacement, but after 6 months, nothing has happened.

During that period, I ordered a second LGT92, and it failed on first use. I contacted IoT Store again – by webform, email, and phone – and after many weeks spoke to "Sam", who from the sound of it was on the phone in his car. While he said he would look into this (and the original failure), nothing came of it, despite my following up several times.

I then tried to get an IP67-rated, solar-powered device; however, what IoT Store sent me had no solar panel or GPS tracker device, just a box with some wires and screws. Again I spoke with "Sam" (in his car again) after trying webforms, email and his mobile number multiple times, and again he said he'd follow up on it; that's been three months and no success.

So I'm never buying anything from IoT Store again, and I strongly advise against anyone else doing so. The customer service is terrible. Not one of the emails I've sent has been replied to. Not one of the contact-me forms has been responded to. And when I have managed to speak with Sam, he is evasive, and does not follow up on the actions he says he'll take.

Next up is the RAKwireless RAK10700, a new GPS tracker device, again IP67-rated, with solar power. Released in 2022, these devices shipped from China after about 3 months, but without a battery for the solar panel to charge. I ordered a LiPo battery from Amazon.com.au, but naturally it had a different connector, so I found myself soldering again after 15 years.

But it does power up, with device firmware 1.0.4 installed. I connected a serial console and entered the AT commands to dump the config: Dev EUI, App EUI and App Key.

I entered these into the AWS IoT Core device registration, and ensured things like the frequency plan were correct, but the device refuses to join the LoRaWAN network via the local gateway running basicstation (current build at this time), with the best log output from the gateway showing:

Mar 28 14:15:19 rak-gateway basicstation[12538]: 2022-03-28 13:15:19.629 [S2E:VERB] RX 917.0MHz DR2 SF10/BW125 snr=-14.8 rssi=-89 xtime=0x6900001BA46F94 - jreq MHdr=00 JoinEUI=ac1f:9ff:f915:4631 DevEUI=ac1f:9ff:fe06:7117 DevNonce=35258 MIC=1390227384

Mar 28 14:15:20 rak-gateway basicstation[12538]: 2022-03-28 13:15:20.093 [S2E:WARN] Unknown field in dnmsg - ignored: regionid

And the output on the tracker device shows:

+EVT:JOIN FAILED

Out of interest, the AT+STATUS shows (with some of the keys and addresses hidden with underscores):

Device status:
   Auto join enabled
   Mode LPWAN
   Network not joined
LPWAN status:
   Dev EUI AC1F09FFFE______
   App EUI AC1F09__________
   App Key AC1F09__________________________
   Dev Addr 26021F__
   NWS Key 323D155A000DF335307A16DA0C______
   Apps Key 3F6A66459D5EDCA63CBC4619CD______
   OTAA enabled
   ADR enabled
   Public Network
   Dutycycle disabled
   Send Frequency 2
   Join trials 2
   TX Power 0
   DR 3
   Class 0
   Subband 1
   Fport 2
   Unconfirmed Message
   Region AU915
LoRa P2P status:
   P2P frequency 916000000
   P2P TX Power 22
   P2P BW 125
   P2P SF 7
   P2P CR 1
   P2P Preamble length 8
   P2P Symbol Timeout 0

I did notice the documentation from RAKWireless says that firmware 1.0.1 supports LoRaWAN MAC version 1.0.2 (not the 1.0.3 that the LGT92 supported); and this version difference is defined in a device profile in AWS IoT Core for LoRaWAN.

What I also noticed was the documentation for the RAK 10700 at https://docs.rakwireless.com/Product-Categories/WisBlock/RAK10700/Datasheet/#software mentioned that the firmware version available is 1.0.1, so older than what shipped to me on the device:

+VER:1.0.4 Jan 14 2022 14:17:02

But on that same documentation page there is a link to download the firmware – which is unfortunately a 404!

So, my journey continues, but I've learnt a few lessons. The IoT device landscape seems… littered with failures. The quality of commodity devices is low, the compatibility is bewildering, and the standards are evolving.

Transitioning to IPv6 in AWS

There are a large number of workloads that operate in the AWS Cloud using traditional virtual machines (Instances) on traditional IPv4 networking. And for the last few years, we’ve seen the steady growth in IPv6 adoption globally. For those who haven’t started this journey yet, here’s some notes on what you may want to look at as you start to embrace the future of the Internet.

It should be noted that this transition is a two way street:

  1. you need to get ready to offer your digital services to your clients over both IPv4 and IPv6 (dual stack)
  2. you need the dependent services you use to offer (listen on) an IPv6 address, probably via a gradual transition of offering both IPv4 and IPv6 for a (long) period of time

Within your internal (to your VPC) network architecture you can use either network protocol: the initial focus needs to be on enabling your incoming traffic to use either IPv4 or IPv6.

Your transport layer security (TLS) should be identical on either network protocol. The IP protocol is just a transport protocol.

Here are the steps:

  1. VPC Changes
  2. Subnet Changes
  3. Routing Changes
  4. Load Balancer Changes
  5. Security Group Changes
  6. DNS Changes

VPC Configuration

Adding an IPv6 address block is reasonably simple in VPC. While you can allocate from your own assigned pool, it's far easier to use the AWS pool; it's ready to go and doesn't need any other preparation.

There are three ways to add an IPv6 address allocation:

  • In the console, via ClickOps
  • Via the API (including the CLI)
  • Via the CloudFormation template that defines your VPC – highly recommended

Assigning the address block to the VPC does not actually use it, and should make zero impact to already running workloads. You should be safe to apply this at any time.
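
A minimal CloudFormation sketch of that association, with illustrative logical names:

  # Associate an Amazon-provided IPv6 /56 block with an existing VPC
  VpcIpv6Block:
    Type: AWS::EC2::VPCCidrBlock
    Properties:
      VpcId: !Ref MyVpc
      AmazonProvidedIpv6CidrBlock: true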

Subnet Configuration

Once the VPC has an allocation, we can then update existing subnets to also include an allocation from within the VPC's range. The key difference here is that in IPv4 we can choose the size of the subnet; in IPv6 you cannot: every IPv6 allocation to a subnet is a /64, which is about 18 billion billion IP addresses.

You can undo an allocation if no Network interfaces (ENIs) are present in the subnet using those addresses.

The configuration is relatively simple: you get to choose which slice of the VPC IPv6 address block will be used for which subnet. I follow a pretty simple rule: I anticipate that my VPCs will perhaps one day spread across 4 Availability Zones, so I allocate subnets sequentially across Availability Zones in order to be able to reference the range via a supernet.

The reason for this is:

  • subnetting is done in powers of two: so for continuous addressing (supernetting) we’re looking at using two AZs, four AZs, or eight AZs, etc.
  • two Availability Zones is insufficient. If one fails, then you are running on a single Availability Zone during the incident (which may last several hours). This AZ may be constrained in capacity, while other AZs may be underutilised. Hence we want to use three AZs to have fault tolerance able to be restored DURING a single AZ outage

Most Regions have between three and five AZs. Preparing for eight in most Regions would be reserving address space we'll likely never allocate.

Hence, starting with public subnets, we want to sequentially allocate them with space to accommodate four AZs. These allocations are a hexadecimal number between 00 and FF – and hence a 256 limit on the total number of subnets in the VPC. If we recall the four-AZ allocation, then that's 64 sets of subnets across all AZs.

Again, you can allocate these by:

  • Click Ops in the console on each existing subnet (or when creating new subnets)
  • API call (including the CLI)
  • CloudFormation template – recommended – in which case, look at the Fn::Cidr function to calculate the allocation (a sketch follows this list). Check out my post from March 2018 on this.
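
Here's a hedged sketch of one public subnet using Fn::Cidr, assuming the VPC association resource from the earlier sketch; the names, AZ index and IPv4 range are illustrative:

  PublicSubnetA:
    Type: AWS::EC2::Subnet
    DependsOn: VpcIpv6Block        # the IPv6 block must be associated before we can slice it
    Properties:
      VpcId: !Ref MyVpc
      AvailabilityZone: !Select [0, !GetAZs '']
      CidrBlock: 10.0.0.0/24
      # Take the first /64 slice of the VPC's /56 allocation; use index 1, 2, 3… for the next AZs
      Ipv6CidrBlock: !Select [0, !Cidr [!Select [0, !GetAtt MyVpc.Ipv6CidrBlocks], 256, 64]]
      AssignIpv6AddressOnCreation: true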

If your focus is to start with your services being dual-stack available, then the only subnets you need to allocate initially are the Public Subnets: the subnets where your client facing (internet facing) load balancers are.

Once again, there’s no interruption to existing traffic during this change; indeed you’re less than half way through the required changes.

You may also allocate the rest of your private subnets at this time if you wish.

Routing Changes

For public subnets to function, they need a route for the default IPv6 destination via the existing Internet Gateway (IGW). This looks like "::/0", and when pointing to the IGW, it permits two-way traffic just like IPv4. Your set of public subnets will need this route, and this can be done at any time: permitting IPv6 routing won't start clients using it.

If you have private subnets with IPv6 allocations, and you want them to be able to make outbound requests on IPv6 to the Internet, then you may want to consider an Egress-Only IGW as the destination for "::/0" in those private subnets' route tables. Note your public subnets will still use the standard IGW.

The Egress-Only IGW resource does what it says, and supplants the need for a NAT Gateway as used in IPv4 (more on the NAT GW later).

Again, you can add the Egress Only IGW and the Routing changes in several ways:

  • Click Ops on the console
  • Via the API (including the CLI)
  • In your CloudFormation template for your VPC – recommended (a sketch follows this list)
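
A CloudFormation sketch of both routes; the route table, gateway and VPC names are again illustrative:

  # Default IPv6 route for public subnets, via the existing Internet Gateway
  PublicDefaultIpv6Route:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationIpv6CidrBlock: "::/0"
      GatewayId: !Ref InternetGateway

  # Outbound-only IPv6 for private subnets
  EgressOnlyIgw:
    Type: AWS::EC2::EgressOnlyInternetGateway
    Properties:
      VpcId: !Ref MyVpc

  PrivateDefaultIpv6Route:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationIpv6CidrBlock: "::/0"
      EgressOnlyInternetGatewayId: !Ref EgressOnlyIgw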

Load Balancer Changes

Now that you have public load balancers in public subnets with IPv6 available, you can modify your load balancer to have it pick up an IPv6 address, by switching its IP address type to dual-stack. This is yet another action that will have no impact on current traffic.

You can modify the existing load balancers by:

  • Click ops on the console
  • An API call (including the CLI)
  • In your CloudFormation template for your workload – recommended (a sketch follows this list)
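
A minimal sketch of the CloudFormation change, assuming an Application Load Balancer and the public subnets from earlier (names illustrative):

  PublicAlb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internet-facing
      IpAddressType: dualstack      # previously ipv4; this is the only change needed here
      Subnets:
        - !Ref PublicSubnetA
        - !Ref PublicSubnetB
        - !Ref PublicSubnetC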

Security Group Changes

Now we're down to the last two items. By default, your security group is closed unless you have made changes. Your typical load balancer will be listening on TCP 80 and/or 443 for web traffic, and be open to the entire [IPv4] Internet with a source of 0.0.0.0/0.

To enable this security group for IPv6, we add a set of rules for source of ::/0 for the same ports you have for IPv4 (typically 80 and 443 for web traffic, different for other protocols).
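
In CloudFormation terms, the additional rules simply mirror the IPv4 ones; a sketch with illustrative names, assuming web ports 80 and 443:

  AlbSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Public web traffic over IPv4 and IPv6
      VpcId: !Ref MyVpc
      SecurityGroupIngress:
        - { IpProtocol: tcp, FromPort: 80,  ToPort: 80,  CidrIp: 0.0.0.0/0 }
        - { IpProtocol: tcp, FromPort: 443, ToPort: 443, CidrIp: 0.0.0.0/0 }
        - { IpProtocol: tcp, FromPort: 80,  ToPort: 80,  CidrIpv6: "::/0" }
        - { IpProtocol: tcp, FromPort: 443, ToPort: 443, CidrIpv6: "::/0" }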

It's at this time you can now test connectivity to your load balancer using IPv6 end-to-end – assuming you have another end on the IPv6 Internet somewhere.

If your workstation/cellphone is using IPv6, then you could browse to the IPv6 address directly – but you'll probably get a certificate warning, as the name in the certificate doesn't match the raw IP address.

If you’re not familiar yet, this should also be a CloudFormation template update.

DNS Changes

This is when we announce to the world that your service can be accessed with IPv6. You want to make sure you have done the above test to ensure you can connect, as this is the final piece in the puzzle.

Typically a custom DNS name for a load balancer is a Route53 ALIAS record of type A (Address). The custom DNS name is what also appears in any TLS certificates.

To finally flick the switch on IPv6, you add an additional Route53 ALIAS record of type AAAA (four As), with the destination being the same as you have used for the existing Alias A record (one A).
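
A sketch of the matching pair of alias records in CloudFormation – the zone and host names are placeholders, and the alias target references the load balancer from the earlier sketch:

  WebAliasA:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: www.example.com.
      Type: A
      AliasTarget:
        DNSName: !GetAtt PublicAlb.DNSName
        HostedZoneId: !GetAtt PublicAlb.CanonicalHostedZoneID

  WebAliasAAAA:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: www.example.com.
      Type: AAAA
      AliasTarget:
        DNSName: !GetAtt PublicAlb.DNSName
        HostedZoneId: !GetAtt PublicAlb.CanonicalHostedZoneID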

You should now be able to check that you can resolve your service using the nslookup utility. From a command prompt or PowerShell, type:

  • nslookup -type=AAAA my.custom.load.balancer.name
  • nslookup -type=A my.custom.load.balancer.name

Your Dependencies

Now that you're up and running, you need to think about the services you depend upon. Services within your VPC, such as RDS, require AWS to enable dual-stack support for them. Some services already have, such as the link-local metadata service, the Time Sync Service, and the VPC DNS resolver (note: always use the VPC DNS resolver).

Some services will be outside of your VPC but still AWS-run, like SQS, and S3: in which case, look to use VPC Endpoints to communicate with them.

But other third-party resources across the Internet may be stuck back on IPv4. If you have an EC2 Linux instance, then it's sometimes worth running tcpdump to inspect the traffic you see using IPv4. A command like tcpdump ip and port not 22 may be useful (the ip qualifier matches only IPv4). You can extend that to also exclude HTTP/HTTPS traffic with tcpdump ip and port not 22 and port not 80 and port not 443. Remember, your service port on your instance may be a different number on the inside of your network.

You'll need to ask your dependencies to include dual-stack support on their services. In the meantime, you'll have to fall back to using IPv4 from your network to communicate with these dependencies. There are two ways this can happen:

  1. If the subnet with your EC2 instance in it is dual-stack, then the host can use an IPv4 connection itself, possibly via a NAT Gateway, to communicate with the external IPv4 dependency
  2. If the subnet with your EC2 instance is IPv6-only (which is rather new), then the subnet can be configured to use DNS64 addressing (a subnet-level configuration), and can route its traffic via the NAT GW, which will translate from IPv6 on the VPC-internal network to IPv4 across the Internet, and back (NAT64).

Moving to IPv6 only internal networks is a long term goal, probably in the order of half a decade or so. A number of additional AWS updates will be needed before this becomes a default.

Additional IPv6 Notes in AWS

In this transition period (which has been going for nearly 25 years), you're going to find stuff that silently falls back to IPv4. With hosts able to simultaneously have two addresses (an IPv4 A record, and an IPv6 AAAA record), the things that look them up have a choice. For most things this is the newer AAAA first, with a fall-back to A if needed (see the Happy Eyeballs RFC).

However, at this time (Mar 2022), CloudFront still preferences IPv4 origins when the origin is dual-stack. CloudFront also still uses TLS 1.2 for its origin fetches instead of the newer and faster TLS 1.3, and HTTP/1.1 instead of the slightly more efficient HTTP/2 request protocol.

AWS IoT core exposes IPv4 endpoints, which is unusual as a key element of IoT is having millions of devices connected, a situation best served by IPv6.

Similar considerations exist for Route53 Health Checks, and others.

Summary

If you’re thinking this is all very new in cloud, you’d be mistaken. I was transitioning customer environments (including production) in AWS to dual stack in 2018 – four years ago. I’ve been dual-stack for my home Internet connection since I swapped to Aussie Broadband (I churned away from iiNet, who once had an IPv6 blog and strong implementation plans).

For several years, Australia’s dominant telco, Telstra, has had IPv6 dual stack for its consumer mobile broadband, something that the other players like Optus are yet to enable.

But these changes are inevitable.

The future is here, it's just not evenly distributed.