AWS Sydney Summit 2023

Last week I was fortunate enough to attend the AWS Sydney Summit 2023, along with my colleague and friend, Elliot Segler, on behalf of Akkodis.

A Partner Day was arranged that ran on Monday 3rd April – I’ll skip the details of that here to focus on the main Customer Summit – the Big Event.

This was really the first major in-country Summit since the “end” of the Covid-19 pandemic. The 2022 version was much smaller, announced as an in-person event only a few weeks before it was held, and thus drew only a small crowd; this year, attendance tickets were capped at 5,000.

In 2019, some 19,000 people attended over two days.

I’ve been fortunate to never miss an AWS Summit in Australia, and it gives me insight into the state of the Cloud when comparing year-on-year.

The floorplan of the AWS Sydney Summit 2023
The exhibitor floor at the AWS Sydney Summit 2023
AWS Managing Director A/NZ, Rianne van Veldhuizen, opens the Summit
Cameron Adams, founder of Canva, shared some of their AWS Cloud architecture.

Cameron also spoke about the move to sustainable computing, Canva signing The Climate Pledge, and Canva racing to integrate AI models into their service offering, such as Magic Eraser.

We also saw Nicole Sheffield from Wesfarmers “One Digital” service who spoke about the implementation of their digital platform that offers customer-facing digital services across the Wesfarmers portfolio. This is significant, as Wesfarmers had, in 2014, been quite alarmed at Amazon entering Australia.

The exhibitors were split between Consulting Services partners, and ISV partners. Those ISVs had a distinct developer-tooling flavour to them, but we also saw Canva talking to end customers about their product, and training providers Bespoke and Lumify (formerly DDLS) discussing their educational schedules and offerings.

Early in the morning I took a photo of the Builder lab, set up for individuals to undertake self-paced digital training – which was full for the rest of the day:

The Builder Lab (taken first thing in the morning before it opened)

In Financial Services, Australian bank (and one of the Big Four) Westpac provided their take on some of the cloud security approaches. This is notable to me as one of their staff in 2013 said they would “never use Public Cloud”; they seem to be doing well putting Public Cloud to good use.

Suncorp (finance, insurance and banking) also spoke about their exit strategy from their existing data centres, using AWS as a target platform.

This year there were no major service announcements or releases at this Summit, but then these days the major announcements happen almost every few days anyway! In previous years, the Summit has always had a sort of “revolution” (when major new concepts or services were released, such as AI services) or “evolution” (incremental updates announced for existing services) feel. This year the theme was more “steady-as-she-goes” and stable.

What’s clear is that commentary around Lift & Shift migrations is now evolving to Migrate & Modernise (which is what I have always focused on – deeper expertise and a better short- and long-term outcome). This isn’t surprising, as naïve Lift & Shift has often left customers with workloads costing more, and unable to take advantage of key cloud attributes such as scalability or cloud-platform managed services to reduce TCO.

Of course, those who implemented cloud-native solutions, and paid close attention to Cloud Operating Models (and tech team org structures) with an eye to the Well-Architected Framework have enjoyed optimised and reliable operation.

AWS claimed that perhaps only 15% of all workloads that will eventually run in the cloud are doing so today. Of course, that 15% requires maintenance, care and attention to ensure it remains operational, and optimal.

So where are we in the Cloud evolution timeline? I suspect we’re in the middle of furious catch-up by software providers, who are now focusing on adopting IaaS and PaaS to take their legacy solutions and reimplement them as cloud-native SaaS. More vertical-specific SaaS products are coming to market.

The individual services within the cloud are maturing. ARM-based Graviton chips continue to uphold Moore’s Law (RIP Gordon Moore, 2023). IPv6 is progressing, but as noted by one of my fellow Partner Ambassadors and long-term friend, Greg Cockburn, the rate of change in modern IPv6 networking appears to be slowing (with a notable exception: AWS Network Firewall now supporting IPv6-only subnets). I suspect the major requests are now satisfied; workloads that want to be dual-stacked to the outside world are fully supported. Of course, many ISPs, telcos and carriers are continuing to slowly adopt IPv6 for their consumers; some advanced end-user providers in Australia have been Telstra, Internode (ironically part of iiNet, who dropped the IPv6 ball in 2013, and thus now part of Vodafone), and Aussie Broadband.

From speaking to people, it did appear that the 5,000-person cap had prevented many people from attending, particularly those in the host state of New South Wales who may have left booking a ticket too late and missed out. Perhaps in 2024 we’ll see another increase (and who knows).

Meanwhile, outside around Darling Harbour, much construction happens, watched over by warships and tall ships.

Warships and Tall Ships in Darling Harbour, 2023

And in case you’re wondering, it’s a long way from Perth (where I live) to Sydney, and the great circle route directly between the two cities takes us far over the Southern Ocean:

3,300 kms from Sydney to Perth

AWS Local Zone: Perth Launch

In the last few days, the AWS Local Zone service launched in Perth.

Articles in the media are claiming this is a significant uplift:

AWS has officially opened a new cloud infrastructure region in Perth.

Kate Weber, IT News

The only problem: this is not a Region (big R) in the AWS vernacular, but a single Availability Zone, attached as an adjunct to the AWS Sydney Region (ap-southeast-2).

Media statements and press releases are designed to excite the media, but the reality is a little more circumspect. The Local Zone does add local compute (EC2) and block storage (in the form of EBS) within Perth. However, it’s the gap between that and what many would expect of a Region that makes a Local Zone perhaps a little limiting.


First up, only certain, limited compute instance types are available, and the same goes for EBS volume types. You may be forced to use more expensive compute and storage, because the cheaper instance sizes or EBS types are not offered.

Second, the cost of compute and storage is higher than in the full Region, where economies of scale and usage patterns drive down costs. You may see up to a 50% overhead on the same 4xlarge instance type compared to Sydney.

Third, some basics, like IPv6, are not (yet?) supported. That may upset your VPC conventions.

Fourth, you may have wanted close access to managed services, like RDS (databases), or even WorkSpaces (desktop) and AppStream (application virtualisation). Well, those are back in the main Region, not in the Local Zone, and at 50ms across Australia, that may be a bit too far, which means you’re back to the stone age of running databases on EC2 instances yourself.

Fifth, being a single AZ, you won’t get the multi-AZ resilience that you’re comfortable with in a full Region.

Sixth: the Local Zone’s operation is tied to its designated Region: issues in ap-southeast-2 (Sydney) may impact management (control plane) or availability (data plane) of that zone. Your CloudFormation has to execute in Sydney to stand up a stack.
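That opt-in and Sydney-anchored control plane is visible even at the CLI level. As a rough sketch (the zone group and zone name below are as published at the Perth launch; the VPC ID is a placeholder), enabling the Local Zone and dropping a subnet into it looks like:

aws ec2 modify-availability-zone-group --region ap-southeast-2 --group-name ap-southeast-2-per-1 --opt-in-status opted-in

aws ec2 create-subnet --region ap-southeast-2 --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.128.0/24 --availability-zone ap-southeast-2-per-1a

Note that both commands are issued against ap-southeast-2: everything is managed via Sydney.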


In essence, a Local Zone needs to be looked at from the Well-Architected Framework perspective, and utilised accordingly.

Once you have reconciled those issues, there is the bright side: compute in cloud, managed in a uniform way, without having to deploy an Outpost and have a datacentre for the Outpost to run in.

Many organisations do want Cloud services locally, and for some high volume, low latency, idempotent, loosely coupled, edge processing, this may be perfect.

An AWS Local Zone is also a small stepping stone to perhaps being a full Region one day, as was seen in Osaka, Japan. It just takes demand to validate the point.

It’s 10 years since AWS first opened a point of presence (CloudFront, Route53) in Australia, and then the Sydney Region. We’re on the cusp of a second Local Zone up in Brisbane, and a second Australian Region – yes, a full Region – in Melbourne.

It’s the new Region that suddenly opens up multi-Region application architecture within Australia, something that, for the public sector in particular, has not been permissible for data-jurisdiction purposes (even if that is just a desire and not a mandated legal requirement).

IoT Trackers and the AWS Cloud

I continued my IoT project over the recent end-of-year/Christmas break period, picking up from where I was 6 months ago.

Since then, a new firmware version had become available for the RAK Wireless RAK10700 GNSS (Global Navigation Satellite System) solar powered device. These devices shipped without battery (due to postal limitations), and came with firmware 1.0.4.

I failed completely last time to get these to associate with my local IoT gateway (a RAK7246: basically a Raspberry Pi in a box that bridges LoRaWAN and WiFi).

Since then, a new firmware 1.0.6 has been released.

Documentation for the RAK10700 was OK, until I got to the page that says the latest firmware is version 1.0.1; given this device already shipped with 1.0.4, I dived in deeper. The link to the firmware is https://downloads.rakwireless.com/LoRa/WisBlock/Solutions/LPWAN-Tracker-Latest.zip, and the contents of this zip file, at the time of writing, are three files:

  • Manifest
  • WisBlock_SENS_V1.0.6_2022.04.07.13.36.27.bin
  • WisBlock_SENS_V1.0.6_2022.04.07.13.36.27.dat

Caveat Elit (developer beware): this appears to be firmware version 1.0.6.

Flashing this was interesting: the device, connected via its USB cable to my laptop, had to be reset into DFU mode, which required double-tapping the reset pin in quick succession (it’s located next to the USB port). Once done, the device presented as USB storage, and the adafruit-nrfutil tool could update it (check the COM port from Device Manager).

adafruit-nrfutil dfu serial --package LPWAN-Tracker-Latest.zip -p COM19 -b 115200

When in DFU mode, the device turned up as a different COM port compared to its normal mode. It took me two attempts for this to be successful, followed by pressing the reset button to have the device return to normal mode.

Next came the interface to AWS IoT Core for LoRaWAN. I’d previously been using the LGT-92 (now not available), but had to abandon these as no amount of protection in waterproof bags had made them durable enough to last the distance of my use case; tracking a small sail boat.

The configuration that eventually worked for me was to define a profile with MAC version 1.0.3, Regional Parameters RP002-1.01, Max EIRP 15, for AU915 frequencies (I am in Australia):

AWS IoT Core for LPWAN: Profile Configuration for RAK10700
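For the CLI-inclined, the same profile can be created with aws iotwireless – a sketch only, assuming the shorthand values map to the console settings above (the profile name is illustrative; check the accepted RegParamsRevision strings for your CLI version):

aws iotwireless create-device-profile --name rak10700-au915 --lora-wan "MacVersion=1.0.3,RegParamsRevision=RP002-1.0.1,MaxEirp=15,RfRegion=AU915"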

Now with the profile defined, I can add the two Devices in, using the AppKey, DevEUI, etc:

With data coming through, it was now time to decode the payload. These devices use a format called CayenneLPP to pack as much data as possible into a small payload. One of the first things you’ll want to do is decode the data to check it looks legitimate. Using a small Python script, I can unpack it – after doing a pip install pycayennelpp:

#!/usr/bin/python3
# Decode a base64-encoded CayenneLPP payload (as delivered in PayloadData).
import base64
import sys

from cayennelpp import LppFrame

d = base64.standard_b64decode(sys.argv[1])
f = LppFrame().from_bytes(d)

for i in f.data:
  # Scalar values (e.g. battery voltage) print as a single number;
  # compound values (e.g. a GPS fix) print as a tuple.
  if len(i.value) == 1:
    print("Ch {}, {}: {}".format(i.channel, i.type, i.value[0]))
  else:
    print("Ch {}, {}: {}".format(i.channel, i.type, i.value))

By routing the incoming IoT messages to a Lambda function, I can now pick out the PayloadData from the event and see the string being sent. Here’s what I am seeing in CloudWatch Logs when I just print(event):

{'WirelessDeviceId': 'b15xxxx-xxxx-47df-8a5d-f57800c170b5', 'PayloadData': ' AXQBqwZoRwdnARQIcydWCQIG6Q==', 'WirelessMetadata': {'LoRaWAN': {'ADR': False, 'Bandwidth': 125, 'ClassB': False, 'CodeRate': '4/5', 'DataRate': '3', 'DevAddr': 'xxxxx', 'DevEui': 'xxxxx', 'FCnt': 73, 'FOptLen': 0, 'FPort': 2, 'Frequency': '917200000', 'Gateways': [{'GatewayEui': 'xxxxx930e93', 'Rssi': -31, 'Snr': 9.5}], 'MIC': 'xxxxx95b', 'MType': 'UnconfirmedDataUp', 'Major': 'LoRaWANR1', 'Modulation': 'LORA', 'PolarizationInversion': False, 'SpreadingFactor': 9, 'Timestamp': '2023-01-04T13:46:42Z'}}}

While inside, with no satellite lock, that PayloadData translates out to:

Ch 1, Voltage: 4.27
Ch 6, Humidity: 35.5
Ch 7, Temperature: 27.6
Ch 8, Barometer: 1007.0
Ch 9, Analog Input: 17.69
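For completeness, the Lambda function behind that logging is essentially a wrapper around the same decoder. A minimal sketch, assuming the pycayennelpp package is bundled with the function (the handler name and return shape are illustrative, not my exact code):

#!/usr/bin/python3
# Minimal sketch: decode PayloadData from an IoT Core for LoRaWAN event.
# Assumes pycayennelpp is packaged with the function; names are illustrative.
import base64

from cayennelpp import LppFrame

def handler(event, context):
  print(event)  # the raw event, as seen in the CloudWatch Logs above
  raw = base64.standard_b64decode(event['PayloadData'])
  frame = LppFrame().from_bytes(raw)
  readings = {}
  for i in frame.data:
    value = i.value[0] if len(i.value) == 1 else i.value
    readings['Ch {}, {}'.format(i.channel, i.type)] = value
  print(readings)
  return readings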

Now I have the two sensors, it’s time to get them outside, with a bit of soldering of the LiPo battery on to the right connector…

The Sydney to Hobart Yacht Race 2022

This year’s Sydney to Hobart was a stunning race. Broadcast TV coverage in Australia started well, via Seven West Media’s 7+ service, and included a view of the four simultaneous start lines for the different classes:

Four start lines of the 2022 Sydney to Hobart Yacht Race, taken from @CYCATV on YouTube

Sadly, the broadcast TV coverage ended just after the start. With 7+ on the sail of one of the boats, I was expecting a bit more coverage.

Luckily the CYC had an intermittent live stream on YouTube, with Peter Shipway (nominative determinism at work there), Gordon Bray and Peter Gee hosting.

The primary website for the race this year was https://www.rolexsydneyhobart.com/, which appeared to be running via AWS CloudFront.

Time for a quick health check, with SSL Labs:

After noting this is CloudFront, I notice that it’s resolving as IPv4 only. Shame, as IPv6 is just two steps away: tick a box in the CloudFront config, and publish an AAAA record in DNS. It’s also interesting that some of the sub-resources being loaded on their page from alternate origins are available over IPv6 (as well as good old IPv4).
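You can check the AAAA situation yourself with nothing more than the Python standard library – a quick sketch (hostname assumed from the site above; on a NAT64 network you may see synthesised addresses):

#!/usr/bin/python3
# Report whether a host resolves over IPv6 (i.e. publishes AAAA records).
import socket

host = 'www.rolexsydneyhobart.com'  # assumed hostname
try:
  answers = socket.getaddrinfo(host, 443, socket.AF_INET6)
  print(sorted({a[4][0] for a in answers}))
except socket.gaierror:
  print('No AAAA records: IPv4 only')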

Talking of DNS, a quick nslookup shows Route53 in use.
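For example (output pattern described, not the actual records):

nslookup -type=NS rolexsydneyhobart.com

Name servers of the form ns-NNN.awsdns-NN.com (and .net, .org, .co.uk) are the Route53 giveaway.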

Back to the output: a B. Here are a few items observed on the SSL Labs report:

  • TLS 1.1 is enabled – it’s well past time to retire this. Luckily, TLS 1.2 and 1.3 are both enabled.
  • Even with TLS 1.2, there are some weak ciphers, but (luckily?) the first one in the list is reasonably strong.
  • HTTP/2 is not enabled (falling back to HTTP/1.1).
  • HTTP/3 is not enabled either, which would offer even more performance than HTTP/2.
  • AWS Certificate Manager (ACM) is in use for the TLS certificate on CloudFront.

It also says that there is no DNS CAA record – a simple way to prevent any other CA provider being duped into mis-issuing a certificate. A low risk, but a free way to prevent it.
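For example, to permit only Amazon’s CA to issue (the record below is illustrative; ACM’s documentation lists the full set of permitted values):

rolexsydneyhobart.com. IN CAA 0 issue "amazon.com"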

Turning to SecurityHeaders.com, we get this:

SecurityHeaders.com output for rolexsydneyhobart.com, December 2022

Unfortunately, it looks like no security-related headers are sent.

Strict Transport Security (HSTS) is a no-brainer these days. We (as a world) have all gone TLS for online security, and we’re not heading back to unencrypted HTTP.
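Enabling it is a single response header – for example, a common baseline (tune max-age to your rollout comfort):

Strict-Transport-Security: max-age=31536000; includeSubDomains

On CloudFront, a Response Headers Policy can inject this without touching the origin.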

The service stayed up and responsive: well done to the team who put this together, and good luck with looking through the event and finding improvements (like above) for next year.

Cyber Insurance: EoL?

The Chief Executive of insurance company Zurich, Mario Greco, recently said:

“What will become uninsurable is going to be cyber,” he said. “What if someone takes control of vital parts of our infrastructure, the consequences of that?” 

Mario Greco, Zurich

In the same article, Lloyd’s is looking for exclusions in Cyber insurance for attacks by state-based actors – a difficult thing to prove with certainty.

All in all, one reason Cyber Insurance exists is to cover, from a risk perspective, the opportunity of spending less on insurance premiums (and having financial recompense to cover operational costs) than on having competent processes around software maintenance: coding securely to start with, detecting threats quickly, and maintaining (patching/updating) rapidly over time.

The structure of most organisations is to have a “support team” responsible for an ever-growing list of digital solutions, goaled on cost minimisation, and not measured against the amount of maintenance actions per solution operated.

It’s one of the reasons I like the siloed approach of DevOps and Service Teams. Scope is contained to one (or a small number of similar) solution(s). Same tech base, same skill set. With a remit to have observability, metrics and focus on one solution, the team can go deep on full-stack maintenance, focusing on a job well done, rather than a system that is just turned on.

It’s the difference between a grand painter, and a photocopier. Both make images; and for some low-value solutions, perhaps a photocopier is all they are worth investing in from a risk-reward perspective. But for those solutions that are the digital-life-blood of an organisation, the differentiator to competitors, and those that have the biggest end-customer impact, then perhaps they need a more appropriate level of operational investment — as part of the digital solution, not as a separate cost centre that can be seen to be minimised or eradicated.

If Cyber insurance goes end-of-life as a product in the insurance industry, then the war for talent – the push to find those artisans who can adequately provide that level of operational care – increases. All companies want the smartest people, as one smarter person may be more cost effective than 3 average engineers.