AWS CloudFront launches in Perth

I moved back to Perth in 2010, having grown up here, gone to school and university, and started my career here. It’s a lovely city, with the metropolitan area sprawling north and south along the blue Indian Ocean for more than 50 km. They say it has something of a Mediterranean climate, rarely going below 0°C, with the heat of summer hitting the mid 40s °C, but with a fresh westerly coastal breeze appearing most afternoons to cool the place down.

But it is rather remote from other major population centers. The next nearest capital city, Adelaide, is 2,600 km (1,600 miles) by road. Melbourne is 3,400 km (2,100 miles) by road, and Sydney is 3,900 km (2,400 miles). It’s a large state, some 2.5 million square kilometers of land, the size of the US states of Alaska and Texas combined.

So one thing those in technology are well aware of is latency. Even with fibre to the premises (NBN in Australia), the round trip time to Sydney is around 55 ms – a similar time to Singapore. Melbourne comes in around 45 ms.

Latency from Perth to Singapore, Sydney, Melbourne, and New Zealand to Sydney

In 2013 I met with the AWS CloudFront team in Seattle, and talked through the distances involved and the population size (circa 2 million) of Perth. There are a lot of metrics that go into selecting roll-out locations (Points of Presence) for caching services: latency, population size, economic prosperity, cost of doing business, customer demand from a direct customer model, and customer demand from an end-consumer model all get weighed up.

This week (1st week of January 2018) AWS CloudFront launched in Perth.

The impact of this is that, for people in Perth, all web sites that use CloudFront will now appear faster for cachable content. The latency has dropped from 45 ms (to Melbourne) to around 3 ms to 5 ms (from a residential NBN FTTP connection at 50 Mbit/sec).

Test at 9:30pm from Perth (iiNet NBN).

In addition, the ability to upload/send data to applications in-Region via the Edge (Transfer Acceleration, or Edge Upload) may now also make a difference; with 45 ms to Melbourne, it’s been a largely unused feature, as the acceleration hadn’t made much of a difference. There is a Transfer Acceleration test tool that shows what effect this will give you; right now, while it shows an advantage to Singapore, it shows just a 7% increase in performance to the AWS Sydney Region. It’s not clear if Transfer Acceleration via the Perth PoP is enabled at this point, so perhaps this result will change over time.

And so, after several years, and with other improvements like the ability to restrict HTTPS traffic to TLS 1.2, it now makes sense to me to use CloudFront for my personal blog. In an hour, I had applied a new (additional) hostname to my origin server (a Linux box running WordPress) by editing the Apache config, symlinking the WordPress config file, and adding a Route53 CNAME for the host. I had certbot on Linux then add the new name to the Let’s Encrypt certificate on the origin. Next I applied for an AWS Certificate Manager SSL certificate, with the hostname blog, and (if you inspect it) blog-cloudfront.james.rcpt.to. I then created a CloudFront distribution with one origin, but two behaviours – one for the WordPress admin path, and one for the default paths – so that I could apply additional rules to protect the administration interface.

With this in place I could then update the DNS CNAME to move traffic to CloudFront, without any downtime. Not that downtime matters on my personal blog, but exercises like this need practising.
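
The DNS cut-over itself is a single Route53 change. A sketch of the change batch you would pass to `aws route53 change-resource-record-sets` – the hostname and CloudFront domain here are placeholders, not my real values:

```json
{
  "Comment": "Move blog traffic to CloudFront",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "blog.example.com.",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "d1234abcdefgh.cloudfront.net" }
        ]
      }
    }
  ]
}
```

UPSERT swaps the record in one atomic change; clients keep hitting the old origin until their cached answer’s TTL expires, so there is no downtime window.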

Welcome to Perth, CloudFront.

PS: It’s worth noting that IPv4 DNS resolution for my CloudFront distribution is giving me 4 ms RTT from Perth, but IPv6 RTT is 52 ms, which indicates that IPv6 CloudFront has not yet arrived here.
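
A quick way to compare these RTTs yourself, without the raw-socket privileges ICMP ping needs, is to time a TCP connect (one SYN/SYN-ACK round trip). A minimal sketch – the demo talks to a local listener so it is self-contained; point it at a real hostname and port 443 to measure an actual edge:

```python
import socket
import threading
import time

def tcp_rtt_ms(host: str, port: int, samples: int = 5) -> float:
    """Median TCP connect time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # Connection establishment costs one round trip (SYN / SYN-ACK).
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return sorted(times)[len(times) // 2]

if __name__ == "__main__":
    # Self-contained demo against a local listener; substitute a real
    # host, e.g. tcp_rtt_ms("example.cloudfront.net", 443), for real numbers.
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(16)
    port = server.getsockname()[1]
    threading.Thread(
        target=lambda: [server.accept() for _ in range(5)], daemon=True
    ).start()
    print(f"RTT to localhost: {tcp_rtt_ms('127.0.0.1', port):.3f} ms")
```

Taking the median rather than the minimum just smooths out scheduler noise; `ping` against the same host should agree to within a millisecond or two.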

Staring at tomorrow: Technology Impacts on society

This started as musings on where broadband connections enter buildings. It went further.


There are a few technical specifications that we’ve become very used to in our homes and offices over the last few decades, but they’ve not always been there. Their introduction to our lives has spurred changes in the construction of our dwellings, and in our expectations of what we do in our lives. And another one is unfolding now.

Plumbing (scheme water, sewerage), electricity, and broadcast TV have all had their impacts. Water in and out has remained largely unchanged in the last 50 years, having moved from outhouses to servicing dedicated ‘wet’ rooms in our buildings (kitchens, bathrooms, WCs). Electricity has crept from being provided for lighting purposes only, to additionally providing a single power socket per home, to multiple sockets per room at convenient places for us to leave devices semi-permanently plugged in. We’ve become accustomed to 110 V/60 Hz and 230 V/50 Hz supplies, and the sockets and plugs have generally settled down to several well-known arrangements of pins and locking mechanisms.

With the dawn of broadcast TV (UK: 1936, US: 1948, Australia: 1956), and excluding set-top “rabbit ears”, we’ve had antennas on roofs with coaxial cables leading to specific rooms: family rooms, bedrooms, etc. The superior reception of the roof-top antenna has meant that the placement of TVs was pretty much determined by the location of coaxial sockets in walls – indeed, not just which rooms, but which corners and walls of specific rooms, close to these antenna sockets. While we’ve happily added multiple power sockets to rooms to future-proof our desire to rearrange our lives, coaxial cable suffers more loss with each join and extra socket, so additional outlets were generally avoided.

Thus the layout of our homes has been throttled by the tether to the roof-top antenna.

We are well into the next revolution: delivery of our video feeds over a new carrier – IP and the Internet. The age of radio-frequency broadcast is declining, and with it some of our cultural norms.

One impact is building design: a time will come when we’ll no longer request TV broadcast antennas on buildings, or coaxial sockets on walls. These are replaced with either short-range high-speed WiFi such as 802.11ac, or wired Ethernet ports at gigabit speed (no less). Wireless has continued to increase in speed, but as with all broadcast spectrum it is subject to interference from other wireless signals, which can impact throughput. A continual battle between ever-faster wireless speeds, improved signal processing and interference handling, and cost considerations will play out to determine whether our TVs have a wired or wireless IP connection within the home.

Wireless has been through several generations, some of which are now well and truly dead. WEP encryption, once the critical protection for wireless networks, is known to be compromisable, and so is abandoned today. Hardware devices that only implemented WEP are effectively junk. Devices with a physical Ethernet socket have not suffered so – the trusty RJ45 100BASE-TX CSMA/CD connection works as well today as it did 20 years ago.

We’ll see people opt for wireless video delivery within homes for short-term savings, but the aggressive rate of change in encryption and signalling standards for wireless networks will see many wireless-only devices become redundant before their time. We expect to get 10–20 years from a TV (it’s often an expensive item), but should WPA2 get compromised, then those hosts are vulnerable. We’ve already seen this with the KRACK vulnerability, and devices from vendors that did not patch their firmware are probably still vulnerable (both the access point and clients need patches to address KRACK for WPA2).

Of course, the interactions that your TV makes over whatever connection should indeed be over an encrypted transport, but even the encryption that transport uses – the key lengths, the signature algorithms – will all be refreshed and strengthened over time. In the last 5 years we’ve seen web site certificates change in key length (1024 → 2048 bit), signatures asserting a chain of trust from a certificate authority move from MD5 to SHA1 to SHA256, the bulk encryption algorithms move from RC4 to AES128 to AES256 to elliptic-curve algorithms, and message digest checksums move from MD5 to SHA1 to SHA256, and now SHA384. Each of these changes needs to be applied to the software in your appliances for them to maintain the security you expect to be there.
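
Python’s ssl module shows what “refreshing” a client’s security floor looks like in code. A minimal sketch (no network involved; it just configures and inspects a context, mirroring CloudFront’s option to require TLS 1.2 from viewers):

```python
import ssl

# Build a client context and refuse anything older than TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Modern default contexts no longer offer legacy ciphers such as RC4;
# inspect the offered cipher suites to confirm.
cipher_names = [c["name"] for c in ctx.get_ciphers()]
print("Protocol floor:", ctx.minimum_version.name)
print("RC4 on offer:", any("RC4" in name for name in cipher_names))
```

An appliance that can never receive an update as small as these two lines of configuration is stuck offering whatever was current at manufacture.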

Many manufacturers won’t bother updating this on their already-shipped units; the disposable consumer sales cycle continues unabated – effectively helped along by these changes – and customers barely mention the trouble of having to replace these items.

But as we move to IP delivery of video content, we also move to more of a video-on-demand world, and away from a time-of-broadcast (playout) world where all consumers – the audience in a broadcast area – would all get the same content at the same time.

30 years ago, when I was at school, the talk in the playground was of the latest episode of some show: MacGyver, Airwolf, The A-Team, comedies like ‘Allo ‘Allo, The D-Generation, The Comedy Company, etc. With only a few channels to choose from (3 where I was), your peers likely saw the same content at the same time, making for a shared experience that you wove into the daily fabric of society.

Shows with catchphrases became commonplace, and everyone knew them. Mark Mitchell gave Australia “Couple o’ days, beautiful”, and everyone knew the setting and connotation.

As we move to VOD, subscription-based scenarios, this disappears. Shows on one subscription service are not on others, or appear at different times. Subscription services cross local boundaries in a way that broadcast spectrum, with its limits on transmitter power, never could. People watch content at different times, in different months, from different countries (despite the continued attempts at geo-blocking). The social background noise of modern society is changed, disjointed.

I suspect that in the next decade, broadcast TV will disintegrate. Advertising spend will continue to shift to product placement in original content, and to customised pre- and mid-roll ad insertion – tailored to the viewer, tracked and targeted.

As broadcast TV declines, it heavily modifies the stalwart of most TV stations: TV news. While news has been available from multiple media sources, local broadcast TV news was always curated to somewhat balance local, national and global current affairs. If all journalism production costs the same, then stories that can be replayed in multiple territories have more value per second, and local stories decline.

I’ve watched local news for decades, paying attention to their mix of slow-news-day stories (car through a fence, cat up a tree) and more interesting ones (state general or by-election results, perhaps investigative journalism – but that’s on the decline). It’s even interesting to sometimes see the format of content being padded out with in-content extended adverts for news clips scheduled later, the tweaks to the lower-thirds straps, even the background animations that engage the viewer in subtle ways. These things often annoy me with how much they con the audience’s emotional engagement with a story. One local broadcast TV channel has a habit of applying a cinematic dust/sparkle video filter to most human-interest stories (cat up tree). They’ve played games with putting weather forecasts showing MIN (overnight) and MAX (daytime) temperatures on top of each other, and then sometimes MAX (daytime) on top of MIN.

(While I’m ranting, the “cross to the local hospital to a reporter standing out front” seems a waste of time, as does the “cross to the next room” scenario, which an anchor could have just read out and moved on from.)

One medium that has withstood much change until now is broadcast radio. I suspect that’s because radio has had one place where its existence continued without much challenge: vehicles. Vehicle manufacturers have been pretty slow at putting DAB radios into vehicles by default; AM and FM radios are still universal. I wrote in 1995 in the UWA student magazine Pelican about the dawn of MP3 as an audio format, and the start of Digital Audio Broadcasting (DAB). The MP3 player was born – Apple seized on it with the iPod, and the Sony Walkman receded to the annals of history. And while DAB has been around for some time, the licensing and hardware have come at an expense that did not generally warrant the improvement in delivery, and were clearly not supported by vehicle manufacturers.

But while broadcast radio has been compatible with what the listener is doing in vehicles, a threat is coming to radio’s last bastion. It’s the same threat that is coming to organisations that live off driving licence fees: self-driving cars. Drivers can then do other tasks: they can look at screens. The radio will stop being an in-vehicle companion to millions of single-occupant cars, as those occupants start viewing content instead of just listening to it. Eyes will come off the road.

(And if eyes are off the road, do we have the demise of billboards and placards as an advertising medium – as no one is looking out the window any more?)

By extension, the music industry relies on radio for socialising its content, encouraging people to either purchase content or become customers of performers when they tour. With broadcast music no longer curated to seed the introduction of new content over time, artists will find it difficult to get established.

Coupled with the self-driving car is the move to electric vehicles, the eventual drop in petroleum fuel use, and with it the tax (excise) collected on fuel. Governments often use this as a source of funding for road building. This model will have to change, probably to a tax per kilometer travelled, levied on the owners of these vehicles.

And with self-driving cars, we can finally have some backward parts of the world switch to the metric system for units of distance and speed, without the risk of the human population getting it wrong and going too fast/slow/far.

Eventually, as my friend Paul Fenwick (PJF) has spoken of, the population will move to not owning vehicles, but calling them completely on demand – a la Uber/taxi, but without the human driver. These vehicles will be corporately owned, and will all have live mobile broadband data links. They’ll all have built-in dash cams, and logging of all activity in and out of the vehicles at all times. New advertising opportunities will rise up – screens in these vehicles will know the route you’re about to take, to advertise products and services on or near your route so you can choose to stop off. They’ll know your preferences and the environmental factors (warm, cold, sunny, rainy), and advertisers will bid to serve targeted advertising at you while you travel.

The next knock-on effect from this is the car insurance industry. If the self-driving vehicle has fewer accidents, and the risk of death or major damage is lower, then disruption will arrive here too. With only a few major self-driving-taxi companies requiring insurance, insurance companies will consolidate.

The self-driving vehicle probably needs fewer street signs. It may require less street lighting. It may require fewer lane markings, cats’ eyes, and lanes.

This probably sounds like a diatribe version of a Wardley map (hat tip to Simon). All these things are connected, generally by revenue streams or shared interests, and always by data and technology. As always, it’s all change, and resistance is futile.

Google Pixel & Pixel XL: impressions

It’s a little late in the release cycle, with the Pixel 2 and Pixel 2 XL having been released, but there are a number of points I’ve been contemplating about this premium-priced phone for some time that I’ve wanted to blog about. Here goes…

Phone Retailer: Avoiding the Bloat-ware

I purchased my Pixel (in Australia) direct from Google about 12 months ago (as at December 2017). One of my primary reasons for purchasing a phone direct from the vendor was to explicitly avoid third-party (telco) pre-installed, forced additional ‘value’ software.

Telephone companies (collectively: wireless providers, telcos, phone companies, mobile companies, cell providers, or whatever your term is) seem to take vanilla smartphone firmware and force-install their own additional software that they see as adding essential functionality. They also mark such software on Android as not uninstallable, leaving the consumer with space consumed on their device by software they potentially don’t want, or whose space they may want to free up later.

Telcos have a history of producing some fairly horrible third-party software. Somehow they achieve the combination of inefficient software that drains battery life, causes system reboots, consumes inordinate amounts of phone storage capacity for no obvious reason, and often has horrible security throughout – none of which is in the consumer’s interest.

Given this software is not uninstallable, the consumer has two options over the life of the phone: either put up with the issues, or apply security updates for this bloatware – if they are made available – which inevitably consume more device storage space (apparently never less), and spin the wheel on changes to battery life, stability and security.

You’ll note I say ‘consumer’ in the above, because if the telcos treated the people paying them as customers, then perhaps they’d pay a bit more attention to customer experience and satisfaction, rather than forcing their own poorly implemented branded bloatware onto these devices. Even the boot logo – I’d rather have the default than have a telco’s animated loop displayed to me when I turn my phone on.

I had this bloat-free experience with the original Google Nexus phone perhaps a decade ago, but phones I have used since then have suffered this bloat infestation. My wife has a Samsung Galaxy S4, with a combination of additional Optus and Samsung software crowding out the fixed storage space in the S4.

The Pixel

While the Nexus line continued with the 6P, it was when the Pixel launched that my S3 was on its last legs; with the option of going direct to Google, I ordered one – a reasonably easy ordering process, with good tracking for delivery.

The install looked great: just hook up a USB cable to the older Android phone, and everything should transfer – except it didn’t work at all. The S3 (from Optus) was capped at Android 4.4, the Pixel shipped with Android 7, and the delta was too wide a divide for the promised smooth transfer.

Oh well, I thought, this looks like it may be useful in future for easing the upgrade/transition/replacement path.

Pixel Sound Issues

Then the speaker started to play up.

Over the course of three months, the sound quality from the speakers (ie, when playing music, YouTube content, and phone speakerphone mode) degraded, and eventually stopped altogether. When the phone ‘rang’, the audio was highly distorted. Speakerphone was not possible – you couldn’t make out the words the caller was saying.

This is when I first contacted Google support.

Google Support

Conveniently, Google support is contacted through a menu on the phone: either text chat, or a ring-back system that must have registered me in a queue and called me when an agent was available. Neat.

After performing a few checks (ie, volume turned up), I was asked to factory reset the phone. With MFA enabled on my phone for a number of accounts (>30), I didn’t want to lose those seeds; I would rather transfer them to a new handset, especially if the promised transfer experience was going to save me having to recover the MFA set-up. After explaining this, the call ended, but no replacement was forthcoming.

Fast forward to October, and the audio on speakerphone was completely dead; I’d even tried the new Oreo release and every other software solution. So, another call with Google support; this time they confirmed on the phone that they would send a replacement.

Replacement procedure

What they didn’t say was that they would send a time-sensitive email to my Gmail address (not my primary address), which Gmail would automatically filter into a folder called “Updates” (ie, not my default view of my Gmail inbox), and which required me to click a link to order a replacement model within 5 days.

So a week after the call, wondering where the process was up to, I discovered the email (having not been told to look for one); but the link had expired, so another call was needed to get a fresh time-based link generated.

Confusingly, while I was trying to replace a Pixel, support sent me a link that would only order a Pixel XL. I wasn’t looking to change form factor (the Pixel fits nicely in my pocket). Another call to sort this out turned up that there were no Pixel replacements for RMA, and I would have to move to the larger handset.

The RMA procedure also required me to order a new one – a daunting process of having an AUD$1,400 hold placed on my credit card, especially late in the pay cycle when there wasn’t $1,400 clear on my card to hold. A few days later, another support call, and a fresh link to click to start the “order” (RMA) again.

Transferring Pixels

Finally, it arrived. I connected the magic USB cable to initiate the transfer… hoping to copy across the media on the device, and the precious MFA seeds.

But it failed to start. Pixel (Android 8.1) → Pixel XL (Android 8.0) wouldn’t connect over the USB cable, but after trying various options, and joining both handsets to a common WiFi network, it did promise to copy over WiFi.

Sadly, only the Google account logins came across. No media, no seeds. Not even the set of applications installed on the old phone.

So the promise of a seamless upgrade over a back-to-back connection between handsets seems unfulfilled.

Symantec Touchdown

For my various work email addresses, I purchased and have been using Symantec Touchdown for about 6 years. It’s a reasonable Exchange client, and consisted of the Touchdown application and the Touchdown License application (ie, two installations).

Now, as the stars align, Symantec have end-of-lifed Touchdown. They did this by pulling the license application from the Play store. So as I transfer my applications, I can no longer install the license I purchased from Symantec.

Pixel & Pixel XL USB-C PD (Power Delivery) charging

One of the nice points about the Pixel was that it charges quickly, using a new USB connector. This rectangular USB-C connector is effectively symmetrical: it can be plugged in either way, and starts a very rapid charging process (around a percent of battery per 30 seconds or so).

However, it appears to wear loose pretty quickly. Even on the Pixel XL (now two months old) the USB-C PD connector needs to be held in place to sustain a rapid charge. Numerous times I have connected it, seen the rapid charge begin, but returned to find that it had dislodged and not been charging at all!

So now I have to regularly check on a charging phone, to ensure I don’t need to grab-and-go and find it’s flat.

Pixel & Pixel XL Performance

So, some positives: the Snapdragon processor seems pretty speedy; applications respond well.

The Chrome browser is regularly updated, and security updates come through each month without delay (you don’t get that with a telco-branded firmware).

The camera takes nice photos and videos, including some reasonable slow motion (either 120 or 240 fps) and nice panorama and photo-sphere pics (stitched on camera). The integration of photos.google.com into the phone to backup (and offload) media from the device works well.

The placement of the fingerprint sensor on the rear works well; with the same hand I am holding the phone in, I can unlock it. And unlike FaceID, it doesn’t stink: I can register multiple fingers (ie, one from the left hand and one from the right – H/A for my hands).

Wish List

Google:

  1. make transferring phones also install the applications from the older phone onto the new one, and set them up with the same settings
  2. transfer media from old to new over the back-to-back USB-C link
  3. improve the support experience for RMA; perhaps extend the link validity a little longer (2 weeks?), and tell customers on the call to look for the email they must use to order the replacement device
  4. have the USB-C connector click and lock into place, or something else to help it not spring back and lose connection

Symantec:

  1. can I get my licence key or a refund?

Tel cos in general:

  1. stop forcing your software onto customers’ phones; make your ‘essential’ services available as web apps without requiring client-side bloat, make any apps uninstallable, and ensure that Android updates flow to customers as soon as possible (have you pushed WPA2 KRACK updates yet?).

AWS Certifications in Perth

AWS Certified Developer (Associate), SysOps (Associate), Solution Architect (Associate), DevOps Engineer (Professional), Solution Architect (Professional)

Today I went and sat yet another of the AWS certifications; I’ve been taking a bit of a Pokémon approach and collecting them all.

AWS’s certifications fall into what are essentially three classes: Associate, Professional, and Speciality (still in beta at this point in time).

Each of the certifications requires sitting a proctored exam at a certified exam venue. Subjects are not permitted any personal equipment, watches, wallets, or anything else that could be used to collude or circumvent the test’s integrity. The testing is done on a locked-down PC, and questions are generally multiple-choice, of two forms:

  • Choose the best answer (think: radio button)
  • Choose N answers that apply (think: tick the check boxes)

The Associate certifications are effectively entry-level: the number of questions is around 55, and the permitted time is around an hour and a half.

Meanwhile, the Professional and Speciality certifications run to 100+ questions, with three hours of assessment time.

The calibre of the questions has made these certifications some of the most valuable, and thus desired, certifications in IT. I’ve been lucky to spend several years working on some large projects to hone these skills, and am pleased to hold all five of the AWS certifications.

Certification Venues in Perth

Over the last few months, two more venues have appeared as options for sitting these certifications, and I have now used all three, so can compare them. For several years, AICT (next to Myer in the Murray Street Mall) was the only option, but now Saxons Training Facilities at 140 St Georges Tce and ATI-Mirage at the redeveloped Cloisters have become available.

AICT is probably the most dilapidated venue. They have set aside a small room at the very rear of their location, just by their lab technicians’ hub and on-premise data centre, with small screens on the testing PCs, no windows in the room, and at times a lack of adequate cooling, assisted only by a pedestal fan. They can test about 5 people at a time here.

Saxons became available in January: their rooms are considerably larger, well lit, and have large windows for daylight. The facilities were much cleaner and newer. There is a very large break-out kitchen/coffee area, but I had no time to use it. The room would have held about 18 people, but I was the only one there the morning I sat this certification.

And today, ATI-Mirage: mine was the first Kryterion exam to go through since ATI-Mirage started offering them. Their testing facility was reasonably well resourced – no windows, but well lit – with enough room for around 12 people to sit exams. This is shared with their Pearson VUE testing, and this morning it was full.

If I had to rank the facilities, I’d probably choose Saxons first, followed closely by the very friendly people at ATI-Mirage, with AICT last. But then again, my office in the Perth CBD is opposite Saxons, so it’s a short walk to hop over the road!

CloudPets security fail is not a Cloud failure

I spent several years at Amazon Web Services as the Solution Architect with a depth in security in A/NZ. I created and presented the security keynotes at the AWS Summits in Australia and New Zealand. I teach Advanced Security and Operations on AWS. I have run online share-trading systems for many of the banks in Australia. I help create the official Debian EC2 AMIs. I am the National Cloud Lead for AWS Partner Ajilon, and via Ajilon, I also help secure the State Government Land Registry in EC2 with Advara.

So I am reasonably familiar with configuring AWS resources to secure workloads.

Last week saw a poor security failure: the compromise of a company that makes Internet-connected plush toys for children, which let users record and play back audio via the toys: CloudPets. Coverage from Troy Hunt, The Register, and Ars Technica.

As details emerged, a few things became obvious. Here are the highlights (low-lights, really) of what apparently occurred:

  • A production database (MongoDB) was exposed directly to the Internet with no authentication required to query it
  • Audio files in S3 were publicly, anonymously retrievable. However, they were not listable directly (no worries: the object URLs were in that open Mongo database)
  • Non-production and production systems were co-tenanted

There are a number of steps that should have been taken technically to secure this:

  1. Each device should have had a unique certificate or credential on it
  2. This certificate/credential should have been used to authenticate to an API Endpoint
  3. Each certificate/credential could then be uniquely invalidated if someone stole the keys from a device
  4. Each certificate/credential should only have been permitted access to fetch/retrieve its own recordings, not any recording from any customer
  5. The Endpoint that authenticates the certificate should have generated presigned URLs for the referenced recordings. Presigned URLs contain a timestamp set in the future, after which the presigned URL is no longer valid. Each time the device (pet) wants a file, it could ask the Endpoint to generate the presigned URL, and then fetch the file from S3
  6. The Endpoint could rate-limit the number of requests per certificate per minute/hour/day. Eg, 60 per minute (for burst fetches), 200 per hour, 400 per day?
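
The presigned URL in step 5 is, at heart, an HMAC over the resource path and an expiry time. This sketch shows the concept in plain Python (for real S3 use you would call boto3’s generate_presigned_url, which implements SigV4; the key and paths below are illustrative):

```python
import hashlib
import hmac
import time

# Illustrative only – a real service would keep this in a secrets store.
SIGNING_KEY = b"server-side secret"

def presign(path: str, expires_in: int = 300) -> str:
    """Return path?expires=...&sig=..., valid for expires_in seconds."""
    deadline = int(time.time()) + expires_in
    message = f"{path}|{deadline}".encode()
    sig = hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()
    return f"{path}?expires={deadline}&sig={sig}"

def verify(url: str) -> bool:
    """Accept only an untampered URL whose deadline has not passed."""
    path, _, query = url.partition("?")
    params = dict(pair.split("=", 1) for pair in query.split("&"))
    message = f"{path}|{params['expires']}".encode()
    expected = hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, params["sig"])
            and time.time() < int(params["expires"]))
```

A leaked URL goes stale at its deadline, and changing the path (to reach another customer’s recording) breaks the signature – exactly the two properties CloudPets needed.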

If the Endpoint for the API was an EC2 instance (or better yet, an Auto Scaling group of them), then it could itself run in the context of an IAM Role, with permission to create these presigned URLs. Similarly, an API Gateway invoking a Lambda function in a Role.
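
The per-certificate rate limit in step 6 is a classic token bucket: a burst allowance that refills at the sustained rate. A sketch with illustrative numbers (60-request burst, 200 per hour sustained), keyed however the Endpoint identifies the device:

```python
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    """Permit bursts up to `capacity`, refilling at `rate` tokens/second."""
    capacity: float = 60.0            # burst: 60 requests
    rate: float = 200.0 / 3600.0      # sustained: 200 per hour
    tokens: float = 60.0
    updated: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        # Top the bucket up for the time elapsed, then spend one token.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per device credential (keyed here by certificate fingerprint).
_buckets: dict[str, TokenBucket] = {}

def request_allowed(cert_fingerprint: str) -> bool:
    return _buckets.setdefault(cert_fingerprint, TokenBucket()).allow()
```

The 61st rapid-fire request from one toy is refused while other toys are unaffected – bulk exfiltration across the customer base becomes loud and slow instead of a single download run.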

Indeed, that Endpoint would have been what would have used the MongoDB (privately), removing the publicly facing database.

I’ve often quoted Voltaire (or Uncle Ben from Spider-Man, take your pick): “with great power comes great responsibility”. There’s no excuse for the series of failures here; the team apparently didn’t understand security in their architecture.

Yet security is in all the publicly facing AWS customer documents (the shared responsibility model). It’s impossible to miss. AWS even offers a free security fundamentals course, which I recommend as a precursor to my own teaching.

Worse is the response and lack of action from the company when they were alerted last year.

PII and PHI are stored in the cloud – information that the economy, indeed modern civilisation, depends upon. The techniques used to secure workloads are not overly costly; they mostly require knowledge and implementation.

You don’t need to be using Hardware Security Modules (HSMs) to have a good security architecture, but you do need current protocols, ciphers, authentication and authorisation. The protocols and ciphers will change over time, so IoT devices like this need to update over time to support protocols and ciphers that may not exist today. It’s this constant stepping-stone approach – continually moving to the next implementation of transport and at-rest ciphers – that is becoming a pattern.

Security architecture is not an after-thought that can be left on the shelf of unfulfilled requirements, but a core enabler of business models.