Stronger SSH Keys for EC2

For those not familiar, SSH is the Secure Shell, an encrypted login system that has been in use for over 25 years. It replaced unencrypted Telnet for remote (text) terminal connections used to access (and administer) systems over remote networks.

Authentication for SSH can be done in multiple ways: simple passwords (not recommended), SSH Keys, and even MFA.

SSH keys are perhaps the most common method: simple, free, and relatively easy to understand. The approach uses asymmetric key pairs, consisting of a Private Key and a Public Key.

Understandably, the Private Key is kept private (ideally only on your local system), while the Public Key is openly distributed to any system that wishes to grant you access.

For a long time, the key algorithm used here was RSA, and keys had a particular size (length) measured in bits. In the 1990s, 512 or 1024 bits was considered enough, but more recently 2048 bits and beyond has become the norm. The length of the key is one factor in the difficulty of guessing the correct value: fewer bits means smaller numbers. However, the RSA algorithm becomes quite slow when key sizes get large, and people (and systems) start to notice a few seconds of very busy CPU when trying to connect across the network.

Luckily, a replacement key algorithm has been around for some time, based on elliptic curves. This article gives an overview of using the Edwards-curve algorithm, Ed25519, for creating the public and private key pair.

The result is keys that are smaller than RSA keys of similar cryptographic strength, and, more importantly, much lower CPU load.

OpenSSH and PuTTY have supported Edwards curves for some time (as of 2022), and several years ago I requested support from AWS for the EC2 environment. Today, that suggestion/wish-list item has come to fruition with this:

Amazon EC2 customers can now use ED25519 keys for authentication with EC2 Instance Connect

AWS has been one of the last places I was still using RSA based keys, so now I can start planning their total removal.

  • Clearly, generating a new ED25519 key is the first step; PuTTYgen can do this, as can ssh-keygen (see the sketch after this list). Save the key, and make sure you grab a copy of the OpenSSH format of the public key (a single line that starts with ssh-ed25519, followed by a string representing the key, and optionally a space and a comment at the end). I would recommend having the comment include the person's name, the year, and possibly the key type, so that you can identify which key belongs to which individual.
  • You can publish the Public Key to systems that will accept this key, and this can be done in parallel with the existing key still being in place. The public key has no problem being shared and advertised publicly (it's in the name); the worst thing someone can do with your public key is give you access to their system. On Linux systems, this is typically done by adding a line to the ~/.ssh/authorized_keys file (note: US spelling); just add a new line starting with “ssh-ed25519”. From this point, these systems will trust the key.
  • Next, test access using this key for the people (or systems) that will need access. Ensure you only give the key to those systems or people that should use it, e.g. yourself. When you sign in, look for evidence that the new key was used; for example, the comment on the key (see the first point above) may be shown in the login message or the server's auth log.
  • Lastly, you can remove the older key from being trusted for remote access on those systems. For your first system, you may want to leave one SSH session connected, remove the older SSH key from the authorized_keys file, and then initiate a second, new connection to ensure you still have access.
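
For the key generation and distribution steps above, here is a minimal sketch using standard OpenSSH tooling; the file names, user and host name are illustrative assumptions:

 # Generate a new ED25519 key pair; -C sets the identifying comment
 ssh-keygen -t ed25519 -C "jane.doe-2022-ed25519" -f ~/.ssh/id_ed25519_2022

 # Show the single-line OpenSSH-format public key (safe to share)
 cat ~/.ssh/id_ed25519_2022.pub

 # Append the public key to a remote host's authorized_keys, in parallel
 # with the existing RSA key (user and host are examples only)
 ssh-copy-id -i ~/.ssh/id_ed25519_2022.pub ec2-user@app1.example.com

 # Confirm the new key is the one used for authentication
 ssh -i ~/.ssh/id_ed25519_2022 ec2-user@app1.example.com 'echo connected with the new key'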

Now that we have familiarity with this, we need to look at places where the older key may be used.

In the AWS environment, SSH Public Keys are stored in the Amazon EC2 service for provisioning to new EC2 instances (hosts). A key pair may be referenced and deployed at instance start time, but it can also be referenced as part of a Launch Configuration (LC) or Launch Template (LT). These LCs and LTs will need to be updated so that any subsequent EC2 launches are provisioned with the new key. Ideally you have these defined in a CloudFormation template, in which case adjusting the template and updating the stack is necessary; this will likely trigger a replacement of the current instances, so schedule this operation accordingly (and test in lower environments first).
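
As a hedged sketch, registering the new key with EC2 and rolling it into a Launch Template from the AWS CLI could look like the following; the key name, template ID and version numbers are placeholders, and if you manage this via CloudFormation you would change the KeyName in the template instead:

 # Import the new ED25519 public key into EC2 as a named key pair
 aws ec2 import-key-pair \
   --key-name jane-ed25519-2022 \
   --public-key-material fileb://$HOME/.ssh/id_ed25519_2022.pub

 # Create a new Launch Template version that references the new key pair
 aws ec2 create-launch-template-version \
   --launch-template-id lt-0123456789abcdef0 \
   --source-version 1 \
   --launch-template-data '{"KeyName":"jane-ed25519-2022"}'

 # Make the new version the default for subsequent launches
 aws ec2 modify-launch-template \
   --launch-template-id lt-0123456789abcdef0 \
   --default-version 2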

There’s no sudden emergency for this switch; it is part of the continual sunrise and sunset of technologies, and addresses technical debt in a systematic and continual way, just as you would migrate in AWS from GP2 to GP3 SSD EBS volumes, from one EC2 instance family to the next, from Instance Metadata Service v1 to v2, or from IPv4 to dual-stack IPv6.

The IPv6 Transition: 2021

As we round out another year, here’s a perspective on where we are up to in the IPv6 transition, which has been going for 20 years but is now gathering momentum.

Adoption

Google has been doing a great job tracking the IPv6 traffic level they see since 2008.

With around 33% of traffic being IPv6, it’s reasonably significant at a global level. The country-by-country map shows the US at 47%, Germany at 55%, the UK at 34%, and Australia at 27%. We’re fast reaching the tipping point where IPv6 is the expected traffic type.

Clearly, the biggest factor is going to be the destination services that people (and systems) want to reach offering IPv6, and then the clients (users) having IPv6 available to send that traffic.

In Australia, the dominant cell phone provider, Telstra, has been using IPv6 for its consumer business since 2016. Now, five and a half years later, we’re still waiting for Optus (Singtel), Vodafone, and many more to catch up.

For ISPs: iiNet continues to show no innovation since the engineering talent departed after the TPG acquisition. Aussie Broadband (to whom I churned) still call their IPv6 offering a beta, but it’s been very stable for me for years now. Perhaps time to take the beta label off?

IPv6 on the Cloud

AWS made many IPv6 announcements through pre:Invent and re:Invent (the annual AWS Cloud conference).

Many key services already had first-class dual-stack support for IPv6 alongside IPv4. Key amongst these is Route53, serving DNS responses over either protocol.

Now we’re seeing IPv6 only capability, as IPv4 starts to look more and more like a legacy transport protocol.

With that perspective, it’s a shame to see how many of the trusted brands in the commercial ISP and mobile carrier market just aren’t up to speed; no engineering investment to really modernise.

AWS VPC: IPv6 only subnets

When defining a VPC, one of the most durable configurations is the IPv4 CIDR range assigned. Once allocated, it’s set. You can add further IPv4 allocations if you must, in separate ranges, with all kinds of complications.

Since 2016, AWS has permitted an IPv6 allocation to be added to a VPC, making subnets dual-stack and allowing IPv6 addresses to be assigned to EC2 instances. Traffic within the VPC was still primarily IPv4, for services like RDS (databases), etc. Slowly we’ve seen Load Balancers become dual-stack, which is one of the most useful pieces for building customer/Internet-facing dual-stack solutions, but they then fall back to IPv4 on the private side of the load balancer.

The IPv6 support in subnets had an interesting feature. Initially the address allocation was Amazon-provided only, though now you can BYO; the VPC receives a /56, and each subnet is always a /64, some 18 quintillion IP addresses. Compare with IPv4, where you can make subnets from a /28 (16 addresses) to a /16 (65,536 addresses), of which AWS reserves five per subnet.
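
As a sketch, adding Amazon-provided IPv6 space to an existing VPC and carving out an IPv6-only subnet might look like this from the AWS CLI; the IDs and the /64 shown are placeholders:

 # Request an Amazon-provided /56 IPv6 block for an existing VPC
 aws ec2 associate-vpc-cidr-block \
   --vpc-id vpc-0123456789abcdef0 \
   --amazon-provided-ipv6-cidr-block

 # Create an IPv6-only subnet; the CIDR must be a /64 from the VPC's /56
 aws ec2 create-subnet \
   --vpc-id vpc-0123456789abcdef0 \
   --availability-zone ap-southeast-2a \
   --ipv6-native \
   --ipv6-cidr-block 2406:da1c:abc:de00::/64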

Two common challenges used to appear with IPv4 subnets: one, you ran out of addresses, especially in public subnets with multiple ALBs with variable traffic peaks; and two, a new Availability Zone would launch, and the contiguous address space needed some consistency to fit traditional on-premises (internal) firewalling.

Now, you only have the one consideration: contiguous address space, but you’re unlikely to exhaust a subnet’s IPv6 address allocation.

When ap-southeast-2 launched, I loved having two Availability Zones, and in the traditional IPv4 address space, I would allocate a contiguous range for each purpose across AZs equally. For example:

Purpose             | AZ A        | AZ B        | Supernet
Public              | 10.0.0.0/24 | 10.0.1.0/24 | 10.0.0.0/23
Apps (Private)      | 10.0.2.0/24 | 10.0.3.0/24 | 10.0.2.0/23
Databases (Private) | 10.0.4.0/24 | 10.0.5.0/24 | 10.0.4.0/23
Contiguous address space across two AZs

From the above, you can see I can summarise the two allocations for each purpose, one in AZ A and one in AZ B, into a single range.

Now, for various reasons I won’t dive into here, having TWO AZs is great, but having three is better. However, address space is binary and works in powers of two for the purposes of subnetting and supernetting, so if I wanted to preserve contiguous address space, I would have kept 10.0.2.0/24 reserved for AZ C, and then had a left-over /24 (10.0.3.0/24) in order to make a larger supernet of 10.0.0.0/22 covering all Public Subnets:

Purpose             | AZ A        | AZ B        | AZ C         | Reserved     | Supernet
Public              | 10.0.0.0/24 | 10.0.1.0/24 | 10.0.2.0/24  | 10.0.3.0/24  | 10.0.0.0/22
Apps (Private)      | 10.0.4.0/24 | 10.0.5.0/24 | 10.0.6.0/24  | 10.0.7.0/24  | 10.0.4.0/22
Databases (Private) | 10.0.8.0/24 | 10.0.9.0/24 | 10.0.10.0/24 | 10.0.11.0/24 | 10.0.8.0/22

Now we’ve split our address space four ways and preserved contiguity so we can supernet, but we had to reallocate the .2 and .3 ranges. What a pain.

Now, we could take VPC subnetting further; the next increment would be provisioning for 8 AZs. That’s probably a stretch for most organisations, so 4 seems to be most common.

Now take an IPv6 lens over this, and the only thing you’re looking at keeping contiguous over the IPv6 range is the order of allocation: from :00: through :03: for the Public subnets, then :04: to :07: for Apps, and :08: to :0b: (yes, it’s hex) for the Databases.
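
Under that scheme, with an illustrative /56 of 2001:db8:1234:5600::/56 (a documentation prefix, purely an assumption standing in for your VPC's allocation), the /64s line up like this:

 # Illustrative /56; a real VPC receives an Amazon-provided or BYO /56
 PREFIX="2001:db8:1234:56"

 # Public: :00: to :03:, Apps: :04: to :07:, Databases: :08: to :0b:
 for i in 00 01 02 03; do echo "Public    ${PREFIX}${i}::/64"; done
 for i in 04 05 06 07; do echo "Apps      ${PREFIX}${i}::/64"; done
 for i in 08 09 0a 0b; do echo "Databases ${PREFIX}${i}::/64"; done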

Going backwards from IPv6 to IPv4

Another release at re:Invent for VPC was support for DNS64, and NAT64 on the NAT Gateway. On a subnet-by-subnet basis, you can have the DNS resolver return a specially crafted IPv6 address that has an IPv4 address embedded in it; when used with NAT64, the NAT Gateway will bridge the traffic going outbound from IPv6 internally back to IPv4 externally. Now you can adopt modern IPv6 internal topologies, but still reach back into the past for those integrations that haven’t gone dual-stack yet.
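
A sketch of enabling this on a subnet, assuming an existing NAT Gateway and route table (all IDs below are placeholders):

 # Have the VPC resolver synthesise 64:ff9b::/96 addresses for IPv4-only names
 aws ec2 modify-subnet-attribute \
   --subnet-id subnet-0123456789abcdef0 \
   --enable-dns64

 # Route the NAT64 well-known prefix via an existing NAT Gateway
 aws ec2 create-route \
   --route-table-id rtb-0123456789abcdef0 \
   --destination-ipv6-cidr-block 64:ff9b::/96 \
   --nat-gateway-id nat-0123456789abcdef0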

Of course, this would be far easier if, for all services you wanted to access, you already had the choice of IPv4 or IPv6. Which, as a service provider, you should be offering to your integration partners already.

Load Balancers end-to-end IPv6

Until recently, when traffic from the Internet hit an ALB or NLB over IPv6, it would drop down to IPv4 for the internal connection; that changed during re:Invent with end-to-end IPv6. Virtual machines (EC2 instances) now see the originating IPv6 address in packets.
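
As a rough sketch of the moving parts with the AWS CLI (the ARN, names and IDs are placeholders, and your infrastructure-as-code tooling likely wraps this):

 # Switch an existing Internet-facing load balancer to dual-stack
 aws elbv2 set-ip-address-type \
   --load-balancer-arn arn:aws:elasticloadbalancing:ap-southeast-2:111122223333:loadbalancer/app/demo/1234567890abcdef \
   --ip-address-type dualstack

 # Create a target group that addresses backend targets over IPv6
 aws elbv2 create-target-group \
   --name demo-ipv6-targets \
   --protocol HTTPS --port 443 \
   --vpc-id vpc-0123456789abcdef0 \
   --target-type ip \
   --ip-address-type ipv6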

Summary

It’s been 3 years since some of the critical state government projects I was working on went dual-stack in AWS. I encourage my teams to present dual-stack external interfaces, and to prepare all customer environments for this switch-over as part of their managed services and professional services deployments. It’s not complicated, it doesn’t add any cost, and it can be a competitive advantage. It’s another example of the sunrise and sunset of yet another digital standard, and it won’t be the last.

Log4Shell and your apps in AWS

There’s a great XKCD cartoon entitled Dependency that cuts to the heart of today’s software engineering world: developers (and in turn organisations) everywhere love using libraries to accelerate their development efforts, particularly if that library of code is free to use, and typically that means Open Source.

The image depicts large, complex systems, critical to organisations, depending on the unpaid, thankless contributors of these libraries, upon whom everything relies.

In the last week, we’ve seen Log4j, a Java logging utility, come under such focus due to a critical remote code execution bug that can see the server side tricked into making outbound requests. A vast number of Java-based solutions from the last 15+ years depend on this library for logging messages.

It’s led to articles like “The Internet is on Fire” by the Australian Broadcasting Corporation, and thousands of posts on infosec-specific news sites (e.g. Kaspersky, PortSwigger, SecurityScorecard, BleepingComputer).

Java is widely used, as Oracle corporation points out clearly:

3 Billion Devices Run Java – Oracle

There are two sides to this: invalid requests coming in, which should be handled with sensible data validation, and the resulting external requests that servers can be tricked into making.

Now I am not saying everyone should use their own logging library; that would be even more on fire. But we should stand ready to update these things rapidly, and we should help with either code contributions or financial donations (or both) to help improve this for the common good.

Untrusted Data Validation

Validating untrusted data sources is critical. The content of a local configuration file is vastly different from a query from the Internet. I’ve often joked about setting my browser User-Agent string to the EICAR test file content, a dummy value used to trigger antivirus software that matches on this text.

In this case, we have remote attackers stuffing custom-generated data strings into HTTP requests (and email, and other sources that accept external traffic/data) to try to trick the Log4j library into processing and interpreting this data instead of just writing it to a log file.

Web servers always accept data from the Internet, and Web Application Firewalls can offer some protection, but in this case, the actual “string to check” can be escaped, making it harder to write simple rules that match.

Restricting outbound traffic

An attacker is often trying to get better access into the systems they target; their initial foothold may be tentative. In this example, the ability to trick a target server into fetching additional data (a payload) from an external service is key. There are two main types of external data egress: direct and indirect.

In the direct model, your server, which you installed and thus trust, may be running behind a firewall, but have you checked if you have restrictions on what it can fetch directly from the Internet?

In AWS, the default Security Group egress rule permits all traffic; this is a terrible idea, but it is the element of least surprise for those new to the AWS VPC environment. It is strongly recommended that you pare this down for all applications, ending up with only the minimum network access you need, even when behind a (managed) NAT Gateway or routing rules, and even if you think your server only has internal network access.

I wrote a whitepaper on this topic for Modis in 2019 about Lateral Movement within the AWS VPC, and some of the concepts there are relevant now.

Your VPC-deployed virtual machine instance probably only needs to initiate connections to S3 on 443, and its database server on the local CIDR (address) range. For example, if you have three Subnets for databases:

  • 10.0.0.0/26 (Databases in AZ-A)
  • 10.0.0.64/26 (Databases in AZ-B)
  • 10.0.0.128/26 (Databases in AZ-C)
  • 10.0.0.192/26 (reserved for future expansion of Databases in a yet-to-be announced AZ-D)

… and are running MySQL (e.g., RDS MySQL) in those AZs, then you probably want an egress rule on your application server/instance of 10.0.0.0/24 on port 3306 (note: be ready to make this all IPv6 in future). However, your inbound rule on the same group is probably a reference to your managed Load Balancer’s security group, on port 443.
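
A minimal sketch of that scoped-down security group with the AWS CLI; the group IDs are placeholders, and the first step removes the default allow-all egress rule:

 APP_SG=sg-0123456789abcdef0   # application instances (placeholder)
 ALB_SG=sg-0fedcba9876543210   # managed load balancer (placeholder)

 # Remove the default allow-all egress rule
 aws ec2 revoke-security-group-egress --group-id "$APP_SG" \
   --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'

 # Egress: HTTPS out (S3 bootstrap, cfn-signal)
 aws ec2 authorize-security-group-egress --group-id "$APP_SG" \
   --ip-permissions '[{"IpProtocol":"tcp","FromPort":443,"ToPort":443,"IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'

 # Egress: MySQL, only to the database subnets' supernet
 aws ec2 authorize-security-group-egress --group-id "$APP_SG" \
   --ip-permissions '[{"IpProtocol":"tcp","FromPort":3306,"ToPort":3306,"IpRanges":[{"CidrIp":"10.0.0.0/24"}]}]'

 # Ingress: HTTPS only, referentially from the load balancer's security group
 aws ec2 authorize-security-group-ingress --group-id "$APP_SG" \
   --protocol tcp --port 443 --source-group "$ALB_SG"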

What about DNS and Time Sync?

If you have cut down your egress to just those rules (HTTPS for S3 bootstrapping and cfn-signal, plus database traffic), what about things like DNS and time? These are typically UDP based (ports 53 and 123).

Indeed, the typical firewall rule used for NTP, when syncing from external time services, is *:123 inbound and *:123 outbound. Ouch.

AWS Time Sync Service

The good news is you do not need to permit this in your security group rules IF you are using the AWS VPC-provided Time Sync service and DNS resolvers. These are available over the link-local network, and security groups do not restrict this traffic; hence egress can be left closed for UDP port 123.

This time service is also scalable; you don’t need thousands of hosts pointing at one or two of your own NTP servers. The AWS Time Sync service runs from the hypervisor, so as you add instances, more physical nodes are involved in providing it, and your time service scales with you.
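
On Amazon Linux 2 the chrony configuration already points at the link-local Time Sync service; for other distributions, a sketch of the change (configuration paths vary by distro) is:

 # Prefer the link-local AWS Time Sync service over Internet NTP pools
 echo 'server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4' | \
   sudo tee -a /etc/chrony.conf

 sudo systemctl restart chronyd

 # Verify the source is selected (169.254.169.123 marked with '*')
 chronyc sources -v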

Managed & Scalable DNS Resolution

DNS can be used for data exfiltration. If you run your own DNS resolver (e.g., on Windows Domain Controllers or Linux hosts) and set your DHCP to hand that resolver address to clients, then you may be at risk of not even seeing this happen. This is an indirect way of being exploited: your end server may not have egress to the Internet, but it can reach your DNS resolver to… well, look up addresses. If you do run your own DNS server, you should be reviewing the log of what is being looked up, matching it against a threat list, and issuing warnings of potential compromise.

Managed DNS Security Checks: GuardDuty

If that’s too much effort, there is a managed solution for this: AWS GuardDuty paired with the VPC-provided DNS resolver. For GuardDuty to inspect and warn on this traffic, you must be sending DNS queries via the VPC resolver. Turning on GuardDuty while not sending DNS traffic through the AWS-provided service (for example, running your own root-resolving DNS server) means the DNS-based warnings from GuardDuty will probably never trigger.

By contrast, having your self-managed resolver (e.g., an Active Directory server) forward to the VPC resolver means that it is the one that will be reported upon when any other instance uses it as a resolver for a risky lookup! I’m sure that will cause a mild panic.

Managed DNS Proactive Blocking: DNS Firewall

Going beyond retrospectively telling you that traffic happened is proactively blocking DNS traffic. Route53 DNS Firewall was introduced in 2021, using managed block lists of malicious domains. This gives some level of protection: clients (instances) will get a failed DNS lookup when trying to resolve these bad domains.
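
A rough sketch of wiring this up with the AWS CLI follows; the IDs are placeholders, and the exact parameter names are from memory, so check the current route53resolver CLI reference before relying on them:

 # Discover the AWS-managed malicious-domain lists available in the region
 aws route53resolver list-firewall-domain-lists

 # Create a rule group, add a BLOCK rule for a managed domain list, then
 # associate the rule group with a VPC
 aws route53resolver create-firewall-rule-group --name block-malicious-domains

 aws route53resolver create-firewall-rule \
   --firewall-rule-group-id rslvr-frg-0123456789abcdef \
   --firewall-domain-list-id rslvr-fdl-0123456789abcdef \
   --priority 100 \
   --action BLOCK \
   --block-response NXDOMAIN \
   --name block-managed-list

 aws route53resolver associate-firewall-rule-group \
   --firewall-rule-group-id rslvr-frg-0123456789abcdef \
   --vpc-id vpc-0123456789abcdef0 \
   --priority 101 \
   --name my-vpc-dns-firewall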

My Recommendations

So here’s the approach I tell my teams when using VPCs:

  1. Always use the link-local Time Sync service; it scales, and it reduces SPOFs and bad firewall rules.
  2. Always use the link-local DNS resolver; it scales. Use a Route53 Resolver rule if you then need to hook DNS traffic up to your own DNS server (AD Domain Controller).
  3. Turn on GuardDuty, and set up notifications for the Findings it generates (see the sketch after this list).
  4. Turn on DNS Firewall to actively BLOCK DNS lookups for bad domains.
  5. Turn on Route53 query logging for your own visibility, with some retention period (90 days?)
  6. For inbound Web traffic, use a managed Web Application Firewall with managed rules, and/or scope your application to the country you’re intending to serve traffic to. In particular, block access to administrative URL paths that don’t come from trusted source ranges.
  7. Leverage any additional managed services that you can, so you minimise the hand-crafted solutions in your application.
  8. Template your workload, and implement updates from template automation; no local changes. Deploy changes rapidly using DevOps principles. Socialise with your team/management the importance of full-stack maintenance and least-privilege access, including at the network layer for ingress and egress, and schedule and prioritise time to include technical debt in each iteration, including the updating of every third-party library in your app.
  9. If you have a DevOps pipeline with something like SonarQube or WhiteSource, have it report on dependencies (libraries), and get reports on how out-of-date those libraries are and/or whether those out-of-date versions have known CVEs against them. Google Lighthouse (in the browser) does a great job of this for JavaScript web frameworks.
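
For items 3 and 5, a minimal sketch with the AWS CLI; the log group ARN and IDs are illustrative assumptions, and VPC resolver query logging is one way of capturing these logs:

 # Enable GuardDuty in this region; findings can then feed EventBridge/SNS
 aws guardduty create-detector --enable

 # Log VPC resolver queries to CloudWatch Logs, then attach the config to a VPC
 aws route53resolver create-resolver-query-log-config \
   --name vpc-dns-query-logs \
   --destination-arn arn:aws:logs:ap-southeast-2:111122223333:log-group:/dns/query-logs

 aws route53resolver associate-resolver-query-log-config \
   --resolver-query-log-config-id rqlc-0123456789abcdef \
   --resource-id vpc-0123456789abcdef0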

For this exploit, you need to go wider than what you run in cloud: your company printer (MFP), network security cameras, VoIP phones, UPS units, air-conditioners, smart hubs, TVs, home Internet gateways, and other devices will probably have an update. Your games console, and the games on it (this started from an update in Minecraft to address this, and has… escalated quickly!). Even the physical on-prem firewalls and virtual appliances themselves; just ensure you don’t only do firewalls and ignore the larger landscape of equipment you have.

PS: I highly recommend my colleague Elliot Segler’s recent post entitled Learnings with AWS WAF and Log4Shell.

PPS: You may want to apply something like this to Apache:

 <IfModule mod_rewrite.c>
 # Reject any request line containing a URL-encoded "{jndi:" lookup string.
 # Note: this only catches the percent-encoded form; patched Log4j versions
 # and WAF rules are still needed, as the string can be obfuscated.
 RewriteEngine On
 RewriteCond %{THE_REQUEST} ^.*%7Bjndi:.* [NC]
 RewriteRule ^(.*)$ - [F,L]
 </IfModule>

Thoughts on the IPv6 Transition

I’ve been discussing the IPv6 transition with our customers more recently; for over 3 years we’ve been dual-stack IPv4 and IPv6 for public-facing AWS-Cloud-based solutions and services for our customers.

“So what?”, you’re thinking.

It’s worth noting from Google’s numbers that global IPv6 adoption is now approaching 36%, while at home in Australia it is around 27%, helped by telco carriers like Telstra enabling IPv6 for their mobile phone subscribers, and forward-looking ISPs like Aussie Broadband and Internode making IPv6 trivial to enable.

Google IPv6 Adoption, as of 12/Oct/2021

I first had an IPv6 tunnel established to Hurricane Electric in 1999 when I worked for The University of Western Australia. I championed the adoption of IPv6 as a first-class citizen in the cloud when I worked at Amazon Web Services as a Solution Architect, and these days, a large majority of AWS public-facing services already support dual-stack approaches, and more are on the way.

As the next billion people come online, the exhaustion of the existing IPv4 Internet address space is a limiting factor. The temporary value of IPv4 address space being reallocated (“sold”) between assignees will presumably peak when a majority of clients (people) and the services they access are all on IPv6.

I have been advising a government body who had two IPv4 “Class B” sized subnets allocated to them. Each of these is a /16 netblock (65,536 addresses); they had only ever used a handful of /24 ranges from within their first allocation.

Most services they use, both for staff and for public-facing services, now run on the cloud, from cloud-provider address space. They’re unlikely to need all of the address blocks they currently have from the first /16 block, let alone the second.

This netblock has a current value of a couple of million dollars (AUD).

It’s likely that many public sector agencies have IPv4 netblocks that they’re unlikely to ever use, and they could also benefit from reallocating them to service providers desperate for address space to host solutions from.

Well, desperate until most clients are using IPv6.

I’d urge any public sector organisation to review their plans for using their address space, and if they have large unused, contiguous address space, consider reallocating that. The funds raised can then help with further modernisation of workloads – including those workloads to move to IPv6 addressing.

For any managed service providers, I would urge you to “dual-stack” all public-facing Internet services. You should continue to use strong encryption in flight, modern TLS protocols, and strong authentication, regardless of the network transport protocol version.

If you are using AWS CloudFront as a CDN in front of your origin service, then enable IPv6 in the CloudFront distribution configuration, and then publish the corresponding AAAA DNS record just as you have the A record. Similar applies if using Cloudflare, Akamai, Fastly or others.
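
If the DNS zone lives in Route53, publishing the AAAA alongside the A record is one small change batch. A sketch follows; the zone ID and names are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted-zone ID used for CloudFront alias targets:

 aws route53 change-resource-record-sets \
   --hosted-zone-id Z0123456789EXAMPLE \
   --change-batch '{
     "Changes": [{
       "Action": "UPSERT",
       "ResourceRecordSet": {
         "Name": "www.example.com",
         "Type": "AAAA",
         "AliasTarget": {
           "HostedZoneId": "Z2FDTNDATAQYW2",
           "DNSName": "d111111abcdef8.cloudfront.net",
           "EvaluateTargetHealth": false
         }
       }
     }]
   }'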

For those who use managed service providers for their corporate business networking, ask why your work Internet connection is not dual-stacked already. It’s typically a configuration question, and rarely has any actual cost associated with it. If you have a corporate proxy service and it is dual-stacked, the clients (on your internal corporate network) already get some benefit of being able to talk to IPv6 services.

If you have DNS services, check that they can not only serve IPv6 (AAAA) records, but are themselves reachable over IPv6. Services like AWS Route53 have done this for years (see my earlier point about getting IPv6 as a first-class citizen within AWS).

While you’re looking at DNS, have a look at creating a simple CAA record, to list the Certificate Authorities you obtain certs from.
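
A CAA record is another small Route53 change batch; a sketch, where the zone ID, domain and permitted CAs are illustrative assumptions:

 # Only allow the listed CAs to issue certificates for example.com
 aws route53 change-resource-record-sets \
   --hosted-zone-id Z0123456789EXAMPLE \
   --change-batch '{
     "Changes": [{
       "Action": "UPSERT",
       "ResourceRecordSet": {
         "Name": "example.com",
         "Type": "CAA",
         "TTL": 3600,
         "ResourceRecords": [
           {"Value": "0 issue \"amazon.com\""},
           {"Value": "0 issue \"letsencrypt.org\""}
         ]
       }
     }]
   }'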

Blocking outbound HTTP from the Home Network, 2021

With the move to HTTPS as a default, I took a chance and recently blocked outbound (egress) HTTP (TCP port 80) traffic from my home network. I’ve got around 30 to 40 devices on the network, and I was intrigued to see what we (my family and I) would experience.

With my UniFi Dream Machine Pro, this was a reasonably easy update: Settings -> Traffic & Security -> Global Threat Management -> Firewall. I added a rule for Internet Out that dropped anything going to port 80:

HTTP reject rule for Internet Out on the UniFi Dream Machine Pro
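
For those without a UniFi gateway, a roughly equivalent rule on a Linux-based router would look like the following sketch; the WAN interface name is an assumption, and this is not how the UDM implements it internally:

 # Reject LAN traffic being forwarded out to TCP port 80 (HTTP)
 sudo iptables -I FORWARD -o eth0 -p tcp --dport 80 -j REJECT --reject-with tcp-reset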

This is a rule I had in place for two days. I checked my own laptop access for HTTPS, SSH, IMAPS and SMTPS egress, and all was fine.

What transpired over the following two days helped me identify the devices and vendors that still produce products that have a dependency to operate using unencrypted HTTP over the Internet.

Logitech Smart Radio

We have had a streaming radio for some time; we still like to listen to London Capital Radio despite the 7-8 hour timezone offset. Within a few minutes of blocking HTTP, the audio stream stopped.

We purchased this device around 2012. On my network, it identifies as a Squeezebox running Red Hat. The manufacturer discontinued it years ago and there have been no firmware updates for a long time. It only supports 802.11g Wi-Fi in the 2.4 GHz spectrum (is this Wi-Fi 3?).

I wasn’t prepared to replace the device, so for the moment, a work-around rule to permit HTTP (by MAC address) fixes this for the short term. We’re unlikely to see any updates from Logitech anyway.

GoToWebinar

This one surprised me; I was signing in to a webinar, and the obligatory download tried to execute and stalled. It turned out the installer was doing an HTTP-based OCSP check.

Now, for web browsers, OCSP has been mostly relegated to the annals of history, replaced with OCSP Stapling.

OCSP is a network-efficient query that a client can make against a Certificate Authority’s endpoint to get a signed confirmation that the certificate in question has not been revoked recently. However, in doing so, it tells the certificate authority which site you (your source IP address) just visited; this is an Information Disclosure issue. With stapling, the website in question instead fetches these signed validations at a regular interval and passes them to the clients it’s already communicating with, stapling the validation to the certificate during TLS negotiation: “Hi client, here’s my certificate, and here’s a recent verification that my certificate is not revoked”.

Using HTTP for OCSP isn’t too bad, as the response that is being downloaded is itself cryptographically signed. But it’s still visible in the plain for all to see.
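
You can check whether a site staples its OCSP response using openssl; a quick sketch, with an example host name:

 # -status requests the stapled OCSP response during the TLS handshake;
 # a stapling site shows "OCSP Response Status: successful" in the output
 openssl s_client -connect www.example.com:443 -status < /dev/null 2>/dev/null | \
   grep -i -A 3 'OCSP response'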

Update 27/9: No response from GoToWebinar.

Enphase Envoy

Twelve hours later, my solar panel data aggregation service, Enphase Enlighten, alerted me that it was no longer receiving data from my solar panel inverters. With another rule to permit the Envoy controller to make HTTP outbound requests, the data started flowing again.

This is a genuine issue. The generation and consumption data from my home should not be trundling over the Internet unencrypted.

I raised a support request with Enphase (21 September 2021 at 15:04 AWST, UTC+8), asking them to contact me. I received a ticket auto-response (03164xxx) but no other contact.

Later that night I tried reaching out over Twitter to any security folk at Enphase, but after 24 hours, no response.

Update 27/9: After a few days of no reply, Enphase asked (via automated email) how my support experience was. Unbelievable!

I then tried calling them on their Australian support number as shown on their website. I ended up in a call queue which was quite amusing in itself; every 5 seconds (no, literally, every 5,000 ms) it would announce that all callers were busy, and then it would restart the same audio music clip, only to then interrupt itself… I gave up after 20 minutes in the queue.

Lastly, I have DMed the Enphase Twitter account, and await a reply.

Enphase does not have a security.txt file on their website!
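
For reference, security.txt (RFC 9116) is a small static file served at /.well-known/security.txt that tells researchers how to report issues; a minimal sketch, with placeholder contact details:

 # Example content to serve at https://example.com/.well-known/security.txt
 printf '%s\n' \
   'Contact: mailto:security@example.com' \
   'Expires: 2022-12-31T23:59:59Z' \
   'Preferred-Languages: en' > security.txt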

Any customer data should be transmitted to an HTTPS endpoint. The firmware of the Envoy device should have the Root Certificate of the Certificate Authority used to issue that endpoint’s certificate in its trust store. The device should receive updates as that Root CA expires and is replaced (this happens every 10 to 20 years per CA). The embedded firmware also needs to keep step with improving TLS protocols over time; right now TLS 1.3 would be ideal, but in future, who knows.

What also struck me was the lack of IPv6 support on this device; not only should it have picked up an IPv6 address locally, but the endpoint it submits its data to should also be dual-stack IPv4 and IPv6.

Apple iPad 14.8 -> 15.0 upgrade

This one was very unusual. Apple had released iOS 15, and our iPads were about to make the jump from 14.8. However, despite full WiFi signal, the devices kept announcing they couldn’t verify the downloaded image because they couldn’t connect to the WiFi!

I’m hoping Apple can address this dependency before the next iOS update.

Update 27/9: Apple responded on Twitter, concerned only with whether I could install the update, not whether the security issue was being investigated, resolved or understood.

Nintendo Switch

It turns out the Nintendo Switch won’t join a WiFi network that doesn’t have outbound HTTP access; it must do a call home or validation using HTTP.

Summary

I’ve paused the experiment for the moment, but next month I’ll resume it and find more edge cases where devices we rely upon still use unencrypted channels, exposing our data, without us even knowing…