With the move to HTTPS as a default, I took a chance and recently blocked outbound (EGRESS) HTTP (TCP 80) traffic from my home network. I’ve got around 30 – 40 devices on the network, and I was intrigued to see what we (my family and I) would experience.
With my UniFi Dream Machine Pro, this was a reasonably easy update: Settings -> Traffic & Security -> Global Threat Management -> Firewall. I added a rule for Internet Out that dropped anything going to port 80:
I kept this rule in place for two days. I checked my own laptop's access for HTTPS, SSH, IMAPS and SMTPS egress, and all was fine.
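If you want to script that sort of check rather than poke at it by hand, a minimal sketch along these lines will do (example.com is just a stand-in; any well-known host works):

```python
# Minimal sketch: confirm that outbound TCP 80 is dropped while 443 still works.
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (80, 443):
    state = "open" if can_connect("example.com", port) else "blocked/unreachable"
    print(f"outbound TCP {port}: {state}")
```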
What transpired over the following two days helped me identify the devices and vendors that still ship products which depend on unencrypted HTTP over the Internet in order to operate.
Logitech Smart Radio
We have had a streaming radio for some time; we still like to listen to London Capital Radio despite the 7-8 hour timezone offset. Within a few minutes of blocking HTTP, the audio stream stopped.
We purchased this device around 2012. On my network, it identifies as a Squeezebox running RedHat. The manufacturer discontinued it years ago and there have been no firmware updates for a long time. It only supports 802.11g wifi in the 2.4 GHz spectrum (is this WiFi 3?).
I wasn’t prepared to replace the device, so for the moment, a work-around rule to permit HTTP (by MAC address) fixes this for the short term. We’re unlikely to see any updates from Logitech anyway.
Webinar installer OCSP check

This one surprised me; I was signing in to a webinar, and the obligatory download tried to execute and stalled. It turned out the installer was doing an HTTP-based OCSP check.
Now, for web browsers, OCSP has been mostly relegated to the annals of history, replaced with OCSP Stapling.
OCSP is a network-efficient query that a client can make against a Certificate Provider's endpoint to get a signed confirmation that the certificate in question has not been revoked recently. However, in doing so, it tells the certificate authority which site you (your source IP address) just visited; this is an Information Disclosure vulnerability. With stapling, the website in question instead fetches these signed validations at a regular interval and passes them to the clients it's already communicating with – stapling the validation to the certificate during TLS negotiation: "Hi client, here's my certificate, and here's a recent verification that my certificate is not revoked".
Using HTTP for OCSP isn't too bad, as the response that is being downloaded is itself cryptographically signed. But it's still visible in the clear for all to see.
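If you're curious where a given certificate points its revocation checks, a small sketch (assuming the third-party cryptography package; the hostname is just an example) will pull the OCSP responder URL out of the certificate's Authority Information Access extension – it is almost always an http:// address, which is why an OCSP lookup trips over a port-80 block:

```python
# Sketch: fetch a site's leaf certificate and print its OCSP responder URL.
import ssl
from cryptography import x509
from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID

pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

aia = cert.extensions.get_extension_for_oid(
    ExtensionOID.AUTHORITY_INFORMATION_ACCESS
).value
for desc in aia:
    if desc.access_method == AuthorityInformationAccessOID.OCSP:
        print("OCSP responder:", desc.access_location.value)
```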
Enphase Envoy solar monitoring

Twelve hours later, my solar panel data aggregation service, Enphase Enlighten, alerted me that it was no longer receiving data from my solar panel inverters. With another rule to permit the Envoy controller to make HTTP requests outbound, the data started flowing again.
This one is a legitimate concern: the generation and consumption figures for power in my home should not be trundling over the Internet unencrypted.
I raised a support request with Enphase (21 September 2021 at 15:04 AWST, UTC+8) asking them to contact me; I received a ticket auto-response (03164xxx) but no other contact.
Later that night I tried reaching out over Twitter to any security folk at Enphase, but after 24 hours there was no response.
I then tried calling them on their Australian support number as shown on their website. I ended up in a call queue which was quite amusing in itself; every 5 seconds (no, literally, every 5,000 ms) it would announce that all callers were busy, then restart the same music clip, only to interrupt itself again… I gave up after 20 minutes in the queue.
Lastly, I have DMed the Enphase Twitter account, and await a reply.
Enphase does not have a security.txt file on their website!
Any customer data should be transmitted to an HTTPS endpoint. The firmware of the Envoy device should have the Certificate Authority's Root Certificate, used to issue that endpoint's certificate, in its trust store. The device should receive updates as that Root CA expires and is replaced (this happens every 10 – 20 years per CA). The embedded firmware would also need to keep step with improving TLS protocols over time; today TLS 1.3 would be ideal, but in future, who knows.
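As a sketch of what I mean (the CA bundle filename and reporting hostname here are placeholders, not Enphase's actual values): verify the endpoint against a bundled root CA and refuse anything older than TLS 1.2.

```python
# Sketch: what the device firmware could do when submitting telemetry.
import socket
import ssl

ctx = ssl.create_default_context(cafile="vendor-roots.pem")  # pinned trust store
ctx.minimum_version = ssl.TLSVersion.TLSv1_2                 # no legacy TLS

with socket.create_connection(("reporting.example.com", 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname="reporting.example.com") as tls:
        print("negotiated:", tls.version(), tls.cipher()[0])
```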
What also struck me was the lack of IPv6 being picked up by this device; not only should it have picked up a new IPv6 address locally, but the endpoint it submits its data to should also be dual-stack IPv4 and IPv6.
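Checking whether an endpoint is dual-stack is a small exercise; a sketch (again with a placeholder hostname):

```python
# Sketch: ask the resolver for both A (IPv4) and AAAA (IPv6) records.
import socket

def addresses(host, family):
    try:
        return {info[4][0] for info in socket.getaddrinfo(host, 443, family)}
    except socket.gaierror:
        return set()

print("IPv4:", addresses("reporting.example.com", socket.AF_INET))
print("IPv6:", addresses("reporting.example.com", socket.AF_INET6))
```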
Apple iPad 14.8 -> 15.0 upgrade
This one was very unusual. Apple had released iOS 15, and our iPads were about to make the jump from 14.8. However, despite a full wifi signal, the devices kept announcing they couldn't verify the downloaded image because they couldn't connect to the WiFi!
I’m hoping Apple can address this dependency before the next iOS update.
I’ve paused the experiment for the moment, but next month I’ll resume it and find more edge cases where devices we rely upon still use unencrypted channels, exposing our data, without us even knowing…
As Hanno Böck noted in the recent Bulletproof TLS Newsletter, FTP Support in Firefox 90 has been removed. We’ve seen similar messaging from most major browser vendors over the last few years.
I'm going to make a bold prediction, and say that in 10 years' time we'll be seeing the removal of (plain text) HTTP support as well. Regardless of internal or external networks (an outdated distinction aligned to the crunchy-shell model of network security), the move to stronger security for all communications, backed by free TLS Certificate Authorities (such as Let's Encrypt), means we should be doing end-to-end encryption for everything the common web browser fetches.
For some time, Firefox has had an HTTPS-only mode, with warnings when services try to drop back to unencrypted access. I've typically found this warning pops up when various link-shortening services are chained together, and I'm grateful for the awareness that a jump in that chain is poorly implemented.
In the meantime, the distribution of files using FTP needs to stop. If you run an FTP service then you need to think about transitioning to something that permits access using HTTPS as the transport protocol.
Another sunset in the circle of life of a protocol.
DNS is the fundamental directory service of the Internet, and these days, of all corporate systems performing discovery of their various components in digital deployments. It is used at least hundreds of times per day per device, and no one really notices until it breaks.
For the most part, it's what changes the hostname you have used to navigate to this article – blog.james.rcpt.to – into an IP address of some sort: the numbered address that is supposed to uniquely identify the server or load balancer, connected to the public Internet, to which your browser will then make a network connection.
If your DNS breaks, then your service, or perhaps your entire company, is “off the air”, no longer discoverable by clients wishing to use human-readable labels and names to find your address.
I recently gave a 45-minute presentation at the AWS User Group in Perth, Western Australia, highlighting some of the advantages of using Route53 for DNS, and some of the more modern security protections therein; slides can be found here.
I have helped various organisations transition their authoritative DNS from their existing services to AWS Route53. This migration, in isolation, requires coordination, preparedness, and awareness of the impact of poor planning, and it presents an opportunity for service improvement.
DNS Migration Planning
DNS manages to serve the scale of the planet by effective use of caching. This is done at multiple layers in the service.
When a DNS server performs a lookup for a query it has not already cached (or that has expired from its cache), it must start a recursive resolve. For this to happen, it needs to find the Authoritative Name Server for a Zone. That Authoritative Name Server is delegated from a parent domain, and so on, until we reach the top of the DNS tree.
A common example in this scenario is the address “www.example.com”. Before answering the address for this exact name, a client must determine the address of the Authoritative Name Server(s) for “example.com”. If this is unknown, then the client must ask the address of the Authoritative Name Server(s) for “.com”. Again, if “.com” Name Servers are not already known (cached), then the level above that must be queried, commonly called the Public DNS Roots, or “.”.
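To see the name servers at each level of that chain from your own resolver, a small sketch (assuming the third-party dnspython package; dig +trace does something similar):

```python
# Sketch: show the authoritative name servers at each level of the hierarchy.
import dns.resolver

for zone in (".", "com.", "example.com."):
    ns = dns.resolver.resolve(zone, "NS")
    print(f"{zone:15} -> {[str(r.target) for r in ns]}")
```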
The Global Root DNS Servers
These DNS roots are globally agreed, and every DNS server is given a (relatively) static file of the addresses for these. They are given generic names, A-M, and are operated by a variety of organisations who agree to share the same data for the 1000+ Top Level Domains under the globally agreed root.
These Root DNS Servers are all accessible over both the existing IPv4 address scheme and the newer IPv6 Internet. Their addresses exist in a file commonly called the "root hints" file, which is distributed with all DNS server software as the initial glue. It rarely changes.
In a trick of Internet routing (BGP), many of these 13 hosts (A–M) are each replicated multiple times by a process called Anycast: the same small address segment that the server lives on is "announced" to the world from multiple locations, with each location running a duplicate server that performs the same process and responds with the same answers.
This first layer of scalability helps the root servers deal with billions of devices using the DNS service every second.
The Global Top Level Domains (TLDs): Registry Operator and Registrars
Each of the global TLDs is operated by a Registry Operator, but records are added and removed by multiple Registrar organisations. For example, the ".com" zone is operated by Verisign, but there are many Registrar organisations you can obtain a DNS name from, amongst which is Route53 itself.
These operators have a selection of innovations and policies they apply to their delegation. Some operate their service with just IPv4, and some are dual-stack IPv4 and IPv6. Some operators have their DNS zones cryptographically signed (using DNSSEC) to provide some validation of the DNS queries.
When it launched in December 2010, Route53 provided support only for hosting DNS Zones for customers. The engineering of the service at that time was designed to eclipse what most organisations had in place, providing higher reliability and scalability.
Back in the day, the authoritative references on running DNS services were the Bind Operators Guide (aka The BOG), and the O'Reilly book by author Cricket Liu. Most organisations ran just two DNS servers to respond to their customers' queries, and the most common software for doing so was ISC's Berkeley Internet Name Domain server, or BIND.
Of course, to have your own DNS server, you needed a fixed IP address for it to operate from, as this address is what the parent zone would return to clients. And thus the initial problem for most organisations was getting a pool of static IP addresses.
Most ISPs only hand out dynamic addresses, and charge substantially more for static ones. Other (typically larger) organisations went through a laborious process of having IP addresses assigned to themselves (through ARIN, APNIC, or other IP address registries), and then deploying BGP to announce their range to their connected ISP(s) – of which there could be multiple.
This overhead of assigned IP address ranges, setting up corporate BGP (and trying to secure it) all went away with the launch of hosted DNS services, and Route53 turned out to be one of the most well engineered and cost-effective solutions.
With Zones hosted, we can delegate from the parent domain to the name servers that Route53 provides; each individual record (e.g., "www") can then point to any IP address (i.e., anywhere). The entire need for corporations to acquire large blocks of IP addressing went away.
Indeed, I have helped organisations who previously had very large, fixed blocks of IPv4 addresses to relinquish some of these in a commercial market (for millions of dollars).
Route53 has since expanded its remit in the AWS Cloud environment. In addition to hosting authoritative DNS zones, it also offers Registrar services for hundreds of top-level domains, as well as tuning the use of DNS within the Virtual Private Cloud environment. Each of these functions can be used completely separately; for example, you can:
Register a domain with Route53 to handle the re-registration, but delegate to your own (or a 3rd party) DNS servers.
Register a domain with another Registrar (e.g., GoDaddy, Verisign) and delegate to a Route53 Hosted Zone.
Configure complex routing and protection mechanisms for your Virtual Cloud Environment.
Host private DNS for your VPC, invisible to the outside world.
In this article, we are going to concentrate on running Public DNS Zones, and the protections you can put in place.
Route53 Public DNS Zone Hosting: Scalability
By default, Route53 gives the operator a set of 4 DNS servers to pass to the parent domain for delegation. Each of the four servers is given a DNS name from a different TLD, and each of those names resolves to both IPv4 and IPv6 addresses.
The parent domain can then record (and cache) the delegation addresses of these 4 DNS server endpoints, and can instruct end clients doing lookups to also cache this delegation.
Each of those four endpoints is also potentially Anycast-announced from multiple locations worldwide. This helps clients reach the closest deployed endpoint for each of the four names, reducing DNS latency.
This set of 4 DNS servers provides much greater reliability than the traditional two, and the multiple Anycast presentation improves this further. The chance of any other AWS Route53 customer having the same set of 4 DNS server endpoints is very small, so a denial of service against the specific set of delegated addresses for another zone is unlikely to affect your zone significantly. This is part of the reason why Route53 offers a 100% availability Service Level Agreement (SLA).
(Note: the control plane, used for making updates to records, is not covered by this SLA.)
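If you want to see the four name servers Route53 has assigned to a hosted zone, a quick sketch with boto3 (the zone ID is a placeholder); these are the values to hand to the parent domain or registrar for delegation:

```python
# Sketch: list the delegation set of a Route53 hosted zone.
import boto3

route53 = boto3.client("route53")
zone = route53.get_hosted_zone(Id="Z0000000EXAMPLE")
for name_server in zone["DelegationSet"]["NameServers"]:
    print(name_server)
```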
A number of configuration questions arise when planning the migration:
do you want query logging turned on?
this is delivered to an S3 bucket: what's the retention policy on it (look at S3 lifecycle policies – always set an automated time to delete, perhaps after 12 months; see the sketch after this list)?
what’s the analytics processing done on this data, if any?
who has access to this log data, is the bucket marked private, default encryption, versioning?
do you want DNSSEC enabled on the zone? – perhaps do this after the service migration if you don't currently have DNSSEC enabled.
What integrations for automated updates are in place, if any?
Who needs access to the console to see and/or update records?
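For the query-log retention question above, a sketch of an S3 lifecycle rule using boto3 (the bucket name is a placeholder):

```python
# Sketch: expire query-log objects after roughly 12 months so the bucket
# doesn't grow forever.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-dns-query-logs",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-dns-query-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```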
The process for migrating to Route53 is relatively simple:
Reduce the parent domain's cache time (TTL – time to live; see below) for the delegation records that point at your current service: a value of 300 seconds may be reasonable.
Prepare the DNS export from the older service; review the records in it before doing a test import into the new service, to ensure that no records cause any issue. This is perfectly safe, as we have not re-delegated yet. You should also review the individual records' TTL values, and potentially reduce them as part of the export/import. Any web site or load balancer record should run with a TTL no higher than 300 seconds. Once the export/import has been successful, delete the imported records – we will take a fresh export later…
Determine if there are any processes that are automatically updating your DNS. These will have to be integrated to call the Route53 API for those updates after the migration.
Ensure you have access to redelegate with the Registrar. Test username & password to log in.
Schedule a time for the transition, during which we will either avoid updates or apply them to both old and new. You will have to wait for 5 periods of the original delegation TTL to pass since step 1. For example, if the delegation TTL was a week, then wait 5 weeks before proceeding (see below on TTLs)!
At the agreed time:
Record the current (old) delegation IP addresses the registrar has configured.
Re-export the values from the old service.
Update the record TTLs in the export.
Import into Route53.
Update the Registrar with the four new delegation addresses.
Test the DNS immediately; if it fails, revert the delegation to the addresses recorded in step (a) above.
Watch logs/metrics from other systems that rely upon this domain name, such as web traffic for the zone, or mail traffic.
Test the DNS again after 25 minutes.
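For the import step, a sketch of what it can look like with boto3 (the zone ID and the single record here are placeholders for whatever your export actually contains):

```python
# Sketch: re-create exported records in the new hosted zone with a short TTL.
import boto3

route53 = boto3.client("route53")
records = [
    {"Name": "www.example.com.", "Type": "A", "TTL": 300,
     "ResourceRecords": [{"Value": "203.0.113.10"}]},
]
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Comment": "migration import",
        "Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": record}
            for record in records
        ],
    },
)
```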
DNS Caching: TTL & Delegation records
A key element of DNS’s scalability is caching as much as possible. Records all have a customer defined Time To Live value, the duration (in seconds) that a record can be kept by a client.
When making changes to records, we typically observe 50% of clients seeing the updated values after one TTL period; a further 50% of the remainder see the update after another TTL period; in practice, the TTL acts almost like a half-life:
Time elapsed: cumulative % of clients seeing the update
1 * TTL: 50%
2 * TTL: 75%
3 * TTL: 87.5%
4 * TTL: 93.75%
5 * TTL: 96.875%
6 * TTL: 98.4%
We would typically wait at least 5 periods of the time-to-live value on a delegation before making a change. As many organisations have this value set to a week (or a month), this could take some time; we’d also recommend keeping this value relatively small once migrated, to ensure you have flexibility in re-delegation in future. 24 hours is a reasonable time for delegation records, unless you’re about to do a migration, in which case 300 seconds is reasonable (25 minutes for 96% of clients to see the update).
During this period, however, you could have your new DNS zone hosting the identical records of the old one, and any updates during this period should be applied to both (or updates can be avoided during this window).
However, some network operators choose to override TTL values as they see fit. A certain ISP in Australia would not honour small TTL values, resulting in at least a 24-hour TTL being enforced. Given the 5-TTL-period duration to get 96% of clients seeing the update, you may have to adjust your time frame to accommodate a parallel run of the old and new DNS services. Unfortunately, you cannot force this ISP's resolvers to update.
Route53: Records to Add and Modify
Route53 will automatically create a Start of Authority (SOA) record for your zone. This standard record type has two fields of interest: the RNAME (the responsible person's email address), and the default TTL value (used for negative DNS responses, when a query tries to find something not defined). You can leave these as the defaults, but if you adjust them, the RNAME field must point to a monitored mailbox, and the TTL adjustment may result in higher query traffic.
Outside of the SOA, there are a number of other DNS records you should put in place:
SPF on the apex, now implemented as a Text (TXT) record, to indicate where your email is permitted to originate from (low-volume lookup traffic). Something like "v=spf1 mx -all" may do. You should still set a record – "v=spf1 -all" – even if your domain does not send email, to indicate that any email fraudulently generated from it is spam.
CAA on the apex, to indicate to Certificate Authorities which CAs are permitted to issue certificates for your domain (extremely low-volume lookup traffic). Something like 0 issue "letsencrypt.org" and 0 issue "amazon.com" may do.
DMARC: a TXT record on hostname _dmarc.yourdomain, with value “v=DMARC1; p=quarantine; rua=mailto:youremail@yourdomain”
SMTP TLS Reporting: a TXT record on the hostname _smtp._tls.yourdomain with value "v=TLSRPTv1; rua=mailto:youremail@yourdomain"
An MTA (Mail Transfer Agent) STS (Strict Transport Security) record: a TXT record for hostname _mta-sts.yourdomain with value "v=STSv1; id=2021021000;" – the id can be a representation of the current datetime in yyyymmddhhmm form, incremented on each policy change. You should also set up a static web site to host the MTA-STS policy document itself at https://mta-sts.yourdomain/.well-known/mta-sts.txt.
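Once published, it's worth confirming the records actually resolve; a sketch (assuming dnspython, with placeholder domain names):

```python
# Sketch: sanity-check SPF, DMARC and CAA records once they are live.
import dns.resolver

for name, rdtype in [
    ("yourdomain.example", "TXT"),          # SPF lives in a TXT record at the apex
    ("_dmarc.yourdomain.example", "TXT"),   # DMARC policy
    ("yourdomain.example", "CAA"),          # permitted certificate authorities
]:
    try:
        for rdata in dns.resolver.resolve(name, rdtype):
            print(name, rdtype, rdata.to_text())
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(name, rdtype, "-- no record found")
```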
For checking your domain's security configuration, have a look at Hardenize.com, by Ivan Ristić.
Ensure that all administrative staff have access to set and update the records they need.
Lastly, don’t forget to decommission the existing DNS service once you are convinced you do not need to go back to it.
Sometime in the next week, a large swathe of web sites around the world, from Fortune 500 companies to governments and beyond, will stop being available securely. All with the next release of Google Chrome (version 70) and Firefox. The words "NET::ERR_CERT_SYMANTEC_LEGACY" are about to become well known.
For those wishing to look into the future, Google Chrome makes its upcoming releases available under the labels of Beta and, prior to that, "Canary" (i.e., the canary in the coal mine). And if you cast your eyes over a few sites with these pre-release versions, you'll see examples like the one shown here (name removed).
Sadly, the operators of these sites may well be looking at the embedded certificate expiry (Valid Until) date, and think this is not an issue for them. Some of these certificates may have many more years of appearing to be valid.
The situation is much worse: the organisation that these web site operators obtained their certificates from — the Certificate Authority — is about to have its trusted status revoked, having been caught acting in ways that undermine the trust instilled in it. These certificates are all issued from Symantec's legacy roots, which include the Thawte, GeoTrust, and RapidSSL brands.
You can read plenty online about this, for example: from DigiCert, Mozilla, and Google. Here's Scott Helme's February 2018 post, and his follow-up from a recent Alexa Top Sites crawl. Several of these have since updated their certificates (well done, Tigerair.com.au, stratco.com.au and naati.com.au; fixed it!).
So what’s actually going to happen?
I’m sure a large number of these will be smeared across mainstream media for being “hacked”, or “offline”, when in reality, “oblivious” is closer to the point.
Poor service providers are going to tell vulnerable people to "ignore the security warnings" and to "proceed" to the site regardless. This is BAD advice. If you are told this, you are better off ceasing to do business with the organisation, as they do not understand the security they are dealing with. If this is the advice of your employer, then you should consider what this means for the security of your personal HR (and other) data.
There are far too many people operating, controlling, or otherwise "responsible" for large numbers of web sites who have no idea what they are actually operating. It's evident from scanning sites and seeing those that still have legacy, vulnerable encryption in their HTTPS configuration, or worse, serve content over unencrypted HTTP. Just because you don't value your content enough to protect it from modification doesn't mean your web visitors don't value NOT being compromised when visiting you.
Web traffic interception happens every second of every day: in Wi-Fi cafés, airports, aeroplanes, corporate LANs. TLS (formerly SSL) is the best way we have to protect the integrity of content across untrusted networks, but we're in a constant capability race to ensure that services only offer ways to connect that minimise the risk of using those untrusted networks.
Driven by a desire to not change things that appear to be working (or indeed, being either lazy, overworked, under resourced/funded, or unaware), organisations are not bringing up their drawbridge of security on their most vulnerable interfaces: those services that are facing the Internet, such as their web site or web services. This issue, when it breaks, will help highlight that some organisations and individuals should probably not be in charge of the services they currently operate.
Case in point: check out BankGradeSecurity.com, a ranking of financial institutions around the world and how well they have adopted modern encryption and security capabilities on their web site and Internet banking services.
It’s clear we’re constantly in the middle of technology transitions – IT Services are not simply done; they are either in-use and actively well-maintained, or they should be archived or removed. Anything else demonstrates cost cutting and under-valuation of the digital capability that allows an organisation to operate.
Organisations face a choice of two types of Managed Services providers today: those that understand service maintenance on behalf of their customers, and those that do not (and are still running with the same HTTPS configuration they went live with years ago).
It's easy to spot these services — they haven't enabled GCM-based AES cipher suites or Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) key exchange. Worse are those still permitting the use of SSLv2, SSLv3, or TLS 1.0, or not yet permitting the use of TLS 1.2 (or the shiny new TLS 1.3). And unbelievably, there are those that don't enable HTTPS at all.
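A quick way to spot the stragglers yourself: connect with a modern client and see what gets negotiated. A sketch (the hostname is just an example); if this fails with a default context, the site is only offering legacy protocols:

```python
# Sketch: report the protocol version and cipher negotiated with a site.
import socket
import ssl

host = "example.com"
ctx = ssl.create_default_context()  # no legacy protocols, modern cipher suites

with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("protocol:", tls.version())    # e.g. 'TLSv1.3'
        print("cipher:  ", tls.cipher()[0])  # e.g. 'TLS_AES_256_GCM_SHA384'
```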
There are more signs of stagnation if you know what to look for – lack of HTTP/2, lack of IPv6, long TTLs on DNS records, etc. – all indicating organisations that are stifled, or don't have the capability to understand what they are doing. Sometimes it's corporate direction to use third-party IT operations who, again, use the cheapest unskilled and unqualified labour to deliver IT services, dressed up in marketing to make it look like they save the earth.
I was recently speaking with someone about high availability. Their approach to high availability was two hand-crafted instances (servers), each in a separate Availability Zone on AWS, behind an ELB.
My approach to high availability is two (or more) instances in two (or more) Availability Zones behind an ELB, auto-created (and replaced upon failure) by AutoScale, with health-checks to ensure the instances are processing their workloads correctly, and replacing those that are not.
I extend this to include:
rolling updates, so that when the need arises – such as replacing the underlying OS every 6 – 12 months – I can do so as seamlessly as possible
applying critical security updates daily without human intervention
ensuring that logs are sent off the instance to persistent storage and analytics where they can’t be overwritten or tampered with
ensuring data workloads are on object storage, or managed database platforms
Of course, the ASG provisioning approach to EC2 deployment requires that everything during install is scripted and repeatable. And it gives me the freedom to size my ASG to zero instances outside of service hours, with confidence that the system deploys correctly every single time.
I've been resizing ASGs to zero outside production hours for many years now, across hundreds of instances per day. You can bet it's pretty well tested, has saved hundreds of thousands of dollars, and has proven that the deployment process works.
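For those curious, the scheduled scale-down is a couple of API calls; a sketch using boto3 (the group name, sizes and cron times in UTC are placeholders):

```python
# Sketch: scale a non-production ASG to zero overnight, and back up each morning.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="my-nonprod-asg",
    ScheduledActionName="scale-to-zero-overnight",
    Recurrence="0 11 * * 1-5",       # 19:00 AWST, Monday to Friday
    MinSize=0, MaxSize=0, DesiredCapacity=0,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="my-nonprod-asg",
    ScheduledActionName="scale-up-each-morning",
    Recurrence="0 23 * * 0-4",       # 07:00 AWST the next working morning
    MinSize=2, MaxSize=4, DesiredCapacity=2,
)
```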
I bake my own AMIs, and re-baseline these semi-regularly using automation to do so – a process that takes 10 minutes of wall time, and about 60 seconds of my time. I treat AMIs as artefacts, as precious and as important as the code they run.
I have to stop myself sometimes when others rely upon pre-cloud approaches to high availability.