Browser support for FTP (another sunset)

As Hanno Böck noted in the recent Bulletproof TLS Newsletter, support for FTP has been removed in Firefox 90. We’ve seen similar messaging from most major browser vendors over the last few years.

I’m going to make a bold prediction and say that in 10 years’ time we’ll see the removal of (plain text) HTTP support as well. Regardless of internal or external networks (an outdated distinction, aligned to the “crunchy shell” model of network security), the move to stronger security for all communications, backed by free TLS Certificate Authorities such as Let’s Encrypt, means we should be doing end-to-end encryption for everything the common web browser fetches.

For some time, Firefox has had an HTTPS-only mode, with warnings when services try to drop back to unencrypted access. I’ve typically found this warning pops up when various link-shortening services are chained together, and I’m grateful for the awareness that a jump in that chain is poorly implemented.

In the meantime, the distribution of files using FTP needs to stop. If you run an FTP service, you need to think about transitioning to something that permits access using HTTPS as the transport protocol.

Another sunset in the circle of life of a protocol.

Using AWS to help secure your email domain: the MTA-STS website

I recently posted about using AWS to provide very cost-effective, scalable, secure static websites. In this post, here’s a good reason to do this now: publishing a new website on your domain that has one simple file on it.

Email on the Internet has used SMTP for transferring email between mail transfer agents (MTAs) since 1982, on TCP port 25. The initial implementation offered only unencrypted transport of plain text messages.

It’s worth noting that people, as clients to the system, generally send their email via their corporate mail server, not directly from their workstation to the recipient. The software on your desktop or phone is a Mail User Agent (MUA); your MUA (client) transfers your outbound message to your MTA (mail server), which then sends the message using SMTP to your recipient’s MTA. When the recipient is ready, they sign in and read their mail with their own MUA.

The focus of this article is that middle hop above – MTA to MTA, across the untrusted Internet.

SMTPS added encryption in 1997, wrapping SMTP in a TLS layer, much as HTTPS is HTTP inside a TLS wrapper, with certificates issued by Certificate Authorities that many are familiar with. This originally used TCP port 465; modern MTA-to-MTA transfers more commonly negotiate TLS in-band via STARTTLS on port 25. While modern MTAs support both encrypted and unencrypted transfer, it’s the order and fail-over that’s important to note.

Modern mail servers will generally try to do an encrypted mail transfer to the target MTA, but will seamlessly fall back to the original unencrypted SMTP if encryption is not available. This step is invisible to the person who sent the message – they’ve wandered off with their MUA, leaving the mail server with the job of forwarding the message.

Figure: sending an email, from the user on the left to the user on the right, via two MTA servers.

Now imagine an unscrupulous network provider somewhere in the path between the two mail servers, who drops the encrypted traffic (or strips the STARTTLS offer from the connection); the end result is that the sending server assumes the destination does not support encrypted transfer, and falls back to plain text SMTP. That same attacker then reads your email. Easy!
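To see just how silent that fallback is, here’s a minimal Python sketch of the opportunistic-TLS pattern a sending MTA follows (the hostnames and addresses are placeholders; real MTAs do this natively, and many networks block outbound port 25):

import smtplib
import ssl

MX_HOST = "mx.example.com"  # placeholder: normally found via an MX lookup

conn = smtplib.SMTP(MX_HOST, 25, timeout=30)
conn.ehlo()

if conn.has_extn("starttls"):
    # The remote end advertised STARTTLS: upgrade the session to TLS.
    conn.starttls(context=ssl.create_default_context())
    conn.ehlo()  # re-identify over the now-encrypted channel
# else: no STARTTLS on offer (or it was stripped in transit), and the
# message below goes out in plain text - no error, no warning.

conn.sendmail(
    "alice@example.org",
    ["bob@example.com"],
    "Subject: test\r\n\r\nHello over whatever transport we ended up with.",
)
conn.quit()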

If only there were a way the recipient could express a preference not to have inbound email fall back to unencrypted SMTP.

Indeed, there’s a similar situation with web sites: how do we express that a web site should only be served over HTTPS, and never downgraded to HTTP? The answer there is the HTTP Strict Transport Security (HSTS) header, which tells web browsers not to go back to unencrypted web traffic.

Well, mail systems have a similar concept, called SMTP Mail Transfer Agent Strict Transport Security, or MTA-STS, defined in RFC 8461.

MTA-STS has a policy document, which expresses a preference for how remote senders should handle connections to the mail server. It’s a simple text file, published at a well-known location on a domain. Remote mail servers may retrieve this file, and cache it for extended periods (such as a year).

In addition, there is a DNS text record (TXT), named _mta-sts.$yourdomain. The value of this for me is "v=STSv1; id=2019042901", where the id is effectively a timestamp of when the policy document was last set. I can update the policy text file on the MTA-STS website, then update the DNS id, and the policy should refresh on remote servers that talk to my mail server.

The well-known location is on a specific hostname in your domain – a new website, if you will – that serves only this one file. The site is mta-sts.$yourdomain, and the path and filename are ".well-known/mta-sts.txt". The document must be served from an HTTPS site, with a valid HTTPS certificate.

Here’s mine for my personal domain: https://mta-sts.james.rcpt.to/.well-known/mta-sts.txt, and here is the content at the time of writing:

version: STSv1
mode: enforce
mx: mx0.james.rcpt.to
max_age: 2592000
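A quick way to check your own setup is to fetch both the DNS record and the policy document. Here’s a minimal sketch using Python with the third-party dnspython package (the domain is a placeholder):

import urllib.request

import dns.resolver  # third-party: pip install dnspython

DOMAIN = "example.com"  # placeholder: substitute your own domain

# 1. The _mta-sts TXT record signals that a policy exists; the id acts
#    as a version stamp, bumped whenever the policy file changes.
for rr in dns.resolver.resolve(f"_mta-sts.{DOMAIN}", "TXT"):
    print("TXT:", rr.to_text())

# 2. The policy itself must be served over valid HTTPS at the
#    well-known path on the mta-sts hostname.
url = f"https://mta-sts.{DOMAIN}/.well-known/mta-sts.txt"
with urllib.request.urlopen(url, timeout=10) as resp:
    policy = resp.read().decode("utf-8")

# 3. Parse the simple "key: value" format.
fields = dict(line.split(":", 1) for line in policy.splitlines() if ":" in line)
print({k.strip(): v.strip() for k, v in fields.items()})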

So an excellent place to host this MTA-STS static website, with a valid TLS certificate, extremely cost-effectively (possibly even at no cost), is the AWS Serverless approach previously posted.

You can also check for this with Hardenize.com: if you get a grey box next to MTA-STS for your domain, then you don’t have it set up.

Of course, not all MTAs out there support MTA-STS yet, but those that do can stop falling back to plain text when sending to your domain. Even so, don’t send sensitive information via email, like passwords or credit card information.

The MTA trying to send the message may cache the STS policy for a while (up to the max_age, in seconds, indicated in the file), so as long as TCP 443 is available at some point, with a valid certificate (from a trusted public certificate authority), the policy can persist even if the HTTPS MTA-STS site is unavailable later (e.g., due to a network change).

It’s worth noting that your actual email server can stay exactly where it is – on site, or mass-hosted elsewhere; we’re just talking about the MTA-STS website and policy document being on a very simple, static web site in Amazon S3 and CloudFront.

The ASD Essential Eight in the AWS Serverless World

The Australian Signals Directorate, part of the Australian Department of Defence, has been issuing guidance for several years to help organisations secure their digital systems. Known as the Essential Eight, it defines eight activities that help mitigate exposure to compromise or exploitation.

Some of the most basic items are around patching the tech stack:

  • operating systems
  • programming runtime environments like Java, .Net, Python and more
  • software solutions that run on those runtimes

Of course Multi-Factor Authentication (MFA) is a key one; and slowly our service providers are coming around to offering MFA as part of their login services – or better yet, federating identity to other online services that already do this, such as Facebook, Google, etc.

But how much of this applies to your technology stack in the Serverless world of AWS? Let us begin, following the AWS guidance.

1. Application Control

ASD recommends organisations "prevent execution of unapproved/malicious programs including .exe, DLL, scripts (e.g. Windows Script Host, PowerShell and HTA) and installers".

In the world of AWS Lambda, the only code present is our bespoke code and any libraries (layers) we’ve added in. What we want to do is ensure that the code we upload is the code executing, and Lambda now supports signed code bundles (Configuring, Best Practices).
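As a sketch of wiring that up with boto3 (the ARN and function name are placeholders, and this assumes a signing profile already exists in AWS Signer):

import boto3

lambda_client = boto3.client("lambda")

# Create a code signing configuration tied to an existing AWS Signer profile.
csc = lambda_client.create_code_signing_config(
    Description="Only allow code signed by our release profile",
    AllowedPublishers={
        "SigningProfileVersionArns": [
            # Placeholder ARN: substitute your own Signer profile version.
            "arn:aws:signer:ap-southeast-2:111111111111:/signing-profiles/release/AbCdEf123"
        ]
    },
    CodeSigningPolicies={
        # "Enforce" rejects deployment of unsigned or invalidly signed bundles.
        "UntrustedArtifactOnDeployment": "Enforce"
    },
)

# Attach it to a function; subsequent code updates must be signed.
lambda_client.put_function_code_signing_config(
    CodeSigningConfigArn=csc["CodeSigningConfig"]["CodeSigningConfigArn"],
    FunctionName="my-function",  # placeholder
)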

If we’re running a Serverless static web site (using S3, CloudFront, etc.), then we have no executing code, only content (though you may have some Lambda@Edge or CloudFront Functions to inject various security-related HTTP headers, such as HSTS, CSP, and more: see Scott Helme’s excellent securityheaders.com).
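For instance, here is a minimal Lambda@Edge origin-response handler in Python that injects a few such headers; the header values are illustrative starting points, not a recommended policy:

# Lambda@Edge "origin-response" handler: adds security headers to every
# response CloudFront serves from the static site.
def handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]

    headers["strict-transport-security"] = [{
        "key": "Strict-Transport-Security",
        "value": "max-age=63072000; includeSubDomains; preload",
    }]
    # Placeholder policy: tighten to suit your actual content sources.
    headers["content-security-policy"] = [{
        "key": "Content-Security-Policy",
        "value": "default-src 'self'",
    }]
    headers["x-content-type-options"] = [{
        "key": "X-Content-Type-Options",
        "value": "nosniff",
    }]

    return response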

However, there are no other applications as… there is no application server per se.

2. Patch Applications

Well, in AWS Lambda, this is where we have to update our own applications (and those layers/libraries) to ensure they are up to date. If you have abstracted those libraries and imports into Layers, then manage and update them.

Again, in a static web site deployed Serverlessly, we have no application servers to patch (except, again, for any Lambda@Edge or CloudFront Functions that need maintenance).

3. Configure Microsoft Office Macro Settings

Er, well, no Microsoft Office installed in Serverless, so this is a no-op. Nothing to do here, move along…

4. User Application Hardening

ASD says "Configure web browsers to block Flash (ideally uninstall it), ads and Java on the internet. Disable unneeded features in Microsoft Office (e.g. OLE), web browsers and PDF viewers."

We have none of this in our Serverless environments. However, we should be delivering updated applications that do everything possible to support the most modern, up-to-date browsers; the rest of the world is auto-updating those browsers very rapidly.

For corporate environments that lock down browser updates: question why the rest of the world has better security than the corporate users you’re trying to protect!

5. Restrict Admin Privileges

Using AWS IAM, restrict who can deploy, particularly to production environments. Using CI/CD pipelines and approvals, developers should be able to write and update code and have it deploy immediately to non-production environments, but it should require a second sign-off from a separate individual (or team of people) before it gets near production. Indeed, consider the commits to a revision control repository of code as the source of truth; that repository needs review before changes are staged ready for a CI/CD pipeline to do its delivery job.
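As an illustrative sketch (the account ID and the "prod-" naming convention are hypothetical), an IAM policy statement like this, attached to developer roles, denies direct deployments to production Lambda functions, leaving the pipeline’s role as the only path:

# Deny developers direct code updates to production Lambda functions;
# deployments must come via the CI/CD pipeline's role instead.
deny_direct_prod_deploys = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDirectProdLambdaDeploy",
        "Effect": "Deny",
        "Action": [
            "lambda:UpdateFunctionCode",
            "lambda:UpdateFunctionConfiguration",
        ],
        # Hypothetical naming convention: production functions prefixed "prod-".
        "Resource": "arn:aws:lambda:*:111111111111:function:prod-*",
    }],
}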

6. Patch Operating Systems

No servers, no operating systems. OK, Lambda will apply minor version updates to runtimes to address security requirements, but it’s also worth updating major versions of runtimes. Newer runtime versions have a greater chance of supporting newer TLS protocols, ciphers, key exchange methods and checksums.

7. Multi-Factor Authentication

There should never be an interactive user with access that doesn’t use MFA. Not only should your access to AWS be MFA-based, it should probably be federated via AWS SSO, with MFA enforced at your identity provider (IdP).

However, the users of your Serverless solutions may also want the option of federated identity (SAML, etc.), with MFA implemented on their IdP as well (if you have authenticated access). Or perhaps mutual certificate authentication. If you have an open system with no authentication (publicly, anonymously available), then perhaps that’s fine too. Most web sites are, after all, publicly and anonymously available for their home page and other public content; but the ability to change that content is heavily protected.

8. Backups

You should have backups. You should know where your data is. If you’re using DynamoDB, then at least turn on Point In Time Recovery, and set a backup schedule. Consider dumping those backups to a separate account in escrow: check out the S3 options around versioning and retention (Life Cycle) of older versions. Consider the concept from the point of view of an AWS account being compromised: can an attacker then delete the backups cross-account in the other environment?
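Enabling Point In Time Recovery, for example, is a single boto3 call (the table name is a placeholder):

import boto3

dynamodb = boto3.client("dynamodb")

# Enable continuous backups / Point In Time Recovery on a table.
dynamodb.update_continuous_backups(
    TableName="my-table",  # placeholder
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)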

For your code base: is it in a revision control repository, separate from the operational runtime environment? What happens if bad code is put into your repository and pushed through your environments – can you go back? Do you consider the code repository a Production service: accessible for commits from developers, but managed as a Production service on their behalf?


Summary

In summary, much – but not all – of the ASD Essential Eight evaporates from being the operator’s/developer’s responsibility, leaving you more time to concentrate on the effective implementation of the items that do remain.

This is all excellent advice, and the more clearly it is demonstrated, with easy adoption for organisations, the better off we all are, across organisations of all sizes and types.

Going further, I am keen to remove the use of any unencrypted protocols, particularly HTTP. With free, globally trusted TLS certificates available, moving to HTTPS should be straightforward.

However, that’s not the end of the journey, as TLS has versions. Older versions – anything less than TLS 1.2 as of this writing – should not be used, and most browsers and crypto libraries have removed them from their technology stacks to prevent their use.

Your application – even in a Serverless environment – should verify, when it establishes an outbound HTTPS connection, that the details of that connection meet your minimum TLS requirements – and you should be ready to raise your requirements in future. As mentioned above, sometimes that requires a newer runtime, but newer runtimes often still support older TLS protocols – even if you don’t want them to (or shouldn’t use them).
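In Python, for instance, you can pin a floor under the protocol version for your outbound connections; a minimal sketch:

import ssl
import urllib.request

# Refuse anything older than TLS 1.2 on this connection, regardless of
# what the runtime's defaults or the remote end would otherwise allow.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with urllib.request.urlopen("https://example.com/", context=context) as resp:
    print(resp.status)  # a failed handshake would raise ssl.SSLError instead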

Something I have been recommending to organisations for some time is to start blocking corporate users from using unencrypted HTTP from their workstations. Firefox has a setting to soft-disable unencrypted HTTP as well (a warning is presented to the user). This may seem inconvenient, but it’s a huge step up in security for your workers, whose browsing is a key vector into your systems.

Furthermore, stop providing convenience redirects on your services from unencrypted HTTP on port 80 to HTTPS on port 443 – for anything other than your organisation’s home page. Any other redirection via an unencrypted port should be a hard fail, fixed at the source.

Migrating & Operating Public DNS with AWS Route53

DNS is the fundamental directory service of the Internet, and these days, of all corporate systems performing discovery of their various components in digital deployments. It is used at least hundreds of times per day per device, and no one really notices until it breaks.

For the most part, it’s what turns the hostname you used to navigate to this article – blog.james.rcpt.to – into an IP address of some sort: the numbered address that is supposed to uniquely identify the server or load balancer, connected to the public Internet, to which your browser will then make a network connection.

If your DNS breaks, then your service, or perhaps your entire company, is “off the air”, no longer discoverable by clients wishing to use human-readable labels and names to find your address.

I recently gave a 45 minute presentation at the AWS User Group in Perth, Western Australia, highlighting some of the advantages of using Route53 for DNS, and some of the more modern security protections therein; slides can be found here.

I have helped various organisations transition their authoritative DNS from their existing services to AWS Route53. This migration, in isolation, requires coordination, preparedness, and awareness of both the impact of poor planning and the opportunity for service improvement.

DNS Migration Planning

DNS manages to serve at the scale of the planet through effective use of caching. This is done at multiple layers in the service.

When a DNS resolver looks up a name it has not already cached (or whose cache entry has expired), it must start a recursive resolution. For this to happen, it needs to find the Authoritative Name Server for the zone. That Authoritative Name Server is delegated from its parent domain, and so on, until we reach the top of the DNS tree.

A common example in this scenario is the address “www.example.com”. Before answering the address for this exact name, a client must determine the address of the Authoritative Name Server(s) for “example.com”. If this is unknown, then the client must ask for the address of the Authoritative Name Server(s) for “.com”. Again, if the “.com” Name Servers are not already known (cached), then the level above that must be queried, commonly called the Public DNS Roots, or “.”.

The Global Root DNS Servers

These DNS roots are globally agreed, and every DNS server is given a (relatively) static file of their addresses. They have generic names, A through M, and are operated by a variety of organisations who agree to serve the same data for the 1000+ Top Level Domains under the globally agreed root.

These Root DNS Servers are all accessible via both the existing IPv4 addressing scheme and the newer IPv6 Internet. These addresses exist in a file commonly called the “root.hints” file, which is distributed with all DNS server software as the initial glue. It rarely changes.

In a trick of Internet routing (BGP), many of these 13 hosts (A–M) are each replicated multiple times by a process called Anycast: the same small address segment that the server lives on is “announced” to the world from multiple locations, each with a duplicate server performing the same process and responding with the same answers.

This first layer of scalability helps the root servers deal with billions of devices using the DNS service every second.

The Global Top Level Domains (TLDs): Registry Operator and Registrars

Each of the global TLDs is operated by a Registry Operator, but records are added and removed by multiple Registrar organisations. For example, the “.com” zone is operated by Verisign, but there are many Registrars you can obtain a DNS name from, amongst which is Route53 itself.

These operators have a selection of innovations and policies they apply to their delegations. Some operate their service with just IPv4, and some are dual-stack IPv4 and IPv6. Some operators have their DNS zones cryptographically signed (using DNSSEC) to allow validation of DNS responses.

Route53

Starting in December 2010, Route53 originally provided support only for hosting DNS Zones for customers. The engineering for the service at that time was designed to eclipse what most organisations had in place, providing higher reliability and scalability.

Back in the day, the authoritative references on running DNS services were the BIND Operations Guide (aka the BOG) and the O’Reilly book DNS and BIND by Paul Albitz and Cricket Liu. Most organisations ran just two DNS servers to respond to their customers’ queries, and the most common software for doing so was ISC’s Berkeley Internet Name Daemon, or BIND.

Of course, to have your own DNS server, you need a fixed IP address for it to operate from, as this IP address is what the upstream zone responds with to clients. And thus the initial problem for most organisations was getting a pool of static IP addresses.

Most ISPs only hand out dynamic addresses, and charge substantially more for static ones. Other (typically larger) organisations went through a laborious process of having IP addresses assigned to the organisation itself (through ARIN, APNIC, or other IP address registries), and then deploying BGP to announce their range to their connected ISP(s) – possibly multiple.

This overhead of assigned IP address ranges and setting up corporate BGP (and trying to secure it) all went away with the launch of hosted DNS services, and Route53 turned out to be one of the best-engineered and most cost-effective solutions.

With zones hosted, we can delegate from the parent domain to the name servers that Route53 provides us; each individual record (e.g., “www”) can then point to any IP address, anywhere. The entire need for corporations to acquire large blocks of IP addresses went away.

Indeed, I have helped organisations who previously had very large, fixed blocks of IPv4 addresses to relinquish some of these in a commercial market (for millions of dollars).

Route53 has expanded its remit in the AWS Cloud environment. In addition to hosting authoritative DNS zones, it also offers Registrar services across hundreds of top-level domains, as well as tuning the use of DNS within the Virtual Private Cloud environment. Each of these functions can be used completely separately; for example, you can:

  • Register a domain with Route53 to handle the registration and renewals, but delegate to your own (or a third-party) DNS servers.
  • Register a domain with another Registrar (e.g., GoDaddy, Verisign) and delegate to a Route53 Hosted Zone.
  • Configure complex routing and protection mechanisms for your Virtual Cloud Environment.
  • Host private DNS for your VPC, invisible to the outside world.

In this article, we are going to concentrate on running Public DNS Zones, and the protections you can put in place.

Route53 Public DNS Zone Hosting: Scalability

By default, Route53 gives the operator a set of 4 DNS servers to pass to the parent domain for delegation. Each of the 4 servers is itself given a DNS name, from four different TLDs, and each of those names resolves to both IPv4 and IPv6 addresses.

The parent domain can then record (and cache) the delegation addresses of these 4 DNS server endpoints, and can instruct end clients doing lookups to also cache this delegation.

Each of those four endpoint addresses is also potentially Anycast-announced from multiple locations worldwide. This helps clients reach the closest deployed endpoint for each of the four names, reducing DNS latency.

This set of 4 DNS servers provides much greater reliability than the traditional two, and the multiple Anycast presentations improve this further. The chance of any other AWS Route53 customer having the same set of 4 DNS server endpoints is very small, so a Denial of Service against the delegated addresses of another customer’s zone is unlikely to affect your zone significantly. This is part of the reason why Route53 offers a 100% availability Service Level Agreement (SLA).

(Note: the control plane, for providing updates to records, is not covered by this SLA.)

Route53: Questions

A number of configuration questions arise when planning the migration:

  1. do you want query logging turned on?
    1. this is delivered to an S3 bucket: what’s the retention policy on this (look at S3 life cycle policies) – always set an automated time to delete, perhaps in 12 months?
    2. what’s the analytics processing done on this data, if any?
    3. who has access to this log data, is the bucket marked private, default encryption, versioning?
  2. do you want DNSSEC enabled on the zone? Perhaps do this after the service migration if you don’t currently have DNSSEC enabled.
  3. What integrations for automated updates are in place, if any?
  4. Who needs access to the console to see and/or update records?

Route53: Migration

The process for migrating to Route53 is relatively simple:

  1. Reduce the parent domain’s cache time (TTL – time to live; see below) for the delegation records that point at your current service: a value of 300 seconds may be reasonable.
  2. Prepare the DNS export from the older service; review the records in there before doing a test import into the new service, to ensure that no records cause any issue. This is perfectly safe, as we have not redelegated yet. You should also review the individual records’ TTL values, and potentially reduce them as part of the export/import. Any web site or load balancer should run with a TTL no higher than 300 seconds. Once the test import has been successful, delete the imported records – we will take a fresh update later…
  3. Determine if there are any processes that are automatically updating your DNS. These will have to be integrated to call the Route53 API for those updates after the migration.
  4. Ensure you have access to redelegate with the Registrar. Test username & password to log in.
  5. Schedule a time for the transition, during which we will avoid updates, or update both old and new. You will have to wait for 5 periods of the original delegation TTL to pass since step 1. For example, if the delegation TTL was a week, then wait 5 weeks before proceeding (see below on TTL)!
  6. At the agreed time:
    1. Record the current (old) delegation IP addresses the registrar has configured.
    2. re-export the values
    3. update the record TTLs in the export.
    4. import into Route53 (a scripted sketch follows this list).
    5. update the Registrar with the four new delegation addresses.
    6. test the DNS immediately; if it fails, revert the Delegation to the addresses in step (a) above.
    7. watch logs/metrics from other systems that rely upon this domain name, such as Web traffic for the zone, or mail traffic.
    8. test the DNS again after 25 minutes.
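Several of these steps script well against the Route53 API. As a sketch of the import in step 6 using boto3 (the zone ID and record values are placeholders):

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000000000000"  # placeholder: the new Route53 zone

# Upsert a record from the old provider's export, with a migration-friendly TTL.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Migration import",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)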

DNS Caching: TTL & Delegation records

A key element of DNS’s scalability is caching as much as possible. Records all have a customer-defined Time To Live (TTL) value: the duration (in seconds) that a record can be kept by a client.

When making changes to records, we typically observe 50% of clients seeing the updated value after one TTL period; a further 50% of the remainder see the update after another TTL period; in practice, the TTL acts almost like a half-life:

Duration    Cumulative % of clients seeing the update
1 * TTL     50%
2 * TTL     75%
3 * TTL     87.5%
4 * TTL     93.75%
5 * TTL     96.875%
6 * TTL     98.4375%
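The arithmetic behind that table is a simple geometric decay: after n TTL periods, roughly 1 - 0.5^n of clients have seen the update. A quick check:

# Approximate fraction of clients seeing an update after n TTL periods.
for n in range(1, 7):
    print(f"{n} * TTL: {1 - 0.5 ** n:.4%}")

# With a 300 second TTL, 5 periods is 25 minutes for ~96.9% coverage.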

We would typically wait at least 5 periods of the time-to-live value on a delegation before making a change. As many organisations have this value set to a week (or a month), this could take some time; we’d also recommend keeping this value relatively small once migrated, to retain flexibility for future re-delegation. 24 hours is a reasonable TTL for delegation records, unless you’re about to do a migration, in which case 300 seconds is reasonable (25 minutes for ~97% of clients to see the update).

During this period, however, you could have your new DNS zone hosting records identical to the old one, and any updates during this period should be applied to both (or updates can be avoided during this window).

However, some network operators choose to override TTL values as they see fit. A certain ISP in Australia would not honour small TTL values, resulting in at least a 24-hour TTL being enforced. Given the 5-TTL-period duration to get ~97% of clients seeing the update, you may have to adjust your time frame to accommodate a parallel run of the old and new DNS services. Unfortunately, you cannot force-update such ISPs.

Route53: Records to Add and Modify

Route53 will automatically create a Start of Authority (SOA) record for your zone. This standard record type has two fields of interest: the RNAME (the responsible person’s email address), and the default TTL value (used for negative DNS responses, when a query tries to find something not defined). You can leave these as the defaults; but if you adjust them, the RNAME field must point to a monitored mailbox, and the TTL adjustment may result in higher query traffic.

Outside of the SOA, there are a number of other DNS records you should put in place (a scripted example follows the list):

  • SPF on the apex, now implemented as a Text (TXT) record, to indicate where email for your domain is permitted to originate from (low volume lookup traffic). Something like "v=spf1 mx -all" may do. You should set a record even if your domain is not hosting email (e.g., "v=spf1 -all"), to indicate that any email claiming to come from it is fraudulent.
  • CAA on the apex, to indicate to Certificate Authorities who your permitted CAs for your domain are (extremely low volume lookup traffic). Something like:
    0 issue "letsencrypt.org"
    0 issue "amazon.com"
    may do.
  • DMARC: a TXT record on hostname _dmarc.yourdomain, with value "v=DMARC1; p=quarantine; rua=mailto:youremail@yourdomain"
  • SMTP TLS Reporting: a TXT record on the hostname _smtp._tls.yourdomain with value "v=TLSRPTv1; rua=mailto:youremail@yourdomain"
  • An MTA-STS (Mail Transfer Agent Strict Transport Security) record: a TXT record for hostname _mta-sts.yourdomain with value "v=STSv1; id=2021021000" – the id can be a representation of the current datetime in yyyymmddhhmm form, incremented on each policy change. You should also set up a static web site to host your MTA-STS policy document itself at https://mta-sts.yourdomain/.well-known/mta-sts.txt.
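As a sketch, the TXT records above can be created in one boto3 call (the zone ID, domain, and mailbox are placeholders; note Route53 requires TXT values to be enclosed in double quotes):

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000000000000"  # placeholder
DOMAIN = "example.com"                   # placeholder

records = [
    (DOMAIN, '"v=spf1 mx -all"'),
    (f"_dmarc.{DOMAIN}", '"v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"'),
    (f"_smtp._tls.{DOMAIN}", '"v=TLSRPTv1; rua=mailto:postmaster@example.com"'),
    (f"_mta-sts.{DOMAIN}", '"v=STSv1; id=2021021000"'),
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": f"{name}.",
                "Type": "TXT",
                "TTL": 3600,
                "ResourceRecords": [{"Value": value}],
            },
        } for name, value in records]
    },
)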

For checking this domain’s security configuration, have a look at Hardenize.com, by Ivan Ristić.

Post-Migration

Ensure that all administrative staff have access to set and update the records they need.

Lastly, don’t forget to decommission the existing DNS service once you are convinced you do not need to go back to it.

AWS Re-certification

Time passes, and before you know it, three years have raced past and you get the following email:

Hello James Bromberger,

Your AWS Certified Solutions Architect – Associate is set to expire on Mar 13, 2021.

How to Recertify

To maintain your certification, you must pass the current version of the AWS Certified Solutions Architect – Associate exam. Check out our Recertification Policy for more information and details by certification level.

You have a 50% discount voucher in your AWS Certification Account under the “Benefits” section. If you haven’t done so already, you can apply this voucher to your exam fee to recertify or apply it to any future certification exam you wish to pursue by Mar 13, 2021. Sign in to aws.training/certification to get started.

If you have any questions, please refer to our FAQs or contact us.

Thank you,

AWS Training and Certification

My Solution Architect Professional certification also renews the corresponding subordinate Solution Architect Associate certification, which I first obtained on the 24th of February 2013, as one of the first in the world to sit it.

This reminder email came out exactly one month before expiry, so I have plenty of time to study and prepare.

With the global pandemic effectively shutting down much of the world, next week also marks 12 months since I was last on a plane – the purpose of that trip being to attend an exam certification workshop to write the items (questions) for the… Solution Architect Professional certification, as a Subject Matter Expert. Of course, there are many questions in the certification pool, and each candidate gets a random selection, including some questions that are non-scoring and are themselves being tested on candidates.

I often point my Modis AWS Cloud practice colleagues at the Certification process training course on the aws.training site. It gives you great insight into the thoroughness of the process; it’s quite in-depth. This should give confidence to candidates who strive to obtain these vendor certifications – they are discerning, and for good reason: to retain value.