Migrating & Operating Public DNS with AWS Route53

DNS is the fundamental directory service of the Internet and, these days, of the corporate systems that discover their various components in digital deployments. It is used hundreds of times per day per device, and no one really notices until it breaks.

For the most part, it's what translates the hostname you used to navigate to this article – blog.james.rcpt.to – into an IP address: the numbered address that is supposed to uniquely identify the server or load balancer, connected to the public Internet, to which your browser will then make a network connection.

If your DNS breaks, then your service, or perhaps your entire company, is “off the air”, no longer discoverable by clients wishing to use human-readable labels and names to find your address.

I recently gave a 45-minute presentation at the AWS User Group in Perth, Western Australia, highlighting some of the advantages of using Route53 for DNS, and some of the more modern security protections therein; slides can be found here.

I have helped various organisations transition their authoritative DNS from their existing services to AWS Route53. This migration, in isolation, requires coordination and preparedness, an awareness of the impact of poor planning, and presents an opportunity for service improvement.

DNS Migration Planning

DNS manages to serve the scale of the planet by effective use of caching. This is done at multiple layers in the service.

When a DNS resolver handles a query it has not already cached (or whose cached answer has expired), it must start a recursive resolution. To do this, it needs to find the Authoritative Name Server for the zone. That Authoritative Name Server is delegated from a parent domain, and so on, up until we reach the top of the DNS tree.

A common example in this scenario is the address “www.example.com”. Before resolving this exact name, a client must determine the address of the Authoritative Name Server(s) for “example.com”. If this is unknown, then the client must ask for the address of the Authoritative Name Server(s) for “.com”. Again, if the “.com” Name Servers are not already known (cached), then the level above that must be queried: the public DNS roots, or “.”.
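You can observe this delegation chain yourself; below is a minimal Python sketch using the third-party dnspython library (my tooling choice here, not anything mandated by DNS – the command-line dig +trace shows the same data), with “example.com” as a placeholder zone:

# Show the NS delegation at each level of the tree, from the root down.
# Requires "pip install dnspython".
import dns.resolver

for zone in [".", "com.", "example.com."]:
    answer = dns.resolver.resolve(zone, "NS")
    servers = sorted(rr.target.to_text() for rr in answer)
    print(f"{zone:<13} is served by: {', '.join(servers)}")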

The Global Root DNS Servers

These DNS roots are globally agreed upon, and every DNS server is given a (relatively) static file of their addresses. They are given generic names, A through M, and are operated by a variety of organisations that agree to serve the same data for the 1,000+ Top Level Domains under the globally agreed root.

These Root DNS Servers are all accessible via both the existing IPv4 address scheme and the newer IPv6 Internet. Their addresses exist in a file commonly called the “root.hints” file, which is distributed with all DNS server software as the initial glue. It rarely changes.

In a trick of Internet routing (BGP), each of these 13 hosts (A-M) is also replicated multiple times by a process called Anycast: the same small address segment that the server lives on is “announced” to the world from multiple locations, each running a duplicate server performing the same process and responding with the same answers.

This first layer of scalability helps the root servers deal with billions of devices using the DNS service every second.

The Global Top Level Domains (TLDs): Registry Operator and Registrars

Each of the global TLDs is operated by a Registry Operator, but records are added and removed by multiple Registrar organisations. For example, the “.com” zone is operated by Verisign, but there are many Registrars from which you can obtain a DNS name, amongst them Route53 itself.

These operators apply a selection of innovations and policies to their delegations. Some operate their service on IPv4 only, and some are dual-stack IPv4 and IPv6. Some operators have their DNS zones cryptographically signed (using DNSSEC) to allow validation of DNS responses.

Route53

When it launched in December 2010, Route53 provided support only for hosting DNS zones for customers. The engineering for the service at that time was designed to eclipse what most organisations had in place, providing higher reliability and scalability.

Back in the day, the authoritative references on running DNS services were the BIND Operations Guide (aka “the BOG”) and the O’Reilly book DNS and BIND by Cricket Liu. Most organisations ran just two DNS servers to answer their customers’ queries, and the most common software for doing so was ISC’s Berkeley Internet Name Domain server, or BIND.

Of course, to run your own DNS server, you needed a fixed IP address for it to operate from, as this IP address is what the upstream zone would hand out to clients. And thus the initial problem for most organisations was obtaining a pool of static IP addresses.

Most ISPs only hand out dynamic addresses, and charge substantially more for static ones. Other (typically larger) organisations went through the laborious process of having IP address ranges assigned to them directly (through ARIN, APNIC, or other IP address registries), and then deploying BGP to announce their range to their connected ISP(s) – of which there could be several.

This overhead of assigned IP address ranges and setting up corporate BGP (and trying to secure it) all went away with the launch of hosted DNS services, and Route53 turned out to be one of the best-engineered and most cost-effective solutions.

With zones hosted, we can delegate from the parent domain to the name servers that Route53 provides us; each individual record (e.g., “www”) can then point to any IP address, anywhere. The entire need for corporations to acquire large blocks of IP addresses went away.

Indeed, I have helped organisations that previously held very large fixed blocks of IPv4 addresses relinquish some of them on the commercial market (for millions of dollars).

Route53 has since expanded its remit in the AWS Cloud environment. In addition to hosting authoritative DNS zones, it also offers Registrar services for hundreds of TLDs, as well as tuning the use of DNS within the Virtual Private Cloud (VPC) environment. Each of these functions can be used completely separately; for example, you can:

  • Register a domain with Route53 to handle the (re-)registration, but delegate to your own (or a third party’s) DNS servers.
  • Register a domain with another Registrar (e.g., GoDaddy, Verisign) and delegate to a Route53 Hosted Zone.
  • Configure complex routing and protection mechanisms for your Virtual Cloud Environment.
  • Host private DNS for your VPC, invisible to the outside world.

In this article, we are going to concentrate on running Public DNS Zones, and the protections you can put in place.

Route53 Public DNS Zone Hosting: Scalability

By default, Route53 gives the operator a set of four DNS servers to pass to the parent domain for delegation. Each of the four servers is given a DNS name, drawn from four different TLDs, and the four names themselves resolve to both IPv4 and IPv6 addresses.

The parent domain then serves the delegation to these four DNS server endpoints, and resolvers performing lookups are instructed to cache this delegation.

Each of those four endpoint addresses is also potentially Anycast-announced from multiple locations worldwide. This helps clients reach the closest deployed endpoint for each of the four names, reducing DNS latency.

This set of four DNS servers provides much greater reliability than the traditional two, and the multiple Anycast presentations improve this further. The chance of any other AWS Route53 customer having the same set of four DNS server endpoints is very small, so a denial of service against the specific set of delegated addresses for another zone is unlikely to affect your zone significantly. This is part of the reason why Route53 offers a 100% availability Service Level Agreement (SLA).
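As a small illustration, here is a hedged boto3 sketch that lists the four name servers Route53 has allocated to a hosted zone (the zone ID is a placeholder):

# Print the delegation set (four name servers) for a public hosted zone.
import boto3

route53 = boto3.client("route53")
zone = route53.get_hosted_zone(Id="Z0000000EXAMPLE")  # placeholder zone ID
for name_server in zone["DelegationSet"]["NameServers"]:
    # The four names are drawn from different TLDs (.com, .net, .org, .co.uk)
    print(name_server)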

(Note: the control plane, used for making updates to records, is not covered by this SLA.)

Route53: Questions

A number of configuration questions arise when planning the migration:

  1. Do you want query logging turned on?
    1. Public zone query logs are delivered to CloudWatch Logs: what’s the retention policy on this (and, if you export to S3, look at S3 lifecycle policies) – always set an automated time to delete, perhaps 12 months (see the sketch after this list)?
    2. What analytics processing is done on this data, if any?
    3. Who has access to this log data; is the destination private, with default encryption (and versioning, if using S3)?
  2. Do you want DNSSEC enabled on the zone? Perhaps do this after the service migration if you don’t currently have DNSSEC enabled.
  3. What integrations for automated updates are in place, if any?
  4. Who needs access to the console to see and/or update records?
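To illustrate question 1, here is a hedged boto3 sketch of enabling query logging with an automated retention period; the names, IDs, and account number are placeholders. Note the log group must live in us-east-1, and a CloudWatch Logs resource policy permitting route53.amazonaws.com to write (omitted here for brevity) is also required:

# Create a log group with ~12-month retention, then attach query logging.
import boto3

logs = boto3.client("logs", region_name="us-east-1")
route53 = boto3.client("route53")

logs.create_log_group(logGroupName="/aws/route53/example.com")
logs.put_retention_policy(logGroupName="/aws/route53/example.com",
                          retentionInDays=365)  # automated deletion

route53.create_query_logging_config(
    HostedZoneId="Z0000000EXAMPLE",
    CloudWatchLogsLogGroupArn=("arn:aws:logs:us-east-1:111122223333:"
                               "log-group:/aws/route53/example.com"),
)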

Route53: Migration

The process for migrating to Route53 is relatively simple:

  1. Reduce the parent domain’s cache time (TTL – time to live; see below) for the delegation records that point at your current service: a value of 300 seconds may be reasonable.
  2. Prepare the DNS export from the older service; review the records in it before doing a test import into the new service, to ensure no records cause any issue. This is perfectly safe, as we have not re-delegated yet. You should also review the individual records’ TTL values, and potentially reduce them as part of the export/import. Any web site or load balancer should run with a TTL no higher than 300 seconds. Once the export/import has been successful, delete the imported records – we will take a fresh export later.
  3. Determine if any processes are automatically updating your DNS. These will have to be integrated to call the Route53 API for those updates after the migration.
  4. Ensure you have access to re-delegate with the Registrar. Test the username and password to log in.
  5. Schedule a time for the transition, during which we will avoid updates, or update both old and new. You will have to wait for five periods of the original delegation TTL to pass since step 1. For example, if the delegation TTL was a week, then wait five weeks before proceeding (see below on TTL)!
  6. At the agreed time:
    1. Record the current (old) delegation addresses the Registrar has configured.
    2. Re-export the records from the old service.
    3. Update the record TTLs in the export.
    4. Import into Route53.
    5. Update the Registrar with the four new delegation addresses.
    6. Test the DNS immediately (see the verification sketch after this list); if it fails, revert the delegation to the addresses recorded in sub-step 1 above.
    7. Watch logs/metrics from other systems that rely upon this domain name, such as web traffic for the zone, or mail traffic.
    8. Test the DNS again after 25 minutes.
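For the immediate DNS test in sub-step 6 above, a minimal sketch using the third-party dnspython library, which queries each delegated name server directly for a well-known record (the zone and record names are placeholders):

# Ask each name server in the delegation directly for a record, bypassing
# caches, so a bad delegation shows up immediately. Note your local
# resolver may still return the old, cached NS set for a while; you can
# also hard-code the four Route53-supplied names here instead.
import dns.resolver

ZONE = "example.com."  # placeholder
for ns in dns.resolver.resolve(ZONE, "NS"):
    ns_ip = dns.resolver.resolve(ns.target, "A")[0].to_text()
    direct = dns.resolver.Resolver()
    direct.nameservers = [ns_ip]  # query this server only
    answer = direct.resolve("www." + ZONE, "A")
    print(f"{ns.target} answers: {[rr.to_text() for rr in answer]}")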

DNS Caching: TTL & Delegation records

A key element of DNS’s scalability is caching as much as possible. Records all have a customer-defined Time To Live (TTL) value: the duration (in seconds) that a record can be kept by a client.

When making changes to records, we typically observe 50% of clients seeing updated values after the TTL period; a further 50% of the remainder see the update after another TTL period; in practice, the TTL acts almost like a half-life:

Duration    Cumulative % of clients seeing the update after this time
1 * TTL     50%
2 * TTL     75%
3 * TTL     87.5%
4 * TTL     93.75%
5 * TTL     96.875%
6 * TTL     98.4375%
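The percentages above are a simple half-life calculation: after n TTL periods, the cumulative fraction of clients that have seen the update is 1 - (1/2)^n. A trivial Python check reproduces the table:

# Cumulative share of clients seeing an update after n TTL periods.
for n in range(1, 7):
    print(f"{n} * TTL: {100 * (1 - 0.5 ** n):.4f}%")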

We would typically wait at least five TTL periods on a delegation before making a change. As many organisations have this value set to a week (or a month), this could take some time; we’d also recommend keeping this value relatively small once migrated, to retain flexibility for re-delegation in future. 24 hours is a reasonable TTL for delegation records, unless you’re about to do a migration, in which case 300 seconds is reasonable (25 minutes for ~97% of clients to see the update).

During this period, however, you can have your new DNS zone hosting records identical to the old one, and any updates during this window should be applied to both (or avoided entirely).

However, some network operators choose to override TTL values as they see fit. A certain ISP in Australia would not honour small TTL values, enforcing a TTL of at least 24 hours. Given the five-TTL-period duration needed for that ~97% coverage, you may have to adjust your time frame to accommodate a parallel run of the old and new DNS services. Unfortunately, you cannot force-update such ISPs.

Route53: Records to Add and Modify

Route53 will automatically create a Start of Authority (SOA) record for your zone. This standard record type has two fields of interest: the RNAME (the responsible person’s email address), and the default TTL value (used for negative DNS responses, when a query tries to find something not defined). You can leave these at the defaults, but if you adjust them, the RNAME field must point to a monitored mailbox, and reducing the TTL may result in higher query traffic.

Outside of the SOA, there are a number of other DNS records you should put in place (a creation sketch follows this list):

  • SPF on the apex, now implemented as a Text (TXT) record, to indicate where your email is permitted to originate from (low-volume lookup traffic). Something like "v=spf1 mx -all" may do. You should still set a record (e.g., "v=spf1 -all") even if your domain does not host email, to indicate that any email generated from it is fraudulent SPAM.
  • CAA on the apex, to indicate to Certificate Authorities which CAs are permitted for your domain (extremely low-volume lookup traffic). Something like:
    0 issue "letsencrypt.org"
    0 issue "amazon.com"
    may do.
  • DMARC: a TXT record on hostname _dmarc.yourdomain, with value "v=DMARC1; p=quarantine; rua=mailto:youremail@yourdomain".
  • SMTP TLS Reporting: a TXT record on hostname _smtp._tls.yourdomain, with value "v=TLSRPTv1;rua=mailto:youremail@yourdomain".
  • An MTA (Mail Transfer Agent) STS (Strict Transport Security) record: a TXT record for hostname _mta-sts.yourdomain with value "v=STSv1; id=2021021000;" – the id can be a representation of the current datetime in yyyymmddhhmm form, incremented on each policy change. You should also set up a static web site to host the MTA-STS policy document itself at https://mta-sts.yourdomain/.well-known/mta-sts.txt.
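As a hedged sketch, the records above could be created in a Route53 hosted zone with boto3 along these lines (the zone ID, example.com names, and mailto: addresses are placeholders; note that TXT values must be passed wrapped in double quotes):

# UPSERT the SPF, CAA, DMARC, TLS-RPT and MTA-STS records in one batch.
import boto3

route53 = boto3.client("route53")

def upsert(name, rtype, values, ttl=3600):
    return {"Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name, "Type": rtype, "TTL": ttl,
                "ResourceRecords": [{"Value": v} for v in values]}}

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={"Changes": [
        upsert("example.com.", "TXT", ['"v=spf1 mx -all"']),
        upsert("example.com.", "CAA", ['0 issue "letsencrypt.org"',
                                       '0 issue "amazon.com"']),
        upsert("_dmarc.example.com.", "TXT",
               ['"v=DMARC1; p=quarantine; rua=mailto:you@example.com"']),
        upsert("_smtp._tls.example.com.", "TXT",
               ['"v=TLSRPTv1;rua=mailto:you@example.com"']),
        upsert("_mta-sts.example.com.", "TXT", ['"v=STSv1; id=2021021000;"']),
    ]},
)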

For checking this domain’s security configuration, have a look at Hardenize.com, by Ivan Ristić.

Post-Migration

Ensure that all administrative staff have access to set and update the records they need.

Lastly, don’t forget to decommission the existing DNS service once you are convinced you do not need to go back to it.

AWS Re-certification

Time passes, and before you know it, three years have raced past and you get the following email:

Hello James Bromberger,

Your AWS Certified Solutions Architect – Associate is set to expire on Mar 13, 2021.

How to Recertify

To maintain your certification, you must pass the current version of the AWS Certified Solutions Architect – Associate exam. Check out our Recertification Policy for more information and details by certification level.

You have a 50% discount voucher in your AWS Certification Account under the “Benefits” section. If you haven’t done so already, you can apply this voucher to your exam fee to recertify or apply it to any future certification exam you wish to pursue by Mar 13, 2021. Sign in to aws.training/certification to get started.

If you have any questions, please refer to our FAQs or contact us.

Thank you,

AWS Training and Certification

My Solutions Architect Professional certification also renews the corresponding subordinate Solutions Architect Associate certification, which I first obtained on 24 February 2013, as one of the first in the world to sit the exam.

This reminder email came out exactly one month before expiry, so I have plenty of time to study and prepare.

With the global pandemic effectively shutting down much of the world, next week also marks 12 months since I was last on a plane – the purpose of which was to attend a certification exam workshop to write the items (questions) for the… Solutions Architect Professional certification, as a Subject Matter Expert. Of course, there are many questions in the certification pool, and each candidate gets a random selection, including some non-scoring questions that are themselves being tested on candidates.

I often point my Modis AWS Cloud practice colleagues at the certification process training course on the aws.training site. It gives you great insight into the thoroughness of the process; it’s quite in-depth. This should give confidence to candidates who strive to obtain these vendor certifications: they remain discerning, and for good reason – to retain their value.

Securing VPC S3 Endpoints: Blocking other buckets

What is the new s3:ResourceAccount policy condition for? Security!

AWS Virtual Private Cloud is a wonder of the modern cloud age. While most of the magic is encapsulation, it also has a deep permissions policy capability that is highly effective in large deployments.

From a security perspective, accessing your private S3 stores by egressing your VPC over the Internet seemed like a control needing improvement, and this landed with S3 Endpoints (now Gateway Endpoints) in 2015. Gateway Endpoints rely upon integration into the VPC routing table, whereas the newer Interface Endpoints place network interfaces (ENIs) in designated subnets of your VPC. Oh, and Interface Endpoints are charged for (at this time), while Gateway Endpoints are (again, at this time) complimentary.

Having an S3 Endpoint meant that your buckets, as a resource, could now have a policy applied to them to limit their access to only traffic originating from the given Endpoint(s) or VPC(s). This helps limit the impact of stolen credentials.
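As a hedged sketch of that bucket-side control, a bucket policy that denies any access not arriving via a specific Gateway Endpoint might look like this (the bucket name and endpoint ID are placeholders):

# Apply a bucket policy keyed on the aws:SourceVpce condition.
# NB: a blanket Deny like this also blocks console and administrative
# access paths that don't traverse the endpoint; scope carefully.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnlessFromOurEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::mycompany-data",
                     "arn:aws:s3:::mycompany-data/*"],
        "Condition": {"StringNotEquals":
                      {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="mycompany-data",
                                     Policy=json.dumps(policy))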

But another consideration existed, which endpoints also supported: a policy on the Endpoint itself, limiting the actions and buckets that resources within the VPC were allowed to access on the S3 service.

However, the policy language could only permit or deny based on an S3 bucket name, and, as we know, buckets can have any name so long as no one else has already taken it. Indeed, there is a race to create names for buckets that other people may want, and Bucket Squatting (on those names) is a thing.

S3 bucket names couldn’t be reserved or namespaced (beyond the existing ARN format), and while a policy denying access to any bucket not named “mycompany-*” could be deployed on the Endpoint, that doesn’t stop an attacker naming their bucket “mycompany-notreally”.

Why Filter Access to S3

There’s two major reasons why an attacker would want to get access from your resources to S3:

  1. Data ingestion to your network of malware, compilers, large scripts or other tools
  2. Data ex-filtration to their own resource

Let’s consider an instance that has been taken over: some RAT or other execution is happening on your compute at the attacker’s behest. And perhaps the attacker is aware that some level of VPC S3 Endpoint policy may be in place.

The ability to pull in large, complicated scripts, malware, and payloads may be limited over the command-and-control channel, whereas a call such as aws s3 cp s3://mycompany-notreally/payload.bin . may actually succeed in transferring that very large payload to your instance, which then runs it.

And of course in reverse: when they want to steal your data, they upload it to an S3 bucket under their control, from which they can later exfiltrate it out of S3 separately.

Policies for S3 ARNs

The initial thought is to use an ARN that would filter on something like arn:aws:s3:12345678901::mybucket-*, but alas, account IDs are not valid in S3 ARNs! Today, AWS announced a new condition key that takes care of this, called s3:ResourceAccount. It achieves a similar outcome: the policy can match on the account that owns the bucket.

Thus, in a CloudFormation template snippet, you can now put:

S3Endpoint:
  Type: 'AWS::EC2::VPCEndpoint'
  Properties:
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
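      # Deny any S3 action against buckets not owned by this account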
      - Action: s3:*
        Effect: Deny
        Resource: '*'
        Principal: '*'
        Condition:
          StringNotEquals:
            s3:ResourceAccount: !Ref 'AWS::AccountId'
    RouteTableIds:
      - !Ref RouteTablePublic
      - !Ref RouteTableNATGatewayA
      - !Ref RouteTableNATGatewayB
      - !Ref RouteTableNATGatewayC
      - !Ref RouteTablePrivate
    ServiceName: !Join
      - .
      - - com.amazonaws
        - !Ref 'AWS::Region'
        - s3
    VpcId: !Ref VPC

Current AWS Workload recommendations December 2020

There’s a heap of Best Practice around workloads online and in AWS, and here’s some of my current thoughts as at December 2020 – your mileage may vary, caveat emptor, no warranty expressed or implied, and you may have use-cases that justify something different:

  • Multi-AZ VPC – Design address space for four AZs. In an AZ outage, having just one AZ remaining to satisfy demand during a rush is not enough; using contiguous address space and CIDR masks means the step after two AZs is four.
  • VPC DNSSEC validation – Enable validation for the VPC, but be ready for external zones to stuff up their DNSSEC keys. Failing closed may be better than failing open, but the new failure modes need to be understood.
  • Route53 Hosted Zone DNSSEC – Hold off until current issues are resolved if you use CloudFront. New service, new failure modes.
  • TLS – Accept TLS 1.2 and above only. Older versions have already been removed from many clients; be ready for a TLS 1.3-and-above-only future.
  • VPC IPv6 – Enable for all subnets. 33% of traffic worldwide is now IPv6; your external interfaces (ALB/NLB) should all be dual-stack now as a minimum. Don’t forget your AAAA Alias DNS records.
  • VPC external egress for private subnets – Minimise; avoid if possible. You shouldn’t have any boot-time or runtime dependencies apart from the outbound integrations you are explicitly creating. Use Endpoints for S3 and other services; minimise Internet transit.
  • CloudFront IPv6 – Enable for all distributions. As above, particularly if your origin is only on IPv4. Don’t forget your AAAA Alias DNS records.
  • HTTP interfaces – Listen on port 80 (HTTP) only on the apex of the domain, and only if you think people will type your address by hand into a browser; for all other services, do not listen on port 80 at all. Avoid convenience redirects, they are a point of weakness. Use HTTPS for everything, including internal services.
  • ACM public TLS certificates – Use DNS validation, and leave the validation records in place for subsequent reissue. This removes the manual work in renewing and redeploying certificates.
  • S3 Block Public Access – Do this for every bucket and, if possible, account-wide AS WELL (see the sketch after this list). Two levels of protection, in case you have to disable the account-wide setting in future.
  • S3 website public (anonymous) hosting – Do not use; look at CloudFront with Origin Access Identity. You can’t get a custom certificate nor control TLS on S3 website endpoints. But beware default document handling and other issues.
  • S3 Access Logging – Enable, but set a retention policy on the logging bucket. No logs means no evidence when investigating issues.
  • CloudFront Access Logging – Enable, but set a retention policy on the logging bucket. No logs means no evidence when investigating issues.
  • VPC Flow Logs – Enable for all VPCs, but set a retention policy on the CloudWatch Logs group. No logs means no evidence when investigating issues.
  • Database – Use RDS or Aurora wherever possible. Less operational overhead.
  • RDS maintenance, minor versions – Always adopt the latest minor version proactively, from Dev through to Prod. Don’t wait for the automatic upgrade to happen; that typically occurs only when the version you are on is decommissioned.
  • RDS maintenance, major versions – After testing, move up to the latest major version. Avoid being on a decommissioned major version, where the enforced upgrade may be a bigger jump forward than your application can support.
  • RDS encryption in flight – Enforce it. Ensure the privacy of the connection credentials regardless of where the client is; don’t assume the client is correctly configured to use encryption.
  • RDS encryption in flight – Validate it. Get the RDS CA certificate(s) into your trust path at application build time. Always automate bringing them in (and validate and log where you get them from).
  • RDS encryption at rest – Enable. KMS is fine; use a dedicated key for important workloads (and don’t share the key with other accounts).
  • DNS records – Always publish CAA and SPF records, even for parked domains. Protect risk and reputation.
  • HTTP security headers – Validate on SecurityHeaders, Hardenize, SSL Labs, Mozilla Observatory, and Google Lighthouse (and possibly more). This is an entire lesson in itself, but an “A” will stand you in good stead.
  • HTTP security headers: HSTS – Enforce HSTS for a year. We’re never going back to unencrypted HTTP.
  • Public CDNs for libraries in major projects – Avoid; host your own assets. Remove external dependencies.
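As one concrete example from the list above, the S3 Block Public Access recommendation can be applied at both levels; a hedged boto3 sketch (the bucket name and account ID are placeholders):

# Block public access on a single bucket and across the whole account.
import boto3

block_all = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}
# Bucket level:
boto3.client("s3").put_public_access_block(
    Bucket="mycompany-data", PublicAccessBlockConfiguration=block_all)
# Account level (via the separate S3 Control API):
boto3.client("s3control").put_public_access_block(
    AccountId="111122223333", PublicAccessBlockConfiguration=block_all)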

DNSSEC and Route53

DNS is one of the last insecure protocols in widespread use. Since 1983 it has helped identify resources on the Internet, with a namespace and a hierarchy based upon a commonly agreed root.

Your local device – your laptop, your phone, your smart TV, whatever you’re using to read this article – has typically been configured with a local DNS resolver; when your device needs to look up an address, it asks that resolver to go and find the answer to the query.

The protocol used from your local device to the resolver, and from the resolver out across the Internet, is unencrypted. It normally runs on UDP port 53, switching to TCP port 53 under certain conditions.

There is no privacy, on either your local network or the wider Internet, around which records are being looked up or the responses coming back.

There’s also no validation that the response sent back to the Resolver IS the correct answer. And malicious actors may try to spuriously send invalid responses to your upstream resolver. For example, I could get my laptop on the same WiFi as you, and send UDP packets to the configured resolver telling it that the response to “www.bank.com” is my address, in order to get you to then connect to a fake service I am running, and try and get some data from you (like your username and password). Hopefully your bank is using HTTPS, and the certificate warning you would likely get would be enough to stop you from entering information that I would get.

The solution to this was to use digital signatures (not encryption) to allow verification of the DNS responses received by the upstream resolver from across the Internet. And thus DNSSEC was born in 1997 (23 years ago, as at 2020).

The take up has been slow.

Part of this has been the need for each component of a DNS name – each zone – to deploy a DNSSEC-capable DNS server to generate the signatures, and then to have each domain signed.

The public DNS root was signed in 2010, along with some of the original Top Level Domains. Today the Wikipedia page for Internet TLDs shows that a large number of them are signed and ready for their customers’ DNS domains to return DNSSEC results.

Since 2012, US Government agencies have been required by NIST to deploy DNSSEC, but most years some agencies opt out: it has been too difficult, or the DNS software or service hosting their domain does not support it.

Two parts to DNSSEC

On the one side, the operator of the zone being looked up (and its parent domains) needs to support and have established a chain of trust for DNSSEC. If you turn on DNSSEC for your authoritative domain, clients that are not validating responses won’t see any difference.

Separately, the client-side DNS resolver (often operated by your ISP, telco, or network provider) needs to understand and validate the DNSSEC response. If DNSSEC validation is turned on for your resolver, there’s no impact when resolving domains that don’t support DNSSEC.

Both of these need to be in place to offer protection against DNS spoofing, cache poisoning, and other attacks.

Route 53 Support for DNSSEC

In December 2020, Route53 finally announced support for DNSSEC, after many years and many customer requests. This support comes in two parts.

Firstly, there is now a tick box to have the VPC-provided resolver validate DNSSEC signatures, where they are present. It’s either on or off at this stage.

And separately, for hosted DNS zones (your domains), you can now enable DNSSEC signing and have signed responses sent by Route53 for queries against your DNS entries, so they can be validated.

A significant caveat right now (December 2020) for hosted zones is that signing doesn’t support the custom Route53 ALIAS record type, used for defining custom names for CloudFront distributions.
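Enabling signing can be done from the console, or via the API; a hedged boto3 sketch follows. The Key Signing Key must be backed by an asymmetric ECC_NIST_P256 KMS key in us-east-1; the IDs and ARN below are placeholders:

# Create a Key Signing Key for the zone, then enable DNSSEC signing.
import boto3

route53 = boto3.client("route53")
route53.create_key_signing_key(
    CallerReference="ksk-2020-12",   # any unique string
    HostedZoneId="Z0000000EXAMPLE",  # placeholder
    KeyManagementServiceArn=("arn:aws:kms:us-east-1:111122223333:"
                             "key/1111abcd-22ee-33ff-4455-666677778888"),
    Name="ksk1",
    Status="ACTIVE",
)
route53.enable_hosted_zone_dnssec(HostedZoneId="Z0000000EXAMPLE")
# You then still need to publish the DS record with the parent zone
# (via your Registrar) to complete the chain of trust.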

DNSSEC Considerations: VPC Resolver

You should probably enable DNSSEC validation for your VPC resolvers, particularly if you want additional assurance that you aren’t being spoofed. There appears to be no additional cost for this, so the only consideration is: why not?

The largest risk comes from misconfiguration of the domain names that you are looking up.

In January 2018, the US Government had a shutdown due to blocked legislation. Staff walked off the job, and some of those agencies had DNSSEC deployed; for at least one of them, its DNSSEC keys expired, rendering its entire domain offline (many others let their web site TLS certificates expire, causing warnings in browsers, while email, for example, continued to work).

So, you should weigh up the improvement in security posture, versus the risk of an interruption through misconfiguration.

To enable it, go to the Route53 console and navigate to Resolvers -> VPCs.

Choose the VPC resolver, and scroll to the bottom of the page, where you’ll see the check box below.

DNSSEC enabled for a VPC
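The same setting can be applied via the API; a hedged boto3 sketch using the route53resolver client (the VPC ID is a placeholder):

# Turn on DNSSEC validation for a VPC's provided resolver.
import boto3

resolver = boto3.client("route53resolver")
resolver.update_resolver_dnssec_config(
    ResourceId="vpc-0123456789abcdef0",  # placeholder VPC ID
    Validation="ENABLE",
)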

DNSSEC Considerations: Your Hosted Zones

As a managed service, Route53 normally handles all maintenance and operational activities for you. Serving your records with DNSSEC at least gives your customers the opportunity to validate responses (as they enable validation on their side).

I’d suggest that this is a good thing. However, with the caveat around CloudFront ALIAS records right now, I am choosing not to rush to production hosted zones today, but staying on my non-production and non-mission critical zones.

DNSSEC enabled on a hosted zone

I have always said that your non-production environments should be a leading indicator of the security that will reach production (at some stage), and this approach aligns with that.

The long term impact of Route53 DNSSEC

Route53 is a strategic service that frees customers from needing their own allocated fixed address space and from running their own DNS servers (many of which never receive enough security maintenance and updates). With DNSSEC support, the barriers to adoption are reduced, and indeed, I feel we’ll see an uptick in DNSSEC deployment worldwide because of this capability coming to Route53.

Other Approaches

An alternative security mechanism being tested now is DNS over HTTPS, or DoH. This encrypts DNS queries, hiding the names being requested from the local network provider (which still sees the IP addresses being accessed).

In corporate settings, DoH is frowned upon, as many corporate IT departments want to inspect traffic and protect staff by blocking certain content at the DNS level (e.g., blocking all lookups for betting sites) – and hiding lookups in DoH may prevent this.

In the end, a resolver somewhere knows which client looked up what address.