Put your CAA in DNS!

There are hundreds of public, trusted* certificate authorities (CAs) in the world. These CAs have had their root CA certificate published into the trust store of many of the solutions the world uses. These trust stores range from widely used web browsers (like the one you’re using now), to the various programming language runtimes, to individual operating systems.

A trust store is literally a store of certificates which are deemed trusted. While users can edit their trust store, or make their own, each comes with a set that has been selected by your software vendor. Sometimes these are manipulated in the corporate environment to include a company Certificate Authority, or to remove specific distrusted authorities.
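
As a rough sketch on a Debian or Ubuntu system (paths and packaging vary by distribution and vendor), you can peek at the bundled trust store that ships with the ca-certificates package:

ls /etc/ssl/certs/ | head                                        # one entry per trusted root
grep -c 'BEGIN CERTIFICATE' /etc/ssl/certs/ca-certificates.crt   # count the bundled roots

The count typically runs well into the hundreds, which is the point: every one of those roots can vouch for any site you visit.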

Over time, some CAs fall into disrepute, and eventually software distributors will issue updates that remove a rogue CA. Of course, issuing an update that the public never applies doesn’t change much in the short term (tip: patch your environments, including the trust store).

Like all X.509 certificates, CA root certificates have an expiry, typically after a very long 20+ year validity period, and before expiry, much effort is put into creating a new root certificate and having it issued, distributed, and updated in deployed applications.
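
If you’re curious, OpenSSL will show a root’s validity window; as a sketch (this file path comes from a typical Debian/Ubuntu ca-certificates install, and will vary on other systems):

openssl x509 -noout -subject -dates -in /etc/ssl/certs/Amazon_Root_CA_1.pem

The notBefore and notAfter dates it prints are typically decades apart for a root.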

Legitimate public certificate authorities are required to undertake some mandatory checks when they issue certificates to their customers. These checks are called the Baseline Requirements, and are governed by the CA/Browser Forum industry body. CAs that are found to be flouting the Baseline Requirements are expelled from the CA/Browser Forum, and subsequently, most software distributors then remove them from their products (sometimes retrospectively via patches, as mentioned above).

Being a Certificate Authority has been a lucrative business over the years. In the early days, it was enough to make Mark Shuttleworth a tidy packet with Thawte – enough for him to become a very early space tourist, and then start Canonical. With a trusted CA root certificate widely adopted, a CA can then charge whatever it wishes for the certificates it issues.

What’s important to note, though, is that the certificate in use has no bearing on the strength of encryption or the negotiation protocol being used when a client connects to an HTTPS service. The only thing a CA-issued certificate gives you is a reasonably strong validation that the controller of the DNS name you’re connecting to has validated themselves through the CA’s vetting process.

It doesn’t tell you that the other end of your connection is someone you can TRUST, but you can reasonably TRUST that a given Certificate Authority thinks the entity at the other end of your connection is the controller of that DNS name (in the case of Domain Validated (DV) certificates). Why only reasonably? Well, what if the controller of the web site you’re trying to talk to accidentally published their PRIVATE key somewhere? A scammer could then set up a site that looks legitimate, poison some DNS, or control a network segment your traffic routes over…

When a CA issues a certificate, it adds a digital signature (typically RSA based) over the originating certificate request. Within the certificate data are the various fields about the subject of the certificate, as well as information about who the issuer is, including a fingerprint (hash) of the issuer’s public certificate.

Previously, CAs would sign certificates using an MD5 digest. MD5 was replaced with SHA1, and around 2014, SHA1 was replaced with SHA2-256.
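
You can check what a live site is using today; as a sketch with OpenSSL (using advara.com, the same site inspected later in this post):

echo | openssl s_client -connect www.advara.com:443 -servername www.advara.com 2>/dev/null \
  | openssl x509 -noout -text | grep -E 'Issuer:|Signature Algorithm'

On a current certificate you’d expect to see something like sha256WithRSAEncryption, rather than the retired md5 or sha1 variants.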

This signature algorithm is effectively the strength of the trust between the issuing CA and the subject’s certificate that you see on a web site. RSA gets very slow as key sizes get larger; today’s services typically use RSA at 2048 bits, which is currently strong enough to be deemed secure, and fast enough not to be a major performance overhead; make that 4096 bits and it’s another story.
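
OpenSSL’s built-in benchmark makes the difference easy to see for yourself; results vary with hardware and OpenSSL version, but signing throughput typically drops by well over a factor of five going from 2048 to 4096 bits:

openssl speed rsa2048 rsa4096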

Not only is the RSA algorithm being replaced, but eventually SHA2-256 will be as well. The replacement for RSA is likely to be Elliptic Curve based, and SHA2-256 will either grow longer (SHA2-384), move to a new algorithm family (SHA3-256), or be replaced by a completely new method.

But back to the hundreds of CAs: you probably only use a small number in your organisation. Let’s Encrypt, Amazon, Google, Verisign, GlobalTrust, etc. However, all CAs are seen as equally trusted by clients when a validly signed certificate is presented. So what can you do to prevent other CAs from issuing certificates in your (DNS) name?

The answer is simple: the DNS CAA record: Certification Authority Authorisation. It’s a list that says which CA(s) are allowed to issue certificates for your domain. It’s a record in DNS that is looked up by CAs just before they’re about to issue a certificate: if their issuer identifier is not found, they don’t issue.

As it is so rarely queried, you can set this DNS record up with an extremely low TTL (say, 60 seconds). If you get the record wrong, or you forget to whitelist a new CA you’re moving to, just update the record.
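
As a sketch, here is what a zone for a hypothetical example.com might contain if it permits only Amazon’s and Let’s Encrypt’s CAs, blocks wildcard issuance, and nominates a reporting address (all values illustrative):

example.com.  60  IN  CAA  0 issue "amazon.com"
example.com.  60  IN  CAA  0 issue "letsencrypt.org"
example.com.  60  IN  CAA  0 issuewild ";"
example.com.  60  IN  CAA  0 iodef "mailto:security@example.com"

The ";" value on issuewild explicitly denies all wildcard issuance, and iodef tells conforming CAs where to report requests that conflict with your policy.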

DNS isn’t perfect, but this slight incremental step may help restrict public certificate issuance for your domain to only the CAs you’ve made a decision to trust, and that your customers can trust as well.

DNS CAA was first defined in 2010, and became an IETF RFC (RFC 6844) in 2013. I worked with the AWS Route 53 team to have the record type supported in 2015. You can inspect CAA records using the dig command:

dig caa advara.com
; <<>> DiG 9.10.6 <<>> caa advara.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5546
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;advara.com. IN CAA
;; ANSWER SECTION:
advara.com. 60 IN CAA 0 issue "amazon.com"

Here you can see that advara.com has permitted AWS’s Certificate Manager, via its well-known issue value of “amazon.com” (and it’s a 60-second TTL).

Various online services will also let you inspect this, including SSLLabs.com, Hardenize.com, and more.

Putting a CAA record in DNS typically costs nothing; it’s rarely looked up and can easily be changed. It protects you from someone tricking another CA into issuing certificates that CA thinks are legitimate, and this has been seen several times (think how valuable a google.com certificate would be to intercept (MITM) mobile phones, searches, Gmail, etc). While mis-issuance like this may lead to CA/Browser Forum expulsion and eventual client updates to distrust the offending CA, it’s far easier to prevent the issuance in the first place with this simple record.

Of course, DNSSEC would be nice too…

Project & Support versus DevOps and Service teams

The funding model for the majority of the world’s IT projects is fundamentally flawed, and the fallout, over time, is broken systems, lacking security, and lingering legacy systems.

It’s pretty easy to see that digital systems are the lifeblood of most organisations today: banking, stock inventory and tracking, HR systems. The majority of these critical operations have been deployed as “projects”, which then “migrate to support”. And it’s that “migrate to support” that is the problem.

Support roles are typically oversubscribed, and under-empowered. It’s a cost-saving exercise to minimise the overhead: the more expensive development resources move to a fresh project, while more commodity problem-solving labour comes along to triage operational run-time issues. However, that support function has no history in the design and architecture, and often either has no access to the development and test environments to continue doing managed change, or is not empowered to do so. The end result is that Support teams use the deployed production features (eg: manually adding a user to a standalone system) instead of driving incremental improvements (eg: automatically adding a user based on the HR system being updated).

Contrast this with a DevOps team of dynamic size over time. The team that builds, tests, deploys and automates this more complete lifecycle, and stays with the critical line-of-business system, becomes a Service Team. Any changes they need to perform are not applied locally in production, as is often the case with “Support teams”, but in the Development environment. These changes should then pass automated testing and feedback loops before being promoted to a higher environment. Sounds great, yeah?

Unfortunately, economic realities are the constraint here. Both the customer and the consultancy are trying to minimise cost, not maximise capability. And navigating a procurement and legal team is something organisations want to do as rarely as possible, not on a continuous basis.

Contrast that with a Service team: variable in size over time, and containing different capabilities over time. The cost for this team varies over time, based upon the required skill set. The team’s objective is to make the best Service they can, driven by metrics: Availability, Latency and Accuracy, while meeting strict security requirements.

From the Service team’s perspective, they obviously need remuneration for their time, but they also want to take pride in their work and feel a sense of achievement.

A Support Team is not a Service Team, as it doesn’t have the full Software Lifecycle Management and/or Data Lifecycle Management capability. A Service Team should never be one person; that’s one step away from being zero people. A Service Team may look after more than one service, but not so many that it loses crystal-clear focus on any of them.

AWS Partner Ambassador Meetup #1, Seattle, August 2019

The inaugural global meetup of the top partner engineers from around the world.

Another long overdue post from three weeks ago…

On the heels of the AWS Canberra Public Sector Summit 2019, and after some 24 hours at home with my family, I joined my fellow AWS Partner Ambassador at Modis – Steve Kinsman – and we started to wend our way across three flights to get to Seattle, departing a few minutes after midnight on Friday night/Saturday morning.

That guy behind me better not kick my seat! 😉

AWS Public Sector Summit, Canberra 2019

It’s been a reasonably busy few weeks for me; here’s a recap of the AWS Public Sector Summit in Canberra…

On Monday 19th July, I went to Canberra for the AWS Public Sector Summit, held at the National Convention Centre, with some 1,200 people in attendance this time. I recall the first AWS Canberra Public Sector Summit in 2013, when a few hundred of us went to the Realm Hotel; the NCC is now starting to look reasonably full.

Mikal & James gurning, and an awesome photo-bomb

It’s always nice running into old friends; this time, long-time Linux.conf.au and Australian open source community personality Michael Still. Michael ran LCA 2013 in Canberra, when Sir Tim Berners-Lee was one of the keynote speakers (alongside Bunnie Huang, Bdale Garbee, and Radia Perlman). I helped the video team that year – and recall chatting with Robert Llewellyn…

AWS’s Matt Fitzgerald, formerly from Perth.

Later, I ran into Matt Fitzgerald, whom I first met when I worked for AWS – at that time (circa 2013), he was the only other person from Perth at AWS in Seattle.

Of course, there were also multiple current and former colleagues, other AWS Ambassadors from the region, and other folk in the cloud space with other vendors.

Pia Andrews & James.

And then, in the foyer while chatting, I suddenly found Pia, well known for her work inside the halls of government from Australia to New Zealand, but who, 17 years ago, was helping establish the fledgling Linux.conf.au conference and helping the Australian open source community find its platform and voice.

Of course, it’s not all about catching up with friends.

A crowd in the NCC’s main auditorium, 2019

The masses packed into the main theatre to hear the set of lighthouse case studies, new capabilities, and opportunities that can be reached on the AWS platform.

Iain Rouse, AWS Public Sector Country Manager 2019: A/NZ PS Partners

This time, the baton of AWS PS Country Manager and MC responsibilities had passed to Iain Rouse, formerly of Technology One. Modis has been an AWS partner since 2013 (under its former brand, Ajilon), with many Public Sector customers since then, so it was nice to see our logo amongst a healthy ecosystem of capability.

A/NZ PS Customers

Even nicer than seeing our logo is seeing our customers and those I have worked with. At the first PS Summit in 2013, I asked ICRAR to attend, and they did; I used to work for UWA (as chief webmaster in the last millennium); when I was at AWS I worked with CFS SA and Moodle; and of course there’s Landgate, which has now been running on the AWS Cloud for over four years.

NZ Conservation’s CIO Mike Edginton

New Zealand Department of Conservation CIO Mike Edginton spoke of the digital twinning they have been doing for the environments their endangered species live in, and of IoT-enabling the traps they must set for introduced species. They cover a vast area of NZ, but the collection of data, analytics and visualisation makes their management more efficient. They’ve also managed to decode Kiwi calls (the bird, not the people).


The mercurial Simon Elisha, PS Solution Architect Manager

Former colleague Simon Elisha continued with a strong positioning of the efforts that the AWS engineering teams have been deploying on resilience, multi-layered security, hardware design, physical security, and CCTV video archiving; and then moved into the customer-accessible security services for Data Protection, Identity Directory & Access, Detective Controls & Management, and Networking & Infrastructure.

S3 Block Public Access

He then dived into a customer-controlled capability for S3 (Object Storage) that surfaced at the global re:Invent conference in 2018: Block Public Access. This capability can be applied at a per-bucket level, as well as at an AWS-account-wide level (which is effective for every S3 Bucket in the account, regardless of its per-bucket settings).
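
As a sketch with the AWS CLI (the bucket name and account ID here are placeholders):

# Per-bucket:
aws s3api put-public-access-block --bucket my-example-bucket \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Account-wide:
aws s3control put-public-access-block --account-id 111111111111 \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true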

S3 has been around for many years, and has expanded from a small set of microservices to over 200 today (as disclosed at the AWS Sydney Summit 2019). By itself, it can act as a public web server for the content in a bucket; it can allow public anonymous access; it can encrypt in flight and at rest; and it offers storage tiering, life-cycle policies, logging, and much more. These days, I don’t encourage teams to serve content to the web directly from S3, but via the CloudFront global CDN (today: 189 points of presence). And with the ability for CloudFront to access S3 buckets using an Origin Access Identity, it’s possible to remove all anonymous access from S3 and enable Block Public Access; something we have done for many of our customers. This pattern ensures that access to the data from the Internet comes via an endpoint set to my desired TLS policy, with a custom-named TLS certificate, and, as a bonus, I can set (inject) my specific security headers on the content being served. For example, check out securityheaders.com (hi Scott) and test www.advara.com.
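
A quick way to see such headers for yourself is a simple request with curl (which headers appear depends entirely on the site’s configuration):

curl -sI https://www.advara.com | grep -iE 'strict-transport-security|content-security-policy|x-frame-options|x-content-type-options'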

Simon also spoke about the technology stack (not quite the full OSI stack, for those that recall):

  • Physical Layer: secure facilities with optical encryption using AES 256
  • Data Link Layer: MACsec (IEEE 802.1AE)
  • Network Layer: VPN, Peering
  • Transport Layer: s2n, NLB-TLS, ALB, CloudFront and ACM
  • Application Layer: Crypto SDK, Server Side Encryption

After a quick tour of Security Hub, and then Iain speaking about some of the training and reskilling initiatives, it was time for another customer.

Dr Paul Scully-Power, and his Little Ripper beach patrol drones

This was the second time I had seen this, with the drone having been shown at the AWS Commercial Summit in Sydney in July. However, Dr Scully-Power’s presentation was, to be honest, very powerful. Watch the video and hear for yourself about rescuing kids from rips, and spotting sharks, crocs and more.

The AWS DeepRacer (a reinforcement-learning autonomous vehicle) was set up and competing again, part of the effort to lower the barrier of entry for customers into machine learning. The exhibitor hall continued to have technology and consulting partners showcasing their achievements and capabilities, as well as the various AWS customer-facing teams such as the certification team, the concierge team, and Solution Architects (now split further by services and specialisations).

The break-out sessions (actually held on the Tuesday) included a track dedicated to Healthcare, a track for High Performance Compute, and more: presentations for the fledgling Australian space community (see Ground Station), decoupling workloads, connectivity, etc.

Once again, a group of local school children were given the opportunity to attend and see the innovation being discussed, with a stream of activities aimed at helping show them career pathways.

Of course, specific break-out streams hosted media analyst briefings, executive briefings, and Public Sector partner forums and workshops.


Mark Smith, from Modis at Landgate (and long-term volunteer fire-fighter, as it happens), and James at the Modis Canberra office.

I also had the opportunity to stop by the Modis Canberra office, where Mark Smith (with whom I have worked for nearly half a decade) and I spoke at length to the local team on the challenges and successes of our engagements with customers, delivering advanced, managed Cloud services and solutions.

That night, I returned to Perth for a day at work and a few hours with my family… before heading off on the next adventure: the AWS Ambassador Global meetup in Seattle (next post).