I’ve been tracking my blog posts and other “contributions” to the AWS developer community since 2017 when the program was originally called the AWS Cloud Warrior Program. This morphed into the Partner Ambassador program, for the top engineering talent in the partner community, and then became a global program.
You can find the ambassadors here. At the time of writing (Nov 1 2021), there are 227 people listed: 114 in APAC, 43 in Europe, 9 in LATAM, and 46 in North America.
I submitted some 28 items to the program in 2021 (through mid-October), from Blog Posts to Case Studies, Open Source work, Event Hosting, and Certification Subject Matter Expert contributions.
This was enough to land me in the #2 position for 2021, as shared during the online Global Ambassador Summit recently – shown in this slide:
And while I sit here with 9 (of 11) AWS certifications (more are set to launch during re:Invent), I don’t yet hold the coveted Gold Jacket awarded for holding all available certs (which looks as loud and proud as you can imagine; I think I saw something similar in The Hangover movie).
Arjen and Ian are both amazing engineers; I am honoured to be considered amongst them in this program.
Sharing ideas and solutions has been core to my work in the technology field since my university days, when I first discovered open source and then became a Debian Linux Developer. Indeed, as developers (and Sys Admins, these days rebranded as DevOps Engineers) become more senior, sharing and mentoring becomes more of the job.
Occasionally I get feedback from people that my posts have helped them save time and find a solution quickly, or avoid problems. Often I find myself reading back my own posts years later, thanking younger-me for putting some notes online.
But as with most in this industry, we stand on the shoulders of others; the only right thing to do is to support those coming after us.
I’ve been discussing the IPv6 transition with our customers more recently; for over 3 years we’ve been dual-stack IPv4 and IPv6 for public-facing AWS-Cloud-based solutions and services for our customers.
“So what?“, you’re thinking?
It’s worth noting that, from Google’s numbers, global IPv6 adoption is now approaching 36%, while at home in Australia it sits around 27%, helped by TelCo carriers like Telstra enabling IPv6 for their mobile subscribers, and advanced ISPs like Aussie Broadband and Internode making IPv6 trivial to enable.
Google IPv6 Adoption, as of 12/Oct/2021
I first had an IPv6 tunnel established to Hurricane Electric in 1999 when I worked for The University of Western Australia. I championed the adoption of IPv6 as a first-class citizen in the cloud when I worked at Amazon Web Services as a Solutions Architect, and these days a large majority of AWS public-facing services already support dual-stack approaches, with more on the way.
As the next billion people come online, the exhaustion of the IPv4 address space is a limiting factor. The temporary value of IPv4 address space being reallocated (“sold”) between assignees will presumably peak when a majority of clients (people) and the services they access are on IPv6.
I have been advising a government body that had two “Class B” sized IPv4 allocations. Each of these is a “/16” netblock (65,536 addresses); they had only ever used a handful of /24 ranges from within their first allocation.
Most services they use, both for staff and for public-facing services, now run on the cloud, from cloud-provider address space. They’re unlikely to need all of the address blocks they currently have from the first /16 block, let alone the second.
This netblock has a current value of a couple of million dollars (AUD).
It’s likely that many public sector agencies hold IPv4 netblocks they are unlikely to ever use, and could benefit from reallocating them to service providers desperate for address space of their own to host solutions from.
Well, desperate until most clients are using IPv6.
I’d urge any public sector organisation to review their plans for using their address space, and if they have large unused, contiguous address space, consider reallocating that. The funds raised can then help with further modernisation of workloads – including those workloads to move to IPv6 addressing.
For any managed service providers, I would urge you to “dual-stack” all public-facing Internet services. You should continue to use strong encryption in flight, modern TLS protocols, and strong authentication, regardless of the network transport protocol version.
If you are using AWS CloudFront as a CDN in front of your origin service, then enable IPv6 in the CloudFront configuration, and then publish the corresponding AAAA DNS record just as you have to the A DNS record. Similar works if using CloudFlare, Akamai, Fastly or others.
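As a sketch, for a hypothetical example.com fronted by a CloudFront distribution (the d111111abcdef8 distribution hostname is a placeholder):

```
; Hypothetical DNS records for a CloudFront-fronted site.
; With Route 53, create a pair of ALIAS records to the distribution:
;   www.example.com.  A     ALIAS d111111abcdef8.cloudfront.net.
;   www.example.com.  AAAA  ALIAS d111111abcdef8.cloudfront.net.
; In a plain zone file, a single CNAME covers both A and AAAA lookups:
www.example.com.  300  IN  CNAME  d111111abcdef8.cloudfront.net.
```

Once IPv6 is enabled on the distribution, the same CloudFront hostname answers both A and AAAA queries; no origin changes are needed.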
For those who use managed service providers for their corporate business networking, ask why your work Internet connection is not dual-stacked already. It’s typically a configuration question, and rarely has any actual cost associated with it. If you have a corporate proxy service and it is dual-stacked, the clients on your internal corporate network already get some of the benefit of being able to talk to IPv6 services.
If you have DNS services, check not only that they can serve IPv6 (AAAA) records, but that they are themselves reachable over IPv6. Services like AWS Route 53 have done this for years (see my earlier point about getting IPv6 as a first-class citizen within AWS).
While you’re looking at DNS, have a look at creating a simple CAA record, to list the Certificate Authorities you obtain certs from.
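For example (hypothetical example.com domain; list only the Certificate Authorities you actually use):

```
example.com.  3600  IN  CAA  0 issue "amazon.com"
example.com.  3600  IN  CAA  0 issue "letsencrypt.org"
example.com.  3600  IN  CAA  0 iodef "mailto:security@example.com"
```

The issue entries permit ACM and Let’s Encrypt to issue certificates for the domain; the iodef entry tells other CAs where to report rejected requests.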
November 2021: Note there is a new way to do this natively within CloudFront, and it won’t cost you a Lambda@Edge invocation.
For a long time, I’ve been using Lambda@Edge to inject various HTTP security-related headers to help browsers improve the security model of the content that they fetch and render.
I’ve been doing this as I have been using S3 as the origin (accessed via a CloudFront Origin Access Identity); S3 itself cannot add/inject many of the common security headers when it serves objects back.
These Functions execute when the origin returns the content to the CloudFront regional edge; the returned content then gets cached with the injected headers included.
The end result is getting a good rating on securityheaders.com, hardenize.com, and other public security evaluation services.
An alternative in the Lambda@Edge execution lifecycle is to trigger on Viewer Response, in which case the cached version doesn’t have the headers injected, and every viewer request triggers a code execution. Clearly, if every viewer gets the same set of headers, there’s no need to execute on each viewer response and pay for the additional Lambda@Edge executions.
Now there’s a new option – CloudFront Functions (AWS blog post). Written entirely in JavaScript, these execute only at Viewer Request or Viewer Response; there is no Origin Request or Origin Response option. They also execute at the CloudFront edge locations, not the Regional Edge caches.
This example injects a number of headers, and would need only minor customisation of the Content Security Policy (and possibly the Permissions Policy) to work for most sites:
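A minimal sketch of such a viewer-response CloudFront Function (the header values shown are illustrative, not prescriptive, and should be tailored per site):

```javascript
// Viewer-response CloudFront Function that injects common security
// headers before the response is returned to the browser.
// Header values below are illustrative examples only.
function handler(event) {
    var response = event.response;
    var headers = response.headers;

    // CloudFront Functions use lowercase header names with a {value: ...} shape.
    headers['strict-transport-security'] = { value: 'max-age=63072000; includeSubDomains; preload' };
    headers['content-security-policy'] = { value: "default-src 'self'" };
    headers['x-content-type-options'] = { value: 'nosniff' };
    headers['x-frame-options'] = { value: 'DENY' };
    headers['referrer-policy'] = { value: 'strict-origin-when-cross-origin' };
    headers['permissions-policy'] = { value: 'geolocation=(), camera=(), microphone=()' };

    return response;
}
```

Associate the function with the distribution’s cache behaviour as a viewer-response trigger, and every response carries the headers – the same end result as the Lambda@Edge approach.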
You may want to evaluate the cost of both Lambda@Edge and CloudFront Functions. After the first year, Functions is charged at US$0.10 per million invocations. As an equivalent, Lambda@Edge for a similar Node.js function that executes in one millisecond with 128 MB of memory would be US$0.2021 per million requests.
However, for a busy website, you may want to look at the efficiency differences between Viewer Response execution for CloudFront Functions and Origin Response plus caching for Lambda@Edge, multiplied by the number of Regional Edge Cache locations (13) and the cache retention rate.
If you have only a few unique URLs, and content that can be cached for a long period, and large volumes of requests, then Lambda@Edge may result in near free execution.
| | Lambda@Edge | CloudFront Functions |
|---|---|---|
| Unique URLs | 100 | 100 |
| HTTP viewer requests | 10M/month | 10M/month |
| Execution time | 1ms | N/A |
| Number of Regional Edges | 13 | N/A |
| Memory per execution | 128MB | N/A |
| Trigger point | Origin Response | Viewer Response |
| Number of code invocations | 1,300 (once per Regional Edge per unique URL, possibly cached for a month, depending on Edge cache expiry) | 10M/month (once per viewer response) |
| Cost uplift of CloudFront Functions vs Lambda@Edge | – | 3,806 times more expensive |
If we were using Lambda@Edge on Viewer Response, and not caching the object with headers injected, then CloudFront Functions would be cheaper; likewise if the content being sent were dynamic from the origin and not suitable for caching, in which case we wouldn’t get the efficiency savings of fewer executions.
Even if we are using Origin Response with Lambda@Edge, we can’t determine the cache expiry of the Lambda@Edge cached responses (we can only influence it); the cached objects could expire and re-execute every day, so the Lambda@Edge costs could go up 30x (which would make CloudFront Functions only 126 times more expensive). YMMV. TIMTOWTDI.
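The arithmetic behind those figures can be sketched as follows, using the prices quoted above and the request, URL, and Regional Edge counts from the scenario:

```javascript
// Back-of-envelope reproduction of the cost comparison above.
const PRICE_CF_PER_MILLION = 0.10;    // CloudFront Functions, USD per 1M invocations
const PRICE_LAE_PER_MILLION = 0.2021; // Lambda@Edge equivalent (1 ms, 128 MB), USD per 1M

const viewerRequests = 10e6; // per month
const uniqueUrls = 100;
const regionalEdges = 13;

// CloudFront Functions fire on every viewer response.
const cfCost = (viewerRequests / 1e6) * PRICE_CF_PER_MILLION; // USD 1.00/month

// Lambda@Edge on Origin Response runs once per unique URL per Regional
// Edge cache, assuming month-long cache retention.
const laeInvocations = uniqueUrls * regionalEdges; // 1,300
const laeCost = (laeInvocations / 1e6) * PRICE_LAE_PER_MILLION;

console.log(Math.floor(cfCost / laeCost));        // 3806 - the uplift in the table
console.log(Math.floor(cfCost / (laeCost * 30))); // 126  - if caches expire daily
```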
As Hanno Böck noted in the recent Bulletproof TLS Newsletter, FTP Support in Firefox 90 has been removed. We’ve seen similar messaging from most major browser vendors over the last few years.
I’m going to make a bold prediction, and say that in 10 years’ time we’ll be seeing the removal of (plain text) HTTP support as well. Regardless of internal or external networks (an outdated distinction aligned to the “crunchy shell” model of network security), the move to stronger security for all communications, backed by free TLS Certificate Authorities (such as Let’s Encrypt), means we should be doing end-to-end encryption for everything the common web browser fetches.
For some time, Firefox has had an HTTPS-only mode, with warnings when services try to drop back to unencrypted access. I’ve typically found this warning pops up when various link-shortening services are chained together, and I’m grateful for the awareness that a jump in that chain is poorly implemented.
In the meantime, the distribution of files using FTP needs to stop. If you run an FTP service then you need to think about transitioning to something that permits access using HTTPS as the transport protocol.
Another sunset in the circle of life of a protocol.
I recently posted about using AWS to provide very cost-effective, scalable, secure static websites. In this post, here’s a good reason you should do this now: to publish a new website on your domain that serves one simple file.
Email on the Internet has used SMTP for transferring email between mail transport agents (MTAs) since 1982, on TCP port 25. The initial implementation offered only unencrypted transport of plain text messages.
It’s worth noting that people, as clients of the system, generally send their email via their corporate mail server, not directly from their workstation to the recipient. The software on your desktop or phone is a Mail User Agent (MUA); your MUA (client) transfers your outbound message to your MTA (mail server), which then sends the message using SMTP to your recipient’s MTA; and when the recipient is ready, they sign in and read their mail with their own MUA.
The focus of this article is that middle hop above – MTA to MTA, across the untrusted Internet.
SMTPS added encryption in 1997, wrapping SMTP in a TLS layer (similar to how HTTPS is HTTP in a TLS wrapper), with certificates issued by Certificate Authorities that many are familiar with; this commonly uses TCP port 465. For MTA-to-MTA transfer, the STARTTLS extension similarly upgrades a plain-text port 25 connection to an encrypted one. And while modern MTAs support both encrypted and unencrypted transfer, it’s the order and the fail-over behaviour that’s important to note.
Modern mail servers will generally try to do an encrypted mail transfer to the target MTA, but they will seamlessly fall back to the original unencrypted SMTP if encryption is not available. This step is invisible to the actual person who sent the message – they’ve wandered off with their MUA, leaving the mail server the job of forwarding the message.
Sending an email, from left user, to right, via two MTA servers.
Now imagine an unscrupulous network provider somewhere in the path between the two mail servers who simply strips the offer of encryption – dropping the encrypted traffic, or removing the STARTTLS capability from the conversation. The end result is that the sending mail server assumes the destination does not support encrypted transfer, and falls back to plain-text SMTP. That same attacker then reads your email. Easy!
If only there was a way the recipient could express a preference to not have email fall back to unencrypted SMTP for its inbound messages.
Indeed, there’s a similar situation with web sites: how do we express that a web site should only be served over HTTPS and never downgraded to HTTP? The answer there is the HTTP Strict Transport Security (HSTS) header, which tells web browsers not to go back to unencrypted web traffic.
Well, mail systems have a similar concept, called SMTP MTA Strict Transport Security, or MTA-STS, defined in RFC 8461.
MTA-STS has a policy document, which expresses the owner’s preference for how remote mail servers should handle connections to the mail server. It’s a simple text file, published at a well-known location on a domain. Remote mail servers may retrieve this file and cache it for extended periods (such as a year).
In addition, there is a DNS text (TXT) record named _mta-sts.$yourdomain. The value of this for me is “v=STSv1; id=2019042901”, where the id effectively acts as a timestamp of when the policy document was last set. When I update the policy text file on the MTA-STS website, I also update the DNS id, and remote mail servers should then refresh their cached copy.
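For a hypothetical example.com domain, the record would look like:

```
_mta-sts.example.com.  3600  IN  TXT  "v=STSv1; id=2021110100"
```

Bump the id (a date-stamp works well) whenever the policy file changes.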
The well-known location is on a specific hostname in your domain – a new website, if you will – that serves only this one file. The site is mta-sts.$yourdomain, and the path and filename are “.well-known/mta-sts.txt”. The document must be served from an HTTPS site, with a valid HTTPS certificate.
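A minimal policy document for the same hypothetical domain (the mode and mail host are placeholders to adapt; start with mode: testing before moving to enforce):

```
version: STSv1
mode: enforce
mx: mail.example.com
max_age: 604800
```

The mx lines must name your real inbound mail hosts, and max_age (in seconds; a week here) controls how long remote servers cache the policy.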
So an excellent place to host this MTA-STS static website, with a valid TLS certificate, that is extremely cost-effective (and possibly even cost you nothing) is the AWS Serverless approach previously posted.
You can also check for this with Hardenize.com: if you get a grey box next to MTA-STS for your domain, then you don’t have it set up.
Of course, not all MTAs out there may support MTA-STS, but for those that do, they can stop sending plain text email. Even still, don’t send sensitive information via email, like passwords or credit card information.
The MTA trying to send the message may cache the STS policy for a while (the max_age, in seconds, indicated in the file), so as long as TCP 443 is reachable at some point, with a valid certificate from a trusted public certificate authority, that policy can persist even if the HTTPS MTA-STS site is unavailable later (e.g., due to a network change).
It’s worth noting that your actual email server can stay exactly where it is – on-site, or mass-hosted elsewhere; we’re just talking about the MTA-STS website and policy document being on a very simple, static web site in Amazon S3 and CloudFront.