CodeCommit: Mono Repo, Multiple Pipelines – part I: repackaging the repo

As an experiment, I have a CodeCommit repository that has a combination of CloudFormation Templates, and some static web content, checked in to two separate prefixes or folders: /Templates/ and /Website/.

What I am trying to do is, upon any commit to the repo, determine whether the Website prefix needs an update, or whether the Templates need to trigger a CloudFormation stack update.

Starting with the most basic piece, I want the web content to go via CodePipeline and unpack into an S3 bucket, which a CloudFront distribution uses as its origin (with an Origin Access Identity already in place).

By default, the CodePipeline S3 deploy action unpacks the entire repo into S3, but I only want a particular sub-folder. So I've implemented a "repackage" step as a Lambda function in the pipeline, which grabs the original artifact the pipeline has, unpacks it, and creates a new artifact containing just the folder /Website/ and below. It turned out to be around 50 lines of Python:

import json
import os
import zipfile

import boto3

def lambda_handler(event, context):
    job = event["CodePipeline.job"]
    job_data = job["data"]
    input_location = job_data["inputArtifacts"][0]["location"]
    output_location = job_data["outputArtifacts"][0]["location"]
    codepipeline = boto3.client('codepipeline')

    if input_location["type"] != "S3":
        # Tell CodePipeline the job failed, rather than leaving it to time out
        codepipeline.put_job_failure_result(
            jobId=job["id"],
            failureDetails={'type': 'JobFailed',
                            'message': 'Input artifact is not on S3'})
        return {'statusCode': 500, 'body': json.dumps('Not on S3')}

    # Use the short-lived credentials CodePipeline supplies for artifact access
    creds = job_data["artifactCredentials"]
    s3client = boto3.client('s3',
        aws_access_key_id=creds["accessKeyId"],
        aws_secret_access_key=creds["secretAccessKey"],
        aws_session_token=creds["sessionToken"])

    # Download the input artifact (the whole repo, zipped) and unpack it to /tmp
    dl_filename = input_location["s3Location"]["objectKey"].split('/')[-1]
    with open("/tmp/" + dl_filename, 'wb') as data:
        s3client.download_fileobj(
            input_location["s3Location"]["bucketName"],
            input_location["s3Location"]["objectKey"],
            data)
    with zipfile.ZipFile("/tmp/" + dl_filename, 'r') as archive:
        archive.extractall('/tmp/')

    # Re-zip only the /Website/ subtree as the output artifact
    ul_filename = output_location["s3Location"]["objectKey"].split('/')[-1]
    os.chdir('/tmp/Website/')
    with zipfile.ZipFile("/tmp/" + ul_filename, 'w', zipfile.ZIP_DEFLATED) as zipf:
        for root, dirs, files in os.walk('.'):
            for file in files:
                zipf.write(os.path.join(root, file))

    # WARNING: the CodePipeline artifact bucket may have a default BucketPolicy
    # requiring an explicit KMS key. Remove that SSE requirement and turn on
    # default encryption for the bucket.
    s3client.upload_file(
        "/tmp/" + ul_filename,
        output_location["s3Location"]["bucketName"],
        output_location["s3Location"]["objectKey"],
        ExtraArgs={"ServerSideEncryption": "AES256"})

    codepipeline.put_job_success_result(jobId=job["id"])
    return {
        'statusCode': 200,
        'body': json.dumps('Done repacking.')
    }

This runs reasonably quickly, and means I am not unpacking the entire CodeCommit repo into my CloudFront distribution.
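
For context, here is a minimal sketch of how such a repackage step can appear as an Invoke action in the pipeline definition (shown as the stage structure passed to boto3's create_pipeline; the function and artifact names are placeholders, not my actual pipeline):

{
    'name': 'Repackage',
    'actions': [{
        'name': 'RepackageWebsite',
        'actionTypeId': {
            'category': 'Invoke',   # a Lambda invoke action
            'owner': 'AWS',
            'provider': 'Lambda',
            'version': '1',
        },
        'configuration': {'FunctionName': 'repackage-website'},  # placeholder
        'inputArtifacts': [{'name': 'SourceOutput'}],    # full repo from CodeCommit
        'outputArtifacts': [{'name': 'WebsiteOutput'}],  # just /Website/, for the S3 deploy action
    }],
}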

AWS Ambassador Program 2021: #2 in ANZ!

I’ve been tracking my blog posts and other “contributions” to the AWS developer community since 2017 when the program was originally called the AWS Cloud Warrior Program. This morphed into the Partner Ambassador program, for the top engineering talent in the partner community, and then became a global program.

You can find the ambassadors here. At the time of writing (1 November 2021), there are 227 people listed: 114 in APAC, 43 in Europe, 9 in LATAM, and 46 in North America.

I submitted some 28 items to the program in 2021 (up to mid-October), ranging from blog posts to case studies, open-source work, event hosting, and Certification Subject Matter Expert contributions.

This was enough to land me in the #2 position for 2021, as shared during the online Global Ambassador Summit recently – shown in this slide:

And while I sit here with 9 (of 11) AWS certifications (more are set to launch during re:Invent), I don't yet hold the coveted Gold Jacket for holding all available certs (which looks as loud and proud as you can imagine; I think I saw something similar in The Hangover movie).

Arjen and Ian are both amazing engineers; I am honoured to be considered amongst them in this program.

Sharing ideas and solutions has been core to my work in the technology field since I was at university, where I first discovered open source and then became a Debian Linux Developer. Indeed, as developers (and sysadmins, these days dubbed DevOps Engineers) become more senior, sharing and mentoring becomes a larger part of the job.

Occasionally I get feedback from people that my posts have helped them save time and find a solution quickly, or avoid problems. Often I find myself reading back my own posts years later, thanking younger-me for putting some notes online.

But as with most in this industry, we stand on the shoulders of others; the only right thing to do is to support those coming after us.

Thoughts on the IPv6 Transition

I've been discussing the IPv6 transition with our customers more frequently of late; for over three years we've been running dual-stack IPv4 and IPv6 for the public-facing, AWS-Cloud-based solutions and services we deliver to customers.

"So what?", you may be thinking.

It's worth noting that, per Google's numbers, global IPv6 adoption is now approaching 36%, while at home in Australia it is around 27%, helped by telcos like Telstra enabling IPv6 for their mobile phone subscribers, and by forward-looking ISPs like Aussie Broadband and Internode making IPv6 trivial to enable.

Google IPv6 Adoption, as of 12/Oct/2021

I first had an IPv6 tunnel established to Hurricane Electric in 1999 when I worked for The University of Western Australia. I championed the adoption of IPv6 as a first-class citizen in the cloud when I worked at Amazon Web Services as a Solution Architect, and these days a large majority of AWS public-facing services already support dual-stack approaches, with more on the way.

As the next billion people come online, the exhaustion of the IPv4 address space is a limiting factor. The value of IPv4 address space being reallocated ("sold") between assignees is temporary: it will presumably peak when the majority of clients (people) and the services they access are on IPv6.

I have been advising a government body that had two "Class B" sized IPv4 subnets allocated to them. Each of these is a /16 netblock (65,536 addresses); they had only ever used a handful of /24 ranges from within their first allocation.

Most services they use, both for staff and for public-facing services, now run on the cloud, from cloud-provider address space. They’re unlikely to need all of the address blocks they currently have from the first /16 block, let alone the second.

This netblock has a current value of a couple of million dollars (AUD).

It's likely that many public sector agencies hold IPv4 netblocks that they're unlikely ever to use, and could benefit from reallocating them to service providers desperate for address space to host solutions from.

Well, desperate until most clients are using IPv6.

I’d urge any public sector organisation to review their plans for using their address space, and if they have large unused, contiguous address space, consider reallocating that. The funds raised can then help with further modernisation of workloads – including those workloads to move to IPv6 addressing.

To any managed service providers: I would urge you to "dual-stack" all public-facing Internet services. You should continue to use strong encryption in flight, modern TLS protocols, and strong authentication, regardless of the network transport protocol version.

If you are using AWS CloudFront as a CDN in front of your origin service, then enable IPv6 in the CloudFront configuration, and publish the corresponding AAAA DNS record just as you have the A DNS record. The same approach works with Cloudflare, Akamai, Fastly, and others.
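
As a minimal sketch (the hosted zone ID, record name, and distribution domain below are placeholders), publishing the AAAA alias with boto3 mirrors the existing A record:

import boto3

route53 = boto3.client('route53')
route53.change_resource_record_sets(
    HostedZoneId='Z123EXAMPLE',  # placeholder: your hosted zone
    ChangeBatch={'Changes': [{
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'www.example.com.',
            'Type': 'AAAA',  # same target as the existing A record, but IPv6
            'AliasTarget': {
                'HostedZoneId': 'Z2FDTNDATAQYW2',  # CloudFront's fixed alias zone ID
                'DNSName': 'd111111abcdef8.cloudfront.net.',  # placeholder distribution
                'EvaluateTargetHealth': False,
            },
        },
    }]})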

For those who use managed service providers for their corporate business networking, ask why your work Internet connection is not dual-stacked already. It's typically a configuration question, and rarely has any actual cost associated with it. If you have a corporate proxy service and it is dual-stacked, the clients (on your internal corporate network) already get some of the benefit, being able to talk to IPv6 services.

If you have DNS services, check that they can not only serve IPv6 (AAAA) records, but are also themselves reachable over IPv6. Services like AWS Route53 have done this for years (see my earlier point about getting IPv6 as a first-class citizen within AWS).

While you’re looking at DNS, have a look at creating a simple CAA record, to list the Certificate Authorities you obtain certs from.
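
For example, here is a sketch of publishing a CAA record via Route 53 (the zone ID and permitted CAs are illustrative; list whichever CAs you actually obtain certificates from):

import boto3

boto3.client('route53').change_resource_record_sets(
    HostedZoneId='Z123EXAMPLE',  # placeholder: your hosted zone
    ChangeBatch={'Changes': [{
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'example.com.',
            'Type': 'CAA',
            'TTL': 3600,
            'ResourceRecords': [
                {'Value': '0 issue "amazon.com"'},       # e.g. permit ACM
                {'Value': '0 issue "letsencrypt.org"'},  # e.g. permit Let's Encrypt
            ],
        },
    }]})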

Blocking outbound HTTP from the Home Network, 2021

With the move to HTTPS as a default, I took a chance and recently blocked outbound (EGRESS) HTTP (TCP 80) traffic from my home network. I’ve got around 30 – 40 devices on the network, and I was intrigued to see what we (my family and I) would experience.

With my UniFi Dream Machine Pro, this was a reasonably easy update: Settings -> Traffic & Security -> Global Threat Management -> Firewall. I added a rule for Internet Out that dropped anything going to port 80:

HTTP reject rule for Internet Out from the UniFi Dream Machine Pro

This is a rule I had in place for two days. I checked my own laptop access for HTTPS, SSH, IMAPS and SMTPS egress, and all was fine.

What transpired over the following two days helped me identify the devices and vendors that still produce products that depend on unencrypted HTTP over the Internet to operate.

Logitech Smart Radio

We have had a streaming radio for some time; we still like to listen to London Capital Radio despite the 7-8 hour timezone offset. Within a few minutes of blocking HTTP, the audio stream stopped.

We purchased this device around 2012. On my network, it identifies as a Squeezebox running RedHat. The manufacturer discontinued it years ago and there have been no firmware updates for a long time. It only supports 802.11g wifi in the 2.4 GHz spectrum (is this WiFi 3?).

I wasn’t prepared to replace the device, so for the moment, a work-around rule to permit HTTP (by MAC address) fixes this for the short term. We’re unlikely to see any updates from Logitech anyway.

GoToWebinar

This one surprised me; I was signing in to a webinar, and the obligatory download tried to execute and stalled. It turned out the installer was doing an HTTP-based OCSP check.

Now, for web browsers, OCSP has been mostly relegated to the annals of history, replaced with OCSP Stapling.

OCSP is a network-efficient query that a client can make against a Certificate Authority's endpoint to get a signed confirmation that the certificate in question has not recently been revoked. However, in doing so, it tells the Certificate Authority which site you (your source IP address) just visited; this is an Information Disclosure vulnerability. With stapling, the website in question instead fetches these signed validations at a regular interval and passes them to the clients it is already communicating with, stapling the validation to the certificate during TLS negotiation: "Hi client, here's my certificate, and here's a recent verification that my certificate is not revoked".

Using HTTP for OCSP isn't too bad, as the response being downloaded is itself cryptographically signed. But it's still visible in the clear for all to see.
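
If you want to check whether a given site staples, here is a quick sketch driving the standard OpenSSL CLI from Python (the hostname is a placeholder; -status asks the server to include a stapled OCSP response, and the exact output text can vary by OpenSSL version):

import subprocess

host = "example.com"  # placeholder
out = subprocess.run(
    ["openssl", "s_client", "-connect", host + ":443", "-status"],
    input=b"", capture_output=True).stdout.decode(errors="replace")
# A stapling server includes an OCSP response block in the handshake output
print("stapled" if "OCSP Response Status: successful" in out else "no stapled response")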

Update 27/9: No response from GoToWebinar.

Enphase Envoy

12 hours later, my solar panel data aggregation service, Enphase Enlighten, alerted me that it was no longer receiving data from my solar panel inverters. With another rule to permit the Envoy controller to make outbound HTTP requests, the data started flowing again.

This is a genuine issue. The generation and consumption data for power in my home should not be trundling over the Internet unencrypted.

I raised a support request with Enphase (21 September 2021 at 15:04 AWST, UTC+8) asking them to contact me. I received a ticket auto-response (03164xxx) but no other contact.

Later that night I tried reaching out over Twitter to any security folk at Enphase, but after 24 hours, no response.

Update 27/9: After a few days of no reply, Enphase asked (via automated email) how my support experience was. Unbelievable!

I then tried calling them on their Australian support number as shown on their website. I ended up in a call queue which was quite amusing in itself; every 5 seconds (no, literally, every 5,000 ms) it would announce that all callers were busy, and then it would restart the same audio music clip, only to then interrupt itself… I gave up after 20 minutes in the queue.

Lastly, I have DMed the Enphase Twitter account, and await a reply.

Enphase does not have a security.txt file on their website!

Any customer data should be transmitted to an HTTPS endpoint. The firmware of the Envoy device should have the Root Certificate of the Certificate Authority used to issue that endpoint's certificate in its trust store. The device should receive updates as that Root CA expires and is replaced (this happens every 10 to 20 years per CA). The embedded firmware would also need to keep in step with improving TLS protocols over time; today TLS 1.3 would be ideal, but in future, who knows.
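
As an illustrative sketch (the endpoint and payload are placeholders, not Enphase's actual API), this is roughly what correct client behaviour looks like: verify the server certificate against a maintained trust store, and refuse legacy protocol versions:

import ssl
import urllib.request

# Verifies the endpoint certificate against the platform trust store by default
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.1 and below

urllib.request.urlopen(
    "https://telemetry.example.com/submit",    # placeholder endpoint
    data=b'{"power_w": 1234}',                 # placeholder payload (POST)
    context=ctx)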

What also struck me was the lack of IPv6 uptake by this device; not only should it pick up an IPv6 address locally, but the endpoint it submits its data to should also be dual-stack IPv4 and IPv6.

Apple iPad 14.8 -> 15.0 upgrade

This one was very unusual. Apple had released iOS 15, and our iPads were about to make the jump from 14.8. However, despite full WiFi signal, the devices kept announcing they couldn't verify the downloaded image because they couldn't connect to the WiFi!

I’m hoping Apple can address this dependency before the next iOS update.

Update 27/9: Apple responded on Twitter, concerned only that I could install the update, not whether the security issue was being investigated, resolved or understood.

Nintendo Switch

It turns out the Nintendo Switch won't join a WiFi network that doesn't have outbound HTTP access; it must be doing a call-home or connectivity validation using HTTP.

Summary

I've paused the experiment for the moment, but next month I'll resume it and find more edge cases where devices we rely upon still use unencrypted channels, exposing our data without us even knowing…

CloudFront Functions and Security Headers

November 2021: note there is now a new way to do this natively within CloudFront, and it won't cost you a Lambda@Edge invocation.

For a long time, I’ve been using Lambda@Edge to inject various HTTP security-related headers to help browsers improve the security model of the content that they fetch and render.

I've been doing this because I use S3 as the origin (accessed via a CloudFront Origin Access Identity); S3 itself cannot add or inject many of the common security headers when it passes objects back to CloudFront.

These Lambda@Edge functions execute on Origin Response, when the origin returns content to the CloudFront regional edge; the returned content then gets cached with the injected headers included.

The end result is getting a good rating on securityheaders.com, hardenize.com, and other public security evaluation services.

An alternative point in the Lambda@Edge execution lifecycle is to trigger on Viewer Response, in which case the cached version doesn't have the headers injected, and every viewer request triggers a code execution. Clearly, if every viewer gets the same set of headers, there's no need to execute on each viewer response and pay for the additional Lambda@Edge executions.

Now there's a new option: CloudFront Functions (AWS blog post). Written entirely in JavaScript, it executes only at Viewer Request or Viewer Response; there is no Origin Request or Origin Response option. It also executes at the CloudFront Edge, not the Regional Edge.

This example injects a number of headers, and would need only minor customisation of the Content Security Policy (and possibly the Permissions Policy) to work for most sites:

function handler(event) {
    var response = event.response;
    response.headers['strict-transport-security'] = { value: 'max-age=31536000' };
    response.headers['x-xss-protection'] = { value: '1' };
    response.headers['x-content-type-options'] = { value: 'nosniff' };
    response.headers['x-frame-options'] = { value: 'DENY' };
    response.headers['referrer-policy'] = { value: 'strict-origin-when-cross-origin' };
    response.headers['expect-ct'] = { value: 'enforce, max-age=86400' };
    response.headers['permissions-policy'] = { value: 'geolocation=(self), midi=(), sync-xhr=(self), microphone=(), camera=(), magnetometer=(), gyroscope=(), fullscreen=(), payment=(), autoplay=(self)' };
    // Note: CSP directive names take no colon (i.e. "default-src 'self'", not "default-src: 'self'")
    response.headers['content-security-policy'] = { value: "default-src 'self'; img-src 'self' data:; style-src 'self' 'unsafe-inline'; frame-ancestors 'none'; form-action 'none'; base-uri 'self';" };
    return response;
}
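
To get the function live, it needs to be created and published, then associated with the distribution's viewer-response event. A sketch with boto3 (the function name and source file are placeholders):

import boto3

cloudfront = boto3.client('cloudfront')
with open('security-headers.js', 'rb') as f:
    code = f.read()

# Create the function (use update_function for an existing one), then publish to LIVE
created = cloudfront.create_function(
    Name='security-headers',
    FunctionConfig={'Comment': 'Inject security headers',
                    'Runtime': 'cloudfront-js-1.0'},
    FunctionCode=code)
cloudfront.publish_function(Name='security-headers', IfMatch=created['ETag'])

The published function's ARN then goes into the distribution's FunctionAssociations, with EventType viewer-response, via an update to the distribution config or the console.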

You may want to evaluate the cost of both Lambda@Edge and CloudFront Functions. After the first year, CloudFront Functions is charged at US$0.10 per million invocations. As an equivalent, Lambda@Edge for a similar Node.js function that executes in one millisecond with 128 MB of memory would be US$0.2021 per million requests.

However, given a busy website, you may want to look at the efficiency differences between Viewer Response execution for CloudFront Functions, and Origin Response execution plus caching for Lambda@Edge (multiplied by the number of Regional Edge Cache locations (13) and the cache retention rate).

If you have only a few unique URLs, and content that can be cached for a long period, and large volumes of requests, then Lambda@Edge may result in near free execution.

|                                    | Lambda@Edge | CloudFront Functions |
|------------------------------------|-------------|----------------------|
| Unique URLs                        | 100         | 100                  |
| HTTP viewer requests               | 10M/month   | 10M/month            |
| Execution time                     | 1 ms        | N/A                  |
| Number of Regional Edges           | 13          | N/A                  |
| Memory per execution               | 128 MB      | N/A                  |
| Execution trigger                  | Origin Response | Viewer Response  |
| Number of code invocations         | 1,300 (once per Regional Edge per unique URL, possibly cached for a month depending on Edge cache expiry) | 10M |
| Possible costs (as at 28/Aug/2021) | Duration: US$0.0000000021 × 1,300 = US$0.00000273; Requests: US$0.2 × 0.0013 = US$0.00026; Total: US$0.00026273 | Requests: US$0.1 × 10; Total: US$1 |
| Cost uplift vs Lambda@Edge         |             | 3,806 times more expensive |
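
The arithmetic behind the table, reproduced with the per-unit prices above:

# Lambda@Edge: 1,300 invocations (13 Regional Edges x 100 unique URLs), then cached
duration_cost = 0.0000000021 * 1300   # duration at 1 ms / 128 MB per invocation
request_cost = 0.2 * 0.0013           # 1,300 requests at US$0.20 per million
lambda_edge_total = duration_cost + request_cost   # = US$0.00026273

# CloudFront Functions: every viewer response, 10M invocations at US$0.10/million
cf_functions_total = 0.1 * 10         # = US$1.00

print(round(cf_functions_total / lambda_edge_total))   # ~3,806x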

If we were using Lambda@Edge on Viewer Response, and not caching the object with headers injected, then CloudFront Functions would be cheaper; likewise if the content being served was dynamic from the origin and not suitable for caching, in which case we wouldn't get the efficiency savings of fewer executions.

Even if we are using Origin Response with Lambda@Edge, we can't determine the cache expiry of the Lambda@Edge cached responses (though we can influence it); the cached objects could expire and re-execute every day, so the Lambda@Edge costs could go up 30x (which would still leave CloudFront Functions 126 times more expensive). YMMV. TIMTOWTDI.