AWS VPCs: Calculating Subnets in CloudFormation

Virtual Private Cloud (VPC) is a construct in AWS that gives the customer their own, er, virtual network for the deployment of network-based resources such as virtual machines and more. It's been around for nearly a decade, and is a basic construct that helps provide security for those resources within an AWS Region.

CloudFormation is AWS's templating service: you give it a text definition (in YAML or JSON) of the resources you would like configured, and it executes the creation of those resources for you, saving you the hassle of either navigating the web console for hours or scripting up what could be thousands of API create calls.

VPCs can be quite complex; they can specify subnets for resources across multiple Availability Zones within a Region, define routing tables, Endpoints to create, and much more. So it probably comes as no surprise that managing a VPC via CloudFormation is a natural desire. The configuration of the virtual network for a workload needs to be as manageable in a CI/CD fashion as the workload that will live in it.

But there's often been a limitation in making this simple: mathematics.
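To make the arithmetic concrete (outside of CloudFormation itself), here is a minimal sketch using Python's standard ipaddress module; the VPC CIDR and subnet sizes are made up for illustration:

# Illustration only: carving a hypothetical VPC CIDR into equal subnets,
# the kind of arithmetic a template author has traditionally done by hand.
import ipaddress

vpc_cidr = ipaddress.ip_network("10.0.0.0/21")        # hypothetical VPC range
# For example: two subnets in each of three Availability Zones
subnets = list(vpc_cidr.subnets(new_prefix=24))[:6]
for index, subnet in enumerate(subnets):
    print("Subnet %d: %s" % (index, subnet))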

AWS Certifications in Perth (II)

I wrote last year about sitting AWS Certifications in Perth. I've done another two AWS Certifications in the last month (Networking Specialty, and Cloud Practitioner), and a few things have changed. Gone is Kryterion as the assessment provider, and in has come PSI; this means new venues, and there's now only one in Perth, at 100 Havelock St, West Perth.

It's a new-ish building I know well; an old friend was working on the top floor for a while, and I spoke to his teams about AWS several times (they became an AWS reference customer). There's a small Italian-inspired coffee shop on the ground floor (more on this later).

The booking process for exams is much the same, but now via https://aws.training/ (funky new DNS TLD). The certifications with PSI happen via their custom-rigged Kiosk systems: a PC with two webcams, one mounted on the monitor facing the candidate, and one positioned on a mast protruding above the screen, facing down at the desk. With these two cameras, a remote proctor can view the candidate and the desk at all times to ensure there is no cheating with reference materials; and one person remotely monitoring can theoretically be proctoring multiple candidates in many locations simultaneously (I suspect they are listening too).

With this custom rig, there are only limited seats — in Perth, there are two. The booking process schedules candidates to one of these Kiosks — literally called Kiosk 1 and Kiosk 2 — which are located in a small room on the 1st floor of 100 Havelock St, looked after by the friendly Regus staff.

The exam start time is often 8:30am, and the advice in the booking emails recommends turning up 15 minutes before this. By contrast, some non-AWS exams scheduled with PSI on the same Kiosks recommend arriving 30 minutes beforehand. But there's a catch: the doors on the ground floor do not unlock for access until around 8:25am, and Regus often isn't staffed until 8:30am (Regus checks you in and sets you up at the Kiosk).

Unlike the Kryterion centres, this doesn't seem to be a big problem — previously, being just a few minutes late was an issue. So if you do get there with plenty of time, the aforementioned cafe on the ground floor is open much earlier (they were open at 8:00am the day I got there early).

Photo ID is critical to have with you; a scanner mounted on the Kiosk rig is used to get an image of documents like Passports and Drivers' Licences. You should have two forms of photo ID, but bank cards or other cards can supplement them if needed (just cover some of your card numbers for security's sake). The moderator looking at the camera compares the Photo ID with the image of you sitting there in real time.

The assessment interface itself is then very similar, with the addition of a chat window to communicate with the moderator at any time. Feedback comments can be left on questions. I found one question whose multiple-choice answers did not reflect a service change from mid-December (just a few weeks ago), so I left a comment for the AWS certification team on this and followed up with my contacts directly.

I’ve had no problem scheduling certifications with a week’s notice, but I envisage that as demand grows, the lead time to book a slot may become an issue until more Kiosks are added (or additional venues). But that’s not an issue right now.

AWS GuardDuty: taking on the undifferentiated heavy lifting of network security analytics

GuardDuty is a machine-learning security analytics service for AWS.

Several years ago came the introduction of AWS CloudTrail, the 'almost' audit log of API calls performed by a customer against an AWS account. This was a huge security milestone: the ability for the customer to play back what they had asked for.

I say 'almost', as a critical design decision was that CloudTrail should in no way inhibit the already-authenticated API call that had been made by the customer. If the internal logging mechanism of CloudTrail were ever to fail, it should not stop the API call that was issued. Other logging mechanisms in computing may place logging in the critical path of call execution, so that if logging fails, the API call fails.

With CloudTrail (and the ability to deliver the logs cross-account, directly from AWS to a trusted independent account) came the second task: looking at the data. It's all JSON text, and it has a corresponding chain of check-summed and signed digest files, meaning the set of log files cannot be tampered with, and cannot be removed without breaking the chain.

Numerous solutions were put in place, but they were mostly basic individual pattern matches against single lines of logs: if you see X, then alert with message Y. For example: if there is a Console Login event, and it doesn't come from XX.YY.ZZ.AA/32, then alert.
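As a rough sketch of what those single-line matchers looked like (assuming a CloudTrail log file already fetched and un-gzipped; the trusted CIDR and file name are placeholders):

# Sketch: alert on Console Login events from outside a trusted range.
import ipaddress
import json

TRUSTED = ipaddress.ip_network("203.0.113.0/24")   # hypothetical trusted range

def check_console_logins(trail_file):
    with open(trail_file) as f:
        records = json.load(f)["Records"]
    for record in records:
        if record.get("eventName") != "ConsoleLogin":
            continue
        source = record.get("sourceIPAddress", "")
        try:
            if ipaddress.ip_address(source) not in TRUSTED:
                print("ALERT: console login from %s" % source)
        except ValueError:
            # sourceIPAddress may be a service name rather than an address
            print("ALERT: console login from %s" % source)

check_console_logins("cloudtrail-sample.json")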

Similarly, VPC introduced VPC Flow Logs, tracking the acceptance or rejection of connections through the VPC (no payload content; just payload size, start time, ports, and addresses).

In December, AWS introduced a managed service that takes a private copy of the VPC Flow Logs, a private copy of the CloudTrail logs, and the Route 53 query logs, supplements these with some centrally managed, maintained and updated threat lists, mixes in some customer-defined threat lists and white lists, adds a bit of machine learning, and produces much richer alerting.
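Findings then come back out of the service via the API; here's a minimal boto3 sketch, assuming a detector already exists (the region name is an example):

# Sketch: print current GuardDuty findings (pagination omitted for brevity).
import boto3

guardduty = boto3.client("guardduty", region_name="ap-southeast-2")
for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
    if not finding_ids:
        continue
    # GetFindings accepts up to 50 finding IDs per call
    findings = guardduty.get_findings(DetectorId=detector_id,
                                      FindingIds=finding_ids[:50])["Findings"]
    for finding in findings:
        print("%s (severity %s): %s" %
              (finding["Type"], finding["Severity"], finding["Title"]))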

GuardDuty isn't finished yet. At re:Invent, Tom Stickle indicated in a graph that there is a slew of additional capability coming shortly to GuardDuty, and now that it's GA, more customers will have feedback and input into the future direction of the service.

However, this doesn’t replace the need to have your own, secured and trusted copy of your CloudTrail logs, and your own alerting for events that you think are particularly significant, such as a SAML Identity Provider being updated with a new Metadata document!
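One way to build that kind of targeted alert (a sketch only) is a small Lambda function behind a CloudWatch Events rule that matches CloudTrail API calls; the SNS topic ARN below is a placeholder:

# Sketch: alert when a SAML Identity Provider's metadata is updated.
import json
import boto3

SNS_TOPIC_ARN = "arn:aws:sns:ap-southeast-2:123456789012:security-alerts"

def handler(event, context):
    detail = event.get("detail", {})
    if (detail.get("eventSource") == "iam.amazonaws.com"
            and detail.get("eventName") == "UpdateSAMLProvider"):
        boto3.client("sns").publish(
            TopicArn=SNS_TOPIC_ARN,
            Subject="SAML Identity Provider metadata updated",
            Message=json.dumps(detail, indent=2))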

But between this and Amazon Macie (for analysing and helping you review and secure your S3 documents), your visibility of security compliance and issues continues to improve.

Inspecting the AWS RDS CA Certificates

Trying to fetch all the RDS CA certificates as a bundle, and inspect them:

#!/usr/bin/python3
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4
import urllib.request
import re
from OpenSSL import crypto
from datetime import datetime


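# Fetch the combined RDS CA bundle and split it into individual PEM certificates.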
def get_certs():
    url = ("https://s3.amazonaws.com/rds-downloads/"
           "rds-combined-ca-bundle.pem")
    with urllib.request.urlopen(url=url) as f:
        pem_certs = []
        current_cert = ''
        for line in f.read().decode('utf-8').splitlines():
            current_cert = current_cert + line + "\n"
            if re.match("^-----END CERTIFICATE-----", line):
                pem_certs.append(current_cert)
                current_cert = ""
        return pem_certs


def validate_certs(certs):
    ca = None
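    # First pass: note the self-signed root CA (issuer CN matches subject CN).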
    for cert_pem in certs:
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
        if cert.get_issuer().CN == cert.get_subject().CN:
            ca = cert
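    # Second pass: report issuer, subject, serial and expiry for each certificate.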
    for cert_pem in certs:
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
        start_time = datetime.strptime(
            cert.get_notBefore().decode('utf-8')[0:14], "%Y%m%d%H%M%S")
        end_time = datetime.strptime(
            cert.get_notAfter().decode('utf-8')[0:14], "%Y%m%d%H%M%S")
        print("%s: %s (#%s) exp %s" %
              (cert.get_issuer().CN, cert.get_subject().CN,
               cert.get_serial_number(), end_time))
        if end_time < datetime.now():
            print("EXPIRED: %s on %s" % (cert.get_subject().CN,
                                         cert.get_notAfter()))
        if start_time > datetime.now():
            print("NOT YET ACTIVE: %s on %s" % (cert.get_subject().CN,
                                                cert.get_notBefore()))
    return

pem_certs = get_certs()
validate_certs(pem_certs)

Output

Today this gives me:

Amazon RDS Root CA: Amazon RDS Root CA (#66) exp 2020-03-05 09:11:31
Amazon RDS Root CA: Amazon RDS ap-northeast-1 CA (#68) exp 2020-03-05 22:03:06
Amazon RDS Root CA: Amazon RDS ap-southeast-1 CA (#69) exp 2020-03-05 22:03:19
Amazon RDS Root CA: Amazon RDS ap-southeast-2 CA (#70) exp 2020-03-05 22:03:24
Amazon RDS Root CA: Amazon RDS eu-central-1 CA (#71) exp 2020-03-05 22:03:31
Amazon RDS Root CA: Amazon RDS eu-west-1 CA (#72) exp 2020-03-05 22:03:35
Amazon RDS Root CA: Amazon RDS sa-east-1 CA (#73) exp 2020-03-05 22:03:40
Amazon RDS Root CA: Amazon RDS us-east-1 CA (#67) exp 2020-03-05 21:54:04
Amazon RDS Root CA: Amazon RDS us-west-1 CA (#74) exp 2020-03-05 22:03:45
Amazon RDS Root CA: Amazon RDS us-west-2 CA (#75) exp 2020-03-05 22:03:50
Amazon RDS Root CA: Amazon RDS ap-northeast-2 CA (#76) exp 2020-03-05 00:05:46
Amazon RDS Root CA: Amazon RDS ap-south-1 CA (#77) exp 2020-03-05 21:29:22
Amazon RDS Root CA: Amazon RDS us-east-2 CA (#78) exp 2020-03-05 19:58:45
Amazon RDS Root CA: Amazon RDS ca-central-1 CA (#79) exp 2020-03-05 00:10:11
Amazon RDS Root CA: Amazon RDS eu-west-2 CA (#80) exp 2020-03-05 17:44:42

S3 MFA Delete

The Simple Storage Service (S3) has made long-term durable storage simple for the masses. The democratisation of object storage with well-documented, stable APIs has been incorporated into many products. The API is part of the product.

But despite the word Simple, there are more and more advanced features: storage tiers, security policies, life-cycle policies, logging, versioning, requester-pays, and more recently, Inventory generation and more.

S3 features prominently in long-term retention of important data due to its high durability. But today I'm diving into another benefit: MFA Delete.

Simple CRUD

Create, Read, Update, Delete: the basics of a REST interface for sending to and manipulating a data store. In AWS, IAM policy (or Bucket Policy) can permit or limit the actions that a user can perform. If you delete an Object, then it's gone. If you overwrite an Object (using the same Key, or name), then the original is lost, as you would expect.

We can limit calls to s3:DeleteObject, either with an explicit Deny, or by carefully permitting only the fine-grained actions we intend (s3:PutObject, s3:GetObject) for the roles, groups or users we confer privileges to. However, we still run the risk of an unintended overwrite.
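As a sketch of that with boto3 (the group name and bucket are hypothetical), an inline policy allowing reads and writes but explicitly denying deletes might look like:

# Sketch: allow puts and gets on a bucket, but explicitly deny object deletion.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:GetObject", "s3:PutObject"],
         "Resource": "arn:aws:s3:::my-important-bucket/*"},
        {"Effect": "Deny",
         "Action": "s3:DeleteObject",
         "Resource": "arn:aws:s3:::my-important-bucket/*"},
    ],
}

boto3.client("iam").put_group_policy(GroupName="DataWriters",
                                     PolicyName="NoObjectDelete",
                                     PolicyDocument=json.dumps(policy))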

Furthermore, there may be privileged users or roles that have elevated access, so while your general work-flow is protected by policy from accidental deletion, you're not protected from accidents from other sources (e.g., humans with admin privileges).

S3 Versioning

To help with this, S3 Versioning permits you to retain multiple revisions of the same object. When listing the bucket naturally, you see the current revision in the list. But a few API calls and you can drill into the previous revisions of the same object, helping you recover from object overwrites.

When a file is deleted from a Versioned S3 Bucket, it's really just updated with a new version designated a Delete Marker. This Marker prevents the object being included in a natural bucket listing. Without further action, the previous versions are still present, and you're still paying for their storage.
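A quick boto3 sketch (the bucket name is a placeholder) shows the versions and delete markers that a natural listing hides:

# Sketch: list all object versions and delete markers in a versioned bucket.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket="MyVersionBucket"):
    for version in page.get("Versions", []):
        print("VERSION %s %s (latest: %s)" %
              (version["Key"], version["VersionId"], version["IsLatest"]))
    for marker in page.get("DeleteMarkers", []):
        print("DELETE MARKER %s %s" % (marker["Key"], marker["VersionId"]))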

Lifecycle Policies

I always recommend agreeing a life cycle retention policy for S3 buckets – possibly by agreed prefix – upon creation of the Bucket. It makes the creator of the data set really consider how permanent their data must be.

Lifecycle policies can change data storage tiers, but my favourite is the expiry of "previous revisions" after a customer-defined number of days. This gives me a kind of "S3 Undelete" window, and it's saved my bacon on several occasions; the accidental Admin delete can easily be undone within the number of days you have specified.
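A sketch of such a rule with boto3, assuming a hypothetical bucket and a 30-day undelete window:

# Sketch: expire previous (noncurrent) object versions after 30 days.
import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="MyVersionBucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-previous-revisions",
            "Filter": {"Prefix": ""},      # apply to the whole bucket
            "Status": "Enabled",
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        }]
    })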

But I want to go further: I want to have some buckets that I know are my "keep forever" buckets, and I want to make any kind of delete of even previous revisions difficult. Enter MFA Delete.

Enabling MFA Delete

MFA Delete works on Versioned S3 Buckets, and protects all revisions (including delete markers) from being deleted unless a corresponding special delete command is used that includes a valid MFA token from an authorised user.

In my experimentation, I had an existing bucket that had Versioning enabled. To enable this feature I had to turn to the API – this isn't available in the AWS Console at this time. I also had to use an IAM User with MFA or the master Root identity – federated users or EC2 instances in IAM Roles cannot do this, as they have no MFA associated with them directly.

In this example, I created a profile for the AWS CLI called MasterUser, and had root IAM keys created (which I immediately rescinded afterwards). I had a bucket called MyVersionBucket that I had set up just as I liked it. I also grabbed the ARN of the Virtual MFA I had for the Root user in this account (the ARN is listed as a serial number in the console).

To enable MFA Delete:

aws s3api put-bucket-versioning --profile MasterUser --bucket MyVersionBucket --versioning-configuration MFADelete=Enabled,Status=Enabled --mfa 'arn:... 012345'

Note: the MFA is referenced with quotes around it, as the single argument contains a space between the serial (the ARN, in this case) and the current value shown on the MFA.
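The rough boto3 equivalent (the MFA device ARN and token value are placeholders; as above, this needs the root identity or an MFA-bearing IAM user):

# Sketch: enable MFA Delete alongside Versioning via boto3.
import boto3

s3 = boto3.Session(profile_name="MasterUser").client("s3")
s3.put_bucket_versioning(
    Bucket="MyVersionBucket",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 012345",
    VersioningConfiguration={"MFADelete": "Enabled", "Status": "Enabled"})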

To then see the configuration:

aws s3api get-bucket-versioning --profile MasterUser --bucket MyVersionBucket

With this in place it was time to test it out.

(Not really) Deleting from an MFA-Delete protected Bucket

The first thing I did was upload a file (same as normal), and then delete it. Using the “current view” of the bucket, the file vanished. In the new AWS console I could see the deleted item listed, and drilling into it, I could see the revisions there as with a regular Versioning bucket.

The next thing I tried was to "undelete" an object, an option that has just appeared in the revised S3 console; however, this silently failed.

I then looked at the revisions of my sample file, and could see the delete marker sitting there. I attempted to delete the Delete Marker, but without an MFA I was blocked. This seemed to make sense: previously “undeleting” an object from S3 meant removing the delete marker, and clearly that’s just a version that I cannot really delete.

I looked at the other revisions of my sample file, and I was likewise blocked from deleting them.

Next I looked at adding a Lifecycle policy to the bucket, and discovered that no Lifecycle policies can be added to an MFA-Delete-protected bucket. So there's no opportunity to automatically move objects to the Infrequent Access tier of storage after a period.

To truly empty the bucket, I deleted each version of the file:

aws s3api delete-object --bucket MyVersionBucket --key sample.png --version-id Foo1234 --mfa 'arn:... 123456'

The VersionId was displayed to me in the list-object-versions output.

Of course, I could potentially have suspended MFA delete, tidied up, and then re-enabled it.

At the end of my experiment, with MFA Delete enabled, I could simply delete the empty bucket as normal – there were no further challenges.

When to use MFA Delete

As MFA Delete is a bucket-wide policy, you need to ensure that all objects that will be in this bucket are right to be considered permanent. You'll want to limit who has access to change the bucket versioning configuration (perhaps your PowerUsers should have an explicit Deny on the s3:PutBucketVersioning API call). If you have temporary or staging data in the bucket, or data that you want a lifecycle policy to automatically clean up, then MFA Delete is not for you.