Web Security 2017

Stronger encryption requirements for PCI compliance are having a welcome effect: they are purging the scourge of the web, legacy browsers, and as those disappear we gain even more client-side capability for security.

I started web development around late 1994. Some of my earliest paid web work is still online (dated June 1995). Clearly, that was a simpler time for content! I went on to be ‘Webmaster’ (yes, for those joining us in the last decade, that was a job title once) for UWA, and then for Hartley Poynton/JDV.com at a time when security became important as commerce boomed online.

At the dawn of the web era, backwards compatibility with older web clients (browsers) was deemed important; content had to degrade nicely, even without any CSS being applied. As the years stretched out, that legacy tail grew longer and longer. Until now.

In mid-2018, the Payment Card Industry (PCI) Data Security Standard (DSS) 3.2 comes into effect, requiring card holder environments to use (at minimum) TLS 1.2 for the encrypted transfer of data. Of course, that’s also the maximum version typically available today (TLS 1.3 is at draft 21 at the time of writing). This effort by the PCI is forcing people to adopt new browsers that can speak the TLS 1.2 protocol (and the encryption ciphers it permits), typically by running modern/recent Chrome, Firefox, Safari or Edge browsers. For the majority of people, Chrome is their choice, and the majority of those installations auto-update on every release.

Many are pushing to be compliant with PCI DSS 3.2 ahead of the 2018 deadline; your logging of negotiated protocols and ciphers will show whether your client base is ready as well. I’ve already worked with one government agency to demonstrate they were ready, and have helped disable TLS 1.0 and 1.1 on their public-facing web sites (having previously disabled SSL v3). We’ve removed RC4 and 3DES ciphers, and enabled ephemeral key ciphers to provide forward secrecy.
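
If you want a rough, client-side probe of what a server will still negotiate, a small Python sketch along these lines can help (the hostname is a placeholder, and a failed handshake may also just mean your local OpenSSL build no longer offers that protocol); your server-side logs of negotiated protocol and cipher remain the authoritative view of client readiness.

#!/usr/bin/python3
# Rough probe of which TLS protocol versions a server will still accept.
# The hostname is a placeholder; the ssl.PROTOCOL_TLSv1* constants are
# deprecated in newer Python releases, and a handshake failure can also
# mean the local OpenSSL build no longer offers that protocol.
import socket
import ssl

HOST = "www.example.com"   # placeholder
PORT = 443

PROTOCOLS = [
    ("TLS 1.0", ssl.PROTOCOL_TLSv1),
    ("TLS 1.1", ssl.PROTOCOL_TLSv1_1),
    ("TLS 1.2", ssl.PROTOCOL_TLSv1_2),
]

for name, proto in PROTOCOLS:
    context = ssl.SSLContext(proto)
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=HOST) as tls:
                print("%s accepted (cipher: %s)" % (name, tls.cipher()[0]))
    except (ssl.SSLError, OSError):
        print("%s rejected" % name)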

Web developers (writing JavaScript and using various frameworks) can rejoice: the age of having to support legacy MS IE 6/7/8/9/10 is pretty much over. None of those browsers support TLS 1.2 out of the box (IE 10 can turn it on, but for some reason it is off by default). This makes JavaScript code smaller, as it no longer needs conditional code to work around the quirks of those older clients.

But as we find ourselves with modern clients, we can now ask those clients to assist in our attempts to secure the content we serve. They understand modern security constructs such as Content Security Policies and other HTTP security-related headers.

There are two tools I am currently using to help in this battle to improve web security. One is SSLLabs.com, the work of Ivan Ristić (and now owned/sponsored by Qualys). This tool gives a good view of the encryption in flight (protocols, ciphers), the chain of trust (certificate), and a recent addition that checks DNS for CAA records (I and others piled onto a feature request for AWS Route53 to support these). The second tool is Scott Helme’s SecurityHeaders.io, which looks at the HTTP headers that web content uses to ask browsers to enforce security on the client side.
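
For a quick local approximation of part of what SecurityHeaders.io reports, a short Python sketch like this (the URL is a placeholder) will show which of the common security headers a site returns:

#!/usr/bin/python3
# Quick look at which common security-related HTTP response headers a
# site returns. The URL is a placeholder, and this is only a rough
# approximation of what SecurityHeaders.io grades.
import urllib.request

URL = "https://www.example.com/"   # placeholder

HEADERS_OF_INTEREST = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "X-XSS-Protection",
    "Referrer-Policy",
]

with urllib.request.urlopen(URL) as response:
    for header in HEADERS_OF_INTEREST:
        value = response.headers.get(header)
        print("%-27s %s" % (header + ":", value if value else "(not set)"))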

There’s a really important reason why these tools are good: they are maintained. As recommendations on ciphers, protocols, signature algorithms and other measures change, these tools are updated. And they are produced by very small but agile teams (like one-person teams), without the bureaucracy (and lag) associated with large enterprise tools. But they shouldn’t be used blindly. These services make suggestions, and you should research them yourself; not all the recommendations may meet your own risk profile. Personally, I’m uncomfortable with Public-Key-Pins, so that can wait for a while; indeed, Chrome has now signalled they will drop support for it.

So while PCI is hitting merchants with their DSS-compliance stick (and making it plainly obvious what they have to do), we get a useful side-effect: a concrete reason for drawing a line under how far back our backwards compatibility must stretch, and the ability to have the web client assist in ensuring the security of content.

Inspecting the AWS RDS CA Certificates

Fetching all the RDS CA certificates as a bundle, and inspecting them:

#!/usr/bin/python3
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4
import urllib.request
import re
from OpenSSL import crypto
from datetime import datetime


def get_certs():
    """Fetch the combined RDS CA bundle and split it into individual
    PEM certificates."""
    url = ("https://s3.amazonaws.com/rds-downloads/"
           "rds-combined-ca-bundle.pem")
    with urllib.request.urlopen(url=url) as f:
        pem_certs = []
        current_cert = ''
        for line in f.read().decode('utf-8').splitlines():
            current_cert = current_cert + line + "\n"
            if re.match("^-----END CERTIFICATE-----", line):
                # End of one PEM block: stash it and start the next
                pem_certs.append(current_cert)
                current_cert = ""
        return pem_certs


def validate_certs(certs):
    """Print issuer, subject, serial number and expiry for each
    certificate, flagging any that are expired or not yet active."""
    ca = None
    for cert_pem in certs:
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
        # The self-signed root has issuer == subject; keep a reference
        if cert.get_issuer().CN == cert.get_subject().CN:
            ca = cert
    for cert_pem in certs:
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
        # notBefore/notAfter are byte strings like b'20200305091131Z'
        start_time = datetime.strptime(
            cert.get_notBefore().decode('utf-8')[0:14], "%Y%m%d%H%M%S")
        end_time = datetime.strptime(
            cert.get_notAfter().decode('utf-8')[0:14], "%Y%m%d%H%M%S")
        print("%s: %s (#%s) exp %s" %
              (cert.get_issuer().CN, cert.get_subject().CN,
               cert.get_serial_number(), end_time))
        if end_time < datetime.now():
            print("EXPIRED: %s on %s" % (cert.get_subject().CN,
                                         cert.get_notAfter()))
        if start_time > datetime.now():
            print("NOT YET ACTIVE: %s on %s" % (cert.get_subject().CN,
                                                cert.get_notBefore()))
    return

pem_certs = get_certs()
validate_certs(pem_certs)

Output

Today this gives me:

Amazon RDS Root CA: Amazon RDS Root CA (#66) exp 2020-03-05 09:11:31
Amazon RDS Root CA: Amazon RDS ap-northeast-1 CA (#68) exp 2020-03-05 22:03:06
Amazon RDS Root CA: Amazon RDS ap-southeast-1 CA (#69) exp 2020-03-05 22:03:19
Amazon RDS Root CA: Amazon RDS ap-southeast-2 CA (#70) exp 2020-03-05 22:03:24
Amazon RDS Root CA: Amazon RDS eu-central-1 CA (#71) exp 2020-03-05 22:03:31
Amazon RDS Root CA: Amazon RDS eu-west-1 CA (#72) exp 2020-03-05 22:03:35
Amazon RDS Root CA: Amazon RDS sa-east-1 CA (#73) exp 2020-03-05 22:03:40
Amazon RDS Root CA: Amazon RDS us-east-1 CA (#67) exp 2020-03-05 21:54:04
Amazon RDS Root CA: Amazon RDS us-west-1 CA (#74) exp 2020-03-05 22:03:45
Amazon RDS Root CA: Amazon RDS us-west-2 CA (#75) exp 2020-03-05 22:03:50
Amazon RDS Root CA: Amazon RDS ap-northeast-2 CA (#76) exp 2020-03-05 00:05:46
Amazon RDS Root CA: Amazon RDS ap-south-1 CA (#77) exp 2020-03-05 21:29:22
Amazon RDS Root CA: Amazon RDS us-east-2 CA (#78) exp 2020-03-05 19:58:45
Amazon RDS Root CA: Amazon RDS ca-central-1 CA (#79) exp 2020-03-05 00:10:11
Amazon RDS Root CA: Amazon RDS eu-west-2 CA (#80) exp 2020-03-05 17:44:42

S3 MFA Delete

The Simple Storage Service (S3) has made long-term durable storage simple for the masses. The democratisation of object storage with well-documented, stable APIs has seen it incorporated into many products. The API is part of the product.

But despite the word Simple, there are more and more advanced features: storage tiers, security policies, life-cycle policies, logging, versioning, requester-pays, and more recently, Inventory generation, among others.

S3 features prominently in long-term retention of important data due to its high durability. But today I’m diving into another benefit: MFA Delete.

Simple CRUD

Create, Read, Update, Delete: the basics of a REST interface for storing and manipulating data in a data store. In AWS, IAM policy (or Bucket Policy) can permit or limit the actions that a user can perform. If you delete an Object, then it’s gone. If you overwrite an Object (using the same Prefix or name), then the original is lost, as you would expect.

We can limit calls to s3:DeleteObject, either with an explicit DENY, or by carefully permitting only the fine-grained controls we intend (s3:PutObject, s3:GetObject) for the roles, groups or users we confer privileges to. However, we still run the chance of an unintended overwrite.
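
As a sketch of the explicit-DENY approach (the bucket name, user name and policy name below are placeholders, and a real policy would likely be scoped more carefully), an inline IAM policy attached with boto3 could look something like this:

#!/usr/bin/python3
# Sketch: allow object reads/writes but explicitly deny s3:DeleteObject,
# attached as an inline IAM user policy. Bucket name, user name and
# policy name are placeholders; adapt to groups or roles as appropriate.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowObjectReadWrite",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-protected-bucket/*",
        },
        {
            "Sid": "DenyObjectDelete",
            "Effect": "Deny",
            "Action": "s3:DeleteObject",
            "Resource": "arn:aws:s3:::my-protected-bucket/*",
        },
    ],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="SomeUser",                # placeholder
    PolicyName="DenyS3ObjectDelete",    # placeholder
    PolicyDocument=json.dumps(policy),
)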

Furthermore, there may be privileged users or roles that have elevated access, so while your general work-flow is protected by policy from accidental deletion, you’re not protected from accidents from other sources (e.g. humans with admin privileges).

S3 Versioning

To help with this, S3 Versioning permits you to retain multiple revisions of the same object. When listing the bucket naturally, you see the current revision in the list. But a few API calls and you can drill into the previous revisions of the same object, helping you recover from object overwrites.

When a file is deleted from a Versioned S3 Bucket, it’s really just updated with a new version designated as a Delete Marker. This Marker prevents the object from being included in a natural bucket listing. Without further action, the previous versions are still present, and you’re still paying for their storage.
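
As an illustration, a boto3 sketch along these lines (the bucket name is a placeholder) will list the versions and Delete Markers that a natural listing hides:

#!/usr/bin/python3
# List current and previous object versions, plus Delete Markers, in a
# versioned bucket. The bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_object_versions")

for page in paginator.paginate(Bucket="my-versioned-bucket"):
    for version in page.get("Versions", []):
        print("version: %s %s (latest: %s)" %
              (version["Key"], version["VersionId"], version["IsLatest"]))
    for marker in page.get("DeleteMarkers", []):
        print("delete marker: %s %s" % (marker["Key"], marker["VersionId"]))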

Lifecycle Policies

I always recommend agreeing on a life-cycle retention policy for S3 buckets – possibly per agreed prefix – upon creation of the Bucket. It makes the creator of the data set really consider how permanent their data must be.

Lifecycle policies can change data storage tiers, but my favourite is the expiry of “previous revisions” after a customer-defined number of days. This gives me a kind of “S3 Undelete” window, and it’s saved my bacon on several occasions; an accidental Admin delete can easily be undone within the number of days you have specified.
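
A boto3 sketch of such a rule might look like this (the bucket name and the 30-day window are placeholders to adjust to your own retention appetite):

#!/usr/bin/python3
# Sketch: expire non-current (previous) object versions after a set
# number of days - a kind of "S3 Undelete" window. The bucket name and
# the 30-day window are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-versioned-bucket",       # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "ExpirePreviousRevisions",
                "Filter": {"Prefix": ""},   # whole bucket
                "Status": "Enabled",
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ]
    },
)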

But I want to go further: I want some buckets that I know are my “keep forever” buckets, and I want to make any kind of delete, even of previous revisions, difficult: enter MFA Delete.

Enabling MFA Delete

MFA Delete works on Versioned S3 Buckets, and protects all revisions (including Delete Markers) from being deleted except via a special delete request that includes a valid MFA token from an authorised user.

In my experimentation, I had an existing bucket with Versioning enabled. To enable this feature I had to turn to the API – it isn’t available in the AWS Console at this time. I also had to use an IAM User with MFA or the master Root identity – federated users or EC2 instances in IAM Roles cannot do this, as they have no MFA associated with them directly.

In this example, I created a profile for the AWS CLI called MasterUser, and had root IAM keys created (which I immediately rescinded afterwards). I had a bucket called MyVersionBucket that I had set up just as I liked it. I also grabbed the ARN of the Virtual MFA I had for the Root user in this account (the ARN is listed as the Serial number in the console).

To enable MFA Delete:

aws s3api put-bucket-versioning --profile MasterUser --bucket MyVersionBucket --versioning-configuration MFADelete=Enabled,Status=Enabled --mfa 'arn:… 012345'

Note: the MFA argument is quoted, as the single argument contains a space between the serial (the ARN in this case) and the current value shown on the MFA device.

To then see the configuration:

aws s3api get-bucket-versioning --profile MasterUser --bucket MyVersionBucket
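
For those driving this from boto3 rather than the CLI, a roughly equivalent sketch (the bucket name, MFA device ARN and token value are placeholders, and the credentials used must have that MFA device associated) would be:

#!/usr/bin/python3
# Sketch: enable MFA Delete on a versioned bucket and read the
# configuration back. Bucket name, MFA device ARN and token value are
# placeholders; the caller's credentials need the MFA device attached.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="MyVersionBucket",
    VersioningConfiguration={"MFADelete": "Enabled", "Status": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 012345",
)

print(s3.get_bucket_versioning(Bucket="MyVersionBucket"))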

With this in place it was time to test it out.

(Not really) Deleting from an MFA-Delete protected Bucket

The first thing I did was upload a file (same as normal), and then delete it. Using the “current view” of the bucket, the file vanished. In the new AWS console I could see the deleted item listed, and drilling into it, I could see the revisions there as with a regular Versioning bucket.

The next thing I tried was to “undelete” an object, an option that has just appeared in the revised S3 console; however, this silently failed.

I then looked at the revisions of my sample file, and could see the delete marker sitting there. I attempted to delete the Delete Marker, but without an MFA I was blocked. This seemed to make sense: previously “undeleting” an object from S3 meant removing the delete marker, and clearly that’s just a version that I cannot really delete.

I looked at the other revisions of my sample file, and I was likewise blocked from deleting them.

Next I looked at adding a Lifecycle policy to the bucket, and discovered that no Lifecycle policies can be added to an MFA Delete protected bucket. So there’s no opportunity to automatically move objects to the Infrequent Access storage tier after a period.

To truly empty the bucket, I deleted each version of the file:

aws s3api delete-object --bucket MyVersionBucket --key sample.png --version-id Foo1234 --mfa 'arn:… 123456'

The Version ID was displayed in the output of list-object-versions.
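
A boto3 sketch of that clean-up (the bucket name and MFA argument are placeholders) walks every version and Delete Marker and removes each with the MFA token supplied; note the token rotates, so a long-running listing may need a fresh value part-way through:

#!/usr/bin/python3
# Sketch: empty an MFA Delete protected bucket by removing every version
# and Delete Marker, quoting the MFA serial and current token value.
# Bucket name and the MFA argument are placeholders, and the token
# rotates every 30 seconds or so.
import boto3

BUCKET = "MyVersionBucket"
MFA = "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_object_versions")

for page in paginator.paginate(Bucket=BUCKET):
    for item in page.get("Versions", []) + page.get("DeleteMarkers", []):
        print("deleting %s (%s)" % (item["Key"], item["VersionId"]))
        s3.delete_object(Bucket=BUCKET, Key=item["Key"],
                         VersionId=item["VersionId"], MFA=MFA)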

Of course, I could potentially have suspended MFA delete, tidied up, and then re-enabled it.

At the end of my experiment, with MFA Delete enabled, I could simply delete the empty bucket as normal – there were no further challenges.

When to use MFA Delete

As MFA Delete is a bucket-wide policy, you need to ensure that all objects that will be in this bucket are right to be considered permanent. You’ll want to limit who has access to put-bucket-versioning (perhaps your PowerUsers should have an explicit deny on this API call). If you have temporary or staging data in the bucket, or data that you want a lifecycle policy to automatically clean up, then MFA Delete is not for you.

AWS Certifications in Perth

AWS Certified Developer (Associate), SysOps (Associate), Solution Architect (Associate), DevOps Engineer (Professional), Solution Architect (Professional)

Today I went and sat yet another of the AWS Certifications; I’ve been doing a bit of a Pokémon approach and collecting them all.

AWS’s certifications fall in what are essentially three classes: Associate, Professional, and Speciality (still in beta at this point in time).

Each of the certifications requires sitting a proctored exam at a certified exam venue. Candidates are not permitted any personal equipment, watches, wallets, or anything else that could be used to collude or circumvent the integrity of the test. The testing is done on a locked-down PC, and the questions are generally multiple-choice, of two kinds:

  • Choose the best answer (think: radio box)
  • Choose N answers that apply (think: tick the check boxes)

The Associate certifications are effectively entry-level: the number of questions is around 55, and the permitted time is around an hour and a half.

Meanwhile the Professional and Speciality certifications are around 100+ questions, and three hours assessment time.

The calibre of the questions has made these certifications some of the most valuable, and thus most desired, certifications in IT. I’ve been lucky to spend several years working on some large projects to hone these skills, and am pleased to hold all five of the AWS certifications.

Certification Venues in Perth

Over the last few months, two more venues have appeared as options for sitting these certifications, and I have now used all three, so I can compare them. For several years, AICT (next to Myer in the Murray Street Mall) was the only option, but now Saxons Training Facilities at 140 St Georges Tce and ATI-Mirage at the redeveloped Cloisters have become available.

AICT is probably the most dilapidated venue. They have set aside a small room at the very rear of their location, just by their lab technicians’ hub and on-premise data centre, with small screens on the testing PCs, no windows in the room, and at times a lack of adequate cooling, assisted by a pedestal fan. They can test about 5 people at a time here.

Saxons became available in January: their rooms were considerably larger, well lit, and had large windows for daylight. The facilities were much cleaner and newer. A very large break-out kitchen/coffee area was there, but I had no time to use it. The room would have held about 18 people, but I was the only one the morning I sat this certification.

And today, ATI-Mirage – mine was the first Kryterion exam to go through since ATI-Mirage started offering them. Their testing facility was reasonably well resourced: no windows, but well lit, with enough room for around 12 people to sit exams. The space is shared with their Pearson VUE testing, and this morning it was full.

If I had to rank the facilities, I’d probably choose Saxons first, followed closely by the very friendly people at ATI-Mirage, and AICT last. But then again, my office in the Perth CBD is opposite Saxons, so it’s a short walk to hop over the road!

Wikileaks Vault 7: Tech’s dirty laundry

Wikileaks have dumped another huge cache of data exfiltrated from behind the closed doors of three-letter-acronym agencies: BBC, ABC, Independent.

Apple’s comment was wonderful; according to the BBC link above:

Apple’s statement was the most detailed, saying it had already addressed some of the vulnerabilities.

This is the crux of good security posture. Vulnerabilities exist in so much of what we use, the point is to be continuously addressing the issues and applying security before it is a problem.

I see patch cycles in organisations that can be measured in tectonic plate movement intervals. There are security updates available every few hours, yet organisations wait sometimes years to apply these.

It’s simple:

  • Do you know more than the software vendor about security?
    1. Probably not; therefore, take their advice and apply all pending security updates.
    2. Yes, I do!!; no, you probably don’t. See 1 above.
  • Do you want to have an exploit situation caused by a KNOWN vulnerability with a KNOWN patch?
    1. No, because I’d look pretty foolish if this happened. Apply security patches.
    2. Yes, because that’s the corporate policy and I don’t care about my job!

There’s not much we can do about UNKNOWN vulnerabilities, except that over time, some of the UNKNOWN become KNOWN, and they then become the PATCHED.

Now take this approach to your entire operating environment. Production servers, monitoring servers, CI systems, bastion hosts, VPN servers, proxy servers, Wikis, revision control systems, routers, switches, printers. The list goes on, but they all require maintenance, because writing good software is hard, and what looks like good practice today may become relegated tomorrow.