AWS: Save up to 19.2% on t* instances

Despite what AWS may say, the burstable instance families are the workhorse for so many smaller workloads – the long tail of deployments in the cloud.

Yesterday saw the announcement of the AMD-based T3a instance family as generally available in many regions. Memory and core count match the previous T3 and T2 instance families of the same size, which makes comparisons rather easy.

Below are prices as shown today (25/Apr/2019) for Sydney ap-southeast-2:

Size      t2 (US$/hr)   t3 (US$/hr)   t3a (US$/hr)   Diff t3a-t3   %      Diff t3a-t2   %
nano      0.0073        0.0066        0.0059         0.0007        10.6   0.0014        19.2
micro     0.0146        0.0132        0.0118         0.0014        10.6   0.0028        19.2
small     0.0292        0.0264        0.0236         0.0028        10.6   0.0056        19.2
medium    0.0584        0.0528        0.0472         0.0056        10.6   0.0112        19.2
large     0.1168        0.1056        0.0944         0.0112        10.6   0.0224        19.2
xlarge    0.2336        0.2112        0.1888         0.0224        10.6   0.0448        19.2
2xlarge   0.4672        0.4224        0.3776         0.0448        10.6   0.0896        19.2

As you can see, the savings are consistent across the sizes: a 10.6% saving for the minor move from a t3 to its t3a equivalent, and a larger 19.2% if you’re still back on t2.
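As a quick sanity check on those percentage columns, here’s a minimal Python sketch using the Sydney on-demand nano prices from the table above (any size gives the same ratios):

```python
# Percentage saving when moving a t*.nano between families, using the
# Sydney on-demand prices (US$/hr) from the table above.
t2, t3, t3a = 0.0073, 0.0066, 0.0059

print(f"t3 -> t3a: {100 * (t3 - t3a) / t3:.1f}%")   # 10.6%
print(f"t2 -> t3a: {100 * (t2 - t3a) / t2:.1f}%")   # 19.2%
```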

It’s worth looking at any outstanding Reservations you currently have for the older families, and not jumping to this prematurely – you may end up paying twice.

Talking of which, Reservations are available for the t3a as well. Looking at the Sydney price for a nano, it drops from the on-demand 0.59c/hr to around 0.4c/hr; across the fleet, discounts on reserved versus on-demand for the t3a are up to 63%.

For those who don’t reserve – because you’re not ready to commit, perhaps – the simple change of family is an easy and low-risk way of reaping some savings. For example, a fleet of 100 small instances running for a 31-day month (744 hours) swapped from t2 to t3a would reap a saving of US$2,172.48 – US$1,755.84 = US$416.64/month, or just shy of US$5,000 a year (around AU$7,000).
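The same arithmetic as a short Python sketch, under the assumptions of the example above (100 instances, a 744-hour month, Sydney on-demand prices):

```python
# Monthly cost of 100 t*.small instances over a 31-day (744 hour) month,
# using the Sydney on-demand prices (US$/hr) from the table above.
hours = 31 * 24
fleet = 100
t2_small, t3a_small = 0.0292, 0.0236

t2_cost = t2_small * fleet * hours     # 2172.48
t3a_cost = t3a_small * fleet * hours   # 1755.84

print(f"Monthly saving: US${t2_cost - t3a_cost:,.2f}")         # US$416.64
print(f"Annual saving:  US${(t2_cost - t3a_cost) * 12:,.2f}")  # just shy of US$5,000
```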

YMMV, test your workload – and Availability Zones – for support of the t3a.

AWS Certification: Pearson VUE and PSI

With the announcement a few weeks back of Pearson VUE joining as a test provider, I thought I’d look back on where I can send my team to get certified. For the last few years, AWS Certification exams have only been available via PSI, and in Perth that meant one venue, with two kiosks. Prior to that, there were more test centres (with Kryterion as the test provider, as per a previous blog post in 2017).

But now Pearson VUE are in the mix alongside PSI, and the expansion is great.

There are now an additional six locations to get certified in Western Australia, including the first one outside of Perth by some 300+ km:

  • North Metropolitan TAFE, 30 Aberdeen St, Northbridge
  • DDLS Perth, 553 Hay Street
  • ATI-Mirage, Cloisters 863 Hay Street
  • Edith Cowan, Joondalup
  • North Metro TAFE, 35 Kendrew Crescent, Joondalup
  • Market Creations, 7 Chapman Road, Geraldton

Geraldton is several hours’ drive north of Perth, at around 420 km (260 mi), with a population of around 40,000. The rest of Western Australia north of that is probably only another 60,000 people in total, across Karratha (16k), Carnarvon, Exmouth, Port Hedland, and Dampier.

Let’s get some perspective on these distances, for my foreign friends:

For comparison, check out this. Suffice to say, it’s a bloody long way. My wife lived for a while in Carnarvon, halfway up the coast; that was around 10 hours of driving to get there.

It would be interesting to see Busselton (pop 74k) and Albany, both to the south, get some availability here to help people access these services without having to trek for days, or not bother at all.

S3 Public Access: Preventable SNAFUs

It’s happened again.

This time it is Facebook who left an Amazon S3 Bucket with publicly (anonymously) accessible data. 540 million breached records.

Previously: Verizon, PocketiNet, GoDaddy, Booz Allen Hamilton, Dow Jones, WWE, Time Warner, the Pentagon, Accenture, and more. Large, presumably trusted names.

Let’s start with the truth: objects (files, data) uploaded to S3, with no options set on the bucket or object, are private by default. Someone has to either set a Bucket Policy that makes objects anonymously accessible, or set a public ACL on each object, for that data to be shared.

Let’s be clear.

These breaches are the result of someone uploading data and setting the public-read ACL, or editing a Bucket’s overriding resource policy to facilitate anonymous public access.
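To make that concrete, here is a minimal boto3 sketch of those two failure modes – purely illustrative, with a hypothetical bucket name; this is what to look for in your own tooling and templates, not something to deploy:

```python
# Illustration of the two failure modes only – neither of these is something
# you want in production. The bucket name is hypothetical.
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-leaky-bucket"

# 1. Uploading an object with a public-read ACL:
s3.put_object(Bucket=bucket, Key="customers.csv", Body=b"...",
              ACL="public-read")

# 2. Or overriding the whole bucket with an anonymous-read resource policy:
public_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(public_policy))
```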

Having S3 accessible via authenticated http(s) is great. Having it available directly via anonymous http(s) is not, but historically that was a valid use case.

This week I updated a client’s account – one that serves a static web site hosted in S3 – to have the master “Block Public Access” setting enabled across their entire AWS account. And I sleep easier. Their service experienced no downtime in the swap, no significant increase in cost, and the CloudFront caching CDN can no longer be side-stepped with requests made directly to the S3 bucket.

Serving from S3 is terrible

So when you set an object public, it can be fetched from S3 with no authentication. It can also be served over unencrypted HTTP (which is a terrible idea).

When hitting the S3 endpoint, the TLS certificate used matches the S3 endpoint hostname, which is something like s3.ap-southeast-2.amazonaws.com. Now that hostname probably has nothing to do with your business brand name, and something like files.mycompany.com may at least give some indication of affiliation of the data with your brand. But with the S3 endpoint, you have no choice.

Ignoring the unencrypted HTTP option, the S3 endpoint’s TLS configuration for HTTPS is also rather loosely curated, as it is a public, shared endpoint with over a decade of backwards compatibility to deal with. TLS 1.0 is still enabled, which would be a breach of PCI DSS 3.2 (and TLS 1.1 is there too, which IMHO is next to useless).

It’s worth noting that there are dual-stack IPv4 and IPv6 endpoints, such as s3.dualstack.ap-southeast-2.amazonaws.com.

So how can we fix this?

CloudFront + Origin Access Identity

CloudFront allows us to select a TLS security policy, pre-defined by AWS, that restricts the available protocols and ciphers. This lets us remove “early crypto” and be TLS 1.2 only.

CloudFront also permits us to use a custom domain name – for SNI-enabled clients at no additional cost, or via a dedicated IP address (not worth it, IMHO).
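As a rough sketch of where those two settings live in the CloudFront API (the distribution ID and certificate ARN below are hypothetical; note the ACM certificate for CloudFront must be issued in us-east-1):

```python
import boto3

cf = boto3.client("cloudfront")
dist_id = "E1234567890ABC"  # hypothetical distribution ID

# Fetch the current config plus the ETag required by update_distribution.
resp = cf.get_distribution_config(Id=dist_id)
config = resp["DistributionConfig"]

# Custom domain certificate served via SNI, viewers restricted to TLS 1.2+.
config["ViewerCertificate"] = {
    "ACMCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example",  # hypothetical
    "SSLSupportMethod": "sni-only",
    "MinimumProtocolVersion": "TLSv1.2_2018",
}

cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=resp["ETag"])
```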

Origin Access Identities give CloudFront an AWS-managed identity that the service can use to access S3. Your S3 bucket then has a policy permitting this identity access to the bucket’s objects.
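The bucket-policy side of that arrangement looks roughly like this – the bucket name and OAI ID are hypothetical:

```python
import json
import boto3

bucket = "example-static-site"   # hypothetical bucket
oai_id = "E2EXAMPLE1OAI"         # hypothetical Origin Access Identity ID

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIRead",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```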

With this access in place, you can then flick on the “Block Public Access” settings – possibly on the bucket first, then account-wide last.
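A sketch of that order of operations with boto3, again with a hypothetical bucket name and account ID:

```python
import boto3

block_all = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

# Bucket level first...
boto3.client("s3").put_public_access_block(
    Bucket="example-static-site",             # hypothetical bucket
    PublicAccessBlockConfiguration=block_all,
)

# ...then account-wide via the S3 Control API.
boto3.client("s3control").put_public_access_block(
    AccountId="111122223333",                 # hypothetical account ID
    PublicAccessBlockConfiguration=block_all,
)
```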

One thing to work out is your use of URLs ending in “/”. Using Lambda@Edge, we convert these into a request for the corresponding “/index.html”. Similarly, URL paths that end in “/foo” with no typical file suffix get mapped to “/foo/index.html”.
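A minimal sketch of that kind of rewrite as a Lambda@Edge origin-request handler – an illustrative assumption of the logic, not necessarily the exact function in use here:

```python
# Lambda@Edge origin-request handler: map "directory" style URIs onto the
# index.html objects actually stored in S3.
import os

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    uri = request["uri"]

    if uri.endswith("/"):
        # "/docs/" -> "/docs/index.html"
        request["uri"] = uri + "index.html"
    elif "." not in os.path.basename(uri):
        # "/docs/foo" (no file suffix) -> "/docs/foo/index.html"
        request["uri"] = uri + "/index.html"

    return request
```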

Governance FTW?

So, have you checked whether Block Public Access is enabled in your account(s)? How about a sweep through right now?
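If you want to script that sweep, a quick check of the account-level setting might look like this (run with credentials for each account in turn):

```python
import boto3
from botocore.exceptions import ClientError

# Check the account-level Block Public Access configuration for whichever
# account the current credentials belong to.
account_id = boto3.client("sts").get_caller_identity()["Account"]

try:
    resp = boto3.client("s3control").get_public_access_block(AccountId=account_id)
    print(account_id, resp["PublicAccessBlockConfiguration"])
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        print(account_id, "has no account-level Block Public Access configuration!")
    else:
        raise
```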

If you’re not sure about this, contact me.