Wikileaks Vault 7: Tech’s dirty laundry

Wikileaks have dumped another huge cache of data exfiltrated from behind the closed doors of three-letter acronym agencies: BBC, ABC, Independent.

Apple’s comments were wonderful, according to the BBC link above:

Apple’s statement was the most detailed, saying it had already addressed some of the vulnerabilities.

This is the crux of good security posture. Vulnerabilities exist in so much of what we use, the point is to be continuously addressing the issues and applying security before it is a problem.

I see patch cycles in organisations that can be measured in tectonic plate movement intervals. There are security updates available every few hours, yet organisations wait sometimes years to apply these.

It’s simple:

  • Do you know more than the software vendor about security?
    1. Probably not; therefore, take their advice and apply all pending security updates.
    2. Yes, I do!!; no, you probably don’t. See 1 above.
  • Do you want to have an exploit situation caused by a KNOWN vulnerability with a KNOWN patch?
    1. No, because I’d look pretty foolish if this happened. Apply security patches.
    2. Yes, because that’s the corporate policy and I don’t care about my job!

There’s not much we can do about UNKNOWN vulnerabilities, except that over time some of the UNKNOWN become KNOWN, and the KNOWN can then become the PATCHED.

Now take this approach to your entire operating environment. Production servers, monitoring servers, CI systems, bastion hosts, VPN servers, proxy servers, Wikis, revision control systems, routers, switches, printers. The list goes on, but they all require maintenance, because writing good software is hard, and what looks like good practice today may be superseded tomorrow.

CloudPets security fail is not a Cloud failure

I spent several years at Amazon Web Services as the Solution Architect with a depth in Security in A/NZ. I created and presented the Security keynotes at the AWS Summits in Australia and New Zealand. I teach Advanced Security and Operations on AWS. I have run online share-trading systems for many of the banks in Australia. I help create the official Debian EC2 AMIs. I am the National Cloud Lead for AWS Partner Ajilon, and via Ajilon, I also secure the State Government Land Registry in EC2 with Advara.

So I am reasonably familiar with configuring AWS resources to secure workloads.

Last week saw a poor security failure: the compromise of CloudPets, a company that makes Internet-connected plush toys for children that let users record and play back audio via the toys. Coverage from Troy Hunt, The Register, Ars Technica.

As details emerged, a few things became obvious. Here are the highlights (low-lights, really) of what apparently occurred:

  • A production database (MongoDB) was exposed directly to the Internet with no authentication required to query it
  • Audio files in S3 were publicly, anonymously retrievable. However, they were not listable directly (no worries, the object URLs were in that open MongoDB database)
  • Non-production and production systems were co-tenanted
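The first of those failures alone is avoidable with a couple of lines of configuration. As a sketch only (the exact deployment layout is an assumption), a mongod.conf that refuses anonymous, Internet-wide queries looks like this:

```yaml
# mongod.conf -- sketch; adapt addresses to your private network layout
net:
  bindIp: 127.0.0.1        # listen only on localhost (or a private subnet address)
  port: 27017
security:
  authorization: enabled   # every query must come from an authenticated, authorised user
```

With the database bound to a private address and authorization enabled, only the application tier that holds credentials can query it; nothing is reachable from the open Internet.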

There are a number of steps that should have been taken technically to secure this:

  1. Each device should have had its own unique certificate or credential
  2. This certificate/credential should have been used to authenticate to an API Endpoint
  3. Each certificate/credential could then be uniquely invalidated if someone extracted the keys from a device
  4. Each certificate/credential should only have been permitted access to fetch/retrieve its own recordings, not any recording from any customer
  5. The Endpoint that authenticates the certificate should have generated presigned URLs for the referenced recordings. Presigned URLs contain an expiry timestamp, after which the URL is no longer valid. Each time the device (pet) wanted a file, it could ask the Endpoint to generate the presigned URL, and then fetch the file from S3
  6. The Endpoint could rate-limit the number of requests per certificate per minute/hour/day. E.g., 60 per minute (for burst fetches), 200 per hour, 400 per day?
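Steps 4–6 hinge on the presigned-URL mechanic: a URL that carries its own expiry and a signature over the path, so it can be handed to a device without handing over any standing credentials. A minimal sketch of the idea in Python follows; in production you would call boto3’s generate_presigned_url against S3 rather than roll your own, and the signing key and paths here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical server-side signing key; S3 derives the equivalent from IAM credentials.
SECRET = b"per-deployment signing key"

def presign(path: str, expires_at: int) -> str:
    """Return a URL carrying an expiry timestamp and an HMAC over path + expiry."""
    msg = f"{path}?expires={expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires_at}&sig={sig}"

def verify(url: str, now: int) -> bool:
    """Reject tampered or expired URLs."""
    path, _, query = url.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&"))
    expires = int(params["expires"])
    expected = hmac.new(SECRET, f"{path}?expires={expires}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["sig"]) and now < expires

url = presign("/recordings/device-42/clip-001.wav", expires_at=1700000300)
print(verify(url, now=1700000000))  # True: signature matches, not yet expired
print(verify(url, now=1700000400))  # False: past the expiry timestamp
```

Note that changing even one character of the path invalidates the signature, which is what stops a device (or an attacker holding one leaked URL) from walking other customers’ recordings.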

If the Endpoint for the API was an EC2 instance (or better yet, an Auto Scaling group of them), then it could itself be running in the context of an IAM Role, with permission to create these presigned URLs. Similarly, an API Gateway invoking a Lambda function running in a Role.
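A presigned URL is only honoured if the credentials that signed it are themselves allowed to read the object, so the Role behind that Endpoint needs s3:GetObject on the recordings bucket and nothing more. A sketch of that policy (the bucket name is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-cloudpets-recordings/*"
    }
  ]
}
```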

Indeed, that Endpoint would have been the component querying the MongoDB (privately), removing the need for a publicly facing database.

I’ve often quoted Voltaire (or Uncle Ben from Spider-Man, take your pick): “with great power comes great responsibility”. There’s no excuse for the series of failures here; the team apparently didn’t understand security in their architecture.

Yet security is in all the publicly facing AWS customer documents (the shared responsibility model). It’s impossible to miss. AWS even offers a free security fundamentals course, which I recommend as a precursor to my own teaching.

Worse is the response and lack of action from the company when they were alerted last year.

PII and PHI are stored in the cloud: information that the economy, indeed modern civilisation, depends upon. The techniques used to secure workloads are not overly costly; they mostly require knowledge and implementation.

You don’t need to be using Hardware Security Modules (HSMs) to have a good security architecture, but you do need current protocols, ciphers, authentication and authorisation. The protocols and ciphers will change over time, so IoT devices like this also need to update over time to support protocols and ciphers that may not exist today. It’s this constant stepping-stone approach, continually moving to the next implementation of transport and at-rest ciphers, that is becoming the pattern.
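One concrete way to express that stepping-stone approach, here using Python’s standard ssl module: pin a floor, not a ceiling, so a device refuses yesterday’s protocols while remaining free to negotiate tomorrow’s. (TLS 1.2 as the minimum is my assumption; choose the floor appropriate to your fleet.)

```python
import ssl

# Start from secure library defaults, then raise the floor explicitly.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3/TLS 1.0/TLS 1.1
# Deliberately no maximum_version pin: a firmware update can bring TLS 1.3+
# and this context will negotiate it without a code change here.
```

The same shape applies to cipher suites and at-rest algorithms: forbid the known-weak ones explicitly, and leave the upper end open for what doesn’t exist today.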

Security architecture is not an after-thought that can be left on the shelf of unfulfilled requirements, but a core enabler of business models.