Gartner Magic Quadrant for Cloud 2018: Half the players decimated

It’s as if the left-hand side of the 2017 Gartner Cloud MQ just imploded! And just as interestingly, most of those that were in the Visionaries quadrant are now relegated to the Niche Players sector.

Only six players remain compared to last year’s 14. Let’s see who has gone this year:

  • Skytap
  • NTT
  • Joyent
  • Interoute
  • Fujitsu
  • Rackspace
  • CenturyLink
  • Virtustream

It’s no surprise to those close to the ground that the only real survivors thus far are AWS and Microsoft Azure, with Google barely making it into the top-right corner; it looks as though they just flopped over the line from last year, but that’s progress nonetheless. And while the gap between the top two is closing, AWS is still far ahead in “Ability to Execute”, and slightly ahead in “Completeness of Vision”.

If you had chosen one of the departed eight as your cloud provider, it’s time to question what their strategy is, and what yours is. Gartner may simply have raised its inclusion requirements, filtering some of these players out, and that alone may have no impact on those providers. But it may bode poorly for their sales prospects going forward, which in turn could mean a long-term downward trend, increased cost per customer, and loss of economies of scale.

Meanwhile, the rate of innovation, whether by service improvement, refinement, or new offerings, continues.

AWS VPCs: Calculating Subnets in CloudFormation

Virtual Private Cloud is a construct in AWS that gives the customer their own, er, virtual network for the deployment of network-based resources such as virtual machines and more. It’s been around for nearly a decade, and is a basic construct that helps secure those resources within an AWS Region.

CloudFormation is the templating language and service (text, in either YAML or JSON) that takes a definition of the resources you would like configured and creates them for you, saving you the hassle of navigating the web console for hours or scripting up what could be thousands of API create calls.
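As a minimal illustration (the stack name and the single-VPC template below are my own examples, not from this post), a short template plus one API call is all it takes for CloudFormation to create the resources on your behalf:

```python
import boto3

# A deliberately tiny CloudFormation template: a single VPC. Real templates
# can declare hundreds of resources that would otherwise need many API calls.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="demo-vpc", TemplateBody=TEMPLATE)
```

CloudFormation then creates (and, on stack deletion, removes) everything declared in the template, so the template itself becomes the artefact you review, version, and re-deploy.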

VPCs can be quite complex; they can specify subnets for resources across multiple Availability Zones within a Region, define routing tables, Endpoints to create, and much more. So it probably comes as no surprise that managing a VPC via CloudFormation is a natural desire. The configuration of the virtual network for a workload needs to be as manageable in a CI/CD fashion as the workload that will live in it.

But there has often been one limitation in making this simple: mathematics.
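To give a flavour of the arithmetic involved, here is that subnet maths sketched outside CloudFormation with Python’s ipaddress module (the VPC range and Availability Zone names are placeholders of mine):

```python
import ipaddress

# Placeholder VPC range; carve it into equal /20 subnets, one per AZ.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
subnets = vpc_cidr.subnets(new_prefix=20)

for az, subnet in zip(["ap-southeast-2a", "ap-southeast-2b", "ap-southeast-2c"], subnets):
    print(az, subnet)
# ap-southeast-2a 10.0.0.0/20
# ap-southeast-2b 10.0.16.0/20
# ap-southeast-2c 10.0.32.0/20
```

Doing the same division inside a template, so the subnets recalculate when the VPC range changes, is exactly the part that has historically been awkward, and is what the rest of the post goes on to address.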

AWS Certifications in Perth (II)

I wrote last year about sitting AWS Certifications in Perth. I’ve done another two AWS Certifications in the last month (Networking Specialty, and Cloud Practitioner), and a few things have changed. Gone is Kryterion as the assessment provider, and in has come PSI; this means new venues, and there’s now only one in Perth: 100 Havelock St, West Perth.

It’s a new-ish building I know well; an old friend was working on the top floor for a while, and I spoke to his teams about AWS several times (they became an AWS reference customer). There’s a small Italian-inspired coffee shop on the ground floor (more on this later).

The booking process for exams is much the same, but now via https://aws.training/ (funky new DNS TLD). The certifications with PSI happen via their custom-rigged Kiosk systems: a PC with two webcams, one mounted on the monitor facing the candidate, and one positioned on a mast protruding above the screen, facing down at the desk. With these two cameras, a remote proctor can view the candidate and the desk at all times to ensure no reference materials are being consulted; one person remotely monitoring can theoretically be proctoring multiple candidates in many locations simultaneously (I suspect they are listening, too).

With this custom rig there are only limited seats: in Perth, there are two. The booking process schedules candidates to one of these Kiosks (literally called Kiosk 1 and Kiosk 2), which are located in a small room on the 1st floor of 100 Havelock St, looked after by the friendly Regus staff.

The exam start time is often 8:30am, and the advice in the booking emails recommends turning up 15 minutes before this. By contrast, some non-AWS exams scheduled with PSI on the same Kiosks recommend arriving 30 minutes beforehand. But there’s a catch: the doors on the ground floor do not unlock until around 8:25am, and the Regus desk often isn’t staffed until 8:30am (Regus checks you in and sets you up at the Kiosk).

Unlike at the Kryterion centres, this doesn’t seem to be a big problem (previously, being just a few minutes late was an issue). If you do get there with plenty of time, the aforementioned cafe on the ground floor is open much earlier; they were open at 8:00am the day I arrived early.

Photo ID is critical to have with you; a scanner mounted on the Kiosk rig is used to capture an image of documents like Passports and Drivers’ Licences. You should have two forms of photo ID, but bank cards or similar can supplement them if needed (just cover some of your card numbers for security’s sake). The moderator watching the camera compares the photo ID with the live image of you sitting there.

The assessment interface itself is then very similar, with the addition of a chat window to communicate with the moderator at any time. Feedback comments can be left on questions. I found one question whose multiple-choice answers assumed behaviour that had changed in mid-December (just a few weeks ago), so I left a comment for the AWS certification team and followed up directly with my contacts.

I’ve had no problem scheduling certifications with a week’s notice, but I envisage that as demand grows, the lead time to book a slot may become an issue until more Kiosks are added (or additional venues). But that’s not an issue right now.

AWS GuardDuty: taking on the undifferentiated heavy lifting of network security analytics

GuardDuty is a machine-learning security analytics service for AWS.

Several years ago saw the introduction of AWS CloudTrail, the ‘almost’ audit log of API calls performed by a customer against an AWS Account. This was a huge security milestone: the ability for the customer to play back what they had asked for.

I say ‘almost’, as a critical design decision was that CloudTrail should in no way inhibit the already authenticated API call made by the customer. If the internal logging mechanism of CloudTrail were ever to fail, it should not stop the API call that was issued. Other logging mechanisms in computing may place logging in the critical path of call execution, and if logging fails, the API call fails.

With CloudTrail (and the ability to deliver the logs directly cross-account to a trusted, independent account) came the second task: looking at the data. It’s all JSON text, with a corresponding chain of check-summed and signed digest files, meaning the set of log files cannot be tampered with, and cannot be removed without breaking the chain.

Numerous solutions were put in place, but they were mostly basic, individual pattern matches against single lines of logs: if you see X, then alert with message Y. For example: if there is a ConsoleLogin event and it doesn’t come from XX.YY.ZZ.AA/32, then alert.
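A rule of that sort is only a few lines of code. As a hedged sketch (the file name and allow-listed address are placeholders of mine), matching a single CloudTrail record looks like this:

```python
import json

ALLOWED_CONSOLE_LOGIN_IPS = {"198.51.100.10"}  # placeholder for XX.YY.ZZ.AA

def check_record(record):
    """Return an alert message if a ConsoleLogin comes from an unexpected address."""
    if record.get("eventName") == "ConsoleLogin":
        source_ip = record.get("sourceIPAddress")
        if source_ip not in ALLOWED_CONSOLE_LOGIN_IPS:
            return "Console login from unexpected address: %s" % source_ip
    return None

# CloudTrail delivers log files as JSON documents with a top-level "Records" list.
with open("cloudtrail-log.json") as f:
    for record in json.load(f)["Records"]:
        alert = check_record(record)
        if alert:
            print(alert)
```

It works, but every new check is another hand-written rule, with no correlation across events, accounts, or time.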

Similarly, VPC introduced VPC flow logs, tracking the authorisation or rejection of connections through the VPC (no payload content, just payload size, start time, ports, addresses).

In December, AWS introduced a managed service that takes a private copy of the VPC Flow Logs, a private copy of the CloudTrail logs, and the Route 53 query logs, supplements these with centrally managed, maintained, and updated threat lists, mixes in customer-defined threat lists and white lists, adds a bit of machine learning, and produces much richer alerting.
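Enabling it is a single call per Region; the sketch below (the list name and S3 location are my own placeholders) also attaches a customer-managed threat list:

```python
import boto3

guardduty = boto3.client("guardduty", region_name="ap-southeast-2")

# Turn GuardDuty on in this Region; findings start appearing without any
# agents or log pipelines to build yourself.
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Optionally mix in a customer-defined threat list held in S3 (placeholder URI).
guardduty.create_threat_intel_set(
    DetectorId=detector_id,
    Name="corporate-threat-list",
    Format="TXT",
    Location="https://s3.amazonaws.com/example-bucket/threat-list.txt",
    Activate=True,
)
```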

GuardDuty is not finished yet. At re:Invent, Tom Stickle showed a graph indicating a slew of additional capability coming shortly to GuardDuty, and now that it’s GA, more customers will have feedback and input into the future direction of the service.

However, this doesn’t replace the need to have your own, secured and trusted copy of your CloudTrail logs, and your own alerting for events that you think are particularly significant, such as a SAML Identity Provider being updated with a new Metadata document!
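For that particular example, one way to do it (the rule name, Region, and SNS topic ARN below are placeholders of mine) is a CloudWatch Events rule that matches the CloudTrail event and notifies a topic:

```python
import json
import boto3

# IAM is a global service, so its CloudTrail-based events are delivered in us-east-1.
events = boto3.client("events", region_name="us-east-1")

pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["iam.amazonaws.com"],
        "eventName": ["UpdateSAMLProvider"],
    },
}

events.put_rule(Name="alert-saml-provider-update", EventPattern=json.dumps(pattern))
events.put_targets(
    Rule="alert-saml-provider-update",
    Targets=[{"Id": "notify", "Arn": "arn:aws:sns:us-east-1:123456789012:security-alerts"}],
)
```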

But between this and Amazon Macie (which analyses and helps you review and secure your S3 documents), your visibility of security compliance and issues continues to get even higher.

Web Security 2017

Stronger encryption requirements for PCI compliance are having a good effect: purging the scourge of the web, legacy browsers; and as they disappear, even more client-side security capability becomes available.

I started web development around late 1994. Some of my earliest paid web work is still online (dated June 1995). Clearly, that was a simpler time for content! I went on to be ‘Webmaster’ (yes, for those joining us in the last decade, that was a job title once) for UWA, and then for Hartley Poynton/JDV.com at a time when security became important as commerce boomed online.

At the dawn of the web era, the consideration of backwards compatibility with older web clients (browsers) was deemed to be important; content had to degrade nicely, even without any CSS being applied. As the years stretched out, the legacy became longer and longer. Until now.

In mid-2018, the Payment Card Industry (PCI) Data Security Standard (DSS) 3.2 comes into effect, requiring cardholder environments to use (at minimum) TLS 1.2 for the encrypted transfer of data. Of course, that’s also the maximum version typically available today (TLS 1.3 is at draft 21 at the time of writing). This effort by the PCI is forcing people to adopt new browsers that can do the TLS 1.2 protocol (and the encryption ciphers it permits), typically by running modern/recent Chrome, Firefox, Safari, or Edge browsers. And for the majority of people, Chrome is their choice, and the majority of those auto-update on every release.

Many are pushing to be compliant with the 2018 PCI DSS 3.2 as early as possible; your logging of negotiated protocols and ciphers will show if your client base is ready as well. I’ve already worked with one government agency to demonstrate they were ready, and have already helped disable TLS 1.0 and 1.1 on their public-facing web sites (and, previously, SSL v3). We’ve removed RC4 and 3DES ciphers, and enabled ephemeral-key ciphers to provide forward secrecy.
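Your own logs are the authoritative source, but a quick spot check of what a site will still negotiate can be done from any client with a reasonably current TLS stack (the hostname below is a placeholder):

```python
import socket
import ssl

HOST = "www.example.com"  # placeholder

def handshake_at(version):
    """Return True if the server completes a handshake at exactly this TLS version."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST):
                return True
    except (ssl.SSLError, OSError):
        return False

# Note: the local OpenSSL build must itself still allow the old protocols
# for the TLS 1.0/1.1 probes to be meaningful.
for name, version in [("TLS 1.0", ssl.TLSVersion.TLSv1),
                      ("TLS 1.1", ssl.TLSVersion.TLSv1_1),
                      ("TLS 1.2", ssl.TLSVersion.TLSv1_2)]:
    print(name, "accepted" if handshake_at(version) else "rejected")
```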

Web developers (writing JavaScript and using various frameworks) can rejoice: the age of having to support legacy MS IE 6/7/8/9/10 is pretty much over. None of those browsers support TLS 1.2 out of the box (IE 10 can turn it on, but for some reason it is off by default). This makes JavaScript code smaller, as it doesn’t have to carry conditional code to work around the quirks of those older clients.

But as we find ourselves with modern clients, we can now ask those clients to be complicit in our attempts to secure the content we serve. They understand modern security constructs such as Content Security Policies and other HTTP security-related headers.
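As an illustration (these values are examples of mine, not a recommendation for any particular site), the kind of response headers involved look like this:

```python
# Illustrative security headers a server can send to enlist the browser's help.
SECURITY_HEADERS = {
    # Only load scripts, styles and other content from our own origin.
    "Content-Security-Policy": "default-src 'self'",
    # Ask browsers to speak to us only over HTTPS for the next year.
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    # Refuse to be framed by other sites (clickjacking protection).
    "X-Frame-Options": "DENY",
    # Don't let the browser second-guess declared content types.
    "X-Content-Type-Options": "nosniff",
}
```

Each of these is only a request from the server; it is the modern browser that enforces them on the client side.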

There are two tools I am currently using to help in this battle to improve web security. One is SSLLabs.com, the work of Ivan Ristić (and now owned/sponsored by Qualys). This tool gives a good view of the encryption in flight (protocols, ciphers), the chain of trust (certificate), and, as a recent addition, checks DNS for CAA records (a capability I and others piled onto a feature request for AWS Route53 to support). The second tool is Scott Helme’s SecurityHeaders.io, which looks at the HTTP headers that web content uses to ask browsers to enforce security on the client side.

There’s a really important reason why these tools are good: they are maintained. As new recommendations emerge for ciphers, protocols, signature algorithms, or other measures, these tools are updated. And they are produced by very small but agile teams (one-person teams, in some cases), without the bureaucracy and lag associated with large enterprise tools. But they shouldn’t be used blindly. These services make suggestions, and you should research them yourself; not all of the recommendations may suit your risk profile. Personally, I’m uncomfortable with Public-Key-Pins, so that can wait for a while; indeed, Chrome has now signalled it will drop support for it.

So while PCI is hitting merchants with its DSS-compliance stick (and making it plainly obvious what they have to do), we get a side effect: a concrete reason for drawing a line under how far back our backward compatibility must stretch, and the ability to have the web client assist in ensuring the security of the content we serve.