AWS Zero to Hero in a few hours: Environment creation and Deployment at speed

A colleague approached me asking me to create a new environment for him. I had previously helped create a CloudFormation template defining an enterprise VPC, and he had already created CloudFormation templates for his workloads in his development environment.

I wasn’t intending this to be a race, but it went pretty quickly because we’d already prepared our templates for other environments. So, starting around 10:30am, here’s what we did:

  1. Create a new email Distribution List for the master (root) account on the corporate mail system.
  2. Sign up a new account, and lock it down. Hardware MFA for the root account, an IAM Group (Admins) for local IAM users, and an initial IAM user (me) with MFA turned on for the user. A Customer Managed Policy (Administrator access, but with an IP Address Condition, as sketched after this list) assigned to the IAM Group. Password policy enabled, and STS disabled in all Regions except US East 1 and our commonly used (and closest) Region. Set the challenge–response questions for support, and adjust communications preferences (to none).
  3. Configure the SAML (AD FS) provider for the organisation. Create several IAM roles for Web SSO via the SAML IDP: Network team, Security team.
  4. Create an AWS cross-account IAM role for the security account, with read-only privileges (also sketched below). This lets the proactive DevSecOps tooling kick in.
  5. In our security account, adjust the S3 bucket policy to permit the new workload account to log to it. In my billing account, issue a Consolidated Billing invite to the new workload account.
  6. Back in my new workload account, accept the Consolidated Billing request, and configure a global (multi-region) CloudTrail trail delivering to the security account bucket above (see the second sketch below). This then flows automatically into my security log processing and alerting.
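
For reference, here's a minimal CloudFormation sketch of the IAM pieces from steps 2 and 4 above. The group name, source CIDR and security account ID are placeholders, not the real values:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Sketch of the IP-restricted admin policy and the cross-account audit role

Resources:
  # Step 2: Customer Managed Policy - administrator access, but only from the corporate egress range
  AdminFromOfficePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Groups:
        - Admins                               # the local Admins IAM Group
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action: '*'
            Resource: '*'
            Condition:
              IpAddress:
                'aws:SourceIp': 203.0.113.0/24   # placeholder corporate egress CIDR

  # Step 4: read-only role that the central security account can assume
  SecurityAuditRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: arn:aws:iam::111111111111:root   # placeholder security account ID
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/ReadOnlyAccess
```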
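
And for step 6, a sketch of the multi-region trail, assuming a placeholder name for the security account's central CloudTrail bucket. The bucket policy adjustment from step 5 lives in the security account and must already allow cloudtrail.amazonaws.com to write under the new account's prefix:

```yaml
Resources:
  # Step 6: one multi-region trail, delivering into the security account's central bucket
  GlobalTrail:
    Type: AWS::CloudTrail::Trail
    Properties:
      IsLogging: true
      IsMultiRegionTrail: true
      IncludeGlobalServiceEvents: true
      S3BucketName: central-security-cloudtrail   # placeholder bucket owned by the security account
```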

By this time it was around noon, and we were ready to create our VPC, after taking an IP allocation from the overall topology (I’ve been using a /20 CIDR block for most VPCs, locating either one significant workload or multiple smaller workloads in each VPC).

I’ve maintained my VPC Template for some time, progressively adding more “baseline” features that I love to have present, and sharing it with those around me to help accelerate (and standardise) their environments. A trimmed sketch of the template follows the list below; it includes:

  • VPC designed across three AZs, but with addressing consistency and space to add a fourth AZ for each allocation, with:
    • A set of subnets for Internal Load Balancers. No route to the Internet for these.
    • A set of (small) subnets for (relational) databases. No route to the Internet here again.
    • A set of subnets for direct Internet access, such as for External Load Balancers and services facing the Internet directly, routing to the Internet via the IGW.
    • A NAT Gateway per AZ (housed in the Internet subnets above)
    • A set of subnets for Application servers, each using the NAT Gateway in its AZ, hence a route table per AZ.
    • A set of subnets for other “backend” servers, in case there is anything else that we’ll want to segregate out from the Application servers internally.
  • VPC Flow Logging
  • CloudWatch logs group with retention period set
  • SNS Topics for app servers to send default Alerts and Escalations to.
  • VPC Endpoints for S3
  • RDS DB Subnets
  • etc, etc.
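
The full template is too long to reproduce here, but a heavily trimmed sketch gives the flavour (one AZ shown; names, subnet sizes and the retention period are illustrative, and the NAT Gateways, route tables and remaining subnets are omitted):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Trimmed sketch of the baseline VPC template

Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/20
      EnableDnsSupport: true
      EnableDnsHostnames: true

  # One of the Internal Load Balancer subnets (no route to the Internet)
  InternalELBSubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [0, !GetAZs '']
      CidrBlock: 10.0.0.0/26

  # CloudWatch Logs group with a retention period, receiving the VPC Flow Logs
  FlowLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      RetentionInDays: 90

  FlowLogRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: vpc-flow-logs.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: flow-logs-to-cloudwatch
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                  - logs:DescribeLogGroups
                  - logs:DescribeLogStreams
                Resource: '*'

  VPCFlowLog:
    Type: AWS::EC2::FlowLog
    Properties:
      ResourceId: !Ref VPC
      ResourceType: VPC
      TrafficType: ALL
      LogGroupName: !Ref FlowLogGroup
      DeliverLogsPermissionArn: !GetAtt FlowLogRole.Arn

  # Default alert and escalation topics for the app servers
  AlertsTopic:
    Type: AWS::SNS::Topic
  EscalationsTopic:
    Type: AWS::SNS::Topic

  # Keep S3 traffic on the AWS network
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref VPC
      ServiceName: !Sub com.amazonaws.${AWS::Region}.s3
```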

The separation of Internal Elastic Load Balancers into their own contiguous CIDR block is to make my life simpler with the traditional on-premise firewalls. Naturally I expose my ELBs to my clients, but I am also required to authorise the on-premise firewall to egress traffic into our VPC. Super-netting the three per-AZ blocks (plus the reserved fourth) into one contiguous range makes this a single destination rule on that legacy equipment. For example:

  • ELB in AZ A: 10.0.0.0/26
  • ELB in AZ B: 10.0.0.64/26
  • ELB in AZ C: 10.0.0.128/26
  • Reserved block if there were an AZ D: 10.0.0.192/26
  • Total range for ELBs in one CIDR: 10.0.0.0/24

It’s important to observe the natural block boundaries of CIDR ranges, so choose carefully and use web tools (or let CloudFormation do the arithmetic, as sketched below) to help with address calculations. As noted above, there’s some leftover space that I’m not currently allocating: that’s the price of being prepared for the future in an IPv4 world, but it’s better than having to re-jig subnets after the workload is live (I’ve had to do this with significant government workloads in order to switch from two AZs to three, but I’m glad I did for several reasons).
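
CloudFormation's Fn::Cidr function can in fact do this subnet arithmetic for you. A small sketch, assuming the /24 supernet above and a hypothetical exported VPC ID named BaseVPCId:

```yaml
Parameters:
  InternalELBSupernet:
    Type: String
    Default: 10.0.0.0/24       # the single range authorised on the on-premise firewall

Resources:
  # Fn::Cidr carves the /24 into four /26 blocks (6 host bits); index 3 stays reserved for a future AZ D
  InternalELBSubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !ImportValue BaseVPCId          # hypothetical export from the VPC stack
      AvailabilityZone: !Select [0, !GetAZs '']
      CidrBlock: !Select [0, !Cidr [!Ref InternalELBSupernet, 4, 6]]

  InternalELBSubnetB:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !ImportValue BaseVPCId
      AvailabilityZone: !Select [1, !GetAZs '']
      CidrBlock: !Select [1, !Cidr [!Ref InternalELBSupernet, 4, 6]]
```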

After about 10 minutes of VPC creation, we were ready for the Direct Connect sub-interfaces to drop in and provide initial connectivity back to the on-premise network, supplemented by a VPN over the Internet at a lower priority, preferenced by BGP weightings.
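
The VPN side of that can also be templated. A sketch, assuming the same hypothetical BaseVPCId export and placeholder ASN and peer address; the actual path preference is largely a matter of BGP configuration on the on-premise routers:

```yaml
Resources:
  # Virtual private gateway that both the Direct Connect VIF and the backup VPN attach to
  VPNGateway:
    Type: AWS::EC2::VPNGateway
    Properties:
      Type: ipsec.1

  VPNGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !ImportValue BaseVPCId       # hypothetical export from the VPC stack
      VpnGatewayId: !Ref VPNGateway

  # On-premise endpoint for the backup IPsec tunnels
  CustomerGateway:
    Type: AWS::EC2::CustomerGateway
    Properties:
      Type: ipsec.1
      BgpAsn: 65000                       # placeholder on-premise ASN
      IpAddress: 198.51.100.10            # placeholder public IP of the on-premise device

  # Dynamic (BGP) VPN; Direct Connect routes are preferred over VPN routes by default
  BackupVPN:
    Type: AWS::EC2::VPNConnection
    Properties:
      Type: ipsec.1
      StaticRoutesOnly: false
      CustomerGatewayId: !Ref CustomerGateway
      VpnGatewayId: !Ref VPNGateway
```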

After this came a few S3 buckets: one for holding software and associated ‘artefacts’ for the development cycle, and another for holding logging data (ELB, S3, etc.). A quick switch to the Development account to add a read-only policy on the Development ‘release’ bucket, and artefacts were ready to be pulled into this Test environment.
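
That cross-account read comes down to a bucket policy applied in the Development account. A sketch, with placeholder bucket name and Test account ID:

```yaml
Resources:
  # Applied in the Development account: let the new Test account read released artefacts
  ReleaseBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: dev-release-artefacts                   # placeholder bucket name
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: TestAccountReadOnly
            Effect: Allow
            Principal:
              AWS: arn:aws:iam::222222222222:root     # placeholder Test account ID
            Action:
              - s3:GetObject
              - s3:ListBucket
            Resource:
              - arn:aws:s3:::dev-release-artefacts
              - arn:aws:s3:::dev-release-artefacts/*
```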

After an initial sync, the CloudFormation templates for the EC2-based workloads were ready to roll, with parameters for the new logging destinations, artefact sources from S3 buckets, and subnet options.
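
Those parameters look roughly like this (the names here are illustrative, not the actual template's):

```yaml
Parameters:
  ArtefactBucket:
    Type: String
    Description: S3 bucket holding the release artefacts for this environment
  LogBucket:
    Type: String
    Description: S3 bucket receiving ELB and S3 access logs
  AppSubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Description: Application subnets (one per AZ) from the baseline VPC
  AlertsTopicArn:
    Type: String
    Description: SNS topic for default alerts from the app servers
```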

By 4pm the workload was up and stable, ready for the 6pm call to resize all Auto Scaling groups to zero, which would be reversed at 6am.
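
If you'd rather not rely on a person making that call, the same resize can be expressed as scheduled actions in the workload template. A sketch, assuming a hypothetical AppAutoScalingGroup resource and placeholder daytime sizes, and noting that Recurrence is evaluated in UTC:

```yaml
Resources:
  # Scale the fleet to zero outside business hours
  ScaleInEvening:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: !Ref AppAutoScalingGroup   # assumed to be defined elsewhere in the stack
      MinSize: 0
      MaxSize: 0
      DesiredCapacity: 0
      Recurrence: '0 18 * * *'        # 6pm

  ScaleOutMorning:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: !Ref AppAutoScalingGroup
      MinSize: 2                      # placeholder daytime sizing
      MaxSize: 6
      DesiredCapacity: 2
      Recurrence: '0 6 * * *'         # 6am
```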

Now, this wouldn’t be possible without the support of the networking team looking after the on-premise routers and the Direct Connect VLAN allocations, or the enterprise server team creating the email Distribution List and the Claims on the SAML Identity Provider (IDP): it’s as a team that we manage to achieve such velocity of delivery.

But the real key to all of this is templating and automation. Managing changes via a template makes them repeatable; that’s what makes updates to CloudFormation just as exciting for me as updates to the services you can configure yourself via the CLI, API or Web Console.


If you’re interested in AWS and Security, then please check out my training at https://nephology.net.au/, where in a two-day in-person class we go above and beyond the standard AWS courses to ensure you have the knowledge and are prepared for the agile world of running and securing environments in the AWS Cloud. Every student on our course gets a complimentary Gemalto hardware MFA for use with any AWS account.