Someone posed the question recently: how did you get started in using AWS?
Once upon a time… I was working in London (2003–2010). During my time at Vibrant Media, running the IT operations team for their contextual advertising platform, I was looking for ways to serve content and process requests efficiently.
Vibrant had thousands of customers, and comScore reporting indicated our advertising services were seen by some 49% of the US population each month (the platform was world-wide, but the comScore report covered the US market). It was fairly busy.
In 2008 I stumbled across AWS, which had launched in 2006. At that time the controls were rudimentary, and the architectural patterns for VPC did not suit our requirements (all traffic from the VPC had to egress to the customer VPN – no IGW!). So I parked the idea and moved on.
In 2010 I returned to Australia, and was approached by the team at Netshelter to implement a crawler for forum sites to identify the influencers in the network. Unlike my previous role at Vibrant, Netshelter had no data centres, no infrastructure, just AWS.
It was Richard Brindley who said: “we just have AWS, don’t worry about the bill, because anything you do in AWS is going to be vastly cheaper than what we would have done on premises”.
With only myself to architect, implement and operate the solution, I had to find ways to make myself scale. Platform as a service – managed components – was key. Any increase in price was worth it if it meant I didn’t have to deal with the details of operations.
As a Linux developer and System Admin for the 15 years prior to that, I started with the EC2 platform. Finding images, launching them, and configuring them. Then came the automation of installation: scripting the deployment of packages as required for the code I was writing (back then, in Perl).
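That kind of launch-time automation is typically wired into EC2 as user data: a script the instance runs once, as root, on first boot. A minimal sketch, assuming an Amazon-Linux-style image – the package list, artifact URL, and paths here are all illustrative, not from the original setup:

```shell
#!/bin/bash
# EC2 user data runs once, as root, on first boot.
set -euo pipefail

# Install the packages the worker code depends on
# (illustrative list for a Perl-based crawler).
yum install -y perl perl-libwww-perl perl-DBD-MySQL

# Fetch and unpack the worker code from an internal
# artifact host (hypothetical URL and paths).
mkdir -p /opt/crawler
curl -fsSL https://artifacts.example.internal/crawler.tar.gz \
  | tar -xz -C /opt/crawler

# Start the worker in the background.
/opt/crawler/bin/worker.pl &
```

The point of putting this in user data rather than baking it into an image is that freshly launched instances configure themselves, so scaling out needs no manual steps.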
Pretty quickly, I realised I needed to scale horizontally to get through the work, and I would need some capability to distribute that work. I turned to SQS, and within a day had the epiphany that a reliable queue system was more important than a fleet of processing nodes. Individual nodes could fail, but a good approach to queuing and message processing could overcome many obstacles.
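That epiphany rests on SQS semantics: at-least-once delivery with a visibility timeout, so a message handed to a node that dies simply reappears for another node. The idea can be sketched without touching AWS at all – this is a local simulation of those semantics, not the real SQS API (in today’s boto3 the equivalent calls are `receive_message` and `delete_message`):

```python
import time
from collections import deque

class MiniQueue:
    """Toy queue mimicking SQS at-least-once semantics: a received
    message stays invisible until its visibility timeout expires;
    only an explicit delete removes it permanently."""

    def __init__(self, visibility_timeout=2.0):
        self.visibility_timeout = visibility_timeout
        self._messages = deque()   # (id, body) waiting for a worker
        self._in_flight = {}       # id -> (body, redelivery deadline)
        self._next_id = 0

    def send(self, body):
        self._messages.append((self._next_id, body))
        self._next_id += 1

    def receive(self):
        # First, return any expired in-flight messages to the queue:
        # their worker presumably died without acknowledging.
        now = time.monotonic()
        for mid, (body, deadline) in list(self._in_flight.items()):
            if now >= deadline:
                del self._in_flight[mid]
                self._messages.append((mid, body))
        if not self._messages:
            return None
        mid, body = self._messages.popleft()
        self._in_flight[mid] = (body, now + self.visibility_timeout)
        return mid, body

    def delete(self, mid):
        # Worker acknowledges success; the message is gone for good.
        self._in_flight.pop(mid, None)

q = MiniQueue(visibility_timeout=0.01)
q.send("crawl forum 1")
mid, body = q.receive()
# Simulate a node crash: no delete() call, so after the visibility
# timeout the same message is redelivered to the next worker.
time.sleep(0.02)
mid2, body2 = q.receive()
q.delete(mid2)  # the second worker succeeds and acknowledges
```

Nothing about an individual worker matters here: as long as the queue is durable and un-acknowledged work reappears, nodes are disposable.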
To store my results, I needed a database. I had been MySQL certified for years: writing stored procedures, creating schemas, and managing server updates. All of which was fascinating, but time consuming. RDS MySQL was the obvious choice to save me time.
As VPC capability evolved, additional layers of security became easier to implement without introducing Single Points of Failure (SPOFs), or pinch-points and bottlenecks.
From an Australian perspective, this was an interesting era: it was pre-Region in Australia. That meant that, at that time, most organisations dismissed cloud as not being applicable to them. True, some organisations addressing European and US markets were all-in, but latency and fears around the then-relevant Patriot Act kept usage low (this obviously changed in 2012!).
But in essence, the getting-started advice of not worrying about the bill – relative to the equivalent all-in cost of co-location fees, bandwidth commitments, compute and storage hardware, rack-and-stack time, and the overhead of managing all these activities – meant that the immediacy and control of an AWS environment was far more effective.
I didn’t go wild on cost. Keeping an eye on the individual components meant the total charges remained sensible. As they say, look after the pennies, and the pounds look after themselves.
What was key was the approach to continuously learn. And then relearn something when it changed slightly, or unlearn past behaviours that no longer made sense.
It was also useful to always push the boundaries; reach out and ask service teams to add new capabilities, be they technical, compliance, policy, etc.
How would I start today…well, that’s another article for another day….