AWS Certification trends (on LinkedIn)

I am always trying to find great talent; it’s part of being a Practice Lead in a large consulting organisation to find and develop talent. I work with a team of recruiters who are constantly finding and screening people for the many roles we have.

I’ve been a big proponent of the AWS Certifications for a number of reasons, amongst which are the value and confidence they give to the holder, the value to the partner, and the value to the customer. I helped contribute questions to the AWS Solution Architect Professional certification in 2014 whilst passing through Herndon, near Washington DC, as an AWS employee, and again in February 2020 in San Francisco as an industry Subject Matter Expert, just before COVID-19 started closing down travel.

Today I took to LinkedIn, and did a search for the various AWS Certifications, and found a tally that looked interesting. These numbers are by no means authoritative, and could just be a reflection of the network of connections that I have.

AWS Certification                                        Tally   Launch Year   #/Year (to 2020)
Solution Architect Associate*                          311,000   2013          44,428
Developer Associate*                                   189,000   2014          31,500
Cloud Practitioner*                                    103,000   2017          34,333
Solution Architect Professional*                        94,000   2014          15,667
DevOps Engineer Professional*                           57,000   2014           9,500
SysOps Associate*                                       29,000   2017           9,667
Security Specialty*                                     12,000   2018           6,000
Networking Specialty*                                    7,800   2018           3,900
Database Specialty*                                      7,200   2019           7,200
Data Analytics Specialty                                 6,300   2019           6,300
Big Data Specialty (retired/renamed to Data Analytics)  81,000   2014–2019     16,200 #
Machine Learning Specialty                               5,300   2019           5,300
Alexa Skill Builder Specialty                              546   2019             549
AWS Certifications as found on LinkedIn, 18/9/2020. * Denotes certifications I hold. # Calculated only over the five years this was active.
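The #/Year column is just the tally divided by the number of years the certification had been available by 2020, truncated to a whole number. A quick sketch (using a few of the figures from the table above) reproduces it:

```python
# Reproduce the "#/Year (to 2020)" column: tally divided by years active,
# truncated to a whole number, which matches the figures in the table.
TALLIES = {
    "Solution Architect Associate": (311_000, 2013),
    "Developer Associate": (189_000, 2014),
    "Cloud Practitioner": (103_000, 2017),
}

def per_year(tally: int, launch: int, as_of: int = 2020) -> int:
    """Average certifications per year since launch, truncated."""
    return int(tally / (as_of - launch))

rates = {name: per_year(t, launch) for name, (t, launch) in TALLIES.items()}
# e.g. Solution Architect Associate: 311,000 over 7 years -> 44,428
```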

With such a low number for the Alexa certification, I expect the source numbers are not complete. Many people in certain industries (e.g., intelligence services) will not put their profile online.

But regardless, let’s review what we see…

The clear winner is the venerable Solution Architect Associate, with the largest number per annum and the largest number in total. It’s seen as the initial certification among the technical certs, and is regularly reported as one of the most valuable in the industry with respect to salary expectations. It’s also the cert I have held longest, having been part of the very first cohort to pass it in January 2013.

While the Developer Associate certification is in second place by total number, it is just eclipsed, on a yearly basis, by the number of people who have taken the Cloud Practitioner Foundational certification. The Cloud Prac is billed as an entry-level, non-technical certification, so its appeal is to an even wider audience: the technical team can obtain it relatively easily, and the non-technical roles involved in total service delivery can achieve it as well.

At the Professional level, it seems the demand for certified Architects outweighs the DevOps Engineers almost 2:1; I suspect this is a natural progression from that initial SA Associate.

The Data Analytics certification replaced the original Big Data cert last year; this gives us an insight into the change in demand. Over its active lifetime, Big Data drove 16,200 certifications per year; its replacement sits at almost a third of the prior demand. Perhaps the data analytics hype is stabilising?

The total number of certifications reported above is 903,146; just shy of a million certifications in 7 years (and probably more, given the incompleteness of the data), excluding re-certifications (now required every 3 years).

Let’s see what this looks like a year from now. New AWS certifications will likely launch, continuing to help validate and differentiate experienced Cloud engineers.

Goodbye, iiNet.

I first found my way online around 1991, calling into BBSes in Australia (such as Dialix). When I arrived at the University of Western Australia as an undergraduate in 1994, the ISPs were starting to be born, and I subscribed to the Omen Internet ISP, run by Mark Dignam and his brother.

In the early years of market acquisition, Omen was consumed by iiNet, and thus I became an iiNet subscriber on ADSL. iiNet was itself founded and staffed by friends across the industry in Perth, and during this period there was plenty of innovation happening in the organisation across ADSL and DSLAMs, connectivity, routing, and speed. They were one of the first to offer naked ADSL – not requiring a telephone number subscription.

I pushed off to the UK in 2003, and upon my 2010 return, I subscribed again to iiNet. They had been Perth-based and started by friends, and plenty of friends had worked there in engineering and other roles.

As I had experimented with IPv6 tunnels back in 1999 from UWA, I looked for this at iiNet, and found only 6rd, a tunnelled/encapsulated IPv6 offering. The downside of this was that packets were tunnelled from the customer site to iiNet in Sydney. As I was in Perth, my IPv4 traffic would hit local cache endpoints, but IPv6 would traverse 50ms of Australia before getting the chance to peer with cache services. It was… sub-optimal.

However, iiNet started an IPv6 blog, which showed promise that technical engineering was continuing.

Sadly, that last saw a post in 2013, and since then, crickets.

But that was not all.

It seemed that every year, iiNet would silently introduce a new service plan. This plan would be nearly identical to the one customers were already on, but with a slightly larger data allowance, or slightly faster, or slightly cheaper: in some sense, always better. Customers would have to notice this newer plan, and then request to move to it (later, this became self-service in the iiNet customer Toolbox). But it always took an action by the customer to ensure they continued to get value from iiNet. This meant that customers couldn’t trust iiNet to keep them on the best option. I recently discovered I was paying $10/month more than other customers for the same speed internet access, still with unlimited downloads.

I don’t think this is overly customer focused. They are looking after their own interests, rather than long-term customer satisfaction and retention.

So coupled with the decline in engineering, and faced with an impressive price/performance offer from a competitor, I have finally churned away from iiNet.

I contacted Aussie Broadband on Monday 1st June at 9am. By 11am the same day I had a 1Gb/s internet connection with native IPv6 enabled (FTTP). I am in the process of porting over the home phone number I have had with iiNet for a decade.

Am I paying more? Yes.

Is it better? Yes.

I went from a 50/20 NBN unlimited plan, with a VoIP service with all Australian land-line and mobile calls included, at AU$89/month (iiNet’s newer offering was $79). I ended up on a 1000/50 NBN unlimited plan, with a VoIP service with all land-line and mobile calls included, for $169/month. 2x the price, 20x the speed.

Does that make it 10 times better? Hmm….

But more importantly, as they introduced this plan, Aussie indicated that their existing customers on legacy, more expensive yet slower plans would be migrated to it without having to lift a finger. Proactively better for their existing customers.

This breeds customer trust.

So with two knock-out blows, the decline in engineering innovation and the lack of customer focus, I finally pulled the pin, giving up on the hope that the iiNet of old would engineer its way towards being a modern ISP with a strong customer focus.

I have had a number of friends and colleagues move to Aussie Broadband in the last few months, and thus far I haven’t seen anyone have any issues that weren’t resolved quickly and capably. What held me back was my included phone number via iiNet, but that has now been ported across to Aussie as well.

How did you get started in AWS?

Someone posed the question recently: how did you get started in using AWS?

Once upon a time…. I was working in London (2003-2010), and during my time at Vibrant Media running the IT operations team for their contextual advertising platform, I was looking for ways to serve content and process requests efficiently.

Vibrant had thousands of customers, and comScore reporting indicated our advertising services were seen by some 49% of the US population each month (the platform was world-wide, but the comScore report was for the US market). It was fairly busy.

In 2008 I stumbled across AWS (launched in 2006). At that time the controls were rudimentary, and the architectural patterns for VPC did not suit our requirements (all traffic from the VPC had to egress via the customer VPN; there was no IGW!). So I parked the idea, and moved on.

In 2010 I returned to Australia, and was approached by the team at Netshelter to implement a crawler for forum sites to identify the influencers in the network. Unlike my previous role at Vibrant, Netshelter had no data centres, no infrastructure, just AWS.

It was Richard Brindley who said: “we just have AWS, don’t worry about the bill, because anything you do in AWS is going to be vastly cheaper than what we would have done on premises”.

With only myself to architect, implement and operate the solution, I had to find ways to make myself scale. Platform as a Service, with managed components, was key. Any price premium was worth it, as it meant I didn’t have to deal with the details of operations.

As a Linux developer and system admin for the 15 years prior, I started with the EC2 platform: finding images, launching them, and configuring them. Then came the automation of installation: scripting the deployment of packages required for the code I was writing (back then, in Perl).

Pretty quickly, I realised I needed to scale horizontally to get through the work, and I would need some capability to distribute it. I turned to SQS, and within a day had the epiphany that a reliable queue system was more important than a fleet of processing nodes. Individual nodes could fail, but a good approach to queuing and message processing could overcome many obstacles.
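Stripped of the AWS specifics, the pattern looks something like the sketch below: Python’s standard-library queue stands in for SQS, and re-queueing on failure stands in for the SQS visibility timeout (the names and structure here are mine for illustration, not from the original crawler):

```python
import queue
import threading

def run_workers(items, process, n_workers=4):
    """Fan work out to n_workers threads via a shared queue.

    The queue, not the workers, is the reliable part: if process()
    raises, the item goes back on the queue (SQS does this via its
    visibility timeout), so any individual worker can die without
    losing work.
    """
    tasks = queue.Queue()
    for item in items:
        tasks.put(item)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                item = tasks.get_nowait()
            except queue.Empty:
                return  # queue drained, worker retires
            try:
                out = process(item)
                with lock:
                    results.append(out)  # success == delete_message
            except Exception:
                tasks.put(item)  # redeliver, like an expired visibility timeout
            finally:
                tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The real thing used SQS long polling and message deletion on success, but the key property is the same: the work survives the workers.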

In storing my results, I needed a database. I had been MySQL certified for years, writing stored procedures, creating schemas, and managing server updates. All of which was fascinating, but time consuming. RDS MySQL was the obvious choice to save me time.

As VPC capability evolved, additional layers of security became easier to implement without introducing Single Points of Failure (SPOFs), or pinch-points and bottlenecks.

From an Australian perspective, this was an interesting era: it was pre-Region in Australia. That meant that, at that time, most organisations dismissed cloud as not being applicable to them. True, some organisations addressing European and US markets were all-in, but latency and fears around the then-relevant Patriot Act kept usage low (this obviously changed in 2012!).

But in essence, the getting-started advice of not worrying about the bill, relative to what the equivalent all-in cost would have been for co-location fees, bandwidth commitments, compute and storage hardware, rack-and-stack time and costs, and the overhead of managing all these activities, meant that the immediacy and control of an AWS environment was far more effective.

I didn’t go wild on cost. Keeping an eye on the individual components meant the total charges remained sensible. As they say, look after the pennies, and the pounds look after themselves.

What was key was the approach to continuously learn. And then relearn something when it changes slightly, or unlearn past behaviours that no longer made sense.

It was also useful to always push the boundaries; reach out and ask service teams to add new capabilities, be they technical, compliance, policy, etc.

How would I start today…well, that’s another article for another day….

Writing (some of) the questions for the AWS Solution Architect Professional Certification

Writing the SA Professional questions in San Francisco.


During my time with AWS, I also helped contribute to an early set of questions for the then-in-development Solution Architect Professional certification. My contributions drew upon my many years of involvement in Linux and Open Source, as well as my time as the AWS Security Solution Architect for Australia and New Zealand.

As time (and I) moved on, I continued to sit more AWS certifications; at this time, I hold 8 AWS Certifications, and am awaiting the results of the new Database Specialty certification. I’ve written many times about sitting these certifications, and given guidance to friends and colleagues on sitting them. I’ve watched as the value of these certifications to an individual has increased, making them amongst the most respected, and best paid, certifications in the technology field.

The attention to detail in running the certifications is high. The whole point of a certification is to discriminate fairly between those who have the required capability to perform a task and those who do not. If the certification were too easy, it would undermine its value to those who are more adept in the topic.

Of course, the certification itself is not based on the same static set of questions. Some questions get invalidated over time as features get released and updated. Some services fall out of fashion, and new services are born that become critical (could you imagine running today without CloudTrail enabled?).

The questions for these certifications sit in a pool; each time a candidate sits a certification, a subset of the currently active questions is presented to them. The order of the questions is not fixed. The likelihood of two people getting the same questions, in the same order, is extremely low.
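In effect, each candidate’s paper is a random draw without replacement from the active pool. A toy sketch of that selection (my illustration only; the real process also balances questions across exam domains, which this simple draw ignores):

```python
import random

def build_exam(pool, n_questions, seed=None):
    """Draw one candidate's paper from the active question pool.

    random.sample draws unique items in random order, so both the
    question set and the question order differ between candidates.
    """
    rng = random.Random(seed)  # seed only to make the sketch reproducible
    return rng.sample(pool, n_questions)
```

With a few hundred active questions and ~75 presented per sitting, the number of possible papers is astronomically large, which is why two candidates essentially never see the same exam.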

However, over time, the pool runs low. Questions expire. New questions are needed.

Transamerica Building, San Francisco

In January 2020, I received a request to attend a question-writing workshop as a Subject Matter Expert (SME) for the Solution Architect Professional certification.

These workshops bring together some of the most capable, experienced AWS Cloud engineers on the planet. The goal is not to write questions that none of us could pass, but questions that all of us could pass that would bring more people into this tier.

Travel there

Arriving on Sunday, I managed to make it to my hotel, and then out to dinner with some dear friends and former colleagues from a decade ago who live in and around San Francisco.

Monday was a work day, so I was in the Modis office in San Francisco, talking to our team there about our cloud practice in Australia.

Corey Quinn, @QuinnyPig, Cloud Economist, and James Bromberger having a coffee catch-up in Union Square

I was also lucky enough to cross paths with Corey Quinn, whom I had met when he came to Perth for the Latency conference in 2018. A quick coffee, and we realised we knew a fair number of people in common, across AWS, and the UK and Australia.

Consul-General Nick Nichles speaks at the 111 Mimosa Gallery and the Austrade Cybersecurity event

As timing would have it, there was an AISA and NAB sponsored trade delegation, with the Australian Consul-General hosting an event in town on the Monday evening. Many people were in town for the popular RSA Conference, so I popped along.

Small world it is, running into Andrew Woodward of ECU and Graeme Speak of Bankvault, both from Perth. I was also recognised from my AISA presentations over the last few years…

The Bay Bridge

The exam workshop

14 Subject Matter Experts (SMEs) from around the world gathered in San Francisco for the question-writing workshop. Their backgrounds were varied, from end customers at massive national broadcasters, to finance workloads, government, and more.

Much time was spent trying to strike a fair balance of what should be passable, and trying to ensure the expression of the problems, and the answers, were as clear and unequivocal as possible.

The 2020-02-25 to 2020-02-27 SA Professional workshop team (minus Cassandra Hope)

Three days of this was mentally draining, but the team contributed and reviewed over 100 items. These items now go through review, and may eventually turn up in an exam sat by those aspiring to the professional level AWS certification.

Ding ding! A cable car, the easiest way from Van Ness to Sansome Sts (along California, past the Top of the Mark, and more)

Thanks to the AWS team for organising and paying for my travel, and thanks to my team for letting me participate.


AWS Certified Database — Specialty

Today, Monday 25th of November 2019, is the dawn of a new AWS Certification, the “Certified Database — Specialty“, taking the current active AWS certifications to 12:

  • Cloud Practitioner — Foundational
  • Solution Architect — Associate
  • SysOps — Associate
  • Developer — Associate
  • Solution Architect — Professional
  • DevOps Engineer — Professional
  • Networking — Specialty
  • Security — Specialty
  • Big Data — Specialty
  • Alexa Skills Builder — Specialty
  • Machine Learning — Specialty
  • Database — Specialty

I sat my first AWS Certification, the Solution Architect Associate, back in January 2013 with the initial cohort of AWS staff while in Seattle, and thus am the equal longest AWS-certified person in the world; I have continued to sit many of these certifications since.

I’ve been using databases, primarily open source databases such as MySQL and Postgres, since the mid 1990s. I was certified by MySQL AB back in 2005 in London. Indeed, in 2004 I wrote (and open sourced) an exhaustive MySQL replication check for Nagios, so I have some in-depth knowledge here.
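The original Perl plugin isn’t reproduced here, but the shape of a Nagios replication check is easy to sketch: take the replica’s status (as returned by SHOW SLAVE STATUS) and map it to the standard Nagios exit codes. The thresholds and field handling below are illustrative, not the original’s:

```python
# Hypothetical sketch of a Nagios-style MySQL replication check.
# Nagios plugin convention: exit 0 = OK, 1 = WARNING, 2 = CRITICAL.
OK, WARNING, CRITICAL = 0, 1, 2

def grade_replica(status, warn=30, crit=300):
    """Map a SHOW SLAVE STATUS row (as a dict) to (exit code, message)."""
    if (status.get("Slave_IO_Running") != "Yes"
            or status.get("Slave_SQL_Running") != "Yes"):
        return CRITICAL, "replication threads not running"
    lag = status.get("Seconds_Behind_Master")
    if lag is None:  # NULL lag means the replica cannot say where it is
        return CRITICAL, "lag unknown"
    if lag >= crit:
        return CRITICAL, f"replica {lag}s behind"
    if lag >= warn:
        return WARNING, f"replica {lag}s behind"
    return OK, f"replica {lag}s behind"
```

The real check did much more than grade lag (hence “exhaustive”), but the exit-code contract is what makes any such check pluggable into Nagios.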

So today, on the first day of the new certification, I went and sat it. Since this is a new beta, there are no immediate pass/fail scores made available; that will come some time in 2020, when enough people have sat it and grading can be done to determine a fair passing score (as well as to review the questions).

Services Covered

As always, there’s an NDA, so I can’t go into detail about the questions, but I can confirm some of the services covered:

  • RDS — of course — with Postgres, MySQL, Oracle and SQL Server
  • DynamoDB – regional and global tables
  • Aurora – both Postgres and MySQL interfaces
  • ElastiCache Redis
  • DocumentDB
  • DMS
  • Glue

Sadly for Corey Quinn, no Route53 as a database-storage-engine, but DNS as a topic did come up. As did a fair amount of security, of course.

What was interesting was a constant focus on high availability, automated recovery, and minimal downtime when doing certain operations. This plays squarely into the Well-Architected Framework.

Who is this Certification for?

In my opinion, this certification is playing straight into the hands of the existing Database Administrator, who has perhaps long felt threatened by the automation that has replaced much of the undifferentiated heavy lifting of basic database operation (patching, replication and snapshots) with Managed RDS instances.

This gives the humble DBA of yore a pathway to regain legitimacy; those that don’t take it will be left behind. It will probably spur many DBAs to attempt architectures and approaches they may have felt were too hard, or too complicated, when these are in fact quite easy with managed services.

Conclusion

A good outing for a new certification, though the odd typo (the likes of which I produce myself) was seen (e.g. “cloud” where it should have been “could”, if you can believe it).

For anyone with the Pro SA and Pro DevOps certifications, this one shouldn’t be too hard a stretch. Of course, come March I may eat my words.

I know how much work goes into creating these question pools: reviewing the blueprints and questions, and the work yet to be done in grading, then confirming and rejecting items. Well done, Cert team, on another one hitting customers’ hands!