AWS API Keys: 7 Simple Rules

Handle your API keys with care, and protect yourself.

DISCLOSURE: I worked for AWS as a Solution Architect in Perth, Australia for 2.5 years.

Yet another story on Slashdot about people leaking their API keys. I gave the AWS Security presentation at the Sydney AWS Summit (and Auckland and Perth) in 2014 and highlighted this issue then. Let’s reiterate some simple rules.

  1. Never use Root API keys
  2. Give minimal permissions for the task at hand
  3. Add constraints when using long-term (IAM User) API keys
  4. Never put any API keys in your code
  5. Never check API keys into revision control
  6. Rotate credentials
  7. Use EC2 Instance Roles

Let’s go into some detail to explain why these are good ideas…


1. Never use Root API keys

Root API keys cannot be controlled or limited by any IAM policy. They have an effective “god mode”. The best thing you can do with your Root keys is delete them; you’ll see the AWS IAM Console now recommends this (the Dashboard has a traffic light display for good practice).

Instead, create an IAM User and assign an appropriate IAM Policy. You can assign the Policy directly to a user, or they can inherit it via a simple Group membership (the policy being applied to the Group). IAM Policy can be very detailed, but let’s start with a simple policy: the same “god mode” you had with Root API keys:

{
  "Statement": [
    {
      "Action": "*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

This policy is probably a bad idea for anyone: it’s waaay too open. But for the insistent partner who wants your Root API keys, consider handing them an IAM User with this policy (to start with).
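If you’d rather script that, a minimal sketch using the AWS CLI might look like this (the user name partner-user and the file name policy.json are my own placeholders, with the policy above saved in that file):

# Create the user, attach the policy above, and cut them API keys
aws iam create-user --user-name partner-user
aws iam put-user-policy --user-name partner-user --policy-name full-access --policy-document file://policy.json
aws iam create-access-key --user-name partner-user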

 

2. Minimal permissions for the task at hand

So that service provider (or your script) doesn’t need access to your entire AWS account. Face it: when ServiceNow wants to handle your EC2 fleet, they don’t need to control your Route53 DNS. This is where you can build out the Action section of the policy; multiple actions can be bundled together in a list. Let’s think of a program (script) that accesses some data file sitting in S3; we can apply this policy to the IAM User:

{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

With this policy in place, this IAM User is now limited to listing and getting objects from any S3 bucket we have created (or have been granted access to in other AWS accounts). It’s good, but we can do better. Let’s narrow the Resource down to a specific bucket. We do that with an Amazon Resource Name, or ARN, which can be quite specific.

{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": [ "arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*" ]
    }
  ]
}

Now we have said we want to list the root of the bucket “mybucket” and get the objects within it. That’s kept our data in other buckets private from this script.
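A quick way to sanity-check the result, assuming the user’s keys are configured under a CLI profile I’m calling script-user (my own placeholder):

aws --profile script-user s3 ls s3://mybucket              # should succeed
aws --profile script-user s3 ls s3://someotherbucket       # should fail: Access Denied
aws --profile script-user s3 rm s3://mybucket/data.csv     # should fail: no delete permission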

3. Add Policy Constraints

Let’s now add some further restrictions on this script. We know we’re always going to run it from a certain ISP or location, so we can make a pretty good guess at the set of IP addresses our authenticated requests should come from. In this case, I’m going to suggest that anywhere in the 106.0.0.0/8 CIDR range should be OK, and that any requests from elsewhere cannot get this permission applied (unless another policy permits it). I’m also going to insist on using an encrypted transport (SSL). Here’s our new policy:

{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": [ "arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*" ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "106.0.0.0/8"
        },
        "Bool": {
          "aws:SecureTransport": "true"
        }
      }
    }
  ]
}

That’s looking more secure. Even if the credentials were exfiltrated, the culprit would have to guess what my policy is before they could use these keys. I could list a tighter set of IP addresses here; I could also check CloudTrail for attempted use of those credentials from IPs that don’t have access (and rotate them ASAP, and figure out how they were carried off in the first place).

We could further constrain with conditions such as a UserAgent string, which, while not the best security in the world, can be a nice touch. Imagine we had two policies: one with a UserAgent StringEquals condition of “MyScript” that lets you read your data, and one with a UserAgent StringEquals condition of “MyScriptWithDelete” that lets you delete data. It’s a simple Molly guard, but when you do that delete you also set your UserAgent string to let you delete; a sketch of the delete side is below. (See also MFA Delete.)
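A minimal sketch of what that delete-side policy might look like, reusing the bucket and UserAgent names from above:

{
  "Statement": [
    {
      "Action": "s3:DeleteObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringEquals": {
          "aws:UserAgent": "MyScriptWithDelete"
        }
      }
    }
  ]
}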

If we’re running as a cron/Scheduled Task, then we could also limit this policy to a small time window in which it is active; perhaps +/- 5 minutes around a set time. One caveat: the aws:CurrentTime condition key (used with DateGreaterThan/DateLessThan) compares absolute timestamps, so a recurring daily window means refreshing the dates in the policy, for example from a scheduled job.

4. Never put any API keys in your code

Hard-coding your credentials should raise the hackles on the back of your neck. There are a number of much better options:

  • use a config file (outside of your code base)
  • or an environment variable
  • or a command line switch
  • or all of the above

There are plenty of libraries for most languages that let you handle config files quite easily, even multi-layered config files (e.g. first read /etc/$prog/config, then read ~/.$prog/config).
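For AWS specifically you often don’t need to roll your own: the SDKs and CLI will pick up credentials from environment variables, or from a credentials file that lives outside your code base. A sketch (the key values are obviously fake placeholders):

# Option 1: environment variables, read automatically by the AWS SDKs and CLI
export AWS_ACCESS_KEY_ID=AKIAEXAMPLEEXAMPLE00
export AWS_SECRET_ACCESS_KEY=ExampleSecretKeyExampleSecretKeyExample0
./myscript

# Option 2: a shared credentials file at ~/.aws/credentials
[default]
aws_access_key_id = AKIAEXAMPLEEXAMPLE00
aws_secret_access_key = ExampleSecretKeyExampleSecretKeyExample0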


5. Never check API keys into revision control

Consider checking for API keys in your code and rejecting commits that contain them. Here’s a simple check for an Access Key or Secret Key:

egrep -r "(AK[[:digit:][:upper:]]{18}|[[:alnum:]]{40})" .

So perhaps having this in your pre-commit hooks to stop you from checking in something that matches would be useful; a sketch follows. The format for Secret and Access keys is a little rough, and could always change in future, so prepare to be flexible. If you opted above for a config file, then perhaps that’s a good candidate for adding to your .gitignore file.
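A minimal pre-commit hook sketch along those lines (save as .git/hooks/pre-commit and make it executable; expect the occasional false positive, since any 40-character alphanumeric token will match):

#!/bin/sh
# Block the commit if any staged addition looks like an AWS Access or Secret key.
if git diff --cached | grep '^+' | egrep -q "(AK[[:digit:][:upper:]]{18}|[[:alnum:]]{40})"; then
    echo "Commit blocked: staged changes match an AWS key pattern." >&2
    exit 1
fi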


6. Rotate Credentials

Change credentials regularly. The IAM Console now has a handy Credential Report to show the administrator when credentials are used and updated, and IAM lets you create a second Access and Secret key pair while the current one is still active, letting you update your script(s) in production to the new credential without an interruption to access. Just remember to complete the process: mark the old key inactive for a short spell (to confirm all is well), and then delete it. Don’t be precious about the credentials themselves; use and discard. It’s somewhat manual (so see the next point).
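From the CLI, the rotation dance might look like this sketch (the user name and old key ID are my own placeholders):

# 1. Create a second key pair alongside the existing one
aws iam create-access-key --user-name my-script-user

# 2. Roll the new key out to your script(s), then disable the old key
aws iam update-access-key --user-name my-script-user --access-key-id AKIAOLDKEYEXAMPLE000 --status Inactive

# 3. Once you're confident all is well, delete the old key
aws iam delete-access-key --user-name my-script-user --access-key-id AKIAOLDKEYEXAMPLE000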


7. Use IAM Roles for EC2 Instances (where possible)

The AWS IAM team have solved the chicken-and-egg problem of getting secured credentials onto a new EC2 instance: IAM Roles for EC2 Instances. In a nutshell, it’s the ability for the instance to ask for a set of credentials (via an HTTP call) and get back temporary role credentials (Access Key, Secret Key, and Token) to make API calls with; you set the Policy (similar to above) on the Role, and you’re done. If you’re using one of the AWS-supplied libraries/SDKs, then your code won’t need any modifications: these libraries check for an EC2 role and use that if you don’t explicitly set credentials.

It’s not magic: it’s the EC2 metadata service that’s handing this to your instance. Word of warning though: any process on your instance that can make an outbound HTTP call can grab these creds, so don’t do this on a multi-user SSH host.
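You can see exactly what the SDKs see with a couple of HTTP calls from the instance (the role name MyInstanceRole is a placeholder for whatever Role you attached):

# List the role attached to this instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Fetch its temporary Access Key, Secret Key, Token, and expiry time
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/MyInstanceRole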

The metadata service also tells your instance the life span of these credentials, and makes fresh creds available multiple times per day. With that information, the SDKs know when to request updated credentials from the metadata service, so you’re now auto-rotating your credentials!

Of course, if the API keys you seek are for an off-cloud script or client, then this is not an option.


Summary

IAM Policy is a key piece in securing your AWS resources, but you have to implement it. If you have AWS Support, then raise a ticket for them to help you with your policies. You can also use the IAM Policy Generator to help write them, and the IAM Policy Simulator to help test them.

It’s worth keeping an eye on the AWS Security Blog, and as always, if you suspect something, contact AWS Support for assistance from the relevant AWS Security team.

What’s interesting is taking this to the next level and programmatically scanning for violations of the above in your own account. A simple script to test how many policies have no Conditions (sound of klaxon going off)? Any policies that Put objects into S3 that don’t also enforce s3:x-amz-server-side-encryption: AES256 and s3:x-amz-acl: private?
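A rough sketch of the first of those checks using the CLI, flagging any customer-managed policy whose default version contains no Condition block at all:

#!/bin/sh
# Flag customer-managed IAM policies that have no Condition block anywhere.
for arn in $(aws iam list-policies --scope Local --query 'Policies[].Arn' --output text); do
    ver=$(aws iam get-policy --policy-arn "$arn" --query 'Policy.DefaultVersionId' --output text)
    aws iam get-policy-version --policy-arn "$arn" --version-id "$ver" \
        --query 'PolicyVersion.Document' --output json \
        | grep -q '"Condition"' || echo "No Condition in: $arn"
done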

PS: I really loved the video from the team at Intuit at re:Invent this year – so much, it’s right here:

Woodlands Primary School Song

In the mid 80s, my Dad wrote the lyrics for a song for my primary school – Woodlands – with one of our neighbours creating the music. It was the official school song for nearly 30 years, and was only recently supplanted. I was trying to remember the lyrics, and found only one document with them left online, so I thought I’d paste them here to preserve them a little longer.

At the bottom of the hill
Nestling by the trees
Warmed by the sun
Cooled by the breeze
There’s a place for learning
There’s a place for fun
It’s the school at Woodlands
We welcome everyone
Banksia gum and wattle
They are just a few
Of the many trees around us
That make our little school
A good place to learn in
A good place for fun
It’s the school at Woodlands
We welcome everyone
The Banksia is our emblem
We wear it with pride
Endeavour is our motto
It means we always try
A good place to learn in
A good place for fun
The BEST school in W.A.
Woodlands number one.
– John A N Bromberger

Linux.conf.au 2014: LCA TV

The radio silence here on my blog has been not from lack of activity, but the inverse. Linux.conf.au chewed up the few remaining spare cycles I have had recently (after family and work), but not from organising the conference (been there, got the T-Shirt and the bag). So, let’s do a run through of what has happened…

LCA2014 – Perth – has come and gone in pretty smooth fashion. A remarkable effort from the likes of the Perth crew of Luke, Paul, Euan, Leon, Jason, Michael, and a slew of volunteers who stepped up – not to mention our interstate friends Steve and Erin, Matthew, James I, Tim the Streaming guy… and others, and our pro organisers at Manhattan Events. It was a reasonably smooth ride: the UWA campus was beautiful, the lecture theatres were workable, and the Octagon Theatre was at its best when filled with just shy of 500 like-minded people and an accomplished person gracing the stage.

What was impressive (to me, at least) was the effort of the AV team (which I was on the extreme edges of); videos of keynotes hit the Linux Australia mirror within hours of the event. Recording and live streaming of all keynotes and sessions happened almost flawlessly. Leon had built a reasonably robust video capture management system (eventstreamer – on GitHub) to ensure that people fresh to DVswitch had nothing break so badly it didn’t automatically fix itself – and all of this was monitored from the Operations Room (called the TAVNOC, which would have been the AV NOC, but somehow a loose reference to the UWA Tavern – the Tav – crept in there).

Some 167 videos were made and uploaded – most of this was also mirrored on campus before the end of the conference so attendees could load up their laptops with plenty of content for the return trip home. Euan’s quick Blender work meant there was a nice intro and outro graphic, and Leon’s scripting ensured that Zookeepr, the LCA conference management software, was the source of truth in getting all videos processed and tagged correctly.

I was scheduled to give (and did give) a presentation at LCA 2014 – about Debian on Amazon Web Services (on the Thursday) – and attended as many of the sessions as possible, but my good friend Michael Davies (LCA 2004 chair, and chair of the LCA Papers Committee for a good many years) had another role for this year. We wanted to capture some of the ‘hallway track’ of Linux.conf.au that is missed in all the videos of presentations. And thus was born… LCA TV.

LCA TV consisted of the video equipment for an additional stream – mixer host, cameras, cables and switches – hooking into the same streaming framework as the rest of the sessions. We took over a corner of the registration room (the UWA Undercroft), brought in a few stage lights, a couch, a coffee table, a seat and some extra mics, and aimed to fill the session gaps with informal chats with some of the people at Linux.conf.au – speakers, attendees and volunteers alike. And come they did. One or two interviews didn’t succeed (this was an experiment), but in the end we got over 20 interviews with some interesting people. These streamed out live to the people watching LCA from afar – those unable to make it to Perth in early January – but they were recorded too… and we can start to watch them (see below).

I was also lucky enough to mix the video for the three keynotes as well as the opening and closing, with a very capable crew around the Octagon Theatre. As the curtain came down, and the 2014 crew took to the stage to be congratulated by the attendees, I couldn’t help but feel a little bit proud and a touch nostalgic… memories from 11 years earlier when LCA 2003 came to a close in the very same venue.

So, before we head into the viewing season for LCA TV, let me thank all the volunteers who organised, the AV volunteers, the Registration volunteers, and the UWA team who helped with the Octagon, networking, and the awesome CB radios hooked up to the UWA repeater that worked all the way to the airport. Thanks to the speakers who submitted proposals; the speakers who were accepted, made the journey and took to the stage; the people who attended; and the sponsors who help make this happen. All of the above helps share the knowledge, and ultimately, move the community forward. But my thanks to Luke and Paul for agreeing to stand there in the middle of all this… madness and hive of semi-structured activity that just worked.

Please remember this was experimental; the noise is the buzz of the conference going on around us. There was pretty much only one person on the AV kit – my thanks to Andrew Cooks, whom I’ll dub our sound editor, vision director, floor manager, and anything else. So who did we interview?

  • Alan Robertson (Assimilation Project)
  • Arjen Lentz (twice – well, two topics!)
  • Daniel (a student at LCA for the first time)
  • Dave Chinner (XFS)
  • Erin Walsh (Rego desk manager)
  • Jason Nicholls (AV Director, LCA 2014)
  • Jeremy Kerr (kernel developer)
  • Jessica Smith (Astronomy Miniconf)
  • Jono Oxer (ArduSat)
  • Karen Sandler (GNOME)
  • Keith Packard (X) and Bdale Garbee (Freedom Box, Debian)
  • Kevin Vinsen (ICRAR, Square Kilometre Array)
  • Lennart Poettering (systemd)
  • Linus Torvalds (yet another kernel developer)
  • Matthew Wilcox (another kernel dev, and a Debian dev as well)
  • Michael Still (OpenStack)
  • Paul Wayper (Canberra Linux Users Group)
  • Paul Wise (Debian)
  • Pia Waugh (Open Government)
  • Rusty Russell (yet another kernel developer! Oh, and he started LCA in 1999)

One or two interviews did not work, so apologies to those that are missing. Here’s the playlist to start you off! Enjoy.

AWS CLI: searching for AMIs

I’ve been experimenting with the new Python-based AWS CLI tool, and it’s getting to be very good indeed. It supports multiple login profiles, so it’s easy to switch between the various separate AWS accounts you may use.

Today I’ve been using it from Windows (!), and was searching for a specific AMI ID, and wanted to share the syntax for those who want to do similar:

C:\>aws --profile my-profile --region us-east-1 ec2 describe-images --filters name=image-id,values=ami-51ff9238
C:\>aws --profile my-profile --region us-east-1 ec2 describe-image-attribute --attribute launchPermission --image-id ami-51ff9238
C:\>aws --profile my-profile --region us-east-1 ec2 modify-image-attribute --image-id ami-51ff9238 --operation-type remove --attribute launchPermission --user-groups all
C:\>aws --profile my-profile --region us-east-1 ec2 modify-image-attribute --image-id ami-51ff9238 --operation-type add --attribute launchPermission --user-groups all
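
The same describe-images call can search by name pattern too, not just by ID; a sketch (the --owners filter and the name pattern are my own examples):

C:\>aws --profile my-profile --region us-east-1 ec2 describe-images --owners self --filters name=name,values=debian-*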

And the same again for a different profile, region and AMI:

aws --profile turnkey --region ap-southeast-2 ec2 modify-image-attribute --image-id ami-fd46d6c7 --operation-type remove --attribute launchPermission --user-groups all

aws --profile turnkey --region ap-southeast-2 ec2 modify-image-attribute --image-id ami-fd46d6c7 --operation-type add --attribute launchPermission --user-groups all

LCA 2013

LCA Past Organisers
Previous core organisers of Linux.conf.au, taken at Mount Stromlo Observatory during LCA 2013 (pic by Josh Stewart); except one of these people organised CALU, and another hasn’t organised one at all!

Thanks to all the people at LCA2013 in Canberra; it was a blast! So good to see old friends and chat freely about what’s hot and happening. Radia (known for STP and TRILL), Sir Tim (the web) and old friend Bdale (Debian, SPI, Freedom Box) were inspiring. As was Robert Llewellyn (Kryten, Red Dwarf), who was a complete pleasure — he wandered back and talked for a while with the volunteer video crew.

Hats off to Pia for organising the TBL tour, to Mary Gardiner for being awarded the Rusty Wrench, and to the team from PLUG (Euan, Jason, Leon, Luke) who stepped up to help with the video team – and to Paul who graciously accepted the help.

Next up – LCA2014 – Perth! Y’all come back now.. it’s been a decade.