DISCLOSURE: I worked for AWS as a Solution Architect in Perth, Australia for 2.5 years.
Yet another story on Slashdot about people leaking their API keys. I gave the AWS Security presentation at the Sydney AWS Summit (and in Auckland and Perth) in 2014 and highlighted this issue then. Let’s reiterate some simple rules:
- Never use Root API keys
- Give minimal permissions for the task at hand
- Add constraints when using long-term (IAM User) API keys
- Never put any API keys in your code
- Never check API keys into revision control
- Rotate credentials
- Use EC2 Instance Roles
Let’s go into some detail to explain why these are good ideas…
1. Never use Root API keys
Root API keys cannot be controlled or limited by any IAM policy. They have an effective “god mode”. The best thing you can do with your Root keys is delete them; you’ll see the AWS IAM Console now recommends this (the Dashboard has a traffic light display for good practice).
Instead, create an IAM User and assign an appropriate IAM Policy. You can attach the Policy directly to the user, or they can inherit it via a simple Group membership (the policy being applied to the Group). IAM Policy can be very detailed, but let’s start with a simple policy: the same “god mode” you had with Root API keys:
{ "Statement": [ { "Action": "*", "Effect": "Allow", "Resource": "*" } ] }
This policy is probably a bad idea for anyone: it’s waaay too open. But for the insistent partner who wants your Root API keys, consider handing them an IAM User with this policy (to start with).
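If you’d rather script that than click through the console, here’s a minimal sketch using boto3 (the AWS SDK for Python); the user and policy names are made up for illustration:

import json
import boto3

iam = boto3.client("iam")

# Hypothetical user for the partner who asked for your Root keys
iam.create_user(UserName="partner-integration")

# The same "god mode" policy as above, attached inline to this user only
policy = {"Statement": [{"Action": "*", "Effect": "Allow", "Resource": "*"}]}
iam.put_user_policy(
    UserName="partner-integration",
    PolicyName="TooOpenStartingPoint",
    PolicyDocument=json.dumps(policy),
)

# Issue an Access Key / Secret Key pair; the secret is only returned
# once, so hand it over securely
print(iam.create_access_key(UserName="partner-integration")["AccessKey"])

Unlike Root keys, these can be constrained later, and deleted without touching the rest of the account.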
2. Minimal permissions for the task at hand
So that service provider (or your script) doesn’t need access to your entire AWS account. Face it: when ServiceNow wants to handle your EC2 fleet, they don’t need to control your Route53 DNS. This is where you build out the Action section of the policy; multiple actions can be bundled together in a list. Let’s think of a program (script) that accesses a data file sitting in S3; we can apply this policy to the IAM User:
{ "Statement": [ { "Action": [ "s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:ListBucketVersions", "s3:ListMultipartUploadParts", "s3:GetObject" ], "Effect": "Allow", "Resource": "*" } ] }
With this policy in place, the IAM User is limited to listing and getting objects from any S3 bucket we have created (or have been granted access to in other AWS accounts). It’s good, but we can do better. Let’s narrow the Resource to a specific bucket. We do that with an Amazon Resource Name, or ARN, which can be quite specific.
{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": [ "arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*" ]
    }
  ]
}
Now we have said we want to list the root of the bucket “mybucket“ and get every object within it (note s3:GetObject stays in the list, since the script has to fetch its data file). That’s kept our data in other buckets private from this script.
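For completeness, here’s roughly what the other end of those keys looks like in boto3; the bucket and key names are the made-up ones from the policy above:

import boto3

# Credentials for the restricted IAM User come from the environment
# or a config file (see rule 4 below), never from the code itself
s3 = boto3.client("s3")

# Allowed: list and fetch within mybucket
for obj in s3.list_objects(Bucket="mybucket").get("Contents", []):
    print(obj["Key"])
s3.download_file("mybucket", "data/input.csv", "/tmp/input.csv")

# Denied by the policy: anything outside mybucket
# s3.list_objects(Bucket="someone-elses-bucket")  # -> AccessDenied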
3. Add Policy Constraints
Let’s now add some further restrictions on this script. We know we’re always going to run it from a certain ISP or location, so we can make a pretty good guess at the set of IP addresses our authenticated requests should come from. In this case, I’m going to say that requests from somewhere in the 106.0.0.0/8 CIDR range should be OK, and that requests from anywhere else cannot get this permission applied (unless another policy permits it). I’m also going to insist on an encrypted transport (SSL). Here’s our new policy:
{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": [ "arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*" ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "106.0.0.0/8"
        },
        "Bool": {
          "aws:SecureTransport": "true"
        }
      }
    }
  ]
}
That’s looking more secure. Even if the credentials were exfiltrated, the culprit would have to guess what my policy is in order to use these keys. I could tighten this to a set of specific IP addresses; I could also check CloudTrail for attempted use of those credentials from IPs that don’t have access (and rotate them ASAP, and figure out how they were carried off in the first place).
We could further constrain with conditions such as a UserAgent string, which, while not the best security in the world, can be a nice touch. Imagine if we had two policies: one with a UserAgent condition StringEquals match of “MyScript” that lets you read your data, and one with a UserAgent condition StringEquals match of “MyScriptWithDelete” that lets you delete data. It’s a simple Molly guard, but when you do want that delete, you also set your UserAgent string to permit the delete (see the sketch below, and see also MFA Delete).
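Here’s a hedged sketch of the script side in boto3. “MyScriptWithDelete” is our made-up agent string; note the SDK normally appends its own version details to the User-Agent header, so in practice the matching policy is more forgiving with a StringLike wildcard on aws:UserAgent than a strict StringEquals:

import boto3
from botocore.config import Config

# Read-only client: default User-Agent, matched by the read policy
s3 = boto3.client("s3")

# Delete-capable client: opt in by tagging the User-Agent; the delete
# policy would carry a Condition along the lines of
#   "StringLike": { "aws:UserAgent": "*MyScriptWithDelete*" }
s3_delete = boto3.client(
    "s3",
    config=Config(user_agent_extra="MyScriptWithDelete"),
)
s3_delete.delete_object(Bucket="mybucket", Key="old/data.csv")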
If we’re running as a cron job or Scheduled Task, then we could also limit this policy to a small time window per day in which it is active. Perhaps +/- 5 minutes around a set time? Easy enough, with one wrinkle, sketched below.
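The wrinkle: there’s no native “time of day” condition key, so a recurring daily window means refreshing absolute timestamps in the policy from a small scheduled job of its own. A sketch, assuming a hypothetical “s3-reader” IAM User and a job that runs shortly before the script does:

import datetime
import json
import boto3

iam = boto3.client("iam")

# Today's ten-minute window, starting at 03:00 UTC
start = datetime.datetime.utcnow().replace(hour=3, minute=0, second=0, microsecond=0)
end = start + datetime.timedelta(minutes=10)
stamp = "%Y-%m-%dT%H:%M:%SZ"

policy = {
    "Statement": [{
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Effect": "Allow",
        "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"],
        "Condition": {
            "DateGreaterThan": {"aws:CurrentTime": start.strftime(stamp)},
            "DateLessThan": {"aws:CurrentTime": end.strftime(stamp)},
        },
    }]
}

# Overwrite yesterday's window with today's
iam.put_user_policy(
    UserName="s3-reader",
    PolicyName="TimeWindowedS3Read",
    PolicyDocument=json.dumps(policy),
)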
4. Never put any API keys in your code
Hard-coding your credentials should raise the hackles on the back of your neck. There are a number of much better options:
- use a config file (outside of your code base)
- or an environment variable
- or a command line switch
- or all of the above
There are plenty of libraries for most languages that let you handle config files quite easily, even multi-layered config files (e.g. first read /etc/$prog/config, then read ~/.$prog/config).
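A minimal sketch of that lookup order in Python; the file paths, section name and environment variable are just examples (the AWS SDKs will, in fact, do an environment lookup like this for you):

import configparser
import os

def get_secret_key(cli_value=None):
    # 1. A command line switch wins...
    if cli_value:
        return cli_value
    # 2. ...then the environment...
    if "AWS_SECRET_ACCESS_KEY" in os.environ:
        return os.environ["AWS_SECRET_ACCESS_KEY"]
    # 3. ...then layered config files, system-wide before per-user
    config = configparser.ConfigParser()
    config.read(["/etc/myprog/config", os.path.expanduser("~/.myprog/config")])
    return config.get("aws", "secret_key", fallback=None)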
5. Never check API keys into revision control
Consider checking for API keys in your code and rejecting commits that contain them. Here’s a simple check for an Access Key or Secret Key:
egrep -r "(AK[[:digit:][:upper:]]{18}|[[:alnum:]]{40})" .
So perhaps having this in your pre-commit hooks, to stop you from checking in something that matches, would be useful (a sketch follows). The pattern for Secret and Access keys is a little rough, and could always change in future, so prepare to be flexible. If you opted above for a config file, then that’s a good candidate for adding to your .gitignore file.
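Here’s one way such a hook might look, as a Python script saved to .git/hooks/pre-commit and made executable; it reuses the rough pattern above, so expect the odd false positive and treat a match as a prompt to look, not proof:

#!/usr/bin/env python
import re
import subprocess
import sys

# Scan only the staged changes, not the whole working tree
staged = subprocess.check_output(
    ["git", "diff", "--cached", "-U0"]
).decode("utf-8", "replace")

# Same rough pattern as the egrep above: Access Key or 40-char secret
pattern = re.compile(r"AK[0-9A-Z]{18}|[A-Za-z0-9]{40}")

for line in staged.splitlines():
    if line.startswith("+") and not line.startswith("+++"):
        if pattern.search(line):
            sys.stderr.write("Possible AWS credential in commit:\n%s\n" % line)
            sys.exit(1)  # a non-zero exit aborts the commit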
6. Rotate Credentials
Change credentials regularly. The IAM Console now has a handy Credential Report to show the administrator when credentials were used and updated, and IAM lets you create a second Access and Secret key pair while the current one is still active, letting you update your script(s) in production to the new credential without an interruption to access. Just remember to complete the process: mark the old key inactive for a short spell (to confirm all is well), and then delete it. Don’t be precious about the credentials themselves – use and discard. It’s somewhat manual (so see the next point).
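The dance is scriptable, too; a minimal sketch in boto3 (the user name is the hypothetical one from earlier), with a deliberate pause between each step:

import boto3

iam = boto3.client("iam")
user = "s3-reader"  # hypothetical IAM User

# 1. Mint the replacement while the old key still works (IAM allows
#    two keys per user for exactly this overlap)
new_key = iam.create_access_key(UserName=user)["AccessKey"]
print("Deploy this to production:", new_key["AccessKeyId"])

# ... roll the new credential out to your script(s), then:

# 2. Disable (don't delete) the old key and watch for breakage
old_key_id = "AKIA..."  # elided; find it with list_access_keys()
iam.update_access_key(UserName=user, AccessKeyId=old_key_id, Status="Inactive")

# 3. Once you're confident all is well, remove it for good
iam.delete_access_key(UserName=user, AccessKeyId=old_key_id)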
7. Use IAM Roles for EC2 Instances (where possible)
The AWS IAM team have solved the chicken-and-egg problem of getting secure credentials to a new EC2 instance: IAM Roles for EC2 Instances. In a nutshell, it’s the ability for the instance to ask for a set of credentials (via an HTTP call) and get back temporary role credentials (Access Key, Secret Key, and Token) with which to make API calls; you set the Policy (similar to above) on the Role, and you’re done. If you’re using one of the AWS-supplied libraries/SDKs, then your code won’t need any modifications – these libraries check for an EC2 role and use it if you don’t explicitly set credentials.
It’s not magic – it’s the EC2 metadata service that’s handing this to your instance. Word of warning though – any process on your instance that can make an outbound HTTP call can grab these creds, so don’t do this on a multi-user SSH host.
The metadata service also tells your instance the life span of these credentials, and makes fresh creds available multiple times per day. With that information, the SDKs know when to request updated credentials from the metadata service, so you’re now auto-rotating your credentials!
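You can see what the SDKs see by querying the metadata service from the instance; a sketch (and a demonstration of why any local process can help itself to the same credentials):

import json
import urllib.request

BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# The first request returns the name of the role attached to the instance
role = urllib.request.urlopen(BASE).read().decode().strip()

# The second returns the temporary credentials and their expiry time
creds = json.loads(urllib.request.urlopen(BASE + role).read().decode())
print(creds["AccessKeyId"], creds["Expiration"])  # rotated for you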
Of course, if the API keys you seek are for an off-cloud script or client, then this is not an option.
Summary
IAM Policy is a key piece in securing your AWS resources, but you have to implement it. If you have AWS Support, then raise a ticket for them to help you with your policies. You can also use the IAM Policy Generator to help write them, and the IAM Policy Simulator to help test them.
It’s worth keeping an eye on the AWS Security Blog, and as always, if you suspect something, contact AWS Support for assistance from the relevant AWS Security Team.
What’s interesting is taking this to the next level and programmatically scanning for violations of the above in your own account. A simple script to test how many policies have no Conditions (cue the sound of a klaxon going off)? Any policies that Put objects into S3 but don’t also enforce s3:x-amz-server-side-encryption: AES256 and s3:x-amz-acl: private? A starting point is sketched below.
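A hedged starting point for that scan, walking each user’s inline policies and flagging Allow statements with no Condition block at all:

import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        names = iam.list_user_policies(UserName=user["UserName"])["PolicyNames"]
        for name in names:
            doc = iam.get_user_policy(
                UserName=user["UserName"], PolicyName=name
            )["PolicyDocument"]
            statements = doc["Statement"]
            if isinstance(statements, dict):  # single-statement form
                statements = [statements]
            for stmt in statements:
                if stmt.get("Effect") == "Allow" and "Condition" not in stmt:
                    print("KLAXON: %s / %s has an unconditioned Allow"
                          % (user["UserName"], name))

(Group and role inline policies deserve the same treatment; their list/get calls follow the same shape.)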
PS: I really loved the video from the team at Intuit at re:Invent this year – so much, it’s right here: