AWS API Keys: 7 Simple Rules

Handle your API keys with care, and protect yourself.

DISCLOSURE: I worked for AWS as a Solution Architect in Perth, Australia for 2.5 years.

Yet another story about people leaking their API keys on Slashdot. I gave the AWS Security presentation at the Sydney AWS Summit (and Auckland and Perth) in 2014 and highlighted this issue then. Let’s reiterate some simple rules.

  1. Never use Root API keys
  2. Give minimal permissions for the task at hand
  3. Add constraints when using long-term (IAM User) API keys
  4. Never put any API keys in your code
  5. Never check API keys into revision control
  6. Rotate credentials
  7. Use EC2 Instance Roles

Let’s go into some detail to explain why these are good ideas…

 

1. Never use Root API keys

Root API keys cannot be controlled or limited by any IAM policy. They have an effective “god mode”. The best thing you can do with your Root keys is delete them; you’ll see the AWS IAM Console now recommends this (the Dashboard has a traffic light display for good practice).

Instead, create an IAM User and assign an appropriate IAM Policy. You can assign the Policy directly to the user, or they can inherit it via a simple Group membership (the policy being applied to the Group). IAM Policy can be very detailed, but let’s start with a simple policy: the same “god mode” you had with Root API keys:

{
 "Statement": [
    {
      "Action": "*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

This policy is probably a bad idea for anyone: it’s waaay too open. But for the insistent partner who wants your Root API keys, consider handing them an IAM User with this policy (to start with).
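
If you prefer the command line to the Console, the hand-over can look something like the sketch below. This is only an illustration: the user name “partner-access”, the policy name, and the local policy.json file are placeholders, not anything from the example above.

# Create a dedicated IAM user, attach a policy from a local JSON document, and
# issue an Access/Secret key pair for that user (never for Root).
aws iam create-user --user-name partner-access
aws iam put-user-policy --user-name partner-access --policy-name partner-policy --policy-document file://policy.json
aws iam create-access-key --user-name partner-access

Whatever keys come back belong to that IAM User, so they can be constrained, rotated or deleted without touching the account’s Root identity.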

 

2. Minimal permissions for the task at hand

So, that service provider (or your script) doesn’t need access to your entire AWS account. Face it: when ServiceNow wants to manage your EC2 fleet, they don’t need to control your Route53 DNS. This is where you build out the Action section of the policy; multiple actions can be bundled together in a list. Let’s think of a program (script) that accesses a data file sitting in S3; we can apply this policy to the IAM user:

{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

With this policy in place for this IAM User, they are now limited to Listing and Getting objects from any S3 bucket we have created (or have been granted access to in other AWS accounts). It’s good, but we can do better. Let’s narrow the Resource to a specific bucket. We do that with an Amazon Resource Name, or ARN, which can be quite specific.

{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": [ "arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*" ]
    }
  ]
}

Now we have said we want to list the bucket “mybucket” and get the objects within it, and nothing more. That keeps our data in other buckets private from this script.
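
A quick way to sanity-check the narrower policy is to try it with the IAM User’s own credentials. This is just a sketch: it assumes the script’s keys are stored as a CLI profile called “script-user” (see rule 4 below), and the second bucket name is a placeholder.

# Listing the permitted bucket should succeed...
aws --profile script-user s3 ls s3://mybucket/
# ...while any other bucket should now come back with an Access Denied error.
aws --profile script-user s3 ls s3://someotherbucket/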

3. Add Policy Constraints

Let’s now add some further restrictions on this script. We know we’re always going to run it from a certain ISP or location, so we can make a pretty good guess at the set of IP addresses our authenticated requests should come from. In this case, I’m going to say that anywhere in the 106.0.0.0/8 CIDR range is OK, and that requests from elsewhere do not get this permission applied (unless another policy permits it). I’m also going to insist on an encrypted transport (SSL). Here’s our new policy:

{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": [ "arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*" ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "106.0.0.0/8"
        },
        "Bool": {
          "aws:SecureTransport": "true"
        }
      }
    }
  ]
}

That’s looking more secure. Even if the credentials were exfiltrated, the culprit would also have to work out what conditions my policy imposes before they could use these keys. I could list a specific set of IP addresses here; I could also check CloudTrail for attempted use of those credentials from IPs that don’t have access (and rotate them ASAP, and figure out how they were carried off in the first place).
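
If CloudTrail is enabled for the account, that lookup can even be done from the CLI; a rough sketch, where the access key ID is the standard AWS documentation example rather than a real key:

# List recent API events recorded for a single access key, then eyeball the
# sourceIPAddress in each event for anything outside the expected range.
aws cloudtrail lookup-events --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAIOSFODNN7EXAMPLE --max-results 20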

We could further constrain with conditions such as a UserAgent string which, while not the best security in the world, can be a nice touch. Imagine we had two policies: one with a UserAgent StringEquals condition of “MyScript” that lets you read your data, and one with a UserAgent StringEquals condition of “MyScriptWithDelete” that lets you delete data. It’s a simple Molly guard: when you want to delete, you also have to set your UserAgent string to say so. (See also MFA Delete.)

If we’re running as a cron job/Scheduled Task, we could also limit this policy so it is only active during a small time window each day. Perhaps +/- 5 minutes around a set time? Easy.

4. Never put any API keys in your code

Hard-coding your credentials should raise the hackles on the back of your neck. There are a number of much better options:

  • use a config file (outside of your code base)
  • or an environment variable
  • or a command line switch
  • or all of the above

There are plenty of libraries for most languages that let you handle config files quite easily, even multi-layered config files (e.g. first read /etc/$prog/config, then read ~/.$prog/config).
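
As a sketch of the environment variable and config file options with the AWS CLI/SDKs (the key values below are the standard AWS documentation examples, not real credentials, and the profile name “myscript” is just an example):

# Option 1: environment variables, set by the shell or service manager, never in code.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# Option 2: a credentials file outside the code base, e.g. ~/.aws/credentials:
#   [myscript]
#   aws_access_key_id = AKIAIOSFODNN7EXAMPLE
#   aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# The script then selects the profile by name rather than embedding the keys.
aws --profile myscript s3 ls s3://mybucket/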

 

5. Never check API keys into revision control

Consider checking for API keys in your code and rejecting commits that contain them. Here’s a simple check for an Access Key or Secret Key:

egrep -r "(AK[[:digit:][:upper:]]{18}|[[:alnum:]]{40})" .

Putting this in your pre-commit hook to stop you from checking in anything that matches would be useful. The pattern for Access and Secret keys is a little rough, and the format could always change in future, so be prepared to be flexible. If you opted above for a config file, then that’s a good candidate for adding to your .gitignore file.
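
A minimal .git/hooks/pre-commit along these lines would do it; it reuses the rough pattern above, so expect the odd false positive:

#!/bin/sh
# Reject the commit if anything in the staged changes looks like an AWS
# Access Key ID or Secret Access Key.
if git diff --cached | egrep -q "(AK[[:digit:][:upper:]]{18}|[[:alnum:]]{40})"; then
  echo "Possible AWS credentials in staged changes - commit aborted." >&2
  exit 1
fi
exit 0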

 

6. Rotate Credentials

Change credentials regularly. The IAM Console now has a handy Credential Report to show the administrator when credentials were last used and updated, and IAM lets you create a second Access and Secret key pair while the current one is still active, letting you update your script(s) in production to the new credential without an interruption to access. Just remember to complete the process: mark the old key inactive for a short spell (to confirm all is well), then delete it. Don’t be precious about the credentials themselves – use and discard. It’s somewhat manual (so see the next point).
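
The manual steps map onto a handful of IAM calls, so they are easy to script; a rough sketch, where the user name is a placeholder and the old key ID is the AWS documentation example:

# Issue the second key pair for the user, deploy it, then retire the old one.
aws iam create-access-key --user-name script-user
# ...update the script/config with the new credentials and confirm all is well...
aws iam update-access-key --user-name script-user --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive
# ...after a short spell with no breakage, delete the old key entirely:
aws iam delete-access-key --user-name script-user --access-key-id AKIAIOSFODNN7EXAMPLE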

 

7. Use IAM Roles for EC2 Instances (where possible)

The AWS IAM team have solved the chicken-and-egg problem of getting credentials securely to a new EC2 instance: IAM Roles for EC2 Instances. In a nutshell, it’s the ability for the instance to ask for a set of credentials (via an HTTP call) and get back temporary role credentials (Access Key, Secret Key, and Token) to make API calls with; you set the Policy (similar to the above) on the Role, and you’re done. If you’re using one of the AWS-supplied libraries/SDKs, your code won’t need any modification – these libraries check for an EC2 role and use it if you don’t explicitly set credentials.

It’s not magic – it’s the EC2 metadata service that’s handing these to your instance. Word of warning though – any process on your instance that can make an outbound HTTP call can grab these creds, so don’t do this on a multi-user SSH host.
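
You can see exactly what an instance receives by asking the metadata service yourself from on the instance; the role name in the second call is whatever Role you attached (“my-instance-role” here is a placeholder):

# List the role attached to this instance, then fetch its temporary credentials;
# the JSON returned includes AccessKeyId, SecretAccessKey, Token and Expiration.
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/my-instance-role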

The metadata service also tells your instance the lifespan of these credentials, and makes fresh creds available multiple times per day. With that information, the SDKs know when to request updated credentials from the metadata service, so you’re now auto-rotating your credentials!

Of course, if the API keys you seek are for an off-cloud script or client, then this is not an option.

 

Summary

IAM Policy is a key piece in securing your AWS resources, but you have to implement it. If you have AWS Support, raise a ticket for help with your policies. You can also use the IAM Policy Generator to help write them, and the IAM Policy Simulator to help test them.

It’s worth keeping an eye on the AWS Security Blog, and as always, if you suspect something, contact AWS Support for assistance from the relevant AWS Security Team.

What’s interesting is taking this to the next level and programmatically scanning for violations of the above in your own account. A simple script to report how many policies have no Constraints (sound of klaxon going off)? Any policies that Put objects into S3 but don’t also enforce s3:x-amz-server-side-encryption: AES256 and s3:x-amz-acl: private?
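
As a starting point for that sort of self-audit, the Credential Report mentioned earlier is also available from the CLI; a quick sketch:

# Ask IAM to build the credential report, then download it (it arrives as
# base64-encoded CSV) and scan the key age and last-used columns for stale keys.
aws iam generate-credential-report
aws iam get-credential-report --query Content --output text | base64 --decode | head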

PS: I really loved the video from the team at Intuit at Re:Invent this year.

AWS CLI: searching for AMIs

I’ve been experimenting with the new Python-based AWS CLI tool, and it’s getting to be very good indeed. It supports multiple login profiles, so it’s easy to switch between the various separate AWS accounts you may use.
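
Profiles are set up once with the configure subcommand (it prompts for the Access Key, Secret Key, default region and output format) and then selected per invocation; for example:

C:\>aws configure --profile my-profile
C:\>aws --profile my-profile ec2 describe-regions

The stored credentials end up in the .aws folder under your user profile rather than in any script.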

Today I’ve been using it from Windows (!) to search for a specific AMI ID, and I wanted to share the syntax for those who want to do something similar:

C:\>aws --profile my-profile --region us-east-1 ec2 describe-images --filters name=image-id,values=ami-51ff9238
C:\>aws --profile my-profile --region us-east-1 ec2 describe-image-attribute --attribute launchPermission --image-id ami-51ff9238
C:\>aws --profile my-profile --region us-east-1 ec2 modify-image-attribute --image-id ami-51ff9238 --operation-type remove --attribute launchPermission --user-groups all
C:\>aws --profile my-profile --region us-east-1 ec2 modify-image-attribute --image-id ami-51ff9238 --operation-type add --attribute launchPermission --user-groups all

aws --profile turnkey --region ap-southeast-2 ec2 modify-image-attribute --image-id ami-fd46d6c7 --operation-type remove --attribute launchPermission --user-groups all

aws --profile turnkey --region ap-southeast-2 ec2 modify-image-attribute --image-id ami-fd46d6c7 --operation-type add --attribute launchPermission --user-groups all

AWS S3 Bucket Policies: Restricting to In-Region access

From time to time, Amazon Web Services adds new IP address ranges (it keeps growing!). These new addresses are published in the forums, such as via this post from EricS. I was creating a bucket policy to restrict access only to anonymous users who are within my region – I’m happy for the access requests, but I don’t want to pay the bandwidth charges for the rest of the world. So here’s a small Perl script that takes the copy-and-paste text from EricS’s forum post and creates an S3 bucket policy element suitable for this:

#!/usr/bin/perl
# Read the published IP ranges (one CIDR block at the start of each line) and
# emit them as a JSON "aws:SourceIp" list for an S3 bucket policy Condition.
open F, 'ips.txt' or die "Cannot read list of IPs: $!";
my @ip_conditions;
while (<F>) {
  # Capture anything that looks like a CIDR range, e.g. 203.0.113.0/24
  push @ip_conditions, $1 if /^(\d+\.\d+\.\d+\.\d+\/\d+)\s/;
}
close F;
print "\t\"aws:SourceIp\": [" . join(", ", map { "\"$_\"" } @ip_conditions) . "]\n";
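
The script only prints the Condition fragment; once it is pasted into a complete bucket policy document, the policy can be applied with the (newer) AWS CLI. A sketch, assuming the finished document has been saved as bucket-policy.json:

# Apply the assembled policy document to the bucket.
aws s3api put-bucket-policy --bucket mybucket --policy file://bucket-policy.json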

Official Debian Images on Amazon Web Services EC2

Official Debian AMIs are now on Amazon Web Services

Please Note: this article is written from my personal perspective as a Debian Developer, and is not the opinion or expression of my employer.

Amazon Web Services’ EC2 offers customers a number of Operating Systems to run. There are many Linux Distributions available; however, until now there has never been an ‘official’ Debian image – an Amazon Machine Image (AMI) – created by Debian.

For some Debian users this has not been an issue, as there are several solutions for creating your own personal AMI. However, for the AWS users who wanted to run a recognised image, it has been a little confusing at times; several Debian AMIs have been made available by other customers, but the source of those images has not been ‘Debian’.

In October 2012 the AWS Marketplace engaged in discussions with the Debian Project Leader, Stefano Zacchiroli. A group of Debian Developers and the wider community formed to generate a set of AMIs using Anders Ingemann’s ec2debian-build-ami script. These AMIs are published in the AWS Marketplace, where you can find the listing.

No fees are collected for Debian for the use of these images via the AWS Marketplace; they are listed here for your convenience. This is the same AMI that you may generate yourself, but this one has been put together by Debian Developers.

If you plan to use this AMI, I suggest you read http://wiki.debian.org/Cloud/AmazonEC2Image; in particular, SSH in as the user ‘admin’ and then ‘sudo -i’ to root.

Additional details

Anders Ingemann and others maintain a GitHub project called ec2debian-build-ami which generates a Debian AMI. This script supports several desired features, and was also updated to add some new requirements. This means the generated image supports:

  • non-root SSH (use the user ‘admin’)
  • secure deletion of files in the generation of the image
  • using the Eucalyptus toolchain for generation of the image
  • ensuring that this script and all its dependencies are DFSG compliant
  • using the http.debian.net redirector service in APT’s sources.list to select a reasonably ‘close’ mirror site
  • and the generated image contains only packages from ‘main’
  • plus minimal additional scripts (under the Apache 2.0 license, as in ec2debian-build-ami) to support:
    • fetching the SSH Public Key for the ‘admin’ user (sudo -i to gain root)
    • executing UserData shell scripts (example here)

Debian Stable (Squeeze; 6.0.6 at this point in time) does not contain the cloud-init package, and neither does Debian Testing (Wheezy).

A fresh AWS account (ID 379101102735) was used for the initial generation of this image. Any Debian Developer who would like access is welcome to contact me. Minimal charges for the resource utilisation of this account (storage, some EC2 instances for testing) are being absorbed by Amazon for this. Co-ordination of this effort is held on the debian-cloud mailing list.

The current Debian stable is 6.0.6 ‘Squeeze‘, and we’re in deep freeze for the ‘Wheezy‘ release. Squeeze has a Xen kernel that works on paravirtual (PV) EC2 instances, and hence this is what we support on EC2. (HVM images are the next phase, being headed up by Yasuhiro Akarki <ar@d.o>.)

Marketplace Listing

The process of listing in the AWS Marketplace was conducted as follows:

  • A 32 bit and 64 bit image was generated in US-East 1, which was AMI IDs:
    • ami-1977f070: 379101102735/debian-squeeze-i386-20121119
    • ami-8568efec: 379101102735/debian-squeeze-amd64-20121119
  • The image was shared ‘public’ with all other AWS users (as was the underlying EBS snapshot, for completeness)
  • The AWS Marketplace team duplicated these two AMIs into their AWS account
  • The AWS Marketplace team further duplicated these into other AWS Marketplace-supported Regions

This image went out on the 19th of November 2012. Additional documentation was put into the Wiki at: http://wiki.debian.org/Cloud/AmazonEC2Image/Squeeze

A CloudFormation template may help you launch a Debian instance by containing a mapping to the relevant AMI in the region you’re using: see the wiki link above.

What’s Next

The goal is to continue stable releases as they come out. Further work is happening to support generation of Wheezy images, and HVM (which may all collapse into one effort with a Linux 3.x kernel in Wheezy). If you’re a Debian Developer and would like a login to the AWS account we’ve been using, then please drop me a line.

Further work to improve this process has come from Marcin Kulisz, who is starting to package ec2debian-build-ami as a Debian package: this will complete the circle of the entire stack being in main (one day)!

Thanks goes to Stefano, Anders, Charles, and everyone who contributed to this effort.

Load Balancing on Amazon Web Services

I’ve been using Amazon’s Elastic Load Balancing (ELB) service for about a year now, and thought I should pen some of the things I’ve had to do to make it work nicely.

Firstly, when using HTTP with Apache, you probably want to add a new log format that, instead of using the source IP address of the connection in the first field, uses the extra header that ELB adds: X-Forwarded-For. It’s very simple, something like:

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" fwd_combined

… and then wherever you’ve been using a CustomLog statement with format “combined”, just use “fwd_combined”. Next, if you’re trying to use your bare domain name for your web server, e.g. “example.com” instead of (or as well as) “www.example.com”, then with Amazon Route53 (DNS hosting) you’ll get a message about a conflict with the “apex” of the domain. You get around this using the elb-associate-route53-hosted-zone command line tool, with something like:

./elb-associate-route53-hosted-zone ELB-Web --region ap-southeast-1 --hosted-zone-id Z3S76ABCFYXRX6 --rr-name example.com --weight 100

And if you want to also use IPv6:

./elb-associate-route53-hosted-zone ELB-Web --region ap-southeast-1 --hosted-zone-id Z3S76ABCFYXRX6 --rr-name example.com --weight 100 --rr-type AAAA

If you’re using HTTPS, you may have an issue if you choose to pass your SSL traffic through the ELB (just as a generic TCP stream). Since the content is encrypted, the ELB cannot modify the request headers to add X-Forwarded-For. Your only option is to “terminate” the incoming HTTPS connection on the ELB, and then have it establish a new connection to the back-end instance (web server). You will need to load your certificate and key into the ELB for it to correctly represent itself as the target server. There is an overhead on the load balancer in having to decrypt (and optionally re-encrypt to the back end), so be aware of the costs.
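
For reference, setting up that terminated listener with the newer AWS CLI looks roughly like the sketch below; the load balancer name and certificate ARN are placeholders, and the certificate must already be uploaded to IAM.

# Terminate HTTPS on the ELB (port 443) and forward plain HTTP to port 80 on the instances.
aws elb create-load-balancer-listeners --load-balancer-name ELB-Web \
    --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/example-cert"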

One of the nice things about having the ELB in place, even for a single-instance web site, is that it will do health checks and push the results to CloudWatch. CloudWatch will give you pretty graphs, but it also lets you set Alarms, which can be pushed to the Amazon Simple Notification Service (SNS) – which in turn can send you an email, or call a URL to trigger some other action that you configure (send an SMS, or sound a klaxon?).
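
As one example of that wiring, here is a hedged sketch of an alarm on the ELB’s unhealthy host count, notifying an SNS topic that already exists; the alarm name, load balancer name and topic ARN are placeholders.

# Alarm when the ELB reports any unhealthy back-end hosts for five minutes,
# and send the notification to an existing SNS topic.
aws cloudwatch put-metric-alarm \
    --alarm-name elb-web-unhealthy-hosts \
    --namespace AWS/ELB --metric-name UnHealthyHostCount \
    --dimensions Name=LoadBalancerName,Value=ELB-Web \
    --statistic Average --period 60 --evaluation-periods 5 \
    --threshold 0 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:ap-southeast-1:123456789012:my-alerts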