AWS S3 Security Guide:

How to Secure and Audit AWS S3 Buckets?

+ Bonus tip: how you can do it in minutes with automation

By: Cintia del Rio and Sadequl Hussain

Introduction

Amazon S3 is an object storage service widely used for storing many different types of data in the cloud.

While it’s inexpensive and easy to set up and maintain S3 buckets, it’s also very easy to overlook S3 security aspects and expose them to malicious attacks.

A typical example is accidentally allowing public access to S3 files.

Several recent high-profile data breaches were caused by lax S3 security.

Other attacks used AWS credentials stolen from less protected services to download files, even though those services shouldn’t have had access to S3 in the first place.

In this AWS security guide, we will talk about some best practices to help you identify and prevent the most common S3 security problems.

Want to secure your AWS S3 buckets now?

Get AWS S3 insights, monitoring, and problem detection in just a few clicks after download!

AWS S3 security tip #1 – use policies

A common way of securing S3 objects is through policies.

These policies can be set at two different places: resource level (e.g. S3 bucket policy and KMS key policy) and identity level (e.g. IAM policy).

Resource-based Policy

A resource-based policy is defined per S3 bucket (namely “Bucket policy”) or KMS key (“Key policy”).

For example, a bucket policy allowing full bucket access to a certain (user) is shown below:

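A minimal sketch of such a bucket policy — the account ID, user name, and bucket name are placeholders. Note that the policy names the user in a Principal element, and lists both the bucket ARN and the key-level ARN so that both bucket-level and object-level actions are covered:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FullBucketAccessForOneUser",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/some-user"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```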

Identity-based Policy

An identity-based policy is defined per IAM user or role.

For example, an IAM policy allowing full bucket access to a certain (bucket-name) can be like this:

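A minimal sketch of such an identity-based policy, with a placeholder bucket name. Because the policy is attached to the IAM user or role itself, there is no Principal element — the principal is whoever the policy is attached to:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FullAccessToOneBucket",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```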

Both policies are very similar in syntax; they differ only in where they are applied.

Resource-based policies and identity-based policies are added and evaluated together.

For example, to give access to S3 bucket (bucket-name) for user (user), you can either grant it via IAM policy on user (user) or via bucket policy for bucket (bucket-name).

Check AWS Documentation for details.

In order to avoid confusion, make sure you are using only one type of policy for all of your S3 resources.

AWS S3 security tip #2 – prevent public access

The most important security configuration of an S3 bucket is the bucket policy.

It defines which AWS accounts, IAM users, IAM roles and AWS services will have access to the files in the bucket (including anonymous access) and under which conditions.

Pro tip: you should remove public access from all your S3 buckets unless it’s necessary. An exception can be buckets containing public data.

For example, a bucket storing images, PDFs or HTML files of a public website will need public access.

Pro tip #2: when you make a bucket publicly accessible, remove any files from it that shouldn’t be public.

If any S3 bucket allows public access, it’s visible from the AWS console:


AWS S3 security tip #3 – disable file ACLs

Even if an S3 bucket is private, it’s still possible to override its policy and make one or more folders or files public.

This is possible with file Access Control Lists (ACL):


AWS S3 public access setting from account level

Unlike buckets, the AWS console does not show which S3 files or folders are public.

It’s therefore recommended not to use the feature at all and to block it at the account level:


If the AWS account does not need to host any new public S3 buckets, you can set the “Block new public bucket policies” property to “True”.
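These console switches correspond to the four flags of S3’s public access block configuration. A sketch of the account-level configuration with everything blocked, roughly as you could pass it to the `aws s3control put-public-access-block` command (account ID omitted):

```json
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": true,
  "RestrictPublicBuckets": true
}
```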

Pro tip #3: as a best practice, it’s also recommended you create and manage all your public S3 buckets in a single AWS account.

AWS S3 security tip #4 – least privilege principle

It’s highly recommended to follow the principle of least privilege when configuring access to S3 buckets.

Make sure the S3 permissions are as granular as reasonable; if a certain user does not need access to an S3 bucket, don’t grant that access.

For application layer access, instead of using IAM users, use IAM roles whenever possible.

With this approach, credentials are harder to steal and valid only for a short time.


For example, to allow the ‘sync’ command for an IAM role “upload-blog”, we can use a policy like this:
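A minimal sketch of such a policy, assuming a placeholder bucket named `blog-assets`. `aws s3 sync` needs to list the bucket (a bucket-level action) and read and write objects (key-level actions); if you sync with `--delete`, you would also add `s3:DeleteObject`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListTheBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::blog-assets"
    },
    {
      "Sid": "ReadWriteObjects",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::blog-assets/*"
    }
  ]
}
```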

You may wonder why some resources in the policy are shown with “/*” and others are not.

S3 bucket policy permissions can be sometimes confusing because some actions are applicable at bucket level (e.g. s3:ListBucket) while others are applicable at key-level – meaning files and folders (e.g. s3:PutObject).

If you create a policy that attempts to apply a bucket-level action on a key-level resource (e.g. “arn:aws:s3:::(bucket-name)/*”) – or vice-versa – it will not work.

A really good reference for all S3 permissions – and their resources and conditions – can be found in the AWS documentation.

You can also lock down S3 bucket access from certain VPC endpoints, IP ranges, and other AWS accounts.

The AWS official documentation has some examples of restricting access from different sources.
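For instance, a bucket policy denying all S3 actions unless the request arrives through a specific VPC endpoint might look roughly like this — the bucket name and endpoint ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessOutsideVpcEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}
```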

AWS S3 security tip #5 – encrypt S3 files

A very important security measure for private buckets is server-side encryption.


Enabling server-side encryption from S3 bucket properties

The “Default encryption” will be automatically applied if an uploaded file does not explicitly specify any encryption.

A bucket’s policy can also be configured to reject uploads that don’t specify server-side encryption.
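As a sketch, a deny statement like the following (placeholder bucket name) rejects any PutObject request that doesn’t ask for SSE-KMS encryption:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}
```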

There are three types of server-side encryption for S3 objects: SSE-S3, SSE-KMS, and SSE-C. Of these, SSE-KMS is the recommended method.

AWS Key Management Service (KMS) keys apply an extra layer of security to S3 objects.

In order to download a KMS-encrypted file from S3, a user not only needs access to the bucket and the file but also needs access to the specific KMS key used to encrypt that file.

AWS S3 security tip #6 – use versioning

S3 offers 11 nines of durability.

But this does not prevent a user with write permissions from overwriting a file.

It’s therefore recommended to enable versioning on all important S3 buckets.

With versioning, if a file is overwritten or deleted (maliciously or by accident), it can be recovered.

Extra costs apply for enabling bucket versioning.


Enabling versioning from S3 bucket properties

You can also configure Object Lock on new S3 buckets to prevent older versions of files from being deleted before a set retention period has passed.

With this setting, a file can still be deleted or overwritten, but its older versions cannot be deleted and can be restored if necessary.
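As an illustration, an Object Lock configuration with a 30-day default retention in compliance mode could look roughly like this (the retention period is an arbitrary example):

```json
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": {
      "Mode": "COMPLIANCE",
      "Days": 30
    }
  }
}
```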

AWS S3 security tip #7 – enable logging

It’s recommended you enable server access logging for all important S3 buckets.

With server access logging, bucket access requests are captured and delivered to a log bucket on a best-effort basis, usually within a few hours.

These logs can be sent to a separate S3 bucket of your choice.
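As a sketch, the logging configuration you’d pass to `aws s3api put-bucket-logging` could look like this — the target bucket and prefix are placeholders:

```json
{
  "LoggingEnabled": {
    "TargetBucket": "example-log-bucket",
    "TargetPrefix": "s3-access-logs/"
  }
}
```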

The logs look similar to Nginx or Apache web server logs and can be parsed by third-party log management solutions like XpoLog.


Enabling server access logging from S3 bucket properties

In the log, each request to S3 will show up as a new line:


The AWS Documentation can help identify all the fields in a log event:


Another option is to enable object-level logging for the bucket. With this type of logging, access requests to S3 objects are sent to CloudTrail.

Secure All Your S3 Buckets With Automation

Ensuring all your S3 buckets follow security best practices is an ongoing effort, and this is where automation can help.

If you pay for the highest tiers of AWS support, Trusted Advisor will let you visualize public buckets, public files in private buckets, and other common misconfigurations.

For those using AWS Organizations, another option is to detect and remediate risky S3 configurations using AWS Config rules.

Using infrastructure-as-code (e.g. CloudFormation or Terraform) can be a great way to achieve standardization and security compliance.

The security restrictions, standards, and best practices can be either checked as part of code reviews or automatically applied during infrastructure builds.

10 Essential S3 Audits – Free Cheat Sheet

We created a “cheat-sheet” of essential S3 audits you should perform.

Ideally, these audits should be automated by scheduled tasks and scripts. Learn more. 

How to Easily Audit Your S3 Buckets?

XpoLog is a unique, patented log management and analysis solution used by well-known brands, corporations, and IT organizations worldwide.

XpoLog allows you to effortlessly: 

  • Set up log streaming and aggregation.
  • Run advanced search queries.
  • Build powerful visual analytics and get alerts – all of which can help monitor any enterprise S3 footprint.

What makes XpoLog different is the simplicity offered to the admin – after a quick deployment, you’ll be offered log analysis apps that match your logs, so you can get insights instantly.

  • Our pre-packaged “apps” are basically sets of useful dashboards for dozens of IT systems.
  • Once an app is installed, it visualizes log events from source systems and lets you drill down for anomaly detection.

The XpoLog S3 app

With the S3 app, system administrators and IT operation teams can proactively monitor S3 performance and security.

Here are some readily available insights from the XpoLog S3 app:

  • A number of statistics gathered from S3 access logs: this includes HTTP status codes, URLs of top objects accessed, referer IP, hits over time and users:


  • Geolocation of user access requests: the geographic map can be filtered and cross-referenced by cities, IP addresses, hits / URLs, time span, servers, and users:


  • HTTP traffic errors: these can be cross-referenced with referrer IP, time of error, URLs or geographic locations:


  • The type of browsers accessing S3 resources: this is broken down into browser types, versions and devices, cross-referenced by locations, time, and IP addresses:


  • Performance statistics: this includes metrics like file read or write response times and errors over time, cross-referenced by a number of dimensions:


  • Analysis of S3 usage errors: top errors, errors per location, resource and clients accessing the buckets:


Easily monitor & analyze S3 buckets activity