How to Host Multiple React Apps in the Same AWS S3 Bucket

Karen Kua
10 min read · Jun 15, 2020


Hosting your React applications in the cloud can often be done without much difficulty. There’s a plethora of resources online, so with some light reading and some elbow grease, you could have your applications deployed in, what, 10 minutes (or less, seriously). But what if you have so many applications to host that you start bumping up against the limit on how many S3 buckets you can provision? Amazon currently allows each account up to 100 buckets by default. That’s a small number if your company has a long list of applications to host, with one app per bucket. On top of that, if you need different builds deployed for separate environments, you’ll be in deep water. The one-to-one relationship between application and bucket that most tutorials describe just won’t cut it. Instead, here’s a solution for hosting separate React apps in sub-directories of the same S3 bucket.

Please note that this article will not explain every detail in provisioning the AWS resources. The main purpose of this article is to cover the must-have configurations you’ll need to achieve this solution. If you are unclear on how to perform a step or want further clarity on how a service works, I highly recommend reading the AWS documentation. It’s a lot to read, I know — but it’s an eccentric way to spend a Sunday afternoon.

Theoretical Situation

To help contextualize everything, let’s say I have a product called Example. The domain is example.com. The product is made up of 4 React apps in an S3 bucket:

  • a marketing SPA in a marketing directory
  • a customer portal in a portal directory
  • a console for the support team in a support directory
  • an informational web page at the bucket’s root
// Bucket Directory Structure
marketing
  - index.html
  - main.[hash].bundle.js
portal
  - index.html
  - main.[hash].bundle.js
support
  - index.html
  - main.[hash].bundle.js
// Root of the bucket
- index.html
- main.[hash].bundle.js

I would hit the appropriate React app based on a marketing, support, or portal subdomain. If I make a request to the root domain, I would hit the React app at the root of the bucket.

  • https://marketing.example.com would serve the build files under the marketing directory (the marketing SPA)
  • https://portal.example.com would serve the build files under the portal directory (the customer portal)
  • https://support.example.com would serve the build files under the support directory (the console for the support team)
  • https://example.com would serve the build files at the root of the bucket (the informational web page)

Note: If you don’t want files to live at the root and want to direct traffic to a directory if someone hits the root domain, fret not, a few tweaks to the Lambda@Edge function you’ll see later will do the trick!

The Amazon Web Services You’ll Need

  • S3
  • CloudFront
  • AWS Certificate Manager
  • Lambda@Edge
    (a fancy name for a Lambda function that runs at CloudFront’s edge locations)
  • Route 53 (optional)

S3: Getting Your Code in the Cloud

  1. Create a S3 bucket with the appropriate bucket policy and Access Control List (ACL).
  • Unless your situation warrants otherwise, a good practice is to restrict access to the S3 bucket to CloudFront. Because this setup uses the bucket’s static website endpoint (a custom origin), an Origin Access Identity can’t be used. Instead, in the ACL, only the bucket owner should have privileges, and the bucket policy should grant read access only when the request carries an appropriate Referer header. Using this header is recommended by the AWS documentation, and a sketch of such a policy follows this list.
  • Don’t worry about the region of the bucket, as it will sit behind the CloudFront distribution. CloudFront will manage traffic to and from the bucket with minimal latency thanks to its global network of edge locations.
AWS Referer S3 Bucket Policy
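
Here’s a minimal sketch of what that referer-based policy could look like. The bucket name (example.com) comes from the example above, and the Referer value is a placeholder secret you would replace with your own (CloudFront can supply it via a custom origin header):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetRequestsWithSecretReferer",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example.com/*",
      "Condition": {
        "StringEquals": { "aws:Referer": "some-long-secret-value" }
      }
    }
  ]
}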

2. Turn on Static Website Hosting for the bucket.

  • The key here is to ensure the Index and Error documents are both set to index.html. A common misconception is that the index.html file needs to live at the root of the bucket. In truth, S3 looks for the Index document at the point of entry. In our example, that means an index.html file needs to be not only at the bucket’s root, but also at the root of the marketing, portal, and support directories.
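
If you prefer the CLI over the console, enabling this for the hypothetical example.com bucket would look roughly like:

aws s3 website s3://example.com/ --index-document index.html --error-document index.html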

3. Upload your static build files, ensuring they don’t have public read status.
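
As a sketch, assuming each app’s build output lands in a local build directory (paths here are placeholders), the uploads could be done with the AWS CLI:

aws s3 sync marketing/build/ s3://example.com/marketing
aws s3 sync portal/build/ s3://example.com/portal
aws s3 sync support/build/ s3://example.com/support
aws s3 sync root-app/build/ s3://example.com/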

4. Optional: Enable Versioning.

  • Versioning allows you to preserve and restore former versions of the files and helps to prevent accidental deletions.
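
The CLI equivalent for the hypothetical bucket would be roughly:

aws s3api put-bucket-versioning --bucket example.com --versioning-configuration Status=Enabled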

CloudFront: Configuring Your Origins, Behaviors, and Error Pages

5. Assuming you already have your domain, create your CloudFront distribution’s initial set-up. Here are the must-haves for the General tab.

  • example.com and *.example.com should be added as Alternate Domain Names (CNAMEs).
  • Add your custom SSL Certificate. Please keep in mind your SSL Certificate will need to be issued or imported (and verified) through AWS Certificate Manager in the us-east-1 region, which CloudFront requires.
  • DNS records for the root domain and the subdomains should be configured to point to the CloudFront distribution. You can do this using Route 53 or a third party provider of your choice (eg. Cloudflare).
  • Leave the input for the default root object blank.

Normally, you would fill this in with index.html when hosting a static website through CloudFront alone, with static website hosting disabled on the bucket. That’s fine and dandy if you only have one app in the bucket: the index.html would sit at the root, and everything would work. However, CloudFront’s default root object can’t be extended to sub-directories, even if you have an index.html file in them. That’s why the S3 bucket needs to do the hosting when you have multiple apps in the same bucket. By enabling static website hosting on the bucket, the Index document (index.html) can be served from both the root and from within sub-directories.

6. Configure your Origins.

  • Under the Origins tab, add a Custom Origin for the S3 bucket using its website endpoint. You can grab this endpoint from the Static Website Hosting settings in the bucket. Remember to drop http:// from the URL! You do not need to configure any of the additional fields.
// Note: the bucket is named example.com
example.com.s3-website-us-east-1.amazonaws.com <-- Correct
example.com.s3.amazonaws.com <-- Incorrect

7. Configure your Behaviors.

  • Under the Behaviors tab, edit the Default (*) path pattern to have at least these settings:

Origin: The Origin you just created

Viewer Protocol Policy: Redirect HTTP to HTTPS

Cache Based on Selected Request Headers: Whitelist

Whitelist Headers: Host

Whitelisting the Host header not only allows CloudFront to cache responses based on the host, it also forwards the header to the origin-request Lambda@Edge function, which needs it to determine the right sub-directory.

8. Configure your Error Pages.

  • Since we are dealing with React applications, this is an essential step to support React-Router. Imagine this: you have a /login path defined in your router. Hitting https://portal.example.com/login will first result in a 404, because a resource called login doesn’t exist in the directory. That 404 response shouldn’t be sent back to the user. Instead, it should be transformed into a 200-level response, and the request should fall back to the index.html file, where our code can handle the request URI (/login) with React-Router. Effectively, through the magic of React-Router, the app will respond with the correct login page. Luckily, this fallback to the index.html file is easy to set up in CloudFront! Create a custom error response as such:
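
The original screenshot isn’t reproduced here, but based on the description above, the custom error response settings would be roughly:

HTTP Error Code: 404: Not Found
Error Caching Minimum TTL: 0 (or as desired)
Customize Error Response: Yes
Response Page Path: /index.html
HTTP Response Code: 200: OK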

You’ll notice I’m using /index.html instead of prefixing it with a sub-directory. Fret not, this is in fact correct. You see, after implementing the Lambda@Edge function, /index.html will correspond to the index.html file within the sub-directories.

Optionally, you can also duplicate this for 403 errors.

Lambda@Edge: Customizing the Origin

9. Alright, we’re at the big kahuna: Adding your Lambda@Edge function!

Lambda@Edge Function for Accessing React Apps in Sub-Directories of the Same S3 Bucket
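
The embedded gist isn’t reproduced here, but a minimal sketch of such an origin-request function might look like the following (the sub-directory names come from the example above; treat it as a starting point rather than the original code):

'use strict';

// Origin-request trigger: map the request's subdomain to a sub-directory
// in the bucket by rewriting the custom origin's path.
exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;

  // Available because the Host header was whitelisted in the cache behavior.
  const host = request.headers.host[0].value;   // e.g. "marketing.example.com"
  const subdomain = host.split('.')[0];         // e.g. "marketing"

  // Requests to the bare root domain keep the bucket root (empty path).
  const apps = ['marketing', 'portal', 'support'];
  request.origin.custom.path = apps.includes(subdomain) ? `/${subdomain}` : '';

  // The S3 website endpoint expects its own hostname, so reset the Host
  // header to the origin's domain name before forwarding.
  request.headers.host[0].value = request.origin.custom.domainName;

  return callback(null, request);
};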

This was written in Node.js, but you can certainly write it in another language of your choice!

When you create your Lambda@Edge function, you will need to provide it with the correct permissions to use it for a CloudFront trigger. You can attach the permissions automatically if you start with a blueprint, like cloudfront-http-redirect. If you are authoring a function from scratch instead, under Permissions, you can select Basic Lambda@Edge permissions (for CloudFront trigger) from the drop-down of AWS policy templates.

The path property is the magic trick here. It declares a path to locate content. If someone makes a request to https://support.example.com, then the path value would be "/support" and S3 would look for objects at the root of the support directory. In fact, if you wanted to direct traffic to one of the directories instead of the bucket’s root when someone hits the root domain (https://example.com), you would just need to ensure the path property is set to that directory’s path.
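
For instance, in the sketch above, pointing root-domain traffic at the marketing app instead of the bucket root could be as small as this hypothetical tweak:

// Hypothetical tweak: send root-domain requests to the marketing app.
// When the host is "example.com", split('.')[0] yields "example".
if (subdomain === 'example') {
  request.origin.custom.path = '/marketing';
}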

10. Test, test, test!

  • Once you’ve edited the code above to your liking, remember to test the function. Using the AWS Console, you can mock up request objects.
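
For example, a pared-down origin-request test event for the single-bucket function might look like this (only the fields the function reads are shown; a real event contains more):

{
  "Records": [
    {
      "cf": {
        "request": {
          "uri": "/login",
          "method": "GET",
          "headers": {
            "host": [{ "key": "Host", "value": "marketing.example.com" }]
          },
          "origin": {
            "custom": {
              "domainName": "example.com.s3-website-us-east-1.amazonaws.com",
              "path": ""
            }
          }
        }
      }
    }
  ]
}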

11. Go live!

  • Publish a version of the Lambda@Edge function and add it as a Trigger for the CloudFront distribution. You’ll want to attach it to the default * cache behavior as an Origin Request event. There are 4 event types, as illustrated by the AWS documentation. A trigger on the Origin Request means the function executes after the user makes a request, the request isn’t served from CloudFront’s cache, and just before the request is forwarded to the origin.

12. Invalidate your CloudFront distribution’s cache.

  • I cannot stress this enough! Skipping it is like stepping on a landmine: if the cache isn’t invalidated every time you change your function or edit your distribution’s settings, an outdated cached response will be sent back to the user without the request ever touching the Lambda@Edge function.
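
From the CLI, that’s a one-liner (the distribution ID here is a placeholder):

aws cloudfront create-invalidation --distribution-id E1ABCDEXAMPLE --paths "/*"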

Ok, I’ve set everything up! So what’s happening?

Let’s say you make a request to https://marketing.example.com. The host will be parsed for the subdomain (marketing), which in turn identifies the sub-directory containing the desired React app. The path to this directory (/marketing) becomes the value of the custom origin’s path property. This path property is the secret sauce: it identifies the entry point for locating content. Then, since the S3 bucket’s Index document is set to index.html, S3 will automatically look for the index.html file at the root of the marketing sub-directory. It’s like magic!

Now, let’s say you make a request to https://marketing.example.com/login instead. The path property sets the entry point to the marketing sub-directory, but this time, the request looks for a resource called login. If the object exists, it’s returned. Otherwise, S3 produces a 404 response, which CloudFront’s Error Pages transform into a 200-level response that falls back to the index.html file at the root of the marketing directory. Since the path property doesn’t alter the request URI, /login will still be evaluated by React-Router and, in turn, the app will respond with the login page.

Let’s Step It Up a Notch with Multiple Buckets

Now that you’ve gotten to this point, you can be even more creative with your cloud infrastructure! For example, companies often have more than one major product. What if you need several S3 buckets, not just one? On top of that, each bucket has its own sub-directories, and all the buckets sit behind a single CloudFront distribution using the same domain. Easy peasy, I’d say. Achieving this is simple: provision the other S3 buckets, add the necessary DNS records, and spice up your Lambda@Edge function to account for more than one origin. Here’s a sample function for this scenario:

The example-1 bucket hosts React apps for the Kimchi Beanz product:

https://kimchibeanz.example.com → at the root
https://kimchibeanz-marketing.example.com → in the marketing directory
https://kimchibeanz-portal.example.com → in the portal directory

The example-2 bucket hosts React apps for the Durian Shakez product:

https://durianshakez.example.com → at the root
https://durianshakez-customer.example.com → in the customer directory
https://durianshakez-support.example.com → in the support directory

The example-3 bucket hosts files for a general informational web page:

https://example.com → at the root

Lambda@Edge Function for Accessing React Apps in Sub-Directories of Multiple S3 Buckets
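
Again, the embedded gist isn’t reproduced here. A rough sketch under the assumptions above (the bucket website endpoints, region, and subdomain names are illustrative, and bucketData is inlined rather than imported from its own module) could look like:

'use strict';

// In the article this data lives in a separate module as a default export;
// it's inlined here to keep the sketch self-contained.
const bucketData = [
  {
    // Kimchi Beanz product
    domainName: 'example-1.s3-website-us-east-1.amazonaws.com',
    subdomains: { 'kimchibeanz': '', 'kimchibeanz-marketing': '/marketing', 'kimchibeanz-portal': '/portal' },
  },
  {
    // Durian Shakez product
    domainName: 'example-2.s3-website-us-east-1.amazonaws.com',
    subdomains: { 'durianshakez': '', 'durianshakez-customer': '/customer', 'durianshakez-support': '/support' },
  },
  {
    // General informational web page, served for the bare root domain
    domainName: 'example-3.s3-website-us-east-1.amazonaws.com',
    subdomains: { '': '' },
  },
];

exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;

  const host = request.headers.host[0].value;        // e.g. "durianshakez-support.example.com"
  const subdomain = host.endsWith('.example.com')
    ? host.slice(0, -'.example.com'.length)          // e.g. "durianshakez-support"
    : '';                                            // bare "example.com" maps to the empty subdomain

  // Find the bucket whose subdomain map contains this subdomain.
  const origin = bucketData.find((bucket) => subdomain in bucket.subdomains);

  if (origin) {
    request.origin.custom.domainName = origin.domainName;
    request.origin.custom.path = origin.subdomains[subdomain];
    // The Host header must match the new origin so its S3 website endpoint accepts the request.
    request.headers.host[0].value = origin.domainName;
  }
  // If nothing matches, the request is passed through unaltered.

  return callback(null, request);
};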

For the sake of simplicity, I made a default export of the bucketData as an array of objects. You could certainly use a database or another storage method of your choice.

Here, the request is sent unaltered if there are no matching origins. However, you can be snazzier with how you handle these cases. You could create an Origin Group to direct requests to a secondary origin that acts as a fallback. You could ensure your DNS records match your list of origins perfectly, so users wouldn’t be able to make the request in the first place. You could redirect to an external origin outside of AWS. The possibilities are endless!

I hope this article has provided some inspiration for your cloud infrastructure. I see serving websites in the cloud as similar to playing with Lego. Provisioning your resources is akin to putting together a Lego house, piece by piece. And just as you dig into the Lego box searching for that one unique piece, you’ll dig into the documentation and online resources for the piece your creation needs. Like I said before, reading the documentation would make for a fun Sunday afternoon!

Written by Karen Kua

Software developer specializing in React and Amazon Web Services.
