October 2, 2019
APIs are an integral part of modern software systems, small and large. As the number of APIs that a system consumes or produces grows, as is typically the case when using microservices, it becomes important to have a robust API-management platform to manage the lifecycle of these APIs. Amazon API Gateway is one such platform that can be used to publish, manage, and secure APIs. In this article, we describe how to use the Amazon API Gateway in a multi-account environment where one API Gateway instance can be used to manage a variety of APIs deployed across multiple accounts.
AWS deployments that support large businesses can quickly grow to include a variety of cloud services and resources. As the number of such resources increases, their management can be simplified by using a hierarchical account structure. An account may represent a line of business or an organizational unit, each of which owns and manages the specific services and resources that support its products and services. Such multi-account architectures also require careful planning to establish secure, low-latency connectivity between resources in different accounts. AWS provides a number of solutions to achieve this, such as VPC endpoints, VPC peering, AWS PrivateLink, and Transit Gateways.
A best practice when implementing API governance is to establish a centralized security and policy enforcement point. In a multi-account architecture, this requires the deployment of a single instance of Amazon API Gateway as a shared resource that manages APIs deployed in other accounts. Consider a bank, for example, which offers open APIs to its partners. These may include Account-Management APIs, Payment APIs, Investment APIs, Card-Management APIs, and more. The implementation and ownership of these APIs may belong within different business units, each of which has its own AWS account. A centralized gateway provides a single management point for all these APIs. The solution we describe can be used to configure Amazon API Gateway as a single shared platform that manages and governs all APIs offered by this bank.
To demonstrate how to configure the API Gateway, we begin with two accounts. Account A is the shared-services account that hosts the Amazon API Gateway, and Account B is the service provider that hosts the microservice. The figure below represents this scenario.
There are two network paths between the API Gateway and the microservice. The path marked as “connectivity option 1” in the figure routes traffic through the public internet based on the DNS name or public IP address of the EC2 instance hosting the microservice. (Note: this is a simplified architecture. In more robust architectures, a load balancer would front an Auto Scaling group of EC2 instances.)
A second and more efficient option is to route traffic through the AWS network to the private IP address of the EC2 instance (marked “connectivity option 2” in the figure above). This approach results in lower latency and higher security. However, this option does not work out of the box; it requires VPC peering together with a VPC Link, which, per AWS documentation, is a feature of Amazon API Gateway that provides access to resources within a VPC without exposing them directly to the public internet.
Below are the steps required to set up a VPC Link in Account A to access a resource that is owned by Account B.
Before we establish VPC peering between the accounts, we must first create a VPC in Account A. Ensure that the CIDR range of this VPC does not overlap with that of the target VPC. Next, add one or more subnets to this VPC. Finally, create a VPC-Peering Connection that links your new local VPC in Account A to the remote VPC in Account B. The figure below shows how to create a Peering Connection.
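For readers who prefer to automate these steps, the following is a minimal boto3 sketch of the same setup. The region, CIDR ranges, and the Account B VPC and account IDs are hypothetical placeholders, not values prescribed by AWS or this article.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Create the local VPC in Account A. Its CIDR range must not overlap
# the CIDR range of the target VPC in Account B.
vpc = ec2.create_vpc(CidrBlock="172.31.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Add a subnet to the new VPC.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="172.31.1.0/24")
subnet_id = subnet["Subnet"]["SubnetId"]

# Request a peering connection from the local VPC to the remote VPC.
peering = ec2.create_vpc_peering_connection(
    VpcId=vpc_id,                       # requester: the VPC in Account A
    PeerVpcId="vpc-0b1b2b3b4b5b6b7b8",  # placeholder: VPC ID in Account B
    PeerOwnerId="222222222222",         # placeholder: Account B's account ID
)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
```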
Peering Connections must be accepted by an authorized user in the target account. After the Peering Connection is accepted, create a Route Table in Account A that sets the Peering Connection as the target for all IP destinations in the CIDR range of the VPC in Account B. The figure below shows the route table, in which the CIDR range of the target VPC in Account B is 10.0.0.0/16. At this point, traffic from the VPC in Account A with a destination IP in the 10.0.0.0/16 address range routes to the VPC in Account B.
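Scripted with boto3, the acceptance and routing steps might look like the sketch below. The resource IDs are placeholders standing in for the values produced by the previous snippet, and the sketch assumes you can obtain credentials for both accounts.

```python
import boto3

vpc_id = "vpc-0a0a0a0a0a0a0a0a0"        # placeholder: VPC created earlier
subnet_id = "subnet-0a0a0a0a0a0a0a0a0"  # placeholder: subnet created earlier
peering_id = "pcx-0a0a0a0a0a0a0a0a0"    # placeholder: peering request ID

# In Account B (using that account's credentials): accept the request.
ec2_b = boto3.client("ec2", region_name="us-east-1")
ec2_b.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# In Account A: create a route table that targets the peering connection
# for all destinations in Account B's CIDR range.
ec2_a = boto3.client("ec2", region_name="us-east-1")
route_table = ec2_a.create_route_table(VpcId=vpc_id)
route_table_id = route_table["RouteTable"]["RouteTableId"]
ec2_a.create_route(
    RouteTableId=route_table_id,
    DestinationCidrBlock="10.0.0.0/16",  # CIDR range of the VPC in Account B
    VpcPeeringConnectionId=peering_id,
)

# Associate the route table with the subnet created earlier so the
# route takes effect for traffic leaving that subnet.
ec2_a.associate_route_table(RouteTableId=route_table_id, SubnetId=subnet_id)
```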
A Network Load Balancer is required to create a VPC Link, which is how the API Gateway communicates with other VPCs. In the VPC in Account A (created in the previous step), add a Network Load Balancer and configure its target to be the private IP address of the EC2 instance that hosts the microservice. At this point, all traffic sent to this load balancer routes to the EC2 instance in Account B.
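A boto3 sketch of this step follows. The load balancer and target group names are hypothetical, and 10.0.1.25 stands in for the private IP address of the EC2 instance in Account B.

```python
import boto3

vpc_id = "vpc-0a0a0a0a0a0a0a0a0"        # placeholder: VPC in Account A
subnet_id = "subnet-0a0a0a0a0a0a0a0a0"  # placeholder: subnet in Account A

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Create an internal Network Load Balancer in the subnet in Account A.
nlb = elbv2.create_load_balancer(
    Name="shared-api-nlb",  # hypothetical name
    Type="network",
    Scheme="internal",
    Subnets=[subnet_id],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# A target group of type "ip" lets the NLB point at a private IP that
# is reachable over the peering connection, even though it lives in
# another account's VPC.
tg = elbv2.create_target_group(
    Name="microservice-tg",
    Protocol="TCP",
    Port=80,
    VpcId=vpc_id,
    TargetType="ip",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# AvailabilityZone="all" is required when registering an IP address
# that lies outside the load balancer's own VPC.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "10.0.1.25", "Port": 80, "AvailabilityZone": "all"}],
)

# Forward all incoming TCP traffic on port 80 to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```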
Next, add a VPC Link in the API Gateway, setting its Target NLB to the Network Load Balancer created above (see figure below).
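This step can also be scripted. Here is a minimal sketch using the REST API (v1) interface of Amazon API Gateway, with a hypothetical link name and a placeholder for the NLB's ARN from the previous snippet.

```python
import boto3

nlb_arn = (  # placeholder: ARN of the NLB created in the previous step
    "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
    "loadbalancer/net/shared-api-nlb/0123456789abcdef"
)

apigw = boto3.client("apigateway", region_name="us-east-1")

# Create the VPC Link, targeting the Network Load Balancer's ARN.
vpc_link = apigw.create_vpc_link(
    name="shared-services-vpc-link",  # hypothetical name
    targetArns=[nlb_arn],
)
vpc_link_id = vpc_link["id"]
# Provisioning takes several minutes; wait until the link's status
# reaches AVAILABLE before configuring an API against it.
```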
We have now configured the API Gateway to route incoming requests over the AWS network to a remote microservice. In the next step, we configure an actual API in the gateway to route requests to the microservice.
To validate this configuration, create a simple API in the gateway, selecting VPC Link as the Integration Type. Use the DNS name of your local Network Load Balancer as the host in the Endpoint URL for this API, as shown in the figure below. If you have set up an internal Elastic Load Balancer in front of your microservice in the target account, you could also use the DNS name of that ELB. (Note: when using a VPC Link, the API Gateway routes all traffic for the configured API to the NLB, and the NLB then routes traffic per its configured target rules. As such, the host portion of the URL is not relevant. The scheme, path, and parameters of the endpoint URL are important, however, since they are forwarded to the target.) Test the API to ensure that you get the expected response.
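If you prefer to script the validation API as well, the sketch below creates a REST API with a single GET method whose integration goes through the VPC Link. The API name and the NLB DNS name in the URI are hypothetical, and the VPC Link ID is a placeholder for the value from the previous snippet.

```python
import boto3

vpc_link_id = "abc123"  # placeholder: ID returned by create_vpc_link

apigw = boto3.client("apigateway", region_name="us-east-1")

# A minimal REST API with a GET method on the root resource.
api = apigw.create_rest_api(name="microservice-api")  # hypothetical name
api_id = api["id"]
root_id = apigw.get_resources(restApiId=api_id)["items"][0]["id"]

apigw.put_method(
    restApiId=api_id,
    resourceId=root_id,
    httpMethod="GET",
    authorizationType="NONE",
)

# Route the method through the VPC Link. As noted above, the host in
# the URI is effectively ignored; the NLB's own rules decide routing.
apigw.put_integration(
    restApiId=api_id,
    resourceId=root_id,
    httpMethod="GET",
    type="HTTP_PROXY",
    integrationHttpMethod="GET",
    connectionType="VPC_LINK",
    connectionId=vpc_link_id,
    uri="http://shared-api-nlb-0123456789abcdef.elb.us-east-1.amazonaws.com/",
)

# Deploy the API to a stage before testing it.
apigw.create_deployment(restApiId=api_id, stageName="test")
```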
The figure below illustrates the architecture after all required components have been configured.
We have shown how to establish connectivity from Amazon API Gateway in one account to a resource owned by a different account without routing traffic through the public internet. The steps described above represent a simplified solution, but the concepts illustrated can be used for more complex architectures. Load balancers in the account that owns the API Gateway, such as Account A in our example, can point to any resource in the target account that has a static private IP address. When additional targets (such as microservices owned by other accounts) need to be configured, this can be done either by creating new Network Load Balancers or by adding listeners (on different ports) to the existing NLB, as sketched below. The latter option is less expensive, whereas the former is more scalable.
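For example, adding a second microservice to the existing NLB might look like the following sketch, which uses placeholders for the VPC ID and NLB ARN from earlier and a hypothetical second target at 10.0.2.40.

```python
import boto3

vpc_id = "vpc-0a0a0a0a0a0a0a0a0"  # placeholder: VPC in Account A
nlb_arn = (  # placeholder: ARN of the existing NLB
    "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
    "loadbalancer/net/shared-api-nlb/0123456789abcdef"
)

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# A separate target group for the second microservice.
tg2 = elbv2.create_target_group(
    Name="payments-tg",  # hypothetical name
    Protocol="TCP",
    Port=8080,
    VpcId=vpc_id,
    TargetType="ip",
)
tg2_arn = tg2["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(
    TargetGroupArn=tg2_arn,
    Targets=[{"Id": "10.0.2.40", "Port": 8080, "AvailabilityZone": "all"}],
)

# A listener on a distinct port of the existing NLB routes to it.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=8080,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg2_arn}],
)
```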
There are other options to establish connectivity between VPCs in different accounts, such as VPC endpoints and Transit Gateways. A limitation of the approach described above is that it requires peering connections to be established individually between each VPC pair, which can be difficult to scale. In the next part of this article, we will describe a solution based on Transit Gateways, which can be used to simplify connections between multiple VPCs in AWS and on-premises networks.
Authored By

Sonny Werghis, Principal Architecture Consultant, Levvel
Sonny Werghis is a Principal Architecture Consultant at Levvel, where he advises clients on payment technology. Previously, Sonny worked at IBM as a Product Manager and a Solution Architect focused on Cloud and Cognitive technology, where he developed AI and Machine Learning-based business solutions for customers in various industries, including finance, government, healthcare, and transportation. Sonny is an Open Group Master Certified IT Architect and a certified Enterprise Architect.

Belal Bayaa, Senior Cloud Consultant
Belal is an AWS Certified Solutions Architect who focuses on infrastructure automation, security, and compliance in the public cloud. Prior to Levvel, he worked in application development in the dental, insurance, and FinTech industries. His DevOps expertise, combined with his application-development experience, allows him to work in all stages of the SDLC, from code, to deployment, to infrastructure layout. He holds a B.E. in Electrical Engineering and lives with his wife and children in Dallas, TX.