A Step-by-Step Guide to Traffic Director

What is Traffic Director, why is it important and how can you implement it?

Written by Babul Bansal
Published on Apr. 11, 2024

Traffic Director is a Google-managed service for deploying applications with a service mesh architecture. It uses the xDS APIs to manage communication between services and helps simplify complex traffic management.

Traffic Director helps resolve network challenges for any microservice application. For users struggling with the network connectivity of a global service mesh application, it facilitates the right connectivity between different components.

What Is a Service Mesh?

To understand what a service mesh is, we first have to break down the difference between a service mesh and microservices. When an application is built from multiple loosely coupled, interconnected components that work together, it follows a microservices architecture.

A service mesh provides a pattern for these services to communicate with one another via network functionality, so the logic of the application versus the communication remains independent. This way, users don’t need to deal with connectivity.

When you add requirements such as resiliency, load balancing, observability, authentication and authorization, however, it becomes difficult to keep that communication secure and reliable. That’s where Traffic Director comes in.


 

Benefits of Using Traffic Director

Do you ever find yourself asking these questions?

  • When a microservice application is deployed, how will I ensure the connectivity between various components?
  • What kind of logs are going to be produced by the network when the services communicate with each other?
  • If I need to update one of the service components while not impacting others, how will I do that?

Traffic Director can overcome the challenges these questions present. Because TD is a managed service, users can add autoscaling, automation, intelligence and monitoring to ensure they can efficiently retrieve data. This is especially beneficial in scenarios where data needs to travel across regions or organizations, or where you want to avoid vendor lock-in.

Here are some more ways Traffic Director can support your work.

 

Google’s IAM Integration

You can rely on the role-based access controls of Google Cloud’s Identity and Access Management (IAM) service to ensure that only the right actors get permission to use Traffic Director.
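As a minimal sketch, a single IAM binding grants the Traffic Director client role to the service account the proxies run as. The project ID and service account email below are placeholders, not names from this guide:

```shell
# Grant the Traffic Director client role to the service account used by
# the Envoy proxies. Project ID and service account are placeholders.
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:td-client@my-project-id.iam.gserviceaccount.com" \
  --role="roles/trafficdirector.client"
```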

 

Layer 7 Traffic Support

With advanced traffic management capabilities like fault injection, traffic splitting and intelligent routing, you can use Traffic Director for HTTP/HTTPS protocols and ensure that traffic is routed and filtered based on URL maps.
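As an illustration of traffic splitting, a URL map can weight traffic between two back-end services. This is a hedged sketch; the URL map and back-end service names are hypothetical, not resources created in this guide:

```shell
# Hypothetical sketch: route 90% of traffic to backend-v1 and 10% to
# backend-v2 via a weighted URL map (all names are placeholders).
gcloud compute url-maps import td-url-map --global <<EOF
name: td-url-map
defaultService: global/backendServices/backend-v1
hostRules:
- hosts: ['*']
  pathMatcher: matcher1
pathMatchers:
- name: matcher1
  defaultRouteAction:
    weightedBackendServices:
    - backendService: global/backendServices/backend-v1
      weight: 90
    - backendService: global/backendServices/backend-v2
      weight: 10
EOF
```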

 

Load Balancing

You can add load balancing policies to ensure that traffic doesn’t end up on a single node but gets distributed using round-robin, load-based and other algorithms.
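As a sketch, the per-locality balancing algorithm is set on the back-end service. The command below assumes the global back-end service created later in this guide and swaps the default round-robin for least-request:

```shell
# Sketch: switch the balancing algorithm on the global back-end service
# from the default (round-robin) to least-request.
gcloud compute backend-services update backend-td-httpd \
  --global \
  --locality-lb-policy=LEAST_REQUEST
```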

 

Managed Instance Groups

Users can install Traffic Director on a managed instance group they created and scale it up and down based on their needs.

 

How to Implement Traffic Director

Before you can implement Google Cloud Traffic Director, make sure you set up these prerequisites.

  1. Enable Traffic Director services in Google Cloud Platform projects.
  2. Ensure that all the GCP IAM permissions are given.
  3. Ensure that the compute service account is given appropriate roles, such as compute.networkUser and trafficdirector.client permissions. 
  4. Ensure that the virtual private clouds have been created, subnets are assigned and appropriate firewall rules are given.
    gcloud compute firewall-rules create fw-for-traffic-director --network vpc-1-org-1 --allow tcp:0-65535,udp:0-65535,icmp --source-ranges 10.0.0.2/26,130.211.0.0/22,35.191.0.0/16
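The first prerequisite can also be handled from the CLI. A minimal sketch, run against the project where Traffic Director will live:

```shell
# Enable the Traffic Director and Compute Engine APIs in the project.
gcloud services enable trafficdirector.googleapis.com compute.googleapis.com
```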

There are a few ways to set up Traffic Director. For instance, you can set it up for Google Kubernetes Engine-based applications, for applications using Google’s Private Service Access, to connect to other Google Cloud managed services, and so on.

But let’s take a look at a more common situation. Let’s say you need to set one up for a managed instance group deployment with an internal load balancer.

 

1. Create an instance template

gcloud compute instance-templates create vm-template-td-envoy \
--machine-type=n1-standard-4 \
--service-proxy=enabled \
--region=us-east1 --no-address \
--subnet projects/org-a-project/regions/us-east1/subnetworks/subnet-for-traffic-director

 

2. Create a managed instance group using the instance template

gcloud compute instance-groups managed create mig-envoy \
--region us-east1 --size=3 \
--template=vm-template-td-envoy

 

3. Create a health check for load balancer

gcloud compute health-checks create tcp hc-80-tcp \
--region=us-east1 \
--port=80

 

4. Create a back-end service

gcloud compute backend-services create backend-envoy-ilb \
--load-balancing-scheme=internal \
--protocol=tcp \
--region=us-east1 \
--health-checks=hc-80-tcp \
--health-checks-region=us-east1

 

5. Add the instances to the back-end service

gcloud compute backend-services add-backend backend-envoy-ilb \
--region=us-east1 \
--instance-group=mig-envoy \
--instance-group-region=us-east1

 

6. Add the forwarding rules to the load balancer

gcloud compute forwarding-rules create fr-traffic-director \
--region=us-east1 \
--load-balancing-scheme=internal \
--address=10.0.2.250 \
--subnet=projects/org-a-project/regions/us-east1/subnetworks/subnet-for-traffic-director \
--ip-protocol=TCP \
--ports=ALL \
--backend-service=backend-envoy-ilb \
--backend-service-region=us-east1 \
--allow-global-access
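Once the forwarding rule is in place, it's worth confirming that the Envoy instances report healthy behind the internal load balancer. A quick check, using the back-end service created above:

```shell
# List the health status of each Envoy instance behind the ILB.
gcloud compute backend-services get-health backend-envoy-ilb \
  --region=us-east1
```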

 

How to Deploy the Application Server

We are going to create the instance template and managed instance group to deploy the application server instances. In this section, we’ll deploy an httpd (Apache HTTP Server) web server with a simple web page that is shown when the application is called through the Traffic Director load balancer.

 

1. Create an instance template for application server

gcloud compute instance-templates create instance-template-httpd-service \
--machine-type n1-standard-2 \
--network vpc-httpd-org-b --subnet subnet-httpd-org-b \
--image centos-7-v20230809 --image-project centos-cloud \
--region us-east1 \
--no-address --metadata startup-script="#! /bin/bash
sudo yum update -y
sudo yum install -y httpd
sudo systemctl enable --now httpd
cat <<EOF > /var/www/html/index.html
<h3>DEMO OF TRAFFIC DIRECTOR!</h3>
EOF"

 

2. Create application servers managed instance group

gcloud compute instance-groups managed create mig-td-httpd \
--region=us-east1 \
--size=3 \
--template=instance-template-httpd-service

 

Back-End Configurations for Traffic Director

Now we need to create the back-end service for the application managed instance group.

 

1. Create a health check on port 80

gcloud compute health-checks create tcp hc-80-tcp-service \
--global \
--port=80

 

2. Create a back-end service

gcloud compute backend-services create backend-td-httpd \
--global \
--health-checks=hc-80-tcp-service \
--protocol=TCP \
--load-balancing-scheme=INTERNAL_SELF_MANAGED

 

3. Create the Network Endpoint Group for web application

gcloud compute network-endpoint-groups create neg-httpd-web-service \
--zone=us-east1-a \
--network=vpc-1-org-1 \
--network-endpoint-type=NON_GCP_PRIVATE_IP_PORT

 

4. Add the back-end to the NEGs

gcloud compute backend-services add-backend backend-td-httpd \
--global \
--network-endpoint-group neg-httpd-web-service \
--network-endpoint-group-zone us-east1-a \
--balancing-mode CONNECTION \
--max-connections-per-endpoint 100

 

5. Update the Network Endpoint group with the IPs of the application deployed in the organization B subnet

gcloud compute network-endpoint-groups update neg-httpd-web-service \
--zone=us-east1-a \
--add-endpoint="ip=172.16.0.21,port=80" \
--add-endpoint="ip=172.16.0.22,port=80" \
--add-endpoint="ip=172.16.0.23,port=80"
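After adding the endpoints, you can verify they were registered in the NEG. A quick check against the group created above:

```shell
# List the IP:port endpoints currently registered in the NEG.
gcloud compute network-endpoint-groups list-network-endpoints \
  neg-httpd-web-service --zone=us-east1-a
```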


 

How to Configure the Forwarding Rules

Now configure the forwarding rule to tell the Envoy proxies to forward traffic to the application server ports. Forwarding rules define the target TCP proxy to which the traffic needs to be routed.

Based on this, a forwarding rule contains the IP address of the destination, the port number to which it will connect and the protocol. Below are the commands.

 

1. Create tcp proxy

gcloud compute target-tcp-proxies create target-tcp-proxy-td-httpd \
--proxy-bind \
--backend-service=backend-td-httpd

 

2. Configure forwarding rule to send the traffic to the organization B instances

gcloud compute forwarding-rules create forwarding-rule-td-httpd \
--global \
--load-balancing-scheme=INTERNAL_SELF_MANAGED \
--address=0.0.0.0 \
--target-tcp-proxy=target-tcp-proxy-td-httpd \
--ports=8080 \
--network=projects/org-b-project/regions/us-east1/subnetworks/subnet-for-traffic-director


 

Testing Traffic Director Connectivity

Create a virtual machine on-premises and open the required firewall rules between GCP and the on-premises VM for port 8080. Then SSH, or Secure Shell, into the VM and test the service.

user1@on-prem-client1:~$ curl 10.0.2.250:8080
DEMO OF TRAFFIC DIRECTOR!

 

Adding Robustness to the Implementation

There are several ways to improve the robustness of a Traffic Director deployment. The basic setup does not provide secure communication, resiliency across multiple regions, or authentication and authorization services. You can add capabilities to handle more instances, check instance health and monitor issues. Let’s look at each one.

 

Autoscaling Traffic Director instances

If usage of the Traffic Director instances grows and you need to autoscale, you can add more instances via the instance template. Instance templates capture your instance type with the application deployed, so new instances can be created from them, which speeds up the TD deployment. This ensures that every instance contains the Traffic Director configuration and can automatically join the existing TD instances to forward traffic.
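As a sketch, autoscaling can be attached to the Envoy managed instance group created earlier; the minimum, maximum and CPU threshold below are illustrative values, not recommendations from this guide:

```shell
# Sketch: autoscale the Envoy MIG on CPU utilization
# (replica counts and 60% threshold are illustrative).
gcloud compute instance-groups managed set-autoscaling mig-envoy \
  --region=us-east1 \
  --min-num-replicas=3 \
  --max-num-replicas=10 \
  --target-cpu-utilization=0.6
```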

 

Health Check

Traffic Director uses GCP’s pre-defined IP ranges (130.211.0.0/22 and 35.191.0.0/16) for health checks when MIGs are used. When network endpoint groups are used, the Envoy proxy servers can perform the health checks directly.

 

Logging, Monitoring and Alerting

Logging policies can be added to ensure appropriate logs are collected and captured, while alerts can be implemented on those logs to ensure TD services remain up. You can track metrics like sent/received bytes and server and remote procedure call latencies to improve efficiency.
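One hedged sketch of this idea is a log-based metric that an alerting policy can then watch. The metric name and log filter below are placeholders and assume the proxies write response codes into the instance logs:

```shell
# Sketch (names and filter are placeholders): count 5xx responses
# reported in the mesh instances' logs so an alert can fire on them.
gcloud logging metrics create envoy-5xx-count \
  --description="Count of 5xx responses seen by the mesh proxies" \
  --log-filter='resource.type="gce_instance" AND jsonPayload.response_code>=500'
```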
