
GCP Cloud Architect Study Guide – Networking

The notes below summarize the information presented in the article Best Practices and Reference Architectures for VPC Design.

Best Practices for VPCs

  • Use Custom Mode VPCs to align with corporate IP addressing schemes
  • Use larger subnets and leverage native GCP functionality such as tagging and service accounts to provide fine-grained access control
  • Use Shared VPCs
  • Grant the Network User role (compute.networkUser) at the subnet level
  • Use multiple host projects to separate environments and to support higher quotas
  • Use dedicated VPC networks for sensitive / regulated environments
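As a sketch, the first few practices above might look like this in gcloud. The project, network, subnet, region, range, and user are placeholders, not values from the article:

```shell
# Create a custom mode VPC so subnets follow the corporate addressing plan
gcloud compute networks create corp-vpc --subnet-mode=custom

# Create a subnet with a deliberately chosen range
gcloud compute networks subnets create app-subnet \
    --network=corp-vpc --region=us-central1 --range=10.10.0.0/20

# Grant Network User on the subnet only, not on the whole host project
gcloud compute networks subnets add-iam-policy-binding app-subnet \
    --region=us-central1 \
    --member='user:alice@example.com' \
    --role='roles/compute.networkUser'
```

Granting the role on the subnet rather than the project keeps service-project users confined to the subnets they actually need.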

Options for Interconnecting VPCs

  • VPC Peering – due to the non-transitive nature of peered VPCs, this can cause connectivity problems between spoke VPCs, and quotas make full mesh configurations impractical. VPC peering may be an option for shared services VPCs when using third-party / marketplace services (log aggregation, backup, etc.)
  • Cloud VPNs or using external addresses – This type of connectivity will incur egress costs when communicating between VPCs in different regions. VM-VM communication using external IP addresses within a region will be charged the same as inter-zonal communication regardless of zone.
  • Routing over Dedicated Interconnect – Since the traffic must traverse the connections to the on-prem datacenters, latency and egress costs could become issues
  • Multiple NICs on cloud-based appliances managing interconnectivity is also an option. Using a Palo Alto or Check Point firewall to manage the interconnectivity between VPCs allows for a classic “on-prem like” solution; however, there are quotas on the number of NICs and the appliance itself can become a bottleneck.
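For the peering option, note that the peering must be created from both networks before it becomes active. A minimal sketch, with hypothetical project and network names:

```shell
# Created from the project that owns vpc-a
gcloud compute networks peerings create peer-a-to-b \
    --network=vpc-a --peer-project=project-b --peer-network=vpc-b

# And the matching half from the project that owns vpc-b
gcloud compute networks peerings create peer-b-to-a \
    --project=project-b \
    --network=vpc-b --peer-project=project-a --peer-network=vpc-a
```

Remember the non-transitivity caveat above: peering vpc-a to vpc-b and vpc-b to vpc-c does not give vpc-a a path to vpc-c.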

DNS

Since all Cloud DNS instances use the same IP prefix when performing lookups to on-prem DNS servers, the prefix can only be advertised from one host VPC environment. Therefore, best practice for DNS forwarding from multiple shared VPCs (for example a Dev and a Prod SVPC) is to have two separate Cloud DNS instances, one in each host project. Only one of the Cloud DNS instances is responsible for performing name lookups for on-prem hosts, so all responses are routed back correctly. In this case three domains are configured: onprem.corp.com, prod.gcp.corp.com and dev.gcp.corp.com. The dev Cloud DNS instance refers all lookups for corp.com to the prod Cloud DNS instance, which forwards the requests to on-prem.
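The forwarding half of this setup can be sketched with a private forwarding zone. The zone name, network, and on-prem resolver IP below are placeholders:

```shell
# Private zone that forwards onprem.corp.com lookups to the on-prem DNS server
gcloud dns managed-zones create onprem-corp \
    --dns-name="onprem.corp.com." \
    --description="Forward on-prem lookups" \
    --visibility=private \
    --networks=prod-vpc \
    --forwarding-targets=192.168.10.53
```

Per the design above, only the prod host project would carry a zone like this; the dev instance hands corp.com lookups to prod instead of forwarding on-prem itself.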

Network Security

  • Limit access to public IPs and the Internet but enable Private Service Access so that Google based services can be reached via internal addresses.
  • Leverage Cloud NAT to give VMs without public IPs controlled outbound internet access
  • Leverage native firewall policies to only allow necessary access to hosts.
  • Leverage firewall appliances with multiple NICs to filter traffic to/from the Internet if necessary. Note the scalability limitations.
  • Limit access to Google managed services such as BigQuery and Cloud Storage using VPC Service Controls, which define a trusted perimeter around these services. This controls access from the Internet and from specific VPCs. On-prem environments can also be included in the perimeter.
  • Firewall Policies
    • Subnet Isolation – If using subnets to group hosts, then set a subnet-based filter in the firewall policy to allow hosts within a subnet to communicate freely.
    • If hosts are spread across subnets, use target filtering via service accounts or firewall tags to manage access to hosts. Service accounts are preferred over tags since tags can be changed more easily than service accounts.
  • Use automation to monitor changes to VMs such as updates to network tags
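Two of the points above can be sketched in gcloud. All names, regions, and service accounts are placeholders:

```shell
# Cloud NAT: outbound internet access for VMs that have no public IPs
gcloud compute routers create nat-router \
    --network=corp-vpc --region=us-central1
gcloud compute routers nats create corp-nat \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges

# Target filtering by service account rather than by tag
gcloud compute firewall-rules create allow-web-to-db \
    --network=corp-vpc --direction=INGRESS --allow=tcp:3306 \
    --source-service-accounts=web-sa@my-project.iam.gserviceaccount.com \
    --target-service-accounts=db-sa@my-project.iam.gserviceaccount.com
```

The firewall rule illustrates why service accounts are the stronger identity: changing a VM's service account requires stopping the VM and the right IAM permissions, whereas a tag edit is a quick metadata change.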

Identity Aware Proxies

Identity-Aware Proxy (IAP) authenticates HTTP(S) access to services behind a load balancer or to App Engine Standard services. It requires Google authentication and membership in the correct IAM role to access the resource behind the IAP-enabled service.
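The IAM half of that requirement is a role binding. A sketch, with a hypothetical project and user:

```shell
# Allow a user through IAP to HTTPS resources in the project
gcloud projects add-iam-policy-binding my-project \
    --member='user:alice@example.com' \
    --role='roles/iap.httpsResourceAccessor'
```

Without this role, a user can authenticate to Google successfully and still be refused by IAP.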

API Access for Google Managed Services

  • Where possible use the default gateway. This will allow for Private Google Access to work without additional configuration.
  • If using the default gateway is not an option, then to enable Private Google Access you need to add specific routes for Google managed service subnets.
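Private Google Access itself is a per-subnet switch. A sketch, with placeholder names:

```shell
# Let VMs without external IPs in this subnet reach Google APIs
gcloud compute networks subnets update app-subnet \
    --region=us-central1 --enable-private-ip-google-access
```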

Logging and Monitoring

  • Enable flow logs on all subnets but tailor them to the intended audience so that they provide value.
  • Logging all network traffic can be expensive so limit the amount of data by adjusting the log aggregation interval, using sampling or removing metadata when it is not needed.
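The cost-control knobs mentioned above are all flags on the subnet. A sketch with placeholder names and illustrative values:

```shell
# Flow logs with a coarser interval, sampling, and no metadata to limit volume
gcloud compute networks subnets update app-subnet \
    --region=us-central1 \
    --enable-flow-logs \
    --logging-aggregation-interval=interval-5-min \
    --logging-flow-sampling=0.25 \
    --logging-metadata=exclude-all
```

Sampling at 0.25 records roughly a quarter of flows; pick values that match what the audience for the logs actually needs.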

Load Balancer Options

  • Internal traffic
    • HTTP(S) traffic use Internal HTTPS Load Balancer (proxied)
    • TCP/UDP traffic use Internal TCP/UDP Load Balancer (passthrough)
  • External traffic
    • HTTP(S) traffic use Global HTTPS Load Balancer (proxied)
    • TCP traffic with SSL offload use SSL Proxy
    • TCP traffic without SSL offload use TCP Proxy
    • TCP/UDP traffic preserving client IP use Network Load Balancer
  • Only the Global HTTPS load balancer and the SSL / TCP proxies are Global services, all of the others are regional

Network Tiers

  • Apply to externally facing services
  • Standard Tier allows for only regionally available external IP addresses, all user traffic must be routed to the region where the service is located
  • Premium Tier allows for the use of globally available external IP addresses; all user traffic is routed to the closest GCP POP and then carried over the Google backbone to the region where the service is located.
  • All internal traffic runs on the Premium Tier
  • Can be applied on a per resource basis
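Per-resource application can be as simple as the tier flag when reserving an address. The address name and region are placeholders:

```shell
# Reserve a regional external IP on the Standard tier
gcloud compute addresses create std-ip \
    --region=us-central1 --network-tier=STANDARD
```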

Configuring an IPv6 Address in GCP

Once I had a working Instance Group I was able to begin configuring the load balancer.  IPv6 load balancing is described in this article.  Essentially Google configures a reverse proxy which terminates the IPv6 connection and builds a new IPv4 connection to the backend server.

The first step is to select a load balancer and in this case I selected an HTTP(S) load balancer.  Once selected you need to configure the Backend, the Path Rules/URL Map and the Frontend as shown in the following diagram:

Load Balancer Configuration

The Back End maps the load balancer to the instance group.  I chose to disable autoscaling and limit the total number of hosts to 1.  If you have a stateless server you can allow the load balancer to autoscale the service as needed based on load.  Since my LAMP server has a local MySQL database this would not work.

For the Path Rules I added the name and pointed it to the main page for WordPress.  You need to enter a name for the server since GCP does not accept raw IPv6 addresses here (IPv4 works fine).  Without this rule the load balancer will not know where to route the incoming HTTP packet.

For the Front End I added both IPv4 and IPv6 addresses for testing purposes, in case I needed IPv4 access. 

Once that was complete I could begin testing.  Luckily for me, my ISP had already rolled out IPv6 to my house, so I already had IPv6 access. For the client configuration, I added a new static entry in my hosts file so I could use a name to access the server. I configured only the IPv6 address and, to make sure IPv4 was not used at any point in the connection, I removed IPv4 from my network adapter. Once this was done, I was able to use my web browser to access the WordPress host.

Adapter Configuration

In Wireshark you can see the entire IPv6 connection which shows clearly that the configuration works.

Wireshark Output

IP Addresses in WordPress

Since I was just working in a test instance of GCP, I didn’t have DNS running for the server and was using IP addresses for everything.  This posed a problem with WordPress since it stored the IP addresses from the original installation server in the configuration.  Naturally when I booted the new instance group using the disk image created from the original WordPress server, GCP assigned a new IP address to the server and the site no longer worked.

The fix is to connect via ssh and update the database to reflect the new public IP address.  After some searching in the WordPress blogs, I found the following changes, which seem to have done the trick. If you skip this step, then whenever you access the site it will attempt to reach the old IP address and fail.

chris@instance-group-wordpress-lmfd:$ sudo mysql -u root -p
password:##########
mysql> use wordpress;
mysql> show tables;
+-----------------------+
| Tables_in_wordpress |
+-----------------------+
| wp_commentmeta |
| wp_comments |
| wp_links |
| wp_options |
| wp_postmeta |
| wp_posts |
| wp_term_relationships |
| wp_term_taxonomy |
| wp_termmeta |
| wp_terms |
| wp_usermeta |
| wp_users |
+-----------------------+
mysql> UPDATE wp_options SET option_value = 'http://NEW_IP_ADDRESS'
    -> WHERE option_value = 'http://OLD_IP_ADDRESS';
mysql> exit;
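This is not what I used at the time, but if WP-CLI is installed on the instance, its search-replace command is an alternative worth knowing: it also handles IP addresses embedded in serialized data inside posts and options, which a plain UPDATE on wp_options misses.

```shell
# Dry run first to see what would change, then apply
wp search-replace 'http://OLD_IP_ADDRESS' 'http://NEW_IP_ADDRESS' --dry-run
wp search-replace 'http://OLD_IP_ADDRESS' 'http://NEW_IP_ADDRESS'
```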

I originally did this with IPv4 addresses.  However, since my goal was to get the site running with IPv6, I first changed the addresses to IPv6 via the console to test if a raw IP address would work, then I changed both addresses to the domain name, which would be the preferred option when working with a production site.  In both cases WordPress accepted the changes and the site worked as expected.  Below is a screenshot of the console with a mix of IPv6 and domain name configured.

WordPress Console