
Extending Red Hat OpenShift Container Platform to AWS Local Zones

Authors: Marcos Entenza Garcia, Marco Braga, Fatih Nar

 

Overview

Red Hat OpenShift Container Platform 4.12 introduced the ability to extend cluster formation into Amazon Web Services (AWS) Local Zones. In this post, we show how to deploy OpenShift compute nodes in Local Zones at cluster creation time, where the OpenShift Installer creates compute nodes in the configured Local Zones. In addition, we share how a cluster administrator can add Local Zone compute nodes to an existing OpenShift cluster.

Before diving into deploying OpenShift with Local Zones, let’s review what Local Zones are.

Local Zones allow you to use select AWS services, like compute and storage services, closer to end users, providing them with very low latency access to the applications running locally. Local Zones are fully owned and managed by AWS, with no upfront commitment and no hardware purchase or lease required. In addition, Local Zones connect to the parent AWS cloud region via AWS' redundant and very high bandwidth private network, providing applications running in Local Zones fast, secure, and seamless access to the rest of AWS services.

Figure 1: AWS Infrastructure Continuum

Using OpenShift with Local Zones, application developers and service consumers gain the following benefits:

  • Improved application performance and user experience. By hosting resources closer to the user, Local Zones reduce the time it takes for data to travel over the network, resulting in faster load times and more responsive applications. This is especially important for applications such as video streaming or online gaming that require low-latency performance and real-time data access.
  • Cost savings from hosting resources in specific geographic locations. Customers avoid the high data transfer charges, such as cloud egress fees, that become a significant business expense when large volumes of data are moved between regions (for example, in image, graphics, and video applications).
  • A way for healthcare, government agencies, financial institutions, and other regulated industries to meet data residency requirements by hosting data and applications in specific locations to comply with regulatory laws and mandates.

 

Let's walk through the steps to install an OpenShift cluster in an existing virtual private cloud (VPC) in the US Virginia (us-east-1) region: creating a Local Zone subnet, creating OpenShift MachineSet manifests, and automatically launching worker nodes during the installation. The diagram below shows what gets created:

  • A standard OpenShift cluster is installed in us-east-1 with three Control Plane nodes and three Compute nodes
  • One “edge” Compute node runs in the Local Zone subnet in the New York metropolitan region
  • One Application Load Balancer exposes the sample application running in the Local Zone worker node

How to Create an OpenShift cluster with AWS Local Zones at install time

To deploy a new OpenShift cluster with compute nodes extended into Local Zone subnets, you install the cluster in an existing VPC and create MachineSet manifests for the Installer.

The installation process automatically creates compute nodes tainted with `NoSchedule`. This allows the administrator to choose which workloads run in each remote location, without additional steps to isolate the applications.

Once the cluster is installed, the label `node-role.kubernetes.io/edge` is set on each node located in a Local Zone, along with the regular `node-role.kubernetes.io/worker` label.
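For example, a workload intended for Local Zone nodes can select the edge role and tolerate the taint. A minimal sketch, assuming the taint key matches the edge node-role label set by the installer (the deployment name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-edge-app            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-edge-app
  template:
    metadata:
      labels:
        app: sample-edge-app
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""   # schedule only on Local Zone nodes
      tolerations:
      - key: node-role.kubernetes.io/edge  # tolerate the NoSchedule taint
        operator: Exists
        effect: NoSchedule
      containers:
      - name: app
        image: registry.example.com/sample:latest   # placeholder image
```

Without the toleration, the `NoSchedule` taint keeps ordinary workloads off the Local Zone nodes; the nodeSelector ensures this one lands nowhere else.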

Note the following considerations when deploying a cluster in AWS Local Zones:

  • The Maximum Transmission Unit (MTU) between an Amazon EC2 instance in a Local Zone and an Amazon EC2 instance in the Region is 1300 bytes. The cluster-wide network MTU must therefore be adjusted according to the network plugin used in the deployment.
  • Network resources such as Network Load Balancers (NLB), Classic Load Balancers, and NAT gateways are not supported in AWS Local Zones.
  • The Amazon Elastic Block Store (EBS) gp3 volume type is the default for node volumes and for the storage class on AWS OpenShift clusters. This volume type is not globally available in Local Zone locations, so by default the nodes running in Local Zones are deployed with gp2 EBS volumes. The `gp2-csi` StorageClass must be set when creating workloads on Local Zone nodes.
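For instance, a persistent volume claim for a workload on a Local Zone node would name that class explicitly. A sketch (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edge-data                # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gp2-csi      # gp3, the cluster default, is not available in Local Zones
  resources:
    requests:
      storage: 10Gi
```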

Install the prerequisites before you proceed, then follow these steps:

  1. Create the VPC

This section is optional. Create a VPC with your preferred customizations, as recommended in Installing a cluster on AWS into an existing VPC.

Define the environment variables:

$ export CLUSTER_REGION=us-east-1

$ export CLUSTER_NAME=ocp-lz

Download the CloudFormation template for the VPC, saving it as template-vpc.yaml.

Create the VPC with the CloudFormation template:

$ export STACK_VPC=${CLUSTER_NAME}-vpc

$ aws cloudformation create-stack --stack-name ${STACK_VPC} \

     --template-body file://template-vpc.yaml \

     --parameters \

        ParameterKey=ClusterName,ParameterValue=${CLUSTER_NAME} \

        ParameterKey=VpcCidr,ParameterValue="10.0.0.0/16" \

        ParameterKey=AvailabilityZoneCount,ParameterValue=3 \

        ParameterKey=SubnetBits,ParameterValue=12


$ aws cloudformation wait stack-create-complete --stack-name ${STACK_VPC}

$ aws cloudformation describe-stacks --stack-name ${STACK_VPC}
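The Outputs array in the describe-stacks response is what the later steps parse with jq. A self-contained illustration with sample data (the values below are placeholders, not real resource IDs):

```shell
# Sample describe-stacks response body (placeholder values); the real call is:
#   aws cloudformation describe-stacks --stack-name ${STACK_VPC}
cat > /tmp/stack-outputs.json <<'EOF'
{"Stacks":[{"Outputs":[
  {"OutputKey":"VpcId","OutputValue":"vpc-0abc123"},
  {"OutputKey":"PublicRouteTableId","OutputValue":"rtb-0pub456"}
]}]}
EOF

# Select a single output value by its key, the same jq filter used for VpcId:
jq -r '.Stacks[0].Outputs[] | select(.OutputKey=="VpcId").OutputValue' /tmp/stack-outputs.json
```

The filter walks into the first stack, iterates its Outputs, and keeps only the entry whose OutputKey matches, printing its OutputValue.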

  2. Create the public subnet in the AWS Local Zone

Create the subnet in the Local Zone (in this example, New York [us-east-1-nyc-1a]) and set the variables used for Local Zones.

$ export STACK_LZ=${CLUSTER_NAME}-lz-nyc-1a

$ export ZONE_GROUP_NAME=${CLUSTER_REGION}-nyc-1


# extract public and private subnetIds from VPC CloudFormation

$ export VPC_ID=$(aws cloudformation describe-stacks \

  --stack-name ${STACK_VPC} \

  | jq -r '.Stacks[0].Outputs[] | select(.OutputKey=="VpcId").OutputValue' )

$ export VPC_RTB_PUB=$(aws cloudformation describe-stacks \

  --stack-name ${STACK_VPC} \

  | jq -r '.Stacks[0].Outputs[] | select(.OutputKey=="PublicRouteTableId").OutputValue' )

Download the CloudFormation template for the subnet that uses AWS Local Zones, saving it as template-lz.yaml.

Enable the Zone Group and create the resources.

$ aws ec2 modify-availability-zone-group \

    --group-name "${ZONE_GROUP_NAME}" \

    --opt-in-status opted-in


$ aws cloudformation create-stack --stack-name ${STACK_LZ} \

     --template-body file://template-lz.yaml \

     --parameters \

        ParameterKey=ClusterName,ParameterValue="${CLUSTER_NAME}" \

        ParameterKey=VpcId,ParameterValue="${VPC_ID}" \

        ParameterKey=PublicRouteTableId,ParameterValue="${VPC_RTB_PUB}" \

        ParameterKey=LocalZoneName,ParameterValue="${ZONE_GROUP_NAME}a" \

        ParameterKey=LocalZoneNameShort,ParameterValue="nyc-1a" \

        ParameterKey=PublicSubnetCidr,ParameterValue="10.0.128.0/20"


$ aws cloudformation wait stack-create-complete --stack-name ${STACK_LZ} 


$ aws cloudformation describe-stacks --stack-name ${STACK_LZ}

The network is ready! Now you can set up the OpenShift installer to create a cluster in the existing VPC.

  3. Set up the install configuration

To create the install configuration, set the subnet IDs for all zones in the region, excluding the Local Zone subnets.
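For reference, those IDs end up in the platform.aws.subnets list of install-config.yaml. A minimal sketch of the relevant fields (the base domain and subnet IDs are placeholders):

```yaml
apiVersion: v1
baseDomain: example.com          # placeholder
metadata:
  name: ocp-lz
platform:
  aws:
    region: us-east-1
    subnets:                     # all region subnets, excluding Local Zone subnets
    - subnet-0aaa111             # placeholder private subnet ID
    - subnet-0bbb222             # placeholder public subnet ID
pullSecret: '...'                # your pull secret
```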

First, collect the subnet IDs from the CloudFormation template outputs:

$ mapfile -t SUBNETS < <(aws cloudformation describe-stacks \

  --stack-name "${STACK_VPC}" \

  | jq -r '.Stacks[0].Outputs[0].OutputValue' | tr ',' '\n')


$ mapfile -t -O "$