Monday, September 28, 2020

NSX-T Data Center v3.0 for Tanzu Kubernetes

 

In this post we will learn how to set up NSX-T Data Center v3.0 for Tanzu Kubernetes Grid Integrated Edition (TKGI).

 

To perform a new installation of NSX-T Data Center for Tanzu Kubernetes Grid Integrated Edition, complete the following steps in the order presented.


NSX Manager provides a graphical user interface (GUI) and REST APIs for creating, configuring, and monitoring NSX-T Data Center components such as logical switches, logical routers, and firewalls.


NSX Manager provides a system view and is the management component of NSX-T Data Center. 

For high availability, NSX-T Data Center supports a management cluster of three NSX Managers. For a production environment, deploying a management cluster is recommended. For a proof-of-concept environment, you can deploy a single NSX Manager. 


In a vSphere environment, the following functions are supported by NSX Manager: 


  • vCenter Server can use the vMotion function to live migrate NSX Manager across hosts and clusters.
  • vCenter Server can use the Storage vMotion function to live migrate NSX Manager across hosts and clusters.
  • vCenter Server can use the Distributed Resource Scheduler function to rebalance NSX Manager across hosts and clusters.
  • vCenter Server can use anti-affinity rules to keep the NSX Manager nodes running on separate hosts.

Deploy NSX-T Manager  


Note: In my lab I am going to deploy a single-node NSX-T Manager for testing.

Deploy the NSX-T Manager from the OVA in vSphere

Download the NSX-T Data Center OVA file from the VMware download portal.

Right-click on the vCenter cluster and select Deploy OVF Template.

Browse to the OVA file and click Next.

Enter a VM name and a location for the NSX Manager, then click Next.

Select a compute resource for the NSX Manager and click Next.

Review the details. 

On the Configuration screen, select a deployment size for the Manager.

Choose Thin Provision and select the desired datastore.

Enter the network information for the management network.

Enter passwords for all user types.

 

Fill out the required fields on the Customize Template section of the Deploy OVF Template wizard. 

Click Finish, and the NSX-T Manager deployment will start.
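As an alternative to clicking through the wizard, the same deployment can be scripted with VMware's ovftool. The sketch below is illustrative only: the OVA file name, vCenter inventory path, datastore, port group, DNS server, and passwords are placeholders for my lab, and the nsx_* OVF property names and deployment option IDs should be verified against your OVA (running ovftool against the OVA prints them) before use.

# Hedged ovftool sketch; all names and addresses are lab placeholders
ovftool \
  --acceptAllEulas --noSSLVerify --powerOn \
  --name=sa-nsxmgr-01 \
  --deploymentOption=medium \
  --diskMode=thin \
  --datastore=SA-Shared-01 \
  --net:"Network 1=VM-Management" \
  --prop:nsx_hostname=sa-nsxmgr-01 \
  --prop:nsx_ip_0=192.168.208.160 \
  --prop:nsx_netmask_0=255.255.255.0 \
  --prop:nsx_gateway_0=192.168.208.1 \
  --prop:nsx_dns1_0=192.168.208.10 \
  --prop:nsx_passwd_0='VMware1!VMware1!' \
  --prop:nsx_cli_passwd_0='VMware1!VMware1!' \
  nsx-unified-appliance-3.0.0.ova \
  'vi://administrator@vsphere.local@vcenter.example.local/Datacenter/host/Cluster'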

Once the installation completes, log in to NSX Manager with admin privileges at https://192.168.208.160/.
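You can also confirm the appliance from the command line with the NSX REST API; for example (substitute the admin password you set during deployment):

# Check appliance version and management cluster status
curl -k -u admin:'<admin-password>' https://192.168.208.160/api/v1/node/version
curl -k -u admin:'<admin-password>' https://192.168.208.160/api/v1/cluster/status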

Register vCenter Server as a Compute Manager


On the NSX UI Home page, navigate to System > Configuration > Fabric > Compute Managers and click +ADD. 

Click Add

Click Add again at the thumbprint warning

Verify that the Compute Manager is added and registered.
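For reference, the same registration can be done with the Manager API. A hedged sketch, where the vCenter FQDN and credentials are placeholders and the thumbprint must be the vCenter certificate's SHA-256 fingerprint:

# Fetch the vCenter certificate thumbprint (colon-separated SHA-256)
THUMB=$(openssl s_client -connect vcenter.example.local:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256 | cut -d= -f2)

# Register vCenter as a compute manager
curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{
    "display_name": "vcenter.example.local",
    "server": "vcenter.example.local",
    "origin_type": "vCenter",
    "credential": {
      "credential_type": "UsernamePasswordLoginCredential",
      "username": "administrator@vsphere.local",
      "password": "<vcenter-password>",
      "thumbprint": "'"$THUMB"'"
    }
  }' \
  https://192.168.208.160/api/v1/fabric/compute-managers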

Enable the NSX-T Manager Interface


The NSX Management Console provides two user interfaces: Policy and Manager. TKGI requires the Manager interface for configuring its networking and security objects. Do NOT use the Policy interface for TKGI objects.


In the NSX-T Manager GUI console, go to System > User Interface Settings.

Select Visible to All Users and set the default interface to Manager.

Click Save


Refresh the NSX-T Manager Console 

 

In the upper-right area of the console, verify that the Manager option is enabled.

Create Transport Zones 


You need to create two transport zones: an Overlay TZ for Transport Nodes and a VLAN TZ for Edge Nodes.

 

On the NSX UI Home page, navigate to System > Configuration > Fabric > Transport Zones and click +ADD

Click ADD.

 

Create a VLAN-based transport zone to communicate with the non-overlay networks that are external to NSX-T Data Center. Click +ADD.

 

In the New Transport Zone window, create a transport zone

Click ADD. 

 

The new transport zone appears.
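Both transport zones can also be created with the Manager API; a minimal sketch (the display names and N-VDS host switch names are my own placeholders):

# Overlay transport zone for the transport nodes
curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{"display_name": "TZ-Overlay", "transport_type": "OVERLAY", "host_switch_name": "nvds-overlay"}' \
  https://192.168.208.160/api/v1/transport-zones

# VLAN transport zone for the edge uplinks
curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{"display_name": "TZ-VLAN", "transport_type": "VLAN", "host_switch_name": "nvds-vlan"}' \
  https://192.168.208.160/api/v1/transport-zones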

Create IP Pool


You create an IP pool for assigning IP addresses to the NSX transport nodes.

1. On the NSX UI Home page, navigate to Networking > IP Management > IP Address Pools and click ADD IP ADDRESS POOL.

2. Provide the configuration details in the ADD IP ADDRESS POOL window.

a. Enter VTEP-IP-Pool in the Name text box.

b. Enter IP Pool for ESXi, KVM, and Edge in the Description text box.

c. Click Set under Subnets and select ADD SUBNET > IP Ranges.

d. In the IP Ranges/Block text box, enter 192.168.208.190-192.168.208.200 and click Add item(s).

e. In the CIDR text box, enter 192.168.208.0/24.

f. In the Gateway IP text box, enter 192.168.208.1.

g. Click ADD on the ADD SUBNETS page.

3. Click APPLY on the Set Subnets page.

4. Click SAVE.
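The same pool can be created through the Policy API with identical values; a minimal sketch (the subnet identifier vtep-subnet is an arbitrary ID I chose):

# Create the pool
curl -k -u admin:'<admin-password>' -X PATCH -H 'Content-Type: application/json' \
  -d '{"display_name": "VTEP-IP-Pool", "description": "IP Pool for ESXi, KVM, and Edge"}' \
  https://192.168.208.160/policy/api/v1/infra/ip-pools/VTEP-IP-Pool

# Add the static subnet with its range, CIDR, and gateway
curl -k -u admin:'<admin-password>' -X PATCH -H 'Content-Type: application/json' \
  -d '{
    "resource_type": "IpAddressPoolStaticSubnet",
    "allocation_ranges": [{"start": "192.168.208.190", "end": "192.168.208.200"}],
    "cidr": "192.168.208.0/24",
    "gateway_ip": "192.168.208.1"
  }' \
  https://192.168.208.160/policy/api/v1/infra/ip-pools/VTEP-IP-Pool/ip-subnets/vtep-subnet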

Prepare the ESXi Hosts


You prepare the ESXi hosts to participate in the virtual networking and security functions offered by NSX-T Data Center. 


On the NSX UI Home page, navigate to System > Configuration > Fabric > Nodes > Host Transport Nodes. 

From the Managed by drop-down menu, select the compute manager; the cluster appears:




Expand the cluster view. The NSX Configuration status of the hosts appears as Not Configured and the Node Status is Not Available.

Select the cluster check box and click CONFIGURE NSX. 

In the NSX Installation dialog box, click Create New Transport Node Profile.

Provide the required details on the Add Transport Node Profile page.

Click ADD and then APPLY.

In the NSX Installation window, click APPLY. 

The auto install process starts. 

 

The process might take approximately 5 minutes to complete. 


When the installation completes, verify that NSX is installed on the hosts and the status of the SA-Compute-01 cluster nodes is Up. 

 

You might need to click REFRESH at the bottom to refresh the page.
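Host preparation can also be cross-checked outside the UI; two hedged examples:

# From the NSX Manager API: list the host transport nodes
curl -k -u admin:'<admin-password>' https://192.168.208.160/api/v1/transport-nodes

# From an ESXi host over SSH: confirm the NSX VIBs were installed
esxcli software vib list | grep -i nsx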

Deploying and Configuring NSX Edge Node


NSX Edge Nodes provide the bridge between the virtual network environment implemented using NSX-T and the physical network. Edge Nodes for Tanzu Kubernetes Grid Integrated Edition run load balancers for TKGI API traffic, Kubernetes load balancer services, and ingress controllers.


On the NSX UI Home page, navigate to System > Configuration > Fabric > Nodes > Edge Transport Nodes. 

 

Click +ADD EDGE VM. 

Click NEXT.

Click NEXT.

Configure the node settings as shown below.

Configure the first NSX switch for the Edge Node 

On the Configure NSX page, click + ADD SWITCH and provide the configuration details. 

The Edge deployment might take several minutes to complete.

The deployment status displays various values, for example, Node Not Ready, which is only temporary. 

Wait for the configuration status to appear as Success and the status as Up. 

 

You can click REFRESH occasionally.

 

On the NSX UI Home page, navigate to System > Configuration > Fabric > Nodes > Edge Transport Nodes, click +ADD EDGE VM, and provide the configuration details to deploy the second edge node.

In the Credentials window, enter VMware1!VMware1! as the CLI password and the system root password.

Click the Allow SSH Login and Allow Root SSH Login toggles to display Yes. 

 

Click NEXT.

On the Configure Node Settings window, enter the details.

On the Configure NSX window, enter the details.

On the Configure NSX page, click + ADD SWITCH and provide the configuration details.

Click FINISH.

 

The Edge deployment might take several minutes to complete. 


The deployment status displays various temporary values, for example, Node Not Ready. Wait for the configuration state to appear as Success and the node status as Up. 


You can click REFRESH occasionally.

Verify that the two edge nodes are deployed and listed on the Edge VM list. 

The configuration state appears as Success and the node status appears as Up.
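If you want to double-check an edge node from its console or an SSH session, a few standard NSX CLI commands help (run as the admin user):

get managers         # manager connectivity should show Connected
get interface eth0   # management interface address and gateway
get tunnel-ports     # TEP tunnel ports once the NSX switch configuration is applied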


Configure an Edge Cluster 


You create an NSX Edge cluster and add the two NSX Edge nodes to the cluster.

 

  On the NSX UI Home page, navigate to System > Configuration > Fabric > Nodes > Edge Clusters. 

 

Click +ADD

In the Available (2) pane, select both sa-nsxedge-01 and sa-nsxedge-02 and click the right arrow to move them to the Selected (0) pane.

Click ADD. 


Verify that Edge-Cluster-01 appears in the Edge Cluster list. Click REFRESH if Edge-Cluster-01 does not appear after a few seconds.





Click 2 in the Edge Transport Nodes column and verify that sa-nsxedge-01 and sa-nsxedge-02 appear in the list.
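The cluster could equally be created with one Manager API call; in this sketch the two member UUIDs are placeholders (read them from GET /api/v1/transport-nodes):

curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{
    "display_name": "Edge-Cluster-01",
    "members": [
      {"transport_node_id": "<sa-nsxedge-01-uuid>"},
      {"transport_node_id": "<sa-nsxedge-02-uuid>"}
    ]
  }' \
  https://192.168.208.160/api/v1/edge-clusters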

Create Uplink Logical Switch


Create an uplink Logical Switch to be used for the Tier-0 Router.

At the upper right, select the Manager tab.

Go to Networking > Logical Switches.





Click ADD.

 

Configure the Logical Switch as shown below.

Click ADD and verify that the uplink logical switch is created.
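For reference, a VLAN-backed logical switch like this maps to a single Manager API call; the transport zone UUID and VLAN ID below are placeholders for your lab values:

curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{
    "display_name": "uplink-LS",
    "transport_zone_id": "<TZ-VLAN-uuid>",
    "admin_state": "UP",
    "vlan": 0
  }' \
  https://192.168.208.160/api/v1/logical-switches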

Create Tier-0 Router

 

You create the uplinks used by the Tier-0 router to connect to the upstream router.

 

On the NSX UI Home page, navigate to Networking > Connectivity > Tier-0 Gateways. 

Click ADD GATEWAY > Tier-0

Save and verify

Select the T0 router

Go to Configuration > Router Ports.

 

Click Add

Add and verify

Add a second uplink by creating a second router port for edge-node-2.

Once completed, verify that you have two connected router ports.
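The whole sequence corresponds to three Manager API calls; a hedged sketch with placeholder UUIDs, using 192.168.208.67 (the first uplink address tested with ping later in this post):

# 1) Create the Tier-0 logical router on the edge cluster
curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{
    "display_name": "T0-Router",
    "router_type": "TIER0",
    "high_availability_mode": "ACTIVE_STANDBY",
    "edge_cluster_id": "<Edge-Cluster-01-uuid>"
  }' \
  https://192.168.208.160/api/v1/logical-routers

# 2) Create a logical port on the uplink logical switch for the router port to attach to
curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{"logical_switch_id": "<uplink-LS-uuid>", "admin_state": "UP"}' \
  https://192.168.208.160/api/v1/logical-ports

# 3) Create the uplink router port, pinned to the first edge cluster member
curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{
    "resource_type": "LogicalRouterUpLinkPort",
    "logical_router_id": "<T0-Router-uuid>",
    "linked_logical_switch_port_id": {"target_id": "<uplink-LS-port-uuid>"},
    "edge_cluster_member_index": [0],
    "subnets": [{"ip_addresses": ["192.168.208.67"], "prefix_length": 24}]
  }' \
  https://192.168.208.160/api/v1/logical-router-ports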

Configure and Test the Tier-0 Router

 

 

Create an HA VIP and a default route for the T0 router, then test the T0 router.

Configure the HA VIP as shown below.

Click ADD and verify.

Create Static Routes

Go to Routing > Static Routes.

Click ADD.

Click Add and verify. 
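The default route amounts to a single Manager API call against the T0 router (the router UUID is a placeholder, and 192.168.208.1 is assumed to be the upstream gateway, matching the gateway used earlier in this lab):

curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{"network": "0.0.0.0/0", "next_hops": [{"ip_address": "192.168.208.1"}]}' \
  https://192.168.208.160/api/v1/logical-routers/<T0-Router-uuid>/routing/static-routes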

Verify the Tier 0 router by making sure the T0 uplinks and HA VIP are reachable from your laptop.

 

 

ping 192.168.208.67

PING 192.168.208.67 (192.168.208.67) 56(84) bytes of data.

64 bytes from 192.168.208.67: icmp_seq=1 ttl=64 time=0.981 ms

64 bytes from 192.168.208.67: icmp_seq=2 ttl=64 time=0.568 ms

64 bytes from 192.168.208.67: icmp_seq=3 ttl=64 time=0.487 ms

64 bytes from 192.168.208.67: icmp_seq=4 ttl=64 time=0.895 ms

64 bytes from 192.168.208.67: icmp_seq=5 ttl=64 time=0.372 ms

64 bytes from 192.168.208.67: icmp_seq=6 ttl=64 time=0.386 ms

ping 192.168.208.68

PING 192.168.208.68 (192.168.208.68) 56(84) bytes of data.

64 bytes from 192.168.208.68: icmp_seq=1 ttl=64 time=1.26 ms

64 bytes from 192.168.208.68: icmp_seq=2 ttl=64 time=0.586 ms

64 bytes from 192.168.208.68: icmp_seq=3 ttl=64 time=0.651 ms

ping 192.168.208.69

PING 192.168.208.69 (192.168.208.69) 56(84) bytes of data.

From 192.168.208.165 icmp_seq=1 Destination Host Unreachable

From 192.168.208.165 icmp_seq=2 Destination Host Unreachable

From 192.168.208.165 icmp_seq=3 Destination Host Unreachable

From 192.168.208.165 icmp_seq=4 Destination Host Unreachable

 


 

Create IP Blocks and Pool for Compute Plane


TKGI requires a Floating IP Pool for NSX-T load balancer assignment and the following two IP blocks for Kubernetes pods and nodes:

  • PKS-POD-IP-BLOCK: 172.18.0.0/16
  • PKS-NODE-IP-BLOCK: 172.23.0.0/16 

On the NSX UI Home page, navigate to Networking > IP Address Pools > IP Block.

Click Add

 

Configure the Pod IP Block as follows:


Name: PKS-POD-IP-BLOCK 

CIDR: 172.18.0.0/16 

Add and verify

Configure the Node IP Block as follows:

Name: PKS-NODE-IP-BLOCK

CIDR: 172.23.0.0/16

Add and verify
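Both IP blocks can also be created with two Manager API calls using exactly the values above:

curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{"display_name": "PKS-POD-IP-BLOCK", "cidr": "172.18.0.0/16"}' \
  https://192.168.208.160/api/v1/pools/ip-blocks

curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{"display_name": "PKS-NODE-IP-BLOCK", "cidr": "172.23.0.0/16"}' \
  https://192.168.208.160/api/v1/pools/ip-blocks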

Select the IP Pools tab.

Click ADD.

Configure the IP pool as shown below.

Add and verify 

Create Management Plane

Create Tier-1 Router and Switch


On the NSX UI Home page, navigate to Networking > Logical Switches.

Click ADD.

Click Add and verify 

On the NSX UI Home page, navigate to Networking > Tier-1 Logical Router.

Click Add

Add and verify

Go to the T1 router > Configuration > Router Ports.

Click Add 

Verify the router port.

Select the Routing tab.

Click EDIT and set the following:

Status: Enabled 

Advertise All Connected Routes: Yes 

Save and verify
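In the Manager API this corresponds to creating the Tier-1 router and then enabling route advertisement. A hedged sketch (UUIDs are placeholders; the T0 attachment made in the UI is handled separately through router link ports, omitted here). Note that the PUT requires the object's current _revision, so read it with the GET first:

# Create the Tier-1 logical router
curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{"display_name": "T1-Router", "router_type": "TIER1", "edge_cluster_id": "<Edge-Cluster-01-uuid>"}' \
  https://192.168.208.160/api/v1/logical-routers

# Read the advertisement config to get its current _revision ...
curl -k -u admin:'<admin-password>' \
  https://192.168.208.160/api/v1/logical-routers/<T1-Router-uuid>/routing/advertisement

# ... then enable advertisement of connected routes (match _revision to the GET result)
curl -k -u admin:'<admin-password>' -X PUT -H 'Content-Type: application/json' \
  -d '{"resource_type": "AdvertisementConfig", "enabled": true, "advertise_nsx_connected_routes": true, "_revision": 0}' \
  https://192.168.208.160/api/v1/logical-routers/<T1-Router-uuid>/routing/advertisement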

Create NAT Rules


You should create the following NAT rules on the Tier-0 router for the TKGI Management Plane VMs.

 

 

On the NSX UI Home page, navigate to Networking > NAT

Click ADD.

Verify the newly added DNAT rule.

Add a second DNAT rule.

Verify the creation of the DNAT rules.

Create the SNAT rule

Verify the creation of the SNAT rule. 
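Each of these rules is a POST against the T0 router's NAT endpoint in the Manager API. The addresses below are made up purely to show the shape of the calls; the real values come from your management-plane design:

# DNAT: forward an external VIP to an internal management VM
curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{"action": "DNAT", "match_destination_network": "192.168.208.201/32", "translated_network": "172.31.0.2", "enabled": true}' \
  https://192.168.208.160/api/v1/logical-routers/<T0-Router-uuid>/nat/rules

# SNAT: translate the management subnet to a routable address on the way out
curl -k -u admin:'<admin-password>' -X POST -H 'Content-Type: application/json' \
  -d '{"action": "SNAT", "match_source_network": "172.31.0.0/24", "translated_network": "192.168.208.202", "enabled": true}' \
  https://192.168.208.160/api/v1/logical-routers/<T0-Router-uuid>/nat/rules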

I hope you enjoy reading this blog as much as I enjoyed writing it. Feel free to share this on social media if it is worth sharing.
