You have the following routing design. You discover that Compute Engine instances in Subnet-2 in the asia-southeast1 region cannot communicate with compute resources on-premises. What should you do?

A.

Configure a custom route advertisement on the Cloud Router.

B.

Enable IP forwarding in the asia-southeast1 region.

C.

Change the VPC dynamic routing mode to Global.

D.

Add a second Border Gateway Protocol (BGP) session to the Cloud Router.
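
For reference, the two routing-level changes these options describe map to gcloud commands like the following. The network name, router name, and advertised CIDR are hypothetical placeholders:

    # Switch the VPC's dynamic routing mode from regional to global (option C)
    gcloud compute networks update my-vpc --bgp-routing-mode=global

    # Or add a custom route advertisement on the Cloud Router (option A)
    gcloud compute routers update my-cloud-router --region=asia-southeast1 \
        --advertisement-mode=custom \
        --set-advertisement-groups=all_subnets \
        --set-advertisement-ranges=10.148.0.0/20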

You have applications running in the us-west1 and us-east1 regions. You want to build a highly available VPN that provides 99.99% availability to connect your applications from your project to the cloud services provided by your partner's project while minimizing the amount of infrastructure required. Your partner's services are also in the us-west1 and us-east1 regions. You want to implement the simplest solution. What should you do?

A.

Create one Cloud Router and one HA VPN gateway in each region of your VPC and your partner's VPC. Connect your VPN gateways to the partner's gateways. Enable global dynamic routing in each VPC.

B.

Create one Cloud Router and one HA VPN gateway in the us-west1 region of your VPC. Create one OpenVPN Access Server in each region of your partner's VPC. Connect your VPN gateway to your partner's servers.

C.

Create one OpenVPN Access Server in each region of your VPC and your partner's VPC. Connect your servers to the partner's servers.

D.

Create one Cloud Router and one HA VPN gateway in the us-west1 region of your VPC and your partner's VPC. Connect your VPN gateways to the partner's gateways with a pair of tunnels. Enable global dynamic routing in each VPC.
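
A minimal sketch of the HA VPN building blocks these options refer to, assuming hypothetical resource names and that the partner mirrors the configuration in their project (a peer gateway in another project is normally referenced by its full resource URI):

    gcloud compute vpn-gateways create ha-gw-west --network=my-vpc --region=us-west1
    gcloud compute routers create cr-west --network=my-vpc --region=us-west1 --asn=65001
    # One tunnel per HA VPN gateway interface forms the pair needed for the 99.99% SLA
    gcloud compute vpn-tunnels create tun-west-0 --region=us-west1 \
        --vpn-gateway=ha-gw-west --peer-gcp-gateway=partner-ha-gw-west \
        --interface=0 --ike-version=2 --shared-secret=SHARED_SECRET --router=cr-west
    gcloud compute vpn-tunnels create tun-west-1 --region=us-west1 \
        --vpn-gateway=ha-gw-west --peer-gcp-gateway=partner-ha-gw-west \
        --interface=1 --ike-version=2 --shared-secret=SHARED_SECRET --router=cr-west
    # BGP sessions are then added with gcloud compute routers add-interface / add-bgp-peer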

Your on-premises data center has two routers connected to your Google Cloud environment through a VPN on each router. All applications are working correctly; however, all of the traffic is passing across a single VPN instead of being load-balanced across the two connections as desired.

During troubleshooting you find:

• Each on-premises router is configured with the same ASN.

• Each on-premises router is configured with the same routes and priorities.

• Both on-premises routers are configured with a VPN connected to a single Cloud Router.

• The VPN logs contain "no-proposal-chosen" lines when the VPNs are connecting.

• A BGP session is not established between one on-premises router and the Cloud Router.

What is the most likely cause of this problem?

A.

One of the VPN sessions is configured incorrectly.

B.

A firewall is blocking the traffic across the second VPN connection.

C.

You do not have a load balancer to load-balance the network traffic.

D.

BGP sessions are not established between both on-premises routers and the Cloud Router.
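
When debugging a setup like this, the tunnel and BGP state can be read directly from the API. The tunnel and router names below are hypothetical:

    # "no-proposal-chosen" in the logs points at an IKE/cipher mismatch on one tunnel
    gcloud compute vpn-tunnels describe tunnel-to-onprem-2 --region=us-west1 \
        --format="yaml(status,detailedStatus,ikeVersion,peerIp)"

    # Confirm which BGP sessions the Cloud Router has actually established
    gcloud compute routers get-status cloud-router-1 --region=us-west1 \
        --format="yaml(result.bgpPeerStatus)"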

You successfully provisioned a single Dedicated Interconnect. The physical connection is at a colocation facility closest to us-west2. Seventy-five percent of your workloads are in us-east4, and the remaining twenty-five percent of your workloads are in us-central1. All workloads have the same network traffic profile. You need to minimize data transfer costs when deploying VLAN attachments. What should you do?

A.

Keep the existing Dedicated Interconnect. Deploy a VLAN attachment to a Cloud Router in us-west2, and use VPC global routing to access workloads in us-east4 and us-central1.

B.

Keep the existing Dedicated Interconnect. Deploy a VLAN attachment to a Cloud Router in us-east4, and deploy another VLAN attachment to a Cloud Router in us-central1.

C.

Order a new Dedicated Interconnect for a colocation facility closest to us-east4, and use VPC global routing to access workloads in us-central1.

D.

Order a new Dedicated Interconnect for a colocation facility closest to us-central1, and use VPC global routing to access workloads in us-east4.
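
For context, a VLAN attachment is created against a Cloud Router in whichever region you choose, independent of the colocation facility where the physical connection lands. A sketch with hypothetical names:

    gcloud compute routers create cr-east4 --network=my-vpc --region=us-east4 --asn=65020
    gcloud compute interconnects attachments dedicated create attach-east4 \
        --region=us-east4 --router=cr-east4 --interconnect=my-dedicated-ic
    # A second attachment in us-central1 would repeat the same two commands for that region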

You are designing an IP address scheme for new private Google Kubernetes Engine (GKE) clusters. Due to IP address exhaustion of the RFC 1918 address space in your enterprise, you plan to use privately used public IP space for the new clusters. You want to follow Google-recommended practices. What should you do after designing your IP scheme?

A.

Create the minimum usable RFC 1918 primary and secondary subnet IP ranges for the clusters. Re-use the secondary address range for the pods across multiple private GKE clusters.

B.

Create the minimum usable RFC 1918 primary and secondary subnet IP ranges for the clusters. Re-use the secondary address range for the services across multiple private GKE clusters.

C.

Create privately used public IP primary and secondary subnet ranges for the clusters. Create a private GKE cluster with the following options selected: --enable-ip-alias and --enable-private-nodes.

D.

Create privately used public IP primary and secondary subnet ranges for the clusters. Create a private GKE cluster with the following options selected: --disable-default-snat, --enable-ip-alias, and --enable-private-nodes.
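
A sketch of how the flags named in these options fit together, using hypothetical names and placeholder non-RFC 1918 ranges (taken here from the 100.64.0.0/10 shared space purely for illustration):

    gcloud compute networks subnets create gke-subnet --network=my-vpc --region=us-central1 \
        --range=100.64.0.0/20 \
        --secondary-range=pods=100.64.64.0/18,services=100.64.128.0/24
    gcloud container clusters create my-cluster --region=us-central1 \
        --network=my-vpc --subnetwork=gke-subnet \
        --enable-ip-alias --enable-private-nodes --disable-default-snat \
        --cluster-secondary-range-name=pods --services-secondary-range-name=services \
        --master-ipv4-cidr=172.16.0.16/28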

Your company has a single Virtual Private Cloud (VPC) network deployed in Google Cloud with access from on-premises locations using Cloud Interconnect connections. Your company must be able to send traffic to Cloud Storage only through the Interconnect links while accessing other Google APIs and services over the public internet. What should you do?

A.

Use the default public domains for all Google APIs and services.

B.

Use Private Service Connect to access Cloud Storage, and use the default public domains for all other Google APIs and services.

C.

Use Private Google Access, with restricted.googleapis.com virtual IP addresses for Cloud Storage and private.googleapis.com for all other Google APIs and services.

D.

Use Private Google Access, with private.googleapis.com virtual IP addresses for Cloud Storage and restricted.googleapis.com virtual IP addresses for all other Google APIs and services.
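
For reference, steering a single API at a Google Access VIP while leaving the others on their public domains combines a DNS override with a route advertisement toward on-premises. The names below are hypothetical; 199.36.153.4/30 is the documented restricted.googleapis.com range, and on-premises resolvers would need a matching record:

    gcloud dns managed-zones create storage-api --dns-name=storage.googleapis.com. \
        --visibility=private --networks=my-vpc \
        --description="Resolve Cloud Storage to the restricted VIP"
    gcloud dns record-sets create storage.googleapis.com. --zone=storage-api \
        --type=A --ttl=300 --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7
    # Advertise the VIP range over the Interconnect so it is reachable from on-premises
    gcloud compute routers update my-router --region=us-west1 \
        --advertisement-mode=custom --set-advertisement-groups=all_subnets \
        --set-advertisement-ranges=199.36.153.4/30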

Your company is planning a migration to Google Kubernetes Engine. Your application team informed you that they require a minimum of 60 Pods per node and a maximum of 100 Pods per node. Which Pods-per-node CIDR range should you use?

A.

/24

B.

/25

C.

/26

D.

/28
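
As background for the sizing: per the published GKE table, each node's Pod range must hold roughly twice the configured maximum Pods per node, which maps a /24 to the 110-Pod default maximum, a /25 to 64 Pods, and a /26 to 32 Pods. A hypothetical cluster pinned to 100 Pods per node could be created like this:

    # --default-max-pods-per-node requires a VPC-native (--enable-ip-alias) cluster
    gcloud container clusters create my-cluster --region=us-west1 \
        --enable-ip-alias --default-max-pods-per-node=100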

You manage two VPCs: VPC1 and VPC2, each with resources spread across two regions. You connected the VPCs with HA VPN in both regions to ensure redundancy. You’ve observed that when one VPN gateway fails, workloads that are located within the same region but different VPCs lose communication with each other. After further debugging, you notice that VMs in VPC2 receive traffic but their replies never get to the VMs in VPC1. You need to quickly fix the issue. What should you do?

A.

Enable regional dynamic routing mode in VPC2.

B.

Enable global dynamic routing mode in VPC1.

C.

Enable global dynamic routing mode in VPC2.

D.

Enable regional dynamic routing mode in VPC1.
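
Before changing anything, the current dynamic routing mode of each VPC can be checked directly, assuming the networks are literally named vpc1 and vpc2:

    gcloud compute networks describe vpc1 --format="value(routingConfig.routingMode)"
    gcloud compute networks describe vpc2 --format="value(routingConfig.routingMode)"
    # A mode change then uses: gcloud compute networks update NETWORK --bgp-routing-mode=global|regional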

There are two established Partner Interconnect connections between your on-premises network and Google Cloud. The VPC that hosts the Partner Interconnect connections is named "vpc-a" and contains three VPC subnets across three regions, Compute Engine instances, and a GKE cluster. Your on-premises users would like to resolve records hosted in a Cloud DNS private zone following Google-recommended practices. You need to implement a solution that allows your on-premises users to resolve records that are hosted in Google Cloud. What should you do?

A.

Associate the private zone to "vpc-a." Create an outbound forwarding policy and associate the policy to "vpc-a." Configure the on-premises DNS servers to forward queries for the private zone to the entry point addresses created when the policy was attached to "vpc-a."

B.

Configure a DNS proxy service inside one of the GKE clusters. Expose the DNS proxy service in GKE as an internal load balancer. Configure the on-premises DNS servers to forward queries for the private zone to the IP address of the internal load balancer.

C.

Use custom route advertisements to announce 169.254.169.254 via BGP to the on-premises environment. Configure the on-premises DNS servers to forward DNS requests to 169.254.169.254.

D.

Associate the private zone to "vpc-a." Create an inbound forwarding policy and associate the policy to "vpc-a." Configure the on-premises DNS servers to forward queries for the private zone to the entry point addresses created when the policy was attached to "vpc-a."
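
For the inbound-forwarding approach described in these options, the policy and its resulting per-subnet entry-point addresses can be created and listed as follows; the policy name is hypothetical:

    gcloud dns policies create inbound-from-onprem --networks=vpc-a \
        --enable-inbound-forwarding --description="Accept DNS queries from on-premises"
    # The entry-point addresses that on-premises servers should target appear as DNS_RESOLVER addresses
    gcloud compute addresses list --filter="purpose=DNS_RESOLVER" \
        --format="value(address,subnetwork)"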

Your organization uses a Shared VPC architecture with a host project and three service projects. You have Compute Engine instances that reside in the service projects. You have critical workloads in your on-premises data center. You need to ensure that the Google Cloud instances can resolve on-premises hostnames via the Dedicated Interconnect you deployed to establish hybrid connectivity. What should you do?

A.

Create a Cloud DNS private forwarding zone in the host project of the Shared VPC that forwards the private zone to the on-premises DNS servers.

In your Cloud Router, add a custom route advertisement for the IP range 35.199.192.0/19 to the on-premises environment.

B.

Create a Cloud DNS private forwarding zone in the host project of the Shared VPC that forwards the private zone to the on-premises DNS servers.

In your Cloud Router, add a custom route advertisement for the IP 169.254.169.254 to the on-premises environment.

C.

Configure a Cloud DNS private zone in the host project of the Shared VPC.

Set up DNS forwarding to your Google Cloud private zone on your on-premises DNS servers to point to the inbound forwarder IP address in your host project.

In your Cloud Router, add a custom route advertisement for the IP 169.254.169.254 to the on-premises environment.

D.

Configure a Cloud DNS private zone in the host project of the Shared VPC.

Set up DNS forwarding to your Google Cloud private zone on your on-premises DNS servers to point to the inbound forwarder IP address in your host project.

Configure a DNS policy in the Shared VPC to allow inbound query forwarding with your on-premises DNS server as the alternative DNS server.
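
For the forwarding-zone approach in these options, a minimal sketch follows. The zone name, domain, and on-premises DNS server IPs are hypothetical; 35.199.192.0/19 is the range Cloud DNS uses as the source of forwarded queries, so replies must be routable back over the Interconnect:

    gcloud dns managed-zones create onprem-zone --dns-name=corp.example.com. \
        --visibility=private --networks=shared-vpc-network \
        --forwarding-targets=192.168.10.5,192.168.10.6 \
        --description="Forward corp.example.com to on-premises DNS"
    # Advertise the forwarding source range toward on-premises
    gcloud compute routers update my-cloud-router --region=us-west1 \
        --advertisement-mode=custom --set-advertisement-groups=all_subnets \
        --set-advertisement-ranges=35.199.192.0/19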