The system administrator of a high-performance computing (HPC) cluster that uses an InfiniBand fabric for high-speed interconnects between nodes has received reports from researchers of unusually slow data transfer rates between two specific compute nodes. The administrator needs to verify that the path between these two nodes is optimal.

What command should be used?

A.

ibtracert

B.

ibstatus

C.

ibping

D.

ibnetdiscover
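
For reference, a minimal sketch of tracing the fabric route between two nodes with ibtracert, assuming the source and destination LIDs have already been looked up (4 and 9 below are example values, typically found with ibstat or ibnetdiscover):

# Trace the hop-by-hop route between the two nodes by source and destination LID
ibtracert 4 9

The output lists each CA and switch port along the route, which makes a sub-optimal or degraded path straightforward to spot.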

An administrator requires full access to the NGC Base Command Platform CLI.

Which command should be used to accomplish this action?

A.

ngc set API

B.

ngc config set

C.

ngc config BCP
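
For context, a minimal sketch of setting up the NGC CLI; ngc config set prompts interactively for the API key, org, team, ACE, and output format:

# Configure the NGC CLI interactively (requires an API key generated on ngc.nvidia.com)
ngc config set

# Confirm what is currently stored
ngc config current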

A system administrator is troubleshooting a Docker container that crashes unexpectedly due to a segmentation fault. They want to generate and analyze core dumps to identify the root cause of the crash.

Why would generating core dumps be a critical step in troubleshooting this issue?

A.

Core dumps prevent future crashes by stopping any further execution of the faulty process.

B.

Core dumps provide real-time logs that can be used to monitor ongoing application performance.

C.

Core dumps restore the process to its previous state, often fixing the error-causing crash.

D.

Core dumps capture the memory state of the process at the time of the crash.
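
For context, a hedged sketch of one common way to enable and collect core dumps for a containerized process; the image name, paths, and core file name below are hypothetical, and the analysis assumes gdb and the application binary are available:

# On the host (containers share the host kernel): send core files to a known directory
sudo mkdir -p /tmp/cores
echo '/tmp/cores/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern

# Run the container with an unlimited core file size and the dump directory mounted
docker run --ulimit core=-1 -v /tmp/cores:/tmp/cores my-app:latest   # image name is hypothetical

# Inspect the captured memory state of the crashed process
gdb /path/to/app /tmp/cores/core.myapp.1234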

A cloud engineer wants to provision a virtual machine for machine learning using the NVIDIA Virtual Machine Image (VMI) and RAPIDS.

What technology stack will be set up for the development team automatically when the VMI is deployed?

A.

Ubuntu Server, Docker-CE, NVIDIA Container Toolkit, CSP CLI, NGC CLI, NVIDIA Driver

B.

CentOS, Docker-CE, NVIDIA Container Toolkit, CSP CLI, NGC CLI

C.

Ubuntu Server, Docker-CE, NVIDIA Container Toolkit, CSP CLI, NGC CLI, NVIDIA Driver, RAPIDS

D.

Ubuntu Server, Docker-CE, NVIDIA Container Toolkit, CSP CLI, NGC CLI
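
If it helps to confirm what a freshly deployed VMI actually provides, a quick hedged spot-check of the preinstalled components (exact versions vary by image, and the CSP CLI depends on the provider):

nvidia-smi            # NVIDIA driver
docker --version      # Docker-CE
nvidia-ctk --version  # NVIDIA Container Toolkit
ngc --version         # NGC CLI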

A system administrator needs to lower latency for an AI application by utilizing GPUDirect Storage.

Which two bottlenecks are avoided with this approach? (Choose two.)

A.

PCIe

B.

CPU

C.

NIC

D.

System Memory

E.

DPU

You are managing a high-performance computing environment. Users have reported storage performance degradation, particularly during peak usage hours when both small metadata-intensive operations and large sequential I/O operations are being performed simultaneously. You suspect that the mixed workload is causing contention on the storage system.

Which of the following actions is most likely to improve overall storage performance in this mixed workload environment?

A.

Reduce the stripe count for large files to decrease parallelism across the Object Storage Targets (OSTs).

B.

Separate metadata-intensive operations and large sequential I/O operations by using different storage pools for each type of workload.

C.

Increase the number of Object Storage Targets (OSTs) to handle more metadata operations.

D.

Disable GPUDirect Storage (GDS) during peak hours to reduce I/O load on the Lustre file system.
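
For illustration, a hedged sketch of separating workloads with Lustre OST pools; the file system name lfs01, the pool name, the OST indices, and the mount path are all hypothetical:

# On the MGS: create a pool for large sequential I/O and add selected OSTs to it
lctl pool_new lfs01.streaming
lctl pool_add lfs01.streaming lfs01-OST[0004-0007]

# On a client: direct a directory's files to that pool with a wider stripe
lfs setstripe --pool streaming --stripe-count 4 /mnt/lfs01/large_io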

An administrator wants to check if the BlueMan service can access the DPU.

How can this be done?

A.

Via system logs

B.

Via the DOCA Telemetry Service (DTS)

C.

Via a lightweight database operating in the DPU server

D.

Via Linux dump files
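
Purely as an illustrative assumption that the DOCA Telemetry Service is deployed as a container on the DPU's Arm OS (as in standard BFB installs), one way to see whether it is running might be:

# On the DPU: list running service containers and look for the telemetry service
sudo crictl ps | grep -i telemetry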

If a Magnum IO-enabled application experiences delays during the ETL phase, what troubleshooting step should be taken?

A.

Disable NVLink to prevent conflicts between GPUs during data transfer.

B.

Reduce the size of datasets being processed by splitting them into smaller chunks.

C.

Increase the swap space on the host system to handle larger datasets.

D.

Ensure that GPUDirect Storage is configured to allow direct data transfer from storage to GPU memory.
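
As one concrete check, a hedged sketch of verifying that GPUDirect Storage is available and correctly configured, assuming a standard CUDA install path for the gdscheck tool:

# Report platform support and the active GDS configuration
/usr/local/cuda/gds/tools/gdscheck.py -p

# Review the cuFile configuration consumed by GDS-enabled applications
cat /etc/cufile.json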

Your Kubernetes cluster is running a mixture of AI training and inference workloads. You want to ensure that inference services have higher priority over training jobs during peak resource usage times.

How would you configure Kubernetes to prioritize inference workloads?

A.

Increase the number of replicas for inference services so they always have more resources than training jobs.

B.

Set up a separate namespace for inference services and limit resource usage in other namespaces.

C.

Use Horizontal Pod Autoscaling (HPA) based on memory usage to scale up inference services during peak times.

D.

Implement ResourceQuotas and PriorityClasses to assign higher priority and resource guarantees to inference workloads over training jobs.
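
For illustration, a minimal hedged sketch of a PriorityClass plus a quota on the training side; the names, namespace, priority value, and limits are hypothetical examples:

# Create a priority class for inference pods
kubectl create priorityclass inference-high --value=1000000 \
  --description="Prefer inference pods over training jobs under contention"

# Cap what the training namespace can consume
kubectl create quota training-quota --namespace=training \
  --hard=requests.cpu=64,requests.memory=256Gi

# In the inference deployment's pod template, reference the class:
#   spec:
#     priorityClassName: inference-high

Pods carrying the higher priorityClassName are scheduled ahead of, and can preempt, lower-priority training pods when the cluster is under pressure.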