Which statement BEST differentiates an LLM-powered test infrastructure from a traditional chatbot system used in testing?
Which option BEST differentiates the three prompting techniques?
Which technique MOST directly reduces hallucinations by grounding the model in project realities?
Which inference-time setting reduces output variability by narrowing the sampling distribution?
Which statement BEST describes vision-language models (VLMs)?
What is a hallucination in LLM outputs?
Which AI approach requires feature engineering and structured data preparation?
Which statement about fine-tuning for test tasks is INCORRECT?
An LLM prioritizes tests using a likelihood × impact score but ranks a trivial tooltip change above a payment failure. What defect does this MOST LIKELY indicate?
Who typically defines the system prompt in a testing workflow?