You are part of your organization’s ML engineering team and notice that the accuracy of a model that was recently deployed into production is deteriorating.
What is the best first step to address this?
What is the most important reason for documenting risks when developing an AI system?
CASE STUDY
Please use the following to answer the next question:
A mid-size US healthcare network has decided to develop an AI solution to detect a type of cancer that is most likely to arise in adults. Specifically, the healthcare network intends to create a recognition algorithm that will perform an initial review of all imaging and then route records to a radiologist for secondary review pursuant to agreed-upon criteria (e.g., a confidence score below a threshold).
To date, the healthcare network has taken the following steps: defined its AI ethical principles; conducted discovery to identify the intended uses and success criteria for the system; established an AI governance committee; assembled a broad, cross-functional team with clear roles and responsibilities; and created policies and procedures to document standards, workflows, timelines and risk thresholds during the project.
The healthcare network intends to retain a cloud provider to host the solution and a consulting firm to help develop the algorithm using the healthcare network's existing data and de-identified data that is licensed from a large US clinical research partner.
Which of the following steps can best mitigate the possibility of discrimination prior to training and testing the AI solution?
A leading software development company wants to integrate AI-powered chatbots into its customer service platform. After researching various third-party AI models on the market, the company is considering two options:
Option A - an open-source language model trained on a vast corpus of text data and capable of being trained to respond to natural language inputs.
Option B - a proprietary, generative AI model pre-trained on large data sets, which uses transformer-based architectures to generate human-like responses based on multimodal user input.
Why would Option A be the best choice for the company?
A company deploys an AI model for fraud detection in online transactions. During its operation, the model begins to exhibit high rates of false positives, flagging legitimate transactions as fraudulent.
What is the best step the company should take to address this development?
What is the most important reason to document the results of AI testing?
Which of the following is an obligation of an importer of high-risk AI systems under the EU AI Act?
All of the following are examples of biometric data in the US EXCEPT?
Which stakeholder is responsible for lawful collection of data for the training of the foundational AI model?
Retraining an LLM can be necessary for all of the following reasons EXCEPT?