
Amazon AIF-C01 Practice Test - Questions Answers, Page 6

List of questions

Question 51


A company is building a large language model (LLM) question answering chatbot. The company wants to decrease the number of actions call center employees need to take to respond to customer questions.

Which business objective should the company use to evaluate the effect of the LLM chatbot?

A. Website engagement rate
B. Average call duration
C. Corporate social responsibility
D. Regulatory compliance
Suggested answer: B

Question 52


A company is using few-shot prompting on a base model that is hosted on Amazon Bedrock. The model currently uses 10 examples in the prompt. The model is invoked once daily and is performing well. The company wants to lower the monthly cost.

Which solution will meet these requirements?

A. Customize the model by using fine-tuning.
B. Decrease the number of tokens in the prompt.
C. Increase the number of tokens in the prompt.
D. Use Provisioned Throughput.
Suggested answer: B
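As a rough illustration of option B: Bedrock on-demand invocations are billed per input and output token, so pruning redundant few-shot examples shrinks the prompt and the monthly cost. The sketch below is illustrative only; the examples, the 4-characters-per-token estimate, and the model ID are placeholders, and the request body schema differs per model.

```python
import json
import boto3

# Hypothetical few-shot examples. Keeping only the most representative ones
# reduces input tokens, which lowers per-invocation cost on Amazon Bedrock.
examples = [
    ("What is your return window?", "30 days from delivery."),
    ("Do you ship internationally?", "Yes, to most countries."),
    ("How do I reset my password?", "Use the 'Forgot password' link."),
]

question = "Can I change my shipping address after ordering?"
prompt = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples) + f"\nQ: {question}\nA:"

# Rough token estimate (~4 characters per token) to compare prompt sizes.
print("approx prompt tokens:", len(prompt) // 4)

# Invocation sketch; the body format below assumes an Anthropic Claude model.
bedrock = boto3.client("bedrock-runtime")
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 200,
        "messages": [{"role": "user", "content": prompt}],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```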

Question 53


An accounting firm wants to implement a large language model (LLM) to automate document processing. The firm must proceed responsibly to avoid potential harms.

What should the firm do when developing and deploying the LLM? (Select TWO.)

A. Include fairness metrics for model evaluation.
B. Adjust the temperature parameter of the model.
C. Modify the training data to mitigate bias.
D. Avoid overfitting on the training data.
E. Apply prompt engineering techniques.
Suggested answer: A, C
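For context on option A, a fairness metric compares model behavior across groups. Below is a minimal sketch of one such metric (demographic parity difference) on made-up predictions; in practice a managed tool such as Amazon SageMaker Clarify can compute metrics like this during evaluation.

```python
import numpy as np

# Hypothetical predictions (1 = positive outcome) and a sensitive attribute
# splitting records into two groups, A and B.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

# Demographic parity difference: the gap in positive-prediction rates between groups.
rate_a = preds[group == "A"].mean()
rate_b = preds[group == "B"].mean()
print("demographic parity difference:", abs(rate_a - rate_b))
```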


Question 54


A company has built an image classification model to predict plant diseases from photos of plant leaves. The company wants to evaluate how many images the model classified correctly.

Which evaluation metric should the company use to measure the model's performance?

A. R-squared score
B. Accuracy
C. Root mean squared error (RMSE)
D. Learning rate
Suggested answer: B
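Option B in miniature: accuracy is simply the share of images classified correctly. The labels below are hypothetical.

```python
# Accuracy = correctly classified images / total images.
y_true = ["rust", "healthy", "blight", "healthy", "rust"]
y_pred = ["rust", "healthy", "healthy", "healthy", "rust"]

correct = sum(t == p for t, p in zip(y_true, y_pred))
print("accuracy:", correct / len(y_true))  # 4 of 5 correct = 0.8
```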

Question 55


A large retailer receives thousands of customer support inquiries about products every day. The customer support inquiries need to be processed and responded to quickly. The company wants to implement Agents for Amazon Bedrock.

What are the key benefits of using Amazon Bedrock agents that could help this retailer?

A. Generation of custom foundation models (FMs) to predict customer needs
B. Automation of repetitive tasks and orchestration of complex workflows
C. Automatically calling multiple foundation models (FMs) and consolidating the results
D. Selecting the foundation model (FM) based on predefined criteria and metrics
Suggested answer: B
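To make option B concrete, a Bedrock agent is invoked with a single call and then orchestrates the underlying steps (knowledge base lookups, action group calls) itself. The sketch below assumes an agent has already been configured; the agent ID and alias ID are placeholders.

```python
import uuid
import boto3

# Sketch: calling a pre-built Bedrock agent that automates a support workflow.
client = boto3.client("bedrock-agent-runtime")
response = client.invoke_agent(
    agentId="AGENT_ID_PLACEHOLDER",
    agentAliasId="AGENT_ALIAS_PLACEHOLDER",
    sessionId=str(uuid.uuid4()),
    inputText="Customer asks: where is my order #12345?",
)

# The response streams back chunks as the agent works through its steps.
answer = b"".join(
    event["chunk"]["bytes"] for event in response["completion"] if "chunk" in event
)
print(answer.decode("utf-8"))
```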

Question 56


A company wants to develop a large language model (LLM) application by using Amazon Bedrock and customer data that is uploaded to Amazon S3. The company's security policy states that each team can access data for only the team's own customers.

Which solution will meet these requirements?

A. Create an Amazon Bedrock custom service role for each team that has access to only the team's customer data.
B. Create a custom service role that has Amazon S3 access. Ask teams to specify the customer name on each Amazon Bedrock request.
C. Redact personal data in Amazon S3. Update the S3 bucket policy to allow team access to customer data.
D. Create one Amazon Bedrock role that has full Amazon S3 access. Create IAM roles for each team that have access to only each team's customer folders.
Suggested answer: A
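A minimal sketch of option A, assuming each team's customer data sits under its own prefix in one bucket: each team gets its own Bedrock service role whose S3 permissions cover only that prefix. Bucket, prefix, and role names are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy letting Amazon Bedrock assume the team-specific service role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "bedrock.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions scoped to a single team's customer prefix. A real policy would
# typically also add an s3:prefix condition to restrict bucket listing.
s3_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::customer-data-bucket",
            "arn:aws:s3:::customer-data-bucket/team-a/*",
        ],
    }],
}

iam.create_role(RoleName="BedrockTeamARole",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(RoleName="BedrockTeamARole",
                    PolicyName="TeamACustomerDataAccess",
                    PolicyDocument=json.dumps(s3_policy))
```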

Question 57


A company uses Amazon SageMaker for its ML pipeline in a production environment. The company has large input data sizes up to 1 GB and processing times up to 1 hour. The company needs near real-time latency.

Which SageMaker inference option meets these requirements?

A. Real-time inference
B. Serverless inference
C. Asynchronous inference
D. Batch transform
Suggested answer: C
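The stated limits (payloads up to 1 GB, processing up to an hour, near real-time latency) match SageMaker asynchronous inference, which queues requests staged in Amazon S3 and writes results back to S3; real-time endpoints cap payload size and request duration far below these values. A minimal invocation sketch, with placeholder endpoint and S3 locations:

```python
import boto3

# Sketch: invoking a SageMaker asynchronous inference endpoint. The input is
# staged in S3, and the call returns immediately with the output location.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint_async(
    EndpointName="prod-ml-async-endpoint",
    InputLocation="s3://example-bucket/inputs/request-001.csv",
    ContentType="text/csv",
)

# Results appear at this S3 location once processing finishes.
print("result will be written to:", response["OutputLocation"])
```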

Question 58


A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency possible.

Which solution will meet these requirements?

A. Deploy optimized small language models (SLMs) on edge devices.
B. Deploy optimized large language models (LLMs) on edge devices.
C. Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.
D. Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.
Suggested answer: A
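Option A in sketch form: a small model loaded directly on the device removes the network round trip a centralized API would add. The snippet below uses a Hugging Face-hosted small model as a stand-in; the model ID is only an example, and real edge deployments typically add quantization or compilation for the target hardware.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: run a small language model locally for low-latency inference.
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example small model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Summarize today's sensor readings:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```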

Question 59


A company is building a contact center application and wants to gain insights from customer conversations. The company wants to analyze and extract key information from the audio of the customer calls.

Which solution meets these requirements?

A. Build a conversational chatbot by using Amazon Lex.
B. Transcribe call recordings by using Amazon Transcribe.
C. Extract information from call recordings by using Amazon SageMaker Model Monitor.
D. Create classification labels by using Amazon Comprehend.
Suggested answer: B
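Option B as a minimal sketch: Amazon Transcribe converts the call audio to text, which can then be analyzed for key information. Bucket, key, and job names below are placeholders.

```python
import boto3

# Sketch: start a transcription job for a recorded support call stored in S3.
transcribe = boto3.client("transcribe")
transcribe.start_transcription_job(
    TranscriptionJobName="support-call-0001",
    Media={"MediaFileUri": "s3://example-bucket/calls/call-0001.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    OutputBucketName="example-bucket",  # transcript JSON lands here when done
)
```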

Question 60


A company wants to build an ML model by using Amazon SageMaker. The company needs to share and manage variables for model development across multiple teams.

Which SageMaker feature meets these requirements?

A. Amazon SageMaker Feature Store
B. Amazon SageMaker Data Wrangler
C. Amazon SageMaker Clarify
D. Amazon SageMaker Model Cards
Suggested answer: A
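A minimal sketch of option A, registering a feature group that multiple teams can query for model development; every name, ARN, and S3 location below is a placeholder.

```python
import boto3

# Sketch: create a shared feature group in SageMaker Feature Store.
sagemaker = boto3.client("sagemaker")
sagemaker.create_feature_group(
    FeatureGroupName="customer-features",
    RecordIdentifierFeatureName="customer_id",
    EventTimeFeatureName="event_time",
    FeatureDefinitions=[
        {"FeatureName": "customer_id", "FeatureType": "String"},
        {"FeatureName": "event_time", "FeatureType": "String"},
        {"FeatureName": "avg_order_value", "FeatureType": "Fractional"},
    ],
    OnlineStoreConfig={"EnableOnlineStore": True},
    OfflineStoreConfig={"S3StorageConfig": {"S3Uri": "s3://example-bucket/feature-store/"}},
    RoleArn="arn:aws:iam::111122223333:role/SageMakerFeatureStoreRole",
)
```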
Total 96 questions