Question 58 - AIF-C01 discussion

A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency possible.

Which solution will meet these requirements?

A. Deploy optimized small language models (SLMs) on edge devices.

B. Deploy optimized large language models (LLMs) on edge devices.

C. Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.

D. Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.
Suggested answer: A
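
Option A is suggested because running an optimized SLM directly on the device eliminates the network round-trip that any centralized API (options C and D) must pay on every request, and an SLM's compute cost fits edge hardware better than an LLM's (option B). A minimal back-of-the-envelope sketch of that reasoning, using purely hypothetical, illustrative latency figures (not benchmarks):

```python
# Hypothetical latency model: end-to-end inference latency is the
# network round-trip time plus the model's compute time.
# All numbers below are illustrative assumptions, not measurements.

def total_latency_ms(network_rtt_ms: float, compute_ms: float) -> float:
    """End-to-end latency = network round-trip + on-model compute."""
    return network_rtt_ms + compute_ms

# Option A: optimized SLM on the edge device -- no network hop at all.
slm_on_edge = total_latency_ms(network_rtt_ms=0, compute_ms=40)

# Option D: centralized LLM API -- datacenter compute may be fast,
# but every request pays the round-trip to the cloud.
llm_via_api = total_latency_ms(network_rtt_ms=80, compute_ms=25)

print(slm_on_edge, llm_via_api)  # local SLM has the lower total latency
```

Even with generous assumptions for the centralized option, the network round-trip dominates, which is why on-device inference gives the lowest possible latency.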
Asked 16/09/2024 by rene laas