Question 58 - AIF-C01 discussion
A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency possible.
Which solution will meet these requirements?
A. Deploy optimized small language models (SLMs) on edge devices.
B. Deploy optimized large language models (LLMs) on edge devices.
C. Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.
D. Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.