ExamTopics

AWS Certified AI Practitioner
  • Topic 1 Question 10

    A company wants to use language models to build an application that runs inference on edge devices. The inference must have the lowest possible latency. Which solution will meet these requirements?

    • Deploy optimized small language models (SLMs) on edge devices.

    • Deploy optimized large language models (LLMs) on edge devices.

    • Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.

    • Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.
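    For context, running an optimized small language model directly on the edge device (the first option) keeps inference entirely local, so latency is bounded by on-device compute rather than by network round-trips to a centralized API. Below is a minimal sketch of such on-device inference; the GGUF model path, the llama-cpp-python runtime, and all parameter values are assumptions for illustration, not part of the question.

    ```python
    # Minimal on-device SLM inference sketch (hypothetical model path and runtime choice).
    # Assumes llama-cpp-python is installed and a quantized SLM in GGUF format
    # is already present on the edge device.
    from llama_cpp import Llama

    # Load a small, quantized model entirely on the local device;
    # no network call is made at inference time.
    llm = Llama(
        model_path="/opt/models/slm-q4.gguf",  # hypothetical local path
        n_ctx=512,      # small context window keeps memory use low
        n_threads=4,    # match the edge device's CPU core count
    )

    # Single local inference call; latency depends only on on-device compute,
    # not on round-trips to a centralized API.
    result = llm("Summarize: edge inference keeps data and compute local.", max_tokens=64)
    print(result["choices"][0]["text"])
    ```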
