Topic 1 Question 230
An analytics company has an Amazon SageMaker hosted endpoint for an image classification model. The model is a custom-built convolutional neural network (CNN) and uses the PyTorch deep learning framework. The company wants to increase throughput and decrease latency for customers that use the model.
Which solution will meet these requirements MOST cost-effectively?
A. Use Amazon Elastic Inference on the SageMaker hosted endpoint.
B. Retrain the CNN with more layers and a larger dataset.
C. Retrain the CNN with more layers and a smaller dataset.
D. Choose a SageMaker instance type that has multiple GPUs.
Comments (2)
- Selected Answer: A
Using Amazon Elastic Inference on the SageMaker hosted endpoint is the most cost-effective way to increase throughput and decrease latency. Amazon Elastic Inference lets you attach GPU-powered inference acceleration to SageMaker hosted endpoints and EC2 instances. By attaching an Elastic Inference accelerator to the endpoint, you get better inference performance at lower cost than switching to a larger, more expensive GPU instance type.
👍 2 · oso0348 · 2023/03/19

- Selected Answer: A
"cost efficient" therefore A based on slide 20: https://pages.awscloud.com/rs/112-TZM-766/images/AL-ML%20for%20Startups%20-%20Select%20the%20Right%20ML%20Instance.pdf
👍 1 · sevosevo · 2023/03/18
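As a minimal sketch of what answer A describes: with the SageMaker Python SDK, an Elastic Inference accelerator is attached via the `accelerator_type` argument to `deploy()` on a cheap CPU host instance. The bucket, role, entry point, and instance/accelerator sizes below are illustrative assumptions, not values from the question.

```python
"""Sketch (assumed names): deployment parameters that attach an Elastic
Inference accelerator to a SageMaker hosted endpoint for a PyTorch model."""


def build_deploy_kwargs():
    # The key difference from a plain CPU endpoint or a costly multi-GPU
    # instance: accelerator_type attaches a right-sized, GPU-powered
    # Elastic Inference accelerator to an inexpensive CPU host.
    return {
        "initial_instance_count": 1,
        "instance_type": "ml.c5.large",        # assumed CPU host instance
        "accelerator_type": "ml.eia2.medium",  # assumed EI accelerator size
    }


if __name__ == "__main__":
    # With the `sagemaker` SDK installed and AWS credentials configured,
    # the actual deployment would look like this (not executed here):
    #
    # from sagemaker.pytorch import PyTorchModel
    # model = PyTorchModel(
    #     model_data="s3://my-bucket/model.tar.gz",  # assumed artifact path
    #     role="arn:aws:iam::123456789012:role/SageMakerRole",  # assumed role
    #     entry_point="inference.py",
    #     framework_version="1.5.1",  # EI supports PyTorch only up to ~1.5
    #     py_version="py3",
    # )
    # predictor = model.deploy(**build_deploy_kwargs())
    print(build_deploy_kwargs()["accelerator_type"])
```

Note that Elastic Inference constrains the usable PyTorch framework versions; the trade-off is a fraction of the cost of a dedicated GPU instance while still accelerating CNN inference.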