Topic 1 Question 244
You work for a textile manufacturing company. Your company has hundreds of machines, and each machine has many sensors. Your team used the sensor data to build hundreds of ML models that detect machine anomalies. Models are retrained daily, and you need to deploy these models in a cost-effective way. The models must operate 24/7 without downtime and make sub-millisecond predictions. What should you do?
A. Deploy a Dataflow batch pipeline and a Vertex AI Prediction endpoint.
B. Deploy a Dataflow batch pipeline with the RunInference API, and use model refresh.
C. Deploy a Dataflow streaming pipeline and a Vertex AI Prediction endpoint with autoscaling.
D. Deploy a Dataflow streaming pipeline with the RunInference API, and use automatic model refresh.
Comments (6)
- Selected Answer: D
Why D?
- Real-time predictions: Dataflow streaming pipelines continuously process sensor data, enabling real-time anomaly detection with sub-millisecond predictions. This is crucial for immediate response to potential machine issues.
- RunInference API: this API allows invoking TensorFlow models directly within the Dataflow pipeline for on-the-fly inference, eliminating the need for separate prediction endpoints and reducing latency.
- Automatic model refresh: since models are retrained daily, automatic refresh ensures the pipeline utilizes the latest version without downtime. This is essential for maintaining model accuracy and anomaly-detection effectiveness.
Why not C? While autoscaling can handle varying workloads, Vertex AI Prediction endpoints might incur higher costs for real-time, high-volume predictions compared to invoking models directly within the pipeline using RunInference. (A minimal pipeline sketch follows below.)
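For illustration, a minimal sketch of option D using the Apache Beam Python SDK, assuming a TensorFlow/Keras model; the Pub/Sub topic, GCS model paths, and JSON message schema are all hypothetical. Beam's WatchFilePattern utility emits updated model metadata as a side input, which is what lets RunInference hot-swap the daily retrained model without restarting the 24/7 streaming pipeline.

```python
# Minimal sketch of option D (hypothetical topic, bucket, and schema).
import json

import apache_beam as beam
import numpy as np
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerNumpy
from apache_beam.ml.inference.utils import WatchFilePattern
from apache_beam.options.pipeline_options import PipelineOptions

# Glob the daily retraining job writes new model files into (hypothetical).
MODEL_PATTERN = 'gs://example-bucket/anomaly-models/model-*.h5'

options = PipelineOptions(streaming=True)  # pipeline runs 24/7

with beam.Pipeline(options=options) as p:
    # Poll for newly retrained models; emits ModelMetadata as a side input.
    model_updates = p | 'WatchModel' >> WatchFilePattern(
        file_pattern=MODEL_PATTERN, interval=3600)

    _ = (
        p
        | 'ReadSensors' >> beam.io.ReadFromPubSub(
            topic='projects/example-project/topics/sensor-readings')
        | 'ToFeatures' >> beam.Map(
            lambda msg: np.array(json.loads(msg)['features'], dtype=np.float32))
        # model_metadata_pcoll enables automatic model refresh: RunInference
        # reloads the model whenever the side input reports a new file.
        | 'DetectAnomalies' >> RunInference(
            model_handler=TFModelHandlerNumpy(
                model_uri='gs://example-bucket/anomaly-models/model-initial.h5'),
            model_metadata_pcoll=model_updates)
        | 'Emit' >> beam.Map(print))
```

Because inference happens inside the pipeline workers, there is no per-request hop to a separate prediction service, which is the latency and cost argument made above.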
👍 8 · fitri001 · 2024/04/17
- Selected Answer: D
Needs to be active 24/7 -> streaming. RunInference API seems like the way to go here, using automatic model refresh on a daily basis. https://beam.apache.org/documentation/ml/about-ml/
👍 4 · b1a8fae · 2024/01/20
- Selected Answer: C
My Answer: C
The phrase “The models must operate 24/7 without downtime and make sub-millisecond predictions” points to online prediction on a streaming pipeline (option C or D).
For the phrase “Models are retrained daily, and you need to deploy these models in a cost-effective way”, “a Vertex AI Prediction endpoint with autoscaling” looks better than “the RunInference API with automatic model refresh”, because the endpoint always serves the newly retrained models and scales with load. (A sketch of this option follows below.)
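For comparison, a minimal sketch of option C's serving side with the google-cloud-aiplatform SDK; the project, region, and model ID are hypothetical. min_replica_count keeps the endpoint up 24/7, and max_replica_count caps autoscaling under load.

```python
# Minimal sketch of option C's serving side (hypothetical IDs).
from google.cloud import aiplatform

aiplatform.init(project='example-project', location='us-central1')

# Look up an uploaded model by its resource name (hypothetical ID);
# the daily retrain would upload a new version and re-deploy here.
model = aiplatform.Model(
    'projects/example-project/locations/us-central1/models/1234567890')

# Deploy with autoscaling: at least one replica stays up 24/7, and
# Vertex AI adds replicas up to max_replica_count as traffic grows.
endpoint = model.deploy(
    machine_type='n1-standard-4',
    min_replica_count=1,
    max_replica_count=5,
)

# Online prediction call, e.g. issued from a streaming pipeline DoFn.
print(endpoint.predict(instances=[[0.12, 0.85, 0.33]]))
```

One caveat from the thread: each deployed model bills for at least min_replica_count nodes around the clock, so with hundreds of models this is the cost concern the first comment raises against option C.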
👍 3 · guilhermebutzke · 2024/02/18