Topic 1 Question 130
You are working on a system log anomaly detection model for a cybersecurity organization. You have developed the model using TensorFlow, and you plan to use it for real-time prediction. You need to create a Dataflow pipeline to ingest data via Pub/Sub and write the results to BigQuery. You want to minimize the serving latency as much as possible. What should you do?
A. Containerize the model prediction logic in Cloud Run, which is invoked by Dataflow.
B. Load the model directly into the Dataflow job as a dependency, and use it for prediction.
C. Deploy the model to a Vertex AI endpoint, and invoke this endpoint in the Dataflow job.
D. Deploy the model in a TFServing container on Google Kubernetes Engine, and invoke it in the Dataflow job.
Comments (13)
- Selected answer: C (hiromi, 2022/12/21) 👍 4
- Selected answer: B (hghdh5454, 2023/03/28) 👍 4
  B. Load the model directly into the Dataflow job as a dependency, and use it for prediction.
  Loading the model directly into the Dataflow job as a dependency minimizes serving latency, because the model is available in memory within the pipeline itself. This avoids the extra network round trip that invoking an external service would introduce, whether that service is Cloud Run, a Vertex AI endpoint, or a TFServing container.
- Selected answer: B (pshemol, 2022/12/21) 👍 2
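The in-process pattern behind option B can be sketched as follows. This is a minimal, hypothetical Python sketch, not real Apache Beam code: in an actual Dataflow job you would subclass `beam.DoFn`, load a TensorFlow SavedModel in `setup()`, and run prediction in `process()`. The `PredictFn` class and `model_loader` callable here are stand-ins that only illustrate why this design is low-latency (the model is loaded once per worker and every prediction stays in-process).

```python
class PredictFn:
    """Stand-in for a Beam DoFn that keeps the model in worker memory.

    In a real pipeline, Beam calls setup() once per worker before any
    elements are processed, so the (potentially slow) model load happens
    once, and process() never pays a per-element network round trip.
    """

    def __init__(self, model_loader):
        self._load = model_loader  # callable that returns a model
        self._model = None
        self.load_count = 0        # for illustration: how often we loaded

    def setup(self):
        # Load the model lazily and only once per worker process.
        if self._model is None:
            self._model = self._load()
            self.load_count += 1

    def process(self, element):
        # In-memory prediction: no RPC to an external serving endpoint.
        return self._model(element)


# Hypothetical usage: the "model" is a stand-in callable that doubles input.
fn = PredictFn(model_loader=lambda: (lambda x: x * 2))
fn.setup()              # worker startup: model loaded once
fn.setup()              # a second call is a no-op, model stays cached
result = fn.process(3)  # in-process prediction
```

The external-service options (A, C, D) replace the `process()` body with an RPC, which adds network latency to every single element, which is exactly what the question asks to avoid.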