Topic 1 Question 49
You work for an online travel agency that also sells advertising placements on its website to other companies. You have been asked to predict the most relevant web banner that a user should see next. Security is important to your company. The model latency requirements are 300ms@p99, the inventory is thousands of web banners, and your exploratory analysis has shown that navigation context is a good predictor. You want to implement the simplest solution. How should you configure the prediction pipeline?
A. Embed the client on the website, and then deploy the model on AI Platform Prediction.
B. Embed the client on the website, deploy the gateway on App Engine, and then deploy the model on AI Platform Prediction.
C. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Cloud Bigtable for writing and for reading the user's navigation context, and then deploy the model on AI Platform Prediction.
D. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Memorystore for writing and for reading the user's navigation context, and then deploy the model on Google Kubernetes Engine.
Comments (17)
Security rules out A (no gateway between the website and the model). B doesn't handle the banner inventory or the navigation-context lookup. D: deployment on GKE is less simple than on AI Platform, and Memorystore is in-memory while the banners are stored persistently. Answer: C.
👍 9 · Paul_Dirac · 2021/07/31

ANS: C. GAE + IAP: https://medium.com/google-cloud/secure-cloud-run-cloud-functions-and-app-engine-with-api-key-73c57bededd1
Bigtable at low latency: https://cloud.google.com/bigtable#section-2
👍 6 · Celia20210714 · 2021/07/18

The option I think is correct: C.
Bigtable to get the context at low latency, AI Platform for the simplest solution.
👍 4 · ggorzki · 2022/01/19
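To make option C concrete, here is a minimal sketch of what the App Engine gateway could look like: it reads the user's navigation context from Bigtable and forwards it to a model deployed on AI Platform Prediction for online prediction. All identifiers (project, Bigtable instance/table, column family, model name, request fields) are hypothetical placeholders, not values given in the question, and securing the gateway itself (IAP or API keys, as the linked article describes) is assumed to be configured separately.

```python
# Minimal gateway sketch (App Engine, Python runtime), assuming a Bigtable
# table keyed by user ID that stores navigation context, and a banner-ranking
# model deployed on AI Platform Prediction. All names below are hypothetical.

from flask import Flask, jsonify, request
from google.cloud import bigtable
from googleapiclient import discovery

PROJECT_ID = "my-travel-agency"      # hypothetical project ID
BIGTABLE_INSTANCE = "nav-context"    # hypothetical Bigtable instance
BIGTABLE_TABLE = "user_navigation"   # hypothetical Bigtable table
MODEL_NAME = "banner_ranker"         # hypothetical AI Platform model

app = Flask(__name__)

# Clients are created once per App Engine instance and reused across requests.
bt_client = bigtable.Client(project=PROJECT_ID)
bt_table = bt_client.instance(BIGTABLE_INSTANCE).table(BIGTABLE_TABLE)
ml_service = discovery.build("ml", "v1", cache_discovery=False)


def read_navigation_context(user_id: str) -> list:
    """Read the user's recent navigation context from Bigtable."""
    row = bt_table.read_row(user_id.encode("utf-8"))
    if row is None:
        return []
    # Assumes a column family "ctx" with a column "pages" holding a
    # comma-separated list of recently visited page IDs.
    cells = row.cells.get("ctx", {}).get(b"pages", [])
    return cells[0].value.decode("utf-8").split(",") if cells else []


@app.route("/next_banner", methods=["POST"])
def next_banner():
    """Endpoint the website client calls to get the next banner to show."""
    user_id = request.get_json()["user_id"]
    context = read_navigation_context(user_id)

    # Online prediction against the model hosted on AI Platform Prediction.
    name = f"projects/{PROJECT_ID}/models/{MODEL_NAME}"
    response = (
        ml_service.projects()
        .predict(name=name, body={"instances": [{"navigation_context": context}]})
        .execute()
    )
    return jsonify(response["predictions"][0])
```

The point of this layout is that the website client stays thin: it only calls the gateway, which is the single place where access control is enforced, where the low-latency Bigtable lookup happens, and where the prediction request is issued, matching the reasoning in the comments above.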