Topic 1 Question 249
You are developing an ML model to identify your company’s products in images. You have access to over one million images in a Cloud Storage bucket. You plan to experiment with different TensorFlow models by using Vertex AI Training. You need to read images at scale during training while minimizing data I/O bottlenecks. What should you do?
A. Load the images directly into the Vertex AI compute nodes by using Cloud Storage FUSE. Read the images by using the tf.data.Dataset.from_tensor_slices function.
B. Create a Vertex AI managed dataset from your image data. Access the AIP_TRAINING_DATA_URI environment variable to read the images by using the tf.data.Dataset.list_files function.
C. Convert the images to TFRecords and store them in a Cloud Storage bucket. Read the TFRecords by using the tf.data.TFRecordDataset function.
D. Store the URLs of the images in a CSV file. Read the file by using the tf.data.experimental.CsvDataset function.
Comments (1)
- Selected answer: C
Option A: Cloud Storage FUSE exposes the bucket as a file system, but reading a million individual image files through it incurs per-file latency and adds complexity. Option B: Vertex AI managed datasets offer convenience, but listing and opening individual files does not match TFRecord throughput for large-scale image training. Option D: CSV files only store URLs, so each image still requires a separate fetch and manual decoding, increasing overhead. Option C: packing images into TFRecords turns millions of small random reads into a few large sequential reads, which is the recommended way to feed a tf.data pipeline at scale.
👍 1 · pikachu007 · 2024/01/13
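The TFRecord approach in option C can be illustrated with a minimal sketch. The file path, feature names (`image`, `label`), and placeholder byte payloads below are illustrative, not from the question; in a real job the path would be a `gs://` URI in the Cloud Storage bucket.

```python
# Sketch: write a few records to a TFRecord file, then read them back
# with tf.data.TFRecordDataset. Feature names and paths are hypothetical.
import os
import tempfile
import tensorflow as tf

def make_example(image_bytes, label):
    # Pack raw image bytes and an integer label into a tf.train.Example.
    return tf.train.Example(features=tf.train.Features(feature={
        "image": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }))

path = os.path.join(tempfile.mkdtemp(), "train-00000.tfrecord")
with tf.io.TFRecordWriter(path) as writer:
    for i in range(3):
        # Placeholder bytes stand in for encoded JPEG data.
        writer.write(make_example(b"\x00" * 16, i).SerializeToString())

feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse(record):
    return tf.io.parse_single_example(record, feature_spec)

# num_parallel_reads and prefetch overlap I/O with training,
# which is what avoids the data input bottleneck at scale.
ds = (tf.data.TFRecordDataset([path], num_parallel_reads=tf.data.AUTOTUNE)
      .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
      .prefetch(tf.data.AUTOTUNE))

labels = [int(ex["label"].numpy()) for ex in ds]
```

With one shard and default deterministic ordering, the records come back in the order they were written; in practice the dataset would span many sharded TFRecord files read in parallel from Cloud Storage.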