Topic 2 Question 74
If you're running a performance test that depends on Cloud Bigtable, all but one of the choices below are recommended steps. Which is NOT a recommended step to follow?
A. Do not use a production instance.
B. Run your test for at least 10 minutes.
C. Before you test, run a heavy pre-test for several minutes.
D. Use at least 300 GB of data.
Explanation
If you're running a performance test that depends upon Cloud Bigtable, be sure to follow these steps as you plan and execute your test:
- Use a production instance. A development instance will not give you an accurate sense of how a production instance performs under load.
- Use at least 300 GB of data. Cloud Bigtable performs best with 1 TB or more of data. However, 300 GB of data is enough to provide reasonable results in a performance test on a 3-node cluster. On larger clusters, use 100 GB of data per node.
- Before you test, run a heavy pre-test for several minutes. This step gives Cloud Bigtable a chance to balance data across your nodes based on the access patterns it observes.
- Run your test for at least 10 minutes. This step lets Cloud Bigtable further optimize your data, and it helps ensure that you will test reads from disk as well as cached reads from memory.
Reference: https://cloud.google.com/bigtable/docs/performance
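To illustrate the last two steps, here is a minimal sketch of a warm-up ("heavy pre-test") followed by a timed read loop, assuming the google-cloud-bigtable Python client. The project, instance, table, key count, and row-key scheme are placeholders, not values from the question or the docs.

```python
# Sketch of a Bigtable warm-up and timed read test (assumed names and key scheme).
import random
import time

from google.cloud import bigtable

PROJECT_ID = "my-project"      # placeholder: your GCP project
INSTANCE_ID = "perf-instance"  # use a production instance, not a development one
TABLE_ID = "perf-table"        # table pre-loaded with at least 300 GB of data
NUM_KEYS = 1_000_000           # assumption: rows keyed as "user%09d"

client = bigtable.Client(project=PROJECT_ID)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)


def random_reads(duration_seconds: int) -> None:
    """Issue point reads with random keys for the given duration."""
    deadline = time.monotonic() + duration_seconds
    while time.monotonic() < deadline:
        row_key = f"user{random.randrange(NUM_KEYS):09d}".encode()
        table.read_row(row_key)  # result discarded; we only generate load


# Heavy pre-test for several minutes so Bigtable can rebalance data
# across nodes based on the observed access pattern.
random_reads(duration_seconds=5 * 60)

# Actual test: run for at least 10 minutes so it covers reads from disk
# as well as cached reads from memory.
start = time.monotonic()
random_reads(duration_seconds=10 * 60)
print(f"Test finished after {time.monotonic() - start:.0f} s")
```

In practice you would drive this load from a multi-threaded load generator (for example YCSB, which has a Bigtable binding) rather than a single-threaded script, but the shape is the same: warm up first, then measure for at least 10 minutes.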
Comments (6)
Answer: A. Description: Use a production instance to make sure how Bigtable will behave in production.
👍 11

[Removed] (2020/03/29): I think D. Test with enough data: if the tables in your production instance contain a total of 100 GB of data or less per node, test with a table of the same amount of data. If the tables contain more than 100 GB of data per node, test with a table that contains at least 100 GB of data per node. It is not usual to test using a production instance.
👍 8

norwayping (2020/06/30): D... Can't be A... testing will not be done in prod.
👍 4

Prakzz (2020/08/02)