Topic 1 Question 111
Your team is building an application that stores and analyzes streaming time series financial data. You need a database solution that can perform time series-based scans with sub-second latency. The solution must scale into the hundreds of terabytes and be able to write up to 10k records per second and read up to 200 MB per second. What should you do?
A. Use Firestore.
B. Use Bigtable.
C. Use BigQuery.
D. Use Cloud Spanner.
User votes
Comments (3)
- Selected answer: B
Bigtable is a good fit for financial data, such as transaction histories, stock prices, and currency exchange rates. https://cloud.google.com/bigtable/docs/overview#what-its-good-for
With SSD storage, each node handles reads up to 10,000 rows per second, writes up to 10,000 rows per second, and scans up to 220 MB/s. https://cloud.google.com/bigtable/docs/performance#typical-workloads
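As a rough illustration of why those per-node figures cover this question's workload, here is a back-of-the-envelope sizing sketch in Python. The throughput constants come from the performance page cited above; the per-node SSD storage limit and the 300 TB target are assumptions chosen only for the example.

    import math

    # Per-node SSD throughput figures quoted from the Bigtable performance page above.
    PER_NODE_WRITE_ROWS_PER_S = 10_000
    PER_NODE_SCAN_MB_PER_S = 220
    # Assumption for illustration only: per-node SSD storage limit; verify against current quotas.
    PER_NODE_SSD_STORAGE_TB = 5

    # Workload from the question ("hundreds of terabytes" taken as 300 TB for the example).
    required_write_rows_per_s = 10_000
    required_scan_mb_per_s = 200
    required_storage_tb = 300

    nodes_for_writes = math.ceil(required_write_rows_per_s / PER_NODE_WRITE_ROWS_PER_S)   # 1
    nodes_for_scans = math.ceil(required_scan_mb_per_s / PER_NODE_SCAN_MB_PER_S)          # 1
    nodes_for_storage = math.ceil(required_storage_tb / PER_NODE_SSD_STORAGE_TB)          # 60

    print(max(nodes_for_writes, nodes_for_scans, nodes_for_storage))

Under these assumptions storage, not throughput, drives the cluster size, and Bigtable scales linearly by adding nodes, which is the core of the argument for answer B.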
👍 6 | chelbsik (2022/12/26): C: Use BigQuery. About 330 dedicated 100 MB/s hard drives to read 1 TB of data, a 330 Gigabit network to shuffle the 1.25 TB of data, and 3,300 cores to uncompress 1 TB of data and process 100 billion regular expressions at 1 μs each. (A microsecond, symbol μs, is one millionth of a second, i.e. 1,000 nanoseconds or 1/1000 of a millisecond.)
👍 1 | pk349 (2022/12/24): B. Processing time-series data is a classic use case for Bigtable. Sub-second latency and scaling to hundreds of terabytes seal the deal. It's B.
👍 1 | dynamic_dba (2023/03/16)
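To make the time-series argument from the comments concrete, here is a minimal sketch using the Python google-cloud-bigtable client. The project, instance, and table IDs, the "quotes" column family, and the symbol#timestamp row-key layout are hypothetical placeholders, not anything taken from the question.

    from google.cloud import bigtable
    from google.cloud.bigtable.row_set import RowSet

    client = bigtable.Client(project="my-project")
    table = client.instance("my-instance").table("market-data")

    # Write one tick. Keying by instrument symbol plus a sortable timestamp keeps
    # each series contiguous while spreading writes across instruments.
    row = table.direct_row(b"EURUSD#2023-03-16T12:00:00Z")
    row.set_cell("quotes", "price", b"1.0731")
    row.commit()

    # Time-range scan for one instrument: a contiguous row-key range, which is the
    # kind of scan Bigtable serves with low latency.
    row_set = RowSet()
    row_set.add_row_range_from_keys(start_key=b"EURUSD#2023-03-16",
                                    end_key=b"EURUSD#2023-03-17")
    for r in table.read_rows(row_set=row_set):
        print(r.row_key, r.cells["quotes"][b"price"][0].value)

A common variation on this key layout is to use a reversed timestamp so that the newest rows for a symbol sort first and can be read with a small, limited scan.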