This is a very quick overview of the different kinds of databases available on the Google Cloud Platform.
The available options are:
- Cloud SQL - Hosted MySQL or PostgreSQL for OLTP small/medium scale with transactions
- Cloud Spanner - Relational DB for OLTP large scale with transactions
- Cloud Datastore - Document DB with limited transactions
- Cloud Bigtable - OLTP and business-intelligence/OLAP (mutable, fast key-based lookup)
- BigQuery - Business-intelligence/OLAP (immutable, slow key-based lookup)
I am ignoring file-based cloud storage (not a database) and the Firebase API (for mobile apps).
Cloud SQL is simply a cluster of either MySQL or PostgreSQL DB instances, running on a set of VMs with storage on disks attached to the VMs.
The fact that Cloud SQL is composed of a set of VMs leaks through at the administration level; you don’t need to explicitly log in to VMs but their existence is clear. When setting up the Cloud SQL environment, you select the number of VMs in the cluster, the VM “machine type”, and the size and type of disk attached to each VM. You do not, however, need to install any software on these VMs - Google takes care of that.
For a MySQL cluster, there is one master node through which all writes must go, and zero or more read-only replica nodes to scale read performance. A “hot-standby master” may also be set up for high-availability if desired. Each of these nodes in the cluster is another VM - and each must run continuously. The master node runs in a specific zone; read-only nodes must run in the same region but may run in different zones. A standby master node must run in the same region, but should run in a different zone to provide resilience against a datacenter (zone) outage.
PostgreSQL-based Cloud SQL clustering works similarly to MySQL-based clustering; one master and zero or more read-only replica nodes. However failover does not require a “standby master”; instead a read-only node is promoted to master.
Data is stored on the disks mounted on each node in the cluster. Google persistent disks are moderately expensive - at least in comparison to cloud storage prices. You are responsible for ensuring that the disks in the cluster are appropriately-sized for the data stored in the database. Each node holds a full copy of the dataset.
A Cloud SQL instance has a maximum storage size of 10TB.
As example pricing, a large Cloud SQL “mysql second generation” instance with 9TB storage capacity, 3 servers (ie 2 read-only replicas) plus high-availability, with moderately-powerful servers and running permanently costs around US$125,000 per year. A smaller instance with 1TB of data, running on a single moderate-sized server and with high-availability costs US$6,500 per year. Network traffic between the database and systems outside of the same region incurs additional network costs.
It is possible to stop a Cloud SQL instance, which stops all associated VMs. You are still charged for the persistent disks used by the VMs, but not for CPU time.
Communication with a Cloud SQL instance is done via the usual MySQL or PostgreSQL tools, eg a JDBC driver. However for client apps which are not running in the GCP environment, setting up security is complex - either client systems should run a “Cloud SQL Proxy” service, or various networking and SSL settings need to be manually configured in the Google cloud environment.
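As a minimal sketch of the “Cloud SQL Proxy” approach: the proxy runs next to the client application and listens on localhost, handling SSL and authentication itself, so the application connects with an ordinary driver as if the database were local. This assumes the proxy has already been started and that a third-party driver such as `pymysql` is installed; the connection parameters are hypothetical.

```python
def connect_via_proxy(user, password, db):
    """Open a MySQL connection through a locally running Cloud SQL Proxy.

    Assumes the proxy is already listening on 127.0.0.1:3306, started with
    something like (project/instance names here are hypothetical):
      cloud_sql_proxy -instances=my-project:europe-west1:my-db=tcp:3306
    """
    import pymysql  # third-party MySQL driver, not part of the stdlib

    # The proxy terminates SSL and handles GCP authentication, so the
    # application simply connects to localhost with normal credentials.
    return pymysql.connect(host="127.0.0.1", port=3306,
                           user=user, password=password, database=db)
```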
Spanner provides what appears to be a traditional relational database, ie it acts like Oracle/SQLServer/MySQL/PostgreSQL/etc. Like other relational databases, it provides transactions with ACID support, primary keys and indexes (but not foreign key constraints), and supports reasonably efficient joins. Unlike other relational databases, it supports extremely large datasets - ie scales to larger amounts of data than the Cloud SQL options do.
Unlike Cloud SQL, the infrastructure is mostly abstracted. As a user, you just define an “instance”, then a “database” within that instance, and tables within that database. There is exactly one configuration-item, the number of nodes, but that does not necessarily map to VMs.
Read and write performance scalability is easy in Spanner; just changing the number of nodes associated with the Spanner instance will increase performance (and price) when needed. High availability comes automatically. Scaling storage requires increasing the number of nodes - but no explicit disk-related administration is needed.
A Spanner instance may be bound to a region (eg europe-west1), in which case all data in that instance is stored in the associated region. Alternatively, it may be US-only or world-wide (“multi-region”). There is currently no “europe-only” or “asia-only” option.
At least one node is required per 2TB of stored data, and this minimum number of nodes must run continuously in order to provide access to the data. Additional nodes give better performance.
Spanner is fairly expensive; each node costs around US$1.00 per hour and each terabyte of storage appears to cost around US$8.00 per day. According to the Google price calculator, a DB holding 100TB, with the minimum node-count of 50, costs around US$750,000.00 per year. However building a similar system in-house would also be expensive - 100TB of storage plus more for redundancy, plus networking, plus DB servers, plus sysadmin costs. If you truly need a large-scale relational DB with transactions for OLTP purposes, then Spanner is the right choice. For reporting/business-intelligence options, see BigQuery later.
For a smaller system holding 10TB of data with 5 nodes, pricing is around US$75,000.00 per year. That appears to be cheaper than Cloud SQL for a database of this size. A minimal system of 1TB (but three nodes as recommended by Google) costs US$30,000 per year - which is more expensive than Cloud SQL. Unlike Cloud SQL, a Spanner instance cannot be stopped to reduce costs when it is not in use.
For queries made from outside the region in which the Spanner instance is running, there are additional charges for network bandwidth used.
To quote the Google documentation on performance for regional Spanner instances (the multi-region options are slightly slower):
> (When best practices are followed) each Cloud Spanner node can provide up to 10,000 queries per second (QPS) of reads or 2,000 QPS of writes (writing single rows at 1 KB of data per row).
There are a few things that you need to keep in mind when using Spanner, eg
- foreign key constraints are not supported
- aggregation and analytic functions are not supported
- rows are sharded by primary-key-range, so the primary key should be chosen in a way that avoids “insert hot spots”.
- tables can be linked as “parent/child” (interleaved) in order to improve read performance for 1:N-type relations; this requires that the primary keys for the two tables are appropriately structured.
- applications which generate high write-loads to a multi-region Spanner instance should be deployed near the “leader region” for the instance.
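The “insert hot spot” point above deserves an example. When primary keys increase monotonically (timestamps, sequence numbers), consecutive inserts all land in the same key range and therefore on the same node. One common mitigation, which Google’s own schema-design advice suggests, is to prefix the natural key with a short hash so inserts spread across key ranges; the key format below is just an illustration.

```python
import hashlib

def sharded_key(natural_key: str) -> str:
    """Prefix a monotonically increasing key with a short hash so that
    consecutive inserts land in different primary-key ranges ("shards")."""
    prefix = hashlib.sha256(natural_key.encode()).hexdigest()[:4]
    return f"{prefix}-{natural_key}"

# Sequential order numbers no longer sort adjacently in the key space:
keys = [sharded_key(f"order-{n:08d}") for n in range(3)]
```

The trade-off is that efficient range scans over the *natural* key order are lost, so this pattern suits write-heavy tables where rows are read back by exact key.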
Spanner’s SQL features are not quite standard: using an index for a query requires explicitly naming the index in the query, covering indexes are supported with their own syntax, and there are various other quirks. Porting an existing app to Spanner will certainly require some modification.
From code, you have the choice of sending SQL commands to Spanner, or using a more procedural API to read/write records.
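A sketch of both access paths, assuming the `google-cloud-spanner` client library; the instance, database, table and column names are hypothetical (the table layout follows Google’s “Singers” sample schema):

```python
def read_and_write(instance_id, database_id):
    from google.cloud import spanner  # third-party client library

    client = spanner.Client()
    database = client.instance(instance_id).database(database_id)

    # SQL path: run a query inside a read-only snapshot.
    with database.snapshot() as snapshot:
        rows = list(snapshot.execute_sql(
            "SELECT SingerId, FirstName FROM Singers"))

    # Procedural path: write rows as mutations in a batch; the whole
    # batch is applied atomically when the context manager exits.
    with database.batch() as batch:
        batch.insert(table="Singers",
                     columns=("SingerId", "FirstName"),
                     values=[(1, "Marc")])
    return rows
```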
Rather than storing rows which have columns, Datastore stores JSON-like objects (called documents or entities). Each entity has a key, and data is generally retrieved by that key. It is possible to define additional indexes on properties within entities, and then search for entities matching specific indexed properties - but only for predefined indexes (unlike relational systems where any property can be queried and indexes are just for performance improvement).
Individual properties within an entity in the database cannot be updated; the entire entity must be replaced. Nevertheless, read-by-key and write are fast operations.
Datastore supports “entity kinds” which group entities rather like relational systems use tables to group records. However Datastore does not support schemas; any entity can be added to any “kind” collection. Datastore is thus “schemaless” (or better stated: the database does not enforce a schema).
There are no foreign key constraints in Datastore (or any document-oriented DB). Joins are also not supported, ie “following links” requires multiple read-requests to first read the parent entities, then read the referenced ones.
Document-oriented DBs always provide atomic replacement of a document with a new version, but most do not provide atomic operations involving multiple documents. Datastore does support this, allowing multiple entities (up to 25) to be modified in an atomic manner. It uses optimistic concurrency and reports an error if a conflicting update occurs; client code is responsible for retrying if needed.
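The retry obligation on client code is worth spelling out. The following is a minimal in-memory sketch of the optimistic-concurrency pattern, not the Datastore API itself: each entity carries a version, a write fails if the version moved since the caller read it, and the client re-reads and retries.

```python
class ConflictError(Exception):
    """Raised when another writer changed the entity since it was read."""

class VersionedStore:
    """Toy in-memory store mimicking optimistic concurrency: every entity
    carries a version number, and a write fails if that version has
    changed since the caller read it."""
    def __init__(self):
        self._data = {}  # key -> (version, value)

    def get(self, key):
        return self._data.get(key, (0, None))

    def put(self, key, expected_version, value):
        version, _ = self._data.get(key, (0, None))
        if version != expected_version:
            raise ConflictError(key)
        self._data[key] = (version + 1, value)

def update_with_retry(store, key, fn, attempts=5):
    """Read-modify-write loop: on a conflicting concurrent update the
    client must re-read the entity and retry, as with Datastore."""
    for _ in range(attempts):
        version, value = store.get(key)
        try:
            store.put(key, version, fn(value))
            return True
        except ConflictError:
            continue  # another writer got there first; re-read and retry
    return False
```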
Updates on a specific entity (actually, on an entity-group) are limited to about 1 per second. In other words, you can update lots of entities - but cannot update the same entity (ie unique key) more than once per second. This restriction is due to the replication algorithm used.
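The standard workaround for this rate limit, documented by Google as the “sharded counter” pattern, is to spread writes for one logical value over N entities so the 1-write-per-second limit applies per shard rather than in total. An in-memory sketch of the idea (not actual Datastore code):

```python
import random

class ShardedCounter:
    """Spread writes for one logical counter over N entities ("shards")
    so the per-entity write-rate limit applies per shard, not in total."""
    def __init__(self, num_shards=10):
        self.shards = [0] * num_shards

    def increment(self):
        # Each write touches one randomly chosen shard entity.
        self.shards[random.randrange(len(self.shards))] += 1

    def value(self):
        # Reading the counter sums all shards (reads are not rate-limited).
        return sum(self.shards)
```

With 10 shards, the counter sustains roughly 10 updates per second instead of 1, at the cost of N reads to obtain the total.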
Datastore provides a query-language somewhat like SQL, but applications accessing a Datastore typically use a more imperative API to read and write entities.
Unlike some document-oriented DBs, Datastore does not support “full text search” on string fields.
Unlike Cloud SQL and Spanner, it is not necessary to create instances of Datastore; each GCP project implicitly has exactly one Datastore database.
Datastore is truly serverless; unlike Cloud SQL there is no need to think about VM machine-types, high availability, or similar issues. Usage is charged just by the volume of data stored and the number of read/update requests made. While pricing is difficult to compare, it should be considerably cheaper than either Cloud SQL or Cloud Spanner. Of course the features are quite different, and thus the set of use-cases for which these products are appropriate are not the same. As a comparison, a database holding 100TB costs US$220,000.00 per year for storage, plus costs for reads/writes (vs at least US$750,000.00 for Spanner - but that includes reads/writes).
Due to its fast read-by-key and updates, Datastore is suitable for OLTP-style uses (like Cloud SQL and Cloud Spanner). Its data model, API, and lack of sophisticated queries means that it is not suitable for business-intelligence/reporting/OLAP-style workloads.
Bigtable is a mix of a key/value database and a relational model, and is similar to Apache HBase or Cassandra. Alternatively it could be seen as a kind of document-oriented DB. It can hold extremely large datasets.
Bigtable stores data in tables, with rows having columns. Column values are uninterpreted byte strings. However tables do not have an associated schema; each row in the same table can theoretically have completely different columns (though in practice that is not helpful). Columns can be added to a row at any time (yes, per-row not per-table); a row is therefore more like a persistent “map” (aka dictionary) structure than a relational row - or like a document in a document-oriented DB. Having records with large numbers of columns is supported and even encouraged for performance reasons.
Rows do have a primary key, which must be unique for each record. Reading and updating a record by key is very fast. Unlike document-oriented DBs, a single column can be updated without replacing the entire row. Iterating over the set of records whose primary key is within a range is an efficient operation, supporting some use-cases (including reporting) which are not efficient with other key/value databases (including document-oriented DBs such as Datastore). This key-range-iteration feature also allows relatively efficient execution of Spark, MapReduce and Beam/Dataflow applications against data in Bigtable. Choosing the structure of the primary key for records is an extremely important part of data modelling with Bigtable; getting this wrong can have significant performance impact.
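Because rows are stored sorted by key, a range scan is a contiguous read, which is what makes row-key design so important. The sketch below models the sorted key space in memory (it is not the Bigtable API); the `"<sensor-id>#<timestamp>"` key format is a hypothetical example that places all readings for one sensor adjacent in the key space.

```python
import bisect

def scan_range(sorted_keys, start, end):
    """Return all row keys in [start, end) - cheap in Bigtable because
    rows are stored sorted by key, so a range scan is a contiguous read."""
    lo = bisect.bisect_left(sorted_keys, start)
    hi = bisect.bisect_left(sorted_keys, end)
    return sorted_keys[lo:hi]

# Hypothetical row keys: "<sensor-id>#<timestamp>" keeps all readings
# for one sensor together, so one sensor's history is a single scan.
keys = sorted([
    "sensor1#2018-01-01", "sensor1#2018-01-02",
    "sensor2#2018-01-01", "sensor2#2018-01-02",
])
# "$" is the byte after "#", so this half-open range covers exactly
# the keys with prefix "sensor1#".
sensor1 = scan_range(keys, "sensor1#", "sensor1$")
```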
Data is stored in a hybrid row-oriented/column-oriented format. When columns within a table are appropriately allocated to column-families, then reporting/OLAP-style queries can get the improved performance associated with column-oriented formats while OLTP-style workloads are not penalised.
Bigtable does not support indexes (other than on the primary key) or foreign keys. It does not itself support joins, although reporting/OLAP tools may support joins over data stored in Bigtable. Similarly, Bigtable does not offer a SQL-like query language itself, but some reporting/OLAP tools do allow SQL-like queries to be applied to data stored in Bigtable.
Bigtable does not support transactions. However a read/modify/write of a single row can effectively be done as an atomic operation.
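A sketch of such a single-row atomic operation, assuming the `google-cloud-bigtable` client library; the column family `"stats"` and column `"views"` are hypothetical names.

```python
def atomic_increment(table, row_key):
    """Server-side read/modify/write on one row: increment a 64-bit
    counter cell atomically, without a separate client-side read.

    `table` is assumed to be a google.cloud.bigtable Table object;
    family "stats" and column "views" are hypothetical.
    """
    row = table.append_row(row_key)
    row.increment_cell_value("stats", b"views", 1)
    return row.commit()  # returns the updated cell contents
```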
Like Cloud SQL and Spanner, it is necessary to create an instance of Bigtable before storing data. Like Spanner, resources are measured in “nodes”, and you pay for running nodes - ie the database is not entirely abstracted like Datastore or BigQuery, but the underlying infrastructure is not so directly exposed as with Cloud SQL. High availability is built-in to the service, and there is no need to manage disks. You do need to monitor CPU usage on the cluster and increase the number of nodes when necessary. Access-rights are managed via normal GCP roles.
A Bigtable instance consists of 1 or 2 clusters; each cluster is bound to a single zone, ie a single datacenter. Using a second cluster within an instance allows for high availability in the case that a single datacenter goes offline, and also increases read-throughput. A second cluster can be used to increase write-throughput (ie multi-master) but with some constraints. The number of nodes in a cluster affects the performance significantly, and like Spanner the number of nodes must increase with increasing amount of data stored (at least 1 node per 2.5TB of data when storing data on SSDs).
Bigtable is moderately expensive; according to the Google price calculator, a DB holding 100TB on SSD, with a node-count of 50, costs around US$500,000.00 per year. This is less than Spanner, but significantly more than BigQuery - ie if data is stored just for reporting/analytics/archiving purposes and does not need to be often mutated then BigQuery may be a more cost-effective solution. Note, however, that if you add a second cluster for high-availability when a single zone (datacenter) fails, then that will double the price - at which point Spanner is actually cheaper (at least for basic storage; what additional costs are required to obtain equivalent performance is hard to estimate).
For a smaller system holding 10TB of data with 5 nodes, pricing is around US$50,000.00 per year. That appears to be cheaper than Cloud SQL for a database of this size but, unlike Cloud SQL, a Bigtable instance cannot be stopped to reduce costs when it is not in use. Bigtable will of course scale up when needed, unlike Cloud SQL which has a maximum bound of 10TB per instance. As noted above, adding high-availability will double the price of Bigtable, making Cloud SQL and Bigtable roughly equal on price (though significantly different in feature-set and performance).
Due to its very fast read-by-key and update operations, Bigtable can be appropriate for OLTP workloads. However its unusual data-model and API, and its lack of transactions, joins and schemas, place more responsibility on client applications that read/write data. Its efficient support for key-range-iteration makes it a moderately good database for reporting/OLAP/machine-learning purposes; this dual nature is something that not all databases support.
BigQuery is a data analysis tool, most similar to Apache Hive (combined with ORC and managed tables). It is also a good basis for a data lake.
Many reporting tools have built-in support for BigQuery.
BigQuery models data using the standard relational approach, ie as tables with associated schemas. Schemas are enforced, ie inserting data fails if it does not match the table schema.
Primary keys and foreign keys are not supported - a table is effectively just a list of records. Indexes are also not supported, ie queries are always table scans (though some optimisations are applied). Tables may be partitioned or sharded, which does improve queries which use the partition/sharding criteria in their where-clause. While queries that result in table-scans are a bad idea in OLTP-style workloads, they are fine when executing reporting-style or analysis-style queries which evaluate a significant percentage of the available records anyway - and this is the use-case that BigQuery is intended for.
Reading a single record by some unique “logical key” requires a table scan of the entire database (or an entire partition if the key includes the partition criteria), which is obviously extremely inefficient - just don’t do that. Query latency is also high; BigQuery is best used with queries that do significant amounts of work (as is the case for reporting/OLAP workloads).
Data is always immutable once written; modifying an existing record effectively means rewriting a partition. Updates are thus slow and expensive; BigQuery is best used for “write-once” data (event sourcing for example). Appending records to an existing table is, however, efficient. Dropping entire partitions of data is also efficient, and BigQuery has built-in support for automatically dropping partitions older than some configurable threshold, in order to limit the volume of stored data.
The immutable nature of tables (excepting partition rewrites) means that queries can be executed efficiently in parallel; reporting and OLAP-style queries are executed against very large datasets by running the operation on many (potentially thousands of) nodes in parallel. Spark, MapReduce and Beam/Dataflow applications can also efficiently process data from BigQuery. In addition, BigQuery is a good provider of data for machine learning algorithms; the Google machine learning tools all support reading data from BigQuery. Its ability to append records rapidly also makes BigQuery a good base for near real time dashboards of event-streams, eg showing the status of machinery in a factory.
Access to data stored in BigQuery is only via SQL. BigQuery supports nearly standard SQL - with some extensions to support columns that hold arrays or structs (nested tables) and a few additional features. The usual SQL reporting/analysis operators such as aggregate-functions and analytic-functions are supported. Data is stored in column-oriented format to increase the performance of OLAP-style queries that only access a few columns of many records (rather than OLTP-style queries which access many columns of a few records).
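A sketch of a typical reporting query, assuming the `google-cloud-bigquery` client library; the dataset, table and column names are hypothetical. Note that restricting the WHERE clause to the partitioning column limits the data scanned (and therefore the cost) of the query.

```python
def monthly_totals(project=None):
    from google.cloud import bigquery  # third-party client library

    client = bigquery.Client(project=project)
    # Hypothetical table "mydataset.events", partitioned by event_date;
    # the filter on the partitioning column prunes whole partitions.
    sql = """
        SELECT DATE_TRUNC(event_date, MONTH) AS month, COUNT(*) AS events
        FROM mydataset.events
        WHERE event_date >= '2018-01-01'
        GROUP BY month
        ORDER BY month
    """
    return list(client.query(sql).result())
```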
Like Datastore, BigQuery is serverless. No “instance” needs to be configured for a project; you just define a dataset, tables within the dataset, and then insert records into those tables. Usage is charged simply by the volume of data stored and the amount of data read/written by queries. BigQuery is certainly the cheapest of all database options in GCP, with data storage priced identically to just storing the same data as files in Google Cloud Storage and queries costing very reasonable amounts. There are no permanently-running VMs to pay for, as with the other solutions (except Datastore) - though higher performance (reduced latency) can be achieved by paying in advance for BigQuery processing capacity.
Cloud SQL is simply a replacement for a medium-sized on-premise relational database, and is applicable to the same use-cases (read, write, reporting). Administration overhead is reasonably large (though possibly less than an on-premise solution).
Cloud Spanner is effectively equivalent to a medium-sized to extremely large relational database - although some minor differences to traditional relational DBs mean that apps won’t just run unchanged on Spanner. Its rapid read/write and transaction support make it good for OLTP use-cases. Unlike relational DBs such as Oracle/MySQL/etc, Spanner is not a good platform for report-generation or analysis workloads.
Datastore is applicable to the same use-cases as document-oriented DBs such as MongoDB. It supports rapid read-by-key and updates, and transactions, and is thus a good option for OLTP workloads. The lack of foreign keys and schemas does put significant responsibility on the client applications to behave correctly. It is not suitable for report-generation or analysis workloads. It is probably cheaper than Cloud SQL or Cloud Spanner.
Bigtable supports rapid read-by-key and updates, making it suitable for OLTP workloads - although its lack of secondary indexes, joins and transactions must be considered. Like Datastore, it puts responsibility on client applications to behave correctly. It is a good source of data for reporting/OLAP workloads too (not as good as BigQuery, but not bad).
BigQuery is specialized at reporting/OLAP/archiving and providing data feeds for machine learning. Its very limited/inefficient support for read-by-key and modifications makes it useless for OLTP-style workloads. It does, however, support appending records efficiently, making it a good choice for a “sink” of event-streams and time-series-data. It is the cheapest of all the data storage options (other than just writing files to Google Cloud Storage).