BigQuery: Best way to handle frequent schema changes?

Our BigQuery schema is heavily nested/repeated and constantly changes. For example, a new page, form, or user-info field on the website corresponds to new columns in BigQuery. Also, if we stop using a certain form, the corresponding deprecated columns will stay there forever because you can't delete columns in BigQuery.

So we'll eventually end up with tables that have hundreds of columns, many of them deprecated, which doesn't seem like a good solution.

The primary alternative I'm looking into is to store everything as JSON (for example, each BigQuery table would have just two columns, one for the timestamp and one for the JSON data). Then the batch jobs we have running every 10 minutes would perform joins/queries and write to aggregated tables. But with this method, I'm concerned about increased query-job costs.
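For concreteness, here is a minimal sketch of that layout with the google-cloud-bigquery Python client: a raw table holding only a timestamp and a JSON string, plus a batch query that pulls a few fields out of the payload into an aggregated table. All project, dataset, table, and field names are hypothetical, and the aggregated table is assumed to already exist.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Raw landing table: one row per event, payload kept as a JSON string.
raw_table = bigquery.Table(
    "my-project.analytics.raw_events",
    schema=[
        bigquery.SchemaField("event_ts", "TIMESTAMP", mode="REQUIRED"),
        bigquery.SchemaField("payload", "STRING", mode="REQUIRED"),  # JSON blob
    ],
)
client.create_table(raw_table, exists_ok=True)

# Batch job (run e.g. every 10 minutes): extract the fields the reports need
# from the JSON and append them to an aggregated table. Each run scans the
# raw rows it touches, which is where the query-cost concern comes from.
extract_sql = """
INSERT INTO `my-project.analytics.form_submissions` (event_ts, form_id, user_id)
SELECT
  event_ts,
  JSON_EXTRACT_SCALAR(payload, '$.form.id') AS form_id,
  JSON_EXTRACT_SCALAR(payload, '$.user.id') AS user_id
FROM `my-project.analytics.raw_events`
WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 10 MINUTE)
"""
client.query(extract_sql).result()
```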

Some background info:

Our data comes in as protobuf, and we update our BigQuery schema based on the protobuf schema updates.

I know one obvious solution is to not use BigQuery and use a document store instead, but we use BigQuery both as a data lake and as a data warehouse for BI and for building Tableau reports. So we have jobs that aggregate the raw data into tables that serve Tableau. The top answer here doesn't work that well for us because our data can be heavily nested with repeats: BigQuery: Create column of JSON datatype



Solution 1:[1]

You are already well prepared; you lay out several options in your question.

You could go with the JSON table and, to keep costs low:

  • use a partitioned table
  • cluster the table

So instead of having just the two timestamp + JSON columns, I would also add a partitioning column and clustering columns (BigQuery allows up to four). You could even use yearly suffixed tables. This way you have several dimensions to prune on, so rematerialization only scans a limited number of rows.
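A minimal sketch of that variant, assuming hypothetical column names: the partitioning key and a few clustering keys are kept as real columns next to the JSON blob so the rematerialization queries can prune on them.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Partitioned + clustered variant of the raw table (names are placeholders).
client.query("""
CREATE TABLE IF NOT EXISTS `my-project.analytics.raw_events_clustered`
(
  event_ts   TIMESTAMP NOT NULL,
  event_type STRING,   -- clustering keys pulled out of the payload
  page       STRING,
  form_id    STRING,
  user_id    STRING,
  payload    STRING    -- the remaining JSON blob
)
PARTITION BY DATE(event_ts)
CLUSTER BY event_type, page, form_id, user_id
""").result()
```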

The other option would be to change your model and add an event-processing middle layer. You could first wire all your events to Pub/Sub and/or Dataflow, process them there, and write to BigQuery with a new schema. This job could create tables on the fly with the schema you define in your engine.
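As a rough sketch of that idea (the event types, field lists, and the ensure_table helper are all hypothetical), the middle layer could derive a per-event-type schema and create the destination table on the fly before writing:

```python
from google.cloud import bigquery

client = bigquery.Client()

def ensure_table(event_type: str, fields: dict) -> bigquery.Table:
    """Create (or fetch) a per-event-type table whose schema mirrors the event."""
    schema = [bigquery.SchemaField("event_ts", "TIMESTAMP", mode="REQUIRED")]
    schema += [bigquery.SchemaField(name, bq_type) for name, bq_type in fields.items()]
    table = bigquery.Table(f"my-project.analytics.{event_type}", schema=schema)
    return client.create_table(table, exists_ok=True)

# e.g. a new "signup_form" event with two fields appears in the stream:
ensure_table("signup_form", {"user_id": "STRING", "newsletter_opt_in": "BOOLEAN"})
```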

By the way, you can remove columns via rematerialization: rewrite the same table with a query. You can rematerialize to remove duplicate rows as well.
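For example, a rematerialization that drops deprecated columns could look like the sketch below (table and column names are placeholders); a similar rewrite using SELECT DISTINCT or a ROW_NUMBER() filter removes duplicate rows. Note that the rewrite itself scans the whole table once.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Rewrite the table over itself, dropping columns that are no longer used.
client.query("""
CREATE OR REPLACE TABLE `my-project.analytics.events` AS
SELECT * EXCEPT(old_form_field, deprecated_flag)
FROM `my-project.analytics.events`
""").result()
```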

Solution 2:[2]

I think this use case can be implemented using Dataflow (or Apache Beam) with its dynamic destinations feature. The Dataflow pipeline would look like this:

  1. Read the event/JSON from Pub/Sub.
  2. Flatten the events and filter down to the columns you want to insert into the BQ table.
  3. With dynamic destinations you can insert the data into the respective tables (if you have events of various types), and you can specify the schema on the fly based on the fields in your JSON.
  4. Collect the failed insert records from the write step and write them to a file per event type, using a window size that fits your use case (i.e., how frequently you observe such failures).
  5. Read the file, update the schema once, and load the file into that BQ table.

I have implemented this logic in my use case and it is working perfectly fine.
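A rough Apache Beam (Python) sketch of steps 1-3, assuming a hypothetical Pub/Sub topic, dataset, and event layout. Recent Beam Python SDKs accept a callable for both the table and the schema arguments of WriteToBigQuery, which is what enables per-event-type routing and on-the-fly schemas; the dead-letter handling of steps 4-5 depends on the insert method, so it is only indicated in a comment.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def route_to_table(row):
    # One table per event type, created on demand by WriteToBigQuery.
    return f"my-project:analytics.{row['event_type']}"


def schema_for(destination):
    # Called per destination; a real job would derive the field list from the
    # event/protobuf definition instead of hard-coding it here.
    return {"fields": [
        {"name": "event_type", "type": "STRING", "mode": "REQUIRED"},
        {"name": "event_ts", "type": "TIMESTAMP", "mode": "REQUIRED"},
        {"name": "user_id", "type": "STRING", "mode": "NULLABLE"},
    ]}


def run():
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        events = (
            p
            | "ReadPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/events")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "SelectFields" >> beam.Map(
                lambda e: {k: e[k] for k in ("event_type", "event_ts", "user_id") if k in e})
        )
        events | "WriteBQ" >> beam.io.WriteToBigQuery(
            table=route_to_table,    # dynamic destination per element
            schema=schema_for,       # schema resolved per destination
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
        # Steps 4-5: failed inserts (e.g. after schema drift) can be tapped off
        # the write result and written to GCS per event type, then reloaded
        # after the schema is updated once.


if __name__ == "__main__":
    run()
```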

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1 (Stack Overflow)
[2] Solution 2 (Stack Overflow)