BigQuery Results to Pandas DataFrame in Chunks

I am trying to save the results of a BigQuery query to a pandas DataFrame using bigquery.Client.query().to_dataframe()

This query can return millions of rows.

Given that pandas to BigQuery (DataFrame.to_gbq()) has a chunksize parameter, is there something similar for BigQuery to pandas to incrementally add to the DataFrame, without having to run the query multiple times with a limit and offset?
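(For reference, the upload direction the question alludes to looks roughly like this; a minimal sketch assuming the pandas-gbq package is installed, where my_dataset.my_table and my-project are placeholder names:)

import pandas as pd

df = pd.DataFrame({"homeTeamName": ["Giants", "Mets"]})

# chunksize controls how many rows are sent to BigQuery per request.
df.to_gbq("my_dataset.my_table", project_id="my-project",
          chunksize=10000, if_exists="append")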



Solution 1:[1]

As @William mentioned, you can chunk the BigQuery results and paginate them; the query is only charged once, because subsequent pages are read from the query's temporary destination table rather than re-running the query. I based this code on the official documentation, using the public dataset bigquery-public-data.baseball.games_wide as a demo:

from google.cloud import bigquery
import pandas as pd
import math

bq_client = bigquery.Client()

class BqToDfChunker(object):

    def __init__(self, query_job, results_per_page):
        # Wait for the query to finish, then look up the temporary
        # destination table that holds the cached results.
        query_job.result()
        destination = bq_client.get_table(query_job.destination)

        self.destination = destination
        self.results_per_page = results_per_page
        self.num_pages = math.ceil(destination.num_rows / results_per_page)
        self.index = 0
        self.next_token = None


    def get_next_df_page(self):
        if self.index >= self.num_pages:
            return None

        # Read one page from the destination table; this is a table read,
        # not a re-execution of the query.
        rows = bq_client.list_rows(self.destination,
                                   max_results=self.results_per_page,
                                   page_token=self.next_token)

        # Build the DataFrame from dicts so the column names are preserved.
        df = pd.DataFrame([dict(row) for row in rows])
        self.index += 1
        self.next_token = rows.next_page_token

        return df


    def has_next(self):
        return self.index < self.num_pages

if __name__ == '__main__':

    query = """
        SELECT homeTeamName
        FROM `bigquery-public-data.baseball.games_wide`
        GROUP BY homeTeamName
    """

    query_job = bq_client.query(query)

    # Initialize the chunker with the query job and the page size.
    bq_test = BqToDfChunker(query_job, 10)

    while bq_test.has_next():
        print(bq_test.get_next_df_page())

Solution 2:[2]

Alternatively, you can use RowIterator.to_dataframe_iterable() to do this:

from google.cloud import bigquery

client = bigquery.Client()

job = client.query(query)
result = job.result(page_size=20)

# Each iteration yields one page of results as its own DataFrame.
for df in result.to_dataframe_iterable():
    # df will have at most 20 rows
    print(df)
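If the goal is one DataFrame built up incrementally rather than a sequence of page-sized frames, a minimal sketch on top of the loop above is to collect the chunks and concatenate them once at the end:

import pandas as pd

chunks = []
for df in result.to_dataframe_iterable():
    # Filter or transform each chunk here if you want to shrink it first;
    # appending raw chunks still holds the full result set in memory
    # once pd.concat runs.
    chunks.append(df)

full_df = pd.concat(chunks, ignore_index=True)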

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1 [1]: Orlandog
Solution 2 [2]: Decko