Category "google-bigquery"

How to Convert Geohash to Geometry in BigQuery?

PostGIS has this function ST_GeomFromGeoHash to get the bounding box geometry of the geohash area (https://postgis.net/docs/ST_GeomFromGeoHash.html), but it has…

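A minimal sketch of the closest built-in, assuming a hypothetical table my_dataset.hashes with a STRING column geohash: BigQuery's ST_GEOGPOINTFROMGEOHASH returns the center point of the geohash cell rather than its bounding box, so recovering the full box polygon would take a UDF that decodes the geohash.

```python
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT
  geohash,
  -- ST_GEOGPOINTFROMGEOHASH returns the CENTER POINT of the geohash cell,
  -- not the bounding box polygon that PostGIS's ST_GeomFromGeoHash builds.
  ST_GEOGPOINTFROMGEOHASH(geohash) AS center_point
FROM `my_dataset.hashes`
"""
for row in client.query(sql).result():
    print(row.geohash, row.center_point)
```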

Partitioning BigQuery Tables via API in python

I'm using Python to hit the BigQuery API. I've been successful at running queries and writing new tables, but would like to ensure those output tables are partitioned…
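A minimal sketch with the google-cloud-bigquery client, assuming a hypothetical destination table and a timestamp column named created_at: the partitioning spec goes on the query's job config, so the destination table is created partitioned.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Partition the query's destination table by day on a timestamp column.
job_config = bigquery.QueryJobConfig(
    destination=bigquery.TableReference.from_string(
        "my-project.my_dataset.output_table"
    ),
    time_partitioning=bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        field="created_at",  # omit `field` to partition by ingestion time instead
    ),
)
client.query(
    "SELECT * FROM `my_dataset.source_table`", job_config=job_config
).result()
```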

DataFrame to BigQuery - Error: FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp1yeitxcu_job_4b7daa39.parquet'

I am uploading a DataFrame to a BigQuery table. df.to_gbq('Deduplic.DailyReport', project_id=BQ_PROJECT_ID, credentials=credentials, if_exists='append') And I…
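One workaround sketch, reusing the names from the question (df, BQ_PROJECT_ID, credentials): loading the DataFrame with the BigQuery client directly sidesteps the temporary parquet file that pandas-gbq writes under /tmp.

```python
from google.cloud import bigquery

client = bigquery.Client(project=BQ_PROJECT_ID, credentials=credentials)

job_config = bigquery.LoadJobConfig(
    # Equivalent of if_exists='append' in to_gbq.
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
client.load_table_from_dataframe(
    df, "Deduplic.DailyReport", job_config=job_config
).result()
```

Note that load_table_from_dataframe needs pyarrow installed to serialize the DataFrame.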

Creating a View in BigQuery from Temporary Function and Dynamic SQL

I want to create a view dynamically with a string generated by a temporary function. The code below fails with "Creating views with temporary user-defined functions…"
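A sketch of one workaround, with hypothetical names throughout: since views cannot reference temporary functions, persist the UDF in a dataset first, then build the CREATE VIEW statement as a string and run it.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Persist the UDF instead of declaring it with CREATE TEMP FUNCTION.
client.query("""
CREATE OR REPLACE FUNCTION `my_dataset.my_fn`(x INT64) AS (x * 2)
""").result()

# Dynamic SQL: assemble the view body as a string, then execute it.
generated_sql = "SELECT my_dataset.my_fn(id) AS doubled FROM `my_dataset.src`"
client.query(
    f"CREATE OR REPLACE VIEW `my_dataset.my_view` AS {generated_sql}"
).result()
```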

What does "EXCEPT DISTINCT SELECT *" in SQL mean?

I am following a tutorial on Qwiklabs on BigQuery and financial fraud detection and came across a query, below, that I am failing to understand: CREATE OR REPLACE…
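A small self-contained illustration: EXCEPT DISTINCT is set difference, returning the distinct rows of the first query that do not appear in the second.

```python
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT x FROM UNNEST([1, 2, 2, 3]) AS x
EXCEPT DISTINCT
SELECT x FROM UNNEST([3, 4]) AS x
"""
# Yields 1 and 2: the 3 is removed, the duplicate 2 is collapsed.
print(sorted(row.x for row in client.query(sql).result()))
```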

Make calculations across multiple tables based on the table suffix in Bigquery

I have a database of daily tables (with prefixes formatted as yyyymmdd) with customer info, and I need to get a 90-day timeline of 90-day ARPUs (average revenue per user)…
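A sketch assuming hypothetical daily tables named like my_dataset.customers_yyyymmdd: a wildcard table plus _TABLE_SUFFIX restricts the scan to the window and makes the table date available as a column.

```python
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT
  _TABLE_SUFFIX AS day,
  SUM(revenue) / COUNT(DISTINCT customer_id) AS arpu
FROM `my_dataset.customers_*`
WHERE _TABLE_SUFFIX BETWEEN
  FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY))
  AND FORMAT_DATE('%Y%m%d', CURRENT_DATE())
GROUP BY day
ORDER BY day
"""
for row in client.query(sql).result():
    print(row.day, row.arpu)
```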

Patterns for replicating data to BigQuery

I'm asking for the best practice/industry standard on these types of jobs. This is what I've been doing: the end goal is to have a replication of the data in…
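One widely used pattern, sketched with hypothetical table names: land incoming changes in a staging table, then MERGE them into the replica keyed on the primary key, so re-delivered rows upsert instead of duplicating.

```python
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
MERGE `my_dataset.replica` AS t
USING `my_dataset.staging` AS s
ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET t.payload = s.payload, t.updated_at = s.updated_at
WHEN NOT MATCHED THEN
  INSERT (id, payload, updated_at) VALUES (s.id, s.payload, s.updated_at)
""").result()
```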

Translating an Excel concept into SQL

Let's say I have the following range in Excel named MyRange: This isn't a table by any means; it's more a collection of Variant values entered into cells. Excel…
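A guess at the shape of the problem, with made-up values: an ad-hoc collection of cell values can be modelled in BigQuery as an inline array, with UNNEST turning it into rows and WITH OFFSET preserving the cell order.

```python
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT pos, value
FROM UNNEST(['a', 'b', 'c']) AS value WITH OFFSET AS pos
ORDER BY pos
"""
for row in client.query(sql).result():
    print(row.pos, row.value)
```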

BigQuery: Best way to handle frequent schema changes?

Our BigQuery schema is heavily nested/repeated and constantly changes. For example, a new page, form, or user-info field on the website would correspond to new…
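One option, sketched with hypothetical file and table names: let load jobs add new nullable fields themselves via schema update options, instead of migrating the schema by hand on every change.

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
    # New fields in the incoming data are appended to the table schema.
    schema_update_options=[bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION],
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
with open("events.json", "rb") as f:
    client.load_table_from_file(
        f, "my_dataset.events", job_config=job_config
    ).result()
```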

Stage-level data is not returned for running BigQuery jobs through the Java BigQuery libraries

I am using the com.google.cloud.bigquery library to fetch job-level details. We have the following code snippet: Job job = getBigQuery(projectId, location)…
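For comparison, a sketch with the Python client (the Java getters mirror it): stage-level details come from the job's query plan, and in my experience the plan is often empty while the job is still running and only fully populated once it reaches DONE, so polling before completion can legitimately return nothing.

```python
from google.cloud import bigquery

client = bigquery.Client()

job = client.get_job("my_job_id", location="US")  # job id is a placeholder
job.reload()  # refresh the job's statistics from the API
if job.query_plan:  # usually empty until the job is DONE
    for stage in job.query_plan:
        print(stage.name, stage.records_read, stage.records_written)
else:
    print(f"no stage data yet; job state is {job.state}")
```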

How can I access the schema from the QueryResponse when calling the getQueryResults method from my Java application?

I am using the google.cloud.bigquery library to create and execute a query using the bigquery.query() method. I want to fetch the schema details from the response, but when…
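In Java, the TableResult returned by bigquery.query() exposes getSchema(); shown below as a hedged analogue is the Python client, where the schema likewise hangs off the result iterator rather than the raw response.

```python
from google.cloud import bigquery

client = bigquery.Client()

rows = client.query("SELECT 1 AS id, 'a' AS name").result()
for field in rows.schema:
    print(field.name, field.field_type)  # id INTEGER, name STRING
```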

Extract data from JSON column

I want to extract a value from a JSON column. The schema is (- first level, -- second level): Column Name | Type | Mode; event_params | RECORD | NULLABLE; - key | STRING…
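A sketch for the GA4-style event_params RECORD that the schema suggests (column names assumed from the standard GA4 export): each key/value pair is pulled out with a correlated subquery over UNNEST rather than with JSON functions.

```python
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT
  event_name,
  (SELECT ep.value.string_value
   FROM UNNEST(event_params) AS ep
   WHERE ep.key = 'page_location') AS page_location
FROM `my_dataset.events_20240101`
"""
for row in client.query(sql).result():
    print(row.event_name, row.page_location)
```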

BigQuery keyword REMOTE is not supported

We are trying out the REMOTE functions within BigQuery as per this guide. We created the CLOUD_RESOURCE using the following command: bq mk --connection --disp…
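A sketch of the documented CREATE FUNCTION ... REMOTE syntax, with placeholder connection and endpoint; in my experience a "keyword REMOTE is not supported" error usually means the statement ran under legacy SQL or in a location where the feature isn't available.

```python
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
CREATE OR REPLACE FUNCTION `my_dataset.remote_add_one`(x INT64) RETURNS INT64
REMOTE WITH CONNECTION `my-project.us.my-connection`
OPTIONS (
  endpoint = 'https://us-central1-my-project.cloudfunctions.net/add_one'
)
""").result()
```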

How to list ALL table sizes in a project

Is there a way to list all the table sizes in BigQuery? I know a query like this: select table_id, sum(size_bytes)/pow(10,9) as size from certain_data…
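A sketch using the region-qualified INFORMATION_SCHEMA.TABLE_STORAGE view, which covers every dataset in the project's region in one query (adjust region-us to your region), instead of querying each dataset's __TABLES__ separately.

```python
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT
  table_schema AS dataset,
  table_name,
  total_logical_bytes / POW(10, 9) AS size_gb
FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE
ORDER BY size_gb DESC
"""
for row in client.query(sql).result():
    print(row.dataset, row.table_name, round(row.size_gb, 2))
```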

BigQuery insert values AS, assume nulls for missing columns

Imagine there is a table with 1000 columns. I want to add a row with values for 20 columns and assume NULLs for the rest. INSERT VALUES syntax can be used for t…
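The standard answer, sketched with a hypothetical table: name only the columns you supply in the INSERT column list, and every omitted column defaults to NULL.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Only the named columns get values; the other ~980 columns become NULL.
client.query("""
INSERT INTO `my_dataset.wide_table` (id, name, created_at)
VALUES (1, 'alice', CURRENT_TIMESTAMP())
""").result()
```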

Scheduling data extract from Google BigQuery using EXPORT DATA

I am trying to schedule monthly data exports in Google BigQuery using the query scheduler. This is how my query looks at the moment: export data options( uri='gs://bucket_nam…
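One workaround sketch, with a placeholder bucket name: since the URI needs to change per run, build the whole EXPORT DATA statement as a string so the GCS path carries the run month, then execute it via dynamic SQL; in a scheduled query the @run_date parameter can stand in for CURRENT_DATE().

```python
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
EXECUTE IMMEDIATE FORMAT('''
EXPORT DATA OPTIONS (
  uri = 'gs://bucket_name/%s/export_*.csv',
  format = 'CSV',
  overwrite = true
) AS
SELECT * FROM `my_dataset.monthly_source`
''', FORMAT_DATE('%Y%m', CURRENT_DATE()))
""").result()
```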

Google BigQuery - loading a CSV file - Error while reading table

I'm trying to upload a report in CSV format to Google BigQuery. The report contains the following column names: Adjustment Type; Day; Country; Asset ID; As…
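A sketch based on the semicolon-separated header shown in the question (URI and table name are placeholders): tell the load job about the delimiter and skip the header row, since BigQuery assumes commas by default.

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    field_delimiter=";",   # the report is semicolon-separated, not comma
    skip_leading_rows=1,   # skip the header row
    autodetect=True,
)
client.load_table_from_uri(
    "gs://my-bucket/report.csv", "my_dataset.report", job_config=job_config
).result()
```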

LEFT JOIN with an OR in the ON clause BigQuery Standard SQL

I need some help understanding joins in BigQuery Standard SQL. I want to do a left join keeping all the columns in table1, and joining to table2 if two fields match…
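A sketch of one common rewrite, with hypothetical table and column names: BigQuery has historically rejected outer joins whose ON clause contains no plain equality (an OR of two equalities counts as none), so one option is to join once per condition and coalesce the results.

```python
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT
  t1.*,
  COALESCE(t2a.payload, t2b.payload) AS payload
FROM `my_dataset.table1` AS t1
LEFT JOIN `my_dataset.table2` AS t2a ON t1.key1 = t2a.key1
LEFT JOIN `my_dataset.table2` AS t2b ON t1.key2 = t2b.key2
"""
for row in client.query(sql).result():
    print(row)
```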

Is there any way to unnest BigQuery columns in Databricks in a single PySpark script?

I am trying to connect to BigQuery using the latest Databricks version (7.1+, Spark 3.0) with PySpark as the base language. We ran the below PySpark script to f…
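A sketch with the spark-bigquery connector (table and column names are assumptions, following the GA4-style schema above): nested RECORD/REPEATED columns arrive in Spark as struct/array columns, so explode() plus dot-path selects flatten them in a single script.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode

spark = SparkSession.builder.getOrCreate()

df = (spark.read.format("bigquery")
      .option("table", "my-project.my_dataset.events")
      .load())

flat = (df
        # explode() is Spark's equivalent of BigQuery's UNNEST for arrays.
        .withColumn("param", explode(col("event_params")))
        .select("event_name", "param.key", "param.value.string_value"))
flat.show()
```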