Google BigQuery - loading a CSV file - Error while reading table

I'm trying to upload a report in CSV format to Google BigQuery. The report contains the following column names:

Adjustment Type; Day; Country; Asset ID; Asset Title; Asset Labels; Asset Channel ID; Asset Type; Custom ID; TMS; EIDR; UPC; Season; Episode Title; Episode Number; Director; Studio; Owned Views; YouTube Revenue Split : Auction; YouTube Revenue Split : Reserved; YouTube Revenue Split : Partner Sold YouTube Served; YouTube Revenue Split : Partner Sold Partner Served; YouTube Revenue Split; Partner Revenue : Auction; Partner Revenue : Reserved; Partner Revenue : Partner Sold YouTube Served; Partner Revenue : Partner Sold Partner Served; Partner Revenue

After creating the table for this report, the column names and types look as follows:

[
 {
   "name": "Adjustment_Type",
   "type": "STRING"
 },
 {
   "name": "Day",
   "type": "STRING"
 },
 {
   "name": "Country",
   "type": "STRING"
 },
 {
   "name": "Asset_ID",
   "type": "STRING"
 },
 {
   "name": "Asset_Title",
   "type": "STRING"
 },
 {
   "name": "Asset_Labels",
   "type": "STRING"
 },
 {
   "name": "Asset_Channel_ID",
   "type": "STRING"
 },
 {
   "name": "Asset_Type",
   "type": "STRING"
 },
 {
   "name": "Custom_ID",
   "type": "STRING"
 },
 {
   "name": "TMS",
   "type": "STRING"
 },
 {
   "name": "EIDR",
   "type": "STRING"
 },
 {
   "name": "UPC",
   "type": "STRING"
 },
 {
   "name": "Season",
   "type": "STRING"
 },
 {
   "name": "Episode_Title",
   "type": "STRING"
 },
 {
   "name": "Episode_Number",
   "type": "STRING"
 },
 {
   "name": "Director",
   "type": "STRING"
 },
 {
   "name": "Studio",
   "type": "STRING"
 },
 {
   "name": "Owned_Views",
   "type": "STRING"
 },
 {
   "name": "YouTube_Revenue_Split___Auction",
   "type": "FLOAT"
 },
 {
   "name": "YouTube_Revenue_Split___Reserved",
   "type": "FLOAT"
 },
 {
   "name": "YouTube_Revenue_Split___Partner_Sold_YouTube_Served",
   "type": "FLOAT"
 },
 {
   "name": "YouTube_Revenue_Split___Partner_Sold_Partner_Served",
   "type": "FLOAT"
 },
 {
   "name": "YouTube_Revenue_Split",
   "type": "FLOAT"
 },
 {
   "name": "Partner_Revenue___Auction",
   "type": "FLOAT"
 },
 {
   "name": "Partner_Revenue___Reserved",
   "type": "FLOAT"
 },
 {
   "name": "Partner_Revenue___Partner_Sold_YouTube_Served",
   "type": "FLOAT"
 },
 {
   "name": "Partner_Revenue___Partner_Sold_Partner_Served",
   "type": "FLOAT"
 },
 {
   "name": "Partner_Revenue",
   "type": "FLOAT"
 }
]

While trying to query the table, I'm getting the following error message:

Could not parse 'YouTube Revenue Split : Auction' as double for field YouTube_Revenue_Split___Auction (position 18) starting at location 0 (error code: invalid)

Any idea what could be causing this error?



Solution 1:[1]

I've been able to replicate the error. In my case it appears when loading the CSV into BigQuery: the CSV contains the string YouTube Revenue Split : Auction where a float is expected.

What I suspect is happening is that your CSV file still contains the column headers and you are not skipping them when loading the file into BigQuery. As a result, when the import process reaches the YouTube_Revenue_Split___Auction field (position 18), it expects a float but instead finds the column header YouTube Revenue Split : Auction, a string that cannot be parsed as a double.

Try re-loading the CSV but remove the headers first (or skip them using the Header rows to skip option).
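For reference, here is a minimal sketch of skipping the header row when loading through the Python client library (google-cloud-bigquery); the project, dataset, table, and file names are placeholders:

from google.cloud import bigquery

client = bigquery.Client()

# Tell the load job that the first row is a header, not data
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
)

with open("report.csv", "rb") as f:
    load_job = client.load_table_from_file(
        f, "my-project.my_dataset.my_table", job_config=job_config
    )

load_job.result()  # wait for the load job to finish

The skip_leading_rows=1 setting is the programmatic equivalent of the Header rows to skip field in the web UI.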

If my supposition is wrong and this doesn't apply, update your question by adding the query that produces the error.

Solution 2:[2]

Adding on to what Guillermo said, you can also have BigQuery automatically identify the headers and field types when you upload your file in CSV format.
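If you are loading programmatically, that corresponds to enabling schema auto-detection on the load job; a minimal sketch with the Python client (project, dataset, table, and file names are placeholders):

from google.cloud import bigquery

client = bigquery.Client()

# autodetect=True makes BigQuery infer column names and types from the CSV,
# treating the first row as a header row when it looks like one
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,
)

with open("report.csv", "rb") as f:
    load_job = client.load_table_from_file(
        f, "my-project.my_dataset.my_table", job_config=job_config
    )

load_job.result()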

Solution 3:[3]

I had a similar error and solved it by replacing the semi-colons with commas.

You can use a regex for this, or this great online text replacement tool that I found: https://onlinetexttools.com/replace-text
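If you prefer to do the replacement locally, here is a small sketch using Python's csv module, which handles quoted fields correctly (file names are placeholders):

import csv

# Read the semicolon-delimited file and rewrite it comma-delimited
with open("report.csv", newline="") as src, \
        open("report_comma.csv", "w", newline="") as dst:
    reader = csv.reader(src, delimiter=";")
    writer = csv.writer(dst, delimiter=",")
    for row in reader:
        writer.writerow(row)

Alternatively, BigQuery's load job accepts a custom field delimiter (field_delimiter=";" in LoadJobConfig), which avoids rewriting the file at all.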

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Guillermo Cacheda
Solution 2: breadible
Solution 3: daniellambert