How to account for new columns being added to a table, but not for all the dates?

I have a script like this:

An initial API call is made first, and then the following script runs:

df22 = spark.read.json("dbfs:/mnt/servicenow/bronze/d******/"+file_timestamp[:8]+"/d******.json")
df23 = df22.select("result.*")

cols1 = ["request_number","requested_for","application","not_found","application_changes","rto_met"]
a = []

The code was running fine before I included "rto_met", which is a new addition to the table. After adding it, I get an error saying that column "rto_met" was not found. Looking at the JSON files, the column is only present in the last two weeks of data, not in the earlier files going back to Jan 1st, 2020.
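A common way to handle this (a sketch, not a verified fix for your exact pipeline) is to check which expected columns are absent from the file you just read and backfill them as null literals before the select, so older files that predate the schema change no longer fail. The helper below is hypothetical; the names `df23` and `cols1` follow the question, and the Spark-specific calls are shown in comments:

```python
def missing_columns(wanted, present):
    """Return the wanted columns that are absent from the DataFrame's columns."""
    present_set = set(present)
    return [c for c in wanted if c not in present_set]

# With Spark available, the backfill loop would look roughly like:
#
#   from pyspark.sql import functions as F
#   for c in missing_columns(cols1, df23.columns):
#       # add the missing column as null so select(cols1) can't fail
#       df23 = df23.withColumn(c, F.lit(None).cast("string"))
#   df24 = df23.select(cols1)

cols1 = ["request_number", "requested_for", "application",
         "not_found", "application_changes", "rto_met"]

# A file from before the schema change lacks "rto_met":
old_file_cols = cols1[:-1]
print(missing_columns(cols1, old_file_cols))  # → ['rto_met']
```

Another option, if you know the full target schema, is to pass an explicit schema to `spark.read.json(...)`, in which case Spark fills absent fields with nulls automatically.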



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
