AWS Glue dynamic frames not getting updated

We are currently facing an issue where we cannot insert more than 600K records into an Oracle DB using AWS Glue. We are getting a connection reset error and the DBAs are currently looking into it. As a temporary workaround we thought of adding the data in chunks, by splitting the DataFrame into multiple DataFrames and looping over that list to insert the data. We are sure the splitting algorithm works fine, and here is the code we use:


from pyspark.sql.functions import monotonically_increasing_id, ntile
from pyspark.sql.window import Window

def split_by_row_index(df, num_partitions=10):
    # Assumes df has no existing column that preserves the row order
    t = df.withColumn('_row_id', monotonically_increasing_id())
    # Use ntile() because monotonically_increasing_id() is discontinuous across partitions
    t = t.withColumn('_partition', ntile(num_partitions).over(Window.orderBy(t._row_id)))
    # ntile() numbers partitions starting at 1
    return [t.filter(t._partition == i + 1) for i in range(num_partitions)]
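
For context, the idea is to loop over the returned list and write each chunk to Oracle separately. Roughly this is what we have in mind (a sketch only; the Glue connection name, database, and table names below are placeholders, not our real ones):

from awsglue.dynamicframe import DynamicFrame

# Sketch of the chunked write loop; connection/table names are placeholders
for i, chunk_df in enumerate(split_by_row_index(df_trns_details, 10)):
    chunk_dyf = DynamicFrame.fromDF(chunk_df, glueContext, f"chunk_{i}")
    glueContext.write_dynamic_frame.from_jdbc_conf(
        frame=chunk_dyf,
        catalog_connection="oracle-connection",  # placeholder Glue connection
        connection_options={"dbtable": "TRNS_DETAILS", "database": "ORCL"},
        transformation_ctx=f"write_chunk_{i}",
    )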

Each DataFrame holds unique data, but somehow, when we convert these DataFrames to DynamicFrames in a loop, the resulting DynamicFrames end up sharing common data. Here is a small snippet that demonstrates this:

from awsglue.dynamicframe import DynamicFrame
from pyspark.sql.functions import col

# Split into chunks of roughly 100K rows each
df_trns_details_list = split_by_row_index(df_trns_details, int(df_trns_details.count() / 100000))

trnsDetails1 = DynamicFrame.fromDF(df_trns_details_list[0], glueContext, "trnsDetails1")
trnsDetails2 = DynamicFrame.fromDF(df_trns_details_list[1], glueContext, "trnsDetails2")

print(df_trns_details_list[0].count())  # counts are the same
print(trnsDetails1.count())
print('-------------------------------')
print(df_trns_details_list[1].count())  # counts are the same
print(trnsDetails2.count())
print('-------------------------------')

subDf1 = trnsDetails1.toDF().select(col("id"), col("details_id"))
subDf2 = trnsDetails2.toDF().select(col("id"), col("details_id"))
# ------------------ common data exists ----------------
common = subDf1.intersect(subDf2)
print(common.count())

subDf3 = df_trns_details_list[0].select(col("id"), col("details_id"))
subDf4 = df_trns_details_list[1].select(col("id"), col("details_id"))
# ------------------ 0 common data ----------------
common1 = subDf3.intersect(subDf4)
print(common1.count())

Here the id and details_id combination is unique. We have used this logic in multiple places where it worked, so we are not sure why this is happening. We are also quite new to Python and AWS Glue, so any suggestions for improvement are also welcome. Thanks.
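
In case it helps, this is a quick way to verify the uniqueness claim on the source DataFrame (a minimal sketch using standard Spark, not part of our job):

from pyspark.sql.functions import col

# Count (id, details_id) pairs that occur more than once; 0 means the combination is unique
dup_count = (df_trns_details
             .groupBy(col("id"), col("details_id"))
             .count()
             .filter(col("count") > 1)
             .count())
print(dup_count)  # expected: 0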


