AWS Amplify: Resource is not in the state stackUpdateComplete

I'm setting up aws-amplify in my project. I'm facing a problem with amplify push: when I configured it for the first time it worked fine. Now I have changed the repository, since I had to create a subtree from the old repo. Now when I run amplify push I get:

Resource is not in the state stackUpdateComplete

⠸ Updating resources in the cloud. This may take a few minutes...
Error updating cloudformation stack

Following resources failed

✖ An error occurred when pushing the resources to the cloud

Resource is not in the state stackUpdateComplete
An error occurred during the push operation: Resource is not in the state stackUpdateComplete



Solution 1:[1]

This worked for me:

$ amplify update auth

Choose the option “Yes, use default configuration” (uses the Cognito Identity Pool).

Then:

$ amplify push

Another possible cause is this:

The issue is tied to selecting this option: Select the authentication/authorization services that you want to use: User Sign-Up & Sign-In only (Best used with a cloud API only), which creates just the User Pool and not the Identity Pool that the root stack is looking for. It's a bug and we'll fix that.

To unblock, for just the first question you could select User Sign-Up, Sign-In, connected with AWS IAM controls (Enables per-user Storage features for images or other content, Analytics, and more), which creates a user pool as well as an identity pool, and then choose any of the other configurations mentioned above.
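
Put together, the unblocking flow looks roughly like this. Treat it as a sketch: the exact prompt wording varies between CLI versions, and the answers shown are just the ones described above.

$ amplify update auth
? Select the authentication/authorization services that you want to use:
  ❯ User Sign-Up, Sign-In, connected with AWS IAM controls (Enables per-user Storage features for images or other content, Analytics, and more)
  ... answer the remaining questions as you normally would ...
$ amplify push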

Solution 2:[2]

As mentioned by others in this thread, the issue comes from one of the resources you updated locally.

Check which ones you modified:

$ amplify status
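
Its output is a table along these lines (the resource names here are made up for illustration). Anything whose Operation column says Create, Update, or Delete differs from what is deployed:

Current Environment: dev

| Category | Resource name | Operation | Provider plugin   |
| -------- | ------------- | --------- | ----------------- |
| Api      | myapi         | Update    | awscloudformation |
| Auth     | myauth        | No Change | awscloudformation |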

Then remove and re-add each modified resource, followed by a push. The API in particular is known not to handle updates well right now, so you must remove it if you've changed it locally:

$ amplify remove api   # choose YourAPIName when prompted
$ amplify add api
$ amplify push

Solution 3:[3]

You can try the following. First run:

$ amplify env checkout {environment}

and then:

$ amplify push
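
If you're not sure which environments exist, amplify env list will show them. A concrete sequence might look like this (the environment name dev is just an example):

$ amplify env list
$ amplify env checkout dev
$ amplify push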

Solution 4:[4]

I got this after making some modifications to my GraphQL schema. I had adjusted the way I was declaring @connection directives on a few tables. I was able to fix it by following these steps:

  1. Make a backup copy of the new schema you're trying to push.
  2. Run amplify pull to restore your local project so it is in sync with your backend in the cloud.
  3. Once that completes, your local project is synced to the cloud, and amplify push should run without errors because there are no pending updates.
  4. Copy the new schema over the pulled schema and run amplify push once more to see if it works.

If it doesn't work, undo the overwrite of the pulled schema and compare what differs between the pulled schema and the updated schema you backed up. Do a line-by-line diff to see what has changed, then push the changes one by one to find where it fails. It is wiser not to push too many schema changes at once; doing them one by one makes troubleshooting much easier. If you then hit other issues, they should be unrelated to the one highlighted in this question, because the pull should resolve this particular error.
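
Steps 1, 2, and 4 in shell form, as a sketch: the schema path assumes the default Amplify project layout, and <api-name> is a placeholder for your API's resource name.

$ cp amplify/backend/api/<api-name>/schema.graphql /tmp/schema.bak.graphql    # step 1: back up the new schema
$ amplify pull                                                                # step 2: sync local with the cloud
$ diff /tmp/schema.bak.graphql amplify/backend/api/<api-name>/schema.graphql  # check what actually changed
$ cp /tmp/schema.bak.graphql amplify/backend/api/<api-name>/schema.graphql    # step 4: restore the new schema
$ amplify push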

Solution 5:[5]

In my opinion, these kinds of problems are always related to third-party auth.

  • Run amplify update auth,
  • then, in the auth flow, update the client ID and secret of the third-party provider.
  • Then push.

That will fix the problem.
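
As commands, that amounts to something like the following (the CLI prompts for the provider credentials during the update):

$ amplify update auth   # re-enter the third-party client ID and secret when prompted
$ amplify push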

Solution 6:[6]

In my case the issue was due to multiple @connection directives referring to GSIs, which were not being removed and added correctly when I ran amplify push api.

I was able to resolve this by running amplify pull, commenting out the @connection directives and the GSIs linked to them, and then adding each new change back manually. There was still trouble getting the GSIs linked again: the local update considered a GSI already removed, but in the cloud it was apparently retained, so I got an error that a GSI being added already existed. So I renamed the model, which recreated it as new DynamoDB tables, and then reverted it back to the correct name. This is only suitable for a dev environment, where recreating tables has little impact.
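
A sketch of that sequence (the schema edits are made by hand between pushes; again, only sensible in a dev environment):

$ amplify pull
$ # comment out the problematic @connection directives and their GSIs in schema.graphql, then:
$ amplify push
$ # temporarily rename the affected model in schema.graphql so new DynamoDB tables get created, then:
$ amplify push
$ # finally, revert the model to its original name and push once more:
$ amplify push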

It ate up most of my time, but it did fix my issue.

Solution 7:[7]

In my case it was an issue that appeared when switching between Amplify environments (amplify env checkout). The error was not clear, but this is what I did to fix it without having to "clear" the API and lose the whole database:

  • Delete the existing API key by setting "CreateAPIKey" to 0 in amplify/backend/api/<api-name>/parameters.json, then save the file and run amplify push.
  • Once done, do the same with "CreateAPIKey" set back to 1, then amplify push again. This fixed my issue.
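
As a sketch, the two pushes look like this. <api-name> is a placeholder, and the sed patterns assume the key appears as "CreateAPIKey": 1 in the file; editing the JSON by hand works just as well.

$ sed -i 's/"CreateAPIKey": 1/"CreateAPIKey": 0/' amplify/backend/api/<api-name>/parameters.json
$ amplify push
$ sed -i 's/"CreateAPIKey": 0/"CreateAPIKey": 1/' amplify/backend/api/<api-name>/parameters.json
$ amplify push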

Solution 8:[8]

The solution is:

a. Go to the S3 bucket containing the project settings.

b. Locate the deployment-state.json file in the root folder and delete it.

c. Run amplify push.
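
If you prefer the AWS CLI to the console, the same cleanup looks roughly like this. The deployment bucket name is project-specific (typically of the form amplify-<project>-<env>-<id>-deployment), so find yours first:

$ aws s3 ls | grep deployment                               # find the Amplify deployment bucket
$ aws s3 rm s3://<deployment-bucket>/deployment-state.json  # delete the stale deployment state
$ amplify push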

Solution 9:[9]

This worked for me:

$ amplify remove storage

and then:

$ amplify add storage

then, again:

$ amplify push

The first time, after amplify add storage, I had mistakenly chosen Y for Do you want to add a Lambda Trigger for your S3 Bucket? even though I didn't have any Lambda function, and I didn't have anything in my bucket either.
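
So when re-adding storage, answer No at that prompt. A sketch (the prompt wording is the one quoted above):

$ amplify add storage
? Do you want to add a Lambda Trigger for your S3 Bucket? No
$ amplify push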

Solution 10:[10]

It looks like a conflict between the backend and the local project.

The only thing that worked for me was backing up the local schema and running amplify pull.

Then restore the backed-up schema file and run amplify push.

In most cases, updates to the following file must be applied manually (for Android): app/src/main/res/raw/amplifyconfiguration.json

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1
Solution 2 Paul Alexandru Pop
Solution 3 user3358326
Solution 4 Ragav Y
Solution 5 bad_coder
Solution 6 harishanth raveendren
Solution 7 marc_s
Solution 8 W.M.
Solution 9 octogenex
Solution 10