Deploying on AWS EKS via Travis CI fails

I am trying to build a CI/CD pipeline: GitHub -> Travis CI -> AWS EKS. Everything works up to a point: images are pushed to Docker Hub and so on. But when Travis executes kubectl apply -f "the files", it throws an error: error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

(There is nothing wrong with the source code / deployment / service files; I deployed them manually on AWS EKS and they worked fine.)


#----------------- .travis.yml -------------
sudo: required
services:
  - docker
env:
  global:
    - SHA=$(git rev-parse HEAD)
before_install:
# Install kubectl
  - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  - chmod +x ./kubectl
  - sudo mv ./kubectl /usr/local/bin/kubectl

  # Install AWS CLI
  - if ! [ -x "$(command -v aws)" ]; then curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" ; unzip awscliv2.zip ; sudo ./aws/install ; fi
  # export environment variables for AWS CLI (using Travis environment variables)
  - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
  - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
  - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
  # Setup kubectl config to use the desired AWS EKS cluster
  - aws eks update-kubeconfig --region ${AWS_DEFAULT_REGION} --name ${AWS_EKS_CLUSTER_NAME}
  
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
  - docker build -t akifboi/multi-client -f ./client/Dockerfile.dev ./client
 # - aws s3 ls

script:
  - docker run -e CI=true akifboi/multi-client npm test

deploy:
  provider: script
  script: bash ./deploy.sh
  on:
    branch: master
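A note on the before_install step above: it downloads whatever stable.txt currently resolves to, so the kubectl version can jump without warning (here, to v1.24). A pinned variant could look like the sketch below; v1.23.6 is an assumed version, pick the one matching your EKS cluster:

```shell
# Pin kubectl instead of curling stable.txt (the latest release).
# v1.23.6 is an assumed value; match your EKS cluster's version.
KUBECTL_VERSION="v1.23.6"
curl -sLO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl" \
  && chmod +x ./kubectl \
  || echo "download failed; check network access"
# then move it onto the PATH, as in the original before_install:
#   sudo mv ./kubectl /usr/local/bin/kubectl
```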
#----deploy.sh--------
# docker build -t akifboi/multi-client:latest -t akifboi/multi-client:$SHA -f ./client/Dockerfile ./client
# docker build -t akifboi/multi-server:latest -t akifboi/multi-server:$SHA -f ./server/Dockerfile ./server
# docker build -t akifboi/multi-worker:latest -t akifboi/multi-worker:$SHA -f ./worker/Dockerfile ./worker

# docker push akifboi/multi-client:latest
# docker push akifboi/multi-server:latest
# docker push akifboi/multi-worker:latest

# docker push akifboi/multi-client:$SHA
# docker push akifboi/multi-server:$SHA
# docker push akifboi/multi-worker:$SHA

set -e  # abort (and fail the build) if any command fails; without this, kubectl errors are swallowed and the build still exits 0

echo "starting"
aws eks --region ap-south-1 describe-cluster --name test001 --query cluster.status # the problem is happening here!
echo "applying k8 files"
kubectl apply -f ./k8s/
# kubectl set image deployments/server-deployment server=akifboi/multi-server:$SHA
# kubectl set image deployments/client-deployment client=akifboi/multi-client:$SHA
# kubectl set image deployments/worker-deployment worker=akifboi/multi-worker:$SHA

echo "done"
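When debugging this, it can help to inspect which exec auth apiVersion the kubeconfig generated by aws eks update-kubeconfig actually contains, since kubectl v1.24 rejects client.authentication.k8s.io/v1alpha1. A small check (a sketch; the default kubeconfig path is assumed):

```shell
# Print the exec-plugin apiVersion from the kubeconfig, if present.
# kubectl v1.24+ rejects client.authentication.k8s.io/v1alpha1.
CFG="${KUBECONFIG:-$HOME/.kube/config}"
if [ -f "$CFG" ]; then
  grep -A2 'exec:' "$CFG" | grep 'apiVersion' || echo "no exec apiVersion found in $CFG"
else
  echo "no kubeconfig at $CFG"
fi
```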
#------ travis logs ----------
Last few lines:

starting

"ACTIVE"

applying k8 files

error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

done

Already up to date.

HEAD detached at c1858f7

Untracked files:

  (use "git add <file>..." to include in what will be committed)

    aws/

    awscliv2.zip

nothing added to commit but untracked files present (use "git add" to track)

Dropped refs/stash@{0} (3b51f951e824689d6c35fc40dadf6fb8881ae225)

Done. Your build exited with 0.


Solution 1:[1]

Ran into the same issue with GitLab Runner and EKS. Support for the client.authentication.k8s.io/v1alpha1 exec API version was removed in the kubectl v1.24 release. Fixed this by using the kubectl version matching the Kubernetes version of the EKS cluster / kube controller, instead of the latest kubectl release:

curl -LO https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl
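To avoid hardcoding the number, one option (a sketch, assuming the AWS CLI is configured; the cluster name and region default to the values from the question, and the ".0" patch suffix is an assumption) is to ask EKS for the cluster's version and build a matching kubectl download URL:

```shell
# Query the EKS control-plane version (e.g. "1.22") and derive a matching
# kubectl download URL. Falls back to "1.22" when aws is unavailable,
# purely for illustration.
CLUSTER_VERSION=$(aws eks describe-cluster --name "${CLUSTER_NAME:-test001}" \
  --region "${AWS_DEFAULT_REGION:-ap-south-1}" --query cluster.version \
  --output text 2>/dev/null) || CLUSTER_VERSION="1.22"
echo "matching kubectl: https://dl.k8s.io/release/v${CLUSTER_VERSION}.0/bin/linux/amd64/kubectl"
```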

Solution 2:[2]

We were installing the latest version of kubectl in CI and hit this error today. After pinning to a previous version (1.18) the error was resolved.

The last working version was 1.23.6; we first saw errors with 1.24.

Solution 3:[3]

We ran into this issue today as well. Our kubectl auto-updates every time we deploy, and there was a new release yesterday (version 1.24) that appears to have broken things. The fix was to pin the auto-update to a set version (1.23.5), which resolved the issue.

Solution 4:[4]

I can confirm it works with version v1.22.0.

If anyone is looking for a CircleCI solution, they can try the snippet below:

    steps:
      - checkout
      - kubernetes/install-kubectl:
          kubectl-version: v1.22.0

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: fedonev
Solution 2:
Solution 3: decapo
Solution 4: Vishnu Nair