How to deploy an OpenStack instance with Ansible in a specific project

I've been trying to deploy an instance in OpenStack to a project other than my user's default project. The only way to do this appears to be by passing the project_name within the auth: setting. This works fine, but it is not really compatible with using a clouds.yaml config via the clouds: setting, or with using the admin-openrc.sh file that OpenStack provides. (The admin-openrc.sh appears to take precedence over any settings in auth:.)

I'm using the current openstack.cloud collection 1.3.0 (https://docs.ansible.com/ansible/latest/collections/openstack/cloud/index.html). Some of the modules, such as the network module, have the option to specify a project:, but the server module does not.

So this deploys in a named project:

- name: Create instances
  server:
    state: present
    auth:
      auth_url: "{{ auth_url }}"
      username: "{{ username }}"
      password: "{{ password }}"
      project_name: "{{ project }}"
      project_domain_name: "{{ domain_name }}"
      user_domain_name: "{{ domain_name }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"

When the admin-openrc.sh has been sourced, this deploys only to your default project (OS_PROJECT_NAME=<project_name>):

- name: Create instances
  server:
    state: present
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"

When I unset OS_PROJECT_NAME but set all the other values from admin-openrc.sh, I can do the following. However, it requires working with a non-default setup (unsetting that one environment variable):

- name: Create instances
  server:
    state: present
    auth:
      project_name: "{{ project }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"

I'm looking for the most useful way to use a single authentication method (be it clouds.yaml or environment variables) for all my OpenStack modules, while still being able to deploy to a specific project.



Solution 1:[1]

(You should upgrade to the latest collection (1.5.3); this may also work with 1.3.0.)

You can use the cloud property of the server task (openstack.cloud.server). Here is how you can proceed:

  • All project definitions are stored in clouds.yml (here is part of its content):

clouds:
  tarantula:
    auth:
      auth_url: https://auth.cloud.myprovider.net/v3/
      project_name: tarantula
      project_id: the-id-of-my-project
      user_domain_name: Default
      project_domain_name: Default
      username: my-username
      password: my-password
    regions:
      - US
      - EU1
  • From a task, you can refer to the appropriate cloud like this:

- name: Create instances
  server:
    state: present
    cloud: tarantula
    region_name: EU1
    name: "test-instance-1"

Solution 2:[2]

We now refrain from using any environment variables and make sure the project ID is set in the configuration or in the group_vars. This works because it does not depend on a local clouds.yml file. We basically build an auth object in Ansible that can be used throughout the deployment.
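A minimal sketch of that approach (the variable names openstack_auth, os_auth_url, os_username, os_password and os_project_name are placeholders, not from the original answer; define them in group_vars or a vault to match your inventory):

# group_vars/all.yml -- assumed variable names, adjust to your layout
openstack_auth:
  auth_url: "{{ os_auth_url }}"
  username: "{{ os_username }}"
  password: "{{ os_password }}"
  project_name: "{{ os_project_name }}"
  project_domain_name: "Default"
  user_domain_name: "Default"

# tasks -- pass the same auth dict to every openstack.cloud module
- name: Create instances
  openstack.cloud.server:
    state: present
    auth: "{{ openstack_auth }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    flavor: "{{ flavor }}"
    network: "{{ network }}"

Because the same openstack_auth dict is passed everywhere, deploying to a different project only means changing project_name in one place.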

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Cédric
Solution 2: CptLolliPants