AWS Account Switching with Ansible
Wed, Apr 4, 2018

I recently worked on a project involving multiple AWS accounts, with different projects and environments spread through those accounts in different combinations. Having opted to use Ansible for driving deployments, I looked at its built-in capabilities for account switching. It turns out you can easily inject credentials authenticating as another IAM user, but this can only be done at a per-task (or perhaps per-block?) level. This might seem flexible at first glance, but when you consider that you have to duplicate tasks, and therefore roles, and even playbooks, whenever you have to use different accounts, it quickly becomes unwieldy. That's not even considering the insane amount of boilerplate you get when forced to specify credentials for each and every task. Perhaps the biggest blocker is that Ansible has no support for assuming IAM roles, which is amplified by the fact that most of the core AWS modules still rely on boto2, which has patchy support for this at best, and won't be improving any time soon.

I spent some time digging in the boto2 and boto3 docs to find commonalities in authentication support, and eventually figured that I should be able to inject temporary credentials via environment variables. Thankfully even the Session Token issued with temporary credentials (such as when assuming a role) is supported in boto2, albeit via a different environment variable. Now I just needed a way to obtain the credentials and set them before playbook execution. My first pass was a wrapper script, making use of AWS CLI calls to STS and parsing out the required bits with jq. This worked, proving the concept, but it lacked finesse and intelligence, as you'd still need to decide deliberately which role to assume before running a playbook.

What I really wanted was a way to automatically figure out which AWS account should be operated on, based on the project and/or environment being managed. Since I already have a fairly consistent approach to writing playbooks, where the environment and project are almost always provided as extra vars, this should be easy! I've previously made use of Ansible vars plugins. They are a very under-documented feature of Ansible that, whilst primarily designed for injecting group/host vars from alternative sources, actually provides a really flexible entrypoint into a running Ansible process in which you can do whatever you want. The outputs of a vars plugin are host variables, but with a little cheekiness you can manipulate the environment - which happens to be where Boto and Boto3 look for credentials!

Vars plugins, however cool, are just plugins. There are inputs and outputs, but those do not include a way to inspect existing variables (either global or per-host) from within the plugin itself. Personally I find this a major shortcoming of this particular plugin architecture; however, since the required information is always passed as extra vars, I decided to manually parse the CLI arguments to extract them in the plugin, rather than relying on Ansible to do it. So, starting in the vars_plugins directory (relative to the playbooks), here is a skeleton plugin that runs but does not yet do anything useful. We can extend this to parse the CLI arguments with argparse, making sure to use parse_known_args() so that we don't have to duplicate the entire set of Ansible arguments. We'll run playbooks like this:

ansible-playbook do-a-thing.yml -e env=staging -e project=canon

Now we have made available any extra vars in dictionary form, making it easy to figure out which environment and project we're working on.
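A do-nothing skeleton along the lines described might look like this. This is a sketch assuming the legacy vars-plugin API of the era (a plain VarsModule class that Ansible discovers in vars_plugins/); the filename aws_account.py is illustrative, not prescribed:

```python
# vars_plugins/aws_account.py (illustrative filename)

class VarsModule(object):
    """Skeleton vars plugin: loaded by Ansible, injects no vars yet."""

    def __init__(self, inventory):
        # Ansible hands us the inventory object when the plugin loads.
        self.inventory = inventory

    def run(self, host, vault_password=None):
        # Called per host; returning an empty dict injects nothing.
        return {}

    def get_host_vars(self, host, vault_password=None):
        return {}

    def get_group_vars(self, group, vault_password=None):
        return {}
```

Dropping this file into vars_plugins/ is enough for Ansible to load it on every run, which is exactly the entrypoint we want to hijack.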
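The argparse step could be sketched as below. The helper name parse_extra_vars is mine, not Ansible's, and this sketch only handles the key=value form of -e (not JSON or @file extra vars); parse_known_args() is what lets us ignore every other Ansible flag rather than declaring them all:

```python
import argparse

def parse_extra_vars(argv):
    """Collect key=value pairs from -e/--extra-vars into a dict,
    ignoring all other Ansible CLI arguments."""
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument('-e', '--extra-vars', dest='extra_vars',
                        action='append', default=[])
    # parse_known_args() returns (namespace, leftovers) instead of
    # erroring out on arguments we never declared.
    known, _ignored = parser.parse_known_args(argv)
    extra = {}
    for pair in known.extra_vars:
        key, sep, value = pair.partition('=')
        if sep:
            extra[key] = value
    return extra
```

For the invocation above, parse_extra_vars(['do-a-thing.yml', '-e', 'env=staging', '-e', 'project=canon']) yields {'env': 'staging', 'project': 'canon'}, and the playbook name and any other flags fall harmlessly into the leftovers.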
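The environment manipulation itself is the easy part. A minimal sketch (the function name is mine; the variable names are the documented ones, with boto3 reading AWS_SESSION_TOKEN while boto2 historically read AWS_SECURITY_TOKEN):

```python
import os

def inject_credentials(access_key, secret_key, session_token):
    """Export temporary STS credentials into the environment, where
    both boto generations will find them before any AWS module runs."""
    os.environ['AWS_ACCESS_KEY_ID'] = access_key
    os.environ['AWS_SECRET_ACCESS_KEY'] = secret_key
    os.environ['AWS_SESSION_TOKEN'] = session_token   # read by boto3
    os.environ['AWS_SECURITY_TOKEN'] = session_token  # read by boto2
```

Because the vars plugin runs inside the Ansible process before tasks execute, anything it puts into os.environ is visible to every boto-backed module for the rest of the run.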