OpenTofu is an infrastructure-as-code tool that lets you define configurations declaratively, ensuring consistency and allowing you to apply software development practices to the management of cloud resources and services.
The tool interacts with clouds and services through a system of providers and aims for idempotent behavior when applying configurations. This raises the question of why a random provider is even necessary, since a declarative system focuses on achieving a consistent and predictable end state. The random provider docs do note that randomness is applied only at creation time: the resource keeps the same output from creation (or modification) until there is a change in its `keepers` attribute. This is what keeps OpenTofu/Terraform idempotent across multiple applies of the same configuration, so consistency and idempotency are preserved. The question still remains: why is randomness desirable in a declarative system at all?
Having worked on Terraform and OpenTofu for a few years, here are a few reasons it’s useful.
- SDLC practices while developing reusable modules: If we are to apply best practices to Infrastructure as Code (IaC), we should be able to develop reusable modules. These modules are developed locally with locally stored state before they are used in a project with a centralized state store, such as an S3 bucket or an HTTP-based backend. As part of module development, tests should create real infrastructure to verify that the module provides the intended functionality. Usually, the name of a cloud resource is also its unique identifier, which means you need randomness in the name when running tests in parallel for one (or all) of the following scenarios. Using the random provider adds that randomness to the resource name, eliminating this limitation.
- Test the module for multiple versions of OpenTofu/Terraform.
- Test the module in parallel for different inputs. (For example, an AWS ALB with logging enabled and disabled.)
- Multiple developers working on the module in parallel and running tests locally to ensure that the behavior they introduce works as expected.
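A minimal sketch of this pattern, assuming an AWS S3 bucket as the resource under test (the variable and resource names here are illustrative, not from any particular module):

```hcl
# A random two-word suffix generated once at creation time, so that
# parallel test runs of the same module never collide on the bucket name.
resource "random_pet" "suffix" {
  length = 2
}

resource "aws_s3_bucket" "example" {
  # Produces names like "my-module-test-relaxed-weasel"
  bucket = "my-module-test-${random_pet.suffix.id}"
}
```

Because the suffix is stored in state, repeated applies of the same configuration keep the same name, preserving idempotency.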
- Resources that won’t delete immediately: Several cloud resources can only be marked for deletion as part of the delete operation, because of their criticality to the systems that depend on them. Examples include KMS keys and secrets. If their names don’t include randomness at creation time, they cannot be re-created until they’ve been truly deleted past their retention period. This is similar to the first situation, where randomness lets multiple resources with the same name prefix exist together for parallel tests. This scenario is also common in ephemeral but slightly longer-running environments, such as a development environment that uses a centralized state. We should be able to spin up the infrastructure on demand, run more rigorous tests or demos in a production-like environment, and tear it down once its purpose is served.
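As a sketch of this scenario, assuming an AWS Secrets Manager secret (which is only scheduled for deletion and blocks re-use of its name during the recovery window), a random suffix lets a fresh environment come up immediately:

```hcl
# Random suffix so a new secret can be created even while a previously
# deleted one with the same prefix is still in its recovery window.
resource "random_id" "secret" {
  byte_length = 4
}

resource "aws_secretsmanager_secret" "app" {
  # Produces names like "app-credentials-a1b2c3d4"
  name = "app-credentials-${random_id.secret.hex}"
}
```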
- Resources that may have `create_before_destroy` rules: Several resources require the `create_before_destroy` meta-argument to ensure that a replacement resource exists before the existing one is deleted. This is especially true for entities like SSL certificates that are attached to load balancers. While many providers offer a `name_prefix` attribute to solve this exact problem, there are still resources that require randomness to adhere to this lifecycle rule.
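A sketch of this pattern, pairing `random_id` with `create_before_destroy` (the `keepers` trigger shown here, an AMI ID variable, is an illustrative assumption):

```hcl
# The keepers map forces a new random value whenever the AMI changes,
# which in turn forces replacement of the instance under a
# create-before-destroy lifecycle.
resource "random_id" "server" {
  byte_length = 4
  keepers = {
    ami_id = var.ami_id
  }
}

resource "aws_instance" "web" {
  ami           = random_id.server.keepers.ami_id
  instance_type = "t3.micro"

  # Name includes the random suffix, so the replacement never collides
  # with the instance it is replacing.
  tags = {
    Name = "web-${random_id.server.hex}"
  }

  lifecycle {
    create_before_destroy = true
  }
}
```

Because the replacement gets a fresh random suffix before the old resource is destroyed, both can briefly coexist without a naming conflict.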