File Provisioners
File provisioners copy files and directories from the system where Terraform is running to your instances. They have arguments only for specifying the source and the destination of the copy. A source can be the location of a file or directory, specified absolutely or relative to the Terraform configuration directory, or it can be the literal content that is to be written to a destination file. Destinations are always paths on the target instance.
To try out some file provisioners, download the files fileprovisioners.tf and example.tmpl into the same Terraform configuration directory that you used for the deployment examples. Let's first look at fileprovisioners.tf, which defines some file provisioners in a null Resource:
resource "null_resource" "file_provisioners" {
depends_on = [
openstack_compute_floatingip_associate_v2.leader,
openstack_compute_floatingip_associate_v2.followers
]
connection {
type = "ssh"
host = openstack_networking_floatingip_v2.leader.address
user = "ubuntu"
agent = false
private_key = file(var.private_key_file)
}
The resource begins with a depends_on argument (line 2), which specifies a list of objects in the Terraform configuration on which this resource depends. In this example, we don't want to run the provisioners until the leader and all of the followers have been fully created. This mechanism provides a way to tell Terraform about explicit dependencies that it cannot discover itself.
Next comes a Connection block (starting on line 7) that will be used by both of the provisioners in this Resource. Here, we will be connecting to the "leader" instance that was created in our deployment example.
File Provisioner Examples
The final two blocks in the resource define example file provisioners. The first copies a single file.
provisioner "file" {
source = "leader.tf"
destination = "/tmp/leader.tf"
}
This provisioner copies a file specified by the "source" argument (line 2) to the instance. There is no practical reason for copying our "leader.tf" Terraform file; it is just handy for this example. The destination on the instance can be given as a path relative to the user's home directory, or as an absolute path (as in this example). The destination path can end with a file name or with a directory, in which case a "/" must be appended to the path. Whole directories can also be copied, but special rules apply in that case, as in the sketch below.
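For example, a directory copy might look like the following sketch (the "scripts" directory name is hypothetical); with an SSH connection the destination directory must already exist, and adding a trailing "/" to the source copies the directory's contents rather than the directory itself:
provisioner "file" {
  # Without a trailing slash on the source, this creates /tmp/scripts
  # on the instance; with "scripts/" it would copy only the contents
  # of the directory directly into /tmp.
  source      = "scripts"
  destination = "/tmp"
}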
The second provisioner example creates a block of text listing the floating IP addresses of all of the follower instances and saves it in the file "follower_fips.txt" in the home directory on the leader instance. A leader application could then read these addresses in order to communicate with the followers.
provisioner "file" {
content = templatefile("example.tmpl",
{ fips = openstack_networking_floatingip_v2.followers.*.address })
destination = "follower_fips.txt"
}
This provisioner uses the content argument (line 2) rather than the source argument. Note that you may supply only one or the other of these arguments in a file provisioner. The content specification calls the templatefile function, which takes as arguments a "template" file name and a "map" defining some number of variables and their values. In our example, the template file is "example.tmpl" (downloaded above and shown below), and the map contains one variable definition for "fips", which is set to the list of follower floating IP addresses.
%{ for fip in fips ~}
${fip}
%{endfor ~}
The template file iterates over the floating IP addresses in the input variable and writes each one into the content produced for the provisioner.
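For instance, with two followers the rendered content would simply list one address per line (these addresses are hypothetical placeholders):
203.0.113.11
203.0.113.12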
The templatefile documentation shows more examples of how it can be used, and Terraform's String Templates discussion gives more details on template syntax.
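As a rough illustration of that inline syntax, the same listing could also be built directly in the configuration with a heredoc string template rather than a separate template file (the local value names here are hypothetical):
locals {
  follower_fips = openstack_networking_floatingip_v2.followers.*.address

  # Builds the same one-address-per-line listing as example.tmpl.
  fip_listing = <<-EOT
    %{ for fip in local.follower_fips ~}
    ${fip}
    %{ endfor ~}
  EOT
}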
Templates provide an excellent mechanism for merging information known by Terraform into your own scripts. A common workflow is to use a file provisioner like this to create and copy a script to an instance, and then use a "remote-exec" provisioner to run that script.
Since this resource is new, it will be created and its provisioners executed when you again apply the Terraform configuration:
terraform apply -var-file=terratest.tfvars
Jetstream2 Connections
It is important to remember that file provisioners can only copy files to locations on an instance for which the given connection credentials allow access. The examples above use credentials for the image's default user, and so they copy files into that user's home directory, because writing to most system directories requires "root" privileges. There is no way to perform a copy using "sudo" privileges, so to perform a copy as the "root" user you must provide the root user's credentials. There are several obstacles to doing this on Jetstream2:
- Jetstream2 instances do not initially include a public SSH key for the root user.
- Jetstream2 root users do not initially have a password set.
- Password logins are initially disabled on Jetstream2 instances.
If you must copy files or directories to a protected system location, you may need to perform the copy with a user_data script or a remote-exec provisioner. Or, you could create a custom image that either includes an SSH key or a password for the root user and the proper system settings to allow logging in with a password.
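For example, a sketch of the remote-exec approach is to copy the file to a staging location the connecting user can write to and then move it into place with "sudo" (the myservice.conf file and its destination are hypothetical, and this assumes the connecting user can run sudo without a password, as the default cloud-image user typically can):
provisioner "file" {
  source      = "myservice.conf"
  destination = "/tmp/myservice.conf"
}

provisioner "remote-exec" {
  inline = [
    # Move the staged file into a root-owned directory.
    "sudo mv /tmp/myservice.conf /etc/myservice.conf",
  ]
}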