In AWS, EBS ("Elastic Block Store") is the underlying technology that the (virtual) hard disks of your instances (virtual machines) use. You can take snapshots of those virtual hard disks and use those snapshots to, for example:

Here we'll focus on the last use-case: being able to create copies of virtual machines on another AWS account. The reason why I even bothered writing this blog post is that most of the articles on the Internet do not cover this use-case: they assume you're working within one AWS account and/or one region. The use-case covered here requires a few extra steps:

  1. In the origin AWS account take a snapshot of the virtual machine's EBS volume
  2. In the origin AWS account (if needed) copy the EBS snapshot to the region where it will be deployed on the other AWS account
  3. In the origin AWS account configure snapshot permissions to grant access to the target AWS account
  4. In the target AWS account create a snapshot from the snapshot that was shared with you (creating an AMI directly from a snapshot shared with you does not work)
  5. In the target AWS account create an AMI from the snapshot that was created from the EBS volume
  6. In the target AWS account launch a new instance (virtual machine) from the AMI you just created

Why the process needs this extra step (snapshot -> snapshot) I do not know. Possibly it has something to do with how the snapshot permissions/sharing works.
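The whole procedure can be sketched with the AWS CLI. All snapshot, AMI, volume and account IDs below are placeholders, and the device names and instance type are just examples - adapt them to your environment:

```shell
# 1. Origin account: snapshot the VM's EBS volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "testvm root volume"

# 2. Origin account: copy the snapshot to the target region, if needed
aws ec2 copy-snapshot --source-region eu-west-1 --region eu-central-1 \
    --source-snapshot-id snap-0123456789abcdef0

# 3. Origin account: grant the target account access to the snapshot
aws ec2 modify-snapshot-attribute --snapshot-id snap-0aaaaaaaaaaaaaaaa \
    --attribute createVolumePermission --operation-type add \
    --user-ids 123456789012

# 4. Target account: create your own copy of the shared snapshot
aws ec2 copy-snapshot --source-region eu-central-1 --region eu-central-1 \
    --source-snapshot-id snap-0aaaaaaaaaaaaaaaa

# 5. Target account: create an AMI from your own snapshot copy
aws ec2 register-image --name testvm-copy --architecture x86_64 \
    --virtualization-type hvm --root-device-name /dev/sda1 \
    --block-device-mappings 'DeviceName=/dev/sda1,Ebs={SnapshotId=snap-0bbbbbbbbbbbbbbbb}'

# 6. Target account: launch an instance from the new AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.small
```

The same steps can of course be done in the AWS web console.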

Microsoft Azure has a nice service for scheduling tasks called Azure Automation. While Azure Automation can do other things as well, such as act as a PowerShell DSC pull server, we'll focus on runbooks and scheduling. Runbooks are scripts that do things, e.g. run maintenance and reporting tasks. Runbooks often, but not necessarily, manipulate objects in Azure. Runbooks are serverless like Azure Functions and the pricing is therefore similar - you pay for the amount of computing resources you use and don't have to pay for a server that's mostly idle.

On the surface Azure Automation, runbooks and schedules seem deceptively simple, and getting something running in the Azure Portal is actually relatively easy. However, several Azure technologies are involved in making the pieces of the puzzle come together:

When starting with Azure Automation I recommend doing things manually first. Once you're able to make it work manually, codifying your work becomes much easier. That said, here comes the sample Terraform code to set up Azure Automation to start and stop VMs on a schedule - something that seems to be a very common use-case. First we need some plumbing:

data "azurerm_subscription" "primary" {
}

resource "azurerm_resource_group" "development" {
  name     = "${var.resource_prefix}-rg"
  location = "northeurope"
}

Then we create the user-managed identity which the Azure Automation account will use and assign it a custom role that has the permissions to start and stop VMs:

resource "azurerm_role_definition" "stop_start_vm" {
  name        = "StopStartVM"
  scope       = data.azurerm_subscription.primary.id
  description = "Allow stopping and starting VMs in the primary subscription"

  permissions {
    actions     = ["Microsoft.Network/*/read",
                   "Microsoft.Compute/*/read",
                   "Microsoft.Compute/virtualMachines/start/action",
                   "Microsoft.Compute/virtualMachines/restart/action",
                   "Microsoft.Compute/virtualMachines/deallocate/action"]
    not_actions = []
  }
}

resource "azurerm_user_assigned_identity" "development_automation" {
  resource_group_name = azurerm_resource_group.development.name
  location            = azurerm_resource_group.development.location

  name = "development-automation"
}

resource "azurerm_role_assignment" "development_automation" {
  scope              = data.azurerm_subscription.primary.id
  role_definition_id = azurerm_role_definition.stop_start_vm.role_definition_resource_id
  principal_id       = azurerm_user_assigned_identity.development_automation.principal_id
}

With the user-managed identity in place we can create the Azure Automation account:

resource "azurerm_automation_account" "development" {
  name                = "development"
  location            = azurerm_resource_group.development.location
  resource_group_name = azurerm_resource_group.development.name
  sku_name            = "Basic"

  identity {
    type         = "UserAssigned"
    identity_ids = [ azurerm_user_assigned_identity.development_automation.id ]
  }
}

As you can see, the Azure Automation account is linked with the user-managed identity in the identity block.

Now the Simple-Azure-VM-Start-Stop.ps1 runbook can be added:

data "local_file" "simple_azure_vm_start_stop" {
  filename = "${path.module}/scripts/SimpleAzureVMStartStop.ps1"
}

resource "azurerm_automation_runbook" "simple_azure_vm_start_stop" {
  name                    = "Simple-Azure-VM-Start-Stop"
  location                = azurerm_resource_group.development.location
  resource_group_name     = azurerm_resource_group.development.name
  automation_account_name = azurerm_automation_account.development.name
  log_verbose             = "true"
  log_progress            = "true"
  description             = "Start or stop virtual machines"
  runbook_type            = "PowerShell"
  content                 = data.local_file.simple_azure_vm_start_stop.content
}

The runbook is from here, but small modifications were made to make it work with user-managed identities. In particular, the Azure connection part was changed from this simplistic version:

try {
    $null = Connect-AzAccount -Identity
}
catch {
    --- snip ---
}

to a more complex version:

    --- snip ---
    [Parameter(Mandatory = $true)]
    [String] $AccountId,
    --- snip ---

--- snip ---

try {
    # Ensures you do not inherit an AzContext in your runbook
    Disable-AzContextAutosave -Scope Process

    # Connect to Azure with user-assigned managed identity
    $AzureContext = (Connect-AzAccount -Identity -AccountId $AccountId).context

    # Set and store context
    $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
}
catch {
    --- snip ---
}

The -AccountId parameter somewhat confusingly expects to get the Client ID of the managed identity.

Now, with the runbook in place we can create the schedules:

resource "azurerm_automation_schedule" "nightly_vm_backup_start" {
  name                    = "nightly-vm-backup-start"
  resource_group_name     = azurerm_resource_group.development.name
  automation_account_name = azurerm_automation_account.development.name
  frequency               = "Day"
  interval                = 1
  timezone                = "Etc/UTC"
  start_time              = "2022-08-11T01:00:00+00:00"
  description             = "Start VMs every night for backups"
}

resource "azurerm_automation_schedule" "nightly_vm_backup_stop" {
  name                    = "nightly-vm-backup-stop"
  resource_group_name     = azurerm_resource_group.development.name
  automation_account_name = azurerm_automation_account.development.name
  frequency               = "Day"
  interval                = 1
  timezone                = "Etc/UTC"
  start_time              = "2022-08-11T01:30:00+00:00"
  description             = "Stop VMs every night after backups"
}

The final step is to link the schedules with the runbook using job schedules. Note that the parameters to the runbook are passed here. Also note that the keys (parameter names) have to be lowercase even if they're uppercase in the PowerShell code (e.g. AccountId -> accountid):

resource "azurerm_automation_job_schedule" "nightly_vm_backup_start" {
  resource_group_name     = azurerm_resource_group.development.name
  automation_account_name = azurerm_automation_account.development.name
  schedule_name           = azurerm_automation_schedule.nightly_vm_backup_start.name
  runbook_name            = azurerm_automation_runbook.simple_azure_vm_start_stop.name

  parameters = {
    resourcegroupname = azurerm_resource_group.development.name
    accountid         = azurerm_user_assigned_identity.development_automation.client_id
    vmname            = "testvm"
    action            = "start"
  }
}

resource "azurerm_automation_job_schedule" "nightly_vm_backup_stop" {
  resource_group_name     = azurerm_resource_group.development.name
  automation_account_name = azurerm_automation_account.development.name
  schedule_name           = azurerm_automation_schedule.nightly_vm_backup_stop.name
  runbook_name            = azurerm_automation_runbook.simple_azure_vm_start_stop.name

  parameters = {
    resourcegroupname = azurerm_resource_group.development.name
    accountid         = azurerm_user_assigned_identity.development_automation.client_id
    vmname            = "testvm"
    action            = "stop"
  }
}

With this code you should be able to schedule the startup and shutdown of a VM called "testvm" successfully. If that is not the case, go to the Azure Portal -> Automation Accounts -> Development -> Runbooks -> Simple-Azure-VM-Start-Stop, edit the runbook and use the "Test pane" to debug what is going on. You can get script input, output, errors and all that good stuff from there, and you can trigger the script with various parameters for testing purposes.

This code is also available as a generalized module on GitHub.

Puppet Development Kit is probably the best thing since sliced bread if you work a lot with Puppet. It makes adding basic validation and unit tests trivial with help from rspec-puppet. It also makes it very easy to build module packages for the Puppet Forge.

That said, there is a minor annoyance with it: whenever you run "pdk update" to update PDK to the latest version, all your local changes to files such as .gitignore get destroyed, because PDK creates those files from PDK templates and any local changes are mercilessly wiped. Fortunately there is a way to sync your local changes to the files generated from the templates with .sync.yml. While .sync.yml is pretty well documented, the documentation is lots and lots of words with very few examples. So, here's a fairly trivial .sync.yml example that ensures local changes to .pdkignore and .gitignore persist across PDK updates:

    .gitignore:
      paths:
        - '*.log'
    .pdkignore:
      paths:
        - '*.log'

In PDK template speak .gitignore and .pdkignore are namespaces. For a list of namespaces see the official documentation.

Typically Linux nodes are joined to FreeIPA using admin credentials. While this works, it exposes fully privileged credentials unnecessarily, for example when used within a configuration management system (see for example puppet-ipa).

Fortunately joining nodes to FreeIPA is possible with more limited privileges. The first step is to create a new FreeIPA role, e.g. "Enrollment administrator" with three privileges:

Then you create a new user, e.g. "enrollment", and add it to the "Enrollment administrator" role. After that you should be able to join nodes using that "enrollment" user.
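The same setup can be sketched with the ipa CLI. The role and user names follow the description above; "Host Enrollment" is one real FreeIPA privilege given as an example, and the exact privileges to attach depend on your needs:

```shell
# Create the role
ipa role-add "Enrollment administrator" --desc "Can join hosts to the domain"

# Attach privileges to the role ("Host Enrollment" is one example;
# add the other privileges your environment requires)
ipa role-add-privilege "Enrollment administrator" --privileges="Host Enrollment"

# Create the enrollment user and add it to the role
ipa user-add enrollment --first=Host --last=Enrollment
ipa role-add-member "Enrollment administrator" --users=enrollment
```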

While this is not perfect security-wise, it is still better than having to expose the admin credentials just to join nodes to FreeIPA.

Cloud-Init is "a standard for customizing" cloud instances, typically on their first boot. It allows mixing state-based configuration management with imperative provisioning commands (details in our IaC article). By using cloud-init most of the annoyances of SSH-based provisioning can be avoided:

That said, neither cloud-init itself, nor its use within Terraform is particularly well documented. Therefore it can be an effort to create cloud-init-based provisioning that works and adapts easily to different use-cases. This article attempts to fill that gap to some extent.

In this particular case I had to convert an existing, imperative SSH-based Puppet agent provisioning process to cloud-init, so there's very little state-based configuration management in any of this. What I ended up with was a three-phase approach:

  1. Put the provisioning scripts on the host
  2. Run all the provisioning scripts that are required for any particular use-case
  3. Remove the provisioning scripts from the host

The first step includes creating a cloud-init yaml config, write-scripts.cfg, that has all the provisioning scripts embedded into it:

write_files:
  - path: /var/cache/
    owner: root:root
    permissions: '0755'
    content: |
      # Script body start
      --- snip ---
      # Script body end
  - path: /var/cache/
    owner: root:root
    permissions: '0755'
    content: |
      # Script body start
      --- snip ---
      # Script body end
  - path: /var/cache/
    owner: root:root
    permissions: '0755'
    content: |
      # Script body start
      --- snip ---
      # Script body end
  - path: /var/cache/
    owner: root:root
    permissions: '0755'
    content: |
      # Script body start
      --- snip ---
      # Script body end

The key with these scripts is that they are not Terraform templates. Instead, they're static files that take parameters to adapt their behavior, including doing nothing if the user so desires. The main reason for making this file static instead of a template is that it prevents Terraform variable interpolation from getting confused about POSIX shell variables written in the ${} syntax.

The cloud-init part is just thin wrapping to allow "uploading" the scripts to the host. In Terraform we load the above file using a local_file datasource:

# cloud-init config that installs the provisioning scripts
data "local_file" "write_scripts" {
  filename = "${path.module}/write-scripts.cfg"
}

This alone does not do anything, just makes the file contents available for use in Terraform.

The next step is to create the cloud-init config, run-scripts.cfg.tftpl, that actually runs the scripts and does cleanup after the scripts have run. As the name implies, it is a Terraform template:

runcmd:
  - [ "/var/cache/", "${hostname}" ]
%{ if install_puppet_agent ~}
  - [ "/var/cache/", "${puppetmaster_ip}" ]
  - [ "/var/cache/", "${deployment}" ]
  - [ "/var/cache/", "-n", "${hostname}", "-e", "${puppet_env}", "-p", "${puppet_version}", "-s"]
%{endif ~}
  - [ "rm", "-f", "/var/cache/", "/var/cache/", "/var/cache/", "/var/cache/" ]

Note the ~ after the statements: it ensures that a linefeed is not added to the resulting cloud-init configuration file.

By making this file a template we can drive the provisioning logic using "advanced" constructs like real for-loops and if statements which Terraform (or rather, HCL2) itself lacks. Templating also allows making all provisioning steps conditional - something that's very difficult to accomplish with SSH-based provisioning (see my earlier blog post).
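As an illustration of the for-loop syntax (the extra_packages list is a hypothetical template variable, not part of the configuration above), a template could emit one runcmd entry per package:

```
%{ for package in extra_packages ~}
  - [ "apt-get", "install", "-y", "${package}" ]
%{ endfor ~}
```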

The matching Terraform datasource looks like this:

data "template_file" "run_scripts" {
  template = file("${path.module}/run-scripts.cfg.tftpl")
  vars = {
    hostname             = var.hostname
    deployment           = var.deployment
    install_puppet_agent = var.install_puppet_agent
    puppet_env           = local.puppet_env
    puppet_version       = var.puppet_version
    puppetmaster_ip      = var.puppetmaster_ip
  }
}

As can be seen the template does not magically know the values that are already available in Terraform code - instead, they need to be passed to the template explicitly as a map.
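As a side note, the template_file datasource comes from the now-deprecated hashicorp/template provider; on current Terraform versions the built-in templatefile() function does the same job without any provider. A sketch, assuming the same variables as above:

```hcl
locals {
  run_scripts = templatefile("${path.module}/run-scripts.cfg.tftpl", {
    hostname             = var.hostname
    deployment           = var.deployment
    install_puppet_agent = var.install_puppet_agent
    puppet_env           = local.puppet_env
    puppet_version       = var.puppet_version
    puppetmaster_ip      = var.puppetmaster_ip
  })
}
```

The rendered result is then referenced as local.run_scripts instead of data.template_file.run_scripts.rendered.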

The next step is to bind the two cloud-init configs into a single, multi-part cloud-init configuration using the cloudinit_config datasource:

data "cloudinit_config" "provision" {
  gzip          = true
  base64_encode = true

  part {
    content_type = "text/cloud-config"
    content      = data.local_file.write_scripts.content
  }

  part {
    content_type = "text/cloud-config"
    content      = data.template_file.run_scripts.rendered
  }
}

The above file shows one of the strengths of cloud-init: you can do provisioning using a combination of shell commands, scripts and cloud-init configurations by setting the content_type appropriately for each part. See the cloud-init documentation for more details.

Finally we can pass the rendered cloud-init configuration to the VM resource that will consume it:

resource "aws_instance" "ec2_instance" {
  --- snip ---
  user_data = data.cloudinit_config.provision.rendered
}

You may also want to ensure that changes to provisioning scripts do not trigger an instance rebuild:

  lifecycle {
    ignore_changes = [
      user_data,
    ]
  }

When developing cloud-init templates it can be useful to validate their contents:

$ cloud-init devel schema --config-file <config-file>

This will catch all the easy errors quickly. According to some sources this command is (or was) nothing but a glorified YAML linter, but it is easily available on Linux, so it is worth using anyway.

If provisioning scripts are not working as expected, cloud-init logs (/var/log/cloud-init.log and /var/log/cloud-init-output.log on most distributions) may reveal why.

Some notes:

External links:

Terraform does not have a particularly strong decoupling between data and code, at least not from a best practices perspective. It is possible and useful, however, to use data to define Terraform resources - if not for any other reason but to reduce code repetition for common resources that require defining lots of parameters. Here's an example of setting up a Kubernetes cluster in Hetzner Cloud using Terraform with a "data-driven" approach:

variable "k8s_instances" {
  type = map
  default = { "" = { ip = "", server_type = "cx21", worker = false },
              "" = { ip = "", server_type = "cx21", worker = false },
              "" = { ip = "", server_type = "cx21", worker = false },
              "" = { ip = "", server_type = "cx31", worker = true  },
              "" = { ip = "", server_type = "cx31", worker = true  },
              "" = { ip = "", server_type = "cx31", worker = true  }
            }
}

locals {
  k8s_master_firewall_ids = [ hcloud_firewall.k8s_common.id ]
  k8s_worker_firewall_ids = [ hcloud_firewall.k8s_common.id, hcloud_firewall.k8s_worker.id ]
}

module "k8s_instance" {
  for_each = var.k8s_instances

  source          = ""
  hostname        = each.key
  image           = "ubuntu-20.04"
  puppetmaster_ip = var.puppet7_example_org_private_ip
  ssh_keys        = [ ]
  backups         = "true"
  server_type     = each.value["server_type"]
  floating_ip     = "false"
  firewall_ids    = each.value["worker"] ? local.k8s_worker_firewall_ids : local.k8s_master_firewall_ids
}

resource "hcloud_server_network" "k8s_network" {
  for_each = var.k8s_instances

  server_id  = module.k8s_instance[each.key].id
  network_id =
  ip         = each.value["ip"]
}

resource "hcloud_firewall" "k8s_common" {
  name = "k8s-common"

  dynamic "rule" {
    for_each = var.k8s_instances

    content {
      description = "Any TCP from ${rule.key}"
      direction   = "in"
      protocol    = "tcp"
      port        = "any"
      source_ips  = ["${rule.value["ip"]}/32"]
    }
  }
}

resource "hcloud_firewall" "k8s_worker" {
  name = "k8s-worker"
  rule {
    description = "HTTPS from anywhere"
    direction   = "in"
    protocol    = "tcp"
    port        = 443
    source_ips  = [ "0.0.0.0/0", "::/0" ]
  }
}

The code above does two main things:

The looping over the data is based on for_each, which is used both for creating multiple resources in one go from a map, and for creating dynamic nested blocks inside a single resource definition.

This same approach can be used for other purposes. Note that if you're refactoring code then you will need to move resources in the Terraform state file after implementing this kind of data-driven approach, as resource paths will change.
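On Terraform 1.1 and newer, the state moves can also be recorded in code with moved blocks instead of running manual "terraform state mv" commands. A hypothetical example, assuming a VM that used to be defined as a standalone hcloud_server resource (both addresses below are made up for illustration):

```hcl
moved {
  from = hcloud_server.k8s_master_1
  to   = module.k8s_instance["k8s-master-1"].hcloud_server.server
}
```

Terraform then performs the state move automatically on the next apply.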

The system tray is a "legacy" area where various applications (e.g. Nextcloud, Pidgin and Signal) have an icon with which you can interact with the application without actually opening the main application window. I said "legacy" because phasing it out was the plan in the Gnome 3 project, but it seems like we're not getting rid of the tray any time soon, if ever.

In any case the system tray is not enabled by default in recent Fedoras, and the proper way to enable it has changed several times in the recent past as Gnome has been upgraded. This post outlines the most recent way to do it, and hopefully this one sticks for longer than a year or so.

First you need to install the AppIndicator Gnome shell extension. It is recommended to do that with dnf to ensure that all its dependencies are met and that it is always compatible with your current Gnome version:

sudo dnf install gnome-shell-extension-appindicator

This installs the extension but does not enable it. You may recall using the Gnome Tweaks tool for enabling Gnome Shell extensions, but in Fedora 35 you would just be confused, as the "Extensions" tab is completely missing. This is because management of extensions is now done with a standalone Gnome Extensions app, which can be installed like this:

sudo dnf install gnome-extensions-app

The final step is to launch the Gnome Extensions app:

gnome-extensions-app
Then enable the AppIndicator extension. Now you should start seeing your system tray applets on the right-hand side of your top pane.

In Terraform you have access to basic data types like bool or string. Defining data types is a good start for improving the quality of your modules. However, you may want to validate that a certain string matches a list of predefined options, and if not, fail validation early.

Terraform, unlike Puppet, does not have a built-in Enum data type. However, you can emulate an Enum like this:

variable "type" {
  type = string

  validation {
    condition     = length(regexall("^(public|private)$", var.type)) > 0
    error_message = "ERROR: Valid types are \"public\" and \"private\"!"
  }
}

This looks crude, but it does work.
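A slightly more readable way to express the same constraint is the built-in contains() function:

```hcl
variable "type" {
  type = string

  validation {
    condition     = contains(["public", "private"], var.type)
    error_message = "ERROR: Valid types are \"public\" and \"private\"!"
  }
}
```

Either way the effect is the same: "terraform plan" fails early with the given error message if var.type is anything but "public" or "private".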

Puppet Bolt handles inventories in a very flexible and powerful manner: you can combine static target definitions and dynamically generated targets in a single inventory. For example, you can have an inventory which defines some static node names combined with the AWS inventory, or one that combines static nodes with the Vagrant inventory.

Puppet Bolt inventory plugins do their magic and then output what is essentially a static inventory generated dynamically. When dynamic inventory plugins do not work (as in my case) it can be challenging to figure out what is wrong, let alone how to fix it. However, there is a very handy way to print out a static version of the entire inventory, including the dynamically created content:

$ bolt inventory show --detail
- name: server
  uri: ssh://
  alias: []
  config:
    transport: ssh
    ssh:
      batch-mode: true
      cleanup: true
      connect-timeout: 10
      disconnect-timeout: 5
      load-config: false
      login-shell: bash
      tty: false
      host-key-check: false
      private-key: "/home/john/bolt-vagrant-test/.vagrant/machines/server/virtualbox/private_key"
      run-as: root
      user: vagrant
  vars: {}
  features: []
  facts: {}
  plugin_hooks:
    puppet_library:
      plugin: puppet_agent
      stop_service: true
  groups:
  - all
--- snip ---

You can pipe the output to a file and edit the file to make it work as a static inventory. The only changes I had to make were removing the "groups" section and getting rid of the useless crap that came in via STDOUT. Once you have the static inventory you can change settings (e.g. for SSH) to figure out why things are not working as expected, pointing Bolt to the proper inventory file when testing, e.g.

$ bolt command run "date" -i test-inventory.yaml -t server 

Once the static inventory works you can proceed to fixing the underlying problem in dynamic provider.
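For reference, a minimal static inventory distilled from output like the above could look like this (adapted from the example output; trim or extend the SSH options to taste):

```yaml
targets:
  - name: server
    config:
      transport: ssh
      ssh:
        user: vagrant
        run-as: root
        host-key-check: false
        private-key: "/home/john/bolt-vagrant-test/.vagrant/machines/server/virtualbox/private_key"
```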

Terraform's remote-exec provisioner fails immediately if any command in a script exits with a non-zero exit code. This makes building polling loops a bit more involved than it normally is. So, here is an example loop that checks if a URL can be reached:

while true; do
    curl -k https://puppet:8140/puppet-ca/v1 && break || sleep 3
done

When the URL is unreachable, the "||" will ensure that "sleep" gets run, which returns 0. If the URL is reachable, curl itself will return 0 and "&&" ensures that we break out of the loop.
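A possible refinement (my addition, not from the original provisioning code) is to cap the number of attempts, so that a genuinely broken service fails the provisioner with a clear error instead of hanging forever:

```shell
# Retry a command until it succeeds or max_tries is reached.
# RETRY_DELAY (seconds between attempts) defaults to 3.
wait_for() {
    max_tries=$1
    shift
    i=0
    until "$@"; do
        i=$((i + 1))
        if [ "$i" -ge "$max_tries" ]; then
            echo "Gave up after $max_tries attempts: $*" >&2
            return 1
        fi
        sleep "${RETRY_DELAY:-3}"
    done
}

# Example: fail if the CA API stays unreachable for ~3 minutes:
#   wait_for 60 curl -sk https://puppet:8140/puppet-ca/v1
```

Because the function returns 1 on failure, remote-exec still aborts the run - but only after a bounded wait and with an error message in the logs.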

Spiraling down into a Cloud mess (photo: Ferran Perez)

The slippery slope

Custom software development services are in high demand today. Organizations are willing to pay considerable amounts of money for development of custom software, be it mobile applications relying on backend services in the Cloud, or tailoring of open source software to better fit their needs. This has provided the impetus for rapid growth of companies that provide software development services. Some of these companies simply rent developers to clients, others work on customer projects handling both project management and development.

Development of software is the first, and most important, step in the software life cycle. However, things do not end there. Some clients may be unwilling to maintain the software that was developed for them, which leaves the maintenance responsibility to the software company.

Software developers do not enjoy managing production deployments, which are riddled with boring yet extremely important duties like ensuring that disaster recovery procedures work and that everything is monitored and backed up to a sufficient degree. Historically that has been the turf of system administrators.

It is also likely that the cash cow for a software company is developing software, not maintenance of production systems. Unless, of course, the company had the foresight to factor in the latter to the contracts they made with their clients.

The production maintenance responsibility can end up being a huge burden if client preferences are followed to the letter: with a large number of clients the software company can end up responsible for a huge mix of technologies. For example, one production application may be deployed in AWS on top of Fargate, another in Google Cloud on top of Kubernetes and yet another in Azure on virtual machines with autoscalers, load balancers and managed databases. This is very problematic, because the feature set of each of the three public Clouds is huge:

While these Clouds share many similarities, most of their services are different enough to cause lots of maintenance headaches if you managed to distribute your development and production workloads across them and did it all manually.

Cloud spaghetti (photo: Ksenia Chernaya)

Developers usually do not need DevOps or infrastructure code tools in their line of work. A natural consequence of this is that they don't know those tools well, which means that they tend to lean towards configuring production environments manually. Containers are an exception to this rule: they are widely used by developers and force them to write infrastructure code - typically Dockerfiles. However, even containers depend on correct setup of their runtime environment, be it Amazon ECS, managed Kubernetes on Digital Ocean or Azure Container Instances. Again, not something you need to worry about as a developer if you run your code in Docker Compose.

This slippery slope can lead to a situation where each production deployment depends on a single developer and others struggle to help him or her out because of technology sprawl caused by lack of standardization.

It should also be noted that container images created for development purposes are not necessarily suitable for a production environment. In production you need security hardenings, different configurations and often have to add things like sidecar containers for handling logging, monitoring and backups.

Hiring a full-stack DevOps messiah?

Once a software development company finds itself in a situation like the above, the natural inclination is to hire its way out of it - in other words, get somebody to clean up the mess. Depending on the depth of the pit the company has dug itself into, and its level of realism, it may be looking for a DevOps messiah. Such a person would be a full-stack software developer who also happens to be an expert in AWS, Azure, Google Cloud, Terraform, Kubernetes, Docker, Ansible, Linux, Lean, Scrum, Kanban and DevOps. The idea is that this person would be a developer who could, on the side, manage all the DevOps tools and practices. The problem is that such people do not exist. One person may know something about all of these, but being proficient in all of them is something else entirely. To give some perspective on what skills a DevOps needs:

DevOps roadmap by TEL4VN

A super-senior DevOps who began his or her career in the golden age of baremetal servers, lived through the age of the virtual machines and classic configuration management and now walks naturally in a world filled with containers, Clouds and serverless, would be able to cope with all the DevOps things, but such a person would definitely not fill the boots of a full-stack developer.

On the flipside a good and experienced full-stack developer is rarely if ever a good DevOps. You learn by doing, and if your doing is split across too many fields you never become an expert in any.

Even if people in this messiah category were available, what are the chances that they would be available on the market looking for a job?

Hiring an in-house DevOps team?

So, the best thing the software company can do is to step back and take a reality check. What they're really looking for is a DevOps team that can work with their own developers to set up development and testing pipelines and to maintain a stable production infrastructure.

The challenge with both approaches is that DevOps skills, too, are in high demand. That's probably why CEOs and CTOs get bombarded by "partnership" proposals from DevOps outsourcing companies that have people from Ukraine, Poland, Romania and India on their payroll. Their prices vary, but in 2021 prices for outsourced Ukrainian DevOps are in this ballpark:

You can get a junior DevOps from India a bit cheaper, starting at 20€/hour, but the cultural differences to Europe and U.S. are big which may cause challenges down the line for the unprepared.

Hiring experienced, local DevOps in most western European countries is challenging, which better explains the steady outbound marketing push from outsourcing companies in Eastern Europe than the difference in cost.

If the software company felt they had a need for a DevOps messiah then hiring one or two junior DevOps is not a proper solution. As the word “junior” implies such a person is still relatively inexperienced in what they do. Inexperienced people are generally insecure and those that are not can easily become a liability especially if they have what we call “a cowboy mentality”: shoot from the hip instead of aiming first. Cowboy or not, a junior DevOps can only be expected to solve junior-level challenges.

A much better plan is to hire a super senior DevOps and possibly one or two junior DevOps he can lead. It won’t be cheap, but it will get the job done.

Hiring external DevOps contractors?

This article was written by us, and as we sell DevOps services we can’t be considered impartial. That said, we have decades of experience working with developers (who we love), system admins (the grumpy bearded guys) and DevOps (hipster sysadmins who install everything they read about on Hacker News), so we have a fair amount of experience to back up our claims.

The benefit of hiring contractors instead of hiring employees is that you don’t need to commit to hiring them full-time and you can get senior guys without having to have piles of money next to you.

Our crystal trophy. We think of the phrase as a goal rather than as a statement of a fact.

This is a question I asked myself in September 2021 when I was informed by Corporate Vision that we had been nominated as a candidate for being the best company in the "IT Infrastructure Management Specialists - Finland" category of the Small Business Awards. At that time (before this blog post) there was very little information online about the Small Business Awards. It seemed that they are a so-called vanity award, where winners have been preselected and everything else is just for show. We could not be absolutely sure about that then, and we thought that "winning" might have some marketing value, so we decided to play along and see where it would lead us. I will first go through the process, then I'll try to answer the question in the title.

The emails they sent had this to say about the selection process:

Our merit led selection process narrows down nominees in order to seek out exclusively the very best that the Small Business industry has to offer on a global scale.

Email, early September 2021

Moreover, on their website they say this:

For the 2021 Small Business Awards Corporate Vision Magazine will leave no stone unturned when establishing those who truly represent the sheer determination and dedication it takes to establish, run, and grow a small business successfully!

This gives the process the appearance of a competition based on merit. If you've worked hard to build up your company, like we have, you're of course inclined to believe that maybe they've genuinely done their research and reached the conclusion that we really are worthy of such recognition.

The first step in the process was to accept the nomination, which did not cost any money or time. Soon after that they requested supporting information:

Over the next few weeks our in-house research team will be compiling a case file for submission to the judging panel in order to determine who will be successful in this year’s awards.

At this time, we would like to give you the opportunity to provide supporting evidence to help us build your case file. By clicking the link below, you will be taken to a short survey - compatible with PC / Tablet / Phone - with some standard questions we would like you to answer.  Completing this survey is entirely optional and not doing it will not impact your nomination in any way, however it is a great opportunity to provide us with information we may not find in the public domain.

Please click here to submit supporting information.

The deadline for supplying this information is Sep 22, 2021.

If you have a specific document that you would like to submit; media packs/ brochures press packs then please feel free to email this across separately in an email or mail to 2nd Floor, Suite F, The Maltsters, 1-2 Wetmore Road, Burton-on-Trent, Staffordshire, DE14 1LS.

Email, 15th September 2021

I will get back to that address later as it turned out to be quite relevant for the purposes of this article.

On September 20th we received an email telling us that we had won the contest. This was before we had sent them any supporting evidence:

On behalf of Corporate Vision, it is my privilege to inform you that Puppeteers Oy has been presented the following:

Best IT Infrastructure Management Specialists - Finland

--- snip ---

As one of our 2021 winners' Corporate Vision is delighted to present you with access to our Complimentary Package. This comprises of entry in our online winners' directory as well as access to the official awards press release. 

--- snip ---

Email, 20th September 2021

I remember being confused about this and somehow interpreted the email as saying that we had been selected as one of the nominees, not winners. Probably my brain tried to make sense of the anomaly as best it could.

Only a couple of hours later I received a notification about the status of the competition and a reminder to fill out the questionnaire:

Just to let you know, your case file is currently with our research team who are in the final stages of deliberating the winners for this year’s Small Business Awards 2021 hosted by Corporate Vision!

If you would like to fill out the supporting evidence questionnaire to be taken into consideration, you are free to do so.

Please click here to access the supporting evidence questionnaire.

Email, 20th September 2021

So, I went on to fill the form which had plenty of questions:

  1. Company Name
  2. Company age
  3. What is your primary industry?
  4. In what region do you operate?
  5. In which country are you based?
  6. Please give us a brief description of yourself or the relevant person within your company.
  7. Position within the company
  8. Length of service
  9. Please give us a brief overview of Puppeteers Oy and the service you provide.
  10. With the difficulties of 2020 in mind, how were you able to adapt and persevere?
  11. Do you have any flagship products or services?
  12. What makes you stand out above your competitors?
  13. If you could highlight an award-winning aspect of your company, what would it be?
  14. Please feel free to upload any supporting documentation you feel may be relevant. In the case of multiple files please upload in one single zipped folder.
  15. Do you have a suggested Award title?

Some of the questions, like "What makes you stand out above your competitors?", are very good and make you think. We had internally talked about these things a lot, so we could answer them easily: had we not, the process would have been a lot more involved. It took slightly over an hour to fill this form for us - your mileage may vary.

The next day I was informed that my answers had been received:

Thank you for submitting your supporting evidence.

I can confirm that this has reached us successfully and we have added this to your case file for the judges to review when making their final decisions.

As soon as the decisions have been made, I will let you know the outcome!

I wish you all the best with your nomination and good luck!

If you do have any questions in the meantime, please feel free to get in touch.

Email, 20th September 2021

A week later we got an email telling us that we had won our category:

I hope you are well and keeping safe.

I emailed you last week to let you know that Corporate Vision has finalised the results of the Small Business Awards 2021 and that Puppeteers Oy has been named one of our illustrious victors. As a reminder Puppeteers Oy has been presented with the following:

Best IT Infrastructure Management Specialists - Finland

This truly represents all of your hard work and dedication paying off. Our merit led selection process narrows down nominees in order to seek out exclusively the very best that the Small Business industry has to offer on a global scale.

Email, late September 2021

As noted above, we had received the same "you won" email on 20th September. This was before we submitted our supporting evidence to the judges on 21st September.

Anyhow, after winning things started to become more interesting as money came into play:

What happens next?

An official online announcement will take place on the Corporate Vision website within the next few months.

As one of our 2021 awardees, Corporate Vision is delighted to present you with access to our Complimentary Package. This comprises of entry in our online SEO directory as well as access to the official awards press release.

We will also publish the 2021 Small Business Awards celebratory magazine; a platform through which you can further share the news and receive much deserved industry recognition. The magazine will be digitally distributed to our circulation of 155,000+ business leaders and experts.

Should you wish to expand upon the complimentary offering, the following selection of optional packages are available to you. In true style, our front cover spread has already been snapped up, but don't worry, we still have plenty of high exposure options still available - if you would like to see visual examples of these packages you can do so by clicking the button below:

The Bronze Package – 595 GBP / 827 USD: 1 page of dedicated content, 1 Crystal trophy, Personalised digital logo + Free of charge items

The Silver Package – 995 GBP / 1,383 USD: Your logo on the front cover, 1 page of dedicated content, 1 Crystal trophy, Personalised digital logo, Bespoke digital certificate, Free of charge items

Upper Level Packages

The Gold Package - 1,895 GBP / 2,634 USD: Supporting front cover headline and image, 2 pages of dedicated content, 1 Crystal trophy, 1 Slate Trophy, 1 Wall mounted plaque, Personalised digital logo, Bespoke digital certificate, 4-Page bespoke digital brochure, 25 Hard copies of your bespoke brochure, Free of charge items

Individual Items

Crystal trophy: 275 GBP / 382 USD

Slate trophy: 300 GBP / 417 USD

Wall plaque: 300 GBP / 417 USD

Personalised digital logo: 250 GBP / 348 USD

Personalised digital certificate: 250 GBP / 348 USD

Email, 20th October 2021

This offer was crafted quite well. You can of course choose the free "Complimentary package", but it boils down to one line on a web page that few if any people are likely to see. You could still use it in your own marketing, but that's it. The way they make their paid packages more tempting is by telling you that their magazine is circulated to "155,000+ business leaders and experts". Essentially they want you to believe that if you buy a package and get an article about your company in their magazine, the article will be placed in front of 155,000 potential customers. You then hope that some of those people will contact you, asking you to help them - in return for your usual fee, of course.

At this point I decided to try to check if this claim of 155,000 people was bloated or not. So, I went to check their social media accounts:

Their social media postings are essentially abstracts of the articles they write on their blog, with links to the full text on their website. I read through some of their articles and their quality seemed pretty decent - nothing brilliant, but generally ok and useful. Their claim about "circulation of 155,000+ business leaders and experts" must be derived from the subscriber count of their email newsletter as their social media following is nowhere near those numbers.

After doing this due diligence we decided to play along and get the cheapest ("Bronze") package. We had low expectations about the marketing value, but the end of the year was approaching and we needed to spend a bit of money before the tax office took its cut of our profits (20%). Plus, having a crystal trophy in our office would obviously be nice. That said, we dragged our feet long enough to be offered a 10% discount, probably out of fear that we might not buy at all.

The next step was giving them the shipping address, billing details and all the boring stuff. After that they requested more information for the featured article in their magazine:

I hope you are well. My name is xxx and I will be coordinating the content for your inclusion in Corporate Vision. It is my pleasure to be working with you, and as Editor I am here to make sure that we create the perfect piece of content for your company.

Please find below a link to a set of questions, from which we will write your inclusion and send it over for you to review. If you could fill out the questionnaire and return it by 10 Nov, 2021 that would be super.

Please note that these questions are simply a guide to help us put together copy for you, if you feel that these questions are not appropriate or already have content you would like to supply to us then please let me know. Alternatively, if you would prefer that I write a first draft for you utilising information available about your firm online as researched by myself and/or the team and send it over for you to review and add to, please inform me and I will be happy to help.

The deadline is flexible, if you have any problems with the current deadline then please let me know and I will be happy to extend this for you.

Also, it would be great if you could send a headshot or any group/firm photos on file as high resolution .jpg or .png file formats and any logos in .eps or .jpg formats. You are welcome to submit these separately to your content or upload them via the link at the beginning of the below questions, please do let me know if these will take more time for you to obtain.

Email, 21st October 2021

The questionnaire was as follows. Contact details and such have been omitted:

  1. To start: Please give us a brief overview of your company, your clients and the services you offer. What would you say were the driving factors behind your success so far? What are your core values? What is your overall mission?
  2. When working on a new project or with a new client, what steps do you take to ensure that the overall outcome is successful?
  3. Working in such a competitive industry, what steps does your firm take to set itself apart from your competition? What is, essentially, your unique selling point?
  4. What is the internal culture in your firm? How do you ensure that all of your staff are equipped to provide the best possible service to your clients? What qualities do you look for when recruiting new talent?
  5. What does the future have in store for your firm? Do you plan to grow or are you proud to offer a service only a small business can?
  6. Do you have anything you would like to add/ Anything you would like the writing team to mention or focus on in the write up? Feel free to add any details here that you feel were missed by the previous questions.

Again, these are good questions you should answer even if you're not nominated for any award. For us, answering them took an hour and a half.

They then wrote a draft of the article based on the information we had provided and sent it to us for review. We made some fixes and sent the corrected version back. We then sent them our logo and some photos, and the article was finalized. As we accidentally missed a deadline, our article initially displayed a photo of a lawn mower, which apparently qualified as a "relevant stock image" in our case. They did replace it at our request after some delay. Some time later we received our crystal trophy, which got stuck in customs. The customs declaration cost a whopping 2 euros.

While writing this article I did additional research on the topic. In particular, I googled the postal address "Suite F, The Maltsters, 1-2 Wetmore Road, Burton-on-Trent, Staffordshire, DE14 1LS" and rather surprisingly found lots and lots of magazines similar to Corporate Vision, all part of the same company, AI Global Media Ltd. Here are all the ones that Google turned up:

Based on a cursory look, all of these websites and organizations are built from an identical template: they publish articles of fairly decent quality, post them to social media, run an email newsletter and have an awards program - probably very similar to the Small Business Awards we took part in. This means that their business model is proven and working, and seems to be based on the money they make from awards.

When I started writing this article I had a hard time putting all the events in order. That's because the events did not actually happen in a sensible order: as mentioned, we received a notification that we had won before the judges had had a chance to review the supporting evidence we sent them.

Moreover, I never saw a list of the other contenders or nominees. The list of 2021 winners is quite long, but not knowing how many accepted their nomination it is impossible to tell what percentage actually won the award. It is important to note that the awards were never framed as a competition. Instead, the judges would give awards to companies that deserve recognition.

Based on all I've seen and experienced, I feel that winning a Small Business Award is as easy as accepting the nomination. Business-wise this would make a lot of sense for AI Global Media, as the more winners they have, the more money they make. That said, I have absolutely no way to prove this.

Does that make Small Business Awards a scam? I'd say no, because you're still getting some marketing exposure and a trophy/trophies in return, and their prices are quite reasonable considering that they have to pay their employees for writing and editing the articles, doing the layout in the magazine, editing images, customizing and sending the trophies etc.

But should you give them your money? I personally would not, knowing what I know now. We did it mainly because we were curious about the whole thing and had money to spare. In the end we did not get any contacts from any prospective customers thanks to our featured article, so the marketing value of winning was, in our case at least, minimal.

What we did get was a small flood of proposals from other companies for being nominated in various "best of <x> in 2021" award programs, some with very steep prices (2000-5000€). Those we rejected out of hand, and you probably should, too.

While rspec-puppet documentation is quite decent, it does not really explain how to test classes that get their parameters via Hiera lookups, such as profiles in the roles and profiles pattern. Several parameters related to Hiera are listed in the rspec-puppet configuration reference, but that's all. The other documentation you find on the Internet is generally outdated and not applicable for modules converted with PDK.

In this article we show how to test an "ipa_client" class that uses lookups to get its parameters from Hiera. It used to be a profile that was later separated from the control-repo into a module of its own, which means that rspec-puppet does not have access to the control-repo's Hiera data.

The first step is to make the class spec file load a Hiera config:

let(:hiera_config) { 'hiera-rspec.yaml' }

If this line is missing, rspec-puppet will not do any Hiera lookups. The hiera-rspec.yaml file should be placed in the module root. Its contents should be like this:

version: 5

defaults:  # Used for any hierarchy level that omits these keys.
  datadir: data         # This path is relative to hiera.yaml's directory.
  data_hash: yaml_data  # Use the built-in YAML backend.

hierarchy:
  - name: 'rspec'
    path: 'rspec.yaml'

This hierarchy is completely separate from the module's default hierarchy to prevent rspec data and module data (defaults) from getting entangled.

The data/rspec.yaml file contains the data required by the lookups in the class and nothing more:

ipa_manage: true
ipa_domain: ''
ipa_admin_password: 'foobar'
ipa_master_fqdn: ''
ipa_configure_sshd: false
ipa_enable_dns_updates: true
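
For context, the class under test would consume these keys with explicit lookup() calls roughly like this - a hypothetical sketch, since the article does not show the real ipa_client code:

```puppet
class ipa_client {
  # Under rspec-puppet these lookups resolve via hiera-rspec.yaml,
  # i.e. against data/rspec.yaml shown above.
  $manage         = lookup('ipa_manage', Boolean)
  $domain         = lookup('ipa_domain', String)
  $admin_password = lookup('ipa_admin_password', String)

  if $manage {
    # ...actual IPA client configuration would go here...
  }
}
```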

The class spec file was generated with PDK and only slightly modified to support lookups and the special requirements of the class itself:

# frozen_string_literal: true

require 'spec_helper'

describe 'ipa_client' do
  on_supported_os.each do |os, os_facts|
    context "on #{os}" do
      extra_facts = {}
      extra_facts[:ipa_force_join] = false
      extra_facts[:lsbdistcodename] = 'RedHat' if os_facts[:osfamily] == 'RedHat'

      let(:facts) { os_facts.merge(extra_facts) }
      let(:hiera_config) { 'hiera-rspec.yaml' }

      it { is_expected.to compile }
    end
  end
end

Further reading

Mautic is a widely used open source email marketing automation application written in PHP. Email marketing is typically used in conjunction with inbound marketing done by asking people to give their email address in exchange for something, like a free ebook or a newsletter with good content. As you're asking for the email address on a publicly facing webpage, spambots will find your form, and start misusing it to send your "Welcome" emails all over the place. This in turn may result in lots of bounces and/or complaints, which may get you locked out from your email service provider unless you pay attention. So, you need a way to block or fool those spam bots to prevent such issues, and to keep the Mautic database clean.

One way to reduce the number of bot submissions is to use a CAPTCHA, which Mautic supports out of the box. There is some evidence that CAPTCHAs negatively impact conversions, that is, fewer real people fill out your email submission form. This makes sense, as CAPTCHAs are quite difficult nowadays and take a fair amount of time to complete.

Another, less invasive way is to use a honeypot field in the form, which catches bots - at least the stupid and/or non-targeted kind - by surprise and prevents them from misusing your form. There is an official Mautic blog post here that explains how to add a honeypot to campaign forms. That approach does not work with standalone forms, but fortunately there's another way to do it (idea from here):

  1. Create a custom field of data type "email" but give it a name like "Your phone". Make sure it is not a required field. Also make sure it is of "Contact" type.
  2. Add a new field of type "text area" to your standalone form.
  3. Under "Generic" tab set the label to "Phone".
  4. Under "Generic" tab set "Save result" to "No" (=never save field content to Mautic database)
  5. Under "Contact field" tab map your custom field ("Your phone").
  6. Under "Validation" tab set "Required" to "No"
  7. Under "Attributes" tab set "Field container attributes" to style=display:none. This hides the field from normal users but leaves it to the HTML source code.

The honeypot works by luring bots into filling the "Phone" field - which is invisible to real people - with phone numbers instead of email addresses. This in turn prevents them from pressing the "Submit" button, because the data is invalid. As the field is not required, normal people who can't even see the field will leave it empty and are thus able to submit the form. How well this works in practice remains to be seen.
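
The rendered form then contains markup roughly like the following (illustrative only - Mautic's actual markup, element IDs and field names will differ):

```html
<!-- The container is hidden via the "Field container attributes" setting,
     so humans never see or fill the field, but bots parsing the HTML do. -->
<div style="display:none">
  <label for="mauticform_input_phone">Phone</label>
  <textarea id="mauticform_input_phone" name="mauticform[phone]"></textarea>
</div>
```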

To ensure that your hidden field is actually working, open your browser's developer console (Ctrl-Shift-C on Firefox), click on your standalone form and check that you can find the "Phone" field in there. Another way is to clear the "Field container attributes" setting and ensure that you can see the field, then add the setting back and see the field disappear again.

Even if you're able to reduce the flow of spam submissions into Mautic with CAPTCHAs and honeypots, you still probably need to clean up those that got through. For campaign forms you can use conditional logic to remove forms submitted by bots. There are at least two ways to accomplish this:

Another thing to do is to handle bounces from real and invalid email addresses or your outbound email provider will eventually block you. In fact, SMTP providers like Amazon SES require you to have a plan for handling bounces and complaints or they won't let you out of the development sandbox.

This article will be updated with more information as we work our way through this annoying but necessary step in Mautic configuration.

I wrote this article to better understand how all the pieces called "AD" or "Active Directory" in Microsoft Azure fit together. The pieces are as follows:

Until quite recently it was not possible to join a Windows instance to Azure AD directly. That is, you needed one of the following:

As of November 11, 2021 it became possible to directly register or join devices such as Windows VMs to Azure AD. Joining is limited to recent versions of Windows 10 and to Windows Server 2019 or later. Registering a device to AAD is more suitable for personal (non-corporate) devices that the user, typically an employee, manages themselves. Joining a device to AAD is mainly meant for corporate-owned devices in cases where an on-premise AD is not available (e.g. a small branch office) or where the vast majority of corporate services are cloud-based and having an on-premise AD does not make much sense.

The easiest way to join a VM to Azure AD is to just enable Azure AD login when creating the VM. However, it is also possible to join existing VMs to Azure AD by provisioning a virtual machine extension (installable using the Azure Cloud Shell) and by enabling system-assigned managed identity in the VM's identity settings. For debugging, use dsregcmd.exe and check Azure AD's Devices section to ensure that your device (VM) is there and shows as joined.
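
As an example, attaching the Azure AD login extension to an existing Windows VM with the Azure CLI looks roughly like this (the resource group and VM names are placeholders):

```shell
# Illustrative sketch: install the AAD login extension on an existing VM
az vm extension set \
    --publisher Microsoft.Azure.ActiveDirectory \
    --name AADLoginForWindows \
    --resource-group myResourceGroup \
    --vm-name myVM
```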

Things get a bit interesting when you use Azure AD with Azure AD Domain Services. Based on practical testing you can join some Windows devices directly to Azure AD and join some via Azure AD DS. The latter devices will not show up in the "Devices" section in Azure AD, which makes sense given that Azure AD DS syncs one-way from Azure AD. The domain join methods are also different: Azure AD DS supports the classic on-premise style domain join, whereas direct Azure AD domain join process is completely different. Trying to use classic domain join with Azure AD fails because Azure AD does not expose DNS SRV records that classic join depends on.

Note that by default you can RDP to Azure AD-joined devices (e.g. Azure VMs) only from devices that are, at minimum, registered to Azure AD - look here for details. It is, however, possible to work around this limitation by disabling the requirement for network level authentication on the Azure AD-joined Windows machine. This may be acceptable if you, say, only use RDP through a VPN. Instructions for doing this on Windows RDP clients are here, and for Linux (Remmina) clients here. In addition, you need to grant the Virtual Machine User Login or Virtual Machine Administrator Login IAM role to users who need to be able to RDP in to the VM.

Overall, it seems like Azure Active Directory is slowly replacing on-premise Active Directory and Active Directory Federation Services, and tools like Azure AD Domain Services and Azure AD Connect are just aids for the migration to Microsoft's cloud-based SaaS offering.

I'll start with a spoiler: what the title suggests is not possible. It is, however, possible to accomplish this with cloud-init and Terraform templates as described in the Multi-part cloud-init provisioning with Terraform blog post.

If you need to use SSH/WinRM provisioning, then there are various workarounds you can apply, and this article explains some of them and goes into depth into one in particular.

The first workaround is relatively clean, but is not usable in all cases, especially when working with complex modules. The idea is to have two resources, here of type aws_instance. Those resources are identical except for the provisioner blocks:

variable "provision" {
  type = bool
}

resource "aws_instance" "provisioned_instance" {
  count = var.provision ? 1 : 0
  --- snip ---

  provisioner "remote-exec" {
    --- snip ---

resource "aws_instance" "non_provisioned_instance" {
  count = var.provision ? 0 : 1
  --- snip ---

Depending on what value you give var.provision Terraform will create either aws_instance.provisioned_instance or aws_instance.non_provisioned_instance. This approach works ok, but may get you into trouble if any other resources depend on those aws_instance resources (as they may be present or not). It also duplicates most of the code which may introduce bugs.
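
For example, if another resource needs the instance ID, it has to account for the fact that only one of the two resources exists. A sketch of one way to handle that:

```hcl
# Pick the ID of whichever instance was actually created
locals {
  instance_id = var.provision ? aws_instance.provisioned_instance[0].id : aws_instance.non_provisioned_instance[0].id
}
```

Every downstream reference needs this kind of conditional plumbing, which is one reason this workaround scales poorly.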

The second workaround is less complete, but works more easily with complex modules, and does not change the title of the resource. The idea is to use the remote-exec provisioner blocks but replace the provisioning commands on the fly. First you define a bool value which tells whether to run the provisioning scripts or not:

variable "provision" {
  type = bool
}

Then you define, as a local, the commands you'd like to run when provisioning is enabled:

locals {
  provisioner_commands = ["/do/something", "/do/something/else"]
}

If you don't use any variables in your provisioning commands you can put the commands into a variable as well. This is because variables can't contain variable interpolations, whereas locals can.
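
To illustrate the difference (var.app_name is a hypothetical variable used only for this example):

```hcl
# Invalid: a variable's default must be a literal value, so this
# fails with "Variables not allowed":
#
#   variable "provisioner_commands" {
#     default = ["/deploy/${var.app_name}"]
#   }

# Valid: locals can interpolate other values freely
locals {
  provisioner_commands = ["/deploy/${var.app_name}"]
}
```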

Now define the instance:

resource "aws_instance" "provisioned_instance" {
  --- snip ---

  provisioner "remote-exec" {
    inline = concat(["echo Provisioning"], [for command in local.provisioner_commands: command if var.provision])
  }
}

What this basically does is create a new list based on local.provisioner_commands, filtering out every entry if var.provision is not true. As the inline parameter can't be empty, a dummy command is added to the list to keep Terraform happy.
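
Concretely, with the locals defined earlier, inline evaluates to one of the following two lists:

```hcl
# var.provision = true:
#   inline = ["echo Provisioning", "/do/something", "/do/something/else"]
#
# var.provision = false:
#   inline = ["echo Provisioning"]
```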

I should note that this approach does not work with other provisioner types (e.g. file), but may still be an acceptable workaround in many cases.

The traditional way of managing systems with Puppet is to install Puppet agent on the nodes being managed and point those agents to a Puppet server (more details here). This approach works well for environments with tens or hundreds of nodes, but is overkill for small environments with just a handful of nodes. Fortunately, nowadays Puppet Bolt enables you to push configurations to the managed nodes via SSH and WinRM. When using Bolt the catalog (desired state) is created on the computer running Puppet Bolt using facts from the managed nodes and then sent to the managed nodes to be applied.

While the configuration management use-case with Puppet Bolt is clearly supported, there is not much documentation available on how to use it with the well-tested control repository paradigm, so this article aims to fill that void. I've written two articles that touched on this topic earlier, but as Puppet Bolt develops really fast they have already become more or less outdated:

Let's start with the layout of the control repository which is very typical:

├── bolt-project.yaml
├── data
│   ├── common.eyaml
│   ├── common.yaml
│   └── nodes
│       └── mynode.yaml
├── hiera.yaml
├── inventory.yaml
├── keys
│   ├── private_key.pkcs7.pem
│   └── public_key.pkcs7.pem
├── plans
│   └── configure.pp
└── site
    ├── profile
    │   └── manifests
    │       └── unixbase.pp
    └── role
        └── manifests
            └── generic.pp

The first thing you need is bolt-project.yaml. We'll go through it in pieces:

name: acme
concurrency: 3
format: human

A simple project file like this works well if you're not using Puppet modules, e.g. if you're just running tasks or using plans for orchestration. However, when using roles and profiles you probably want to place them in the site subdirectory, so you need to add that to the modulepath:

modulepath:
  - site

Bolt's default modules directory is .modules and it does not have to be added to the modulepath explicitly.

The final thing you need in bolt-project.yaml is definition of external Puppet modules you wish to use. For reasons unknown Puppet Bolt nowadays uses bolt-project.yaml to define module dependencies. Bolt uses that information to download the modules and to dynamically manage a Puppetfile with equivalent contents. This does not work particularly well in an environment where some nodes are managed by a Puppet server + agents and some by Puppet Bolt, but in this article's use-case that does not matter. The module dependency definition in bolt-project.yaml looks like this:

modules:
  - name: puppetlabs/concat
    version_requirement: '7.1.1'
    resolve: false
  - name: puppetlabs/firewall
    version_requirement: '3.3.0'
    resolve: false
  - name: puppetfinland-packetfilter
    git: ''
    ref: 'cb3ca18ebbce594bd7e527999390c0137daf3a57'
    resolve: false
  - name: puppetlabs/stdlib
    version_requirement: '8.1.0'
    resolve: false
  - name: saz-timezone
    version_requirement: '6.1.0'
    resolve: false

Note how resolve: false is set for every module. This is based on bad experiences (with librarian-puppet) with automatic dependency resolution in conjunction with Puppet modules. To be more exact, the dependencies defined in Puppet modules' metadata.json are usually not up-to-date or even correct, which will inevitably lead to dependency conflicts you need to work your way around. For details on the modules setting syntax see the official documentation.

Once you've created bolt-project.yaml it's time to move on to another important file, inventory.yaml. Here's a really simplistic sample:

targets:
  - uri: ''
    name: mynode
config:
  transport: ssh
  ssh:
    host-key-check: false
    user: root

It defines just one target to connect to using an IPv4 address and gives it a symbolic name ("mynode") that can be used on Puppet Bolt command lines when defining targets. On top of that, some defaults are set for SSH connections. This kind of static inventory works ok if you only have a handful of nodes, especially when you group your targets. In bigger environments you probably want to use inventory plugins to create inventories dynamically instead of having to keep them up to date yourself.
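
For reference, a grouped inventory might look like this (group and host names are made up for illustration):

```yaml
groups:
  - name: webservers
    targets:
      - web1.example.com
      - web2.example.com
  - name: dbservers
    targets:
      - db1.example.com
config:
  transport: ssh
```

Targets can then be addressed per group, e.g. with -t webservers on the Bolt command line.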

The next step is to configure hiera.yaml because we want to separate code from data. Here's a very simple version that supports decrypting secrets with hiera-eyaml:

version: 5

defaults:
  datadir: data

hierarchy:
  - name: "Sensitive data (passwords and such)"
    lookup_key: eyaml_lookup_key
    paths:
      - "nodes/%{trusted.certname}.eyaml"
      - "common.eyaml"
    options:
      pkcs7_private_key: keys/private_key.pkcs7.pem
      pkcs7_public_key:  keys/public_key.pkcs7.pem

  - name: "Normal data"
    data_hash: yaml_data
    paths:
      - "nodes/%{trusted.certname}.yaml"
      - "common.yaml"

Make sure you copy your eyaml keys into the keys directory in the Bolt project root.

Now you can set up your profiles and roles. Here's an example of extremely minimalistic profile (site/profile/manifests/unixbase.pp):

# @summary base classes included on all nodes
class profile::unixbase {
  $timezone = lookup('profile::unixbase::timezone', String)

  class { '::timezone':
    timezone => $timezone,
  }
}

For this to work you need to have data in data/common.yaml:

profile::unixbase::timezone: 'Etc/UTC'

If you have any secret data you can put it into the eyaml files just like you would normally.

Next you need a role that includes the unixbase profile. Here we use a generic role (site/role/manifests/generic.pp):

# @summary dummy role used for demonstration purposes
class role::generic {
  contain profile::unixbase
}

In a real environment you'd include profiles that actually configure the node to do something useful.

Next our node (mynode) needs to include role::generic. With the hiera.yaml defined above you'd add data/nodes/mynode.yaml with contents like this:

classes:
  - role::generic

In a Puppet server + agent architecture this would be all you need to do. However, in the case of Puppet Bolt you need one more step: add a Bolt plan (plans/configure.pp) that glues everything together:

plan acme::configure (
  TargetSpec $targets
) {
  # Install Puppet agent on target(s) so that Puppet code can run
  apply_prep($targets)

  # Apply Puppet code in parallel on all defined targets
  $apply_result = apply($targets) {

    # Look up the classes array of this node from Hiera to figure out which classes
    # should be included on this particular node.
    $classes = lookup('classes', Optional[Array[String]], 'first', undef)

    # Do not fail if no classes are found
    if $classes {

      # Include each class that was found
      $classes.each |$c| {
        include $c
      }
    }
  }
}

As you can see, there's no need to create separate Bolt plans for each role.

Now, before you can apply any code with Puppet you need to download the Puppet modules defined in bolt-project.yaml:

bolt module install --force

The --force option tends to be required, as Bolt apparently tries to co-exist with manually managed Puppetfiles.

Now you can run the plan to configure your nodes:

bolt plan run -v acme::configure -t all

The -v option enables you to see what changes Puppet makes on the target nodes.

Finally you should configure .gitignore so that you don't accidentally version things like eyaml keys:

keys/
Puppetfile
.modules/
Note how Puppetfile is not versioned: as mentioned above Puppet Bolt manages it dynamically and we don't need/want to version it.

For more information on Puppet and Infrastructure as Code see these articles:

In Bitbucket usernames are unique across the whole of Bitbucket. Moreover, the same SSH key can only be configured for one user. If you registered your Bitbucket account using a corporate email and used your primary SSH key with it, you're pretty much hosed if you then need to create another corporate Bitbucket account and want to use the same SSH key with it - that's just not going to happen. The solution to this dilemma is to create another SSH keypair and make SSH, Git and Bitbucket co-operate. As the correct solution was not immediately obvious, it is documented here.

The first step is to create a new SSH keypair:

$ ssh-keygen -t ed25519 -f ~/.ssh/john_company_b

NOTE: RSA keys won't work with Bitbucket.

Once you have the key, add entries for all your Bitbucket accounts in ~/.ssh/config. The Host aliases below are examples; pick whatever names you like:

Host bitbucket.org-company-a
     HostName bitbucket.org
     User git
     IdentityFile ~/.ssh/john_company_a
     IdentitiesOnly yes

Host bitbucket.org-company-b
     HostName bitbucket.org
     User git
     IdentityFile ~/.ssh/john_company_b
     IdentitiesOnly yes

Now comes the important part: you need to change the remote URLs so that Git knows which of these Host sections to use for each Git repository.

To clone a repository from "Company A" (repository paths here are illustrative):

git clone git@bitbucket.org-company-a:company-a/some-repo.git

To clone a repository from "Company B":

git clone git@bitbucket.org-company-b:company-b/some-repo.git

If you tried to just use

git clone git@bitbucket.org:company-a/some-repo.git

you would get Forbidden unless Git happened to use the correct SSH key by default.

The Azure VPN Gateway supports the OpenVPN protocol (except the "Basic SKU"). Unlike, for example, the commercial Access Server, the VPN Gateway does not have a built-in certificate authority (CA) tool for managing client certificates. And client certificates are essentially a requirement if you need to support clients other than Windows and Mac, such as Linux, iOS or Android. The use of easy-rsa-old as a CA for Azure VPN Gateway is documented quite well in this blog post, but this post is about using the newer easyrsa3 instead. Basic familiarity with easyrsa3 is assumed, so we just go through the steps you'd normally take.

First clone easyrsa3:

git clone https://github.com/OpenVPN/easy-rsa.git

Then initialize the PKI and build the CA:

cd easy-rsa/easyrsa3
./easyrsa init-pki
./easyrsa build-ca

We recommend giving the CA key a strong password. The next step is to create client certificates for everyone who needs one. Here we do not protect the client's private key with a password, but feel free to do otherwise:

./easyrsa build-client-full john.doe nopass 

Once you have all the client certificates and keys available you need to add the CA certificate to Azure VPN Gateway. Navigate to Azure Portal -> Virtual Network Gateways -> <name of your gateway> -> Point-to-site configuration, then add a new Root certificate:

Once the public certificate has been uploaded to the VPN Gateway download the client configuration ZIP file. This can be done from the VPN Gateway's Point-to-site configuration in Azure Portal by clicking on Download VPN client.

Once you have the ZIP file extract it. Under the OpenVPN directory you will find a file called vpnconfig.ovpn. That is an OpenVPN configuration file, but not a fully functional one because the client certificate and private key are missing. Open the file and go to the <cert></cert> and <key></key> sections, remove the placeholder variables and insert the certificate (e.g. easy-rsa/easyrsa3/pki/issued/john.doe.crt) and key (e.g. easy-rsa/easyrsa3/pki/private/john.doe.key) there. Make sure you do not replace the certificate in the <ca></ca> section with ca.crt from easyrsa3: Azure expects that the original CA certificate is retained and connections will fail if you remove it.
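If you need to produce many client configs, the copy-pasting can be scripted. Here's a rough sketch (the file names are examples): it replaces only the placeholder lines inside the <cert> and <key> sections and leaves the <ca> section untouched:

```shell
# Sketch: splice a client certificate and key into Azure's vpnconfig.ovpn.
# Adjust CRT/KEY to your own easyrsa3 paths.
CRT=pki/issued/john.doe.crt
KEY=pki/private/john.doe.key

# easyrsa-issued .crt files contain human-readable text before the PEM
# block; extract just the PEM part for embedding
openssl x509 -in "$CRT" -out client.pem

# Print the file, inlining the cert/key right after their opening tags
# and skipping the placeholder lines up to the matching closing tags
awk -v crt=client.pem -v key="$KEY" '
  /<cert>/   { print; while ((getline l < crt) > 0) print l; skip=1; next }
  /<\/cert>/ { skip=0 }
  /<key>/    { print; while ((getline l < key) > 0) print l; skip=1; next }
  /<\/key>/  { skip=0 }
  !skip      { print }
' vpnconfig.ovpn > client.ovpn
```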

You may want to disable the "log" parameter if you're debugging or using systemctl (on Linux) to connect to the VPN Gateway: if you leave it enabled, you will have to look at the log file to figure out what is happening.

We use Zimbra as our main email server. We also have Office 365 subscription to make working with our clients a bit easier. The challenge is that when customers send us, say, Teams meeting invites, they typically use autofill and the email gets sent to our Office 365 mailboxes which nobody really looks at.

It is possible to fix this by configuring email forwarding, which is theoretically possible out of the box on a per-user basis. However, when you actually try it out you will get an error message in your Office 365 inbox, because forwarding is disabled on the organizational level by default:

Remote Server returned '550 5.7.520 Access denied, Your organization does not allow external forwarding. Please contact your administrator for further assistance. AS(7555)'

Most of the solution is described in this blog post, but we'll go through the steps here as well.

The solution is to set up a new outbound anti-spam policy in Microsoft 365 Defender. You can allow forwarding for individual users, groups or the entire domain. Keeping the scope minimal is the best option security-wise. That said, when you try to add the anti-spam policy you may get blocked because your Office 365 organization does not allow customization, with the hint to run the Enable-OrganizationCustomization cmdlet to fix the issue. Doing that is easier said than done, so the steps are outlined below. Windows Powershell is assumed, though the commands might work on Powershell Core as well.

First you need to connect to Exchange Online. Doing that is a multi-step process. First you install a custom Powershell module:

Install-Module -Name ExchangeOnlineManagement -RequiredVersion 2.0.5

Then you import it:

Import-Module ExchangeOnlineManagement

Finally you can actually connect to your Office 365 "Exchange Online" organization, e.g.

Connect-ExchangeOnline -UserPrincipalName

Then you can enable organization customization:

Enable-OrganizationCustomization
Now you should be able to create the outbound spam policy, which in turn allows users to enable email forwarding and expect it to work.