The Keycloak authorization services REST API lets you programmatically manage your fine-grained authorization policies. Photo credit: https://www.pexels.com/it-it/foto/foto-del-primo-piano-del-recinto-del-collegamento-a-catena-3605822/

What does it do, this Keycloak thing?

Dear seasoned keycloaker, as you probably know, Keycloak is a stable, scalable, programmable and otherwise killer platform for centralizing all your identity, authentication and authorization needs. Keycloak supports fine-grained authorization policies through Keycloak authorization services. We highly recommend reading our Keycloak authorization services terminology article before this one: the purpose of this article is to show how to work with the Keycloak authorization services REST API, and if you don't understand the terms, you will have a hard time ahead. To be perfectly honest, there is no separate Keycloak authorization services REST API: it is the same old Keycloak Admin REST API, just a particularly lightly documented part of it, pun intended.

Commercial support from Red Hat?

If you are on the business side of things and worry about support, continuity, standardization, compliance, SLAs and such (you should), Red Hat Single Sign-On is the supported version of Keycloak. Unfortunately it is not available as a standalone product: you need to get it as part of a Red Hat Runtimes or Red Hat OpenShift Container Platform subscription. For further details, contact us or Red Hat.

What is it good for?

Keycloak supports standard protocols such as OpenID Connect, OAuth 2.0 and SAML 2.0.

You might use it for single sign-on, identity brokering, social login, user federation and fine-grained authorization, among other things.

With all the features and power comes the cost of complexity, but Keycloak has real design to keep that complexity in shape. It also has a pretty WebUI (at least in the latest versions) that lets you handle the tasks suited for a human with ease.

Assuming things

Every technical blog will be obsolete tomorrow. Without specifying what is assumed, people might waste their precious time and feel bad. We're nice and happy people, and we don't want that. Therefore we assume that:

From clicking to code

For things that we humans would really not like to perform, usually repetitive tasks, they invented programming languages and REST APIs. Some then went further and used programming languages to invent infrastructure-as-code tools to maintain a desired state for your whole fleet, and some, ahem, then automate all kinds of things with those tools (Terraform, Ansible, Puppet, anyone?). Talking to the API with a programming language, however, is what we will cover here. If you are the infamous clicking operator in your company, and actually enjoy your work, perhaps this is not for you.

The language we use here is Ruby, and the gems we need are HTTParty and json. Ruby is a pretty language and I like pretty languages. The goal is to add a new permission, a scope-based permission to be precise, to already existing shared resources. As noted, this is not very enjoyable with thousands of resources in the WebUI, unless you took an overdose of steroids and built inhuman clicking muscles, or learned to do RPA as an ugly workaround. Yes, we can also build that robot for you, if you absolutely require it. However, Keycloak is all about APIs, so let's leave them UIs alone.

Set things up for Keycloak authorization services REST API

First we need to prepare to talk to the Keycloak authorization services REST API. In your master realm's admin-cli client, enable service accounts and authorization, and set the client access type to confidential:


Then, under the Credentials tab, set the Client Authenticator to "Client Id and Secret" and copy the Secret.

Now we have the capability to talk to the Keycloak authorization services REST API. In production, thou shalt not use the master realm and the admin-cli client; thou shalt disable them. But we're working with a test instance here, one we can re-create in an instant, right? And we want all the power and torque.
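
Before any code, you can smoke-test the credentials with curl and jq; this is essentially the same token call our Ruby client makes below (host and secret are placeholders):

$ curl -s \
    -d 'grant_type=client_credentials' \
    -d 'client_id=admin-cli' \
    -d 'client_secret=<your-secret>' \
    https://keycloak.example.com/auth/realms/master/protocol/openid-connect/token | jq -r '.access_token'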

Let's do some Keycloak authorization services REST API

Some class action

Ok, now for the code. Let's get our class started:

#!/usr/bin/env ruby

require 'httparty'
require 'json'
require './config'

class KeycloakClient
  include HTTParty

  attr_accessor :token

This is pretty obvious: we get the configuration values by using a Config class from the file config.rb. That class reads the values from an ini-style file that we initialize it with.

Let's initialize

First let's initialize the wanna-be instance with the server and authentication details, and get a token. Using an ini file makes it easy to stay DRY and to switch between instances:

def initialize(server_url, grant_type, client_id, client_secret)
  @server_url = server_url
  @auth = { grant_type: grant_type, client_id: client_id, client_secret: client_secret }
  @token = get_token
end

Everyone needs a token

Here’s the method to obtain and store the token:

def get_token
  response = self.class.post("#{@server_url}/auth/realms/master/protocol/openid-connect/token",
                             body: {
                               grant_type: @auth[:grant_type],
                               client_id: @auth[:client_id],
                               client_secret: @auth[:client_secret]
                             },
                             headers: { 'Content-Type' => 'application/x-www-form-urlencoded' })
  response.parsed_response['access_token']
end

Now, what do we actually need to do?

Getting a token is nice, but not very interesting. For meaningful operations, we need to do a couple of things:

  1. Convert our target client name to an internal id
  2. Search and get a list of existing resources on the client, based on type
  3. Retrieve the hash of the scope that we want to add to the resources
  4. Retrieve a resource hash based on name
  5. Update the current resources

What's the client's id?

After instantiating the class and getting the token, we need the id of the target client where the resources are located. Here's a method to get that based on its name:

def get_client_id_by_name(realm, client_name)
  response = HTTParty.get("#{@server_url}/auth/admin/realms/#{realm}/clients",
                          headers: { 'Authorization' => "Bearer #{@token}" })
  client = JSON.parse(response.body).find { |c| c['clientId'] == client_name }
  client['id'] if client
end

What are the existing resources?

With the following methods, we build an array of our existing resources of a certain type (type is a completely arbitrary string).

# returns array
def find_by_type(array, type)
  array.select { |hash| hash['type'] == type }
end

# returns array
def get_resources_by_type(realm, client_id, type)
  response = HTTParty.get("#{@server_url}/auth/admin/realms/#{realm}/clients/#{client_id}/authz/resource-server/resource?deep=false&first=0&max=1000",
                          headers: { 'Authorization' => "Bearer #{@token}" })
  find_by_type(response.parsed_response, type)
end

What does the scope look like as a hash?

For building the final payload, we need to retrieve the new scope as a hash:

# returns hash
def get_scope_hash_by_name(realm, client_id, scope_name)
  response = HTTParty.get("#{@server_url}/auth/admin/realms/#{realm}/clients/#{client_id}/authz/resource-server/scope?deep=false&first=0&max=1000",
                          headers: { 'Authorization' => "Bearer #{@token}" })
  JSON.parse(response.body).find { |c| c['name'] == scope_name }
end
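
One more lookup is needed: the update loop further below calls get_resource_hash_by_name, which this article otherwise omits. A minimal sketch, mirroring get_scope_hash_by_name and assuming the resource endpoint includes scopes when queried with deep=true, might look like this:

# returns hash
def get_resource_hash_by_name(realm, client_id, resource_name)
  response = HTTParty.get("#{@server_url}/auth/admin/realms/#{realm}/clients/#{client_id}/authz/resource-server/resource?deep=true&first=0&max=1000",
                          headers: { 'Authorization' => "Bearer #{@token}" })
  # client-side filtering, just like the scope lookup above
  JSON.parse(response.body).find { |r| r['name'] == resource_name }
end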

How do I feed the server?

And finally we need a method to actually shoot our payload. This is also where the class ends.

def update_resource(realm, client_id, payload, resource_id)
    response = HTTParty.put("#{@server_url}/auth/admin/realms/#{realm}/clients/#{client_id}/authz/resource-server/resource/#{resource_id}",
                            headers: {
                              "Authorization" => "Bearer #{@token}",
                              "Content-Type" => "application/json"
                            },
                            body: payload.to_json
                           )
  end
end # class ends

Where to get the parameters?

To use these methods, some parameters need to be initialized. As mentioned, I use a config class that gets its values from an ini file:

config = Config.new('./config.ini')
realm = config.keycloak_target_realm
client_name = config.keycloak_target_client
server_url = config.keycloak_server_url
username = config.keycloak_username
password = config.keycloak_password
grant_type = config.keycloak_grant_type
client_id =  config.keycloak_client_id
client_secret = config.keycloak_client_secret
target_resource_type = config.keycloak_target_resource_type
new_scope_name = config.keycloak_new_scope_name
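
For reference, the ini file behind the Config class might look something like this; every value is a made-up placeholder and the Config class itself is not shown in this article:

[keycloak]
server_url = https://keycloak.example.com
grant_type = client_credentials
client_id = admin-cli
client_secret = <secret-from-the-credentials-tab>
username =
password =
target_realm = myrealm
target_client = my-resource-server
target_resource_type = document
new_scope_name = scopes:share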

Create a new client instance

After these preparations, we create a shiny new instance:

client = KeycloakClient.new(server_url, grant_type, client_id, client_secret)

Talk to the right client

Get the target client id:

target_client_id = client.get_client_id_by_name(realm, client_name)

Collect the existing resources

Build an array of existing resources that match the type you want:

resources = client.get_resources_by_type(realm, target_client_id, target_resource_type)

What does the scope look like as a hash?

Get the new scope as a hash:

new_scope_hash = client.get_scope_hash_by_name(realm, target_client_id, new_scope_name)

Feeding time

Now we can finally do some real work.

We iterate over our existing shared resources and build an acceptable payload. Then load it to the server. This payload will associate the new scope with the resources of the type you wanted:

resources.each do |resource|
  payload = {}
  resource_name = resource['name']
  h = client.get_resource_hash_by_name(realm, target_client_id, resource_name)
  if h.has_key?('scopes') and h['scopes'].any? { |e| e['name'] == new_scope_name }
    puts("resource '#{resource['name']}' already has the scope '#{new_scope_name}', skipping...")
    next
  end
  payload['name'] = h['name']
  payload['type'] = h['type']
  payload['owner'] = h['owner']
  payload['ownerManagedAccess'] = h['ownerManagedAccess']
  payload['displayName'] = h['displayName']
  payload['_id'] = h['_id']
  payload['uris'] = h['uris']
  current_scopes = h['scopes'] || []
  updated_scopes = current_scopes + [new_scope_hash]
  payload['scopes'] = updated_scopes
  print("updating resource '#{resource_name}'...")
  response = client.update_resource(realm, target_client_id, payload, h['_id'])
  if response.code == 200 || response.code == 201 || response.code == 204
    puts "ok"
  else
    puts "failed, got #{response.message}"
  end
end

Good luck!

Run it and (hopefully) enjoy your new scope-based permission to further authorize your resources (or theirs, with UMA). Of course you can improve from here. This was meant to be a short demonstration of how to use the Keycloak authorization services REST API to manage Keycloak authorization resources. There are many other ways to use the Keycloak APIs for all kinds of things, perhaps to build killer Puppet types and providers, Ansible modules, or Terraform providers. That's pretty much what we do here: automate anything that moves and make stuff behave, for fun and profit.

The methods here are mostly the result of tcpdumping the WebUI traffic and experimenting; the documentation does not always cover everything necessary. I'm sure there are other, perhaps funnier and/or more elegant ways to do this. If you know about cool Keycloak stuff, tell us; if you don't, don't. The following might be useful in your RESTing practices with Keycloak. What it does is shamelessly left as an internet search exercise for the reader:

tcpdump -vvv -ni lo port 8080 and 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x50555420' or 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504F5354'

Computers were supposed to relieve us humans from boring and repetitive jobs. Here we turn that upside down and do the boring and repetitive job of a computer ourselves, by importing Cloudflare DNS records to Terraform by hand. Not fun, but someone's gotta do it sometimes. If you're reading this, that someone is probably you. Condolences. My hope is that someone lazy enough gets bored enough to write a program to do this. I didn't. Yet.

To import a DNS record to Terraform, you need to create yourself a Cloudflare API token and (to make this a bit less awful) also an API auth key. Both can be created or obtained from your Cloudflare profile; check the Cloudflare documentation for more info.

The generic form of importing a record from Cloudflare is:

$ terraform import cloudflare_record.default <zoneid>/<recordid>

Now where do you get the zone id? Here's one way, using their API (you obviously need curl and jq):

#!/bin/bash

if [ $# -ne 0 ]; then
    echo "usage: $(basename $0)"
    exit 1
fi

BASURL="https://api.cloudflare.com/client/v4/zones"
EMAIL="[email protected]"
AUTHKEY="yourapiauthkey"

zones=$(curl -s -X GET "${BASURL}" \
  -H "X-Auth-Email: ${EMAIL}" \
  -H "X-Auth-Key: ${AUTHKEY}" \
  -H "Content-Type: application/json" | jq '.result[].name' | tr -d '"')

for zone in $zones; do

  echo -n "$zone: "

  id=$(curl -s -X GET "${BASURL}?name=${zone}" \
  -H "X-Auth-Email: ${EMAIL}" \
  -H "X-Auth-Key: ${AUTHKEY}" \
  -H "Content-Type: application/json" | jq '.result[].id')

  echo $id

done

Fill in the obvious missing information, save it and run it. It will list all your Cloudflare zones and their ids.

To get an id of an existing record in Cloudflare:

#!/bin/bash

if [ $# -ne 2 ]; then
    echo "usage: $(basename $0) recordname type"
    exit 1
else
    NAME=$1
    TYPE=$2
fi

BASURL="https://api.cloudflare.com/client/v4/zones"
ZONEID="yourzoneid"
QUERY="?name=${NAME}&type=${TYPE}"
EMAIL="[email protected]"
AUTHKEY="yourapiauthkey"

curl -s -X GET "${BASURL}/${ZONEID}/dns_records${QUERY}" \
     -H "X-Auth-Email: ${EMAIL}" \
     -H "X-Auth-Key: ${AUTHKEY}" \
     -H "Content-Type: application/json" | jq '.result[] | .id'

Name the script something like get_recordid and use it by specifying a name and a type, like this:

$ ./get_recordid www.example.com A

Here we assume the record is "www.example.com". Now, to import a record:

  1. Look up the name in the Cloudflare portal
  2. Look up the id of the record with the above script
  3. Create a file named import.tf with the content:

resource "cloudflare_record" "www" {}

And do the import:

$ terraform import cloudflare_record.www <zoneid>/<recordid>

If you didn’t screw up, the record is now in your terraform state, hopefully a remote state. To show the contents of the resource:

$ terraform state show cloudflare_record.www

Copy the content to your preferred file and carve off extra fat to make it look like:

resource "cloudflare_record" "www" {
  name    = "www"
  proxied = true
  type    = "A"
  value   = "127.0.0.1"
  zone_id = cloudflare_zone.yourzone.id
}

Remove the initial resource from import.tf. You should now have it properly imported.

$ terraform plan

should not want to make any changes. That's it. Repeat for every record like a good computer. Have "fun".
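
If you get bored enough, a tiny wrapper along these lines can at least loop over a list of records for you; records.txt and the resource names are hypothetical, and each resource block must already exist in your configuration:

#!/bin/bash
# Each line in records.txt: <terraform-resource-name> <zoneid> <recordid>
while read -r name zoneid recordid; do
  echo -n "importing ${name}: "
  terraform import "cloudflare_record.${name}" "${zoneid}/${recordid}"
done < records.txt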


Every now and then, the need arises to use the content of a file as a variable on an agent node. Here's one way to do it with the help of a custom fact.

First create a custom fact on the puppet server:

Facter.add(:my_file_content) do
  setcode do
    path = "/path/to/my_file"
    File.read(path) if File.exist?(path)
  end
end

You can confine this to restrict it to be available only on selected nodes, for example:

Facter.add(:my_file_content) do
  confine :hostname => ['my_first_node', 'my_second_node']
  setcode do
    path = "/path/to/my_file"
    File.read(path) if File.exist?(path)
  end
end

Next use this custom fact to load the file content into a variable:

$my_variable = $facts['my_file_content']

You can now split(), join(), regexp(), strip() etc. it to your heart's content.
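
For example, a minimal sketch that splits the content into lines (fact and variable names as above):

# Split the file content on newlines and peek at the first line
$lines = split($facts['my_file_content'], '\n')
notice("first line: ${lines[0]}")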

That's it. As always, be sensible with the possible security implications.

Sometimes you might find yourself wondering whether there is some paranormal activity going on with your Keycloak and its database. To check that things are still in the realm of physical reality, and to restore your child's faith in the programmer who never makes mistakes, it might be soothing to check what's actually happening in the database. To do this, create a new property named showSql with a value of true in your connectionsJpa SPI. This is located in standalone.xml, standalone-ha.xml, or domain.xml (if you've already graduated from kindergarten).

<spi name="connectionsJpa">
    <provider name="default" enabled="true">
        <properties>
            <property name="dataSource" value="java:jboss/datasources/KeycloakDS"/>
            <property name="initializeEmpty" value="true"/>
            <property name="migrationStrategy" value="update"/>
            <property name="migrationExport" value="${jboss.home.dir}/keycloak-database-update.sql"/>
            <property name="showSql" value="true"/>
        </properties>
    </provider>
</spi>

Then stare at the content of your server.log as long as your eyeballs can take it.
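
Assuming a standalone install under $JBOSS_HOME, something like this keeps the staring focused; the grep pattern is just a starting point:

$ tail -f $JBOSS_HOME/standalone/log/server.log | grep -iE 'select|insert|update|delete'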

This article assumes that the user backend for Keycloak is FreeIPA; regardless, the instructions apply to any other setup with minor modifications. Here we use two different AWS accounts, renamed to 123412341234 and 567856785678 to protect the personal information of the innocent. The Keycloak staging cluster on which this integration was done is called id-staging.example.org and the production cluster is called id.example.org. The Keycloak realm is EXAMPLE.ORG. Replace those with the values correct for your environment.

But let's get on with the actual integration.

  1. Create a new SAML client in Keycloak. This client will be used for all AWS IAM accounts and their roles.

The ClientId should really be "urn:amazon:webservices", as this is the audience restriction in the user claim. Technically this is not necessary, but not using it is kind of hackish and may make things difficult in the future.

Enter the IdP Initiated SSO URL as "aws", or a name to your liking. This is an important setting, as AWS does not support Service Provider initiated logins, only IdP initiated logins. So your go-to place for all things AWS SAML is:

https://id-staging.example.org/auth/realms/EXAMPLE.ORG/protocol/saml/clients/aws

Then, still on the same client:

  • Ensure there are no roles in the Roles tab.
  • Ensure there is nothing in Assigned Default Client Scopes in the Client Scopes tab.
  • Ensure there are no Mappers defined in the Mappers tab.
  • Ensure Full Scope Allowed is Off in the Scope tab.

2. Create a new client scope aws_roles. This will include all the AWS roles that AWS requires, as well as mappers to give them the proper format. We do this formatting because the newest Keycloak versions do not allow semicolons in role names, but they do allow renaming roles, which we do with the help of Role Name Mappers. For every role (which in this case comes from the equivalent FreeIPA group), there needs to be a mapper.

3. Create the mappers for the three required attributes. If AWS finds anything other than these attributes, or errors in them (like a role in the wrong format), you will get an invalid response message and things will not work. The response must contain only these attributes, and they must be formatted correctly.

3.1 Session Role. The AWS attribute is https://aws.amazon.com/SAML/Attributes/Role. This will create a list of all the roles the user is permitted to assume:

3.2 Session Name. The SAML attribute is https://aws.amazon.com/SAML/Attributes/RoleSessionName. This will create the username for AWS:

3.3 Session Duration. The SAML attribute is https://aws.amazon.com/SAML/Attributes/SessionDuration. This will define the maximum time in seconds, that the user session is allowed to persist.

4. Create a FreeIPA group for the specific AWS IAM role in a specific AWS account. Do this initially based on the current AWS account groups and their members. We recommend giving it a name of the form "aws-<account>-<role>", here "aws-qa-administrators". Add the users who will need to be able to assume this role as members. Do this for every required AWS IAM role. These will become realm-level roles in Keycloak:

5. Map the FreeIPA group to a Keycloak realm-level role, here "aws-qa-administrators". Navigate to Groups → aws-qa-administrators → Role Mapping and add the role "aws-qa-administrators" to Assigned Roles. Do this for every required AWS IAM role for which you created a FreeIPA group.

You can select the Members tab and ensure you have the correct people:

You can also check Users → <user> → Role Mappings → Effective Roles to see whether a certain user can correctly assume the roles he or she needs:

6. Go to Clients → <clientname> → Scope and assign all the realm roles prefixed with aws- to the client:

Now test with the Chrome SAML panel extension and verify that your claims are correct and that you can choose your role and log in using:

https://id-staging.example.org/auth/realms/EXAMPLE.ORG/protocol/saml/clients/aws

When these are imported in production, you of course need to go to:

https://id.example.org/auth/realms/EXAMPLE.ORG/protocol/saml/clients/aws

Notice how there are different roles in different AWS accounts:

<saml:AttributeStatement>

<saml:Attribute
  FriendlyName="Session Name"
  Name="https://aws.amazon.com/SAML/Attributes/RoleSessionName"
  NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
  <saml:AttributeValue
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="xs:string">
      john.doe
  </saml:AttributeValue>
</saml:Attribute>

<saml:Attribute
  FriendlyName="Session Duration"
  Name="https://aws.amazon.com/SAML/Attributes/SessionDuration"
  NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
  <saml:AttributeValue
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="xs:string">
      28800
  </saml:AttributeValue>
</saml:Attribute>

<saml:Attribute
  FriendlyName="Session Role"
  Name="https://aws.amazon.com/SAML/Attributes/Role"
  NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
  <saml:AttributeValue
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="xs:string">
      arn:aws:iam::123412341234:role/SAMLTestGroup,
      arn:aws:iam::123412341234:saml-provider/id-staging.example.org
  </saml:AttributeValue>

  <saml:AttributeValue
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="xs:string">
      arn:aws:iam::567856785678:role/SAMLAdministrator,
      arn:aws:iam::567856785678:saml-provider/id-staging.example.org
  </saml:AttributeValue>

  <saml:AttributeValue
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="xs:string">
      arn:aws:iam::123412341234:role/SAMLSysadmin,
      arn:aws:iam::123412341234:saml-provider/id-staging.example.org
  </saml:AttributeValue>

  <saml:AttributeValue
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="xs:string">
      arn:aws:iam::123412341234:role/SAMLDeveloper,
      arn:aws:iam::123412341234:saml-provider/id-staging.example.org
  </saml:AttributeValue>
</saml:Attribute>
</saml:AttributeStatement>

If you’re at all like me, you every now and then find yourself thrown out of your comfort zone, when you should actually be in it.

The pattern usually goes something like this:

  1. It's something simple. I'll fix it in a couple of minutes and document it for others. I know my stuff.
  2. Hmm, this seems to be a bit stubborn. I need to take a closer look.
  3. Still doesn't work. I need to run this in a debugging setup and break it into pieces to see where the problem is.
  4. I don't understand. Everything should be ok. It should just work. I've read everything about this, googled and used every debugging method and tool under the sun. It must be a bug.
  5. (After hours or days of banging my head against the wall.) This is a simple configuration error. I didn't know my stuff.

This happened to me (again) when trying to figure out why Apache Mellon refused to work and entered a redirect loop. Now, this is not unheard of with SAML setups, but with Mellon I had never encountered it. And I've set up Mellon many, many times.

When you set up Mellon to work with your IdP, you basically just export the IdP metadata, the SP metadata and the SP certificate and private key from Keycloak. Yes, we're talking about Keycloak and Mellon here. You then place the files in some directory and create a Mellon configuration file that refers to them. Here's a simple example:

<Location />
    Require valid-user
    AuthType Mellon
    MellonEnable auth
    MellonSPMetadataFile /etc/httpd/mellon/sp-metadata.xml
    MellonIdPMetadataFile /etc/httpd/mellon/idp-metadata.xml
    MellonSPPrivateKeyFile /etc/httpd/mellon/client-private-key.pem
    MellonSPCertFile /etc/httpd/mellon/client-cert.pem
</Location>

Here the MellonSPMetadataFile directive points to an SP metadata file. Let’s take a look:

<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" entityID="mellon.example.com">
    <SPSSODescriptor AuthnRequestsSigned="true" WantAssertionsSigned="false"
            protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol urn:oasis:names:tc:SAML:1.1:protocol http://schemas.xmlsoap.org/ws/2003/07/secext">
        <KeyDescriptor use="signing">
          <dsig:KeyInfo xmlns:dsig="http://www.w3.org/2000/09/xmldsig#">
            <dsig:X509Data>
              <dsig:X509Certificate><cert data></dsig:X509Certificate>
            </dsig:X509Data>
          </dsig:KeyInfo>
        </KeyDescriptor>
        <SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://mellon.example.com/mellon/myapp/postResponse"/>
        <NameIDFormat>urn:oasis:names:tc:SAML:2.0:nameid-format:transient
        </NameIDFormat>
        <AssertionConsumerService
                Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://mellon.example.com/mellon/myapp/postResponse"
                index="1" isDefault="true" />
    </SPSSODescriptor>
</EntityDescriptor>

What this SPSSODescriptor tells us, amongst other more or less esoteric things, is that this particular SP has an entityID of 'mellon.example.com' and that its AssertionConsumerService endpoint is found at https://mellon.example.com/mellon/myapp/postResponse. The AssertionConsumerService endpoint wants an HTTP POST request or it will feel bad. Cool.

This is so simple I do it with my left hand while playing Resident Evil Biohazard with the rest of my body. 

Or don't. Do you notice where I screwed up? Hint: Half of it is implicit.

The default value for the MellonEndPointPath directive is '/mellon/'. It's not in the Mellon configuration because it's, er, the default, but you can see it if you run Mellon with diagnostics enabled. (It's also in the documentation, but who reads those?) What I did here was to instruct my IdP to do a POST binding to the supposed endpoint at:

https://mellon.example.com/mellon/myapp/postResponse

But this is not the endpoint! Mellon has set it to be:

https://mellon.example.com/mellon/postResponse

What happens is that Keycloak authenticates a session and happily redirects to the endpoint given to it. Mellon sees a POST to /mellon/myapp/postResponse, refuses to co-operate and redirects to Keycloak. Keycloak sees the incoming GET and, after some introspection, finds out there's already a session and redirects to Mellon. Now we have a loop. The obvious fix is to pay attention to the SP metadata locations and fix them, or to tune MellonEndPointPath to match the intention.
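
For the latter option, a sketch: adding one directive to the Location block from before aligns Mellon's endpoint with what the metadata promised:

<Location />
    Require valid-user
    AuthType Mellon
    MellonEnable auth
    MellonEndPointPath /mellon/myapp
    MellonSPMetadataFile /etc/httpd/mellon/sp-metadata.xml
    MellonIdPMetadataFile /etc/httpd/mellon/idp-metadata.xml
    MellonSPPrivateKeyFile /etc/httpd/mellon/client-private-key.pem
    MellonSPCertFile /etc/httpd/mellon/client-cert.pem
</Location>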

The takeaway here is to always triple-check the most basic things about your Mellon setup. And do this when you are sober, have time to focus and feel otherwise energetic. My experience is that Mellon works quite well with Keycloak, including authorization. If you have weird problems, chances are it's something simple and you didn't pay attention. But that's life. Just pay attention.

Another takeaway is to use a configuration management system, like Puppet, to enforce a known working configuration. Manual setups of complex systems are slow and prone to all kinds of creative human errors; we don't wanna be creative, we wanna be industrial. Doing it by hand, you're wasting your time and probably someone's money. It's much better to develop and test a working configuration outside of production and then distribute and enforce it across the universe.

As always, if you have problems, feel free to contact us. If your problems are not in the computing domain, we probably cannot help. Sorry.

Grafana is a common tool for visualizing data from multiple datasources, perhaps the most common being Prometheus. If an organization has a Single Sign-On solution, it makes sense to authenticate users centrally with that solution. That will make authentication easier and friendlier for end users (authenticate once, then access multiple services), and also enable stronger authentication by adding multi-factor authentication.

In addition to authenticating users, it is also possible to authorize Grafana users based on their group memberships.

Grafana supports three organizational roles for authorization:

  1. Admin
    1. Can add, edit, and delete data sources.
    2. Can add and edit users and teams in their organization.
    3. Can add, edit, and delete folders containing dashboards for data sources associated with their organization.
    4. Can configure app plugins and organization settings.
    5. Can do everything allowed by the Editor role
  2. Editor
    1. Can view, add, and edit dashboards, panels, and alert rules in dashboards they have access to. This can be disabled on specific folders and dashboards.
    2. Can create, update, or delete playlists.
    3. Can access Explore.
    4. Can add, edit, or delete alert notification channels.
    5. Cannot add, edit, or delete data sources.
    6. Cannot manage other organizations, users, and teams.
  3. Viewer
    1. Can view any dashboard they have access to. This can be disabled on specific folders and dashboards.
    2. Cannot add, edit, or delete data sources.
    3. Cannot add, edit, or delete dashboards or panels.
    4. Cannot create, update, or delete playlists.
    5. Cannot add, edit, or delete alert notification channels.
    6. Cannot access Explore.
    7. Cannot manage other organizations, users, and teams.

Here we will use the excellent Keycloak identity and access management solution to centrally authenticate and authorize users using Single Sign-On.

Assumptions

And now for the configuration. Configure a new client in Keycloak:

  1. Access Keycloak using credentials with appropriate permissions
  2. Create the grafana client:
    • ID
    • Name (Optional)
    • Client Protocol: openid-connect
    • Root URL: <the URL of your Grafana instance>
    • Access Type: confidential
    • Standard flow enabled: ON
    • Direct Access Grants Enabled: ON
    • Root URL: ${authBaseUrl}
    • Valid Redirect URIs: <root url>/login/generic_oauth/*
    • Base URL: /login/generic_oauth
    • Clear Admin URL and Web Origins

Go to Credentials and copy the secret. You'll need it later.

Go to Client Scopes and create a new client scope 'groups-oidc':

Create a mapper of type 'Group Membership'

Associate the new client scope with your new client in Clients -> Your client -> Client Scopes: add 'groups-oidc' to Assigned Default Client Scopes.

Access your FreeIPA and create new groups for mapping into Grafana roles (you can of course use some existing ones):

Add your users to these groups as members based on their required permissions. Note that users not in these groups will be authorized as Viewers.
Access your Grafana server and edit grafana.ini. In the auth.generic_oauth section, specify this:

[auth.generic_oauth]
allow_sign_up = true
api_url = https://keycloak.example.com/auth/realms/YOURREALM/protocol/openid-connect/userinfo
auth_url = https://keycloak.example.com/auth/realms/YOURREALM/protocol/openid-connect/auth
client_id = grafana.example.com
client_secret = <your-client-secret>
disable_login_form = true
email_attribute_path = email
enabled = true
login_attribute_path = uid
name = Example SSO
name_attribute_path = uid
role_attribute_path = contains(groups[*], 'grafana_admins') && 'Admin' || contains(groups[*], 'grafana_editors') && 'Editor' || 'Viewer'
scopes = openid email profile
tls_skip_verify_insecure = false
token_url = https://keycloak.example.com/auth/realms/YOURREALM/protocol/openid-connect/token

The most important thing to notice here is that we have altered the JMESPath expression to look for groups in the token:

role_attribute_path = contains(groups[*], 'grafana_admins') && 'Admin' || contains(groups[*], 'grafana_editors') && 'Editor' || 'Viewer'

If configured correctly, Keycloak will provide these groups in the token.

Test the JWT token with your own username and password using the shell script below. Make sure you have set Direct Access Grants Enabled to ON in your client settings, or this will not work. Also ensure that you have the Chrome JWT extension installed and jq available, and fill in the right values for username, password, realm, client secret and client id.

#!/bin/sh
KEYCLOAK_USERNAME=<username>
KEYCLOAK_PASSWORD=<password>
KEYCLOAK_REALM=YOURREALM
KEYCLOAK_CLIENT_SECRET=<your client secret>
KEYCLOAK_CLIENT_ID=grafana.example.com
curl -s -v -k \
     -d "client_id=$KEYCLOAK_CLIENT_ID" \
     -d "client_secret=$KEYCLOAK_CLIENT_SECRET" \
     -d "username=$KEYCLOAK_USERNAME" \
     -d "password=$KEYCLOAK_PASSWORD" \
     -d "grant_type=password" \
     "https://keycloak.example.com/auth/realms/$KEYCLOAK_REALM/protocol/openid-connect/token" | jq -r '.access_token'

Run the script, copy the token and paste it into JWT debugger. You should now see:

"groups": [
  "grafana_admins",
],

among the other groups your user is a member of, in your decoded payload. Based on this group, the user will assume the corresponding role and permissions in Grafana.

So this is a way to both centrally authenticate and authorize your users from FreeIPA using the awesome Keycloak software. With slight modifications and testing you could make this work with other user federation providers, and even other IdPs. But really, why bother? We happily use these every day ourselves (also called eating our own dog food). This makes us better persons, although I'm not exactly sure how.

As always, we're both too busy and too lazy to configure these things by hand. Instead we set them up and ensure they stay up using Puppet automation. This is also our modest contribution to easing the pain of our customers and making the world a slightly better place. If you too feel lazy or are too busy, let these modules do the heavy lifting for you:

Or better yet, let us make them do the hard lifting for you. That's kind of, ahem, what we do. If you have questions or needs where we might be of help, don't hesitate to contact us. If you don't, well, contact us anyway and say hi.

In this blog post we consider JBoss/Wildfly domain mode in the context of the wonderful Keycloak software. It is not necessarily trivial to understand how the interfaces should be configured, especially if you want to do something other than the defaults, for example to secure your Wildfly/JBoss configuration, or if you are dealing with more complex nodes with multiple network interfaces.

Assumptions:

When configuring Keycloak in domain mode, the interfaces can and must be configured in two places.

In the domain master: domain.xml and host-master.xml.

In the slaves: host-slave.xml.

The difference between defining interfaces in these configuration files is actually not quite clear. You should be able to configure common logical names in domain.xml as general values, and then be more specific in host-master.xml and host-slave.xml to adapt the logical names to the needs of a specific cluster node.

JBoss/Wildfly uses a multitude of ports for its cluster, management and service needs, so we need to take these ports into consideration. The most important port in our case is the master's management port, 9999.

When booting up, the slave will try to connect to the master in tcp port 9999. If it cannot connect, it will fail to boot and emit a message:

WFLYHC0001: Could not connect to remote domain controller remote://<master-ip>:9999

With the default configuration files, there are three logical interfaces defined in domain.xml:

<interfaces>
<interface name="management"/>
<interface name="private">            
<inet-address value="${jboss.bind.address.private:127.0.0.1}"/>        </interface>
<interface name="public"/>
<interfaces>

two in host-master.xml:

<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:127.0.0.1}"/>
    </interface>
</interfaces>

The block in host-slave.xml is identical to that in host-master.xml.

The "management" interface is used for all components and services that are required by the management layer (i.e. the HTTP Management Endpoint)
The "public" interface binding is used for any application related network communication (i.e. Web, Messaging, etc). 
The private interface is used for components that are meant to be, well, private. 

There is nothing special about these names; interfaces can be declared with any names. Other sections of the configuration can then reference those interfaces by their logical name, rather than having to include the full details of the interface (which, on servers in a management domain, may vary on different machines).

When you boot the domain master node with something like this: 

domain.sh --host-config=host-master.xml -b 0.0.0.0 -Djboss.http.port=8080

the -b <bind_address> switch of the domain.sh script is a shortcut for binding the public interface: it binds the logical interface named "public" to 0.0.0.0.

With reference to the above blocks, you can express the same thing like this:

domain.sh --host-config=host-master.xml -Djboss.http.port=8080 -Djboss.bind.address=0.0.0.0

Now, what we specifically want to do is bind the application-facing services to all interfaces while pinning the management and cluster (jgroups) traffic to a dedicated management address.

The approach we are going to use here is to first declare the logical interfaces in domain.xml and clear any attributes and properties:

<interfaces>
  <interface name="management"/>
  <interface name="private"/>
  <interface name="public"/>
</interfaces>

The second thing we want to do is to ensure that our socket-binding group, in our case "ha-sockets", uses "public" as its default interface. For specific bindings in the group, we override the default interface; most importantly, this means the jgroups-tcp binding. The block looks like this:

<socket-binding-group name="ha-sockets" default-interface="public">
    <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
    <socket-binding name="http" port="${jboss.http.port:8080}"/>
    <socket-binding name="https" port="${jboss.https.port:8443}"/>
    <socket-binding name="jgroups-mping" interface="private" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
    <socket-binding name="jgroups-tcp" interface="management" port="7600"/>
    <socket-binding name="jgroups-tcp-fd" interface="private" port="57600"/>
    <socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
    <socket-binding name="jgroups-udp-fd" interface="private" port="54200"/>
    <socket-binding name="modcluster" multicast-address="${jboss.modcluster.multicast.address:224.0.1.105}" multicast-port="23364"/>
    <socket-binding name="txn-recovery-environment" port="4712"/>
    <socket-binding name="txn-status-manager" port="4713"/>
    <outbound-socket-binding name="mail-smtp">
        <remote-destination host="localhost" port="25"/>
    </outbound-socket-binding>
</socket-binding-group>

Notice that we have used the management interface for jgroups-tcp binding. 

Now that we have the domain.xml out of the way, we can be more specific in host-master.xml and host-slave.xml:

<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
    </interface>
    <interface name="private">
        <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:127.0.0.1}"/>
    </interface>
</interfaces>

The inet-address values all default to 127.0.0.1 but allow us to specify our own values through the environment. These interfaces blocks are identical in host-master.xml and host-slave.xml.

When these definitions are in place, we can simply start the master with:

domain.sh --host-config=host-master.xml -b 0.0.0.0 -Djboss.http.port=8080 -Djboss.bind.address.management=192.168.168.253

and the slave with:

domain.sh --host-config=host-slave.xml -b 0.0.0.0 -Djboss.http.port=8080 -Djboss.domain.master.address=192.168.168.253 -Djboss.bind.address.management=192.168.168.252

Of course there is more to making domain mode work; more on that later. Hopefully this clears up a common source of confusion.

We actually automate our Keycloak setups and keep them running with Puppet, using the excellent puppet-module-keycloak by treydock, to which we also contribute. If you want to know more about how to make Keycloak domain mode work, you can take a look at the code. Should you have any other questions or needs about infrastructure or cloud management with code, feel free to contact us.

Separating data from code in Puppet modules is advisable as it improves reusability of code. The separation can be accomplished with Hiera by having separate levels based on facts, organizational units, locations, etc.

Hiera can also be used for storing private data that needs to be protected and must not be readable by outsiders. Typically the data, like all other code, is stored in a version control system that may be located outside the organisation. In such a case Hiera encryption is a must.

The development of hiera-eyaml was an improvement in usability compared to previous encryption backends: it allows encrypting just the desired data inside YAML files. Besides hiera-eyaml there are other options as well, like using hiera-vault to fetch secrets dynamically from Hashicorp Vault, but those are outside the scope of this article.

First you'll need to set up hiera-eyaml:

$ sudo gem install hiera-eyaml

On a Puppet server, the installation command is:

$ sudo puppetserver gem install hiera-eyaml

Hiera-eyaml uses asymmetric encryption, PKCS#7 being the default, so it needs a key pair: one key, the public key, is used for encryption, and the other, the private key, is used for decryption. The key pair is created by:

$ eyaml createkeys

The key pair is created in the keys subfolder of the current folder.

Puppetserver needs both the public and the private key for decrypting and reading the values. A good location to store the keys on the Puppet server is, for example:

/etc/puppetlabs/puppet/eyaml

So let's create the folder and minimize the permissions:

$ mkdir /etc/puppetlabs/puppet/eyaml
$ chown -R puppet:puppet /etc/puppetlabs/puppet/eyaml
$ chmod -R 0500 /etc/puppetlabs/puppet/eyaml
$ chmod 0400 /etc/puppetlabs/puppet/eyaml/*.pem

Let's verify that the owners and permissions are what they're supposed to be:

$ ls -lha /etc/puppetlabs/puppet/eyaml

The owner should be "puppet", group owner "puppet" and read permission should be granted to the user "puppet".

If you want to edit the values in an existing Hiera file, typically in a copy of the control repository, you need both keys. Transfer them to your own computer with a safe protocol, for example scp.

To ease editing, it's useful to create a new folder .eyaml in your home folder and, in it, a configuration file config.yaml which states where the keys are located:

---
pkcs7_public_key: '/Users/tunnus/keys/public_key.pkcs7.pem'
pkcs7_private_key: '/Users/tunnus/keys/private_key.pkcs7.pem'

Make sure you set the environment variable EYAML_CONFIG to point to that configuration file; otherwise hiera-eyaml might not be able to find the keys.

An alternative is to store the keys at the root of the control repo inside ./keys and add that directory to .gitignore. This works, but you then need to launch all eyaml commands from the root of the repository or they will fail.

When the eyaml keys are on the server you can configure the Hiera configuration file to use these keys. The new Hiera version 5 has independent hierarchy settings for each environment and module. Let's suppose that the private key is called private_key.pkcs7.pem and the public key public_key.pkcs7.pem:

---
version: 5
defaults:
  datadir: data
hierarchy:
  - name: "Secret data"
    lookup_key: eyaml_lookup_key
    paths:
      - "nodes/%{trusted.certname}.eyaml"
      - "common.eyaml"
    options:
      pkcs7_private_key: /etc/puppetlabs/puppet/eyaml/private_key.pkcs7.pem
      pkcs7_public_key: /etc/puppetlabs/puppet/eyaml/public_key.pkcs7.pem
  - name: "Normal data"
    data_hash: yaml_data
    paths:
      - "nodes/%{trusted.certname}.yaml"
      - "common.yaml"

In *nix-type operating systems it's possible to set an environment variable EDITOR that determines the desired default editor. The variable can be placed in the shell initialization script (e.g. ~/.bashrc) or exported directly on the command line:

$ export EDITOR=emacs

When that environment variable exists, the data sections of the eyaml files defined in Hiera can be edited as follows:

$ eyaml edit common.eyaml

Eyaml opens the file and prints out helpful instructions. The value to be encrypted goes between the square brackets: DEC::PKCS7[]!

#| This is eyaml edit mode. This text (lines starting with #| at the top of the
#| file) will be removed when you save and exit.
#| - To edit encrypted values, change the content of the DEC(<num>)::PKCS7[]!
#|   block.
#|   WARNING: DO NOT change the number in the parentheses.
#| - To add a new encrypted value copy and paste a new block from the
#|   appropriate example below. Note that:
#|   * the text to encrypt goes in the square brackets
#|   * ensure you include the exclamation mark when you copy and paste
#|   * you must not include a number when adding a new block
#|   e.g. DEC::PKCS7[]!
---
# This value will not be encrypted
plaintextvalue: plaintext
# This value will be encrypted
encryptedvalue: DEC::PKCS7[encrypted]!

If we now look at the file, we can see that the previously defined value is encrypted:

$ cat common.eyaml
---
# This value will not be encrypted
plaintextvalue: plaintext
# This value will be encrypted
encryptedvalue: ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAK/YGRbR2qbgpxHxrCic6ywXG6x0w0hZksNQqJPBYT
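
For one-off values, eyaml can also encrypt directly on the command line, and you can paste the resulting block into the file; the label and plaintext here are just examples:

$ eyaml encrypt -l 'encryptedvalue' -s 'encrypted'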


Ever had a case where you needed to run a name-based Apache reverse proxy in front of some application server while restricting access to some proxied location at the same time? Here's how to do it.

First define a virtual host:

<VirtualHost *:443>
    ServerName myserver.example.com

Set the request headers (you are of course using TLS, aren't you):

RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Forwarded-Port "443"

Proxy to some internal address, here to localhost port 8080:

ProxyRequests Off
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/

Restrict access to the host or networks you need to:

<Location "/my/location/">
 Require ip 10.0.0.0/8
</Location>

Note: this will work with Apache 2.4 and up. With older versions you can use the same idea.

Here is a complete configuration:

<VirtualHost *:443>
  ServerName myserver.example.com

  ## Vhost docroot
  DocumentRoot "/var/www/html"

  ## Directories, there should at least be a declaration for /var/www/html

  <Directory "/var/www/html">
    Options Indexes FollowSymLinks MultiViews
    AllowOverride None
    Require all granted
  </Directory>

  ## Logging
  ErrorLog "/var/log/httpd/myserver_error_ssl.log"
  ServerSignature Off
  CustomLog "/var/log/httpd/myserver_access_ssl.log" combined

  ## Header rules
  Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"

  ## Request header rules
  RequestHeader set X-Forwarded-Proto "https"
  RequestHeader set X-Forwarded-Port "443"

  ## Proxy rules
  ProxyRequests Off
  ProxyPreserveHost On
  ProxyPass / http://127.0.0.1:8080/
  ProxyPassReverse / http://127.0.0.1:8080/

  ## Restrict access to /my/location
  <Location "/my/location/">
      Require ip 10.0.0.0/8
  </Location>

  ## SSL directives
  SSLEngine on
  SSLCertificateFile      "/etc/pki/tls/certs/my.crt"
  SSLCertificateKeyFile   "/etc/pki/tls/private/my.key"
</VirtualHost>

Petri Lammi

In this webinar recording we present the basics of building infrastructure with code, including version control, quality assurance and various tools such as Puppet, Terraform, Ansible and Puppet Bolt:

Produced in co-operation with Turku Business Region on 5 May 2020.

It seems every other organization is using Jenkins these days. Jenkins is a continuous integration and continuous delivery server that can be used to automate building, testing, and delivering or deploying software. 

Many organizations also use Puppet for their configuration management needs. Puppet is, if not the de facto configuration management solution, at least one of the most mature and field-tested ones. It consists of several server components, agents and a declarative programming language, amongst other things. Puppet can be used to install, set up and manage a large number of all kinds of resources.

While setting up a single Jenkins master server manually is not particularly challenging, setting up several, and managing changes on them over time, can become quite a burden. With Puppet, a lot of that burden can be taken care of with a set of tools designed for just that.

We assume here that you already have a server instance with puppet agent installed. We will first install a minimal, working Jenkins master server and continue from there later.

First we need to have the puppet-jenkins module in place. If you are using r10k or Code Manager (you should), add a line to your Puppetfile:

mod 'puppet-jenkins', '2.0.0'

or if you want to pin to a specific commit:

mod 'puppet-jenkins',  
  :git => 'https://github.com/voxpupuli/puppet-jenkins.git',  
  :commit => '3bd0774fddec9a78d86304d018db89a57130b9e3'

If you don’t use r10k or Code Manager, install the module on your puppetserver manually:

puppet module install puppet-jenkins --version 2.0.0

After you have the module in place, create a minimal manifest to install and set up a basic jenkins master. Here we will use LTS version 2.204.2:

class profile::jenkins_master {

  # Ensure the needed Java JDK is present
  package { 'openjdk-8-jdk-headless':
    ensure => 'present',
  }

  # Install and set up a basic Jenkins master
  class { '::jenkins':
    version            => '2.204.2',
    lts                => true,
    repo               => true,
    configure_firewall => false,
    install_java       => false,
    port               => '8080',
    purge_plugins      => false,
    require            => Package['openjdk-8-jdk-headless'],
  }
}

After putting this manifest in place, preferably with the help of your version control system and r10k, run

$ puppet agent --test --verbose 

on your server and you should end up with a working basic Jenkins master. You can then proceed by navigating to your server url and finish the installation by following the install wizard.
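
We will dig deeper in later posts, but as a small taste, the same module can also manage plugins declaratively. A minimal sketch, with the plugin name chosen purely for illustration:

# Inside profile::jenkins_master, after the jenkins class
jenkins::plugin { 'git': }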

FreeIPA is the "Active Directory" of Linux networks: it integrates a number of components (LDAP, Kerberos, CA, sssd, among others) so that Linux machines can be joined to a domain. Joining the domain enables, among other things, centralized user accounts, single sign-on and distribution of SSH keys and SSH host keys. FreeIPA can be set up with its own installer, on which a large part of the functionality of the puppet-ipa module is also based.

FreeIPA does have one small annoyance, though: if a node (e.g. a server) is rebuilt (e.g. from an AWS EC2 snapshot) or re-provisioned (e.g. PXE boot), the SSH host keys stored in FreeIPA may fall out of sync with reality. In practice you can then no longer log in to the machine without crudely bypassing SSH's security mechanisms:

$ ssh -o GlobalKnownHostsFile=/dev/null server.example.org

Fortunately, incorrect SSH keys can be fixed from the FreeIPA server quite straightforwardly. First, fetch the node's current host key:

$ ssh-keyscan server.example.org

Once the host key is known, update the node's information in FreeIPA:

$ kinit admin
$ ipa host-mod server.example.org --sshpubkey='pubkey-here'

Apart from the kinit, the whole thing can also be done with this script:

#!/bin/sh
#
# Replace an outdated FreeIPA SSH host key with the active one.
#
usage() {
    echo "fix-ipa-host-key.sh <host> <key-types>"
    echo
    echo "Examples:"
    echo "  fix-ipa-host-key.sh server.example.org dsa,rsa"
    echo "  fix-ipa-host-key.sh server.example.org rsa,ecdsa,ed25519"
    echo
    exit 1
}

if [ "${1}" = "" ] || [ "${2}" = "" ]; then
    usage
fi

if [ "${3}" != "" ]; then
    usage
fi

HOST=$1
KEY_TYPES="$2"

IPA_CMDLINE="ipa host-mod $HOST"
for KEY_TYPE in $(echo $KEY_TYPES | sed s/","/" "/g); do
    echo $KEY_TYPE | grep -E '^(dsa|rsa|ecdsa|ed25519)$' > /dev/null
    if [ $? -ne 0 ]; then
        echo "ERROR: invalid key type ${KEY_TYPE}! Valid values are dsa, rsa, ecdsa and ed25519."
    else
        HOST_KEY=$(ssh-keyscan -t $KEY_TYPE $HOST 2> /dev/null | cut -d " " -f 2-)
        IPA_CMDLINE="${IPA_CMDLINE} --sshpubkey='$HOST_KEY'"
    fi
done

IPA_SCRIPT=$(mktemp)
chmod 700 $IPA_SCRIPT
echo $IPA_CMDLINE > $IPA_SCRIPT
$IPA_SCRIPT
rm $IPA_SCRIPT
Samuli Seppänen

Sometimes you find on the network a work of art, carefully hand-tuned over the years and possibly inherited from a sysadmin who has since passed on, whose inner workings nobody understands anymore and which threatens to collapse if you breathe at it too hard. It may even still produce some service that matters to the business. What to do?

If this has happened, or if some other system so far roaming wild and free is expected to be brought under Puppet management, it is good to remember that Puppet includes the RAL (Resource Abstraction Layer) shell, which lets you inspect how Puppet sees the system. With the RAL, the state of the system can be turned into Puppet code with little effort. For example, converting a directory tree:

$ find `pwd` | while read file; do puppet resource file $file; done
file { '/tmp/tmp':
  ensure => 'directory',
  ctime => '2018-03-28 14:58:55 +0300',
  group => '0',
  mode => '0755',
  mtime => '2018-03-28 14:58:55 +0300',
  owner => '0',
  selrange => 's0',
  selrole => 'object_r',
  seltype => 'user_tmp_t',
  seluser => 'unconfined_u',
  type => 'directory',
}
file { '/tmp/tmp/tmp':
  ensure => 'directory',
  ctime => '2018-03-28 14:59:14 +0300',
  group => '0',
  mode => '0755',
  mtime => '2018-03-28 14:59:14 +0300',
  owner => '0',
  selrange => 's0',
  selrole => 'object_r',
  seltype => 'user_tmp_t',
  seluser => 'unconfined_u',
  type => 'directory',
}
file { '/tmp/tmp/tmp/file':
  ensure => 'file',
  content => '{md5}d41d8cd98f00b204e9800998ecf8427e',
  ctime => '2018-03-28 14:59:43 +0300',
  group => '0',
  mode => '0640',
  mtime => '2018-03-28 14:59:14 +0300',
  owner => '0',
  selrange => 's0',
  selrole => 'object_r',
  seltype => 'user_tmp_t',
  seluser => 'unconfined_u',
  type => 'file',
}

The file resource needs an absolute path as input, so we give one to the find command as well. This ensures that find also prints absolute paths into the pipeline above.

The only minus with file resources is that owner and group appear in them as numeric values. It is best to convert the numeric values into names, e.g. with the id command:

$ id -un 0
root
$ id -gn 0
root

The first command returns the name of user 0 (UID 0) and the second the name of group 0 (GID 0).
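
The RAL is not limited to files: any resource type with a provider can be dumped the same way, for example:

$ puppet resource user root
$ puppet resource service sshd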

For the sake of reusability, it is recommended to keep data separate from Puppet modules and manifests. This is done with Hiera. Modules can, for example, produce different configurations based on data defined in Hiera per organization or location.

Hiera is also typically used to store private data that must be protected and must not end up in the hands of outsiders. Typically the data is stored alongside the rest of the code in a version control system, which may sit outside the organization's boundaries. In that case Hiera encryption must be used.

A previously common method for Hiera encryption was hiera-gpg, which encrypted the entire file. Hiera-eyaml improves on this by allowing individual data blocks inside a Hiera file to be encrypted, which improves usability.

Hiera-eyaml must first be installed:

$ sudo gem install hiera-eyaml

On the Puppet server the installation is done as follows:

$ sudo puppetserver gem install hiera-eyaml
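
To double-check that the gem is visible to the Puppetserver process, you can list its gems:

$ sudo puppetserver gem list | grep hiera-eyaml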

Hiera-eyaml uses asymmetric encryption, PKCS#7 by default, so a key pair is needed: data blocks are encrypted with the public key and decrypted with the private key. The key pair is created with the command:

$ eyaml createkeys

The key pair is created in the keys subdirectory of the current directory. The generated keys must be protected and stored carefully. The public key alone is enough to encrypt values; editing an encrypted block requires both the public and the private key, so a person who adds or edits the contents of Hiera files needs both keys. The keys must be protected just as on the server: owned by the user, with the files readable only by the user. A suitable location is, for example, a keys subdirectory in the home directory.
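
As a quick sketch, a single value can be encrypted with nothing but the public key at hand; the label and key path below are only examples:

# the label and the key path are examples only
$ eyaml encrypt -l 'somesecret' -s 'supersecret' \
    --pkcs7-public-key=$HOME/keys/public_key.pkcs7.pem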

Puppetserver needs both the public and the private key in order to decrypt and read values. On the Puppetserver a suitable location for the keys is, for example:

/etc/puppetlabs/puppet/eyaml

So let's create this directory and restrict its permissions to the minimum:

$ mkdir /etc/puppetlabs/puppet/eyaml
$ chown -R puppet:puppet /etc/puppetlabs/puppet/eyaml
$ chmod -R 0500 /etc/puppetlabs/puppet/eyaml
$ chmod 0400 /etc/puppetlabs/puppet/eyaml/*.pem

Check that the file owners and permissions are as intended:

$ ls -lha /etc/puppetlabs/puppet/eyaml

The owner must be puppet, the group owner puppet, and only the puppet account may have read access.
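
The listing should look roughly like this (sizes and timestamps will naturally differ):

total 16K
dr-x------ 2 puppet puppet 4.0K Mar 28 15:00 .
drwxr-xr-x 4 puppet puppet 4.0K Mar 28 15:00 ..
-r-------- 1 puppet puppet 1.7K Mar 28 15:00 private_key.pkcs7.pem
-r-------- 1 puppet puppet 1.1K Mar 28 15:00 public_key.pkcs7.pem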

If a user wants to edit the values of an existing Hiera file on their own machine, typically in a clone of the control repository, both keys are needed. The keys should be transferred to the machine with an appropriately secure protocol, e.g. scp. To make editing easier, it is useful to create a new directory, .eyaml, in the home directory and, inside it, a configuration file config.yaml that defines where the keys are found:

---
pkcs7_public_key: '/Users/username/keys/public_key.pkcs7.pem'
pkcs7_private_key: '/Users/username/keys/private_key.pkcs7.pem'

Once the keys are in place on the server, the Hiera configuration file can be set to use them. Hiera version 5 has independent hierarchy definitions for each environment and module, so hiera.yaml can be placed in the control repository and its branches, referencing the keys created above on the server. Assume the private key has been named private_key.pkcs7.pem and the public key public_key.pkcs7.pem:

---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data

hierarchy:
  - name: "Secret data"
    lookup_key: eyaml_lookup_key
    paths:
      - "secrets.eyaml"
    options:
      pkcs7_private_key: /etc/puppetlabs/puppet/eyaml/private_key.pkcs7.pem
      pkcs7_public_key: /etc/puppetlabs/puppet/eyaml/public_key.pkcs7.pem

  - name: "Per-node data"
    path: "nodes/%{trusted.certname}.yaml"

  - name: "Common data"
    path: "common.yaml"
On *nix-type operating systems you can set the EDITOR environment variable, which defines the user's preferred default editor. The variable can be set e.g. in a bash user's shell initialization file, ~/.bashrc, or it can be given directly on the command line:

$ export EDITOR=emacs

Once the environment variable exists, the data blocks of an eyaml-format file defined in Hiera can be edited as follows:

$ eyaml edit secrets.eyaml

Eyaml then opens the file together with instructions. A value to be encrypted is placed between the square brackets, in the form DEC::PKCS7[]!

#| This is eyaml edit mode. This text (lines starting with #| at the top of the
#| file) will be removed when you save and exit.
#| - To edit encrypted values, change the content of the DEC(<num>)::PKCS7[]!
#|   block.
#|   WARNING: DO NOT change the number in the parentheses.
#| - To add a new encrypted value copy and paste a new block from the
#|   appropriate example below. Note that:
#|   * the text to encrypt goes in the square brackets
#|   * ensure you include the exclamation mark when you copy and paste
#|   * you must not include a number when adding a new block
#|     e.g. DEC::PKCS7[]!
---
# This value will not be encrypted
plaintextvalue: plaintext
# This value will be encrypted
encryptedvalue: DEC::PKCS7[encrypted]!

If we now look at the file, we can verify that the value defined above really is encrypted:

$ cat secrets.eyaml
---
# This value will not be encrypted
plaintextvalue: plaintext
# This value will be encrypted
encryptedvalue: ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAK/YGRbR2qbgpxHxrCic6ywXG6x0w0hZksNQqJPBYTq2FDyDO7H9L0XlVmnSP+wpjEleDGBJqUEyxgucYICvub5QaHQukBJ7/5ZeQ3grGIBOQkvEZVONWjNtdA+MkiIrc/erasgWYaU8lVJZ73RC6VzJQHYdphCsxue10kTAQw1uBKZOCbc9qHlhIwJuNERfUZBsfMpWgmnExph3kBsVlQ4FPTurkX2Kp0wEQlDVKm5llv4juq3dQLhDS4NkxmdopX/8jWP8+TMQB7vfW5kgS2U08vlm9QKgukO6GMeDrn/1Y66KnbokfGh4eJF7L94A1EYpKQx5eja+ITkvGarvCSTA8BgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBBn5gfCNVVsnRmQgIhE23pZgBAHsuh7FqP+XQMbJCQlREMN]
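
With both keys available, the round trip can be verified by decrypting the whole file; here is a quick sketch using the server-side key locations from above:

$ eyaml decrypt -f secrets.eyaml \
    --pkcs7-public-key=/etc/puppetlabs/puppet/eyaml/public_key.pkcs7.pem \
    --pkcs7-private-key=/etc/puppetlabs/puppet/eyaml/private_key.pkcs7.pem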

Vagrant's shell provisioner directly supports PowerShell scripts. In other words, you can modify a Windows virtual machine's settings with PowerShell without having to build a new virtual machine image. A typical example is joining Windows to a configuration management system such as Puppet.

Vagrant's documentation mentions that parameters can be passed to a PowerShell script with the powershell_args option, either as a string or as an array. However, examples of PowerShell provisioning in particular are scarce, which is why this blog post was written. As an example, take the script bootstrap_windows.ps1, which joins a Windows virtual machine to an existing Puppetserver. It is used manually from PowerShell as follows:

> .\bootstrap_windows.ps1 -certName win2012r2.local
    -ServerName puppet.local
    -puppetServerAddress 192.168.137.10

The line breaks have been added for clarity.

If you want to run the bootstrap_windows.ps1 script directly from Vagrant, it is done as follows:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|

  config.vm.define "win2012r2" do |box|
    box.vm.box = "mwrock/Windows2012R2"
    box.vm.box_version = "0.6.1"
    box.vm.hostname = "windows2012r2a"
    box.vm.network "private_network", ip: "192.168.31.101"
    box.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.memory = 2048
    end
    box.vm.provision "shell" do |s|
      s.path = "bootstrap_windows.ps1"
      s.args = ["-certName", "win2012r2.local",
                "-ServerName", "puppet.local",
                "-puppetServerAddress", "192.168.31.1"]
    end
  end
end

Note how each parameter and each parameter value is added to the array as a separate string; for one reason or another, e.g. "-certName win2012r2.local" does not work.

On Windows, the hosts file can be used to map IP addresses to names, just like on Linux. On Linux the file is located at /etc/hosts; on Windows it is C:\Windows\System32\drivers\etc\hosts, or rather $env:windir\System32\drivers\etc\hosts.

The hosts file becomes relevant when an application needs a DNS name but a DNS server is not available, or cannot be used for some reason.

If no server has been defined for the Puppet agent, it will by default try to contact a server whose DNS name is 'puppet'; that is, if the machine's domain is 'example.com', the agent looks for a server with the DNS name 'puppet.example.com'. By setting the server's DNS name with the server setting, the Puppet agent can be pointed at the desired server. On Linux, in recent Puppet versions, this is typically set in /etc/puppetlabs/puppet/puppet.conf, in the main or agent section:

server = puppet.example.com

Editing an existing Windows hosts file in, say, Notepad usually works without problems. Editing or building the hosts file with PowerShell code, on the other hand, may not. After editing or rebuilding it, you may find that the Puppet agent cannot find the server, because neither the default server name (puppet) nor the configured name resolves to an IP address. The following code causes this kind of problem:

$hostsFile = "$env:windir\System32\drivers\etc\hosts"

if (!(Test-Path "$hostsFile")) {
    New-Item -path "$env:windir\System32\drivers\etc" -name hosts -type "file"
    Write-Host "Created new hosts file"
}

# Remove any lines containing our puppetservername
$tempfile = $env:temp + "\" + (get-date -uformat %s)
New-Item $tempfile -type file -Force | Out-Null
Get-Content $HostsFile | Where-Object {$_ -notmatch "$puppetServerName"} | Set-Content $tempfile
Move-Item -Path $tempfile -Destination $HostsFile -Force

# Insert name, address of puppetserver separated by a tab
$fields=@($puppetServerAddress,$puppetServerName)
$myString=[string]::join("`t", (0,1 | % {$fields[$_]}))
$found = Get-Content $hostsFile | Where-Object { $_.Contains("$myString") }
if ([string]::IsNullOrEmpty($found)) {
    [string]::join("`t", (0,1 | % {$fields[$_]})) | Out-File -encoding ASCII -append $hostsFile
}

So: first the location of the hosts file is defined. If the file does not exist, it is created. Next, a temporary file is created whose content is taken from the existing hosts file, dropping any lines that contain the server name (received as a parameter). The modified file is then moved into place. After that, the server's name and IP address (received as parameters) are appended to the file.

If you now inspect the hosts file by eye, it appears to be fine. Yet even though the Puppet server is reachable, running the agent manually (for the first time) produces an error:

Error: Could not request certificate: getaddrinfo: No such host is known.
Exiting; failed to retrieve certificate and waitforcert is disabled

Ping produces an error as well:

Ping request could not find host puppet.example.com. Please check the name and try again.

This is because the name puppet.example.com does not resolve to an IP address even though it is properly defined in the hosts file. What is going on?

Problems with the Windows hosts file are not exactly rare, and searching Google for them yields plenty of results, such as this one. Suggested things to check include the name resolution cache, encoding, extra tabs or spaces, and so on.

Since this article is about PowerShell, we want to find out what the problem is, and also fix it, with PowerShell code.

The cause of the problem is that editing the hosts file with the PowerShell code above leaves its permissions, the 'security descriptors', incomplete. Once the permissions are corrected with PowerShell code, names resolve to IP addresses correctly:

# Create access rule
$userName = "Users"
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("$userName", 'Read', 'Allow')

# Apply access rule
$acl = Get-ACL $hostsFile
$acl.SetAccessRule($rule)
Set-Acl -path $hostsFile -AclObject $acl

So the 'Users' group is granted read access to the file. This permission is missing if the hosts file has been edited with the earlier PowerShell code, and granting this read permission is enough to make name resolution work.

Here is a complete, working function:

function SetupHostsFile {

    param(
        [IPADDRESS]$puppetServerAddress,
        [String]$puppetServerName
    )

    if ($debug) {
        write-host ("Now in function {0}." -f $MyInvocation.MyCommand)
    }

    If (-Not [BOOL]($puppetServerAddress -as [IPADDRESS])) {
        write-host ("{0} is not an IP address" -f $puppetServerAddress)
        break
    }

    $fqdnRe='(?=^.{4,253}$)(^((?!-)[a-zA-Z0-9-]{1,63}(?<!-)\.)+[a-zA-Z]{2,63}$)'

    If ($puppetServerName -notmatch $fqdnRe) {
        write-host ("{0} is not a fully qualified name" -f $puppetServerName)
        break
    }

    write-host "Setting up hosts file..."

    $hostsFile = "$env:windir\System32\drivers\etc\hosts"

    if (!(Test-Path "$hostsFile")) {
        New-Item -path "$env:windir\System32\drivers\etc" -name hosts -type "file"
        Write-Host "Created new hosts file"
    }

    # Remove any lines containing our puppetservername
    $tempfile = $env:temp + "\" + (get-date -uformat %s)
    New-Item $tempfile -type file -Force | Out-Null
    Get-Content $HostsFile | Where-Object {$_ -notmatch "$puppetServerName"} | Set-Content $tempfile
    Move-Item -Path $tempfile -Destination $HostsFile -Force

    # Insert name, address of puppetserver separated by a tab
    $fields=@($puppetServerAddress,$puppetServerName)
    $myString=[string]::join("`t", (0,1 | % {$fields[$_]}))
    $found = Get-Content $hostsFile | Where-Object { $_.Contains("$myString") }
    if ([string]::IsNullOrEmpty($found)) {
        [string]::join("`t", (0,1 | % {$fields[$_]})) | Out-File -encoding ASCII -append $hostsFile
    }

    # Create access rule
    $userName = "Users"
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("$userName", 'Read', 'Allow')

    # Apply access rule
    $acl = Get-ACL $hostsFile
    $acl.SetAccessRule($rule)
    Set-Acl -path $hostsFile -AclObject $acl

}

You can read more about PowerShell's Set-Acl cmdlet here.

Vagrant is a handy tool for creating Windows virtual machines too, e.g. for testing Puppet resources. For Windows virtual machines it is natural to use PowerShell provisioning, which Vagrant makes fairly simple. Quite soon you will need to pass parameters to a PowerShell script through Vagrant. Examples of this are somewhat hard to find, so here is one.

Assume we have a PowerShell script called bootstrap_windows.ps1 in the 'provision' subdirectory of a Vagrant project. The script installs and configures the Puppet agent on a Windows virtual machine. Two parameters must be passed to the script: the Puppetserver's FQDN (serverName) and, as a convenience bonus, the desired FQDN for the virtual machine (certName). (I use a YAML-based, so-called data-driven Vagrantfile, so the syntax differs slightly from a standard Vagrantfile.)

Tell Vagrant to use the shell provisioner, give the script's local path and pass it the parameters mentioned above:

provisioners:
  - shell:
      path: provision/bootstrap_windows.ps1
      arguments:
        - name: -certName
          value: agent.example.vm
        - name: -serverName
          value: puppetserver.example.vm

In the PowerShell script, define the corresponding parameters:

param(
    [string]$certName,
    [string]$ServerName
)

These parameters can then be used inside the PowerShell script, e.g. in a function, as desired:

configurePuppetAgent -ServerName $ServerName -certName $certName

So from Vagrant's point of view, passing parameters to a PowerShell script/provisioner works the same way as with the shell provisioner in general.

Cobbler has a rather poorly documented feature that makes it possible to find out what the querying machine's name is in Cobbler. This is useful, for example, when bootstrapping Windows, in order to fetch the Unattended.xml file associated with the correct Cobbler System object. With Linux there is no need for this trickery, because Cobbler's parameters are passed to the operating system installer directly in the kickstart or preseed file.

The feature itself is very simple to use, as long as you remember to run the query from the machine whose System name you want to know:

$ curl -s http://cobbler.domain.com/cblr/svc/op/autodetect/
web.domain.com

If the same command is run, for example, on the Cobbler server itself, the response is just this:

$ curl -s http://127.0.0.1/cblr/svc/op/autodetect/
FAILED: no match (127.0.0.1,None)

The error above is caused by the fact that Cobbler (services.py) goes through Cobbler's System objects and looks for the querying machine's MAC address in their network interfaces. Since the Cobbler machine was not created with Cobbler, it has no corresponding System object, and the result is "FAILED: no match".
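
As a sketch of how this can be used during bootstrapping: first resolve the machine's System name via autodetect, then fetch a file rendered for that System. The op/ks endpoint below is the one Cobbler uses for rendered kickstarts; for a Windows answer file, substitute whatever URL your setup serves it from.

# adjust the second URL to wherever your Cobbler serves per-System files
SYSTEM_NAME=$(curl -s http://cobbler.domain.com/cblr/svc/op/autodetect/)
curl -s "http://cobbler.domain.com/cblr/svc/op/ks/system/${SYSTEM_NAME}" -o /tmp/rendered-config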

The same thing is done in a different and considerably more laborious way in the Cobbler and Windows article, which is unfortunately one of the few sources covering Windows provisioning with Cobbler. Based on my own tests, the approach in that article does not actually work, at least not with the Cobbler version we use.

Sometimes Puppet code needs to look something up from Hiera, but the lookup does not work as expected. In these cases the "puppet lookup" command helps:

$ puppet lookup --node www.domain.com --explain bacula::filedaemon::backup_files
Data Binding "hiera"
  Found key: "bacula::filedaemon::backup_files" value: [
    "/etc",
    "/var/lib/puppet/ssl",
    "/var/backups/local"
  ]

Oddly enough, this command only finds the levels of the Hiera hierarchy that are based on facts:

---
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::trusted.certname}"
  - "roles/%{::role}"
  - "lsbdistcodename/%{::lsbdistcodename}"
  - "osfamily/%{::osfamily}"
  - "kernel/%{::kernel}"
  - common

:yaml:
  :datadir:

Above, the role variable is not a fact: it is really a parameter, or a convention, that is defined in the node's YAML file:

role: webserver

The roles defined in the YAML files are turned into top-scope variables in /etc/puppetlabs/code/environments/production/manifests/site.pp:

# Save server role defined in the node's yaml into a top-scope
# variable. These top-scope variables are then used in the Hiera
# hierarchy to configure a node according to its role.
$role = hiera('role', undef)

If you try to use the "puppet lookup" command above to find the value of a parameter that is defined only in the selected node's role, it will not be found. The problem can, however, be worked around, somewhat clumsily, by creating a facts file, e.g. facts.yaml, and defining the node's role in it:

role: 'webserver'

After that, run puppet lookup with the --facts parameter:

$ puppet lookup --node www.domain.com --explain bacula::filedaemon::backup_files --facts facts.yaml

This command also searches the YAML file matching the role. This is very handy especially if you want to see all the classes defined for a node and the role implementation described above is in use:

$ puppet lookup --node www.domain.com --explain classes --facts facts.yaml --merge unique

More information about using the "puppet lookup" command is available here.

Samuli Seppänen
