ssh certificates part 2


This is the second part of the SSH certificate series, server side SSH certificates. You can find the first one here.

This post shows what server side certificates can be used for and how to create them.

What are server side certificates good for?

SSH certificates on the host side extend the SSH host keys. They make it easier to identify a running system, as multiple names can be provided in the certificate. This also avoids warnings about a wrong host key on a shared IP system, as all IPs and names can be provided.

SSH certificates can also help to identify freshly deployed systems, as a system can be certified by a build CA directly after deployment.

signing a host key

For this step, we need a CA key. How that can be generated was mentioned in the first part. We also need the host public key to sign. This can either be copied from /etc/ssh/ on the server or fetched using ssh-keyscan.


ssh-keyscan can also take a parameter for a specific key type:

ssh-keyscan -t ed25519

This is needed for some older versions of openssh, where ed25519 public keys were not fetched by default with ssh-keyscan.

The returned output looks like the following:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPIP0JSsdP2pjtcYNcmqyPg6nLbMOjDbRf0YR/M2pu2N

The second and third fields need to be put into a file so that they can be used to generate the certificate.

A complete command would then look like this:

ssh-keyscan | awk '/ssh|ecdsa/ { print $2,$3 }' >

With the resulting file, we can now proceed to create the certificate.

ssh-keygen \
  -s ca.key \
  -V '+52w1d' \
  -I 'foohost' \
  -h \
  -n, \

The meaning of the options is:

  • -s the key to use for signing (the CA)
  • -V the interval the certificate is valid for
  • -I the identity of the certificate (a name for the certificate)
  • -h flag to create a host certificate
  • -n all names the host is allowed to use (this list can also contain IPs)

The last option is the public key file to certify.
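Putting it all together, a complete invocation could look like the following sketch. The CA key name, the host key file and the names after -n are placeholders for illustration and have to be replaced with your own values:

ssh-keygen \
  -s ca.key \
  -V '+52w1d' \
  -I 'foohost' \
  -h \
  -n foohost.example.com,192.0.2.10 \
  foohost.pub

This writes the certificate to foohost-cert.pub next to the given public key.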

This results in a file which contains the certificate. Like an SSH client certificate, it can be viewed with ssh-keygen.

ssh-keygen -L -f

This file now has to be placed in the same directory as the public key on that host, with the same ending.
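As a sketch, assuming the certificate was created for the default ed25519 host key, copying it into place could look like this (host name and file names are assumptions):

scp foohost-cert.pub root@foohost.example.com:/etc/ssh/ssh_host_ed25519_key-cert.pub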

The last step on the server is to adjust the sshd_config so that it includes the certificate. For that, add the following line for the matching host key, for example:

HostCertificate /etc/ssh/
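A complete line, assuming the default ed25519 host key and the certificate name used above (both are assumptions), could look like this:

HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub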

With a reload, it should load the certificate and make it available for authentication.

Now the only thing left to do is to tell the client that it should trust the CA to identify systems. For that, the public key of the CA has to be added to the file ~/.ssh/known_hosts in the following format:

@cert-authority * <content of>

The * marks a filter, so different CAs can be trusted depending on the domain.
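A complete entry could look like the following sketch, where the domain pattern and the key material are placeholders:

@cert-authority *.example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... CA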

With this, the client can verify your server through the certificate presented by the server. When connecting with debugging enabled, you should get output like the following:

$ ssh -v
debug1: Server host key: SHA256:+JfUty0G4i3zkWdPiFzbHZS/64S7C+NbOpPAKJwjyUs
debug1: Host '' is known and matches the ED25519-CERT host certificate.
debug1: Found CA key in /home/foo/.ssh/known_hosts:1

With the first and now the second part done, you can already lock down your infrastructure pretty well. In the next part, I will show some of the things I use to keep my infrastructure easily manageable.

ssh certificates part 1


SSH access to all of my infrastructure has been handled with SSH certificates for more than a year now. As I am asked every now and then how it works, I will describe it in multiple blog posts.

This part will revolve around client certificates.

What is it good for?

With plain public key usage, one can identify a user by their public key. These keys get put into an ~/.ssh/authorized_keys file, and if a user presents the correct key, they are let onto the system. This approach works well, but it is a bit tricky to find out which key was actually used. Restricting a user based on their key also requires managing the authorized_keys options on every machine.

SSH certificates on the client side make it possible to sign a public key and remove the requirement for an authorized_keys file. The options can be set directly in the certificate and are active on every server this certificate is used with. As the certificate can also hold an identification string, it is easier to see from the logs which key connected and for what purpose. The only thing needed to make this work is to configure every server to trust the signing CA; no authorized_keys file has to be managed anymore.

generating the CA

First we need an SSH key for the purpose of a CA. In a production environment this should not be the same key as your normal key. The key is generated like any other key with ssh-keygen:

ssh-keygen -t ed25519 -C CA -f ca.key

You can choose any key type you want; certificates work with all types, and any type can sign any type. The -C flag adds a comment to the key.

Now we can sign a public key.

signing a user key

First we sign a user public key

ssh-keygen \
  -s ca.key \
  -I 'foouser' \
  -n foouser \

Now what do all these options mean?

  • -s defines the signing key
  • -I is an identification for the certificate. This also shows up in the auth.log on the server.
  • -n the principal, which in this case means the username this key will be allowed to log in with.
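A complete signing command, assuming the CA key from above and a user public key named id_ed25519.pub (the file names are assumptions), could look like this:

ssh-keygen \
  -s ca.key \
  -I 'foouser' \
  -n foouser \
  id_ed25519.pub

This creates the certificate as id_ed25519-cert.pub next to the public key.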

To restrict the source IP addresses the certificate may be used from, one can add the following option:

-O source-address=","

Each option from ssh-keygen(1) requires its own -O flag, for example:

-O clear -O no-pty -O force-command="/opt/foo/bin/do_stufff"

A good source for further options is the ssh-keygen man page.

After the command has been executed, a certificate file shows up next to the public key. Its content can be inspected using ssh-keygen again:

ssh-keygen -L -f

To get authentication working with this key, two steps have to be taken. The first is to put the generated certificate in the same directory as the private key, so that the SSH client will send the certificate. The second is to put the CA public key onto the server, so that it trusts all certificates created with it.

This is done with the following option in the sshd_config

TrustedUserCAKeys /etc/ssh/ssh_user_certs

where the file /etc/ssh/ssh_user_certs contains the CA public key. It is possible to put multiple CAs into that file.
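For example, the public key of the CA generated in the first step could be appended like this (paths are assumptions):

cat ca.key.pub >> /etc/ssh/ssh_user_certs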

Now one can connect to the server using the newly created key

ssh -vvv -l foouser <yourserver>

This should print lines like the following:

debug1: Server accepts key: pkalg blen 364
debug1: Offering ED25519-CERT public key: /home/foouser/.ssh/id_ed25519
debug3: sign_and_send_pubkey: ED25519-CERT SHA256:YYv18lDTPtT2g5vLylVQZiXQvknQNskCv1aCNaSZbmg

These three lines show for my session that the server accepts the key and that my certificate was sent.

With this, the first step to using SSH certificates is done. In the next post I will show how to use SSH certificates for the server side.

S.M.A.R.T. values


I wondered for some time what all the S.M.A.R.T. values mean and which of them could tell me that my disk is failing. Finally I found a Wikipedia article which has a nice list of what each value means.

minimal nginx configuration


As I was asked today how I manage my nginx setup, I thought I’d write it down.

The configuration was inspired by a blog entry by Zach Orr (it looks like the blog post is gone since 2014). The setup consists of one main configuration and multiple domain-specific configuration files which get sourced in the main config. If a domain uses certificates, these are pulled in in their respective files.

I will leave out the performance stuff to make the config more readable. As the location of the config files differs per platform, I will use $CONF_DIR as a placeholder.

main configuration

The main configuration $CONF_DIR/nginx.conf first sets some global stuff.

# global settings
user www www;
pid /var/run/;

This takes care of dropping privileges after the start to the user and group www.

Next is the http section, which sets the defaults for all server parts.

http {
  include      mime.types;
  default_type application/octet-stream;
  charset      UTF-8;

  # activate some modules
  gzip on;
  # set some defaults for modules
  ssl_protocols TLSv1.2 TLSv1.1 TLSv1;

  include sites/*.conf;
}

This part sets some default options for all server sections and helps to keep the separate configurations simple. In this example the mime types are included (a large file with mime type definitions), and the default charset and mime type are set.

In this section we can also activate modules like gzip (see gzip on nginx) or set some options for modules like ssl (see ssl on nginx).

The last option is to include more config files from the sites directory. This is the directive which makes it possible to split up the configs.

server section config

The server section config may look different for each purpose. Here are some smaller config files just to show what is possible.

static website

For example the file $CONF_DIR/sites/ looks like this:

server {
  listen 80;

  location / {
    root /var/srv/;
    index index.html;
  }
}

In this case a domain is configured to deliver static content from the directory /var/srv/ on port 80. If the root path is requested in the browser, nginx will look for the index.html to show.

reverse proxy site

For a reverse proxy setup, the config $CONF_DIR/sites/ might look like this.

server {
  listen 80;

  location / {
    proxy_pass http://unix:/tmp/reverse.sock;
    include proxy_params;
  }
}

In this case, nginx will also listen on port 80, but for a different host name. All incoming requests will be forwarded to the local unix socket /tmp/reverse.sock. You can also define IPs and ports here, but for an easy setup, unix sockets are simpler. The line include proxy_params; includes the config file proxy_params, which sets some headers when forwarding the request, for example Host or X-Forwarded-For. A number of such config files should already be included with the nginx package, so it is best to take a look in $CONF_DIR.
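If your package does not ship a proxy_params file, a minimal sketch of one (the exact contents are an assumption) could look like this:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;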

uwsgi setup

As I got my graphite setup running some days ago, I can also provide a very bare uwsgi config, which actually looks like the reverse proxy config.

server {
  listen 80;

  location / {
    uwsgi_pass uwsgi://unix:/tmp/uwsgi_graphite.sock;
    include uwsgi_params;
  }
}

So instead of proxy_pass, uwsgi_pass is used to tell nginx that it has to speak the uwsgi protocol. Nginx also includes the uwsgi parameters, which, like the proxy_params file, are a collection of parameters to set when passing on the request.


So this is my pretty minimal configuration for nginx. It helped me automate the configuration, as I just have to drop new config files in the directory and reload the server.
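For example, after dropping a new file into the sites directory, testing and reloading the configuration (command names depend on the platform) can be as simple as:

nginx -t && nginx -s reload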

I hope you liked it and have fun.

pgstats - vmstat like stats for postgres


Some weeks ago a tool caught my attention - pgstats. It was mentioned in a blog post, so I tried it out and it made a very good first impression.

Now version 1.0 has been released. It can be found on GitHub.

It is a small tool to get statistics from postgres in intervals, just like with iostat, vmstat and other *stat tools. It has a number of modules to get these, for example for databases, tables, index usage and the like.

If you are running postgres, you definitely should take a look at it.

setting zpool features


Before Sun was bought by Oracle, OpenSolaris got ever newer ZFS versions, and upgrading was just a

$ zpool upgrade rpool

away. But since then, the open source version of ZFS has gained feature flags.


If you want to enable only one of these features, you may already have hit the problem that zpool upgrade can only enable all features at once, either on one pool or on all pools.

The way to go is to use zpool set. Feature flags are options on the pool and can also be listed with zpool get.

$ zpool get all tank1 | grep feature
tank1  feature@async_destroy          enabled                        local
tank1  feature@empty_bpobj            active                         local
tank1  feature@lz4_compress           active                         local
tank1  feature@multi_vdev_crash_dump  disabled                       local

Enabling a feature, for example multi_vdev_crash_dump, would then be

$ zpool set feature@multi_vdev_crash_dump=enabled tank1

The feature will then disappear from the zpool upgrade output and show up as enabled (or active, once it is in use) in zpool get.

using unbound and dnsmasq


After some time of using an Almond as our router and always having trouble with disconnects, I bought a small apu1d4, an AMD low power board, as our new router. It is now running FreeBSD and is very stable. Not a single connection has been dropped yet.

As we have some services in our network, like a fileserver and a printer, we always wanted to use names instead of IPs, but no router so far could provide that. So this was the first problem I solved.

FreeBSD comes with unbound preinstalled. Unbound is a caching DNS resolver, which answers DNS queries faster when they have already been queried before. I wanted to use unbound as the primary target for DNS queries, as the caching functionality is pretty nice. Further, I wanted an easy DHCP server which would also function as a DNS server. For that purpose dnsmasq fits best. There are also ways to use dhcpd, bind and some glue to get the same result, but I wanted as few services as possible.

So my setup constellation looks like this:

client -> unbound -> dnsmasq
             +-----> ISP dns server

For my internal tld, I will use zero. The dns server is called and has the IP The network for this setup is

configuring unbound

For this to work, we first configure unbound to make name resolution work at all. Most settings already have pretty good defaults, so we only override these with a file in /etc/unbound/conf.d/, in my case /etc/unbound/conf.d/zero.conf.

server:
  do-not-query-localhost: no
  access-control: allow
  local-data: "cerberus. 86400 IN A"
  local-data: " 86400 IN A"
  local-data: " 86400 IN PTR"
  local-zone: "" nodefault
  domain-insecure: "zero"

forward-zone:
  name: "zero"
  forward-addr: 127.0.0.1@5353

forward-zone:
  name: ""
  forward-addr: 127.0.0.1@5353

So what happens here is the following. First we tell unbound on which addresses it should listen for incoming queries. Next we state that querying DNS servers on localhost is totally okay. This is needed to later be able to resolve addresses through the local dnsmasq. If your dnsmasq is running on a different machine, you can leave this out. With access-control we allow the network to query the DNS server. The local-data lines tell unbound that the names given there all belong to one and the same machine, the DNS server. Without these lines unbound would not resolve the name of the local server, even if its name is stated in /etc/hosts. With the local-zone line we enable name resolution for the local network. The key domain-insecure tells unbound that this domain has no support for DNSSEC, which is enabled by default in unbound.

The two forward-zone entries tell unbound where it should ask for queries regarding the zero TLD and the reverse entries of the network. The forward address in both cases points to the dnsmasq instance, which in my case is running on localhost on port 5353.

Now we can add unbound to /etc/rc.conf and start unbound for the first time with the following command

$ sysrc local_unbound_enable=YES && service local_unbound start

Now you should be able to resolve the local hostname already

$ host has address

configuring dnsmasq

The next step is to configure dnsmasq, so that it provides DHCP and name resolution for the network. When adjusting the config, please read the comments for each option in your config file carefully. You can find an example config in /usr/local/etc/dnsmasq.conf.example. Copy it to /usr/local/etc/dnsmasq.conf and open it in your editor:


First we set the port to 5353, as defined in the unbound config. On this port dnsmasq will listen for incoming DNS requests. The next two options avoid forwarding DNS requests needlessly. The option no-resolv keeps dnsmasq from knowing of any other DNS server, and no-hosts does the same for /etc/hosts. Its sole purpose is to provide DNS for the local domain, so it doesn’t need to know about either.

The next option tells dnsmasq for which domain it is responsible. It will also avoid answering requests for any other domain.

except-interface tells dnsmasq which interfaces not to listen on. You should list all external interfaces here to avoid queries from the outside discovering hosts on your internal network. The option bind-interfaces makes dnsmasq listen only on the allowed interfaces instead of listening on all interfaces and filtering the traffic. This makes dnsmasq a bit more secure, as not listening at all is better than listening and filtering.

The two options expand-hosts and domain=zero will expand all DNS requests with the given domain part if it is missing. This way it is easier to resolve hosts in the local domain.

The next three options configure the DHCP part of dnsmasq. First is the range. In this example, the range has a start and an end address, and all IPs get a 48h lease time. So if a new host enters the network, it will be given an IP from this range. The next two lines set options sent with the DHCP offer to the client, so it learns the default route and DNS server. As both are running on the same machine in my case, they point to the same IP.

Now all machines which should have a static name and/or IP can be set through dhcp-host lines. You have to give the MAC address, the name, the IP and the lease time. There are many examples in the example dnsmasq config, so it is best to read those.
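To sum up the options described above, a minimal sketch of /usr/local/etc/dnsmasq.conf could look like the following. The interface name, the network addresses and the MAC address are assumptions and have to be adjusted to your own network:

# listen on the port unbound forwards to
port=5353
# ignore /etc/resolv.conf and /etc/hosts
no-resolv
no-hosts
# only answer for the internal domain
local=/zero/
# do not listen on the external interface (name is an assumption)
except-interface=em0
bind-interfaces
# append the domain to plain host names
expand-hosts
domain=zero
# DHCP range, default route and DNS server (addresses are assumptions)
dhcp-range=192.168.1.100,192.168.1.200,48h
dhcp-option=option:router,192.168.1.1
dhcp-option=option:dns-server,192.168.1.1
# static lease for a host (MAC, name and IP are assumptions)
dhcp-host=00:11:22:33:44:55,fileserver,192.168.1.10,48h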

When your configuration is done, you can enable the dnsmasq service and start it

$ sysrc dnsmasq_enable=YES && service dnsmasq start

When you get your first IP, do the following request and it should give you your IP

$ host $(hostname) has address

With this, we have a running DNS server setup with DHCP.

common table expressions in postgres


Four weeks ago I was asked to show some features of PostgreSQL. For that presentation I came up with an interesting statement with which I could show a nice feature.

What I’m talking about is the usage of common table expressions (CTEs for short) and explain.

Common table expressions create a temporary table just for this query. The result can be used anywhere in the rest of the query. It is pretty useful to group sub selects into smaller chunks, but also to create DML statements which return data.

A statement using CTEs can look like this:

with numbers as (
  select generate_series(1,10)
)
select * from numbers;

But it gets even nicer when we use this to move data between tables, for example to archive old data.

Let’s create a table and an archive table and try it out.

$ create table foo(
  id serial primary key,
  t text
);
$ create table foo_archive(
  like foo
);
$ insert into foo(t)
  select generate_series(1,500);

The like option can be used to copy the table structure to a new table.

The table foo is now filled with data. Next we will delete all rows where the ID modulo 25 is 0 and insert those rows into the archive table.

$ with deleted_rows as (
  delete from foo where id % 25 = 0 returning *
)
insert into foo_archive select * from deleted_rows;

Another nice feature of postgres is the possibility to get an explain from a delete or insert. So when we prepend explain to the above query, we get this explain:

                            QUERY PLAN
 Insert on foo_archive  (cost=28.45..28.57 rows=6 width=36)
   CTE deleted_rows
     ->  Delete on foo  (cost=0.00..28.45 rows=6 width=6)
           ->  Seq Scan on foo  (cost=0.00..28.45 rows=6 width=6)
                 Filter: ((id % 25) = 0)
   ->  CTE Scan on deleted_rows  (cost=0.00..0.12 rows=6 width=36)
(6 rows)

This explain shows that a sequential scan is done for the delete and grouped into the CTE deleted_rows, our temporary view. This is then scanned again and used to insert the data into foo_archive.

range types in postgres


Nearly two years ago, Postgres got a very nice feature - range types. These are available for timestamps, numerics and integers. The problem was that, until now, I didn’t have a good example of what one could do with them. But today someone gave me a quest to use them!

His problem was that they had ID ranges used by customers and they weren’t sure whether any of them overlapped. The table looked something like this:

create table ranges(
  range_id serial primary key,
  lower_bound bigint not null,
  upper_bound bigint not null
);

With data like this

insert into ranges(lower_bound, upper_bound) values
  (120000, 120500), (123000, 123750), (123750, 124000);

They had something like 40,000 rows of that kind. So this was perfect for using range type queries.

To find out if there was an overlap, I used the following query:

select *
  from ranges r1
  join ranges r2
    on int8range(r1.lower_bound, r1.upper_bound, '[]') &&
       int8range(r2.lower_bound, r2.upper_bound, '[]')
 where r1.range_id != r2.range_id;

In this case, int8range takes two bigint values and converts them into a range. The string '[]' defines whether the two bounds are included in or excluded from the range. In this example, both are included. The output of this query looked like the following:

 range_id │ lower_bound │ upper_bound │ range_id │ lower_bound │ upper_bound
        2 │      123000 │      123750 │        3 │      123750 │      124000
        3 │      123750 │      124000 │        2 │      123000 │      123750
(2 rows)

Time: 0.317 ms
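To see the overlap operator and the inclusive bounds in isolation, a quick check with arbitrary values can be run:

select int8range(1,5,'[]') && int8range(5,9,'[]');
-- returns true, as both ranges include the value 5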

But as I said, the table had 40,000 rows. That means the set to filter has a size of 1.6 billion combinations. The computation of the query took a very long time, so I used another nice feature of postgres - transactions.

The idea was to add a temporary index inside a transaction to get the computation done much faster (the index is also described in the documentation).

begin;
create index on ranges using gist(int8range(lower_bound, upper_bound, '[]'));
select *
  from ranges r1
  join ranges r2
    on int8range(r1.lower_bound, r1.upper_bound, '[]') &&
       int8range(r2.lower_bound, r2.upper_bound, '[]')
 where r1.range_id != r2.range_id;
rollback;

The overall runtime in my case was 300 ms, so the write lock from creating the index wasn’t much of a concern anymore.

learning the ansible way


Some weeks ago I read a blog post about rolling out your configs with ansible as a way to learn how to use it. The post wasn’t full of information on how to do it, but the author’s repository was a great inspiration.

As I had stopped using cfengine and wanted to use ansible instead, this was a great opportunity to learn it further, and I have to say, it is a really nice experience. Apart from a bunch of configs I still find every now and then, I have everything in my config repository.

The config is currently split between servers and workstations, both using an inventory file with localhost. As I mostly use FreeBSD and Arch Linux, I had to set the python interpreter path to different locations. There are two ways to do that in ansible. The first is to add it to the inventory:
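A minimal inventory sketch for this could look like the following; the group name and interpreter path are taken from the playbook below, while ansible_connection=local is an assumption for running against the local machine:

[hosts]
localhost ansible_connection=local ansible_python_interpreter=/usr/local/bin/python2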



and the other is to set it in the playbook

- hosts: hosts
  vars:
    ansible_python_interpreter: /usr/local/bin/python2
  roles:
    - vim

The latter has the small disadvantage that running plain ansible is not possible. Ansible in command and check mode also needs an inventory and uses the variables set there, but if they are missing, ansible has no idea what to do. At the moment this isn’t much of a problem. Maybe it can be solved by using a dynamic inventory.

What I can definitely recommend is using roles. These are descriptions of what to do and can be filled with variables from the outside. I have used them to bundle all tasks for one topic. Then I can include these for the hosts I want them on, which makes for rather nice playbooks. One good example is my vim config, as it shows how to use lists.

All in all I’m pretty impressed by how well it works. At the moment I’m working on a way to provision jails automatically, so that I can run the new server completely through ansible. That should make moving to a new server in the future much easier.
