There are tons of tutorials out there on how to get poudriere running in a jail. But most of them either miss options or include too many of them. So what I am trying to do here is present the most condensed version of the whole process to get poudriere up and running and serving the generated packages. In the following four sections, we create a jail with the name poudriere1, using the ZFS datasets tank/jails/poudriere1 and tank/jails/poudriere1/data.
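The dataset layout above can be prepared with a couple of zfs commands, along the following lines (a sketch; the pool and dataset names are from the text, the mountpoints are my assumption):

```shell
# create the dataset for the jail itself and a child dataset for the
# poudriere data (ports trees, build results, packages); the child
# inherits its mountpoint from the parent
zfs create -o mountpoint=/jails/poudriere1 tank/jails/poudriere1
zfs create tank/jails/poudriere1/data
```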
Because of some problems with installing postfix and opensmtpd at the same time, I again had a reason to invest some time into FreeBSD jails. As I had some problems with the IP allocation, I am documenting what I found out here. First and foremost, I think it would have been easier using VIMAGE/vnet, but that still isn't enabled by default on 10.2 and 10.3, the versions I tested. The following settings are for the jail.conf system, but can also be used on the command line.
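To make the jail.conf side concrete, here is a minimal sketch of a jail definition with a static IP allocation (jail name, paths, interface, and address are made-up examples, not from the post):

```conf
# /etc/jail.conf -- defaults shared by all jails
exec.start = "/bin/sh /etc/rc";
exec.stop  = "/bin/sh /etc/rc.shutdown";
mount.devfs;

mail {
    host.hostname = "mail.example.com";
    path = "/jails/mail";
    # add the address as an alias on the host interface
    # for the lifetime of the jail
    interface = "em0";
    ip4.addr = 192.168.1.10;
}
```

The same parameters can be given to jail(8) on the command line; jail.conf just makes them persistent.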
Over the last couple of days I found a number of very interesting links. As some of them might be interesting for others too, I will put them here with some short explanations. Go There is some really interesting development in the Go community to build tools on top of the provided compiler infrastructure to do various things with your code. Some of these tools are: depscheck by divan (found through a blog post), which checks a package's dependencies and prints some statistics about them.
Last week I found two nice tools which can help when developing tools in Go. The first one is json-to-go. It can take a JSON document and convert it into a Go struct able to hold the content. The second is curl-to-go, which takes a curl command with its arguments and converts it into Go code. Both tools are pretty helpful when developing against third-party web APIs, and they have already helped me out.
Some days ago, a disk in the old server failed. I replaced it with a new server with ECC memory. It also came with an LSI RAID controller, which I did not notice when ordering the system. Configuring the RAID through the controller's internal interface was pretty hard and took a long time. I could have booted a Linux system and used MegaCLI, but I knew from work that figuring out the right commands would have taken approximately the same amount of time.
This week I was looking for a mechanism to build an application-specific lock in Postgres. I knew about pg_try_advisory_lock(bigint) and pg_try_advisory_lock(int, int), but could not figure out a good mechanism until I found depesz' blog entry about how to pick a task from a list. He provides some very good insight into the problem and his way of finding a solution. What depesz does there is use a hash function to feed the advisory lock functions.
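The hashing trick boils down to something like the following sketch (the task name is a made-up example; hashtext() is a built-in Postgres function returning an integer, which fits the advisory lock functions):

```sql
-- take one lock per task name: hashtext() maps the text key to an
-- integer key for the advisory lock; returns true if acquired
SELECT pg_try_advisory_lock(hashtext('process-invoices'));

-- ... do the work while holding the lock ...

-- release the lock using the same derived key
SELECT pg_advisory_unlock(hashtext('process-invoices'));
```

The caveat is that two different strings can hash to the same value, so unrelated tasks may occasionally contend for the same lock; for most queue-style workloads that is an acceptable trade-off.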
As you can see, the blog finally got a new design, because we finally came around to actually working on it. So in this post I will explain a bit about what we actually did and why we reworked the blog yet again. history The old blog engine was a self-written system in Ruby using zero. It worked pretty well for the first couple of months. But on the one hand it was a huge pain to keep up to date, and on the other hand I never got around to implementing a nice interface for writing new blog entries.
This is the second part of the SSH certificate series: server-side SSH certificates. You can find the first one here. This post shows what server-side certificates can be used for and how they can be created. What are server-side certificates good for? SSH certificates on the host side extend the SSH host keys. They can be used to better identify a running system, as multiple names can be provided in the certificate.
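The creation boils down to signing the host's public key with a CA key. A minimal sketch with example names (normally the CA key already exists and the host key lives under /etc/ssh; here everything is generated locally for demonstration):

```shell
# 1. generate a CA key pair -- done once, kept somewhere safe
ssh-keygen -t ed25519 -f host_ca -N '' -C "host CA"
# 2. generate a host key pair (normally created by the system)
ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N ''
# 3. sign the host public key: -h marks it as a *host* certificate,
#    -n lists the names the host may be reached under
ssh-keygen -s host_ca -h -I "host.example.com" \
    -n host.example.com,host ssh_host_ed25519_key.pub
# 4. inspect the certificate written next to the key
ssh-keygen -L -f ssh_host_ed25519_key-cert.pub
```

Clients then trust the CA (via a @cert-authority line in known_hosts) instead of collecting individual host keys.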
All SSH access to my infrastructure has been handled with SSH certificates for more than a year now. As I am asked every now and then how it works, I will describe it in a series of blog posts. This part revolves around client certificates. What is it good for? With plain public key authentication, a user is identified by their public key. These keys are put into the ~/.ssh/authorized_keys file, and if a user presents the matching key, they are let onto the system.
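With client certificates, the server instead trusts a CA, and the CA signs each user's public key. A minimal sketch with example names (the CA key pair would normally already exist; it is generated here only for the demonstration):

```shell
# CA key pair -- done once, kept safe
ssh-keygen -t ed25519 -f user_ca -N '' -C "user CA"
# the user's own key pair
ssh-keygen -t ed25519 -f id_ed25519 -N ''
# sign the user's public key; -I is a free-form identity for logging,
# -n lists the principals (login names) the certificate is valid for
ssh-keygen -s user_ca -I "alice" -n alice,deploy id_ed25519.pub
# inspect the resulting certificate
ssh-keygen -L -f id_ed25519-cert.pub
```

On the server, pointing the sshd_config option TrustedUserCAKeys at the CA public key makes sshd accept any certificate signed by it, so individual authorized_keys entries are no longer needed.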
As I was asked today how I manage my nginx setup, I thought I would write it down. The configuration was inspired by a blog entry by Zach Orr (it looks like the blog post has been gone since 2014). The setup consists of one main configuration and multiple domain-specific configuration files which are included from the main config. If a domain uses certificates, these are pulled in by its respective file.
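A rough sketch of that layout (the paths, domain, and certificate locations are examples, not the actual config):

```nginx
# nginx.conf -- shared settings, then one file per domain
http {
    # ... logging, gzip, and other shared settings ...

    include /usr/local/etc/nginx/domains/*.conf;
}

# domains/example.com.conf -- everything specific to one domain,
# including its certificates
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /usr/local/etc/ssl/example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/ssl/example.com/privkey.pem;
}
```

Adding a new domain then means dropping one file into the domains directory and reloading nginx.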
Some weeks ago a tool got my attention - pgstats. It was mentioned in a blog post, so I tried it out, and it made a very good first impression. Now version 1.0 has been released. It can be found on GitHub. It is a small tool to get statistics from Postgres in intervals, just like with iostat, vmstat, and the other *stat tools. It has a number of modules for gathering these, for example for databases, tables, index usage, and the like.