A few days ago, a disk in the old server failed. I replaced it with a whole new server that has ECC memory. It also came with an LSI RAID controller, which I did not notice when ordering the system. Configuring the RAID through the controller's internal interface was pretty hard and took a long time. I could have booted a Linux system and used MegaCLI, but from work I knew that figuring out the right commands would have taken approximately the same time.
This week I was looking for a mechanism to build an application specific lock in Postgres. I knew about pg_try_advisory_lock(bigint) and pg_try_advisory_lock(int, int), but could not figure out a good mechanism until I found depesz' blog entry about how to pick a task off a list. He provides some very good insight into the problem and his way of finding a solution. What depesz does there is use a hash function to feed the advisory lock functions.
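The hashing approach can be sketched like this — a minimal example, assuming a task identified by a text key (the key name is made up for illustration; hashtext() is Postgres' built-in, though undocumented, text hash):

```sql
-- try to take an application lock derived from the task name;
-- returns true if we got it, false if someone else holds it
SELECT pg_try_advisory_lock(hashtext('process-task-42'));

-- ... do the work while holding the lock ...

-- release it again (advisory locks are otherwise held until
-- the session ends)
SELECT pg_advisory_unlock(hashtext('process-task-42'));
```

The two-argument form works the same way and leaves one int free as a namespace, e.g. pg_try_advisory_lock(1, hashtext('process-task-42')::int).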
As you can see, the blog finally got a new design, because we finally came around to actually working on it. So in this post, I will explain a bit of what we did and why we reworked the blog yet again.

History

The old blog engine was a self-written system in Ruby using zero. It worked pretty well for the first couple of months. But on the one hand it was a huge pain to keep up to date, and on the other I never got around to implementing a nice interface for writing new blog entries.
This is the second part of the SSH certificate series, covering server side SSH certificates. You can find the first one here. This post shows what server side certificates are good for and how they can be created. What use have server side certificates? SSH certificates on the host side are used to extend the SSH host keys. They make it possible to better identify a running system, as multiple names can be provided in the certificate.
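Creating a host certificate can be sketched with ssh-keygen — key file names, the identity and the host names here are examples, not taken from the post:

```shell
# 1. create a CA key pair (done once, kept off the servers)
ssh-keygen -t ed25519 -f host_ca -N '' -C 'host CA'

# 2. generate a host key to stand in for /etc/ssh/ssh_host_ed25519_key
ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N ''

# 3. sign it: -h marks the certificate as a host certificate,
#    -n lists every name the server may be reached under
ssh-keygen -s host_ca -I 'server.example.com' -h \
    -n server.example.com,server ssh_host_ed25519_key.pub

# the result is ssh_host_ed25519_key-cert.pub; sshd presents it via
# the HostCertificate directive, and clients trust it with a
# @cert-authority line in known_hosts
ssh-keygen -L -f ssh_host_ed25519_key-cert.pub
```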
All of my infrastructure SSH access has been handled with SSH certificates for more than a year now. As I am asked every now and then how it works, I will describe it in multiple blog posts. This part revolves around client certificates. What is it good for? With plain public key usage, a user is identified by their public key. These keys get put into the user's ~/.ssh/authorized_keys file, and if a user presents the correct key, they are let onto the system.
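With certificates, the user's key is signed once by a CA instead. A minimal sketch of the signing step, with made-up identity and principal names:

```shell
# CA key pair, done once
ssh-keygen -t ed25519 -f user_ca -N '' -C 'user CA'

# the user's normal key pair
ssh-keygen -t ed25519 -f id_ed25519 -N ''

# sign it: -I is the certificate identity (it shows up in the server
# logs), -n the principals (login names) the certificate is valid for
ssh-keygen -s user_ca -I 'alice' -n alice id_ed25519.pub

# instead of distributing id_ed25519.pub into every authorized_keys
# file, each server only needs to trust the CA once:
#   TrustedUserCAKeys /etc/ssh/user_ca.pub
ssh-keygen -L -f id_ed25519-cert.pub
```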
As I was asked today how I manage the nginx setup, I thought I'd write it down. The configuration was inspired by a blog entry of Zach Orr (it looks like the blog post has been gone since 2014). The setup consists of one main configuration and multiple domain specific configuration files which get included from the main config. If a domain uses certificates, these are pulled in by their respective files.
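The layout can be sketched like this — paths and the domain are examples, assuming one file per domain under a directory that the main config includes:

```nginx
# /etc/nginx/nginx.conf (main config)
http {
    # every domain lives in its own file and is pulled in here
    include /etc/nginx/domains/*.conf;
}

# /etc/nginx/domains/example.org.conf (one domain specific file)
server {
    listen 443 ssl;
    server_name example.org;

    # the domain's certificate is referenced from its own file
    ssl_certificate     /etc/ssl/example.org/fullchain.pem;
    ssl_certificate_key /etc/ssl/example.org/key.pem;
}
```

Adding a domain is then just dropping a new file into the directory and reloading nginx.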
Some weeks ago a tool caught my attention - pgstats. It was mentioned in a blog post, so I tried it out and it made a very good first impression. Now version 1.0 has been released. It can be found on GitHub. It is a small tool to gather statistics from Postgres at intervals, just like iostat, vmstat and the other *stat tools do. It has a number of modules for this, for example for databases, tables, index usage and the like.
Before Sun was bought by Oracle, OpenSolaris got ever newer versions, and upgrading the pool format was just a $ zpool upgrade rpool away. But since then, the open source version of ZFS has gained feature flags, which zpool upgrade lists per pool:

POOL   FEATURE
---------------
tank1
       multi_vdev_crash_dump
       enabled_txg
       hole_birth
       extensible_dataset
       embedded_data
       bookmarks
       filesystem_limits

If you want to enable only one of these features, you may already have hit the problem that zpool upgrade can only upgrade a single pool wholesale, or all pools at once.
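Individual feature flags can instead be enabled one at a time with zpool set — a sketch using the pool and a feature from the listing above (it needs a live pool, so run it on a test pool first):

```shell
# enable exactly one feature flag instead of upgrading everything
zpool set feature@hole_birth=enabled tank1

# verify it took effect (state changes from "enabled" to "active"
# once the feature is actually used)
zpool get feature@hole_birth tank1
```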
After some time of using an Almond as our router and always having trouble with disconnects, I bought a small APU1D4, a low power AMD board, as our new router. It is now running FreeBSD and is very stable. Not a single connection has been dropped yet. As we have some services in our network, like a fileserver and a printer, we always wanted to use names instead of IPs, but no router so far could provide that.
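Local names can be served from the router itself — a sketch using unbound(8), which ships with FreeBSD (an assumption on my part; the zone and addresses are made up, and the post does not say which resolver is actually used):

```
server:
    # answer for a private zone ourselves instead of forwarding it
    local-zone: "home.lan." static
    local-data: "fileserver.home.lan. IN A 192.168.1.10"
    local-data: "printer.home.lan.    IN A 192.168.1.20"
```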
Four weeks ago I was asked to show some features of PostgreSQL. In that presentation I came up with an interesting statement with which I could show a nice feature. What I'm talking about is the usage of common table expressions (or CTEs for short) together with explain. A common table expression creates a temporary result set just for this query. The result can be used anywhere in the rest of the query. It is pretty useful for grouping subselects into smaller chunks, but also for creating DML statements which return data.
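A writable CTE shows both points at once — the DML statement's RETURNING clause feeds the rest of the query. A minimal sketch, with made-up table names:

```sql
-- move finished tasks into an archive in a single statement:
-- the DELETE returns the removed rows, the outer INSERT consumes them
with moved as (
    delete from tasks
    where done
    returning id, title
)
insert into tasks_archive (id, title)
select id, title from moved;
```

Prefixing the whole statement with explain then shows how the CTE is planned.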
Nearly two years ago, Postgres got a very nice feature - range types. These are available for timestamps, numerics and integers. The problem is that until now, I didn't have a good example of what one could do with them. But today someone gave me a quest to use them! His problem was that they had id ranges used by customers and they weren't sure whether any of them overlapped. The table looked something like this:

create table ranges (
    range_id    serial primary key,
    lower_bound bigint not null,
    upper_bound bigint not null
);

With data like this:

insert into ranges (lower_bound, upper_bound)
values (120000, 120500),
       (123000, 123750),
       (123750, 124000);

They had something like 40,000 rows of that kind.
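One way to find the overlaps — a sketch of my own, not necessarily the query from the post — is to build int8range values on the fly and let the && (overlaps) operator do the work, with a self-join reporting each colliding pair once:

```sql
-- '[]' makes both bounds inclusive, so the sample rows
-- (123000,123750) and (123750,124000) count as overlapping
select a.range_id, b.range_id
from ranges a
join ranges b on a.range_id < b.range_id
where int8range(a.lower_bound, a.upper_bound, '[]')
   && int8range(b.lower_bound, b.upper_bound, '[]');
```

For 40,000 rows a GiST index on the range expression keeps this from being a full cross comparison.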