playing with go

Gibheer
2014-04-04
22-39

For some weeks now I have been playing with Go, a programming language developed with support from Google. I'm not really sure yet whether I like it or not.

The ugly things first - so that the nice things can be enjoyed longer.

Go's package management is probably one of the worst points of the language. It includes a system to load code from pretty much any repository system, and everything has to live in version control. The weird thing is that they forgot to make it possible to pin dependencies to a specific version. Some projects are working on implementing this feature, but it will probably take some time.
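
Fetching a dependency is a single command; the repository path here is just a placeholder:

go get github.com/example/somelib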

What I also miss is a shell to test code and just try stuff, but Go is a compiled language. I really like having a shell for small code spikes, calculations and the like. I really hope they will include one sometime in the future, but I doubt it.

With that also comes a very strict project directory structure, which makes it nearly impossible to just open a project anywhere and code away. One has to move the code into that structure.
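
Roughly, the expected layout looks like this (the repository path is made up):

$GOPATH/
  bin/
  pkg/
  src/github.com/you/yourproject/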

The naming of functions and variables is strict too. Everything is bound to the package namespace by default. If the variable, type or function begins with a capital letter, it means that the object is exported and can be used from other packages.

// a public function
func FooBar() {
}

// not a public function
func fooBar() {
}

Coming from other programming languages, this might be a bit irritating and I still don't really like the strictness, but my hands have learned the lesson and mostly capitalize things for me.

Now the most interesting part for me is that I can use Go very easily. I have to look up many of the functions, but the syntax is very easy to learn. Just for fun I built a small cassandra benchmark in a couple of hours and it works very nicely.

After some adjustments it even ran in parallel and has now been stressing a cassandra cluster for more than three weeks. That was a very nice experience.

Starting a goroutine, Go's lightweight version of a thread, is surprisingly easy. There is not much needed to get it started.

go function(arg1, arg2)

It is really nice that a small two letter keyword is all it takes to run the function in parallel.
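
A small, self-contained sketch of what that looks like in practice - the worker function and its output are made up, and sync.WaitGroup is only there to wait for the goroutines to finish:

package main

import (
  "fmt"
  "sync"
)

// work simulates something that can run concurrently.
func work(id int, wg *sync.WaitGroup) {
  defer wg.Done()
  fmt.Println("worker", id, "done")
}

func main() {
  var wg sync.WaitGroup
  for i := 0; i < 3; i++ {
    wg.Add(1)
    go work(i, &wg) // the go keyword starts the function in its own goroutine
  }
  wg.Wait() // block until all goroutines are finished
}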

Go also includes a feature I have wished for in Ruby for some time. Here is an example of what I mean

def foo(arg1)
  return unless arg1.respond_to?(:bar)
  do_stuff
end

What this function does is test the argument for a specific method. Essentially it is an interface without a name. I have found it pretty nice to ask for methods instead of some weird name someone put behind the class name.

The Go designers found another approach to the same problem. They also called them interfaces, but they work a bit differently. The same example, this time in Go

type Barer interface {
  Bar()
}

func foo(b Barer) {
  do_stuff()
}

In Go we give our method constraint a name and use that name in the function definition. But instead of declaring the interface on the struct or class like in Java, only the method has to be implemented and the compiler takes care of the rest.
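
To make the "compiler takes care of the rest" part concrete, here is a small made-up sketch: the struct never mentions the interface, it just implements the method:

package main

import "fmt"

type Barer interface {
  Bar()
}

// Thing satisfies Barer simply by having a Bar method.
type Thing struct{}

func (t Thing) Bar() {
  fmt.Println("bar called")
}

func foo(b Barer) {
  b.Bar()
}

func main() {
  foo(Thing{}) // compiles because Thing implements Bar()
}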

But the biggest improvement for me is the tooling around Go. It ships with a formatting tool, a documentation tool and a test tool. And everything works blazingly fast. Even the compiler runs in mere seconds instead of minutes. It is actually fun to have such a fast feedback cycle with a compiled language.

So for me, Go is definitely an interesting but not perfect project. The language definition is great and the tooling is good. But the strict and weird project directory structure and the package management are currently a big problem for me.

I hope they get that figured out and then I will gladly use Go for some stuff.

no cfengine anymore

Gibheer
2014-03-16
10-51

I thought I could write more good stuff about cfengine, but it had some pretty serious issues for me.

The first issue is the documentation. There are two documents available: one for an older version, which is very well written, and a newer one which is a nightmare to navigate. I would use the older version if it worked all the time.

The second issue is that cfengine can destroy itself. cfengine is one of the oldest configuration management systems and I didn’t expect that.

Given a configuration error, the server will hand out the broken files to the agents. As the agent pulls are configured in the same promise files as the rest of the system, an error in any file results in the agent no longer being able to pull any new version.

Furthermore, the syntax is not easy at all and has some bogus limitations. For example, a promise file must not have a dash in its name. But instead of a warning or an error, cfengine simply can't find the file.

This is not at all what I expect to get.

What I need is a system which can't deactivate itself or, even better, just runs from a central server. I also don't want to run weird scripts just to get ruby compiled on the system to set up the configuration management. In my eyes, that is part of the job of the tool.

The only one I found which can handle that seems to be ansible. It is written in python and runs all commands remotely with the help of python or in a raw mode. The first tests also looked very promising. I will keep posting about how it is going.

scan to samba share with HP Officejet pro 8600

Gibheer
2014-03-16
10-28

Yesterday I bought a printer/scanner combination, an HP Officejet pro 8600. It has some nice functions included, but the most important one for us was the ability to scan to a network storage. As I did not find any documentation on how to get the device to speak with a samba share, I will describe it here.

To get started I assume that you already have a configured and running samba server.

The first step is to create a new system user and group. This user will be used as the login on the samba server for the scanner. The group will hold all users which should have access to the scanned documents. The following commands are for freebsd, but there should be an equivalent for any other system (like useradd).

pw groupadd -n scans
pw useradd -n scans -u 10000 -c "login for scanner" -d /nonexistent -g scans -s /usr/sbin/nologin

We can already add the user to the samba user management. Don't forget to set a strong password.

smbpasswd -a scans

As we have the group for all scan users, we can add every account which should have access

pw groupmod scans -m gibheer,stormwind

Now we need a directory to store the scans in. We make sure that no one other than group members can modify data in that directory.

zfs create rpool/export/scans
chown scans:scans /export/scans
chmod 770 /export/scans

Now that we have the system stuff done, we need to configure the share in the samba config. Add and modify the following part

[scans]
comment = scan directory
path = /export/scans
writeable = yes
create mode = 0660
guest ok = no
valid users = @scans

Now restart/reload the samba server and the share should be good to go. The only thing left is to configure the scanner to use that share. I did it over the web interface. For that, go to https://<yourscannerhere>/#hId-NetworkFolderAccounts. There we add a new network folder with the following data:

  • display name: scans
  • network path:
  • user name: scans
  • password:

In the next step you can secure the network folder with a pin. In the third step you can set the default scan settings and then you are done. Save and test the settings and everything should work fine. The first scan will be named scan.pdf and all following ones have an id appended. Too bad there isn't a setting to append a timestamp instead. But it is still very nice to be able to scan to a network device.

[cfengine] log to syslog

Gibheer
2014-02-24
21-51

When you want to start with cfengine, it is not exactly obvious how some things work. To make it easier for others, I will write about what I find out in the process.

To start, here is the first thing I found out. By default cfengine logs to files in the work directory. This can get a bit ugly when the agent runs every 5 minutes. As I use cf-execd, I added the option executorfacility to the executor control body.

body executor control {
  executorfacility => "LOG_LOCAL7";
}

After that, a restart of cf-execd will result in the logs appearing through syslog.

overhaul of the blog

Gibheer
2014-02-19
09-42

The new blog is finally online. It took us more than a year to finally get the new design done.

First we replaced thin with puma. Thin was becoming more and more of a bother and didn't really work reliably anymore. Because of the software needed, the stack was pinned to specific versions of rack, thin, rubinius and some other stuff. Changing one dependency meant a lot of work to get it going again. Puma together with rubinius makes a pretty nice stack and so far it has worked pretty well. We will see how well it handles running longer than a few hours.

The next thing we did was throw out sinatra and replace it with zero, our own toolkit for building small web applications. But instead of building yet another object spawning machine, we tried something different. The new blog uses a chain of functions to process a request into a response. This has the advantage that the number of objects kept around for the lifetime of a request is minimized, the call stack is shallower and all in all it should now need much less memory to process a request. From the numbers, things are looking good, but we will see how it behaves in the future.

On the frontend we minimized the layout further, but gained some nice functionality. It is now possible to view one post after another through the same pagination mechanism. This should make for a nice experience when reading a number of posts in a row.

We hope you like the new design and will enjoy reading our stuff in the future too.

block mails for unknown users

Gibheer
2014-01-16
09-01

Postfix’ policy system is a bit confusing. There are so many knobs to avoid receiving mails which do not belong to any account on the system and most of them check multiple things at once, which makes building restrictions a bit of a gamble.

After I finally enabled the security reports in freebsd, the amount of mails in the mail queue hit me. After some further investigation I even found error messages from dspam, which had trouble rating spam for recipients that did not even exist on the system.

To fix it, I read through the postfix documentation again and built new and hopefully better restrictions. The result was even more spam getting through. After a day went by and my head was relaxed, I read the documentation again and found the following in the postfix manual

The virtual_mailbox_maps parameter specifies the lookup table with all valid recipient addresses. The lookup result value is ignored by Postfix.

So instead of one of the many restrictions, a seemingly unrelated parameter is responsible for blocking mails for unknown users. Another related parameter is smtpd_reject_unlisted_recipient. That is the only other place I could find which mentions virtual_mailbox_maps, and I only found it when looking for links for this blog entry.
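
As a rough sketch, the relevant part of main.cf could look like this - the domain and the map file are placeholders for whatever the setup actually uses:

virtual_mailbox_domains = example.org
virtual_mailbox_maps = hash:/etc/postfix/vmailbox
smtpd_reject_unlisted_recipient = yes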

So if you ever have problems with receiving mails for unknown users, check smtpd_reject_unlisted_recipient and virtual_mailbox_maps.

choosing a firewall on freebsd

Gibheer
2014-01-06
16-15

As I was setting up a firewall on my freebsd server, I had to choose one of the three firewalls available.

There is the freebsd developed firewall ipfw, the older filter ipf and the openbsd developed pf. As for features, they all have their advantages and disadvantages. It is best to read the firewall documentation of freebsd.

In the end my decision was to use pf for one reason - it can check the syntax of the ruleset before loading any of it. This was very important for me, as I'm not able to get direct access to the server easily.

ipf and ipfw both get initialized by a series of shell commands, which means the firewall control program gets called over and over. If one command fails, the script may abort and the firewall ends up in a state not defined by the script. You may not even be able to get into the server via ssh anymore and need a reboot.

This is less of a problem with pf, as it does a syntax check on the configuration beforehand. It is not possible to throw pf into an undefined state because of a typo. The only way left to lock yourself out is to forget to allow ssh access or something similar.
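
For example, the following only parses the ruleset and reports errors, without loading anything:

pfctl -nf /etc/pf.conf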

I found the syntax of pf a bit weird, but I got a firewall up and running which seems to work pretty well. ipfw looks similar, so maybe I will try it next time.

use dovecot to store mails with lmtp

Gibheer
2013-11-06
06-37

After more than a year of working on my mail setup, I think I have it running in a pretty good way. As some of it is not documented anywhere on the internet, I will post parts here to make it accessible to others.

Many setups use the MTA (postfix, exim) to store mails on the filesystem. My setup lets dovecot take care of that. That way it is the only process able to change data on the filesystem.

To make this work, we first need an lmtp socket opened by dovecot. The configuration part looks like this

service lmtp {
  unix_listener /var/spool/postfix/private/delivery.sock {
    mode = 0600
    user = postfix
    group = postfix
  }
}

LMTP is a lightweight variant of the smtp protocol and most mail server components can speak it.

Next we need to tell postfix to send mails to this socket instead of storing them on the filesystem itself. This can be done with the following setting

mailbox_transport = lmtp:unix:/var/spool/postfix/private/delivery.sock

or for virtual accounts with

virtual_transport = lmtp:unix:/var/spool/postfix/private/delivery.sock

Now postfix will use the socket to deliver the mails.

It is also possible to put other services between these two, like dspam. In my case postfix delivers the mails to dspam, which then delivers them to dovecot.

For dovecot, change the path of the socket to something dspam can reach. I'm using /var/run/delivery.sock.
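
That only changes the unix_listener block from above; the user and group here are an assumption and depend on the account dspam runs under:

service lmtp {
  unix_listener /var/run/delivery.sock {
    mode = 0600
    user = dspam
    group = dspam
  }
}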

Then change the dspam.conf to use that socket as a delivery host

DeliveryProto LMTP
DeliveryHost  "/var/run/delivery.sock"

As postfix needs to speak to dspam, we set dspam to create a socket too

ServerMode auto
ServerDomainSocketPath "/var/run/dspam.sock"

ServerMode should be set to either auto or standard.

Now the only thing left to do is to tell postfix to use that socket to deliver its mails. For that, set the options from before to the new socket

virtual_transport = lmtp:unix:/var/run/dspam.sock

And with that, we have a nice setup where only dovecot stores mails.

grub can't read zpool

Gibheer
2013-08-05
19-13

This weekend I had a small problem with my omnios installation. The installation is now more than a year old, and back then the feature flags for zfs were really fresh. As time went on zfs got better, but somehow updating the grub installation was missed. When I then booted my server on friday, it did not come up again as grub was unable to load my zpool.

Thanks to Rich Lowe from the illumos project, the bug got fixed. But I had to somehow run installgrub on my system to get it up again. For that to work, I had to find a system which either boots a current illumos kernel or lets me enter a kernel parameter.

To make things more complicated, the current omnios installation medium did not like my system and just rebooted with a kernel panic, without me having a chance to get my kernel parameter from the zpool. FreeBSD 9.1 was not able to read the zpool because of the feature flags. After half a day of trial and error I just went with SmartOS and lo and behold - it worked!

So I imported my zpool with zpool import -R /mnt rpool, read my menu.lst to get the parameter, restarted with omnios, entered that parameter for the kernel in grub and got a live environment up to run installgrub!

For me, the parameter in question is -B acpi-user-options=0x2,$ZFS-BOOTFS to switch off ACPI on my Sandy Bridge Celeron system.

sysidcfg replacement on omnios

Gibheer
2013-07-17
22-19

A very nice feature on Solaris was the possibility to initialize new zones with a sysidcfg file. This does not exist on omnios. With kayak, omnios' deployment server, a way to run post-boot scripts was created: the file /.initialboot. This is just a shell script which gets executed on the first boot and removed afterwards. Nothing much, but already very useful for the initial setup of dns and the ip.

A little example. I have a zone foo1 with a vnic foonic1. I want to set up dns and dhcp for the interface. The zone is installed in /zones/foo1/, so we have the root file system mounted at /zones/foo1/root.

We create the /.initialboot file (which ends up as /zones/foo1/root/.initialboot) with the following content

# create the IP interface on the vnic
ipadm create-if foonic1
# switch name resolution to dns
cp /etc/nsswitch.dns /etc/nsswitch.conf
# request an IPv4 address via dhcp
ipadm create-addr -T dhcp foonic1/ipv4
# add the nameserver
echo "nameserver 192.168.56.1" >> /etc/resolv.conf

Now boot the zone and after 5 minutes or so everything is set up and ready to go. Makes it really easy.
