Chris Umbel

Adding a Stand-alone Windows Worker to a bosh-managed Concourse Deployment


In this article I'll demonstrate the procedure for adding a manually-built Windows worker to an existing bosh-managed Concourse deployment. For demonstrative purposes I'll also use our deployment to run a simple pipeline that builds a .NET console application on our newly-created worker.

I'll assume that the Concourse deployment is relatively vanilla and the "tsa" job hasn't substantially changed from the manifest provided by the installation instructions.

This article will *not* be covering building Windows bosh stemcells or deploying Windows workers with bosh. One day when the space matures a little I may do so.


I'll assume you can provide the following resources. Everything else will be downloaded, generated, built or otherwise conjured.

  • A running Windows server
  • A bosh-managed Concourse deployment
  • A workstation with ssh-keygen
  • The bosh cli targeted at your Concourse's bosh director.
  • A basic understanding of Concourse's architecture, specifically the TSA component.

Generating SSH Keys

In order for a trust relationship to be established and for a worker to register itself with the TSA, two SSH RSA keypairs are required.

  • TSA Host Key - the key identifying the TSA service.
  • Worker Key - the key identifying the worker(s). Multiple workers can share this key, but you can also have many of these keys spread across many workers.

The TSA Host and Worker keys can respectively be generated with commands similar to the following:

~/ $ ssh-keygen -f tsakey -t rsa  -N ''
~/ $ ssh-keygen -f workerkey -t rsa  -N '' 

which would grant us the files:

~/ $ ls
tsakey	tsakey.pub	workerkey	workerkey.pub

The "tsakey" and "workerkey" files are the encoded private keys for the TSA host and workers, respectively, with "tsakey.pub" and "workerkey.pub" being their public keys.

Preparing the TSA

All of the configuration necessary to prepare the TSA to register our Windows worker involves the keys generated above.

If your Concourse's "tsa" job remains stock it will likely look like this in your deployment manifest:

  - name: tsa
    release: concourse
    properties: {}

We're going to want to add 3 properties to supplement that default configuration:

  • authorized_keys - an array of public keys belonging to workers that the TSA should trust. Add this property with a string entry containing the content of "workerkey.pub". This establishes trust from our TSA to workers holding the corresponding private key.
  • host_key - the private key belonging to our deployment's TSA job. Add this property with its content copied from the "tsakey" file.
  • host_public_key - the public key belonging to our deployment's TSA job. Add this property with its content copied from the "tsakey.pub" file. Note that we'll be distributing this key to our workers.

resulting in a deployment manifest outlined like this:

  - name: tsa
    release: concourse
    properties:
      authorized_keys:
        - "<content of workerkey.pub>"
      host_key: |
        -----BEGIN RSA PRIVATE KEY-----
        <content of tsakey>
        -----END RSA PRIVATE KEY-----
      host_public_key: "<content of tsakey.pub>"

After updating our deployment as such

~/ $ bosh deploy

our TSA will be updated with the keys necessary to register the Windows worker we'll build.

Preparing The Windows Worker

Now we turn our attention to the Windows server that we'll be turning into a Concourse worker.

First we'll want to establish a directory to house the worker service's binaries and its data, e.g. C:\concourse:

C:\> mkdir concourse
C:\> cd concourse

Now download the Windows concourse binary (named something like "concourse_windows_amd64.exe") from the Concourse download page and place it in our working directory. We'll also want to copy the "tsakey.pub" and "workerkey" files there.

Providing our local concourse binary with "tsakey.pub" establishes that we cryptographically trust the TSA server from our deployment.

We're now ready to start the worker and have it register itself with the TSA.

C:\concourse> .\concourse_windows_amd64.exe worker \
   /work-dir .\work /tsa-host <your TSA's address> \
   /tsa-public-key .\tsakey.pub \
   /tsa-worker-private-key .\workerkey

If all goes well the worker will start up, register itself with the TSA, and appear in the worker list via the Concourse CLI as such:

~/  $ fly -t ci workers
name            containers  platform  tags  team
2a334e70-c75c   3           linux     none  none
WORKERSHOSTNAME 0           windows   none  none

Testing Things Out

Assuming the .NET Framework is present on our worker with the build tools in the PATH, we can test things out by building a simple .NET console application. Consider the pipeline:

resources:
  - name: code
    type: git
    source:
      uri: <your repository's URI>
      branch: master

jobs:
  - name: build
    plan:
      - aggregate:
          - get: code
            trigger: true
      - task: compile
        privileged: true
        file: code/Pipeline/compile.yml

with the build task:

platform: windows

inputs:
  - name: code

run:
  dir: code
  path: msbuild

Note that the platform specified in the build task is "windows". That instructs Concourse to place the task on a Windows worker.
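Assuming the pipeline above is saved locally as pipeline.yml (a filename chosen here for illustration), it can be pushed and unpaused with fly, using the same "ci" target and "datdotnet" pipeline name that appear in the build output below:

```
~/ $ fly -t ci set-pipeline -p datdotnet -c pipeline.yml
~/ $ fly -t ci unpause-pipeline -p datdotnet
```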

If all went well we should see a successful build with output similar to:

~/ $ fly -t ci trigger-job -j datdotnet/build --watch
started datdotnet/build #8

using version of resource found in cache
running msbuild
Microsoft (R) Build Engine version 4.6.1085.0
[Microsoft .NET Framework, version 4.0.30319.42000]
Copyright (C) Microsoft Corporation. All rights reserved.

Building the projects in this solution one at a time. To enable parallel build, please add the "/m" switch.
Build started 11/5/2016 4:04:00 PM.
...align the processor architectures between your project and references, or take a dependency on references with a processor architecture that matches the targeted processor architecture of your project. [C:\concourse\work\containers\00000arl2se\tmp\build\36d0981b\code\DatDotNet\DatDotNet.csproj]

    3 Warning(s)
    0 Error(s)

Time Elapsed 00:00:00.22

Sat Nov 05 2016 12:00:00 GMT+0000 (UTC)


Raspberry Pi Pig Tank

This is my first post about the Raspberry Pi Pig-Tank -- a project my son and I have been working on for a little while. The project is still incomplete, but very functional. When it finally feels "1.0" we'll make a YouTube video that details the project further, but in the meantime I thought it'd be potentially useful to others to see our progress thus far.

Eventually I'll also get a post together that outlines measurements, part numbers and has detailed photos, but that's a ways off.

Motivation and Goals

See, I've been fiddling with Raspberry Pis for several years now, but have largely used them simply as small, low-power computers. Anything more interesting hardware-wise would have me resorting to a microcontroller. I was interested in using a Pi as a control system just for the heck of it, but lacked the inspiration for an interesting project.

After building a simple RC tank kit from Popular Mechanics my son mentioned the idea of mounting a camera on it. The kit itself lacked any real intelligence. It was pretty much just a chassis with treads, motors, motor drivers, an IR receiver (the kit is technically not RC because it's not *Radio* controlled), a battery box... standard stuff. If we were going to get a camera involved we were going to need some proper computing power on there. Considering I have more Raspberry Pis and camera boards than I can shake a stick at, it seemed perfect for the project.

As our conversations continued it was clear that we'd have the opportunity to extend the range of the vehicle. A Pi can easily interface with various wireless communication systems: XBee, WiFi, GSM... To keep things simple to start we agreed that WiFi made sense so we could at least control the vehicle around the house.

It also seemed reasonable to ditch specialized remote controls and go with something web-based. That would allow us to use our laptops, tablets and phones to control the vehicle. Not only are those devices well suited to displaying a camera's video output, but a web interface also increases the cool factor a bit.

We agreed that our end goal should be to pilot the vehicle 0.6 miles from our house to the local grocery store and back, just as an arbitrary measure of awesomeness (we'd obviously have to go beyond simple WiFi to get it done). If anything I pushed for that goal just to get it in his brain that remote control can actually be *very* remote. Maybe it would give him some additional appreciation for the wonderful work that's been done on massively-frickin-ridiculously-remote-controlled vehicles (MFRRCVs), like the great work NASA has done on the Mars rovers. If we can take this thing from tens of feet to hundreds, then to miles, the possibilities are endless!


So I set about getting materials together to build the bloody thing.

The plan for the structure and drive system was simple: use the chassis, treads and motors from the kit. From there holes could be drilled and components fastened.

The control system would be a Raspberry Pi with a USB WiFi dongle. A Model A was chosen due to its favorable power consumption characteristics (the Model A runs sub-200 mA idle while the B runs > 450 mA).

We went with some 7.4V LiPo batteries I had lying around (both the 1000 mAh and 2200 mAh versions fit in my case). Although the Pi and motors were all fine with 5V, these batteries would give us some extra voltage for additional systems in the future (maybe an amplifier for some crazy sound or something). The drawback of that extra 2.4V was that I had to employ a voltage regulator and waste some space on a heatsink for it, but that didn't seem unreasonable.

The standard Raspberry Pi camera module seemed small and light enough to fit the bill, plus I had a few stashed away. There are also relatively easy-to-use programs (raspistill, raspivid) that provided a great starting point.

We also eventually decided to add some red LED eyes and use a 10mm white, ultra-bright LED as a headlight so I grabbed some from the parts bin.

Now, to drive DC motors in both directions something like an H-bridge would be required. Sure, I could have monkeyed around with some MOSFETs to build one, but I decided to go with a little driver board from Pololu instead (it uses a Toshiba TB6612FNG dual motor driver). I've used different driver boards of theirs in 3D printing with good results, so I figured it would likely save me some headaches.

Since there was no way all of our madness was going to fit into the body from the tank kit it was clear that we were going to have to mount a larger enclosure on the kit's chassis. We selected an appropriately-sized project box from Radio Shack to serve this need. Sure, it would end up looking like a rolling box, but who cares.

Physical Construction

The first step was to get the electronics together to prove that all of the ideas would work. After breadboarding everything out I put a board together from prototyping PCB with 8-pin female headers for the motor driver; 4-pin headers for ground, 7.4V and 5V power rails and the regulator with its associated capacitors/rigamarole.

To get the basic structure in place we started with the chassis from the tank kit and sawed off the connector pieces that the body attached to. From there we drilled holes through both the base of the project box and the chassis for mounting hardware and motor wires. Machine screws fastened the bottom of the box to the chassis and long M2.5x20mm-ish screws were inserted to ultimately mount the Pi and the custom board mentioned above.

We mounted the Pi and the custom board on the skinny screws using nylon spacers to keep everything apart. Everything was (and still is) wired up using jumper wires. 5V goes to the Pi's 5V in, the Pi's GPIO pins go to the motor driver's inputs and pretty much everything works as expected.

After toggling GPIO pins from test code to move the motors it was clear that we were on the right track. But, you know, the boy wanted more. He wanted LED eyes. Feature-creep happens at home too, I guess. It seemed that if we were going to go as far as adding ornamental LEDs, a headlamp would be a useful addition. Since the Pi's GPIO pins don't source enough current to get the 10mm ultra-bright LED to burn your retinas, I put a board together with PNP transistors to switch it.

With the connection of the camera and mounting of the wheels/tracks we were in good shape. Everything worked with some test code so it was time to turn my attention to putting proper control software together.


Now we came to the point where I'm actually somewhat professionally qualified (I'm a software developer, I don't hack apart toys by trade). It was time to develop the control software.

Since we were sticking with a vanilla Raspbian install on the Pi our options were open. The path of least resistance seemed to be hacking in Python and using the "RPi.GPIO" package to control the peripherals. As I stated initially, we wanted the system to be web-based. Our needs were simple and small so I chose CherryPy for the job.

To start we kept the video simple. Rather than shoot and stream video we just used raspistill to shoot stills a few times per second. Our web interface would then periodically refresh to get a current view of the situation. If this proves cumbersome we may instead choose to stream video.

In time I'll get the software more organized and get it up on GitHub, but I'm just not there yet.

The Result

Well, here it is in its current form doing my bidding.

Fri Mar 07 2014 00:00:00 GMT+0000 (UTC)


SELinux on Amazon's AWS Linux AMI

One interesting omission from Amazon's Linux AMI is SELinux, and I recently had occasion to install it on a few EC2 instances. The process of installing and enabling SELinux in this environment is actually quite straightforward, although it can require digging through quite a bit of incorrect and obsolete documentation.

The instructions below are what worked for me using the 2012.09 release of the AMI. 2012.09 ships with kernel 3.2.30-49.59.amzn1.x86_64, but these instructions will indeed upgrade it.

The first step is to install the following packages which include SELinux and some accompanying tools.

[root@EC2]# yum install libselinux libselinux-utils selinux-policy-minimum selinux-policy-mls selinux-policy-targeted policycoreutils

Now we have to tell the kernel to enable SELinux on boot. Append the following to the kernel line in your /etc/grub.conf for your current kernel. Note that if you want to boot into permissive mode replace enforcing=1 with permissive=1.

selinux=1 security=selinux enforcing=1

In my case the resulting /etc/grub.conf looked like:

# created by imagebuilder 

title Amazon Linux 2012.09 (3.2.30-49.59.amzn1.x86_64)

root (hd0)
kernel /boot/vmlinuz-3.2.30-49.59.amzn1.x86_64 root=LABEL=/ console=hvc0 selinux=1 security=selinux enforcing=1
initrd /boot/initramfs-3.2.30-49.59.amzn1.x86_64.img

Now install a new kernel and build a new RAM disk. Don't worry, the options you added above will propagate to the new kernel.

[root@EC2]# yum -y update

Relabel the root filesystem

[root@EC2]# touch /.autorelabel

Now examine /etc/selinux/config and ensure the enforcement level and policy you desire are enabled. In my case I stuck with the default fully enforced targeted policy.

# This file controls the state of SELinux on the system. 
# SELINUX= can take one of these three values: 
# enforcing - SELinux security policy is enforced. 
# permissive - SELinux prints warnings instead of enforcing. 
# disabled - No SELinux policy is loaded. 
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values: 
# targeted - Targeted processes are protected, 
# mls - Multi Level Security protection. 
SELINUXTYPE=targeted

Now reboot the instance

[root@EC2]# reboot

Because the root file-system was set to be relabeled rebooting will take a few minutes longer than usual.

Once the instance comes back up log in and verify your work. If everything went as planned the getenforce command will report the following (for full enforcement):

[root@EC2]# getenforce 
Enforcing

And you're done! SELinux is installed and operating on your instance.

Fri Dec 14 2012 00:00:00 GMT+0000 (UTC)


MySQL Replication with Minimal Downtime Using R1Soft Hot Copy for Linux

At the office a while back I was experimenting with techniques to initialize MySQL replication for both InnoDB and MyISAM tables without significant downtime. The MySQL systems in question didn't use LVM, and the idea of locking all tables and performing a backup to ship to the slave simply takes far too long. The method that I ultimately added to my toolbelt was an adaptation of the process outlined in this article by Badan Sergiu, which uses a tool from Idera called R1Soft Hot Copy for Linux.

R1Soft Hot Copy

What differentiates this process from a more standard approach is the employment of R1Soft Hot Copy, a tool that facilitates the creation of a snapshot of a block device. When changes to the original device occur, only the differences are placed in the snapshot in a copy-on-write fashion (similar to VSS in Microsoft Windows). This allows an administrator to create a functional, mountable backup of an entire device almost instantly with very little effort.

Motivation and Caveats

I'm posting these instructions because I'd like some feedback not only on my adaptation, but also on the initial method. Feel free to use any of this information, but please be careful. It worked for me, but I'm not qualified to write authoritative tutorials on the subject.

Prerequisites and Requirements

I'm going to make the assumption that the reader knows how to setup MySQL replication using the methods outlined in the official documentation and that they've read the source article mentioned above.

Also keep in mind that R1Soft Hot Copy is a Linux utility making this article not directly applicable to other operating systems.


A central theme in Badan Sergiu's article was to avoid locking tables (or using LVM). The cost of not locking tables was a restart of the MySQL service itself on the master, meaning that momentarily not even read queries could be processed. My idea was to instead flush and lock tables in the standard fashion while creating the Hot Copy mount. That allows read queries to still be processed and connection attempts to succeed. Writes are temporarily blocked, but only briefly, and clients should have an error-free, albeit slower, experience.

Step 1: Install R1Soft Hot Copy

Use the instructions on Idera's website to install Hot Copy and then run

# hcp-setup --get-module

on the master.

Step 2: Configure master

Enable binary logging on the master server and configure a server id in my.cnf.
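As a sketch, the relevant my.cnf entries on the master would look something like the following; the "mysql-bin" log base name matches the coordinates shown in Step 3, and the server id merely needs to be unique among the servers involved:

```
[mysqld]
server-id = 1
log-bin   = mysql-bin
```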


On the master create a user specifically to be used for replication.
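A sketch of the statements involved; the 'repl' user and 'slavepass' password are the credentials referenced again in Step 7:

```sql
mysql> CREATE USER 'repl'@'%' IDENTIFIED BY 'slavepass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
```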


Step 3: Create/mount a snapshot

Ensure MySQL has flushed all data to disk and then lock the tables so no writes can occur:

mysql> FLUSH TABLES WITH READ LOCK;

Obtain the log coordinates and record the values of the File and Position fields:

mysql> SHOW MASTER STATUS;

| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
| mysql-bin.000002 | 1234     |              |                  |

Create and mount the snapshot on the master. Because all tables are locked the coordinates obtained above will be consistent with the data in the snapshot.

# hcp -o /dev/sda2

... where /dev/sda2 is the device containing the filesystem which houses the MySQL databases to be replicated. Watch the output for the resulting mount point. This process should take mere seconds.

Release the locks on the tables. This will return operation on the master to normal.

mysql> UNLOCK TABLES;

Step 4: Shutdown the slave's mysqld and copy the data

Run these commands on the slave:

# /etc/init.d/mysql stop
# rm -rf /var/lib/mysql
# rsync -avz root@MASTER_IP_OR_HOST:/var/hotcopy/sda2_hcp1/lib/mysql /var/lib/

... where /var/lib/mysql is an example path to MySQL's data.

Step 5: Unmount the snapshot on the master

# hcp -r /dev/hcp1

Step 6: Configure the slave's identity and start MySQL

Edit /etc/mysql/my.cnf on the slave and set a server id.
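
For example, assuming no other slave-specific settings are needed, the only requirement is that this id differ from the master's:

```
[mysqld]
server-id = 2
```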

# /etc/init.d/mysql start

Step 7: Configure and start slave

Now it's time to point the slave at the master and start replication. MASTER_LOG_FILE and MASTER_LOG_POS should be set to the File and Position values recorded in Step 3.

mysql> CHANGE MASTER TO
    ->     MASTER_HOST='MASTER_IP_OR_HOST',
    ->     MASTER_USER='repl',
    ->     MASTER_PASSWORD='slavepass',
    ->     MASTER_LOG_FILE='mysql-bin.000002',
    ->     MASTER_LOG_POS=1234;

mysql> START SLAVE;

At this point replication should be running and the only major service interruption was that writes were blocked for a short period on the master.

There's nothing fundamentally different in the finished product between replication setup in this fashion and a more typical dump-and-copy process. That means monitoring and maintenance should be quite standard.


Also, thanks Badan Sergiu for posting the original article. It helped me immensely.

Sun Nov 18 2012 00:00:00 GMT+0000 (UTC)
