Moreover, golang lends itself very nicely to compiling into statically linked executables and is therefore a perfect candidate to be put into stripped-down containers.
Using multi-stage builds you can describe Dockerfiles that are constructed from multiple different (base) images chained one after another while being able to share artifacts between the stages - all inside just one Dockerfile.
Previously you had to imitate such behavior by creating so-called "builder" docker images that replicated an isolated build environment, just to inject the resulting build artifacts into the actual runtime docker image at a later point.
This time we want to package a golang application into a docker container. That's why we choose the "official" golang image as our starting point for the first "builder" stage:
# BUILDER IMAGE
# official golang 1.12 base image
# stretch indicates a specific debian version
FROM golang:1.12-stretch AS builder
# switch into build directory
WORKDIR $GOPATH/src/github.com/kongo2002/example
# copy sources into container
COPY . .
# get dependencies
RUN go get -d -v
# compile statically linked go binary
# CGO_ENABLED=0 - disable cgo tool
# GOOS=linux - target linux only
# GOARCH=amd64 - no cross compile
# -s - strip debug and symbol table
# -w - strip dwarf symbol table
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
go build -ldflags "-w -s" -o /go/bin/cmd
Now that we have described the building phase of the docker image we can assemble the image components for the actual runtime of the application. We actually have multiple possibilities to go with:
- scratch - empty container (suitable for simple, self-contained applications that won't need any dependency during runtime at all)
- alpine - very popular base image with a very minimal volume footprint (interesting if you need some dependencies after all)
We are going to use the alpine base image in here as we do need some additional runtime dependencies (ca-certificates for SSL to be specific):
# RUNTIME IMAGE
# lean and functional base image
FROM alpine:3.9
# install ca certificates (for SSL)
RUN apk add --no-cache ca-certificates
# fetch binary we built in the 'builder' stage before
COPY --from=builder /go/bin/cmd /go/bin/cmd
ENTRYPOINT ["/go/bin/cmd"]
You can build the final docker image with a single call to docker build just as you are used to:
$ docker build -t kongo2002/go-test -f Dockerfile .
In my example this results in a pretty tiny docker image (YMMV):
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
kongo2002/go-test latest 14f8984cd4a1 About an hour ago 11.5MB
The firmware that is used is called QMK and is open-sourced (here on github). In order to customize the keyboard layout you have to modify the default keymap, compile the firmware and finally flash it onto your keyboard.
Although the QMK project is pretty well-documented, there are no build or setup instructions for the Gentoo distribution at all. Let’s go through these few steps together:
First you have to make sure your gcc is installed with multilib support:
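On gentoo one way to verify this is via the USE flags of the gcc package - a sketch, assuming gentoolkit is installed for equery:
# check whether gcc was built with the multilib USE flag
$ equery uses sys-devel/gcc | grep multilib
# if it is disabled, enable the flag and rebuild gcc
$ echo "sys-devel/gcc multilib" >> /etc/portage/package.use/gcc
$ emerge -av --oneshot sys-devel/gcc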
Now you can install crossdev:
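crossdev lives in the main portage tree:
$ emerge -av sys-devel/crossdev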
Install the AVR compiler toolchain after that:
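Something along the following should do - the exact crossdev flags may need tweaking for your setup:
# build binutils, gcc and the C library for the avr target
$ crossdev -s4 --stable --target avr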
The remaining build toolchain dependencies can be easily installed via portage:
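I believe the packages below cover the usual QMK flashing tools - the exact package atoms may differ in your portage tree:
$ emerge -av dev-embedded/dfu-programmer app-mobilephone/dfu-util dev-embedded/avrdude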
Now you are ready to create your customized firmware in no time.
$ git clone https://github.com/qmk/qmk_firmware.git
$ cd qmk_firmware
# choose the correct keyboard type ('planck' for me)
$ cd keyboards/planck/keymaps
# create your own keymap (e.g. your github user name)
$ cp -a default kongo2002
$ cd kongo2002
# now edit the keymap.c to your liking
# vim keymap.c
# once finished you can build the firmware
$ cd ../../..
$ make planck/rev5:kongo2002
# flash the firmware onto your keyboard
# using the dfu tools we installed earlier
$ make planck/rev5:kongo2002:dfu
In my early Elm days I searched through numerous example projects and popular Elm repositories on Github and picked up all the bits in small pieces. In this post I want to summarize the few steps on how to get started with an "Elm + Webpack" setup you can build upon.
At first we create a new Elm project using yarn or npm:
# create new project
$ yarn init
# install elm webpack integration
$ yarn add elm-webpack-loader file-loader
# install webpack development dependency
$ yarn add webpack webpack-cli webpack-dev-server -D
Now you can add some helper scripts into your newly created package.json:
"scripts": {
"build": "webpack --mode production",
"dev": "webpack --mode development",
"client": "webpack-dev-server --port 3000 --mode development"
}
Next we are going to create a very basic webpack.config.js that processes Elm files using the elm-webpack-loader and the remaining files using a basic file-loader:
var path = require('path');
module.exports = {
entry: {
app: [
'./src/index.js'
]
},
output: {
path: path.resolve(__dirname + '/dist'),
filename: '[name].js',
},
module: {
rules: [
{
test: /\.(css|scss)$/,
loader: 'file-loader?name=[name].[ext]',
},
{
test: /\.html$/,
exclude: /node_modules/,
loader: 'file-loader?name=[name].[ext]',
},
{
test: /\.elm$/,
exclude: [/elm-stuff/, /node_modules/],
loader: 'elm-webpack-loader?verbose=true&warn=true',
},
],
noParse: /\.elm$/,
},
devServer: {
inline: true,
stats: { colors: true },
},
};
Now we take a basic HTML index page stub that the Elm application will be injected into:
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Elm + Webpack in 2 minutes</title>
</head>
<body>
<script src="app.js"></script>
</body>
</html>
The corresponding javascript entry point can start with something like the following:
'use strict';
require('./index.html');
var Elm = require('./Main.elm');
var app = Elm.Main.fullscreen();
/* in here you might register ports and such */
Now that we have a basic setup prepared we can write the first lines of Elm to start with:
module Main exposing ( main )
import Html exposing (..)
type Msg
= NoOp
type alias Model =
{ state : List String
}
initialModel : Model
initialModel = Model []
view : Model -> Html Msg
view model =
p [] [ text "hi, this was faster than 2 minutes right?!" ]
update : Msg -> Model -> Model
update msg model =
case msg of
NoOp -> model
main : Program Never Model Msg
main =
Html.beginnerProgram
{ model = initialModel
, view = view
, update = update
}
The lines above will look pretty familiar to everyone that has written or seen some Elm before - you can see the main components that drive basically every Elm application: a “model”, “view” and an “update” function.
That’s all we needed so far - let’s run it:
# build a production release
$ yarn build
# the resulting artifacts are placed into the 'dist' folder
$ firefox dist/index.html
In case I have to come back to my steps I will shortly summarize what I tried just now in here.
The hubot service we are about to create has to be registered in your Slack workspace beforehand. This is just a few clicks away given you have an existing Slack workspace and the necessary permissions to create and administrate the workspace:
- go to the "Bot User" page and create a new bot user with the desired "display name"
- go to the "App page" and register the bot with the workspace you want the bot to live in. After authorization you can copy the Bot OAuth Access Token - we will need this token for the hubot configuration later
Now that we have established the basic setup to integrate a slack bot we are going to setup hubot itself. There are numerous ways to achieve that - this is what I did:
# install yeoman and the hubot generator
# you may install either locally or globally (-g)
$ npm install -g yo generator-hubot
$ mkdir uhlebot
$ cd uhlebot
# start the scaffolding script
$ yo hubot
# answer the questions you are asked (name, author, adapter ...)
# ...
# install hubot-grafana integration
$ npm install hubot-grafana --save
After that you are almost ready to start - you have to add the hubot-grafana plugin to the external-scripts.json configuration file first:
// external-scripts.json
[
"hubot-diagnostics",
"hubot-help",
"hubot-rules",
"hubot-shipit",
// ...
"hubot-grafana"
]
Moreover there are a couple of configuration values you can or must set via environment variables.
The bare minimum set of configuration values are the following three:
- HUBOT_GRAFANA_HOST: specify the HTTP endpoint to your Grafana instance (CAUTION: without a trailing slash!)
- HUBOT_GRAFANA_API_KEY: a proper Grafana API key or "Viewer" (if anonymous access is possible and desired)
- HUBOT_SLACK_TOKEN: a Slack Bot user token (see Bot OAuth Access Token above)
Instead of directly uploading the PNG image files to Slack you will probably want to utilize a publicly available S3 bucket to store the generated metrics files. The following additional configuration values are needed in that case:
- HUBOT_GRAFANA_S3_ENDPOINT: endpoint of the S3 API (defaults to s3.amazonaws.com)
- HUBOT_GRAFANA_S3_BUCKET: name of the S3 bucket to copy the images into
- HUBOT_GRAFANA_S3_ACCESS_KEY_ID: access key ID for S3
- HUBOT_GRAFANA_S3_SECRET_ACCESS_KEY: secret access key for S3
- HUBOT_GRAFANA_S3_PREFIX: bucket prefix (useful for shared buckets)
- HUBOT_GRAFANA_S3_REGION: bucket region (defaults to us-standard)
That's all you have to do - you can now start the bot by specifying the slack adapter:
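A minimal invocation could look like the following - the Grafana host is a placeholder and the token is the one copied from the Slack app page:
$ HUBOT_GRAFANA_HOST=https://grafana.example.com \
  HUBOT_SLACK_TOKEN=xoxb-... \
  ./bin/hubot --adapter slack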
By default hubot listens on port 8080 - you may change that by overriding the PORT environment variable:
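For example (same placeholders as above):
$ PORT=8081 HUBOT_SLACK_TOKEN=xoxb-... ./bin/hubot --adapter slack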
Once started and connected to your Slack workspace you can interact with the bot either via direct messages or via mentions in all channels the bot was invited into before.
Please refer to the documentation of hubot-grafana for all the possible commands - the most common ones are probably:
- graf list: list all available dashboards
- graf db <dashboard>: query all panels of the given dashboard
- graf db <dashboard>:<panel>: query a specific panel of the given dashboard
- graf db <dashboard> now-30m now: query a dashboard with a specified time range (here: last 30 minutes)
Let me quickly describe what I came up with that ended up being a pretty flexible approach to be included in an automated build.
At first we want to get all protobuf definitions that are probably placed somewhere in the project repository.
$ mkdir -p proto
$ cp $(find /some/repository/path -path '*src/protobuf') proto
After that we compile the protobuf message definitions into actual python files:
$ protoc --python_out=proto $(find proto -name '*.proto')
Now we could end up with a directory structure like this (including sub directories!):
$ tree proto
proto
├── details
│ ├── details_pb2.py
│ └── details.proto
├── messages_pb2.py
└── messages.proto
Now let's have a look at the main script file protoload.py. I'll skip the Cassandra database part altogether as that's not very interesting anyways.
#!/usr/bin/env python
from __future__ import print_function
import sys
from google.protobuf import symbol_database as sdb
# this will import all protobuf definitions under 'proto'
import proto
# all loaded message descriptors and symbols will be
# registered in this symbol database instance
__db = sdb.Default()
def _get_records():
# XXX: fetch data from the Cassandra in here
pass
def __find_message(manifest):
try:
symb = __db.GetSymbol(manifest)
return symb()
except KeyError:
print('unknown record manifest "%s"' % manifest, file=sys.stderr)
return None
def _extract_record(record):
# XXX: this is just a proof-of-concept
# try to find matching message description based on 'manifest'
# and parse the payload using the retrieved protobuf definition
msg = __find_message(record.manifest)
if msg:
payload = record.payload
msg.ParseFromString(payload)
return msg
return None
def _main():
for record in _get_records():
extracted = _extract_record(record)
if extracted:
print(extracted)
if __name__ == '__main__':
_main()
The most interesting part in here is basically the line which imports the proto module. In order for this to work properly without having to explicitly import every message definition module by hand we have to write some glue logic in the __init__.py of the proto module:
import importlib
import os
def __import_modules(dirname, paths):
# iterate through dir contents
for mod in os.listdir(dirname):
full = os.path.join(dirname, mod)
# recurse into sub directories
if os.path.isdir(full):
__import_modules(full, paths + [mod])
# import all .py files other than __init__.py
elif os.path.isfile(full) and mod != '__init__.py' and mod[-3:] == '.py':
base = '.'.join(paths)
module = mod[:-3]
importlib.import_module('%s.%s' % (base, module))
# start in the current directory
__import_modules(os.path.dirname(__file__), ['proto'])
Using this approach you only have to recompile the protobuf definitions into new python modules inside the target proto folder to be automatically picked up by the script.
My first goal was to get something working first and iterate on proper automation and reasonable size of the resulting image after that.
So my first attempt was based on CentOS 6:
FROM centos:6
# erlang
RUN yum update -y && \
# install the build dependencies for erlang itself
yum install -y wget git gcc-c++ unzip && \
# install the erlang package from erlang-solutions
yum install -y http://packages.erlang-solutions.com/erlang-solutions-1.0-1.noarch.rpm && \
yum install -y erlang && yum clean all
# statser release
RUN wget https://github.com/kongo2002/statser/archive/master.zip && \
unzip master.zip && rm master.zip && \
cd statser-master && \
# fetch rebar3
wget https://s3.amazonaws.com/rebar3/rebar3 && chmod +x rebar3 && \
./rebar3 compile
EXPOSE 2003 8080 8125/udp
WORKDIR /statser-master
CMD ./start.sh
The Dockerfile described above first installs the latest erlang distribution, fetches the master version of "statser" from github and starts the build by using rebar3 (which is downloaded as well) after that.
The resulting image works fine but results in a rather large image:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
kongo2002/centos-statser latest fd9de4025ce3 2 days ago 818 MB
My next attempt uses a minimalistic base image based on alpine linux. I have used that base image before and it serves really well to produce small docker images.
At first we will need a base image that contains a working erlang build and runtime environment:
FROM alpine:3.6
# install erlang distribution from source
RUN apk add --no-cache build-base autoconf openssl git openssl-dev && \
wget http://erlang.org/download/otp_src_20.1.tar.gz && \
tar xvf otp_src_20.1.tar.gz && rm otp_src_20.1.tar.gz && \
cd otp_src_20.1 && \
./otp_build autoconf && \
./configure --disable-hipe --without-termcap --without-javac && \
make -j8 && make install && \
cd .. && rm -rf otp_src_20.1
# install rebar3
ADD https://s3.amazonaws.com/rebar3/rebar3 /usr/bin/rebar3
RUN chmod a+x /usr/bin/rebar3
# install post-build triggers for depending images
ONBUILD COPY src src
# prepare build invocation
CMD rebar3 as production release -o /build
This docker image will serve as our stable erlang base image (in this case using erlang OTP 20.1). Additionally we installed ONBUILD triggers (see the docker documentation) that allow us to create new build images containing the "statser" sources we actually want to create a docker image of.
Next we will create another docker image that actually contains the “statser” sources to package a release of:
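Thanks to the ONBUILD triggers of the base image this Dockerfile can be as small as a single instruction - a sketch, assuming the base image built above was tagged kongo2002/alpine-statser-build-base:
FROM kongo2002/alpine-statser-build-base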
Pretty simple, right? The trick is actually the building of the image that looks somewhat like the following:
$ docker build -t kongo2002/alpine-statser-build --file alpine-statser-build-base/Dockerfile src
Now that we built a new image based on the kongo2002/alpine-statser-build-base we just created before, the build procedure will execute the ONBUILD triggers and copy the src files into the working directory of the new image.
Starting a new container of that image will build the actual “statser” release and put the build results in a mounted volume:
$ docker run --rm -v "$PWD/build:/build" kongo2002/alpine-statser-build
As soon as the container terminated with success we should have an erlang release of "statser" in the /build folder.
Now all we have to do is to build a new “runtime” container that just contains the release files from the previous step:
FROM alpine:3.6
# jiffy runtime dependency
RUN apk add --no-cache libstdc++
COPY build/statser /statser
EXPOSE 2003
EXPOSE 8080
EXPOSE 8125/udp
CMD ["/statser/bin/statser", "foreground"]
That’s it! The reward for those few steps is a docker image that is much smaller with a total size of 37 MB:
REPOSITORY TAG IMAGE ID CREATED SIZE
kongo2002/statser latest 6931953373f6 56 minutes ago 36.7 MB
And it even works as well :D
$ docker run -it kongo2002/statser
Exec: /statser/erts-9.1/bin/erlexec -noshell -noinput +Bd -boot /statser/releases/1.0.0/statser -mode embedded -boot_var ERTS_LIB_DIR /statser/lib -config /statser/releases/1.0.0/sys.config -args_file /statser/releases/1.0.0/vm.args -pa -- foreground
Root: /statser
/statser
21:28:03.004 [info] Application lager started on node statser@325f4ca71a44
21:28:03.004 [info] initial load of configuration
21:28:03.005 [info] start listening for API requests on port 8080
21:28:03.005 [info] start listening for metrics on port 2003
21:28:03.005 [info] starting health service with update interval of 60 sec
21:28:03.005 [info] starting instrumentation service at <0.489.0>
21:28:03.005 [info] starting rate limiter [create_limiter] with limit 10/sec
21:28:03.005 [info] preparing instrumentation service timer with interval of 60000 ms
21:28:03.005 [info] starting rate limiter [update_limiter] with limit 500/sec
21:28:03.005 [info] Application statser started on node statser@325f4ca71a44
...
The steps described above might seem intimidating at first but after you understood the basic mechanism of ONBUILD triggers it is actually pretty straightforward. In fact the whole process can be "automated" in a simple script file in about 6 lines (without comments).
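A rough sketch of such a script - the two middle commands are the ones shown above, the first and last image builds assume the directory layout of the repository:
#!/bin/sh
set -e
# build the erlang base image containing the ONBUILD triggers
docker build -t kongo2002/alpine-statser-build-base alpine-statser-build-base
# build the source image - the ONBUILD triggers copy ./src into it
docker build -t kongo2002/alpine-statser-build --file alpine-statser-build-base/Dockerfile src
# run the build container and collect the release in ./build
docker run --rm -v "$PWD/build:/build" kongo2002/alpine-statser-build
# package the runtime image containing the release from ./build
docker build -t kongo2002/statser .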
Please keep in mind I just finished this process and there may still be some flaws or ways to improve these steps for sure. Anyways, I am pretty satisfied so far - the next step will be to integrate this (or a similar mechanism) into my github build/publish.
My first attempt was to use python because that is available on most of our machines at work anyways. After I finished the task pretty quickly I discovered the python version was ridiculously slow at protobuf parsing. As the tool was actually supposed to process millions of records, waiting for hour(s) wasn't an option. Obviously there are ways to improve the python protobuf performance by using custom-compiled protobuf libraries, but that wasn't too easy to accomplish and not a process I wanted to impose on everybody looking to use that tool.
My next thought was:
Come on! If C++ is supposed to be that much faster how difficult can that be!
Actually it wasn't too difficult indeed - integrating the Cassandra C++ driver and using the protobuf C++ libraries went pretty smoothly. Soon I had a small console application running that was much faster than the python version I built at first.
Now I was happy, right? Well, almost…
At that point my current Makefile looked somewhat like the following:
SRCS := $(wildcard src/*.cc) $(wildcard src/*/*.cc)
OBJECTS := $(patsubst %.cc,%.o, $(SRCS))
CPPFLAGS=-std=c++11 -O2 -g -Wall
.PHONY: all clean
all: event-reader
$(OBJECTS): %.o : %.cc
g++ $(CPPFLAGS) -c -Ilibs/cpp-driver/include -o $@ $<
event-reader: $(OBJECTS)
g++ $(OBJECTS) -o event-reader -Llibs/cpp-driver/build -lcassandra -lprotobuf -lz
Looking at the above you will probably notice the problem there: the resulting binary I build is dynamically linked to the Cassandra C++ driver (-lcassandra), the protobuf library (-lprotobuf), zlib (-lz) and the dependencies of the mentioned ones. That doesn't fit my goals of having a more-or-less portable executable that can be easily used by anyone.
There is of course a solution at hand: a statically linked executable. After reading man gcc and asking google it is supposed to be pretty easy but I could remember having some problems with that some years ago…
But hey, I was probably pretty stupid at that time - how difficult can that be?
In theory it shouldn't be much more than adding -static to the gcc invocation when linking the executable. I read about some problems with statically linking the C++ standard library - so there are some more flags to toggle of course: -static-libgcc and -static-libstdc++.
After fiddling around with numerous gcc switches for quite some time I finally succeeded without -static-libgcc and -static-libstdc++ but instead with -lstdc++. To be honest I have no explanation why this one works while the others don't but this is what finally got me going:
event-reader-static: $(OBJECTS)
g++ -s -static $(OBJECTS) -o event-reader-static -Llibs/cpp-driver/build -lcassandra_static -luv -lpthread -lprotobuf -lz -lstdc++
There are a few things to note here:
- the Cassandra driver is linked statically via libcassandra_static
- some of its dependencies have to be linked explicitly (-lpthread and -luv for the cassandra driver)
- -s strips the resulting executable to reduce its final file size (optional)
On my road to the wisdom of statically linked executables I stumbled upon the following tips to check if the static linking actually worked properly:
- ldd <executable> should report: not a dynamic executable
- nm <executable> | grep ' U ' should be empty (listing unresolved symbols)
Finally we have an executable that runs - on this machine at least…
In the unlikely case you followed along with a similar executable you have noticed a warning by gcc that we did actually include some dynamic dependencies in our executable: nss to be specific. Why is that?
My explanation is probably not too accurate but by default glibc dynamically links with libnss. That could result in a failure on the target machine your executable is running on in case the versions differ. There are some explanations in the glibc wiki on this topic.
As the above glibc wiki entry explains there are possibilities to force your glibc to statically include nss as well. The explanations weren't too helpful for me and I didn't want to mess with my local glibc install so my approach was to use a glibc alternative instead: musl libc.
musl is an alternative to glibc and describes itself as:
a new standard library to power a new generation of Linux-based devices. musl is lightweight, fast, simple, free, and strives to be correct in the sense of standards-conformance and safety.
As I mentioned I didn't intend to mess with my local glibc install too much, that's why I opted for using docker for that matter. There is a linux distribution called Alpine linux that uses musl by default. So we are going to build our executable in a tiny docker container instead.
This is the blueprint for the Dockerfile to use for building:
FROM alpine:3.4
RUN apk add --no-cache gcc g++ make cmake openssl-dev libuv-dev protobuf-dev
ENTRYPOINT ["/bin/sh"]
Now we can build the docker image and run it in the source directory:
$ docker build -t uhlenheuer/musl-builder .
$ docker run --rm -it -v $(pwd):/tmp/build uhlenheuer/musl-builder
# inside the docker container
$$ cd /tmp/build
$$ make clean event-reader-static
$$ exit
That’s it! We finally have a statically linked executable that is as more-or-less portable in a sense that it runs on any machine with the same architecture at least.
A final note to my fellow gentoo users: in case you don't follow the docker approach you have to build your dependencies with the static-libs USE flag of course. Moreover watch out for the CFLAGS you are building your static libraries with because I had some trouble on other machines with invalid/unknown instruction errors due to some CPU specific compiler flags.
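For example via package.use - a sketch, the exact set of packages depends on your project's dependencies:
# /etc/portage/package.use/static-deps
dev-libs/protobuf static-libs
dev-libs/libuv    static-libs
sys-libs/zlib     static-libs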
Having monodevelop running inside a docker container might be just the solution you have been looking for.
This is the tiny Dockerfile I came up with:
FROM mono:4
MAINTAINER Gregor Uhlenheuer <kongo2002@gmail.com>
RUN apt-get update && \
apt-get install -y monodevelop monodevelop-nunit && \
rm -rf /var/lib/apt/lists/*
ENTRYPOINT [ "/usr/bin/monodevelop" ]
That’s all! Now you can go ahead and build a docker image and run it by exposing the X11 socket to your docker container:
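A sketch of those two steps - the image name is arbitrary and, depending on your X server settings, you may additionally have to allow local connections (e.g. via xhost):
$ docker build -t monodevelop .
$ docker run -it --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    monodevelop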
The most convenient way is to use portage's built-in support for injecting custom patches just before the build process. All you have to do is to place a .patch file in the appropriate folder in /etc/portage/patches/.
A custom patch for the x11-wm/dwm package should be used like this:
$ mkdir -p /etc/portage/patches/x11-wm/dwm-6.0
$ cp 99-bottom-stack.patch /etc/portage/patches/x11-wm/dwm-6.0
Although I have been using the gentoo distribution for many years now I discovered this great way only a few months ago. But there is a catch as I found out yesterday - the user patches will only be applied if the ebuild you are dealing with is prepared to do so.
The ebuild you are using has to invoke the epatch_user command inside its src_prepare function:
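That usually boils down to a src_prepare looking something like this (sketch):
src_prepare() {
    # ebuild specific patches would be applied here
    # ...
    # apply the user patches from /etc/portage/patches
    epatch_user
}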
If you are out of luck and the ebuild you are dealing with does not invoke epatch_user on its own like described above, there is another possibility to your rescue. In case the ebuild inherits the eutils eclass you can get your patch applied with a custom bashrc used for portage.
Place something along the following in your /etc/portage/bashrc file:
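A sketch of such a hook, using the post_src_prepare phase hook that portage offers for bashrc files:
post_src_prepare() {
    # only run if the eutils eclass provided epatch_user
    if type -t epatch_user > /dev/null; then
        epatch_user
    fi
}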
This will inject the epatch_user function call after the src_prepare process on ebuilds that inherit eutils.
If the above does not work for you or the changes you want to do involve more than just applying a patch you may want to create a custom overlay to place your own ebuilds into.
First you have to create the necessary directory structure for the to-be-created overlay (i.e. myoverlay):
$ mkdir -p /usr/local/portage/{metadata,profiles}
$ echo 'myoverlay' > /usr/local/portage/profiles/repo_name
$ echo 'masters = gentoo' > /usr/local/portage/metadata/layout.conf
$ chown -R portage:portage /usr/local/portage
Now you just have to register the overlay for portage usage - create a local.conf file inside the /etc/portage/repos.conf/ directory:
$ cat /etc/portage/repos.conf/local.conf
[myoverlay]
location = /usr/local/portage
masters = gentoo
auto-sync = no
Now you can go ahead and take for example an existing ebuild and do your modifications:
$ mkdir -p /usr/local/portage/x11-wm/dwm
$ cd /usr/local/portage/x11-wm/dwm
$ cp /usr/portage/x11-wm/dwm/dwm-6.0.ebuild dwm-6.0-r1.ebuild
# fix or modify the ebuild
$ vim dwm-6.0-r1.ebuild
# create the manifest
$ repoman manifest
After hours of investigating we finally found and reported the bug in the MySQL driver itself. Nevertheless this behavior inspired me to write a tiny extension for the process monitoring tool god.
The custom watch extends the basic PollCondition behavior which basically polls the number of open file descriptors of the watched process and restarts the service when a configured threshold is exceeded.
This is what the extension looks like:
module God
module Conditions
# Condition Symbol :file_descriptors
# Type: Poll
#
# Trigger when the process owns more than a specified amount of
# open file descriptors.
#
# Parameters
# Required
# +pid_file+ is the pid file of the process in question. Automatically
# populated for Watches.
# +above+ is the amount of maximum allowed open file descriptors
#
# Optional
# +times+ number of checks that have to fail to be triggered
#
# Examples
#
# Trigger if the process owns more than 256 file descriptors in
# at least 3 of the last 5 checks (from a Watch):
#
# on.condition(:file_descriptors) do |c|
# c.above = 256
# c.times = [3, 5]
# end
#
# Non-Watch Tasks must specify a PID file:
#
# on.condition(:file_descriptors) do |c|
# c.above = 512
# c.pid_file = "/var/run/service.3000.pid"
# end
class FileDescriptors < PollCondition
attr_accessor :above, :pid_file, :times
def initialize
super
self.above = nil
self.times = [1, 1]
end
def prepare
if self.times.kind_of?(Integer)
self.times = [self.times, self.times]
end
@timeline = Timeline.new(self.times[1])
end
def reset
@timeline.clear
end
def pid
self.pid_file ? File.read(self.pid_file).strip.to_i : self.watch.pid
end
def valid?
valid = true
valid &= complain("Attribute 'pid_file' must be specified", self) if self.pid_file.nil? && self.watch.pid_file.nil?
valid &= complain("Attribute 'above' must be specified", self) if self.above.nil?
valid
end
def test
fds = Dir["/proc/#{self.pid}/fd/*"].size
@timeline.push(fds)
if @timeline.select { |x| x > self.above }.size >= self.times.first
output = @timeline.map { |x| "#{x > self.above ? '*' : ''}#{x}" }.join(", ")
self.info = "max file descriptors reached: [#{output}]"
return true
else
return false
end
end
end
end
end
As described in the documentation comments you can easily integrate the watch in your god configuration using the file_descriptors condition:
# load the custom extension
God.load "file_descriptors.god"
# your watch
God.watch do |w|
# your watch configuration
# ...
# restart condition(s)
w.restart_if do |restart|
restart.condition(:file_descriptors) do |c|
c.above = 512
end
end
end
The above setup would cause the process to be restarted as soon as it exceeds 512 open file descriptors.
The latter is my preferred choice as it supports very nice process supervision using a linux kernel userspace interface (you will need your kernel to be compiled with CONFIG_CONNECTOR for this mechanism to work). That way there is no need to poll whether the process is still running. Moreover god comes with a lot of reasonable default preferences resulting in very simple configuration files. A basic god configuration file might look something like this:
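For instance, a minimal watch for a hypothetical foreground service could be along these lines:
God.watch do |w|
  w.name  = 'my-service'
  w.start = '/usr/local/bin/my-service --foreground'
  w.keepalive
end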
Sadly, supervising ejabberd with god is not as easy as the example shown above. god expects the supervised process to run in the foreground whereas ejabberd, which is usually controlled by the ejabberdctl script, starts a background daemon process.
I extracted some of the logic of the ejabberdctl script into a god configuration file that works with god as expected:
HOST = 'localhost'
NAME = "ejabberd@#{HOST}"
SPOOL_DIR = '/var/lib/ejabberd'
EJABBERD_DIR = '/lib/ejabberd'
LOG_DIR = '/var/log/ejabberd'
LOG_FILE = "#{LOG_DIR}/ejabberd.log"
OPTS = '+K true ' + # kernel polling
'-smp auto ' + # automatic SMP detection
'+P 250000' # 250,000 ports
KERNEL = '-kernel inet_dist_use_interface \{0,0,0,0\}'
SASL = "-sasl sasl_error_logger \\{file,\\\"#{LOG_FILE}\\\"\\}"
ERL = "erl -noinput " +
"-sname #{NAME} " +
"-pa #{EJABBERD_DIR}/ebin " +
"-mnesia dir \"'#{SPOOL_DIR}'\" " +
"#{KERNEL} " +
"-s ejabberd #{OPTS} #{SASL}"
CMD = "erl -noinput -hidden " +
"-sname ctl_#{HOST} " +
"-pa #{EJABBERD_DIR}/ebin " +
"#{KERNEL} " +
"-s ejabberd_ctl -extra #{NAME}"
#
# EJABBERD
#
God.watch do |w|
w.name = 'ejabberd'
w.dir = SPOOL_DIR
w.env = {
'EJABBERD_CONFIG_PATH' => '/etc/ejabberd/ejabberd.yml',
'EJABBERD_LOG_PATH' => LOG_FILE
}
w.start = "#{ERL} start"
w.stop = "#{CMD} stop"
w.restart = "#{CMD} restart"
w.grace = 20.seconds
w.keepalive
end
Now you just have to add the god configuration to your Dockerfile like this:
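A sketch of the relevant Dockerfile lines - the file locations are assumptions:
# install god and copy the configuration shown above
RUN gem install god
COPY ejabberd.god /etc/god/ejabberd.god
# run god in the foreground as the container's main process
CMD ["god", "-c", "/etc/god/ejabberd.god", "-D"]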
ejabberd already supports some message archiving out-of-the-box via the mod_archive module that is part of the ejabberd community scripts repository. That module implements the XEP-0136 specification and supports storage via mnesia, PostgreSQL, MySQL or sqlite.
For my approach I chose to implement the XEP-0313 standard instead because it is known to be less complicated and therefore easier to implement by clients, as stated by the specification:
“This specification aims to define a much simpler and modular protocol for working with a server-side message store. Through this it is hoped to boost implementation and deployment of archiving in XMPP.”
Moreover I plan to use the MongoDB database as storage engine.
You can find the latest version of mod-mam on github. The module targets the current master branch of ejabberd which is the so called “community edition”.
The latest version of mod-mam supports:
The following points are on my TODO list:
In order to use mod-mam you have to add it to the modules section in your ejabberd.cfg. This could look like this:
{modules,
[
{mod_mam,
[
% use the default localhost:27017
% or define a specific host
{mongo, {localhost, 27017}},
% define a database to use
% (default: test)
{mongo_database, test},
% specify a collection to use
% (default: ejabberd_mam)
{mongo_collection, ejabberd_mam}
]
},
% ...
]
}.
Just to give a short impression on how the content is stored, this is what an archived message looks like inside the MongoDB collection:
{
"_id" : ObjectId("52f7fc9ecdbb08255f000002"),
"user" : "test2",
"server" : "localhost",
"jid" : {
"user" : "test",
"server" : "localhost",
"resource" : "sendxmpp"
},
"body" : "foo",
"direction" : "to",
"ts" : ISODate("2014-02-09T22:09:34.282Z"),
"raw" : "<message xml:lang='en' to='test@localhost' type='chat'><body>foo</body><subject/></message>"
}
An exemplary archive query conversation between client and server would look like the following. At first the client queries for its last two messages (limited by a RSM instruction):
<iq type='get'
id='query1'>
<query xmlns='urn:xmpp:mam:tmp'
queryid='x01'>
<set xmlns='http://jabber.org/protocol/rsm'>
<max>2</max>
</set>
</query>
</iq>
The server responds with the two requested messages:
<message xmlns='jabber:client'
to='test@localhost/37367024071393189531836643'>
<result queryid='x01'
xmlns='urn:xmpp:mam:tmp'
id='52F7FC8DCDBB08255F000001'>
<forwarded xmlns='urn:xmpp:forward:0'>
<delay xmlns='urn:xmpp:delay'
stamp='2014-02-09T22:09:17Z'/>
<message from='test2@localhost/sendxmpp'
to='test@localhost'
xml:lang='en'
type='chat'>
<body>
foo
</body>
</message>
</forwarded>
</result>
</message>
<message xmlns='jabber:client'
to='test@localhost/37367024071393189531836643'>
<result queryid='x01'
xmlns='urn:xmpp:mam:tmp'
id='52F7FC9ECDBB08255F000003'>
<forwarded xmlns='urn:xmpp:forward:0'>
<delay xmlns='urn:xmpp:delay'
stamp='2014-02-09T22:09:34Z'/>
<message from='test2@localhost/sendxmpp'
to='test@localhost'
xml:lang='en'
type='chat'>
<body>
bar
</body>
</message>
</forwarded>
</result>
</message>
Finally the server finishes the interaction with the closing <iq> stanza:
<iq xmlns='jabber:client'
to='test@localhost/37367024071393189531836643'
id='query1'
type='result'/>
As mod-mam is still in early beta phase any feedback, bug reports or contributions are very much appreciated. Either contact me directly or better head over to the issues on github and open a bug report or pull request.
The ejabberd XMPP server supports several authentication mechanisms out-of-the-box:
internal: the default authentication using the Mnesia database backend
ldap: authentication against a LDAP server/directory
pam: user authentication using PAM (pluggable authentication modules) that is currently supported by FreeBSD, Linux, Mac OSX, NetBSD, Solaris …
odbc: ODBC connection using either the PostgreSQL or MySQL interfaces
external: using an external authentication script
anonymous
For this post we will focus on the external authentication method of ejabberd. This mechanism allows the use of an authentication script that may be written in practically any language.
Ejabberd will invoke a configurable number of instances of the script, pass the authentication requests via stdin and expect the results on stdout. This method may seem somewhat awkward but is very flexible on the other hand.
Moreover this approach allows you to profit from authentication methods of existing services. Say you already have a user administration interface in your business layer this may be your preferred way to reuse that logic.
To use the external authentication script you have to edit the ejabberd configuration in your ejabberd.cfg.
To enable external authentication for the whole ejabberd instance use something like this:
% toggle external authentication
{auth_method, external}.
% specify the full path to the script
{extauth_program, "python /usr/bin/auth.py http://localhost:8000/auth/"}.
% number of instances to start per virtual host
{extauth_instances, 1}.
% whether to cache authentication for a specified amount of seconds
% requires the `mod_last` module
{extauth_cache, false}.
Alternatively you may activate external authentication on a ‘per virtual host’ base:
% use default internal authentication for "foo.domain"
{host_config, "foo.domain",
[
{auth_method, [internal]}
]}.
% use the external authentication script for "bar.domain"
{host_config, "bar.domain",
[
{auth_method, [external]},
{extauth_program, "python /usr/bin/auth.py http://localhost:8000/auth/"}
]}.
In my case I chose python as my favorite scripting language for this task. Python is platform independent and comes with a lot of existing high-level libraries to make such a problem easy to implement. You may find the script I came up with below or fetch the latest version from github.
This script redirects the authentication requests sent from the connected ejabberd instance to a configurable URL where the script expects a JSON API. You probably cannot use the script as-it-is but it may serve as a starting point to script your own one.
#!/usr/bin/env python
# Copyright 2014 Gregor Uhlenheuer
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import logging
import os
import struct
import sys
import urllib2
#
# DEFAULTS AND CONSTANTS
#
DEFAULT_LOG_DIR = '/var/log/ejabberd'
FALLBACK_URL = 'http://localhost:8000/auth/'
HEADERS = {
'Content-Type': 'application/json',
'Accept': 'application/json' }
#
# CLASS DEFINITIONS
#
class EjabberdError(Exception):
'''Exception class that holds ejabberd related errors.'''
def __init__(self, ex):
self.ex = ex
def __str__(self):
return repr(self.ex)
class ApiHandler:
'''
Class to execute HTTP requests to process
the authentication orders from ejabberd.
'''
def __init__(self, url, headers):
'''Initialize an ApiHandler instance.'''
self.url = url
self.headers = headers
def call(self, call, data):
'''
Call the specified authentication API using the
urllib2 library functions.
'''
url = '%s/%s' % (self.url, call)
req = urllib2.Request(url, data, self.headers)
res = urllib2.urlopen(req)
return json.load(res)
class EjabberdAuth:
'''
Class that encapsulates the ejabberd authentication logic.
'''
def __init__(self, url, headers, handler=None):
'''
Initialize a new EjabberdAuth instance.
'''
self.url = url
self.headers = headers
if handler is None:
self.handler = ApiHandler(url, headers)
else:
self.handler = handler
@staticmethod
def make_jid(user, host):
'''Build a JID using the given user and host'''
return '%s@%s' % (user, host)
def __from_ejabberd(self):
'''
Listen on stdin and read input data sent from the
connected ejabberd instance.
'''
try:
input_length = sys.stdin.read(2)
if len(input_length) is not 2:
logging.warn('ejabberd called with invalid input')
return None
(size,) = struct.unpack('>h', input_length)
result = sys.stdin.read(size)
logging.debug('Read %d bytes: %s', size, result)
return result.split(':')
except IOError:
raise EjabberdError('Failed to read from ejabberd via stdin')
def __to_ejabberd(self, success):
'''
Convert the input data into an ejabberd compatible
format and send it to stdout.
'''
answer = 1 if success else 0
token = struct.pack('>hh', 2, answer)
sys.stdout.write(token)
sys.stdout.flush()
logging.debug('Returned %s success', 'with' if success else 'without')
def __call_api(self, call, data):
'''
Call the JSON compatible API handler with the specified data
and parse the response for a success.
'''
body = json.dumps(data)
result = self.handler.call(call, body)
success = result['success']
if not success:
msg = result['message']
logging.warn('Call to API returned without success: ' + msg)
return success
def __auth(self, username, server, password):
'''Try to authenticate the user with the specified password.'''
logging.debug('Processing "auth"')
        jid = self.make_jid(username, server)
data = {'username': jid, 'password': password}
return self.__call_api('login', data)
def __isuser(self, username, server):
'''Try to find the specified user.'''
logging.debug('Processing "isuser"')
        jid = self.make_jid(username, server)
data = {'username': jid}
return self.__call_api('exists', data)
def __setpass(self, username, server, password):
'''Try to set the user's password.'''
logging.debug('Processing "setpass"')
# TODO
return False
def loop(self):
'''
Start the endless loop that reads on stdin and passes
the authentication results to stdout towards the
connected ejabberd instance.
'''
while True:
try:
data = self.__from_ejabberd()
except KeyboardInterrupt:
logging.info('Terminating by user input')
break
except EjabberdError, err:
                logging.warn('Input error: %s', err)
break
success = False
cmd = data[0]
if cmd == 'auth':
success = self.__auth(data[1], data[2], data[3])
elif cmd == 'isuser':
success = self.__isuser(data[1], data[2])
elif cmd == 'setpass':
success = self.__setpass(data[1], data[2], data[3])
else:
logging.warn('Unhandled ejabberd cmd "%s"', cmd)
self.__to_ejabberd(success)
def get_args():
'''
Parse some basic configuration from command line arguments.
'''
# build command line argument parser
desc = 'ejabberd authentication script'
parser = argparse.ArgumentParser(description=desc)
# base url
parser.add_argument('url',
nargs='?',
metavar='URL',
default=FALLBACK_URL,
help='base URL (default: %(default)s)')
# log file location
parser.add_argument('-l', '--log',
default=DEFAULT_LOG_DIR,
help='log directory (default: %(default)s)')
# debug log level
parser.add_argument('-d', '--debug',
action='store_const', const=True,
help='toggle debug mode')
args = vars(parser.parse_args())
return args['url'], args['debug'], args['log']
if __name__ == '__main__':
URL, DEBUG, LOG = get_args()
LOGFILE = LOG + '/extauth.log'
LEVEL = logging.DEBUG if DEBUG else logging.INFO
PID = str(os.getpid())
FMT = '[%(asctime)s] ['+PID+'] [%(levelname)s] %(message)s'
# redirect stderr
ERRFILE = LOG + '/extauth.err'
sys.stderr = open(ERRFILE, 'a+')
# configure logging
logging.basicConfig(level=LEVEL, format=FMT, filename=LOGFILE)
logging.info('Starting ejabberd auth script')
logging.info('Using %s as base URL', URL)
logging.info('Running in %s mode', 'debug' if DEBUG else 'release')
EJABBERD = EjabberdAuth(URL, HEADERS)
EJABBERD.loop()
logging.warn('Terminating ejabberd auth script')
You may often just need to install ejabberd as it is on your server and have it run like forever without the need of touching it. But problems arise if you have to go off the main route of using the out-of-the-box setup of ejabberd - i.e. you want to migrate your existing ejabberd into a clustered setup or create a clustered ejabberd from the very beginning. I didn't find much help on this topic so I will describe a small walkthrough on how to setup some ejabberds in a cluster. The way I approach this task may not be the best one but it is the most stable and reproducible way I came up with after trying a lot of things along the way.
The first step will be the building and installation of the ejabberd nodes itself. These steps will be repeated for every node you want to participate in the cluster.
At first we will fetch the latest sources from ProcessOne's repository on github. As recommended by ProcessOne the version 2.x is still the stable branch for production mode. So we will checkout the appropriate branch and compile from the sources. I have gone through the described process for the latest commits on the so called "community edition" of ejabberd as well - and it works the same.
git clone git://github.com/processone/ejabberd.git
git checkout origin/2.1.x -b 2.1.x
cd ejabberd/src
Next we will compile the sources with the familiar configure, make, make install procedure. Probably you have to create the configure script with autoreconf.
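In short, a sketch of the usual autotools dance:
# generate the configure script if it does not exist yet
$ autoreconf
$ ./configure
$ make
$ make install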
Now your ejabberd is successfully compiled and installed on your (first) node and is ready to be configured. In a non-clustered setup you would be almost finished - after adjusting your ejabberd.cfg you could already start the service by running ejabberdctl start.
This configuration part is actually the most important step of the setup. You will probably have to edit the three following files:
- ejabberd.cfg: the main configuration file of ejabberd
- ejabberdctl.cfg: the configuration of the ejabberdctl control script
- ejabberdctl: the control script itself
The ejabberd configuration can be found at /etc/ejabberd/ejabberd.cfg. There are a few settings you will probably want to edit.
$ vim /etc/ejabberd/ejabberd.cfg
% adjust the logging level if you like
{loglevel, 3}.
% set the ejabberd domain(s)
{hosts, ["your.net"]}.
% set the admin user(s)
{acl, admin, {user, "admin", "your.net"}}.
If you have specific needs for ejabberd modules you can search for the modules section in the configuration file and (un)comment the appropriate modules:
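The modules section is simply a list of {Module, Options} tuples - a shortened sketch:
{modules,
 [
  {mod_adhoc,   []},
  {mod_disco,   []},
  {mod_ping,    []},
  %% {mod_echo, []},   % commented out = disabled
  {mod_version, []}
  %% ...
 ]}.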
At work we have the need to adjust the shaping settings as we are using the XMPP messaging for internal communication between different services that may exceed the default shaper limits.
% normal shaper rule - the unit is B/s
{shaper, normal, {maxrate, 100000}}.
% fast shaper rule
{shaper, fast, {maxrate, 5000000}}.
Basically you have to adjust two settings in your ejabberdctl.cfg file - the configuration file of the ejabberd control script.
$ vim /etc/ejabberd/ejabberdctl.cfg
# the listening address of ejabberd
#
# the default is set to 127.0.0.1 where the different ejabberd nodes
# would not be able to see each other
INET_DIST_INTERFACE={0.0.0.0}
# the ejabberd node name
#
# this setting is crucial and has to match with DNS and hostname
ERLANG_NODE=ejabberd@node1
The last file we have to edit before we can start the first node is the control script ejabberdctl itself. I don't really like to edit this file because updates of ejabberd and subsequent runs of make install would override your current settings. But sadly there is no way to set the ejabberd hostname in the ejabberdctl.cfg apart from passing it as an environment variable.
$ vim /sbin/ejabberdctl
# the ejabberd host name
#
# the host name defaults to 'localhost' but this has to match
# with the 'ERLANG_NODE' setting in your ejabberdctl.cfg
HOST=node1
The big advantage of editing the ejabberdctl script like this is that you can later start, stop or restart ejabberd just by running ejabberdctl ... in your shell without having to care about the correct ejabberd node name you are talking to. At work we found this to be the safest way for people not 100% aware of the ejabberd setup to execute basic control commands.
Now we are ready to fire up the first ejabberd node. After starting the service you can register the root account you specified in your ejabberd.cfg.
$ ejabberdctl start
$ ejabberdctl register admin your.net ***
You should now be able to login to the web administration interface of ejabberd using your account: http://node1:5280/admin
By the way, the web interface is configured in ejabberd.cfg in the listen section. You could for example modify the listening port like this:
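A sketch based on the default listener configuration (your option list may differ):
{listen,
 [
  %% serve the web admin interface on port 5281 instead of the default 5280
  {5281, ejabberd_http, [http_poll, web_admin]},
  %% ...
 ]}.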
After the first node is up and running we can proceed with building the next nodes and joining those to the cluster.
The building and configuration of the other nodes simply repeats the steps from the first node with modified host names. After ejabberd is successfully built and installed we can proceed with joining the new node to the running first node.
First we have to exchange/synchronize the erlang cookie files. You could easily copy the erlang cookie file from the first node like this:
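For example via scp, assuming /var/lib/ejabberd is the ejabberd home directory on both nodes:
$ scp node1:/var/lib/ejabberd/.erlang.cookie /var/lib/ejabberd/.erlang.cookie
# the cookie must be readable by the ejabberd user only
$ chmod 400 /var/lib/ejabberd/.erlang.cookie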
Now we are ready to connect the mnesia of the current node with the running first node's mnesia database. We simply start an erlang shell and start mnesia in the specified directory /var/lib/ejabberd:
# kill any running erlang/epmd instances
killall epmd
# remove any existing mnesia files
rm -f /var/lib/ejabberd/*
# set HOME to ejabberd mnesia directory
export HOME=/var/lib/ejabberd
# start erlang shell with mnesia
erl -sname ejabberd@node2 -mnesia dir '"/var/lib/ejabberd"' -s mnesia
Now that we are in the erlang shell we can interactively connect the two mnesia databases with each other. In case you are not sure what you are doing you can exit the erlang shell at any time with <ctrl-c><ctrl-c>.
% check mnesia state for the current node:
% running db nodes = [node2]
mnesia:info().
% connect with first node
mnesia:change_config(extra_db_nodes, ["ejabberd@node1"]).
% now you should see two running nodes
% running db nodes = [node2, node1]
mnesia:info().
% copy schema table type
mnesia:change_table_copy_type(schema, node(), disc_copies).
% check mnesia state for
% disc_copies = [schema]
mnesia:info().
% copy tables from first node
% depending on the amount of data in your first node's database
% this may take a while
Tables = mnesia:system_info(tables).
[mnesia:add_table_copy(Tb, node(), Type) ||
{Tb, [{'ejabberd@node1', Type}]} <- [ {T, mnesia:table_info(T, where_to_commit)} ||
T <- Tables]].
% you should see output like the following:
[{atomic,ok},{atomic,ok},{atomic,ok},{atomic,ok},...]
Now that the mnesia databases are connected you can start the second ejabberd.
$ ejabberdctl start
You can use the web interface on both hosts to check for all running nodes: http://node1:5280/admin/nodes/
The described procedure can now be repeated for as many ejabberd nodes you have and would like to join to the clustered setup.
When all ejabberds are up and running you can simply add ejabberdctl start/stop
to your distribution’s init scripts and you have a reboot consistent ejabberd cluster!
The following settings are not meant to be a fully functional or useful vim configuration but more of a collection of snippets you could consider adding to your own .vimrc. In my opinion you should not blindly copy other people's vim configuration files without understanding every single setting anyway. However you can find a copy of my .vimrc on github for reference or inspiration.
The most important non-default setting is called hidden which enables you to switch between buffers without having to save in between. This is very important in order to understand vim's concept of buffers and tabs which may appear somewhat different compared to other popular text editors. See :h buffers and :h tabpage for further information.
" disable VI compatibility
set nocompatible
" enable buffer switching without having to save
set hidden
" allow backspace in insert mode
set backspace=indent,eol,start
" always activate automatic indentation
set autoindent
" display statusline even if there is only one window
set laststatus=2
" visually break lines
set wrap
" display line numbers
set number
" line numbers as narrow as possible
set numberwidth=1
" turn on highlight search
set hlsearch
" ignore case in search when no uppercase search
set incsearch
set ignorecase
set smartcase
Starting from a basic set of options you can incrementally improve or extend your vim configuration by exploring vim's options. For one you can get help for every setting by invoking :h '<optionname>'. Moreover you can get a complete list of available options via :options.
Find a few very basic key mappings you may find useful as well:
" redraw screen and remove search highlights
nnoremap <silent> <C-l> :noh<CR><C-l>
" yank to end of line
nnoremap Y y$
" use Q for formatting
noremap Q gq
" easier navigation on wrapped lines
nnoremap j gj
nnoremap k gk
You may find a few of my favourite plugins below:
pathogen: This plugin by Tim Pope is the undisputed must-have plugin for managing your vim runtime path and therefore your plugins. With pathogen you then can easily manage your plugins i.e. using git submodules.
syntastic: Live syntax checking at its best - I am fully objective here although I am contributing to this amazing plugin :-)
surround: Easily surround text with brackets, tags, keywords etc. (by Tim Pope as well)
fugitive: Another one by Tim Pope: incredible git integration
Command-T: Fast and intuitive file searching plugin written by Wincent Colaiuta (requires ruby enabled vim)
space: “The Smart Space key for Vim” written by Henrik Öhman
Just to name a few other great plugins you may want to check out: FSwitch, Tagbar, NerdCommenter, NerdTree …
OutOfMemoryException. The reason was that eclipse was started with a maximum of 256 MB of heap space which is obviously not enough for all the plugins used to build an android app.
Actually the hard part was to increase this maximum heap space. The eclipse I am using is the gentoo package dev-util/eclipse-sdk-bin from the java-overlay. In the end I discovered there are at least three possibilities you can try:
The first thing to try is to change eclipse's .INI file. On my gentoo installation the file can be found in the /opt folder:
$ vim /opt/eclipse-sdk-bin-*/eclipse.ini
Now you can search for lines looking similar to this:
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
256m
-vmargs
-Xms40m
-Xmx256m
The interesting lines are the last two:
- -Xms: the initial heap size
- -Xmx: the maximum used heap size
The next possibility is to adjust the VM arguments passed to the JRE used by eclipse. In eclipse you can navigate to Window → Preferences → Java → Installed JREs and select Edit... on the JRE being used. In the following dialog you can set the Default VM Arguments to something like this:
-Xms512m -Xmx1024m
Editing the gentoo launcher script for eclipse was actually the one that finally solved my problems. I ended up editing the gentoo specific configuration file that is used by the launcher script. I discovered that by looking into:
$ vim `which eclipse-bin-4.2`
The two important lines are right at the top:
[ -f "/etc/eclipserc-bin-${SLOT}" ] && . "/etc/eclipserc-bin-${SLOT}"
[ -f "$HOME/gentoo/.eclipserc" ] && . "$HOME/gentoo/.eclipserc"
This means the launcher either sources the system-wide configuration file /etc/eclipserc-bin-4.2 (replace 4.2 with the $SLOT of your eclipse version) or the file gentoo/.eclipserc in your home directory. In my opinion the folder gentoo inside the home directory is not the ideal place to store config files, so I use the system-wide setting instead.
Now all you have to do is to create/edit the file of your liking and put the following lines in it:
In order to check your configuration changes you can simply look at your running processes (e.g. using htop) and verify your parameters are passed the way you want them to be:
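For example a quick grep over the process list:
# should print the -Xms/-Xmx values you configured
$ ps ax | grep "[e]clipse" | grep -o '\-Xm[sx][0-9]*[mg]'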
Disclaimer: This article expects your build servers to run gentoo linux as well and to be configured to cross compile Raspberry Pi compatible ARM (armv6j-hardfloat-linux-gnueabi) binaries. Until I find some time to write a small howto for that you can read the respective gentoo documentation on distcc cross-compiling.
At first we are going to prepare all your build servers that should assist your Raspberry Pi during the compilation process.
First you have to install sys-devel/distcc:
$ emerge -av sys-devel/distcc
After you have successfully installed distcc on your build server(s) you can adjust the configuration to your liking. The configuration file can usually be found at /etc/conf.d/distccd.
# set the access rights for your distcc daemon to the right network subnet
# you can also list single IP addresses
DISTCCD_OPTS="${DISTCCD_OPTS} --allow 192.168.1.0/24"
# especially during the setup phase I found increasing the log level very helpful
DISTCCD_OPTS="${DISTCCD_OPTS} --log-file /var/log/distccd"
DISTCCD_OPTS="${DISTCCD_OPTS} --log-level info"
Now you can start your distcc daemon using the init scripts:
$ /etc/init.d/distccd start
Additionally you may want to add the distcc service to your default runlevel:
$ rc-update add distccd default
After you prepared your build server(s) you can move on to setup your Raspberry Pi.
You have to install distcc on your Raspberry Pi as well:
$ emerge -av sys-devel/distcc
Additionally you have to add distcc to your portage features. Edit your make.conf appropriately:
FEATURES="distcc"
Now you have to specify which build servers should be taken into account when using distcc. In the /etc/distcc/hosts file you can list all server addresses. The order defines the priorities:
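A minimal example with made-up addresses, fastest server first:
# /etc/distcc/hosts
192.168.1.10
192.168.1.11
localhost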
Finally you have to tell distcc which compiler has to be used instead of gcc - you can use a wrapper script like this for this purpose:
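The wrapper simply dispatches to the cross compiler matching the name it was called as - a sketch for the armv6j target mentioned above:
#!/bin/bash
# /usr/lib/distcc/bin/wrapper
# map cc/c++ to gcc/g++ in case the cross toolchain only provides the latter
name="${0##*/}"
case "$name" in
  cc)  name=gcc ;;
  c++) name=g++ ;;
esac
exec /usr/bin/armv6j-hardfloat-linux-gnueabi-"$name" "$@"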
Now you just have to replace the existing symbolic links like this:
# move in your distcc folder
$ cd /usr/lib/distcc/bin
# set the executable flag on the wrapper script
$ chmod +x wrapper
# remove the old symlinks
$ rm cc c++ gcc g++
# link to the wrapper script
$ ln -s wrapper cc
$ ln -s wrapper c++
$ ln -s wrapper gcc
$ ln -s wrapper g++
Now that all necessary steps are taken you can test the distcc setup by emerging a cross-compile compatible package.
$ emerge -va htop
You can observe the distcc daemon log on one of your build servers in order to check if your build servers are utilized during the compilation phase:
$ tail -f /var/log/distccd
ejabberdctl
. For those of you that do not know what ejabberd is: it’s a very popular jabber/XMPP server or daemon written in Erlang. After maybe half an hour of googling around and not finding some ready-to-use solution we pretty much discarded the idea and moved on to other problems.
Nevertheless today at home I got interested again and did some more research on extending the basic ejabberd functionality. The good part is that ejabberd comes with a built-in module system that allows you to add your own erlang modules into ejabberd and even hook into some predefined events (though I did not get to that part). The downside is the fact that ejabberd cannot be described as being well documented. So many links, guides or further related information found on the message board or FAQ’s are broken or horribly outdated.
Anyway, in the following parts I will shortly describe what I came up with so far. The described module does not contain any helpful functionality, but the structure of how to build such a module is more important here than the actual implementation.
In order to add a new module into ejabberd you have to implement the OTP behavior gen_mod
which expects two functions to be implemented:
start/2
: module initialization
stop/1
: module termination
In our case we want to build an HTTP module, so we additionally want to implement the process/2
function that handles all HTTP requests that are routed to the module.
The rough outline of our HTTP module will look like this:
%% Module name (has to match with the filename)
-module(mod_custom).
%% Module author
-author('Gregor Uhlenheuer').
%% Module version
-vsn('1.0').
%% Debug flag
-define(EJABBERD_DEBUG, true).
%% Implement the OTP gen_mod behavior
-behavior(gen_mod).
%% Module exports
-export([start/2, stop/1, process/2]).
%%
%% INCLUDES
%%
%% base ejabberd headers
-include("ejabberd.hrl").
%% ejabberd compatibility functions
-include("jlib.hrl").
%% ejabberd HTTP headers
-include("web/ejabberd_http.hrl").
%% initialization function
start(_Host, _Opts) ->
ok.
%% function on module unload
stop(_Host) ->
ok.
%% process any request to "/sockets"
process(["sockets"], _Request) ->
% FIXME: implementation goes here
"Not implemented yet";
%% process all remaining requests
process(_Page, _Request) ->
% FIXME: implementation goes here
"Fallback result".
So this is basically the whole module structure you need to get started with the actual implementation.
Next we have to compile the module itself and adjust the ejabberd configuration in order to integrate our newly built module.
# move into your source directory
$ cd mod_custom/src
You have to pass the file paths to your erlang/ejabberd header files referenced in your module file (ejabberd.hrl
, jlib.hrl
and ejabberd_http.hrl
):
# compile using erlc
$ erlc -I ../ejabberd/src \
-I /lib64/ejabberd/include \
-pa ../ejabberd/src \
mod_custom.erl
Before starting the ejabberd server we have to add the module to the main configuration file ejabberd.cfg
. Somewhere in your config file you will find the ejabberd_http
section. You just add a new request handler to that ejabberd_http
part and you are good to go:
% this will probably look like this
{5280, ejabberd_http, [http_poll, web_admin,
{request_handlers, [
% your request handler will respond to anything like:
% http://example.com:5280/custom/
{["custom"], mod_custom}
]}
]}
Now you can copy the compiled beam file mod_custom.beam
into your ejabberd ebin
directory and (re)start the ejabberd service:
$ cp mod_custom.beam /lib64/ejabberd/ebin
$ ejabberdctl restart
Now you should be able to request your new module function via HTTP:
$ curl -v localhost:5280/custom/sockets
* About to connect() to localhost port 5280 (#0)
* Trying 127.0.0.1...
* connected
* Connected to localhost (127.0.0.1) port 5280 (#0)
> GET /custom/sockets HTTP/1.1
> User-Agent: curl/7.26.0
> Host: localhost:5280
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
< Content-Length: 19
<
Not implemented yet
* Closing connection #0
While experimenting and searching for ways to get going with the ejabberd module I stumbled upon a great way to modify, compile and test your changes.
Instead of manually recompiling your erlang module, copying into your ebin
folder and restarting your ejabberd server you can just remotely connect to your running ejabberd node and inspect your service during execution.
You can either start your ejabberd in debug
mode and execute your commands from there:
$ ejabberdctl debug
Or you can remotely attach to an already running ejabberd node:
$ erl -sname node1 -remsh ejabberd@someserver
In case you get an error like the following:
*** ERROR: Shell process terminated! (^G to start new job) ***
You have to pass your erlang cookie along with your erl
command:
$ erl -sname node1 -remsh ejabberd@someserver -setcookie *****
Now you can easily compile and reload your module from within your remote shell without restarting the ejabberd service:
The project is inspired by the great .NET JSON serializer ServiceStack.Text.
You can get SharpXml either by installing via nuget, by downloading the precompiled binaries or by cloning the git repository from github and compiling the library on your own.
SharpXml can be found and installed via nuget:
PM> Install-Package SharpXml
You can also download the latest precompiled binaries using the downloads page on github:
Alternatively you can clone the git repository and compile the project by yourself:
$ git clone git://github.com/kongo2002/SharpXml.git
$ cd SharpXml\SharpXml
$ msbuild
The API tries to appear small and descriptive at the same time:
// Serialization functions
string XmlSerializer.SerializeToString<T>(T element);
string XmlSerializer.SerializeToString(object element, Type targetType);
void XmlSerializer.SerializeToWriter<T>(TextWriter writer, T element);
void XmlSerializer.SerializeToWriter(TextWriter writer, object element, Type targetType);
// Deserialization functions
T XmlSerializer.DeserializeFromString<T>(string value);
object XmlSerializer.DeserializeFromString(string value, Type targetType);
T XmlSerializer.DeserializeFromReader<T>(TextReader reader);
object XmlSerializer.DeserializeFromReader(TextReader reader, Type targetType);
T XmlSerializer.DeserializeFromStream<T>(Stream stream);
object XmlSerializer.DeserializeFromStream(Stream stream, Type targetType);
T can be any .NET POCO type. Among others, SharpXml supports all basic collection types residing in System.Collections
, System.Collections.Generic
and System.Collections.Specialized
.
SharpXml intends to work in a convention-based manner, meaning that there won’t be too many configuration options to change its basic (de-)serialization behavior. A few options to modify SharpXml’s output exist anyway:
XmlConfig.IncludeNullValues
: Whether to include null
values in the generated/serialized output (default: false
)
XmlConfig.ExcludeTypeInfo
: Whether to include additional type information for dynamic or anonymous types (default: false
)
XmlConfig.EmitCamelCaseNames
: Whether to convert property/type names into camel-case output, i.e. MyClass -> "myClass"
(default: false
)
XmlConfig.WriteXmlHeader
: Whether to include a XML header sequence (<?xml ... ?>
) in the serialized output (default: false
)
XmlConfig.ThrowOnError
: Whether to throw an exception on deserialization errors or silently ignore errors (default: false
)
Although SharpXml comes with built-in support of all basic .NET types there are two ways to modify its de-/serialization behavior. You can either add custom serialization and/or deserialization logic by registering serialization delegates for a specified type on the static XmlConfig
class or you can modify serialization of collections using the XmlElementAttribute
in the SharpXml.Common
namespace.
Moreover the serialization and deserialization of struct types may be customized by overriding the public ToString()
method and/or providing a static ParseXml()
function.
/// Register a serializer delegate for the specified type
void RegisterSerializer<T>(SerializerFunc func);
/// Register a deserializer delegate for the specified type
void RegisterDeserializer<T>(DeserializerFunc func);
/// Unregister the serializer delegate for the specified type
void UnregisterSerializer<T>();
/// Unregister the deserializer delegate for the specified type
void UnregisterDeserializer<T>();
/// Clear all registered custom serializer delegates
void ClearSerializers();
/// Clear all registered custom deserializer delegates
void ClearDeserializers();
The XmlElementAttribute
in SharpXml.Common
allows you to modify the default serialization of .NET types using a few properties to choose from:
[XmlElement Name="..."]
: Override the default name of the property/class
[XmlElement ItemName="..."]
: Override the default name of collection’s items (default: "item"
)
[XmlElement KeyName="..."]
: Override the default name of keys in dictionary types (default: "key"
)
[XmlElement ValueName="..."]
: Override the default name of values in dictionary types (default: "value"
)
[XmlElement Namespace="..."]
: Defines an XML namespace attribute for the selected type or property (Note: this attribute is currently used for serialization of root types only)
In the following section I want to give a short description of the format SharpXml generates and expects on deserialization.
The first thing to mention is that only public properties are serialized and deserialized. Fields, whether public or not, are not serialized at the moment and won’t be in the future! Attributes placed inside the XML tags are not supported either and are simply ignored. Apart from that serialization is pretty straight-forward and your XML looks like you would probably expect it anyway – at least from my point of view :-)
public class MyClass
{
public int Foo { get; set; }
public string Bar { get; set; }
}
var test = new MyClass { Foo = 144, Bar = "I like SharpXml very much" };
An instance of the class above will be serialized like the following:
Using XmlConfig.EmitCamelCaseNames = true;
the generated XML output would look like this instead:
public class ListClass
{
public int Id { get; set; }
public List<string> Items { get; set; }
}
var test = new ListClass
{
Id = 20,
Items = new List<string> { "one", "two" }
};
SharpXml will generate the following XML:
public class DictClass
{
public int Id { get; set; }
public Dictionary<string, int> Values { get; set; }
}
var test = new DictClass
{
Id = 753,
Values = new Dictionary<string, int>
{
{ "ten", 10 },
{ "eight", 8 }
}
};
The serialized output by SharpXml looks like the following:
<DictClass>
<Id>753</Id>
<Values>
<Item>
<Key>ten</Key>
<Value>10</Value>
</Item>
<Item>
<Key>eight</Key>
<Value>8</Value>
</Item>
</Values>
</DictClass>
Note: In all XML examples above indentation is added for convenience only.
As mentioned before you can use the XmlElementAttribute
to customize the generated XML output which is especially useful for collection and dictionary types.
[XmlElement("CustomClass")]
public class CustomDictClass
{
public int Id { get; set; }
[XmlElement(ItemName="Element", KeyName="String", ValueName="Int")]
public Dictionary<string, int> Values { get; set; }
}
var test = new CustomDictClass
{
Id = 753,
Values = new Dictionary<string, int>
{
{ "ten", 10 },
{ "eight", 8 }
}
};
This example shows the effect of the four major options given by the XmlElementAttribute
: Name
, ItemName
, KeyName
and ValueName
.
<CustomClass>
<Id>753</Id>
<Values>
<Element>
<String>ten</String>
<Int>10</Int>
</Element>
<Element>
<String>eight</String>
<Int>8</Int>
</Element>
</Values>
</CustomClass>
Using the property Namespace
of the XmlElementAttribute
you can set an optional namespace string that will be used on serialization of the root element of the resulting XML document:
[XmlElement(Namespace = "Some.Namespace")]
public class NamespaceClass
{
public int Id { get; set; }
public string Name { get; set; }
}
var test = new NamespaceClass { Id = 201, Name = "foo" };
The class described above will be serialized like the following:
Non-reference types like struct may provide custom implementation of the methods ToString()
and/or ParseXml()
in order to customize SharpXml’s serialization behavior.
A typical example might look like this:
public struct MyStruct
{
public int X { get; set; }
public int Y { get; set; }
/// <summary>
/// Custom ToString() implementation - will be used by SharpXml
/// </summary>
public override string ToString()
{
return X + "x" + Y;
}
/// <summary>
/// Custom deserialization function used by SharpXml
/// </summary>
public static MyStruct ParseXml(string input)
{
var parts = input.Split('x');
return new MyStruct
{
X = int.Parse(parts[0]),
Y = int.Parse(parts[1])
};
}
}
var test = new MyStruct { X = 200, Y = 50 };
Using the struct type described above results in the following output:
Without the custom implementations the struct would be serialized like this:
Moreover reference types can be customized by registering custom serialization delegates to the static XmlConfig
class using the aforementioned RegisterSerializer
and RegisterDeserializer
functions.
public class SomeClass
{
public double Width { get; set; }
public double Height { get; set; }
}
// register custom serializer
XmlConfig.RegisterSerializer<SomeClass>(x => x.Width + "x" + x.Height);
// register custom deserializer
XmlConfig.RegisterDeserializer<SomeClass>(v => {
var parts = v.Split('x');
return new SomeClass
{
Width = double.Parse(parts[0]),
Height = double.Parse(parts[1])
};
});
The resulting XML will look pretty much the same as the struct example described earlier but you can imagine the possibilities given by this approach.
The deserialization logic of SharpXml can be described as very fault-tolerant, meaning that even badly formatted or invalid XML may usually be deserialized without errors.
Tag name matching is case insensitive
Closing tags don’t have to be the same as the opening tag. The nesting of tags is more important here.
The order of the tags is irrelevant
Tag attributes are not supported and therefore ignored
XML namespaces are ignored as well
In order to provide a better view on how fault-tolerant SharpXml works I will give an example of a very badly formatted XML input that will be deserialized without any errors:
This XML above will be successfully deserialized into an instance of MyClass
.
Some random things I am planning to work on in the future:
make SharpXml.Common
an optional dependency
SharpXml is written by Gregor Uhlenheuer. You can reach me at kongo2002@gmail.com
SharpXml is licensed under the Apache license, Version 2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
To add folders to your F# project you have to edit the project file (*.fsproj
) by hand using your favorite text editor. Search for the item group with all the Compile
statements:
<!-- ... -->
<ItemGroup>
<Compile Include="Parser.fs" />
<None Include="Script.fsx" />
</ItemGroup>
<!-- ... -->
Now you just have to add your new Compile
, Resource
or whatever statements to that item group:
<!-- ... -->
<ItemGroup>
<Resource Include="resources\images\icon.png" />
<Resource Include="resources\images\error.png" />
<Compile Include="Parser.fs" />
<None Include="Script.fsx" />
</ItemGroup>
<!-- ... -->
After you matched your file system structure with your file paths you can reload your project and enjoy your ordered project structure. Now you can even add subfolders to already existing folders via the Visual Studio context menu.
DynamicObject
(in the System.Dynamic
namespace) and therefore I had to override the methods TrySetMember
and TryGetMember
. Especially the latter one forced me to read up on reference parameters in F#.
The method signature of TryGetMember
looks like the following (in C# syntax):
In F# you have to declare the result
parameter as a reference type with byref
:
open System.Runtime.InteropServices
override x.TryGetMember (binder : GetMemberBinder, [<Out>] result : byref<obj>) =
raise <| NotImplementedException()
I am not quite sure if the [<Out>]
parameter attribute is really necessary since my project compiled just fine without it as well.
The Conway sequence is built by reading the input string/sequence from left to right and returning the number of repeated consecutive elements. E.g. 1211
is converted to 111221
which in turn is processed into 312211
.
My pretty straight forward approach looks like the following:
module Conway (getConway) where
import Data.List (unfoldr)
-- | 'getConway' generates an infinite Conway look-and-say sequence
-- (sequence A006715 in OEIS). See
-- http://en.wikipedia.org/wiki/Look-and-say_sequence
--
-- A simple use of 'getConway':
--
-- > (take 3 $ getConway "abc") == ["abc","1a1b1c","111a111b111c"]
--
getConway :: String -> [String]
getConway input =
unfoldr (\seed -> Just(seed, getNext seed)) input
where
grp x (Just l, c, lst) =
if x == l then (Just l, (c+1), lst)
else (Just x, 1, (show c) ++ (l : lst))
grp x (Nothing, _, _) = (Just x, 1, [])
appendLast (a, b, c) =
case a of
Just chr -> (show b) ++ chr : c
otherwise -> c
getNext = appendLast . foldr grp (Nothing, 0, [])
First you have to download the files necessary for the installation:
$ cd /tmp
# arm stage3 autobuild
$ wget http://distfiles.gentoo.org/releases/arm/autobuilds/current-stage3-armv6j/stage3-armv6j-20121107.tar.bz2
# latest portage snapshot
$ wget http://distfiles.gentoo.org/snapshots/portage-latest.tar.bz2
You can get the latest Raspberry Pi kernel from github:
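One way to do that - and the one I am assuming in the copy steps below - is to use the prebuilt kernel and boot files from the raspberrypi/firmware repository:
$ cd /tmp
$ git clone --depth 1 https://github.com/raspberrypi/firmware.git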
Now that we have all necessary files you can insert your SD card. In the following steps I am using /dev/mmcblk0
to identify the SD card. This identifier may vary on other systems - you can check with dmesg
after inserting your card.
I chose to create a FAT32 boot partition of 32 MB, a swap partition with 512 MB and the rest for the root EXT4 partition.
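A sketch of the necessary commands (the partition numbers match the fstab shown below):
# create the three partitions interactively
$ fdisk /dev/mmcblk0
# create the file systems
$ mkfs.vfat -F 32 /dev/mmcblk0p1
$ mkswap /dev/mmcblk0p2
$ mkfs.ext4 /dev/mmcblk0p3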
The SD card is formatted and ready to be used for gentoo installation.
I am using the directory /tmp/mnt/gentoo
for the installation directory. You are free to substitute this to your liking.
$ mkdir /tmp/mnt/gentoo
$ mount /dev/mmcblk0p3 /tmp/mnt/gentoo
$ mkdir /tmp/mnt/gentoo/boot
$ mount /dev/mmcblk0p1 /tmp/mnt/gentoo/boot
Next we can extract portage and the stage3 image on the mounted SD card:
# extract stage3 files
$ tar xvf stage3-armv6j*.tar.bz2 -C /tmp/mnt/gentoo
# extract portage image
$ tar xvf portage-latest.tar.bz2 -C /tmp/mnt/gentoo/usr
Next we have to copy the kernel and its modules from the cloned github repository:
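Assuming the firmware repository from above was cloned to /tmp/firmware, this boils down to something like:
$ cp -r /tmp/firmware/boot/* /tmp/mnt/gentoo/boot/
$ mkdir -p /tmp/mnt/gentoo/lib/modules
$ cp -r /tmp/firmware/modules/* /tmp/mnt/gentoo/lib/modules/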
Before being able to use the new installation we have to adjust a few configuration files.
Next you have to edit your fstab
to match your partition scheme:
My fstab
looks like this:
/dev/mmcblk0p1 /boot auto noauto,noatime 1 2
/dev/mmcblk0p2 none swap sw 0 0
/dev/mmcblk0p3 / ext4 noatime 0 1
After that you have to create a cmdline.txt
file to pass the required boot parameters:
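The exact parameters depend on your setup; a minimal example matching the partition layout above could look like this:
$ echo "console=tty1 root=/dev/mmcblk0p3 rootfstype=ext4 rootwait" > /tmp/mnt/gentoo/boot/cmdline.txt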
After that you may want to edit your make.conf
file to set your desired make parameters like CFLAGS
and set some default USE flags.
Next you will want to set your current timezone. Find a list of available timezones like this:
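For example by listing the zoneinfo directory of the new installation:
$ ls /tmp/mnt/gentoo/usr/share/zoneinfo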
Set your desired timezone by copying the zoneinfo to the new file /etc/localtime
. In my case I chose the Europe/Berlin timezone:
$ cp /tmp/mnt/gentoo/usr/share/zoneinfo/Europe/Berlin /tmp/mnt/gentoo/etc/localtime
$ echo "Europe/Berlin" > /tmp/mnt/gentoo/etc/timezone
As we don’t want to chroot into the newly created gentoo installation we just reset the root password by editing the /tmp/mnt/gentoo/etc/shadow
file to the following:
root::10770:0:::::
Before booting your Raspberry Pi you first have to unmount the SD card:
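That is just the reverse of the mount commands from above:
$ umount /tmp/mnt/gentoo/boot
$ umount /tmp/mnt/gentoo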
After inserting your SD card into your Raspberry Pi and turning on the power you should see a gentoo startup sequence and a login prompt.
After logging into root without a password you should immediately set a new password for root:
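Setting the password is a single command:
$ passwd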
In order to activate networking on boot you can add an entry via rc-update
:
$ nano -w /etc/conf.d/net
$ cd /etc/init.d
$ ln -s net.lo net.eth0
$ rc-update add net.eth0 default
$ /etc/init.d/net.eth0 start
In case you get error messages like INIT Id "s0" respawning too fast
on boot you may want to comment the first two serial console entries in /etc/inittab
:
After editing the mentioned entries should look like this:
# SERIAL CONSOLES
#s0:12345:respawn:/sbin/agetty 9600 ttyS0 vt100
#s1:12345:respawn:/sbin/agetty 9600 ttyS1 vt100
The Raspberry Pi does not have a hardware clock so you need to disable the hwclock
service and enable swclock
instead:
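With OpenRC this boils down to something like:
$ rc-update del hwclock boot
$ rc-update add swclock boot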
Optionally you may want to emerge ntp
and synchronize the clock on startup:
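A sketch of that (ntp-client being the service that synchronizes the clock on startup):
$ emerge -av net-misc/ntp
$ rc-update add ntp-client default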
You probably want to ssh into your Raspberry Pi from time to time:
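So enable and start the ssh daemon as well:
$ rc-update add sshd default
$ /etc/init.d/sshd start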
After all necessary installation steps are passed you can update your system and start using gentoo on your Raspberry Pi:
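For example with a plain world update:
$ emerge --sync
$ emerge -avuDN world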
public static T GetOrDefault<T>(this T[] elements, int n)
{
if (elements.Length > n)
return elements[n];
return default(T);
}
After trying a few things without any success I found a solution on StackOverflow that I want to share with you.
Basically this is what it has to look like in F#:
type 'a ``[]`` with
member x.GetOrDefault(n) =
if x.Length > n then x.[n]
else Unchecked.defaultof<'a>
The trick is to use the backticks notation to define the array class. As stated in the mentioned post you can extend via the IList<_>
generic interface as well:
This post now contains my solutions to the first 10 problems mentioned on the haskell wiki translated to scala. These first problems target the handling of lists, one of the most important data structures in functional programming.
class Problem001 extends Problem {
def number = 1
def getLast[T](input : List[T]) : T = {
input match {
case Nil => throw new IllegalArgumentException("empty list")
case head :: Nil => head
case _ :: tail => getLast(tail)
}
}
def test() = {
getLast(List(1, 2, 3, 4)) == 4
}
}
class Problem002 extends Problem {
def number = 2
def lastButOne[T](list : List[T]) : T = {
list match {
case Nil => throw new IllegalArgumentException("empty list")
case _ :: Nil => throw new IllegalArgumentException("list with one element only")
case last :: _ :: Nil => last
case _ :: tail => lastButOne(tail)
}
}
def test() = {
lastButOne(List(1,2,3,4)) == 3
}
}
class Problem003 extends Problem {
def number = 3
def getNth[T](list : List[T], n : Int) : T = {
list match {
case Nil => throw new IllegalArgumentException("index out of bounds")
case head :: tail if n == 1 => head
case _ :: tail => getNth(tail, n-1)
}
}
def test() = {
val lst = List(1,2,3,4,5)
getNth(lst, 3) == 3 &&
getNth(lst, 1) == 1 &&
getNth(lst, 5) == 5
}
}
The first element in the list is number 1.
class Problem004 extends Problem {
def number = 4
def numElements[T](list : List[T]) = {
def inner(lst : List[T], i : Int) : Int = {
lst match {
case Nil => i
case _ :: tail => inner(tail, i+1)
}
}
inner(list, 0)
}
def test() = {
val lst = List.range(0, 10)
numElements(lst) == 10 &&
numElements(List()) == 0
}
}
class Problem005 extends Problem {
def number = 5
def rev[T](list : List[T]) = {
def inner(lst : List[T], res : List[T]) : List[T] = {
lst match {
case Nil => res
case head :: tail => inner(tail, head :: res)
}
}
inner(list, List())
}
def test() = {
rev(List(1,2,3,4,5)) == List(5,4,3,2,1)
}
}
A palindrome can be read forward or backward (i.e. 12321
)
class Problem006 extends Problem {
def number = 6
def isPalindrome[T](list : List[T]) = {
list == list.reverse
}
def test() = {
isPalindrome(List(1,2,3,2,1)) &&
isPalindrome("madamimadam".toList) &&
!isPalindrome(List(1,3,1,2))
}
}
class Problem007 extends Problem {
def number = 7
def flattenList[T](list : List[List[T]]) = {
def inner(list : List[List[T]], res : List[T]) : List[T] = {
list match {
case Nil => res
case head :: tail =>
inner(tail, head.foldLeft(res)((l, h) => h :: l))
}
}
inner(list, List()) reverse
}
def test() = {
val lst = List(List(1,2), List(3), List(4,5,6))
flattenList(lst) == List(1,2,3,4,5,6)
}
}
class Problem008 extends Problem {
def number = 8
def compress[T](list : List[T]) = {
def inner(lst : List[T], last : T, res : List[T]) : List[T] = {
lst match {
case Nil => res
case head :: tail if head == last => inner(tail, last, res)
case head :: tail => inner(tail, head, head :: res)
}
}
list match {
case Nil => Nil
case head :: tail => inner(tail, head, List(head)) reverse
}
}
def test() = {
val lst = "aaabbbbbcccdefff".toList
compress(lst) == "abcdef".toList
}
}
If a list contains repeated elements they should be placed in separate sublists.
class Problem009 extends Problem {
def number = 9
def pack[T](list : List[T]) = {
def inner(lst : List[T], last : List[T], res : List[List[T]]) : List[List[T]] = {
lst match {
case Nil => last :: res
case hd :: tl if hd == last.head => inner(tl, hd :: last, res)
case hd :: tl => inner(tl, List(hd), last :: res)
}
}
list match {
case Nil => Nil
case head :: tail => inner(tail, List(head), List()) reverse
}
}
def test() = {
val lst = "aaaabbbcdeeee".toList
pack(lst) == List("aaaa".toList, "bbb".toList, List('c'), List('d'), "eeee".toList)
}
}
Use the result of problem 9 to implement the so-called run-length encoding data compression method. Consecutive duplicates of elements are encoded as tuples (N E
) where N
is the number of duplicates of the element E
.
class Problem010 extends Problem {
def number = 10
def encode[T](list : List[T]) = {
def inner(lst : List[T], last : (Int, T), res : List[(Int, T)]) : List[(Int, T)] = {
val (count, elem) = last
lst match {
case Nil => last :: res
case hd :: tl if hd == elem => inner(tl, last.copy(_1 = count+1), res)
case hd :: tl => inner(tl, (1, hd), last :: res)
}
}
list match {
case Nil => Nil
case hd :: tl => inner(tl, (1, hd), List()) reverse
}
}
def test() = {
val lst = "aaabbcddeeeee".toList
encode(lst) == List((3, 'a'), (2, 'b'), (1, 'c'), (2, 'd'), (5, 'e'))
}
}
class Euler020 extends Euler {
def number = 20
def solution = {
def factorial(n: Int) = {
def inner(i: Int, current: BigInt): BigInt = {
i match {
case 1 => current
case _ => inner(i - 1, current * i)
}
}
inner(n, 1)
}
val fact = factorial(100).toString()
fact.foldLeft(0)((s, c) => s + Character.getNumericValue(c))
}
}
2^15 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26.
What is the sum of the digits of the number 2^1000?
I am using my own version of pow
here because it seems that the scala standard method from scala.math
does not support BigInt
which is obviously needed for this problem (or at least for my way of solving it *g*).
class Euler016 extends Euler {
def number = 16
def solution = {
def pow(base : Int, n : Int) = {
def inner(base : BigInt, exp : Int, sum : BigInt) : BigInt = {
exp match {
case 0 => sum
case _ => inner(base, exp-1, sum * base)
}
}
inner(BigInt(base), n, BigInt(1))
}
val sum = pow(2, 1000).toString()
sum.foldLeft(0)((s, c) => s + Character.getNumericValue(c))
}
}
class Euler006 extends Euler {
def number = 6
def solution = {
def sumOfSquares(n : Int) = {
Stream.range(1, n+1, 1).map(x => x * x).sum
}
def squareOfSums(n : Int) = {
val sums = Stream.range(1, n+1, 1).sum
sums * sums
}
squareOfSums(100) - sumOfSquares(100)
}
}
This one contains nothing too special at all. In fact I am not too sure if there is any real benefit in using Stream.range
instead of Range
.
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, …
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
class Euler002 extends Euler {
def number = 2
def solution = {
lazy val sequence = {
def build(i : Int, j : Int) : Stream[Int] = i #:: build(j, i+j)
build(1, 2)
}
val fibs = for (elem <- sequence.iterator if elem % 2 == 0) yield elem
fibs.takeWhile(_ <= 4000000).sum
}
}
This solution uses a lazily evaluated sequence (Stream
) where each element is computed as it is requested.
So in case you haven’t read one of my earlier posts on Project Euler this is the problem’s description:
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
My scala solution looks like this:
These are my default git aliases I configured on every box I work on:
br = branch
ci = commit
co = checkout
df = diff
lg = log --graph --oneline --decorate --all
st = status --branch --short
unadd = reset HEAD
ffmerge = merge --ff-only
fixup = commit --amend -C HEAD
You can define your aliases via git config
:
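For example, taking the st alias from the list above:
git config --global alias.st "status --branch --short"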
Use color where possible:
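The catch-all setting for that is color.ui:
git config --global color.ui auto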
Use a mergetool for diffing and merging:
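For example with vimdiff (pick whatever tool you prefer):
git config --global merge.tool vimdiff
git mergetool
git difftool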
Stage parts/hunks of your changes interactively:
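That is the patch mode of git add:
git add -p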
Stage files interactively:
git add -i
Checkout/discard parts of the current changes in your working directory:
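Analogous to the staging above, this is the patch mode of git checkout:
git checkout -p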
This is how you create a new branch for changes you already did and switch those on a new branch (this is a command I use all the time):
# take my current working directory changes
# create a new branch called 'fix_issue21'
# and immediately switch to the created branch
git checkout -b fix_issue21
Show branches that are completely merged into the current branch:
git branch --merged
This would be the opposite - branches that do have unique commits in it:
git branch --no-merged
Find a branch that have a particular commit in it:
git branch --contains 7e830ac7
There is actually one pretty standard use case of rebasing which is to rebase your topic branch off the latest version of the master branch. The usual way is probably something like this:
# we are currently on the topic branch
git checkout master
git pull
git checkout topic_branch
git rebase master
But there is actually a pretty nice way to speed up this workflow without actually touching your local master branch:
git fetch && git rebase origin/master
Use the interactive rebasing:
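For example to rework the commits on top of origin/master:
git rebase -i origin/master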
Show all commits reachable by branchA that are not reachable by branchB:
git log branchA ^branchB
A typical example might be: “which commits are in my topic branch that are not yet merged into master?”:
git log feature ^master
Another one: “which commits I just fetched from origin that are not yet merged into master”:
git log origin/master ^master
I don’t get tired of this: “which new commits will be pushed to origin?”:
git log master ^origin/master
Important log options you will use from time to time:
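Typical candidates would be:
# show the full patch of every commit
git log -p
# show a diffstat per commit
git log --stat
# filter by author or date
git log --author=kongo2002
git log --since=2.weeks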
Who modified what changes in the specified file?
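That is a job for git blame (the file name is just an example):
git blame README.md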
This is how you would remove a branch on a remote:
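By pushing an 'empty' source to the remote branch (feature being just an example name):
git push origin :feature
# or more explicitly
git push origin --delete feature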
You can easily check out a specific branch and track its remote:
git checkout -b feature origin/feature
# these two would do exactly the same
git checkout -t origin/feature
git checkout feature
If you want to get the work from a remote branch that you don’t want to add permanently to your remotes you can add the remote address to the pull
command:
git checkout -b user
git pull 'git://github.com/user/project.git'
Get more human-readable names for a specific commit:
git describe HEAD
git describe HEAD@{1.month.ago}
Get more verbose output of curl
when communicating over http like cloning:
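Setting the GIT_CURL_VERBOSE environment variable does exactly that (the URL is just an example):
GIT_CURL_VERBOSE=1 git clone https://example.com/project.git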
Show the last commit that contains a specific string (regular expression match):
git show :/fixed
git show :/^Merged
Create a bundle file:
git bundle create repo.bundle master
Now you can send the binary file via email, copy it on a usb drive and the like and treat it like a remote:
# show branches in the bundle
git ls-remote repo.bundle
# clone from the bundle file
git clone repo.bundle -b master localrepo
cd localrepo
Some really great sites or talks on stuff about git:
In order to better illustrate what I mean here is a small example (which actually does not work):
module internal ReflectionHelpers =
open System
open System.Linq.Expressions
open System.Reflection
// this is the delegate type we want to use
type GetterFunc<'T> = delegate of 'T -> obj
let getGetter<'a> (p : PropertyInfo) =
let inst = Expression.Parameter(p.DeclaringType, "i")
let prop = Expression.Property(inst, p)
let conv = Expression.Convert(prop, typeof<obj>)
// this will throw an ArgumentNullException
Expression.Lambda<GetterFunc<'a>>(conv, inst).Compile()
The above code snippet compiles just fine but on execution you will get an ArgumentNullException
. The problem is somewhat hidden because the method Expression.Lambda
tries to find a public Invoke
method on the given delegate type. This works on C# as expected but in F# the Invoke
method is defined with the same visibility as the declaring type (which is internal
in this example).
As of now you only have two workarounds to choose from: making the delegate type public or using
InternalsVisibleTo
.
In contrast to that the following C# snippet works without any problems:
using System;
using System.Linq.Expressions;
using System.Reflection;
namespace TestSnippets
{
internal static class ReflectionHelpers
{
internal delegate object GetterFunc<T>(T element);
internal static GetterFunc<T> GetGetterFunc<T>(PropertyInfo property)
{
var inst = Expression.Parameter(property.DeclaringType, "i");
var prop = Expression.Property(inst, property);
var conv = Expression.Convert(prop, typeof(object));
return Expression.Lambda<GetterFunc<T>>(conv, inst).Compile();
}
}
}
public static class ProcessorUnit<T>
where T : IProcessor
{
public static bool Process(T element)
{
return element.DoWork();
}
}
The closest I could get in F# looks like the following. Since there are no static classes in F# at all you have to use a type with a private constructor and static member definitions:
// Singleton type with a private parameterless constructor
type MySingleton private() =
// other bindings
// private static instance of the MySingleton type
static let mutable instance = lazy(MySingleton())
// public getter property
static member Instance with get() = instance
// other members
The point I had the most trouble with was the correct definition of a static let
binding. So in order to make a let
binding static in a type definition just put a static
in front of it - that’s all.
ObjectDataProvider
is a pretty neat construct in WPF to build custom data providers for existing types. I find those especially useful when working with enumerations. If you want to list all possible values of an enumeration in a ComboBox, these are the steps you have to take.
First you have to create a static instance of the ObjectDataProvider in some resource dictionary. In the resources of the current window this could look like the following:
<Window.Resources>
<ObjectDataProvider
MethodName="GetValues"
ObjectType="{x:Type sys:Enum}"
x:Key="myEnumTypeProvider">
<ObjectDataProvider.MethodParameters>
<x:Type TypeName="local:MyEnumType"/>
</ObjectDataProvider.MethodParameters>
</ObjectDataProvider>
</Window.Resources>
The above lines are roughly equivalent to the invocation of GetValues
in your code-behind:
// the enumeration type
public enum MyEnumType
{
First,
Second,
Third
}
// usage of Enum.GetValues to manually set the container's items
internal static void SetComboBoxValues(ComboBox container)
{
if (container == null)
throw new ArgumentNullException("container");
var enumValues = Enum.GetValues(typeof(MyEnumType));
container.ItemsSource = enumValues;
container.SelectedIndex = 0;
}
The next step is to add a static binding for the ItemsSource
of your container to the new data provider:
<ComboBox
Name="myTypeSelector"
ItemsSource="{Binding Source={StaticResource myEnumTypeProvider}}"/>
Notice that you may have to add the necessary namespace declarations (for System in mscorlib and your project’s own namespace) in your XAML’s header like:
xmlns:local="clr-namespace:MyProject"
Using a discriminated union in F# sadly does not work the same way. In order to obtain the types of a union type you have to use reflection and bind the values manually to the container’s ItemsSource
.
open System.Windows.Controls
open Microsoft.FSharp.Reflection
type MyUnionType =
| First
| Second
| Third
let getUnionNames =
FSharpType.GetUnionCases typeof<MyUnionType>
|> Array.map (fun t -> t.Name)
let setComboBoxValues (cb : ComboBox) =
cb.ItemsSource <- getUnionNames
cb.SelectedIndex <- 0
In case you really want to use a real enumeration in F# you have to define the discriminated union like this instead:
INotifyPropertyChanged
interface 1. I hadn’t used events in F# before, so I had to experiment with a few things to get it right.
The solution was to implement a CheckItem class that wraps the selection functionality and the implementation of the INotifyPropertyChanged
interface. This is what it looks like:
open System.ComponentModel
open Microsoft.FSharp.Quotations.Patterns
/// Observable object class implementing the INotifyPropertyChanged
/// interface
type ObservableItem() =
let propertyChanged = Event<_,_>()
let getPropertyName = function
| PropertyGet(_, p, _) -> p.Name
| _ -> invalidOp "Invalid expression argument: expecting property getter"
interface INotifyPropertyChanged with
[<CLIEvent>]
member this.PropertyChanged = propertyChanged.Publish
member this.NotifyPropertyChanged name =
propertyChanged.Trigger(this, PropertyChangedEventArgs(name))
member this.NotifyPropertyChanged expr =
expr |> getPropertyName |> this.NotifyPropertyChanged
/// Simple wrapper class for a selectable item
type CheckItem<'a>(item : 'a) =
inherit ObservableItem()
let mutable _isSelected = true
let _item = item
member this.Item
with get () = _item
member this.IsSelected
with get () = _isSelected
and set value =
_isSelected <- value
this.NotifyPropertyChanged <@ this.IsSelected @>
There are a few things worth noting here: The first interesting thing is the attribute CLIEventAttribute 2 that allows us to use a more concise way of implementing events. Basically it adds the necessary CLI metadata to the event and implements the add_EventName and remove_EventName methods.
The second interesting fact is the usage of F# quotations to attach the right property name to the PropertyChangedEventArgs
structure. Instead of specifying a string constant it is possible to use a property expression.
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, …
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
My first attempt used a straight-forward implementation of a single fibonacci number and calculating each one less than four million separately:
fibonacci :: Int -> Int
fibonacci 1 = 1
fibonacci 2 = 1
fibonacci n = fibonacci (n-1) + fibonacci (n-2)
solution :: Int
solution =
sum $ filter even $ takeWhile (<4000000) [fibonacci x | x <- [1..]]
Since this solution does not run very efficiently I tried to come up with a slightly improved version:
fibSeq :: Int -> [Int]
fibSeq 1 = [1]
fibSeq 2 = [1, 1]
fibSeq n = fibSeq' n [1,1]
fibSeq' :: Int -> [Int] -> [Int]
fibSeq' n list@(x:y:_) =
if next > n then
list
else
fibSeq' n (next:list)
where
next = x + y
solution2 :: Int
solution2 =
sum $ filter even $ fibSeq 4000000
Now after reviewing the written code I notice that it is easily possible to rewrite the fibSeq'
function using guards. The resulting function looks more readable to me:
The problem’s description is as following:
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
I am pretty sure there are more elegant solutions to this fairly simple problem, but this is my first shot:
Hakyll is a Haskell library for generating static sites, mostly aimed at small-to-medium sites and personal blogs. It is written in a very configurable way and uses an xmonad-like DSL2 for configuration.
Given the fact that Hakyll uses pandoc to parse and build the web pages Hakyll can process Markdown, reStructuredText or other popular text formats. I tried to use reStructuredText with pandoc at first so I could reuse my already written posts. Sadly that did not work out too well because I didn’t get syntax-highlighting to work properly.
Luckily converting to Markdown solved that problem and the conversion of my posts from reStructuredText wasn’t too complicated.
I am no expert on Haskell yet at all so my current configuration is a pretty basic one and consists of approximately 90% of the example configurations on the Hakyll website. So it took some time to get comfortable with the Hakyll DSL and the way it is supposed to be configured.
This is the way the blog posts are rendered:
-- Posts
match "posts/*" $ do
route $ setExtension "html"
compile $ pageCompiler
>>> arr (renderDateField "date" "%B %e, %Y" "Date unknown")
>>> arr (renderDateField "shortdate" "%Y-%m-%d" "Date unknown")
>>> renderTagsField "posttags" (fromCapture "tags/*")
>>> applyTemplateCompiler "templates/post.html"
>>> applyTemplateCompiler "templates/default.html"
>>> relativizeUrlsCompiler
The following snippet illustrates the way the tag cloud is processed (though I am not really happy with the output yet):
match "tags.html" $ route idRoute
create "tags.html" $ constA mempty
>>> arr (setField "title" "tag cloud")
>>> requireA "tags" (setFieldA "tagcloud" (renderTagCloud'))
>>> applyTemplateCompiler "templates/tagcloud.html"
>>> applyTemplateCompiler "templates/default.html"
>>> relativizeUrlsCompiler
-- ...
-- ...
-- ...
where
tagIdentifier :: String -> Identifier (Page String)
tagIdentifier = fromCapture "tags/*"
renderTagCloud' :: Compiler (Tags String) String
renderTagCloud' = renderTagCloud tagIdentifier 100 120
Overall I am pretty happy with the result so far. The blog looks nearly the same as it looked before - anyway, there is room for a lot of improvements. These are a few points I would like to add in the near future:
So I guess the amount of spare time and my progress of learning Haskell will influence when that will happen. If you have got critique or comments of any other kind feel free to send me an email.
Domain Specific Language↩
I will give you a short walkthrough on how to create a new F# project in Visual Studio and how to use WPF in there.
The first thing you have to do is to create a new F# application in Visual Studio.
PresentationCore
PresentationFramework
System.Xaml
System.Xml
WindowsBase
Open the Properties of your newly created project
Select Windows Application as Output type (in the Application tab)
Basically you are now ready to start hacking a fine WPF application in F#. In order to define your application’s entry point you may want to add the following lines in your main source file:
namespace WpfSharpProject
open System
open System.Windows
module Program =
[<STAThread>]
[<EntryPoint>]
let main args =
SharpWindow().Run()
In order to conveniently handle the WPF controls I found a few helper functions to be very handy:
open System
open System.Windows
open Microsoft.FSharp.Control
module Utils =
/// Create a new window instance from the given XAML filename
let window assembly name =
let uri = sprintf "/%s;component/%s" assembly name
Application.LoadComponent(new Uri(uri, UriKind.Relative)) :?> Window
/// Find the resource with the given name
let (?) (window : Window) name =
window.FindName name |> unbox
/// Imitate the C# event handler syntax
let inline (+=) (event : IEvent<_, _>) handler =
(event :> IDelegateEvent<_>).AddHandler(RoutedEventHandler(handler))
The last thing you have to do is to create a new XAML file, add its BuildAction to Resource
and add the matching code-behind F# source file. You find a small example down below:
<!-- File: SharpWindow.xaml -->
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Width="500" Height="400"
Title="SharpWindow">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto"/>
<ColumnDefinition/>
</Grid.ColumnDefinitions>
<!-- ... -->
</Grid>
</Window>
The code-behind F# source file:
// File: SharpWindow.xaml.fs
namespace WpfSharpProject
open System.Windows
open System.Windows.Controls
open System.Windows.Data
type SharpWindow private (xaml : Window) as this =
// associate a few controls
let quit : Button = xaml?quitBtn
let import : Button = xaml?importBtn
let export : Button = xaml?exportBtn
// connect a few event handlers
do
quit.Click += this.quitClick
import.Click += this.importClick
export.Click += this.exportClick
new () =
SharpWindow(window "WpfSharpProject" "SharpWindow.xaml")
member this.Run() =
(new Application()).Run xaml
member this.quitClick sender args =
xaml.Close()
// ...
That’s pretty much all you need to get started with your WPF hacking in F#. I was very surprised how the creation of a small application like this was not much more complicated than it would have been in C#.
So you don’t have to be too frightened by Visual Studio 2010 not shipping a builtin project template or sophisticated support for WPF and try it out yourself!
ghci
, the interactive compiler. Using ghci you can quickly test and re-evaluate code you have just written.
Today I just installed the Haskell platform for Windows. After first trying ghci I just remembered how crappy cmd.exe
is. But rescue is near - go ahead and set up Console2 to use ghci as one tab setting.
cmd.exe /K "ghci.exe"
The first one is a simple function that calculates the length of a given list. As far as I can see this function behaves pretty much the same as the built-in length
function.
-- Write a function that computes the number of elements in a list. To test
-- it, ensure that it gives the same answers as the standard 'length'
-- function.
len :: [a] -> Int
len [] = 0
len (x:xs) = 1 + len xs
The second one is supposed to calculate the mean of the elements in the given list. My first approach used a self-written sum
function using foldr
. Later I noticed that there is an in-built function called sum
already that I can use.
-- Compute a function that computes the mean of a list, i.e., the sum of all
-- elements in the list divided by its length. (You may need to use the
-- 'fromIntegral' function to convert the length of the list from an integer
-- into a floating-point number.)
mean [] = 0
mean lst = sum lst / fromIntegral (len lst)
The next two ones are about palindrome numbers or collections. The first one returns a palindrome list by simply appending the reverse of the input list.
-- Turn a list into a palindrome; i.e., it should read the same both
-- backward and forward. For example, given the list [1,2,3], your function
-- should return [1,2,3,3,2,1]
to_palindrome lst = lst ++ reverse lst
The second function determines if the given list is palindromic by sequentially comparing the mirrored element pairs of the list.
-- Write a function that determines whether its input list is a palindrome.
is_palindrome [] = False
is_palindrome lst =
all (\x -> (lst !! x) == (lst !! (len-x-1))) [0..middle]
where
len = length lst
middle = len `div` 2
Those functions do look stupidly simple, but for an absolute beginner in Haskell it was kind of a hassle, especially to get all those types right.
But nevertheless it’s quite interesting to slowly get a feel for those functional programming constructs. Also it is fun to see that you are indeed able to solve these problems in the end after trying dozens of wrong approaches.
Without further ado, this is what I came up with so far:
open System
open System.Text
open System.Text.RegularExpressions
/// Replace the non-guessed letters with an underscore
let replace (word : string) (letters : char list) =
let sb = StringBuilder(word.Length)
let contains c list = list |> List.exists (fun x -> x = c)
word.ToCharArray()
|> Array.iter (fun l -> sb.Append(if contains l letters then l else '_') |> ignore)
sb.ToString()
/// Determine whether the word was guessed
let solved (word : string) =
let rgx = Regex(@"^[^_]+$")
rgx.IsMatch(word)
/// Start the Hangman game with a maximum number of attempts
/// and a given word to guess
let hangman max (word : string) =
printfn "Hangman: %s" (replace word [])
let rec hangman' attempts (word : string) (letters : char list) =
if attempts = 0 then printfn "You lost the game"
else
printf "Attempts left %d: " attempts
let input = Console.ReadKey(true).KeyChar
let ls = input :: letters
let rep = replace word ls
if solved rep then
printfn "%s" word
else
printfn "%s" rep
hangman' (attempts-1) word ls
hangman' max word []
I am pretty sure that there are much more elegant ways to implement this in F#. So if you have got any remarks or suggestions on how to improve this, I am very much interested in your opinion. So feel free to email me.
By the way, I did program on my linux machine at home using MonoDevelop with the great F# bindings 1 mainly written by Tomas Petricek. Go check that out if you are running linux or MacOS - it does work very well, especially considering that it’s completely open-source.
Since there was no one really familiar with programming in Erlang I gave it a shot. After half an hour of reading in google on Erlang syntax we managed to patch the sources and add some additional logging information.
Later at home I tried if I could come up with some basic functionality to kind of get a feeling for the language. A common example when starting to learn a new programming language is to implement the factorial function:
n! = \left\{
\begin{array}{l l}
1 & \quad \text{if $n = 0$}\\
n((n-1)!) & \quad \text{if $n > 0$}\\
\end{array}\right.
This is the first approach I came up with that uses recursion:
The more interesting way of solving the above problem is to use tail-recursion. That way the temporary processing values are stored in an accumulator argument of the function that is being called. Using tail-recursion it is not necessary for every step of the recursive function to be held on the stack.
-module(factorial).
-export([fac/1]).
% API function that is being exporting
fac(N) -> fac(N,1).
% factorial function using tail-recursion
fac(0, Acc) -> Acc;
fac(N, Acc) when N > 0 -> fac(N-1, N*Acc).
# first remove the tag in your local repository
git tag -d v0.0.3
# delete the tag on the remote
git push origin :refs/tags/v0.0.3
Not that difficult, isn’t it?!
The disadvantage of this command is that the whole version string is returned from this query. In order to get more fine-grained information about your SQL Server you will want to use SERVERPROPERTY
instead:
-- get the product version like '10.50.1617.0'
SELECT SERVERPROPERTY('productversion')
-- get the SQL Server edition like 'Express Edition'
SELECT SERVERPROPERTY('edition')
You can find further information about the SQL Server versions right here: http://sqlserverbuilds.blogspot.com/
Now that I just happened to visit Scott Hanselman’s blog in order to read about a good initial startup configuration I decided to write a short post about Console2 too.
d:\bin\
).
Console2.exe
executable
The following configuration example is heavily inspired by Scott Hanselman’s great blog post about Console2 and slightly modified to my own preference. The following points are a few recommendations on how to configure Console2:
Edit → Settings
Console
Appearance → More
Appearance
Behavior
Ctrl-T
Ctrl-C
Ctrl-V
%comspec% /k "D:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vsvarsall.bat"
%SystemRoot%\syswow64\WindowsPowerShell\v1.0\powershell.exe
D:\msysgit\bin\sh.exe --login -i
If you intend to use Console2 with git, I would recommend not to modify your default font color unlike Scott Hanselman suggests. With a modified font color you will lose the shell coloring git provides, e.g. on using commands like git diff
.
Say you have added an existing image file to a project with the namespace Project.Tools
. Now all you have to do is to set the Build Action of the file to Resource
and reference the image like the following:
<MenuItem.Icon>
<Image Source="/Project.Tools;component/Resources/Images/image.png" Height="16" Width="16"/>
</MenuItem.Icon>
In the example above the image is located inside the project in a directory structure of /Resources/Images/
.
The first event you can attach to is the UnhandledException
event of the current application domain. See the documentation on the msdn for more information.
AppDomain currentDomain = AppDomain.CurrentDomain;
currentDomain.UnhandledException +=
new UnhandledExceptionEventHandler(CustomHandler);
// ...
private static void CustomHandler(object sender, UnhandledExceptionEventArgs e)
{
Exception ex = (Exception) e.ExceptionObject;
Console.WriteLine("Caught unhandled exception: " + ex.Message);
}
The event DispatcherUnhandledException
is triggered by the main UI dispatcher of your WPF application. The documentation can be found on the msdn.
Application.Current.DispatcherUnhandledException +=
new DispatcherUnhandledExceptionEventHandler(CustomHandler);
// ...
private static void CustomHandler(object sender, DispatcherUnhandledExceptionEventArgs e)
{
Console.WriteLine("Caught unhandled exception: " + e.Exception.Message);
}
Moreover it is possible to hook into the DispatcherUnhandledException
event of a specific Dispatcher instance. The behavior is described in the documentation on the msdn in more detail.
dispatcher.DispatcherUnhandledException +=
new DispatcherUnhandledExceptionEventHandler(CustomHandler);
To get you going as quickly as possible I am shortly describing the steps you have to take to install Vagrant on your system.
Vagrant is written in Ruby and published as a RubyGem – so you have to install Ruby and RubyGems first. In my case on Gentoo Linux this is nothing more than running:
$ emerge -av ruby rubygems
Since Vagrant is utilizing VirtualBox you have to install that one of course. VirtualBox is an open-source full virtualizer for x86 hardware and runs on Windows, Linux, Mac OSX and Solaris. In order to install VirtualBox you either use your distribution’s package manager or go to the download page and install it manually. On Gentoo Linux you can use emerge
of course.
$ emerge -av ">=virtualbox-4.1"
It is important to note that only the versions 4.1.x of VirtualBox are compatible with Vagrant. If you are running on a stable gentoo profile you currently have to unmask the version 4.1.2 of VirtualBox by adding the following lines to your package.keywords
file:
$ echo "app-emulation/virtualbox
app-emulation/virtualbox-modules
app-emulation/virtualbox-additions
dev-util/kbuild" >> /etc/portage/package.keywords
Moreover it might be necessary to add the qt4
USE flag in order to build the VirtualBox
executable:
$ echo "app-emulation/virtualbox qt4" >> /etc/portage/package.use
Now all there is to do is to add your user to the vboxusers
group and start the necessary virtualbox kernel modules:
$ usermod -a -G vboxusers <username>
$ modprobe vboxdrv vboxnetflt vboxnetadp
Instead of manually starting the virtualbox kernel modules every time you can also autoload them by modifying the /etc/conf.d/modules
file accordingly.
Once this is done and all requirements are set you can go ahead and install Vagrant itself:
$ gem install vagrant
On my gentoo machine I had some trouble using gem via RVM 1 because the gentoo developers set the RUBYOPT
environment variable to -rauto_gem
by default. In this case you would have to unset the variable beforehand:
$ unset RUBYOPT
$ gem install vagrant
Now that Vagrant is installed you are able to fetch a prebuilt virtual machine image (called “box”) and build a new development environment based on that:
$ vagrant box add newbox http://files.vagrantup.com/lucid32.box
$ vagrant init newbox
$ vagrant up
The default box named “lucid32” which you usually use is a bare bone installation of 32-bit Ubuntu Lucid (10.04). The name “newbox” is just an arbitrary name for the fresh box image – you can choose whatever name you like.
In case you encounter problems with ssh’ing into your new virtual environment on executing vagrant up
or vagrant ssh
you should check if you define an alias of localhost in your /etc/hosts
file like:
127.0.0.1 localhost name alias
After I removed the alias Vagrant worked like it is supposed to.
RVM: Ruby Version Manager↩
The RVM installation process is explained in detail on the RVM website but I just want to sum up the steps I did on my machine.
# fetch rvm sources
$ bash < <(curl -sk https://rvm.beginrescueend.com/install/rvm)
# source rvm scripts on shell login
$ echo '[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm"' >> ~/.zshrc
# emerge required packages
$ emerge -va libiconv readline zlib openssl curl git libyaml sqlite libxslt libtool bison
Now you are ready to install and load a ruby instance of your liking:
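For example (the version number is just an assumption – pick whatever ruby you need):
$ rvm install 1.9.2
$ rvm use 1.9.2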
RVM allows you to have multiple sets of RubyGems even for one single ruby version – these sets are called gemsets. A typical workflow borrowed from the RVM website looks like this:
$ rvm 1.9.2
$ gem install rails -v 2.3.3
$ rvm gemset create rails222 rails126
$ rvm 1.9.2@rails222
$ gem install rails -v 2.2.2
$ rvm 1.9.2@rails126
$ gem install rails -v 1.2.6
$ rvm 1.8.7
$ gem install rails -v 1.2.3
VsVim is an open-source vim emulator plugin written by Jared Parsons and it intends to introduce basic vim functionality into the Visual Studio 2010 editor. This plugin greatly improves the working experience for all those folks who are used to the incredible editing power you gain when using vim.
The plugin is mainly programmed in F# and C# which makes it additionally interesting for me to follow the development. The source code is hosted on github and now as of version 1.1 it is licensed under the Apache 2.0 license.
So if you want to increase your programming productivity in Visual Studio, just give it a shot!
Find more useful tips concerning T-SQL on inpad.de
Comma-separated file↩
live-inspection: I found this really helpful tip on inpad.de
I just merged all posts of my old wordpress blog into the rst 1 format that the pages of the rstblog are built of.
reStructuredText↩
<Space>
key to act as a clever key to repeat all kinds of motions depending on the last actions.
This way the plugin hooks e.g. into search commands like /, ?, *, # as well as navigation in the quickfix window and jumping through diffs.
I can strongly recommend this plugin as it heavily improves your navigation speed and comfort – just give it a try.
If you are interested, I have my own fork of the plugin on github. A few of my changes are already merged into the author's repository. A few days ago I added support for tag-movement commands like :tnext, :tprev and <Ctrl-]>.
The RDTSC 1 instruction (opcode 0F 31) returns the current value of the processor's time stamp counter in EDX:EAX.
The following C function worked for me on x86_64 linux:
static inline unsigned long rdtsc()
{
    unsigned int lo, hi;
    /* read the time stamp counter into EDX:EAX */
    __asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
    /* combine both halves into one 64 bit value */
    return (unsigned long) hi << 32 | lo;
}
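As a quick usage sketch (not part of the original snippet) you could measure the cycles spent in a piece of code like this, assuming the rdtsc function from above is in scope:
#include <stdio.h>

int main(void)
{
    unsigned long start = rdtsc();

    /* ... code to measure ... */

    unsigned long end = rdtsc();
    printf("elapsed cycles: %lu\n", end - start);
    return 0;
}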
Read Time Stamp Counter↩
You can pass unzip the flag -j to junk all paths, i.e. extract the files without recreating the archive's directory structure. A typical workflow then would look like this:
$ unzip -l archive.zip
$ unzip -j archive.zip subdir/specific_file.c
To extract the file into a specific directory just append -d {directory}
to the command:
$ unzip -j archive.zip subdir/specific_file.c -d target_dir
function! VSetSearch()
    let tmp = @@
    normal! gvy
    let @/ = '\V' . substitute(escape(@@, '\'), '\n', '\\n', 'g')
    call histadd('/', substitute(@/, '[?/]',
        \ '\="\\%d".char2nr(submatch(0))', 'g'))
    let @@ = tmp
endfunction
vnoremap * :<C-u>call VSetSearch()<CR>//<CR>
vnoremap # :<C-u>call VSetSearch()<CR>??<CR>
Now you can search as usual with *
and #
for the next and previous search match of the currently highlighted text.
$ grep -i randr /var/log/Xorg.0.log
That should output something similar to:
(==) RandR enabled
(II) Initializing built-in extension RANDR
Once you know the Xrandr extension is loaded you can change the resolution via:
$ xrandr --size 800x600
All available resolutions can be printed to the console with a simple call of xrandr without any arguments:
$ xrandr
X11 resize and rotation extension↩
By now you can find some of my config files and scripts for the dwm and awesome window managers on my account. For those of you who don’t know github yet:
github.com is a social coding platform that allows sharing of open-source projects managed by the git revision control software developed by Linus Torvalds.
Just give it a try – it’s worth a look.
If you are using highlighted search you can un-highlight the last search by typing :noh
A few days ago I wondered if it is possible to use a column-selection in vim. After some quick research I found the block-wise visual mode (CTRL-V) which is exactly what I was searching for. A quick summary of the different visual modes:
v - character-wise visual mode
V - line-wise visual mode
Ctrl-v - block-wise visual mode (under win32 you can use the Ctrl-q key binding)

The next useful command is the one-time insert command (CTRL-O) which allows you to ‘execute’ a single normal mode command while typing in insert mode. A typical usage might be to mark the line you are currently typing in via CTRL-O ma. That way you can continue typing in insert mode after marking the line.
If you are interested in case-insensitive searching the next one might be interesting: Combining the settings set ignorecase
and set smartcase
allows you to search case-insensitive by default but searches case-sensitive if the search string contains uppercase characters.
The next one is a quick way to convert tabs into space characters:
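The corresponding commands appear to have been lost in the blog migration; my best guess of what was meant is to enable expandtab and re-tab the buffer:
:set expandtab
:retab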
The last one for today is the position of the current line in the window:
zz - move current line to the center of the page
zt - move current line to the top of the page
zb - move current line to the bottom of the page

dwm is surely the smallest and fastest tiling window manager I know. It's currently written in under 2000 lines of source code and is only customizable through editing the config.h written in C. That way dwm supports exactly those features you are really planning to use and I like that very much.
Here’s a small screenshot of my current desktop:
Officially there is a portage tool named depclean (emerge --depclean) that searches for unused packages that are not needed by any other package. But everyone who has tried this tool once can tell that it does not work that well: often really important packages are selected to be unmerged, which would badly damage the system.
A useful alternative for this problem is a tool named udept (which can be found in the portage tree as app-portage/udept). In my experience it works nearly failure-free and is relatively fast on top of that. So just give it a try – it's worth testing:
dep -dp (depclean-mode with pretend-flag)
dep -L <package-name> (reverse dependencies from <package-name>)
dep -l <package-name> (dependencies from <package-name>)
dep -Ln <package-name> (reverse dependencies (+uninstalled) from <package-name>)