Monday, November 9, 2015

First Experience with Docker-Machine 0.5.0

Last weekend I had some time for "playing" with the brand new versions of Docker-Engine (v1.9) and Docker-Machine (v0.5.0).

First I installed them. Using "brew" on Mac (Mac OS X 10.11.1) this was not a big deal.

brew update
brew upgrade
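
A quick sanity check that the new versions are really in place (the version numbers are the ones from this post; your output may differ):

docker --version           # should mention 1.9.x
docker-machine --version   # should mention 0.5.0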

The next step was to "migrate" my local "Docker-Tool-VM" (based on VirtualBox V5.0.8) from Docker v1.8 to the current version. To do so, I started it first:

docker-machine start tools

Checking if hardware virtualization is enabled failed: open /Users/bf/.docker/machine/machines/tools/tools/Logs/VBox.log: no such file or directory

What?
I had never seen such a strange message during months of working with "Docker-Machine"!

Even asking "Google" didn't help ... It seems that no one else ran into this trouble :-(

After some investigation I found the file "VBox.log" in a slightly different directory than "Docker-Machine" expected. Instead of

/Users/bf/.docker/machine/machines/tools/tools/Logs

it exists at

/Users/bf/.docker/machine/machines/tools/DockerMachine/tools/Logs
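
A quick way to check where "VBox.log" actually lives on your machine (the machine name "tools" is the one from this post; replace it with yours):

find ~/.docker/machine/machines/tools -name VBox.log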

Hm - let's see what happens if we copy "VBox.log" and all the other directories and files from the "wrong" to the "right" location:

cp -R \
  /Users/bf/.docker/machine/machines/tools/DockerMachine/tools \
  /Users/bf/.docker/machine/machines/tools/tools

And then it was time to start the "Docker-Tool-VM" again.

docker-machine start tools
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.

Success!

I still don't know what really happened here - but I'm happy that it works again :-) If someone knows more about the background and/or reasons: feel free to add a comment.
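
For completeness: with the machine running again, the actual migration of the Docker Engine inside the VM should be a matter of docker-machine's upgrade command (I'm noting the intended next step here, not reproducing its output):

docker-machine upgrade tools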

Tuesday, May 12, 2015

First Experiences with Docker Compose's new keyword "extends"

On April 7 Docker Inc. announced the general availability of Docker Compose V1.2.0 [01]. Along with other bug fixes and features it added the new keyword extends.

I thought this could be a good way to handle our need to improve the configuration of our "application environments" (e.g. dev, test, production). Because of some differences between them (mainly in the handling of "data containers") we have to maintain several mostly identical Docker Compose configuration ("yaml") files, one for each environment. Not nice ...

Why do the environments need to be different?
To shorten the round-trip cycle time we map some container directories to the local hard drives of our developer workstations (e.g. for HTML, CSS and JavaScript files), similar to this (simple) example based on "pure" Docker, taken from the official Docker NGINX image page [04]:

docker run \
  --name some-nginx \
  -v /some/content:/usr/share/nginx/html:ro \
  -d nginx

Of course, this isn't a solution for production; there we use "data containers" instead of mapped host directories.
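
For comparison, the production-style variant with a data container in "pure" Docker would look roughly like this - just a sketch, using the same Nexus image and volume path as the Compose files shown below:

docker create -v /sonatype-work --name nexvol busybox
docker run -d --name nexus -p 8081:8081 --volumes-from nexvol mapp/nexus:latest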

Having identified the differences and found some spare time, I updated my Docker environment (Mac OS X 10.10 with Docker V1.6.0, Docker-Machine V0.2.0 and Docker-Compose V1.2) to try it out.


Part1: First Experiment


And of course I started the first experiment without reading the documentation completely. Therefore I ran into some issues ...

Given a Docker Compose configuration file "common.yml"

nexus:
  image: mapp/nexus:latest
  hostname: nexus
  user: root
  ports:
    - "8081:8081"
  volumes_from:
    - nexvol

nexvol:
  image: busybox
  volumes:
    - /sonatype-work

# Hint:
# Setting user to root for service "nexus" is
# necessary to get this example working, but is
# meaningless for now; dealing with volumes
# and rights would be a topic for another post ...

and another one named "extended.yml"

nexus:
  extends:
    file: common.yml
    service: nexus

nexvol:
  extends:
    file: common.yml
    service: nexvol
  volumes:
    - /Users/bf/Projects/eval/nexus-home:/sonatype-work

The syntax itself focuses on services (not on files) and is, in my opinion, easy to understand and well explained in [02].

Short summary: Service "nexus" defined in the file "extended.yml" (referred to as "extended::nexus" from now on) extends service "common::nexus" without changing anything. But service "extended::nexvol" extends service "common::nexvol" by changing its volume definition to map to a directory on my developer notebook.

Unfortunately, starting this service configuration led me into trouble ... :-(

$ docker-compose -f extended.yml -p eval up -d

Cannot extend service 'nexus' in /Users/bf/Projects/eval/common.yml:
services with 'volumes_from' cannot be extended

What a pity! Maybe it would have been worth spending some time studying the documentation, which you can find at [03] (including an explanation of why this example doesn't work), before starting to code.


Part2: Get it working


Result of the studying: "volumes_from" and "links" can't be extended.
The only workaround I found is moving them from the "parent" to the "child" configuration, which in our case means from "common.yml" to "extended.yml".

Finally "common.yml" looks like

nexus:
  image: mapp/nexus:latest
  hostname: nexus
  user: root
  ports:
    - "8081:8081"

nexvol:
  image: busybox
  volumes:
    - /sonatype-work

and "extended.yml"

nexus:
  extends:
    file: common.yml
    service: nexus
  volumes_from:
    - nexvol

nexvol:
  extends:
    file: common.yml
    service: nexvol
  volumes:
    - /Users/bf/Projects/eval/nexus-home:/sonatype-work
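
With this layout the command from the first experiment should now go through, and a quick look at the project confirms that both containers are there (same project name as above):

docker-compose -f extended.yml -p eval up -d
docker-compose -f extended.yml -p eval ps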


Part3: Conclusion


The new keyword extends is an improvement and makes working with Docker Compose more convenient, with one small disadvantage: I use "volumes_from" and "links" quite often, and they are still more or less duplicated across my environments.


Part4: References and interesting links


[01] Docker Compose V1.2.0 Release Notes
[02] Docker Compose Documentation keyword "extends"
[03] Docker Compose - Tutorial and Reference for extending Services
[04] Official Docker image for NGINX



Tuesday, March 10, 2015

Loading Workflow Scripts


Besides hacking or coding Workflow Engine scripts directly in a somewhat "stupid" editor (in reality it looks like an HTML input field), Jenkins provides another possibility: loading scripts from SCM.

If you choose "Groovy CPS DSL from SCM" instead of "Groovy CPS DSL" on the job configuration page, you will get additional configuration options (see figure below).

Figure 1: Loading Workflow script from Git


Like many other projects, we chose Git as our "SCM technology". For our example we use a quite simple Maven project hosted on Bitbucket, but that doesn't matter right now (it contains the parent.pom which we will use for more advanced examples later). Right now we need only one file from this repository (it seemed too crazy to me to create a separate repository for a single file ;-).

The interesting part is close to the bottom: the path to the Workflow script "flow1.groovy". For our "Hello World" example the script simply looks like this:

echo( "Hello World" );

After saving the configuration and starting a "build" you should see similar output in the job console:

Figure 2: Console output after finishing the build


Essentially, Workflow scripts are written in a DSL based on Groovy (a more or less well-known scripting language related to Java [01]). Therefore you can use any valid Groovy "construct" to code your workflow. The DSL is implemented by a set of Jenkins plugins providing additional functionality, e.g. the "echo" step we used to write "Hello World" to the console output. You can read more about this in the Jenkins Workflow Engine Tutorial [02].

Storing Workflow scripts in SCM has the advantage that they are handled and versioned like any other source code. And if we store them together with all other sources of the project, then we don't need any additional information to find the right script.

But there is one disadvantage: if you have to develop the Workflow script itself, you are faced with a long round-trip cycle (assuming you are using Git):
  1. change Workflow file (e.g. flow1.groovy)
  2. commit it to local repository
  3. push it to remote repository
  4. start/build Jenkins job and watch what happens in job console 
Doesn't look nice ...
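
Expressed in commands, one iteration of that loop looks roughly like this (file, remote and branch names are just examples):

vi flow1.groovy                      # 1. change the Workflow file
git commit -am "adjust workflow"     # 2. commit it to the local repository
git push origin master               # 3. push it to the remote repository
# 4. start the Jenkins job and watch the console output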

To avoid the round trip over the source code repository we could try to apply the changes to the local copy of the Workflow script (usually located in ${JENKINS_HOME}/jobs/${PROJECT_NAME}/workspace or workspace@script). But this doesn't work, because the changes will be overwritten during the next synchronization between the remote and local Git repository. And there is no way to switch this behaviour off.

From a bird's-eye view it seems that separating the synchronization from running the "real" build process could do the trick ... Other project or job types, e.g. Freestyle or Maven, offer this possibility: you can configure the SCM (where to get the sources from) and the "build" independently.

Because I'm not a "Jenkins hacker" (I can't extend/change plugins directly) I had to look for a workaround. It took me some time, but finally I decided to spread the functionality over two Workflow scripts.

The first one is used for bootstrapping the build process and should be more or less reusable for other projects too (I hope ;-). Instead of using the SCM functionality provided by Jenkins Workflow as explained before, I put the bootstrapping part directly on the job configuration page as "Groovy CPS DSL".

Figure 3: Bootstrapping a build



For better readability I repeat the script:

def flow
node{
  echo "bootstrap function begin"
  git url: 'https://bitbucket.org/mindapproach/demo-parentpom.git'
  flow = load "flow.groovy"
  echo "bootstrap function end"
}
flow.build()


Let's talk a little bit about why this script looks more complex than expected.

First "workflow steps" like "git" and others need some kind of "Launcher context". Therefore we have to wrap them into a "node" which providing necessary information. Btw, if not you will be faced with following or similar error message

Figure 4: Error if no Launch Context provided




"Git Workflow step" is used to create and synchronise local with remote source repository. Nothing special.

The real magic is done by the "load" workflow step, which loads (surprise, surprise ;-) a file from the workspace and runs it as Groovy source code. The file can either contain statements at top level or define functions and return "this". Such a function is the right place for our real "build code" (see below).

Additionally, using such a function gives us the possibility to separate loading the function from running it. This may be helpful if we later want to use different contexts while building our project (e.g. for integration tests). To be prepared, I call the "build()" function outside the scope of the original node.

Finally let's have a look at "flow.groovy" - the "real" build script maintained together with our other project sources:

// flow.groovy:
def build(){
  echo "build() function begin"
  node{
    echo "Hallo from flow.groovy"
  }
  echo "build() function end"
}
return this;

The script defines a function called "build()" which acquires its own launcher context (a workspace) by using a "node" step. Later we will add some more useful code than simply echoing some text.

After the first checkout or synchronization of the source code you may comment out the git step. Then you can edit the Workflow script "flow.groovy" and build your project as often as you want.

Summary:

Even though it works, it is a (dirty) workaround with some disadvantages:
  1. the build code inside "flow.groovy" needs its own "node" statement,
  2. it has to contain a build() function, and
  3. right now, the build in its narrow sense is bound to the node where the sources are checked out.
Maybe I missed something (any feedback is welcome) - from my current point of view and knowledge I would vote for a style like the aforementioned project types "Freestyle" and "Maven" provide.

Appendix:

Unfortunately, loading a script from another script that was checked out via "Groovy CPS DSL from SCM" doesn't work ...

If you replace the script "flow1.groovy" from figure 1 with "bootstrap.groovy" (see below), which in turn loads the script "flow.groovy", and "build" the project, it will fail because flow.groovy can't be found.

[UPDATE]

// bootstrap.groovy
def flow
node{
  echo "bootstrap function begin"
  echo "pwd: " + pwd()
  flow = load "flow.groovy"
  echo "bootstrap function end"
}
flow.build()

[/UPDATE]

Reason: the "load" workflow step expects a directory named "workspace" which it uses as the root when searching for the script. But "Groovy CPS DSL from SCM" creates a directory named "workspace@script" ...

The following snippet is taken from the console output of my local test environment:

Cloning repository https://bitbucket.org/mindapproach/demo-parentpom.git;
git init /var/jenkins_home/jobs/Test03/workspace@script # timeout=10 
[...]
java.io.FileNotFoundException:
/var/jenkins_home/jobs/Test03/workspace/flow.groovy (No such file or directory)
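
You can verify the directory layout from outside the container; the job name "Test03" is the one from the console output above, while the container name "my-jenkins" is just the one used in my earlier Docker post (adjust it to yours):

docker exec my-jenkins ls /var/jenkins_home/jobs/Test03
# shows whether "workspace" and/or "workspace@script" exist for this job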

 

References and interesting links:

[01] Groovy Homepage
[02] Jenkins Workflow Engine Tutorial

Hello World with Jenkins Workflow Engine

After successfully creating and running a Docker image/container with Jenkins and the new “Workflow-Engine” we should get our fingers dirty and code a first “workflow” ;-) As usual that would be some kind of “Hello World” …

As a first step we create a new Jenkins job of type "Workflow" and name it "HelloWorld".


To keep things simple we don't use any source code repository (SCM) for this example. So we just put a more or less useful command (like "echo('Hello World');") into the script area of the Workflow configuration section and check "Use Groovy Sandbox". Nothing more, nothing less.

After saving the job we start it manually by pressing the "Build Now" button.

If you look at the console output afterwards you should see something similar to this:

That’s all. We created and run our first (simple) Jenkins Workflow job.

Summary:
The Jenkins Workflow Engine adds a new job type named "Workflow". In contrast to other job types it doesn't offer that many configuration options. Of course you can
  • give it a name, 
  • trigger the build by 
    • scheduling, 
    • polling a source repository or 
    • joining to another project and 
  • write or provide a Groovy based DSL script
This script is where all the magic happens, usually more than echoing "Hello World" - but we will explore this more deeply in the following posts.

Friday, March 6, 2015

Updating Jenkins plugins

A few days ago (to be more exact: on March 4, 2015) version 1.3 of the Workflow Engine was released. A good reason to think about how to update our Jenkins installation.

Of course, the easiest way would be to do it manually using the Jenkins Plugin Manager.

Jenkins Plugin Manager - Update Sheet


Choose "Updates" sheet, check plugins you like to update, press button "Download now and install after restart" and wait some minutes.

We like to refer to this as the "traditional way", which works pretty well and is stable.

But on the other side, patterns like "Phoenix Server" [01] or "Immutable Server" [02] are getting more and more attention. At first glance this seems easy to achieve, because Jenkins separates itself (the core) from everything else, which lives in a special directory usually named "jenkins_home".

In our installation variant based on Docker we deliver plugins as part of the Docker image, but to preserve manual changes Jenkins will use them only if they are new (more precisely: they are copied from /usr/share/jenkins/ref/plugins to /var/jenkins_home/plugins during container start, but only if they don't already exist). Conclusion: we can add plugins via the Docker image, but we can't update existing plugins.
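
You can inspect both locations inside a running container (the container name "my-jenkins" is the one from the earlier post; adjust it if yours differs):

docker exec my-jenkins ls /usr/share/jenkins/ref/plugins   # plugins shipped with the image
docker exec my-jenkins ls /var/jenkins_home/plugins        # plugins Jenkins actually uses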

The only workaround we found is:
  1. stop Jenkins,
  2. delete the content of the plugins directory,
  3. create a new Jenkins Docker image with the updated plugins (do you remember the configuration file "plugins.txt" from my last post [03]?),
  4. create and run a new Jenkins Docker container, and
  5. if all steps went well, delete the "old" container (optional but recommended).
Now all plugins inside the Jenkins Docker image are new and will be copied to the plugins subdirectory inside the already mentioned "jenkins_home".
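
Expressed as shell commands, the workaround looks roughly like this. It is only a sketch and assumes that "jenkins_home" is mounted from a host directory (e.g. via -v /srv/jenkins_home:/var/jenkins_home); if you rely on the image's anonymous volume, you have to delete the plugins from inside the container instead:

# 1. stop Jenkins
docker stop my-jenkins
# 2. delete the content of the plugins directory
rm -rf /srv/jenkins_home/plugins/*
# 3. build a new image with the updated plugins.txt
cd $JENKINS_DOCKER_DIR && docker build --tag="my/jenkins" .
# 4. create and run a new container
docker run -d -p 8080:8080 -v /srv/jenkins_home:/var/jenkins_home --name=my-jenkins-2 my/jenkins
# 5. if all steps went well, remove the "old" container
docker rm my-jenkins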

Summary: Updating Jenkins plugins by providing them in the Docker image is still cumbersome. It needs some improvement before it can be used in production.

References and interesting links:

[01] Martin Fowler: PhoenixServer
[02] Kief Morris: ImmutableServer
[03] Jenkins Docker Image with Workflow Engine

Thursday, February 19, 2015

Jenkins Docker Image with Workflow Engine

To be able to play a little bit with the new Jenkins Workflow Engine (more about this topic later) I needed to install an additional Jenkins instance on my notebook. But I didn't want to pollute my work environment with temporary installations, fight with port conflicts and so on.
Therefore I decided to give the “Docker way” a shot instead of doing a traditional installation.
Thanks to the official Jenkins Docker image this can be done in seconds (depending, of course, on your internet bandwidth for the download).


docker run -d -p 8080:8080 --name=jenkins jenkins


Afterwards Jenkins is up and running and can be accessed via http://<dockerhost>:8080. The concrete value of "dockerhost" depends on how you installed Docker. In my case (Mac OS X with Boot2Docker) it "evaluates" to 192.168.59.103.
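
With Boot2Docker you can simply ask for that address:

boot2docker ip
# e.g. 192.168.59.103
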
Unfortunately, the newly installed Jenkins version (at the time of writing: Jenkins LTS 1.580.2) doesn't contain the Workflow Engine, so there is some work to do.
The easiest way to install it is using the "Plugins-Maintenance-Center" of Jenkins. Go there and choose "Workflow: Aggregator" (see picture below), which will download and install all dependencies too.



As promised, this way is quite easy. But in my opinion we need a little bit more automation, which we can achieve by creating our own Docker image that extends the official Jenkins Docker image with the needed plugins.

Fortunately, the inventors of the official Jenkins Docker image prepared a simple mechanism for installing further plugins: a shell script which we call with a text file containing the names of the plugins to be installed. The only drawback I found is that automatic dependency resolution doesn't work in this scenario. Therefore we have to do it manually and list all directly and indirectly needed plugins in this file. Figuring out which plugins are additionally needed is a bit tedious and annoying, but it has to be done only once (plugin pages on the Jenkins website like [04] are helpful but don't tell the full story) ...

As a result we get a directory (we store its exact location in the environment variable "JENKINS_DOCKER_DIR" for later use) containing the following two files:

First the "Dockerfile":

FROM jenkins
MAINTAINER Bernd Fischer <bfischer@mindapproach.de>

COPY plugins.txt    /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt

and second "plugins.txt", containing the list of plugins to be installed:

greenballs:1.14
durable-task:1.0
script-security:1.12
git-client:1.11.0
git-server:1.5
workflow-api:1.2
workflow-durable-task-step:1.2
workflow-cps-global-lib:1.2
workflow-scm-step:1.2
workflow-basic-steps:1.2
workflow-cps:1.2
workflow-support:1.2
workflow-step-api:1.2
workflow-job:1.2
workflow-aggregator:1.2

Right, the "Greenballs" plugin isn't necessary for running "Workflows", but I prefer green over blue "balls" in the Jenkins UI ;-)
With these files in place we can build our own Docker image:

cd $JENKINS_DOCKER_DIR
docker build --tag="my/jenkins" .

This may take a few minutes ...
As soon as it is ready we can create a container and run it:

docker run -d -p 8080:8080 --name=my-jenkins my/jenkins

and access Jenkins via http://<dockerhost>:8080.
To be sure that everything went well, we can have a look into the "Plugin-Maintenance-Center": we should find all plugins we defined in "plugins.txt" in the "Installed" tab (the image below shows some of them).



Remark: To use the docker commands you need appropriate rights, which in most cases means "root rights". In some environments you therefore have to prepend the command lines with "sudo".
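
By the way, the installed plugins can also be checked from the command line (container name as used in the run command above):

docker exec my-jenkins ls /var/jenkins_home/plugins | grep workflow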

References and interesting links


Monday, February 16, 2015

Continuous Delivery and Jenkins Workflow Engine

When trying to implement a Continuous Delivery pipeline according to the well-known book by Jez Humble and David Farley [01] with Jenkins as CI (Continuous Integration) server, everyone stumbled over the following plugins and "techniques":


  • Build Pipeline Plugin [06]
  • Build Flow Plugin [07]
  • Delivery Pipeline Plugin [08]
  • Parameterized Trigger Plugin [09]
  • Job Chaining (up-/downstream)


Possibly you needed even more.


It was never an easy task, whether you used a single plugin or a combination of them. Every variant of building the pipeline had - and still has - its own pros and cons.


Last year CloudBees Inc. (the company behind Jenkins) announced the "Workflow Engine" [02][03], which should be able to replace all the aforementioned plugins and make using Jenkins much more developer-friendly.


The announcements sounded promising, so I decided to give it a try. I would like to invite everyone who is interested in this topic to follow my journey in the upcoming blog posts.


References and interesting links:


[01] Jez Humble, David Farley: Continuous Delivery. Addison-Wesley, Copyright 2011 Pearson Education Inc. (e.g. via Amazon)