TeamCity

Thomas Suebwicha
9 min read · Aug 22, 2021

Written June 16, 2020

A step by step approach to getting started with TeamCity using Docker Desktop.

Requirements:

  • Docker Desktop
  • Docker Compose
  • DockerHub Account

TeamCity is a continuous integration and delivery product which allows comprehensive automation of a build pipeline from development through testing and QA to production.

This guide will walk through a basic setup of TeamCity using Docker, along with some of the basic configuration options available. It assumes Docker is already installed on the host machine. Also note that the TeamCity Docker images are large; expect to download roughly 3GB.

Primer

TeamCity uses a server-agent architecture, similar to Jenkins, except it must be configured manually. TeamCity uses the terms Server and Agent.

Server

This coordinates the builds and holds all of the data for the product, including build configurations, project settings and credentials. It is also responsible for allocating work to agents and initiating builds.

Agent

This is what runs the builds. It runs independently from the server, in the sense that it runs separate processes and has an isolated file system. The server and agent communicate using TCP and can be set up on different host machines, however this guide will be setting them up on a single host machine, albeit in different docker containers.

Installation

Some small configuration changes may need to be made to the base TeamCity images before they can communicate with GitLab and the host machine’s docker daemon.

Server

Below is the Dockerfile for the TeamCity Server. Place it in a directory called server, in a file called Dockerfile.

server/Dockerfile:

FROM jetbrains/teamcity-server
WORKDIR /dockerbuild/
# Install gitlab certificate
RUN echo | openssl s_client -connect git.fdmgroup.com:443 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > gitlab.cer && \
keytool -import -noprompt -alias gitlab -keystore $JAVA_HOME/lib/security/cacerts -storepass changeit -file gitlab.cer && rm gitlab.cer

The above snippet adds the GitLab SSL certificate to the Java keystore. Without this, the TeamCity Server may throw an error when attempting a connection to the GitLab repository.
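The sed expression in that pipeline simply keeps the PEM block between the BEGIN/END markers, which is what ends up in gitlab.cer. Here is the same filter exercised on sample input (the certificate body is fake sample data):

```shell
# Keep only the PEM certificate block from verbose openssl output,
# using the same sed range expression as the Dockerfile above.
extract_pem() {
  sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p'
}

printf 'depth=0 CN=git.example.com\n-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\nDONE\n' | extract_pem
# prints only the three lines from BEGIN to END
```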

Agent

The example used in this guide uses Docker in Docker, which allows docker commands to be used within the pipeline. The approach taken below shares the Docker daemon between the host and the agent container, meaning the agent is able to use docker commands to create new sibling containers alongside its own. This avoids nesting docker containers inside one another, which could otherwise recurse without limit.

Similar to the server, the agent must install the GitLab certificate to the CA store so Git recognises the certificate. Place this Dockerfile inside a directory called agent.

agent/Dockerfile:

FROM jetbrains/teamcity-agent
WORKDIR /dockerbuild/
USER root
# Install gitlab certificate
RUN echo -n | openssl s_client -showcerts -connect git.fdmgroup.com:443 -servername git.fdmgroup.com \
2>/dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' \
>> /etc/ssl/certs/ca-certificates.crt
# Add buildagent to the docker group for permissions to run docker commands
RUN usermod -aG docker buildagent
USER buildagent

The last RUN command adds the buildagent user to the docker group. Without this, the user doesn’t have permissions to use the docker.sock socket.
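Once the stack is running (see the docker-compose section), a quick way to confirm the group change took effect is to list the buildagent user's groups from inside the container. The container name here matches the compose file:

```shell
# Check, from inside the running agent container, that buildagent
# belongs to the docker group (whole-word match, so a group like
# 'dockerroot' would not count).
docker exec teamcity-agent id -nG buildagent | grep -qw docker \
  && echo "buildagent can use docker.sock"
```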

Docker-compose

To simplify instantiation of the group of TeamCity services, we’ll use docker-compose. Create the following file in the working directory alongside the server and agent directories.

docker-compose.yml:

version: "3.8"

services:
  server:
    container_name: teamcity-server
    build: ./server
    ports:
      - "8111:8111"
    volumes:
      - ./server/data:/data/teamcity_server/datadir
      - ./server/logs:/opt/teamcity/logs

  agent:
    container_name: teamcity-agent
    build: ./agent
    volumes:
      - ./agent/conf:/data/teamcity_agent/conf
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      SERVER_URL: teamcity-server:8111
    links:
      - server

This file contains the majority of our configuration.

  • We can see there are two services, server and agent. Each has a name specified for the container and the build context is set to the respective server and agent directories we created previously.
  • The server exposes the port 8111, which is the default, and this is referenced in the environment variable $SERVER_URL in the agent. Also note that the hostname of the server_url is the same as the container_name for the server — docker-compose automatically creates a network for all services specified, and each service can communicate using the container name as their DNS name.
  • Most important are the volumes, which are bind-mounted. By storing this data on the host machine, we will be able to shut down the services (using docker-compose down) without losing any data.
  • Finally, the docker.sock socket is mounted to the agent so that the agent can use Docker in Docker.

Progress So Far

After completing the previous steps, our directory structure looks like this:

.
└── Teamcity
    ├── agent
    │   └── Dockerfile
    ├── server
    │   └── Dockerfile
    └── docker-compose.yml

Instantiation

With our files in place, we can now build and run TeamCity.

Enter a terminal and navigate to the working directory (the one with the docker-compose.yml file) and type:

docker-compose up -d

Your images for the server and agent will now build and run. This may take a while if the host machine needs to download the base teamcity-server and teamcity-agent images.

When the terminal indicates the services are running (you can check with docker ps in the terminal), open a browser and go to the TeamCity Server address, which we’ve configured as the default: http://localhost:8111
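The server can take a minute or two to initialise. Rather than refreshing the browser, a small polling loop (assuming curl is available on the host) will announce when the UI is reachable:

```shell
# Poll until the TeamCity web UI starts answering on port 8111.
until curl -sf -o /dev/null http://localhost:8111; do
  echo "Waiting for TeamCity server..."
  sleep 5
done
echo "TeamCity server is up at http://localhost:8111"
```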

Configuration

When TeamCity Server finishes initialising, add the credentials for your admin account. Now we can start adding a project.

Project Hierarchy

TeamCity uses a hierarchy for its projects. Although there aren't any restrictions on how this is configured, for maintainability the following structure is suggested:

.
└── Root Project
    ├── Project A
    │   ├── Build and Package
    │   ├── Code Analysis
    │   ├── Build Configuration C
    │   ├── Build Configuration D
    │   └── Deploy
    └── Project B
        ├── Build and Package
        ├── Code Analysis
        └── Deploy

By using this structure, it becomes easier to add new projects without making the structure any messier — we can easily minimise all projects in the browser except the one we’re currently working on.

Note that all children inherit configuration settings from their parents. Therefore if multiple steps or projects will be likely to share configs, place those settings in a parent node. For example, a VCS connection could be configured from the Project node so that each child step may inherit that connection.

Authorising the agent

With the server up and running, navigate to the Agents tab. TeamCity requires any agent connecting to the server to be authorised, partly because of the agent limit on the free version of TeamCity. In the Agents tab, click Authorize to allow the connection between the agent and the server.

Project Creation

Since we’re creating our first project, you will see a create project button on the home page. This can be accessed for future projects from the Administration > Projects page.

Start by configuring the general settings of the project.

Now navigate to the VCS Roots page and click Create VCS root.

Type the following details in:

Edit VCS Root:

| Options               | Specified                                |
|-----------------------|------------------------------------------|
| Type of VCS           | Git                                      |
| VCS root name         | VCS or project name                      |
| Fetch URL             | https://git.hostname.com/user/repo       |
| Default Branch        | refs/heads/master (or any other branch)  |
| Authentication Method | Password                                 |
| Username              | Username used for VCS                    |
| Password              | Can use API key for VCS instead          |

Go back to the General Settings page and click Create build configuration. A build configuration contains a number of steps that run in order to complete some process. The exact steps depend on business requirements, but for this guide we will create a build configuration called Build And Package, which will be responsible for compiling our application. For demonstration purposes this is a Java project packaged using Maven, but it could be anything.

A build can publish artifacts after a successful run. Here, the build configuration takes any .jar files from the target directory after building and packages them into server.zip. If the default settings are sufficient, click Save.
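For reference, the rule implementing that behaviour (take any .jar under target and wrap them in server.zip) is a single line in the build configuration's Artifact paths field; the names are the ones used in this guide:

```
target/*.jar => server.zip
```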

Now, inside the build configuration settings, navigate to the Build Steps page and click Add build step. For a Java project, we will choose Maven as the runner type and in the Goals field, add clean install. We may also wish to change the Path to POM file if it isn’t in the project root in the VCS. Save and you will have a basic pipeline which polls a VCS for changes, pulls the code and automatically builds and publishes an artifact for the compiled application!

Next Steps

So far, our CI/CD server doesn't do anything particularly useful. However, we can add steps to make it more useful.

Snapshot Dependencies

To get the artifact which was previously published, a few settings in TeamCity must be changed. Start by creating a new build configuration. To do this, click on Projects in the nav bar, then click on the project. At the top right there is an Edit Project Settings link, which takes us to the General Settings page with its Create build configuration button.

Use the settings below to add a deployment step to our build process:

Now we need to link the Build And Package configuration with the Deploy configuration. Go to the configuration settings for the Deploy config and then click on Dependencies on the left.

Here we see two possible types of dependencies.

Snapshot Dependencies

  • These link multiple build configurations together. In this example, the Build and Package stage will be linked to the Deploy stage, so that running the Deploy stage triggers the Build and Package stage.

Artifact Dependencies

  • These are artifacts which the specified build configuration relies on. If a build fails to produce a specified artifact, this stage will also fail.

Add a new Artifact Dependency on the Build and Package configuration.

This will create a build chain, which links the Build and Package and Deploy stages together. If there’s an error in the Build and Package stage, the Deploy stage will also have an error, which is what we want.
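As an illustration, an artifact dependency rule that pulls the server.zip produced by Build and Package and unpacks it into the Deploy build's working directory could look like this (the `!` syntax reads inside the archive; adjust the destination as needed):

```
server.zip!** => .
```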

Now create a Snapshot Dependency.

The settings above synchronise the stages, so that when the Deploy stage is started, if there isn’t a suitable build it will re-run the previous stages. Additionally, from the Build and Package stage, Deploy can be manually triggered.

Finalising

Hoorah! You should now have a linked build config, with triggers and dependencies.

Below is a sample script which deploys the artifact from the Build and Package build configuration into a sibling Docker container. Modify as appropriate, however note that most testing and production deployments would be done on a separate server, potentially on the cloud.

containerName="pondserver-test"
networkName="pond-test"

# Build an image for the application, tagged with the container name
docker build -t "$containerName" .

# Check if a container with this name already exists
container=$(docker container ls -a | awk '{print $NF}' | grep "$containerName")
echo "Container is=$container."
if [ -n "$container" ]
then
    echo "Container exists. Removing container"
    docker container rm -f "$containerName"
fi

# Check if the network already exists
network=$(docker network ls | grep "$networkName")
if [ -z "$network" ]
then
    echo "Network doesn't exist. Creating network $networkName"
    docker network create "$networkName"
fi

echo "Running new container: $containerName"
docker run -dp 8088:8088 --name "$containerName" --network "$networkName" "$containerName"
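One caveat when adapting the script: `grep "$containerName"` also matches names that merely contain the string (pondserver-test-2 would match pondserver-test). A whole-line match against docker's own name listing is safer; a minimal sketch:

```shell
# Match the container name exactly (whole line), not as a substring.
name_exists() {
  docker container ls -a --format '{{.Names}}' | grep -qx "$1"
}

# Usage: only remove when the exact name exists
# name_exists "$containerName" && docker container rm -f "$containerName"
```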
