Channel: How-To’s : TeamCity | The JetBrains Blog

The official TeamCity CloudFormation template


As you might have noticed, there was recently an option added to the Get TeamCity page of our website: AWS. This lets you run TeamCity in AWS using the official CloudFormation template.

get-teamcity-aws

In this post, we will go over what’s under the hood of the template, and why it may save you some time and effort.

Usually, installing TeamCity on top of AWS is quite a time-consuming task.
It requires the following steps:

  • Setting up an external database,
  • Configuring the EC2 instance to run a TeamCity server,
  • Configuring it to then connect to the database,
  • Installing the TeamCity server,
  • Installing a TeamCity agent.

And then making the whole installation secure requires even more effort.

We have tried to ease this process and created an official CloudFormation template to run the TeamCity stack in AWS. Using this template lets you run all the above steps with just a single click. And should you decide to destroy the stack, CloudFormation also provides a super simple way to do it with just one click.

The template is stored in an S3 bucket. The stack can be launched via the ‘Run on AWS’ button available on the TeamCity site.

The template provides several parameters:

aws-1
It takes about 15 minutes for the template to deploy the whole stack, with the most time-consuming part being the roll-out of the RDS database instance. Once the deployment is ready, you will see the TeamCity server endpoint in the Outputs section, which points to your TeamCity installation.

aws-2

Just create the root account, and the server is ready to use.

So what is under the hood?

The TeamCity server runs on an EC2 instance with CoreOS Container Linux. The default agent runs as a separate container on the same instance. The external database is provided by an RDS MySQL instance. We decided not to introduce a custom AMI with TeamCity. Instead, we use the official Docker images with the TeamCity server and build agent.

The server and the database are placed in a dedicated VPC. The database allows only internal connections from within the VPC, and the server can only be reached via HTTP(S) or SSH.

How the server is running

There are several systemd services that prepare the LVM on the EBS volume to persist your data, create the file system, and run the latest official TeamCity server and TeamCity build agent from the Docker Hub images. These services are linked to each other and bring the whole system back up after an instance reboot or failure.

To connect to the server’s console, you need to use your instance key:
ssh -i instance-key.pem core@[server IP]
To see the logs, just run the docker logs command for the desired container.

Once you have TeamCity up and running, there are a few more configuration steps worth considering.

Happy building with TeamCity on AWS!


TeamCity Kubernetes Support Plugin


Kubernetes is nowadays quite a popular way to run Docker containers. Many teams and organizations already have a Kubernetes cluster configured and used in production.

Now with the help of the TeamCity Kubernetes Support plugin, it is possible to use the same infrastructure to run TeamCity build agents.

The plugin is compatible with TeamCity 2017.1.x and later.

First, download the plugin and install it on the TeamCity server as usual.

Then you can start by configuring a cloud profile in a project:

kub-1-new

Specify the URL of the Kubernetes API server (aka the Kubernetes master) and select the appropriate namespace.

Select one of the Kubernetes API authentication options:

kub-2

The next step after connecting to the Kubernetes API is creating a cloud image.

kub-3

There are two options available:

  • Simply run single container: a good choice for those who are not familiar with the more advanced Kubernetes features and simply want to run a container with the build agent;
  • Use pod template from deployment: handles advanced scenarios of deploying workloads to a Kubernetes cluster, such as multi-container pods, node/pod affinity and tolerations, etc.

When the “Simply run single container” mode is selected, users can specify the name of the Docker image with the build agent they want to use.

In our setup, we are using the official TeamCity Build Agent image which is supported by the plugin. You can also create your own image.

Other options, such as the Docker command, Docker arguments, and the image pull policy, can be specified as well.

kub-4

Another cloud image option is ‘Use pod template from deployment’.

Here you simply specify a deployment name: remember to check that the deployment belongs to the same namespace you’ve provided in the cloud profile. You can either use the official TeamCity Build Agent image in your deployment like in the example below, or your own image.

kub-5

There is a small trick here. Given a deployment name, the plugin does not actually use the deployment itself: it extracts the PodTemplateSpec section from the deployment’s definition and uses it to create its own pods. These pods are not connected to the deployment in any way, so the Kubernetes Deployments feature will not manage their lifecycle; the TeamCity server takes care of the pods on its own. The deployment merely serves as a named container for the PodTemplateSpec.

After a cloud profile is created and saved, you will be able to start TeamCity agents running in containers within pods on the Kubernetes cluster. TeamCity will mark every started pod with a set of specific labels.

kub-6

Using those labels, you can always determine which TeamCity server started a particular pod, and which cloud profile and cloud image it belongs to.

Feel free to download the plugin, try it, and share your feedback with us!

Or you can TestDrive it in the cloud.

Happy building with TeamCity!

Branch specific settings in TeamCity


We’re often asked how to run different build steps in different branches. In fact, this has already been possible in TeamCity for quite some time (since version 9.1), but it seems we need to do a better job explaining how to use this feature.

Let’s assume that you have a build configuration where building of feature branches is enabled.

First, enable Versioned settings for the project where this build configuration resides. TeamCity will ask for a VCS root where the settings are to be stored. For simplicity’s sake, let’s store the settings in the same VCS repository where other project files are located.

For the Versioned settings, make sure the option When build starts is set to the use settings from VCS value. TeamCity will then take the settings from a branch used by the build instead of the current settings.

versioned_settings

Once versioned settings are enabled, TeamCity will commit the .teamcity directory with the current project settings to the default branch of the selected VCS root. To start customizing the settings in our feature branch, we need to merge the .teamcity directory from the default branch.

But before making any changes in the .teamcity directory in your feature branch, bear in mind that not all the changes there will be applied. For instance, if you add a new project or a build configuration, such changes will be ignored. This happens because TeamCity does not support different sets of projects and build configurations in feature branches.

Besides, there is a chicken-and-egg problem. Currently, TeamCity loads settings from .teamcity while the build sits in the queue. But to load the settings from a repository, TeamCity needs to know the settings of this repository (VCS roots). Thus, VCS roots are always taken from the current project settings.

The same story applies to build triggers and snapshot dependencies. TeamCity needs triggers to decide when to place builds into the queue, and it needs snapshot dependencies to create a build chain (a graph of builds) before the builds go to the queue. Hence, changes in .teamcity affecting these settings are ignored too.

But there are plenty of other settings whose changes will affect the build behavior:

  • build number pattern
  • build steps
  • system properties and environment variables
  • requirements
  • build features (except Automatic merge and VCS labeling)
  • failure conditions
  • artifact publishing rules
  • artifact dependencies
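For instance, if the settings are stored in the Kotlin format, the feature branch’s copy of .teamcity could override an environment variable for its builds. Here is a sketch; the DSL package version and the parameter name are illustrative, not taken from a real project:

```kotlin
import jetbrains.buildServer.configs.kotlin.v2018_2.*

object Build : BuildType({
    name = "Build"
    params {
        // This value lives in the feature branch's copy of the settings and
        // may differ from the default branch; builds of the branch pick it up.
        param("env.SKIP_SLOW_TESTS", "true")
    }
})
```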

By default, the settings are stored in the XML format. Since there is no publicly available documentation for this format, it is hard to guess how to change build steps correctly, or how to add a build feature. This is what a command line build step looks like in XML:

  <runner id="RUNNER_1" name="" type="simpleRunner">
    <parameters>
      <param name="script.content" value="echo &quot;Hello world!&quot;" />
      <param name="teamcity.step.mode" value="default" />
      <param name="use.custom.script" value="true" />
    </parameters>
  </runner>
What can help here is seeing how TeamCity generates these settings. For instance, create a sandbox project on your TeamCity server for such experiments, and browse the audit page for the settings difference after each change:

diff

Another option is to use Kotlin DSL (see the Settings format option on the Versioned settings page). In this case, instead of .xml files, .kt files will be stored under the .teamcity folder.

A Kotlin DSL project can be opened in IntelliJ IDEA Community (free and open-source IDE from JetBrains). Kotlin DSL files look much more concise, and IDE auto-completion makes it much easier to work with them, especially in the recent TeamCity versions. This is what the same command line build step looks like in Kotlin DSL:

steps {
    script {
        scriptContent = """echo "Hello world!""""
    }
}

Some additional hints:

  • Since TeamCity 2017.2 you can browse Kotlin DSL documentation using the following URL: <TeamCity server URL>/app/dsl-documentation/index.html
  • If a build loads settings from the VCS repository, you should see something like this in the build log:
    buildlog
  • Actual settings used by the build are stored in the buildSettings.xml file under the hidden build artifacts. This file can be helpful if something does not work as expected.
    buildSettings
  • If you’re using remote runs instead of feature branches, all the same capabilities are available, provided that the .teamcity directory is present in your repository and versioned settings are configured as described above.

Finally, one more thing to keep in mind: if the changes under .teamcity in your feature branch were temporary (for instance, you removed some long-running steps or switched off some tests for convenience), do not forget to revert them when merging into the default branch.

Happy building!

Deploying TeamCity into AWS using CloudFormation and Fargate


It is sometimes easier to get started with TeamCity by deploying it into a cloud service, such as AWS. This allows users to try TeamCity without having to prepare a dedicated machine.

For a quick start, it is possible to use the CloudFormation template, which deploys a minimal setup to AWS. For the agents, TeamCity includes integration with Amazon EC2 out of the box. However, there is also an alternative way: starting the agents as Docker containers on Amazon’s Elastic Container Service platform.

In this article, we’re going to describe how to deploy a TeamCity server using the CloudFormation template and how to configure the agents to run on Amazon ECS with Fargate.

Deploying TeamCity server to AWS with CloudFormation template

The official TeamCity CloudFormation template is available to simplify deployment to AWS. The resulting installation starts a VM with two Docker containers: one for the TeamCity server, and the other for the default agent.

DISCLAIMER: The current configuration of the CloudFormation template is not recommended for a production setup. The default agent is configured to run on the same machine as the TeamCity server. Instead, it is advised to run the agents separately so that they do not consume the machine resources required by the server.

Deploy TeamCity to AWS using CloudFormation template

Once the TeamCity server is deployed, check the Outputs tab for the URL. After the initial configuration steps, it is possible to add more build agents to the server. There are multiple options for deploying TeamCity agents on AWS. Below, we’ll describe how to deploy an agent to Amazon ECS with Fargate.

AWS Fargate for TeamCity Agents

AWS Fargate is a technology for Amazon ECS that allows running containers without having to manage servers. A TeamCity agent can be deployed as a container to ECS using Fargate.

The short-lived container instances on ECS are perfectly suitable for executing build tasks. For TeamCity, this means that it is possible to acquire more resources dynamically: once more agents are required to process the build queue, TeamCity will request a new agent to start via Fargate. The agents can stop once the workload isn’t as high anymore.

To run TeamCity agents on Amazon ECS with Fargate, several steps need to be completed:

  1. Have a running TeamCity server instance
  2. Install the required plugins for TeamCity server
  3. Create task definition for TeamCity agent in Amazon ECS
  4. Create cloud agent profile for the project in TeamCity
  5. Optionally, configure S3 artifact storage for the project in TeamCity

Installing Plugins

The integration with various AWS services is activated by installing additional plugins on the TeamCity server. To make use of the AWS Fargate integration, the Amazon ECS Support plugin needs to be installed. You might also want to store the build artifacts in Amazon S3; the AWS S3 Artifact Storage plugin enables that integration.

Once the plugins are installed, we can proceed to the configuration.

Agents Configuration

Create AWS Fargate task definition

The TeamCity agent is available as a Docker image on Docker Hub. In the Fargate task definition, it is possible to define a container referring to that image. Once the server requests a new agent via Fargate, a new container starts on Amazon ECS.

The important bit is to add the SERVER_URL environment variable pointing at the URL of the TeamCity server. The agent will use this variable to connect to the server.

Create Fargate task definition

Create cloud profile for a project in TeamCity

To enable cloud integration in TeamCity, one has to create a cloud profile for the desired project. Configuring a cloud profile in TeamCity is quite straightforward.

Go to the project settings, open the Cloud Profiles tab, click the “Create new cloud profile” button, and proceed with the configuration. Make sure to select Amazon Elastic Container Service as the cloud type.

It is also important that the selected AWS region in the cloud profile matches the region that you used to create the Fargate task definition.

In order to point the cloud profile at the task definition in Fargate, we have to add an agent image with the appropriate launch type.

Create Fargate task definition

Artifact Storage Configuration

In TeamCity, a build configuration can expose the resulting artifacts. It is wise to store the artifacts in external storage; we can use Amazon S3 to save the files. Let’s see how to configure the service for TeamCity.

In the project settings, navigate to the Artifacts Storage tab and click the “Add new storage” button. Select the “S3 Storage” type, fill in the name and the bucket name, and press Save. After that, activate this artifact storage in the TeamCity project by clicking the “Make Active” link. New builds in this project will then store artifacts in S3 automatically.

S3 artifact storage configuration

Summary

TeamCity was designed as an on-premises solution. However, cloud services, such as AWS, simplify the setup if you would like to try TeamCity for your projects. It is easy to start with the CloudFormation template and to scale by deploying the agents as Docker containers on Amazon ECS.

Configuration as Code, Part 1: Getting Started with Kotlin DSL


Configuration as code is a well-established practice for CI servers. The benefits of this approach include versioning support via the VCS repository, a simplified audit of configuration changes, and improved portability of the configurations. Some users may also simply prefer code to point-and-click configuration in the UI. In TeamCity, we can use Kotlin DSL to author build configurations.

The possibility to use Kotlin for defining build configurations was added in TeamCity 10. In TeamCity 2018.x, Kotlin support was greatly improved for a more pleasant user experience.

In this series of posts, we are going to explain how to use Kotlin to define build configurations for TeamCity. We will start with the basics on how to get started with configuration-as-code in TeamCity. We will then dive into the practicalities of using Kotlin DSL for build configurations. Finally, we will take a look at advanced topics such as extending the DSL.

  1. Getting started with Kotlin DSL
  2. Working with configuration scripts
  3. Creating build configurations dynamically
  4. Extending Kotlin DSL
  5. Using libraries
  6. Testing configuration scripts

The demo application

In this tutorial, we are going to use the famous spring-petclinic project for demonstration. Spring Petclinic is a Java project that uses Maven for the build. The goal of this post is to demonstrate an approach for building an existing application with Kotlin DSL.

It is best to add a Kotlin DSL configuration to an existing project in TeamCity. First, create a new project in TeamCity by pointing to a repository URL. For our demo project, TeamCity will detect the presence of the Maven pom.xml and propose the required build steps. As a result, the new project will include one VCS root and a build configuration with a Maven build step and a VCS trigger.

New project for Spring Petclinic

The next step for this project is transitioning to configuration-as-code with Kotlin DSL. To do that, we have to enable Versioned Settings in our project.

WARNING! Once you enable Versioned Settings for the project, TeamCity will generate the corresponding files and immediately commit/push them into the selected repository.

Enable Versioned Settings

To start using Kotlin build scripts in TeamCity, Versioned Settings have to be enabled, regardless of whether you are starting from scratch or have an existing project. In the project configuration, under Versioned Settings, select the Synchronization enabled option, select a VCS root for the settings, and choose the Kotlin format.

The VCS root selected to store the settings can be the same as the source code of the application you want to build. For our demo, however, we are going to use a specific VCS root dedicated to only the settings. Hence, there will be 2 VCS roots in the project: one for source code of the application and the other for build settings.

TeamCity Versioned Settings

In addition to this, it is possible to define which settings to take when the build starts. When we use Kotlin DSL and make changes to the scripts in our favorite IDE, the “source of truth” is located in the VCS repository. Hence, it is advised to enable the use settings from VCS option. The changes to the settings are then always reflected on the server.

Once the Versioned Settings are enabled, TeamCity will generate the corresponding script and commit it to the selected VCS root. Now we can pull the changes from the settings repository and start editing.

Opening the configuration in IntelliJ IDEA

The generated settings layout is a .teamcity directory with two files: settings.kts and pom.xml. For instance, the following are the files checked in to version control for the spring-petclinic application.

Clone the demo repository:

git clone https://github.com/antonarhipov/spring-petclinic-teamcity-dsl.git

Open the folder in IntelliJ IDEA, and you will see the following layout:

TeamCity Kotlin DSL project layout

Essentially, the .teamcity folder is a Maven module. Right-click on the pom.xml file and select Add as Maven Project – the IDE will import the Maven module and download the required dependencies.

add-as-maven-project

pom.xml

The TeamCity-specific classes that we use in the Kotlin script come from the dependencies that we declare in the pom.xml as part of the Maven module. This is a standard pom.xml file; still, there are a few things we need to pay attention to.

The DSL comes in a series of packages with the main one being the actual DSL which is contained in the configs-dsl-kotlin-{version}.jar file. Examining it, we can see that it has a series of classes that describe pretty much the entire TeamCity user interface.

kotlin-dsl-library

The other dependencies that we see in the list are the Kotlin DSL extensions contributed by TeamCity plugins.

kotlin-dsl-dependencies

Some of the dependencies are downloaded from JetBrains’ own Maven repository. However, the plugins you have installed on your TeamCity instance may provide the extensions for the DSL, and therefore our pom.xml needs to pull the dependencies from the corresponding server. This is why you can find an additional repository URL in the pom.xml file.

The source directory is also redefined in this pom.xml since in many cases there will be just one settings.kts file. There is not much use for the standard Maven project layout here.

settings.kts

A kts file is a Kotlin Script file, different from an ordinary Kotlin file (.kt) in that it can be run as a script. All the code relevant to the TeamCity configuration can be stored in this file, but it can also be divided into several files to provide a better separation of concerns. Imports omitted, this is how the settings.kts for our demo project looks:

settings-kts

version indicates the TeamCity version, and project() is the main entry point to the configuration script. It is a function call, which takes as a parameter a block that represents the entire TeamCity project. In that block, we compose the structure of the project.

The vcsRoot(...) function call registers a predefined VCS root to the project. The buildType(...) function registers a build configuration. As a result, there is one project, with one VCS root, and one build configuration declared in our settings.kts file.

The corresponding objects for VCS root and the build configuration are declared in the same script.
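Put together, a minimal settings.kts in this shape might look like the following sketch. It assumes the 2018.x DSL packages; the attribute values match the demo project described in this post:

```kotlin
import jetbrains.buildServer.configs.kotlin.v2018_2.*
import jetbrains.buildServer.configs.kotlin.v2018_2.buildSteps.maven
import jetbrains.buildServer.configs.kotlin.v2018_2.triggers.vcs
import jetbrains.buildServer.configs.kotlin.v2018_2.vcs.GitVcsRoot

version = "2018.2"

// The main entry point: one project with one VCS root and one build configuration.
project {
    vcsRoot(PetclinicVcs)
    buildType(Build)
}

object PetclinicVcs : GitVcsRoot({
    name = "PetclinicVcs"
    url = "https://github.com/spring-projects/spring-petclinic.git"
})

object Build : BuildType({
    name = "Build"

    vcs {
        root(PetclinicVcs)
    }

    steps {
        maven {
            goals = "clean package"
        }
    }

    triggers {
        vcs {
        }
    }
})
```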

Build configuration

The Build object inherits from the BuildType class that represents the build configuration in TeamCity. The object is registered in the project using a buildType(...) function call.

The declarations in the object are self-explanatory: you can see the name of the build configuration, artifact rules, the related VCS root, build steps, and triggers. There are more possibilities that we can use, which we will cover in further posts.

Kotlin Build Configuration

VCS root

The VCS root object, PetclinicVcs, is a very simple one in our example. It has just two attributes: the name, and the URL of the repository.

kotlin-dsl-vcs-root

The parent type of the object, GitVcsRoot, indicates that this is a git repository that we’re going to connect to.

There are more attributes that we can specify for the VCS root object, like branches specification, and authentication type if needed.
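As a sketch of such additions (assuming the 2018.x DSL; the user name and the credential reference below are placeholders, not values from the demo project):

```kotlin
import jetbrains.buildServer.configs.kotlin.v2018_2.vcs.GitVcsRoot

object PetclinicVcs : GitVcsRoot({
    name = "PetclinicVcs"
    url = "https://github.com/spring-projects/spring-petclinic.git"
    branch = "refs/heads/master"     // the default branch to build
    branchSpec = "refs/heads/*"      // which branches to monitor
    authMethod = password {
        userName = "builduser"               // hypothetical user
        password = "credentialsJSON:xxxx"    // placeholder for a stored token
    }
})
```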

Import project with the existing Kotlin script

It is possible to import existing Kotlin settings. When creating the project from a repository URL, TeamCity will scan the sources. If existing Kotlin settings are detected, the wizard will suggest importing them.

Import from Kotlin DSL

You can then decide if you want to just import the project from the settings, import and enable the synchronization with the VCS repository, or proceed without the import.

Summary

In this part of the series, we’ve looked at how to get started configuring the builds in TeamCity with Kotlin DSL. We explored the main components of a TeamCity configuration script and its dependencies. In part two, we’ll dive a little deeper into the DSL, modify the script, and see some of the benefits that Kotlin and IntelliJ IDEA already start providing us with in terms of guidance via code assistants.

Configuration as Code, Part 2: Working with Kotlin Scripts


This is part two of the six-part series on working with Kotlin to create build configurations for TeamCity.

  1. Getting started with Kotlin DSL
  2. Working with configuration scripts
  3. Creating build configurations dynamically
  4. Extending Kotlin DSL
  5. Using libraries
  6. Testing configuration scripts

In the first part of the series, we have seen how to get started with Kotlin DSL for TeamCity. Now we’ll dive a little deeper into the DSL and see what it provides us with in terms of building configuration scripts.

An important thing to note is that TeamCity 2018.x uses Kotlin version 1.2.50.

Configuration script

Because the configuration is actually a valid Kotlin program, we get to use all the assistance from the IDE – code completion, refactoring, and navigation.

Editing settings.kts

As we saw in the previous post, the entry point to the configuration is the project {...} function call defined in settings.kts. Let’s examine the individual blocks of the script.

Project

The top-level project element in the settings.kts file represents a top-level context for all the build configurations and subprojects that are declared within the scope.

project {
}

A top-level project does not need to have an id or a name. These attributes are defined when we register a new project in TeamCity.

A project in TeamCity may include sub-projects and build configurations. A sub-project should be registered in the main context using the subProject function:

project {
   subProject(MyProject)
}

We also can register a few other entities, like one or more VCS roots and build configurations.

project {
   vcsRoot(PetclinicVcs)
   buildType(Build)
}

The above exactly matches our demo project configuration script. PetclinicVcs and Build objects represent a VCS root and build configuration – this is where the build happens!

Build configuration

The build configuration is represented by a BuildType class in TeamCity’s Kotlin DSL. To define a new build configuration we define an object that derives from the BuildType.

object Build: BuildType({
   
})

The constructor of BuildType receives a block of code, enclosed within the curly braces {}. This is where we define all the required attributes for the build configuration.

object Build : BuildType({
    id("Build")
    name = "Build"

    vcs {
        root(PetclinicVcs)
    }

    steps {
        maven {
            goals = "clean package"
        }
    }

    triggers {
        vcs {}
    }
})

Let’s examine the individual configuration blocks in the example above.

The id and name

The first lines in the Build object denote its id and name. The id, if not specified explicitly, will be derived from the object name.

Configuration id and name

Version control settings

The vcs{} block is used to define the version control settings, including the list of VCS roots and other attributes.

vcs settings

Build steps

After the VCS settings block, you will find the most important block, steps{}, where we define all the required build steps.

In our example, for a simple Maven project, we only define one Maven build step with clean and package goals.

steps {
    maven {
        goals = "clean package"

        // Other options
        dockerImage = "maven:3.6.0-jdk-8"
        jvmArgs = "-Xmx512m"
        // etc.
    }
}

maven build step

Besides Maven, there are plenty of other build steps to choose from in TeamCity. Here are a few examples:

Gradle build step

gradle {
    tasks = "clean build"
}

Command line runner

script {
    scriptContent = "echo %build.number%"
}

Ant task with inline build script

ant {
    mode = antScript {
        content = """
            <?xml version="1.0" encoding="UTF-8"?>
            <project name="hello" default="sayHello">
                <target name="sayHello">
                    <echo message="Hello, world!"/>
                </target>
            </project>
        """.trimIndent()
    }
    targets = "sayHello"
}

Docker command

dockerCommand {
    commandType = build {
        source = path {
            path = "Dockerfile"
        }
        namesAndTags = "antonarhipov/myimage:%build.number%"
        commandArgs = "--pull"
    }
}

Finding a DSL snippet for a build step

Despite the IDE support for Kotlin, it might still be a bit challenging for new users to configure something in the code. “How do I know how to configure the desired build step, and what are the configuration options for it?” Have no fear – TeamCity can help with that! For a build configuration, find the View DSL toggle on the left-hand side of the screen:

Preview Kotlin DSL toggle

The toggle provides a preview of the given build configuration, where you can locate the build step that you want to configure. Say we’d like to add a new build step for building a Maven module.

Add a new build step, choose Maven, fill in the attributes, and, without saving the build step, click the toggle to preview the build configuration. The new build step will be highlighted as follows:

Preview Kotlin DSL

You can now copy the code snippet and paste it to the configuration script opened in your IDE.

Please note that if you save the new build step for a project that is already configured via Kotlin DSL scripts (this is allowed), TeamCity will generate a patch and commit it to the settings VCS root. It is then the user’s responsibility to merge the patch into the main Kotlin script.

The VCS root

Besides the build configuration, our demo project also includes a VCS root definition that the build configuration depends on.

object PetclinicVcs : GitVcsRoot({
   name = "PetclinicVcs"
   url = "https://github.com/spring-projects/spring-petclinic.git"
})

This is a minimal definition of the VCS root for a Git repository. The id attribute is not explicitly specified here, hence it is calculated automatically from the object’s name. The name attribute is required for displaying in the UI, and the url defines the location of the sources.

Depending on the kind of VCS repository we have, we may specify other attributes as well.

vcs root

Modifying the build process

Let’s extend this build a little bit. TeamCity provides a feature called Build Files Cleaner, also known as Swabra. Swabra makes sure that files left by the previous build are removed before running new builds.

We can add it using the features function. As we start to type, we can see that the IDE provides us with completion:

swabra in Kotlin DSL

The features function takes a series of feature functions, each of which adds a particular feature. In our case, the code we’re looking for is

features {
   swabra {
   }
}

In UI, you will find the result in the Build Features view:

swabra in build features

We have now modified the build configuration, and it works well. The problem is, if we want to have this feature for every build configuration, we’re going to end up repeating the code. Let’s refactor it to a better solution.

Refactoring the DSL

What we’d ideally like is to have every build configuration automatically have the Build Files Cleaner feature, without having to manually add it. In order to do this, we could introduce a function that wraps every instance of BuildType with this feature. In essence, instead of having the Project call

buildType(Build)
buildType(AnotherBuild)
buildType(OneMoreBuild)

we would have it call

buildType(cleanFiles(Build))
buildType(cleanFiles(AnotherBuild))
buildType(cleanFiles(OneMoreBuild))

For this to work, we’d need to create the following function

fun cleanFiles(buildType: BuildType): BuildType {
   buildType.features {
       swabra {}
   }
   return buildType
}

The new function essentially takes a BuildType, adds a feature to it, and then returns the BuildType. Given that Kotlin allows top-level functions (i.e. no objects or classes are required to host a function), we can put it anywhere in the code or create a specific file to hold it.

We can improve the code a little so that it only adds the feature if it doesn’t already exist:

fun cleanFiles(buildType: BuildType): BuildType {
   if (buildType.features.items.find { it.type == "swabra" } == null) {
       buildType.features {
           swabra {
           }
       }
   }
   return buildType
}
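
With the guard in place, applying the wrapper more than once is harmless. A hypothetical usage sketch:

```kotlin
// Wrapping twice still results in a single swabra feature,
// because the second call finds the existing feature and skips it.
buildType(cleanFiles(cleanFiles(Build)))
```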

Generalizing feature wrappers

The above function is great in that it allows us to add a specific feature to all the build configurations. What if we wanted to generalize this so that we could define the feature ourselves? We can do so by passing a block of code to our cleanFiles function, which we’ll also rename to something more generic.

What we’re doing here is creating what’s known as a higher-order function, a function that takes another function as a parameter. In fact, this is exactly what features, feature, and many of the other TeamCity DSL functions are.

fun wrapWithFeature(buildType: BuildType, featureBlock: BuildFeatures.() -> Unit): BuildType {
   buildType.features {
       featureBlock()
   }
   return buildType
}

One particular thing to note about this function is that its parameter is not just any function, but an extension function on BuildFeatures. In Kotlin, a parameter of this type is known as a lambda with receiver (i.e. there is a receiver object that the function body is applied to).

buildType(wrapWithFeature(Build){
   swabra {}
})

Declaring the parameter this way allows us to call the function cleanly, as shown above, with the feature definitions written directly inside the trailing lambda.
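
To see the mechanics outside of TeamCity, here is a minimal, self-contained Kotlin sketch of a lambda with receiver (the Greeter class is made up purely for illustration):

```kotlin
class Greeter {
    var name = "world"
    fun greet() = println("Hello, $name!")
}

// The parameter is an extension function on Greeter: inside the block,
// `this` refers to the Greeter instance being configured.
fun greeter(block: Greeter.() -> Unit): Greeter = Greeter().apply(block)

fun main() {
    greeter {
        name = "TeamCity" // resolved against the receiver
        greet()           // prints "Hello, TeamCity!"
    }
}
```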

Summary

In this post, we’ve seen how we can modify TeamCity configuration scripts using the extensive Kotlin-based DSL. What we have in our hands is a full programming language along with all the features and power that it provides. We can encapsulate functionality in functions to re-use, we can use higher-order functions as well as other things that open up many possibilities.

In the next post, we’ll see how to use some of this to dynamically create scripts.

Configuration as Code, Part 3: Creating Build Configurations Dynamically


This is part three of the six-part series on working with Kotlin to create build configurations for TeamCity.

  1. Getting started with Kotlin DSL
  2. Working with configuration scripts
  3. Creating build configurations dynamically
  4. Extending Kotlin DSL
  5. Using libraries
  6. Testing configuration scripts

We have seen in the previous post how we can leverage some of Kotlin’s language features to reuse code. In this part, we’re going to take advantage of the fact that we are dealing with a full programming language and not just a limited DSL, to create a dynamic build configuration.

Generating build configurations

The scenario is the following: we have a Maven project that we need to test on different operating systems and different JDK versions. This potentially generates a lot of different build configurations that we’d need to create and maintain.

Here’s an example configuration for building a Maven project:

version = "2018.2"

project {
    buildType(BuildForMacOSX)
}

object BuildForMacOSX : BuildType({
   name = "Build for Mac OS X"

   vcs {
       root(DslContext.settingsRoot)
   }

   steps {
       maven {
           goals = "clean package"
           mavenVersion = defaultProvidedVersion()
           jdkHome = "%env.JDK_18%"
       }
   }

   requirements {
       equals("teamcity.agent.jvm.os.name", "Mac OS X")
   }
})

If we try to create each individual configuration for all the combinations of OS types and JDK versions we will end up with a lot of code to maintain. Instead of creating each build configuration manually, what we can do is write some code to generate all the different build configurations for us.

A very simple approach we could take here is to have two lists with the versions of OS types and JDK versions, and then iterate over them to generate the build configurations:

val operatingSystems = listOf("Mac OS X", "Windows", "Linux")
val jdkVersions = listOf("JDK_18", "JDK_11")

project {
   for (os in operatingSystems) {
       for (jdk in jdkVersions) {
           buildType(Build(os, jdk))
       }
   }
}

We need to adjust our build configuration a little to use the parameters. Instead of an object, we will declare a class with a constructor that will accept the parameters for the OS type and JDK version.

class Build(val os: String, val jdk: String) : BuildType({
   id("Build_${os}_${jdk}".toExtId())
   name = "Build ($os, $jdk)"

   vcs {
       root(DslContext.settingsRoot)
   }

   steps {
       maven {
           goals = "clean package"
           mavenVersion = defaultProvidedVersion()
           jdkHome = "%env.${jdk}%"
       }
   }

   requirements {
       equals("teamcity.agent.jvm.os.name", os)
   }
})

An important thing to notice here is that we are now setting the id of the build configuration explicitly using the id(...) function call, e.g. id("Build_${os}_${jdk}".toExtId()).
Since the id may only contain a restricted set of characters, the DSL library provides the toExtId() function to sanitize the value that we want to assign.
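
As a rough illustration of the kind of cleanup toExtId() performs (this sanitizeId function is a hypothetical stand-in, not the actual DSL implementation):

```kotlin
// Hypothetical sketch: keep only latin letters, digits, and underscores,
// which is roughly what external IDs are restricted to.
fun sanitizeId(raw: String): String =
    raw.replace(Regex("[^A-Za-z0-9_]"), "")

fun main() {
    println(sanitizeId("Build_Mac OS X_JDK_18")) // prints "Build_MacOSX_JDK_18"
}
```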

The result of this is that we will see 6 build configurations created:

Dynamic configurations

Summary

The above is just a sample of what can be done when creating dynamic build scripts. In this case, we created multiple build configurations, but we could have just as easily created multiple steps, certain VCS triggers, or whatever else might come in useful. The important thing to understand is that, at the end of the day, the Kotlin Configuration Script isn’t merely a DSL but a fully-fledged programming language.

Configuration as Code, Part 4: Extending the TeamCity DSL

  1. Getting started with Kotlin DSL
  2. Working with configuration scripts
  3. Creating build configurations dynamically
  4. Extending Kotlin DSL
  5. Using libraries
  6. Testing configuration scripts

TeamCity allows us to create build configurations that are dependent on one another, with the dependency being either snapshots or artifacts. The configuration for defining dependencies is done at the build configuration level. For instance, assuming that we have a build type Publish that has snapshot and artifact dependencies on Package, we would define this in the build type Publish in the following way:

object Package : BuildType({
   name = "Package"

   artifactRules = "application.zip"

   steps {
       // define the steps needed to produce the application.zip
   }
})

object Publish : BuildType({
   name = "Publish"

   steps {
       // define the steps needed to publish the artifacts
   }

   dependencies {
       snapshot(Package){}
       artifacts(Package) {
           artifactRules = "application.zip"
       }
   }
})

and in turn, if Package had dependencies on previous build configurations, we’d define these in the dependencies segment of its build configuration.

TeamCity then allows us to visually see this using the Build Chains tab in the user interface:

build chains

The canonical approach to defining build chains in TeamCity is to declare the individual dependencies in each build configuration. The approach is simple, but as the number of build configurations in the chain grows, the configurations become harder to maintain.

Imagine there’s a large number of build configurations in the chain, and we want to add one more somewhere in the middle of the workflow. For this to work, we have to configure the correct dependencies in the new build configuration. But we also need to update the dependencies in the existing build configurations to point at the new one. This approach does not seem to scale well.

But we can work around this problem by introducing our own abstractions in TeamCity’s DSL.

Defining the pipeline in code

What if we had a way to describe the pipeline on top of the build configurations that we define separately in the project? The pipeline abstraction is something we need to create ourselves. The goal of this abstraction is to allow us to omit specifying the snapshot dependencies in the build configurations that we want to combine into a build chain.

Assume that we have a few build configurations: Compile, Test, Package, and Publish. Test needs a snapshot dependency on Compile, Package depends on Test, and Publish depends on Package. Together, these build configurations compose a build chain.

Let’s define how the new abstraction would look. We think of the build chain described above as a “sequence of builds”. So why not describe it as follows:

project {
    sequence {
        build(Compile)
        build(Test)
        build(Package)
        build(Publish)
    }
}

Almost immediately, we can think of a case where we need to run some builds in parallel:

project {
    sequence {
        build(Compile)
        parallel {
            build(Test1)
            build(Test2)
        } 
        build(Package)
        build(Publish)
    }
}

In the example above, Test1 and Test2 are defined in the parallel block, and both depend on Compile; Package depends on both Test1 and Test2. This handles a simple but common kind of build chain, where one build produces an artifact, several builds test it in parallel, and a final build deploys the result if all its dependencies are successful.

For our new abstraction, we need to define what sequence, parallel, and build are. Currently, the TeamCity DSL does not provide this functionality. But that’s where Kotlin’s extensibility proves quite valuable, as we’ll now see.

Creating our own DSL definitions

Kotlin allows us to create extension functions and properties, which are a means to extend a specific type with new functionality without having to inherit from it. When we pass extension functions as arguments to other functions (i.e. higher-order functions), we get what Kotlin calls lambdas with receivers, something we already saw when generalizing feature wrappers in the second part of this series. We will apply the same concept here to create our DSL.

class Sequence {
   val buildTypes = arrayListOf<BuildType>()

   fun build(buildType: BuildType) {
       buildTypes.add(buildType)
   }
}

fun Project.sequence(block: Sequence.() -> Unit){
   val sequence = Sequence().apply(block)

   var previous: BuildType? = null

   // create snapshot dependencies
   for (current in sequence.buildTypes) {
       if (previous != null) {
           current.dependencies.snapshot(previous){}
       }
       previous = current
   }

   //call buildType function on each build type
   //to include it into the current Project
   sequence.buildTypes.forEach(this::buildType)
}

The code above adds an extension function to the Project class, allowing us to declare a sequence. Using the aforementioned lambda with receiver feature, we declare that the block passed to the sequence function is evaluated in the context of the Sequence class. Hence, we can call the build function directly within that block:

project {
    sequence {
         build(BuildA)
         build(BuildB) // BuildB has a snapshot dependency on BuildA
    }
} 

Adding parallel blocks

To support the parallel block we need to extend our abstraction a little bit. There will be a serial stage that consists of a single build type and a parallel stage that may include many build types.

interface Stage

class Single(val buildType: BuildType) : Stage

class Parallel : Stage {
   val buildTypes = arrayListOf<BuildType>()

   fun build(buildType: BuildType) {
       buildTypes.add(buildType)
   }
}

class Sequence {
   val stages = arrayListOf<Stage>()

   fun build(buildType: BuildType) {
       stages.add(Single(buildType))
   }

   fun parallel(block: Parallel.() -> Unit) {
       val parallel = Parallel().apply(block)
       stages.add(parallel)
   }
}

To support the parallel blocks we will need to write slightly more code. Every build type defined in the parallel block will have a dependency on the build type which was declared before the parallel block. And the build type declared after the parallel block will depend on all the build types declared in the block. We’ll make the assumption that a parallel block cannot follow a parallel block, though it’s not a big problem to support this feature.

fun Project.sequence(block: Sequence.() -> Unit) {
   val sequence = Sequence().apply(block)

   var previous: Stage? = null

   for (current in sequence.stages) {
       if (previous != null) {
           createSnapshotDependency(current, previous)
       }
       previous = current
   }

   sequence.stages.forEach {
       if (it is Single) {
           buildType(it.buildType)
       }
       if (it is Parallel) {
           it.buildTypes.forEach(this::buildType)
       }
   }
}

fun createSnapshotDependency(stage: Stage, dependency: Stage){
   if (dependency is Single) {
       stageDependsOnSingle(stage, dependency)
   }
   if (dependency is Parallel) {
       stageDependsOnParallel(stage, dependency)
   }
}

fun stageDependsOnSingle(stage: Stage, dependency: Single) {
   if (stage is Single) {
       singleDependsOnSingle(stage, dependency)
   }
   if (stage is Parallel) {
       parallelDependsOnSingle(stage, dependency)
   }
}

fun stageDependsOnParallel(stage: Stage, dependency: Parallel) {
   if (stage is Single) {
       singleDependsOnParallel(stage, dependency)
   }
   if (stage is Parallel) {
       throw IllegalStateException("Parallel cannot snapshot-depend on parallel")
   }
}

fun parallelDependsOnSingle(stage: Parallel, dependency: Single) {
   stage.buildTypes.forEach { buildType ->
       singleDependsOnSingle(Single(buildType), dependency)
   }
}

fun singleDependsOnParallel(stage: Single, dependency: Parallel) {
   dependency.buildTypes.forEach { buildType ->
       singleDependsOnSingle(stage, Single(buildType))
   }
}

fun singleDependsOnSingle(stage: Single, dependency: Single) {
   stage.buildType.dependencies.snapshot(dependency.buildType) {}
}

The DSL now supports parallel blocks in the sequence:

project {
  sequence {
    build(Compile)
    parallel {
       build(Test1)
       build(Test2)
    }
    build(Package)
    build(Publish)
  }
}

basic-parallel-blocks

We could extend the DSL even further to support nesting of the blocks, by allowing a sequence to be defined inside the parallel blocks.

project {
   sequence {
       build(Compile) 
       parallel {
           build(Test1) 
           sequence {
              build(Test2) 
              build(Test3)
           } 
       }
       build(Package) 
       build(Publish) 
   }
}

sequence-in-parallel

Nesting the blocks allows us to create build chains of almost any complexity. However, our example only covers snapshot dependencies; it would be nice to see artifact dependencies in the sequence definition as well.

Adding artifact dependencies

To pass an artifact from Compile to Test, we simply specify that Compile produces the artifact and that Test requires it.

sequence {
   build(Compile) {
      produces("application.jar")
   }
   build(Test) {
      requires(Compile, "application.jar")
   }
}

produces and requires are new extension functions for BuildType:

fun BuildType.produces(artifacts: String) {
   artifactRules = artifacts
}

fun BuildType.requires(bt: BuildType, artifacts: String) {
   dependencies.artifacts(bt) {
       artifactRules = artifacts
   }
}

We also need a way to execute these new functions in the context of a BuildType. For this, we can add overloads of the build() function for the Sequence and Parallel classes that accept the corresponding block, again declared as a lambda with receiver:

fun Sequence.build(bt: BuildType, block: BuildType.() -> Unit = {}){
   bt.apply(block)
   stages.add(Single(bt))
}

fun Parallel.build(bt: BuildType, block: BuildType.() -> Unit = {}){
   bt.apply(block)
   buildTypes.add(bt)
}

As a result, we can define a more complex sequence with our brand new DSL:

sequence {
   build(Compile) {
       produces("application.jar")
   }
   parallel {
       build(Test1) {
           requires(Compile, "application.jar")
           produces("test.reports.zip")
       }
       sequence {
           build(Test2) {
               requires(Compile, "application.jar")
               produces("test.reports.zip")
           }
           build(Test3) {
               requires(Compile, "application.jar")
               produces("test.reports.zip")
           }
       }
   }
   build(Package) {
       requires(Compile, "application.jar")
       produces("application.zip")
   }
   build(Publish) {
       requires(Package, "application.zip")
   }
}

Summary

It’s important to understand that this is just one of many ways in which we can define pipelines. We’ve used the terms sequence, parallel, and build; we could just as well have used the term buildchain to align it better with the UI. We also added the convenience methods produces and requires to BuildType to work with artifacts.
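
Such a rename is a one-liner on top of what we already have (a sketch, reusing the Sequence class and the Project.sequence extension defined earlier):

```kotlin
// A hypothetical alias: expose the same abstraction under the name
// "buildchain" to match the TeamCity UI terminology.
fun Project.buildchain(block: Sequence.() -> Unit) = sequence(block)
```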

The ability to easily extend the TeamCity DSL with our own constructs provides us with flexibility. We can create custom abstractions on top of the existing DSL to better reflect how we reason about our build workflow.

In the next post, we’ll see how to extract our DSL extensions into a library for further re-use.


Configuration as Code, Part 5: Using DSL extensions as a library

  1. Getting started with Kotlin DSL
  2. Working with configuration scripts
  3. Creating build configurations dynamically
  4. Extending Kotlin DSL
  5. Using libraries
  6. Testing configuration scripts

In the previous post, we have seen how to extend TeamCity’s Kotlin DSL by adding new abstractions. If the new abstraction is generic enough, it would make sense to reuse it in different projects. In this post, we are going to look at how to extract the common code into a library. We will then use this library as a dependency in a TeamCity project.

Maven project and dependencies

In the first post of this series, we started out by creating an empty project in TeamCity. We then instructed the server to generate the configuration settings in Kotlin format.

The generated pom.xml file pointed at two repositories and a few dependencies. This pom.xml is a little excessive for our next goal, but we can use it as a base, and remove the parts that we don’t need for the DSL library.

The two repositories in the pom.xml file are jetbrains-all, the public JetBrains repository, and teamcity-server that points to the TeamCity server where we generated the settings. The reason why the TeamCity server is used as a repository for the Maven project is that there may be some plugins installed that extend the TeamCity Kotlin DSL. And we may want to use those extensions for configuring the builds.

However, for a library, it makes sense to rely on a minimal set of dependencies to ensure portability. Hence, we keep only those dependencies that are downloaded from the public JetBrains Maven repository and remove all the others. The resulting pom.xml lists only 3 libraries: configs-dsl-kotlin-{version}.jar, kotlin-stdlib-jdk8-{version}.jar, and kotlin-script-runtime-{version}.jar.

The code

It’s time to write some code! In fact, it’s already written. In the previous post, we have introduced the new abstraction, the sequence, to automatically configure the snapshot dependencies for the build configurations. We only need to put this code into a *.kt file in our new Maven project.

teamcity-pipelines-dsl-lib

We have published the example project on GitHub. Pipelines.kt lists all the extensions to the TeamCity DSL. That’s it! We now can build the library, publish it, and use it as a dependency in any TeamCity project with Kotlin DSL.

Using the library

The new library project is on GitHub, but we haven’t published it to any Maven repository yet. To add it as a dependency to any other Maven project we can use the awesome jitpack.io. The demo project demonstrates how the DSL library is applied.

Here’s how we can use the library:

1. Add the JitPack repository to the pom.xml file:

<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>

2. Add the dependency to the dependent DSL project’s pom.xml:

<dependencies>
    <dependency>
        <groupId>com.github.JetBrains</groupId>
        <artifactId>teamcity-pipelines-dsl</artifactId>
        <version>0.8</version>
    </dependency>
</dependencies>

The version is equal to a tag in the GitHub repository:

teamcity-pipelines-dsl-tags

Once the IDE has downloaded the dependencies, we are able to use the DSL extensions provided by the library. See settings.kts of the demo project for an example.

using-the-library

Summary

TeamCity allows adding 3rd-party libraries as Maven dependencies. In this post, we have demonstrated how to add a dependency on the library that adds extensions to the TeamCity Kotlin DSL.

Webinar: Getting Started With Building Plugins for TeamCity


Missing a feature in TeamCity? Build your own plugin! To learn how, join us Tuesday, April 30th, 16:00 CEST (11:00 AM EDT) for the Getting Started with TeamCity Plugins webinar.

webinar-14 (1)

The webinar introduces you to the ins and outs of plugin development for TeamCity: how to get started, where to find the docs and samples, what the typical use cases are, and, even more importantly, where to ask questions! We will develop a new plugin for TeamCity from scratch, explore the possible extension points, and discuss the essential concepts.

In this webinar:

  • Using the Maven archetype for TeamCity plugins
  • Applying the TeamCity SDK for Maven
  • An overview of typical TeamCity plugins
  • An overview of the TeamCity OpenAPI
  • Updating plugins without server restarts

Space is limited, so please register now. There will be an opportunity to ask questions during the webinar.

Register for the webinar

Anton Arhipov is a Developer Advocate for JetBrains TeamCity. His professional interests include everything Java, but also other programming languages, middleware, and developer tooling. A Java Champion since 2014, Anton is also a co-organizer of DevClub.eu, a local developer community in Tallinn, Estonia.

Configuration as Code, Part 6: Testing Configuration Scripts


In this blog post, we are going to look at how to test TeamCity configuration scripts.

  1. Getting started with Kotlin DSL
  2. Working with configuration scripts
  3. Creating build configurations dynamically
  4. Extending Kotlin DSL
  5. Using libraries
  6. Testing configuration scripts

Given that the script is implemented with Kotlin, we can simply add a dependency to a testing framework of our choice, set a few parameters and start writing tests for different aspects of our builds.

In our case, we’re going to use JUnit. For this, we need to add the JUnit dependency to the pom.xml file:

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
</dependency>

We also need to define the test directory in the build section of the pom.xml:

<build>
    <testSourceDirectory>tests</testSourceDirectory>
    <sourceDirectory>settings</sourceDirectory>
</build>

In this example, we have redefined the source directory as well, so that it corresponds to the directory layout where the configuration code lives in the settings directory and the tests live in the tests directory.

Once we have this in place, we can write unit tests as we would in any other Kotlin or Java project, accessing the different components of our project, build types, etc.

However, before we can start writing any code, we need to make a few adjustments to the script. The reason is that our configuration code resides in the settings.kts file, and the objects declared in a kts file are not visible in other files. Hence, to make these objects visible, we have to extract them into a file (or multiple files) with the kt extension.

First, instead of declaring the project definition as a block of code in the settings.kts file, we can extract it into an object:

version = "2018.2"

project(SpringPetclinic)

object SpringPetclinic : Project ({
   …
})

The SpringPetclinic object then refers to the build types, VCS roots, etc.

Next, to make this new object visible to the test code, we need to move this declaration into a file with a kt extension:

kotlin-dsl-test-code-in-files

settings.kts now serves as an entry point for the configuration where the project { } function is called. Everything else can be declared in the other *.kt files and referred to from the main script.

After the adjustments, we can add some tests. For instance, we could validate that all the build types use a clean checkout:

import org.junit.Assert.assertTrue
import org.junit.Test

class StringTests {

   @Test
   fun buildsHaveCleanCheckOut() {
       val project = SpringPetclinic

       project.buildTypes.forEach { bt ->
           assertTrue("BuildType '${bt.id}' doesn't use clean checkout",
               bt.vcs.cleanCheckout)
       }
   }
}

Configuration checks as part of the CI pipeline

Running the tests locally is just one part of the story. Wouldn’t it be nice to run validation before the build starts?

When we make changes to the Kotlin configuration and check them into source control, TeamCity synchronizes the changes and reports any errors it encounters. The ability to add tests gives us an extra layer of checks, making sure that the build script is free of scripting errors and that certain things are validated, such as the clean checkout setting we’ve seen above, or that the appropriate number of build steps is defined.
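
For instance, a check on the number of build steps could look like this (a sketch: the buildsHaveSteps test is ours, and it assumes the steps container exposes its items the same way features does):

```kotlin
import org.junit.Assert.assertTrue
import org.junit.Test

class StepTests {

    @Test
    fun buildsHaveSteps() {
        // Every build configuration in the project is expected
        // to define at least one build step.
        SpringPetclinic.buildTypes.forEach { bt ->
            assertTrue("BuildType '${bt.id}' has no build steps",
                bt.steps.items.isNotEmpty())
        }
    }
}
```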

We can define a build configuration in TeamCity that will execute the tests for our Kotlin scripts prior to the actual build. Since it is a Maven project, we can use a Maven build step; we just need to specify the correct path to pom.xml, i.e. .teamcity/pom.xml.

kotlin-dsl-code-in-ci-pipeline

A successful run of the new build configuration is a prerequisite for the rest of the build chain: if any JUnit test fails, the rest of the chain will not start.

Building GitHub pull requests with TeamCity


The support for pull requests in TeamCity was first implemented for GitHub as an external plugin. Starting with TeamCity 2018.2, the plugin is bundled with the distribution, so there is no need to install it separately. The functionality has since been extended in version 2019.1 to support GitLab and Bitbucket Server.

In this blog post, we will share some tips for building GitHub pull requests in TeamCity. First, there are a few things you need to know about configuring the VCS root with regard to pull request handling. Next, we’ll cover the Pull Requests and Commit Status Publisher build features. And finally, we’ll see how it all comes together when building pull request branches.

Setting up a VCS root

First, let there be a VCS root in a TeamCity project. We can configure the VCS root in Build Configuration Settings | Version Control Settings and click Attach VCS root.

When setting up the VCS root we have to make sure that the branch specification does not match the pull request branches.

vcs-root

The branch specification in the screenshot above includes a +:refs/heads/feature-* filter. This means that any branch in the GitHub repository whose name starts with feature- will be automatically detected by this VCS root. A pull request in GitHub is a git branch with a specific naming convention: refs/pull/ID/head, where ID is the number of the pull request submitted to the repository.

It is possible to configure the VCS root to match the incoming pull request branches and TeamCity will start the builds automatically. However, you might want to restrict the automatic build triggering for these branches. Hence, it is better to avoid adding +:* or +:refs/pull/* patterns to the branch specification of a VCS root. Instead, we can use the Pull Requests build feature to gain more control over the incoming pull requests.

Configuring Pull Requests build feature

Pull request support is implemented as a build feature in TeamCity. The feature extends the VCS root’s original branch specification to include pull requests that match the specified filtering criteria.

To configure the pull requests support for a build configuration, go to Build Configuration Settings | Build Features, click Add build feature, and select the Pull Requests feature from the dropdown list in the dialog.

adding-build-feature

We can then configure the build feature parameters: select the VCS root, VCS hosting type (GitHub), credentials, and filtering criteria.

pull-requests-configuration

The Pull Requests build feature extends the branch specification of the related VCS root. As a result, the full list of branches visible to the VCS root will include the following:

  • The default branch of the VCS root
  • Branches covered by the branch specification in the VCS root
  • Service-specific open pull request branches that match the filtering criteria, added by the Pull Requests build feature

For GitHub’s pull request branches we can configure some filtering rules. For instance, we can choose to only build the pull requests automatically if they are submitted by a member of the GitHub organization.

In addition to this, we can also filter the pull requests based on the target branch. For instance, if the pull request is submitted to refs/heads/master, then the pull request branch will be visible in the corresponding VCS root. Pull request branches whose target branch does not match the value specified in the filter will be filtered out.

Publishing the build status to GitHub

For better transparency in the CI workflow, it is useful to have an indication of the build status from the CI server next to the revision in the source control system. When we look at a specific revision, we can then immediately tell whether the submitted change has been verified by the CI server. Many source control hosting services support this functionality, and TeamCity provides a build feature to publish the build status to external systems: the Commit Status Publisher.

commit-status-publisher

The build status indication is useful when reviewing the pull requests submitted to a repository on GitHub. It is advisable to configure the Commit Status Publisher build feature in TeamCity if you are working with pull requests.

Triggering the builds

The Pull Requests build feature makes the pull request branches visible to the related VCS root. But it does not trigger the builds. In order to react to the changes detected by the VCS root we need to add a VCS trigger to the build configuration settings.

To add the VCS trigger to a build configuration, go to Build Configuration Settings | Version Control Settings, click Add new trigger, and select the VCS trigger from the list.

vcs-trigger

The default value in the branch filter of the VCS trigger is +:*. It means that the trigger will react to the changes in all the branches that are visible in the VCS roots attached to the same build configuration. Consequently, when a pull request is submitted, the trigger will apply and the build will start for the pull request branch.

Building pull requests

Once the Pull Requests build feature is configured we can try submitting a change to a GitHub repository:

pr1

When the new pull request is created, we can choose the branch in the target repository. This is the branch we can filter in the Pull Requests build feature settings in TeamCity.

pr2

Once the pull request is submitted, TeamCity will detect that there’s a new branch in the GitHub repository and will start the build.

building-pr

The build overview page in TeamCity provides additional details about the pull request.

building-pr-info

The build status is also published to the GitHub repository by the Commit Status Publisher:

building-pr-status

Here is a short screencast demonstrating the process above:



Summary

Now the puzzle pieces are coming together. The Pull Requests build feature extends the branch specification of the VCS root to match the pull request branches. The VCS trigger detects that a new pull request was submitted to the GitHub repository and triggers the build. Once the build is complete, the Commit Status Publisher sends the build status back to GitHub.

Building Go programs in TeamCity

TeamCity provides support for multiple technologies and programming languages. In TeamCity 2019.1, support for Go has been included in the distribution. In this blog post, we will explain how to configure TeamCity to work with Go programs.

Configuring Golang build feature

To enable Go support in TeamCity, go to Build Configuration Settings | Build Features, click Add build feature, and select Golang from the list.

The Golang build feature enables the real-time reporting and history of Go test results in TeamCity.

golang-build-feature

Running Go tests

Once the Golang build feature is enabled, TeamCity will parse the JSON output of the go test command. Hence, the command should be executed with the -json flag using one of these two methods:

  • Add this flag to the Command Line build runner’s script: go test -json
  • Add the env.GOFLAGS=-json parameter to the build configuration

go-build-step

In fact, since a lot of Go projects use make to build the code and run the tests, this may require changing the Makefile accordingly:

test:
    go test -json ./... ; exit 0

The build feature is enabled and the -json argument is added to the go test command, so we can now execute the build configuration. As a result, TeamCity will record the execution status and execution time of the tests and present this data in the UI.

go-tests

For each individual test, it is possible to review its execution time from a historical perspective across different build agents where the test was executed.

Muting the failing tests

Assume that there are 248 tests in our project and 4 of them are failing, meaning the build does not succeed. However, we know that it is okay for those tests to fail, and we would like to temporarily “mute” the specific test failures in the project. TeamCity provides a way to “mute” any of the currently failing tests so that they will not affect the build status of future builds.

mute-tests

Muting test failures is a privilege of the project administrator. Select the individual failed tests in the build results and click the Investigate/Mute button that appears at the bottom of the screen. It is then possible to mute the tests either project-wide or just in the selected build configuration. The unmute policy can also be specified. For instance, once the test is fixed, it will automatically unmute.

build-with-muted-tests

But what does muting the test failures have to do with running Go tests? We use the go test command to execute the tests, and its exit code depends on the status of the execution. If there is even a single test failure, the exit code will be 1. The build will then fail, since the Command Line build step in TeamCity respects exit codes.

To mitigate this issue, we just have to make sure that the exit code of the last command in the Command Line runner (or a build script) is 0. For instance, by executing an exit 0 command in the build step (or in the Makefile). In this case, it will be possible to mute the failures and make the build configuration succeed. If the exit code is 0 but there are still test failures that are not muted, then TeamCity knows that it has to fail the build.
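As a sketch, the corresponding Command Line step might look like this in the Kotlin DSL (the configuration and step names are illustrative, and the v2019_2 DSL API is assumed):

```kotlin
import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.script

object GoTests : BuildType({
    name = "Go Tests"

    steps {
        script {
            name = "Run Go tests"
            // exit 0 masks the non-zero exit code of go test, so muted
            // failures do not break the build; TeamCity will still fail
            // the build if unmuted failures appear in the JSON output.
            scriptContent = "go test -json ./... ; exit 0"
        }
    }
})
```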

Summary

Support for Go is provided by TeamCity out of the box; no external plugins are required. TeamCity parses the results of the go test command execution. The results are persisted, and it is possible to review the figures from a historical perspective. Consequently, all the TeamCity features related to test reporting are now available to Go developers.


Getting Started with TeamCity TestDrive

TeamCity is mostly known as an on-premises CI server. But if you want to get a taste of TeamCity, you don’t really need to install it on your servers. Enter TestDrive!

TestDrive is a limited cloud TeamCity offering. It is a way to try TeamCity for 60 days, without the need to download and install it. TestDrive is currently hosted on top of the teamcity.jetbrains.com server and lets you create one TeamCity project.

This blog post is a getting started guide on how to set up a project in TestDrive.

Logging in

On the TeamCity download page, select the TestDrive tab and click the “Test Drive in Cloud” button.

01-testdrive-get-teamcity

You will proceed to the login screen of the hosted TeamCity instance. You can register a JetBrains account or log in with any other account supported in TestDrive: Google, GitHub, Yahoo!, or Atlassian Bitbucket. For instance, if the repository you are planning to work with is hosted on GitHub, it makes sense to log in with a GitHub account.

Creating a project

After the login, the setup wizard will offer a number of options to configure your project. Say, we logged in using a GitHub account. Now the connection with GitHub is created and TeamCity can list all the repositories available on the account. We can just select the repository, and TeamCity will create the project and an initial build configuration.

03-testdrive-new-project-from-github

04-testdrive-new-build-configuration

In this example, we have selected to build a fork of the Go Buffalo framework. In one of the previous blog posts, we already described how to build Go projects with TeamCity. Let’s use that knowledge in this new context.

TeamCity will scan the repository and detect possible build steps. The steps are backed by build runners. If TeamCity locates any files relevant to a runner, it will offer to configure a corresponding build step. Examples of such files include IDE project files, Dockerfile, pom.xml, build.gradle, shell scripts, etc.

05-testdrive-build-steps

For Buffalo, we only need a Command Line build step to execute the go command.

To make sure that the build runs with the correct version of Go, it makes sense to execute the build step in a Docker container. In the build step configuration, we can specify which image should be used. In this case it’s golang:1.12.

06-testdrive-command-line-build-step

Don’t forget to configure the Golang build feature:

07-testdrive-golang-build-feature
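For reference, this setup, a Command Line step running in the golang:1.12 container plus the Golang build feature, might be sketched in the Kotlin DSL as follows (the configuration name is illustrative and the v2019_2 DSL API is assumed):

```kotlin
import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.buildFeatures.golang
import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.script

object BuildBuffalo : BuildType({
    name = "Build Buffalo"

    steps {
        script {
            name = "Run tests"
            scriptContent = "go test -json ./... ; exit 0"
            // Execute the step inside the official Go image, so the agent
            // itself does not need a Go installation.
            dockerImage = "golang:1.12"
        }
    }

    features {
        // Parse the JSON output of go test for real-time test reporting.
        golang {
            testFormat = "json"
        }
    }
})
```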

The build is ready to start. We can either start it manually or configure a build trigger to react to the new changes in the repository.

Running the build

To run the build, click the Run… button on the top-right of the screen. The build job will be placed in a queue. While it is waiting, we can see which agents are available to execute the build.

09-testdrive-in-build-queue

10-testdrive-running-build

Once a build agent becomes available, the build will start, and its status will be updated in real time while it is running. If there are any test failures, the status will indicate the failure long before the build finishes. You can monitor the build progress on the Build Log tab:

11-testdrive-build-logs-2

Once the execution has finished, we can see the test results and analyze any failures. In our example, a few tests have failed and we can check what the reason was, assign an investigation, or even mute the failures.

12-testdrive-build-results

13-testdrive-build-results-test-failure

Invite friends to collaborate

Once we have configured the project in TestDrive, it is listed under the account that we used to sign in. We can invite more people to work on the project. To do this, go to the Invitations tab in the project settings and create an invitation link.

The invitation specifies the role that will be assigned to the collaborator when they join the project by using the generated link. This way, we can create multiple invitations for different roles.

14-testdrive-invite-collaborator

14-testdrive-invite-collaborator-url


Build Chains: TeamCity’s Blend of Pipelines. Part 1 – Getting Started

In TeamCity, when we need to build something, we create a build configuration. A build configuration consists of the build steps and is executed in one run on the build agent. You can define as many build steps as you like in one build configuration. However, if the number of steps grows too large, it makes sense to examine what the build configuration is doing – maybe it does too many things!

We can split the steps into multiple build configurations and use TeamCity snapshot dependencies to link the configurations into a build chain. The way TeamCity works with build chains enables quite a few interesting features, including parallel execution of the builds, re-using the build results, and synchronization for multiple source control repositories. But most of all, it makes the overall maintenance significantly easier.

In this blog post, we will explain how to create a build chain in TeamCity by configuring the snapshot dependencies for the build configurations.

The minimal build chain

To create a simple build chain, it is enough to create two build configurations and configure a snapshot dependency from one configuration to another.

For this example, we are using a GitHub repository. The repository includes a Gradle project that builds a Spring Boot application. In addition, there is a Dockerfile for building a Docker image.

We are going to build this project in two steps. First, we will build the application, and then build the Docker image with the binary produced in the first step.

01-tc-pipelines-todo-backend

The first build configuration, TodoApp, builds the Spring Boot application and publishes an artifact as a result. The second build configuration, TodoImage, depends on TodoApp and builds a Docker image.

02-tc-pipelines-todoapp-config

There are two kinds of dependencies that we will need to configure: the snapshot dependency and the artifact dependency.

03-tc-pipelines-todoimg-dependencies

Artifact dependency

The TodoApp build configuration publishes an artifact to the todo.jar file. Any other build configuration can download the file by configuring the corresponding artifact dependency.

On the Dependencies tab, click the “Add new artifact dependency” button. In the dialog, locate the build configuration from which the files should be downloaded and specify the patterns to match the file path(s).

05-tc-pipelines-todo-backend-artifact-dep

The documentation page describes how to configure the artifact rules in more detail.

Snapshot dependency

To configure a snapshot dependency, go to the Dependencies tab in the build configuration settings and click the “Add new snapshot dependency” button. In the dialog, locate the build configuration it will depend on and click Save.

04-tc-pipelines-todoimg-snapshot-dep

There are a number of settings associated with the snapshot dependency. You can read about the various settings in the documentation. For our current example, the default configuration is sufficient.
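In the portable Kotlin DSL, the two dependencies described above might be sketched like this. The TodoApp object is assumed to be declared elsewhere in the same DSL project, and the v2019_2 DSL API is assumed:

```kotlin
import jetbrains.buildServer.configs.kotlin.v2019_2.*

object TodoImage : BuildType({
    name = "TodoImage"

    dependencies {
        // The snapshot dependency links TodoImage to TodoApp in a chain.
        snapshot(TodoApp) {
            onDependencyFailure = FailureAction.FAIL_TO_START
        }
        // The artifact dependency downloads todo.jar produced by TodoApp.
        artifacts(TodoApp) {
            artifactRules = "todo.jar"
        }
    }
})
```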

When the TodoImage build configuration is triggered, TeamCity makes sure all the dependencies for this build configuration are up to date. Consequently, all the dependent build configurations that form a build chain via the snapshot dependencies will be added to a build queue. The result is visible on the Build Chains tab of the project:

06-tc-pipelines-todo-backend-build-chain

Triggering the build chain

There are various kinds of triggers that are possible to set up for a build configuration in TeamCity. The VCS trigger is the one that reacts to the changes in a version control system.

11-tc-pipelines-todo-backend-vcs-trigger

In our example, there are two build configurations, but we will only need one VCS trigger. This is because it’s possible to tell the trigger to monitor for changes in the dependencies. In the configuration dialog, we have to enable the “Trigger a build on changes in snapshot dependencies” checkbox.

We can configure a dedicated VCS trigger for each and every build configuration in the chain. However, each trigger creates some overhead: the server has to allocate some cycles to maintain the trigger. Additionally, it’s easier to make changes to the settings if you only have one trigger to configure. Hence, we can just add a single VCS trigger to the very last configuration in the build chain and it will do the job.
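A sketch of such a trigger in the Kotlin DSL, attached to the last configuration in the chain (the v2019_2 DSL API is assumed):

```kotlin
import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.triggers.vcs

object TodoImage : BuildType({
    name = "TodoImage"

    triggers {
        vcs {
            // Fire not only on changes in this configuration's VCS roots,
            // but also on changes detected in its snapshot dependencies.
            watchChangesInDependencies = true
        }
    }
})
```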

Re-using the results

The build chain completed successfully. We can start the build again. However, this time TeamCity will be smarter and skip running the builds for the dependencies where it detects no source changes.

For instance, on the first run, the build number for both of our build configurations is 1. Now let’s try running the TodoImage build configuration a couple of times. Since there are no changes to TodoApp’s sources, its build is not started and TeamCity re-uses the result of the first build. The TodoImage build number is now 3, since it was our intent to execute it.

07-tc-pipelines-todo-backend-build-chain-2

It is also possible to enforce building the dependencies even if there are no source changes. For this, click on the ellipsis next to the Run button of the build configuration. From the dialog, choose the Dependencies tab, and select the dependencies that are required to run.

08-tc-pipelines-todo-backend-enforce-dependcies

Configuring checkout rules

The current setup of our build chain doesn’t really provide much value yet. All the sources are located in the same GitHub repository, so if we configure the trigger to fire on code changes, both build configurations will run.

For instance, if we make a change to the Dockerfile, we don’t need to run the build for the application. However, with the current setup, TeamCity detects that there is a change in the source control repository for TodoApp as well, and both build configurations will execute, incrementing their build numbers.

09-tc-pipelines-todo-backend-dockerfile-change

Instead, it would be nice to run the build only when necessary.

In this example, we would like to build TodoApp if there is a change to anything but the Dockerfile. And we only want to build a new image if just the Dockerfile changes – there’s no need to build the application in this case.

It is possible to configure the checkout rules for the VCS root accordingly. The checkout rules affect the actual checkout of the sources on the agent. That is, if we exclude any folders of the repository using the filters in the checkout rules, those folders won’t end up in the workspace.

We will exclude the docker/ folder from the checkout in the TodoApp build configuration. And we will only pull the docker/ folder in the TodoImage build configuration, and ignore everything else.

10-tc-pipelines-todo-backend-chckout-rules-docker

Now, if we only change the Dockerfile, only TodoImage should execute and re-use the results from the previous TodoApp run without executing it.
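These checkout rules might be sketched in the Kotlin DSL as follows. TodoRepo is a hypothetical name for the VCS root of the shared GitHub repository, and the v2019_2 DSL API is assumed:

```kotlin
import jetbrains.buildServer.configs.kotlin.v2019_2.*

object TodoApp : BuildType({
    name = "TodoApp"
    vcs {
        // Check out everything except the docker/ folder.
        root(TodoRepo, "-:docker")
    }
})

object TodoImage : BuildType({
    name = "TodoImage"
    vcs {
        // Check out only the docker/ folder and ignore everything else.
        root(TodoRepo, "+:docker")
    }
})
```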

Summary

The concept of snapshot dependencies is essential for configuring the build pipelines, aka build chains in TeamCity. The feature allows implementing incremental builds by re-using the results of the previous executions, thus saving a lot of time. In this article, we have also learned about triggering and VCS root checkout rules – both very useful for working with build chains.

Next, we will look at what else is possible with the features we have described – running the builds in parallel and orchestrating the deployment process.


Build Chains: TeamCity’s Blend of Pipelines. Part 2 – Running Builds in Parallel

In the previous blog post, we learned about snapshot dependencies and how they can be applied to create build chains in TeamCity. In this blog post, we describe how snapshot dependencies enable parallel builds.

More snapshot dependencies

Previously, we started creating the build chain for the demo application. We created two build configurations: one builds the application and the other builds the Docker image. What about tests?

Suppose there are a lot of tests and running those tests sequentially takes a lot of time. It would be nice to execute the groups of tests in parallel. We can create two build configurations, Test1 and Test2, which will execute the different groups of the tests. Both Test1 and Test2 have a snapshot dependency on TodoImage build configuration.

12-tc-pipelines-todo-tests

Since Test1 and Test2 do not depend on each other, TeamCity can execute these build configurations in parallel if there are build agents available.

Now the new aspect of snapshot dependencies is revealed. Using the snapshot dependencies, we created a build chain which is actually a DAG – Directed Acyclic Graph. The independent branches of the graph can be processed in parallel given that there are enough processing resources, i.e. build agents.

This leads us to the next question: how can we trigger such a build chain? Previously, we added the trigger to TodoImage as it was the last build configuration in the chain. Now there are two build configurations. Should we add two triggers – one to Test1 and the other to Test2? While it is certainly an option, there is a more idiomatic way to do that – with the Composite type of build configuration.

Composite build configuration

The purpose of the composite build configuration is to aggregate results from several other builds combined by snapshot dependencies, and present them in a single place. One very interesting property of this kind of build configuration is that it does not occupy a build agent during the execution.

In our example, we can create a composite build configuration that depends on Test1 and Test2 via a snapshot dependency. The new configuration will be the last one in the build chain. Now it’s possible to add a VCS trigger to that build configuration. As a result, there will be just one VCS trigger for the whole build chain.
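As a sketch, such a composite configuration might look like this in the Kotlin DSL (Test1 and Test2 are assumed to be declared elsewhere, and the v2019_2 DSL API is assumed):

```kotlin
import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.triggers.vcs

object TestReport : BuildType({
    name = "Test Report"
    // A composite build aggregates the results of its snapshot
    // dependencies and does not occupy a build agent while running.
    type = BuildTypeSettings.Type.COMPOSITE

    dependencies {
        snapshot(Test1) {}
        snapshot(Test2) {}
    }

    triggers {
        vcs {
            // The single trigger for the whole build chain.
            watchChangesInDependencies = true
        }
    }
})
```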

14-tc-pipelines-concurrent-tests-annotated

Notice in the screenshot above that the two build configurations, Test1 and Test2, are running in parallel. The TestReport build configuration is also running, but it doesn’t occupy a build agent and will be marked as finished as soon as all the builds complete.

The nice feature of the composite build configuration is that it also aggregates the test results from the dependencies. In our example, if we navigate to the Tests tab of the TestReport build configuration, we will observe the list of tests that were executed in all the previous build configurations that belong to the same build chain.

14-tc-pipelines-concurrent-tests-composite

Summary

At first sight, snapshot dependency looks like a simple concept. However, it enables a lot of features in TeamCity. In the previous blog post, we saw how we can re-use the results of the builds to save build time and resources. In this blog post, we have learned that snapshot dependencies also enable better use of resources when builds are executed in parallel. Next, we will learn how to orchestrate the deployment process with the TeamCity build chains.

If you are interested in trying out the demo project on your local TeamCity instance, we have uploaded the configuration to a GitHub repository. The .teamcity/ directory contains Kotlin DSL settings for the build chain that we have described in this blog post. To import these settings, create a project from repository URL. TeamCity will detect .teamcity/ directory in the repository and will suggest creating the project using these settings.

TeamCity UI: how do we test it

teamcity-frontend-preview

Developing a working piece of software is difficult. Just like building an airplane, it requires talented people, working components, and a testing framework. No plane leaves the hangar before everything is ready, checked and double-checked.

At JetBrains, we adopt the same philosophy for building our software. Rigorous testing helps us discover bugs and problems before the final product takes off. Just like building a plane, software development is a process that consists of multiple stages. Although the authors of this post are not aerospace engineers, we will use simplified aircraft analogies. There are several reasons for this: aircraft are beautiful, they are pure engineering, and they show that the problems we raise here are not exclusive to software engineering.

The bigger your product, the more steps and modules there are. To make sure your software is ready to lift off, every module needs to be tested and correctly integrated with everything else. CI/CD services, if set up correctly, help automate this process. Most importantly, they remove the human factor, where a single careless action can lead to total disaster.

Contrary to popular belief, testing is very important in front-end development. To continue the analogy, your plane is not only required to fly – it has to be comfortable inside! Moreover, its exterior affects how the airplane flies (aerodynamics). Getting back to the front end, this means that you have to test the usability as well as the functionality. This makes front-end testing a must. In this article, we will provide an overview of the UI testing used in TeamCity. If you have any questions about the technical details – don’t hesitate to ask us.


New in 2020.1: Conditional build steps

In TeamCity 2020.1, we have introduced a highly demanded feature – conditional build steps. With new execution conditions, you can control whether or not a given build step is executed in every build run, depending on the current environment and parameters.

In this video demonstration we:

  • explain how to add an execution condition to a step;
  • show how to elevate your building experience by creating custom conditions based on build parameters.

Here is a quick recap of the video:

Adding execution conditions

To add execution conditions to a build step:

  1. Open the step’s advanced settings.
  2. Opposite the Execute step field, click Add condition.
  3. Select any of the example conditions (e.g. run the step only in the default branch) or add a custom one (e.g. run the step only on the specific agent OS).
  4. Add as many conditions as needed. You can change and delete them anytime.
  5. Save the build step settings.

In every build run, this conditional step will only be executed if all its execution conditions are satisfied.
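In Kotlin DSL projects, the same result might be sketched with a conditions block on a build step. The configuration, step name, and script are illustrative, and the v2019_2 DSL API is assumed:

```kotlin
import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.script

object Build : BuildType({
    name = "Build"

    steps {
        script {
            name = "Notify"
            // One of the example conditions from the UI: execute the step
            // only when building the default branch.
            conditions {
                equals("teamcity.build.branch.is_default", "true")
            }
            scriptContent = "./notify.sh"
        }
    }
})
```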

Adding parameter-based conditions

By combining execution conditions with other classic features of TeamCity, you can significantly improve your building experience. One great example is creating a condition that is based on a build parameter.

Let’s consider the following use case:

Your build configuration can be deployed to any of the three environments (QA, Staging, and Production). By default, it is deployed to QA, but you can run a custom build and select a different environment. One of the build steps contains a script that must be executed only when deployed to Production.

With conditional steps, it is easy to arrange:

  1. Add a build parameter with the following settings:
    • Name: Environment
    • Spec | Display: Prompt
    • Spec | Type: Select
    • Items: a newline-delimited list of environment names (QA, Staging, Production)
  2. In the production-only build step, click Add condition and select Other condition.
  3. Enter Environment as the parameter name. TeamCity will suggest all matching results. Choose the “equal” condition. Enter Production as the expected value of the Environment parameter.
  4. Save the build step settings.
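The steps above might be sketched in the Kotlin DSL roughly as follows. The configuration name and the deploy script are hypothetical, and the v2019_2 DSL API is assumed:

```kotlin
import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.script

object Deploy : BuildType({
    name = "Deploy"

    params {
        // A prompted select parameter; QA is the default value.
        select("Environment", "QA", display = ParameterDisplay.PROMPT,
               options = listOf("QA", "Staging", "Production"))
    }

    steps {
        script {
            name = "Production-only script"
            // Execute this step only when deploying to Production.
            conditions {
                equals("Environment", "Production")
            }
            scriptContent = "./deploy-production.sh"
        }
    }
})
```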

Now, you can click Deploy to run a custom build. Since you selected the Prompt display type for the parameter, TeamCity will ask you about the target environment for this build.

If you select anything other than Production, the production-only build step will be skipped in this custom build run:

That’s it for the tutorial!

Refer to our documentation for more information and leave your feedback about the feature in the comments.

Happy building!

Creating TeamCity project templates with Kotlin DSL context parameters

In version 2019.2, TeamCity introduced the ability to use context parameters in Kotlin DSL project configurations. These parameters can vary from project to project in TeamCity while the common project declaration stays the same. It means that you can create one DSL project template and reuse it in as many projects on the TeamCity server as necessary. This is far more convenient than storing similar DSL specifications in multiple branches or creating a complex project hierarchy in TeamCity.

In this video, we’ll explore an example project and show you how to utilize the power of context parameters in your DSL projects. You can find a quick text recap of the tutorial below.

Let’s consider a TeamCity project that builds a plugin for integration with the GitHub issue tracker. It comprises three build configurations connected to a build chain.

Build chain

The "Test It" build configuration contains a VCS trigger that detects changes in the source repository.

The project is also synchronized with the "DSLDemo" repository, where its settings are stored in the settings.kts file. This file describes the project configuration in the portable Kotlin DSL format.

Versioned settings

The source code of the plugin is hosted on GitHub. Thus, the VCS root specification should point to the respective repository:

object PluginRepo : GitVcsRoot({
    name = "GitHub Issues Plugin Repo"
    url = "git@github.com:JetBrain/teamcity-github-issues.git"
    branchSpec = "+:refs/heads/*"
    authMethod = uploadedKey {
        uploadedKey = "id_rsa"
    }
})

This is how you would specify these settings explicitly, without context parameters. But imagine you want to reuse this code in a similar project that, for example, builds a plugin that implements the Bitbucket issue tracker integration. You’d need to copy this whole DSL project and replace the name and url values. To make changes in the shared code that the projects have in common, you would have to change it in both projects, which is surely not the best approach.

Using context parameters is a good solution to this problem. You can add them under Versioned Settings | Context Parameters if versioned settings are enabled for your project.

Context Parameters tab

Remember to add them in the DSL as well. A context parameter is declared as follows:

${DslContext.getParameter(<name>)}

And this is how the DSL code above becomes a single source for multiple potential projects:

object PluginRepo : GitVcsRoot({
    name = "${DslContext.getParameter("repoName")} Repo"
    url = DslContext.getParameter("fetchUrl")
    branchSpec = "+:refs/heads/*"
    authMethod = uploadedKey {
        uploadedKey = "id_rsa"
    }
})

Moreover, you can use context parameters to compose custom logic. For example, you may need the "Deploy It" build configuration only in some of your TeamCity projects – not in all of them.
This is how the project DSL would look normally:

project {

   vcsRoot(PluginRepo)

   buildType(BuildIt)
   buildType(TestIt)
   buildType(DeployIt)

}

Now, you can perform an extra check and add the DeployIt build type only if the deploy context parameter is set to true in a given TeamCity project:

project {

   vcsRoot(PluginRepo)

   buildType(BuildIt)
   buildType(TestIt)
   if ("true".equals(DslContext.getParameter("deploy")))
       buildType(DeployIt)

}

As all required context parameters are now in the project DSL, let’s use this specification to create a new TeamCity project for the Bitbucket integration.

After you add the project, go to its Versioned Settings and enable the synchronization of the versioned settings with the "DSLDemo" repository. After the settings have been synchronized, specify the custom values for the context parameters in the Versioned Settings | Context Parameters tab. Until you do, the project configuration will be inactive in TeamCity.

Bitbucket context parameters

Since we set the deploy parameter to false, the "Deploy It" build configuration will be absent from this project.

Note that you can only modify common settings for projects that contain context parameters in their source DSL code. Editing them in the TeamCity UI is prohibited, as it contradicts the single-source approach.

Pro tip: When you generate settings from your DSL code locally, placeholder values will be displayed instead of the context parameters. If you want to use particular values of these parameters for local tests, you can temporarily add them in the pom.xml file under the Maven plugin settings:

<plugin>
    <groupId>org.jetbrains.teamcity</groupId>
    <artifactId>teamcity-configs-maven-plugin</artifactId>
    <version>${teamcity.dsl.version}</version>
    <configuration>
        <format>kotlin</format>
        <dstDir>target/generated-configs</dstDir>
        <contextParameters>
            <repoName>My repo</repoName>
            <fetchUrl>git@...</fetchUrl>
            <deploy>true</deploy>
        </contextParameters>
    </configuration>
</plugin>

We hope context parameters will make your build pipeline even more efficient and powerful.

Happy building!
