Published: August 2025

Running custom tools

As I'm using a custom tool to convert markdown content into a website, the Azure pipeline has to run tools that are not part of the out-of-the-box task catalogue. How to do that, and should we use Ubuntu or Windows for the pipeline?

When running custom tools in the pipeline, there are a few things to consider:

  • How to make the tools available to the pipeline?
  • How to run the tools?
  • Which virtual machine image to choose for the pipeline?

Deployment via git

There are two main approaches to getting the command-line tools into the pipeline. In both cases, the pipeline first checks out the git repository that contains the content for the website. The next step is either:

  • Use one of the package managers to install the tools as packages. There are standard tasks for managers like NuGet, npm and Maven. In Azure DevOps it is possible to create a private Azure Artifacts feed (free at small scale) where you can publish packaged tools to (a sketch follows this list).
  • Don't do anything, just add the tools to the git repository.
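As promised in the list above, a minimal sketch of the first approach, assuming the tool is packaged as a .NET tool and published to a private Azure Artifacts feed (the organisation, feed and package names are placeholders):

steps:
  - checkout: self

  # Authenticate against the private Azure Artifacts feed.
  - task: NuGetAuthenticate@1

  # Install the packaged tool into a temporary tool folder
  # (organisation, feed and package names are placeholders).
  - task: DotNetCoreCLI@2
    displayName: Install site tool from private feed
    inputs:
      command: custom
      custom: tool
      arguments: >-
        install MyCompany.SiteTool
        --tool-path $(Agent.TempDirectory)/tools
        --add-source https://pkgs.dev.azure.com/MyOrg/_packaging/MyFeed/nuget/v3/index.json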

I used to favour the first approach, as its architecture is cleaner: separation of concerns, tools have a different life cycle than the website content, and tools can be used for multiple websites/content repositories. That comes at a cost: more moving parts, and most package managers are not made for distributing tools outside a (source code) project context. The downside of adding tools to git has always been the size of the repository and the time/bandwidth/disk space required to clone it. But that is no longer an issue: git repositories in Azure DevOps can be 250GB in size, download speeds at home are at least 100Mb/s, and disk sizes are measured in TB. If size becomes an issue, it is always possible to remove some of the history of the git repository.
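If clone time in the pipeline ever does become a concern, a shallow checkout keeps the fetch small without rewriting any history; a minimal sketch:

steps:
  # Shallow checkout: fetch only the latest commit instead of the full
  # history, keeping checkout fast even when tools are committed.
  - checkout: self
    fetchDepth: 1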

My current approach is to add the tools to the git repository, on a separate branch, and then merge the tools into the branch with the website content:

gitGraph
    commit id: "Start"
    branch Tools
    checkout Tools
    commit id: "T1"
    branch Content
    checkout Content
    commit id: "C1"
    commit id: "C2"
    commit id: "C3"
    checkout Tools
    commit id: "T2"
    checkout Content
    merge Tools
    commit id: "C4"

This approach has similar advantages to using packages: tool development is separated from content, and it is easy to create a new branch for test purposes with the current content and a new set of tools. A disadvantage is that the same tools have to be deployed to all repositories; an advantage is that the correct version of the tools is paired with the content the tools are known to work for.

There is a second advantage: the correct version of the tools is automatically available in every clone of the repository. As static websites can be tested on a desktop with a local webserver, this is a great advantage.

It is possible to store the tools and content in separate git repositories and use git wizardry: e.g., submodules, or cloning a second repository in the pipeline (although that is using git like a package manager). This adds complexity and more ways to break the pipelines, while it only saves some storage and simplifies deploying a new version of the tools to a handful of repositories. That is not worth the effort. If deploying the tools to other repositories is important, git can be used in the pipeline to distribute a new tools version to them.

Custom tools and log files

Let's assume that the custom tools are used in a pipeline based on vmImage: windows-latest. Then running a tool in a pipeline is straightforward using a script step:

steps:
  ...

  - script: $(System.DefaultWorkingDirectory)/tool-name.exe -argument1 -argument2 ...
    displayName: ... human friendly name ...

  ...

In the tool, use System.Environment.Exit(integer) with a non-zero exit code to signal an error that should abort the pipeline; the script step fails on any non-zero exit code.
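If a particular tool should only produce a warning rather than abort the run, that default can be relaxed; a small sketch (the tool name and argument are placeholders):

  # continueOnError turns a failing step into a warning in the run
  # overview instead of aborting the pipeline.
  - script: $(System.DefaultWorkingDirectory)/tool-name.exe -check
    displayName: Run checks (warn only)
    continueOnError: true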

If you want output from the tools, you can write to standard output. Another option is to create a log file and publish it as a pipeline artefact:

  - publish: $(System.DefaultWorkingDirectory)/..directory with log files...
    artifact: ..name of the artefacts...
    condition: always()
    displayName: Publish logs
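Anything the tool writes to standard output ends up in the step log. Lines in the ##vso format are interpreted by the agent as logging commands, which can surface problems directly in the pipeline results; a small sketch (the message is made up):

  # A ##vso[task.logissue] line written to standard output is shown
  # as an error in the pipeline results.
  - script: echo "##vso[task.logissue type=error]Broken link found in about.md"
    displayName: Example logging command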

The artefact name is used to list the artefacts in the pipeline result. There are all kinds of settings related to the retention of artefacts; the defaults are good enough to be able to view the results of the last few runs.

Windows or Ubuntu?

With the current .NET versions it is not very hard to create custom tools that run both on Windows and on Linux. Azure is promoting the use of Ubuntu in pipelines:

pool:
  vmImage: ubuntu-latest

rather than Windows:

pool:
  vmImage: windows-latest

So does it make a difference, and which one to choose?

If you've started the tool from a (cross-platform .NET) console app template in Visual Studio and use publish-to-folder to create a release version of the tool that is written to (or copied to) the git repository clone, make sure you select the portable target runtime. This produces both a .dll and an .exe file for the tool. In a pipeline for Windows the .exe version of the tool can be run directly, e.g.:

  - script: $(System.DefaultWorkingDirectory)/Tools/MyTool/Tool.exe -argument1 -argument2

In a pipeline for Ubuntu the tool should be run via the dotnet command:

  - script: dotnet $(System.DefaultWorkingDirectory)/Tools/MyTool/Tool.dll -argument1 -argument2

If your tool is truly cross-platform, either of these will do.

But… there are some differences in the underlying platform that are not hidden by the .NET libraries. The most obvious is that references by file name from one web page to another are case sensitive on Ubuntu and case insensitive on Windows. So when the tool runs on Windows it must validate the file-name casing, otherwise website creation succeeds on the desktop but fails in an Ubuntu pipeline. A second difference is the way time zones are named, which is standardized on Unix and quite custom on Windows.

But wait… To be able to run the pipeline on ubuntu-latest, the pipeline has to be modified, and there is a chance that the website's content is valid when tested on the desktop (before committing new content) but fails in the build pipeline. And the advantage of using Ubuntu in the pipeline is… nothing! All predefined tasks offered by Azure for use in the pipeline seem to run on both Ubuntu and Windows.

So don't bother with Ubuntu. Look at it again if vmImage: windows-latest can no longer be used. Right? No! It is not that easy if the pipeline has to publish to an Azure Static Web App: that is one of the few tasks that are not available on Windows.
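For reference, a minimal sketch of such a publish step, which requires an Ubuntu agent; the app_location value and the name of the secret token variable are assumptions:

pool:
  vmImage: ubuntu-latest

steps:
  - checkout: self

  # Publishes the pre-built site from the repository clone; the
  # deployment token comes from a secret pipeline variable
  # (placeholder name).
  - task: AzureStaticWebApp@0
    inputs:
      app_location: _site      # folder with the generated website (assumed)
      skip_app_build: true     # site is built by the custom tools, not by the task
      azure_static_web_apps_api_token: $(deployment_token)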