Published: August 2025
This article is part of a series on my projects that use static websites.

Typical architecture for publish-only websites

The various free offerings for deploying and hosting (static) websites can be taken as a starting point for the architecture of an application. For all my personal projects, the architecture follows the same pattern.

Application architecture

The architecture is best explained from the final component back to the source (right to left in the diagram).

Browser

The architecture is about websites, and that is already the first design choice. Why a website and not an app? The technical reason is that the type of user interface would be quite similar, as I would create an app using MAUI Blazor, which internally uses web pages. But a website does not require the effort of getting an app into an app store and installing it. Unless the app or user interface requires direct access to the device it is running on, a website and an app offer the same user experience. An added bonus is that a website is available on every device with internet access, such as in an internet cafe or at an airport: as I am one of the website's users, I can always access the information even if all my devices are dead.

Currently only one of my projects needs more interactivity than a standard browser can offer. As part of the interactivity must be implemented client-side, it was an easy choice to use a Blazor WebAssembly app as the user interface, which runs in the browser. The second reason to choose WebAssembly (or at least a client-side app) is that no web server is required, so the architecture can keep using a static website to distribute the app.
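For illustration, this is essentially the entry point of a Blazor WebAssembly app as generated by the standard .NET template (App is the template-generated root component). Everything below runs in the browser; the host only has to serve the compiled output as static files.

```csharp
using Microsoft.AspNetCore.Components.Web;
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;

// Standard Blazor WebAssembly startup: the whole app is compiled to
// WebAssembly and executed in the browser, so no server-side logic is needed.
var builder = WebAssemblyHostBuilder.CreateDefault(args);
builder.RootComponents.Add<App>("#app");               // template-generated root component
builder.RootComponents.Add<HeadOutlet>("head::after");

await builder.Build().RunAsync();
```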

Website

The end users visit a website, which is currently hosted as an Azure Static Web App. Many of my projects actually have two websites: a public one with all the information I want to share with the world, and a private one with additional information that is relevant only to me. Ideally, each website is an environment in the Static Web App, with me as the only user authorised to visit the private website. In times of major software development efforts, a third environment may be present to test the new software; this environment also has me as its only authorised user. Because of a current limitation, the private website is published to a separate Static Web App.

The websites are typically static websites that consist of web pages, images and other resources created before the website is deployed to the hosting location. That type of hosting is completely free. It is also possible to add an API to the website that executes code on request to return data; that is free for low-volume websites (up to 1 million calls per month in 2025). But you would need a client-side application to integrate the data into the web pages, and in that case you could also implement the API functionality in the client-side app itself. A reason to use an API may be that it searches through a large amount of data to return only a small selection, but my projects do not require APIs of that sort.
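To make the trade-off concrete, here is a minimal sketch of what such an API could look like, assuming the Azure Functions model that Static Web Apps uses for managed APIs. The DataApi class and the data route are illustrative, not part of any of my projects.

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class DataApi
{
    // Static Web Apps exposes managed functions under /api/, so this
    // illustrative endpoint would be reachable as /api/data.
    [Function("data")]
    public async Task<HttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        // The point of an API like this would be to search a lot of data
        // server-side and return only a small selection to the browser.
        var response = req.CreateResponse(HttpStatusCode.OK);
        await response.WriteStringAsync("{\"items\": []}");
        return response;
    }
}
```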

Pipeline

The deployment of content to the website is handled by a CI/CD pipeline in Azure Pipelines, which is part of Azure DevOps. The minimum the pipeline does is push the website's files to the Azure Static Web App. For most of my websites the content is written in Markdown, so the pipeline also runs a tool that converts the Markdown to HTML. Some websites integrate external data into the website, and a tool to fetch that data is part of the pipeline as well.
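As a sketch of what such a conversion tool can look like: a few lines in a .NET console app (implicit usings assumed), with the Markdig package doing the bulk of the work. The directory names are illustrative, and a real tool would also wrap the output in an HTML page template.

```csharp
using Markdig;

// Convert every Markdown file in the source tree to an HTML file in the
// output tree, preserving the relative directory structure.
var sourceDir = args.Length > 0 ? args[0] : "content";
var outputDir = args.Length > 1 ? args[1] : "site";

foreach (var file in Directory.EnumerateFiles(sourceDir, "*.md", SearchOption.AllDirectories))
{
    var html = Markdown.ToHtml(File.ReadAllText(file));
    var target = Path.Combine(outputDir,
        Path.ChangeExtension(Path.GetRelativePath(sourceDir, file), ".html"));
    Directory.CreateDirectory(Path.GetDirectoryName(target)!);
    File.WriteAllText(target, html);
}
```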

In general: everything that can be automated and does not require user intervention becomes part of the pipeline. It is even possible to solicit user intervention, e.g., through a manual approval step. The main reason for this choice is that the tools then do not need to be available in every environment where the website's content can be modified. The pipeline can run custom .NET tools in addition to many third-party tools, so there is no limit on what can be done. The only practical restriction is (in 2025) 60 minutes per day of execution time for all pipelines, which limits the number of times content can be updated (a single run typically takes 1 to 1.5 minutes). The pipeline can also be scheduled to run at regular intervals.

From an architectural point of view, the best choice is to split the functionality required in the pipeline across multiple small tools, one for each task, similar to microservices for web services. For anything that cannot be done by third-party tools, a custom .NET tool is created. The .NET tools can share content access code, but because the scope of each tool is limited, they are easier to maintain than a single application that does it all.
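The shared content access code typically lives in a small class library that every tool references. A hypothetical example of such a shared helper:

```csharp
using System.Collections.Generic;
using System.IO;

namespace Content.Shared;

// Hypothetical shared content-access library, referenced by each of the
// small tools (pipeline tools and content creation tools alike).
public static class ContentRepository
{
    // All Markdown content files in a clone of the content branch.
    public static IEnumerable<string> MarkdownFiles(string repoRoot) =>
        Directory.EnumerateFiles(repoRoot, "*.md", SearchOption.AllDirectories);
}
```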

Git repository

The content files and (custom) tools for the website and the pipelines are collected in a specific branch of a git repository. I'm currently using the git repositories of Azure DevOps for this, as I have free access to those and can store anything I need (maximum size of 250 GB in 2025). There are other ways to get third-party tools into the pipeline (e.g., downloading them directly or using package managers), but for now putting everything in the git repository is the easiest.

Some of the branches in the git repository are relevant for the architecture, as the branch structure can help with content quality control and content flow through the application. As an example, the repository for this website has two branches, one for the public and one for the private website. If changes are committed to the public website branch, that branch is merged into the private website one. This prevents changes in the private content from leading to unintended loss of public content, e.g., because public pages are no longer reachable via public links from the home page. In another project, there is one branch for manually maintained content and another for automatically downloaded content, and git itself is used as a third-party tool to merge the content flows and detect merge conflicts. In a pipeline, of course; see the sketch below.
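A sketch of that merge step, here written as a small .NET tool that invokes git. The branch names and merge direction are illustrative; in practice this could just as well be a plain script step in the pipeline.

```csharp
using System;
using System.Diagnostics;

// Merge one content branch into the other and fail the pipeline run
// when git reports merge conflicts (git exits nonzero on conflict).
static int RunGit(string arguments)
{
    var git = Process.Start(new ProcessStartInfo("git", arguments))!;
    git.WaitForExit();
    return git.ExitCode;
}

if (RunGit("checkout downloaded-content") != 0 ||
    RunGit("merge manual-content") != 0)
{
    Console.Error.WriteLine("git merge failed; resolve the conflicts manually.");
    Environment.Exit(1);
}
```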

Content creation

The (manual) content creation can be done on every device that can download files from and commit changes to a git repository. Content is created with best-of-breed third-party tools where possible, and with custom .NET tools where needed. Again, it is advantageous to create small custom tools rather than an all-in-one editor, as they are easier to develop and maintain. And the tools can share content access code with other tools, including the tools designed for the pipelines.

For some projects content creation is always done on a desktop. The pipeline tools can then be run on the desktop against the files in the clone of the git repository. That makes testing a local affair: all the tools that produce the website as it will be published can be run locally, and a desktop web server can be used as a stand-in for the Static Web App. That is good enough for content quality control. Only when it is really important to test the whole architecture is a test environment in the Static Web App required, along with a git branch and pipelines to feed that environment.
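The desktop web server can be anything that serves static files; a few lines of ASP.NET Core are enough, assuming the generated site is placed in wwwroot (a global tool like dotnet-serve is an alternative).

```csharp
// Minimal local stand-in for the Static Web App: serve the generated site
// from wwwroot (requires the Microsoft.NET.Sdk.Web project SDK).
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.UseDefaultFiles();  // map directory requests to index.html
app.UseStaticFiles();   // serve the pages, images and other resources

app.Run("http://localhost:5000");
```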