Published: August 2025
This article is part of a series on: My projects that use static websites.

The case for static websites

Part of the investigation into the options to modernize the Waar is Frank? website was to keep an open mind about the architecture of the website. The original was a website for editing and displaying information that stored its content in a database. But what if we start from the best (free) offerings for web hosting and see how to make that work? Although a personal website with a database can probably be hosted for free, there are still quite a lot of moving parts to consider and software to write. A bit to my surprise, a solution based on a static website turned out to be the easiest to work with. That is mainly because of a few major technological advances, in my opinion:

  • Small-scale hosting is a commodity and is almost free.
  • Small-scale hosting is easy if it is publish-only.
  • Browsers take care of a lot of device-specific headaches.
  • Remaining custom software development can be done in one language (.NET everywhere).

Let's explore that further.

Small-scale hosting is a commodity and is almost free

It is now (almost) free to host small-scale websites. Several providers offer free services for personal/hobby projects, or for commercial projects that stay within certain traffic, storage and computational limits. The providers most likely have a commercial interest in free offerings - get potential customers accustomed to the services and start earning when those customers use them for larger projects. That type of offering has existed for a long time, but not for web hosting.

The big enabler for free web hosting offerings is that the providers of hosting platforms have managed to automate everything, so the cost per interaction is very low. They have made it very easy for their customers to deploy the software and content to be hosted, and they offer self-service portals for administrative tasks. For the providers this is a way to protect their infrastructure - a customer's mistakes do not harm the web hosting platform. And it prevents support calls from customers, which could obliterate any profit made on a cheap subscription.

In 2025 the result is that for a hobby project you can get commercial-grade hosting for free. Hobby/personal projects can use an Azure Static Web App for free, even with a custom domain (although the domain itself is not free). Even hosting a website with a web server component can be free: the free plan of Azure Web Apps is probably good enough. Other providers like Vercel also offer free plans for this type of hosting.

Small-scale hosting is easy if it is publish-only

In most cases the automated deployment model offered by the providers is to prepare a set of files and publish those to the hosting platform. The publish action replaces all existing files. There is an obvious reason for that: merging existing and new content in a generic way is notoriously difficult. In practice content merging should be done before the publishing step, by custom software, as that software can use business knowledge to resolve merge conflicts.
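
To make that concrete, the merge-before-publish step can be a small .NET tool. A minimal sketch, assuming hypothetical folder names ("generated" for the output of the content tooling, "site/content" for the working copy that gets published) and an assumed "newest file wins" business rule - not the actual Waar is Frank? tooling:

    // Sketch of a pre-publish merge step; folder names and the merge rule
    // are illustrative assumptions.
    using System.IO;

    var source = "generated";    // output of the content tooling
    var target = "site/content"; // working copy that is committed and published

    foreach (var file in Directory.EnumerateFiles(source, "*", SearchOption.AllDirectories))
    {
        var relative = Path.GetRelativePath(source, file);
        var destination = Path.Combine(target, relative);
        Directory.CreateDirectory(Path.GetDirectoryName(destination)!);

        // Business rule (example): a newer generated file replaces the
        // published one; anything that was not regenerated stays untouched.
        if (!File.Exists(destination) ||
            File.GetLastWriteTimeUtc(file) > File.GetLastWriteTimeUtc(destination))
        {
            File.Copy(file, destination, overwrite: true);
        }
    }

After this step the complete result, old plus new content together, is what gets published; the hosting platform never has to merge anything.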

The files also have to be transferred to the cloud. Traditional issues like the completeness and consistency of a set of files are now handled by using git: you can verify that the software works and the content is complete before a commit, and git ensures that the resulting file set will always be complete. The transfer of files to the cloud is handled by cloud-hosted git repositories as provided by GitHub, GitLab or Azure DevOps, some of which have free or cheap offerings for small-scale personal/hobby projects.
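
The "verify before a commit" part can be automated as well. A minimal sketch of a completeness check, with an assumed list of required files; run it as a git pre-commit hook or as a CI step so an incomplete file set never reaches the repository:

    // Sketch of a completeness check; the required file names are assumptions.
    using System;
    using System.IO;
    using System.Linq;

    string[] required = { "site/index.html", "site/content/items.json" };

    var missing = required.Where(path => !File.Exists(path)).ToList();
    if (missing.Count > 0)
    {
        // A non-zero exit code makes a pre-commit hook or CI step fail.
        Console.Error.WriteLine("Incomplete file set, missing: " + string.Join(", ", missing));
        Environment.Exit(1);
    }
    Console.WriteLine("File set complete.");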

The various providers also offer a lot of automation to pick up files that appear in the cloud-hosted git repository and transfer them to the web hosting platform. Sometimes it is the web hosting provider that offers this: Vercel uses webhooks to monitor the git repository. It can also be part of the CI/CD infrastructure, as a third component: Azure Pipelines can access a variety of git repositories and can publish to multiple hosting platforms.

If your project has an architecture where all software and content changes are published via git, then the deployment of the files can be done using the automation of the git, CI/CD and web hosting providers (and for free). It is not very hard to configure that automation. Deployment becomes almost fire-and-forget: commit the files and they will get deployed correctly.

That is quite different from an architecture where content lives in the web-hosted application. Then you do have to worry about taking the site (partially) offline and about content merges, all as part of the (automated) deployment pipeline. The various providers can offer some help, but as there are no universal content merging solutions, you have to create part of the software yourself.

Browsers take care of a lot of device-specific headaches

Once upon a time creating a website was quite challenging. There were browser wars, and as a consequence you had to take all kinds of dialects into account to make things work on all platforms. A more complex or interactive layout required special tricks and code to get it right. Website software development was quite close to “bare metal”: your code had to instruct the browser what to do. Some of that work had to be done by the web server, as it was too hard to achieve in the browser.

Those days are gone. The modern web standards offer a far more declarative approach: you specify how the layout and visual elements of a website should look, and the browsers apply those rules to the actual form factor and find the device-specific resources that match your intentions. That makes it a lot easier to design a website for a host of devices. The standards also enable some client-side visual effects.

The evolution of the web standards makes it easier to keep a website static. As more and more layout and presentation functions are delegated to the browser, it becomes less important how the actual content is encoded in the web page. Layout and presentation functionality used to be part of the web page construction software that also encoded the content; now it is coded in files separate from the content. That reduces the need for a web server component.

One remaining language / .NET everywhere

It has always been possible to create a static website and have all remaining user interface interactions run in the web browser. The problem was that you had to invest a lot in programming, frameworks and tooling to pull it off:

  • JavaScript for user interface interactions, with some framework, as the JavaScript interpreters offer very limited functionality. Debugging, testing and maintaining a website with a lot of pages is quite challenging.
  • A general-purpose language plus a web application framework for the services that provide the content. These also have to match the hosting platform.
  • Increased development effort. As the development tooling for JavaScript and the general-purpose language were quite different, some data-related code had to be created twice, once in each environment. Standard/open source libraries for the two platforms may solve the same (data) problem in different ways.
  • A lot of test/debug effort to keep the various disjoint parts of the software working correctly.

This is problematic for personal projects: I simply don't have enough time to do all the work. An approach where I can reuse code and skills has a lot of benefits. I do speak JavaScript, C#, Python, Java and a few other languages, but I would prefer doing all coding in one general-purpose language and a small set of frameworks. Recent technological developments made things easier for me:

  • WebAssembly allows for interactive user interfaces that run in the browser but are coded in languages other than JavaScript (see the sketch after this list).
  • Unification in the .NET world has led to a web framework (Blazor) with extensive tooling and libraries that can be used to create WebAssembly apps, traditional websites (with a web server) and, with MAUI, apps that run on desktops, tablets and phones.
  • .NET is available to create applications on many platforms, including microcontrollers.
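
As an illustration of what that looks like, here is a minimal Blazor component, adapted from the Counter component in the standard project template. In the static website setup this C# is compiled to WebAssembly and the click is handled entirely in the browser, without a web server:

    @* Counter.razor - minimal component, adapted from the Blazor template. *@
    <button @onclick="Increment">Clicked @count times</button>

    @code {
        private int count;

        // Runs client-side: in a Blazor WebAssembly app this C# executes
        // on the .NET runtime that was compiled to WebAssembly.
        private void Increment() => count++;
    }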

Although not all .NET features are available on all platforms, the common core is large enough to facilitate extensive code reuse. And there is first-class tooling to debug and test (most of) the software in the same way for the various platforms.
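
A hypothetical example of that reuse: a single shared library with one model type and one (de)serialization routine can be referenced by the command line content tools, the Blazor user interface and the tests, so the data problem is solved exactly once. The names below are illustrative, not from the actual project:

    // Hypothetical shared library: referenced by the CLI tooling, the
    // Blazor UI and the tests, so (de)serialization exists only once.
    using System;
    using System.Collections.Generic;
    using System.Text.Json;

    public record ContentItem(string Title, DateTime Published, string Body);

    public static class ContentSerializer
    {
        public static string Serialize(IReadOnlyList<ContentItem> items) =>
            JsonSerializer.Serialize(items);

        public static IReadOnlyList<ContentItem> Deserialize(string json) =>
            JsonSerializer.Deserialize<List<ContentItem>>(json) ?? new List<ContentItem>();
    }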

The technology is now good enough to pick the language/framework first and then adapt the architecture. In my case: use Blazor for all user interfaces. Use .NET command line tools for content manipulation and additional tooling. For websites: use Blazor as a WebAssembly app combined with a static website deployment model if possible. Use third-party tools or custom .NET tools for content creation and for the CI/CD pipelines.
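
For the Blazor-as-WebAssembly-on-a-static-website combination, the entry point of the app looks roughly like this (close to the standard Blazor WebAssembly project template; App is the root component from that template):

    // Program.cs of a Blazor WebAssembly app, close to the standard template.
    using System;
    using System.Net.Http;
    using Microsoft.AspNetCore.Components.Web;
    using Microsoft.AspNetCore.Components.WebAssembly.Hosting;
    using Microsoft.Extensions.DependencyInjection;

    var builder = WebAssemblyHostBuilder.CreateDefault(args);
    builder.RootComponents.Add<App>("#app");               // App.razor from the template
    builder.RootComponents.Add<HeadOutlet>("head::after");

    // Content (the static data files) is fetched over HTTP from the same
    // static website that serves the app itself.
    builder.Services.AddScoped(sp => new HttpClient
    {
        BaseAddress = new Uri(builder.HostEnvironment.BaseAddress)
    });

    await builder.Build().RunAsync();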