I’ve been looking forward quite a lot to this post because it marks the transition to my new website design. Alongside the visual update, there are also a couple of significant under-the-hood improvements that I find worth talking about: in particular, the switch to AstroJS as my static site generation framework and improvements to my dev tooling and workflow. I quite enjoy the changes, so if you’re looking to start a blog or any kind of content-driven website in 2025, this post may contain some pointers for you.
AstroJS
First and foremost, the entire website was ported from Jekyll to AstroJS. AstroJS, just like Jekyll, is a framework intended for content-driven static websites. In particular, it allows me to write posts in the comparatively lightweight Markdown format and export them as static web pages.
Compared to Jekyll, however, AstroJS feels like a substantially more modern take on static site generation, in particular by being a JavaScript (JS) framework and, therefore, being able to draw on a lot of great open-source web development projects written in JS. For example, AstroJS allows me to
- add custom components in Markdown files using JSX syntax via the MDX integration (for example, for info boxes, image lightboxes, or custom emojis; a sketch follows this list),
- use out-of-the-box server-side optimizations (e.g. image size reduction), and
- draw on quite a lot of other standard integrations (such as the Tailwind CSS and sitemap/SEO integrations) in addition to third-party JSX components.
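To illustrate the first point, here is a minimal sketch of what a post written in MDX might look like. The `InfoBox` component and its `kind` prop are hypothetical stand-ins for a custom component, not part of AstroJS itself:

```mdx
---
title: "An example post"
---
import InfoBox from "../../components/InfoBox.astro";

Regular Markdown prose goes here.

<InfoBox kind="note">
  JSX components can be mixed right into the Markdown content.
</InfoBox>
```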
The website theme is based on the astro-aria theme by ccbikai, with a couple of additions, in particular to fonts and header styling. I also added JSX components for
- highlighting images (a lightbox),
- a cookie consent pop-up, and
- KaTeX-based math rendering,
as well as support for Mermaid diagrams via rehype and remark plugins and a calculation of reading time.
The switch from Jekyll to AstroJS admittedly wasn’t quick and easy, but it was definitely a good move in terms of developer experience and future-proofing.
Check out the following links if you want to find out more about AstroJS.
Dev containers
I’m a big fan of dev containers for quickly creating and reproducing development environments. Prior to using a dev container, I set up my dev environment in a Lima virtual machine (VM). This is also a viable choice. It is, however, a comparatively heavyweight solution in terms of resource requirements (due to the higher degree of isolation provided by a full VM compared to a container), and the startup times are also significantly longer. Moreover, I frequently encountered problems with mounting my workspace’s filesystem into the Lima VM, which I worked around by cloning my project repositories inside the VM instead of mounting the existing repository on my host machine, thus increasing the required disk space even more.
Dev containers improve a lot on this. After the initial hurdle of creating the image with all the tools and configuration required for development, my workflow consists primarily of starting the dev container with the Dev Container CLI

```sh
devcontainer up --workspace-folder .
```
and connecting to it with SSH (e.g. using the VS Code Remote Development extension).
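Assuming the container’s SSH port is forwarded to the host, connecting might look something like this (the port and user name here are hypothetical and depend on the image):

```sh
ssh -p 2222 node@localhost
```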
Using the Dev Container CLI, all configuration properties for Docker, like
- filesystem mounts, or
- port forwards,
are specified within `devcontainer.json`. There is no need to memorize the occasionally unconventional syntax of Docker commands (looking at you, `--mount`), and it’s easy to share and version the dev container configuration. Tearing the dev environment down and freeing resources is as simple as running `docker stop <container-name>`. Due to the lightweight nature of containers, resource requirements are also comparatively low.
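For reference, a minimal `devcontainer.json` along those lines might look as follows. The image, volume name, and port are assumptions for illustration (4321 is Astro’s default dev server port), not my actual configuration:

```json
{
  "name": "website-dev",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:22",
  "forwardPorts": [4321],
  "mounts": [
    "source=website-node-modules,target=/workspaces/website/node_modules,type=volume"
  ],
  "postCreateCommand": "npm install"
}
```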
OpenSSH may seem like a bit of overkill since we can already attach a shell to a running Docker container using, for instance, `docker exec`. It does, however, enable us to connect to our dev container with any tool that has SSH remote development support but no dedicated dev container extension. The remote development mode of Zed (which is otherwise a brilliant and highly recommended alternative to VS Code), for example, currently only works if there is an SSH server running on the remote end.
Thankfully, building a dev container image with an OpenSSH server installation does not necessarily have to be challenging. (The link at the end of this section contains more details.)
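As a rough sketch, the relevant part of such an image could look like this. The base image and port are assumptions, and the omitted key setup is deliberate; a real image would still need authorized keys or another authentication mechanism:

```dockerfile
# Hypothetical base image; any Debian-based dev container image works similarly.
FROM mcr.microsoft.com/devcontainers/javascript-node:22

# Install the OpenSSH server (host keys are generated by the package install).
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssh-server \
    && rm -rf /var/lib/apt/lists/*

# Create the privilege separation directory and run sshd in the foreground
# on a non-privileged port.
CMD ["sh", "-c", "mkdir -p /run/sshd && exec /usr/sbin/sshd -D -p 2222"]
```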
If you want your development environment to be lightweight and easily reproducible, dev containers are, in my opinion, the way to go.
Nushell
As someone who has written and will likely continue to write a lot of automation scripts in some sort of shell language, I absolutely love the ambition of the Nushell project to create a truly modern shell (language). Unsurprisingly, I decided to write every bit of automation code used to run or generate the website in Nushell.
Nushell has more great features than can be covered here. But as a taste, take this command that I use in my automation scripts to detect whether a file is a Markdown file:
```nu
# Anchored so that only the exact extensions "md" and "mdx" match.
const pattern_markdown_file_suffix = "^mdx?$"

export def is_markdown_file [file_path: string]: nothing -> bool {
    ($file_path | path parse | get extension) =~ $pattern_markdown_file_suffix
}
```
Here’s a quick breakdown:
- Nushell is typed and, therefore, lets you detect type-level errors more easily (as you can see in the signature `[file_path: string]: nothing -> bool`, which also, interestingly, includes a type declaration for piped input: `nothing` in this case);
- Nushell, moreover, always works on structured data and has true functions with return values (rather than treating everything as a string and only allowing integer return values like Bash);
- Nushell has a module system (`export def <the-function-definition>` exports a command definition from a module);
- Nushell, like other shells, has some convenience operators for e.g. chaining command invocations (the pipe operator `|`) or regex matching (`=~`); and lastly,
- Nushell has an extensive standard command library that provides some basic commands like `path parse` (for parsing a file or directory path into its components).
In a Nushell session or script, we would import the Markdown module containing the `is_markdown_file` command like this:

```nu
nu> use markdown.nu
```
This is, of course, way more explicit than in most shell languages. Nushell’s module system, moreover, lets you import specific members of a module, and module authors may hide implementation details using private members.
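For instance, importing a single member (so the command can be called without the module prefix) might look like this sketch:

```nu
nu> use markdown.nu is_markdown_file
nu> is_markdown_file "notes.md"
true
```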
Another nice aspect of Nushell is self-documenting commands. Every command defined with `def` automatically gets a help output generated directly from its signature, accessible via `help <the-command-name>` after importing the command. It includes usage examples as well as documentation of flags, arguments, and input/output types. The help screen for the `is_markdown_file` command shown above, for example, looks as follows:
```nu
nu> help markdown is_markdown_file
Usage:
  > is_markdown_file <file_path>

Flags:
  -h, --help: Display the help message for this command

Parameters:
  file_path <string>

Input/output types:
╭───┬─────────┬────────╮
│ # │ input   │ output │
├───┼─────────┼────────┤
│ 0 │ nothing │ bool   │
╰───┴─────────┴────────╯
```
And finally, applying the command to an argument looks like this:

```nu
nu> markdown is_markdown_file "the-markdown-file.md"
true
```
There are also other interesting features of the Nushell language (illustrated in the sketch after this list), like
- exceptions and `try`/`catch` control structures for exception handling,
- functional programming support via first-class functions and closures, as well as
- support for immutable-first programming via parse-time constants and immutable variables.
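The following illustrative snippet (with hypothetical file names, not taken from my actual scripts) shows these three features together:

```nu
# Parse-time constant (immutable, resolved before the script runs).
const posts_dir = "posts"

# Immutable variable holding structured data from `ls`.
let files = (ls $posts_dir | get name)

# First-class closure passed to the higher-order `where` command.
let markdown_files = ($files | where {|it| $it =~ '\.mdx?$' })

# Exception handling with try/catch.
try {
    open "missing-file.md"
} catch {
    print "could not open file"
}
```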
All in all, Nushell is a really interesting, modern new shell language that positions itself somewhere between established shell languages like Bash and general-purpose languages like Python.
If you are not convinced yet, here are a couple of my existing posts on Nushell.
GitHub Actions
In the spirit of upholding DevOps principles, I also decided to implement a somewhat more complex CI/CD pipeline architecture using GitHub Actions. The new pipeline architecture corresponds to a new three-fold environment setup with a staging, a pre-prod, and a prod environment. (The prod environment serves the publicly accessible version of the website.)
This was way overdue since, up to this point, I had always deployed the live website directly from the command line after making changes locally. Consequently, I never really had a lot of confidence that my remote deployment would closely mirror the local deployment until after the fact.
Staging Workflow
I use Conventional Commits commit type names for my branches (e.g. `feat/<the-branch-name>` for feature branches).
Pushes to feature branches now trigger a deployment to the staging environment at `staging.friedrichkurz.me` (access restricted via basic auth). I use this, for example, to ask kind people to review an upcoming post or to look at the latest build on another device.
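The trigger for this workflow might look something like the following sketch (the branch globs are assumptions based on my Conventional-Commits-style branch names):

```yaml
on:
  push:
    branches:
      - "feat/**"
      - "fix/**"
```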
Pre-Prod Workflow
Feature branches merged into the `main` branch trigger a deployment to the pre-prod environment at `preprod.friedrichkurz.me` (access also restricted via basic auth). The pre-prod deployment is nearly identical to the prod deployment. I use it for a final review before pushing a change to prod.
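The corresponding trigger is presumably as simple as:

```yaml
on:
  push:
    branches:
      - main
```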
Prod Workflow
As stated before, prod and pre-prod builds are nearly identical in that they both run for commits on the `main` branch. Prod deployments, however, only run if I push a tag for a given release commit.
GitHub Actions makes this pretty simple; we just have to specify the following in the workflow YAML file:
```yaml
on:
  push:
    tags:
      - "*"
```
meaning “run this workflow every time a tag is pushed to the remote”. (There is, by the way, no restriction that a pushed tag has to point to a commit on the `main` branch, since enforcing this does not seem to be possible using GitHub Actions alone. I don’t consider this a big deal in this simple CI/CD architecture, however.)
What’s next?
There are still quite a lot of items in my backlog that I want to work on and release in the following months. These include
- an RSS feed,
- a bio page,
- tag-based post search,
- comments, and
- repost links.
That’s it for now, though! If you made it to this last paragraph: kudos! I hope you gained some interesting insights and inspiration to explore some of the tools and technologies discussed!