Still learning git a decade later

I think I started using git in 2013 or so, my freshman year of college. Back then, learning git really meant learning how to use GitHub; in my eyes, git was just a CLI for interacting with GitHub. Over time I dipped my toes into learning more, but it’s one of those tools where it can be intimidating to break past the basics.

Up until recently I was still relying on GitHub as a centralized place for storing the code for this website. But something is wrong with that sentence: git is by design a distributed version control system. So I decided to explore what it would look like to cut GitHub out of the picture. It turns out it’s really not that complicated, and I learned some new git concepts along the way.

Bare Repos

The first revelation while researching this was the concept of a bare git repo, which is essentially a repo without a working tree: the top level of its directory holds what you’d normally find inside .git. If you are hosting a repo on a server (acting as the remote), a bare repo is the right choice, not least because git refuses by default to accept a push to the currently checked-out branch of a non-bare repo. I’d wager that GitHub is creating bare repos for your projects on their servers.
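For instance, initializing a bare repo and peeking inside shows the usual .git internals sitting at the top level (the repo name here is arbitrary, and the exact contents vary slightly by git version):

$ git init --bare example.git
$ ls example.git
HEAD  branches  config  description  hooks  info  objects  refs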

Worktrees

I’ve seen articles pop up here and there about the benefits of using worktrees but never took the time to explore them. In a “normal” git flow you only ever interact with the “main worktree”: the checked-out files you actually edit. A git repo can support multiple worktrees, though, letting you check out multiple branches at once. That’s useful during development, and in the context of a bare repo on a server, it’s how you get the actual files checked out somewhere.
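As a quick sketch of the development use case (the branch and path names here are invented):

# Check out a second branch into a sibling directory,
# without touching the main worktree.
git worktree add ../myrepo-hotfix hotfix

# List every worktree attached to this repo.
git worktree list

# Remove it when you're done.
git worktree remove ../myrepo-hotfix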

Hooks

What if I want to run some workflow once I push to a repo? Surely I need something like GitHub Actions to accomplish this? No, it’s as simple as adding a hook on the git server that receives the push. I’m also interested in exploring what client-side hooks could be used for; maybe linting code or running tests?
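As a minimal sketch of the client-side idea: a pre-commit hook lives at .git/hooks/pre-commit and aborts the commit if it exits non-zero (the npm test command below is just a placeholder for whatever linter or test runner a project uses):

#!/bin/bash
# .git/hooks/pre-commit -- runs before each commit is recorded.
# Placeholder command: swap in your project's linter or test runner.
if ! npm test; then
    echo "Tests failed, aborting commit"
    exit 1
fi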

A simple workflow for deploying this site

Here is the setup I landed on: I can edit this site locally on any machine, push to my server, and have it automatically deployed.

  1. Set up the bare repo on the server.
mkdir mckennajones.com.git && cd mckennajones.com.git
git init --bare --initial-branch=main
  2. From a host that has the existing GitHub repo cloned, point origin at the new server repo and push.
git remote set-url origin ssh://[email protected]/home/mac/repos/mckennajones.com.git
git push origin main
  3. Back on the server, in the bare repo, create a worktree for the site deployment.
git worktree add /home/mac/sites/mckennajones.com main
  4. Also in the bare repo, add a post-receive hook that cds into the worktree, pulls the latest changes, and brings up the new container with Docker Compose.
#!/bin/bash

LOG_FILE="/tmp/mckennajones-deploy.log"
SITE_LOCATION="/home/mac/sites/mckennajones.com"

echo "Received push, starting rebuild in background..."
echo "Check logs at: $LOG_FILE"
# Run the rebuild in the background so the push returns immediately.
{
     # git sets GIT_DIR when running hooks, which would break the
     # git pull in the worktree below, so clear it first.
     unset GIT_DIR GIT_WORK_TREE
     cd "$SITE_LOCATION"
     git pull
     docker-compose up --build -d --remove-orphans --force-recreate
} >> "$LOG_FILE" 2>&1 &

echo "Rebuild triggered successfully"

For now this flow is working pretty well. I can happily edit files locally and push when I’m ready for the internet to see the changes. I can still push to GitHub periodically if I care about maintaining some public visibility of my code, but there is no reason I have to. There are improvements to be made around handling build errors and avoiding downtime during rebuilds, but that’s a future problem.