Take a Deep Breath and Count to Four
by Arik Devens
Jun 3rd, 2022

Good Carbonation with a Grohe Blue and City Water

I recently purchased a Grohe Blue Chilled & Sparkling 2.0 Faucet and, aside from it being fiendishly complicated to install, I'm very, very happy with it. It's an absurd luxury, but the ability to have chilled filtered water and sparkling water from the tap is absolutely incredible. Key to that experience, though, is the quality of the water, and initially we were very disappointed. The sparkling water tasted like someone had heard about carbonation before, but had never actually tried it. That led to a ton of searching for help, and I did ultimately find a solution. It was obscure and complicated, and that's my cue to write it up for this blog!

Grohe makes this stuff annoyingly complicated. For one thing, the instructions are Ikea-style, with basically no words. For another, there doesn't seem to be an actual manual one can find anywhere. They do include a water hardness test with the faucet, and that's the place to start. If your water is below 7 or so on the dKH scale, like most city water will be, you are going to have a problem. Essentially, your carbonate hardness is just not high enough to grab onto the CO2 and create a satisfying drink.

The solution is to install one or more remineralization filters into the water pipeline. These filters exist because standard reverse osmosis water filters strip out useful minerals as well as harmful ones. The idea is that you install one of these filters after the RO system and before your faucet, thus adding back things like magnesium and potassium to your water. I'm honestly not sure how well these things work for that purpose, but they do work for ours. However, in our case, instead of installing it after the system, we want to install it in the middle, so that we can raise our hardness and allow the system to do its job.

Specifically, we want to install it between the Grohe filter and the chiller. That presents a problem, because that part of the setup uses metallic water hoses, not plastic quick connect tubing. All the remineralization filters I could find expect 1/4" quick connect tubing. That means I had to find a way to get from the Grohe hoses to quick connect and back again. An additional problem is that the Grohe system is metric, but I live in the USA, where everything is in imperial sizes.

After a ton of trial and error, I ended up needing two adapters. One is 3/8" female NPT to 1/4" quick connect. The other is 1/4" quick connect to 3/8" male NPT. That allowed me to put the remineralization filter between the Grohe filter and the chiller. The result is delicious sparkling water.

I'm leaving out a lot of details, like the difference between GHT and NPT, or how 1/2" to 1/2" hoses aren't necessarily the same size on both ends. If you want to hear more, you can listen to the episode of my Fun Fact podcast where I discuss the whole thing. Hopefully this will help someone else figure this stuff out, or at least remind me when it's time to replace a filter.

If you want to use the exact gear I'm using, here are some links!

  1. Waterdrop Remineralization Filter (I ended up only needing one filter for my system.)
  2. 1/4" R0 Tubing
  3. Pipe Tube Hose Cutter
  4. 3/8" female ntp to 1/4" quick connect
  5. 1/4" quick connect to 3/8" male ntp
Dec 9th, 2021

Getting Custom Posters out of Plex

I recently decided to change the location of the content in my Plex libraries. For the last few years they've been kept in Google Drive, and mounted on my Plex server Mac using rclone and macFUSE. That's worked reasonably well, but it's certainly not perfect. I've hit rate limits, difficulties with non-ASCII characters, and folders that couldn't be deleted. Also, I've been paying for an unlimited plan, and Google has recently decided it's less interested in selling me one. Ultimately, they just raised the price by almost 2x, but they clearly aren't as committed as I need them to be. It's time to move on.
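
For reference, the mount itself was roughly this shape. A sketch, not my exact command: gdrive is whatever you named your rclone remote, ~/PlexMedia is wherever you want the drive to appear, and the caching flag is one reasonable choice among several.

$ rclone mount gdrive: ~/PlexMedia --daemon --vfs-cache-mode writes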

As I've been moving the content, I've run into a problem. Plex has an article on their website about how to do this kind of move, but in practice it's a bit flaky. Specifically, the library scanner isn't always great at figuring out that the moved files are actually duplicates. This means you might have to recreate any custom metadata, like posters, for the new items. That's a problem for me, because I've been doing something I shouldn't have. I've been using the Plex web app to store the posters, which "uploads" them to an internal database, where they disappear into the ether. The problem is that, for many of the items that aren't being seen as duplicates, I don't have a way to re-download that original artwork.

That's what led me to try and figure out how to get my artwork back out of the Plex vortex. The solution I came up with was complicated enough that I figured I should document it here. Now that I have the posters, I won't be storing them this way again either. Instead I'm putting them where they belong, in the folders with the original content. From there Plex will happily find and use them.
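
For a movie library, that looks something like this. A sketch of the layout Plex's local media assets support; the names are made up, and TV shows have their own conventions worth checking in Plex's docs.

Movies/
    Some Movie (2009)/
        Some Movie (2009).mkv
        poster.jpg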

If you're stuck like I was, here's what you can do. First you need to find the directory where Plex is storing the metadata, on the computer you use to host your Plex install. If that computer is a Mac, it will be in ~/Library/Application Support/Plex Media Server/Metadata. What you want to do is cd into that directory in a terminal and run this command.

find . -type f -path "*Uploads/posters*" ! -name "*com.*" ! -name "*tv.plex*" -exec cp {} ~/Downloads \;
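
Before running it for real, you can drop the -exec portion to preview exactly what it will match:

find . -type f -path "*Uploads/posters*" ! -name "*com.*" ! -name "*tv.plex*"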

What the full command does is find all the files in the directory whose path includes Uploads/posters, which means they were custom. It ignores posters coming from com. and tv.plex, which are from metadata agents. Then it takes the files it finds and copies them to the Downloads directory. You'll end up with something like this.

00050a40e0277edd23787448ecea6f69508f2444 8020371bab6b0af52b95f1a8e4e15c1a59d33b42
002925311db7753c36bc9e8c1ad5535a38ce2001 807f3f900d7b4eb45350a1cb96851f7cf72a05cc
003f2a9532ad3a7a37fb5909eee778bd2d45bfa0 80f6a42bf56a25f08d1b3cf1d322033684dbe9b3
00830928241dda2c4b84877a9653ba4d860413a7 810bb89c56cde83e74f18b4eaa85d4cf475a1e31
008f3c57cddc40c6c92eb46e699ad9211e426ab4 8110133ed3d9903f4ae61fc19190d75c8607b42c
0091777970a6addce7f724aea10094628cf3a615 816e256f8ddf242352d181013a36ccf2a937818a
00a3938344dbda196bafffcdaf7bf2ac595af8cc 81ac77e5414ff2e346389f019ce39daaedb446ab

Each of those is a poster you uploaded, named with a long hex hash, still in the image format you saved it in. Assuming they're all JPEGs, you can use the Finder to rename them in bulk and add .jpg to every file. Then you just need to look through them, find the ones you want back, and rename them to something less cryptic. Now to never store them like this again!
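
If you'd rather script that than click around in the Finder, a small shell loop can sniff each file's type and append a matching extension. A sketch, assuming the copies are sitting alone in a folder and are all JPEGs or PNGs:

# run this inside a folder that contains only the copied posters
for f in *; do
    case $(file --brief --mime-type "$f") in
        image/jpeg) mv "$f" "$f.jpg" ;;
        image/png)  mv "$f" "$f.png" ;;
    esac
done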

Nov 22nd, 2021

Custom HTTPS Subdomains for My Home Server

The problem seemed simple. I run a couple services on a home Mac mini, and I wanted access to them from outside my local network. The solution I came up with was definitely not simple, but I'm really happy with it. A friend asked me to write it up, which was a great idea because I will almost certainly forget how I did this. Thus a blog was born.

I'll describe the end state first, so you know if it's something you're interested in. This site runs on danieltiger.com, and now I've added a few subdomains that I'm using to access my home server. One is for things like ssh or vnc, and one is to use with my Komga server. More importantly, it's using https, which is required for Panels.app to connect to Komga from my iPad, thanks to App Transport Security. This is how I did it.

Caveat: I'm using an Eero mesh router network and their Eero Secure+ service. I think all these instructions should work with any router that supports DDNS, but YMMV. Now on to the good stuff.

A while back, Eero added DDNS to their Secure+ service. What this means is that you can get a domain from Eero that will always map back to your home network. Using that, you can port forward whatever you want, to whatever machines you want, and access them from anywhere. That's great, and at first it seemed like a perfect solution, but there are two important gotchas.

The first is that Eero assigns you the domain. You might not care about that, but I wasn't that excited to have to memorize something like r1560134.eero.online. I own a domain already and I wanted to use that. The second issue is that Eero gives you a domain that only supports http by default. They mention that you can do https, but then they link you to a page on the Let's Encrypt website that I think you have to have built Let's Encrypt to understand.
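
As a quick sanity check, you can confirm the assigned domain really does point back at your network. Both of these should print the same public IP (ifconfig.me is just one of many what's-my-IP services; any will do):

$ dig +short r1560134.eero.online
$ curl -s ifconfig.me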

OK, so I have two things I want to do: forward my own https subdomains to my Eero DDNS, and make the Eero DDNS connect over https for Komga and any other web services I want to run in the future.

The first task was pretty easy. I host this site on Netlify and they provide wildcard TLS by default. All I had to do was add a CNAME record pointing from home.danieltiger.com to r1560134.eero.online, and that part was done.
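
You can confirm the record took effect with dig; the trailing dot in the output is normal:

$ dig +short home.danieltiger.com CNAME
r1560134.eero.online.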

What that means is that now I can ssh into home.danieltiger.com and that will automatically forward to r1560134.eero.online over port 22, which is already forwarded via the Eero app to the Mac mini server. A bit circuitous, I suppose, but it works great.
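
If you connect a lot, an entry in ~/.ssh/config saves some typing. A small sketch; home is just an alias I picked, and yourusername is a placeholder for your account on the server:

Host home
    HostName home.danieltiger.com
    User yourusername

After that, ssh home does the same thing. The bigger problem comes when trying to use things that connect over https.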

Getting that working was a lot less easy, but mostly because I am not a Docker expert. If I were, I think this would have taken way, way less time. If you're also setting things up on a Mac, go install Docker Desktop, which comes with Docker Compose. It makes it way easier to set up the containers you'll be running.

I was already running Komga as a Docker container. What I did was set up another container to run a service called Caddy. Caddy is a web server that handles https automatically and supports reverse proxying. That means I can use Caddy to manage the process of getting and refreshing my Let's Encrypt certificate, route all my web traffic over https, and forward it to Komga.

Setting up Caddy is pretty easy, but I'm going to assume you have as little Docker experience as I do. The first thing to do is create a directory to store the Caddy data. Mine is under ~/Docker/caddy. Inside, you'll also need to make a few subdirectories.

$ mkdir data
$ mkdir config

Next up is to add a docker-compose.yml file, which will tell Docker how to spin up your Caddy container. You can copy mine.

version: "3"

services:
    caddy:
        image: caddy:2-alpine
        restart: unless-stopped
        ports:
            - "80:80"
            - "443:443"
        volumes:
            - ~/Docker/caddy/Caddyfile:/etc/caddy/Caddyfile
            - ~/Docker/caddy/data:/data
            - ~/Docker/caddy/config:/config
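
If you want to make sure the YAML parses before going any further, Compose can check it for you. It prints the resolved configuration, or an error if something is off:

$ docker-compose config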

Next you need to add a Caddyfile, which will tell Caddy how to proxy your requests. This is what mine looks like, using the example names from earlier.

{
    # Global options block. Entirely optional; https is on by default.
    # Optional email to use for Let's Encrypt registration.
    email youremail@yourdomain.com
    # Optional Let's Encrypt staging endpoint for testing. Comment out for production.
    # acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}

r1560134.eero.online {
    reverse_proxy http://r1560134.eero.online:8080
}

komga.danieltiger.com {
    reverse_proxy http://r1560134.eero.online:8080
}
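
If you'd like to check the Caddyfile for syntax errors before anything starts listening on ports, Caddy has a validate command. One way to run it, reusing the same image and volume mount from the compose file above:

$ docker run --rm -v ~/Docker/caddy/Caddyfile:/etc/caddy/Caddyfile caddy:2-alpine caddy validate --config /etc/caddy/Caddyfile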

One important note: that staging line is super important to use while testing. Let's Encrypt has pretty aggressive rate limiting, and you don't want to accidentally trigger it. Uncomment it while you're experimenting, and comment it back out once you have everything up and running.

Start Caddy with docker-compose up -d and load whatever you used instead of https://komga.danieltiger.com to connect to your service. Your browser will likely complain about the certificate being invalid, but if you look at it, you'll see that's because it's a staging certificate from Let's Encrypt. If you view the site anyway, you'll see your service, and you'll know you're ready to comment out that staging line.
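
If you'd rather not squint at browser warnings, you can also check the certificate issuer from a terminal. A sketch, assuming openssl is available and using my example hostname:

$ echo | openssl s_client -connect komga.danieltiger.com:443 -servername komga.danieltiger.com 2>/dev/null | openssl x509 -noout -issuer

A staging certificate will show an issuer like (STAGING) Let's Encrypt; once you switch over, it should just say Let's Encrypt.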

So there you have it. Now I can connect to any of my services, from anywhere in the world, via my own domain, and all over https.