Mobile Podcasting Setup 2025

I’ll be escaping the grey, dark, cold and wet Berlin for a month. Last year I did the same, and I wanted to be able to record a Glitterbrains podcast, record guitar and actually take guitar lessons remotely.

Back then I brought my UA Apollo Twin with me so I could do some flexible audio routing, including the fabled mix-minus (n-1) setup, and also a Beyerdynamic headset.

Last year I sold my Universal Audio interfaces and switched to an RME interface instead, which is far superior when it comes to routing flexibility and software support.

So for this year’s workation, I needed a new mobile audio interface that allowed for flexible routing, and so I bought the RME Babyface Pro FS. The investment hurt a little, but it’s just so convenient to stay in the same ecosystem, carry over my TotalMix settings and rely on the great RME engineering. (I also think there is hardly a better audio interface for podcasting / streaming than the RME ones.)

So without further ado, here is my mobile podcasting, guitar recording and guitar lesson setup:

Mobile Podcasting Equipment

It consists of:

(^ the above are affiliate links to Thomann)

I could definitely save some weight and space by using a headset again, but there isn’t currently a model that I feel like investing money in. Therefore I went with a slightly more involved but also more flexible setup. Looking forward to trying it out.

Tuning rspamd

For many years I’ve been running my own mail server based on postfix and dovecot. To combat spam I used spamassassin like everybody else back in the day, but I was never quite satisfied with it. It came from a different era, and as the spammers got more sophisticated and billions of people put poorly maintained and therefore hackable computers on the internet, our trusty old friend spamassassin wasn’t keeping up.

Then in 2013 a new contender entered the scene, rspamd. I remember discovering it, probably a few moons after its initial release and feeling quite excited. It was not written in Perl but in C, promising much better performance and offering a ton of modern features to combat spam.

When I first tried it, its default config was almost enough to get rid of most of the spam I was struggling to filter with spamassassin. But over the years, as the spammers got more sophisticated, more and more spam reached my inbox again, which is why I recently spent a weekend trying to figure out what I could do to improve the situation.

The first thing that became obvious to me was that the configuration options and format of certain modules had changed and that certain modules were just not working or not even enabled in the first place.

But that was just the beginning of renovating my rspamd config. So here are a few suggestions for you if you have too much spam in your inbox. I will assume that you are familiar with common email and spamfilter related terms like greylisting and the principles behind it.

Check your config

Suggestion number one is pretty straightforward. Check your active configuration! You can do this by running

rspamadm configdump

or

rspamadm configdump <module name>

Check if the modules and values are what you expect them to be. Rspamd has a hierarchical config overloading structure, and if you don’t fully understand it, it is easy to believe that what you’ve configured in the local.d folder is what is actually active. I realized that a few of my settings did not work as expected due to the aforementioned changes in the configuration format.
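For example, once the multimap rules described further down are in place, you can check whether they actually made it into the active configuration. A rough sketch (the grep context is arbitrary, and the layering rule of thumb is paraphrased from the rspamd configuration docs):

rspamadm configdump multimap | grep -A 5 BAD_SUBJECT_BL

# Rule of thumb for the layering:
#   local.d/<module>.conf    is merged into the stock defaults
#   override.d/<module>.conf replaces the stock settings outright
# If a rule you added in local.d doesn't show up in configdump, check whether
# an override.d file or an outdated option name is shadowing it.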

Deal with the repetitive spam themes first

In my case, I received a lot of similar-looking spam. All German speakers have probably seen their fair share of spam mails with a subject like “Apotheke / Apo-theke / A-potheke”. There are many more “common” spam themes and topics, and this is what I tackled first, because these categories of repetitive spam were very unlikely to produce false positives if I just blacklisted them.

But if you’re unsure whether this is the right approach on a multiuser setup with varying interests, you can fall back to greylisting. To set this up you will need to edit local.d/multimap.conf and maybe take a look at the corresponding documentation: https://rspamd.com/doc/modules/multimap.html

I’d say this page is one of the most important pieces of documentation to leverage rspamd’s potential.

Subject Blocklist

The first thing in my multimap.conf file is the following block:

BAD_SUBJECT_BL {
  type = "header";
  header = "subject";
  regexp = true;
  map = "$LOCAL_CONFDIR/local.d/local_bl_subject_map.inc";
  description = "Blacklist for common spam subjects";
  score = 10; 
}

The content of that local_bl_subject_map.inc file is as follows:

/\bpsoriasis\b/i
/\bprostatitis\b/i
/\bderila\b/i
/\betf\b/i
/\bbitcoin\b/i
/\breich\b/i
/\bgeld\b/i
/\bki\b/i
/\baktien\b/i
/\bmakita\b/i
/\b(lotto|lottery)\b/i
/\bmubi\b/i
/\bauto\b/i
/\bantihaftbeschichtung\b/i
/.*r[:.-]*?e[:.-]*?z[:.-]*?e[:.-]*?p[:.-]*?t[:.-]*?f[:.-]*?r[:.-]*?e[:.-]*?i/i
/\br[-_]?e[-_]?zept[-_]?frei\b/i
/zeptfrei/i
/\beinkommen\b/i
/\bnubuu\b/i
/\bnuubu\b/i
/\bentgiftungsprogramm\b/i
/\bgelenkschmerzen\b/i
/\bmädchen\b/i
/\bsprachübersetzer\b/i
/\bstabilisierung.+blutdrucks\b/i
/\bmüheloses.+reinigen\b/i
/\bpapillome\b/i
/\bküchenmesser\b/i
/\brendite\b/i
/\bgewichtsverlust\b/i
/\bpreissturz\b/i
/\bchance.+kostenlos\b/i
/\bhamorrhoiden\b/i
/\bhörvermögens\b/i
/\bmuama\b/i
/\bryoko\b/i
/\bbambusseide\b/i
/\bluxusseide\b/i
/\bHondrostrong\b/i
/\btabletten.+apotheke\b/i
/\bEinlegesohlen\b/i
/\btest\syour\siq\snow\b/i
/\bzukunft.+sauberkeit\b/i
/\bcbd\b/i
/\bharninkontinenz\b/i
/\bpillen\b/i
/\btabletten\b/i

This might seem surprisingly short, but this list got rid of the majority of spam mails reaching my inbox. It’s dull, it’s simple, but quite effective. I very rarely have to add things to it these days, and it is especially effective for those mails that don’t have a lot of suspicious content and therefore slip past other spam identification methods.

Again, if you’re uncomfortable using it as a block / blacklist, you can either lower the associated score to be below your global spam threshold, or you can convert this map into a prefilter and send the matching mails into greylisting, which also gets rid of 95-99% of spam mails.
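If you go the greylisting route, the rule can be converted into a prefilter just like the TLD rule in the next section. A sketch of what that could look like – I use the scored variant myself, so treat this as untested:

BAD_SUBJECT_GREYLIST {
  type = "header";
  header = "subject";
  regexp = true;
  prefilter = true;
  map = "$LOCAL_CONFDIR/local.d/local_bl_subject_map.inc";
  description = "Greylist mails with common spam subjects";
  action = "greylist";
}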

TLD Blocklist

Speaking of prefilters and greylisting, let’s talk about my crudest blocklist, where I apply special treatment to mails coming from certain top-level domains. Here is the corresponding entry in local.d/multimap.conf:

SENDER_TLD_FROM {
  type = "from";
  filter = 'email:domain:tld';
  prefilter = true;
  map = "$LOCAL_CONFDIR/local.d/local_bl_tld_from.map.inc";
  regexp = true;
  description = "Local tld from blacklist";
  action = "greylist";
}

And here is the list of “blocked” top level domains:

[.]tr$
[.]su$
[.]mom$
[.]mg$
[.]com\.py$
[.]af$
[.]ng$
[.]ro$
[.]ar$
[.]pro$

For whatever reason, a disproportionate amount of spam comes from those top-level domains. For me personally there is again very little chance of false positives, but since this is even cruder than the subject-based blocking, I turned it into a prefilter, which means it is evaluated before all other checks. I’ve set the action to greylist, which sends matching mails directly into greylisting, and that does the job very well. In case a “good” mail comes from one of those top-level domains, it should make it through the greylisting and all other modules.

Other Blocklists

I do have a few more blocklists for display names, domains and local parts (the part of an email address before the @), but they are quite short. For example, I get a lot of spam mails from email addresses starting with “firewall@”, so I take care of those as well.

The multimap blocks for those look like this: 

SENDER_FROM {
  type = "header";
  header = "from";
  filter = 'email:domain';
  map = "$LOCAL_CONFDIR/local.d/local_bl_from.map.inc";
  description = "Local from blacklist";
  score = 7;
}

SENDER_USER_FROM {
  type = "header";
  header = "from";
  filter = 'email:user';
  map = "$LOCAL_CONFDIR/local.d/local_bl_user_from.map.inc";
  description = "Local user from blacklist";
  score = 7;
}

SENDER_USER_DISPLAY_FROM {
  type = "header";
  header = "from";
  filter = 'email:name';
  map = "$LOCAL_CONFDIR/local.d/local_bl_from_display.map.inc";
  description = "Local user from display name blacklist";
  regexp = true;
  score = 7;
}
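The map files referenced above are again one entry per line. Purely as an illustration (these entries are made up from the examples earlier in this post), the user map covering the “firewall@” senders and the display name map could contain something like:

# local_bl_user_from.map.inc – matched against the part before the @
firewall

# local_bl_from_display.map.inc – display names, matched as regexps
/\bapotheke\b/i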

As mentioned before, this takes care of a very large portion of spam that wasn’t detected otherwise, but it is by no means the only thing you can tune.

Tuning Symbol Scores

While looking at the history tab of rspamd’s web interface, I noticed certain symbols being added to emails which I thought should be weighted higher but didn’t carry enough weight to push the score over the threshold. You can also manually paste the mail source into the form field in the “Scan/Learn” tab of the web interface to scan spam mails that have slipped through the filter and see what score the mail gets and which symbols were added. If you spot certain symbols over and over again and feel like they should weigh more in the overall score, head over to the Symbols tab and add custom scores to them.
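If you prefer the command line over the web interface, rspamc (which comes with rspamd) can do the same. As far as I know, these invocations show the symbols and score of a saved mail and feed a missed one to the Bayes classifier, respectively:

rspamc symbols /path/to/missed_spam.eml
rspamc learn_spam /path/to/missed_spam.eml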

There are so many symbols that I don’t remember which ones I changed, because I did it through the web interface. I should’ve done that in a config file right away, but it’s too late now. You can be smarter than me: add a file local.d/scores.conf and list the symbols with your custom scores as follows:

ONCE_RECEIVED = 5.0; 
MANY_INVISIBLE_PARTS = 5.0;

etc etc. 

Check/Configure the Fuzzy and Neural Modules

These modules are a cornerstone of rspamd’s effectiveness, and therefore it’s worthwhile to check if they are indeed enabled and working. To do this, run

rspamadm configdump neural
rspamadm configdump fuzzy_check

For recommended values check out the module documentation of both. 
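If configdump shows the neural module as disabled, a minimal local.d/neural.conf should be enough to switch it on – assuming Redis is already configured for rspamd, since the neural module stores its training data there. Treat this as a minimal sketch, not a tuned config:

# local.d/neural.conf – everything else left at the defaults
enabled = true;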

Ask the Mail Cow

Another great way of getting more inspiration on how to fight spam with rspamd is to look into the repository of mailcow, a dockerized and pre-configured mail server setup; many of their configuration choices have proven to be solid.

For example, you can browse their entire local.d folder and get inspiration, e.g. for tuning the fuzzy module. You can also pick up useful settings for your postfix and dovecot configs that might not have occurred to you. What I did was look at their configs, and whenever I saw options that sounded interesting and that I didn’t know, I looked them up in the postfix/dovecot/rspamd documentation to see whether they’d be suitable for me as well.
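If you want to take that route, the rspamd bits live in the mailcow-dockerized repository; last time I looked, the interesting files were somewhere under data/conf/rspamd (path from memory, so double-check):

git clone https://github.com/mailcow/mailcow-dockerized.git
ls mailcow-dockerized/data/conf/rspamd/local.d/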

I wouldn’t blindly copy all their settings, because many might not apply to your scenario, and without understanding what they do you can make your setup worse or break it entirely. Don’t change too many things at once. Make one change at a time, then test and confirm that it works as intended. Use rspamd’s web interface to scan and check mails and to feed the fuzzy and neural modules.

Auto Learn From Users Spam

This is another great option for training your spam filter. There are ways to auto-scan junk folders and feed their contents to rspamd, but I am not using this as all the previous methods already work well enough for me. With them, spam mails are usually quite distinguishable from “proper” mail – but if you have a medium to large multiuser setup with a diverse user base (region, language, age), you might be receiving very diverse spam, and auto-learning from user-classified spam might bring the last few percent.
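If you do want to go down that road, a crude version is just a loop over a Junk maildir that feeds everything to rspamc. The paths and maildir layout below are made-up examples; the nicer solution would be Dovecot’s imapsieve piping newly junked mails into rspamc, which I’m not covering here:

#!/bin/sh
# Crude sketch: feed everything in a user's Junk folder to rspamd's Bayes learner.
# Assumes a maildir layout and that rspamc can reach the local controller worker.
JUNK_DIR="/var/mail/vmail/example.org/someuser/.Junk/cur"

for mail in "$JUNK_DIR"/*; do
    rspamc learn_spam "$mail"
done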

You could even implement it the way Gmail does, by flagging mail in user mailboxes after delivery once enough users have marked the same mail as spam. However, a lot more effort is required if you want to preserve data privacy, which means a bit of scripting – but it is possible.

I hope that helps some of you to drastically reduce your spam. It did for me and I was surprised that some of the dullest methods were the most effective ones.

Questions?

I’m sure I haven’t answered all your questions, and it’s not easy to cover everything. The rspamd documentation isn’t easy to consume and understand in its entirety, and I wouldn’t claim I’ve reached the pinnacle of understanding, but what I’ve done is enough that I don’t get a single spam email in my inbox for days in a row. Whenever one slips through the cracks, I adjust one of the modules mentioned above.

Feel free to ask any remaining questions in the comments or via the usual channels, and let me know what you have tuned to great effect. Sharing is caring 🙂

Oh and of course feel free to correct any errors I might have made!

Special thanks to @leah@chaos.social who saved my sanity during my config debugging session where I tried to figure out which modules are actually active and working.

Replacing the TouchMix DAW Utility

I bought the QSC TouchMix 30 digital mixer a few years ago, and I really like the device for many reasons. QSC’s software support isn’t one of them, though.

The mixer allows you to record directly to a USB-connected SSD (or a fast USB stick). It does so by putting the raw .wav files in a generic folder structure and saving a project XML file (project_name.tmRecord) that holds the track name / track number info as well as information about sections and markers.

To get the .wav files named after the track names in the mixer, I used QSC’s own tool called “TouchMix DAW Utility”, which lets you select the source, destination and tracks to import, and renames the .wav files according to the information in the .tmRecord file.

The tool has not been updated in years, it does not support dark mode, it is not Apple Silicon native, and it copies the files rather slowly, sometimes even appearing to stall.

Since I only record continuous sessions (full rehearsal room sessions), I thought it should be fairly simple to replace the sluggish and unmaintained tool with a simple shell script.

You can find the script on Github: https://github.com/hukl/qsc_touchmix_extract/

To use it, invoke it like this:

./qsc_tm.sh /path/to/project_name.tmRecord /path/to/destination/folder

If someone comes up with a more advanced version that properly deals with sub-regions and markers feel free to shoot me a PR.

How to install Photoview on FreeBSD

Intro

Recently I got back into photography, and as a result I was looking for a place to host my photos and share them with my friends without making them public to the whole world. Additionally, I would like to see the photos’ EXIF information and other metadata.

I do have an old Flickr account, so I tried that first, but I was quite disappointed by its antiquated interface for adding and editing photos, including the permission settings for viewing.

Next I was looking for self-hosted photo gallery options, ideally with few external dependencies and written in a programming language like Go or Elixir.

There is a great wiki for looking up self-hosted software options which has photo galleries as one of its categories.

I checked a few of them out and decided to give Photoprism and Photoview a shot since they’re both written in Go.

Photoprism, despite having a 3rd-party portfile for FreeBSD, was impossible for me to install, as the portfile does not appear to be well maintained and failed at a critical build stage with no apparent workaround.

Photoview had to be installed manually on FreeBSD, and the installation process had a few things I needed to figure out to get it running. There is a manual installation page in the documentation, but not all steps lined up.

This is why I decided to compile all the required steps to install Photoview on FreeBSD for the next person attempting to give it a go – so here we go.

Installation Steps

The first step is of course to clone the repository:

git clone https://github.com/photoview/photoview.git

Next I had to figure out the correct pkgs, as some did not correspond 1:1 with their Linux counterparts (the install one-liner follows the list):

  • libheif
  • libde265
  • go
  • pkgconfig
  • dlib-cpp
  • lapack
  • blas
  • cblas
  • node16 (higher version would probably work as well)
  • npm-node16
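On a stock FreeBSD system that should boil down to a single pkg invocation (package names as of the time of writing):

sudo pkg install libheif libde265 go pkgconfig dlib-cpp lapack blas cblas node16 npm-node16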

To build the UI part of photoview I had to run:

cd ui
npm install

Then, before building the frontend, I had to edit the vite.config.js file and add the following lines to the top level of the defineConfig section:

build: {
  chunkSizeWarningLimit: 1600,
}

Mine now looks like this:

import { defineConfig } from 'vite'
import svgr from 'vite-plugin-svgr'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react(), svgr()],
  build: {
    chunkSizeWarningLimit: 1600,
  },
…

After that the frontend part of photoview should build by running:

npm run build

When this was successful, change to the api directory.

The official documentation says that a simple

go build -v -o photoview .

should be sufficient, but on FreeBSD it failed to find some of the dependencies, which led me to this GitHub issue that had the solution in the comments.

Running this command did the trick for me:

env CGO_CFLAGS="-I/usr/local/include/" CGO_CXXFLAGS="-I/usr/local/include" CGO_LDFLAGS="-L/usr/local/lib" go build -v -o photoview .

Lastly, the documentation tells you to copy the build results to a new location. Note that instead of a folder called “build”, on my machine the frontend was built into a directory called “dist”.

Therefore these are the commands I’ve used to put everything together:

sudo mkdir -p /usr/local/www/photoview
sudo chown www:www /usr/local/www/photoview
cp -R api/photoview /usr/local/www/photoview
cp -R api/data /usr/local/www/photoview/data
cp -R ui/dist /usr/local/www/photoview/ui
cp api/example.env /usr/local/www/photoview/.env

I edited the .env file, put in my database connection details and set these two options:

PHOTOVIEW_SERVE_UI=1
PHOTOVIEW_DEVELOPMENT_MODE=0

Then I made a folder for the photos to live in. To upload new photos, create a subfolder and put your photos inside; a new album will automatically be created for that subfolder.

sudo mkdir /var/db/photos/
sudo chown www:www /var/db/photos/

Last step, run the thing:

cd /usr/local/www/photoview
./photoview

This now runs in a local-only jail, meaning it has no LAN or WAN address and instead uses a 127.0.1.x IP. On my web jail I configured a new vhost in nginx to proxy requests to the photoview jail.
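For reference, a rough sketch of such a vhost – the server name, certificate handling and especially the jail IP and port are placeholders; use whatever address and port photoview actually listens on according to its .env:

server {
    listen 443 ssl;
    server_name photos.example.org;

    # ssl_certificate / ssl_certificate_key omitted here

    location / {
        # 127.0.1.x address of the photoview jail + the port photoview listens on
        proxy_pass http://127.0.1.10:4001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}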

Right now I have not made an RC script for it but when I do I will amend this post accordingly.

That’s it for now – I hope it helps another FreeBSD soul along the way. Right now Photoview does pretty much what I wanted. It’s quite simple but not too simple. Had I failed to install and run it, I would have gone with Lychee instead.

How to Add Caching to Your Website

In this blog post I will describe how you can dramatically improve the performance of a PHP CMS website hosted at a webspace provider, in this case Hosteurope. To achieve this I’m using nginx, haproxy, varnish, S3 and the CloudFront CDN.

A friend of mine sells her designer bridal dresses on her website, and occasionally her business is featured on TV fashion shows. When that happened in the past, her website broke down and was basically unreachable. She called me because she was expecting to be featured on another TV show soon, and this time she would like her website to be up and running, especially her web shop. Of course, there were only a few days before the broadcast, so the solution had to come together fast.

From the outside it wasn’t clear why the website became unreachable when traffic surged. Was the PHP of the CMS too inefficient and slow? Was the web server of the webspace provider too slow? Was the uplink saturated by all the large images and videos on her website? Because there was no way to figure that out quickly and all of those options were possible, I came up with a plan:

  1. Check if there is caching in place and if not add it to make the dynamic PHP site static, except for the online shop
  2. See if we can somehow dynamically add a CDN (Content Delivery Network) in the mix to serve all the large assets from other and more capable locations

It turned out that the CMS (I believe Drupal) had some sort of caching enabled, but because cookies were sent along with every request and many elements in the HTML had dynamic query strings in their URLs, I wasn’t convinced that the caching actually had any effect.

I wanted to add a caching reverse proxy in front of the website to have full control, but of course that isn’t easy on a webspace provider. So I thought: maybe I could use my own server, set up varnish there and have the Hosteurope website act as its origin server. But there was another problem: the website was using HTTPS, and it was not easy to disable it or download the certificates. In order to get the caching on my server to work I had to do this:

  1. Somehow unwrap the HTTPS/SSL
  2. Feed the decrypted HTTP to varnish
  3. Strip cookies and query parameters for the static parts of the website which do not change frequently
  4. Re-wrap everything in HTTPS with a letsencrypt certificate
  5. Point DNS to my server

This is the nginx config:

upstream remote_https_website {                                                         
    server <origin_ip_address>:443;
}

server {
    listen 8090;
    server_name www.popularwebsite.com popularwebsite.com;

    location / { 
        proxy_pass https://remote_https_website;
        proxy_set_header host www.popularwebsite.com;
    }   
}

This is the varnish config:

vcl 4.0;                                                                                  

# this is nginx
backend default {
    .host = "127.0.0.1";
    .port = "8090";
}

# Remove cookies except for shop and admin interface
sub vcl_recv {
    if (req.url ~ "(cart|user|system)") {
        return (pass);
    } else {
        unset req.http.Cookie;
    }   
}

# Add caching header to see if it's working
sub vcl_deliver {
    # Display hit/miss info
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    }   
    else {
        set resp.http.X-Cache = "MISS";
    }   
}

After setting this up, it was time to test whether that actually made things better. For these kinds of quick tests I like to use a tool called wrk. I temporarily modified my /etc/hosts file to point the domain to my server and then fired away. This alone provided a 3x increase in requests per second; however, if you start small, 3x is not that amazing. It went from ~18 requests per second to about 60.
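For reference, a typical invocation for this kind of quick check looks roughly like this (thread count, connections and duration are arbitrary here):

# /etc/hosts temporarily points www.popularwebsite.com at my own server
wrk -t4 -c100 -d30s https://www.popularwebsite.com/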

It is worth pointing out that the cross-datacenter latency between my server and Hosteurope can be neglected in this scenario. Since varnish caches most of the requests, the origin server is barely ever contacted once the cache is filled. Very quickly the varnish statistics showed nothing but cache hits, all served directly from RAM.

These kinds of tests are always limited though, since the load was generated from my own machine over my home internet connection. My server remained relaxed: the CPU was bored, my 1Gbit uplink wasn’t saturated, and the disks – a ZFS mirror with 16GB of read cache – weren’t limiting throughput either.

To properly simulate TV broadcast conditions you need a distributed load test, and because I didn’t have the time to set that up, I moved on to the next problem: getting the large assets delivered from a CDN. I know from experience what my server with haproxy, varnish and nginx is capable of, and I was confident they would not buckle.

Getting the assets onto a CDN wasn’t easy either, as it would have meant manually going through all pages of the website in the CMS and changing each and every single one of them.

Luckily most of the asset URLs followed a consistent path structure, which meant I could download the folders containing the images and videos, upload them to S3 and put the AWS CloudFront CDN in front of the bucket.
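The upload itself is a one-liner per folder with the AWS CLI. The bucket name below is a placeholder, and the target layout has to line up with the haproxy rewrites shown further down (videos end up at the bucket root, images keep their styles/ prefix):

aws s3 sync sites/default/files/videos/ s3://my-cdn-bucket/
aws s3 sync sites/default/files/styles/ s3://my-cdn-bucket/styles/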

When a user browses to the website, which is now essentially hosted on my server, all the referenced assets also point to it. This means I can rewrite and redirect the asset URLs to point to the CDN instead. The overhead of the 303 redirects would be acceptable.

This is the final haproxy config:

global                                                                                                                                                                
  maxconn 200000
  daemon
  nbproc  4
  stats   socket /tmp/haproxy

defaults
  mode            http
  retries         3   
  option          redispatch
  maxconn         20000
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  log             global
  option          dontlog-normal
  option          tcplog
  option          forwardfor
  option          http-server-close

frontend http-in
  mode   http
  option httplog
  option http-server-close
  option httpclose
  bind   :::80 v4v6
  redirect scheme https code 301 if !{ ssl_fc }

frontend https-in
  option http-server-close
  option httpclose
  rspadd Strict-Transport-Security:\ max-age=31536000;\ includeSubDomains;\ preload
  rspadd X-Frame-Options:\ DENY
  reqadd X-Forwarded-Proto:\ https if { ssl_fc }
  bind   :::443 v4v6 ssl crt /usr/local/etc/ssl/haproxy.pem ciphers AES128+EECDH:AES128+EDH force-tlsv12 no-sslv3
  acl video  path_beg /sites/default/files/videos/
  acl images path_beg /sites/default/files/styles/
  http-request redirect code 303 location https://d37nu8xxtvb77.cloudfront.net%[url,regsub(/sites/default/files/videos,,)] if video
  http-request redirect code 303 location https://d37nu8xxtvb77.cloudfront.net%[url,regsub(/sites/default/files,,)]        if images
  default_backend popular_website_cache

backend popular_website_cache
  server varnish 127.0.0.1:8080

listen stats
  bind  :1984
  stats enable
  stats hide-version
  stats realm Haproxy\ Statistics
  stats uri /
  stats auth stats:mysupersecretpassword

An additional benefit of this setup is increased introspection, as varnish and haproxy each come with their own elaborate statistics reporting. Spotting errors and problems becomes very easy, as does confirming that everything works.

The last piece of the puzzle was to configure AWS CloudFront properly, as I had never done this before. It is worth mentioning, though, that if you’re not sure whether you want this permanently, CloudFront is the most unbureaucratic way of setting up a CDN. Most others will bug you with sales droids requesting lengthy calls for potential upsells and long-term contracts. With AWS you can just log in with your Amazon account, set things up and use them as long as you need them. No strings attached.

As a last preparation step I reduced the TTL of the DNS records to the lowest setting, which was 5 minutes at Hosteurope, so that in case something went wrong I could switch back and forth rather quickly.

Then it was time for the broadcast, and this time the traffic surge was handled with ease. Instead of breaking down and becoming unreachable, the site served ~15k users and ~700k requests within 1-2 hours. CloudFront served about ~48GB of assets, while my server delivered ~2GB of cached HTML plus some JS and CSS that was too cumbersome to move to the CDN.

This is of course a temporary setup, but it worked and solved the problem with about half a day of work – all humming and buzzing in a FreeBSD jail without any Docker voodoo involved. It made me feel a little bit like the internet version of Mr. Wolf.

What we learned from the statistics of this experiment is that Hosteurope most likely does not provide enough bandwidth for the webspace to host all those large assets, and that it would be wise to move them to a CDN permanently – which would then require the manual labour of changing all the links in the CMS.

Until the transition is made, I’ll keep my setup on stand-by. Either way, I hope this is helpful for other people searching for solutions to similar problems.

Lastly I want to address the common question of why I’m not just using nginx for everything that haproxy and varnish are doing.

The answer is that while nginx can indeed do all of it, it isn’t great at caching, SSL termination and load balancing. Both haproxy and varnish are highly specialised and optimised tools that provide fine-grained control, high performance and, as mentioned above, in-depth statistics and debug information that nginx simply does not provide.

To me it’s like having a multitool instead of dedicated tools for specific jobs. Sure, you can cut, screw and pinch with a multitool, but doing those things with dedicated tools will be more satisfying and give you more control and potentially better results.