Tech
Calibrating a no-name Amazon digital spirit level
If your digital level looks like this...
...then I can show you how to calibrate it. If it looks mostly like this but isn't covered in a thin film of angle grinder dust, I can also show you how to calibrate it, because it's the same thing. (Mine is pretty grim; this is after I cleaned it.)
Here's the short version:
- Power off your level, if it isn't already powered off.
- Place your level on a reasonably flat surface; your desk will do.
- Press and hold the MODE button, then push the power button (while still holding MODE), then release both.
- Press and release the MODE button once; CAL1 will show on the display.
- Rotate the level 180° through its horizontal centre (keeping it the right way up).
- Press and release the MODE button again; CAL2 will show on the display.
- You're calibrated!
The longer version
...and with that out of the way, I can ramble on forever without getting in anyone's way. If you made it this far it's because you like me, and I like you because you're great.
My digital spirit level was like £25 from Amazon. It's cool! It's easy to read from a distance because of its big illuminated display, and it is much more precise than a non-digital spirit level's bubble. It was critical to re-engineering my Rover P5 subframe to accept an LS1 engine.
It does, however, drift out of calibration over time, for reasons I don't pretend to understand. This is normal; that is why every digital level will have a calibration mode. Mine was about half a degree wrong when I calibrated it today.
Be aware that for a lot of jobs it won't matter if it is miscalibrated! It only matters if you need to construct something level relative to gravity. A workbench would be a good example of when it matters. So would a building. Putting new pieces of metal into my front subframe was a good example of where calibration doesn't matter. I wasn't interested in making those things level compared to our planet; I wanted them level with the rest of the subframe. That's what the "REF" button is for; set up something as a reference, and all angle measurements will be relative to that thereafter.
As it happened, I did later need to get something level relative to Earth (it was installing a sign). By the time I needed it, I had lost the tiny paper manual for my level, and as with all weird Amazon ALLCAPS non-brands I wasn't able to find a manual for it online either. So I didn't know how to calibrate it! But with the faith that it must have a calibration mode, I could guess what it might be, based on how other levels with a more obvious calibration mode work. And my guess was right! So here it is.
Turn your level OFF
Calibration is done during power-on, so it'll have to be powered off first.
Find something flat-ish and place your level on it
Whatever surface you choose does not have to be completely flat. That would be a chicken-egg problem! It only has to be nearly flat, and smooth enough that the level can sit on it without wobbling. If you are reading this from a computer, you probably have a desk, and that will certainly be OK.
At this point I recommend placing a ruler on your surface, and placing your level against that. Just trust me; you'll see why later. It can be any completely straight object with an unambiguous end. I use a ruler because that was the obvious thing to use. You can use the straight edge of your desk if you like. Whatever you use for your edge, keep it in the same place on your surface for all of the steps!
Hold MODE and the power button at the same time
Press MODE first, and hold it down while pressing the power button. When you power it on, your level should show CAL on the display. Release both buttons. Your level is now in calibration mode.
Press MODE once
Pressing and releasing MODE will take the first reference measurement. After you have pressed it, CAL1 will show on the display.
Rotate your level through 180 degrees
Turn it around so it is facing away from you but still the right way up:
This is why I suggested setting up a reference straight edge earlier, and keeping it there; it means you can be sure that you are putting the level back in exactly the same place on your surface. Your surface might not be at exactly the same incline at different points; any difference between them will be reflected in inaccurate calibration.
Similarly, you want to be sure that you are rotating it through exactly 180 degrees. If you don't, and your surface has a tiny front-to-rear tilt, that will be reflected in inaccurate calibration too.
Using a reference to ensure you're putting it in exactly the same place on your desk (or other surface) will make little practical difference compared to eyeballing it, but there's no reason not to chase as much accuracy as you possibly can.
Press MODE again
This will take the second reference measurement, and CAL2 will show on the display.
You are now calibrated!
What if you have a no-name digital level, but yours doesn't look quite like mine?
If you have a CAL button, it replaces the MODE button in everything I said above. Otherwise, if you do not have the same button arrangement as me and MODE doesn't work, some variation of the above procedure will almost certainly work with one of the buttons. I think, but do not know, that all of the cheap ones use similar circuit boards; they might all be made in the same factory for all I know. So it is extremely likely that pressing some button while powering on will enter calibration mode, and probably that the same button will be used to take the reference measurements. Try all of them!
How do I know if my level is incorrectly calibrated?
First, believe the bubble; a bubble cannot be wrong unless there were drastic errors in manufacturing, which you just don't see these days.
Second, you can use exactly the same reasoning the level uses to calibrate itself from two measurements. Consider: this calibration procedure works by taking an angle measurement. Whatever angle it measures, a perfectly calibrated level rotated through 180° should read exactly the same in the other direction. If your surface is completely flat, it should read completely flat after being rotated along its length. If it is sloped 1° to the left, it should read as being sloped 1° to the right after being rotated along its length. Any difference between the two is the amount of miscalibration, and that difference will be stored in the level's memory to be added to measurements thereafter.
So if you need to find out if your level is miscalibrated, just put it on a surface, read the measurement, turn it around and read the measurement. They should be the same degree measurement (but in different directions), and if the difference between the two is unacceptable for your purposes you will need to calibrate it. Simple!
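As a sketch of the arithmetic involved (the readings below are made up, and I'm assuming the level treats one tilt direction as positive and the other as negative): if the surface's true slope is s and the level's error is e, the two readings are s + e and -s + e, so the error is half their sum.

```shell
# Hypothetical readings: +0.8° before rotating, -0.2° after rotating 180°.
# A perfectly calibrated level would read equal and opposite; half the sum
# of the two signed readings is the miscalibration.
r1=0.8
r2=-0.2
awk -v a="$r1" -v b="$r2" 'BEGIN { printf "error: %.1f degrees\n", (a + b) / 2 }'
# prints "error: 0.3 degrees"
```

Here the level is out by 0.3° and the surface's true slope is 0.5°; if the two readings had been equal and opposite, the error would be zero and no recalibration needed.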
DNSControl is good software
So like, Google Domains.
I liked Google Domains. I had a bunch of domains registered there. I registered them with Google Domains because whatever Google's faults, they have an outstanding record on security, because they hire some of the best people in the world. It was a very well-designed, easy-to-use service that did what I wanted it to and nothing more, which is exactly the sort of thing Google likes shutting down.
Of course I was notified that Google had sold it to Squarespace, which I was sad about. I intended to do something before my domains were moved to Squarespace, but that was also effort, so I procrastinated and forgot about it, i.e. didn't do it. Because I didn't do it, I was automatically moved onto Squarespace, which didn't work out well.
I used Google Domains because of Google's security posture, and the fact the domain registry industry is a security tyre-fire. Ironically, the sale to Squarespace resulted in a massive security problem which allowed anyone to hijack any account. This proved past-me right, but only in the most irrelevant way possible; using Google Domains directly resulted in a time window where anyone could have hijacked all of my domains.
Squarespace also lacked basic DNS functionality. I know why; their target market is non-technical people, a category which does not include me. But I do not consider complete inability to set a TTL on DNS records to be normal. Nor do I consider inability to export a zone file to be normal.
Squarespace does what it needs to do for the market it targets. It is nerfed as a DNS management tool. That plus a catastrophic security hole on day one made me want to switch to something else, which I did, eventually.
So, I made the obvious error of being on Google Domains in the first place, despite Google's history of randomly killing shit off even if you are paying for it. And I didn't migrate to something else before the Squarespace switchover, because I couldn't be bothered and/or procrastinated. Both of those things were entirely self-inflicted.
Why did I procrastinate over moving somewhere else? Because moving over DNS records was effort. That seems like an insanely trivial reason, because it is. But I have lots else to do, and that one thing to do fell by the wayside. Maybe I could have solved this for the longer term by divorcing my DNS management from my domain registrations, but that too would be effort, and then I would be paying for two things, either of which could disappear like Google Domains did. How, then, could I make moving DNS records between registrars easier?
What would be really neat is if I could manage my DNS configuration as a text file on my computer. That text file on my computer would be the source of truth, to be pushed to whatever DNS provider I am using at the time, so I can swap out providers at will. As with all text files on my computer, it could be managed as a Git repository, so I have the complete history of it available, so if I break something I can easily roll back to an earlier version. Wouldn't that be cool?
WELL IT JUST SO HAPPENS
...that this exists, and also that I eventually get to the point. It's called DNSControl, and I like it.
I make a text file called dnsconfig.js which looks like this:
// These two things are defined in my other config file, `creds.json`, which
// contains my API keys.
//
// The names in the files below do not have to correspond to provider names;
// they map to entries in `creds.json`. It's less effort to make them match
// though.
var REG_GANDI = NewRegistrar("gandi");
var DSP_GANDI = NewDnsProvider("gandi");
D("lewiscollard.com",
    REG_GANDI,
    DnsProvider(DSP_GANDI),
    // Records for the apex domain and www.
    A("@", "138.68.161.203"),
    A("www", "138.68.161.203"),
    // all my other records go here
)
OK, not a text file; it's actually JavaScript with some stuff shoved into the global namespace to make it look like a DSL. Someone could mumble something about Turing-complete configuration files, and I'd mumble something back like "yeah innit". I mean, all programmers dislike the idea of writing configuration files in a programming language in theory, and then we're all secretly thankful that we get variables and loops and conditionals and stuff when someone else implements their configuration file with a programming language. And then we all go around with guilty consciences! I recommend sitting on your hands till they go numb before writing your config file; that way it'll feel like someone else is doing it.
Anyway, there's also a creds.json which looks like this:
{
    "gandi": {
        "TYPE": "GANDI_V5",
        "token": "XXXX",
        "sharing_id": "YYYY"
    }
}
This tells DNSControl which provider I want to use, and has API keys for it. I moved my domains and DNS to Gandi, because they exist, they were easy to migrate to, and they're probably as good as anyone for all I know. The "awesome" comes in when I can switch this to use any one of over 50 providers, meaning that moving my DNS records somewhere else is entirely automated (other than creating API keys for them). Awesome!
Squarespace is notably absent from that list of providers, because they don't have an API for managing DNS records. If it wasn't for this I might have even stayed with Squarespace for my domain registrations, because it is less effort. The security hole was forgivable; all software has flaws. But now that I know DNSControl exists, I don't want to manage my DNS records via a web interface ever again, so I won't.
After I change my configuration, I run dnscontrol preview to see what will be changed, then dnscontrol push to push the changes to Gandi.
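For reference, the whole round trip is just two commands (this assumes dnsconfig.js and creds.json are in the current directory, which is where DNSControl looks by default):

```shell
# Show what would change, without touching anything:
dnscontrol preview

# Apply the changes to the provider(s) named in creds.json:
dnscontrol push
```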
I could make things even fancier than this. I could set up CI, so that pushing changes to my remote Git repository automatically pushes the changes to Gandi, which would save me having to run DNSControl manually after each change. That would also eliminate the risk of my Git repo not reflecting reality. But for now I am happy enough that I can manage my DNS with version-controlled text files; maybe I'll do the CI thing some other day (I was about to write "this is a magic spell which makes things disappear forever", then I realised I've used that joke before).
I like that DNSControl stops you from making mistakes. It could have been a straight translator from zone files to API calls, and fortunately it is not. It has opinions, and those opinions are good because they stop you doing stupid things.
I like that DNSControl is a single binary, which is also a thing I like about Hugo. That means that I can check the binary into the repo via LFS and it'll probably work for as long as I'm using an x86-64 Linux system, which might be as close to "forever" as I care to look.
Also, finally, the documentation. It has it! Lots of it! I never had to Google anything, nor copy-paste magic spells from random GitHub issues and Stack Exchange posts as I do with most other config file formats.
Anyway, this post is really about me and things I like, because that's what this site is, rather than being any kind of useful introduction to DNSControl. But if someone somewhere who hasn't heard of DNSControl reads this and considers using it now, I wouldn't mind that at all.
I migrated this site to Hugo
On and off over the last few weeks I have been migrating this site from a Django app written by me to a static site generator called Hugo. If you don't recognise at least one of the technical words in the previous sentence, then you won't find anything interesting to read here. But if you find anything that is broken on here, it's probably something I broke during the migration, and you might want to let me know.
Migration wasn't hard
I had to move all my old content into the new site while not breaking any links. I used Markdown for all the content in the old Django app (in case I wanted to do something like this some day), so there was not much conversion required there. I had my own special snowflake markup for handling images and YouTube videos, which required a little work to convert.
The core of the migration was 161 lines of Python code (and then another 20 lines of throwaway Python to fix my breaking all the OpenGraph images). If I had entire days to spend on it and an unlimited attention span I might have finished the exporter in a day. I didn't have that, because I have a day job and a stupid car project and a mind that interrupts with "well, I'm going to have to Google this" thoughts unrelated to any particular thing I am doing, but it was much more work thinking about than doing.
The slightly harder part was learning what I needed to learn about Hugo and doing a basic theme for it, but all of that might have been only a couple of long days - if, again, I had been focused on it.
After the migration, I rather wish I had used Hugo from the start. This site was a Django app, because Django is my day job and it is what I know; I figured I would spend more time fiddling with Hugo (or some other static site generator) than I would writing my own Django app. But I definitely spent far more time writing tests for my Django app than I did migrating this to Hugo!
Authoring is nicer
I like writing things in my favourite text editor. It seems a more obvious way to write, because that is what I do with most of the rest of my time at a computer. Actually, that was partly the way I was writing already; I was writing posts in a text editor, then copy-pasting into the Django admin interface when I was finished. And if I needed to do non-trivial edits I was copy-pasting it into my text editor, editing it, and then pasting it back.
This is not because Django's admin interface is bad. Although it wins no beauty contests, it's rather better than most web-based interfaces, because text boxes in the Django admin behave as text boxes have in your operating system for about 30 years, rather than the fallout of React meeting some UX designer's reinvention of "type text into a computer". But to those people for whom "favourite text editor" is a valid sequence of words, writing text into a web browser is not as natural as working on files in one's favourite text editor.
As with all static site generators, Hugo's master copy is just a bunch of text files. A bunch of text files can be version controlled with Git, which is what I do with everything else I write at a computer, and so it's a better way of working for me.
It's much less maintenance
This is the real reason for the upgrade; I wanted less work for myself in the long run. As said, Django is what I know. I like it a lot! But I have little interest in maintaining a Django app when I am not being paid for it.
It's easier to secure a static site than it is to secure a Django app. My web server is just serving some files from disk; it's nothing the unattended upgrades of my OS can't handle. Being a responsible citizen required that I keep the various non-OS dependencies of a Django app up to date. That became a chore I didn't want, and if I'm honest I ended up getting Dependabot email alert blindness.
OS version updates were a source of dread. Serving static files does not change much, if at all, between major operating system versions. Serving a Django app does. That requires, invariably, a major Python version update and a PostgreSQL update as well.
Backups are easier
My old site was backed up multiple ways. Snapshots of the entire virtual machine (what Digital Ocean calls "droplets") were made daily, which cost money. That they would restore to a usable site was a matter of faith. I also periodically pulled the database and media files (such as images) to my local machine, which in turn is backed up elsewhere.
The entirety of the source code of this site is in a Git repository. That includes the theme, and all of the site's text and images. Simply by virtue of how this is deployed, that makes at least three up-to-date copies of the site in normal circumstances: one on my computer, one on GitHub, and one on the server. I'm getting those "for free", but then my local machine gets backed up, which makes four.
I also have the statically-linked Hugo binary checked in to the repo. This gives me some faith that, so long as I am on an x86-64 Linux machine, I will always be able to rebuild the entire site from source in a few seconds and re-deploy to a new server very quickly. This means I am more likely to have backups that can be restored.
Dear God it's fast
Really really fast.
It should load just about instantly on any device and any Internet connection that is likely to be in use. The technical people out there will say "of course it does", because of course it does; it's serving small files from a solid state drive.
The old site wasn't slow. I know how to make Django sites fast, and I did. But, without adding more piles of caching on top, it will never be as fast as directly serving files from a disk.
I decided to make that even faster by inlining all of the CSS for the site on every page. It adds less than two kilobytes per page and saves a render-blocking HTTP request. Fast internet connections won't see the difference, but slow mobile ones will. I also added some magic tricks which make every link within this site get prefetched, so once you actually click on it it'll load even faster.
"Faster" - of the "just serving files from disk" kind - also means I need fewer computing resources to serve it, which means...
It's slightly cheaper
...I could run this on the same host as a bunch of other static sites. Previously it was running on its own virtual server, because I figured a Django app would need it. Virtual servers are so cheap these days that they may as well give them away with Happy Meals, but that's still a few quid saved. I could reduce that to near-zero if I used S3, and actual zero if I used GitHub Pages or some other free host, but I like having a server I control.
Arguably the reason the Django app was maintenance in the first place was because I like having servers, rather than some Docker container running in some platform-as-a-service that I don't entirely understand. As it was, when it was time to upgrade the operating system on my server, it was also time to upgrade the database along with it, and also the Python version. The catch is that when I upgraded the OS on my desktop I had to do the same thing on the server, and vice-versa. That could have been made easier by using a managed database and a Dockerised Python. If I was doing this as a job I probably would. For something I am having fun with, I do not like that loss of control.
I did a bunch of other things while I was there
I have some JavaScript on this site. It's used to handle the privacy-respecting, no-script-friendly YouTube embeds, such as this one. It was originally written in Vue, version 2. That version of Vue went out of support a while ago.
I didn't really want to maintain a JavaScript build system anymore. It felt like effort, and it turned out I could replace it with not many more lines of plain JavaScript than it was in Vue code; it's so small (like 500 bytes gzipped) that I didn't even bother minifying it. I can count on plain JS working in browsers for more-or-less ever. I can't count on a JS build system still working on my machine next year, so I do not feel much like maintaining one for a personal project. This fits the plan of making less long-term work for myself.
Not installing anything from NPM means I don't have any linters anymore. I don't feel good about this, but I feel far worse about the utterly insane dependency tree of ESLint and stylelint, and only slightly less bad about some of the (non-dependency) insanities of JSLint.
I did a minor visual revamp. The home page now lists every post, which means you can find any post by using Ctrl+F, and also go to all of the categories immediately. I also added support for high-contrast rendering, for those who have that preference set in their operating system. And the whole site now uses a monospace font, because I are programmer.
I could have done these things earlier, but I didn't. I didn't consider them meaningful enough differences to be worth doing by themselves. A rewrite of the site seemed like a good time to do that.
Hugo is really good
I like Hugo. Actually I tend to like anything that comes out of the Go programming community; it tends towards generating high-quality software.
I looked into Hugo in the past, and liked it a bit. As I recall, its templating system was not as good back whenever that was as it is now. It had (and I may be misremembering this) nothing that would allow one to override blocks of a base template; instead, it only allowed including other templates, which is not even slightly the same thing. That made it feel nerfed compared to Django or Jinja2 templates. Now that it supports extending from and overriding blocks of other templates, I like it a lot.
Hugo is wonderfully documented and every maintainer should aspire to writing documentation this comprehensive.
Hugo builds to a standalone executable, which should continue to work on any of my Linux boxes into the indefinite future; for as long as I have a libc6 x86-64 Linux system, I'll be able to use the same Hugo binary I'm using today. I checked it in to my repo with LFS in case it disappears.
Anyway
It all works! Please let me know if you find something broken.
The desk chair post, part 2
This was my desk chair.
I wrote about it before.
When I wrote about it before, I mentioned my concern that the much sturdier castors I fitted might end up breaking the no-metal-in-particular that cheap desk chairs swivel bases are made from. It broke a few months later.
Rather than fish another desk chair from a skip, I bought an entire swivel base assembly from Amazon for about £80. It turns out that not just the castors, but these entire assemblies are largely interchangeable between desk chairs. This thought had not occurred to me before! So, I did not have to "un-weld" the baseplate from the subframe as I had every previous time a swivel base had exploded on me. Just plop the old base plate and subframe on top of this...
...and my desk chair was fixed again. Simple!!
But, while I'm there...
Previously, I wrote:
I showed a photo of it to someone earlier today and they said "it needs arm rests". It doesn't need arm rests, but the fact someone thinks it needs arm rests means that it isn't the unquestioned best desk chair in the world.
It still did not have arm rests, so this time around I decided it was going to have arm rests. I had a pair of arm rests, salvaged from the previous donor chair.
Let's make some brackets! This time I bought (rather than salvaged) some steel for the purpose, for about £20. I still have some left over.
That turned into some smaller lengths of steel...
...which, via some dubious MIG welding and Jenolite satin black paint, turned into two slightly-wonky but almost presentable brackets for the arm rests.
Easy! (Just kidding, that took forever, because I am not all that good at this.)
As everything was dismantled (so that I could make a means to fix these brackets to the subframe), I figured I would give the subframe a cleanup and a coat of paint. It looked like this, resplendent in its original brown paint and marker pen assembly-guide scribbles from the first time I built it.
This subframe is an adapter plate between the car seat and the desk chair swivel base. It is almost always out of sight, so it didn't matter what it looked like. Still, I would never tidy it up if I didn't do it now (the proof of this is that it has been unpainted for over a decade). This should have been just a coat of paint, but while I'm there...
...I was never very happy with those unfinished ends, either. They've never bitten me, and I've never seen them so I didn't mind them being ugly, but I always had the thought in my mind that they needed to be capped with something. This was as good a time as any to do it. So, some offcuts, some more dubious MIG welding, and some over-aggressive linishing to make the MIG welding look less dubious...
...and they look a bit better, if you don't really look at them. Which I won't! Because I'm sitting above them.
Still, the subframe that I never see now looks a lot more presentable. And when everything is bolted together...
...it has arm rests! Which was far more effort than it was actually worth, given that it never really needed arm rests. Especially when I came to use it and realised when setting the height for my arm rests I hadn't considered whether that height would allow it to fit under my desk...which meant chopping about 70mm out of the brackets the day after I assembled it all. But still, arm rests! And that, if nobody issues me some other challenge that makes me over-solve another problem that doesn't exist, should make it the unquestioned best desk chair in the world.
The desk chair post
This is my desk chair.
It is a front seat from a 1990 Vauxhall Astra GTE, bolted to a subframe made from steel box section scavenged from some industrial shelving, welded to a swivel-chair base that I found in a skip. I've had some variant of it for over a decade, but I had cause to re-engineer it recently, so I am posting about it now. It's extremely comfortable! It is more supportive than any other chair I have used, including chairs that look like car seats, and including chairs that cost upwards of a grand.
There are two of these desk chairs in existence! My brother was dismantling an Astra GTE that he purchased for an engine donor (MOT-failure GTEs were not worth much more than scrap weight back then, and that is a memory from the "painful to think about" department, next to the working two-door Range Rover I helped dismantle...). I got the seat for free on the condition that I made the other front seat into a desk chair for him as well. That other chair is still in use by a kid in the family as a gamer chair, and that makes me happy.
It has been rebuilt several times. This is because office swivel chairs are made of no material in particular, especially the cheap kind that gets thrown into a skip when it becomes too ugly to use. Usually this does not matter, because the sitting part of a swivel chair is also made of no material in particular, so the system in its entirety has plenty of flex. The Astra seat has a lot of extra weight, and there is no flex in the over-engineered subframe, so swivel chair bases tend to break.
This is what the subframe looks like.
It's not pretty, but you can't see it when you're sitting on it. If you look closely, you can see where I cut out a reinforcing section in the middle in the latest incarnation. This is in a probably-vain attempt to try and un-engineer a bit more rigidity out of the frame. I might try speed holes next.
I had a stroke of luck last time this broke a swivel-chair base. The base collapsed, and literally minutes later I saw my neighbour throwing a shitty-looking desk chair into a skip. I'll have some of that, thank you.
This time around, I decided to give the Astra seat a deep clean after reassembling the chair. I did not know how badly it needed one. This little thing is a game changer:
It's a brush attachment for a drill, which you can buy for about £15 on Amazon as part of a set. It demolishes baked-in cat fluff and everything else on a seat that a vacuum cleaner won't touch. I was impressed.
Everything mentioned so far, I acquired for free. This time around, I have got some improved castors for it, to replace the usual scratchy-sounding castors that you get on cheap desk chairs.
These have roller bearings, seem to be made of actual metal, and their wheels are made of a material not entirely unlike that of the small bouncy balls we had as kids that could be launched at the floor and would rebound to the height of a four-storey building. They roll very nicely. They're also a lot stronger than flimsy desk chair castors. This isn't an unqualified good. See also what I wrote about the subframe earlier; they don't flex, which means they transfer forces elsewhere. That might cause the swivel base to break earlier than it would otherwise. We'll see!
You may have noticed that it does not have arm rests. You are not the first. I showed a photo of it to someone earlier today and they said "it needs arm rests". It doesn't need arm rests, but the fact someone thinks it needs arm rests means that it isn't the unquestioned best desk chair in the world. So maybe that is a project for another day...
Cracks
The easy, snarky post would be "it's the current year and there are still systems out there that can't handle non-ASCII characters", on Twitter. Snark is boring! Pretending to be offended is sad!
Years ago, there was a blog (whose name I forgot) that documented cracks in the system like this; tiny reminders that systems that we take for granted, that we have little cause to give thought to, are oft computer systems written by humans, and humans make mistakes. I enjoy these reminders.
Conjecture: the text was originally written in some older version of Microsoft Word. Word converted the apostrophe in haven't to a "smart" quote, and that got pasted into a system that didn't understand it. Still, the system worked, I got a receipt, KFC didn't buckle under the weight of a thousand UnicodeDecodeErrors, and the world kept turning. :)
"Ain't that some bullshit", 365-to-Migadu-migration edition
TL;DR: Use imapsync with DavMail.
So my subscription for Office 365 (or whatever it's called these days) was due for renewal in January. I only ever used it for email on my domain. Given how increasingly hateful the Outlook web client has been in recent years (worst among them was when it randomly disabled the ability to send plain-text email for some time), and my general desire to move as much of my life away from software I have no control over, I decided to migrate my shit to Migadu. Partly because Migadu is kinda cheap, partly because of a good recommendation, but mostly because I really like their beautifully simple and functional website. I'm shallow!
Obviously, I didn't want to lose any of my email in this process. What should have happened is that I would enable IMAP on Office365, plug in some command line options to imapsync, and by the magic of open standards email would be moved from one place to another. Open standards are good! People who write open source software are awesome!
Of course, one half of that involved dealing with Microsoft. That and the rule of time estimates (increase the unit, halve the quantity) and the rule of everything is bullshit and never fucking works meant that I wasted more than a few hours trying to make that happen. Greetings from 3am!
The short version is if you use Office 365 in the same manner I did, which is that you were both the administrator of the domain and a user on the domain (because it was for personal email and you're the only person on the domain), nothing you do will make IMAP work for your account. Not if you navigate six levels of Enterprise-ness and a UI with decade-old branding to enable 2FA and then to enable "app passwords", not if you go disable security defaults and probably still nope if you run some PowerShell magic spells that I've seen around.
Give up trying to make IMAP work directly and use DavMail. I did!
DavMail is a proxy between Microsoft's proprietary protocols and protocols such as IMAP that the rest of the world uses.
There's a package in the Ubuntu repositories called davmail, but because everything is bullshit and never fucking works, the Ubuntu package is broken unless you install the openjdk-11-jre package first (through no fault of DavMail's authors).
Anyway, once you've worked out that last bit from reading some random Debian bug report, you will want to go into DavMail's settings, under the "Main" tab, and change "Exchange Protocol" to "O365Interactive".
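If you'd rather not click through the GUI, the same setting lives in DavMail's properties file. A sketch, assuming the default file location and the stock port numbers (check your own file before trusting any of this):

```properties
# ~/.davmail.properties -- the GUI writes this file; you can edit it by hand
# "Exchange Protocol" on the Main tab corresponds to davmail.mode
davmail.mode=O365Interactive
# Local IMAP listener; 1143 is the default, which is what imapsync connects to
davmail.imapPort=1143
```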
Then run a magic spell...
./imapsync \
--host1 localhost \
--port1 1143 \
--user1 'you@dealingwithbullshit.com' \
--debugimap1 \
--password1 'hunter2' \
--host2 'imap.migadu.com' \
--user2 'you@dealingwithbullshit.com' \
--password2 'hunter3'
...and DavMail will pop up with a link to open in your browser, which will in turn redirect to a URL that you copy-and-paste back into DavMail. Some time later your mail will appear in Migadu, or whatever else you decide to migrate to.
So, that was some bullshit. But it's done now! And I only have to do that once. And if I ever do have to do it again I won't waste hours of my time and hours of someone else's time trying to work out why IMAP isn't working, because, reminder, if your account is an admin account for an O365 domain it absolutely never will.
Anyway, imapsync and DavMail are good! They made me happy! I've only been dealing with mail migration bullshit for one long night, but the authors of both have been dealing with it for years. Here's a figurative toast to those fine people, and here's to many years of Migadu and me!
Self-archaeology and the Internet Movie Car Database
This was my mum's Mercedes 220. It was glorious.
She bought LRO 468L for £500 in either 1989 or 1990, because back then it was merely an old car (though cars aged much quicker 30 years ago). It was beautiful, silent, luxurious, and very wafty. I loved it, and everyone else did. She sold it a few months later for slightly more than she paid for it, because it needed welding work on the floorpan. A very young Me did not talk to her for a day after that.
It was last MOT'ed in 1990, so we can probably conclude it does not exist anymore (or, to avoid offending those of you who believe in the laws of thermodynamics, exists in an entirely different form). A W114 or W115 is still on my bucket list of cars to own. It might even be the next project I build, if my next project starts before the prices of these go through the roof.
Fast-forward just a few years. Recently, someone pointed out that if you search for a registration plate on Google Images, there's a good chance that it will find a photo of that car, because Google indexes any text it finds within an image, and may notice the text on the numberplate. Like ANPR, but for everything.
It worked on my car. Among others, it found a photo taken at the late Rockingham Motor Speedway, which my brother (the previous owner) took in 2007 back when "camera phone" still meant "thing with a dialpad", and posted on a forum in 2008.
It worked when I entered the registration of this Mercedes, too. I know the registration off by heart, because my memory is weird. I remember a Windows 95 product key that I last used in anger in 1997, and the registration of my mum's car from 31 years ago, and sometimes draw a blank when I have to enter my PIN into a cash machine.
The first result was the picture you saw at the top of this article. I uploaded that photo to Wikimedia Commons over 15 years ago, and things often spread to weird and unexpected places when you do that. (I'll tell you the story about the wall art in the bogs at Downham Market station some other time...)
The second and third images were the offspring of obsessive categorisation at scale. I would hope any petrolhead would know about the Internet Movie Car Database, wherein a (presumably vast) number of very dedicated people are aiming to identify every car in every film & television program. If your car appeared in the background of, e.g., an episode of The Bill in December 1984, then there's a good chance someone has captured and categorised it.
Well how about that!
But wait: It's brown! Or at least looks brown. Did it get a respray before my mum owned it? Or did a worn 80s VHS tape not reproduce the glorious red that it was? I won't ever know the answer to that, and I am okay with that.
The cars we knew in our youth are, or will be, almost all lost to time and entropy; only a very few are lucky enough to be recommissioned or restored. But this Mercedes was lucky to have its few seconds of fame on the small screen, and was immortalised, to some very tiny extent and entirely accidentally, by some extraordinarily committed people on the Internet. That's more than most cars will get, and that is good enough for me.
Wordoid is crack cocaine for word nerds
Your deactivated Facebook account really is reactivating itself
There are a lot of posts & questions & other things in various places online wherein people complain about this happening to them. Usually there's someone there to tell them that nope, it's something you did. I can guarantee that it is not necessarily something you did, because it happened to me (which totally changed my opinion about Facebook being an awful company), and I went to specific lengths to ensure that Facebook would not reactivate my account when I deactivated it. Specifically:
- I did not sign in from any other devices. I do not have the Facebook app on any of my phones, and never have. Only two things have accessed it: my desktop computer and my laptop. After deactivating my account, I cleared the cookies on my laptop to be sure.
- I do not have any apps that could have reactivated it, because there are NO apps connected to my account.
- It is entirely implausible that anyone could have logged into my account with my credentials. My password is extremely long and complex (it is "hunter2"[1]). In the unlikely event that someone guessed a 28-character password, they decided to...do absolutely nothing with my account. But I'll reiterate, nobody guessed my password.
- From what I have read, according to some rando on Quora, there is a "This is temporary, I will get back soon" checkbox?! Hmm... I'm not actually saying this is impossible. But when I deactivated my account I specifically checked for sneaky things like this because Facebook is an awful company staffed with terrible people, and if I trust them with anything it is that they will reliably do whatever thing gives nice values on the "Engagement" axis on a graph.
So...either Facebook does in fact randomly reactivate your account, or Facebook is doing some sneaky shit in which you have "agreed" to randomly reactivate your account. They are either betraying you, or they are using dark patterns during account deactivation such that someone who is actively looking for dark patterns is tricked into "agreeing" that they will reactivate it. Either way, it is being reactivated without your consent. Anyone that tells you otherwise is gaslighting you.
[1] Kidding.
"consciousness causes collapse"
From HN wag __s:
[T]his is why quantum computers have to be kept at very low temperatures as opposed to asking everyone to look away from the computer while it operates