I Hate Moving

I’ve restored all my data to the new drive, restored the previous backups to a new location, and reconciled which pieces of those backups to keep and which have been replaced, so now I can set backups running again and everything’s back to normal. I just have to watch how much data I put on this drive, since it’s an 8TB drive but I only have a 4TB backup drive for now.

But, since I’ve merged two drives into one, a lot of stuff has moved around, and in particular, a lot of games that were previously installed to D:\Games are now under C:\Games, not where they were originally installed. So, is this a problem? It depends on where they came from…

Steam:
Steam and any games installed through it are actually pretty easy to move around. Move or restore the entire ‘Steam’ directory to its new location, go there and run ‘steam.exe’ directly, and it’ll grind away for a while repairing things, then pop up and continue on as normal. Piece of cake, and this alone accounted for around half the games I had.

GOG Galaxy:
Games installed from GOG were a bit trickier. The launcher was already installed on C:\, but upon running it, the list of installed games was now empty. Fortunately you can tell it to scan folders for previously installed games, and after selecting the new C:\Games location, it churned away for a while and then most of the games suddenly reappeared in the list and worked fine. There were a couple quirks: We Happy Few didn’t initially get found, but rerunning the scan on just its folder then managed to find it. And Disco Elysium just could not be detected no matter where I tried to scan, so I’ll probably have to reinstall it.

Epic Games Launcher:
Dealing with the Epic launcher was also a bit tricky, but in a different way. The games disappeared from the install list, as expected, but there’s no way to scan for existing installs. Instead, what you have to do is set the default install directory to somewhere on the new drive, go to install a game, find the directory name that it would install to, and move your copy of the game’s files to that same location before starting the install. Then, start the install, and the launcher will recognize that ‘oh, there are some files already here’, and do a verify/repair on them instead of a full install from scratch.

Origin / EA App:
The Origin launcher has been replaced by the new ‘EA App’, which unfortunately seems to have dropped the option to tell it that a game has moved to a different location, something I think the old launcher had. Trying the install-swap trick that worked with the Epic launcher didn’t pan out either, so there doesn’t seem to be a good way to get it to reuse my existing install files and I’ll have to reinstall all of these from scratch. Not great, EA.

MMOs:
I had good luck with MMOs, though. Every single one I have installed (Guild Wars 2, EQ, EQ2, Elder Scrolls Online, Final Fantasy XIV, and Lord of the Rings Online) just needed its launcher run from the new location, and everything worked fine; no repair or reinstallation or anything needed.

Others:
There are also a bunch of other games that were installed on their own rather than from a launcher, or were installed from disc, or needed some modding to run on modern systems, and I’m still not sure how many of these are affected by the move. I’ll have to deal with them on a case-by-case basis, whenever I get around to playing them. My modded Minecraft instances are fine, for example, since MultiMC doesn’t really care where it’s installed, and I just had to adjust the path to the Java runtime that I previously had on D:\. I have a heavily-modded Skyrim install and had to change a bunch of drive letters for paths in Mod Organizer 2’s .ini file.

So, there are a small handful of games I’ll have to reinstall from scratch, but otherwise, having to move a whole ton of games from one drive to another hasn’t been too disruptive. Way less of a hassle than I was expecting, at least.

Data Chaos

I really need to stop slacking when it comes to maintaining my PC.

Some time ago, the boot drive on my gaming PC started getting bad blocks, so I swapped it out for a spare laptop drive that I had handy. It was a smaller drive, and not exactly high performance, but it would do in a pinch until I had a chance to replace it properly. I’ve been meaning to do a complete PC upgrade anyway, so it would just have to last until then. Using a smaller drive meant that I couldn’t restore everything from my backup of the old drive, which in turn meant I couldn’t run any new backups or I’d lose the data I hadn’t been able to restore, and I didn’t have enough room on the backup drive for a parallel backup, so I just turned the backups off. After all, this is purely temporary, right?

And now, many months later, I still haven’t done that PC upgrade… Instead of backups, I’d occasionally copy the most important files from under my user profile to some spare space on my Linux server, just in case. Now I’m getting block errors on my secondary drive, too… The secondary drive is mostly just game installs, so 95% of it can simply be redownloaded from Steam or GOG or wherever, but there is some unique data on there too: modded Minecraft installs and worlds, saves for games that don’t put them under the user profile, some game installs that didn’t come from downloads but had to be modded or cracked to get them working on modern systems, and various other miscellaneous files. And since the backups were disabled, these files have gradually drifted out of sync with what’s in the backups, and now this drive is potentially failing.

Then I remembered that I actually have a spare 8TB drive that I’d never wound up using for anything. When I ordered it, I was completely reworking how storage was allocated across both the PC and Linux box, but by the time I got it, I realized that I couldn’t use it as a main drive in the gaming PC because I didn’t have enough backup drive space to cover it. I couldn’t use it for storage on the Linux box because I didn’t have a second one to pair with it for mirroring, and I couldn’t use it as a backup drive since I didn’t have a spare enclosure, so it just sat on the shelf of my parts closet.

So now I’m trying to clone all of the data off of both the boot and secondary drives onto this 8TB drive and make it just the single main drive. That won’t be the end of the trouble, though. I still won’t be able to back up this drive, and any data copied from the secondary drive might be affected by bad sectors (I’ll have to keep an eye on any copy errors). And I still need to do the PC upgrade, and when I do so I’ll have to take the data on this drive, the manually copied profile files, and the older backups, and reconcile all of them.

The lesson here is not to put off any important data management tasks, figuring that you can sort them out later on. If I’d just replaced that boot drive with a proper one right away, I’d have been able to keep proper backups going and avoided all this mess.

Back to Abnormal

Yay, I finally tested negative for covid today. I still feel a little congested and maybe slightly brain-foggy, but it’s apparently not unusual for it to take a while to fully clear up. Here’s hoping that Long COVID doesn’t become a problem…

By chance, this period largely overlapped with a week and a half of vacation time that I’d booked. So at least I didn’t have to try to work through it, but I also didn’t get much of anything done. I didn’t even get much gaming in, mostly just collapsing into my comfy chair to watch videos and streams for most of the day.

I really need to get back to, y’know, actually doing stuff.

*cough*

It’s been 8 days since my symptoms started, and the current AHS guideline is that isolation can end at 5 days after symptoms start, so I was hoping for a little bit more normalcy, to start getting back in the habit of daily walks, etc.

I masked up, bathed in hand sanitizer, and visited the grocery store to finally get some fresh food and also to get a new set of rapid tests, since I’d used my last one on that positive test. Got home, used one of the tests, and…it’s still a strong positive. :< It’s kind of unclear exactly when I should expect tests to start coming back negative, but to be safe I guess I’ll isolate for another 5 days and test again.

I am feeling better, at least. No more fever since last Thursday, just some lingering coughing and congestion, and taste might be returning a bit.

Dangit

Welp, I caught COVID.

I felt like I was still being careful, staying home most of the time (a stretch for me, I know…), masking indoors, etc., and was basically only going out to do grocery shopping, walks for exercise, and some required in-office work attendance one day per week. But, on Sunday I started feeling terrible in the early evening, figured I’d try a rapid test on the off chance, and there was the line. Partly my fault though, as I’d been diligent about my first three vaccine shots, but then started slacking on getting the bivalent booster.

It’s definitely the sickest I’ve felt in probably 10-15 years. Not necessarily in intensity, but in length: the spells of fever are lasting longer than they usually do. I’m usually over the worst of it within a day or two, but this is day 4 going on 5 now. Mostly it’s just headaches, an occasional cough, sore throat, and congestion, but there’s the occasional curveball; the theme for my first night of trying to sleep while sick was “oh god why can’t I stop peeing”. I haven’t lost my sense of taste, but it’s a lot harder to smell things, though I’m not sure how much of that is from plain old congestion. (update: welp, fully lost smell and taste a day later)

And today I had something weird happen: I got up to check my fever and put the thermometer in my mouth, felt a mild pit in my stomach suddenly swell up, and started to completely lose my train of thought as the world seemed to become brighter. All I could think to do was stumble into the bedroom and collapse on the bed, where over the next minute or so the brain fog gradually lifted and I started feeling normal again, and finally noticed I still had the thermometer in hand (reading 37.5C). I had to look up the symptoms afterwards, and it turns out that that’s what fainting actually feels like. Huh. First time in my life.

I seem to be feeling better tonight at least, though it wouldn’t be the first time it’s gotten worse again by the next morning. I’ll be hunkering down for a little while longer, at least; the current guideline is to isolate for five days from the onset of symptoms, but that seems a bit short? That would be up tomorrow and the symptoms haven’t even cleared yet. Oh well, I’ve got canned food for weeks…

Airhead

I got tired of complaining about my old laptop (and it was falling apart), so I got a spiffy new M2 MacBook Air. It’s pretty nice. I’m not thrilled about dropping from a 15″ screen to 13″, but eh, I’ll live with it; the Pros are getting out of my price range anyway. I’m still getting used to the keyboard and trackpad just because they’re so different from my old 2010 MBP, but they definitely work well. I just have to break the habit of pressing hard, since my old pad had gotten so stiff. Another thing to get used to is that there’s basically no gap between the pad and the keyboard, where I used to rest my fingers a lot.

I figured I’d give Minecraft a whirl and thought I’d be a clever boy and get one of the native ARM Java distributions for setting up modded instances, no Rosetta for me! And then it tries to load native LWJGL libraries, which are still x64 for older Minecraft versions, so back to Rosetta after all… Still, it managed to get 30-50ish FPS under Rosetta with a heavy 1.7.10 modpack, not too shabby. The latest Minecraft version is supposed to be fully native so maybe I can experiment with mixing libraries.

One annoyance so far was that as part of initial setup, it asked what I wanted to sync with iCloud, and I unchecked everything I could. But later on, after migrating files over, I got an email telling me that my iCloud storage was now full. I can’t remember if it was even part of that initial setup, but apparently it had just gone and synced Photos by default. Then, after deleting them in iCloud, you get a scary email about how you’d better redownload the originals from iCloud because they’re no longer on your device, but I double-checked and they are still on the laptop. I think that message is mainly for phones and tablets?

And, it was a bit odd that when moving some files around by command-line, even Terminal would pop up with prompts like “Terminal wants to access the files in Documents. Do you want to allow this?” It only asks once and only for the ‘major’ folders, at least. (Since my MBP was stuck all the way back on High Sierra, I still have to get used to all the OS changes since then too.)

Other miscellaneous bits: dang, it’s small and light. The notch is weird but not a big deal; full-screen apps just don’t use that area. The speakers are definitely louder than my old MBP’s, where even max volume was a bit too quiet for some videos and streams. I kinda wish the MagSafe plug was the L-shaped one instead of straight-on, but that’s just due to how my space is set up. I’m still amazed by being able to open Calibre in four seconds instead of a full minute.

Rough Trip

I just got a package from UPS that was a bit of an adventure. They first notified me that “hey, delivery should be today!” last Thursday, but upon checking the tracking, it was still in Anchorage, Alaska. I kept an eye out just in case the tracking was way behind, but nope, it didn’t arrive that day. Same notification the next day, though now it was in Louisville. It finally made it into the correct country by the end of the day, but nope, missed that ‘delivery window’ too. So that’s two ‘delivery windows’ that were physically impossible to meet according to their own tracking data.

On Monday it got marked as out for delivery! Woohoo! And then soon afterwards went into “waiting for release by a non-UPS broker…” Usually that means customs, but shouldn’t you have cleared that before telling me it was out for delivery? And then a couple hours later it went into “Address is incorrect” status, when it looked perfectly fine to me. It sat there until the end of the day, when it finally went into “Address corrected, will be sent out soon”. They basically changed “1234 Fake St., unit 404” to “404 – 1234 Fake St.” You had to delay it a day because you couldn’t handle the first format? The format that Big Company would have given it to you in? Are you a shipping company or not?

I finally got the package today without further incident, though I was a bit worried since deliveries here are a bit of a crapshoot. There’s no public way into our building, not even to get to the buzzers or intercom, so what do delivery people do? I try to give them a phone number to call so I can come down and grab the package, but it’s not clear whether a company like UPS would do that, or whether they even had my number, since this was shipped on behalf of another company. You can’t add a number to the delivery instructions either; in fact, the instructions couldn’t be changed at all, not even to redirect the package to a local UPS Store, since the shipper had locked them in. Fortunately, from my living room I could keep an eye out for arriving vehicles and give the delivery guy a “Hey, over here!” from the balcony as he was trying to deal with the locked front door. Even though it meant spending all morning running to the window at the slightest hint of noise… (I seem to have logged about 1600 steps doing that, at least.)

Getting a package should not be this much hassle, which is why I preferred having things delivered to the office when possible, but with working from home…

Hair-B-Gone

Dang, that’s a load off.

It’s probably been, oh, four years since I last had a haircut. I was already overdue for one, and then COVID hit, and I’m still not entirely comfortable with close contact. So…today I finally just said screw it, found my shears, and snip-snip.

I’m sure it’s a terrible job, I could not for the life of me get things even, and it’s still long in the back to hide all sorts of sins. But at least it’s now just covering-the-neck-long, and not down-past-my-nipples-long…

Proving Myself

I just upgraded this server several major distro versions in a row, so hopefully nothing’s too broken…

I also took the opportunity to tighten up some of the security, e-mail in particular. It’s kinda weird to be running your own local e-mail server nowadays, and it feels a bit too fragile to use for anything really important, but it’s nice to have a contact point that’s not thoroughly controlled by Google or Microsoft or whoever, and I may as well do it right.

So now it should be using TLS to encrypt all outbound mail connections, DKIM signing is set up, and a DMARC policy is published. Not that I send a lot of e-mail from here, but this should help prove that e-mails from this domain really are legit, and spammers won’t be able to forge addresses from it.
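For the record, the DNS side of that boils down to a couple of TXT records along these lines (the selector name, domain, key, and policy values here are placeholders, not my actual records):

```
; DKIM public key, published at <selector>._domainkey.<domain>
mail._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

; DMARC policy: reject mail that fails DKIM/SPF alignment,
; with aggregate reports sent to the given address
_dmarc.example.com.           IN  TXT  "v=DMARC1; p=reject; rua=mailto:postmaster@example.com"
```

The mail server then signs outgoing messages with the private half of that DKIM key, and receivers check the signature against the published public key and apply the DMARC policy to anything that fails.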

It kind of sucks having to SSH in here every time I want to send an e-mail from this domain, though, so now the question is whether I want to run the risk of running an IMAP server so I can manage e-mail remotely…

Life

Oh, the sun, it burns. But I must let it burn, for I need to get outside and get that daily exercise in, no matter how hot and sweaty I get.

For a while now I’ve been getting back into the habit of taking walks. I used to take them around the edge of SAIT, but then COVID hit and I fell out of the habit. That path around SAIT is blocked by construction now, so instead I’ve been going down along the Bow River, first just down to the Peace Bridge, and then down to Prince’s Island Park, for a total walk of around 4.4km. And I try to do it at least every 2-out-of-3 or 3-out-of-4 days. It was rough going at first since I was so badly out of shape, but it’s been slowly getting easier and I’ve been getting better at controlling my breathing.

I’ve also been trying to get my weight under control again. With these walks, and cutting out nearly all snacks, and trying to buy healthier meal options, I’ve lost about 6kg over the last couple months. Still a long ways to go, but it’s a start.

This renewed drive for healthiness wasn’t entirely unprompted, though. A couple months ago I had what was, well, not an actual heart attack but some kind of mild cardiac episode. I was dumb, though, and didn’t seek immediate treatment. It was “only” a feeling of mild discomfort around the heart, so it was easy to brush off as “well, it can’t be that serious, I’ll go for help if it gets any worse…” And, honestly, I was kind of too ashamed to seek help. Yup, this blob of a human being hasn’t been taking very good care of himself, of course his arteries are clogged to hell, why should doctors even give a shit if he lets himself get this way…

But it has been a bit of a wake-up call; taking care of my health is now less of an abstract thing that I acknowledge but let float around in the back of my mind, and more of a “you will get healthier or you will die” imperative. I’ve been feeling mostly back to normal lately, so the biggest danger is that heightened sense of urgency dissipating and my falling back into old habits again. It’s certainly not the first time I’ve tried to make the necessary lifestyle changes, but maybe, hopefully, this time it’s serious enough to permanently stick.

Back to Basics

I’ve been feeling stagnant lately. In a lot of ways, but professionally in particular. I’ve been working at the same job now for quite a while, doing fixes and enhancements on old codebases, in only a small team, which imposes various limits. I largely have to stick to the languages already in use in the projects, changes have to fit within the existing, often poor-quality designs, new components and large-scale changes are infrequent, they’re fairly “old-fashioned” applications where newer techniques aren’t really applicable, etc. And with a small team there’s no real feedback as to whether what I’m doing is actually any good or not, so who knows what bad habits and antipatterns I’ve been picking up and relying on.

I’ve never really branched out on my own time, either. Despite having been a computer nerd most of my life, I’m not one of those who spent all day coding at the office, and then went home and coded all evening as a hobby. There’s sometimes a perception that you really should be coding 24/7, work and home, or you’re just crippling your own growth and career, but…eh, I played Minecraft instead.

So, I’ve been thinking that I should probably get back to fundamentals, try and approach things from a fresh perspective and learn anew. I don’t have a specific project to work with yet, so I’ll start with reading some of the books I’ve seen recommended in various places, including:

Structure and Interpretation of Computer Programs – An introductory work, so I’ll be familiar with a lot of it already, but I’m sure there’ll be new stuff as well. And it’ll be from a ‘functional programming’ perspective, which I’ve never really investigated before and is a significantly different way of approaching programming, so I’m hoping to broaden my horizons there.

A Philosophy of Software Design – I know one of my weak points is how to approach the overall design of a program, since 99% of the time I’m working stuff into an already-existing design. Having to envision and construct the entire design yourself is a lot more intimidating, so I’m hoping for good advice here.

Code Complete, 2nd Edition – A good book focused more on the lower-level nuts-‘n-bolts of programming. I’ve got plenty of experience there, but maybe this can help point out where my habits run contrary to good practice and recommendations. (I’ve actually had this book for a while, but it’s a massive tome and I’ve barely made a dent in it. Oh wait, my copy of this is stuck at the office…)

Sweet 17

Up to now I’ve had to write fairly plain C++ due to having to support a wide variety of environments and compilers, but we’ve recently finally gotten everything in place to support at least C++17 in most of our projects, so hey, it’s time to learn what I can do now!

Structured Bindings

One of the big new features is structured bindings, which can split an object apart into separate variables. It’s not all that useful on plain objects, since you may as well just reference them by member name, but it really shines in a few specific scenarios, like iterating over containers such as maps, where you can do:

// Old way:
for(auto& iter : someMap)
{
    if(iter.first == "something")
        doStuff(iter.second);
}

// New way:
for(auto& [key, value] : someMap)
{
    if(key == "something")
        doStuff(value);
}

so you can give more meaningful names to the pair elements instead of ‘x.first’ and ‘x.second’, which has always irritated me. It’s a small thing, but anything that improves comprehensibility helps.

The other nice use is for handling multiple return values. Only having one return value from a function has always been a bit limiting; if you wanted a function to return multiple values, you had a few options with their own tradeoffs:

  • Kludge it by passing in pointers or references as parameters and modify through them. This in turn meant that the caller had to pre-declare a variable for that parameter, which is also kind of ugly since it means either having an uninitialized variable or an unnecessary construction that’s going to get overwritten anyway.
  • Return a struct containing multiple members. A viable option, but now you have to introduce a new type definition, which you may not want to do for a whole bunch of only-used-once types.
  • Return a tuple. Also viable, but accessing tuple elements is kind of annoying since you have to either use std::get, or std::tie them to pre-existing variables, which runs into the pre-declared variable problem again.

Structured bindings help with the tuple case by breaking the tuple elements out into newly-declared variables, making them conveniently accessible by whatever name you want and avoiding the pre-declaration problem.

// old and busted:
bool someFunc(int& count, std::string& msg);

int count;       // uninitialized
std::string msg; // unnecessary empty construction
bool success = someFunc(count, msg);

// new hotness:
std::tuple<bool,int,std::string> someFunc();

auto [success, count, msg] = someFunc();

The downside is that the tuple elements are unnamed in the function prototype, which makes them a bit less self-documenting. If that’s important to you, returning a named struct into structured bindings is also a viable option; the prototype then names the members, and an IDE can show their names and order at the call site.
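For instance, a named-struct version of the earlier someFunc might look like this (the names here are just for illustration):

```cpp
#include <string>

// The return type now documents each field by name, and the caller
// can still unpack it with structured bindings:
//   auto [success, count, msg] = parseThing();
struct ParseResult
{
    bool success;
    int count;
    std::string msg;
};

ParseResult parseThing()   // hypothetical function for illustration
{
    return { true, 3, "ok" };
}
```

You pay for it with an extra type definition, but for functions called from more than one place the self-documenting prototype is usually worth it.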

Update: You could also return an unnamed struct defined within the function itself, so you don’t need to declare it separately. You can’t put the struct declaration in the function prototype, but you can work around this by returning ‘auto’. This should give you a multi-value return with names visible in an IDE. The downside is that you can’t really do this cross-module, since it needs to be able to deduce the return type. (Maybe if you put it in a C++20 module?)

auto someFunc()
{
    struct { bool success; int count; std::string msg; } result;
    ...
    return result;
}

...
auto [success, count, msg] = someFunc();

Optional Values

One of the scenarios I started using structured bindings for was the common situation where a function had to return both an indication of whether it succeeded or not (e.g., a success/failure bool, or integer error code), and the actual information if it succeeded. The trouble when you’re returning a tuple though is that you always have to return all values in the tuple, even if the function failed, so what do you do for the other elements when you don’t have anything to return? You could return an empty or default-constructed object, but that’s still unnecessary work and not all types necessarily have sensible empty or default constructions.

That’s when I discovered std::optional, which can represent both an object and the lack of an object within a single variable, much like how you might have used ‘NULL’ as an “I am not returning an object” indicator back in ye old manual memory allocation days. The presence or lack of an object can also represent success or failure, so now I find myself often returning an std::optional and checking .has_value() instead of separately returning a bool and an object when it’s a simple success/failure result. If the failure state is more complicated or I need to return multiple pieces of information, then structured bindings may still be preferable.
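As a minimal sketch of that pattern (findUser and its behavior here are made up for illustration):

```cpp
#include <optional>
#include <string>

// Hypothetical lookup: on success the optional holds the result; on
// failure it holds nothing, and no object has to be default-constructed
// just to fill out a tuple slot.
std::optional<std::string> findUser(int id)
{
    if(id == 42)
        return "arthur";    // success: the optional holds the value
    return std::nullopt;    // failure: the optional is empty
}
```

The caller then just checks .has_value() (or tests the optional directly in a condition) instead of unpacking a separate bool.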

It’s also been useful where a rule or policy may be enabled or disabled, and the presence or lack thereof of the value can represent whether it’s enabled or not. (Though if it can be dynamically enabled or disabled then this might not be appropriate since it doesn’t retain the value when ‘unset’.)

struct OptionalPasswordRules
{
    std::optional<int> minLowercase;
    ...
};

if(rules.minLowercase.has_value())
{
    // Lowercase rule is enabled, check it
    if(countLower(password) < *rules.minLowercase)
       ...
}

Initializing-if

Another new feature that’s been really useful is the initializing-if, where you can declare and assign a variable and test a condition within the if statement, instead of having to do it separately.

if(std::optional<Foo> val = myFunc(); val.has_value())
{
    // We got a value from myFunc, do something with it
    ...
}
else
{
    // myFunc failed, now what
}

The advantage here is that the variable is scoped to the if and its blocks, avoiding the common problem of creating a variable that’s only going to be tested and used in an if statement and its blocks but that then lives on past that anyway.

Variant Types

This one is a bit more niche, but I’ve been doing a bunch of work with lists of name/value pairs where the values can be of mixed types, and std::variant makes it a lot easier to have containers of these mixed types. With stronger use of initializer lists and automatic type deduction, it’s even possible to do things like:

using VarType = std::variant<std::string,int,bool>;
std::string MakeJSON(const std::vector<std::pair<std::string, VarType>>& fields);

auto outStr = MakeJSON({ { "name", nameStr },
                         { "age", user.age },
                         { "admin", false } });

and have it deduce and preserve the appropriate string/integer/bool type in the container.
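To make that concrete, here’s a rough sketch of how a MakeJSON along those lines might dispatch on the variant with std::visit; this is just my illustration (a real version would also need proper JSON string escaping):

```cpp
#include <string>
#include <type_traits>
#include <utility>
#include <variant>
#include <vector>

using VarType = std::variant<std::string, int, bool>;

// Sketch only: std::visit picks the right formatting for whichever
// type each variant currently holds. String escaping is omitted.
std::string MakeJSON(const std::vector<std::pair<std::string, VarType>>& fields)
{
    std::string out = "{";
    bool first = true;
    for(const auto& [name, value] : fields)
    {
        if(!first) out += ",";
        first = false;
        out += "\"" + name + "\":";
        std::visit([&out](const auto& v) {
            using T = std::decay_t<decltype(v)>;
            if constexpr(std::is_same_v<T, std::string>)
                out += "\"" + v + "\"";       // string value, quoted
            else if constexpr(std::is_same_v<T, bool>)
                out += v ? "true" : "false";  // bool before int, since
                                              // to_string(bool) prints 0/1
            else
                out += std::to_string(v);     // int value
        }, value);
    }
    return out + "}";
}
```

Note that the bool case has to be checked before the int case in the if-constexpr chain, or booleans would be formatted as 0/1 instead of true/false.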

String Views

I’ve talked about strings before, and C++17 helps make things a bit more efficient with the std::string_view type. If you use this as a parameter type for accepting strings in functions that don’t alter or take ownership of the string, both std::string and C-style strings are automatically and efficiently converted to it, so you don’t need multiple overloads for both types. It’s inherently const and compact, so you can just pass it by value instead of having to do the usual const reference. And it can represent substrings of another string without having to create a whole new copy of the string.

bool CheckString(std::string_view str)
{
    // Many of the usual string operations are available
    if(str.length() > 100) ...
    // No allocation cost to these substr operations
    if(str.substr(0, 3) == "xy:")
    {
        auto postPrefix = str.substr(3);
        ...
    }
}

std::string foo("slfjsafsafdasddsad");
// A bit more awkward here since it has to build a string_view before
// doing the substr to avoid an allocation
CheckString(std::string_view(foo).substr(5, 3));
// Also works on literal strings
CheckString("xy:124.562,98.034");

The gotcha is that a string_view can’t be assumed to be null-terminated the way a plain std::string can, so it’s not really usable in situations where I genuinely need a C-compatible null-terminated string. Still, wherever possible I’m now trying to use string_view as the preferred parameter type for accepting strings.
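When I do hit one of those C APIs, the workaround is to materialize an owned std::string from the view first (the helper name here is just mine):

```cpp
#include <string>
#include <string_view>

// A string_view may point into the middle of a larger buffer, so
// view.data() isn't guaranteed to have a '\0' at view.length().
// Copying into a std::string gives a guaranteed null-terminated
// buffer via .c_str() to hand to C APIs.
std::string toNullTerminated(std::string_view sv)
{
    return std::string(sv);
}
```

It costs an allocation, but only at the boundary where null termination actually matters, rather than on every string-accepting call.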

A lot of this is fairly basic stuff that you’ll see in a million other tutorials around the net, but hey, typing all this out helps me internalize it…

Rusty Robots

I have an old Android tablet, a Nexus 7 from 2012, that I haven’t really used in a long time. I’ve played a few games on it, but I mainly used it for reading comics and doing Duolingo lessons, and I was thinking of doing Duolingo again, so I dusted it off and fired it up again.

The first problem was obvious: it felt sluggish. I had no idea how much crud it had accumulated over time, so I did a factory reset on it to wipe everything out and start fresh.

This led directly to the second problem: it’s old. It was still running Android 4.4.4, and when I went to reinstall apps, a ton of them simply no longer offered installs that would still run on a version this old. I was using some of those apps before, but since I just wiped it, it no longer had the versions I’d installed years ago. The tablet can support 5.1, but when I tried upgrading way back when it first came out, performance was pretty poor and I rolled it back to 4.4.4 and kept it there ever since.

Even with it still on the older version of Android and a fresh reset, it was starting to feel sluggish again, though. It’s not just the operating system itself; the built-in Google apps also get updated and it seems like they’ve bloated enough over time that they just don’t run well on older hardware. So, if it was going to run sluggishly anyway, I figured that I may as well just re-upgrade to Android 5.1.

That solved the app availability problem, as a lot more of them were now available to install, but unfortunately it’s made the performance problems far worse. From hitting the power button to wake it up, it can take 10 seconds or more for the screen to come on. Typing in the PIN, it’s often a second until it acknowledges a tap. It can take another 20 or more seconds for the home screen to appear. Opening the Google Play store can take several minutes, punctuated by being prompted several times “This app isn’t responding, do you want to wait or close it?” Updating an app can take several minutes, even for small apps. Scrolling through lists can take a few seconds just for it to respond to the swipe gesture, and then it scrolls in jarring jumps instead of smoothly. Some of this happened with 4.4.4 as well, but it’s even worse now.

It’s just not usable anymore, for anything, really. My options are to just try and live with that, or revert it back to 4.4.4 again and live with it being sluggish-but-slightly-less-so and fewer apps available. Or, well, an iPad starts to look awfully tempting… It’s just a shame that it feels like a piece of tech from 2012 should at least still be practically useful for something.

And, as I’m typing this, my latest attempt to update apps just ended with:

Remembering To Read

I need to get back to reading books. I haven’t bought a physical book in quite a while now, but I’ve collected a fair number of ebooks from bundles, sales, giveaways, etc. Time to, you know, actually read some!

So what the heck, go to the Calibre menus, “Pick a random book”. Hmm, no. *Pick* Nope, not that one. *Pick* No…

ebook library problem #1: On this digital bookshelf, “The Book of Kells” lives right next to “Functional Programming in Python” and “Pathfinder Core Rulebook”. It’s just a big pile-o’-stuff that’s not very consistently tagged, so it’s difficult to organize in the same ways you might a regular bookshelf.

Problem #2: Since I got a lot of them from bundles and giveaways, I’m not actually very familiar with a lot of these titles and authors. Any random book might be great…or it might be a complete waste of time. Some of them are parts of series, which weights those series more heavily in a random draw, and a later entry might expect you to already be familiar with a larger universe.

So, new approach: I’ve added a custom Yes/No field to Calibre named “interested” and I’ll go through the whole library. Any titles that I recognize as “ooh yeah, I’ve been meaning to read that one” or just look intriguing I’ll mark as Yes. Then I can do a random pick from this limited set.
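The approach boils down to “filter, then pick at random”. As a rough sketch in code (with a made-up Book record and field names, purely illustrative; this isn’t anything Calibre actually exposes):

```cpp
#include <random>
#include <string>
#include <vector>

// Hypothetical book record mirroring the custom Yes/No "interested" column.
struct Book {
    std::string title;
    bool interested;  // the custom field: marked "Yes" in Calibre
};

// Pick a random title from only the books flagged as interesting.
// Returns an empty string if nothing has been flagged yet.
std::string random_interesting(const std::vector<Book>& library, std::mt19937& rng) {
    std::vector<const Book*> candidates;
    for (const auto& b : library)
        if (b.interested) candidates.push_back(&b);
    if (candidates.empty()) return "";
    std::uniform_int_distribution<std::size_t> pick(0, candidates.size() - 1);
    return candidates[pick(rng)]->title;
}
```

In Calibre itself the same filtering happens via the custom column’s search, with “Pick a random book” then drawing only from the filtered view.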

After applying all that, I’ve reduced my set of candidates from 840-ish down to 31, and the random pick from that is…*drumroll*…The Computer Connection, by Alfred Bester.

The Need For Power

Bah. A few months ago I started having trouble with games suddenly crashing, often hanging the system and sending the video card fans to full blast, with errors in the event log about the display manager crashing. It looked like video card trouble, so I swapped my RTX 2070 out for an old GTX 770 I still had. That seemed better at first, but I’d still get sudden video driver resets that would freeze things for a few seconds, and the occasional hard system reset. Since it was unlikely that both cards were going bad, my suspicions shifted to the power supply. I want to build a whole new system at some point soon anyway, so I’ve just limped along like that for the last couple of months.

Today I realized that I’d forgotten another factor: at around the same time, I’d hooked up a second monitor to help make working from home a bit easier. Since I didn’t have the right cabling for the 2070, I hooked the monitor up to the integrated graphics instead. No biggie, since it’s mainly just for displaying some docs and web pages, so it doesn’t need 3D performance. I hadn’t thought much about it afterward, since it wasn’t the integrated graphics that was crashing, after all. But after I remembered this today, I disabled the integrated graphics, put the 2070 back in, and…it’s been fine. It might still be a power supply problem, but I guess something about the extra power draw or stress from enabling the integrated graphics was causing the main video card to glitch out.

So now I can have either working games or a second monitor but not both. Sorry work, but I wanna see what’s new in No Man’s Sky…

Update: Well dangit, after being fine for hours, I had another crash with the 2070. Seems to happen less often, at least? I suspect there may still be a problem with the power supply getting weaker (watt-wise, it should be more than enough), but for now maybe I’ll have to try underclocking it a bit.

Update 2: Ordered and installed a new power supply, and that does indeed seem to have fixed it. The old one was probably overheating, which explains why games would work for a half hour or so and then it would keep crashing even after reboots until it cooled down a bit.

Chubby Templates

One of our DLLs was lacking in logging, so I spent a bit of time adding a bunch of new logging calls, using variadic templates and boost::format to make the interface fairly flexible, much like in a previous post. However, I noticed that after doing so, the size of the DLL had increased from 80kB to around 200kB.

Now that’s not exactly going to break the bank, especially on newer systems where even a VM will probably have 4+ GB of RAM, but a jump like that still irks me. Modern languages let you do a lot more things a lot more easily, and 99% of the time it’s pointless to try and count every byte of RAM and instruction cycle you’ve spent, but I still have a lurking fear that all that ease might also let me get a bit too…sloppy? If I keep at it, will I eventually turn an application that runs fine on a 2GB system into one that needs 4 or 8 GB?

In this case, from the map file and some tweaking, I can break down this change into various parts:

  • 50kB from code pulled in from Boost. Although I’m only directly using boost::format, that class is going to pull in numerous other Boost classes as well. At least this is generally a one-time ‘capital’ cost, and I can now use boost::format in other places without incurring this cost again.
  • 24kB of templates generated from the logging interface. Since I’m using a recursive template approach to pick off and specialize certain types, a lot of different variants of the logging function get generated.
  • 32kB for new calls to the logging interface. This is across 80 new calls that were added, so each new logging message is adding about 400 bytes of code. That seems like a lot just to make a function call, even accounting for the message text.
  • 4kB in exception handling. Not a big deal.
  • And 10kB of miscellaneous/unaccounted-for code. Also not going to worry about this too much. Rounding to pages makes these smaller values kind of uncertain anyway.
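The recursive pick-off shape of that logging interface looks roughly like this. To be clear, this is a simplified sketch rather than the actual code, and it substitutes a plain std::ostringstream for boost::format to stay self-contained; the relevant part is the recursion, since every distinct sequence of argument types stamps out its own chain of instantiations:

```cpp
#include <sstream>
#include <string>
#include <utility>

// Forward declarations so each overload can see the others when recursing.
inline void append_args(std::ostringstream&);
template <typename... Rest>
void append_args(std::ostringstream& out, bool value, Rest&&... rest);
template <typename First, typename... Rest>
void append_args(std::ostringstream& out, First&& first, Rest&&... rest);

// Base case: no arguments left to append.
inline void append_args(std::ostringstream&) {}

// Type-specialized step: bools become "true"/"false" instead of 1/0.
template <typename... Rest>
void append_args(std::ostringstream& out, bool value, Rest&&... rest) {
    out << (value ? "true" : "false");
    append_args(out, std::forward<Rest>(rest)...);
}

// Generic step: stream one argument, recurse on the rest. Each distinct
// tail of argument types instantiates another copy of this function.
template <typename First, typename... Rest>
void append_args(std::ostringstream& out, First&& first, Rest&&... rest) {
    out << std::forward<First>(first);
    append_args(out, std::forward<Rest>(rest)...);
}

// Public entry point: build and return the formatted message.
template <typename... Args>
std::string format_log(Args&&... args) {
    std::ostringstream out;
    append_args(out, std::forward<Args>(args)...);
    return out.str();
}

// format_log("retries=", 3, " enabled=", true) -> "retries=3 enabled=true"
```

Every call site with a new combination of argument types generates a fresh chain of these functions, which is roughly where a figure like that 24kB of generated templates comes from.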

So, I guess the increase in size does make sense here, though I’m not sure if I can really do much about it. If I switch away from boost::format, I’d lose all its benefits and have to reimplement my own solution, which I certainly don’t want to have to do from scratch. sprintf-based solutions would have to be non-recursive, and that wouldn’t let me do the type specializations I want.

I might look at the assembly and see just where those 400 bytes per call are going, but that’ll probably only save a dozen or so kB at best for a lot of work. It may irk me, but in this case I’ll probably just have to live with it.

Browser Wars

After using Chrome for years now, I figured I’d give Firefox a try again just to give it a fair shake. Chrome still works well enough for me, but it’s a major memory hog that quickly sucks up all the RAM on my laptop, and I’m a bit concerned about privacy issues with it as well.

Unfortunately Firefox still has a quirk that really annoyed me back in the day: when reading a forum thread that contains a lot of images, Firefox jumps to the last-unread-post anchor immediately, but then doesn’t keep you at the same relative position, so your view drifts back up into earlier posts as the images above load in and push the content down. So, quite often, I go to read the new posts in a thread, and it leaves me somewhere back in posts I’ve already read, not at the actual first new post. Unfortunately, I read an awful lot of forum threads like this…

It also has trouble with Twitch streams, which seem a lot choppier under Firefox and sometimes get into a state where the audio becomes staticky until I reload the tab.

These are annoying enough that I’m probably going to wind up going back to Chrome, alas. I can at least live with having to restart it more frequently to free up memory.

Who Needs Blue Teeth Anyway

I’ve needed to upgrade some audio equipment: my trusty old Sony MDR-CD380 headphones lasted for ages, but they’ve been cutting out in the right ear and the cable’s connection feels flimsy now. I also needed a proper microphone to replace the ancient webcam I’d still been using as a “mic” long after its video drivers stopped working with modern OSes.

I normally anguish for ages over researching models, trying to find the perfect one, but I cut that research short this time. A lot of the “best” gear is out-of-stock pretty much everywhere, and I don’t want to rely on ordering from Amazon too much. Instead, I figured I’d look at what was in-stock in stores in town, and try and get something good and actually locally available. So, after a bit of stock-checking and some lighter research, I finally left my neighbourhood for the first time in this pandemic and headed to a Best Buy.

For the microphone, I picked up a Blue Yeti Nano. Not the best mic ever, but readily available and perfectly adequate for my needs. From some quick tests, it already sounds waaaay better than that old webcam I’d been using. Clearer and crisper, and almost no background hiss, which had been awful with the webcam. It doesn’t have any of the advanced pickup patterns, just cardioid and omnidirectional, but it’s highly unlikely that’ll ever matter to me. It’s not like I’m doing interviews in noisy settings, where you’d really want the “bidirectional” pattern, for example.

For the headphones, I wound up picking up the Sennheiser HD 450BT. I wasn’t really originally considering Bluetooth headphones, since I didn’t want to worry about pairing, battery lifetime, etc., but this model appealed to me as the best of both worlds, as we’ll see in a bit.

I am actually a bit disappointed with the Bluetooth aspect of it. It mostly works…except that there’s a tiny bit of lag on the audio. Not really noticeable most of the time, except when you’re watching someone on Youtube and you can definitely notice a bit of desync between their mouth and what you’re hearing. I suspect my particular really-old Mac hardware/OS combination doesn’t support the low-latency Bluetooth mode, but it’s hard to verify. That wasn’t all, though. If I paused audio for a while, it would spontaneously disconnect the headphones, requiring me to manually reconnect them in the Bluetooth menu before resuming playback, which is really annoying. Playback also becomes really choppy when the laptop gets memory and/or CPU-starved, which happens fairly easily with Chrome being a huge memory hog. None of these are really the fault of the headphones themselves, it’s more the environment I’m trying to use them in, so I don’t think any other model would have done any better.

But, fortunately, I’m not entirely reliant on Bluetooth. The other major feature of these headphones is that you can still attach an audio cable and use them in a wired mode, not needing Bluetooth at all. They still sound just as good, and don’t even consume any battery power in this mode, so I’ll probably just use them this way with the laptop and desktop. I’ll leave the Bluetooth mode for use with my phone and TV, which should work far more reliably.

Speaking of my phone though, the other disappointment is that some of the features of the headphones like equalizer settings can be managed via a mobile app…which requires a newer version of iOS than I have. I could upgrade, but I’ve been reluctant to because that would break all the 32-bit iOS games I have. Dangit. I’ll probably have to upgrade at some point, but I don’t think this will be the tipping point just yet; the headphones still work fine without the app.

These headphones also have active noise cancellation, but I haven’t really had a chance to test it yet. Just sitting around at home, it’s hard to tell whether it’s even turned on or not.

So, overall I’m pretty happy with them so far. The Bluetooth problems aren’t really their fault and aren’t fatal, they sound pretty good, and they’ve been comfortable enough (not quite as comfy as the old Sonys, but those were much bigger cups).

Angry At Clouds

I’ve had a Youtube Premium subscription for a while now and it’s definitely nice not having ads on videos anymore, but I mainly wanted it to check out the Youtube Music service for my music streaming needs.

I have my own music library of ripped CDs and other files, of course, but I’d been turning into one of those old farts who mainly listens to the same music from 20 years ago and has no clue about much outside that comfort zone. YT Music has a “Discover Mix” feature that recommends new music based on what tracks you’ve marked ‘liked’, and after tagging a bunch of my regular music, the recommendations so far have been pretty good and I’ve found a lot of good, newer music. It is kind of electronic-heavy though, which might be some kind of feedback loop: tagging a bunch of one genre biases what it presents, which biases how many of those you tag as ‘liked’, which further biases what it presents… I’ll have to see if manually finding and tagging some more stuff like industrial and rock balances things out.

However, the big problem with it is the interface. It’s a web site, so of course you have to keep it open in a web browser, closing the browser stops the music, it can get choppy if the browser’s heavily loaded, etc. All the usual drawbacks of being a web app.

It’s also glitchy, though, with new glitches appearing and disappearing all the time. At one point, my ‘liked’ playlist was filled with non-music Youtube videos I’d also happened to hit ‘like’ on. Songs are often left with “ghost” pause or like/dislike buttons on their row when they’re not the selected row. Most recently, anytime I started playing a song, the usual song information and playback and volume controls at the bottom of the page would appear for a split second and then vanish, leaving no way to control it other than starting a different song from the start. I’m often left wondering “okay, what’s going to break this week…”

But right now, my biggest frustration is probably with trying to manage my collection. You have a “Library” with all of the artists, albums, and individual songs you’ve manually added to it, but when you’ve been using it the way I have, hitting ‘like’ on a bunch of tracks it recommends, most of your music ends up in the “Your likes” playlist. After you’ve been doing this for a while, that list gets unwieldy:

  • There’s no way to sort the list, and no way to filter or search for a particular artist, album, or song name.
  • Scrolling through the list takes forever, as it regularly pauses for 4 or 5 seconds to load the next chunk of songs.
  • You can’t invert the order of the list, so the songs you liked early on are buried deep in it.
  • You can click on the controls at the bottom to pop up the album art for the current song, but closing the art puts you at the top of the list of songs, not where you left off, so now you have to scroll back and scroll and scroll…
  • You can’t add songs to your main library from this list, individually or in bulk; you have to go to the three-dot menu, select Go To Album/Artist which takes you to a new page, and then add them from there. (Update: They did change this so the artist of a song you mark as ‘liked’ is automatically added to your library, but I’m not sure I want all of them in there either.)
  • You can make playlists, but there’s a complete lack of “smart playlists” that would let me play my overall favourites by playcount, songs I haven’t played recently, grouped by genre, etc., like I can do in Clementine.

I guess it would be fine if I were to put it on shuffle mode and never worry about even trying to “manage” the list, but I do get these moods for some song or cluster of songs from a few months ago and then I have to dig through the list and it’s just awful.

Man, now I miss WinAmp…

To end on a positive note, here’s a few of the songs I’ve discovered through YT Music:

Back In The Stone Age

Woke up to a dead router this morning (RIP ASUS Black Knight, you served well) but at least I was able to find an old one in the old pile o’ parts to sub in for now. Even though it only has 100Mb ports and too-insecure-to-actually-use-802.11b…

I’m not sure what to do now though, since I’ve been thinking of upgrading my internet service and I think either of the potential options is going to force its own all-in-one DOCSIS/Fibre router on me anyway. I’ve been kind of wary of those, since I’ve always used custom firmware for advanced features like static DHCP, dynamic DNS updates, QoS, and bandwidth monitoring, and having to use their router might mean giving some of that up. Time to do some research.