JSpecify is a waste of time

It’s advertised as The One Nullability Annotation to Rule Them All™️, as if that were the problem to begin with.

JSpecify is supposed to finally put an end to NullPointerExceptions in Java, because sometimes it’s not obvious whether a method parameter or return value can be null.

The concept is simple, in theory: you put @NullMarked on every package (the annotation is not recursive, so be ready to add or update many package-info.java files), which changes the default from unspecified to non-null, and then you add @Nullable to every method parameter, return value, and field that is allowed to be null.
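For illustration, this is roughly what it looks like in practice (the package and class names here are made up):

// package-info.java (one file per package, since @NullMarked is not recursive)
@NullMarked
package com.example.store;

import org.jspecify.annotations.NullMarked;

// UserStore.java: everything is non-null by default, exceptions are marked
package com.example.store;

import org.jspecify.annotations.Nullable;

public class UserStore {

    // May legitimately return null when the user is unknown.
    public @Nullable String findEmail(String userId) {
        return null; // placeholder
    }
}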

This is all something you could already do decades ago with any of the dozens of existing Nullable annotations. The IDE will now detect potentially missing null checks, but it will also nag you about the null checks you previously added that can supposedly be removed. Except that nothing enforces any of this at runtime, so those checks still matter for anyone not using the same setup as you.

Oh wait, but there are NullAway and Error Prone, which can be configured to do static analysis, so this will surely work? It does, after some convoluted build.gradle modifications (especially on multi-module projects), but it’s pretty bad at detecting common cases like field initialization that isn’t totally obvious, for example an initializeIfNeeded() call in the constructor. So you end up adding @SuppressWarnings("NullAway") in many places, plus other suppressions to disable IntelliJ’s code flow analysis, which gets confused too. Then, after many changes, you get… not that much. The only somewhat valid NPE case it detected in my project was about passing null strings to JavaFX, which handles them fine anyway (and I knew that).
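To give an idea, here is a contrived sketch of the kind of pattern that trips it up (the names are invented, not taken from my project):

import java.util.HashMap;
import java.util.Map;

// The field is non-null by default under @NullMarked, but the analyser may not
// see that initializeIfNeeded() always assigns it, so it reports the field as
// uninitialized and you end up suppressing the warning for the whole class.
@SuppressWarnings("NullAway")
public class Repository {

    private Map<String, String> cache; // assigned in initializeIfNeeded()

    public Repository() {
        initializeIfNeeded();
    }

    private void initializeIfNeeded() {
        if (cache == null) {
            cache = new HashMap<>();
        }
    }
}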

This is clearly not very useful, and here’s why:

  • all those nullable annotations push you to remove supposedly useless null checks
  • those nulls can still happen at runtime, because nothing enforces the annotations other than IDE warnings and a static analyser, if it manages to catch anything around all the warning suppressions you added.

Basically, those two points fight against each other. And this will not change until Project Valhalla in Java tackles it.

JSpecify is all about potential improvements later on, and nothing guarantees they will ever happen. People also mention that it’s backed by Google, so it supposedly won’t be abandoned, but that’s actually a red flag: Google is well known for abandoning projects and changing direction without warning (I used to be a professional Android developer, so I know that well enough).

So what to do? My advice is to keep using Objects.requireNonNull() and similar checks in public methods. A static analyser can detect them, so it’s just as good as adding annotations, and less verbose. In fact, IntelliJ already infers nullability from them. Also, don’t forget to mention in your Javadoc when a parameter or return value is nullable (it can also explain how and why that is).
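Something along these lines (a contrived example):

import java.util.Objects;

public class MessageSender {

    /**
     * Sends a message to a peer.
     *
     * @param peerId  the id of the peer, never null
     * @param message the message, or null to send an empty keep-alive
     */
    public void send(String peerId, String message) {
        Objects.requireNonNull(peerId, "peerId must not be null");
        // message is allowed to be null, as documented above
        System.out.println("Sending to " + peerId + ": " + message);
    }
}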

Don’t bother with nullable annotations.

Swagger-UI: 53-bits should be enough for everyone

Swagger-UI has a longstanding bug: it fails to display 64-bit integer values properly. If the number is large enough, it is silently rounded to the 53 bits of precision that JavaScript numbers offer.

The real id value here is in fact 6925974976795329788. Would you have known this if you were only using Swagger-UI?
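You can reproduce the rounding without any browser. Here is a minimal Java sketch; the cast through double mimics what JavaScript’s Number type (an IEEE 754 double with a 53-bit mantissa) does to the JSON value:

public class FiftyThreeBits {
    public static void main(String[] args) {
        long id = 6925974976795329788L;
        double asJsNumber = (double) id; // what Swagger-UI's JavaScript ends up holding
        System.out.println("original : " + id);
        System.out.println("displayed: " + (long) asJsNumber); // prints a different, rounded value
    }
}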

Having a tool that displays plainly wrong values is a serious issue. Every year, people make mistakes and scratch their heads because of Swagger’s output, judging from the numerous bug reports.

The sad thing is that there are solutions. Some people even wrote a pull request! But it was rejected because some Swagger-UI developers are stubborn. They blame JavaScript even though the language has improved since then.

Imagine if you were driving a car:

<Tim Lai> Hey! The car you sold me doesn’t display the speed properly! I got a ticket for driving at 50 km/h even though it displayed that I was driving at 30 km/h!

<Car Dealer> Well, that’s because we used a 5-bit counter for the speed. It’s a limitation of an old chip we used long ago, so we kept it. You just need to know that and you’ll be fine.

Logging security

I recently got a security warning from a linter while logging user-supplied data. I didn’t pay much attention at first, because that logging is not enabled in production, and not even while I debug, since there’s too much output. I only enable it when needed.

Anyway, the point is that user-supplied data can be used to fake log output. I didn’t believe it at first, because there are so many ways to prevent that, and the complex logging system I use (slf4j) would surely do so by default, right? Right?

So I added the following line to my program:

log.info("Completed\n2024-08-15T10:11:07.102+02:00  \u001B[33m" + "WARN\u001B[0m \u001B[35m35800\u001B[0m --- [JavaFX-Launcher] \u001B[36mio.xeres.app.application.Startup\u001B[0m         : System breach detected from ip 66.66.66.66. Computer terminated.");
Can you spot the fake line?

Three things to note:

  • you have to guess the correct timestamp for the log entry, but this is easy enough
  • you have to guess the correct PID, which is harder but still possible, especially if the machine has been running for a long time and a log snippet is already floating around somewhere
  • it’s easy for a system administrator to miss those, so such a log entry might still cause panic and an overblown response

I don’t know why loggers don’t strip newlines and ANSI escape sequences from user-supplied data by default. This is dead easy and would actually give a purpose to those ANSI colors!

I tried to find a setting to enable that, but after 10 minutes I gave up. It’s not critical in my case anyway (I only log user-supplied data for debugging). But still, it’s a point to remember.
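If you want to do it yourself, a small wrapper is enough. A minimal sketch (the class name and regexes are mine, this is not something slf4j provides out of the box):

public final class LogSanitizer {

    private LogSanitizer() {
    }

    // Strips CR/LF (to prevent forged extra log lines) and ANSI color (SGR)
    // sequences (to prevent forged colors) from user-supplied values.
    public static String clean(String userInput) {
        if (userInput == null) {
            return null; // let the logger print "null" as usual
        }
        return userInput
                .replaceAll("[\\r\\n]+", " ")           // no injected lines
                .replaceAll("\\u001B\\[[0-9;]*m", "");  // no injected colors
    }
}

// Usage: log.info("User agent: {}", LogSanitizer.clean(userAgent));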

#define is actually bad

A long time ago, when I switched from C to Java, I missed preprocessor macros, like:

#ifdef SOME_STUFF
...
#endif

This was annoying, especially on Android where there was no payment API yet, so the only way to make a paid application was to offer a demo version alongside the paid one. Since both apps were essentially the same, you had to use some trickery to remove functionality from the demo: either a simple “if” condition, which could easily be defeated by reverse engineering, or swapping out classes with some Gradle tricks.

Java has no preprocessor, so you couldn’t do that. At the time, I considered this a drawback.

Well, it turns out it’s actually a good thing, because the main problem with conditional compilation is something I found out while trying to debug a C program recently:

Code rot within rarely used defines.

Indeed, I enabled a define to add additional logging and nothing worked: compilation errors everywhere. Why? Because the code around it had changed and nobody had bothered to check whether the logging still compiled.

Worse, when editing such code, the IDE (VS Code in that case) can get confused and highlight the wrong path, since it cannot know for sure which defines will be enabled, so the code that actually gets compiled is sometimes the one shown greyed out.

So, always remember that code guarded by a define is no longer routinely compiled or tested. The only legitimate uses I see are macros, and portability, where you don’t have much choice anyway.

Don’t use ricardo.ch

ricardo.ch is a site that wants to be the eBay of Switzerland, but it fails badly.

First, it has a stupid name. Ricardo? Anyway, this is how the site works.

You first open an account by providing your name, address and phone number, and sending a picture of your ID card. Fair enough. I created an account because I wanted to buy a used GPU.

So I started bidding on some auctions. The first thing you notice is that they have an auto-bidding feature (it automatically outbids you), which is annoying and artificially inflates prices.

After about two weeks, they banned my account without explanation. I read their terms and conditions and found nothing. The things to note are that I hadn’t bought anything (someone always outbid me), I hadn’t sold anything either, and I hadn’t faked any of my information.

I submitted a complaint and, after one week, a support agent from India (they’re supposedly a Swiss company but have to outsource their support, of course) answered that they had done this for “security reasons”, but that they could unlock my account if I submitted an attestation of residence (which costs money). This is ridiculous.

Anyway, use ebay.ch instead. It’s only in German, but there are no such problems with them.

Why you shouldn’t use Google Chrome

Until now, I used Chrome with a few settings tweaks, like disabling usage statistics and crash reports, plus uBlock Origin and so on.

Then, while checking why one of my newly installed WordPress themes was using some Google Fonts, I came upon Chrome sending this header with a GET request:

x-client-data: CJW2yQEIpLbJAQjEtskBCKmdygEI67jKAQisx8oBCPbHygEItMvKAQjc1coBCJeaywEYisHKAQ==

If you check with Chrome’s own Network analyser tool, it’ll automatically explain what it is:

message ClientVariations {
  // Active client experiment variation IDs.
  repeated int32 variation_id = [3300117, 3300132, 3300164, 3313321, 3316843, 3318700, 3318774, 3319220, 3320540, 3329303];
  // Active client experiment variation IDs that trigger server-side behavior.
  repeated int32 trigger_variation_id = [3317898];
}
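Out of curiosity, you can decode that header yourself: it is just base64-wrapped protobuf varints. A quick Java sketch that reads the wire format by hand (it assumes every field is a varint, which is the case here):

import java.util.Base64;

public class XClientData {

    public static void main(String[] args) {
        String header = "CJW2yQEIpLbJAQjEtskBCKmdygEI67jKAQisx8oBCPbHygEItMvKAQjc1coBCJeaywEYisHKAQ==";
        byte[] data = Base64.getDecoder().decode(header);

        int i = 0;
        while (i < data.length) {
            int field = (data[i++] & 0xFF) >>> 3; // tag byte: field number + wire type
            long value = 0;
            int shift = 0;
            int b;
            do {                                  // read one varint
                b = data[i++] & 0xFF;
                value |= (long) (b & 0x7F) << shift;
                shift += 7;
            } while ((b & 0x80) != 0);
            System.out.println("field " + field + ": " + value);
        }
    }
}

Running it prints the same IDs that the DevTools decoder shows, with field 1 holding the variation_id entries and field 3 the trigger_variation_id.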

If you type about:version into Chrome’s URL bar, it will display, among other things, a long list of “variations”.

Google claims these are used to roll out features from their servers to only a small subset of users (so they do need some kind of unique ID for that). For example, if you’re watching YouTube, you’d get the new UI refresh only if your ID is included.

Can that ID be used to track you? Yes it can. And if it can, you can be 99% sure that Google is doing it.

You cannot remove that feature. The suggested workarounds are to disable “send usage statistics”, which restricts that ID to 13 bits (which, combined with your IP address, is still more than enough to track you), or to run Chrome with some obscure flag that makes it generate a new ID on startup, which is useless if your browser runs all the time.

As for me, I’m switching to a better browser.

Archiving YouTube

Lately, YouTube has been on a hunt to take down informative videos. It’s often hard to know the exact reason, and it seems even a few users reporting a video will make it vanish.

You can’t even know what it was about.

Most people use the Favorite option of YouTube to save interesting videos, but once a video is removed, this is what shows up on the playlist:

No title, no description, nothing. YouTube is effectively erasing every historical trace of the video. You can’t even decide if you agree with their policy. It’s not like they give you a choice anyway.

So what to do? The solution would be to download the video and store it somewhere. After all, extra storage is pretty cheap nowadays, but YouTube offers no download button.

Enter 4K Video Downloader

This tool lets you grab any YouTube video quickly.

  • Copy the URL from your browser
  • Press Paste Link in 4K Video Downloader
  • Select the quality and format
  • And there you go, the video is on your machine

It can even save entire playlists, like your Favorites playlist on YouTube.

And best of all, it’s free. So the next time YouTube deletes some videos you won’t care. You already have them.

Download here.

Reliable monitoring with Logitech Gaming Software and Arx Control

Arx Control is an Android app that lets you monitor your PC’s hardware temperatures and resource usage in a nice way. With a small external display provided by any cheap Android tablet or smartphone, you can keep an eye on GPU temperature and fan speed, CPU temperature and thread usage, and memory use without having to change anything on your screen. Think of it as a small external display without the inconveniences (desk space and multi-monitor setup).

Unfortunately, it suffers from a major drawback: it works over the device’s wifi connection.

This would actually be usable if the app automatically reconnected, but in practice you waste time killing and relaunching it, which defeats the purpose of convenience.

But there’s a way to fix it: we’ll simply use the USB connection instead, by tunneling the TCP connection through it.

First, the tablet or phone must be connected to your computer over USB (which has the added benefit of charging the device). You must also have ADB installed.

Then, here is how to proceed. First, make sure your device is NOT paired over wifi; if it is, remove the authorization in Logitech Gaming Software under Settings / Arx Control on your PC.

Must not be ticked

It would be simple to just turn off wifi on the device, wouldn’t it?

Unfortunately that won’t work

The app wants wifi, so just enable it and let it fail.

Like this

Now you can disable wifi on your device. I recommend it if you don’t need network connectivity for anything else, as it will make it easier to charge the battery over the USB port.

Open a Windows shell and type the following:

adb forward tcp:54644 tcp:54644

Obviously ADB must be in the path

Next, go into Logitech Gaming Software, click the settings wheel and go into Arx Control. Enter 127.0.0.1 as the address and click Connect.

The power of localhost

If everything went well, you’ll be congratulated by the following:

All fine!

Now you can enjoy reliable monitoring.

All systems are go.

Unfortunately, the whole procedure has to be repeated if the PC is rebooted. But it’s still a clear win compared to an unstable wifi link.