SemaphoreSlim

Mark
3 min read · Apr 9, 2021

Okay, I’m calling out a specific implementation here, but this is an issue I have with counting semaphores as a whole. They’re probably just shit because they were invented by Edsger Dijkstra, apparently with nobody around to bounce his stupid ideas off of.

For the mouth breathers amongst you, a semaphore is a variable that describes the number of resources available/allocated within a pool. It’s used for controlling access to code/resources in multithreaded situations. A semaphore has a maximum value, which is the number of resources in the pool. For example, if there are 3 database connections in a pool then a semaphore with maxCount = 3 could be used to control access to them. A binary semaphore is a special case of a counting semaphore, where maxCount = 1, and they are used as synchronisation locks in code. All good so far.
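Here’s a minimal sketch of that pool scenario. The “database work” is a placeholder Task.Delay (there’s no real connection object here), but the SemaphoreSlim calls are the genuine API:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class PoolDemo
{
    // Three "connections" in the pool, so initialCount = maxCount = 3.
    static readonly SemaphoreSlim Pool = new SemaphoreSlim(initialCount: 3, maxCount: 3);

    static async Task UseConnectionAsync(int worker)
    {
        await Pool.WaitAsync();      // blocks while all 3 connections are taken
        try
        {
            Console.WriteLine($"Worker {worker} acquired a connection");
            await Task.Delay(500);   // placeholder for real database work
        }
        finally
        {
            Pool.Release();          // hand the connection back to the pool
        }
    }

    static async Task Main()
    {
        // Ten workers contend for three connections; at most 3 run at once.
        var tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
            tasks[i] = UseConnectionAsync(i);
        await Task.WhenAll(tasks);
    }
}
```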

Semaphores also have a current count, which is a value between 0 and maxCount. So the current count is the number of resources allocated, right? Wrong. It’s the number of resources unallocated. And herein lies my gripe with semaphores: they don’t indicate the degree of exhaustion of a resource pool, or how many threads are actively processing; rather, they hold an arbitrary value which only has meaning in the context of the maxCount. This is just fucking dumb and it’s counterintuitive. It might be fine from the perspective of the thread entering the lock, which only needs to care that the value is non-zero, but from the perspective of a monitor it’s arse-backwards. Moreover, there’s a fundamental incongruity between currentCount and maxCount: maxCount tells you the number of things that can happen simultaneously, so currentCount should tell you the number of things that are happening simultaneously, but it actually tells you the number of things that aren’t happening.
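Don’t take my word for it; here’s the behaviour, straight from the real API (nothing hypothetical here):

```csharp
using System;
using System.Threading;

class CurrentCountDemo
{
    static void Main()
    {
        var sem = new SemaphoreSlim(initialCount: 3, maxCount: 3);

        Console.WriteLine(sem.CurrentCount); // 3: nothing allocated yet

        sem.Wait();                          // one "resource" now in use
        Console.WriteLine(sem.CurrentCount); // 2: slots REMAINING, not slots in use

        sem.Wait();
        sem.Wait();                          // pool fully exhausted
        Console.WriteLine(sem.CurrentCount); // 0 at peak load
    }
}
```

To get the number that actually matters, how many slots are busy, you have to compute maxCount - CurrentCount yourself. And SemaphoreSlim doesn’t even expose maxCount as a property, so you’d better have written it down somewhere.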

Part of the reason for this is the implementation for increasing maxCount on the fly. Instead of a common-sense approach where you would literally just change maxCount when the number of available resources changed, you don’t set a maxCount at all and just magically Release() resources that were never allocated in the first place.
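Here’s what that looks like with the actual constructor overloads. Skip maxCount and the ceiling silently becomes int.MaxValue, so Release() can conjure capacity out of thin air; specify maxCount and over-releasing throws instead:

```csharp
using System;
using System.Threading;

class ReleaseDemo
{
    static void Main()
    {
        // No maxCount supplied: the ceiling silently defaults to int.MaxValue.
        var growable = new SemaphoreSlim(initialCount: 0);
        growable.Release(5);                      // "frees" 5 resources that were never allocated
        Console.WriteLine(growable.CurrentCount); // 5

        // With an explicit maxCount, over-releasing throws instead.
        var fixedPool = new SemaphoreSlim(initialCount: 3, maxCount: 3);
        try
        {
            fixedPool.Release();                  // already at max
        }
        catch (SemaphoreFullException)
        {
            Console.WriteLine("Can't release past maxCount");
        }
    }
}
```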

Now, I understand that MS were in a damned-if-you-do, damned-if-you-don’t situation, but I’m here to damn them for the doing. Every time you implement something you have a choice: you can stick to convention or you can break from it. Wisdom is knowing when to do each. The downside of breaking from convention is that people familiar with the concept but new to your framework will guess wrong. The downside of sticking to convention is that when that convention is utterly fucking stupid, you exacerbate the problem and give people one more reason not to bother using your framework over the alternatives. Microsoft are a bunch of timid little cowards who need to grow a pair. Stop fucking doing shit just because some Cheetos-fingered Unix-using neckbeard did it that way in C++ 30 years ago. This is no different from the incomprehensible decision to go with 1-based indexing for reverse indexing in arrays, but I’ll save that rant for another day.
