Accidentally Reverse Engineered Ethereum's Business Model Whilst Making Jokes about Pythons Eating the Keys to your House.
Honestly, if more people learned the way I just did, starting from systems, not tokens, I believe there’d be far fewer NFT pyramid schemes and far more distributed poetry. Most of it would be wrong, but it would be way funnier.
Enjoy: how to reverse engineer a riddle disguised as syntax disguised as a duel.
Courtesy of, Ella :)
I have no formal technical training. But my whole thing with tech is that I’m systems-obsessed and weirdly, the logic used in tech is often deeply philosophical.
Take the word instantiate for example:
In philosophy
(of a universal or abstract concept) be represented by an actual example.
In programming
A core concept in object-oriented programming: the process where you create an instance of a class, which essentially means building an object from a blueprint (the class).
So the word in programming comes from the same move: you create something “tangible” from an “abstraction”, and the object provides the “meaning” for the “concept”.
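To make that concrete, a throwaway sketch in Python (the names are mine, purely illustrative):

class Poem:  # the abstract blueprint: the "concept"
    def __init__(self, first_line):
        self.first_line = first_line

haiku = Poem("the old pond, a frog")  # instantiation: a tangible instance of the concept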
I’m learning programming as I go, not before anything else, because I know I won’t understand or use it properly if I just jump into language learning.
Like how Duolingo kinda messes you up with language learning: it’s rote. You never learn why that’s the word used, just “where it’s sposed to go”.
I think my original barrier when I studied IT and dropped out was that I didn’t understand it isn’t about knowing syntax, it’s about understanding how constraint generates meaning. Once I knew that, I realised that if I understood the boundaries, the logic of the structures would present itself, and I could map my way through it using systems thinking.
Honestly, it’s probably why Silicon Valley bros are out here reading Nietzsche like it’s scripture, trying to patch their consensus algorithms with existential coping tools.
I thought this would be interesting as a Substack post, because I am learning tech through literally just throwing myself against any question I encounter.
I posted on my Instagram stories that I was after questions so I could challenge myself, and a friend got their friend who works at Ethereum to ask me:
"Explain how you would implement a thread-safe singleton pattern in a concurrent environment while avoiding the double-checked locking problem, and compare how this implementation would differ across Java, C++, and Python."
Like I said, I am a systems guy, not a syntax guy. So I had to reverse-engineer this in order to understand:
- What the question actually was
- What they were really trying to test me on
So.
I treated this like a riddle. Because I know tech people want to riddle people and make them panic.
And I will not panic.
But first, I need to know what a singleton is.
Probably.
From what I can see, the singleton became a “thing” in programming in the mid-to-late 1990s, but its conceptual roots go back further.
Pre-1990s
- The idea of restricting instantiation to a single object existed earlier, especially in systems programming where shared access to limited resources (like configuration files or global states) needed coordination.
- But it wasn’t formalised as a reusable design pattern until later.
I’m assuming this is because of scale, which is where pretty much all problems arise from.
Actually, yeah. They do.
Make something bigger: more conditions/factors, less prediction, more randomness, less function.
Like inviting way more people than you should’ve to your party, and now someone’s stolen your jewellery, someone’s in a K-hole, and another one is asking you a question like:
“Explain how you would implement a thread-safe singleton pattern in a concurrent environment while avoiding the double-checked locking problem, and compare how this implementation would differ across Java, C++, and Python.”
When you just wanna smoke a Vogue menthol and chat to someone behind them about why dogs SHOULD have wings.
The Gang of Four (1994)
- The term Singleton was formally defined and popularised in 1994, in the landmark book “Design Patterns: Elements of Reusable Object-Oriented Software” by the Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides).
- It was described as a creational pattern — one that deals with how objects are created.
- The core motivation: control access to shared resources, provide a single point of entry, and prevent duplication.
I’m assuming because of hacking? And also the limitations of selling software, or building things that more and more users could engage with.
Maybe, just an assumption.
I feel like men who need to feel important always turn to this phrase. There’s a Gang of Four in the history of Beaujolais and they’re cited as the revolutionaries of making Gamay a thing.
Yawn, always a thing.
Concurrency + Double-Checked Locking (Late 1990s – Early 2000s)
- As multithreaded programming grew more common, especially in Java and C++, developers started running into thread safety problems with singleton instantiation.
- This led to hacks like the double-checked locking pattern — first used widely but later flagged as unsafe due to quirks in memory models (especially in Java pre-1.5).
- The singleton now wasn't just a pattern — it became a problem space about memory visibility, execution ordering, and side effects.
I imagine this is around the time that people in tech were first feeling the head rush of being “ON THE VERGE OF SOMETHING” and knew that they could make money fast.
They needed the things they were building to download a file, update an interface and then write something to a database all at the same time, but all those tasks end up racing each other (race conditions? I think?).
That’s like putting a French speaker, an Icelandic speaker, and an Egyptian Arabic speaker in a room and asking them to communicate without a translator, except you also need one of them to make coffee and they’re all busy.
You need a way of identifying patterns of interaction that can be recognised through the limitations of language.
The hello-before-anything-else pattern. The nod-as-consent pattern. A kind of protocol of minimal recognisable intent.
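And apparently yes: race conditions. Here’s a deliberately exaggerated Python sketch (names are mine, and the sleep is just there to widen the window where threads trample each other):

import threading
import time

balance = 0

def deposit():
    global balance
    current = balance      # read the shared value
    time.sleep(0.001)      # another thread barges in right here
    balance = current + 1  # write back a stale answer

threads = [threading.Thread(target=deposit) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # almost always less than 10: deposits got lost in the race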
Modern Times (Post-2010s)
- In languages with better concurrency models
Paused when I saw concurrency models because I didn’t know exactly what that meant for the problem - found this:
threads and locks, functional programming, separating identity and state, actors, sequential processes, data parallelism, and the lambda architecture.
Now I understand that threads and locks are just a model and like, a PART of concurrency, not like, a thing in and of itself. Helpful.
[Bonus: Side quest at the bottom of the post, where I fell down when I got distracted by the word ‘lambda’—and why I now think anonymous functions are lowkey Greek myth all because I went to a club when I was a teen called Lambda (RIP).]
Back to the task:
- Now there are built-in tools and guarantees for handling multiple threads or processes safely and predictably. Makes sense, seeing as this problem has been hanging around since the pre-90s.
- So, you don’t need to duct-tape a volatile variable to a synchronised block anymore. There are now safer abstractions.
- Many developers now consider it a bad idea altogether, or at least outdated in its original form.
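In Python, for instance, the boring modern answer I keep seeing is to skip the pattern entirely and just use a module-level instance, since a module only gets initialised once per process (a sketch, file names mine):

# config.py
class _Config:
    def __init__(self):
        self.debug = False

settings = _Config()  # built once, the first time the module is imported

# anywhere_else.py
from config import settings  # every importer gets the same object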
That is actually funny, because I kind of agree with the whole “isn’t it just global state” thing. If its initial function was to assist global state functionality, why would manipulating it, or just adapting to its problems, ever make it useful outside of that functionality?
It’s like someone wanting a banana, growing bananas on a commercial scale, then saying “why is my banana monoculture empire not letting me into the oranges market? We are both fruits? What’s the problem?”
Uh…duh?
Anyway.
Okay — so now I’ve got the structural sense of the singleton problem. I know it’s about shared state under pressure, about coordination in contested space.
The next step is: how does this play out across different languages?
I know some Java. I know a little C++, mostly from context and syntax osmosis. Python? Not so much. So for me this becomes a research-and-interpretation problem.
So I hit the forums. Read the docs (ughhhhhh so much jargon help). Seeing as it’s an old issue (I see you riddle poser) I went onto some old-school dev blogs.
But the riddle is a trap: they want me to copy an answer. To defeat it, I need to watch HOW people talk about the problem.
Because even if the languages look different, they’re all trying to solve the same shape:
How do I create one reliable signal that everyone sees the same way — no matter when or how they access it?
Because I don’t just want to know how the singleton pattern gets implemented in Java, C++, and Python.
I want to know what assumptions each language makes about time, trust, safety. Like, what it thinks “one true instance” even means.
Java
Java’s concurrency model assumes threads are like kids in a schoolyard.
You give them a swing set (shared state), and you better lay down rules, or someone’s getting hit in the face. I kinda love Java for this.
So, it seems that the words that keep coming up in forums are synchronized blocks and volatile, used to control memory visibility and race conditions.
The volatile keyword matters most: it ensures the latest value of the instance is always seen across threads, while synchronisation makes sure only one thread gets to build the thing in the first place.
Seems like everyone’s moved on to something else though, and the safer pattern is actually to dodge the whole mess with this:
public class Singleton {
    private Singleton() {}

    // Holder isn't initialised until getInstance() first touches it,
    // and the JVM guarantees class initialisation is thread-safe
    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE;
    }
}
And here’s the historical payoff: class initialisation in Java is guaranteed thread-safe, and the Holder class only gets initialised on first use. So the singleton gets built exactly once, when it’s actually needed, no double-checked locking required.
Order is enforced structurally, not conditionally.
Love you Java. You keep me safe. Not conditionally, but with a roof over your head otherwise the building falls over.
C++
In C++, the concurrency model’s closer to: “You’re on your own, but we gave you sharp tools.”
I think that’s why I’m dropping the knife here a bit. Oh well, this is helping me learn.
It looks like all the relevant literature addresses this from C++11 onwards, so I don’t want to give an old answer to an old problem and be foiled.
Looks like what C++11 introduced was std::call_once and std::once_flag: a standardised way to guarantee that a block of code runs only once, safely, across threads.
#include <mutex>

class Singleton {
public:
    static Singleton* getInstance();
private:
    Singleton() = default;
    static Singleton* instance;
};

std::once_flag initFlag;
Singleton* Singleton::instance = nullptr;

Singleton* Singleton::getInstance() {
    // call_once runs the lambda exactly once, even if threads race to get here
    std::call_once(initFlag, [] { instance = new Singleton(); });
    return instance;
}
This is explicit and portable. But the key here is: C++ trusts the developer more. It gives you the tools, but you have to use them right.
Java protects you from yourself (I need him). C++ expects you to protect others from you (Ugh, WHY)
Python
Python I’ve avoided, because the most annoying people I encounter when I say I’m doing more tech stuff always ask me:
dO yOu KnOw PyThOn?
And it’s my instinct to defeat them WITHOUT ever learning it.
But, unavoidable here.
Anyway:
Looks similar to the functionality expressed above, but Python, by default (CPython), has the Global Interpreter Lock (GIL), which means only one thread executes Python bytecode at a time.
So in practice, a classic singleton is accidentally safe under many conditions:
class Singleton:
    _instance = None

    def __new__(cls):
        # Build the instance only on the first call; hand back the same one after
        if cls._instance is None:
            cls._instance = super(Singleton, cls).__new__(cls)
        return cls._instance
But if you’re making dinner and you:
- Put a pot of water on to boil.
- While it’s boiling, you start chopping veggies.
- When the water boils, then you throw the pasta in.
You don’t just wait for the pot to boil before doing anything else — you use the in-between time productively.
So when the work is actual Python computation, the GIL only accommodates this:
Put the water on → stand there and wait → only then start chopping veggies.
That’s not efficient. Ipso facto. Developer non-porn.
So that means now you’re back to locks. Or external coordination.
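From what I can gather, “back to locks” looks something like this (a sketch assuming CPython’s threading module; the shape is the point, not the naming):

import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # First check: skip the lock entirely once the instance exists
        if cls._instance is None:
            with cls._lock:
                # Second check: another thread may have built it
                # while we were queuing for the lock
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance

Which is, funnily enough, double-checked locking again, the exact pattern the riddle warns about, except CPython’s GIL spares you the memory-visibility betrayal that made it unsafe in old Java.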
Python’s attitude is: “You probably don’t need it — but if you do, we’ll make you Google around a bit.”
Sounds precisely in cohesion with the riddle vibe of this question…
Does your developer friend own a snake? Or are they just tech-pilled?
:)
Okay so:
Java guards the house, C++ gives you the blueprints and says “don’t burn it down,” Python says “You live alone. Deal with it.”
But why ask me this question if I could get here this (relatively) simply?
Hmm. What does Ethereum have to do with this?
All the stuff I just worked through (threads, locks, instantiation, visibility, side effects) relies on a central assumption: that you can own memory and direct it as you need, with conditions.
It functions on the basis that someone, somewhere, gets to build the thing and declare: this is it. And everyone else defers.
But Ethereum doesn’t do deferral. It does distrust.
There are no threads. No locks. No shared heap. No “global” anything. Every participant is in their own bunker, independently verifying what counts as real. You can’t enforce a singleton. You can’t even assume a shared timeline.
So your friend isn’t asking how to make a singleton.
They’re asking how agreement emerges when every actor distrusts the others by default. How coherence appears when there’s no central authority to say, “this is the real one.”
And that’s what Ethereum is built to solve: not through protection, but through consensus. It doesn’t prevent duplication. It makes duplication irrelevant by structuring things so that everyone comes to the same result anyway, or gets left out of the chain.
Omg.
Is that what blockchain actually means?
Blockchain doesn’t prevent lies. It makes lies pointless.
It’s not hype. It’s just a weird epistemological mechanism for incentivising convergence without control.
A protocol for agreement in the absence of trust.
A truth machine that doesn’t care who you are, only that you match.
Holy shit. How cool is reverse engineering?
Back to the problem.
So, in Ethereum, singleton logic can’t be conditional: you can’t just VOLATILE and call it a day. Nor can you just eat the keys cause you’re a python, because that undoes the business model.
That’s where it becomes structural. The chain is the singleton not because it’s locked down by code, but because enough handshakes converge into agreement.
That’s why the question isn’t really about Java or Python or C++. It’s about understanding what underwrites truth in a system that refuses to centralise. It’s about how we coordinate without central control, and what has to be built to make consensus possible.
TLDR
"Explain how you would implement a thread-safe singleton pattern in a concurrent environment while avoiding the double-checked locking problem, and compare how this implementation would differ across Java, C++, and Python."
Answer:
In Java, the safest way now is the nested static class. It delays instantiation until the object is actually needed, and it leverages classloader guarantees to make that thread-safe without needing explicit locks.
So instead of constantly checking whether the instance exists (like in double-checked locking), it just builds it when the class gets used: structure enforces order.
No swings in face. Plenty of roof for all.
In C++, from C++11 onwards, you’ve got std::call_once and std::once_flag, which let you initialise something once, across threads, without race conditions. It’s more manual than Java, but more transparent too. You get the sharp tools, you just have to use them right.
Do not use tomato knife for fish. (learnt that the hard way from an angry chef once)
In Python, it’s weirdly “safe enough” most of the time thanks to the Global Interpreter Lock (GIL), which only allows one thread to execute bytecode at a time. But that breaks down in multi-process or async contexts, so if you actually want thread-safe singletons in those cases, you’re back to using locks, or reaching for external systems.
Swings and knives everywhere, FIGURE IT OUT YOURSELF.
BUT! Ethereum doesn’t work like any of these.
There are no threads. No shared memory. No global state. You can’t lock what doesn’t exist in one place.
So you don’t make a singleton the traditional way. You don’t guard the instance, you design the conditions so that everyone ends up agreeing on the same instance anyway.
Ethereum flips the singleton inside out. The chain is the singleton.
(Mean Girls comes to mind…THE LIMIT DOES NOT EXIST)
Not because it’s protected, but because it’s ALLOWED by consensus.
And that, I think, is the real answer to your friend’s question. Not just how you do it in code, but what it means to do it at all, when trust is distributed and agreement is all you’ve got.
You gotta get those people speaking all different languages to make coffee, because if they don’t, someone’s gonna get grumpy.
So Ethereum forces them to choose one, and pays them all for it.
Hope that answers your question oh sphinx in the maze!
More importantly, if any of this is correct Ethereum person: can I have a job? :)
LAMBDA DISTRACTION SIDE QUEST
λ (lambda) is the 11th letter of the Greek alphabet, but apparently the reason it ends up in programming has nothing to do with its placement: it’s because of its role in lambda calculus, a formal system developed by Alonzo Church in the 1930s to express computation through functions.
This was one of the theoretical foundations of what it means for a function to exist and be computable. Like proto-math for software — before we had programming languages, we had lambda calculus.
Makes sense. Also why a lot of programmers (particularly the old guard) have a mathematician’s style of gatekeeping and HATE vibe coders.
I get it man, I feel the same about writing.
In programming, a lambda function is an anonymous function — a function without a name — usually passed as a value to other functions. It lets you treat behaviour like data.
For example, in Python:
x = lambda a: a + 10
print(x(5)) # prints 15
Okay so what does this mean for concurrency?
In concurrent programming, you need to (or inevitably end up having to) pass little chunks of behaviour (functions) around:
- To be executed in separate threads
- As callbacks
- As tasks in queues
- As actions for futures, promises, etc.
And lambdas let you do this compactly, without writing full formal function blocks.
It’s lightweight, self-contained logic — perfect for handing off to another thread or process to run later.
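A tiny sketch of that hand-off, using Python’s standard concurrent.futures (the numbers are arbitrary, purely illustrative):

from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=3) as pool:
    # each lambda is a small, self-contained unit of behaviour;
    # n=n pins the current value so each one carries its own payload
    futures = [pool.submit(lambda n=n: n * n) for n in range(5)]
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9, 16]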
Makes shit faster to build, and less problems to run into, I get it.
Also means that concurrency is coordination without centralisation. You’re distributing tasks, possibly across threads, cores, or nodes, and you want:
- Small units of logic
- Clear boundaries
- Minimal side-effects
Lambdas are:
- Stateless or minimal-state
- Encapsulated (they know what they need)
- Composable (can be nested, chained, mapped, etc.)
So, concurrency adores them because they’re like smart courier pigeons: tiny, mission-focused, and don’t get in the way.
Well, until they die en route. Lol.
Soooooooo:
- Lambda in programming represents a compressed logic container.
- Concurrency is the practice of unfolding logic across multiple timelines.
So when you pass a lambda into a concurrent system, you’re doing something mythic:
You are dispersing meaning into fragments of time, asking them to cohere later.
It's symbolic delegation.
In distributed systems (like Ethereum), you can’t hold everything at once. You must pass functions of intent — little lambdas — and trust that coherence will reassemble on the other side.
So yes, lambda’s not just some quirky syntax. It’s a philosophical inheritance from the desire to compress intent, to express possibility, and to let execution happen elsewhere — in time, in space, in logic.
That’s kinda hot, glad I got distracted.