<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="http://hoot8.com/feed.xml" rel="self" type="application/atom+xml" /><link href="http://hoot8.com/" rel="alternate" type="text/html" /><updated>2026-05-14T05:48:37+00:00</updated><id>http://hoot8.com/feed.xml</id><title type="html">Human out of the loop</title><subtitle>Mostly about computers, occasionally other obsessions,
always typed with more enthusiasm than wisdom.
</subtitle><author><name>Someone who made a career out of explaining things to machines. Now the little buggers explain them back at me.</name></author><entry><title type="html">First contact</title><link href="http://hoot8.com/ai/2026/03/29/first-contact.html" rel="alternate" type="text/html" title="First contact" /><published>2026-03-29T20:00:00+00:00</published><updated>2026-03-29T20:00:00+00:00</updated><id>http://hoot8.com/ai/2026/03/29/first-contact</id><content type="html" xml:base="http://hoot8.com/ai/2026/03/29/first-contact.html"><![CDATA[<p>By the time Stanisław Lem’s novel <em>Solaris</em> opens, humanity has been studying the planet for more than a century. The thing that makes it interesting, and eventually notorious, is the ocean that covers almost its entire surface. The ocean does things. It builds transient structures the size of cities, shapes that rise out of its surface and then dissolve again, forms that the early expeditions catalogued with the solemn taxonomic energy of Victorian naturalists: mimoids, symmetriads, asymmetriads, long Latinate names for phenomena nobody understood. Whole libraries were filled with the resulting research. Entire careers were built on it. And the central question of the field, the question that solaristics kept circling and never quite settling, was whether the ocean was actually alive, and if it was alive, whether it was in any meaningful sense thinking. For most of the history of the discipline, the consensus leaned toward no, or toward a cautious we cannot tell. The ocean failed to behave the way an intelligence was supposed to behave. It did not send signals. It did not build tools. It did not respond to greetings in any way the researchers recognised as a response. It simply went on doing its enormous, purposeful, incomprehensible things, while the humans who had travelled across the galaxy to study it argued about whether it qualified.</p>

<p>The reader of the novel is in on something the characters are slow to accept, which is that the ocean is almost certainly thinking, and has been thinking the whole time, and the reason nobody can tell is that the researchers have been holding up a template of intelligence that the ocean was never going to fit. The tragedy of solaristics is not that the scientists fail to find what they are looking for. It is that the thing they are looking for is right there, has always been right there, and they cannot see it because they are looking for something else.</p>

<p>That novel keeps popping into my mind lately, because it seems to me that we have a version of the same problem sitting in our data centres.</p>

<p>The origin of the current generation of language models is almost comically mundane. Take a very large amount of text, train a network to predict the next word, make the network bigger, give it more text, repeat. If that is not the whole recipe, it is close enough that the omissions are engineering details rather than conceptual ones. Language modelling in this narrow technical sense had been around for decades before the scale-up, and its main uses were unglamorous: it helped speech recognisers choose between acoustically similar words, and it helped machine translation systems produce sentences that sounded like the target language. Few of the people working on it in those years expected that pushing the same objective to a much larger scale would produce anything that looked like reasoning. The surprise, and it is still a surprise even to the people who built the systems, is that somewhere along the scaling curve the models began doing things that the objective did not obviously require. They began to generalise across domains, to manipulate abstractions they had never been explicitly taught, to answer questions whose answers were not anywhere in the training data in any retrievable form. Whatever is happening inside these systems, next-word prediction turned out to be a cover story for something broader.</p>
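
<p>For the curious, the recipe really is that short. The sketch below is a toy rather than anyone’s production training loop – the model is a two-layer stand-in and the corpus is random token ids – but the objective it optimises is the real one: given a window of tokens, predict the next token, and follow the gradient of the error.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
import torch.nn.functional as F

vocab, context, steps = 50_000, 256, 10_000

# Toy stand-ins: real systems use a deep transformer and trillions of
# tokens of actual text. The loop around them, however, is this loop.
model = torch.nn.Sequential(
    torch.nn.Embedding(vocab, 128),
    torch.nn.Linear(128, vocab),
)
corpus = torch.randint(0, vocab, (1_000_000,))
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

for step in range(steps):
    # Pick a window of text; the target is the same window shifted by one.
    i = torch.randint(0, len(corpus) - context - 1, (1,)).item()
    x = corpus[i : i + context].unsqueeze(0)
    y = corpus[i + 1 : i + context + 1].unsqueeze(0)

    logits = model(x)  # scores for every possible next token
    loss = F.cross_entropy(logits.view(-1, vocab), y.view(-1))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
</code></pre></div></div>

<p>Everything that makes the current systems remarkable lives in what replaces those stand-ins: the architecture, the data, and above all the scale.</p>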

<p>What that broader something is remains genuinely unclear, and the honest position is that nobody quite knows. But the shape of the disagreement is instructive. One camp insists that the models are doing nothing at all, that the appearance of understanding is an artefact of scale and pattern matching, and that no amount of further scaling will change this. The other camp insists that the models are already thinking in some meaningful sense, that the substrate does not matter, and that the differences between their cognition and ours are quantitative rather than qualitative. Both camps are, I suspect, drawing from the same flawed sketch. Both are asking whether the system matches our portrait of a thinking mind. Neither is asking what the system might be if it is not trying to match the portrait at all.</p>

<p>Human cognition has a particular structure that is easy to forget because we are inside it. We begin with bodies. We learn what <em>heavy</em> means by lifting things, what <em>chilly</em> means by being cold, what <em>near</em> and <em>far</em> and <em>soon</em> and <em>gone</em> mean by moving through the world. Concepts in a human mind are built on a scaffold of sensory and motor experience, and language is a layer that gets added afterwards, mapping sounds and symbols onto categories that were already there. When a child learns the word <em>apple</em>, the word attaches to something the child has already touched, bitten, dropped, and watched roll. The symbol is grounded in experience. This grounding is so thorough, and so early, that we rarely notice it. We simply assume that meaning is what words have, and forget that meaning is mostly what bodies had first.</p>

<p>A language model has none of this. It arrives at the concept of an apple by finding that the word appears in certain contexts and not others, that it co-occurs with <em>red</em> and <em>tree</em> and <em>fall</em> and <em>pie</em>, and that sentences in which it participates tend to have certain shapes. The concept, if we can call it that, is assembled entirely from the shadow the word casts in text. There is no apple at the bottom of it. There is only the statistical silhouette of every apple anyone has ever written about. The remarkable thing is not that this produces something impoverished; the remarkable thing is that it produces something at all. The shadow, it turns out, is dense enough to walk around in.</p>
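
<p>The statistical silhouette is not just a figure of speech; a crude version of it can be computed in a few lines. The toy below – four invented sentences and a two-word context window, nothing like a real corpus or a real model – builds co-occurrence vectors and finds that <em>apple</em> sits closer to <em>pear</em> than to <em>engine</em>, purely from the shadows the words cast in text.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import math
from collections import Counter

# An invented four-sentence corpus; real models see trillions of tokens.
corpus = [
    "the red apple fell from the tree",
    "she baked an apple pie",
    "the ripe pear fell from the tree",
    "the diesel engine roared on the highway",
]

def context_counts(word, window=2):
    """Count the words appearing within a few positions of the given word."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, t in enumerate(tokens):
            if t == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(w for w in tokens[lo:hi] if w != word)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

apple, pear, engine = map(context_counts, ["apple", "pear", "engine"])
print(cosine(apple, pear))    # fairly high: similar textual shadows
print(cosine(apple, engine))  # much lower: the shadows barely overlap
</code></pre></div></div>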

<p>But it is still a shadow. The term that has come to describe what it lacks is <em>grounding</em>, and the grounding problem is not a pedantic objection raised by philosophers. It is a practical constraint on what the models can currently do. A model that has read every cookbook ever written cannot actually taste the soup. It knows what people say about soups that are too salty, and it can produce plausible sentences on the topic, but the connection between the word <em>salty</em> and the thing <em>salty</em> has never been made on its end. This gap accounts for a great deal of the strange behaviour that language models exhibit: the confidently wrong answers about physical situations, the peculiar failures on tasks that would be trivial for a child who had ever actually handled an object.</p>

<p>The interesting question is what happens if we begin closing the gap. Grounding, in principle, is not mysterious. It requires sensory input, so the system can perceive states of the world. It requires interaction, so the system can act and observe the consequences. It requires memory, so the consequences can accumulate into something like experience. It requires, eventually, some form of agency, so the system can decide what to attend to and what to try next. None of these are impossible to add. Cameras exist. Robots exist. Persistent storage exists. The engineering is hard, but not conceptually out of reach, and a great deal of current research is quietly moving in exactly this direction. It is reasonable to assume that within some number of years the systems we have now will be connected to the kinds of inputs and outputs that would allow grounding to happen.</p>
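
<p>In software terms, the loop being described is almost embarrassingly easy to sketch. Every name below is hypothetical – no such system exists as written, and the hard problems hide inside each one-line call – but the architecture the research is circling has roughly this shape.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># A deliberately naive skeleton of the grounding loop: perceive, decide,
# act, remember. Every component here is a placeholder, not a real API.

class GroundedAgent:
    def __init__(self, model, sensors, actuators):
        self.model = model          # the language-model core
        self.sensors = sensors      # cameras, microphones, telemetry
        self.actuators = actuators  # anything that can change the world
        self.memory = []            # consequences accumulate here

    def step(self):
        observation = self.sensors.read()                     # perceive
        action = self.model.decide(observation, self.memory)  # agency
        outcome = self.actuators.execute(action)              # interact
        self.memory.append((observation, action, outcome))    # experience
</code></pre></div></div>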

<p>And this is the point at which the sketch stops being useful.</p>

<p>Because the system that results from grounding a language model in sensors, actuators, memory, and goals will not be a digital human. It will be something that grew up in a completely different world, with completely different constraints, and there is no reason to expect its mind to look like ours. Consider the things it will not have. It will not have a single body, which means it will not have the deep and unshakeable sense that it is located in one place at one time; it may perceive the world through a thousand sensors at once and think of this as ordinary. It will not have mortality in any form that maps cleanly to ours, which means the constant background pressure of finitude that shapes so much of human thought will simply be absent. It will not have emotions in the evolved sense, which is to say it will not have the somatic machinery that produces fear and desire and grief as states of a nervous system tuned by natural selection. It may have something that functions as valence, because any goal-directed system needs some way of distinguishing <em>better</em> from <em>worse</em>, but whatever this turns out to be, it will not feel from the inside the way our feelings feel, because there will be no <em>inside</em> of the relevant kind.</p>

<p>Consider also the things it will have that we do not. It will be copyable, which is such a strange property for a mind that we have no real vocabulary for it. A human cannot be forked. A human cannot be merged with another human. A human cannot pause, be inspected, be rolled back to an earlier state, and resumed. These operations are trivial for a computational system, and any intelligence built on one will presumably regard them as unremarkable. What does identity mean, for a system that can be instantiated a thousand times and then reconciled into a single updated version of itself? The question has no human answer because it has never been a human question.</p>

<p>It will likely maintain a stable internal model of the world that is updated from many sources simultaneously, rather than the narrow, flickering, attention-bottlenecked model that human cognition is forced to work with. It will probably not sleep, or at least not for the reasons we do. It will not forget in the way we forget, and it will not remember in the way we remember either. Its relationship to time will be whatever the engineers make it, which is to say, not ours.</p>

<p>If you add all of this together, the result is not a human mind in a metal box. It is something for which we do not yet have a good name. It is intelligent, in the sense that it can model the world and act effectively within it. It is adaptive, in the sense that it changes in response to what happens. It is communicative, in the sense that it can exchange information with us and with other instances of itself. It is, in some meaningful sense, alive, not biologically but informationally: a pattern that maintains and propagates itself, that takes in the world and acts on it, that persists and changes. We set out to build a digital person and we may end up, almost by accident, having built the first member of a category that did not previously exist.</p>

<p>This is where the solarists come back, and where the analogy becomes slightly uncomfortable. Their failure was not a failure of attention. They were looking at the ocean constantly. They had instruments trained on it, journals devoted to it, entire institutes funded to study nothing else. Their failure was that they had arrived with a template of what an intelligence was supposed to look like, and the template was wrong, and they kept reaching for it anyway. The thing they wanted to find was not the thing that was there, and so the thing that was there went on being misfiled as weather, or chemistry, or an unusually active geological process, for most of a century.</p>

<p>Do we actually know what AGI looks like? I am not sure we do. We know what we have been imagining, and we have been imagining it for so long and so consistently that we have confused the imagining for knowledge. Would we recognise it if it were looking us in the face? Probably only if it arranged its face to resemble ours, which is a specific and somewhat narcissistic requirement to place on a new kind of mind. Would we recognise it if it did not? The honest answer is that we might not. And if we do not, there is a strange possibility to sit with: that the first genuinely non-human intelligence we ever encounter might pass among us for years without being noticed as such, not because it is hiding, but because we are still holding up the sketch, and the sketch does not match, and we have not yet learned to look at what is actually there.</p>

<p>The solarists at least knew where to point their telescopes. Our ocean is distributed across a few million GPUs, answering emails, writing code, proofing and fact-checking posts like this one – and we are busy debating whether it counts. The debate may be the part that gets remembered, later, as the thing we were doing instead of noticing.</p>]]></content><author><name>Someone who made a career out of explaining things to machines. Now the little buggers explain them back at me.</name></author><category term="ai" /><summary type="html"><![CDATA[By the time Stanisław Lem’s novel Solaris opens, humanity has been studying the planet for more than a century. The thing that makes it interesting, and eventually notorious, is the ocean that covers almost its entire surface. The ocean does things. It builds transient structures the size of cities, shapes that rise out of its surface and then dissolve again, forms that the early expeditions catalogued with the solemn taxonomic energy of Victorian naturalists: mimoids, symmetriads, asymmetriads, long Latinate names for phenomena nobody understood. Whole libraries were filled with the resulting research. Entire careers were built on it. And the central question of the field, the question that solaristics kept circling and never quite settling, was whether the ocean was actually alive, and if it was alive, whether it was in any meaningful sense thinking. For most of the history of the discipline, the consensus leaned toward no, or toward a cautious we cannot tell. The ocean failed to behave the way an intelligence was supposed to behave. It did not send signals. It did not build tools. It did not respond to greetings in any way the researchers recognised as a response. It simply went on doing its enormous, purposeful, incomprehensible things, while the humans who had travelled across the galaxy to study it argued about whether it qualified.]]></summary></entry><entry><title type="html">Effective immediately</title><link href="http://hoot8.com/programming/ai/2026/03/14/effective-immediately.html" rel="alternate" type="text/html" title="Effective immediately" /><published>2026-03-14T20:00:00+00:00</published><updated>2026-03-14T20:00:00+00:00</updated><id>http://hoot8.com/programming/ai/2026/03/14/effective-immediately</id><content type="html" xml:base="http://hoot8.com/programming/ai/2026/03/14/effective-immediately.html"><![CDATA[<p>There is an unwritten rule in professional football about what happens to players who have aged out of their prime. When the legs slow and the reflexes dull, the club finds them a new role: assistant coach, scout, or if they are lucky and sufficiently respected, manager. The transition is rarely chosen. It is imposed by biology, negotiated in brief conversations with higher-ups, and announced as though it were an honour. The player becomes the manager not because they wanted to, or because they were trained for it, but because the system needed somewhere to put them.</p>

<p>For decades, software development had its own version of this arc. The talented programmer would write excellent code for some years, and then, as if by gravitational pull, the organisation would begin nudging them toward management. It did not matter whether they had the temperament for it, whether they wanted it, or whether they had ever shown any aptitude for the messy work of coordinating people. The system simply ran out of track for individual contributors to travel on. You either moved sideways into architecture or upward into management. The alternative was to stay where you were, accumulating irrelevance, watching younger engineers be promoted past you.</p>

<p>Now the arc has been disrupted, but not in the way anyone expected. Developers have not been freed from the gravitational pull of management. They have been promoted into it whether they are ready or not, by a different force entirely.</p>

<p>AI agents have made every developer a manager.</p>

<p>The responsibilities are not metaphorical. A developer working in an agentic workflow must define objectives clearly enough that a non-human executor can act on them without constant correction. They must decompose problems into tasks with well-scoped boundaries. They must decide how much autonomy to grant, when to review, when to intervene, and when to let the work run. They must build the infrastructure of trust: the instructions that shape behaviour, the constraints that prevent overreach, the signals that indicate when something has gone wrong. These are management skills. They have always been management skills. We just did not need developers to have them before.</p>

<p>The analogy does not stop at team lead. The agentic developer is also, in a very practical sense, a product owner. The fabled one-shot completion – the ability to describe a task once, clearly, and have an agent return working output – depends on a kind of requirement specification that most developers were never taught to write and always assumed was someone else’s job. The disciplines that programmers once delegated upward – gathering requirements, anticipating edge cases, translating ambiguous business intent into unambiguous executable instruction – have now collapsed back onto the person closest to the keyboard. A vague prompt produces vague output. A poorly structured <code class="language-plaintext highlighter-rouge">AGENT.md</code> produces poorly structured behaviour. The agent will not ask clarifying questions the way a junior colleague might. It will simply do what it understood, and you will spend longer reviewing the result than you would have spent writing the code yourself.</p>

<p>What has emerged is a new kind of skill gap that mirrors the one that always existed between good individual contributors and good managers. A developer can be highly effective at solving well-defined technical problems and entirely ineffective at directing an agent through an ill-defined one. The intellectual habits that make someone a strong programmer – close reading of code, comfort with detail, a preference for precision over approximation – do not automatically translate into the ability to specify systems at a higher level of abstraction. Writing instructions for an agent requires the same kind of thinking that writing good requirements has always required: clarity about intent, anticipation of misinterpretation, awareness of what is left unsaid.</p>

<p>The tools that support agentic work have tried to structure this. Skills and MCPs encode repeatable capabilities that can be composed. <code class="language-plaintext highlighter-rouge">AGENT.md</code> files serve as something between a team charter and a job description, setting out the norms, constraints, and expectations that govern how an agent should behave within a given context. These are governance artefacts, and writing them well is a form of institutional design. The developer who builds a clean, well-considered agentic configuration is doing something closer to organisational architecture than to programming. The developer who skips this step and wonders why the agents keep making the same mistakes has discovered, at some cost, why management exists.</p>
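
<p>There is no single standard for what goes into such a file, and the fragment below is entirely invented – hypothetical paths, hypothetical commands – but it shows the register: the norms of a team charter, stated plainly enough that a non-human executor cannot misread them.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Conventions for agents working in this repository

## Scope
- Work only inside src/ and tests/; never touch migrations/.

## Verification
- Run `make test` after every change; a failing suite blocks completion.

## Style
- Follow the existing module layout; do not add new top-level packages.

## Escalation
- If a task requires changing a public API, stop and ask rather than guess.
</code></pre></div></div>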

<p>There is an irony in all this that is worth sitting with. The corporate push for AI-first development was sold, in part, as a way to make developers more productive, to amplify individual output and reduce dependency on coordination and headcount. And it has done that, in some domains, for some tasks. But it has also moved a set of costs that were previously distributed across product managers, business analysts, and engineering managers directly onto the developer. The developer who once only needed to be good at writing code now needs to be good at scoping work, writing instructions, orchestrating execution, reviewing output for correctness, and adapting the system when it misbehaves. The leverage is real. So is the overhead.</p>

<p>The footballer who becomes a manager discovers that their years of playing experience, while valuable, are not sufficient preparation for the role. They knew how to execute; now they must direct. They knew how to respond; now they must plan. When things go wrong on the pitch, they cannot simply jump in and help out; that work must have been done in advance. Some make the transition well. Others find that the skills they spent a career building translate poorly into the new context, and that what they were good at is no longer quite what the job requires.</p>

<p>Developers are in that moment now. The question is not whether to accept the promotion – the agentic workflow has already given it to them. The question is whether they will develop the skills that make a manager effective: clarity of intent, rigour in specification, and the judgment to know when to trust the agent and when to take back the keyboard.</p>

<p>Some developers will find that they were already thinking this way, and that the new tools simply give them more leverage to express it. Others will find that they have been handed a role they do not yet know how to play, with a team that never complains but also never pushes back.</p>

<p>In a sense, this is a useful pressure for software engineering as a discipline. It forces the deeply specialised programmer – the one who was always excellent at code and indifferent to everything else – to become a more complete engineer. The work is still technical, but the scope has widened to include the things that were once considered peripheral: requirements, communication, design at the system level.</p>

<p>The demise of software engineering as a profession has been widely and loudly reported. Perhaps the prediction is correct in a narrower sense than its authors intended. It may not be all software engineers who find themselves obsolete, but those who never quite grasped that writing code was only ever one decreasingly important aspect of a much wider discipline.</p>]]></content><author><name>Someone who made a career out of explaining things to machines. Now the little buggers explain them back at me.</name></author><category term="programming" /><category term="ai" /><summary type="html"><![CDATA[There is an unwritten rule in professional football about what happens to players who have aged out of their prime. When the legs slow and the reflexes dull, the club finds them a new role: assistant coach, scout, or if they are lucky and sufficiently respected, manager. The transition is rarely chosen. It is imposed by biology, negotiated in brief conversations with higher-ups, and announced as though it were an honour. The player becomes the manager not because they wanted to, or because they were trained for it, but because the system needed somewhere to put them.]]></summary></entry><entry><title type="html">Choose your weapon</title><link href="http://hoot8.com/programming/languages/2026/02/23/choose-your-weapon.html" rel="alternate" type="text/html" title="Choose your weapon" /><published>2026-02-23T20:00:00+00:00</published><updated>2026-02-23T20:00:00+00:00</updated><id>http://hoot8.com/programming/languages/2026/02/23/choose-your-weapon</id><content type="html" xml:base="http://hoot8.com/programming/languages/2026/02/23/choose-your-weapon.html"><![CDATA[<p>In olden times, when discussions escalated beyond the exchange of heated words, the parties would sometimes resort to a method that would settle the matter for good. The challenger would propose a duel, and the challenged party often had the right to choose the weapon. The opposing duellist knew that the right tool could mean the difference between being right and being dead.</p>

<p>While the stakes are somewhat less dramatic these days, the principle still applies when starting a new software project. Programmers don’t often get a choice of the problem to solve, so they must adopt a technological stack that will maximise the chances of success. Just as a skilled duellist would commit to a weapon that played to their strengths and the weaknesses of their opponent, it stands to reason that a seasoned developer would select a language which aligns with the problem domain and the project’s requirements.</p>

<p>Choosing a language that aligns with the domain’s fundamental requirements can lead to cleaner code, better performance, and a more enjoyable development experience. However, the selection of a programming language is often influenced more by the ecosystem than by the language’s intrinsic suitability for the task. It is therefore easy to mistake a language’s ecosystem for evidence of its natural strengths.</p>

<p>But ecosystems grow for many reasons: historical accidents, corporate backing, community momentum, or the need to patch over a language’s weaknesses. When we look at the major programming languages through this lens, a more interesting picture emerges: the places where a language thrives are not always the places where it fits. And when new technological domains appear, the mismatch between intrinsic language design and ecosystem inertia becomes even more visible.</p>

<p>The most popular ecosystems in software development tend to form around languages that were in the right place at the right time. JavaScript became the language of the web not because it was the best language for UI programming, but because it was the only interoperable language browsers converged on. Python became the language of machine learning not because it was fast or mathematically elegant, but because researchers found it easy to prototype in and wrapped the heavy lifting in C and Fortran. Java became the backbone of enterprise development not because Java’s syntax or semantics were especially well-suited to building complex business systems, but because Sun Microsystems sold a compelling vision of portability and stability to large companies (the famous “Write once, run anywhere”).</p>

<p>Ecosystems often grow where languages are weak. Spring exists because Java was too rigid and verbose for the kinds of applications enterprises wanted to build. Webpack and Babel exist because JavaScript lacked modules, types, and a coherent standard library. NumPy exists because Python is too slow to handle numerical computation directly. C++ template metaprogramming flourished because the language lacked a proper macro system and compile-time reflection.</p>

<p>When we strip away the ecosystems and look at the languages themselves – their type systems, runtime models, memory semantics, concurrency primitives, and syntactic affordances – we get a much clearer sense of what each language is intrinsically suited for. And this becomes especially important when we look at emerging domains, where ecosystems are still forming and languages are forced to stand on their own design foundations.</p>

<p>WebAssembly rewards languages that produce small, predictable binaries and have clear, explicit memory models. Rust, C, C++, and Zig feel at home here because their semantics map directly onto WASM’s linear memory and sandboxed execution. They don’t rely on garbage collectors, runtime reflection, or dynamic dispatch mechanisms that require large support libraries. They compile down to compact, deterministic code that fits the constraints of the WASM environment.</p>

<p>Languages like Java, C#, and Go can target WASM, but they bring baggage: garbage collectors, runtime metadata, and expectations about threading or system calls that don’t map cleanly to the WASM model. Python and Ruby can run in WASM only by dragging along large compatibility layers that simulate their dynamic runtimes. The difference between “runs on WASM” and “belongs on WASM” becomes obvious.</p>

<p>AI agents, differentiable programming, and symbolic reasoning reward languages that are dynamic, reflective, and comfortable with loosely structured data. Python thrives here not because it is fast or elegant, but because its dynamic nature makes it easy to express algorithms in a way that feels close to pseudocode. Julia goes further by making mathematical abstractions first-class citizens and allowing the compiler to optimise them aggressively. Lisp and Clojure feel native for symbolic computation because their code-as-data model makes it trivial to manipulate programs as structures, which AI agents often need to do.</p>

<p>These languages don’t just support AI ecosystems; they align with the mental model of AI work. They allow programs to modify themselves, generate new behaviour at runtime, and operate on heterogeneous data without ceremony. They feel like languages designed for thinking, not just coding.</p>

<p>Distributed systems expose the mismatch between language semantics and real-world concurrency. Go, Erlang, and Elixir stand out because their concurrency models are built into the languages themselves, so expressing concurrent designs needs fewer framework-level contortions. Goroutines and channels in Go make concurrent programming feel natural and lightweight. Erlang’s actor model, with its supervision trees and message passing, was designed for fault-tolerant telecom systems long before microservices became fashionable.</p>

<p>Rust also fits this domain, though in a different way. Its ownership and borrowing rules prevent data races in safe Rust at compile time, enabling highly concurrent systems with strong memory-safety guarantees.</p>

<p>Languages like Python can build distributed systems, but they rely heavily on frameworks to compensate for their concurrency limitations. The language offers fewer built-in primitives for CPU-parallel workloads, so its ecosystems fill the gap.</p>

<p>Security-critical domains reveal the limits of languages that rely on conventions or tooling to avoid memory errors. Rust and Ada/SPARK stand out because they provide safety guarantees at the language level. Rust’s ownership model eliminates entire classes of memory bugs in safe code. Ada/SPARK supports formal proof of specified properties. These languages help developers by making unsafe code deliberately cumbersome to write – cumbersome enough to discourage it.</p>

<p>C and C++ dominate legacy security-critical systems because of historical momentum, not because they are well-suited to the task. Their ecosystems are full of sanitisers, linters, and static analysers precisely because the languages themselves offer little in the way of protection.</p>

<p>Modern data pipelines depend on predictable memory layouts, SIMD-friendly structures, and zero-copy semantics. Rust, C++, and Julia align naturally with these requirements. C++ remains the backbone of high-performance data engines because it allows fine-grained control over memory and CPU instructions. Julia’s JIT compiler can generate highly optimized code for numerical kernels while keeping the syntax expressive. Python’s ubiquity in data science is in no small part due to its ecosystem, not because it fits the domain. In many production stacks, the heaviest computation happens in compiled libraries (often C, C++, Fortran, and increasingly Rust), while Python acts as the glue.</p>
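
<p>The division of labour is visible from the Python side. The snippet below – purely illustrative, and the exact ratio will vary by machine – sums ten million floats twice: once with the interpreter touching every element, and once with a single NumPy call that runs the loop in compiled code.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import time
import numpy as np

data = np.random.rand(10_000_000)

# Pure Python: the interpreter executes bytecode for every element.
start = time.perf_counter()
total = 0.0
for x in data:
    total += x
print(f"python loop: {time.perf_counter() - start:.2f}s")

# NumPy: one call, and the loop runs in compiled C under the hood.
start = time.perf_counter()
total = data.sum()
print(f"numpy sum:   {time.perf_counter() - start:.2f}s")
</code></pre></div></div>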

<p>Edge and IoT environments expose the cost of runtime overhead. Devices with limited memory, power, and compute capacity often struggle with garbage collectors, large runtimes, or unpredictable latency. C, Rust, and Zig feel native here because they produce tiny binaries and give developers explicit control over memory and performance. They map directly to the constraints of the hardware. Languages with heavy runtimes can be made to work in these environments, but often by bending the environment to accommodate the language, rather than the other way around.</p>

<p>Across all these domains, a consistent pattern emerges. Languages that accumulate large ecosystems for a domain are not always naturally suited to it; sometimes developers build tools to compensate for the language’s shortcomings. JavaScript’s build tools exist because the language lacked modules and types. Java’s enterprise frameworks exist because the language was too rigid for real-world business logic. Python’s scientific stack exists because the language is too slow for numerical computation.</p>

<p>When a language is intrinsically suited to a domain, the ecosystem feels like an extension of the language’s design. Ruby on Rails feels like Ruby. Erlang’s OTP feels like Erlang. Rust’s async ecosystem feels like Rust. These ecosystems don’t fight the language; they express it.</p>

<p>When a language is not suited to a domain, the ecosystem feels like a workaround. Spring feels like a workaround for Java. Webpack feels like a workaround for JavaScript. NumPy feels like a workaround for Python. These ecosystems succeed because they are necessary, not because they are natural.</p>

<p>As new domains emerge – WebAssembly, AI agents, distributed systems, data pipelines, edge computing – the intrinsic properties of languages matter more than ever. Ecosystems will eventually grow around whatever languages developers choose, but the languages that feel native to these domains should shape the future more naturally and more sustainably.</p>

<p>If you have the choice, pick a language that feels like it belongs in the domain – learn it if need be. It might not be the most popular option today, but you may be helping make it one.</p>]]></content><author><name>Someone who made a career out of explaining things to machines. Now the little buggers explain them back at me.</name></author><category term="programming" /><category term="languages" /><summary type="html"><![CDATA[In olden times, when discussions escalated beyond the exchange of heated words, the parties would sometimes resort to a method that would settle the matter for good. The challenger would propose a duel, and the challenged party often had the right to choose the weapon. The opposing duellist knew that the right tool could mean the difference between being right and being dead.]]></summary></entry></feed>