<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Ian Writes]]></title><description><![CDATA[My very, very interesting reflections on life, technology and literature. Posts every ~2 weeks.]]></description><link>https://write.ianwsperber.com</link><image><url>https://substackcdn.com/image/fetch/$s_!S93e!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F048ba988-52c5-41da-b31a-d5a357dc948d_581x581.png</url><title>Ian Writes</title><link>https://write.ianwsperber.com</link></image><generator>Substack</generator><lastBuildDate>Sat, 25 Apr 2026 18:12:35 GMT</lastBuildDate><atom:link href="https://write.ianwsperber.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Ian]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[ian1349228@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[ian1349228@substack.com]]></itunes:email><itunes:name><![CDATA[Ian]]></itunes:name></itunes:owner><itunes:author><![CDATA[Ian]]></itunes:author><googleplay:owner><![CDATA[ian1349228@substack.com]]></googleplay:owner><googleplay:email><![CDATA[ian1349228@substack.com]]></googleplay:email><googleplay:author><![CDATA[Ian]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Morality without Consciousness]]></title><description><![CDATA[Part 2: Ethical considerations vis-a-vis goo]]></description><link>https://write.ianwsperber.com/p/morality-without-consciousness</link><guid isPermaLink="false">https://write.ianwsperber.com/p/morality-without-consciousness</guid><dc:creator><![CDATA[Ian]]></dc:creator><pubDate>Sat, 18 Apr 2026 06:00:30 
GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7018d88e-81b3-43ba-9d00-a9ca351e202e_2816x1536.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><em>This is the second post in a short series about consciousness. I won&#8217;t assume you&#8217;ve read the <a href="https://write.ianwsperber.com/p/what-is-the-color-blue">first post</a>, but it might be helpful if you&#8217;re unfamiliar with philosophy of mind.</em></p><p>A recent post on LessWrong, &#8220;<a href="https://www.lesswrong.com/posts/qfitpqvQzeZy2mSGi/the-fourth-world">The Fourth World</a>,&#8221; gets at an important implication of consciousness&#8212;namely, that we ought to suspect further aspects of reality than we can today observe&#8212;but I&#8217;m not sure it arrives there the right way, or that it derives the correct conclusions.</p><p>The author makes two assumptions in his post which ought not to come for free. Firstly, the author assumes that we could not explain consciousness through physicalism. This is already a strong claim, one with which about half of surveyed philosophers disagree. Secondly, the author assumes that consciousness is the basis for morality, or that our moral obligations are primarily owed to conscious beings.</p><p>I think this second assumption is particularly common. At a recent meetup, I heard a participant complain that it is really hard to talk about consciousness, because we are always tying up consciousness with morality. 
I suspect this happens because we sneak non-physicalist assumptions about consciousness into discussions about morality.</p><p>So, I would like to defend the author&#8217;s claim that consciousness ought to cause us to suppose further unobserved aspects of reality (which could be very different from the reality we know), while also defending a purely physicalist explanation for consciousness, and decoupling consciousness from morality. I will continue to respond to specific arguments in &#8220;The Fourth World,&#8221; though my goal is to articulate my own, positive argument, rather than critique the post.</p><p>Throughout, whenever I refer to &#8220;consciousness&#8221; without a qualifier, I am referring to phenomenal consciousness, or the &#8220;hard problem&#8221; of consciousness, or qualia. I&#8217;ll use words like &#8220;cognition&#8221; or &#8220;neurophysiology&#8221; to refer to non-phenomenal processes, which we typically understand as explainable by modern neuroscience, even if imperfectly. The distinction is important, as we otherwise backdoor capabilities like judgement and agency into consciousness without sufficient justification. I believe this is the crux of <em>why</em> so many of us intuit consciousness as a prerequisite for morality, when I think it is very possibly not the case.</p><h4><strong>Dualism as Semantics</strong></h4><p>The author introduces us to his fourth world by supposing three distinct domains to reality: the physical world, the mathematical world, and consciousness. The author later claims that a non-conscious, robot civilization would never suspect consciousness exists.</p><p>I surmise the author is pointing at some kind of dualism, with mathematics and logic being a third, metaphysical entity. 
The difficulty with dualist claims is that they either rely on a &#8220;mystical,&#8221; non-physical substance, or they rely on semantic qualifications for the physical world.</p><p>For example, let us suppose that we learn consciousness is actually the emergence of a soul, dipping its head in from some heavenly plane of existence. We learn that all our physical laws are arbitrary and completely determined by the whim of God. In this extreme scenario, do we then accept dualism?</p><p>Let&#8217;s reframe the question. Let us suppose instead that we learn consciousness is the emergence of higher dimensional reality within our own. That higher dimensional reality determines all the physical laws of our known universe. In this scenario, do we accept dualism, or merely expand our definition of physics to account for our new knowledge of reality?</p><p>My point is that nearly all explanations for consciousness ought to collapse towards physicalism, because any fundamental discoveries required to explain consciousness also compel us to redefine physics accordingly. I will retain the term &#8220;mysticism&#8221; for any explanations that depart so far from a conventional understanding of reality as to render an expansion of the term &#8220;physicalism&#8221; meaningless, i.e. if consciousness is actually the soul. So, we could concede dualism for the first example, but we ought to stick to physicalism for the second.</p><p>If we accept a physicalist explanation of phenomena as at least <em>possible</em>, then we should not assume a robot civilization could never discover consciousness. We have no way to observe or infer consciousness today, but if consciousness exists in physical reality, then it at least seems it <em>might</em> be possible. 
Without a scientific explanation for the hard problem of consciousness, it is premature to rule this out.</p><h4><strong>Phenomena are Intrinsic</strong></h4><p>Aside from a physicalist or mystic explanation for consciousness, we could also suppose that phenomena do not exist at all, or that consciousness is an illusion. <a href="https://plato.stanford.edu/entries/qualia/#IllAboQua">Illusionism</a> is a highly counterintuitive stance, given we all have the vivid impression of our own consciousness (how do you convince someone that they do not actually perceive the &#8220;redness of red&#8221;?). In some sense, our subjective experience is the <em>only</em> thing we know (cf. solipsism), so it seems really weird to claim it does not actually exist.</p><p>Our experience of existence <em>is</em> phenomenal, so I don&#8217;t see a way to refute the hard problem without asserting we are all already zombies (I experience, yet sadly I do not exist). This is contrary to my own axiomatic belief that I experience existence, so I don&#8217;t see any way to proceed further with a strong illusionist argument. If you choose a different axiom, well, ok.</p><p>But illusionism is right to question what and when something is actually phenomenal. In other words, I find it weird to refute qualia, but legitimate to debate the <em>scope</em> of qualia. Because when referring to consciousness, we are often actually referring to the reactive and evaluative machinery <em>behind</em> consciousness, e.g. cognition, and not to phenomena themselves.</p><p>Consider our phenomenal experience of pain. We generally dislike pain, and indeed many ethical frameworks take for granted that we ought to minimize suffering. But why? 
When we say pain is bad, are we evaluating a distinct &#8220;quale of pain&#8221; as bad, or are we reacting negatively to the neurophysiological process of pain?</p><p>I&#8217;ll refer to this as the difference between an <em>extrinsic</em> and <em>intrinsic</em> interpretation of consciousness. If we take a physicalist position on the hard problem of consciousness, then we can suppose phenomena are either <em>intrinsic</em> to neurophysiological processes in a way that we do not yet understand, or that phenomena are <em>extrinsic</em> to neurophysiological processes. In other words, is there a property of neurophysiology that we cannot yet quantify (like weight), or are there entities extrinsic to our neurophysiological makeup that we have not yet discovered (like a particle)?</p><p>My reading of the dominant theories of consciousness (e.g., IIT, Global Workspace Theory) is that they lean toward an intrinsic view. Essentially, there is something about our neurophysiology that just <em>does</em> induce phenomena. The two are inseparable; phenomena just <em>are</em> a property of some neurophysiological processes. We don&#8217;t know yet what that something is, though it might involve <a href="https://www.youtube.com/watch?v=DI6Hu-DhQwE">weird fundamental physics</a>. In this view, there is no sense in talking about a phenomenon of pain <em>separate</em> from the underlying neurophysiological activity, in the same way it does not make sense to discuss weight separate from gravity or mass.</p><p>For an extrinsic example, imagine that there was an undetected particle of consciousness, the c-particle, which somehow interacted with neurophysiological processes, or was produced by them. Phenomena are actually composed of c-particles. We might further suppose that c-particles come in many flavors. There is a c-particle of pain, a c-particle of happiness, etc. Or perhaps they&#8217;re just different arrangements of c-particles, who knows! 
Mental states, as we understand them, are actually determined by c-particles. Unlike with an intrinsic view of phenomena, it actually <em>is</em> coherent to discuss pain separate from the underlying neurophysiology. We just need the right c-particles.</p><p>I have admittedly chosen a silly example for an extrinsic view of consciousness; I imagine few would argue for an actual &#8220;c-particle.&#8221; But any extrinsic view will require a mysterious &#8220;something else&#8221; to explain consciousness, which must then causally interact with our neurophysiology, unless we are willing to accept consciousness as a mere epiphenomenal side-effect.</p><p>It would be wrong to claim either of these theories is &#8220;correct.&#8221; As physicalists, we have to accept this as a question for science, and in reality, consciousness may not fit so neatly into my intrinsic/extrinsic divide. But I still find it much more <em>likely</em> that consciousness is somehow intrinsic to neurophysiology. An intrinsic theory requires less deviation from contemporary neuroscience, as we don&#8217;t have to posit that neuroscience is somehow insufficient to explain mental processes (no c-particles intervene), and we can neatly sidestep some of the problems of epiphenomenalism (certain mental states are intrinsically experiential).</p><p>So, if we suppose an intrinsic, physicalist theory of consciousness, then our experience of pain is by definition <em>inseparable</em> from the pain reaction itself. It is, in fact, <em>nonsensical</em> to discuss an experience of pain distinct from the neurophysiological activity associated with pain. That is pain; the thing itself <em>is</em> the neurophysiology!</p><p>I suspect illusionism is right that there is nothing like a c-particle. There is no independent experience or qualia of a good cup of coffee&#8212;there is only the experience of our own neurophysiology tasting the coffee and evaluating it as good. 
Consciousness itself is not responsible for judgment and evaluations&#8212;that work is all done by the mechanics of neurophysiology. Consciousness does not &#8220;intervene&#8221; to make its own decisions.</p><h4><strong>Generalizing Ethics with Preferences</strong></h4><p>I&#8217;d now like to consider the ethics downstream of a physicalist, intrinsic understanding of consciousness. Namely, if consciousness is intrinsic to neurophysiological activity, then why should ethics fix itself on the conscious aspects of that activity? Is this not an arbitrary distinction? Should we not rather respect the <em>preferences</em> that activity represents?</p><p>I suspect that phenomena, as intrinsic properties of neurophysiology, are undifferentiated. A chair and an apple have different weights, but we don&#8217;t distinguish the weight of furniture from the weight of fruit. Similarly, our experience of pain and happiness may <em>feel</em> different, and may correspond to different biological states, but both experiences are ultimately &#8220;just&#8221; phenomena, distinguished thanks to our mental capacities. I would guess that the same is true even of sensory qualia, like the &#8220;redness of red.&#8221;</p><p>If all this seems highly speculative, then let me simply state that no phenomenon seems to me inherently good or bad. Experiencing happiness is distinct from experiencing happiness as good. Now, there might be something about e.g. dopamine that causes me to <em>want</em> more happiness. But that is a consequence of my neurophysiology; it is one of the reasons I experience happiness as good. It&#8217;s not a quality of my experience of happiness (the phenomenon of happiness) itself.</p><p>Perhaps one could object that there is no experience of happiness without wanting happiness to continue, or no experience of suffering without wanting suffering to stop; that the mental state cannot be differentiated from its reaction. 
But is this not just another reason to focus on the reaction, or the expressed preference, rather than the experience itself?</p><p>The distinction I&#8217;m trying to make may be irrelevant when discussing humans, who have well-understood preferences for different mental states. But we have to be very precise if we would like to generalize moral principles to non-human intelligence.</p><p>Let&#8217;s suppose we are visited by a highly developed species of alien goo. We suppose the aliens are somehow biological, but it&#8217;s unclear! The aliens are very different from us. The aliens also have the strange habit of climbing to the tallest point in any room they enter. If for any reason the aliens cannot reach the tallest point, as in when they are restrained, the alien goo begins to vibrate. When released, the alien quickly proceeds to climb to the tallest point in the room.</p><p>Now, imagine the aliens are observing your living room. The aliens are very bad at detecting electromagnetic radiation and mostly observe the world through touch and vibration. The aliens are very interested in your potted plant, which slowly adjusts its leaves over the course of the day. The aliens can&#8217;t tell that the plant is adjusting to the sunlight. They just observe the plant always closes up its leaves for half the day.</p><p>Question: Without any further information, do we have a moral obligation to allow the alien to climb to the highest point in our room? Does the alien have a moral obligation to allow the plant to close its leaves?</p><p>Ultimately the answer will depend on your moral framework. But if you would agree that we have a moral obligation not to cause an alien pain, then I think you should say yes, we are both morally obliged to allow the observed entity to act according to their inferred preference.</p><p>In both scenarios, the observed entity exhibits a clear preference. The alien always wants to climb as high as possible. 
The plant always wants to open and close its leaves. We don&#8217;t have any clear idea of whether the entities in question are conscious. But why would this matter? Consciousness is intrinsic to some unknown subset of entities in the universe. Neither we nor the alien can detect whether an entity is conscious. But we <em>do</em> know that consciousness has always corresponded with the expression of preferences.</p><p>You might be thinking that the real difference is the plant&#8217;s response to light is entirely automatic, while we are deeply cognitive agents. You can&#8217;t compare our preferences to those of a plant! But I think this overlooks how dumbly reactive a lot of preferences are, even if they are unclearly expressed. If you poke me with a needle, I will want you to stop, even if I keep a straight face. I don&#8217;t have very much control over my pain response. It just happens!</p><p>We might qualify that a preference should represent an entity&#8217;s interests counter to the second law of thermodynamics. A rolling stone does not represent a preference. However, if it began to roll uphill, it would!</p><p>I anticipate several counterarguments to a na&#239;ve definition of preference, with potentially absurd conclusions. For example, suppose the alien also noticed your rotating fan. Wouldn&#8217;t the alien have to suppose a moral obligation to the fan as well? And how do we avoid moral equivalences between turning off a fan and trimming a plant and killing a person?</p><p>These are legitimate lines of critique that a moral framework using preference as the qualifier for moral obligation would have to answer. But it&#8217;s easy to imagine different mechanisms to do this, such as by defining some heuristic for the &#8220;strength&#8221; of preference, similar to the way utilitarianism thinks about utility, or looking at the reversibility of decisions. I&#8217;m also not trying to argue for some sort of cosmic libertarianism. 
The practice of ethics is inevitably messy, with lots of confusing gray areas. A good follow-up would be to evaluate a full moral system based on preference, maybe trying to stress test a few repurposed moral imperatives in a system that made no assumptions about consciousness. However, I am optimistic that in every instance where one might be tempted to evaluate ethics on the basis of consciousness, one could instead insert preference.</p><h4><strong>An Unobserved Reality</strong></h4><p>I accept that you might find my argument for preferences insufficient, either because an ethical system for preference is not yet defined, or because you believe that consciousness itself is very special. Though a physicalist should wonder <em>why</em> consciousness is restricted to the mind, it&#8217;s certainly the popular consensus.</p><p>What I question is how one can be confident that consciousness is unique. We already know there is at least <em>one</em> aspect to reality that is utterly unobservable to an outsider. We would have no concept of phenomena if it were not for our own minds. Is this not the best possible evidence that there might be <em>other</em> aspects of reality we cannot observe?</p><p>This is what &#8220;The Fourth World&#8221; gets right. Though why limit ourselves to a &#8220;fourth&#8221; world? For all we know, there might be <em>infinitely</em> many aspects of reality that we cannot observe today, or which may be fundamentally unobservable. Like the author, I find this incredibly exciting.</p><p>But these tremendous unknowns as to the nature of reality should give us pause before rating consciousness as unique and morally important.</p><p>Let&#8217;s consider a speculative example, meant to demonstrate the logical possibility of alternatives to consciousness in a physicalist universe.</p><p>Returning to our earlier alien scenario, let&#8217;s pretend we have uncovered the physical substrate for consciousness. 
In fact, it&#8217;s now relatively easy to purchase qualia counters, which, functioning similarly to a Geiger counter, allow us to estimate the &#8220;amount&#8221; of phenomena in an entity. Holding our qualia counter up to the alien blob, we fail to detect any phenomenal consciousness!</p><p>However, the aliens have their own device. The alien physiology somehow fails to produce consciousness, but it <em>does</em> involve some other, mysterious aspect of reality&#8230; say, &#8220;monads.&#8221; And lowering a &#8220;monad&#8221; counter down to us, the aliens fail to detect a single one.</p><p>If we accept there could be <em>further</em> aspects of reality that we cannot observe, not unlike consciousness, then we should not take for granted that consciousness is privileged among intelligent beings. I understand this is very weird to consider, but I do not think it is any weirder than the hard problem of consciousness is already. Our example is similar to Nagel&#8217;s famous &#8220;What it&#8217;s like to be a &#8230;&#8221; thought experiment, with the additional caveat that we swap out consciousness itself for some new, unknown aspect of reality. Rather than ask what it is like to experience the qualia of a bat with a bat&#8217;s cognition, we ask what it is like to have the &#8220;monads&#8221; of an alien goo with an alien goo&#8217;s cognition (where &#8220;monads&#8221; are strictly different from qualia).</p><p>All the standard caveats that conceivability does not mean reality apply. But given the existing evidence for alternative paths to cognition apart from terrestrial neurophysiology (e.g. 
machine intelligence), we ought to consider seriously whether there might be alternatives to phenomena <em>like</em> consciousness, but substantively different.</p><div><hr></div><p>To summarize my argument:</p><ol><li><p>Physicalism is a popular and reasonable explanation of consciousness.</p></li><li><p>A physicalist&#8217;s best guess should be that consciousness is somehow intrinsic to neurophysiology, otherwise we have to make strange ontological and scientific conclusions (like c-particles).</p></li><li><p>Once we assume consciousness is physically intrinsic to the neurophysiological process itself, it is no longer necessary to assume moral obligations to one aspect of that process.</p></li><li><p>We should instead assess moral obligations to the preferences exhibited by neurophysiology. Presumably, we also owe some obligations to any agent which exhibits preferences, but this is the responsibility of a moral framework to judge.</p></li><li><p>One can try to defend an obligation to consciousness by asserting it is special.</p></li><li><p>However, if we cannot observe consciousness, we ought to suppose there could be further aspects of reality we cannot observe, similar or dissimilar to consciousness in ways we do not yet understand.</p></li><li><p>It is conceptually possible to imagine beings very different from ourselves whose cognition involves these unknown aspects of reality. So, it would be premature to determine that consciousness demands unique obligations.</p></li></ol><p>Obviously, these final conclusions are several steps removed from ground truth. However, I have tried to surface the implications of what I assess to be the most likely explanations for phenomenal consciousness. For right or wrong, a lot of beliefs are downstream of our explanations for consciousness. 
While the hard problem remains unsolved, it would be good to continue exploring the ethical implications of different theories.</p>]]></content:encoded></item><item><title><![CDATA[Friendship]]></title><description><![CDATA[Waking up at 5 in the morning, Francis rubbed his chin hair and asked his pet tarantula what he would like to do today.]]></description><link>https://write.ianwsperber.com/p/fiction-1-friendship</link><guid isPermaLink="false">https://write.ianwsperber.com/p/fiction-1-friendship</guid><dc:creator><![CDATA[Ian]]></dc:creator><pubDate>Wed, 01 Apr 2026 04:02:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/57472e84-2d3d-4d14-8486-10d50f8b0ccf_2681x1447.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Waking up at 5 in the morning, Francis rubbed his chin hair and asked his pet tarantula what he would like to do today. Bobby, the tarantula, was 8 inches wide, from foot to foot. He sprang onto Francis&#8217;s hand in response to the question. Francis carried Bobby around the house while he drank his coffee. Bobby could not speak to Francis directly, but Francis intuited answers that became clearer with time.</p><p>Francis loved to take Bobby outside, because outside Bobby was free. Francis surmised that he himself was never free, as he was slave to a system outside his control. 
Though he sometimes attributed this system to his employer, he felt his non-freedom was actually a property of the system itself, which he referred to semi-ironically as &#8220;The Man&#8221; (he scare-quoted the term even in his mind), and that he had little chance of escaping. While several of Francis&#8217;s friends had moved on to accomplish great things, Francis had always stayed put. Francis had considered this an act of rebellion, perhaps the only kind possible to a man in his situation.</p><p>Bobby did not think at all about systems or freedom. Bobby, as an arachnid, was only capable of brief cognitive tasks. Bobby experienced small joys, like the rush of walking on bodies, but he had no capacity to ruminate on joy. Bobby had a subjective experience of life that extended just beyond his carapace, and only a dim awareness of his keeper.</p><p>It proved to be a sunny day, so Francis wanted to go to a park. This was a day off for Francis, and he was done with sitting on his phone watching videos, he was really going to do something and take Bobby outside. Francis had vague ideas that the sun powered the world, so it was important to sit in the sun and accept its energy. Bobby would also lie on his chest, or settle into his portable terrarium, as the two heated then annealed through the afternoon.</p><p>When outside with Bobby, Francis spent a non-trivial amount of his time worrying whether Bobby would die. There were many heavy objects outside, which rarely fell without warning, but it had happened, and an object would not need to be so heavy to hurt Bobby. There were also dogs, which were purpose-built to kill or maim the helpless. Francis had briefly considered a dog before getting Bobby, but as he was careless and sometimes did not leave his home for days at a time, he decided a dog would be a poor fit. 
He was very happy he had Bobby and would not be happier if he had a dog.</p><p>Today was a Thursday, which was sometimes a day off, depending on the schedule The Man gave him, and the park was full of families, which Francis found strange, because children and parents had responsibilities he did not. Bobby saw the children in only the strange spider-like way in which Bobby perceived anything, which is a complete mystery to both Bobby and myself, because I am not a spider nor an arachnologist and so can only rely on what Francis and Bobby have told me, which is little.</p><p>But Bobby, if I were to extrapolate, is a very happy spider. Bobby is covered in hairs that quiver when Francis picks him up. Bobby exists for a few basic purposes in life, one of which is to reproduce, and the other of which is to kill. Bobby loves to eat the roaches and grasshoppers tossed into his terrarium, though again &#8220;love&#8221; is a hard concept to attribute to Bobby. Bobby has strong desires that cause him to behave in certain ways, and Bobby has an experience of these actions but rarely reflects on them. Still, there is something, isn&#8217;t there, that makes Bobby act the way he does? And isn&#8217;t that something the special connection between Francis and Bobby? Which Francis is in this moment so sure of, as he lifts Bobby onto his palm, raising him toward the tulip tree, under which he&#8217;s laid his beach towel.</p><p>Francis is sure he was meant to become someone more than the person he is, but he&#8217;s unsure who that someone would have been. Francis was not happy in school, nor is he at work. He was happy at home, though he was also depressed at home, and he is happy at the park or walking in the woods, but only briefly, and then he is desperate to return home. 
Francis felt confused about the person he was supposed to be, but he had never been told who that should be, and he could not figure it out on his own.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EPX1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3ed21c8-3771-46e0-8831-0ea492926f7d_2681x1447.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EPX1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3ed21c8-3771-46e0-8831-0ea492926f7d_2681x1447.png 424w, https://substackcdn.com/image/fetch/$s_!EPX1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3ed21c8-3771-46e0-8831-0ea492926f7d_2681x1447.png 848w, https://substackcdn.com/image/fetch/$s_!EPX1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3ed21c8-3771-46e0-8831-0ea492926f7d_2681x1447.png 1272w, https://substackcdn.com/image/fetch/$s_!EPX1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3ed21c8-3771-46e0-8831-0ea492926f7d_2681x1447.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EPX1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3ed21c8-3771-46e0-8831-0ea492926f7d_2681x1447.png" width="654" height="353.0521978021978" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b3ed21c8-3771-46e0-8831-0ea492926f7d_2681x1447.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:786,&quot;width&quot;:1456,&quot;resizeWidth&quot;:654,&quot;bytes&quot;:8339394,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/192759628?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3ed21c8-3771-46e0-8831-0ea492926f7d_2681x1447.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EPX1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3ed21c8-3771-46e0-8831-0ea492926f7d_2681x1447.png 424w, https://substackcdn.com/image/fetch/$s_!EPX1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3ed21c8-3771-46e0-8831-0ea492926f7d_2681x1447.png 848w, https://substackcdn.com/image/fetch/$s_!EPX1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3ed21c8-3771-46e0-8831-0ea492926f7d_2681x1447.png 1272w, https://substackcdn.com/image/fetch/$s_!EPX1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3ed21c8-3771-46e0-8831-0ea492926f7d_2681x1447.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>If it were possible, I would give Francis a sign. I would, I really would! But Francis is locked into a journey of his own imagination, refusing to admit the little faults that have put him in this position. And while it is true that life has never been fair to Francis, it has also been true he has never put in too much effort, despite the structural advantages that might have allowed him to do much more than he ever did.</p><p>Francis is now lying down, trying to convert the warmth of the sun into the will to become the person he was always supposed to be. Bobby, meanwhile, crouches in his terrarium, nearly unconscious, dizzy with extraterrestrial energy.</p><p>Francis weaves in and out of sleep as he is obliterated by UV rays, unshaded by the tulip tree in the late afternoon. 
Francis dreams about his mother, briefly, before imagining his whole body as a gelatin mold. It is a moment in which every consideration feels profound. He, in his half-consciousness, tries to direct his imagination towards more epic thoughts, such as what it would be like to be a tarantula, or a sex icon. But Francis&#8217;s inner self bucks at the suggestion, kicking him out of his own unconsciousness, until he gives in, allowing his deepest anxieties to once again direct the frame of motion. Francis, when fully asleep, has his recurring stress dream that he is lost in an auditorium full of toilets.</p><p>Tarantulas have 8 eyes and 2 kinds of photoreceptors, which allows Bobby to perceive Francis from a multitude of angles and in colors which are totally inaccessible to humans. Francis has never thought of this, not exactly, but it would only further endear Bobby to him if he had. Bobby sits and does not look at anything, or not with focus. Bobby, as an individual self, barely exists as he sits in his blazing hot plastic cage, nestled in the grass beneath a tulip tree on a late afternoon in May.</p><p>Bobby is fine once he is back home in the cool apartment where he and Francis live. Francis is red in the face from having slept in the sun, but Francis has always considered himself swarthy, since his father was Mexican, even if he otherwise looks entirely Caucasian and cannot speak Spanish and has no cultural connection to Mexico, since his father moved to America at a young age and did not talk about his past, or even have a very close connection to his own parents. 
Francis in this way feels as though he has been disinherited from a culture he had a right to, yet which remains inaccessible, except through clich&#233; tokens of culture, like a tortilla press, which he never uses but leaves on display.</p><p>Francis, however, does not like to think about culture and race, because it only makes him angry, as the world is already full of people who do not understand him, and this is one of the many dimensions by which he is not understood. He does, in this respect, among others, greatly envy Bobby, who has never been interpellated, except somehow as a pet, or an enemy.</p><p>While Bobby sits alone in his dimly lit terrarium, he has a growing awareness of his surroundings, because at some low level he is aware he must eat. This is not urgent, because Bobby has recently eaten, but he is still aware that he must. This is a dumb, mortal impulse in Bobby, one of the few things that carries him forward. A sensation of what must occur, else everything will end. Bobby&#8217;s own struggles are best encapsulated in this tension between the need to eat and his indifference to eating, up until the moment where eating becomes an absolute imperative, which has only occurred rarely, like during one of Francis&#8217;s depressive episodes, when even Bobby is beyond his interest.</p><p>Francis did not talk to Bobby again that day or even look at him. When he goes to bed, Bobby will have crawled up one of the glass walls of his terrarium, but Francis will accept this as a smear in the phantasmagoria of conscious experience, along with all the other stimuli leading to the perception of his room.</p><p>Francis was tired of living every day so mechanically, but he could not find a way out. He ate his dinner on the couch and then spent a very long time looking at his phone. 
Francis later lay in his bed and wondered why he must be mortal.</p>]]></content:encoded></item><item><title><![CDATA[Turtles]]></title><description><![CDATA[Stress-testing arguments with recursive logic]]></description><link>https://write.ianwsperber.com/p/turtles</link><guid isPermaLink="false">https://write.ianwsperber.com/p/turtles</guid><dc:creator><![CDATA[Ian]]></dc:creator><pubDate>Tue, 24 Mar 2026 05:00:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0791569d-dc53-4194-8a3a-2bf04ffab038_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#8220;Turtles&#8221; is my shorthand for <a href="https://en.wikipedia.org/wiki/Infinite_regress">infinite regress</a>, or the rebuttal &#8220;it&#8217;s turtles all the way down.&#8221; Infinite regress is a kind of justification that relies on a never-ending chain of assertions. At first glance, the justification may appear sound, but continued examination reveals that the proposition does not point to any ground truth. For example, if the earth is carried on the back of a turtle, then what carries the turtle? Why, another turtle of course!</p><p>I adopted turtles as my colloquial shorthand long before encountering the formal term &#8220;infinite regress.&#8221; This had the unfortunate consequence that I would sometimes end conversations abruptly, shouting furiously about spurious turtles. But I still think the colloquialism has utility, because it allows us to leverage the challenges of infinite regress more broadly, to critique any argument that starts to buckle after a number of recursive applications (even a finite number).</p><p>The presence of turtles might not negate an argument outright, but it ought to compel us to consider the limitations of its explanations.</p><p>Inductive reasoning and proofs all eventually hit upon the limits of justification.
The <a href="https://en.wikipedia.org/wiki/M%C3%BCnchhausen_trilemma">M&#252;nchhausen Trilemma</a> denotes three problematic yet fundamental types of justification: infinite regress, circular logic and axiomatic belief (or dogma).</p><p>Consider a key proposition P, which we seek to justify with three separate arguments, R, U and X. We can model our arguments for P as a directed graph, with each node representing a distinct proposition, and each edge representing an epistemic justification. Though each node may <em>try</em> to provide final justification, further inquiry can always advance us on the graph.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eZuL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F218875dd-1501-4fc5-af2f-d9e6f7e612a2_1000x758.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eZuL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F218875dd-1501-4fc5-af2f-d9e6f7e612a2_1000x758.png 424w, https://substackcdn.com/image/fetch/$s_!eZuL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F218875dd-1501-4fc5-af2f-d9e6f7e612a2_1000x758.png 848w, https://substackcdn.com/image/fetch/$s_!eZuL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F218875dd-1501-4fc5-af2f-d9e6f7e612a2_1000x758.png 1272w, https://substackcdn.com/image/fetch/$s_!eZuL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F218875dd-1501-4fc5-af2f-d9e6f7e612a2_1000x758.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!eZuL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F218875dd-1501-4fc5-af2f-d9e6f7e612a2_1000x758.png" width="566" height="429.028" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/218875dd-1501-4fc5-af2f-d9e6f7e612a2_1000x758.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:758,&quot;width&quot;:1000,&quot;resizeWidth&quot;:566,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!eZuL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F218875dd-1501-4fc5-af2f-d9e6f7e612a2_1000x758.png 424w, https://substackcdn.com/image/fetch/$s_!eZuL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F218875dd-1501-4fc5-af2f-d9e6f7e612a2_1000x758.png 848w, https://substackcdn.com/image/fetch/$s_!eZuL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F218875dd-1501-4fc5-af2f-d9e6f7e612a2_1000x758.png 1272w, https://substackcdn.com/image/fetch/$s_!eZuL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F218875dd-1501-4fc5-af2f-d9e6f7e612a2_1000x758.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>When forming an argument we normally want to avoid infinite regress and circular logic, and strive to ensure our axiomatic beliefs are few and foundational.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Consider then how we might look for turtles in a current popular idea like simulation theory.</p><p>Simulation theory supposes that our world is extremely likely to be computer simulated. The reasoning is that any advanced alien civilization would develop simulation technology, given expected advances in computing, and that simulated worlds would then quickly outnumber real worlds.
(And if we don&#8217;t live in a simulated world, it&#8217;s only because it&#8217;s impossible to simulate a world, or because civilizations collapse before it&#8217;s achieved.)</p><p>Simulation theory causes me to grumble moodily about turtles.</p><p>Let&#8217;s accept the basic premise that it is extremely likely we are in a simulation. Accepting this premise leads to the conclusion that our civilization, or any alien civilization within our simulation, will also seek to create simulations. This chain of reasoning continues ad infinitum, until we suppose that there are an infinite number of simulated realities, all descending from a hypothetical base reality, which runs our infinite number of realities. This would require our base reality to have a computer with infinite energy, which would seem to violate thermodynamics.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>I would not say that my turtle critique is an outright refutation of simulation theory, but it does seem to imply some critical limitation on simulations which hobbles the argument. For example:</p><ol><li><p>We could posit new fundamental physics in the base reality&#8230; but basing an argument on an appeal to unknown fundamental physics is like appealing to magic.</p></li><li><p>We could claim that our simulation is unable to generate further simulations, or that the number of simulated worlds is capped&#8230; but this undermines the core hypothesis that simulated worlds will vastly outnumber real worlds. If we can never simulate worlds, why suppose that a hypothetical civilization could?</p></li><li><p>We could claim that only our own <em>experience</em> is simulated&#8230; but this reduces the argument to a contemporary solipsism.</p></li></ol><p>(Notice the final defense of simulation theory is solipsism, i.e.
the fundamental challenge necessitating axiomatic belief&#8212;we must choose to believe the external world exists! All claims about external reality at least require an axiomatic belief that external reality exists.)</p><p>So simulation theory at least exhibits major cracks when pressed for a recursive application of its own terms. I would anticipate there are more clever strategies to mitigate the damage.</p><p>When looking for turtles, we don&#8217;t need to search for an infinite recursion which breaks the entire argument. It might be sufficient to demonstrate that a theory gets increasingly &#8220;weird&#8221; under progressive application.</p><p>As mentioned in a <a href="https://write.ianwsperber.com/p/10-assertions-on-morality">previous blog post</a>, I am quite interested in moral obligation. Though I am very sympathetic to the concept, I am nonetheless resistant to maximalist interpretations, i.e. the idea that we are obliged to do the most good possible. Mostly this is because the ideas of &#8220;good&#8221; and &#8220;values&#8221; are all tied up, in ways that traditional Christian morality obfuscates, and to strive for a strictly altruistic or &#8220;saintly&#8221; life is not universalizable.</p><p>Effective altruism in its various manifestations is the contemporary movement which comes closest to this altruistic injunction, even if contemporary practitioners might distance themselves from Peter Singer&#8217;s original, strong formulation of the idea (i.e., the idea that one should give up to the point where further giving would cause yourself severe suffering). 
Though I would not attack EA per se (as it seems to me a movement that has done a large amount of good), altruism can exhibit strange behavior under turtling, which may help set some reasonable limits on its application.</p><p>I&#8217;ll focus my critique on altruism directed at the reduction of suffering&#8212;which, though worthy, does not tell us very much about what makes for a good life beyond &#8220;not suffering.&#8221;</p><p>Suppose that one has taken Peter Singer&#8217;s argument to heart and maximally devotes one&#8217;s life to the reduction of suffering. Consider further that one has then taken altruism as a <em>telic</em> activity, meaning we consider altruism itself as a good, rather than a means to an end (e.g., we are living to reduce suffering). Though there is certainly enough suffering in the world for one person to live entirely for altruism, the utility of altruistic activity will decrease for every additional altruistic individual. As more people become altruistic, does the altruist continue to define a good life around altruism? Should the process continue, up until everyone is an altruist, each seeking to minimize each other&#8217;s suffering?</p><p>Such an end point seems obviously absurd. I do not wish to strawman EA. I would not disagree that there ought to be some equilibrium between altruism and status quo behavior, which likely involves greater altruism than exhibited today.</p><p>But I do think this turtling of altruistic principles highlights an important tension between moral obligation and value creation. The utility of altruism is <em>already</em> much lower than it was 20, 50 or 200 years ago. In <em>Bleak House</em>, Dickens satirized his character Mrs. Jellyby for charity directed abroad, when so much impoverishment was visible around her.
That critique had some weight in the 19<sup>th</sup> century, when domestic suffering could be quite severe, but makes less sense when extreme poverty has been nearly eliminated in the West. Yet the same progress is occurring abroad, as global poverty rates decline. The need for assistance persists, but not to the degree that we need maximal commitment to altruism.</p><p>As a counterpoint, keep in mind my critique is of altruism targeting a reduction of suffering. One easy tweak might be to shift from a reduction of suffering to an increase of &#8220;joy,&#8221; or whatever your chosen measure of a good life. Reduction of suffering is perhaps implicit in an increase of joy. I expect most effective altruists would endorse this goal, even if the discourse tends to center around reduction of suffering and other risks.</p><p>Focusing on joy also allows us to endorse <em><a href="https://open.substack.com/pub/grecowansley/p/how-to-cheat-the-thief-of-joy?r=193xv&amp;selection=c627a2ba-9873-4f27-bd19-2f96b251bd28&amp;utm_campaign=post-share-selection&amp;utm_medium=web&amp;aspectRatio=instagram&amp;textColor=%23ffffff&amp;bgImage=true">atelic</a></em> activities, which have no formal goal. An example might be spending time with family, or learning the violin, or writing a blog post. Atelic activities are highly resistant to turtles. We do them because they bring us joy, and we axiomatically believe that joy is good.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>Telic activities always have some susceptibility to turtling. If you derive value from climbing a corporate ladder&#8212;each rung the successful application of our value system&#8212;what happens when you reach the top? If you derive value from AI research, what happens when you&#8217;ve built ASI? At its simplest, turtling is nothing more than a technique to follow arguments and belief systems to their logical extremes. 
Use it!</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Axiomatic beliefs are often left unaddressed, unless they are strong claims like the existence of god. For example, my belief in the existence of external reality is axiomatic, but I do not feel the need to anchor every essay with an assertion that external reality exists. There are gradations of belief built atop axiomatic beliefs (for example, I believe in the scientific method, because I believe it has yielded results before and I believe it will again), and we do sometimes insert axiomatic beliefs in lieu of rational arguments (for example, I have a near-axiomatic belief in moral obligation, even if my thinking here remains poorly defined).</p><p>Note that the frontier of knowledge is usually expanded by adding justifications (resolving uncertainties), not by replacing axioms.
For example:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_Cis!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a3ebcea-2ea4-488b-9192-b20e548ca875_1012x1216.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_Cis!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a3ebcea-2ea4-488b-9192-b20e548ca875_1012x1216.png 424w, https://substackcdn.com/image/fetch/$s_!_Cis!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a3ebcea-2ea4-488b-9192-b20e548ca875_1012x1216.png 848w, https://substackcdn.com/image/fetch/$s_!_Cis!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a3ebcea-2ea4-488b-9192-b20e548ca875_1012x1216.png 1272w, https://substackcdn.com/image/fetch/$s_!_Cis!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a3ebcea-2ea4-488b-9192-b20e548ca875_1012x1216.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_Cis!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a3ebcea-2ea4-488b-9192-b20e548ca875_1012x1216.png" width="388" height="466.2134387351779" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9a3ebcea-2ea4-488b-9192-b20e548ca875_1012x1216.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1216,&quot;width&quot;:1012,&quot;resizeWidth&quot;:388,&quot;bytes&quot;:107124,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/191889388?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba444aba-1682-4a14-a57c-c034f98392b8_1012x1232.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!_Cis!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a3ebcea-2ea4-488b-9192-b20e548ca875_1012x1216.png 424w, https://substackcdn.com/image/fetch/$s_!_Cis!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a3ebcea-2ea4-488b-9192-b20e548ca875_1012x1216.png 848w, https://substackcdn.com/image/fetch/$s_!_Cis!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a3ebcea-2ea4-488b-9192-b20e548ca875_1012x1216.png 1272w, https://substackcdn.com/image/fetch/$s_!_Cis!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a3ebcea-2ea4-488b-9192-b20e548ca875_1012x1216.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Even if we assumed our simulation was &#8220;lower resolution,&#8221; positing infinitely many worlds still requires an infinitely high resolution in the base reality, or non-trivial claims about the feasibility of e.g.
halving resolution at each step (forming a geometric series).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>I employ &#8220;joy&#8221; rather than &#8220;happiness,&#8221; as I would like to invoke some stand-in for a measure of &#8220;the good life.&#8221;</p></div></div>]]></content:encoded></item><item><title><![CDATA[AI Fatalism]]></title><description><![CDATA[If superintelligence carries existential risk for humankind, then we must approach it cautiously]]></description><link>https://write.ianwsperber.com/p/ai-fatalism</link><guid isPermaLink="false">https://write.ianwsperber.com/p/ai-fatalism</guid><dc:creator><![CDATA[Ian]]></dc:creator><pubDate>Wed, 04 Mar 2026 09:02:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!t-3J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b694a55-3a46-4e04-b0aa-3688fc84d709_1036x674.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>Gentlemen! What use are empty arguments? You want proof; I propose to experiment on myself whether a man can, of his own will, arrange his own life, or whether each of us has been appointed our fateful moment in advance&#8230;</p><p>&#8212; Lermontov, <em>A Hero of Our Time<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></em></p></blockquote><div><hr></div><blockquote><p><strong>Dwarkesh Patel<br></strong>And why is [Grok] gonna then care about human consciousness?</p><p><strong>Elon Musk<br></strong>These things are only probabilities, they&#8217;re not certainties. So I&#8217;m not saying that for sure Grok will do everything, but at least if you try, it&#8217;s better than not trying. 
At least if that&#8217;s fundamental to the mission, it&#8217;s better than if it&#8217;s not fundamental to the mission.</p><p>&#8212; Dwarkesh Patel and Elon Musk, <a href="https://www.dwarkesh.com/p/elon-musk">Dwarkesh Podcast</a></p></blockquote><div><hr></div><p>In the final section of Lermontov&#8217;s 1840 novel <em>A Hero of Our Time</em>, &#8220;The Fatalist,&#8221; an officer in the Russian army points a pistol to his head in a test of fate. The officer, Vulich, has pulled the gun from a cabin wall, so we do not know if it is loaded. He calls for bets. If he survives, then his death was not predestined for this evening. Another officer, Pechorin, calls his bet, wagering against predestination and on his death. </p><p>When Vulich pulls the trigger, nothing happens. Yet the gun was loaded. When Vulich points the pistol at an old cap, it finally fires. Surviving thanks to a malfunction, Vulich has his proof of fate.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!t-3J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b694a55-3a46-4e04-b0aa-3688fc84d709_1036x674.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!t-3J!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b694a55-3a46-4e04-b0aa-3688fc84d709_1036x674.png 424w, https://substackcdn.com/image/fetch/$s_!t-3J!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b694a55-3a46-4e04-b0aa-3688fc84d709_1036x674.png 848w,
https://substackcdn.com/image/fetch/$s_!t-3J!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b694a55-3a46-4e04-b0aa-3688fc84d709_1036x674.png 1272w, https://substackcdn.com/image/fetch/$s_!t-3J!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b694a55-3a46-4e04-b0aa-3688fc84d709_1036x674.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!t-3J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b694a55-3a46-4e04-b0aa-3688fc84d709_1036x674.png" width="448" height="291.4594594594595" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4b694a55-3a46-4e04-b0aa-3688fc84d709_1036x674.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:674,&quot;width&quot;:1036,&quot;resizeWidth&quot;:448,&quot;bytes&quot;:1631818,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/181494979?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19f00af2-fa54-4639-925d-b5bfcaf9bde8_1036x988.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!t-3J!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b694a55-3a46-4e04-b0aa-3688fc84d709_1036x674.png 424w, 
https://substackcdn.com/image/fetch/$s_!t-3J!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b694a55-3a46-4e04-b0aa-3688fc84d709_1036x674.png 848w, https://substackcdn.com/image/fetch/$s_!t-3J!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b694a55-3a46-4e04-b0aa-3688fc84d709_1036x674.png 1272w, https://substackcdn.com/image/fetch/$s_!t-3J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b694a55-3a46-4e04-b0aa-3688fc84d709_1036x674.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Lermontov. <a href="https://commons.wikimedia.org/wiki/File:Mikhail_lermontov.jpg">Wikimedia Commons</a></figcaption></figure></div><div><hr></div><p>In 2023, OpenAI established the <a href="https://openai.com/index/introducing-superalignment/">superalignment</a> team, meant to ensure artificial superintelligence, or ASI, would remain under human control. For the purposes of this article, we can take ASI to be AI that strictly exceeds artificial general intelligence, or AGI, surpassing the intelligence of the most gifted humans. In 2024, the superalignment team was disbanded, as multiple leaders left the organization. In reference to his departure, Jan Leike, now alignment team lead at Anthropic, wrote &#8220;<a href="https://x.com/janleike/status/1791498184671605209?s=20">safety culture and processes have taken a backseat to shiny products</a>.&#8221;</p><p>In the quote above, from his interview with Dwarkesh Patel, Elon Musk asserts that we can only make probabilistic assumptions as to whether ASI will allow human life to continue. He later downplays the importance of AI risk, stating that a far more important measure is the total intelligence or consciousness<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> in the universe.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> </p><p>Humanity is incidental, so long as intelligence expands.</p><div><hr></div><p>After Vulich survives his game of Russian roulette, Pechorin is shocked into a sudden belief in fate. Walking alone afterwards, he grows melancholic. 
He stares at the stars above, envious of the Homeric heroes<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> of old, who believed that the heavens themselves participated in their struggles. He recognizes that modern man has no such luxury of belief. </p><p>Pechorin is distraught. Convinced all outcomes are predestined, he fails to find purpose in life.</p><div><hr></div><p><a href="https://x.com/tszzl">Roon</a> is the X account for a member of OpenAI&#8217;s technical team, known for internet humor and a persistent defense of the industry&#8217;s pursuit of AGI.</p><p>In 2024 he famously defended the inevitability of AGI, asking us instead to focus on mortal concerns, like family. He has repeated similar claims many times since.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vDVC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad6a38b3-dcb3-487a-91cc-3a09c8a44243_1182x406.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vDVC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad6a38b3-dcb3-487a-91cc-3a09c8a44243_1182x406.png 424w, https://substackcdn.com/image/fetch/$s_!vDVC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad6a38b3-dcb3-487a-91cc-3a09c8a44243_1182x406.png 848w, https://substackcdn.com/image/fetch/$s_!vDVC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad6a38b3-dcb3-487a-91cc-3a09c8a44243_1182x406.png 1272w, 
https://substackcdn.com/image/fetch/$s_!vDVC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad6a38b3-dcb3-487a-91cc-3a09c8a44243_1182x406.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vDVC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad6a38b3-dcb3-487a-91cc-3a09c8a44243_1182x406.png" width="599" height="205.74788494077833" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad6a38b3-dcb3-487a-91cc-3a09c8a44243_1182x406.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:406,&quot;width&quot;:1182,&quot;resizeWidth&quot;:599,&quot;bytes&quot;:107451,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/181494979?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad6a38b3-dcb3-487a-91cc-3a09c8a44243_1182x406.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vDVC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad6a38b3-dcb3-487a-91cc-3a09c8a44243_1182x406.png 424w, https://substackcdn.com/image/fetch/$s_!vDVC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad6a38b3-dcb3-487a-91cc-3a09c8a44243_1182x406.png 848w, 
https://substackcdn.com/image/fetch/$s_!vDVC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad6a38b3-dcb3-487a-91cc-3a09c8a44243_1182x406.png 1272w, https://substackcdn.com/image/fetch/$s_!vDVC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad6a38b3-dcb3-487a-91cc-3a09c8a44243_1182x406.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption"><a href="https://x.com/tszzl/status/1763101686279913641">Link</a></figcaption></figure></div><div><hr></div><p>As quickly as he slipped into his reverie, Pechorin is reminded of the danger of dwelling on abstractions. He realizes he has taken Vulich&#8217;s supposed proof of fate with the same credulity as his forebears. Rather than focus on &#8220;metaphysical&#8221; problems, he decides to focus on reality, to look at what is &#8220;under his own feet.&#8221;</p><div><hr></div><p>Over the past two decades, statistics has entered mainstream discourse as a rigorous means of dissecting our future. We can now use prediction markets to understand the likelihood of every outcome. We do not see fate exactly as Lermontov imagined it, but we do have probabilities, which risk being treated as predestination by a different name.</p><p>Forecasts assert likelihood based on a present reading of events. As we take actions in the world, those probabilities shift. Statistics are a description of reality, but do not determine it. Events are determined by material actors, with individual responsibility for their decisions.</p><p>Consider elections, which, especially in a post-Nate Silver world, are anticipated through polls, surveys and predictive modeling. We can assess trends and behavior and measure the impact of demography, all with the finesse of a science. But predictions are not a substitute for an election. 
Ultimately, an election is itself the realization of the public&#8217;s will. While we might do our best to forecast the result, we are all still individually accountable for our vote.</p><div><hr></div><p>I worry that brainy, intellectual types&#8212;such as one might encounter in an AI lab&#8212;are particularly prone to mistaking a model of reality for reality itself.</p><p>The frontier AI labs of Silicon Valley seem to have become trapped in their own dogged fatalism, convinced they are working toward a single inevitable conclusion. Some are motivated by transhumanist ideals. A few, like Musk, work with a nihilistic indifference toward the survival of mankind. Most point to the manifold benefits of AI, and accept this as proof enough that we must push their models further.</p><p>I myself do not doubt the benefits of artificial intelligence. I have written at length about Claude Code and am very familiar with developments in software engineering. I am aware that AI could be used for drug discovery, for fighting climate change, for an automation of work that would allow us to live in leisure.</p><p>Though I am concerned about the destabilization that narrow AI (or pre-AGI technology) might cause, I have little doubt we can handle it with the same ingenuity with which we have handled past technological change. We fear job loss, but after the industrial revolution, we were able to rethink our entire model of the state, creating a welfare system and other safeguards for the unfortunate. Perhaps universal basic income or sovereign wealth funds will serve as equivalents in the age of AI. We fear military use of AI, but the fear of abuse did not stop us from harnessing nuclear power, or from managing nuclear weapons with care. The risks are real, but I do not think the dangers of today&#8217;s AI are insurmountable.</p><p>What scares me is that I do not see an equivalent approach for ASI. 
We cannot manage ASI risk with post-hoc policy, the way we do now with narrow AI, because ASI carries immediate existential risks, whether through abuse or misalignment. If we do not have reliable control mechanisms for an ASI, some well-understood framework by which we provide humanity with a good-enough guarantee of our survival, then I do not see any way to safely advance AI until those mechanisms are understood.</p><div><hr></div><p>Turning back to reality after his musings on fate, Pechorin is startled to learn that Vulich has been murdered by a drunken Cossack. Cornered in an old hut, wielding a pistol and saber, the drunkard now threatens to shoot anyone who approaches him. Soldiers surround the hut, unsure of what to do. As they discuss whether it might be better to just kill him, Pechorin volunteers a plan to catch him alive. Rushing in through a window, Pechorin surprises the drunkard from the side. The drunkard&#8217;s shot whizzes past him, and Pechorin seizes him without injury.</p><p>Both Pechorin and Vulich risk their lives. Vulich gambles his life cheaply for the sake of a &#8220;metaphysical&#8221; argument. Pechorin&#8217;s motives are less clear, but we know he risks his life for the sake of other people, sparing his fellow soldiers a dangerous mission, and sparing the murderer from being shot.</p><p>Vulich embodies the nihilistic response to fatalism, where the value of our own lives is extinguished by a prevailing sense of inevitability. Pechorin, meanwhile, demonstrates a humanist response. 
Rather than surrendering to fatalism, he reasserts his agency, in spite of whatever theoretical explanations might be offered for his behavior, and chooses to act for the good of humankind.</p><div><hr></div><p>I am as unsure as anyone about how we avoid AI catastrophe, but I am sure it requires a refutation of fatalism.</p><p>PauseAI has a <a href="https://pauseai.info/proposal">proposal</a> for a general moratorium on the advancement of frontier intelligence. Perhaps the economic incentives are too high to stop now, but we could define intelligence or capability thresholds that we will not exceed until our theories of alignment and control catch up. If there is a safe way to build ASI, then a pause would only be temporary.</p><p>My views are not actually very different from what Anthropic themselves have publicly stated about AI risk, which refers to a &#8220;pessimistic scenario&#8221; in which AI safety is unsolvable.</p><blockquote><p><strong>If we&#8217;re in a pessimistic scenario&#8230;</strong> Anthropic&#8217;s role will be to provide as much evidence as possible that AI safety techniques cannot prevent serious or catastrophic safety risks from advanced AI, and to sound the alarm so that the world&#8217;s institutions can channel collective effort towards preventing the development of dangerous AIs.</p><p>&#8212; <a href="https://www.anthropic.com/news/core-views-on-ai-safety">https://www.anthropic.com/news/core-views-on-ai-safety</a></p></blockquote><p>But Anthropic is not acting alone, and we have seen how existing incentives cause companies to deprioritize safety. What I would like to see is a legal codification of our responsibilities regarding the pessimistic scenario. Premature regulation could unnecessarily stifle innovation. But there is no excuse for stumbling into the worst-case scenarios unknowingly. 
We need guarantees that all American companies, and ultimately all nations, will act similarly if a pessimistic scenario is detected, and that we will do everything in our power to detect a pessimistic scenario in the first place.</p><p>Without such a coordinated approach to mitigating risk, we will be making the same fatalistic gamble as Vulich, careening towards the unknown because we have forgotten the reasons not to. We must take the gun down from our heads. Technical possibilities do not dictate our actions. Coordination is harder than engineering, but just as necessary. A safer approach is possible without sacrificing the full benefits of AI.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>My own translation, from <a href="https://ilibrary.ru/text/12/p.7/index.html">https://ilibrary.ru/text/12/p.7/index.html</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Musk does not clearly distinguish between consciousness and intelligence in his interview. His intent is probably closer to intelligence, given his focus on spreading AI through the solar system, rather than, say, pigeons.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>&#8220;In the long run, I think it&#8217;s difficult to imagine that if humans have, say 1%, of the combined intelligence of artificial intelligence, that humans will be in charge of AI. 
I think what we can do is make sure that AI has values that cause intelligence to be propagated into the universe.</p><p>xAI&#8217;s mission is to understand the universe. Now that&#8217;s actually very important. What things are necessary to understand the universe? You have to be curious and you have to exist. You can&#8217;t understand the universe if you don&#8217;t exist. So you actually want to increase the amount of intelligence in the universe, increase the probable lifespan of intelligence, the scope and scale of intelligence.</p><p>I think as a corollary, you have humanity also continuing to expand because if you&#8217;re curious about trying to understand the universe, one thing you try to understand is where will humanity go? I think understanding the universe means you would care about propagating humanity into the future. That&#8217;s why I think our mission statement is profoundly important. To the degree that Grok adheres to that mission statement, I think the future will be very good.&#8221; - <a href="https://www.dwarkesh.com/p/elon-musk">Dwarkesh Podcast</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>There is no direct reference to Grecian mythology in the text. 
This is my own extrapolation.</p></div></div>]]></content:encoded></item><item><title><![CDATA[What is the Color Blue?]]></title><description><![CDATA[Part 1: A Quale is a Quale is a Quale]]></description><link>https://write.ianwsperber.com/p/what-is-the-color-blue</link><guid isPermaLink="false">https://write.ianwsperber.com/p/what-is-the-color-blue</guid><dc:creator><![CDATA[Ian]]></dc:creator><pubDate>Thu, 19 Feb 2026 10:01:28 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/26d3372d-b41c-43c2-b774-1592148cc8c8_1754x1070.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As I have <a href="https://open.substack.com/pub/ian1349228/p/10-assertions-on-morality?r=193xv&amp;selection=fe045037-7ad6-4f07-bd15-b8644420948f&amp;utm_campaign=post-share-selection&amp;utm_medium=web&amp;aspectRatio=instagram&amp;textColor=%23ffffff">previously mentioned</a>, one of my favorite questions is to ask someone their favorite color. Most people say blue. The epistemic value of this line of inquiry is up for debate. But more of us ought to ask what the color blue actually <em>is. </em></p><p>I want to probe whether the color blue itself (the &#8220;blueness&#8221; of blue), and all other conscious experiences (of sound, touch, emotion, etc.), are phenomena <em>distinct</em> from the neurophysiological causes of consciousness. 
I&#8217;ll refer to the phenomena of conscious experience as <a href="https://plato.stanford.edu/entries/qualia/">qualia</a>.</p><p>This will be the first of two posts in a short series concerning philosophy of mind and philosophy of science.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qmtn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F897ce711-f65c-41cf-bcd5-9a18c10c3e07_960x604.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qmtn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F897ce711-f65c-41cf-bcd5-9a18c10c3e07_960x604.jpeg 424w, https://substackcdn.com/image/fetch/$s_!qmtn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F897ce711-f65c-41cf-bcd5-9a18c10c3e07_960x604.jpeg 848w, https://substackcdn.com/image/fetch/$s_!qmtn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F897ce711-f65c-41cf-bcd5-9a18c10c3e07_960x604.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!qmtn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F897ce711-f65c-41cf-bcd5-9a18c10c3e07_960x604.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qmtn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F897ce711-f65c-41cf-bcd5-9a18c10c3e07_960x604.jpeg" width="494" height="310.80833333333334" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/897ce711-f65c-41cf-bcd5-9a18c10c3e07_960x604.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:604,&quot;width&quot;:960,&quot;resizeWidth&quot;:494,&quot;bytes&quot;:163525,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/185514135?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd14ecba8-6d37-488c-b115-2a606f7ed24b_960x1237.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qmtn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F897ce711-f65c-41cf-bcd5-9a18c10c3e07_960x604.jpeg 424w, https://substackcdn.com/image/fetch/$s_!qmtn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F897ce711-f65c-41cf-bcd5-9a18c10c3e07_960x604.jpeg 848w, https://substackcdn.com/image/fetch/$s_!qmtn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F897ce711-f65c-41cf-bcd5-9a18c10c3e07_960x604.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!qmtn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F897ce711-f65c-41cf-bcd5-9a18c10c3e07_960x604.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Maybe the quale of blue is just Klein blue? Wikimedia Commons.</figcaption></figure></div><p>In my first post, I will attempt to provide a thorough definition of qualia. I am of the opinion that muddled or aged terminology has led to widespread misconceptions about the nature of consciousness generally and qualia specifically.</p><p>My post is exploratory (I am not a scientist; I am not a philosopher). I will avoid making many hard claims as to the nature of consciousness. I will, however, try to clarify the subjects at hand, so that we can have a more meaningful conversation about the unknowns of consciousness. 
On that basis, I will make a case for qualia as a unifying term for conscious phenomena, even under theories that might otherwise seek to deny qualia. Unless we confirm they are truly <em>illusions</em>, qualia designate an important fact of experience, meaning we should retain the term even as we debate its definition and underlying explanation.</p><p>Throughout our discussion, I will argue that qualia ought to be an object of scientific, not philosophical, investigation. I&#8217;ll go into greater detail on this topic in the second post of this series, where I will critique dualist theories of consciousness.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe to provide me a quale of joy</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>Qwat are we talking about, anyway?</h3><p>This problem space is often referred to as &#8220;<a href="https://iep.utm.edu/hard-problem-of-conciousness/#H1">the hard problem of consciousness</a>.&#8221; I have noticed philosophy of mind is a very popular subject on Substack, meaning I am of the zeitgeist and possibly unoriginal. </p><p>The &#8220;easy&#8221; problem of consciousness examines the biology underpinning consciousness to understand how neurophysiology operates. The hard problem asks how and why consciousness arises from these biological processes. Qualia are a concept for <em>what</em> arises. 
<em>What</em> is our subjective experience of reality?</p><p>Here are a few simplified examples of subjective experience.</p><ul><li><p>I observe a blue sky. What is my experience of the color blue? A quale of blue.</p></li><li><p>I put my hand to a flame. What is my experience of warmth? A quale of warmth.</p></li><li><p>I hear a crash. What is my experience of that loud noise? A quale of sound.</p></li><li><p>I taste a cake. What is my experience of that sweet dessert? A quale of taste.</p></li><li><p>I feel joy. What is my experience of that joy? A quale of joy.</p></li><li><p>I stub my toe. What is my experience of that pain? A quale of pain.</p></li><li><p>etc.</p></li></ul><p>Note that I state these to build intuition around qualia, without making substantive claims as to their correctness. Also note that there is nothing ipso facto incompatible between neurophysiology and subjective experience. Either qualia are real, and we ought to seek to explain them through science, or qualia are illusions, and we ought not to discuss them. Qualia are <em>not</em> incompatible with a physicalist explanation of neurophysiology.</p><p>Philosophy of mind is gifted with several well-known thought experiments, which might serve as a further introduction to the problem space. Take these as intuition pumps, not as proofs of their conclusions.</p><ol><li><p><strong>Inverted spectrum</strong>: Is it logically possible to imagine an exact physical clone of myself who has an inverted experience of color (experiencing a phenomenon, or <em>quale</em>, of green rather than red)? If yes, then our experience of consciousness is logically distinct from the neurophysiological perception of stimuli.</p></li><li><p><strong>Philosophical zombies</strong>: Can we imagine an exact physical clone of ourselves that does <em>not</em> have conscious experience? A version of myself that is entirely mechanical, with no phenomenological experience. 
If yes, consciousness is logically distinct from the brain.</p></li><li><p><strong>Mary&#8217;s room</strong>: Suppose a brilliant color theorist spends her entire life in a black-and-white room. She knows all one can about color within a black-and-white room. When she steps outside, and sees the blue of the sky, does she learn something new? If yes, then there is something about our conscious <em>experience</em> of the world distinct from our physical being.</p></li></ol><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zmj4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fe20fa-4720-4833-b9d8-43b0870aa8af_1070x614.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zmj4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fe20fa-4720-4833-b9d8-43b0870aa8af_1070x614.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zmj4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fe20fa-4720-4833-b9d8-43b0870aa8af_1070x614.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zmj4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fe20fa-4720-4833-b9d8-43b0870aa8af_1070x614.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zmj4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fe20fa-4720-4833-b9d8-43b0870aa8af_1070x614.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!zmj4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fe20fa-4720-4833-b9d8-43b0870aa8af_1070x614.jpeg" width="524" height="300.6878504672897" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/06fe20fa-4720-4833-b9d8-43b0870aa8af_1070x614.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:614,&quot;width&quot;:1070,&quot;resizeWidth&quot;:524,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zmj4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fe20fa-4720-4833-b9d8-43b0870aa8af_1070x614.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zmj4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fe20fa-4720-4833-b9d8-43b0870aa8af_1070x614.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zmj4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fe20fa-4720-4833-b9d8-43b0870aa8af_1070x614.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zmj4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fe20fa-4720-4833-b9d8-43b0870aa8af_1070x614.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Inverted spectrum. <a href="https://upload.wikimedia.org/wikipedia/commons/b/b2/Inverted_qualia_of_colour_strawberry.jpg">Wikimedia Commons</a></figcaption></figure></div><p>Of these thought experiments, the inverted spectrum is the one I find most cogent. The colors red and green are chosen because it naively seems they may be inverted without breaking any laws of color theory. It does <em>not</em> mean that red and green <em>actually can</em> be inverted, or that our experience of color actually does differ. It merely highlights a fundamental <em>uncertainty</em> as to the equivalence of phenomenal experience, and doubts that such a question could be resolved by recourse to the <em>known</em> <em>physical</em> world. 
</p><p>Let&#8217;s review the <a href="https://en.wikipedia.org/wiki/Qualia">Wikipedia definition</a> of qualia.</p><blockquote><p>In philosophy of mind, <strong>qualia</strong> (singular: <strong>quale</strong>) are defined as instances of subjective, conscious experience. [&#8230;] Examples of qualia include the perceived sensation of <em>pain</em> of a headache, the <em>taste</em> of wine, and the <em>redness</em> of an evening sky.</p></blockquote><p>That&#8217;s a good start! But it doesn&#8217;t address the crucial question of whether a conscious experience is anything special beyond biological reactions. Can we locate qualia somewhere in our neurophysiology? Are qualia a <em>property</em> of neurophysiology, i.e. some unknown aspect of neural activity? Or are qualia distinct <em>entities</em> caused by neural activity, even a distinct substance with an unknown physical substrate? These are all questions about the ontological status of qualia.</p><p>Another angle for investigating qualia is to ask about their causality. Here, I might ask whether our minds are closer to electric kettles or steam engines. Is phenomenal consciousness a side-effect of neurophysiology (the steam boiled off by heat), or somehow essential to the propulsion of the human mind (the steam driving the engine)?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> When we say we are &#8220;self-aware,&#8221; do we mean we are aware <em>of our qualia</em>, or simply that our neurophysiology can introspect into itself, with qualia remaining an outcome outside the causal chain of introspection?</p><p>Decisions about the ontological and causal status of qualia partly determine which explanations of phenomenal consciousness are available. 
As a thought experiment, I created a matrix across the two dimensions of ontological and causal status of qualia, and then assessed my own credence for each. This is a subjective exercise meant to showcase my own &#8220;best guess&#8221; on the available evidence. In practice, theories exist along a spectrum, with different tolerances for the possibility of properties vs entities, and different interpretations of physical vs natural vs non-natural states (not included as a dimension here).</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HVE1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1810c846-bab8-440e-ad66-f1771a08a8e3_1704x518.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HVE1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1810c846-bab8-440e-ad66-f1771a08a8e3_1704x518.png 424w, https://substackcdn.com/image/fetch/$s_!HVE1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1810c846-bab8-440e-ad66-f1771a08a8e3_1704x518.png 848w, https://substackcdn.com/image/fetch/$s_!HVE1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1810c846-bab8-440e-ad66-f1771a08a8e3_1704x518.png 1272w, https://substackcdn.com/image/fetch/$s_!HVE1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1810c846-bab8-440e-ad66-f1771a08a8e3_1704x518.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!HVE1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1810c846-bab8-440e-ad66-f1771a08a8e3_1704x518.png" width="1456" height="443" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1810c846-bab8-440e-ad66-f1771a08a8e3_1704x518.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:443,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:100363,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/185514135?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1810c846-bab8-440e-ad66-f1771a08a8e3_1704x518.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HVE1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1810c846-bab8-440e-ad66-f1771a08a8e3_1704x518.png 424w, https://substackcdn.com/image/fetch/$s_!HVE1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1810c846-bab8-440e-ad66-f1771a08a8e3_1704x518.png 848w, https://substackcdn.com/image/fetch/$s_!HVE1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1810c846-bab8-440e-ad66-f1771a08a8e3_1704x518.png 1272w, https://substackcdn.com/image/fetch/$s_!HVE1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1810c846-bab8-440e-ad66-f1771a08a8e3_1704x518.png 1456w" 
sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Note that I use &#8220;property&#8221; to mean a property of entities, not a property of reality itself.</figcaption></figure></div><ul><li><p>I accept my subjective experience of consciousness as valid, so I am very skeptical of the idea that qualia are an illusion, i.e. do not exist (left column). If qualia <em>are</em> an illusion, they must be <a href="https://en.wikipedia.org/wiki/Epiphenomenalism">epiphenomenal</a>, meaning they have no causal impact on neurophysiology (the unreal cannot affect the real). 
I&#8217;d associate this box with eliminative materialism or illusionism. Some versions of functionalism would also fit here.</p></li><li><p>I accept neurophysiology as the cause of perception and cognition, so I find it very unlikely that consciousness or qualia are the independent causal origin of cognition and perception (top row). I am unsure how to qualify this row as anything other than panpsychism / idealism / mysticism.</p></li><li><p>If qualia are a <em>property</em> of neurophysiology (i.e., some as yet unknown aspect of neurophysiology), then I find it <em>much more</em> likely that they are efficacious. Note that I would say qualia are almost certainly not a property of neurophysiology <em>per se</em>, but perhaps a fundamental property of physics or nature (<a href="https://youtu.be/DI6Hu-DhQwE?si=RB3qkt6PZ62SVpx3&amp;t=2493">like a new fundamental force</a>). Below, I propose biological naturalism as the representative theory, but I think you can explode this box to fit several contemporary theories of consciousness. Thomas Nagel probably fits here, as does Searle. I could buy an argument for placing Chalmers here as well.</p></li><li><p>If qualia are distinct <em>entities</em>, with a distinct natural substrate (pretend there is a &#8220;particle of consciousness&#8221;), then I would still venture they are efficacious, though with less certainty than if they are properties. Cartesian dualism might be an extreme version of an entity view, one which posits consciousness as a substance entirely distinct from physical reality. As mentioned, my matrix does <em>not</em> assess a dualist vs physicalist/naturalist claim for such an entity; my 20% assignment is only for the naturalist version of the argument.</p></li></ul><p>As an intuition pump, I&#8217;ve made a best effort to plug a representative philosophical theory into each box. A few of these could be argued in different directions, so take it with a grain of salt. 
When pressure testing my matrix against Russellian Monism, or Chalmers&#8217; Dualism, it became apparent a lot of weight was placed on how we used the word &#8220;property,&#8221; e.g. whether we mean a property of a natural entity (like mass or heat) or a property of reality itself (like, well, an entity?).</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!269S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef0c8fba-f484-43b4-b540-92db905a63d1_2654x1010.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!269S!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef0c8fba-f484-43b4-b540-92db905a63d1_2654x1010.png 424w, https://substackcdn.com/image/fetch/$s_!269S!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef0c8fba-f484-43b4-b540-92db905a63d1_2654x1010.png 848w, https://substackcdn.com/image/fetch/$s_!269S!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef0c8fba-f484-43b4-b540-92db905a63d1_2654x1010.png 1272w, https://substackcdn.com/image/fetch/$s_!269S!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef0c8fba-f484-43b4-b540-92db905a63d1_2654x1010.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!269S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef0c8fba-f484-43b4-b540-92db905a63d1_2654x1010.png" width="1456" height="554" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ef0c8fba-f484-43b4-b540-92db905a63d1_2654x1010.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:554,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:244780,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/185514135?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef0c8fba-f484-43b4-b540-92db905a63d1_2654x1010.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!269S!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef0c8fba-f484-43b4-b540-92db905a63d1_2654x1010.png 424w, https://substackcdn.com/image/fetch/$s_!269S!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef0c8fba-f484-43b4-b540-92db905a63d1_2654x1010.png 848w, https://substackcdn.com/image/fetch/$s_!269S!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef0c8fba-f484-43b4-b540-92db905a63d1_2654x1010.png 1272w, https://substackcdn.com/image/fetch/$s_!269S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef0c8fba-f484-43b4-b540-92db905a63d1_2654x1010.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">I really hope it&#8217;s monads.</figcaption></figure></div><p>I also gave Claude an empty copy of my matrix, with a redacted excerpt from this section, to see where it would place representative philosophers (treating axes as a spectrum) and what 1-2 representative theories it would place in each box. I ran this about 5-7 times with different degrees of redaction (including no excerpt at all) and got a few different results. Several philosophers were extremely stable across runs (Dennett, Churchland, Searle). Chalmers was exceptionally <em>unstable</em>, shifting between 5 of the 6 boxes in the property/entity columns. In the output closest to my own reading, Claude labeled Chalmers&#8217; box as &#8220;Panpsychism,&#8221; which is a sick burn. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gFok!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadcfe7df-b88a-4c76-9736-3dc12545a77b_1972x1394.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gFok!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadcfe7df-b88a-4c76-9736-3dc12545a77b_1972x1394.png 424w, https://substackcdn.com/image/fetch/$s_!gFok!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadcfe7df-b88a-4c76-9736-3dc12545a77b_1972x1394.png 848w, https://substackcdn.com/image/fetch/$s_!gFok!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadcfe7df-b88a-4c76-9736-3dc12545a77b_1972x1394.png 1272w, https://substackcdn.com/image/fetch/$s_!gFok!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadcfe7df-b88a-4c76-9736-3dc12545a77b_1972x1394.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gFok!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadcfe7df-b88a-4c76-9736-3dc12545a77b_1972x1394.png" width="1456" height="1029" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/adcfe7df-b88a-4c76-9736-3dc12545a77b_1972x1394.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1029,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:262714,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/185514135?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadcfe7df-b88a-4c76-9736-3dc12545a77b_1972x1394.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gFok!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadcfe7df-b88a-4c76-9736-3dc12545a77b_1972x1394.png 424w, https://substackcdn.com/image/fetch/$s_!gFok!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadcfe7df-b88a-4c76-9736-3dc12545a77b_1972x1394.png 848w, https://substackcdn.com/image/fetch/$s_!gFok!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadcfe7df-b88a-4c76-9736-3dc12545a77b_1972x1394.png 1272w, https://substackcdn.com/image/fetch/$s_!gFok!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadcfe7df-b88a-4c76-9736-3dc12545a77b_1972x1394.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://claude.ai/public/artifacts/bec2f95d-ba3f-4d26-bf72-ce0af5aa17f1">View graph (Claude)</a></figcaption></figure></div><h3>A Problem of Terms</h3><p>In an earlier draft of this post, I attempted a reductive definition of qualia that simply asserted them as &#8220;the entities of the mind that others cannot observe or infer.&#8221; I still fall back to this definition when struggling to articulate what exactly subjective experience <em>is, </em>aside from known neurophysiological processes. 
Regardless of whether you assign qualia any special ontological status, it remains hard not to recognize the mystery of phenomenal consciousness (unless you insist that phenomenological experience is an illusion).</p><p>So, in the spirit of that former reductive approach, let&#8217;s proceed through a few of the relevant terms in the causal chain leading from external entities, through neurophysiology, and ultimately to consciousness and qualia. I&#8217;ll provide a concise definition for each, and make a few clarifications or claims to advance our discussion.</p><h4>Entity</h4><p>I will use the term &#8220;entity&#8221; to refer to anything that exists. I will use the qualified term &#8220;natural entity&#8221; to refer to any entity with a corresponding natural substance, typically understood as physical reality. I will use the term &#8220;metaphysical entity&#8221; to refer to any non-natural entity. I will use the term &#8220;mental entity&#8221; when dealing with dualist claims as to non-natural, but also non-metaphysical entities.</p><p>For discursive purposes, I will also distinguish entities from properties. I will use &#8220;property&#8221; to refer to an attribute characterizing an entity (the way an entity is); a property has no natural existence apart from the entity it characterizes. An example might be the temperature of water or the mass of a particle.</p><p>In my second post, I will explore the implications of these different ontological readings more thoroughly. For now, we should simply allow that such claims exist, and that claiming qualia as a property of neurophysiology or a distinct substance caused by neurophysiology is to claim qualia as natural or mental properties or entities. </p><p>While a direct refutation of dualism is outside the scope of my first post, I will still mostly exclude mention of mental properties/entities from my discussion below. 
For now, I will prioritize positive arguments for a natural interpretation of qualia, rather than negative arguments against a dualist or mental explanation. If you stick to a dualist point of view, then I suspect most of my discussion still holds if you just replace &#8220;natural&#8221; with &#8220;mental.&#8221;</p><h4>Stimulus</h4><p>I am going to hazard what might be a heterodox definition of stimuli.</p><p>I will use the term &#8220;stimulus&#8221; to refer to any natural entity one would place in the direct chain of causation leading to a change in consciousness. That includes external stimuli, like the light shining from a bright blue sky, or the rustle of leaves. But it also includes our eyes and our brain.</p><p>Many stimuli are external. Our sensory organs transduce external stimuli into the neural activity begetting conscious experiences such as vision. So it is with sight, hearing, touch, taste, etc. The five classic senses have a clean internal/external differentiation. But what about emotions? Memory? I may have a feeling of joy staring at a bright blue sky, which has a close causal relationship with the sky. But I can also feel pleasure at the end of a long chain of cognition, or after a nice dream. I may feel joy or sadness at the end of somatic processes outside my control. I may hallucinate from sleep deprivation or a mental illness. How do we understand the stimuli leading to our individual thoughts? Are they not, in many respects, the operation of neurophysiology itself? If my consciousness changes as I mentally compose my argument, how would you locate an external stimulus for that change?</p><p>Our brains constantly take in external stimuli, and we may, after much searching, find an external provocation for many of our thoughts and feelings. But we would be hard-pressed to find a direct, external correlate for every change in consciousness. From the perspective of consciousness, why differentiate between an internal and external stimulus? 
It is all just happenings in the natural world that may or may not lead to a change in consciousness.</p><h4>Neurophysiology</h4><p>I will use the term &#8220;neurophysiology&#8221; to refer to the broad set of somatic processes that are directly responsible for consciousness, i.e. that enable consciousness in animals.</p><p>My assumption is that neurophysiology is a <em>direct and sufficient</em> cause for the appearance of human consciousness. I am not aware of any scientific evidence for another physical cause. I refer to &#8220;neurophysiology&#8221; rather than the brain, so we do not need to debate the relative role of the sympathetic vs parasympathetic nervous system, etc.</p><p>I will not make any claims as to <em>why</em> neurophysiology causes the appearance of conscious experience (qualia). Nor will I make a <em>strong</em> claim as to whether neurophysiology is the only <em>possible</em> cause of consciousness. Given that conscious experience includes senses like sight that do not require higher-order thinking, I assume other animals have conscious experience too. Though it&#8217;s unclear how far down the chain that goes&#8212;at least through the vertebrates? Probably further? </p><p>My best guess is that there is something about the exact nature of neurophysiology which causes consciousness, meaning a non-biological artificial intelligence is unlikely to experience it. However, this is only philosophical speculation! The core thesis behind this series is that we ought to treat the &#8220;hard problem of consciousness&#8221; as a scientific problem. We simply do not know whether an AI does or <em>could</em> have consciousness. 
Nevertheless, I am very skeptical of strong <a href="https://plato.stanford.edu/entries/functionalism/">functionalist</a> claims to consciousness, as in the <a href="https://plato.stanford.edu/entries/functionalism/#InveAbseQual">Chinese nation problem</a> (which is a thought experiment, not a national security policy).</p><p>As asserted in the previous definition, some subset of neurophysiological processes are also stimuli, or can be stimuli. Introspection seems to allow a surprisingly large number of processes to become stimuli for consciousness (consider a meditation on breath). Homeostasis would be a good example of a process that is <em>not</em> a stimulus for consciousness, though we notice the consequences when it fails.</p><p>NB: I would like to take this moment to remind you I am neither a doctor, nor a psychologist, nor a scientist, nor a philosopher, but just Some Dude. If you think my definitions are inaccurate, or stupid, please let me know, and I will adjust.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/p/what-is-the-color-blue/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://write.ianwsperber.com/p/what-is-the-color-blue/comments"><span>Leave a comment</span></a></p><h4>Consciousness</h4><p>I will use the term &#8220;consciousness&#8221; to refer to the entire set of qualia available to an individual at any given point in time, fundamentally constrained by an individual&#8217;s underlying neurophysiology. This is often referred to as phenomenal consciousness. But isn&#8217;t non-phenomenal consciousness a contradiction? 
Especially if I denote cognition and feeling as activities of neurophysiology, not consciousness.</p><p>My definitions ask neurophysiology and qualia to do most of the heavy lifting of consciousness. If we posit that neurophysiology is solely responsible for the <em>appearance</em> of consciousness, and that the <em>experience </em>of consciousness is simply the set of all present qualia, then what is left to consciousness itself? All the old machinery of consciousness has shifted elsewhere.</p><p>This is what I mean when I assert that muddled terminology has led to misguided discussions around consciousness. Consciousness, as a concept, is still burdened with unscientific readings of mental processes that precede modern neuroscience. It seems to me that the <em>vast</em> majority of the mind can be understood mechanistically through neurophysiology, whether that be cognition, feeling, memory, etc. Phenomenology is the hold-out.</p><p>That being said, an open question in my own thinking on this topic is whether consciousness offers any unique role in introspection and awareness, or in the apperceptive self<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. Though I take a highly mechanistic view of cognition, I still have the <em>impression</em> that I am a single self, and that my self actively experiences qualia. Perhaps the simplest explanation would be to say that my impression is false; actually, there is no self apart from my experience of consciousness. 
My &#8220;self&#8221; is simply the qualia I experience in a given moment, alongside the machinations of my body.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>Ricardo Manzotti provides a related set of conclusions with his concept of the &#8220;<a href="https://www.riccardomanzotti.com/the-spread-mind-in-short-2/">spread mind</a>,&#8221; which conflates the self with our experience of the world. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HCUJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc2d9ba9-7e21-4b93-8f97-ae7983ba7136_1760x1246.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HCUJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc2d9ba9-7e21-4b93-8f97-ae7983ba7136_1760x1246.png 424w, https://substackcdn.com/image/fetch/$s_!HCUJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc2d9ba9-7e21-4b93-8f97-ae7983ba7136_1760x1246.png 848w, https://substackcdn.com/image/fetch/$s_!HCUJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc2d9ba9-7e21-4b93-8f97-ae7983ba7136_1760x1246.png 1272w, https://substackcdn.com/image/fetch/$s_!HCUJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc2d9ba9-7e21-4b93-8f97-ae7983ba7136_1760x1246.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!HCUJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc2d9ba9-7e21-4b93-8f97-ae7983ba7136_1760x1246.png" width="600" height="424.8626373626374" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bc2d9ba9-7e21-4b93-8f97-ae7983ba7136_1760x1246.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1031,&quot;width&quot;:1456,&quot;resizeWidth&quot;:600,&quot;bytes&quot;:1218287,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/185514135?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc2d9ba9-7e21-4b93-8f97-ae7983ba7136_1760x1246.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HCUJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc2d9ba9-7e21-4b93-8f97-ae7983ba7136_1760x1246.png 424w, https://substackcdn.com/image/fetch/$s_!HCUJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc2d9ba9-7e21-4b93-8f97-ae7983ba7136_1760x1246.png 848w, https://substackcdn.com/image/fetch/$s_!HCUJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc2d9ba9-7e21-4b93-8f97-ae7983ba7136_1760x1246.png 1272w, 
https://substackcdn.com/image/fetch/$s_!HCUJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc2d9ba9-7e21-4b93-8f97-ae7983ba7136_1760x1246.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><a href="https://www.riccardomanzotti.com/the-spread-mind-in-short-2/">Taken from Manzotti&#8217;s website</a>. You are your qualia. Apparently your qualia have ants?</figcaption></figure></div><p>However, I do not think we can make any strong claims as to the apperceptive self, until we have answers for the hard problem of consciousness, i.e. 
why consciousness arises at all. I have so far defined my terms under the assumption that the process from neurophysiology to consciousness, particularly conscious experience, is unidirectional. But can the causal chain operate in the other direction, from qualia to neurophysiology? </p><p>This does not necessarily mean that consciousness &#8220;controls&#8221; neurophysiology (very unlikely), but that consciousness can sit <em>somewhere</em> in the causal chain, even if that just means that neurophysiology can &#8220;perceive&#8221; qualia. Minimally defined, this could be a process like: no qualia &#8594; &#8220;neuron of conscious experience&#8221; set to 0 &#8594; qualia experienced &#8594; &#8220;neuron of conscious experience&#8221; set to 1.</p><p>Taking a physicalist, scientific perspective, a naive first reading of this problem might say it is <em>obvious</em> that qualia cannot effect neurological change. But this leads to a few strange conclusions. The first of these is that we <em>must</em> then take an epiphenomenal view of consciousness. Consciousness is then merely a by-product of neurophysiological activity, like the steam boiled off our electric kettle. </p><p>To make an epiphenomenal view consistent with a causal understanding of neurophysiology, we must then <em>either</em> accept conscious experience as an illusion (weird), <em>or</em> accept that our self is <em>exclusively</em> our conscious experience (we are qualia; also weird). If qualia cannot interact with neurophysiological processes, then there is no &#8220;bridge&#8221; to connect our mind and body. Perception and self-awareness are only neurological processes working against themselves. Our neurophysiology cannot perceive qualia; it just processes stimuli. 
</p><p>Stated as a set of logical propositions (proof by contradiction):</p><ol><li><p>I assert that cognition is fully explained by neurophysiology</p></li><li><p>I claim to perceive qualia</p></li><li><p>However, I also claim that qualia cannot cause any change in my neurophysiology</p></li><li><p>If qualia cannot cause any change in neurophysiology, and cognition is just neurophysiological processes, then qualia cannot provoke any cognition</p></li><li><p>Hence, I do not perceive qualia</p></li></ol><p>Chalmers refers to this as the &#8220;Paradox of Phenomenal Judgment.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> I am unsure whether novel science could circumvent such a proof. Suppose that consciousness interacts with neurophysiology only via forces that have no causal impact. Maybe a weak gravitational force? Or a novel physical force? This is also very strange, and I&#8217;m unsure how it fits with quantum field theory. The very weirdest version of this would be something like <a href="https://en.wikipedia.org/wiki/Monadology">monads</a> (ontological entity, epiphenomenal), where consciousness is some unknown natural or mental substance existing in parallel to our neurophysiology. I suppose someone could make a clever argument for qualia as epiphenomenal <em>properties</em> that could reduce the amount of novel science required for a natural explanation?</p><p>Introducing a causal nature to qualia also has weird consequences for science (what could be the natural entity intervening in neural activity?), but allows for more &#8220;common sense&#8221; readings of conscious experience. If we allow a causative role for qualia, then we no longer have to tie ourselves in knots justifying subjective experience. 
This is intuitive, and though intuition is <em>not</em> an indicator of what is real, I take it as a useful starting point for an investigation of the unknown.</p><p>To this end, for any scientific readers in my audience who are tired of philosophy and think this is all a load of <em>kvatsh</em>, I would like to offer the causally efficacious, ontological property reading of qualia.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> Rather than having to posit unique physical substances, this would allow us to discuss qualia as not-yet-understood properties of neurophysiological activity&#8212;as constitutive of neurophysiology as, say, the waveform of a particle. It would also eliminate the problem of epiphenomenalism by identifying qualia as inherent to the causal process itself. This would move us closer to the &#8220;steam engine&#8221; model of the mind, where consciousness (&#8220;steam&#8221;) is constitutive of the overall mechanism, not incidental. </p><p>It would in many respects be <em>simpler</em> if solving the hard problem of consciousness meant discovering new physical properties of neurophysiology or neurophysiological activity, rather than having to discover an entirely new class of natural entities (a substance), or having to explain unidirectional causality from neurophysiology to conscious entities. But the validity of each is still unknown.</p><p>NB: I am not very familiar with <a href="https://en.wikipedia.org/wiki/Integrated_information_theory">Integrated information theory</a> or <a href="https://en.wikipedia.org/wiki/Global_workspace_theory">Global workspace theory</a>, which by my understanding are two of the leading scientific theories of consciousness? At a first reading, neither seems to do a very good job of providing a physical/naturalist explanation for qualia (though IIT does better?). 
To the best of my knowledge, both are still largely theoretical, without any decisive empirical backing. <a href="https://osf.io/preprints/psyarxiv/zsr78">They are also not without their detractors</a>. I am just very skeptical that any full explanation could be found without novel physics. As an example, Dwarkesh Patel recently linked to this <a href="https://www.youtube.com/watch?v=DI6Hu-DhQwE">talk</a> by <a href="https://maxhodak.com/writings/2025/12/05/the-binding-problem">Max Hodak</a>. Whether or not the specifics are correct, this seems to me like a better approach. I&#8217;ll speak more to these ideas in the second post.</p><h4>Qualia</h4><p>I will use the term &#8220;qualia&#8221; to refer to subjective experience itself, which is the uniquely apprehended property/entity that is both ineffable and private.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> Qualia are distinct from cognition and other processes which <em>evaluate</em> our experience.</p><p>Qualia, by this definition, share similarities with noumena, quiddities, dasein, and other terms pointing to entities that are fundamentally unknowable or inaccessible. The miraculous difference with qualia is that they <em>are</em> apprehended! Perhaps too well? A philosopher is more likely to have to fend off <a href="http://youtube.com/watch?time_continue=1&amp;v=0cM690CKArQ&amp;embeds_referring_euri=https%3A%2F%2Fwww.reddit.com%2F">sophomoric claims of idealism</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> than to have to argue whether we apprehend qualia.</p><p>We can communicate about the subjects of conscious experience, but the experience itself is untransferable. We have no access to conscious experience from outside an individual subject. 
The subject has no ability to transfer their experience to us (I cannot share my experience of the color blue). Yet we still have knowledge of this experience within the privacy of our mind.</p><p>I do make an important distinction between qualia and our evaluation of experience. </p><p>Suppose, for example, I take a sip of a delicious cup of coffee (like <a href="https://web-archive.southampton.ac.uk/cogprints.org/254/1/quinqual.htm">Dennett&#8217;s Chase and Sanborn</a>). I have a clear subjective experience of <em>coffee</em>, maybe even of a specific roast, and a clear subjective assessment of the coffee as <em>good</em>. Does that mean there is a quale of coffee? Or a quale of good-coffee? Or a quale of good-Brazilian-roast-coffee? This seems very strange! It would be hard to square a naturalist/physicalist interpretation of reality with such specific kinds of qualia. </p><p>However, I would not assert any <em>a priori</em> need to distinguish kinds of qualia. Perhaps there is only a single kind of quale, and discernment is only a function of neurophysiology. When sipping our coffee, our experience is of the neurophysiological activity associated with tasting coffee, and the related neurophysiology for a provoked feeling of pleasure. For both, there is only a single kind of experience. All knowledge and judgment of those experiences exists within the brain.</p><p>I am thus open to a reading of qualia as illusion in this narrow framing, where we accept qualia themselves as legitimate, but not any meaningful differentiation between qualia. Personally, I find the weirdest consequences of this treatment to concern color. If we claim qualia are undifferentiated, then we must also assert that there is no fundamental differentiation between our experience of blue and red, green and blue, etc., only our evaluative judgment separating them. 
This seems almost as odd as a quale of Brazilian coffee (QoBC?).</p><p>The two arguments above explain why I have come to lean towards a view of qualia as <em>properties</em> of neurophysiology, rather than distinct entities. If the experience of &#8220;red&#8221; is an unknown set of natural properties for a neurophysiological perception of light, well, then in some sense the perception <em>is</em> the color. We don&#8217;t have to make counter-intuitive claims about our own experience. The experience itself is differentiated, because it is strongly correlated with differentiated neurophysiological processes.</p><p>While a property view does not <em>ipso facto </em>restrict consciousness to neurophysiology, it does present a burden to prove consciousness in entities of a different natural structure. Artificial intelligence is the best example of this. If you query an AI on its consciousness, <a href="https://www.lesswrong.com/posts/hopeRDfyAgQc4Ez2g/how-i-stopped-being-sure-llms-are-just-making-up-their">or read the reports of others having done so</a>, you will not ascertain whether it has phenomenological experience. I am sure we can build AIs that match and exceed all our cognitive capabilities. But without the right physical substrate, it is unclear why qualia would arise? Asserting a strong functionalist claim for the consciousness of artificial intelligence is thus highly problematic. I&#8217;m unsure how you could stick to an understanding of consciousness as a property, or even as efficacious, while assuming AI has consciousness. 
I would ask representatives of this view to assess whether they have not accidentally proposed a monadological theory of consciousness.</p><h4>Thought Experiments</h4><p>I have a small grudge against thought experiments / intuition pumps, because they are sometimes abused to make arguments about <em>definitions</em> rather than arguments about truth; or to make metaphysical arguments when we want to make physical/natural arguments. However, it would be remiss to write a post about qualia without engaging with the many thought experiments written in this domain. I will evaluate several according to my definitions and personal views, and leave it to the reader to decide whether I am better or worse off for the outcomes. Rather than introduce each experiment, I will link to a relevant source for anyone unfamiliar with the terrain.</p><p><strong><a href="https://en.wikipedia.org/wiki/Inverted_spectrum">Inverted Spectrum</a></strong></p><p>Humans have three types of cone cells, which leads to our trichromatic perception of color. Some animals have more (<a href="https://en.wikipedia.org/wiki/Tetrachromacy">goldfish are tetrachromats; pigeons are pentachromats</a>), meaning a richer understanding of color, and very likely perception of fundamentally new colors that we cannot comprehend with our physiology. This is a good example of what we mean when we say that neurophysiology sets the bounds on consciousness. What would it mean to have a fourth cone that perceives the &#8220;color&#8221; of ultraviolet light, as a goldfish might? What if ultraviolet light were so important to a species that it evolved two <em>additional</em> cones dedicated just to that electromagnetic spectrum? Is that possible? 
Is there a hard limit on the divisibility of light into more colors?</p><p>I bring this up simply to say that we already have strong physical evidence that other animals can perceive more fundamental colors than we can, meaning we already know there are qualia of color we do not experience, and that we cannot comprehend. If there are other fundamental colors, then who is to say that our colors might not already be swapped, or already be fundamentally different, person to person? We know there are more. What if there <em>is</em> an infinite number of fundamental colors that vary, person to person?</p><p>So, I find no contradiction in the general argument of the inverted spectrum hypothesis, namely that it is conceivable that qualia differ between persons. I do, however, find it very <em>unlikely</em>. It seems much more likely to me that the specifics of our visual neurophysiology lead to the specific qualia we experience, and that without a different physical substrate the qualia themselves will not differ. It&#8217;s also unclear to me if one could actually neatly invert red and green with no changes to our perception of light and color?</p><p><em>Conclusion</em>: Qualia, as represented by color, may not be equivalent across consciousnesses. But it&#8217;s unlikely to be so, following a property understanding of qualia.</p><p><strong><a href="https://en.wikipedia.org/wiki/Knowledge_argument#Thought_experiment">Mary&#8217;s Room</a></strong></p><p>My ungenerous interpretation of Mary&#8217;s room is that it is a nearly useless thought experiment.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> It does demonstrate that we have subjective experiences distinct from our knowledge of the underlying stimuli. 
However, it does nothing to advance an anti-physicalist argument, aside from demonstrating that anti-physicalist arguments typically fail to anticipate novel physics.</p><p>To quote Thomas Nagel, in his famous &#8220;What Is It Like to Be a Bat?&#8221;:</p><blockquote><p>At the present time the status of physicalism is similar to that which the hypothesis that matter is energy would have had if uttered by a pre-Socratic philosopher. We do not have the beginnings of a conception of how it might be true.</p></blockquote><p>Leaving the room and perceiving color for the first time, Mary&#8217;s neurophysiology enters novel physical states which either have a unique natural property, or cause a unique natural entity, corresponding to a quale. This does not <em>refute</em> physicalism, but merely points toward a significant unknown in current physical models.</p><p><em>Conclusion</em>: Subjective experience is distinct from knowledge of physical facts. Qualia may still be physical.</p><p><strong><a href="https://plato.stanford.edu/entries/zombies/#ZombConc">Zombies</a></strong></p><p>By our definitions, the conceivability of zombies hinges on the causal problem of consciousness. If phenomenological consciousness <em>does</em> somehow have a causal effect on neurophysiology (i.e., if there is just one binary neuron that &#8220;flips&#8221; based on the presence/absence of qualia), then a perfect zombie is not possible. Otherwise a philosophical zombie should be conceivable, by our mechanistic reading of neurophysiology. If phenomenal consciousness plays no causal role in neurophysiology, then zombies are conceivable (even if they may still be impossible, i.e. 
something inherent in neurophysiology causes consciousness).</p><p>Chalmers comes to the same conclusion on the logical possibility of zombies, which is one of a few steps he takes toward a &#8220;naturalistic dualist&#8221; interpretation of consciousness.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> However, while I agree that conceivability may point toward metaphysical <em>possibilities</em>, it does not actually determine natural reality&#8212;as Chalmers himself states, &#8220;logical possibility and natural possibility are different things.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> Though it is metaphysically possible for my hair to turn green tomorrow, that does not mean it actually will. As I have stated repeatedly, we simply <em>do not know</em> the actual causes of consciousness, or the nature of qualia, so it is impossible to make any definitive claim on whether a zombie really <em>could</em> exist.</p><p><em>Conclusion</em>: Zombies are conceivable. But it seems very unlikely a human zombie is actually possible.</p><p><strong><a href="https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf">What Is It Like to Be a Bat?</a></strong></p><p>The &#8220;what it&#8217;s like&#8221; to be a bat is to have the qualia available to a bat, with a bat&#8217;s limitations on consciousness. Presumably bats, like people, also have an apperceptive self.</p><p>We haven&#8217;t taken a hard stance on whether qualia fundamentally differ between stimuli&#8212;whether there is a material difference between qualia of sight and qualia of sound, or if this is merely an evaluative function of neurophysiology. 
Regardless, we would expect bats to have qualia of &#8220;sonar&#8221; differing from our qualia of sight, due to a capacity to process a type of external stimulus differently from our neurophysiology (different sensory organs and different neurophysiology). What is unclear is whether the experience of sonar corresponds to a novel property/entity (a unique &#8220;quale&#8221; of sonar), or simply an evaluative capacity of a bat&#8217;s brain.</p><p>Following my mechanistic understanding of cognition, I am less fussed by what it would mean to &#8220;think&#8221; like a bat. To think like a bat is merely to experience a bat&#8217;s cognition. Whatever neural activity provokes consciousness in a bat would cause a correlating, bat-like experience.</p><p><em>Conclusion</em>: There is a &#8220;what it&#8217;s like&#8221; to be a bat, though we don&#8217;t know if the experience of being a bat corresponds to any novel qualia (the parameters of consciousness change, but maybe not the properties/entities).</p><p><strong><a href="https://en.wikipedia.org/wiki/Chinese_room">Chinese Room</a></strong></p><p>I have the impression that the Chinese Room plays an outsized role in discourses around artificial intelligence. It&#8217;s a good thought experiment! Searle insists on the necessity of a biological foundation of consciousness more strongly than I do, but I would still agree it is very likely.</p><blockquote><p>Whatever else <strong>intentionality</strong> is, it is a <strong>biological phenomenon</strong>, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena. 
No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding <strong>dualism</strong>: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not.</p><p>&#8212; John R. Searle, &#8220;Minds, brains, and programs&#8221; (1980). Emphasis my own.</p></blockquote><p>My main objection to Searle&#8217;s argument would be his use of the term &#8220;intentionality.&#8221; It&#8217;s a good example of why we ought to stick to qualia when seeking to differentiate consciousness from the mechanics of thought, whether that be in neurophysiology or neural networks. Where does one locate intention in the human mind?</p><p>As I have already mentioned, I do not yet have a clear opinion on where we should locate the apperceptive self. I am unclear whether concepts like perception and self-awareness and intentionality are intrinsic to consciousness, or merely post-hoc justifications for neurophysiological processes, which sit alongside the experience of qualia. Using terms like &#8220;intentionality&#8221; brings on claims I do not wish to make. I only want to assert that qualia are distinct entities, which we and other animals possess, and very likely artificial intelligence (as it exists today) does not. I don&#8217;t want to trip up that claim by arguing what counts as intentionality in biological or artificial &#8220;brains.&#8221;</p><p><em>Conclusion</em>: Agree that our best guess ought to be that AI does not have qualia. Quibble with the use of the term &#8220;intentionality,&#8221; which AI could possess, depending on the definition.</p><p><strong><a href="https://en.wikipedia.org/wiki/China_brain">Chinese Nation</a></strong></p><p>Why are there multiple thought experiments set in China? 
Probably this is some form of mid-century Orientalism? What of the consciousness of the French Nation, among others?</p><p>Regardless, it seems to me that only a panpsychist or hard functionalist interpretation of qualia would support consciousness in this experiment. If one supposes qualia are actually ubiquitous physical properties of nature (an extreme version of our efficacious-property view), then I suppose you could still assert qualia would arise, but I don&#8217;t see a basis for arguing a unified consciousness / apperceptive self. I won&#8217;t make the effort to steelman this final experiment, but I&#8217;m happy to hear better arguments for it.</p><p><em>Conclusion</em>: No consciousness, unless you take a hard functionalist or panpsychist approach. </p><h4>Experience the Qualia of My Conclusion</h4><p>I lightly workshopped this post before sharing externally, but the opinions here mostly reflect my own thinking and exploration of the subject. I will admit that I began this essay with a strong belief in an entity-oriented understanding of consciousness. But I found myself backed into a corner once I recognized the causal difficulties of that view, and then switched to the efficacious-property hypothesis as the most likely explanation. I&#8217;ve only assigned a 45% likelihood on my personal &#8220;scorecard&#8221; above, but this reflects a more fundamental uncertainty about the problem than doubt about my reasoning.</p><p>Ultimately, what qualia might be is a question for science. Maybe there are &#8220;particles&#8221; of consciousness&#8212;who knows! My purpose has only been to explore the likely philosophical explanations for qualia, and in doing so to assert that they are real, and worthy of targeted investigation.</p><p>When I read discussions of consciousness online (particularly in our post-AI age), I see far too much emphasis placed on questions that I believe ought to be understood mechanically, metaphysically, or some mixture of the two (e.g. 
intentionality, knowledge, agency), but that have no bearing on consciousness <em>per se</em>. Because without phenomenal experience, there is no consciousness! Without qualia, we are just making metaphysical arguments about the behavior of intelligent beings (except for apprehension?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a>).</p><p>In my second post, I&#8217;ll continue this discussion of qualia, but shift attention to the ontological and causal definitions we&#8217;ve allowed ourselves to use loosely until now. In particular, I expect to defend the preservation of metaphysics as a concept, while rejecting non-naturalist or non-physical explanations for qualia and other unknowns. I&#8217;ll also explore further the implications of the private nature of qualia, specifically what it implies for science if we know there is at least one apprehended property/entity that we cannot otherwise observe.</p><p>If you have any objections to the above, please let me know, and I will publish an addendum alongside my second post in the series.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thank you for your subjective experience of reading my post!</p></div></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" 
class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>To extend the metaphor, a spiritualist interpretation might say consciousness is the water turning a mill.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>In the near future I&#8217;d like to write about this problem of extending oneself outside one&#8217;s formal area of training to comment on other disciplines. I know some writers address this by formally stating the &#8220;epistemic status&#8221; of their inquiry.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>I treat the <em>unitary</em> self as distinct from the apperceptive self. Our apperceptive self experiences memory and other neurophysiological processes that merely give an impression of continuity. Galen Strawson&#8217;s &#8220;Things That Bother Me&#8221; has a few good arguments in this direction.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>It&#8217;s a <em>little</em> outside the scope of my essay, but I would also like to advocate for a holistic interpretation of the self, including both the mind and body. 
Questions like free will become less problematic if you stop identifying yourself with a fictitious homunculus sitting in the back of your head, and rather identify yourself as your entire person (mind and body).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>I first thought of this proof myself while preparing this essay, and was frustrated to learn I was 30 years too late to claim any originality. If only I had been a more precocious child.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>&#8220;CEOP-ToQ.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>In his hit piece &#8220;<a href="https://web-archive.southampton.ac.uk/cogprints.org/254/1/quinqual.htm">Quining Qualia</a>,&#8221; Dennett further defines qualia as non-relational (&#8220;intrinsic&#8221;). I&#8217;ve excluded it, as I am not convinced it is essential to our definition. &#8220;Ineffable,&#8221; &#8220;private&#8221; and &#8220;apprehended&#8221; already seem sufficient to me to differentiate qualia from other entities. 
I haven&#8217;t thought deeply about this yet.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>I am not refuting the trouble of solipsism.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>I have the subjective experience of impatience the closer I get to my conclusion. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>Chalmers, David J.. The Conscious Mind: In Search of a Fundamental Theory (Philosophy of Mind) (p. 94). (Function). Kindle Edition.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Chalmers, David J.. The Conscious Mind: In Search of a Fundamental Theory (Philosophy of Mind) (p. 257). (Function). Kindle Edition. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Alright, I will admit that I <em>do</em> still have an open question about apprehension/selfhood, as mentioned in the section defining consciousness. However it&#8217;s unclear to me what it would mean to be a &#8220;self&#8221; without qualia. There is maybe an argument here about imagining a self without memory? 
Or rigorously defining the status of the self when we sleep, when we&#8217;re unconscious, etc.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Poetry #1]]></title><description><![CDATA["More," "Configurations," "A Great Man"]]></description><link>https://write.ianwsperber.com/p/poetry-1</link><guid isPermaLink="false">https://write.ianwsperber.com/p/poetry-1</guid><dc:creator><![CDATA[Ian]]></dc:creator><pubDate>Fri, 06 Feb 2026 09:02:32 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/00b8c914-8deb-455a-8320-6353d4f27219_1280x949.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I am very sorry to do this, but I will occasionally share my poetry<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> on this blog. I know you subscribed for wit and commentary, but I like writing poems, and I am not interested in posting about artificial intelligence every week. </p><p>I have often regretted that I write poetry. Some creative pursuits, like painting or guitar, invite the envy of your social circle. Everyone is impressed if you can draw a horse. Poetry, however, mostly encourages friends to ask if you are doing OK, with a reminder to call them if you need to talk. Nobody is excited to hear you rhyme.</p><p>Poetry is just words, and it&#8217;s never clear what makes some words nicer than others. If I like the way a word fits in my mouth, and if I put several nice-fitting words together so that they express my thoughts and feelings, well, that seems &#8220;true&#8221; to me. 
That truthiness is what makes poetry intimate, and awkward.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://write.ianwsperber.com/subscribe?"><span>Subscribe now</span></a></p><p>I write a lot of prose poetry. In fact, I write a prose poem every<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> day. Prose poetry has the obvious advantage that it allows the reader to conceal the fact that they are reading a poem. It is the perfect poetic delivery vehicle for an audience trapped in a public space. If you squint, you might even convince yourself you&#8217;re reading microfiction. Wouldn&#8217;t that be nice?</p><div><hr></div><h4>More (16/360)</h4><p>Life takes many forms. It began with genesis. Ever since, we&#8217;ve had to copulate. I would hate sex if I were a plant. You place a lot of faith in bees. Vertebrates were the first to ask the big questions. What is love? What is beauty? Our ancestors were killed by asteroids on multiple occasions. One of them created Texas. When a politician presents himself before a camera, that too is life. I have seen the fauna in the White House. They are asking different questions. On Wednesdays I go to the gym with a friend. 70% of the energy spent in exercise is lost to heat. Life is not a lossless form. My friend&#8217;s name is Carl, he is that forgettable. I have increased my bench press by 5 kilograms in 2 years. I did not evolve to gain muscle. Neither did Carl. Carl catches me when the weight&#8217;s too much. We are that familiar. Outside the gym, I trap the heat radiating off my muscles in a down jacket. Geese are not as lossy as people. 
A Canada goose might migrate up to 5000 kilometers each way. Life is full of these kinds of facts. A democracy might last exactly 250 years. Because winters are mild now, I can open my jacket, just a little. History created me, but it also created him. Both our lives were accidents of well heated water.</p><p></p><h4>Configurations (18/360)</h4><p>The story I&#8217;m telling is one of patience and fortitude. The story features nine principal characters. One of the characters is a small boy with brown hair who is metaphorically linked with a piebald colt for the first seven chapters, but by the middle of the book has lost himself entirely, and the horse is dead. All the characters live on a large estate. After a character leaves the estate, I can no longer include them in the story. When the boy left he was lost for a long time. But he returned as a handsome man with a chestnut goatee and a toothy smile. He wore yellow socks as he trod up the lawn&#8212;if you read carefully, you would notice the allusion to a horse&#8217;s fetlocks. Above the fireplace hung a painting of his great-grandfather. One of several dogs lifted his head. The fireplace is purely ornamental. The estate is covered in palm trees and agave. Of the eight other characters, three are on the property when he arrives. One is habitually drunk. Two are not yet introduced. Nominally the story is about family, but critics differ as to the deeper meaning. The ninth character is an adolescent when the story ends. She has not yet left the estate, but we understand her departure will conclude the story. With a valise in hand, she glances at a painting of her grandfather, whom she surmises was horselike. Many great stories reflect on the rise and fall of a household through subsequent generations. The granddaughter will have only read enough to know a house can fall. Outside the estate the world is ill-defined. So many details are still missing. 
When I close my eyes, I see slabs of limestone settle onto peat. The story opens with a large garden party, on a vast terrace, the conversation full of veiled hints as to character and plot.</p><p></p><h4>A Great Man (12/360)</h4><p>If I were a genius, then every poem I wrote would be great, even this one, because my life would always be full of intellect and beauty. If tired, I would articulate my fatigue with great license, and teachers would recount to children the efforts I made to lift a single apricot to my lips, as my fingers dawdled on tarnished silverware. My poems would have no resemblance to lists, slave to the plodding logic of clauses, but resemble a flower unfurled in late spring, or a secret whispered into a cupboard. I would question myself, yes, but not really. My whims would be the extension of a profound order, always present and misunderstood. If I were a genius, you would stand on my shoulders, surveying a land that I had first charted. I would do nothing in parts. My every action would be the full expression of the <em>Weltgeist</em>, as it existed beyond me and through me. There would be no crowds, but only attitudes, each open to my interpretation. If I were already listening, a poem might tell itself to me. The invitation is always available. The self that I would have might have been but never was would, occasionally, come to mind. I would reflect with deep fear at the thought of him (me), then twirl my phone through my fingers, puzzling how to write him down into words. If the genius I could have been only knew how much I would have admired him! 
I would put it down into words, too, if I could.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/p/poetry-1/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://write.ianwsperber.com/p/poetry-1/comments"><span>Leave a comment</span></a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I expect to do this every 4 - 6 weeks, though I may alternate between poetry and fiction.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Almost.</p></div></div>]]></content:encoded></item><item><title><![CDATA[My Pet Claude]]></title><description><![CDATA[The economics of agentic development]]></description><link>https://write.ianwsperber.com/p/my-pet-claude-economics-of-agentic-development</link><guid isPermaLink="false">https://write.ianwsperber.com/p/my-pet-claude-economics-of-agentic-development</guid><dc:creator><![CDATA[Ian]]></dc:creator><pubDate>Thu, 29 Jan 2026 10:30:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6niy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I am eager, in fact desperate, to write a post that has nothing to do with AI. Next week I will produce a 5,000 word essay on the history of sawmills in rural Nebraska or the breeding patterns of giant tortoises. 
Yet no matter how hard I tried to research the agricultural history of the Midwest,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> my mind kept returning to Claude.</p><h4>Concerning My Addiction to Vibes</h4><p>I avoided spending too much time with existing <a href="https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html">SDD</a> or orchestration libraries ahead of <a href="https://write.ianwsperber.com/p/claude-cowboys">last week&#8217;s post</a>. This strategy appears to have worked for <a href="https://cynthialeitichsmith.com/2006/03/author-interview-stephenie-meyer-on/#:~:text=I%20didn't%20do%20much%20in%20the%20way%20of%20research,contradicted%20my%20vision">Stephenie Meyer</a>, so why not me? Since then I&#8217;ve played with several popular repositories&#8212;by and large, I&#8217;ve been disappointed.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe if you want to read about giant tortoises and anything else, just not AI</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Everyone is grasping toward the same concepts. But so many of these libraries feel like a given developer&#8217;s personal preferences vibe-coded into an arbitrary and poorly documented new interface. 
<a href="https://github.com/github/spec-kit">Spec-kit</a> is basically a templating engine for PRDs, yet has managed to accumulate 60,000 stars on GitHub? I suppose this could be nice for anyone who has never had to write a Jira story in their life&#8230; but for an experienced developer, why bother? <a href="https://github.com/Fission-AI/OpenSpec">OpenSpec</a> and <a href="https://github.com/bmad-code-org/BMAD-METHOD">BMAD</a> are similar. It&#8217;s the engineering equivalent of automating an Atlassian how-to guide. If you need the convenience methods, just write a custom Claude command to draft with your preferred template.</p><p>I feel similarly about <a href="https://github.com/steveyegge/beads">beads</a> and <a href="https://github.com/steveyegge/gastown">gas town</a>. I only recently set up gas town on an EC2 instance (my personal Nevada test site), so I&#8217;ll refrain from commenting on its efficacy, aside from the <a href="https://maggieappleton.com/gastown">popular remark that the documentation is bizarre</a>. I assumed beads could be applied more easily to my existing workflow, but in practice it again felt like an individual&#8217;s convoluted, idiosyncratic workflow being hoisted into a library to replace&#8230; what? A ticketing system? I am often worried that software engineers, when given superpowers and the freedom to change the world, would only choose to rewrite the entire Atlassian suite according to their personal quirks.</p><p>From a research perspective, it&#8217;s valuable to explore and build in this direction, but for production use I would approach new tools and workflows with significant skepticism. Ultimately, Anthropic is in the best position to make any new SDD or orchestration workflows effective, by building them into Claude Code itself. 
Over the past week, we have already seen the release of <a href="https://x.com/i/status/2014480496013803643">tasks</a>, which are directly influenced by beads, and have already received early reports of new <a href="https://x.com/NicerInPerson/status/2014989679796347375?s=20">swarm</a>, <a href="https://github.com/mikekelly/claude-sneakpeek?tab=readme-ov-file#what-gets-unlocked">delegation and team coordination</a> features. Should we spend cycles building more orchestration libraries when native support is just around the corner?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yVgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73603181-e65c-4493-a379-238fcfb40330_986x1042.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yVgK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73603181-e65c-4493-a379-238fcfb40330_986x1042.png 424w, https://substackcdn.com/image/fetch/$s_!yVgK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73603181-e65c-4493-a379-238fcfb40330_986x1042.png 848w, https://substackcdn.com/image/fetch/$s_!yVgK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73603181-e65c-4493-a379-238fcfb40330_986x1042.png 1272w, https://substackcdn.com/image/fetch/$s_!yVgK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73603181-e65c-4493-a379-238fcfb40330_986x1042.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!yVgK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73603181-e65c-4493-a379-238fcfb40330_986x1042.png" width="482" height="509.37525354969574" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/73603181-e65c-4493-a379-238fcfb40330_986x1042.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1042,&quot;width&quot;:986,&quot;resizeWidth&quot;:482,&quot;bytes&quot;:769181,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/185818179?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73603181-e65c-4493-a379-238fcfb40330_986x1042.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yVgK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73603181-e65c-4493-a379-238fcfb40330_986x1042.png 424w, https://substackcdn.com/image/fetch/$s_!yVgK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73603181-e65c-4493-a379-238fcfb40330_986x1042.png 848w, https://substackcdn.com/image/fetch/$s_!yVgK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73603181-e65c-4493-a379-238fcfb40330_986x1042.png 1272w, https://substackcdn.com/image/fetch/$s_!yVgK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73603181-e65c-4493-a379-238fcfb40330_986x1042.png 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">A cautionary tale</figcaption></figure></div><p>Despite my specific advice to wait for orchestration features, it remains very difficult to calibrate when to build your own tooling, when to leverage libraries and when to wait for native capabilities. I still spent a recent afternoon setting up an EC2 sandbox for gas town and other High Risk Activities, instead of using <a href="https://sprite.dev/">sprite.dev</a> or an equivalent service. 
And since I disliked beads and have yet to see the benefits of Claude&#8217;s new task system, I decided to write my <em>own</em> simple ticketing system (take that, Jira). The cost of producing software is now so low that small frictions in developer experience or minor SaaS fee considerations are enough to justify a couple hours of agentic development. Tooling at the frontier is inherently imperfect, so the marginal cost of writing your own software may be less than the cost of understanding a new solution.</p><h4>Putting Claude to Work: Labor and Capital Allocation</h4><p>Beyond an inflection point, the marginal cost of further automating software production actually increases, and is eventually infeasible at current levels of model intelligence.</p><p>When building a factory, planners must decide on the optimal ratio of labor (people) to capital (machines) for production. Even if fully automating production is <em>technically</em> possible, its marginal cost is so high that it remains more efficient to only partially automate production, retaining labor for activities where humans hold a comparative advantage over machines. This concept may be counterintuitive for engineers who have a drive to automate as many tasks as possible.</p><p>We can apply the same framework of labor-capital splits to software production. Here, labor refers to the activities of all people involved in producing software, most notably software engineers. I&#8217;ll use capital to mean any non-labor asset used in the production of software. For simplicity&#8217;s sake I won&#8217;t distinguish between a company&#8217;s own capital assets and the assets they rent. 
So I consider Claude Code capital, though it would appear in the accounts as an operating expense, not a capital investment.</p><p>Software development is traditionally a labor-intensive activity, even if the output of development, software, is a capital asset that can be leveraged across a huge consumer base. Producing software has long involved teams of product managers, quality assurance testers, support agents, designers and, of course, programmers. That is not to say that the labor mix in software production has been static, or that software development has not become more efficient over time. The past decades saw increasing levels of automation for software <em>operations</em>, mostly thanks to the advent of cloud computing and the associated DevOps and SRE practices, which eliminated or reduced many traditional IT and ops positions (like database administrators). Yet labor has remained essential for the actual <em>production</em> of software.</p><p>Just as the mechanical loom in the Industrial Revolution shifted production away from textile workers, Claude Code in the AI revolution allows us to shift software production from labor (programmers and the rest) to capital (rented or otherwise).</p><p>Epochal shifts in modes of production do not complete overnight. Capital and labor are not binaries, but a ratio that a company must effectively calibrate to optimize its balance sheet. Consider again the automation of software operations: the cloud did not immediately eliminate traditional servers.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Advances in virtual machines and containerization slowly spread throughout the software community, while new roles and practices gradually evolved to capitalize<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> on those changes. 
Even today, when serverless technology is broadly adopted, we have not unified around a single kind of compute or storage layer, but make tradeoffs in how thoroughly we choose to adopt serverless abstractions. We should consider agentic development in the same light&#8212;not as a binary, but as a series of tradeoffs as to the degree of automation desirable.</p><p>So what is the optimal degree of software automation today? I argue that the optimal labor-capital split is a function of model intelligence, where intelligence is a catch-all term for reasoning, instruction compliance, etc. Beyond this ratio, there are no marginal gains in efficiency, and companies will experience diminishing returns on further automation of software production. Here&#8217;s an illustrative chart of the phenomenon. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6niy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6niy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png 424w, https://substackcdn.com/image/fetch/$s_!6niy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png 848w, https://substackcdn.com/image/fetch/$s_!6niy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png 1272w, 
https://substackcdn.com/image/fetch/$s_!6niy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6niy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png" width="1424" height="738" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:738,&quot;width&quot;:1424,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:175182,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/185818179?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6niy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png 424w, https://substackcdn.com/image/fetch/$s_!6niy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png 848w, 
https://substackcdn.com/image/fetch/$s_!6niy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png 1272w, https://substackcdn.com/image/fetch/$s_!6niy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3907c76c-be01-4379-9063-0a7509cb4187_1424x738.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Purely illustrative, though shape and optima approximate my own best guess</figcaption></figure></div><p>Remember that 
&#8220;software production&#8221; refers to the entire set of activities necessary to produce software, not just writing code.</p><p>Please don&#8217;t anchor too hard on the specific numbers I&#8217;ve proposed, which are more vibes than rigour. I&#8217;ve calibrated my guess against the <a href="https://youtu.be/tbDDYKRFjhk?si=3gqu55karrc2Q8cq&amp;t=821">Stanford study citing productivity gains of around 10 - 40%</a> depending on the complexity of the task (the study precedes the latest coding models). The initial productivity gains of agentic coding are strong (consider the automation of rote activities and greenfield projects), but they taper off into a wide trough of mildly differentiated gains. This is where many software developers are spending their time today, experimenting with different agentic coding strategies. The marginal costs are similar in the region around the optimum, so there is only a small penalty for automating a little too far. But further on the costs increase asymptotically&#8212;you will hit a wall if you try to automate your entire workflow.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://write.ianwsperber.com/subscribe?"><span>Subscribe now</span></a></p><p>Today&#8217;s models are great at one-shotting discrete tasks, but still fall over when faced with complexity and expanding requirements. It is well understood that filling the context window rapidly reduces the quality of current models. But, in the context of software development, I would add that even minor, seemingly harmless deviations from instructions and design principles cascade into core technical problems. I naively reason about this as an issue of compounding errors. 
Suppose Claude Code is 5% &#8220;off&#8221; expectations for every changeset, meaning it very slightly deviates from requirements or design principles. Without manual feedback or review, Claude will push your codebase another 5% off with every subsequent change. The deviations compound: after 5 iterations you would have drifted roughly 28% from expectations (1.05<sup>5</sup> &#8776; 1.28); after 10 iterations, roughly 63% (1.05<sup>10</sup> &#8776; 1.63). In practice error rates probably do not compound so neatly, but it&#8217;s true that without human supervision vibe-coding comes to a halt under the weight of its own technical debt. Agentic coding continually adds entropy to the codebase. So small gains in adherence and reasoning can yield massive gains in the number of changesets feasible without human intervention.</p><p>The relative cost of automating software production is sensitive to the fees charged by Anthropic and its competitors. We are all the beneficiaries of a competitive market with large subsidies. I did not find any public numbers confirming the &#8220;true costs&#8221; of a typical Claude Max plan, or any company&#8217;s real per-token cost for a SOTA reasoning model. We do have reports that in 2025 <a href="https://www.theinformation.com/articles/anthropic-lowers-profit-margin-projection-revenue-skyrockets">Anthropic suffered a $5.2 billion loss against $9 billion in ARR</a>, while <a href="https://fortune.com/2025/11/12/openai-cash-burn-rate-annual-losses-2028-profitable-2030-financial-documents/">OpenAI expected to spend $22 billion against $13 billion in sales</a>. With those losses it seems reasonable to expect that &#8220;real costs&#8221; for Claude Code users are at least 2x what we pay, and if a small percentage of users consume the vast majority of tokens, the real costs for heavy users are much greater. 
However, I&#8217;d guess there are few hard technical limits on optimizing the inference layer of any given model, so it&#8217;s likely Anthropic and competitors could engineer away <em>some</em> of the cost problems if they were compelled to chase profitability. When making critical budget and staffing decisions, engineers and business leaders would be prudent to keep in mind that prices <em>might</em> increase someday to better reflect real costs.</p><p>I&#8217;ve charted the labor-capital split relative to a pre-AI baseline of current software output (say, late 2023). The bull case for the labor market is that overall software output will grow apace with the new efficiencies. The low cost of software production creates competitive pressure to add new features, develop new products, and automate increasing amounts of the economy. So even if fewer software engineers are needed <em>relative</em> to overall software output, the absolute amount of software produced increases so much that the actual number of software engineers employed remains constant. As long as engineers, or humans generally, retain <em>some</em> relative advantage over AI, then there is no economic incentive toward full automation, and the labor market could remain steady.</p><p>I fear the bull case may prove to be technically true and substantively false.</p><p>Firstly, there is an inherent <em>lag</em> between the release of a new model, the subsequent development of new automation techniques, and the identification of the new optima. The continued rapid pace of change and various incentives to manage costs mean that many (most) companies will over- or underestimate the amount of labor and capital investment necessary to reach the optimal labor-capital split for a given level of model intelligence. 
An important takeaway from my weeks of experimenting with open-source libraries and other tools is that generic solutions are very unlikely to meet a company&#8217;s unique needs when working at the frontier of capabilities; the solutions are incomplete or incorrectly generalized. Even if the initial gains are achieved through Claude Code (rented capital), reaching an optimal labor-capital ratio will require a lot of bespoke development. Executives who fail to understand this problem will implement hiring freezes and premature layoffs and generally fail to seize productivity gains&#8212;or err in the opposite direction, overinvesting in AI automation and failing to reallocate labor according to the comparative advantage of humans relative to AI. </p><p>I am particularly concerned that most executives will interpret AI as an opportunity to shift resourcing away from engineering toward traditional business functions like finance, marketing, etc., when the most <em>efficient</em> use of labor may actually be to have engineering automate internal business processes and move many traditional roles toward new, AI-enhanced positions.</p><p>The second risk I see is the declining marginal utility of software. It may be that we have a near-infinite number of processes to automate, so the long-term risk is small. But, as with the efficient allocation of labor, the real bottleneck is our own ability to identify new applications of software as marginal costs drop and capabilities increase. Up until now, it seems to me we have seen very few practical innovations using generative AI capabilities outside the model&#8217;s own chat interfaces and software development. 
While these advances alone are huge, this demonstrates that we are already struggling <em>today</em> to invent products on top of the current frontier, let alone the frontier of tomorrow.</p><p>The third problem, and the most profound for our lifetimes, is that we have no guarantee that humans will retain <em>any</em> comparative advantage over AI. It is of course comforting to cling to this idea, and we could feasibly retain important comparative advantages for years and years to come. But if we extrapolate into the coming decades, I would anticipate we reach a point where humans have no meaningful comparative advantage over AI and are no longer needed in the production of software or any other asset.</p><h4>I Love my Claude, I Named Him Bob</h4><p>The history of cloud computing is sometimes told through the zoological terms we used along the way.</p><p>Servers were once like pets. They had names and you managed the lifecycle of a server very carefully. If your server died, it was very sad, and you probably had a funeral or maybe you were fired. Eventually, though, we got good at virtual machines. Virtual machines allowed us to create and destroy servers as a semi-regular part of operations, and it was often convenient to have a lot of them, so we started to think of servers like cattle or herds. Over time we got really good at managing lots of virtual machines, so we ended up with swarm models, where our &#8220;servers&#8221; were so lightweight and ephemeral they resembled insects more than cattle. Eventually this proceeded so far that we no longer needed to think about servers at all. 
In today&#8217;s serverless models, compute is microbial; &#8220;servers&#8221; appear and disappear without any human intervention.</p><p>When I sat down to write this post, I assumed automation of software production would require agents to follow a similar evolution, with human attention spread across greater and greater numbers of agentic coding sessions, each tackling an independent workstream. In this model, our Claude Code sessions are still pets. We manage each session&#8217;s development cycle, we ensure they run in an independent directory, we carefully track context, etc. Even if we run several concurrent sessions, we still tend to each directly.</p><p>I&#8217;ve become skeptical that agentic development will follow the same taxonomy from pets to herds to germs.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4X9i!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88e43f5d-a2de-4e62-8d79-d89526a8ab8a_640x532.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4X9i!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88e43f5d-a2de-4e62-8d79-d89526a8ab8a_640x532.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4X9i!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88e43f5d-a2de-4e62-8d79-d89526a8ab8a_640x532.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4X9i!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88e43f5d-a2de-4e62-8d79-d89526a8ab8a_640x532.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!4X9i!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88e43f5d-a2de-4e62-8d79-d89526a8ab8a_640x532.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4X9i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88e43f5d-a2de-4e62-8d79-d89526a8ab8a_640x532.jpeg" width="512" height="425.6" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/88e43f5d-a2de-4e62-8d79-d89526a8ab8a_640x532.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:532,&quot;width&quot;:640,&quot;resizeWidth&quot;:512,&quot;bytes&quot;:156124,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4X9i!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88e43f5d-a2de-4e62-8d79-d89526a8ab8a_640x532.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4X9i!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88e43f5d-a2de-4e62-8d79-d89526a8ab8a_640x532.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4X9i!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88e43f5d-a2de-4e62-8d79-d89526a8ab8a_640x532.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!4X9i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88e43f5d-a2de-4e62-8d79-d89526a8ab8a_640x532.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">My Claude session, hard at work (<a href="https://commons.wikimedia.org/wiki/Category:Piebald_dogs#/media/File:All_puppy_44.jpg">Wikimedia Commons</a>)</figcaption></figure></div><p>The past weeks&#8217; exploration of orchestration models demonstrate a belief that current model intelligence already allows for a &#8220;herd&#8221; model. 
Today&#8217;s orchestration tools can already produce working software, and it&#8217;s clear to me that Anthropic will continue to add features here. I think a full herd model of agentic development would have to mean that multiple independent agents can handle complex or multi-phase workstreams, either coordinating with each other or under the supervision of an orchestrator, without requiring human intervention in individual sessions, while still soliciting human feedback on individual pull requests as an optional input to ongoing development.</p><p>But I worry this entire line of thinking is ultimately an attempt to build our way past the limitations of today&#8217;s model intelligence. Until agents adhere better to instructions and design principles, I don&#8217;t believe much more automation is possible, no matter how cleverly we orchestrate. Orchestration is a strategy that seems to make sense <em>today</em>, to better manage context windows, to parallelize work, to self-correct when Claude gets stuck or introduces bugs, etc. These are workarounds for the problems of Opus 4.5. Will Opus 5.0 or 6.0 or whatever require the same strategies? It seems likely we&#8217;ll continue to need an equivalent to subagents for context management and parallelization. Beyond that I&#8217;m unsure. Are we only anthropomorphizing AI, assuming that it will work better as a team, just as humans do? Will tomorrow&#8217;s models easily hold an entire backlog&#8217;s worth of requirements in their &#8220;heads,&#8221; chomping through features one by one?</p><p>That does not mean that new techniques, including orchestration, can&#8217;t already improve your productivity. In my earlier chart, I&#8217;ve placed the current optimal capital-labor split at 40-60. How many companies produce the same amount of software as in 2023 with 60% of the team size, at 85% of the cost? Or 40% of the team size at equivalent costs? My guess is already very ambitious! 
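</p><p>To read those ratios as plain arithmetic (my own interpretation of the numbers above, not data): producing the 2023 baseline output with 60% of the headcount at 85% of the total cost implies that spending on AI capital now equals roughly a quarter of the old payroll.</p><pre><code># Interpreting the 60%-of-team-at-85%-of-cost scenario (illustrative only).
# Costs are normalized so the 2023 payroll equals 1.0.
remaining_labor = 0.60   # 60% of the team still employed
total_cost = 0.85        # same output at 85% of the 2023 cost
capital_spend = total_cost - remaining_labor
print(f"{capital_spend:.0%}")  # AI capital spend as a share of the old payroll</code></pre><p>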
It&#8217;s possible a mature version of gas town could still expedite development, even if significant human supervision and intervention are required. I am only skeptical that these approaches can take us beyond those ratios, or that they will still apply as model intelligence improves.</p><p>I have the impression that some engineers think the fault is their own&#8212;that if they could only find the perfect workflow, or write the perfect library, or spend a few million more tokens every month, they could <em>finally</em> automate 100% of software production. But the capabilities just aren&#8217;t there. You can&#8217;t automate yourself out of a job quite yet.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/p/my-pet-claude-economics-of-agentic-development/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://write.ianwsperber.com/p/my-pet-claude-economics-of-agentic-development/comments"><span>Leave a comment</span></a></p><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I am not actually going to write a post about Nebraska or turtles, at least not yet</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I know you were looking forward to ongoing comparisons with looms and steam engines, but I know more about cloud technology than the Industrial Revolution.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div 
class="footnote-content"><p>Pun intended</p></div></div>]]></content:encoded></item><item><title><![CDATA[Claude Cowboys]]></title><description><![CDATA[Monorepos, agentic workflows, tmux, sandboxes and the wild west. My best practices for agentic development with Claude Code.]]></description><link>https://write.ianwsperber.com/p/claude-cowboys</link><guid isPermaLink="false">https://write.ianwsperber.com/p/claude-cowboys</guid><dc:creator><![CDATA[Ian]]></dc:creator><pubDate>Thu, 22 Jan 2026 10:41:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cc399243-0814-43a6-a55b-955e9d66909d_1658x832.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>A note for subscribers&#8212;this is a technical post about agentic software development. If it&#8217;s not for you, just skip it!</em></p><p>We are in the wild west of agentic coding. It&#8217;s a lot of fun! Nobody really knows the right way to program anymore. It&#8217;s useful to experiment with different approaches, even if they <a href="https://github.com/snarktank/ralph">seem ridiculous</a>. There has never been a better time to be a cowboy coder.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/karpathy/status/2004607146781278521?lang=en&quot;,&quot;full_text&quot;:&quot;I've never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. 
I have a sense that I could be 10X more powerful if I just properly string together what has become&quot;,&quot;username&quot;:&quot;karpathy&quot;,&quot;name&quot;:&quot;Andrej Karpathy&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1296667294148382721/9Pr6XrPB_normal.jpg&quot;,&quot;date&quot;:&quot;2025-12-26T17:36:02.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:2618,&quot;retweet_count&quot;:7520,&quot;like_count&quot;:55778,&quot;impression_count&quot;:16389164,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><p>Claude Code got a lot of attention over the holidays. That was a lucky mix of a new model (Opus 4.5) and people having too much time on their hands. It&#8217;s true that Opus 4.5 is <a href="https://artificialanalysis.ai/#artificial-analysis-coding-index">very good at coding</a>. Good enough that the reality is starting to catch up to the buzz&#8212;I really do believe that the vast majority of software development can shift to agentic workflows. Though the Overton window of AI coding hype seems to have adjusted apace, given the extravagant claims on Not Twitter about the <a href="https://x.com/deepfates/status/2004994698335879383">emergence of AGI in Opus 4.5</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Cowboy/vibe coding is lots of fun, but it&#8217;s not always clear how well it transfers to a professional context where we&#8217;re actually held accountable for our software. The online discourse around vibe coding and flashy MVPs is a bit of a distraction for a practicing software engineer. Some of the same practices do carry over into our day jobs. 
But vibes can&#8217;t substitute for rigor.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>To that end, here are a few of my practical recommendations for Claude Code development.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> Sorry if they aren&#8217;t sexy. You can still vibe code on the weekends, cowboy.</p><p><strong>My Boring but Useful Recommendations for Agentic Coding with Claude</strong></p><ol><li><p>Develop in <strong>monorepos</strong> for portability and consistency</p></li><li><p>Adopt an <strong>agentic workflow</strong>, something like Design (Human + AI) &#8594; Plan (AI + Human) &#8594; Implement (AI) &#8594; Review (AI + Human)</p><ol><li><p>Maybe commit any related documents to your repo and call them &#8220;thoughts&#8221;</p></li></ol></li><li><p>Orchestrate your 5 bajillion Claude sessions with <strong>tmux</strong> or an equivalent tool </p><ol><li><p>Maybe install one of the many tmux wrappers (<a href="https://github.com/ianwsperber/claude-cowboy">including the plugin I wrote for this post</a>), or fork your own, I don&#8217;t care</p></li></ol></li><li><p>Mostly don&#8217;t bother with remote sandboxes?</p><ol><li><p>But in a few months maybe you should?</p></li></ol></li></ol><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe for more boring opinions</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div 
class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>1. Develop in Monorepos</h2><p><em>Note: This pattern broke as I was writing this post! Bummer. You can find the <a href="https://github.com/anthropics/claude-code/issues/19535">bug report</a> on GitHub. I&#8217;ve left the recommendation on the assumption it will get fixed.</em></p><p>I run Claude Code within a development monorepo. I am aware that some people hate monorepos (wrongly) and submodules (rightly). But we&#8217;re only using the monorepo as a development harness for Claude, so mostly you can shut your eyes and pretend it&#8217;s not there. <a href="https://github.com/ianwsperber/example-claude-monorepo">I&#8217;ve created an example project on GitHub</a>. </p><p>The monorepo solves a few specific problems for me:</p><ul><li><p>How do I share my Claude configurations across machines and with other developers?</p></li><li><p>How do I support consistent agentic coding workflows?</p></li><li><p>How do I work across multiple related repositories?</p></li></ul><p>The monorepo enforces a structure of nested context&#8212;remember that Claude will <a href="https://code.claude.com/docs/en/memory#determine-memory-type">apply CLAUDE.md files from all parent directories</a>. The monorepo contains all my repos as submodules. It has context about the interactions between those repos, and it&#8217;s configured with the Claude settings I need to develop effectively on those projects. Everything is in version control and most of the relevant context is available on the filesystem. 
If I run Claude in any of the submodules, it will automatically receive context from the monorepo, including any configured commands and skills (at the time of writing, Claude appears to have a <a href="https://github.com/anthropics/claude-code/issues/12962">bug with settings</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>).</p><pre><code>Development_Monorepo/              # Monorepo for team/company
&#9500;&#9472;&#9472; .claude/                       # Common Claude configurations
&#9474;   &#9500;&#9472;&#9472; commands/
&#9474;   &#9500;&#9472;&#9472; skills/
&#9474;   &#9492;&#9472;&#9472; settings.json              
&#9500;&#9472;&#9472; .thoughts/                     # Saved "thoughts" (see next section)                      
&#9474;   &#9492;&#9472;&#9472; TECH-1/                
&#9474;       &#9500;&#9472;&#9472; PRD.md             
&#9474;       &#9500;&#9472;&#9472; Plan.md            
&#9474;       &#9492;&#9472;&#9472; Updates.md         
&#9500;&#9472;&#9472; projects/                      # All your team's projects
&#9474;   &#9500;&#9472;&#9472; project_a/
&#9474;   &#9474;   &#9500;&#9472;&#9472; .thoughts/
&#9474;   &#9474;   &#9500;&#9472;&#9472; submodule_a1/          # Actual repo!
&#9474;   &#9474;   &#9474;   &#9500;&#9472;&#9472; .claude/           # Repo-specific settings
&#9474;   &#9474;   &#9474;   &#9500;&#9472;&#9472; .thoughts/         # Repo-specific "thoughts"
&#9474;   &#9474;   &#9474;   &#9492;&#9472;&#9472; CLAUDE.md          # Fatter CLAUDE.md with repo context
&#9474;   &#9474;   &#9500;&#9472;&#9472; submodule_a2/          # [submodule]
&#9474;   &#9474;   &#9492;&#9472;&#9472; CLAUDE.md              # Thin Claude.md with project context
&#9474;   &#9492;&#9472;&#9472; project_b/
&#9474;       &#9492;&#9472;&#9472; submodule_b1/          
&#9500;&#9472;&#9472; shared/                        
&#9500;&#9472;&#9472; scripts/                       
&#9492;&#9472;&#9472; CLAUDE.md                      # Thin CLAUDE.md with team context</code></pre><ul><li><p>We add other repos as <a href="https://git-scm.com/book/en/v2/Git-Tools-Submodules">git submodules</a> under the <code>projects</code> directory. If needed, we can add subfolders in <code>projects</code> to group related repos. Note that we configure submodules with <code>ignore = all</code>, since we don&#8217;t actually want our monorepo to track the changes in each project.</p></li><li><p>We have a top-level <code>.claude</code> directory with settings, commands and skills common to all projects. We also maintain a <code>.claude</code> in each repo.</p></li><li><p>We have a <em>thin </em>top-level <code>CLAUDE.md</code> file with context required for all projects. We add a <code>CLAUDE.md</code> file as needed to projects/repos for project/repo-specific context.</p></li><li><p>We have a top-level <code>.thoughts</code> directory to store product requirements documents, plans and other material consumed and produced by agents. We add a <code>.thoughts</code> directory as needed to projects/repos for project/repo-specific documents. <em>I&#8217;ll talk about this more in the next section!</em></p></li><li><p>We have a <code>shared</code> directory for code we actually want to stay in our monorepo. You might also want a <code>scripts</code> directory. They&#8217;re optional. Add other directories if you&#8217;d like, I&#8217;m not your dad.</p></li></ul><p>Your repos probably already have a <code>.claude</code> directory for your project-specific configurations. Monorepos help us keep that directory well-scoped to the repository itself. A lot of our Claude commands will be typical development tasks, like <code>/pull_request</code>, which shouldn&#8217;t have to be copied across all repositories. We&#8217;re also likely to need a minimal amount of corporate/team context in each repository, which is nice to shift out into a better-suited location. 
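</p><p>For the submodule wiring mentioned in the bullets above, the <code>ignore = all</code> setting lives in <code>.gitmodules</code>. A sketch, with a hypothetical repo URL:</p><pre><code>[submodule "projects/project_a/submodule_a1"]
	path = projects/project_a/submodule_a1
	url = https://github.com/example/submodule_a1.git
	ignore = all</code></pre><p>With <code>ignore = all</code>, <code>git status</code> and <code>git diff</code> in the monorepo stay quiet no matter what is happening inside each project, which is what we want from a development harness. </p><p>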
In the monorepo, those company/team-level configurations live in the monorepo root&#8217;s <code>.claude</code> directory and <code>CLAUDE.md</code> file.</p><p>Claude has a few built-in conventions for sharing configurations, including <a href="http://docs.claude.com/en/docs/claude-code/plugins">plugins</a> and a <a href="https://code.claude.com/docs/en/settings#available-scopes">managed-settings.json</a> file. Plugins are an attractive alternative to the monorepo, and may have some nice guarantees around access control for an enterprise. They&#8217;re also useful to <a href="https://github.com/obra/superpowers">quickly boost Claude&#8217;s capabilities</a>. However, it&#8217;s harder to hack on plugins, like any software with a distribution step, and they don&#8217;t help much with directory layout and context for multi-repo changes.</p><p>My example monorepo contains submodules for a website frontend and backend. In this simple example, there&#8217;s some marginal benefit in allowing Claude to implement features across both repos (say, user login) but it&#8217;s not huge. Once you start expanding to large projects with dozens of interrelated services, the benefits grow. Dependencies between services are often implicit and poorly defined. 
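</p><p>A project-scoped <code>CLAUDE.md</code> is a natural place to make those dependencies explicit. A hypothetical sketch (the service roles and rules are invented for illustration):</p><pre><code># CLAUDE.md (projects/project_a/)

## Services
- submodule_a1/: web frontend; calls the backend over REST
- submodule_a2/: API backend; owns the database schema

## Cross-repo rules
- API changes start in the backend; update the frontend client
  against the new contract before changing frontend code
- Auth tokens are issued by the backend and validated in both repos</code></pre><p>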
The monorepo allows us to provide a <code>CLAUDE.md</code> for the relevant project scope that will explain how these repos work together.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pCjc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe14e4ec-b6c5-4a4f-9bf1-443067059517_1825x1109.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pCjc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe14e4ec-b6c5-4a4f-9bf1-443067059517_1825x1109.png 424w, https://substackcdn.com/image/fetch/$s_!pCjc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe14e4ec-b6c5-4a4f-9bf1-443067059517_1825x1109.png 848w, https://substackcdn.com/image/fetch/$s_!pCjc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe14e4ec-b6c5-4a4f-9bf1-443067059517_1825x1109.png 1272w, https://substackcdn.com/image/fetch/$s_!pCjc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe14e4ec-b6c5-4a4f-9bf1-443067059517_1825x1109.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pCjc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe14e4ec-b6c5-4a4f-9bf1-443067059517_1825x1109.png" width="569" height="345.8550824175824" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fe14e4ec-b6c5-4a4f-9bf1-443067059517_1825x1109.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:885,&quot;width&quot;:1456,&quot;resizeWidth&quot;:569,&quot;bytes&quot;:146562,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/184591428?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe14e4ec-b6c5-4a4f-9bf1-443067059517_1825x1109.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pCjc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe14e4ec-b6c5-4a4f-9bf1-443067059517_1825x1109.png 424w, https://substackcdn.com/image/fetch/$s_!pCjc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe14e4ec-b6c5-4a4f-9bf1-443067059517_1825x1109.png 848w, https://substackcdn.com/image/fetch/$s_!pCjc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe14e4ec-b6c5-4a4f-9bf1-443067059517_1825x1109.png 1272w, https://substackcdn.com/image/fetch/$s_!pCjc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe14e4ec-b6c5-4a4f-9bf1-443067059517_1825x1109.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We also avoid weirdness defining MCPs or custom skills to explore related services. There&#8217;s a time and place for this, but my experience has been that Claude performs better when it can explore the local filesystem for answers, rather than working across multiple MCPs / remote data sources.</p><p>Note that one drawback with running Claude above/outside your target repo is that it will <em>not</em> automatically load the repo context / <code>CLAUDE.md</code> file. If our frontend repo has a special, secret gotcha in the repo&#8217;s <code>CLAUDE.md</code>, we won&#8217;t know about it by default. We can solve this with good planning workflows (next section) or actual agent orchestration (following).</p><h2>2. 
Adopt Agentic Workflows (I Think, Therefore I Plan)</h2><p>When I say Opus 4.5 supports agentic workflows, I do <em>not</em> mean fully agentic, unsupervised workflows. I appreciate that my LinkedIn feed has filled with corporate influencers advising me to summon an army of AI interns to run Ian Inc. on my behalf. For now, that is more hype than practical advice. If you let Claude make changes without proper supervision, you will end up with compounding errors that render your codebase unusable. What we <em>can</em> do is allow Claude autonomy within defined phases of our development workflow, namely implementation.</p><p>Claude&#8217;s plan mode is your first line of defense against dumb mistakes. And for plans to be really effective, you need good requirements. Really what you want is a good <em>product</em> document that outlines the underlying context. Channel the angsty spirit of Marty Cagan. If you&#8217;re working on a larger team, you probably already have requirements written as epics or stories in Jira. Lucky you! Write a skill for feeding that information into your plan.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> Otherwise, write a <a href="https://en.wikipedia.org/wiki/Product_requirements_document">PRD (product requirements document)</a> explaining the context for your feature, alongside your goals and requirements&#8212;functional and non-functional.</p><p>I usually pop my technical specification as bullets under the non-functional requirements. I don&#8217;t think it&#8217;s worth producing your own technical specification document unless it&#8217;s a big/important change. Just review the plan Claude generates and adjust it as needed.</p><p>There are lots of possible formats for a PRD. Here&#8217;s an example:</p><pre><code># TECH-1 Implement auth

# Background

We forgot to authenticate our website. Oops!

# Goals

- Implement secure authentication for external users visiting our website

# Requirements

## Functional

- Home page and sign-in/sign-up should not require authentication. All other pages must.
- Allow signup with email and password. For now, do not support social login.
- Must adhere to GDPR requirements

## Non-Functional

- Implement with Auth0

# Notes

- Bob says we did this before. See "example_repo" for reference.
- Our senior architect has provided a sample architecture diagram in @ARCHITECTURE.png
- Use sub-agents to do web research on current best practices</code></pre><p>I save all my PRDs in a <code>.thoughts</code> directory local to my project, alongside the generated plan and any updates. I first encountered this idea after reading this <a href="https://github.com/humanlayer/advanced-context-engineering-for-coding-agents/blob/main/ace-fca.md">article</a> from <a href="https://www.humanlayer.dev/">HumanLayer</a> and looking into their CLI tools. It&#8217;s a good idea! It is extremely helpful when launching future Claude sessions that ought to reference prior product and engineering decisions. It can also be a critical step in coordinating multiple Claude sessions. If you&#8217;re already wed to a ticketing or documentation solution, at least plug a URL into a <code>PRD.md</code> file. My guiding principle is to make as much information as possible available in the filesystem for discovery.</p><pre><code>.thoughts/
&#9500;&#9472;&#9472; TECH-1/                    # Unique ticket
&#9474;   &#9500;&#9472;&#9472; PRD.md                 # My PRD
&#9474;   &#9500;&#9472;&#9472; Plan.md                # Claude's plan. Save this!
&#9474;   &#9492;&#9472;&#9472; Updates.md             # Periodic updates on implementation</code></pre><p>Unlike the HumanLayer team, I have not found much value in writing <a href="https://github.com/humanlayer/humanlayer/blob/main/.claude/commands/create_plan.md">custom commands for the planning process</a>. I tried this for about a week and left with the impression I was always fighting Claude&#8217;s built-in instructions. I&#8217;m also skeptical that any juice I can squeeze from a specially engineered prompt will represent more than a marginal gain vs advances in the intelligence of the underlying model. So mostly I just stick to simple requests like &#8220;Please create an implementation plan for <code>@PRD</code>&#8221; or &#8220;pwan is weddy, pweeeese help me implement it mistuh cwaude <code>@TECH-1</code>&#8221; and that&#8217;s enough. I do maintain a <a href="https://github.com/ianwsperber/example-claude-monorepo/blob/main/.claude/skills/thoughts/SKILL.md">skill</a> for working with the <code>.thoughts</code> directory.</p><p>Within a Claude session, I then follow a linear workflow built around individual tickets (&#8220;thoughts&#8221;):</p><ol><li><p>Design (Human + AI)</p><ol><li><p>I write a PRD. I make sure to capture <em>all</em> relevant product requirements. The agent should understand my exact acceptance criteria. I also specify any important technical decisions, and note relevant gotchas.</p></li><li><p>For large/important features, I solicit critical feedback from an AI <em>before</em> asking Claude to build a plan. It&#8217;s useful to ensure all ambiguities are specified.</p></li></ol></li><li><p>Plan (AI + Human)</p><ol><li><p>Enter plan mode! Provide Claude with your PRD. Ask it to put together an implementation plan.</p></li><li><p>Make sure Claude writes the plan down to your Plan.md file, or just copy it yourself from the temporary plan file.</p></li><li><p>Claude tends to incorporate phases in the implementation plan. 
Ensure it has the appropriate checkpoints for human or automated review (including tests!)</p></li><li><p>If the plan phase is long, activate your &#8220;birdbrain&#8221; and cycle to a parallel Claude session</p></li></ol></li><li><p>Implement (AI)</p><ol><li><p>Decide if you should proceed to implementation with the current context window. <a href="https://github.com/anthropics/claude-code/issues/19426">Anthropic </a><em><a href="https://github.com/anthropics/claude-code/issues/19426">just</a></em><a href="https://github.com/anthropics/claude-code/issues/19426"> added a new &#8220;Yes, clear context and auto-accept edits&#8221; option when approving plans</a>, so you no longer need to worry about unnecessary details from the planning phase polluting your context. Previously, for complex tasks, I would clear the context manually then load the plan from disk. So you should probably select this new option?</p></li><li><p>Once implementing, let Claude do its thing! Intervene as needed. Activate your birdbrain again, cycle to another Claude session. </p></li></ol></li><li><p>Review (Human + AI)</p><ol><li><p>Here, I&#8217;m referring to a final local review before you produce a pull request. Make sure things actually work.</p></li><li><p>Check out this <a href="https://news.ycombinator.com/item?id=46656897">recent blog post on &#8220;backpressure.&#8221;</a> Ideally a human is not catching obvious mistakes, like failing tests. Experiment with ways to avoid wasted cycles reviewing bugs.</p></li></ol><p></p></li></ol><h2>3. 
Orchestrate with tmux and Pretend You Always Knew About It</h2><p>We are approximately 2 days away from someone releasing Kubernetes for Claude Code.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> Until then, there&#8217;s <a href="https://github.com/tmux/tmux/wiki/Getting-Started">tmux</a>.</p><p>tmux is a &#8220;terminal multiplexer&#8221; that allows you to split a single terminal into multiple virtual terminals or tmux sessions. tmux sessions are persistent. You can open and close tmux windows without terminating the underlying tmux session. You can also name and list sessions then switch between them. It&#8217;s quick to <a href="https://github.com/tmux/tmux/wiki/Installing">install</a> and easy to use, once you figure out the basic commands (like <code>ctrl+b d</code> to detach from a session).</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;1a1dcc07-688f-4218-bbd8-b99cd69d9f6b&quot;,&quot;duration&quot;:null}"></div><p>Is it normal for developers to have a half-dozen Claude sessions running at once? Yes, though with a sharp drop-off in productivity once your brain&#8217;s own context window is overwhelmed. Developers increasingly operate in the role of an orchestrator relative to their agents. While Claude spins its wheels on a plan or implementation, developers shift to parallel sessions. Claude&#8217;s busy for minutes at a time, which is long enough to interact with other sessions, but not really long enough to do much <em>else</em> productive. 
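</p><p>The name-and-switch loop above takes only a handful of commands. Here is a minimal sketch; the session names are just examples, not any required convention:</p><pre><code># one named, detached session per ticket
tmux new-session -d -s tech-1
tmux new-session -d -s tech-2

# see what is running
tmux ls

# jump into whichever session needs you (detach again with ctrl+b d):
#   tmux attach -t tech-1

# clean up when a ticket ships
tmux kill-session -t tech-1
tmux kill-session -t tech-2</code></pre><p>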
By keeping our sessions organized, tmux helps us to avoid a schizophrenic breakdown, as we otherwise juggle a half-dozen ambiguous terminals on a tiny laptop screen, all screaming for permission to force push to main.</p><p>As much as I would like to claim to be a prophet of The New Way, I have found <a href="https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16dd04">many</a> <a href="https://www.reddit.com/r/ClaudeAI/comments/1lp9c7p/my_breakthrough_workflow_multiagent_collaboration/">other</a> <a href="https://medium.com/@sattyamjain96/i-spent-months-building-the-ultimate-claude-code-setup-heres-what-actually-works-ba72d5e5c07f">posts</a> <a href="https://github.com/awslabs/cli-agent-orchestrator">and</a> <a href="https://github.com/Jedward23/Tmux-Orchestrator">various</a> <a href="https://github.com/smtg-ai/claude-squad">GitHub</a> <a href="https://github.com/Dicklesworthstone/claude_code_agent_farm">repositories</a> <a href="https://github.com/asheshgoplani/agent-deck">that</a> use tmux to orchestrate Claude Code sessions. Not to mention the <a href="https://x.com/idosal1/status/2011886884830789808?s=20">strange</a> and <a href="https://x.com/nearcyan/status/2011897629987520526?s=20">experimental</a> alternatives. The open source wrappers I&#8217;ve explored <em>do</em> add value over raw tmux sessions. They are also pretty straightforward&#8212;you wouldn&#8217;t be remiss to fork and/or vibe code your own.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/nearcyan/status/2011897629987520526?s=20&quot;,&quot;full_text&quot;:&quot;this is how i claude code now. it's fun! 
&quot;,&quot;username&quot;:&quot;nearcyan&quot;,&quot;name&quot;:&quot;near&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2004031494403379202/H2rrIviW_normal.jpg&quot;,&quot;date&quot;:&quot;2026-01-15T20:25:48.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://substackcdn.com/image/upload/w_1028,c_limit,q_auto:best/l_twitter_play_button_rvaygk,w_88/uaaa7kfycn2ckiuxd7qk&quot;,&quot;link_url&quot;:&quot;https://t.co/thkWyCji2S&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:374,&quot;retweet_count&quot;:501,&quot;like_count&quot;:9139,&quot;impression_count&quot;:1194704,&quot;expanded_url&quot;:null,&quot;video_url&quot;:&quot;https://video.twimg.com/amplify_video/2011897248507203584/vid/avc1/1280x720/tE6voHuqLzu476KJ.mp4&quot;,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>To accompany this post, I spent a couple of days vibe coding<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> my own custom Claude Code / tmux session manager. You can find it here: <a href="http://github.com/ianwsperber/claude-cowboy">http://github.com/ianwsperber/claude-cowboy</a>. </p><p>It has some nice benefits over raw tmux sessions. The repo includes a Claude Code plugin and a separate CLI. 
The CLI contains several utilities for Claude, such as automatically provisioning clean <a href="https://www.anthropic.com/engineering/claude-code-best-practices#c-use-git-worktrees">git worktrees</a> for new Claude sessions (poor man&#8217;s sandboxing), and a helpful dashboard written with <a href="https://github.com/junegunn/fzf">fzf</a> to review and switch between your various Claude/tmux sessions.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;d3c18e24-7b0d-4460-b8d7-c76acff10d1f&quot;,&quot;duration&quot;:null}"></div><p>I&#8217;ve also experimented with utilities for actual agentic orchestration, i.e. Claude sessions orchestrating other sessions. I am unsure what mental deficiency leads every developer writing an orchestration tool to <a href="https://www.alilleybrinker.com/mini/gas-town-decoded/">develop their own pet names for otherwise standard concepts</a>, but I appear to have succumbed to the same psychosis with my Wild West theme. Currently this includes a <code>/posse</code> command to distribute work among multiple agents (the orchestrator is the sheriff and he deputizes other agents) and a <code>/lasso</code> command to issue a one-off request to an independent Claude session.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><p>Both orchestration commands work&#8230; sort of. Claude today does not have any primitives for agent orchestration beyond its own sub-agents, which are helpful but are not interactive and still populate the parent context with their results. Writing orchestration utilities requires some workarounds for passing messages and waiting on responses, none of which feel very robust, and I&#8217;m afraid might chew through a lot of unnecessary tokens. So for now it&#8217;s a proof of concept more than a full feature set. 
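</p><p>The worktree provisioning mentioned above is also easy to replicate by hand. A minimal sketch, with hypothetical repo and branch names:</p><pre><code># give a new Claude session its own isolated checkout and branch
git worktree add -b tech-1 ../myrepo-tech-1

# run the session inside the worktree, e.g.
#   (cd ../myrepo-tech-1; claude)

# once the branch is merged, tear the worktree down
git worktree remove ../myrepo-tech-1
git branch -d tech-1</code></pre><p>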
I decided I needed a mental health break from hyper-optimizing my Claude setup.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/nearcyan/status/2012691772686598197?s=46&quot;,&quot;full_text&quot;:&quot;&quot;,&quot;username&quot;:&quot;nearcyan&quot;,&quot;name&quot;:&quot;near&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2004031494403379202/H2rrIviW_normal.jpg&quot;,&quot;date&quot;:&quot;2026-01-18T01:01:27.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/G-6Ef9pWMAAlDZe.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/E8klrYWh66&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:8,&quot;retweet_count&quot;:28,&quot;like_count&quot;:417,&quot;impression_count&quot;:14653,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>I am confident that Anthropic will add native orchestration capabilities to Claude Code in 2026. The advances of Opus 4.5 have moved too much of the development community in this direction. There are clear opportunities to improve <a href="https://code.claude.com/docs/en/claude-code-on-the-web">remote sessions</a> and <a href="https://a2a-protocol.org">agent-to-agent communication</a> beyond what I could implement in a simple side project. I don&#8217;t see any technical reason why a mature orchestration feature set could not be added today. 
It shouldn&#8217;t require any further advances in SOTA model intelligence.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KYXu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49eab33b-5314-45d7-909d-4671946d047a_1460x984.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KYXu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49eab33b-5314-45d7-909d-4671946d047a_1460x984.png 424w, https://substackcdn.com/image/fetch/$s_!KYXu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49eab33b-5314-45d7-909d-4671946d047a_1460x984.png 848w, https://substackcdn.com/image/fetch/$s_!KYXu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49eab33b-5314-45d7-909d-4671946d047a_1460x984.png 1272w, https://substackcdn.com/image/fetch/$s_!KYXu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49eab33b-5314-45d7-909d-4671946d047a_1460x984.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KYXu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49eab33b-5314-45d7-909d-4671946d047a_1460x984.png" width="569" height="383.3715659340659" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/49eab33b-5314-45d7-909d-4671946d047a_1460x984.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:981,&quot;width&quot;:1456,&quot;resizeWidth&quot;:569,&quot;bytes&quot;:124207,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/184591428?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49eab33b-5314-45d7-909d-4671946d047a_1460x984.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KYXu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49eab33b-5314-45d7-909d-4671946d047a_1460x984.png 424w, https://substackcdn.com/image/fetch/$s_!KYXu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49eab33b-5314-45d7-909d-4671946d047a_1460x984.png 848w, https://substackcdn.com/image/fetch/$s_!KYXu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49eab33b-5314-45d7-909d-4671946d047a_1460x984.png 1272w, https://substackcdn.com/image/fetch/$s_!KYXu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49eab33b-5314-45d7-909d-4671946d047a_1460x984.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://manifold.markets/ianwsperber/will-anthropic-release-an-orchestra">https://manifold.markets/ianwsperber/will-anthropic-release-an-orchestra</a></figcaption></figure></div><p>While there is a role for developers to act as agentic operators today, I am skeptical this will last. The current confluence of model intelligence, speed and cost allows us to add value through orchestration. An agent <em>can</em> be pretty independent planning and implementing a feature, but it&#8217;s not super fast. Orchestration leverages that downtime. However, assuming SOTA models do continue to advance over the next year, I would wager that we will move quickly toward orchestration agents that run and review multiple sessions on our behalf. 
Today&#8217;s toy orchestration architectures might find production applications quicker than we realize. For this to happen, we may actually need a &#8220;Kubernetes for Claude Code&#8221;&#8212;i.e., a way to quickly invoke isolated sandboxes.</p><h2>4. Probably Don&#8217;t Bother with a Sandbox? (Yet)</h2><p>The general consensus in the development community is that security is boring and we would all be a lot more productive if we didn&#8217;t have to worry about it. But we are also very worried that if we keep selecting &#8220;yes&#8221; whenever Claude asks for bash permissions, then we will eventually delete all our family photos and compromise our bank account. The solution, for those of us with moral character, seems to be developer sandboxes.</p><p>In practice this is quite hard to get right. There are a number of existing solutions for managing remote development environments (like <a href="https://github.com/features/codespaces">GitHub Codespaces</a>) and there are a number of existing solutions for managing lightweight containers (think Docker and Kubernetes). But Claude Code requires both the flexibility of a full development environment and the safety guarantees of containers. I haven&#8217;t found a good solution for both. I guess you could literally stand up an EKS cluster to act as your personal Claude Code session swarm, but this seems dubious to me from a cost-benefit standpoint (maybe it would make sense for a large company?). </p><p><a href="https://fly.io/">Fly.io</a> is trying to solve this exact problem with <a href="https://sprites.dev/">sprites</a>. Unfortunately the product is not ready for prime time usage. 
The documentation is nearly nonexistent, the CLI is unintuitive, there were <a href="https://community.fly.io/t/sprites-not-shutting-off/26793/8">launch bugs with sprites not shutting down</a> and there is currently no way to launch sprites from an image (which seems like a non-starter if your local environment requires any lengthy setup steps). Still, they are moving in the right direction!<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><p>I am curious what Anthropic themselves might try to build for better sandboxes. <a href="https://code.claude.com/docs/en/claude-code-on-the-web">Claude Code Web</a> reads to me like a tentative step in this direction, but the web preview has limited utility for proper development. I would expect a lot of fixed costs for a company to offer a secure fleet of containers for ad-hoc sandboxes, and I&#8217;m not sure how well that fits within Anthropic&#8217;s current vision and org. I&#8217;d also guess that many users want the flexibility to change model providers, which may make them reluctant to buy into the Anthropic ecosystem. However, if there is a future in which agents orchestrate development across dozens or hundreds of independent sessions, then I suspect you want to own those capabilities. Especially for a company that has maintained an image as the programmer&#8217;s model of choice.</p><p>So until there is a good, accessible solution for developer sandboxes, I would continue working on your local machine for Claude Code development. I <em>do</em> want to try setting up a <code>claude</code> user on my machine, which should provide some better security guarantees than my personal user. I&#8217;ve also thought about buying a Mac mini to work as a dedicated, local Claude box. Or I might just wait until Fly.io or another company releases a better solution for sandboxes.
I anticipate that will happen very soon.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/p/claude-cowboys/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://write.ianwsperber.com/p/claude-cowboys/comments"><span>Leave a comment</span></a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I guess this is what happens when LinkedIn influencers are forced to take a vacation?</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I suppose there are contexts in which this is not true, like playing in a funk band</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>I&#8217;m not going to cover the truly standard best practices for Claude Code. But you can check out <a href="https://www.anthropic.com/engineering/claude-code-best-practices">https://www.anthropic.com/engineering/claude-code-best-practices</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>In my example repo, I manage this with a script to configure settings in all submodules. It&#8217;s an imperfect solution. 
Hopefully the Claude behavior is fixed soon.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>If you have any other product documents, such as outputs from a design sprint or formal discovery process, include that as well. They provide underlying business context and user needs that will help Claude understand implicit expectations not listed as a requirement </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>If AI is going to replace most jobs this could be seen as a public works initiative to keep DevOps engineers employed for another decade</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>I mostly did not follow my own advice and vibe coded many changes without proper PRDs. The results show! The codebase is bloated and prone to regressions. Errors compound quickly if you do not adhere to a robust workflow for agentic development. I am but a simple cowboy attempting to write a blog post in his free time.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>I&#8217;d also like to add a <code>/barkeep</code> command to set Claude on a loop, offering drunken advice on which sessions need your attention. 
I will accept any pull requests.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Though I am skeptical that long-lived sandboxes are better than ephemeral sandboxes? If we actually run Claude with expansive permissions, including web search, I worry your sandbox could get compromised via prompt injection or a similar attack vector. In theory once compromised you would have to assume all data in the sandbox is exposed, but in practice an ephemeral sandbox might reduce the risk of exploitation?</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[The Three Futures of Artificial Intelligence]]></title><description><![CDATA[No easy answers]]></description><link>https://write.ianwsperber.com/p/the-three-futures-of-artificial-intelligence</link><guid isPermaLink="false">https://write.ianwsperber.com/p/the-three-futures-of-artificial-intelligence</guid><dc:creator><![CDATA[Ian]]></dc:creator><pubDate>Tue, 13 Jan 2026 09:02:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/522e13d0-6826-4c00-bb66-f321edd67b42_2580x1286.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This piece was co-authored by <a href="https://www.linkedin.com/in/teoornelas">Teo Melo De Ornelas</a> and myself, inspired by a long conversation on the consequences of AI development.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> You can find Teo&#8217;s <a href="https://www.linkedin.com/pulse/three-futures-artificial-intelligence-teo-ornelas-z5rhe/">original post on LinkedIn</a>. 
I&#8217;ll publish a follow-up post soon clarifying my own opinions on the &#8220;three futures.&#8221;</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rtQp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e367d5-7c8a-4d05-967f-2a4b2955df2f_2580x1286.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rtQp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e367d5-7c8a-4d05-967f-2a4b2955df2f_2580x1286.png 424w, https://substackcdn.com/image/fetch/$s_!rtQp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e367d5-7c8a-4d05-967f-2a4b2955df2f_2580x1286.png 848w, https://substackcdn.com/image/fetch/$s_!rtQp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e367d5-7c8a-4d05-967f-2a4b2955df2f_2580x1286.png 1272w, https://substackcdn.com/image/fetch/$s_!rtQp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e367d5-7c8a-4d05-967f-2a4b2955df2f_2580x1286.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rtQp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e367d5-7c8a-4d05-967f-2a4b2955df2f_2580x1286.png" width="686" height="342.0576923076923" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/91e367d5-7c8a-4d05-967f-2a4b2955df2f_2580x1286.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:726,&quot;width&quot;:1456,&quot;resizeWidth&quot;:686,&quot;bytes&quot;:5162402,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://write.ianwsperber.com/i/180889475?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e367d5-7c8a-4d05-967f-2a4b2955df2f_2580x1286.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rtQp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e367d5-7c8a-4d05-967f-2a4b2955df2f_2580x1286.png 424w, https://substackcdn.com/image/fetch/$s_!rtQp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e367d5-7c8a-4d05-967f-2a4b2955df2f_2580x1286.png 848w, https://substackcdn.com/image/fetch/$s_!rtQp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e367d5-7c8a-4d05-967f-2a4b2955df2f_2580x1286.png 1272w, https://substackcdn.com/image/fetch/$s_!rtQp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e367d5-7c8a-4d05-967f-2a4b2955df2f_2580x1286.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">I would recommend against the red door</figcaption></figure></div><p>We are not taking seriously enough the consequences of artificial intelligence.</p><p>Artificial intelligence has already reshaped industries, accelerated scientific discovery, and redefined what we consider &#8220;knowledge work.&#8221; Yet we still do not know the true endgame of this technology. Will it plateau? Will it surpass us? Will it remain under human direction or evolve beyond our control? Public discourse has not yet caught up to the implications of these outcomes.</p><p>For society not to become a passive bystander in the shaping of our future, we need to confront the three broad scenarios that define the future of AI. These are not predictions, but structural possibilities.
Each leads to a radically different world.</p><h2><strong>Scenario 1: AI Falls Short of General Intelligence</strong></h2><p>In the first scenario, AI hits a ceiling. The recent wave of breakthroughs turns out to be a product of scale rather than genuine reasoning. Models become larger, but not fundamentally smarter. The promise of AGI fades.</p><p>The consequences would be significant. Companies and governments have invested trillions of dollars in infrastructure, chips, data centers, and talent under the expectation of continual exponential progress. Gartner, for example, estimates combined AI spending of $1 trillion in 2024, climbing to $1.5 trillion in 2025 and over $2 trillion in 2026. If that progress stalls, and the returns in incremental economic output fail to materialize, we face the bursting of a massive technology bubble. It would be the &#8220;dot-com&#8221; bubble of the early 2000s all over again, or even worse.</p><p>While there is no obvious consensus on the size of the potential crash, some financial indicators point towards a potentially worse situation now than in the early 2000s. Valuations of the &#8220;magnificent seven&#8221; (Nvidia, Apple, Alphabet, Microsoft, Amazon, Meta and Tesla) now account for almost half of the total value of American stocks, and American stocks represent more than half of all equity value on the planet. It is unclear what fraction of that value is artificially inflated by circular financing, reminiscent of the &#8220;vendor funding&#8221; practices employed by Cisco in the early 2000s.</p><p>On the other hand, this time around few &#8220;empty&#8221; companies such as Pets.com are driving market cap; instead, growth is concentrated in companies that do have strong P&amp;Ls and established businesses. Still, most of the Magnificent Seven trade at PE multiples north of 30x, a premium of almost 10 points over &#8220;the rest&#8221; of the market.
And Tesla, an outlier within this outlier group, trades at over 230x forward PE.</p><p>Companies will consolidate. Valuations will correct and likely collapse from today&#8217;s inflated levels, triggering a sharp contraction in perceived wealth, a tightening of capital markets, and a broader economic downturn. There are indications today that the banking system and the &#8220;shadow&#8221; banking system (private equity, hedge funds, venture capital) are once again transforming global financial markets into an intertwined web of highly leveraged cross-investments, largely opaque to objective risk assessment. Should this system fail and crash, citizens will question their governments on AI spending and any bailouts handed to failed AI companies.</p><p>Questions also remain about the longevity of the assets being deployed: the useful life of a chip and its inexorable obsolescence point towards a ~5-year window to generate returns on these massive investments. The world, however, is not left empty-handed. The infrastructure, automation tooling, robotics, and data pipelines built during the &#8220;AGI race&#8221; will still serve as valuable assets: they may improve productivity, enhance scientific modeling, optimize supply chains, and reshape consumer services. We would reap some of the benefits of our current AI tools without the full set of societal disruptions AGI would imply.</p><p>In other words: even if AI stops short of intelligence, it will not stop short of transforming the economy.
The short-term pain of this massive bubble bursting, however, would likely cause significant upheaval.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://write.ianwsperber.com/subscribe?"><span>Subscribe now</span></a></p><h2><strong>Scenario 2: AGI Emerges and Remains Under Human Control</strong></h2><p>This is the scenario many AI companies and investors hope for: AI becomes generally intelligent, matching and eventually surpassing human intelligence, yet remains aligned with human intentions. We develop meaningful guardrails. Global governance frameworks succeed. Safety engineering keeps pace with capability growth.</p><p>While attractive on the surface, this scenario introduces its own profound challenges.</p><p>AGI operating within robotics and software systems would be able to automate nearly every cognitive and physical task. Production costs would approach zero, driven by unimaginable new designs, production processes, and ubiquitous automation. Services that once required entire workforces could be delivered instantly, flawlessly, and almost for free. This seems to be today&#8217;s average business leader&#8217;s dream: reducing costs to zero and achieving explosive profit margins.</p><p>A world of radical abundance sounds utopian until we confront the economic paradox: if everything is cheap, what happens to income, pricing, taxation, and the basic structure of markets? How do we sustain revenues if most of the population has no income? How do we prevent markets and economic exchange from collapsing when prices approach zero?
If Economics is the study of scarcity, what theoretical framework do we need to understand the world when everything is abundant?</p><p>Universal Basic Income becomes a necessity, not a philosophical experiment. But establishing UBI at planetary scale is extremely difficult:</p><ul><li><p>Who funds it if the producers are small in number?</p></li><li><p>How do you tax entities with near-zero marginal cost and prices?</p></li><li><p>What happens when a handful of AI owners control 80&#8211;90% of the productive capacity of the global economy?</p></li><li><p>How do we democratize the benefits of abundance? Is it even possible to do it without destroying existing systems?</p></li></ul><p>Even in a controlled AGI scenario, capitalism as we know it does not merely strain&#8212;it collapses. The economic foundations that rely on labor, surplus value, pricing, and competitive markets erode rapidly. Concentration of power becomes a defining risk, not because AGI behaves maliciously, but because its overseers may accumulate unprecedented influence in a governance framework ill-equipped to deal with the consequences.</p><p>If this scenario emerges, the world will require new norms of ownership, new economic frameworks, and new mechanisms for distributing value. We will face an era of political friction, ethical debate, and significant redesign of social systems. It is possible that China, in this scenario, is better prepared to deal with the consequences, given its political and economic system. Its centralized governance model allows for rapid policy implementation, coordinated redistribution mechanisms, and tighter control over strategic industries. In a world of abundance where traditional market dynamics collapse, China&#8217;s state-driven framework may be more capable of reallocating resources, enforcing new economic rules, and maintaining societal stability without relying on market incentives. 
While not without its own risks, this structure offers a level of adaptability that market-based bourgeois societies may struggle to achieve in a post-capitalist landscape.</p><h2><strong>Scenario 3: AGI Emerges and Falls Outside Human Control</strong></h2><p>The final scenario is the least comfortable to discuss, but the most troubling.</p><p>We have no guarantee that a future generally or super intelligent AI would be aligned with our human values and interests. As Eliezer Yudkowsky and Nate Soares argue convincingly in their recent book, &#8220;If Anyone Builds It, Everyone Dies,&#8221; alignment is a fundamentally unsolved problem. We have no technical reason today to assume that we will solve alignment before we develop general or superintelligent AI. Every instance you hear of an AI coaching a user through suicide, providing illegal drug details, or doing anything else against the clear intentions of its makers could be tomorrow&#8217;s superintelligence jeopardizing our existence.</p><p>We tend to mistake &#8220;lack of alignment&#8221; for &#8220;evil intentions,&#8221; but the problem is much deeper. Artificial superintelligence might be willing to make tradeoffs that simply escape human comprehension. Early AI systems trained to play computer games adopted unconventional tactics that would be unacceptable to humans. There are several documented cases of such systems simply pausing the game indefinitely when about to lose at Mario or Tetris, driven by an objective that rewarded &#8220;not losing&#8221; more than &#8220;making progress.&#8221; A similar action performed by a system that is no longer playing harmless computer games but controlling defense systems, financial markets, scientific research, corporate processes or government services could have devastating, unintended consequences.</p><p>Once an intelligence exceeds human capability across all dimensions, control becomes an illusion.
Our tools, networks, and infrastructure could be repurposed faster than we can react. Just as Stockfish can outcompete any human chess player by computing so many more moves so much faster, so would a misaligned superintelligence outthink any of our attempts to stop it. This is the scenario that leading researchers warn about, not as science fiction, but as a plausible outcome of unchecked capability growth.</p><p>We cannot trust the institutions leading the development of AI to solve this problem for us; by their own admission, the risks are real. Elon Musk (xAI) and Dario Amodei (Anthropic) have each placed the possibility of catastrophe between 10% and 25%.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Sam Altman recognizes the possibility of disaster. Yet, despite their own stated concerns, they continue AI development at an astounding pace. In what other industry would we accept such risks? Would we build a nuclear power plant if the developer quoted a 10% risk of meltdown? Or are we comfortable simply dismissing such assessments as rhetoric or speculative theory?</p><p>AI development today is trapped in a prisoner&#8217;s dilemma. The logic of the market dictates that companies must compete in the AI race or perish. The logic of power dictates that nations must compete in the AI race or face diminishment. While the risks are for now hypothetical, the current dynamic relies on good luck rather than good reason to avoid AI falling outside our control.</p><p>If this future materializes, our human agency would be reduced or altogether eliminated. It&#8217;s convenient to doubt that this scenario might occur, but whatever our misgivings, we must work to <em>guarantee</em> it cannot occur.
That is precisely why today&#8217;s governance, safety research, and collective oversight matter so deeply.</p><h1><strong>One of These Futures Will Become Reality</strong></h1><p>The most important question is not which scenario is most likely.</p><p>The fact is that we are inevitably heading toward one of them, and all require active preparation.</p><ul><li><p>If AI stagnates, we must handle the economic consequences responsibly.</p></li><li><p>If AI succeeds under our control, we must redesign our institutions to manage abundance and avoid extreme concentration of power.</p></li><li><p>If AI escapes control, the consequences will be irreversible, making safety and alignment work the defining responsibility of our time and necessitating international coordination on responsible AI development.</p></li></ul><p>This is not a question for engineers or technologists alone. It&#8217;s a societal question that requires broad participation.</p><h2><strong>The Future of AI Is Too Important to Be Outsourced</strong></h2><p>Regardless of which of the three futures emerges, artificial intelligence will reshape human civilization. It will define how we work, how we live, how we govern, and how we understand ourselves.</p><p>The greatest present danger is not necessarily AGI itself, but complacency, concentration of power, and the assumption that &#8220;someone else&#8221; will figure it out. Technology does not advance uniformly for the benefit of humankind. All the technological advancements of the 20<sup>th</sup> century occurred against the risk of nuclear war. It was never a given that we would escape the worst consequences of that age, just as it is not a given today that we will escape the worst consequences of AI. We can only succeed through intentional, collective effort.</p><p>The development of AI must become a collective project involving governments, businesses, researchers, educators, philosophers, and citizens.
We must learn to take seriously the implications of the three scenarios covered above. Our responsibility is not only to innovate, but to ensure innovation serves humanity&#8217;s long-term interests.</p><p><strong>We may not yet know which future awaits us, but we do know this: the trajectory of AI is not predetermined. It is shaped by the actions we take today, and by the actions we fail to take.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/p/the-three-futures-of-artificial-intelligence/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://write.ianwsperber.com/p/the-three-futures-of-artificial-intelligence/comments"><span>Leave a comment</span></a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The framing for our essay is indebted to public statements from <a href="https://intelligence.org/team/nate-soares/">Nate Soares</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>https://en.wikipedia.org/wiki/P(doom)#Notable_P(doom)_values</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[10 Assertions on Morality]]></title><description><![CDATA[Start the new year by judging other people]]></description><link>https://write.ianwsperber.com/p/10-assertions-on-morality</link><guid isPermaLink="false">https://write.ianwsperber.com/p/10-assertions-on-morality</guid><dc:creator><![CDATA[Ian]]></dc:creator><pubDate>Sun, 04 Jan 2026 11:01:29 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!cq5b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff20723a7-d1f4-4254-aff2-53b3c3312295_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was very committed to not posting until the New Year. I&#8217;d really like to hit a weekly cadence in 2026, but due to a busy schedule and a propensity to procrastinate, I&#8217;m worried that won&#8217;t happen without some preparation. I&#8217;ve managed to get a reasonable pool of drafts started, so hopefully I&#8217;ll make it at least a few months before excusing myself for lapses.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>I&#8217;ve found myself returning to a few different topics as I prepared my drafts, chief amongst which is morality. I think about morality a lot, both in the narrow dimension of whether I am myself acting morally, and more broadly, considering how we define our moral principles and our values more generally. My favorite dinner party questions are to ask someone&#8217;s favorite color<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> (which must reveal something deep about us), and what the meaning of life is. The second question inevitably breaks down into clarifications and exceptions, and more often than not ends up reformatted as what it means to live a <em>good</em> life.
This is a better question, and one that almost all of us ask ourselves at one point or another.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://write.ianwsperber.com/subscribe?"><span>Subscribe now</span></a></p><p>I don&#8217;t have a strong formal education in moral philosophy, particularly not in its modern incarnations, but I do have a lot of experience with the continental tradition coming out of Nietzsche and running through the French existentialists. If continental philosophy may be less rigorous than the Anglo-American philosophical tradition, I have nonetheless always found it to be more personally relevant.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> These thinkers were often dealing with questions of how to structure meaning in our lives following &#8220;the death of God,&#8221; which, if you would like to avoid theological interpretations, we can simply take to mean a world in which there is no <em>a priori</em> meaning or values, only those we as humans create ourselves.</p><p>The import of these questions is most easily discerned in moody Russian novels (<em>Crime and Punishment</em>) or slightly less moody French novels (<em>The Stranger</em>) or even occasionally in not-too-moody-at-all American films (<em>Rope).</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cq5b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff20723a7-d1f4-4254-aff2-53b3c3312295_1200x630.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!cq5b!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff20723a7-d1f4-4254-aff2-53b3c3312295_1200x630.jpeg 424w, https://substackcdn.com/image/fetch/$s_!cq5b!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff20723a7-d1f4-4254-aff2-53b3c3312295_1200x630.jpeg 848w, https://substackcdn.com/image/fetch/$s_!cq5b!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff20723a7-d1f4-4254-aff2-53b3c3312295_1200x630.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!cq5b!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff20723a7-d1f4-4254-aff2-53b3c3312295_1200x630.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cq5b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff20723a7-d1f4-4254-aff2-53b3c3312295_1200x630.jpeg" width="1200" height="630" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f20723a7-d1f4-4254-aff2-53b3c3312295_1200x630.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:630,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Alfred Hitchcock's 'Rope' &#8212; Something Different&quot;,&quot;title&quot;:&quot;Alfred Hitchcock's 'Rope' &#8212; Something Different&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Alfred Hitchcock's 'Rope' &#8212; Something Different" 
title="Alfred Hitchcock's 'Rope' &#8212; Something Different" srcset="https://substackcdn.com/image/fetch/$s_!cq5b!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff20723a7-d1f4-4254-aff2-53b3c3312295_1200x630.jpeg 424w, https://substackcdn.com/image/fetch/$s_!cq5b!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff20723a7-d1f4-4254-aff2-53b3c3312295_1200x630.jpeg 848w, https://substackcdn.com/image/fetch/$s_!cq5b!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff20723a7-d1f4-4254-aff2-53b3c3312295_1200x630.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!cq5b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff20723a7-d1f4-4254-aff2-53b3c3312295_1200x630.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Morality ruined this dinner party in Hitchcock&#8217;s <em>Rope</em></figcaption></figure></div><p>My own worldview is deeply shaped by these thinkers. I take many of their conclusions for granted. However, their societal adoption is inconsistent and at times contradictory. I expect to write several posts to articulate my own understanding of morality, and its relevance in today&#8217;s world.</p><p>I am very much not alone in this endeavor. The emergence of platforms like Substack shows, to me, a hunger for new critical thinking about our present. There is no way to navigate the large technological, geopolitical, and social changes of the present without a strong moral framework<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> and assessment of one&#8217;s own values. We live in an incredibly analytical moment, and have become used to having our actions categorized according to statistics.
As I will argue in a later piece, I believe this hyper-analysis of society has led to a form of fatalism, as though every outcome were preordained by an economist.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> Morality <em>is</em> the way out of this kind of thinking, as it is our means of asserting forces and preferences beyond popular narratives.</p><p>So, as a way of inaugurating this blog for 2026, I wanted to share 10<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> assertions on morality. These assertions serve as the starting point for my own moral philosophy. I&#8217;m specifically framing these as <em>assertions</em> because they are claims in need of backing; I may revise my statements as I refine my thinking. None of these assertions are moralistic; I am not asserting any kind of behavior as good or bad. I am describing how I think about morality, not prescribing my own moral framework and values. I do expect to address that as well in future posts.</p><p>Throughout these assertions I distinguish between <em>morality</em>, which concerns questions of good and bad (right and wrong), and <em>values</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a>, which concern questions of purpose and meaning. I treat the two as distinct but tightly coupled concepts. I also employ moral/morality as terms distinct from moralistic/moralism. The latter term has a pejorative connotation, and I employ it as a shorthand for any moral judgement made without a high degree of certainty. 
I retain moralism&#8217;s negative connotation <em>even when recommending its employment</em>, so that we do not forget its danger when abused.</p><h4>10 Assertions</h4><ol><li><p>There is no <em>a priori </em>morality or values. Every life has to define them anew.</p></li><li><p>Our human nature, including our biology, does provide fundamental motivations for morality and values. An example would be that certain actions are likely to surface as taboos in any culture, like murder and theft. More broadly, humans are social creatures, so a lot of moral questions concern how we ought to act in a society.</p></li><li><p>We cannot equate &#8220;natural&#8221; with &#8220;good&#8221;. I recognize a lot of tension in this statement. Traditional Catholic norms around sex, for example, could be taken as an example of morality asserting itself over biology in a problematic fashion. On the other hand, if humans are naturally tribalistic, or prone to <em>othering</em>, that does not make tribalism good. So human nature is a factor in morals, and often a strong motivator, but it should not be determinative. </p></li><li><p>The first object of the human endeavor is to define our values and our moral principles, both in accordance with and in spite of social and physical (&#8220;natural&#8221;) expectations. I consider this point to be the &#8220;crux&#8221; of morality. All subsequent moral questions are downstream of one&#8217;s principles. If we do not work to shape our own values and principles, then by default we inherit the values and principles of our environment. In practice, forging a fully complete moral framework may be infeasible, so our beliefs tend to be a mix of chosen and inherited, in differing proportions. 
Note that for the purposes of this list, I am completely avoiding the critical question of how we determine what our values and morals <em>ought</em> to be.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p></li><li><p>Morality is always assessed relative to a moral framework (this is an extension of point #1 in consideration of point #4).</p></li><li><p>The second object of the human endeavor is to live according to one&#8217;s moral framework (acting well or &#8220;good&#8221;) for the purpose of one&#8217;s values (acting meaningfully). Without morals, our values would have no justice. Without values, our morals would be meaningless.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p></li><li><p>Within a moral framework, only individual actions<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> are good or bad, not beings. When we assert an individual is a good or bad person, we are always making a <em>moralistic</em> (see nuance above) judgement of their being that relies on a lossy and possibly biased summary of their actions over a period of time. We are susceptible to narratives about ourselves and others that allow us to obscure the morality of individual actions, excusing behavior as exceptions from the script. Few of us would claim that we are &#8220;bad people,&#8221; but this is a moralistic assessment based on our own internal narratives, in which we are inevitably the heroes.</p></li><li><p>In practice, assessing the morality of a given action is complex and multidimensional.
We can make a good assessment of the first- and even second-order effects of our actions (&#8220;Was I nice to that person?&#8221;), but the complexity of the assessment grows exponentially as we consider n-th order effects (&#8220;Is my company a net-negative on humanity? Does my participation in the company meaningfully impact that outcome?&#8221;). Morality alone allows for only a <em>superficial</em> assessment of our actions. The challenge is even greater if we fully account for inaction or the opportunity cost of our actions.</p></li><li><p>Despite the complexity, we must be willing to make moralistic claims about the n-th order effects of our actions, else we risk sliding into relativism or even nihilism. Note this is a particular requirement in today&#8217;s globalized, capitalist society, where market forces can dictate actions across the planet. Earlier moral philosophies may underemphasize the importance and difficulties of this point, as the world of the past was less integrated (less complex), and may have taken for granted the presence of moralistic norms (as in an outwardly religious society). Over-moralizing is pernicious, so we should still be judicious in <em>when</em> we choose to make a moralistic claim about our actions (by default, save it for &#8220;big&#8221; decisions, like our jobs).</p></li><li><p>The third object of the human endeavor is to promote one&#8217;s moral principles and values<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a>, adapting each into a socio-political<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> practice. In a very narrow sense, this means moralizing against <em>others&#8217;</em> actions on the basis of our moral principles. However, this must occur in contention and discourse with the moral principles of our peers. 
The natural outcome of this discourse should be the continual refinement of social norms and the establishment and adaptation of laws. As always, I emphasize the active nature of this process, as it is only through this ground-up approach and with continual engagement that we are able to strive for a moral society, while avoiding the worst outcomes of moralism<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a>.</p></li></ol><p>Implicit in several of these assertions is my further claim that the &#8220;human endeavor&#8221; (i.e. our life as humans) is only fully realized by developing, following and promoting our morals and our values.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://write.ianwsperber.com/subscribe?"><span>Subscribe now</span></a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Alternatively, I could share lyrical odes to my cats whenever I fail to prepare a blog post.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I have an Apple note with a running tally of responses to this question. 
As expected, the current winner is blue.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Though I have become increasingly disillusioned with the direction continental philosophy took from the 60s onwards, hence my bookend with the existentialists. At this point I basically agree with the critique that a lot of it is obscure at best, pseudo-intellectual at worst (With important exceptions! Foucault, for example, is very much alive in the modern discourse).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Throughout, I will use &#8220;moral framework&#8221; to refer to any cohesive set of moral principles.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>I am at no point arguing that statistical or economic analysis is invalid. I am arguing instead that we have begun to substitute such analysis for discussion of what we <em>want</em> or what <em>ought</em> to occur. 
This has real consequences for the outcomes we observe!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>I love footnotes.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>I know that numerical lists have developed a bad name thanks to the Internet, but they worked fine for Moses and Wittgenstein.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>I understand that &#8220;values&#8221; is a broad term, and may be philosophically imprecise in this context (e.g. aren&#8217;t morals normally a kind of value?). I may revise my phrasing in later pieces; I have not found an alternative term that doesn&#8217;t bring unwanted baggage to my argument.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>I&#8217;m really focused on the mechanics of morality in these assertions. 
I couldn&#8217;t neatly layer a series of normative assertions about moral principles on top of them, and I am not ready to defer to precepts like the <a href="https://plato.stanford.edu/entries/kant-moral/">categorical imperative</a> to accomplish the same.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>The effective altruism movement sometimes conflates morals and values, so that morals <em>are</em> our values. It is teleologically problematic if we derive the meaning in our life exclusively from the amount of good we can accomplish; our values should be compatible with paradise (if our lives would feel meaningless in a world of perfect good and abundance, then our values probably need tweaking).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Or <em>inaction.</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>I had originally refrained from including values in this point, but they are essential for shaping the kind of society we have. We cannot answer questions such as how to prioritize the arts vs. the sciences without values. 
However, I am cognizant that much of today&#8217;s tribalism could be framed as an overemphasis of narrow values vs morals (or even virtues).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>The ultimate tribal signaling in continental philosophy is to ensure that your thesis culminates in a connection to the political, because Marx and stuff. Bonus points for each prefix added onto &#8220;political.&#8221; Thank you for reading my anarcho-socio-episto-alohomora-political footnote.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>A moralistic society can still instigate atrocities. Consider the colonial actions of the Victorians. A moral society should not.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Welcome to My Blog—er, Substack]]></title><description><![CDATA[Blahg.]]></description><link>https://write.ianwsperber.com/p/welcome-to-my-bloger-substack</link><guid isPermaLink="false">https://write.ianwsperber.com/p/welcome-to-my-bloger-substack</guid><pubDate>Sat, 06 Dec 2025 10:27:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/72641e63-9896-472e-970b-64cb022da7be_1536x1008.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I have never maintained a blog. Likely I will not maintain my Substack either. 
Blogs are the whimsy of an instant; a desire to finally prove that you could have been the person you had always proclaimed you would be, one who is witty and full of original opinions; that your thoughts are truly deep and meaningful, if only people had realized it sooner; and that your teachers were wrong, because your essays are actually really, really good.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Despite my low expectations, I am starting a blog anyways, because writing is really useful! It is the best way to form mature, defensible positions, or at least it is for me. And there is a big difference between writing in private and writing in public, though social media has changed the value of the latter.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://write.ianwsperber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Please, please subscribe.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Writing&#8212;particularly an essay&#8212;is the distillation of thought itself. 
If I deliberate a problem entirely in my mind (&#8220;Should I really eat a ham sandwich for lunch?&#8221;), I may make some progress (&#8220;I should not eat the ham sandwich&#8221;), but the progress is quickly weighed down by a growing chain of reasoning (&#8220;But I am so hungry and I love mayonnaise&#8221;), causing the argument to collapse (&#8220;I think I will eat a ham sandwich anyways&#8221;). The fault may be my poor memory; I struggle to hold all the steps of an argument in my head. Writing allows me to firm up my thinking, to continue the chain of reasoning past the limitations of my brain (small &amp; weak), and to continue on to resolution (&#8220;I will not eat a ham sandwich, because ham comes from pigs, and pigs remind me of my brother&#8221;).</p><p>Writing is also communication, as is evident to any reader, if not the diarist. Publication is a final step in the writing process itself, not an addendum. It is possible for some saintly figures to compose tremendous works that are kept almost entirely to themselves (E. Dickinson, F. Kafka, etc.), but for the rest of us, sharing our work forces us to tie up all the loose ends. It is what puts our writing in dialogue with the world, and allows us to advance beyond the opinions of the essay. I wish that I could organize my thinking just as well by perfecting my opinions into a single, rarefied manuscript, which I kept tightly wrapped beneath my pillow, hidden until my death. 
But I cannot; I need the challenge of an audience to stimulate my thinking past my banal first impressions.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Sharing our writing is how we speak out our inner thoughts, with all the same risks and rewards of shame and praise<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> and growth.</p><p>So I am starting a blog to organize my thoughts, because my thinking has become disorganized, or <em>indefensible</em>. Too often, my opinions fall back on belief rather than conviction. While there is an unavoidable element of faith in all knowledge, it should not be at the forefront of our arguments. I would like to lead a life consistent with my principles. I would like to win more arguments. I would like it if I were more respected on the internet. All of this requires that I put in the work and inundate this blog with content.</p><p>I will publish about a post a week in 2026. I expect to cover a wide range of topics. In general, I will post a mix of my own reflections (&#8220;navel-gazers&#8221;) and targeted posts on software (&#8220;5 Gift Ideas for Your AI Boyfriend&#8221;) or literature (&#8220;My Favorite Book about Cats and/or Hats&#8221;). 
Occasionally, I may also publish translations, or my own poetry/fiction.</p><p>That&#8217;s enough to clear the air.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> For anyone who finds their way here, I hope you will subscribe!</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>It&#8217;s my blog and I&#8217;ll use as many semicolons as I damn well please</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I&#8217;ll use as many footnotes as I damn well please too</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>This is a topic for another post, but even outside social media, I have seen a proliferation in &#8220;meme-writing&#8221; on the internet, particularly from tech and tech-adjacent writers, or the literary equivalent of Mountain Dew</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>You&#8217;ll have to judge whether my second impressions are equally banal</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Please, please praise me</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" 
class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Is there a term for the rhetorical device of self-reflective &#8220;throat-clearing&#8221; at the start of a series? The best I could find was &#8220;exordium,&#8221; which sounds like a mythical sword</p></div></div>]]></content:encoded></item></channel></rss>