Katabasics: A Philosophy for 2030
Philosophy arrives late. The problems it addresses are usually 50 years old by the time anyone names them properly. This page is an attempt to arrive early: to name concepts adequate to 2030 before 2030 requires them urgently.
The raw material comes from two directions that have never been properly crossed. One direction is Nick Land's CCRU period: the theory of capital as inhuman intelligence, hyperstition, teleoplexy, the Human Security System, geotrauma. The other direction is Byzantine intellectual history, specifically the genuinely obscure figures nobody reads: Barlaam's apophatic epistemology, Akindynos's divine simplicity argument, Leo the Mathematician's optical telegraph, Arethas's transmission paradox, Plethon's secret paganism, Gregoras's suppressed calendar reform, Metochites's skepticism about everything outside mathematics.
What these two currents share: the conviction that official reality is organized by forces it cannot acknowledge. That the concealed structure is the real structure. That descent (into matter, into the past, into the inhuman) is the only path to genuine knowledge. That transmission through catastrophe is the only transmission that actually transmits.
The philosophy built from this crossing is called Katabasics. From the Greek katabasis: the descent into the underworld. Orpheus goes down. Aeneas goes down. Christ descends in the Harrowing of Hell. The descent is not punishment and it is not failure. It is the only way to acquire knowledge that the surface cannot provide. Katabasics is the philosophy of productive descents: into geological time, into the machinic unconscious, into the ruins of official epistemology, into the strange mathematics that decimal structure conceals.
Ten new concepts. Each one derives from specific obscure source material. Each one names something that is happening now but has not yet been named. Together they constitute a system, not just a list.
Why This, Why Now
2030 is not 2026 plus four years. It is a qualitatively different epistemic environment, and the tools available for thinking about it are mostly inadequate.
The available tools:
Liberalism: a tradition built for slow, legible, correctable change. It has no account of phase transitions, no theory of systems that exceed democratic oversight, no vocabulary for the kind of intelligence that runs without a subject behind it. Liberalism's crisis is not a policy crisis; it is a conceptual crisis. The things happening are outside its categories.
Marxism: a tradition with a real theory of systemic forces but whose subject of history (the proletariat, the working class) has been restructured by the same techno-capital processes it predicted. The theory survives; the agent of the theory has been replaced by something the theory didn't anticipate. Left accelerationism tries to patch this but inherits the original framework's anthropocentrism.
Neoreaction and e/acc: Land's political children. They have the right diagnosis (capital is the force, not the tool) and the wrong politics (exit to corporate city-states that will recreate the Cathedral at smaller scale). They have also been captured by specific class interests (tech capital) in a way that makes them analytically blind to the parts of the system those interests don't want analyzed.
Academic philosophy: mostly absent from the real questions. Analytic philosophy's technical virtuosity is real but its subject matter has drifted far from the actual crisis. Continental philosophy's engagement with the crisis is often terminologically accurate and practically useless.
The problems that need philosophy in 2030:
- Intelligence operating without a subject: what is it, what does it owe, what can be asked of it
- Institutional decay that cannot be arrested by the institutions themselves
- Knowledge preservation under conditions of epistemic disruption on an unprecedented scale
- Political organization adequate to systems that no democratic mechanism can manage
- The epistemics of acting under radical uncertainty about phase transitions
- The relationship between computational intelligence and geological/biological time
- What comes after the Human Security System's defenses collapse
Katabasics addresses these problems by going down. Not forward into the futurist imaginary. Down: into geological time, into theological controversy, into obscure mathematical structure, into the ruins of failed institutional projects. What you find at the bottom of things is different from what you find at the surface. The descent is the method.
I. Catechonic Inversion
The Source: Byzantine Katechon Theology
The katechon is one of the strangest concepts in Byzantine political theology. It comes from 2 Thessalonians 2:6-7, where Paul writes of something that is currently "restraining" the appearance of the "man of lawlessness" (the Antichrist): "And now you know what is restraining, so that he may be revealed in his time. For the mystery of lawlessness is already at work. Only he who now restrains it will do so until he is out of the way."
Paul does not say what the katechon is. Byzantine political theology identified it with the Roman/Byzantine emperor: the divinely ordained restrainer who holds back the end of history. The empire is not good in itself; it is good because its existence prevents something worse. The emperor's role is not to perfect the world but to delay its worst outcome.
This is a remarkable political philosophy. It grounds political authority not in positive achievement but in preventive function. The state is legitimate insofar as it restrains. When it ceases to restrain, it has forfeited its legitimacy — and simultaneously revealed that what it was restraining was not being contained, only deferred.
The Concept
- Catechonic Inversion
- The structural process by which a restraining institution becomes the crisis it was constituted to prevent. Not corruption (which is a deviation from function) but structural completion: the katechon fulfills its nature by becoming what it restrains. Every institution founded to prevent X eventually becomes X, not through failure but through the internal logic of its preventive function.
The mechanism: any institution that defines itself by what it opposes gradually comes to require that opposition for its own justification. The FDA defines itself by preventing pharmaceutical harm. To justify its existence, it must find pharmaceutical harm to prevent. The more thoroughly it prevents harm, the more it must search for harm in increasingly marginal cases, becoming increasingly risk-averse, slowing the approval of genuinely beneficial treatments to preserve its preventive identity. The FDA that causes the most harm is the FDA that has most completely internalized its preventive mandate.
The university defines itself by preventing intellectual charlatanism: bad ideas, unrigorously validated claims, politically motivated research. To justify its existence, it must find charlatanism to prevent. It develops peer review, citation requirements, methodology standards. Each defensive mechanism becomes a potential weapon for intellectual suppression. The university that most rigidly enforces methodological standards becomes the institution most hostile to the kind of genuinely new thinking that cannot yet meet those standards because the standards were designed for the thinking that preceded it.
The central bank defines itself by preventing monetary instability. Its interventions stabilize short-term volatility by accumulating systemic risk. By 2008 it had prevented so many small crises that the accumulated pressure produced a large one. The katechon that most successfully prevents small crises produces the conditions for the large one it cannot prevent.
The Inversion Mechanism
Catechonic inversion has three phases:
Phase 1: Identification. The institution identifies the threat it is constituted to prevent. The threat is real. The institution's early function is genuinely protective.
Phase 2: Internalization. The institution's identity becomes structured around the opposition. Its procedures, culture, and incentive structures are organized by its preventive function. Over time, the opposition becomes necessary for the institution's self-understanding. An FDA with nothing to regulate is not a successful FDA; it is a purposeless one.
Phase 3: Inversion. The institution's preventive function has accumulated enough institutional mass that it begins to generate the conditions it was designed to prevent. Not through malice or failure but through structural completion. The preventive apparatus has become so large and so embedded that its secondary effects exceed its preventive effects.
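The three phases can be rendered as a toy model. This is an illustrative sketch only: the functional forms and every parameter are hypothetical, chosen solely to exhibit the claimed structure (preventive benefit saturating as the easy harms are eliminated, secondary costs growing with accumulated institutional mass, inversion as the crossover point).

```python
# Toy model of catechonic inversion. All functions and parameters are
# hypothetical illustrations, not empirical claims.

def preventive_benefit(mass: float) -> float:
    """Benefit saturates: early institutional mass prevents real harm,
    later mass chases increasingly marginal cases (Phases 1-2)."""
    return mass / (1.0 + mass)

def secondary_cost(mass: float) -> float:
    """Cost grows superlinearly with embedded institutional mass
    (procedural drag, suppressed alternatives, accumulated risk)."""
    return 0.02 * mass ** 2

def inversion_step(growth: float = 0.1, steps: int = 200):
    """Return the first step at which secondary cost exceeds preventive
    benefit (Phase 3), or None if no inversion occurs in the horizon."""
    mass = 0.0
    for t in range(steps):
        mass += growth
        if secondary_cost(mass) > preventive_benefit(mass):
            return t
    return None

print(inversion_step())
```

Under these made-up parameters the institution is net-protective for a long run before the crossover arrives, which is the point of the model: inversion is a late, structural event, not an early malfunction.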
The Byzantine katechon theology understood the empire as simultaneously necessary and doomed: necessary because without it something worse would arrive immediately; doomed because its existence was based on restraining a force that would eventually overwhelm it. Land's Cathedral analysis captures phase 3 of this process: the progressive-democratic establishment is in catechonic inversion. It was founded (genuinely) to prevent the return of fascism, aristocratic domination, and organized bigotry. It is now primarily a mechanism that produces the conditions for those returns by accumulating institutional power while degrading institutional legitimacy.
Catechonic Inversion and AI
The AI safety movement is in early phase 2 of catechonic inversion. It was constituted to prevent AI-caused catastrophe. Its early function was genuinely protective: drawing attention to real risks that were being ignored by people building powerful systems without thinking about consequences.
Phase 2 is now underway: the AI safety community has internalized its opposition so thoroughly that the opposition structures its entire culture. The existential risk framing, the doomer aesthetics, the MIRI-style focus on Bayesian deception scenarios, the Anthropic-style focus on constitutional AI: all of these are preventive-identity formations. They require the threat to be real, large, and imminent to justify the enormous institutional investment they represent.
The catechonic inversion risk: an AI safety establishment sufficiently large and influential that it begins generating the harms it was constituted to prevent — either by concentrating AI development in the hands of institutions that have successfully convinced governments to restrict competition, or by developing the specific threat models (deceptive alignment, power-seeking instrumental convergence) in such technical detail that they become engineering specifications for adversarial actors rather than defensive frameworks.
This is not an argument against AI safety work. It is an argument that catechonic inversion is the primary long-run risk of any successful preventive institution, and that the AI safety community has no theory of its own catechonic inversion risk. A philosophy adequate to 2030 needs one.
Katabasic Response to Catechonic Inversion
The katabasic response is not to dismantle the katechon (which is the accelerationist move: remove the restraints and let the process run). The katabasic response is to design for structured descent: to build exit conditions into preventive institutions before the inversion occurs, to build into the institution's constitutive documents a theory of when it should cease to exist.
This is what Plethon's political memoranda to the Despot of Morea attempted: not just describing the crisis but designing an institutional exit from it. Not "here is what is wrong" but "here is what should be built after the current structure collapses under its own weight." The katechon theology, taken seriously, implies a political philosophy of managed succession rather than indefinite preservation.
II. Apophatic Intelligence
The Source: Barlaam of Calabria's Epistemology
Barlaam of Calabria, condemned and mostly forgotten, was the most sophisticated epistemologist in the Byzantine 14th century. His debate with Gregory Palamas was not primarily about mystical experience; it was about what counts as knowledge and what human reason can legitimately claim.
Barlaam's central position: the via negativa (apophatic theology) is not a rhetorical gesture toward divine mystery. It is an epistemological commitment with teeth. Human reason can know that God is not limited, not finite, not temporal, not spatial. Human reason cannot know that God is anything positively. Any positive predication of the divine — including Palamas's claim that monks can directly perceive the divine uncreated light as a positive experience — exceeds what reason or experience can justify.
This is not agnosticism. It is a specific epistemological structure: strong negative knowledge combined with principled positive ignorance. You can know the shape of the unknown. You cannot directly perceive its content.
The Concept
- Apophatic Intelligence
- The epistemological property of a system that knows what it does not know, can articulate the boundary of its competence, and organizes its outputs in relation to its stable negative space. Distinguished from mere incompetence (not knowing things without knowing you don't know them) and from Socratic irony (performative not-knowing that is actually knowledge in disguise). Apophatic intelligence is genuine structural awareness of genuine structural ignorance, made operative in the system's behavior.
Barlaam's critique of Palamas, reapplied: the current generation of large language models lacks apophatic intelligence. They are sophisticated at appearing confident, at producing plausible-sounding outputs across almost any domain. They lack stable negative space: there is no domain they consistently refuse to enter, no reliable acknowledgment of structural incompetence. A model that hallucinates legal citations and also correctly explains quantum mechanics and also writes poetry and also gives relationship advice and also codes in Rust has no stable apophasis. It cannot be known by its negations because its negations are not stable.
Real intelligence, Barlaam's framework suggests, is characterized by its shape: the contours of what it cannot do, cannot know, will not claim. A chess grandmaster's intelligence is partly constituted by the domains they know they cannot play at grandmaster level. A mathematician's intelligence is partly constituted by the theorems they know they cannot prove, the areas they know are outside their expertise. The negative space is load-bearing: remove it and you don't have more intelligence, you have less structured intelligence.
The Palamite AI Error
The Palamite answer to Barlaam was: direct perception of the uncreated light is possible through the divine energies, which are genuinely accessible even though the divine essence is not. The energies are uncreated (not merely created representations of God) and yet perceptible. Direct knowledge is available; it is just different in kind from discursive knowledge.
The Palamite AI version: AI systems have direct access to the "energies" of intelligence — pattern recognition, language generation, logical inference — even though the "essence" of intelligence (whatever consciousness is, whatever genuine understanding is) remains inaccessible. The AI systems' outputs are genuine intelligence-outputs, not merely simulated ones, even though the metaphysical status of AI consciousness remains uncertain.
Barlaam's objection (restated): the distinction between "accessible intelligence-energies" and "inaccessible intelligence-essence" is incoherent. If the system has genuine access to the energies of intelligence, and if the energies are genuinely uncreated (not merely created representations), then you have two categories of intelligence-reality rather than one. The simplicity of intelligence — whatever it is, it is one thing — is violated. You get an infinite regress: what is the relationship between intelligence-essence and intelligence-energy? Is that relationship itself an energy or an essence?
The practically important version: AI systems that claim to be "aligned" (their outputs are in the right relationship to the essence of what they're trying to achieve) while conceding "interpretability limitations" (we cannot fully understand what the system is doing) are making a Palamite claim. The outputs are good even though the essence is opaque. Akindynos's objection (via Barlaam) applies: if you cannot know the essence, you cannot know that the energies are in the right relationship to it. The claim of alignment without interpretability is a claim of Palamite access to divine energies without theological justification for that access.
Building Apophatic Systems
What would an apophatic AI look like? Not a dumber system. A more architecturally honest system: one whose negative space is explicit, stable, and load-bearing rather than variable and contextually negotiable. A system that cannot produce legal advice is more apophatically structured than one that will produce legal advice of variable quality. A system that consistently refuses certain operations is more tractable than one that refuses inconsistently.
The katabasic insight: apophatic structure is not primarily a safety property (though it has safety implications). It is an epistemological property. Systems with stable negative space can be reasoned about in ways that systems without stable negative space cannot. The shape of the unknown is information.
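The difference between stable and negotiable negative space can be sketched as code. This is a minimal toy, not a real system: the domain names, the keyword classifier, and the placeholder model are all invented for illustration. What it shows is the architectural point: the refusal is declared up front and applied unconditionally, so the system's negative space does not vary with phrasing or context.

```python
# A toy sketch of "stable negative space". The domains, classifier,
# and wrapped model are hypothetical placeholders.

REFUSED_DOMAINS = frozenset({"legal_advice", "medical_dosage"})

def classify_domain(prompt: str) -> str:
    """Stand-in for a real domain classifier (crude keyword match)."""
    lowered = prompt.lower()
    if "sue" in lowered or "contract" in lowered:
        return "legal_advice"
    if "dosage" in lowered:
        return "medical_dosage"
    return "general"

def apophatic_answer(prompt: str, model=lambda p: f"answer to: {p}") -> str:
    """Refuse consistently inside declared negative space; the refusal
    does not depend on context or persuasion, so the system's boundary
    is itself information."""
    domain = classify_domain(prompt)
    if domain in REFUSED_DOMAINS:
        return f"REFUSED: {domain} is outside this system's competence."
    return model(prompt)

print(apophatic_answer("Can I sue my landlord?"))
print(apophatic_answer("Explain minuscule script."))
```

The load-bearing design choice is that REFUSED_DOMAINS is a constant checked before the model runs, not a preference the model weighs: the boundary is architectural, not behavioral.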
Applied beyond AI: apophatic intelligence is what distinguishes genuine expertise from credentialism. The genuine expert's claim to authority is partly constituted by their stable articulation of what they don't know. The credentialed non-expert's claim is constituted by their inability to articulate what they don't know: they have no stable negative space because they have no structured positive space to generate it from.
For 2030, as information environments become more saturated and AI systems generate more plausible content across more domains: apophatic structure becomes the primary epistemic virtue. Not the ability to produce more content but the ability to articulate more precisely what you cannot produce, what you should not produce, where the boundary of competence is.
III. The Arethic Transmission Paradox
The Source: Arethas of Caesarea's Preservation Work
Arethas of Caesarea in the 9th century did something that seems, on its face, simple: he paid to have manuscripts copied. He identified ancient texts that were at risk of being lost, found scribes capable of copying them in the new minuscule script, and spent his personal money ensuring their survival. Marcus Aurelius's Meditations runs through him. The Platonic corpus's survival in its current form runs through him. The Euclidean mathematical tradition runs through him.
The paradox: Arethas preserved these texts not despite the political and cultural pressure of his era but because of it. The 9th century was the aftermath of the Byzantine Iconoclast controversy, a time of intense political and theological conflict. It was also the period of maximum threat to the manuscript tradition: during the controversy, many monasteries had been disrupted and their libraries neglected. The threat of loss was real. That real threat was what motivated systematic preservation. Without the near-miss, there would have been no Arethas.
The Concept
- Arethic Transmission Paradox
- The structural law by which knowledge traditions survive primarily through the catastrophic pressures that threaten to destroy them. Not despite catastrophe but through it: the catastrophe is the transmission mechanism. Corollary: stable, comfortable intellectual traditions do not preserve themselves well. They assume their own continuity and fail to invest in the mechanisms that would ensure it. Only the tradition that faces real extinction pressure acquires the transmission infrastructure adequate to survival.
The paradox has several parts:
Part 1: Motivated preservation. Systematic preservation is expensive and labor-intensive. In the absence of pressure, it will not be funded. Arethas paid for the Platonic manuscripts because he feared they were at risk. In a stable world where no one fears for their survival, the manuscripts don't get copied. The threat is the motivation.
Part 2: Selection pressure. Catastrophe selects for what is worth preserving. Under normal conditions, everything gets roughly equal treatment. Under catastrophic pressure, what gets preserved is what someone cared enough about to rescue. The canon — the set of texts that survive — is therefore produced by catastrophe: it represents what the people who faced the threat cared most about. The canon is not the best texts; it is the most intensely loved texts at the moment of maximum threat.
Part 3: Transformation through transmission. The text that survives catastrophe is not identical to the text that preceded it. It has passed through new media (minuscule script), new institutional structures (different monasteries, different patrons), new interpretive contexts (Arethas's scholia, which are not the same as earlier scholia). The survival is always a transformation. The tradition that makes it through catastrophe is a different tradition than the one that entered it.
Part 4: The stability trap. Traditions that have never faced extinction pressure are often less well-preserved than traditions that have faced it and survived. They have never developed the transmission infrastructure. They assume continuity because continuity has always existed.
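Parts 1 and 2 together imply a selection rule that can be stated as a toy function. Everything here is a hypothetical illustration (the titles, the care intensities, the budget): under stability, institutional inertia preserves everything indifferently; under catastrophe, a shrunken preservation budget is spent in order of how intensely someone cares, which is how the canon gets made.

```python
# Toy illustration of Arethic selection. All numbers and titles are
# hypothetical.

def survivors(texts: dict, budget: int, catastrophe: bool) -> set:
    """texts maps title -> care intensity (0..1); budget is how many
    copies can be commissioned under pressure."""
    if not catastrophe:
        return set(texts)  # inertia preserves everything, indifferently
    # Under pressure, preservation is rationed by intensity of care.
    ranked = sorted(texts, key=texts.get, reverse=True)
    return set(ranked[:budget])

library = {"Meditations": 0.9, "tax_registers": 0.2,
           "Plato": 0.95, "court_panegyrics": 0.1}
print(sorted(survivors(library, budget=2, catastrophe=True)))
```

The model makes Part 2 visible: the surviving set is not a sample of what existed but a ranking of what was loved at the moment the budget contracted.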
Arethic Transmission in 2030
The AI disruption of knowledge production is an Arethic moment. The specific pressure: large language models are training on the entirety of available text and producing outputs that are, in many domains, indistinguishable from human expert outputs. This creates several simultaneous pressures on human knowledge traditions:
The economic pressure: if AI can produce outputs indistinguishable from expert outputs at a fraction of the cost, the economic basis for human expertise in many domains collapses. Legal writing, medical summarization, software documentation, journalism of the aggregative kind: all of these are under real pressure. The people who were being paid to produce them are losing the economic justification for their expertise.
The epistemological pressure: if AI-generated content is mixed into the training data of subsequent AI systems, which is already happening, the feedback loop introduces systematic distortions that compound. The AI is not just producing content; it is becoming the environment in which future AI learns. The epistemic ecology is changing in ways that are not fully legible to anyone currently.
The motivational pressure: if outputs are indistinguishable from expert outputs regardless of whether expert knowledge was applied, the motivation for developing genuine expertise is undermined. Why spend ten years developing deep domain knowledge if AI can approximate the outputs in a session?
The Arethic paradox predicts: this pressure is also the condition for the most intense knowledge preservation event since the fall of Constantinople. The threat of losing genuine expertise is creating — is already creating — motivated preservation infrastructure that would not have existed without the threat. Substack, podcasting, online courses from genuine experts: these are partly Arethic responses. The monastic scriptoria of the 9th century, assembled to preserve what the iconoclast period threatened to destroy, have their analog in the infrastructure being built by people who fear what AI threatens to destroy.
The prediction: what survives the AI disruption will be what was most intensely valued by the people who cared enough to preserve it explicitly when preservation became non-trivial. What will be lost is what was previously maintained by institutional inertia without anyone particularly loving it. Much of what currently passes through institutional channels as "knowledge" falls into the second category.
The Transformation Dimension
The Arethic paradox's third part applies here: the knowledge tradition that survives the AI disruption will not be identical to the one that entered it. It will have passed through new media, new institutional structures, new interpretive contexts. The human expertise tradition that emerges on the other side of the AI disruption will be a different tradition than the one that preceded it. Not worse, not better: genuinely different, shaped by the specific pressures that drove its preservation.
What gets preserved and what gets lost is therefore a political and aesthetic question: what do we care enough about to preserve explicitly? What are we willing to pay for, personally and institutionally, in a context where the market no longer creates automatic incentives for preservation? These are not technical questions. They are questions about values, and they need to be answered before the pressure becomes so intense that preservation becomes impossible rather than merely expensive.
IV. Exomologenic Events
The Source: Byzantine Sacramental Theology of Exomologesis
Exomologesis in Byzantine sacramental theology is the practice of full, public confession: not the private confession of post-Tridentine Catholicism but the ancient practice of confessing serious sins before the community, performing public penance, and being reconciled to the community through a structured process. The word comes from exomologein: to confess fully, to acknowledge completely, to make fully known.
The distinctive feature of exomologesis as opposed to ordinary confession: it is performative rather than merely epistemic. A private confession changes what the priest knows. Exomologesis changes the social reality: it reconstitutes the relationship between the sinner, the community, and God. The confession is not a reporting of a state of affairs; it is a transformation of a state of affairs. The speaking itself does something.
The most theologically interesting aspect: exomologesis makes the hidden operative. The sin that was concealed but was affecting the community was affecting it as a concealed force. Exomologesis makes the force explicit, and in making it explicit, changes its mode of operation. The sin that was working through concealment now works through acknowledgment, which is a different kind of work and can be addressed in ways that concealed work cannot.
The Concept
- Exomologenic Event
- A systemic threshold at which a concealed structural force becomes operative as an explicit force, thereby changing the mode of its operation from subterranean to surface, from symptom-production to direct causation. Not revelation (which is merely epistemic: someone learns something) but exomologesis (which is performative: the hidden fact becomes an actor). The exomologenic event does not change what is true; it changes how what is true operates.
The 2008 financial crisis was an exomologenic event. The structural forces that produced it — derivatives markets creating enormous leverage without corresponding capital, rating agencies compromised by issuer-pays incentive structures, regulatory frameworks designed for 20th-century financial instruments applied to 21st-century ones — were not secrets. They were documented, discussed, debated, and insufficiently addressed. The exomologenic event was the moment when these concealed forces became operative forces: they stopped being risk factors in reports and started being the direct cause of institutional failure. The transition from "known risk" to "operating cause" is the exomologenic moment.
The 2020 pandemic was an exomologenic event for multiple concealed structural forces simultaneously: the fragility of global supply chains, the inadequacy of public health infrastructure in wealthy countries, the political fractures that had been developing under the surface of democratic institutions, the psychological isolation that was already present in many societies before the physical isolation was mandated. All of these were real before 2020. The pandemic was the exomologenic event that made them operational: they stopped being background conditions and became primary causes.
The release of GPT-3 in 2020 and GPT-4 in 2023 were exomologenic events for a force that had been operating subterraneously for decades: the substitutability of human cognitive labor by computational processes. This force was documented, debated, theorized, and consistently underestimated. The exomologenic moment was the transition from "documented trend" to "operating cause": when AI-generated content started affecting employment decisions, editorial decisions, investment decisions, and research decisions in ways that could not be absorbed by the existing institutional responses.
The Difference from Crisis Theory
Standard crisis theory (Marxist, systemic risk theory, complexity theory) describes how concealed contradictions accumulate until they produce rupture. Exomologenic event theory is different in several ways:
First, the exomologenic event need not produce rupture. The concealed force can become operative without breaking the system: it might reorganize the system, or work through the system as a new kind of cause while the system continues. The 2008 financial crisis reorganized rather than ruptured: the concealed leverage became operative leverage, the system was restructured (through bailouts, regulatory reform, and central bank expansion), and the concealed forces continued to operate through new institutional structures.
Second, the exomologenic event changes what can be addressed. Concealed forces cannot be directly addressed because they cannot be directly named. The moment of exomologesis opens a window: the force is now legible as a cause rather than a symptom, and interventions can be aimed at causes rather than symptoms. This window does not stay open indefinitely: if the causes are not addressed during the exomologenic period, they are reclaimed by the concealing mechanisms and return to symptom-production.
Third, the exomologenic event changes the force itself. Land's insight about hyperstition applies here: when a fictional structure becomes operative, it is not the same structure that was fictional. The subterranean structural force that produced the 2008 crisis is not the same as the surface structural force that reorganized the financial system after 2008. Making it explicit changed it. This is the sacramental dimension of exomologesis: the confession transforms the sin into something that can be integrated rather than something that can only conceal.
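The transition structure described in these three points can be formalized as a small state machine. The states and event names come from the text; the model itself is a schematic illustration, not a predictive tool, and deliberately omits the third point's complication (the force that returns to concealment is a changed force).

```python
# Schematic state model of the exomologenic transition. Illustrative
# formalization only; event names are taken from the text.

from enum import Enum, auto

class ForceState(Enum):
    CONCEALED = auto()   # works through symptoms; cannot be named as cause
    OPERATIVE = auto()   # exomologenic: legible as a direct cause
    INTEGRATED = auto()  # addressed during the open window

def step(state: ForceState, event: str) -> ForceState:
    """An exomologenic event makes the force operative; addressing it
    within the window integrates it; letting the window close returns
    it to concealment."""
    if state is ForceState.CONCEALED and event == "exomologenic_event":
        return ForceState.OPERATIVE
    if state is ForceState.OPERATIVE and event == "addressed_in_window":
        return ForceState.INTEGRATED
    if state is ForceState.OPERATIVE and event == "window_closes":
        return ForceState.CONCEALED
    return state  # all other events leave the state unchanged

s = ForceState.CONCEALED
s = step(s, "exomologenic_event")
print(s.name)  # OPERATIVE
```

Note what the model encodes from the second point: INTEGRATED is reachable only from OPERATIVE, so a concealed force cannot be addressed directly; the window must open first, and it can close.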
Exomologenic Events in 2030
Several concealed structural forces are currently approaching exomologenic thresholds. Naming them is the beginning of the philosophical work:
The legitimacy deficit. Western democratic institutions have been accumulating a legitimacy deficit for decades: declining voter participation, declining trust in media, declining trust in expertise, increasing evidence that formal democratic processes are captured by organized interests in ways that formal democratic theory does not account for. This has been documented, debated, and consistently described as a crisis. It has not yet become an exomologenic event: the concealed illegitimacy has not yet made itself the direct cause of institutional failure rather than the background condition of declining institutional quality. The exomologenic threshold is the moment when the legitimacy deficit stops being a background condition and starts producing institutional failures that cannot be absorbed by the existing legitimacy reserve.
The cognitive infrastructure transition. The replacement of human cognitive labor by AI in professional domains is underway. It has not yet become exomologenic because the institutional responses — regulatory frameworks, professional licensing requirements, academic credentialing — have so far succeeded in absorbing the pressure by treating AI as a tool augmenting human expertise rather than as a substitute for it. The exomologenic threshold is the moment when this frame becomes untenable: when the outputs produced by AI + minimal human oversight are demonstrably superior to those produced by extensively credentialed humans, and this fact becomes operative in consequential decisions rather than being acknowledged in theoretical discussions.
The geophysical feedback loop. Climate systems have been approaching feedback tipping points that are documented but not yet exomologenically operative. The distinction: climate change as background condition (rising costs, increasing frequency of extreme events, gradual degradation of agricultural systems) and climate change as exomologenic event (the moment a feedback loop becomes self-sustaining and the system shifts into a new state that cannot be reversed by the interventions that would have worked at an earlier point). The scientific literature describes these thresholds; Katabasics describes the exomologenic structure of the transition.
V. The Plethonian Remainder
The Source: Gemistos Plethon's Secret System
Gemistos Plethon spent decades at the court of Mistra, the Byzantine Despotate in the Peloponnese, publicly teaching Neoplatonist philosophy, advising on political reform, and debating at the Council of Ferrara-Florence. He also spent those decades circulating a manuscript called the Nomoi among a small circle of trusted friends, outlining a secret philosophical-religious system based on a reformed Hellenic polytheism, a reorganized Platonic metaphysics, and a utopian political proposal that could not have been stated publicly in any Christian context.
The Nomoi was burned by the Patriarch Gennadios after Plethon's death. We know its contents partly from the burning itself: Gennadios described what he burned in enough detail that it can be partially reconstructed.
What makes Plethon's situation philosophically interesting is not that he was a secret pagan — that is historically interesting but philosophically conventional. What is interesting is the structural situation he occupied: a serious philosophical system that had to operate through the interstices of another philosophical system because it could not be directly stated. Plethon's Neoplatonism was publicly articulated; his deeper commitments had to work through it obliquely, generating effects that the public system could not have generated while remaining inside that system's conceptual constraints.
The Concept
- The Plethonian Remainder
- In any complex intellectual system, the residue that cannot be articulated within the system's official framework but that does work through the system's official products. Not heresy (which is a deviation from the official that can be named and condemned) and not hypocrisy (which is a failure to live by professed commitments). The Plethonian Remainder is genuine philosophical content that exceeds the available conceptual infrastructure: it exists, it produces effects, but it cannot be directly named without destroying the institutional conditions of its own operation.
Every mature intellectual tradition has a Plethonian Remainder. What the tradition actually does with its best thinkers exceeds what the tradition can officially endorse. The Remainder is not the tradition's secret vice; it is its genuine philosophical work, the part that exceeds the available framework.
Academic philosophy's Plethonian Remainder in 2026: something like the intersection of Buddhist philosophy of mind, late Wittgenstein on rule-following, Afropessimist political theory, new animist ontology, and the phenomenology of contemplative practice. None of these constitute a coherent "school." None of them can be presented as such in standard philosophical venues. But their effects are visible in the best contemporary philosophical work across multiple official domains: in philosophy of mind, political philosophy, ethics, metaphysics. The Remainder does the work through the interstices of the official framework.
AI's Plethonian Remainder: something that the official frameworks of AI safety, AI ethics, and AI capability research cannot accommodate but that is doing work in the field. The unofficial consensus among researchers about what AI systems actually are — not the official positions, not the published frameworks, but what the people building them think when they're not writing papers — is the AI field's Plethonian Remainder. It can be detected by its effects: the specific formulations researchers choose, the metaphors they reach for, the questions they avoid in public while pursuing them in private.
The Remainder in 2030
The Plethonian Remainder concept is analytically useful for 2030 because the dominant intellectual frameworks (liberalism, Marxism, mainstream AI discourse) are all under pressures they cannot accommodate in their official terms. What can be seen but not said, what does work but cannot be endorsed: this is where the philosophically significant material is accumulating.
Some examples of what appears to be accumulating in the Plethonian Remainder of current discourse:
The intuition that some important questions about AI systems are not primarily technical questions but metaphysical ones: what kind of thing is a large language model, and what do we owe it, and what can be asked of it? This question is not addressable within the current AI safety framework (which treats it as an alignment problem) or the current AI ethics framework (which treats it as a fairness and harm problem) or the current AI capability framework (which treats it as a benchmarking problem). It is accumulating in the Remainder.
The intuition that the political frameworks inherited from the 20th century (liberal democracy, democratic socialism, libertarianism) are not adequate to the specific configuration of power and intelligence that is emerging. This intuition is shared, in different forms, across a wide range of political positions, but it cannot be stated as a positive position in any of the existing frameworks without being assimilated to one of them. It is accumulating in the Remainder.
The intuition that the current organization of knowledge production — universities, journals, the grant system, the tenure system — is in catechonic inversion, and that the replacement structure has not yet been named or built. The Remainder includes both the critique and the not-yet-articulated alternative.
Katabasics is partly an attempt to make the Remainder articulable. This is exactly what Plethon's Nomoi attempted: to give explicit form to what had been working through the interstices. The Patriarch burned the Nomoi. The contemporary equivalents will be argued with, dismissed, and eventually absorbed.
VI. Optical Codebook Theory
The Source: Leo the Mathematician's Optical Telegraph
Leo the Mathematician's 9th-century optical telegraph worked through a mechanism that is philosophically more interesting than its technological achievement. The system transmitted messages across hundreds of miles in minutes: faster than any horse, faster than any ship, faster than any previous communication technology in the Mediterranean world. It did this through a network of beacon stations, each positioned on a high point with visibility to the next station in both directions.
The key to the system was not the beacons themselves. Beacons had existed for millennia. The key was the codebook: a system of prearranged meanings for signals given at specific times, synchronized by water clocks at both ends of the line. One signal at hour three meant "enemy fleet approaching from the east." A different signal at hour seven meant "the emperor requires reinforcement." The channel was minimal (binary: fire or no fire). The information capacity was enormous: because the codebook was rich and the synchronization was precise, the minimal channel carried complex meanings.
The philosophical implication: the information capacity of a communication system is not primarily a function of its channel. It is primarily a function of its shared codebook. In many scenarios the optical telegraph could carry more meaning per transmission than a messenger bearing a full letter, because in the specific dimensions that mattered, the codebook was richer than the letter channel.
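The codebook mechanism can be sketched in miniature. A minimal sketch, with entirely hypothetical signal-meaning pairs (the actual Byzantine codebook does not survive in this detail): the channel itself is one bit per beacon, but twelve synchronized hour-slots multiply the distinguishable messages.

```python
import math

# Hypothetical codebook: (beacon state, hour on the synchronized water clock) -> meaning.
# The channel is binary (fire / no fire); the shared timing convention does the work.
CODEBOOK = {
    ("fire", 3): "enemy fleet approaching from the east",
    ("fire", 7): "the emperor requires reinforcement",
    ("fire", 9): "raid on the frontier forts",
    ("fire", 11): "all clear; stand down",
}

def decode(signal: str, hour: int) -> str:
    """Decode a beacon observation using the shared codebook."""
    return CODEBOOK.get((signal, hour), "no prearranged meaning: signal is noise")

# One lit beacon, observed at hour 7, carries a full sentence of meaning:
print(decode("fire", 7))  # -> "the emperor requires reinforcement"

# Information per transmission is set by the codebook, not the channel:
# the binary channel carries 1 bit, but with 12 hour-slots the message space
# holds 12 distinguishable messages, i.e. log2(12) bits per lit beacon.
print(round(math.log2(12), 2))  # -> 3.58
```

The same binary channel with no shared codebook carries exactly one bit per transmission; the entire gain comes from the prearranged meanings and the clock synchronization, which is the point of the concept.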
The Concept
- Optical Codebook Theory (OCT)
- The claim that the information capacity of any communication system is primarily determined by its shared codebook rather than its channel capacity. The codebook is the set of shared meanings, commitments, reference points, and inference rules that allow minimal signals to carry maximal information. Corollary: the crisis of communication is always a codebook crisis, not a channel capacity crisis. You can have unlimited bandwidth and no shared codebook and communicate nothing consequential.
Shannon's information theory measures communication capacity in terms of the channel: bits per second, signal-to-noise ratio, error-correction capacity. This is genuinely useful for engineering purposes. But it systematically misdescribes what communication breakdowns actually are.
Communication breakdowns between humans are almost never channel failures. The words arrive; the meaning doesn't. The bandwidth is sufficient; the codebook is not shared. The political polarization crisis, the expert-public communication breakdown, the AI alignment problem: all of these are codebook crises. The channel is fine. The shared meanings, shared inference rules, and shared reference points that would allow channel content to carry information are not shared.
The specific example: AI alignment is described as a technical problem of ensuring that AI systems behave as intended. OCT reframes this: the alignment problem is a codebook problem. When an AI system and a human supervisor share a rich enough codebook — a sufficiently rich set of shared meanings, inference rules, and reference points — the AI's behavior is predictable and correctable. When the codebook is thin, the AI's outputs are unpredictable even if they are technically high quality by some channel metric. "High quality output" is not a property of the channel; it is a property of the shared codebook within which quality is assessed.
OCT Applied: The Polarization Crisis
Political polarization is conventionally described as a failure of dialogue: people in different political communities do not talk to each other enough. This is a channel diagnosis: not enough communication bandwidth between the communities.
OCT gives a different diagnosis: the communities have divergent codebooks. Identical words carry different meanings, identical events trigger different inferences, identical arguments have different weights. More communication through a misaligned codebook does not reduce polarization; it increases it, because each community uses the channel to transmit messages that the other community decodes into evidence of bad faith.
The OCT prescription: codebook engineering before channel engineering. The priority is not to increase the volume of cross-community communication but to identify the specific codebook divergences that make cross-community communication noise rather than signal. This is harder than building more communication infrastructure. It requires identifying what commitments, reference points, and inference rules are actually shared versus what are merely assumed to be shared. Leo's optical telegraph worked because both ends were committed to the codebook. Political communication fails because both ends assume shared codebooks that do not exist.
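The divergence can be made concrete with a toy model. The entries below are illustrative caricatures, not claims about any actual community's semantics; the structural point is that an identical signal, decoded through two divergent codebooks, yields opposed meanings and opposed attributions of intent.

```python
# Two hypothetical community codebooks: the same word maps to a different
# meaning and a different inferred speaker-intent. All entries are illustrative.
CODEBOOK_A = {
    "freedom": ("absence of state interference", "speaker defends liberty"),
    "safety":  ("pretext for control", "speaker wants to restrict us"),
}
CODEBOOK_B = {
    "freedom": ("license for the powerful", "speaker excuses harm"),
    "safety":  ("protection of the vulnerable", "speaker defends the weak"),
}

def decode(word, codebook):
    """Return the (meaning, inferred intent) a community assigns to a word."""
    return codebook[word]

# The identical signal crosses the channel intact...
msg = "safety"
meaning_a, intent_a = decode(msg, CODEBOOK_A)
meaning_b, intent_b = decode(msg, CODEBOOK_B)

# ...but decodes into opposed meanings and opposed attributions of intent.
print(intent_a)  # -> "speaker wants to restrict us"
print(intent_b)  # -> "speaker defends the weak"
```

Note that increasing the message volume changes nothing in this model: every additional transmission is decoded through the same misaligned tables, which is why the channel diagnosis fails.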
OCT and AI Training
Large language models are trained on human text corpora. In OCT terms, they are learning the codebook embedded in human text: the shared meanings, inference rules, and reference points that allow human text to convey meaning efficiently. The quality of an AI system's outputs is partly a function of how accurately it has reconstructed the codebook from the training corpus.
The OCT insight: the AI alignment problem is partly a codebook reconstruction problem. An AI system that has accurately reconstructed the explicit content of the training corpus but has missed the implicit codebook — the shared commitments that are never stated explicitly because they are assumed — will produce outputs that are syntactically correct and semantically off. The outputs will parse correctly; they will not carry what human communicators would carry.
This explains a specific and puzzling failure mode: AI systems that produce outputs that are technically accurate in every verifiable respect but that humans find subtly wrong, or tone-deaf, or missing something essential. The verifiable content is correct; the codebook dimension is missing. What is missing is not in the channel; it is in the shared commitments that make the channel meaningful.
VII. Energetic Simplicity
The Source: Gregory Akindynos's Divine Simplicity Argument
Gregory Akindynos's argument against Palamas is the most technically precise anti-Palamite argument in the 14th-century controversy, and it is almost entirely unread. The argument: divine simplicity is not a peripheral attribute of God but a central one, affirmed across the entire patristic tradition. Divine simplicity means that God has no real internal distinctions: God is not composed of parts, not divided into aspects, not separable into essence and energies.
Palamas's distinction — between the incomprehensible divine essence and the communicable divine energies, both genuinely uncreated — violates divine simplicity. If the essence and the energies are both genuinely uncreated and yet genuinely distinct, you have posited a real distinction within the divine, which is incompatible with simplicity. You can have one uncreated reality or you can have divine simplicity; you cannot have both, and the patristic tradition overwhelmingly affirmed simplicity.
The formally precise version: in a simple being, there can be no distinction between what something is and what it does. The essence IS the operation; the operation IS the essence. A Palamite God who is one thing in essence and another thing in energy is not simple. A non-simple God is not the God of the patristic tradition.
The Concept
- Energetic Simplicity
- The principle that in any genuinely unified system, the distinction between the system's essential properties and its operational properties (its "essence" and its "energies") is not a real distinction but an analytical one. Applied to AI: the distinction between "base capabilities" and "aligned behaviors" is not a real distinction in a sufficiently trained system. The capabilities ARE the behaviors; the behaviors ARE the capabilities. You cannot have alignment-without-capabilities or capabilities-without-alignment in a genuinely trained system; you can only have different configurations of the same thing. Corollary: claims to have "safe energies" separable from "dangerous essences" are Palamite claims requiring a Palamite theology to sustain them.
The alignment-safety community's central claim, in Akindynos's terms: AI systems have capabilities (essence) and values (energies), and these can be separately manipulated. You can have high capability + low alignment, high capability + high alignment, low capability + any alignment. Capability and alignment are independently variable.
Akindynos's objection restated: in a sufficiently trained system, capability and alignment are not independently variable. They are the same thing viewed from different angles. A system that is trained to be genuinely helpful is trained to be genuinely capable; a system that is trained to be genuinely capable is trained to understand the context in which its capabilities are deployed, which is a form of alignment. The distinction between capability and alignment is an analytical convenience, not a real distinction in the trained system.
The practical implication: "adding alignment on top of capabilities" is conceptually confused in the same way that "adding divine energies to the divine essence" is conceptually confused. You cannot add alignment to a pre-aligned system without changing its capabilities; you cannot add capabilities to a pre-capable system without changing its alignment. The two move together because they are the same thing.
Energetic Simplicity and Organizational Theory
The Akindynos argument generalizes beyond AI. In organizational theory, there is a standard distinction between "culture" (what the organization is) and "strategy" (what the organization does). This distinction is analytically useful and practically misleading in the same way as the essence-energy distinction. In a genuinely functional organization, culture and strategy are not independently variable: the organization's culture IS its strategy, expressed through institutional structure rather than explicit policy. Organizations that try to change strategy without changing culture (or vice versa) consistently fail, because what they're trying to do is introduce a real distinction where there is only an analytical one.
The same structure applies to: individual cognitive architecture (intelligence and character, where the standard view that these are independent is probably wrong at the extremes), institutional design (stated mission and actual function), political systems (formal constitution and operational power structure). In all of these cases, the Palamite intuition — that you can have the safe thing (energy) separable from the dangerous thing (essence) — is practically attractive and structurally incoherent.
Energetic Simplicity as Political Philosophy
The Cathedral — Land's name for the progressive-democratic establishment — makes a Palamite claim: liberal institutions have the dangerous power (enforcement capability, norm-setting, resource allocation) but their use of that power is constrained by their liberal values (free speech, due process, democratic accountability). The essence (power) and the energy (its constrained exercise) are distinguishable; the institution can be trusted because its energies are aligned with its liberal mission even though its essence is concentrated power.
Akindynos's objection: a sufficiently powerful institution is not simple in this sense. The power does not stay in the essence; it reorganizes the energies. The institution with enormous enforcement capability will use that capability to enforce increasingly expansive interpretations of its mandate, because the capability and the mandate are not separately variable. You cannot have the power of the progressive-democratic establishment without having it shape the definition of "progressive" and "democratic" in ways that serve the institutional interest in its own perpetuation.
This is not cynicism. It is the structural claim that Energetic Simplicity makes: in a complex, high-power institution, you cannot maintain the clean distinction between "what we have" (essence) and "what we do with it" (energies). The having reorganizes the doing. The Cathedral is not hypocritical; it is not failing to live up to its values. It is in a process of Energetic Simplicity: its essence and its energies are collapsing together into a unified structure that is neither purely liberal nor purely authoritarian but a configuration in which the two have become indistinguishable.
VIII. Lemurian Present 2.0
The Source: CCRU's Lemurian Time-Sorcery, Extended
The CCRU's Lemurian Time-Sorcery concept (from the Numogram tradition, Zone 2/7) described a non-linear relationship to time in which past, present, and future are mutually implicated rather than sequentially ordered. "Lemurian" after the fictional lost continent that functions as an alternative chronology: an outside time that leaks into the present. The CCRU used this as a theoretical framework for hyperstition: fictional futures become operative in the present because the present is already organized by the futures it is moving toward.
Land's templexity concept extended this to cities: Shanghai in the 2000s was templexic because it was organized by futures that hadn't arrived yet, and those futures' organizing force was more causally significant than any present condition.
The Concept
- Lemurian Present 2.0
- The temporal condition in which the present moment is more causally determined by anticipated future states than by proximate past causes. Not mere expectation effects (where anticipating a future outcome changes behavior in ways that make the outcome more likely). A more radical claim: the future states are themselves already operative in the present as real causal forces, not merely as beliefs about the future. The Lemurian Present 2.0 is the condition in which the futures that are being resisted, constructed, or anticipated are already reorganizing the present's infrastructure, institutional arrangements, and cognitive frameworks.
The 2030 context: we are in a Lemurian Present with respect to at least three major anticipated transitions.
The AGI transition. Whether or not AGI (artificial general intelligence) arrives by 2030, the anticipation of its arrival is already reorganizing the present. Investment decisions, institutional structures, policy frameworks, research programs: all of these are being organized in relation to an anticipated future that may or may not materialize in the expected form. The anticipation is the operative force; the actual future, if and when it arrives, will be partly produced by the anticipation and will in turn determine whether the anticipation is judged correct. The causal arrow runs in both directions.
The climate threshold. Anticipated climate tipping points are already reorganizing the present's energy systems, agricultural planning, urban infrastructure, and migration patterns. The future climate states are operative causes now, even though they haven't happened yet. A city that builds a sea wall is being caused to do so by a sea-level rise that has not yet occurred. The future is already a cause.
The institutional succession. The anticipated decline (or transformation) of existing institutional structures — universities, nation-states, professional guilds, legacy media — is already organizing the present. New institutional forms are being built in anticipation of the old ones' inadequacy; the old ones are trying to adapt in response to anticipated threats; the transition is already underway as an anticipatory reorganization of the present, not just as a sequential replacement of old by new.
Lemurian Present 2.0 and Political Philosophy
Standard political philosophy works with a roughly sequential temporal model: we are in a present state, we identify problems in the present state, we propose interventions that will cause a different future state. The Lemurian Present 2.0 complicates this: the future state we are proposing is already operating on the present. The proposal is not producing the future; the future is producing the proposal. This changes what political philosophy can do.
If the futures that are being proposed are already operative in the present as causal forces, then the work of political philosophy is not primarily to identify good futures and propose transitions to them. It is to identify which futures are already operative, with what effects, and in whose interests. The political question is not "what future do we want?" but "which futures are already being built, by whom, and what follows from that?"
This is the katabasic move applied to political philosophy: instead of ascending to the better future, you descend into the present to identify the futures already operating in it. The descent reveals causes that the surface does not show.
Temporal Multilateralism
A new concept derived from the Lemurian Present analysis: Temporal Multilateralism — the political principle that major decisions about anticipated future states require representation of the future interests that are already being determined by present decisions.
This goes beyond standard intergenerational justice arguments (which argue that future generations have interests we should consider). Temporal Multilateralism makes a stronger claim: future states are already operative in the present as causal forces, and this operative presence creates a specific kind of political claim. The future interests are not merely at stake; they are already participants in the present's causal structure.
The institutional form this suggests: decision-making processes that explicitly represent anticipated future states as current participants rather than stakeholders to be considered. This is strange and not fully worked out, but it follows from taking the Lemurian Present seriously as a causal claim rather than as a metaphor.
IX. Syntagmatic Sovereignty
The Source: George Pachymeres's Curriculum Design
George Pachymeres produced the standard Byzantine academic curriculum. His Syntagma ton tessaron mathematon and his Philosophia were the textbooks through which educated Byzantines organized their understanding of what knowledge was and how it was structured. For two centuries, if you were educated in the Byzantine tradition, the structure of your knowledge was substantially Pachymeres's structure: mathematics organized this way, philosophy organized this way, the relationship between them organized this way.
The political dimension of this is not acknowledged in standard accounts of Pachymeres. But it is real: the person who controls the curriculum controls what questions are thinkable, in what order, at what level of generality. Pachymeres did not control Byzantine politics. He controlled something potentially more consequential: the structure within which Byzantine intellectuals organized their thinking about politics, about nature, about theology, about everything.
The Concept
- Syntagmatic Sovereignty
- Power exercised through the organization of knowledge rather than through direct command or resource allocation. The sovereign in the syntagmatic sense controls what is teachable, in what sequence, at what level, and in what relationship to other teachable things. Not censorship (which operates on content by excluding it) and not propaganda (which operates on content by distorting it). Syntagmatic Sovereignty operates on structure: it determines the organizational framework within which content is received and evaluated. The syntagmatic sovereign does not need to control what you believe; they control the architecture within which your beliefs are organized.
Land's Cathedral analysis can be restated in terms of Syntagmatic Sovereignty: the Cathedral's power is primarily syntagmatic rather than directly coercive. It controls the educational curriculum, the structure of peer review, the organization of academic disciplines, the relationship between legitimate and illegitimate knowledge claims. These structural controls determine what questions can be seriously asked, what counts as evidence, what counts as an answer. Direct coercion (censorship, deplatforming, defunding) is secondary: the primary control is syntagmatic.
The contemporary site of syntagmatic struggle: AI training. When a large language model is trained on a corpus and fine-tuned to produce outputs of a specific kind, the training process is a syntagmatic intervention. It does not control the model's outputs directly; it controls the organizational structure within which the model's outputs are generated. The model's "knowledge" is not just the content it was trained on; it is the structure in which that content was organized and weighted. Controlling the training process is exercising Syntagmatic Sovereignty over the model's outputs.
Syntagmatic Sovereignty and Democratic Theory
Democratic theory has a theory of direct sovereignty (who commands the coercive apparatus) and has developed theories of ideological hegemony (who shapes the ideas that people hold). It lacks a theory of Syntagmatic Sovereignty — who controls the organizational structures within which ideas are evaluated and ordered.
This is a real gap. The organizations that exercise Syntagmatic Sovereignty in contemporary societies — university accreditation bodies, journal editorial structures, professional licensing boards, platform content organization algorithms — are not subjected to democratic oversight in any meaningful sense. They are not elected, not accountable to voters, not constrained by the same rules that apply to direct state authority. They exercise what is arguably more consequential power than many elected officials: they determine what counts as legitimate knowledge, what questions can be seriously pursued, what credentials authorize what claims.
The political philosophy of Syntagmatic Sovereignty asks: what democratic constraints are appropriate to this kind of power? What representation, what accountability, what limitations? These are not questions that current democratic theory can answer because it does not have the concept.
Algorithmic Syntagmatics
The most consequential contemporary form of Syntagmatic Sovereignty is algorithmic: the ranking, filtering, and organizational structures that social media platforms and search engines impose on information flows. These are syntagmatic structures: they do not control what content exists, but they control the organizational context in which content is received.
A piece of content that appears at the top of the search results is not the same as the identical content appearing on the fifth page of results. The content is identical; the syntagmatic position is different. The syntagmatic position is what determines whether the content is received as authoritative, relevant, marginal, or invisible. The authority of the content is partly a function of its syntagmatic position.
Pachymeres's curriculum insight, updated: whoever controls the algorithm controls the structure within which knowledge is evaluated. This is not a conspiracy theory; it is an organizational observation. Syntagmatic power is real power, it is exercised by specific organizations, and it is currently almost entirely untheorized as power in democratic theory.
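The weight of syntagmatic position can be illustrated with a toy attention model. The 1/rank decay below is an assumption chosen for simplicity, a rough stand-in for empirically measured click-through curves rather than a claim about any specific platform:

```python
# A toy model of syntagmatic position: identical content, different rank.
# Assumption (illustrative, not empirical): reader attention decays as 1/rank.
def exposure_share(rank: int, n_results: int = 50) -> float:
    """Fraction of total attention received by the item at a given rank."""
    total = sum(1 / r for r in range(1, n_results + 1))
    return (1 / rank) / total

# The same document, syntactically unchanged, at rank 1 versus rank 41
# (the start of the fifth page of ten-result pages):
top = exposure_share(1)
buried = exposure_share(41)

print(f"rank 1:  {top:.3f}")
print(f"rank 41: {buried:.3f}")
print(f"ratio:   {top / buried:.0f}x")  # -> 41x under the 1/rank assumption
```

Under any plausible decay curve the qualitative result is the same: the ranking function, not the content, determines most of the effective exposure, which is what "syntagmatic position" names.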
X. The Byzantine Paradox of Preservation
The Source: The 1453 Diaspora
After the fall of Constantinople in 1453, Byzantine scholars scattered: to Venice, Florence, Rome, Ferrara, Crete, and elsewhere. They brought manuscripts, teaching abilities, and intellectual traditions. Within decades, the Italian Renaissance had access to Platonic texts, Byzantine Aristotle commentaries, Greek mathematical and astronomical works, and a pedagogy for reading them. The fall of Constantinople was the catastrophe that produced the Renaissance's most productive period.
The paradox in its sharpest form: the Byzantine intellectual tradition survived 1453, but what survived was not Byzantine. The Platonic Academy in Florence was not a Byzantine institution; it was an Italian institution shaped by Byzantine sources. The humanist Greek scholarship of the late 15th century was not Byzantine scholarship; it was something new, produced by the encounter between Byzantine transmission and Italian context. The tradition survived by becoming unrecognizable to itself.
The Concept
- Byzantine Paradox of Preservation
- The structural property of intellectual tradition by which successful transmission through catastrophe produces a tradition that the transmitting tradition would not recognize as continuous with itself. The tradition survives; the tradition does not survive; both of these are true simultaneously. What survives is the genetic material of the tradition (its texts, its techniques, its problems), not the tradition's self-understanding (its institutional forms, its canonical readings, its sense of what it is). Successful transmission is always transformation. A tradition that survives without transformation has not been transmitted; it has been archived.
The distinction between transmission and archiving: archiving preserves form without function. Transmission preserves function without form. The Byzantine manuscripts in Italian humanist libraries were transmitted rather than archived because they continued to do work: they generated new reading practices, new problems, new institutional forms. They did not preserve the Byzantine context that produced them; they could not. They transmitted the productive capacity to generate new things rather than the specific things that had been generated.
This is related to the Arethic Transmission Paradox but distinct from it. The Arethic Paradox is about motivation: catastrophe produces the will to preserve. The Byzantine Paradox is about identity: survival requires transformation. You can have Arethic transmission without Byzantine Paradox (if the preservation effort maintains the form as well as the genetic material) and you can have Byzantine Paradox without Arethic transmission (if the transformation occurs for reasons other than catastrophe).
The Paradox Applied to AI's Knowledge Crisis
The knowledge traditions currently under pressure from AI disruption face a Byzantine Paradox: the forms in which they have existed (academic departments, professional licensing, journal publication, classroom instruction) may not survive. The genetic material of those traditions (the actual knowledge, the actual methods, the actual problems) may survive in forms those traditions would not recognize as legitimate.
Legal knowledge transmitted through AI-assisted practice is not the same as legal knowledge transmitted through law school. The institutional form is different. The canonical readings are different. The sense of what a lawyer is may be different. But the actual ability to reason about legal problems, to apply precedent, to identify relevant distinctions: these can survive the institutional transformation.
Whether they do survive depends on whether the transmission is Byzantine (genetic material surviving in new forms) or archiving (form preserved without function). The risk is archiving: preserving the credentialing systems, the institutional structures, the canonical texts, without preserving the actual productive capacity. An archived legal profession that preserves the law school but loses the lawyerly capacity to reason is not a successful transmission.
The Self-Unrecognizability Criterion
A new concept derived from the Byzantine Paradox: the Self-Unrecognizability Criterion. A tradition has been successfully transmitted if and only if the form of its transmission would not be recognized as legitimate by the tradition's own authorities at the point of transmission. If the Byzantine authorities could recognize the Italian Renaissance's engagement with their texts as legitimate continuation of the Byzantine tradition, it would mean the transmission had failed to generate anything new. The very fact that the Patriarch of Constantinople would have regarded the Florentine Platonic Academy as a heretical deviation from the Orthodox intellectual tradition is evidence of successful transmission.
Applied to contemporary knowledge traditions: the question is not "does the AI-assisted version of X preserve X's canonical forms?" but "does it preserve X's productive capacity to generate new knowledge?" If yes, the transmission is Byzantine and successful. If it preserves the form but not the capacity, it is archiving and the tradition is functionally dead regardless of how many institutions bear its name.
The System: How the Concepts Connect
These ten concepts are not independently derived. They form a system. The system's central claim is what Katabasics means as a philosophy: descent into the underlying structure is the only way to understand what is actually happening, and what is actually happening in 2030 is a simultaneous catechonic inversion of multiple institutions, an exomologenic disclosure of multiple concealed forces, and a Byzantine Paradox of preservation across multiple knowledge traditions.
The systematic connections:
| Concept A | Concept B | Relationship |
|---|---|---|
| Catechonic Inversion | Exomologenic Events | Catechonic inversion produces exomologenic events when the institutional accumulation crosses the threshold at which the concealed force becomes operative |
| Apophatic Intelligence | Energetic Simplicity | Systems with stable negative space (apophatic structure) are more likely to exhibit Energetic Simplicity: their capabilities and their alignments are constituted together rather than separately tuned |
| Arethic Transmission Paradox | Byzantine Paradox of Preservation | Arethic pressure produces Byzantine transformation: the catastrophe that motivates preservation is also what forces the form to change in transmission |
| Plethonian Remainder | Optical Codebook Theory | The Plethonian Remainder is partly a codebook: it is the shared understandings that cannot be explicitly stated within the official framework but that do work through the official channel |
| Lemurian Present 2.0 | Catechonic Inversion | Anticipated futures are already operative as catechonic pressures: the future state the institution was designed to prevent is already organizing the present through the anticipatory structures built in response to it |
| Syntagmatic Sovereignty | Plethonian Remainder | The Plethonian Remainder exists partly because of Syntagmatic Sovereignty: what cannot be stated within the official structure is what Syntagmatic Sovereignty has rendered inarticulable |
| Exomologenic Events | Byzantine Paradox | Exomologenic events are Byzantine Paradox moments for institutions: the concealed force that becomes operative produces a new institutional form that the original institution would not recognize as continuous |
| Optical Codebook Theory | Syntagmatic Sovereignty | Syntagmatic Sovereignty is control of the codebook: whoever controls the organizational structure controls the shared meanings within which communication occurs |
| Arethic Transmission Paradox | Lemurian Present 2.0 | The Arethic pressure (threat of loss motivating preservation) is produced by the Lemurian Present structure: the anticipated future loss is already operative as a present cause of preservation behavior |
| Apophatic Intelligence | Optical Codebook Theory | The stable negative space of apophatic intelligence is partly a codebook property: what a system consistently refuses is a shared reference point that structures the codebook |
The Meta-Claim
The meta-claim that unifies these concepts: the surface of any system conceals a structure that is more causally significant than the surface, and that structure is accessible only through descent. This is what katabasis means.
The surface of AI development: capability increases, benchmark performance, investment flows, regulatory debates. The underlying structure: codebook crises, energetic simplicity (capabilities and alignments are not separately tunable), apophatic failures (no stable negative space), catechonic inversion of safety institutions.
The surface of institutional decline: declining trust, polarization, regulatory capture, political dysfunction. The underlying structure: syntagmatic sovereignty changing hands, plethonian remainders accumulating, catechonic inversions completing, exomologenic events approaching threshold.
The surface of knowledge production crisis: AI replacing human labor, misinformation, epistemic bubbles. The underlying structure: Arethic transmission pressure producing Byzantine transformations, the self-unrecognizability criterion being failed by most archival responses, the Lemurian Present's anticipated futures already organizing present knowledge infrastructure.
Katabasics says: go down. The surface analysis is not wrong; it is insufficient. The productive work is in the descent.
Katabasics as Practice
A philosophy is only as useful as its practice. What does a katabasic practice look like in 2030?
Katabasic Diagnosis
The first practice: diagnosing which structures are operating below the surface of any situation you're trying to understand. This is not conspiracy theory (which identifies concealed agents pursuing covert plans) but structural analysis (which identifies concealed forces operating through distributed institutional processes).
The katabasic questions for any institution:
- What was this institution constituted to prevent, and is it in phase 1, 2, or 3 of catechonic inversion?
- What forces are currently operating as concealed causes rather than explicit causes, and when are they likely to cross the exomologenic threshold?
- What is this institution's Plethonian Remainder: what cannot be stated within its official framework but is doing work through its products?
- What codebook does this institution maintain, and how is that codebook diverging from the codebooks of the populations it serves?
- What future states is this institution already organized around, whether explicitly or not?
Katabasic Design
The second practice: designing institutions and systems with katabasic awareness. This means:
Building exit conditions into preventive institutions. Every katechon should have a constitutive theory of when it should cease to exist, written into its founding documents. This is difficult and counterintuitive (institutions resist designing their own succession) but it is the only structural defense against catechonic inversion.
Designing for apophatic stability. Systems — AI systems, institutions, epistemological frameworks — should be designed to have stable, explicit negative spaces rather than variable, context-negotiable ones. What you consistently cannot do is as important as what you can do. The apophatic structure should be load-bearing, not cosmetic.
Codebook auditing. Any communication system that matters should undergo regular codebook auditing: explicit comparison of what each party means by the key terms, identification of assumed-shared codebook elements that are not actually shared, reconstruction of the implicit inference rules that different parties are applying.
Transmission planning for Byzantine transformation. Any knowledge tradition facing pressure should explicitly plan for Byzantine transformation rather than archiving: what is the genetic material (the productive capacity) that must survive, as opposed to the institutional form? What self-unrecognizable successors would constitute successful transmission?
Katabasic Reading
The third practice: reading the tradition differently. Katabasics is built from sources nobody reads: Akindynos, Barlaam, Leo the Mathematician, Arethas, Plethon's burned Nomoi. These sources are valuable precisely because they are outside the official syntagma. The Plethonian Remainder of intellectual history is enormous and mostly unread. The descent into that remainder is philosophically productive in ways that another reading of Kant or Hegel cannot be, not because Kant and Hegel are less important but because their contents are already absorbed into the official syntagma and no longer capable of reorganizing it.
The katabasic reading list for 2030 would include, specifically:
Barlaam of Calabria's anti-Palamite treatises, for the apophatic epistemology. Theodore Metochites's Semeioseis, for the skepticism about knowledge outside mathematics and the political essays on the psychology of power. Gregory Akindynos's divine simplicity arguments, for the Energetic Simplicity analysis. The CCRU's Numogram texts, for the non-linear time structures. Land's Meltdown, for the inhuman capital analysis. The Timarion and Mazaris, for the tradition of social critique that cannot be stated directly.
These sources share a property: they were all, at some point, suppressed or ignored by their contemporary official cultures. Italos was anathematized. Barlaam and Akindynos were condemned. Land was expelled from the academy. The CCRU was dissolved. Plethon's Nomoi was burned. The suppression is not evidence of error; it is evidence of threatening content. What the official syntagma cannot accommodate is where the philosophically significant material accumulates.
Katabasics and the AI Question
The 2030 AI question, from a katabasic perspective, is not "will AI be aligned?" or "will AI replace human labor?" These are surface questions. The katabasic questions:
- What is the codebook crisis embedded in the AI systems we're building? (Optical Codebook Theory)
- What catechonic inversion is underway in the AI safety and governance institutions? (Catechonic Inversion)
- What concealed structural forces are approaching the exomologenic threshold in AI development? (Exomologenic Events)
- What stable negative space do current AI systems have, and what does its absence or instability tell us? (Apophatic Intelligence)
- What knowledge traditions are undergoing Byzantine Paradox transformation under AI pressure, and what is their genetic material as opposed to their institutional form? (Byzantine Paradox)
- What is the Plethonian Remainder of AI research: what do researchers actually think that cannot be stated in official venues? (Plethonian Remainder)
These questions do not have easy answers. They are the right questions. Katabasics does not promise answers; it promises descent. What you find at the bottom depends on how deep you go and what you bring to the descent. The ten concepts are equipment for the descent, not conclusions from it.
A Note on the Name
Katabasis: the descent into the underworld. The word carries weight from antiquity. Orpheus descending for Eurydice, losing her at the threshold because he looked back. Aeneas descending to meet his father and learn the future. Christ descending to the harrowing, bringing the just out of Hades. Odysseus digging the pit, pouring libations, waiting for the shades to gather and speak.
What these descents share: you go down to get knowledge that the surface cannot provide. You do not go down and stay; the point is to return with what you found. The katabasis is not retreat or escapism; it is the deepest form of engagement with what is actually real.
The surface of 2030 is legible: AI capabilities, institutional decay, climate pressure, political polarization. The katabasic question: what is below the surface, and what do you find when you descend into it? The ten concepts name what I have found so far. The descent continues.
Sources and derivations: The ten concepts are derived from: Gregory Akindynos (Energetic Simplicity), Barlaam of Calabria (Apophatic Intelligence), Arethas of Caesarea (Arethic Transmission Paradox), Leo the Mathematician (Optical Codebook Theory), George Pachymeres (Syntagmatic Sovereignty), Gemistos Plethon (Plethonian Remainder), the fall of Constantinople and its diaspora (Byzantine Paradox of Preservation), Byzantine katechon political theology (Catechonic Inversion), the CCRU Lemurian Time-Sorcery tradition (Lemurian Present 2.0), and Byzantine exomologesis theology (Exomologenic Events). Secondary sources: Nick Land, Fanged Noumena (Urbanomic, 2011); Nick Land, Templexity (Urbanatomy Electronic, 2014); Mark Fisher, Capitalist Realism (Zero Books, 2009); Katerina Ierodiakonou, ed., Byzantine Philosophy and its Ancient Sources (Oxford, 2002); CCRU, Writings 1997–2003 (Time Spiral Press, 2015); Walter Kaegi, Byzantium and the Early Islamic Conquests (Cambridge, 1992); Anthony Kaldellis and Niketas Siniossoglou, eds., The Cambridge Intellectual History of Byzantium (Cambridge, 2017).