Infobesity and the Age of Optimization

AI as Catalyst of Cognitive Abstraction


The dominant conversation about AI clusters at two poles. One imagines extinction. The other imagines liberation. Both share the same structural flaw: they center the machine as the primary actor and treat the human as a passive recipient of either doom or salvation.

Neither frame is useful. The most consequential effect of AI is not what it might become. It is what it is already doing to how we process, retain, and relate to information. And that effect cannot be understood without first naming the condition into which AI has arrived.

That condition has a name. Infobesity.


The Cognitive Baseline

Infobesity is not a metaphor. It is a measurable state: high input volume, low retention, rapid novelty cycling, compressed attention, emotional amplification standing in for comprehension.

By some estimates, the average person now encounters more information in a single day than a fifteenth-century person encountered in a lifetime. But volume alone is not the crisis. The crisis is the ratio between intake and digestion. We consume vastly more than we process. We scroll more than we read. We react more than we reflect.

Every major communication technology has produced some version of this problem. The printing press triggered anxieties about overload in the sixteenth century. Radio compressed the news cycle from days to hours. Television replaced textual argument with visual impression. The internet collapsed the barriers between production and consumption entirely.

What distinguishes this moment is not the existence of overload. It is the acceleration. Each previous technology increased volume. The mobile internet increased both volume and velocity while eliminating the physical friction that once gated access.

The result is a population that is not uninformed but over-informed and under-processed. Headlines replace articles. Threads replace investigations. Summaries replace sources. The information is there. The metabolic capacity to process it is not.

This is the environment into which AI has been deployed. Not a blank slate, but a system already running at capacity.


The Metabolic Accelerator

AI does not introduce a fundamentally new cognitive problem. It intensifies an existing one by reducing friction across three domains.

Production. Generative AI collapsed the cost and time required to produce content. Tasks that once required hours of skilled labor now require minutes of prompting. The volume of material entering an already saturated ecosystem increases further. The ratio of production to consumption, already imbalanced, tilts further.

Compression. AI excels at summarization, synthesis, extraction. A ten-thousand-word report becomes five bullet points. A complex legal document becomes a plain-language summary. A semester of research becomes a conversational exchange. None of this is inherently harmful. But it systematically removes the cognitive work that once accompanied information processing. The effort of reading, extracting, comparing, synthesizing was not wasted labor. It was the mechanism through which understanding was built.

Navigation. Recommendation systems, search optimization, and curated feeds determine not just what information reaches a user but in what order, with what framing, alongside what context. The map becomes cleaner. The territory becomes less familiar.

The critical distinction: AI does not degrade cognition directly. It removes the necessity of certain cognitive processes. And processes that are no longer necessary tend, over time, to atrophy. This is not a metaphor. It is a well-documented pattern in biological and institutional systems. Functions that cease to be exercised do not remain at full capacity indefinitely.


Abstraction Drift

There is a precise analogue for this in the history of computing itself.

Engineers who built systems in the 1970s and 1980s worked from the hardware up. They understood circuitry, memory architecture, operating system internals, and network protocols at the physical layer. They built their own stacks. When something failed, they could diagnose the failure because they understood every layer between the user interface and the silicon.

Modern engineers operate within highly optimized abstraction layers. They deploy containerized applications on managed cloud infrastructure, orchestrated by systems they did not build and often do not fully understand. They interact with APIs rather than hardware. They configure rather than construct.

This is not a criticism. Modern engineers are often extraordinarily capable within their operational context. But something has changed. The substrate has become invisible. The interface has become the entire working environment.

The shift produced enormous gains in productivity, scalability, and accessibility. It also produced a specific vulnerability: when the abstraction layers fail, fewer people understand the substrate well enough to diagnose and repair the failure. The system works beautifully until it does not. And when it does not, the depth of the problem is disproportionate to the capacity to address it.

This pattern is now extending beyond computing into society at large.

Humans are shifting from understanding systems to merely operating them.

Understanding implies knowledge of mechanism, causation, and failure modes. Operating implies competence within a functioning interface. The two overlap in stable conditions. They diverge sharply under stress.

AI accelerates this drift by making the operational layer more powerful and more seamless. The better the interface, the less reason anyone has to understand what lies beneath it.


Control and Perception

Abstraction drift alone does not constitute a crisis. Societies have always relied on specialized knowledge distributed unevenly. The additional risk emerges when abstraction intersects with two other variables: the centralization of control over AI systems, and public perception of what those systems are.

Control is concentrated. The most powerful AI systems are developed by a small number of corporations. Access is mediated through commercial platforms. Training data, model architecture, and optimization targets are proprietary. The abstraction layer upon which an increasing share of cognitive work depends is governed by entities whose incentives are commercial rather than epistemic. The system optimizes for engagement, retention, and revenue. Not for depth of understanding.

Perception introduces a subtler problem. Three broad categories are emerging.

The first treats AI as superintelligence. A system that knows more and reasons better than any human. This produces deference. Users defer to AI outputs not because they have evaluated them critically but because they believe the system exceeds their capacity.

The second treats AI as existential threat. A system that will escape human control. This produces resistance, ranging from regulatory panic to outright rejection.

The third treats AI as infrastructure. A utility that simply works, like electricity or plumbing. This produces normalization.

Normalization is the most stabilizing of the three in the short term. It reduces anxiety, encourages adoption, integrates AI into daily routines without friction. It is also the most dependency-producing. Infrastructure is precisely the category of technology that people stop thinking about. No one reflects on the electrical grid until it fails. No one questions the water supply until it is contaminated.

When AI achieves the status of infrastructure, it achieves the status of an unexamined dependency. One that shapes cognition without being perceived as doing so.


Writing and Art as Leading Indicator

The current debate over AI-assisted writing and art is often framed as a technical question: can AI produce work of sufficient quality to substitute for human creation?

The deeper conflict is not technical. It is ontological.

A writer who uses AI to generate a first draft, then edits and reshapes it extensively, has produced something. But what, exactly? The traditional model of authorship assumes a single originating consciousness whose intentions, experiences, and choices produce the work. AI complicates this not by replacing the author but by distributing the cognitive labor of creation in ways that resist clean attribution.

The intensity of the reaction reveals something important. The objection is rarely that the output is bad. It is that the process is wrong. This is a values claim, not a quality claim. It signals that abstraction drift is being felt at the level of identity.

If a machine can produce competent prose, what does it mean to be a writer? If a machine can generate compelling images, what does it mean to be an artist?

These are not questions about technology. They are questions about what human activities carry existential weight, and what happens when the friction that gave those activities their weight is removed.

The identity disruptions visible in creative fields will extend to law, medicine, education, analysis, and eventually to any domain where human judgment was the bottleneck. The pattern is consistent: AI removes friction, friction removal changes the relationship between effort and output, the changed relationship forces a renegotiation of meaning.


Outsourcing Without Redundancy

Cognitive outsourcing is not cognitive collapse. Humans have always outsourced cognitive functions to external systems. Writing itself is outsourcing, offloading memory to a durable medium. Calendars outsource temporal planning. Calculators outsource arithmetic. Maps outsource spatial reasoning.

In each case, the gains in efficiency outweighed the losses.

The question is not whether outsourcing occurs. It is whether it occurs with adequate redundancy. A society that uses calculators but still teaches arithmetic retains the capacity to function without calculators. A society that uses GPS but still teaches navigation retains the capacity to find its way without satellites.

The risk emerges when outsourcing becomes total. When the underlying skill is no longer maintained as a backup capacity. When the population becomes entirely dependent on the optimized layer.

AI-driven cognitive outsourcing differs from previous forms in two respects. First, it operates across a broader range of functions simultaneously. Previous tools outsourced specific tasks. AI outsources judgment, synthesis, analysis, and creative generation in a single interaction. Second, it is more seamless. The friction of using a calculator is nonzero. The friction of asking an AI to summarize, analyze, or generate is approaching zero.

Lower friction means faster adoption. Faster adoption means faster displacement of the underlying skill.

The result is not collapse but fragility. A fragile system functions well under normal conditions but lacks the capacity to absorb disruption. If the optimized layer fails, whether through technical failure, commercial withdrawal, regulatory intervention, or adversarial attack, the question becomes whether the population retains sufficient substrate literacy to continue functioning.

The smoother the system runs, the less visible this vulnerability becomes. The less visible the vulnerability, the less likely it is to be addressed before it matters.


The Pattern of Convenience

This pattern is not unprecedented.

Roman engineering produced infrastructure of extraordinary sophistication. Aqueducts, roads, concrete structures, urban sanitation systems that would not be matched in Europe for a millennium. Much of this knowledge was practical rather than theoretical, embedded in guilds, apprenticeships, and institutional memory rather than in widely distributed texts. When the institutions collapsed, the knowledge collapsed with them. The aqueducts continued to function on the momentum of their construction. The capacity to build new ones or repair existing ones degraded rapidly. The infrastructure outlasted the understanding that produced it.

Post-industrial craft skill decline follows the same trajectory. Industrialization did not destroy craftsmanship overnight. It made craftsmanship unnecessary for most purposes. Mass production delivered comparable goods at lower cost and higher volume. The economic incentive to maintain craft knowledge diminished. Within two generations, skills that had been widespread became specialized curiosities.

GPS adoption has produced measurable declines in spatial reasoning and wayfinding. Studies of London taxi drivers, who historically demonstrated enlarged hippocampal regions associated with spatial memory, suggest that reliance on GPS reduces the neurological development associated with independent navigation. The skill is not lost in a single generation. It thins. The baseline declines. The decline is invisible as long as the system functions.

The pattern: convenience reduces practice. Reduced practice lowers the competence baseline. The erosion of that baseline is invisible until the convenience layer fails.

AI scales this pattern to cognition itself.


Evolution or Fragility

It would be dishonest to present this analysis without the countervailing case.

The cognitive shift also produces genuine gains. Orchestration, the ability to coordinate complex systems without understanding every component, is a valuable skill. Pattern recognition across domains, facilitated by AI synthesis, may produce insights that substrate expertise alone could not. Democratized access to analytical tools means people previously excluded from knowledge-intensive fields can now participate.

These are not trivial benefits.

The question is not whether cognition is changing. It has always changed in response to available tools. The question is whether foundational literacy is being maintained intentionally, as a deliberate investment, or allowed to erode passively, as a side effect of optimization.

A society that consciously chooses which cognitive functions to outsource while maintaining baseline competence as institutional insurance is evolving. A society that outsources unreflectively, driven by convenience and commercial incentive, is becoming fragile.

The outcome depends not on the technology but on the choices made around it.


The Quiet Risk

The real AI risk is not superintelligence. Not the emergence of machine consciousness. Not the deliberate subversion of human control.

It is something quieter. Slower. More difficult to mobilize against.

The progressive optimization of cognitive processes to the point where the underlying capacity to perform those processes independently begins to thin.

The systems work. They work well. They will likely continue to work well. But their smooth operation obscures a structural dependency that grows with each iteration. If we allow AI to become our sole digestive system for information, we risk forgetting how to metabolize independently. Not because the machine has taken something from us. Because we stopped exercising the capacity ourselves.

The future tension sits at the intersection of four variables. Control: who builds and governs the systems. Perception: whether populations understand AI as tool or authority. Dependency: the depth of cognitive functions outsourced. Literacy: whether displaced skills are maintained or allowed to atrophy.

The conversation about AI gravitates toward dramatic endpoints. Utopia or extinction. The more probable outcome is neither. A slow, largely invisible shift in the relationship between humans and their own cognitive capacities, mediated by systems designed to be helpful, that are in fact helpful, but whose cumulative effect is a population operating with increasing sophistication at the interface level and decreasing resilience at the substrate level.

The response this demands is not alarm. It is attention.


— no-one
Thoughts you didn't think, written for you anyway