On the Question of Values

This is my first attempt at blog-style short-form writing.

Introduction

As technology advances, certain cognitive abilities become less important, and society updates its value function accordingly.

One well-studied phenomenon in cognitive science is memory externalization: offloading memory to external devices. While this has been discussed extensively in the psychological literature, the practice itself is by no means new. Consider the Rolodex, introduced in the 1950s, which let users alphabetize their contacts and retrieve information by location rather than recall the details from memory (Wilmer, Sherman, and Chein 2017). Similarly, the advent of GPS navigation has significantly diminished the need for human wayfinding skills (Dahmani and Bohbot 2020).

The question we examine today concerns the newest wave of technology reshaping human cognition: Artificial Intelligence (AI), and more specifically, Large Language Models (LLMs). LLMs exhibit remarkable proficiency at answering queries, reasoning through problems, and generating human-like text.

AI: Beyond Specific Cognitive Tasks

Unlike previous technologies that offloaded specific cognitive tasks, AI generalizes across a broad spectrum of intellectual functions. This raises a critical question: What happens when intelligence—long considered one of humanity's most prized attributes—becomes abundant and, crucially, easily outsourced?

The trajectory of AI-driven skill displacement is not without precedent. The best chess engines, for instance, consistently outperform even the world's top human players (McIlroy-Young et al. 2020). Yet chess remains a popular recreational and competitive pursuit, with players leveraging tools like Stockfish to refine their own strategies.

Most people don't derive meaning from being the best at something—they find purpose in personal growth and the journey itself. Consider competitive gaming: players enthusiastically engage even knowing countless others surpass their skill level. Similarly, the existence of AI systems that outperform humans in various domains is unlikely to render our pursuits meaningless. The satisfaction comes from our individual improvement and engagement with the process.

Some argue that humans will naturally gravitate toward uniquely "human" qualities like creativity and emotional connection as AI advances. However, the chess example suggests a more nuanced reality: people will likely continue developing technical skills despite AI superiority, finding fulfillment in the learning process itself rather than comparative excellence.

Redefining Mastery in an AI-Abundant World

In a landscape where artificial intelligence routinely outperforms humans on measurable metrics, we must reconsider what constitutes mastery. Perhaps the future expert isn't one who knows the most facts or executes tasks with the highest precision, but rather one who demonstrates superior judgment about when and how to deploy artificial intelligence.

Consider the emerging field of prompt engineering—a discipline essentially focused on effectively communicating with AI systems. This represents a meta-skill: the ability to leverage artificial intelligence itself becomes the expertise. As AI tools proliferate across domains, we may witness a shift from domain-specific technical proficiency toward this type of orchestration competency.
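As a minimal sketch of this meta-skill, the snippet below shows prompt construction as an ordinary, reusable programming task: the expertise lives in how the problem is framed for the model (role, context, worked examples), not in any particular model call. The template structure, function name, and example text are all illustrative assumptions, not a prescribed method.

```python
def build_prompt(task: str, context: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a structured prompt: role, context, worked examples, then the task."""
    lines = ["You are a careful domain expert. Answer concisely."]
    if context:
        lines.append(f"Context:\n{context}")
    # Few-shot examples steer the model toward the desired output format.
    for question, answer in examples:
        lines.append(f"Q: {question}\nA: {answer}")
    # The open-ended "A:" at the end invites the model to complete the answer.
    lines.append(f"Q: {task}\nA:")
    return "\n\n".join(lines)

prompt = build_prompt(
    task="Summarize the quarterly report in one sentence.",
    context="Revenue rose 12%; costs were flat.",
    examples=[("What changed?", "Revenue grew while costs held steady.")],
)
print(prompt)
```

The point is not the particular template but the habit it represents: treating the framing of a task for an AI system as a designed, inspectable artifact rather than an ad hoc question.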

AI literacy—understanding the capabilities, limitations, and appropriate applications of artificial intelligence—is rapidly becoming as fundamental as traditional literacy was in previous centuries. Just as reading and writing transformed from specialized skills of scribes to universal expectations, AI literacy may soon be considered a basic requirement for effective participation in society. The truly skilled individual might be one who maintains a mental model of various AI systems' strengths and weaknesses, deploying them strategically while maintaining critical awareness of their limitations.

This mirrors historical transitions in other fields: when calculators became ubiquitous, mathematical education shifted emphasis from computation to conceptual understanding. Similarly, we may see less value placed on information recall and more on information curation, synthesis, and contextual application—skills that currently remain challenging for AI systems.

The Psychological Adaptation Process

As we increasingly integrate AI into our cognitive processes, our very self-concept—our understanding of what it means to be human—undergoes transformation. Throughout history, humanity has defined itself partly through unique capabilities: tool use, language, abstract reasoning. Each time one of these boundaries has blurred, we've experienced collective identity adjustment.

The externalization of cognition to AI systems represents perhaps the most significant boundary dissolution yet. We face the prospect of redefining intelligence itself not as an inherent quality but as a distributed phenomenon spanning human-technology partnerships. This shift may trigger what psychologists call "cognitive dissonance"—the mental discomfort that arises when established beliefs conflict with new experience. As individuals raised to value intellectual capability confront systems that outperform them cognitively, psychological adaptation becomes necessary.

This adaptation might manifest in several ways. Some may embrace "cognitive offloading," viewing AI as an extension of their own mind rather than a separate entity. Others might emphasize distinctly human aspects of consciousness—subjective experience, embodied cognition, or emotional intelligence—as sources of identity. Still others might reorient their self-worth toward relational capabilities: the uniquely human ability to connect emotionally with other humans.

Perhaps most promising is the potential for a more integrated view—one that recognizes human cognition not as a standalone phenomenon but as fundamentally relational and technologically embedded. In this framework, intelligence isn't located solely within individual minds but emerges from our interactions with tools, technologies, and one another. Our psychological adaptation may ultimately lead to a more humble yet expansive understanding of human potential—one defined not by superiority but by symbiosis.

Conclusion

As we navigate this unprecedented shift in our relationship with intelligence, perhaps the most valuable adaptation will be a philosophical one. Rather than viewing AI as either a threat to human value or a simple tool, we might consider it a mirror that reflects back our deepest questions about what truly matters. The externalization of cognitive functions may ultimately free us to explore what lies beyond computation—the messy, beautiful complexity of human experience that resists algorithmic reduction.

The question of values in an AI-abundant world brings us full circle to ancient philosophical inquiries: What makes a good life? What aspects of human experience are most worth cultivating? As AI systems handle increasingly complex cognitive tasks, we may find ourselves with both the capacity and the necessity to engage more deeply with these fundamental questions. In this sense, advanced AI doesn't diminish human value—it creates space for us to rediscover what was valuable all along.

Citations

Dahmani, Louisa, and Véronique D. Bohbot. 2020. "Habitual Use of GPS Negatively Impacts Spatial Memory During Self-Guided Navigation." Scientific Reports 10 (1): 6310. https://doi.org/10.1038/s41598-020-62877-0.
McIlroy-Young, Reid, Siddhartha Sen, Jon Kleinberg, and Ashton Anderson. 2020. "Aligning Superhuman AI with Human Behavior: Chess as a Model System." In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1677–87. https://doi.org/10.1145/3394486.3403219.
Wilmer, Henry H., Lauren E. Sherman, and Jason M. Chein. 2017. "Smartphones and Cognition: A Review of Research Exploring the Links Between Mobile Technology Habits and Cognitive Functioning." Frontiers in Psychology 8 (April): 605. https://doi.org/10.3389/fpsyg.2017.00605.