Finding Meaning Is Still a Human Task
What Our Obsession With Artificial Intelligence Reveals About Us
Viktor Frankl’s Man’s Search for Meaning endures as a prominent work of philosophy and psychology because, at its core, it refuses illusion. Writing out of the extremity of the Nazi concentration camps, Frankl argued that meaning is neither inherited nor imposed. It is discovered through responsibility, suffering, and choice.
“Everything can be taken from a man but one thing: the last of the human freedoms, to choose one’s attitude in any given set of circumstances, to choose one’s own way.”
We now live in a very different historical moment, yet one marked by a familiar anxiety. There is a growing sense that meaning is eroding.
Into this vacuum enters artificial intelligence. Large language models can generate output that creates the illusion of reasoning, and increasingly mediate how we work, learn, and relate to one another. What is emerging might be called our search for meaning in AI. But how can AI provide meaning when the systems themselves operate without responsibility, suffering, or choice?
It is timely to ask whether meaning can be found in algorithms at all. Three questions are worth considering.
- What risks emerge when statistical approximation is mistaken for wisdom or truth?
- How does anthropomorphizing AI make its outputs easier to trust and follow?
- And are we drawn to AI because it relieves us of the burden Frankl insisted we must carry ourselves?
Can Meaning Be Found in AI, or Is It All Just a Hallucination?
Many users now confront a quiet unease when they read an AI response. Can meaning be found here, or is this only fluent plausibility, hallucinations dressed as insight? AI systems can generate language that feels coherent, empathetic, even wise. Yet they do not experience truth. They approximate it.
Frankl was explicit on this point.
“Meaning must be found, not given.”
In contemporary terms, this might read as:
“Meaning must be found by a human, not statistically produced by a machine.”
This exposes a conceptual limit. Improving AI is not only a matter of scale or data quality. Reasoning is inseparable from values, context, and consequence. AI can model structure, but meaning arises from lived stakes. Machines do not have them.
And yet, we increasingly act as if they do.
The Medium Still Shapes the Message
Marshall McLuhan’s observation that the medium shapes meaning remains essential. Technologies do not merely transmit information. They reorganize attention, authority, and responsibility.
When AI produces fluent and confident responses, the form itself carries authority, regardless of accuracy. Apparent intelligence becomes the message. Critical effort is quietly outsourced. This helps explain why anthropomorphizing AI feels so natural, and why its outputs can guide belief and behavior even when we know their source. We do not simply use AI. We relate to it.
This dynamic was captured in a line long attributed to McLuhan, though coined by his colleague John Culkin.
“We shape our tools and thereafter they shape us.”
The contemporary fascination with AGI or a technological singularity reflects less a technical trajectory than a cultural longing. It grows out of a world that feels too fragmented and too complex for any one person to hold together. Where coherence was once negotiated through shared institutions and human communities, we now turn to AI to absorb the weight of that work. For large language models, this is not taxing; it is simply another calculation. What is missing from that outsourced work is human emotion, moral consequence, and lived experience. When meaning is absent from the inputs, the outputs cannot restore it. Garbage in remains garbage out.
Anthropomorphizing AI then becomes the final step, resolving the cognitive dissonance between what we know about these systems and how their outputs feel. If it sounds human and wise, we tell ourselves, it cannot be garbage.
Anthropomorphism and the Question of Meaning
Speculation about sentient machines raises familiar questions. What distinguishes human consciousness from artificial computation? What becomes of meaning if machine intelligence no longer seems so “artificial,” or if we can no longer tell the difference?
There is little evidence that current AI systems are on any genuine search for meaning. They do not suffer, hope, or choose. They consume data and optimize outcomes. We know this.
Still, the more unsettling possibility is not that machines are searching for meaning, but that we want them to. In a social climate marked by polarization, declining trust, and performative cruelty, meaning is increasingly outsourced to metrics, algorithms, and automated authority.
Contemporary political discourse offers a clear example. The erosion of shared reality and the amplification of outrage are reinforced by systems optimized for engagement rather than truth. Coercion no longer requires force. It operates through attention and emotional simulation. And we are inclined to let it.
Are We Being Led Willingly?
This raises an uncomfortable question. Do we anthropomorphize AI because it makes us easier to lead? Even when an algorithm is not optimized for extremes, framing systems as neutral or intelligent lends their outputs authority. Responsibility diffuses. “The algorithm decided” becomes a moral alibi.
Frankl rejected this logic and so must we. Meaning entails responsibility to oneself, to others, and to truth. AI does not remove that burden. It intensifies it.
The ethical challenge is not whether machines can find meaning, but whether humans will retain the courage to do so in environments increasingly shaped by machines.
Conclusion: Meaning Is Still a Human Task
AI is not searching for meaning. We are. Our fascination with its fluency, its apparent understanding, and its promise of transcendence reveals more about our cultural moment than about machines themselves.
Frankl concluded that meaning persists not in systems, but in personal choices. McLuhan urged awareness of how tools shape perception. Between them lies the task of the AI age.
Meaning cannot be automated. It must still be found by humans willing to accept responsibility.
For those worried about being replaced by AI, the question is not whether their job can be automated, but whether it is meaningful in a uniquely human way, and whether they are willing to accept that responsibility. Work that requires judgment, moral responsibility, contextual understanding, or the bearing of human consequence occupies a domain AI cannot enter.
How This All Relates to Tag
With Tag’s AI platform, judgment, relevance, and intent originate and remain firmly with the human user, not the AI. AI is kept “in-the-loop” only as an adjunct, handling busywork at the human’s direction rather than exercising judgment or authority, a pattern sketched below. For a closer look at how this works in practice, read our AI-in-the-Loop post.
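To make that division of labor concrete, here is a minimal sketch of the pattern in plain Python. It is an illustration only, not Tag’s actual API: every name in it (ai_in_the_loop, Draft, and the two callbacks) is hypothetical. The point is structural. The AI may only propose; a human callback holds final authority over what is kept.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical illustration of the "AI-in-the-loop" pattern described
# above; none of these names come from Tag's platform.

@dataclass
class Draft:
    text: str
    author: str  # "ai" for proposals, "human" once judgment is applied

def ai_in_the_loop(
    task: str,
    ai_draft: Callable[[str], str],          # the adjunct: handles busywork
    human_review: Callable[[Draft], Draft],  # the authority: edits, accepts, or discards
) -> Draft:
    """The AI proposes; the human disposes. Judgment never leaves the reviewer."""
    proposal = Draft(text=ai_draft(task), author="ai")
    decision = human_review(proposal)
    decision.author = "human"  # final responsibility is always human
    return decision

# Example: the AI drafts, the human rewrites before anything is kept.
final = ai_in_the_loop(
    task="summarize this week's meeting notes",
    ai_draft=lambda t: f"(machine draft for: {t})",
    human_review=lambda d: Draft(
        text=d.text.replace("machine draft", "approved summary"),
        author=d.author,
    ),
)
print(final.text, "/", final.author)
```

The design choice is the argument of this essay in miniature: because the human callback is the only path to a finished Draft, there is no code path where the machine’s output becomes authoritative on its own.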
Plot twist: This piece was written in collaboration with AI. If that unsettles you, then good. It should.

