Is Presence a Privilege?
Reflections on AI, Equity, and the Future of the Human Experience
As I write this, I find myself caught in a profound paradox. My own research has been dedicated to making AI more seamless, more present, more intuitively woven into the fabric of everyday life. Yet the more I contemplate the trajectories we're creating, the more I'm haunted by a troubling question: Are we designing a future where presence itself becomes a luxury good?
Those of us familiar with AI 2027 (1) are undoubtedly thinking about it a lot; personally, it has consumed many showers and sleepless nights. My quiet but deliberate resistance to our increasingly automated, robot-filled future has taken the form of designing for presence. The AI research community has long championed the vision of ambient computing - technology that recedes into the background while still lending humans its advantages. This vision animates much of my work: creating AI that catalyses presence in our surroundings without demanding much of our attention.
The appeal is undeniable. Superintelligence enhancing our analog lives to help us become the best versions of ourselves paints a rather utopian picture. Yet the more deeply we embed AI into the lives of its early adopters, the clearer it becomes that the benefits are not evenly distributed. As ambient computing systems grow more sophisticated, they promise seamless, intuitive environments where technology anticipates our needs and responds without explicit commands. In doing so, they are creating a new form of privilege: basic human presence and attention are transformed into luxury commodities, and it is the underprivileged who bear the cost.
The educational AI divide is the most concerning facet. It threatens to create what researchers describe as a distinct “cognitive aristocracy”: a layer of society made increasingly capable and self-possessed through AI enhancement, while others navigate systems that become increasingly hostile or bewildering (5). International perspectives reinforce these concerns. UNESCO has warned about unequal global access to AI in education, emphasizing that without universal AI literacy initiatives, students in underprivileged communities will fall “perpetually behind” (2).
This means that we are raising two generations in one. One grows up with time and attention as quiet companions. The other learns to ration them like scarce fuel. In practice, we are giving the former more than tools - we are giving them time itself. Their 24 hours expand, bend, and breathe. For the rest, the day stays fixed, unyielding, a clock they must constantly negotiate.
What emerges, then, is not merely an educational achievement gap, but a widening rift in the texture of lived experience - and its repercussions accumulate over a lifetime.
Those raised with ambient, invisible AI assistance will internalize the expectation that the world adapts to them. Their environment will filter distractions and streamline logistics, gifting these children the priceless sense of calm from which real learning, focus, and resilience can grow. The contrasting cohort faces a world of micro-frictions, with no AI-infused scaffolds to anticipate their needs. Learning gaps become inevitable, but they may be the least of our concerns. Longitudinal research already hints at how early digital disparities echo into adulthood, shaping everything from career prospects to mental health - and the invisible AI divide risks making these outcomes starker and less bridgeable. As “presence” itself becomes a marker of privilege, the two generations grow up inhabiting different temporal realities: one where time feels expansive, the other where it is always running out.
We must also ask what might be lost in an AI-assisted upbringing. Constant AI assistance reshapes emotional coping and autonomy. In the AI-enabled group, many everyday burdens might vanish – but so might opportunities to practice patience, tolerance, and self-regulation. Psychologists caution that deep dependence on AI can undermine autonomy: as one therapist notes, working without AI can become “daunting…even anxiety-inducing,” and people who rely on AI “rarely question” its outputs, undermining confidence in their own judgment (7). A recent MIT study (8) echoes these concerns, highlighting potential cognitive costs for students who lean too heavily on LLMs. In short, for the AI-rich young, slipping into automated assistance is easy, but breaking free can induce fear or helplessness. By contrast, those without AI support may be forced to tolerate more immediate stress, but they may also develop greater self-reliance.
Perhaps the most insidious risk is this: AI systems often fail to “see” everyone equally. The promise of reduced cognitive load rests on a critical assumption: that the system knows what you need, and that you trust it to decide. For those outside the datasets these systems were trained on, the result is often friction, misinterpretation, or outright exclusion. Research shows that common AI systems routinely misgender, misclassify, or stereotype users from marginalized groups, effectively ignoring parts of their identity. Such misrecognition by algorithmic culture can degrade self-worth: Waelen and Wieczorek argue that these errors signal to underrepresented users that they matter “less” in the system, which could “affect… their ability to develop self-esteem and self-confidence” (9). This opaque classification can act like a new form of identity theft: by forcing people into broad data-driven categories, algorithms constrain self-expression and treat presence as data to be bought and sold. In turn, those excluded (e.g. people in non-Western cultures, rural communities, older users) not only miss out on AI’s benefits but face intensified invisibility.
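To make that mechanism concrete, here is a minimal sketch in Python - entirely synthetic and hypothetical, not drawn from any of the studies cited above - of how a model trained on data that under-represents one group can end up serving that group systematically worse:

```python
# Minimal sketch (synthetic, hypothetical data): a classifier trained on a
# dataset dominated by one group tends to misclassify the other.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic group: features drawn around `shift`, with a
    group-specific relationship between features and label."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * shift > X[:, 1]).astype(int)
    return X, y

# Training set: the majority group dominates (950 vs. 50 samples).
X_maj, y_maj = make_group(950, shift=0.0)  # well-represented group
X_min, y_min = make_group(50, shift=2.0)   # under-represented group
model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Fresh test samples from each group reveal the gap.
X_maj_t, y_maj_t = make_group(1000, shift=0.0)
X_min_t, y_min_t = make_group(1000, shift=2.0)
print("majority-group accuracy:         ",
      accuracy_score(y_maj_t, model.predict(X_maj_t)))
print("under-represented-group accuracy:",
      accuracy_score(y_min_t, model.predict(X_min_t)))
```

The point is not the specific numbers but the structural pattern: the model optimizes for the population it saw during training, and everyone else inherits the error.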
The horizon of ambient AI suggests a stark divergence: some will live in a richly mediated world where identity and consciousness are intertwined with algorithms, while others remain analog in spirit. These futures raise urgent questions. If presence – our attention, time, and engagement – becomes a resource to optimize, how do we guard it from exploitation and uneven distribution?
Can we imagine designing AI-rich environments that enhance human agency instead of eroding it, and at the same time ensure that everyone has fair access to those benefits? For instance, might we adopt design principles and policies that treat attention as a shared commons rather than a commodity? Could digital literacy and user controls empower all individuals to manage AI tools, rather than be passive subjects of them?
----------
References:
1. https://ai-2027.com/
2. https://www.unesco.org/en/articles/ai-literacy-and-new-digital-divide-global-call-action
3. https://www.cip.org/
4. https://www.weforum.org/stories/2025/01/digital-divide-intelligent-age-how-everyone-can-benefit-ai/
5. https://newhouse.syracuse.edu/research/research-spaces/emerging-insights-lab/2024-25-fluency-report-bridging-the-ai-digital-divide/
6. https://www.cs.cmu.edu/~asim/DistractionFreeComputing.pdf
7. https://cbhealthpartners.com/blog/ai-dependence
8. https://www.media.mit.edu/publications/your-brain-on-chatgpt/
9. https://philarchive.org/archive/WIETSF