In our modern world, the most precious asset isn’t oil, artificial intelligence, or cryptocurrency. These are mere objects, devoid of worth without human attention to bestow value upon them. Attention stands as the most powerful and coveted resource, influencing everything from the demand for rare minerals to global trade regulations. Capturing attention, even through something as straightforward as a skillfully crafted iPhone commercial, can set off a chain of tangible actions: resource extraction, labor mobilization, and geopolitical influence. Once you learn to harness attention on a large scale, mastering the physical world becomes relatively easy.
The line between our consciousness and the attention economy is blurring
Today, corporations, wealthy individuals, and soon artificial intelligence are all in a race to seize and manipulate human attention for their purposes. Just three decades ago, attention was captured through clumsy, broad methods like print ads and television commercials. Despite their inefficiencies, these approaches succeeded in mobilizing entire populations and shaping societies toward remarkable feats, sometimes even enabling the dehumanization needed for total warfare. The advent of the iPhone marked a significant leap in attention management, introducing a cognitive tool that connected individuals to a global network, effectively turning nearly everyone into a “tuned-in cyborg.”
Fast forward less than 17 years from the iPhone’s debut, and we’re on the brink of another revolutionary shift, one that promises to be more turbulent and transformative than we can currently fathom. Imagine a future where corporations view your mind as an extension of their data systems, extracting your attention directly from your neural activity to drive profit while selling you a façade of autonomy. Picture a reality “enhanced” by limitless artificial worlds, sensations, and experiences.
“The sky above the port was the color of television, tuned to a dead channel.”
William Gibson’s seminal work, Neuromancer, penned in 1984, eerily anticipates our current trajectory. In this future, privacy is nonexistent, with massive corporations treating data as a commodity to be hoarded and sold. Reality morphs into a shared “consensual hallucination,” experienced by billions who are neurally linked to cyberspace, which is governed by corporations and reliant on frequently compromised infrastructure. Innovations like the Apple Vision Pro and Orion AR, often dismissed as mere gadgets for the affluent, inch us closer to this unsettling reality, as they incorporate advanced hardware that connects our intentions and thoughts directly to the digital environments they generate.
To capture attention, focus on the eyes
The Apple Vision Pro represents a new frontier, building upon the legacy of Google Glass by creating a system that reacts to our attention through eye movement. Its internal sensors can infer emotional states, cognitive load, and arousal by analyzing subtle changes in pupil size and rapid eye movements. For instance, pupil size tracks noradrenergic tone, a marker of sympathetic nervous system activity driven by the locus coeruleus, the brainstem nucleus central to attention and arousal. While the device’s current applications may seem limited, it’s impressive that Apple has eliminated the need for external input devices, allowing users to navigate and manipulate digital spaces using just their gaze.
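To make the pupillometry point concrete, here is a deliberately simplified sketch of how a gaze stream could be turned into an affect signal. Everything here is hypothetical (the function name, the z-score threshold, the assumption of a clean diameter series); real pipelines must also correct for luminance, blinks, and gaze angle before attributing dilation to arousal.

```python
import statistics

def arousal_events(pupil_mm, z_thresh=2.0):
    """Return indices where pupil diameter (mm) spikes above baseline.

    Illustrative only: a real system would correct for lighting and
    blinks before reading anything psychological into dilation.
    """
    mean = statistics.fmean(pupil_mm)
    sd = statistics.stdev(pupil_mm)
    # Flag samples whose z-score exceeds the threshold.
    return [i for i, d in enumerate(pupil_mm) if (d - mean) / sd > z_thresh]

# A steady 3.0 mm baseline with one sharp dilation at the end:
series = [3.0] * 20 + [4.5]
events = arousal_events(series)  # flags only the final sample
```

Even this crude statistic shows how little raw data is needed to start labeling a user's internal state, which is precisely why such signals are commercially valuable.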
Society is being conditioned, and so are the influencers
Despite the technological wonders, it’s crucial to recognize that these advancements come with significant costs. Devices like the Vision Pro are subtly preparing society for a future where more invasive technologies, such as Neuralink or other brain-computer interfaces, could potentially erode human agency. The current economic landscape fails to prioritize privacy, autonomy, or digital rights because maximizing profits often depends on manipulating human behavior in predictable markets, such as those centered around status, security, and desire.
These markets thrive on collective attention and memetics rather than on the self-determined, free-thinking individual. If this grim scenario unfolds, technologies that connect personal decision-making with information systems will disproportionately benefit corporations, governments, and increasingly sophisticated artificial agents. The primary beneficiaries won’t be ordinary users, authors, or even the creators of these technologies, but rather the artificial entities optimizing for goals that may alienate the humans who birthed them. This is the path we’re on unless we take decisive action.
A biometric privacy framework is essential
There is widespread consensus on the need for privacy. Yet despite regulatory frameworks like the GDPR and CCPA, along with initiatives like Chile’s Neuro Rights Bill, the fundamental issues remain unresolved, and the dangers are escalating. Regulation aims to tackle these concerns, but without effective implementation it falls short.
What we lack is a foundational integration of digital natural rights into the very fabric of the internet and connected devices. This requires making it exceedingly simple for individuals to maintain self-custody of their cryptographic keys, which can safeguard communications, verify identities, and protect personal data without depending on corporations, governments, or third parties.
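The self-custody idea above can be sketched in a few lines: the key is generated locally and never leaves the device, so no corporation, government, or third party is needed to attest to the user's data. This toy uses a symmetric HMAC purely for brevity; a real self-custody scheme would use asymmetric signatures (e.g., Ed25519) so that anyone can verify without holding the secret. All names here are illustrative.

```python
import hashlib
import hmac
import secrets

# Key generated on-device from the OS entropy pool.
# Self-custody: the secret never leaves the user's hardware.
key = secrets.token_bytes(32)

def attest(message: bytes) -> bytes:
    """Produce a tag proving the message came from the key holder."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Check a tag in constant time to avoid timing side channels."""
    return hmac.compare_digest(attest(message), tag)
```

The design point is not the primitive but the trust model: because the key is born and kept locally, the user, not a platform, decides what gets attested and to whom.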
Holonym’s Human Keys present one viable solution. By allowing individuals to generate cryptographic keys securely and privately, we can defend sensitive data while ensuring autonomy and privacy. Human Keys stand out because they don’t necessitate trust in any single corporation, person, or government for their creation or use.
Integrating technologies like homomorphic encryption, which allows computation on data while it remains encrypted, with devices such as Apple’s Vision Pro or Neuralink could enhance cognitive functions without sacrificing user data privacy.
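A minimal sketch of that principle, using the textbook Paillier cryptosystem with toy parameters (real deployments use roughly 2048-bit moduli and vetted libraries, not hand-rolled code): a server can add two encrypted sensor readings together without ever seeing either plaintext, because multiplying Paillier ciphertexts adds the underlying messages.

```python
import math
import secrets

# Toy Paillier key: tiny primes for illustration only.
p, q = 61, 53
n = p * q                      # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # valid because the generator is g = n + 1

def encrypt(m: int) -> int:
    """Encrypt m < n with fresh randomness, so equal inputs differ."""
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic addition: the server multiplies ciphertexts and never
# learns 5 or 7, yet the key holder decrypts their sum.
c_sum = (encrypt(5) * encrypt(7)) % n2
assert decrypt(c_sum) == 12
```

Applied to a headset or implant, the same pattern would let aggregate statistics be computed off-device while the raw biometric stream stays encrypted end to end.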
However, it’s crucial to understand that software alone cannot solve these problems. We must also have secure hardware that complies with publicly verifiable and open standards. Governments have a critical role in ensuring that manufacturers adhere to stringent security protocols when developing devices that manage and store cryptographic keys. Just as clean water and breathable air are public goods, secure hardware for storing these keys should also be treated as such, with governments responsible for their safety and accessibility.
As we look forward to the future of ethical neurotechnology, we must heed the warnings of visionaries like Gibson who cautioned against the erosion of privacy, autonomy, and humanity through technology. Brain-computer interfaces (BCIs) hold the potential to enhance human capabilities, but only if driven by ethical considerations. By embedding biometric privacy into our digital systems, utilizing self-custodial keys and homomorphic encryption, and advocating for open hardware standards, we can ensure that these technologies empower rather than exploit individuals.
Our future does not have to be dystopian. Instead, it can be a realm where innovation elevates humanity, safeguarding our rights while unlocking new possibilities. This vision is not merely optimistic; it is crucial for shaping a future where technology serves us, rather than controls us.