Our Senses in Flux: How AI Is Quietly Shifting Our Perception
Remember when GPS first hit the mainstream? Suddenly the sense of direction you’d honed over years of memorizing maps wasn’t so vital. Why bother when your phone could just tell you where to turn?
Of course, we didn’t lose our sense of direction overnight. It was a gradual shift as we started using the tech in our daily lives. A similar subtle change is happening now with our senses. AI and generative search are altering our experiences in nuanced ways we’re only starting to understand.
Consider your eyesight. With AI apps that can generate images from text prompts, we’re entering an era where visuals are fluid and changeable. Need an illustration of a fantasy creature? AI can now manifest it for you from a few words of description. As this kind of image generation (and visual recognition) enters more of what we create, our relationship with sight may evolve. Vision could become less a passive reception and more an active sense we modulate using AI.
Hearing and language are being augmented too. Large language models like GPT-4 spit out remarkably human-sounding text. Give one a few opening sentences and it’ll continue with an eerily coherent story. As these AIs become part of more business writing and communication, our notions of voice and authorship grow fuzzier. Whose ideas are we reading? The human who started the text or the machine that extrapolated it? This technology makes language less about transmitting pre-formed thoughts and more about providing a launch pad for AIs to riff on.
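If you’ve never seen this in action, here’s a minimal sketch of how a developer might ask a model to continue a story. It assumes the openai Python package and an API key in your environment; the opening line is invented for illustration, and the API details may change over time.

```python
# A minimal sketch of text continuation, assuming the openai Python SDK
# (pip install openai) and an OPENAI_API_KEY set in the environment.
# The model name and fields are illustrative; the API evolves over time.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

opening = "The lighthouse keeper had not spoken to anyone in three years, until"
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"Continue this story: {opening}"}],
    max_tokens=120,  # keep the continuation short
)
print(response.choices[0].message.content)  # the machine's half of the authorship
```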
Smell and taste remain largely unaffected for now, though scientists are making progress on digital scent tech. Imagine browsing a shopping site’s candle section while subtle puffs of fragrance waft out for each item. Virtual banquets where you taste food through electrical stimulation of the tongue already exist in crude form. If the techniques improve, foodies may someday swap travel for VR dining adventures enjoyed from home.
Touch is also getting an upgrade thanks to haptics and tactile interfaces. Controllers that convincingly simulate textures, weights, and recoils in your hand can transport gamers into immersive worlds. Medical students learn procedures through strikingly lifelike surgical simulators. And consumer devices use nuanced vibrations as intuitive cues you can navigate by feel. Expect haptics to mature as designers realize the power of sensory persuasion.
Behind these human-tech mergers lurks a classic AI design: neural networks, loosely inspired by the brain’s architecture. Thanks to this mimicry, we’re now teaching machines senses and cognition akin to our own. The possibilities are equally exciting and eerie.
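To make “loosely inspired by the brain” concrete, here’s a toy sketch in plain Python: artificial “neurons” that sum weighted inputs and fire through a squashing function. The weights below are hand-picked purely for illustration; real networks learn millions of them from data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of inputs squashed by a sigmoid."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # 'firing strength' between 0 and 1

def tiny_network(inputs):
    """A toy two-layer network: three hidden neurons feeding one output neuron.
    Weights are invented for illustration; real networks learn them from data."""
    hidden = [
        neuron(inputs, [0.5, -0.2], bias=0.1),
        neuron(inputs, [-0.3, 0.8], bias=0.0),
        neuron(inputs, [0.9, 0.4], bias=-0.2),
    ]
    return neuron(hidden, [0.7, -0.5, 0.3], bias=0.05)

print(tiny_network([0.6, 0.2]))  # one 'perception', squeezed to a value in 0..1
```

Stack enough of these layers, train the weights on mountains of data, and the same simple arithmetic starts recognizing faces and finishing sentences.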
It can feel like we’re training protégés, then unleashing them to hack our senses. Take DALL-E 3, an image generator trained on millions of captioned photos and artworks. By crunching so much visual data, it learned conventions like perspective, lighting, and depth. Give it a prompt like “a bear reading a book in a theater” and it paints a photorealistic scene, demonstrating an uncanny mastery of sensory patterns.
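For the curious, generating such a scene programmatically looks something like the sketch below. It assumes the openai Python SDK and an API key; treat the model name and parameters as illustrative rather than definitive.

```python
# A minimal sketch of prompting an image model, assuming the openai Python
# SDK (pip install openai) and an OPENAI_API_KEY in the environment.
# Parameters here are illustrative and may differ from the current API.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="a bear reading a book in a theater, photorealistic",
    size="1024x1024",
    n=1,  # one image
)
print(result.data[0].url)  # a link to the generated scene
```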
But while this AI mimics our senses, it lacks human context or purpose. Its goals are just the user prompts that activate its creativity. This motive void can make its output feel disconnected and dreamlike. Like senses detached from an intelligence that grounds them.
Some theorists argue this untethering makes AI generative tools feel fresh and engaging. By remixing human cultural ingredients minus our preconceptions, they produce novel fusions that surprise and inspire us. We then integrate these computer-birthed ideas into our works like conceptual pollen cross-fertilizing creative fields.
And early adopters report AI collaboration does enliven their output. Musicians use neural networks to harmonize melodies in unusual ways. Fashion designers apply AI pattern generators to create novel textiles. Chefs employ food chemistry models to devise unconventional flavor combos. By enhancing the creative process, generative tools expand what our senses perceive as possible.
Yet concerns remain. If we lean on these tools too heavily, could human creativity shrink? And what happens when fake images and text get exploited for propaganda or fraud? Any transformative tech brings complex ethics in its wake.
Tracing the interplay between senses and tech has precedent. Media ecology thinkers like Marshall McLuhan and Neil Postman argued that each new medium alters our dominant sense ratios, and applying their perspective to today’s changes yields insights.
For example, the phonetic alphabet privileged sight by making speech visual. Typography further expanded visual culture. Photography forged new modes of seeing. Film intensified visual flow through cuts, zooms and montage.
Today’s AI extends this lineage. Its probabilistic insights and pattern detection amplify themes and tropes latent in its training data. So it’s less an outside influence than an amplification of tendencies already encoded in human culture.
Take how AI art remixes styles, motifs, and composition principles from the fine arts in its dataset. Its visions don’t appear in a void; they channel aesthetic currents flowing through art history into new configurations.
We see this across creative domains. Neural networks intensify themes, styles, and sensory patterns that resonate in their learning materials, but they interpolate these elements in ways their human creators might not have conceived. It’s innovation by recombination.
This paradoxically both preserves and transforms the media ecology the AI inhabits. In McLuhan’s terms, it’s both a rearview mirror and a crystal ball, revealing where we’ve been through new lenses while opening unforeseen vistas.
The mirror also reflects our own hybrid nature. Humans have always been symbiotic creatures, outsourcing key functions to tools and tech. Computing power and big data now let us externalize aspects of perception, cognition, and even creativity.
Far from merely diminishing our senses, augmenting tech can strengthen them by freeing attention for higher-order tasks. GPS navigation, for example, lets drivers focus less on maps and more on driving safely. Likewise, AI creativity tools liberate us to pursue ideas rather than their tedious execution.
Of course, balance is key. Every enhancement carries subtler trade-offs. Technologies that adapt to us can make us adapt to them. Each augmented sense requires exercising its natural analog to stay grounded.
This interplay of augmentation and atrophy underlies our uneasy alliance with technology. By outsourcing certain sensory and cognitive tasks, we let our natural skills migrate onto silicon scaffolds, like shingles gradually replacing shakes on an aging roof.
As AI enters its neural network phase, the scaffolding looks less mechanical and more biological. Natural and artificial sensing share structural affinities, intertwined in an increasingly intimate dance.
Where this ends is anyone’s guess. Some predict a merger of biological and digital; others, a plateau as we come to recognize embodiment’s value. Both futures are contending in the present.
One thing seems likely: the transformation won’t stop. Our sensations will keep encountering new mutations, and our dominant sense ratios will keep shifting. Such is perception’s fate when tethered to protean technological forms.
The wisest course may be mindful adaptation: neither embrace nor recoil, but observing how each novel sensory scaffold reshapes our dwelling, and nurturing the connective tissue that binds us to direct experience.
For at the core, lived sensation endures. Vision, hearing, smell, taste, touch — these portals to the real survive all media. And like fixtures in a renovated home, they remind us where we began. Before gadgets got so good at being human.