I've discussed human-intelligence-enhancing technologies and the coming wave of more and more intelligent (as if "intelligence" were binary!) machinery around us.
How will we interact with computers in the years ahead?
What's in store for the Human-Computer Interface (HCI)?
Where do we stop and say "cyborg"?
What happens when we do?
Our current interactions are Informational.
The tools/technologies include the familiar keyboards, monitors, mice, joysticks, and game-console handsets.
The next generation of these is already in use and becoming commercialized: voice-recognition software, eye- and motion-tracking devices, headsets (moving closer to immersive environments à la the Holodeck), and handwriting input (like Palm's Graffiti).
Informational HCIs = computer-as-tool.
The next modality of HCI will be Emotional.
Advances like ambient computing, face and emotion recognition, and computer hearing and vision, along with the increasing refinement of real-time brain-imaging technologies such as Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET) scanning, and Electroencephalographs (EEGs), will enable computers to correlate brain states with emotional expression and develop a robust response set. Transcranial Magnetic Stimulation offers a possible tool for closing the feedback loop directly, although the technology is still a bit crude. I would also note that these are all non-invasive tools.
Emotional HCIs = computer-as-companion.
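To make that correlation step a little more concrete, here's a toy sketch (Python with scikit-learn, and random stand-in numbers rather than real EEG recordings, so the feature layout and emotion labels are entirely my invention) of the kind of brain-state-to-emotion mapping an Emotional HCI would have to learn:

```python
# Illustrative sketch only: learning a mapping from (synthetic) brain-state
# features to emotion labels. Real systems would use actual EEG/MRI features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

EMOTIONS = ["calm", "joy", "frustration"]  # made-up label set

# Pretend features: band power (delta, theta, alpha, beta, gamma) for
# 8 electrodes -> 40 features per observation. Here: random noise.
n_samples, n_features = 300, 40
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, len(EMOTIONS), size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Fit a classifier to correlate brain-state features with emotional labels.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# On random data accuracy hovers near chance (~1/3); with real recordings
# the hope is that genuine brain-state/emotion correlations emerge.
print("accuracy:", clf.score(X_test, y_test))
print("predicted:", EMOTIONS[clf.predict(X_test[:1])[0]])
```

The "robust response set" would then be whatever the system does with that predicted label, which is the part that turns recognition into companionship.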
Beyond these will follow Physical HCIs.
Here I'm thinking full-on wetware. I'm not quite sure we'll have spikes in our necks like The Matrix, but some kind of silicon-neuron connectivity will come.
Physical HCIs = computer-as-augmentation.
This will all end with Metaphysical HCIs.
At this point, computer consciousness and human consciousness will reach some measure of understanding in that they will cease to be distinct.
Metaphysical HCIs = computer-as-self.