Analysts who try to predict the future of technology have one quality in common: they are almost always wrong. The internet and personal computers are phenomena that defied (most of) the collective imagination that spawned them. Even seemingly prescient novels like Neuromancer can only come so close. To attempt to predict what will become of the different mobile platforms (like iOS, Android and BlackBerry) is inherently futile.
Yet, as a mobile application developer it is part of my job. One thing is immediately clear: the mobile application market is one that has experienced a growth curve reminiscent of the web boom in the late 90s. Clearly mobile software is here to stay.
A Whole New Approach
The most striking quality of mobile hardware is that it represents a new approach to computing. Platforms like Android and iOS are operating systems designed specifically to address the needs of a whole new class of hardware. In building effective support for touch screens, smaller displays, and the like, companies were forced to take operating systems back to the basics.
This is good for programmers, users and even the progress of technology as a whole. By stripping down to the basics, the software could be written with a fresh start of sorts. Every major operating system (most notably Windows) has become bloated over the years. They no longer have the clean and streamlined internals that drive efficiency.
Programmers, in turn, had to adapt to new constraints and tools. All of a sudden memory management became an issue again, meaning that we had to be more aware of the efficiency and stability of our software. In this sense, mobile software was a wake-up call for developers to go back to the basics and write better software.
Now that we have done so, mobile devices and desktops can begin to converge…
Looking at the recently-announced iCloud as well as the emergence of Android-based tablets, it is not hard to see that Steve Jobs is right: we are moving away from our computers as a central hub. More and more, the mobile device is all we need. I have friends who conduct business, go to meetings, etc. with only their iPad. Some do not even have traditional computers at all.
Thanks to Moore’s Law and consumer demand, mobile hardware is getting faster at the same time that it gets smaller. An interesting byproduct of hardware becoming smaller is efficiency – it requires less power. There is no escaping the current trend: smaller, faster, more connected, with longer battery life.
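To put some rough numbers on that trend, here is a minimal sketch of the scaling Moore's Law implies, assuming the commonly cited doubling period of about two years (the starting transistor count and time horizon are illustrative, not figures from this post):

```python
def transistor_count(base_count, years, doubling_period=2.0):
    """Project a transistor count forward, doubling every `doubling_period` years."""
    return base_count * 2 ** (years / doubling_period)

# Illustrative example: a mobile chip with ~500 million transistors,
# projected a decade out at a 2-year doubling cadence.
projected = transistor_count(500e6, 10)
print(f"{projected:.2e}")  # ~1.6e10, a 32x increase
```

Even if the real cadence slips, the compounding is what closes the gap with desktops so quickly: a few doubling periods dwarf any fixed head start.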
So, the gap in capabilities between computers and mobile devices is quickly closing. There will come a time when the computational power and size of a mobile device will be sufficient to do all conceivable work and still fit in a pocket. There is just one bottleneck remaining.
The Last Bottleneck
Speed and size will eventually catch up with computers, but mobile devices have yet to see an adequate solution for the input problem. Virtual keyboards are clunky, while external keyboards destroy the benefit of “carrying just one device in your pocket.” They are simply too large to fit in with the mobile paradigm. A truly mobile input method would let me interface with the mobile device (type, click, etc) at the same speed as I currently do on my laptop while allowing me to move about (stand or even walk).
There are some interesting developments on this front. Certainly, virtual keyboards are getting better and “slide-out” hardware keyboards are still popular with some. Both are great intermediary steps, but ultimately we will need to do away with both. We need a way to input text without the idea of a keyboard as we currently know it. We need a technological advancement as revolutionary as the mouse and the GUI, pioneered at SRI and Xerox PARC.
To this end, I think we need to look at emerging technologies. One amazing TED demo shows an interface that can actually monitor the position of fingers in mid-air, which could hypothetically be adapted to create a text input method based upon gestures.
What I am more excited about, though, are BMIs (short for Brain-Machine Interfaces). The term may conjure scary images, but in truth BMIs do not need to be invasive. In fact, I used one every day for my study drugs experiment. The NeuroSky headset I own is just a band worn about the head that reads brain waves, and it is even sufficient to control a video game.
Following neuroscience is one of my passions – the field is moving insanely quickly. I have no doubt that the remaining technical hurdles will eventually be surmounted. The question is simply: will people accept it?
Becoming a Cyborg
The truth is, we’re all cyborgs now.
People have gotten very used to Bluetooth headsets. Sometimes they are even seen as a sort of status symbol or workaholic jewelry. BMIs will eventually reach this level of size and accuracy, and could conceivably even be hidden underneath the hair. You might wear one without anybody realizing it, yet be able to use it to communicate with all the nearby devices you have permission to access.
To get even a bit more sci-fi, there have recently been other innovations of interest. Scientists have managed to give rats contact lenses that contain computer screens, and neuroscientists have restored sight by attaching a camera to the optic nerve and even started to turn it into a commercial technology.
It is hard to say which, if either, will lead to a technology that allows us to no longer need a screen on our mobile devices. In either case, though, we would no longer need to ever pull the device out of our pocket. Between BMIs and contact lenses/optic implants we would have everything we ever needed to communicate with the device.
If all this sounds scary, consider laser eye surgery and cochlear implants. I had a laser shined into my eye for a few seconds a year ago and now I have better than 20/20 vision, and my father had the same procedure over a decade ago. Even more impressive, cochlear implants are small devices wired into a deaf person’s auditory system. The amazing bit is that they are not simply amplifying audio, but transmitting signals directly to the nervous system.
Both of these technologies have been around for years and are performed as fairly standard medical operations. The rate of complications is low enough, and the benefits high enough, that many people choose them.
So, if someone offered you surgery tomorrow to have both input (BMI) and output (visual transmission) from any of your devices with no aesthetic changes, would you accept? Would you, in effect, become a cyborg if it did not change your appearance?