Published by Dan Cunning on Jul 10, 2014
Filed under Technology
The first feature of computers also provided their name: computing accepts inputs and generates outputs. In World War II, it aided physicists creating the first atomic bomb and intelligence officers deciphering enemy communications. These scientists could have done the work by hand, but computers automated the repetition and minimized human error once given a set of instructions (a computer program).
Though powerful, these first computer programs lacked storage. If they needed to perform 2 + 1, they had to explicitly specify both numbers (data) along with the operation (behavior). Persistent storage brought computers' second leap forward. By separating behavior from data, programs could work together: one reads in data while another analyzes it, a meaningful separation of responsibilities that led to more complex functionality. Databases were developed that allowed governments and businesses to analyze unheard-of amounts of information.
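To make that separation concrete, here's a minimal sketch in Python, purely my own illustration: one program only records data, the other only applies behavior, and either can be swapped out independently. The file name and the numbers are assumptions.

```python
# A sketch of separating data from behavior. "readings.txt" and the
# numbers are assumptions for illustration only.

# Program 1: record the data (no analysis here).
with open("readings.txt", "w") as f:
    for value in [2, 1, 5, 8]:
        f.write(f"{value}\n")

# Program 2: apply the behavior to whatever data it finds (no recording here).
with open("readings.txt") as f:
    values = [int(line) for line in f]

print("sum:", sum(values))                    # 16
print("average:", sum(values) / len(values))  # 4.0
```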
New storage methods were developed, improving reliability, capacity, and speed. To ensure different programs could understand the same data, formats like GIF, WAV, and MPEG were standardized. Soon entire encyclopedias, books, songs, movies, and even the Earth (through satellite and aerial imagery) were digitized. These standards and the increasingly cheap, reliable methods of storage have allowed data to essentially live forever.
As programs and their data became more complex, computers needed new ways to interact with their users. Computers that only displayed text were replaced with ones that could generate (render) more complex graphical imagery:
While the technical progress is best followed in the evolution of video games and movies, the graphics pipeline plays a critical role in every computer: displaying data, icons, windows, menus, pictures, videos, and webpages. Because rendering graphics differs so greatly from what the CPU does, computers now contain a GPU. GPUs are more limited than CPUs, but given the right problem they can be 10x, 100x, even >1000x faster, which has led them into non-graphical applications like artificial intelligence, weather forecasting, bioinformatics, and molecular dynamics.
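For a feel of the kind of problem GPUs thrive on, here's a sketch in Python/NumPy. It runs on the CPU, and the frame dimensions are assumptions, but the pattern is the point: one tiny operation applied to millions of independent values, which a GPU spreads across thousands of cores at once.

```python
import numpy as np

# A CPU-side sketch of data-parallel work (the pattern GPUs excel at).
# A 1080p frame full of random values stands in for a real image.
frame = np.random.rand(1080, 1920, 3)

# One element-wise operation over ~6 million values: brighten every pixel
# independently, then clamp back into the valid range.
brightened = np.clip(frame * 1.2, 0.0, 1.0)

print(brightened.shape)  # (1080, 1920, 3)
```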
Currently, clock speeds have plateaued between 2 and 3 GHz, hard drives are big enough to store everything most people desire, and renderings are limited only by the artist's abilities, but our current computers are defined by other advancements.
ARPANET was developed by the United States government in the 1960s and '70s and pioneered how computers talk to one another with TCP/IP (the protocol of the Internet). Before TCP/IP, sharing information across computers required a physical disk or a direct connection along with compatible software and hardware. After TCP/IP, any computer could communicate with all other connected computers. A network of cables, routers, and switches would handle the rest.
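For a feel of what that buys you today, here's a minimal Python sketch; the host and port are assumptions for illustration, and any reachable machine and service would do. The program only names the other computer; the cables, routers, and switches do the rest.

```python
import socket

# A minimal TCP client sketch. "example.com" and port 80 are assumptions
# for illustration; TCP/IP itself doesn't care who is on the other end.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    # Speak a protocol layered on top of TCP/IP -- here, a bare HTTP request.
    conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = conn.recv(1024)

print(reply[:80])  # the first bytes of whatever the other computer sent back
```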
The number of connected computers grew, and by the mid-1980s communication across the world was commonplace. New protocols built on top of TCP/IP emerged, such as email, Usenet, HTTP, chatrooms, and instant messaging. New online services like America Online, CompuServe, and Prodigy were established to introduce these features to the non-tech-savvy. Traffic on the Internet doubled every year of the 1990s, but the need for a network of wires connecting every computer kept it limited to wealthier, more populated areas until the wireless explosion of the 2000s.
The invention of the transistor slowly shrank room-sized computers into handheld ones, while wireless networks removed the wires. Mobile devices were born. They gained momentum with Palm Pilots and BlackBerrys, then boomed with Android and the iPhone.
All life-changing advancements in computers since around 2005 boil down to these two factors: your computer is networked to every other computer, and it's small enough to carry everywhere with you. Everyone is always connected and always communicating: the Communication Age.
This communication drives "the Internet Economy," which now attracts most of the interest and investment in computing. Computers are the driving force behind gathering, organizing, transferring, and analyzing information at a worldwide scale. If you get paid to work with computers in the 2010s, you're somewhere in this information pipeline, but computers can't stop there: we need to build on it.
Computers are already inside your most expensive purchases. Your car has computers monitoring its engine, integrating with your phone, giving you directions, and possibly watching from afar with LoJack or OnStar. Few people want to spend $50 to connect a light switch to Wi-Fi, but prices will drop as the hardware becomes more common and new approaches develop. Once the cost of networking falls far enough, computers will become ubiquitous: an afterthought that they're in everything and networked to everything else.
However convenient, a computer inside your wallet, watch, shoes, coffeemaker, or bed isn't revolutionary, but computers that can recognize aspects of our world (computer vision) might be. We're just beginning to see what computers connected to cameras, microphones, and accelerometers can figure out. Shazam recognizes songs. The Amazon Fire Phone promises to recognize objects. Facebook recognizes your friends' faces almost better than you can, which is good for casinos and airports, but what about on streets, in restaurants, or in parks?
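For a sense of the simplest building block behind those features, here's a hedged sketch using OpenCV's bundled face detector (assuming OpenCV is installed and a photo.jpg exists). It merely finds faces in a photo, a long way from recognizing whose they are.

```python
import cv2

# A sketch of basic computer vision: detect (not recognize) faces in a photo.
# OpenCV being installed and "photo.jpg" existing are assumptions here.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"found {len(faces)} face(s)")
for (x, y, w, h) in faces:
    print(f"  face at ({x}, {y}), {w}x{h} pixels")
```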
Computer recognition extends far past companies' ability to advertise and governments' ability to monitor: it could provide superhuman senses. There's enough detail in a photograph of a key to print a working copy. Computers detect enough variation in your skin color to monitor your heart rate and blood flow. With the right sensors, a computer could exhibit the senses of any animal:
Computers that recognize your surroundings and change them create an augmented reality. Building on the advancements of 3D rendering and computer vision, our entertainment has already blurred the line between reality and computer-generated reality. Can you tell what's real in a movie's background? Do you forget that people at the stadium can't see the first-down line?
Augmented reality extends beyond our TV screens. Our phones can insert cartoon characters onto our desks or make a $20 bill come alive. Google Glass overlays a screen between our eyes and the world. Imagine the possibilities if Google ever stops merely selling it as a way to communicate your experiences with others:
Ten years ago, these innovations were waiting for some big leaps: small enough hardware, powerful enough batteries, detailed enough screens, and the ability to overlay images fast and accurately enough to trick your mind. I'd argue only the last roadblock remains today. Oculus VR was attacking it until Facebook bought them; who knows what they're doing now.
Once computers reliably recognize significant parts of our world, they can learn about and interact with them:
In a similar way, computers can interact with our bodies:
Computers could also improve our lives on a more abstract level than interaction: they could generate knowledge about our world. Google crawls more of the Internet in one day than you can read in a lifetime. A fast enough computer can process a two-hour movie in seconds. Computers aren't limited by our sense of time, only by how fast they compute.
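A quick back-of-the-envelope shows how far ahead of "realtime" that can be; the frame rate and processing speed below are assumptions, not measurements.

```python
# Back-of-the-envelope arithmetic; 24 fps and 10,000 frames analyzed per
# second are assumptions for illustration only.
movie_seconds = 2 * 60 * 60     # a two-hour movie
frames = movie_seconds * 24     # 172,800 frames at 24 fps

analysis_rate = 10_000          # hypothetical frames analyzed per second
print(frames / analysis_rate)   # ~17 seconds to "watch" the whole film
```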
If the mechanical workings of molecules, cells, and viruses were broken down into their basic behavioral patterns, computers could simulate millions of experiments outside of "realtime". As computers get faster and the known patterns grow and improve, computers may help us:
Starting to sound like science fiction? Bear with me: we are currently making computers that recognize and interact with our world. Simulation begins with computers observing, experimenting, and recording the behavioral patterns of molecules, cells, and groups of cells at levels beyond human perception. With these behavioral recordings, computers could develop new solutions to real-world problems, much like today's architects use CAD systems to help them design buildings more resistant to powerful earthquakes, hurricanes, and aging.
Computers have come a long way since the 1940s. I hope their future advancement is aided (not distracted) by today's Communication Age: sharing ideas, programs, and data is easier than ever, but bright minds are also stuck organizing data, generating page views, and working beneath their ability. Innovative companies tend to be swallowed up by established ones that are unlikely to produce giant leaps forward, since they focus on controlled growth, iterative improvements, and heavily calculated risks.
Overall, tomorrow's computers will change our world, simultaneously creating and disrupting systems of influence and power, like the computers of today. They'll just be better at it.
For more on how computers are shaping the present, read my article "The Communication Age".
I'm a Ruby on Rails contractor from Atlanta, GA, focusing on simplicity and usability through solid design.