Are the old days coming back?

Decades ago, the time a program spent running on an electronic computer was pretty expensive. A little bit of history: back then they were called “electronic” because a “computer” used to be a woman with good math skills who calculated logarithm tables and trajectories of ballistic weapons. One of the first jobs of their electronic counterparts was to do the same calculations faster and with better precision. Programmer time, on the other hand, was comparatively cheap. Programs were written very carefully, making sure not to waste precious CPU time or what little memory was available.

Later, computers became more mainstream. Now it was not only the big government agencies: large businesses and even universities could get enough money together to buy an electronic computer. Students needed special permission to gain access to the sacred device, mostly by turning in punch cards and getting a printout of the result (and of the CPU time taken, i.e. the cost) the next day, or, if they were really lucky, by accessing the system on a terminal. There could be multiple “dumb terminals” connected to one of these “Big Iron” machines humming away in the basement, with everyone sharing time slices.

Then the age of personal computing came along. Now everyone could buy a microcomputer that plugged into any standard wall socket, and people like you and me typed away at night in front of a glowing screen. I fondly remember the hours I spent on my Commodore 64 in the early 80s. Good times. And no one shed tears over the run time of a program: if my interpreted BASIC source of a Mandelbrot demo took a whole night to finish, who cared? At that point programmer time became more expensive than CPU time, and while people still spent lots of time making sure their programs ran fast, it was not as imperative as before. Reusing code became more important, and libraries were written that encapsulated common problems. Those libraries were not as finely tuned to the specific problem at hand, mind you, but more general, so they could be used for a variety of problems. People who used libraries created larger programs, since the library code had to be included, and, by and large, those programs ran more slowly.

PCs started to appear on the desks of office workers, some of them still connected to big servers in the basement, but more and more people started using “their own” computer at work. Software companies began to compete fiercely against each other, and getting a new program out faster than the others became paramount. More and more programmers turned to “easy” programming systems like Visual Basic to write programs that were quite slow and used more resources than their carefully crafted counterparts, but could be sold earlier. So what if it shipped on three floppies instead of one?

Now, with web apps becoming the default for many tasks, it looks like we are returning to the old days: while CPU time is no problem anymore, suddenly it’s bandwidth that is expensive. If your HTML pages are too large, or if you use too many JavaScript libraries, the download size increases a lot, and you’ll waste money. Oh, and while programmer time is a one-time cost, the bandwidth costs of your web app are not. Sure, for a small app that’s no biggie. But imagine having a million users a day, each of them downloading 250 kilobytes of extra stuff because of the many frameworks you used. Suddenly you’re looking at roughly 250 gigabytes of wasted bandwidth. EVERY DAY. Whoops. Better do some more coding yourself instead of relying on all those frameworks…
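
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in TypeScript. The figures are the purely illustrative ones from the paragraph above (a million users a day, 250 KB of framework overhead each), not measurements of any real app:

```ts
// Back-of-the-envelope bandwidth estimate (illustrative numbers only).
const usersPerDay = 1_000_000;   // hypothetical daily users
const extraKBPerUser = 250;      // hypothetical framework overhead per page load

// 1,000,000 users × 250 KB = 250,000,000 KB ≈ 250 GB per day (decimal units).
const extraGBPerDay = (usersPerDay * extraKBPerUser) / 1_000_000;

console.log(`Wasted bandwidth: ~${extraGBPerDay} GB per day`);
```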
