It’s easy to forget the layer upon layer of technological advances that led to the computers we use today. But this look at the state of the art half a century ago does a good job of reminding us. Here [Fernando J. Corbató] explains the concept of Time-Sharing. He is one of the pioneers of the topic which is now used in every computer system in the world.
Since a processor (read: a single core) can only work on one operation at a time, it inherently creates a bottleneck. This is a huge issue when you consider the cost of the computers used at the time. In the video he mentions $300-$600 an hour. That was in the 1960s and would roughly equate to about $2300-$4600 in 2012. In other words, there’s big money in using the machine as efficiently as possible.
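To make the idea concrete, here is a minimal sketch of what time-sharing boils down to: one processor handing out short, fixed slices of time to several waiting users in turn, so that everyone appears to have the machine to themselves. The job names and quantum below are made up for illustration and are not from the video.

```c
/* Minimal round-robin time-sharing sketch (hypothetical jobs and quantum). */
#include <stdio.h>

#define QUANTUM 10  /* milliseconds of CPU each job gets per turn */

struct job {
    const char *user;
    int remaining_ms;   /* CPU time this job still needs */
};

int main(void)
{
    /* Three "users" sharing one processor -- illustrative numbers only. */
    struct job jobs[] = {
        { "alice", 25 },
        { "bob",   40 },
        { "carol", 15 },
    };
    int njobs = (int)(sizeof jobs / sizeof jobs[0]);
    int done = 0;

    /* The single CPU cycles through the jobs, giving each a short slice.
       Every user sees steady progress, so the machine *feels* dedicated
       to each of them even though it is shared. */
    while (done < njobs) {
        for (int i = 0; i < njobs; i++) {
            if (jobs[i].remaining_ms <= 0)
                continue;
            int slice = jobs[i].remaining_ms < QUANTUM
                      ? jobs[i].remaining_ms : QUANTUM;
            jobs[i].remaining_ms -= slice;
            printf("CPU -> %-5s for %2d ms (%2d ms left)\n",
                   jobs[i].user, slice, jobs[i].remaining_ms);
            if (jobs[i].remaining_ms == 0)
                done++;
        }
    }
    return 0;
}
```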
Early on in the discussion he mentions how programs were loaded and solutions were returned by computers of the day. It started with punch cards, then moved to magnetic tape. At the time this was filmed they had just started using teletype and were hoping to add a graphical interface in the near future. We’ve come a long way but the core principles he’s explaining are still quite important. See both parts of the film after the break.
http://www.youtube.com/watch?v=Anxxe8SdX78
http://www.youtube.com/watch?v=Jc6jrhycDsA
[via Reddit]
Thanks for this. My father designed time-shared OSes for GE (GECOS) and IBM (CALL/360, a stop-gap for OS/360). He was part of an OS team that formed at GE and split off in 1967 when GE Computers dissolved. One of them hit it big later, I think with Netware (one of the first network software companies).
In CALL/360, he supported 256 users with 256K (that’s KiloBytes) of core memory.
They tested the system by having two 360s with 256 serial lines between them. One simulated 256 “users” with different characteristics (I/O-bound vs. CPU-bound) while the other ran CALL/360. This allowed them to both debug and tune the system.
BTW – the IBM manager in charge of both OS/360 and CALL/360 was named “Buck Rogers”. See “The Mythical Man Month” for the OS/360 saga.
The same team created the Spectra-Physics barcode reader – now used in most grocery stores.
They later were bought up by Memorex in 1972. About half worked for Seagate starting fairly early on.
“The Mythical Man Month” is one of my favorite books. Every manager in a technology company should be required to read it. It’s an excellent guide to the pitfalls of technology development from someone who experienced them firsthand!
//love the parts about secretaries and keypunch operators
God, I wish I saw this video years ago. In college, I was required to take a VB course. It was the school’s way of filtering students to ensure their interest in CS. I still remember to this day what the professor told us: “Processors are becoming faster and RAM is cheaper than ever before. There is no need to make your programs as small as possible or as fast as possible since all the computer has to do is buy a faster computer or more RAM.”
It would be years before I discovered the resources for writing more effective code.
That should read computer user.
That line of thinking is a disturbing trend in computing, unfortunately.
That instructor should be fired.
I encounter this when I teach someone who comes from PC programming how to program a microcontroller; they don’t get the 256 limits, etc.
I have to wonder how prepared the US or the world would be if those costs reversed for some reason or another. Will those recently new to the field have the skills to conserve resources and ensure enough computing power to keep the economy at least static, or from backsliding, much less growing?
Actually the instructor is right, but for the wrong reasons. Humans are very bad at optimizing code; more specifically, we are bad at knowing what to optimize! As my Software Engineering professor used to say, “Optimizing a sort routine to run 5ms faster is of no use if it takes seconds to read the data off the disk.” N.B. this holds true even if you reduced the sort from 100ms to 5ms (a 20× improvement), because the input is operating on a scale several orders of magnitude larger, so the entire sort optimization is a wasted effort.
“The First Rule of Program Optimization: Don’t do it. The Second Rule of Program Optimization (for experts only!): Don’t do it yet.” — Michael A. Jackson
The third rule, actually, is to use profiling tools to determine whether optimization is necessary and to identify bottlenecks.
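In that spirit, here is a minimal “measure before you optimize” sketch, echoing the sort-versus-disk example above. The data size and the two phases are invented for illustration; in a real program you would reach for a proper profiler (gprof, perf, etc.), but even crude timing like this tells you which phase deserves attention.

```c
/* Measure first, optimize second -- a minimal timing sketch.
   The phases and sizes here are illustrative only. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    enum { N = 1000000 };
    int *data = malloc(N * sizeof *data);
    if (!data)
        return 1;

    /* Phase 1: "loading" the data (stand-in for reading it off disk). */
    double t0 = now_seconds();
    for (int i = 0; i < N; i++)
        data[i] = rand();
    double t1 = now_seconds();

    /* Phase 2: sorting. */
    qsort(data, N, sizeof *data, cmp_int);
    double t2 = now_seconds();

    printf("load: %.3f s, sort: %.3f s\n", t1 - t0, t2 - t1);
    /* Only the phase that dominates is worth optimizing at all. */
    free(data);
    return 0;
}
```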
I do serious embedded real-time systems. I have had to optimize code 4 times in the last 20 years, and I knew each time (by design) where I would need to optimize – and I measured to make sure. First I got it to just work.
For most PC apps – if they are only going to be used a few times (e.g., class projects) – ignore optimization and get the algorithm right (the real optimization).
For professional PC apps, your time is much more valuable than machine time – again, get the algorithm right, i.e., don’t do silly things (like reading a database a byte at a time over a socket… see the sketch after this comment).
For embedded stuff – the rules are quite different, and too complex to go into here. The short version: proper design is critical. If your coding and debugging is over 10% of the project time, you are doing things *wrong*. If you need to optimize more than a couple of small functions (and you don’t know what they are in advance), you are doing it *wrong*.
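On the “read a byte at a time over a socket” pitfall in the comment above, here is a rough sketch of the difference between the naive pattern and a sane buffered read. The descriptor handling and buffer size are illustrative only; reading from stdin just keeps the example self-contained.

```c
/* Byte-at-a-time vs. buffered reads -- illustrative sketch only. */
#include <unistd.h>
#include <stddef.h>

/* The "silly thing": one system call per byte received. */
ssize_t read_byte_at_a_time(int fd, char *out, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, out + got, 1);   /* one syscall per byte */
        if (n <= 0)
            break;
        got += (size_t)n;
    }
    return (ssize_t)got;
}

/* The sane version: let each system call return as much as it can. */
ssize_t read_buffered(int fd, char *out, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, out + got, len - got);  /* big chunks */
        if (n <= 0)
            break;
        got += (size_t)n;
    }
    return (ssize_t)got;
}

int main(void)
{
    char buf[4096];
    /* Demo: pull up to 4 KB from stdin with the buffered version. */
    ssize_t n = read_buffered(STDIN_FILENO, buf, sizeof buf);
    return n < 0;
}
```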
To further clarify: clarity is often more important than hand-optimized code. Why not let the experts do the work and use their output, an optimizing compiler (embedded might be a special case), which can do most of the optimization for you? Compilers have been quite advanced for at least a decade, and these days the JIT optimizes the code for the target architecture before the IL is executed, which should result in approximately the fastest code being generated.
Profiling usually reveals Big-O and architecture-type issues, e.g. “why exactly are we creating billions of date objects just to drop them on the floor (for the garbage collector to clean up, causing it to run more often to deal with the billions of discarded dates)?”
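For what it’s worth, here is a rough C analogue of that “billions of discarded dates” pattern: allocating a scratch object on every pass of a hot loop versus reusing one. The struct and numbers are entirely made up; the point is only the shape of the problem a profiler tends to surface.

```c
/* Rough C analogue of "billions of short-lived objects in a hot loop".
   Everything here is invented for illustration. */
#include <stdlib.h>

struct date { int year, month, day; };

/* Wasteful: one heap allocation (and free) per iteration. */
static long sum_years_wasteful(const int *years, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        struct date *d = malloc(sizeof *d);  /* allocator churn every pass */
        if (!d)
            break;
        d->year = years[i]; d->month = 1; d->day = 1;
        sum += d->year;
        free(d);
    }
    return sum;
}

/* Better: reuse one scratch object (or skip the object entirely). */
static long sum_years_reused(const int *years, size_t n)
{
    struct date d = { 0, 1, 1 };
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        d.year = years[i];
        sum += d.year;
    }
    return sum;
}

int main(void)
{
    int years[1000];
    for (int i = 0; i < 1000; i++)
        years[i] = 1970 + (i % 50);
    /* Both produce the same answer; only the allocation behavior differs. */
    return (int)(sum_years_wasteful(years, 1000) - sum_years_reused(years, 1000));
}
```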
“Processors are becoming faster and RAM is cheaper than ever before. There is no need to make your programs as small as possible or as fast as possible since all the computer has to do is buy a faster computer or more RAM”
Any professor who says that is a no-talent hack who should not be teaching. You can ALWAYS gain improvements by making things simpler.
>you can ALWAYS gain improvements by
>making things simpler.
Or you could make things worse. If you consider just how fast modern machines are and how clever modern compilers are (or are getting), I would say it’s more important to write clear code before you start thinking about cycles.
There seems to be this group of people who think “I could do so much in *insert sub-100K RAM quantity here* on a *insert machine with less than 100K of RAM here*” without considering just how much stuff software does these days. All that nice user interface stuff, contextual stuff, etc. comes at a price… and I would rather have that *bloat* than go back to when software was dumb as shit.
“…I was required to take a VB course.”
It was FORTRAN when I had to take it. I talked my way out of it by showing them some code or something. Then, on to the wonders of PASCAL :-/
Unfortunately this philosophy is the most popular excuse among Java users (“Why should I use an efficient programming language if computers are so fast?”… don’t get me started on this) and is usually extended to “You can get away with crappy code if your machine is fast enough.”
I swear you just quoted the professor who teaches Java courses at my uni! But seriously, look at Minecraft: there is no reason that couldn’t run on old laptops, but instead you need a modern gaming computer because it’s written in Java.
>Unfortunately this philosophy
>is most popular excuse among Java users
What’s that noise in the background, sounds like a torrent of bullshit coming..
>(“Why should I use efficient
>programming language if computers are so fast?”…
And there it is… are you trying to say that Java (the JVM in this case) isn’t fast, or that it’s impossible to write junk in other lower-level languages like C? I can assure you it’s very possible to write junk in machine code. The reason people use high-level managed languages these days is that the robustness outweighs the usually tiny performance benefits of writing stuff in lower-level languages. Don’t get me wrong, I like C etc., but blaming Java for crap code is moronic.
>“You can get away with crappy code
>if your machine is fast enough”.
Why does writing code in Java make it crappy? Try doing nice deserialisation of random JSON or XML in C… yeah, not fun, and it looks like a ton of shit. There are reasons people use languages like Java. The fact that you don’t know what they are just proves you haven’t written enough software.
I’m surprised not to see any animations; even the military has effectively used animations in instructional material over the years. Then again, the military probably has a bigger budget for that.
This appears to be the work of a single interviewer, who may be his own camera man, in a day when editing meant cutting and splicing film. Yet it is surprisingly clear.
Also known as kernel thread marshaling, or kernel thread polling. It basically uses kernel structs to poll virtual threads, and usually has timer APIs to store thread-execution timestamps too. This applies to all OSes and architectures.
On microcontrollers the most efficient way is basic polling with stored RTC value offsets (fewer cycles and less RAM than all other solutions).
You don’t need stored RTC values on micros unless you want thread timers; otherwise it’s wasted resources.
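For anyone curious what that polling-with-stored-offsets pattern looks like, here is a bare sketch. The tick source, task names, and intervals are all hypothetical; on real hardware get_tick_ms() would read an RTC or timer register, and here it is simulated so the sketch runs on a PC.

```c
/* Cooperative polling with stored tick offsets -- a bare sketch.
   On real hardware get_tick_ms() would read an RTC or timer peripheral;
   here it is simulated so the example terminates on a PC. */
#include <stdio.h>
#include <stdint.h>

static uint32_t fake_tick;                 /* stand-in for the hardware tick */
static uint32_t get_tick_ms(void) { return fake_tick; }

static void poll_sensors(void)
{
    printf("%5u ms: poll sensors\n", (unsigned)fake_tick);
}

static void update_display(void)
{
    printf("%5u ms: update display\n", (unsigned)fake_tick);
}

int main(void)
{
    uint32_t last_sensor  = get_tick_ms();
    uint32_t last_display = get_tick_ms();

    /* Main loop: no RTOS, no threads -- just compare the current tick
       against each task's stored offset. Unsigned math handles wrap-around. */
    while (fake_tick < 1000) {             /* bounded so the demo terminates */
        uint32_t now = get_tick_ms();

        if ((uint32_t)(now - last_sensor) >= 100u) {   /* every 100 "ms" */
            last_sensor += 100u;
            poll_sensors();
        }
        if ((uint32_t)(now - last_display) >= 250u) {  /* every 250 "ms" */
            last_display += 250u;
            update_display();
        }
        fake_tick++;   /* real hardware advances the tick by itself */
    }
    return 0;
}
```

Tasks that simply run on every pass of the loop need no stored offset at all, which is the point the reply above is making.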
>Since processors (read: a single core)
>can only work on one operation at a time
http://en.wikipedia.org/wiki/Superscalar
Bowties are cool. (c:
Best part was when Professor Frink likened the console users to “opponents”.
Because that’s how I thought of them when I was in IT.
A surprisingly clear explanation. I wish they had started my HS C++ course with this when I took it a decade ago. Many of the keyword names and statements like “print” would make a lot more sense if you understood the genesis of these systems.
I’d love to see a picture of that computer room today (assuming the building still stands). I’m weird about “history of place” like that… heck, I wonder who walked through that room, what they went on to do, who they influenced, etc.
Anybody here on HAD work/study at MIT and have access to someone who would know where that room is/was?
WHOA. Watched both videos. Credits at the end indicate the producer was someone named Russell Morash. Does that name ring a bell? It should.
Russell Morash is a TV producer who was responsible for shows such as “This Old House”, “The Victory Garden”, and “New Yankee Workshop”. According to Wikipedia, he also worked with Julia Child to produce “The French Chef” and other cooking programs (http://en.wikipedia.org/wiki/Russell_Morash).
Based on the dates in the wiki entry, the time this feature was created, and the comparative uniqueness of his name (statistically, he’s the only person with his full name in the US), I’d say that this was some of his very early work.
You want to find a man dedicated to his craft with high standards of professional excellence? That would be Russell Morash. His contributions to American public television are unrivalled.